Sunday 22 June

39 events

Sunday 22 June 14:00 - 15:20 (Room 1.14)

Hands-On: B.04.07 HANDS-ON TRAINING - A Disaster Response Toolbox for Efficient Damage Proxy Map Creation and Analysis

This interactive training session will introduce participants to damage proxy maps (DPMs) used for post-disaster damage assessment. Participants will gain hands-on experience with a new web interface tool designed to simplify the process of requesting DPMs and thresholding them for disaster response support.

Attendees will learn how to use the platform to find ready-made DPMs for past events and to submit requests for on-demand DPM generation using synthetic aperture radar (SAR) satellite data. They will then learn how to threshold DPMs, taking into account external observations, contextual information, or their own datasets, to create maps that identify areas which are likely damaged after a disaster event.

By the end of this training session, participants will:

1. Understand the basics of InSAR coherence and its application in disaster response.
2. Be able to use the web interface to find and request DPMs.
3. Learn how to threshold DPMs to identify areas of varying damage likelihood.
4. Gain practical skills in analyzing satellite data for disaster response purposes.
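As a minimal illustration of learning outcomes 1–3 (toy numbers; not the workshop's actual tool, whose web interface handles this for you), a damage proxy can be computed as the drop in InSAR coherence between a pre-event pair and a co-event pair, then thresholded into a damage mask:

```python
import numpy as np

# Toy coherence maps (values in [0, 1]): a pre-event pair and a co-event pair.
coh_pre = np.array([[0.90, 0.80],
                    [0.85, 0.90]])
coh_co  = np.array([[0.88, 0.30],
                    [0.82, 0.25]])

# Damage proxy: the drop in interferometric coherence across the event.
dpm = coh_pre - coh_co

# Threshold chosen using external observations or contextual information.
threshold = 0.4
likely_damaged = dpm > threshold  # boolean damage mask
```

In practice the threshold is tuned against ground observations or ancillary data, which is exactly the calibration step the session covers.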

Speakers:


  • Sang-Ho Yun
  • Khai Zher Wee
  • Ricky Winarko
  • Eleanor Ainscoe
  • Emma Hill

Sunday 22 June 14:00 - 15:20 (Foyer L3)

Hands-On: A.10.05 HANDS-ON TRAINING - Open-source hyperspectral analysis using hylite and Python

Access to huge databases of (hyper)spectral Earth observation data is changing how we analyse the Earth's surface, providing information-rich datasets over wide areas (from satellite and airborne platforms) and at high spatial resolution (using laboratory, tripod or UAV-mounted sensors). However, gaining value from these data requires accessible correction and analysis techniques, many of which are currently only available via expensive, opaque and/or difficult-to-use software.

In this hands-on training, we aim to make (hyper)spectral analysis more accessible by introducing the open-source Python toolbox hylite and its associated GUI interface napari-hippo. We begin with a short overview of widely used Earth observation datasets, including multi- and hyperspectral data. While our training focuses on hyperspectral data – the most data-intensive – we will highlight techniques that can be applied to other optical datasets as well (such as multispectral).

By the end of the training, we hope that participants will have gained practical experience using hylite for hyperspectral data analysis, from visualisation to analysis and machine learning applications. In doing so, we hope to boost the impact of Earth observation data across remote sensing sciences – “from Observation to Climate action and sustainability for Earth”.

Detailed Hands-on training Agenda (80 mins)
1. Setup & Introduction to Python + Colab (10 mins)
i. Install required packages on Colab server
ii. Introduction to different hylite objects – quick overview
iii. Navigating and using Google Colab
iv. Comments on installing packages locally (e.g., Anaconda, Jupyter Notebooks)
2. Hyperspectral Data in hylite (15 mins)
i. Loading EO data in formats: .tif, .bsq, .dat using hylite
ii. Visualisation and plotting:
a. RGB composites, band selection (user-defined)
b. Plotting individual spectra from pixels or regions
iii. Basic data cleaning:
a. Bad band removal, handling NaNs, scaling and normalisation
b. Spectral smoothing: Using Savitzky-Golay or similar filters
c. Plotting raw vs cleaned spectra for comparison
3. Working with spectral libraries (15 mins)
i. Loading existing libraries (e.g., USGS)
ii. Creating spectral libraries from hyperspectral data
4. Spectral Analysis Techniques (20 mins)
(In groups)
i. Band indices
ii. Hull correction – removing spectral continuum
iii. PCA and MNF – noise reduction & feature extraction
iv. Minimum wavelength mapping – for targeted absorption features
v. Spectral abundance maps from the spectral library prepared earlier in the tutorial.
5. Machine learning with hyperspectral data and scikit-learn (15 mins)
Quick demonstration involving supervised mineralogy prediction from tripod-based hyperspectral data.
6. Introduction to Napari (15 mins)
7. Questions and Discussion (5 mins)
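As a flavour of the hull-correction step in item 4 above, here is a minimal NumPy sketch of continuum removal (illustrative code, not hylite's own API): the spectrum is divided by its upper convex hull so that absorption features stand out as dips below 1.

```python
import numpy as np

def continuum_removed(wavelengths, reflectance):
    """Divide a spectrum by its upper convex hull (continuum removal)."""
    hull = []  # upper hull built with a monotone-chain scan
    for p in zip(wavelengths, reflectance):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last hull point if it lies on or below the chord to p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wavelengths, hx, hy)
    return reflectance / continuum

# Toy spectrum with one absorption feature centred on the middle band
w = np.array([2000., 2100., 2200., 2300., 2400.])
r = np.array([1.00, 0.90, 0.60, 0.90, 1.00])
cr = continuum_removed(w, r)  # endpoints -> 1.0, feature dips below 1.0
```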

Speakers:


  • Rupsa Chakraborty - Helmholtz Institute Freiberg for Resource Technology (HIF)
  • Sam Thiele - Helmholtz Institute Freiberg for Resource Technology (HIF)

Sunday 22 June 14:00 - 15:20 (Room 1.85/1.86)

Tutorial: D.02.22 TUTORIAL - Advancing Earth Observation Through Geospatial Machine Learning With TorchGeo

#stac

The growth of machine learning frameworks like PyTorch, TensorFlow, and scikit-learn has also sparked the development of a number of geospatial domain libraries. In this talk, we break down popular geospatial machine learning libraries, including:

  • TorchGeo (PyTorch)
  • eo-learn (scikit-learn)
  • Raster Vision (PyTorch, TensorFlow*)
  • DeepForest (PyTorch, TensorFlow*)
  • samgeo (PyTorch)
  • TerraTorch (PyTorch)
  • sits (R Torch)
  • srai (PyTorch)
  • scikit-eo (scikit-learn, TensorFlow)
  • GEO-Bench (PyTorch)
  • GeoAI (PyTorch)
  • OTBTF (TensorFlow)
  • GeoDeep (ONNX)

For each library, we compare the features they have as well as various GitHub and download metrics that emphasize the relative popularity and growth of each library. In particular, we promote metrics including the number of contributors, forks, and test coverage as useful for gauging the long-term health of each software community. Among these libraries, TorchGeo stands out with more built-in data loaders and pre-trained model weights than all other libraries combined. TorchGeo also boasts the highest number of contributors, forks, stars, and test coverage. We highlight particularly desirable features of these libraries, including a command-line or graphical user interface, the ability to automatically reproject and resample geospatial data, support for the SpatioTemporal Asset Catalog (STAC), and time series support. The results of this literature review are regularly updated with input from the developers of each software library and can be found here: https://torchgeo.readthedocs.io/en/stable/user/alternatives.html

Among the above highly desirable features, the one TorchGeo would most benefit from adding is better time series support. Geotemporal data (time series data that is coupled with geospatial information) is a growing trend in Earth Observation, and is crucial for a number of important applications, including weather and climate forecasting, air quality monitoring, crop yield prediction, and natural disaster response. However, TorchGeo has only partial support for geotemporal data, and lacks the data loaders or models to make effective use of geotemporal metadata. In this talk, we highlight steps TorchGeo is taking to revolutionize how geospatial machine learning libraries handle spatiotemporal information. In addition to the preprocessing transforms, time series models, and change detection trainers required for this effort, there is also the need to replace TorchGeo's R-tree spatiotemporal backend. We present a literature review of several promising geospatial metadata indexing solutions and data cubes, including:

  • R-tree
  • Shapely
  • GeoPandas
  • STAC
  • NumPy
  • PyTorch
  • pandas
  • Xarray
  • Datacube

For each spatiotemporal backend, we compare the array, list, set, and database features available. We also compare performance benchmarks on scaling experiments for common operations. TorchGeo requires support for geospatial and geotemporal indexing, slicing, and iteration. The library with the best spatiotemporal support will be chosen to replace R-tree in the coming TorchGeo 1.0 release, marking a large change in the TorchGeo API as well as a promise of future stability and backwards compatibility for one of the most popular geospatial machine learning libraries. TorchGeo development is led by the Technical University of Munich, with incubation by the AI for Good Research Lab at Microsoft, and contributions from 100 contributors around the world. TorchGeo is also a member of the OSGeo Foundation, and is widely used throughout academia, industry, and government laboratories. Check out TorchGeo here: https://www.osgeo.org/projects/torchgeo/
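To make the backend requirement concrete, any replacement for R-tree must at least answer intersection queries over space and time. A naive linear-scan sketch of that contract (all names illustrative, not TorchGeo's actual API) — a real index prunes this search instead of scanning every entry:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    name: str
    # (minx, maxx, miny, maxy, mint, maxt)
    bounds: tuple

def overlaps(a, b):
    # intervals overlap on every axis: x, y and time
    return all(a[i] <= b[i + 1] and b[i] <= a[i + 1] for i in range(0, 6, 2))

def query(index, bounds):
    # naive linear scan; an R-tree (or its replacement) prunes this search
    return [e.name for e in index if overlaps(e.bounds, bounds)]

index = [
    Entry("scene_a", (0, 10, 0, 10, 0, 5)),
    Entry("scene_b", (20, 30, 0, 10, 0, 5)),
]
hits = query(index, (5, 15, 5, 15, 0, 1))  # intersects scene_a only
```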

Speakers:


  • Adam J. Stewart - TUM
  • Nils Lehmann - TUM
  • Burak Ekim - UniBw

Sunday 22 June 14:00 - 15:20 (Hall L3)

Tutorial: F.02.20 TUTORIAL - CEOS COAST Demo: Novel coastal satellite data products

We offer a tutorial session on the new and novel coastal satellite data products publicly available through international interagency collaboration within the CEOS COAST Virtual Constellation. As a new virtual constellation within the Committee on Earth Observation Satellites, the Coastal Observations, Applications, Services, and Tools team works across agencies to co-design in-demand coastal products, test them in six pilot regions around the world, and later release quality global coastal products. In addition to new product co-design with stakeholders, COAST also works to improve the quality of existing satellite data in coastal areas. All products are freely and publicly available via the COAST Application Knowledge Hub, which provides access to a wealth of other publicly accessible data and information on coastal regions that may be of interest to coastal users. Navigating the Application Knowledge Hub will be a component of the tutorial. Examples of available products we plan to include in the tutorial are shoreline mapping and shallow-water satellite-derived bathymetry, coastal water quality (chlorophyll a/sediments), and mangrove and marsh mapping.

Speakers:


River Discharge and Sea Level Satellite Products for Coastal Hazards


  • Jérôme Benveniste - COSPAR

Satellite-derived products for monitoring coastlines and dynamic intertidal regions


  • Stephen Sagar - Geoscience Australia

Ocean Color Remote Sensing using EOS06 OCM3 sensor: Science products along the coastal waters


  • Moosa Ali - ISRO

CEOS COAST’s Application Knowledge Hub & satellite-derived Water Quality


  • SeungHyun Son - University of Maryland

Mapping the impact of catchment rainfall-runoff extremes along coastal waters


  • Kesav Unnithan

Sunday 22 June 14:00 - 15:20 (Room 1.15/1.16)

Hands-On: C.03.20 HANDS-ON TRAINING - ACOLITE processing of Sentinel-2/MSI and Sentinel-3/OLCI data

This training session will cover the processing of Sentinel-2/MSI and Sentinel-3/OLCI data with the open-source ACOLITE processor, with a focus on coastal and inland water applications. The DSF and RAdCor algorithms, their assumptions, and typical applications will be briefly discussed. The main goal of this session is hands-on processing, including image subsetting to a region of interest and the use of the different atmospheric correction algorithms. ACOLITE output products will be explored using the SNAP toolbox, and common processing settings and issues will be demonstrated. For advanced users, ACOLITE configuration, processing, and output analysis from within Python will be covered.

Participants are expected to bring a laptop; the processing software and example data will be provided. Participants are encouraged to acquire cloud-free TOA data for their areas of interest (L1C for MSI, and OL_1_EFR for OLCI). In the interest of time, participants are encouraged to download the latest ACOLITE release from https://github.com/acolite/acolite/releases/latest, and the SNAP toolbox for output visualisation from https://step.esa.int/main/download/snap-download/ before the start of the session. Advanced Python users are expected to set up an appropriate conda environment and a git clone of the ACOLITE code base.

Speakers:


  • Quinten Vanhellemont
  • Dimitry Van der Zande
  • Arthur Coqué


Sunday 22 June 14:00 - 15:20 (Room 0.94/0.95)

Tutorial: D.03.18 TUTORIAL - From Earth Science to Storytelling with EO Dashboard

The EO Dashboard project has been developed by NASA, ESA (the European Space Agency), and JAXA (the Japan Aerospace Exploration Agency). The three agencies have combined their resources, technical knowledge, and expertise to produce the Earth Observing (EO) Dashboard (https://eodashboard.org), which aims to strengthen our global understanding of the changing environment and human activity. The EO Dashboard presents EO data from a variety of EO satellites, served by various ESA, NASA, and JAXA services and APIs in a simple and interactive manner. Additionally, by means of scientific storytelling, it brings the underlying science closer to the public, providing further insights into the data and their scientific applications. In this tutorial, participants will use the EO Dashboard and its resources to do Open Earth Observation Science. Through practical exercises hosted on the EOxHub JupyterLab Workspace in the Euro Data Cube environment, participants will access the open data sources of the EO Dashboard, use its open APIs and services to work with the data, and create workflows to extract insights from the data. Finally, they will learn how to create and publish stories, data, and code on the EO Dashboard following FAIR and Open Science principles. Free access to the Euro Data Cube resources will be ensured via the ESA Network of Resources sponsoring mechanism.

The tutorial is self-contained and participants will be provided with the necessary information and support to run all exercises. A basic level of knowledge is expected in the following fields:
- Earth Observation
- EO image analysis
- Statistics
- Python

To ensure suitable support on site and interactions between participants, this tutorial is best suited for an audience of 15-30 participants.
Exercises will be run individually; however, participants can collaborate on the storytelling part of the tutorial.
The team is formed of ESA, NASA and JAXA experts. The presenters are highly experienced in the delivery of hands-on tutorials and have already delivered previous editions of hands-on sessions with EO Dashboard at IGARSS 2021-2024 and FOSS4G 2022, 2023 and 2024.

Speakers:


  • Lubomír Doležal - EOX
  • Diego Moglioni - Stario c/o ESA
  • Sara Aparício - Solenix c/o ESA
  • Anca Anghelea - ESA
  • Daniel Santillan - ESA

Sunday 22 June 14:00 - 15:20 (Room 1.31/1.32)

Tutorial: D.02.21 TUTORIAL - Easy and Efficient Fine-tuning of Foundation Models for Earth Observation

Foundation models for Earth Observation (EO)—large-scale AI systems pretrained on vast, diverse datasets—are revolutionizing geospatial analysis by enabling universal applications. These models learn rich, scalable representations of satellite imagery (multispectral, radar, SAR), offering unprecedented potential to process petabytes of EO data. Yet, widespread adoption faces two critical barriers:
- Prohibitive Costs: Training foundation models from scratch demands immense computational resources, specialized infrastructure, and technical expertise, excluding many academic and humanitarian organizations.
- Domain Adaptation Gap: Pretrained models often fail to generalize to downstream EO tasks—such as land cover mapping, urban and forest monitoring, and extreme event monitoring—without domain-specific recalibration.
This tutorial begins with an introduction to foundation models, detailing their architectures, pretraining strategies, and relevance to EO workflows. Following this introduction, the session bridges the adoption gap by presenting an end-to-end pipeline to benchmark, adapt, and deploy foundation models for EO, with a focus on:
- Benchmarking Foundation Models: A toolbox for efficient data engineering, automating rapid streaming of large EO datasets (e.g., Sentinel-1, Landsat) into GPU-ready batches while minimizing preprocessing efforts.
- Plug-and-Play Adaptation: Practical implementation of PEFT (Parameter-Efficient Fine-Tuning) of foundation models for diverse EO tasks, including LoRA (Low-Rank Adaptation) and ViT Adapters, enabling easy and efficient adaptation of foundation models with minimal computational overhead.
We demonstrate the usability of the pipeline on ExEBench, a benchmark dataset spanning seven categories of extreme events on Earth. By analyzing model performance on ExEBench, we gain insights into how foundation models can generalize across diverse data types (e.g., remote sensing, weather, and climate data) and the impact of different fine-tuning strategies on model performance.
By simplifying adaptation and providing tools for benchmarking foundation models, this pipeline empowers researchers to prioritize domain-specific innovation—not engineering hurdles—accelerating solutions for sustainability and climate resilience.
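The LoRA idea above can be sketched in a few lines of NumPy (a toy illustration, not the tutorial's actual pipeline): the pretrained weight stays frozen, only two small low-rank factors are trained, and zero-initialising one of them means the adapted layer starts out identical to the base model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 16, 8, 2, 4.0

W = rng.normal(size=(d_in, d_out))           # frozen pretrained weight
A = rng.normal(size=(d_in, rank)) * 0.01     # trainable low-rank factor
B = np.zeros((rank, d_out))                  # zero-init: no change at start

def lora_forward(x):
    # base projection plus the scaled low-rank update (alpha / rank)
    return x @ W + (alpha / rank) * (x @ A @ B)

x = rng.normal(size=(4, d_in))
y = lora_forward(x)  # identical to x @ W while B is still zero
```

Only A and B (d_in x rank + rank x d_out parameters) are updated during fine-tuning, which is what keeps the computational overhead minimal.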

Speakers:


  • Sining Chen - Technical University of Munich
  • Shan Zhao - Technical University of Munich

Sunday 22 June 14:00 - 15:20 (Room 0.96/0.97)

Hands-On: A.02.14 HANDS-ON TRAINING - NaturaSat software in identification, classification and monitoring of habitats by using Sentinel and UAV data

NaturaSat is an innovative software solution designed to seamlessly identify and monitor protected habitats like the Natura 2000 sites. With its user-friendly interface, NaturaSat empowers the usage of satellite and UAV data to enhance biodiversity protection, ensure sustainable land use, and optimise environmental management.
The software has been successfully used to precisely segment diverse structures from satellite and UAV data. Time monitoring of segmented habitats is possible thanks to NaturaSat's ability to visualise and analyse various bands and indices of satellite and UAV data and extract their statistical characteristics. The NaturaSat supervised deep learning model is used for spatial monitoring and allows accurate classification up to the habitat level. The NaturaSat Historical Map Transformation tool enables users to transform desired areas from historical maps to contemporary ones, with the aim of revitalising historical natural sites.
The main goal of this NaturaSat hands-on training is to guide participants through the solution of one complex use case. Specifically, we will be identifying, classifying, and monitoring wetlands with the help of satellite and UAV data and the freely available NaturaSat software tools. The NaturaSat software can be downloaded from http://www.algoritmysk.eu/en/naturasat_en/.

Session Instructors:


  • Aneta A. Ožvat
  • Maria Sibikova

Sunday 22 June 14:00 - 15:20 (Room 0.49/0.50)

Tutorial: D.03.12 TUTORIAL - Deep dive into vector data cubes for Python and R

Data cubes, as structures for representing multi-dimensional data, are typically used for raster data. When we deal with vector data, thinking of points, polygons and the like, we tend to represent it as tabular data. However, when the nature of the vector dataset is multi-dimensional, this is not sufficient. This tutorial will provide a deep dive into the concept of vector data cubes, an extension of the generic data cube model to support vector geometries either as coordinates along one or more dimensions or as variables within the cube itself. You will learn how to create such objects in Python using `Xarray` and `Xvec` and in R using `stars` and `post`. The tutorial is organised along a series of applied use cases that show different applications of vector data cubes. Starting with the one resulting from zonal statistics applied to a multidimensional raster cube, we will cover spatial subsetting, plotting, CRS transformations, constructive operations and I/O. Using the cube that captures origin-destination data, we will explore the application of multiple dimensions supported by vector geometry. Finally, the use case of time-evolving geometry will demonstrate the vector data cube composed of geometries both as coordinates along the dimensions and as variables in the cube. This includes an introduction to the concept of summary geometry supported by spatial masking and the more complex case of zonal statistics based on time-evolving geometry.
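As a minimal illustration of the first use case (in plain NumPy rather than `Xarray`/`Xvec`; all names illustrative), zonal statistics over a multidimensional raster cube yield a (zone, time) array whose zone dimension would, in a vector data cube, carry the polygon geometries as coordinates:

```python
import numpy as np

# Toy raster cube with dimensions (time, y, x)
raster = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
# Two polygons rasterised to integer labels on the same (y, x) grid
zones = np.array([[1, 1, 2, 2]] * 4)

zone_ids = np.unique(zones)
# Zonal mean per time step -> a (zone, time) array; in a vector data cube
# the zone dimension would carry the polygon geometries as coordinates
cube = np.array([[raster[t][zones == z].mean()
                  for t in range(raster.shape[0])]
                 for z in zone_ids])
```

The tutorial's vector data cube objects wrap exactly this kind of array while keeping the geometries, CRS and dimension metadata attached.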

The tutorial will be given simultaneously in Python and R by the authors of the aforementioned software packages. Participants are free to choose their preferred language.

Speakers:


  • Martin Fleischmann - Charles University, CZ
  • Lorena Abad - University of Salzburg, AT

Sunday 22 June 14:00 - 15:20 (Room 0.11/0.12)

Tutorial: D.04.13 TUTORIAL - Accelerating insights: New Google Earth Engine data products and capabilities for sustainability applications

Learn about Google Earth Engine's latest data products and capabilities, designed to streamline Earth observation data analysis for sustainability, conservation, and business applications.

• We’ll walk through interactive demos, such as sustainable sourcing with new forest data partnership commodity datasets and methane emissions monitoring and reduction.

• We'll highlight an innovative new Earth Engine dataset that leverages deep learning from multi-sensor, multi-temporal inputs to enhance efficiency and accuracy of classification and regression analyses.

• We'll demonstrate new integrations between Earth Engine and BigQuery to streamline data management and analytics, making it easier for data scientists to leverage Earth observation data and insights.

The session will be a combination of short presentations and live demos to provide context and practical guidance. All data, notebooks, and apps will be made available to you to work with at your convenience. Join us to learn how Google's environmental geospatial tools can help you move more quickly out of the data processing and management phase and onto the task of deriving insights.

Speakers:


  • Valerie Pasquarella - Google
  • Nicholas Clinton - Google

Sunday 22 June 14:00 - 15:20 (Room 1.61/1.62)

Tutorial: D.01.11 TUTORIAL - From basic to advanced: build and run your first workflows with DeltaTwin and SesamEO services

This tutorial introduces DeltaTwin and SesamEO, a suite of services designed to contribute to Digital Twins by running workflows and generating valuable results.

DeltaTwin provides a collaborative workspace for accessing datasets, configuring and running workflows, monitoring their execution, and sharing results, while SesamEO provides direct access to explore and retrieve data from different providers, including the DestinE Data Lake (DEDL), the Copernicus Data Space Ecosystem (CDSE) and Eurostat.

The first part explores the web user interface of both services. Attendees will discover the main DeltaTwin functionalities, such as how to browse existing DeltaTwin components, run them, and save the generated results as artefacts in their personal gallery. The presentation then highlights how to interface with the SesamEO service to browse collections, select products and finally use one as input to a DeltaTwin component.
(~15 minutes)

The second part is more developer-oriented and explains how to create DeltaTwin components using our Python command-line interface (CLI).

To build a DeltaTwin, bring your own code or model and edit your workflow, then publish it to the service and run it. Attendees will learn the configuration process. Several workflow creations will be shown, starting from a basic single-step process to more complex scenarios.

An example of how to integrate a machine learning model into your workflow will also be presented. These examples will be mainly based on processing Sentinel-2 products retrieved from SesamEO.
(~55 minutes)

Questions & Answers
(~10 minutes)

Speakers:


  • Christophe Demange - Gael Systems

Sunday 22 June 14:00 - 15:20 (Room 1.34)

Hands-On: D.06.06 HANDS-ON TRAINING – SpaceHPC, ESA’s High Performance Computing facility: how to use it for EO processing

This hands-on training will introduce you to SpaceHPC, ESA’s High Performance Computing facility. It will be an opportunity to use cutting-edge technology (CPU & GPU) and to get familiar with how to interact with an HPC system. By the end of this session, you’ll know more about HPC, in particular parallel computing and deep learning.

Introduction to SpaceHPC [10 minutes presentation]
- General intro to HPCs and why SpaceHPC
- For whom?
- How to access?
- What computing power is available?
- Explanation of the scheduler and the jobs

Connecting to SpaceHPC and running a job [10 minutes hands-on]
- List available computing partitions
- Check the current level of utilization of the HPC and the free resources
- Run a first job: something easy, and check the output of the job

CPU use case [40 minutes]
Running a heavy satellite image processing task (pan-sharpening, textures, feature extraction, classification…)

First, all participants will run the same processing:
1) Processing on one single CPU core
Then, each participant will be tasked to run the processing with a specific computing power:
2) Processing on X CPU cores, with X varying depending on the participant
3) Measure the speedup of 2) vs 1)

Finally, we will compile the results of all participants in an interactive process: each participant will report the measured speedup for their X CPU cores and we will add it to a shared chart.
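The shared speedup chart can be compared against Amdahl's law, which bounds the achievable speedup when only part of the processing parallelises (a generic formula, not specific to SpaceHPC or the exercise data):

```python
def amdahl_speedup(n_cores, parallel_fraction):
    """Ideal speedup when only `parallel_fraction` of the runtime scales."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# e.g. a job that is 90% parallel gains far less than 8x on 8 cores
s8 = amdahl_speedup(8, 0.9)
```

Plotting measured speedups against this curve makes the serial bottleneck of the image-processing pipeline visible on the shared chart.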

GPU use case [30 minutes]
For the GPU processing, we’ll present some deep learning techniques based on PyTorch. The use case the participants will work on is a neural network (CNN or transformer).

We’ll have a deeper look at some topics related to deep learning:
- Making the best use of the capabilities of SpaceHPC by optimizing some parameters (batch size, learning rate, data I/O…)
- Best practices: saving (checkpointing) the state of the training after each epoch…
- Live visualization of the accuracy during training (e.g. TensorBoard)
- Trying several neural network architectures

We’ll also check some aspects more closely linked to the HPC:
- Difference between nodes with 8 GPUs and nodes with 4 GPUs
- Difference between different libraries

Speakers:


  • Nicolas Narcon - ESA
  • Sean Quin - ESA
  • Neva Besker - ESA

Sunday 22 June 14:00 - 15:20 (Room 0.14)

Hands-On: C.06.13 HANDS-ON TRAINING - SAR Calibration Toolbox (SCT): an open-source multi-mission tool for quality analysis of SAR data

The SAR Calibration Toolbox (SCT) has been developed in the framework of the ESA-funded EDAP+ and SAR MPC projects, with the goal of making it an open-source tool for multi-mission and general-purpose SAR data quality assessment.
The tool was released to the public in July 2024 and is available at the following GitHub page:
https://github.com/aresys-srl/sct
The git repository includes full online documentation guiding users through the installation and usage of SCT.
The SAR mission L1 products currently supported include Sentinel-1, ICEYE, SAOCOM, NovaSAR and EOS-04. Support for the heritage ERS and ASAR missions is currently under development, and new missions will be supported in the future.
The SCT tool implements the following analyses:
• Point targets analysis: geometric resolution, IRF parameters assessment, absolute radiometric calibration and geolocation accuracy
• Distributed target analysis: extraction of radiometric profiles for the assessment of relative calibration over rainforest and of the thermal noise level (NESZ) over low-backscatter areas
• Interferometric data analysis: assessment of the interferometric coherence from an input interferometric product or from two co-registered SLC products
The session will provide an overview of the features and capabilities of the tool. During the hands-on training, the users will be able to install SCT on their devices and to perform sample quality analyses of L1 SAR products to showcase the main functionalities of SCT.

Speaker:


  • Giorgio Parma - aresys

Sunday 22 June 15:30 - 16:50 (Room 0.14)

Hands-On: C.01.23 HANDS-ON TRAINING - OGC API - DGGS and Free & Open-Source Software Discrete Global Grid Abstraction Library (DGGAL)

Hands-on experience assembling a visualization client for OGC API - DGGS (https://docs.ogc.org/DRAFTS/21-038.html) using the new Free & Open-Source Software Discrete Global Grid Abstraction Library (DGGAL: https://dggal.org).

The session will cover:
- A short introduction to OGC APIs
- Retrieving DGGS-quantized raster data in DGGS-JSON and GeoTIFF using OGC API - DGGS
- Retrieving DGGS-quantized vector data in DGGS-JSON-FG and GeoJSON using OGC API - DGGS
- Performing zone queries using OGC API - DGGS
- Resolving global zone identifiers to zone geometry using DGGAL
- Resolving local sub-zone indices within a parent zone to global zone identifiers / geometry using DGGAL
- Identifying zones within view with DGGAL
- Visualizing DGGS-JSON data retrieved from OGC API - DGGS with help from DGGAL

DGGAL will support at minimum the three DGGRSs listed in Annex B of OGC API - DGGS:
- GNOSIS Global Grid: a rectangular WGS84 latitude/longitude grid hierarchy corresponding to an OGC 2D Tile Matrix Set (https://docs.ogc.org/is/17-083r4/17-083r4.html#toc58), utilizing variable widths to limit area variance in polar regions
- ISEA9R: an equal-area rhombic grid hierarchy in the Icosahedral Snyder Equal Area (ISEA) projection with a refinement ratio of 9, axis-aligned in a rotated, sheared and scaled Cartesian space
- ISEA3H: an equal-area hexagonal grid hierarchy in the Icosahedral Snyder Equal Area (ISEA) projection with a refinement ratio of 3, where 12 zones at any refinement level appear as "pentagons" due to interruptions in the projection

Support for additional DGGRSs will be added to DGGAL over time.

Participants are expected to bring their own laptop to follow along with the exercises.

One or more demonstration end-points will be provided with sample datasets.

DGGAL is a library written in the eC programming language (https://ec-lang.org/) with bindings available for C, C++ and Python.
The exercises and demonstration will be using the Python bindings.

Speakers:


  • Jérôme Jacovella-St-Louis - Ecere Corporation
  • Dr Samantha Lavender - Pixalytics


Sunday 22 June 15:30 - 16:50 (Room 1.14)

Hands-On: F.01.13 HANDS-ON TRAINING - Crafting Interactive Stories with DEA: From Data to Narrative

This hands-on training will guide participants through the creation of impactful stories using DEA, a Data Visualization and Interactive Storytelling Service. DEA supports users in presenting environmental and climate information in a clear and engaging way.

The session will introduce DEA, its role within Earth Observation and Climate Change initiatives, and the portfolio of ready-to-use datasets it offers, including climate projections, reanalysis datasets like ERA5, and real-time forecast data from ECMWF and Copernicus services.

Participants will also learn how to enrich their stories by adding external content from research, local studies, or custom analyses. The presenter will demonstrate how to prepare and process external content — such as maps, charts, or videos generated from satellite-based data or socioeconomic statistics — using complementary tools, and how to integrate these elements into DEA to enhance the final narrative.

The core of the training will focus on hands-on story creation, where participants will practice step-by-step:
• Selecting relevant datasets and defining the narrative angle.
• Creating maps, graphs, and media-rich visualisations.
• Combining data-driven content with textual annotations.

By the end of the session, attendees will understand how DEA can be used to present scientific results, support decision-making processes, show effects of extreme events and explain environmental challenges at different scales, from global strategies to local adaptation plans. They will leave with practical skills to use the DEA service and turn their own data and findings into compelling, interactive stories.

Training material available at the following link https://aliaspacesystems.github.io/DEA-LPS25-Hands-on-Training/

Speakers:


  • Arturo Montieri
  • Cristina Arcari
  • Monica Rossetti
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 1.85/1.86)

Tutorial: D.03.13 TUTORIAL - Code Once, Reuse and Scale Seamlessly: Build EO Workflows using openEO in the Copernicus Data Space Ecosystem

In an era of unprecedented availability of Earth Observation (EO) data, the Copernicus Data Space Ecosystem (CDSE) plays a key role in bridging the gap between data accessibility and actionable insights. Despite the availability of freely accessible satellite data, widespread adoption of EO applications remains limited due to challenges in extracting meaningful information. Many EO-based projects struggle with non-repeatable, non-reusable workflows, mainly due to the lack of standardised, scalable solutions.

CDSE tackles these barriers by adopting common standards and patterns, most notably openEO: an open-source, community-driven standard that simplifies remote sensing data access, processing, and analysis through a unified interface. It empowers developers, researchers, and data scientists to use cloud-based resources and distributed computing environments to tackle complex geospatial challenges. By adhering to the FAIR principles (Findable, Accessible, Interoperable, and Reusable), it supports the global sharing and reuse of algorithms, enhancing collaboration and scalability.

Furthermore, by promoting the development of reusable, scalable, and shareable workflows, openEO enhances the efficiency and reproducibility of the EO workflow. Its feature-rich capabilities have also been used and validated in large-scale operational projects such as ESA WorldCereal and the JRC Copernicus Global Land Cover and Tropical Forestry Mapping and Monitoring Service (LCFM), which rely on its robust and reliable infrastructure.

Join us for a detailed tutorial to explore openEO's capabilities for developing scalable and reusable workflows within the Copernicus Data Space Ecosystem. Participants will learn how to design a reusable algorithm that is scalable for varying remote sensing applications. These algorithms can be shared among the EO communities as user-defined processes (UDPs) or openEO services in the platform offered by the ecosystem.
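The workflows the tutorial covers are ultimately expressed as openEO JSON process graphs — the portable representation that openEO clients submit to a back end. As a rough, hand-built sketch (the collection name, extents and bands are illustrative, and the graph would normally be generated by the openEO Python client rather than written by hand):

```python
import json

# A minimal openEO process graph: load a Sentinel-2 cube and reduce it
# along the temporal dimension with a mean. Extents/bands are made up.
process_graph = {
    "load1": {
        "process_id": "load_collection",
        "arguments": {
            "id": "SENTINEL2_L2A",
            "spatial_extent": {"west": 5.0, "south": 51.0, "east": 5.1, "north": 51.1},
            "temporal_extent": ["2024-05-01", "2024-06-01"],
            "bands": ["B04", "B08"],
        },
    },
    "reduce1": {
        "process_id": "reduce_dimension",
        "arguments": {
            "data": {"from_node": "load1"},   # wires the output of load1 in
            "dimension": "t",
            "reducer": {"process_graph": {
                "mean1": {"process_id": "mean",
                          "arguments": {"data": {"from_parameter": "data"}},
                          "result": True}}},
        },
        "result": True,
    },
}

# The graph must serialize cleanly to JSON to be shared as a UDP.
print(json.dumps(process_graph)[:40])
```

Because the graph is plain JSON, it can be parameterised and published as a user-defined process (UDP) and reused by others without sharing any execution environment.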

Speakers:


  • Pratichhya Sharma - VITO
  • Victor Verhaert - VITO
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 1.31/1.32)

Tutorial: D.02.18 TUTORIAL - Mastering EOTDL: A Tutorial on crafting Training Datasets and developing Machine Learning Models

#stac

In this tutorial session, participants will dive deep into the world of machine learning in Earth observation. Designed for both beginners and seasoned practitioners, this tutorial will guide you through the comprehensive workflow of using the Earth Observation Training Data Lab (EOTDL) to build, manage, and deploy high-quality training datasets and machine learning models.

Throughout the session, you will begin with an introduction to the fundamentals of EOTDL, exploring its datasets, models, and the different accessibility layers. We will then move into a detailed walkthrough of EOTDL’s capabilities, where you’ll learn how to efficiently ingest raw satellite data and transform it into structured, usable datasets. Emphasis will be placed on practical techniques for data curation, including the use of STAC metadata standards, ensuring your datasets are both discoverable and interoperable.
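For a flavour of the STAC metadata that the curation step relies on, here is a minimal STAC Item built as a plain dictionary (identifiers and asset paths are hypothetical; this sketches the standard's structure, not EOTDL's own API):

```python
import json
from datetime import datetime, timezone

# A minimal STAC Item. "id" and the asset "href" are invented placeholders.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "S2_example_scene",
    "geometry": {"type": "Polygon",
                 "coordinates": [[[4.0, 50.0], [4.1, 50.0], [4.1, 50.1],
                                  [4.0, 50.1], [4.0, 50.0]]]},
    "bbox": [4.0, 50.0, 4.1, 50.1],
    "properties": {"datetime": datetime(2024, 6, 1, tzinfo=timezone.utc).isoformat()},
    "links": [],
    "assets": {"visual": {"href": "s3://example-bucket/scene.tif",
                          "type": "image/tiff; application=geotiff"}},
}

# Items must round-trip through JSON to be interoperable across catalogs.
print(json.dumps(item)[:40])
```

In practice a catalog of such Items is what makes a training dataset findable and searchable by space, time and metadata fields.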

Next, the session will focus on model development, showcasing the process of training and validating machine learning models using curated datasets, including feature engineering. Real-world examples and case studies will be presented to illustrate how EOTDL can be leveraged to solve complex problems in fields such as environmental monitoring, urban planning, and disaster management.

By the end of the tutorial, you will have gained valuable insights into the complete data pipeline—from dataset creation to model deployment—and the skills necessary to apply these techniques in your own projects. Join us to unlock the potential of Earth observation data and drive innovation in your machine learning endeavors.

Speakers:


  • Juan B. Pedro Costa - CTO@Earthpulse, Technical Lead of EOTDL
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 0.49/0.50)

Tutorial: A.08.16 TUTORIAL - ALTICAP: a global satellite altimetry sea level product for coastal applications

The ALTimetry Innovative Coastal Approach Product (ALTICAP) is a new satellite altimetry sea level product dedicated to coastal and regional applications. It currently contains five years of collocated high-resolution (20 Hz) Jason-3 altimeter along-track coastal variables (sea level anomaly, significant wave height and wind) in a global strip from 0 to 500 km from the coast. This experimental product builds on the most recent algorithms to ensure sea level reliability in the coastal strip and beyond, with the objective of extending the record into the past (Jason-2) and up to recent years (Sentinel-6). It has been developed with the goal of providing simple and easy-to-use files with a resolution consistent with the observable physical signals. It is developed by the coastal altimetry team from LEGOS/CTOH, CLS, Noveltis and CNES, and distributed on the AVISO+ website (DOI: 10.24400/527896/a01-2023.020).

During this tutorial session, we will briefly introduce the challenges of coastal satellite altimetry and present ALTICAP, along with some tools to handle the data, based on Jupyter notebooks in Python. Attendees will learn how to read the two ALTICAP dataset formats and how to plot the data in various types of figures (maps, time series, Hovmöller diagrams…). The training will also explore practical applications such as the computation of geostrophic currents and the comparison with other types of observations.
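As a taste of the geostrophic-current exercise, the underlying formula, v = (g/f)·∂η/∂x, can be sketched in a few lines of pure Python (the along-track spacing and SLA values below are invented for illustration, not ALTICAP data):

```python
import math

g = 9.81                                          # gravity, m/s^2
lat = 45.0                                        # latitude, degrees
f = 2 * 7.2921e-5 * math.sin(math.radians(lat))   # Coriolis parameter, 1/s

sla = [0.10, 0.12, 0.15, 0.14, 0.11]   # sea level anomaly samples, metres
dx = 7000.0                            # along-track spacing, metres (~1 Hz)

# Cross-track geostrophic speed from centred finite differences of SLA:
# v = (g / f) * d(eta)/dx
v = [(g / f) * (sla[i + 1] - sla[i - 1]) / (2 * dx)
     for i in range(1, len(sla) - 1)]
print([round(x, 3) for x in v])
```

The sign convention and the cross-track direction depend on the pass geometry; the tutorial's notebooks handle those details against the real along-track coordinates.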

Speakers:


  • Mathilde CANCET - CNRS/LEGOS
  • Léna TOLU - CNRS/LEGOS
  • Alexandre HOMERIN - NOVELTIS
  • Fabien LEGER - CNRS/LEGOS
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 1.61/1.62)

Tutorial: C.02.21 TUTORIAL - Visualization and analysis of Imaging Spectroscopy data with the EnMAP-Box

The EnMAP-Box is a software package designed to facilitate the visualization, processing and analysis of imaging spectroscopy data. Developed specifically to support the German Environmental Mapping and Analysis Program (EnMAP), the EnMAP-Box now offers a unique variety of ways to efficiently display and analyze remote sensing data from hyper- and multispectral missions such as EnMAP, EMIT, PRISMA, CHIME, Sentinel-2 and Landsat.

Developed as a Python plugin for the QGIS geoinformation system, the EnMAP-Box is part of a well-established, platform-independent, free-and-open-source software ecosystem and can be easily integrated into existing workflows.

In our tutorial, we will guide you through the essential functionalities of the EnMAP-Box, present its latest features and give an outlook on further developments.

Topics:
- Installation
- Introduction to EnMAP-Box GUI
- Raster import and metadata handling
- Visualization of hyper- and multispectral raster data, spatial and spectral linking
- Presentation of specific renderers for an optimized visualization of raster data and raster analysis results, e.g., to visualize class-fractions and probability layers
- How to run EnMAP-Box processing algorithms from QGIS, Python or CLI; how to create and run processing workflows using the QGIS Model Builder
- Spectral libraries: import spectral profiles from field campaigns; label spectral profiles with arbitrary attributes; collect image endmembers; modify profiles in QGIS field calculator
- SpecDeepMap: a deep learning-based semantic segmentation application; overview of functionalities and algorithms; how to finetune a pre-trained ResNet18 backbone on Sentinel-2 TOA imagery, utilizing European Union Cropmap labels; how to use a finetuned model to generate continuous mapping predictions

At the request of the participants, selected topics can be discussed in more detail. Questions and requests can be sent in advance to enmapbox@enmap.org.

Docs: https://enmap-box.readthedocs.io
Code: https://github.com/EnMAP-Box/enmap-box
Publication: Jakimow, Janz, Thiel, Okujeni, Hostert, van der Linden, 2023, EnMAP-Box: Imaging spectroscopy in QGIS, SoftwareX, vol. 23, doi: 10.1016/j.softx.2023.101507.

Speakers:


  • Benjamin Jakimow - Humboldt-Universität zu Berlin, Geography Department
  • Andreas Janz - Humboldt-Universität zu Berlin, Geography Department
  • Leon-Friedrich Thomas - University of Helsinki, Department of Agricultural Sciences
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 0.11/0.12)

Tutorial: D.02.20 TUTORIAL - EVE: A Comprehensive Suite of LLMs and Data for Earth Observation and Earth Sciences

LLMs have proven effective as general-purpose tools for a variety of tasks. However, when targeting specific domains, LLMs trained on general-domain data require domain-specific knowledge to achieve state-of-the-art results, particularly in technical and scientific disciplines like Earth Observation (EO). This can be achieved by developing domain-specific LLMs, either by training from scratch on vast amounts of domain-specific data [1, 2] or by instruction fine-tuning general-domain LLMs [3, 4, 5].

Inspired by this trend, we develop Earth Virtual Expert (EVE), a suite of LLMs, training and benchmarking data, and strategies for Earth Observation and related Earth Sciences. EVE is created by further pre-training an open-source general-domain LLM on billions of tokens of curated, high-quality scientific EO data sources. We then fine-tune instructed models with our own created datasets and authentic preference data. Finally, we integrate the chat models with an external curated database for Retrieval Augmented Generation (RAG).

EVE, the resulting model, is designed to be a helpful assistant within EO, catering to a wide audience: scientific specialists as well as members of the general public interested in any discipline related to EO. Target use cases include support for bibliographic exploration, assisting and informing policy decision-making, and making EO more approachable to a non-specialized public.

Our contributions include:
1. Domain-Specific Models: domain-adapted models, pre-trained on billions of EO-specific tokens and fine-tuned for chat instruction-based interaction in EO and related Earth Sciences.
2. Benchmarking Datasets: for evaluating EO instruction adherence, alignment, and hallucination mitigation, enabling robust validation and iteration.
3. Training Data:
i. A curated corpus containing billions of cleaned and processed tokens specific to EO.
ii. Instruction datasets designed for fine-tuning models on EO downstream tasks.
iii. Authentic preference/alignment data.
4. Retrieval-Augmented Generation (RAG) System: A curated RAG database of EO-relevant documents, integrated with the chat models to facilitate accurate and contextually grounded responses.
5. Hallucination Mitigation Strategy: A fact-checking method to suppress factual errors generated by the RAG system.
6. Open-Source Codebase: The supporting code for data processing, model fine-tuning, and deployment of the RAG system, to ensure reproducibility and usability.

The adapted and instructed models, corresponding datasets and benchmarks will be released as open source contributions to the Earth Observation and Earth Sciences through public repositories.
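To illustrate the retrieval step of a RAG system in generic terms, here is a toy bag-of-words ranker (purely illustrative; this is not EVE's implementation, and the documents and query are invented):

```python
import math
from collections import Counter

# Toy corpus: in a real RAG system these would be curated EO documents.
docs = {
    "doc1": "sentinel satellites measure atmospheric composition",
    "doc2": "ocean salinity retrieval from passive microwave sensors",
}

def vec(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# Retrieve the best-matching document for a query; a RAG system would then
# prepend the retrieved text to the LLM prompt to ground the answer.
query = "which satellites observe the atmosphere"
best = max(docs, key=lambda d: cosine(vec(query), vec(docs[d])))
print(best)  # → doc1
```

Production systems replace the bag-of-words vectors with learned embeddings, but the retrieve-then-ground control flow is the same.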

Speakers:


  • Vijayasri Iyer - Pi School
  • Marcello Politi - Pi School
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 1.34)

Hands-On: D.03.09 HANDS-ON TRAINING - The CoMet toolkit – Uncertainties made easy

The CoMet Toolkit (www.comet-toolkit.org), which stands for Community Metrology Toolkit, is an open-source software project to develop Python tools for the handling of error-covariance information in the analysis of measurement data. It was developed to handle uncertainty and error-correlation information in EO data and propagate these through EO processing chains, but it is generally applicable to any Python dataset with uncertainties and error-correlation, and can propagate uncertainties through any measurement function that can be written as a Python function.

The CoMet toolkit consists of a set of linked Python packages, all publicly available and installable through pip. The punpy tool implements metrologically robust uncertainty propagation, including handling of complex error-covariance information. This enables users to calculate the total output uncertainty of a measurement function (i.e. a processing chain) from the uncertainties on its inputs, and to study the effects of the various uncertainty contributions. The obsarray tool allows users to store uncertainty and error correlation in a self-described dataset (dubbed a 'digital effects table') using standardised metadata. These digital effects tables can also be passed to punpy, which uses this information directly, so that users typically never have to interact with the complex error-correlation information themselves. Using the CoMet toolkit, uncertainty and error-correlation information can be written, read, and processed in a way that is user-friendly, machine-readable and traceable.
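As a rough illustration of the Monte Carlo principle that punpy implements (this is not the punpy API, and the values are invented), uncertainties can be propagated by sampling the inputs from their error distributions and taking the spread of the outputs:

```python
import random
import statistics

# Hypothetical measurement function: radiance = gain * counts
def measurement(gain, counts):
    return gain * counts

random.seed(42)
N = 20000
gain, u_gain = 0.02, 0.001       # value and standard uncertainty (invented)
counts, u_counts = 1500.0, 30.0  # value and standard uncertainty (invented)

# Monte Carlo propagation: perturb the inputs, evaluate the function,
# and take the standard deviation of the outputs.
samples = [measurement(random.gauss(gain, u_gain),
                       random.gauss(counts, u_counts))
           for _ in range(N)]
u_out = statistics.stdev(samples)

# First-order analytic result for independent inputs, for comparison:
# u_out^2 = (counts * u_gain)^2 + (gain * u_counts)^2
analytic = ((counts * u_gain) ** 2 + (gain * u_counts) ** 2) ** 0.5
print(round(u_out, 3), round(analytic, 3))
```

The value of punpy is that it generalises this idea to correlated errors and arbitrary measurement functions, driven by the error-correlation metadata in a digital effects table.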

Within this training session, we will provide some brief theoretical background, introduce you to the various CoMet tools, and give hands-on experience of how to use these tools to propagate uncertainties through an example processing chain. We will use Jupyter notebooks, hosted on Google Colab, to run the training session (for a preview, see https://www.comet-toolkit.org/examples/). We will run through an example of calibrating a satellite sensor and show how the tools can be used for this purpose. We will calculate how different uncertainty components (e.g. noise with random error correlation, uncertainty on gain with systematic error correlation, …) contribute to the overall uncertainty budget, and show why it is relevant to take error-correlation information into account. If time allows, we will also help you apply CoMet to your own example. Please bring a (simple) example use-case in Python through which you would like to propagate uncertainties (e.g. one step of your own processing chain).

Speakers:


  • Pieter De Vis - National Physical Laboratory
  • Sam Hunt - National Physical Laboratory
  • Astrid Zimmermann - National Physical Laboratory
  • Maddie Stedman - National Physical Laboratory

Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 1.15/1.16)

Hands-On: F.01.12 HANDS-ON TRAINING - Communicating Climate Change with the ESA Climate Change Initiative’s Essential Climate Variables

The Global Climate Observing System (GCOS) has defined 55 Essential Climate Variables (ECVs) that critically contribute to the characterisation of the Earth's climate. Of these 55 ECVs, around two thirds can be derived using satellite data, providing users with near-global coverage over decadal time scales.

Since its conception in 2010, ESA’s Climate Change Initiative (CCI) has exploited the full satellite record to produce long-term climate data series for 27 ECVs, with some records now spanning over four decades. This wealth of data is invaluable for illustrating the causes and impacts of climate change at global and regional scales.

In this interactive training session, participants will be given a hands-on opportunity to use and explore the CCI data archive. This session will showcase how and where the Earth’s climate is changing and how these data can be used in research and development and to raise public awareness of climate change.

Participants will discover how to access the ECVs using the ESA CCI Open Data Portal and explore various relevant uses of CCI data for climate change applications (e.g., using CCI’s ECVs to illustrate key impacts of climate change, such as rising sea levels or increasing trends in the frequency and intensity of extreme weather events). Training material will be provided to participants, with exercises accessible to different levels of programming proficiency, from complete beginner to more advanced levels. Additionally, the session will introduce the ESA CCI Toolbox, a powerful Python package that simplifies access to and operations with ESA CCI’s ECVs.

Speakers:


  • Amina Maroini - Research Associate, Imperative Space
  • Dr. Lisa Beck - Deutscher Wetterdienst
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Hall L3)

Tutorial: D.03.17 TUTORIAL - Cloud-Native Earth Observation Processing with SNAP and Copernicus Data Space Ecosystem CDSE

#cloud-native

This tutorial will provide participants with practical skills for deploying ESA’s SNAP in cloud environments, leveraging containerization, Python integration, and the Copernicus Data Space Ecosystem (CDSE). The 90-minute session combines conceptual foundations, live demonstrations, and guided exercises to enable operational EO data analysis directly within cloud infrastructure.

1. Introduction to SNAP and CDSE (15 minutes)
• SNAP Overview: Highlight new features, including enhanced Python support via snappy and SNAPISTA, containerized deployment options, and hyperspectral data support.
• CDSE Architecture: Explore the CDSE’s data catalog, processing tools, and Jupyter environment, emphasizing its role in reducing data transfer costs through in-situ analysis.

2. Containerized SNAP Deployment (15 minutes)
• Container Fundamentals: Contrast Docker containers with SNAP’s snap packaging, addressing isolation challenges (e.g., subprocess confinement) and scalability.
• Cloud Deployment: Walk through launching pre-configured SNAP containers on CDSE, including resource allocation and persistent storage setup.

3. Python-Driven Processing with SNAPISTA and Snappy (25 minutes)
• Snappy and SNAPISTA: Understand the low-level Java-Python bridge (snappy) and SNAPISTA’s high-level API for graph generation, including performance trade-offs.
• Operational Workflows: Build a Python script using SNAPISTA to batch-process Sentinel data on CDSE, incorporating cloud-optimized I/O and error handling.
• Integration with CDSE APIs: Retrieve CDSE catalog metadata, subset spatial/temporal ranges, and pipe results directly into SNAP operators without local downloads.
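Under the hood, SNAPISTA's job is to generate the XML graphs that SNAP's Graph Processing Tool (GPT) executes. As a rough standard-library sketch of what such a graph looks like (operator parameters and the input path are hypothetical; real graphs come from SNAPISTA or SNAP's graph builder):

```python
import xml.etree.ElementTree as ET

# Build a minimal Read -> Subset GPT graph. The file path is a placeholder.
graph = ET.Element("graph", id="Graph")
ET.SubElement(graph, "version").text = "1.0"

read = ET.SubElement(graph, "node", id="Read")
ET.SubElement(read, "operator").text = "Read"
read_params = ET.SubElement(read, "parameters")
ET.SubElement(read_params, "file").text = "/data/S1A_example.zip"

subset = ET.SubElement(graph, "node", id="Subset")
ET.SubElement(subset, "operator").text = "Subset"
sources = ET.SubElement(subset, "sources")
ET.SubElement(sources, "sourceProduct", refid="Read")  # wire Read -> Subset
subset_params = ET.SubElement(subset, "parameters")
ET.SubElement(subset_params, "region").text = "0,0,1000,1000"

xml_text = ET.tostring(graph, encoding="unicode")
print(xml_text[:40])
```

A batch job then amounts to templating this XML per input product and invoking `gpt graph.xml` in each container, which is exactly the pattern the cloud deployment section exercises.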

4. Jupyter-Based Analytics and Collaboration (20 minutes)
• Jupyter Lab on CDSE: Navigate the pre-installed environment, accessing SNAP kernels, GPU resources, and shared datasets.
• Reproducible Workflows: Convert SNAP Graph Processing Tool (GPT) XML workflows into Jupyter notebooks, leveraging snapista for modular code generation.
• Collaboration Features: Demonstrate version control, real-time co-editing, and result sharing via CDSE’s portal.

5. Best Practices and Q&A (15 minutes)
• Q&A: Address participant challenges in adapting legacy SNAP workflows to cloud environments.

Learning Outcomes: Participants will gain proficiency in deploying SNAP on CDSE, designing Python-driven EO pipelines, and executing scalable analyses without data migration. The tutorial bridges ESA’s desktop-oriented SNAP tradition with modern cloud paradigms, empowering users to operationalize workflows in alignment with CDSE’s roadmap.

Speakers:


  • Pontus Lurcock - Brockmann
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 0.94/0.95)

Tutorial: A.03.09 TUTORIAL - EO AFRICA – Continental Demonstrator LUISA project: Human Appropriation of Net Primary Production Tutorial

Human activities significantly impact land productivity and carbon fluxes, primarily due to the increasing intensity of land use for food, feed, and raw material production in agricultural and forestry systems. African landscapes are undergoing rapid transformation driven by land-use intensification. Consequently, building the resilience of smallholder farmers and pastoralists to global population growth and the subsequent pressures on land is of critical importance.
This is the long-term goal of the Land Use Intensity’s Potential, Vulnerability, and Resilience for Sustainable Agriculture in Africa (LUISA) project, funded by the European Space Agency. The project focuses on the Human Appropriation of Net Primary Productivity (HANPP), an indicator that quantifies the proportion of Net Primary Productivity (NPP) consumed through human land use. HANPP provides key insights into the drivers and consequences of land-use intensification on ecosystem productivity.
LUISA has two primary objectives:
1. Develop a remote sensing-driven HANPP monitoring framework for key land cover types—cropland, forest, rangeland, and urban areas—within case study agroecosystems in Ethiopia, Mozambique, Senegal, and Uganda.
2. Scale up HANPP estimates across the African continent over extended spatial and temporal scales.
To enhance the accuracy of NPP estimates, the project will employ data assimilation techniques that integrate in situ and remote sensing observations to optimize parameters in JULES. HANPP is derived by comparing the NPP of actual vegetation—remaining after harvest and land-use conversion—with the NPP of potential natural vegetation, which represents the productivity of undisturbed ecosystems under current climatic conditions.
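The HANPP bookkeeping described above reduces to simple arithmetic; a minimal sketch with invented values (units gC m⁻² yr⁻¹, not project results):

```python
# HANPP compares the NPP of potential natural vegetation with the NPP that
# remains in the ecosystem after harvest and land-use change.
npp_potential = 800.0   # undisturbed vegetation under current climate
npp_actual = 650.0      # actual vegetation after land conversion
harvest = 250.0         # biomass removed by human use

npp_eco = npp_actual - harvest      # NPP remaining in the ecosystem
hanpp = npp_potential - npp_eco     # total human appropriation
hanpp_pct = 100.0 * hanpp / npp_potential
print(hanpp, round(hanpp_pct, 1))   # → 400.0 50.0
```

In the project, the two NPP terms come from remote-sensing-constrained model runs (actual vs. potential vegetation) rather than fixed numbers, but the appropriation accounting is the same.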
The project’s outputs, including results and intermediary products, will be made accessible through a tailored platform. Combined with continuous user engagement, this platform will facilitate the adoption of the HANPP monitoring framework. Ultimately, LUISA aims to support sustainable agricultural development while promoting nature conservation across African landscapes.
Join our tutorial where we will introduce you to the HANPP concept and platform.

Speakers:


  • Michael T. Marshall - Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente
  • Sarah Matej - Institute of Social Ecology (SEC) University of Natural Resources and Life Sciences, Vienna
  • Wai-Tim Ng - VITO NV
  • Luboš Kučera - Gisat s.r.o.
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 0.96/0.97)

Hands-On: B.01.06 HANDS-ON TRAINING - Unlocking Earth Observation Analytics: Hands-on Training with the Global Development Assistance (GDA) Analytical Processing Platform (APP)

This hands-on training session will offer participants a step-by-step, practical experience using the GDA APP, designed to simplify the use of EO in development aid activities and to accommodate both technical and non-technical users. The session will guide attendees through the platform’s main user interfaces and core functionalities, demonstrating how to process, analyse, and visualise Earth Observation (EO) data for real-world decision-making.

The training will begin with an introduction to the platform main interfaces and available tools, followed by interactive exercises where participants will explore practical use cases utilising the available GDA APP EO capabilities. The workshop aims to:
- Introduce users to the GDA APP, with a specific focus on the Capability Widgets and Explore interface.
- Demonstrate key capabilities of the GDA APP and raise awareness of their potential EO applications.
- Support capacity building by equipping attendees with the knowledge to integrate EO data into their daily workflows and decision-making processes.
- Encourage active engagement by allowing participants to explore EO capabilities firsthand and suggest improvements.
- Gather feedback on platform usability, front-end design, available tools, and ideas for future development.

The session will be highly interactive, promoting hands-on exploration while also collecting valuable feedback from participants on their user experience. This feedback will directly contribute to refining the platform, guiding future enhancements, and ensuring the GDA APP continues to meet the needs of its users. The session will also briefly introduce how new EO value-adding applications can be integrated into the platform.

Participants will leave the session with an in-depth understanding of GDA APP and the tools available to support their work, while also having a direct influence on shaping the platform’s ongoing development.

We encourage all LPS participants to register and create an account on the GDA APP (https://app-gda.esa.int/) to fully explore its features. We especially recommend that training session attendees complete their registration in advance to familiarize themselves with the platform and make the most of the session.

Read more for additional details and updates:
https://app-gda.esa.int/user-guide
https://gda.esa.int/cross-cutting-area/app/

Speakers:


  • Hanna Koloszyc - GeoVille
  • Alessia Cattozzo - MEEO
  • Judith Hernandez - EarthPulse

Supporting team:


  • Simone Mantovani - MEEO
  • Fabio Govoni - MEEO
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Foyer L3)

Hands-On: D.02.17 HANDS-ON TRAINING - Advanced Artificial Intelligence for Extreme Event Analysis: Hands-on with the AIDE Toolbox

We introduce the Artificial Intelligence for Disentangling Extremes (AIDE) toolbox, which supports anomaly detection, extreme event analysis, and impact assessment in remote sensing and geoscience applications. AIDE integrates advanced machine learning (ML) models and can yield spatiotemporally explicit monitoring maps with probabilistic estimates. The framework covers supervised and unsupervised algorithms, both deterministic and probabilistic, including convolutional and recurrent neural networks (CNNs and RNNs) and methods based on density estimation.

This session is intended for researchers, data scientists, Earth observation specialists, and professionals in climate science, remote sensing, and AI-driven environmental monitoring. Participants should have a basic understanding of machine learning concepts and spatiotemporal data analysis, though no prior experience with the AIDE toolbox is required. Familiarity with Python programming and common data science libraries (e.g., NumPy, Pandas, PyTorch) will be beneficial but not mandatory, as step-by-step guidance will be provided. For the hands-on training, participants must bring their laptops with Python 3.8 or later installed, preferably within a conda or virtual environment. The training will use Jupyter Notebook or any Python IDE (e.g., VS Code, PyCharm) and the AIDE toolbox, with installation instructions and dependencies provided in advance (see https://github.com/IPL-UV/AIDE). A pre-configured dataset and setup guide will be shared two weeks before the session to ensure a smooth experience. Internet access is recommended for package installation and additional resources.
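As a generic illustration of the density-estimation family of methods mentioned above, here is a toy Gaussian anomaly scorer on a 1-D series (purely illustrative; this is not the AIDE toolbox API, and the data are synthetic):

```python
import math
import random
import statistics

# Synthetic series: 200 "normal" samples plus one injected extreme at the end.
random.seed(0)
series = [random.gauss(0.0, 1.0) for _ in range(200)] + [6.5]

mu = statistics.mean(series)
sigma = statistics.stdev(series)

def score(x):
    """Gaussian negative log-likelihood; higher = more anomalous."""
    return 0.5 * ((x - mu) / sigma) ** 2 + math.log(sigma * math.sqrt(2 * math.pi))

# Flag points scoring worse than a 3-sigma reference value.
threshold = score(mu + 3 * sigma)
flagged = [i for i, s in enumerate(score(x) for x in series) if s > threshold]
print(flagged)
```

AIDE's models replace this simple density with learned spatiotemporal ones (e.g. CNN/RNN-based), but the output has the same shape: a per-location anomaly score that can be thresholded into a monitoring map.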

Speakers:


  • Miguel-Ángel Fernández-Torres - Department of Signal Theory and Communications, Universidad Carlos III de Madrid (UC3M), Madrid, Spain
  • Maria Gonzalez-Calabuig - Image Processing Laboratory (IPL), Universitat de València, Valencia, Spain
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 0.94/0.95)

Tutorial: A.01.15 TUTORIAL - Atmospheric Composition Training at the Living Planet Symposium

The annual ESA/ECMWF/EUMETSAT Atmospheric Composition Training aims to enhance the knowledge and skills of early career scientists in the field of atmospheric composition monitoring and modelling. Building on the heritage of this training (https://atmostraining.info/), this session at the ESA Living Planet Symposium will provide participants with a taste of these annual trainings, through a series of tutorials and demonstrations.

The session will cover the Earth Observation story, from observation to modelling, potentially covering topics such as:
• Exploring atmospheric composition data from state-of-the-art observing systems such as Sentinel-5P TROPOMI.
• Understanding the difference between observation and model output data.
• Creating forecasts of aerosols, atmospheric pollutants and greenhouse gases with atmospheric composition forecast models provided by the Copernicus Atmosphere Monitoring Service (CAMS).
• Analysing events such as dust transport, wildfire and volcanic emissions, and the impact these may have across different regions.
• Developing practical skills in using Python to interact with and plot data from satellites and models.

Participants will gain hands-on experience and practical skills that can be directly applied to their research, with demonstrations of tool sets such as the Atmospheric Virtual Lab (https://atmospherevirtuallab.org/).

Overall, this tutorial session aims to foster collaboration and knowledge exchange among participants, helping them stay at the forefront of atmospheric composition research and contribute to the broader goals of the ESA Living Planet Symposium.

The course is targeted at undergraduate or postgraduate students, researchers, professionals, or anyone interested in furthering their knowledge of atmospheric composition monitoring and modelling and developing their practical skills in data handling. Some basic background in physics, chemistry, mathematics and computing is assumed, and elementary familiarity with Python programming would be beneficial to make the most of the training.

Speakers:


  • Edward Malina - ESA
  • Daniele Gasbarra - ESA
  • Chris Stewart - ECMWF
  • Dominika Leskow-Czyzewska - EUMETSAT
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 0.96/0.97)

Hands-On: F.04.33 HANDS-ON TRAINING - Monitoring the high seas – enhancing marine protection using transparency and technology

As global momentum builds for the creation and implementation of Marine Protected Areas (MPAs) in the high seas, attention is turning towards how these remote areas will be monitored in practice. The new BBNJ Agreement provides the legal framework for establishing MPAs beyond national jurisdiction, but ensuring effective compliance and enforcement will remain a challenge unless innovative monitoring and compliance tools and approaches are embraced. Advancements in satellite technology, remote vessel monitoring, and data transparency present game-changing opportunities to support area-based management tools (ABMTs) within and beyond national jurisdiction.

This session by Global Fishing Watch will provide hands-on training in Marine Manager, a powerful platform that integrates satellite data, vessel tracking, and analytical tools to enhance marine conservation, monitoring, and enforcement. It complements conference abstracts by IDDRI and BirdLife exploring the potential of satellite technology and vessel-based monitoring for high seas MPAs.

By the end of the session, participants will have:
- Explored Marine Manager and its capabilities for monitoring remote MPAs
- Understood the underlying automated methods used to create vessel-related insights
- Analysed vessel-based data to assess human pressures in areas of interest
- Worked through real-world case studies to apply data-driven insights
- Discussed the practical applications for policy and management strategies

Outline:
- Introduction: Policy context and key monitoring challenges of remote MPAs
- Demonstration: Live walkthrough of Marine Manager’s key features and datasets.
- Hands-on Training: Participants will use the Global Fishing Watch platform, Marine Manager, learn about datasets available, analyse vessel data, and apply insights to real-world scenarios.
- Facilitated discussion: Open exchange on applications, challenges, and next steps.

This video gives a preview of Marine Manager and its functionalities: https://www.youtube.com/watch?v=-x67cHX5C-Q

Speakers:


  • Paul Tuda - Global Fishing Watch
  • Daniel Kachelriess - High Seas Alliance
  • Claudino Rodrigo - Global Fishing Watch
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.85/1.86)

Tutorial: D.03.11 TUTORIAL - Satellite Image Time Series Analysis on Earth Observation Data Cubes

This tutorial presents an overview of state-of-the-art methods for big Earth observation data analysis using satellite image time series. Topics include: (1) Access to big EO data cloud services; (2) Production of EO data cubes; (3) Combination of optical, SAR and DEM data sets for multi-sensor based analytics; (4) Generation of derived spectral, temporal and textural indices using EO data cubes; (5) Extraction of training samples for data cubes; (6) Quality control of training datasets using self-organised maps; (7) Methods for reducing imbalances in EO training samples; (8) Deep learning algorithms for classification of image time series organized as data cubes; (9) Post-processing of classification results using spatial Bayesian techniques; (10) Segmentation and region-based classification of image time series; (11) Best practices for evaluation of classification maps.

The tutorial is based on the online book "Satellite Image Time Series Analysis on Earth Observation Data Cubes" (https://e-sensing.github.io/sitsbook), which provides working examples of the above-described methods. The book uses the open-source R package sits. The software accesses data on Amazon Web Services, Brazil Data Cube, Copernicus Data Space Ecosystem, Digital Earth Australia, Digital Earth Africa, Microsoft Planetary Computer, NASA Harmonised Landsat-Sentinel, and Swiss Data Cube. It has reached TRL 9 and is being used operationally for large-scale land classification.

The examples to be presented will be based on Copernicus data sets available in CDSE, including Sentinel-1, Sentinel-2 and Copernicus DEM.

Attendees will get an overview of the whole process of land classification using open EO data. They will be able to complement the information provided in the tutorial by reproducing the examples from the online book afterwards at their convenience.

Speakers:


  • Gilberto Camara - National Institute for Space Research (INPE), Brazil
  • Rolf Simoes - Open Geo Hub Foundation, Netherlands
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.61/1.62)

Tutorial: A.08.20 TUTORIAL - Satellite data for the UN Ocean Decade: Addressing the 10 "ocean challenges" with marine data from the Copernicus Programme and EUMETSAT

The United Nations Decade of Ocean Science for Sustainable Development, known more colloquially as the UN Ocean Decade, outlines ten challenges that must be addressed to ensure an ocean that is sustainably and equitably managed. Marine Earth observation data play a key role in addressing these challenges, providing operational data streams that contribute to monitoring a broad range of biological and physical processes. EUMETSAT and our Ocean and Sea Ice Satellite Applications Facility (OSI SAF), along with Mercator Ocean International, produce regular case studies showing how our data, produced either under the Copernicus Programme or via our mandatory missions, can be used to support these monitoring activities.
In this tutorial, we will explore some of these case studies, showing practical examples of how and where marine remote sensing can be used to address specific Ocean Decade challenges. Each example will be accompanied by a Python-based Jupyter Notebook, which will allow participants to recreate and expand upon the analyses presented. The notebooks will be deployed on the Copernicus WEkEO DIAS JupyterHub, and made available under an open-source license, allowing them to be reused by participants in any future context. Examples will showcase EUMETSAT Sentinel-3 and Sentinel-6 products from the Copernicus marine data stream, those made available by our Ocean and Sea Ice Application Facility (OSI SAF), as well as downstream products from the Copernicus Marine Service (CMEMS). The tutorial will be supported by experts in the various data streams, who will be able to advise on data selection and product suitability across the broader marine portfolio.

Point of contact: ben.loveday@external.eumetsat.int

Session details:
This session is designed for early- and mid-career oceanographers and remote sensing scientists who have an interest in expanding their understanding of the uses of EUMETSAT and Copernicus marine data, as well as service providers and application developers focussing on the marine domain. The practical component of the tutorial will use a series of Python-based Jupyter Notebooks, hosted on the Copernicus WEkEO DIAS. A knowledge of Python and using notebooks would be advantageous, but is not strictly necessary.

Speakers:


  • Ben Loveday - EUMETSAT / Innoflair - EUMETSAT Copernicus Marine Training Service Manager
  • Hayley Evers-King - EUMETSAT - Lead Marine Applications Expert
  • Fabrice Messal - Mercator Ocean International - UX and Capacity Development Manager
  • Gwenaël Le Bras - Meteo France - OSI SAF communication and outreach officer
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 0.49/0.50)

Tutorial: D.01.10 TUTORIAL - Unlocking the Power of Destination Earth: A Guide to Data Lake Services

In this tutorial, you will learn how the Harmonised Data Access service streamlines data retrieval, ensuring seamless access to datasets from multiple sources including satellite imagery, climate models, and in-situ observations. We will then explore EDGE services, designed to bring computing closer to the data, reducing latency and enabling large-scale analytics. EDGE services consist of three core components:

STACK – A powerful environment featuring Jupyter Notebook and DASK, enabling interactive data analysis and distributed computing.
ISLET – An Infrastructure-as-a-Service (IaaS) solution providing scalable and distributed cloud-based computing resources to support intensive computational workloads.
HOOK – A workflow automation service that orchestrates data processing tasks, making it easier to manage complex workflows.
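The STACK component pairs Jupyter with Dask for distributed analysis. Below is a minimal sketch of the kind of chunked, parallel computation this enables, using a local Dask array as a stand-in for platform-hosted data (array sizes and values are illustrative).

```python
import dask.array as da

# A 4000x4000 array split into 1000x1000 chunks; each chunk can be
# processed in parallel by Dask workers (on the Data Lake's STACK,
# workers run close to the data instead of pulling it to the user).
x = da.random.random((4_000, 4_000), chunks=(1_000, 1_000))

# Lazy computation graph: nothing is evaluated yet.
anomaly = x - x.mean()

# Trigger the parallel computation, pulling back only a small summary.
result = float(anomaly.std().compute())
print(round(result, 3))
```

On the platform, the same pattern would be pointed at datasets exposed through the Harmonised Data Access service rather than random data, so the heavy lifting happens next to the data lake.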

By the end of this tutorial, you will be equipped to navigate Data Lake Services, efficiently work with the Harmonised Data Access service and leverage EDGE services for advanced analytics. Whether you're a scientist, developer, or policymaker, this guide will help you unlock the full potential of Destination Earth Data Lake.

Let’s get started and turn data into actionable insights for a more sustainable future!

Speaker:


  • Michael Schick - EUMETSAT
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.14)

Hands-On: A.02.13 HANDS-ON TRAINING - Biodiversity Data Cubes for Earth Science: From SQL Queries to Standardized Geospatial Output

Lina M. Estupinan-Suarez, Henrique Pereira, Lissa Breugelmans, Rocio Beatriz Cortes Lobos, Luise Quoss, Emmanuel Oceguera, Duccio Rocchini, Maarten Trekels, Quentin Groom
This 90-minute hands-on session empowers researchers and biosphere analysts to harness data mobilised by the Global Biodiversity Information Facility (GBIF) for advanced biodiversity analysis. Through active, step-by-step exercises, participants will learn how to create species occurrence cubes using SQL queries, calculate key biodiversity indicators, and convert these outcomes into a standardized geospatial format (EBVCubes) for enhanced ecological monitoring.
Session Outline:
1. Creating Species Occurrence Cubes (30 minutes including Q&A): Participants will start by extracting and organizing GBIF species occurrence data into structured data cubes using an SQL query. This segment emphasizes practical exercises, allowing attendees to work with real data and receive one-on-one guidance.
2. Ecological Modeling and Simulated Data Cubes (30 minutes including Q&A):
This part of the session will demonstrate how Virtual Suitability Data cubes can be generated and used in modeling workflows. Participants will explore a data structure that can be useful for analyzing changes in the suitability of multiple species across time and space.
3. Converting to Standard Geospatial Data (30 minutes including Q&A): In the final segment, the outcomes from the previous steps will be transformed into EBVCubes, a standardized geospatial data format tailored for biodiversity applications. This ensures that the results are readily applicable for further analysis and decision-making.
Participants will gain hands-on expertise in biodiversity data processing and a deeper understanding of how integrative data facilities can bridge the gap between Earth observation and biodiversity research. This enriched perspective is critical for developing informed conservation strategies and policies in response to the complex challenges posed by the intertwined crises of biodiversity loss and climate change.
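The SQL-to-cube step described above can be sketched as follows: this builds (but does not send) a request body for GBIF's SQL-based occurrence download API. The `SQL_TSV_ZIP` format name follows GBIF's public API documentation, but the SQL itself is purely illustrative, so check the currently supported schema before use.

```python
import json

# Illustrative SQL grouping occurrences into a year x species cube.
# Column names are illustrative; consult the GBIF SQL download
# documentation for the exact supported schema.
sql = """
SELECT "year", specieskey, COUNT(*) AS n_occurrences
FROM occurrence
WHERE "year" >= 2010 AND countrycode = 'BE'
GROUP BY "year", specieskey
""".strip()

payload = {
    "format": "SQL_TSV_ZIP",  # GBIF's SQL download result format
    "sql": sql,
}

# This JSON body would be POSTed (with authentication) to
# https://api.gbif.org/v1/occurrence/download/request
print(json.dumps(payload, indent=2)[:40])
```

The returned download is a zipped TSV, which can then be aggregated further or converted into an EBVCube in a later step.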

Speakers:


  • Quentin Groom - Biodiversity Informatics, Meise Botanic Garden
  • Rocio Beatriz Cortes Lobos - University of Bologna
  • Lina M. Estupinan-Suarez - German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.31/1.32)

Tutorial: D.05.07 TUTORIAL - Using Earth Observations within Climate Applications that are Fit for Your Purpose

The Copernicus Climate Change Service (C3S), operated by ECMWF on behalf of the European Commission, provides climate data and information based on scientific research. It offers around 35 catalogue entries derived from Earth Observation (EO), including multiple Climate Data Records (CDRs) accessible via the Copernicus Climate Data Store (CDS). The service is designed to simplify the discovery and access of data while meeting user requirements—whether for monitoring climate change, supporting policy development, or performing environmental studies. Consequently, datasets in the CDS adhere to best practices established internationally (e.g., by the Global Framework for Climate Services of the World Meteorological Organization).
In addition, C3S has developed an Evaluation and Quality Control (EQC) framework, to review technical and scientific aspects of service components by involving experts who assess each dataset’s documentation, usability, and maturity. The outcome is a set of clear quality statements that help users identify and work with the most suitable datasets for their purposes.
The EQC framework goes beyond traditional static reporting by offering dynamic, interactive tools that cater for varied user needs. Following Dee et al. (2024, BAMS), the system organises information into distinct tiers: one focused on detailed documentation (Quality Assurance, implemented as a compliance checklist), another on practical demonstrations of dataset performance (Quality Assessment, available as Jupyter notebooks), and a summary (Fitness for Purpose) that presents an overview of each dataset’s strengths and limitations.
Within this proposed Tutorial activity, EO products will serve as main examples to demonstrate how to access and engage with EQC information. It includes datasets from diverse domains (atmosphere, land, and ocean) and sectoral applications, such as forestry, urban planning, or climate monitoring. Practical tutorial examples, including downloadable Jupyter notebooks, will be presented, serving as both a means of independent verification and a learning tool for best practices in climate data applications.
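As a concrete example of the CDS data access the tutorial builds on, the kind of request dictionary passed to the `cdsapi` client's `retrieve(dataset, request, target)` call can be assembled as below; the dataset name and field values here are illustrative placeholders rather than a specific catalogue entry, and each CDS catalogue entry documents the exact keys it accepts.

```python
# Illustrative request of the kind passed to cdsapi.Client().retrieve()
# when downloading a C3S Climate Data Record from the Climate Data Store.
# Variable and dataset names below are placeholders, not real entries.
request = {
    "variable": "surface_air_temperature",
    "year": [str(y) for y in range(2015, 2021)],   # six years
    "month": [f"{m:02d}" for m in range(1, 13)],   # all twelve months
    "format": "zip",
}
dataset = "illustrative-c3s-dataset-name"  # replace with a real CDS entry

print(len(request["year"]), len(request["month"]))
```

The EQC Quality Assessment notebooks mentioned above wrap exactly this kind of retrieval, followed by diagnostics demonstrating the dataset's fitness for purpose.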

Speakers:


  • André Obregon – ECMWF
  • João Martins – ECMWF
  • Joaquin Munoz – ECMWF
  • Chunxue Yang – CNR-ISMAR
  • Ana Oliveira – +ATLANTIC CoLAB
  • Inês Girão – +ATLANTIC CoLAB
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.34)

Hands-On: D.04.11 HANDS-ON TRAINING - JupyterGIS: Collaborative Geospatial Analysis in Jupyter

This tutorial introduces JupyterGIS, a web-based, collaborative GIS platform integrated with Jupyter notebooks. Participants will learn to edit geospatial data, visualize raster and vector layers, apply symbology, and use the Python API for spatial analysis. We will explore real-time collaboration features such as shared document editing, live cursor tracking, and geolocated comments. The session also demonstrates JupyterGIS integration with QGIS.

Learning Objectives:
- Understand the core features of JupyterGIS and how it facilitates collaborative GIS workflows.
- Learn how to load and analyze raster and vector datasets in JupyterGIS.
- Apply symbology and filtering tools to geospatial data.
- Use the Python API for automating spatial analysis.
- Explore real-time collaboration features, including shared editing and live discussions.

Takeaways:
- Hands-on experience with JupyterGIS for geospatial data analysis.
- Practical knowledge of collaborative GIS workflows.
- Understanding of how JupyterGIS integrates with Jupyter notebooks and QGIS.
- Awareness of future developments and opportunities to contribute to the JupyterGIS community.

Agenda & Timeline (90 minutes):
- Introduction to JupyterGIS (15 min)
- Hands-on session: Loading and visualizing geospatial data
- Applying symbology and filtering tools
- Using the Python API for geospatial analysis
- Real-time collaboration features in JupyterGIS
- Discussion and feedback: Use cases and feature requests

Requirements:
- A modern web browser (Google Chrome or Firefox recommended; Safari support is not guaranteed)
- Basic familiarity with GIS concepts (e.g., layers, symbology, spatial data formats)
- Some experience with Jupyter Notebooks and Python is beneficial but not required

Instructors:


  • Anne Fouilloux - Simula Research Laboratory
  • Tyler Erickson - VorGeo, Founder, Radiant Earth, Technical Fellow
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Hall L3)

Tutorial: D.04.12 TUTORIAL - Cloud optimized way to explore, access, analyze and visualize Copernicus data sets

#stac

This tutorial will present how to leverage the various APIs provided by the Copernicus Data Space Ecosystem (CDSE) to process Copernicus data in a cloud computing environment using JupyterLab notebooks. First, it will show how to efficiently filter data collections using the SpatioTemporal Asset Catalog (STAC) catalogue API and how to make use of the STAC API extensions to enable advanced functionality such as filtering, sorting and pagination. Secondly, it will demonstrate how to access parts of Earth Observation (EO) products using the STAC assets endpoint and byte-range requests issued to the CDSE S3 interface. In this respect, it will discuss in detail how to do this using the Geospatial Data Abstraction Library (GDAL) and how to properly set up GDAL settings to maximize the performance of data access via the GDAL vsis3 virtual file system. Further, it will present how to leverage the STAC API to build a data cube for spatio-temporal analysis. Finally, it will show how to analyse the data cube using an open-source foundation model coupled with freely accessible embeddings generated from Sentinel EO data, and how to visualize and publish results using the Web Map Service (WMS). The ultimate goal of this tutorial is to empower users with the novel EO analytical tools provided by the CDSE platform.
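The STAC filtering and GDAL configuration steps described above might look like the following sketch. The search body follows the standard STAC API `/search` shape with the sortby extension, and the S3 endpoint and GDAL options are standard CDSE and GDAL settings; the collection name should be verified against the live CDSE catalogue.

```python
import json
import os

# A STAC /search body using the sortby extension, of the kind issued
# against the CDSE STAC catalogue (collection name illustrative).
search_body = {
    "collections": ["sentinel-2-l2a"],
    "bbox": [14.0, 50.0, 15.0, 51.0],
    "datetime": "2024-06-01T00:00:00Z/2024-06-30T23:59:59Z",
    "limit": 10,
    "sortby": [{"field": "properties.datetime", "direction": "desc"}],
}

# GDAL settings for efficient byte-range access via /vsis3/ against the
# CDSE S3 interface (credentials omitted). These are standard GDAL
# configuration options.
gdal_config = {
    "AWS_S3_ENDPOINT": "eodata.dataspace.copernicus.eu",
    "AWS_VIRTUAL_HOSTING": "FALSE",
    "GDAL_DISABLE_READDIR_ON_OPEN": "EMPTY_DIR",  # skip directory listings
}
os.environ.update(gdal_config)

print(json.dumps(search_body)[:30])
```

With this configuration in place, GDAL can open a single band of a product as `/vsis3/eodata/...` and fetch only the byte ranges it needs, instead of downloading the whole file.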

Speaker:


  • Jan Musial, CloudFerro
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 0.14)

Hands-On: D.02.16 HANDS-ON TRAINING - AI Foundation Models for Multi-Temporal and Multi-Modal EO Applications

Participants will gain a solid understanding of the principles of AI Foundation Models (FMs) and engage in hands-on training to develop and apply these models specifically for remote sensing and geoscience. The training will cover key aspects of geospatial data analysis and address challenges unique to Earth Observation (EO), such as processing multi-source and multi-temporal satellite remote sensing datasets. Participants will develop the skills needed to effectively integrate FMs across various stages of geoscience research and practical applications.

The teaching material will be based on the Fostering Advancements in Foundation Models via Unsupervised and Self-supervised Learning for Downstream Tasks in Earth Observation (FAST-EO) project, funded by the European Space Agency (ESA) Phi-Lab. This will provide participants with access to state-of-the-art resources and cutting-edge research, enabling them to engage with the latest advancements in foundation models for EO.

Participants will explore computing solutions for training and deploying FMs, learn to apply fine-tuning techniques to adapt models for EO applications, and build pipelines to deploy models into production environments while evaluating them on new datasets.
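A minimal sketch of the fine-tuning stage covered in the training, assuming PyTorch: a stand-in "pretrained" encoder is frozen and only a small task head is trained. A real foundation-model backbone from FAST-EO would replace the toy MLP used here.

```python
import torch
import torch.nn as nn

# Stand-in "pretrained" encoder (a real EO foundation model backbone
# would be loaded from a checkpoint instead of this toy MLP).
encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 128))
for p in encoder.parameters():
    p.requires_grad = False  # freeze the pretrained weights

# Task-specific head, e.g. a 10-class land cover classifier.
head = nn.Linear(128, 10)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data.
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(head(encoder(x)), y)
loss.backward()
opt.step()

trainable = sum(p.numel() for p in head.parameters() if p.requires_grad)
print(trainable)  # only the head's parameters are updated
```

Freezing the backbone is the cheapest adaptation strategy; the training also covers fuller fine-tuning, where some or all encoder layers are unfrozen at a lower learning rate.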

FAST-EO: https://www.fast-eo.eu/

Speakers:


  • Gabriele Cavallaro - Forschungszentrum Jülich and University of Iceland
  • Thorsteinn Elí Gíslason - Forschungszentrum Jülich
  • Thomas Brunschwiler - IBM Research Europe – Zurich
  • Jakub Nalepa - KP Labs
  • Agata Wijata - KP Labs
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Foyer L3)

Hands-On: A.10.06 HANDS-ON TRAINING - InSAR Time Series Analysis: Exploring SARvey and InSAR Explorer for Engineering Applications

InSAR is a key tool in engineering, enabling precise and timely evaluations of ground deformation and structural stability. This workshop provides a practical introduction to two open-source tools for InSAR time series analysis and visualization: SARvey and InSAR Explorer.
SARvey is a software package designed to perform single-look InSAR time series analysis, focusing on detecting and monitoring deformation in engineering applications, including dam stability assessment, road and railway monitoring, and urban deformation mapping at the building scale. This workshop covers a comprehensive SARvey workflow, including installation, parameter configuration, and advanced processing techniques, making it an ideal starting point for users new to InSAR as well as for experts seeking enhanced analysis capabilities.
InSAR Explorer complements SARvey as a QGIS plugin that facilitates the seamless integration of InSAR-derived deformation data into a Geographic Information System. The plugin provides intuitive tools for mapping, overlaying auxiliary datasets, and comparing outcomes from different processing workflows. Its user-friendly interface allows users to quickly visualize time series of deformation, generate interactive plots, and perform detailed assessments of the results.
The workshop will utilize notebooks hosted in a Google Colab environment to smoothly guide participants through the complete workflow, from software installation to executing real-world case studies using Sentinel-1 data. Attendees will learn how to modify processing parameters, interpret the resulting deformation time series, and utilize InSAR Explorer in QGIS for data visualization and analysis. Whether you are taking your first steps in InSAR processing or are an experienced practitioner exploring new tools, this workshop offers a comprehensive and interactive learning experience to advance your skills in Earth observation and deformation monitoring.
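A first step in interpreting the deformation time series produced by such tools is estimating a mean line-of-sight velocity by fitting a line to the displacements. The short NumPy sketch below uses synthetic data (the values are illustrative, not SARvey output):

```python
import numpy as np

# Synthetic displacement time series: 3 years of 12-day acquisitions with
# -14 mm/yr subsidence plus measurement noise (illustrative values).
rng = np.random.default_rng(42)
t_years = np.arange(0, 3, 12 / 365.25)
displacement_mm = -14.0 * t_years + rng.normal(0, 2.0, t_years.size)

# Linear fit: the slope is the mean line-of-sight velocity in mm/yr.
velocity, offset = np.polyfit(t_years, displacement_mm, 1)
print(f"estimated velocity: {velocity:.1f} mm/yr")
```

In InSAR Explorer, the equivalent fit is shown interactively when clicking a measurement point on the QGIS map.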

Speakers:


  • Andreas Piter - Institute of Photogrammetry and GeoInformation, Leibniz University Hannover
  • Mahmud Haghighi - Institute of Photogrammetry and GeoInformation, Leibniz University Hannover
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 0.11/0.12)

Tutorial: D.03.15 TUTORIAL - FAIR and Open Science with EarthCODE Integrated Platforms

#pangeo

This hands-on tutorial introduces participants to FAIR (Findable, Accessible, Interoperable, Reusable) and Open Science principles through EarthCODE integrated platforms, using real-world Earth Observation datasets and workflows. We will begin by exploring the fundamentals of FAIR and the EarthCODE catalog, then apply a checklist-based FAIRness assessment to datasets hosted on EarthCODE. Participants will evaluate current implementations, identify gaps, and discuss possible improvements. Building on this foundation, we will demonstrate how integrated platforms such as DeepESDL, OpenEO, and Euro Data Cube (Polar TEP, Pangeo & CoCalc) can be used to create reproducible EO workflows. Participants will create and publish open science experiments and products using these tools, applying FAIR principles throughout the process. The tutorial concludes with publishing results to the EarthCODE catalog, showcasing how EarthCODE facilitates FAIR-aligned, cloud-based EO research. By the end of the session, attendees will have practical experience in assessing and improving FAIRness, developing open workflows, and using EarthCODE platforms to enable reproducible, FAIR and Open Science. Please register your interest for this tutorial by filling in this form: https://forms.office.com/e/yKPJpKV0KX before the session.
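A checklist-based FAIRness assessment can be pictured as a set of yes/no criteria and a fraction-satisfied score; the criteria below are illustrative stand-ins, not the official EarthCODE checklist.

```python
# Toy FAIRness assessment: each criterion is a yes/no check and the score
# is the fraction satisfied. Criteria names are illustrative only.
checklist = {
    "has persistent identifier (Findable)": True,
    "retrievable via standard protocol (Accessible)": True,
    "uses community metadata standard, e.g. STAC (Interoperable)": True,
    "licence clearly stated (Reusable)": False,
    "provenance documented (Reusable)": False,
}

score = sum(checklist.values()) / len(checklist)
gaps = [name for name, ok in checklist.items() if not ok]
print(f"FAIRness score: {score:.0%}; gaps: {gaps}")
```

The tutorial exercise works the same way: participants walk a dataset through the checklist, record the gaps, and discuss concrete improvements for each failed criterion.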

Speakers:


  • Samardzhiev Deyan - Lampata
  • Anne Fouilloux - Simula Labs
  • Dobrowolska Ewelina Agnieszka - Serco
  • Stephan Meissl - EOX IT Services GmbH
  • Gunnar Brandt - Brockmann Consult
  • Bram Janssen - Vito
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.15/1.16)

Hands-On: D.04.10 HANDS-ON TRAINING - Working with Sentinel Hub APIs in Copernicus Data Space Ecosystem Jupyter Lab

Copernicus Data Space Ecosystem (CDSE) is the official data hub and cloud processing platform for Sentinel data. CDSE integrates instant data availability with APIs (Application Programming Interfaces), free virtual machine capacity (within a quota) and an open codebase. The CDSE Jupyter Lab connects all three of these, providing an open space to learn, experiment and upscale Sentinel data processing. The Sentinel Hub APIs enable advanced raster calculations and even raster-vector integration to generate zonal statistics, all within the API request, running on the server side. CDSE therefore makes it significantly easier to get started and learn to code Earth Observation data analysis. This training will show how to access, analyze, visualize and download satellite imagery in the CDSE Jupyter Lab using the Sentinel Hub API family. We will start with an introduction suitable for newcomers to coding. We will explore the Catalog, Process and Statistical APIs, and learn how to create scalable end-to-end processing for practical use cases. We will use openly available tutorial notebooks that demonstrate how you can perform time series analysis and calculate long-term statistics without downloading a single satellite image. After the course, participants will be able to create their own data analysis pipelines, making use of the vast repository of open algorithms available and the capacity of CDSE. Participants are expected to use their own laptop, but only a web browser is needed; no other software installation is necessary.
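The server-side processing described above is driven by an evalscript embedded in the API request. Below is a sketch of a Process API body computing NDVI, assembled here for illustration; in practice it would be POSTed with an OAuth token to the CDSE Sentinel Hub process endpoint, and the bounding box and time range are placeholders.

```python
import json

# Evalscript (V3 syntax): runs server-side, returning a single NDVI band
# so only the derived raster, not the source imagery, is transferred.
evalscript = """//VERSION=3
function setup() {
  return { input: ["B04", "B08"], output: { bands: 1 } };
}
function evaluatePixel(sample) {
  return [(sample.B08 - sample.B04) / (sample.B08 + sample.B04)];
}"""

request_body = {
    "input": {
        "bounds": {"bbox": [13.82, 45.85, 13.94, 45.95]},  # illustrative
        "data": [{
            "type": "sentinel-2-l2a",
            "dataFilter": {"timeRange": {
                "from": "2024-06-01T00:00:00Z",
                "to": "2024-06-30T23:59:59Z",
            }},
        }],
    },
    "output": {"width": 512, "height": 512},
    "evalscript": evalscript,
}
print(json.dumps(request_body)[:30])
```

The Statistical API reuses the same evalscript mechanism but returns aggregated values (means, percentiles) per time interval instead of a raster, which is what enables the "long-term statistics without downloading a single image" workflow.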

Instructors:


  • András Zlinszky - Community Evangelist, Sinergise Solutions
  • William Ray - Remote Sensing Engineer, Sinergise Solutions
Add to Google Calendar

Monday 23 June

648 events

Monday 23 June 09:00 - 10:20 (Hall L3)

Tutorial: C.01.24 TUTORIAL - Land Characterization System Software (LCHS)

The Land Characterization System (LCHS) software, developed by the Food and Agriculture Organization of the United Nations (FAO), is a powerful tool for defining land cover using the standard ISO 19144-2 on Land Cover Meta Language (LCML). This tutorial provides an in-depth introduction to LCHS, focusing on its core features, functionalities, and applications in environmental and agricultural contexts to support sustainable development and/or national/international reporting frameworks. Participants will gain hands-on experience in defining land cover classes, ensuring standardized and harmonized land cover information to enhance interoperability and provide consistent information for sustainable land management. This tutorial is highly recommended for professionals and researchers in agriculture, land management, and environmental monitoring, as well as those interested in geospatial technologies for sustainable development. By the end of this tutorial, participants will: 1) understand the fundamental principles of LCML (ISO 19144-2); 2) define land cover legends using LCHS; and 3) learn to implement standard LCML logic in defining land cover.

The tutorial will be in presentation mode, including: 1) introduction to LCML and its significance (10 minutes); 2) navigating the LCHS interface (20 minutes); 3) creating land cover legends (10 minutes); 4) import and export functionalities (10 minutes); 5) case studies: application in real-world scenarios (10 minutes); 6) future potential and integration with geospatial technologies (10 minutes); and 7) Q&A and discussion (20 minutes).

Speakers:


  • Njomaba, Elisha - NSLS
  • Spiller, Dario - NSLD
  • Peiser, Livia - NSL
  • Henry, Matieu - NSL
Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 0.49/0.50)

Tutorial: D.03.14 TUTORIAL - ESA WorldCereal: Cloud-based, Custom Crop Mapping Made Easy

Join us for an immersive tutorial on the ESA WorldCereal project—a groundbreaking, open-source cloud-based system that empowers anyone to generate custom crop type maps using both public and private reference data alongside free, high-resolution Earth Observation imagery. Built on the versatile Copernicus Data Space Ecosystem, our platform can be deployed anywhere in the world, leveraging the latest deep learning and machine learning innovations to produce precise crop maps from local to regional scales.

In this comprehensive session, you'll gain a clear understanding of the essential ingredients required for global, satellite-based crop mapping, including (harmonized) reference data and robust classification algorithms. You will explore the system's diverse functionalities through engaging, hands-on exercises in QGIS, intuitive Jupyter Notebooks and web interfaces—no prior Python programming experience required!

By the end of this tutorial, you'll be equipped to:

- Select, review and prepare in-situ reference data to train crop identification algorithms
- Access and explore public reference data on crop types available in the WorldCereal system
- Train custom crop type models tailored to your area and crops of interest
- Deploy and run your custom crop type model anywhere, ensuring scalability and flexibility

Whether you’re new to crop mapping or looking to enhance your expertise without extensive programming, this session offers the practical skills and innovative tools you need to take your mapping capabilities to the next level.

Target Audience and Technical requirements:

This session will be targeted towards anyone interested in learning the required technical skills to generate crop type maps based on free and open Earth Observation data using cloud-based processing infrastructure.

We expect basic prior knowledge on the concept of Earth Observation. Although having some prior experience in working with geospatial data and/or Jupyter Notebooks is definitely an asset, no specific technical skills are required to be able to participate in this session.

Participants are required to bring their own laptop with QGIS pre-installed and need a stable internet connection.

IMPORTANT

To guarantee a smooth experience and get access to the demo environment (where everything will be set-up and ready to go), participants are advised to take the following actions BEFORE entering the session:

- Create a free CDSE account --> https://dataspace.copernicus.eu/
- Create a free Terrascope account --> https://terrascope.be/en

Speakers:


  • Jeroen Degerickx - VITO Remote Sensing
  • Hendrik Boogaard - Wageningen Environmental Research

Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 1.15/1.16)

Hands-On: D.04.09 HANDS-ON TRAINING - EOEPCA Exploitation Platform Hands-on Deployment and Usage

EOEPCA provides an out-of-the-box solution for an EO Exploitation Platform that integrates open-source components interfacing through open-standards.

This hands-on session introduces the core platform components comprising the EOEPCA solution - providing an end-to-end experience that incrementally deploys an Exploitation Platform within Kubernetes, and then demonstrates its capabilities.

The session is ideal for beginners with no prior EOEPCA knowledge and only basic Kubernetes knowledge, but also welcomes people who have already experimented with EOEPCA Building Blocks.

Anyone interested in integrating the EOEPCA components into their Exploitation Platform, or in reusing them for Ground Segment, EO Data Discovery, Processing and Analytics, should join the session.

Each participant will be provided with their own Kubernetes cluster in which to create their own EOEPCA deployment. The session will be led by a Jupyter Notebook that describes each step in context, and provides the necessary commands to deploy, configure and utilise the emerging platform instance.

The walk-through derives from the EOEPCA Deployment Guide - https://eoepca.readthedocs.io/projects/deploy.

Support:


  • Salvatore Pinto - ESA
  • James Hinton - Telespazio UK
  • Franco Chen - Solenix/ESA
Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 1.85/1.86)

Tutorial: C.04.02 TUTORIAL - EarthCARE sample dataset handling

In today's ever-growing, data-driven decision-making environment, efficient data handling is paramount for successful data analysis and for creating new insights from data.

This tutorial provides a practical introduction to working with EarthCARE sample data files in the HDF5/NetCDF4 format using Python.

In order to effectively handle and analyse large volumes of data, participants will learn how to use Dask for parallel data computation and prepare data for machine learning.

An introduction to the EarthCARE dataset will be given at the start of the tutorial, with an emphasis on comprehending the variables and data model. Attendees will then engage in practical exercises, starting from opening datasets, to reading and manipulating data variables seamlessly with Dask xarray. Additionally, participants will gain practical skills in managing missing values within datasets, including techniques to identify, fill, or remove these gaps. They will also learn how to apply both simple and complex masking techniques to filter and modify their data effectively based on specific criteria. Moreover, the tutorial will cover fundamental data visualisation techniques, enabling attendees to plot their data using Matplotlib, making their data insights more accessible. In addition to these topics, a brief introduction will be provided on the essential steps involved in preparing data for machine learning applications. By the end of the session, participants will have a solid understanding of how to use observations in their data workflows.
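The missing-value and masking steps from the tutorial can be illustrated with plain NumPy; in the session itself these operations run on Dask-backed xarray objects, and the fill value below is illustrative.

```python
import numpy as np

# Toy 2-D data array where -9999.0 marks missing samples (fill value
# chosen for illustration; real products define theirs in metadata).
data = np.array([[1.2, -9999.0, 3.4],
                 [0.8, 2.1, -9999.0]])

# Identify fill values and replace them with NaN, then compute
# NaN-aware statistics over the remaining valid samples.
clean = np.where(data == -9999.0, np.nan, data)
mean = np.nanmean(clean)

# Conditional masking: drop missing values, then keep only values
# at or above a threshold (criterion is illustrative).
valid_values = clean[~np.isnan(clean)]
above = valid_values[valid_values >= 1.0]
print(round(float(mean), 3), above.size)
```

With xarray the same steps collapse to `da.where(da != fill_value).mean()` style expressions, evaluated lazily by Dask over the full file.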

You are warmly invited to attend this tutorial and unlock the potential of efficient data handling!

Speakers:


  • Marijana Crepulja - ECMWF

Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 0.96/0.97)

Hands-On: D.03.10 HANDS-ON TRAINING - EarthCODE 101 Hands-On Workshop

#pangeo

This hands-on workshop is designed to introduce participants to EarthCODE's capabilities, guiding them from searching, finding, and accessing EO datasets and workflows to publishing reproducible experiments that can be shared with the wider scientific community. This workshop will equip you with the tools and knowledge to leverage EarthCODE for your own projects and contribute to the future of open science. During this 90-minute workshop, participants will, in a hands-on fashion, learn about:
- Introduction to EarthCODE and the future of FAIR and Open Science in Earth Observation
- Finding, accessing, and understanding the interoperability and reusability of data and workflows on EarthCODE
- Creating reproducible experiments using EarthCODE’s platforms, with a hands-on example using Euro Data Cube and Pangeo
- Publishing data and experiments to EarthCODE
At the end of the workshop, we will take time for discussion and feedback on how to make EarthCODE better for the community. Pre-requirements for attendees: participants need to bring their laptop and have an active GitHub account, but do not need to install anything, as the resources will be accessed online using Pangeo notebooks provided by EarthCODE and EDC. Please register your interest by filling in this form: https://forms.office.com/e/jAB9YLjgY0 before the session.

Speakers:


  • Samardzhiev Deyan - Lampata
  • Anne Fouilloux - Simula Labs
  • Dobrowolska Ewelina Agnieszka - Serco
  • Stephan Meissl - EOX IT Services GmbH
Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 1.14)

Hands-On: D.01.09 HANDS-ON TRAINING - DestinE Platform: how to create your processor in Insula Processing

During this hands-on session, attendees will learn how to integrate an algorithm into the Insula Platform, the scalable workflow engine of DestinE. Participants will be guided through configuring their service to access catalogued data, allocating appropriate ICT resources, configuring the service for scalable execution, and defining outputs for visualization and analytics.

Speakers:


  • Cesare Rossi - CGI
  • Beatrice Gottardi - CGI
  • Francesco Cantavenera - CGI
Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 0.11/0.12)

Tutorial: D.02.19 TUTORIAL - Foundation Models for Remote Sensing Applications

Foundation models are Artificial Intelligence (AI) models trained on vast datasets that can be effectively adapted to a wide range of general tasks. Their recent emergence has brought a paradigm shift in AI, enabling solutions to complex problems with minimal task-specific data, often surpassing traditional machine learning models trained from scratch.

This tutorial aims to provide an introduction to foundation models and their transformative impact on Remote Sensing (RS) applications. The abundance of RS data as well as the variety of potential downstream tasks, such as object detection, scene identification, land use/land cover classification, weather forecasting, change detection and image captioning, offer an ideal setting for the development and application of robust and scalable foundation models. The tutorial will introduce attendees to the fundamentals of these multimodal models and explore their applications across various RS tasks. In addition, we will showcase the potential of foundation models to advance Earth Observation (EO) research and innovation, and discuss the challenges that still lie ahead.

The main goals of this tutorial can be summarized as follows:

- Introduce the principles of foundation models, large-scale pretraining and parameter-efficient finetuning.
- Explore state-of-the-art architectures and pre-training strategies for developing multimodal Foundation Models.
- Establish benchmark datasets and evaluation frameworks to assess the representation capacity of EO Foundation Models.
- Highlight advancements and challenges in developing remote sensing vision-language foundation models.
- Showcase how foundation models are revolutionizing RS applications, with specific examples in satellite and aerial imagery analysis.
- Review the inherent challenges of RS data, such as multi-source data fusion, resolution discrepancy, and temporal dynamics.
- Present future directions and open research questions.
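As a concrete illustration of one of these principles, here is a minimal NumPy sketch of parameter-efficient finetuning in the LoRA style: a large pretrained weight matrix stays frozen while only a low-rank correction is trained (dimensions and the zero-initialisation convention follow the common LoRA recipe; this is illustrative, not any specific EO foundation model):

```python
import numpy as np

# Parameter-efficient finetuning, LoRA style: instead of updating a large
# frozen weight matrix W, learn a low-rank correction A @ B.
rng = np.random.default_rng(1)
d, r = 512, 8                             # hidden size, adapter rank
W = rng.normal(size=(d, d))               # frozen pretrained weights
A = rng.normal(scale=0.01, size=(d, r))   # trainable down/up projection
B = np.zeros((r, d))                      # zero-init, so W is unchanged at start

x = rng.normal(size=(d,))
y_frozen = W @ x
y_adapted = W @ x + A @ (B @ x)           # adapter adds a rank-r update

# Only A and B are trained: a tiny fraction of the full parameter count.
trainable_fraction = (A.size + B.size) / W.size
print(trainable_fraction)                 # → 0.03125
```

Here roughly 3% of the parameters are trainable, which is why such adapters make finetuning large pretrained models feasible with minimal task-specific data.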

Speakers:


  • Nikolaos-Ioannis Bountos - National Technical University of Athens
  • Adam Stewart - Technical University of Munich

Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Foyer L3)

Hands-On: A.01.14 HANDS-ON TRAINING - Forecast severe thunderstorms in the coming hours using new Meteosat Third Generation weather satellite data

Bring your laptop and join us for a realistic experience forecasting thunderstorms over Europe with live data from the next-generation MTG imaging (FCI) and lightning (LI) instruments. Weather forecasters and trainers from the European Severe Storms Laboratory (ESSL) and from the European weather satellite operator EUMETSAT will provide you with background information on a weather situation close to the event, and show how to use satellite and other products to interrogate ongoing storms.
Then, you will decide which of the ongoing storms have started growing, which ones are severe and which ones are dissipating. After this careful analysis, you will issue severe weather warnings to save lives and property. You will see how well you have done compared to your peers by validating your forecasts against the European Severe Weather Database.

Here is the approximate schedule for this event:
09:00 Welcome and Introductions
09:05 Novel satellite products from Meteosat Third Generation (Stephan Bojinski, EUMETSAT)
09:15 Background and challenge (Tomas Pucik, ESSL)
09:30 Work in individual groups
10:00 Discussion and questions

For our task, we are going to use the web-based ESSL Data Displayer and a case of severe storms from Europe from either 2024 or 2025. We will briefly discuss how real weather forecasters would approach this situation and what type of products they would look at to infer the severity of storms using satellite data. We are going to concentrate on the satellite products that can tell us about storm microphysics, storm-top dynamics and lightning activity.

Speakers:


  • Stephan Bojinski - EUMETSAT
  • Tomas Pucik - ESSL
Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 0.94/0.95)

Tutorial: A.04.04 TUTORIAL - Empowering Climate Research with GHGSat Emissions Data

Methane is a potent greenhouse gas responsible for approximately 30% of global warming and a key precursor to tropospheric ozone formation, affecting both climate and air quality. Public satellites like Sentinel-5P have transformed our ability to detect and quantify emissions, particularly from large anthropogenic sources such as O&G production or coal mining. However, significant uncertainties remain, particularly for smaller, unaccounted sources in a variety of areas such as landfills, wastewater treatment plants, hydropower dams, agriculture or farming. High-resolution satellite data is essential to address these gaps, enabling precise source attribution, informing policy making and, ultimately, driving mitigation actions.
This tutorial will provide a scientific and technical overview of GHGSat’s high-resolution methane monitoring capabilities and its integration into scientific, public and policy frameworks. Scientists will learn how to apply and access GHGSat data through a number of scientific data-sharing programs.
The session will explore in detail:
1. Technical review of the GHGSat sensor: the Fabry-Perot narrow-band spectrometer, tuned for methane detection.
2. Methodologies for methane plume analysis, source rate estimation, and emission trends, focusing on an interactive hands-on approach.
3. Case studies demonstrating the detection and quantification of methane emissions, both from onshore and offshore sources (Glint mode).
4. Tip-and-cue concept: complementary use of GHGSat data with Sentinel-5P TROPOMI and other public missions to enhance methane source attribution at the facility level.
5. GHGSat in support of the scientific community: how to access the data via the Third Party Mission (TPM) programme, NASA’s Commercial Smallsat Data Acquisition (CSDA) programme, or the UK Space Applications Catapult data-sharing agreement.
6. GHGSat in support of GHG policy making: examples from around the world.
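For readers curious about what a source-rate estimate involves, here is a deliberately simplified sketch of an integrated-mass-enhancement (IME) style calculation on synthetic data. The grid, plume and wind speed are invented for illustration, and this is not GHGSat's operational retrieval:

```python
import numpy as np

# Toy methane column-enhancement map (kg/m^2 above background) on a 25 m grid.
pixel_size = 25.0                       # m
enhancement = np.zeros((40, 40))        # kg/m^2
enhancement[18:22, 5:30] = 2e-6         # a synthetic elongated plume

# Integrated Mass Enhancement: total excess methane mass in the plume.
plume = enhancement > 0
ime = enhancement[plume].sum() * pixel_size**2        # kg

# Plume length scale, here approximated by the square root of the plume area.
plume_length = np.sqrt(plume.sum() * pixel_size**2)   # m

# IME source-rate estimate Q = U_eff * IME / L, with an assumed effective
# wind speed (in practice taken from wind products and calibrated).
u_eff = 5.0                                           # m/s
source_rate_kg_h = u_eff * ime / plume_length * 3600.0
print(round(source_rate_kg_h, 1))                     # → 9.0 kg/h
```

Real retrievals refine every step shown here, from plume masking to the effective wind speed; the sketch only conveys the structure of the calculation.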

Speakers:


  • Carles Debart – Director of Business Development for Europe
  • Antoine Ramier – Science and Systems specialist
Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 1.61/1.62)

Tutorial: B.04.08 TUTORIAL - Mapping for Disaster Risk Management: Using UN-SPIDER Recommended Practices and Digital Twin for Effective Flood Hazard Depiction

Effective flood mapping and modeling are crucial for disaster risk management, enabling authorities and communities to better prepare for and respond to extreme weather events. The SPEAR Project, by the Center for Remote Sensing of Land Surfaces (ZFL) at the University of Bonn and UN-SPIDER, works to support disaster risk management activities through innovative Earth Observation methods, with a regional focus on the African continent.
A key component of this initiative is the development of step-by-step mapping guides by UN-SPIDER, called “Recommended Practices” (RPs). These procedures, available through the UN-SPIDER knowledge portal, provide guidance on creating information products, such as flood maps. The RPs are continually refined through ongoing research and are designed to guide users—regardless of their experience with GIS and remote sensing—on how to map pre- or post-disaster scenarios.
The latest activity within SPEAR focuses on Tobago and the City of Accra in Ghana, creating potential future flood extents by integrating geospatial data into a digital twin for improved modeling and visualization of the affected areas.
In this tutorial, participants will first learn how digital elevation models can be used to model storm surges using the QGIS model builder, and secondly how Google Earth Engine (GEE) is applied for mapping floods in response and recovery efforts. The first two parts are based on the Recommended Practices from the UN-SPIDER Knowledge Portal. In the third part of the tutorial, UN-SPIDER's latest project will be introduced, which integrates future flash-flood and sea-level-rise scenarios into digital twins supported by AI technology. These digital representations of real-world environments hold great potential for disaster risk management, as they demonstrate in interactive 3D applications how specific areas are affected by various events.
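The storm-surge modelling idea in the first part can be illustrated with a minimal "bathtub" inundation sketch on toy elevations (the actual Recommended Practice is implemented in the QGIS model builder and additionally checks hydrological connectivity to the sea):

```python
import numpy as np

# Toy DEM (elevations in metres above datum); in the tutorial this step is
# performed on a real digital elevation model.
dem = np.array([
    [0.2, 0.5, 1.1, 2.0],
    [0.1, 0.4, 0.9, 1.8],
    [0.0, 0.3, 1.5, 2.5],
])

surge_level = 1.0  # metres above datum

# Simple bathtub model: every cell at or below the surge level floods.
# (A full analysis also removes cells not connected to the coastline.)
flooded = dem <= surge_level
flooded_fraction = flooded.mean()

print(int(flooded.sum()))  # → 7 flooded cells
```

Varying `surge_level` over a range of scenarios yields the family of potential flood extents that feed the digital twin visualisations described above.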

Speakers:


  • Josi Bregulla - University of Bonn
  • Lóránt Czárán - UNOOSA/UN-SPIDER
  • Martin Hilljegerdes - University of Bonn
  • Victor Korir - University of Bonn
  • Dr. Michael Schmidt - University of Bonn
  • Jumpei Takami - UNOOSA/UN-SPIDER
Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 1.31/1.32)

Tutorial: B.01.07 TUTORIAL - From Space To Impact: Highlighting Key Elements Of EO Integration In The Processes Of Operational Development Finance

This tutorial aims to strengthen the connection between the Earth Observation (EO) industry and International Financial Institutions (IFIs). While EO has immense potential to support sustainable development initiatives, a key challenge remains: the EO industry often lacks a clear understanding of how IFIs operate, their dynamics and funding mechanisms, and how to effectively integrate EO solutions into development projects.
Building on two webinars organized with the World Bank (https://gda.esa.int/2022/12/video-leveraging-development-finance-procurement-insights/) and the Asian Development Bank (https://gda.esa.int/2023/06/video-gda-me-webinar-leveraging-development-finance-adb/), this session will further explore the entire development finance process—from donors to IFIs and client countries—highlighting where EO can play a role and how industry stakeholders can better engage. We will showcase real-world case studies, such as the role of ESA in supporting IFI projects, demonstrating how EO services have transitioned from initial support to direct engagement between IFIs and the industry.
Key topics will include:
• Understanding IFI procurement processes and funding mechanisms.
• The stages of development finance and how EO solutions fit in.
• Lessons learned from past ESA-supported engagements.
• Challenges faced by both the EO industry and IFIs, and ways to improve collaboration.
By the end of this session, participants will gain a deeper understanding of how to navigate IFI processes, position EO solutions effectively, and create meaningful partnerships that drive real impact in global development.

Speakers:


  • Pierre Chrzanowski - World Bank
  • Eric Quincieu - Asian Development Bank
  • Oliver Mundy - International Fund for Agricultural Development
Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 0.14)

Hands-On: A.02.12 HANDS-ON TRAINING - lidR: (A workshop for) Airborne LiDAR Data Manipulation and Visualization for Environmental Applications

This workshop is all about unlocking the power of the lidR package for forest management. Attendees will learn how to access and analyze information from lidar files, including how to select regions of interest and generate standard products like digital terrain models and canopy height models. The workshop will also cover topics such as how to use lidR outputs for vegetation modeling using the area-based approach, and individual tree segmentation and metric extraction. Participants will discover how to efficiently process collections of lidar files using the LAScatalog processing engine. Additionally, the workshop will delve into user-defined functions and how they can be utilized to further enhance analysis capabilities. We will focus on the generation of metrics useful for forest and vegetation biodiversity assessment and forestry.
Don’t miss out on this opportunity to take your lidar analysis skills to the next level!

Speakers:


  • Prof Nicholas Coops - University of British Columbia
  • Mr Liam Irwin - University of British Columbia
  • Mr Brent Murray - University of British Columbia
Add to Google Calendar

Monday 23 June 09:00 - 10:20 (Room 1.34)

Hands-On: D.04.08 HANDS-ON TRAINING - EO Data Processing with openEO: transitioning from local to cloud

#stac #cloud-native

This hands-on training aims to provide participants with practical experience in processing Earth Observation (EO) data using openEO. By the end of the session, participants will be able to:
- Understand the core concepts of EO data cubes and cloud-native processing
- Transition from local data processing to cloud-based environments efficiently, always using the openEO API
- Use openEO Platform (openeo.cloud) to process EO data via multiple cloud providers
- Gain familiarity with Python data access and processing using the openEO API

Training Content & Agenda

Introduction & Overview
- Introduction to the openEO API: functionalities and benefits
- Data cubes concepts and documentation review
- Overview of the "Cubes & Clouds" online course by Eurac Research

Transitioning to Cloud Processing
- Challenges and advantages of moving from local processing to cloud environments
- Overview of cloud providers (VITO Terrascope, EODC, SentinelHub) and their integration with openEO Platform
- Key concepts of FAIR (Findable, Accessible, Interoperable, Reusable) principles implemented by openEO
- STAC: how the SpatioTemporal Asset Catalog allows interoperability

Hands-On Training with openEO
- Setting Up the Environment
-- Accessing openEO Platform JupyterLab instance
-- Cloning GitHub repositories for training materials
- Basic openEO Workflow
-- Discovering and accessing EO datasets
-- Executing simple queries using openEO Python Client
-- Processing workflows using local and cloud-based computation
- Multi-Cloud Processing
-- Sample workflow using multiple cloud providers
- Executing an End-to-End EO Workflow
-- Data discovery and preprocessing
-- Applying processing functions (e.g., time-series analysis, indices computation)
-- Exporting and sharing results according to open science principles

Q&A and Wrap-Up
- Discussion on best practices and troubleshooting common issues
- Resources for further learning (EO College, openEO documentation)
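As a flavour of the "indices computation" step, here is a local NDVI calculation on toy arrays; in the session, the same formula is expressed as openEO processes and executed either locally or on one of the cloud back-ends:

```python
import numpy as np

# Toy red/NIR reflectance bands of a 3x3 scene (values in 0-1).
red = np.array([[0.1, 0.2, 0.3], [0.1, 0.5, 0.2], [0.4, 0.1, 0.1]])
nir = np.array([[0.6, 0.6, 0.4], [0.5, 0.5, 0.3], [0.4, 0.7, 0.8]])

# NDVI = (NIR - red) / (NIR + red); applied to cloud-hosted collections,
# this becomes a small openEO process graph instead of a NumPy expression.
ndvi = (nir - red) / (nir + red)
```

The appeal of the openEO API is that exactly this kind of band arithmetic looks the same whether it runs on a laptop subset or on a full collection at a cloud provider.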

Speakers:


  • Claus Michele - Eurac Research, Bolzano, Italy
  • Zvolenský Juraj - Eurac Research, Bolzano, Italy
  • Jacob Alexander - Eurac Research, Bolzano, Italy
  • Pratichhya Sharma - VITO, Mol, Belgium
Add to Google Calendar

Monday 23 June 10:30 - 12:30 (Opening - Hall A)

Session: Opening session


This session will be accessible via live captioning at the following link: HERE

Due to limitations in the app, this is clickable only from the web version of the programme

Alexander Van der Bellen (video address)
President of Austria

Margit Mischkulnig
Head of the Department for Space Affairs and Aeronautical Technologies, Austrian Federal Ministry for Innovation

Peter Hanke or representative
Federal Minister for Innovation, Mobility and Infrastructure, Republic of Austria

Josef Aschbacher
Director General, ESA

Andrius Kubilius (video address)
EC Commissioner, Defence and Space

Simonetta Cheli
Director of Earth Observation Programmes, ESA

Aarti Holla-Maini
Director of the United Nations Office for Outer Space Affairs

Florence Rabier
Director General, ECMWF

Diana Ürge-Vorsatz
Vice-Chair, IPCC

Philip Evans
Director General, EUMETSAT

Lene Rachel Andersen
President of Nordic Bildung

Nicolas Gruber
Professor at the Department of Environmental Systems Science, ETH Zurich

Add to Google Calendar

Monday 23 June 12:30 - 13:15 (Frontiers Agora)

Session: F.05.06 GEOValue: an international community focusing on the value of geospatial information for decision making

GEOValue is a multi-disciplinary international community pursuing evidence-based methodologies to assess the use and value of Earth Observations (EO). The community is managed by a steering committee with representatives from ESA, EARSC, NOAA, USGS and NASA and gathers together an extended network of experts and analysts. The group has been tasked by GEO to develop an impact assessment toolkit that can help GEO users and service providers to reflect upon the benefits of exploiting EO-based solutions for different purposes and across different organisations. The Toolkit could potentially be extended to serve a larger community of practitioners.
This session, after reflecting upon the needs and challenges encountered by GEO in this regard, calls for people at the LPS (companies, agencies, researchers, consultants) interested in the topic of EO impact assessments to discuss with GEOValue experts how they might apply the toolkit to assess the impacts of their activities on users and society.

Speakers:


  • Alessandra Tassa - ESA
  • Jean Dusart - EC
  • Tim Stryker - USGS
  • Daniela Requena Suarez - GFZ and GFOI
  • Harriet Wilson - Stirling University and GEOAquawatch

Add to Google Calendar

Monday 23 June 12:30 - 14:00 (ESA Agora)

Session: F.02.14 African-European Earth Observation Partnership

Europe and Africa have been collaborating for decades in the field of Earth Observation (EO), co-developing innovative EO applications and services from national to continental scale through long-term partnerships. The newly founded African Space Agency (AfSA) and the European Space Agency (ESA) are both continental space agencies and, as such, share common programmatic interests and similar goals, which will only strengthen this partnership across the two neighboring continents. Considering the recent inauguration of the AfSA in Cairo and the upcoming setup of its operations, the future cooperation and joint priorities will be discussed involving the main African and European stakeholders – including the European Commission (DG-DEFIS, DG-INTPA, DG-JRC), the African Union Commission, African regional/national institutions, ESA and EUMETSAT. This Open Forum will discuss how Earth Observation can enable the Digital Transformation in Africa to address the social and scientific challenges of specific importance to Africa laid out in the African Union Agenda 2063, the African Space Policy and Strategy, as well as in the EU communication “Towards a Comprehensive Strategy with Africa”. In particular, the following topics will be covered:
- African-European priorities for collaboration to strengthen the AfSA
- Emerging thematic domains where EO can create impact in Africa
- The role of the African private sector in the uptake of EO

Opening: Welcome and keynotes


  • Rune Floberghagen, Head of the Climate Action, Sustainability and Science Department – European Space Agency (ESA)
  • Tidiane Ouattara – President of the African Space Council, African Space Agency

Moderated panel discussion: Moderator Benjamin Koetz, Head of Long-term Action Section, ESA


  • Tidiane Ouattara – President of the African Space Council, African Space Agency
  • Rakiya Abdullahi Baba-ma'aji, Assistant Director NASRDA & African Space Council
  • Hamdi Kacem, GMES and Africa Support Program, African Union Commission
  • Mr. Meshack Kinyua, Capacity Coordinator, African Space Agency
  • Cecilia Donati, Regional & Multi-Country Programmes for Africa - DG International Partnership (DG INTPA), European Commission
  • Michel Massart, DG Joint Research Center (DG JRC), European Commission
  • Stephano La Terra Bella, Administrator, DG DEFIS, European Commission (TBC)

Open Questions & Answers


Add to Google Calendar

Monday 23 June 12:30 - 13:15 (Nexus Agora)

Session: D.03.06 Empowering Collective Action Through Earth Observation: Informing and Engaging Society

This Agora will explore how Earth Observation, combined with cutting-edge technologies like AI, can empower collective action on global challenges such as environmental degradation, public health and climate extremes. Such action is only possible when people have accessible tools and information. EO provides a unique resource, yet its vast potential often remains untapped by the broader public.
Platforms like the ESA-NASA-JAXA EO Dashboard highlight how international collaboration can deliver accessible and actionable information, but we must go further to ensure these insights drive engagement and impact.
This Agora will focus on how to make EO more accessible, leveraging it to inspire and enable people to participate in solutions. Success stories such as the EO Dashboard will be introduced as examples to set the scene for the conversation. We will also explore how cooperation among space agencies and other international organizations can strengthen efforts to inform and mobilize society effectively.
The Agora will feature a panel of 4–5 invited speakers and a moderator, followed by a dynamic discussion involving both the panelists and the audience. We propose engaging speakers and moderators from ESA, NASA, JAXA, European Commission as well as representatives from international bodies (e.g., UN agencies), educational institutions, or initiatives focused on citizen science and public engagement to provide diverse perspectives on themes like:
1. Connecting People with Space Technologies
- Bringing EO closer to the public through visualization, interactive platforms, and storytelling.
- Success stories (e.g., EO Dashboard) that simplify complex science for broader audiences.
2. Open Science for Collective Action
- The importance of a well-informed society in driving solutions.
- How EO Open Science can boost public and policy engagement.
3. The Role of International Cooperation
- Role of inter-agency partnerships to amplify impacts in society.
- New initiatives to integrate EO into educational and public outreach programs.

Moderators:


  • Anca Anghelea - ESA

Speakers:


  • Naoko Sugita - Advisor to the Director, EORC, STD-I, JAXA
  • Giuliana Miranda - Climate Correspondent, Oxford Climate Journalism Network, Folha de S.Paulo
  • Julian Akani Guery - Lead Data Scientist, Kayrros
  • Stefano Natali – Managing Director, Sistema
  • Tim Lemmens - Policy Officer at European Commission (DEFIS)
Add to Google Calendar

Monday 23 June 12:30 - 12:50 (EO Arena)

Demo: C.06.14 DEMO - The DEMIX Operations Platform: a free tool for the visualisation, processing and quality assessment of Digital Elevation Models

Digital Elevation Models (DEMs) are a key data source for the correction of Earth Observation (EO) products (e.g., radar terrain correction, orthorectification) and a wide variety of thematic applications (e.g., flood, landslide and avalanche modeling and mapping). As a consequence, it is crucial to properly evaluate the quality of DEMs considered for these applications. A complete DEM evaluation can be performed with the joint use of two frameworks: the Earthnet Data Assessment Project (EDAP) guidelines and the DEM Intercomparison eXercise (DEMIX) methodology. The EDAP guidelines provide a complete set of rules for evaluating the quality of EO missions and products directly from their documentation. Complementarily, the DEMIX methodology evaluates the quality of DEMs over specific areas of interest called DEMIX tiles (areas of approximately 10x10 km) against user-defined criteria (generic or thematic). As opposed to the EDAP guidelines, the DEMIX methodology requires DEM processing which may not be trivial for end-users to compute. This processing includes:
- the collection of assessed (or "candidate") DEMs and Very-High Resolution (VHR) reference DEMs,
- the transformations required to express candidate and reference DEMs in the same geometry (resamplings, reprojections, horizontal and vertical datum transformations, consideration of point and area pixel types),
- the evaluation of scores and rankings based on custom criteria and areas of interest.
To alleviate this issue, the DEMIX Operations Platform has been developed and made freely available.
This session aims at demonstrating the main features of this platform, including:
- the retrieval of DEM metadata,
- the visualisation of DEMs on a 3D globe,
- the export of DEM elevations as GeoTIFF, with a custom geometry (Coordinate Reference System, Vertical Reference System, resampling method, pixel type), and
- the ranking of DEMs according to user-selected criteria.
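The "scores and rankings" step can be illustrated with a small sketch of per-tile elevation-difference statistics (synthetic DEMs and illustrative criterion names; the platform performs the real reprojection and datum transformations described above):

```python
import numpy as np

# Candidate and reference DEMs already resampled to a common grid.
# Synthetic data: the candidate is the reference plus ~1.5 m of noise.
rng = np.random.default_rng(0)
reference = rng.uniform(100.0, 500.0, size=(50, 50))
candidate = reference + rng.normal(0.0, 1.5, size=(50, 50))

# Per-tile elevation-difference criteria of the kind used to rank DEMs.
diff = candidate - reference
criteria = {
    "mean_error": diff.mean(),                  # systematic bias
    "mae": np.abs(diff).mean(),                 # mean absolute error
    "rmse": np.sqrt((diff**2).mean()),          # root-mean-square error
    "le90": np.percentile(np.abs(diff), 90),    # 90th-percentile linear error
}
# For a given tile and criterion, DEMs with smaller scores rank higher.
```

Computing such criteria over every DEMIX tile and candidate DEM is exactly the bookkeeping the platform automates for end-users.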

Speakers:


  • Axel Corseaux - VisioTerra
Add to Google Calendar

Monday 23 June 12:52 - 13:12 (EO Arena)

Demo: D.04.29 DEMO - xcube viewer: More than visualization - using xcube Viewer for exploration and analysis of Earth Science Data

The Deep Earth System Data Lab (DeepESDL) is an integrated, cloud-based service within ESA’s platform ecosystem, offering modular, scalable resources for data access, collaborative processing and analysis, as well as dissemination for the Earth Science community. It provides a collaborative Jupyter Hub environment, machine learning tools, and sophisticated applications for visualization. Its growing user base comprises individual researchers as well as several research projects leveraging DeepESDL’s modular offerings to implement their processing tasks.

A key component of the platform is xcube viewer, a powerful web application that enables interactive visualization and analysis of diverse datasets from Earth Observation and Earth Sciences. Participants will become familiar with the viewer’s core functionalities, including data exploration, comparing data, integration of feature data and external data sources, and statistical analysis tools. Additionally, new features, such as the generation of user-defined variables, sharing of data views, and customizable charts, will be introduced. The demonstration will also illustrate how xcube viewer seamlessly integrates with Jupyter Hub, providing a powerful tool within the interactive development environment.

By attending this session, participants will learn how to effectively use xcube viewer for their research, leveraging the cloud-based DeepESDL platform or their own hardware. They will also gain insight into the latest developments of the application.

Speaker:


  • Alicja Balfanz - Brockmann
Add to Google Calendar

Monday 23 June 13:15 - 14:00 (Nexus Agora)

Session: F.01.08 Climate Call Card Game - Session 1

Come play the Climate Call card game and see which activities from our daily lives actually affect the climate. Let's reveal common misconceptions and be inspired by how one can communicate research findings through games and positive visualisations. Karl Sterner Isaksson, Operational Manager at Climate Call, is also happy to brainstorm ways to gamify or visualise your research/work.

Speakers:


  • Karl Sterner Isaksson - Climate Call
Add to Google Calendar

Monday 23 June 13:15 - 14:00 (Frontiers Agora)

Session: F.01.11 What’s Next for Teaching Earth Observation? Trends, Tools, and Strategies

Education activities in recent years have undergone a significant transformation driven by the global digitalization of education and training. Traditional teaching methods, like face-to-face training provided to small groups of students, are being complemented or even replaced by massive open online courses (MOOCs) with hundreds of participants following the course at their own pace. Institutions involved in training and capacity building are continuously exploring new approaches to developing course materials, and modern tools and technologies to deliver training more effectively. However, many open questions remain, e.g. how to monitor the impact of training more effectively, how to increase the active involvement of participants, or which tools and technologies should support training activities in the future.
During this Agora, experts from space agencies, international organisations, universities and companies working in the domain of space education will exchange ideas and lessons learnt, discuss future opportunities and challenges that the digital transformation of education has brought, consolidate recommendations for future education and capacity building activities, and explore opportunities to further collaborate. The outcomes of the discussion will be summarized into a set of recommendations that will be shared with ESA and other agencies and organisations involved in capacity building and training, to develop a workplan for future activities in the domain.

Speakers:


  • Francesco Sarti - Scientific Coordinator of EO education, training and capacity building activities, ESA
  • Carlos López-Martínez - Director of Education, IEEE Geoscience and Remote Sensing Society (GRSS) and Associate Professor, Universitat Politècnica de Catalunya
  • Anca Anghelea - Open Science Platform Engineer, ESA
  • Robert Eckhardt - CEO, ignite education and Coordinator, EO College

Add to Google Calendar

Monday 23 June 13:15 - 13:35 (EO Arena)

Demo: D.02.23 DEMO - Machine Learning API for Earth Observation Data Cubes

#stac

The Copernicus Data Space Ecosystem (CDSE) and the federated openEO platform have adopted the openEO specification, which was developed to standardize access to cloud-based processing of satellite imagery big data. This specification simplifies data retrieval via STAC and analysis via standardized processes across various applications.
Building on this foundation, we propose a Machine Learning (ML) API for Satellite Image Time Series Analysis, extending the openEO API to integrate ML workflows. This extension allows users to leverage openEO client libraries in R, Python, Julia, and JavaScript while utilizing the R SITS package, which provides specialized ML tools for satellite image time series analysis.
Our ML API supports both traditional ML algorithms (e.g., Random Forest, SVM, XGBoost) and advanced deep learning models (e.g., Temporal Convolutional Networks (TempCNN), Lightweight Temporal Attention Encoder (LightTAE)). A core focus is reproducibility, ensuring transparent tracking of data provenance, model parameters, and workflows. By integrating ML into the openEO specification, we provide scalable, flexible, and interoperable ML tools for Earth Observation (EO) data analysis.
We encapsulated SITS within the openEO ecosystem using a new R package called openeocraft. This empowers scientific communities to efficiently analyze EO data cubes using advanced ML concepts in a simplified manner across multiple programming languages. This work aims to demonstrate the democratization of access to ML workflows for satellite image time series analysis.
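Although the SITS/openeocraft stack described above is R-based, the flavour of the traditional-ML path can be sketched in Python with scikit-learn, which the openEO client ecosystem also supports (synthetic time series and an illustrative two-class problem, assuming scikit-learn is available):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic NDVI-like time series for two land-cover classes:
# "crop" pixels show a seasonal peak, "forest" pixels stay high and flat.
rng = np.random.default_rng(42)
t = np.linspace(0, 1, 23)                       # 23 acquisitions in a year
crop = 0.2 + 0.6 * np.exp(-((t - 0.5) ** 2) / 0.02)
forest = np.full_like(t, 0.75)

X = np.vstack([
    crop + rng.normal(0, 0.05, (200, t.size)),
    forest + rng.normal(0, 0.05, (200, t.size)),
])
y = np.array([0] * 200 + [1] * 200)             # 0 = crop, 1 = forest

# Train a Random Forest, one of the traditional algorithms the API supports.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)            # high on this easy toy problem
```

In the ML API, the equivalent of `fit`/`score` is expressed as openEO processes, which is what makes the workflow, its parameters and its provenance reproducible across languages.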

Speakers:


  • Brian Pondi - Institute for Geoinformatics, University of Münster
  • Rolf Simoes - OpenGeoHub Foundation
Add to Google Calendar

Monday 23 June 13:37 - 13:57 (EO Arena)

Demo: D.03.23 DEMO - Real-time Collaboration for GIS Workflows with JupyterGIS

Abstract:
This demonstration will showcase JupyterGIS, an innovative web-based, collaborative geospatial platform that integrates JupyterLab with GIS tools, enabling seamless real-time editing and visualization of geospatial data. Attendees will see how JupyterGIS supports collaborative workflows, geospatial analysis, and integration with QGIS files, raster/vector layers, and Python scripting—all within a cloud-based environment.

Objective:
To highlight how JupyterGIS enhances collaborative Earth observation workflows, providing an interactive and extensible environment for GIS users working with satellite data, spatial analysis, and real-time geospatial collaboration.

Format:
  • A live demonstration showcasing JupyterGIS in action, including:
- Importing and visualizing Earth observation data (GeoTIFF, GeoJSON, Shapefiles).
- Real-time collaborative editing and annotations.
- Integration with QGIS and Python scripting for advanced analysis.
  • Limited Q&A to address audience questions on applications and deployment.

Duration:
20-minute session at the ESA booth.

Key Takeaways for Attendees:
  • Understand the key capabilities of JupyterGIS for collaborative geospatial workflows.
  • See how it integrates with QGIS and Python to streamline Earth observation data processing.
  • Learn how to access and use JupyterGIS without installation, directly in a browser.
  • Give your feedback and help shape the direction of future developments.

Speaker:


  • Anne Fouilloux - Simula

Monday 23 June 14:00 - 15:30 (Room 1.14)

Session: F.02.08 Advancing Research and Development Through European-African Collaboration: EO AFRICA - PART 1

EO AFRICA stands for African Framework for Research Innovation, Communities and Applications in Earth Observation. EO AFRICA facilitates the sustainable adoption of Earth Observation and related space technology in Africa, following an African user-driven approach with a long-term (>10 years) vision for the digital era in Africa.

Coordinated by the EO AFRICA R&D Facility, several African-European R&D tandem projects collaborate, creating an active research community and innovation processes for the continuous development of EO capabilities in Africa.

This session aims to present EO AFRICA, a successful example of cooperation between ESA and the African Union Commission (AUC), by focusing on preliminary results from the ongoing projects selected at the end of 2024.

This session will complement an existing peer-review session (Harnessing the Power of Remote Sensing for Research and Development in Africa) with insights from projects that had not yet started by the close of the call for abstracts, and will feature one or two VIPs introducing the session.

Moderators:


  • Nelly-Helen N. Ebruka - The University of Manchester
  • Zoltan Szantoi - ESA

Panelists:


  • Dr. Beatrice Asenso Barnieh - University of Energy and Natural Resources, Ghana - EO AFRICA African Research Fellow
  • Prof. Kamal Labbassi - Chouaib Doukkali University, Morocco – President of the African Association of Remote Sensing of the Environment (AARSE)
  • Dr. Baba-maaji Rakiya - Strategic Space Applications - Deputy Director, NASRDA - Nigeria
  • Prof. Dr. Abel Ramoelo - Executive Director | Earth Observation Programme | South African National Space Agency (SANSA), Pretoria, South Africa
  • Dr Mahaman Bachir Saley - Senior Scientific Officer, African Union Commission - Lecturer and Researcher at Université Félix Houphouët-Boigny de Cocody, Abidjan

Monday 23 June 14:00 - 15:30 (Room 0.14)

Session: A.08.14 Enhancing Cooperation between EC and ESA Ocean Science projects

To address the many challenges society is facing at the onset of this century, a significant collaborative effort and an integrated approach to science are needed, in which Earth Observation satellite data, in-situ and citizen observations, advanced modelling capabilities, interdisciplinary research and new technologies are essential elements to be used in synergy. Sharing this vision, in January 2020 the European Commission and the European Space Agency launched a joint Earth System Science Initiative, aiming to join forces to advance Earth System Science and its contribution to responding to the many global challenges. Several priority topics in Ocean Science have been identified (Ocean Health and Biodiversity, Land-Ocean Interactions, Ocean Carbon, Ocean Dynamics) and a set of flagship projects has been launched under both the EC Horizon Europe and ESA FutureEO programs.
The objective of this networking session is to give the different EC and ESA projects an opportunity to exchange on project progress and plans, data, and results, including demonstration cases, and to discuss how to further avoid duplication, enhance synergies, and take other actions to advance knowledge. Strategies and actions to enhance collaboration on joint communication and engagement with stakeholders and the public will also be discussed.

Speakers:


  • Victor Vicente Martinez - PML
  • Pierre Gernez - University of Nantes
  • Jamie Shutler - University of Exeter
  • Rosalia Santoleri - CNR-ISMAR
  • Angela Landolfi - CNR-ISMAR
  • Yolanda Sagarminaga - AZTI
  • Vagelis Spyrakos - STIR
  • Fabiola Silva - Colab Atlantic
  • Alexander Hayward - DMI
  • Rafael Goncalves-Araujo - DTU
  • Artur Palacz - IOPAN
  • Gyde Kruger - DHI
  • Bede Davies - University of Nantes
  • Simon Oiry - University of Nantes
  • Gemma Kulk - PML

Monday 23 June 14:00 - 15:30 (Hall G1)

Session: C.06.09 CAL/VAL towards future VNIR/SWIR imaging spectroscopy

Current scientific and commercial VNIR/SWIR imaging spectroscopy missions (HISUI, PRISMA, DESIS, EMIT, EnMAP, Tanager-1) are revolutionizing the Earth observation sensing horizon. They are paving the way for future hyperspectral global mapping missions like CHIME and SBG. The key to the transferability and generalizability of information between missions is a comparable and transparent calibration and validation scheme for the currently running and future missions. Here, systematic sources of error must be identified and excluded based on well-balanced and well-defined anchor points, including propagation and budgeting of uncertainties. Hence, consistent use of protocols between missions and the expansion of synergies are crucial.
The session will provide an overview of different protocols, approaches, efforts, and findings and identify and explain bottlenecks, limitations, and successful and promising approaches and synergies. A complementary and holistic strategy will be derived and developed based on the individual lessons learned, directly applicable to the increasing demands of future VNIR/SWIR missions.

Monday 23 June 14:00 - 15:30 (Hall G1)

Presentation: Inclusion of the AERONET-OC sites into the RadCalNet framework for Cal/Val activities of the multi and hyperspectral missions over water

Authors: Dr. Tristan Harmel, Béatrice Berthelot, A B, Marc Bouvet
Affiliations: Magellium, NPL, ESA
The main objective of the RadCalNet program is to provide representative top-of-atmosphere (TOA) upwelling radiance for validation and calibration (Cal/Val) of optical satellite sensors. To gain in representativeness, this project proposes a thorough analysis of oceanic and coastal sites as a new contribution of fiducial data within the RadCalNet program. The core principle of Cal/Val relies on ground radiometric measurements of the bottom-of-atmosphere reflectance, exploited through radiative transfer computations to simulate the TOA satellite measurements. Other information needed to perform these simulations concerns the atmospheric optical properties, such as the spectral aerosol optical properties and absorbing gas concentrations. At TOA, the upwelling radiance originates from scattering of sunlight modulated by absorption of the optically active material within the Earth-atmosphere system. Above land, the computations are based on knowledge of the bidirectional reflectance distribution function at the bottom-of-atmosphere level. Above water, part of the sun and sky light is reflected back to the atmosphere, and another part is transmitted through the air-water interface. The latter is further altered by scattering and absorption by the in-water constituents, resulting in a typical water-leaving radiance signature. The reflected part depends strongly on the water surface rugosity, that is, the wave slope distribution, including the millimetric and centimetric wavelets known as capillaries. We propose here to investigate these different aspects of TOA radiance simulation above water based on the AERONET-OC network, which consists of many sites, mostly located in coastal waters, providing high-frequency water-leaving radiance observations in addition to the core AERONET parameters on aerosol optical properties and water vapor content.
First, comprehensive radiative transfer computations will be performed based on a modified version of the OSOAA code coupled with hyperspectral absorption computations based on libRadtran. Based on these simulations, a specific tool has been developed to provide the TOA signal for any given observing geometry from the AERONET-OC dataset. This tool is exploited to quantitatively assess the performance of the radiometric absolute calibration and stability of the Sentinel-2 and Sentinel-3 sensors. In addition, hyperspectral interpolation of the multispectral AERONET-OC data is performed to evaluate the performance of the hyperspectral sensors on board the PRISMA and EnMAP missions, in preparation for the performance monitoring of the new generation of satellites (PACE, CHIME, SBG). Based on these results, a roadmap is provided for long-term exploitation of aquatic sites for Cal/Val activities.
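The decomposition of the TOA signal described in this abstract can be illustrated with a toy scalar model. Everything below is an illustrative assumption (the symbols, the combination formula, and the numeric values); the actual study uses coupled OSOAA/libRadtran vector radiative transfer, which this sketch does not reproduce.

```python
# Toy scalar decomposition of TOA reflectance over water: atmospheric path
# signal, surface-reflected (glint) light, and the water-leaving component
# transmitted through the atmosphere. Illustrative only.

def toa_reflectance(rho_path, rho_glint, rho_w, t_down, t_up, t_gas):
    """Combine the contributions into a simple TOA reflectance estimate.

    rho_path  : atmospheric (Rayleigh + aerosol) path reflectance
    rho_glint : sun/sky light reflected at the air-water interface
    rho_w     : water-leaving reflectance (in-water scattering/absorption)
    t_down    : atmospheric transmittance, sun -> surface
    t_up      : atmospheric transmittance, surface -> sensor
    t_gas     : gaseous absorption transmittance
    """
    return t_gas * (rho_path + t_up * (rho_glint + t_down * rho_w))

# Example with illustrative values for a clear maritime atmosphere near 550 nm
rho = toa_reflectance(rho_path=0.08, rho_glint=0.005, rho_w=0.02,
                      t_down=0.90, t_up=0.92, t_gas=0.98)
print(round(rho, 4))
```

Note how small the water-leaving term is relative to the atmospheric path term, which is why accurate knowledge of aerosol and gas properties from AERONET-OC matters so much for Cal/Val over water.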

Monday 23 June 14:00 - 15:30 (Hall G1)

Presentation: Calibration Insights from the DESIS Instrument: Lessons from Over Six Years in Orbit

Authors: Emiliano Carmona Flores, Dr. Martin Bachmann, Dr. Raquel De los Reyes, Dr. Uta Heiden, Dr. Kevin Kühl, Rupert Müller
Affiliations: German Aerospace Center (DLR), Earth Observation Center (EOC)
The DLR Earth Sensing Imaging Spectrometer (DESIS) aboard the International Space Station (ISS) has been delivering high-quality hyperspectral data to both the scientific community and commercial users since it began operations in September 2018. With an increasing number of spaceborne hyperspectral instruments in use, DESIS data remain highly relevant due to their superior spectral resolution (2.55 nm) and the acquisition of more than 300,000 product tiles over more than six years of operation. Calibration is a fundamental aspect of any instrument in operation, and even more so when operations extend over several years. The calibration concept followed for DESIS uses the on-board LED calibration unit for spectral calibration and vicarious calibration for the radiometric part. The data obtained from the calibration unit show that the average spectral performance of DESIS is very stable over long periods of time. However, they also show measurement-to-measurement variability. Part of this variability is related to temperature gradients between the optical elements and can be corrected during data processing. Other variations appear random in nature and are not corrected (RMS ~0.1 nm), contributing to the measurement uncertainties. Radiometric calibration, on the other hand, is based on vicarious calibration using RadCalNet sites as reference, after flat-fielding of the radiometric response using uniform scenes. Our results show that the radiometric calibration of DESIS can change by 3.4% over one year above 500 nm. Below 500 nm, significantly larger variability is observed, increasing as the wavelengths become shorter. During the first three years of operations, we observed a fast decrease in sensitivity below 500 nm, followed by a short period of stability and then a rapid increase that has recovered an important part of the sensitivity lost during the first three years.
Geometric calibration is conducted by comparing ground control points (GCPs) automatically extracted from DESIS images with those from reference images of higher geometric accuracy. When a large enough number of matching points are identified between the reference and DESIS images, an improved geometric sensor model is derived, achieving a root mean square error (RMSE) of approximately 21 meters in both the north and east directions. However, when matching is unsuccessful, the RMSE typically increases to around 300 meters in the transverse direction and 500 meters in the longitudinal direction. This contribution summarizes the calibration results obtained over more than six years of the DESIS hyperspectral mission in orbit. Additionally, we provide updates on calibration activities and guidelines for users interested in the accuracy and reliability of DESIS data calibration.

Monday 23 June 14:00 - 15:30 (Hall G1)

Presentation: Hyperspectral is not just multispectral with lots of bands – use of WATERHYPERNET for validation of hyperspectral satellite missions

Authors: Kevin Ruddick, Agnieszka Bialek, Vittorio Brando, Alexandre Corizzi, Pieter De Vis, Ana Dogliotti, David Doxaran, Joel Kuusk, Quinten Vanhellemont, Dieter Vansteenwegen, Matthew Beck, Kenneth Flight, Anabel Gammaru, Claudia Giardino, Luis Gonzales Vilas, Kaspars Laizans, Héloise Lavigne, Francesca Ortenzio, Pablo Perna, Estefania Piegari, Denis Pailler, Lucas Rubinstein, Morven Sinclair, Dimitry Van der Zande
Affiliations: Royal Belgian Institute for Natural Sciences (RBINS), National Physical Laboratory (NPL), Consiglio Nazionale delle Ricerche (CNR-ISMAR), Laboratoire Océanographique de Villefranche, Sorbonne Université (SU/LOV), Instituto de Astronomía y Física del Espacio, Consejo Nacional de Investigaciones Científicas y Técnicas (IAFE, CONICET/UBA), Tartu University (TU), Flanders Marine Institute (VLIZ), Consiglio Nazionale delle Ricerche (CNR-IREA)
A new generation of hyperspectral satellite missions raises the potential for measurement of new aquatic processes and parameters, including phytoplankton groups, i.e. more than just chlorophyll a. Algorithms that exploit spectral curvature/derivatives/anomalies are promising, and many hyperspectral algorithms have been proposed for phytoplankton groups. However, truly hyperspectral algorithms may be quite different from multispectral algorithms in their sensitivity to satellite measurement errors. For example, spectrally smooth errors (aerosol correction, broadband calibration, sunglint) have negligible impact on the second derivative of reflectance, whereas spectrally rough errors (wiggles, random noise, interband calibration) are greatly amplified in the second derivative of reflectance. This is a new paradigm for algorithm design, atmospheric correction and radiometric validation, since accuracy of the second derivative of reflectance may be more important than that of the reflectance itself. The WATERHYPERNET component of HYPERNETS delivers hyperspectral water-leaving radiance reflectance measurements continuously at multiple sites for validation of satellite missions, both multispectral and hyperspectral. Satellite validation by WATERHYPERNET will be demonstrated here for hyperspectral missions (EnMAP, PRISMA, PACE, etc.) with specific focus on the new challenges of hyperspectral algorithms, including the potentially huge impact of spectrally rough errors. The status of the WATERHYPERNET network will be updated, including recent improvements made in data processing and quality control of the in situ measurements. The use of clear sky modelling for both downwelling irradiance and clear sky radiance allows better identification of suboptimal measurement conditions as well as providing an indication of any change in radiometer sensitivity during deployment.
Use of the near-infrared Similarity Spectrum allows checking the internal consistency of water-leaving radiance reflectance spectra and identification of measurement problems such as sunglint contamination and/or the presence of foreign objects. Comparison of WATERHYPERNET measurements made at different relative azimuth angles, together with their estimated uncertainties, further strengthens quality control of the in situ measurements, supporting their use for radiometric validation of both multispectral and hyperspectral satellite missions.
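The smooth-versus-rough error argument in this abstract can be checked numerically: a constant broadband bias cancels exactly in a finite-difference second derivative, while small band-to-band noise is amplified. The synthetic spectrum and error magnitudes below are illustrative assumptions, not HYPERNETS data.

```python
import random

# Compare how a spectrally smooth error and a spectrally rough error
# propagate into the second derivative of a synthetic reflectance spectrum.

def second_derivative(spectrum, step=1.0):
    """Central finite-difference second derivative along the band axis."""
    return [(spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]) / step**2
            for i in range(1, len(spectrum) - 1)]

# Synthetic smooth "reflectance" spectrum over 50 bands
spectrum = [0.05 + 0.02 * (i / 50.0) ** 2 for i in range(50)]

smooth_error = [s + 0.01 for s in spectrum]  # broadband bias (e.g. calibration)
random.seed(0)
rough_error = [s + random.gauss(0, 0.001) for s in spectrum]  # interband noise

d2_ref = second_derivative(spectrum)
d2_smooth = second_derivative(smooth_error)
d2_rough = second_derivative(rough_error)

# A constant offset cancels in (x[i-1] - 2*x[i] + x[i+1]); noise does not.
max_dev_smooth = max(abs(a - b) for a, b in zip(d2_ref, d2_smooth))
max_dev_rough = max(abs(a - b) for a, b in zip(d2_ref, d2_rough))
print(max_dev_smooth, max_dev_rough)
```

Even though the rough error (sigma 0.001) is ten times smaller than the smooth bias (0.01), it dominates the second derivative, which is the quantity hyperspectral derivative algorithms consume.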

Monday 23 June 14:00 - 15:30 (Hall G1)

Presentation: Impact of polarization on radiative transfer simulation for vicarious calibration.

Authors: Claudia Emde, Nicolae Marton, Dr Vincent Leroy, Mr Nicolas Misk, Dr Yves Govaerts
Affiliations: Rayference
Vicarious calibration relies on references based on different types of pseudo-invariant Earth targets, such as bright deserts, deep convective clouds, and sun glint, each offering specific advantages. Radiative Transfer Models (RTMs) play a critical role in utilizing these targets as calibration references. These models rely on several approximations that can impact the accuracy of the results. Among these approximations, polarization is often neglected. However, multiple studies have already demonstrated scalar models to be a poor approximation for pure Rayleigh scattering, finding errors larger than 10% for phase angles close to 0 and 90 degrees (Mishchenko et al., 1994; Kotchenova et al., 2006). For aerosols and clouds, the error also depends on particle size and shape (Emde and Mayer, 2019). Smaller particles such as biomass burning aerosol were found to show differences of up to 5% at 670 nm (Kotchenova et al., 2006). On the other hand, larger cloud particles that are strong forward scatterers rarely show errors larger than 1% (Hansen, 1971). The effect of neglecting polarization becomes nuanced when molecular, aerosol, and cloud scatterers are mixed, since the optical thickness of each component depends on wavelength (Kotchenova et al., 2006). Similarly, adjacency effects caused by 3D clouds also attenuate errors for clear-sky observations and exacerbate them for cloud observations (Emde and Mayer, 2019). In most cases, these studies remain theoretical and might not correspond to the observation conditions occurring when exploiting satellite data. The impact of polarization on the accuracy of models is thus highly dependent on the composition of the scene being simulated, including its optical properties, geometry, and wavelength. In this study, we extend this series of studies to a realistic scenario of a satellite time series over a Pseudo-Invariant Calibration Site (PICS).
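One intuition for why the cited Rayleigh errors peak near certain geometries: the degree of linear polarization of singly Rayleigh-scattered light is largest at a 90-degree scattering angle, which is exactly where a scalar model discards the most information. The snippet below evaluates the standard single-scattering Rayleigh polarization formula for illustration; it is not the Eradiate computation used in the study.

```python
import math

# Degree of linear polarization for single Rayleigh scattering:
# P(theta) = (1 - cos^2 theta) / (1 + cos^2 theta)

def rayleigh_degree_of_polarization(scattering_angle_deg):
    """Polarization fraction of singly Rayleigh-scattered unpolarized light."""
    c = math.cos(math.radians(scattering_angle_deg))
    return (1 - c * c) / (1 + c * c)

for angle in (0, 45, 90, 135, 180):
    print(angle, round(rayleigh_degree_of_polarization(angle), 3))
```

The polarization fraction is zero in the forward and backward directions and reaches 100% at 90 degrees, so scenes dominated by side-scattering geometries are where scalar RTMs are most suspect.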
Using Eradiate, an open-source 3D Monte Carlo radiative transfer model, we compare polarized and scalar radiances of simulated scenes informed by real satellite orbits used for vicarious calibration. This study aims to provide a more concrete quantification of the errors induced by neglecting polarization for different types of PICS and acquisition geometries. Recommendations will be formulated on when it is necessary to perform polarized simulations according to the desired radiometric accuracy.

References:
  • Emde, C. and Mayer, B. (2018) ‘Errors induced by the neglect of polarization in radiance calculations for three-dimensional cloudy atmospheres’, Journal of Quantitative Spectroscopy and Radiative Transfer, 218, pp. 151–160. doi:10.1016/j.jqsrt.2018.07.001.
  • Hansen, J.E. (1971) ‘Multiple scattering of polarized light in planetary atmospheres part II. Sunlight reflected by terrestrial water clouds’, Journal of the Atmospheric Sciences, 28(8), pp. 1400–1426. doi:10.1175/1520-0469(1971)028<1400:msopli>2.0.co;2.
  • Kotchenova, S.Y. et al. (2006) ‘Validation of a vector version of the 6S radiative transfer code for atmospheric correction of satellite data. Part I: Path radiance’, Applied Optics, 45(26), p. 6762. doi:10.1364/ao.45.006762.
  • Mishchenko, M.I., Lacis, A.A. and Travis, L.D. (1994) ‘Errors induced by the neglect of polarization in radiance calculations for Rayleigh-scattering atmospheres’, Journal of Quantitative Spectroscopy and Radiative Transfer, 51(3), pp. 491–510. doi:10.1016/0022-4073(94)90149-x.

Monday 23 June 14:00 - 15:30 (Hall G1)

Presentation: Acquisition of EnMAP and PRISMA scenes in close similarity conditions

Authors: Michael Bock, Dr. Emiliano Carmona, Vera Krieger, Laura LaPorta, Dr Nicole Pinnel, Maximilian Brell, Prof. Dr. Sabine Chabrillat, Dr Karl Segl, Ettore Lopinto, Patrizia Sacco
Affiliations: DLR - Deutsches Zentrum für Luft- und Raumfahrt, GFZ - GeoForschungsZentrum, ASI - Italian Space Agency
EnMAP (Environmental Mapping and Analysis Program) is a German hyperspectral satellite mission that monitors and characterizes Earth's environment on a global scale. EnMAP measures geochemical, biochemical and biophysical variables, providing information on the status and evolution of terrestrial and aquatic ecosystems. PRISMA (PRecursore IperSpettrale della Missione Applicativa) is an Italian EO hyperspectral mission funded by ASI as a pre-operational and technology demonstrator, with a focus on the space qualification of a PAN/HYP payload and the development and validation of PAN/HYP products. Launched on 1 April 2022 and 22 March 2019 respectively, these two satellites are currently feeding the hyperspectral community with a large volume of VNIR-SWIR images acquired worldwide, stimulating scientific research as well as exploitation for innovative applications. Although the acquisition of multi-satellite image couples in geometrically (image centers and viewing angles) and temporally close similarity conditions (usually called matchups [1]) is known to be useful for cross-validation/harmonization [1] and, beyond the evaluation of multi-mission data normalization functions, for possibly extending the available pool of hyperspectral products, little or no effort has been made to purposely build a large set of multi-mission matchups. Until now, the acquisition of hyperspectral multi-sensor scenes in similarity conditions has been only coincidental and limited to special campaigns or events, like the AVIRIS campaigns in Europe during the summer of 2021 in support of the future Copernicus CHIME mission [2].
Moreover, since the EnMAP and PRISMA orbital characteristics are different, building a large matchup set cannot be systematically realized by a pure search for similar images in both mission catalogues (as described in [1]) but requires some degree of coordination between the acquisitions of the involved missions, hence dedicating some resources of both missions to this specific scope, well in line with the DLR-ASI collaboration agreement on hyperspectral data exploitation [3]. The approach to generating EnMAP-PRISMA matchups is based on forecasting both sensors' image-center times, positions, and off-nadir angles using a Python astronomy package [4] along a time span close to the two missions' orbit cycles, followed by a many-to-many search for similar acquisition conditions respecting the two missions' acquisition constraints. Started in July 2024, up to now (mid November 2024) we have generated 24 EnMAP-PRISMA matchups with image-center geometric distances below 5 km, an average time distance of 32 min, and off-nadir angles deviating at most 3 degrees from the nadir condition, with half of these showing cloudiness below 10%. Following these promising results, we plan to continue the coordinated multi-mission acquisitions, analyze in more depth (and justify) any possible discrepancies between the characteristics of image couples (radiometry, geometric position), and release the matchup information (geometric position, time, and cloudiness level) to the community. We are also currently exploring the inclusion of the DLR DESIS and NASA EMIT missions in the matchup generation.

References:
  • [1] Block, T., Embacher, S., Merchant, C. J., and Donlon, C.: High-performance software framework for the calculation of satellite-to-satellite data matchups (MMS version 1.2), Geosci. Model Dev., 11, 2419–2427, https://doi.org/10.5194/gmd-11-2419-2018, 2018.
  • [2] Genesio L., Boschetti M., Braga F., Bresciani M., Carotenuto F., Cogliati S., Giardino C., Gioli B., Lopinto E., Panigada C., Pompilio L., Sacco P., Miglietta F.: Simultaneous hyperspectral PRISMA and AVIRIS-NG images, https://earth.esa.int/living-planet-symposium-2022-presentations/25.05.Wednesday/Addis_Abeba/1540-1720/02_Genesio.pdf
  • [3] Implementing Agreement between Deutsches Zentrum für Luft- und Raumfahrt (DLR) and the Agenzia Spaziale Italiana (ASI) concerning the collaboration on activities in support of the exploitation of space hyperspectral earth observation missions' data and associated technologies, signed on 22/09/2022.
  • [4] Brandon Rhodes: Skyfield - Elegant Astronomy for Python, https://rhodesmill.org/skyfield/
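The matchup search described above reduces to a screening predicate over forecast image centers. The sketch below applies the thresholds stated in the abstract (5 km center distance, roughly half-hour time separation, off-nadir angles within 3 degrees); the candidate values and helper names are illustrative assumptions, and the Skyfield-based orbit forecasting that would produce such candidates is not reproduced here.

```python
import math
from datetime import datetime

# Screen forecast image centers of two missions for matchup conditions.

def ground_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two image centers (haversine formula)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_matchup(a, b, max_km=5.0, max_minutes=35.0, max_off_nadir_deg=3.0):
    """a, b: dicts with lat, lon, time (datetime) and off_nadir (degrees)."""
    return (
        ground_distance_km(a["lat"], a["lon"], b["lat"], b["lon"]) <= max_km
        and abs((a["time"] - b["time"]).total_seconds()) / 60.0 <= max_minutes
        and abs(a["off_nadir"]) <= max_off_nadir_deg
        and abs(b["off_nadir"]) <= max_off_nadir_deg
    )

# Illustrative candidate pair (not real forecast data)
enmap = {"lat": 48.20, "lon": 16.37, "off_nadir": 1.2,
         "time": datetime(2024, 7, 10, 9, 30)}
prisma = {"lat": 48.22, "lon": 16.40, "off_nadir": 2.5,
          "time": datetime(2024, 7, 10, 10, 2)}
print(is_matchup(enmap, prisma))
```

In the many-to-many search, this predicate would simply be evaluated over the cross product of both missions' forecast acquisition lists, with cloudiness filtering applied afterwards.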

Monday 23 June 14:00 - 15:30 (Hall G1)

Presentation: Commissioning and On-orbit Calibration Validation for Planet’s Tanager-1 VSWIR Imaging Spectrometer

Authors: Keely Roth, Geert Barentsen, Justin Haag, Hannah Bourne, Joseph Harris, Christina Henze, Kevin Wurster, Alex Annejohn, Eric Peters, Adrian Gonzalez, Norberto Hernández Castilla, Venkataraman Krishnaswami, Saif Aati, Sara Bahloul, Trevor Mcdonald, Katie Salvaggio, Mark Keremedjiev, David Thompson, Robert Green, Riley Duren
Affiliations: Planet Labs, PBC, NASA Jet Propulsion Laboratory, Carbon Mapper
In August 2024, Planet launched Tanager-1, a space-based, full-VSWIR imaging spectrometer and the company's first hyperspectral instrument. Tanager-1 is the first in a constellation of imaging spectrometers that support global methane and CO2 detection and mitigation through the Carbon Mapper Coalition. The coalition is an ambitious, philanthropically funded, public-private partnership which seeks to map ~90% of global point-source methane super-emitters. It includes the non-profit Carbon Mapper, Planet Labs, NASA Jet Propulsion Laboratory, Arizona State University, the University of Arizona, the High Tide Foundation, the California Air Resources Board, and Rocky Mountain Institute. By bringing together a non-profit organization, regulatory bodies, and public and private science and technology providers, the coalition aims to leverage the strengths of each of its members to drive mission success. To date, the mission has already detected hundreds of methane and CO2 plumes and has been used to successfully mitigate large methane leaks. Tanager-1 consists of a full-VSWIR Dyson-type imaging spectrometer with a three-mirror telescope designed by NASA JPL and operated on board Planet's common smallsat bus. Tanager-1 collects spectral data from ~380-2500 nm in over 420 channels, each ~5 nm wide, with a target 30 m spatial resolution. The satellite operates in a sun-synchronous low Earth orbit and is capable of pointing up to 30 degrees off-nadir. We also have the ability to maneuver the payload with "forward looks" and "back nods" to increase the effective integration time and therefore the signal-to-noise ratio (i.e., sensitivity modes). Furthermore, we have demonstrated the ability to automate sunglint tasking to increase methane detection sensitivity over dark water targets. Our target SNR is 300-600 under reference radiance conditions in the SWIR 2300 nm methane absorption feature.
Tanager-1 was calibrated and characterized in the lab prior to launch following NASA JPL protocols for imaging spectrometers. This included extensive spectral, spatial, and radiometric measurements. On-orbit, we have also followed NASA JPL calibration validation methods, specifically those used for the EMIT instrument on the ISS. In this presentation, we share the calibration validation results of our on-orbit commissioning phase for Tanager-1. We demonstrate improvements and updates made to our calibrations, share ongoing quality monitoring results, and highlight our core imagery data products (calibrated radiance and surface reflectance). We also share early results of cross-comparisons with select public imaging spectrometer missions and discuss interoperability challenges and opportunities between public and commercial sensors.

Monday 23 June 14:00 - 15:30 (Room 1.85/1.86)

Session: A.03.01 Global Carbon Budgets and Earth Observation - PART 1

Despite decades of effort by the international community, there remain significant differences between the global and regionally aggregated budgets of the key greenhouse gases (GHGs): carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). This discrepancy relates partially to the complexity of the carbon cycle and its interactions with other cycles, but also to differences in the methods and data used to generate the budgets.

The principal budget calculations are also out of synchrony. The Global Carbon Budget (CO2) is produced annually, the global methane and N2O budgets on a 3-4 year cycle, and the Regional Carbon Cycle Assessment and Processes (RECCAP) every five years. The challenge for the international community is threefold:

• to align the budget calculations in time
• to develop regional budget assessments for the three GHGs on an annual cycle
• to reconcile the global and regional assessments across the three GHGs.

Fundamental research is needed to respond to these challenges, especially for the terrestrial carbon component. Space-based measurements of atmospheric GHG concentrations from OCO-2, OCO-3 and Sentinel-5P, and observations of both the terrestrial and ocean carbon cycle, are increasing in frequency, range, and detail. Allied with the planned launches of key carbon-relevant satellites (e.g. FLEX, BIOMASS, NISAR), EO may provide a unique capability for a dynamic reconstruction of the carbon cycle at unprecedented scales in space and time.

This session is dedicated to how EO can be used to help answer these three challenges, building on activities in collaboration with NASA and the European Commission.

Monday 23 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Data.GEO-TREES - a global harmonised in-situ data repository for forest biomass map validation

Authors: Dmitry Schepaschenko, Jerome Chave, Martina Dürauer, Anto Subash, Klaus Scipal
Affiliations: International Institute for Applied Systems Analysis, Université Toulouse 3 – Paul Sabatier, European Space Agency
Monitoring forest biomass and its changes is critical for understanding climate and ecosystem dynamics. Recently, innovative space-based instruments specifically designed for this task have been launched, with more expected in the near future (e.g. the ESA BIOMASS mission). These missions rely on ground-based data for algorithm calibration and product validation. GEO-TREES (https://geo-trees.org/) is an international cooperation designed to maintain a global in-situ forest biomass database to support Earth observation and foster investment in field-based observations and research. Its mission is to bridge the gap between the remote sensing (RS) community and ecological and forest inventory experts, creating mutual benefits. GEO-TREES is an inclusive initiative, inviting collaboration from diverse networks and teams. Its online database, accessible for plots with author permission, provides essential information, including plot coordinates, canopy height, and above-ground biomass of trees, covering areas of 0.25 ha or larger. Larger plots are subdivided into smaller units to capture variability in height and biomass. Adhering to the CEOS Aboveground Biomass Land Product Validation protocol, GEO-TREES aims to establish a network of biomass reference measurement (BRM) sites, also known as Fiducial Reference Measurements (FRM). Core BRM sites meet rigorous measurement standards, including (1) a comprehensive tree inventory across ten 1-ha permanent sample plots, (2) airborne LiDAR scanning over 1,000 ha, (3) terrestrial LiDAR scanning across three hectares, and supplementary environmental information. The Data.GEO-TREES database is essential for validating and calibrating satellite observations and forest models. Comparison of plot biomass data with existing global and regional maps (e.g. CCI Biomass, NASA JPL, ICESat-2) reveals significant uncertainties in biomass estimation, highlighting the importance of initiatives like GEO-TREES in improving the accuracy and reliability of biomass observation. This study is supported by the European Space Agency's FRM4Biomass project (RFP/3-18237/23/I-EF-bgh).

Monday 23 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Modernizing the Global Stocktake: Introducing a new Satellite-based AI-enabled model for monitoring and reporting GHG emissions and removals for the land sector

Authors: Sassan Saatchi, Yan Yang, Zhihua Liu
Affiliations: JPL/CALTECH, CTrees.org
Accurately assessing net carbon dioxide emissions from global land carbon changes and understanding the role of land in climate mitigation are critical yet challenging tasks, fraught with significant uncertainties. These uncertainties are evident in two primary aspects of land carbon flux data: (1) the considerable difference (exceeding 6 GtCO2e yr-1) between national greenhouse gas inventories (NGHGIs) reported to the UNFCCC and the LULUCF (Land Use, Land Use Change and Forestry) book-keeping models used by the IPCC and assessed by the Global Carbon Project (GCP), and (2) the substantial variation (over 3 GtCO2e yr-1) among the three book-keeping models employed in the GCP's land use emissions estimates. As climate policy shifts from commitments to implementation, reconciling these differences before the next global stocktake in 2028 is imperative. Furthermore, establishing a reliable jurisdictional Measurement, Reporting, and Verification (MRV) system for land carbon is crucial to enable countries to effectively evaluate their progress towards national climate targets under the Paris Agreement. Here, we report results from a new geospatial carbon monitoring system that combines available ground inventory data with satellite observations of land use, forest structure, and biomass to map and monitor long-term (2000-present) land carbon stock changes. The results are compared with the national GHG reports to the UNFCCC, focusing on three sources of uncertainty impacting the area of forest cover and land use, the magnitude of carbon stocks in land, and the estimated carbon fluxes in terms of emissions and removals. The new system provides a systematic observation-based approach, along with uncertainty assessments, to localize estimations of emissions and removals of carbon from land use activities and to improve estimates of land sinks and sources. 
The geospatial data and estimates from this new observation-based, AI-enabled model are integrated into a jurisdictional MRV system to significantly improve the global stocktake, inform national carbon management policies, and bolster climate mitigation efforts, including initiatives like REDD+ and nature-based solutions.

Monday 23 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: European carbon budget: closing the knowledge gaps

Authors: Nora Linscheid, Ana Bastos, Philippe Ciais, Chiara Aquino, Manuela Balzarolo, Martin Brandt, Maria Vincenza Chiriacò, Emilio Chuvieco, Maoyuan Feng, Rasmus Fensholt, Mélanie Juillard, Amin Khairoun, Katja Kowalski, Siyu Liu, Pierre Regnier, Cornelius Senf, Alba Viana-Soto, Yidi Xu
Affiliations: Institute for Earth System Science and Remote Sensing, University of Leipzig, Laboratoire des Sciences du Climat et de l'Environnement, LSCE/IPSL, CEA-CNRS-UVSQ, Université Paris-Saclay, CMCC Foundation - Euro-Mediterranean Center on Climate Change, Department of Geosciences and Natural Resource Management, University of Copenhagen, University of Alcalá, Environmental Remote Sensing Group, Citepa, Technical University of Munich, School of Life Sciences, Earth Observation for Ecosystem Management, Université Libre de Bruxelles, Biogeochemistry and Modelling of the Earth System
European forests have recently shown an alarming decline in their carbon sink, thought to result from a combination of a slight increase in harvest and a strong increase in natural mortality: severe droughts, bark beetle infestations, fires in southern Europe, and perhaps a slow-down in growth, although European forests still show a dominant greening trend (Korosuo et al. 2023). For CH₄, while anthropogenic emissions are declining, there is the question of increased natural emissions from wetlands under warming (Zhang et al., 2023; Lauerwald et al., 2024). Although Europe is an extensively studied region, several important knowledge gaps remain in European greenhouse gas budgets. In the highly fragmented European landscapes, it is key to capture disturbance and heterogeneity at small scales, which requires very high-resolution data on tree cover and biomass. Recent developments now make it possible to map tree mortality at high resolution (Seidl and Senf, 2024) and lake CH₄ emissions at 30 m resolution. The ESA Climate Space RECCAP2 (RECCAP2-CS) project aims to develop new high-resolution information to further improve EO-based estimates of forest disturbance, land-use change and fire emissions, and to compare these with national greenhouse gas inventory approaches. Here, we will present preliminary results of Phase I of RECCAP2-CS. Specifically, we will discuss how high-resolution datasets on aboveground biomass carbon, forest fuel characteristics, forest disturbances and lakes, in tandem with national inventory datasets, might contribute to a more detailed investigation of the drastic decline of the European forest carbon sink in recent years (Korosuo et al. 2023), the doubling of tree mortality in Europe observed since 1990 (Senf et al., 2018), and the remaining large uncertainties around land-cover and land-use change and fires (Bastos et al., 2022). 
References:
Bastos, A., Ciais, P., Sitch, S., Aragão, L.E.O.C., Chevallier, F., Fawcett, D., Rosan, T.M., Saunois, M., Günther, D., Perugini, L., Robert, C., Deng, Z., Pongratz, J., Ganzenmüller, R., Fuchs, R., Winkler, K., Zaehle, S., Albergel, C., 2022. On the use of Earth Observation to support estimates of national greenhouse gas emissions and sinks for the Global stocktake process: lessons learned from ESA-CCI RECCAP2. Carbon Balance Manage 17, 15. https://doi.org/10.1186/s13021-022-00214-w
Cheng, Y., Oehmcke, S., Brandt, M., Rosenthal, L., Das, A., Vrieling, A., Saatchi, S., Wagner, F., Mugabowindekwe, M., Verbruggen, W., Beier, C., Horion, S., 2024. Scattered tree death contributes to substantial forest loss in California. Nat Commun 15, 641. https://doi.org/10.1038/s41467-024-44991-z
Korosuo, A., Pilli, R., Abad Viñas, R., Blujdea, V.N.B., Colditz, R.R., Fiorese, G., Rossi, S., Vizzarri, M., Grassi, G., 2023. The role of forests in the EU climate policy: are we on the right track? Carbon Balance Manage 18, 1–14. https://doi.org/10.1186/s13021-023-00234-0
Lauerwald, R., Bastos, A., McGrath, M.J., Petrescu, A.M.R., Ritter, F., Andrew, R.M., Berchet, A., Broquet, G., Brunner, D., Chevallier, F., Cescatti, A., Filipek, S., Fortems-Cheiney, A., Forzieri, G., Friedlingstein, P., Fuchs, R., Gerbig, C., Houweling, S., Ke, P., Lerink, B.J.W., Li, Wanjing, Li, Wei, Li, X., Luijkx, I., Monteil, G., Munassar, S., Nabuurs, G.-J., Patra, P.K., Peylin, P., Pongratz, J., Regnier, P., Saunois, M., Schelhaas, M.-J., Scholze, M., Sitch, S., Thompson, R.L., Tian, H., Tsuruta, A., Wilson, C., Wigneron, J.-P., Yao, Y., Zaehle, S., Ciais, P., Winkler, K., 2024. Carbon and Greenhouse Gas Budgets of Europe: Trends, Interannual and Spatial Variability, and Their Drivers. Global Biogeochemical Cycles 38, e2024GB008141. https://doi.org/10.1029/2024GB008141
Seidl, R., Senf, C., 2024. Changes in planned and unplanned canopy openings are linked in Europe’s forests. Nat Commun 15, 4741. https://doi.org/10.1038/s41467-024-49116-0
Senf, C., Pflugmacher, D., Zhiqiang, Y., Sebald, J., Knorn, J., Neumann, M., Hostert, P., Seidl, R., 2018. Canopy mortality has doubled in Europe’s temperate forests over the last three decades. Nat Commun 9, 4978. https://doi.org/10.1038/s41467-018-07539-6
Zhang, Z., Poulter, B., Feldman, A.F., Ying, Q., Ciais, P., Peng, S., Li, X., 2023. Recent intensification of wetland methane feedback. Nat. Clim. Chang. 13, 430–433. https://doi.org/10.1038/s41558-023-01629-0

Monday 23 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: X-BASE: terrestrial carbon and water flux products from FLUXCOM-X

Authors: Jacob Nelson, Sophia Walther, Basil Kraft, Gregory Duveiller, Fabian Gans, Ulrich Weber, Weijie Zhang, Zayd Hamdi, Dr. Martin Jung
Affiliations: Max Planck Institute For Biogeochemistry, Institute for Atmospheric and Climate Science, ETH Zurich, Chair of Remote Sensing Technology, Technical University of Munich (TUM), Department of Environmental Science, Policy, and Management, University of California, Berkeley
Mapping in-situ eddy covariance (EC) measurements of terrestrial carbon and water fluxes to the globe is a key method for diagnosing terrestrial fluxes from a data-driven perspective. We describe the first global products (called X-BASE) from a newly implemented up-scaling framework, FLUXCOM-X. The X-BASE products cover the globe at 0.05° spatial resolution for every hour and include estimates of CO₂ net ecosystem exchange (NEE) and gross primary productivity (GPP). Compared to previous FLUXCOM products, the new X-BASE NEE better reconciles the bottom-up EC-based NEE and estimates from top-down atmospheric inversions (global X-BASE NEE is -5.75±0.33 PgC yr-1). The improvement of global NEE was likely only possible thanks to the international effort to improve the precision and consistency of eddy-covariance collection and processing pipelines, as well as to the extension of the measurements to more site-years, resulting in a wider coverage of bio-climatic conditions. However, X-BASE NEE shows low inter-annual variability, which is common to state-of-the-art data-driven flux products and remains a scientific challenge. With 124.7±2.1 PgC yr-1, X-BASE GPP is slightly higher than previous FLUXCOM estimates, mostly in temperate and boreal areas, and temporal patterns agree well with TROPOMI-based SIF. Many further opportunities for development exist. We will outline how the new FLUXCOM-X framework provides the necessary flexibility to experiment, diagnose, and converge to more accurate global flux estimates. Pathways of exploration include methodological choices in the selection and processing of eddy-covariance and satellite observations, their ingestion into the framework, and the configuration of machine learning methods. We further give initial results regarding the impacts of different machine learning models and sensor transitions as we look to move away from a purely MODIS-based model. 
The associated paper detailing the full results can be found here: https://doi.org/10.5194/bg-21-5079-2024. The data are currently available in a variety of formats here: https://gitlab.gwdg.de/fluxcom/fluxcomxdata
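The upscaling idea behind such products can be sketched in a few lines: a regression model is trained on per-site predictors against tower NEE, then evaluated on gridded predictors to produce a map. The data, feature names and the plain linear model below are illustrative stand-ins only (FLUXCOM-X uses flexible machine-learning regressors on real EC and satellite data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-site predictors (e.g. shortwave radiation,
# air temperature, a vegetation index) and the eddy-covariance NEE target.
n_sites = 500
X = rng.uniform(size=(n_sites, 3))                 # [SWin, Tair, NDVI]
nee = -2.0 * X[:, 2] + 0.5 * X[:, 1] + rng.normal(0, 0.1, n_sites)

# Fit a simple linear model at the sites; ordinary least squares keeps the
# sketch dependency-free where the real framework would use an ML regressor.
A = np.column_stack([X, np.ones(n_sites)])         # add intercept column
coef, *_ = np.linalg.lstsq(A, nee, rcond=None)

# "Upscaling": evaluate the site-trained model on every cell of a grid of
# the same predictors to obtain a gridded NEE estimate.
ny, nx = 20, 40
Xg = rng.uniform(size=(ny * nx, 3))
nee_map = (np.column_stack([Xg, np.ones(ny * nx)]) @ coef).reshape(ny, nx)
print(nee_map.shape)
```

The same train-at-sites, predict-on-grid pattern carries over unchanged when the linear model is swapped for a neural network or tree ensemble.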

Monday 23 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Regional GHG budgets from EO-data in Climate Space RECCAP2

Authors: Philippe Ciais, Stephen Sitch, Ana Bastos, Filipe Aires, Gustaf Hugelius, Man Chen, Pierre Regnier, Yidi Xu, François Ritter, Frederic Chevallier, Emilio Chuvieco, Amin Khairoun, Cornelius Senf, Gunnar Brandt, Martin Brandt, Rasmus Fensholt, Siyu Liu, Luiz Aragao, Mélanie Juillard, Robert Colas, Manuela Balzarolo, Maria Vincenza Chiriacò, Simon Bowring, Nora Linscheid, Ronny Lauerwald, Frédéric Frappart, Aurélien de Truchis, Jean Pierre
Affiliations: LSCE
The objective of the Climate Space RECCAP2 project is to harness satellite-based ECVs to make a significant step forward in reducing the uncertainty on the emissions and sinks of CO2 and CH4 over key land regions, and in attributing them to anthropogenic and global change drivers. To tackle this objective, we will integrate bottom-up and top-down methods and compare their results with inventories for case-study land regions that are of key importance for global budgets and where uncertainties on current CO2 and CH4 fluxes remain very large: Europe, Siberia, the Amazon and the Arctic. In addition to high-resolution gridded fluxes over regional case studies, we will keep a focus on how these advances can contribute to improving the global land carbon budget. For this, EO-based estimates of C stock changes and CO2 and CH4 fluxes will be compared and reconciled with traditional inventory methods used in national greenhouse gas inventories, by ensuring a periodical synthesis between nationally reported emissions and inversions. The proposed framework builds upon recent research methods to produce spatially explicit budgets of emissions and sinks for CO2 and CH4 using bottom-up and top-down approaches based on EO data. These methods draw on the knowledge gained in previous projects where EO data have been integrated to analyse regional C fluxes for tropical and boreal forests, the impact of land-cover change on C stocks and fluxes in Europe, and regional CH4 emission budgets. The regional case studies targeted in this project, Siberia (2 regions), the Arctic peatlands, and the Amazon (2 regions), are of crucial importance for the global carbon and CH4 budgets, and have not been a focus of activities funded within Europe. 
Europe is additionally included in this project's scope because of the strong interest from ESA member states in reducing uncertainties on GHG budgets in European countries, and the opportunity to work more closely with National Inventory Agencies, with a special focus on quantifying gross biomass losses and gains to better understand the drivers of the dramatic decline of the European forest sink in recent years.

Monday 23 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Constraining a Data-Driven Carbon Dioxide Flux Model by Ecosystem and Atmospheric Observations Using Atmospheric Transport

Authors: Samuel Upton, Dr Markus Reichstein, Dr Wouter Peters, Dr Santiago Botia, Dr Jacob Nelson, Dr. Sophia Walther, Dr Martin Jung, Fabian Gans, Dr Basil Kraft, Dr Ana Bastos
Affiliations: Department of Biogeochemical Integration, Max Planck Institute For Biogeochemistry, Department of Biogeochemical Signals, Max Planck Institute For Biogeochemistry, Institute for Earth System Science and Remote Sensing, Leipzig University, Department of Meteorology and Air Quality, Wageningen University, University of Groningen, Centre for Isotope Research
When modelling the carbon budget, a major source of uncertainty is the net flux of carbon dioxide from the biosphere to the atmosphere, or Net Ecosystem Exchange (NEE). There are two major data-driven approaches to quantifying NEE: top-down approaches, typically Bayesian atmospheric inversions, and bottom-up models trained on eddy-covariance data and Earth Observation (EO) products. Both approaches have known limitations: bottom-up models are accurate at the ecosystem level but, because of representation problems in the eddy-covariance network, do not align with other estimates of the flux when upscaled globally; top-down models produce flux estimates that are in line with the atmospheric growth rate at larger scales, but cannot map the global signal down to the ecosystem level (Kondo et al., 2020). This study builds on Upton et al. (2024), which demonstrated that top-down estimates of atmospheric CO₂ can provide additional information to a data-driven bottom-up model (i.e. FLUXCOM / X-BASE (Nelson* and Walther* et al., 2024)). The use of a top-down constraint from inversions improved the upscaling of NEE, producing a model that was consistent with the eddy-covariance record at the ecosystem level but closer at the regional and global level to other independent estimates of NEE. However, the atmospheric constraint had important shortcomings, and it failed to make progress on key issues, particularly the interannual variance (IAV) of NEE. In this work, we build a bottom-up flux model (EC-ATM) which uses eddy-covariance, meteorological, and EO data to infer NEE at the ecosystem level, and add an additional constraint from direct observations of the atmospheric mole fractions of CO₂. We use the Stochastic Time-Inverted Lagrangian Transport (STILT) atmospheric transport model (Lin et al., 2003) to bridge from the model to the top-down observations. 
For a tower observation, the EC-ATM model can be run at a set of STILT-determined locations, and the resulting NEE transported to produce an inference of the CO₂ mole fraction at the tower, which is then included in the training loop. EC-ATM can thus ’see’ more of the land surface, giving it a broader view of the distribution of the drivers of NEE. Three tall-tower observatories, from tropical, extra-tropical and boreal locations, are included in our training. The EC-ATM model produces a global NEE product which, when integrated globally, is closer to the 2023 Global Carbon Budget land residual (GCB23, Friedlingstein et al. (2023)) than the current state-of-the-art FLUXCOM X-BASE NEE product (RMSE of 1.08 PgC/yr compared with 3.06 PgC/yr). When compared with the GCB23 time series, the IAV of the EC-ATM annual global time series is also closer in magnitude to GCB23 (0.63 PgC/yr compared with 0.34 PgC/yr, with GCB23 at 0.83 PgC/yr), and produces a higher coefficient of correlation than FLUXCOM X-BASE (Pearson’s R of 0.53 compared with 0.18). This shows the potential of combining bottom-up and top-down approaches: the top-down information gives EC-ATM access to additional information for training, while EC-ATM’s strong link to the eddy-covariance record allows it to robustly locate the atmospheric information in a spatial context. This dual constraint makes progress on the major shortcomings of the two approaches while combining their strengths. References: Friedlingstein, P. et al. Global Carbon Budget 2023. Earth System Science Data 15, 5301–5369 (2023). Kondo, M. et al. State of the science in reconciling top-down and bottom-up approaches for terrestrial CO₂ budget. Global Change Biology 26, 1068–1084 (2020). Lin, J. C. et al. A near-field tool for simulating the upstream influence of atmospheric observations: The Stochastic Time-Inverted Lagrangian Transport (STILT) model. Journal of Geophysical Research: Atmospheres 108, (2003). Nelson J., and Walther S. et al. 
X-BASE: the first terrestrial carbon and water flux products from an extended data-driven scaling framework, FLUXCOM-X. EGUsphere 1–51 (2024) doi:10.5194/egusphere-2024-165. Upton, S. et al. Constraining biospheric carbon dioxide fluxes by combined top-down and bottom-up approaches. Atmospheric Chemistry and Physics 24, 2555–2582 (2024).
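The forward step of the atmospheric constraint can be illustrated with a footprint convolution: a STILT-style footprint maps gridded NEE to a simulated CO2 mole fraction at the tower, whose mismatch with the observed value can enter the training loss. The fields, units and numbers below are synthetic placeholders, not STILT output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 50x50 surface grid around a tall tower: a modelled NEE field
# (umol m-2 s-1) and a STILT-style footprint (sensitivity of the tower mole
# fraction to each cell, ppm per unit flux). Both fields are invented.
nee = rng.normal(-1.0, 0.5, size=(50, 50))
footprint = rng.exponential(1e-3, size=(50, 50))

# Transport the modelled fluxes to a simulated CO2 enhancement at the tower
# (element-wise product summed over the grid), add a background, and form
# the residual against an observed mole fraction for use as a loss term.
background_ppm = 420.0
enhancement = np.sum(footprint * nee)          # footprint "convolution"
simulated_ppm = background_ppm + enhancement

observed_ppm = 419.8                           # illustrative observation
atm_residual = simulated_ppm - observed_ppm    # extra constraint for training
print(round(simulated_ppm, 2))
```

In the dual-constraint setup, this residual is minimised jointly with the ordinary eddy-covariance misfit, so the flux model is pulled toward both data streams at once.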

Monday 23 June 14:00 - 15:30 (Room 0.49/0.50)

Session: A.08.02 Advances in the theory and methodology of SAR Oceanography - PART 1

In 2003, ESA’s workshop on Coastal and Marine Applications gathered the community to present the findings of ESA-funded studies of SAR ocean applications and to prepare recommendations for future research. It was followed by the first SAR oceanography workshop, SEASAR 2006, entitled "Advances in SAR Oceanography from Envisat and ERS missions", which established regular consultations with the science and applications community. Most recently, the SEASAR 2023 workshop (Svalbard) allowed scientists to meet and exchange their latest research findings and advances, to support new mission proposals, prepare white papers, and discuss the urgent need for dedicated field campaigns to strengthen the understanding of air-sea interactions and the coupling of the atmospheric boundary layer and the upper ocean. The applications span a broad range of research areas tailored to wind, waves, sea ice and surface currents, as well as application areas including, for instance, coastal management, offshore industry, shipping, extremes and hazards.

We welcome contributions to research and applications within areas such as (not exclusive):
• Ocean Wind
• Ocean Current
• Waves
• Sea Ice
• Oil spill and ship detection
• New advances in SAR Oceanography: sensor synergy, Doppler methods and applications.

Monday 23 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Mapping Sea’s Hidden Oscillations: Advancing Internal Wave Research with Sentinel-1 SAR Data and Machine Learning

Authors: João Pinelo, Adriana M. Santos-Ferreira, João Gonçalves, José C. B. da Silva, Jorge M. Magalhaes, Bertrand Chapron, Johnny A. Johannessen
Affiliations: AIR Centre - Atlantic International Research Centre, Department of Geosciences Environment and Spatial Planning, Faculty of Sciences, University of Porto and Institute of Earth Sciences (ICT), Interdisciplinary Centre of Marine and Environmental Research (CIIMAR), Laboratoire d’Océanographie Physique et Spatiale, Centre National de la Recherche Scientifique – Ifremer, Nansen Environmental and Remote Sensing Center (NERSC)
Oceanic internal waves (IWs) form within stratified ocean layers when disturbed by various physical mechanisms, often manifesting as nonlinear Internal Solitary Waves (ISWs) with amplitudes exceeding 100 meters. These waves generate intense vertical velocities and horizontal shear currents, posing navigation hazards and potentially damaging underwater infrastructure such as sea platforms (Osborne and Burch 1980). ISWs are critical to ocean dynamics, influencing global mixing by transporting cold, deep waters to warmer upper layers. This process plays an essential role in sustaining the meridional overturning circulation and facilitating the movement of heat and nutrients throughout the ocean. ISWs also boost primary productivity by making nutrients more accessible to phytoplankton and increasing light exposure, affecting marine life (Muacho et al. 2013, Pineda et al. 2015). Furthermore, they pose risks to underwater infrastructure and can resuspend sediments on the continental shelf (Quaresma et al. 2007), making it crucial to understand their regional impacts. Despite significant advances in deep ocean observation since the 20th century, the mechanisms behind IW generation remain unclear. Although typically driven by internal tides, winds, and currents, the interactions among these factors are still under investigation, making prediction and modeling particularly challenging. Key knowledge gaps in IW energy dissipation, evolution, and seabed interactions highlight the need for further research. Advances in remote sensing, particularly high-resolution Synthetic Aperture Radar (SAR) technology, have transformed IW observation. The ESA Sentinel-1 satellites provide extensive SAR coverage of the open ocean (WV mode), facilitating routine analysis and validation of global numerical models. The "Internal Waves Service" uses machine learning to classify all Sentinel-1 WV-mode coverage, mapping IW events in near real time on an interactive platform. 
Event identification is carried out with a machine learning model that is continually refined through expert validation: as more images are collected, the model is retrained. Several image classification models were tested, including a Multi-Layer Perceptron (MLP) classifier (a feedforward artificial neural network) and a Convolutional Neural Network (CNN). By processing extensive volumes of Sentinel-1 WV-mode satellite imagery, the service generates global maps of IWs, which are currently supervised and validated by experts. These maps offer crucial insights into the distribution of IWs, significantly contributing to scientific research. Additionally, the database of event images is made openly available to scientists, overcoming one of the hurdles of research on the phenomenon: identifying events within the thousands of images available.
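A minimal sketch of the classification idea, assuming synthetic "vignettes" in which an ISW signature appears as a quasi-periodic stripe pattern: the operational service trains MLP/CNN classifiers on real Sentinel-1 WV imagery, whereas this toy version separates the two classes with hand-crafted texture features and a logistic classifier trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_vignette(with_waves):
    """Synthetic 64x64 'vignette': speckle-like noise, optionally with a
    quasi-periodic stripe pattern standing in for an ISW signature."""
    img = rng.normal(0.0, 1.0, size=(64, 64))
    if with_waves:
        x = np.arange(64)
        img += 1.5 * np.sin(2 * np.pi * x / 8.0)[None, :]   # stripes along x
    return img

def features(img):
    # Texture features instead of raw pixels keep the sketch tiny: image
    # std, and the log of the spectral peak-to-mean ratio of the column
    # profile (large when a periodic stripe pattern is present).
    spec = np.abs(np.fft.rfft(img.mean(axis=0)))
    peak = spec[1:].max() / (spec[1:].mean() + 1e-9)
    return np.array([img.std(), np.log(peak), 1.0])          # 1.0 = bias

X = np.array([features(make_vignette(i % 2 == 0)) for i in range(200)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(200)])

# Logistic regression trained with plain batch gradient descent.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

pred = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5
accuracy = (pred == (y == 1.0)).mean()
print(accuracy)
```

A CNN would learn such stripe-sensitive features directly from the pixels; the train/validate/retrain loop around the model is the same as the human-in-the-loop refinement the service describes.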

Monday 23 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Direct ocean tidal current observations from space: Enhanced interpretation of Sentinel-1

Authors: Michael Hart-Davis, Dr. Artem Moiseev, Dr. Antonio Bonaduce, Dr Richard Ray, Dr Björn Backeberg, Prof. Johnny Johannessen
Affiliations: DGFI-TUM, Nansen Environmental and Remote Sensing Center, Geodesy and Geophysics Laboratory, NASA Goddard Space Flight Center, Deltares
Tidal currents have never previously been directly measured from satellite observations; they are usually derived using inversion techniques or hydrodynamic modelling. With planned future missions, such as Harmony, putting emphasis on observing the total surface current, accurately measuring ocean tidal currents will become critical. Improved insights into tidal currents will allow us to better understand their variability throughout the global oceans. Additionally, much like tidal height corrections, tidal current corrections will enhance our understanding of surface current variability, particularly in regimes of high tidal variability. In this study, radial velocities from Sentinel-1 Doppler observations are used in an attempt to derive tidal currents for the first time from satellite observations. The North Sea is a region of high tidal variability, with tidal heights reaching tens of meters, and is therefore a suitable natural laboratory to test tidal current retrievals from satellite remote sensing. The focus of this presentation is on the computation of the major tidal constituents from the sun-synchronous orbit of Sentinel-1, noting that the main solar tides cannot be derived from sun-synchronous orbits. Results indicate a strong correlation of the estimates with the global tidal current models TPXO10.2 and FES2022b, in terms of both amplitude and phase-lag retrievals of the tidal constituents. Moreover, when the total surface current estimates are reduced by the derived tidal currents, the tidal signal in the results is significantly diminished, allowing the study of non-tidal processes with Sentinel-1. The results clearly indicate the potential of resolving tidal currents from Sentinel-1, which in the future will advance modelling efforts in terms of both validation and potential data assimilation.
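The harmonic-analysis step at the heart of such retrievals can be sketched as an ordinary least-squares fit of cosine and sine terms at a known tidal frequency to irregularly sampled radial velocities. The M2 period is real, but the amplitude, phase and sampling below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# M2 tidal angular frequency (period 12.4206012 hours).
omega = 2 * np.pi / 12.4206012            # rad per hour
amp_true, phase_true = 0.8, 1.1           # m/s and rad, illustrative only

# Irregular sampling times mimicking sparse satellite revisits over a year,
# with noisy radial-velocity "observations" of a single M2 constituent.
t = np.sort(rng.uniform(0, 24 * 365, size=300))     # hours
v = amp_true * np.cos(omega * t - phase_true) + rng.normal(0, 0.05, t.size)

# Harmonic analysis: v(t) ~ a*cos(wt) + b*sin(wt), solved by least squares.
# Then amplitude = hypot(a, b) and phase = atan2(b, a), since
# A*cos(wt - phi) = A*cos(phi)*cos(wt) + A*sin(phi)*sin(wt).
A = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
(a, b), *_ = np.linalg.lstsq(A, v, rcond=None)
amp, phase = np.hypot(a, b), np.arctan2(b, a)
print(round(float(amp), 2), round(float(phase), 2))
```

A real retrieval fits several constituents at once (more column pairs in the design matrix) and must respect the aliasing limits of a sun-synchronous orbit, which is why the main solar tides drop out.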

Monday 23 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Two novel methods to distinguish mineral oil slicks and low wind areas using synthetic aperture radar images

Authors: Malin Johansson, Dr. Cornelius Quigley, Dr. Cathleen Jones
Affiliations: Uit The Arctic University Of Norway, NASA / JPL
Marine oil spills can have serious consequences for marine and coastal ecosystems, local communities, and tourism. Providing near-real-time, accurate information on the location and extent of an oil spill to emergency responders is therefore essential. Separating radar-dark low-wind areas from radar-dark marine oil spills is therefore important, and still an ongoing challenge. Using high-resolution L-band synthetic aperture radar images, we present two methods that enable such near-real-time information using airborne imagery. We make use of datasets covering three different areas of interest: (a) the Coal Oil Point (COP) seep field near Santa Barbara, California, (b) the Aleutian Island chain off the coast of Alaska, and (c) the North Sea. The imagery was collected during airborne campaigns, which for (a) and (b) utilized UAVSAR, NASA's aircraft-mounted L-band SAR, and for (c) utilized DLR's aircraft-mounted F-SAR instrument, which acquired imagery in X-, S- and L-bands. The first and last datasets also image mineral oil: (a) is a time series over naturally occurring releases and (c) is from a research experiment in which mineral oil emulsion was released into open water under controlled conditions. Using the first dataset, (a), we exploited the rapid-repeat SAR imagery to develop a method for differentiating marine oil slicks from radar-dark, low-wind areas. In-situ data collected during the overflights were used to verify both oil slicks and low-wind areas: GoPro imagery was acquired simultaneously with the UAVSAR acquisitions, with the GoPros attached to boats surveying the COP seep field, and was used to verify the presence of oil slicks or low-wind open-water areas in the SAR imagery. The method only requires data from one polarization channel. The in-situ data were supplemented by wind speed information from NOAA weather buoys in the area. 
SAR imagery was acquired over the course of three days and contained both low-wind zones and mineral oil slicks. The low-wind areas and naturally occurring oil seeps were classified by exploiting the differences in their spatial and temporal evolution over the course of a few hours. The images were collected in a repeat pattern, ensuring that the imaging geometry was consistent for all pixels throughout the time series. Using the GoPro data, we classified the images into confirmed oil, likely oil, and open water, then compared this classification with the standard deviation values from the co-located SAR pixels. The confirmed-oil and likely-oil pixels showed a mean standard-deviation backscatter difference of ~1–7 dB compared with the open-water areas. Using 3–5 images, we show that we can accurately distinguish between areas with a high likelihood of open water and of oil slick, where the number of images needed depends on the spatial extent and persistence of the low-wind zones in the imagery. Based on this work, we propose a second method to distinguish low-wind zones from mineral oil slicks using SAR imagery. While the previous method relied on a time series of rapid-repeat SAR imagery and required only one polarimetric channel, our second method requires only a single SAR acquisition and utilizes the statistical information contained within the scene. However, a SAR configured to acquire imagery in both VV and HH polarization channels is needed, as well as a sufficiently high instrument signal-to-noise ratio (SNR) for the low-wind or mineral oil slick backscatter to be at least 6 dB above the noise floor of the instrument. Our method is based on the Kolmogorov-Smirnov (KS) separability measure, and the result is an easily interpretable index, which we call the KS index, ranging from -1 to +1. 
Negative values of the KS index indicate that a radar-dark area is more likely to be an oil slick, while positive values indicate that it is more likely to be a low-wind area. We demonstrate our method on the three airborne SAR datasets. Our results show a clear separation between oil slick and low wind for all scenes tested. In this study, we demonstrate that low-noise-floor airborne SARs, such as NASA's UAVSAR and DLR's F-SAR instruments, can reliably separate low-wind areas and mineral oil spills and have the potential to aid clean-up efforts by first responders. We also note that the second method has the potential to be employed on satellite SAR imagery with a sufficiently high SNR.
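The core ingredient of the second method is the two-sample Kolmogorov-Smirnov statistic computed on backscatter distributions. The sketch below implements that statistic with NumPy and combines the VV and HH channels into a signed value in [-1, +1]; note that the combination rule is a guessed illustration of the KS-index idea, not the authors' exact formula, and all backscatter samples are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical CDFs (0 = identical, 1 = fully separated)."""
    grid = np.sort(np.concatenate([x, y]))
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / x.size
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / y.size
    return float(np.abs(cdf_x - cdf_y).max())

# Illustrative backscatter samples (dB): the slick here damps VV and HH
# roughly equally relative to the surrounding open water. All numbers are
# invented for the sketch.
water_vv = rng.normal(-12.0, 1.0, 2000)
water_hh = rng.normal(-14.0, 1.0, 2000)
slick_vv = rng.normal(-19.0, 1.5, 2000)
slick_hh = rng.normal(-21.0, 1.5, 2000)

def ks_index(dark_vv, dark_hh):
    """Signed separability in [-1, +1] (hypothetical variant): difference
    of the per-channel KS separability of the dark patch from open water."""
    return ks_statistic(dark_hh, water_hh) - ks_statistic(dark_vv, water_vv)

print(ks_statistic(water_vv, slick_vv))   # near 1: slick clearly separable
print(ks_index(slick_vv, slick_hh))
```

Because each KS statistic lies in [0, 1], any difference of two of them is automatically bounded to [-1, +1], which is what makes this style of index easy to read off a scene.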

Monday 23 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Internal Solitary-like Waves within the Pacific Cold Tongue: Sentinel-1 SAR WM (vignette) observations

Authors: José da Silva, Adriana Santos-Ferreira, Jorge Magalhaes, Renan Huerre, Beatriz Ribeiro
Affiliations: University of Porto, Atlantic International Research Center, CIIMAR
The equatorial cold tongue in the Pacific Ocean plays an important role in air–sea interactions and climate and has been intensely studied during the last decades. Santos-Ferreira et al. (2023) showed satellite signatures of Internal Solitary Waves (ISWs) within a zonal band from 5 °S to 5 °N stretching from 95 to 150 °W, away from any steep bottom topography and strong tides. The generation of these waves could not be explained by classical theories, but synergy between the Sentinel-3 OLCI sensor (where the waves were primarily observed) and the SLSTR (which shows clear thermal fronts and tropical instability vortices) has provided insights into their generation mechanisms. These authors associated the generation of ISWs within the cold tongue with gravity currents apparently originating in tropical instability waves, both phenomena being observed in SLSTR. Results from a 2D numerical model suggest that resonantly generated nonlinear internal waves with amplitudes of O(10) m may be continuously initiated at the fronts of advancing gravity currents. The analysis of Sentinel-3 data was restricted to 2020, partly because of the difficulty of finding the ISW signatures owing to cloud cover. Nevertheless, 116 ISW trains were detected during one full year, suggesting the region is a hotspot for ISW formation and raising questions about their longevity and energy dissipation. The waves have an average crest length of 300 km, typical wavelengths of 1500 m, and an age estimated at about 28 hours. The findings in Santos-Ferreira et al. (2023) unveil the possible role of enhanced mixing in a region already known to be "marginally unstable". The scarcity of SAR image-mode products in the equatorial Pacific Ocean and the severe year-round cloud coverage in the region led us to search for alternatives to investigate the equatorial waves in the cold tongue. 
In this paper we present first results from a comprehensive analysis of SAR Wave Mode (WM) vignettes from the Sentinel-1 mission, spanning 2020 to 2024. These vignettes were accessed via the X-WAVES IFREMER products. In total, 228 vignettes were found to contain signatures of ISWs, which allowed us to study their geographic distribution, main characteristics and propagation directions. Synergies with other sensors, such as SWOT KaRIn and OLCI/SLSTR, provide further insights into the longevity of these waves. While Moum et al. (1992) and Hebert et al. (1992) measured interfacial internal waves at the equator propagating over the core of the Equatorial Undercurrent (EUC), and reported very high dissipation rates which led them to conclude the waves must have been very short-lived (a few hours), satellite evidence has shown that some of these waves endure for more than three days. This poses an interesting question: how do the ISWs survive for so long in a highly sheared environment, and what feedback mechanisms may be at work to keep these waves coherent? In addition, we report unusually high phase speeds for some of the northward-propagating waves observed in the Southern Hemisphere vignettes, which raise further questions about their generation mechanisms: can some of these gravity currents become supercritical and propagate as undular internal bores? And what would be the generation mechanisms of these internal solitary-like waves, since very few sharp ocean fronts are found along the southern “border” of the equatorial cold tongue? In fact, the new SAR WM observations are an “invitation” to seek other generation mechanisms at work in the eastern equatorial Pacific Ocean.
The new findings in SAR WM vignettes add to the knowledge of ISWs within the Pacific cold tongue and of the mixing they cause, pointing to intensified vertical heat transfer in a region of the ocean that is critical for climate. This may motivate further research into how the uptake of heat from the atmosphere within the cold tongue could slow down, at least for some period, anthropogenic global warming. ISWs further enhance mixing because they are prone to transient growth instabilities owing to overturns and wave breaking, hence possibly entering the feedback mechanism for ENSO (El Niño Southern Oscillation) cycles. All these processes may impact ENSO variability and predictability according to Holmes et al. (2019). In summary, some of the ISWs observed in the Sentinel-1 SAR WM vignettes are consistent with internal solitary-like waves resonantly generated by gravity currents that propagate as frontal zones of 1000-km scale, sometimes associated with tropical instability waves. The satellite data also suggest other, as yet unidentified, generation mechanisms at work, prompting more research in the equatorial Pacific Ocean. The multiple co-occurring phenomena provide a physical link between 1000 km ocean features and the viscous mixing scales triggered by instabilities within the ISWs.

Monday 23 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: New Sentinel-1 IW RVL products: calibration status and usage for ocean current measurement

Authors: Gilles Guitton, Fabrice Collard, Harald Johnsen, Geir Engen, Andrea Recchia, Alessandro Cotrufo, Sergio Bras, Nuno Miranda, Muriel Pinheiro
Affiliations: OceanDataLab, Norce, Aresys, ESTEC/ESA, ESRIN/ESA
The Doppler Centroid (DC) frequency shift recorded over ocean surfaces by Synthetic Aperture Radar (SAR) is the sum of contributions from satellite attitude / antenna and from ocean surface motion induced by waves and underlying ocean currents. A precise calibration of the DC is needed in order to predict, and subsequently remove, the attitude / antenna contributions. A new Sentinel-1 Radial VeLocity (RVL) dataset with global coverage will be presented for the Interferometric Wide swath (IW) mode. This dataset benefits from all the Doppler calibration improvements of recent years. A novel calibration technique based on combining gyroscope telemetry data with global Sentinel-1 WV OCN products has demonstrated promising capabilities for quantifying the Sentinel-1 attitude. Coupled with a mean DC bias as a function of elevation angle, learned on a daily basis from Sentinel-1 IW land acquisitions, this calibration technique has been shown to significantly reduce the bias and variance of the non-geophysical DC. In addition, the local DC value over the nearest land is now used to capture very short-scale DC variations, such as DC jumps caused by the antenna temperature compensation scheme. This new RVL dataset is built on top of the existing OCN products (RVL component), allowing for bug corrections and new user-friendly features. Among the new features, all consecutive OCN slices are merged into a single product in order to simplify the geophysical analysis of IW scenes. Moreover, the products also include a wave Doppler estimate, a mandatory step for surface current retrieval. The wave Doppler estimation is based on the latest CDOP geophysical model function derived from Sentinel-1 measurements [1]. The remaining calibration limitations, as well as the usage of these new products for ocean current measurement, will be discussed. In addition, six months after the Sentinel-1C launch, early assessments of its Doppler calibration will be shown.
[1] Moiseev, A., Johannessen, J. A., & Johnsen, H. (2022). Towards retrieving reliable ocean surface currents in the coastal zone from the Sentinel-1 Doppler shift observations. Journal of Geophysical Research: Oceans, 127, e2021JC018201. https://doi.org/10.1029/2021JC018201
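The retrieval chain described in the abstract can be conveyed with a minimal numerical sketch: subtract the predicted attitude/antenna ("geometric") DC and the wave Doppler from the measured DC, then convert the residual to a radial surface velocity. The constants, sign convention and sin(incidence) projection below are common textbook assumptions for illustration, not the actual RVL product definition:

```python
import numpy as np

C = 299_792_458.0       # speed of light, m/s
F0 = 5.405e9            # Sentinel-1 C-band carrier frequency, Hz
WAVELENGTH = C / F0     # radar wavelength, ~5.5 cm

def radial_surface_velocity(dc_measured, dc_geometric, dc_wave, incidence_deg):
    """Horizontal radial surface velocity (m/s) from a calibrated Doppler anomaly.

    The residual DC is what remains after removing the predicted
    attitude/antenna contribution and the wave Doppler; v_los = -lambda/2 * f_dc
    is the textbook relation, and dividing by sin(incidence) projects the
    line-of-sight component onto the horizontal.
    """
    dc_current = dc_measured - dc_geometric - dc_wave   # residual DC, Hz
    v_los = -0.5 * WAVELENGTH * dc_current              # line-of-sight velocity
    return v_los / np.sin(np.radians(incidence_deg))    # horizontal component

# A 20 Hz residual Doppler at 35 deg incidence maps to roughly -0.97 m/s.
v = radial_surface_velocity(dc_measured=60.0, dc_geometric=30.0,
                            dc_wave=10.0, incidence_deg=35.0)
```

This also makes the calibration requirement concrete: at C-band, a DC bias of only a few Hz already corresponds to a surface-velocity error of order 0.1 m/s.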

Monday 23 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Mapping Surface Currents and Winds in the Southern Mediterranean Sea with the OSCAR Airborne SAR Instrument

Authors: David McCann, Adrien Martin, Eva Le Merle, Christine Gommenginger, Karlus Macedo, José Márquez, Tania Casal
Affiliations: National Oceanography Centre, NOVELTIS, MetaSensing BV, Radarmetrics SL, European Space Agency
Coastal and shelf seas are dominated by fast, small-scale ocean surface dynamics that are vital for the transport and exchange of climate properties like heat, carbon, and nutrients between ocean, atmosphere and land. An increasing body of evidence shows that sub-mesoscale ocean dynamics (scales below 10 km) have far-ranging impacts on lateral ocean dispersion, air-sea interactions, ocean carbon cycling, marine productivity and vertical stratification, governing exchanges across vital interfaces of the Earth System, directly affecting the ocean and atmosphere and regulating global climate. However, these processes remain poorly observed at the fine spatial and temporal scales necessary to resolve them: in situ observations are sparse, and current satellite EO measurements are unable to provide high enough temporal resolution. The Ocean Surface Current Airborne Radar demonstrator (OSCAR) is a Ku-band, squinted, 3-look airborne Synthetic Aperture Radar Along-Track Interferometer (SAR-ATI). Developed for ESA to support the SeaSTAR Earth Explorer candidate, OSCAR is a unique airborne EO instrument capable of simultaneous measurements of the Total Surface Current Vector (TSCV) and Ocean Surface Vector Wind (OSVW) over a wide swath at high resolution and high accuracy. Following its successful demonstration in 2022 during the ESA-funded SEASTARex project in the Iroise Sea, France, a second campaign was run between 5 and 8 May 2023 in the Southern Mediterranean Sea, coincident with both the fast-repeat ‘Cal/Val’ phase of the NASA/CNES Surface Water and Ocean Topography (SWOT) mission and the Adopt a Crossover (AdaC) consortium project BioSWOT-Med, which aims to study biogeophysical interactions at sub-mesoscale frontal eddies and filaments at the site of the SWOT Mediterranean crossover.
The SEASTARex-Med OSCAR dataset comprises 50 km × 5 km swaths of SAR-ATI acquisitions over three days arranged in disks that fill the SWOT Cal/Val Track 3 sub-swath, with along-track acquisitions performed at the precise time of SWOT overpasses on 5, 7 and 8 May 2023. These SAR-ATI data were processed using the SeaSTAR concept methodology to retrieve TSCV and OSVW at 200 m posting. This dataset represents a unique opportunity to study sub-mesoscale ocean processes at scales that are extremely challenging to achieve with in situ sensors. This work presents results and analysis from this campaign, as well as comparison and validation with in situ observations from the BioSWOT-Med campaign, operational numerical models and SWOT high-resolution altimetry measurements of Mediterranean frontal dynamics.
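The along-track interferometry (ATI) principle behind the OSCAR retrievals can be illustrated in a few lines. The Ku-band wavelength, time lag, phase and incidence values below are invented for illustration, and the conversion uses the common ATI convention v_los = lambda * phi / (4 * pi * tau) rather than OSCAR's actual processing chain:

```python
import numpy as np

WAVELENGTH = 0.022   # assumed Ku-band wavelength in metres (~13.5 GHz)

def ati_velocity(phase_rad, time_lag_s, incidence_deg):
    """Horizontal surface velocity from an along-track interferometric phase.

    Two antenna phase centres observe the same surface patch a short time
    apart; the interferometric phase between the two looks is proportional
    to the line-of-sight surface velocity, which is then projected onto the
    horizontal by the sin(incidence) factor.
    """
    v_los = WAVELENGTH * phase_rad / (4.0 * np.pi * time_lag_s)
    return v_los / np.sin(np.radians(incidence_deg))

# e.g. a 0.5 rad phase at a 2 ms lag and 30 deg incidence gives ~0.88 m/s
v = ati_velocity(phase_rad=0.5, time_lag_s=2e-3, incidence_deg=30.0)
```

The short time lag is the design driver: it must be small enough that the surface stays correlated between looks, yet large enough that centimetre-per-second currents produce a measurable phase.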

Monday 23 June 14:00 - 15:30 (Hall E2)

Session: A.10.03 Our solid Earth: from core to surface - PART 1

This session solicits contributions ranging from the deepest layers of the Earth, the geodynamo and core dynamics, through the mantle and the lithosphere, to Earth’s surface. ESA’s Swarm trio of satellites, now exceeding 10 years in orbit, has changed our understanding of the geomagnetic field evolution, providing unprecedented accuracy and coverage from space. This session invites contributions that address the applicability of current ground and satellite datasets from Swarm, GOCE, GRACE-(FO) and future missions such as NGGM and MAGIC, to study the dynamical behaviour of the solid Earth, core dynamics and the interactions between the core, mantle, and lithosphere, in order to improve process understanding at the surface. The session shall address all aspects linked to the subsurface and surface, including oceans, using a multitude of observational techniques (e.g. gravimetry, magnetometry, seismology, surface deformation) as well as physics-based models, numerical simulations of the Earth’s dynamo and theoretical advances in the field. Research related to advances in modelling the 3D conductivity of the mantle enabled by ESA’s Swarm mission, as well as the 3D structure of Earth’s layers from the lithosphere down to the core-mantle boundary and their interplay with the flow in Earth’s core, is addressed in this session. Contributions related to plate motion, volcanism, monitoring of the seismic cycle and geohazards monitoring are also encouraged.

Monday 23 June 14:00 - 15:30 (Hall E2)

Presentation: Local Estimates of Core-Mantle Boundary Geomagnetic Secular Variation from the Swarm and MSS-1 Satellites

Authors: Chris Finlay, Jonas Bregnhøj Lauridsen
Affiliations: DTU Space
Changes in Earth’s magnetic field observed at the surface or at satellite altitude originate in magnetohydrodynamic processes taking place in the Outer Core. These processes can be tracked and better understood by using satellite observations and appropriate field modelling techniques to map field changes at the Core-Mantle Boundary (CMB). Here we show how observations from the three polar-orbiting Swarm satellites and the recently launched low-inclination MSS-1 satellite can be combined to produce high-resolution estimates of field changes (or Secular Variation, SV) at low latitudes. We make use of the Subtractive Optimised Local Estimation (SOLA) technique (Hammer and Finlay, 2019) to produce local SV estimates at locations of interest at the CMB without requiring global coverage of satellite data, which is an advantage given the low-inclination orbit of MSS-1. The equatorial belt in the Outer Core is characterised by wave dynamics that are especially prominent at decadal and sub-decadal periods. These waves are closely linked to geomagnetic jerk events and are crucial in efforts to better predict future geomagnetic field changes. We suggest that the SOLA method, applied to data from the growing constellation of magnetic survey satellites, can be a useful tool to further investigate the equatorial wave dynamics of the Outer Core.
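The SOLA idea can be conveyed with a minimal 1-D sketch: choose coefficients q so that the combined averaging kernel of the data approximates a narrow target kernel at the location of interest, with a damping term standing in for the noise covariance. Everything below (kernel shapes, widths, the grid, the trade-off parameter) is invented for illustration, and the sketch omits the normalisation constraint used in practice:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)              # positions along a 1-D "CMB"
centers = np.linspace(0.05, 0.95, 40)       # one broad sensitivity kernel per datum
K = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 0.15) ** 2)

# Narrow target kernel centred on the location of interest.
T = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)

# Ridge-regularised least squares: fit the target kernel while damping
# the coefficient norm (a stand-in for the data-noise term in real SOLA).
mu = 1e-3
q = np.linalg.solve(K @ K.T + mu * np.eye(len(centers)), K @ T)

resolved = q @ K                 # averaging kernel actually achieved
peak = x[np.argmax(resolved)]    # should sit near the target location
```

The resolved kernel is necessarily broader than the target (the data kernels limit the resolution), but its peak stays localised at the point of interest, which is what makes the estimate "local" without global data coverage.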

Monday 23 June 14:00 - 15:30 (Hall E2)

Presentation: Rapid geomagnetic dynamics and stable stratification at the top of Earth’s core

Authors: Julien Aubert
Affiliations: Institut de Physique du Globe
Probing the possible presence and physical properties of a stably stratified layer atop Earth's core is crucial to better determine the past history and heat budget of the planet. To this end, geomagnetic signals of interannual to decadal periods, as measured by ESA’s Swarm mission, are a powerful probe of the dynamics of such a layer. In this study, the dynamics within a stratified top layer are numerically modelled together with those of the underlying core convection, in physical conditions matching those of Earth’s core. We observe the emergence of magneto-Archimedes-Coriolis waves, a class of hydromagnetic waves that develops specifically in stably stratified environments. However, the level at which core convection excites these waves is generally insufficient to account for observed geomagnetic variations in this period range. Models without stratification, on the other hand, can account for these variations through magneto-Coriolis hydromagnetic waves, and the presence of strong stratification is deleterious to these waves. The rapid geomagnetic dynamics observed by the Swarm mission therefore unambiguously constrain the top of the core to be at most lightly stably stratified, with a layer thickness not exceeding a few tens of kilometers and a Brunt-Väisälä frequency remaining significantly below the planetary rotation rate.

Monday 23 June 14:00 - 15:30 (Hall E2)

Presentation: Fast waves in the Earth’s core over the last 25 years deduced from satellite and ground secular variation data

Authors: Kathy Whaler, Will Brown, Jonas Lauridsen, Nils Olsen, Chris Finlay, Frederik Madsen
Affiliations: University of Edinburgh, British Geological Survey, DTU Space
A number of recent studies, including those based purely on geomagnetic secular variation data, on numerical simulations, or on data assimilation in computational models, have demonstrated the existence of fast, low-latitude waves in the Earth’s core. Their power is concentrated in low azimuthal wavenumbers, primarily westward propagating. We present results of inversions for core surface flow over the last 25 years, during most of which high-quality measurements of the magnetic field from space are available. We incorporate ground and space estimates best representing the magnetic signal from the core, deriving the secular variation by temporally differencing the field. Satellite data are represented by geomagnetic virtual observatories (GVOs), evenly spaced points at satellite altitude at which time series of the field components are derived from data collected in a cylinder surrounding the GVO, mimicking those produced by ground geomagnetic observatories. We use vector field and spatial gradient values determined at the GVO locations as input to our flow inversions. Our flows are regularised spatially and temporally using a number of different constraints; the resulting flows are difficult to distinguish by eye, but differences emerge in their accelerations, which we calculate from first differences of the velocity, without any smoothing. The azimuthal acceleration component shows evidence of low-latitude, fast, westward-propagating waves, visible in time-longitude plots during the Swarm era, but throughout the period of study in plots of power spectral density. The launch of MSS-1 in May 2023 is adding new high-quality satellite data concentrated at low latitudes. The duration of operation of MSS-1 is at this stage rather short for calculating meaningful secular variation time series, but we investigate the prospects for incorporating GVOs based on MSS-1 data in our inversions.
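The differencing scheme described above (secular variation from field differences, acceleration from differences of the velocity, with no smoothing) can be sketched on a synthetic series. The quadratic test signal and monthly sampling are invented for illustration:

```python
import numpy as np

def first_difference(times, series):
    """Rate of change by first differences, assigned to interval midpoints."""
    rate = np.diff(series) / np.diff(times)
    return 0.5 * (times[:-1] + times[1:]), rate

# Synthetic monthly field values: a 12 nT/yr trend plus a quadratic term,
# so the true SV is 12 + 3*(t - 2000) nT/yr and the true SA is 3 nT/yr^2.
t = np.arange(2000.0, 2005.0, 1.0 / 12.0)
b = 30_000.0 + 12.0 * (t - 2000.0) + 1.5 * (t - 2000.0) ** 2

t_sv, sv = first_difference(t, b)      # secular variation (SV)
t_sa, sa = first_difference(t_sv, sv)  # secular acceleration (SA), ~3 throughout
```

For a smooth signal the midpoint difference is accurate (exact for a quadratic), but each differencing step also amplifies noise, which is why unsmoothed accelerations are a sensitive diagnostic for distinguishing otherwise similar flow models.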

Monday 23 June 14:00 - 15:30 (Hall E2)

Presentation: 4D Dynamic Earth Phase 1: Towards a Digital Twin of the Solid Earth

Authors: Bart Root, Jörg Ebbing, Javier Fullea, Sergei Lebedev, Jakub Velímský, Juan Carlos Afonso, Clint Conrad, Nico Sneeuw, Cedric Thieulot, Wolfgang Szwillus, Henriette Sudhaus, Olga Ortega, Raffaele Bonadio, Yihe Xu, Zdenek Martinec, Florence de la Cruz Ramirez, Yixiati Dilixiati, Ángela Gomez Garcia
Affiliations: Delft University Of Technology, Christian Albrechts University, Universidad Complutense, Universidad Complutense Madrid, Charles University, University of Twente, University of Oslo, University Stuttgart, Utrecht University
We will present the final results of the 4D Dynamic Earth: Phase 1 project. This ESA-funded project is a continuation of the previous solid Earth study, 3D Earth, in which we analysed the optimal way to incorporate ground and satellite data to develop a digital twin of the Earth's mantle. To study the various processes and observations underlying the complexity of a 4D dynamic Earth, our project has four main elements:
1. 3D Earth Extended Element
2. 4D Earth Dynamic Element
3. Electro-Magnetic Earth Element
4. Applications and Extended Studies Element
The Extended Element focuses on the multi-parameter joint inversion of the current state of the Earth, mainly through tasks that continue extending the WINTERC-G model downward into the transition zone and lower mantle. The Dynamic Element focuses on developing a dynamical model to study mantle flow and its sensitivities, together with GIA models using the 3D Earth model from the Extended Element as a constitutive model. The Conductivity Element focuses on the possibility of utilising Swarm magnetic data to infer conductivity and core-mantle interactions and to find new constraints on the mantle constitution. The Applications Element has focused on surface and hazard scenarios and contains application studies. Following the framework of 3D Earth, we have defined several levels of components, parameters and solid Earth processes that play a role in the 4D Dynamic Earth. The project set up different sensitivity studies to understand the applicability of current ground and satellite datasets for studying the dynamical behaviour of the solid Earth. The first eight tasks described in the tender relate to sensitivity studies for a 4D dynamical Earth model.
This results in the following sensitivity studies: seismic (velocity anomalies, anisotropy, temperature, density), gravity (density, viscosity), magnetic data (depth sensitivity, conductivity), GPS and InSAR (viscous response, viscosity, temperature, composition), petrology (composition, velocity-to-temperature, density), and (dynamic) topography (elastic response of the lithosphere). These physical parameters are studied for the lower mantle and transition zone, and are re-analysed together with a dynamic model for the lithosphere (WINTERC-G). This results in a roadmap towards a consistent whole-Earth mantle model that incorporates ground and satellite data in a state-of-the-art modelling approach. The roadmap towards Phase 2 identifies the aspects that need to be addressed to develop a consistent approach. The project has constructed the framework for a full mantle inversion joining satellite data with the high-quality seismological datasets presently available. We have identified new, improved approaches to using the GRACE(-FO) and NGGM gravity data, and found that the global crustal model needs to be revised. We can constrain the water and Fe content in the mantle with a joint local study using GIA and EM conductivity models, together with the gravity-seismic model, connecting new rheology laws for GIA studies within our petrological framework. Finally, the digital twin needs proper visualisation for the work to be applicable. The global model can provide 3D structure to application areas such as earthquake hazard prediction, surface and sea level change, and ice sheet melting, among other solid Earth applications. Phase 2 of the project will build on the framework developed in Phase 1 and will result in a consistent description of the Earth's mantle.

Monday 23 June 14:00 - 15:30 (Hall E2)

Presentation: Mapping the Earth’s mantle thermochemical structure from coupled geophysical–petrological inversion of terrestrial and satellite data

Authors: Javier Fullea, Olga Ortega-Gelabert, Sergei Lebedev, Zdenek Martinec, Juan Carlos Afonso, Bart Root
Affiliations: Universidad Complutense De Madrid (UCM), University of Cambridge, Dublin Institute for Advanced Studies, ITC-University of Twente, TU Delft
The lateral and vertical thermochemical heterogeneity of the mantle is a long-standing question in geodynamics. The forces that control mantle flow, and therefore plate tectonics, arise from lateral and vertical variations in density and viscosity. Satellite gravity data are a unique source of information on the density structure of the Earth due to their global and relatively uniform coverage, which complements terrestrial gravimetric measurements. Gravity data (geoid, gravity, gravity gradients) sense subsurface mass anomalies and have proven helpful in determining the Earth’s thermochemical field by virtue of density’s relatively stronger dependence on rock composition compared to seismic velocities. However, the inversion of gravity data alone for the density distribution within the Earth is an ill-posed problem with a highly non-unique solution that requires regularization and smoothing, implying additional and independent constraints. A common approach to estimating the density field for geodynamical purposes is simply to convert seismic tomography anomalies, sometimes assuming constraints from mineral physics. Such a converted density field does not in general match the observed gravity field, typically predicting anomalies whose amplitudes are too large. Furthermore, a complete description of the Earth’s gravity field must include the internal density distribution and must also satisfy the requirement of mechanical equilibrium. Therefore, the deformation of the density-contrast interfaces (primarily the surface of the Earth and the Core Mantle Boundary) must be consistent with the 3D mass distribution for a given rheological structure of the Earth.
With the current resolution of modern tomography models and integrated geophysical-petrological modelling it is possible to consistently predict the topography of the mineral phase transitions across the transition zone (i.e., olivine → wadsleyite, and ringwoodite + majorite → perovskite + ferropericlase) based on a temperature and chemical description of the Earth. However, for a consistent representation of the gravity field, such thermochemical (i.e., density) 3D models must be compatible with the mantle flow arising from the equilibrium equations that explains both the surface topography (dynamic + isostatic-lithospheric components) and the CMB topography. In the context of the 4D Dynamic Earth project funded by ESA (4000140327/23/NL/SD) as part of an EXPRO+, we present here a new inversion scheme to image the global thermochemical structure of the whole mantle, constrained by state-of-the-art seismic waveform inversion, satellite gravity (geoid and gravity anomalies and gradiometric measurements from ESA's GOCE mission) and surface heat flow data, plus surface and CMB dynamic topography (Stokes flow). The model is based upon an integrated geophysical-petrological approach in which mantle seismic velocities and density are computed within a thermodynamically self-consistent framework, allowing for a direct parameterization in terms of temperature and composition variables.

Monday 23 June 14:00 - 15:30 (Hall E2)

Presentation: Rapid mass redistributions at the core-mantle boundary originating from the deep mantle detected by GRACE

Authors: Charlotte Gaugne, Isabelle Panet, Marianne Greff-Lefftz, Mioara Mandea, Séverine Rosat
Affiliations: IPGP, Université Paris Cité, CNRS, IGN, ENSG Géomatique, IGN, CNES, Université de Strasbourg, CNRS, EOST, ITES
Due to the difficulty of obtaining direct observations, our knowledge of the core-mantle boundary (CMB) region remains limited. It has been proposed that temporal changes in CMB topography, coupled with core flows, may be involved in explaining rapid changes in the geomagnetic field. Here, we use GRACE satellite and Satellite Laser Ranging (SLR) measurements of the Earth's gravity field variations from 2003 to 2015 to search for such deep mass redistributions at the CMB. To identify transient patterns of gravity variations of small amplitude, we analyze the second-order spatial derivatives of the gravity potential and apply a multi-scale temporal analysis. We identify a rapid, anomalous north-south oriented gravity signal at large spatial scales across the Eastern Atlantic Ocean in January 2007 that evolves over months to years. We show that this signal cannot be fully explained by hydrological and/or oceanic mass redistributions, suggesting an origin within the solid Earth. Based on numerical modelling, we propose that this signal may result from vertical displacements of the perovskite to post-perovskite phase transition, caused by moving thermal anomalies near the base of the African Large Low Shear Velocity Province, potentially creating a decimetric dynamic CMB topography over a few years. These results highlight the potential of satellite gravimetry to bring new information on deep Earth dynamical processes.

Monday 23 June 14:00 - 15:30 (Room 1.15/1.16)

Session: A.02.10 ESA Agriculture Science Cluster

ESA’s Agriculture Science Cluster involves a number of ESA-funded projects and activities, bringing together different expertise, data, and resources in a synergistic manner so that the result is greater than the sum of its parts. It also provides a mechanism for collaboration with Horizon Europe projects.

Projects address a variety of issues, including supporting evidence-based decisions for improving food security at national to global scales, exploring novel EO observation capabilities to advance the scientific basis supporting the resilience of agricultural productivity to climate change, and transferring EO science results into mitigation measures and alternative management tools for a sustainable, climate-neutral agriculture. Applications being developed include monitoring of agricultural practices, irrigation management, water productivity, fertilizer and crop protection optimization, impact assessment of crop diseases and natural hazards, crop yield and quality forecasts, and pasture monitoring.

Agenda:


Cluster objectives and status


  • Espen Volden - ESA

Lightning talks of the projects:


  • DTC-EOAgriTWin
  • DTC-SaveCrops4EU
  • SUP-AgriCEM
  • SIF-LST4Drought
  • AFRI4CAST
  • WRM
  • HyRelief
  • CRISP
  • WorldCereal
  • ECOSTRESS HUB
  • YIPEEO
  • EO4NUTRI
  • EO4Cereal Stress
  • Agriculture Virtual Lab
  • CERBERUS
  • STELLA
  • ScaleAgData
  • AgriDataValue
  • TEMBO AFRICA
  • SYLVA
  • EIFFEL
  • THEROS
  • DINOSAR
  • WaterSense

Presentation of ideas for ESA Science projects 2026-28


  • Jean Bouchat - ESA

Discussion on new project ideas


  • Espen Volden - ESA

Monday 23 June 14:00 - 15:30 (Hall M1/M2)

Session: C.03.15 10 Years of Copernicus Sentinel-2 - PART 1

The Copernicus Sentinel-2 mission celebrates a decade of revolutionising Earth observation, offering unique insights into our planet. Since its launch in 2015, Sentinel-2 has provided high-resolution, multispectral imagery supporting a large range of applications, including agriculture, forestry, water resources management, natural disaster management and methane emissions monitoring. Its global reach, frequent revisit times, and open data policy have empowered policymakers, scientists, businesses, and communities worldwide.
This session celebrates the mission’s remarkable achievements over the past ten years, highlighting its contributions to addressing critical environmental and societal challenges.
Looking ahead, the session will also consider emerging opportunities for Sentinel-2. Experts will share their vision of the challenges, the innovations, and the mission’s evolving role in addressing global societal challenges.
Join us in celebrating 10 years of Copernicus Sentinel-2, reflecting on its transformative impact, and envisioning how it will continue to shape the future of Earth observation.

The session will be followed by a small celebration with drinks & cakes.

Presentations and speakers:


Celebrating 10 Years of Sentinel-2 in Orbit


  • Mauro Fachini (EC), Pierre Potin, Ferran Gascon, Janice Patterson (ESA)

A new paradigm for EO based land monitoring


  • Andreas Brink, Usue Donezar Hoyos (JRC/EEA)

Sentinel-2 – The development of a satellite for optical monitoring services


  • Wilhelm Gockel (Airbus)

The Sentinel-2 MSI recipe: a mixture of teamwork, willpower and perseverance


  • Vincent Chorvalli (Airbus)

Sentinel-2’s Eye on the Ocean: The Evolving use by Copernicus Marine


  • Antonio Reppucci (Mercator Ocean International)

The 10-Year Sentinel-2 Journey in 10 Flashbacks


  • Marc Paganini, Frank Martin Seifert (ESA)

Monday 23 June 14:00 - 15:30 (Room 0.94/0.95)

Session: F.05.08 Demonstrating the domestic benefits derived from Copernicus in ESA Member States: challenges and achievements

With the ESA Ministerial and the EU MFF discussions approaching, delegates from Member States are increasingly looking at ways to demonstrate the benefits derived from EO and from Copernicus for their own countries, not only to their colleagues in government but also to the general public. However, readily available economic figures are not always at hand, and they are not easy to derive due to several factors that are specific to the EO sector and to its dynamically evolving nature. Member States are therefore exploring a number of different methodologies to conduct nation-wide assessments, industrial surveys and collections of use cases. In this session, national delegations will share views and describe the efforts they are carrying out, the challenges encountered and possible solutions, as well as the success stories that they are ready to share with ESA, the EC and their peers. Particular attention will be paid to stories that, spanning borders, can be of use to neighbouring countries.

Moderator:


  • Alessandra Tassa - ESA

Speakers:


  • Helen Jones - UK Department for Science, Innovation and Technology
  • Louis George - UK Department for Science, Innovation and Technology
  • Adrian Strauch - German Federal Ministry of Digital and Transport
  • Andrea Taramelli - Italian Institute for Environmental Protection and Research
  • Alexis Foussard - French Ministry of ecological transition, biodiversity, forests, sea and fisheries
  • Lucie Lackar - ESA
  • Tim Lemmens - EC DG-DEFIS


Monday 23 June 14:00 - 15:30 (Hall N1/N2)

Session: D.03.02 Free Open Source Software for the Geospatial Domain: current status & evolution - PART 1

#cloud-native

Free and Open Source Software plays a key role in the geospatial and EO communities, fostered by organizations such as OSGeo, the Cloud Native Computing Foundation and Apache, and by space agencies such as ESA and NASA. This session showcases the status of OSS tools and applications in the EO domain and their foreseen evolution, with a focus on innovation and support for open science challenges.

Monday 23 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Pangeo Europe: A Community-Driven Approach to Advancing Open Source Earth Observation Tools Across Disciplines

#zarr #pangeo

Authors: Deyan Samardzhiev, Anne Fouilloux, Tina Odaka, Benjamin Ragan-Kelley
Affiliations: Lampata, Simula Research Laboratory, IFREMER
The Pangeo Europe community embodies the collaborative movement of free and open-source software (FOSS), transforming Earth Observation (EO) by fostering innovation and inclusivity. As an open platform, Pangeo Europe not only empowers researchers, industry professionals, and software developers within the geosciences but also extends its reach to other disciplines, creating synergies that enhance the robustness and adaptability of its tools and workflows. This presentation will explore how Pangeo Europe is advancing the state of open-source EO tools through community engagement, cross-disciplinary collaboration, and the integration of flagship European initiatives such as EarthCODE:
- Engaging the Community for Feedback and Roadmaps: Pangeo Europe actively involves users and developers in shaping its roadmap for future developments, including innovations like xDGGS, advanced Zarr implementations, and benchmarking tools. This participatory approach ensures the tools meet real-world needs and adapt to emerging scientific challenges.
- Facilitating End-User Adoption: Through training programs, onboarding resources, and addressing bottlenecks in technology adoption, Pangeo Europe makes it easier for researchers and practitioners to adopt and use advanced open-source EO tools effectively.
- Providing a Forum for Open Source Developers: The community creates opportunities for developers to collaborate, discuss best practices, and seek funding for advancing open-source software. By harnessing Europe’s collective expertise, Pangeo Europe strengthens the ecosystem for open geospatial innovation.
- Promoting FAIR Principles and Performance Benchmarks: The integration of FAIR (Findable, Accessible, Interoperable, Reusable) principles, alongside efforts to benchmark and optimise tool performance, ensures that Pangeo Europe delivers transparent, scalable, and efficient solutions.
- Showcasing EarthCODE for Collaborative Innovation: EarthCODE, the Earth Science Collaborative Open Development Environment, exemplifies the principles of Open Science and FAIR data management. Developed in response to FutureEO Independent Science Review 2022 recommendations and feedback from the 2023 Science Strategy Workshop, EarthCODE provides a unified, open-access environment for ESA-funded Earth System Science activities. It integrates key scalable cloud computing platforms and ecosystems for EO analysis, such as Pangeo, provides secure, long-term storage for research data, and enables scientists to seamlessly share their research data and workflow outputs while adhering to FAIR and open science principles. Additionally, EarthCODE offers robust community support and comprehensive training, led by Pangeo, empowering scientists to share and reuse research outputs more effectively and facilitating collaboration and innovation across disciplines.
- Driving Cross-Disciplinary Synergies: By promoting the Pangeo software stack beyond the geosciences, such as in bioimaging and cosmology, the community identifies shared challenges and solutions across domains. For instance, collaborations around data formats and metadata, such as OME-Zarr versus GEO-Zarr, foster innovation and help build generic, scalable, and reusable workflows for diverse scientific applications.
- Collaborating on Global Challenges: The cross-disciplinary outreach of Pangeo Europe not only broadens the applicability of its tools but also encourages a culture of co-creation where best practices from different domains converge to tackle challenges like climate change, biodiversity monitoring, and cosmological data analysis. For instance, the Global Fish Tracking System (GFTS), an ESA-funded use case of the Destination Earth initiative, uses the Pangeo ecosystem to model fish movement and develop a decision-support tool in support of conservation policies.
By showcasing real-world examples and the impact of its community-driven model, this session will highlight how Pangeo Europe fosters open innovation across disciplines, making EO tools more robust, generic, and adaptable while advancing the frontiers of open science.

Monday 23 June 14:00 - 15:30 (Hall N1/N2)

Presentation: The importance of seeding open source

Authors: Marco Bernasocchi
Affiliations: Opengis.ch Gmbh, QField.org Creator, QGIS.org Chair
This keynote delves into the transformative concept of "seeding" in the context of open-source software development. Using QField as a case study, it illustrates how a simple idea from a student project can evolve into a groundbreaking tool with a global impact. Today, QField stands as the leading fieldwork app, aiding hundreds of thousands of professionals in their daily work and contributing to addressing several Sustainable Development Goals (SDGs). The journey of QField is one of perseverance, innovation, and collaboration. Attendees will learn about the numerous challenges faced during the initial stages of development and the strategic steps that were crucial in overcoming them. From technical limitations and resource constraints to gaining visibility and trust within the open-source community, these early hurdles shaped the foundation for its eventual success. A key focus of the keynote is the pivotal role of community and industry involvement. The support, feedback, and collaboration of these groups helped QField grow far beyond its initial scope. With over 1.5 million downloads and a user base spanning the globe, QField’s success exemplifies how open-source projects can thrive through collective effort. This talk will highlight specific instances of how community contributions—both large and small—fueled the app’s development, improved its functionality, and broadened its impact. In addition to sharing QField's journey, the session will explore the broader significance of seeding and commitment in the open-source landscape. The concept of seeding emphasizes the importance of planting an idea, nurturing it with passion and persistence, and watching it grow into something that drives meaningful change. This process is not just about creating software; it is about fostering innovation, promoting collaboration, and advancing sustainability. 
Through this keynote, attendees will gain valuable insights into the practical and philosophical aspects of developing open-source software. They will leave inspired by the potential of open-source tools to address real-world challenges and motivated to plant their own seeds of innovation. Join us for an engaging discussion on how seeding can drive transformative impact and create a better, more sustainable world.

Monday 23 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Automatic Processing for High Resolution 3D Global Earth Coverage

Authors: Sylvia Sylvander, David Youssefi, Emmanuel Dubois, Emmanuelle Sarrazin, Dimitri Lallement, Yannick Tanguy, Roman Malinowski, Céline L’Helguen, Olivier Melet, Fabrice Buffe, Laurent Lebègue, Alexandre Constantin, Vincent
Affiliations: CNES
The acquisition and regular update of high-resolution, three-dimensional global coverage of the Earth presents a significant challenge for satellite Earth observation. CO3D (Optical Constellation for 3D; in French, Constellation Optique pour la 3D), developed by the French space agency CNES in partnership with Airbus DS, promises to be a major breakthrough in this field [1]. With a constellation of four satellites acquiring multi-spectral images at 50 cm resolution, it will enable the production of a worldwide digital surface model (DSM) with exceptional accuracy (1 meter in all three dimensions) within four years. The data will provide unprecedented value for both civilian and defence users. CNES has developed a complete 3D pipeline, composed of tools that are easy to integrate into processing chains. These tools are based on CNES’s multi-decade expertise in multi-view stereoscopic satellite optical high-resolution imaging [2][3]. They address two key aspects of space imagery: the physics of image acquisition and the processing of vast datasets. Advanced capabilities in large-scale processing and automation have been critical in meeting the challenges of the CO3D mission, which requires the automated processing of massive data volumes. The 3D pipeline is designed with the principles of openness, collaboration and continuous improvement. It provides capabilities for geometric refinement and projection, image matching, triangulation and 3D point cloud generation, DSM computation and digital terrain model (DTM) extraction [4], as well as performance assessment and comparison tools. Throughout the pipeline, due consideration is given to quantifying uncertainty, with an innovative state-of-the-art approach based on imprecise probabilities [5].
In order to overcome bottlenecks, such as generalisation in the context of global coverage, this pipeline integrates artificial intelligence (AI) models into carefully selected processing steps, such as the matching cost in stereo matching. Furthermore, it employs a synergistic approach combining 3D model extraction with automatic semantic segmentation [6], which guides the image matching and digital terrain model extraction procedures, as well as enhancing subsequent image segmentations. This automatic processing pipeline is integrated into the CO3D ground segment and is also available to external users, including institutions, private companies and universities. The pipeline is open to contributions via a GitHub repository [2]. It will also be implemented as an on-demand processing service on the GEODES EO-data platform [7] to process, for instance, the Spot World Heritage historic stereoscopic series. These 3D processing tools, when leveraged on CO3D and other optical satellite data sources such as Pleiades-HR, will play a crucial role in creating digital replicas of the Earth at global and local levels.
[1] CO3D project: https://cnes.fr/en/projects/co3d
[2] CARS 3D pipeline: https://github.com/CNES/cars
[3] PANDORA stereo matching pipeline: https://github.com/CNES/pandora
[4] Bulldozer DTM extraction tool: https://github.com/cnes/bulldozer
[5] Roman Malinowski, Emmanuelle Sarrazin, Loïc Dumas, Emmanuel Philippe Dubois, Sébastien Destercke. Robust Confidence Intervals in Stereo Matching using Possibility Theory. 2024. hal-04681082
[6] Dumas, L., Defonte, V., Steux, Y., and Sarrazin, E.: Improving Pairwise DSM with 3SGM: A Semantic Segmentation for SGM Using an Automatically Refined Neural Network, ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-2-2022, 167–175, https://doi.org/10.5194/isprs-annals-V-2-2022-167-2022, 2022
[7] GEODES project: https://www.cesbio.cnrs.fr/multitemp/geodata_hub_data_access_platform_for_cnes_earth_observation_data

Monday 23 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Enabling Large-Scale Earth Observation Data Analytics with Open Source Software

#stac

Authors: Gilberto Camara, Dr Rolf Simoes, Felipe Souza, Felipe Carlos
Affiliations: National Institute For Space Research (inpe), Brazil, Open Geo Hub Foundation, Menino Software Crafter
The current standard approach for open-source software for big Earth observation (EO) data analytics uses cloud computing services that allow data access and processing [1]. Given such large data availability, the EO community could benefit significantly from open-source solutions that access data from cloud providers and provide comprehensive data analytics support, including machine learning and deep learning methods. To fill this gap, the authors have developed the R package sits, an end-to-end environment for big EO data analytics based on a user-focused API. Using a time-first, space-later approach, it supports land classification with deep learning methods. It offers several capabilities not currently available together in other EO open-source analytics software. The sits package focuses on time series analytics. The aim is to use as much data as possible from the big EO data collections. Many EO analysis packages only support the classification of single-date or seasonal composites. In such cases, most of the temporal land use and land cover variation produced by the repeated coverage of remote sensing satellites is not used. Time series are a powerful tool for monitoring change, providing insights and information that single snapshots cannot achieve. Spatiotemporal data analysis is an innovative way to address global challenges like climate change, biodiversity preservation, and sustainable agriculture [2-4]. The sits package uses a "time-first, space-later" approach [5] that takes image time series as the first step to analyse remote sensing data. Time series classification produces a matrix of probability values for each class. In the "space-later" part of the method, a Bayesian smoothing algorithm improves these classification results by considering each pixel's spatial neighbourhood. Thus, the result combines spatial and temporal information [6]. An essential capability of the package is its support for multiple EO cloud services.
Using the STAC protocol [7], users can access services such as the Copernicus Data Space Ecosystem (CDSE), Amazon Web Services (AWS), Microsoft Planetary Computer (MPC), Digital Earth Africa, Digital Earth Australia, NASA's Harmonized Landsat-Sentinel collection, and the Brazil Data Cube (BDC). However, each provider has a particular implementation of STAC, and dealing with this lack of complete standardisation required substantial work by the authors. Machine learning and deep learning algorithms for spatiotemporal data require that analysis-ready data (ARD) from EO cloud services be converted to regular data cubes. Appel and Pebesma [8] define a data cube as an n-dimensional matrix of cells combining a 2D geographical location, a 1D set of temporal intervals, and a k-dimensional set of attributes. For each position in space, the data cube should provide a multidimensional time series. The data cube should give a valid 2D image for each time interval. Their definition is the basis for software design in packages such as openEO [9] and Open Data Cube [10]. In sits, we have extended the data cube definition by Appel and Pebesma [8] to include a further spatial dimension related to the spatial organisation used by ARD image collections. For example, Sentinel-2 images are organised in the MGRS tiling system, which follows the UTM grid. Thus, to process data spanning multiple UTM grid zones, EO data cubes need an extra dimension, given by the ARD tile. This extension enables sits to process large-scale data, unlike systems that adopt a more restricted data cube definition. For classification, sits supports a range of machine learning and deep learning algorithms, including support vector machines, random forests, temporal convolutional neural networks, and temporal attention encoders. It also includes object-based time series classification methods and spatiotemporal segmentation, allowing for detailed analysis of land cover changes over time and space.
One relevant feature of sits is its support for active learning [11], combining SOM (self-organising maps) and uncertainty estimation. SOM is a technique in which high-dimensional data is mapped onto a two-dimensional map [12]. The neighbours of each neuron of a SOM map provide information on intraclass and interclass variability, which helps detect noisy samples [13]. SOM maps are helpful in improving sample quality. Selecting good training samples for machine learning classification of satellite images is critical to achieving accurate results. Currently, SOM is one of the few data analysis methods that enables the quality of each sample to be assessed independently of the other samples for the same class. Active learning is an iterative strategy for optimising training samples. At each round, users analyse the data using the SOM maps to remove outliers; then, they classify the area and compute uncertainty maps to define critical areas for new sample collection. Users repeat this procedure until they obtain a final set of low-noise and high-validity training samples. To support Bayesian smoothing and uncertainty estimates, the output of the machine learning classifiers in sits is a set of probability matrices: each pixel's time series is associated with a set of probabilities estimated by the classifier, one for each class. Using probability maps as an intermediate step between the classification algorithm and the categorical maps has proven highly relevant for improving the results of land classification. In conclusion, sits provides an integrated workflow for satellite data handling, including pre-processing, sampling, feature extraction, modelling, classification, post-classification analysis, uncertainty estimation and accuracy assessment. Designed with a clear and direct set of functions, it is accessible to users with basic programming knowledge. Its easy-to-learn API simplifies the complex tasks associated with large-scale EO data analysis.
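The "space-later" idea of revising per-pixel class probabilities using their spatial neighbourhood can be sketched as follows. This is a plain moving-average toy, not the Bayesian smoothing algorithm implemented in sits, and the probability values are invented for illustration:

```python
# Toy "space-later" step: smooth per-pixel class probabilities over the 3x3
# spatial neighbourhood before taking the final class. A simplified sketch
# only; sits uses a Bayesian smoothing algorithm, not a plain average.

def smooth_probs(probs):
    """probs: 2D grid of per-class probability lists; returns smoothed grid."""
    rows, cols = len(probs), len(probs[0])
    n_classes = len(probs[0][0])
    out = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            neigh = [probs[a][b]
                     for a in range(max(0, i - 1), min(rows, i + 2))
                     for b in range(max(0, j - 1), min(cols, j + 2))]
            out[i][j] = [sum(p[k] for p in neigh) / len(neigh)
                         for k in range(n_classes)]
    return out

def classify(probs):
    """Assign each pixel the class with the highest probability."""
    return [[max(range(len(p)), key=p.__getitem__) for p in row]
            for row in probs]

# An isolated class-0 pixel surrounded by class-1 pixels (invented values):
grid = [[[0.4, 0.6]] * 3,
        [[0.4, 0.6], [0.6, 0.4], [0.4, 0.6]],
        [[0.4, 0.6]] * 3]
labels = classify(smooth_probs(grid))
```

After smoothing, the isolated centre pixel is reassigned to the class of its neighbourhood, which is the kind of "salt-and-pepper" correction the spatial step provides.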
The open-source software is available in the standard R repository CRAN, and the source code is on https://github.com/e-sensing/sits. The online book at https://e-sensing.github.io/sitsbook/ enables step-by-step learning with many examples.
References:
[1] Vitor C. F. Gomes, Gilberto R. Queiroz, and Karine R. Ferreira. "An Overview of Platforms for Big Earth Observation Data Management and Analysis". In: Remote Sensing 12.8 (2020), p. 1253.
[2] Curtis E. Woodcock et al. "Transitioning from Change Detection to Monitoring with Remote Sensing: A Paradigm Shift". In: Remote Sensing of Environment 238 (2020), p. 111558.
[3] Valerie J. Pasquarella et al. "From Imagery to Ecology: Leveraging Time Series of All Available Landsat Observations to Map and Monitor Ecosystem State and Dynamics". In: Remote Sensing in Ecology and Conservation 2.3 (2016), pp. 152–170.
[4] Michelle Picoli et al. "Big Earth Observation Time Series Analysis for Monitoring Brazilian Agriculture". In: ISPRS Journal of Photogrammetry and Remote Sensing 145 (2018), pp. 328–339.
[5] Gilberto Camara et al. "Big Earth Observation Data Analytics: Matching Requirements to System Architectures". In: 5th ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data. Burlingame, CA, USA: ACM, 2016, pp. 1–6.
[6] Rolf Simoes et al. "Satellite Image Time Series Analysis for Big Earth Observation Data". In: Remote Sensing 13.13 (2021), p. 2428.
[7] M. Hanson. "The Open-source Software Ecosystem for Leveraging Public Datasets in Spatio-Temporal Asset Catalogs (STAC)". In: AGU Fall Meeting Abstracts 23 (2019).
[8] Marius Appel and Edzer Pebesma. "On-Demand Processing of Data Cubes from Satellite Image Collections with the Gdalcubes Library". In: Data 4.3 (2019).
[9] Matthias Schramm et al. "The openEO API: Harmonising the Use of Earth Observation Cloud Services Using Virtual Data Cube Functionalities". In: Remote Sensing 13.6 (2021), p. 1125.
[10] Adam Lewis et al. "The Australian Geoscience Data Cube — Foundations and Lessons Learned". In: Remote Sensing of Environment 202 (2017), pp. 276–292.
[11] M. M. Crawford, D. Tuia, and H. L. Yang. "Active Learning: Any Value for Classification of Remotely Sensed Data?" In: Proceedings of the IEEE 101.3 (2013), pp. 593–608.
[12] T. Kohonen. "The Self-Organizing Map". In: Proceedings of the IEEE 78.9 (1990), pp. 1464–1480.
[13] Lorena A. Santos et al. "Quality Control and Class Noise Reduction of Satellite Image Time Series". In: ISPRS Journal of Photogrammetry and Remote Sensing 177 (2021), pp. 75–88.

Monday 23 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Deploying your own openEO: the Geotrellis backend

Authors: Ir. Jeroen Dries, Ir. Jeroen Verstraelen, Dr. Ir. Stefaan
Affiliations: VITO Remote Sensing
The openEO community maintains a set of OSS components, ranging from client libraries to full backend solutions, that allow you to process EO data at scale in the cloud or on HPC. The openEO open standard allows workflows expressed with predefined processes to be executed on different technology stacks, and allows the processing to be distributed across multiple backends. This standard and technology are relevant for anyone offering EO products, for public or commercial purposes. In this presentation, we will mainly cover the Geotrellis backend, which is deployed operationally in the Copernicus Data Space Ecosystem. This instance, which enables processing of the full archives of the Copernicus Sentinel missions among others, has already achieved a number of noteworthy milestones:
- Full crop map production for 5 years in all countries of EU38 at 10 m resolution.
- Global preprocessing of Sentinel-1 & 2 for operational global land cover monitoring.
- Creation of user-tailored crop maps anywhere in the world in WorldCereal.
- On-the-fly regridding of Sentinel-3 products from sensor geometry to regular datacubes.
We’ll go over the architecture based on Apache Spark, and how the focus on performance has enabled these achievements at a cost that is often lower than that of many proprietary and open-source alternatives. At the same time, it doesn’t compromise on usability and aims to hide the complexities of distributed processing. Recent efforts have focused on making it easier to run the backend locally, or on a Kubernetes cluster. This provides an opportunity for anyone seeking to speed up iterative development, or even debug the full stack. The results of this component are of course only achieved by integrating many other OSS libraries, including Xarray, Geotrellis, PROJ, GeoPandas, Spark and GDAL. For the deployment we rely on Kubernetes and Elasticsearch, as well as a host of other DevOps components.
Likewise, processing system operators can consider deploying the same software as a service or on premises, and even joining the CDSE openEO federation.
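The deferred-execution model behind openEO (client code assembles predefined processes into a process graph, which a backend such as the Geotrellis one later executes at scale) can be sketched with a toy graph builder. The class and function below mirror openEO concepts but are hypothetical, not the actual openEO client API:

```python
# Toy sketch of openEO-style deferred execution: nothing is computed here;
# the client only builds a graph of predefined processes that a backend
# would later walk and execute. Hypothetical classes, NOT the real client API.

class Node:
    def __init__(self, process, args):
        self.process, self.args = process, args

    def reduce_temporal(self, reducer):
        """Append a temporal reduction process to the graph."""
        return Node("reduce_dimension",
                    {"data": self, "dimension": "t", "reducer": reducer})

    def to_graph(self):
        """Serialize the chained processes into a nested dict (process graph)."""
        args = {k: (v.to_graph() if isinstance(v, Node) else v)
                for k, v in self.args.items()}
        return {"process_id": self.process, "arguments": args}

def load_collection(cid, temporal_extent):
    return Node("load_collection",
                {"id": cid, "temporal_extent": temporal_extent})

cube = load_collection("SENTINEL2_L2A", ["2024-01-01", "2024-12-31"])
graph = cube.reduce_temporal("mean").to_graph()
```

Because the graph is plain data, the same workflow can be shipped to any backend implementing the standard, which is what allows processing to be distributed across technology stacks.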

Monday 23 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Scalable Workflows for Remote Sensing Data Analysis in Julia

#zarr

Authors: Lazaro Alonso, Fabian Gans, Felix Cremer
Affiliations: Max-Planck Institute for Biogeochemistry
Analysis-ready remote sensing data cubes provide a basis for a variety of scientific applications. In essence, they allow users to access remote sensing data as very large multi-dimensional arrays stored either on traditional hard drives or in object stores in cloud environments [1-2]. However, to actually work with the analysis-ready data, convenient software for accessing and analyzing these large datasets in an efficient and distributed way is required. Many data processing tasks are I/O-limited, so to maximize efficiency, data processing tools need to be aware of the internal chunking structure of the datasets. The goal is to apply user-defined functions over arbitrary dimension slices of large datasets or to perform groupby-combine operations along dimensions. A very popular tool for these tasks in the Python programming language is xarray, using Dask for delayed and distributed evaluation, by mapping storage chunks to nodes of processing chunks. However, for very large problems or some mapreduce-like operations, task graphs can become very large or hard to resolve, so users trying to scale their algorithms to the multi-terabyte scale regularly run into issues of unresolvable task graphs when applying their user-defined algorithms. Specialized computing backends for certain tasks, like flox for mapreduce, have been developed to mitigate these problems. The Julia programming language is designed for scientific computing and is known for its good performance and scalability in scientific applications. Here we present a family of Julia libraries that are designed to work together and cover the full stack of data cube analysis, such as data access across different gridded data formats, data models for labeled dimensional arrays, and tooling for distributed large-scale processing of data cubes. 
DiskArrays.jl provides a common interface for dealing with high-latency multidimensional array data sources and provides the array interface for I/O packages like NetCDF.jl, Zarr.jl, and ArchGDAL.jl. DimensionalData.jl wraps DiskArrays into arrays with labeled dimensions and very fast dimension-lookup based indexing. DiskArrayEngine.jl is built on top of DiskArrays.jl and provides efficient application of user-defined functions in a moving-window fashion over huge disk-based or remote arrays. It is aware of the data's underlying chunking structure when scheduling distributed computations, but does not do a strict mapping between storage chunks and DAG nodes, and therefore avoids materializing huge graphs in memory and the scaling issues that follow from this. It forms the basis of YAXArrays.jl, which provides a user-friendly interface to the functionality of DiskArrayEngine.jl, enabling users to run their custom analysis functions over combinations of slices and windows in different dimensions, as well as flexible reductions. Distributed computing is supported by Distributed.jl for embarrassingly parallel operations or Dagger.jl for mapreduce-based operations. These operations scale to very large multi-terabyte arrays and don't break even when provided arrays with millions of small chunks. We present benchmarks showing that YAXArrays.jl can compete with state-of-the-art Python libraries like flox for mapreduce operations. Although some parts of the software stack presented here are still under development, they have been successfully used in a number of scientific applications, such as extreme event detection [3], characterization of vegetation-climate dynamics [4], or Sentinel-1 based forest change detection at the European continental scale [5], with very promising results.
These libraries might become an important building block to enable the Julia remote sensing community to bring their high-resolution remote sensing applications to the continental and global scale. [1] https://esd.copernicus.org/articles/11/201/2020/ [2] https://arxiv.org/abs/2404.13105 [3] https://bg.copernicus.org/articles/18/39/2021/ [4] https://bg.copernicus.org/articles/17/945/2020/bg-17-945-2020.html
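The core idea of chunk-aware processing (visit one storage chunk at a time and carry only a small running state, instead of materializing a task graph over millions of chunks) can be sketched in a few lines. This is a generic illustration of the principle, not the DiskArrayEngine.jl implementation:

```python
# Chunk-aware streaming reduction: compute a global mean by visiting one
# storage chunk at a time, keeping only running (sum, count) state, so no
# task graph or full array is ever materialized. A generic sketch of the
# idea, not DiskArrayEngine.jl's actual scheduling.

def chunked(values, chunk_size):
    """Yield successive chunks, standing in for reads from a chunked store."""
    for start in range(0, len(values), chunk_size):
        yield values[start:start + chunk_size]

def streaming_mean(chunks):
    total, count = 0.0, 0
    for chunk in chunks:  # only one chunk resident in memory at a time
        total += sum(chunk)
        count += len(chunk)
    return total / count

data = list(range(1, 1001))  # pretend this lives in object storage
mean = streaming_mean(chunked(data, chunk_size=64))
```

Because the running state is constant-size, the same pattern works whether the array has ten chunks or ten million, which is precisely the scaling property the abstract contrasts with graph-materializing schedulers.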

Monday 23 June 14:00 - 15:30 (Hall L1/L2)

Session: E.01.03 Green Transition Information Factories (GTIF) – EO driven solutions to address sustainability challenges

The ESA GTIF (Green Transition Information Factories) initiative is addressing information needs in the context of the Green Transition and the European Green Deal. It seeks to develop EO-driven solutions to key Green Transition challenges by utilizing EO together with geospatial and interdisciplinary data, as well as advanced analytics and UI/UX technology. GTIF capabilities are developed in a co-creation approach with actors and stakeholders across the various Green Transition domains and are iteratively evolved to address their operational needs.

The initial GTIF Demonstrator was developed for Austria and released publicly in Spring 2023 [https://gtif.esa.int/]. The Austrian demonstrator is currently going through a consolidation phase aimed at optimising the capabilities for stakeholders, for example from the energy and mobility sectors. Moreover, capabilities are being operationalised and transitioned into cloud-based, on-demand services that are optimised for FAIR-compliant re-use and made available at a commercial level. Finally, the Austrian consolidation phase will also facilitate the handover to the Austrian community for future operation, maintenance and evolution.

Several additional GTIF projects have been initiated in the last year. Regionally, these cover the Baltic countries, the UK, Ireland, France and the North Atlantic region between Canada and Northern Europe. Each project develops several new capabilities, for example in the renewable energy, mobility and agri-/aquaculture domains.

This session will provide an overview of the larger GTIF initiative and the various contributing projects. It will further showcase selected capabilities that were developed and highlight aspects of operationalisation and uptake in stakeholder networks. Finally, the larger ESA strategy for scaling GTIF, its linkages to the "Space for a Green Future" accelerator (S4GF), and the envisioned business logic for future operations and long-term sustainability will be presented.

Presentations and Speakers:


ESA GTIF Initiative: overview & outlook


  • Patrick Griffiths - ESA

UK, Ireland & France GTIF: Food Security, Energy Transition, Sustainable Cities


  • Rui Song - UCL Mullard Space Science Laboratory

Baltic GTIF & Heat Trend Analysis capability


  • Daro Krummrich - OHB-DC

Cerulean Information Factory


  • David Artus - Polar View

GTIF Austria Consolidation and Evolution


  • Helmut Herglotz - EOX

High-resolution, national-scale mapping of Austrian renewable energy potential and capabilities – from prototype to commercial offering


  • Radoslaw Guzinski - DHI

Monday 23 June 14:00 - 15:30 (Hall F2)

Session: A.02.03 EO for Agriculture Under Pressure - PART 1

The human impact on the biosphere is steadily increasing. One of the main human activities contributing to this is agriculture. Agricultural crops, managed grasslands and livestock are all part of the biosphere and our understanding of their dynamics and their impacts on other parts of the biosphere, as well as on the wider environment and on the climate is insufficient.
On the other hand, today's agriculture is under pressure to produce more food in order to meet the needs of a growing population with changing diets, and this despite a changing climate with more extreme weather. It is required to make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and its negative impact on the environment, and to deliver accessible, affordable and healthy food.
Proposals are welcome from activities aiming at increasing our understanding of agriculture dynamics and at developing and implementing solutions to the above-mentioned challenges of agriculture, or supporting the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross site research and benchmarking studies, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM) are welcome.

The session will hence cover topics such as:
- Impact on climate and environment
- Crop stressors and climate adaptation
- Food security and Sustainable Agricultural Systems
- New technologies and infrastructure

Monday 23 June 14:00 - 15:30 (Hall F2)

Presentation: Optimizing Nitrogen Fertilization in Rice Cultivation Using Sentinel-2 and PlanetScope Data: The Detectoryza Project

Authors: Katarzyna Cyran, Belen Franch, Fatima Della Bellver, Javier Tarin-Mestre, Pancracio Piera Peiro, Concha Domingo, Francisco Girona Lopez, Alberto San Bautista
Affiliations: Global Change Unit, Image Processing Laboratory, University Of Valencia, Department of Geographical Sciences, University of Maryland, Cooperativa Valenciana del Camp Unió Cristiana (UNIANA), Centre de Genòmica, Institut Valencià d'Investigacions Agràries (IVIA), Federació Cooperatives Agroalimentàries de la Comunitat Valenciana (CACV), Universitat Politécnica de València
Rice is a major cereal crop and a staple food for nearly half the global population. Nitrogen (N) fertilizers are essential in rice cultivation for crop growth and achieving optimum yields, but their excessive use can cause environmental contamination. Funded by the Agencia Valenciana de Innovació, the DETECTORYZA project leverages EO technologies to support sustainable rice farming in the Valencian province. This study evaluates Sentinel-2 and PlanetScope data for monitoring N content in two rice varieties (Jsendra and Argila) cultivated in the Albufera natural reserve (Valencia, Spain) during the 2023 and 2024 seasons. The Albufera reserve, encompassing Spain’s largest lake and serving as a stop for migratory birds, is dominated by rice farming, which accounts for 70% of its area. Fifteen fields of Jsendra and five fields of Argila were treated with different fertilization levels, from under-fertilization (< 350 kg N/ha) to over-fertilization (> 500 kg N/ha), split into two doses: 80% applied before planting and 20% at tillering. Sentinel-2 L1C images were atmospherically corrected using LaSRC, and PlanetScope data from the SuperDove sensors were processed for the two seasons. The multispectral signal in each parcel was exploited to determine the band combination best correlated with N content and to build an EO-based model for estimating the Nitrogen Nutrition Index (NNI). Ground data collected include SPAD chlorophyll readings, LAI taken with the AccuPAR instrument, hyperspectral reflectance with the ASD radiometer, and biomass samples for laboratory analysis of N concentration in leaves (N%) and dry weight of plants. Results indicated that the best spectral separability for both varieties was early in the season, using the Red Edge (B6-B7), NIR (B8) and SWIR (B12) bands. The 2023 model for N content estimation using a linear combination of B2 and B5 at tillering achieved an R² of 0.53 and an RMSE of 9.15 g m-2.
In contrast, a model for N% estimation achieved an R² of 0.69 and an RMSE of 0.71% using bands B4 and B12 for the same date. Preliminary results for 2024 suggest improved N content estimation due to better fertilization separability in the spectrum. Mathematical methods, such as first-order derivatives and smoothing, are also being tested on the original spectra, along with several vegetation indices. PlanetScope data showed good consistency with Sentinel-2 results. The NNI, calculated as the ratio of actual to optimal N content, proved to be a valuable decision-support tool for optimizing N fertilizer use. Early findings highlight the potential of EO data for refining fertilization doses, promoting sustainable rice cultivation and preserving the Albufera ecosystem. An online platform integrating these findings is under development and will incorporate 2024 season data to provide farmers with recommendations for optimum doses of N application.
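The NNI described above divides measured N by an optimal value; a common way to define that optimum in agronomy is a critical N dilution curve, Nc = a * W^(-b), with W the shoot biomass. The sketch below illustrates the arithmetic only; the coefficients and input values are placeholders, not the values calibrated in this study:

```python
# NNI arithmetic sketch. The Nitrogen Nutrition Index is the ratio of
# measured N concentration to the critical (optimal) concentration, here
# defined by a generic dilution curve Nc = a * W**(-b). The coefficients
# a, b and the sample inputs are hypothetical, NOT the project's values.

def critical_n(biomass_t_ha, a=3.5, b=0.28):
    """Critical N concentration (%) from a dilution curve (placeholder a, b)."""
    return a * biomass_t_ha ** (-b)

def nni(measured_n_pct, biomass_t_ha):
    """NNI > 1 suggests N surplus; NNI < 1 suggests N deficiency."""
    return measured_n_pct / critical_n(biomass_t_ha)

# Hypothetical field: 2.1% leaf N at 6 t/ha of biomass.
index = nni(measured_n_pct=2.1, biomass_t_ha=6.0)
```

An EO-based model that estimates N% and biomass from reflectance can feed this ratio directly, which is how a per-parcel NNI map becomes a fertilization decision-support layer.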
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall F2)

Presentation: EARTH OBSERVATION DATA TO SUPPORT MEASURES TO REDUCE METHANE EMISSIONS FROM RICE CULTIVATION

Authors: Thuy Le Toan, PhD Stephane Mermoz, Dr. Nguyen Lam Dao, Hironori Arai, Alexandre Bouvet, PhD Juan Doblas, PhD Thierry Koleck, Linda Tomasini, My Nguyen Thanh
Affiliations: Cesbio, Globeo, Vietnam National Space Center, The University of Osaka, Centre National d'Etudes Spatiales, Rynan Technologies
Anthropogenic methane emissions are a major cause of rising global average temperatures. To slow global warming, reducing human-caused methane emissions is considered the most effective and least costly strategy in the short term. This is the path chosen by 158 countries that, since 2021, have committed to reducing their methane emissions by 30% by 2030 to keep the 1.5°C warming limit within reach. Agriculture is a major anthropogenic source of CH4, of which rice paddies are one of the largest. Global CH4 emissions from rice paddies during the decade 2008–2017 are estimated at 30 [25–38] Tg CH4 yr-1 (Saunois et al., 2019), with the largest emissions observed in monsoon Asia, which accounts for ~87% of the global rice area. In flooded rice fields, the decomposition of organic matter in an oxygen-free environment produces methane. Traditionally, rice fields are flooded throughout the plant cycle. However, intermittent drainage, except at key rice phenological stages, has been shown to significantly reduce methane emissions (by 40–50%) and reduce the volume of water required (by 30%) without affecting rice yield. Therefore, a recognized solution to reduce methane emissions from rice fields is to perform one or more drainages during the rice growing season. For mitigation measures by national authorities or by organizations that provide financing to rice farmers through low-carbon incentive credits, it is essential to monitor water regimes in rice fields at different stages of the plant growth cycle. Information is needed at the local level, for farmer reporting, but also at the regional level for national CH4 inventories and subsequent actions to reduce emissions. In this context, the MERIMEE (Mekong Rice Methane Emission Experiment) project was initiated. The project is part of the SCO initiative (Spatial Climate Observatory of the National Center for Space Studies). 
The objective of MERIMEE is to develop a methodology using remote sensing data, in situ observations and IoT measurements to estimate methane emissions from rice fields, with the aim of providing a tool for stakeholders to simulate different water management scenarios to reduce methane emissions. Remote sensing provides the following information: 1) Dynamic mapping of rice growth: using Sentinel-1 time series data, numerous methods have been developed to map rice extent and growth stage; time series of these maps are used to determine rice area, cropping calendar, number of crops per year, etc., and to evaluate their changes over the last decade; 2) Mapping of rice field flooding status using L-band SAR, which can penetrate a rice canopy even at the full growth stage. Currently, ALOS-2/PALSAR-2 data (with a repeat frequency of 14 days) are provided by JAXA over the Mekong Delta, under the CH4Rice initiative of Asia Rice. Note that the algorithm to detect rice flooding status using L-band SAR data requires as input the rice growth stage (or phenological stage, or rice age), which is provided by dense C-band SAR time series data. The in-situ data consist of ground surveys at satellite overpasses and automatic water level devices installed in rice fields selected to cover different varieties, crop calendars, and management practices. Newly developed and tested automatic methane emission chambers (by Rynan Technologies) are being deployed in experimental fields. To estimate emissions from rice fields, the IPCC Guidelines describe a generic equation: GHG emissions = Activity data x Emission factors, where Activity data represents the area of rice harvested each year (rice area x number of crops per year) and Emission factors represent the coefficients of daily emissions per unit area (e.g. kg CH4 per rice area). 
For national GHG inventories, if measurements of emission factors are not available, Tier 1 reporting can use the IPCC default emission factors. However, different default emission factors are given based on water management practices (continuous flooding, single or multiple drainage), and with or without organic amendment (IPCC Guidelines 2019). Most necessary Tier 1 information can therefore be provided by remote sensing: rice area, number of crops per year, duration of each crop, and number of drainages per growing season. In addition, experimental measurements of methane emissions will be compared to IPCC default emission factors, with the aim of adopting the Tier 2 approach if sufficient emission data are collected. The final products will be dynamic maps of rice fields at different growth stages, their flooding status, and maps of seasonal methane emissions estimated on a field basis. The products will be displayed on a platform that can be used for methane emissions inventory purposes and also to allow users to simulate different scenarios to reduce methane emissions by changing irrigation practices (and organic amendments). The tool is expected to be useful for governmental, non-governmental, corporate, and international organizations that provide carbon credits to farmers. Finally, we will discuss the potential application of the approach to rice-producing countries in Asia, particularly with the use of Sentinel-1C and -1D, NISAR, ALOS-4, and the upcoming ROSE-L. References: Saunois, M., Stavert, A. R., Poulter, B., Bousquet, P., Canadell, J. G., Jackson, R. B., ... & Zhuang, Q. (2019). The global methane budget 2000–2017. Earth System Science Data Discussions, 2019, 1-136.
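The generic IPCC relationship quoted above (emissions = activity data x emission factors) amounts to straightforward arithmetic once a water-regime scaling is applied. In this sketch the daily emission factor and the drainage scaling value are illustrative placeholders, not official IPCC defaults:

```python
def rice_ch4_emissions_kg(area_ha, crops_per_year, season_days,
                          daily_ef_kg_per_ha, water_scaling=1.0):
    """Seasonal CH4 = harvested area * crops/year * season length * daily
    emission factor, scaled for the water regime (e.g. intermittent drainage).
    All factor values passed in here are placeholders for illustration."""
    return area_ha * crops_per_year * season_days * daily_ef_kg_per_ha * water_scaling

# Continuous flooding vs. multiple drainage (scaling factor assumed)
flooded = rice_ch4_emissions_kg(1000, 2, 120, 1.2)
drained = rice_ch4_emissions_kg(1000, 2, 120, 1.2, water_scaling=0.55)
print(flooded, drained)
```

Remote sensing supplies exactly the inputs this calculation needs: rice area, crops per year, season duration, and the number of drainages that selects the scaling factor.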
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall F2)

Presentation: Agricultural soil NO emissions derived from TROPOMI observations in the AGATE project.

Authors: Ronald Van Der A, Dr. Jieying Ding, Xiaojuan Lin, Henk Eskes
Affiliations: KNMI, Tsinghua University
Regions with intensive agriculture, e.g. India, East China, the Netherlands/Belgium and the Po Valley, often suffer from air pollution and acidification of the soil. Excessive anthropogenic emissions of nitrogen compounds to the environment have a major effect on the biogeochemical nitrogen cycle. Agricultural activities produce substantial ammonia (NH3) and nitric oxide (NO) emissions. Nitrogen oxide (NOx=NO+NO2) emissions mainly stem from fossil fuel combustion, while soil emissions are dominant in remote areas. The role of soil NO emissions in air quality is often underestimated. Current methods for estimating emissions of these gases are based on the collection of activity data with associated emission factors, which have large uncertainties. In the ESA AGATE project, we have started to derive agricultural emissions of methane (CH4), NH3, and NOx independently by using satellite observations, i.e. without relying on reported or a priori information. Several inversion algorithms have been developed to estimate emissions of these gases using satellite observations. In this presentation we will focus on the results for the derived NO emissions from soil. The latest version of the inversion algorithm DECSO (Daily Emissions Constrained by Satellite Observations) has been used to derive NOx and NH3 emissions for Europe on a daily basis, averaged to monthly mean maps with a precision of 25%. These are based on observations of TROPOMI (Sentinel-5P) and CrIS. In a newly developed post-processing step, anthropogenic NOx emissions are separated from soil NO emissions. Soil NO emissions are derived by taking into account the land-use fraction and the specific climate zone. Results will be presented for agricultural land and forestry for regions in Europe and Asia. These results will be discussed and compared to existing bottom-up estimates of soil NO emissions.
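The separation step described above can be illustrated by a deliberately simplified partition in which soil NO scales with the agricultural land-use fraction and a climate-zone weight. Both factors here are hypothetical stand-ins, not the actual DECSO post-processing:

```python
def split_soil_no(total_nox, agri_fraction, climate_weight):
    """Partition a grid cell's total NOx emission into a soil-NO part
    (agricultural land-use fraction * climate-zone weight, both hypothetical)
    and an anthropogenic remainder."""
    soil = total_nox * agri_fraction * climate_weight
    return soil, total_nox - soil

soil, anthro = split_soil_no(10.0, 0.6, 0.5)  # units arbitrary for illustration
print(soil, anthro)
```

The real post-processing presumably also uses seasonal and meteorological information; this sketch only conveys the land-use-weighted partitioning idea.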
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall F2)

Presentation: Enabling Robust Crop Yield Forecasting by Calibrating the AquaCrop Model with Sentinel-1 and Sentinel-2 Data

Authors: MSc Márton Tolnai, MSc Pál Tasnádi, MSc Bálint Alföldy
Affiliations: CropOM-Hungary Kft.
Accurate and timely crop yield forecasting is essential for addressing global challenges related to food security and sustainable agricultural resource management. Traditional approaches relying on either empirical methods or process-based crop models often face limitations in scalability and precision due to data constraints and the inability to account for spatial variability in crop development. To overcome these challenges, this study integrates Sentinel-1 and Sentinel-2 Earth observation (EO) data with AquaCrop—a process-based crop development model—by calibrating the model’s growth stages with satellite-derived biophysical parameters. The combined framework aims to improve crop yield forecasting across diverse agro-ecological regions and cropping systems in the Danube Region. AquaCrop, developed by the Food and Agriculture Organization (FAO), simulates crop growth and yield responses to water availability. Its strength lies in its parsimonious parameterization and ability to represent water-driven growth processes accurately. However, its performance heavily depends on precise calibration of growth stages such as emergence, flowering, senescence, and maturity. This study utilizes high-resolution EO data to dynamically update AquaCrop’s key parameters at field level, enhancing the model’s accuracy. To quantify the improvement in model performance, we assess the difference in accuracy between the baseline AquaCrop model and the EO-enhanced model, and compare the model outputs with actual yield data measured in the field. This evaluation is based on a study involving three major crop types—wheat, corn, and sunflower—across a minimum of 100 agricultural fields per crop type, spanning three growing seasons (2021/2022, 2022/2023, 2023/2024). All sampled fields are located within the Danube Region. The results indicate that the raw model yielded a root mean square error (RMSE) of 2.63 t/ha. 
In contrast, the EO-enhanced model significantly improved accuracy, reducing the RMSE to 0.79 t/ha, thus bringing the prediction error below the threshold of 1.0 t/ha RMSE, a level considered acceptable for most agricultural applications. To demonstrate the scalability of this approach, an operational yield forecasting service has been onboarded to the Danube Information Factory, a data integration platform utilizing the Copernicus Data Space Ecosystem. This integration was carried out as part of a project funded by the European Space Agency (ESA). The service provides county-level (NUTS3) crop-specific yield potential and water demand layers for Austria, Hungary, and Romania, with daily updates. These datasets have recently been published on the RACE dashboard, where they are now accessible to the public. This integration of EO data and AquaCrop offers several advantages, including improved scalability, reduced dependence on ground-based observations, and enhanced capacity to capture spatial heterogeneity in crop growth and water stress. It also facilitates early-season yield forecasting by identifying deviations in crop development from baseline trends. However, challenges such as the impact of EO data gaps, cloud cover interference (for Sentinel-2), and the need for region-specific calibration remain areas for further research. In conclusion, calibrating AquaCrop growth stages with Sentinel-1 and Sentinel-2 EO data represents a significant advancement in crop yield forecasting. This novel approach has the potential to transform decision-support systems in agriculture, enabling data-driven resource management and enhancing resilience to climatic uncertainty across diverse cropping systems.
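The accuracy comparison above rests on computing the RMSE of predicted versus field-measured yields for both model variants. A minimal sketch, with hypothetical field values rather than the study's data:

```python
import math

def rmse(predicted, observed):
    """Root mean square error between paired yield predictions and
    field measurements (t/ha)."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

measured = [6.1, 7.4, 5.2, 8.0]   # harvested yields, t/ha (hypothetical)
baseline = [4.0, 9.8, 7.5, 5.6]   # uncalibrated AquaCrop run (hypothetical)
enhanced = [5.8, 7.9, 5.0, 7.6]   # EO-calibrated run (hypothetical)
print(rmse(baseline, measured) > rmse(enhanced, measured))
```

In the study itself the same comparison, over 100+ fields per crop and three seasons, produced the reported drop from 2.63 to 0.79 t/ha.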
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall F2)

Presentation: Modelling crop growth of maize by assimilating active and passive remote sensing data

Authors: Miriam Machwitz, Manuela Montella, Christian Bossung, Marco Chini, Jean-Francois Iffly, Nazzareno Pierdicca, Charles Werner, Carlos Lopez-Martinez, Julia Kubanek
Affiliations: Luxembourg Institute Of Science And Technology, Sapienza Università di Roma, CommSensLab – UPC, Department of Signal Theory and Communications, Universitat Politècnica de Catalunya (UPC) and IEEC-UPC, Gamma Remote Sensing, European Space Agency
Crop failures are increasingly threatening food security, occurring more frequently as a consequence of climate change. A major contributing factor is drought, which is now affecting many regions that historically did not face such challenges. The effect of droughts on crops depends on many factors, such as soil condition, crop type, phenological stage, or management. Crop growth models (CGMs) are valuable tools for observing and analyzing crop development, as well as for forecasting and potentially preventing crop failures. However, these models typically lack spatial information, and their parameterization requires extensive data inputs. To enable broader and more spatially comprehensive applications, data assimilation methods have been developed. A common approach is the assimilation of the Leaf Area Index (LAI) into these models. However, estimating LAI from remote sensing data introduces additional sources of uncertainty and requires further processing steps. Linking crop growth models with radiative transfer models such as PROSAIL offers the possibility of directly assimilating reflectance data. However, as with all optical data, reflectance measurements are limited by cloud cover and only capture information from the upper vegetation canopy or soil surface. In contrast, active radar data can penetrate vegetation layers and, depending on the frequency, can even provide insights into soil conditions. For example, L-band Synthetic Aperture Radar (SAR) offers valuable information that is particularly relevant for understanding crop growth. This data complements optical datasets, such as those from Sentinel-2, providing a more comprehensive view of crop conditions. In our study, we present findings from a two-year experiment (2022–2023) conducted on a maize field in Luxembourg as part of the ESA-funded project LUXSCAT ("Ground-based Scatterometer Measurements for Soil Moisture and Vegetation"). 
This campaign involved ground-based measurements to investigate the response of microwave scatterometer observations to changes in soil moisture and vegetation across an entire growing season. Throughout the study, in-situ soil moisture was measured at 16 locations, and regular field campaigns were conducted to collect vegetation samples. The vegetation measurements included assessments of vegetation water content (VWC), plant height, plant density, planting row direction, size and orientation of primary plant structures, biomass, crop phenology, and LAI. Additionally, meteorological data such as air temperature, precipitation, and radiation were recorded. The crop growth model APSIM was employed to simulate crop phenology and biomass development. We calibrated the model with data from the 2022 growing season, with soil moisture and biomass measurements serving as benchmarks for validation and optimization of its outputs. Subsequently, the model was applied to simulate the 2023 season, which differed significantly from 2022 with respect to weather condition and the resulting biomass. The primary objective was to leverage data assimilation to address uncertainties in model parameters, enabling enhanced transferability across spatial and temporal scales and improving model accuracy when incorporating remote sensing data. To quantify uncertainties, we conducted an ensemble of model runs, varying key parameters typically characterized by limited knowledge or variability. These included soil parameters (e.g., soil type) and management practices (e.g., sowing date and depth). To improve the model's accuracy, we assimilated remote sensing data from Sentinel-2 and PlanetScope (SuperDove) by linking APSIM outputs with the radiative transfer model PROSAIL. A particle filter approach, as described by Machwitz et al. (2014), was implemented to facilitate data assimilation. 
Additionally, we integrated soil moisture data derived from the ground-based scatterometer measurements into the model and compared these results with the outputs of the optical-data assimilation. We use a multi-temporal algorithm to estimate soil moisture at different frequencies, such as the C and L bands, over the test site. This algorithm takes a time series of backscattering measurements as its input, along with additional data such as a plant water content map. The modeled biomass for 2022 exhibited an error of approximately 1000 kg/ha. As expected, for 2023 the error increased to 5000 kg/ha without data assimilation. Assimilating optical data improved the results significantly, yielding accuracy similar to 2022. The soil moisture for 2023 showed slight underestimation compared to 2022. However, results for the assimilation of soil moisture from the scatterometer are still preliminary and analyses are ongoing. Because of the relationship between soil moisture and biomass development, we expect further improvement of the model outcomes. This integrated dataset enables a thorough analysis of the relationships between optical data, scatterometer observations, and soil moisture and vegetation dynamics, contributing valuable insights into the utility of microwave and optical remote sensing for agricultural monitoring. M. Machwitz, L. Giustarini, C. Bossung, D. Frantz, M. Schlerf, H. Lilienthal, L. Wandera, P. Matgen, L. Hoffmann, T. Udelhoven: Enhanced biomass prediction by assimilating satellite data into a crop growth model, Environmental Modelling & Software, 62, 2014, 437-453, https://doi.org/10.1016/j.envsoft.2014.08.010.
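The particle filter used for assimilation (Machwitz et al., 2014) reduces, in its simplest form, to weighting an ensemble of model states by the likelihood of an observation and resampling. The following generic sketch illustrates only that idea, not the project's implementation; the state values and observation error are hypothetical:

```python
import math
import random

def particle_filter_step(particles, obs, obs_sigma):
    """Weight each particle (a simulated crop state, e.g. LAI) by the Gaussian
    likelihood of a remote-sensing observation, then resample with replacement."""
    weights = [math.exp(-0.5 * ((p - obs) / obs_sigma) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)                        # reproducible resampling
ensemble = [1.0, 2.0, 3.0, 6.0, 8.0]  # prior LAI states from varied parameters
posterior = particle_filter_step(ensemble, obs=2.8, obs_sigma=0.5)
print(sum(posterior) / len(posterior))
```

In the ensemble setup described above, each particle would correspond to one model run with perturbed soil and management parameters, and the observation would come from PROSAIL-simulated reflectance matched to Sentinel-2 or PlanetScope.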
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall F2)

Presentation: Enabling Accurate Crop Yield Estimation in Afghanistan by Leveraging Remote Sensing Data: A Perspective on Customized Methodologies Tailored for Different Field Data Availability

Authors: Dario Spiller, Qiyamud Din Ikram, Juan Correa Moreno, Irene Staffieri, Jessika Kluth, Bosco Kasundu, Alexandre Nobajas Ganau, Lorenzo Vita, Coen Bussink, Matieu Henry, Livia Peiser
Affiliations: Food And Agriculture Organization of The United Nations, United Nations Office on Drugs and Crime
Agriculture serves as the backbone of Afghanistan’s economy but is acutely vulnerable to a complex interplay of challenges. Decades of conflict, climate change-induced droughts, and water scarcity - particularly in key agricultural provinces - have significantly undermined productivity [1,2,3]. Compounding these challenges are crop diseases, limited access to modern farming techniques, and weak market infrastructure [4]. Within this context, the Food and Agriculture Organization (FAO) and the United Nations Office on Drugs and Crime (UNODC) are collaborating to address food security concerns, strengthen resilience, inform policy interventions, and promote sustainable development in alignment with their respective mandates. This paper presents the combined efforts of FAO and UNODC in Afghanistan, focusing on crop extent and yield estimation for staple and illicit crops. Monitoring staple crops like wheat is critical for achieving Sustainable Development Goals (SDGs), such as SDG 2 (Zero Hunger), SDG 12 (Responsible Consumption and Production), and SDG 15 (Life on Land). Simultaneously, understanding illicit crop cultivation is vital for addressing socio-economic challenges and supporting international security. Afghanistan remains one of the world's largest producers of illicit crops, with cultivation deeply intertwined with the socio-economic and environmental fabric of the country. A novel methodology was developed by FAO’s Geospatial Unit to monitor agricultural landscapes using Earth observation data. Leveraging WaPOR [5], CHIRPS, and Sentinel-2 imagery, the Irrigation Mapping Tool (IMT) was created to classify agricultural fields into rainfed, irrigated monocrop, and irrigated multi-crop fields (with seasonal or year-round irrigation). This nuanced classification significantly enhances the understanding of Afghanistan’s agricultural systems, where irrigation plays a critical role in ensuring stable crop production. 
IMT has been instrumental in the Emergency Food Security Project (EFSP), which distributed improved wheat seeds to approximately 600,000 households during the 2023/2024 planting season [6]. To assess the campaign’s impact, FAO conducted a large-scale wheat-cutting survey, generating approximately 8,000 sample points for yield estimation. A thorough statistical analysis was carried out to assess the effectiveness of the improved seed distribution, highlighting the higher average yield of improved seeds compared to local seeds. The wheat-cutting survey data was then used to train a machine learning model for wheat mapping and yield estimation. IMT’s ability to differentiate between rainfed and irrigated fields was pivotal in analyzing the effectiveness of the seed distribution campaign. These insights informed national and provincial-level interventions, underscoring the importance of spatial mapping in decision-making. In parallel, UNODC adapted the Global Agriculture Monitoring System (GLAM) to map and forecast yields of illicit crops under conditions of limited field data [7]. Using advanced Earth observation datasets - such as NDVI, rainfall, temperature, soil moisture, and the Evaporative Stress Index (ESI) - UNODC developed a regionally adaptive yield forecasting model. The methodology leveraged machine learning (CatBoost) and SHAP values to identify key environmental drivers, offering nuanced predictions across Afghanistan’s diverse agricultural zones. Despite the absence of extensive field data, the approach provided a robust understanding of illicit crop cultivation dynamics for 2024, highlighting opportunities for targeted interventions. This presentation will explore how Copernicus data is integrated with other geospatial and field data to estimate crop extent and yield accurately. It will delve into the role of spectral, temporal, and spatial analyses in capturing crop phenology and highlight the impact of various variables. 
The discussion will also cover the advantages of different machine learning models, particularly their integration with cloud computing environments, which enable efficient processing of complex geospatial computations, including long time series. A key emphasis is on ensuring the analysis remains operational, meaning the results are designed to be directly applicable to policy-making and actionable interventions. Limitations and potential improvements will be addressed, such as the impact of medium- and low-resolution evapotranspiration and precipitation data on distinguishing irrigated from rainfed fields. Despite these challenges, the effective integration of remote sensing data across different resolutions offers valuable insights, with the promise of further enhancements as higher-resolution datasets become available. The collaboration between FAO and UNODC highlights the power of partnership in addressing Afghanistan’s agricultural challenges. By combining expertise and resources, scalable solutions can be developed to enhance food security and reduce illicit crop cultivation, demonstrating the value of international cooperation in fostering sustainable development. References: [1] G. Zheng et al., “Assessment of Meteorological Drought under the Climate Change in the Kabul River Basin, Afghanistan,” Atmosphere 2023, Vol. 14, Page 570, vol. 14, no. 3, p. 570, Mar. 2023, doi: 10.3390/ATMOS14030570. [2] Q. Aliyar, F. Zulfiqar, A. Datta, J. K. M. Kuwornu, and S. Shrestha, “Drought perception and field-level adaptation strategies of farming households in drought-prone areas of Afghanistan,” International Journal of Disaster Risk Reduction, vol. 72, p. 102862, Apr. 2022, doi: 10.1016/J.IJDRR.2022.102862. [3] D. Spiller et al., "Impact of Drought on Irrigated Wheat Cultivation in Afghanistan. A Multi-Temporal Analysis from 2017 to 2023," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 
4050-4054, doi: 10.1109/IGARSS53475.2024.10641783. [4] FAO Representation in Afghanistan (FAO-AFG) and FAO Geospatial Unit at the Land and Water Division, “Assessing Agriculture Drought Severity in Afghanistan. A detailed assessment of the agriculture drought severity and its effects on agriculture using remote sensing for the period 2021 to 2023,” https://storymaps.arcgis.com/stories/d9c5f20abc3a470f99ee561f36b4824d. [5] https://data.apps.fao.org/wapor/?lang=en [6] https://www.unodc.org/documents/crop-monitoring/Afghanistan/Afghanistan_opium_survey_2023.pdf [7] https://www.fao.org/afghanistan/news/detail-events/en/c/1715507/
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall G2)

Session: C.03.05 Copernicus CRISTAL: Operational Monitoring for Cryospheric Science, Hydrology, and Oceanography from Coasts to Polar Seas

The Copernicus polaR Ice and Snow Topography ALtimeter (CRISTAL) will significantly enhance Copernicus’ monitoring capabilities over the cryosphere and provide invaluable data over oceans and inland waters. CRISTAL will be the first operational radar altimeter for the cryosphere, the first Ka-band SAR mode altimeter in space, and the first Ku/Ka dual-band frequency altimeter. CRISTAL’s Ku channel will be interferometric, and the mission will also provide the first combination of an altimeter and radiometer poleward of 82°, thanks to the AMR-CR radiometer provided by NASA/JPL. CRISTAL’s altimeter builds on the heritage of CryoSat with several improvements, including the addition of the Ka channel for snow depth measurements, a high bandwidth of 500 MHz in both Ku and Ka channels, flexible tracking (closed/open loop), open-burst mode over sea ice (allowing fully-focused SAR altimetry processing), and full coverage of ocean, coasts, and all hydrological targets. The mission is now in the implementation phase, and the first of two satellites, CRISTAL-A, is expected to be ready by mid-2027.

This session will welcome contributions showcasing advances from CRISTAL and preparing the user community to exploit its data. Possible contributions include:
• Studies refining geophysical algorithms for CRISTAL, including dual-band and microwave measurements.
• CRISTAL performance analysis based on models, simulations, and in situ data.
• Preparatory activities for calibration and validation over the cryosphere, including campaigns.
• Studies on the expected impact of CRISTAL on Copernicus Services and specific user applications.
• Synergy studies with other Copernicus Missions, especially CIMR and ROSE-L.
• Joint exploitation of IRIS and AMR-CR measurements over the cryosphere.
• Ocean studies refining algorithms for open and coastal areas, and performance analysis.
• Preparatory oceanographic cal/val activities, including campaigns.
These contributions will ensure readiness for CRISTAL data exploitation and highlight its significant impact on monitoring cryosphere and hydrological targets and its support to oceanography.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall G2)

Presentation: CLEV2ER: Development of the CRISTAL Level-2 Prototype Processors for Land Ice and Inland Water

Authors: Malcolm Mcmillan, Michele Scagliola, Jérôme Bouffard, Paolo Cipollini
Affiliations: CPOM, Lancaster University, ESA-ESRIN, ESA-ESTEC
Satellite radar altimetry has provided a near-continuous record of Earth’s topography for over three decades, delivering invaluable measurements of Earth’s ice and inland water surfaces. From the pioneering measurements of ERS-1 in the early 1990s to the latest observations acquired by CryoSat-2 and the Copernicus Sentinel-3 satellites, this unique record has transformed our ability to monitor Earth’s surface dynamics, leading to significant advances in our understanding of the environment under a changing climate. Scheduled for launch towards the end of this decade, the Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL) mission represents the next generation of polar-orbiting radar altimeters. As such, it will introduce a number of innovations relative to its CryoSat-2 heritage, including a dual-frequency Ku-Ka radar altimeter, large-scale Ku interferometric SAR acquisitions over ice surfaces, an increased bandwidth, and Open Loop tracking. These innovations will, ultimately, drive improved measurements and monitoring of various Earth system components, including sea ice thickness, ice sheet and glacier elevation, inland water level and ocean surface topography. This presentation will provide an overview and results from the ‘CRISTAL LEVel-2 procEssor prototype and R&D (CLEV2ER): Land Ice and Inland Water’ study, which is (1) designing and building the Level-2 thematic processor prototypes for land ice and inland water surfaces, and (2) supporting the further development, implementation and evolutions of the CRISTAL Level-2 product and algorithms, through the targeting of outstanding scientific and technical questions relating to these specific domains. Here we will report the progress achieved during the first phase of the project, including results from a number of the dedicated R&D studies. 
This work will support the continued development and increased scientific readiness of CRISTAL in the years leading up to operations, as the mission progresses through Phase C/D/E1.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall G2)

Presentation: Anticipating CRISTAL: An exploration of multi-frequency satellite altimeter snow depth estimates over Arctic sea ice, 2018–2023

Authors: Jack Landy, Claude de Rijke Thomas, Carmen Nab, Isobel Lawrence, Isolde Glissenaar, Robbie Mallett, Renée Fredensborg Hansen, Alek Petty, Michel Tsamados, Amy Macfarlane, Anne Braakmann-Folgmann
Affiliations: University of Tromsø - The Arctic University of Norway, University of Bristol, School of Geographical Sciences, Centre for Polar Observation and Modelling, Department of Earth Sciences, University College London, ESA ESRIN, DTU Space, National Space Institute, Technical University of Denmark (DTU), Earth System Science Interdisciplinary Center, University of Maryland
The dual-frequency Ku- and Ka-band polar-orbiting synthetic aperture radar (SAR) altimeter CRISTAL is planned for launch in 2027 to monitor polar sea ice thickness and its overlying snow depth, among other applications. However, the interactions of Ku- and Ka-band radar waves with snow and sea ice are not fully understood, demanding further research effort before we can take full advantage of the CRISTAL observations. Here, we present an analysis of three ongoing altimetry missions designed to mimic the sensing configuration of CRISTAL over Arctic sea ice and investigate the derived snow depth estimates obtained from dual-frequency altimetry. We apply a physical model for the backscattered radar altimeter echo over sea ice to CryoSat-2’s Ku-band altimeter in SAR mode and to the SARAL mission’s AltiKa Ka-band altimeter in low-resolution mode (LRM), then compare to reference laser altimetry observations from ICESat-2. Radar freeboards obtained from AltiKa are significantly thicker than those obtained from CryoSat-2, in all winter months, matching more closely to the patterns and distribution of ICESat-2 laser freeboards. Despite this, the AltiKa freeboards do not thicken at the same rate over winter, implying that Ka-band height estimates can be biased low by 10 cm relative to the snow surface due to uncertain penetration over snow-covered first-year ice in spring. Previously-observed mismatches between radar freeboards and independent airborne reference data have been frequently attributed to radar penetration biases, but can be significantly reduced by accounting for surface topography when retracking the radar waveforms. Waveform simulations of CRISTAL in its expected primary delay-Doppler SAR sea ice mode reveal that the heights of the detected snow and ice interfaces are more sensitive to multi-scale surface roughness than snow properties. 
For late-winter conditions, the simulations suggest that the CRISTAL Ku-band radar retrievals will track a median elevation just above the snow-ice interface, because the radar return is dominated by surface scattering from the snow-ice interface which has a consistently smoother footprint-scale slope distribution than the air-snow interface. Significantly more backscatter is simulated to return from the air-snow interface and snow volume at Ka-band, with the radar retrievals tracking a median elevation 10 % below the air-snow interface. These model results generally agree with the derived satellite radar freeboards, which are consistently thicker for AltiKa than CryoSat-2, across most of the measured snow and sea ice conditions.
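The dual-frequency principle behind these snow depth estimates can be illustrated with a toy calculation. As a deliberate simplification (the abstract shows the real picture is complicated by surface roughness and uncertain penetration), assume the Ka-band return comes from the air-snow interface and the Ku-band return from the snow-ice interface, and correct for the slower microwave propagation in snow with the widely used density-dependent factor eta = (1 + 0.51 * rho_s)**1.5, with rho_s in g/cm^3. The function name and default density below are illustrative:

```python
import numpy as np

def snow_depth_from_dual_freq(fb_ka, fb_ku, rho_s=0.3):
    """Toy dual-frequency snow depth estimate.

    Assumes the Ka-band freeboard tracks the air-snow interface and the
    Ku-band freeboard the snow-ice interface, and corrects Ku ranging for
    the reduced wave speed in snow of density rho_s (g/cm^3).
    """
    eta = (1.0 + 0.51 * rho_s) ** 1.5   # wave-speed correction factor
    return (np.asarray(fb_ka, float) - np.asarray(fb_ku, float)) / eta
```

For fb_Ka = 0.30 m and fb_Ku = 0.05 m this gives roughly 0.20 m of snow; under this idealization, a Ka-band bias toward the snow interior (as reported above) would translate directly into underestimated snow depth.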
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall G2)

Presentation: CRISTAL, CRISTALair and campaigns: Paving the way for the next generation of cryospheric measurements.

Authors: Tania Gil Duarte Casal
Affiliations: European Space Agency (ESA)-ESTEC
The Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL) mission is one of the six Copernicus Expansion Missions under development, due to launch in 2027. CRISTAL will carry, for the first time, a dual-frequency radar altimeter with interferometric capabilities in Ku-band, which will measure and monitor sea-ice thickness, overlying snow depth and ice-sheet elevations. A high- and low-frequency passive microwave radiometer will also be on board, with the primary objective of providing the wet troposphere delay correction. Campaigns are an integral part of ESA's continuing effort to develop new and better instruments for observing Earth from space; they are particularly important when Earth Observation satellite missions carry novel technology and use new measuring techniques, as CRISTAL does. Pre-launch campaigns will make it possible to build a comprehensive dataset that will be crucial for testing retrieval algorithms and validating geophysical retrievals, ensuring reliable operational data shortly after CRISTAL's launch. These campaigns are also essential to lay the foundations on which to build the CRISTAL post-launch cal/val strategy. Such campaigns are expected to guide the design of activities to collect Fiducial Reference Measurements (FRMs) and demonstrate the benefits of a synergistic approach combining the Copernicus Expansion missions in the polar domain: CRISTAL, CIMR and ROSE-L. To support the CRISTAL campaigns, ESA is developing CRISTALair, the airborne demonstrator of the Interferometric Radar Altimeter for Ice and Snow (IRIS), the primary payload of the CRISTAL mission. CRISTALair will support CRISTAL science by providing dual-frequency interferometric L1B data from dedicated science campaigns, which will support L2 algorithm development, tuning and validation. This presentation will address CRISTAL campaign needs and describe the proposed campaign plan for the years up to launch in 2027.
A description of CRISTALair and its capabilities will also be presented.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall G2)

Presentation: Enhanced Sea Ice classification for CRISTAL mission

Authors: Michel Guerra-Bernal, Clara Colet, Albert Garcia-Mondéjar, David Brockley, Michel Tsamados, Michele Scagliola, Jérôme Bouffard, Paolo Cipollini
Affiliations: Isardsat, ESA-ESTEC, University College London, ESA-ESRIN, Mullard Space Science Laboratory
Sea ice plays a crucial role in the regional and global climate system: it has a high albedo, insulates the ocean and regulates the freshwater input into it. It also strongly influences the balance of polar ecosystems, as well as navigation and logistics in these environments, so monitoring it from remote sensing data is paramount. Advances in radar altimetry such as Fully Focused SAR (FFSAR) processing make it feasible to increase the resolution to the order of meters, which enables the detection of smaller features like cracks or icebergs. In the context of the CLEV2ER project for the upcoming launch of the CRISTAL mission, our aim is to define a new algorithm capable of performing sea ice classification at radargram level. It will later be implemented and integrated into the CRISTAL Sea Ice and Iceberg L2 Ground Processor Prototype. During the testing and refinement of the algorithm, we have applied it to FFSAR radargrams from Sentinel-6 and have been able to identify features, such as leads, on the order of meters. This methodology not only performs classification on a record-by-record basis but also analyses the off-track characteristics, providing an enhanced classification approach and potentially the capability to quantify the percentage of sea ice. The methodology is based on separating the radargram into sections by surface type using the waveform backscatter. Applying computer vision techniques, we obtain an equalized radargram for each section that compensates for the power decay of far-range scatterers. Using parameters such as pulse peakiness and power, conditioned on the surface type assigned to each section, we can retrieve a measure of the approximate proportions of sea ice, ocean and leads in the 2D radargram. This presentation will provide a comprehensive explanation of the methodology along with results using CRISTAL IRIS simulated data and Sentinel-6 sea ice altimeter acquisitions.
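The peakiness parameter referred to above is simple to compute per waveform. The sketch below uses the peak-to-mean power ratio; the lead threshold is purely illustrative, not a value from the study:

```python
import numpy as np

def pulse_peakiness(waveform):
    """Pulse peakiness: ratio of peak power to mean power of the echo.
    Specular lead returns give high values; diffuse floe/ocean echoes low ones."""
    w = np.asarray(waveform, float)
    return float(w.max() / w.mean())

def classify_record(waveform, lead_thresh=30.0):
    """Crude per-record label from peakiness alone (threshold is illustrative)."""
    return "lead" if pulse_peakiness(waveform) > lead_thresh else "ice/ocean"
```

A single-bin spike over a 128-gate window yields a peakiness of 128, while a flat (diffuse) echo yields 1, which is why the parameter separates specular leads so cleanly.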
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall G2)

Presentation: The CRISTAL Mission for oceanography: algorithms design, scientific challenges and ground segment development status

Authors: Cristina Martin Puig, Salvatore Dinardo, Carolina Nogueira Loddo, Teoman Unal, Julia Figa Saldana
Affiliations: Eumetsat
Within the expansion of the European Copernicus Sentinel Constellation, the Copernicus polaR Ice and Snow Topography Altimeter (CRISTAL) mission is being developed in response to the need for operational monitoring of the polar regions. CRISTAL will fly up to 88° latitude, ensuring almost complete coverage of the Arctic Ocean as well as of the Antarctic ice sheet (like CryoSat-2, which is currently in its extended mission phase). The primary objective of the CRISTAL mission is to monitor the cryosphere (sea ice and its snow loading, and land ice including glaciers), but the novel altimeter on board, called IRIS (Interferometric Radar Altimeter for Ice and Snow), is also expected to contribute significantly to the enhancement of oceanography observations. The IRIS dual-frequency Ku/Ka-band SAR altimeter, which is interferometric on the Ku channel, is the first instrument of this kind in space and is expected to enable enhanced measurement capabilities. CRISTAL will also embark the Advanced Microwave Radiometer (AMR-CR), an evolution of the AMR-C developed by NASA-JPL for the Copernicus Sentinel-6 mission, also incorporating an HRMR subsystem providing high-resolution measurements of the total water content in coastal regions. For the CRISTAL mission, EUMETSAT has been entrusted with producing the global ocean products (e.g. open ocean, coastal, and sea level in leads). In this presentation, we will describe CRISTAL's expected performance and the status of the ground segment development activities for the global ocean products. We will present the results of ongoing Science Support studies that are contributing to the refinement of the level 1 and level 2 algorithms, and present the scientific challenges we are facing in preparation for launch and beyond. Finally, we will review the many applications expected from CRISTAL and discuss how CRISTAL measurements are expected to contribute to Copernicus user services.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.31/1.32)

Session: B.04.03 Integrating multi-sensor Earth Observation data to improve natural hazard monitoring - PART 1

A deeper knowledge of natural hazards, which may originate from different sources and systems (such as atmospheric, hydrological, oceanographic, volcanological or seismic), is essential for accelerating actions towards the pursuit of Sustainable Development Goals of 2030 Agenda. In this framework, Earth Observation (EO) data are essential for all phases of natural hazard management, from spatial planning to post-emergency operations.
Satellite data offer extensive spatial coverage, which is highly beneficial for various scientific applications. Optical sensors on board satellites gather information in the visible and near-infrared regions of the electromagnetic spectrum, providing images that are ‘easier’ to interpret as well as valuable spectral information for accurately recognizing different objects. SAR sensors, on the other hand, are mostly independent of weather and illumination conditions and can detect the roughness and dielectric characteristics of target objects.
Moving to a different platform type, UAVs allow very high-resolution data to be acquired on demand and in near-real time. The development of different sensors (optical, multispectral, LiDAR, hyperspectral) enables different types of data to be acquired, even simultaneously.
Despite the complementarity of sensors from different platforms, and their respective advantages and disadvantages, these data sources are often used separately. Data fusion and the integration of different methodologies make it possible to bridge gaps in spatial coverage and resolution.
This session aims to examine and discuss multi-platform and multi-sensor data fusion, also by sharing qualitative and quantitative information to foster collaborative advances for monitoring and mapping natural hazards.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Rheticus® Safeland: New Frontiers in Hydro - Geomorphological Risk Management

Authors: Anna Sblano, Vincenzo Massimi, Michele Antonicelli, Paolo Petio, Ilenia Argentiero, Dr. Khalid Tijani, Dr. Davide Oscar Nitti, Eng. Raffaele Nutricato
Affiliations: Planetek Italia, GAP srl c/o Department of Physics “M. Merlin”, University of Bari
Land instability phenomena pose a serious threat to public safety, infrastructure preservation and environmental protection. Under the pressure of anthropogenic activities and climate change, the intensity and frequency of these events are rapidly increasing, and landslides and mudslides cause extensive damage to infrastructure, economic losses and loss of life. In line with the Sustainable Development Goals of the 2030 Agenda, it is becoming increasingly important for those called upon to manage land and infrastructure to have effective land stability monitoring systems that simultaneously provide up-to-date, accurate and cost-effective information, so that timely inspections and spot checks can be planned in areas of higher concern and be effective in post-emergency operations. Rheticus® Safeland, developed by Planetek Italia, is a land stability monitoring and reporting service that, through automated procedures, assigns a level of attention to each part of the territory based on trends and anomalies of surface displacements measured using remote sensing satellite images. The service provides a synoptic view of the territory, with constantly updated information on the level of attention and the possibility of distinguishing stable areas from those subject to slow-evolving landslides and/or subsidence, serving as a diagnostic tool to complement in-situ monitoring activities. Over the years, the Rheticus® Safeland platform has been enhanced. While an early version of the service provided only a classification of the territory based on interferometric data, the current version gives the user a more complete picture by integrating the interferometric data with additional information layers considered essential for identifying areas of higher hydro-geomorphological risk. In particular, new parameters were defined to improve the performance of multi-temporal interferometric analyses (MTInSAR).
The results of MTInSAR analyses have been integrated with both optical satellite data and ancillary data needed for hydro-geomorphological hazard characterization, such as flood severity maps, burnt area maps, deforestation polygons, hydraulic hazard maps, forest-fire change maps, landslide hazard maps, capable faults and landslide inventories. The update of the Rheticus® Safeland service allows the end user to obtain a detailed overview of the area of interest, focusing also on the infrastructure within it, providing valuable decision support and prioritization of point diagnostic actions and/or interventions aimed at preventing or mitigating the expected damage. The evolution of the service aims to significantly improve the quality and automation of the service offered, making the information immediately usable by the user operating in the area and more closely supporting the design of hydro-geomorphological risk mitigation works.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Combining AI with Multi-Source Satellite Imagery to Support Natural Hazard Monitoring and Risk Assessment

Authors: Raffaele Lafortezza, . Francesco Giordano, Domenico Capolongo, Alberto Refice, Francesco P. Lovergine, Mario Elia, Davide D’Alò, Nicola Amoroso, Raffaele Nutricato, Marina Zingaro, Alessandro Parisi, Giacomo Caporusso, Alessandra Costantino, Alessandro Ursi, Patrizia Sacco, Maria Virelli, Dr Deodato Tapete
Affiliations: Department of Soil, Plant and Food Sciences (DISSPA), University of Bari Aldo Moro, Department of Physics, University of Bari Aldo Moro, Department of Earth and Geoenvironmental Sciences (DISTGEO), University of Bari Aldo Moro, Institute for Electromagnetic Sensing of the Environment (IREA), National Research Council of Italy, Department of Pharmacy-Drug Sciences, University of Bari Aldo Moro, GAP s.r.l., c/o DIF, University of Bari Aldo Moro, Italian Space Agency (ASI)
The increasing frequency and severity of natural hazards, such as floods, wildfires, land degradation and ground displacement, pose significant challenges to the protection of urban areas worldwide. Although traditional monitoring approaches based on a single-source satellite sensor have proved reliable, they often cannot provide a holistic representation of the complexity, scale, and rapid evolution of these phenomena. The recent advancement of artificial intelligence (AI), coupled with the unprecedented availability of multi-source satellite imagery, offers new perspectives for enhancing natural hazard monitoring and risk assessment. AI, particularly deep learning algorithms, has demonstrated remarkable capabilities in processing and analyzing complex datasets. Recent studies have shown AI's potential in various Earth observation applications, from land cover classification to change detection. However, the integration of AI with multi-source satellite images for natural hazard monitoring remains relatively unexplored, particularly in terms of combining different types of sensors – including optical and Synthetic Aperture Radar (SAR) – to enhance prediction accuracy and reliability. In this study we present a novel approach that leverages state-of-the-art XAI (eXplainable AI) techniques to analyze multi-source satellite imagery for natural hazard monitoring and risk assessment in urban areas. The test sites include the Metropolitan City of Bari and the urban settlements in the Gargano promontory, Apulia region, in southern Italy. These have been intentionally selected as testbeds, given their susceptibility to and historical records of floods, wildfires, land degradation and ground displacement, the potential impacts that these natural hazards may cause to populations and infrastructures, and the availability of validation datasets.
We combined XAI-based models with optical imagery from Sentinel-2, SAR data from Sentinel-1, COSMO-SkyMed and SAOCOM, and hyperspectral data from PRISMA to extract the key features explaining the risk of occurrence and magnitude of the following hazards: (1) sediment connectivity; (2) land displacement; (3) urban floods; and (4) urban wildfires. Our results suggest that the integration of multi-source satellite imagery through AI not only enhances the accuracy of hazard detection but also enables the identification of subtle patterns that might not be detected by traditional analytical methods. Our approach demonstrates significant improvements in both the spatial and temporal resolution of hazard monitoring, while also providing quantitative risk assessments that can support decision-making processes and cross-regional prevention strategies for natural hazard mitigation. This research has been conducted within the framework of the project “GEORES - Applicativo GEOspaziale a supporto del miglioramento della sostenibilità ambientale e RESilienza ai cambiamenti climatici nelle aree urbane”, funded by the Italian Space Agency (ASI), Agreement no. 2023-42-HH.0, as part of the ASI program “Innovation for Downstream Preparation for Science” (I4DP_SCIENCE).
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Time series analysis of vegetation indices and radar coherence as precursors of landslide occurrence

Authors: Katarzyna Januchta, Ph.D. Paweł Ćwiąkała, Ph.D. Edyta Puniach
Affiliations: AGH University of Krakow
Contemporary natural hazard monitoring systems utilizing remote sensing techniques are crucial for risk management and provide essential support for Early Warning Systems (EWSs). The detection of spatial and temporal changes in landslides, along with the acquisition of information about their precursors, is essential for hazard prediction. Despite the rapid advancement of various measurement techniques, accurately monitoring landslide areas using satellite systems remains a significant challenge. Challenges include the diversity of landslide types and the type and density of vegetation cover in these areas. This study presents a time series analysis of changes in normalized vegetation indices and radar coherence for selected landslide areas, aiming to identify precursors and characterize landslide dynamics. The proposed method utilizes vegetation indices, including the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Normalized Difference Water Index (NDWI), Normalized Difference Moisture Index (NDMI), Moisture Stress Index (MSI), Normalized Moisture Stress Index (NMSI), and interferometric coherence, to analyze the stability of slopes. The methodology involves calculating the ratio of mean values of these indices on unstable slopes to those on surrounding hillsides. The normalization of indices enabled the unification of their value scales, allowing results to be compared across different locations and environmental conditions. Analyses of high-resolution temporal data showed that normalized vegetation indices, particularly NDWI and MSI, in many cases exhibited significant changes in landslide-prone areas before the landslides occurred. Decreases in coherence coefficient values over the same period also indicated significant changes in the analyzed areas, further confirming the occurrence of displacement in these areas.
The observed correlation between the decrease in coherence and changes in the values of vegetation indices suggests that landslide processes affected both the land structure and the vegetation cover. Such data integration indicates that optical and radar satellite data can be used to search for landslide precursors and to assess landslide activity. This type of analysis has great potential to support the development of landslide risk assessment tools and EWSs.
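The core normalization step of the method, the ratio of mean index values on the unstable slope to those on the surrounding hillsides, can be sketched in a few lines. The NDWI band combination follows the standard green/NIR definition; function names and masks are illustrative:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index from green and NIR reflectance."""
    g, n = np.asarray(green, float), np.asarray(nir, float)
    return (g - n) / (g + n + 1e-9)   # small eps guards against 0/0

def slope_to_surroundings_ratio(index_img, slope_mask, surround_mask):
    """Ratio of mean index value on the unstable slope to the mean on the
    surrounding hillsides (the normalization step described in the abstract)."""
    return index_img[slope_mask].mean() / index_img[surround_mask].mean()
```

Tracking this ratio through a time series of acquisitions, rather than the raw index, removes seasonal and illumination effects common to the slope and its surroundings, which is what makes anomalies interpretable as precursors.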
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Global Landslide Susceptibility Mapping Using Multi-Model Machine Learning and Deep Learning Approaches on Geospatial Satellite Data

Authors: MSc Saverio Mancino, MSc Francesco Paolo Lovergine, MSc Tushar Sethi, PhD Domenico Capolongo, PhD Giuseppe Amatulli
Affiliations: Department of Earth and Geoenviromental Sciences, University of Bari Aldo Moro, Institute for Electromagnetic Sensing of the Environment (IREA), National Research Council of Italy (CNR), School of the Environment Yale University, Spatial Ecology, Margosa Environmental Solutions Ltd
ABSTRACT: Landslides are among the most prevalent and damaging natural hazards, affecting environments and communities worldwide. Understanding landslide susceptibility is critical for effective hazard mitigation and risk assessment, especially in densely populated areas. We present an advanced, globally scaled landslide susceptibility model developed through a multi-model framework incorporating machine learning (ML) and deep learning (DL) techniques. Using ground truth data, high-resolution satellite data products and robust computational methods, our model aims to generate a susceptibility map as a core component of a dynamic global landslide early warning system capable of monitoring risks on a weekly to daily basis. Traditionally, landslide susceptibility mapping (LSM) has been constrained to regional, national and sometimes basic sub-continental scale models due to the computational demands and complexities associated with large-scale data integration and model generalization. However, advances in high-performance computing (HPC) and the abundance of global satellite data now enable the development of LSM not only on a global level but also at finer spatial resolution. By leveraging HPC, we address the diverse complexities of global LSM, including variability in topography, geology, climate, and triggering mechanisms. The LSM model harmonizes datasets across multiple spatial resolutions, ensuring a scalable, interpretable, and flexible solution for worldwide landslide monitoring. This project aims to improve on NASA's 2016 approach to landslide susceptibility modeling (Stanley T. et al. 2017) by incorporating several improvements: a much richer training dataset, derived from the Unified Global Landslide Catalog (*), which consists of more than one million points for training and more than one million polygons for validation; an increase in resolution from 1 km to 90 m, by employing a newer high-resolution Digital Elevation Model; a large increase in the number of predictor features, from 4 to over 100, extracted from several global geospatial products such as MERIT DEM 90m (Yamazaki D. 2017), Hydrography90m (Amatulli G. et al. 2022), Geomorpho90m (Amatulli G. et al. 2020), SoilGrids250m 2.0 (Poggio L. et al. 2021), Global Seismic Hazard Map v.2018.1 (Pagani M. et al. 2018) and TerraClimate (Abatzoglou J. et al. 2018), covering morphology, hydrology, soil characteristics, seismic activity and other geoenvironmental indicators; and the implementation of a more complex modeling algorithm able to derive robust relationships between landslides and environmental and physical features. Methodologically, this project employs a multi-model strategy that combines ensemble-based ML techniques (Random Forest) with DL models (Convolutional Neural Networks (CNNs) and Multi-Layer Perceptrons (MLPs)) for spatial pattern recognition. Ensemble learning methods capitalize on the strengths of each constituent model to ensure interpretability and transferability across regions, allowing dynamic adjustment to varying geographic conditions and especially improving generalization in regions with limited landslide data. The UGLC catalog on which these models train provides carefully selected landslide events and non-landslide data derived from pixel-wise thresholding combined with dynamic buffers, to minimize spatial autocorrelation and reduce overfitting risks.
To validate the robustness of the model across diverse geographical settings, k-fold and spatial block cross-validation are employed. This validation considers subsets of the training and testing points grouped by trigger type, landslide type, and date range, ensuring relevance across temporal and spatial domains. HPC resources are necessary due to the computational demands of processing massive geographic datasets and running ML algorithms. Our framework utilizes parallel processing across HPC nodes and distributed learning on GPU clusters, enabling efficient handling of high-dimensional data and accelerating model training and prediction. This setup facilitates spatially detailed analysis and shortens computational time, making it feasible to deploy a large-scale global LSM model. In preliminary testing, we evaluated only 49 of the more than 100 predictor features, focusing on prioritised global geospatial products (MERIT DEM 90m, SoilGrids250m 2.0, Hydrography90m and Geomorpho90m). Initial runs used a sample of only 25,000 presence and absence points with a simple two-hidden-layer MLP architecture, which achieved rapid convergence but indicated some overfitting issues. Subsequent model tests on HPC with all 1.06 million presence and absence points and a Random Forest architecture reached 95% accuracy, although with high misclassification rates. SHAP analysis revealed that the principal features are morphological components derived from DEMs and soil properties, and highlighted the impact of sample size on feature importance. Testing also revealed imbalances, leading to the introduction of density-wise balancing and thematic testing splits by climate, event date, and landslide type to improve generalizability and error mapping. In conclusion, this scalable global landslide susceptibility model, exploiting ML and DL with a multi-model, HPC-enabled approach, opens new horizons for global-scale modeling of landslide susceptibility.
It highlights how multi-model methods not only improve prediction accuracy but also offer enhanced geoenvironmental interpretability of the phenomena, which is essential to support decision-making. This product will provide a robust foundation for future global early warning systems through the dynamic integration of up-to-date or real-time data, moving from susceptibility mapping to an operational dynamic hazard mapping tool that aids global disaster preparedness and response. *The UGLC landslide catalog is being developed in the framework of this project. It will be made publicly accessible by the time of the conference.
REFERENCES:
Abatzoglou, J., Dobrowski, S., Parks, S. et al. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958–2015. Sci Data 5, 170191 (2018). https://doi.org/10.1038/sdata.2017.191
Amatulli, G., McInerney, D., Sethi, T. et al. Geomorpho90m, empirical evaluation and accuracy assessment of global high-resolution geomorphometric layers. Sci Data 7, 162 (2020). https://doi.org/10.1038/s41597-020-0479-6
Amatulli, G., Garcia Marquez, J., Sethi, T., Kiesel, J., Grigoropoulou, A., Üblacker, M. M., Shen, L. Q., and Domisch, S.: Hydrography90m: a new high-resolution global hydrographic dataset, Earth Syst. Sci. Data, 14, 4525–4550, https://doi.org/10.5194/essd-14-4525-2022, 2022.
Huffman, G. J., Bolvin, D. T., Braithwaite, D., Hsu, K., Joyce, R. J., & Xie, P. (2015). Algorithm theoretical basis document (ATBD) for NASA global precipitation measurement (GPM) Integrated Multi-satellitE Retrievals for GPM (IMERG).
Pagani, M., Monelli, D., Danciu, L., Rovida, A., Ismail-Zadeh, A., Simionato, M., ... & Silva, V. (2018). Global Seismic Hazard Map v.2018.1. Global Earthquake Model (GEM).
Poggio, L., de Sousa, L. M., Batjes, N. H., Heuvelink, G. B. M., Kempen, B., Ribeiro, E., & Rossiter, D. (2021). SoilGrids 2.0: Producing a global gridded soil information system at 250 m resolution. Geoderma, 384, 114785. https://doi.org/10.1016/j.geoderma.2020.114785
Stanley, T., Kirschbaum, D.B. A heuristic approach to global landslide susceptibility mapping. Nat Hazards 87, 145–164 (2017). https://doi.org/10.1007/s11069-017-2757-y
Yamazaki, D., Ikeshima, D., Sosa, J., Bates, P. D., Allen, G. H., & Pavelsky, T. M. (2017). MERIT DEM: A high-resolution global DEM with removed vegetation and urban area effects. Water Resources Research, 53(7), 586–590. https://doi.org/10.1002/2017WR020574
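The spatial block cross-validation mentioned in the validation strategy can be sketched with plain NumPy. Grid size, fold count and function names below are illustrative, not taken from the project:

```python
import numpy as np

def spatial_block_folds(lon, lat, n_grid=4, n_folds=4, seed=0):
    """Spatial block cross-validation: bin samples into an n_grid x n_grid
    geographic grid, then deal whole blocks into folds so that neighbouring
    (spatially autocorrelated) samples never straddle a train/test split."""
    lon, lat = np.asarray(lon, float), np.asarray(lat, float)
    gx = np.minimum((n_grid * (lon - lon.min()) / (np.ptp(lon) + 1e-9)).astype(int), n_grid - 1)
    gy = np.minimum((n_grid * (lat - lat.min()) / (np.ptp(lat) + 1e-9)).astype(int), n_grid - 1)
    block = gy * n_grid + gx                     # block id per sample
    blocks = np.random.default_rng(seed).permutation(np.unique(block))
    fold_of = {b: i % n_folds for i, b in enumerate(blocks)}
    fold = np.array([fold_of[b] for b in block])
    return [(np.where(fold != k)[0], np.where(fold == k)[0]) for k in range(n_folds)]
```

Because whole blocks are assigned to folds, a model evaluated this way must extrapolate to genuinely unseen regions, which is closer to the transferability the abstract aims for than a random k-fold split.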
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Satellite Optical and SAR Photomonitoring for Geohazards

Authors: Carlo Alberto Stefanini, Dr. Antonio Cosentino, Paolo Mazzanti
Affiliations: Department of Earth Sciences, Sapienza University of Rome, IntelligEarth S.r.l., Start-up of Sapienza University of Rome, NHAZCA S.r.l, Start-up of Sapienza University of Rome
Satellite-based Earth observation has emerged as a critical tool for geohazard investigation, with over 1,200 satellites currently orbiting the planet according to the Union of Concerned Scientists (2023). This study explores innovative photomonitoring techniques applied to optical and Synthetic Aperture Radar (SAR) satellite imagery for comprehensive environmental monitoring, control, and post-emergency recovery efforts. The research examines two primary satellite sensor technologies: multispectral optical sensors and SAR systems. Optical satellites provide high-resolution RGB and spectral imagery, offering detailed surface insights with pixel resolutions of 10 meters or finer, in some cases below one meter. However, these systems are limited by weather conditions and daylight constraints. In contrast, SAR satellites utilize microwave technologies to penetrate cloud cover and operate day and night, generating amplitude and phase-based imagery, but are subject to geometric acquisition constraints. Photomonitoring emerges as a versatile Digital Image Processing (DIP) technique that transcends traditional monitoring approaches. Beyond initial reconnaissance, this method proves equally effective for continuous control monitoring and post-emergency recovery activities. The technique leverages advanced algorithms such as Change Detection and Digital Image Correlation to enable precise quantification of environmental changes. Key algorithmic approaches include: - Change detection, based on the Structural Similarity Index (SSI), which quantifies image variations through three critical pixel parameters: luminance, contrast and structure. - Digital Image Correlation, which tracks the displacement of an object using one of the following algorithms: Phase Correlation, Optical Flow or Normalized Cross-Correlation. The methodology demonstrates particular efficacy in monitoring complex geological phenomena, including landslides, glacial evolution, and seismic deformations.
Advanced Differential Interferometric SAR (A-DInSAR) analysis produces Persistent Scatterers (PS): points with high microwave reflectance, such as buildings or bare rock, whose displacement time series are retrieved with millimetric accuracy. However, the distribution of PS is not homogeneous; they are primarily located in urbanized areas, potentially creating information gaps in zones without reflectors. A-DInSAR analysis initially estimates displacements using images from ascending and descending acquisition geometries, giving a one-dimensional result: the displacement along the line of sight (LOS). Subsequently, results may be decomposed into synthetic points with time series along the East-West and Up-Down movement components. However, the North-South direction remains challenging to evaluate due to acquisition geometry and data processing limits, being parallel to the satellite's direction of motion. In this context, photomonitoring offers a complementary approach that overcomes some A-DInSAR limitations. It enables the identification of displacements in the North-South and East-West directions, providing more comprehensive spatial coverage. Case studies in Southern Italy demonstrated that both techniques allow overall identification of geological phenomena, detecting landslides without SAR reflectors as well as landslides characterized by significant North-South displacements. Future research directions include further refinement of 3D displacement field extraction and enhanced integration of optical and SAR imagery analysis techniques, with potential applications ranging from urban planning to disaster management and environmental conservation.
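Of the Digital Image Correlation algorithms listed in the abstract, phase correlation is especially compact to implement. A minimal integer-pixel sketch is shown below; operational photomonitoring adds subpixel refinement and windowing, which are omitted here:

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer-pixel shift of `mov` relative to `ref`
    from the phase of the normalized cross-power spectrum."""
    cross = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.ifft2(cross).real           # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts beyond half the frame into negative offsets
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return int(dy), int(dx)
```

Applied patch-wise to co-registered image pairs, this yields a dense offset field whose North-South component is exactly what the LOS-only A-DInSAR geometry cannot resolve.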
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: A Comprehensive Solution to Disaster Response Assistance with Earth Observation

Authors: Guy Schumann, Patricia Fernandez, Chloe Campo, Ben Suttor, Guillaume Gallion, Livio Loi, Paolo Tamagnone
Affiliations: RSS-Hydro
Earth observation (EO) data holds immense potential for emergency response, but there is a painful gap between data collection and its usefulness in real-time situations. This problem has been evident during many disasters, particularly during dynamic, fast-moving events like floods or flash floods, which can cause devastation in minutes and last from hours to weeks. The disastrous flooding of the region of Valencia, Spain, at the end of October 2024 is a perfect example of this problem. Landsat-8 acquired a nearly cloud-free image of approximately the maximum extent of the inundation on October 30; Sentinel-2 and Sentinel-1 took similarly clear images of the flooded areas on October 31 and November 1. Many organizations processed these images into more or less accurate flood maps; however, the value-add of those map products has been judged by many as near zero. The main criticisms were that the flood maps were too little, too late, and that there was a lack of understanding of how to interpret and effectively act on such large-scale data. EO data alone can give the right information about the right place during a disaster, but rarely at the right time, in the right way, and reaching the right people. These are well-known issues during most flood disaster events when it comes to EO data and products, as reviewed and critically elaborated on by Schumann et al. (2018). While new satellite constellations promise rapid revisits of disaster zones, the current challenge lies in processing and delivering that data quickly enough, and in a way that provides actionable and clear information. This project tackles this issue by looking beyond just data transmission protocols.
It suggests an innovative solution: using on-board processing on different orbital platforms (satellites, cloud computing infrastructure) to generate initial detection alerts, and redefining methods for data transmission to analytics platforms and other receiving devices on the ground (cell phones, dashboards, screens, etc.) with clear actions for decision-making. By improving both processing and transmission, we aim to bridge the gap and deliver near real-time actionable information to those on the ground, ultimately making EO data a true game-changer for emergency response. The solution will deliver the right information to the right person in the right place at the right time, thereby enabling a better-informed decision-making process in situations where minutes matter, saving lives and reducing damages. A comprehensive all-in-space solution should encompass a network of high-resolution, rapidly orbiting EO satellites equipped with advanced sensors capable of capturing high-frequency imagery and data. These satellites should be designed for frequent overpasses, ensuring near-real-time data acquisition, even in areas with limited infrastructure. We will explore, test and vet different space service technologies with the objective of better and faster assisting rapid response and recovery operations directly in the field during disasters. The solution developed will be applicable to many different types of disasters, or indeed, potentially to non-disaster situations that require rapid dissemination of critical information, but the proposed project focuses on flooding to demonstrate the orchestration of space services for rapid impact detection and alerting.
The different technology components assessed and complemented by this project to form a network of space services for a federated, more intelligent emergency response include the following:

• [Onboard Satellite Constellation Processing] To facilitate rapid processing and analysis, the all-in-space solution should incorporate on-board processing capabilities within the satellites. This enables initial data processing and analysis while the satellites are still in orbit, reducing latency and providing decision-makers with actionable insights promptly. Solution Demonstration: FloodSENS-IOX executed on a satellite constellation.

• [Orbital Computing Center] The solution should further leverage novel orbital cloud-based processing “centers” or nodes to handle more complex analyses and data integration. These orbital compute nodes should be equipped with powerful computing infrastructure and machine learning algorithms to extract deeper insights from the vast amounts of EO data acquired by satellites and streamed to the nodes directly in orbit. Solution Demonstration: FloodSENS-IOX will be executed on an in-orbit cloud computing infrastructure.

• [Space-to-Ground Transmission & Distribution] In addition to data acquisition, processing, and analysis, the all-in-space solution must also address data dissemination and communication challenges. This involves developing advanced data transmission protocols and establishing secure communication networks to ensure timely delivery of critical information to decision-makers and disaster response teams on the ground. Solution Demonstration: FloodSENS-IOX ML results will be transformed into actionable information in the form of a simple SMS (for delivering flood detection & impact intelligence in kilobytes) and MMS (for delivering highly compressed image thumbnails in kilobytes) to allow for easy continuous transmission to all possible devices on the ground.
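To illustrate the idea of delivering flood intelligence "in kilobytes" over SMS, here is a hedged sketch of a compact binary alert record; the field layout, scaling factors and function name are invented for illustration and are not the FloodSENS-IOX format.

```python
import struct

# u32 event id, i32 lat/lon in micro-degrees, u16 area in 0.1 km^2 steps,
# u8 confidence in percent; '<' = little-endian, standard sizes.
ALERT_FMT = "<IiiHB"

def pack_alert(event_id: int, lat: float, lon: float,
               flooded_km2: float, confidence: float) -> bytes:
    """Pack a flood alert into a fixed 15-byte record, far below the
    140-byte budget of a single 8-bit-encoded SMS."""
    return struct.pack(ALERT_FMT,
                       event_id,
                       round(lat * 1e6), round(lon * 1e6),
                       round(flooded_km2 * 10),   # 0.1 km^2 resolution
                       round(confidence * 100))   # percent

payload = pack_alert(42, 39.4699, -0.3763, 128.5, 0.93)
```

The fixed-width integer encoding trades a little precision (micro-degree coordinates, 0.1 km² area steps) for a payload small enough to broadcast continuously to any receiving device.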
Add to Google Calendar

Monday 23 June 14:00 - 14:20 (EO Arena)

Demo: A.07.10 DEMO - Generating hydrographic products with GRASS GIS: A hands-on workshop on the Hydrography90m methodology

GRASS GIS (Geographic Resources Analysis Support System) stands at the forefront of open-source geospatial analysis software, offering comprehensive tools for hydrological modeling and environmental analysis. This powerful suite excels in processing raster, vector, and time-series data.

At its core, GRASS GIS provides sophisticated modules for essential hydrological tasks, from watershed delineation and stream network extraction to flow accumulation and surface runoff modeling. The software's robust Digital Elevation Model (DEM) processing capabilities enable detailed terrain analysis, crucial for understanding water movement across landscapes.

What sets GRASS GIS apart is its flexibility and extensibility. Users can create advanced workflows by coupling hydrological models through BASH and Python scripting, and its plugin architecture is well suited for specialized hydrological analyses.

The software's dual interface approach, combining an intuitive GUI with powerful command-line functionality, accommodates both newcomers and experienced users.

Our upcoming demonstration at Living Planet Symposium will explore the breadth of GRASS GIS’ capabilities and demonstrate the groundbreaking methodology developed by Amatulli et al. (2022) for creating the Hydrography90m dataset. Participants will gain hands-on experience in using GRASS GIS to generate high-resolution global hydrographic products, and learn techniques that are directly applicable to their own projects.
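To give a feel for the flow-accumulation step that GRASS GIS modules such as r.watershed perform at continental scale, here is a toy pure-NumPy D8 sketch. It is a conceptual illustration only: it ignores diagonal distance weighting, depression filling and the performance optimizations a real implementation requires.

```python
import numpy as np

def d8_flow_accumulation(dem: np.ndarray) -> np.ndarray:
    """Toy D8 flow accumulation: each cell drains to its steepest-descent
    neighbour; accumulation counts the cells draining through each cell."""
    rows, cols = dem.shape
    acc = np.ones(dem.shape, dtype=np.int64)  # each cell contributes itself
    # Visit cells from highest to lowest, so every upstream contribution
    # is already summed before a cell passes its total downslope.
    for flat in np.argsort(dem, axis=None)[::-1]:
        r, c = divmod(int(flat), cols)
        best_drop, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = dem[r, c] - dem[rr, cc]  # no diagonal weighting
                    if drop > best_drop:
                        best_drop, target = drop, (rr, cc)
        if target is not None:
            acc[target] += acc[r, c]
    return acc
```

Thresholding the accumulation grid then yields a stream network, which is conceptually the raster-to-hydrography step behind datasets like Hydrography90m.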

Speakers:


  • Giuseppe Amatulli - School of the Environment, Yale University
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall L3)

Session: B.03.08 Earth Observation in support of the energy transition - PART 1

The energy sector is the largest contributor to greenhouse gas emissions, making the transition to green sustainable energy imperative, while ensuring its resilience and affordability. Countries worldwide have set ambitious targets and established associated regulatory frameworks to drive the transition, which requires a transformation and reshaping of the energy sector and market, driven by tailwinds such as increased investments and financial incentives, digitalization and technological innovation. This session explores the role of Earth observation integrated services in navigating the complexities of the energy transition landscape and providing insights to support decision making among various energy stakeholders, including policymakers and industrial actors. Topics in the scope of this session include (but are not limited to):

• Relevant EO initiatives, products and integrated solutions serving the energy transition in Europe and beyond. Examples can cover the use of EO in diverse applications related to energy transition, sustainability and resilience across various segments of the energy value chain (e.g. energy policy formulation and enforcement, energy planning and resource management - including demand characterisation, site selection and resource assessment - energy production and operations, storage, transportation, distribution, and consumption, energy efficiency and performance monitoring, environmental impact assessment, infrastructure monitoring and resilience, hazard and risk assessment, decommissioning etc). Examples focusing on operational use-cases and solutions addressing final users and stakeholders are encouraged to illustrate the current uptake of EO-integrated solutions for the energy transition.

• Evolving user requirements in the energy sector and gaps in current EO capabilities, along with potential developments and solutions to effectively address them.

• Challenges in scaling the adoption of EO-integrated services in the energy sector and fostering full vertical integration, including challenges in resource alignment, difficulties to effectively combine EO and non EO data and products and concerns related to data accessibility, standardization, licensing, privacy and capacity barriers. Potential recommendations and best practices on EO data and EO-integrated service provision tailored to the energy sector are also within the scope of this session.

• Leveraging technological advances and innovative analytics to accelerate the energy transition (e.g. AI-driven predictive analytics).

This session is addressed to individuals, stakeholders and energy sector representatives with interest in the use of EO for the energy transition including:

• Policy-making entities: Governmental authorities and agencies, national or regional regulatory bodies.

• Industrial stakeholders: Grid operators, energy and utility companies, energy investors, traders and asset owners, energy project development agencies, energy consulting firms etc.

• EO data and service providers for energy-related applications from both public and commercial offerings, as well as energy system integrators.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall L3)

Presentation: Wave Energy Potential in the Western Mediterranean Sea: Insights from High-Resolution Satellite Altimetry and In-Situ Data

Authors: Sonia Ponce de León, Dr Andrés Orejanera Rondón, Dr Maria Panfilova, Dr Marco Restano, Dr Jérôme Benveniste, Dr Roberto Sabia
Affiliations: CENTEC-IST, NOC, ESA-ESRIN, COSPAR
This study, conducted as part of ESA's ongoing WAPOSAL project (https://eo4society.esa.int/projects/waposal/), investigates the wave energy potential in the Western Mediterranean Sea, a region of strategic importance for renewable energy exploration. To evaluate the wave power potential, we employ the empirical method proposed by Gommenginger et al. (2003) to derive wave periods, critical for wave power computations. This analysis leverages high-resolution sea-state data from two satellite altimetry missions (Sentinel-3A/B and CryoSat-2), reprocessed using the advanced SAMOSA+ retracker, complemented by in situ wave buoy measurements from Spanish and French coastal networks. The study spans a comprehensive temporal window from January 2011 to December 2022. It estimates wave power density across the region, highlighting the most favorable seasons and locations for wave energy harvesting in coastal zones. While the Mediterranean Sea is predominantly characterized by wind-driven seas, our findings reveal a significant presence of swell energy, particularly during autumn and winter. We demonstrate the robustness of the applied method in capturing the sea-state dynamics of this semi-enclosed basin. This research underscores the value of innovative methodologies and the transformative role of satellite data in enhancing the precision of wave energy resource assessments, contributing to the sustainable energy landscape of the Mediterranean region.
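The wave power estimation the abstract describes rests on the standard deep-water formula P = ρg²Hs²Te/(64π), evaluated with the altimeter-derived significant wave height Hs and a wave period Te obtained empirically (e.g. via Gommenginger-style relations). A minimal sketch, where the function name and default water density are illustrative choices:

```python
import math

def wave_power_kw_per_m(hs_m: float, te_s: float,
                        rho: float = 1025.0, g: float = 9.81) -> float:
    """Deep-water wave power density P = rho*g^2/(64*pi) * Hs^2 * Te,
    returned in kW per metre of wave crest."""
    return rho * g ** 2 / (64 * math.pi) * hs_m ** 2 * te_s / 1000.0
```

With ρ = 1025 kg/m³ the leading constant is about 0.49 kW·m⁻³·s⁻¹, so a 2 m, 7 s sea state carries roughly 14 kW per metre of crest.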
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall L3)

Presentation: SkyImageNet: A global dataset of cloud cover observations for solar energy meteorology

Authors: Stephen Campbell, Yuhao Nie, Max Aragon, Adam Brandt, Samer Chaaraoui, Tao Jing, Mengying Li, Stefanie Meilinger, Quentin Paletta, Andea Scott, Sherrie Wang, Yupeng Wu, Liwenbo Zhang, Jack
Affiliations: European Space Agency, MIT, Mines Paristech, Stanford University, Bonn-Rhein-Sieg University of Applied Sciences, PolyU, University of Nottingham
Integrating renewable resources, such as solar photovoltaic (PV), into the electricity grid has been recognized as an important pathway towards a low-carbon energy system. However, large-scale integration of PV is challenged by its fluctuating power output, mainly caused by short-term cloud passage events. Current electricity systems contain a large amount of dispatchable resources (e.g., coal, natural gas, etc.) that can be ramped to fill in for the variability. In contrast, as future grids transition towards a significant share of PV, the rapid loss of power supply within minutes would pose a substantial challenge for grid management. Anticipating such events, even only 5 to 15 minutes in advance, would allow grid operators to efficiently adapt the response of the grid to incoming power supply fluctuations. Short-term solar forecasting, defined as predicting either PV power generation or solar irradiance within a time horizon of up to 30 minutes, is challenging because of the complex cloud dynamics. Images taken by ground-level sky cameras contain rich information about the surrounding sky that can be used to provide warning of approaching clouds from minutes to an hour ahead, making them increasingly popular in short-term solar forecasting. In recent years, the use of deep learning (DL) models has seen a significant rise in image-based solar power modeling. These deep learning models achieve superior performance over traditional physics-based models and machine learning models coupled with hand-crafted feature engineering. However, most existing DL-based solar forecasting models, trained on datasets with limited spatial and temporal coverage, struggle to generalize effectively to new locations. In addition, these DL-based methods show poor anticipation skills under cloudy conditions, for which solar power generation exhibits higher levels of variability, partly due to the lack of diversified cloudy samples for model training.
The data limitation also hinders the further exploration of more advanced and promising deep learning approaches for solar forecasting, such as foundation models, which are trained on large-scale datasets and can be fine-tuned to a range of downstream tasks or to new locations with excellent generalization skills. Moreover, the lack of a standardized dataset hinders the consistent comparison of various existing solar forecasting methods. The increasing release of sky image datasets provides opportunities to address these limitations. In a recent work, 72 open-source ground-based sky image datasets collected globally for research on solar forecasting and cloud modeling were identified. Such open-source datasets ease access to in-situ observations, whose collection is expensive and time-consuming, especially when devices need to be deployed in multiple locations for multiple years to ensure broad spatial and temporal data coverage. Hence, centralizing suitable open-source datasets to build a large and diversified dataset would enable comparable and robust model development. This study follows the common procedures for machine learning pipeline development, which cover the following four stages: (1) data collection, (2) data processing, (3) model development and evaluation, and (4) deployment. The methodology adopted for each stage is described separately below: Data collection: 19 datasets suitable for short-term solar forecasting are selected from the 72 open-source sky image datasets identified in a previous study, based on attributes including label type (solar irradiance or PV power output), temporal resolution, and image pixel resolution. The raw data are collected and stored locally, while the meta information for each dataset (e.g., the geographic locations of cameras/sensors, the camera model, the camera orientation, the time stamps, etc.) and the processed data are centralized and will be made publicly available.
Data processing: The main challenge of this project is to address the heterogeneity of the data characteristics, e.g., image pixel resolution, temporal resolution and label categories (solar irradiance or PV power output). Specifically, sky images of various high resolutions are down-sampled to a common lower resolution to facilitate model development. Furthermore, to standardize images taken by cameras with different celestial orientations, a clear transformation pipeline is developed. After processing, valid samples formatted as {input, output} are formed based on the specific setup of the solar forecasting task of interest (e.g., forecasting horizon, sampling interval, history length, etc.). The resulting dataset is then split into model training/validation and testing subsets, where a selection of data points representing diverse weather conditions and locations is isolated to constitute a fixed test set. Model development and evaluation: A typical setup for a short-term solar forecasting task is to predict the future solar irradiance or PV power output up to 30 min ahead based on a sequence of past sky images, possibly with auxiliary data such as sun angles, wind speed/direction, irradiance values and PV measurements as model input. In this project, diverse types of existing solar forecasting models are implemented, such as statistical, machine learning, deep learning, or physics-based models, to construct a performance benchmark. Following this, the training of more advanced large-scale deep learning models based on SkyImageNet, i.e., foundation models, will be explored. Deployment: The resulting processed dataset will be uploaded to public repositories (e.g., Zenodo, Mendeley Data, Hugging Face) and a permanent URL will be generated to simplify its access.
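The {input, output} sample formation described above can be sketched as follows; the history length, horizon and array shapes are illustrative choices, not the SkyImageNet configuration.

```python
import numpy as np

def make_samples(images: np.ndarray, irradiance: np.ndarray,
                 history: int = 8, horizon: int = 15):
    """Build {input, output} pairs from time-aligned arrays: each input is
    a sequence of `history` past sky images, each label the irradiance
    reading `horizon` steps ahead of the forecast issue time."""
    xs, ys = [], []
    for t in range(history, len(images) - horizon):
        xs.append(images[t - history:t])    # past image sequence
        ys.append(irradiance[t + horizon])  # future irradiance label
    return np.stack(xs), np.array(ys)
```

With a 2-minute sampling interval, `history=8` and `horizon=15` would correspond to 16 minutes of context and a 30-minute-ahead target.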
A Python package was developed that includes pre-implemented dataset download functions, data pre-processing pipelines, typical image transform and augmentation functions, pre-trained benchmark models and forecasting performance evaluation metrics. This will enable researchers to accelerate the development of ML-based forecasting models. The project aims to facilitate access to multi-location, multi-year cloud cover observations and solar data, benefiting a wide community including grid operators, energy producers, solar forecasting companies and broad academic research areas such as energy meteorology and atmospheric and climate sciences. To ensure a long-term impact, SkyImageNet will be well documented and will result in several publications. In addition, guidelines to contribute to this initiative via new datasets, code for relevant functionalities, or model additions will be included. We thereby build the foundation for a dataset and code base that can be used and extended by the whole community and can result in larger versions of SkyImageNet.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall L3)

Presentation: A satellite multi-model ensemble approach to support renewable energy transition in Germany

Authors: Helga Weber, Jaqueline Drücke, David Geiger, Regina Daus, Omar Herrerasanchez, Herbert Koch, Doron Callies, Garret Good, Dehong Yuan, Anna Christina Mikalsen, Frank Kaspar, Lukas Pauscher
Affiliations: Deutscher Wetterdienst, Fraunhofer IEE, University of Kassel, Menzio GmbH
The acceleration of green energy infrastructure investment and its expansion requires high-quality meteorological datasets at a fine spatial resolution. Long-term satellite observation and reanalysis ensemble data can serve the underlying needs of the energy sector. Given the increasing impact of weather extremes on electricity supply and demand, the multi-model ensemble approach allows a robust analysis of high-impact compound weather and climate events. Satellite-derived surface solar radiation products are used for a wide range of industrial and research applications in the solar energy sector, as are wind speed data from reanalysis models in the wind energy sector. For the latter, this spans from early-stage energy resource assessment for wind energy systems to the integration of wind energy into the electrical grid system, which is important for the planning, design, and maintenance of (future) energy systems. The input data and their accuracy thus have a direct impact on the results, subsequent measures, and energy economic development pathways. The overarching goal of the MEDAILLON project is to create a new, user-friendly, open, optimized, and high-resolution meteorological dataset for Germany, as well as to establish it as the meteorological standard dataset within system analysis and energy economics. The dataset will cover a 15-year time series at a resolution of 250 m x 250 m and include all relevant variables for energy system analysis (wind, surface radiation, temperature, etc.). Here, we present the satellite multi-model approach of the MEDAILLON project. The extensive measurement dataset includes the high-quality long-term satellite-derived surface solar radiation product SARAH-3 by EUMETSAT CM SAF and a set of reanalysis models ranging from ERA5 by ECMWF, the newly developed ICON-DREAM Ensemble by DWD, COSMO-R6G2 by DWD, and CERRA by SMHI to the mesoscale simulations of NEWA by DTU.
The quality of these input datasets underwent a rigorous comparison and (cross-)evaluation, for example to study the behavior of wind speed over orographic exposure using approx. 100 lidar and mast measurements across Germany. The satellite products and different reanalysis models are calibrated and combined with post-processing methods and regionally available meteorological observational data, as well as by using Copernicus CORINE land cover data. Using statistical methods (robust regression and (co-)kriging), approaches have been developed to best compensate for the known weaknesses of the models in the respective weather conditions. A key objective of the project is the implementation of an automated and operationally deployable model coupling (model nesting) and the associated downscaling. To increase the spatial model resolution, different methods for spatial interpolation of the various meteorological variables onto a uniform grid of 250 m (e.g., for wind data using PyWAsP and RANS CFD flow simulation) are developed. To provide information on the temporal and spatial uncertainty, quantile neural networks (deep quantile regression) and Transformer networks will be specifically tested and their results shown. The final dataset will be made openly available in the future.
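The deep quantile regression mentioned above is typically trained with the pinball (quantile) loss, which penalizes under- and over-prediction asymmetrically so the model converges to a chosen quantile rather than the mean. A minimal NumPy sketch; the quantile level in the comment is an illustrative choice:

```python
import numpy as np

def pinball_loss(y_true: np.ndarray, y_pred: np.ndarray, q: float) -> float:
    """Pinball (quantile) loss for quantile level q in (0, 1):
    under-predictions cost q*err, over-predictions cost (1-q)*|err|."""
    err = y_true - y_pred
    return float(np.mean(np.maximum(q * err, (q - 1) * err)))
```

Training separate heads at, say, q = 0.1, 0.5 and 0.9 yields a prediction interval around the median, which is one way to expose the temporal and spatial uncertainty the project targets.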
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall L3)

Presentation: Promoting the Use of Copernicus in the Energy Industry: Results of EUSPA’s Demonstrator Projects in Solar, Hydro, and Grid

Authors: Tim Heijmann, Lefteris Mamais, Mr Nicolas Fichaux, Mr Eduard Escalona Zorita
Affiliations: Evenflow, ARMINES, EUSPA
The European Union Agency for the Space Programme (EUSPA) plays an important role in advancing the operational adoption of Copernicus data, with dedicated efforts targeting the renewable energy sector. Numerous applications in the energy industry can benefit from the value provided by Copernicus-powered solutions. However, many users remain unaware of the added value space data can bring, or face other barriers that keep them from integrating such data and services in their workflows. EUSPA aims to address these barriers by funding short Copernicus demonstration projects (or “proof-of-concepts”) for energy users. The showcased solutions utilise Copernicus data and services to address issues and processes defined by the users themselves. This is considered a decisive step to bring supply and demand in contact, as users are able to fully grasp the added value of Copernicus for their specific workflows, and understand the steps required to operationally adopt such solutions. This ultimately paves the way for follow-on actions focussed on scaling up adoption of Copernicus in the energy industry as a whole. Working together with Evenflow, ARMINES, and EO Service Providers in the industry, several such demonstrations were organised in 2024, showcasing the value of Copernicus data in the workflows of various energy users, spanning the solar, hydroelectric, and grid management domains. The team identified emerging energy use cases by performing a multi-criteria analysis (based on EO service maturity, current level of uptake/novelty, capability of Copernicus to satisfy user needs, and policy developments further driving uptake) for various applications in various energy domains (e.g. solar, offshore wind, hydro, etc.). The results highlighted the areas with the highest potential for the operationalization of Copernicus in the energy industry. Based on the outcomes of the multi-criteria analysis, multiple energy users were approached.
For five energy users, demonstration plans were set up to highlight the value of Copernicus for an issue or process defined by the end-users themselves. Two demonstration projects were executed by the consortium members (Evenflow and ARMINES), and three demonstration projects were executed through public tender: a call for proposals was issued and three service providers were selected to execute the activities over a 4-month time span. The first demonstration addressed the need of a major electricity transmission and distribution system operator to identify illegal construction activities around their assets. Copernicus data enabled the early detection of possible construction activities, after which very high resolution data was used to gain more detailed insights of the disturbance. A second demonstration supported a grid operator to map wildfire and hydro-geological risks around their assets, leveraging the Copernicus Emergency Service (specifically the European forest fire information system EFFIS), Land Monitoring Service, and Sentinel data, among others. A third demonstration supported a major hydropower player to address dam spillway designs/upgrades required due to climate change and extreme flood scenarios, with 50 and 100 year return times, relying on the Copernicus Climate Impact indicators. Hydraulic modelling and flood prediction and mapping were performed to generate semi-dynamic flood susceptibility maps. A fourth demonstration looked at the grid balancing needs of a major transmission system operator, which is required to invest in balancing capacity (production resources that can be mobilised in 15 minutes, in case of a sudden demand peak). By more accurately predicting the production of a larger number of distributed small-scale solar PV installations on the grid, the TSO can therefore increase the accuracy of their demand forecasts, reduce the security margins and optimise the corresponding investments in balancing capacity. 
The demonstration showcased the ability of a prediction model using Copernicus CAMS data to more accurately forecast PV production for a large number of distributed installations. The last demonstration addressed the needs of a major financial institution to perform technical and commercial assessments of new PV installations. The solution leverages CAMS data; includes a detailed digital elevation model to factor in the building characteristics available for major cities in Europe; and considers shading for a more reliable estimate of sub-hourly production and corresponding cash flow estimates. The solar model is matched with an advanced financial model to generate accurate rates of return for potential investments. The results from these demonstrators validate the added value of incorporating space data (specifically Copernicus) into energy users’ workflows, and highlight the real-world needs of these users, both in terms of problems that need solving and operational challenges related to the integration of space solutions into day-to-day operations.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall L3)

Presentation: “Same same, but different” – A Cross-country Approach to enrich Locations of Energy Assets with hyper-localized Information on Climate Risk

Authors: Elke Kraetzschmar, Kristin Fleischer, Mareyam Belcaid
Affiliations: IABG mbH
Environmental hazards, whether natural, human-made or climate-related, have significant implications for ecosystems, human health and livelihoods, and thus for economies. Natural hazards such as earthquakes, floods, hurricanes, and wildfires are difficult to predict or behave in region-specific ways. Human-made land exploitation, such as urban expansion, mining and deforestation, often increases the downstream impact of the aforementioned events. Both therefore need to be considered specifically in the geospatial and socio-economic context of a region. Within the ESA Global Development Assistance Program (Clean Energy), a tool was elaborated to tackle such challenges and to provide highly granular information, linked to energy infrastructure assets and any related site-specific decisions to be made. In cooperation with the World Bank team, various hazards and their impact on energy infrastructure were investigated, specifically relevant for Bangladesh. The country is among the most vulnerable to climate change, as it is prone and exposed to extreme events such as flooding, landslides, extreme winds, and droughts. During an agile period of discussions and prototyping, the current best practice implemented at the World Bank was reviewed, the needs of economists and local stakeholders were discussed, and likely realisation strategies were prioritised. To examine the transferability of the approach, and to better apprehend the geospatial specifics and how they impact the analyses and questions asked by the users, the showcasing was extended in location and thematics to other sites. Whereas in Bangladesh hazards are fundamentally linked to a significant oversupply versus a temporary lack of water, large areas of Sub-Saharan Africa, with droughts and contrasting severe rain events, require a different strategy for combining geospatial analysis with Earth Observation.
• Agricultural areas in Latin America, regularly exposed to the El Niño/La Niña phenomena, have experienced severe heat waves over the last years, potentially intensified by the rapid reduction of rain forest, resulting in a long-term climatic change with extreme droughts and widespread fires.

• The Jamali islands (Indonesia) are densely populated, and the installation of energy assets competes with other local demands. Here, flood-related hazards, earthquakes caused by the Sunda Arc volcano range, and subsequent landslides are the focal point of analysis.

The derived hazard and exposure maps, as well as related analytics (visual, geospatial, statistical), are designed and provided for multiple user groups. Thus, multiple scales and associated tasks on the user side can be served with generic indicators, contributing to decision-based funding. This extensive, geographically and thematically extended use case and the developed tool address the assessment of energy infrastructure exposure to hazards by estimating the relevant climate and disaster risks accordingly. On the World Bank side, own platforms and standardised data pools are already available, presenting ‘at hand’ global data, monitoring and reporting national specifics at a draft level (see the World Bank’s Climate Knowledge Portal). The services of the GDA Clean Energy team consider these analyses whenever possible and suitable. Going further into detail, the GDA Clean Energy developments prepared extended solutions at a more detailed scale, considering the most recent and freely available data sets and imagery. Thus, it provides answers supporting the exposure assessment of energy-related assets to natural hazards and extreme events, responding to the most recent developments. The multiple-hazard database, combined with thematic layers (e.g. land cover), is able to respond to the full range of requests, from national to hyper-localised.
With this is becomes of high interest to engagements exceeding the energy sector. It is in particular to mention that the results are of interest to the World Bank Group’s Country Climate and Development Reports (CCDRs), a core country specific diagnostic, integrating climate change and development specifics. The developments within GDA was identified as important scalable and transferable tool, ready to feed into the statistical and geospatial baseline of this report providing financial assistance towards sustainable development in support to green transition.

Monday 23 June 14:00 - 15:30 (Hall F1)

Session: A.01.01 Advances in atmospheric composition - PART 1

High-quality global atmospheric composition observations are essential for our understanding of atmospheric responses and contributions to climate change, to document trends in greenhouse gases, their precursors, and air pollutant emissions, and to assess and underpin new policy measures for climate mitigation and adaptation.
This session will present the latest results on how atmospheric composition measurements can be used to monitor climate change. Furthermore, changes in the troposphere and stratosphere and their coupling (e.g. circulation and transport of trace gases, chemical composition, aerosol information) will be discussed. We invite presentations on data product improvements and validation aspects, as well as studies using satellite data for applications in atmospheric chemistry, composition monitoring and air quality from current and future missions.

Monday 23 June 14:00 - 15:30 (Hall F1)

Presentation: Impact of Australian Wildfires on Stratospheric Chlorine and Ozone

Authors: Lavinia Toso, Martyn P. Chipperfield, Jeremy J. Harrison
Affiliations: University Of Leicester, National Centre for Earth Observation, University of Leeds, National Centre for Earth Observation
The monitoring of inorganic chlorine species in the stratosphere, particularly hydrogen chloride (HCl), has been a critical measure for the success of the 1987 Montreal Protocol. As the most abundant chlorinated reservoir, HCl levels also reflect stratospheric variability caused by transient events, such as large wildfires. During December 2019 and January 2020, the Australian wildfires injected an unprecedented amount of smoke, containing organic aerosol, into the stratosphere. These particles provided surfaces for heterogeneous chemical reactions, altering the distribution of chlorine species as a result. To investigate these effects, we have used the TOMCAT 3-D chemical transport model to simulate the year 2020 and analyse the smoke behaviour in the stratosphere. By incorporating an organic tracer into our simulations, we could model the evolution of the smoke-related aerosol and the observed impact on the HCl distribution and variability. Output from TOMCAT has been evaluated using remote sensing data from the Atmospheric Chemistry Experiment - Fourier Transform Spectrometer (ACE-FTS) solar occultation instrument, along with data from other sounders, such as the Aura Microwave Limb Sounder (MLS) and the nadir-viewing Tropospheric Monitoring Instrument (TROPOMI). Indeed, the ACE-FTS observations showed that HCl concentrations decreased to half their average levels after the Australian wildfires, caused by reactivation processes on the sulfate and organic aerosol particles in the stratosphere. Ongoing analyses are focusing on other inorganic chlorine species to further assess wildfire impacts on stratospheric chemistry, and in particular ozone depletion. We have also performed TOMCAT simulations to test the possible impact of similar wildfire smoke injections in the year 2050, in an atmosphere with less chlorine (and increased methane and nitrous oxide).
This provides insights into inorganic chlorine processing and ozone layer recovery under conditions of increasing wildfire frequency and intensity driven by climate change. Overall, these findings highlight the significant impact of organic aerosols on stratospheric chlorine chemistry, leading to HCl processing and subsequent ozone depletion. With wildfires expected to become more frequent and severe due to climate change, understanding the chemical impact of smoke-related organic aerosol particles in the stratosphere is essential for predicting and ensuring the recovery of the ozone layer.

Monday 23 June 14:00 - 15:30 (Hall F1)

Presentation: Stratospheric fluorine and chlorine trends using the Empirical Mode Decomposition (EMD): a comparison of machine-learning based models, CTM simulations and satellite measurements

Authors: Antonio Giovanni Bruno, Dr. Jeremy J. Harrison, Dr. Sandip Dhomse, Prof. Martyn Chipperfield
Affiliations: University of Leicester, School of Physics and Astronomy, National Centre for Earth Observation (NCEO), University of Leicester, University of Leeds, School of Earth and Environment, National Centre for Earth Observation (NCEO), University of Leeds
The fluorine- and chlorine-containing halogenated species in the atmosphere exhibit significant global warming potentials and together with nitrous oxide (N2O) are often classified as other long-lived greenhouse gases (OLLGHGs). Chlorine-containing gases also play an important role in anthropogenic ozone depletion. These species are regulated internationally under the 1987 UN Montreal Protocol (MP) and its amendments. Recently, the Kigali Amendment in 2016 included fluorine-containing halogenated gases such as hydrofluorocarbons (HFCs) as globally controlled substances, in order to address the rapid growth of their emissions and to limit their climate impact. The continuous monitoring of atmospheric halogenated species is an important aspect of the Montreal Protocol. Among the key species are the main chlorine and fluorine stratospheric reservoir species hydrogen chloride (HCl) and hydrogen fluoride (HF) which provide insight into changes in halogen species within the atmosphere. HCl is of particular interest in the polar regions as it produces molecular chlorine, which during spring photolyzes into chlorine atoms causing ozone depletion. In order to evaluate the quality of the information on OLLGHGs provided by the current satellite missions, in November 2023 ESA started the LOng-LIved greenhouse gas PrOducts Performances (LOLIPOP) CCI+ project. Here we present an analysis of long-term trends in stratospheric HCl and HF datasets using the technique of empirical mode decomposition (EMD) in the context of the LOLIPOP CCI+ project. Analysed data sets include ACE-FTS v5.2 data, the output from a three-dimensional (3D) chemical transport model (CTM) called TOMCAT, and a machine-learning based dataset called TCOM that is derived from ACE and TOMCAT data. EMD is a data-driven decomposition method developed to decompose a signal into physically meaningful components. 
The use of EMD is particularly relevant for processing non-stationary and nonlinear time series, whose statistical properties (e.g., mean and standard deviation) change over time, by separating them into components with different time scales. EMD improves the de-seasonalization of the HCl and HF time series, allowing better identification of the seasonal and annual components and of the trends. The analysis of variations in the trends allows us to identify breakpoints associated with events that influence the abundance of fluorine- and chlorine-containing species on regional and global scales, such as the unexpected enhancement in emissions of CFC-11 during 2012–2019 in East Asia.
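EMD itself derives its components adaptively by sifting; as a much simpler illustration of the de-seasonalization goal described here, the sketch below separates a synthetic monthly series into trend and seasonal parts using a classical moving-average decomposition (explicitly not EMD; the series, trend slope and seasonal amplitude are invented for illustration):

```python
import math

# Synthetic monthly time series: a linear trend plus an annual cycle,
# standing in for an HCl- or HF-like record. Numbers are invented.
n_years = 10
t = list(range(12 * n_years))
series = [0.01 * k + 0.5 * math.sin(2 * math.pi * k / 12) for k in t]

def moving_trend(x, k=6):
    """Centred 12-month moving average (13-point window, half-weighted
    endpoints), which removes a period-12 seasonal cycle exactly."""
    out = []
    for i in range(k, len(x) - k):
        w = x[i - k:i + k + 1]
        out.append((sum(w) - 0.5 * (w[0] + w[-1])) / (2 * k))
    return out  # shorter than x by 2k samples

trend = moving_trend(series)

# Seasonal component: average the detrended residual by calendar month.
detrended = [series[i + 6] - trend[i] for i in range(len(trend))]
monthly = [[] for _ in range(12)]
for i, v in enumerate(detrended):
    monthly[(i + 6) % 12].append(v)
seasonal = [sum(m) / len(m) for m in monthly]

# De-seasonalized series: subtract each month's mean seasonal value.
deseason = [series[i] - seasonal[i % 12] for i in range(len(series))]
```

On this exact synthetic input the de-seasonalized series reduces to the underlying trend; EMD achieves the analogous separation without assuming a fixed period or functional form.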

Monday 23 June 14:00 - 15:30 (Hall F1)

Presentation: Challenges in assessing the quality of Climate Data Records for Precursors of Ozone and Aerosol Essential Climate Variables

Authors: Tijl Verhoelst, Steven Compernolle, Jean-Christopher Lambert, Corinne Vigouroux, Bavo Langerock, Gaia Pinardi, Mahesh Kumar Sha, Arno Keppens, Daan Hubert, Isidora Anglou, Athina Argyrouli, Lieven Clarisse, Cathy Clerbaux, Thomas Danckaert, Isabelle De Smedt, Maya George, Isolde Glissenaar, Andreas Richter, Sora Seo, Nicolas Theys, Martin Van Damme, Folkert Boersma, Michel van Roozendael
Affiliations: Royal Belgian Institute For Space Aeronomy (BIRA-IASB), Royal Netherlands Meteorological Institute (KNMI), Chair of Remote Sensing Technology, School of Engineering and Design, Technical University of Munich (TUM), German Aerospace Center (DLR), Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing, Université libre de Bruxelles (ULB), LATMOS/IPSL, Sorbonne Université, UVSQ, CNRS, Institute of Environmental Physics (IUP), University of Bremen
The ESA Precursors_cci+ project pioneers the generation of multi-satellite Level-3 Climate Data Records (CDRs) for several precursors of the ozone and aerosol Essential Climate Variables (ECVs): nitrogen dioxide (NO2), formaldehyde (HCHO), sulfur dioxide (SO2), and glyoxal (CHOCHO) data acquired by the GOME, SCIAMACHY, OMI, TROPOMI and/or GOME-2 missions, and carbon monoxide (CO) and ammonia (NH3) data acquired by the IASI and/or MOPITT missions. Quality assessment of the CDRs is essential to enable users to judge the fitness of the data for their purposes. In this project, verification of compliance to user requirements is done by validation, relying primarily on comparisons to ground-based reference measurements obtained with ZSL-DOAS (stratospheric NO2 column), MAX-DOAS (tropospheric NO2, HCHO, SO2, CHOCHO column or profile), direct-sun DOAS Pandora (NO2 and SO2 total column) and direct-sun FTIR (column or profile HCHO, CO and NH3) instruments associated with the Network for the Detection of Atmospheric Composition Change (NDACC) and the Pandonia Global Network (PGN). A close iteration loop between product providers and validators ensures quick feedback on format and data content, e.g., with suggestions for additional variables that are useful for validation and to the wider user community (e.g., on representativeness). In this contribution we provide an overview of the CDR validation, for which we apply state-of-the-art methodologies to estimate quality indicators such as bias, dispersion and drift. These quality indicators are then compared with the user requirements, such as those available from the Global Climate Observing System (GCOS). Detailed results are made available in the Product Validation and Intercomparison report. 
To pinpoint error sources, an end-to-end validation approach is followed which includes quality assessment of intermediate components: validation of the Level-2 product (the end product is Level-3), stratospheric NO2 column validation (the end product is tropospheric NO2), and comparisons with and without application of the averaging kernel and with and without cloud correction. For example, for the HCHO column, validation results improve and are more homogeneous when assuming clear sky than when using cloud height retrievals that clearly differ between individual Level-2 products. Next, we highlight key challenges faced during the validation of the initial Precursors_cci+ CDRs. To varying degrees for the different ECVs, the domain of validation can be limited by a lack of ground-based reference data (especially for historical sounders), by a lack of harmonized reference data sets accessible via centralized platforms like ESA’s Validation Data Centre (EVDC), and/or by data policy restrictions. SO2 is a good example where several of these issues are at play: the sparse MAX-DOAS data available are research-grade, have to be collected from the individual PIs, and vary in measurement protocol, data format, auxiliary data and uncertainty representation. We also point out the progress that has been made, with the expansion of the FRM4DOAS NO2 and HCHO central processing and the recent addition of harmonized FTIR HCHO data to the NDACC capabilities. Since drift assessment is a major part of the quality analysis of climate data records, the continuation of longer-term time series (e.g., ZSL-DOAS, FTIR) is indispensable and needs to be further supported. We conclude this contribution with a synthesis of results and lessons learned.
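The three quality indicators named above (bias, dispersion, drift) can be sketched as simple statistics of paired satellite-minus-reference differences; the synthetic numbers below are invented, and real validation additionally involves co-location criteria and averaging-kernel handling not shown here:

```python
def quality_indicators(times, diffs):
    """Bias (mean difference), dispersion (sample standard deviation)
    and drift (ordinary least-squares slope of difference vs. time)."""
    n = len(diffs)
    bias = sum(diffs) / n
    dispersion = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    t_mean = sum(times) / n
    drift = (sum((t - t_mean) * (d - bias) for t, d in zip(times, diffs))
             / sum((t - t_mean) ** 2 for t in times))
    return bias, dispersion, drift  # drift is in difference-units per time-unit

# Ten years of monthly differences with a known +0.5 offset and
# +0.02 per-year drift (illustrative values only):
years = [y / 12 for y in range(120)]
diffs = [0.5 + 0.02 * t for t in years]
b, s, dr = quality_indicators(years, diffs)
```

In practice such indicators would then be compared against GCOS-style requirements, as described above.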

Monday 23 June 14:00 - 15:30 (Hall F1)

Presentation: Unveiling Nitrogen Oxide Emissions from Open-Pit Copper Mines through Satellite Observations

Authors: Iolanda Ialongo, Henrik Virta, Janne Hakkarainen, Cem Özcan, Mikko Ranta, Simon Zieleniewski
Affiliations: Finnish Meteorological Institute, University of Vaasa, Rovjok oy
Copper is a critical mineral for clean energy and transportation, and it is necessary for a sustainable economy that aims at reducing fossil fuel usage. Assessing the performance and environmental impacts of copper mining is therefore necessary to evaluate the progress towards sustainable development. In this study, we estimate the nitrogen oxide (NOx) emissions (largely attributable to the diesel-powered mobile fleet) over 14 of the world’s largest open pit copper mines. We derive the emissions by applying a data-driven approach to the satellite-based nitrogen dioxide (NO2) observations from the TROPOMI (TROPOspheric Monitoring Instrument) on board the Sentinel-5 Precursor (S5P) satellite. We find that the annual NOx emissions over the different mines are coupled to the corresponding copper production, ore processed and total material moved. The time series analysis reveals that the annual amount of total material moved over the open pit of each mine best reproduces the year-to-year variability of the NOx emissions. Overall, satellite NO2 observations show good potential in tracking mining activities and for improving the assessment of the environmental impact of the mining industry.

Monday 23 June 14:00 - 15:30 (Hall F1)

Presentation: A 25-year Climate Data Record of tropospheric NO2 columns and uncertainties from the ESA CCI+ ECV Precursor project

Authors: Klaas Folkert Boersma, Isolde Glissenaar, Isidora Anglou, Henk Eskes, Sora Seo, Pieter Valks, Athina Argyrouli, Andreas Richter, Melanie Coldewey-Egbers, Tijl Verhoelst, Steven Compernolle, Gaia Pinardi, Jean-Christopher Lambert, Huan Yu, Michel Van Roozendael
Affiliations: KNMI, Wageningen University, DLR, University of Bremen, BIRA
Level 2 (L2) retrieval algorithms to derive NO2 tropospheric columns from raw satellite measurements have received a lot of attention. In comparison, Level 3 (L3) data, i.e. spatially and temporally averaged products on a consistent grid derived from L2 data, have received less consideration from the scientific community. Here we present a new L3 climate data record of tropospheric NO2 columns, compiled from the morning and afternoon satellite instruments GOME, SCIAMACHY, the GOME-2 suite, OMI and TROPOMI. The record spans the period 1997–2021 and is based on reprocessed EUMETSAT AC-SAF and EU FP7 QA4ECV data products and the TROPOMI v2.3 product. For GOME-2A, L2 algorithms have been intercompared and improvements to the stratospheric correction and air mass factor calculation have been included. The L3 dataset contains monthly mean tropospheric NO2 columns with associated averaging kernels and uncertainties, and is available on various spatial grids, ranging from 0.2° x 0.2° to 0.5° x 0.5°, 1.0° x 1.0° and 2° x 2.5°. For the first time, we address spatial and temporal error correlations by considering errors in the stratosphere-troposphere separation and air mass factor calculations, which are partly spatiotemporally correlated. We also account for representativeness uncertainty from partial cloud coverage. Our L3 dataset shows significantly reduced uncertainties over polluted areas: from 30–50% (L2) to under 20% (L3), meeting the GCOS 'breakthrough' and even 'goal' requirements. The L3 data have been validated against ground-based MAX-DOAS and PANDORA measurements, showing good consistency. We further discuss the need for consistent sampling in harmonizing the data for purposes of trend analysis. In particular, the row anomaly issue for OMI needs to be accounted for when interpreting the 2004-2021 OMI and 2018-2021 TROPOMI records in a consistent manner.
This monthly mean L3 tropospheric NO2 dataset offers a coherent data record of much reduced size, making it suitable for atmospheric chemistry studies, for evaluating atmospheric models, and for analyzing spatiotemporal NO2 trends.
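The effect of partly correlated L2 errors on an averaged L3 grid cell can be sketched with the textbook variance formula for the mean of N equally uncertain, pairwise correlated measurements; the single-retrieval uncertainty, sample count and correlation below are invented for illustration and are not the project's actual error budget:

```python
import math

def mean_rel_uncertainty(sigma, n, rho):
    """Relative uncertainty of the mean of n equally uncertain,
    pairwise rho-correlated samples: sigma * sqrt((1 + (n-1)*rho) / n).
    Fully random errors (rho=0) shrink as 1/sqrt(n); a shared error
    component (e.g. in the air mass factor) does not average out."""
    return sigma * math.sqrt((1.0 + (n - 1) * rho) / n)

sigma_l2 = 0.40   # assumed 40% single-retrieval (L2) uncertainty
n_obs = 30        # assumed number of L2 samples in a monthly cell
uncorrelated = mean_rel_uncertainty(sigma_l2, n_obs, 0.0)
partly_corr = mean_rel_uncertainty(sigma_l2, n_obs, 0.15)
```

With these illustrative numbers, a modest error correlation already keeps the averaged uncertainty well above the purely random 1/sqrt(n) limit, which is why accounting for spatiotemporal error correlation matters for the L3 uncertainty budget.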

Monday 23 June 14:00 - 15:30 (Hall F1)

Presentation: The retrieval of atmospheric trace gases using passive solar remote sensing from satellite and aircraft platforms: progress and challenges

Authors: John Burrows
Affiliations: University Of Bremen
GOME (on ESA's ERS-2, 1995-2011) and SCIAMACHY (on ESA's Envisat, 2002-2012) began a new age of nadir-viewing passive remote sensing instruments in space, aiming to deliver trace gas amounts and distributions globally. Follow-on instruments with progressively improving spatial resolution and signal-to-noise have resulted, the most recent being TROPOMI on the Copernicus Sentinel-5 Precursor (EU-ESA S5P, 2017 to present). In addition, the Geostationary Environment Monitoring Spectrometer (GEMS) was launched in 2020 on the Korean Aerospace Research Institute GEO-KOMPSAT-2B satellite into a geostationary orbit (GEO). This evolution of passive remote sensing stimulated the development of aircraft-borne instruments at the University of Bremen, e.g. the AIRMAP and MAMAP families of instruments. The spectral resolution of the above instruments is sufficient to identify the electronic-vibrational-rotational absorptions in the ultraviolet, visible and near-infrared spectral regions of the upwelling radiance. Using differential optical absorption spectroscopy (DOAS), total column amounts of key trace gases (e.g. ozone, O3; nitrogen dioxide, NO2; bromine monoxide, BrO; chlorine dioxide, OClO; iodine oxide, IO; formaldehyde, HCHO; glyoxal, CHO.CHO; and water vapour, H2O) are retrieved. In addition, SCIAMACHY and TROPOMI observe in the shortwave-infrared. CO columns and the dry columns of methane, XCH4, and carbon dioxide, XCO2, are retrieved from SCIAMACHY measurements, and the inversion of TROPOMI radiances delivers CO columns and XCH4 at a much higher spatial resolution. The AIRMAP and MAMAP families of instruments have been developed to target absorptions in the ultraviolet/visible and the near-infrared/shortwave-infrared, respectively.
This presentation focuses on recent results addressing: i) the validation of TROPOMI and the estimation of urban emissions of NO2; ii) ozone measurements; iii) the validation of the GEMS data products; iv) the emissions of CH4 from TROPOMI and of CH4 and CO2 from the new MAMAP 2D Light instrument. These developments will be put in the context of the evolving global observing system.
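The DOAS retrieval mentioned above can be sketched as a small linear least-squares problem: the measured differential optical depth tau(lambda) = ln(I0/I) is modelled as a sum of trace-gas cross sections scaled by their column amounts plus a low-order polynomial for broadband effects. The cross-section shapes, column values and units below are entirely synthetic, not real spectroscopy:

```python
import math

def solve_normal_equations(A, y):
    """Linear least squares via the normal equations (A^T A) x = A^T y,
    solved by Gaussian elimination (no pivoting; A^T A is SPD here)."""
    m = len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(m)]
         for i in range(m)]
    b = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(m)]
    for i in range(m):                      # forward elimination
        for r in range(i + 1, m):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
            b[r] -= f * b[i]
    x = [0.0] * m
    for i in range(m - 1, -1, -1):          # back substitution
        x[i] = (b[i] - sum(M[i][j] * x[j] for j in range(i + 1, m))) / M[i][i]
    return x

wl = [300 + k for k in range(50)]           # wavelength grid in nm (toy)
xs = [(w - 325) / 25 for w in wl]           # centred polynomial variable
# Invented "cross sections" with distinct high-frequency structure --
# it is this differential structure that DOAS exploits:
sig1 = [0.10 * (1 + 0.5 * math.sin(0.60 * (w - 300))) for w in wl]
sig2 = [0.20 * (1 + 0.4 * math.cos(0.45 * (w - 300))) for w in wl]
N1, N2 = 0.3, 0.8                           # "true" columns, scaled units
tau = [s1 * N1 + s2 * N2 + 0.01 + 0.002 * x
       for s1, s2, x in zip(sig1, sig2, xs)]
# Fit columns and the broadband polynomial (constant + linear) jointly:
A = [[s1, s2, 1.0, x] for s1, s2, x in zip(sig1, sig2, xs)]
n1, n2, _, _ = solve_normal_equations(A, tau)
```

Because the synthetic optical depth is noise-free, the fitted columns reproduce the "true" values; real retrievals additionally deal with noise, Ring effect and wavelength calibration.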

Monday 23 June 14:00 - 15:30 (Hall K1)

Session: D.01.02 Technological Innovations for a Digital Twin of the Earth system - PART 1

The dream of a comprehensive Digital Twin of the Earth System is fuelled by constant advancements. This session dives into the cutting-edge technologies propelling the development of a Digital Twin of the Earth System.

The session will highlight advancements in data acquisition, processing, modelling, and visualisation that enable high-fidelity simulations of Earth's complex systems. Emphasis will be placed on the integration of various technologies, including AI, machine learning, high-performance computing, and cloud platforms, to create an interactive and dynamic digital representation of our planet.
In this session, we invite contributions to discuss the following key topics:

- Next-Generation Earth Observation - We seek discussions on the latest advancements in acquiring satellite data, including new satellite technologies and sensors. Contributions are welcome on techniques for processing and analysing satellite data to enrich the Digital Twin Earth (DTE) with detailed and dynamic information. Case studies that demonstrate how these advancements are being applied in current projects are highly encouraged.

- High-Resolution Earth System Modeling - We invite detailed discussions on the development of next-generation climate models that simulate atmospheric, oceanic, and terrestrial processes with unprecedented accuracy. Contributions on techniques for integrating different Earth system components (e.g., atmosphere, hydrosphere, biosphere) into unified models for comprehensive simulations are sought. Innovations in achieving real-time or near-real-time simulation capabilities, enabling dynamic monitoring and decision-making, are also welcome.

- High-Performance Computing and Artificial Intelligence - We seek contributions on utilising high-performance computing (HPC) and cloud platforms to handle the large-scale data and computational demands of digital twins. Discussions on using AI and machine learning to refine model predictions, detect complex patterns, and automate data processing workflows are encouraged. Additionally, contributions on developing AI-based tools for forecasting environmental changes and extreme events, enhancing preparedness and response strategies, are invited.

- Big Data Management and Integration - We invite discussions on innovative data management techniques and strategies for managing the vast amounts of data generated by Earth system models and simulations. Contributions on techniques for ensuring seamless integration of data from diverse sources, including satellite EO, ground observations, and in-situ sensors, are welcome. Solutions for storing and accessing large datasets efficiently and securely are also sought.

- Emerging Technologies for enhancement of a Digital Twin of the Earth system - We seek contributions on leveraging cloud platforms to enhance the scalability and flexibility of the Digital Twin Earth. Discussions on processing data closer to its source using edge computing to improve response times and reduce bandwidth usage are invited. Contributions on developing interactive and intuitive visualisation tools to explore complex Earth system data are also encouraged.

- Visualisation and User Interaction - We invite discussions on developing tools and platforms for visualising complex Earth system data in intuitive and interactive formats. Contributions on applications of virtual and augmented reality (VR and AR) in exploring digital twin models, enhancing user engagement and understanding, are sought. Creating user-friendly interfaces and dashboards for accessing, analysing, and interacting with digital twin data is another key topic for this session.

- Challenges and Future Directions - We seek discussions on addressing the need for standard protocols and frameworks to ensure interoperability among different digital twin components. Contributions on ensuring the privacy and security of data used in and generated by digital twin systems, addressing ethical and regulatory concerns, are invited. Strategies for ensuring the sustainability and scalability of digital twin initiatives over the long term, including funding and resource allocation, are also welcome.

By exploring these topics, this session aims to highlight the technological innovations driving the development of the Digital Twin Earth and discuss the challenges and future directions in this rapidly evolving field.

Monday 23 June 14:00 - 15:30 (Hall K1)

Presentation: The Earth Data Hub: redefining access to massive climate and Earth observation datasets using Zarr and Xarray

#zarr #stac

Authors: Alessandro Amici, Nicola Masotti, Luca Fabbri, Francesco Nazzaro, Benedetta Cerruti, Cristiano Carpi
Affiliations: B-Open
The growing need for efficient and rapid access to climate and Earth observation datasets poses several challenges for data providers. As the volume and complexity of data grow, existing infrastructures struggle to handle the demands for high-performance, massive data access. Traditional systems represent a barrier between data and users, who often find themselves struggling with download queues and a growing number of non-standard retrieval APIs. Traditional systems also fail to optimize for specific needs, such as regional and time-series data access. For this reason many organizations choose to maintain private copies of datasets, resulting in unnecessary effort and storage costs. These inefficiencies hinder data-driven research and delay actionable insights. There is an urgent need for a solution that addresses these issues while providing a seamless, efficient, and user-friendly experience. This solution must support diverse workflows, reduce data transfer overheads, and enable scalable analysis of large, multi-dimensional datasets. This session will introduce Earth Data Hub, a cutting-edge data distribution service that leverages cloud-optimized, analysis-ready technologies to provide researchers, policymakers, and technologists with fast and easy access to more than 3 PB of Earth-related data, including the ERA5 reanalysis, the Copernicus DEM and the Destination Earth Climate digital twin. By storing data in a heavily compressed Zarr format and organizing it into optimally sized chunks, Earth Data Hub ensures that users retrieve only the data they need, minimizing unnecessary data transfer. Our design favours workflows involving regional and time-series analysis, making it extraordinarily fast to work with geographically limited data or with time series at point locations. This efficiency is a key enabler for data-driven and AI research, as it reduces computational overheads and accelerates the time to insight.
Earth Data Hub also eliminates many other traditional bottlenecks associated with accessing and analyzing climate datasets, such as download queues and cumbersome retrieval APIs. Instead, Earth Data Hub users can leverage tools like Xarray and Dask to perform distributed computations directly on cloud-hosted data, streamlining workflows and unlocking new possibilities for data exploration and modeling. The platform’s catalogue is organized as a SpatioTemporal Asset Catalog (STAC), which ensures discoverability and standardization. Users can search, filter, and retrieve metadata for a wide array of datasets. By adhering to open standards, Earth Data Hub empowers diverse user communities to integrate these datasets into custom workflows. A crucial innovation underpinning Earth Data Hub is its use of a serverless architecture. Serverless data distribution is an approach in which the distributor does not have to think about or manage backend servers, clusters, caches, queues, requests, adaptors or other infrastructure. This is all done automatically by plugging modern tools such as Xarray and Dask into an object storage instance where the data is kept in Zarr format. This serverless paradigm simplifies maintenance, reduces operational costs, and provides high availability, all while accommodating large-scale, on-demand data access. Functions are executed in response to user requests, ensuring efficient utilization of resources. The absence of traditional servers reduces latency and ensures that users experience consistent performance regardless of the size or complexity of their queries. This session will showcase the practical usage of Earth Data Hub, starting from catalogue exploration up to actual data usage in a Dask- and Xarray-powered environment (Jupyter notebook). The session will also showcase Earth Data Hub's integration with other Destination Earth services such as Insula Code.
Industry experts will present insights into the practical benefits of adopting Earth Data Hub services, fostering collaboration among stakeholders and the Destination Earth ecosystem. In summary, Earth Data Hub exemplifies the fusion of advanced data management techniques, cloud-optimized formats, and serverless technologies to create a robust platform for accessing and analyzing climate and Earth observation data. Its innovative design supports scalable, efficient, and user-friendly workflows, making it an indispensable resource for anyone working with complex Earth system datasets. The inclusion of cutting-edge datasets such as the Climate Adaptation Digital Twin within Earth Data Hub underscores its commitment to supporting global climate adaptation and mitigation efforts. By providing easy access to high-resolution, up-to-date, and scientifically rigorous data, the platform plays a critical role in empowering stakeholders to make informed decisions in response to the challenges of climate change.
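The benefit of "optimally sized chunks" described above can be illustrated with a small chunk-arithmetic sketch: a regional or point time-series selection touches only the chunks it intersects, not the whole array. The array shape and chunk sizes below are hypothetical, not Earth Data Hub's actual layout:

```python
import math

# Hypothetical (time, lat, lon) cube stored as Zarr-style chunks:
# one year of hourly data on a 0.1-degree global grid (invented numbers).
shape = (8760, 1801, 3600)
chunks = (24, 128, 128)

def chunks_touched(slices, chunks):
    """Number of chunks a hyper-rectangular selection intersects.
    Each slice is a (start, stop) pair in array coordinates."""
    n = 1
    for (start, stop), c in zip(slices, chunks):
        n *= (stop - 1) // c - start // c + 1
    return n

total = math.prod(math.ceil(s / c) for s, c in zip(shape, chunks))
# A full-year time series at a single grid point reads only one chunk
# per time-chunk, a tiny fraction of the store:
point_ts = chunks_touched([(0, 8760), (900, 901), (1800, 1801)], chunks)
```

This is exactly the selection logic that Xarray and Dask perform transparently when reading a Zarr store over object storage: only the intersecting chunks are fetched.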

Monday 23 June 14:00 - 15:30 (Hall K1)

Presentation: DestinE’s Earth system digital twins

Authors: Irina Sandu
Affiliations:

Monday 23 June 14:00 - 15:30 (Hall K1)

Presentation: Destination Earth: The Power of Data Lake Edge Services

Authors: Danaële Puechmaille, Patryk Grzybowski, Michael Schick
Affiliations: EUMETSAT, CloudFerro
The Destination Earth initiative (DestinE), driven by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), the European Space Agency (ESA) and the European Centre for Medium-Range Weather Forecasts (ECMWF), aims to establish a highly accurate replica - a Digital Twin - of the Earth. At present, two Digital Twins are operational, providing scenarios for weather-induced and geophysical extremes as well as for climate change adaptation. In the coming years the number of Digital Twins will be extended, and the DestinE Data Lake is the connecting element that integrates new Twins and makes their data available to the DestinE community. Answering “what-if” questions requires exploiting and analysing a vast amount of data, and hence, in today's world, the power of AI. The Data Lake Edge services allow users either to exploit existing models or to develop new small- to medium-sized models. Such development creates a strong need for easy access to data and convenient ways of working with it, which the Destination Earth Data Lake (DEDL) Edge services address. The DEDL Edge services, available on request and by grant of the EC, are briefly introduced here. STACK is an application development environment providing JupyterHub/BinderHub and DASK, usable with Python and R. It allows users to create a DASK cluster on selected DEDL edges and to process close to the data; this setup empowers users to efficiently tackle complex tasks on DestinE datasets, facilitating the development of their own applications and services. ISLET is an OpenStack-based cloud infrastructure as a service (IaaS), built on open-source cloud computing infrastructure software. It allows users to create and manage virtual machines as well as their storage through a graphical user interface (GUI) and a command line interface (CLI).
What makes Islet exceptional is that it provides services in proximity to the data holdings, as a distributed infrastructure close to High Performance Computing (HPC). This is made possible by data bridges (e.g. edge clouds), which enable working with large volumes of data (including Digital Twin data). HOOK is the Data Lake workflow engine, which users can employ to execute ready-to-use workflows, such as generating user-defined cubes, Sentinel-2 MAJA atmospheric correction, or Sentinel-1 terrain-corrected backscatter; it also enables user-defined workflows. The Harmonised Data Access (HDA) of the DestinE Data Lake is available from anywhere, without request, and allows users to access the heterogeneous DestinE data portfolio easily, efficiently and seamlessly. The DestinE Data Lake thus contributes to DestinE, a groundbreaking initiative that allows decision and policy makers to find answers to “what-if” questions, contributing further to climate research and sustainable development. The Edge services available on request (Islet, Hook, Stack), together with the access to data including the Digital Twins (HDA), are built on key concepts such as federation, processing near the data, and harmonisation of data access.
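The unified, STAC-style search behind harmonised data access can be sketched in miniature as filtering catalogue items by collection, bounding box and time. No endpoint is contacted here, and the item records, collection names and fields below are invented for illustration:

```python
from datetime import datetime

# Mock catalogue items in a STAC-like shape (id, collection, bbox as
# [west, south, east, north], and an ISO-8601 datetime). Invented data.
items = [
    {"id": "a", "collection": "dt-extremes", "bbox": [5, 45, 15, 55],
     "datetime": "2024-07-01T12:00:00Z"},
    {"id": "b", "collection": "dt-climate", "bbox": [-60, -20, -40, 0],
     "datetime": "2024-07-02T12:00:00Z"},
]

def bbox_intersects(a, b):
    """True if two [west, south, east, north] boxes overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def search(items, collection=None, bbox=None, start=None, end=None):
    """Filter items the way a STAC item-search would, locally."""
    out = []
    for it in items:
        if collection and it["collection"] != collection:
            continue
        if bbox and not bbox_intersects(it["bbox"], bbox):
            continue
        t = datetime.fromisoformat(it["datetime"].replace("Z", "+00:00"))
        if (start and t < start) or (end and t > end):
            continue
        out.append(it["id"])
    return out
```

The point of a unified API of this kind is that the same query shape works regardless of where each dataset physically lives or which protocol serves it.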

Monday 23 June 14:00 - 15:30 (Hall K1)

Presentation: DestinE Data Lake – AI-Driven Insights on Edge Services

#stac

Authors: Oriol Hinojo Comellas, Miruna Stoicescu, Dr. Sina Montazeri, Michael Schick, Danaële
Affiliations: EUMETSAT
This paper describes how the Destination Earth Data Lake (DEDL) is being enhanced to allow users to exploit cutting-edge artificial intelligence (AI) and machine learning (ML) capabilities for scientific and policy applications. These advancements focus on creating AI-ready data, developing AI application demonstrators, and providing Machine Learning Operations (MLOps) tooling coupled with robust infrastructure. By unlocking AI/ML capabilities close to the data, DestinE empowers users with advanced tools for processing and analyzing large-scale datasets efficiently and innovatively. The European Commission’s Destination Earth (DestinE) initiative is creating precise digital replicas of the Earth, known as Digital Twins, to monitor and simulate natural and human activities and their interactions. Services and tools available within DestinE enable end-users and policymakers to develop and execute “what-if” scenarios to evaluate the impacts of environmental challenges, such as extreme weather events and climate adaptation changes, as well as the effectiveness of proposed solutions. Focusing on the Data Lake component, DestinE provides users with unparalleled access to large-scale and diverse datasets, along with a dynamic suite of big data processing services that operate close to these massive data repositories. In practice, the DestinE Data Lake provides a portfolio of services consisting of Harmonised Data Access (HDA), which enables users to access the diverse datasets defined in the DestinE Data Portfolio through a unified STAC API, regardless of data location and underlying access protocol, and the Edge services: Islet (compute, storage and network resources that users can instantiate and manage), Stack (ready-to-use applications such as JupyterHub and Dask) and Hooks (ready-to-use or user-defined functions).
These services are powered by a geographically distributed cloud infrastructure consisting of a Central Site (Warsaw) and Data Bridges, co-located with EuroHPC sites where the DestinE Digital Twins are running (Kajaani, Bologna and Barcelona) or with large data providers (EUMETSAT). To harness the full potential of artificial intelligence (AI) next to the data, the DestinE Data Lake is evolving its service offerings around three pillars: AI-ready data, AI application demonstrators, and MLOps infrastructure. 1. AI-Ready Data: the DestinE Data Lake is being equipped with an open-source framework to transform the diverse DestinE Data Portfolio datasets into AI-ready formats. The framework will provide preprocessing capabilities such as data collocation, reprojection, regridding, resampling, data cleaning, and metadata handling. By tailoring and combining these capabilities, the framework will facilitate the seamless integration of diverse datasets and the use of outputs within user-defined AI workflows powered by DEDL Edge Services. 2. AI Application Demonstrators: a set of scientific AI/ML applications making use of DEDL data and services, in combination with EUMETSAT satellite data, aiming to gain more insight into specific parameters (e.g. clouds and cloud types, fire risk) or to explore potential improvements of existing data products. End users are involved during the development of the applications, to ensure that their expectations are met. Mature applications will be incorporated in the operational DestinE service offering. The applications, outputs and deliverables from this work are open source to stimulate uptake and re-use. 3. MLOps Infrastructure: the DestinE Data Lake offers flexible, GPU-accelerated edge computing capabilities. This hybrid setup empowers users to train, evaluate, refine, and deploy AI/ML models of small to medium size.
Tools like OpenStack IaaS and pre-deployment resources on Destination Earth bridges lower the barrier to entry for model experimentation and operationalization. In summary, these advancements establish a robust foundation for integrating AI/ML capabilities into the DestinE ecosystem. By streamlining workflows, saving time, and providing scalable tools, DestinE enables users to make data-driven decisions more effectively. Moving forward, the initiative will focus on fostering broader collaboration among stakeholders and expanding the portfolio of AI-ready tools and services, ensuring that DestinE continues to drive innovation and sustainable development.
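Conceptually, the unified STAC API that HDA exposes lets a user filter a heterogeneous portfolio by collection, bounding box and time range. The self-contained sketch below reproduces that search logic over synthetic STAC-like items; real access would go through a STAC client against the HDA endpoint (not reproduced here), and the collection names and values are invented:

```python
# Minimal illustration of what a STAC-style search does: filter items by
# collection, spatial intersection and time range. Items are synthetic;
# a real query would use a STAC client against the HDA endpoint.
from datetime import datetime

items = [
    {"collection": "sentinel-2-l2a", "bbox": [10.0, 45.0, 11.0, 46.0],
     "datetime": datetime(2024, 6, 1)},
    {"collection": "sentinel-2-l2a", "bbox": [20.0, 55.0, 21.0, 56.0],
     "datetime": datetime(2024, 6, 3)},
    {"collection": "era5", "bbox": [-180, -90, 180, 90],
     "datetime": datetime(2024, 6, 2)},
]

def search(items, collection, bbox, start, end):
    """STAC-style search: collection match + bbox intersection + time range."""
    def intersects(a, b):  # [xmin, ymin, xmax, ymax] rectangles
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])
    return [it for it in items
            if it["collection"] == collection
            and intersects(it["bbox"], bbox)
            and start <= it["datetime"] <= end]

hits = search(items, "sentinel-2-l2a", [9.5, 44.5, 10.5, 45.5],
              datetime(2024, 5, 31), datetime(2024, 6, 30))
print(len(hits))  # 1: only the first item matches all three filters
```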

Monday 23 June 14:00 - 15:30 (Hall K1)

Presentation: Leveraging Insula for Advanced Earth Observation Data Processing: Use Cases in Atmospheric Correction and Evapotranspiration Estimation

#zarr

Authors: Cesare Rossi, Beatrice Gottardi, Davide Foschi, Davide Giorgiutti, Stefano Marra, Alessandro Marin, Gaetano Pace
Affiliations: CGI
Insula - the hub between data and decisions - is an advanced platform for Earth Observation (EO) analytics that leverages powerful big data capabilities to provide users with a deep understanding of Earth-related phenomena. Insula integrates cutting-edge, production-ready technologies, such as Kubernetes and Argo Workflows, offering a seamless user experience. Its features enable users to perform complex analyses, such as trend analysis, anomaly detection, predictive modeling, and much more, by harnessing harmonized and integrated datasets. With customizable visualization analytics, autoscaling processing campaigns, and real-time monitoring, Insula empowers different actors throughout the value chain to extract actionable insights efficiently, in line with their backgrounds and objectives. Insula's flexible environment supports the integration of new data sources and services, ensuring that researchers can adapt the platform to their specific needs. The availability of Python libraries such as GDAL, Rasterio, and xarray, and the incorporation of AI and machine learning (ML) capabilities through popular libraries like TensorFlow, PyTorch, and scikit-learn, further enhances the platform’s versatility, enabling expert users to perform coding activities directly within Insula. Two use cases demonstrate the powerful applications of Insula in EO analysis: Sentinel-3 OLCI (Ocean and Land Color Instrument) L1 to L2 C2RCC conversion, and standard evapotranspiration using ERA5. In the first use case, Insula was employed to process Sentinel-3 OLCI data and perform atmospheric correction using the C2RCC algorithm and multi-sensor pixel classification with the IdePix tool. Insula’s integrated environment allowed for seamless execution of this complex process, enabling the extraction of key physical variables such as chlorophyll concentration and water quality parameters.
By leveraging the platform’s autoscaling capabilities, the processing of large datasets can be optimized for efficiency, ensuring timely results. The final outputs were used for classifying oceanic and coastal regions, providing valuable insights for marine research and management. In the second use case, Insula was used to calculate standard evapotranspiration (ET0) using the Penman-Monteith method, a widely accepted approach for estimating water loss from soil and vegetation. The platform facilitated the integration of the newly optimized ERA5 hourly reanalysis dataset (in Zarr format) to compute ET0 maps at regional scales. Insula’s processing environment allowed for the smooth execution of the complex calculations while incorporating real-time data updates. A standout feature of Insula is “Insula Perception”, which excels in its visualization and analytics capabilities, empowering effective collaboration between teams and organizations. These features were instrumental in both use cases, enabling a clearer understanding and communication of results. These use cases highlight Insula’s ability to manage diverse datasets critical for climate studies, water resource management and agriculture, to name a few. Moreover, the ability to collaborate and share findings fosters a data-driven approach to Earth observation, facilitating decision-making at various scales and enhancing interdisciplinary research across domains. In summary, Insula - the hub between data and decisions - is a powerful tool for researchers, policymakers, and industries reliant on EO for informed decision-making.
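The Penman-Monteith ET0 computation mentioned above can be sketched per grid cell; this is a minimal implementation of the standard FAO-56 daily formulation with illustrative inputs, not the project's actual processing chain:

```python
# FAO-56 Penman-Monteith reference evapotranspiration (daily, mm/day).
# A per-cell sketch of the calculation; on Insula this would be applied
# over ERA5 fields. Input values below are illustrative only.
import math

def fao56_et0(t_mean, rn, g, u2, es, ea, pressure=101.3):
    """t_mean: air temperature (degC); rn: net radiation (MJ/m2/day);
    g: soil heat flux (MJ/m2/day); u2: wind speed at 2 m (m/s);
    es/ea: saturation/actual vapour pressure (kPa); pressure: kPa."""
    # Slope of the saturation vapour pressure curve (kPa/degC)
    delta = 4098 * (0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))) \
            / (t_mean + 237.3) ** 2
    gamma = 0.000665 * pressure  # psychrometric constant (kPa/degC)
    num = 0.408 * delta * (rn - g) \
          + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1 + 0.34 * u2))

# Illustrative daily inputs of roughly textbook magnitude
et0 = fao56_et0(t_mean=16.9, rn=13.28, g=0.14, u2=2.078, es=1.997, ea=1.409)
print(round(et0, 2))  # a few mm/day, as expected for a temperate day
```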

Monday 23 June 14:00 - 15:30 (Hall K1)

Presentation: Global Fish Tracking System (GFTS): Harnessing Technological Innovations for Conservation and Sustainable Resource Management

#zarr #pangeo

Authors: PhD Daniel Wiesmann, Anne Fouilloux, Tina Odaka, Benjamin Ragan-Kelley, Mathieu Woillez, Quentin Mazouni, Emmanuelle Autret
Affiliations: Development Seed, Simula Research Laboratory, LOPS (Laboratory for Ocean Physics and Satellite remote sensing), DECOD (Ecosystem Dynamics and Sustainability)
Advancing our understanding of marine ecosystems and fostering sustainable resource use requires innovative technological applications. The Global Fish Tracking System (GFTS) represents a pioneering initiative that harnesses the power of advanced technologies to model fish movements and migration patterns, facilitating informed conservation strategies, and sustainable management practices. GFTS operates within the European Union's Destination Earth (DestinE) initiative, integrating high-resolution digital replicas of Earth systems and diverse datasets from DestinE, Copernicus Marine Services, and biologging data stored in the European Tracking Networks (ETN) database. The GFTS leverages the Pangeo ecosystem and a suite of technologies, such as Pangeo-fish, Jupyter, HEALPix, xDGGS, Xarray, and Zarr, for cloud-based data processing and analysis. In line with the session's emphasis on next-generation Earth observation and high-resolution system modeling, the GFTS employs the Climate Change Adaptation Digital Twin to evaluate future environmental conditions for essential fish habitats. By integrating ocean physics with fish ecology, the GFTS provides new perspectives on essential fish habitats such as migration swimways and spawning grounds, facilitating effective conservation strategies. The GFTS addresses the session's focus on big data management and integration, utilizing advanced data management techniques for handling vast amounts of biologging data. The system's use of high-performance computing and cloud platforms aligns with the session's focus on leveraging these technologies for large-scale data handling and computational demands. Through the development of decision support tools, GFTS transforms complex datasets into actionable insights, highlighting the importance of interactive and intuitive visualization tools in making data accessible for scientists and policymakers. 
These technologies embody the session's emphasis on visualization and user interaction, demonstrating the potential of digital twins in enhancing accessibility and reproducibility of scientific findings. The GFTS initiative demonstrates the crucial role of technological innovations in addressing environmental data challenges, offering a practical example of the Digital Twin of the Earth system application for sustainable resource management and conservation. This project underscores the potential of technological advancements in revolutionizing our understanding and management of marine ecosystems, presenting a compelling case study for discussion in this session.
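The core idea of inferring fish locations from tag records and modelled ocean fields can be illustrated with a toy likelihood map; this sketch matches a logged temperature against a synthetic temperature grid and is only an illustration of the matching idea, not pangeo-fish's actual algorithm:

```python
# Toy geolocation step: a tagged fish logs a temperature; candidate grid
# cells are scored by how well the modelled sea temperature there matches.
# Illustration only - not the GFTS/pangeo-fish implementation.
import numpy as np

rng = np.random.default_rng(0)
sea_temp = 10 + 8 * rng.random((20, 30))  # synthetic temperature field (degC)
logged_temp = sea_temp[12, 7]             # tag measurement taken at cell (12, 7)

sigma = 0.5  # assumed combined sensor/model error (degC)
likelihood = np.exp(-0.5 * ((sea_temp - logged_temp) / sigma) ** 2)
likelihood /= likelihood.sum()            # normalise to a probability map

best = np.unravel_index(np.argmax(likelihood), likelihood.shape)
print(best)  # (12, 7): the true cell scores highest
```

In practice many such observations (temperature, depth, tidal signals) are combined over time into trajectory estimates, which is where the chunked Xarray/Zarr processing pays off.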

Monday 23 June 14:00 - 15:30 (Room 0.11/0.12)

Session: F.04.05 In-land Water Quality and Resources Management - PART 1

Water is key to sustainable development, being critical for socio-economical development, energy and food production, and healthy ecosystems. Today water scarcity affects more than 40 percent of the world’s population and is projected to rise further, exacerbated by climate change. Limited access to water supply and sanitation, more water-intensive patterns of consumption and production, increasing rainfall variability, and pollution are combining in many places to make water one of the greatest risks to economic progress, poverty eradication and sustainable development. As the global population grows, there is an increasing need to balance the competing demands for water resources and have more efficient ways to manage water supply in a sustainable manner.

The importance of ensuring availability, quality and sustainable management of water for all has been increasingly addressed in the global political agenda, as seen with the Sustainable Development Goals (SDGs) of the UN 2030 Agenda for Sustainable Development and with the adoption of the International Decade 2018-2028 for Action on ‘Water for Sustainable Development’ by the UN General Assembly. Water touches every aspect of development and is linked to almost every Sustainable Development Goal.

Earth Observation is increasingly seen as an essential source of information which can complement national data and support countries to collect regular information on the use and changes to their water resources for more informed policy decisions on water resource management.

The session will present the latest scientific advances on the use of Earth observations for Water Quality and Water resources management, discuss opportunities and challenges which lie ahead for mainstreaming EO into sustainable management of waters and future paths of research.

Topics of interest for the session include (not limited to):
- Multi-sensor approaches to the monitoring of seasonal and annual changes in surface water extent,
- Monitoring of changes in surface water level from satellite radar altimetry,
- EO approaches for monitoring changes in lake volume,
- Integration of EO in hydrodynamic/hydrological modelling to infer information on river discharges,
- EO solutions for water use estimation (e.g., for irrigated crops),
- Inland water pollution (water quality),
- River sediment dynamics (erosion risk potential, sediment transport estimates),
- Impact of hydropower dams on river flows and morphology,
- Monitoring of groundwater resources (groundwater recharge modelling, groundwater estimation),
- Drought forecasting.

Convenors: Marc Paganini (ESA); Christian Tottrup (DHI); Eva Hass (EOMAP)

Monday 23 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: A framework for estimating lake water transparency from satellite data globally

Authors: Dr Dalin Jiang, Xiaohan Liu, Dr. Stefan Simis, Andrew Tyler, Evangelos Spyrakos
Affiliations: University Of Stirling, Plymouth Marine Laboratory
Water transparency, in one form or another, is a widely used water quality indicator. It determines light availability at depth and is, therefore, highly relevant to aquatic biodiversity and eutrophication monitoring. Water transparency is a candidate for data assimilation in biogeochemical models, linking a physical driver (light) and primary productivity. It is, furthermore, a potential proxy for modelling the heat trapping behaviour of surface waters. Water transparency is traditionally expressed as the Secchi disk depth (ZSD), the depth at which a white disk disappears from the view of an above-water observer. With a history tracing back to the 1860s, there are many historical ZSD observations in lakes, which are extremely valuable for studying change in aquatic environments that may today be impacted by climate change and other human activities. The vertical diffuse attenuation coefficient (Kd) is another quantitative variable used to define water transparency. It follows from the extinction of light energy over a depth interval and, being obtained using light sensors, has no observer effect. As a result, Kd has a robust relation to water-leaving radiance reflectance, being influenced only by the orientation of the ambient light field. Remote sensing has been widely used to derive water transparency in the past decades, resulting in many empirical, semi-empirical and semi-analytical algorithm definitions, but universal application to lakes has remained largely untested. Consequently, our research aims to develop an optical water type (OWT) classification-based evaluation to identify the performance (accuracy and uncertainty) of water transparency estimates in lakes. Existing studies have shown that semi-analytical methods based on reflectance inversion, such as the quasi-analytical algorithm (QAA), outperform alternatives.
This approach obtains the inherent absorption and backscattering coefficients (a and bb) from remote sensing reflectance (Rrs) by extrapolation from a reference wavelength, where the known absorption coefficient of pure water dominates the reflectance signal. As a second step, Kd is then obtained from a and bb. Further relations between ZSD and Kd have been proposed, either directly or via the inherent optical properties and assumptions on the angular distribution of light scattering. However, the first step of estimating a and bb is challenging, especially for turbid and extremely turbid inland waters. The research presented here first evaluates the performance of different QAA forms across OWTs, and then selects the best-performing QAA for each OWT. A blending approach with OWT membership scores as weights is then used to build transparency products from the component algorithms. The proposed water transparency estimation approach will be applied to MERIS, MODIS and OLCI satellite images, in anticipation of its inclusion in the next iteration of the European Space Agency Lakes Climate Change Initiative (ESA Lakes_cci) Climate Research Data Package, covering two decades of observations in more than 2000 lakes.
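The membership-weighted blending step described above can be sketched for a single pixel; the per-OWT transparency values and membership scores below are illustrative placeholders, not Lakes_cci parameterisations:

```python
# OWT membership-weighted blending: each optical water type (OWT) has its
# own transparency estimate for a pixel, and the pixel's class membership
# scores weight the blend. All values are illustrative.
import numpy as np

zsd_per_owt = np.array([4.2, 1.8, 0.6])  # Secchi depth estimate per OWT (m)
membership = np.array([0.1, 0.7, 0.2])   # pixel's OWT membership scores

# Weighted average, normalised by the total membership weight
zsd_blend = float(np.sum(membership * zsd_per_owt) / np.sum(membership))
print(round(zsd_blend, 3))  # 1.8
```

The same weighting scheme applies per pixel across an image, so transitions between water types produce smooth blends rather than hard algorithm switches.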

Monday 23 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: CYANOBACTERIA DETECTION IN INLAND WATERS – TOWARDS A COMPREHENSIVE PAN-EUROPEAN SYSTEM

Authors: Ilaria Cazzaniga, Frédéric Mélin, Elisabetta Canuti
Affiliations: European Commission - Joint Research Centre
Harmful algal blooms can have significant economic impacts on European countries, affecting various industries and sectors, threatening ecosystem functioning and degrading water quality for diverse uses such as recreation, drinking water and fisheries (Huisman et al., 2018). Cyanobacteria, also known as blue-green algae, are a type of photosynthetic bacteria naturally occurring in freshwater ecosystems, capable of forming intense harmful blooms. They can cause major problems for water quality, inducing hypoxia and anoxia, causing the death of fish and benthic fauna, and interfering with the recreational function of waters. Some species can additionally produce a variety of cyanotoxins that can cause liver and kidney disease, neurotoxicity, gastrointestinal disturbances and other problems (Zanchett and Oliveira-Filho, 2013). As reported by Huisman et al. (2018), several studies have recently indicated an increasing trend in the frequency, intensity and duration of cyanobacterial blooms in many aquatic ecosystems across the globe. The potential of optical Earth Observation (EO) in monitoring aquatic ecosystems has been amply demonstrated, and the detection and monitoring of cyanobacteria blooms is one of its applications. Numerous test cases have been documented, and diverse regional and national systems are in place (e.g. Anttila et al., 2018; Binding et al., 2021; Wynne et al., 2021). Considering existing methodologies and applications, this work presents the first steps of a feasibility study assessing the possibility of scaling them up to a pan-European level, in view of using EO data for the purposes of European Union (EU) policymaking, in support of Member States’ directive implementation efforts, and for monitoring compliance in inland waters. The study focuses on a first selection of European water bodies, representative of different latitudes, sizes, ecological conditions and optical water properties.
Copernicus Sentinel-3 Ocean and Land Colour Instrument (OLCI) data were processed to obtain various indices already used in operational systems or reported in the literature. They include, among others, the Maximum Chlorophyll Index (MCI; Gower et al., 2008), the Alternative Floating Algae Index (AFAI; Qi et al., 2018), the Normalized Difference Vegetation Index (NDVI; Hu and He, 2008), the Normalized Difference Chlorophyll Index (NDCI; Mishra and Mishra, 2012), the Fluorescence Line Height (FLH), the cyanobacteria index (CIcyano; Coffer et al., 2021), the spectral shape around 490 nm (SS490; Yao et al., 2024), the maximum peak-height (MPH; Matthews et al., 2012), and chlorophyll-a concentration (chl-a). The indices were obtained from either remote sensing reflectance RRS values (obtained through atmospheric correction via POLYMER (Steinmetz and Ramon, 2018)) or Rayleigh-corrected reflectance RRC values (obtained by the SeaWiFS Data Analysis System (SeaDAS) l2gen processor (Franz et al., 2007)). Chl-a values were obtained from RRS using the methods defined in the ESA Lakes CCI project (ESA Lakes CCI, 2023), i.e. classifying each RRS spectrum according to the optical water classes defined in Spyrakos et al. (2018) and using class memberships to blend, in a weighted manner, the class-dedicated chl-a retrieval algorithms tuned in the ESA Lakes CCI project. Satellite-derived chl-a values were first compared against in situ chl-a values obtained from the WISE Water Framework Directive Database. Then, all the indices were compared among themselves and against satellite chl-a values by estimating Pearson correlation values, to assess consistency or discrepancy among the different methods. This was done acknowledging the relative scarcity of field data for validation: inter-comparing the EO products could inform on their behaviour in different water bodies and, ultimately, on their fitness for the purpose of identifying cyanobacteria blooms.
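Two of the indices listed above can be written out for MERIS/OLCI-like bands; the formulas follow the commonly cited definitions (Mishra and Mishra, 2012 for NDCI; the baseline-subtraction form of MCI after Gower et al., 2008), and the band values used are illustrative reflectances, not real OLCI data:

```python
# Illustrative computation of two bloom-detection indices. Band inputs
# are made-up values, not real OLCI observations.
def ndci(rrs665, rrs708):
    """Normalized Difference Chlorophyll Index (Mishra & Mishra, 2012):
    red-edge band ratio sensitive to the chl-a absorption/scattering peak."""
    return (rrs708 - rrs665) / (rrs708 + rrs665)

def mci(l681, l709, l753):
    """Maximum Chlorophyll Index: height of the 709 nm radiance peak above
    the baseline interpolated between the 681 and 753 nm bands."""
    return l709 - l681 - (709.0 - 681.0) / (753.0 - 681.0) * (l753 - l681)

print(round(ndci(0.010, 0.014), 4))   # 0.1667
print(round(mci(0.008, 0.015, 0.006), 5))  # 0.00778
```

Both indices exploit the red-edge region, which is why they remain usable on Rayleigh-corrected reflectance when full atmospheric correction is unreliable over turbid waters.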
Finally, two different operational systems were considered: the US EPA Cyanobacteria Assessment Network (CyAN) and the Canadian EOLakeWatch system, characterized by the use of two different methods. Whereas in the latter blooms are identified through a threshold applied to satellite chl-a values (Binding et al., 2021), the former uses a threshold on a cyanobacteria-dedicated index, CIcyano (Urquhart et al., 2017; Wynne et al., 2021). The systems were compared based on the number of cyanobacteria bloom occurrences they could identify (at two different levels of risk, medium and high) in various EU lakes for which information on the ecological status and on the occurrence of cyanobacteria blooms was available. Chl-a validation with in situ data showed that satellite-derived values are generally overestimated for lower values and underestimated for larger values (with a median relative difference from in situ data of about +40%). However, performance varies significantly when subdividing the dataset by optical water class. Correlation analysis showed relatively high correlation (r>0.7, p<0.05) for several pairs of indices, including comparisons between different indices derived from both RRC and RRS values. High correlation was, for example, obtained comparing chl-a vs CIcyano, MCI and NDCI values derived from RRC, and comparing chl-a vs FLH and MCI values derived from RRS. Finally, the comparison run on the number of occurrences recorded using the CyAN and EOLakeWatch criteria showed contrasting results. These differences largely reflect the different characteristics of each lake. For example, the baseline chlorophyll concentration outside the blooming period could affect the results when using simple thresholds on a non-cyanobacteria-dedicated index; bloom dominant species may differ from those characterizing the water bodies where methods and criteria were originally developed; and optical water properties may hinder the accuracy of atmospheric correction and chl-a estimation.
In the next phase of this activity, the in situ database will be enlarged to also include a more diverse range of ecological conditions and optical water properties. Further analyses will also be carried out to assess the feasibility of retuning these methods for broader-scale application in EU inland waters.
References:
- Anttila, S. et al., 2018. A novel earth observation based ecological indicator for cyanobacterial blooms, International Journal of Applied Earth Observation and Geoinformation, vol. 64, 145-55
- Binding, C. E. et al., 2021. EOLakeWatch; delivering a comprehensive suite of remote sensing algal bloom indices for enhanced monitoring of Canadian eutrophic lakes, Ecological Indicators, vol. 121, 106999
- Coffer, M. M. et al., 2021. Assessing cyanobacterial frequency and abundance at surface waters near drinking water intakes across the United States, Water Research, vol. 201, 117377
- ESA Lakes CCI, 2023. Lakes_CCI+ Phase 2. D2.2. Algorithm Theoretical Basis Document (ATBD), last accessed at www.cls.fr
- Franz, B. A. et al., 2007. Sensor-independent approach to the vicarious calibration of satellite ocean color radiometry, Applied Optics, vol. 46, no. 22, 5068-82
- Gower, J. et al., 2008. Global monitoring of plankton blooms using MERIS MCI, International Journal of Remote Sensing, vol. 29, no. 21, 6209-16
- Hu, C. and He, M., 2008. Origin and Offshore Extent of Floating Algae in Olympic Sailing Area, Eos, Transactions American Geophysical Union, vol. 89, no. 33, 302-3
- Huisman, J. et al., 2018. Cyanobacterial blooms, Nature Reviews Microbiology, vol. 16, no. 8, 471-83
- Matthews, M. W. et al., 2012. An algorithm for detecting trophic status (chlorophyll-a), cyanobacterial-dominance, surface scums and floating vegetation in inland and coastal waters, Remote Sensing of Environment, vol. 124, 637-52
- Mishra, S. and Mishra, D. R., 2012. Normalized difference chlorophyll index: A novel model for remote estimation of chlorophyll-a concentration in turbid productive waters, Remote Sensing of Environment, vol. 117, 394-406
- Qi, L. et al., 2018. Diurnal changes of cyanobacteria blooms in Taihu Lake as derived from GOCI observations, Limnology and Oceanography, vol. 63, no. 4, 1711-26
- Spyrakos, E. et al., 2018. Optical types of inland and coastal waters, Limnology and Oceanography, vol. 63, no. 2, 846-70
- Steinmetz, F. and Ramon, D., 2018. Sentinel-2 MSI and Sentinel-3 OLCI consistent ocean colour products using POLYMER, in Frouin, R. J. and Murakami, H. (eds.), Remote Sensing of the Open and Coastal Ocean and Inland Waters, SPIE
- Urquhart, E. A. et al., 2017. A method for examining temporal changes in cyanobacterial harmful algal bloom spatial extent using satellite remote sensing, Harmful Algae, vol. 67, 144-52
- Wynne, T. et al., 2021. Harmful Algal Bloom Forecasting Branch Ocean Color Satellite Imagery Processing Guidelines, NOAA Technical Memorandum NOS NCCOS, vol. 296, 48
- Yao, Y. et al., 2024. Detecting Cyanobacterial Blooms in the Caloosahatchee River and Estuary Using PlanetScope Imagery and Deep Learning, IEEE Transactions on Geoscience and Remote Sensing, vol. 62, 1-13
- Zanchett, G. and Oliveira-Filho, E., 2013. Cyanobacteria and Cyanotoxins: From Impacts on Aquatic Ecosystems and Human Health to Anticarcinogenic Effects, Toxins, vol. 5, no. 10, 1896-1917

Monday 23 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Global inland water quality climate data records from the ESA’s Climate Change Initiative for Lakes

Authors: Xiaohan Liu, Nick Selmes, Mark Warren, Dalin Jiang, Evangelos Spyrakos, Eirini Politi, Dagmar Müller, Kerstin Stelzer, François Steinmetz, Dr. Stefan Simis
Affiliations: Plymouth Marine Laboratory, University of Stirling, Brockmann Consult GmbH, Hygeos
The Lakes_cci (ESA Climate Change Initiative) provides a multi-decadal, multi-sensor, and globally representative climate data record of water-leaving reflectance (Rw) and optical-biogeochemical water quality products for over 2,000 lakes worldwide, alongside other essential lake variables describing lake physical properties. An upgraded atmospheric correction algorithm (Polymer v4.17) and the associated recalibration of algorithms for the retrieval of chlorophyll-a (Chla), total suspended matter (TSM), and coloured dissolved organic matter (CDOM) have resulted in several marked performance improvements. Algorithm selection and calibration were carried out for MERIS, OLCI, and MODIS observations within an Optical Water Type (OWT) classification framework. This dynamic scheme enables the selection of the most appropriate algorithms for each satellite pixel. For matchups with at least 80% similarity to each OWT, algorithms were evaluated and adjusted from previously published parameterisations. The recalibrated algorithms significantly outperformed the original parameterisations, achieving strong agreement with in situ matchups across different water types. Specifically, for all three sensors, Chla, TSM, and aCDOM(440) retrievals achieved correlation coefficients (R) exceeding 0.8, 0.75, and 0.7, respectively, with Root-Mean-Square Errors (RMSE) below 0.45 mg/m³, 0.42 g/m³, and 0.43 m⁻¹, respectively. These results highlight the robust performance of the recalibrated algorithms across diverse inland waters with varying optical properties. The combination of improved atmospheric correction and rigorous assessment and calibration of water quality algorithms forms part of the v3.0 release of the Lakes_cci climate research data package, anticipated for open-access release in summer 2025. Looking ahead, these algorithms are also set to support the provision of the Copernicus Land Monitoring Service Lake Water Quality data, as well as the Copernicus Climate Change Service, from 2026.
These datasets are expected to serve as valuable resources for a wide range of applications, including climate research, environmental monitoring, and water resource management studies. Ultimately, this work will support informed decision-making and policy development aimed at preserving vital freshwater resources, requiring accurate satellite observation products, long-term consistency, and information on product uncertainty.

Monday 23 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: State of play for Earth Observation tools to support Water Framework Directive (WFD)

Authors: Dr Jenni Attila, Dr. Ioanna Varkitzi, Krista Alikas, Dr. Sandra Poikane, Dr. Gary Free, Dr. Wouter van de Bund
Affiliations: Finnish Environment Institute (syke), Joint Research Centre, University of Tartu
In 2024, the Joint Research Centre initiated a survey to map the use of Earth observations (EO), and specifically satellite observations, in the Water Framework Directive (WFD) in each EU Member State (MS). The survey also welcomed answers on the use of EO in water quality status assessment from some European countries that are not EU MS. The aim of the survey was to explore with the Member States and EO experts how EO tools could better serve and support their needs for WFD purposes. The survey also prepared the ground for the ECOSTAT work programme 2025-2027, in which one task is dedicated to advancing the use of EO for the WFD (exploitation of remote sensing data in WFD monitoring and classification). The survey mapped four specific topics with altogether 18 questions: the present use of EO for the WFD in MS; the strengths and limitations of EO data in WFD monitoring and assessment; the willingness of MS to increase the use of EO; and WG ECOSTAT future work. The survey was sent to EO experts and their colleagues in each MS during Aug-Sep 2024, and was also shared with Copernicus User Forum national representatives. As a result, 45 answers were received from altogether 22 MS. Most of the answers were given by EO experts (66%) and national Copernicus User Forum representatives (14%). Regarding the present use of EO to support the WFD, most focus is put on lake and coastal water types (68% each), but there are also cases where EO is used for WFD status assessment of river and transitional water types. According to the survey, in most EU countries EO data are not yet used as part of WFD status assessment, although there are very good results on how EO can support the WFD. There are some exceptions: for example, Finland, Belgium and Sweden already utilize EO data in WFD reporting. Most often, EO data are produced by national public entities, private companies, Copernicus services and the Lakes/Ocean Colour CCI.
Among the EO data used in the different water categories (rivers, lakes, coastal and transitional waters), EO is most often used to derive chlorophyll-a, turbidity and total suspended matter. Harmful algal blooms (HABs) and surface temperature follow, while deriving water colour and Secchi depth and mapping hydromorphological modifications come next. In the countries where EO is used for status assessment, EO serves as supporting information via a so-called expert judgement rule or via modelling. Typically, EO is used in the form of water body-specific results and indicators. The main benefits of the present use of EO data in WFD monitoring were clear: more than 80% of the respondents felt that the increase in spatial coverage, the availability of EO over water bodies with no sampling, and cost-effectiveness are all relevant benefits. Furthermore, there is clear trust that EO can also increase the confidence of status assessment in the future. The survey also mapped the major obstacles to adopting EO as an accepted monitoring method in the WFD. Three obstacles were ranked high: 1. there are no common guidelines on how to utilize EO for WFD reporting; 2. there are legal issues in the WFD (the use of EO is seen as conflicting with current WFD monitoring guidelines); and 3. authorities are not well informed about EO datasets and tools. Other relevant issues, categorized as ‘obstacles to some extent’, were related to the technical linkage between EO data and WFD assessment, such as the lack of interfaces for EO data, or WFD assessment systems not being suitable for EO data. Of the given ‘obstacles to some extent’ related to EO data and its quality, respondents most often identified two: EO depends too much on meteorological conditions, and only a few WFD parameters can be measured via EO. 
The facts that EO cannot trace deep chl-a maxima in the open ocean, that it may overestimate over turbid or humic waters (near the coast and in river estuaries), and that its resolution was not considered good enough for small lakes were also raised in the answers. In most EU countries, the willingness to increase the use of EO for the WFD is high. There are very good results on how EO can support the WFD, and preparations are being made for future use. Actions to prepare for the future use of EO for the WFD mostly relate to research and development (field data collection, validation, algorithm development, and the creation of quality control procedures and uncertainty estimation). Numerous user uptake actions are also being taken, such as workshops, training and other events where EO experts and authorities share information. Some web applications already provide EO water quality data, either for WFD status assessment or to promote the use of EO in water quality management in general at the national level. Such applications include e.g. System Saltbaltyk in Poland, Terrascope by VITO (viewer.terrascope.be), the Finnish Tarkka (tarkka.syke.fi) by the Finnish Environment Institute, the Estonian user interface by the University of Tartu (https://fpcup.to.ee), EOapp Aqua by EOMAP (https://aqua.eoapp.de/) and two platforms by Brockmann Consult and Brockmann Geomatics (https://viewer.cyanoalert.com/ and https://bc-viewer.brockmann-consult.de/). In most MS there is willingness to use EO for the WFD in the future, but no clear plan yet on how to proceed. The survey mapped what could be done to advance the use of EO in WFD status assessment. Most of the given alternatives were seen as rather beneficial, but the following four were ranked highest: 1. adjustment of national water management systems, so that EO datasets are integrated into the national status assessment tools; 2. training courses at national level; 3. user-friendly guidance; and 4. collaboration and discussion among MS. 
According to the respondents, EO is seen as very useful for the harmonization of the WFD with other EU Directives, mostly the Marine Strategy Framework Directive (MSFD), the Bathing Water Directive (BWD) and the Habitats Directive (HD). In this presentation, we give an overview of the results of the survey and report on the progress of, and plans for, these actions over the next three years of the ECOSTAT work.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Operational monitoring of the water quality of French lakes and rivers from space

#zarr

Authors: Guillaume Morin, Anne-Sophie Dusart, Robin Buratti, Guillaume Fassot, Tristan Harmel, Flore Bray, Nathalie Reynaud, Thierry Tormos, Gilles Larnicol
Affiliations: Magellium, 1 rue Ariane, INRAE, Aix Marseille Univ, RECOVER, Team FRESHCO, Pôle ECLA, OFB, DRAS, Service ECOAQUA, Pôle ECLA
United Nations Sustainable Development Goal No. 6 requires us to “ensure availability and sustainable management of water and sanitation for all” by 2030. Many national and international acts impose regular monitoring of water quality by the authorities: the US Clean Water Act and Safe Drinking Water Act, and the European Water Framework Directive (WFD). However, a lack of resources makes such surveillance hard to maintain continuously and over the long term. Since the late 1970s, and especially over the last 20 years, remote sensing (RS) has made steady progress and is now considered a major and innovative asset for Earth observation. In particular, it compensates for the temporal and spatial sparsity of in situ sampling, although it relies on strict validation against long-term collections of in situ data. In 2024, the French Ministry for Ecological Transition and the French space agency funded a three-year national project in which water quality, along with water resource and irrigation indicators, must be delivered on a weekly basis, i.e. in near-real time (NRT), to most of the public institutions in charge of water management. The aim of this project is to develop a perennial service in which water quality (WQ) products are delivered for all water bodies and rivers wider than 50 m, five to seven days after acquisition, to every public decision maker. Since June 2024, during the first year, our system has been operating over 99 tiles covering mainland France and Corsica, delivering products for the 579 water bodies monitored under the WFD (DCE). To do so, we developed a fully operational processing chain, organised structured archives on the cloud, and created a visualisation web interface that is user-friendly enough to be easily used by all end users. Concerning the processing of satellite images per se, our system is implemented on WEkEO’s cloud machines with immediate access to Copernicus satellite images. 
This allows the system to operate as soon as images are available on WEkEO’s S3 buckets. For optically derived parameters, the processing routine first performs atmospheric correction (AC) and sunglint removal from Level-1C to Level-2A with GRS. Cloud and water masks are computed in parallel with s2cloudless and an Otsu filter respectively, for optimal water pixel identification. Level-2A remote sensing reflectances are then converted to transparency, chlorophyll-a concentration, turbidity and suspended particulate matter using widely adopted algorithms. On average, computation takes around 2 h 31 min for the 318 tiles processed each week. Skin surface temperatures of the water bodies are also derived from TIRS data with a split-window algorithm. Data are then stored in a WEkEO bucket in the Zarr format, which is open, community supported and cloud compliant, as recommended by ESA. Water quality rasters can be visualized through a web interface, together with their spatial average and standard deviation. A further feature allows users to consult the time series of each parameter from 2017 to date, and to download them in a simple text format. Validation relies on matchups between RS data and historical national databases (normalized beforehand) in order to evaluate the accuracy and uncertainties of the products. It will also include the deployment of several autonomous stations providing high-frequency water surface temperatures, and hyperspectral data from the two Hypernet stations installed in France. Our service is a co-designed solution developed with actors such as institutional public managers, authorities in charge of the environment, scientists, and stakeholders interested in evaluating the ecological status of inland water bodies. We consider this programme a lever for the use of RS data by all. 
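The water masking step above relies on an Otsu filter, which picks the threshold that best separates a bimodal histogram into two classes. As a minimal sketch of the idea (not the operational chain, which applies its own band logic), Otsu's criterion can be applied to a synthetic NDWI-like distribution in plain NumPy; all values below are illustrative:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold maximising between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    weight1 = np.cumsum(hist)                 # pixels at or below each bin
    weight2 = weight1[-1] - weight1           # pixels above each bin
    cum_sum = np.cumsum(hist * centers)
    mean1 = cum_sum / np.maximum(weight1, 1e-12)
    mean2 = (cum_sum[-1] - cum_sum) / np.maximum(weight2, 1e-12)
    between_var = weight1 * weight2 * (mean1 - mean2) ** 2
    return centers[np.argmax(between_var)]

# Synthetic NDWI scene: land pixels clustered near -0.3, water pixels near +0.5
rng = np.random.default_rng(0)
ndwi = np.concatenate([rng.normal(-0.3, 0.1, 5000), rng.normal(0.5, 0.1, 2000)])
t = otsu_threshold(ndwi)
water_mask = ndwi > t
```

The same function would be applied per scene to whichever water index the chain computes, with the resulting mask intersected with the cloud mask before any reflectance-to-parameter conversion.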
As is widely known, not all quality elements required by the WFD can be provided by satellite observations, which will never replace in situ measurements and have to be seen as a complement. Unfortunately, the lack of technical expertise, of understanding of satellite-based Earth observation methods, and of capacity to process RS products makes the use of such data laborious for most users. It also requires decision makers to become familiar with, and build their confidence in, these products, which still contain hidden biases and need to be validated against ground data, as mentioned earlier. In comparison, conventional assessment methods, which can traditionally be intercalibrated through round-robin laboratory exercises for example, still appear more reliable. Finally, WFD status assessment and classification systems vary between European Union member countries, so their conventions and databases are unlikely to be compatible between states. As a result, the opportunity we have to propose a wide national service providing satellite-based data, validated against regulatory in situ monitoring that is already well structured in France, is a unique occasion to popularize the use of such data among a wide range of decision makers. We expect this opportunity to tackle the obstacles and inertia cited above, which prevent the use of RS data by most. It will also be the first extensive production exercise over French territories and will undoubtedly provide useful data for aquatic environmental studies and related fields.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Pixels of Change: Monitoring the Oder River Ecosystem with Earth Observation

Authors: Jorrit Scholze, Dr Paula Torre Zaffaroni, Kerstin Stelzer, Vanessa Bremerich, Carole Lebreton, Dr Tobias Goldhammer
Affiliations: Brockmann Consult GmbH, Leibniz Institute of Freshwater Ecology and Inland Fisheries (IGB)
Monitoring river systems has traditionally relied on in situ measurements through fixed probes or periodic water sample analyses. While these methods are essential for localized assessments, they often lack the spatial and temporal coverage required to analyze large-scale dynamics of water quality, particularly in capturing the patterns of algal bloom onset and dispersal. The capabilities of Earth observation provide a complementary approach, enabling the assessment of water quality parameters over extensive areas and across transboundary river systems. The aftermath of the environmental disaster in the Oder River in August 2022 can be monitored closely with satellite-based data. During this event, a bloom dominated by Prymnesium parvum caused unprecedented ecological damage, with over half of the fish and shellfish populations decimated. The bloom resulted from a combination of factors, including elevated water temperatures, reduced river flows, and saline industrial discharges. Building on this, the ODER~SO project, initiated in May 2023, aims to provide a continuous, near-real-time monitoring system for the Oder River using Copernicus Sentinel-2 MSI data. Early results indicated a robust correlation between satellite-derived chlorophyll estimates and in-situ measurements (R² = 0.64, RMSE = 42.4 μg/L), demonstrating the reliability of the method across multiple years and conditions as a complement to traditional point-based monitoring strategies. Retrospective analyses of Sentinel-2 data from 2016 onwards reveal significant interannual variability in algal bloom dynamics, not only in magnitude and duration, but also in the timing of development. For instance, while the 2022 bloom was characterized by extreme conditions, other years have shown distinct spring patterns influenced by climatic and hydrological factors. This study focuses on EO data to analyze the phenology and spatio-temporal behavior of blooms along a downstream river system. 
By examining these dynamics in the context of climatic and hydrological influences, it seeks to deepen understanding of bloom drivers and variability. Furthermore, a critical objective of this research is the development of an alert system to predict and mitigate the impacts of peak bloom events, providing a proactive tool for ecosystem management and policy planning. The datasets are made available on an integrated data platform, which allows users to visualize temporal trends, extract aggregated time series, and explore spatial patterns, enabling not only retrospective analyses but also near-real-time monitoring. This capability could significantly enhance the ability of authorities to respond proactively to harmful algal blooms. By integrating Sentinel-2 MSI observations with existing in situ networks, the project offers a scalable solution for river monitoring, with applications extending beyond the Oder to other river systems worldwide. This approach represents a step forward in addressing the complex challenges posed by climate-driven changes in aquatic ecosystems.
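Matchup statistics of the kind quoted in this abstract (R² and RMSE between satellite retrievals and in situ samples) can be sketched in a few lines of NumPy; the concentrations below are hypothetical illustration values, not ODER~SO data:

```python
import numpy as np

def matchup_stats(satellite, in_situ):
    """R^2 and RMSE between satellite-derived and in situ concentrations."""
    satellite = np.asarray(satellite, float)
    in_situ = np.asarray(in_situ, float)
    residuals = satellite - in_situ
    rmse = np.sqrt(np.mean(residuals ** 2))
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((in_situ - in_situ.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return r2, rmse

# Hypothetical chlorophyll-a matchups (µg/L): in situ vs. satellite retrieval
in_situ   = [12.0, 35.0, 80.0, 150.0, 240.0, 400.0]
satellite = [15.0, 30.0, 95.0, 130.0, 260.0, 380.0]
r2, rmse = matchup_stats(satellite, in_situ)
```

In practice each matchup pair would additionally be filtered by time difference, cloud flags and a spatial homogeneity check around the sampling station.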
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.34)

Session: E.01.02 Earth Observation in Practise: Linking Public and Commercial Data for Earth Action - PART 1

Recent advances in Earth observation – more sensors, open data policies, and refined analysis methods – have yielded considerable progress in environmental monitoring. These advances can support land managers, policy makers, and other end users, and enable impactful Earth Action. Future developments, such as the increasing range of public and commercial sensors, will further accelerate this progress; but also raise new challenges around the operational delivery of EO-based solutions. In particular, taking concepts from research and development into active practice is not simple. End users often have requirements that are unexpected, and not satisfied by conventional methods or products. Often, end users require spatial, temporal, and thematic detail at a level unobtainable from public missions alone (e.g. Sentinel-1, -2, Landsat 8 and 9), necessitating the incorporation of commercial sensors.

In line with ESA’s upcoming EO Science Strategy 2040, this session aims to explore the interface of environmental monitoring as it crosses from the research and design to practical domains. We welcome presentations that: demonstrate Earth observation-based work undertaken in collaboration with all manner of end users; discuss the issues and challenges commonly encountered in these collaborations; and/or examine the synergistic use of public and commercial EO data. Ultimately, this session aims to develop best practices for collaborations between the EO and end user communities, ensuring that investments into research and new missions can be rapidly and meaningfully implemented into end user portfolios.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.34)

Presentation: SAR4Infra - A Service for On-Demand InSAR Time Series Analysis of Critical Infrastructure

Authors: Daniela Rabe, Andreas Piter, Dr. Romulo Goncalves, Johannes Knoch, Arash Madadi, Dr. Daniel Scheffler, Dr. Alison Beamish, Dr. Mahmud Haghighi, Prof. Dr. Mahdi Motagh
Affiliations: Geoforschungszentrum Potsdam, Institut für Photogrammetrie und GeoInformation
Mobility in Germany depends heavily on the condition of vital transport infrastructures. Damage to such structures can result in significant economic losses and costs, as demonstrated by the demolition of the Rahmede valley bridge near Lüdenscheid, North Rhine-Westphalia in May 2023, and the collapse of the Carola Bridge in Dresden in 2024. Monitoring ground motion at transport infrastructures using geodetic measurement methods is limited by sparse measurement locations and observation epochs. However, satellite-based radar interferometry can cover large areas. Thanks to the temporal resolution of 12 days and the freely available data, Sentinel-1 satellite images are well-suited for observing ground motion. Within the completed research project SAR4Infra, funded by the German Federal Ministry of Digital and Transport, we developed a free-of-charge, on-demand service for ground motion monitoring using Sentinel-1 InSAR time series analyses for German transport infrastructures. The SAR4Infra service complements the existing German and European Ground Motion Services (EGMS) by providing additional flexibility and user-specific analysis capabilities. While EGMS offers authorities a general overview of ground motion over a five-year period updated annually, SAR4Infra enables on-demand monitoring. Authorities can select a custom time span from the 10 years of available Sentinel-1 data, define a specific spatial bounding box, enable persistent scatterer (PS) or distributed scatterer (DS) pixel selection and choose between ascending or descending orbits. The co-registered stack of Sentinel-1 images is continuously updated, ensuring access to the latest analysis-ready datasets for InSAR time series analyses. 
For example, users at the Schleswig-Holstein land surveying authorities can send requests via a REST API, triggering an automated processing pipeline whose resulting InSAR displacement time series and velocity maps are seamlessly integrated into their GIS portal (www.gdi-sh.de). SAR4Infra is an open-source solution designed to be deployed as a cloud-agnostic service offering a high level of scalability, flexibility, and cost efficiency. It is currently operational on CODE-DE (https://code-de.org/de/), the German cloud platform managed by the German Aerospace Center (DLR) to promote the use of Copernicus remote sensing data. By leveraging CODE-DE, SAR4Infra benefits from a computationally efficient infrastructure that is free of charge for German authorities. The service's adaptability paves the way for expansions beyond transport infrastructure monitoring, including applications for infrastructure management and mining-related ground motion analyses, establishing SAR4Infra as a powerful tool for a wide range of motion monitoring applications.
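The on-demand request described above (custom time span, spatial bounding box, PS/DS pixel selection, orbit direction) can be pictured as a small JSON payload posted to the service's REST API. The real SAR4Infra endpoint and field names are not published here, so everything below is a hypothetical sketch using only the Python standard library:

```python
import json

# Hypothetical request body; the actual SAR4Infra API schema may differ.
request = {
    "bbox": [9.95, 53.50, 10.10, 53.60],   # lon/lat bounding box (WGS84)
    "start_date": "2018-01-01",
    "end_date": "2024-12-31",              # any span within the available Sentinel-1 archive
    "pixel_selection": ["PS", "DS"],       # persistent and/or distributed scatterers
    "orbit": "ascending",                  # or "descending"
}
payload = json.dumps(request)
# The payload would then be POSTed to the service, e.g. with
# urllib.request.Request("https://<service>/api/jobs", data=payload.encode(), method="POST")
```

The response would typically reference a job identifier that the authority's GIS client polls until the displacement time series and velocity map are ready for ingestion.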
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.34)

Presentation: Partnerships with Practitioners in Agriculture for the Commercialization of Hyperspectral Satellite Data Products

Authors: Anita Bayer, Anne Schucknecht, Matthias Wocher
Affiliations: OHB System AG
In the HyLAP project, products derived from data of the hyperspectral EnMAP satellite are developed for stakeholders in the agricultural domain. One key objective of the project is the identification of user needs and requirements for primary information products (i.e. biophysical/biochemical vegetation variables) and secondary information products (derived information, e.g. management recommendations) for selected use cases on crop and grassland systems. For this, HyLAP involves stakeholders from agricultural authorities and businesses such as seed producers, agro-insurers and farmers. Close cooperation between practitioners and developers is needed to ensure demand-oriented product development and to establish the required database. Although EnMAP data are in general freely available, proprietary stakeholder data are essential for the calibration and validation of the developed methods. For developing primary information products on vegetation variables, state-of-the-art methods (i.e. hybrid models) and, as far as possible, open-source tools (e.g. the EnMAP-Box) are used. The models are designed to be transferable to data of the CHIME satellite, which will provide increased hyperspectral coverage from 2028 on; this coverage is needed for a potential commercial application of products derived from hyperspectral satellite data. The final aim of the HyLAP project is to demonstrate that scientific concepts for analysing hyperspectral satellite data can be translated into information relevant to agricultural practice. Besides preparing example information products for selected agricultural use cases, the necessary steps towards operational application of the processing chains are identified, which form the basis for a commercial application of hyperspectral satellite data in agriculture. In this presentation, we will introduce the HyLAP project and show results of the analysis of stakeholder needs and their requirements for products. 
At the beginning of the project, we conducted stakeholder workshops to identify the thematic and technical requirements of agricultural stakeholders from different application fields. These were consolidated into a suite of use cases related to crop and grassland systems, such as (1) optimization of plant nitrogen supply through quantification of plant nitrogen content, or (2) yield estimation in grasslands through retrieval of aboveground biomass. The selection of use cases considers the availability of reference data for model development. The progress achieved in the first phase of the HyLAP project gives valuable insights into user needs in the field of agriculture and into the potential of, and obstacles to, bringing hyperspectral Earth observation data into agricultural practice, including issues of communication, data availability and data usage.
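A hybrid retrieval of the kind mentioned above couples a physically simulated spectral database with a statistical regressor that is then applied to real observations. As a toy stand-in (a linear model fitted to synthetic "spectra"; a real implementation would use PROSAIL-type radiative transfer simulations and tooling such as the EnMAP-Box), the pattern looks like this, with every number below invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy simulation database: 200 canopy "spectra" (10 bands) whose shape depends
# linearly on a nitrogen-like variable, plus noise.
n_samples, n_bands = 200, 10
nitrogen = rng.uniform(1.0, 4.0, n_samples)              # % leaf N (illustrative)
band_sensitivity = rng.normal(0.0, 1.0, n_bands)
spectra = np.outer(nitrogen, band_sensitivity) + rng.normal(0, 0.05, (n_samples, n_bands))

# Train a linear retrieval model on the simulated database (with intercept)
X = np.column_stack([spectra, np.ones(n_samples)])
coeffs, *_ = np.linalg.lstsq(X, nitrogen, rcond=None)

# Apply the trained model to a "new observation" simulated with known N = 2.5
obs = 2.5 * band_sensitivity + rng.normal(0, 0.05, n_bands)
pred = np.append(obs, 1.0) @ coeffs
```

The appeal of the hybrid approach is that the regressor is trained entirely on simulations, so transfer to another sensor (e.g. CHIME) mainly means re-simulating the database with that sensor's band configuration.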
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.34)

Presentation: An AI-Enhanced Multi-Sensor Multi-Operator Approach to bring Transparency to Global Supply Chains

Authors: Chetan Pradhan, Guy Cook, Charles Davis
Affiliations: Earth-i
With support from the European Space Agency’s Business Applications & Space Solutions (BASS) programme, Earth-i has successfully integrated public and commercial satellite data with proprietary AI technologies to benchmark, monitor and forecast the environmental impact of global supply chains, starting with the ferrous and non-ferrous metal production sector. As part of Earth-i’s SAVANT portfolio, the Global Activity Indicators from Space (GAINS) service now collects and processes data on a daily basis from Sentinel, Landsat, Planet, Airbus and other medium-, high- and very-high-resolution satellite sources, integrates it with complementary terrestrial data to monitor and forecast production levels and the environmental impact of global commodity supply chains including copper, nickel and steel production, and serves it on a commercial basis to customers worldwide. With a multi-year historical time series of high-revisit images back to 2016, such as those now available from satellites including the Copernicus Sentinels, it is possible to gain a deeper understanding of supply chain activity, productivity and environmental impact over multi-annual periods. A multi-sensor approach also minimises the limitations of relying on a single data type: optical satellites are used for object identification and change detection, and SWIR and IR for thermal feature and change detection, complemented by SAR and InSAR to ensure coverage during cloudy periods and for night-time observation. The combination of sensor types gives good primary and secondary indicators of global industrial activity to a high level of confidence, while the multi-operator strategy limits reliance on any single source of data and increases data capture opportunities. 
With the application of machine learning techniques, it has been possible to set up automated processing chains that detect change over time to a high level of accuracy (+/- 5%), while keeping the manual quality-control burden down to minutes per day. Such automated detection has helped Earth-i provide a consistent level of accuracy that allows hundreds of sites to be analysed and published daily, within a couple of hours of data receipt. This has a significant commercial benefit in keeping the service affordable, unlike services based on HR and VHR data alone. The use of AI also unlocks insights in the spectral and spatial detail of satellite data that are simply not visible to the human eye. The SAVANT and GAINS products were initially developed for the financial services markets, where Earth-i works with major international commodity traders, hedge funds, market analysts and investment houses, all of whom are interested in understanding near-real-time production levels of critical materials (metals, minerals and chemicals). These data give them a competitive edge in forecasting market demand and pricing. For non-Earth-observation companies like these, the challenges of identifying, procuring, processing, analysing and integrating satellite data into their in-house systems are too significant, so the benefits of satellite-derived datasets fail to be taken up despite their significant value in answering complex supply chain questions. Earth-i provides derived information products in a format that is easy for non-experts to digest and interpret – no client needs to see a satellite image – and the history back to 2016 allows clients to back-test the SAVANT and GAINS products. 
The data products generated are of significant interest to the financial services sector not only because of their ability to help inform likely movements in commodities and equities market prices, and to act as leading indicators of macro-economic performance, but also because they serve as an independent evidential benchmark of the commodity asset owners against their net zero and environmental sustainability targets, and provide a means of measuring the impact of their Environmental, Social and Governance (ESG) investments over time. The products have also been of keen interest to the metals recycling industry, which wants to know which operators are increasing their production and are therefore in need of fresh supplies of scrap for recycling. Metals such as steel, copper and nickel are 100 percent recyclable, which means they can be recycled into the same material of the same quality again and again. This has led Earth-i to add stockpile monitoring to the GAINS portfolio, providing valuable insight into levels of raw materials including coal, iron ore and scrap metal at key port and supply depot locations worldwide. Sophisticated machine learning and computer vision techniques have been used both to identify the material and to calculate the volume of the stockpiles. The next group of end users that Earth-i seeks to engage is the regulatory sector, who are interested in understanding the environmental impact of these heavy industries, and in independent verification of the environmental credentials and claims of the facility operators. Uptake in this sector is slow, as the question is always ‘who pays’ for such information. This is where Earth-i seeks to bridge the gap between the environmental monitoring community and the commercial Earth observation services industry, ensuring that ESA’s investment in the service development delivers environmental impact as well as commercial returns. 
In this presentation Earth-i will demonstrate the latest developments from the project, including several examples of the high-revisit, multi-sensor, multi-operator approach; the machine learning analytics applied; and the commercial products generated for financial, environmental and regulatory institutions to exploit, and how the work is set to drive transformational change in the industries being monitored.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.34)

Presentation: Developing a satellite-driven monitoring system for climate resilience in the British Virgin Islands

Authors: Samuel Pike, Dr. Suzana Barreto, Nancy Woodfield-Pascoe, Dr Katie Medcalf
Affiliations: Environment Systems Ltd, National Parks Trust of the Virgin Islands
The British Virgin Islands (BVI) are increasingly impacted by climate change, with intensified storms, rising sea levels, and prolonged droughts. The 2017 hurricane season, which saw two Category 5 hurricanes strike the islands within two weeks of each other, highlighted the region’s vulnerability and the need for advanced climate resilience. As climate change intensifies the environmental threats facing the BVI, the need to monitor and protect biodiversity and critical habitats becomes more urgent. The National Parks Trust of the Virgin Islands (NPTVI) has been monitoring these on the ground for over 20 years, but faces significant challenges due to limited time and resources and the need to access remote areas. This creates a knowledge gap in detecting habitat changes and responding to emerging environmental threats in a timely manner. In response, Environment Systems designed the first satellite-based monitoring system for the BVI, aimed at supporting NPTVI’s conservation efforts and the BVI Government’s climate change preparedness. Using open satellite data from Landsat and Sentinel-2, the system provides routine, automated outputs every six months, including flooding susceptibility, vegetation health (for drought monitoring), land clearance detection, and relative suspended solids in the shallow marine environment. These metrics were selected based on consultations with the NPTVI, the BVI government, and other stakeholders to ensure that the data align with their specific needs. The development of the system highlighted the challenges of translating research into actionable solutions that meet the specific needs of the islands: aligning scientific principles with the practical realities of calibration and validation, and matching user requirements with users’ understanding of the data and where it came from. 
The system integrates outputs into the BVI government’s existing servers and data portal, making the information accessible to relevant end users, including government agencies, local decision-makers, and the general public. By providing spatial, temporal, and thematic details that are not easily obtainable from public sensors alone, the system bridges the gap between research-based EO products and practical environmental management applications. This work demonstrates how EO technologies can directly support environmental monitoring and climate action. It also provides a model for the effective integration of Earth observation into environmental management, while outlining the operational challenges of using remote data within small island contexts.
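The abstract does not specify how the vegetation-health output is computed; one common approach for drought monitoring, shown here purely as an assumed example, is a standardised NDVI anomaly of the current composite against each pixel's multi-year history:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ndvi_anomaly(current, history):
    """Standardised anomaly (z-score) of the current value against the history."""
    history = np.asarray(history, float)
    return (current - history.mean(axis=0)) / history.std(axis=0)

# One pixel: six historical six-month composites vs. a drought-affected current value
history = np.array([0.62, 0.58, 0.65, 0.60, 0.63, 0.61])
current = ndvi(nir=0.30, red=0.18)        # well below the historical mean
z = ndvi_anomaly(current, history)
drought_flag = z < -2.0                   # strongly anomalous greenness
```

On full rasters the same functions broadcast over per-pixel stacks, so the six-monthly product reduces to one anomaly image plus a boolean drought mask.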
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.34)

Presentation: Rapid Wildfire Mapping with Sentinel-2 and Planet Data

Authors: Professor Kevin Tansey, Akram Abdulla
Affiliations: University Of Leicester
Challenge
Wildfire has a significant and growing impact on the environment. The increasing prevalence and frequency of wildfires directly challenge our ambitions around enhancing habitat and the natural environment, and around achieving net zero carbon status. Many organisations are working to develop and refine approaches to managing wildfire risk and responding to fires when they occur. Future climate change is predicted to significantly increase the risk of wildfire in the British Isles, with severe summer fires being 3 to 4 times more likely by the 2080s. Between January and August 2022, the UK suffered a staggering 969 wildfires, compared to 247 in 2021. There is an urgent need for more effective and rapid mobilisation of fire control, and for provision of information about the immediate post-fire impacts, to enable timely interventions that minimise secondary impacts such as sediment erosion and runoff of organic and inorganic pollution into water resources.

Methodology
Very high-resolution satellite Earth observation (SEO) with a high repeat cadence offers the opportunity to monitor both the conditions leading to wildfire and the immediate impact of fires, enabling a rapid proactive response. However, access to the data and insight provided by such sensors and systems remains in the expert domain and is rarely available to influence operational decisions; rather, it forms the basis of a posteriori analysis. To be operationally useful, large volumes of data require processing and analysis in near real time, and data from a variety of sources, both SEO and field observations collected automatically using IoT-enabled environmental sensors, require integration, allowing non-expert users direct insight into conditions and the information required for an immediate operational response. 
A further barrier is the skill and expertise required from the stakeholders to make decisions over which imagery should be selected, managing cloud or smoke haze, selecting the right spectral indices, geolocating the area of interest and manually undertaking these tasks on a day-to-day basis. Finally, dissemination of key information data sets to authorities, partners as spatial data layers or reports. In this paper, the methodology that was developed using Sentinel-2 imagery and evaluated over the devastating fires in Australia and 2019-2020 will be presented. The evolution of this mapping to very small fires in the UK utilising Planet data will then be described. This project supported the needs of Geoscience Australia for their wildfire mapping services. The solution prototype will utilise Planet data (3-5m spatial resolution, at least and sometimes better than daily repeat) to continuously map regions being flagged detected as a wildfire in the UK to characterise the burned area development (spread) and burn severity (impact) information. Planet data can provide at least daily observations of the land surface at 3-5m resolution. Machine learning tools are utilised to train and then classify the burned area. A number of different training solutions are generated to take into account the differing nature of burned vegetation and the particular properties of fires in the UK which are often, but not always, associated with urban areas. The results will be of interest to a range of government and commercial entities in the UK. Results Outputs are digital layers in raster and vector format of the perimeter of the fire and associated severity indicators. If cloud cover or smoke obscures the land surface, we will continue to map until a complete burned area map is acquired and the fire is extinguished. The mapping solution is offered via machine learning from a range of UK fire scenarios in the past. 
All of this processing is undertaken in the cloud and the results published to an open access geo-web server, supported by our industry partner, CGI UK Ltd. The ability to auto map burned areas by receiving alerts as to the spatial and temporal location of fires from active fire data will be shown. Future work will expand the work into European countries.
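Burn severity products of the kind described in this abstract are commonly derived from spectral indices such as the differenced Normalised Burn Ratio (dNBR). The sketch below illustrates that standard index with indicative USGS-style severity bins; it is a generic illustration, not the project's actual machine-learning classifier.

```python
# Illustrative sketch: differenced Normalised Burn Ratio (dNBR) from
# pre- and post-fire NIR/SWIR reflectances (e.g. Sentinel-2 B8/B12).
# This is a standard index, not the specific method of the paper.

def nbr(nir, swir):
    """Normalised Burn Ratio for a single pixel (reflectances in [0, 1])."""
    denom = nir + swir
    return (nir - swir) / denom if denom else 0.0

def dnbr(pre_nir, pre_swir, post_nir, post_swir):
    """dNBR = pre-fire NBR minus post-fire NBR; higher means more severe burn."""
    return nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

def severity_class(d):
    """Coarse USGS-style dNBR severity bins (thresholds are indicative only)."""
    if d < 0.1:
        return "unburned"
    if d < 0.27:
        return "low"
    if d < 0.66:
        return "moderate"
    return "high"

# Healthy vegetation pre-fire, charred surface post-fire.
print(severity_class(dnbr(0.4, 0.1, 0.15, 0.25)))
```

In practice such per-pixel indices feed the training of the burned/unburned classifier rather than being used as a final product on their own.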

Monday 23 June 14:00 - 15:30 (Room 1.34)

Presentation: IRIDE Marketplace, a cloud-native data platform to manage the ecosystem of IRIDE Satellite Data and Services in a scalable cloud environment

#stac #cog #parquet #cloud-native

Authors: Mr. Antonio Giancaspro, Mr. Marco Corsi, Mr. Alessio Cruciani, Claudio Scarsella, Mr Fabrice Brito, Mr Davide D'Asaro, Mr Antonio Vollono, Mr Fabio Lo Zito
Affiliations: e-GEOS S.p.a., Terradue S.r.l., Lutech S.p.A, Exprivia S.p.A., Serco Italia S.p.A.
In the framework of the Italian PNRR IRIDE Programme, implemented by ESA under the mandate of the Italian Government, the IRIDE Marketplace is the downstream component acting as (a) the unique access point to the whole IRIDE offering (IRIDE multi-constellation EO data and the geospatial products produced by the IRIDE Service Segment), (b) the hosting platform for the IRIDE Service Segment, providing infrastructure and platform services for more than 60 geospatial applications (Service Value Chains, in IRIDE language), and (c) an integrated development environment and marketplace for the onboarding of third-party providers, in line with Space Economy trends. Within the overall IRIDE System architecture, the IRIDE Marketplace has been conceived and developed as a cloud-native platform leveraging industry standards from both the geospatial domain (for example, data formats) and the ICT domain (for example, infrastructure provisioning, DevSecOps pipelines, Security Operations). The system is designed to allow users to access data, services and applications/toolboxes online through simple models such as DaaS (Data as a Service) and SaaS (Software as a Service). The business model for the end-user is based on configurable subscriptions and a credit system that allows flexibility in the management of product offerings, including data, services, applications and toolboxes. The IRIDE Marketplace will additionally support both institutional end-users (e.g. free quota access on the basis of service policy) and private end-users (e.g. fee-based access based on consumption of credits).

As anticipated, the IRIDE Marketplace has been designed as a cloud-native geospatial data and service platform focusing on scalability, portability and interoperability. The system serves as a unified access point for IRIDE multi-constellation EO data, geospatial services, and third-party applications, adopting open standards to ensure compatibility and flexibility. The platform leverages OGC API Processes for seamless interoperability, enhancing modular integration through decoupled API layers. Metadata and data discovery are optimized using the SpatioTemporal Asset Catalog (STAC) specification, enabling efficient cataloging and querying of geospatial data. Through the adoption of STAC and cloud-native formats such as Cloud Optimized GeoTIFF (COG) and GeoParquet, the platform supports interoperability and extensibility, minimizing vendor lock-in. Furthermore, standard API services facilitate external M2M interactions, identity integrations, and the connection of common services. The STAC-based Data Catalog employs a stac-fastapi-pgstac backend for efficient metadata organization and query handling, significantly improving the discoverability and usability of multi-source geospatial data. COG and GeoParquet enable efficient data access, storage, and processing, with COG supporting scalable raster visualization and GeoParquet optimizing vector data queries in cloud environments. To support large-scale geospatial data handling, the Marketplace integrates Data Lake capabilities for lifecycle management, from ingestion to transformation, enabling systematic and user-driven workflows for Analysis Ready Data (ARD) production. The platform also facilitates third-party integration through flexible frameworks for hosted applications, algorithm deployment, and public APIs.

In particular, the IRIDE Marketplace further supports its third-party users with a console for managing their own product catalog, with the possibility to select from Earth Observation (EO) product configuration templates covering: 1. DaaS products, structured as STAC collections and accessible directly to end-users; 2. SaaS products, including applications such as geospatial analytics tools and user interfaces. In this way, third-party users can onboard new products, configuring them within the Marketplace to extend offerings to end-users dynamically. At the same time, they can exploit the IRIDE Marketplace's cloud-native technologies and become providers of scalable and interoperable solutions. The IRIDE Marketplace, developed under the Italian PNRR IRIDE Programme implemented by ESA, serves as a unified platform for IRIDE offerings, enabling the onboarding and diffusion of EO services, applications and toolboxes with a consistent and simple approach. Its design prioritizes scalability, portability, and interoperability, supporting diverse use cases and fostering innovation in the geospatial ecosystem through advanced technologies.
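To make the STAC-based cataloguing concrete, here is a minimal sketch of the kind of STAC Item such a catalog serves. The collection id, scene id, bbox and asset URL are hypothetical placeholders, not real IRIDE identifiers; only the field layout follows the STAC 1.0.0 specification.

```python
import json

# Minimal sketch of a STAC Item as served by a catalog such as the one
# described (stac-fastapi-pgstac backend). All identifiers and URLs below
# are made-up placeholders.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-scene-20240101",
    "collection": "example-eo-collection",
    "geometry": {"type": "Polygon", "coordinates": [[[12.0, 41.0], [13.0, 41.0],
                                                     [13.0, 42.0], [12.0, 42.0],
                                                     [12.0, 41.0]]]},
    "bbox": [12.0, 41.0, 13.0, 42.0],
    "properties": {"datetime": "2024-01-01T10:00:00Z"},
    "assets": {
        "data": {
            "href": "https://example.com/data/example-scene-20240101.tif",
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
            "roles": ["data"],
        }
    },
    "links": [],
}

# Members a STAC Item must carry to be valid against the 1.0.0 spec.
required = {"type", "stac_version", "id", "geometry", "bbox", "properties",
            "links", "assets"}
assert required <= item.keys()
print(json.dumps(item["properties"]))
```

Because Items are plain GeoJSON Features, they round-trip through any JSON tooling, which is what makes STAC catalogs easy to query from decoupled API layers.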

Monday 23 June 14:00 - 15:30 (Hall E1)

Session: A.09.11 Snow in the cryosphere: improving observations and modelling - PART 1

Snow is frequently present across cryosphere surfaces such as sea ice, ice sheets and shelves, glaciers, permafrost and soil, and is an important factor in key processes including mass balance, albedo and thermal insulation. The importance of snow is well known and increasingly recognised, with snow featuring in the World Meteorological Organisation's set of Essential Climate Variables, including the recently added snow depth on sea ice. Changes in snowfall, snow cover and snow properties have far-reaching implications for water supply, modification of available sunlight affecting biological and chemical processes, climate feedbacks and many other aspects of the Earth system. Changes in snow physical properties can also bring additional information and/or challenges for remote sensing, due to alterations in the interaction of electromagnetic radiation with snow, impacting our ability to collect satellite data over long time- and length-scales.

In situ, laboratory, remote sensing and modelling studies of snow can provide insights into the state of snow cover as well as changes, trends and future predictions. This session welcomes abstracts on any aspect of snow studies, including physical properties and evolution, tools and techniques, modelling and the application of statistical or AI methods, human, biological and chemical implications, as well as snow dielectric properties which affect remote sensing retrievals. We invite presentations at all length scales, from satellite through airborne, surface, microstructure and smaller. The session will explore insights into the status, changes and potential futures of snow on Earth and its impacts, e.g. on people and climate, as well as future directions for research, including techniques, modelling, instrumentation, mission design and retrieval algorithms.

Monday 23 June 14:00 - 15:30 (Hall E1)

Presentation: Dual-frequency airborne SAR campaign on snow mass retrieval for high-Alpine terrain

Authors: Helmut Rott, Thomas Nagler, Ralf Horn, Jens Fischer, Julia Kubanek
Affiliations: ENVEO IT, DLR Microwaves and Radar Institute, ESA ESTEC
In March 2021 a field experiment was conducted in the high-Alpine test site Woergetal of the Austrian Alps, exploring the feasibility of snow mass (snow water equivalent, SWE) retrieval by means of C- and L-band SAR. Multiple repeat-pass acquisitions were acquired with the airborne F-SAR of DLR along two flight tracks from opposite view directions. F-SAR operated in C-band and L-band polarimetric mode. Main objective was the development and evaluation of tools for use of the repeat-pass InSAR technique for SWE monitoring, with focus on the future Radar Observation System for Europe at L-band (ROSE-L), the Sentinel-1 Next Generation (NG) mission and the geosynchronous C-band SAR mission Hydroterra, proposed in response to the ESA Call for Earth Explorer 10 Mission Ideas. For Hydroterra+, an updated version of the mission, feasibility studies are currently going on as one of the Earth Explorer 12 mission candidates. In addition to the InSAR approach, the response of backscatter intensity and polarimetric parameters to changes snow mass was studied. Eight flights, each one comprising multiple repeat passes, were conducted between 2 and 19 March 2021. The acquisitions covered two snowfall events. Event 1 took place on 5 March, covered by interferograms spanning the period 4 to 6 March. The mean snow accumulation (delta-SWE) was 13 mm. Event 2 includes snowfall during the days 14 to 18 March and is covered by interferograms spanning from 9 to 19 March. The mean delta-SWE was 65 mm. During the SAR acquisitions the snow at the in-situ measurement points was completely dry, but on steep east- and south-facing slopes the skin layer was slightly wet on cloudless days. The data analysis focuses on the three comparatively level subsections (RoIs) of the test site where field measurements were made. The InSAR pair covering the 2nd event includes three days of transient surface melt (11 to 13 March) which resulted in formation of a solid refrozen crust before snowfall started. 
The C-band co-polarized (VV-HH) phase difference (CPD) increased on the average by 0.25 rad between 9 and 19 March but does not show a clear relation to the delta SWE magnitude. The first snowfall event was associated with a small increase of C-band CPD in one of the SAR tracks, but a similar signal is observed for a time span without snowfall. The L-band CPD data show negligible temporal change. Also the C- and L-band VH/VV backscatter ratios do not show a viable relation with snow depth and SWE. This confirms the preference for the DInSAR-based SWE retrieval approach. Auxiliary data for SAR processing included sensor and system parameters, an airborne lidar-based DEM with 5 m pixel size, and corner reflectors of 95 cm leg length. The reflectors serve as geodetic control points and as interferometric phase reference (cleaned of snow), as well as for checking the C-band radiometric and polarimetric calibration. Regarding the InSAR-based SWE retrievals of the two snowfall events, three approaches were implemented and evaluated: (i) the conventional repeat-pass InSAR technique using C- and L-band repeat-pass data in full spatial resolution; (ii) the use of simulated geosynchronous C-band repeat-pass data reflecting the spatial resolution of the Hydroterra system; (iii) the use of C- and L-band repeat-pass data in full resolution applying split-bandwidth processing to obtain sub-band SLC images that are used to generate differential split-bandwidth (Delta-k) interferograms for DInSAR processing. The error analysis shows that the reflector phase used as reference for zero delta SWE is the main error source for SWE retrievals, exceeding the uncertainty of the random phase noise. The conventional DInSAR approach can be applied with L-band data for both snow events because the change of the phase delay is also for event 2 well below the 2 pi ambiguity. 
C-band is only used for SWE retrievals of the first event because the phase signal of event 2 exceeds the 2 pi threshold more than two-fold and the coherence is low. The comparisons with in-situ snow accumulation show good agreement, with the mean bias for event 1 below 1 mm for both C- and L- band, and a maximum delta SWE bias of individual sites and tracks of 4 mm for C-band and of 5 mm for L-band. The L-band data show for the 2nd event a mean bias of 2 mm, and a maximum value of 13 mm. Differences between use of HH and VV polarized data for SWE retrievals are insignificant, amounting to 0.1 mm for the 1st and to 0.5 mm for the 2nd event. Regarding simulated geosynchronous C-band data, SWE retrievals were performed for event 1, showing similar performance as the high resolution InSAR retrieval, however at lower spatial resolution. We applied the Delta-k method for event 2, in order to check the complementarity to RP-InSAR. In spite of low phase sensitivity, both the C- and L-band Delta-k SWE retrievals show a mean bias within 10 mm, confirming the usefulness for closing gaps related to large snowfall events in SWE time series.
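Repeat-pass InSAR SWE retrieval rests on the near-linear relation between the interferometric phase change over dry snow and delta-SWE. A widely used approximation from the literature (after Guneriussen et al., 2001) is Δφ ≈ −(4π/λ)·ΔSWE·(1.59 + θ^2.5) with local incidence angle θ in radians; the sketch below uses that textbook relation as an illustration, not the exact processor of this study, and the wavelengths are nominal assumed values.

```python
import math

# Illustrative dry-snow DInSAR relation (Guneriussen et al., 2001 approximation):
#   delta_phi ~= -(4 * pi / wavelength) * delta_swe * (1.59 + theta**2.5)
# with theta the local incidence angle in radians and delta_swe in metres.
# A textbook approximation, not the processing chain used in the paper.

def swe_to_phase(delta_swe_m, wavelength_m, theta_rad):
    return -(4.0 * math.pi / wavelength_m) * delta_swe_m * (1.59 + theta_rad ** 2.5)

def phase_to_swe(delta_phi_rad, wavelength_m, theta_rad):
    return -delta_phi_rad * wavelength_m / (4.0 * math.pi * (1.59 + theta_rad ** 2.5))

C_BAND = 0.0555  # m, nominal C-band wavelength (assumed)
L_BAND = 0.2384  # m, nominal L-band wavelength (assumed)
theta = math.radians(35.0)

# 13 mm of new SWE (event 1): the phase change stays within one cycle at L-band.
phi_l = swe_to_phase(0.013, L_BAND, theta)
phi_c = swe_to_phase(0.013, C_BAND, theta)
print(round(abs(phi_l), 2), round(abs(phi_c), 2))
```

With these numbers a 13 mm accumulation stays comfortably below the 2-pi ambiguity at L-band, while much larger accumulations (of the order of the 65 mm event) wrap at C-band, which is the retrieval constraint the abstract describes.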

Monday 23 June 14:00 - 15:30 (Hall E1)

Presentation: 40 Years of Snow Cover in Europe - Developments and Trends From the TIMELINE Project

Authors: Sebastian Roessler, Andreas Dietz
Affiliations: German Aerospace Center
Daily observations of snow coverage are essential for understanding climate change due to snow's sensitivity to temperature changes and its role in regional climate systems. Snow cover directly reflects shifts in temperature, precipitation patterns, and atmospheric circulation, making it a key indicator of climate trends. Monitoring these changes over several decades helps to identify long-term reductions in snow extent, earlier snowmelt, and shifts in seasonal patterns. In Europe, where climate change impacts vary across regions, from the Alps to Scandinavia, such data allow detection of regional warming trends and their effects on snowpack, which are crucial for understanding the broader implications of climate change.

In addition, snowpack has significant hydrological, ecological, and socioeconomic impacts. Changes in the timing of snowmelt affect water availability, river discharge, and flood patterns, while changes in snowpack disrupt ecosystems and biodiversity that depend on winter conditions. The high reflectivity (albedo) of snow also amplifies warming through feedback mechanisms and accelerates climate change. Long-term snow data are likewise critical for improving climate models, predicting future snow patterns, and understanding the broader impacts of climate change. In regions that depend on winter tourism, such as the Alps, a decline in snow cover can have serious economic consequences. Overall, consistent, long-term snow observations provide invaluable insights into climate trends, regional variations, and future projections, helping to inform adaptation and mitigation strategies.

Within the TIMELINE (Time Series Processing of Medium Resolution Earth Observation Data assessing Long-Term Dynamics in our Natural Environment) project, we utilize Advanced Very High Resolution Radiometer (AVHRR) data spanning over 40 years to analyze snow cover variation at full spatial resolution. The project integrates consistent, long-term AVHRR observations and uses advanced processing techniques to create a harmonized dataset with improved temporal and spatial accuracy. Snow cover is one of the key geophysical products generated within TIMELINE, alongside land surface temperature (LST), the normalized difference vegetation index (NDVI), and hazard-related products such as burnt area mapping. The extended temporal record of over 40 years and the enhanced spatial resolution of the AVHRR dataset lead to major improvements in snow cover detection. The study highlights key trends in snowpack dynamics over the past 40 years, capturing annual variability, seasonal shifts, and a long-term decline in snowpack extent due to global warming. Sensitive regions such as the Arctic and high-altitude areas show pronounced changes, highlighting the importance of continuous monitoring. The TIMELINE project illustrates the value of harmonized Earth observation records spanning multiple decades. By preserving and extending the AVHRR legacy, TIMELINE provides an important resource for studying snow cover and related processes and supports climate monitoring, hydrology, and hazard assessment.
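As a minimal illustration of the kind of long-term trend analysis a 40-year record enables, the sketch below fits an ordinary least-squares slope to a synthetic annual snow-cover series; the numbers are made up for illustration and are not TIMELINE results.

```python
# Illustrative sketch: least-squares trend on an annual snow-cover time
# series, the kind of decadal analysis described above. Data values are
# synthetic placeholders, not TIMELINE products.

def ols_slope(years, values):
    """Ordinary least-squares slope of values against years."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

years = list(range(1982, 2022))
# Synthetic snow-cover extent (arbitrary units): an imposed decline of
# -0.5 per year plus a small deterministic year-to-year wiggle.
extents = [100.0 - 0.5 * (y - 1982) + 0.2 * ((-1) ** y) for y in years]
print(round(ols_slope(years, extents), 3))
```

In a real analysis one would typically prefer a robust estimator (e.g. Theil-Sen) and a significance test, but the least-squares slope already recovers the imposed decline here.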

Monday 23 June 14:00 - 15:30 (Hall E1)

Presentation: Linking Sentinel-1 to a Radiative Transfer Model: A Spatio-Temporal Forward and Inverse Modeling Analysis Over the Alps

Authors: Jonas-frederik Jans, Firoz Borah, Zhenming Huang, Dr. Edward Kim, Dr. Leung Tsang, Hans Lievens
Affiliations: Ghent University, University of Michigan, NASA Goddard Space Flight Center
Accurate retrievals of snow depth (SD) and snow water equivalent (SWE) are necessary to support snow water resource management in mountain regions. However, unravelling SD and SWE dynamics in mountains remains a major challenge. Several remote sensing techniques, i.e., passive microwave, active optical, such as Light Detection And Ranging (LiDAR), and active microwave, such as Synthetic Aperture Radar (SAR), have been investigated to address the issue. In past research, a conceptual change detection technique was developed, based on Sentinel-1 (S-1) C-band (5.4 GHz) co-polarized (VV) and cross-polarized (VH) backscatter observations, which shows great potential to represent SD and SWE accumulation dynamics in mountain regions. However, the physical relationship between the backscatter observations and snow (and soil) properties is still poorly understood. Recent radiative transfer modeling (RTM) work has succeeded in replicating S-1 VV and VH backscatter observations during dry snow accumulation using the Dense Media Radiative Transfer Model-Bicontinuous (DMRT-Bic) model. The model disentangles the backscatter contributions from the snow volume and the rough soil surface, and attributes the increase in VH backscatter to the formation of closely packed ice crystals during snow accumulation. Furthermore, advances have been made in characterizing the rough surface scattering of the snow-soil interface based on full-wave simulations. However, these experiments were conducted at specific locations, without accounting for soil freezing and vegetation, and without analyzing the effects of varying elevation, interannual snow dynamics, land cover and viewing geometry. With this research, we build on recent RTM advancements by applying DMRT-Bic on a regional scale over multiple snow seasons. Furthermore, we add a vegetation component (Water Cloud Model) and a dielectric model that can properly represent frozen mineral soils.
We simulate S-1 VV and VH backscatter observations by relying on soil information (soil moisture and soil temperature from Noah Multi-parameterization version 4, Noah-MP), vegetation information (leaf area index from MODIS), and snow information (SD from SnowClim). Performance metrics such as the Pearson correlation coefficient, mean absolute error and bias are analyzed and stratified by elevation, snow accumulation and local incidence angle. Finally, we invert DMRT-Bic to retrieve spatiotemporal SD information across the European Alps.
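The vegetation component mentioned above, the Water Cloud Model, is simple enough to sketch directly: total backscatter is the sum of a vegetation volume term and the soil term attenuated by the two-way canopy transmissivity. The coefficients A and B below are placeholder values, not the study's fitted parameters.

```python
import math

# Illustrative Water Cloud Model (WCM) sketch. A and B are empirical fit
# coefficients; the values used here are made-up placeholders.

def wcm_backscatter(sigma0_soil_db, lai, theta_rad, A=0.12, B=0.30):
    """Total backscatter (dB): vegetation volume + canopy-attenuated soil."""
    cos_t = math.cos(theta_rad)
    # Two-way canopy transmissivity.
    t2 = math.exp(-2.0 * B * lai / cos_t)
    # Vegetation volume contribution (linear power units).
    sigma_veg = A * lai * cos_t * (1.0 - t2)
    # Soil contribution, converted to linear units and attenuated by the canopy.
    sigma_soil = 10.0 ** (sigma0_soil_db / 10.0)
    total = sigma_veg + t2 * sigma_soil
    return 10.0 * math.log10(total)

theta = math.radians(35.0)
bare = wcm_backscatter(-12.0, lai=0.0, theta_rad=theta)   # no canopy: soil only
dense = wcm_backscatter(-12.0, lai=4.0, theta_rad=theta)  # dense canopy
print(round(bare, 2), round(dense, 2))
```

Note the sum is performed in linear power units and only converted to dB at the end; with LAI = 0 the model collapses to the bare-soil backscatter, a useful sanity check.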

Monday 23 June 14:00 - 15:30 (Hall E1)

Presentation: Snow melt dynamics of an Alpine snowpack from Sentinel-1 radar backscattering and high-resolution ground measurements

Authors: Francesca Carletti, Dr. Carlo Marin, Dr. Mathias Bavay, Prof. Dr. Michael Lehning
Affiliations: Institute For Snow And Avalanche Research SLF, Institute for Earth Observation, EURAC Research, School of Architecture, Civil and Environmental Engineering, Ecole Polytechnique Federale de Lausanne EPFL
Snow in mountain catchments is an important water resource, as it accumulates and stores water during winter and releases it consistently as meltwater runoff during the melting season. This reliable release from higher altitudes supports many human activities downstream, including agriculture and hydropower production. At the same time, snow poses significant threats to human life and infrastructure during the melting season, such as glide-snow avalanches and severe flooding due to rain-on-snow events. The presence and distribution of liquid water content within the snowpack is a key parameter to monitor in order to address the above-mentioned challenges and opportunities, as it identifies the different stages of the melting season. Multilayer, energy-balance snow models can in principle simulate the distribution of liquid water content and other key snow properties at high resolution in time and space. However, performance heavily depends on the quality of the input data, the chosen parametrizations and the structure of the models. Synthetic Aperture Radar images have proven useful for wet snow detection: as the free liquid water in snow increases, dielectric losses and, consequently, absorption coefficients rise. As a result, the average backscattering drops significantly when the snow starts moistening. Later in the melting season, on high-Alpine snowpacks, it was observed that the backscattering signal drops until a local minimum is reached and then increases steadily, returning at the end of the melting season, before complete snow ablation, to the same averages recorded for dry snow. These backscattering trends might contain important information on the state of the melting snowpack, which can be used to inform snow models towards better monitoring of water resources.

However, the liquid water content of the snowpack only partially explains the behaviour of the backscattering coefficient, which can also be affected by other scattering properties, such as surface roughness, density, specific surface area and, potentially, large structures buried underneath a wet snowpack such as ice layers. Some of these properties can be simulated by snow models, given high-quality input data and at computational cost, but validating the outputs is challenging, because the snowpack properties should be sampled at high vertical resolution and continuously over time. Such datasets are rare, because their collection is time-intensive and requires particular expertise. With this work, we try to bridge this gap, starting from an extensive measurement campaign over two consecutive winters at the field site of Weissfluhjoch, Davos, Switzerland. We sampled the main snow scattering properties at high vertical resolution and continuously over time (up to three times a week once the snowpack reached the isothermal state) to map the progression of the wetting front. We assembled an extensive dataset of snow scattering properties comprising a total of 85 full snow profiles of temperature, density, liquid water content and specific surface area, with additional measurements of snow surface roughness. All measurements were performed with up-to-date and well-established methods. Where possible, several techniques were used to reduce measurement errors; where not, validations were performed. These data are used as inputs for the Snow Microwave Radiative Transfer model. Using a combination of model parameters specifically tailored to describe wet snow, derived from measured variables, we simulated the C-band backscattering signal from Sentinel-1 over the measurement field for each day of our campaign.

The simulated backscattering intensities were then compared against the recorded time series, enabling several analyses and insights. First, we validated the possibility of identifying different melting phases based on morning and afternoon backscattering trends. We then analyzed the variability of simulated backscattering intensities as a function of different snowpack discretizations, and disentangled the effect of single scattering properties on the extinction coefficients during different phases of the melt. Additionally, we investigated how the backscattering signal can be influenced by processes such as melt-refreeze cycles, and how the spatial variability in the distribution of liquid water content, as well as measurement errors and uncertainties, could affect the reconstruction of the backscattering coefficient. Finally, we performed a detailed study of C-band radar sensitivity to different stratifications of liquid water content observed in the field, ranging from the formation of superficial melt to the presence of ice lenses. These findings, to be presented in detail at the conference, highlight that scattering mechanisms become increasingly complex as liquid water saturates the snowpack, suggesting that the applicability of Sentinel-1 backscattering trends for melt monitoring may be constrained to specific phases of the melt process. Supported by an extensive ground-truth dataset, this work aims to explain the main physical processes driving Sentinel-1 backscattering trends and to explore the potential of interpreting these trends towards improved monitoring of snowmelt processes. This investigation is particularly timely given the imminent launch of the Sentinel-1C satellite, which will restore the mission's full capabilities following the failure of Sentinel-1B.
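The wet-snow signature described above (a marked backscatter drop as liquid water content increases) underpins the classic change-detection approach in the literature (Nagler and Rott style), in which the difference in dB between a current acquisition and a dry-snow reference is thresholded near -2 to -3 dB. A minimal sketch, with an indicative threshold that is not a parameter of this study:

```python
# Minimal change-detection sketch for wet-snow flagging: threshold the dB
# difference between a current acquisition and a dry-snow reference.
# The -2 dB value is an indicative literature threshold, not the study's.

WET_SNOW_THRESHOLD_DB = -2.0

def is_wet_snow(sigma0_db, sigma0_ref_db, threshold_db=WET_SNOW_THRESHOLD_DB):
    """True where backscatter has dropped below the reference by the threshold."""
    return (sigma0_db - sigma0_ref_db) < threshold_db

# Toy melt-season series: dry winter, deep melt minimum, then recovery
# toward dry-snow levels before ablation (the trajectory described above).
reference = -10.0
series = [-10.2, -11.5, -16.0, -14.0, -10.5]
flags = [is_wet_snow(s, reference) for s in series]
print(flags)  # [False, False, True, True, False]
```

The recovery phase at the end of the series is exactly where such a simple threshold fails, which is one motivation for the radiative-transfer analysis in this abstract.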

Monday 23 June 14:00 - 15:30 (Hall E1)

Presentation: Investigating the feasibility of detecting sub-canopy snow in forested environments with satellite LiDAR

Authors: Thomas Laraia, Dr. Steven Hancock, Dr. Richard Essery, Dr. Amy Neuenschwander, Dr. Andrew Ross
Affiliations: University Of Edinburgh, University of Texas at Austin, University of Leeds
Seasonal snow provides a freshwater supply for approximately a sixth of the world's population and keeps our climate cooler by reflecting significant sunlight. The warming of our climate is exacerbated by the global decrease in snow cover and the resulting snow albedo feedback. As such, snow is an essential climate variable. Snow cover maps are critical inputs to near-real-time weather forecasting, hydrological modelling, run-off simulations, and climate model validation. Most snow mapping methods apply algorithms or machine learning models to passive optical data. However, in forested environments, the canopy partially obscures the ground and affects the accuracy of such models. This problem is worth addressing because about half of the Northern Hemisphere's seasonally snow-covered land is forested. The SCAmod method addresses this limitation most effectively, to the best of our knowledge, by combining reflectance measurements around 550 nm with a canopy transmissivity map to estimate fractional snow cover (FSC). For a given pixel, SCAmod takes the estimated fraction of ground obscured by the canopy and the observed reflectance, and computes the fraction of visible ground that would have to be snow-covered to produce that reflectance. However, this does not account for differing melt rates between canopied and open areas, resulting from different sunlight exposure and from thermal radiation emitted by trees that capture sunlight. Satellite LiDAR, unlike passive optical sensors, can differentiate between ground and canopy signals by measuring the return height of light pulses. ICESat-2's photon-counting ATLAS sensor records heights for photons returning from 10 kHz pulses of 532 nm light fired at the Earth's surface. The ground and canopy radiometry (rate of photon returns) depend on factors such as surface reflectance, canopy cover, and signal-to-noise ratio.
Our research explores the feasibility of using ICESat-2 data to distinguish snow-covered from snow-free surfaces. We used phenocam imagery from seasonally snowy, forested sites across the Northern Hemisphere, where available, to estimate fractional snow cover. We used Snow_CCI's Snow Covered Fraction on Ground (SCFG) product, derived from the SCAmod method, as a benchmark against which to compare our ICESat-2-based snow detection model for each ICESat-2 overpass within 5 km of each phenocam site. Preliminary results indicate that ICESat-2 can reliably differentiate snow-covered and snow-free surfaces under sufficient signal-to-noise conditions. While ICESat-2's ground tracks are narrow and have low temporal resolution for a given small area, its dataset spans the globe with well over a trillion light pulses and billions of kilometers of ground tracks. Accurately detecting snow cover with ICESat-2 could enable the development of a globally comprehensive, forest-penetrating snow estimate dataset. This could be instrumental in training passive optical sensors to map FSC, which we aim to pursue in future research.
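The SCAmod-style mixing logic described above can be made concrete: observed reflectance is modelled as a linear mixture of snow, snow-free ground and canopy, then inverted for the snow fraction of the visible ground. The sketch below uses assumed endmember reflectances and a simplified transmissivity formulation for illustration; it is not the product's actual calibration.

```python
# SCAmod-style inversion sketch: given an observed reflectance and a canopy
# transmissivity value, solve for the fractional snow cover (FSC) of the
# visible ground. Endmember reflectances are illustrative assumptions.

R_SNOW = 0.70     # assumed snow reflectance near 550 nm
R_GROUND = 0.10   # assumed snow-free ground reflectance
R_CANOPY = 0.15   # assumed canopy reflectance

def fsc_from_reflectance(r_obs, transmissivity):
    """Invert r_obs = t*(fsc*R_SNOW + (1-fsc)*R_GROUND) + (1-t)*R_CANOPY."""
    t = transmissivity
    ground_part = (r_obs - (1.0 - t) * R_CANOPY) / t
    fsc = (ground_part - R_GROUND) / (R_SNOW - R_GROUND)
    return min(1.0, max(0.0, fsc))  # clamp to the physical range

# Forward-model a half-snow-covered pixel under 60% transmissivity, then invert.
t = 0.6
r = t * (0.5 * R_SNOW + 0.5 * R_GROUND) + (1.0 - t) * R_CANOPY
print(round(fsc_from_reflectance(r, t), 3))  # 0.5
```

The inversion shows why dense canopy (small t) amplifies reflectance errors: the observed signal is divided by t before the snow fraction is recovered, which is the accuracy limitation that motivates a LiDAR-based alternative.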

Monday 23 June 14:00 - 15:30 (Hall E1)

Presentation: Spatial and temporal variability of snow cover in the French Alps inferred from Sentinel-1

Authors: Clémence Turbé, Fatima Karbou, Antoine Rabatel, Adrien Mauss
Affiliations: CNRM CNRS Météo-France / CEN, Institut des Géosciences et de l'Environnement, UGA, OSUG, DirOP/Centre de Météorologie Spatiale, Météo-France
Snow cover has been extensively studied in terms of climatological trends with optical and infrared satellite imagery, particularly with data from MODIS, Landsat or Sentinel-2 (e.g., Derksen et Brown, 2012; Notarnicola, 2020; Gascoin et al., 2024). Beyond optical measurements, active microwave imagery, specifically from Sentinel-1, can be used to track wet snow and to analyze snowmelt dynamics. The backscatter coefficient of a snow-covered area decreases as the liquid water content of the snowpack increases (Karbou et al., 2021, Turbé et al., 2024, Dasser et al., prepint). To the best of our knowledge, no studies has focused on long-term monitoring of wet snow or on analyzing trends and variability in wet snow cover over several years. Our work aims to: (1) characterize the variability of wet snow in the French Alps using Sentinel-1 data over ten years (2015-2024); (2) extract information about total snow cover using melt-line diagnostics at the massif scale; and (3) establish the link between specific meteorological events (rain-on-snow, desert dust deposits) and the melt of the snowpack. We rely on Sentinel-1 VH-polarized observations, in ascending orbit, from 2015 to 2024 pre-processed using the EWC (European Weather Cloud) facility at CMS Lannion. We then use the method of Karbou et al. (2022) to detect areas of wet snow in Sentinel-1 times series. This method consists of detecting wet snow using a colour space segmentation technique applied to SAR images acquired at different times: an image for a given date (winter or spring) and a reference image (multi-year average SAR maps over 4 summers). The segmentation relies on the HSV colour scheme, which excels at highlighting variations within images (the H component represents the hue, the S component represents the intensity of the hue, and the V component represents the brightness). 
Consequently, the colour detection algorithm looks for colour location and luminosity rather than searching for specific RGB values (wet snow being identified as shades of magenta). The binary maps of wet snow obtained as output of the processing chain were compared with SAR Wet Snow (SWS) products provided by Copernicus. The binary wet snow maps we obtained were transformed into wet snow probability estimates for each French Alps massif, taking into account factors such as elevation, aspect and slope (extracted from the IGN DEM, 10 m), following the method described in Karbou et al. (2021). The resulting wet snow probabilities enable us to highlight the boundaries of wet snow and to compute time series of wet snow cover area (WSCA). In addition, we develop a method to compute the total snow cover area (SCA) from the WSCA, by assuming that above the upper wet snow limit the terrain is entirely covered with dry snow, and that there is no snow (or wet patchy snow) below the lower wet snow limit. To validate our method, our estimates of total snow cover area from Sentinel-1 are compared with time series of total snow cover retrieved from optical satellite measurements (MODIS, Sentinel-2). To study the spatial variability, we examine a few massifs that are characterized by specific meteorological events, such as rain-on-snow events and Saharan dust deposition on snow. We used SAFRAN meteorological reanalysis data to obtain solid and liquid precipitation for the analysis of rain-on-snow events. For instance, there were many rain-on-snow events in the Alps during the winter and spring of 2018. Using the wet snow probabilities from Sentinel-1 data, we find a rain/snow limit of about 2400 (+/-100) m on the 4th of January 2018, which is consistent with the boundary set at 2300 m by avalanche forecasters. 
Regarding dust deposition, we used simulations between 2020 and 2024 made with the MOCAGE atmospheric chemistry model (wet deposition, dry deposition, and sedimentation). These simulations enable us to detect the most significant Saharan dust events in the French Alps and to establish a potential link with snowmelt and wide areas of wet snow, taking into account any potential time lags that may arise.
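The two-date HSV colour-space segmentation described in this abstract can be sketched as below. The composite construction (reference backscatter in the red and blue channels, acquisition-date backscatter in green, so that a backscatter drop appears magenta), the dB normalisation bounds, and the hue/saturation thresholds are illustrative assumptions, not the published parameters of Karbou et al. (2022).

```python
import numpy as np

def rgb_to_hue_sat(rgb):
    """Vectorised hue and saturation (HSV convention) for an RGB array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc, minc = rgb.max(axis=-1), rgb.min(axis=-1)
    delta = maxc - minc
    safe = np.where(delta > 0, delta, 1.0)          # avoid division by zero
    hue = np.where(maxc == r, ((g - b) / safe) % 6, 0.0)
    hue = np.where(maxc == g, (b - r) / safe + 2, hue)
    hue = np.where(maxc == b, (r - g) / safe + 4, hue)
    hue = np.where(delta > 0, hue / 6.0, 0.0)       # scale to [0, 1)
    sat = np.where(maxc > 0, delta / np.where(maxc > 0, maxc, 1.0), 0.0)
    return hue, sat

def wet_snow_mask(sigma0_date_db, sigma0_ref_db,
                  hue_window=(0.78, 0.95), sat_min=0.3):
    """Flag wet snow by colour-space segmentation of a two-date SAR composite.

    A backscatter drop relative to the summer reference renders the pixel
    magenta, so the mask keeps pixels whose hue falls in the magenta window
    with sufficient saturation. Thresholds here are illustrative.
    """
    lo, hi = -25.0, 0.0                              # assumed dB display range
    ref = np.clip((sigma0_ref_db - lo) / (hi - lo), 0, 1)
    date = np.clip((sigma0_date_db - lo) / (hi - lo), 0, 1)
    hue, sat = rgb_to_hue_sat(np.dstack([ref, date, ref]))
    return (hue >= hue_window[0]) & (hue <= hue_window[1]) & (sat >= sat_min)
```

A pixel whose backscatter drops well below the summer reference is flagged; pixels with unchanged or increased backscatter fall outside the magenta hue window and are rejected.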
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.61/1.62)

Session: F.02.02 C- and L-band SAR synergies - ESA-JAXA cooperation and beyond

In 2020, the Japan Aerospace Exploration Agency (JAXA) and the European Space Agency (ESA) signed an agreement on the “Cooperation for the Use of Synthetic Aperture Radar Satellites in Earth Science and Applications”. The cooperation focuses on the joint analysis of L-band data acquired by JAXA’s ALOS-2/PALSAR-2 satellite together with ESA’s Sentinel-1 C-band satellite data for various applications. Research areas include polar and ocean monitoring, snow water equivalent retrieval, forest and wetland monitoring, surface soil moisture and agriculture monitoring, as well as the monitoring of geohazards and urban areas.

The key objective of the ESA-JAXA cooperation is to develop a better understanding of the benefits of combining L-band and C-band data over various areas and for the different thematic applications. We invite contributions using L- and C-band SAR data not limited to Sentinel-1 and ALOS-2, but also including other available data, such as from SAOCOM, RCM and potentially NISAR. A comparison with ground-based campaign data is envisaged to validate the results.

The session aims to provide insights for the development of future (L-band) SAR satellite missions, such as JAXA’s ALOS-4 satellite and ESA’s ROSE-L mission as well as synergies with existing and future spaceborne C-band SAR missions including Sentinel-1 and Sentinel-1 Next Generation.

This jointly chaired session shall give the involved scientists the opportunity to present ongoing research and results and foster the collaboration and exchange between European, Japanese and international participants.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Land fast sea ice evolution using L- and C-band SAR imagery over southern Svalbard

Authors: Malin Johansson, Tom Rune Lauknes, Dr. Zuzanna Swirad, Prof Cathleen Jones
Affiliations: UiT The Arctic University of Norway, NORCE, Institute of Geophysics, Polish Academy of Sciences, NASA/JPL
Sea ice forms every year around Svalbard, though it is experiencing shorter seasons and reduced extent (Johansson et al., 2020). The ice located along the shoreline that is anchored to the shore is called fast ice. In addition, there is sea ice transported from the Arctic Ocean down towards Svalbard, so-called drift ice. The presence of sea ice is important as it protects coasts from flooding and erosion by reducing the incoming wind-generated wave energy. In this study we focus on southern Svalbard, in the Hornsund fjord and along the southern Svalbard coast. The study location was selected because the Polish Polar Station is located in the Hornsund fjord and has actively collected in-situ data in the fjord since 1979, including year-round weather data such as air and water temperature, precipitation, snow cover thickness, and wind speed and direction. The aim of this study is to establish the sea ice extent as well as to separate the drifting sea ice, the fast ice, and the glacier ice. The study will utilize existing ALOS-2 and Sentinel-1 imagery and combine them with in-situ data to confirm expected sea ice and snow wetness conditions. Using the full archive of Sentinel-1A/B SAR imagery acquired between 14 October 2014 and 29 June 2023, the yearly seasonal evolution of the sea ice was investigated (Swirad et al., 2024). Near-daily images at 50 m resolution were used to establish that the average length of the sea ice season was 158 days, and that the thicker drift ice from the Barents Sea first entered the Hornsund fjord in October, with the last typically entering in March. The sea ice reaches its largest extent in April, though the annual cycle means that the ice has a limited thickness and often carries a wet snow cover. Interferometric synthetic aperture radar (InSAR) distinguishes between drifting sea ice and annual landfast sea ice through the zero coherence observed over drifting sea ice between the two SAR images used to create the interferogram. 
L-band radar is less sensitive than C-band to surface and volume scattering changes, thus maintaining better coherence over landfast sea ice. This study explores the use of InSAR coherence to delineate fast ice extent using image pairs from the winter to early melt season of 2024. We utilized an ALOS-2 Fine Beam Dual-Pol mode InSAR data stack, demonstrating that L-band effectively outlines the fast ice extent. This was compared to Sentinel-1 C-band SAR data covering the same period, which presented less definitive results due to decorrelation over the 12-day repeat period, likely from redistributed snow on the ice surface. Additionally, we found Sentinel-1 C-band backscatter time series valuable for understanding sea ice evolution, utilizing the dense time series to derive information about the stability of the fast ice as well as to infer thickness estimates. The comparisons between the L- and C-band InSAR findings are significant as the upcoming NISAR mission will collect L-band data in an InSAR-compatible mode, focusing on the study area in southern Spitsbergen.
References:
Johansson, A.M., et al., 2020, Consistent ice and open water classification combining historical synthetic aperture radar satellite images from ERS-1/2, Envisat ASAR, RADARSAT-2 and Sentinel-1A/B. Annals of Glaciology, 1–11, doi: 10.1017/aog.2019.52.
Swirad, Z. M., Johansson, A. M., and Malnes, E., 2024, Extent, duration and timing of the sea ice cover in Hornsund, Svalbard, from 2014–2023, The Cryosphere, 18, 895–910, doi: 10.5194/tc-18-895-2024.
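The coherence-based separation of fast ice (phase-stable, coherence near 1) from drifting ice (fully decorrelated, coherence near 0) described in this abstract rests on the standard windowed complex-coherence estimator; a minimal sketch over two co-registered single-look complex (SLC) images, with the window size chosen for illustration:

```python
import numpy as np

def boxsum(a, win):
    """Moving win x win window sum via 2-D cumulative sums (valid windows only)."""
    c = np.cumsum(np.cumsum(a, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))       # zero row/column so c[i, j] = sum a[:i, :j]
    return c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]

def insar_coherence(slc1, slc2, win=5):
    """Windowed magnitude of complex coherence between two co-registered SLCs.

    Fast ice keeps interferometric phase stable between acquisitions
    (coherence near 1), while drifting ice decorrelates completely
    (coherence near 0), which is the contrast used to map the fast-ice edge.
    """
    num = np.abs(boxsum(slc1 * np.conj(slc2), win))
    den = np.sqrt(boxsum(np.abs(slc1) ** 2, win) * boxsum(np.abs(slc2) ** 2, win))
    return num / np.maximum(den, 1e-12)
```

Thresholding the resulting coherence image (after geocoding and masking open water) would then delineate the fast-ice extent.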
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Current observation status of ALOS-4 PALSAR-3

Authors: Keisho Ito, Takeshi Motohka, Yukihiro Kankaku, Yohei Kojima
Affiliations: JAXA
Japan Aerospace Exploration Agency (JAXA) launched the Advanced Land Observing Satellite-4 (ALOS-4) on July 1st, 2024. The Phased Array type L-band Synthetic Aperture Radar-3 (PALSAR-3) onboard ALOS-4 achieves significantly enhanced observation frequency, along with an expanded observation width of 200 km in Stripmap mode and 700 km in ScanSAR mode, enabled by digital beamforming technology. Additionally, ALOS-4 shares the same orbit as ALOS-2, enabling combined analyses such as interferometric SAR using data from both satellites. This promises early detection of changes and improved accuracy in time-series analysis thanks to abundant archive data, and can contribute to a wide range of fields internationally, including disaster monitoring, infrastructure management, agricultural applications, forest monitoring, and maritime observation for sea ice and ship monitoring. Furthermore, collaboration not only with ALOS-2 but also with other satellites, including Sentinel-1, is expected to enhance observation frequency in time series, enable the detection of three-dimensional surface deformation, and reduce latency in disaster monitoring. This presentation will provide an overview of the initial calibration and validation results, along with the current observation status of ALOS-4 PALSAR-3. In 2025, the initial calibration and validation operation for PALSAR-3 is expected to be completed, and the satellite will begin operations based on the basic observation scenario (BOS) and provide observation data to users worldwide. The initial calibration and validation focused on high-priority observation modes and beams used in the BOS, evaluating various parameters, such as geometry, antenna patterns, radiometric accuracy, polarization, and image quality. 
In the BOS, ALOS-4 primarily conducts 3 m resolution observations with a 200 km swath over Japan, 10 m resolution observations with a 200 km swath globally (descending orbit), and 6 m resolution observations with a 100 km swath globally (ascending orbit, full polarization mode). Additionally, thematic observations are carried out based on the needs of users worldwide, tailored to specific application themes such as forests/wetlands, agriculture, crustal movement, ice/snow/polar regions, disaster, ocean, and international collaboration.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: ALOS-2 OPERATION STATUS AND INTERNATIONAL COOPERATION

Authors: Shin-ichi Sobue, Yuta Tochigi
Affiliations: JAXA
The Advanced Land Observing Satellite-2 (ALOS-2) was successfully launched on 24 May 2014. ALOS-2 incorporates several new techniques in response to user requests and contributes to disaster management and other application requirements. Since the successful completion of the initial checkout phase, ALOS-2 has been contributing to numerous applications, with special emphasis on emergency observations of natural disasters in Japan as well as regional and global space-based disaster support systems. Furthermore, based on the Basic Observation Scenario (BOS) of ALOS-2, 10 m global map data and other mode data are routinely collected and archived. This paper describes the status of ALOS-2 operations over ten years, comprising the five-year nominal operation period that ended in May 2019 and the subsequent 2+2-year extended operation periods. It covers the overall ALOS-2 operation status and data distribution, especially the free and open distribution of ALOS-2 ScanSAR mode normalized sigma naught data (analysis-ready data compatible with CEOS Analysis Ready Data for Land (CEOS ARD)), as well as international cooperation for ALOS-2 and beyond, in particular an overview of the JAXA-ESA L-band SAR research cooperation.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Spaceborne C and L Band SAR for Monitoring Soil and Canopy Freeze/Thaw State in the Boreal Region

Authors: Juval Cohen, Dr. Juha Lemmetyinen, Jorge Jorge Ruiz, Kimmo Rautiainen, Jaakko Ikonen, Anna Kontu, Jouni Pulliainen
Affiliations: Finnish Meteorological Institute
Satellite instruments operating at microwave frequencies enable efficient monitoring of land surface freeze/thaw (F/T) processes. Generally, lower frequencies are preferable in soil F/T detection, due to reduced sensitivity to above-ground vegetation and the snowpack, as well as deeper soil penetration. In the presented work, seasonal variations in space-borne SAR backscatter measured with dual-polarization ALOS2 L band and Sentinel-1 C band satellites in the boreal forest region, Finland, were investigated [1]. The ALOS2 data were available through the ESA-JAXA Mutual SAR Cooperation Agreement, covering the Sodankylä region in Northern Finland. The Sodankylä area and two additional test areas located in Central and Southern Finland were studied using Sentinel-1 C band data. The aim was to improve soil F/T monitoring in the boreal forest region. The focus was on the influence of canopy freezing on the observed backscatter during the winter, and the backscatter trends during the thaw season, as these aspects have not yet been properly investigated for space-borne SAR data in the context of soil F/T detection. A new method for soil F/T monitoring in the boreal region at L band, and an improved method using C band were demonstrated. The ground backscatter, canopy backscatter and canopy extinction coefficient were first retrieved by an inversion of a zeroth-order forest backscatter model in various forest densities. The classification to F/T was done by comparing the retrieved backscatter with previously collected reference backscatter values representing fully frozen and thawed conditions. In situ air temperature and snow depth observations were used to select the appropriate reference SAR scenes. Probability density functions for the freeze and the thaw classes were calculated from the reference backscatter values, and the F/T class was determined based on the probabilities of belonging to either one of the classes. 
A wet snow flag was added by utilizing air temperature and snow depth information in addition to the retrieved backscatter. The F/T detection accuracy was assessed by analyzing the classification results against: 1) in situ air temperature measurements, 2) a soil F/T index calculated from soil temperature and moisture measured at 5 cm depth, and 3) Sentinel-2 based Copernicus fractional snow cover (FSC) maps. Gradual freezing of forest canopies occurs at temperatures below -5 °C, with relatively lower freezing point for dry compared to wet wood. Our results demonstrated the opposite backscatter response of the lower frequency L band compared to the higher frequency C band to canopy freezing. Canopy backscatter of the L band decreased, while for the C band it increased in very cold air temperature conditions. These findings are in line with a study analyzing multi-frequency tower-based SAR data [2]. We also showed a decreasing backscatter trend between spring and autumn thaw periods. Observed backscatter during the spring thaw conditions was typically stronger compared to the autumn thaw conditions, most likely due to higher soil moisture of the top-soil layer during the spring. The use of autumn thaw references for the freezing season and spring thaw references for the end of the winter is therefore suggested for improving previous methods relying only on autumn thaw reference data [3]. For L band, both canopy and soil freezing contributed to a decrease in the total observed backscatter, meaning that freezing of the soil-canopy system can be recognized from a general decrease in the observed backscatter. However, the retrieval of ground backscatter is still recommended for excluding the influence of canopy freezing to the detection if the soil state is of interest. 
For F/T detection with C band, the separation of ground and canopy backscatter is essential before applying F/T classification, due to the increase of canopy backscatter following canopy freezing versus a decrease of ground backscatter following soil freezing. The overall similarity of the presented L and C band F/T classification with air temperature was 90-95 % and with the soil F/T index measured at 5 cm depth 67-75 %. Both frequencies were sensitive to F/T variations in the top-soil layer, and detected the onset of freezing before it was indicated by the soil F/T index. C band wet snow detections corresponded well with the fractional snow cover maps, while L band seemed to be less sensitive to wet snow. The demonstrated method for the F/T detection with ALOS2 can be utilized when planning methods for F/T monitoring, with the availability of future operational low-frequency SAR data from e.g. ROSE-L or NISAR. Some of the improvements suggested in this work have already been applied in the Sentinel-1 based operational soil F/T product of Finland (https://nsdc.fmi.fi/data/data_s1freezethaw). [1] J. Cohen, J. Lemmetyinen, J. Jorge Ruiz, K. Rautiainen, J. Ikonen, A. Kontu, J. Pulliainen, “Detection of soil and canopy freeze/thaw state in the boreal region with L and C Band Synthetic Aperture Radar”, Remote Sensing of Environment, Vol 305, 2024. [2] J. Lemmetyinen, J. Jorge Ruiz, J. Cohen, J. Haapamaa, A. Kontu, J. Pulliainen, J. Praks, “Attenuation of Radar Signal by a Boreal Forest Canopy in Winter”, IEEE Geoscience and Remote Sensing Letters, Vol 19, pp. 1-5, 2022. [3] J. Cohen, K. Rautiainen, J. Lemmetyinen, T. Smolander, J. Vehviläinen, J. Pulliainen, “Sentinel-1 based soil freeze/thaw estimation in boreal forest environments”, Remote Sensing of Environment, Vol 254, 2021.
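The probabilistic freeze/thaw labelling step described above, in which probability density functions are fitted to reference backscatter from known frozen and thawed scenes and each observation is assigned to the more likely class, can be sketched as follows. The Gaussian class densities and the reference values used in the example are assumptions for illustration, not the published algorithm or its parameters.

```python
import numpy as np

def freeze_thaw_classify(sigma0_db, frozen_refs_db, thawed_refs_db):
    """Label backscatter (dB) as frozen (True) or thawed (False).

    Gaussian densities are fitted to reference backscatter representing fully
    frozen and fully thawed conditions, and each observation is assigned to
    the class with the higher (log-)likelihood.
    """
    def log_likelihood(x, refs):
        mu, sd = np.mean(refs), np.std(refs) + 1e-6   # guard against zero spread
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

    x = np.asarray(sigma0_db, dtype=float)
    return log_likelihood(x, frozen_refs_db) > log_likelihood(x, thawed_refs_db)
```

In the published method this comparison is applied to the retrieved ground (or canopy) backscatter after the forest-model inversion, with reference scenes selected using in-situ air temperature and snow depth.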
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Permafrost thawing in disturbed areas inferred from ALOS series and Sentinel-1 InSAR data

Authors: Kazuki Yanagiya, Takahiro Abe, Go Iwahana, Yoshihiro Iijima, Professor Masato Furuya, Mamoru Ishikawa, Takeo Tadono
Affiliations: Japan Aerospace Exploration Agency, Mie University, University of Alaska Fairbanks, Hokkaido University, Tokyo Metropolitan University
Recent climate changes have accelerated permafrost thawing, which manifests as ground surface displacement in circumpolar and alpine regions. Due to the vast extent of permafrost distribution, satellite observations are essential for monitoring surface displacement, assessing current conditions, and making future projections. Satellite-based Synthetic Aperture Radar Interferometry (InSAR) is a powerful tool for investigating surface displacement related to permafrost dynamics without the need for ground-based equipment. Through the research activities of the Japanese polar satellite observation science team “ALOS4ice,” we investigated surface displacement in disturbed areas (e.g., forest logging and wildfire) across major permafrost regions, including Siberia, Alaska, Mongolia, and Canada. Applying InSAR time-series analysis to ALOS-2 and Sentinel-1 data, our studies revealed significant seasonal surface subsidence in anthropogenically disturbed areas and post-wildfire scars. Furthermore, interannual surface subsidence caused by irreversibly changing ground ice was identified. Notably, more than 30 cm of cumulative subsidence was detected in post-wildfire areas over a four-year period. In ice-rich permafrost regions, rapid thermokarst development significantly influenced polygonal terrain evolution and altered the hydrological environment. At several selected sites, we compared surface displacement data with thaw depth, ground temperature, and soil moisture measurements and conducted direct validation using GNSS observations and leveling. The high temporal resolution (12 days) of Sentinel-1 data enables detailed monitoring of surface displacement caused by seasonally recurring thaw subsidence and frost heave. In contrast, the L-band ALOS series data maintains high coherence over long-term image pairs and vegetated areas, allowing for the observation of interannual permafrost thaw dynamics. 
The successful launch of ALOS-4 in July 2024, which follows the same orbital path as ALOS-2, facilitates InSAR image analysis between ALOS-2 and ALOS-4 data. This development allows long-term and continuous monitoring of ground displacement processes with high spatial resolution L-band data. In the future, this capability could enable the observation of the complete sequence of processes in disturbed areas, from rapid thawing after disturbance to stabilization and recovery. Furthermore, integrating data from upcoming satellite missions such as Sentinel-1 Next Generation, NISAR, and ROSE-L is expected to enhance comprehensive, large-scale, and long-term monitoring of permafrost thawing.
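Separating the seasonally recurring thaw subsidence and frost heave from the interannual subsidence signal in an InSAR displacement time series can be sketched as a least-squares decomposition into a linear trend plus an annual sinusoid. This is a generic decomposition under that assumed model, not the authors' exact processing chain.

```python
import numpy as np

def fit_trend_seasonal(t_years, disp_m):
    """Decompose a displacement time series (metres, t in decimal years) into a
    linear interannual trend plus an annual sinusoid by least squares.

    Returns (trend in m/yr, seasonal amplitude in m). Negative trend values
    correspond to interannual subsidence such as permafrost thaw.
    """
    G = np.column_stack([
        np.ones_like(t_years),        # constant offset
        t_years,                      # interannual trend (m/yr)
        np.sin(2 * np.pi * t_years),  # annual cycle, sine component
        np.cos(2 * np.pi * t_years),  # annual cycle, cosine component
    ])
    m, *_ = np.linalg.lstsq(G, disp_m, rcond=None)
    _, trend, a, b = m
    return float(trend), float(np.hypot(a, b))
```

With 12-day Sentinel-1 sampling the annual cycle is well resolved, while the trend term captures the cumulative subsidence (e.g., the >30 cm over four years reported for post-wildfire areas would appear as a trend near -0.075 m/yr).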
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Analysis of ALOS2 InSAR over snow-covered boreal forest for SWE retrieval with SnowModel simulations and in-situ measurements.

Authors: Jorge Jorge Ruiz, Juha Lemmetyinen, Ioanna Merkouriadi, Juval Cohen, Anna Kontu, Thomas Nagler, Jouni Pulliainen, Jaan Praks
Affiliations: Finnish Meteorological Institute, Environmental Earth Observation Information Technology GmbH, Aalto University
Numerous L-band satellite missions will become available in the upcoming years, such as ROSE-L from ESA, NISAR from NASA and ISRO, ALOS4 from JAXA, and SAOCOM from CONAE. These satellite missions will improve the capabilities of the existing L-band ones by reducing the temporal baseline, increasing the spatial resolution, or providing two simultaneous bands. L-band has been proposed for retrieval of Snow Water Equivalent (SWE) using repeat-pass InSAR (d-InSAR). The method is an inversion of the interferometric phase, which can be linked to the change in SWE (ΔSWE), with simply an estimate of the bulk snow density and a Digital Elevation Model (DEM) as ancillary data [1,2]. L-band can maintain relatively high values of coherence over long periods of time [3-5] and is capable of effectively penetrating through the vegetation. Higher frequencies have been shown to be more susceptible to environmental effects such as wind or precipitation than L-band, making them less suitable for d-InSAR SWE retrieval [2,3,6,7]. Although this technique has received increasing attention in the past 5 years, there are still important topics that require attention before any operational product can be developed. The effect of the vegetation on the retrieval is still unclear, and even though some models have been proposed, these have not been extensively validated [8]. Decorrelation over snow is not completely understood. Structural changes in the snowpack such as Liquid Water Content (LWC), snow metamorphism, or ice layering have been linked to increased decorrelation [3,9]. Furthermore, spatial variability of the snowpack can also influence the interferometric coherence [10]. Additionally, seasonal retrievals have been based on the exploitation of subsequent interferometric pairs, while the exploitation of fully connected networks has not yet been investigated for snow. 
In this work [9], a set of ALOS2 imagery (L-band and HH polarization) over Sodankylä, in Northern Finland, was used to understand the retrieval capabilities over the boreal forest. Simulations from SnowModel were generated using in-situ assimilation. Total SWE and daily snowmelt were generated with a temporal sampling of 1 day. These simulations were used for comparison between the InSAR-derived and the SnowModel ΔSWE. The role of air temperature in the retrieval was empirically assessed. Increased differences between InSAR and SnowModel were observed when the air temperature approached 0 °C, because of the increase in the dielectric permittivity of the canopy [11]. Furthermore, the coherence was analyzed using the daily snowmelt generated by SnowModel. Cumulative snowmelt between passes was linked to increased decorrelation, with the areas of the image most affected by it exhibiting less coherence. We hypothesized that this was caused by ice layer formation in the snowpack and/or water infiltration into the soil. Additionally, daily snowmelt on the SAR acquisition date was used as a proxy to detect wet snow. As expected, pixels affected by wet snow showed on average less coherence than those unaffected. However, only a small difference between dry and wet snow pixels was found for low quantities of liquid water content. Finally, fully connected stacks spanning 5 years were generated. Strong seasonality was observed in coherence. Summer to summer exhibited moderate coherence, while summer to dry snow winter exhibited low coherence. The presence of wet snow drastically reduced coherence. However, moderate coherence is observed for subsequent acquisitions with dry snow. A coherence recovery behavior is observed between different years when snow depth is similar. [1]. T. Guneriussen, K. A. Hogda, H. Johnsen, and I. Lauknes, “Insar for estimation of changes in snow water equivalent of dry snow,” in IGARSS 2000. 
IEEE 2000 International Geoscience and Remote Sensing Symposium. Taking the Pulse of the Planet: The Role of Remote Sensing in Managing the Environment. Proceedings (Cat. No.00CH37120), vol. 2, 2000, pp. 463–466 vol.2. [2]. S. Leinss, A. Wiesmann, J. Lemmetyinen, and I. Hajnsek, “Snow water equivalent of dry snow measured by differential interferometry,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 8, pp. 3773–3790, 2015. [3]. J. J. Ruiz, J. Lemmetyinen, A. Kontu, R. Tarvainen, R. Vehmas, J. Pulliainen, and J. Praks, “Investigation of environmental effects on coherence loss in sar interferometry for snow water equivalent retrieval,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2022. [4]. Tarricone, J., Webb, R. W., Marshall, H.-P., Nolin, A. W., and Meyer, F. J.: Estimating snow accumulation and ablation with L-band interferometric synthetic aperture radar (InSAR), The Cryosphere, 17, 1997–2019, https://doi.org/10.5194/tc-17-1997-2023, 2023. [5]. Bonnell, R., McGrath, D., Tarricone, J., Marshall, H.-P., Bump, E., Duncan, C., Kampf, S., Lou, Y., Olsen-Mikitowicz, A., Sears, M., Williams, K., Zeller, L., and Zheng, Y.: Evaluating L-band InSAR snow water equivalent retrievals with repeat ground-penetrating radar and terrestrial lidar surveys in northern Colorado, The Cryosphere, 18, 3765–3785, https://doi.org/10.5194/tc-18-3765-2024, 2024. [6]. S. Oveisgharan, R. Zinke, Z. Hoppinen, and H. P. Marshall, “Snow water equivalent retrieval over idaho, part a: Using sentinel-1 repeat-pass interferometry,” The Cryosphere Discussions, vol. 2023, pp. 1–19, 2023. [Online]. Available: https://tc.copernicus.org/preprints/tc-2023-95/ [7]. H. Rott, T. Nagler, and R. Scheiber, “Snow mass retrieval by means of SAR interferometry,” in Proc. 3rd FRINGE Workshop, Eur. Space Agency, Earth Observ., 2003, pp. 1–6 [8]. Y. Lei, J. Shi, C. Liang, C. Werner and P. 
Siqueira, "Snow Water Equivalent Retrieval Using Spaceborne Repeat-Pass L-Band SAR Interferometry Over Sparse Vegetation Covered Regions," IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 2023, pp. 852-855, doi: 10.1109/IGARSS52108.2023.10282234. [9]. J. Jorge Ruiz, I. Merkouriadi, J. Lemmetyinen, J. Cohen, A. Kontu, T. Nagler, J. Pulliainen, and J. Praks, “Comparing insar snow water equivalent retrieval using alos2 with in situ observations and snowmodel over the boreal forest area,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–14, 2024. [10]. S. Yueh, X. Xu, T. Wang and M. Chaubell, "Interferometric Coherence of Bistatic Radar Observations and Spatial Resolution," 2024 International Conference on Electromagnetics in Advanced Applications (ICEAA), Lisbon, Portugal, 2024, pp. 590-595, doi: 10.1109/ICEAA61917.2024.10701804. [11]. J. Lemmetyinen et al., "Attenuation of Radar Signal by a Boreal Forest Canopy in Winter," in IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1-5, 2022, Art no. 2505905, doi: 10.1109/LGRS.2022.3187295.
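The phase-to-ΔSWE inversion underlying this abstract is commonly written with the dry-snow approximation of Guneriussen et al. (ref. [1] above), Δφ = -(4π/λ)·ΔSWE·(1.59 + θ^2.5) with θ the local incidence angle in radians. A minimal sketch of the forward model and its inversion follows; the wavelength and incidence angle are illustrative parameter choices, not values from the paper.

```python
import numpy as np

L_BAND_WAVELENGTH = 0.2425   # metres, approximate ALOS-2 PALSAR-2 L-band wavelength

def phase_from_dswe(dswe_m, incidence_deg=35.0, wavelength=L_BAND_WAVELENGTH):
    """Forward model: interferometric phase change (radians) caused by a change
    in snow water equivalent (metres w.e.), dry-snow approximation [1]."""
    theta = np.deg2rad(incidence_deg)
    return -(4 * np.pi / wavelength) * dswe_m * (1.59 + theta ** 2.5)

def dswe_from_phase(dphi_rad, incidence_deg=35.0, wavelength=L_BAND_WAVELENGTH):
    """Invert differential interferometric phase (radians) to ΔSWE (metres w.e.)."""
    theta = np.deg2rad(incidence_deg)
    return -dphi_rad * wavelength / (4 * np.pi * (1.59 + theta ** 2.5))
```

In practice the interferometric phase must first be unwrapped and corrected for topographic and atmospheric contributions (hence the DEM as ancillary data), and the approximation holds only for dry snow, consistent with the coherence losses under wet-snow conditions reported above.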
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall K2)

Session: D.06.01 Orbital Intelligence for Earth Observation applications: The edge of AI In Space

In the current technological era, we are witnessing significant interest in the application of Artificial Intelligence (AI) directly on board small Earth Observation (EO) satellites. The incorporation of AI at the edge of satellite constellations enables the extraction of actionable insights from spaceborne data and the generation of immediate and automated responses. This, in turn, has the potential to significantly increase the autonomy, agility, and reconfigurability of EO missions and enable novel applications.
This session delves into the technologies, novel computing paradigms for onboard satellite processing (edge computing, neuromorphic computing, cloud computing), EO use cases, innovative mission concepts enabled by the application of AI on board EO satellites, and design of resilient and reliable AI algorithms for onboard satellite applications. In addition, it aims at fostering a collaborative environment for sharing knowledge, experiences, and potential partnerships among space industry professionals, researchers, and policymakers on the topic of orbital intelligence for EO applications.
The topics of discussion for this session, although not exhaustive, will include:

1. AI-enhanced onboard payload data processing:
- AI for efficient onboard radiometric and geometric payload data processing (e.g., band-to-band alignment, georeferencing, calibration, Synthetic Aperture Radar (SAR) data focusing)
- End-to-end AI algorithms for onboard processing of payload raw data (e.g., onboard segmentation of payload raw data, processing of partially focused or unfocused SAR data)
- Onboard payload data compression
- Other novel AI-enhanced onboard payload data processing algorithms

2. Edge AI for EO Applications:
- Near-real-time onboard data processing for early alert systems
- Edge computing for data transmission data-rates reduction
- AI-driven payload data anomaly detection and response
- Autonomous operations and decision making for automated operations
- Other novel EO applications enabled by onboard AI

3. Novel Mission Concepts and Distributed data processing and intelligence:
- Tip & cue missions and New Observing Strategies (NOS) with distributed spacecraft missions and collaborative sensor nodes
- Distributed edge and federated learning on board satellite constellations
- Swarm intelligence for multi-satellite constellations


4. AI Reliability and Resilience for onboard satellite applications:
- Addressing ethical considerations and AI governance in space missions
- Verification and validation of AI algorithms for onboard satellite EO applications
- Design of resilient AI algorithms on Commercial Off-The-Shelf hardware

5. Novel computing paradigms for onboard satellite processing:
- Cloud computing for onboard distributed data processing
- Neuromorphic computing for EO

6. Collaborative and Interdisciplinary Approaches:
- Partnerships between academia, industry, and government agencies
- Cross-disciplinary research and development initiatives
- Future directions and long-term vision for AI in Earth observation missions.
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall K2)

Presentation: Neural Architecture Search: Exploring Constructive Model Compression for Satellite Efficient Onboard Processing

Authors: Roberto Del Prete, Dr Nicolas Longépé, Dr Gabriele Meoni, Andrea Mazzeo, Prof. Maria Daniela Graziano, Parampuneet Kaur Thind, Gaurav Bajaj, Matthew Whitley
Affiliations: European Space Agency, Università degli Studi di Napoli Federico II, Little Place Labs
Satellite on-orbit processing driven by Artificial Intelligence (AI) lies at the intersection of Machine Learning (ML) and embedded systems. The capacity to generate real-time insights directly at the edge is fundamentally reshaping frameworks for Earth Observation (EO) data management and analysis. By minimizing latency, crucial applications such as target detection, change monitoring, and early warning systems are considerably augmenting their operational efficiency and impact within the EO domain. Nonetheless, onboard processing introduces significant challenges arising from inherent limitations related to thermal dissipation, power availability, and computational resources, which collectively hinder the widespread implementation of AI methodologies in spaceborne systems. Efforts to address these constraints have led to considerable research into compressing models to satisfy the stringent requirements of edge AI devices. While many model compression approaches are destructive—such as quantization and pruning—Neural Architecture Search (NAS) distinguishes itself by employing a constructive paradigm, wherein the optimal model architecture for a particular task and dataset is synthesized progressively, layer by layer. The optimization challenge associated with NAS is intrinsically complex, grounded in advanced optimization theory, and requires sophisticated methodologies. Among various optimization paradigms—including Bayesian optimization, reinforcement learning, and evolutionary programming—Genetic Algorithm (GA) optimization has emerged as a particularly promising approach due to its stochastic characteristics and its efficacy in navigating non-convex optimization problems. 
This paper introduces a GA-driven framework for NAS, specifically tailored to satellite-based applications, demonstrating its effectiveness in model compression across a diverse range of EO datasets and establishing a novel pathway for the optimization of edge AI systems in spaceborne environments. The transformative potential of the proposed NAS framework is exemplified through its application to two critical Earth Observation tasks: burnt area detection and cloud masking. For burnt area detection, the framework optimized a neural network trained on simulated PhiSat-2 data, capturing post-fire burnt area scars across Europe over a two-year period. This task involved classifying regions into multiple categories relevant to burnt area identification. The resulting NAS-optimized model is lightweight and highly efficient, processing satellite imagery at a rate closely aligned with typical data acquisition speeds. This makes the model exceptionally well-suited for real-time or near-real-time predictions, a critical requirement for time-sensitive applications such as fire monitoring and response. Its robustness was further validated on a globally diverse dataset, spanning fire events from multiple continents, highlighting its adaptability to varied geographic and environmental conditions. For cloud masking, a sensor-agnostic NAS-derived neural network was developed, leveraging training data from a range of optical sensors, including Sentinel-2 and Landsat series satellites. This network, designed for generalization across sensors with resolutions ranging from fine to moderate scales, effectively addressed the ubiquitous challenge of cloud contamination in EO data. Despite its lightweight architecture, the model achieved high accuracy and computational efficiency, showcasing the capability of NAS to produce streamlined yet effective models tailored for resource-constrained satellite environments.
These outcomes underscore the promise of NAS as a groundbreaking optimization approach for edge AI systems in spaceborne applications, driving advancements in both operational efficiency and data reliability in EO missions.
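The GA-driven search described above can be illustrated with a minimal, self-contained sketch. This is not the authors' framework: the search space (a list of layer widths), the fitness function (a hypothetical accuracy proxy minus a size penalty standing in for training and evaluating each candidate), and all parameter values are illustrative assumptions.

```python
import random

random.seed(0)

# Toy search space: each genome is a list of layer widths.
WIDTHS = [8, 16, 32, 64]
DEPTH = 4

def random_genome():
    return [random.choice(WIDTHS) for _ in range(DEPTH)]

def fitness(genome):
    # Hypothetical stand-in for validation accuracy minus a size penalty;
    # a real NAS run would train and evaluate the candidate network here.
    params = sum(a * b for a, b in zip(genome, genome[1:]))
    capacity = sum(genome)
    return capacity - 0.01 * params

def evolve(pop_size=20, generations=10, mutation_rate=0.2):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, DEPTH)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:       # point mutation
                child[random.randrange(DEPTH)] = random.choice(WIDTHS)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

The constructive character of NAS shows up here in that the genome directly encodes the architecture, so the search builds candidate networks rather than trimming a fixed one.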
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall K2)

Presentation: Onboard results of the Deep Compression Application and potential use of raw data for the Φsat-2 mission

Authors: Dr. Giorgia Guerrisi, Dr Gabriele Meoni, Dr Nicolas Longépé
Affiliations: Tor Vergata University of Rome, European Space Agency (ESA)
The Φsat-2 CubeSat mission was developed by the European Space Agency (ESA) with the aim of demonstrating the benefits of running onboard several Artificial Intelligence (AI)-based applications relevant to Earth Observation. The satellite, which was successfully launched in August 2024, integrates a multispectral instrument operating as a pushbroom, the MultiScape100, capable of acquiring 8 bands in the visible and near-infrared including a panchromatic band. Raw acquisitions are elaborated through a dedicated processing chain producing data at three different processing levels: L1A, L1B, and L1C. Particularly, L1A products are radiance data with no registration and georeferencing; L1B products include fine registration and georeferencing; L1C products are registered and geo-referenced Top-Of-Atmosphere reflectance data. Onboard compression via Deep Neural Networks (DNNs) is one of the applications selected to be tested on board, its interest lying in the bandwidth it saves for data transmission. To this aim, a dedicated DNN model, the “Deep Compression App”, was designed. The model can perform a 2-dimensional or a 3-dimensional compression. In the first case, the compression of a single-band image is executed, while in the second case a multispectral compression, which involves image height, width, and depth, is performed. In both scenarios the compression algorithm is based on a Convolutional AutoEncoder (CAE), i.e., an AutoEncoder with convolutional layers. The encoder is executed on board, where the compressed version of the input is extracted. The decoder, instead, operates on the ground, reconstructing the image. The quality of the reconstructed images is evaluated by means of standard image similarity metrics, but also in real application scenarios that aim at assessing the practical use of the reconstructed images. This work presents two main contributions.
First, it showcases the “Deep Compression App” and the methodology to train and validate the DNN model prior to the launch of Φsat-2. Due to the initial lack of Φsat-2 images, the dataset used for training and validating the algorithm was based on Sentinel-2 acquisitions, which were converted to simulate the final Φsat-2 acquisitions at L1A level through a processing chain that includes the selection of the matching spectral bands and the simulation of the spatial resolution, Signal-to-Noise Ratio (SNR), and Modulation Transfer Function (MTF) of the camera. The model architecture was specifically tailored for the on-board context and designed according to the requirements imposed by the mission, especially the onboard computing hardware (i.e., Intel Movidius Myriad 2 Vision Processing Unit). As a second contribution, this work investigates the feasibility of applying image compression via DNNs directly on Φsat-2 raw data imagery. The idea is to minimize run time and energy consumption by avoiding onboard computationally-intensive processing and calibration steps, and applying compression directly on raw data after a coarse band registration. To measure performance, the results were compared with those obtained from the images collected after the satellite launch.
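The split-execution paradigm described above, with the encoder onboard and the decoder on the ground, can be sketched in a few lines. The real Deep Compression App is a convolutional autoencoder; here 2x2 mean pooling and nearest-neighbour upsampling are hypothetical placeholders for the learned encoder and decoder, chosen only to make the onboard/ground split concrete.

```python
# Toy stand-in for the split-execution paradigm of a compression autoencoder.

def encode(image):
    """Onboard step: 2x2 mean pooling halves each dimension (4x fewer pixels)."""
    h, w = len(image), len(image[0])
    return [
        [(image[i][j] + image[i][j + 1] + image[i + 1][j] + image[i + 1][j + 1]) / 4
         for j in range(0, w, 2)]
        for i in range(0, h, 2)
    ]

def decode(latent):
    """Ground step: nearest-neighbour upsampling restores the original size."""
    out = []
    for row in latent:
        up = [v for v in row for _ in range(2)]
        out.append(up)
        out.append(list(up))
    return out

image = [[float(i + j) for j in range(4)] for i in range(4)]
latent = encode(image)       # only this compact representation is downlinked
restored = decode(latent)    # reconstruction happens on the ground
```

Only `latent` crosses the downlink, which is the point of the split: bandwidth scales with the latent size, not the image size.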
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall K2)

Presentation: Hybrid PCA-Enhanced Autoencoders for Onboard Hyperspectral Data Processing: Advancing Edge AI for Atmospheric and Environmental Monitoring in Satellite Platforms

Authors: Sarathchandrakumar Thottuchirayil Sasidharan, Githin Kuruvanthoor Suresh, Giorgia Guerrisi, Davide De Santis, Giovanni Schiavon, Fabio Del Frate
Affiliations: Sapienza University of Rome, Tor Vergata University of Rome, DICII
Hyperspectral imaging (HSI) has transformed Earth observation by providing rich spectral data across hundreds of bands, enabling high-impact applications such as atmospheric and environmental monitoring. However, the high dimensionality of hyperspectral data presents significant challenges for onboard processing, particularly for small satellites with constrained resources. Efficient dimensionality reduction is critical to ensure real-time data processing and transmission within the limitations of edge devices. In this study, we propose a novel hybrid dimensionality reduction approach that integrates Principal Component Analysis (PCA) and autoencoders, optimized for onboard hyperspectral data processing using PRISMA satellite data. This hybrid model uniquely combines PCA and autoencoders in a synergistic manner, where PCA not only performs initial dimensionality reduction but also informs the autoencoder’s architecture. PCA-derived principal components determine the bottleneck structure, guide kernel initializations, and optimize model efficiency based on explained variance. This integration allows the autoencoder to capture complex non-linear spectral dependencies while leveraging PCA’s computational simplicity, achieving a balance between accuracy and resource efficiency for onboard implementation. The model was validated using PRISMA hyperspectral data, focusing on reducing dimensionality from 189 bands to 26 while preserving critical information for downstream tasks. It was deployed and tested on multiple edge AI platforms, including NVIDIA Jetson Nano, Intel Neural Compute Stick 2 (NCS2), and a Raspberry Pi-NCS2 hybrid configuration. A high-performance host machine served as a benchmark for comparative analysis. Key applications included land cover classification, achieving over 95% accuracy post-reduction, and wildfire monitoring, where the model demonstrated high precision in identifying risk zones, highlighting its suitability for real-time disaster response.
Performance evaluation based on metrics such as Root Mean Square Error (RMSE), Structural Similarity Index Measure (SSIM), and Peak Signal-to-Noise Ratio (PSNR) confirmed the model’s ability to maintain data fidelity. Among the tested platforms, NCS2 exhibited the best energy efficiency, making it ideal for low-power satellite missions, while Jetson Nano offered faster processing, suitable for speed-critical tasks. The Raspberry Pi-NCS2 hybrid balanced computational performance and energy efficiency, presenting a viable solution for small satellites. Further optimizations, including simplified autoencoder architectures, lightweight activation functions, and hardware accelerators like FPGAs, enhanced real-time processing capabilities, reducing computational overhead by approximately 40% compared to standalone autoencoder approaches. This study demonstrates the feasibility of applying a hybrid PCA-autoencoder model for onboard hyperspectral data processing using PRISMA data, significantly reducing the need for raw data transmission and enabling timely access to actionable insights. The approach offers scalable and efficient solutions for Earth observation missions, particularly for time-sensitive applications like wildfire monitoring and environmental management. Future research will expand validation across diverse datasets, refine compression techniques, and explore broader use cases to maximize the model’s impact in next-generation space-based platforms.
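The idea of PCA informing the autoencoder's bottleneck, as described in the abstract, can be sketched simply: choose the number of latent units as the number of principal components needed to reach a target explained variance. The variance ratios below are illustrative values (with the fast decay typical of hyperspectral data), not PRISMA results.

```python
# Minimal sketch: derive an autoencoder bottleneck width from PCA
# explained-variance ratios.

def bottleneck_size(explained_variance_ratios, target=0.99):
    """Smallest component count whose cumulative explained variance >= target."""
    cumulative = 0.0
    for k, ratio in enumerate(explained_variance_ratios, start=1):
        cumulative += ratio
        if cumulative >= target:
            return k
    return len(explained_variance_ratios)

# Hypothetical explained-variance spectrum for a hyperspectral scene.
ratios = [0.70, 0.15, 0.08, 0.03, 0.02, 0.01, 0.005, 0.005]
k = bottleneck_size(ratios, target=0.98)
```

In the hybrid scheme, `k` would size the autoencoder's latent layer, letting the nonlinear model spend its capacity only on the spectral structure PCA deems significant.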
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall K2)

Presentation: Event-driven computation and sparse neural network activity deliver low power AI

Authors: Dr Douglas McLelland, Mr Gilles
Affiliations: Brainchip
The availability of AI in small EO satellites would bring many advantages in diverse situations, but to be truly useful it must be delivered at small size and power. Dedicated hardware accelerators present one possible solution. Neuromorphic technology, inspired by the functioning of the brain, has long been proposed as a route to ultra-low power compute, but has not achieved widespread uptake because the associated training algorithms cannot yet match the performance of mainstream deep learning techniques, or are perceived as requiring (even more) specialised knowledge, or atypical input data. Here we present a hardware implementation sitting at the interface between mainstream deep learning and neuromorphic computing (Akida from BrainChip, available as Commercial Off-The-Shelf hardware, or as IP for integration). This approach provides an inference platform for standard convolutional neural networks (CNNs) but exploits the neuromorphic idea of event-based computation – efficiently skipping zero-valued outputs – to greatly reduce the compute required. We show that in this hardware implementation, processing time (and thus energy per inference) is linearly dependent on the number of events to be processed. Naturally, to be effective, this approach requires that there be significant sparsity (many zero values) in the outputs of model layers. We show that the widely available pretrained versions of many popular models already feature high levels of activation sparsity, often even higher than the 50% that we might expect as a starting point from the application of a rectifying non-linearity (the commonly used ReLU activation function). This sparsity can be further enhanced by simply including L1 and/or L2 regularization on activations during model training, via the loss function.
When set to appropriate levels, this can yield very significant increases in sparsity (to >90% in some layers, that is, a more than 10x reduction in required compute) at the cost of a minimal drop in model accuracy. Further, there is a common preconception that event-driven computation is only useful where inputs to the model are themselves event-based (such as those generated by dynamic event cameras). On the contrary, we will show that even standard frame-based images can drive very sparse activity: less so in early layers (typically with high spatial resolution and a low channel dimension) transmitting mostly spatial information that is required for task resolution; but very much so in later layers with a higher feature dimension, of which only a small subset of features may be required for processing of a particular input. By way of example, we will present results showing the performance of this approach on several typical EO tasks (image classification and object detection).
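The core claim above, that compute scales with the number of non-zero "events" rather than the layer size, can be made concrete with a toy multiply-accumulate loop that skips zero activations. This is a software illustration of the principle, not the Akida hardware's actual execution model.

```python
# Sketch of event-driven computation: after ReLU, only non-zero
# activations ("events") contribute, so work scales with the event count.

def relu(xs):
    return [x if x > 0.0 else 0.0 for x in xs]

def sparsity(xs):
    """Fraction of zero-valued activations."""
    return sum(1 for x in xs if x == 0.0) / len(xs)

def event_driven_dot(activations, weights):
    """Multiply-accumulate that skips zero-valued activations entirely."""
    total, macs = 0.0, 0
    for a, w in zip(activations, weights):
        if a != 0.0:
            total += a * w
            macs += 1          # count only the operations actually performed
    return total, macs

acts = relu([-1.0, 0.5, -0.2, 0.0, 2.0, -3.0, 0.25, -0.1])
weights = [0.1] * len(acts)
value, macs_used = event_driven_dot(acts, weights)
```

Here 5 of 8 activations are zero, so only 3 of 8 multiply-accumulates run; pushing sparsity above 90% with activation regularization, as the abstract describes, would shrink the loop body correspondingly.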
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall K2)

Presentation: SAR meets AI: the role of AI in next-generation SAR missions

Authors: Sanna Sandberg, Roberto Del Prete, Dr. Nicolas Longepe, Gabriele Meoni, Ernesto Imbembo, Gianluca Furano, Max Ghiglione
Affiliations: ESA - ESTEC, ESA - ESRIN
Currently, most Synthetic Aperture Radar (SAR) imagery from spaceborne platforms is processed terrestrially due to its computational resource requirements. However, future Earth observation missions demand increased functionality, low-latency responses and reduced costs, so more value must be extracted already in orbit. To address future requirements and enable more onboard processing for SAR, Artificial Intelligence (AI) techniques are being integrated into the signal processing chain, which paves the way for developing cognitive systems capable of supporting edge operations for SAR payloads in orbit. This paper presents recent advancements in next-generation digital onboard processing for future SAR satellite missions, emphasizing greater autonomy and adaptability, based on activities from the European Space Agency (ESA). Some novel technological breakthroughs include advancements in data-driven Radio Frequency Interference (RFI) detection and AI-based adaptive compression methods. For example, to efficiently decrease onboard data volumes for SAR technology, an ESA activity applied Machine Learning (ML) for compressing Level 0 (L0) SAR data, successfully reducing the data volume from a bitrate perspective. Target detection is a crucial task for SAR technology; another ESA activity investigated timely onboard detection of vessels from L0 raw Sentinel-1 Stripmap (SM) imagery, detailing an ML segmentation method exploiting the spatial features of isolated targets. Furthermore, this research investigates the potential of a Selective Focusing approach, a method designed to identify and prioritize specific areas within raw data for further analysis and processing. This approach leverages ML algorithms, particularly convolutional neural networks, to automatically detect regions of interest based on predefined criteria.
By selectively focusing on these regions, the method reduces overall data processing latency, optimizes resource utilization, and broadens the scope for downstream applications such as near-real-time monitoring, anomaly detection, and targeted analysis. Once it is possible to perform digital signal processing efficiently in orbit, the introduction of cognitive SAR concepts is feasible. Cognitive radar autonomously adapts its operational modes as a direct response to the sensed environment, thus creating an automatic closed-loop architecture. This dynamic capability opens many use cases and benefits, such as enhancing resolution by using the full capability of the radar platform only after target detection—without ground intervention. Within its operational mode, cognitive radar could also adjust digital radar backend functions, like beam steering, interaction and number of beams. Therefore, it can tailor the use of the platform and payload to meet the low latency, functionality, and cost requirements of future missions. In conclusion, this research presents ESA’s collaborative initiatives and internal investigations, thereby providing a steppingstone for the progression of AI-centric SAR data handling in subsequent research endeavors.
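The Selective Focusing idea, prioritizing regions of raw data before committing compute to full focusing, can be sketched with a toy tile-ranking routine. The real approach uses learned (e.g. CNN) detectors; the energy criterion, tile size and frame below are illustrative stand-ins.

```python
# Toy sketch of selective focusing: rank tiles of a raw-data frame by a
# simple energy criterion and keep only the most promising for processing.

def tiles(frame, size):
    """Split a 2D frame into (row, col, tile) blocks of size x size."""
    for i in range(0, len(frame), size):
        for j in range(0, len(frame[0]), size):
            yield i, j, [row[j:j + size] for row in frame[i:i + size]]

def tile_energy(tile):
    return sum(v * v for row in tile for v in row)

def select_regions(frame, size, keep):
    """Return the top-left coordinates of the `keep` highest-energy tiles."""
    ranked = sorted(tiles(frame, size), key=lambda t: tile_energy(t[2]), reverse=True)
    return [(i, j) for i, j, _ in ranked[:keep]]

# 4x4 frame with one "bright" 2x2 block in the bottom-right corner.
frame = [
    [0.0, 0.1, 0.0, 0.1],
    [0.1, 0.0, 0.1, 0.0],
    [0.0, 0.1, 5.0, 4.0],
    [0.1, 0.0, 4.0, 5.0],
]
top = select_regions(frame, size=2, keep=1)
```

Only the selected tiles would then go through the expensive focusing chain, which is where the latency and resource savings described above come from.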
Add to Google Calendar

Monday 23 June 14:00 - 15:30 (Hall K2)

Presentation: Onboard Machine Learning-based Compression of Synthetic Aperture Radar (SAR) Images Using FPGA/MPSoC Hardware

Authors: Cédric Léonard, Dr. Ing. Francescopaolo Sica, Prof. Dr. Martin Schulz
Affiliations: Technical University of Munich, Remote Sensing Technology Institute, German Aerospace Center (DLR), Institute of Space Technology and Space Applications, University of the Bundeswehr
Synthetic Aperture Radar (SAR) data is a critical modality in remote sensing, offering unparalleled capabilities for earth observation under all weather conditions and during both day and night. Despite its numerous advantages, SAR missions face significant challenges in their ground segment, particularly in data handling and transmission. The number of ground stations available for data downlink cannot scale proportionally when the number of satellites and their sensing resolution increase, leading to bottlenecks in data transmission. Consequently, data compression plays a central role in SAR missions, as it can alleviate the burden on ground stations and improve operational efficiency. Recent advances in SAR data processing, such as onboard focusing, have further highlighted the need for effective data compression strategies. These innovations enable real-time SAR data processing and transmission, paving the way for efficient onboard data handling [1]. However, developing practical, high-performance compression solutions that meet the constraints of onboard hardware remains challenging. In this work, we present a comprehensive framework for the onboard compression of SAR data, addressing challenges from both algorithmic and hardware implementation perspectives. As a baseline, we employ the methodology described in [2], a state-of-the-art approach for simultaneous despeckling and compression of SAR images. The lightweight model, based on an autoencoder architecture [3], incorporates quantization and entropy encoding of the latent variables - fundamental steps in traditional data compression. Additionally, the method leverages a self-supervised training strategy to effectively eliminate speckle, achieving competitive performance in both tasks. The model operates within a split-execution paradigm: compression occurs onboard the satellite (transmitter side), while reconstruction is performed on the ground (receiver side). 
This split enables efficient use of bandwidth and computational resources. However, translating this algorithm into a practical onboard implementation necessitates careful compromises between compression rates, reconstruction quality, and hardware constraints. A highly suitable hardware infrastructure for onboard SAR data processing is the Field Programmable Gate Array (FPGA), known for its flexibility and suitability for parallel computations. FPGAs are widely used in earth observation missions for their low-power consumption and potential for radiation tolerance. However, deploying Machine Learning models on FPGAs is non-trivial and requires significant expertise and design effort. To address these challenges, we employ Vitis AI [4]. The toolchain facilitates the quantization and compilation of deep learning models, easing their deployment on FPGA-based systems. We evaluate our implementation on a Zynq UltraScale+ evaluation board. Zynq devices are heterogeneous Systems-on-Chip (SoC), including FPGA-based programmable logic and CPU-based processing systems. In our development workflow, we refactor the dataflow of the compression algorithm to optimize its performance on FPGA hardware. Specifically, we focus on minimizing data movement between the programmable logic (FPGA) and the processing system (MPSoC), as these operations are often bottlenecks in such systems [5]. Additionally, we address key practical considerations such as model size, latency, and power consumption to ensure suitability for deployment onboard CubeSats, which face strict constraints in size, weight, and power (SWaP). Our results demonstrate that the proposed implementation achieves a favorable trade-off between task-specific metrics, e.g., peak signal-to-noise ratio and compression rate, and hardware resource utilization. Despite the challenges of migrating CPU- or GPU-based algorithms to FPGA-based systems, we achieve a light and fast solution viable for real-world onboard deployment. 
This work represents a step forward in developing efficient, hardware-compatible SAR data compression techniques, contributing to the broader goal of enabling scalable and efficient SAR missions in the era of NewSpace. References: [1] L. P. García, G. Furano, M. Ghiglione, V. Zancan, E. Imbembo, C. Ilioudis, C. Clemente, and P. Trucco, “Advancements in Onboard Processing of Synthetic Aperture Radar (SAR) Data: Enhancing Efficiency and Real-Time Capabilities,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 16625–16645, 2024. [2] F. Sica, N. Foix-Colonier, and J. Amao-Oliva, “Self-Supervised Joint SAR Image Compression and Despeckling,” in IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, pp. 2188–2190. [3] D. Minnen, J. Ballé, and G. D. Toderici, “Joint Autoregressive and Hierarchical Priors for Learned Image Compression,” in Advances in Neural Information Processing Systems, vol. 31, Curran Associates, Inc., 2018. [4] “AMD Vitis™ AI Software,” https://www.amd.com/en/products/software/vitis-ai.html, 2019. [5] K. Abdelouahab, M. Pelcat, J. Serot, and F. Berry, “Accelerating CNN inference on FPGAs: A Survey,” 10.48550/arXiv.1806.01683, 2018.
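The baseline method's quantization and entropy encoding of latent variables [2, 3] can be illustrated with a stdlib-only sketch: uniform scalar quantization followed by an empirical Shannon-entropy estimate, which bounds the bitrate an entropy coder could achieve on the symbol stream. The step size and latent values are illustrative, not taken from the paper.

```python
# Sketch of the two classic compression steps applied to autoencoder latents.
import math
from collections import Counter

def quantize(latents, step):
    """Uniform scalar quantization: round each latent to the nearest bin index."""
    return [round(x / step) for x in latents]

def entropy_bits_per_symbol(symbols):
    """Empirical Shannon entropy: lower bound in bits/symbol for entropy coding."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

latents = [0.04, -0.02, 0.51, 0.49, -0.03, 1.02, 0.02, 0.98]
symbols = quantize(latents, step=0.5)
rate = entropy_bits_per_symbol(symbols)
```

The trade-off the paper navigates is visible even here: a larger `step` collapses more latents into the same bin, lowering the entropy (bitrate) at the cost of reconstruction error.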
Add to Google Calendar

Monday 23 June 14:22 - 14:42 (EO Arena)

Demo: D.04.20 DEMO - Can I reuse your code? Free, fast and FAIR processing of EO data

The most innovative constellation of EO missions requires the most innovative approach to exploiting that data. In this demo, we will showcase the latest innovations in large-scale EO processing through the use of openEO.

What You’ll Experience:
• Hands‑On Walkthrough: See openEO in action, from data ingestion to analysis, all in one unifying API.
• Scalable Cloud Processing: Leverage ESA‑sponsored infrastructure for near‑zero cost data access and processing.
• Collaborative Ecosystem: Learn how researchers, cloud providers, and engineers are working together—EU‑style collaboration, no big‑tech lock‑in.
• Copernicus Integration: Explore seamless access to Copernicus Data through a FAIR, open platform.

Why Attend?
• Free & Open: Thanks to ESA sponsorship, data access and processing are essentially free—no hidden fees or proprietary barriers.
• Fair & Transparent: Join a truly FAIR ecosystem free from commercial conflicts of interest.
• Community‑Driven Innovation: Contribute to and benefit from a growing network of experts pushing the boundaries of EO science.

Speakers:


  • Pratichhya Sharma - VITO
Add to Google Calendar

Monday 23 June 14:45 - 15:05 (EO Arena)

Demo: D.03.31 DEMO - SNAP in Action - Various Application Examples throughout the week demonstrating the power of SNAP for EO data visualisation, analysis and processing - Session 1

SNAP is the ESA toolbox for visualising, analysing and processing optical and microwave EO data. SNAP supports a large number of current and past satellite sensors as well as generic data formats. SNAP addresses all kinds of users, from early-stage students through experienced researchers to production managers responsible for public and commercial EO processing services.

In a series of demonstrations we showcase this breadth of possibilities across various land and water real-life applications. Demonstrations will be repeated multiple times to allow as many participants as possible to join a specific demonstration. We will tailor the daily programme from a set of prepared demonstrations according to the themes of the days, and to user needs expressed during the conference.

The following list gives a glimpse of the demonstrations from which we can select:
1. Sentinel-1 ETAD processing with SNAP
2. Change Detection Monitoring
3. Supporting new SAR missions with SNAP
4. “Live” fire evolution in Los Angeles using Sentinel-2 imagery
5. Burned Areas Detection – Mehedinti, Romania
6. Monitoring Drought Evolution – Dobrogea, Romania
7. Water Quality in urban areas at the example of the city of Hamburg
8. Interpreting Hyperspectral Data for coastal habitat mapping

Speakers:


  • Diana Harosa - CS Romania
  • Cosmin Cara - CS Romania
Add to Google Calendar

Monday 23 June 15:07 - 15:27 (EO Arena)

Demo: D.04.23 DEMO - Leveraging Sentinel Zarr Data

#zarr #stac

The EOPF Sample Service generates and publishes new Sentinel products in Zarr format, enabling scalable and efficient Earth observation analysis. This demonstration introduces tools developed in the activity that allow users to fully exploit these new data products: an xarray EOPF backend and an xcube EOPF data store.
The xarray EOPF backend provides seamless access to individual Sentinel Zarr data products, with additional features to enhance usability, such as aligning all Sentinel-2 bands to a common grid. The xcube EOPF data store builds on this by using the STAC API to locate relevant observations and leveraging the xarray backend to open and process the data. It mosaics and stacks Sentinel tiles along the time axis, creating an analysis-ready data cube for advanced geospatial analysis.
Beyond simple data access, xcube offers powerful processing capabilities, including sub-setting, resampling, and reprojection. It also includes an integrated server and a visualisation tool, xcube Viewer, which efficiently renders multi-resolution data pyramids for fast, interactive exploration. The viewer supports basic data analytics, such as polygon-based statistics, band math, and time series visualisation.
This demonstration will show how to access and process Sentinel Zarr data using these tools. We will introduce the xarray backend, explore the EOPF xcube data store, and showcase how xcube enables the creation and visualisation of analysis-ready data cubes. Participants will learn how to perform efficient geospatial analysis with Sentinel Zarr products in a Python environment.

Point of Contact:
Konstantin Ntokas (available on site 23-26 of June)
konstantin.ntokas@brockmann-consult.de
Brockmann Consult GmbH

Speakers:


  • Konstantin Ntokas - Brockmann Consult
Add to Google Calendar

Monday 23 June 15:30 - 15:50 (EO Arena)

Demo: D.04.31 DEMO - NoR Updates and Road Map - Session 1

This session will showcase the ESA Network of Resources initiative’s current status, major updates, and road map.

Speaker:


  • Francesco Barchetta - Starion for ESA
Add to Google Calendar

Monday 23 June 15:30 - 16:15 (Nexus Agora)

Session: D.03.07 Implementing FAIR Open Science: Advancing ESA’s EO Science Strategy

ESA’s new EO Science Strategy “Earth Science in Action for Tomorrow’s World” puts forth Open Science and Digital Innovation as key enablers for Earth Action, driving accelerated discovery to address the pressing challenges of our time. To realise this vision, national agencies, international organizations, and research institutions must work together to create frameworks that support open science, facilitate data sharing, and enable interoperability.
By creating a space for conversation among global, European and national space and research agencies and institutions, the scientific users and industrial service providers, the session will explore practical pathways to Open Science in EO and Earth System Science. Highlighting examples like EarthCODE and APEx, the discussion will focus broadly on the wider ecosystem of interoperable solutions, fostering international partnerships, and accelerating the transition from EO science to impactful innovation.
This Agora will also explore how ESA can take a leading role to enhance global collaboration on Open Science, ensuring the EO Science Strategy translates into action and impact.
1. Implementing the ESA EO Science Strategy
- How Earth Action can benefit from Reproducible and Reusable Science
- Open Science and Digital Innovation in practice:
i. Technology to enable practice: current solutions, best practices, and future trends (EarthCODE, APEx, other national initiatives)
ii. Technology to support discovery: leveraging AI, big data, and interoperable systems to enable faster discoveries and insights
- Challenges and opportunities in implementing these principles across ESA, EU and National programmes.
2. European and Global Collaboration in Open EO Science
- Strengthening partnerships with national space agencies, international organizations, and research institutions.
- How ESA can act as a catalyst for global open science initiatives.
- ESA Science Clusters and collaborative research

This panel-style Agora will include 4–5 invited speakers from national space agencies, European Commission, the scientific community and the European EO Platforms industry, with expertise in open science and collaboration. The session will be moderated by ESA.

Moderator:


  • Anca Anghelea - ESA

Speakers:


  • Anna Hogg – Professor, School of Earth and Environment, University of Leeds
  • Miguel Mahecha – Professor, Earth System Data Science, Leipzig University
  • Stefanie Lumnitz – policy officer, seconded expert, DG-RTD
  • Marco Celesti - Earth Surface Hyper & Multi-spectral Optical Scientist
  • Geoff Busswell - Vice President of Business Growth, Telespazio UK
  • Dennis Clarijs - Program Manager Remote Sensing Services, VITO
  • Julie Lowndes – founding director, OpenScapes
Add to Google Calendar

Monday 23 June 15:30 - 16:15 (ESA Agora)

Session: C.05.08 TRUTHS, the ESA metrology mission for Earth action

TRUTHS, an innovative ESA Earth Watch mission, acts as an in-orbit metrology laboratory and will benchmark other missions’ data by carrying a primary radiometric standard traceable to SI units. Its unprecedented accuracy in measuring Earth and solar radiances makes it possible to address questions about the Earth/Sun radiation imbalance while at the same time providing a calibration anchor for other missions and datasets. The session will review the value of TRUTHS in the context of the ESA EO programme and its role in international Earth action activities.

Speakers:


  • Dr. Simonetta Cheli - ESA
  • Dr. Thomas August - ESA
  • Dr. F. Rabier - ECMWF
  • Dr. B. Bojkov - EUMETSAT / CEOS
  • Prof. Nigel Fox - NPL
  • Andrea Marini - ESA
  • P. Bargellini - ESA
Add to Google Calendar

Monday 23 June 15:30 - 16:15 (Frontiers Agora)

Session: F.02.15 FAO & ESA Partnership on the use of Earth Observation for Food and Agriculture

The Food and Agriculture Organization (FAO) has partnered with the European Space Agency (ESA) to improve the exchange of expertise, knowledge and relevant data for the joint development of Earth Observation applications responding to the mandate of FAO. The two organizations signed a Memorandum of Understanding in 2021 to facilitate synergies between R&D efforts and to scale up solutions, in particular FAO’s capacity development work aimed at enabling countries to use Earth Observation for agricultural statistics and SDG monitoring, by standardizing methods and applications, increasing the accuracy of results and improving the sustainability of solutions.
The LPS22 Agora will discuss with FAO experts working in different thematic domains the following topics:
• Requirements and challenges for using satellite Earth Observation data;
• Exchange of data sets from integrated household/field surveys, essential for calibration and validation of Earth Observation models;
• Developing innovative Earth Observation algorithms, products and applications relevant for the mandate of FAO making full use of latest IT capabilities, such as cloud computing;
• Demonstrating and validating Earth Observation capabilities for data generation under FAO’s mandate.

Opening: Welcome and keynotes


  • Rune Floberghagen, Head of the Climate Action, Sustainability and Science Department – European Space Agency (ESA)

Moderated panel discussion:


  • Moderator: Benjamin Koetz, Head of Long-term Action Section, ESA
  • Jose Rosero – Director, Statistics Division, UN Food and Agriculture Organization (FAO)
  • Livia Peiser – FAO, Head of Geospatial Unit, Land and Water Division
  • Lorenzo De Simone – FAO, Statistics Division
  • Erik Lindquist – FAO, Forestry Division
  • Jonathan Pound – FAO, Global Information and Early Warning System, Markets and Trade Division
Add to Google Calendar

Monday 23 June 15:52 - 16:12 (EO Arena)

Demo: A.02.15 DEMO - Sen4Stat: an open-source toolbox leveraging satellite Earth Observation to improve agriculture statistics

For years, the potential of satellite Earth Observation (EO) for agricultural statistics has been recognized, but this has not yet led to the adoption of the technology by National Statistical Offices (NSOs). The open-source Sentinels for Agricultural Statistics (Sen4Stat) toolbox aims to facilitate the uptake of Sentinel EO-derived information in the official processes of NSOs, from the early stages of the agricultural surveys to the production of the statistics.
It consists of an open-source EO processing system linked with (i) a module for quality control of in situ datasets, (ii) a visualization tool and (iii) a set of tools for higher-level statistical analyses. Being open source, it allows any user to generate, on their own premises and in an operational way, products tailored to their needs.
This demonstration showcases how the Sen4Stat toolbox automatically ingests and processes Sentinel-1 and Sentinel-2 time series in a seamless way for operational crop mapping and yield modelling, using ground data provided by national statistical surveys. EO products are then integrated with the survey dataset to improve the statistics. The session will:
• provide a live demonstration of the toolbox: how to download it, how to get access and how to get started with basic usage;
• present case studies that highlight the performance of the tool and its fitness for use.
The Sen4Stat system has already convinced countries that it can provide the reliable, robust and timely information needed to strengthen food security. It has also received support from international funders such as FAO, CIMMYT, the World Bank and development banks, which provide the mid-term perspective needed to facilitate the adoption of such new technologies. In this context, this session aims to convince new users and to widen the Sen4Stat community.

Speakers:


  • Boris Nörgaard – Université catholique de Louvain (UCLouvain)
  • Guillaume Jadot – Université catholique de Louvain (UCLouvain)
  • Pierre Houdmont – Université catholique de Louvain (UCLouvain)
  • Sophie Bontemps – Université catholique de Louvain (UCLouvain)
  • Cosmin Udroiu – CS ROMANIA
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.61/1.62)

Session: A.01.02 Vertical Coupling in the Whole Atmosphere System

The Earth's atmosphere extends from the surface to the upper thermosphere and ionosphere. While the various vertical domains are governed by different processes, they are coupled through chemical, physical and dynamical processes (internal waves, solar and magnetospheric forcing, etc.). The implications of these couplings for the weather and climate system are the subject of ongoing research. Progress is being made in observing, understanding and modelling the atmosphere as a whole. This session invites contributions dealing with the vertical coupling mechanisms between these vertical domains.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Vertical Coupling due to Natural Hazard Induced Atmospheric Waves: Overview, Challenges and Requirements in the Context of Early Warning

Authors: Sabine Wüst, Marco Guerra, Jaroslav Chum
Affiliations: DLR, INGV, IAP-CAS
Atmospheric waves, often excited in the troposphere, can propagate both horizontally and vertically over long distances. They carry momentum and energy without a net transport of mass, and in this way contribute to the redistribution of energy and momentum in the atmosphere, which must be adequately taken into account in atmospheric modelling. As well as carrying energy and momentum, they also transport information about their source (e.g. its activity). The fastest atmospheric waves are infrasound and acoustic-gravity waves. They are generated, among other sources, by natural hazards such as tsunamis, earthquakes, volcanic eruptions and hurricanes, and can reach the mesosphere and ionosphere under certain circumstances. They may provide information on the activity and nature of a natural hazard and, owing to their propagation speed, may be used for early warning. This presentation gives a literature-based overview of the current state of observations of natural-hazard-induced atmospheric waves in the mesosphere and ionosphere. The current status and challenges of using them for early warning are analyzed, and requirements for ground- and satellite-based remote sensing observations are formulated.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Measuring atomic oxygen in the MLT from a stratospheric balloon with the OSAS-B terahertz heterodyne spectrometer

Authors: Martin Wienold, Alexey Semenov, Peder Bagge Hansen, Heinz-Wilhelm Hübers
Affiliations: German Aerospace Center (DLR), Institute of Optical Sensor Systems, Humboldt-Universität zu Berlin, Department of Physics
The Oxygen Spectrometer for Atmospheric Science on a Balloon (OSAS-B) is a heterodyne receiver for the thermally excited ground-state transition of neutral atomic oxygen at 4.75 THz [1]. It has been shown that this transition is favorable for the determination of atomic oxygen in the mesosphere and lower thermosphere (MLT) region of Earth [2]. Due to water absorption, it cannot be observed from the ground. Atomic oxygen is the dominant species in the MLT and thus plays an important role in the chemistry and energy balance of the MLT region, as well as in the deceleration of low-Earth-orbit satellites. OSAS-B uses a combined helium/nitrogen cryostat to cool the instrument's detector, a superconducting hot-electron bolometer mixer, as well as the quantum-cascade laser which serves as the local oscillator for heterodyne detection. A turning mirror allows for measurements at different vertical inclinations and for radiometric calibration against two blackbody sources. The first flight took place as a one-day flight in September 2022 from Esrange, Sweden, in the framework of the EU-funded Hemera 2020 programme. During the flight, several hundred spectra were recorded for different elevation angles and azimuth directions. We will present the analysis of these spectra and discuss the findings with respect to the concentration of atomic oxygen and winds in the MLT. The results will also be compared with the MSIS model (NRL MSIS 2.0/2.1).
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Identification of Auroral OH Emissions and Their Basic Properties as Seen in High-Latitude Airglow Observations

Authors: Carsten Schmidt, Juliane Neußer, Lisa Küchelbacher, Sabine Wüst, Michael Bittner
Affiliations: German Aerospace Center, Earth Observation Center, University of Bonn, Institute of Geosciences, University of Augsburg, Institute of Physics
Spectroscopic observations of the OH (hydroxyl) airglow have been performed at the German Antarctic Research Station Neumayer III (NEU, 70.67°S, 8.27°W) since 2013 in order to determine neutral temperatures at the height of the mesosphere and lower thermosphere (MLT). Similar observations, with a high temporal resolution of 15 seconds, were performed at the Arctic Lidar Observatory for Middle Atmosphere Research (ALOMAR, 69.28°N, 16.01°E) from 2010 until 2014. A reanalysis of these extensive data sets in the context of the QUID-REGIS (QUiet Ionospheric Disturbances - REsearch based on Ground-based mesospheric and Ionospheric data with Swarm data) project for ESA's 4D Ionosphere Initiative involves the implementation of a better discrimination between OH airglow and occasional auroral contamination. The reanalysis provides a clear identification of N₂+(1-2) and N₂(1P) band systems contaminating the airglow observations in the wavelength range between 1500 and 1600 nm, albeit with a certain amount of unexplained variability remaining. This remaining variability exhibits similar temporal characteristics to the N₂+ / N₂ auroral emissions, but leaves no additional signatures in the observed airglow emission spectra. It is therefore interpreted as auroral OH emission superimposed onto the OH airglow, leading to a radiance increase of up to 20% within a few seconds, an increased variability, and a return to undisturbed airglow radiances within one or two minutes. The majority of the events are identified in close temporal connection to (but not limited to) the occurrence of N₂+ aurora; they thus appear to be linked to the processes also leading to N₂+ aurora. However, the exact mechanism, which is either 1) direct excitation via electron impact, 2) influence on the well-known chemical reaction chains leading to OH airglow under undisturbed conditions, or 3) dynamical coupling of magnetohydrodynamic and neutral acoustic waves (i.e. infrasound), is still under investigation.
Both N₂+ and the unusual OH emissions are only observed during minor or moderate geomagnetic activity (Kp between 2 and 6), which might be due either to the geomagnetic latitude of the observation sites or to the formation mechanism of both emissions. So far, a common feature of the few dozen OH cases identified is that their occurrence is limited to the evening hours before local midnight. The potential discovery of auroral OH emission opens up new possibilities for advancing our understanding of the interactions between the magnetosphere and the neutral middle and upper atmosphere. With the help of corresponding satellite-based observations, the simultaneous observation of N₂+, N₂(1P) and OH in the SWIR spectral range might provide further insight into the temporal and vertical evolution of electron precipitation and energy redistribution into the upper atmosphere.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Dynamics across the Polar Mesopause Region: Observations at the Polar Environmental Atmospheric Research Station (PEARL)

Authors: William Ward, Dr. Samuel Kristoffersen, Dr. Chris Meek, Dustin Fraser, Dustin Gamblin
Affiliations: University of New Brunswick, University of Saskatchewan, McGill University
The instrumentation sited at the Polar Environment Atmospheric Research Laboratory (PEARL), at Eureka, Nunavut, Canada, on Ellesmere Island (80°N, 85°W), includes several instruments which take observations in the mesopause region and lower thermosphere. These include an all-sky imager (the PEARL All-Sky Imager (PASI); irradiance images), a meteor radar (MR; wind), a Spectral Airglow Temperature Imager (SATI; temperature and irradiance), a field-widened Michelson interferometer (the E-Region Wind Interferometer (ERWIN); wind and irradiance) and a Fabry-Perot interferometer (wind, temperature, irradiance; Q. Wu, P.I.). In this paper, selected results from observations over the lifetime of this observatory are presented, including detailed wind comparisons between the meteor radar and ERWIN, vertical wind observations, climatologies of airglow variability and gravity wave observations. The meteor radar/ERWIN comparisons show that the directional information on horizontal winds is in good agreement, but the ERWIN wind amplitudes are 70% of the radar winds. Vertical winds from ERWIN are precise enough, and of a suitable cadence, for gravity wave signatures to be identified in time series of this wind. The vertical wind standard deviation is a few m/s, and gravity waves in vertical wind and airglow variations are typically in quadrature, although there are exceptions. Gravity waves in horizontal wind with periods from tens of minutes to close to 12 hours are observed. Ray tracing using GROGRAT has been undertaken and indicates that the short-period waves may be due to secondary gravity waves excited by mountain wave breaking. It is not clear whether the ~12-hour signatures are semidiurnal tides or inertial gravity waves. Seasonal and solar-cycle variability in airglow is present, with several significant peaks appearing during the winter and airglow signatures proportional to solar activity observed. Coordination of the various measurements to form more extensive data products is planned.
The meteor radar/ERWIN wind comparison is the first step in this direction. Airglow brightness from PASI, ERWIN and SATI is highly correlated for the emissions common amongst them (as expected), and the PASI observations can be used to set the spatial and temporal variability of all three instruments in context. Correlations between SATI temperature measurements and airglow irradiance will be explored to determine whether temperature and airglow variations are consistent with the vertical wind variations. Taken together, these instruments provide the means to explore the dynamics of this region in detail.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: The interaction of gravity waves with larger scale dynamics during the Sudden Stratospheric Warming 2018/2019 as inferred from synthetic CAIRT observations

Authors: Peter Preusse, Sebastian Rhode, Hanli Liu, Nicholas Pedatella, M Pramitha, Arun Jo Mathew, Manfred Ern, Karlheinz Nogai, Alex Hoffmann, Bernd Funke, Scott Osprey, Bjoern-Martin Sinnhuber, Joern Ungermann
Affiliations: Forschungszentrum Juelich, National Center for Atmospheric Research NCAR-HAO, Indian Institute of Science Education and Research, ESA ESTEC, Instituto Astrofisico de Andalucia, University of Oxford, Karlsruhe Institute of Technology
A major sudden stratospheric warming (SSW) occurred on 1 January 2019. This SSW was followed by an elevated stratopause event, which led to downward transport of NO generated by energetic particle precipitation (EPP) and, in consequence, to one of the strongest indirect effects of EPP on stratospheric ozone. In this study we focus on the dynamical effects of this SSW and on how CAIRT, one of the two candidate missions for ESA Earth Explorer 11, would observe them. CAIRT is an infrared emission limb imager with wide spectral coverage, which allows for the inference of temperature and more than 20 trace species. Of particular importance for dynamical analyses is its high spatial resolution of 50 km along-track, 25 km across-track and 1 km vertically, over a 400 km wide track and almost throughout the entire middle atmosphere. The study is based on a high-resolution WACCM-X run nudged to MERRA-2 data as well as on ECMWF analysis data. These data are interpolated to CAIRT observation locations, and a forward modelling of CAIRT-observable radiances is performed. These radiance data are then perturbed by instrument noise before a tomographic retrieval is performed. This forms an end-to-end simulation, and further analyses are based on these synthetic retrieval results. Importantly, we sample the large-scale waves to orbit coordinates in a way that correctly takes into account the measurement time of the individual observations. This means, for instance, that the most prominent tide in the mid-mesosphere, the diurnal migrating tide DW1, does not appear under its true zonal wavenumber 1 but, independent of longitude, as opposing deviations from the background zonal mean for ascending and descending orbit nodes, respectively.
In order to analyze the large-scale waves in these synthetic orbit data, we follow the approach of Ern et al. (2018) and perform sinusoidal fits for wavenumbers 1 to 7 in longitude and for a set of wave periods on a sliding window of several weeks of observations. The corresponding wave periods range from 1 day (the Nyquist limit) to the length of the time window and follow the natural grid of an FFT for this window length. This leads to a roughly orthogonal basis of wave modes for each latitude and altitude. In the resulting spectra the leading modes are identified and their evolution over the winter is described. For scale separation and the identification of GWs, only frequencies up to 1/(1.5 days) are used directly. Tides are removed by fitting stationary planetary waves in short 3-day windows to the ascending and descending nodes separately. All planetary wave structures are further smoothed by a third-order Savitzky-Golay filter in latitude. Furthermore, the potential benefit for planetary wave analysis of having a second limb sounder in space at the same time will be assessed, both considering the planetary waves and tides in their own right and with respect to gravity wave (GW) analysis. Temperature residuals obtained from the scale separation are associated with GWs and are analysed with the JUWAVE/S3D small-volume 3D data analysis method. Amplitudes and 3D wave vectors are determined for each observed altitude, for three traces across track and every third point along track. The results will be compared to analyses based on the full model fields. By combining the analyzed GWs with a GW ray-tracer, the main propagation paths will be identified, and the modulation of GWs by the large-scale waves on the one hand, and potential effects of the GWs on the planetary waves on the other, will be investigated. Consistency between dynamics and tracer transport inferred from observed composition will be discussed.
The uniqueness of CAIRT in comparison to previous and planned missions will be highlighted.
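The sinusoidal-fit step of the scale-separation approach described above can be sketched in a few lines. The function below is our own illustrative simplification, not the actual JUWAVE/Ern et al. (2018) code: it only performs the fit along a single latitude circle and omits the sliding time window and the set of wave periods. It least-squares fits a zonal mean plus zonal wavenumbers 1 to 7 to samples along a latitude circle:

```python
import numpy as np

def fit_zonal_waves(values, lons, max_wavenumber=7):
    """Least-squares fit of zonal waves 1..max_wavenumber plus a zonal mean.

    values : 1-D samples along a latitude circle (e.g. temperature)
    lons   : longitude of each sample, in degrees
    Returns the zonal mean and a dict {k: (amplitude, phase)} per wavenumber.
    """
    phi = np.deg2rad(np.asarray(lons))
    # Design matrix: constant column, then cos(k*phi), sin(k*phi) for k = 1..K
    cols = [np.ones_like(phi)]
    for k in range(1, max_wavenumber + 1):
        cols.append(np.cos(k * phi))
        cols.append(np.sin(k * phi))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values), rcond=None)
    mean = coeffs[0]
    waves = {}
    for k in range(1, max_wavenumber + 1):
        a, b = coeffs[2 * k - 1], coeffs[2 * k]       # cos and sin coefficients
        waves[k] = (np.hypot(a, b), np.arctan2(b, a))  # amplitude, phase
    return mean, waves
```

With regular longitude sampling around the full circle, the fitted modes are close to orthogonal, so a pure wavenumber-2 input is recovered only in the k=2 coefficients.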
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Driving The Mid-Latitude Ionosphere from Below: Observations Made Using the International LOFAR Telescope

Authors: Alan Wood, Gareth Dorrian, Ben Boyde, Hannah Trigg, Richard Fallows, Maaijke Mevius
Affiliations: University Of Birmingham, Science and Technology Facilities Council (STFC) Rutherford Appleton Laboratory, ASTRON – The Netherlands Institute for Radio Astronomy
The Low Frequency Array (LOFAR) is one of the most advanced radio telescopes in the world. When radio waves from a distant astronomical source traverse the ionosphere, structures in this plasma affect the signal. The high temporal resolution available (~10 ms), the range of frequencies observed (10-90 MHz & 110-250 MHz) and the large number of receiving stations (currently 52 across Europe) mean that LOFAR can also observe the effects of the mid-latitude and sub-auroral ionosphere at an unprecedented level of detail. The Space Environment and Radio Engineering (SERENE) research group at the University of Birmingham (UoB) is leading a four-year research programme to determine the morphology, origin and evolution of plasma structures inferred from observations made using LOFAR, and to establish the implications of these observations for Earth system science. Multiple observational case studies have been undertaken. These show substructure within a sporadic-E layer (Wood et al., 2024), substructure within a Medium Scale Travelling Ionospheric Disturbance (TID) (Dorrian et al., 2023), a Small Scale TID (Boyde et al., 2022) and symmetric quasi-periodic scintillations (Trigg et al., 2024). The small-scale sizes of many of these features (kilometres to tens of kilometres) imply a local source, either due to instability processes in the mid-latitude ionosphere or due to drivers from below. A methodology has been developed to determine the propagation direction, speed and amplitude of waves observed in the ionosphere (Boyde et al., 2024), and a climatology has been created using this method. The waves observed primarily propagate in the opposite direction to the prevailing wind, strongly suggesting that the structures observed are the ionospheric manifestation of upward-propagating Atmospheric Gravity Waves (Boyde et al., in preparation).
This indicates that the majority of the plasma structures observed in this climatology are driven by variability lower in the terrestrial atmosphere. This work is supported by the Leverhulme Trust under Research Project Grant RPG-2020-140.
References
Boyde, B. et al. (2024). Wavelet analysis of differential TEC measurements obtained using LOFAR. Radio Science, 59, doi:10.1029/2023RS007871
Boyde, B. et al. (2022). Lensing from small-scale travelling ionospheric disturbances observed using LOFAR. J. Space Weather Space Clim., 12, doi:10.1051/swsc/2022030
Dorrian, G. D. et al. (2023). LOFAR observations of substructure within a traveling ionospheric disturbance at mid-latitude. Space Weather, 21, doi:10.1029/2022SW003198
Trigg, H. et al. (2024). Observations of high definition symmetric quasi-periodic scintillations in the mid-latitude ionosphere with LOFAR. J. Geophys. Res., 2023JA032336, doi:10.1029/2023JA032336
Wood, A. G. et al. (2024). Quasi-stationary substructure within a sporadic E layer observed by the Low-Frequency Array (LOFAR). J. Space Weather Space Clim., 14, 27, doi:10.1051/swsc/2024024
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall K1)

Session: D.01.02 Technological Innovations for a Digital Twin of the Earth system - PART 2

The dream of a comprehensive Digital Twin of the Earth System is fuelled by constant advancements. This session dives into the cutting-edge technologies propelling Digital Twin of the Earth System development.

The session will highlight advancements in data acquisition, processing, modelling, and visualisation that enable high-fidelity simulations of Earth's complex systems. Emphasis will be placed on the integration of various technologies, including AI, machine learning, high-performance computing, and cloud platforms, to create an interactive and dynamic digital representation of our planet.
In this session, we invite contributions to discuss the following key topics:

- Next-Generation Earth Observation - We seek discussions on the latest advancements in acquiring satellite data, including new satellite technologies and sensors. Contributions are welcome on techniques for processing and analysing satellite data to enrich the Digital Twin Earth (DTE) with detailed and dynamic information. Case studies that demonstrate how these advancements are being applied in current projects are highly encouraged.

- High-Resolution Earth System Modeling - We invite detailed discussions on the development of next-generation climate models that simulate atmospheric, oceanic, and terrestrial processes with unprecedented accuracy. Contributions on techniques for integrating different Earth system components (e.g., atmosphere, hydrosphere, biosphere) into unified models for comprehensive simulations are sought. Innovations in achieving real-time or near-real-time simulation capabilities, enabling dynamic monitoring and decision-making, are also welcome.

- High-Performance Computing and Artificial Intelligence - We seek contributions on utilising high-performance computing (HPC) and cloud platforms to handle the large-scale data and computational demands of digital twins. Discussions on using AI and machine learning to refine model predictions, detect complex patterns, and automate data processing workflows are encouraged. Additionally, contributions on developing AI-based tools for forecasting environmental changes and extreme events, enhancing preparedness and response strategies, are invited.

- Big Data Management and Integration - We invite discussions on innovative data management techniques and strategies for managing the vast amounts of data generated by Earth system models and simulations. Contributions on techniques for ensuring seamless integration of data from diverse sources, including satellite EO, ground observations, and in-situ sensors, are welcome. Solutions for storing and accessing large datasets efficiently and securely are also sought.

- Emerging Technologies for enhancement of a Digital Twin of the Earth system - We seek contributions on leveraging cloud platforms to enhance the scalability and flexibility of the Digital Twin Earth. Discussions on processing data closer to its source using edge computing to improve response times and reduce bandwidth usage are invited. Contributions on developing interactive and intuitive visualisation tools to explore complex Earth system data are also encouraged.

- Visualisation and User Interaction - We invite discussions on developing tools and platforms for visualising complex Earth system data in intuitive and interactive formats. Contributions on applications of virtual and augmented reality (VR and AR) in exploring digital twin models, enhancing user engagement and understanding, are sought. Creating user-friendly interfaces and dashboards for accessing, analysing, and interacting with digital twin data is another key topic for this session.

- Challenges and Future Directions - We seek discussions on addressing the need for standard protocols and frameworks to ensure interoperability among different digital twin components. Contributions on ensuring the privacy and security of data used in and generated by digital twin systems, addressing ethical and regulatory concerns, are invited. Strategies for ensuring the sustainability and scalability of digital twin initiatives over the long term, including funding and resource allocation, are also welcome.

By exploring these topics, this session aims to highlight the technological innovations driving the development of the Digital Twin Earth and discuss the challenges and future directions in this rapidly evolving field.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall K1)

Presentation: Environmental Digital Twins Based on the interTwin DTE Blue-Print Architecture

Authors: Juraj Zvolensky, Michele Claus, Iacopo Ferrario, Andrea Manzi, Matteo Bunino, Maria Girone, Miguel Caballer, Sean Hoyal, Alexander Jacob
Affiliations: Eurac Research, EGI, CERN, UPV, EODC
The Horizon Europe interTwin project is developing a highly generic yet powerful Digital Twin Engine (DTE) to support interdisciplinary Digital Twins (DTs). Comprising thirty-one high-profile scientific partner institutions, the project brings together infrastructure providers, technology providers, and DT use cases from Climate Research and Environmental Monitoring, High Energy and Astroparticle Physics, and Radio Astronomy. This group of experts enables the co-design of the DTE Blueprint Architecture and the prototype platform, benefiting end users such as scientists and policymakers as well as DT developers. It achieves this by significantly simplifying the process of creating and managing complex Digital Twin workflows. Six use cases are co-designing and validating the interTwin DTE in Climate Research and Environmental Monitoring, ranging from early warning for floods and droughts to climate impact assessment. The DTs exploit one of the most promising applications of digital twins of the Earth: the simulation of user-defined what-if scenarios, by allowing the selection of a large number of different input datasets, models, model parameters, regions and periods of time. This talk will highlight in more technical detail the implementation of the drought early warning DT and the components it uses from the Digital Twin Engine. Motivated by the goal of contributing to climate change adaptation measures, and recognizing the importance of seasonal forecasts as a crucial tool for early warning systems and disaster preparedness, we are developing a hydrological seasonal forecasting digital twin for the Alpine region to tackle the critical challenge of drought risk management. The analysis of historical observations shows that the pattern and intensity of precipitation and temperature trends are changing over the European Alpine region (Brunner et al. 2023), with important consequences for the management of water resources in the Alpine and downstream basins.
The modelling workflow of the proposed forecasting system is based on the integration of physics-based models, artificial intelligence, climate forcings and satellite-based estimates. We believe that the complexity of such a workflow can effectively showcase the benefits of developing a digital twin. The project emphasizes reproducibility and portability, adhering to the principles of FAIR (Findable, Accessible, Interoperable, Reusable) and open science to ensure transparency, usability, and widespread applicability of the results. All software components are built as new open-source software (https://github.com/orgs/interTwin-eu/) or contribute to existing open-source projects. To achieve these goals, we adopt cutting-edge technologies widely recognized within the Earth Observation (EO) and environmental modelling communities. The openEO API, a standardized interface for processing large geospatial datasets, enables seamless integration of remote sensing data, while the SpatioTemporal Asset Catalog (STAC) API facilitates efficient data discovery and management. Together, these technologies form the backbone of our data pipeline, enabling scalable and efficient workflows. A distinguishing feature of our approach is the use of containerized workflows, implemented using the Common Workflow Language (CWL) (Amstutz et al. 2016). CWL provides a standardized, flexible framework for defining and executing computational workflows, ensuring consistency and repeatability across different computing environments. However, the integration of CWL with APIs like openEO and STAC in the Earth Observation domain presents unique challenges. Real-world examples of such integrations are sparse, requiring us to pioneer innovative solutions that bridge these technologies. This involves addressing complexities in workflow orchestration, data handling, and inter-API communication to build a robust and interoperable system.
The itwinai core module of interTwin allows seamless integration of data-driven modelling with our workflows. It enables the development and deployment of complex deep learning models in scalable HPC and cloud environments. The DT is deployable using standard TOSCA templates and utilizes High-Performance Computing (HPC) instances to accommodate the computational demands of large-scale simulations and data processing. This deployment ensures scalability, enabling the system to handle extensive datasets and support a diverse range of applications. By leveraging distributed computing resources, we aim to create a responsive and adaptive framework capable of addressing dynamic environmental challenges. The interTwin project represents a significant step forward in the application of Digital Twins to environmental monitoring and prediction. By integrating state-of-the-art technologies with open science principles, we aim to deliver a powerful tool for drought prediction that is not only accurate and reliable but also accessible to researchers and policymakers. Our work paves the way for broader adoption of Digital Twin technologies in the Earth Observation community, offering a replicable and scalable model for tackling global environmental issues. In doing so, we hope to contribute to the development of resilient and sustainable systems capable of mitigating the impacts of climate change and environmental degradation.
References
Amstutz, P. (Ed.), Crusoe, M. R. (Ed.), Tijanić, N. (Ed.), Chapman, B., Chilton, J., Heuer, M., Kartashov, A., Leehr, D., Ménager, H., Nedeljkovich, M., Scales, M., Soiland-Reyes, S., & Stojanovic, L. (2016). Common Workflow Language, v1.0. figshare. https://doi.org/10.6084/m9.figshare.3115156.v2
Brunner, M. I., Götte, J., Schlemper, C., & van Loon, A. F. (2023). Hydrological Drought Generation Processes and Severity Are Changing in the Alps. Geophysical Research Letters, 50(2). https://doi.org/10.1029/2022GL101776
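As a concrete, deliberately minimal illustration of the STAC-based data discovery mentioned above, the snippet below builds a standard STAC API `POST /search` request using only the Python standard library. The endpoint URL and collection name are hypothetical placeholders (not the project's actual services), and real pipelines would typically use a dedicated client such as pystac-client rather than constructing HTTP requests by hand:

```python
import json
from urllib.request import Request

def build_stac_search(api_url, collection, bbox, start, end):
    """Build (but do not send) a POST /search request for a STAC API.

    bbox is [west, south, east, north] in degrees; start/end are RFC 3339
    timestamps joined into a STAC datetime interval.
    """
    body = {
        "collections": [collection],
        "bbox": bbox,
        "datetime": f"{start}/{end}",
        "limit": 100,
    }
    return Request(
        url=f"{api_url.rstrip('/')}/search",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical endpoint and collection, e.g. Sentinel-2 over an Alpine box
req = build_stac_search(
    "https://stac.example.org",
    "sentinel-2-l2a",
    [10.3, 46.2, 12.5, 47.1],
    "2023-06-01T00:00:00Z",
    "2023-08-31T23:59:59Z",
)
```

The request body follows the STAC API Item Search specification (`collections`, `bbox`, `datetime`, `limit`); sending it and paging through the returned items is left to the client library.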

Monday 23 June 16:15 - 17:45 (Hall K1)

Presentation: Integrating Apache Airflow and UNICORE: Executing Hybrid Workflows on Supercomputers and Cloud-Resources

Authors: Christian Böttcher, Jens Henrik Göbbert, Jedrzej Rybicki, Bernd Schuller, Prof. Gabriele Cavallaro
Affiliations: Forschungszentrum Jülich, University of Iceland
The advent of current European petaflop and upcoming exaflop supercomputers offers unprecedented opportunities for the Earth Observation and Earth System Modeling communities to address increasingly complex environmental challenges. Tackling issues like climate change and ecosystem dynamics demands the integration and processing of vast, diverse datasets using scientific workflows that combine traditional solvers, simulators, and innovative AI and data analytics methods capable of scaling on High-Performance Computing (HPC) systems. This necessitates a collaborative, interdisciplinary approach. To leverage these large-scale computational resources effectively, institutions and research communities are forming federations—unions of organizations that facilitate collaboration through network connectivity, identity management, and code and data mobility. In the context of federating computation and data resources, the main challenges are related to the management of users, the integration of independent computing systems, and the deployment and orchestration of software/services such as scientific workflows. In particular, two of the biggest hurdles of executing workflows across federated compute and storage resources are access management and portability of the (often preexisting) applications for each execution step. While most HPC centers or cloud providers offer similar ways of access, they often require different credentials to be used, and managing different identities for a team of workflow developers and users across different locations is challenging. At the Jülich Supercomputing Centre (Forschungszentrum Jülich), we utilize and support the Uniform Interface to Computing Resources (UNICORE), an HPC middleware that provides tools and services for building federated systems, making HPC and data resources accessible in a seamless and secure way for a wide variety of applications in intranets and the internet. 
UNICORE provides a convenient and feature-rich interface to various distributed computing and storage resources, as well as allowing a common authentication mechanism (such as OAuth) to be configured. At the Jülich Supercomputing Centre, UNICORE enables users to run computational jobs, orchestrate workflows, and operate JupyterLabs on HPC systems. In this abstract, we highlight the advantages and work in progress for the integration of UNICORE and Apache Airflow for executing hybrid workflows on HPC systems and cloud resources. While UNICORE already supports federated workflow execution by itself, it still requires potential users to modify their workflows to work with UNICORE and the underlying HPC systems. To address this issue, Apache Airflow can be used as an orchestration service. Apache Airflow manages the execution of its workflows as DAGs (Directed Acyclic Graphs), which are written in Python code. It also provides premade operators for some types of execution steps, such as running scripts, container images, or connecting to other cloud resources. Airflow allows for different, and even custom, execution backends, ranging from local sequential execution to running all steps in their own containers across various cloud environments. As both products are mature, actively developed, and well-maintained open-source projects, most of the additional development has focused on integrating UNICORE as a possible execution backend for Apache Airflow. A first working prototype already supports executing any Airflow DAG on any UNICORE-connected computing system, only requiring a one-time setup of an appropriate Airflow environment. The next steps include establishing an automated environment setup, preparing extension points to tie in other kinds of HPC and resource access (e.g., HPC centers not supporting UNICORE), and adding better usage accounting management for cases of multiple teams sharing Airflow resources.
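To make the orchestration idea concrete, here is a toy DAG runner in plain Python. It is not the actual Airflow or UNICORE API: tasks declare upstream dependencies, and an injectable `executor` callable stands in for the pluggable execution backend (local, cloud, or a UNICORE-connected HPC system) described above.

```python
def run_dag(tasks, deps, executor):
    """Run tasks in dependency order (assumes an acyclic graph).

    tasks: name -> zero-argument callable
    deps: name -> list of upstream task names
    executor: function that actually invokes a task callable,
              standing in for a swappable execution backend
    """
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for up in deps.get(name, []):
            run(up)  # run upstream tasks first
        executor(tasks[name])
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "extract": lambda: log.append("extract"),
    "process": lambda: log.append("process"),
    "publish": lambda: log.append("publish"),
}
deps = {"process": ["extract"], "publish": ["process"]}
# Swapping this lambda for a remote-submission function is the essence
# of changing the execution backend without touching the DAG itself.
order = run_dag(tasks, deps, executor=lambda fn: fn())
```

In Airflow terms, the DAG definition stays the same while the executor configuration decides where each step actually runs, which is exactly what a UNICORE backend would plug into.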

Monday 23 June 16:15 - 17:45 (Hall K1)

Presentation: Prometheus: AI-Powered Framework for Forecasting Environmental and Urban Transformations Using Optical Satellite Imagery

Authors: Andrea Cavallini, Salvatore Prochilo, Angie Catalina Carrillo Chappe, Leticia Pérez Sienes
Affiliations: Starion Group
Climate change continues to shape our world with far-reaching consequences for ecosystems, infrastructure, and communities. Interconnected phenomena such as rising sea levels, melting glaciers, and rapid urban expansion underscore the urgency of addressing environmental challenges that directly and indirectly impact human life. For instance, sea-level rise, driven by glacier melt and thermal expansion of seawater, is a pressing global issue, exemplified by land loss on the Greek island of Keros, the submersion risks faced by the Maldives, frequent high-tide flooding in Norfolk, Virginia, and the relocation of entire villages in Fiji. These scenarios illustrate the need for innovative solutions that can anticipate these changes and visually represent their impacts to foster better decision-making and public awareness. Traditional forecasting systems, while valuable, predominantly deliver numerical or symbolic predictions focused on single events, such as weather patterns, flood or fire risks. While these approaches provide essential insights, they lack the ability to visually communicate how changes develop over time, leaving stakeholders without a clear understanding of the potential consequences of environmental shifts. Prometheus aims to bridge this gap by offering time-lapse-like visualizations of dynamic environmental transformations. By allowing users to see how areas might evolve in the coming months, years or even decades, this tool transforms abstract predictions into vivid, actionable insights. Imagine witnessing islands gradually disappear beneath rising seas or observing glaciers retreat, offering a clear and intuitive grasp of future scenarios. Prometheus leverages Earth Observation (EO) data and advanced artificial intelligence to provide flexible, scalable, and intuitive solutions for environmental forecasting. 
Its hybrid architecture combines Convolutional Neural Networks (CNNs), which capture local features like urban boundaries and coastline changes, with Transformer networks that employ self-attention mechanisms to model global dependencies and complex contextual relationships. This powerful integration ensures that Prometheus delivers detailed spatial analysis and comprehensive temporal modeling, creating visually coherent and dynamic representations of environmental changes over time. Prometheus not only generates accurate predictions but also accepts an input reference image, along with auxiliary data such as meteorological and topographic information, to produce one or more future time steps. These generated outputs can be cyclically used as input for further predictions, effectively extending the forecasting time window and enabling long-term dynamic modeling. This iterative capability enhances the model’s applicability to scenarios requiring extended forecasts, such as urban planning spanning decades or monitoring gradual environmental changes. Prometheus also fits seamlessly into the concept of a Digital Twin for Earth that integrates real-world data with simulations to replicate environmental processes in a virtual setting. As a predictive tool, Prometheus can contribute dynamic, time-lapse-like visualizations to Digital Twin platforms, enhancing their ability to model and forecast environmental scenarios. This synergy allows Digital Twins to incorporate forward-looking insights into urban planning, climate resilience strategies, and resource management, making it an indispensable component of such initiatives. By aligning with the objectives of Digital Twins, Prometheus not only reinforces their accuracy and functionality but also amplifies their capacity to provide actionable insights to policymakers, scientists, and other stakeholders.
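The iterative forecasting capability can be sketched in a few lines. The `toy_model` below is an invented stand-in for the CNN-Transformer network; only the feedback pattern, in which each prediction becomes the next input, reflects the approach described above.

```python
def rollout(model, state, aux, steps):
    """Autoregressive rollout: feed each prediction back in as the next input."""
    frames = []
    for t in range(steps):
        state = model(state, aux[t])  # predicted "image" becomes next input
        frames.append(state)
    return frames

# Stand-in "model" (hypothetical): shifts every pixel by an auxiliary forcing
def toy_model(image, forcing):
    return [pixel + forcing for pixel in image]

# Two-pixel "image", three forecast steps driven by auxiliary forcings
frames = rollout(toy_model, state=[0.0, 1.0], aux=[0.5, 0.5, 1.0], steps=3)
```

The same loop structure underlies any horizon-extension scheme, with the caveat that prediction errors compound across steps, which is why long rollouts need careful validation.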
To validate its forecasting capabilities, Prometheus was rigorously tested using a vast historical dataset comprising years of EO imagery from Copernicus and earlier missions over selected test sites. This extensive multi-temporal dataset allowed for a robust evaluation of the model's ability to generate valuable predictions. Each forecast was compared against actual data within the same time series, enabling the model’s output to be assessed for accuracy and consistency. Metrics such as Mean Squared Error (MSE), Structural Similarity Index (SSIM), and Peak Signal-to-Noise Ratio (PSNR) were employed to evaluate how closely the predicted outputs matched real-world data. MSE quantified the average difference between the predicted and actual pixel values, PSNR measured the quality of the reconstructed images in relation to the originals, and SSIM assessed the perceived structural similarity. Temporal consistency metrics further evaluated the coherence of sequential predictions, ensuring that the dynamic changes depicted by the model aligned with actual observed patterns. This thorough validation demonstrated the reliability of Prometheus in providing high-quality, actionable forecasts of environmental transformations. Prometheus is agnostic in its application but tailored to demonstrate its capabilities through three critical use cases: sea-level rise, glacier melting, and urban growth. In the case of sea-level rise, the model generates sequential visualizations of coastal inundation, highlighting areas at risk and providing actionable insights for the development of adaptive infrastructure and policies. For glacier melting, it forecasts retreat patterns and their implications for water resources and rising sea levels, enabling more effective conservation strategies. 
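The pixel-level metrics named above, MSE and PSNR, are straightforward to compute; a minimal pure-Python version is shown below (SSIM, which involves local windowed statistics, is omitted for brevity). The sample pixel values are invented.

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / err)

# Hypothetical predicted vs. observed pixel values (8-bit range)
pred = [100, 110, 120, 130]
ref  = [102, 108, 121, 129]
error = mse(pred, ref)        # average squared pixel difference
quality = psnr(pred, ref)     # reconstruction quality in dB
```

In practice these would run over full image arrays rather than short lists, but the formulas are identical.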
In urban growth scenarios, Prometheus visualizes the expansion of cities and its impacts on natural habitats and infrastructure, guiding sustainable urban planning efforts to balance development and environmental preservation. Beyond its forecasting capabilities, Prometheus also serves as a powerful communication tool, visually demonstrating the potential consequences of inaction. By translating complex data into accessible visual narratives, it enables stakeholders to engage broader audiences, such as policymakers, educators, and the public. This unique ability to depict dynamic environmental changes fosters awareness and urgency, inspiring proactive measures to mitigate the effects of climate change. Stakeholders can use these visualizations not only for decision-making but also as compelling evidence to drive policy changes and community actions. Prometheus represents a significant advancement in environmental forecasting, shifting the paradigm from reactive monitoring to proactive planning. Its state-of-the-art AI framework, combined with the richness of EO data, allows it to provide time-lapse-like visualizations and actionable insights that empower stakeholders to make informed decisions and implement effective strategies. By integrating its innovative features, iterative forecasting capabilities, and predictive insights into digital twin platforms, Prometheus enhances understanding, preparedness, and urgency in addressing the challenges of climate change. Through its ability to generate dynamic, visually enriched forecasts and extend predictive horizons, Prometheus offers a transformative approach to climate resilience, equipping stakeholders with the tools needed to shape a more sustainable and resilient future.

Monday 23 June 16:15 - 17:45 (Hall K1)

Presentation: Empowering Coastal Zones digital twins with the Digital Twin Factory

Authors: Pierre-marie Brunet, Vincent Lonjou, Vincent Martin, Thibault
Affiliations: Cnes
Digital twins are powerful tools for scientists and decision-makers to understand current system dynamics ("what now"), explore potential future trajectories ("what next"), and evaluate the impacts of mitigation strategies ("what if") to reduce risks to humans, infrastructure and ecosystems. At the local scale, digital twins leverage high-resolution data to capture complex interactions between natural and anthropogenic phenomena unique to specific study sites (e.g., cities, watersheds, forestry sectors). The availability of very high-precision spatial data (e.g., optical, 3D, thermal, climate), merged with in situ data, now enables detailed analysis and unlocks new opportunities across diverse locations on Earth. Consequently, interest in developing local-scale, high-resolution digital twins is growing. However, their construction often demands significant user effort, diverting focus from the scientific insights these tools are intended to deliver. The Digital Twin Factory (DTF) project, coordinated by the French Space Agency (CNES), addresses these challenges by providing a user-centric framework for building, deploying, and operating high-resolution (HR) digital twins. Designed as a Digital Twin as a Service (DTaaS) API, DTF abstracts underlying infrastructure, granting seamless access to both high-performance computing (HPC) resources and commercial cloud platforms. The framework includes modular tools for data access (catalog harvesters), processing (preprocessing and ingestion pipelines), visualization (3D, plotting, and dashboards), and interaction (what-if scenario design and execution). DTF empowers users to address domain-specific scientific challenges by leveraging the digital twin's capacity to explore multiple future scenarios. This contribution introduces the multi-layered architecture of the Digital Twin Factory (DTF), detailing its components and user services.
We demonstrate its application in constructing a Coastal Zones Digital Twin (CZ-DT), a critical tool for addressing challenges in coastal regions. This work is carried out through a collaboration between CNES, NASA and NOAA under the umbrella of the Space Climate Observatory (SCO). Currently, over two billion people live in or near coastal zones at the ocean-land interface, with nearly one billion more residing in adjacent low-lying coastal areas. These populations face significant risks from increasingly severe storms and long-term sea level rise, leading to coastal erosion, water pollution, urban and agricultural inundation, and ecosystem degradation. Advanced understanding and prediction of coastal systems represent a pivotal application of digital twin technologies. Satellite and in situ data play essential roles in a CZ-DT, providing timely, spatially relevant inputs for models and validating digital twin performance. We highlight the process of creating the CZ-DT using DTF modules, share key insights, and outline how lessons learned will inform the development of future digital twins.

Monday 23 June 16:15 - 17:45 (Hall K1)

Presentation: The Wild West of Energy Profiling on EuroHPC Systems

Authors: Okke Van Eck, Dr Mario Acosta, Xavier Yepes Arbós, Razvan Aguridan, Sergi Palomas, Manuel Giminez de Castro
Affiliations: Barcelona Supercomputing Center
Climate models are known to be huge energy consumers, using up tons of HPC resources as reported by CMIP6. This consumption brings a hefty electricity bill, which can come with big CO2 emissions. It almost becomes a paradoxical question: are the benefits of climate modeling worth the emissions? Regardless of the answer, we can all agree that energy consumption is a topic that needs to be seriously addressed. However, when researching energy profiling tools, you will quickly discover that this is an extremely new topic for HPC centers. Moreover, measuring the energy consumption of Earth System Models is not as simple as wrapping your application with a profiler that outputs the consumption at a given interval. Instead, there are different architectural layers at which the energy can be measured, each coming with its benefits and shortcomings. BSC has set out to review the state-of-the-art energy profiling tools to understand what methods are most applicable for the Destination Earth projects. For this review, we have gathered hands-on experience on the EuroHPC pre-exascale machines: MareNostrum 5, LUMI, and Leonardo. We can summarize the possible methods for collecting information on energy consumption in three layers: runtime frameworks, application interfaces, and low-level hardware counters. Starting with the runtime frameworks, we have SLURM, EAR, and Meric, which all offer a single interface for the different EuroHPC hardware architectures. While this may seem perfect for hardware-agnostic profiling, in practice we found their deployability and usability quite lacking. Moreover, in the case of user-level installations, you may even lose accuracy, making it difficult to compare the reported energy consumption to HPC machines that come with pre-installed versions. Secondly, we have application interfaces, which depend on the low-level hardware counters and tools. 
There are three types: out-of-band tools, model-specific registers (MSRs), and general hardware counters. Out-of-band tools measure consumption through external hardware, which is not present for any of the EuroHPC systems, making this option unusable for Destination Earth. MSRs, on the other hand, are available everywhere and are essential for the so-called Running Average Power Limit (RAPL) approach for CPU energy measurement. RAPL is considered the industry standard nowadays, as it reports energy consumption at very fine granularity and a high sampling rate. However, utilizing MSRs requires administrator privileges on all of the EuroHPC systems, rendering this option futile as well. This leaves us with general hardware counters, often targeted through the Performance Application Programming Interface (PAPI). These general and PAPI counters can be profiled through application interfaces like LIKWID, which is a state-of-the-art profiling tool suite. While we were successful in profiling energy consumption with LIKWID on MareNostrum 5 and Leonardo, we were unable to get it to work on LUMI due to the custom PM_Counters of the Cray environment. The PM_Counters seem only to work when manually instrumenting your models, which adds unwanted complexity when your model needs to be operational on multiple EuroHPC systems. There are many tools on the market, but their portability is the main bottleneck for integrating them into Destination Earth projects. The industry-standard RAPL and PAPI interfaces do not seem applicable to all EuroHPC machines due to missing administrator rights and incompatibilities with the Cray environment. Hence, hardware-agnostic frameworks might seem the better option. However, we know that Meric will not work on LUMI due to the lack of support for AMD GPUs, and we have not yet been able to fully test EAR on LUMI either.
This leaves us with SLURM’s energy accounting plugin, which is unfortunately disabled on Leonardo and MareNostrum 5’s ACC partition. Concluding, we do not seem to have a unified solution that we can use to collect energy consumption metrics on the EuroHPC systems. The user will always need to switch between different methods when using multiple EuroHPC systems and hope that the profilers have similar accuracies. It is a shame knowing the importance of energy consumption for users and vendors, especially when comparing the performance of multiple EuroHPC systems. Maybe one day HPC centers will open up MSR access to users, enabling us to use the state-of-the-art RAPL interface. Until then, it remains the Wild West for energy profiling tools.
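For readers who do have MSR or powercap access, the sketch below shows roughly how RAPL energy readings are turned into average power on Linux. The sysfs path is the standard Intel powercap location, but the 2^32 wraparound value is an assumption (the true range is published in a sibling `max_energy_range_uj` file), and reading the counter typically requires elevated permissions, as discussed above.

```python
RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 counter

def read_energy_uj(path=RAPL_PATH):
    """Read the cumulative RAPL energy counter (microjoules).

    Will raise PermissionError on systems where the counter is restricted.
    """
    with open(path) as f:
        return int(f.read().strip())

def average_power_watts(e0_uj, e1_uj, seconds, max_uj=2**32):
    """Average power between two counter samples, handling counter wraparound.

    max_uj is an assumed wrap value; real code should read
    max_energy_range_uj from sysfs instead.
    """
    delta = e1_uj - e0_uj
    if delta < 0:            # counter wrapped around between the two reads
        delta += max_uj
    return delta / 1e6 / seconds

# Example with synthetic samples: 50 J consumed over 2 s -> 25 W
watts = average_power_watts(1_000_000, 51_000_000, seconds=2.0)
```

A real measurement would call `read_energy_uj()` twice around the region of interest; the synthetic numbers here only exercise the arithmetic.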

Monday 23 June 16:15 - 17:45 (Hall K1)

Presentation: Implementation of a regional scale, accurate hydro-meteo-climate Digital Twin: the IRIDE Cyberitaly experience

Authors: GABRIELE MURCHIO, Chiara Perna, Paolo Macciacchera, Simone Riccardi, Claudio De Blasio, Nicola Corsini, Domenico Grandoni
Affiliations: e-GEOS
In the framework of the PNRR IRIDE Programme, the CyberItaly project aims to develop a solution that provides national institutional users with "Digital Twin" applications in a "Geo-spatial" perspective, with integrated data, models and simulations based on information on current or probable extreme weather events. Using high-precision Earth Observation (EO) data, non-EO datasets, models, artificial intelligence and data analytics, the platform offers a dynamic virtual representation of a physical system to evaluate its behaviour under different boundary conditions. The e-GEOS team, together with a consortium of companies, is developing a Digital Twin application capable of modelling, simulating and visualizing the potential impact of floods on a specific geographical area. The key innovative concept of the IRIDE CyberItaly Digital Twin application developed by e-GEOS lies in the synergistic use of meteorological models, hydrological models, hydraulic models, elevation layers providing an accurate representation of the terrain and infrastructures, GIS data (land use/land cover, building footprints), hydro-meteorological data (river flow, precipitation levels, soil moisture), georeferenced data from social media, satellite flood footprints and a 2D/3D viewing platform for a comprehensive view of potential future floods or to provide updated flood risk information. The above-mentioned models are integrated into a single Flood Forecasting Tool (FFT), which represents the new flood mapping service offered by e-GEOS to overcome the critical limitations of traditional flood mapping services based on satellite observations. Within the FFT, the hydrological model developed by the various regional and national competence centers provides the time series of river flows, used as input to the hydraulic model, which produces the extent, depth and speed of floods that occurred during the event of interest.
Earth Observation provides a key contribution in the observation of current and historical flood events to calibrate and validate hydraulic models. The hydraulic model has as its main input the hydro-ready DTM (Digital Terrain Model) for the physical modeling of phenomena related to hydrogeological risk. Based on the results generated by the hydraulic model, flood hazard and damage assessment maps are created. A further point of innovation within the IRIDE CyberItaly DT Hydro-Meteo application is the possibility of implementing "what-if" scenarios, supporting end users in the decision-making process. For this purpose, four scenarios can be generated: i) Inundation Mapper (scenario based on the probabilistic concept of the Return Period (RP)); ii) Simulation from weather forecasts (flood event reproduced using data derived from forecast models); iii) Simulated flood boundaries from water level and flow/runoff model estimates; iv) Flood preparedness (setting the water level to visualize the resulting changes in flood extent over urban areas). In conclusion, the IRIDE CyberItaly Digital Twin application for flood risk assessment can provide valuable information on the potential impact of extreme weather events on a specific geographical area. By simulating and visualizing potential flood risk, decision makers can take proactive measures to mitigate the impact of these events.
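The Inundation Mapper scenario rests on the return-period concept. One textbook way (not necessarily the method used in the FFT) to convert a return period into a design discharge is a fitted Gumbel extreme-value distribution; the location and scale parameters below are purely hypothetical.

```python
import math

def gumbel_discharge(return_period, loc, scale):
    """Design discharge for a T-year event from a fitted Gumbel distribution.

    Q_T = loc - scale * ln(-ln(1 - 1/T))
    """
    p_exceed = 1.0 / return_period          # annual exceedance probability
    return loc - scale * math.log(-math.log(1.0 - p_exceed))

# Hypothetical fitted parameters for a gauged river reach (m^3/s)
q10 = gumbel_discharge(10, loc=850.0, scale=220.0)    # 10-year event
q100 = gumbel_discharge(100, loc=850.0, scale=220.0)  # 100-year event
```

The resulting discharge would then drive the hydraulic model to produce the flood extent for that return period.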

Monday 23 June 16:15 - 17:45 (Hall F2)

Session: A.02.03 EO for Agriculture Under Pressure - PART 2

The human impact on the biosphere is steadily increasing. One of the main human activities contributing to this is agriculture. Agricultural crops, managed grasslands and livestock are all part of the biosphere and our understanding of their dynamics and their impacts on other parts of the biosphere, as well as on the wider environment and on the climate is insufficient.
On the other hand, today’s Agriculture is Under Pressure to produce more food in order to meet the needs of a growing population with changing diets, and this despite a changing climate with more extreme weather. It is required to make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and its negative impact on the environment, and to deliver accessible, affordable and healthy food.
Proposals are welcome from activities aiming at increasing our understanding of agriculture dynamics and at developing and implementing solutions to the above-mentioned challenges of agriculture, or supporting the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross site research and benchmarking studies, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM) are welcome.

The session will hence cover topics such as:
- Impact on climate and environment
- Crop stressors and climate adaptation
- Food security and Sustainable Agricultural Systems
- New technologies and infrastructure

Monday 23 June 16:15 - 17:45 (Hall F2)

Presentation: Assessing the impact of floods on food security in low-income countries using Satellite imagery

Authors: Jeremy Eudaric, Frederick S Policelli, Thomas RH Holmes, Andrés Camero, Rasmus Fensholt, Beth Tellman, Heidi Kreibich, Prof. Xiao Xiang Zhu
Affiliations: Chair of Data Science in Earth Observation, Technical University of Munich, German Aerospace Center (DLR), Hydrological Sciences Lab, NASA Goddard Space Flight Center, Department of Geosciences and Natural Resource Management, University of Copenhagen, School of Geography, Development and Environment, University of Arizona, GFZ German Research Centre for Geosciences, Munich Center for Machine Learning
The United Nations Sustainable Development Goal 2 (SDG 2) emphasizes the urgent need to reverse land and crop degradation and create a world free of hunger by 2030. The Planetary Boundaries Framework highlights that we are operating in the Anthropocene, with humans putting increasing pressure on the Earth System. While climate conditions remain within safe boundaries, they are approaching critical thresholds. The Anthropocene can increase flood risks and food insecurity, potentially exacerbating inequality. The proportion of people exposed to floods is projected to rise as climate change disrupts the water cycle. This cascading effect significantly stresses agricultural systems and threatens food security in many regions. In our analysis, we examine the impact of floods on crop production losses in low-income countries from 2000 to 2018. We utilize geospatial-temporal flood data (derived from satellite imagery) alongside imagery from the Landsat series to assess vegetation stress and crop production losses using a time series model. The Normalized Difference Vegetation Index (NDVI) and the Tasseled Cap Greenness (TCG) index are used to assess flood damage to vegetation in crop zones through comparative analysis of pre- and post-flood images. We evaluate total agricultural production losses for each low-income country. Additionally, we conduct a vulnerability assessment based on each country's Gross Domestic Product (GDP) data, focusing on the share of agriculture within GDP to evaluate the economic impact of crop production losses.
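The pre-/post-flood NDVI comparison can be sketched as follows; the reflectance values and the 0.2 drop threshold are illustrative, not the study's calibrated values.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def flood_damage_mask(pre, post, drop_threshold=0.2):
    """Flag pixels whose NDVI dropped by more than the threshold.

    pre, post: lists of (nir, red) reflectance pairs for the same pixels,
    acquired before and after the flood event.
    """
    flags = []
    for (nir0, red0), (nir1, red1) in zip(pre, post):
        flags.append(ndvi(nir0, red0) - ndvi(nir1, red1) > drop_threshold)
    return flags

# Hypothetical reflectances: healthy vegetation before the flood,
# with the first pixel heavily affected afterwards
pre  = [(0.5, 0.1), (0.4, 0.2)]
post = [(0.2, 0.2), (0.38, 0.2)]
mask = flood_damage_mask(pre, post)
```

The same comparison applies per-band to TCG; only the index formula changes.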

Monday 23 June 16:15 - 17:45 (Hall F2)

Presentation: Performance assessment for different multi-sensor strategies: JECAM Use Case in Mali's Cotton Belt

Authors: Abdoulaye GUINDO, Sophie Bontemps, Souleymane Traoré, Pr Pierre Defourny
Affiliations: UCLouvain-Geomatics, Université des Sciences Sociales et de Gestion de Bamako (USSGB)
In Mali, agriculture employs over 85% of the population and contributes approximately 15% to the national Gross Domestic Product (GDP). Agricultural statistical services generate data on cultivated crop types, areas, and yields. However, national official statistics, often aggregated at country or regional levels, lack transparency in their processes and accuracy. Cotton statistics from the Malian Textile Development Company (CMDT) provide a more robust and relevant baseline for agricultural production estimates. Given the economic importance of cotton as a major cash crop and its monopolistic commercial position, accurate estimates of its cultivated area are essential for effective planning and policy-making. This study aims to benchmark EO-derived cotton acreage statistics with regard to CMDT’s figures. Satellite imagery from Sentinel-2 (High Resolution, 10 meters) and Pléiades (Very High Resolution, 50 cm) is combined in four strategies while evaluating the cost-effectiveness of combining EO data with in situ observations. The methodology consists of five main steps: (1) in situ data collection for crop types and areas using two sampling strategies, points from windshield surveys and polygons in segments selected from Area Sampling Frames (ASF), to calibrate and validate the classification models; (2) crop classification of HR and VHR imagery using the collected in situ data; (3) multi-scale spatial estimation of cotton areas by comparing results derived from HR and VHR data; (4) construction of a regression estimator to adjust mapped cotton proportions using in situ data from the statistical ASF survey; (5) cost-effectiveness analysis of cotton area estimates derived from HR, VHR, HR-VHR, and in situ approaches. Initial results from the EO4AFRICA project, funded by ESA in 2022, have produced the first cotton area map for the Dioïla district.
This map, produced using more than 4,495 labelled crop polygons through the ASF approach, showed promising accuracy metrics. Building on these results, a new field campaign conducted in 2024 under the FAO-EOSTAT programme collected around 10,000 geolocated crop points (windshield) and labelled 4,332 polygons (ASF). These data are used to produce precise cotton maps, consolidating the achievements of the EO4AFRICA project. The multi-sensor strategies go one step further in estimating cotton acreage statistics at varying aggregation levels (cercles, sectors and communes). These results are compared and discussed with regard to CMDT estimates to demonstrate the effectiveness of various EO-based strategies in enhancing the accuracy, transparency, and cost-efficiency of agricultural statistics. They provide essential tools for better planning of agricultural policies and the sustainable development of Mali’s cotton sector.
Keywords: Multiscale observation, agriculture statistics, cotton area, cost-efficiency, Mali
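The regression estimator in step (4) follows a standard form from survey sampling: the in situ sample mean is adjusted by the OLS slope times the difference between the map-wide mean and the sample mean of the mapped proportions. The segment values below are invented; this is a sketch of the textbook estimator, not the project's exact implementation.

```python
def regression_estimator(y_sample, x_sample, x_pop_mean):
    """Survey regression estimator of a population mean.

    y_sample: in situ cotton proportions in the sampled segments
    x_sample: EO-mapped proportions for the same segments
    x_pop_mean: mapped proportion averaged over the whole area
    """
    n = len(y_sample)
    xm = sum(x_sample) / n
    ym = sum(y_sample) / n
    # OLS slope of y on x over the sample
    sxy = sum((x - xm) * (y - ym) for x, y in zip(x_sample, y_sample))
    sxx = sum((x - xm) ** 2 for x in x_sample)
    b = sxy / sxx
    return ym + b * (x_pop_mean - xm)

# Hypothetical segment data: the map slightly over-detects cotton
y = [0.30, 0.42, 0.55, 0.38]   # ground-observed proportions
x = [0.34, 0.46, 0.60, 0.41]   # mapped proportions
est = regression_estimator(y, x, x_pop_mean=0.48)
```

Multiplying the estimated proportion by the total area of the stratum then yields the adjusted cotton acreage.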

Monday 23 June 16:15 - 17:45 (Hall F2)

Presentation: Use of Advanced Earth Observation-Based Techniques to Monitor Crop Stress across Europe: EO4CerealStress

Authors: Dr. Zaib Un Nisa, Dr Booker Ogutu, Prof Victor Rodriguez Galiano, Dr. Roshanak Darvishzadeh, Prof Andy Nelson, Dr. Furkan Celik, Dr. Omid Ghorbanzadeh, Dr. Catherine Champagne, Dr. Aaron Berg, Dr. Espen Volden, Ewelina Dobrowolska, Luigi Pinto, Dr Jadu Dash
Affiliations: School of Geography and Environmental Science,University of Southampton, Department of Geography, University of Seville, Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Institute of Geomatics, University of Natural Resources and, Life Sciences (BOKU), Agriculture and Agri-Food, University of Guelph, European Space Agency - ESRIN, Serco S.p.A c/o ESRIN-ESA, Bonifiche Farm Educational s.r.l., Jolanda di Savoia (FE)
The EO4CerealStress project, supported by ESA, harnesses advanced satellite technology and Earth observation data to address critical environmental stressors impacting agriculture and food security. Focusing on cereal crops like wheat, rice, and maize, which are vulnerable to extreme events (drought, salinity, heatwaves, nutrient deficiencies, heavy metal contamination, and lodging), the project aims to improve early detection, optimize crop management, and enhance agricultural productivity across Europe. To this end, three pilot sites were selected, each addressing specific stressors: wheat lodging at Bonifiche Ferrarsi Farm (Italy), salinity, heat, nutrient deficiency and heavy metal contamination in rice fields managed by the Andalusia Rice Federation (Spain), and drought stress in maize fields in the Marchfeld region (Austria). At each site, a combination of in-situ and EO data (including UAV-VIS-NIR-SWIR, EnMAP, PRISMA, and Sentinel-2) was used to develop stressor-specific indicators from soil and crop datasets. Advanced machine learning and deep learning models were applied to predict the stressors. For soil salinity, Random Forest (RF) algorithms applied to EnMAP data delivered the most accurate results, outperforming both DESIS and PRISMA. For wheat lodging, both EnMAP and Sentinel-2 proved effective in predicting lodging severity, with Sentinel-2 slightly outperforming EnMAP in accuracy. For drought prediction in maize fields, Random Forest and a 1D Convolutional Neural Network (CNN1D) applied to Sentinel-2 vegetation indices, integrated with soil and local climate data, showed the best performance in predicting drought intensity. Under this task, a gap-filling algorithm was developed to ensure Sentinel-2 data continuity despite cloud cover and other obstructions. This comprehensive approach—combining EO data and advanced modelling—provides a powerful toolset for enhancing crop stress monitoring and management.
To further address remaining knowledge gaps, the project will deliver a 5-year plan aligned with future initiatives, such as the ESA Digital Twin Earth, ensuring that EO4CerealStress findings contribute to and integrate with ongoing research.

Monday 23 June 16:15 - 17:45 (Hall F2)

Presentation: PEOPLE4NewCAP: Monitoring the New CAP and Agriculture eco-schemes

Authors: Lucie Dekanova, Jan Mišurec, Luboš Kučera, Gerbert Roerink, Wouter Meijninger, Jose Martinez Heras, Josselin Stark, Petr Konata
Affiliations: Gisat s.r.o, Wageningen University & Research, Solenix, CleverFarm
Funded by the European Space Agency's FutureEO-1 Programme, this project supports the EU's New Common Agricultural Policy by enhancing eco-schemes under Pillar 1. It employs innovative Earth Observation methods to improve agri-environmental measures and align them with CAP objectives. National Paying Agencies from the Netherlands, Sweden, Spain (Castilla y León), and Czechia serve as Early Adopters. The workflows developed for these PAs address key activities: Crop Rotation, Land Lying Fallow, Extensive Use of Permanent Grassland, Winter Soil Cover and Catch Crops, and Geotagged Photos. Additionally, use cases for Managing Crop Water Demand and Estimation of Carbon Stock were developed specifically for farmers. The Crop Rotation use case employs crop classification based on optical (Sentinel-2) and radar (Sentinel-1) data to assess parcel compliance with crop rotation, diversification, and erosion-prone area regulations. This workflow has been tested across Czechia, analyzing more than 270,000 agricultural parcels and 23,000 farms from October 2020 to August 2023. The Land Lying Fallow use case focuses on fulfilling the rules for different types of fallow land (e.g., black or green) using Sentinel-1 and Sentinel-2 imagery by monitoring the occurrence of the management events in the defined resting period. NDVI and radar coherence data detect land management activities such as mowing or bare soil presence. The system verifies compliance with country-specific fallow land regulations. Despite challenges with parcel size and data quality, this method has proven effective across Czechia, Sweden, Spain, and the Netherlands. The Extensive Use of Permanent Grassland use case develops methods to detect non-managed grassland parcels, considering country-specific management rules. It analyzes NDVI profiles from Sentinel-2 and Landsat data (2020-2022) to identify management events. 
The Gross Productivity Index (GPI) serves as a supporting indicator to measure management intensity. The workflow systematically identifies potentially non-managed parcels by eliminating those with clear management signs. The identified parcels shall be considered for further inspection within the AMS. This method has been successfully tested in Czechia, the Netherlands, and Spain (Castilla y León). The Winter Soil Cover and Catch Crops use case monitors compliance with eco-scheme regulations through satellite data. It combines NDVI and radar coherence measurements, with radar coherence proving especially valuable during winter when cloud cover and wet soils affect optical data. While the integration improves detection accuracy, winter conditions remain challenging. The system aims to verify that 80% of agricultural parcels maintain crop coverage during key periods. Besides the identification of the catch/winter crops’ presence, the analysis investigates whether the growing season (defined by start-of-season and end-of-season parameters) meets the given eco-scheme compliance rules. The Geotagged Photos use case demonstrates the potential of AI techniques in the automated categorization of photos in terms of grassland management. Additionally, it provides recommendations on optimal ranges of quality variables (such as hue, blur, contrast, etc.) required for the successful recognition of geotagged photo content by AI techniques. The Managing Crop Water Demands use case primarily focuses on the identification of spatio-temporal anomalies in crop canopy water content, approximated by the Normalized Multi-band Drought Index (NMDI). These data are further used for calculating a cumulative drought impact indicator taking into account 1) amplitude, 2) temporal duration, and 3) temporal distance of the identified anomalies from the current date. Finally, the correlation of this indicator with the observed yield was assessed under this use case.
The second part of the use case provided a theoretical concept of using space-borne thermal infrared data for monitoring such variables as Crop Water Requirements (CWR) and Crop Water Productivity (CWP) at the parcel level. Both of the considered parts aimed to provide information on actual and accumulated water stress and the balance of water income and consumption, which could help in optimizing irrigation schemes. The system provides near real-time monitoring by comparing historical and current data, and calculates CWR and the Evaporative Stress Index (ESI) to inform water consumption and crop needs. Results show these indicators effectively monitor water stress and can help optimize irrigation and crop productivity. The Carbon Stock Estimation use case provides an alternative approach for calculating farm carbon footprint as a part of reporting greenhouse gas emissions. EO-based estimations of the actual Soil Organic Carbon (SOC) content are considered for calculating carbon sequestration coefficients instead of using fixed values reported by the National Greenhouse Gas Inventory. On top of that, it helps to bring forward the important topic of soil organic carbon and its loss on arable land, which is crucial for landowners, agronomists, and farm managers. The results of the use cases were presented to the Paying Agencies in two separate webinars to gather feedback, one of the key aspects of the project. Feedback was collected not only during the webinars but also through an online questionnaire.
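The cumulative drought impact indicator described above combines the amplitude, duration and recency of NMDI anomalies. A minimal sketch of such an indicator is given below; the exponential recency weighting and the `decay_days` parameter are illustrative assumptions, not the project's published formulation.

```python
import numpy as np

def cumulative_drought_impact(nmdi_anomalies, dates, today, decay_days=90.0):
    """Toy cumulative drought impact indicator.

    Combines, per detected NMDI anomaly, (1) its amplitude, (2) its
    temporal duration (implicitly, via one term per anomalous time
    step) and (3) its temporal distance from the current date.
    The exponential recency weighting is an illustrative choice.
    """
    impact = 0.0
    for amplitude, date in zip(nmdi_anomalies, dates):
        if amplitude <= 0.0:      # only drought-side anomalies contribute
            continue
        age_days = today - date   # distance from the current date
        impact += amplitude * np.exp(-age_days / decay_days)
    return impact
```

Under this weighting, a recent anomaly of a given amplitude contributes more to the indicator than an old one, which matches the stated intent of penalising temporal distance.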

Monday 23 June 16:15 - 17:45 (Hall F2)

Presentation: How to Capture Within-Field Heterogeneity Across Multi-Year Crop Rotation: High-Resolution Insights from Sentinel-2

Authors: Tom Kenda, Pr. Xavier Draye, Pr. Pierre Defourny
Affiliations: Earth and Life Institute, UCLouvain
Europe faces an urgent need to transition agroecosystems towards agroecological practices to ensure food security, enhance resilience to climate change, restore biodiversity and improve soil carbon stocks. Achieving this transition will require advances in phenotyping to identify optimal combinations of species, genotypes and management practices that can withstand projected climate scenarios. Expanding phenotyping from controlled microplot experiments to field-based applications is a critical step in this direction, as it enables the study of plant responses under realistic environmental conditions. This expansion requires broader access to phenotypic data, which can be facilitated by emerging technologies ranging from low-cost IoT sensors deployed in fields to extensive satellite imagery. However, a major challenge is to address the spatial and temporal variability of environmental conditions within fields, which has a significant impact on plant growth in farmers' fields. This study aims to characterise the within-field spatial and temporal variability of crop growth. To achieve this, it will (1) develop a method to represent the spatial heterogeneity of crop growth across different seasons and crop types, (2) establish criteria to distinguish heterogeneous fields from homogeneous ones, (3) identify some of the drivers of heterogeneity, and (4) investigate how these patterns evolve over time. After pre-processing the Sentinel-2 data with the open-source system Sen4Stat, the Leaf Area Index (LAI) was retrieved by inversion of a radiative transfer model, i.e. a locally tuned BV-Net algorithm. For each year of Sentinel-2 acquisition (2017-2024) and each parcel of the study area (Wallonia), the LAI time series was used to (1) infer the growing season of the main crop and (2) produce a spatial indicator of vegetation growth heterogeneity based on this in-season time series. An expert-based qualitative validation was carried out.
The resulting maps effectively capture the homogeneity or heterogeneity observed both in the LAI profiles and by the experts in the field, regardless of the crop. Despite significant variation in the seasonal maps between years, discernible patterns emerged, highlighting similarities in conditions or crops between years. An important finding is that certain fields, identified as spatially homogeneous on the basis of soil characteristics, exhibit heterogeneity in vegetation growth. Conversely, fields that appear to have strong spatial heterogeneity based on the soil map may either have a fair degree of homogeneity in vegetation growth, or a pattern of heterogeneity that differs from the soil map. A method has been developed to assign an annual and crop-specific heterogeneity score to each plot. The score is calculated by comparing all heterogeneity maps of a given region and ranking them by their standard deviation. These standard deviations were also used as target variables for a linear mixed effects model and a random forest regressor (RF) to try to understand the main drivers of spatial heterogeneity. The input features included the crop type, the year (accounting for weather conditions), and variables related to the topographic and soil characteristics of each parcel. Finally, to study the potential change in spatial patterns across time, we selected parcels of the study area with constant field boundaries across the 8 years. This resulted in a dataset of 45,376 parcels. Pairwise correlation coefficients were calculated between all pixels of the heterogeneity maps of each year, resulting in 1,270,528 correlation coefficients. For a given parcel, higher correlations were found for two years with similar heterogeneity patterns, while negative or low correlation coefficients were found for two years with opposite or different heterogeneity patterns.
Again, a linear mixed effects model was fitted with the correlation coefficient as the target variable and the parcel ID as a random effect. The analysis showed a significantly higher correlation for two years with the same crop (mean correlation of 0.50 ± 0.001) than for two years with different crops (0.39 ± 0.001). Among the observations for two years with the same crop, chicory, winter wheat and sugar beet appeared to be the crops with the highest average coefficient (0.59 ± 0.001, 0.56 ± 0.001 and 0.55 ± 0.001 respectively). On the other hand, fallow land had the lowest average coefficient (0.30 ± 0.024). The method of Blackmore (2000) has been adapted to delineate management zones. These zones correspond to (1) stable high-productivity zones, (2) stable low-productivity zones or (3) zones that are unstable across years. Blackmore classified these zones by combining annual yield maps. Here, the latter are replaced by the LAI-derived heterogeneity maps, using several maps of similar crops for a given field. This overcomes the lack of available, good-quality yield maps for most farmers. The versatility of the method extends its applicability to different agricultural settings and crops at very low (or no) cost to the end user. The resulting maps could guide dynamic agricultural practices towards greater sustainability, including irrigation, fertilisation and spraying management, and could also provide opportunities for new soil sampling designs and targeted in-field phenotyping. The latter will be further investigated in the European PHENET project.
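The pairwise comparison between years and the Blackmore-style zoning can be sketched roughly as follows; the per-pixel stability test and the `hi` threshold are hypothetical stand-ins for the authors' exact criteria:

```python
import numpy as np

def interannual_correlation(map_a, map_b):
    """Pearson correlation between all pixels of two within-field
    heterogeneity maps of the same parcel, one per year."""
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]

def classify_zones(maps, hi=0.66):
    """Blackmore-style management zones from several normalised
    heterogeneity maps of similar crops on one field:
    1 = stable high-productivity, 2 = stable low-productivity,
    3 = unstable across years. The per-pixel stability test
    (std below the field-average std) is an assumption."""
    stack = np.stack(maps)                     # (years, rows, cols)
    mean, std = stack.mean(axis=0), stack.std(axis=0)
    zones = np.full(mean.shape, 3)             # default: unstable
    stable = std < std.mean()
    zones[stable & (mean >= hi)] = 1
    zones[stable & (mean < hi)] = 2
    return zones
```

The same pairwise correlation would be computed for every year pair of a parcel, and zoning applied only across maps of the same or similar crops, as described above.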

Monday 23 June 16:15 - 17:45 (Hall F2)

Presentation: SEN4RUST: Sentinel for Wheat Rust Diseases – Earth Observation potential for the Ethiopian Wheat Rust Early Warning and Advisory System

Authors: Jacob W. Smith, Louise Lesne, Gerald Blasch, Tamás Mona, Sophie Bontemps, Yoseph Alemayehu, David Hodson, Christopher Gilligan, Pierre Defourny
Affiliations: University of Cambridge, Université catholique de Louvain, CIMMYT - Mexico, CIMMYT - Ethiopia, CIMMYT - Nepal
Globally, transboundary pests and diseases are an increasing threat to agricultural crop production and food security, particularly in Sub-Saharan Africa. Globalization, in terms of increased trade and travel, coupled with a changing climate, is resulting in increased frequency and severity of crop disease outbreaks. Wheat rusts in particular pose a major threat to food security and the livelihoods of millions of farmers in Ethiopia, with several devastating epidemics in recent history. To help prevent major disease outbreaks, early detection and timely control are essential. In response to the wheat rust problem in Ethiopia, a consortium of national and international partners (e.g., the Ethiopian Institute for Agricultural Research, the Agricultural Transformation Institute, the International Maize and Wheat Improvement Center (CIMMYT), the UK Met Office, and Cambridge University) has created one of the most advanced operational crop disease early warning and advisory systems in the world, the Ethiopian Wheat Rust Early Warning and Advisory System (EWAS). The EWAS integrates advanced meteorologically-driven spore dispersal and epidemiological models to forecast in-season disease risk, providing warnings of wheat rust outbreaks to smallholder farmers. In SEN4RUST, a project supported by the European Space Agency, we explored the added value of Earth Observation (EO) data for the existing EWAS. The purpose was to provide the first-stage methodology for developing an environmental monitoring tool able to locate the wheat areas, i.e. generating wheat maps for the main and irrigated seasons, and, over the pixels identified as wheat, to provide dynamic information about the wheat development cycle along the targeted season. These new products are based on EO data from the Copernicus Sentinel-2 mission, and their impact on wheat rust spread modelling was assessed by integrating them into the current EWAS.
As a first step, a binary wheat / non-wheat map with a spatial resolution of 10 meters was produced for the year 2021 over the Sen4Rust study area (47 Sentinel-2 tiles). Exploiting a Sentinel-2 time series and state-of-the-art preprocessing methods, the wheat map was obtained with an overall accuracy of 80.4% using a Random Forest algorithm trained on the Sentinel-2 band and spectral index values available every 7 days and on additional temporal features. Moreover, a “dynamic wheat map” was produced to provide information on wheat phenology, aiming at improving key information for the epidemiological models, namely informing about the wheat area susceptible to wheat rust infection at a given time of the year (susceptible host), instead of knowing only the wheat area during the main season (potential host). The second stage of the SEN4RUST project was to assess the impact of using EO-based estimates of wheat distribution instead of other data sources. Compared against the MapSPAM2017SSA wheat production area as a baseline, EO data has a noticeable impact on simulated disease risk. Non-local effects modelled by the aerial dispersal component of EWAS demonstrate the complexity of the host distribution’s influence on simulations. The integrated risk responded especially to the phenology map’s time-varying feature, which has so far been unresolved by other data sources. In particular, resolving weekly changes in host distribution refined estimates of season start and end dates. As well as refining advisory information reaching roughly 300,000 smallholder farmers, this offers promising potential to monitor and account for seasonal changes in cropping dates, which may become more prominent in the face of increasing climate extremes.
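As a rough illustration of the kind of per-pixel temporal features such a classifier can use (the exact feature set of the SEN4RUST Random Forest is not specified here, so the statistics below are assumptions):

```python
import numpy as np

def temporal_features(timeseries_7day):
    """Build a per-pixel feature vector from a band or spectral-index
    time series resampled to 7-day steps: the raw values plus simple
    temporal statistics summarising the growing cycle. A Random Forest
    classifier would then be trained on such vectors."""
    ts = np.asarray(timeseries_7day, dtype=float)
    auc = float(np.sum((ts[1:] + ts[:-1]) / 2.0))   # trapezoidal area
    stats = np.array([ts.min(), ts.max(), ts.mean(),
                      float(ts.argmax()),           # timing of the peak
                      auc])
    return np.concatenate([ts, stats])
```

Features like peak timing and area under the seasonal curve are the kind of information a dynamic, phenology-aware wheat map can exploit beyond a single-date classification.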

Monday 23 June 16:15 - 17:45 (Hall F1)

Session: A.01.01 Advances in atmospheric composition - PART 2

High-quality global atmospheric composition observations are essential for our understanding of atmospheric responses and contributions to climate change, to document trends in greenhouse gases, their precursors, and air pollutant emissions, and to assess and underpin new policy measures for climate mitigation and adaptation.
This session will present the latest results on how atmospheric composition measurements can be used to monitor climate change. Furthermore, changes in the troposphere, the stratosphere and their coupling (e.g. circulation and transport of trace gases, chemical composition, aerosol information) will be discussed. We invite presentations on data product improvements and validation aspects, as well as studies using satellite data for applications in atmospheric chemistry, composition monitoring and air quality from current and future missions.

Monday 23 June 16:15 - 17:45 (Hall F1)

Presentation: The New Multi-Sensor Formaldehyde Climate Data Records for the ESA Climate Change initiative

Authors: Isabelle De Smedt, Huan Yu, Jonas Vlietinck, Klaus-Peter Heue, Thomas Danckaert, Nicolas Theys, Corinne Vigouroux, Steven Compernolle, Gaia Pinardi, Trissevgeni Stavrakou, Folkert Boersma, Michel Van Roozendael
Affiliations: Royal Belgian Institute For Space Aeronomy (bira-iasb), Institut für Methodik der Fernerkundung (IMF), Deutsches Zentrum für Luft und Raumfahrt (DLR), Royal Netherlands Meteorological Institute (KNMI)
The Ozone and Aerosols Precursors Climate Change Initiative project (Precursors_cci+) is dedicated to developing robust, long-term data records of six important short-lived atmospheric species: formaldehyde (HCHO), glyoxal (CHOCHO), nitrogen dioxide (NO2), sulphur dioxide (SO2), carbon monoxide (CO), and ammonia (NH3). The project consolidates data from various satellite missions including GOME, SCIAMACHY, GOME-2, OMI, TROPOMI, IASI, and MOPITT to create consistent and harmonized Climate Data Records (CDRs). In this work, we develop two HCHO CDRs for mid-morning and early afternoon observations, derived respectively from the GOME, SCIAMACHY, and GOME-2 sensors, and from OMI and TROPOMI. The retrieval algorithms build on the EU FP7 QA4ECV products (GOME, SCIAMACHY, OMI), the EUMETSAT AC-SAF products (GOME-2A/B/C) and the Sentinel-5P TROPOMI operational product. However, significant updates have been implemented, and a full reprocessing of all the L2 and L3 products has been performed. The creation of consistent and reliable HCHO climate data records covering more than 20 years presents significant challenges. Differences in cloud data, a priori profiles, and albedo climatology among various sensors lead to inconsistencies in HCHO retrievals. Furthermore, those auxiliary products need to be stable and consistent over time. The selection of valid observations for data averaging is also crucial, especially for HCHO which exhibits greater dispersion than NO2, resulting in more negative columns and reduced precision for lower columns. Careful consideration of quality values is necessary to ensure accurate statistical representation while accounting for instrumental differences and temporal changes in sensor performance. Finally, validation efforts using ground-based and aircraft measurements have consistently revealed a negative bias for high HCHO columns, while the precision of lower columns is often found insufficient. 
To address these challenges, the CCI+p HCHO L2 products have undergone substantial improvements. These include: (1) an updated albedo climatology based on TROPOMI data for enhanced accuracy; (2) adoption of the consistent CAMS global reanalysis over the period 2005-2023 for a priori profiles; (3) provision of VCDs with and without cloud correction, along with corresponding air mass factors and averaging kernels; (4) harmonised background correction methods across the different instruments; (5) a refined definition and harmonisation of quality values. Furthermore, the L3 CDRs have been enhanced by incorporating comprehensive L2 information, implementing optimised data filtering, and providing vertical-dimension variables in the grids. This paper presents the newly generated ESA CCI morning and afternoon HCHO climate data records. Validation results show a reduction of the bias of the satellite HCHO columns towards ground-based measurements. This bias reduction is primarily driven by the updated a priori profiles. A consolidated trend analysis is also presented, made possible owing to the improved time stability of the CCI+p HCHO climate data records.

Monday 23 June 16:15 - 17:45 (Hall F1)

Presentation: Mixing processes in tomographically imaged filaments of Asian Monsoon outflow during the PHILEAS campaign

Authors: J. Kaumanns, S. Johansson, J. Ungermann, M. Dick, F. Friedl-Vallon, J.-U. Grooß, T. Gulde, M. Höpfner, A. Kleinert, E. Kretschmer, G. Maucher, T. Neubert, H. Nordmeyer, C. Piesch, F. Plöger, P. Preusse, M. Retzlaff, S. Rhode, H. Rongen, G. Schardt, B.-M. Sinnhuber, W. Woiwode, P. Braesicke, M. Riese
Affiliations: Institute of Climate and Energy Systems (ICE) - Stratosphäre, Forschungszentrum Jülich, Zentralinstitut für Elektronik, Forschungszentrum Jülich, Institut für Meteorologie und Klimaforschung, Karlsruher Institut für Technologie
The Asian summer monsoon (ASM) is one of the most important events of the Northern Hemisphere summer. It represents an effective pathway for tropospheric air originating from the south-Asian continent into the upper troposphere (UT), which is to date only partially understood and quantified. Through Rossby wave breaking events, large filaments of ASM air are transported westward into the mid-latitudes, where they are mixed into the lower stratosphere (LS). The composition of the UTLS, especially with respect to radiatively active trace gas species, is a key factor for the global climate, which the ASM affects directly. During the PHILEAS (Probing High Latitude Export of Air from the Asian Summer Monsoon) campaign in late summer 2023, a filament of ASM outflow was measured with the airborne limb imager GLORIA on board the German research aircraft HALO on two consecutive days. The edge of the filament was imaged in 3-D during the first flight, and its outflow, based on CLaMS trajectory calculations, was revisited in a second 3-D retrieval on the following day. The chemical composition of the filamented air was measured for five different trace gas species (water vapor, ozone, peroxyacetyl nitrate (PAN), nitric acid and carbon tetrachloride) with unprecedented 3-D spatial resolution unique to the GLORIA instrument.
The filament contains a strong tropopause fold, which perturbs its dynamical structure. We present the tomographic retrievals of the matching flights. We are able to identify the different types of air from their chemical composition using a novel classification method based on mixture models, and are able to resolve the spatial structure of both the filament and the mixing process on the mesoscale. By revisiting the outflow of the filament we are able to directly measure the change in chemical composition and are able to determine and quantify the different possible pathways during mixing. We are able to uniquely link the different types of air to different regions of origin. GLORIA is an airborne demonstrator for the European Space Agency Earth Explorer 11 candidate CAIRT, currently selected for Phase A. GLORIA observations offer an outlook on how exploring global processes in the UTLS would be possible using CAIRT.
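A heavily simplified, one-tracer sketch of mixture-model air-mass classification is shown below; the actual method is multivariate and fitted to the GLORIA retrievals, and the Gaussian component form here is an assumption for illustration only:

```python
import numpy as np

def classify_air_mass(x, means, sigmas, weights):
    """Assign tracer samples to the mixture component (air-mass type)
    with the highest posterior responsibility, given per-class
    Gaussian parameters fitted beforehand."""
    x = np.asarray(x, dtype=float)
    means, sigmas, weights = (np.asarray(a, dtype=float)
                              for a in (means, sigmas, weights))
    # Weighted Gaussian density of each sample under each component
    pdf = (weights / (np.sqrt(2.0 * np.pi) * sigmas)
           * np.exp(-0.5 * ((x[..., None] - means) / sigmas) ** 2))
    return np.argmax(pdf, axis=-1)
```

In the multi-tracer case the same responsibility argument applies, with the one-dimensional densities replaced by multivariate ones over the measured species.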

Monday 23 June 16:15 - 17:45 (Hall F1)

Presentation: On the estimation of stratospheric age of air from correlations of multiple trace gases

Authors: Florian Voet, Felix Plöger, Dr Johannes Laube, Peter Preusse, Dr Paul Konopka, Jens-Uwe Grooß, Jörn Ungermann, Björn-Martin Sinnhuber, Michael Höpfner, Bernd Funke, Gerald Wetzel, Sören Johansson, Dr Gabriele Stiller, Dr Eric Ray, Dr Michaela Hegglin
Affiliations: Institute for Atmospheric and Environmental Research, University of Wuppertal, Institute of Climate and Energy Systems: Stratosphere (ICE-4), Forschungszentrum Jülich, Institute of Meteorology and Climate Research - Atmospheric Trace Gases and Remote Sensing (IMK-ASF), Instituto de Astrofísica de Andalucía, CSIC, NOAA Chemical Sciences Laboratory, Cooperative Institute for Research in Environmental Science, University of Reading, Department of Meteorology
The stratospheric circulation is an important element in the climate system, but observational constraints are prone to significant uncertainties due to the low circulation velocities and uncertainties in available trace gas measurements. Here, we propose a method to calculate the mean age of air, as a measure of the circulation, from observations of multiple trace gas species which are reliably measurable by satellite instruments, like trichlorofluoromethane (CFC-11), dichlorodifluoromethane (CFC-12), chlorodifluoromethane (HCFC-22), methane (CH4), nitrous oxide (N2O) and sulfur hexafluoride (SF6), and show that this method works well up to a height of about 25 km. The method is based on the compact correlations of these gases with mean age. Methodological uncertainties include effects of atmospheric variability, non-compactness of the correlation, and measurement-related effects inherent to satellite instruments. The age calculation method is evaluated in a model environment and compared against the true model age. We show that the combination of the six chosen species reduces the resulting uncertainty of derived mean age to below 0.3 years throughout most regions of the lower stratosphere. Even small-scale, seasonal features in the global age distribution can be reliably diagnosed. The new correlation method is further applied to trace gas measurements with the balloon-borne Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA-B) instrument. The corresponding deduced mean age profiles agree reliably with SF6-based mean age below about 22 km and show significantly lower uncertainty ranges. Comparison between observation-based and model-simulated mean age indicates a slow-biased circulation in the ERA5 reanalysis. Overall, the proposed mean age calculation method shows promise to substantially reduce the uncertainty in mean age estimates from satellite trace gas observations.
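The gain from combining several tracers can be illustrated with a simple inverse-variance average of per-tracer age estimates. The weighting scheme below is an assumption for illustration; the abstract only reports that the six-species combination brings the uncertainty below 0.3 years.

```python
import numpy as np

def combined_mean_age(ages, sigmas):
    """Inverse-variance combination of per-tracer mean-age estimates,
    each obtained from that tracer's compact correlation with age.
    Returns the combined age and its (reduced) uncertainty."""
    ages = np.asarray(ages, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    w = 1.0 / sigmas ** 2
    age = float(np.sum(w * ages) / np.sum(w))
    sigma = float(np.sqrt(1.0 / np.sum(w)))
    return age, sigma
```

With six independent tracers of comparable individual uncertainty, the combined uncertainty shrinks roughly as 1/sqrt(6), consistent with the direction of the reported improvement.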

Monday 23 June 16:15 - 17:45 (Hall F1)

Presentation: Development of a long-term SO2 column data record from satellite nadir UV sensors

Authors: Dr. Nicolas Theys, Huan Yu, Jonas Vlietinck, Isabelle De Smedt, Pascal Hedelt, Steven Compernolle, Michel Van Roozendael
Affiliations: Bira-iasb, DLR
As part of its Climate Change Initiative extension program (CCI+), the European Space Agency is aiming to develop space-based long-term climate data records (CDR) for precursor gases involved in the formation of aerosols and ozone (https://climate.esa.int/en/projects/precursors-for-aerosols-and-ozone/about/). The targeted trace gases are NO2, SO2, HCHO, CHOCHO, CO and NH3. The CDRs consist of a set of global gridded data products (level-3) that will be made available to the scientific community. For SO2, the project relies on recent developments for Sentinel-5 Precursor/TROPOMI using the Covariance-Based Retrieval Algorithm (COBRA). In November 2024, this algorithm replaced the classical Differential Optical Absorption Spectroscopy (DOAS) algorithm in the operational TROPOMI processor, as it leads to improved SO2 retrievals both in terms of noise and bias. In CCI+, COBRA has been adapted to other satellite sensors (GOME, SCIAMACHY and OMI) to build consistent column retrievals, covering nearly three decades of observations. This includes a harmonized approach for the air mass factor (AMF) and error calculations. Importantly, the AMFs are based on a priori SO2 profiles from the CAMS reanalysis data. Here, we give an overview of the SO2 development activities in the Precursors CCI+ project. Results for OMI, TROPOMI and the historical sensors GOME and SCIAMACHY will be shown. Comparisons between the CCI SO2 products and other satellite data will be discussed. Validation of satellite SO2 columns against ground-based observations from MAX-DOAS and Pandora instruments will be presented. Finally, we will discuss our plans for future work.

Monday 23 June 16:15 - 17:45 (Hall F1)

Presentation: Observing System Simulation Experiment of CAIRT limb profiles focusing on stratosphere to troposphere exchange

Authors: Quentin Errera, Marc Op de beeck, Stefan Bender, Johannes Flemmings, Bernd Funke, Alex Hoffmann, Michael Höpfner, Gabriele Poli, Piera Raspollini, Jörn Ungermann, Björn-Martin Sinnhuber
Affiliations: Royal Belgian Institute of Space Aeronomy, Instituto de Astrofísica de Andalucía, CSIC, Research Department, European Centre for Medium-Range Weather Forecasts, European Space Research and Technology Centre, Karlsruhe Institute of Technology, Institute of Applied Physics ‘N. Carrara’, Italian National Research Council, Forschungszentrum Jülich
The Changing-Atmosphere Infra-Red Tomography Explorer (CAIRT) is a candidate for ESA’s Earth Explorer 11. This mission has been proposed in order to achieve a step change in our understanding of the coupling of atmospheric circulation, composition and regional climate. The CAIRT concept proposes to perform limb infra-red tomography of the atmosphere from the troposphere to the lower thermosphere (about 5 to 115 km altitude) with a 400 km swath to provide a three-dimensional picture of atmospheric structure at unprecedented scales. This contribution investigates the capability of CAIRT to analyse stratosphere to troposphere exchange using an Observing System Simulation Experiment (OSSE). In this effort, a reference atmosphere – the nature run in the OSSE terminology – is built based on the Copernicus Atmosphere Monitoring Services (CAMS) control run (horizontal resolution ~40 km and a vertical resolution ~500 m in the tropopause region) between October 2021 and March 2022 (5 months). The nature run is used to generate CAIRT level 2 (L2) profiles of ozone (O3), water vapour (H2O) and carbon monoxide (CO), along with a CAIRT orbit simulator and a simulator to account for CAIRT instrumental errors as well as vertical and along track smoothing. Simulated CAIRT L2 profiles are then assimilated by the Belgian Assimilation System for Chemical ObsErvations (BASCOE) to provide analyses of O3, H2O and CO – the assimilation run. In order to measure the added value of CAIRT data in the assimilation run, a BASCOE control run without CAIRT assimilation, is also done. We have also simulated and assimilated Aura Microwave Limb Sounder (MLS) O3, H2O and CO profiles in order to measure the added value of CAIRT against this instrument. 
This study reveals that (1) CAIRT O3 profiles are able to constrain BASCOE down to an altitude of 7 km, a few km lower (thus better) than MLS; (2) CAIRT H2O profiles are able to constrain BASCOE down to the tropopause region, with a slightly better performance than MLS at high latitudes; and (3) CAIRT CO profiles are able to constrain BASCOE in the UTLS region, while MLS provides a much better constraint above 10 km.

Monday 23 June 16:15 - 17:45 (Hall F1)

Presentation: Updated versions of the S5P/TROPOMI and OMI UV-Absorbing Aerosol Index datasets: over 20 years of consistent aerosol event monitoring

Authors: Deborah Stein Zweers, Martin de Graaf, Maarten Sneep
Affiliations: KNMI - Royal Netherlands Meteorological Institute
The absorbing aerosol index (AAI) is a robust indicator of the presence of ultraviolet (UV) absorbing aerosols in the troposphere and stratosphere. It is a relatively straightforward, computationally light calculation which relies upon the principle of Lambertian surface reflecting behavior to describe the observed scenes. For missions prior to the TROPOspheric Monitoring Instrument (TROPOMI) onboard the Sentinel-5 Precursor (S5P) satellite, with comparatively large pixels, the Lambertian surface characterization works well, resulting in near-zero values of the aerosol index for cloud-covered scenes. As a result, the AAI datasets from missions going back as early as 1978 (TOMS) and up to the present day (TROPOMI, OMI, GOME-2, and others) do not require cloud clearing. Therefore, with maximized daily global coverage, the AAI is an ideal dataset for tracking emission and long-range transport events of absorbing aerosol that arise from desert dust outbreaks, smoke from wildfires, and ash from volcanic eruptions. The relatively small pixels of S5P/TROPOMI, however, mean that the Lambertian surface assumption breaks down for certain cases. A proven approach for handling some of these effects is to introduce a bi-reflector scene model: using the independent pixel approximation, the clouded portion of the scene is handled differently than the remaining surface portion. With 7 years of operational aerosol index (AI) data from TROPOMI and more than 20 years of data from the latest OMI OMAERO aerosol index dataset (Collection 4), extending back to 2004, we will be able to test the performance of the bi-reflector approach in the period of mission overlap. Results will be shown to demonstrate the operational readiness of this TROPOMI aerosol index (AER_AI) algorithm. Lastly, ongoing seasonal cycle analyses will be presented to highlight the importance and scientific benefit of maintaining a long and consistent data record.
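The bi-reflector scene model can be illustrated with a minimal sketch: under the independent pixel approximation, the observed scene reflectance is a cloud-fraction-weighted combination of a cloudy reflector and a Lambertian surface reflector. The reflectance values below are arbitrary illustrative numbers, not algorithm constants:

```python
def scene_reflectance(cloud_fraction, r_cloud, r_surface):
    """Independent pixel approximation for a bi-reflector scene: the observed
    reflectance is the cloud-fraction-weighted sum of the cloudy part and the
    Lambertian surface part of the pixel."""
    if not 0.0 <= cloud_fraction <= 1.0:
        raise ValueError("cloud fraction must lie in [0, 1]")
    return cloud_fraction * r_cloud + (1.0 - cloud_fraction) * r_surface

def effective_cloud_fraction(r_obs, r_cloud, r_surface):
    """Invert the bi-reflector model for the effective cloud fraction."""
    return (r_obs - r_surface) / (r_cloud - r_surface)

# A half-clouded pixel over a dark surface
r = scene_reflectance(0.5, r_cloud=0.8, r_surface=0.05)
```

The single-Lambertian assumption corresponds to the special cases `cloud_fraction = 0` or `1`; the bi-reflector model interpolates between them for partially clouded small pixels.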
Add to Google Calendar

Monday 23 June 16:15 - 16:35 (EO Arena)

Demo: D.03.21 DEMO - Revolutionizing EO Data Access: The Power of openEO Federation

Are you tired of jumping between multiple EO platforms, managing different APIs and going through cumbersome workarounds? You are not alone.
Earth Observation (EO) data are crucial for many scientific, governmental, and commercial purposes: environmental monitoring, decision-making, and agricultural and climate monitoring, to name a few. However, accessing and processing these data remains a major obstacle.
The openEO Federation, introduced within the Copernicus Data Space Ecosystem (CDSE), addresses this challenge by providing a unified interface for seamless EO data access and processing. By federating multiple backends, openEO eliminates the need to manage multiple accounts and APIs, streamlining workflows for researchers, developers, and users working with remote sensing data.
This demonstration will showcase how the openEO federated platform allows users to seamlessly discover, access, and process EO data from different sources through a standardized interface. Join us to explore how openEO Federation is transforming EO data accessibility, making it easier than ever to develop innovative solutions using the Copernicus Data Space Ecosystem.

What You Will Experience:
1. One API to Rule Them All: Interact with any compliant backend using a single, standardized openEO interface.
2. Streamlined Access: Access a network of research, commercial, and cloud providers without juggling accounts or APIs.
3. Server-Side Processing: Execute scalable workflows on federated backends without local downloads.

Why Attend?
• Simplify Workflows: Eliminate API fragmentation and cumbersome workarounds.
• Free & Fair: Leverage ESA-sponsored, FAIR infrastructure with no hidden costs or vendor lock-in.
• Drive Innovation: Use a unified interface to share and re-use your research.

For more information, visit the openEO Federation documentation.
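As a taste of the standardized interface, openEO describes computations as JSON process graphs that any compliant backend in the federation can execute. The sketch below builds such a graph with only the standard library; the collection id, spatial/temporal extents and band names are illustrative assumptions, and a real session would submit the graph through an openEO client after authenticating:

```python
import json

# Minimal openEO process graph: load a collection, compute NDVI, save the result.
# Collection id, extents and bands are illustrative, not tied to one backend.
process_graph = {
    "load1": {
        "process_id": "load_collection",
        "arguments": {
            "id": "SENTINEL2_L2A",
            "spatial_extent": {"west": 16.1, "south": 48.1, "east": 16.6, "north": 48.4},
            "temporal_extent": ["2024-06-01", "2024-06-30"],
            "bands": ["B04", "B08"],
        },
    },
    "ndvi1": {
        "process_id": "ndvi",
        "arguments": {"data": {"from_node": "load1"}},
    },
    "save1": {
        "process_id": "save_result",
        "arguments": {"data": {"from_node": "ndvi1"}, "format": "GTiff"},
        "result": True,
    },
}

payload = json.dumps({"process_graph": process_graph})
```

Because the graph is backend-agnostic JSON, the same payload can be routed to any federated backend that serves the referenced collection, which is what makes the single-interface promise workable.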

Speakers:


  • Jeroen Dries - VITO
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall N1/N2)

Session: D.03.02 Free Open Source Software for the Geospatial Domain: current status & evolution - PART 2

#cloud-native

Free and Open Source Software has a key role in the geospatial and EO communities, fostered by organizations such as OSGeo, the Cloud Native Computing Foundation and Apache, and by space agencies such as ESA and NASA. This session showcases the status of OSS tools and applications in the EO domain, and their foreseen evolution, with a focus on innovation and support for open science challenges.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall N1/N2)

Presentation: xcube geoDB: Bridging the Gap in Vector Data Management for Earth Observation

#stac

Authors: Thomas Storm, Alicja Balfanz, Gunnar Brandt, Norman Fomferra
Affiliations: Brockmann Consult
Although Earth Observation data are typically disseminated as gridded products, vector data with geographical context often play a crucial role in their analysis, processing, and dissemination. For instance, in-situ data used for instrument calibration and validation are usually provided as vector data, and training and validation datasets for machine learning applications often come as feature data. Additionally, crop type reports for agricultural fields are stored in vector formats. Such data are commonly integrated with satellite imagery for further downstream analysis and processing. However, the diversity of formats, coordinate reference systems, and data models presents significant challenges, requiring substantial expertise and effort from users to process vector data effectively. To address these challenges, we introduce here xcube geoDB, a geographical database solution designed to simplify common tasks associated with vector data. xcube geoDB offers an open-source, user-friendly API that facilitates the storage, reading, and updating of vector data. Users can store, manipulate, share, and delete their data through an easy-to-use Python interface, process it using the flexible openEO API, or access it via a dedicated STAC interface. Currently, xcube geoDB is in active use by individual researchers and several projects:
• The Green Transition Information Factory Austria leverages xcube geoDB to efficiently store and retrieve air quality data, traffic statistics, and other variables.
• ESA’s Earth Observation Training Data Lab uses xcube geoDB to manage metadata for large machine learning test datasets.
• The DEFLOX project relies on xcube geoDB to maintain and expand a streamlined archive of in-situ reflectance data collected by globally distributed devices.
Potential users can explore xcube geoDB through its Python client; pre-configured Jupyter notebooks are available for testing on platforms like EuroDataCube and DeepESDL.
Thanks to its versatility, generic design, and robust user management features, xcube geoDB addresses a wide range of use cases involving geographical vector data. In this presentation, we will showcase several real-world applications that highlight its reliability and performance. xcube geoDB is under active development. Recent enhancements include the introduction of a STAC interface for streamlined and standardized vector data access, improved user management features allowing for shared namespaces and collaborative data collections, and support for the openEO API, which introduces the innovative vector data cube concept. Future developments on the roadmap include a dedicated web-based graphical user interface to further enhance usability and accessibility.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall N1/N2)

Presentation: The ZOO-Project and EOEPCA+: A Practical Approach to Interoperable EO Workflows

Authors: Gerald Fenoy
Affiliations: Geolabs
The ZOO-Project is open-source software implementing parts 1 and 2 of the OGC API Processes standard. It provides a solution to create, deploy, and manage web processing services for geospatial and Earth Observation (EO) applications. Within the European Space Agency’s EO Exploitation Platform Common Architecture (EOEPCA), the ZOO-Project plays a key role in supporting the deployment and exploitation of interoperable workflows that can operate seamlessly across different platforms. The OGC API Processes standard defines a consistent way to interact with geospatial processing services, making it easier to describe, execute, and manage workflows programmatically. By following this standard, the ZOO-Project ensures workflows are compatible with a wide range of systems and can be integrated into diverse applications without depending on specific infrastructures or interfaces. EOEPCA, as a framework for federated EO platforms, promotes the sharing of data and services while encouraging the use of open standards. Among its contributions, the ZOO-Project stands out as a key open-source component sponsored by the initiative, turning EOEPCA’s vision into practical tools for researchers, developers and platform operators. The ZOO-Project’s support for the Common Workflow Language (CWL) further enhances its utility, allowing workflows to be described in a standard format that ensures portability and scalability. The importance of the ZOO-Project is further highlighted by the recognition of its main developer, Gerald Fenoy, who recently received the David A. Hastings Award for his contributions to the FOSS4G community in Asia. This acknowledgment reflects the ZOO-Project’s role in advancing open-source geospatial technologies and fostering collaboration on a global scale.
This oral presentation will describe the current state of the ZOO-Project and present its ongoing evolution, including new features that allow the ZOO-Project to execute workflows in containerized environments such as high-performance computing (HPC) systems and Kubernetes, relying on distributed platforms such as Argo Workflows. These enhancements ensure that the ZOO-Project remains relevant to modern EO challenges, offering scalable and interoperable solutions for a broad range of users.
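To make the standard concrete: OGC API - Processes Part 1 defines a JSON execute request that a client POSTs to `/processes/{processId}/execution` on a server such as the ZOO-Project. A minimal sketch of such a body follows; the input names and the example URL are hypothetical, since each deployed process defines its own input schema:

```python
import json

# Hypothetical inputs for an imaginary raster-thresholding process;
# a real deployment publishes its own input schema in the process description.
execute_request = {
    "inputs": {
        "input_raster": {"href": "https://example.com/scene.tif"},
        "threshold": 0.4,
    },
    "outputs": {
        # Ask the server to return the output by reference (a link)
        # rather than inline, per OGC API - Processes Part 1.
        "result": {"transmissionMode": "reference"},
    },
}

# This body would be POSTed to /processes/{processId}/execution.
body = json.dumps(execute_request)
```

Because the request shape is standardized, the same client code can drive any conformant processing backend, which is the interoperability point the abstract makes.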
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall N1/N2)

Presentation: FOSS4G Observatory: empowering communities through insights into the open source geospatial ecosystem

Authors: Codrina Maria Ilie, Vasile Craciunescu, PhD Andreea Marin
Affiliations: Terrasigna
Core packages supporting Earth Observation are part of a wider, more complex ecosystem that extends from the very first pre-processing steps to the delivery of the final EO-based knowledge service or product to the user. While the selection of a core package for a specific task performed within an academic environment, for demonstration purposes only, might be made based on language preference, the selection of one package over another within an operational workflow (scientific, policy-related or commercial) depends on a multitude of parameters. The surprise might come from the fact that these parameters do not depend solely on external factors, but on aspects intrinsically related to the project itself. These can vary significantly and, more often than not, might even escape the attention of its developers: for example, license interoperability with other relevant software projects or core packages, or the governance model that ultimately guides the existing and future development directions of the software project. In this talk, the authors will tackle a question as old as open source itself: be it an open source geospatial desktop application, a core library, a server-side project or a core package, what stands as the locomotive that supports it over time? Standing on the shoulders of a wider, more extensive initiative – the FOSS4G Observatory – the authors have worked on understanding the connections between the heart of a “healthy open source project” and “software metrics” with regard to the project's long-term viability. Expanding on sustainability issues in open source geospatial projects, efforts have been invested in deciphering which elements support the uptake of FOSS4G within operational activity, be it scientific, policy-related or commercial, irrespective of its language.
All three sectors are governed by different driving principles and best practices when it comes to addressing the development, management and use of the open source environment. In such complex matters there is never one simple answer, but the presented results can further support and empower developer communities, across languages, to address the out-of-sight, out-of-mind aspects of open source geospatial software on which the viability of their projects depends. In that direction, efforts have also been invested in further enriching the existing literacy programs, in the spirit of open source principles and best practices. The prepared materials focus on the three main types of potential beneficiaries (developers, users and decision-makers) and on the three main types of corresponding use: development & implementation, use, and adoption/transition from proprietary software. The FOSS4G Observatory, which is the foundation of the presented work, is a community-led initiative within the wider Open Innovation framework at the European Space Agency, whose scope is to maintain and further develop an open, interactive platform offering a constantly updated, comprehensive and detailed overview of the dynamic environment of the open source digital infrastructure for geospatial data storage, processing and visualisation systems. Today, the FOSS4G Observatory documents and interconnects nearly 500 FOSS4G projects, collected by the implementation team as well as by the community of contributors.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall N1/N2)

Presentation: KNeo: yet another cloud-native platform for scalable and automated EO data processing

#cloud-native

Authors: Dr. Marian Neagul, Dr. Markus Neteler, Vasile Craciunescu, Mrs. Carmen Tawalika, Mrs. Anika Weinmann, Mr. Dan Avram
Affiliations: Terrasigna, mundialis
The Kubernetes Native Earth Observation (KNeo) platform aims to improve EO data processing through the integration of cutting-edge cloud-native technologies. Building on the Terrasigna Y22 and mundialis actinia cloud processing platforms, KNeo employs serverless computing, event-driven architectures and open-source solutions to deliver scalable, efficient and cost-effective EO data services. The platform leverages Knative for serverless operations, integrates OpenID Connect for authentication and adopts the ZOO-Project as its core implementation of OGC API Processes. The platform supports a diverse range of use cases, including deforestation detection, forest classification, single-tree detection, urban green roof identification and irrigation suitability mapping. These use cases employ a variety of EO data (e.g. Sentinel-1, Sentinel-2 and aerial imagery) and advanced analytical methods, such as time-series analysis, vegetation indices, image segmentation and raster algebra. All these innovations are aligned with ESA’s Earth Observation Exploitation Platform Common Architecture.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall N1/N2)

Presentation: Geospatial Machine Learning Libraries and the Road to TorchGeo 1.0

#stac

Authors: Adam Stewart, Nils Lehmann, Xiao Xiang Zhu
Affiliations: Technical University of Munich
The growth of machine learning frameworks like PyTorch, TensorFlow, and scikit-learn has also sparked the development of a number of geospatial domain libraries. In this talk, we break down popular geospatial machine learning libraries, including:
1. TorchGeo (PyTorch)
2. eo-learn (scikit-learn)
3. Raster Vision (PyTorch, TensorFlow*)
4. PaddleRS (PaddlePaddle)
5. segment-geospatial (PyTorch)
6. DeepForest (PyTorch)
7. TerraTorch (PyTorch)
8. SITS (R Torch)
9. scikit-eo (scikit-learn, TensorFlow)
For each library, we compare the features they offer as well as various GitHub and download metrics that emphasize the relative popularity and growth of each library. In particular, we promote metrics including the number of contributors, forks, and test coverage as useful for gauging the long-term health of each software community. Among these libraries, TorchGeo stands out with more built-in data loaders and pre-trained model weights than all other libraries combined. TorchGeo also boasts the highest number of contributors and test coverage, while Raster Vision has the most forks and segment-geospatial has the most stars on GitHub. We highlight particularly desirable features of these libraries, including a command-line or graphical user interface, the ability to automatically reproject and resample geospatial data, support for the SpatioTemporal Asset Catalog (STAC), and time series support. The results of this literature review are regularly updated with input from the developers of each software library and can be found here: https://torchgeo.readthedocs.io/en/stable/user/alternatives.html. Among the above highly desirable features, the one TorchGeo would most benefit from adding is better time series support.
Geotemporal data (time series data coupled with geospatial information) is a growing trend in Earth Observation, and is crucial for a number of important applications, including weather and climate forecasting, air quality monitoring, crop yield prediction, and natural disaster response. However, TorchGeo has only partial support for geotemporal data, and lacks the data loaders and models needed to make effective use of geotemporal metadata. In this talk, we highlight steps TorchGeo is taking to revolutionize how geospatial machine learning libraries handle spatiotemporal information. In addition to the preprocessing transforms, time series models, and change detection trainers required for this effort, there is also the need to replace TorchGeo's R-tree spatiotemporal backend. We present a literature review of several promising geospatial metadata indexing solutions and data cubes, including:
1. R-tree
2. GDAL GTI
3. STAC
4. OpenDataCube
5. Rasdaman
6. SciDB
7. GeoPandas
8. rioxarray
9. Geocube
For each spatiotemporal backend, we compare the array, list, set, and database features available. We also compare operating system support and ease of installation for different solutions, as well as preliminary performance benchmarks on scaling experiments for common operations. TorchGeo requires support for geospatial and geotemporal indexing, slicing, and iteration. The library with the best spatiotemporal support will be chosen to replace R-tree in the coming TorchGeo 1.0 release, marking a large change in the TorchGeo API as well as a promise of future stability and backwards compatibility for one of the most popular geospatial machine learning libraries. TorchGeo development is led by the Technical University of Munich, with initial incubation by the AI for Good Research Lab at Microsoft, and contributions from 75+ contributors from around the world.
TorchGeo is also a member of the OSGeo foundation, and is widely used throughout academia, industry, and government laboratories. Check out TorchGeo here: https://www.osgeo.org/projects/torchgeo/
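The indexing, slicing, and iteration requirements discussed above can be illustrated with a toy pure-Python spatiotemporal index. This is nothing like a production R-tree, just the query semantics such a backend must provide: each item carries a 2D bounding box plus a time interval, and a query returns every item whose box and interval both overlap the query's (names and bounds below are made up):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Item:
    """One raster footprint: 2D spatial bounds plus a time interval."""
    name: str
    minx: float
    miny: float
    maxx: float
    maxy: float
    t0: float
    t1: float

def intersects(item, minx, miny, maxx, maxy, t0, t1):
    """True iff the item's bounding box and time interval both overlap the query."""
    return (item.minx <= maxx and minx <= item.maxx
            and item.miny <= maxy and miny <= item.maxy
            and item.t0 <= t1 and t0 <= item.t1)

index = [
    Item("scene_a", 0, 0, 10, 10, 0, 5),
    Item("scene_b", 20, 20, 30, 30, 0, 5),
    Item("scene_c", 5, 5, 15, 15, 6, 9),
]

# Spatiotemporal slice: everything overlapping box (4..12, 4..12) during t in [4, 7]
hits = [item.name for item in index if intersects(item, 4, 4, 12, 12, 4, 7)]
```

A real backend adds a tree or database structure so the overlap test is not a linear scan; the candidate backends in the review differ precisely in how they accelerate and persist this operation.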
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall N1/N2)

Presentation: JupyterGIS — in-Browser, Collaborative, and Open Source GIS

Authors: Sylvain Corlay, Matthias Meschede, Andreas Trawöger
Affiliations: QuantStack
In this presentation, we introduce JupyterGIS [1], a novel open-source project for editing QGIS project files in a JupyterLab-based web user interface. JupyterGIS provides co-editing features, allowing users to work collaboratively on QGIS projects in real time. The primary user interface of JupyterGIS is a JupyterLab extension providing a GUI that allows users to edit QGIS files graphically. However, it also provides a Python API for interacting with JupyterGIS sessions, utilizing Jupyter’s rich display system to render content in notebooks. The JupyterGIS project is built upon the following key ideas that we believe are going to be decisive for open-source GIS software in the future:
- The future is Web-based. While QGIS is mainly a desktop application, we aim to provide a web-based UI for editing QGIS session files.
- Collaboration features should be central to any web-based authoring user interface. We devised JupyterGIS from the ground up with collaborative editing in mind. We strongly believe that co-editing will not be limited to word processing in the future. Sharing a QGIS editing session should be as simple as sharing a link.
- Finally, any sustainable open-source solution leveraging web technologies should be built with scalability in mind, so that a service based on these technologies can be operated with a reasonable amount of funding. For this reason, we made sure that JupyterGIS is compatible with JupyterLite and can run entirely in the browser without a backend server.
In this session, we will discuss the current feature set of the JupyterGIS project, and detail the main reasons for our technical choices and the interaction with the Jupyter project, diving into the relevant details of the JupyterLab extension system. We will present the main features of the project, with live demos during the presentation, featuring the editing of QGIS files, collaborative editing features, and the Python API.
We will also demonstrate the use of multiple data types, such as GeoJSON, Shapefile, and GeoTIFF. We will conclude the presentation with an outlook on the upcoming features and a broader presentation of the multi-stakeholder collaboration behind the project and our goals, including the European Space Agency (ESA), QuantStack SAS, Simula Research Laboratory, and the Schmidt Center for Data Science & Environment at UC Berkeley (DSE).
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall M1/M2)

Session: C.03.15 10 Years of Copernicus Sentinel-2 - PART 2

The Copernicus Sentinel-2 mission celebrates a decade of revolutionising Earth observation, offering unique insights into our planet. Since its launch in 2015, Sentinel-2 has provided high-resolution, multispectral imagery supporting a large range of applications including agriculture, forestry, water resources management, natural disaster management and methane emissions monitoring. Its global reach, frequent revisit times, and open data policy have empowered policymakers, scientists, businesses, and communities worldwide.
This session celebrates the mission’s remarkable achievements over the past ten years, highlighting its contributions to addressing critical environmental and societal challenges.
Looking ahead, the session will also refer to emerging opportunities for Sentinel-2. Experts will provide their vision on the challenges, innovations, and the mission’s evolving role in addressing global societal challenges.
Join us in celebrating 10 years of Copernicus Sentinel-2, reflecting on its transformative impact, and envisioning how it will continue to shape the future of Earth observation.

The session will be followed by a small celebration with drinks & cakes.

Presentations and speakers:


Has Sentinel-2 really changed the world of Earth Observation?


  • Grega Milcinski (Sinergise)

Harmonized Landsat Sentinel (HLS) data: a retrospective


  • Sergii Skakun (NASA, UMD)

Feedback on 20 years of Sentinel-2 (-like) data


  • Olivier Hagolle (CNES)

A Decade of Sentinel-2: Pioneering Disruptive Technologies and AI


  • Giuseppe Borghi, Nicolas Longepe (ESA)

From Data to Action: Reducing Methane Emissions with Sentinel-2


  • Itziar Irakulis Loitxate (International Methane Emissions Observatory (IMEO) of UNEP, Polytechnic University of Valencia)
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.96/0.97)

Session: D.06.02 HPC and Quantum Computing

Nowadays, to fully scale up deep learning and modern data processing techniques for processing large amounts of Earth Observation data, access to High Performance Computing (HPC) infrastructure and techniques is becoming mandatory. In this scenario, quantum computing and quantum technologies, which are often hybridized with High Performance Computing resources, represent a paradigm shift in computational power and sensing capabilities, offering unprecedented potential for advancing Earth Observation. However, accessing High Performance Computing resources is today often prohibitive for most end-users due to cost, technological and governmental barriers. Modern cloud-based ecosystems are by design more user-friendly, but they will greatly benefit from a synergetic use of HPC systems.
This session focuses on solutions, challenges, paradigms and scenarios able to show how HPC and/or Quantum Computing can help solve modern EO needs of scientists and application developers, with a preference for the synergetic use of HPC and Quantum Computing technologies.

The topics of discussion for this session, although not exhaustive, will include:

1. Combining HPC with Quantum: examining how combining quantum algorithms with classical computing algorithms executed in a hybrid HPC environment can solve novel EO problems.
2. HPC for EO Data Processing: examining how HPC can be integrated within cloud EO Data Processing platform for generating information more efficiently.
3. HPC for fostering AI in EO: discussing how training and execution of AI models can be optimised in the GPU accelerated clusters provided by several public and private HPC infrastructures in Europe.
4. Heterogeneous HPC computing: discussing how modern heterogeneous HPC systems, running workloads on GPU-boosted, memory-optimized, storage-optimized and compute-optimized clusters in parallel, can optimize the execution of EO algorithms.
5. Quantum-Enhanced Data Analysis: discussing novel quantum algorithms and computational methodologies for processing large-scale Earth Observation datasets, improving accuracy, efficiency, and scalability.
6. Quantum Machine Learning for Earth Sciences: examining the application of quantum machine learning techniques for extracting insights from complex Earth system data, enabling enhanced predictive modelling and decision support.
7. Quantum-Enabled Climate Modelling: investigating how quantum computing can revolutionize climate modelling and simulation, enabling more accurate and comprehensive assessments of climate change impacts and mitigation strategies.

Presentations and speakers:


Quantum Algorithms for Earth Observation: Overview and Perspectives


  • Mihai Datcu - CEOSpaceTech, POLITEHNICA Bucharest
  • Sigurd Huber - Microwaves and Radar Institute, German Aerospace Center (DLR)

Exploring the use of Quanvolutional Operator for Efficient Earth Observation Data Analysis


  • Francesco Mauro - Department of Engineering, University of Sannio

IEEE GRSS Quantum Computing for Earth Observation Working Group


  • Alessandro Sebastianelli - ESA, Φ-lab
  • Amer Delilbasic - Doctoral Researcher, Forschungszentrum Jülich/University of Iceland
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.85/1.86)

Session: A.03.01 Global Carbon Budgets and Earth Observation - PART 2

Despite decades of effort by the International Community, there remain significant differences between the global and regionally aggregated budgets of the key greenhouse gases (GHGs): carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). These discrepancies relate partially to the complexity of the carbon cycle and its interactions with other cycles, but also to differences in the methods and data used to generate the budgets.

The principal budget calculations are also out of synchrony. The Global Carbon Budget (CO2) is produced annually, the global methane and N2O budgets on a 3-4 year cycle, and the Regional Carbon Assessment (RECCAP) every five years. The challenge for the International Community is threefold:

• align the budget calculations in time
• develop regional budget assessments for the three GHGs on an annual cycle
• reconcile the global and regional assessments across the three GHGs.

Fundamental research is needed to respond to these challenges, especially for the terrestrial carbon component. Space-based measurements of atmospheric concentrations of GHGs from OCO-2, OCO-3 and Sentinel-5P, and observations of both the terrestrial and ocean carbon cycle, are increasing in frequency, range, and detail. Allied with the planned launches of key carbon-relevant satellites (e.g. FLEX, BIOMASS, NISAR), EO may provide a unique capability to provide a dynamic reconstruction of the carbon cycle at unprecedented scales in space and time.

This session is dedicated to how EO can be used to help provide answers to these three challenges, building on activities in collaboration with NASA and the European Commission.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Reconciling TROPOMI monthly Methane Emissions and Sentinel-2 derived Wetland Dynamics to Explain Seasonal Lags in Africa

Authors: Mengyao Liu, Ruoqi Liu, Ronald van der A, Dr. Michiel van Weele, Prof. Geli Zhang, Vincent Huijnen, Xiaoxing Zuo, Yutao Chen
Affiliations: Royal Netherlands Meteorological Institute (KNMI), De Bilt, The Netherlands, College of Land Science and Technology, China Agricultural University, Delft University of Technology
Methane (CH4) is the second most important anthropogenic greenhouse gas after CO2, accounting for 22%-34% of radiative forcing increase from greenhouse gases according to the IPCC Sixth Assessment Report. The acceleration in methane growth during 2020-2022 has been attributed to increased tropical inundation associated with La Niña conditions, heightened microbial activity in boreal regions, and decreased tropospheric OH concentration during COVID-19 shutdowns. Tropical wetlands, accounting for 50–70% of global wetlands, significantly contribute to uncertainties in the global CH4 balance due to their substantial seasonal and interannual variability in spatial extent, particularly in seasonally flooded areas such as those in Africa. Methane emissions from African wetlands have been identified as a key driver of the recent rise in atmospheric methane concentrations. However, estimating methane emissions from African wetlands is challenging due to significant spatiotemporal variability and the many environmental and climatic factors controlling the emission processes, their annual cycle, and long-term variation. In the ESA project IMPALA, an improved divergence method is applied to estimate methane emissions using TROPOspheric Monitoring Instrument (TROPOMI) satellite observations at a 0.2° grid resolution. Significant monthly varying methane emissions over the Lake Chad Basin, South Sudan, and the Okavango Delta are identified by both the operational Sentinel-5P (S5P) and WFM-DOAS (TROPOMI/WFMD v1.8) XCH4 products. Here we derived annual and monthly methane emissions for these regions from 2019 to 2022 using TROPOMI/WFMD v1.8 data. To reduce uncertainties in TROPOMI retrievals, TROPOMI/WFMD v1.8 XCH4 is first corrected using its retrieved apparent albedo.
To analyze the driving factors behind the annual cycle in methane emissions from African wetlands, the monthly extent of open and vegetated water (wetlands) at high spatial resolution (30 m) for individual years is mapped using daily Sentinel-2 observations. High-resolution maps of seasonal open water and vegetated water extent based on Sentinel-2 enable correlation analysis with TROPOMI-based monthly methane emission changes, providing new insights into the environmental and climatic drivers of African wetland methane emissions. Monthly wetland emission estimates in wetland models are primarily driven by (monthly) precipitation data. However, in tropical regions, where wetland areas are predominantly fed by river discharge, precipitation often has a delayed impact on wetland extent. This time lag between precipitation events and wetland expansion affects the seasonality and magnitude of wetland areas and their corresponding methane emissions. Moreover, our derived monthly emissions are significantly higher than previous predictions by process-based models. These discrepancies can be attributed to the underestimation of wetland areas and inaccuracies in the seasonality used as model inputs. The effects of regional livestock emissions and other climatic factors, such as soil temperature and monthly total water storage (liquid water equivalent depth, LWE), will also be discussed.
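The delayed response of wetland extent to precipitation described above is, in essence, a lagged-correlation problem. A minimal sketch with synthetic monthly series follows; the two-month delay is an illustrative assumption, not a result from the study:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lag(precip, extent, max_lag=6):
    """Return the lag (in months) that maximizes corr(extent[t], precip[t - lag]),
    plus the correlation score for every lag tried."""
    scores = {}
    for lag in range(max_lag + 1):
        p = precip[:len(precip) - lag] if lag else list(precip)
        e = extent[lag:]
        scores[lag] = pearson(p, e)
    return max(scores, key=scores.get), scores

# Synthetic monthly series: wetland extent mirrors precipitation two months later
precip = [0, 1, 4, 9, 7, 3, 1, 0, 0, 1, 4, 9]
extent = [0, 0] + precip[:-2]          # two-month river-discharge delay
lag, scores = best_lag(precip, extent)
```

With real data the same analysis would be run per region between the Sentinel-2-derived extent series and TROPOMI-based emission changes, where a nonzero best lag signals the river-discharge delay the abstract describes.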
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Small persistent clearings in humid forests drive tropical forest biomass carbon losses

Authors: YIDI XU, Philippe Ciais, Maurizio Santoro, Clément Bourgoin, François Ritter, Agnes Pellissier-Tanon, Wei Li
Affiliations: LSCE, Gamma Remote Sensing, Joint Research Centre, University of Copenhagen, Tsinghua University
Tropical forests are essential to the global C cycle, storing about 50% of the global forest above-ground C (1). Over the last 30 years, 46% of tropical forests have experienced some form of disturbance at least once, according to Landsat observations (2, 3). While some forests are cleared and converted to non-forest land, most disturbed forests can recover, gradually restoring those C stocks and ecological functions. However, the rates at which this happens are poorly constrained. Quantification of forest C loss and subsequent post-disturbance biomass recovery is thus essential for understanding forests’ substantial yet uncertain impact on the global C budget (4, 5). Here, we introduce a novel bookkeeping model to estimate C dynamics in tropical humid and dry forests affected by disturbances between 1990 and 2020. By integrating high-resolution ESA-CCI Biomass data with Landsat-based disturbance history maps, we constructed spatially explicit biomass recovery curves on a 1° by 1° grid across the tropics that track biomass changes at different forest ages following disturbances. Using the biomass recovery curves, this data-driven model detects the annual location of forest disturbances, calculates C losses associated with deforestation and degradation, and estimates subsequent C gains through recovery after fire and non-fire degradation as well as regrowth from deforestation and afforestation. Our results show contrasting patterns in C dynamics between dry and humid forests. Tropical dry forests remained carbon-neutral, as fire-induced losses were offset by post-fire recovery. In contrast, tropical humid forests experienced a net AGC loss of 15.6±7.4 PgC, primarily driven by small but persistent deforestation clearings. These small-size deforestations, despite affecting only 5% of the disturbed area, accounted for 56% of the net C losses. Over time, deforestation expanded into humid forests with higher C stock densities, intensifying AGC losses per unit area. 
Conversely, fire degradation in dry forests showed a decreasing trend over the study period. These findings highlight the disproportionate impact of small clearings on tropical C losses, emphasizing the need to curb land-use changes and prioritize recovery strategies.
References
1. Y. Pan et al., The enduring world forest carbon sink. Nature 631, 563-569 (2024).
2. T. Long, Z. Zhang, G. He, W. Jiao, C. Tang, B. Wu, ... & R. Yin, 30 m Resolution Global Annual Burned Area Mapping Based on Landsat Images and Google Earth Engine. Remote Sensing (2019). https://doi.org/10.3390/rs11050489
3. C. Vancutsem et al., Long-term (1990-2019) monitoring of forest cover changes in the humid tropics. Science Advances 7, eabe1603 (2021).
4. A. Baccini et al., Tropical forests are a net carbon source based on aboveground measurements of gain and loss. Science 358, 230-234 (2017).
5. T. A. M. Pugh et al., Role of forest regrowth in global carbon sink dynamics. Proceedings of the National Academy of Sciences 116, 4382-4387 (2019).
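One way to picture the bookkeeping logic described above is to track above-ground carbon for a single disturbed pixel: a full loss at clearing, then regrowth along an age-based recovery curve. The `bookkeeping_agc` helper, the recovery curve and all numbers below are hypothetical stand-ins for the paper's spatially explicit 1°-grid curves.

```python
import numpy as np

def bookkeeping_agc(pre_agc, disturbance_year, recovery_curve, years):
    """Minimal bookkeeping for one pixel: pre-disturbance AGC until the
    disturbance year, then pre_agc scaled by an age-indexed recovery
    fraction (recovery_curve[age], capped at the last entry)."""
    agc = []
    for y in years:
        if y < disturbance_year:
            agc.append(pre_agc)
        else:
            age = y - disturbance_year
            frac = recovery_curve[min(age, len(recovery_curve) - 1)]
            agc.append(pre_agc * frac)
    return np.array(agc)

# Hypothetical recovery curve: 0 at clearing, back to ~60% after 30 years.
curve = 0.6 * (1 - np.exp(-np.arange(31) / 10.0))
years = np.arange(1990, 2021)
trajectory = bookkeeping_agc(150.0, 2000, curve, years)  # MgC/ha
net_loss = trajectory[0] - trajectory[-1]
```

Summing such per-pixel trajectories over disturbance maps is what turns annual disturbance locations into net C-loss estimates.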
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: EO-LINCS: Towards integrated use of multiple (EO) observational datasets for land carbon cycle studies

Authors: Sujan Koirala, Dr. Jacob A. Nelson, Konstantin Ntokas, Mike O'Sullivan, Agnes Pellissier-Tanon, Yogesh K. B. Singh
Affiliations: Max Planck Institute For Biogeochemistry, Brockmann Consult GmbH, University of Exeter, Laboratory of Climate and Environmental Sciences - University of Versailles Saint-Quentin-en-Yvelines
Land ecosystems play a central role in carbon-climate feedbacks and their interannual and long-term variability. Yet, land carbon exchange remains one of the largest sources of uncertainty. Observational data, from site-level measurements to remote sensing Earth Observations (EO), have therefore been instrumental in understanding and modeling the carbon cycle. There are, though, inherent differences among observations in terms of the ecosystem processes captured, the domain of measurement, the instruments and sensors used, the characteristics of a measured variable, and the methods of mapping them to a grid. Due to these differences, a careful integration of diverse observations and their processing is needed within and across different carbon cycle applications, especially considering the growing availability of EO data streams. Under the ESA Carbon Science Cluster, the EO-LINCS (Earth Observation - Local and Integrated data eNquiry for Terrestrial Carbon Science) project aims to promote a community effort towards an integrated understanding of the terrestrial carbon cycle. In particular, EO-LINCS aims to:
• identify methods and technology to harmonize, exploit, and link diverse data sources and streams across spatial and temporal scales, from site to regional contexts;
• evaluate the cross-consistency of EO data with on-the-ground/in-situ observations of ecophysiological variables of the carbon cycle;
• translate knowledge learned at the local scale using EO data and site measurements to the spatial scales of terrestrial carbon models, and improve their parameterizations across space;
• use novel data streams to constrain the contemporary carbon cycle and reduce the uncertainties in current and future carbon fluxes and states.
Here, we will present an overview of the EO-LINCS project and the first lessons learnt with regard to using multiple observational datasets within the following land carbon cycle case studies (SCS):
1. SCS1: Explanatory power of novel EO data streams for predicting net carbon fluxes
2. SCS2: Forest recovery post disturbance
3. SCS3: Model-Data Fusion for Understanding Carbon State-Flux Relationships Across Space
4. SCS4: EO enhanced benchmarking of global carbon budget vegetation models
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Quantifying biomass loss due to forest disturbances across Europe

Authors: Dr. Katja Kowalski, Alba Viana-Soto, Dr. Martin Brandt, Philippe Ciais, Wanda De Keersmaecker, Prof. Dr. Rasmus Fensholt, Thomas Pugh, Ruben Van De Kerchove, Prof. Dr. Cornelius Senf
Affiliations: Earth Observation for Ecosystem Management, School of Life Sciences, Technical University of Munich, Department of Geosciences and Natural Resource Management, University of Copenhagen, Laboratoire des Sciences du Climat et de l’Environnement, CEA/CNRS/UVSQ/Université Paris Saclay, Vlaamse Instelling Voor Technologisch Onderzoek (VITO), Department of Physical Geography and Ecosystem Science, Lund University
Europe’s forests hold over 12 Pg of carbon in aboveground biomass and represent one of the largest carbon sinks in Europe. Yet, recent increases in forest disturbances might severely affect this important carbon sink, challenging the EU’s goals to increase carbon uptake of the land sector by 2030. Advances in satellite-based high-resolution monitoring of forest disturbances, as well as aboveground biomass mapping at continental to global scales, offer novel possibilities to improve our understanding of the role of forest disturbances in the carbon cycle. Here, we tap into this rich data source to quantify the impact of forest disturbances on the European forest carbon sink over the past four decades. Specifically, we (i) developed a robust space-for-time approach to derive disturbance severity in terms of aboveground biomass lost during disturbance events, (ii) compared disturbance severity from three state-of-the-art biomass products, and (iii) quantified the biomass loss from natural and human disturbances over the past four decades across all of Europe. We estimated disturbance severity and associated biomass losses based on three aboveground biomass density (AGBD) products, namely GEDI L4A Footprint AGBD, a Planet-based biomass map for Europe, and ESA CCI Biomass v5. We used space-for-time substitution to derive disturbance severity from pre- and post-disturbance AGBD estimates at 20 km resolution across Europe. We then combined disturbance severity estimates with Landsat-based estimates of disturbance area to quantify biomass losses from disturbances annually from 1985 to 2023. Disturbance severity estimates from the three biomass products agreed well, with the highest agreement between GEDI and Planet (R² = 0.4, bias = 2.82 Mg/ha, mean GEDI = 73.4 Mg/ha, mean Planet = 76.2 Mg/ha). Estimates from CCI (mean = 41.5 Mg/ha) had a higher bias when compared to Planet (R² = 0.22, bias = -34.65 Mg/ha) and GEDI (R² = 0.28, bias = -31.83 Mg/ha). 
Total biomass loss due to forest disturbances amounted to 4.5-6.6 Pg across Europe for the past four decades. More than 80% of overall biomass loss is attributable to human disturbances, i.e. mostly harvest for timber production. While biomass loss from human disturbances increased steadily over time, biomass loss due to natural disturbances was more stochastic and linked to large-scale wind events in the past. In more recent years, however, increasing bark-beetle activity caused higher biomass losses in temperate forests, accounting for 28% of annual biomass loss since 2018. Overall, biomass loss from disturbances increased from 0.62 Mg/ha on average before 2000 to 0.69 Mg/ha on average after 2000 and to 0.89 Mg/ha on average after 2018. Disturbance area, in turn, increased only moderately after 2018 in temperate forests, suggesting that even slight increases in disturbances can cause large biomass losses due to overall large biomass stocks in regions most susceptible to natural disturbances and timber harvest. Our results thus have far-reaching implications for the forest carbon sink of Europe and call for improved European forest management strategies to buffer the impacts of increasing disturbances under climate change.
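The space-for-time severity estimate and the product intercomparison metrics quoted above (bias, R²) can be sketched as follows. The data are synthetic and the helpers hypothetical simplifications, not the authors' processing code.

```python
import numpy as np

def severity(pre_agbd, post_agbd):
    """Relative biomass loss from pre- and post-disturbance AGBD,
    as in a space-for-time comparison of disturbed vs. undisturbed stands."""
    return (pre_agbd - post_agbd) / pre_agbd

def agreement(a, b):
    """Bias and R² between two severity estimates from different products."""
    bias = np.mean(a - b)
    r = np.corrcoef(a, b)[0, 1]
    return bias, r ** 2

# Synthetic pre-/post-disturbance AGBD samples (Mg/ha) and a second
# "product" that sees the same severity plus retrieval noise.
rng = np.random.default_rng(1)
pre = rng.uniform(80, 250, 100)
post = pre * rng.uniform(0.1, 0.6, 100)
sev_product1 = severity(pre, post)
sev_product2 = sev_product1 + rng.normal(0, 0.05, 100)
bias, r2 = agreement(sev_product1, sev_product2)
```

Multiplying such severity fractions by disturbance area and biomass density is the step that converts mapped disturbances into annual biomass-loss totals.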
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Carbon loss in the Amazon rainforest over the past 15 years, where and why

Authors: Jean-Pierre Wigneron, Xiaojun Li, Arthur Fendrich, Stephen Sitch, Hui Yang, Scott Barningham, Ana Bastos, Emilio Chuvieco, Amin Khairoun, Dr. Frédéric Frappart, Dr. Lei Fan, Huan Wang, Dr. Clément Albergel, Luis Aragao, Dr. Philippe Ciais
Affiliations: Inrae, ISPRA, European commission, University of Exeter, Peking University, University of Leipzig, Universidad de Alcala, Southwest University, European Space Agency Climate Office, National Institute for Space Research, LSCE, Laboratoire des Sciences du Climat et de l'Environnement
Globally, the Amazon forest plays a prominent role in the carbon and water cycles of our planet. While the effects of degradation and deforestation on the carbon budget of the Amazon forest have been widely studied, less analysis has focused on the intact Amazon forest, which corresponds to forest areas where no major anthropogenic disturbances have yet been identified. The intact Amazon forest has been exposed to severe drought and mortality in recent years (Wigneron et al., 2020, Feng et al., 2024). In addition, intact forests in the central and northeastern Amazon have also experienced disturbances from windthrow events (Lapola et al., 2023). The effects of these natural disturbances on C stocks, and the regional ecosystem differences and changes in carbon stocks, are not yet fully understood in the Amazon basin (Rosan et al., 2024). A fully integrated assessment of biogeochemical and hydrological processes in the Amazon requires new approaches that combine multiple EO data streams and flexible machine learning models. In the Amazon forest, a major limitation of remote sensing applications - for optical, active microwave and lidar systems alike - is the saturation of the observed signal over dense forest biomass. Here, we aim to overcome these limitations by using L-VOD datasets to better track biomass carbon changes in the Amazon (Wigneron et al., 2024), a requirement of great importance for determining local greenhouse gas budgets. L-VOD is obtained from low-frequency passive microwave observations (at L-band, frequency = 1.4 GHz), which are insensitive to saturation up to AGB levels of about 400 t/ha (see review in Wigneron et al., 2024). Thanks to these unique properties, L-VOD has been used in several analyses in dense tropical forests, including Amazonian forests, e.g. 
to monitor the recovery of tropical forests after the strong El Niño event of 2015-2016 (Wigneron et al., 2020), to analyse the respective contributions of degradation and deforestation to forest carbon loss in the Brazilian Amazon (Qin et al., 2021), and to assess the patterns and drivers of tropical aboveground carbon changes (Feng et al., 2024). In the ESA Climate Space RECCAP2 project, we aim to produce wall-to-wall maps of biomass carbon changes over the period 2010-2014 from L-VOD satellite measurements by scaling from coarse to high resolution. To this end, we estimate the annual net C balance of the AGB at 25 km using a top-down approach based on L-VOD, followed by disaggregation of L-VOD-derived forest C losses and gains at 100 m using a statistical downscaling model. We then attribute the observed C losses and gains to fire and other natural disturbances, climate, vegetation and soil parameters. These results should contribute to a better understanding of the role of intact forests in the carbon budget of the entire Amazon forest.
References
Feng, Y., Ciais, P., Wigneron, J.-P. et al. (2024). Global patterns and drivers of tropical aboveground carbon changes. Nature Climate Change. https://doi.org/10.1038/s41558-024-02115-x
Lapola, D. M., Pinho, P., Barlow, J., Aragão, L. E., Berenguer, E., Carmenta, R., ... & Walker, W. S. (2023). The drivers and impacts of Amazon forest degradation. Science, 379(6630), eabp8622. https://doi.org/10.1126/science.abp8622
Qin, Y., Xiao, X., Wigneron, J.-P., Ciais, P., Brandt, M., Fan, L., Li, X., Crowell, S., Wu, X., Doughty, R., Zhang, Y., Liu, F., Sitch, S., & Moore III, B. (2021). Carbon loss from forest degradation exceeds that from deforestation in the Brazilian Amazon. Nature Climate Change, 11, 442-448. https://doi.org/10.1038/s41558-021-01026-5
Rosan, T. M., Sitch, S., O'Sullivan, M. et al. (2024). Synthesis of the land carbon fluxes of the Amazon region between 2010 and 2020. Communications Earth & Environment, 5, 46. https://doi.org/10.1038/s43247-024-01205-0
Wigneron, J.-P., Ciais, P., Li, X., Brandt, M., Canadell, J. G., Tian, F., Wang, H., Bastos, A., Fan, L., Gatica, G., Kashyap, R., Liu, X., Sitch, S., Tao, S., Xiao, X., Yang, H., Espinoza Villar, J. C., Frappart, F., Li, W., Qin, Y., De Truchis, A., & Fensholt, R. (2024). Global carbon balance of the forest: satellite-based L-VOD results over the last decade. Frontiers in Remote Sensing, 5:1338618. https://doi.org/10.3389/frsen.2024.1338618
Wigneron, J.-P., Fan, L., Ciais, P., Bastos, A., Brandt, M., Chave, J., Saatchi, S., Baccini, A., & Fensholt, R. (2020). Tropical forests did not recover from the strong 2015-2016 El Niño event. Science Advances, 6(6), eaay4603. https://doi.org/10.1126/sciadv.aay4603
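The coarse-to-fine disaggregation step described in the abstract above (25 km L-VOD net C change redistributed at 100 m) amounts to a mass-conserving weighted spread. This is a minimal sketch with made-up weights, not the project's statistical downscaling model.

```python
import numpy as np

def disaggregate(coarse_change, weights):
    """Distribute one coarse-cell carbon change over fine-resolution
    pixels in proportion to non-negative weights (e.g. a statistical
    model's predicted loss propensity), conserving the coarse total."""
    w = np.clip(weights, 0, None)
    if w.sum() == 0:
        return np.zeros_like(w)
    return coarse_change * w / w.sum()

# Toy example: spread a net loss of -120 units over a 4x4 block of
# fine pixels according to hypothetical disturbance weights.
weights = np.array([[0, 0, 1, 3],
                    [0, 1, 2, 4],
                    [0, 0, 1, 2],
                    [0, 0, 0, 2]], dtype=float)
fine_change = disaggregate(-120.0, weights)
```

Conserving the coarse-cell total is what keeps the downscaled map consistent with the top-down 25 km budget.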
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Focus on the Arctic-Boreal: towards reconciling terrestrial CO2 flux estimates from regional and global data-driven up-scaling approaches

Authors: Dr. Sophia Walther, Dr. Anna Virkkala, Dr. Jacob A. Nelson, Isabel Wargowsky
Affiliations: Max-Planck-Institute for Biogeochemistry, Woodwell Climate Research Center
The combination of in-situ measurements of terrestrial CO₂ fluxes with relevant information on environmental conditions from in-situ and satellite observations using machine learning is a key method for mapping terrestrial fluxes to regional or global domains. It offers a scalable, complementary perspective to process-based models. In the Arctic-Boreal zone this is particularly valuable, because the latter still have large uncertainties in representing the dynamics of vegetation, permafrost, and hydrology, with strong implications given the large soil carbon stocks the zone holds and its potential to trigger global climate feedbacks. However, data-driven upscaling is inherently dependent on data quantity and quality, on the representativeness of the flux data, and on the relevance of the predictors. Observational conditions, both in-situ and spaceborne, are particularly challenging in the Arctic-Boreal zone and limit both data quantity and quality compared to non-boreal regions. Modeling the Arctic-Boreal zone can therefore potentially benefit from domain-specific data streams, such as additional flux data from chambers and spaceborne observational proxies that are particularly relevant in this domain. We systematically explore data-driven terrestrial CO₂ flux estimates in the Arctic-Boreal zone, using products from efforts with a high-latitude focus as well as those with a global perspective. We aim at understanding the robustness of flux magnitudes and their spatiotemporal dynamics. We further explore how the temporal resolution (ranging from subdaily to monthly) and the tailoring of predictors and training samples to a specific domain can impact CO₂ fluxes. Results show major differences in net CO₂ uptake magnitude and seasonality related to the chosen training dataset. 
Preliminary tests on the influence of the choice of predictor variables and of the temporal resolution indicate a smaller, yet non-negligible, contribution of these methodological choices. Our work is a step towards better understanding the performance of global and regional upscaling efforts and quantifying CO₂ exchange in the vulnerable and spatially heterogeneous Arctic-Boreal zone.
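The upscaling workflow described above, fitting a model that links site-level fluxes to environmental predictors and applying it to gridded fields, can be sketched minimally. Plain linear regression on synthetic data stands in for the machine-learning models actually used; all variable names and coefficients are hypothetical.

```python
import numpy as np

# Synthetic site-level training data: net ecosystem exchange (NEE)
# driven by temperature and NDVI plus noise (assumed relationship).
rng = np.random.default_rng(2)
n_sites = 40
site_temp = rng.uniform(-5, 15, n_sites)
site_ndvi = rng.uniform(0.2, 0.8, n_sites)
site_nee = -2.0 * site_ndvi + 0.1 * site_temp + rng.normal(0, 0.1, n_sites)

# Fit the predictor-to-flux model (least squares stands in for ML).
X = np.column_stack([site_temp, site_ndvi, np.ones(n_sites)])
coef, *_ = np.linalg.lstsq(X, site_nee, rcond=None)

# Apply the fitted model to a toy 10x10 grid of predictor fields.
grid_temp = rng.uniform(-5, 15, (10, 10))
grid_ndvi = rng.uniform(0.2, 0.8, (10, 10))
grid_nee = coef[0] * grid_temp + coef[1] * grid_ndvi + coef[2]
```

Swapping the training sites, predictors, or temporal resolution in this pipeline is exactly the kind of methodological choice whose impact the abstract sets out to quantify.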
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.94/0.95)

Session: C.05.01 PROBA-V and PV-CC

The Proba-V operational mission ended in October 2021 after 8 years of successful operations, followed by an experimental mission phase (which included regular Moon sensing). The data collected during the 8+ years largely exceeded expectations, providing not only continuity with the SPOT-VGT data but also a three-fold increase in resolution.
The definition of a new processor prototype for the final full data reprocessing is ongoing, also including the SPOT-VGT data, with the goal of creating a fundamental data record for vegetation (FDR4VGT) collection covering more than 20 years.
In addition, Proba-V carried other instruments, of which the Energetic Particle Telescope (EPT) is of particular interest.
In October 2023 a small satellite (PVCC) was launched with the spare camera of the Vegetation instrument.
PV-CC's purpose is to extend the Proba-V mission and to analyse and compare the results obtained by carrying the same Vegetation instrument as Proba-V, but on a 12U microsat. In the future another microsat is planned, a hyperspectral mission, whose launch is foreseen by September 2025.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: PV-CC In flight calibration performances during the commissioning

Authors: Iskander Benhadj, Sindy Sterckx, Tom Van Roey, Stefan Adriaensen, Jan C. Dries, Fabrizio Niro, Roberto Biasutti
Affiliations: VITO Remote Sensing, European Space Agency (ESRIN)
Developed by prime contractor Aerospacelab in Belgium for ESA, the PROBA-V CubeSat Companion (PV-CC) In-Orbit Demonstration (IOD) mission is a small, low-cost 12-unit CubeSat built up from standardised 10-cm boxes. It was successfully launched on 9 October 2023, carrying a cut-down version of the vegetation-monitoring instrument aboard the Earth-observing PROBA-V to perform experimental observations. PV-CC's primary goal is therefore to evaluate the feasibility of mounting high-performance sensors on nanosats while maintaining data quality. The mission is supported through the fly-in-orbit testing element of ESA's General Support Technology Programme. PV-CC is flying at 564 km in a Sun-synchronous orbit, providing a similar Earth observation dataset to PROBA-V but with an improved spatial resolution (70 m at nadir). VITO is responsible for the development and hosting of the data processing facility and for the in-flight Cal/Val activities. As PV-CC carries the same Vegetation imager, the data processing and calibration largely reuse the existing PROBA-V ground segment infrastructure. Processing adaptations and fine-tuning have been developed to accommodate the PV-CC specifications and differences. After launch, PV-CC underwent critical checks to verify its subsystems' functionality and establish its initial orientation and trajectory in orbit. Following LEOP (Launch and Early Orbit Phase), platform commissioning was conducted by the spacecraft manufacturer to ensure that the PV-CC platform and all its subsystems were operating as expected. Once the PV-CC platform was commissioned, it started acquiring imagery datasets over calibration sites to perform radiometric and geometric in-flight calibration, a critical activity to ensure accurate Earth observation data quality. Continuous and highly autonomous in-flight geometric calibration/validation software has been applied to enhance the geolocation accuracy of the PV-CC products. 
It estimated appropriate adjustments to the Exterior Orientation parameters (boresight angles) and Interior Orientation deformations (CCD line-of-sight vectors) of the PV-CC Vegetation Instrument (VI). The absolute geolocation error was reduced from 48 pixels to the subpixel level. A correction was also applied to remove the interband errors. As only a limited preflight radiometric characterization was performed, with no absolute calibration and equalization data available before launch, in-flight radiometric calibration was critical for PV-CC. The absolute radiometric calibration was performed using observations over desert PICS (Pseudo-Invariant Calibration Sites) and RadCalNet sites, as well as lunar calibration. Furthermore, vicarious approaches were successfully implemented to retrieve the equalization coefficients correcting for both high- and low-frequency non-uniformities in the images. In this paper, the overall PV-CC in-flight calibration activities and results will be presented.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: The Proba-V/EPT near real-time data products within the ESA Space Weather Service and their use

Authors: Stanislav Borisov, Sylvie Benck
Affiliations: Université catholique de Louvain
The Energetic Particle Telescope (EPT) on board Proba-V (launched on 7 May 2013 into a polar Low Earth Orbit at 820 km altitude) has by now been providing, quasi-continuously for more than 11 years, flux spectra data for electrons (0.5–8 MeV, 5 bins), protons (9.5–248 MeV, 10 bins) and α-particles (38–980 MeV, 10 bins) with a time resolution of 2 seconds. The data are transmitted to ground 3–4 times per day, where within several hours they are processed into the following data products: 1) daily flux spectra time series along the orbit; 2) weekly flux geographical maps; 3) weekly averaged auroral electron energy spectra; 4) weekly averaged SAA proton and helium energy spectra; 5) a yearly static radiation model of the three energetic particle species: predefined percentile flux levels and corresponding time series on a regular B-L grid; 6) a high-latitude (electron) and polar (proton, helium) survey of light charged particles; 7) Total Ionizing Dose (TID) and Total Non-Ionizing Dose estimation for LEO. All these products are publicly available at the ESA Space Weather Service Portal in the form of images and numerical text files. An interesting feature of the PROBA-V/EPT dataset is that, although the instrument has only one looking direction, it has recently also allowed systematic sampling of the anisotropy of particle fluxes in the inner radiation belt. In fact, in recent years measurements from different directions with respect to the magnetic field come naturally as a result of the new satellite orientation strategy used by the Redu operations team to ensure a good thermal balance of the platform on the precessing orbit. This presentation starts with a short description of the EPT instrument, highlighting the particularities of its concept. The instrument showed excellent operational stability throughout its mission, except for an issue with one detector after a year of operation, which was, however, successfully recovered. 
Then the presentation will give a summary on the status of the EPT mission as well as on its data products that are available within the ESA Space Weather Service Network and their possible use within space radiation studies and radiation effect assessment. For these latter topics, some examples of specific analysis of solar energetic particle events and geomagnetic storms will be given.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: An overview of the Lunar Calibration developments driven by PROBA-V

Authors: Stefan Adriaensen
Affiliations: Vito
For more than 20 years, the Moon has been evaluated as a valuable radiant calibration source for Earth observation sensors. The absence of an atmosphere, its temporal stability, and its high and predictable availability make it an ideal target for the vicarious calibration of optical sensors. The photometric stability of the surface of the Moon is estimated to be of the order of 10⁻⁸ per year. With the launch of PROBA-V in 2013, the exploration of the Moon as a possible target for vicarious calibration of the PROBA-V instrument was initiated. Within the first month, the Moon was captured and used to support the in-orbit verification of instrument properties like MTF and straylight. Monthly lunar images have been taken during the entire operational lifetime, enabling automated stability monitoring of the instrument. These curves were cross-verified with similar desert target monitoring. At the end of its extended operations, PROBA-V slowly moved to a more limited acquisition rate, which gave the opportunity to increase the number of lunar acquisitions. At the end of 2020, after approx. 3000 days in orbit, the first full lunar cycle was acquired. From that period until summer 2024, PROBA-V operated in experimental mode, which enabled it to acquire multiple full cycles between -90 and 90 degrees phase angle. In total, more than 600 lunar images have been acquired with the PROBA-V instrument. In the meantime, VITO was also taking part in different initiatives in the domain of lunar calibration. Task groups like CEOS-IVOS and GSICS also acknowledged the use of the Moon for calibration purposes. From 2018 on, ESA ran a measurement campaign on the Pico Teide in Tenerife to capture the absolute irradiance level of the Moon and to derive a new lunar irradiance model (LIME) from these measurements. The PROBA-V acquisitions have supported the validation of this new lunar irradiance model, both its absolute level and its phase dependency. 
As the development is still ongoing, lunar calibration was incorporated in the calibration plans of existing missions like S2 and S3 for the O-MPC and in the preparation of new hyperspectral missions like FLEX and CHIME. In this paper, an overview of the PROBA-V mission with respect to lunar calibration activities will be presented, and the need for a well-characterized lunar irradiance and reflectance model will be shown.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Monitoring Biweekly Dynamics of Pan-Tropical Industrial Plantations Over 6 Years Thanks to 100m PROBA-V

Authors: Audric Bos, Celine Lamarche, Fabrizio Niro, Pierre Defourny
Affiliations: UCLouvain-Geomatics, Serco for European Space Agency (ESA)
Anthropogenic land conversion profoundly impacts the Earth's surface, with varying effects across regions. In the tropical zone, industrial plantations particularly impact natural forests. Monitoring land use and land cover changes due to agricultural expansion is crucial for achieving sustainable imports to the European Union (EU) under the EU Deforestation-free Regulation (EUDR). Satellite Earth observation missions are used to monitor tropical ecosystems and their changes. Mapping of annual global deforestation is becoming more accurate, but the problem is not yet completely solved, and mapping the dates of tree cutting and planting with greater temporal accuracy remains a challenge. This study addresses this gap by developing a near real-time, sensor-agnostic method for monitoring the expansion and rotation of industrial perennial plantations at a bi-weekly time step. It is developed using the full 100 m PROBA-V Collection 2 archive with a 5-day revisit, spanning 2014 to 2020. A novel index enabled distinguishing vegetation from land cleared for plantations. The variability due to atmospheric and weather conditions, as well as the intra- and inter-annual variability of the vegetation spectral signature, was mitigated using spatial standardization. Statistical thresholds identified pixels that deviated from the normal distribution of forest spectral values, capturing land use and land cover changes. This results in a series of pan-tropical annual maps (2015-2020) illustrating the typical dynamics of perennial plantations, from land preparation to mature plantations, including the dates of cutting and planting. Validation using 899 randomly selected samples through confidence-based stratified sampling yielded a global accuracy of 82% ± 2% for new plantation detection. 62% of the detections were accurate to the exact year, which represents a significant 19% improvement over previous studies. 
Initial estimates of industrial plantation dynamics indicate that new oil palm plantations cover approximately 3,064 km² per year. Of this, 21% represents expansion into new areas, specifically 11% from tropical forest, 7% from other land uses and 4% from other perennial plantations. The remaining 79% is due to rotation within existing plantations. Meanwhile, other perennial plantations are planted at a rate of about 13,875 km² per year, with 19% from expansion into new areas - 15% from other land uses and 4% from tropical forests - and 81% from rotation within existing plantations. This work demonstrates the effectiveness of optical 100m spatial resolution and 5-day revisit for near real-time pan-tropical mapping of perennial industrial plantations in cloudy regions. It paves the way for the integration of the algorithm on satellites with higher temporal and spatial resolution. This collection of maps detailing industrial perennial plantations is accessible via the Zenodo directory at https://doi.org/10.5281/zenodo.14217166 and is scheduled for publication.
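The spatial standardization and statistical thresholding described above can be illustrated with a z-score rule on a synthetic index image. The `detect_clearings` helper and the fixed threshold are hypothetical simplifications of the paper's confidence-based approach.

```python
import numpy as np

def detect_clearings(index_map, z_thresh=-3.0):
    """Standardize a vegetation-index image over the scene and flag
    pixels whose z-score falls below a threshold, i.e. deviates from
    the distribution of forest spectral values."""
    z = (index_map - np.nanmean(index_map)) / np.nanstd(index_map)
    return z < z_thresh

# Synthetic scene: intact forest around 0.8, one freshly cleared patch.
rng = np.random.default_rng(3)
scene = rng.normal(0.8, 0.02, (100, 100))
scene[10:13, 10:13] = 0.2
cleared = detect_clearings(scene)
```

Because standardization is relative to the scene itself, the same rule transfers across atmospheric conditions and seasons, which is the point of the spatial standardization step.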
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Proba-V Companion Cubesat Paving the Way for In-Orbit Demonstrations

Authors: Xavier Collaud, Mikko Viitala
Affiliations: Aerospacelab
The Proba-V Companion CubeSat (PVCC) mission, initiated by VITO and supported by the Belgian Science Policy Office, has been realized by a consortium led by Aerospacelab and composed of VITO and OIP. This mission involves a 12U CubeSat designed to acquire images with a known payload on a smaller platform. PVCC was meant to serve as a complementary satellite to Proba-V, which was launched in 2013 and tasked with mapping land cover and vegetation growth across the entire planet every two days. Therefore, PVCC features the spare spectral imager from Proba-V as its main payload to provide data that improves the calibration of Earth Observation imagery by leveraging a CubeSat platform. This presentation aims to provide an overview of the PVCC mission, including its launch, initial imaging activities, commissioning results, limitations and mission status. Key technical aspects such as the payload thermal environment and Attitude and Orbit Control System (AOCS) commissioning will be discussed. The knowledge and lessons learned from this mission are crucial in enabling the demonstration of new Earth Observation capabilities and the provision of operationally ready products. The PV-CC mission is significant as it highlights the potential of CubeSats in achieving objectives traditionally handled by larger satellites at a fraction of the cost. The compact nature of CubeSats and their ability to be launched as secondary payloads provide a cost-effective solution for acquiring Earth Observation data. Additionally, the use of spare components from Proba-V illustrates an efficient approach to resource utilization and mission planning. However, it is important to acknowledge that while CubeSats have proven to be highly effective for initial and low-scale testing, their physical limitations, such as size constraints and power capacity, are pushing Aerospacelab towards developing larger platforms to meet more complex and demanding mission requirements. 
In conclusion, the Proba-V Companion CubeSat mission exemplifies the advancements and potential of CubeSat technology in Earth Observation. Its implementation and the insights gained from its operations will undoubtedly influence future missions and the development of similar projects, reinforcing the value of CubeSats in the expanding landscape of space exploration and data acquisition. As follow-up missions are being considered, the move towards larger platforms will be assessed to overcome the inherent limitations of nanosats, ensuring that future missions can achieve even greater scientific and operational goals.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: FDR4VGT: building a 20+ years data record of land surface reflectances from SPOT-VGT and PROBA-V missions

Authors: Dr Iskander Benhadj, Sindy Sterckx, Joram Agten, Tom Van Roey, Stefan Adriaensen, Marta Luffarelli, Nicolas Misk, Roberto Biasutti, Fabrizio Niro, Didier Ramon, Luis Gómez-Chova
Affiliations: Vito, Rayference, ESA, Hygeos, Image Processing Laboratory, University of Valencia
The FDR4VGT project addresses the critical need for harmonized and temporally stable satellite data to support long-term climate and environmental research. This initiative focuses on the SPOT-VGT (1998–2014) and PROBA-V (2013–2020) satellite missions, which collectively provide a 20+ year global dataset spanning the visible to shortwave infrared spectrum. The project's primary objective is to establish a Fundamental Data Record (FDR) by harmonizing SPOT-VGT1, SPOT-VGT2, and PROBA-V datasets. Key goals include improving radiometric and geometric harmonization and generating per-pixel uncertainty estimates to ensure traceability throughout the processing chain. User insights gathered through a survey conducted at the project's onset highlighted several priorities, including the importance of pixel-level uncertainty information, improved cloud screening, and harmonization across sensors. Users also emphasized the need for uncertainty propagation to NDVI and composite products, further reinforcing the importance of these elements in the project's framework. A sensor-to-sensor harmonization process will be implemented within the project, adhering to FIDUCEO definitions. This process will address and correct calibration biases between SPOT-VGT1, SPOT-VGT2, and PROBA-V, while considering associated uncertainties. This approach will enhance radiometric harmonization and temporal stability by quantifying and correcting inter-sensor biases and sensor-induced drifts. An important component of FDR4VGT is the development of a reliable and consistent cloud masking algorithm, a need clearly identified in the user survey. The cloud masking algorithm will be based on a fully convolutional neural network (FCNN), extending the algorithm successfully implemented for PROBA-V. Atmospheric correction will also be harmonized across VGT1, VGT2, and PROBA-V using the SMAC-CL processor and ancillary data from the MERRA-2 reanalysis. 
Uncertainty propagation to Level 2 products will be conducted in compliance with FIDUCEO standards to ensure rigor and traceability. The project is organized into two distinct phases. The Development Phase, spanning from 2024 to 2026, focuses on the design and validation of algorithms for Level 1C, Level 2, and Level 3 processing, including the assessment of inter-sensor biases and the estimation of uncertainties. Following this, the Integration Phase, scheduled from 2026 to 2028, will involve deploying the processing chain on an operational platform and reprocessing the entire 20+ year data archive. Ultimately, FDR4VGT will deliver a harmonized and temporally stable dataset of reflectances at top-of-atmosphere (TOA) and top-of-canopy (TOC) levels, which could potentially be extended in time using the Sentinel-3 SYN VGT dataset. The FDR4VGT dataset will provide a robust foundation for climate studies, enabling seamless integration into higher-level thematic analyses, and ensuring long-term data preservation and usability.
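The uncertainty propagation to NDVI requested by users can be illustrated with a first-order (Taylor series) sketch. The function below is a simplified illustration of the principle, not the FDR4VGT processor itself, and assumes the red and NIR band uncertainties are independent:

```python
import math

def ndvi_with_uncertainty(nir, red, u_nir, u_red):
    """First-order propagation of independent band uncertainties to
    NDVI = (NIR - RED) / (NIR + RED).

    Partial derivatives:
        dNDVI/dNIR =  2*RED / (NIR + RED)**2
        dNDVI/dRED = -2*NIR / (NIR + RED)**2
    """
    s = nir + red
    ndvi = (nir - red) / s
    u_ndvi = 2.0 / s**2 * math.sqrt((red * u_nir) ** 2 + (nir * u_red) ** 2)
    return ndvi, u_ndvi
```

For example, reflectances of 0.4 (NIR) and 0.1 (red), each with an uncertainty of 0.01, give NDVI 0.6 with an uncertainty of about 0.033.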
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall G1)

Session: C.03.06 Synergy of visible and infrared measurements for Earth surface applications

Current and future Copernicus Sentinels provide multiple observations of the Earth's surface in the visible and infrared domains. Sentinel-2 is the workhorse of land observations for Copernicus, providing multi-spectral high-resolution imagery in 13 channels ranging from the visible to the NIR and SWIR regions, while Sentinel-3 observations from OLCI (21 channels in the visible and NIR) and SLSTR (9 channels from the visible to the thermal infrared, with dual-view capability) are crucial for operational oceanography, land and atmosphere monitoring. The Copernicus Expansion Missions CHIME and LSTM will further add to the pool of observations, covering the high spatial resolution requirements for visible-to-shortwave-infrared spectroscopy and temperature measurements, respectively, over land and coastal areas. The Next Generation Sentinel-2 and Sentinel-3 missions will bring further performance improvements, with better spatial and spectral resolution (Sentinel-3 Next Generation Optical will be virtually hyperspectral in the visible and near infrared).
While observations from single instruments are at the basis of many operational processing chains that generate products for the Copernicus Services (such as ocean colour, SST, LST, vegetation indices, land cover products), the potential synergy between visible and infrared is still mostly unexploited (with the notable exception of some atmospheric applications that use VIS/IR data together in operational products, for example for aerosols).
This session will welcome all contributions that deal with the synergy of visible and infrared observations from current and future Copernicus missions to improve the characterisation of the Earth's surfaces (land, ocean, cryosphere) and prepare the user community for future synergistic operational products. Possible contributions include but are not limited to:
• Data merging for synergy: remapping, harmonization, fusion of data from different instruments and views.
• Use of dual-view observations and infrared channels to improve the atmospheric correction of ocean and land colour applications.
• Use of combined visible and infrared for improved surface classification.
• Operationalisation of new synergistic products for agriculture, water resources management, weather forecasts, climate studies (for instance, evapotranspiration).
• Thermal and optical remote sensing for urban management.

Speakers:


A Review of Visible/IR Synergy for Land Applications


  • Carolien Totè - VITO

Towards a Copernicus Land Monitoring Service Actual Evapotranspiration Product


  • Radoslaw Guzinski

Evaluation of Copernicus Sentinel-3 derived SYNERGY surface directional reflectance products


  • Naga Moparthy

SWIR-band atmospheric correction for OLCI-SLSTR Synergy and S3NGO requirements


  • Constant Mazeran
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L1/L2)

Session: F.05.04 Earth Observation in Governance: Bridging Data and Decision-Making

This session aims to bring together practitioners from government agencies, research organizations, and commercial satellite data providers who utilize remote sensing data to enhance public sector services and efficiency. Case studies that showcase the tangible benefits of integrating EO data into public sector operations using up-to-date technologies and approaches, such as advanced data processing techniques - including Machine Learning and Deep Learning - are particularly welcome.
These studies should illustrate how remote sensing has led to measurable improvements in areas such as environmental protection, crowd management, law enforcement, and natural disaster management or response.
The focus will be on quantifying the socio-economic impacts of Earth Observation (EO) technologies on the public sector, such as achieving efficiency savings, improving public services, stimulating economic growth, and enhancing decision-making processes.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L1/L2)

Presentation: The Valencian Community from Satellite: The VALENSAT Project

Authors: José Antonio Sobrino, Sergio Gimeno, Virginia Crisafulli, Alvaro Sobrino-Gomez, Rafael Llorens, Yves Julien, Letian Wei
Affiliations: University of Valencia. Global Change Unit. Image Processing Laboratory
The European Union and the Generalitat Valenciana co-financed the project "Estimación de Indicadores Medioambientales en la Comunidad Valenciana usando datos de Satélite (VALENSAT)," as part of the INVESTIGO 2022 financing plan. The project aims to create a user-friendly website offering free, accessible Remote Sensing products to Valencian Community citizens. Sentinel, METEOSAT, Landsat, MODIS, and ECOSTRESS images are used to generate products covering a wide range of applications, including monitoring the Urban Heat Island effect, calculating vegetation indices, estimating fire intensity, and assessing water quality. Each product on the website features a detailed section that explains its functionality, applications, and key features. Utilizing Google Earth Engine, interactive apps have been developed to allow users to customize the calculation of specific parameters of interest. Users can download these products tailored to their needs, complemented by practical application examples and relevant literature. This initiative ensures real-time access to valuable information, empowering informed decision-making and fostering a deeper understanding of Earth changes within the Valencian Community. Presented here are some of the relevant results achieved so far in this project. A Land Cover Change analysis revealed that urban areas in the region nearly doubled between 1984 and 2022, expanding from 482 to 940 square kilometers, with the most substantial increase between 1985 and 1990. Additionally, a novel methodology was developed for estimating frost intensity and its impact on citrus crops. The results indicate a decline in both the frequency and severity of frosts, with moderate-damage frosts showing a significant reduction and severe-damage frosts becoming almost non-existent.
Other findings include the estimation of the surface urban heat island in Valencia City, the estimation of burned areas and their severity, the spatial and temporal analysis of wildfires across the region in recent decades, and the generation of air and water quality maps. Following recent events caused by a cut-off low (DANA) on October 29, 2024, a mud map was generated for part of the province of Valencia.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Bridging the Science-Policy Gap through the Use of Remote Sensing Data and Deep Learning Methods for Environmental Policy Support

Authors: Stien Heremans, Dr Lisa Landuyt, Dr Tanja Van Achteren, Dr Ils Reusen, Mr Jo Van Valckenborgh, Prof Matthew Blaschko, Prof Ben Somers
Affiliations: Research Institute Nature And Forest (INBO), Flemish Institute for Technological Research (VITO), Digitaal Vlaanderen, div Data Platforms and EODaS, KU Leuven, div Processing Speech and Images (PSI), KU Leuven, div Forest, Nature and Landscape (FNL)
Evidence-based approaches are gaining popularity, including in the domain of environmental policy. Despite this, the primary source of policy-relevant information often remains ground sampling, which offers limited spatial and temporal detail. The GEO.INFORMED project (2020-2024) addressed this gap by combining the freely available Sentinel satellite imagery from the European Copernicus programme with machine learning and deep learning modelling approaches, specifically for supporting environmental policy in Flanders. The project focused on designing workflows for mapping geospatial indicators, in close collaboration with policymakers from several Flemish government agencies. Engaging stakeholders throughout the entire process (co-design) ensured that the selected applications addressed real-world policy challenges. The final use cases of the project included monitoring water bodies, classifying catch crops, updating the Biological Valuation Map, and mapping forest composition. By aligning the project outcomes with the requirements of these end-users, the initiative demonstrated how Earth observation data can directly support evidence-based policymaking. One of the key challenges encountered was the complexity of input data in Earth observation. Unlike most standard computer vision tasks that rely on RGB images, Earth observation data often includes multi-band imagery with spectral channels beyond the visible range, as well as time series data capturing temporal changes in the environment. These non-standard inputs, combined with data gaps and temporal variability, necessitated adaptations to existing deep learning models. To address these issues, the project explored techniques such as data fusion to effectively combine different inputs and the redesign of model architectures to handle time series data as inputs. A second major challenge was the limited availability of labeled reference data, essential for training deep learning models.
High-quality, geospatially accurate labels are often scarce, particularly for the type of regional environmental monitoring applications addressed in the project. We tackled this limitation by employing strategies such as data augmentation to artificially increase the diversity of training samples and transfer learning to leverage pre-trained models from related tasks. Active learning approaches were also implemented, allowing the iterative refinement of models using end-user-provided annotations. Additionally, self-supervised learning techniques were applied to make use of the vast amounts of unlabeled data available, further enhancing model performance and resource efficiency. To ensure practical usability, the developed models were implemented in standardized workflows by means of the openEO framework, and a set of interactive web viewers was developed that offers easy access to the geodata produced by these workflows. The openEO framework enables parallelized cloud-based computing and easy scaling in both space and time. The viewers provide visualization capabilities, enabling users to explore spatial and temporal trends while interacting with model outputs. Features such as uncertainty visualization allow users to gain insight into the spatio-temporal model performance. Annotation tools also allow users to provide feedback on detected errors, which can then be incorporated into the models’ next version through active learning. This feedback loop ensures continuous improvement and fosters trust in the system among policymakers. The project’s results demonstrated measurable socio-economic impacts, particularly in improving the efficiency and effectiveness of environmental monitoring. For example, the traditional monitoring of water bodies and habitats in Flanders relied on labor-intensive field surveys.
By integrating satellite data and deep learning approaches, the GEO.INFORMED project enabled faster and more cost-effective updates to both data layers, while also providing insights into seasonal variations and long-term changes. Similarly, the automated classification of catch crops reduced the time and resources needed from the end-user for monitoring compliance with this environmental regulation. Another distinctive feature of the project was specifically its emphasis on participatory design as a bridge between scientific innovation and policy implementation. While participatory approaches are common in other domains, their application in the development of Earth observation tools for environmental policy support remains rare. The project demonstrated that involving end-users from the outset results in solutions that are more relevant, impactful, and aligned with their operational needs. This approach also facilitated knowledge transfer, enhancing the capacity of policymakers to use the results of these advanced geospatial technologies more effectively. The interdisciplinary dialogue was instrumental for successfully combining the project’s complex technical challenges with the need for actionable policy data. The lessons learned from this initiative have broader implications for integrating Earth observation data into the development of policy-relevant geodata. By addressing both technical and institutional barriers, the project provides a blueprint for the integration of advanced satellite data processing techniques in the policy sector. The standardized openEO workflows developed within the GEO.INFORMED framework can be adapted for use in other regions and contexts, further amplifying their impact. In conclusion, the GEO.INFORMED project is an example of the transformative potential of Earth observation data when combined with state-of-the-art deep learning techniques and participatory design processes. 
By bridging the gap between data and decision-making, the project not only advanced the scientific Earth observation and deep learning community but also demonstrated tangible benefits for environmental policy support. The methodologies and insights developed through this project offer valuable guidance for other regions and sectors aiming to integrate Earth observation data into their decision-making processes.
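As an illustration of the data augmentation strategy mentioned above for stretching scarce labelled samples, the sketch below generates the eight rotation/flip variants of a multi-band training patch. The function name and patch layout are our own choices for illustration, not taken from the project:

```python
import numpy as np

def dihedral_augment(patch):
    """Return the 8 rotation/flip variants of a (bands, height, width)
    training patch. A simple geometric augmentation for scarce labelled
    data; the same transform must also be applied to the label mask."""
    variants = []
    for k in range(4):                        # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(patch, k, axes=(1, 2))
        variants.append(rotated)
        variants.append(rotated[:, :, ::-1])  # plus a horizontal flip of each
    return variants
```

Applied to every labelled patch, this multiplies the effective training set by eight without collecting new reference data.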
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Integrating Earth Observation Into Public Services: Advanced Machine Learning Techniques for Flood-induced Damage Mapping

Authors: Victor Hertel, Dr. Christian Geiß, Marc Wieland, Dr. Monika Gähler, Dr. Hannes Taubenböck
Affiliations: German Aerospace Center (DLR), Rheinische Friedrich-Wilhelms-Universität Bonn, Julius-Maximilians-Universität Würzburg
Accurate and timely assessment of building damage following natural disasters is critical for effective emergency response and resource allocation. Earth observation offers invaluable insights for this purpose; however, the diversity of built environments and varying imaging conditions introduce significant domain shifts between training and target data that challenge the performance of pre-trained machine learning models. To address this challenge, we present a practical application of advanced domain adaptation techniques specifically tailored for time-sensitive and resource-constrained situations. Alongside, we introduce a flood-induced building damage dataset for Germany, developed jointly by the German Aerospace Center (DLR) and federal ministries as part of the IF-Bund (Innovative Remote Sensing for the German Federal Administration) initiative. This dataset serves as a benchmark for the evaluation and improvement of model performance in the context of flood-induced building damage assessment. More specifically, our work focuses on overcoming the domain shift problem inherent in automatic building damage assessment by leveraging both high-quality, location-specific datasets and state-of-the-art domain adaptation methods. We introduce a Siamese bi-temporal multitask network, optimised for efficient semantic segmentation and change detection, capable of rapidly adapting to new disaster scenarios with minimal latency. This approach supports rapid decision-making and the effectiveness of government response in high-stakes, resource-constrained situations. Our flood-induced building damage dataset provides a fully annotated and standardised resource representing several significant flood events in Germany. By harmonising classification schemes from the engineering and remote sensing communities, we also introduce flood-specific building damage descriptors in line with internationally recognised standards. 
This dataset serves not only as a benchmark for evaluating domain adaptation techniques, but also as a valuable tool for government agencies to improve disaster management strategies and policy planning. Furthermore, we perform a diagnostic decomposition of different domain adaptation techniques to assess their suitability for rapid mapping applications in the public sector. Domain adaptation methods vary in their approaches depending on the availability of labelled data in the source and target domains. Unsupervised domain adaptation applies to situations where no labels are available in either domain. In contrast, semi-supervised domain adaptation utilises labels from the source domain but lacks labels in the target domain. Finally, supervised domain adaptation leverages labels from both the source and a limited number of labels from the target domain. Our analysis includes class-specific performance improvements, tendencies towards over- or underestimation of building damage, and resource requirements in terms of processing time, human capacity, and computational demands. The results demonstrate that all proposed domain adaptation methods significantly enhance model performance within a short timeframe, achieving up to 86% of the potential performance gain compared to the upper performance limit achieved through supervised learning. Supervised domain adaptation with minimal annotations per class emerged as the most effective method for immediate deployment, offering substantial performance improvements with minimal additional resources. Semi-supervised domain adaptation, combined with an automatic labelling strategy based on hazard intensity, provided the highest overall performance gains while maintaining low demands on time and human resources. 
Purely semi-supervised domain adaptation was found to be time-consuming and computationally demanding, making it suitable only under specific conditions where there is sufficient time available or where human intervention is not feasible. These results can be used to provide scenario-based recommendations to assist policymakers and emergency response teams in selecting the most appropriate method under given conditions. From an institutional perspective, by integrating advanced machine learning techniques with tailored datasets into public sector operations, the IF-Bund framework demonstrates tangible socio-economic benefits derived from Earth observation technologies. The improved accuracy and speed of building damage assessment directly contributes to improved public services, more efficient allocation of emergency resources, and better-informed policy decisions within government agencies. This work exemplifies how bridging data and decision-making through collaborative efforts can lead to measurable improvements in disaster management and response, ultimately stimulating economic resilience and safeguarding affected populations.
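The automatic labelling strategy based on hazard intensity can be sketched as generating coarse pseudo-labels for target-domain buildings from a hazard raster. The function name, inputs, and the 1.5 m threshold below are illustrative assumptions, not values from the study:

```python
def hazard_pseudo_labels(flood_depths, damage_threshold=1.5):
    """Generate coarse pseudo-labels from a hazard-intensity proxy
    (here: flood depth in metres sampled at each building footprint).

    Buildings above the threshold are pseudo-labelled 'damaged' (1),
    the rest 'intact' (0); such labels can then seed semi-supervised
    adaptation of a pre-trained damage-mapping model without manual
    annotation. The 1.5 m threshold is illustrative only."""
    return [1 if depth >= damage_threshold else 0 for depth in flood_depths]
```

The appeal of this approach, as the results above suggest, is that it trades a little label noise for near-zero demands on human annotation time.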
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Enhancing Decision-Making With EO Data and High-Performance Computing: Applications in Renewable Energy, Agriculture, and Disaster Response

Authors: Giuseppe Piparo, Matteo Albéri, Gioacchino Alex Anastasi, Salvatore Calcagno, Enrico Chiarelli, Irem Nedime Elek, Fabio Mantovani, Kassandra Giulia Cristina Raptis, Virginia Strati, Emiliano Tramontana, Alessia Tricomi
Affiliations: Istituto Nazionale Di Fisica Nucleare (INFN), Catania Section, Istituto Nazionale di Fisica Nucleare (INFN), Ferrara section, University of Ferrara, Department of Physics and Earth Science, University of Catania, Department of Physics and Astronomy "Ettore Majorana", University of Catania, Department of Mathematics and Informatics, Centro Siciliano di Fisica Nucleare e Struttura della materia (CSFNSM)
The integration of Earth Observation (EO) data into public sector operations holds significant potential for enhancing decision-making processes and delivering socio-economic benefits. Leveraging the extensive computational resources provided by the Italian National Research Centre for High Performance Computing, Big Data, and Quantum Computing (ICSC), we present three case studies conducted under Working Group 6 (WP6), "Cross-Domain Initiatives and Space Economy", as part of Spoke 2, "Fundamental Research and Space Economy", where advanced EO techniques have been employed to support governance in renewable energy planning, agricultural disease management, and natural disaster response. Firstly, we developed a deterministic learning algorithm for object identification, efficiently detecting photovoltaic (PV) panels from high-resolution RGB satellite imagery. Notably, the algorithm achieves excellent performance using a reduced amount of training data, highlighting its efficiency and scalability. By accurately mapping existing PV installations across vast geographic areas, this approach aids government agencies in assessing current renewable energy infrastructure and planning future investments. Furthermore, its potential applicability to other object identification and image segmentation tasks expands its utility beyond this specific use case. The precise identification of PV panels contributes to the optimization of energy networks, stimulates economic growth, and promotes sustainable energy policies aligned with environmental goals. Then, we employed spectral index-based image processing techniques to monitor vineyard diseases, specifically focusing on the detection of symptoms associated with viticultural ailments like Flavescence dorée. Using high-resolution images obtained from airborne surveys, we provided precise assessments of disease spread at both the plant and field levels. 
This detailed information enables timely interventions by agricultural authorities and vineyard managers, facilitating targeted treatments and preventive measures. By improving the efficiency of disease management strategies, our work enhances public services, protects crop yields, and safeguards the economic interests in the viticulture sector. Finally, we applied advanced deep learning techniques, including Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, to analyze satellite imagery for the accurate segmentation of wildfire-affected areas. Utilizing data from the Copernicus satellite constellation and the Copernicus Emergency Management Service, our methods significantly enhance the accuracy and speed of disaster mapping. This rapid and precise identification of affected zones supports efficient resource allocation and improves emergency response efforts by providing insightful data to relevant authorities. Moreover, the predictive capabilities of our models contribute to better risk assessment and mitigation strategies for future wildfire events. These case studies demonstrate the huge potential of integrating EO data with advanced analytical methods to guarantee an increasingly efficient, focused, and diversified impact in public sector operations. By enhancing environmental oversight through precise monitoring, optimizing the management of natural and infrastructural resources, and refining emergency response strategies with timely and accurate information, our work clearly shows the tangible socio-economic advantages of EO technologies. These developments highlight the importance of the synergistic application of EO data and high-performance computing in decision-making processes to not only improve existing public services but also open up new avenues for innovation, underlining the central role of these technologies in modern governance.
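The spectral index-based disease monitoring described above can be illustrated with a simple anomaly-flagging sketch: compute a vegetation index per pixel and flag pixels that fall well below the field statistics. The index (NDVI) and z-score threshold are our assumptions; the abstract does not specify which indices the study used:

```python
import numpy as np

def flag_stressed_pixels(nir, red, z_thresh=-2.0):
    """Flag potentially diseased vegetation via an NDVI z-score.

    Computes NDVI per pixel, standardises it against the scene
    statistics, and flags pixels whose NDVI falls well below the
    field mean; a crude proxy for symptom mapping, with index and
    threshold chosen for illustration."""
    ndvi = (nir - red) / (nir + red)
    z = (ndvi - ndvi.mean()) / ndvi.std()
    return z < z_thresh
```

In practice the flagged pixels would be aggregated per plant or per row to guide targeted field inspection and treatment.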
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Earth Observation for Emission Monitoring: Evaluation of Methane Plume Detection Capabilities for Emission Reporting Authorities

Authors: Lukas Häffner, Christian Mielke, Lennart Resch, Marvin Knapp, Alberto Alvaro-Diaz, Louisa-Marie Rüther, Adela Collado-Rodriguez, Cristina Prados-Roman, Andre Butz, Leonie Olivia Scheidweiler
Affiliations: German Environment Agency, Institute of Environmental Physics, Harvard John A. Paulson School of Engineering and Applied Science, Atmospheric Research and Instrumentation Branch, Spanish National Institute for Aerospace Technology (INTA), Electronics Technology Department, Universidad Carlos III de Madrid, Heidelberg Center for the Environment (HCE), Interdisciplinary Center for Scientific Computing (IWR)
With the Global Methane Pledge, more than 150 countries have committed to reducing global methane emissions by at least 30% by 2030 compared to 2020. Central components of efforts to cut emissions and the corresponding policy frameworks are national emission inventories, which are reported within the framework of the United Nations Framework Convention on Climate Change (UNFCCC). Currently, the potential benefits of developing independent atmospheric measurement capacities are subject to broad discussion and efforts, including for example the Copernicus Earth observation programme or the European Union’s research programme Horizon. Specifically, spaceborne remote sensing missions for atmospheric composition are rapidly evolving, and spaceborne detection of methane point source emissions is becoming operational. To evaluate the applicability of Earth observation of methane point sources for emission reporting, we present the results of a joint measurement campaign in the summer of 2024 in collaboration with German and Spanish research institutions. We have observed emissions of a selected landfill target in the area of Madrid, known from previous studies, with methane emissions on the order of tonnes per hour. We were able to apply multiple different observation approaches within the same time frame. Our analysis includes spaceborne measurements of the commercial mission GHGSat, the public mission EnMAP and the Copernicus mission Sentinel-5P/TROPOMI. We deployed two ground-based hyperspectral cameras using a scanning geometry with sub-hourly resolution, as well as a wind LIDAR to analyse local transport variation for comparison with meteorological models such as ERA5. Here, we present details on the plume formation of the landfill’s methane emissions and the results of the different remote sensing approaches. In particular, we resolve sub-hourly variability and contrast it with snapshot-like spaceborne observations.
We showcase the information gained from the different commercial and public concepts and measurement approaches to compare and cross-validate findings for the landfill’s emissions. We identify and discuss potential practical contributions of Earth observation to current emission monitoring frameworks. With this study we present the first tests to directly co-organise and perform independent methane point source detections with remote sensing as Germany’s national authority for emission reporting. In particular, this includes inventory-driven target selection and data analysis of the satellite scenes. We present highlights and challenges of the different spaceborne and ground-based approaches for a governmental organisation and inventory compiler. In addition, we present lessons learned when using Earth observation at a governmental agency and we discuss the central role of frameworks such as Copernicus for emission monitoring and verification.
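The abstract does not state how the point-source rates are quantified; a common approach for snapshot-like observations such as those from GHGSat or EnMAP is the integrated mass enhancement (IME) method, sketched below with illustrative numbers:

```python
def ime_emission_rate(mass_enhancements_kg, plume_length_m, u_eff_ms):
    """Integrated mass enhancement (IME) point-source estimate.

    Q = U_eff * IME / L, where IME is the summed excess methane mass
    over the plume mask (kg), L a characteristic plume length (m) and
    U_eff an effective wind speed (m/s). Returns kg/h. All inputs here
    are illustrative, not campaign values."""
    ime = sum(mass_enhancements_kg)
    return u_eff_ms * ime / plume_length_m * 3600.0
```

For instance, 500 kg of excess methane over a 500 m plume with a 2 m/s effective wind gives 7200 kg/h, i.e. roughly the tonnes-per-hour scale mentioned above.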
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Earth Observation Solutions in Support of the Hungarian Public Sector

Authors: Daniel Kristof, Edina Birinyi, Dr. Márta Belényesi, Dávid Ákos Gera, Irén Hubik, Rómeó Ife Komolafe, Vivien Pacskó, Róbert Pataki, Mátyás Richter-Cserey, Anikó Rotterné Kulcsár, Máté Péter Simon, Boglárka Szatmáriné Bártfai, Ágnes Tar-Andrási, dr. Gábor Mikus
Affiliations: Lechner Knowledge Centre, Earth Observation Operations, Satellite Remote Sensing Department, Lechner Knowledge Centre, Earth Observation Operations
Lechner Knowledge Centre (LTK), a not-for-profit public company, is a central hub of geospatial activities in Hungary, also fulfilling the role of National Mapping Agency. Its Earth Observation Operations division is in charge of providing EO-based data, information and services to various end-users. Here, a variety of synergistic, country-wide data sets are created regularly, based on Earth Observation (EO) and ancillary data, integrated into a number of specific products, and provided to authorities and other users through various infrastructures. Research and development activities also ensure the continuous enhancement of algorithms, products and services. High-resolution land cover data are essential for land monitoring activities. At LTK, the country-wide, 10-m resolution National High-Resolution Land Cover (NHRL) data set has been created retroactively and updated every year, now covering the period of 2016-2023. It is based on machine learning classification of annual Sentinel-2 optical and Sentinel-1 radar time series, elevation and soil data. Specific features (e.g., road and railway network, buildings) are integrated directly from cadastral and other official data sets. To ensure high quality and temporal consistency, visual inspections and manual corrections are carried out on the NHRL time series to remove false changes and other inconsistencies, thus providing the higher-level Harmonized NHRL (hNHRL) data. Data about annual crops, permanent crops and grasslands are crucial for applications and activities related to agriculture, ecosystems or spatial planning, among others. At LTK, country-wide crop and grassland maps have been retroactively created and updated every year, currently covering the period of 2013-2024, with 24 categories, a resolution of 20 meters and high accuracy (typically 94-98%).
They are based on machine learning using annual time series of Sentinel-2 optical data and various spectral indices derived thereof, along with Sentinel-1 polarimetric descriptors, trained on reference data derived from anonymized agricultural subsidy claims and other ancillary data sets including hNHRL. Our current developments apply deep learning techniques in order to decrease the need for up-to-date reference data by using pre-trained models. Extreme water conditions occur with increasing frequency in Hungary due to climate change, impacting ecosystems and human activities. Flooding and inland excess water conditions often appear in the spring, while summers are prone to severe drought; lately, the temporal patterns of these phenomena are becoming less predictable. EO solutions have been applied at multiple levels and continuously improved to support data-driven decision making. Flood and inland excess water maps have been regularly created based on Sentinel-2 and Landsat optical imagery, with recent developments involving Sentinel-1 SAR data as well. Country-wide generic agricultural drought maps have been derived from MODIS data at 250-m resolution, with the recent addition of high-resolution, crop-specific vegetation condition products based on time series of Sentinel-2 spectral indices and crop maps. Based on the long-term series of water extreme maps, we also assess the spatio-temporal characteristics of these phenomena and hence contribute to the data-driven adaptation of agriculture and water management. The above products, along with a number of others derived from EO data, are provided to authorities, public-sector and other users via various frameworks. The Earth Observation Information System (FIR) is a central governmental infrastructure providing storage, processing and services of EO (more specifically: Sentinel) data and products derived thereof. It also serves as an ESA Collaborative Ground Segment (CollGS) node.
LTK is in charge of the maintenance and operation of its application layer and provides the professional background. Multiple authorities (e.g., water management, disaster management, agricultural paying agency, forestry) possess specific sub-systems providing data and services tailored to their needs and have specific service contracts with LTK, with a number of the above-mentioned data sets being served to them. Another infrastructure, the Agricultural Risk Management System (MKR) is integrating authorities, professional institutions and insurance companies to ensure the payment of loss compensations in order to mitigate the effects of extreme events in the agricultural sector. LTK has been an active contributor since the beginning by offering ever-developing data products based on remote sensing / EO data. Currently, we regularly provide inland excess water / flood maps, country-wide medium-resolution drought maps and high-resolution, crop-specific crop condition maps to support decision making. The above-described set of synergistic products, processes, infrastructures and use cases will be presented and demonstrated through concrete examples at the Symposium.

Monday 23 June 16:15 - 17:45 (Room 1.15/1.16)

Session: A.07.09 ESA Hydrology Science Cluster

ESA’s Hydrology Science Cluster involves a number of ESA-funded projects and activities bringing together different expertise, data, and resources in a synergistic manner, so that the result is greater than the sum of its parts. It also provides a mechanism for collaboration with Horizon Europe projects.

The pressure on water resources is steadily increasing due to population growth and increasing wealth for part of the population. Climate change adds to this pressure, e.g. through more frequent and intense heat waves and droughts, and also disturbs the water cycle through more frequent floods. To tackle such challenges, we need to further advance the way we observe, understand, and predict the evolution of the water cycle and its interactions with human activities and ecosystems. In this decade, ESA, several other space agencies and private organisations have been developing a unique observation infrastructure in space, including an extraordinary and complementary suite of sensors on board the Copernicus Sentinel series, ESA’s Earth Explorers, upcoming meteorological missions and other EO satellites planned for launch by national space agencies and private operators in Europe. This comes at a time when novel information and computing technologies, Artificial Intelligence, cloud computing and digital platform capabilities are opening the door to new and advanced ways to implement open science and develop novel applications and services. The unprecedented potential for water cycle science and hydrology of this exceptional set of capabilities is far from being reached and needs to be fully explored and exploited. This requires an integrated approach to the water cycle which exploits both observations (satellite and in situ) and modelling, together with cross-domain research (ocean, land and atmosphere). Through this Cluster, ESA aims to contribute to the establishment of a strong European hydrology and water cycle research area in close collaboration with the European Commission Directorate General for Research and Innovation and other European and international partners.
Over the next years, ESA is planning a number of activities and opportunities to further develop the cluster as part of the ESA FutureEO programme, in coordination with other ESA activities such as the Climate Change Initiative and the dedicated mission developments under the Earth Explorer and Copernicus Sentinel Expansion and Next Generation lines.

Agenda:


Cluster objectives and status


  • Espen Volden - ESA

Lightning talks of the projects:


  • 4DHydro
  • DTC-Hydr’Avatar
  • DTC-DTE-H Next
  • 4DMED-H-DEMETRAS
  • CCI-SoilMoisture
  • SUPSAR
  • CCI-Lakes
  • STREAM-NEXT
  • CCI-RiverDischarge
  • EO4Flood
  • Irrigation+ /EU
  • CCI-AnthropogenicWaterUse
  • AlpSnow
  • AI4Snow
  • CCI-Snow
  • CCI-Glacier
  • SING

Presentation of ideas for ESA Science projects 2026-28


  • Karim Douch - ESA

Discussion of the new project ideas


  • Espen Volden - ESA

Monday 23 June 16:15 - 17:45 (Room 1.31/1.32)

Session: B.04.03 Integrating multi-sensor Earth Observation data to improve natural hazard monitoring - PART 2

A deeper knowledge of natural hazards, which may originate from different sources and systems (such as atmospheric, hydrological, oceanographic, volcanological or seismic), is essential for accelerating actions towards the Sustainable Development Goals of the 2030 Agenda. In this framework, Earth Observation (EO) data are essential for all phases of natural hazard management, from spatial planning to post-emergency operations.
Satellite data offer extensive spatial coverage, which is highly beneficial for various scientific applications. Optical sensors on board satellites gather information in the visible and near-infrared regions of the electromagnetic spectrum, providing images that are ‘easier’ to interpret as well as valuable spectral information for accurately recognizing different objects. SAR sensors, on the other hand, are largely independent of weather and illumination conditions and can detect the roughness and dielectric characteristics of target objects.
At the platform level, UAVs allow very high-resolution, on-demand data to be acquired in near-real time. The development of different sensors (optical, multispectral, LiDAR, hyperspectral) enables different types of data to be acquired, even simultaneously.
Despite the complementarity of sensors from different platforms and their respective advantages (and disadvantages), these data sources are often used separately. Data fusion and the integration of different methodologies make it possible to address the spatial and resolution gaps.
This session aims to examine and discuss multi-platform and multi-sensor data fusion, also by sharing qualitative and quantitative information to foster collaborative advances for monitoring and mapping natural hazards.

Monday 23 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: SAR and Optical-MSI data implementation in the coastal erosion management

Authors: Simone Ippoliti, Paolo Caporossi, Dr. Francesco Menniti, Ing. Stefano Cruciani, Prof. Lorenzo Lipparini
Affiliations: Department of Earth Science, University of Roma Tre, Titan4 S.r.l., ISPRA, Italian Institute for Environmental Protection and Research
About 40% of the global population lives near coastal areas, and even more so in the Mediterranean, where relentless anthropogenic pressures collide with natural dynamics such as wave activity. In this context, coastal erosion emerges as an environmental challenge that threatens not only the integrity of the natural landscapes of coastal areas, but also the economic and social security of their communities. In Italy, with over 7,000 kilometers of coastline, this issue takes on particular significance: beaches that have shaped the landscape and fueled tourism for centuries are increasingly exposed to more frequent and intense climatic events and subjected to heightened anthropogenic pressures. Coastal erosion is a significant geohazard in regions like Liguria and Emilia-Romagna, which face the northern Tyrrhenian Sea and the northern Adriatic Sea, respectively. In recent years, these areas have been affected by extreme flooding events, such as those in Montenotte Superiore in 2021 and the floods in the Ravenna, Forlì-Cesena, Rimini, and Bologna provinces in 2023-2024, impacting the socio-economic stability of the communities reliant on these coastal areas. The complex interplay of factors such as rainfall, marine currents, erosion, sediment transport (from inland or along the coast), and geomorphological and anthropogenic settings significantly influences the effects of extreme events. To address these challenges, this study utilized Synthetic Aperture Radar (SAR) data to analyze key coastal dynamics, measuring and verifying coastline retreat and the spatial-temporal variability of shoreline changes. Leveraging the Bragg scattering phenomenon, which describes radar wave interactions with the roughness of the sea surface, it was possible to detect coastal currents and wave reflection phenomena caused by anthropogenic structures. Additionally, optical and multispectral imagery (such as LANDSAT and Sentinel-2) supported the analysis.
These data sources, with their higher spatial resolution and broad areal coverage, enabled detailed and continuous monitoring of coastal dynamics, validating the findings and strengthening the proposed methodology. By integrating data from diverse sources, including ECMWF tidal current catalogs, meteorological, geomorphological, and rainfall data, urban development records, and prevailing wind direction information, this study provided a comprehensive understanding of coastal geohazards in these two key Italian regions, offering a regional-scale view of the interplay between natural and anthropogenic dynamics threatening coastal zones. The proposed methodology aims to support hazard mitigation, both during assessment phases and in emergency scenarios, and, more generally, to contribute to the sustainable, effective management and enhanced resilience of coastal areas.

Monday 23 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: An AI-driven approach to monitoring Volcano Hazards from Space

Authors: Gaetana Ganci, Francesco Spina, Giuseppe Bilotta, Elisa Trasatti, Stefano Corradini, Alessandro Aiuppa, Christian Bignami, Nicole Bobrowski, Flavio Cannavò, Sonia Calvari, Annalisa Cappello, Iole Serena Diliberto, Maddalena Dozzo, Filippo Greco, Francesco Guglielmino, Lorenzo Guerrieri, Luca Merucci, Emilio Pecora, Carlotta Petrucci, Alessandro Pignatelli, Michele Prestifilippo, Giuseppe Puglisi, Vito Romaniello, Francesco Romeo, Giulia Romoli, Simona Scollo, Malvina Silvestri, Marco Spina, Dario Stelitano, Francesco Zuccarello
Affiliations: Istituto Nazionale di Geofisica e Vulcanologia, University of Palermo
Identifying the observable signals that warn against volcanic unrest and impending eruptions is one of the greatest challenges in the management of natural disasters. For 50 years, satellite imagery has provided a unique means to study and monitor Earth's active volcanoes, used to remotely observe eruptive activity and as a complement to other ground-based techniques. Recent advances in satellite technology have led to an exponential increase in the multi-source capabilities of on-orbit instruments, which can help us to better understand eruptive dynamics from space. However, the large and heterogeneous amount of data generated requires the development of new analyses and techniques for effective processing and interpretation. Here we present a novel methodology for classifying volcanic activity by applying Artificial Intelligence to integrate and analyse satellite remote sensing data. In particular, we combine a Semi-Supervised Generative Adversarial Network (SGAN) with multi-sensor satellite data including multispectral (optical, UV, IR) and SAR satellite imagery. For each type of satellite image, we summarise the informative content into different indicators in order to extract meaningful time series. Well-established products from space-based volcano monitoring, such as (i) volcanic radiative power, (ii) surface displacement and (iii) volcanic gas emission time series, are combined with less commonly used, yet still informative, time series including (iv) ground skin temperature of the volcanic edifices, (v) change detection time series, (vi) time-varying ash top height time series, and (vii) gravity field data. These time series are provided as input to the SGAN with the final goal of recognising early the different phases of volcanic activity, relying only on Earth Observation data.
We demonstrate this methodology at two Italian volcanoes: Mt Etna, one of the most active volcanoes in the world, well known for the persistent activity from its summit craters and frequent lava flow-forming eruptions from its flanks, and Vulcano, experiencing a notable increase in activity in many monitoring parameters since 2021, including microseismicity, fumarole temperatures and composition, and carbon and sulfur dioxide gas emissions. Both volcanoes are constantly monitored by dense ground-based networks operated by INGV, which therefore provide a first controlled experiment with existing ground truth for our findings. For Etna, four levels of activity were considered (Quiet, Unrest, Eruption and Cooling of volcanic deposits), while for Vulcano, three levels of activity were taken into account (Background activity, Transient degassing, New degassing level). First results show that the proposed SGAN model can successfully identify the level of volcanic activity at both volcanoes. For Etna, the model highlights a clear separation between the Quiet and Eruption levels, while transitions to Unrest and Cooling phases were captured through significant variations in the observed parameters, such as surface temperature time series, ground displacements, and sulfur dioxide flux. For Vulcano, the model accurately identifies the three levels of activity, demonstrating its capability to detect signals consistent with observations from ground-based monitoring networks. These results confirm the effectiveness of the proposed methodology in integrating satellite multi-sensor data for the automatic classification of volcanic activity levels, paving the way for new volcanic monitoring approaches. This work was developed in the framework of the project “INGV Pianeta Dinamico VT SAFARI” funded by the Italian Ministry of University and Research.

Monday 23 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Predicting global monthly streamflow across the Hydrography90m stream network

Authors: PhD Giuseppe Amatulli, PhD Jens Kiesel, PhD Liguang Jiang, MSc Tushar Sethi, PhD Sami Domisch
Affiliations: School of the Environment, Yale University, Spatial Ecology, Margosa Environmental Solutions Ltd, Stone Environmental, Leibniz Institute of Freshwater Ecology and Inland Fisheries, Department of Community and Ecosystem Ecology, School of Environmental Science and Engineering, Southern University of Science and Technology
Freshwater resources are essential for human survival and development, and their sustainable use is crucial for water security and environmental sustainability efforts. In recent years, the threats to global freshwater resources have surged due to population growth, industrialization, pollution and climate change-driven events such as droughts and floods. As a result of these pressures, an overhaul of water resource management and provisioning is urgently needed worldwide. However, global freshwater data is limited, especially due to sparse gauge station information on surface hydrology. The data gaps are particularly acute for headwaters and tributaries, small basins and ungauged mountainous regions with steep terrain, where streamflow estimates are crucial for any form of hydrological assessment. As a result, the lack of adequate hydrological data continues to pose a significant challenge for water resource management, flood and drought risk mitigation and preparedness, and ecological protection. A machine learning-based global streamflow model can be transformative for water resource management, especially in data-scarce regions. By leveraging satellite observations, meteorological inputs, and topographical data, these models accurately predict streamflow in ungauged areas. The predictions are based on patterns from gauged regions with similar environmental characteristics and validated with in-situ gauge station observations. Approaches such as neural networks, support vector machines, and ensemble methods excel at capturing complex, nonlinear relationships between streamflow and its influencing factors. Moreover, the advantage of implementing global models is that they offer a standardized approach across multiple regions while allowing for localised fine-tuning, which ensures accurate streamflow forecasts even in ungauged basins.
The adaptability of these models makes them well-suited for handling diverse environments, from urban watersheds to remote mountainous regions. Importantly, these models are designed to continuously improve as more data becomes available and to refine predictions in response to newly emerging patterns that may be related to climate change, land use shifts, and other factors. This work aims to develop a machine learning-based global model for accurate monthly volumetric streamflow (m3/sec) predictions across the entire Hydrography90m (Amatulli G. et al. 2022) stream network dataset, enabling the reconstruction of streamflow at each quantile level over six decades (1958-2019). Using the daily volumetric flow of the underlying dataset that was used to build the "Global Streamflow Indices Time Series Dataset" (Chen, X., et al. 2023), we derive monthly streamflow quantiles (Qmin, Q10, Q20, …, Q80, Q90, Qmax) for a total of 41,263 gauge stations. Specifically, we collected daily streamflow between 1958-2019 and calculated the 11 quantiles for each month during the entire period, i.e. 11Q x 12 months x 61 years. These quantiles serve as the response variable in a machine learning framework employing the Random Forest algorithm. To explain the spatial and temporal variability of streamflow patterns, we use more than 100 environmental and physical predictors. These include: TerraClimate (Abatzoglou J. et al. 2018): precipitation, temperature, snow cover, soil moisture, evapotranspiration, and related climatic variables; SoilGrids250m 2.0 (Poggio L. et al. 2021): multi-feature soil characteristics; GRWL (Allen, G. H. et al. 2018) & GSW (Pekel, J. et al. 2016): surface water features for rivers and lakes; Geomorpho90m (Amatulli G. et al. 2020): geomorphological attributes, including terrain and landform features; Hydrography90m (Amatulli G. et al. 2022): hydrological and hydrographic information for stream segments.
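The per-station quantile derivation described above (11 levels per calendar month over 1958-2019) can be sketched as follows. This is a minimal illustration using synthetic daily flows for a single hypothetical gauge, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily streamflow (m3/s) for one hypothetical gauge, 1958-2019.
days = np.arange(np.datetime64("1958-01-01"), np.datetime64("2020-01-01"))
flow = rng.gamma(shape=2.0, scale=15.0, size=days.size)

months = days.astype("datetime64[M]").astype(int) % 12 + 1  # calendar month 1..12
years = days.astype("datetime64[Y]").astype(int) + 1970     # calendar year

# 11 quantile levels: Qmin, Q10, Q20, ..., Q90, Qmax.
q_levels = np.linspace(0.0, 1.0, 11)

# One 11-quantile row per (year, month) pair for this station.
records = {}
for y in np.unique(years):
    for m in range(1, 13):
        monthly_flow = flow[(years == y) & (months == m)]
        records[(y, m)] = np.quantile(monthly_flow, q_levels)
```

In the study itself this step is repeated for each of the 41,263 gauge stations, and the resulting quantiles become the response variable for the Random Forest model.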
To simulate the contribution of each environmental and physical factor from upstream areas, individual predictors were accumulated downstream along the stream network. This approach mirrors the computational methodology of flow accumulation. Specifically, it involves summing the area of each pixel cumulatively downstream in accordance with the terrain's gradient. The accumulated property layer is then divided by the flow accumulation at each pixel to produce a continuous and seamless mean property surface. The watershed properties derived through this process emulate the peak-to-valley influence and time-lag effects characteristic of water flow. For instance, at a gauge station, the temperature property corresponds to a pixel value representing the mean temperature of the upstream catchment. In addition to accounting for upstream conditions, this accumulation process offers notable advantages: (i) properties with varying input pixel resolutions (e.g., 250 m, 1 km) are standardized to a 90 m resolution after accumulation; (ii) the procedure smooths out potential errors in geomorphological and environmental layers, reducing absolute errors in the compiled properties. By integrating these features, this comprehensive methodology enhances the derivation of robust relationships between streamflow and its environmental drivers, contributing to a deeper understanding of hydrological processes. These computations were carried out using advanced scripting procedures implemented with several open-source software tools on high-performance computing systems. The outcome of the RF predictions is mapped onto the Hydrography90m stream layer, enabling a fine-grained, dynamic representation of global historical (1958-2019) monthly quantile streamflow. Preliminary results demonstrate an overall accuracy with R^2>0.90. We are currently refining our model to account for variations in the spatio-temporal densities of gauging stations and flow rates.
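The accumulation step ("the accumulated property layer is divided by the flow accumulation at each pixel") can be illustrated on a toy one-dimensional flow path. The five-cell chain and temperature values below are hypothetical; real networks branch, but the arithmetic per flow path is the same:

```python
import numpy as np

# Toy example: a single downstream chain of 5 cells (cell i drains into i+1).
cell_area = np.array([1.0, 1.0, 1.0, 1.0, 1.0])      # e.g. unit pixel areas
temperature = np.array([4.0, 6.0, 8.0, 10.0, 12.0])  # property value per cell

# Accumulate area (= flow accumulation) and area-weighted property downstream.
acc_area = np.cumsum(cell_area)
acc_prop = np.cumsum(temperature * cell_area)

# Mean upstream property at each cell = accumulated property / flow accumulation.
mean_upstream = acc_prop / acc_area
# At the outlet (last cell) this equals the catchment-mean temperature.
```

Dividing the accumulated, area-weighted property by the flow accumulation is what turns a running sum into the seamless mean-property surface described above.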
This approach fills a long-standing worldwide data gap in headwater stream hydrology, and provides crucial information for regional and global freshwater resource management and for the ecological modeling and management of the aquatic environment.
REFERENCES
Chen, X., Jiang, L., Luo, Y., & Liu, J. (2023). A global streamflow indices time series dataset for large-sample hydrological analyses on streamflow regime (until 2021). Earth System Science Data Discussions, 2023, 1-18.
Amatulli, G., Garcia Marquez, J., Sethi, T., Kiesel, J., Grigoropoulou, A., Üblacker, M. M., ... & Domisch, S. (2022). Hydrography90m: A new high-resolution global hydrographic dataset. Earth System Science Data, 14(10), 4525-4550.
Abatzoglou, J. T., Dobrowski, S. Z., Parks, S. A., & Hegewisch, K. C. (2018). TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958–2015. Scientific Data, 5(1), 1-12.
Allen, G. H., & Pavelsky, T. M. (2018). Global extent of rivers and streams. Science, 361(6402), 585-588.
Pekel, J. F., Cottam, A., Gorelick, N., & Belward, A. S. (2016). High-resolution mapping of global surface water and its long-term changes. Nature, 540(7633), 418-422.
Amatulli, G., McInerney, D., Sethi, T., Strobl, P., & Domisch, S. (2020). Geomorpho90m, empirical evaluation and accuracy assessment of global high-resolution geomorphometric layers. Scientific Data, 7(1), 162.
Poggio, L., De Sousa, L. M., Batjes, N. H., Heuvelink, G. B., Kempen, B., Ribeiro, E., & Rossiter, D. (2021). SoilGrids 2.0: producing soil information for the globe with quantified spatial uncertainty. Soil, 7(1), 217-240.

Monday 23 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Multi-Sensor Earth Observation Data for Operational Drought Monitoring: swissEO VHI

Authors: David Oesch, Joan Sturm, Anke Duguay-Tetzlaff, Vincent Humphrey, Yannick Barton, Fabia Hüsler
Affiliations: Federal Office Of Topography Swisstopo, Federal Office of Meteorology and Climatology MeteoSwiss, Federal Office of Meteorology and Climatology MeteoSwiss
Projections of climate change indicate an increase in the frequency and intensity of drought events, which will affect not only traditionally drier regions but also areas such as Switzerland, often referred to as Europe's "water tower." The dearth of historical drought events in Switzerland has resulted in ecosystems that are poorly equipped to cope with prolonged dry periods, exhibiting uncertain response patterns. The recent drought events in Switzerland in 2003, 2018, and 2022 have demonstrated the ecological, economic, and social impacts of this emerging threat. In response, the Swiss federal agencies for the environment, meteorology and climatology, and topography developed a national drought monitoring program. This program employs a novel approach that integrates Copernicus Sentinel-2 and EUMETSAT Meteosat data, combining high-resolution optical data with thermal observations for comprehensive and continuous monitoring of drought conditions across Switzerland. This presentation will focus on the operational application of an innovative remote sensing approach within the national drought monitoring program, in particular the development and operational implementation of the Vegetation Health Index (VHI), accessible via www.swisstopo.admin.ch/en/satelliteimage-swisseo-vhi, for the monitoring of vegetation health under drought conditions. The VHI integrates biophysical plant characteristics with surface thermal conditions, thereby enabling a comparison of the current state to that of the historical climate reference period (1991-2020).
The calculation incorporates two primary components: the first is the biophysical component, derived from the Normalized Difference Vegetation Index (NDVI), calculated from Sentinel-2 data at a spatial resolution of 10 meters with a temporal resolution of approximately five days for current conditions, and Landsat 5, 7, and 8 data at a spatial resolution of 30 meters with a 16-day revisit period for the historical data. The thermal component employs Land Surface Temperature (LST) data from Meteosat, which exhibits a coarser spatial resolution of approximately 1.7 kilometers but offers daily availability. Two principal VHI products (SwissEO VHI) have been developed: a daily updated operational product from 2017 onward and a historical monthly product covering the period from 1991 to 2016. The efficacy of the VHI is assessed through a comparison with in-situ data and with outputs from drought models. The integration of datasets with disparate spatial, temporal, and radiometric resolutions presents a significant methodological challenge. Nevertheless, this operational multi-sensor approach is an effective method for comprehensive drought monitoring in Switzerland. By integrating high-resolution optical data with broader thermal observations, we are able to achieve continuous monitoring with enhanced spatial coverage, which allows for detailed vegetation assessment across diverse landscapes. The utilization of Landsat archive data facilitates long-term trend analysis and anomaly detection. The operational VHI product facilitates national-scale monitoring and contributes to the federal natural hazard monitoring and warning platform of Switzerland. The monitoring and warnings, along with the accompanying bulletins, inform federal and cantonal decision-makers, who in turn formulate water management strategies. 
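The two components described above follow the standard VHI construction: a Vegetation Condition Index (from NDVI) and a Thermal Condition Index (from LST), each scaled against the climatological extremes of the reference period. A generic sketch of that arithmetic is shown below; the equal weighting (alpha = 0.5) and the single-pixel values are illustrative assumptions, not necessarily the swissEO VHI implementation:

```python
import numpy as np

def vegetation_health_index(ndvi, ndvi_min, ndvi_max,
                            lst, lst_min, lst_max, alpha=0.5):
    """Generic VHI: weighted blend of vegetation (VCI) and thermal (TCI)
    condition indices, scaled 0-100. The *_min/*_max arguments are per-pixel
    climatological extremes from the reference period (here 1991-2020)."""
    vci = 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)
    tci = 100.0 * (lst_max - lst) / (lst_max - lst_min)  # hotter = more stressed
    return alpha * vci + (1.0 - alpha) * tci

# Single-pixel example: NDVI near its historical minimum, LST near its maximum.
vhi = vegetation_health_index(ndvi=0.35, ndvi_min=0.30, ndvi_max=0.80,
                              lst=308.0, lst_min=290.0, lst_max=310.0)
# A low VHI indicates drought stress relative to the 1991-2020 reference.
```

In the operational product, the VCI term comes from Sentinel-2 (and Landsat for the historical record) and the TCI term from Meteosat LST, which is what makes reconciling the differing resolutions the central methodological challenge noted above.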
In alignment with open government data, both the VHI product and its source code are publicly accessible, fostering transparency and collaborative research opportunities. By leveraging the complementary strengths of different Earth observation platforms and sensors, a robust operational system has been developed and is being operated, which enhances drought resilience and supports evidence-based decision-making.

Monday 23 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Multi-Domain Market Place within the ESA Civil Security from Space Hub

Authors: Alen Berta, Christine Gläßer, Hendrik Oppenberg, Tobias Gies, Christian Grubert, Béla Lars Müller, Norbert Czeranka, Katharina Weitz, Noelia Felipe Otero, Anna Hirsch-Holand, Stephane Pirio
Affiliations: CGI Deutschland, Fraunhofer HHI, ESA
In the realm of civil security, one of the most pressing issues is the fragmentation of information and the lack of timely access to critical data, especially when terrestrial communication networks are disrupted. During emergencies, such as natural disasters or large-scale incidents, the ability to quickly gather, analyze, and disseminate information is crucial for effective response and recovery efforts. However, current systems often suffer from disjointed data sources and unreliable communication channels, leading to delays and inefficiencies in decision-making processes. This fragmentation not only hampers coordination among the various agencies and responders but also puts lives and property at greater risk. A unified, resilient communication and information-sharing platform is paramount to overcoming these challenges and ensuring that civil security operations can function seamlessly even in the most adverse conditions. The ESA Civil Security from Space Hub (developed within the ESA Civil Security from Space framework) aims to create a digital Multi-Domain Marketplace (MDMP), a one-stop shop for civil security-related organizations, providing quick access to, and control of, integrated satellite communication, Earth Observation, and AI services. This marketplace is designed to support a variety of applications across civil security, disaster preparedness, and recovery activities. By initially combining EO and Satcom (and later also IoT and SatNav services), the CSS Hub and the related MDMP will offer a comprehensive solution that addresses the challenges of monitoring, mitigating, and resolving civil security and crisis events. The integration of Earth Observation (EO) and Satellite Communication (Satcom) technologies within a dedicated marketplace represents a groundbreaking advancement, not just for the field of civil security but more broadly, as no such marketplace previously existed.
Earth Observation technologies provide critical data on the Earth's surface, atmosphere, and water bodies, using a variety of remote sensing techniques. These data are invaluable for monitoring natural disasters such as earthquakes, floods, landslides or wildfires, providing precise information that aids fast and accurate emergency response. Satellite communication complements EO by ensuring stable and immediate communication within crisis areas. Specialized satellite constellations provide reliable internet access and communication services, even in remote or disaster-stricken regions. This capability is crucial for coordinating emergency response efforts and ensuring that vital information reaches the right recipients swiftly and securely. The integration of EO and Satcom within the Civil Security from Space Hub is further enhanced by services and products using AI and machine learning technologies. These technologies enable the development of advanced models for predicting and analyzing natural disasters, optimizing resource allocation, and improving decision-making processes. The Civil Security from Space Hub also emphasizes the importance of stakeholder engagement and collaboration. By bringing together diverse stakeholders from industry, academia, and public administrations, the hub fosters a collaborative environment that drives innovation and ensures that the solutions developed are aligned with the needs of civil security players. This collaborative approach is essential for building a resilient and adaptive civil security system capable of mitigating the impact of natural disasters on communities. It supported the project's goals of identifying gaps and requirements and defining the approach towards a fully operational CSS Hub and MDMP, alongside building the initial demonstrator of the Multi-Domain MarketPlace within the CSS Hub concept.
In conclusion, the Civil Security from Space Hub represents a unique and transformative approach to enhancing civil security. By combining the strengths of Earth Observation and Satellite Communication technologies, and integrating AI and machine learning, the hub provides a comprehensive and effective solution for monitoring, mitigating, and resolving civil security and crisis events. This innovative approach not only improves public safety and disaster management but also supports the overall well-being and resilience of society.

Monday 23 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Meteosat Third Generation: entering a new era of satellite-based nowcasting of severe thunderstorms

Authors: Stephan Bojinski, Alois Holzer, Tomas Pucik, Pieter Groenemeijer
Affiliations: Eumetsat, European Severe Storms Laboratory
In the coming decades, Europe, Africa and the Middle East will benefit from novel Meteosat Third Generation (MTG) geostationary satellite data of unprecedented resolution and quality for nowcasting severe thunderstorms. The first MTG imaging satellite (Meteosat-12), launched in December 2022 and operated by EUMETSAT, hosts a high-resolution spectral imager (Flexible Combined Imager, FCI) and a novel Lightning Imager (LI). Since December 2024, data from both instruments have been flowing operationally to users including National Meteorological and Hydrological Services (NMHS). First analyses of the 10-minute FCI imagery and of the continuous LI monitoring of total lightning activity are very promising, for example for tracking severe convective storms. Looking at severe cases with up to 10 cm hail, heavy rainfall and wind gusts during the 2024 convective season, we demonstrate the added value of FCI and LI products for nowcasting. We will emphasize the value of the following products: novel RGB imagery composites from FCI, such as Cloud Phase and Cloud Type, enable better insight into the microphysical characteristics of convective storm clouds; the new 0.9-micron channel provides low-level humidity information crucial for assessing the potential for convective development; the improved resolution of the visible and IR channels reveals the thermal structure of storm tops in unprecedented detail and allows analysis of different storm-top features connected to severe weather events; and the new Lightning Imager data demonstrate the ability to detect total lightning and assess lightning hazard, and show interesting features of lightning activity depending on storm type or the phase of the storm life cycle. In 2025, the MTG satellite constellation will see the addition of the sounding satellite (MTG-S1), hosting the world’s first operational Infrared Sounder in geostationary orbit.
Through soundings of atmospheric temperature and humidity over Europe every 30 minutes, this instrument will add further capacity for more precise nowcasting, for example for tracking moisture variability in a pre-convective environment. In 2026, the MTG constellation will be completed with the second imaging satellite (MTG-I2), providing 2.5-minute rapid-scanning spectral imagery over Europe.
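The RGB composite products mentioned above are, at their core, channel combinations linearly scaled into displayable red, green and blue components. As a rough illustration of that scaling step (the channel data, value ranges and gamma below are invented for the example and are not the official EUMETSAT Cloud Phase or Cloud Type recipes):

```python
import numpy as np

def to_rgb_composite(channels, ranges, gamma=1.0):
    """Scale three satellite channels into a displayable RGB composite.

    channels: list of three 2-D arrays (red, green, blue components)
    ranges:   list of three (min, max) tuples used for linear scaling
    gamma:    optional gamma correction applied after scaling
    """
    bands = []
    for band, (lo, hi) in zip(channels, ranges):
        # Clip to [0, 1] after linear scaling, then apply gamma
        scaled = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
        bands.append(scaled ** (1.0 / gamma))
    return np.dstack(bands)  # H x W x 3 array with values in [0, 1]

# Illustrative use with synthetic brightness-temperature-like data
ir_a = np.random.uniform(200, 300, (4, 4))
ir_b = np.random.uniform(200, 300, (4, 4))
vis = np.random.uniform(0, 100, (4, 4))
rgb = to_rgb_composite([ir_a, ir_b, vis],
                       [(200, 300), (200, 300), (0, 100)])
```

In practice the inputs would be calibrated FCI channel or channel-difference arrays, and the scaling ranges and gamma values come from the recipe of the specific composite being built.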

Monday 23 June 16:15 - 17:45 (Room 1.34)

Session: E.01.02 Earth Observation in Practice: Linking Public and Commercial Data for Earth Action - PART 2

Recent advances in Earth observation – more sensors, open data policies, and refined analysis methods – have yielded considerable progress in environmental monitoring. These advances can support land managers, policy makers, and other end users, and enable impactful Earth Action. Future developments, such as the increasing range of public and commercial sensors, will further accelerate this progress; but also raise new challenges around the operational delivery of EO-based solutions. In particular, taking concepts from research and development into active practice is not simple. End users often have requirements that are unexpected, and not satisfied by conventional methods or products. Often, end users require spatial, temporal, and thematic detail at a level unobtainable from public missions alone (e.g. Sentinel-1, -2, Landsat 8 and 9), necessitating the incorporation of commercial sensors.

In line with ESA’s upcoming EO Science Strategy 2040, this session aims to explore environmental monitoring at the interface where it crosses from the research and design domains into practice. We welcome presentations that: demonstrate Earth observation-based work undertaken in collaboration with all manner of end users; discuss the issues and challenges commonly encountered in these collaborations; and/or examine the synergistic use of public and commercial EO data. Ultimately, this session aims to develop best practices for collaborations between the EO and end-user communities, ensuring that investments in research and new missions can be rapidly and meaningfully implemented into end-user portfolios.

Monday 23 June 16:15 - 17:45 (Room 1.34)

Presentation: Land Consumption Monitoring on Insula EO Platform From Spot 6 and PlanetScope Very High-Resolution Satellite Imagery

Authors: Erminia De Grandis, Gaetano Pace, Alessandro Marin, Simona Gargiulo, Beatrice Gottardi, Roberto Di Rienzo, Alen Berta, Fanny La Rue, Gianluca Brunelli, Sara Romagnoli, Marzia Franceschilli, Mariano Focareta, Gianna Ida Festa, Giacomo Pavanello
Affiliations: CGI Italy; CGI Germany; CGI France; MapSat, Italy; Planetek, Italy
The increasing urbanization and expansion of infrastructure have accelerated the loss of natural ecosystems and biodiversity, making the continuous monitoring of land use and soil consumption crucial. Current approaches rely either on Sentinel data from the Copernicus programme, which offers continuous updates but has limitations in spatial resolution, or on high-quality aerial and lidar surveys, which are costly, complex, and risk becoming obsolete by the time they are available to users. This results in a fragmented information landscape, with inconsistencies in both spatial resolution and update frequency. Insula is an innovative EO platform operated by CGI Italy, designed to handle the processing of large-scale EO data efficiently. By leveraging cutting-edge technologies, it provides powerful tools for big data analytics, anomaly detection, and predictive modelling for the in-depth study of Earth-related phenomena. Insula provides a set of functionalities designed to support and optimize the use and analysis of EO data: the possibility to integrate new services and datasets driven by user needs thanks to its standard interfaces; resource scalability to optimize large-scale processing, enabling the execution of processing campaigns; and a robust monitoring system that provides real-time supervision to identify problems and make rapid adjustments for greater efficiency and accuracy. This paper presents the outcomes of a project that has fully exploited the Insula platform to implement a processing chain aimed at generating thematic maps focused on land consumption. In detail, the processing chain generates an annual reference map of land consumption on a national scale, ensuring the monitoring of the territory with a four-month update frequency thanks to a semi-automatic change detection process that compares the reference map with new optical acquisitions over specific areas of interest defined by the end user.
The comparison creates a new support layer that includes only potential newly consumed land, which is then further refined by photo-interpreters to achieve the desired thematic resolution and accuracy based on the taxonomy adopted by the end users. The 3 m spatial resolution is ensured by the multitemporal acquisition of commercial VHR data: SPOT 6 and PlanetScope products. Automatic classification is achieved through a combination of spectral analysis and a deep learning model, specifically a UNet architecture trained with SPOT 6 VHR imagery, used in the dynamic urban landscape to identify and outline the exact position and shape of buildings. Launched in November 2023, the project has shown promising results, with the Insula platform demonstrating excellent performance in generating 3 m resolution soil sealing maps for the whole Italian national territory (approximately 300,000 km²) in a challenging time window of 3 months. The integrated and harmonized dataset is available to end users through a user-friendly interface that provides a set of functionalities for data analysis and management, regardless of their background in Earth Observation. The Insula platform allowed access to the full Copernicus archive along with the SPOT 6 and PlanetScope datasets and in situ measurements uploaded to the platform for data production and validation. The combined use of public and commercial EO data led to an improvement of the final land consumption maps, expanding our capacity to monitor, predict and respond to Earth's dynamic systems. The Insula platform proved to be a suitable framework to foster interoperability, allowing users to develop, execute, and share code in a collaborative environment, enriching the data collection, tools and ready-to-use notebooks, and promoting algorithm development.
Finally, the project represents a valuable initiative of collaboration with end users that has highlighted the need for standardization of rules and user requirements within thematic map generation.
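The semi-automatic change detection described in this abstract compares an annual reference map against a newly classified acquisition and keeps only candidate pixels of new land consumption for photo-interpretation. A minimal sketch of that raster comparison (the class code and array shapes are illustrative assumptions, not the project's actual taxonomy or data format):

```python
import numpy as np

BUILT = 1  # illustrative class code for consumed/sealed land

def new_consumption_layer(reference, update):
    """Flag pixels classified as consumed in the update raster but not
    in the reference map; only these candidates are passed on to
    photo-interpreters for refinement."""
    if reference.shape != update.shape:
        raise ValueError("rasters must be co-registered on the same grid")
    return (update == BUILT) & (reference != BUILT)

# Tiny co-registered example rasters
ref = np.array([[0, 1],
                [0, 0]])
upd = np.array([[1, 1],
                [0, 1]])
mask = new_consumption_layer(ref, upd)
# mask is True only where consumption is newly detected
```

In an operational chain the two inputs would be the national reference map and the UNet classification of a new SPOT 6 or PlanetScope acquisition, co-registered on the same grid.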

Monday 23 June 16:15 - 17:45 (Room 1.34)

Presentation: Understanding Oil Spill Dispersal Dynamics in Complex Coastal Systems from High-Resolution Synthetic Aperture Radar

Authors: Elizabeth C. Atwood, Mark Warren, Dr. Daniel Clewley, Dr. Thomas Jordan
Affiliations: Plymouth Marine Laboratory
Oil spills in marine and coastal environments present severe ecological, economic, and public health challenges, making the monitoring of their occurrence and dispersal dynamics crucial for mitigation efforts. Large-scale spills often receive global attention and monitoring resources, but smaller, chronic spills remain underrepresented despite their significant contribution to global oil slick totals. Recent research by Dong et al. (2022), utilizing freely available satellite data resampled to 100-meter resolution, highlights that small-scale oil slicks account for a much larger proportion of global oil pollution than previously assumed. However, the study’s reliance on satellite data with revisit times spanning multiple days and limited spatial resolution leaves the EO-based nearshore dispersal dynamics of hydrocarbons poorly understood. These limitations underscore the need for advanced monitoring capabilities to better capture the dispersal behavior of oil spills, particularly in ecologically sensitive coastal areas that are difficult to monitor by boat or plane. The growth of commercial Earth Observation (EO) Synthetic Aperture Radar (SAR) systems provides an opportunity to fill this gap. SAR platforms, capable of capturing high-resolution imagery (10 meters and below) irrespective of cloud cover or lighting conditions, are increasingly accessible through commercial multi-platform satellite constellations. Notably, Capella Space Corp.’s constellation offers sub-meter spatial resolution and the ability to acquire multiple observations per day, representing a significant leap forward in oil spill monitoring capabilities. These advancements pave the way for near real-time, high-resolution monitoring of oil slick dispersal in complex nearshore environments, offering critical insights into spill dynamics on timescales of hours rather than days.
Despite these technological breakthroughs, the application of very high-resolution SAR data for oil spill detection remains underexplored. Existing approaches relying solely on Sentinel-1 imagery (with spatial resolution of 10 meters) are constrained by revisit intervals of 5-10 days, which often prove insufficient for dynamic nearshore environments. While optical satellites have shown promise in detecting oil slicks through sun glint patterns, their dependence on cloud-free conditions limits their reliability in operational scenarios. Ocean current hydrodynamic models can be used to interpolate spill movement between satellite observations but require significant regional specificity to produce accurate predictions, which is often unavailable for many parts of the Global South where oil exploitation has risen rapidly in the last decade. This highlights a global disparity in the capacity to assess and respond to oil spill events effectively. The OSCSAR project, developed through a partnership between Plymouth Marine Laboratory (PML) and Capella Space Corp., seeks to address these challenges by leveraging Capella’s state-of-the-art SAR data together with publicly available Sentinel-1 data. The project has developed a novel oil spill detection algorithm tailored to Capella’s high-resolution imagery. Building on experience with Sentinel-1 data, the team employs a three-part machine learning process of (i) dark object clustering followed by (ii) feature calculations and (iii) classification of the dark objects. The classification stage implements both a random forest and a multi-layer perceptron classifier on the feature set of each dark object, and results are combined to give a probability that the detected dark object is oil or a look-alike. This approach aims to significantly enhance detection accuracy and enable near real-time monitoring of oil spill dispersal dynamics in the nearshore environment utilizing both commercial and public EO data. 
By advancing SAR-based oil spill monitoring, OSCSAR directly supports global efforts to mitigate the environmental and societal impacts of oil spills. The utility of OSCSAR outputs for improved national monitoring capabilities is being discussed with a variety of end users in Ghana and Papua New Guinea. Coastal ecosystems, particularly in under-resourced regions, are often at heightened risk due to a lack of regionally tuned hydrodynamic models and insufficient on-the-water monitoring infrastructure. The novel capabilities developed through OSCSAR provide valuable tools for assessing the trajectory and impact of oil slicks in such vulnerable areas, enabling effective containment and cleanup decision making with the resources available.
Reference: Dong Y et al. (2022). Chronic oiling in global oceans. Science 376, 1300–1304. DOI: 10.1126/science.abm5940
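The detection pipeline described in this abstract ends with a random forest and a multi-layer perceptron classifying each dark object's feature set, with the two outputs combined into a probability that the object is oil rather than a look-alike. A minimal scikit-learn sketch of that ensemble step (the synthetic features, labels, and the simple probability averaging are assumptions for illustration; the actual OSCSAR feature set and combination rule are not detailed in the abstract):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for per-dark-object features (e.g. shape, contrast)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = oil, 0 = look-alike

# Train the two classifiers on the same feature set
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

def oil_probability(features):
    """Average the two classifiers' 'oil' class probabilities."""
    p_rf = rf.predict_proba(features)[:, 1]
    p_mlp = mlp.predict_proba(features)[:, 1]
    return 0.5 * (p_rf + p_mlp)

probs = oil_probability(X[:5])  # one probability per dark object
```

Combining two differently biased classifiers this way tends to smooth out individual model errors; a threshold on the combined probability then separates likely oil slicks from look-alikes such as low-wind areas.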

Monday 23 June 16:15 - 17:45 (Room 1.34)

Presentation: Partnering for Impact: Knowledge and Technology Transfer for EO from Public Institutions - Collaborative Approaches for End-User Adoption

Authors: Alison Beamish, Robert Behling, Romulo Goncalves, Jennifer Wenzel, Daniel Scheffler, Johannes Knoch, Daniela Rabe, Friederike Kaestner, Martin Roffeis, Arash Madadi
Affiliations: Helmholtz Zentrum Potsdam, Deutsches GeoForschungsZentrum (GFZ)
The last five years have seen important financial sectors such as environmental liability insurance and environmental regulatory compliance adopt EO data into their workflows. Public agencies are also eager to incorporate these data into their monitoring and forecasting workflows. This increasing demand is pushing EO further into the mainstream. However, the rapid growth in demand alongside the range of available sensors has resulted in large knowledge gaps for stakeholders looking to use EO data in their operations. The benefits of using EO data are clear, but many entities don’t know where to begin. Decisions around the optimal spatial, spectral, and temporal resolution of sensors can be overwhelming to non-experts, and the handling and creation of actionable information from these data is not trivial. Dedicated Knowledge and Technology Transfer (KTT) activities from within public research institutions are key to developing best practices to engage effectively with stakeholders. KTT activities provide an unbiased interface to: 1) effectively define and consolidate user requirements through early outreach activities, 2) set clear goals through iterative co-design and agile development, and 3) ensure meaningful stakeholder uptake through effective education and training. FERN.Lab is a working group dedicated to KTT of remote sensing and EO research at the German Research Centre for Geosciences (GFZ). Our user outreach activities in the public agency sector show a strong need for high-resolution data and pilot projects on obligatory monitoring activities, and have clearly demonstrated the importance of commercial (e.g., Planet Labs, Pleiades) and other high-resolution data (e.g., UAS data, airborne orthophotos) for achieving the desired outcomes. Also in the commercial sectors of precision agriculture and carbon Monitoring, Reporting and Verification (MRV), the resolution of freely available data alone often does not meet needs.
We share our experience integrating commercial optical EO data (e.g., Planet Labs, Wyvern) into our transfer activities while prioritizing open and FAIR principles. We will highlight some of our ongoing projects in close collaboration with the public sector and industry. For municipal environmental agencies in Berlin, Germany, we combine open Sentinel-2 data with commercial data (e.g. PlanetScope, Pleiades, GeoEye) as well as airborne orthophotos and UAS surveys to support the quantification and evaluation of urban green space, facilitating nature conservation tasks that accompany urban development. In the scope of agricultural carbon MRV, we team up with industry partners to evaluate the best ways to monitor soil organic carbon by comparing open Sentinel-2 and EnMAP data with commercial hyperspectral data of higher spatial resolution (e.g. Wyvern). Finally, we will present our work using high-resolution terrestrial X-band radar and publicly available C-band data to improve hydrological models and optimize agricultural water use, working closely with agricultural professionals in Brandenburg, Germany. We aim to share our experience and best practices through our diverse transfer activities across the German ecosystem.

Monday 23 June 16:15 - 17:45 (Room 1.34)

Presentation: How Space Intelligence has Used Open Satellite Data from the ESA and Copernicus to Contribute to Protecting and Restoring Forests: Case Study of Success, and Lessons for the Future.

Authors: Edward Mitchard, Dr Paula Nieto Quintano, Euan Mitchell
Affiliations: Space Intelligence
Space Intelligence is a private company dedicated to creating the datasets needed to halt tropical deforestation and to plan and monitor forest restoration globally. It was founded in 2018 by researchers from the University of Edinburgh, UK, and has grown into a company of 60 people, the majority of whom are remote sensing, data or ecological scientists. Space Intelligence has produced maps to enable, monitor, and conduct due diligence on forest carbon protection/restoration projects across 20 countries globally, ranging from Scotland and the USA to Brazil, Peru, Ghana, Tanzania, Cambodia and Indonesia. These have been for governments, large corporations (e.g. Apple), and forest project developers (e.g. the Indonesian company Forest Carbon). The company also runs a service to create high-quality data for supply chain monitoring under the EU's Deforestation Regulation (EUDR) in partnership with ICE, the owner of the New York Stock Exchange. This involves creating and continuously updating 10 m resolution maps over 39 tropical countries. All this is only possible thanks to the European Space Agency's and the European Union's (with support from the UK through Copernicus) provision of high-quality, operational satellite datasets free of charge for commercial use. In particular, Space Intelligence relies on Sentinel-1 SAR and Sentinel-2 optical satellite datasets to create its maps of land cover and forest carbon storage. The partnership between these taxpayer-funded satellite systems and private companies like Space Intelligence runs deep. It involves more than just the launching of the satellites: the easy provision of data, ideally hosted on all major cloud computing providers, and long-term commitments to data continuity are essential. Copernicus and ESA have done well with these commitments and on data provision, but there is further work that could be done on both counts, as this talk will highlight.
Space Intelligence provides a strong case study of EO data provision driving economic growth and doing good for the world. However, there are activities that could make it easier to set up and run this type of analysis, which would enable more such companies to exist and to grow faster. This will be explored further with examples in this talk.

Monday 23 June 16:15 - 17:45 (Room 1.34)

Presentation: Land Use Change Identification (LUCI) for Rail: Lessons from User-Centric Design of a Land Use Monitoring Framework for Rail Infrastructure

Authors: Andrew Tewkesbury, Thomas Higginbottom, Tom Harling, Liam Harris, Dominik Rains, Frazer Christie, Michael Hall, Amy Mclean, Pasquale Iervolino
Affiliations: Airbus Defence & Space Ltd., Geospatial Business UK
Railways and rail infrastructure are vital transportation and economic assets for many countries, yet are vulnerable to a wide range of natural hazards, including flooding, fire, landslides, mudflow and tree collapse. Furthermore, rail infrastructure can itself exacerbate hazards, with powerlines igniting fires, or embankments blocking flow and causing flooding. Monitoring railways via manual inspection is time consuming and expensive, and often subject to access constraints that limit the frequency of resurvey work. An alternative approach is offered by recent advances in Earth Observation (EO) data and analysis, which have enabled a variety of hazards to be monitored and assessed in a systematic, efficient manner. Facilitating the diffusion of these developments from scientific research into rail operator-based management practices, however, remains an ongoing challenge. From 2021-2023, Airbus Geospatial Business UK undertook the Land Use Change Identification for Rail (LUCI) project, in collaboration with Network Rail - the UK’s rail infrastructure manager. This ESA Space4Rail-funded project aimed to increase the uptake of space technology by facilitating end-user-focused product development. Under the framework of the LUCI project, Airbus developed a range of land use and land-use change monitoring products, based on high-resolution Pleiades, Sentinel-1, and Sentinel-2 data. Airbus employed a range of machine learning and object-based classification methods to fuse the Sentinel and Pleiades-derived information layers, resulting in highly detailed, accurate data products. These products were published on a web service and optimised for a streamlined ingestion into Network Rail’s operational management portfolios. This talk will focus on the process of taking EO technology into new monitoring contexts. Particular focus will be devoted to the challenges of aligning EO analysis with very specific end-user requirements. 
Meeting these end-user requirements is the stated goal of many EO projects, yet what users want or need may not be obvious to scientists and developers, who typically work within pre-defined taxonomies and guidelines. LUCI benefited from a highly engaged stakeholder and an iterative development process, ultimately allowing processes and outputs to be refined into more useful products. The benefits and challenges of this work process will be discussed, and the lessons learnt presented.

Monday 23 June 16:15 - 17:45 (Room 1.34)

Presentation: Commercial Satellite Imagery and Copernicus Services for Discovering Pollution Sites and Sources

Authors: Luca Cicala, Cesario Vincenzo Angelino, Donato Amitrano, Sara Parrilli, Francesco Gargiulo, Marco De Mizio, Giuseppe
Affiliations: Italian Aerospace Research Centre
The use of remotely sensed data is essential for effective environmental monitoring and territory management. To this end, data quality, in terms of spatial resolution and frequency, and acquisition cost are two key variables to be considered, also in light of the need to build archives useful for historical analysis. It is therefore crucial to design effective strategies for integrating the medium-resolution data provided free of charge by space agencies with data acquired by commercial platforms, usually characterized by higher resolution and thus essential for investigations at fine scale. In recent years, the Italian Aerospace Research Centre has employed optical and SAR satellite data to support public administrations in monitoring and fighting environmental crimes. The main challenges of this activity were to carefully analyze the objectives of the end users and to integrate the results obtained using satellite technologies within the currently adopted processes, which mainly rely on traditional in situ inspections and patrolling based on unoptimized scheduling and/or spontaneous reports from citizens. In this presentation, we will show how satellite data can contribute to the processes of the public administration by analyzing two case studies, emblematic for southern Italy, related to the soil and water environmental matrices. For the soil matrix, we will discuss the usefulness of multiscale remote sensing for monitoring illegal micro-dumping sites in rural areas and detecting potential pollution sources, such as greenhouses. For the water matrix, we will investigate the contribution of satellite data to the monitoring of illegal spills into rivers, detecting potential pollution threats.
In both cases, the focus is on the employment of Copernicus downstream services and commercial imagery, respectively, for territory characterization at medium scale and the detection of environmental criticalities at fine scale, thus showing how satellite-derived information can be included within existing monitoring processes for discovering environmental crimes and supporting law enforcement.

Monday 23 June 16:15 - 17:45 (Hall L3)

Session: B.03.08 Earth Observation in support of the energy transition - PART 2

The energy sector is the largest contributor to greenhouse gas emissions, making the transition to green, sustainable energy imperative while ensuring its resilience and affordability. Countries worldwide have set ambitious targets and established associated regulatory frameworks to drive the transition, which requires a transformation and reshaping of the energy sector and market, driven by tailwinds such as increased investment and financial incentives, digitalization and technological innovation. This session explores the role of integrated Earth observation services in navigating the complexities of the energy transition landscape and providing insights to support decision making among various energy stakeholders, including policymakers and industrial actors. Topics in the scope of this session include (but are not limited to):

• Relevant EO initiatives, products and integrated solutions serving the energy transition in Europe and beyond. Examples can cover the use of EO in diverse applications related to energy transition, sustainability and resilience across various segments of the energy value chain (e.g. energy policy formulation and enforcement, energy planning and resource management - including demand characterisation, site selection and resource assessment - energy production and operations, storage, transportation, distribution, and consumption, energy efficiency and performance monitoring, environmental impact assessment, infrastructure monitoring and resilience, hazard and risk assessment, decommissioning etc). Examples focusing on operational use-cases and solutions addressing final users and stakeholders are encouraged to illustrate the current uptake of EO-integrated solutions for the energy transition.

• Evolving user requirements in the energy sector and gaps in current EO capabilities, along with potential developments and solutions to effectively address them.

• Challenges in scaling the adoption of EO-integrated services in the energy sector and fostering full vertical integration, including challenges in resource alignment, difficulties in effectively combining EO and non-EO data and products, and concerns related to data accessibility, standardization, licensing, privacy and capacity barriers. Potential recommendations and best practices on EO data and EO-integrated service provision tailored to the energy sector are also within the scope of this session.

• Leveraging technological advances and innovative analytics to accelerate the energy transition (e.g. AI-driven predictive analytics).

This session is addressed to individuals, stakeholders and energy sector representatives with interest in the use of EO for the energy transition including:

• Policy-making entities: Governmental authorities and agencies, national or regional regulatory bodies.

• Industrial stakeholders: Grid operators, energy and utility companies, energy investors, traders and asset owners, energy project development agencies, energy consulting firms etc.

• EO data and service providers for energy-related applications from both public and commercial offerings, as well as energy system integrators.

Monday 23 June 16:15 - 17:45 (Hall L3)

Presentation: Novel Approaches to Support Safe Energy Operations & Decommissioning Activities

Authors: Georgia Karadimou, Charlotte Bishop, Alastair Cannell
Affiliations: KSAT
The development of new technologies and services in decommissioning and environmental management is becoming crucial to the energy sector. Increasingly, remote sensing technologies are being used as a complementary reconnaissance tool to existing ground services, providing context and remote monitoring while reducing risk and targeting ground activities more effectively. At the same time, almost all oil-producing countries have public organizations that oversee the regulation, monitoring and supervision of the exploration, production and transport of hydrocarbons to ensure safe and clean operations as well as decommissioning, in compliance with national and international law. The European Space Agency (ESA) has funded two projects supporting the energy sector and the energy transition: 1) DECOM, focused on the decommissioning of offshore energy assets, and 2) AVISAR, which aims to identify, characterize and support the safe and clean management of hydrocarbons and other activity onshore and offshore. Both demonstration projects, led by KSAT with key energy stakeholders including TotalEnergies and CODA, aim to assess the technical and business viability of Earth Observation data in support of the energy sector, as well as providing proactive solutions designed to be a hub for thematic services. By engaging with stakeholders from across the industry as part of the development, it is possible to provide a feedback loop into the development of the portal-based delivery solution to benefit the wider energy sector. The ultimate goal is that these projects become part of a centralised platform for service display, integration and delivery. The projects focus on key, and complementary, themes:

1. Oil spill monitoring around offshore installations
2. Vessel detection for offshore operations
3. Emissions and flaring at offshore and onshore assets
4. Onshore pipeline monitoring (leakage, vandalism and operations)
5. Debris and water quality monitoring around offshore assets

This presentation will share more details about the two projects, their status and feedback, as well as examples of the integration of these solutions into bundled product services providing comprehensive contextual information using a wide range of EO solutions in a visual and interactive way. This makes them accessible to different types of user needs and requirements and provides valuable information, enabling the energy sector to meet their own ESG reporting goals and obligations more efficiently.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L3)

Presentation: Spatial Regression-Based Study of Ground Movement Patterns Above Underground Gas Storage Facilities

Authors: Aleksandra Kaczmarek, Prof. Jan Blachowski, Anna Buczyńska, Dariusz Głąbicki, Piotr Grzempowski, Karolina Kawałko
Affiliations: Department of Geodesy and Geoinformatics, Faculty of Geoengineering, Mining and Geology, Wrocław University Of Science And Technology
The geological storage of energy sources is an important part of the energy sector. It refers to the storage of various materials, with underground gas storage (UGS) being the most common. UGS is used to maintain natural gas reserves during low-demand periods and to support the market during high-demand periods. Natural gas is responsible for generating 20% of electricity in the European Union and 22.2% globally. In 2023, there were 681 UGS facilities worldwide, with 61 under construction and more than 100 at different stages of planning. Advances in this industrial sector contribute to the green energy transformation, ensuring safe and stable reserves of clean energy. Increasingly, facilities are being converted and adjusted for the effective storage of hydrogen. Another important component is the geological sequestration of carbon dioxide (CO2). CO2, as a byproduct of the energy production process, is one of the major contributors to global warming and climate change. To minimise emissions to the atmosphere, it is injected and stored underground. Storage facilities are constructed in various geological formations, including aquifers and porous media, depleted oil and gas reservoirs, and rock caverns. The latter are of particular interest, since they are developed mainly in salt. The properties of rock salt make it a suitable host for natural gas, hydrogen and carbon dioxide, but also radioactive waste, creating conditions for safe storage and ensuring the long-term stability and tightness of the reservoirs. UGS interacts with the environment in a number of ways. The main impact observed on the surface is subsidence caused by cavern convergence. These movements can be overlaid with cyclical movements caused by pressure changes inside the caverns. As salt is a plastic medium, the effects of salt mining on the surface appear with a temporal delay and are observed decades after mine closure.
However, the characteristics of UGS operations are different, as the gas is alternately injected into and withdrawn from the storage caverns. This may lead to cyclical upward and downward movements of the land surface. This study is based on a case study of a UGS site in northern Poland, located in close proximity to the Baltic Sea. The facility includes 10 storage caverns developed in two clusters of 5 units, leached at a depth of approximately 1000 meters below the ground surface. The operation of the first 5 caverns began in 2014, while the remaining 5 were put into use in 2021. Since the start of exploitation, the maximum subsidence above them has reached approximately 10 cm. The top geological layers of the area are mainly Holocene and Quaternary peat with a shallow groundwater table, which may influence the quality of geodetic levelling results. In this study, the relationship between the observed ground movements and potential causative variables representing mining, geological and topographical factors was examined using spatial regression, with the purpose of identifying the statistically significant factors. It is based on open-access remote sensing data acquired by ESA Copernicus Sentinel satellites. Ground movements were calculated with SBAS-InSAR methods on Sentinel-1 data for the 2015-2024 period in ISCE (InSAR Scientific Computing Environment) and MintPy (Miami InSAR time series software, Python) and subsequently decomposed into vertical and horizontal components. Due to the diverse land cover types in the area of interest (AOI), and vegetation in particular, the spatial distribution of the InSAR-derived results is uneven, and there are areas located north of the AOI with missing data. Thus, the analysis is limited to regions with complete data coverage. A grid of 50 meters was selected for the representation of the dependent and independent variables.
The latter included the distance from the cavern clusters and the thickness of the peat based on geological drilling data. Due to the topography and geology of the area, it is prone to local waterlogging and flooding. Thus, the independent variables included distances from the sea, from surface water bodies (namely the main water streams), and from underground water intake wells. Moreover, the depth of the groundwater table was incorporated. To consider the influence of topography, a digital elevation model was used. Further, Sentinel-2 imagery was used to calculate average values of the normalized difference vegetation index (NDVI) representing vegetation cover within the AOI. All variables were defined in order to capture the complexity of the area of interest. After exploratory data analysis of the variables, the dependence of observed ground movements on the potential causative variables was analysed with multiscale locally weighted regression and random forest regression. The results of the regression analysis indicate the potential significance of the thickness of the peat layers and the distance from groundwater aquifers, in addition to the distance from the UGS facility. Nevertheless, as the analysed UGS facility is relatively young (10 years of operation) and small (compared to other UGS operations in e.g. Germany), the ground subsidence registered to date is not large. Furthermore, the facility does not operate in full cycles, as the operating company maintains stable, high gas reserves, so the seasonality of ground movements observed in the exploratory data analysis is not directly related to the operation of the UGS. The fluctuations may instead be caused by the annual cycle of the moisture content of the peats. Depending on the amount of water, peats may soak up rainwater or dry out during seasonal droughts, causing changes in the land surface.
Another challenge is posed by the vegetation cover that also undergoes seasonal changes and limits the application of radar interferometry. The proposed approach considered the analysis of spatial relationships between ground movements and potential causative environmental components. The temporal variability was not taken into account in this preliminary study; further research on this subject will be conducted with a deep learning methodology. The work was carried out as part of the Closed-Loop Impact Monitoring for Environmentally and Socially Acceptable Energy Transition in Rural Regions (acronym: CLEAR) supported by the National Research and Development Centre, Poland [WPN/4/67/CLEAR/2022].
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L3)

Presentation: Support sustainable wind energy multi-use

Authors: Enora Lecordier, Dr Pierre Gernez, Dr Krysia Mazik, Marie Kelly, Pr Rodney Forster
Affiliations: Energy and Environment Institute, University of Hull, Nantes Université, Institut des Substances et Organismes de la Mer, ISOMER, School of Environmental Sciences, University of Hull, Offshore Renewable Energy Catapult, Hull Marine Laboratory, School of Environmental Sciences, University of Hull
Energy is central to the 2030 Agenda for Sustainable Development and the Paris Agreement on Climate Change. The need for sustainable energy is a driving force for the offshore wind industry. In Europe, the ambitious target of net zero by 2050 accelerates its transition and expansion. For this purpose, space at sea is required, with predictions showing a very crowded North Sea by 2050. This rapid, large-scale expansion can be received negatively by other sea stakeholders, especially those also in need of expansion to meet their own targets. This is the case, for instance, for fisheries and aquaculture, which need to significantly increase production to meet the 2050 food (especially protein) demand. As conflicts increase, so does the need for marine spatial planning (MSP). One branch of MSP is enabling human activities to share space, transport, infrastructure or resources, in other words, enhancing the multi-use of the marine environment. Co-locating the offshore wind (OSW) industry with other activities has become a topic of interest for sustainable economic growth in several countries. Sharing space between aquaculture and OSW would tackle both food and energy security at the same time. In this study, an assessment of multi-use between the OSW industry and the cultivation of blue mussels was carried out to help build an understanding of the feasibility of such a project from a biological and ecological perspective. This Earth observation (EO)-integrated project included several aspects and scales of study. First, suitable OSW farms for co-location with Mytilus edulis cultivation were selected using a spatial multi-criteria evaluation (SMCE) at a European scale. Among others, EO data such as sea surface temperature (OSTIA), chlorophyll-a and suspended sediment (GlobColour processed multi-sensor products), and wave height (altimeter) were used to identify the OSW farms where M. edulis can develop quickly at low risk for the stakeholders, both at present and in the future (2040-2050 prediction). It was shown that many OSW farms in the southern part of the North Sea are already very suitable for mussel cultivation and will continue to be in the future. The study revealed that more co-location options will become available under climate scenarios for 2050, as the number of OSW farms in very suitable regions will increase significantly. Then, the use of a virtual constellation composed of Sentinel-2 and Landsat-8/9 allowed the quantification of the impact of OSW turbines on suspended particulate matter concentration in the water column, and EO imagery of the hydrodynamic and aerial forces applied to the structures allowed the prediction of sediment wakes downstream of the structures (fine-scale analysis). This revealed that sediment wakes can be predicted in the first few hundred metres downstream of bottom-fixed turbines, leading to a reduction of the euphotic zone in the area. Finally, the previously developed sediment model was used to reinforce a dynamic energy budget (DEB) model analysis to model a potential mussel farm's yield within an OSW farm previously identified as suitable. The DEB model was forced with Sentinel-3 data on chlorophyll-a and temperature, interpolated onto the Landsat-8/9 and Sentinel-2 grid used for SPM retrieval. This novel tool is aimed at marine spatial planners and policymakers for the energy transition, and supports multi-use project designs and environmental impact assessments. It is a good example of EO's potential for wind energy applications.
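The core of a spatial multi-criteria evaluation such as the one described above can be sketched as a weighted linear combination of rescaled criteria. The criteria, values and weights below are purely illustrative and are not those used in the study.

```python
# Hypothetical criteria for one OSW farm, each rescaled to [0, 1]
# (1 = ideal for mussel growth): sea surface temperature, chlorophyll-a,
# suspended sediment (inverted) and wave height (inverted).
criteria = {"sst": 0.8, "chl": 0.9, "spm": 0.6, "waves": 0.4}
# Hypothetical weights expressing the relative importance of each criterion.
weights = {"sst": 0.3, "chl": 0.3, "spm": 0.2, "waves": 0.2}

def suitability(criteria, weights):
    """Weighted linear combination: the core of a basic SMCE."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(criteria[k] * weights[k] for k in criteria)

score = suitability(criteria, weights)
print(f"suitability = {score:.2f}")  # 0 = unsuitable .. 1 = very suitable
```

Applied per pixel or per wind farm, such scores can be ranked to shortlist co-location candidates before finer-scale analysis.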
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L3)

Presentation: Assessing Climate Impacts on Global PV Energy Generation with Machine Learning and Satellite Data

Authors: Dr Rui Song, Mr. Feng Yin, Prof Jan-Peter Muller, Adam Povey, Dr Chenchen Huang, Prof Roy G. Grainger
Affiliations: University Of Oxford, University College London, University of Leicester, University of Bath
Addressing the challenge of limiting global warming to 1.5°C above pre-industrial levels requires a rapid transition to renewable energy sources. Among these, photovoltaic (PV) solar energy has become a key focus due to its scalability and declining costs. However, PV power generation is influenced by atmospheric conditions such as aerosols, clouds, and temperature, which vary across regions and may be affected by climate change. While global PV capacity has grown to unprecedented levels, the extent of climate-related impacts on PV energy generation remains insufficiently understood, particularly at the facility level. Existing studies often rely on generalized assumptions about atmospheric impacts, overlooking the variability in PV deployment across different geographic and environmental contexts. This underscores the need for more granular analysis to evaluate PV performance under real-world conditions. This study employs machine learning and satellite data to identify PV installations and assess energy losses caused by climate-related atmospheric factors. The machine learning model is trained on diverse datasets to enhance the detection of PV systems in various settings, including those with complex terrains and mixed land use. Facility-level PV data is then integrated with satellite and reanalysis datasets to quantify the effects of aerosols, cloud cover, and temperature on solar energy generation for the last decade. Our findings suggest substantial variability in PV energy losses across regions, driven by differences in atmospheric conditions and climate dynamics. For example, areas with high aerosol concentrations experience significant reductions in solar irradiance, while fluctuations in cloud cover impact both energy generation and grid reliability. Extreme temperatures are also found to reduce PV efficiency in certain locations. 
These results highlight the importance of addressing site-specific challenges to improve the resilience of PV infrastructure and maximize renewable energy output. This research aims to contribute to ongoing discussions on the global energy transition by shifting the focus from installation growth to understanding energy losses. By providing a detailed analysis of facility-level impacts, this study seeks to inform strategies for optimizing PV deployment and addressing climate-related challenges in renewable energy systems.
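A commonly used first-order model of the temperature effect mentioned above (not necessarily the model used in this study) scales PV output linearly with irradiance and applies a temperature coefficient relative to standard test conditions:

```python
def pv_power_kw(p_stc_kw, irradiance_w_m2, cell_temp_c, gamma=-0.004):
    """First-order PV output model: linear in irradiance, with a
    temperature coefficient gamma (per degree C) applied relative to
    standard test conditions (1000 W/m^2, 25 degrees C)."""
    return p_stc_kw * (irradiance_w_m2 / 1000.0) * (1.0 + gamma * (cell_temp_c - 25.0))

# A hypothetical 100 kW array at full irradiance but a hot 45 C cell
# temperature loses 8% of its rated output.
power = pv_power_kw(100.0, 1000.0, 45.0)
print(round(power, 6))  # 92.0
```

The same functional form makes the aerosol and cloud effects discussed above enter through the irradiance term, and the heat effect through the temperature term.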
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L3)

Presentation: EO4Energy - Bridging Data Gaps in Energy Transition with Satellite-Based Mapping

Authors: Dr. Annekatrin Metz-Marconcini, Dr.-Ing. Mattia Marconcini, Cornelia Zygar, Dr. Zoltan Bartalis
Affiliations: German Aerospace Center - DLR, ESA-ESRIN
The global energy landscape is undergoing a critical transformation toward decentralization, digitization, and decarbonization to meet the ambitious goals of the 2015 Paris Agreement, aiming to limit global warming to well below 2°C. Despite significant progress, reliance on fossil fuels persists, underscoring the urgency of transitioning to sustainable energy sources. In this framework, wind energy is increasingly recognized as a highly efficient renewable option, thanks to falling costs driven by technological advancements and improved financing conditions. The growing deployment of wind turbines (WTs) worldwide is crucial for evolving power systems by decentralizing energy production. This not only helps alleviate transmission congestion but also enhances energy security and stability by generating power closer to its point of use. However, the inherent variability of wind power presents significant challenges, as turbine output fluctuates between periods of excess and shortfall. Overcoming these drawbacks demands precise energy modeling, which relies on accurate and comprehensive information on infrastructure locations; however, this is often inadequate or unavailable, hindering optimization of the grid and integration of renewable energy sources. The ESA EO4Energy project aims to bridge this critical data gap by leveraging advanced deep neural networks (DNNs) and freely available Sentinel-1/2 (S1/S2) satellite imagery to produce detailed maps of onshore WT locations. These maps are key to improving energy modeling and grid management by providing the precise, high-quality data necessary for effective renewable energy integration. Additionally, the project targets coal power plants (CPPs), a key contributor to CO2 emissions and an essential component of the current energy landscape. While CPPs are traditionally associated with fossil fuel dependence, they also hold potential for transformation.
By retrofitting them into heat storage power plants, CPPs could complement wind energy systems, addressing the variability of wind power by storing surplus energy during periods of high production and releasing it during low-wind conditions. This innovative approach underscores the dual focus of EO4Energy: enhancing renewable energy integration while exploring pathways to repurpose existing infrastructure for a more stable and sustainable energy future. Despite the existence of geospatial databases for training DNNs, comprehensive global datasets specifically tailored to energy infrastructure types remain unavailable. To address this gap, significant effort has been invested in EO4Energy to curate available information from diverse global sources. For WTs, very high-resolution imagery from Google Earth and Bing Maps was employed to precisely geolocate approximately 25,000 turbines across 50 test sites, strategically selected to ensure representation across all major climate zones, enhancing the robustness and generalizability of the proposed solution to diverse environmental and geographic conditions. The same approach has been applied to precisely locate 2,300 CPPs distributed all across the world. The implemented solution relies on a key hypothesis: the temporal behavior of energy infrastructures, combined with their spatial arrangement, exhibits distinct patterns compared to all other land cover classes. Hence, given all S1 and S2 scenes available over the targeted regions of interest in the selected time interval (i.e., the year 2023), key temporal statistics (e.g., temporal mean, minimum, maximum, and standard deviation) were first extracted for: i) the original VV/VH backscattering value in the case of S1 data, as well as polarimetric decomposition indices; and ii) the 4 S2 bands available at 10m resolution (after performing cloud and cloud-shadow masking), along with all possible corresponding normalized difference indices. 
To optimize processing time, only the most relevant features—five from S1 and 24 from S2—were ultimately retained, selected for their suitability in accurately detecting WTs and CPPs. These were then used as inputs to train specific detection models, employing a fine-tuned Faster R-CNN object detection algorithm with a ResNet-50 FPN backbone, modified to handle 29-band imagery. The Faster R-CNN framework was chosen for its proven effectiveness in object detection tasks. Implementation and testing were carried out using Python and the PyTorch framework. Model validation involved internal assessments with 20% of the training data set aside and post-training evaluation using independent data not employed during training. Final results demonstrate the models' high accuracy in identifying individual WTs and CPPs, with independent F1 scores of 0.79 and 0.827, respectively. These performance metrics highlight the robustness of the approach and its readiness for large-scale deployment. Furthermore, a case study conducted for the entire country of India illustrates the feasibility of scaling the methodology to diverse regions and energy landscapes. These findings emphasize the potential of the proposed solution to significantly enhance the precision of energy infrastructure mapping, supporting better grid integration, site selection, and policy development.
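The feature-extraction step described above can be sketched as follows. The backscatter time series and band reflectances are invented for illustration, and the actual EO4Energy pipeline of course operates on full S1/S2 scenes rather than single pixels.

```python
import statistics

def temporal_features(series):
    """Per-pixel temporal statistics of the kind used as DNN input
    features (temporal mean, minimum, maximum, standard deviation)."""
    return {
        "mean": statistics.fmean(series),
        "min": min(series),
        "max": max(series),
        "std": statistics.pstdev(series),
    }

def normalized_difference(band_a, band_b):
    """Generic normalized difference index, e.g. NDVI = (NIR - Red) / (NIR + Red)."""
    return (band_a - band_b) / (band_a + band_b)

# Hypothetical one-year VH backscatter time series for one pixel (dB).
vh = [-17.2, -16.8, -18.1, -15.9, -16.4, -17.7]
feats = temporal_features(vh)

# Hypothetical S2 reflectances: NIR = 0.42, Red = 0.08.
ndvi = normalized_difference(0.42, 0.08)
print(feats, round(ndvi, 2))
```

Stacking such statistics for the retained S1 and S2 features yields the multi-band input (29 bands in the project) fed to the detection model.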
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall L3)

Presentation: Explore support for geothermal energy potential assessment in Austria

Authors: Filippo Vecchiotti, Eszter Buga-Nyeki, Stefan Hoyer
Affiliations: Geosphere Austria
In the framework of the Green Transition Information Factories (GTIF) Austria project, GeoSphere Austria was assigned the task of assessing the shallow geothermal energy potential in Austria. The project consortium, led by EOX, aims to produce geospatial content from Earth Observation (EO) data, quantifying and helping to understand the complexities of transitioning our energy and mobility sectors to carbon neutrality. GeoSphere derives the specific thermal yield of shallow geothermal methods (horizontal collectors, geothermal baskets, ditch collectors) and creates the algorithm, which is deployed as a container on GTIF/EOxHub and an elastic cloud. ThermoMap for Europe, produced by Bertermann et al. (2015), represented the first attempt, in 2013, to produce a wide-area map of very shallow geothermal energy potential. In that paper, the thermal conductivity across the EU was evaluated spatially using soil data, a GIS-based humidity index, and simplified USDA texture triangle information. This approach was also replicated for Austria (Schwarz et al., 2022) with another method that relied on soil maps, air temperature and DEM-derived altitude. We kept the 2015 approach for the calculation of thermal conductivity, very shallow geothermal systems being dependent on climate conditions and soil thermal properties, while introducing new in-situ and satellite-derived variables such as bulk density, soil temperature, air temperature and soil moisture. Based on an equation proposed for ThermoMap, it is possible to evaluate the thermal conductivity from bulk density (BD) and pore size distribution (PSD) using the Rosetta model. The Rosetta model (Zhang and Schaap, 2017) is based on the van Genuchten and Mualem equations and calculates hydraulic properties from basic soil data.
For the algorithm calibration at the national scale, a soil map (with soil texture information) at 200 m resolution and in-situ data with soil moisture (TDR-derived), soil temperature and pF values will be used. The latter dataset will be fed into the model to recover the water retention curves, which will deliver the PSD for the calibration. Once the conductivity is calculated, the surface zone temperature will be established by extracting the monthly soil temperature for the requested time window, whereas the quarterly and yearly air temperature will determine, respectively, the shallow zone temperature and the deep zone temperature. By imposing the vertical thickness of the soil for the three zones (between 0 and 10 m), we can evaluate the heat flux, which is an important component for deriving the soil heat. Moreover, once the soil heat is established, the volumetric heat capacity can be calculated from the known monthly soil moisture. From the heat capacity and the thermal conductivity, the thermal diffusivity for selected areas of Austria is calculated. The same approach will be replicated and automated for the whole of Austria at a 1 km scale. The data used for the final product are: • Sentinel-1-derived soil moisture provided by the Copernicus Global Land Service • Sentinel-3 Land Surface Temperature from the Copernicus Data Space Ecosystem • ERA5-Land air temperature from the Copernicus Climate Data Store • the Harmonised World Soil Database (HWSD2) from FAO. The output product will provide thermal yield assessments for individual homeowners and municipalities, but is also considered well suited to other Austrian stakeholders such as: • the Austrian Conference on Spatial Planning (ÖROK), established by the federal government, provinces and municipalities to coordinate spatial development at the national level • the Austrian Geothermal Association • the Forschungsnetzwerk soil2heat • the Klima & Energiereferat der Stadt Baden bei Wien.
References:
Bertermann, D., Klug, H., & Morper-Busch, L. (2015). A pan-European planning basis for estimating the very shallow geothermal energy potentials. Renewable Energy, 75, 335-347. https://doi.org/10.1016/j.renene.2014.09.033
Schwarz, H., Jocic, N., & Bertermann, D. (2022). Development of a Calculation Concept for Mapping Specific Heat Extraction for Very Shallow Geothermal Systems. Sustainability, 14(7), 4199. https://doi.org/10.3390/su14074199
Zhang, Y., & Schaap, M. G. (2017). Weighted recalibration of the Rosetta pedotransfer model with improved estimates of hydraulic parameter distributions and summary statistics (Rosetta3). Journal of Hydrology, 547, 39-53. https://doi.org/10.1016/j.jhydrol.2017.01.004
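The final step described above, deriving thermal diffusivity from thermal conductivity and volumetric heat capacity, reduces to the standard relation alpha = lambda / C_v. The soil values in the sketch below are illustrative only, not taken from the project.

```python
def thermal_diffusivity(conductivity_w_mk, vol_heat_capacity_j_m3k):
    """Thermal diffusivity alpha = lambda / C_v, in m^2/s."""
    return conductivity_w_mk / vol_heat_capacity_j_m3k

# Illustrative values for a moist soil: lambda = 1.5 W/(m K) and
# C_v = 2.5e6 J/(m^3 K) give alpha = 6e-7 m^2/s.
alpha = thermal_diffusivity(1.5, 2.5e6)
print(f"{alpha:.1e} m^2/s")
```

Evaluating this cell by cell over the gridded conductivity and heat capacity layers yields the diffusivity map for the area of interest.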
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 1.14)

Session: F.02.08 Advancing Research and Development Through European-African Collaboration: EO AFRICA - PART 2

EO AFRICA stands for African Framework for Research Innovation, Communities and Applications in Earth Observation. EO AFRICA facilitates the sustainable adoption of Earth Observation and related space technology in Africa, following an African user-driven approach with a long-term (>10 years) vision for the digital era in Africa.

Coordinated by the EO AFRICA R&D Facility, several African-European R&D tandem projects collaborate, creating an active research community and innovation processes for the continuous development of EO capabilities in Africa.

This session aims to present EO AFRICA, a successful example of cooperation between ESA and the African Union Commission (AUC), by focusing on the preliminary results provided by the ongoing projects selected at the end of 2024.

This session will offer the opportunity to complement an existing peer-reviewed session (Harnessing the Power of Remote Sensing for Research and Development in Africa) with insights from projects that had not yet started by the close of the call for abstracts, and to have one or two VIPs introduce the session.

Moderators:


  • Nelly-Helen N. Ebruka - The University of Manchester
  • Zoltan Szantoi - ESA

Panelists:


  • Dr. Beatrice Asenso Barnieh - University of Energy and Natural Resources, Ghana - EO AFRICA African Research Fellow
  • Prof. Kamal Labbassi - Chouaib Doukkali University, Morocco – President of the African Association of Remote Sensing of the Environment (AARSE)
  • Dr. Baba-maaji Rakiya - Strategic Space Applications - Deputy Director, NASRDA - Nigeria
  • Prof. Dr. Abel Ramoelo - Executive Director | Earth Observation Programme | South African National Space Agency (SANSA), Pretoria, South Africa
  • Dr Mahaman Bachir Saley - Senior Scientific Officer African Union Commission - Enseignant Chercheur chez Université Felix HOUPHOUET BOIGNY de Cocody à Abidjan
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.49/0.50)

Session: A.08.02 Advances in the theory and methodology of SAR Oceanography - PART 2

In 2003, ESA's workshop on Coastal and Marine Applications gathered the community to present the findings of ESA-funded studies of SAR ocean applications and to prepare recommendations for future research. It was followed by the first SAR oceanography workshop, SEASAR 2006, entitled "Advances in SAR Oceanography from Envisat and ERS Missions", which established regular consultations with the science and applications community. Most recently, the SEASAR 2023 workshop (Svalbard) allowed scientists to meet and exchange their latest research findings and advances in support of new mission proposals, to prepare white papers, and to discuss the urgent need for dedicated field campaigns to strengthen the understanding of air-sea interactions and the coupling of the atmospheric boundary layer with the upper ocean. The applications span a broad range of research areas tailored to wind, waves, sea ice and surface currents, as well as application areas including, for instance, coastal management, the offshore industry, shipping, extremes and hazards.

We welcome contributions to research and applications within areas such as (not exclusive):
• Ocean Wind
• Ocean Current
• Waves
• Sea Ice
• Oil spill and ship detection
• New advances in SAR Oceanography: sensor synergy, Doppler methods and applications
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: SARWAVE: Sea-state parameters retrieval from Interferometric Wide Swath acquisitions of Sentinel-1 SAR

Authors: Frédéric Nouguier, Alexis Mouche, Antoine Grouazel, Lisa Maillard, Quentin Febvre, Bertrand Chapron, Nicolas Rascle, Fabrice Collard, Gilles Guitton
Affiliations: Ifremer, OceanDataLab
Ocean sea-state parameters have routinely been derived since the launch of Sentinel-1 from the dedicated "Wave Mode" acquisitions. In this mode, small imagettes of 20 km x 20 km are acquired a few hundred kilometres apart over the open ocean in a leap-frog pattern. Ocean parameters such as wind speed or significant wave height are derived separately from each imagette, without any temporal or spatial coherence. Large-swath acquisitions such as the Interferometric Wide (IW) and Extra Wide (EW) swaths open new possibilities to infer spatially coherent ocean structures, but rely on the Terrain Observation with Progressive Scans (TOPS) acquisition technique, leading to more complex data. Deriving sea-state parameters from such acquisitions, which mostly occur close to coasts and at the marginal ice zones, is desirable for studying processes such as wave-ice interaction, or spatially large ocean phenomena such as tropical cyclones. Mapping sea-state spatial variability over large areas is also a key asset for studying wave-current interaction, wind action (fetch), and wave propagation and dissipation (fireworks). The scientific objectives of the ESA SARWAVE project are numerous: 1. Develop, test and validate a new sea-state parameter retrieval algorithm from SAR data. 2. Apply this algorithm to Sentinel-1 Interferometric Wide Swath (IW) acquisitions and provide the oceanography community with a one-year experimental dataset over the European coasts; because of the large amount of data required, processing is carried out on the CREODIAS facility. 3. Investigate synergistic approaches with other sensors (FF-SAR altimetry, optical, ...). 4. Investigate a new directional wave spectrum retrieval algorithm (deep learning, AI, ...) and exploit the possibilities of the large IW swath. 5. Explore synergistic analysis in test case studies. IW L1 Single Look Complex (SLC) data have been chosen as the starting point of the SARWAVE algorithm.
SLC offers new possibilities compared to GRD, but the data are harder to handle because they are provided per subswath and in the raw SAR acquisition unit: the burst. However, having complex signals (magnitude and phase) allows temporal analysis within the synthetic aperture delay. For simplicity and practicality, the algorithm is divided into two steps. The first deals with the complex (slant-range, azimuth-time) radar geometry and the pre-processing steps (deramping, ...) to provide important radar parameters, such as sigma nought, sub-look cross-spectra, IMACS, Co-Cross Polarisation Coherence (CCPC), CWAVE parameters, incidence, ..., over an evenly spaced, ground-located grid. This intermediate product is called L1B. Thanks to the burst separation in SLC data, we have been able to conduct intra-burst (within the burst duration) temporal analysis, but also inter-burst analysis. Successive bursts of a subswath indeed have a slightly overlapping area (~2 km x 80 km) and are acquired about 2 seconds apart, enabling the computation of inter-burst quantities (cross-spectra, ...) with a significant delay, which opens new possibilities to study ocean kinematics (inter-IMACS, ...). Quantities processed in intra- and inter-burst areas are both provided in L1B products. The L1B products produced during SARWAVE are the entry products of the second step of the SARWAVE algorithm, but also of the wind assessment algorithm developed in another ESA project, ESAWAAI. The second step of the SARWAVE algorithm relies on a deep learning algorithm trained on a large collocation dataset. The latter is obtained from Sentinel-1 acquisitions (L1B) collocated with the wave model WaveWatch III, which provides sea-state parameters such as the significant wave height of swell and wind-sea, as well as peak periods. This second step is validated against an independent WW3 dataset, altimeter data and buoy wave measurements.
We will present the developed SARWAVE algorithm and its key steps, as well as its statistical performance. Interesting case studies will also be presented to emphasize the capability of wide-swath acquisitions to document large- and fine-scale ocean phenomena.
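The sub-look cross-spectra mentioned in the abstract can be illustrated with a minimal sketch: split a complex SLC burst into two azimuth sub-looks and correlate their intensity spectra. This is a simplified, hedged stand-in for the real L1B processing, which also requires deramping, spectral windowing and geometric resampling; the function name and parameters are illustrative, not SARWAVE code.

```python
import numpy as np

def sublook_cross_spectrum(slc, n_looks=2):
    """Form two azimuth sub-looks of a complex SLC burst and compute the
    cross-spectrum of their intensity images (illustrative only: real L1B
    processing also involves deramping, windowing and geometric resampling)."""
    az_spec = np.fft.fft(slc, axis=0)                    # azimuth spectrum
    half = slc.shape[0] // n_looks
    look1 = np.fft.ifft(az_spec[:half], axis=0)          # first sub-look
    look2 = np.fft.ifft(az_spec[half:2 * half], axis=0)  # second sub-look
    i1 = np.abs(look1) ** 2
    i2 = np.abs(look2) ** 2
    # cross-spectrum of the (mean-removed) sub-look intensities
    return np.fft.fft2(i1 - i1.mean()) * np.conj(np.fft.fft2(i2 - i2.mean()))

rng = np.random.default_rng(0)
burst = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))
xs = sublook_cross_spectrum(burst)   # complex cross-spectrum, shape (32, 128)
```

The same construction applied to the overlap region of two successive bursts, acquired about 2 seconds apart, would give the inter-burst quantities described above.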

Monday 23 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: On the Challenge of Oil Slick Lookalikes in SAR mapping: Transfer Learning With Limited Labels

Authors: Julien Vadnais, Benjamin A. Robson, Christian H. Eide, Malin Johansson, Rune Mattingsdal
Affiliations: University of Bergen, Norwegian Offshore Directorate, UiT the Arctic University of Norway
The monitoring of marine oil is one of the principal historical application domains of synthetic aperture radar (SAR) imagery. Notwithstanding the many advantages of SAR in this field, some critical mapping problems remain, such as the challenge of oil slick lookalikes. Distinguishing oil slicks from other slick-like shapes in SAR imagery has been an ongoing research topic for over twenty years. Indeed, numerous shapes of low radar backscatter complicate the task of accurately discriminating marine oil slicks from the surrounding waters. Low-wind areas, rain cells, biogenic oil, ship turbulence, and grease ice exhibit low backscatter, as does mineral oil. This similarity can lead to erroneous oil slick detection, also known as a Type I error (false positive). Within the field of geomatics, machine learning and deep learning models have revolutionized the performance and automation of geospatial analysis and large-scale mapping techniques. Popularized a few years ago for computer vision tasks, convolutional neural networks (CNNs) can prove a better alternative to threshold- or pixel-based approaches for SAR oil mapping, despite notoriously requiring large training datasets. In this presentation, we will first synthesize and compare the current advances in deep learning architectures for the semantic segmentation (or delineation) of marine oil slicks in spaceborne SAR imagery. At the same time, we will address the recurring challenge of limited labeled images, particularly those featuring slicks of natural origin. When hydrocarbons escape through fractures and interconnected permeable beds and rise into the ocean, the slow seepage of hydrocarbon liquids from the seafloor is referred to as natural hydrocarbon seepage (NHS), in contrast to emissions from anthropogenic sources. NHS is a key, yet under-researched, source of hydrocarbons in marine environments.
Historically, data scarcity, due to difficulties in monitoring remote areas and the challenge of continuous temporal observation, has hindered possibilities to study the ecosystem contributions and global behavior of NHS. Since CNNs tend to require a large amount of input data, the limited number of labeled images poses a key problem for model training and validation. Although much of the attention in the field understandably goes towards oil spill monitoring, publicly available data linking SAR images and oil spills remain scarce. Knowing this, we offer recommendations for improving SAR mapping methods with limited labels, which could help research on NHS. Our argument is based on transfer learning. In deep transfer learning, one can use pre-trained machine learning models which may not have been trained in the same context, in other words not necessarily with SAR or even any Earth observation images. Still using the example of marine oil slicks, our presentation discusses zero-shot learning models, such as the recent Segment Anything Model, and their potential for enhancing semantic segmentation performance in SAR imagery. In short, our presentation reviews the long-standing problem of lookalikes in oil slick SAR mapping. We discuss recommendations for improving SAR mapping methods in cases limited by a small amount of labeled data. Such cases could benefit from integrating transfer learning steps within the mapping framework, as shown by our early results on oil slicks originating from natural hydrocarbon seepage (NHS). The presentation will be relevant for researchers interested in integrating machine learning into SAR oceanography, working on important topics like oil spill mapping, but also on less-studied areas like NHS.
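Before any CNN or transfer-learning classifier is applied, candidate dark spots are typically first extracted from the backscatter image; the classifier then decides oil versus lookalike. A purely illustrative sketch of such an adaptive dark-spot detector follows (window size and the k-sigma threshold are assumptions, not the authors' method):

```python
import numpy as np

def dark_spot_candidates(sigma0_db, window=16, k=1.0):
    """Flag pixels whose backscatter falls below a tile-local threshold
    (tile mean minus k standard deviations), a naive dark-spot detector;
    window size and k are illustrative choices."""
    mask = np.zeros(sigma0_db.shape, dtype=bool)
    for i in range(0, sigma0_db.shape[0], window):
        for j in range(0, sigma0_db.shape[1], window):
            tile = sigma0_db[i:i + window, j:j + window]
            mask[i:i + window, j:j + window] = tile < tile.mean() - k * tile.std()
    return mask

rng = np.random.default_rng(42)
scene = rng.normal(-8.0, 0.5, (64, 64))   # open-water background, in dB
scene[20:30, 20:30] = -20.0               # synthetic slick (or lookalike)
mask = dark_spot_candidates(scene)
```

Every flagged region here is only a candidate: low-wind areas, rain cells and grease ice would be flagged just the same, which is precisely why the downstream discrimination step discussed in the abstract is needed.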

Monday 23 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Retrieving Ocean Surface Topography in the Presence of Systematic And Oceanic Wave Errors With Harmony

Authors: Andreas Theodosiou, Dr Marcel Kleinherenbrink, Dr Paco
Affiliations: European Space Agency, TU Delft
Sea-surface height (SSH) is one of the key geophysical parameters in the study of mesoscale and submesoscale ocean currents, tropical cyclones, and internal waves. Harmony, the European Space Agency's 10th Earth Explorer, will be the first instrument to retrieve relative SSH at the submesoscale over a continuous 250 km swath. Relative SSH will be an experimental product of the single-pass cross-track interferometry (XTI) phase of the mission. In the XTI phase, the two Harmony satellites will fly in close formation 350 km behind Sentinel-1D. The close formation is optimised for interferometric sensitivity and temporal coherence to allow the estimation of ocean surface topography. The topography of the surface is estimated by combining the SAR signals of the two Harmony companions interferometrically in a single pass. The motion and varying roughness of the surface, the bistatic operation of Harmony's instrument, and the squinted line of sight complicate the retrieval of the sea-surface height. The sea state modulates the radar scattering via several mechanisms: the cross-correlation between the height of the ocean surface and the scattering; the sampling of the surface at several points for a given range bin due to the elevation distribution of the waves, commonly known as range bunching; and the concentration and dilation of scattering elements during focusing due to the orbital motion of the waves (velocity bunching). These effects introduce a loss of coherence as well as a bias in the interferometric estimate. Furthermore, the temporal lag between the acquisitions used to form the single-pass interferogram leads to a phase contribution driven by the motion of the surface. We intend to use Harmony's fore and aft phase centres to estimate this component of the phase using along-track interferometry and remove it from the estimate of the surface height.
Finally, each Harmony companion will have its own oscillator leading to a phase term due to the lack of synchronisation. The error arising from the lack of synchronisation dominates the error budget of the relative SSH retrieval. The retrieval will have to estimate and correct for this with a data-driven method. In this paper we will simulate a realistic ocean scene using a coupled ocean-atmosphere model and use it to produce synthetic radar observables using the Harmony Scientific Workbench. We will then run the retrieval algorithm that will correct for the aforementioned effects and estimate the relative sea-surface height.
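The along-track interferometric (ATI) motion correction described above rests on the standard two-way relation phase = 4*pi*v*tau/lambda between interferometric phase, line-of-sight velocity and time lag. A hedged sketch of that conversion (the wavelength constant and function are illustrative, not the Harmony processor):

```python
import numpy as np

C_BAND_WAVELENGTH = 0.0555   # m, approximate Sentinel-1 C-band wavelength

def ati_los_velocity(s_fore, s_aft, time_lag):
    """Line-of-sight surface velocity from the along-track interferometric
    (ATI) phase between fore and aft acquisitions, using the standard
    two-way relation phase = 4*pi*v*tau/lambda."""
    phase = np.angle(s_fore * np.conj(s_aft))
    return C_BAND_WAVELENGTH * phase / (4.0 * np.pi * time_lag)

# synthetic check: a 0.5 m/s line-of-sight velocity with a 10 ms lag
tau = 0.01
true_phase = 4.0 * np.pi * 0.5 * tau / C_BAND_WAVELENGTH
v_est = ati_los_velocity(np.exp(1j * true_phase), 1.0 + 0.0j, tau)
```

Once the motion-induced phase is estimated this way, it can be subtracted from the XTI phase before converting the remainder to surface topography.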

Monday 23 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Tracking the Evolution of Ocean Wave Wavelengths Using Fully-Focused SAR Data From Sentinel-6 Radar Altimetry

Authors: Sergi Hernández Burgos, Michel Guerra, Ferran Gibert, Dr. Marcel Kleinherenbrink, Bertrand Chapron, Frédéric Nouguier
Affiliations: isardSAT, ESA-ESTEC, Ifremer
Swell, as a fundamental component of wave dynamics, plays a crucial role in oceanographic systems [1]. These long-period waves, generated by distant storms and weather systems, travel vast distances across open waters, impacting coastal environments and influencing a range of marine and coastal phenomena [2]. Accurate determination of swell wave parameters, such as the dominant wave period and significant wave height, is therefore of great interest in ocean research. Satellite-based sensors, such as the high-resolution SAR radar aboard Sentinel-1, provide valuable information on swell waves over extensive oceanic expanses [3]. While swell retrieval using SAR techniques has been extensively explored through satellites like Sentinel-1, observing swells via radar altimetry remains challenging. This is primarily due to the nadir-pointing geometry characteristic of radar altimeters, which introduces cross-track ambiguity, and the limited spatial resolution of earlier processing techniques. In radar altimetry, monitoring swells is crucial for discerning the influence of wind waves and swell on sea-state bias [4]. Additionally, swell-flagging is essential for identifying potential biases in retrackers [5]. Furthermore, altimetry-derived swell observations contribute to the cross-calibration of data from various platforms [6]. Traditionally, operational satellite radar altimeter missions, such as Sentinel-3 and CryoSat-2, have employed delay/Doppler processing [7], achieving spatial resolutions of around 300 meters—insufficient for short-period swell parameter estimation. However, advancements in processing techniques during the last years, such as the Fully-Focused SAR (FF-SAR) backprojection algorithm and the 2D frequency algorithms, have significantly improved resolution, achieving meter-scale precision along the track [8, 9, 10]. In 2022, Altiparmaki et al. 
demonstrated that radar altimeters, when processed with FF-SAR, can perceive swell energy as modulations in waveform tails [11]. Nevertheless, the unique nadir-looking geometry of radar altimetry differentiates its interpretations from conventional SAR. These differences lead to the unfeasibility of directly applying established SAR methods to retrieve long-wave parameters, complicating the development of a consistent closed-form algorithm for determining swell characteristics [12]. Ongoing projects, such as SARWAVE by the European Space Agency (ESA), are advancing research to enhance the understanding of changes in swell intensity, which aims to improve the estimation of ocean wave parameters [13]. Building on the foundations of previous works from [11] and [12], we developed a novel algorithm capable of determining the wavelength of long-traveling waves along the ocean processed with FF-SAR Sentinel-6 radar altimetry data. In this study, we demonstrate that the nadir-looking radar altimeter aboard Sentinel-6 is sensitive to changes in the wavelength of ocean waves in the order of meters. To achieve this, we analysed the dynamics of the modulation of waveform tails over thousands of kilometres of ocean, based on the understanding that longer-wavelength swells travel faster than shorter ones, and that satellites move significantly faster than all swells. The results have been compared with CFOSAT and ERA5 products, showing good agreement among them. This new research paves the way for estimating the impact of swell waves on key radar altimetry parameters: Sea Surface Height, Significant Wave Height and σ₀. References 1. The generation and propagation of ocean waves and swell. I. Wave periods and velocities. (1948). In Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences (Vol. 240, Issue 824, pp. 527–560). The Royal Society. https://doi.org/10.1098/rsta.1948.0005 2. 
van Dongeren, A., de Jong, M., van der Lem, C., van Deyzen, A., & den Bieman, J. (2016). Review of Long Wave Dynamics over Reefs and into Ports with Implication for Port Operations. In Journal of Marine Science and Engineering (Vol. 4, Issue 1, p. 12). MDPI AG. https://doi.org/10.3390/jmse4010012 3. H. Wang, A. Mouche, R. Husson, and B. Chapron (2018). Dynamic Validation of Ocean Swell Derived from Sentinel-1 Wave Mode Against Buoys, in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, 2018, pp. 3223–3226. 4. F. Collard, L. Marié, F. Nouguier, M. Kleinherenbrink, F. Ehlers, and F. Ardhuin (2022). Wind-Wave Attenuation in Arctic Sea Ice: A Discussion of Remote Sensing Capabilities. Journal of Geophysical Research: Oceans, vol. 127, no. 7, p. e2022JC018654, 2022. 5. F. Schlembach et al (2020). Round Robin Assessment of Radar Altimeter Low Resolution Mode and Delay-Doppler Retracking Algorithms for Significant Wave Height. Remote Sensing, vol. 12, no. 8, 2020. 6. Y. W. Yanmin Zhang and Q. Xu (2020). On the Nonlinear Mapping of an Ocean Wave Spectrum into a New Polarimetric SAR Image Spectrum. Journal of Physical Oceanography, vol. 50, no. 11, pp. 3109–3122, 2020. 7. R. K. Raney (1998). The delay/Doppler radar altimeter. IEEE Transactions on Geoscience and Remote Sensing, vol. 36, no. 5, pp. 1578-1588, Sept. 1998, doi: 10.1109/36.718861. 8. A. Egido and W. H. F. Smith (2017). Fully Focused SAR Altimetry: Theory and Applications. IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 1, pp. 392-406, Jan. 2017, doi: 10.1109/TGRS.2016.2607122. 9. Guccione P, Scagliola M, Giudici D. 2D Frequency Domain Fully Focused SAR Processing for High PRF Radar Altimeters. Remote Sensing. 2018; 10(12):1943. https://doi.org/10.3390/rs10121943 10. S. Hernández-Burgos et al (2024). A Fully Focused SAR Omega-K Closed-Form Algorithm for the Sentinel-6 Radar Altimeter: Methodology and Applications.
IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1-16, 2024, Art no. 5206016, doi: 10.1109/TGRS.2024.3367544. 11. Altiparmaki, O., Kleinherenbrink, M., Naeije, M., Slobbe, C., & Visser, P. (2022). SAR altimetry data as a new source for swell monitoring. Geophysical Research Letters, 49, e2021GL096224. https://doi.org/10.1029/2021GL096224 12. M. Kleinherenbrink et al (2024). Cross-Spectral Analysis of SAR Altimetry Waveform Tails. IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1-15, 2024, Art no. 4206615, doi: 10.1109/TGRS.2024.3402390. 13. SARWAVE project. Accessed: 7 December 2023. [Online]. Available: https://www.sarwave.org/ 14. NOAA National Data Buoy Center (1971). Meteorological and oceanographic data collected from the National Data Buoy Center Coastal-Marine Automated Network (C-MAN) and moored (weather) buoys. [NOAA Marine Environmental Buoy Database]. NOAA National Centers for Environmental Information. Dataset. https://www.ncei.noaa.gov/archive/accession/NDBC-CMANWx. (Accessed on 08-01-2024). 15. Hersbach, H., Bell, B., Berrisford, P., Biavati, G., Horányi, A., Muñoz Sabater, J., Nicolas, J., Peubey, C., Radu, R., Rozum, I., Schepers, D., Simmons, A., Soci, C., Dee, D., Thépaut, J-N. (2023). ERA5 hourly data on single levels from 1940 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS), DOI: 10.24381/cds.adbb2d47 (Accessed on 08-01-2024)
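The core idea of the abstract, extracting a dominant along-track wavelength from waveform-tail modulations and relating it to propagation speed through the deep-water dispersion relation c = sqrt(g*lambda/(2*pi)), which is why longer swells travel faster, can be sketched on a toy signal (this is an illustration, not the authors' algorithm):

```python
import numpy as np

def dominant_wavelength(modulation, dx):
    """Dominant wavelength of an along-track modulation signal, taken from
    the peak of its power spectrum (toy version of the spectral analysis)."""
    spec = np.abs(np.fft.rfft(modulation - np.mean(modulation))) ** 2
    freqs = np.fft.rfftfreq(len(modulation), d=dx)   # cycles per metre
    k_peak = freqs[int(np.argmax(spec[1:])) + 1]     # skip the DC bin
    return 1.0 / k_peak

def deep_water_phase_speed(wavelength):
    """c = sqrt(g * lambda / (2 * pi)): longer swells travel faster."""
    return np.sqrt(9.81 * wavelength / (2.0 * np.pi))

x = np.arange(0.0, 10000.0, 10.0)            # 10 m along-track posting
tail_mod = np.sin(2.0 * np.pi * x / 250.0)   # synthetic 250 m swell modulation
lam = dominant_wavelength(tail_mod, dx=10.0)
speed = deep_water_phase_speed(lam)
```

Tracking how `lam` evolves along thousands of kilometres of track, as the abstract describes, is what allows the swell systems to be distinguished and compared against CFOSAT and ERA5.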

Monday 23 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: A New Method for Surface-Current Retrieval: Polarization Diversity

Authors: Owen O'Driscoll, Dr Paco López-Dekker, Bertrand Chapron
Affiliations: TU Delft, Ifremer
A unique property of synthetic aperture radar (SAR) systems is their ability to estimate the Doppler velocity of the ocean surface. This Doppler velocity should be understood as an average of the line-of-sight (LoS) projection of the instantaneous velocities of each scatterer on the surface, weighted by their radar cross-sections. The local normalized radar cross section (NRCS) is generally modulated by the local slope of the surface caused by the presence of ocean waves. The correlation between local slopes and orbital velocities results in an effective Doppler velocity, often referred to as the wave-Doppler, even in the absence of any mean surface motion. In general, the measured Doppler velocity due to wind-driven waves is in the order of 10% to 20% of the surface wind (Chapron et al., 2005). Aside from this wave-Doppler, the measured Doppler velocity also includes the Line-of-Sight (LoS) projection of the actual mean surface motion, often referred to as the Total Surface Current (TSC). The capability to measure surface currents with SAR was first demonstrated decades ago (Goldstein and Zebker, 1987). Naturally, one would need to distinguish between wave Doppler and the current Doppler when interested only in the latter. Considering that the wave Doppler may easily exceed the surface-current of interest, an accurate separation is paramount. The wave Doppler also has intrinsic value beyond its use in targeting a TSC; it captures information on the wave state which has utility in data assimilation schemes. Contemporary wave-Doppler estimation techniques use empirical Geophysical Model Functions (GMF). These models, such as CDOP and XDOP (Mouche et al., 2012; Li and Lehner, 2013), are tuned to provide a good average fit as a function of wind speed. Thus, they can only be valid in the average sense. 
Neglecting departures from the mean may inadvertently lead to models capable of estimating the wave Doppler only when no surface-current gradients are present, an undesirable trait. More holistic models (e.g. Moiseev et al., 2020, 2022) consider additional information relating to the wave state, making them more suitable for disentangling the convoluted signal resulting from complex wind-wave-current interactions. Still, these models implicitly rely on the nonexistence of wind-wave-current interactions or require ancillary data such as directional wind, wave, and swell information. In this work we rely on an untapped resource for the estimation of the wave Doppler: polarization diversity. It is known that the measured Doppler velocity is polarization dependent, which should be attributed to a polarization dependence of the wave Doppler. The potential to exploit polarization diversity was already anticipated in the literature (Chapron et al., 2005; Wollstadt et al., 2017). From a signal model based on (Dop)RIM (Kudryavtsev et al., 2005; Hansen et al., 2012), we derive a closed-form expression for the wave Doppler as a function of polarized observables. We assume that the Doppler originating from the TSC is polarization independent. Therefore, the polarized Doppler difference removes the TSC component, providing direct insight into the wave-Doppler contributions. Through careful weighting of this difference, we obtain a quantitative estimate of the wave Doppler. We further introduce a simplified approximation tailored to practical use. When applied to real data, for which quantitative comparisons are lacking, the signal model is capable of distinguishing between features of oceanographic and atmospheric origin. Then, on simulated data, the performance is quantitatively assessed.
The results illustrate the unparalleled potential of dual-polarized Doppler and NRCS observations to infer contributions to the Doppler budget for a complex and dynamic array of wind-wave-current conditions.
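The separation principle, that the TSC Doppler is polarization independent while the wave Doppler is not, can be reduced to a toy two-equation model. The weighting factor alpha below is an illustrative stand-in for the (Dop)RIM-derived relation between the HH and VV wave-Doppler contributions, not a value from the paper:

```python
import numpy as np

def separate_doppler(v_vv, v_hh, alpha=0.8):
    """Split measured Doppler velocities into wave-Doppler and total surface
    current (TSC) parts, assuming the TSC is polarization independent and
    the HH wave-Doppler is a known fraction alpha of the VV wave-Doppler:
        v_vv = tsc + w,   v_hh = tsc + alpha * w
    so the polarized difference isolates the wave term."""
    wave_vv = (v_vv - v_hh) / (1.0 - alpha)
    tsc = v_vv - wave_vv
    return tsc, wave_vv

# synthetic check: tsc = 0.3 m/s, VV wave-Doppler = 1.0 m/s
v_tsc_est, v_wave_est = separate_doppler(0.3 + 1.0, 0.3 + 0.8 * 1.0)
```

The sketch shows why the weighting matters: as alpha approaches 1 the two channels carry the same wave signal and the difference becomes uninformative, which is where the careful (Dop)RIM-based weighting in the abstract comes in.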

Monday 23 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Modeling of the Co-Cross Polarization Coherence of C-Band Synthetic Aperture Radar

Authors: Aurélien Colin, Romain Husson, Alexis Mouche, Antoine Grouazel
Affiliations: Collecte Localisation Satellite, Ifremer
Estimation of surface wind over the ocean is critical for meteorological forecasting, marine operations, and offshore wind energy development. Estimates from SAR rely on analyzing sea surface roughness through Geophysical Model Functions (GMFs), which relate the Normalized Radar Cross-Section (NRCS) to wind speed and direction. However, this is an underdetermined problem that requires wind field priors, usually from atmospheric models, to resolve ambiguities. Recent studies have introduced the Co-Cross-Polarization Coherence (CCPC) as a promising parameter for supplementing the wind direction prior. However, despite initial advancements, previous models for the CCPC have shown limitations at high wind speeds, necessitating further refinement. This paper aims to develop an improved Polarized Geophysical Model Function (PGMF) for the CCPC, enhancing its applicability across a broader range of wind conditions. The study employs a dataset of tens of thousands of Interferometric Wide (IW) swath SAR observations from the Sentinel-1 mission, spanning a wide range of wind speeds and directions. The dataset particularly focuses on extreme wind conditions, deviating from natural wind speed distributions to capture rare high-speed events. The refined PGMF incorporates a log-normal function for the wind speed component, replacing the polynomial model used in earlier studies. This change ensures convergence at low wind speeds, addressing the diverging behavior observed with polynomial models. Additionally, the model accounts for the asymmetrical behavior of the CCPC with wind direction: separate wind speed components are modeled for different wind direction ranges. Results indicate that the improved PGMF reduces the Mean Absolute Error (MAE) of the CCPC estimate compared to earlier models. Computed for wind speeds in [0, 30] m/s, the log-normal model reduces the MAE by 38%.
If independent wind speed components are used for the four wind direction quadrants, the MAE is reduced by 60%. The refined model also captures key behavioral shifts in the CCPC at higher wind speeds, where sinusoidal modes with a period of 180° are replaced by a 360° period in the real part of the CCPC. The reason for this shift is still undetermined. Though the CCPC can currently only be obtained from IW products, a potential extension to Extra-Wide (EW) products is of interest for the study of tropical cyclones, as the current wind estimation is impacted by the time difference between the atmospheric model and the SAR observation, which incurs potential errors in the wind direction prior.
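The low-wind convergence argument for the log-normal wind-speed term can be seen in a short sketch: unlike a polynomial, a log-normal shape goes to zero as the wind speed goes to zero and decays smoothly at high winds. All coefficients below are illustrative placeholders, not the fitted PGMF values:

```python
import numpy as np

def lognormal_pgmf(u, a=0.05, mu=2.5, sigma=0.6):
    """Log-normal wind-speed term of a CCPC-style model: zero at u = 0,
    peaked at u = exp(mu), smoothly decaying beyond. Coefficients here are
    illustrative placeholders only."""
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    pos = u > 0
    out[pos] = a * np.exp(-(np.log(u[pos]) - mu) ** 2 / (2.0 * sigma ** 2))
    return out

u = np.linspace(0.0, 30.0, 301)   # wind speeds over the fitted range, m/s
ccpc_term = lognormal_pgmf(u)
```

A polynomial fitted to the same points can diverge outside the fitted range, which is the behaviour the abstract reports the log-normal form avoids.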

Monday 23 June 16:15 - 17:45 (Hall E1)

Session: A.09.11 Snow in the cryosphere: improving observations and modelling - PART 2

Snow is frequently present across cryosphere surfaces such as sea ice, ice sheets and shelves, glaciers, permafrost and soil, and is an important factor in key processes including mass balance, albedo and thermal insulation. The importance of snow is well known and increasingly recognised, with snow featuring in the World Meteorological Organisation set of Essential Climate Variables, including the recently added snow depth on sea ice. Changes in snowfall, snow cover and snow properties have far-reaching implications for water supply, modification of available sunlight affecting biological and chemical processes, climate feedbacks and many other aspects of the Earth system. Changes in snow physical properties can also bring both additional information and challenges for remote sensing, due to alterations in the interaction of electromagnetic radiation with snow, impacting our ability to collect satellite data over long time- and length-scales.

Studies of snow from in situ, laboratory, remote sensing and modelling studies can provide insights into the state of snow cover as well as changes, trends and future predictions. This session welcomes abstracts on any aspect of snow studies including physical properties and evolution, tools and techniques, modelling and application of statistical or AI methods, human, biological and chemical implications as well as snow dielectric properties which affect remote sensing retrievals. In this session, we invite presentations at all length scales from satellite through airborne, surface, microstructure and smaller. The session will explore insights into the status, changes and potential futures of snow on Earth and impacts e.g. on people and climate, as well as future directions for research, including techniques, modelling, instrumentation, mission design and retrieval algorithms.

Monday 23 June 16:15 - 17:45 (Hall E1)

Presentation: Challenging Traditional Methodologies Applied for Snow Depth Retrieval Using Near-Coincident Multi-Frequency Altimetry

Authors: Renée Mie Fredensborg Hansen, Dr. Henriette Skourup, Prof. René Forsberg, Dr. Arttu Jutila, Dr. Eero Rinne, Dr. Isobel Lawrence, Prof. Andrew Shepherd, Prof. Knut Høyland, Jilu Li, Fernando Rodriguez-Morales, Jeremy Wilkinson, Prof. Dr. Christian Haas, Dr. Tania Casal
Affiliations: DTU Space, Finnish Meteorological Institute, University Centre in Svalbard (UNIS), ESA - ESRIN, Centre for Polar Observation and Modelling, Northumbria University, Norwegian University of Science and Technology, University of Kansas, Center for Remote Sensing and Integrated Systems (CReSIS), British Antarctic Survey, Alfred Wegener Institute, ESA - ESTEC
Understanding microwave penetration capabilities at different frequency bands warrants further investigation to evaluate how accurately we can derive snow depth over sea ice from multi-frequency altimetry. Previous studies have investigated the use of dual-frequency spaceborne altimetry (laser/Ku-band or Ka-/Ku-band) for snow depth during winter seasons and created monthly snow depth composites, assuming the Ku-band radar scatters at the snow-ice interface and the laser/Ka-band at the air-snow interface (or is calibrated to it). Airborne studies, meanwhile, have shown discrepancies between current methodologies applied to the same sensor (“snow radar”), and few studies have evaluated the coincident Ka-/Ku-band airborne observations available. With the upcoming launch of the dual-frequency polar altimetry mission CRISTAL (Copernicus Polar Ice and Snow Topography Altimeter), expected in 2027/2028, there is a need to investigate whether dual-frequency altimetry along orbits is realistic and to what extent the assumption of complete and zero penetration is valid under different sea ice conditions. Here, we evaluate several airborne surveys along satellite orbits on which different combinations of sensors at different frequencies were flown, with the explicit purpose of challenging the penetration assumptions currently applied in traditional retrieval methodologies and of further leveraging the information available on microwave penetration. Specifically, we examine the Antarctic ESA/NERC CRYO2ICEANT22 campaign, which carried a laser scanner, an ultrawide-band (UWB) Ka-/Ku-band radar, and a UWB snow radar (S/C-band), and the Arctic 2017 dual-frequency (Ka- and Ku-band) airborne CryoVEx campaign, which additionally carried a laser altimeter.
The CryoVEx/ICE-ARC 2017 campaign further arranged near-coincident overflights with Operation IceBridge (OIB), which carried a snow radar and a laser (the Airborne Topographic Mapper, ATM); with the PanArcMIP EC/AWI Polar-5 aircraft, equipped with an airborne electromagnetic-induction (AEM) set-up; and with the NASA wide-swath topographic mapper (GLISTIN); as well as an in situ component (ground-based EM and magnaprobe data collection), with a ground team visiting 12 sites along relevant satellite orbits, split over two phases. This unique combination of sensors and observations covering different sea ice conditions can provide fascinating insights into the possibilities and limitations of microwave penetration and snow depth retrieval while applying traditional retrieval methods. Our research evaluates radar (Ka-band, Ku-band and snow radar) penetration capabilities, using the airborne laser scanner as a reference surface, to estimate the different contributing scattering horizons (singular or multiple, depending on the frequency band). We further derive estimates of snow depth from various combinations of frequency bands or re-tracked interfaces using established and novel re-tracking methodologies, and discuss the limitations of the methods over different sea ice conditions. Preliminary results show that established methods (the threshold-first-maximum re-tracking algorithm, TFMRA) for Ka- and Ku-band often place the re-tracked backscattering horizon close to or at the air-snow interface, challenging the assumption (and requirement) of complete penetration at Ku-band, while also indicating only partial penetration at Ka-band. Multiple peaks are identified in the Ku-band waveforms, coinciding with the locations of the air-snow and snow-ice interfaces seen by the snow radar. These multiple peaks indicate a possibility of estimating these interfaces from the airborne Ku-band data alone when strong reflections are present.
Additional work is required to further leverage the multiple scattering horizons identified in the different frequency bands. Future work will also include a comparison with in situ data to evaluate the overall accuracy of the methods and data.
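The TFMRA re-tracker referenced above can be sketched in a few lines: subtract the noise floor, find the first significant waveform maximum, then locate where the leading edge crosses a threshold fraction of that maximum. The noise estimation, peak criterion and 50% threshold below are simplified, illustrative choices:

```python
import numpy as np

def tfmra_retrack(power, threshold=0.5, noise_bins=10):
    """Threshold-first-maximum re-tracker (TFMRA) sketch: estimate the noise
    floor, find the first significant waveform maximum, then locate the
    leading-edge bin where power crosses `threshold` times that maximum,
    with linear interpolation to sub-bin precision."""
    p = power - power[:noise_bins].mean()      # remove noise floor
    first_max = int(np.argmax(p))              # fallback: global maximum
    for i in range(1, len(p) - 1):             # first local max above 10% of peak
        if p[i] >= p[i - 1] and p[i] >= p[i + 1] and p[i] > 0.1 * p.max():
            first_max = i
            break
    level = threshold * p[first_max]
    j = first_max
    while j > 0 and p[j - 1] > level:          # walk down the leading edge
        j -= 1
    if j == 0:
        return 0.0
    # linear interpolation between bins j-1 and j at the threshold crossing
    return (j - 1) + (level - p[j - 1]) / (p[j] - p[j - 1])

# synthetic waveform: flat noise, linear leading edge, slow trailing decay
wf = np.ones(40)
wf[10:21] = 1.0 + np.arange(11.0)
wf[21:] = 11.0 - 0.4 * np.arange(1.0, 20.0)
retrack_bin = tfmra_retrack(wf)
```

In the multi-peak Ku-band waveforms described in the abstract, re-running such a re-tracker on each identified peak is one conceivable way to place both the air-snow and snow-ice interfaces.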

Monday 23 June 16:15 - 17:45 (Hall E1)

Presentation: Validating SnowModel-LG on Arctic Sea Ice in The Summer with Remotely-sensed Snowmelt Onset and In Situ Data from the MOSAiC Expedition

Authors: Clara Vydra, Anne Braakmann-Folgmann, Jack Landy, Robert Ricker, Evgenii Salganik
Affiliations: Julius-Maximilians-Universität Würzburg, Arctic University of Norway, NORCE Norwegian Research Centre, Norwegian Polar Institute
Snow on sea ice plays a critical role in the Arctic climate system due to its high albedo, contribution to the Arctic freshwater budget, and influence on energy exchange processes. Furthermore, information on the snow mass is required when converting sea ice freeboard heights from satellite altimeter measurements into estimates of the ice thickness. Although satellite remote sensing methods – such as differencing ice surface elevation and snow surface elevation from radar and laser altimetry – provide valuable information at continuous spatial and temporal scales, they are less effective during the summer months, when temperatures are high and the snowpack contains liquid water. Similarly, wet snow acts as a blackbody, hampering snow depth retrieval from passive microwave satellites during the summer months. Consequently, snow depth on sea ice during the melt period remains a poorly constrained variable. Snow models like SnowModel-LG can bridge this gap by simulating snow conditions using climate data, sea ice motion observations, and atmospheric forcing. However, SnowModel-LG has not been thoroughly validated for summer conditions. To improve our understanding of the model's uncertainties and reliability, this study compares SnowModel-LG outputs with snowmelt onset information derived from satellite remote sensing products and with in-situ snow depth measurements collected during the MOSAiC campaign. Snowmelt onset dates derived from SMMR and SSM/I-SSMIS brightness temperatures provide information across the entire Arctic, while in-situ data collected from buoys during the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) expedition between October 2019 and September 2020 offer spatially explicit ground truth.
With the objective of analyzing the ability of SnowModel-LG to reproduce seasonal snow depth evolution and melt onset timing over sea ice, we investigated melt onset timings from the three datasets over different regions of the Arctic, comparing trends and focusing on the timing of events in snow thickness evolution rather than on magnitude. Preliminary results indicate that SnowModel-LG effectively captures key patterns of snow accumulation and melt and exhibits sensitivity to the spatial variability of snow. The findings are particularly valuable for enhancing the integration of SnowModel-LG with sea ice thickness calculations, contributing to advancing Arctic climate research and highlighting the potential of SnowModel-LG for broader applications in understanding snow-sea ice interactions.
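For intuition on the comparison of melt onset timings, one simple way to define a melt-onset date from a modelled or in-situ snow-depth time series is sketched below. This is a deliberately naive stand-in for illustration only; the satellite products used in the study instead derive melt onset from brightness-temperature signatures:

```python
import numpy as np

def melt_onset_index(snow_depth, window=5):
    """Melt onset as the first index, at or after the seasonal maximum of a
    `window`-day running mean, where that running mean starts to decline.
    A deliberately simple definition for illustration."""
    depth = np.asarray(snow_depth, dtype=float)
    smooth = np.convolve(depth, np.ones(window) / window, mode="valid")
    peak = int(np.argmax(smooth))
    for i in range(peak, len(smooth) - 1):
        if smooth[i + 1] < smooth[i]:
            return i
    return None

# synthetic season: accumulation, plateau, then melt
depth = np.concatenate([np.linspace(0.0, 1.0, 50),
                        np.full(10, 1.0),
                        np.linspace(1.0, 0.0, 40)])
onset = melt_onset_index(depth)   # falls near the end of the plateau
```

Comparing onset dates defined this way across datasets, rather than comparing depth magnitudes directly, mirrors the timing-focused evaluation described in the abstract.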

Monday 23 June 16:15 - 17:45 (Hall E1)

Presentation: UAV-Based Observations of HDRF and Albedo Over Snow-Covered Boreal Landscape

Authors: Henna-Reetta Hannula, Roberta Pirazzini, Julia Martin, Aleksi Rimali, Dr Aku Riihelä
Affiliations: Finnish Meteorological Institute, Victoria University of Wellington
Surface albedo retrievals from space-borne optical platforms are essential for understanding energy exchange processes, particularly in cold regions where snow and ice dominate. However, validating and refining these retrievals through in-situ measurements remain challenging as observations of spectral albedo and quantities approximating the bidirectional reflectance distribution function (BRDF) are scarce. Furthermore, the in-situ measurements are typically restricted to small footprints of a few tens of centimetres or a single fixed location, limiting their representativeness over heterogeneous landscapes. Leveraging recent advances in unmanned aerial vehicle (UAV) technologies, a study was conducted to characterize the spatial heterogeneity of surface spectral albedo and the hemispherical-directional reflectance factor (HDRF) (390–950 nm) over a heterogeneous, snow-covered boreal landscape in Sodankylä, Finland. During a brief 5-day period in April 2023, coinciding with one Sentinel-2 and one Landsat-8 overpass, vertical profiles of spectral albedo were measured up to an altitude of 100 m. The profiles were initiated over either snow-covered open bogs or sparse pine forests. To further characterize the area around the vertical profile centres, multi-angle HDRF measurements were conducted using a UAV-based platform at a coarse angular resolution of approximately 25°. The measurement footprint diameter for HDRF observations varied, spanning 6 m or 12 m over snow-covered bogs, and 16 m or 20 m over sparse pine forests. Additionally, UAV RGB mapping was performed to provide detailed site characterization, while daily measurements of snow-specific surface area (SSA), temperature, and density were taken to support the interpretation of UAV observations. Initial results demonstrate that spectral albedo variations along vertical profiles are directly linked to the footprint heterogeneity. 
Multi-angle HDRF patterns collected consecutively at the same centre point over snow on the same day were consistent across different footprint sizes of 6 m and 12 m. Notably, the range of measured snow HDRF values within the same pattern increased at longer wavelengths in the near-infrared (NIR) compared to the visible (VIS) spectrum. Reflectance decreased significantly from VIS to NIR at the lowest HDRF values, typically observed near nadir. In contrast, the difference in reflectance at the highest values, measured in the forward scattering direction, was more modest. Despite this, the directional distribution of HDRF values over snow at different wavelengths showed notable similarity. The impact of environmental changes was also evident. An increase in ambient temperature, accompanied by a slight decrease in SSA between consecutive measurement days, was most apparent in lower HDRF values at NIR wavelengths, which are highly sensitive to snow physical properties. Meanwhile, UAV-based HDRF measurements over sparse pine forest canopies were more complex to interpret, due to the presence of different tree parts and a mix of shadowed and non-shadowed snow. While UAV observations can be as labour-intensive as manual in-situ methods, they offer distinct advantages. UAV platforms provide adjustable measurement footprints and the ability to access remote or otherwise inaccessible locations, with greater flexibility in adapting to weather conditions, changes in plans, and cost constraints compared to traditional airborne campaigns. These methods have the potential to complement and improve our understanding of discrepancies between in-situ and satellite-derived measurements and albedo-related processes. Future work will focus on expanding UAV-based observations to cover a wider range of sun and snow conditions, further enhancing the robustness and applicability of these observations for diverse applications.

Monday 23 June 16:15 - 17:45 (Hall E1)

Presentation: Terrestrial Snow Mass Mission Simulator: Towards a Dual Ku-Band Satellite for Monitoring Terrestrial Snow

Authors: Justin Murfitt, Claude Duguay, Garrett Parsons, Al-Abbass Yousef Al-Habashneh, David Anez, Erik Mak, Chris Derksen, Benoit Montpetit, Mike Brady, Aurélien Fourmault, Vincent Vionnet
Affiliations: H2O Geomatics, C-Core, Airbus UK, Environment and Climate Change Canada, Canadian Space Agency
Terrestrial snow cover plays an important role in the planet’s climate system. In areas where snow cover persists year round or nearly year round, snow is important for planetary albedo and the energy budget. Seasonal snow cover is also important for water resources, providing contributions to streamflow and groundwater recharge. Monitoring terrestrial snow cover is therefore important; however, the extent to which it can be monitored is limited by the large spatial areas involved. Satellite remote sensing provides a solution to these challenges. Optical sensors can be used to track snow cover extent, while passive microwave sensors can be used to determine snow mass. However, these sensors are limited by a combination of which snow properties can be monitored (optical) and the resolution of the observations (passive microwave). Synthetic Aperture Radar (SAR) provides a solution, with higher spatial resolution and the capability to provide information on snow water equivalent and microstructure. While lower frequencies (L-, C-, X-band) have little interaction with dry snow, higher frequencies such as Ku-band can be used. The Terrestrial Snow Mass Mission (TSMM) has been in development since 2015 at Environment and Climate Change Canada in partnership with the Canadian Space Agency. The fundamental concept of TSMM is a novel measurement approach that exploits SAR measurements in a dual-frequency Ku-band observation mode at 13.5 GHz and 17.25 GHz. The next phase, highlighted in this presentation, is the development of a data simulator using the requirements identified in the initial Phase 0 analysis of TSMM. This presentation will introduce the key modules of the simulator, including the geometry module, SAR payload module, scene generation module, and product generation module. 
Simulated backscatter is calculated using the Snow Microwave Radiative Transfer (SMRT) model over a 2D gridded dataset which provides input snow properties and static properties such as soil type and topography. When comparing the simulated backscatter to known Ku-band sensitivities as well as observed Ku-band backscatter from different field sites across Canada, consistent patterns in backscatter were identified. Examples of various outputs from the simulator modules will be presented to showcase its utility. This simulator will support the scientific readiness of the measurement techniques and the generation of Level-1B backscatter products from the mission.

Monday 23 June 16:15 - 17:45 (Hall E1)

Presentation: Evaluation of Snow Depth on Sea Ice from Five Satellite Microwave Radiometer Retrievals

Authors: Dr. Gunnar Spreen, Dr. Philip Rostosky, Prof. Dr. Christian Haas
Affiliations: University of Bremen, Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research
Snow on sea ice is an important parameter in the Arctic climate system. Due to its insulating properties, snow has a direct impact on sea ice growth and melt. Compared to open ocean, snow has a very high albedo and therefore modifies the ice-ocean to atmosphere heat fluxes. In addition, snow depth variability influences under-ice biology, especially in spring. Knowledge about snow depth on sea ice is also important for accurate modelling of sea ice thickness and surface heat fluxes, as well as for sea ice thickness retrievals from satellite altimetry. Thus, to improve the understanding of the Arctic climate system, accurate observations of snow depth on Arctic sea ice are crucially needed. Given the vast area and temporal resolution requirements, satellites are the best tool to derive snow depth estimates on an Arctic-wide scale. Microwave satellite observations like those from AMSR2 or the future CIMR radiometers can be used to estimate snow depth on Arctic sea ice. The scattering cross section and emission of snow grains vary with frequency and with the number of available scatterers along the propagation path, i.e., with snow depth. In addition, the depth of the snow governs the temperature profile towards the ice surface and thereby also the microwave emission. By combining microwave radiometer observations at different frequencies, the snow depth on sea ice can be estimated. Suitable satellite microwave radiometers date back to the 1980s, and thus long-term climate data records of snow depth can be derived. There are, however, also several sources of uncertainty, e.g., the change of snow grain size and density with time, which impact the available satellite retrievals differently. In recent years, a variety of new snow depth algorithms have been developed using available airborne snow depth observations as reference. Most of the algorithms are trained and evaluated with snow depth observations in spring, covering only a limited area of the Arctic. 
Their performance through different seasons or on pan-Arctic scales is often not analyzed. We performed an in-depth analysis of five microwave radiometer snow depth algorithms and compared them with snow depth measurements from different in-situ and airborne campaigns with snow radars. The retrievals are all based on the same input data, i.e., AMSR-E and AMSR2, but use different methods and levels of complexity: two are based on the classical gradient ratio between two different frequencies (GR(37/19) and GR(19/7)); one uses machine learning trained on the airborne campaigns; one uses a multi-regression, also based on the airborne campaigns; and the last is a multi-parameter retrieval using an inversion of a physical forward model. Additionally, we analyze the seasonal cycle of retrieved snow depths by comparing the different algorithms to a snow evolution model. Our results show that most of the algorithms perform reasonably well in spring. However, the retrieved snow depth varies widely during early winter, and the seasonal cycle of the retrieved snow depths differs strongly between the algorithms. In addition, the sensitivity of the algorithms to snowfall events is strongly linked to snow grain types. This study shows that additional training data from early winter are crucially needed for the development of reliable satellite microwave radiometer snow depth algorithms.
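The gradient-ratio retrievals mentioned above can be sketched in a few lines. The following is an illustrative Python sketch only: the coefficients `a` and `b` are hypothetical placeholders, not the values used by any of the five algorithms compared in the study, which fit such coefficients against airborne/in-situ reference data.

```python
# Illustrative sketch of a gradient-ratio (GR) snow depth retrieval.
# Deeper snow scatters the higher-frequency channel more strongly,
# lowering its brightness temperature, so the gradient ratio becomes
# more negative as snow depth increases.

def gradient_ratio(tb_high: float, tb_low: float) -> float:
    """Spectral gradient ratio, e.g. GR(37/19) from 37 and 19 GHz
    brightness temperatures (in kelvin)."""
    return (tb_high - tb_low) / (tb_high + tb_low)

def snow_depth_gr(tb_high: float, tb_low: float,
                  a: float = 0.02, b: float = 7.0) -> float:
    """Linear snow-depth model: depth [m] = a - b * GR.
    a and b are placeholder coefficients for illustration only."""
    return a - b * gradient_ratio(tb_high, tb_low)

# A scene with stronger 37 GHz scattering (deeper snow) yields a larger
# retrieved depth than a scene with a near-zero spectral gradient.
print(snow_depth_gr(190.0, 220.0), snow_depth_gr(215.0, 220.0))
```

The machine-learning and forward-model inversions in the study replace this linear mapping with more complex functions of the same multi-frequency inputs.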

Monday 23 June 16:15 - 17:45 (Hall E1)

Presentation: Decadal observations of the regional and seasonal variability of snow depth on sea ice in the Weddell Sea

Authors: Stefanie Arndt, Dr. Christian Haas
Affiliations: Alfred Wegener Institute Helmholtz Center for Polar and Marine Research, University of Hamburg, Institute of Oceanography
Snow depth on sea ice is a critical climate variable, as it governs energy and momentum exchanges across the atmosphere-ice-ocean interfaces and plays a pivotal role in the sea ice mass balance. This is especially true for Antarctic sea ice, which usually remains snow-covered year-round. The snow column undergoes pronounced seasonal changes in density, thermal conductivity, water content, microstructure, and stratification, significantly influencing microwave properties and the energy and mass balances of the underlying sea ice. Satellite laser altimetry retrieves snow freeboard, but the conversion to sea ice thickness critically depends on accurate snow and sea ice property information. Similarly, Ku-band radar altimeters (e.g., CryoSat-2, Sentinel-3A/B) are affected by backscatter interactions within the snow column, complicating freeboard retrieval. Thus, to enhance our understanding of Antarctic sea ice volume and its temporal and spatial variability, it is essential to better characterize snow depth distributions across different seasons and ice regimes; this is vital for improving the accuracy of large-scale Antarctic sea ice thickness and volume products. The Weddell Sea, with its coexisting seasonal and perennial sea ice, is an ideal study region for exploring the regional and seasonal variability of snow depth. Here, we present the first comprehensive compilation of in-situ snow depth observations spanning the Weddell Sea's seasonal and perennial ice regimes since the 1990s. The dataset integrates measurements from manual observations, autonomous ice-tethered platforms, and shipborne observations. Our findings reveal no significant changes in snow depth over recent decades (up to 2022), despite declining sea ice extent in recent years. Instead, substantial regional differences emerge. 
Data from autonomous ice-tethered platforms (Snow Buoys) reveal that in the eastern Weddell Sea (seasonal ice), snow depth ranges from ~0.3 m in December to ~0.7 m in August, with pronounced summer ablation. In contrast, in the western Weddell Sea (perennial ice), snow depth varies from ~0.5 m in December to ~1.0 m between July and September. By merging these diverse datasets with their distinct seasonal and spatial coverage, we aim to create quasi-climatologies of snow depth for different ice regimes in the Weddell Sea. This dataset provides an essential reference for snow depth in the Southern Ocean, enhancing large-scale estimates of Antarctic sea ice mass balance and its regional and interannual variability.
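The merging of heterogeneous records into quasi-climatologies can be illustrated with a minimal sketch. This is pure Python with synthetic example values; pooling all platforms and years by month is an assumption for illustration, not the authors' actual processing chain.

```python
# Hypothetical sketch: pooling snow-depth records from mixed platforms
# (buoys, manual lines, ship observations) into a monthly quasi-climatology.
from collections import defaultdict
from statistics import mean

def monthly_climatology(records):
    """records: iterable of (month, snow_depth_m) tuples, any platform/year.
    Returns {month: mean snow depth in metres}, months in ascending order."""
    by_month = defaultdict(list)
    for month, depth in records:
        by_month[month].append(depth)
    return {m: mean(v) for m, v in sorted(by_month.items())}

# Synthetic values loosely echoing the seasonal range reported above.
obs = [(12, 0.31), (12, 0.29), (8, 0.72), (8, 0.68)]
print(monthly_climatology(obs))
```

A real compilation would additionally stratify by ice regime (seasonal vs. perennial) and weight for the very different sampling densities of the source datasets.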

Monday 23 June 16:15 - 17:45 (Hall G2)

Session: F.02.05 Capacity Building and Technology Transfer in Earth Observation

This session will focus on capacity building and technology transfer initiatives that aim to enhance the EO capabilities of developing countries. It will present successful programs and discuss strategies for effective knowledge and technology transfer. This includes best practices, case studies of successful capacity building initiatives and the challenges and opportunities of technology transfer.

Monday 23 June 16:15 - 17:45 (Hall G2)

Presentation: Data-Driven Climate Action - Breaking Down Barriers to Make Earth Observation Accessible to All

Authors: Nicki McGoh
Affiliations: Caribou Space
As the world faces more frequent and more catastrophic weather-related disasters, humanitarian aid budgets and capacity are being stretched to breaking point. However, despite an unprecedented volume of satellite Earth Observation (EO) data available, coupled with advancing predictive analytics capabilities and a wealth of geospatial capacity development programmes, many grassroots humanitarian organisations in low- and middle-income countries are operating in a data void. This represents a missed opportunity to use available technologies and data to drastically improve preparedness for and response to extreme weather events, such as flood and drought, in low-resource settings. Through a three-phase initiative with the UK Humanitarian Hub (funded by the UK Foreign, Commonwealth & Development Office), Caribou has explored system-level barriers that must be removed to promote greater climate resilience globally. Our work has focused on mapping out current usage of EO applications and understanding the experiences of non-technical humanitarian practitioners trying to access these products to support their work with the most vulnerable communities, with a focus on East Africa. Our research and community engagement activities have highlighted a knowledge and capability gap, as well as a need for a fresh approach to capacity building and technology transfer.

Results and Insights

Using ideas generated in collaborative community workshops as a springboard, Caribou has launched new tools and processes that aim to remove knowledge and capability barriers for non- or low-technical professionals. The presentation will demonstrate these tools and processes and share insights from a three-year work programme:

Foreground the insights, not the inputs: Policymakers and practitioners in many developing countries are tasked with making decisions and taking action on large intractable issues, often in data- and resource-poor environments. 
However, communication that focuses too heavily on the underlying satellite technology, or on the data processing techniques used to develop EO products, can alienate and overwhelm less technical practitioners. During our ideation sessions, we commonly spoke about ‘geographic data’ and ‘environmental information’ rather than satellite imagery or EO applications, to be inclusive of all participants.

Lead with the end user value proposition: Technology transfer is more likely to succeed when the end user recognises that the technology will deliver a tangible benefit for them. Here, the link with impact assessment and cost/benefit analysis is clear. Through this work we have developed an Evidence Synthesis tool - the GeoEvidence Explorer - which distils examples of past experiences using geospatial data to impact humanitarian operations. We also launched a ‘Technical Assistance’ facility and have provided hands-on support to four grassroots organisations to better understand effective techniques for helping them integrate new forms of data. Our facility focused on clearly understanding the operational processes and decision-making of the recipients to identify the key areas where additional insights, delivered through EO data, were required.

Provide signposting and navigation: The volume of EO data, analytics products, and training materials available, both publicly and commercially, may seem overwhelming to those less familiar with the technology. There is a clear role for organisations to ‘bridge’ the technology sector with potential end users in developing countries, to raise awareness of the potential and to guide those less familiar with the technical details towards the right resources for their needs. We have developed an interactive and engaging knowledge repository that uses ‘scrollytelling’ techniques and AI-enabled voice assistance to guide non-expert users towards the resources most relevant to their work. 
Transfer and embed skills in a sustainable way: EO capacity building is stronger when embedded in a broader geospatial, or even data literacy, initiative that promotes a wide set of skills. A more holistic curriculum that incorporates spatial data collection and data processing techniques, and enables the interpretation of EO products for specific decision-making needs, is more likely to be repeated and repeatable by those operating in these contexts, without an ongoing need for further training.

Degree of innovation and expected contribution to LPS

This presentation offers a new perspective within the LPS community, focusing on the perspectives of humanitarian organisations and aid practitioners exploring the use of EO in their operations. By hosting community workshops and holding ideation sessions with non-expert and less technical members of the sector, this work highlights important considerations for technology transfer initiatives and possible downfalls of one-off capacity building programmes.

Technical correctness and validation outcome

Each phase of this three-year endeavour has had a strong element of co-design and co-creation with the humanitarian community and, where possible, has leveraged existing stakeholders within the ecosystem through partnerships and consultation exercises. Thanks to this approach, our work and insights have been widely validated within the humanitarian community and shared with parts of industry. New tools and processes have undergone rigorous user testing and iterative development cycles.

Implications for ESA and Beyond

For the EO sector: increased understanding of the importance of the user context and capability, and consideration of products within a more holistic view of skills transfer. For ESA: advocating for investment in capacity building approaches that foreground the development of practical and actionable skills and demonstrate the value of the technology for the end user. 
For policy, international cooperation and service operationalisation: Generating insights and new techniques and approaches to improve the effectiveness and long-term sustainability of capacity building initiatives.

Monday 23 June 16:15 - 17:45 (Hall G2)

Presentation: CEOS WGDisasters: insights, contribution and showcases of international cooperation for capacity building and development initiatives

Authors: Dr. Deodato Tapete, Dr. Antonio Montuori, Dr Laura Frulla, Lorant Czaran, Andrew Eddy, Maria Virelli, Gianluca Pari, Simona Zoffoli
Affiliations: Agenzia Spaziale Italiana (ASI), Comisión Nacional de Actividades Espaciales (CONAE), United Nations Office for Outer Space Affairs (UNOOSA), Athena Global
The Working Group on Disasters (WGDisasters) was established in 2013 by the Committee on Earth Observation Satellites (CEOS, https://ceos.org) to ensure the sustained coordination of disaster-related activities undertaken by the CEOS Agencies, as well as to act as an interface between CEOS and the community of stakeholders and users involved in risk management and disaster reduction. The main pillars of WGDisasters are:
• Support Disaster Risk Management (DRM) authorities by means of satellite-based Earth Observation (EO) and science-based analyses.
• Support the United Nations Office for Disaster Risk Reduction (UNDRR) Sendai Framework, mainly with reference to Priority 1 “Understanding Risk” and Priority 4 “Build Back Better” activities.
• Support international initiatives such as the Group on Earth Observations (GEO), the International Charter Space & Major Disasters and the Copernicus Emergency Management Service (EMS).
• Raise the awareness of Governments, policymakers, decision-makers, and major stakeholders about the benefits of using satellite EO in all phases of DRM.
• Foster increased use of EO in support of DRM and DRR, and express related EO capabilities and needs.
In this framework, WGDisasters has undertaken ad hoc projects and initiatives on disaster management related to natural hazards (e.g., volcanic, seismic, landslide and flooding hazards), which are supported by CEOS space agencies and scientific EO communities/stakeholders on a best-effort basis:
• Wildfire Pilot [1], which aims to provide a basis for defining global priorities for active-fire monitoring and characterization, provides an inventory of EO systems suitable for active-fire monitoring, and explores existing gaps in wildfire EO capabilities (existing and proposed), thus suggesting a way forward for the community. 
• Flood Pilot [2], which aims to explore and demonstrate best practices for combining optical and Synthetic Aperture Radar (SAR) data to map floods and improve vulnerability, exposure and hazard information, reporting best practices for both low-Earth orbit (LEO) and geostationary (GEO) SAR data and developing a SAR-based flood mapping methodology in coordination with global stakeholders and users.
• Landslide Demonstrator [3], which aimed to demonstrate the effective exploitation of EO across the full cycle of landslide disaster risk management (preparedness, response, and recovery at global, regional, and local scales), including the possibility of a multi-hazard focus on cascading impacts and risk.
• Volcano Demonstrator [4] and its follow-on initiative (G-VEWERS), which aims to evaluate the utility of EO data for anticipating, detecting, and tracking volcanic eruptions and to support EO applications for volcanic disaster risk reduction worldwide, by providing global volcano observations via SAR and optical satellite data, eruption response and early warning capabilities, as well as assisting local capacity and training scientists at volcano observatories in EO data analysis.
• Seismic Hazard Demonstrator [5], which aims to develop and demonstrate advanced science products for rapid earthquake response, supporting active fault mapping and seismic exposure mapping with EO satellite data in close relationship with the International Charter.
• Recovery Observatory (RO) [6], which aims to provide integrated awareness to support recovery, inform PDNA, and provide pre- and post-disaster protocols to ensure medium-term monitoring, as well as capacity building assessment and implementation planning. 
• Geohazards Supersites and Natural Laboratories (GSNL) [7], which is articulated in Supersite projects with the aim of improving geophysical and geohazard assessment at local level and worldwide, by promoting the uptake of new science results for societal benefits through best-effort international partnerships.
Many of the above initiatives and projects have included capacity building and technology transfer activities, in order to train local stakeholders and technical officers in the use of EO data and to develop the same type of application products that CEOS WGDisasters may have provided during a specific activation or during the lifetime of a given international cooperation initiative. The purpose is to generate a longer-term impact of the WGDisasters activities, well beyond the single activation or experience, so as to translate the high-level objective of fostering increased use of EO in support of DRM and DRR into real and tangible expertise at stakeholder and end-user level. The motivation behind the choice of WGDisasters to add capacity building components to EO data dissemination and technical activities lies in the evidence that serving the needs of early adopters and new users of EO data requires a dedicated commitment to capacity development. In this sense WGDisasters promotes open data access, use and utility for DRM, also drawing on the EO data, hardware/software and training resources made available by CEOS Space Agencies, international stakeholders and user communities, thus helping to fill the major gaps recorded in terms of operations and services. Application development and co-development with user communities, and related capacity building opportunities, are fully integrated in all initiatives promoted and supported by WGDisasters. 
In this activity WGDisasters also coordinates with the CEOS Working Group on Capacity Building and Data Democracy (WGCapD), whose mission is to increase the capacity of institutions in less developed countries for effective use of EO data for the benefit of society and to achieve sustainable development [8]. WGDisasters has therefore gained substantial lessons learnt on the opportunities and challenges, as well as the intrinsic weaknesses and limitations, of undertaking capacity building through international cooperation, and the present paper aims to share these outcomes and considerations. In this regard, several showcases, including but not limited to some of the activations run within the Recovery Observatory, will be presented at the conference to highlight the contribution and benefits of WGDisasters to capacity building and development activities.
References
1. Wildfire Pilot: https://ceos.org/ourwork/workinggroups/disasters/wgdisasters-activities/wildfire-pilot/
2. Flood Pilot: https://ceos.org/ourwork/workinggroups/disasters/wgdisasters-activities/floods/
3. Landslide Demonstrator: https://ceos.org/ourwork/workinggroups/disasters/wgdisasters-activities/landslide-pilot/
4. Volcano Demonstrator: https://ceos.org/ourwork/workinggroups/disasters/wgdisasters-activities/volcanoes/
5. Seismic Hazard Demonstrator: https://ceos.org/ourwork/workinggroups/disasters/wgdisasters-activities/earthquakes/
6. Recovery Observatory: https://ceos.org/ourwork/workinggroups/disasters/wgdisasters-activities/generic-ro/
7. GSNL: https://ceos.org/ourwork/workinggroups/disasters/wgdisasters-activities/gsnl/
8. CEOS WGCapD: https://ceos.org/ourwork/workinggroups/wgcapd/

Monday 23 June 16:15 - 17:45 (Hall G2)

Presentation: The Copernicus LAC initiative: Transferable EO Solutions and Platform integration for Enhanced Disaster Risk Management in Latin America and the Caribbean

Authors: Caterina Peris, Dr. Alberto Lorenzo, Andrew Eddy, Fabrizio Pacini, Marco Chini, Mattia Marconcini, Michael Foumelis, Paolo Campanella, Paolo Farina, Dr. Rubén Ramo, Roberto Rudari, Patrick Matgen
Affiliations: Indra Espacio S.l.u., CIMA Foundation, Athena Global, WASDI Sarl, Luxembourg Institute of Science and Technology (LIST), Terradue Srl, Aristotle University of Thessaloniki (AUTh), Geoapp, The German Aerospace Center (DLR)
Latin America and the Caribbean (LAC) are among the most disaster-prone regions globally, accounting for 25% of natural disasters and 53% of economic losses from climate-related events. To mitigate these impacts, the European Union (EU) has launched the Copernicus LAC Hub in Panama, a regional initiative under the EU's Copernicus Program. This initiative leverages Earth Observation (EO) data to address Disaster Risk Management (DRM) and Disaster Risk Reduction (DRR) challenges while ensuring that solutions are transferable and adaptable across diverse institutional and geographic contexts. The EO Services Development project, a key component of the Copernicus LAC Hub, emphasizes the scalability and transferability of its services. Designed using open and reusable methodologies, the project ensures that EO-based solutions can be implemented across regions with minimal adaptation costs. By employing open-access Sentinel data from the Copernicus Program and co-developing solutions with local users, the project facilitates the seamless integration of EO services into national and sub-national DRM frameworks. This includes applications in flood event mapping, vegetation and drought monitoring, wildfire assessments, and terrain motion analysis, all of which are tailored to address the most pressing hazards in LAC. To maximize accessibility and operational sustainability, the project integrates these EO services into advanced processing platforms deployed in the upcoming Copernicus LAC Panama Data Centre. This data hub will function as a Sentinel mirror site and central processing infrastructure for the region, ensuring real-time access to EO data and services. Where local processing capabilities are preferred, the project makes algorithms and service chains available to end users, enabling independent operations within their existing technological ecosystems. 
These dual pathways—centralized through the data hub or decentralized for local use—enhance the adaptability and sustainability of EO services. An agile development process ensures iterative feedback from users and stakeholders, fostering ownership and ensuring the solutions are adaptable to diverse operational environments. Demonstration use cases, implemented in selected LAC regions, validate service functionality and scalability while building user capacity to deploy EO services autonomously. The pre-operational phase of each service includes building users' capacity to integrate the service into the regional data hub, ensuring a smooth transition to operational exploitation. This presentation will highlight the methodologies enabling the transferability of EO solutions, the strategic integration into the Copernicus LAC Panama Data Centre, and the localized implementation pathways. By focusing on scalable platforms and adaptable solutions, the project offers a model for leveraging EO to build resilience, not only in LAC but in other vulnerable regions worldwide.

Monday 23 June 16:15 - 17:45 (Hall G2)

Presentation: ESA EO Training Academy

Authors: Premysl Stych, Antonios Mouratidis, Daniel Paluba, Jan Svoboda, Josef Laštovička, Jun.-Prof. Dr. Andreas Rienow, Krištof Oštir, Connor Heeney, Francesco
Affiliations: Charles University
The ESA EO Training Academy is an innovative initiative developed with the support of the European Space Agency (ESA) to provide specialized training and resources for users in the field of Earth Observation (EO) techniques and applications. The project aims to cover a broad range of areas, from Earth science to operational uses of remote sensing from space, addressing the growing demand for expertise in these fields. This comprehensive training program is designed to meet the needs of various users, including universities, young scientists, professionals working with geo-information, and anyone looking to develop expertise in EO technologies. The core objective of the ESA EO Training Academy is to establish a platform for organizing a series of high-quality training sessions, courses, and webinars. These programs will focus on key topics related to EO, providing participants with the skills and knowledge necessary to work independently with cutting-edge EO data. The academy will offer access to an Online Learning Platform that will serve as a resource hub, enabling users to autonomously engage with and exploit Sentinel and other ESA EO data. Additionally, the platform will offer essential ICT tools and resources provided by ESA to ensure that users have everything they need to work effectively with EO data and applications. A key aspect of the project is the creation of a consortium of European universities, which play an important role in organizing and managing training materials, courses, and other educational content. These universities collaborate to design training programs that cater to the diverse needs of ESA's target groups, from students at the university level to post-doctorate researchers. By leveraging the expertise of these academic institutions, the academy will be able to offer up-to-date, high-quality content that aligns with the latest developments in EO technologies and applications. 
The training programs offered by the ESA EO Training Academy are particularly valuable for students at various stages of their academic careers, including undergraduate, postgraduate, and doctoral levels. By providing access to expert knowledge, practical resources, and hands-on experience with EO data, the academy aims to equip the next generation of scientists, researchers, and professionals with the skills they need to thrive in the rapidly evolving field of Earth Observation. Ultimately, the ESA EO Training Academy helps bridge the gap between academic education and practical applications in EO, contributing to the advancement of science and technology in Europe and beyond.

Monday 23 June 16:15 - 17:45 (Hall G2)

Presentation: Cloud Native Copernicus Platform for Latin America and Caribbean (LAC) region

#stac #cloud-native

Authors: Pedro Goncalves, Fabrizio Pacini, Florencio Utreras, Uwe Marquard
Affiliations: Terradue Srl, Universidad de Chile, T-Systems
The CopernicusLAC Platform is a regional Earth Observation (EO) infrastructure designed to address the unique challenges of the Latin America and Caribbean (LAC) region. Its primary objectives include facilitating easy access to Copernicus data, providing robust data processing capabilities, enabling seamless data exchange through federation, and supporting local institutions in achieving independent operational capacity. Developed under a European Space Agency (ESA) contract, the platform is a collaborative effort between Terradue, T-Systems, and the Universidad de Chile. By combining advanced cloud-native technologies with open-source solutions, the platform supports disaster risk reduction, environmental monitoring, and climate resilience efforts. The platform is designed around a cloud-native architecture that emphasizes scalability, flexibility, and interoperability. Kubernetes serves as the backbone for containerized application orchestration, enabling dynamic resource allocation and automated scaling to manage growing data demands, projected to exceed 22 petabytes by 2027. Metadata management adheres to the SpatioTemporal Asset Catalog (STAC) standard, providing efficient and user-friendly data discovery capabilities. Processing workflows leverage Common Workflow Language (CWL) and Argo Workflows to deliver systematic and on-demand solutions for transforming satellite data into actionable information. The platform’s design is aligned with the Earth Observation Exploitation Platform Common Architecture (EOEPCA) and compliant with Open Geospatial Consortium (OGC) standards and best practices. Middleware developed within the platform enables scalable data hosting, processing, and exchange, with all components released as open-source to promote reusability and customization by other initiatives. The platform’s data processing tools are packaged following OGC Best Practices for EO Application Packages, ensuring portability and adaptability for diverse use cases. 
Beyond its technical innovations, the CopernicusLAC Platform places a strong emphasis on empowering local institutions through capacity building and knowledge transfer. By providing open-source middleware and applications, the platform enables local operators to avoid reliance on expensive commercial software licenses, reducing recurring costs while enhancing autonomy. Open-source solutions offer the flexibility to customize and adapt tools to meet specific regional needs, fostering local ownership and building long-term capacity for independent operation. This approach ensures that institutions in the LAC region can develop a sustainable and resilient ecosystem for EO applications, aligned with their priorities. This presentation will detail the architectural blueprint and system objectives of the CopernicusLAC Platform, highlighting its cloud-native design and integration of open-source technologies within a replicable framework. We will present how the platform addresses objectives such as data accessibility, robust processing, and interoperability, alongside operational strategies such as pre-operational demonstrations and training programs. By sharing lessons learned and challenges encountered, we aim to contribute to the global conversation on advancing Earth Observation platforms through effective capacity building and technology transfer. The CopernicusLAC Platform demonstrates how regional needs can be addressed through innovative design, international collaboration, and adherence to open standards, while empowering local stakeholders with sustainable, adaptable tools that prioritize their independence and growth.
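The STAC standard used for the platform's data discovery is lightweight enough to sketch directly. Below is a minimal sketch of a STAC Item (spec v1.0.0) built as a plain Python dict; the item id, bounding box, and asset href are invented placeholders and do not reflect the platform's actual catalog layout:

```python
from datetime import datetime, timezone

def make_stac_item(item_id, bbox, dt, asset_href):
    """Build a minimal STAC Item (STAC spec v1.0.0) as a plain dict."""
    west, south, east, north = bbox
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": item_id,
        "geometry": {
            "type": "Polygon",
            "coordinates": [[
                [west, south], [east, south], [east, north],
                [west, north], [west, south],
            ]],
        },
        "bbox": list(bbox),
        "properties": {"datetime": dt.strftime("%Y-%m-%dT%H:%M:%SZ")},
        "links": [],
        "assets": {
            "data": {
                "href": asset_href,  # placeholder, not a real catalog URL
                "type": "image/tiff; application=geotiff; profile=cloud-optimized",
            },
        },
    }

item = make_stac_item(
    "S2A_example_scene",                  # hypothetical item id
    (-70.7, -33.6, -70.4, -33.3),         # lon/lat bbox near Santiago
    datetime(2024, 1, 15, 14, 30, tzinfo=timezone.utc),
    "https://example.org/data/scene.tif",
)
print(item["properties"]["datetime"])     # 2024-01-15T14:30:00Z
```

Because Items are plain GeoJSON Features with a few required fields, any STAC-compliant client can search them by space and time, which is what enables the "efficient and user-friendly data discovery" the abstract describes.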

Monday 23 June 16:15 - 17:45 (Hall G2)

Presentation: The ESA CopPhil initiative: building collaboration between EO service providers and local stakeholders for better environmental management

Authors: Anne-Laure Beck, LCdr Kerwin C. Ferrer, Dr Stephen Emsley, Jean Laporte
Affiliations: ACRI-ST, NAMRIA, ARGANS
The Copernicus Philippines (CopPhil) program, led by the Philippine Space Agency (PhilSA), is an initiative under the European Union’s Copernicus Earth Observation Programme, aimed at strengthening the Philippines' ability to tackle challenges such as climate change, natural disasters, and environmental sustainability. Launched in collaboration with PhilSA, the Department of Science and Technology (DOST), and the European Space Agency (ESA), the program aims to harness Earth observation technologies for environmental monitoring and sustainable development. An understanding of benthic habitats is critically important for the Philippines due to its rich marine biodiversity, extensive coral reef systems, and reliance on marine resources for livelihoods and ecosystem services. A core objective of CopPhil, therefore, is the development of an advanced EO-based benthic habitat mapping service in collaboration with Philippine user institutions. The service will include an open-access data repository and a web-enabled processing interface, as well as training sessions and knowledge exchange initiatives to ensure long-term and sustainable monitoring of the benthic environment. Historically, Philippine institutions have carried out extensive in situ benthic surveys, including bathymetry and habitat maps. These have relied on boat and diving surveys, so they are expensive and have limited temporal and geographic scope. The application of EO, combined with modern ML/AI classification techniques, offers a cost-effective alternative able to cover much larger areas at a much higher frequency. These are essential requirements for the benthic habitat monitoring needed to support benthic habitat conservation. Outreach and knowledge exchange activities are a fundamental objective of all thematic programmes within CopPhil. Key outputs of this service will therefore be demonstrations of societal relevance and comprehensive training for local analysts to enable operational handover of EO activities.
The benthic mapping service outreach and training activities will build user trust in the underlying science and enable robust local EO capacity for long-term monitoring of benthic environments. CopPhil is an exemplar of international collaboration, technology transfer and capacity building to leverage EO for sustainable development. By equipping local stakeholders with EO skills, the project contributes to the broader understanding and application of EO for ecosystem monitoring and the conservation of natural resources. These efforts align with global objectives such as the United Nations Sustainable Development Goals (SDGs) and the Decade of Ocean Science for Sustainable Development.

Monday 23 June 16:15 - 17:45 (Hall K2)

Session: C.03.01 The Sentinel User Preparation Initiative for High-Priority Applications

The Sentinel Users Preparation (SUP) initiative is part of the FutureEO programme of ESA's Earth Observation Programmes Directorate. The SUP initiative will ensure that European entities are optimally placed to exploit the opportunities offered by the future Copernicus Sentinel Expansion and Next Generation missions, responding to recommendations of the relevant stakeholder communities. SUP will strengthen expertise in diverse application domains to prepare future downstream services addressing high-priority societal challenges, and to enable rapid uptake by end-users and stakeholders of the derived information products. The objective is to develop and test methodologies to exploit multi-mission approaches of Copernicus Sentinel Expansion class datasets in high-priority applications (Food systems and agriculture; Ecosystem and biodiversity monitoring; Soil management; Inland water management; Coastal management; GHG and air quality; Forest management; Urban resilience; Critical infrastructure management; Mining and extractives; Arctic operations; Natural hazard management) to consolidate the added value of the Copernicus Sentinel Expansion within these applications and to build the relevant experience and expertise within both supply-side and demand-side stakeholders.

Monday 23 June 16:15 - 17:45 (Hall K2)

Presentation: Operational monitoring of sea ice and icebergs: Towards judging local conditions at high spatial resolution

Authors: Malin Johansson, Prof. Wolfgang Dierking, Nick Hughes, Keld Quistgaard, Des Power, Prof. Leif Eriksson, Dr. Zuzanna Swirad, Prof. Raed Lubbad, Jakob Bünger, Prof Cathleen Jones, Prof. Anthony Doulgeris, Prof. Torbjørn Eltoft, Johannes Lohse, Mark Howell, Kelly Dodge
Affiliations: Uit The Arctic University Of Norway, Alfred-Wegener-Institut, Norwegian Meteorological Institute, Danish Meteorological Institute, C-CORE, Chalmers University of Technology, Institute of Geophysics, Polish Academy of Sciences, Norwegian University of Science and Technology, Drift + Noise Polar Services, JPL/NASA
To address the Arctic operations priority area within ESA’s Sentinel Users Preparation Initiative, we established a project titled “HIgh Resolution LOcal sea ice MAPping for arctic operations” (HIRLOMAP). The project partners have expertise in operational sea ice and iceberg monitoring, tactical ship route planning, algorithm development for sea ice classification and retrieval of drift and deformation, simulations of sea ice dynamics, short-term forecasts of local ice movements, and evolution of fast and fjord ice. The project will work towards upcoming missions such as Sentinel Next Generation and ESA’s Expansion Missions. In the development stage, different image processing and information retrieval methods will be used to test the products on data from ESA’s Sentinel missions, as well as from JAXA and NASA missions such as ALOS-2/-4 and NISAR. The main objectives are (1) to provide local information products on scales of tens of kilometers at the highest possible spatial resolution, between 5 meters and a few tens of meters, as required for special operations and applications such as those listed below, and (2) to achieve higher accuracies in detection, classification, and parameter retrievals for Arctic sea ice and iceberg information product generation. Our focus is on (a) tactical product delivery (maps, short-term forecasts, simulations of ice dynamics) for detailed local analysis in the vicinity of a vessel or offshore structure to support immediate operations or operation planning, (b) extending detection capabilities to smaller icebergs, (c) providing local charts of fast and fjord ice conditions to local communities for evaluating the safety of on-ice travel, and (d) monitoring of coastal sea ice for research on coastal erosion and flooding.
One way to achieve these goals in the next decade is to use complementary satellite data from combinations of ESA’s Sentinel, Sentinel Next Generation, and Expansion Missions ROSE-L and LSTM, combined with the use of different image processing and information retrieval methods. This includes visualization of results and confidence measures such that the immediate needs of the end-users are reflected in the final product. Here we will introduce some of the selected test sites and end-users who collaborate with us, and present preliminary ideas for the products that are currently under development.

Monday 23 June 16:15 - 17:45 (Hall K2)

Presentation: AQUATIME - Novel phytoplankton information products for improved understanding of aquatic ecosystems and biodiversity based on synergistic time series analysis of Sentinel Expansion and Sentinel Missions

Authors: Petra Philipson, Krista Alikas, Dr Jenni Attila, Carsten Brockmann, Stina Drakare, Caroline Ek, Vivi Fleming, Juergen Fischer, Dr Kersti Kangro, Sampsa Koponen, Kaisa Kraft, Joel Kuusk, Dr Marcel König, Héloïse Lavigne, Carole Lebreton, Joppe Massant, Dagmar Müller, Rene Preusker, Ian-Andreas Rahn, Alfred Sandström, Jorrit Scholze, Kerstin Stelzer, Susanne Thulin, Dimitry Van der Zande
Affiliations: Brockmann Geomatics Sweden AB, University of Tartu, Finnish Environment Institute (Syke), Brockmann Consult GmbH, Swedish University of Agricultural Sciences, Spectral Earth GmbH, Royal Belgian Institute of Natural Sciences
With a focus on the Copernicus Sentinel Expansion missions, ESA’s Sentinel Users Preparation (SUP) initiative includes a number of projects that target application preparedness with stakeholder participation. The purpose is to ensure that European entities are ready to exploit the opportunities offered by the future Copernicus Expansion Missions. The initiative focuses on getting EO value-adding entities ready to utilize the new datasets for several predefined high-priority applications, and on engaging stakeholders and end-users to stimulate their readiness to rapidly adopt the new derived information products within their operational activities. AQUATIME is focused on the applications Ecosystem and Biodiversity Monitoring, Inland Water Management and Coastal Management. Sentinel-based information has become a valuable tool for assessing and monitoring the status of inland and coastal water bodies. Satellites can map phytoplankton surface accumulations (blooms and scums) and, with sufficient spectral resolution, also immersed phytoplankton. With this information, the Sentinels can support prediction of the magnitude and severity of phytoplankton blooms under potential climate scenarios, and they have become a powerful alternative to in situ monitoring as they provide high temporal resolution and high spatial resolution and cover large areas. Chlorophyll-a concentration (Chl-a) is one of the parameters used to assess the status related to phytoplankton conditions. Different indices are used in different countries, and many were initially developed to reflect the effects of eutrophication. Moreover, satellite data can provide near-real-time information, and several water quality parameters can be estimated from the measured reflectance. The performance depends on the sensors’ spectral and radiometric properties and the availability of associated algorithms for proper data processing.
The Sentinel Expansion missions offer new and complementary information for improved and novel data products to better characterize biogeochemical and physical properties of inland and coastal waters, and thus eventually improve understanding of aquatic ecosystems and support the preservation of aquatic biodiversity. In combination, CHIME and LSTM will cover the spectral range from the Ultra-Violet (UV) to the Thermal Infra-red (TIR), allowing the determination of phytoplankton types and species, the assessment of their biomass, and the study of their phenological cycle in combination with the water temperature, at a spatial resolution suitable also for smaller water bodies and relevant to several European directives such as the Water Framework Directive (WFD), the Marine Strategy Framework Directive (MSFD), the Habitats and Birds Directives, Natura 2000 and the EU 2030 Biodiversity Strategy. In AQUATIME, we want to advance from standard phytoplankton products, such as Chl-a status and trend, to the detection and monitoring of different types of phytoplankton based on pigment composition, including potentially harmful species. Such data is important for a better understanding of aquatic ecosystems under high pressure from anthropogenic activities, and for meeting the large-scale reporting and management needs of several European Commission directives and strategies. CHIME hyperspectral information is expected to provide not only more accurate assessments of turbidity and transparency measures, Chl-a, suspended matter, and coloured dissolved organic matter concentration, but also more sophisticated phytoplankton information products. The hyperspectral information provided by CHIME would allow detection and analysis of additional phytoplankton pigments and their absorption and fluorescence properties, and enable an improved understanding of the abundance and life cycle of phytoplankton types and species, and the corresponding effects on the ecosystem.
Aquatic ecosystems are rapidly changing environments, and it is a challenge to use data from different satellites synergistically. However, utilizing time series from several instruments allows synergistic exploitation and provides additional information on the temporal evolution, i.e., the phenological cycle, of aquatic ecosystems. Such an approach also allows combining high-frequency but low-spatial-resolution data (Sentinel-3 OLCI and SLSTR, later Sentinel-3 Next Generation) with high-spatial but low-spectral-resolution data (Sentinel-2, later Sentinel-2 NG) and with high-spatial, high-spectral but lower-temporal-resolution data from CHIME and LSTM. AQUATIME is presently collecting stakeholder requirements and preparing the production of a time-series data set for generating phytoplankton information products by combining data from existing Copernicus data sources, simulated data for the planned Copernicus CHIME and LSTM missions, and supporting model and in situ data sources. Subsequent steps include utilizing Machine Learning to find correlations between time series and to exploit the potential of time-series analysis of environmental information, specifically by combining Chl-a with temperature, but also with salinity, nutrients, wind, currents and light availability, to enhance quantification and early detection of blooms and phytoplankton community composition. The generated information will be validated with in situ phytoplankton community composition data from several sites in Sweden, Finland, Estonia and Germany. The approach will be demonstrated to stakeholders and end users through a number of case studies. First results will be presented at the conference.
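The early-detection idea behind such time-series analysis can be illustrated with a deliberately simple anomaly rule: flag bloom onset when chlorophyll-a exceeds a rolling baseline by a fixed number of standard deviations. The window length, threshold and synthetic values below are illustrative assumptions, not the project's actual Machine Learning method:

```python
import numpy as np

def flag_bloom_onset(chla, window=10, k=2.0):
    """Flag time steps where chlorophyll-a exceeds a rolling baseline
    (mean + k standard deviations of the preceding window)."""
    chla = np.asarray(chla, dtype=float)
    flags = np.zeros(chla.size, dtype=bool)
    for t in range(window, chla.size):
        past = chla[t - window:t]
        flags[t] = chla[t] > past.mean() + k * past.std()
    return flags

# Synthetic Chl-a series (mg m-3): stable background, then a bloom.
series = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.3, 2.1, 2.0, 2.2,
          2.1, 2.0, 8.5, 9.1]
print(np.flatnonzero(flag_bloom_onset(series)))  # flags indices 12 and 13
```

In practice the baseline would be built from the multi-sensor time series described above, and environmental covariates (temperature, salinity, nutrients, wind) would feed a learned model rather than a fixed threshold.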

Monday 23 June 16:15 - 17:45 (Hall K2)

Presentation: NextSoils+: Advancing Soil Health Monitoring through Sentinel Expansion Missions EO Data (CHIME & LSTM) and Stakeholder-Driven Application Development

Authors: Robert Milewski, Stéphane Guillaso, Konstantinos Karyotis, Nikolaos Tsakiridis, Thomas Schmid, Dr. Stefano Pignatti, Raffaele Casa, Eyal Ben-Dor, Sabine Chabrillat
Affiliations: GFZ Helmholtz Centre for Geosciences, GFZ Helmholtz Centre for Geosciences & Leibniz University Hannover, I-BEC InterBalkan Environment Center, Research Centre for Energy, Environment and Technology (CIEMAT), Institute of Methodologies for Environmental Analysis (IMAA) of the National Research Council of Italy (CNR), University of Tuscia, Tel Aviv University, Remote Sensing Laboratory
The NextSoils+ project, funded under the Sentinel User Preparation (SUP) program, aims to address the pressing societal challenge of soil health monitoring and sustainable land management. By leveraging Earth Observation data from the forthcoming ESA Copernicus Sentinel Expansion missions, specifically CHIME (hyperspectral imaging) and LSTM (thermal infrared monitoring), alongside complementary missions such as NASA's SBG and CNES-ISRO's TRISHNA, NextSoils+ seeks to advance the assessment of soil health. The project targets key soil properties and indicators, including soil organic carbon (SOC), texture, salinity, mineralogy, and moisture content, which are critical for sustainable agriculture, ecosystem functioning, and climate resilience. Soils, a non-renewable yet vital resource, are increasingly degraded by unsustainable land management, climate change, pollution, and other anthropogenic pressures. This degradation compromises their ability to provide essential ecosystem services such as carbon sequestration, clean water provision, and food production. The European Commission's Soil Monitoring Law and the EU Soil Strategy for 2030 underscore the urgency for innovative, scalable methods to monitor and restore soil health. In response, NextSoils+ aims to exploit the enhanced spectral and spatio-temporal capabilities of Sentinel Expansion missions to derive high-accuracy soil health indicators, focusing on the synergetic integration of VNIR-SWIR (400–2500 nm) and TIR (8–12 μm) spectral data. The project employs a multi-faceted approach involving advanced simulations of CHIME and LSTM data using high-quality airborne survey data and spaceborne datasets from current national hyperspectral missions such as EnMAP and PRISMA. These simulations enable the development of cutting-edge methodologies for soil property retrieval, validated through four representative European case studies (Greece, Spain, and Italy) and one international case (Israel).
Each site represents diverse soil compositions, management challenges, and environmental threats, including salinization, erosion, drought risk, and nutrient depletion. These case studies are co-designed with local stakeholders, ensuring that the applications and products developed address real-world soil management needs and align with policy and operational requirements. The project actively involves local and regional authorities, agricultural companies, and farmer associations from the initial design phases through to validation and deployment. These stakeholders contribute critical insights into soil management needs and help define relevant indicators such as soil degradation status and drought risk. Collaborative workshops, surveys, and iterative feedback loops ensure that the developed products meet end-user requirements and facilitate their adoption. This participatory framework not only enhances the scientific and technical validity of the tools but also ensures their operational relevance and scalability. Validation of soil products is carried out using harmonized datasets, including spectral libraries and in situ measurements, to ensure robust and transferable methodologies. Soil health indicators derived from the synergetic CHIME-LSTM data are benchmarked against ground-truth observations and traditional soil analysis techniques. This process enables the assessment of prediction accuracy, uncertainty quantification, and the refinement of models to meet diverse soil management needs across different agro-ecological zones. For instance, soil moisture content is linked to irrigation management, while SOC and texture maps inform strategies for improving soil fertility and mitigating degradation. The innovative methodologies developed under NextSoils+ align with ESA's SUP objectives by demonstrating the added value of multi-mission approaches to address high-priority applications. 
The project showcases the potential of Sentinel Expansion missions to provide actionable insights for soil monitoring, sustainable land management, and policy development. It strengthens capacity among both supply-side (data providers, researchers) and demand-side (policy makers, land managers, and farmers) stakeholders, fostering the rapid uptake of EO-derived information products. NextSoils+ aspires to bridge the gap between advanced EO technologies and practical soil management applications, contributing to the EU's vision of healthy soils by 2050. By integrating scientific innovation with stakeholder-driven solutions, the project paves the way for transformative impacts on soil health, agricultural sustainability, and climate resilience.
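The spectral retrieval idea underlying such soil property estimation can be illustrated with a toy linear calibration: reflectance spectra are regressed against SOC on synthetic samples constructed so that SOC is an exact linear function of a few invented "absorption" bands. All numbers are made up for illustration; operational retrievals use full VNIR-SWIR spectral libraries and methods such as PLSR or machine learning:

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 synthetic soil "spectra" with 20 reflectance bands; SOC (%) is
# built as an exact linear function of three invented absorption
# bands, so an ordinary least-squares calibration can recover it.
n_samples, n_bands = 50, 20
spectra = rng.uniform(0.05, 0.45, (n_samples, n_bands))
w = np.zeros(n_bands)
w[[4, 11, 17]] = [-8.0, 5.0, -3.0]        # fake absorption features
soc = 5.0 + spectra @ w                   # % soil organic carbon

# Fit a linear model (intercept plus one coefficient per band).
X = np.hstack([np.ones((n_samples, 1)), spectra])
coef, *_ = np.linalg.lstsq(X, soc, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - soc) ** 2))
print(f"calibration RMSE: {rmse:.6f} % SOC")
```

On real spectra the relationship is noisy and nonlinear, which is why the project benchmarks its models against in situ measurements and quantifies prediction uncertainty rather than relying on calibration fit alone.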

Monday 23 June 16:15 - 17:45 (Hall K2)

Presentation: CHILL-Y: Synergistic use of CHIME, LSTM, and ROSE-L data for assessment of yield quantity and quality

Authors: Heike Bach, Silke Migdall, Christian Miesgang, Philipp Klug, Sophia Kremser, Joris Blommaert, Astrid Vannoppen, Jonathan Leon-Tavares, Louis Snyders, Tanel Tamm, Catherine Akinyi, Mihkel Veske, Mauricio Chaves-Vargas, Johannes Schmidt
Affiliations: VISTA GmbH, VITO Remote Sensing, KappaZeta Ltd, EurA AG
Within the project CHILL-Y (CHIME + LSTM + ROSE-L: Assessment of Yield Quantity and Quality), representative datasets of three upcoming Sentinel Expansion missions are created and used synergistically to demonstrate a novel service for improved yield quantity and yield quality assessment. These missions are currently under development, meaning that no measured data is available yet. Therefore, corresponding satellite data is simulated and representative data sets are generated. Hyperspectral data, as will be available in the future via CHIME, will be used to derive the nitrogen uptake of agricultural crops. They will also provide information on plant water content and crop structure. Representative land surface temperature data, as will be provided by LSTM in the future, will be generated from thermally sharpened Sentinel-3 images and Sentinel-2 data. This will be used as a representative data set to obtain information on surface temperature, evapotranspiration and drought stress of crops. L-band radar data will be used to determine soil moisture under agricultural crops and detect harvest events even under cloudy conditions. NISAR is the preferred input satellite sensor (launch planned for Q1 2025). Additionally, high-resolution ALOS-2 data will be used and fused with low-resolution ERA5 products from ECMWF. Sophisticated methods will be adapted to derive the desired parameters (crop leaf area, canopy nitrogen content, evapotranspiration, harvest events etc.) from one synergistic dataset. The derived parameters will then be assimilated into a well-calibrated and validated physiological crop growth model to calculate yield quantity and quality. This application will be demonstrated for two crop types (wheat and sugar beet). Yield quantity will be expressed as grain yield or sugar beet (root) yield in t/ha, respectively.
The quality parameter for wheat will be protein content, which is of relevance for the food industry since only high protein values allow the use of wheat for human food (as flour and for baking); otherwise, wheat is only used for animal feed. The sugar content of the beets is the quality parameter for sugar beet. The novel data sets and methods will be tested and evaluated in several test areas in Europe in cooperation with stakeholders, so-called Champion Users. This ensures knowledge transfer and future benefit of the developed solution for the users by considering their needs in the co-development process. At the Living Planet Symposium, the results of the design phase of the project will be shown, so that the methodological approach and the selection of the test sites can be discussed. The representative dataset, while optimized for the requirements of CHILL-Y, will be made publicly available at the end of the project and can be utilized by interested users for their own analyses. Background: The project is funded as part of the Sentinel Users Preparation (SUP) initiative within the FutureEO programme of the ESA Earth Observation Programmes (EOP) Directorate. The objective of the initiative is to develop and test novel methodologies to exploit Copernicus Expansion mission class datasets in existing application domains, to develop innovative products, and to gain relevant experience and expertise on both the supply and demand side.
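The thermal sharpening step mentioned above (generating LSTM-like temperature data from Sentinel-3 and Sentinel-2) can be sketched with a simple regression-based approach in the spirit of TsHARP: fit LST against NDVI at the coarse scale, predict at the fine scale, and add back each coarse pixel's residual so coarse means are preserved. This is a minimal illustration on synthetic data, not the project's actual processing chain:

```python
import numpy as np

def sharpen_lst(lst_coarse, ndvi_fine, scale):
    """TsHARP-like sharpening sketch: fit LST ~ a + b*NDVI at the
    coarse scale, predict at the fine scale, then add back the
    per-coarse-pixel residual so each coarse-pixel mean is preserved."""
    h, w = lst_coarse.shape
    # Aggregate fine NDVI to the coarse grid by block averaging.
    ndvi_coarse = ndvi_fine.reshape(h, scale, w, scale).mean(axis=(1, 3))
    # Linear fit at the coarse scale (np.polyfit returns slope first).
    b, a = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    residual = lst_coarse - (a + b * ndvi_coarse)
    # Predict at fine scale and redistribute the residual per block.
    return a + b * ndvi_fine + np.kron(residual, np.ones((scale, scale)))

# Synthetic demo: fine NDVI with an exactly linear LST-NDVI relation.
ndvi_fine = np.array([[0.1, 0.2, 0.6, 0.7],
                      [0.2, 0.3, 0.7, 0.8],
                      [0.1, 0.2, 0.6, 0.7],
                      [0.2, 0.3, 0.7, 0.8]])
lst_true = 320.0 - 20.0 * ndvi_fine              # Kelvin
lst_coarse = lst_true.reshape(2, 2, 2, 2).mean(axis=(1, 3))
lst_sharp = sharpen_lst(lst_coarse, ndvi_fine, scale=2)
```

The residual correction is what makes the output consistent with the coarse observations: averaging the sharpened field back to the coarse grid reproduces the input LST exactly.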

Monday 23 June 16:15 - 17:45 (Hall K2)

Presentation: Rail-adjacent vegetation monitoring with Copernicus Expansions (RAVE): A Sentinel User Preparation (SUP) project for critical infrastructure management

Authors: Dr Thomas Higginbottom, Dr Frazer Christie, Dominik Rains, Andrew Tewkesbury, Tom Harling, Pasquale Iervolino
Affiliations: Airbus
Operators of critical rail infrastructure face a continuous challenge in monitoring their networks for ongoing maintenance, and preventing potentially catastrophic accidents. In recent years, Earth observation (EO) technology has greatly enhanced our ability to undertake remote, large-area monitoring, yet many challenges remain unresolved. The upcoming Copernicus Expansion Missions will provide new sources of data and opportunities to address these challenges. Working in synergy, the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) and the Radar Observing System for Europe in L-band (ROSE-L) missions will provide particularly useful data for rail infrastructure monitoring purposes, by bringing together routine, high precision hyperspectral and L-band SAR information for the first time. In this context, ESA has recently funded the Rail-Adjacent Vegetation monitoring with Copernicus Expansions (RAVE) project under the auspices of the Sentinel User Preparation initiative, which will be delivered by Airbus Defence & Space Geospatial Business UK in collaboration with Network Rail and the Société Nationale des Chemins de fer Français (SNCF) (the primary rail operators in the UK and France, respectively). As a collaborative project, Airbus will initially undertake a detailed assessment of operator requirements and opportunities in the context of CHIME and ROSE-L, and will then demonstrate the utility of these missions for a variety of rail management-relevant applications including, for example, the high-precision monitoring of lineside land use, flooding, tree height, tree disease and windthrow. Ahead of launch, these applications will utilise emulated CHIME and ROSE-L datasets, which will be modified from existing hyperspectral (PRISMA, EnMAP) and L-band SAR (ALOS, SAOCOM-1A/1B) imagery, respectively, to match the imaging characteristics associated with these missions. 
As part of the wider SUP initiative, these emulated datasets will be made publicly available to support other Copernicus Expansion Mission preparation activities in the future. In this presentation, we will give an overview of the project’s scope, discuss the areas of interest selected by our end-users for detailed analysis, and showcase initial project results as obtained from the emulated CHIME and ROSE-L datasets outlined above.

Monday 23 June 16:15 - 17:45 (Hall K2)

Presentation: HEATWISE: High-resolution Enhanced Analysis Tool for Urban Hyperspectral and Infrared Satellite Evaluation

Authors: Carsten Brockmann, Dr Marcel König, Dagmar Müller, Kerstin Stelzer, Iphigenia Keramitsoglou, Panagiotis Sismanidis, Chris T. Kiranoudis, Professor Paolo Gamba, Antonietta Sorriso, Jürgen Fischer, Hans-Peter Grossart, Igor Ogashawara, Törmä Markus, Jaalama Kaisa
Affiliations: Brockmann Consult GmbH, National Observatory of Athens, University of Pavia, Spectral Earth GmbH, Leibniz Institute of Freshwater Ecology and Inland Fisheries, Finnish Environment Institute
Urban resilience has been identified as one of the high-priority applications for ESA’s Sentinel Expansion missions, and in the Sentinel User Preparation (SUP) project HEATWISE we are combining data from CHIME and LSTM to address it with three novel products. One aspect of urban resilience is related to the Urban Heat Island (UHI) effect, one of the clearest examples of human-induced climate modification. The root cause of a UHI is the transformation of the natural landscape into a corrugated, mostly manufactured, and less vegetated surface with fundamentally different radiative, aerodynamic, thermal, and moisture properties, leading to reduced evapotranspiration and the uptake, storage, and release of more heat. Studies of UHIs and heatwaves over the past 30 years have also indicated that heat exposure in cities is markedly increased by the UHI effect, especially during heatwaves. High heat exposure during heatwaves is found to be caused by an accumulation of heat storage in cities, where night-time cooling cannot compensate for daytime heating. There is a need to identify and isolate the causes of the UHI and the factors that contribute to the observed intra-urban differences, and to provide clear answers to important questions about monitoring and predicting heatwaves and about investing in specific countermeasures in an effective yet conscious way, involving local authorities, emergency service providers and other stakeholders. Conversely, in cold climate conditions, energy saving by means of optimal building coating and heat distribution is becoming a very important issue for economic and environmental reasons. Controlling the materials and the spatial distribution of the elements of the built-up environment becomes a mandatory element of the definition of urban resilience with respect to the economic and social effects of the overall green and energy transitions that the world is facing.
Indeed, as stated in the Copernicus Land Surface Temperature Monitoring (LSTM) Mission requirements document (MRD) [1], “... modern society is now paying particular attention to climate conditions in ever-growing mega-cities. National and city organizations want to monitor and understand the excess heat generation in city areas and potential power wastage. This affects many activities, including urban planning, environment monitoring, and building construction techniques and legislation.” Current research has identified many drivers for the UHI effect, but tools able to provide detailed information about the actual situation on the ground in any urban area worldwide are still to be designed and tested. Existing global mapping approaches for Local Climate Zones do not include any information about natural or artificial materials and their thermal properties [2,3]. The characterization of relevant environments for UHI studies requires instead that this information is available. The use of hyperspectral images will enable the recognition of specific materials, and the exploitation of thermal data will be instrumental in analyzing their thermal behavior. Therefore, the use of data from hyperspectral and thermal missions will improve the understanding of the UHI in many ways, but specifically by providing information about the urban environment that has not been considered so far other than in very limited portions of urban areas where similar data sets from airborne sensors are available. The HEATWISE project, part of the Sentinel User Preparation of the ESA EOP Initiative, is designing tools for the synergistic use of LSTM and the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) to help overcome these challenges.
Specifically, LSTM, having 3-5 bands in the thermal infrared (TIR) region of the electromagnetic spectrum, will allow for more detailed analysis of surface temperature and emissivity variations within urban areas compared to existing missions such as Landsat, which typically have only one or two thermal bands. With a 3-day revisit capability, the frequency of data acquisition will be significantly higher than, e.g., Landsat's 16-day revisit cycle, enabling more timely monitoring of surface temperature changes and trends. Additionally, the finer pixel size of 30-50 m and the availability of nighttime data, which agrees more closely with the distribution of urban near-surface air temperature, will allow for more precise identification of hotspots, heat sources, and cooling spots at the city block scale. Overall, these improvements will enhance the accuracy and utility of thermal data for urban heat mapping and analysis, facilitating better-informed decision-making by city authorities. Although the application domain of urban resilience and insights is not specifically listed in the CHIME MRD [4], visible-to-shortwave-infrared (VSWIR) imaging spectroscopy has been shown to enable the identification of materials and the estimation of their fractional cover in the urban context. CHIME, covering the spectral range between 400 and 2500 nm with a spectral resolution ≤ 10 nm, is tailored to the characterization of natural and artificial materials alike, and will have a spatial resolution of 30 m, comparable to LSTM. Combinations of spatial unmixing and subpixel mapping techniques enable the generation of detailed material maps within urban areas for multiple applications, including the identification and characterization of buildings and roofs, vegetation, and water surfaces (the grey, green, and blue components of the city). The higher temperatures of cities, including the subsurface UHI effect, influence not only human health but also the temperature and quality of urban waters.
Several studies [5,6] have shown that the shallow groundwater temperature in cities is several degrees higher than in rural areas, which is clearly attributed to anthropogenic impacts (e.g., heat losses from buildings, sewage leakage, and the “thermal wastewater” injected into the aquifer). Regarding surface waters, warmer temperatures in water bodies foster microbial activities such as bacterial and algal blooms, which might turn into harmful algal blooms (HABs); e.g., the likelihood that cyanobacteria or other species such as Prymnesium parvum (responsible for the recent Oder River fish kill) find good conditions to grow increases [7,8]. These blooms are toxic for humans and animals and can cause severe health damage. Hyperspectral data make it possible to identify cyanobacteria and other phytoplankton in water bodies and to determine the concentration of algal occurrence via chlorophyll measurement. Therefore, the combination of LSTM and CHIME at the given temporal and spatial resolution will be a big step forward for assessing changes in the quality of urban waters and providing managers with actionable information for improved resilience. In preparation for future services and downstream products for strengthening expertise in the urban resilience domain, and to foster rapid uptake by stakeholders and end-users of the newly derived knowledge, the HEATWISE project introduces a set of pioneering products, which could potentially lead to services, that revolutionize urban planning by combining hyperspectral VSWIR and multispectral TIR satellite data. By seamlessly integrating day and night thermal observations with detailed material analysis, the improvements over the state of the art by HEATWISE are a novel insight into urban heat dynamics and a more detailed characterization of heat island and heat loss effects pertaining to the urban grey (built-up), blue (water), and green (vegetation) components.
The proposed solutions will be co-designed with potential stakeholders in three European cities (Athens, Berlin and Helsinki) so that they meet stakeholder requirements and expectations. Overall, the HEATWISE products, providing a refined spatial composition and characterization of these components of urban spaces, in combination with spatiotemporal information about surface temperature trends, will empower city authorities and other stakeholders to transform overheated urban areas into cooler, greener, and more livable spaces, ultimately improving urban resilience.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.14)

Session: C.01.17 Creating the Perfect Bouquet of Innovation: Designing the Next EO Technology Demonstration Mission - Session 1

What are the next generation of promising technologies to be demonstrated in space? Are you working on new technologies for space demonstration? How can these developments contribute to strengthening Earth Observation efforts worldwide?

This session is designed to gather ideas for potential technology demonstration missions that could be developed within three years, with an estimated launch in 2030. The session will include a series of activities combining individual and group efforts, applying a design-thinking approach and creative facilitation methods to foster unconventional ideas and maximize innovation.
The goal is to collect a broad range of ideas and refine them into realistic, feasible mission concepts within the given timeline.

What happens after?
The top ideas will be presented on Friday, 27th June, and reviewed by a panel of ESA experts.

Speakers:


  • Emma De Cocker - ESA
  • Tuur Strobbe - ESA
  • Sofia Lembo - ESA
  • Paolo Bazzocchi - ESA
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall E2)

Session: A.10.03 Our solid Earth: from core to surface - PART 2

This session solicits contributions ranging from the deepest layers of the Earth, the geodynamo and core dynamics, the mantle, the lithosphere, and Earth’s surface. ESA’s Swarm trio of satellites, exceeding 10 years in orbit, has changed our understanding of the geomagnetic field evolution, providing unprecedented accuracy and coverage from space. This session invites contributions that address the applicability of current ground and satellite datasets from Swarm, GOCE, GRACE-(FO) and future missions such as NGGM and MAGIC, to study the dynamical behaviour of the solid Earth, core dynamics and the interactions between the core, mantle, and lithosphere in order to improve process understanding at the surface. The session shall address all aspects linked to the subsurface and surface including oceans, using a multitude of observational techniques (e.g. gravimetry, magnetometry, seismology, surface deformation) as well as physics-based models, numerical simulations of the Earth’s dynamo and theoretical advances in the field. Research related to advances in modelling the 3D conductivity of the mantle enabled by ESA’s Swarm mission, as well as the 3D structure of Earth’s layers from lithosphere down to core-mantle boundary and their interplay with the flow in Earth’s core are addressed in this session. Contributions related to plate motion, volcanism, monitoring of the seismic cycle and geohazards monitoring are also encouraged.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall E2)

Presentation: On the detectability of slow tectonics by the upcoming NGGM/MAGIC mission

Authors: Anna Maria Marotta, Valeria Fedeli, Alessandro Regorda, Roberto Sabadini
Affiliations: Università degli Studi di Milano, Department of Earth Sciences
Every mass redistribution that occurs on our planet due to dynamic processes induces a variation in the Earth’s gravity field over time that can be measured through space missions and studied to deepen the knowledge of the interior of our planet in terms of density and viscosity. In the field of Solid Earth geophysics, a wide literature has been developed in the past decades regarding the gravity signature of different processes and its variation in time at different temporal and spatial scales. Most of these works, however, focus on processes characterized by short to intermediate time scales. The study of the variations in time of gravity due to slow tectonic processes (e.g. subduction and rifting), occurring on a time scale of millions of years, has received less interest. Indeed, it is a common belief that slow tectonic processes produce a static gravity signature, the associated gravity rate of change being too small to be observed. Only in recent years have a few studies addressed this aspect in depth, and some promising findings on the geodetic detectability of the gravity rate of change due to rifting have already been obtained (Sabadini et al., 2019). In addition, other recent works on subduction resulted in a gravity rate of change that is observable only for very young contexts, while for old subduction zones the gravity rate of change becomes negligible (Marotta et al., 2020). These preliminary achievements are being further investigated in the frame of the ASI-funded project “NGGM-MAGIC: A breakthrough in the understanding of the dynamics of the Earth” (Marotta et al., 2024).
Here we show that the time-dependent gravity signature of slow tectonic processes is likely to be visible to the next MAGIC gravity mission, over its entire duration, when horizontal movements of density anomalies occur, as at passive margins or at migrating trenches, and the slow tectonics induce changes over a decade that are comparable with the yearly ones induced by other mechanisms, such as Glacial Isostatic Adjustment.
References:
Marotta, A.M., Barzaghi, R., Braitenberg, C., Brocca, L., Cambiotti, G., Camici, S., . . . Longo, F. (2024). The “NGGM-MAGIC: A breakthrough in the understanding of the dynamics of the Earth” project. In EGU General Assembly conference abstracts (p. 2909). doi: 10.5194/egusphere-egu24-2909.
Marotta, A.M., Restelli, F., Bollino, A., Regorda, A., & Sabadini, R. (2020). The static and time-dependent signature of ocean–continent and ocean–ocean subduction: the case studies of Sumatra and Mariana subductions. Geophys. J. Int., 221, 788–825.
Sabadini, R., Anselmi, A., Cambiotti, G., Cesare, S., Douch, K., Marotta, A.M., & Sneeuw, N. (2019). EO science for society: Gravitational seismology, scientific technical report. ESA ITT AO/1-9101/17/I-NB. Retrieved from https://unimibox.unimi.it/index.php/s/ZRiNCP2mgRfMKAL
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall E2)

Presentation: Estimating the Earth's lithospheric magnetization, thickness and magnetic field for SH degrees 1 to 50

Authors: Erwan Thebault, Gauthier Hulot
Affiliations: Université Clermont Auvergne, CNRS, IRD, OGPC, Laboratoire Magmas Et Volcans, F-63000 Clermont-ferrand, France, Université Paris Cité, Institut de physique du globe de Paris, CNRS, F-75005 Paris, France
Detailed mapping of the Earth's lithospheric magnetic field provides key constraints on the composition, dynamics and history of the crust. This field can be mapped using satellite and near-surface measurements to build a model over a wide range of spherical harmonics. However, at wavelengths above about 2,500 km, the lithospheric field is obscured by the dominant magnetic signal from the Earth's core. Many attempts have been made to estimate these missing wavelengths by forward prediction using geophysical priors such as crustal magnetization maps and/or models describing the Mohorovicic discontinuity as a lower magnetization limit. These works, generally carried out in the space domain, are based on equivalent discrete dipole distributions placed in the crust. They showed significant limitations in the amplitude of the predicted lithospheric field signal and did not resolve the ambiguity between contributions attributable to horizontal variations in magnetization and those linked to the magnetic thickness. We propose a strategy that differs from these previous studies. Here, we propose to analyze directly the World Digital Magnetic Anomaly Grid (WDMAM2.1) in the spectral domain without seismic or magnetization a priori information. For this, we rely on a statistical expression for the expected lithospheric magnetic field power spectrum as a function of the magnetization and the crustal thickness. The spectral analysis is carried out in the geodetic coordinate system within a series of 6000 spherical caps. The observed power spectra are estimated with a dedicated spherical cap modelling chain developed to process the WDMAM2.1 scalar anomaly data to a spatial resolution of 30 km on the Earth’s geoid. The observed and statistical power spectra are then compared in the least-squares sense using a series of parallel Monte Carlo Bayesian inversions.
For each region we estimate the joint probability of magnetization and crustal thickness together with their marginal probabilities. This allows us to estimate to a first order the large-scale magnetization and thickness maps with error bars. The probability density function of the lithospheric field Gauss coefficients for SH degrees 1 to 50 is then estimated by random draws from the joint magnetization/crustal-thickness probability distributions. We conclude that the expected static lithospheric field satisfies the constraints provided by Swarm satellite observations while being consistent with statistical spectral prior information. This result shows that estimating the large-scale lithospheric field, and thus the possible contamination of core field models by large-scale static lithospheric field structures, is achievable.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall E2)

Presentation: Routine Global Volcano Monitoring Using Sentinel-1 Data and the LiCSAlert Algorithm

Authors: Matthew Gaddes, Andy Hooper, Camila Novoa Lizama, Milan
Affiliations: University Of Leeds
The Earth’s subaerial volcanoes pose a variety of threats, yet the vast majority remain unmonitored. However, with the advent of the latest synthetic aperture radar (SAR) satellites, interferometric SAR has evolved into a tool that can be used to monitor the majority of these volcanoes. Whilst challenges such as the automatic and timely creation of interferograms have been addressed, further developments are required to construct a comprehensive monitoring algorithm that is able to automate the interpretation of these data. Towards this goal, we previously published a first version of the LiCSAlert algorithm. This algorithm uses independent component analysis (ICA) to first separate deformation signals, topographically correlated atmospheric phase screens, and turbulent atmospheric phase screens in time series of InSAR data. Using these separated signals, the LiCSAlert algorithm is able to detect both when an existing signal changes in rate (e.g. accelerating deformation), and when a new signal enters a time series (e.g. new deformation). These two detection metrics make our algorithm ideal for routine global monitoring, as we detect changes in time series that indicate a volcano has entered a period of unrest, rather than simply detecting deformation. We present results from a second version of the LiCSAlert algorithm that, in order to be used globally, addresses multiple challenges, such as working with deformation signals that are of low rate or that are correlated with topography, providing visualisation tools to allow the status of all the ~1300 monitored volcanoes to be assessed easily, and providing web-based visualisation of results for use by other organisations (e.g. volcano observatories).
For global monitoring, the ~1300 volcanoes that we monitor in both ascending and descending time series produce ~2600 LiCSAlert results, which update every ~12 days (and will increase in frequency after the launch of Sentinel-1C). Our novel 2D visualisation tool allows these ~2600 results to be easily interpreted, and functions by assigning a probability of unrest due to a new signal entering a time series in one dimension, and due to the change of an existing signal in the second dimension. We show how this representation varies through time from 2018 as new data are acquired by Sentinel-1, and present examples of interesting unrest episodes that we detect. To disseminate our results, we have made them available to view and explore via an online tool. We demonstrate how this can be used in conjunction with interferograms from LiCSAR and time series from LiCSBAS by other parties to utilise our volcano monitoring results.
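The ICA separation step described above can be sketched, in a heavily simplified form, with scikit-learn's FastICA. This is not the published LiCSAlert code: the matrix sizes, the two synthetic sources (a linear deformation-like trend and a random atmosphere-like signal) and the per-pixel mixing are all invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_times, n_pixels = 60, 500

# Two synthetic time courses: a steady (deformation-like) trend and a
# turbulent (atmosphere-like) random signal, mixed with random per-pixel weights.
trend = np.linspace(0.0, 1.0, n_times)
turbulence = rng.normal(size=n_times)
sources = np.vstack([trend, turbulence])      # (2, n_times)
mixing = rng.normal(size=(n_pixels, 2))       # per-pixel mixing weights
data = mixing @ sources                       # (n_pixels, n_times) displacement matrix

# FastICA recovers the independent time courses (up to sign, scale and order).
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(data.T)         # (n_times, 2)

# Identify which recovered component matches the deformation-like trend.
corr = [abs(np.corrcoef(recovered[:, i], trend)[0, 1]) for i in range(2)]
print(f"best match with deformation trend: r = {max(corr):.2f}")
```

In the real algorithm the recovered deformation component, not the raw time series, is what is tracked for rate changes and new-signal detection.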
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall E2)

Presentation: Volcanism and Long-Term Seismicity Controlled by Plume-Induced Plate Thinning

Authors: Raffaele Bonadio, Dr Sergei Lebedev, David Chew, Yihe Xu, Javier Fullea, Thomas Meier
Affiliations: Department of Earth Sciences, Bullard Laboratories, University of Cambridge, School of Cosmic Physics, Dublin Institute for Advanced Studies, Department of Geology, Museum Building, Trinity College Dublin, School of Earth Sciences, Yunnan University, Department of Earth Sciences and Astrophysics, Universidad Complutense Madrid, Institute of Geosciences, Christian Albrecht University
The ESA-supported project 4D Dynamic Earth has inspired the development of novel approaches to the joint analysis and inversion of satellite gravity data and terrestrial geophysical and geological observations. The new methods combine the complementary sensitivities of the datasets to get the most out of each and to reveal breathtaking interconnections in the dynamics of the whole-Earth system. Mantle plumes, hot upwellings from the Earth's core-mantle boundary, are thought to trigger surface uplift and the emplacement of large igneous provinces (LIPs). In turn, the emplacement of LIPs causes profound changes in the Earth's climate and catastrophic mass extinctions. Magmatic centres of many LIPs are scattered over thousands of kilometres. This has been attributed to lateral flow of plume material into thin-lithosphere areas, but evidence for such flow is scarce. Here, we use new seismic data and new methods of optimal resolution tomography (Bonadio et al., 2021) and seismic thermography (Fullea et al., 2021; Lebedev et al., 2024) to map previously unknown plate-thickness variations in the Britain-Ireland part of the North Atlantic Igneous Province, linked to the Iceland Plume. The locations of the ~60 Myr-old uplift and magmatism systematically coincide with regions where the lithosphere is anomalously thin at present. The dramatic correlation indicates that the hot Iceland Plume material reached this region and eroded its lithosphere, with the thin lithosphere, hot asthenosphere, and its decompression melting causing the uplift and magmatism. We demonstrate, further, that the unevenly distributed current intraplate seismicity in Britain and Ireland is also localised in thin-lithosphere areas and along lithosphere-thickness contrasts.
The deep-mantle plume has, thus, created not only a pattern of thin-lithosphere areas and scattered magmatic centres but, also, lasting mechanical heterogeneity of the lithosphere that controls long-term distributions of deformation, earthquakes, and seismic hazard. These results provide a spectacular demonstration of Earth-system dynamics, including previously unrecognised inter-connections between deep-Earth processes and natural hazards. Hot plumes rising from the core-mantle boundary shape the bottom of the lithosphere and carve out thin-lithosphere areas that capture the hot material and localise uplift and magmatism. Long after the uplift and magmatism subside, the lithospheric heterogeneity remains and localises deformation and seismicity. The distribution of earthquakes and seismic hazard today in such intraplate areas has thus been shaped by the action of mantle plumes many tens of millions of years ago. This work has been completed in the framework of the projects 3D Earth, funded by the European Space Agency (ESA) as a Support to Science Element (STSE), and 4D Dynamic Earth, funded by ESA (4000140327/23/NL/SD) as part of EXPRO+.
References:
Bonadio, R., S. Lebedev, T. Meier, P. Arroucau, A.J. Schaeffer, A. Licciardi, M.R. Agius, C. Horan, L. Collins, B.M. O'Reilly, P. Readman and the Ireland Array Working Group. Optimal resolution tomography with error tracking and the structure of the crust and upper mantle beneath Ireland and Britain. Geophys. J. Int., 226, 2158-2188, https://doi.org/10.1093/gji/ggab169, 2021.
Fullea, J., S. Lebedev, Z. Martinec, N. L. Celli. WINTERC-G: mapping the upper mantle thermochemical heterogeneity from coupled geophysical-petrological inversion of seismic waveforms, heat flow, surface elevation and gravity satellite data. Geophys. J. Int., 226, 146-191, https://doi.org/10.1093/gji/ggab094, 2021.
Lebedev, S., J. Fullea, Y. Xu, R. Bonadio. Seismic Thermography, Bulletin of the Seismological Society of America, "Modern Seismic Tomography" Special Section, 114, 1227-1242, https://doi.org/10.1785/0120230245, 2024.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall E2)

Presentation: Detectability of earthquake-induced signals in future gravity mission observations

Authors: Lili Ge, Yoshiyuki Tanaka, Jill Peikert, Meike Bagge, Andrea Hampel, Volker Klemann, Linus Shihora, Henryk
Affiliations: The University Of Tokyo, Leibniz Universität Hannover, Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences
In addition to mass transport processes in the atmosphere, oceans, and continental hydrosphere, the (deformable) solid Earth also contributes to time-variable gravity field signatures. Besides elastic and viscoelastic deformations in response to surface loads, it is also possible to track dynamic processes in the Earth’s interior with novel satellite gravimetry missions. In this contribution, we will focus on co- and post-seismic deformations of large earthquakes, of which only events down to magnitude 7.9 could be detected so far with the GRACE and GRACE-FO missions. In view of the future double-pair constellation MAGIC with its significantly enhanced measurement sensitivity, this threshold is expected to be lowered significantly to M=7.4, meaning that a much larger number of seismic events will become detectable. In order to discuss the anticipated co- and post-seismic signals in the data from future gravity missions, we investigate the impact of different focal parameters (focal depth, focal mechanism, and magnitude) in areas with varying Earth structures. Our modelling is based on the viscoelastic spectral-finite element code VEGA, which assumes a Burgers rheology with 3D viscosity distributions in a self-gravitating sphere. From a catalogue of events with a wide range of characteristics, we will discuss how the derived signals affect the gravity field variations detectable from space. A preliminary simulation using the ISC-GEM catalogue shows that the amplitude of the coseismic gravity change due to the December 21, 2010 M7.4 event in Japan increases from 0.02 microGal to 0.25 microGal as the cut-off spherical harmonic degree increases from 40 to 100, with Gaussian filters of 500 km and 200 km radius applied in the former and the latter case, respectively. In addition to examining the effects of a single event, we will estimate long-term trends due to cumulative postseismic gravity changes from M>7 events that have occurred in the last 100 years.
Based on the obtained results, we will propose selected modelling results for possible inclusion into the next release of the ESA Earth System Model 3.0.
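The degree-dependent Gaussian smoothing mentioned above (filters of 500 km and 200 km radius) is commonly implemented with Jekeli's recursion for per-degree averaging weights. The sketch below is a generic illustration of that standard technique, not the authors' processing chain; the mean Earth radius of 6371 km is an assumed constant.

```python
import math

def gaussian_filter_weights(radius_km, n_max, earth_radius_km=6371.0):
    """Jekeli-style Gaussian averaging weights W_l for SH degrees 0..n_max.

    The recursion becomes numerically unstable at high degree for small
    radii, so keep n_max moderate (here <= ~60 for a 500 km radius).
    """
    b = math.log(2.0) / (1.0 - math.cos(radius_km / earth_radius_km))
    w = [1.0, (1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b]
    for l in range(1, n_max):
        # Three-term recursion: W_{l+1} = -(2l+1)/b * W_l + W_{l-1}
        w.append(-(2 * l + 1) / b * w[l] + w[l - 1])
    return w[: n_max + 1]

w500 = gaussian_filter_weights(500.0, 60)
w200 = gaussian_filter_weights(200.0, 60)
# A wider averaging radius suppresses a given degree more strongly,
# which is why the 500 km filtered signal is so much smaller.
print(f"degree 40: W(500 km) = {w500[40]:.3f}, W(200 km) = {w200[40]:.3f}")
```

Multiplying the Stokes coefficients at each degree by these weights reproduces the kind of smoothing that turns a 0.25 microGal signal at degree 100 into a 0.02 microGal signal at degree 40.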
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Hall E2)

Presentation: Assessing the viscoelastic Earth response of historic and present mass changes of glacier systems

Authors: Ingo Sasgen, Sebastian Cruz Bacca, Volker Klemann, Anouk Vlug, Linus Shihora, Henryk Dobslaw
Affiliations: Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, GFZ German Research Centre for Geosciences, Helmholtz Centre Potsdam, MARUM – Center for Marine Environmental Sciences, University of Bremen
Glaciers worldwide contribute nearly as much meltwater to global mean sea-level rise as the Greenland Ice Sheet. However, their small sizes and wide geographic dispersion pose challenges for detecting mass balance changes using satellite gravimetry. Furthermore, the historical evolution of glaciers, particularly since the Little Ice Age, induces glacial-isostatic adjustment (GIA) that is highly sensitive to regional Earth structures. These GIA signals must be accurately corrected in satellite data, alongside and consistently with those from the ice sheets. While elastic Earth models sufficiently capture gravity and deformation signals from short-term hydrological mass changes, the complexities of glacier systems in tectonically active regions—such as West Antarctica, Patagonia, Alaska, and Iceland—require consideration of anelasticity and viscoelasticity. Here, we present advancements in modeling GIA for glaciated regions underlain by low mantle viscosities. We reconstruct glacier evolution over the past millennium using the Open Global Glacier Model (OGGM) and apply an advanced viscoelastic Earth model that resolves fine spatial scales and accurately accounts for low-viscosity regions to predict the GIA signature. We evaluate the impact of historical and contemporary glacier mass changes on present-day gravity field variations and assess the limitations of conventional elastic Earth models and standard GIA corrections. Additionally, we explore the complementary potential of Interferometric Synthetic Aperture Radar (InSAR) and GNSS observations to constrain GIA processes underlying present-day mass changes in global glacier systems. These findings aim to enhance our understanding of glacier contributions to sea-level rise and improve the interpretation of satellite gravimetry data in glaciated regions.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.11/0.12)

Session: F.04.05 In-land Water Quality and Resources Management - PART 2

Water is key to sustainable development, being critical for socio-economic development, energy and food production, and healthy ecosystems. Today water scarcity affects more than 40 percent of the world’s population and is projected to rise further, exacerbated by climate change. Limited access to water supply and sanitation, more water-intensive patterns of consumption and production, increasing rainfall variability, and pollution are combining in many places to make water one of the greatest risks to economic progress, poverty eradication and sustainable development. As the global population grows, there is an increasing need to balance the competing demands for water resources and have more efficient ways to manage water supply in a sustainable manner.

The importance of ensuring availability, quality and sustainable management of water for all has been increasingly addressed in the global political agenda, as seen with the Sustainable Development Goals (SDGs) of the UN 2030 Agenda for Sustainable Development and with the adoption of an International Decade 2018-2028 for Action on ‘Water for Sustainable Development’ by the UN General Assembly. Water touches every aspect of development and is linked to almost every Sustainable Development Goal.

Earth Observation is increasingly seen as an essential source of information which can complement national data and support countries to collect regular information on the use and changes to their water resources for more informed policy decisions on water resource management.

The session will present the latest scientific advances on the use of Earth observations for Water Quality and Water resources management, discuss opportunities and challenges which lie ahead for mainstreaming EO into sustainable management of waters and future paths of research.

Topics of interest for the session include (but are not limited to):
- Multi-sensor approaches to the monitoring of seasonal and annual changes in surface water extent,
- Monitoring of changes in surface water level from satellite radar altimetry,
- EO approaches for monitoring changes in lake volume,
- Integration of EO in hydrodynamic/hydrological modelling to infer information on river discharges,
- EO solutions for Water Use estimation (e.g., for irrigated crops),
- Inland water pollution (Water Quality),
- River sediment dynamics (erosion risk potential, sediment transport estimates),
- Impact of hydropower dams on river flows and morphology,
- Monitoring of groundwater resources (groundwater recharge modelling, groundwater estimation),
- Drought forecasting
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Use of machine learning models to retrieve water quality parameters from hyperspectral imaging in complex coastal and inland waters

Authors: Sandip Banerjee, Dr. Jeremy Kravitz
Affiliations: Pixxel Space India Pvt. Ltd, Pixxel Space Technologies
Monitoring water quality is a vital aspect of environmental management, essential for sustaining water resources and supporting healthy aquatic ecosystems. Remote sensing offers a cost-effective solution for analyzing diverse and complex water bodies across large spatial and temporal scales. Accurate estimation of optically active water quality parameters—such as concentrations of Chlorophyll-a (Chl-a), Phycocyanin (PC), and total suspended solids (TSS), as well as absorption by colored dissolved organic matter (CDOM)—is critical for identifying and addressing environmental hazards. These parameters play pivotal roles in understanding aquatic ecosystems: Chl-a serves as a key indicator of primary productivity and eutrophication, providing insights into nutrient levels and the risk of harmful algal blooms (HABs); PC highlights the presence of cyanobacteria, offering early warnings for toxic HAB events; TSS reveals sediment transport dynamics that impact water clarity and aquatic habitats; and CDOM sheds light on organic pollution and light attenuation, acting as a marker for agricultural and industrial runoff. Retrieving these parameters from water-leaving reflectance is inherently challenging, especially in turbid and productive waters where overlapping spectral signatures complicate interpretation. The limited spectral resolution of traditional multispectral sensors further exacerbates these difficulties, underscoring the need for advanced approaches. This study introduces decision tree-based machine learning (ML) algorithms tailored for high-resolution hyperspectral satellite imagery to enhance water quality monitoring in coastal and inland waters. To develop robust models, an extensive dataset comprising both in-situ measurements and radiative transfer model (RTM) simulations from diverse water types was used. 
This comprehensive training process enabled the ML models to effectively capture the complex relationships between hyperspectral water-leaving reflectance and key water quality parameters. Validation against independent test datasets demonstrated high predictive accuracy, with coefficients of determination R² > 0.8 and mean absolute percentage errors (MAPE) below 30%, underscoring the reliability of the approach. The proposed model was further evaluated using hyperspectral L2 images from the EMIT and AVIRIS sensors, applied to eutrophic and sediment-rich water bodies. Results indicated that the ML algorithms outperformed traditional semi-analytical models, demonstrating superior accuracy and robustness in retrieving water quality parameters. Comparison with results from traditional multispectral sensors also underscores the benefit of increased spectral resolution. These findings highlight the potential of hyperspectral imaging combined with machine learning as a transformative tool for environmental monitoring and management.
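The two validation metrics quoted above can be computed as follows; the chlorophyll-a match-up values in this sketch are hypothetical, purely to illustrate the formulas:

```python
def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def mape(obs, pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

# Hypothetical chlorophyll-a match-ups (mg m^-3), for illustration only
obs  = [2.1, 5.4, 12.0, 30.5, 48.2]
pred = [2.5, 4.9, 13.1, 27.8, 51.0]

print(round(r_squared(obs, pred), 3))  # -> 0.989
print(round(mape(obs, pred), 1))       # -> 10.4
```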
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Global monitoring of total water stored in reservoirs with innovative deep learning model

Authors: Christophe Fatras, Jérémy Augot, Iris Lucas, Alice Andral, Lionel Zawadzki, Santiago Peña-Luque
Affiliations: CLS, Centre National d'Etudes Spatiales (CNES)
A changing climate affects water regimes across the world, creating new challenges for regional policies and global infrastructures. Along with the rising impact of domestic water use and increasing stress on crop irrigation, knowing the precise amount of water available at a given time becomes a challenge that needs to be tackled. In this frame, different approaches have been explored to follow lake storage change from space; this is, for instance, what is done in the ESA Lakes_cci project. With SWOT, continuous and global monitoring of water bodies is now possible. However, for water resources managers, the parameter of interest is the total volume available at a given time, not only the change in volume. Estimating the total volume of a reservoir is thus of key interest for a full insight into the water quantity available at a given date at a regional scale. Direct access to bathymetry leads straightforwardly to the available water quantity of a reservoir given the water level or area on a specific date. A global approach was proposed with the GLOBathy dataset, encompassing 1.4 million water bodies at a global scale using maximum-depth assumptions and the distance to known shorelines. Nevertheless, reservoir volumes estimated from this dataset tend to be erroneous, especially for small to medium-sized reservoirs. We therefore developed an innovative deep learning method to estimate the bathymetry of a reservoir from its surrounding topography. After creating a large virtual reservoir dataset at a global scale, we train a U-Net in an inpainting setup on a Digital Elevation Model (DEM) to estimate the bathymetry of water pixels in the DEM. On this virtual dataset, we can then estimate the water volume with a median volume error of 9.47%.
Tested on a real reservoir validation dataset spread across Occitanie (France) and Andalusia (Spain), the median volume error was 13.3% in Occitanie and 6.9% in Andalusia. Moreover, the total capacity of the 44 largest reservoirs in Andalusia was estimated at 7.274 km³ with our methodology; compared to the value of 7.541 km³ provided by the Spanish government, this represents an error of 3.54% in the total water capacity of the Guadalquivir basin. The methodology is now mature enough to be applied to the monitoring of small and medium reservoirs in France. The use of very high-resolution DEMs over France is being explored, and the results are evaluated against in-situ data to assess to what extent our algorithm can be applied to small to medium-sized reservoirs. These results are one more step towards operational water stock monitoring at a regional to national level, combining DEMs, altimetry, optical/SAR imagery and AI.
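As a minimal illustration of the volume-estimation step that follows the bathymetry reconstruction (not of the U-Net itself), the stored volume for a given water level follows directly from a bathymetry grid; the grid values and cell size below are hypothetical:

```python
# Given a bathymetry grid (bed elevation, m) and a water surface level,
# the stored volume is the sum of water depths over wet cells times the
# cell area.

CELL_AREA = 100.0  # 10 m x 10 m cells (hypothetical)

def reservoir_volume(bed, level, cell_area=CELL_AREA):
    """Volume (m3) stored above the bed for a given water surface level."""
    return sum(max(0.0, level - z) * cell_area for row in bed for z in row)

def volume_error_pct(v_est, v_ref):
    """Relative volume error in percent, as used to score the method."""
    return 100.0 * abs(v_est - v_ref) / v_ref

bed = [  # hypothetical 3x3 bed-elevation grid (m)
    [102.0, 101.0, 102.5],
    [100.5,  99.0, 100.0],
    [101.5, 100.5, 102.0],
]

v_est = reservoir_volume(bed, level=101.0)  # -> 400.0 m3
```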
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Integrating satellite data and discharge modeling for sediment flux estimations - showcased in Peru and Cameroon

Authors: Dr. Fabian von Trentini, Karin Schenk, Dr. Thomas Heege
Affiliations: EOMAP
Reliable data on sediment fluxes and other hydrologically relevant parameters are usually rare and expensive to gather. Such data help to reduce investment risks and operational costs of hydropower systems. Spatially and temporally highly resolved satellite data can nowadays contribute significantly with related measurements that would otherwise not be available. However, the uptake and integration of these comprehensive data into business-relevant information for the hydropower industry has so far not been an easy task, for various reasons. With the web application eoapp HYPOS, an operational service for environmental and economic investment planning and monitoring for the hydropower industry, based on Earth Observation (EO) technologies and modelling, has been developed. It brings together high-quality satellite-based measurements for historic time periods, current monitoring, up-to-date modelled hydrological parameters with nowcasting on various orderable levels of detail, and available in-situ data for integrated baseline and environmental impact assessments. A smart suite of dashboards and plots enables the user to efficiently retrieve relevant information from the raw data in various ways. The application of HYPOS is demonstrated for two use cases, the Poechos reservoir in Peru and the Lagdo reservoir in Cameroon, in the frame of the GDA Water initiative for the World Bank Group. Sediment load for the two reservoirs is assessed with a combination of satellite-derived turbidity and discharge from hydrological modelling. The methodology includes two steps: i) turbidity from satellite measurements is translated to suspended sediment concentrations (SSC) with different formulas (global/regional); ii) since high sediment loads usually occur during high-flood days, the SSC values are related to discharge data from the same day to obtain a power-law regression.
With this regression, gaps in the SSC measurements (via satellite-derived turbidity) can be filled, and the much longer time series of discharge can be used to obtain long-term data on sediment loads. The turbidity data for the two use cases are retrieved from the multispectral sensors of both Sentinel-2 and Landsat 8/9 in the optical and near-infrared bands. This results in almost 400 satellite images for Lagdo and over 500 images for Poechos over the years 2016-2023. These turbidity measurements are transformed into SSC values with a simple 1:1 relationship and with regional formulas of higher complexity. The discharge is taken from two different hydrological model setups. For Lagdo, the global model World-wide HYPE of the Swedish Meteorological and Hydrological Institute (SMHI) is applied for the years 1981-2020, while for Poechos a combination of a conceptual rainfall-runoff model (ARNO/VIC) and a river-routing model (RAPID) is available from 1981 to 2022. The three major tributaries to the Lagdo reservoir (Benue, Mayo Rey and Mayo Godi) deliver sediment loads of 150,000 to 720,000 tons per year. Considering an initial reservoir storage capacity of 7.7 billion m³, these sediment loads accumulate to only 0.2 %. The Chira river sediment loads that enter the Poechos reservoir show an extraordinary variability. While in the majority of years 100,000 t/a to 4,000,000 t/a flow into the reservoir, in extreme El Niño years up to 25,000,000 t/a are possible. Two methods were applied to estimate the sediment inflow into the Poechos and Lagdo reservoirs: 1) a complex method based on daily discharge data and the relation between discharge and SSC, and 2) a simple method based on mean monthly values of discharge and SSC. In Poechos, a huge variability of the sediment flow is the dominant feature when applying the daily method, with two extremely elevated sediment flows in 1983 and 1998 explained by the very extreme El Niño situations in those years.
Mean monthly discharge volumes [m³/month] were aggregated from the daily discharge data of the model and multiplied by the mean monthly SSC from EO data for 2016-2023 [mg/l]. The result shows that the mean annual sediment flow is around 822,000 t/year. In Lagdo, the daily modelled discharge and the turbidity measurements from 2016-2023 yield daily sediment flows for the three major tributaries Benue, Mayo Rey and Mayo Godi. The time series show much less interannual variability than in Poechos. Annual aggregation of these data shows that Benue and Mayo Rey have very similar sums for 1981-2020 of around 6.8 Mio tons, while Mayo Godi shows 3 Mio tons for this period. The fractions of total inflow of the three tributaries show similar means for Benue and Mayo Rey, but a much larger variability in the Benue river. Mayo Godi usually contributes less than 20 % of the total inflow and shows similar variability to Mayo Rey. The monthly method shows, first of all, that discharge is basically zero during the dry period from Nov/Dec to Apr/May and that almost all discharge appears in the months from July to October. A similar, but less distinct, seasonality can be seen for the monthly SSC in the three rivers. Compared to bathymetry measurements, the differences between the bathymetric results and the two applied methods are in a similar range. For Poechos, the 473 Mio tons (or m³) from bathymetry are about 5 times the 92 Mio tons from the daily method. The monthly method reveals a much larger factor of 16, but it does not include SSC measurements for the extreme El Niño years 1983 and 1998. This method generally has more problems with the high interannual variability of Poechos inflows when averaging over all years. However, assuming that about half of the sediments were contributed during these two extreme years and that the method rather resembles an average year, doubling the amount of the monthly method results in a factor of 8 instead of 16.
For Lagdo, the initial storage capacity was 7.7 billion m³ when the reservoir was built. A bathymetric survey from 2005 revealed that 1.8 billion m³ had already been lost to sedimentation, which corresponds to an average loss of 75 Mio m³ per year. The daily method shows around 10.7 Mio m³ per year, while the monthly method results in about 14.2 Mio m³ for the same period. This leads to scale-up factors of 7 (daily) and 5 (monthly). When the local formula from Poechos (Morocho) is applied to Lagdo, the factor is 7.8. There is a wide range of scale-up factors, but it seems generally possible to use such an empirical scale-up factor to estimate the sedimentation in these two reservoirs, with values ranging between 5 and 8. It becomes apparent that there is potential in estimating sediment flows with a combination of EO data and modelled discharge, provided that more in-situ data become available to improve the accuracy.
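The power regression relating same-day SSC and discharge is typically fitted as SSC = a·Q^b by least squares in log-log space; a sketch with hypothetical match-up values (not project data):

```python
import math

def fit_power_law(q, ssc):
    """Least-squares fit of SSC = a * Q**b in log-log space."""
    lx = [math.log(v) for v in q]
    ly = [math.log(v) for v in ssc]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical same-day match-ups: discharge (m3/s) vs. satellite SSC (mg/l)
q   = [10.0, 25.0, 60.0, 150.0, 400.0]
ssc = [12.0, 35.0, 90.0, 260.0, 800.0]

a, b = fit_power_law(q, ssc)
# Gap filling: estimate SSC on a day where only modelled discharge exists
ssc_est = a * 200.0 ** b
```

Once fitted, the rating curve lets the long discharge time series stand in for the shorter satellite-derived SSC record.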
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: AquaWatch-AUK: A bilateral EO programme towards wide-scale water quality monitoring

Authors: Stephanie Mottershead
Affiliations: Surrey Satellite Technology Ltd
AquaWatch-AUK is a bilateral programme between the UK and Australia which addresses the critical need for wide-scale water-body quality measurements and monitoring, with the key aim of informing policy, regulation, and decision-making for water-related risks, to enable more effective management of water resources. AquaWatch-AUK builds on work initiated by Australia and comprises the development of a world-class monitoring and forecasting system for implementation across the UK, Australia, and beyond. It will provide actionable information on both inland and coastal water quality by integrating satellite, in-situ, and modelling data to fulfil the goals of assessing and predicting water quality on a regional, national, and eventually global scale. Water quality is a serious and growing concern for communities across Australia, the UK, and around the world, with risks to human health, ecosystem health, and industrial applications. According to the United Nations (UN), more than 3 billion people globally are at risk from unsafe water; hence, the aim of the UN’s Sustainable Development Goal 6 is to ensure availability and sustainable management of water and sanitation for all. Australia has experienced some of the world’s most severe blue-green algae blooms, including one extending over more than 1,700 km of the River Murray in 2016 that resulted in livestock deaths, reduced urban and recreational amenity, and the declaration of a State of Emergency in New South Wales. Water quality issues are also high on the UK political agenda, as demonstrated through multiple policies and frameworks, and amongst government departments and public bodies such as Defra, The Office for Environmental Protection, The Environment Agency, and the Drinking Water Inspectorate.
Public concern about bathing water quality, health impacts, environmental impacts (eutrophication, ecosystem degradation, etc.) and the management practices of private water companies has elevated water quality to regular media coverage. Earth Observation satellites offer a cost-effective solution for global-scale monitoring. Various approaches are currently used for monitoring larger water bodies (oceans and lakes), but there is a growing need to improve the frequency and coverage of data, and to cover smaller, more disparate water bodies to provide a more comprehensive picture and a catchment-to-coast approach. SSTL has partnered with the UK organisations Assimila, RAL Space, Pixalytics, Cefas, UKCEH, and the University of Stirling, and with CSIRO and SmartSat in Australia to develop this work, using funding from the UK Space Agency and the Australian Space Agency through an International Bilateral Fund Direct Award. This bilateral work will build and strengthen a wide community of water users and stakeholders, evaluate new and existing in-situ pilot sites in UK territories, evaluate space segment instruments and concepts, and evaluate requirements for combining multiple data sources into a standardised datahub to provide monitoring and prediction through a range of user services. At the Living Planet Symposium, we propose to present an overview of the AquaWatch-AUK programme, including the outcomes of the current Phase 2 study.
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: When do machine learning models sink or swim? On the generalisation ability of hyperspectral neural networks across optically complex waters

Authors: Mortimer Werther, Olivier Burggraaff, Daniela Gurlin, Arun M. Saranathan, Sundarabalan V. Balasubramanian, Dr.ssa Claudia Giardino, Dr.ssa Federica Braga, Mariano Bresciani, Andrea Pellegrino, Monica Pinardi, Stefan G.H. Simis, Moritz K. Lehmann, Kersti Kangro, Krista Alikas, Dariusz Ficek, Daniel Odermatt
Affiliations: Swiss Federal Institute Of Aquatic Science And Technology (EAWAG), Institute of Environmental Sciences (CML), Leiden University, Wisconsin Department of Natural Resources, Science Systems and Applications, Goddard Space Flight Center, National Aeronautics and Space Administration, Geo-Sensing and Imaging Consultancy, Institute for Electromagnetic Sensing of the Environment, National Research Council (CNR-IREA), National Bio-diversity Future Center (NBFC), Institute of Marine Sciences, National Research Council (CNR-ISMAR), University of Sapienza, Department of Engineering, Plymouth Marine Laboratory, Starboard Maritime Intelligence, The University of Waikato, Department of Remote Sensing, Tartu Observatory, University of Tartu, Centre for Limnology, Estonian University of Life Sciences, Institute of Geography, Pomeranian University, Department of Geography, University of Zurich
Machine learning models can now be readily trained across remote sensing disciplines to extrapolate environmental information, sometimes even from limited training datasets. This raises critical issues regarding their veracity. Despite clear advantages in computational requirements for satellite image processing, we must ensure that validating the generalisation ability of ML models keeps pace with their development and deployment. Here, we introduce a novel assessment approach that systematically evaluates model performance across multiple scenarios, providing new insights into generalisation capabilities. We implement and compare five probabilistic neural network (PNN) architectures focusing on absorption coefficients at 443 and 675 nm – key wavelengths to identify phytoplankton and coloured dissolved organic matter. Our approach enables us to comprehensively assess model performance across three distinct scenarios: (1) interpolation within existing in situ datasets, (2) extrapolation beyond training set boundaries, and (3) application to hyperspectral observations from the ASI PRecursore IperSpettrale della Missione Applicativa (PRISMA) satellite. Algorithm accuracy varies significantly across these scenarios. While models demonstrate high accuracy within familiar conditions, achieving median symmetric accuracy (MdSA) as low as 21%, this declines markedly in extrapolation scenarios (MdSA > 50%). Applying the models to PRISMA imagery results in a further decrease in accuracy, with MdSA reaching 80% or higher. This underscores the substantial challenges that atmospheric correction introduces in achieving reliable model generalisation across different satellite observations. Incorporating region-specific in situ observations in the model training set improves accuracy by 10 to 15 percentage points, emphasising the crucial role of regional calibration in model accuracy. 
This finding suggests that models trained solely on large, generalised datasets may have limited applicability when faced with specific regional conditions or varying environmental parameters. Additionally, we investigate model uncertainty quantification and calibration. Across all scenarios, the models estimate uncertainties exceeding 40%. Models are generally miscalibrated and underconfident, showing a significant discrepancy between their estimated confidence and actual estimation error. Notably, uncertainties are better calibrated during extrapolation, suggesting that models recognise their limitations when operating outside their training domain. To achieve reliable model performance under novel conditions, we introduce an uncertainty recalibration method using 10% of the dataset, which improves model reliability in 70% of PRISMA evaluations with minimal impact on estimation accuracy. Notably, our analysis reveals that uncertainty is predominantly aleatoric, stemming from inherent variability in observations, measurement errors in remote-sensing reflectance and in situ measurements, and spectral ambiguity. Due to these fundamental limitations, merely increasing the training dataset size or refining data-driven neural network architectures may not be sufficient to improve model generalisation ability across optically complex waters. To overcome the limited model extrapolation abilities revealed by our analysis, we propose transitioning towards physics-informed machine learning approaches that integrate established physical principles with data-driven models. While these methods may not eliminate aleatoric uncertainties arising from inherent variability and measurement error, this integration promises to enhance model accuracy in novel conditions, reduce reliance on potentially erroneous observations, and improve model interpretability. 
Through this contribution, we aim to foster community collaboration in developing and adopting robust protocols for model assessment, particularly focusing on extrapolation capabilities and uncertainty quantification. By leveraging upcoming missions such as the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME), we can develop interpretable and reliable water quality monitoring approaches that support effective management practices and evidence-based policy decisions in pursuit of global water-related goals.
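The median symmetric accuracy used throughout the scenario comparison above is commonly defined as 100·(exp(median(|ln(predicted/observed)|)) − 1); a minimal sketch with hypothetical absorption values:

```python
import math
from statistics import median

def mdsa(obs, pred):
    """Median symmetric accuracy in percent:
    100 * (exp(median(|ln(pred/obs)|)) - 1)."""
    log_ratios = [abs(math.log(p / o)) for o, p in zip(obs, pred)]
    return 100.0 * (math.exp(median(log_ratios)) - 1.0)

# Hypothetical absorption coefficients at 443 nm (m^-1)
obs  = [0.05, 0.10, 0.40, 0.80, 1.50]
pred = [0.06, 0.09, 0.50, 0.70, 1.80]

print(round(mdsa(obs, pred), 1))  # -> 20.0
```

Because it is based on the median of log ratios, MdSA is robust to outliers and symmetric with respect to over- and underestimation, which suits the skewed error distributions typical of water-quality retrievals.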
Add to Google Calendar

Monday 23 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Sentinel 2 user-relevant water quality monitoring in small southern African water bodies

Authors: Marie Smith, Dr Dalin Jiang, Evangelos Spyrakos, Ms Humeshni Pillay
Affiliations: Council for Scientific And Industrial Research (CSIR), University of Cape Town, University of Stirling
Water sufficiency and quality are fundamental to the economic development of any country and are closely linked to human health. It is vital to efficiently and regularly monitor the quality and quantity of water resources to support their sustainable management and utilisation. In the semi-arid country of South Africa, freshwater is the most limiting natural resource, especially in the highly populated Western Cape province, which has faced severe water shortages during the past decade. Several small water bodies feed into the Western Cape Water Supply System, which supplies domestic, agricultural and industrial water to the City of Cape Town and surrounds, in addition to contributing to tourism and recreation. However, the combined effects of a growing population, economic activities, and climate pressures in this region have placed various stresses on these aquatic ecosystems, resulting in environmental problems such as eutrophication, harmful algal blooms and water scarcity. While the efficacy of remote sensing for monitoring water quality has been demonstrated in South Africa’s larger water bodies, existing monitoring programmes and long-term global datasets (e.g. the ESA Lakes Climate Change Initiative) do not include the water bodies of the Western Cape, because their small surface areas make them difficult to observe with MERIS and OLCI. The spatial resolution and revisit time of the Sentinel-2 MultiSpectral Instrument (MSI) provide a good capability for monitoring these small water bodies, but the accuracy of atmospheric correction and water quality estimation algorithms for Western Cape waters had not been assessed until recently.
In the present study these limitations are addressed by (1) evaluating the accuracy of different atmospheric correction models, including C2RCC, C2X, Polymer and Acolite, for Sentinel-2 MSI images over the small water bodies of the Western Cape; (2) assessing the performance of algorithms for estimating water quality parameters, including chlorophyll-a (Chl-a) and total suspended matter (TSM) concentrations, in these waters; and (3) investigating the potential links between water quality, surface area dynamics and extreme events (e.g. droughts and floods). The key users of these outputs, including government departments and agencies, are kept informed and involved throughout the co-design process, which builds their confidence and trust in the resulting satellite-derived products and information, and enhances their capability to use remote sensing tools for water quality monitoring and to support the sustainable use of water resources in South Africa. This study is funded as part of the ESA EO Africa Research and Development Facility in collaboration with the African Union Commission. The resulting water quality products, workflows and datasets directly support the South African National Eutrophication Monitoring Programme implemented by the Department of Water and Sanitation and their managing agencies; address the United Nations Sustainable Development Goal 6 of ensuring availability and sustainable management of water and sanitation for all; and support the Africa Union Agenda 2063 Goal 7 priority areas for biodiversity, conservation and sustainable natural resource management, water security, and climate resilience and natural disaster preparedness for environmentally sustainable and climate-resilient economies and communities.
Add to Google Calendar

Monday 23 June 16:37 - 16:57 (EO Arena)

Demo: F.01.14 DEMO - Free visualization, analysis and sharing of Sentinel satellite imagery in Copernicus Browser

Satellite imagery provides highly relevant information for wide-area monitoring, but also offers a new perspective on large-scale or long-term processes. For vast areas of the Earth, satellite images are the only sources of information available on the web. Copernicus Browser is a free, open, easy-to-use online tool for the visualization, processing and downloading of satellite imagery. Its use does not require Earth observation, GIS or computer skills, and it runs directly in a web browser without the need to download or install anything. All Sentinel satellite image datasets over land and the coastal zone are freely and instantly accessible in Copernicus Browser. The interface enables quantitative measurements and time series analysis, but also supports 3D visualization, image and timelapse export, and even direct sharing of scenes online. Copernicus Browser supports quick "fact checks", interactive slideshows, small data analysis projects and the creation of informative and attractive visualizations. It is therefore an essential tool for education and scientific outreach, but also for NGOs, government agencies and industries involved in environmental monitoring or area management.

Speakers:


  • András Zlinszky - Community Evangelist, Sinergise Solutions GmbH
Add to Google Calendar

Monday 23 June 17:00 - 17:45 (Frontiers Agora)

Session: D.04.07 Unlocking Copernicus Browser & openEO Capabilities in the Copernicus Data Space Ecosystem

This interactive session will show the impact of the Copernicus Data Space Ecosystem (CDSE) services on digital innovation including live demonstrations of openEO and the Copernicus Browser. These tools aim to revolutionize the Earth observation (EO) landscape by reducing the time to market of EO applications and by enabling new innovations in application design that go beyond the possibilities of most current methods.

CDSE’s codebase and interactive code lab empower users to rapidly learn and develop EO applications. In an interactive demonstration, the CDSE team will show how easy it is to move from observing satellite imagery in the Copernicus Browser to editing scripts for visualization and analysis, without requiring advanced knowledge of IT infrastructures or file formats.

Operational use cases will illustrate how these tools enable faster development of fully operational workflows while reducing the operational costs. This approach increases the impact and opens new possibilities across diverse domains.

Guest speakers from ESA and the European Commission will highlight their vision for integrating and expanding these tools in the Copernicus services, and how they aim to boost the uptake and integration of EO derived information into our society.

During this session the CDSE team looks forward to engaging with a variety of EO experts, who are invited to join the discussion, share their experiences and needs, and ask questions. This interactive exchange will not only enrich the session but also contribute to additional sessions dedicated to the Copernicus Data Space Ecosystem, such as the CDSE Annual User Review.

Speakers:


  • Grega Milcinski - Sinergise
  • Jeroen Dries - Vito
Add to Google Calendar

Monday 23 June 17:00 - 17:20 (EO Arena)

Demo: D.03.25 DEMO - The WorldCereal Reference Data Module: An open harmonized repository of global crop data

#parquet

The ESA-funded WorldCereal system produces seasonal global cropland and crop type maps at 10 m resolution. To support the system, the Reference Data Module (RDM) was created as a repository of harmonized in-situ reference data. The RDM can be accessed through an API and through a web user interface (UI). Through the RDM UI and API, users can browse a collection of over 130 public data set collections. Furthermore, users can upload their own data sets (geo-parquet and shapefiles), which can then be used as reference data to run the WorldCereal system. The upload process through the RDM UI is straightforward, with AI-assisted legend mapping to match the WorldCereal standards. If users decide to make a data set public, they will receive support through the interface and from the WorldCereal staff to, for example, select the most appropriate license or check data quality. The WorldCereal RDM will additionally run quality checks to ensure the quality of the data shared. High-quality contributed data sets improve the accuracy of the maps produced in the regions they cover, as well as the overall quality of the WorldCereal system.
The demonstration will include a short introduction to the RDM API and UI, including the AI-assisted legend mapping, but also the process to make a data set public and what quality checks are necessary. Participants can try the system on the spot via web browser.
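The legend-harmonization step can be pictured as a mapping from a user's local legend onto a harmonized class list before upload; the class names and record fields in this sketch are hypothetical, not the actual WorldCereal legend:

```python
# Hypothetical harmonized target classes (illustrative subset only)
HARMONIZED_CLASSES = {"maize", "winter_cereals", "spring_cereals"}

legend_map = {  # user legend -> harmonized class (assumed example mapping)
    "Mais": "maize",
    "Winterweizen": "winter_cereals",
    "Sommergerste": "spring_cereals",
}

def harmonize(records, legend_map):
    """Keep records whose crop label maps to a known harmonized class."""
    out = []
    for rec in records:
        target = legend_map.get(rec["crop"])
        if target is None or target not in HARMONIZED_CLASSES:
            continue  # in a real workflow: flag for manual review
        out.append({**rec, "crop": target})
    return out

records = [
    {"field_id": 1, "crop": "Mais"},
    {"field_id": 2, "crop": "Winterweizen"},
    {"field_id": 3, "crop": "Raps"},  # unmapped label, dropped here
]
harmonized = harmonize(records, legend_map)  # keeps 2 of 3 records
```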

Speaker:


  • Juan Carlos - IIASA
Add to Google Calendar

Monday 23 June 17:22 - 17:42 (EO Arena)

Demo: D.03.30 DEMO - Raster and Vector Data Cubes Across Spatial Data Science Languages - Part 1

Geospatial datasets can be represented as either raster or vector data cubes, depending on the workflow requirements. Raster cubes store multi-dimensional arrays with coordinates like longitude and latitude, capturing the spatial aspects of a dataset. On the other hand, vector data, represented as geometry objects such as points, lines, and polygons, is traditionally stored as tables. Vector data cubes generalize this concept by having multi-dimensional arrays that include a geometry dimension to represent the spatial domain.

In this hands-on training, attendees will learn how to work with raster data cubes that cover large spatial extents to derive vector data cubes that focus on specific areas of interest, allowing us to observe how geospatial features and their attributes change over time. The participants will go through workflows for sampling and aggregating raster data cubes using vector geometries to produce vector data cubes, and explore different functions to show the advantages this provides over using raster data cubes, leveraging the languages R, Python, and Julia. In this session, we will also highlight the advantages of using Quarto as a publishing tool for facilitating cross-language resources in spatial data science.

The presented case study was developed in the EU Horizon Europe Project ScaleAgData (101086355). We demonstrate how to create raster data cubes from Sentinel-2 derived Leaf Area Index (LAI) and Sentinel-1 catalogues that cover large spatial extents. We focus on demonstrating how to handle such datasets, which can be computationally challenging, by subsetting them to vector data cubes, limiting the analysis to the target areas within our farm boundary polygons and visualizing our results. Furthermore, we use the vector data cube for computing different statistical metrics, and we present how they can be used for machine learning modelling.
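The raster-to-vector-cube workflow can be illustrated in miniature with plain Python (the session itself uses dedicated cube libraries in R, Python and Julia); grid values and indices below are hypothetical:

```python
from statistics import mean

raster_cube = [  # dims (time, y, x): two time steps of a 3x3 LAI grid
    [[0.5, 0.6, 0.7], [0.8, 0.9, 1.0], [1.1, 1.2, 1.3]],
    [[0.7, 0.8, 0.9], [1.0, 1.1, 1.2], [1.3, 1.4, 1.5]],
]
points = [(0, 1), (2, 2)]  # (row, col) indices of two farm locations

# Sampling at point geometries -> vector cube with dims (time, geometry)
vector_cube = [[t[r][c] for (r, c) in points] for t in raster_cube]

# Aggregating over a polygon (here a set of cell indices) -> per-farm series
polygon_cells = [(0, 0), (0, 1), (1, 0), (1, 1)]
lai_series = [mean(t[r][c] for (r, c) in polygon_cells) for t in raster_cube]
```

The vector cube keeps only the geometries of interest while preserving the time dimension, which is what makes subsequent statistics and machine-learning feature extraction cheap compared to operating on the full raster cube.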

Content contributors: Yomna Eid, Mohammad Alasawedah, Abhishek Singh, Felix Cremer, Edzer Pebesma

Speakers:


  • Yomna Eid - Institute for Geoinformatics, University of Münster
  • Mohammad Alasawedah - Institute for Geoinformatics, University of Münster
  • Abhishek Singh - Eurac Research
  • Felix Cremer - Max-Planck Institute for Biogeochemistry
  • Lazaro Alonso - Max-Planck Institute for Biogeochemistry
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: A.03.01 - POSTER - Global Carbon Budgets and Earth Observation

Despite decades of effort by the international community, there remain significant differences between the global and regionally aggregated budgets of the key greenhouse gases (GHGs): carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). This discrepancy relates partly to the complexity of the carbon cycle and its interactions with other cycles, but also to differences in the methods and data used to generate the budgets.

The principal budget calculations are also out of synchrony. The Global Carbon Budget (CO2) is produced annually, the global methane and N2O budgets on a 3-4 year cycle, and the Regional Carbon Assessment (RECCAP) every five years. The challenge for the international community is threefold:

• to align the budget calculations in time
• to develop regional budget assessments for the three GHGs on an annual cycle
• to reconcile the global and regional assessments across the three GHGs.

Fundamental research is needed to respond to these challenges, especially for the terrestrial carbon component. Space-based measurements of atmospheric GHG concentrations from OCO-2, OCO-3, and Sentinel-5P, together with observations of both the terrestrial and ocean carbon cycle, are increasing in frequency, range, and detail. Allied with the planned launches of key carbon-relevant satellites (e.g. FLEX, BIOMASS, NISAR), EO may offer a unique capability for dynamic reconstruction of the carbon cycle at unprecedented scales in space and time.

This session is dedicated to how EO can be used to help answer these three challenges, building on activities in collaboration with NASA and the European Commission.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Integrating Multisource LiDAR Data for Aboveground Biomass Estimation

Authors: Qian Song, Dr. Simon Besnard, Mikhail Urbazaev, Dr Benjamin Brede, Martin Herold, Mike Sips
Affiliations: Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences
Recently, foundation models, known for their high generalizability, have become a major research topic. However, no foundation model is yet dedicated to carbon stock estimation. The project 3D-ABC (Towards Global 3D Above and Below Ground Carbon Stocks) aims to develop a foundation model that can be generalized and applied to map below- and above-ground carbon stocks. Our contribution to the 3D-ABC foundation model is to provide spaceborne LiDAR and airborne laser scanning (ALS) data for integration into the model for biomass estimation, which serves as a proxy for carbon stocks in our project. This is a challenging task due to the sparsity of the data and the high cost of reference biomass maps. Spaceborne LiDAR provides global coverage but has larger footprints and lower point density; ALS data are of better quality but only irregularly available. In addition, acquiring reference biomass data via field surveys and TLS is costly, and such data are therefore less geographically representative for training the foundation model head. Considering this, spaceborne LiDAR data are used as inputs to the foundation model, while ALS data are used to generate reference biomass data. Spaceborne GEDI (Global Ecosystem Dynamics Investigation) LiDAR waveform and ICESat (Ice, Cloud and Land Elevation Satellite) point cloud data will be downloaded, preprocessed, and co-registered with other remote sensing data (such as Harmonized Landsat and Sentinel-2 data). Together with regional and national reference biomass data derived from airborne LiDAR, this data cube will be used for training the foundation model. Existing studies have shown that ALS data can yield high-resolution canopy height models. In this project, we will underpin national forest inventory (NFI) biomass measurements with ALS data to derive lidar-heights-to-biomass (LHB) models for biomass mapping.
The processing chain includes filtering the data by acquisition-date differences and correcting potentially inaccurate geographic coordinates. The models derived in this way will be used for regional and national biomass mapping from ALS data. Where NFI or ALS data are lacking, publicly available LHB models and high-quality biomass maps from the existing literature are adopted. These regional and national maps are then used for fine-tuning the foundation model for domain-specific applications. Our goal is a dataset that is co-registered, consistent, representative, and quality-controlled, covering multiple forest types, nations, and continents. In summary, we integrate multi-source LiDAR data from different platforms for biomass estimation to support monitoring of the global above-ground land carbon cycle.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Patterns and Drivers of Tree Mortality Contributing to Biomass Carbon Losses Across Disturbance Agents

Authors: Guohua Liu, Shengli Tao, Laura Eifler, Philippe Ciais, Ana Bastos
Affiliations: Leipzig University, Peking University, Laboratoire Sciences du Climat et de l’Environnement
The rising incidence of tree mortality worldwide poses a significant threat to ecosystems' capacity to sequester carbon, potentially destabilizing the global carbon cycle. Tree mortality caused by different disturbance agents, such as drought and insects, has different impacts on biomass carbon dynamics and is influenced by distinct climatic and non-climatic factors. In this study, we used satellite radar backscatter data to map biomass carbon from 1993 to 2020 and quantify biomass loss from tree mortality. By integrating these biomass carbon maps with annual tree cover loss and inventory data, we estimate biomass carbon losses due to tree mortality from each disturbance agent, including drought, insects, wind, and fire. We then develop machine learning models to analyze the biomass carbon losses due to mortality for each agent, considering climatic factors, vegetation characteristics, and soil properties, with the goal of understanding and improving the representation of mortality-related carbon loss processes in Earth System Models.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Towards two decades of annual, sub-hectare resolution forest biomass maps from European radar satellites

Authors: Maciej Soja, Christina Cappello, Bas Lerink, Marek Lisanczuk, Professor Krzysztof Sterenczak
Affiliations: Wageningen Environmental Research, Forest Research Institute
Accurate, global, high-resolution maps of above-ground forest biomass density (AGBD) are needed for carbon accounting, forest management, and research [1]. These needs will be partially addressed by the European Space Agency's BIOMASS satellite, expected to launch in March or April 2025. Every year for about five years, BIOMASS will generate AGBD maps of South America, Africa, Asia, and Oceania at 200 m resolution using a P-band (67 cm wavelength) synthetic aperture radar [2]. Furthermore, BIOMASS will provide the first near-global digital terrain model (DTM), with an expected best-case resolution of 100 m [3]. Here we propose, illustrate, and discuss a pathway to improve the spatial and temporal resolution and coverage of the AGBD maps from BIOMASS through synergy with existing datasets from European radar satellites. The baseline approach relies on phase height (PH), the difference between a radar-based digital elevation model (DEM) and a DTM. PH is a height metric modulated by canopy density and as such has many similarities with mean canopy height from airborne laser scanning, a well-known proxy of AGBD. By subtracting the BIOMASS DTM from available DEM data from the TanDEM-X mission, maps of PH can be obtained at sub-hectare resolutions. In a recent manuscript [4], eleven test sites on four continents were used to show that PH derived from the TanDEM-X-based Copernicus DEM and a 100 m lidar DTM has a consistently near-linear relationship and strong correlation with AGBD. Moreover, the PH-to-AGBD scaling constant has smaller spatial variability than reported in the literature, showing a promising prospect for global application.
In the recently funded FORBEAR project from the European Space Agency, we will further explore this concept, perform strict validation using in situ data, and develop a methodology for annual estimation of AGBD from TanDEM-X CoSSC data, a coarse-resolution DTM like that from BIOMASS, and Sentinel-1 time series for change detection. We will also present preliminary results obtained within the FORBEAR project, including national AGBD maps derived from 12-metre resolution TanDEM-X DEMs for the Netherlands, Poland, and Spain, and validation results obtained with in situ data from national forest inventories and selected supersites. [1] M. Herold et al., “The Role and Need for Space-Based Forest Biomass-Related Measurements in Environmental Management and Policy,” Surv. Geophys., vol. 40, no. 4, Jul. 2019, doi: 10.1007/s10712-019-09510-6. [2] S. Quegan et al., “The European Space Agency BIOMASS mission: Measuring forest above-ground biomass from space,” Remote Sens. Environ., vol. 227, pp. 44–60, Jun. 2019, doi: 10.1016/j.rse.2019.03.032. [3] M. Mariotti d’Alessandro and S. Tebaldini, “Digital Terrain Model Retrieval in Tropical Forests Through P-Band SAR Tomography,” IEEE Trans. Geosci. Remote Sens., vol. 57, no. 9, Sep. 2019, doi: 10.1109/TGRS.2019.2908517. [4] M. J. Soja et al., “Sub-Hectare Resolution Mapping of Forest Biomass with Global DEM Data and a Coarse Digital Terrain Model,” 2024, doi: 10.2139/ssrn.4762399.
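The phase-height baseline described above can be sketched numerically. This is a minimal numpy illustration: the elevation values are made up, and the scaling constant is hypothetical (the abstract states only that the PH-to-AGBD relationship is near-linear):

```python
import numpy as np

# Hypothetical 3x3 tiles of a TanDEM-X-style DEM (radar phase-center
# heights, metres a.s.l.) and a coarse BIOMASS-style DTM (ground
# elevation, metres a.s.l.).
dem = np.array([[312.0, 318.5, 325.0],
                [307.2, 315.8, 321.4],
                [301.9, 309.3, 316.7]])
dtm = np.array([[295.0, 300.1, 306.2],
                [293.4, 299.0, 304.8],
                [290.7, 296.2, 302.5]])

# Phase height: the difference between the radar DEM and the terrain model.
ph = dem - dtm  # metres

# Near-linear PH-to-AGBD scaling; the constant here is illustrative only.
k = 10.0  # Mg/ha per metre of phase height (hypothetical)
agbd = k * ph
```

In practice the scaling constant would be calibrated against in situ or lidar reference data per region, which is where the reported low spatial variability of the constant matters.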
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Map assessment, inter-comparison and harmonization of global biomass maps

Authors: Mikhail Urbazaev, Arnan Araza, Valerio Avitabile, Polyanna da Conceição Bispo, Kevin Black, Sytze de Bruin, Joao Carreiras, Michael Campbell, Dr. Emil Cienciala, Jan Joseph Dida, Ren Avell Flores, Hammad Gilani, Lars Hein, Amit Kumar, Hantao Li, Jingjing Liang, Simon Lewis, Pr Richard Lucas, Edward Mitchard, Alexandra Morel, Ana Maria Pacheco Pascagaza, Oliver Phillips, Prof Casey Ryan, Dmitry Schepaschenko, Hansrajie Sukhdeo, Ferry Slik, Kato Tomomichi, Ghislain Vieilledent, Hans Verbeeck, Arief Wijaya, Simon Willcock, Martin
Affiliations: Wageningen University And Research, GFZ Helmholtz Centre for Geosciences
Global above-ground biomass (AGB) maps are essential for understanding carbon cycles, supporting climate modeling, and guiding forest management and conservation efforts. However, inaccuracies and discrepancies among these maps, driven by variations in data and methods, pose significant challenges to their usability. Additionally, the anticipated further increase in the number of global AGB maps contributes to potential confusion in selecting the most suitable products. To address this, we present an enhanced framework for the accuracy and uncertainty assessment of global AGB maps. We make use of a global reference dataset, "AGBref", originating from a compilation of National Forest Inventories, permanent research plots, and local AGB maps from airborne LiDAR. This reference dataset provides biomass estimates for multiple dates (2005, 2010, 2015, 2020) at various spatial resolutions or grid cells (500 m, 1 km, 10 km, 25 km). The dataset comes with uncertainty estimates from plot measurement errors and a series of quality flags that inform users about the characteristics of grid cells according to the attributes and configuration of the plots within them. Using this dataset, we demonstrate an updated assessment and inter-comparison of prominent global AGB maps from our previous study, including assessment of newer products. This exercise provides useful information to both map users and producers, and supplies the necessary motivation and data inputs for harmonizing different global AGB maps toward more accurate biomass and carbon estimates from regional to global scales.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Hyperspectral Modelling of Microphytobenthos Gross Primary Productivity in French Estuarine Environments

Authors: Hajar Saad El Imanni, Augustin Debly, Engineer Researcher Philippe Rosa, Professor Vona Meleder
Affiliations: Nantes Université, Institut des Substances et Organismes de la Mer, ISOMer, UR 2160, F-44000
Coastal ecosystems represent a crucial component of the global carbon budget and climate change mitigation. In mudflat habitats, Gross Primary Productivity (GPP) is primarily governed by microphytobenthos (MPB), a community of microscopic photosynthetic organisms that inhabit the sediment's surface layer. Remote sensing technologies play a vital role in quantifying GPP, enabling large-scale assessments of carbon fluxes and enhancing our understanding of their contribution to climate change processes. This study, conducted in estuarine environments in France, aims to (1) evaluate the spatio-temporal variability of GPP across different seasons and locations, and (2) develop a model for GPP estimation using hyperspectral indices in combination with environmental covariates and direct carbon flux measurements. The research integrates hyperspectral remote sensing indices with environmental variables, including photosynthetically active radiation (PAR) and mudflat temperature, alongside in situ measurements of Net Ecosystem Exchange (NEE) and Respiration (R) derived from CO₂ chamber-based flux measurements. This approach links direct GPP observations to remote sensing data and environmental factors, enabling the development of a robust model for GPP estimation. The results indicate that MPB GPP values exhibit significant spatio-temporal variation, with measurements ranging from 144.26 mgC/m²/h to 289.08 mgC/m²/h across different seasons and sites. The hyperspectral indices, when coupled with environmental variables, effectively capture these seasonal and spatial variations, providing reliable estimates of GPP at the global coastal estuary scale.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Simulation and Assimilation of Meteorological Parameters and Carbon Dioxide Concentration in the Cities of Eastern China

Authors: MR Wenhao Liu, Yong Xue
Affiliations: Nuist, CUMT
According to the World Meteorological Organization (WMO, 2021), the increase in greenhouse gases has accelerated global warming, resulting in a 1.39°C rise in average temperatures across Asia in 2020 compared to the 1981-2020 average. Moreover, high-impact weather events such as droughts and floods are becoming more frequent due to rising global temperatures. These events not only significantly impact agricultural production and water resource management but also pose challenges to socioeconomic stability and ecosystem resilience. Therefore, enhancing the collection and analysis of meteorological and greenhouse gas data, and improving meteorological forecasting accuracy through model simulations and data assimilation techniques, are crucial for effectively addressing climate change. In this study, we used the WRF-Chem model and the three-dimensional variational assimilation (3DVAR) method to simulate meteorological conditions in eastern China in December 2020, and performed data assimilation (DA) of meteorological data, including temperature (T), pressure (P), wind speed (WS), and relative humidity (RH). Experiments revealed that without DA, the correlation coefficients for P, T, and WS were 0.97, 0.86, and 0.75, respectively; after assimilation they were 0.99, 0.88, and 0.76, demonstrating that data assimilation significantly enhances the accuracy of weather forecasts. The experiment also simulated high-resolution CO2 concentrations in eastern China for July 2020, additionally assimilating data from multiple satellite sources to enhance the accuracy of the simulations. We compared the simulated CO2 concentration at the lowest model level with in-situ surface observation data from CarbonTracker. We found that the biases of our simulation results and CarbonTracker's simulation results with respect to the station observations were 1.98 and 1.89, respectively, while the RMSEs were 3.23 and 3.19.
The simulation results are similar, but our temporal and spatial resolution is higher, allowing us to capture trends in carbon concentration changes in localized areas. We also converted the simulated layered CO2 concentrations into column concentrations using pressure weighting and compared them with column concentration data from TCCON, CT2022, and OCO-2/3. Our simulated results had biases of 2.23, 1.23, and 2.64, and RMSEs of 2.43, 1.64, and 3.72, respectively. After assimilation, the biases were reduced to 1.84, 0.96, and 2.03, and the RMSEs improved to 2.01, 1.42, and 3.21, respectively. The results after assimilation showed significant improvement when compared to observational data, indicating that the assimilation process effectively enhanced the predictive accuracy of the model. With the assimilation of satellite remote sensing data, the model's accuracy has significantly increased. This improvement not only enhances the model's ability to capture dynamic changes in CO2 but also provides a more reliable data foundation for further research on regional climate change and its impacts on ecosystems. Moving forward, we will continue to optimize model parameters and explore CO2 concentration changes under different meteorological conditions, aiming to provide more accurate predictive support for regional carbon emission management and climate change mitigation. Additionally, we plan to apply the newly acquired data to simulations in other regions to verify the model's applicability and accuracy under different environmental conditions. Through these efforts, we hope to contribute more valuable information to global greenhouse gas and climate change research.
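The conversion from layered concentrations to a column-averaged value mentioned above can be sketched with a simplified pressure weighting, in which each layer contributes in proportion to the pressure interval it spans. All values are hypothetical, and this is an illustration of the general idea rather than the authors' exact weighting scheme:

```python
import numpy as np

# Hypothetical layer-mean CO2 mole fractions (ppm), ordered surface to
# top, and the pressures at the layer interfaces (hPa).
co2_layers = np.array([420.0, 416.0, 412.0, 408.0])
p_edges = np.array([1000.0, 800.0, 550.0, 250.0, 50.0])

# Pressure-thickness weights: each layer's share of the total pressure
# interval spanned by the column.
dp = -np.diff(p_edges)               # hPa per layer
weights = dp / dp.sum()
xco2 = np.sum(weights * co2_layers)  # column-averaged mole fraction (ppm)
```

Operational products apply more detailed pressure weighting functions (including humidity and averaging-kernel corrections), but the proportionality to pressure thickness is the core of the conversion.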
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Mapping Cecropia distribution to detect small scale disturbance and adjust biomass estimates for early successional stages in the Amazon

Authors: Scott Barningham, Lina Mercado, Stephen Sitch, Michael O’Sullivan, Luiz Aragao, Jean-Pierre Wigneron, Xiaojun Li, Clement Albergel, Philippe Ciais
Affiliations: Faculty of Environment, Science and Economy, University of Exeter, Earth Observation and Geoinformatics Division, National Institute for Space Research, INRAE, UMR ISPA, Université de Bordeaux, Villenave d'Ornon, France Xiaojun Li, INRAE, UMR ISPA, Université de Bordeaux, European Space Agency Climate Office, Laboratoire des Sciences du Climat et de l'Environnement LSCE
The Amazon Forest is a critical regulator of the global climate system and a tipping element in Earth's climate (Lenton et al., 2008). However, it is increasingly recognised as a disturbance-dominated ecosystem, with degradation-associated emissions surpassing those from deforestation (Aragao et al., 2018). With the full extent of small-scale disturbances, both natural and anthropogenic, remaining underreported, new approaches are required to better understand the disturbance regime and carbon cycle of the region (Silva Junior et al., 2021). Post-disturbance dynamics, such as the exposure of lower canopy layers and the proliferation of early successional species with low wood density, complicate the estimation of carbon stocks. Traditional biophysical vegetation indices struggle to capture small-scale disturbances due to canopy closure and saturation in dense tropical forests. Consequently, uncertainties in carbon flux and biomass estimates remain high, particularly due to the limitations of spaceborne sensors like GEDI in distinguishing successional stages and wood density variability. Recent advances in Earth observation technologies and machine learning offer unprecedented opportunities to detect disturbances at finer spatial resolutions across broader regions. Cecropia, an early successional species ubiquitous in the Amazon and easily distinguishable from mid- and late-successional species by its unique canopy structure, serves as an ideal indicator of disturbance. Monitoring Cecropia distribution can hence improve biomass estimates by accounting for successional-stage variability. This study aims to utilize 30 cm resolution imagery from the Worldview platform to manually delineate Cecropia presence, following methodologies similar to those used in the Atlantic Forest (Wagner et al., 2020).
This high-resolution data will train deep learning models, principally a U-Net convolutional neural network, to detect Cecropia’s spectral signature against mature canopy or soil backgrounds. Once developed, the trained model will be applied across diverse Amazonian environmental gradients to generate static Cecropia distribution maps. To enhance biomass estimation, these maps will be downscaled to 20 m resolution using machine learning techniques with Sentinel-2 data. This approach will enable the identification of small-scale disturbances such as selective logging, wind throw, and mortality events, which are typically undetectable at coarser resolutions at the biome scale. As part of the ESA RECCAP2 project, we therefore propose producing annual Cecropia distribution maps from 2016 to 2022. These maps will support the generation of biomass estimates that account for wood density variations across successional stages and uncover the legacy mosaic of small-scale disturbances across the Amazon, providing critical insights into the region’s carbon dynamics. References Aragão, Luiz E. O. C., Liana O. Anderson, Marisa G. Fonseca, Thais M. Rosan, Laura B. Vedovato, Fabien H. Wagner, Camila V. J. Silva, et al. 2018. “21st Century Drought-Related Fires Counteract the Decline of Amazon Deforestation Carbon Emissions.” Nature Communications 9 (1): 536. Lenton, T.M., Held, H., Kriegler, E., Hall, J.W., Lucht, W., Rahmstorf, S. and Schellnhuber, H.J., 2008. Tipping elements in the Earth's climate system. Proceedings of the national Academy of Sciences, 105(6), pp.1786-1793. Silva Junior, Celso H. L., Nathália S. Carvalho, Ana C. M. Pessôa, João B. C. Reis, Aline Pontes-Lopes, Juan Doblas, Viola Heinrich, et al. 2021. “Amazonian Forest Degradation Must Be Incorporated into the COP26 Agenda.” Nature Geoscience 14 Wagner, F.H., Sanchez, A., Aidar, M.P., Rochelle, A.L., Tarabalka, Y., Fonseca, M.G., Phillips, O.L., Gloor, E. and Aragao, L.E., 2020. 
Mapping Atlantic rainforest degradation and regeneration history with indicator species using convolutional network. PLoS One, 15(2), p.e0229448.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: D.01.02 - POSTER - Technological Innovations for a Digital Twin of the Earth system

The dream of a comprehensive Digital Twin of the Earth System is fuelled by constant advancements. This session dives into the cutting-edge technologies propelling its development.

The session will highlight advancements in data acquisition, processing, modelling, and visualisation that enable high-fidelity simulations of Earth's complex systems. Emphasis will be placed on the integration of various technologies, including AI, machine learning, high-performance computing, and cloud platforms, to create an interactive and dynamic digital representation of our planet.

In this session, we invite contributions to discuss the following key topics:

- Next-Generation Earth Observation - We seek discussions on the latest advancements in acquiring satellite data, including new satellite technologies and sensors. Contributions are welcome on techniques for processing and analysing satellite data to enrich the Digital Twin Earth (DTE) with detailed and dynamic information. Case studies that demonstrate how these advancements are being applied in current projects are highly encouraged.

- High-Resolution Earth System Modeling - We invite detailed discussions on the development of next-generation climate models that simulate atmospheric, oceanic, and terrestrial processes with unprecedented accuracy. Contributions on techniques for integrating different Earth system components (e.g., atmosphere, hydrosphere, biosphere) into unified models for comprehensive simulations are sought. Innovations in achieving real-time or near-real-time simulation capabilities, enabling dynamic monitoring and decision-making, are also welcome.

- High-Performance Computing and Artificial Intelligence - We seek contributions on utilising high-performance computing (HPC) and cloud platforms to handle the large-scale data and computational demands of digital twins. Discussions on using AI and machine learning to refine model predictions, detect complex patterns, and automate data processing workflows are encouraged. Additionally, contributions on developing AI-based tools for forecasting environmental changes and extreme events, enhancing preparedness and response strategies, are invited.

- Big Data Management and Integration - We invite discussions on innovative data management techniques and strategies for managing the vast amounts of data generated by Earth system models and simulations. Contributions on techniques for ensuring seamless integration of data from diverse sources, including satellite EO, ground observations, and in-situ sensors, are welcome. Solutions for storing and accessing large datasets efficiently and securely are also sought.

- Emerging Technologies for enhancement of a Digital Twin of the Earth system - We seek contributions on leveraging cloud platforms to enhance the scalability and flexibility of the Digital Twin Earth. Discussions on processing data closer to its source using edge computing to improve response times and reduce bandwidth usage are invited. Contributions on developing interactive and intuitive visualisation tools to explore complex Earth system data are also encouraged.

- Visualisation and User Interaction - We invite discussions on developing tools and platforms for visualising complex Earth system data in intuitive and interactive formats. Contributions on applications of virtual and augmented reality (VR and AR) in exploring digital twin models, enhancing user engagement and understanding, are sought. Creating user-friendly interfaces and dashboards for accessing, analysing, and interacting with digital twin data is another key topic for this session.

- Challenges and Future Directions - We seek discussions on addressing the need for standard protocols and frameworks to ensure interoperability among different digital twin components. Contributions on ensuring the privacy and security of data used in and generated by digital twin systems, addressing ethical and regulatory concerns, are invited. Strategies for ensuring the sustainability and scalability of digital twin initiatives over the long term, including funding and resource allocation, are also welcome.

By exploring these topics, this session aims to highlight the technological innovations driving the development of the Digital Twin Earth and discuss the challenges and future directions in this rapidly evolving field.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Machine learning-based emulation of a forest growth and productivity model

Authors: Heikki Astola, Dr. Annika Kangas, Dr. Francesco Minunno, Matti Mõttus
Affiliations: VTT Technical Research Centre Of Finland, Natural Resources Institute Finland (Luke), Yucatrote
Advancing our understanding of land surface processes will help to predict and mitigate the effects of climate change on the functioning of our societies. A land surface model should be able to simulate the various natural phenomena, hazards, and related human activities. At global scale, this inevitably includes the state of the Earth's forests under different management and climate scenarios, and their interactions with other parts of the Earth system and the anthroposphere. The vast amount of available environmental data allows one to initialize and run models at increased spatial and temporal resolution with more realistic constraints. Simultaneously, recent advancements in Machine Learning (ML) and Artificial Intelligence (AI) have led to new methods for inter- and extrapolating the state of the system in space and time. Emulating a physical model, or specific parts of it, can be useful for computational efficiency and tractability: once trained, ML emulators can perform simulations faster than the original physical model without sacrificing much accuracy. The motivation for studying the emulation capacity of advanced machine learning models in predicting forest growth is driven by the needs of Earth system science. Current Earth system models typically have a spatial resolution of tens of kilometres, with regional models capable of running at 1-5 km. Although a recent flagship initiative by the European Union, Destination Earth (DestinE), aims for a large reduction in the grid cell size of global simulations, their spatial resolution remains several orders of magnitude coarser than the requirements set by users of forest growth and productivity models. To overcome the computational burden of increasing the resolution of both Earth and forest system models, extensive use of ML is foreseen by DestinE.
The objective of this study was to investigate the accuracy of modern ML methods (i.e., deep learning) in emulating the functions of a complex forest growth and productivity model under realistic climate scenarios. We paid particular attention to prediction bias: systematic error, when accumulated over time, may mislead users of the models in predicting future forest resources and in assessing the impact of forests on the global environment. We set our goal for the ML emulator's relative prediction bias to be within ±2%, in line with the bias values reported earlier in assessing prediction uncertainties with the MELA forest simulator. We used the process-based Rprebasso [1], [2] forest growth and productivity model, available on GitHub, to produce yearly forest structural variable and carbon balance predictions for 25 years. The predictions included the forest variables tree height, stem diameter, basal area, and stem volume, and a subset of carbon balance variables: net primary production (npp), gross primary production per tree layer (GPPtrees), net ecosystem exchange (NEP), and gross growth (GGrowth). These predictions were used as reference data for training the ML models. We tested three different neural network architectures for emulating the prediction task of the Rprebasso software: (1) an RNN encoder-decoder network combined with a fully connected section parallel to the encoder (S2S); (2) an RNN encoder with a fully connected input section (FC_RNN); and (3) a transformer encoder network (TXFORMER). The study area covered continental Finland, with field inventory plots provided by the Finnish Forest Centre (FFC) used as the initial state of the simulated sites. These data were combined with climate data downloaded from the Copernicus Climate Data Store (CDS). The dataset was augmented by randomly shuffling the combination of climate and field plot data, and by adding random noise to forest site characteristics.
The model combining a fully connected MLP section with an RNN encoder module (FC_RNN) produced the best results. The yearly bias values for the forest variables in the test set were within the specified ±2% limit, while the minimum and maximum bias of the carbon balance variables slightly exceeded this limit in some years of the 25-year prediction period. The mean relative RMS error varied between 5.1% and 26.1% with the best model architecture. The somewhat more complex RNN architecture with both encoder and decoder parts and a larger number of trainable parameters (S2S) produced clearly higher error levels than the FC_RNN model for the variable used for comparison (npp). Somewhat unexpectedly, the transformer models (TXFORMER) did not reach results as good as the FC_RNN models, albeit the differences from the best models were relatively small. We demonstrated the possibility of training a machine learning-based emulator of a complex, physically based forest growth and productivity model for efficient long-time-span prediction of forest development over large areas under different climate scenarios. For application in a digital twin of the world's forests, the emulator needs to produce unbiased estimates of the key forest parameters related to carbon storage. References: [1] A. Mäkelä, “A Carbon Balance Model of Growth and Self-Pruning in Trees Based on Structural Relationships,” For. Sci., vol. 43, no. 1, pp. 7–24, Feb. 1997, doi: 10.1093/forestscience/43.1.7. [2] F. Minunno et al., “Calibration and validation of a semi-empirical flux ecosystem model for coniferous forests in the Boreal region,” Ecol. Modell., vol. 341, pp. 37–52, Dec. 2016, doi: 10.1016/j.ecolmodel.2016.09.020.
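The bias criterion discussed above can be made concrete with a short sketch. The function names and data here are illustrative, assuming relative bias and RMS error are expressed as percentages of the mean reference value:

```python
import numpy as np

def relative_bias_pct(pred, ref):
    """Mean signed error as a percentage of the mean reference value."""
    return 100.0 * np.mean(pred - ref) / np.mean(ref)

def relative_rmse_pct(pred, ref):
    """RMS error as a percentage of the mean reference value."""
    return 100.0 * np.sqrt(np.mean((pred - ref) ** 2)) / np.mean(ref)

# Hypothetical 25-year emulator predictions vs. reference simulator output
# (e.g. stem volume); the emulator overestimates by a constant 1% here.
ref = np.linspace(100.0, 220.0, 25)
pred = 1.01 * ref

bias = relative_bias_pct(pred, ref)  # 1% overestimate -> within the ±2% target
```

Because the bias is signed, large scatter can hide behind a small bias value, which is why the relative RMSE is tracked alongside it.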
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Towards Cost-Effective Remote Sensing Models: Quantifying Sub-Domain Difficulty

Authors: Akram Zaytar, Girmaw Abebe Tadesse, Gilles Quentin Hacheme, Caleb Robinson, Rahul Dodhia, Juan M. Lavista Ferres
Affiliations: Microsoft AI for Good Lab
The rapid advancement of remote sensing technologies has led to an abundance of high-resolution satellite imagery, providing unprecedented opportunities for monitoring and understanding Earth's surface. However, the utility of this imagery in machine learning applications is constrained by the limited availability of labeled data. The process of manual labeling is resource-intensive, creating a bottleneck in the development of robust models for geospatial analysis. Consequently, optimizing labeling strategies to maximize model performance under restricted budgets has become a critical area of research. This work focuses on addressing the problem of identifying and prioritizing sub-domains within geospatial datasets that are inherently more challenging to learn and therefore require greater labeling budget allocation. Geospatial datasets are inherently heterogeneous, comprising regions with varying visual characteristics and complexities. Certain patterns, such as water bodies, clouds, and bare land, are relatively uniform and easy for deep learning models to learn, whereas more complex patterns, including vegetation, forests, and urban areas, pose greater challenges. This disparity necessitates a labeling strategy that accounts for the difficulty of learning each sub-domain within a dataset. In this study, we explore whether the learning difficulty of specific sub-domains can be quantified and leveraged to optimize data selection strategies. We propose a methodology that examines the geospatial machine learning dataset as a distribution (X,y) and investigates whether there are sub-domains within this distribution that require a disproportionately larger labeling effort to achieve equivalent performance. Our approach involves clustering (X,y) into N representative groups that capture the underlying sub-domains of the data. 
By measuring the volume of each cluster in feature space and evaluating the incremental improvement in model performance per labeled sample, we estimate the relative difficulty of each sub-domain. To validate this approach, we focus on three widely used land cover segmentation datasets, representing diverse geospatial contexts. For each dataset, we conduct the following experiments: (1) Partition the data into sub-domains using clustering techniques that preserve feature-label relationships; (2) Train a baseline model using varying proportions (1%, 5%, 10%, and 25%) of labeled data from each sub-domain; (3) Evaluate the model performance on validation curves and metrics, such as accuracy and F1 score, to quantify the learning difficulty of each sub-domain; (4) Analyze the consistency of these findings across datasets and identify commonalities among harder-to-learn sub-domains. The broader implications of this work are significant for the field of remote sensing and beyond. By standardizing the process of quantifying sub-domain difficulty, we aim to contribute a robust toolkit for intelligent data selection and labeling. This toolkit has the potential to transform workflows in remote sensing applications, enabling practitioners to allocate labeling efforts strategically and reduce costs. Furthermore, the insights gained from this research could inform the design of more efficient machine learning models, particularly for critical applications such as land cover mapping, disaster response, and biodiversity conservation. In conclusion, this work addresses a fundamental challenge in geospatial machine learning: how to maximize model performance under constrained labeling budgets. By quantifying sub-domain learning difficulty and optimizing labeling strategies, we aim to enhance the efficiency and effectiveness of machine learning in remote sensing. 
This research provides a pathway toward more intelligent and cost-effective use of remote sensing data, with wide-ranging implications for environmental monitoring and Earth observation.
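The per-sub-domain evaluation in step (3) can be illustrated with a toy difficulty ranking. This is a hedged sketch assuming difficulty is summarized as one minus the mean validation score across label budgets; the abstract does not fix a specific formula, and the cluster names and scores below are invented:

```python
# Toy illustration of quantifying sub-domain difficulty from validation
# scores obtained at increasing label budgets (1%, 5%, 10%, 25%).
# The difficulty summary (1 - mean score) is an illustrative assumption;
# the study may use other metrics (accuracy or F1 curves).

BUDGETS = [0.01, 0.05, 0.10, 0.25]  # fractions of labeled data per run

def difficulty(scores):
    """Higher value = harder sub-domain (lower scores across budgets)."""
    return 1.0 - sum(scores) / len(scores)

def rank_by_difficulty(cluster_scores):
    """Sub-domain names sorted hardest-first."""
    return sorted(cluster_scores,
                  key=lambda name: difficulty(cluster_scores[name]),
                  reverse=True)

# Invented validation scores per sub-domain, one per budget in BUDGETS.
scores = {
    "water":  [0.90, 0.95, 0.97, 0.98],  # uniform, easy to learn
    "forest": [0.70, 0.78, 0.82, 0.85],
    "urban":  [0.55, 0.60, 0.63, 0.66],  # complex, slow to improve
}
print(rank_by_difficulty(scores))  # hardest sub-domain first
```
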
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Scaling-up Frame Field Learning algorithms for building extraction on very high resolution remote sensing images

Authors: Axel Rochel, Clément Dechesne, Hélène Savatier-Dupré, Chloé Thenoz, Pierre-marie Brunet
Affiliations: Magellium, Centre National d'Etudes Spatiales (CNES)
In recent years, deep learning has brought state-of-the-art results in semantic segmentation tasks. In particular, encoder-decoder architectures such as U-Net [1] are able to create very good building footprints from aerial or satellite imagery. However, classic semantic segmentation models fail to distinguish joined buildings, which can be an issue for post-processing tasks such as polygonization or LOD1 generation. Based on this observation, [2] developed a deep learning-based method designed to extract buildings as vector polygons. This method relies on a multi-task learning paradigm: the model learns to jointly predict not only the building footprints but also their boundaries and a frame field designed to help the polygonization and regularization step. The frame field here is a vector field associating two directions with each point, under the following constraints: on the sides of the buildings, one of the two directions should be aligned with the wall while the other should be orthogonal to the first; at the corners, each direction is aligned with a different wall; and the vector field should be as smooth as possible. Once trained, the model can perform inference to produce a raster image composed of 6 bands (the interior of the buildings, their boundaries, and 4 values characterizing the frame field at each pixel). Then, the polygonization can be performed using the Active Skeleton Model (ASM) described by [2]. First, the thinning method [3] is applied to the boundary prediction to extract a graph of connected pixels. Then, inspired by the Active Contour Method [4], a gradient descent is performed to minimize 3 energy terms. The first energy term fits the segments of the graph to the segmentation of the interior of the buildings. The second term aligns the segments of the graph with the frame field. The last term is a regularization term that ensures that the points are almost equally spaced.
After this gradient descent, the frame field can be used to detect corners, and the Ramer-Douglas-Peucker algorithm [5, 6] can then be used to reduce the number of points between two corners. An implementation of these algorithms is available on the authors' GitHub; we successfully reused this code to train our own model and perform predictions on tiles extracted from images of Pléiades Néo (PNEO) [7], a constellation of 2 satellites which provides satellite images all around the world at a spatial resolution of 30 cm. However, as the size of the input image increases, memory issues can appear, since the whole image is loaded with no specific mechanism for large inputs. More specifically, the algorithms do not scale up, which makes them unusable in an operational context. This work, carried out with funding from CNES (the French Space Agency), presents the methodology set up to adapt the previously described algorithms to full-extent PNEO products. This method mainly relies on image tiling. For the first part, inference with the deep learning model, several methods at different levels are employed to prevent edge effects. First, mirror padding is applied to the whole image to remove unwanted side effects and ensure consistent predictions at the image borders. Then, the image is divided into tiles, and each tile is divided into patches with an overlap. The model is run on these extracted patches, and its predictions are then merged with a weighted average favoring the pixels at the center. Finally, only the center of each tile is written, to get rid of the borders and ensure smooth transitions between tiles. In order to prevent missing areas at the tile level, an overlap is introduced when reading the tiles. The number of overlapping pixels is carefully chosen to ensure that written tiles cover the entire image and do not overlap with their neighbors, thereby preventing concurrent writing.
Concurrent reading, on the other hand, is not an issue; therefore, predictions can be computed on several tiles simultaneously. For the polygonization part, the image is also divided into overlapping tiles. The ASM algorithm is then executed in parallel on each tile independently, creating a list containing all the polygons for the entire image. The overlap between tiles makes it possible to merge parts of the same geometries at the borders, but it also creates duplicates. Duplicate polygons are identified using a threshold applied to the intersection area. Since most polygons contain many points, computing intersections is time-consuming and cannot be done for all polygon pairs. To that end, bounding boxes were used to approximate the footprints of the buildings, and the intersection area between 2 polygons was computed only if their bounding boxes intersected. This greatly reduced the computation time. Furthermore, comparing polygons only on neighboring tiles also decreased the processing time. In our operational context, on a computing cluster using one Nvidia A100 GPU (80 GB RAM) and 16 cores with 128 GB RAM, we were able to perform building extraction on a 47204x21753-pixel image with 4 bands using the updated FFL algorithms in only 2 hours and 5 minutes, distributed almost equally between inference (1h) and polygonization (1h05). References [1] Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18 (pp. 234-241). Springer International Publishing. [2] Girard, N., Smirnov, D., Solomon, J., & Tarabalka, Y. (2021). Polygonal building extraction by frame field learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5891-5900). [3] Zhang, T.
Y., & Suen, C. Y. (1984). A fast parallel algorithm for thinning digital patterns. Communications of the ACM, 27(3), 236-239. [4] Kass, M., Witkin, A., & Terzopoulos, D. (1988). Snakes: Active contour models. International journal of computer vision, 1(4), 321-331. [5] Ramer, U. (1972). An iterative procedure for the polygonal approximation of plane curves. Computer graphics and image processing, 1(3), 244-256. [6] Douglas, D. H., & Peucker, T. K. (1973). Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: the international journal for geographic information and geovisualization, 10(2), 112-122. [7] Airbus Defence and Space (2021). Pléiades Neo. First-rate performance for trusted intelligence. https://www.airbus.com/en/space/earth-observation/earth-observation-portfolio/pleiades-neo.
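The weighted-average merging of overlapping patch predictions described above can be sketched in one dimension. The triangular center-favoring weight and the toy sizes are illustrative assumptions; the actual implementation works on 2-D, multi-band rasters:

```python
# 1-D sketch of seam-free patch merging for inference on large images
# (assumptions: triangular center-favoring weights and toy sizes; the
# actual code works on 2-D rasters with multi-band predictions).

def center_weights(size):
    """Positive weights, maximal at the patch center, small at the edges."""
    half = (size - 1) / 2.0
    return [1.0 - abs(i - half) / (half + 1.0) for i in range(size)]

def merge_patches(length, patches, patch_size):
    """Weighted-average merge of overlapping patch predictions.

    patches: list of (start_index, list_of_predicted_values) pairs.
    """
    acc = [0.0] * length
    wsum = [0.0] * length
    w = center_weights(patch_size)
    for start, pred in patches:
        for i in range(patch_size):
            acc[start + i] += w[i] * pred[i]
            wsum[start + i] += w[i]
    return [a / s for a, s in zip(acc, wsum)]

# Two overlapping patches that agree on the overlap -> seamless output.
print(merge_patches(6, [(0, [1.0] * 4), (2, [1.0] * 4)], patch_size=4))
```

Where patches disagree, the blend transitions smoothly across the overlap instead of producing a hard seam at the patch border.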
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Improving the 3D Representation of Plant Architecture and Parameterization Efficiency of Functional-Structural Tree Models using Terrestrial LiDAR Data

Authors: Vera Bekkers, prof.dr. J.B. Evers, Alvaro Lau
Affiliations: Wageningen University
Functional-Structural Plant (FSP) models are useful tools for understanding plant functioning and how plants react to their environment. Developing tree FSP models is data-intensive, and measuring tree architecture using conventional measurement tools is a laborious process. Light Detection and Ranging (LiDAR) could be an alternative, non-destructive method to obtain structural information about tree architecture. This research investigated how terrestrial LiDAR (TLS)-derived tree traits could be used in the design and parameterization of tree FSP models. A systematic literature search was performed to create an overview of the tree parameters needed for FSP model development. The resulting structural parameters were compared to the LiDAR literature to get an overview of the possibilities and limitations. Furthermore, a tropical tree and a Scots pine FSP model were selected and parameterized with TLS-derived parameters. Quantitative Structural Models (QSM) were used to derive the parameters, and a total of 37 TLS-scanned tropical trees and 20 Scots pines were included in the analysis. Ninety papers on FSP tree models were screened, and eight fulfilled all the selection criteria. From these papers, 50 structural parameters used for FSP model development were identified, of which 28 were found to be derivable from LiDAR. The TLS-derived parameters were compared to measurements, and the accuracy was variable. It was found that branch angle could be used as model input, but internode length was unsuitable. Outputs of the FSP models with TLS-derived branch angle differed from the FSP model outcomes with the default branch angle. Results showed that it is possible to use TLS for FSP model inputs, although with caution, as this has implications for the model variable outputs.
In the future, LiDAR could help improve efficiency in building new FSP models, increase the accuracy of existing models, add metrics for optimization, and open new possibilities to explore previously unobtainable plant traits.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: HIGHWAY – Bridging Earth Observation and Digital Twin Technologies

#zarr #stac #cog #cloud-native

Authors: Henry de Waziers, Giovanni Corato, Simone Mantovani, Mohamed Boukhebouze, PhD Christophe Lerebourg
Affiliations: Adwaiseo, MEEO, EarthLab Lu, ACRI-ST
The DTE High Performance Earth Observation (EO) Digital Twin Components Ready Data service (hereafter referred to as HIGHWAY) is the ESA DTE component enabling high-performance, efficient access to EO data and processing capabilities for Digital Twins under the Destination Earth (DestinE) initiative. The scope of the service is to offer, within the DestinE platform, seamless access to: - ESA (such as Earth Explorer, Earth Watch and Heritage) and Third-Party Mission data in native and Digital Twin Analysis-Ready (DT-ARCO) format. - Processing capabilities in a cloud environment; High-Performance Computing (HPC) is under development. Role and Core Mission HIGHWAY acts as a vital link between EO data, processing resources and DestinE, addressing the growing need for efficient data exchange and processing to power Digital Twins. It is integrated within the DestinE platform, ensuring interoperability and delivering advanced services that accelerate the creation and utilization of Digital Twin models for monitoring, analysis, and decision-making. Infrastructure Services HIGHWAY relies on a secure, resilient and scalable multi-cloud infrastructure: 1. OVH Cloud: As the primary site, OVH ensures proximity to DESP (the DestinE Core Service Platform), reducing latency and enhancing data exchange performance. This location also supports environmentally friendly operations by reducing egress and energy consumption, aligning with ESA's green initiatives. 2. Terra Adwäis: Serving as the Disaster Recovery (DR) site and the DT-ARCO production site, Terra Adwäis ensures the resilience of HIGHWAY operations. The HIGHWAY infrastructure and platform leverage cloud-native technologies for flexibility and elasticity, ensuring that the service remains robust and efficient regardless of technological shifts or operational demands. In the near future, to cope with the most demanding Digital Twin AI needs, it will be connected to HPC.
Data Services HIGHWAY delivers efficient, high-performance data access tailored to the needs of Digital Twins, enabling advanced Earth observation capabilities. Its key data services include: • DT-ARCO Data Production: HIGHWAY produces data in DT-ARCO format, packaged following the EOPF data model and advanced formats such as ZARR and COG (for specific datasets). These formats align with the Copernicus roadmaps and the latest Digital Twin standards, ensuring compatibility and forward-looking integration. Supported missions include SMOS, CryoSat, Proba-V, Aeolus, SWARM, and EarthCARE. • Quality Assurance: HIGHWAY integrates rigorous systematic and manual quality assessments to ensure data accuracy and reliability. Processes include original-to-final pixel comparisons and non-regression loops to prevent any quality degradation during the ARCO transformation. • Advanced Data Access and APIs: Seamless data integration is provided through advanced APIs, including WMS, WCS, OpenSearch, and STAC. Data can be accessed in both native and DT-ARCO format. Advanced data access offers data-cube features to explore and access DT-ARCO data. Processing Services In addition to its robust data services, HIGHWAY provides scalable processing capabilities through the cloud. Integration with high-performance computing (HPC) is currently under agreement. The access point to the processing service is the Max-ICS platform. Max-ICS enables state-of-the-art AI modeling tools and supports the creation of AI pipelines for Earth observation applications. It is coupled with an HPC broker that dispatches and manages processing requests to an HPC, enhancing processing power and scalability for complex Earth observation tasks. Currently, this interface allows HIGHWAY to leverage Luxembourg's HPC capabilities through its initial integration with the MeluXina HPC.
Enabling Digital Twin Innovation HIGHWAY’s services are specifically designed to support Digital Twin applications, which require vast amounts of accurate, high-resolution Earth observation data. By delivering data aligned with the latest standards, ensuring rigorous quality assurance, and providing high-performance processing capabilities, HIGHWAY empowers scientists and decision-makers to model and simulate Earth’s systems with unprecedented precision. Conclusion HIGHWAY represents a pivotal step forward in the integration of Earth observation and Digital Twin technologies. It provides a seamless ecosystem of infrastructure, data, and processing services to DestinE. Its scalable, flexible and elastic approach ensures the long-term success of ESA’s Digital Twin initiatives, fostering innovation in Earth Observation and advancing our understanding of the planet. This service sets a new standard for Earth observation data management and processing, positioning ESA as a leader in providing essential tools for climate monitoring, environmental management, and sustainable development.
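The STAC-based data access mentioned above can be illustrated with a hedged sketch of a STAC API search payload. The collection id ("smos-dt-arco") and the area of interest are invented assumptions, not documented HIGHWAY identifiers, and only the request body is built here; no request is sent:

```python
# Hedged sketch: assembling a STAC API /search POST body, as one might
# send to a STAC endpoint exposing DT-ARCO collections. The collection
# id "smos-dt-arco" and the bbox are invented assumptions, not
# documented HIGHWAY identifiers; no network request is made.

import json
from datetime import date

def stac_search_payload(collection, bbox, start, end, limit=10):
    """Build a STAC API item-search body (standard STAC search fields)."""
    return {
        "collections": [collection],
        "bbox": bbox,  # [min_lon, min_lat, max_lon, max_lat]
        "datetime": f"{start.isoformat()}T00:00:00Z/{end.isoformat()}T23:59:59Z",
        "limit": limit,
    }

payload = stac_search_payload(
    "smos-dt-arco",                # hypothetical collection id
    bbox=[5.5, 49.4, 6.6, 50.2],   # roughly Luxembourg
    start=date(2024, 1, 1),
    end=date(2024, 1, 31),
)
print(json.dumps(payload, indent=2))
```
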
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: A digital assistant for digital twins of the Earth

Authors: Manolis Koubarakis, Sergios-Anestis Kefalidis, Konstantinos Plas, Myrto Tsokanaridou, Prof. Begum Demir, Genc Hoxha, Jakob Heinrich Hackstein, Marco Corsi, Cristian Leoni, Giorgio Pasquali, Chiara Pratola, Simone Tilia, Nicolas
Affiliations: National and Kapodistrian University of Athens
Digital twins of the Earth environment are now being developed through initiatives such as Destination Earth of the European Commission and Digital Twin Earth of ESA. One aspect that has been overlooked by research and development in this area is the design of an appropriate interface allowing non-expert users to interact with the digital twin. This is the focus of the ESA project DA4DTE (http://da4dte.e-geos.earth/). The participants of the project are the Italian company e-GEOS, the National and Kapodistrian University of Athens and the Technical University of Berlin. The main idea of the project is to develop a digital assistant which interacts with users through two modalities: text and satellite images. The developed prototype consists of a Task Interpreter and four back-end question answering engines (the search-by-image engine, the search-by-text engine, the knowledge graph question answering engine and the visual question answering engine). The Task Interpreter interprets user requests and invokes the appropriate components of the rest of the system to answer a user question. It has been implemented as a multi-agent system whose agents are: a request classifier, a conversational (chat) agent, and two helper agents. The request classifier agent classifies the user's request into one category (those corresponding to the back-end engines, plus a None category). One input it requires is whether an image is present in the user's request, given its decisive role in the classification. The conversational agent is the human-like aspect of the Digital Assistant; it responds to the user's generic dialog acts (e.g., thanking) and also takes responsibility for staying on topic, being prompted to conduct discussions only on Earth Observation.
As for the helper agents, one is responsible for determining whether a search-by-image request is multimodal, and the other handles a particularity of the assistant: the restriction of the search-by-text engine to a vessel-domain dataset (the use case of the current demo of the digital assistant). The Task Interpreter is implemented as a multi-agent system using the AutoGen library [WBZ+23], leveraging a modular design to handle the assistant's current needs while remaining adaptable so it can be extended with more sophisticated functionalities. As the basis of all agents, the GPT-3.5-turbo model was used in zero-shot settings, given both its cost-effectiveness and its sufficiency for the project's needs. To ensure deterministic classification results, the temperature parameter was set to zero. The search-by-image engine takes a query image and computes a similarity function between the query image and all archive images to find the most similar images to the query in a scalable way. This is achieved in two main steps: i) the image description step, which characterizes the spatial and spectral information content of remote sensing (RS) images; and ii) the image retrieval step, which evaluates the similarity among the considered hash codes and then retrieves images similar to a query image in order of similarity. Our search-by-image engine is based on two self-supervised methods: 1) deep unsupervised cross-modal contrastive hashing (DUCH) [MRD22]; and 2) the cross-modal masked autoencoder (CM-MAE) [HSCD+24]. For both methods, the image description step is composed of two modules: 1) a feature extraction module, which learns deep feature representations of RS images by exploiting vision transformers (ViT); and 2) a deep hashing module, which learns to map image representations into hash codes.
The first module of the DUCH method is based on contrastive self-supervised image representation learning, while that of the CM-MAE method is based on unsupervised masked image modelling. The second module of each method employs a hashing subnetwork with binarization loss functions. Our engine has both the single-modal (also known as uni-modal) and cross-modal content-based image retrieval capability due to the consideration of the modality-specific encoders. The search-by-text engine takes a text sentence as a query and efficiently retrieves the most similar images to the query text, achieving scalable cross-modal text-image retrieval. The search-by-text engine is developed by adapting the above-mentioned self-supervised DUCH [MRD22] to be operational on text based queries. To this end, the feature extraction module is adapted to extract feature representations of image-text pairs by exploiting bidirectional transformers (e.g., BERT [DCL+18]) as text-specific encoders together with ResNet-152 [HZRS+16] as image-specific encoders. The second module of each method is adapted to learn cross-modal binary hash codes for image and text modalities by simultaneously preserving semantic discrimination and modality-invariance in an end-to-end manner. The knowledge graph question answering engine takes as input natural language requests and returns links to resources (in the scope of this project, images) that satisfy the request. To do this, it creates SPARQL queries, which are subsequently executed over a purpose-built Knowledge Graph containing image metadata and geospatial information for administrative divisions, rivers, ports and other points of interest. The engine uses a pipeline architecture, meaning that it consists of a number of components, each of which performs a specific task. 
The WHERE clause is generated dynamically by combining basic SPARQL/GeoSPARQL building blocks, using dependency parsing, string similarity, lemmatization and named entity recognition and disambiguation. The SELECT/ASK clause is generated by a combined approach that utilizes large language models and dependency parsing to identify user intent and collect information from the WHERE clause. The query generation step works by combining the SELECT/ASK clause and the WHERE clause generated previously. In addition, at this stage, any information about superlatives, ordering, grouping and a specific number of expected return values is used to enhance the final generated query based on a set of rules. Optionally, queries are evaluated and modified by a fine-tuned small-scale Large Language Model. To ensure the timeliness of the replies, geospatial relations are materialized and used instead of GeoSPARQL functions. The knowledge graph question answering engine is based on the GeoQA2 engine presented in [KPT+24]. The visual question answering engine allows a user to formulate a free-form question concerning the content of remote sensing images to extract generic information. For the requirements of the project, the LIT-4-RSVQA [HCR+23] architecture is employed because of its lightweight nature, which enables speedy responses. The architecture includes: i) a lightweight encoder module for the text modality; ii) a lightweight encoder module for the image modality; iii) a fusion module; and iv) a classification module. The encoder modules produce vector representations which are subsequently passed to the fusion module. The feature fusion module consists of two linear projections and a modality combination; the projected features are element-wise multiplied. The classification module is defined as an MLP projection head. Let us now consider a use case scenario for the digital assistant. The assistant welcomes the user and asks them to pose a request.
The user asks for a Sentinel-1 image of France during 2020 with snow coverage of more than 50%. The request classifier of the task interpreter decides that this request should be fulfilled by the knowledge graph question answering engine, which returns the appropriate image. The interaction goes on with the user asking for a similar Sentinel-2 image, and the image-by-image retrieval engine, activated by the task interpreter, retrieves one. The user then asks why these images are similar, and the result of the image similarity tool is presented. Afterward, having selected that Sentinel-2 image, the user asks whether it shows a rural area, and the answer produced by the visual question answering engine is presented. Finally, the user thanks the assistant and says goodbye; the request classifier agent of the task interpreter then calls the conversational agent, which politely says goodbye in return and declares itself available for any further assistance. The digital assistant has been implemented fully and a prototype is available at http://da4dte.e-geos.earth/ui . Bibliography [DCL+18] J. Devlin et al. BERT: Pre-training of deep bidirectional transformers for language understanding, 2018. [KPT+24] S.-A. Kefalidis et al. The question answering system GeoQA2 and a new benchmark for its evaluation. International Journal of Applied Earth Observation and Geoinformation, 2024. [HCR+23] L. W. Hackel et al. LIT-4-RSVQA: Lightweight Transformer-Based Visual Question Answering in Remote Sensing. IGARSS 2023. [HSCD+24] Hackstein, J. et al. Exploring Masked Autoencoders for Sensor-Agnostic Image Retrieval in Remote Sensing. arXiv preprint. [HZRS+16] K. He et al. Deep residual learning for image recognition. IEEE CVPR, 2016. [MRD22] G. Mikriukov et al. Unsupervised Contrastive Hashing for Cross-Modal Retrieval in Remote Sensing. ICASSP, 2022. [WBZ+23] Wu, Q. et al. AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation. arXiv preprint.
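The hash-based retrieval step described above, in which archive images are ranked by similarity of their binary hash codes, can be sketched with toy codes. Hamming distance is the standard similarity measure for binary hashes; the 8-bit codes below are invented and far shorter than the real codes produced by the DUCH or CM-MAE models:

```python
# Toy sketch of the image retrieval step: archive images are represented
# by binary hash codes and ranked by Hamming distance to the query code.
# The 8-bit codes are invented and far shorter than real hash codes
# learned by the DUCH / CM-MAE models.

def hamming(a, b):
    """Hamming distance between two equal-length bit strings (as ints)."""
    return bin(a ^ b).count("1")

def retrieve(query_code, archive, k=3):
    """Return the k archive ids closest to the query in Hamming space."""
    return sorted(archive, key=lambda item: hamming(query_code, archive[item]))[:k]

archive = {"img_a": 0b10110010, "img_b": 0b10110011, "img_c": 0b01001100}
print(retrieve(0b10110010, archive, k=2))
```

Because Hamming distance reduces to an XOR and a popcount, ranking stays cheap even for very large archives, which is what makes hash-based retrieval scalable.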
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Data cubes – approaches to exploit parts of DestinE Digital Twin outputs

#zarr #cloud-native

Authors: Patryk Grzybowski, Michael Schick, Miruna Stoicescu, Aubin Lambare, Christoph Reimer
Affiliations: CloudFerro S.A., EUMETSAT, CS Sopra Steria, EODC
The European Commission's flagship initiative Destination Earth (DestinE), driven by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), the European Space Agency (ESA) and the European Centre for Medium-Range Weather Forecasts (ECMWF), provides unique data outputs generated by Digital Twins: the Weather Extremes Digital Twin (ExtremeDT) and the Climate Change Adaptation Digital Twin (ClimateDT). These systems deliver meteorological and climate scenarios in very high detail, thanks to their globally uniform, high spatial resolution (~5 km grid spacing). However, this high level of detail comes with significant storage requirements, reaching several petabytes. This work aims to demonstrate the possibilities and best practices for dealing with the DTs' outputs using the Destination Earth Data Lake (DEDL) near-data processing service (EDGE). The DEDL offers unique opportunities and approaches that enable high-demand processing directly near the data. What sets DEDL apart is its ability to provide services in close proximity to its data holdings, leveraging a distributed infrastructure that receives DT outputs from interconnected High-Performance Computing (HPC) systems. This capability is made possible through data bridges: edge clouds that facilitate operations on the large volumes of data initially produced by the computing power of HPC systems, combined with a data-cube-accessible object store provided by ECMWF for its digital twin data. This joint effort of EUMETSAT and ECMWF, realized by collocating OpenStack cloud infrastructure with EuroHPC, allows for innovative and exceptional data handling. While the infrastructure and computing power are provided through cloud-edge technology, additional services, applications, and practices are essential for effective data handling and the creation of valuable products.
To address these needs, DEDL supports and provides capabilities to set up data cubes and workflows for the generation of cloud-native data formats like “.zarr”. Specifically, DEDL offers pre-built data cubes for parts of the DT outputs, as well as a virtual environment designed for creating Open Data Cubes. However, with more than 150 collections available in the DEDL data portfolio, users' needs may go beyond ready-to-use solutions. To meet these expectations, it is crucial to demonstrate various approaches to handling such extensive and diverse datasets. The aim of this work is to evaluate and demonstrate three distinct approaches to processing and generating “.zarr” format data cubes. These methods utilize different tools and frameworks to handle the DTs' data. The cases include: 1) use of Climate Data Operators (CDO) with the Python libraries xarray and dask; 2) use of the Python packages dask and xarray together with the ECMWF-developed earthkit; 3) use of xdggs with xarray for Discrete Global Grid Systems (DGGS)-based ClimateDT data. The first case will introduce the process of converting ExtremeDT data from an octahedral reduced Gaussian grid to a regular Gaussian grid and writing it as a NetCDF file using CDO. It will also demonstrate how to generate a “.zarr” data cube output using xarray and dask to handle multidimensional data and enable parallelization. The second case will provide an example of how to work with ClimateDT data, which is delivered on a Hierarchical Equal Area isoLatitude Pixelation (HEALPix) grid. It will demonstrate the conversion of data from HEALPix to a regular Gaussian grid using the earthkit tool provided by ECMWF and the preparation of data for further analysis. The third case will demonstrate the use of xdggs, an extension of xarray that provides tools for handling geospatial data using Discrete Global Grid Systems (DGGS). This case will focus on working with native ClimateDT data in the HEALPix grid format.
For each case, both advantages and disadvantages were identified. The first case preserves the native spatial resolution and provides data on a regular grid, enabling straightforward analysis and visualization. However, the native grid is lost, which could impact the accuracy of the delivered simulated data. Moreover, a regular grid introduces distortions due to the convergence of meridians at the poles: the distance between grid points decreases significantly near the poles. The second case also provides data on a regular grid, facilitating easier analysis. However, not only is the native grid lost and distortions introduced near the poles, but the spatial resolution is also reduced to 10 km x 10 km. The third case retains both the native resolution (5 km x 5 km) and the native grid, HEALPix. This ensures that the data remains undistorted. However, HEALPix, as a relatively new method of data provision, is not yet widely supported by existing software and communities. Fortunately, tools like healpy and xdggs are making it significantly easier to work with such data. To sum up, handling Digital Twin outputs efficiently is crucial, especially when working with data in a DGGS format. Proximity to data infrastructure, such as the data bridges and pre-prepared environments provided by DEDL, enables users to manage large volumes of data effectively. This work demonstrates a proposed approach to handling Digital Twin outputs using three different data cube generation methods, along with their respective pros and cons. For the first two cases, integration with existing tools and simplicity were highlighted as advantages, while the loss of the native grid was identified as a disadvantage. In contrast, the third approach preserved the native grid and resolution but posed challenges due to limited support for HEALPix in existing tools. 
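The meridian-convergence drawback of the regular grid can be made concrete: the east-west distance between neighbouring grid points scales with the cosine of latitude. A small standard-library illustration (the 0.05-degree step is chosen only to approximate ~5 km at the equator):

```python
import math

EARTH_RADIUS_KM = 6371.0

def zonal_spacing_km(lat_deg: float, dlon_deg: float) -> float:
    """East-west distance between neighbouring points of a regular
    lat/lon grid at a given latitude; shrinks as cos(latitude)."""
    dlon_rad = math.radians(dlon_deg)
    return EARTH_RADIUS_KM * math.cos(math.radians(lat_deg)) * dlon_rad

# Spacing of an illustrative 0.05-degree grid at three latitudes:
for lat in (0.0, 45.0, 80.0):
    print(lat, round(zonal_spacing_km(lat, 0.05), 2))
```

At 80 degrees latitude the zonal spacing has already shrunk below one kilometre, which is the distortion the text refers to.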
There are also approaches that operate on the native data without the “.zarr” transformation, accessing the data via ECMWF’s polytope and using the Python packages earthkit and healpy. Whichever method users of Destination Earth choose, the foundational principles behind DEDL and the DTs, coupled with novel cloud solutions, will enable data processing and analysis on a scale that surpasses current capabilities.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Using High Resolution Drone-Derived LiDAR to Construct Digital Twins for Resilience Planning in Small Coastal Communities

Authors: Alejandra Rios, Aidan Young, Elaina Sherline, Josh Matherson, Tasya Vadya Sarira, Saager Arya, Andrew J. Read, David W.
Affiliations: Duke University Marine Laboratory
Pivers Island, a modest 3-acre island situated on the low-lying North Carolina coast, is experiencing increased risk of storm surge and sea level rise. Pivers Island hosts Duke University Marine Lab and the NOAA Southeast Fisheries Science Center, two multi-million dollar research institutions that are important resources to the Beaufort community. Historically shielded by Shackleford Banks and the Rachel Carson Reserve, recent erosion at the west end of Shackleford Banks and inland retreat of the Rachel Carson Reserve shoreline have left Pivers Island vulnerable to periodic flooding and damaging storm surges. Digital twins developed with Light Detection and Ranging (LiDAR) have been advantageous in informing risk and resilience planning for many coastal communities. Conventional aerial LiDAR data typically covers extensive areas at a resolution of 1.0 meter/pixel ground sampling distance, adequate for large-scale assessments but lacking the detail needed for a fine-scale study. A DJI M350 quadcopter equipped with a Zenmuse L2 sensor was deployed over Pivers Island to gather LiDAR data at a 0.3 m/pixel ground sampling distance. LiDAR data was aligned and filtered in TerraSolid, and then imported into ArcGIS Pro and CityEngine to build a geospatially accurate digital twin of Pivers Island. In the next phase of this research, the Advanced Circulation (ADCIRC) model, which simulates sea level rise and storm surge, will be run over the area to understand the impact of storm surge and sea level rise in 25, 50, and 100 years. This method allows for the creation of detailed storm surge models on the digital twins that can more accurately inform risk assessment and resilience planning for high-risk areas like Pivers Island. Furthermore, this approach can be adapted to other small coastal areas, providing a valuable tool for researchers and policymakers focused on coastal resilience.
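As a rough check on the resolution gain, moving from a 1.0 m to a 0.3 m ground sampling distance increases the raster density by about a factor of eleven over the ~3-acre island (back-of-the-envelope figures only, derived from the stated areas and GSDs):

```python
def pixels_per_area(area_m2: float, gsd_m: float) -> float:
    """Number of raster cells covering an area at a given ground
    sampling distance (GSD); each cell covers gsd**2 square metres."""
    return area_m2 / (gsd_m * gsd_m)

ACRE_M2 = 4046.86          # one acre in square metres
island_m2 = 3 * ACRE_M2    # ~3-acre Pivers Island

coarse = pixels_per_area(island_m2, 1.0)   # conventional aerial LiDAR
fine = pixels_per_area(island_m2, 0.3)     # drone-mounted Zenmuse L2
print(round(coarse), round(fine), round(fine / coarse, 1))
```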

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: DestinE Platform: A Collaborative Ecosystem for Addressing Environmental Challenges through Services and Innovation

Authors: Alexis Longuet, Calogera Tona, Barbara Scarda, Elisabetta Giuliani, Matteo Cortese
Affiliations: Serco Italia
This paper describes the DestinE Platform, one of the three key components of the European Union's transformative initiative, Destination Earth (DestinE). The Destination Earth (DestinE) initiative is a joint effort by ESA (European Space Agency), ECMWF (European Centre for Medium-Range Weather Forecasts), and EUMETSAT (European Organisation for the Exploitation of Meteorological Satellites) to develop a digital twin of Earth. DestinE will accurately simulate and monitor the planet's natural systems using satellite data, climate models, and advanced computing technologies. The DestinE Platform, as the user access point for the Destination Earth ecosystem, will provide real-time insights into Earth’s processes, promoting climate resilience, environmental sustainability, and data-driven decision-making. It also aims to foster collaboration among scientists, policymakers, and stakeholders to address global challenges such as climate change, natural disasters, and resource management. The DestinE Platform provides a comprehensive ecosystem of services and resources for understanding environmental challenges. Through advanced Digital Twin technologies, the platform enables users to extract valuable insights from complex Earth system datasets. By leveraging cloud-based technologies, the DestinE Platform creates a flexible, high-performance environment where stakeholders can not only access critical environmental data but also contribute their own services. Building on this collaborative foundation, the DestinE Platform facilitates broad engagement by enabling users to integrate their own services. This enhances their visibility, fosters cross-sector collaboration, and drives innovation. Contributors can manage their user communities with tools to control service access and distribution, while amplifying their impact by reaching a diverse, trusted audience of experts, organisations, and potential partners. 
The DestinE Platform implements a strategic three-stage service integration framework that ensures rigorous quality and alignment, transforming external service contributions into platform innovations: 1. Request: Potential service providers submit a Service Onboarding Request, ensuring essential information and compliance with platform terms are provided. 2. Evaluation: Stakeholders assess the request's alignment with DestinE’s objectives to ensure coherence with the platform’s vision. 3. Integration: Approved services are integrated into the platform with comprehensive guidance and support from the DestinE Platform team. Through these stages, the DestinE Platform ensures that new services enrich its offerings while maintaining high standards of quality and relevance. To streamline the onboarding and integration process, the platform provides extensive documentation and community tools, creating an inclusive environment that empowers researchers, policymakers, and practitioners to leverage advanced data for sustainability and climate resilience. The platform’s collaborative ecosystem has the potential to accelerate climate research, support more effective policy development, and drive cross-sector innovation in sustainability technologies. In summary, the DestinE Platform is an essential tool for scientists, policymakers, and industry innovators seeking to explore and contribute to Earth system analysis. By integrating user services and encouraging collaboration, the platform strengthens the collective capability to understand and respond to environmental changes, driving progress toward sustainable development and global climate action.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Augmented Reality Meets Earth Observation (ARmEO)

Authors: Francesco Vuolo, Matthias Husinsky
Affiliations: University of Natural Resources and Life Sciences, Vienna (BOKU University), University of Applied Sciences, St. Pölten (FHSTP)
New digital technologies enable efficient management systems that enhance farm profitability, minimize environmental impacts, and improve production sustainability while raising safety and health standards. Notably, work-related health issues are most prevalent in agriculture, hunting, and forestry, yet these sectors lag in digitalization, ranking second last among European industries. Even when supportive data is available, practitioners often hesitate to adopt digital tools due to concerns over complexity and reliability. However, digitalization and automation hold immense potential to drive transformative improvements in working conditions, workforce resilience, and productivity while supporting sustainability and economic goals. This paper introduces the development and field demonstration of ARmEO, an augmented reality (AR) prototype developed by BOKU and FHSTP and tested successfully with Austrian farmers, crop consultants, and researchers in spring 2023. Utilizing satellite-based data and AR glasses, ARmEO integrates geospatial information directly into the operator’s field of view, providing critical insights in real time without interrupting workflows. Key use cases included field scouting, enabling quick analysis of vegetation progress and growth anomalies over the season, and field management, facilitating variable-rate fertilizer application. The system allows users to compare Earth observation data with real-world environments side by side, greatly enhancing the intuitiveness and accessibility of spatial data. While tailored for agriculture, ARmEO demonstrates significant potential for applications in forestry and grassland management. This innovation could revolutionize smart forestry, making satellite data more actionable for field practitioners and fostering collaboration through data sharing. The project highlights the transformative role of AR technologies in promoting sustainable, efficient, and engaging practices in agriculture and beyond.
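Vegetation-progress layers like those scouted in ARmEO are commonly derived from indices such as NDVI; the sketch below illustrates that general approach, not ARmEO's actual processing, and the reflectance values are invented:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectances; ranges from -1 to 1, higher = denser vegetation."""
    return (nir - red) / (nir + red)

# Illustrative reflectance pairs for two field zones:
healthy = ndvi(nir=0.50, red=0.08)   # vigorous crop
stressed = ndvi(nir=0.30, red=0.15)  # possible growth anomaly
print(round(healthy, 2), round(stressed, 2))
```

Overlaying such a per-zone index in the operator's field of view is one way growth anomalies become visible during scouting.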

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: HIGHWAY: Scalable and Reliable Digital Twin Earth Data Processing Based on an MLOps Approach

Authors: Mohamed Boukhebouze, Henry de Waziers
Affiliations: Earthlab Luxembourg, Adwaiseo
The concept of Digital Twin Earth (DTE) represents a significant advancement in simulating, monitoring, and optimizing Earth's dynamic processes. Machine Learning (ML) serves as a foundational enabler, facilitating the integration and analysis of diverse Earth observation data into a real-time digital model. ML algorithms extract meaningful patterns from heterogeneous data sources, predict environmental changes, and enhance decision-making processes through automated complex data analysis. However, developing effective ML models for DTE presents significant challenges due to the scale and heterogeneity of the data involved. These challenges necessitate robust preprocessing pipelines and advanced architectures capable of handling multimodal inputs while ensuring real-time operation and continuous adaptation to dynamic conditions on Earth. MLOps (Machine Learning Operations) emerges as a promising solution to address challenges in advanced applications like Digital Twin Earth. It is a paradigm that encompasses best practices, concepts, and a development culture for the end-to-end lifecycle of machine learning products. As an engineering practice, MLOps combines machine learning, software engineering (particularly DevOps), and data engineering to effectively productionize ML systems. By leveraging key principles such as CI/CD automation, workflow orchestration, collaboration, ML metadata tracking and logging, and feedback loops, MLOps bridges the gap between development (Dev) and operations (Ops). This approach enables organizations to overcome the challenges associated with complex models, vast data volumes, and limited explainability in AIOps and digital twin applications. In the context of Digital Twin Earth, MLOps enhances the performance, scalability, and reliability of Earth observation systems, which generate vast amounts of data requiring real-time processing to deliver timely insights. 
MLOps enables more efficient development, deployment, and maintenance of ML models, ensuring that digital twins remain accurate, up-to-date, and aligned with the physical systems they represent. Additionally, MLOps facilitates continuous model improvement, addressing the need for more advanced digital twin representations and paving the way for their evolution in IT operations. However, adapting MLOps to the complexities of DTE systems presents its own challenges, including managing continuous data integration, ensuring real-time model retraining, and maintaining the interoperability of diverse data sources. The HIGHWAY service aims to support advancements in Earth observation by leveraging advanced technologies to facilitate the development of Digital Twin Earth (DTE) applications. Its innovative approach addresses key operational needs, including the efficient processing of large-scale and heterogeneous data, ensuring scalability to accommodate dynamic and growing data streams, and maintaining reliability to deliver accurate, real-time insights. These aspects are critical for building robust Earth system models that can adapt to evolving environmental and observational conditions. The data processing service of the HIGHWAY solution plays a key role in managing the complexities of Earth observation. This service integrates diverse data sources into a unified pipeline, enabling seamless ingestion, preprocessing, and transformation into actionable insights. By automating and optimizing workflows, the data processing service ensures high performance and minimizes errors, addressing the inherent challenges of managing vast, multimodal datasets. By adopting an MLOps-driven approach, the service incorporates best practices for machine learning operations, including continuous integration and deployment, workflow orchestration, and metadata tracking, ensuring a transparent and reliable data pipeline. 
This allows for real-time model updates, continuous learning from new data, and the ability to adapt to fluctuating observational needs, all of which are essential for keeping Digital Twin Earth applications accurate and up-to-date. Through this approach, the data processing service not only enhances the efficiency of Earth observation systems but also ensures their scalability and reliability, paving the way for more advanced, resilient, and impactful digital twin technologies. This presentation focuses on how the principles of MLOps have been applied within the HIGHWAY service’s data processing service to meet the complex demands of real-time Earth system modelling and decision-making.
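One element of such an MLOps feedback loop, continuous evaluation triggering retraining, can be sketched as follows (the drift rule and thresholds are hypothetical, not HIGHWAY's actual policy):

```python
def needs_retraining(recent_errors, baseline_error, tolerance=0.2):
    """Continuous-evaluation hook of an MLOps feedback loop: flag the
    model for retraining when the mean error on newly ingested data
    drifts more than `tolerance` (relative) above the baseline."""
    mean_error = sum(recent_errors) / len(recent_errors)
    return mean_error > baseline_error * (1.0 + tolerance)

print(needs_retraining([0.11, 0.10, 0.12], baseline_error=0.10))  # within tolerance
print(needs_retraining([0.15, 0.16, 0.14], baseline_error=0.10))  # drifted
```

In a full pipeline this check would run inside the orchestrated workflow, with the decision and metrics recorded as ML metadata.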

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: EDEN: seamless access to the Destination Earth data portfolio

#stac #cloud-native

Authors: Moris Pozzati, Alessia Cattozzo, Damiano Barboni, Federico Cappelletti, Simone Mantovani
Affiliations: MEEO
Destination Earth (DestinE) is a major initiative of the European Commission which aims to develop high-precision digital models of Earth (referred to as ‘Digital Twins’) to monitor the effects of natural and human activity on our planet, predict extreme events, and adapt policies to climate-related challenges. The European Space Agency leads the implementation of the DestinE Platform, the cloud-based ecosystem enabling users to exploit a wide range of applications and services, including direct access to the data provided by the Digital Twin Engine and the DestinE Data Lake. Among the core services of the Platform, EDEN offers seamless access to the DestinE Data Portfolio. The infrastructure, based on FAIR principles, enables users to search and retrieve geospatial data from federated sources, local data caches, and Digital Twin simulations. The service also delivers Analysis-Ready Cloud-Optimised (ARCO) products for advanced research. Key technological components include:
- Harmonised Data Access: the core API allowing for data discovery and access across heterogeneous data sources.
- Open Geospatial Consortium services (e.g. OpenSearch, STAC, WMS): enhance cloud-native resource indexing and exploitation within the Platform.
- Data cache management: enables access to the DestinE Platform’s local cache with pre-fetched data.
- Finder: the user-friendly webGIS providing data discovery, visualisation, and access capabilities.
By bridging machine- and human-readable interfaces, EDEN strives to create a seamless environment for researchers, policymakers, and scientists to integrate satellite data, in-situ observations, and model outputs, advancing DestinE's mission of comprehensive Earth system understanding.
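Since EDEN exposes STAC interfaces, a search request can be expressed as a standard STAC API item-search payload; the collection ID and values below are hypothetical, only the field names follow the STAC API Item Search specification:

```python
import json

# A STAC API item-search payload (collection ID, bbox and dates are
# invented for illustration; field names follow the STAC API spec).
search = {
    "collections": ["sentinel-2-l2a"],
    "bbox": [10.0, 45.0, 12.0, 47.0],          # lon/lat bounding box
    "datetime": "2024-06-01T00:00:00Z/2024-06-30T23:59:59Z",
    "limit": 10,
}
payload = json.dumps(search)
# The serialized payload would be POSTed to the service's /search endpoint.
print(payload)
```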

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Machine-learning emulators of land surface model JULES for a future Digital Twin of Drought in Africa

Authors: Cristina Ruiz Villena, Robert Parker, Tristan Quaife, Ross Maidment, Mouhamadou Bamba Sylla
Affiliations: NCEO - University Of Leicester, NCEO - University of Reading, African Institute for Mathematical Sciences
Human-induced climate change is the greatest threat the world has ever faced, according to the United Nations. The impacts of our changing climate are already having devastating consequences for people and the planet, particularly in Africa, with threats to health, food and water security, livelihoods, and biodiversity. In particular, drought is one of the largest concerns for many African nations, given the strong reliance of their populations on rain-fed subsistence agriculture. With climate change, meteorological drought is expected to increase in frequency, severity and extent. This can translate into agricultural drought; however, the extent of this effect remains uncertain, so preparedness and resilience are crucial to mitigating its impacts, and climate data are key to making informed decisions. We now have a wealth of climate models and Earth Observation (EO) data from an ever-growing network of in situ and remote sensing instruments. However, these sources of data often do not translate directly into information that can easily be used by stakeholders, and they require high computational power and expert knowledge. Digital Twins (DTs) are tools that combine models and observations to provide actionable information for stakeholders. DTs allow users to easily explore the available data and test potential interventions to help decision making. In this work, we present a novel approach for the technical component underpinning DTs. Our proposed solution is an innovative model-data fusion that consists of developing machine-learning emulators for specific processes within Earth system models and driving them using EO data. These emulators are much faster and more lightweight than the models they replicate and can be easily run by non-experts without dedicated high-performance computing facilities, thus providing a way to democratise access to climate data. 
We are now developing emulators of JULES soil moisture that will form the basis of a future Digital Twin of drought in Africa, and our initial prototype is able to reproduce JULES top-layer soil moisture with excellent agreement (R² > 0.95). The next steps include emulator optimisation, integration of EO datasets to create a new soil moisture product, and evaluation against existing products. We have also partnered with climate scientists in Rwanda to exploit the emulator and new dataset for studying drought in East Africa, including under future climate scenarios. This collaboration will allow us to tailor the emulator applications to regional needs and empower African scientists to lead on Digital Twin technologies for climate resilience. In this presentation we will discuss our results and the potential of the emulators in the context of Digital Twins for climate action, focusing on Africa.
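The R² agreement criterion is straightforward to reproduce on toy data; the snippet below fits a linear emulator to an invented stand-in signal (not the JULES emulator or its data) and computes the coefficient of determination:

```python
import random

random.seed(0)

# Toy "physical model" output to emulate (a stand-in for soil
# moisture; the functional form and noise level are illustrative).
xs = [i / 50.0 for i in range(100)]
ys = [0.4 - 0.15 * x + random.gauss(0.0, 0.005) for x in xs]

# Fit a linear emulator by ordinary least squares (closed form).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
preds = [intercept + slope * x for x in xs]

# Coefficient of determination: fraction of variance the emulator explains.
ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))
```

Real emulators replace the linear fit with a learned nonlinear mapping, but the evaluation metric is the same.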

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: A.08.02 - POSTER - Advances in the theory and methodology of SAR Oceanography

In 2003, ESA’s workshop on Coastal and Marine Applications gathered the community to present the findings of ESA-funded studies of SAR ocean applications and to prepare recommendations for future research work. It was followed by the first SAR oceanography workshop, SEASAR 2006, entitled “Advances in SAR Oceanography from Envisat and ERS missions”, to organise regular consultations with the science and applications community. Most recently, the SEASAR 2023 workshop (Svalbard) allowed scientists to meet and exchange their latest research findings and advances to support new mission proposals, prepare white papers and discuss the urgent need for dedicated field campaigns to strengthen the understanding of air-sea interactions and the coupling of the atmospheric boundary layer and the upper ocean. The applications span a broad range of research areas tailored to wind, waves, sea ice and surface currents, as well as application areas including, for instance, coastal management, offshore industry, shipping, extremes and hazards.

We welcome contributions to research and applications within areas such as (not exclusive):
• Ocean Wind
• Ocean Current
• Waves
• Sea Ice
• Oil spill and ship detection
• New advances in SAR Oceanography: sensor synergy, Doppler methods and applications

Convenors: Johnny Johanessen (NERSC), Yves-Louis Desnos (ESA)

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Oil Spill Monitoring in the North Sea Based on Deep Learning Using SAR Imagery

Authors: Christoph Schnupfhagn, Yi-Jie Yang
Affiliations: Maritime Safety and Security Lab Bremen, Remote Sensing Technology Institute, German Aerospace Center (DLR), Faculty of Mathematics and Natural Sciences, Kiel University
Oil pollution in the North Sea is mainly caused by shipping activities and offshore oil production. The North Sea contains busy international shipping lanes through the English Channel and to the Baltic Sea. In addition, hundreds of offshore oil and gas platforms are scattered throughout the North Sea. Oil leaks and discharges from offshore platforms, pipelines and ships result in surface films that threaten sensitive marine and coastal ecosystems such as the Wadden Sea. Continuous satellite-based monitoring is therefore needed to detect accidental and deliberate oil spills in this large area, coordinate their mitigation and identify possible polluters. In this contribution, we present an oil spill monitoring system for the North Sea based on Synthetic Aperture Radar (SAR) imagery from the Sentinel-1 satellite. For this, a diverse dataset with manually labeled SAR image patches including oil spills and typical other oceanic and atmospheric phenomena was generated. The labeling was supported by contextual information such as wind speed and the position of offshore platforms. This dataset was used to train a state-of-the-art deep learning based object detector. The detector was then validated with an additional dataset of labeled SAR images covering all seasons. Each detected oil spill is segmented to provide the actual extent of the polluted area. The oil spill monitoring system is designed for operational use. Therefore, the key metrics are the false negative and false discovery rates. By taking advantage of local image statistics, we show that the accuracy of the detector can be significantly improved. Starting from individual SAR scenes, the monitoring system automatically generates geocoded binary masks and relevant metadata of the detected oil spills. The method is optimized for speed and accuracy and can be adapted for upcoming SAR missions. 
With its potential for operational use, our oil spill monitoring system can serve as a basis for decision makers and early response to mitigate pollution.
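The local-statistics idea can be illustrated with a minimal dark-spot threshold: flag pixels whose backscatter falls well below the surrounding level. This global-statistics sketch on synthetic data is a simplified stand-in for the window-based statistics of the operational system:

```python
def dark_spot_mask(image, k=2.0):
    """Flag pixels whose backscatter falls more than k standard
    deviations below the scene mean as dark-spot (candidate oil)
    pixels; a simplified, global stand-in for the local, window-based
    image statistics used in the operational detector."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    threshold = mean - k * std
    return [[p < threshold for p in row] for row in image]

# Synthetic 4x4 backscatter patch: bright sea clutter (~100) with a
# small dark slick (values 10-12).
patch = [
    [100, 100, 100, 100],
    [100,  12,  10, 100],
    [100,  11, 100, 100],
    [100, 100, 100, 100],
]
mask = dark_spot_mask(patch)
print(sum(cell for row in mask for cell in row))  # number of flagged pixels
```

A detector built this way produces exactly the kind of geocoded binary mask the abstract describes, before segmentation refines the spill extent.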

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Enhancing Deep Learning Ship Wake Detection and Feature Identification in SAR Imagery With Meteo-Marine Data Integration

Authors: Angela Carmen Cristofano, Maria Daniela Graziano, Giuliano Vernengo, Davide Bonaldo, Andrea Mazzeo, Sergio Iervolino, Roberto Del Prete, Margareth Di Vaia, Marisa Sperandeo, Diego Villa, Federico Franciosa, Gianmarco Scarpa, Federica Braga, Paolo Vavasori, Amedeo Fadini, Stefano Menegon
Affiliations: Università Degli Studi Di Napoli Federico II - DII, Università degli Studi di Genova - DITEN, Consiglio Nazionale delle Ricerche – Istituto di Scienze Marine
Synthetic Aperture Radar (SAR) systems, providing high-resolution data in all-weather and all-time conditions, are a valuable resource for remote sensing applications at sea. The imaging mechanism of SAR enables it to detect small variations in sea surface roughness, revealing features that can be observed and quantitatively estimated from satellite imagery. As a result, radar imagery plays a key role in oceanographic research, driving growing interest in SAR technology and spurring significant advancements in SAR imaging instruments and data processing techniques in recent years. SAR sensors exploit the interaction between the transmitted signal and the rough surface of the sea to detect features, which form characteristic patterns in the image due to the modulation of the radar signal by the surface waves. This capability has great relevance for marine surveillance applications, since SAR signatures of ship wakes can be exploited to identify vessels and extract information about their direction, speed, and other parameters. Ship detection plays a pivotal role in maritime operations, including border control, pollution monitoring, navigation safety, and rescue missions. Various techniques leveraging SAR data have been developed for this purpose. However, these methods are often challenged by the inherent complexity of the maritime environment. Wake detection must contend with the issue of false alarms caused by this intricate environment and the multitude of sea clutter sources within it. Being able to recognize and properly filter sea features requires good contextual knowledge of their appearance in SAR imagery, which depends on many factors including the radar operational parameters, the observation geometry and meteo-marine variables such as wind, precipitation and currents. 
While considering these elements is crucial for effectively classifying phenomena and detecting ship wakes, no existing methods fully integrate all this information to enhance ship detection systems. Addressing this gap is the goal of the UEIKAP project (Unveil and Explore the In-depth Knowledge of Earth Observation Data for Maritime Applications), funded under the Research Projects of National Interest (PRIN) programme by the Italian Ministry of University and Research, which seeks to combine detailed contextual knowledge of local marine phenomena and weather conditions with a deep learning-based detection model, specifically designed and trained for wake detection in remote sensing data. As part of the UEIKAP project, a multi-level study on sea characterization has been conducted, addressing various aspects of the topic and pursuing complementary objectives. Its ultimate goal is the development of a sea imaging model able to overcome the existing limitations of the state of the art, and a well-balanced dataset for wake detection which incorporates knowledge of the environmental data into the network, in order to improve detection accuracy. The starting point of the study is the analysis of the most common marine and atmospheric phenomena that leave visible signatures in SAR imagery of the sea, in order to provide an updated overview of the features and the corresponding patterns which generate clutter. In addition to the description of their genesis and characteristics, particular attention has been given to the dependence of patterns on SAR frequency, polarization, and incidence angle, which can be exploited as filters to enhance image clarity or to highlight specific features over others. This becomes especially significant when distinguishing between features with similar signatures. 
In such a framework, an analysis of different analytical models for the description of sea surface features has been carried out in order to identify the fundamental relationships between the radar backscatter and the sea wave spectra, which is the key to finding out how their interaction works and hence how the characteristics of each feature affect the resulting pattern in the satellite image. This highlighted the links between the occurrence of certain phenomena and the variation of some environmental parameters such as sea temperature, which is among the variables that can help in the process of classifying features. The presentation highlights the key findings of this analysis, which were taken into account in defining the criteria for the development of a dataset specifically designed for wake detection. Such criteria are discussed and the dataset structure explained in detail, along with the methodologies used to incorporate sea-state information into the classification process as required by the project. Many different data were employed in the project, collected from well-known open-access sources. From the Copernicus Hub, a large quantity of Sentinel-1 acquisitions was collected to obtain crops including SAR signatures of ship wakes. The acquisitions are at C-band and in VV polarization, which enhances the contrast between features and background. The TenGeoP dataset, released in 2018 by Wang C. et al., provided a wide selection of SAR samples for different classes of meteo-marine phenomena, while the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 dataset provided the data about the weather conditions and the sea state at the location and time of each satellite acquisition selected for the dataset. 
A first classification experiment was carried out to test the performance of a Convolutional Neural Network (CNN) model in recognizing features after being trained on a SAR dataset containing different classes of phenomena, including ship wakes. The architecture employed a ResNet-50 backbone, which provided the weights obtained from pre-training on the ImageNet dataset, subsequently fine-tuned for the classification of marine features. The resulting classifier showed good performance in distinguishing between wakes and non-wakes, and was able to correctly classify the other phenomena in the majority of the test cases. The next part of the project focuses on the development of a more advanced model which integrates the auxiliary information about weather and sea state into the classification process, as anticipated. The presentation shows the methodologies employed in this experiment and the results obtained from the classification before and after the introduction of meteo-marine data, which is expected to significantly improve classification performance. This also provides valuable clues about the correlation between the occurrence of certain phenomena and sea-state parameters, which could give a significant contribution to oceanographic research and boost future applications of EO-based technology in the field of maritime surveillance.
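The integration of meteo-marine variables into a classifier can be sketched as a late-fusion step; the normalization ranges and values below are hypothetical, and the snippet illustrates the general strategy rather than the project's actual model:

```python
def normalize(values, lows, highs):
    """Min-max scale raw meteo-marine variables to [0, 1] so they are
    commensurate with CNN feature activations."""
    return [(v - lo) / (hi - lo) for v, lo, hi in zip(values, lows, highs)]

def fuse(image_features, meteo_values, lows, highs):
    """Late fusion: append the normalized auxiliary variables to the
    image descriptor before the final classification stage."""
    return list(image_features) + normalize(meteo_values, lows, highs)

# Hypothetical inputs: three CNN activations plus wind speed (m/s)
# and sea surface temperature (K) with plausible physical ranges.
fused = fuse([0.12, 0.80, 0.05], [8.5, 291.2], lows=[0.0, 270.0], highs=[25.0, 310.0])
print([round(v, 2) for v in fused])
```

The fused vector then feeds the final classification layer, letting the network condition its decision on the sea state.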
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Temporal Analysis of Ice Tongue-Originated Iceberg in Terra Nova Bay Using Multi-polarization SAR Imagery

Authors: Mozhgan Zahribanhesari, Antonino Ian Ferola, Andrea Buono, Giuseppe Aulicino, Maurizio Migliaccio
Affiliations: Università Degli Studi Di Napoli Parthenope
Icebergs in Antarctica play a pivotal role in both local and global ecosystems. Acting as vast freshwater reservoirs, they influence ocean salinity and circulation as they melt. This process releases nutrient-rich water that supports phytoplankton growth, forming the base of the marine food web. Icebergs also contribute to sea-level regulation, with their calving dynamics providing critical insights into climate change and enabling future sea-level rise predictions [1, 2]. A significant event occurred on January 16, 2022, when the Drygalski Ice Tongue underwent a major calving, resulting in the detachment of a large iceberg [3]. This event was potentially triggered by the powerful eruption of the Hunga Tonga-Hunga Ha’apai submarine volcano on January 15, 2022, which caused atmospheric pressure changes and shockwaves [4]. Continuous and effective monitoring of iceberg location, drift and morphological properties is therefore needed to advance environmental research and our understanding of polar dynamics. In this study, we leveraged remote sensing techniques to analyze: 1. Iceberg drifting: Using a long multi-polarization and multi-frequency time series of SAR images, complemented by cloud-free optical data when available, the complex drift of the iceberg in Terra Nova Bay is investigated. 2. Scattering and morphological properties: Dual-polarimetric parameters derived from the SAR covariance matrix are analyzed to identify key changes in surface roughness, moisture content and structural variations of the iceberg. These analyses reflected air-sea interactions with environmental factors such as currents, winds and temperature, offering insights into iceberg stability, melting rates and ecological impacts. 3. Ocean current influence: Geostrophic current patterns, derived from satellite altimetry data, were analyzed to compute velocities based on sea surface height. This revealed how large-scale pressure gradients and the Coriolis effect influence iceberg drift.
Overlaying these patterns with the iceberg's trajectory highlighted both global and localized current dynamics, aiding predictions of iceberg pathways, collision risks and broader environmental consequences. Preliminary results showed that the iceberg, calved from the Drygalski Ice Tongue on January 16, 2022, followed a rotational path during its 14-month northward journey until breaking apart on April 14, 2023. Its summer drift was marked by significant rotation influenced by dynamic oceanic and meteorological forces. In winter, it became trapped in dense sea ice, slowing its rotation. Partial polarimetric analysis revealed a mixture of iceberg scattering mechanisms, with notable changes over time. In later stages, the iceberg's scattering became more uniform, likely due to surface smoothing or structural consolidation. These results highlight the combined effects of environmental forces and seasonal changes on iceberg behavior and evolution. References: 1. NASA. "Ice Sheets: Vital Signs of the Planet." NASA Climate Change and Global Warming. Available at: https://climate.nasa.gov/vital-signs/ice-sheets/?intent=121. Accessed November 28, 2024. 2. NASA. "NASA Studies Find Previously Unknown Loss of Antarctic Ice." NASA Climate Change and Global Warming, December 18, 2020. Available at: https://climate.nasa.gov/news/3206/nasa-studies-find-previously-unknown-loss-of-antarctic-ice/. Accessed November 28, 2024. 3. Liang, Q., et al. "Ice Tongue Calving in Antarctica Triggered by the Hunga Tonga Volcanic Tsunami, January 2022." Science Bulletin, vol. 68, no. 5, 2023, pp. 456–459. 4. Witze, A. "Why the Tongan Eruption Will Go Down in the History of Volcanology." Nature, vol. 602, no. 7897, 2022, pp. 376–378.
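The geostrophic current computation described above follows from the balance between the sea-surface pressure gradient and the Coriolis force. A minimal sketch of deriving surface velocities from sea-surface height gradients (not the authors' code; constants and function names are illustrative):

```python
import math

G = 9.81           # gravitational acceleration (m/s^2)
OMEGA = 7.2921e-5  # Earth's rotation rate (rad/s)

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def geostrophic_velocity(deta_dx, deta_dy, lat_deg):
    """Geostrophic surface velocity (u, v) in m/s from sea-surface
    height gradients d(eta)/dx, d(eta)/dy (m per m).
    Southern-hemisphere latitudes give negative f, reversing the
    flow direction relative to the pressure gradient as expected."""
    f = coriolis(lat_deg)
    u = -(G / f) * deta_dy   # zonal (eastward) component
    v = (G / f) * deta_dx    # meridional (northward) component
    return u, v
```

For a 1 mm/km northward height slope at 75°S (roughly Terra Nova Bay latitude), this yields a zonal drift of a few cm/s, the order of magnitude relevant to the iceberg trajectory analysis.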
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Sustainability of the Coast: Ship Monitoring and Coastal Mapping

Authors: Dr. Emlyn Hagen, René Günzkofer
Affiliations: NV5 Geospatial
The advent of high-resolution Synthetic Aperture Radar (SAR) satellites has transformed maritime domain awareness, enabling enhanced planning, tasking, and frequent revisits. These advancements offer unprecedented capabilities for monitoring ship activity and coastal features, even in challenging conditions such as poor weather or darkness. Despite this progress, significant gaps remain in effectively deploying these technologies outside of known maritime hotspots, where illicit activities and under-monitored regions pose substantial challenges to global maritime security and environmental protection. To address these challenges, a novel global ship detection and classification system has been developed, leveraging SAR data from the European Space Agency’s Copernicus Sentinel-1 mission. This system establishes a comprehensive and independent capability to monitor maritime activity and coastal regions worldwide. Its database, one of the most extensive of its kind, integrates historical data dating back to 2017 and continues to grow, enabling long-term trend analysis and real-time monitoring alike. A key feature of this system is its independence from Automatic Identification System (AIS) transponders, allowing it to identify and track vessels engaged in illegal fishing, smuggling, or unregistered military movements. By combining SAR data with advanced deep learning algorithms, the system classifies ships into 13 distinct categories, capturing crucial attributes such as orientation, size, and width. This capability extends to the detection of static offshore assets, including oil platforms and wind turbines, enabling broader applications in maritime security and environmental management. This ship detection system plays a pivotal role in uncovering activity patterns that AIS alone cannot reveal. 
For instance, it identifies vessels operating without AIS transponders, providing critical intelligence for monitoring Exclusive Economic Zones (EEZs) and combating illegal fishing. By indexing areas of interest for further high-resolution satellite imagery, it ensures that resources are optimally allocated to regions requiring detailed analyses. Additionally, the ability to generate comprehensive statistics on AIS-equipped and non-AIS vessels supports decision-making in maritime management and enforcement. Beyond vessel monitoring, the system contributes to renewable energy initiatives by identifying and monitoring offshore wind turbines. It delivers crucial data on turbine locations, construction progress, and operational capacity while also detecting potential damage. This functionality aids in optimizing wind energy infrastructure and supports the transition toward sustainable energy sources. The system’s versatility extends further with its capability for coastal mapping. By automating tidal height-based coastline extraction, it provides insights into coastal dynamics at various tidal stages, which are critical for understanding erosion, habitat changes, and shoreline management. Simultaneously, it incorporates anomaly detection, triggering follow-up observations from high-resolution satellites to address unexpected maritime events swiftly. Moreover, the system addresses the issue of Radio Frequency (RF) interference in SAR imagery, filtering out distortions while geolocating RF emitters. This dual functionality ensures clearer imaging and the ability to identify potential sources of interference, which is crucial for maintaining the integrity of SAR data in complex operational environments. In this presentation, we showcase how such SAR-based monitoring capabilities contribute to the broader community by filling gaps in global maritime surveillance and promoting sustainable practices. 
By leveraging the capabilities of Copernicus Sentinel-1 data and cutting-edge artificial intelligence, this system provides a foundation for safer, more transparent, and environmentally responsible maritime operations. With applications ranging from security and enforcement to renewable energy and coastal management, it highlights how SAR technology can serve as a cornerstone for advancing global efforts in sustainability, security, and resilience in the maritime domain.
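At the heart of any SAR ship-detection system of this kind is an adaptive brightness threshold that separates metallic targets from the sea-clutter background. The sketch below is a deliberately simplified global-statistics stand-in for the CFAR-style detectors such systems typically use; the actual detection pipeline is not published in this abstract:

```python
import statistics

def detect_bright_targets(image, k=3.0):
    """Flag pixels whose backscatter exceeds the scene mean by k
    standard deviations. `image` is a 2D list of linear-scale
    backscatter values; returns (row, col) pairs of detections.
    Real detectors estimate clutter statistics locally (CFAR) rather
    than globally as done here."""
    pixels = [p for row in image for p in row]
    mu = statistics.fmean(pixels)
    sigma = statistics.pstdev(pixels)
    thresh = mu + k * sigma
    return [(r, c) for r, row in enumerate(image)
            for c, p in enumerate(row) if p > thresh]
```

A single bright pixel over a dark, homogeneous sea surface is picked out cleanly; over textured clutter the threshold parameter k trades missed ships against false alarms.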
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Integration of satellite Synthetic Aperture Radar data with numerical models for investigating wind, waves and currents at the lakes surface

Authors: Marina Amadori, Mariano Bresciani, Giacomo De Carolis, Francesca De Santi, Ali Farrokhi, Gianfranco Fornaro, Claudia Giardino, Lorenzo Giovannini, Marco Papetti, Sebastiano Piccolroaz, Marco Toffolon, Giulia Valerio, Simona Verde, Virginia Zamparelli
Affiliations: IREA CNR, IMATI CNR, University of Trento, University of Brescia
The water quality and the overall ecological state of a lake depend on the mixing dynamics and transport processes. The use of hydrodynamic numerical models for resolving the physics behind water motion is an established practice in limnological investigations to overcome the existing measurement limitations. However, feeding the models with initial and boundary conditions, as well as calibrating and validating them, requires spatially distributed information. Water velocity is typically measured by in-situ sensors at fixed locations, on boats, or through drifters. All these tools provide time series with high temporal but limited spatial resolution. Similar limitations affect the measurement of the meteorological conditions and surface wind waves, which have important effects on the development of surface mixing. In marine and coastal environments, such limitations are overcome by the use of Synthetic Aperture Radar (SAR) data for the retrieval of spatially resolved information on wind, waves and water currents at the water surface. With the exception of very large lakes, radar applications in this context are limited to water level retrieval from altimetry. SAR products have seldom been exploited for lakes, and in general for enclosed basins, for several reasons: the size of lakes compared to the image spatial resolution; the impact of the surrounding orography; the minor, yet relevant, intensity of environmental processes (e.g. smaller wave heights and currents than those developing in open seas and oceans). Within this framework, we tackle the non-trivial problem of “rescaling” to lakes the methodologies consolidated in the ocean context. We present the preliminary results of a pioneering study on Lake Garda, Italy, where spaceborne SAR data are combined with numerical models for retrieving fields of wind speed and surface velocity for the first time in a medium-sized lake. COSMO-SkyMed (CSK) SAR backscatter amplitude and Doppler anomaly are used.
The CSK Doppler anomaly is processed to derive maps of Surface Radial Velocity (SRV). The CSK backscatter amplitude is converted into wind speed by applying the XMOD2 GMF. Tests combining XMOD2 with an X-band-scaled version of CDOP, called here XDOP, are made to both estimate the wind vector and understand the wind-related contribution to the SRV. In situ wind observations are collected together with surface current and wave measurements through ADCP, floating drifters and a wave buoy. These data are used to calibrate a coupled lake-atmosphere model for Lake Garda (WRF+Delft3D-FLOW) with a surface wave model (SWAN). The atmospheric model (WRF) provides time- and space-varying fields of meteorological variables at the lake surface. The hydrodynamic model (Delft3D-FLOW) computes the heat fluxes between air and water and the wind stress at the lake surface, and simulates the 3D flow field, turbulence, heat and mass transport within the lake. The wave model (SWAN) simulates the 2D wave fields and updates the hydrodynamics in FLOW, incorporating the dynamic feedback of wave motion on surface currents, bed shear stress, and water levels. SAR products for Lake Garda return evident structures with intense and spatially varying values of backscattering and Doppler frequency. On dates when a strong wind blows uniformly over the lake, the wind maps retrieved from CSK+XMOD2 are consistent with those from the atmospheric model WRF, while this is not the case for the low-wind dates. The SRV retrieved from CSK shows a qualitative agreement with the flow field simulated by Delft3D+SWAN, but significantly overestimates it. By applying XMOD2, we find that most of the signal retrieved from SAR is wind-driven, with only a small contribution from residual currents, which is consistent with the smaller inertia of lake surfaces compared to that of the oceans.
Further in-situ measurements and modeling are ongoing to deepen our understanding of surface motions at our test site and to define to what extent wind, waves and bulk currents contribute to the surface radial velocity retrieved from SAR in medium-sized lakes.
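The Doppler-anomaly-to-SRV conversion described above is commonly written as v_r = λ · f_dca / (2 · sin θ), where f_dca is the Doppler centroid anomaly and θ the incidence angle. A small sketch under that standard formulation (the X-band wavelength value and the sign convention are assumptions, not taken from the authors' processing chain):

```python
import math

def surface_radial_velocity(doppler_anomaly_hz, incidence_deg,
                            wavelength_m=0.031):
    """Ground-projected surface radial velocity (m/s) from a SAR
    Doppler centroid anomaly, v_r = lambda * f_dca / (2 sin(theta)).
    Wavelength defaults to X-band (COSMO-SkyMed, ~3.1 cm); sign
    convention (toward/away from the sensor) must be fixed per
    mission and is assumed positive-toward here."""
    return (wavelength_m * doppler_anomaly_hz /
            (2.0 * math.sin(math.radians(incidence_deg))))
```

A 20 Hz anomaly at 30° incidence maps to roughly 0.6 m/s, illustrating why even small Doppler contributions from wind-driven surface motion dominate the weak residual currents of a lake.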
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Global Detection of Offshore Oil and Gas Platforms Using Sentinel-1 and Sentinel-2 Data for Methane Tracking

Authors: Lulu Si, Shanyu Zhou, Luis Guanter
Affiliations: Research institute of Water and Environmental Engineering (IIAMA), Universitat Politècnica de València (UPV), Environmental Defense Fund
Accurate and comprehensive information on the location and operational status of offshore oil and gas platforms (OOGPs) is essential for evaluating their potential environmental impacts and formulating effective marine management policies. However, obtaining such detailed information is often hindered by incomplete or outdated data records, especially on the global scale. The issue is further exacerbated when the decommissioning of offshore platforms is inadequately planned, designed, or executed, leading to gaps in documentation and oversight. In this context, remote sensing data, such as synthetic aperture radar (SAR) images from Sentinel-1 observations, provide a valuable tool for systematically evaluating OOGPs by leveraging the strong backscatter signal from offshore platforms against the water background. This capability is further enhanced by their independence from cloud coverage, continuous availability, high spatial resolution, and frequent global coverage. Yet, the prevalence of false positives (e.g. vessels, sun glint and wind turbines) in the detection results poses technical challenges, as these factors can obscure true platform signals and reduce detection accuracy. We developed an adaptive filtering method based on Sentinel-1 SAR time-series images (2017-2023) to detect OOGPs and determine their operational status, paving the way for the first global OOGPs database. The method integrates geospatial technology and advanced mathematical operations on the Google Earth Engine (GEE) platform.
The proposed roadmap mainly includes: (1) preliminary OOGP detection candidates using threshold filtering on single backscatter images; (2) noise and false positive removal based on long time-series analysis; (3) spatial distribution mapping and status analysis of OOGPs after post-processing of the platform targets; (4) monitoring of natural gas flaring activities adjacent to the OOGPs for a calendar year, based on a normalized metric derived from the high-temperature signals recorded by the long time series of Sentinel-2 MSI images. A total of 146,080 Sentinel-1 SAR images from 2017 to 2023 were processed, yielding a dataset of 13,439 OOGP detections across the Gulf of Mexico, Persian Gulf, and Gulf of Thailand. The algorithm was validated using an independent dataset, achieving a detection accuracy exceeding 99%. Additionally, we have analyzed the temporal evolution and spatial expansion of OOGPs in several study regions. These findings highlight the potential of the derived dataset to enhance understanding of the marine environmental impacts of OOGPs. It serves as a valuable resource for monitoring offshore flaring, methane emissions and oil spills and represents a critical addition to the global OOGPs database.
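Step (2) of the roadmap — separating fixed platforms from transient false positives such as vessels — can be illustrated with a simple temporal-persistence rule: a platform reflects brightly at the same location in nearly every acquisition, while a moving ship rarely repeats. This is a hedged sketch of the idea; the actual adaptive filtering implemented on GEE is more involved:

```python
from collections import Counter

def persistent_targets(detection_stack, min_fraction=0.8):
    """detection_stack: one detection mask per acquisition, each a
    set of (row, col) pixels above the backscatter threshold.
    Keep a pixel as a platform candidate only if it is detected in
    at least min_fraction of the acquisitions; transient targets
    (vessels, sun glint) are filtered out."""
    counts = Counter(p for mask in detection_stack for p in mask)
    n = len(detection_stack)
    return {p for p, c in counts.items() if c / n >= min_fraction}
```

With a stack spanning years of Sentinel-1 revisits, this kind of persistence test also reveals operational status: a platform that disappears from later masks has likely been decommissioned.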
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: A.02.03 - POSTER - EO for Agriculture Under Pressure

The human impact on the biosphere is steadily increasing. One of the main human activities contributing to this is agriculture. Agricultural crops, managed grasslands and livestock are all part of the biosphere and our understanding of their dynamics and their impacts on other parts of the biosphere, as well as on the wider environment and on the climate is insufficient.
On the other hand, today’s Agriculture is Under Pressure to produce more food in order to meet the needs of a growing population with changing diets – and this despite a changing climate with more extreme weather. It is required to make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and its negative impact on the environment, and to deliver accessible, affordable and healthy food.
Proposals are welcome from activities aiming at increasing our understanding of agriculture dynamics and at developing and implementing solutions to the above-mentioned challenges of agriculture, or supporting the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross site research and benchmarking studies, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM) are welcome.

The session will hence cover topics such as:
- Impact on climate and environment
- Crop stressors and climate adaptation
- Food security and Sustainable Agricultural Systems
- New technologies and infrastructure

Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Laying the Groundwork for Next-Gen Hyperspectral Satellites in Digital Soil Mapping

Authors: Beth Delaney, Kevin Tansey, Professor Mick Whelan
Affiliations: University of Leicester
The properties of soil are fundamental to agricultural productivity and environmental sustainability, influencing processes such as water retention, nutrient cycling, and carbon storage (FitzPatrick, 1978; Lal, 2016). With agriculture facing increasing pressure to produce more food for a growing population, adapt to climate change, and minimise environmental impacts, the need for novel, scalable methods to predict soil properties continues to grow. Hyperspectral imaging, which captures detailed spectral data across an extensive range of wavelengths, is particularly effective at detecting subtle changes in soil composition and condition, and advances in hyperspectral remote sensing provide a promising opportunity to improve soil property predictions (Castaldi et al., 2016; Ward et al., 2020). Recent launches of hyperspectral satellites, such as PRISMA and EnMAP, have already begun to revolutionise digital soil mapping by offering unprecedented spectral resolution and spatial coverage. These missions provide critical insights into soil properties, paving the way for more effective management strategies. The upcoming next-generation hyperspectral missions, such as ESA's CHIME, are poised to become the 'go-to' for digital soil mapping. This shift is driven not only by their advanced capabilities but also by the anticipated availability of consistent, high-quality global datasets, which will significantly enhance their accessibility and utility for large-scale soil monitoring. Focusing on the use of laboratory-based hyperspectral measurements to train machine learning algorithms, this research establishes the groundwork for leveraging hyperspectral satellite data to predict soil properties. To achieve this, soil spectral libraries from across Europe, such as the LUCAS soil spectral database, will be utilised to train machine learning models, enabling them to interpret hyperspectral data accurately.
These algorithms will then be applied to existing hyperspectral satellite imagery, such as EnMAP, or Hyperion, to evaluate their capability to translate laboratory-based hyperspectral data into robust satellite-based predictions. The feasibility and scalability of this approach will be assessed to prepare for the anticipated advancements in future hyperspectral missions. Positive results have already been established using simulated EnMAP imagery (Ward et al., 2020), but testing on actual hyperspectral satellite imagery is yet to be trialled. The results carry significant implications for promoting sustainable agricultural practices. Accurate soil property maps will facilitate more precise management of resources such as fertilisers and water, improving efficiency and reducing environmental harm (Zanini et al., 2023). These advancements also align with global sustainability goals, contributing to improved food security and the responsible use of natural resources. By bridging the gap between laboratory research and satellite-based applications, this study demonstrates the transformative potential of hyperspectral data in monitoring and managing agricultural landscapes. References Castaldi, F., Palombo, A., Santini, F., Pascucci, S., Pignatti, S. & Casa, R. (2016) Evaluation of the potential of the current and forthcoming multispectral and hyperspectral imagers to estimate soil texture and organic carbon. Remote Sensing of Environment, 179, 54-65. https://doi.org/10.1016/j.rse.2016.03.025. FitzPatrick, E. A. (1978) An Introduction to Soil Science. Soil Science, 125(4). Lal, R. (2016) Soil health and carbon management. Food and Energy Security, 5(4), 212-222. https://doi.org/10.1002/fes3.96. Ward, K. J., Chabrillat, S., Brell, M., Castaldi, F., Spengler, D. & Foerster, S. (2020) Mapping Soil Organic Carbon for Airborne and Simulated EnMAP Imagery Using the LUCAS Soil Database and a Local PLSR. Remote Sensing, 12(20), 3451. https://doi.org/10.3390/rs12203451. 
Zanini, M., Priori, S., Petito, M. & Cantalamessa, S. (2023) Digital soil mapping for precision agriculture using multitemporal Sentinel-2 images of bare ground. 2023 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), 232-236. https://doi.org/10.1109/MetroAgriFor58484.2023.10424302.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Wheat crop mapping using Sentinel-1 SAR and Sentinel-2 Optical Data in Central India

Authors: Ms. Sakshi Jain, Dr. Unmesh Khati, Vineet Kumar
Affiliations: Department of Astronomy, Astro-physics and Space Engineering, Indian Institute Of Technology Indore, IHE Institute for Water Education
Wheat is India’s second most important crop after rice, and India is the second largest producer of wheat globally. Accurately mapping wheat is therefore vital for ensuring food security. This research makes synergistic use of openly available Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 satellite data. SAR-derived (VH, VV, VH/VV, DpRVI, DpRBI and mod. DpSVI) and optical-derived (EVI) descriptors are fed into a Random Forest classifier to classify wheat in central India. The Dual-pol Radar Vegetation Index (DpRVI) is based on the eigenvalue spectrum and the degree of depolarization; it is defined as unity minus the product of the co-pol intensity parameter and the co-pol purity parameter. The Dual-Pol Radar Built-up Index (DpRBI) formulation is based on the three Stokes vector elements of the scattered wave derived from the 2 × 2 covariance matrix C2. The optically derived Enhanced Vegetation Index (EVI) is optimized for high-biomass crop conditions, decoupling the canopy background response and reducing atmospheric influences. EVI is more responsive than NDVI to canopy structural variations, including leaf area index (LAI), canopy type, plant physiognomy, and canopy architecture. This study is conducted in the Indore district of western Madhya Pradesh (MP), India, where the major rabi season (November to March) crops are wheat, potato and onion. The Random Forest classifier is an ensemble learning method that uses bootstrap aggregation and is well suited to high-dimensional, nonlinear data. Field location and bio-physical information were collected during the cropping season. A classification comparison is then performed using only SAR-derived descriptors, only optical-derived descriptors, and both datasets synergistically. Results show 94.82% classification accuracy with SAR, 96.55% with optical, and 98.27% accuracy when combining data from both techniques.
The obtained results show that the combination of Sentinel-1 SAR and Sentinel-2 optical data outperforms classification based on a single imaging technique.
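The two vegetation indices central to this study can be written compactly. The sketch below follows the widely used DpRVI formulation from the C2 eigenvalue spectrum (degree of polarization times dominant-eigenvalue fraction, subtracted from unity) and the standard EVI constants; it is an illustration consistent with the descriptions above, not the authors' implementation:

```python
import math

def dprvi(c11, c22, c12_abs2):
    """Dual-pol Radar Vegetation Index from the 2x2 covariance
    matrix C2 = [[c11, c12], [c12*, c22]]; c12_abs2 is |c12|^2.
    DpRVI = 1 - m * beta, with m the degree of polarization and
    beta the dominant-eigenvalue fraction. Fully depolarized
    (dense canopy) -> 1; pure single scatterer -> 0."""
    trace = c11 + c22
    det = c11 * c22 - c12_abs2
    m = math.sqrt(1.0 - 4.0 * det / trace ** 2)  # degree of polarization
    lam1 = 0.5 * (trace + math.sqrt(trace ** 2 - 4.0 * det))
    beta = lam1 / trace                          # dominant-eigenvalue fraction
    return 1.0 - m * beta

def evi(nir, red, blue):
    """Enhanced Vegetation Index with the standard coefficients
    (G=2.5, C1=6, C2=7.5, L=1) on surface-reflectance bands."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
```

The two limiting cases make the index intuitive: a diagonal C2 with equal powers (fully depolarized return) gives DpRVI = 1, while a single-channel return gives 0.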
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: AI-Assisted Derivation of Agronomically Relevant, Small-Scale Soil Information Using an Integrated Soil Sensor System, Satellite and Drone Image Data, and additional Geodata

Authors: Niels Lakämper, Dr. Heike Gerighausen, MSc. Linda Tendler
Affiliations: Julius Kühn Institute (JKI) – Federal Research Centre for Cultivated Plants, Institute for Strategies and Technology Assessment, Julius Kühn Institute (JKI) – Federal Research Centre for Cultivated Plants, Institute for Crop and Soil Sciences
Climate change is affecting crop production through increasing instances of heavy rainfall, heat, drought, and prolonged dry periods. To mitigate these impacts, climate-adapted production methods are necessary, including site-specific adjustments to sowing depth, sowing density, and fertilization intensity. Implementing these methods requires practical, small-scale, agronomically relevant soil information. The “soil4climate” research project aims to develop a data collection and processing system that seamlessly provides farmers and advisors in crop production consulting with high-resolution soil information as part of routine workflows, without significant additional effort. To achieve this, data from multiple sensor systems and external data sources (environmental and geodata) are acquired and combined. A key element is a novel, low-budget geo-electric sensor system integrated into a cultivator, which collects information on the soil’s electrical conductivity alongside data on operation depth and position, and the tractor’s machine data. These ground-based measurements are augmented by remote sensing satellite data from the Copernicus program (Sentinel-1/-2) and UAV-based multispectral images (MicaSense RedEdge-MX Dual). Additional geodata include a Digital Elevation Model (DEM) and terrain attributes such as the Topographic Wetness Index (TWI), as well as weather data from the German Meteorological Service (DWD). All data from the various data sources are stored in a database. After preprocessing and fusion of the data, two approaches are compared. First, a Random Forest model is established as a baseline. Second, a Deep Learning approach is tested using two separate stages: one model trained on regional to large-scale soil samples functioning as a base model and later combined with another to train on localized on-site samples from the measured fields.
The aim of this setup is to improve generalization performance in predicting soil organic carbon and texture beyond soils of the study area. A broad array of soil samples is sourced from the Land Use/Cover Area Frame Survey (LUCAS) topsoil dataset to enhance generalizability and ensure a dataset sufficient for deep learning algorithms. These data are complemented with official soil surveys in selected German federal states. Calibration and validation of generated soil maps are carried out through laboratory analysis of georeferenced soil samples, in close collaboration with agricultural practitioners in Lower Saxony’s Hildesheim region and the Osnabrück area. A key challenge is to effectively integrate the multimodal dataset, consisting of measurements from various spatial scales and time frames, and to examine the impact of different combinations on potential map products. We will present the concept of the data collection and processing system, first results of the in situ data acquisition, and an evaluation concept using artificial intelligence methods with the combined model trained on the large-scale soil sample dataset.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Unveiling the Potential of RCM Compact Polarimetry SAR Data for Agriculture: From Despeckling to Multi-Temporal Crop Classification

Authors: Saeid Homayouni, Mr. Ramin Farhadiani, Prof. Avik Bhattacharya, Dr. Masoud Mahdianpari
Affiliations: National Institute of Scientific Research, Center For Water, Earth, Environment, National Institute of Scientific Research, Center For Water, Earth, Environment, Microwave Remote Sensing lab (MRSlab), Indian Institute of Technology Bombay, C-Core and Department of Electrical and Computer Engineering, Memorial University of Newfoundland
The RADARSAT Constellation Mission (RCM) Compact Polarimetry (CP) Synthetic Aperture Radar (SAR) data offers a powerful tool for agricultural monitoring. Still, its full potential is hindered by speckle, a granular noise-like phenomenon inherent in all coherent imaging systems. Effective speckle reduction and robust analysis of CP-derived features are critical for improving crop classification accuracy. Unlike prior studies relying on simulated CP data, this study used real RCM CP SAR data for two essential aspects: (1) the evaluation of various polarimetric despeckling techniques, and (2) a comprehensive assessment of crop classification using single-date and multi-date RCM CP SAR data, including an evaluation of feature importance. The first component of this study focused on comparing various Polarimetric SAR (PolSAR) despeckling methods, including Box Car, IDAN, Lee Refined, Lee Sigma, Improved Lee Sigma, and Lopez filters, using a CP SAR dataset collected over agricultural land in southern Quebec, Canada. The methods were evaluated based on no-reference quantitative indicators, measuring their ability to reduce speckle in homogeneous areas, preserve edge details, and prevent radiometric distortion. The impact of despeckling on classification performance was also examined utilizing the Random Forest classifier and essential CP features such as Stokes parameters, m-chi decomposition, and intensity images. The results revealed that the Box Car filter effectively suppresses speckle but over-smooths edges. In contrast, the Lee Sigma and Improved Lee Sigma filters provided an optimal balance between speckle reduction and detail preservation, significantly enhancing classification accuracy. The second component emphasized crop classification and the analysis of the importance of CP features. The RCM CP data from July 1, July 30, and August 27, 2021, covered various crop types, including soy, corn, hay, and cereal.
Features extracted from CP data are classified using a Random Forest model to compare single-date and multi-date classification approaches. Results indicated that single-date classification achieves overall accuracies of 61.10%, 75.00%, and 86.45% for the respective dates. However, integrating multi-date data markedly improved classification performance, achieving an overall accuracy of 91.20%. This improvement highlights the value of temporal diversity in capturing phenological variations and improving classification robustness. By combining insights from despeckling techniques with CP feature analysis, this study underscored the synergistic benefits of effective preprocessing and feature engineering for agricultural applications. The findings demonstrated that applying advanced despeckling methods enhances data quality and supports higher classification accuracy. At the same time, the use of multi-temporal CP data provided a more comprehensive framework for crop mapping. These methodologies provided a precise and reliable approach for monitoring diverse agricultural landscapes. This integrated analysis establishes a foundation for leveraging RCM CP SAR data in operational agricultural monitoring and underscores its potential for advancing remote sensing applications in precision agriculture.
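The Box Car filter whose trade-off the study reports is simply a moving-average window applied to the image. A minimal pure-Python sketch (illustrative only; operational despeckling works on the full covariance matrix, not a single intensity channel):

```python
def boxcar(image, k=3):
    """k x k boxcar (moving-average) filter on a 2D list of
    intensities. Edge pixels average over the partial window.
    Strong speckle suppression in homogeneous areas, but bright
    point targets and edges are smeared into their neighbours --
    exactly the over-smoothing behaviour reported in the study."""
    h = k // 2
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            win = [image[rr][cc]
                   for rr in range(max(0, r - h), min(rows, r + h + 1))
                   for cc in range(max(0, c - h), min(cols, c + h + 1))]
            row.append(sum(win) / len(win))
        out.append(row)
    return out
```

Adaptive filters such as Lee Sigma improve on this by averaging only over pixels statistically consistent with the centre pixel, which is why they preserve edges that the boxcar blurs.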

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Divergent crop mapping accuracies across different field types in smallholder farming regions

Authors: Xin HUANG, Dr. Anton Vrieling, Dr. Yue Dou, Xueying Li, Prof. Andrew Nelson
Affiliations: Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Department of Physical Geography and Ecosystem Science, Lund University
Accurately mapping crop types in smallholder farming regions is crucial for monitoring crop dynamics and estimating production but remains challenging, especially over large extents. Remote sensing-based crop mapping studies in smallholder farming regions often focus on major crops and the challenge of mapping small fields. However, minor but possibly emerging crops are often not considered, nor is the impact of environmental factors, such as water stress, on mapping accuracy evaluated. This study addresses these gaps by categorizing crop fields into different types and assessing mapping accuracy for both major and minor crops within each field type. Four indicators, field size, green chlorophyll vegetation index (GCVI), bare soil index (BSI), and shortwave infrared water stress index (SIWSI), were considered for field categorization, and their impacts on crop type mapping were assessed. Among them, field size and season-averaged SIWSI (SIWSImean) derived from Sentinel-2 (S2) were combined to categorize crop fields (big/small fields with/without water stress) due to their relevance to smallholder farming characteristics, sensitivity to crop mapping accuracy, and mutual independence. Crop mapping accuracies for different field types and crops (maize as a major crop and soybean as a minor crop) were compared at pixel-based (PB) and object-based (OB) levels using random forest classification applied to S2 and two additional publicly accessible multispectral datasets (PlanetScope with four bands (PS4) and eight bands (PS8)). Based on S2 data, big fields without water stress can be most accurately mapped (F1-score = 0.89 for maize and 0.85 for soybean), followed by small fields without water stress (0.85 and 0.68) and big fields with water stress (0.82 and 0.59), while small fields with water stress are the most challenging type (0.77 and 0.37).
Although the use of higher-spatial-resolution PS8 data and OB classification improved mapping accuracy for small soybean fields with water stress, limitations in mapping such fields remain (F1-score < 0.50). Our study reveals substantial variability in crop type mapping accuracy across different field types and crops. Major crops, as well as the trends and industrial expansion of minor crops, can be effectively mapped using publicly accessible multispectral datasets and machine learning algorithms. However, mapping minor crops cultivated in small fields under sub-optimal growing conditions exhibits higher uncertainty. This study provides a new perspective on crop type mapping in smallholder farming regions by using a simple and relevant categorization of field types and offers valuable insights into the potential and limitations of large-scale crop type mapping using machine learning algorithms. We highlight that confidence intervals around area estimates and accuracy metrics should be provided not only for each crop type but also for each field type represented in the map product to support more impartial and reliable estimates of crop area.
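The per-field-type accuracy assessment described above can be sketched as a stratified F1 computation. All labels and predictions below are synthetic stand-ins, and the per-field-type error rates are illustrative assumptions chosen only to mimic the pattern of degradation for small and water-stressed fields.

```python
# Sketch: F1-score computed separately for each field type
# (size x water-stress categorization, as in the study).
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
field_types = np.array(["big_no_stress", "small_no_stress",
                        "big_stress", "small_stress"])
ftype = rng.choice(field_types, size=1000)
y_true = rng.choice(["maize", "soybean", "other"], size=1000)

# Simulate a classifier that degrades for small / water-stressed fields
# (error rates are illustrative assumptions).
noise = {"big_no_stress": 0.10, "small_no_stress": 0.20,
         "big_stress": 0.25, "small_stress": 0.40}
flip = rng.random(1000) < np.vectorize(noise.get)(ftype)
y_pred = np.where(flip, rng.choice(["maize", "soybean", "other"], size=1000),
                  y_true)

# F1 for the maize class, stratified by field type.
per_type_f1 = {
    str(t): f1_score(y_true[ftype == t], y_pred[ftype == t],
                     labels=["maize"], average="macro")
    for t in field_types
}
print(per_type_f1)
```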

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: A Coupled Phasor-Based Machine Learning Approach for Fusarium Head Blight of Wheat Detection from Hyperspectral Images

Authors: Laura Sironi, Luca Tuzzi, Davide Panzeri, Giuseppe Quaratiello, Samuele Risoli, Lorenzo Cotrozzi, Sergio Cogliati
Affiliations: University of Milano-Bicocca, Physics Department, University of Milano-Bicocca, Department of Earth and Environmental Sciences, University of Pisa, Department of Agriculture, Food and Environment
The increase in the global population has recently raised the issue of global food security, related both to the early detection of pathogens and to the development of sustainable crop management strategies. In this context, remote sensing techniques are of fundamental importance, as they allow for the observation and monitoring of vegetation status. Moreover, the current and upcoming satellite missions capable of acquiring hyperspectral images enable us to gather information at different spatial scales, which can be utilized to monitor the health and productivity of crops. However, hyperspectral images are currently poorly exploited for detecting plant diseases (biotic factors) compared to other agricultural monitoring applications, such as monitoring water use and nutrient deficiency (abiotic factors). A small-scale controlled field experiment was conducted to assess the response of wheat (Triticum aestivum L.) cultivars to Fusarium Head Blight (FHB) infection. In particular, wheat ears subjected to various treatments, including the presence/absence of a conventional agrochemical agent or a biocontrol fungal agent antagonistic to FHB, were analyzed. Hyperspectral images of wheat spikes were acquired both by drone and in the laboratory, at higher spatial resolution, using a visible to near-infrared hyperspectral imaging system, with the aim of developing methods capable of automatically identifying Fusarium-induced symptoms. Here, we took advantage of the phasor approach to quickly and easily highlight differences in the spectral signatures of healthy and infected spikes. In this analysis, spectra are discrete-Fourier-transformed, resulting in points lying on a 2D scatter plot named the phasor plane. In this way, spectral differences can easily be identified and retrieved, since the position of the points is strictly related to spectral shape and properties.
Spectral differences between healthy and infected spikes were identified by analyzing the separation of points on the phasor plane, since this separation encodes the spectral differences between the spikes under investigation. The most informative spectral window for separating healthy and infected spikes is 555-735 nm. Moreover, we combined phasor-based and/or texture features to train a supervised machine-learning model capable of discriminating FHB infection at multiple spatial levels (pixels, grains, spikes, crops). Unsupervised clustering methods have also been exploited for early pathogen detection from the acquired hyperspectral images.
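The phasor transform described above can be sketched in a few lines: each spectrum is reduced to its first-harmonic Fourier coefficients (G, S), so spectra of different shapes land at different points on the 2D phasor plane, and the distance between points reflects spectral difference. The wavelength grid and the two toy spectra below are illustrative assumptions, not the study's data.

```python
# Sketch: first-harmonic phasor coordinates of a 1D spectrum.
import numpy as np

def phasor(spectrum):
    """Return the first-harmonic phasor coordinates (G, S) of a spectrum."""
    s = np.asarray(spectrum, dtype=float)
    n = len(s)
    x = np.arange(n)
    g = np.sum(s * np.cos(2 * np.pi * x / n)) / np.sum(s)
    sc = np.sum(s * np.sin(2 * np.pi * x / n)) / np.sum(s)
    return g, sc

wl = np.linspace(400, 1000, 200)            # nm, illustrative VNIR grid
healthy = np.exp(-((wl - 800) / 60) ** 2)   # toy spectrum peaking in the NIR
infected = np.exp(-((wl - 650) / 60) ** 2)  # toy spectrum shifted toward red

p_h, p_i = phasor(healthy), phasor(infected)
sep = np.hypot(p_h[0] - p_i[0], p_h[1] - p_i[1])
print(p_h, p_i, sep)   # separation on the phasor plane encodes spectral difference
```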

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Innovative Technologies For Non-Invasive Assessment Of Plant Health Condition To Support Precision Farming.

Authors: Emanuele Setale, Samiullah Yousaf, Francesco Anelli, Andrea Buono, Camilla Di Donato, Mariateresa Melis, Alessandro Fanti, Francesco Prudenzano, Giuseppe Mazzarella, Maurizio Migliaccio
Affiliations: Università Parthenope Di Napoli, Università degli Studi Di Cagliari, Università degli Studi Di Bari
Agriculture plays a crucial role in both local and global ecosystems. By providing food, fiber, and other essential resources, agriculture contributes to human well-being and the sustainability of communities. Additionally, agricultural practices influence important ecological cycles, such as carbon regulation and biodiversity conservation. Agricultural techniques, from soil cultivation to water resource management, can have a direct impact on climate change, both in terms of greenhouse gas emissions and adaptation to extreme events. Events like the increasing incidence of droughts and floods, increasingly linked to climate change, have a devastating impact on global agricultural production. The transition to sustainable agricultural practices, such as precision farming and the use of innovative technologies, is essential to ensure food security and reduce humanity's ecological footprint. Ongoing research and the adoption of innovative agricultural techniques are key to addressing future challenges and improving the agricultural sector's resilience to changing climatic conditions. This study will present the progress of the project Active Microwave Sensors for Vegetation Water Content Estimation, which aims to estimate the water content within plants and study the growth of individual plants using measurements from proximity radar. This work is part of the project Innovative Technologies for a Non-Invasive Assessment of Plant Health Condition to Support Precision Farming, which aims to develop sensor prototypes to be mounted on a moving platform operating directly in the field [1]. Vegetation monitoring will be carried out not only with radar sensors but also with millimetre waves. Monitoring vegetation using millimetre waves, such as those used in active microwave radar, offers numerous advantages, including:

1. Non-invasive measurement: Millimetre waves allow data collection on vegetation without damaging it, avoiding invasive methods that could affect plant health.
2. Monitoring of plant physiological conditions: Millimetre waves are sensitive to changes in plant tissues, enabling the monitoring of plant health and growth. This helps detect early signs of water stress, diseases, or nutrient deficiencies.
3. Assessment of biomass and plant structure: Millimetre waves can provide information about plant density and structure, offering valuable data to assess biomass and growth dynamics.
4. Real-time detection: The use of active radar allows for real-time monitoring of vegetation, even in challenging weather conditions like fog or rain, which may hinder other detection methods.
5. Large-scale monitoring: Millimetre waves enable the monitoring of large agricultural or forested areas, supporting landscape-level management and increasing the efficiency of agricultural and forestry practices.

Finally, multispectral infrared is also involved in this study, because it offers several advantages, including:

1. Assessment of plant health: Plants reflect light in the near-infrared (NIR) spectrum in a characteristic way based on their health. Healthy plants reflect more light in the infrared than stressed or diseased ones. Analyzing these signals helps monitor plant well-being, detecting early signs of diseases, water stress, or nutrient deficiencies.
2. Estimation of biomass and plant density: Multispectral infrared can be used to calculate vegetation biomass and density. This is useful for monitoring plant growth and optimizing agricultural practices such as irrigation management and harvest planning.
3. Monitoring chlorophyll content: Near-infrared reflectance is sensitive to the amount of chlorophyll in plants, which is a key indicator of photosynthesis. By monitoring chlorophyll levels, it is possible to gain insights into the overall health of plants and their photosynthetic capacity.
4. Detection of environmental conditions: Multispectral infrared is sensitive to various environmental factors, such as soil type, moisture, and temperature. This provides a more comprehensive understanding of the conditions affecting vegetation.
5. Mapping agricultural and forested areas: The ability to collect data over large areas in relatively short periods is a major advantage of multispectral infrared. This makes it possible to map large agricultural crops or forested areas, facilitating landscape-level management.
6. Detection of water stress: Plants under water stress reflect less light in the near-infrared. Using multispectral infrared allows for the detection of water stress signs and the optimization of water use, contributing to more sustainable agricultural practices.
7. Detection of land cover and seasonal changes: Multispectral infrared is useful for monitoring land cover and seasonal variations in vegetation, such as changes in plant growth stages or defoliation.

[1] S. Yousaf, M. A. Iqbal, A. Buono, F. Nunziata and M. Migliaccio, “Active Microwave Sensors for Vegetation Water Content Estimation”.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Global distribution of livestock densities (2000–2022) at 1 km resolution based on spatiotemporal machine learning and irregular census data

#stac

Authors: Dr. Leandro Parente, Carmelo Bonannella, Dr. Steffen Erhmann, Tomislav Hengl, Radost Stanimirova, Dr. Katya Perez Guzman, Steffen Fritz, Dr. Carlos Gonzalez Fischer, Lindsey Sloat
Affiliations: Opengeohub Foundation, German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, International Institute for Applied Systems Analysis (IIASA), Institute of Biology, Leipzig University, Land & Carbon Lab, World Resources Institute, Department of Global Development, College of Agriculture and Life Sciences, Cornell University, Cornell Atkinson Center for Sustainability, Cornell University
This study presents a novel framework for spatiotemporal mapping of livestock densities at 1 km resolution, developed as part of the Land & Carbon Lab’s Global Pasture Watch (GPW) initiative. GPW is a collaborative effort to enhance the monitoring of global grasslands and agricultural dynamics through the production of medium- to high-resolution datasets that inform sustainable agricultural systems and environmental policies. As part of these efforts, this research estimates annual densities of buffalo, cattle, goats, horses, and sheep for the period 2000 to 2022, addressing critical gaps in the temporal and spatial precision of livestock distribution data. A cornerstone of this work is the integration of GPW’s grassland extent maps, which provide annual classifications of cultivated and natural/semi-natural grasslands at 30 m spatial resolution. These high-resolution grassland products are used to spatially distribute the irregular and incomplete livestock census data within administrative polygons, weighted according to forage availability. This approach ensures that the livestock density estimates account for both ecological and management contexts. The methodological framework combines a point-sampling approach with ensemble machine learning models, specifically Random Forest and Gradient Boosting Trees. Census data were transformed into spatially distributed point samples, with weights assigned based on grassland proportions. These were further combined with dynamic environmental and socioeconomic covariates, such as climate indicators and accessibility metrics, to model livestock densities. The reliability of the predictions was assessed through both internal validation, using cross-validation with spatial blocking, and external validation, employing a hold-out approach with 20% of the data.
Uncertainty quantification was performed to provide users with confidence intervals, and predictions were standardized to match national statistics for consistency. The resulting dataset provides a globally consistent, medium-resolution time series of livestock densities spanning more than two decades. This fills a critical gap, as previous efforts have largely been limited to static snapshots with coarse spatial resolutions and limited temporal depth. These new outputs enable improved analyses of livestock dynamics over time and across diverse regions, supporting applications in greenhouse gas emissions accounting, land-use optimization, and sustainable agricultural planning. The data will be made freely available as Google Earth Engine assets and in a STAC catalog. As open-access products (mapping products and harmonized reference census data), they align with GPW’s mission to provide tools that address the growing need for sustainable food production under changing climatic and socioeconomic conditions.
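The dasymetric weighting step described above, where an administrative-unit census total is spread over that unit's 1 km pixels in proportion to grassland (forage) fraction, can be sketched as follows; the pixel values and census total are illustrative.

```python
# Sketch: distribute a livestock census total across pixels,
# weighted by grassland fraction (dasymetric mapping).
import numpy as np

def distribute_census(total_head, grass_fraction):
    """Allocate a census total to pixels in proportion to grassland fraction."""
    w = np.asarray(grass_fraction, dtype=float)
    if w.sum() == 0:
        return np.zeros_like(w)          # no forage: nothing to allocate
    return total_head * w / w.sum()      # head of livestock per pixel

grass = np.array([0.0, 0.2, 0.5, 0.3])   # grassland fraction per 1 km pixel
density = distribute_census(10_000, grass)
print(density)                            # approximately [0, 2000, 5000, 3000]
```

The allocation preserves the census total by construction, which mirrors the study's standardization of predictions to national statistics.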

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Investigating Yield Reductions Under Extreme Events: Clusterization and Disaggregation Using ARYA with Remote Sensing and Climate Data

Authors: Italo Moletto-Lobos, Belen Franch, Eric Vermote, Natacha Kalecinski
Affiliations: Global Change Unit, University of Valencia, Department of Geographical Sciences, University of Maryland, NASA Goddard Space Flight Center
The reduction of agricultural yields under extreme climatic events, such as frosts, insufficient solar radiation, droughts, and other environmental stressors, remains a pressing challenge in global agriculture. These yield reductions often manifest with varying intensities across regions, complicating efforts to quantify their impact, particularly in data-scarce areas. The Agriculture Remotely-sensed Yield Algorithm (ARYA), an Earth Observation (EO)-based approach, offers a pathway to understanding yield variability at national and subnational scales. However, ARYA's reliance on detailed input data limits its applicability in heterogeneous regions with significant climate variability and sparse data availability. This research introduces a novel method to investigate and disaggregate yield reductions associated with extreme events. By combining ARYA outputs with k-means clustering and auxiliary climate datasets, we analyzed patterns of yield reduction across regions experiencing similar extreme conditions worldwide. Key variables, including frost occurrence, solar radiation deficits, and drought intensity, were aggregated and analyzed to identify areas exhibiting common yield constraints. Vegetation indices (e.g., DVI and crop-specific indices) derived from remote sensing, in conjunction with key parameters of growing degree days (GDD) around extreme events, were evaluated to explore their relationship with observed yield reductions. The clustering process allowed the identification of regions with homogeneous yield reduction patterns, particularly in areas experiencing significant yield reductions. These were further analyzed by integrating ERA-5 meteorological data and wheat-specific remote sensing indices, as demonstrated in Franch et al., 2025. Machine learning techniques, such as XGBoost, were employed to facilitate the disaggregation of yield estimates at finer spatial scales. 
Preliminary results demonstrate that this methodology effectively identifies key drivers of yield reduction and their spatial manifestation, providing insights into the timing and magnitude of these impacts. The analysis revealed distinct vegetation response patterns to extreme events, offering critical insights into the processes underlying yield variability and reductions. This approach highlights the potential of combining remote sensing, climate auxiliary data, and machine learning to investigate and quantify yield reductions under extreme climatic events. The findings provide a basis for addressing the challenges of yield disaggregation in areas affected by environmental stress, offering actionable insights for global agricultural monitoring and resilience-building efforts in the face of climate change.
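The clustering step described above can be sketched with k-means on extreme-event indicators (frost occurrence, radiation deficit, drought intensity). The two synthetic region groups, the feature values, and the choice of k are illustrative assumptions; in practice features would be standardized and k chosen from the data.

```python
# Sketch: k-means grouping of regions by extreme-event indicators.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Rows: regions; columns: frost days, radiation deficit, drought index.
scale = 0.05 * np.array([20, 1, 1])
frost_prone = rng.normal([20.0, 0.1, 0.2], scale, (50, 3))
drought_prone = rng.normal([2.0, 0.2, 0.8], scale, (50, 3))
X = np.vstack([frost_prone, drought_prone])

# Two clusters for the two synthetic stress regimes
# (features would normally be standardized first).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
print(km.cluster_centers_)
```

Once regions with homogeneous yield-reduction patterns are identified this way, each cluster can be analyzed or modeled separately, as in the disaggregation step of the abstract.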

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Production of vegetation indices from Sentinel-1 SAR images for temporal densification for early crop monitoring

Authors: Benjamie Lavie, Nathan Sobetsky, Chloé Thenoz, Henry Rivas, Nafissa Sfaksi, Marie-Odile Marché
Affiliations: Magellium, MEOSS, Centre National d'Etudes Spatiales (CNES)
Vegetation indices are commonly estimated from optical data such as Sentinel-2 and are widely used in applications such as crop monitoring, early crop growth detection, crop production assessment, and carbon monitoring. However, these optical data are heavily affected by cloud cover, which prevents full use of the available acquisitions and thus reduces the temporal resolution of monitoring. SAR instruments can overcome this issue: operating at centimetre wavelengths, they provide backscattered signals that are unaffected by clouds. It has been established that SAR acquisitions are sensitive to the 3D structure of vegetation as well as to the water it contains, and that SAR is particularly well suited for crop monitoring [VEL17, VAV20]. The relationship between vegetation and SAR is also highlighted by the existence of radar vegetation indices such as the RVI (Radar Vegetation Index) and the RFDI (Radar Forest Degradation Index). Moreover, some research works have found correlations between SAR VV and VH polarizations and optical vegetation indices such as the NDVI (Normalized Difference Vegetation Index) [PAL23], and biophysical parameters such as the LAI (Leaf Area Index) [YAD21]. However, SAR data have specificities that limit the information one can retrieve from them: speckle noise, a degraded spatial resolution compared to optical acquisitions, and a high sensitivity to the dielectric properties of the environment (water content), to roughness, and to relief (linked to acquisition geometry), all of which affect the degree of correlation with optical vegetation indices over the year. Most approaches proposed recently to overcome these limitations rely on machine learning and deep learning methodologies.
Some approaches use both optical and SAR images to gap-fill cloudy areas with vegetation indices predicted from SAR, whereas others predict vegetation indices from SAR acquisitions alone, processed mono-temporally or multi-temporally. In this study, we aim to demonstrate that vegetation indices produced from SAR images can be used for crop monitoring, with denser time series and performance similar to vegetation indices computed from optical images. Our work focuses on predicting the SAVI (Soil Adjusted Vegetation Index), EVI2 (two-band Enhanced Vegetation Index), and NDRE (Normalized Difference Red Edge Index) optical vegetation indices from Sentinel-1 GRD VH and VV backscatter with deep learning methods. The predicted indices will be evaluated within the framework of early crop monitoring, to test their relevance and effectiveness in meeting operational needs. This evaluation will be based on a comparative analysis of time series derived from vegetation indices predicted from radar data (Sentinel-1) and those calculated from optical data (Sentinel-2). Four French regions presenting diverse crops are selected over the 2018-2021 period. An additional French bioclimatic region is also used to test the networks' generalization ability. A configurable data preparation pipeline is designed to download and coregister Sentinel-2 L2A products and Sentinel-1 GRD products from the Copernicus Data Space, from which the Sentinel-1/Sentinel-2 product dataset is derived. The Sentinel-2 L2A products are used to compute the ground truth of the vegetation indices, which is used, jointly with the Sentinel-1 products, as input to the deep learning networks. Different encoder-decoder convolutional network architectures are implemented to produce the vegetation indices, and their performances are compared.
Various tests are carried out to assess the contribution of technical options such as the use of either a digital elevation model (DEM) or the local incidence angle (LIA) from Sentinel-1. Other technical options, such as one-network-per-index versus multi-index network training, are also evaluated. The evaluation is performed in two steps: first on the selected regions on which the network has been trained (but on areas not seen during the training process), then on the additional “unseen” region. A multi-region network, i.e. trained on the four regions, is also compared with region-specific networks, i.e. trained on only one specific region, in order to identify the best training strategy from a generalization perspective. As a second step, the contribution of SAR-predicted vegetation indices is evaluated through the early crop monitoring application. To do so, the applicative results obtained from SAR-predicted indices are compared with the results obtained from vegetation indices produced from Sentinel-2 images and processed by the same crop monitoring application. To conclude, this approach provides an automatic dataset construction pipeline from Sentinel-1 and Sentinel-2 data, facilitating scalability and generalization to other countries or bioclimatic regions. Moreover, such networks, if trained globally, might work as “foundation models” that could be fine-tuned to different bioclimatic regions and provide information on crop monitoring at a lower computational cost. Ultimately, exploiting the synergy of multi-modal data could benefit decision-making on crop planning and water management. References: [VEL17] Veloso, A., Mermoz, S., Bouvet, A., Le Toan, T., Planells, M., Dejoux, J. F., & Ceschia, E. (2017). Understanding the temporal behavior of crops using Sentinel-1 and Sentinel-2-like data for agricultural applications. Remote Sensing of Environment, 199, 415-426. [VAV20] Vavlas, N. C., Waine, T. W., Meersmans, J., Burgess, P.
J., Fontanelli, G., & Richter, G. M. (2020). Deriving wheat crop productivity indicators using sentinel-1 time series. Remote Sensing, 12(15), 2385. [PAL23] Paluba, D., Saux, B. L., Sarti, F., & Stych, P. (2023). Estimating optical vegetation indices with Sentinel-1 SAR data and AutoML. arXiv preprint arXiv:2311.07537. [YAD21] Yadav, V. P., Prasad, R., & Bala, R. (2021). Leaf area index estimation of wheat crop using modified water cloud model from the time-series SAR and optical satellite data. Geocarto International, 36(7), 791-802.
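The dual-polarization Radar Vegetation Index (RVI) mentioned in the abstract can be computed directly from Sentinel-1 VV and VH backscatter in linear power units. The 4·VH/(VV+VH) form used here is one common dual-pol variant, and the input values are illustrative.

```python
# Sketch: dual-pol Radar Vegetation Index from VV and VH backscatter.
import numpy as np

def rvi_dual_pol(vv, vh):
    """RVI = 4*VH / (VV + VH); inputs in linear power units (not dB)."""
    vv = np.asarray(vv, dtype=float)
    vh = np.asarray(vh, dtype=float)
    return 4.0 * vh / (vv + vh)

vv = np.array([0.10, 0.08, 0.05])   # linear backscatter, illustrative values
vh = np.array([0.02, 0.03, 0.04])
print(rvi_dual_pol(vv, vh))
```

Higher VH relative to VV (more volume scattering from the canopy) drives the index up, which is why such indices track vegetation development.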

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Using machine learning and IoT with Earth Observation data as an innovative method for prediction of suitable beekeeping areas

Authors: Mirna Bušić, Ivan Razum, Dragan Divjak (MSc. in Geospatial Technology Management)
Affiliations: LIST LABS LLC
The Intellibee project demonstrates the potential of leveraging Earth Observation (EO), Machine Learning (ML), and Internet of Things (IoT) technologies to transform beekeeping practices and address the growing challenges of sustainable bee management. By integrating satellite imagery, sensor data, and in-situ observations, the project has developed a methodology to predict optimal bee forage locations with an accuracy exceeding 90%. This groundbreaking solution enables beekeepers to make data-driven decisions that optimize productivity, reduce colony risks, and contribute to environmental sustainability. The project was implemented across 12 diverse locations in Croatia, collecting ground truth data on honey yields, forage types, and hive locations. Sentinel-2 satellite data was used to calculate indices such as NDVI and NDMI, while IoT sensors provided localized measurements of temperature, humidity, and precipitation. By combining these data layers, advanced ML algorithms, including XGBoost and Graph Neural Networks (GNN), were trained to classify suitable areas for beekeeping activities. The web-based GIS platform developed during the project allows users to visualize suitability maps, analyze key parameters, and plan hive placement based on real-time and historical data, ensuring operational efficiency and sustainability. Results highlight the technical robustness and practical relevance of the Intellibee solution. Field validation confirmed that it achieves high predictive accuracy, addressing critical challenges such as honey yield fluctuations, adverse climatic conditions, and resource mismanagement. Moreover, the integration of EO and IoT technologies facilitates a shift toward systematic and precise management of apiculture, reducing dependency on intuition and traditional trial-and-error methods. This innovative approach promotes the use of space technologies in agriculture, aligning with global efforts to enhance food security and biodiversity. 
In addition to technical achievements, Intellibee emphasizes user accessibility through a mobile-friendly interface and collaboration with beekeepers for tailored solutions. Future development plans include expanding the geographic scope, refining ML models with additional datasets, and enhancing user engagement. By offering a replicable framework for sustainable beekeeping, Intellibee contributes to decision-making, policy formation, and global cooperation aimed at preserving bee populations and promoting environmentally responsible practices. At the intersection of science, technology, and agriculture, Intellibee stands as an innovative tool with significant implications for apiculture, environmental conservation, and rural development, highlighting the value of interdisciplinary collaboration in addressing contemporary challenges.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Advancing the Monitoring of Traditional Meadow Orchards: Current Approaches and Future Directions

Authors: Paul Joseph, Dr. Maike Petersen, Prof. Dr. Alexander Siegmund
Affiliations: Institute for Geography and Geocommunication – Research Group for Earth Observation (rgeo), Heidelberg University of Education, Heidelberg Center for the Environment (HCE) & Institute of Geography, Heidelberg University
Traditional meadow orchards are iconic features of European cultural landscapes and vital reservoirs of biodiversity, supporting up to 5,000 plant and animal species. However, these semi-natural habitats face increasing threats from agricultural intensification, land-use changes, and inadequate management, leading to declining tree vitality and loss of ecological functionality. Balancing ecosystem conservation with growing pressures on agricultural productivity requires innovative and scalable monitoring solutions. Our project adopts an integrated approach to monitoring meadow orchard ecosystems using remote sensing technologies. High-resolution imagery captured by unmanned aerial systems (UAS) provides detailed insights into individual trees, enabling the derivation of key metrics such as canopy dimensions, height, and structure. Spectral indices like the Normalized Difference Vegetation Index (NDVI) facilitate assessments of tree vitality and maintenance levels. Advanced classification algorithms further support the mapping of tree species and their spatial distribution. A further central aspect of our approach is capacity building and stakeholder engagement, as it emphasizes the integration of advanced geospatial technologies into community-based conservation efforts. By collaborating with local stakeholders such as trainees, farmers, and conservation groups, the project fosters digital skills, environmental awareness, and a shared responsibility for preserving cultural landscapes. The feasibility of assessing meadow orchards using UAS was tested near Bad Schönborn in southwestern Germany. A total of 4,995 fruit trees were identified, predominantly apple trees (68%), followed by pear (16%) and walnut (5%). While nearly all trees were classified as highly vital (99%), only 28% were deemed well-maintained. Over half (53%) required low-level maintenance, while 19% had urgent maintenance needs.
This highlights the difference between a vital plant and an agriculturally optimized plant, where pruning enhances yield. Classification accuracies ranged from 56% to 85%, suggesting improvements to the workflow, such as optimizing the timing of UAS image acquisition. The dataset is being expanded with additional study areas across Baden-Württemberg, alongside ongoing workflow refinements. Future work includes scaling methodologies by integrating satellite data with UAS-derived insights. The combination of high-resolution satellite imagery and UAS data allows for cost-effective, large-scale monitoring of meadow orchards, which is needed to guide targeted maintenance activities that contribute to the efficient conservation of these landscapes. These findings underscore the importance of modern geospatial technologies in preserving fragile ecosystems. By providing actionable insights into the condition and management of traditional orchards, this work supports broader biodiversity conservation and cultural heritage preservation goals.
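The per-tree assessment described above can be sketched as a zonal statistic: pixels are grouped by a tree-crown label map and each tree's mean NDVI is compared to a vitality threshold. The arrays and the 0.6 threshold are illustrative assumptions, not the project's actual classification rule.

```python
# Sketch: per-tree vitality screening from NIR/red rasters and a crown map.
import numpy as np

nir = np.array([[0.8, 0.8, 0.3],
                [0.7, 0.9, 0.2]])
red = np.array([[0.1, 0.2, 0.20],
                [0.1, 0.1, 0.15]])
crown_id = np.array([[1, 1, 2],
                     [1, 1, 2]])           # tree-crown label per pixel

ndvi = (nir - red) / (nir + red)           # NDVI per pixel
vitality = {}
for tid in np.unique(crown_id):
    mean_ndvi = ndvi[crown_id == tid].mean()   # zonal mean over the crown
    vitality[int(tid)] = "vital" if mean_ndvi > 0.6 else "stressed"
print(vitality)
```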
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Supporting food security and sustainable agriculture by improved agricultural monitoring combining hyperspectral and multispectral satellite data

Authors: Anne Schucknecht, Matthias Wocher, Anita Bayer, Tobias Hank
Affiliations: OHB System AG, Ludwig-Maximilians-University, Department of Geography
The main challenges of agriculture are to produce more food for a growing population and to become more sustainable by reducing environmental impacts – all under advancing climate change. One building block for tackling these challenges is the optimisation of agricultural management e.g., via sub-field specific management. This requires an improved monitoring of agricultural areas with respect to vegetation status, health, and stress. The combined use of physically-based modelling and machine learning in the so-called hybrid retrieval approach is considered a promising generalized and transferable method to quantitatively estimate biophysical and biochemical vegetation traits from optical earth observation data. So far, the approach has mainly been tested with hyperspectral airborne data. With the EnMAP and PRISMA missions, spaceborne hyperspectral data is increasingly available and with the Copernicus Sentinel expansion mission CHIME also a mapping mission will provide regular data in the near future. However, the revisit of CHIME (~11 days with two satellites) might still be too coarse for agricultural applications, especially considering the problem of cloud cover. Therefore, the combined use of data from different missions is beneficial to enable a dense monitoring of agricultural fields. In our study, we will test the combined use of hyperspectral EnMAP and multispectral Sentinel-2 data to monitor crops and grasslands in a study region in southwest Poland utilizing the hybrid retrieval approach. Based on preliminary model tests (with internal testing, no validation), we assume that the accuracy of EnMAP-based retrieval models is higher than those of Sentinel-2-based models, but that Sentinel-2 with its higher revisit time provides general trends of the vegetation variables. Over the growing season of the year 2024, we acquired multi-temporal in-situ data of vegetation variables of winter wheat, summer barley, and grassland fields in the study region. 
The collected data included measurements of chlorophyll a+b (Cab), leaf area index (LAI), fresh and dry aboveground biomass (AGBfresh/AGBdry), as well as area-based carbon (Carea) and nitrogen (Narea) contents. In addition, EnMAP was tasked around the sampling dates. Despite the frequent cloud cover in the study region, we obtained four EnMAP (31/03/2024, 01/05/2024, 05/05/2024, 02/07/2024) that were at least partially cloud free over the sampling sites. We developed hybrid retrieval models for crops and grassland based on EnMAP data building on the agricultural applications available in the EnMAP-Box 3. Therefore, we used the radiative transfer model (RTM) PROSAIL-PRO to generate look-up tables of sensor-specific spectra and associated RTM parameters. Then 75% of the LUT were used to train the machine learning regression algorithm Gaussian progress regression (GPR) with the internal active learning strategy pool active learning (PAL). The remaining 25% of the LUT were used for internal testing. In a next step, we will integrate real data (field spectrometer measurements and associated vegetation variables) in the active learning framework. Afterwards, the trained retrieval models will be validated with EnMAP scenes and associated in-situ data. In a last step, we will transfer the hybrid retrieval approach to Sentinel-2 data, validate it with the obtained field data, and apply the retrieval models to the available Sentinel-2 time series of the study region in 2024. For dates with joint EnMAP and Sentinel-2 acquisitions, we will compare the retrieval results of both sensors. Finally, we will assess the joined usage of EnMAP and Sentinel-2 data for time series analysis.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Monitoring and Predicting the Drivers of Land Degradation in Malawi Using Plot-Level Survey Data, Remote Sensing, and Machine Learning

Authors: Swarnalee Mazumder, Dr. Olena Dubovyk, Dr. Francis Muthoni, Nikhil Raghuvanshi
Affiliations: University Of Hamburg, International Institute of Tropical Agriculture (IITA)
Land degradation is the decline in the biological or economic productivity of land. It is one of the biggest challenges for people’s livelihoods and the environment in Malawi. This study aims to develop a machine-learning model to monitor and predict the drivers of land degradation using satellite remote sensing data and plot-level field surveys for this country. There are various ways to monitor land degradation, but most focus solely on distinguishing between degraded and non-degraded land. However, in this study, our model will not only monitor the land but also identify the causes behind the changes. We will use various satellite-derived indices for vegetation, soil health, and land quality parameters that change over time, as well as indices to represent socio-economic and agro-ecological features. The multi-source RS images including Landsat, Sentinel-1 and Sentinel-2 data are used (i) to classify land into degraded and non-degraded categories; and (ii) to distinguish various factors of LD. DL model trained on the field-level survey data from Malawi and multi-source RS images is used to understand degradation drivers during the baseline and monitoring periods. The results provide 30m resolution LD maps for the baseline period (2001-2015) and monitoring period (2015-2023) and information about key socio-economic and environmental factors contributing to land degradation and inform possible strategies for sustainable land management.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: SOLUM: A Bidirectional Soil Reflectance Dataset for the Advancement of EO-based SOC Monitoring

Authors: Ben Cutting, Professor Clement Atzberger, Professor Belen Marti-Cardona, Ms Theresa Strobl, Doctor Francesco Vuolo, Professor Franz
Affiliations: University Of Surrey
The LUCAS dataset (Orgiazzi et al., 2018) provides an exhaustive list of physical and chemical soil properties together with their reflectance spectra. These data are of paramount importance for the development of theoretical and empirical models relating soil physical/chemical and optical properties. LUCAS spectral measurements rely on diffusion-based reflectance whereby a sample is placed into an integrating sphere and measurements are taken from multiple detectors to give a single averaged value. This method has advantages such as providing a high accuracy reflectance spectra over short time frames. However, taking a pseudo-single measurement presents a significant disadvantage, inherent to use of diffuse reflectance, in that this value represents an average across all viewing angles and lacks any specific viewing geometries. Indeed, soil datasets offering bidirectional reflectance remain conspicuously absent. We suggest that these data are important since soils are not Lambertian reflectors, meaning that their reflectance is dependent on the viewing geometry. By using conventional soil optical datasets alone, it is impossible to train models to be truly agnostic to these effects without also obtaining spectra at multiple viewing geometries. To address this gap, we propose the creation of the SOLUM dataset representing the first open-access collaborative soil dataset containing both soil chemical and physical properties and bidirectional reflectance spectra at multiple viewing and illumination geometries. SOLUM integrates bidirectional reflectance data across wavelengths ranging from 350nm to 2500nm, coupled with key properties such as soil organic carbon (SOC) and moisture. The inclusion of SOC is particularly significant, as it is an indicator of soil health and can potentially play a critical role in carbon capture. 
To this end, 166 samples accounting for a variety of soil types and textures, were collected from areas in Austria, Denmark, and the United Kingdom. These samples were then illuminated with a lamp of known power and incident elevation angle. The bidirectional reflectance spectra were measured for 56 viewing geometries in a hemisphere above the sample at three different moisture levels for each sample at the standard illumination. A Dorna 2 robotic arm inverted over the sample was used in combination with a spectroradiometer to record the bidirectional reflectance data to a high angular precision. The reflectance spectra were measured at 1nm-wide bands from 350nm to 2500nm. The physical and chemical properties of the samples were evaluated separately giving the SOC, soil texture, and soil type for each. This comprehensive dataset is intended to support the development of optically driven models, either empirical, such as through machine learning, or through radiative transfer models, which are agnostic to the non-Lambertian effects of the soil surface. Indeed, many pre-existing RTMs require the bidirectional reflectance data for calibration, including but not limited to, the SOILSPECT (Jacquemoud et al., 1992) and MARMIT (Bablet et al., 2018) models. Both models can achieve a high accuracy when calibrated on suitable data suggesting that accounting for the surface roughness can significantly improve performance when connecting soil optical properties to the physical and chemical. Therefore, surface roughness is integral to the optical behaviour of soils and thus its inclusion within the SOLUM dataset. While our work represents the first phase of the dataset, considerable expansion is required and so the authors invite researchers to contribute measurements to this open-access project, so as to form a comprehensive dataset with a wide variety of soil types. 
It is the intention of the authors that this preliminary work forms the basis of a standard or benchmark for fellow researchers in order to produce consistent and comparable bidirectional soil optical data with the inclusion of key chemical and physical properties. Complementary to the LUCAS dataset, SOLUM will facilitate the creation and validation of rigorous soil optical models which are agnostic to the effects of both the soil moisture and, critically, the viewing geometry and/or surface roughness. Bablet, A., Vu, P. V. H., Jacquemoud, S., Viallefont-Robinet, F., Fabre, S., Briottet, X., Sadeghi, M., Whiting, M. L., Baret, F. & Tian, J. 2018. MARMIT: A multilayer radiative transfer model of soil reflectance to estimate surface soil moisture content in the solar domain (400–2500 nm). Remote Sensing of Environment, 217, 1-17. Jacquemoud, S., Baret, F. & Hanocq, J. F. 1992. Modeling spectral and bidirectional soil reflectance. Remote Sensing of Environment, 41, 123-132. Orgiazzi, A., Ballabio, C., Panagos, P., Jones, A. & Fernández-Ugalde, O. 2018. LUCAS Soil, the largest expandable soil dataset for Europe: a review. European Journal of Soil Science, 69, 140-153.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Detecting the Invisible Enemy in Maize: Machine Learning Classification of Fall Armyworm Damage in Maize

Authors: Tatenda Dzurume, Dr. Roshanak Darvishzadeh Varchehi, Professor Timothy Dube, Dr. Amjath Babu, Mutasim Billah, Dr. Syed Nurul Alam, Dr. Mustafa Kamal, MSc Md Harun-Or-Rashid, Badal Chandra Biswas, Md Ashraf Uddin, Dr. Md Abdul Muyeed, Dr. Rahman Shah, Dr. Timothy Krupnik, Professor Andrew Nelson
Affiliations: Department of Natural Resources, Faculty of Geo-inormation Science and Earth Observation,University Of Twente, Institute for Water Studies, University of Western Cape, International Maize and Wheat Improvement Centre, Department of Agricultural Extension, Wheat and Maize Research Institute
The fall armyworm (FAW), Spodoptera frugiperda, is a highly destructive pest of maize, exploiting the crop’s nutritional and structural characteristics and causing significant yield loss and economic hardship for maize farmers. This study investigates the potential of remote sensing to detect and classify FAW infestations in Bangladesh maize fields using freely available Sentinel-1 (radar backscatter) and Sentinel-2 (optical reflectance) satellite datasets. Field observations were conducted during the 2019–2020 maize growing season in maize fields across six administrative divisions, covering both infested and non-infested sites across various maize growth stages. The corresponding Sentinel images were downloaded and processed for analysis. Radar backscatter values, spectral reflectance profiles, and vegetation indices were extracted from the Sentinel data, and statistical tests were performed to identify significant differences between infested and non-infested maize fields. Machine learning models were used to classify infestation severity based on five different combinations of Sentinel data and vegetation indices as inputs. Additionally, the results were validated through cross-validation techniques to ensure model accuracy. Statistical tests revealed significant differences between infested and non-infested maize across growth stages. Infested fields showed reduced near-infrared reflectance (Sentinel-2) and distinct radar backscatter patterns (Sentinel-1 VH polarization), with notable variations at silking and maturity stages. The Random Forest classifier demonstrated superior accuracy and robustness, especially when integrating multi-source data. The red edge (740 nm) and near-infrared (865 nm) bands were particularly effective in distinguishing infestation levels, while the combination of spectral and radar data significantly improved classification accuracy by leveraging their complementary strengths. 
These results underscore the potential of integrating multi-source remote sensing data for scalable and accurate pest monitoring. Freely available Sentinel satellite imagery is a valuable source of information for early pest detection and management, aiding policymakers in identifying high-risk areas, implementing timely interventions, and promoting sustainable pest management strategies to protect maize production and reduce economic losses.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Estimating Cover Crop Biomass From Optical Satellite Images for Supporting More Sustainable Agricultural Systems

Authors: Amanda Veloso, Clémence Biller
Affiliations: Airbus Defence And Space
Cover crops are gaining recognition for their significant agronomic and environmental benefits, such as enhancing soil health, preventing erosion, improving nutrient management, and mitigating climate change through carbon sequestration. Their integration into farming systems is seen as an essential component of sustainable agricultural practices, aligned with policies like the European Union's Common Agricultural Policy (CAP) and similar global initiatives. Despite their potential, estimating cover crops biomass accurately and efficiently over large areas remains a significant challenge and there is a lack of tools to tackle this need. This study investigates the use of optical multi-satellite imagery for estimating cover crop biomass based on an empirical methodology that combines satellite-derived biophysical parameters with in-situ data. This work aligns with an existing operational framework within a precision agriculture service, which aims at providing farmers objective and precise biomass information, enabling improved nitrogen management and the adoption of low-carbon practices. In-situ aboveground fresh biomass data were collected in France during the 2023 growing season as part of a field campaign to calibrate and validate the proposed satellite-based model. Plots were situated across main agronomic regions in France, representing diverse cover crop species and management practices under contrasting climatic conditions. Main represented species were phacelia, fava beans, radish, vetch, mustard and oat. An additional dataset from research projects and public institutions collected over past years was gathered and used to enrich the dataset and increase robustness of the proposed approach. 
Key biophysical parameters such as green leaf area index (GLAI), fraction of vegetation cover (FCOVER), fraction of absorbed photosynthetically active radiation (fAPAR) and chlorophyll content (CHL) were obtained from multiple optical satellite images (Sentinel-2, Landsat-8, Pleiades and Pleiades-Neo) by reflectance model inversion. This process takes into account specific plant, soil and satellite sensors characteristics and includes an atmospheric correction module, allowing to generate biophysical estimations that are spatially and temporally consistent over time while minimizing potential inter-sensor bias. Next, the empirical approach involved evaluating and establishing relationships between the satellite-based biophysical parameters and the measured aboveground biomass. Further objective was to use the field-specific biomass estimations as input for the MERCI decision-support tool to quantify i) nitrogen availability from cover crop residues to the next crop and ii) carbon sequestration in the soil induced by the cover crops cultivation. Initial findings indicated that the product GLAIxCHL demonstrates better correlations with in-situ biomass across different cover crop species. Still, as expected, the GLAI parameter also presented a robust correlation with biomass. Linear, polynomial and exponential regression models were tested, optimizing for the best-fit relationships between satellite-based biophysical parameters and ground biomass. The best-performing model achieved a R-squared (r²) value of 0.65 and a root mean square error (RMSE) of 0.60 kg/m², indicating a reliable estimation capability under contrasting field conditions. Future work will focus on refining the empirical model to further enhance biomass estimation accuracy by integrating a wider set of in-situ data (new field campaign currently in progress). 
It should improve the methodology’s scalability across diverse agricultural contexts, in terms of climate, soils, crop management practices and cover crop species and development, especially for very high values. Furthermore, exploring the potential of time series data to better capture the cover crops growth dynamics and to identify the peak of biomass production represents a promising avenue for further investigation. The proposed approach underscores the potential of satellite-based biophysical parameters for estimating cover crops biomass through an empirical approach. The integration of this model into an operational precision agriculture service will enable automated generation of biomass estimates at the field scale, allowing for a better management of resources. This includes a more precise quantification of nutrients available for the subsequent main crops, reducing reliance on artificial fertilizers, and it also provides farmers with tools to quantify soil organic carbon stock changes, directly contributing to broader efforts towards more sustainable and productive agricultural systems, while creating opportunities for new revenue streams.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Optimizing Agro-Climatic Zones for the Global Crop Type Mapping and Food Security within WorldCereal Project

Authors: Andreu Guillem-Valls, Belen Franch, Italo Moletto-Lobos, Katarzyna Cyran, Dr. Kristof Van Tricht, Jeroen Degerickx, Christina Butsko, Shabarinath Nair, Inbal Becker-Reshef, Benjamin Koetz, Zoltan Szantoi
Affiliations: Global Change Unit, Image Processing Laboratory, University Of València, Department of Geographical Sciences, University of Maryland, VITO, University of Strasbourg, European Space Agency, Department of Geography & Environmental Studies, Stellenbosch University
The WorldCereal Project (WC) is a pioneering global initiative dedicated to facilitate a system that enable researchers and agricultural managers to obtain crop type mapping and forecast yields. As the project enters it second phase, a critical component involves refining agro-climatic zones, which serve as the bedrock for agricultural statistics and the development of precise crop type maps for the whole WC System. Our innovative approach leverages the power of machine learning to generate advanced crop calendars. By integrating climatic and spatial data into a XGBoost model, we can accurately capture the intricate details of crop growth cycles and seasonal variations. Further enhancing this precision, we incorporate a novel land surface phenology (LSP) method, which provides a deeper understanding of the physiological processes underlying plant growth. To ensure a granular and accurate spatial delineation of these agro-climatic zones, we rely on the Global Administrative Units Level 3 (GAUL 3) framework This framework offers a detailed geographic breakdown, enabling us to identify regions with distinct climatic and agricultural characteristics. To optimize the zoning process, we employ a comparative analysis of various clustering techniques, including K-means, PAM K-medoids, Agglomerative Hierarchical and Gaussian Mixture Models. By evaluating these techniques, we aim to identify the most suitable approach for grouping regions based on their unique crop calendar profiles. For example, regions with similar planting and harvesting dates, temperature regimes, and precipitation patterns would be clustered together. The refined agro-climatic zones resulting from this research offer a wealth of insights into the suitability of different regions for cereal cultivation. These insights empower agricultural stakeholders to make informed decisions regarding crop selection, planting schedules, and resource allocation. 
By optimizing agricultural practices and enhancing monitoring and management efforts, we can significantly improve global food security and address the challenges posed by climate change and increasing population demands.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Earth Observation for Agricultural Drought Monitoring in Ethiopia: Spatiotemporal Analysis of Rainfed Cropland Drought

Authors: Aster Tesfaye Hordofa, Juliane Huth, Dr. Christina Eisfelder
Affiliations: German Remote Sensing Data Center (dfd), German Aerospace Center (dlr), 82234 Wessling, Germany, Faculty of Water Resources and Irrigation Engineering, Arba Minch Water Technology Institute (AWTI), Arba Minch University, 21 Arba Minch, Ethiopia
Agricultural drought poses a critical threat to Ethiopia’s rainfed farming systems, affecting food security and rural livelihoods. Reliant on rainfed agriculture, Ethiopia faces rising challenges due to climate variability, with droughts impacting crop productivity and water resources. Existing monitoring systems often lack the spatial and temporal resolution needed to capture drought impacts across Ethiopia’s varied landscapes. Field-based methods, while informative, have limited scope and do not provide the rapid, comprehensive data essential for effective drought response, highlighting the need for enhanced tools that deliver timely, actionable insights. This study uses Earth Observation (EO) data to address these challenges by examining drought dynamics from 2009 to 2022, focusing on rainfed cropland in the Amhara, Oromia, Tigray, and Southern Nations, Nationalities, and Peoples' Region (SNNP). Leveraging the Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) version 6.1, which provides 16-day composite data at a spatial resolution of 250 meters, and Water Productivity through Open access of Remotely sensed (WaPOR) land cover datasets. We analyze EVI deviations from long term median across Amhara, Oromia, Tigray, and SNNP regions. Monthly EVI deviations serve as indicators of vegetation stress, with negative anomalies pointing out periods of drought. Our results reveal significant drought events during critical growing periods in 2011–2012, 2015, and, 2022, with vegetation stress varying widely by region. Amhara and some parts of Oromia show fluctuating drought patterns with some years of partial recovery, while Tigray and SNNPR experience more sustained stress, reflecting higher vulnerability. These findings provide essential insights for developing climate adaptation strategies tailored to regional needs. 
By offering a scalable, EO-based approach to drought monitoring, this study supports the development of targeted agricultural policies and early-warning systems. This work contributes to the broader goal of achieving sustainable food security and climate resilience in Ethiopia and other drought-prone regions across the Horn of Africa. Keywords: Agricultural drought; Ethiopia; Rainfed agriculture; Earth Observation; MODIS EVI; Food security.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Evaluating Algorithms For Minority Class Augmentation In Crop-Type Classification: a Case Study In Senegal

Authors: Guillaume Jadot, Boris Norgaard, Pr. Pierre Defourny, Sophie Bontemps
Affiliations: UCLouvain
In Senegal, agriculture supports much of the rural population but faces challenges like sporadic rainfall and sub-optimal soil conditions, resulting in a reliance on food imports. Reliable agricultural statistics, supported by remote sensing-based crop-type maps, are essential for agricultural policy monitoring. However, generating these maps is challenging due to limited data availability, small field sizes, and variability in crop management practices. Furthermore, the classifiers used to create crop-type maps often require balanced datasets, which are difficult to obtain due to the inherent variability in agricultural systems. This variability is naturally mirrored in statistical survey data, as it tends to reflect the true distribution of crop types. Machine learning classifiers are particularly sensitive to imbalanced datasets, as underrepresented classes can lead to biased predictions. To address this, resampling techniques such as oversampling and under-sampling are often applied to adjust the class distribution and improve classification performance. This study investigates the use of Synthetic Minority Oversampling Techniques (SMOTE) to address class imbalance in two crop classification approaches utilizing Sentinel-2 (S2) optical imagery. The first approach involves a comprehensive gap-filled time-series dataset constructed from S2 images covering the Kolda region. After cloud masking, data gaps were filled using linear interpolation between the nearest cloud free dates. Subsequently, 37 synthetic observation dates were created using temporal resampling at 10-day intervals between January 4 and December 30, 2023. Spectral indices time-series were then computed to enhance crop type discrimination. The second approach uses only cloud-free images captured during the growing season (June to October). This method avoids interpolation, thereby reducing potential noise introduced by gap-filling. 
This second approach intends to verify if clear pixels at key growth stages can effectively be used to discriminate between different crop types. Along with S2 acquisitions, in situ data were collected during a field campaign conducted by the Direction de l’Analyse, de la Prévision et des Statistiques Agricoles (DAPSA) as part of their statistical agricultural survey. DAPSA surveyed 445 agricultural plots and recorded crop types. Pixels located within these parcels could then be considered as input samples with known labels to a classification model. This dataset was then augmented using SMOTE and its variants, including Adaptive Synthetic Sampling Approach (ADASYN) and SMOTE + Edited Nearest Neighbours (SMOTEENN), to improve the representation of minority classes. Classification was performed using an Extreme Gradient Boosting (XGBoost) model, and the effectiveness of each SMOTE variant was quantitatively evaluated in terms of impact on class specific f1-scores and model overall accuracy. Results indicated that the application of SMOTE algorithms, particularly SMOTEENN, improved classification accuracy for minority crop classes compared to models trained on non-augmented datasets. SMOTE also enhanced spectral separability, as demonstrated by Principal Component Analysis (PCA) visualizations. In contrast, non-crop classes exhibited higher spectral variability, reducing the need for SMOTE augmentation. Additionally, integrating random undersampling for the majority class effectively reduced classification bias, emphasizing the importance of balanced datasets for robust modeling. These results highlight the potential for combining undersampling and oversampling techniques to address dataset imbalances. Challenges remained due to intra-class variability and the presence of weeds which introduced noise into spectral signatures and occasionally led to misclassification. 
Finally, the analysis revealed limited spectral separability among certain crop classes such as millet, corn and sorghum, underscoring the challenges in the literature regarding the differentiation of these specific crop types using optical imagery in West African agricultural contexts. These findings illustrate both the opportunities and limitations of SMOTE-enhanced datasets for improving crop classification in smallholder farming systems.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Enhancing Seasonal Yield Predictions Through Hybrid Modelling and Data Assimilation: Integrating Sentinel-3 Observations With the LPJmL Agro-Ecosystem Model

Authors: Christoph Jörges, Prof. Dr. Tobias Hank, Prof. Dr. Marianela Fader, Prof. Dr. Matthias Forkel
Affiliations: Ludwig-Maximilians-Universität München (LMU), Technische Universität Dresden (TUD)
Increasing pressure on agricultural systems due to climate change, population growth, change of dietary preferences, and resource scarcity intensifies the need for advanced methods for accurate and timely seasonal yield predictions before harvest. In the context of food security and price stability, seasonal predictions – also under climate change impacts – are of paramount importance for agricultural planning, logistics, and food markets. This study presents a novel approach to enhance early-season yield forecasts at the regional scale by combining the strengths of observed spatial dynamics, provided by wide-swath Earth observation (EO) data from the Copernicus Sentinel-3 satellites, with capabilities of tracing temporal dynamics at the land surface, provided by the Lund-Potsdam-Jena managed Land (LPJmL) dynamic global vegetation model (DGVM), as well as the advantages of computational efficiency and data-driven approaches, provided by state-of-the-art machine learning (ML) algorithms for hybrid modelling and assimilation techniques. With a spatial sampling of 300m, the Sentinel-3 Ocean and Land Colour Instrument (OLCI) provides crucial information on crop phenological status, crop growth, and development on the regional scale via land surface variables such as the fraction of absorbed photosynthetically active radiation (FAPAR) and the OLCI terrestrial chlorophyll index (OTCI). Daily composites are derived from time-series of Sentinel-3 data at 300m spatial resolution and aggregated to the 1km LPJmL modelling resolution. To date, no interfaces for the continuous assimilation of EO data or for the integration of ML modules exist for the LPJmL model, but there are approaches for calibrating model parameters with EO data. 
Thus, different approaches such as parameter forcing and ensemble methods allowing for continuous parameter optimization are conceptually presented and compared, exploring the possibilities of unlocking the benefits of EO data assimilation for LPJmL as well. This approach is scalable and therefore provides the basis for operating on a national or European scale. The study area focuses on the region of Bavaria, southern Germany. The year 2018, a comparatively dry year, was chosen as a first demonstrator because of the rich meso-scale data available for that year (e.g., land use). The relevant staple crops cultivated in Bavaria are taken into account. As LPJmL is intrinsically designed for global applications, the model needs to be adapted for regional-scale use. Prior to the data assimilation, model parameters are calibrated and subsequently validated against historical yield statistics for different crop functional types in LPJmL to adequately reflect the agricultural landscape in the study area. The developed assimilation methods will allow continuous updating of the model using satellite observations over the course of the growing season, increasing the accuracy of early-season yield predictions from the latest observed state until harvest. Meteorological forcing by ERA5-Land and ECMWF Seasonal Forecasts will be evaluated. Since extreme weather events can significantly impact crop yields depending on the crop’s phenological status, integrating the effects of growth-stage-specific extreme events during the season via a hybrid model approach with integrated machine learning processing will be performed as a posterior step of the seasonal yield predictions and used for bias correction. The integration of diverse data sources allows for a more comprehensive representation of crop growth and development throughout the growing season, in the future enabling a food security early warning system at the regional scale.
The combination of process-based and data-based methods is expected to make a substantial contribution to improving accuracy and the time between prediction and harvest. The conceptual framework and first results of ongoing work will be presented and put up for discussion.
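The ensemble-based parameter optimization mentioned in the abstract can be illustrated with a minimal sketch: an ensemble Kalman update of a single model parameter against one FAPAR observation. The toy FAPAR model, the parameter, and all numeric values below are illustrative stand-ins, not the study's LPJmL setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_fapar_model(param, day):
    """Hypothetical stand-in for an LPJmL-like process model:
    maps a phenology parameter to simulated FAPAR on a given day."""
    return np.clip(0.9 * (1 - np.exp(-param * day / 50.0)), 0.0, 1.0)

# Ensemble of parameter guesses (prior)
ensemble = rng.normal(loc=1.0, scale=0.3, size=100)

# One Sentinel-3 OLCI FAPAR "observation" (synthetic) and its error variance
obs, obs_var = 0.62, 0.02 ** 2
day = 120

# Ensemble Kalman update of the parameter against the observation
sim = toy_fapar_model(ensemble, day)              # simulated FAPAR per member
cov_ps = np.cov(ensemble, sim)[0, 1]              # parameter/simulation covariance
var_s = sim.var(ddof=1)                           # simulation variance
gain = cov_ps / (var_s + obs_var)                 # scalar Kalman gain
perturbed_obs = obs + rng.normal(0, np.sqrt(obs_var), ensemble.size)
ensemble = ensemble + gain * (perturbed_obs - sim)  # posterior ensemble

print(f"posterior parameter mean: {ensemble.mean():.3f}")
```

Repeating this update at each new overpass gives the continuous in-season optimization the abstract describes; the same structure extends to vector-valued states and parameters.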
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: A Comparative Evaluation of Manual Assessment With UAV-Based Pixel-Wise Semantic Segmentation and Instance Segmentation for Weed Identification in Field Assessments

Authors: Maren Pukrop, David Hagemann, Dr. Thomas Jarmer, Prof. Dr. Dieter Trautz
Affiliations: University of Osnabrück, Osnabrück University of Applied Sciences
In agricultural field trials, manual assessments in plots of a certain extent are used to determine the field emergence rate of different crops, the weed coverage, or the number of individual weed plants. However, these tasks are time-consuming and require expert agricultural knowledge. For site-specific weed management trials, precise timing of the manual assessment is necessary to correctly determine the efficacy of the weed regulation measure. To reduce the time required for a manual assessment, a UAV-based assessment using high-resolution UAV imagery could be integrated into the evaluation process. To compare manual assessments with UAV-based assessments, we used data from field trials in maize crops in 2023 and 2024. Within each field trial, 90 to 120 plots of 0.25 m² were placed. In each plot, both the number of weed plants and the weed coverage were estimated manually. For the UAV-based estimation, multispectral images with a ground resolution of 3.5 mm were acquired with a MicaSense Altum mounted on a DJI M210 at 10 m altitude. Two different approaches were used for the UAV-based assessment: first, a machine-learning-based pixel-wise semantic segmentation using support vector machines, and second, a deep-learning-based instance segmentation approach using YOLOv8-Seg. While training data generation for the pixel-wise semantic segmentation approach is relatively simple, the instance segmentation approach requires larger amounts of more sophisticated training data in the form of digitized plant polygons. However, the instance segmentation approach can separate individual plants even in occluded weed areas, which is necessary to correctly identify the number of weeds. On the other hand, instance segmentation approaches may have problems identifying small weed plants that are only a few pixels in size. A pixel-wise semantic segmentation approach should be able to identify these plants.
By comparing the manual approach with both the pixel-wise semantic segmentation and the instance segmentation approach, we aim to answer the following questions: 1. Is a UAV-based field assessment capable of correctly identifying the number of weeds and weed coverage in field trials? 2. Is there a significant difference in performance between pixel-wise semantic segmentation and instance segmentation for weed detection, especially in areas where both small weed plants and occluded weed patches occur?
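The pixel-wise route can be sketched as follows: each pixel's multispectral vector is classified independently by an SVM, and weed coverage falls out as the fraction of weed-labelled pixels. The data, band count, and class scheme below are synthetic placeholders, not the study's setup.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic 5-band multispectral tile (bands, rows, cols), Altum-like
bands, h, w = 5, 60, 60
image = rng.normal(size=(bands, h, w))

# Labelled training pixels (0 = soil, 1 = crop, 2 = weed), e.g. expert-digitised
n_train = 300
X_train = rng.normal(size=(n_train, bands))
y_train = rng.integers(0, 3, size=n_train)

# Pixel-wise classifier: each pixel's spectrum is one feature vector
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# Classify every pixel: reshape (bands, h, w) -> (h*w, bands)
pixels = image.reshape(bands, -1).T
label_map = clf.predict(pixels).reshape(h, w)

# Weed coverage = fraction of pixels assigned to the weed class
weed_cover = (label_map == 2).mean()
print(f"estimated weed coverage: {weed_cover:.1%}")
```

Note that this per-pixel view cannot count individual plants in occluded patches, which is exactly the gap the instance segmentation approach is meant to close.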
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Determination of Shifting Cropping Patterns and Their Assessment Regarding Food Supply in Climate Change Scenarios

Authors: Denise Dejon, Gunther Schorcht
Affiliations: green spin GmbH
With the rising awareness and visibility of current and future impacts caused by climate change and linked extreme weather events, the need for short- and long-term information is becoming a progressively pressing matter in many different anthropogenic, social and economic sectors. To meet these needs, the existing wealth of free, open and globally available analysis-ready weather and climate data serves as a valuable source. However, a lack of understanding of how to access, handle and combine data sets from different sources often prevents end-users in these sectors from making use of the data. Furthermore, the domain-specific knowledge needed to extract additional information from climate and weather data is scarce. Therefore, green spin developed Croptify, a web application incorporating various data sources to monitor and describe agricultural areas. Because of the global interconnection of supply-and-demand chains, the food commodity sector is one of the economic sectors most vulnerable to the effects of extreme weather events and climate change. In particular, the dependence of many low-income countries (e.g., Sudan, Somalia, Egypt) on the production of winter wheat in the so-called “granaries of the world” (Russia, U.S., Canada, Ukraine) emphasizes the need to explore future effects of climate change in order to anticipate changes in existing trade flows. Another important wheat variety is summer wheat, which can be grown in higher latitudes that currently do not reach the temperature sums required for winter wheat. However, compared to winter wheat, it falls short in terms of harvest volume, granting the former the role of one of the world’s most important staple foods, next to corn and rice. It is commonly known and anticipated that climate change will shift the areas suitable for growing winter wheat. One of the most anticipated developments is the northward shift of agricultural climate zones (King et al.
2018), changing established growing patterns and trade flows in the U.S., Canada and Russia. This northward shift not only makes northern latitudes more suitable for farming; at the same time, there is also a loss of suitable areas in more southern latitudes. This will impact food security worldwide, since not only will export quantities from these countries change, but local agricultural commodity prices will also be affected, as they are strongly linked to global prices. As a concrete example of a local change with global implications, we examine the change in areas able to grow winter wheat in Russia using different scenarios of IPCC data and a selected set of parameters (out of 20) describing key vulnerabilities (also referred to as crop stressors) of winter wheat. The regions of Rostov (Black Sea, inside the main growing area of winter wheat) and Novosibirsk (eastern Russia, inside the main growing area of summer wheat) serve as example regions to describe the aforementioned effects of the northward shift. For this purpose, CMIP6 data were downloaded from the Copernicus Climate Data Store. Daily temperature and precipitation data for the years 2015–2100 from three different experiments (SSP1-2.6, SSP2-4.5, and SSP5-8.5) of the model AWI-CM-1-1-MR (Germany) are used. Vulnerability parameters include heat stress during flowering and maturing and precipitation during the main growth period for Rostov, and temperatures during the sowing and winter periods in Siberia. The vulnerability parameters were selected based on their importance for causing the northward shift in the example regions. The goal of this assessment is to create a framework in which users are able to anticipate the future development of growing regions and, in combination with currently available vegetation, weather and climate data, to get the “big picture” of yield variability in the context of climate change.
Users include not only stakeholders in the commodity market but also policy makers at different levels. This work is also part of research on the evaluation of changes in future cultivation systems and their effects on the global supply situation and the future role of regions and countries in the global agricultural market. We are also aiming for easy transferability to other crops and regions. Apart from a representation in tabular form, results are also visualized via interactive maps in our web application Croptify. In addition to the benefits mentioned above, this assessment directly supports the accomplishment of the United Nations’ Sustainable Development Goal (SDG) 2 by ensuring the proper functioning of food commodity markets. King, M., Altdorff, D., Li, P. et al. Northward shift of the agricultural climate zone under 21st-century global climate change. Sci Rep 8, 7904 (2018). https://doi.org/10.1038/s41598-018-26321-8
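The temperature-sum logic underlying such suitability assessments can be sketched as follows. The base temperature, cap, and required seasonal sum are illustrative placeholders, not calibrated winter wheat values.

```python
import numpy as np

def growing_degree_days(tmin, tmax, t_base=0.0, t_cap=30.0):
    """Daily growing degree days from min/max temperature (degrees C).

    Uses the common averaging method: mean of the (capped) daily
    extremes minus a crop-specific base temperature, floored at zero.
    t_base / t_cap here are illustrative, not calibrated for wheat.
    """
    tmean = (np.minimum(tmax, t_cap) + np.minimum(tmin, t_cap)) / 2.0
    return np.maximum(tmean - t_base, 0.0)

# Synthetic daily series for one growing season (e.g. from CMIP6 output)
rng = np.random.default_rng(1)
days = 200
tmin = 5 + 10 * np.sin(np.linspace(0, np.pi, days)) + rng.normal(0, 2, days)
tmax = tmin + 8

gdd = growing_degree_days(tmin, tmax)
temperature_sum = gdd.cumsum()

# A region counts as "suitable" in this toy check if the seasonal
# temperature sum reaches an assumed requirement for crop maturity.
required_sum = 1800.0  # illustrative threshold, not an agronomic constant
print(f"seasonal temperature sum: {temperature_sum[-1]:.0f} GDD, "
      f"suitable: {temperature_sum[-1] >= required_sum}")
```

Run per grid cell and per SSP scenario over 2015–2100, this kind of check is what traces the northward migration of the suitability boundary.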
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: EO Africa Water Resource Management: support to farmers and planners for irrigation water management

Authors: PhD Pietro Sciusco, Dr. Anna Maria Deflorio, PhD Vito De Pasquale, PhD Ahmed Ali Ayoub Abdelmoneim, PhD Bilal Derardja, Prof. Roula Khadra, Prof. Mohammed El-Shirbeny, Dr. Dimitra Tsakou, Dr. Maria Ieronymaki, Dr. Theophilos Valsamidis, Dr. Espen Volden
Affiliations: Planetek Italia s.r.l., CIHEAM Bari (International Center for Advanced Mediterranean Agronomic Studies - Mediterranean Agronomic Institute of Bari), NARSS - National Authority for Remote Sensing and Space Sciences, Planetek Hellas, ESA ESRIN
Improved agricultural water management can significantly reduce water demand by enhancing soil water management. A cost-effective, remote-sensing-based method for estimating crop evapotranspiration (ETa) can help optimize irrigation scheduling and enhance water use efficiency. The EO AFRICA WRM (Water Resource Management) project, funded by ESA, started in November 2022 within the “EO Africa Explorers” framework. The team is composed of Planetek Italia, the International Center for Advanced Mediterranean Agronomic Studies - Mediterranean Agronomic Institute of Bari (CIHEAMB), and Planetek Hellas, with the participation of the National Authority for Remote Sensing & Space Sciences (NARSS), Egypt, as the local early adopter of the proposed solution. The project has demonstrated an innovative open-source model, assessing ETa through a combination of reference evapotranspiration (ET0), a crop coefficient (Kc) and a water stress coefficient (Ks) over a pilot area in northern Egypt. The output has been integrated into a web platform as a supporting tool for irrigation water management. As the project's second aim, the capabilities of hyperspectral sensors in measuring ETa have been evaluated. Data acquired by the PRISMA and EnMAP sensors in the VNIR part of the spectrum, as well as Landsat and ECOSTRESS for the thermal part, have been exploited for this purpose. Additionally, Sentinel-2 data have been used as a performance benchmark for the hyperspectral data. Machine learning techniques have also been applied to better exploit the potential of the hyperspectral content and to evaluate possible improvements in the performance of the ETa prediction models. The results have been validated using ground data collected twice a month during the growing seasons of two different crops (wheat and peanuts). The comparison with in-situ data demonstrated good alignment with the results obtained through the application of PRISMA, leading to an enhancement in the quality of the output information.
Moreover, the characteristics of the hyperspectral sensor appear to enhance algorithm performance compared to multispectral sensors such as Sentinel-2 or Landsat. The model's development and validation have followed an iterative process involving development, testing, validation, and refinement cycles. The results obtained, especially for the first crop season, demonstrate the feasibility of the methodology and indicate good agreement between EO-derived ETa and the in-situ data, with promising outcomes from the machine learning application.
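A minimal sketch of the ETa formulation described above, following the FAO-56 convention ETa = Ks · Kc · ET0. All numeric values are illustrative, not measurements from the pilot area.

```python
import numpy as np

def actual_evapotranspiration(et0, kc, ks):
    """ETa from reference evapotranspiration (FAO-56 style):
    ETa = Ks * Kc * ET0, with Kc the crop coefficient and
    Ks in [0, 1] the water-stress reduction factor."""
    return np.clip(ks, 0.0, 1.0) * kc * et0

# Illustrative daily values over one week (mm/day); numbers are made up
et0 = np.array([5.1, 5.4, 6.0, 5.8, 5.2, 4.9, 5.5])   # reference ET
kc  = np.full(7, 1.15)                                 # mid-season wheat-like Kc
ks  = np.array([1.0, 1.0, 0.9, 0.8, 0.8, 0.9, 1.0])    # stress from soil moisture

eta = actual_evapotranspiration(et0, kc, ks)
print(f"weekly crop water use: {eta.sum():.1f} mm")
```

In the project's setting, Kc and Ks would be driven by satellite-derived vegetation and moisture indicators rather than fixed arrays; the arithmetic stays the same.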
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Comparison of UAV and satellite imagery to identify Winter Wheat and Oilseed Rape crop damages for site-specific spraying

Authors: Tomáš Kaplánek, Vojtěch Slezák, Vojtěch Lukas, Tomáš Večerka, Jakub Elbl, Petr Škarpa, Jan Křen
Affiliations: Department of Agrosystems and Bioclimatology, Faculty of AgriSciences, Mendel University in Brno, Zemědělská 1, 613 00, ZOD Rataje, Department of Agrochemistry, Soil Science, Microbiology and Plant Nutrition, Faculty of AgriSciences, Mendel University in Brno, Zemědělská 1, 613 00
The occurrence of damaged areas in field crops caused by biotic and abiotic factors not only reduces yields but also imposes a significant environmental burden due to the unnecessary application of plant protection products in these areas. Modern spray technologies equipped with pulse-width modulation enable individual nozzle control and targeted application based on prescription maps. Therefore, the method of mapping these damaged areas and creating application maps plays a crucial role. A significant source of biotic damage is rodent activity, with vole infestations causing substantial damage to winter and perennial crops, such as winter wheat, oilseed rape, and alfalfa. Vole outbreaks are increasingly frequent due to climate change and the adoption of conservation tillage practices. The typical manifestation includes 1–5 m focal points of severely damaged crops. Damage intensity is effectively detectable using UAV imaging in the RGB or VIS-NIR spectrum with ultra-high spatial resolution. However, UAVs are not always an ideal solution for continuous vegetation monitoring due to their operational time requirements and the need for skilled operators. This study aimed to evaluate the use of Earth Observation data from the Sentinel-2 and PlanetScope satellite systems to simplify continuous monitoring of crop damage and to create foundations for spot spraying of pesticides and liquid fertilizers. Additionally, the temporal dynamics of damaged crop recovery were monitored. Satellite systems represent a more efficient alternative for long-term monitoring, as they allow regular and large-scale imaging without the need for field presence. Modern satellite technologies also provide highly accurate data that can be easily integrated into agricultural analyses. 
The research was conducted in winter oilseed rape and winter wheat fields during the 2023–2024 growing season at two locations: the Kroměříž region (Czech Republic; 49.2716661N, 17.3409658E) and the Vyškov region (Czech Republic; 49.2186175N, 17.0068056E). UAV mapping of experimental plots was performed at a flight altitude of 120 m using a DJI Mavic 3 Multispectral drone. UAV images were processed using Pix4D Fields software to generate RGB and multispectral orthomosaics. Multispectral satellite data were acquired from the PlanetScope (Planet Labs) system, with a spatial resolution of 3.7 m per pixel. Ground surveys were conducted using RTK-GNSS. GIS-based image data analysis involved spatial analyses of the NDVI vegetation index to evaluate damage intensity, including supervised classification methods. The study focused on identifying and monitoring areas with varying levels of damage during the growing season. Based on high-resolution UAV imagery from the spring vegetation phase, three levels of damage were identified: (1) severely damaged areas – regions with significantly low NDVI values indicating vegetation cover loss; (2) moderately damaged areas – regions with slightly reduced NDVI values, indicating partial vegetation damage; and (3) undamaged areas – regions with high NDVI values corresponding to healthy vegetation without visible damage. For each category, a circular area with a diameter of 10 m was defined, and the average NDVI was calculated. The temporal development of these categories was monitored throughout the growing season using average NDVI values. These data enabled the analysis of NDVI changes over time and the identification of critical phases when differences between damaged and undamaged areas were most pronounced. Additionally, the average NDVI for the entire field was calculated to provide a broader context of vegetation dynamics and rodent activity. Remote sensing analyses confirmed the capability to assess crop damage.
Satellite images reliably identified damage in larger areas, while UAV imaging achieved high accuracy even in smaller plots. Within the study, damage ranged from 2% to 40% of the field area. For precise damage assessment, tramlines were excluded from the calculations, constituting approximately 2% of the field area on average. Time-series satellite data enabled the identification of periods when significant vegetation cover reduction occurred in severely and moderately damaged areas compared to undamaged areas. Satellite data analysis revealed differences in crop responses to damage. For cereals, rapid recovery was observed at approximately the stem elongation stage (BBCH 32–35), while oilseed rape showed only partial reduction of damaged areas due to intensified branching at the edges. The results demonstrate that UAV imaging can be used for precise identification of damaged areas, enabling targeted application of plant protection products. For targeted applications, oilseed rape has the highest potential, as damage tends to stabilize after a certain period. UAV orthomosaics allow the creation of prescription maps for repeated use during the growing season. Satellite images, on the other hand, are suitable for damage estimation and tracking its development over time. They can also be used for variable fertilizer applications, where high spatial resolution is not essential. Both approaches enhance the efficiency of agricultural interventions and reduce the environmental burden of agrochemicals. Future research will focus on comparing methods for detecting damaged areas using machine learning tools, assessing the impact of weed occurrence, and evaluating crop recovery after damage. This study was supported by research projects NAZV QL25020034 and MENDELU IGA24-AF-IP-055.
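The NDVI-based three-level damage classification can be sketched as follows. The thresholds and reflectance ranges are illustrative placeholders, not the values used in the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def damage_classes(ndvi_map, severe_thr=0.3, moderate_thr=0.5):
    """Assign each pixel one of three damage levels from NDVI.
    Thresholds are illustrative, not the study's calibrated values:
    1 = severely damaged, 2 = moderately damaged, 3 = undamaged."""
    classes = np.full(ndvi_map.shape, 3, dtype=np.uint8)
    classes[ndvi_map < moderate_thr] = 2
    classes[ndvi_map < severe_thr] = 1
    return classes

# Synthetic reflectance tile with a low-vigour "vole patch" in the centre
rng = np.random.default_rng(7)
red = rng.uniform(0.03, 0.10, size=(50, 50))
nir = rng.uniform(0.50, 0.70, size=(50, 50))
nir[20:30, 20:30] = rng.uniform(0.10, 0.20, size=(10, 10))  # damaged focus: low NIR
red[20:30, 20:30] = rng.uniform(0.15, 0.25, size=(10, 10))  # damaged focus: high red

vi = ndvi(nir, red)
cls = damage_classes(vi)
damaged_share = (cls != 3).mean()
print(f"share of field flagged as damaged: {damaged_share:.1%}")
```

Vectorising the class map into polygons would then yield the prescription map for nozzle-level spot spraying; tramlines would be masked out before computing the damage share, as in the study.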
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Rapeseed mapping using Sentinel-1 time series coupled with growing degree-days information

Authors: Sami Najem
Affiliations: INRAE
Accurate and up-to-date data on rapeseed cultivation areas is of great significance for agricultural planning, as well as for ensuring food and energy stability and promoting environmental sustainability. This study aims to leverage Synthetic Aperture Radar (SAR) data to enhance the reliability and regularity of rapeseed mapping through innovative data alignment techniques and machine learning models. SAR technology presents an important remote sensing resource for crop monitoring and mapping. It offers reliable and consistent observations because it is not affected by cloud cover or atmospheric conditions. Growing Degree Days (GDD) serve as a widely adopted heat index for evaluating crop development, calculated as the daily mean ambient temperature minus a crop-specific base temperature below which development does not proceed. In this study, Sentinel-1 (S1) time series and S1 Growing Degree Days (GDD) series were used for timely and regular rapeseed mapping and compared with the aim of gauging the benefits of using GDD information for rapeseed mapping. The S1 time series was constructed using VV and VH polarization along with the S1 acquisition dates, while the S1 GDD series was developed by combining the same data with cumulative GDD, calculated from an assumed sowing date of the crop. Mitigating discrepancies caused by climatic variations across study sites is essential for generalizing a crop classification model, ensuring that the models are trained and tested under comparable growth conditions. To alleviate the shifts in rapeseed growth cycles, an alignment method based on detected flowering dates was proposed to align key rapeseed phenological stages between different sites and years. Our proposed method is based on detecting the average flowering date for each dataset, as flowering is considered a key stage in the growth cycle.
The flowering dates were detected by finding the characteristic rapeseed backscatter minimum (indicative of the flowering stage) between 15 March and 15 May. The results show that the detected flowering dates range from 2 April to 26 April. Rapeseed classification was performed using two machine-learning models, namely Random Forest (RF) and InceptionTime (IT). The alignment technique aimed to synchronize the growth stages, thereby overcoming temporal shifts in the rapeseed growth cycle often caused by meteorological variations or differences in climatic conditions across sites, and thus improving the models' performance in cross-regional settings. The spatial (cross-regional) transferability of the models was accordingly tested before and after the alignment of the S1 time series as well as the S1 GDD series. The results showed that using the S1 time series before alignment, the overall F1-score achieved by RF was 81.3% ± 16.5%, and the overall F1-score of IT was 89.2% ± 5.7%. After alignment, RF achieved an overall F1-score of 90.3% ± 8.2%, while the overall F1-score of IT was 91.7% ± 3.8%. The alignment method led to significant improvements, particularly for RF, and ensured greater consistency in the performance of IT. Using the S1 GDD series before alignment, the overall F1-score of RF was 58.2% ± 36.8%, while the overall F1-score achieved by IT was 86.6% ± 11.7%. After the alignment of the S1 GDD series, the F1-score achieved by RF was 73.9% ± 29.0%, and the F1-score of IT was 86.8% ± 10.3%. Despite some gains in certain combinations of training/testing datasets, the GDD-based series lagged behind the S1 time series in overall classification performance. Therefore, the best configuration for rapeseed mapping was IT with the S1 time series after alignment, as it gave the highest overall F1-score and the best consistency, with the lowest standard deviation across different sites and years.
Overall, the S1 time series provided better results than the S1 GDD series, meaning that employing thermal time does not significantly enhance the classification performance. The performance of the GDD series was hindered by the lack of a precise sowing date for each field, which is crucial for calculating cumulative GDD accurately. The results indicate that the proposed method enables reliable, timely, and continuous rapeseed monitoring, with a possible reduction in the effort related to systematic field campaigns. This approach highlights the potential of integrating remote sensing data and alignment methods to optimize agricultural monitoring using a spatially and temporally adapted classification model, paving the way for more effective food stock management and planning by policymakers and stakeholders.
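The flowering-date alignment can be sketched as follows, assuming one synthetic VH backscatter series per site. The Gaussian dips, search window, and reference day are illustrative, not the study's parameters.

```python
import numpy as np

def detect_flowering(doy, vh_db, start=74, end=135):
    """Flowering date = day of the VH backscatter minimum inside a
    fixed search window (roughly 15 March - 15 May as day of year)."""
    window = (doy >= start) & (doy <= end)
    return int(doy[window][np.argmin(vh_db[window])])

def align_to_flowering(doy, flowering_doy, ref_doy=105):
    """Shift acquisition dates so the detected flowering date lands on
    a common reference day, synchronising phenology across sites/years."""
    return doy + (ref_doy - flowering_doy)

# Two synthetic VH series (dB) with shifted growth cycles
doy = np.arange(60, 180, 6)
site_a = -12 - 4 * np.exp(-((doy - 100) / 8.0) ** 2)  # dip near DOY 100
site_b = -12 - 4 * np.exp(-((doy - 118) / 8.0) ** 2)  # later cycle, dip near DOY 118

fa, fb = detect_flowering(doy, site_a), detect_flowering(doy, site_b)
doy_a = align_to_flowering(doy, fa)
doy_b = align_to_flowering(doy, fb)
print(f"detected flowering DOYs: {fa} and {fb}; both aligned to DOY 105")
```

After the shift, the classifier sees both sites' backscatter minima at the same position on the time axis, which is what makes the trained model transferable across regions and years.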
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Agricultural Drought Monitoring in Italy: Preliminary Results From PRISMA Imagery on Different Crops

Authors: Filippo Bocchino, Giulia Graldi, Alireza Hamoudzadeh, Lorenza Ranaldi, Dr. Deodato Tapete, Alessandro Ursi, Maria Virelli, Dr. Patrizia Sacco, Antonio Denaro, Laura Rosatelli, Camillo Zaccarini, Valeria Belloni, Roberta Ravanelli, Mattia Crespi
Affiliations: Sapienza University of Rome, DICEA, Geodesy and Geomatics Division, Sapienza School for Advanced Studies, Sapienza University of Rome, Italian Space Agency (ASI), University of Liège, Department of Geography (Faculty of Sciences), Geomatics Unit, Risk Management Department, Institute of Services for Agricultural and Food Market (ISMEA)
In recent years, climate change has intensified drought events, leading to water shortages and severely compromising agriculture [1]. Agricultural drought episodes have repeatedly affected the Italian territory within the last three years, causing losses of yield that are difficult to predict. Agricultural drought is indeed influenced not only by meteorological variables, but also by the phenological phases of the vegetation and the soil moisture conditions. Adequately monitoring such a complex phenomenon is crucial to quantitatively and qualitatively assess the severity and impact of droughts [2]. Technologies such as remotely sensed hyperspectral satellite imagery can help in monitoring the effects of drought occurrences on vegetation conditions, given their particular sensitivity to some parameters indicative of vegetation stress [3]. However, assessing the practical effectiveness of remote sensing resources is still a challenge because of the lack of reliable in situ data to be used as reference. In this study, we used the Hyperspectral Precursor of the Application Mission (PRISMA) satellite of the Italian Space Agency (ASI) to derive spectral signatures over crop fields potentially affected by drought. We compared the obtained results with in situ data provided by the Institute of Services for Agricultural and Food Market (ISMEA). The PRISMA sensor acquires images across a continuous spectrum of 240 bands spanning from 400 to 2500 nm. It offers a spectral resolution of less than 12 nm and a spatial resolution of 30 meters. A 5-m panchromatic image is also provided to relate spectral signatures to the objects depicted on the ground. The in situ dataset includes drought damage percentages for crop fields across Italy, assessed during a 2022 experimental activity [4]. For each surveyed field, the damage percentages (0–100%) were estimated in situ using standardized procedures [5].
Since the damage data are provided at the field scale, and considering that the average extension of the investigated fields is a few thousand square meters, the well-known geolocation error of PRISMA imagery [6] needed to be addressed in order to work at such a fine spatial scale. To fix this error, an original terrain-independent approach [7] was used. After excluding the spectral bands severely impacted by atmospheric water vapor and CO2 absorption [9], the median spectral signatures were extracted from each image considering all the pixels within the investigated field. This allowed the comparison of the spectral signatures of fields characterized by the same crop and different levels of drought damage. One of the analyzed case studies concerns durum wheat fields in the province of Foggia (Southern Italy) during the 2022 agricultural season. Two cloud-free PRISMA images were sourced from the existing ASI archive. Although the image dataset is limited, it is worth acknowledging that the selected images were collected independently from, and prior to, the present experiment. Nonetheless, given their acquisition dates and in the absence of any other hyperspectral satellite images from other missions, they could well be related to the time window of the ISMEA damage records. Even considering a limited sample of data (i.e., two images acquired over two fields), it was possible to identify some qualitative differences in the spectral range of 700–1100 nm between fields characterized by different levels of damage. These spectral bands are associated with leaf/canopy structure and water content, as highlighted by [6], which are indeed parameters likely affected by drought stress. Specifically, the two images were acquired on 29/04/2022 and 14/06/2022 over two fields exhibiting 0% and 36% damage, respectively. The first image revealed more pronounced differences in the previously mentioned spectral range compared to the second image.
This discrepancy is likely due to the second image being captured during the crop’s maturity phenological stage, a period when parameters influencing the spectral signature, such as water content and reserve biomass, are typically reduced. Despite the limitation due to the low number of analyzed fields (which in turn was constrained by the spatial distribution of the survey sample and the related collection dates of the ISMEA damage records), the initial results are promising, demonstrating the potential of PRISMA imagery for highlighting and monitoring drought effects on crops. Future efforts will focus on expanding the dataset, integrating more PRISMA images and ground-truth data where available, to enhance the reliability and applicability of the analysis.
Acknowledgements: This research is performed in the framework of the GRAW project, funded by the Italian Space Agency (ASI), Agreement n. 2023-1-HB.0, as part of the ASI program “Innovation for Downstream Preparation for Science” (I4DP_SCIENCE).
References:
[1] West, H., Quinn, N., & Horswell, M. (2019). Remote sensing for drought monitoring & impact assessment: Progress, past challenges and future opportunities. Remote Sensing of Environment, 232, 111291. https://doi.org/10.1016/j.rse.2019.111291
[2] Alahacoon, N., & Edirisinghe, M. (2022). A comprehensive assessment of remote sensing and traditional based drought monitoring indices at global and regional scale. Geomatics, Natural Hazards and Risk, 13(1), 762–799. https://doi.org/10.1080/19475705.2022.2044394
[3] Pasqualotto, N., Delegido, J., Van Wittenberghe, S., Verrelst, J., Rivera, J. P., & Moreno, J. (2018). Retrieval of canopy water content of different crop types with two new hyperspectral indices: Water Absorption Area Index and Depth Water Index. International Journal of Applied Earth Observation and Geoinformation, 67, 69–78. https://doi.org/10.1016/j.jag.2018.01.002
[4] ISMEA. Sperimentazione e avviamento fondo agri-cat 2022. www.ismea.it/flex/cm/pages/ServeBLOB.php/L/IT/IDPagina/12034 [Accessed 23-05-2024]
[5] ISMEA. Progetto pilota di standardizzazione delle procedure per le valutazioni dei danni alle colture vegetali. www.ismea.it/flex/cm/pages/ServeBLOB.php/L/IT/IDPagina/11688, 2021. [Accessed 23-05-2024]
[6] Belgiu, M., Marshall, M., Boschetti, M., Pepe, M., Stein, A., & Nelson, A. (2023). PRISMA and Sentinel-2 spectral response to the nutrient composition of grains. Remote Sensing of Environment, 292, 113567. https://doi.org/10.1016/j.rse.2023.113567
[7] Nascetti, A., & Crespi, M. (2024). Personal communication.
[8] Lastilla, L., Belloni, V., Ravanelli, R., & Crespi, M. (2021). DSM Generation from Single and Cross-Sensor Multi-View Satellite Images Using the New Agisoft Metashape: The Case Studies of Trento and Matera (Italy). Remote Sensing, 13, 593. https://doi.org/10.3390/rs13040593
[9] Marshall, M., Belgiu, M., Boschetti, M., Pepe, M., Stein, A., & Nelson, A. (2022). Field-level crop yield estimation with PRISMA and Sentinel-2. ISPRS Journal of Photogrammetry and Remote Sensing, 187, 191–210. https://doi.org/10.1016/j.isprsjprs.2022.03.008
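The band-exclusion and median-signature extraction step described in the abstract can be sketched with synthetic data as follows. The cube dimensions, field mask, and absorption windows are approximate, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic PRISMA-like cube: 240 bands, 30 m pixels over a small scene
n_bands, h, w = 240, 40, 40
wavelengths = np.linspace(400, 2500, n_bands)        # nm
cube = rng.uniform(0.1, 0.5, size=(n_bands, h, w))   # reflectance

# Boolean field mask (pixels belonging to one surveyed field)
field_mask = np.zeros((h, w), dtype=bool)
field_mask[10:20, 15:25] = True

# Exclude bands inside strong atmospheric water-vapour absorption windows
# (approximate intervals, for illustration only)
bad = ((wavelengths > 1350) & (wavelengths < 1460)) | \
      ((wavelengths > 1790) & (wavelengths < 1960))
good_wl = wavelengths[~bad]

# Median spectral signature over all field pixels, per retained band
field_pixels = cube[:, field_mask]                   # (bands, n_pixels)
signature = np.median(field_pixels[~bad], axis=1)
print(signature.shape, good_wl.shape)
```

Comparing such per-field median signatures in the 700–1100 nm range is the operation behind the damage-level contrasts the abstract reports; the geolocation correction would be applied before building the field mask.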

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Robust Crop Classification: Bridging Spatial and Temporal Challenges

Authors: Caglar Senaras, Nejc Vesel, Anze Zupanc, Piers Holden, Annett Wannia
Affiliations: Planet Labs
Crop classification stands out as a foundational application of Earth observation data for the agricultural sector, playing a vital role in food security, crop monitoring, and agricultural policy. However, ensuring the robustness of crop classification models in the context of variations in environmental conditions and management practices across years and locations remains a critical challenge. We developed this solution as part of the Horizon Europe project Open Earth Monitor, one of whose main objectives is to build a number of environmental monitors serving concrete organizations and European Union programs. In the EO context, an environmental monitoring system typically implies data which show current, past, and/or future states of the environment and environmental events potentially affecting the quality of life of citizens and/or living beings. Crop classification at the continental scale is an example of such a monitor. We aimed to develop a novel location-aware model to handle the spatial and temporal variability encountered when running crop classification at large scales. Coverage of ground truth data across multiple years, locations, and diverse crop types is a crucial factor for building reliable models. To achieve this coverage, we started with data from the EuroCrops dataset (covering Belgium, Sweden, Estonia and Slovenia for a single year each, either 2020 or 2021). We then used the HCAT crop taxonomy and mappings produced by the EuroCrops project to extend the dataset across five years (2018-2022) and seven additional countries (France, Germany, Netherlands, Portugal, Denmark, Austria and Spain). Only partial coverage of Germany and Spain was possible from public data sources. We call this extension to the original EuroCrops dataset “RapidCrops” and plan to make it publicly available in the future. From these datasets, we selected the eight most common crop types per country, resulting in a final dataset of 28 crop classes.
For training, we used data from 2018 to 2021, employing a random selection strategy to ensure a balanced spatial distribution and minimize class imbalance. This process yielded 638,933 training samples. Sentinel-2 time series data for these fields were extracted using Sentinel Hub. We developed a location-aware, transformer-based deep learning model, integrating Sentinel-2 time series data with geographic location information. Our model leverages SatClip embeddings, a global, general-purpose location encoding framework proposed by Microsoft. To incorporate location awareness, longitude and latitude data are passed through the SatClip model to generate embeddings, which are then concatenated with transformer-based feature embeddings. The combined embeddings are fed into a fully connected network for final classification. To our knowledge, this is the first application of SatClip embeddings in the context of crop classification. For validation, we tested performance on an unseen year: all validation samples were selected from 2022. We focused on the RapidCrops dataset and seven countries, randomly selecting around 127,000 samples using the same sampling strategy. Our experiments aim to answer the following research questions. What is the impact of temporal depth in the training dataset? To address this, we compared model performance when trained on single-year data versus multi-year (4 years) data. Results demonstrated that using multi-year data significantly outperforms single-year data in terms of classification accuracy. What is the impact of field size on crop classification accuracy? Our analysis showed that the accuracy of crop classification is about 0.81 for fields smaller than 1 hectare. However, the accuracy improves to 0.92 for fields larger than 5 hectares. For fields larger than 10 hectares, the accuracy levels off at around 0.92 to 0.94.
These results indicate that Sentinel-2 time series data may face challenges in accurately classifying crops in smaller fields, pointing to potential limitations for small-scale agricultural analysis. What is the impact of location awareness on training countries? We tested the model with and without SatClip embeddings to see the effect of location awareness. The overall accuracy for 28 crop types was 0.88 with SatClip and 0.82 without it. The improvement was even larger for some crops, such as clover, where the accuracy increased from 0.73 to 0.90. This shows that using location-aware embeddings helps the model, especially for crops that are sensitive to where they are grown. However, the impact of location awareness on unseen countries has not yet been explored. This study highlights the importance of temporal depth, field size, and location awareness in improving crop classification using Sentinel-2 time series data. The results demonstrate that multi-year training data significantly enhances model performance when applied in unseen years, particularly for regions with varying environmental and management conditions. While Sentinel-2 data performs well for larger fields, its reduced effectiveness for smaller fields highlights potential challenges, suggesting the need for alternative approaches in fine-scale analysis. Furthermore, incorporating SatClip embeddings as a location-aware component has shown substantial improvements in accuracy, especially for location-sensitive crops. Future work will focus on evaluating the impact of location awareness on countries outside the training dataset, further extending the applicability and robustness of the proposed model.
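The location-aware fusion described above can be sketched as follows. Everything here is an illustrative stand-in, not the authors' implementation: a random-Fourier-feature toy replaces the actual SatClip encoder, and the embedding sizes and linear classification head are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 128-d transformer embedding of the Sentinel-2
# time series, 256-d location embedding, 28 crop classes.
N_CLASSES, D_TS, D_LOC = 28, 128, 256
FREQS = rng.normal(size=(2, D_LOC // 2))  # fixed random Fourier frequencies

def location_embedding(lon, lat):
    """Stand-in for a SatClip-style location encoder: maps (lon, lat)
    to a fixed-length embedding via random Fourier features."""
    x = np.array([lon / 180.0, lat / 90.0]) @ FREQS
    return np.concatenate([np.sin(x), np.cos(x)])

def classify(ts_embedding, lon, lat, W, b):
    """Concatenate the time-series embedding with the location embedding
    and apply a linear classification head with a softmax."""
    z = np.concatenate([ts_embedding, location_embedding(lon, lat)])
    logits = z @ W + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy head weights and one sample near (5.3 E, 52.1 N)
W = rng.normal(scale=0.01, size=(D_TS + D_LOC, N_CLASSES))
b = np.zeros(N_CLASSES)
probs = classify(rng.normal(size=D_TS), lon=5.3, lat=52.1, W=W, b=b)
```

In the described model the head is a fully connected network rather than a single linear layer, but the fusion point (concatenation before classification) is the same.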

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Remote Sensing of Nitrogen Uptake Efficiency: A Multidisciplinary Approach

Authors: Melina Maria Zempila, Lecturer William Salter, Senior Lecturer Oorbessy Gaju, Associate Professor Zhiyong Wang, Professor Simon Pearson, Dr Nicolas Younes Cardenas, Kellie Smith, Professor Owen Atkin, Yoon Cho, Wenhua Wu, Research Fellow Martin Amidy, Director Robert Furbank
Affiliations: RAL Space, STFC, UKRI, University of Sydney, University of Lincoln, Australian National University
Synthetic nitrogen (N) is a cornerstone of global agricultural systems and is crucial for ensuring food security. However, its application must be managed responsibly and efficiently. Nitrogen represents a significant portion of farm input costs and is a major contributor to greenhouse gas (GHG) emissions from crop-based agriculture, accounting for approximately 80% of emissions in wheat production. Additionally, nitrogen runoff adversely affects biodiversity and contaminates watercourses. This study establishes the groundwork for the next generation of remote sensing technologies aimed at assessing nitrogen use efficiency (NUE) and canopy photosynthetic potential in crops. It introduces an innovative hyperspectral (HS) analysis technique that extracts biologically meaningful data from leaves and canopies at various scales. These scales range from leaf clip-on sensors and cameras mounted on field robots and/or drones to satellite-based earth observation (EO) systems. The data obtained through this HS analysis can be directly utilized to enhance crop phenotyping and breeding, as well as to support agronomic practices and optimize nitrogen application rates for farmers. The research evaluates this approach in both Australia and the United Kingdom, exploring the potential benefits of hyperspectral imaging from space as a novel EO product for agricultural applications. Acknowledgements: The authors extend their gratitude to Drs. H. Mortimer and M. Hamilton, J. Powell and C. McGurk for their contributions to RAL Space’s HSI design, software development and drone field work, respectively. We also thank Rothamsted Research and March Castle for granting access to the field trials and supporting us throughout the field work. We also thank the Australian Plant Phenomics Network for providing infrastructure access. This research has received funding through the UK-Australia Earth Observation for AgroClimate programme (EO4AgroClimate).

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Comparison between Sentinel-1 and SAOCOM sensitivity and SSM retrieval over crop types and phenological cycles in Switzerland

Authors: Marion Dugué, Prof. Irena Hajnsek
Affiliations: ETH Zürich, DLR
Accurate soil surface moisture (SSM) retrieval at high spatial and temporal resolutions is essential for sustainable agricultural management and effective drought monitoring. The introduction of L-band satellites, such as SAOCOM in 2018, has improved SSM retrieval in densely vegetated areas due to their deeper penetration capabilities. However, the effects of crop types and phenological cycles on radar backscatter remain insufficiently explored [1]. On the other hand, while significant progress has been made in SSM retrieval using Sentinel-1 C-band data, challenges persist in separating soil and vegetation contributions, especially using automated retrieval methods [2]. This is particularly difficult because, with high-resolution data, the vegetation status has a larger effect on the interpretation of the SAR backscatter [3,4] and thus on the SSM retrieval [5,6,7]. Some methods explore how to smooth out vegetation effects by, for example, up-scaling high-resolution data [8]; however, there is still little research on using artificial neural networks (ANNs) to assess the sensitivity of the vegetation for SSM retrieval when comparing L- and C-band data. This study evaluates the potential of open-access 10 m resolution C-band Sentinel-1 and L-band SAOCOM data for automated SSM retrieval across diverse crop types and phenological stages in Switzerland. This test site is challenging due to its diverse topography and microclimates, which require data of high spatial and temporal resolution. However, Switzerland has established country-wide soil moisture in-situ measurement networks, such as the Bodenmessnetz and SwissSMEX networks, which can be used for comparison with remote sensing results [9,10]. Using polarimetric decomposition techniques, we analyze how L-band and C-band data respond to vegetation dynamics and soil moisture changes.
Additionally, we assess the effectiveness of a multi-layer perceptron algorithm in predicting SSM from both datasets, comparing it to the Swiss soil moisture network in-situ measurements. The comparative analysis highlights the strengths and limitations of each radar frequency for SSM retrieval as a function of crop phenology. Findings from this research contribute to improving automated, high-resolution SSM monitoring and advancing drought management strategies at regional and national scales.

References:
[1] Brunelli, B., & Mancini, F. (2024). Comparative analysis of SAOCOM and Sentinel-1 data for surface soil moisture retrieval using a change detection method in a semiarid region (Douro River’s basin, Spain). International Journal of Applied Earth Observation and Geoinformation, 129, 103874.
[2] El Hajj, M., et al. (2017). Synergic use of Sentinel-1 and Sentinel-2 images for operational soil moisture mapping at high spatial resolution over agricultural areas. Remote Sensing, 9(12), 1292.
[3] Hajnsek, I., Pottier, E., & Cloude, S. R. (2003). Inversion of surface parameters from polarimetric SAR. IEEE Transactions on Geoscience and Remote Sensing, 41(4), 727–744. doi:10.1109/TGRS.2003.810702
[4] Mandal, D., Kumar, V., Ratha, D., Dey, S., Bhattacharya, A., Lopez-Sanchez, J. M., McNairn, H., & Rao, Y. S. (2020). Dual polarimetric radar vegetation index for crop growth monitoring using Sentinel-1 SAR data. Remote Sensing of Environment, 247, 111954. https://doi.org/10.1016/j.rse.2020.111954
[5] Hajnsek, I., Jagdhuber, T., Schon, H., & Papathanassiou, K. P. (2009). Potential of estimating soil moisture under vegetation cover by means of PolSAR. IEEE Transactions on Geoscience and Remote Sensing, 47(2), 442–454. doi:10.1109/TGRS.2008.2009642
[6] Mandal, D., et al. (2020). A radar vegetation index for crop monitoring using compact polarimetric SAR data. IEEE Transactions on Geoscience and Remote Sensing, 58(9), 6321–6335. doi:10.1109/TGRS.2020.2976661
[7] Basargin, N., Alonso-González, A., & Hajnsek, I. (2023). Constrained tensor decompositions for SAR data: Agricultural polarimetric time series analysis. IEEE Transactions on Geoscience and Remote Sensing, 61, 1–13, Art no. 4410913. doi:10.1109/TGRS.2023.3331599
[8] Bauer-Marschallinger, B., et al. (2019). Toward global soil moisture monitoring with Sentinel-1: Harnessing assets and overcoming obstacles. IEEE Transactions on Geoscience and Remote Sensing, 57(1), 520–539. doi:10.1109/TGRS.2018.2858004
[9] Eidgenössische Forschungsanstalt für Wald, Schnee und Landschaft WSL (2023). Bodenmessstellen – FB 24-03. https://www.bodenmessnetz.ch/pdf/pub/fb-24-03_Bodenmessstellen.pdf
[10] Mittelbach, H., et al. (2011). Soil moisture monitoring for climate research: Evaluation of a low-cost sensor in the framework of the Swiss Soil Moisture Experiment (SwissSMEX) campaign. Journal of Geophysical Research: Atmospheres, 116(D5).
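The multi-layer perceptron retrieval mentioned in the abstract can be sketched as a one-hidden-layer network mapping per-pixel SAR features to SSM. The feature set, normalization, network size and weights below are all hypothetical placeholders, not the study's trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_ssm(features, W1, b1, W2, b2):
    """One-hidden-layer perceptron mapping per-pixel SAR features (e.g.
    VV and VH backscatter, entropy, anisotropy, alpha) to volumetric soil
    surface moisture; the sigmoid output keeps the estimate in (0, 1)."""
    h = np.tanh(features @ W1 + b1)               # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

# Toy untrained weights: 5 input features, 16 hidden units, 1 output
W1 = rng.normal(scale=0.1, size=(5, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)

# One pixel: [VV_dB, VH_dB, entropy, anisotropy, alpha_deg], crudely
# scaled; in practice features would be normalized from training data.
ssm = mlp_ssm(np.array([-11.0, -17.5, 0.6, 0.3, 35.0]) / 10.0, W1, b1, W2, b2)
```

In a real retrieval the weights would be fitted against the in-situ network measurements; the sketch only shows the forward pass and the bounded output.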

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Mapping Crop Rotations at National Scale: High-Resolution Monitoring of Agricultural Dynamics in Germany Using Sentinel Data

Authors: Sarah Asam, Andreas Hirner, Ursula Geßner
Affiliations: German Aerospace Center DLR
The type and rotation of cultivated crops significantly influence the functions of arable land, agricultural productivity, and ecological processes. Specific crop rotations are associated with various environmental impacts, which can be beneficial or detrimental. These factors are crucial for addressing challenges such as food security, carbon sequestration, climate change adaptation, and maintaining agro-biodiversity. Therefore, information on crop rotations is essential for addressing a range of environmental and agricultural policy issues. For instance, SDG Indicators 2.4.1 (proportion of agricultural area) and 15.3 (land degradation) heavily rely on spatially explicit information on crop rotations. Recognizing their importance, the Group on Earth Observations Global Agricultural Monitoring (GEOGLAM) has designated “Crop Rotation Sequences” as an Essential Agricultural Variable (EAV). Satellite remote sensing provides an effective tool for mapping crop types uniformly at regular intervals over large areas, facilitating the monitoring of agricultural dynamics across regions. In this study, we generated high-resolution (10-meter) crop type maps for Germany covering the years 2017 to 2024, classifying 18 distinct crop types or groups. We used time series data from the Sentinel-1 A/B and Sentinel-2 A/B satellites from March to October. Sentinel-2 data were pre-processed, resampled to a 10-meter resolution at 5-day intervals, temporally gap-filled, and smoothed using a Savitzky-Golay filter. Sentinel-1 Ground Range Detected (GRD) data were also pre-processed to derive backscatter values for VV and VH polarization, which were resampled to the same 10-meter grid as the Sentinel-2 data using the nearest neighbour algorithm. For each pixel, monthly statistical features were calculated based on the Sentinel-2 spectral bands (2-4, 8, and 12), four spectral indices, and the Sentinel-1 VV and VH backscatter.
These features were then used in a Random Forest (RF) classification model, which was trained and validated using anonymized Integrated Administration and Control System (IACS) data from six federal states of Germany. This dataset provided broad geographical coverage, representing diverse agricultural landscapes across the country. Eighteen crop type classes were defined based on IACS land use data, and separate RF models were trained for each year to map crop types at 10-meter spatial resolution across all eight years. To refine the crop type classifications, we applied spatial and temporal filtering techniques, along with region-specific knowledge-based rules. For example, hops and vineyards were restricted to regions with known cultivation patterns. Group-wise spatial sieve filters were applied to eliminate smaller misclassified patches caused by spectrally similar crop types. Permanent classes were subjected to temporal consistency rules to assign misclassifications in single years to the majority class. The resulting classification achieved high F1-scores (0.79-0.83) across all study years. The crop type maps enable spatiotemporal analysis of crop diversity patterns in Germany, for example, at the municipal level or for hexagons. For instance, wheat cultivation is widespread across all agricultural regions, while maize cultivation is concentrated in the northwest and south of the Danube. A mix of wheat and sugar beet is found in areas with nutrient-rich, deep soils, such as in the Magdeburg and Hildesheim Börde. Regional specialities include hop cultivation in the Holledau, fruit trees in the Altes Land, or vineyards along the Upper Rhine, Moselle and Main. In the next phase, we analyse crop rotations at the pixel level to identify common rotation patterns across different regions of Germany. This analysis covers both crop groups (grouped according to physiology and management, e.g. winter cereals) and individual crop types. 
Preliminary results indicate that only a small proportion of all identified crop rotations dominate the majority of arable land in Germany, with these rotations focused mostly on winter cereals or maize. Furthermore, the dataset enables an assessment of monoculture prevalence, both regionally and by detecting fields where the same crop is cultivated in consecutive years, and the identification of crops for which monoculture is more common. Conversely, we also quantify the frequency and composition of diverse crop rotations, including those with five or more different crop types. This novel dataset provides detailed insights into crop rotation dynamics at a national scale and offers valuable information for supporting agricultural and landscape planning. It also contributes to policy monitoring and implementation, particularly in the context of sustainable land management and climate mitigation and adaptation strategies.
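The pixel-level rotation analysis described above amounts to counting per-pixel label sequences across the yearly crop maps and flagging consecutive-year repeats. A minimal sketch on a toy 3-year, 2x2-pixel stack; the array contents and the "0 = non-arable" convention are assumptions, not the DLR data:

```python
from collections import Counter
import numpy as np

# Hypothetical per-pixel crop labels, one 2-D array per year;
# 0 = non-arable, other integers index crop classes.
years = [np.array([[1, 2], [1, 3]]),   # year 1
         np.array([[2, 1], [1, 3]]),   # year 2
         np.array([[1, 2], [1, 3]])]   # year 3

stack = np.stack(years)                # shape: (n_years, rows, cols)

# Count rotation sequences over arable pixels only
sequences = Counter(
    tuple(stack[:, r, c])
    for r in range(stack.shape[1])
    for c in range(stack.shape[2])
    if (stack[:, r, c] != 0).all()
)

# Monoculture flag: same crop cultivated in at least two consecutive years
monoculture = (np.diff(stack, axis=0) == 0).any(axis=0)
```

Ranking `sequences` by count then reveals which few rotations dominate, matching the kind of summary reported in the abstract.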

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Spatial Thermal Stress Model for Precision Agriculture: Assessing Crop Risk under Climate Variability

Authors: Mauro Roscini, Alessandro Vignati, Matteo Cardinali
Affiliations: Agricolus srl
Thermal stress is a significant challenge in modern agriculture, especially as climate change intensifies the frequency and severity of extreme temperatures. Crops, which are sensitive to temperature fluctuations, can experience disruptions in growth and development, leading to reduced yields and poor crop quality. The effects of thermal stress are not uniform across fields, as local factors such as topography, microclimates, and crop phenology influence the response to heat stress. Therefore, understanding and quantifying thermal stress at a fine spatial resolution is crucial for effective crop management and for minimizing the risks associated with extreme temperature events. Accurate assessment of thermal stress can help farmers make informed decisions regarding irrigation, crop protection, and other management practices to mitigate potential yield losses and optimize crop productivity. To address this issue, a thermal stress model has been developed to evaluate heat stress on a daily basis at a high spatial resolution. The core of the model is the calculation of the spatialized phenological stage of the crop, which is determined from the thermal sum of the spatialized temperature and crop-specific parameters. Spatialized temperature, a fundamental input to the model, is calculated by considering several factors such as altitude, day of the year, relative humidity, slope, and aspect, all derived from a 25-meter Digital Elevation Model (DEM) from Copernicus. This allows the model to capture the localized variations in temperature across the field. The phenological stage is calculated pixel by pixel, taking into account the crop’s growth stage and temperature exposure. This spatialized phenology, combined with maximum temperature thresholds specific to each crop and growth stage, allows the model to determine the level of thermal stress for each pixel, categorizing it into low, medium, or high risk.
The outputs of the model include detailed raster maps that visualize the distribution of thermal stress across the field on a daily basis. In addition, a summary dataframe provides key indicators such as maximum temperature, stress indices, and alert labels for each day. These results are essential for precision agriculture, as they enable targeted interventions in the field. By identifying areas at high risk of thermal stress, farmers can implement measures such as optimized irrigation, shading, or other practices to reduce the impact of heat stress on crop yields. By integrating crop physiology, high-resolution spatial data, and meteorological variables, the Thermal Stress model offers a comprehensive and dynamic tool for assessing and managing heat stress in agriculture. This model supports farmers and agronomists in adapting to the increasing challenges posed by climate change, enhancing agricultural sustainability, and ensuring food security by minimizing crop losses due to thermal stress. The ability to precisely monitor and mitigate the heat effect at the field level provides a significant advantage in modern farming, contributing to more resilient and efficient agricultural systems.
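The model logic described above (thermal sum, spatialized phenological stage, per-stage temperature threshold, three-level risk class) can be sketched as below. All parameter values are illustrative assumptions, not Agricolus' calibrated crop parameters.

```python
import numpy as np

# Illustrative parameters (not the operational model's values):
T_BASE = 10.0                        # base temperature for the thermal sum [C]
STAGE_BOUNDS = [0.0, 150.0, 600.0]   # growing-degree-day total at which each stage begins
STAGE_TMAX = [32.0, 30.0, 28.0]      # per-stage maximum-temperature threshold [C]

def thermal_stress(tmin, tmax):
    """Accumulate growing degree days per pixel (axis 0 = days), derive
    the phenological stage from the thermal sum, and classify daily heat
    stress as 0 = low, 1 = medium, 2 = high risk."""
    gdd = np.cumsum(np.maximum((tmin + tmax) / 2.0 - T_BASE, 0.0), axis=0)
    stage = np.digitize(gdd, STAGE_BOUNDS) - 1      # stage index per pixel and day
    excess = tmax - np.take(STAGE_TMAX, stage)      # exceedance of the stage threshold
    return np.digitize(excess, [0.0, 3.0])          # <0 C low, 0-3 C medium, >=3 C high

# Two days over a single pixel: a mild day, then a 36 C heat event
risk = thermal_stress(np.array([[15.0], [16.0]]), np.array([[29.0], [36.0]]))
```

In the operational model the `tmin`/`tmax` grids would come from the DEM-based spatialized temperature, so the same function yields the daily raster maps of stress classes.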

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Agricultural stress monitoring through a synergistic utilisation of hyperspectral and high-resolution thermal satellite observations

Authors: Tian Hu, Tingxuan Jiang, Egor Prikaziuk, Matthias Wocher, Marco Spagnolli, Anne Schucknecht, Krzysztof Smykała, Żaneta Swacha, Christiaan van der Tol
Affiliations: ENVISION Unit, Luxembourg Institute of Science and Technology (LIST), Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, OHB System AG, QZ Solutions Sp. z o. o
With the dramatic increase in the global population, humanity is facing a pressing issue regarding food security. The intensification of warming and drying in different regions of the globe further challenges the sustainability of agriculture. Effective and timely monitoring of agricultural stress is crucial for mitigating detrimental impacts on crops and improving crop yields. The reflectance spectra of crops in the visible/near infrared (VNIR) and shortwave infrared (SWIR) domains indicate their biophysical and biochemical statuses, providing an effective approach to monitoring crop health. The thermal emission of crops correlates with canopy temperature, carrying the imprints of water and energy exchanges between plants and the atmosphere. These plant physiological activities are directly linked to crop stress conditions. Satellite observations, with their large spatial coverage and easy accessibility, can facilitate agricultural stress monitoring by improving efficiency and providing the required reflectance and radiance data. The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) and the high-resolution Land Surface Temperature Monitoring (LSTM) mission, part of the ESA Copernicus Expansion missions, will play a pivotal role in monitoring crop stress due to their significant improvements in spectral and spatial resolution. By leveraging their advanced monitoring capabilities, a synergistic utilisation of observations from these two missions is envisaged under the ESA Sentinel Users Preparation programme. Using synthetic CHIME and LSTM data generated by an end-to-end simulator, we aim to monitor various crop abiotic stress conditions, including heat stress, water stress, and light stress, while also exploring the possibility of detecting nitrogen stress. The Level 2 biophysical and biochemical parameters, such as leaf water content, leaf chlorophyll content, and canopy temperature, will be estimated as the baseline.
The estimated biophysical parameters will be integrated into an analytical surface energy balance model to retrieve the Level 3 canopy evapotranspiration. Combining these crop parameters with vegetation indices calculated from the CHIME VNIR and SWIR bands, different crop stress indicators will be proposed to monitor the aforementioned stressors. These indicators will be classified into two tiers: one for the ensemble condition of the crop canopy and one exclusively for the vegetation, independent of the soil background. In addition, a comprehensive index incorporating plant transpiration and photosynthesis information will be proposed to depict the crop health condition, which may be useful for predicting crop yield at the end of the growing season. Overall, the study aims to develop effective crop abiotic stress indicators from hyperspectral and thermal observations and to prepare for the application of the future CHIME and LSTM missions in the agricultural domain.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Using SAR Intensity and Polarimetric Data for NDVI Modeling in Crops: A Case Study in Navarre, Spain

Authors: Itxaso Aranguren, Armando Marino, Maria Gonzalez-Audicana, Jesus Alvarez-Mozos
Affiliations: Public University of Navarre, University of Stirling
Remote sensing has become an essential tool in agriculture, supporting tasks such as crop classification and monitoring, and the identification of farming practices. Effective agricultural monitoring is critical for understanding crop dynamics, identifying farming practices, and supporting decision-making in the face of climate change and extreme weather events. Optical multispectral sensors are among the most frequently employed in this field. Their data are often summarized into vegetation indices like the widely used Normalized Difference Vegetation Index (NDVI), which has become a benchmark for agricultural studies and applications. However, despite its utility, the NDVI has certain limitations, particularly its reliance on clear skies, which restricts its effectiveness under cloudy conditions. To address these limitations, the use of Synthetic Aperture Radar (SAR) data has gained traction. The capacity of SAR sensors to provide consistent data acquisitions regardless of weather conditions or time of day makes them invaluable for agricultural monitoring. They provide a reliable alternative for data capture, particularly in regions with frequent cloud cover or during critical periods where uninterrupted data collection is essential. Furthermore, the use of SAR data introduces complementary insights into crop dynamics, as it is sensitive to the structural and moisture-related properties of vegetation. The objective of this study is to explore the potential of using Sentinel-1 derived features to model Sentinel-2 NDVI time series for four crops: two winter crops (barley and wheat) and two summer crops (sunflower and corn). Three regression scenarios were proposed: (1) based on intensity features (VH, VV, VH*VV and VH/VV), (2) based on polarimetric features (entropy, alpha and anisotropy), and (3) combining the two. 
Furthermore, temporal features were extracted from both intensity and polarimetric features, in order to capture the evolution of crop growth dynamics throughout the entire crop season. A Random Forest regressor was utilized to model the NDVI time series. To this end, a case study was conducted in the region of Navarre, Spain. The study area encompasses approximately 73,000 hectares of agricultural land. The region is predominantly agricultural, with over 73% of the land dedicated to this sector. The primary crops grown are rainfed herbaceous crops (mainly barley and wheat), followed by irrigated herbaceous crops (mainly corn, wheat, barley and sunflower). Two vector files served as field data for the study: an anonymized version of farmers’ Common Agricultural Policy (CAP) declarations and the Land Parcel Identification System (LPIS, or SIGPAC in Spanish). The study focused on a selection of the most representative crops (barley, wheat, sunflower and corn). The study period spanned the 2019 agricultural year, ranging from September 2018 to December 2019. The Harmonized S-2 Level-2A collection from Google Earth Engine (GEE) was employed to obtain the NDVI time series for the specified study period. A cloud and shadow mask was applied in GEE using the Sen2Cor scene classification (SCL) layer, and the median NDVI was then calculated for each parcel on each acquisition date. All available Sentinel-1 (S-1) Single Look Complex (SLC) and Ground Range Detected (GRD) images were downloaded from the Copernicus Data Space Ecosystem and processed using the SNAP software. The processing of SLC images entailed the following steps: orbit correction, burst selection, calibration to complex numbers, merging of bursts, generation of the polarimetric covariance matrix (C2), multilooking (1:4 ratio), H-A-Alpha dual-polarimetric decomposition (with a 3x3 window size), and orthorectification.
As a result of this processing, the entropy (H), anisotropy (A), and alpha (α) features were generated for each acquisition date. The processing of GRD images consisted of thermal noise removal, orbit correction, beta0 calibration, speckle filtering, terrain flattening and orthorectification. Two terrain-flattened backscatter coefficient (gamma0) bands were generated in VH and VV for each acquisition date. The ratio (VH/VV) and the product (VH*VV) were also calculated. Additionally, the rate of change of each feature over time was calculated and named here the temporal ‘slope’. The results demonstrated that the combined scenario yielded the best modeling results, closely followed by the model that used only intensity features. Models trained exclusively with polarimetric features obtained inferior performance metrics; however, the R2 values for barley and wheat were comparable to those of the intensity and combined scenarios. The incorporation of temporal features derived from both intensity and polarimetric features enhanced the predictive capacity of the model across all crops and scenarios. The crops that exhibited the most favorable model performance were barley, wheat and sunflower. The performance metrics for corn were low across all three scenarios. The analysis of model performance across the phenological stages revealed that the germination and vegetative growth phase generally yielded the best modeling performance. The majority of crops achieved high R2 values across all scenarios, exceeding 0.8. However, sunflower exhibited a negative R2 in the polarimetric-features scenario. Notably, corn performed best in this phase, achieving R2 values close to those of the other crops. Conversely, the reproductive phase showed the weakest performance across all crops and scenarios. Altogether, the results obtained demonstrate the potential for modeling NDVI using S-1 features with satisfactory accuracy.
It was assessed that incorporating SLC images alongside GRD images provided better performance, at the expense of processing time and storage. Further research should investigate the application of this procedure to different crops and regions, as well as the exploration of advanced machine learning models, such as deep learning techniques, to more accurately capture the relationships between SAR and NDVI data.
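The intensity features and their temporal 'slope' described above can be sketched as follows. The function and variable names, the dB-to-linear conversion, and the linear fit for the slope are illustrative assumptions, not the authors' exact feature-engineering code.

```python
import numpy as np

def s1_features(vh_db, vv_db, t):
    """Build per-parcel Sentinel-1 intensity features (VH, VV, VH/VV,
    VH*VV, all in linear power) plus each feature's temporal 'slope',
    i.e. its rate of change fitted over the acquisition dates t (days)."""
    vh, vv = 10 ** (vh_db / 10.0), 10 ** (vv_db / 10.0)  # dB -> linear power
    feats = {"VH": vh, "VV": vv, "VH/VV": vh / vv, "VH*VV": vh * vv}
    slopes = {k: np.polyfit(t, v, 1)[0] for k, v in feats.items()}
    return feats, slopes

# Toy 4-date time series: VH backscatter rising as the canopy develops,
# VV held constant (illustrative values only)
t = np.array([0.0, 12.0, 24.0, 36.0])
feats, slopes = s1_features(np.array([-22.0, -20.0, -18.0, -16.0]),
                            np.array([-12.0, -12.0, -12.0, -12.0]), t)
```

These per-date features and slopes would then form the input table for the Random Forest regressor that models the parcel-level NDVI time series.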
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Machine Learning for Plant and Tree Counting in Climate-Sensitive Agricultural Areas

Authors: Boudewijn van Leeuwen, Dr Gergely Kitka, Dr Orsolya Tóth, Tamás Bánhidi, Beáta
Affiliations: Bay Zoltán Nonprofit Ltd. for Applied Research
The increasing frequency of weather extremes presents a significant challenge to the agricultural sector. With climate change intensifying the frequency and severity of these events, ensuring the sustainability and stability of agro-ecosystems has become crucial. Farmers and foresters are now required to meet stricter environmental standards, for example by reducing or eliminating pesticides and synthetic fertilizers, while still ensuring food production for a growing population and maintaining economic viability. This aligns with the European Green Deal’s goals to make the EU climate-neutral by 2050, promoting sustainable agriculture, preserving biodiversity, and reducing environmental impact. By integrating advanced monitoring technologies and adhering to sustainable practices, agriculture can play a vital role in achieving the Green Deal's objectives and enhancing the resilience of ecosystems and societies to climate change. Precision agriculture can be a strategy for farmers and foresters to cope with the challenges posed by climate change. It relies on detailed data to guide decision-making and to optimize inputs like water, fertilizers, and pesticides. Precision agriculture not only increases yields but also minimizes environmental impacts, contributing to more sustainable agricultural practices. Machine learning algorithms make it possible to analyse the large data sets that are inherent to precision agriculture. Our team develops different methods based on machine learning algorithms to help farmers and foresters. Here, we present an in-house developed methodology to count plants or trees over large areas based on drone and satellite data using open-source machine learning algorithms. Plant counts are important for evaluating and possibly adjusting stand density, creating zonation maps, and detecting problems such as disease or water stress, and they form the basis of yield estimation. In the framework of the MarginUp! 
project (https://margin-up.eu/), we developed a methodology that uses one of two algorithms for counting plants. Both are based on a centimeter-resolution multispectral drone image of one or more small areas within a study area that is covered by a lower-resolution satellite image. The first algorithm is a more traditional approach, where individual plants are delineated by slicing the image based on a vegetation index. This method does not work well when plants overlap. In that case, a computer vision approach is used, where a convolutional neural network is trained to segment plants in the drone image. The mean average precision of the training is 0.866 or better. In selected sample areas, the number of plants per hectare is correlated with the average values of a vegetation index, such as OSAVI, NDVI, NDRE or LCI, derived from a Sentinel-2 satellite image of the same day. This relationship is then used to determine the number of plants for the larger area covered by the satellite data. Our study areas are on the Great Hungarian Plain, which covers about 56% of the total area of Hungary. The Plain has a warm and dry climate with an average annual precipitation of 500-550 mm. The region is strongly impacted by climate change, with severe periods of drought on the one hand and, on the other, extreme rainfall over short periods causing inland excess water. In the first study area, Virginia mallow (Sida hermaphrodita) is grown for biomass production. This drought-resistant plant is very suitable for the sandy soils in this area. Here, our plant count algorithm was used to estimate future yield. The second study area is an oak tree plantation, where the trees are grown for building material and biomass. Part of the stand was infested by disease and the trees died. Here, we counted the number of trees that had to be replaced and detected areas with irregular growth patterns. 
Our last study area is a 35 ha parcel with irrigated corn. Here, we count the number of young plants, which helps the farmer adjust the use of resources, gives insight into interventions during the growing season, and allows for planning of future agricultural practices on the parcel. The number of plants in the seedling or vegetative stages also provides information on the required inputs and expected yield. Our methodology demonstrates the potential of combining advanced technologies with sustainable agricultural practices to address the challenges of climate change while ensuring food security and ecosystem resilience.
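The extrapolation step described above, correlating drone-derived plant density with a satellite vegetation index in sample areas and then applying that relation over the wider area, can be sketched roughly as follows. All numbers are invented for illustration; the actual index, fit and areas come from the study data.

```python
import numpy as np

# Sample areas: drone-derived plant counts per hectare and the mean
# vegetation index (e.g. OSAVI) of the same areas from a same-day
# Sentinel-2 image. Values are made up for illustration.
plants_per_ha = np.array([52000.0, 61000.0, 70000.0, 83000.0, 90000.0])
vi_mean = np.array([0.42, 0.48, 0.55, 0.63, 0.69])

# Fit a linear relationship plants = a * VI + b on the sample areas
a, b = np.polyfit(vi_mean, plants_per_ha, deg=1)

# Apply it to the satellite-derived VI values over the larger area
vi_field = np.array([0.40, 0.50, 0.60, 0.66])
estimated = a * vi_field + b
print(np.round(estimated))
```

In practice one would also validate the fitted relation (e.g. with held-out sample areas) before extrapolating beyond the drone-covered plots.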
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Sugar Beet Cercospora Leaf Spot Quantified By Integration Of Active (Fq’/Fm’) And Passive (SIF) Chlorophyll Fluorescence Methods In The Field

Authors: Deepthi Konche, Dr. Juan Manuel Romero, Facundo Ispizua Yamati, Dr. Christoph Jedmowski, Ilgaz Askin, Micheal Quarten, Angelina Steier, Prof. Dr Anne Katrin Mahlein, Prof. Dr. Uwe Rascher, Dr. Onno Muller
Affiliations: Institute of Bio- and Geosciences 2 (IBG-2), Plant Sciences, Forschungszentrum Jülich, Institute of Sugar Beet Research Göttingen, Holtenser Landstrasse 77, 37079, CONICET, Instituto de Investigaciones Fisiológicas y Ecológicas vinculadas a la Agricultura (IFEVA), Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Química Inorgánica, Analítica y Química Física
Cercospora leaf spot (CLS), caused by the fungus Cercospora beticola, is a destructive foliar disease in sugar beet (Beta vulgaris L.), leading to reduced yields and sugar content (Rangel et al., 2020). Traditional detection and monitoring of CLS, based on visual assessments and laboratory analyses, is very time-consuming (Shane & Teng, 1992). The ability to detect physiological and structural changes due to CLS is vital for understanding genotype-specific responses and improving crop management strategies. Sun-induced chlorophyll fluorescence (SIF) has emerged as a promising tool for assessing photosynthetic activity and detecting plant stress (Rascher et al., 2015; Porcar-Castell et al., 2014). However, the application of SIF in monitoring biotic stress, specifically fungal diseases like CLS in sugar beet, remains largely unexplored. The study took place in 2023 at experimental sites of the Institute of Sugar Beet Research, Goettingen. Two sugar beet genotypes, one susceptible (S) and one with very low susceptibility (VLS) to CLS, were evaluated in plots (12.0 m × 5.4 m) under inoculated and control conditions. Three replications were sown using a randomized block design. The mobile positioning system ‘FieldWeasel’ was used in open fields for canopy-scale fluorescence measurements, with RTK positioning of the sensor platform to better than 2 cm. Two chlorophyll fluorescence methods were employed: the passive FloX system for monitoring top-of-canopy solar-induced fluorescence and reflectance with high-performance spectrometers (Damm et al., 2022), and the active Light-Induced Fluorescence Transient (LIFT) device, which uses fast repetition rate (FRR) pulses to measure PSII electron transport efficiency (Fq'/Fm') (Kolber et al., 1998; Knopf et al., 2024). Regarding the disease severity score, in inoculated plots severity peaked at 10 in the S genotype, while the VLS genotype showed severity below 8 by the season's end. 
In control plots, severity remained below 6 for the VLS genotype, while the S genotype reached 10, indicating successful inoculation and even distribution. The three vegetation indices (VIs) (NDVI, MTCI and PRI) derived from reflectance measurements collectively demonstrated consistent patterns in response to disease progression in the sugar beet genotypes. Overall, the VLS genotype showed no effect of disease in control treatments, whereas inoculated plots showed changes in the VIs only towards the end of the season, reflecting lower biomass and chlorophyll content and higher light stress. In contrast, inoculated plots of the S genotype revealed significant effects of the disease already on the first measuring day, while control plots were affected only in the second part of the season. Turning to sun-induced fluorescence, the SIF-FR and SIF-R signals change as a result not only of structural effects through fAPAR and Fesc but also of physiological effects through SIF yield, and therefore show sensitivity to early physiological disruptions at the photosystem level. This made it possible to identify subtle disease effects on the VLS genotype that were not visible in the reflectance measurements: the early decrease observed for the inoculated plots on the first measuring date and the effect on the control plots on the second measuring date. SIF changes upon CLS infection in the S genotype, on the other hand, follow the same trend described by the vegetation indices, due to their strong dependence on APAR. PSII efficiency (Fq'/Fm') measured with LIFT was compared with both red and far-red SIF emission yields. For the S variety, both fluorescence yields follow the same general pattern as PSII efficiency, corresponding to the first scenario described by Porcar-Castell et al. (2021). 
This suggests a direct relationship between PSII yield and SIF yield, indicating that fluctuations in photosynthetic efficiency and fluorescence emission upon CLS disease are linked, possibly due to a degree of decoupling between reaction centers and the antenna complex or quenching of chlorophyll excited states. This study showed how Cercospora leaf spot, a widely distributed disease in German production, affects the sugar beet crop mainly at the structural level. We have shown how simultaneous retrieval of active and passive fluorescence can provide insights into early disease effects, provided structural normalization is adequately addressed. By bridging these findings with satellite-based remote sensing approaches, such as ESA's FLEX mission, we can offer practical solutions for monitoring crop health and stress on larger scales, from regional to global levels. References: Damm, A., Cogliati, S., Colombo, R., Fritsche, L., Genangeli, A., Genesio, L., Hanus, J., Peressotti, A., Rademske, P., Rascher, U., Schuettemeyer, D., Siegmann, B., Sturm, J., & Miglietta, F. (2022). Response times of remote sensing measured sun-induced chlorophyll fluorescence, surface temperature and vegetation indices to evolving soil water limitation in a crop canopy. Remote Sensing of Environment, 273, 112957. https://doi.org/10.1016/j.rse.2022.112957 Knopf, O., Castro, A., Bendig, J., Pude, R., Kleist, E., Poorter, H., Rascher, U., & Muller, O. (2024). Field phenotyping of ten wheat cultivars under elevated CO2 shows seasonal differences in chlorophyll fluorescence, plant height and vegetation indices. Frontiers in Plant Science, 14, 1304751. https://doi.org/10.3389/fpls.2023.1304751 Kolber, Z. S., Prášil, O., & Falkowski, P. G. (1998). Measurements of variable chlorophyll fluorescence using fast repetition rate techniques: Defining methodology and experimental protocols. Biochimica et Biophysica Acta (BBA) - Bioenergetics, 1367(1–3), 88–106. 
https://doi.org/10.1016/S0005-2728(98)00135-2 Porcar-Castell, A., Malenovský, Z., Magney, T., Van Wittenberghe, S., Fernández-Marín, B., Maignan, F., Zhang, Y., Maseyk, K., Atherton, J., Albert, L. P., Robson, T. M., Zhao, F., Garcia-Plazaola, J.-I., Ensminger, I., Rajewicz, P. A., Grebe, S., Tikkanen, M., Kellner, J. R., Ihalainen, J. A., … Logan, B. (2021). Chlorophyll a fluorescence illuminates a path connecting plant molecular biology to Earth-system science. Nature Plants, 7(8), 998–1009. https://doi.org/10.1038/s41477-021-00980-4 Porcar-Castell, A., Tyystjärvi, E., Atherton, J., Van Der Tol, C., Flexas, J., Pfündel, E. E., Moreno, J., Frankenberg, C., & Berry, J. A. (2014). Linking chlorophyll a fluorescence to photosynthesis for remote sensing applications: Mechanisms and challenges. Journal of Experimental Botany, 65(15), 4065–4095. https://doi.org/10.1093/jxb/eru191 Rangel, L. I., Spanner, R. E., Ebert, M. K., Pethybridge, S. J., Stukenbrock, E. H., De Jonge, R., Secor, G. A., & Bolton, M. D. (2020). Cercospora beticola: The intoxicating lifestyle of the leaf spot pathogen of sugar beet. Molecular Plant Pathology, 21(8), 1020–1041. https://doi.org/10.1111/mpp.12962 Rascher, U., Alonso, L., Burkart, A., Cilia, C., Cogliati, S., Colombo, R., Damm, A., Drusch, M., Guanter, L., Hanus, J., Hyvärinen, T., Julitta, T., Jussila, J., Kataja, K., Kokkalis, P., Kraft, S., Kraska, T., Matveeva, M., Moreno, J., … Zemek, F. (2015). Sun‐induced fluorescence – a new probe of photosynthesis: First maps from the imaging spectrometer HyPlant. Global Change Biology, 21, null. https://doi.org/10.1111/gcb.13017
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: AgroSoil - Innovative high-spatiotemporal resolution soil moisture modelling for climate-smart agriculture

Authors: Kwabena Nketia, Anna Schaertel, Elmer Ekinzog, Peter Navratil
Affiliations: GAF AG
Soil moisture, one of the Essential Climate Variables, underpins sustainable agriculture, crop productivity, smart water management, and global food security. Agricultural drought stress is responsible for nearly 70% of crop yield losses and increasingly threatens agricultural productivity; there is therefore an urgent global need for advanced spatio-temporal tools to predict and forecast soil moisture dynamics, especially under changing climatic conditions. Despite these needs, global in-situ networks remain insufficient to provide the high-resolution space-time soil moisture data required for precision agricultural applications. Current soil moisture datasets and models also fall short in spatial, temporal, and vertical resolution, especially within crop root zones, limiting their relevance for both global and localized smart farming efforts. To address these gaps, AgroSoil, a facet of the AgroSuite package, was developed by GAF AG, leveraging cutting-edge Artificial Intelligence (AI) and pedo-transfer modeling techniques. AgroSoil delivers daily global soil moisture information with unparalleled precision, offering spatial resolutions from 5 to 500 m, temporal scales from daily to seasonal (1981+), and root zone coverage (vertical resolution) down to 100 cm. This innovative product integrates ERA5-Land data produced by the Copernicus Climate Change Service (C3S) at ECMWF, hydro-geomorphological indices, in-situ measurements, and other Earth Observation (EO) data using AI-driven time-series modeling. The result is near-real-time insight into soil moisture dynamics across five soil layers and crop-specific root zones. AgroSoil supports diverse applications, including optimizing irrigation and crop cultivation, mitigating drought impacts, and advancing agro-insurance. 
By combining pedo-transfer functions, soil properties, and EO data with advanced AI disaggregation tools, AgroSoil generates dynamic, high-resolution maps compliant with GlobalSoilMap specifications. Its capabilities extend to assessing agricultural drought severity, evaluating crop water requirements, monitoring water storage anomalies, and forecasting soil drought to enhance climate-smart farming practices. AgroSoil opens new ways for drought monitoring and the creation of innovative crop insurance index portfolios tailored to specific landscapes. This fosters resilience against climate change impacts while enhancing agricultural productivity and global food supply. Through AgroSoil, soil moisture information has already been produced for countries such as Germany, Ukraine, India, France, and Ghana, as well as various subnational regions in Europe and Australia, covering periods of up to 20 years at daily resolution. Validation of AgroSoil's AI-driven models has demonstrated consistently high predictive accuracy across diverse landscapes. In alignment with the Green Deal’s sustainability and finance goals, AgroSoil empowers agronomy industries by providing reliable asset monitoring and actionable climate adaptation strategies. With its transformative capabilities, AgroSoil is redefining soil moisture monitoring and advancing sustainable agricultural practices worldwide.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Cover Crop Type Mapping: A Candidate Prototype for the Copernicus Land Monitoring Service

Authors: Dr. Mathilde De Vroey, Kasper Bonte, Dr. Wanda De Keersmaecker, Tim Ng, Ruben Van De Kerchove
Affiliations: Vito
The Evolution of the Copernicus Land Monitoring Service (CLMS) portfolio project (EvoLand) develops innovative methods, algorithms, and prototypes for monitoring land use/land cover changes and surface characteristics at high spatial and temporal resolutions. One of the eleven candidate CLMS prototypes developed in EvoLand focuses on cover crop type mapping. Cover crops, grown primarily to enhance and protect soil rather than for harvest, play a critical role in sustainable agriculture. They mitigate soil erosion, reduce nutrient leaching, increase soil organic matter, provide natural fertilization, and enhance biodiversity. Common cover crops include grasses (e.g., rye, oats), legumes (e.g., clover, vetch), and brassicas (e.g., radishes, mustard). Recognized as a key sustainable agriculture practice under the Common Agricultural Policy, the implementation of cover crops necessitates effective monitoring. While operational methods and services exist for mapping and monitoring main crop seasons, few focus on cover crop mapping. EO-based cover crop mapping and monitoring present specific challenges, such as limited reference data, higher risk of cloud and snow interference during secondary growing seasons, and the similarity in phenological patterns among cover crop mixes. The new CLMS Vegetated Land Cover Characteristics (VLCC) component already provides specific “secondary crop” layers with parcel-based information on the presence and seasonality of cover crops. This study builds on VLCC by developing a prototype service to map cover crop types annually at 10-meter resolution using Sentinel-1 and Sentinel-2 imagery. Belgium served as the prototype’s test site, leveraging detailed cover crop information from the Flemish Land Parcel Identification System (LPIS). LPIS declarations on secondary crops were filtered and reclassified to create a training and validation dataset. 
Given the challenges of differentiating individual cover crop types and mixes, a hierarchical typology was designed to maximize thematic detail while maintaining reasonable classification accuracy. Sentinel-1 and Sentinel-2 features were extracted using openEO. A CatBoost classifier was fine-tuned, and input features were optimized through a cross-validation approach. The resulting model distinguishes two cover crop groups— “grass-like” (e.g., Poaceae, Leguminosae) and “leaf-rich & mix” (e.g., Brassicaceae, seed mixes)—based on NDVI time series, achieving an overall accuracy of 80.7%. A cover crop type map was produced for Belgium, with potential for annual updates given the availability of LPIS data. Despite promising results, the prototype's scalability is hindered by the scarcity of reference data. Publicly available datasets, such as those in the Flemish LPIS, are rare but essential for further development and operationalization. In conclusion, this candidate prototype demonstrates the potential of Earth observation for cover crop type mapping and highlights the need for robust reference datasets to advance this capability.
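For illustration, the separation of the two cover crop groups from NDVI time series can be sketched with a simple nearest-centroid rule. Note this is a deliberately simplified stand-in: the study itself uses a fine-tuned CatBoost classifier with Sentinel-1 and Sentinel-2 features, and the NDVI series below are invented.

```python
import numpy as np

# Toy NDVI time series (one value per month over the secondary season)
# for labelled parcels. "Grass-like" cover crops here green up slowly
# and persist; "leaf-rich & mix" peak earlier and senesce faster.
X_train = np.array([
    [0.30, 0.45, 0.55, 0.60, 0.62, 0.60],  # grass-like
    [0.28, 0.42, 0.52, 0.58, 0.60, 0.58],  # grass-like
    [0.55, 0.70, 0.60, 0.40, 0.30, 0.25],  # leaf-rich & mix
    [0.50, 0.68, 0.58, 0.38, 0.28, 0.22],  # leaf-rich & mix
])
y_train = np.array([0, 0, 1, 1])  # 0 = grass-like, 1 = leaf-rich & mix

# Nearest-centroid classification: assign each parcel to the class
# whose mean NDVI trajectory is closest in Euclidean distance.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def classify(series):
    d = np.linalg.norm(centroids - series, axis=1)
    return int(np.argmin(d))

print(classify(np.array([0.29, 0.44, 0.54, 0.59, 0.61, 0.59])))  # 0 (grass-like)
```

A gradient-boosting classifier such as CatBoost replaces the centroid rule with learned decision trees over many such temporal features, which is what makes the finer hierarchical typology feasible.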
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: GreenerCotton: How to support Cotton producers to produce more sustainably?

Authors: Florian Schlenz, Johannes Sommer, Jenny Franco, Michael Holzapfel, Rita Lunghamer, Stefan Scherer
Affiliations: Geocledian GmbH
Cotton is the main natural fiber used by the global textile industry. World production reached 26M tons last year. The five major producers are China, India, the USA, Brazil and Pakistan. As most cotton is still grown in a classical, unsustainable way, it has a variety of negative effects on the environment. This circumstance led to the creation of several sustainable cotton standards. The GreenerCotton project (funded by ESA under grant number 4000143121/23/NL/FGL) aims at supporting sustainable cotton production through the application of satellite-based digital agriculture technology. In the frame of the project, we develop an EO-based platform that can be integrated into external IT systems like Farm Management software or Traceability systems via API and deliver services for sustainable cotton crop production and sustainable cotton standard implementation. Integrated into individual cotton management solutions, the GreenerCotton platform can help to:
• better manage production risks and optimize sustainable cotton cultivation. It supports extension services in providing services more efficiently and at a higher quality.
• enable the transparency on production areas required by sustainable cotton standards. It allows checking responsible land management: maintaining key biodiversity areas, protecting rivers and avoiding deforestation. Thus, it can help to enhance the trust in and commercial value of sustainable cotton labels.
These benefits are enabled by a comprehensive set of microservices for data capture and validation, cotton mapping, crop growth monitoring, disease risk assessment, yield estimation and sustainability reporting. GreenerCotton targets the following main user groups:
• Extension services advise farmers on agricultural methods and implement certification methods and sustainability standards. Optimizing workflows and thus reducing costs will be the key benefit of GreenerCotton. 
• Ginneries or raw cotton processors typically contract farmers to produce cotton of a certain quality. They either contract the extension services or operate the extension services themselves.
• Traceability solutions are used by extension services and ginneries for data recording, analysis and transparency. They are also a source of information for the downstream supply chain, including auditors. This puts them in a central role in data management and distribution.
• Sustainable cotton standards implement and control compliance with their standards through auditors. They are users of the services, in particular with respect to the prevention of identity fraud and increased transparency.
To test the developed services, several pilots are carried out involving a sustainable cotton standard with ginneries supporting thousands of farmers, a cotton trader and a traceability system in Tanzania, Greece and India. The EO-based GreenerCotton platform is intended to provide value-added services for cotton farming, using remote sensing and weather data. The defined services support digital data management, validation and data analytics for cotton-relevant information extraction that also supports the criteria on sustainable land use management. The services work at the regional level and the field level. The field-based production monitoring provides timely information about crop growth, ripening and harvest. The crop monitoring allows managing large numbers of contract farmers, including smallholders, for all field crops on a global basis. It includes:
• Phenology and performance monitoring, including portfolio-level reporting
• A sustainability KPI module including crop rotation verification and cropping intensity detection
• Determination of cotton crop water demand
• Yield validation
• Sustainability and ESG KPIs like crop rotation validation, cropping intensity detection, winter cover validation and crop cover duration. 
The regional services support ESRS and CSDDD reporting. They include a regional risk assessment on deforestation and protected-area violations. The main system components are an EO data preprocessing engine managing input data sources, a field object management system for the management of parcel data cubes and core analytical capabilities, a service generation layer for efficient data analysis, and an API interface layer that accepts REST requests and provides the different services in well-defined JSON response models to the final decision support system. The platform delivers data analytics results based on the data sources Sentinel-1 and Sentinel-2, Landsat 8/9, MODIS, weather data and ancillary information like deforestation datasets and protected-area GIS layers. We will present the system design and developed services as well as preliminary results of the project. The pilots involving the different user groups will be introduced and first outcomes presented.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Using Satellites to Assess the Impacts of Large-Scale Cover Crop Expansion

Authors: Stefania Di Tommaso, David B. Lobell
Affiliations: Stanford University
Cover cropping (CC), defined as the planting of non-cash crops between primary growing seasons, is being widely promoted as a way to reduce net greenhouse gas emissions, decrease nutrient leakage from agricultural fields, and make them more resilient to climate change. Despite the many benefits that motivate public subsidies, questions remain about potential downsides associated with widespread adoption. Here, we study the recent expansion of CC in the United States, where the total area under cover crops has doubled since 2012. We use satellite observations from over 100,000 fields, half of which recently adopted CC, in combination with two causal inference methods (difference-in-differences and causal forests) to demonstrate that CC led to: (i) declines in average yields for corn and soybean, by approximately 3% and 2%, respectively; (ii) delays in the sowing of corn (4 days) and soybean (3 days); and (iii) reduced damages in the wet spring of 2019, with cover-cropped fields only half as likely to experience prevented planting as fields without cover crops. Importantly, we find that timely planting of the cash crop is critical for minimizing yield penalties associated with CC. Eliminating sowing delays could reduce yield losses by nearly 50% for corn and 90% for soybeans, underscoring the need for targeted management practices that prioritize timely planting after cover crop termination. These results indicate that CC can mitigate certain climate-related risks, especially under excessive moisture, reducing farmer vulnerability in wet years. However, they also suggest that CC may introduce additional risks in drier conditions, as cover crops can impact soil moisture availability during critical planting windows. This dual effect highlights the need for region-specific, climate-sensitive recommendations for CC adoption. 
This study showcases how remote sensing and causal inference together provide a powerful toolkit for assessing the impacts of large-scale agronomic shifts occurring around the world. This approach enables robust, data-driven insights into the effects of agronomic practices across diverse landscapes, supporting evidence-based decisions in agricultural policy and research globally.
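The difference-in-differences logic behind such estimates can be illustrated with a minimal numeric sketch; the yield figures below are invented and are not the study's results.

```python
# Difference-in-differences: compare the yield change on adopter fields
# (before vs after cover-crop adoption) against the change on comparable
# non-adopter fields over the same period. Numbers are illustrative.
treated_pre, treated_post = 11.0, 10.9    # t/ha, fields that adopted CC
control_pre, control_post = 11.0, 11.2    # t/ha, fields that did not

# The control fields' change proxies what would have happened without CC;
# subtracting it removes common trends (weather, prices, technology).
did = (treated_post - treated_pre) - (control_post - control_pre)
print(round(did, 2))  # -0.3, i.e. a yield penalty attributed to CC
```

Causal forests generalize this idea by estimating heterogeneous treatment effects across field characteristics rather than a single average difference.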
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Mapping paddy rice cropping intensity and calendar in Monsoon Asia at 20 m resolution from multi-source satellite data using a sample-free algorithm

Authors: Yongzhe Chen, Professor Shunlin Liang, Dr. Wenyuan
Affiliations: Department Of Geography, the University Of Hong Kong
In Monsoon Asia (MA), rice is the staple food due to the region's suitable climate and planting traditions, and the region accounts for around 90% of global rice production and consumption. Considering the potential uncertainties in statistical data from developing countries, spatially and temporally consistent monitoring of paddy rice harvest areas is crucial for future food security and agricultural output in MA. Paddy rice cropping calendars, which include the planting and harvesting days of a year, can significantly influence rice yield. Furthermore, high-quality data on paddy rice harvest areas and cropping calendars are essential for estimating irrigation water use and cropland greenhouse gas emissions, such as methane, thereby contributing to water conservation and climate change mitigation. Existing satellite-based paddy rice distribution, cropping intensity and calendar maps face limitations such as coarse spatial resolution, inadequate spatial or temporal coverage, and poor data quality. To address these issues, we developed an innovative sample-free algorithm to map paddy rice cropping intensity and calendar across MA at a 20 m resolution between 2018 and 2021. Initially, we used the Sentinel-1 C-band SAR backscattering coefficient (backscatter) time series to extract potential transplanting signals of paddy rice fields. We recognized that troughs in the SAR backscatter time series could be induced by various factors, including rice transplanting, flooding after heavy rain, vegetation cover change, soil moisture or temperature changes, and residual noise related to incidence angle variation and topographic effects. To address this, we performed a six-dimensional check on all troughs using optical, infrared, and passive microwave remote sensing data. These dimensions include 1) distance to the neighboring trough, 2) trough prominence, 3) trough width, 4) trough sharpness, 5) trough location time, and 6) trough value. 
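A heavily simplified sketch of such trough screening is given below, covering only a few of the six checks. The thresholds, the example series and the check logic are invented for illustration and are not the paper's algorithm.

```python
# Toy Sentinel-1 backscatter time series (dB) with one transplanting-like
# trough. All values and thresholds below are invented for illustration.
series = [-12.0, -12.5, -17.8, -13.0, -12.2, -12.4, -12.1]
doy =    [   10,    22,    34,    46,    58,    70,    82]  # day of year

def find_troughs(vals, days, min_prominence=3.0, window=(20, 200), max_value=-15.0):
    """Return days of local minima passing simple prominence/value/timing checks."""
    troughs = []
    for i in range(1, len(vals) - 1):
        if not (vals[i] < vals[i - 1] and vals[i] < vals[i + 1]):
            continue                           # not a local minimum
        prominence = min(vals[i - 1], vals[i + 1]) - vals[i]
        if prominence < min_prominence:
            continue                           # check: trough prominence
        if vals[i] > max_value:
            continue                           # check: trough value
        if not (window[0] <= days[i] <= window[1]):
            continue                           # check: trough location time
        troughs.append(days[i])
    return troughs

print(find_troughs(series, doy))  # [34]
```

The shallow dip at day 70 is rejected by the prominence check, while the deep flooding-like trough at day 34 survives; the full algorithm additionally checks trough width, sharpness and distance to neighboring troughs against multi-sensor data.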
Validation against 2305 crop type reference samples collected from very-high-resolution satellite images and 1484 public dataset samples indicates an overall accuracy of over 80%, outperforming most existing regional-scale datasets. Our national paddy rice harvest area estimates align with official statistics (regression R-square: 0.97), and our regional mean planting and harvest dates correlate well with the RiceAtlas dataset (R-square: 0.92‒0.93, RMSE: 27‒28 days). Key findings include an increasing average paddy rice cropping intensity across MA, inter-cropping with other crops in approximately 50% of paddy rice fields, and high spatial heterogeneity in rice calendars in South and Southeast Asia. Despite some uncertainties, the satellite-based paddy rice map can identify specific issues, such as potential exaggeration and extraordinary stability, in the official statistics of most developing countries in MA.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Sentinel-2 monthly Composites and Google Street View Images to monitor Land Use Dynamics in Natura 2000 site in Transylvania, Romania

Authors: Jakob Martius, Natascha Oppelt
Affiliations: Kiel University
The Natura 2000 Special Protection Area Podisul Hârtibaciului in central Romania is home to a variety of high-nature-value ecosystems, including native forests, biodiverse pastures, and traditional small-scale agricultural landscapes. Since Romania joined the European Union in 2007, agricultural practices have shifted towards more intensive methods, including cattle breeding, corn monocropping, and the consolidation of ecologically valuable parcels. Despite these changes, no comprehensive spatiotemporal land use assessment or baseline mapping has been established for this area. This study aims to fill this monitoring gap by creating a baseline land use dataset, using a combination of Google Street View images (2018, 2022 and 2023) and Sentinel-2 (S2) satellite data. Monthly S2 composites were generated using the best-available-pixel approach and served as inputs for machine learning (ML) models to track land use changes from 2018 to 2023, using the Weighted Parametric Scoring algorithm. We also evaluated the potential for temporal transfer of these models to future years, providing recommendations for improving transferability. Results show a mean class-specific F1-score of 93.5% for models in 2018, 2022, and 2023, with strong performance for land use classes with relatively stable phenological patterns (e.g., pasture, corn, and grain). However, classes with variable seasonal patterns, such as mown grassland, showed lower accuracy. Analysis of the baseline dataset reveals an 18 km² decrease in forest area and a 16 km² increase in corn cultivation between 2018 and 2023, with 70 km² of land under continuous corn cultivation for three or more years. Study limitations include variations in the spatial distribution and sample sizes of reference fields, as well as limited reference data for effective temporal transfer. These factors suggest opportunities to improve the dataset, emphasizing the potential benefit of expanded reference data collection efforts. 
This highlights the value of further administrative and scientific collaboration in Romania to support future monitoring endeavors. Future directions should include leveraging larger reference datasets and exploring deep learning on S2 time series, along with comparative analyses of historical and current high-resolution imagery to document changes in this unique agricultural landscape.
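The best-available-pixel compositing step mentioned above can be sketched in a few lines of NumPy; the function name and toy quality scores below are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def best_available_pixel(stack, scores):
    """Per-pixel compositing: for each pixel, keep the observation from
    the scene with the highest quality score (e.g. cloud distance, DOY).

    stack  : (T, H, W) reflectance for T scenes in the compositing window
    scores : (T, H, W) per-pixel quality scores (higher is better)
    """
    best = np.argmax(scores, axis=0)          # (H, W) winning scene index
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Toy window: two scenes over a 1x2 image; scene 0 wins the first pixel,
# scene 1 wins the second.
stack = np.array([[[0.1, 0.2]],
                  [[0.3, 0.4]]])
scores = np.array([[[0.9, 0.1]],
                   [[0.2, 0.8]]])
composite = best_available_pixel(stack, scores)   # [[0.1, 0.4]]
```

Any per-pixel score (cloud probability, proximity to a target day of year, sensor zenith) can be plugged into `scores` without changing the compositing logic.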
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Supervised Crop Type Mapping Using Multi-Temporal Satellite Data and Computer Vision: Addressing Data Imbalance in Classification

Authors: Vasiliki Thomopoulou, Dr Panagiotis Kossieris, Mr George Bariamis, Prof Dr Christos Makropoulos
Affiliations: National Technical University of Athens (NTUA)
Crop type mapping at plot scale is important for many purposes related to agriculture, food security and inland water quality, particularly for monitoring agricultural practices that contribute to nutrient loading in basins. For the identification of canopies, traditional techniques like in-situ sampling and data from census campaigns have been extensively utilized and many approaches have been applied. Some of these approaches include the identification of crops of different species, the mapping of a single crop or a group of crops of major interest or even binary cropland mapping. Although the above-mentioned traditional approaches are accurate, they are time-consuming and labor-intensive. Access to cropland can be challenging, as farmers and land managers may limit the number of accessible parcels, making the mapping of even a single parcel a highly time-consuming task. Multi-year crop maps are essential for estimating crop planting frequencies, monitoring seasonal crop rotations, and identifying violations of crop rotation practices, which can lead to soil degradation and reduced yields. Earth Observation (EO) data is a key resource for creating such maps, as it eliminates the need for frequent in-situ crop mapping and enables monitoring at a much larger scale. This study addresses the crop type classification problem as a computer vision task, and specifically as an image segmentation problem, using multi-temporal satellite imagery from the Sentinel-2 MSI mission. The study compares three variations of the well-known U-Net architecture. The baseline architecture is the standard U-Net model with 3D convolutions. The second architecture compresses the time dimension in the encoder part of the network, while the third one incorporates attention modules at the skip connections of the baseline U-Net. 
Given the highly imbalanced nature of the reference dataset, this study explores suitable loss functions to account for the crucial issue of data imbalance. In particular, the focal loss function, with different gamma values, is used in the training step of each of the above-mentioned architectures. Results indicate that a gamma value equal to 3 yields the best performance across the evaluation metrics examined (i.e., accuracy, precision, F1-score, and Kappa coefficient). Furthermore, performance assessment of the architectures with a gamma value of 3 reveals that the overall best-performing model is the second one, which compresses the time dimension in the data cube, yielding F1-score = 0.83, Kappa coefficient = 0.70, precision = 0.86, and accuracy = 0.82.
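For readers unfamiliar with the focal loss used above, a minimal NumPy sketch (our own illustrative implementation, not the study's code) shows how the focusing parameter gamma down-weights easy pixels so that rare, hard classes dominate the loss:

```python
import numpy as np

def focal_loss(probs, targets, gamma=3.0, eps=1e-7):
    """Multi-class focal loss averaged over pixels.

    probs   : (N, C) predicted class probabilities per pixel
    targets : (N,)  integer class labels
    gamma   : focusing parameter; gamma = 0 recovers cross-entropy
    """
    p_t = np.clip(probs[np.arange(len(targets)), targets], eps, 1.0)
    # (1 - p_t)^gamma shrinks the contribution of well-classified pixels,
    # so hard, often minority-class, pixels dominate the gradient.
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

# One easy pixel (p_t = 0.9) and one hard pixel (p_t = 0.3): with gamma = 3
# the easy pixel contributes almost nothing to the loss.
probs = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
targets = np.array([0, 0])
loss_ce = focal_loss(probs, targets, gamma=0.0)   # plain cross-entropy
loss_fl = focal_loss(probs, targets, gamma=3.0)
```

In a training loop the same weighting is applied per pixel of the segmentation output before averaging.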
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Land transformation across agroecological zones reveals expanding cropland and settlement at the expense of tree-cover and wetland areas in Nigeria

Authors: Felicia O. Akinyemi, Chinwe Ifejika Speranza, Sai Ganesh Veeravalli
Affiliations: Karlstad University
Evaluating how land cover is being transformed is essential to identify patterns necessary to infer the change trajectories and the driving factors. This study considers the case of Nigeria, where various natural ecosystems are being converted and for which a current national scale assessment at high spatial resolution is lacking. Producing 30 m Landsat-based time-series data, we analyze change among land cover types (i.e. tree-covered area, grassland, wetland, waterbody, cropland, artificial surface, and otherland) across seven agroecological zones. The annual change intensity was assessed at multiple levels across two time intervals (i.e. 2000–2013, 2013–2022). Distinguishing between natural land cover and human activity-related land-use, we estimate the extent of change, signifying how humans have appropriated natural land cover. Insights from analysis at the interval level reveal that land transformation accelerated from 3.3% in 2000–2013 to 4.5% during 2013–2022 in all agroecological zones (e.g. rainforest, mangrove), except in Sudan savannah and Sahel savannah where the pace of change was higher in 2000–2013 as grasslands were increasingly cultivated. Cropland expanded almost two-fold (22% to 37%), whereas tree-cover declined from 50% to 31% and wetland from 7% to 3.7% over the 23 years. Much loss of natural land cover (e.g. tree-cover, grassland, and wetland) to cropland occurred in 2000–2013 (22%) when most irrigation schemes in Nigeria were established. In contrast, the loss of natural land cover to settlement (0.9%) during 2000–2013 increased to 2.0% in 2013–2022. Of all agroecological zones, the mangrove zone was most disturbed as its persisting land cover areas reduced from 69% to 5% between 2000–2013 and 2013–2022. The amount of persisting land cover was highest in the Sudan savannah at 44% in 2000–2013 and 49% in 2013–2022. 
Processes of human-appropriated natural land cover in Nigeria are related to urbanization and cropland expansion into natural areas with some instances of natural regeneration, especially in croplands and abandoned settlement areas.
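The interval-level intensities quoted above follow the standard definition of annual change intensity (percent of the landscape changing per year); a small sketch with hypothetical figures, not the study's actual areas:

```python
def annual_change_intensity(changed_area, total_area, years):
    """Interval-level intensity: percent of the landscape changing per year."""
    return 100.0 * changed_area / (total_area * years)

# Hypothetical numbers: 42,900 km² changing within a 100,000 km² zone
# over the 13-year interval 2000-2013 gives 3.3 %/yr.
intensity = annual_change_intensity(42_900, 100_000, 13)
```

Comparing this per-year rate across intervals of unequal length (13 vs. 9 years here) is what allows the speed of change in 2000–2013 and 2013–2022 to be contrasted fairly.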
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Time-series of Landsat-based bi-monthly and annual spectral indices for continental Europe for 2000–2022

Authors: Davide Consoli, Xuemeng Tian, Florian Schneider, Leandro Parente, Murat Şahin, Yufeng Ho, Robert Minařík, Tomislav Hengl
Affiliations: Opengeohub, Wageningen University & Research, Thünen Institute of Climate-Smart Agriculture
We describe the production and evaluation of an Analysis Ready and Cloud Optimized (ARCO) data cube for continental Europe (including Ukraine, the UK, and Turkey), derived from the Landsat Analysis Ready Data version 2 (ARD V2) produced by the Global Land Analysis and Discovery (GLAD) team and covering the period from 2000 to 2022. The data cube consists of 17 TB of data at a 30 m resolution and includes bimonthly, annual, and long-term spectral indices on various thematic topics, including: surface reflectance bands, Normalized Difference Vegetation Index (NDVI), Soil Adjusted Vegetation Index (SAVI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), Normalized Difference Snow Index (NDSI), Normalized Difference Water Index (NDWI), Normalized Difference Tillage Index (NDTI), minimum Normalized Difference Tillage Index (minNDTI), Bare Soil Fraction (BSF), Number of Seasons (NOS), and Crop Duration Ratio (CDR). The data cube was developed with the intention to provide a comprehensive feature space for environmental modeling and mapping. The quality of the produced time series was assessed by: (1) assessing the accuracy of gap-filled bimonthly Landsat data with artificially created gaps, (2) visual examination for artifacts and inconsistencies, (3) plausibility checks with ground survey data, and (4) predictive modeling tests, with examples for soil organic carbon (SOC) regression and land cover (LC) classification. The time series reconstruction demonstrates high accuracy, with RMSE below 0.05 and R² above 0.6 across all bands. The visual examination indicates that the product is complete and consistent, except for winter periods in northern latitudes and high-altitude areas, where high cloud and snow density introduce significant gaps and hence many artifacts remain. The plausibility check further shows that the indices logically and statistically capture the processes. 
The BSF index showed a strong negative correlation (-0.73) with crop coverage data, while the minNDTI index had a moderate positive correlation (0.57) with the Eurostat tillage practices survey data. The detailed temporal resolution and long-term characteristics provided by different tiers of predictors in this data cube proved to be important for both soil organic carbon regression and LC classification experiments based on the 60,723 LUCAS observations: long-term characteristics (tier 4) were particularly valuable for predictive mapping of SOC and LC, ranking at the top of the variable importance assessment. Crop-specific indices (NOS and CDR) provided limited value for the tested applications, possibly due to noise or insufficient quantification methods. The data cube is made available under a CC-BY license and will be continuously updated.
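Several of the indices in the cube are simple band combinations; for instance, NDVI and SAVI can be computed from red and NIR surface reflectance as follows (an illustrative NumPy sketch with toy reflectance values, not the production pipeline):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with soil-brightness factor L."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Toy surface reflectance: a vegetated pixel and a bare-soil pixel
nir = np.array([0.45, 0.25])
red = np.array([0.08, 0.20])
v = ndvi(nir, red)   # high for vegetation, near zero for bare soil
s = savi(nir, red)
```

The soil-brightness correction L keeps SAVI from inflating over sparsely vegetated, bright-soil pixels, which matters for bare-soil-oriented layers like BSF.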
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Crop Cover Estimation Using Satellite Images

Authors: Elija Albert, Lukas Heubach, Marco Schmidt
Affiliations: University of Würzburg
Monitoring food and agricultural systems is increasingly critical to addressing the dual challenges of climate change and global population growth. Ensuring food security requires a corresponding increase in agricultural productivity, a goal made challenging by resource constraints and environmental pressures. Accurate crop cover estimation plays a pivotal role in identifying yield bottlenecks early, enabling timely and effective interventions to safeguard and enhance food production. The EU28 and the United States, as major global producers and exporters of agricultural food, particularly wheat, differ significantly in agricultural factors like climate, soil, and field sizes. These differences pose challenges for applying field-monitoring models across regions. Models designed for the expansive, uniform fields of the U.S. may not seamlessly transfer to the smaller, more heterogeneous fields in Europe. This study focuses on adapting such models, specifically using a Convolutional Neural Network (CNN) initially developed for the U.S., to estimate winter and spring wheat crop cover in Germany and France. Leveraging satellite data (MOD13Q1) and statistical data from Agreste (France) and the Federal Statistical Office (Germany), the CNN generates district-level crop estimates, exploring its applicability in European contexts. The model produces both statistical outputs and a detailed map of crop coverage. Validation is conducted on a per-country basis, largely by comparing statistical and pixel-wise validation results across multiple training epochs. Furthermore, the study evaluates the model's transferability to new regions by applying the CNN trained on data from France to the German dataset. Our results demonstrate the robust potential of the adapted CNN for accurate crop coverage estimates, with only minimal deviations from the ground truth statistical data published by Agreste and the Federal Statistical Office. 
Despite some statistical outliers, the model consistently delivers high accuracy. Epoch-wise analyses revealed insights into the development of the correlation between statistical and pixel-wise performance. We identified cloud coverage as a key factor influencing predictions, emphasizing the importance of advanced preprocessing for challenging regions. Most notably, transfer validation between France and Germany achieved results nearly as good as those from direct training, significantly surpassing previous U.S. studies. This highlights the CNN's ability to excel in regions with similar climates, paving the way for reliable, scalable applications in global agricultural monitoring. Further preprocessing and refinement are suggested to enhance performance. Our findings highlight the potential of transfer learning for agricultural monitoring and emphasize the importance of collaboration to address global food security.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: An Integrated Framework for Multi-Scale Crop Residue Estimation Using Earth Observations, Deep Learning and Machine Learning

Authors: Saeid Homayouni, Miss Maryam Rahimzad
Affiliations: National Institute Of Scientific Research, Center For Water, Earth, Environment
Effective management of agricultural lands requires precise and scalable estimation of cropland cover, including crop residue, crop cover, and soil. This study introduces an integrated framework that combines the strengths of a deep learning-based smartphone module and a machine learning-based satellite module to address this challenge. This novel approach leverages high-resolution ground-based data and Sentinel-2 satellite imagery to improve residue mapping, land conservation strategies, and sustainable agricultural practices. The smartphone-based module is built around RAUNet, a lightweight deep learning model designed to efficiently segment crop residue, crop cover, and soil. RAUNet outperforms traditional models like UNet and RUNet in recognizing intricate textures and adapting to diverse field conditions. Its lightweight architecture ensures scalability and suitability for on-field applications, paving the way for user-friendly mobile apps that empower farmers to act as citizen scientists. By enabling farmers to collect residue data through smartphones, RAUNet facilitates precision agriculture while significantly reducing the reliance on expensive measurement techniques and manual processing. The residue estimates generated by RAUNet are further used as high-quality training data to develop satellite-based machine learning models. This satellite module integrates Google Earth Engine (GEE) with machine learning algorithms to map residue and cropland cover at larger scales using Sentinel-2 time-series data. Ground truth data provided by RAUNet enhances the accuracy and reliability of satellite-based models, ensuring robustness under varying crop types, soil conditions, and climatic scenarios. The satellite module simplifies processing by transforming traditional regression tasks into classification tasks while maintaining precision in cropland cover estimations. 
A key innovation of this study is developing a web-based application powered by GEE, which provides decision-makers with easy access to residue and cropland cover maps derived from Sentinel-2 temporal median mosaics. These maps support land management, soil conservation policies, and sustainable agricultural practices. Integrating RAUNet and satellite-based machine learning models offers a seamless transition between local-scale precision and regional-scale monitoring, addressing immediate and long-term agricultural and environmental challenges. This framework showcases a high degree of innovation by combining advanced artificial intelligence techniques with Earth observation data to create a scalable, cost-effective solution for agricultural residue mapping. Validation across diverse datasets and field conditions confirms the proposed methodology's technical correctness and reliability. The results presented in this work have significant implications for science, policy-making, and operational applications, providing a robust tool for advancing sustainable agriculture and environmental resilience.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Monitoring Soybeans and Ozone Relationship with TROPOMI Solar-Induced Fluorescence

Authors: Luka Mamić, Mj Riches, Delphine K. Farmer, Rose K. Rossell, Francesco Pirotti
Affiliations: Department of Civil, Building and Environmental Engineering, Sapienza University of Rome, Department of Chemistry, Colorado State University, Department of Land and Agroforestry Systems (TESAF), University of Padua, Interdepartamental Research Centre in Geomatics (CIRGEO), University of Padua
Worldwide increases in air pollution threaten agricultural productivity and food security. Among air pollutants, ground-level ozone (O₃) is considered one of the most important stressors to crop production globally. Even short-term periods of high O₃ concentrations can cause significant losses in crop production. Soybeans are particularly sensitive to O₃ damage, as shown in various field and laboratory studies. Plant stress is defined as a state of applied force followed by a strain phase, which is an expression of stress before damage occurs. The damage phase is the final state of acute and chronic damage visible on the leaf surface. Traditional remote sensing indices such as the Normalized Difference Vegetation Index (NDVI) allow us to detect the damage phase of a plant as a change in the "greenness" of the canopy, but by that time these morphological changes are already irreversible. However, if the stressor is removed before the damage phase, the plant will regenerate and move to a new physiological standard (Meroni et al., 2009). Solar-induced fluorescence (SIF) holds great promise for the early detection of physiological stress in plants. Measured SIF from the ground has been successfully used to indicate the strain phase (Wu et al., 2024). With the advancement of satellite-based SIF measurements, such as those from the TROPOspheric Monitoring Instrument (TROPOMI), there is potential to detect the strain phase of crop stress over large spatial scales. Therefore, this study addresses two main questions: (I) Can we detect periods of soybean stress using satellite SIF measurements? (II) Can we attribute soybean stress to specific stressors such as heat, drought, or ozone? This study uses several datasets to carry out the analysis. Daily ground-level O₃ concentrations are derived from nearby EPA monitoring stations. 
Daily SIF data at 743 nm were derived from the ESA-TROPOSIF project, which produced a global SIF product from Sentinel-5P TROPOMI data. Daily meteorological variables, including temperature, relative humidity (RH), vapor pressure deficit (VPD), and precipitation, were obtained from the University of Idaho Gridded Surface Meteorological Dataset (GRIDMET). In addition, eight-day Gross Primary Productivity (GPP) and four-day Fraction of Photosynthetically Active Radiation (fPAR) were derived from the MODIS datasets. Due to the coarse spatial resolution of the TROPOMI SIF dataset (~7 × 3.5 km), our primary study sites were selected to include large soybean fields in Arkansas, while fields in Ohio were chosen for validation purposes. The study focused on the soybean growing season (May to August) from 2018 to 2021. To improve the precision of the SIF data and calculate the Standardized Precipitation-Evapotranspiration Index (SPEI), all datasets were aggregated to a weekly scale, resulting in 70 weeks of observations. To capture the interactive effects of O₃ with weekly maximum temperature (Tmax), mean VPD, and mean RH, three new indices were calculated:
- Normalized Ozone Temperature Index (NOTI): NOTI = (normalized O₃ - normalized Tmax) / (normalized O₃ + normalized Tmax)
- Normalized Ozone Vapor Pressure Deficit Index (NOVPDI): NOVPDI = (normalized O₃ - normalized VPD) / (normalized O₃ + normalized VPD)
- Normalized Ozone Humidity Index (NOHI): NOHI = (normalized O₃ - normalized RH) / (normalized O₃ + normalized RH)
Preliminary results indicate that soybeans in Arkansas and Ohio have a clear seasonal SIF signature, peaking around the 200th day of the year (DOY), and thus are distinguishable from other regional crops. Annual average TROPOMI SIF is correlated with annual soybean yields at the county level (R = 0.90 in Arkansas; R = 0.41 in Ohio) based on data from the United States Department of Agriculture (USDA). 
Our analysis shows that soybeans are sensitive to ambient O₃ levels during the growing season, even at low weekly average concentrations of 40 ppb. Soybeans in Arkansas did not experience severe stress, supported by a strong correlation between SIF and fPAR (R = 0.87) or SIF and GPP (R = 0.89), and no significant heat or drought stress from 2018 to 2021. Averaged normalized O₃ and SPEI values indicate a decrease in SIF during wet periods with high O₃ levels. The calculated NOTI index suggests that SIF is reduced by almost 0.5 units when temperature and ozone levels are both high. NOVPDI shows that under high ozone and high VPD conditions, there is a reduction in SIF of about 0.4 units, indicating that ozone can damage plants through non-stomatal uptake (Clifton et al., 2020). On the other hand, NOHI shows that under high ozone and high relative humidity (≥ 78%), SIF decreased by about 0.4 units, suggesting that very humid conditions may further promote ozone uptake and plant stress (Kavassalis & Murphy, 2017). To further validate these findings, we propose to conduct controlled laboratory experiments to test the observed relationships between low O₃ exposure and soybean stress. In addition, we propose to use machine learning techniques to downscale the spatial resolution of the TROPOMI SIF data, possibly using high-resolution Sentinel-2 or other datasets, so that these methods can be applied to smaller fields. Extending this methodology to other crop types or forested areas could provide broader insights into the effects of ozone and environmental stressors on vegetation health.
References:
Clifton, O. E., Fiore, A. M., Massman, W. J., Baublitz, C. B., Coyle, M., Emberson, L., ... & Tai, A. P. (2020). Dry deposition of ozone over land: processes, measurement, and modeling. Reviews of Geophysics, 58(1), e2019RG000670.
Kavassalis, S. C., & Murphy, J. G. (2017). Understanding ozone‐meteorology correlations: A role for dry deposition. 
Geophysical Research Letters, 44(6), 2922-2931.
Meroni, M., Panigada, C., Rossini, M., Picchi, V., Cogliati, S., & Colombo, R. (2009). Using optical remote sensing techniques to track the development of ozone-induced stress. Environmental Pollution, 157(5), 1413-1420.
Wu, G., Guan, K., Ainsworth, E. A., Martin, D. G., Kimm, H., & Yang, X. (2024). Solar-induced chlorophyll fluorescence captures the effects of elevated ozone on canopy structure and acceleration of senescence in soybean. Journal of Experimental Botany, 75(1), 350-363.
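The three interaction indices defined above share one normalized-difference form; the sketch below illustrates it (the helper names are our own, and min-max scaling is assumed for the "normalized" values, which the abstract does not specify):

```python
import numpy as np

def minmax(x):
    """Scale a weekly series to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def ozone_interaction_index(o3, covariate, eps=1e-9):
    """Shared normalized-difference form of NOTI, NOVPDI and NOHI:
    (norm O3 - norm cov) / (norm O3 + norm cov)."""
    n_o3, n_cov = minmax(o3), minmax(covariate)
    return (n_o3 - n_cov) / (n_o3 + n_cov + eps)

# Toy weekly series: pairing ozone with weekly maximum temperature -> NOTI
o3 = [30.0, 40.0, 55.0, 38.0]
tmax = [22.0, 28.0, 35.0, 30.0]
noti = ozone_interaction_index(o3, tmax)   # values bounded in [-1, 1]
```

Substituting VPD or RH for `tmax` yields NOVPDI or NOHI with no other change, which is why the three indices can be compared on a common [-1, 1] scale.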
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Estimating soil properties under different moisture conditions using Vis-NIR-SWIR reflectance spectroscopy

Authors: Ms Theresa Strobl, Doctor Francesco Vuolo, Professor Franz Zehetner
Affiliations: University of Natural Resources and Life Sciences, Vienna
Predicting soil properties using reflectance spectroscopy in the Vis-NIR-SWIR range (350-2500 nm) has gained importance in recent decades. Its key advantages over conventional wet chemistry methods are the low costs, rapid and non-destructive measurements as well as the possibility of simultaneous assessment of multiple properties. The technology is especially interesting for large-scale soil monitoring efforts, the assessment of carbon stocks or precision farming applications. The aim of this study was to investigate the prediction of soil properties, such as soil organic carbon (SOC), clay, CaCO3 and pH, under various moisture contents, i.e. 0.4-45.3%, using hyperspectral data. Soil surface reflectance of 75 samples was measured in the laboratory with a spectroradiometer (Spectral Evolution PSR 2500) from 56 different viewing angles and analysed in RStudio using multivariate statistical methods (PLSR, PCR). Results show that the inclusion of moist soil samples in the calibration data set allows for moderate to good prediction of soil properties without corrections despite their variance in water content. The choice of model and data transformations (e.g. log) can improve the results, but overall, the different specifications yield comparable results. The best performing models for SOC and clay content (both in %), as well as for pH (-), exhibited an RMSEtest of 0.39, 5.26, and 0.34, respectively. For soils containing very little or no CaCO3, prediction proves challenging, hinting at complex interactions between various soil components. All results need to be interpreted carefully in the context of their application. Overall, the laboratory method appears promising for supporting and implementing policies related to soil health monitoring, as well as for providing valuable information about the resilience and sustainability of agricultural systems. 
However, for field measurements it is important to consider not just plant cover, crop residues, and surface roughness, but also soil heterogeneity and the spatial resolution of sensors.
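The multivariate calibration described here can be illustrated with a minimal principal component regression (PCR; PLSR is analogous but supervises the projection) on synthetic "spectra". All names and data below are illustrative, not the study's:

```python
import numpy as np

def pcr_fit_predict(X_train, y_train, X_test, n_components=3):
    """Minimal principal component regression: project spectra onto the
    leading principal components, then fit ordinary least squares there."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:n_components].T                    # loadings: (bands, components)
    T = (X_train - mu) @ P                     # calibration scores
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(T)), T], y_train, rcond=None)
    T_new = (X_test - mu) @ P
    return np.c_[np.ones(len(T_new)), T_new] @ coef

# Synthetic "spectra": 50 bands driven by 3 latent factors; the target soil
# property is a linear combination of those factors.
rng = np.random.default_rng(0)
latent = rng.normal(size=(60, 3))
loadings = rng.normal(size=(3, 50))
X = latent @ loadings + 0.05 * rng.normal(size=(60, 50))
y = 2.0 * latent[:, 0] - latent[:, 1]
y_hat = pcr_fit_predict(X[:40], y[:40], X[40:], n_components=3)
rmse = float(np.sqrt(np.mean((y_hat - y[40:]) ** 2)))
```

Projecting hundreds of collinear reflectance bands onto a few components is what makes calibration feasible with only 75 samples, at the cost of having to choose the number of components carefully.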
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Farmland infrastructure: a new manifestation for observing human activities

Authors: Qiangyi Yu, Dr. Zhanli Sun, Prof. Dr. Daniel Müller, Dr. Liangzhi You, Prof. Dr. Wenbin Wu
Affiliations: Chinese Academy of Agricultural Sciences, Institute of Agricultural Resources and Regional Planning, Leibniz Institute of Agricultural Development in Transition Economies (IAMO), International Food Policy Research Institute
Land systems are underpinned by the interactions between natural elements and human activities. Such interactions are particularly significant in agricultural landscapes. Humans periodically adjust their crop choices, which can markedly modify vegetation patterns in both temporal and spatial dimensions. Moreover, humans continually invest in their farmland by leveling land, building field parcels, developing irrigation and drainage systems, introducing tree and flower belts, and constructing various auxiliary facilities. Remote sensing enhances the observation of agricultural landscapes, enabling multi-faceted mapping, including land use and land cover, crop types, and cropping systems. However, these manifestations are primarily based on vegetation information, which is only indirectly related to human activities. As farmland infrastructure has directly added human-made objects, such as field blocks, roads, pathways, canals, and pivots, to the earth’s surface, the latest advancements in high spatial resolution remote sensing techniques demonstrate great potential for observing human activities in agricultural landscapes. Examples from China (detecting changes in field sizes through the national land consolidation campaign) and Ukraine (the disappearance of pivot irrigation due to social conflicts) illustrate how simple remote sensing indicators can be developed for this new perspective. In addition to independent indicators, it is even more important to develop a comprehensive indicator that allows for the measurement, evaluation, and comparison of the status of farmland infrastructure across time and space. 
This would not only help to understand the consequences of the development or abandonment of farmland infrastructure on agricultural production, ecological conservation, and economic compensation, but also further improve the sustainability of our living planet by enhancing the understanding on the interactions between natural elements and human activities in shaping land systems.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Daily High-Resolution Thermal Data: A Game-Changer for Food and Water Security

Authors: Matteo G. Ziliani, Isabela P. Velázquez, Florian Werner, Rim Sleimi, Albert Abelló, Wim
Affiliations: Hydrosat S.à R.l
Science and applications communities have emphasized the need for daily, field-scale (< 100 m) thermal infrared (TIR) data to support operations ranging from wildfire management to agriculture and drought monitoring. TIR signals vary rapidly both diurnally and between days, making frequent low-latency data essential for effective decision-making. For example, missing crop stress during critical growth stages or incorrect irrigation can result in devastating yield losses. While the commercial sector has revolutionized spaceborne observation with high-resolution visible and near-infrared (VNIR) data, equivalent progress in TIR data is still urgently needed. Hydrosat has broken through this barrier to achieve field-scale, global TIR measurements multiple times daily. The first satellite mission, successfully launched in August 2024, marks the beginning of a planned constellation of 16 small satellites, enabling unprecedented insights into agriculture and water management to enhance global food and water security. Here we present innovative applications where daily 10-meter resolution TIR and VNIR satellite data are integrated with advanced crop models (APSIM and SEBAL) to improve farmland water resource management and early, accurate predictions of end-of-season yield. By assimilating key satellite-derived indices, such as evapotranspiration (ET), cumulative drought index (CDI), and leaf area index (LAI), into crop models using state-of-the-art techniques, we refine irrigation strategies and yield estimates, demonstrating the critical role of thermal insights in monitoring crop conditions throughout the growing season. As part of this work, we include insights derived from Hydrosat’s novel satellite data, offering a glimpse into the unique potential of field-level TIR observations. 
We present results from multiple fields and crops under diverse environmental and management conditions across Europe, the US, and Asia, validated against in-situ data from commercial and research farms. Among such examples, we demonstrate field-level yield predictions for potatoes in the Netherlands, where validation against regular test-lifting data reveals accurate yield predictions from the start of yield formation until harvest. Sub-field pixel-by-pixel yield estimates at 10 × 10 m pixel size successfully capture spatial yield patterns within fields compared to yield maps collected with combine harvesters, spanning corn fields in the US and Kazakhstan. Further results highlight a 20% reduction in irrigation needs and yield predictions within 10% of observed field measurements, demonstrating the transformative potential of high-resolution TIR data for sustainable agriculture and more efficient water use.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: The Intersection of Earth Observation and Science Communications: EO for Farming Communities in Kenya and the UK.

Authors: Fiona Imbali
Affiliations: University Of Leicester
Climate change poses significant threats, particularly impacting food security and vulnerable populations worldwide. The sixth Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR6) emphasizes the urgent need for comprehensive climate action, highlighting the importance of technological innovations, policy interventions, and community engagement in addressing these challenges. Indigenous Knowledge Systems (IKS) are increasingly recognized for their valuable contributions to effective adaptation and mitigation strategies. In many rural parts of Africa, where agriculture heavily relies on rainfall, farmers continue to struggle with weather predictions. Most of these farmers are small-scale operators who face severe risks from climate-induced disruptions such as droughts, heatwaves, and crop failures. For instance, the Horn of Africa experienced a devastating drought from 2020 to 2022, resulting in over $2 billion in losses, the worst drought in 70 years. Similarly, in the UK, vulnerable groups such as women, the elderly, and individuals with pre-existing health conditions are disproportionately affected by extreme weather events, underscoring the global relevance of climate resilience. Achieving food security remains a significant challenge. According to the FAO’s 2024 report, 900 million people faced severe food insecurity, a crisis exacerbated by rising food prices and stagnant progress toward the UN Sustainable Development Goal of Zero Hunger. Earth Observation (EO) technologies, including satellite-based tools like Remote Sensing and GIS, offer innovative solutions for monitoring and mitigating climate change impacts. EO systems, such as NASA’s Landsat and the European Space Agency’s Sentinel satellites, enhance agricultural management, resource monitoring, and disaster response, enabling data-driven policy development. 
The Global Earth Observation System of Systems (GEOSS) further highlights EO's potential for fostering international cooperation and addressing global challenges. This research integrates cutting-edge EO technologies with participatory communication approaches to tackle climate-induced food insecurity. It explores how EO systems can enhance climate resilience by providing farmers with real-time weather predictions, actionable data, and effective science communication strategies. Conducted in diverse socio-ecological contexts, including the East Anglia-Fens area in the UK and the Sekerr Range in West Pokot, Kenya, the study prioritizes farmer participation in co-designing tools and frameworks, ensuring that solutions align with farmers’ needs and preferences and increasing the likelihood of successful adoption. The gendered impacts of climate change further exacerbate existing inequalities, with women in Sub-Saharan Africa facing additional barriers in agriculture, limited access to resources, and a lack of climate information. This situation undermines food security. A major technical contribution of this research is the development of a mobile application that incorporates EO-derived real-time data, such as soil moisture levels and weather forecasts, validated through pilot testing and iterative feedback from end users. The app will use data derived from JULES and a digital twin simulator that employs machine learning to provide location-specific insights, bridging the gap between EO capabilities and on-the-ground decision-making. The inclusion of Indigenous Knowledge Systems represents an innovative framework for integrating traditional knowledge with EO technologies.
By combining local insights with satellite data, the research introduces novel methodologies for understanding and addressing climate impacts on vulnerable communities. Results will offer actionable recommendations for policymakers, international organizations, and researchers, supporting the design of equitable food security policies and sustainable agricultural practices. This study is expected to contribute to the Living Planet Symposium (LPS) by showcasing a contextually adaptable communication framework for EO data. It highlights applications relevant to global science, partnerships, and decision-making, contributing to the operationalization of EO services. This work demonstrates how EO can bridge technological, social, and policy gaps to strengthen climate resilience in farming communities worldwide.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: A Global Framework for Agricultural Sustainability: The CroP Productivity Index (CPI) as a Data-Driven Solution for Agriculture Under Pressure

Authors: Lorenzo De Simone, Muhammad Fahad, Ana Paula De la O Campos, Carlos Esteban Cabrera Cevallos
Affiliations: Agrifood Economics and Policy Division (ESA). Food And Agriculture Organization of the United Nations (FAO)
Agriculture is at a critical crossroads, facing the dual challenge of meeting the growing global demand for food while mitigating its environmental impact. The need for sustainable resource management, adaptation to climate variability, and minimization of the sector’s carbon footprint underscores the urgency for innovative tools that can assess agricultural productivity at local, regional, and global scales. In response, the Crop Productivity Index (CropPI) has been developed as a dynamic and scalable solution. The CropPI is a composite index that integrates three key biophysical factors—climate, water availability, and soil conditions—into a standardized measure of crop productivity potential, enabling data-driven decision-making for food security and sustainable agricultural planning. The CropPI is calculated using the formula: CropPI = w_c·S_c + w_w·S_w + w_s·S_s, where S_c, S_w and S_s are the suitability scores for climate, water availability, and soil conditions, respectively, and w_c, w_w and w_s are their corresponding weights (30%, 50%, and 20%, respectively). These weights reflect the relative importance of each factor in determining crop productivity, prioritizing water availability as the most critical constraint. Suitability scores for climate and water availability are calculated for each month of the crop-growing season, with a penalization strategy applied to the lowest-performing month to account for limiting factors such as droughts or temperature extremes. Soil suitability is treated as a fixed factor, incorporating pH, water-holding capacity, and soil organic carbon. The CropPI relies on high-resolution Earth Observation (EO) datasets (summarized in Table 1) to compute suitability scores for climate, water, and soil. This integration of EO data enables a dynamic and spatially detailed assessment of agricultural productivity, adaptable to different crops and regions.
Unlike traditional yield forecasting models, which often rely on extensive biophysical simulations and static long-term averages, the CropPI dynamically reflects seasonal variability and limiting factors. Its application spans multiple agro-ecological zones and crop types, with a focus on maize, rice, and wheat. Validation exercises conducted in countries like Ethiopia and Malawi have demonstrated the CropPI’s robustness, with strong correlations observed between CropPI scores and independent productivity metrics, such as Net Primary Productivity (NPP) and simulated crop yields. For example, the CropPI successfully explained up to 75% of maize yield variability in Malawi, highlighting its potential as a predictive tool for agricultural planning. The operationalization of the CropPI through a web-based application enhances its accessibility and usability. This platform enables stakeholders—from policymakers to local farmers—to visualize and analyze productivity data at a high spatial resolution (250 meters) for any region worldwide from 2000 to the present. By identifying productivity hotspots and regions at risk of yield shortfalls, the CropPI supports targeted interventions, resource allocation, and the design of adaptive strategies to enhance agricultural resilience. Looking ahead, the CropPI project envisions several key enhancements to broaden its utility and precision. Future developments include integrating socio-economic factors such as irrigation, population density, market access and infrastructure quality to create a more holistic measure of agricultural potential. The addition of crop-specific growth models and irrigation / groundwater data will refine the index’s responsiveness to local agronomic practices and water stress conditions. 
Furthermore, incorporating dynamic climate projections will enable the CropPI to assess future land suitability under climate change scenarios, strengthening its role as a forward-looking tool for sustainable agricultural planning. These planned improvements underscore the project’s commitment to advancing the CropPI as a cornerstone of global efforts to enhance food security and agricultural resilience.

Table 1: EO input data

Biophysical Factor | Dataset | Resolution | Description
Climate | MOD21C3 | 1 km | Monthly mean temperature for assessing crop viability across regions.
Water Availability | MOD16A2 | 500 m | Actual/potential evapotranspiration ratio to evaluate water sufficiency.
Soil pH | OpenLandMap | 250 m | Indicates nutrient availability and suitability for crop growth.
Soil Water Holding Capacity | OpenLandMap | 250 m | Measures field capacity and water retention critical for crop productivity.
Soil Organic Carbon | OpenLandMap | 250 m | Provides insights into soil fertility and organic matter content.
Crop Masks | WorldCereal, SPAM | 10 m / 1 km | Delineates areas where specific crops (maize, rice, wheat) are cultivated.
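As an illustration of the weighted aggregation described in the abstract, the sketch below computes a CropPI-style score in Python. Only the weights (30%, 50%, 20%) and the limiting-factor idea come from the text; the exact blend between the seasonal mean and the worst month is an assumption made here for illustration.

```python
# Hypothetical sketch of the CropPI aggregation: CropPI = w_c*S_c + w_w*S_w + w_s*S_s.
# Suitability scores are assumed normalized to [0, 1]; the 50/50 blend used to
# penalize the lowest-performing month is an illustrative assumption, since the
# abstract does not specify the penalty scheme.

def crop_pi(monthly_climate, monthly_water, soil_score,
            w_c=0.30, w_w=0.50, w_s=0.20):
    """Composite productivity score for one pixel and one growing season."""
    def seasonal(scores, penalty=0.5):
        # Penalize the worst month (e.g. a drought or heat spell).
        mean = sum(scores) / len(scores)
        return (1 - penalty) * mean + penalty * min(scores)

    return (w_c * seasonal(monthly_climate)
            + w_w * seasonal(monthly_water)
            + w_s * soil_score)
```

With perfect suitability in every month and in the soil, the index reaches its maximum of 1.0; a single poor month drags the corresponding factor down, mirroring the limiting-factor logic described above.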
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Ambrosia Detection with Machine Learning and Earth Observation: Towards Predictive Management Solutions

Authors: Lea Dujic Rodic, MSc. in Geospatial Technology Management Dragan Divjak, Elena Marić
Affiliations: LIST LABS LLC
The AMSADE project (Employing Machine Learning for Ambrosia Satellite Detection using Earth Observation Data) represents a new, purely data-driven approach to the widespread ecological and health challenges posed by Ambrosia artemisiifolia, also known as common ragweed. This invasive plant of North American origin is a producer of allergens and has become a significant threat throughout Europe, particularly in Croatia, where its propagation endangers biodiversity, reduces agricultural output, and aggravates public health issues. Currently used control methods, which rely heavily on manual field surveys and community reporting, are insufficient to control its spread effectively. AMSADE intends to move from traditional detection methods to a modern, scalable system that uses Earth observation technologies and advanced ML algorithms. The project focuses on the application of Sentinel-2 satellite imagery, enriched with vegetation indices such as NDVI and moisture-related metrics, combined with historical and in-situ field data acquired from the cities of Zagreb and Porec. By integrating supervised learning techniques, the project aims to develop a highly accurate ML pipeline that detects ragweed growth areas larger than 100 m² with more than 30% coverage, achieving at least 90% detection accuracy while keeping the false positive rate below 30%. We are currently testing and refining the system's fundamental component, a class-weighted XGBoost model, which has shown promising results in addressing the challenge of imbalanced data distribution. Moving forward, we plan to enhance its capabilities and develop predictive models to ensure robust performance across diverse environmental conditions over time. In addition to model development, AMSADE includes a web-based GIS application for easy-to-use visualization and analysis of detected ragweed growth.
The app provides users, from public authorities to citizens, with the ability to interactively map, predict, and monitor ragweed locations to underpin evidence-based decision-making and resource allocation. The integrated platform can export data and might serve as an educational resource, highlighting the impacts of invasive species and their management. Preliminary evaluations underline the potential of the system to change ragweed management from a reactive to a proactive approach, enabling almost real-time observations of growth trends while significantly reducing labor and temporal costs associated with manual ground assessments. This paper not only offers a scalable solution for Croatia's pressing ragweed problem but also provides a flexible framework for addressing invasive species globally, using the combined benefits of remote sensing and machine learning to drive environmental conservation and sustainable management practices.
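The class-weighting idea behind the model described above can be sketched as follows. Since the project's actual pipeline and features are not public, this toy example uses scikit-learn's gradient boosting with per-sample weights as a stand-in for XGBoost, and the synthetic "pixels" are purely illustrative.

```python
# Illustrative stand-in for class-weighted boosting on imbalanced data:
# per-sample weights up-weight the rare "ragweed" class. The project uses
# XGBoost; scikit-learn's GradientBoostingClassifier with sample_weight
# plays the same role here. Features and labels are synthetic assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))                 # e.g. NDVI + moisture metrics
y = (rng.random(400) < 0.05).astype(int)      # ~5% positives: imbalanced

# Weight positives by the negative/positive ratio (the idea behind
# XGBoost's scale_pos_weight parameter).
w_pos = (y == 0).sum() / max((y == 1).sum(), 1)
sample_weight = np.where(y == 1, w_pos, 1.0)

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X, y, sample_weight=sample_weight)
```

Without such weighting, a boosted classifier on a 5% positive class tends to collapse toward the majority class; the weights make false negatives as costly as the many potential false positives.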
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Connecting Satellite Earth Observations to Agricultural Supply Chains: NASA Harvest Tools for Contextualizing Agricultural Production and Food Security

Authors: Michael Humber, Inbal Becker-Reshef, John Keniston, Fernanda Argueta, Blake Munshell, Balaji Manikandan, Antonio Sanchez-Galvez, Mary Mitkish, Brian Barker, Christina Justice, Kara Mobley, Soonho Kim, Christopher Justice
Affiliations: University of Maryland / NASA Harvest, IFPRI / CGIAR
For decades, Earth observations (EO) and satellite-based remote sensing have proven to be reliable and consistent tools for estimating agricultural production across large areas. These tools provide timely and valuable insights into the global supply of agricultural commodities, which in turn helps inform price discovery that ripples through global markets. However, despite the globalization of food markets, localized food insecurity often arises from an acute lack of food availability and access. Access is influenced not only by the physical presence of food but also by its price in a specific location. As a result, global assessments of food availability alone are not sufficient to assess acute issues like famine, as commodity trade is shaped by regional and seasonal dynamics. For example, while crop harvesting dates are detectable from satellite data, the export timing of the crop is influenced by the available infrastructure and market factors. Trade policies, including tariffs and agreements, further impact the flow of goods between countries. These factors significantly impact food prices and availability, yet they are not immediately observable through satellite EO alone. When considering the relationship between food production and consumption, a production shortfall in one country does not affect only that country. In reality, most countries maintain a limited number of trade partners, making import-dependent nations particularly vulnerable to food insecurity stemming from production disruptions elsewhere. At the same time, there is a substantial body of knowledge—both in peer-reviewed literature and popular media—that documents the local and global impacts of regional production changes. This knowledge can help compare current events to similar historical occurrences, providing valuable context. 
NASA Harvest is working to integrate socioeconomic, market, and trade data with EO data—particularly the GEOGLAM Crop Monitor monthly bulletins—to enhance analyses of market supply and assess the direct impacts of production shortfalls. This effort includes analyzing country-to-country trade dependencies from both import and export perspectives, evaluating the health, robustness, and diversification of national supply chain networks, and utilizing up-to-date subnational production data to link production regions to consumers and identify food security risks. By integrating this information with bespoke analysis platforms, geospatial data visualization tools, and Large Language Models (LLMs), NASA Harvest aims to bridge the gap between remote sensing data and food security analysis. This approach lays the groundwork for assessing the potential impacts of impending food production shortages. Given the persistent threats of climate change, ongoing conflicts in major grain-producing regions, and the risks of escalating trade wars, this work provides critical insights into evolving global food trade patterns. Ultimately, these efforts aim to foster a more robust understanding of risks and to support effective mitigation strategies to combat food insecurity.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: DINOSAR: Integration of Copernicus Optical and Radar Satellite Images for Sugarcane Monitoring in the Cauca Valley

Authors: Alejandro Mestre-Quereda, Juan M Lopez-Sanchez, Jiayin Luo, Arturo Villarroya-Carpio, Carlos Mosquera, Juliana Lozano, Jonathan D. Quiñones Mamian, Corné van der Sande, Andrea Idrovo, Antonia Skarli, Henk Pelgrum, Karlis Zalite, Patrick van Bergen, Annemieke Verhoek, Dirk Hoekman, Martin Vissers, Boris Kooij, Mark Noort
Affiliations: University of Alicante, AgroAP, eLEAF, SarVision, HCP International
Earth observation satellites provide useful information on crop status, which helps in organising cultivation practices, harvest planning, and yield optimisation. A project funded by the European Space Agency was carried out to establish an operational monitoring platform for sugarcane in the Cauca Valley (Colombia). That project, called CostCutting4Sugarcane, demonstrated the potential of using optical images from Sentinel-2 for this purpose. However, because of cloud cover over long periods, optical images are missing at key dates in the cultivation cycle. To overcome this issue, a new project funded by the European Union has started, DINOSAR (https://www.dinosarproject.eu), in which a methodology for integrating images from Sentinel-1 (a radar satellite, hence unaffected by clouds) with images from Sentinel-2 is being developed to improve sugarcane monitoring. In the first part of the DINOSAR project, a one-year-long in-situ campaign is being carried out in 30+ distributed fields to gather relevant reference data on the plants weekly. Using a mobile app, measurements of the number of stems per metre, plant height, plant diameter, and fresh biomass are collected. A quality check is applied to the collected data to ensure consistency and to detect potential annotation mistakes. Anomalies in the fields are also annotated, following a coding scheme designed according to the potential issues in the field. Moreover, the gathered in-situ data are visualised in an online platform developed by eLEAF (named FieldLook), together with optical and radar data, plus additional field data, to carry out the data analysis. The set of fields includes four different sugarcane varieties, three different environments, and diverse cultivation cycles, i.e. comprising new sowing and first to seventh ratoons. Moreover, both high- and low-productivity parts are sampled within a field to capture within-field variability.
In addition, the orientation of the rows with respect to the radar is accounted for, as it influences the radar response from the scene. The dataset is representative enough to develop an ad hoc physical model that links the sugarcane features with the radar observations (backscattering coefficient and repeat-pass interferometric coherence), as well as to refine the optical model previously used. In addition, the collected in-situ data will be used for validation. Leveraging these physical models for radar and optical data, an integrated algorithm based on dynamic systems is implemented to combine the model outputs from both image types and to ensure a valid and accurate source of information for the decision-making tools developed for sugarcane. The integration of the outputs from the radar and optical models, available whenever a satellite acquisition occurs, is optimally carried out in the framework of a Kalman filter. For this purpose, a crop growth model is being fine-tuned for the specific characteristics of sugarcane in Colombia, using previous experience and the in-situ data gathered in this project. The Kalman filter provides a daily or weekly estimate (depending on the end user) of key biophysical variables (e.g. biomass) by combining the prediction of the growth model (produced daily) with the estimates from the models that relate the satellite imagery to the biophysical variables. This combination properly takes into account the uncertainty associated with the growth model and the satellite data. The integration methodology is aimed at increasing productivity and reducing the costs associated with fertilization, while also decreasing the environmental impact. This is achieved by providing, throughout the growing season, estimates of the final biomass, which are directly related to yield. Finally, this methodology will be transferred to other geographical locations and other crop types.
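For a single biophysical variable such as biomass, the Kalman-filter fusion described above reduces to the standard update step below; the numbers are illustrative, not project values.

```python
# Single-variable Kalman update for the model/observation fusion described
# above: a daily growth-model prediction is corrected by a satellite-derived
# estimate whenever an acquisition is available. Numbers are illustrative only.

def kalman_step(x_pred, p_pred, z=None, r=None):
    """x_pred, p_pred: model prediction and its variance;
    z, r: observation and its variance (None on days without imagery)."""
    if z is None:
        return x_pred, p_pred          # no acquisition: keep the prediction
    k = p_pred / (p_pred + r)          # Kalman gain
    x = x_pred + k * (z - x_pred)      # blend model and observation
    p = (1.0 - k) * p_pred             # uncertainty shrinks after the update
    return x, p

# Model predicts 10 t/ha (variance 4); satellite estimate is 12 t/ha (variance 1)
x, p = kalman_step(10.0, 4.0, z=12.0, r=1.0)
```

Because the satellite estimate is assumed four times more precise than the model here, the update lands much closer to the observation (gain 0.8), exactly the uncertainty-weighted behaviour the abstract describes.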
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: 3D Radiative Transfer Modeling for Maize Traits Retrieval Across Different Growth Stages: Exploring the Complementarity of Sentinel-2 and CHIME

Authors: Romain Démoulin, PhD Jean-Philippe Gastellu-Etchegorry, PhD Sidonie Lefebvre, PhD Xavier Briottet, PhD Zhijun Zhen, PhD Karine Adeline, PhD Matthieu Marionneau, PhD Valérie Le Dantec
Affiliations: CESBIO, Université de Toulouse, CNES/CNRS/INRAE/IRD/UT3, ONERA/DOTA, Université de Toulouse, ONERA/DOTA, Université Paris Saclay, LMA2S, Université Paris Saclay, Hyperplan
# Context
Maize (Zea mays) is one of the world's most important food crops, facing growing uncertainties such as climatic events and the consequent rise in vulnerability to pests and diseases. The cereal industry constantly faces uncertainties regarding harvest dates, grain quantity, and quality. Consequently, it has become essential to monitor crops more accurately, both spatially and temporally. Satellite Remote Sensing (RS) offers significant potential for near real-time crop monitoring at a global scale through the tracking of biophysical and biochemical crop traits. Existing multispectral instruments such as Sentinel-2 (S2), with high spatial resolution (10-20 m) and revisit rate (approx. 5 days), are already used to monitor the temporal dynamics of key variables such as the Leaf Area Index (LAI) or Leaf Chlorophyll Content (LCC). The spectral information of these instruments is, however, limited in key spectral regions (typically the SWIR domain) for monitoring Leaf dry Matter Content (LMA), Leaf Water Content (LWC) and Leaf Nitrogen Content (LNC). The upcoming Earth observation mission CHIME [1] will provide richer information on crop development thanks to more than 200 contiguous narrow spectral bands, at the cost of a lower 11-day revisit rate and 30 m spatial resolution. Hyper- and multi-spectral instruments thus appear highly complementary for monitoring crop development, supporting informed decision-making and ensuring global food security. A common crop traits retrieval approach consists of inverting RS data with hybrid methods combining Radiative Transfer Models (RTMs) and Machine Learning (ML) techniques. RTMs help reduce costly and time-consuming field data collection and are used in the inversion approach to link spectral reflectance to crop biophysical and biochemical properties over the [0.4-2.5] μm domain.
RTM simulations usually represent vegetation as a horizontally homogeneous environment, while crop fields like maize typically grow on varied terrain slopes with different row orientations and spacing. Advanced three-dimensional (3D) RTMs such as DART [2] are able to simulate RS observations of complex 3D scenes with topography and atmosphere. The use of 3D RTMs provides an opportunity for a growth stage-dependent understanding of the complex interactions between crop field parameters and RS observations. It is, however, essential for operational crop traits mapping with 3D RTMs to develop semi-automatic scene generation methods able to capture the natural variability of plant architecture at different growth stages. This study has two objectives:
- Couple a dynamic 3D plant growth model with the DART RTM to accurately simulate maize field RS observations of S2 and CHIME sensors, for different growth stages and field geometries.
- Validate the complementarity of hyper- and multi-spectral instruments for maize field development monitoring from multi-temporal S2 and synthetic CHIME data.

# Data
The study site includes irrigated maize fields at different phenological stages, located north of Grosseto (42°49′47.02′′N, 11°04′10.27′′E), Italy. The RS dataset consists of two airborne HyPlant-DUAL hyperspectral images (acquired on July 7 and July 30, 2018) collected during the ESA FLEXSENSE campaign [3] and a series of bare soil ASD FieldSpec 4 spectroradiometer measurements (on July 7, 2018). Field measurements were conducted for both flights to measure leaf (LCC, LMA, LNC, LWC) and canopy (LAI) traits as well as architectural features such as plant height and density. Both HyPlant images are used to generate synthetic S2 and CHIME data. Additional level 2A S2 data are selected for the entire maize season from May to August 2018.
# Method
The DART model is combined with the DLAmaize [4] dynamic plant growth model to simulate S2- and CHIME-like maize field reflectances across multiple growth stages, leaf biochemical quantities and field geometries. A Look-Up Table (LUT) composed of 5000 samples and covering the growth of maize (LAI: 0.0-7.0 m²/m²) is simulated using HyPlant-DUAL bands (wavelengths and FWHM), then resampled to S2 and CHIME bands. A global sensitivity analysis (GSA) on the simulated LUTs is used to select a series of optimal vegetation indices (VIs) from the S2 multispectral broad bands (MBB) and the CHIME hyperspectral narrow bands (HNB). The hybrid inversion approach is performed on the selected VIs using the Kernel Ridge Regression (KRR) ML algorithm. The training data for the KRR is first optimized by selecting a subset of ≈1000 samples from the original LUT using an active learning (AL) strategy. The models trained on the S2 and CHIME LUTs are then validated against ground measurements for each of LAI, LCC, LNC, LMA and LWC, and used to generate biophysical and biochemical trait maps from synthetic S2 and CHIME data on July 7 and July 30. Finally, the KRR models trained on the S2 LUT only are used to monitor the temporal dynamics of LAI and LCC from actual S2 data acquisitions from May to August 2018.

# Results
The approach was already tested for high spatial-spectral resolution HyPlant-DUAL airborne images at 10 m resolution using the full reflectance spectrum instead of VIs. Preliminary results using the proposed method achieved high retrieval accuracy for LAI (R²=0.91, RMSE=0.42 m²/m²), LCC (R²=0.61, RMSE=3.89 µg/cm²), LNC (R²=0.86, RMSE=1.13×10⁻² mg/cm²), LMA (R²=0.84, RMSE=0.15 mg/cm²), and LWC (R²=0.78, RMSE=0.88 mg/cm²), and good spatiotemporal consistency of the trait maps for moderate to high LAI (> 1.5 m²/m²).
The findings demonstrated that integrating 3D RTMs with dynamic growth models allows high trait retrieval accuracy from hyperspectral data on heterogeneous row crops at various growth stages. S2 is expected to provide accurate LAI and LCC temporal dynamics from maize emergence to maturation. CHIME, at a lower temporal frequency, is expected to improve LAI and LCC retrieval accuracy for spatially homogeneous pixels and to broaden the panel of accessible variables with the estimation of LMA, LNC and LWC, which are challenging to estimate with the limited spectral information of S2.

# References
[1] Celesti, M., Rast, M., Adams, J., Boccia, V., Gascon, F., Isola, C., & Nieke, J. (2022). The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME): Status and Planning. IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, 5011-5014. https://doi.org/10.1109/IGARSS46834.2022.9883592
[2] Gastellu-Etchegorry, J. (1996). Modeling radiative transfer in heterogeneous 3-D vegetation canopies. Remote Sensing of Environment, 58(2). https://doi.org/10.1016/0034-4257(95)00253-7
[3] Candiani, G., Tagliabue, G., Panigada, C., Verrelst, J., Picchi, V., Rivera Caicedo, J. P., & Boschetti, M. (2022). Evaluation of Hybrid Models to Estimate Chlorophyll and Nitrogen Content of Maize Crops in the Framework of the Future CHIME Mission. Remote Sensing, 14(8). https://doi.org/10.3390/rs14081792
[4] Zhen, Z., Chen, S., Yin, T., Han, C., Chavanon, E., Lauret, N., Guilleux, J., & Gastellu, J.-P. (2024). A Dynamic L-System-Based Architectural Maize Model for 3-D Radiative Transfer Simulation. IEEE Transactions on Geoscience and Remote Sensing, 62, 1-20. https://doi.org/10.1109/TGRS.2023.3348511
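The hybrid KRR inversion step described in the Method section can be sketched as follows; the RTM-simulated LUT is replaced here by a toy saturating VI response, which is purely an assumption for illustration.

```python
# Sketch of the hybrid inversion: Kernel Ridge Regression trained on an
# RTM-simulated look-up table mapping vegetation indices to a trait (LAI).
# The saturating VI response below is a toy assumption standing in for
# DART-simulated reflectances; only the LAI range (0-7) follows the text.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
lai = rng.uniform(0.0, 7.0, 1000)         # LUT covering maize growth
vi = 0.9 * (1.0 - np.exp(-0.6 * lai))     # toy index that saturates with LAI
X = np.column_stack([vi, vi**2])          # a small set of derived "indices"

krr = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0)
krr.fit(X, lai)                           # invert: indices -> trait
pred = krr.predict(X)
```

In the study itself, the regression is trained on DART/DLAmaize simulations (with active-learning subset selection) and validated against the Grosseto field measurements rather than on its own training LUT.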
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Rice Bacterial Leaf Blight Detection Using Canopy Hyperspectral Data with Spectral Transformation Methods

Authors: Ziyi Wang, Associate Prof. Roshanak Darvishzadeh, Nancy Castilla, Alice Laborte, Andrew Nelson
Affiliations: University Of Twente, International Rice Research Institute
Rice, the staple food for over half of the global population, is vital for food security and regional economies, especially in developing countries. Bacterial leaf blight (BLB), caused by Xanthomonas oryzae pv. oryzae (Xoo), significantly reduces yields under favourable weather conditions. Timely detection of BLB is crucial for effective management. Remote sensing, especially hyperspectral measurement, offers a rapid and non-destructive alternative for identifying stress-induced variations in leaves and canopies. With hundreds of narrow spectral bands, hyperspectral data can assist in distinguishing between healthy and BLB-infected rice based on significant differences in the rice spectral signatures. This study evaluated canopy hyperspectral measurements obtained from inoculated rice fields for detecting BLB. Besides the original spectra, spectral transformation methods including the first derivative, second derivative, continuum removal (CR) and continuous wavelet transform (CWT) were tested to distinguish healthy and BLB-infected rice. The sensitivity of spectral bands to BLB was examined on both original and transformed spectra. The most sensitive spectral bands were identified using the optimal thresholding method, with the top 5% most accurate bands highlighted. Results showed that the most sensitive spectral bands were mainly identified from the original and continuum-removed spectra, compared with the other spectral transformation methods. Using the original spectra, sensitive bands were concentrated in the red and red-edge regions, while for CR spectra, sensitive bands were also concentrated in the near-infrared region. The highest classification accuracy for differentiating healthy and BLB-infected rice was achieved by the continuum-removed spectra at 970 nm (overall accuracy = 79%). These findings demonstrate the potential of CR for enhanced diagnosis of BLB in rice.
Future efforts will focus on establishing a BLB detection model using sensitive spectral features and scaling up using UAV and satellite hyperspectral imagery. We will further explore machine learning algorithms optimized for small datasets and effective feature selection methods to mitigate high-dimensional data-induced overfitting and multicollinearity.
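Continuum removal, the transformation found most effective above, divides a reflectance spectrum by its upper convex hull so that absorption features (such as the one near 970 nm) become comparable across spectra. A minimal sketch, with a toy spectrum as an assumption:

```python
# Minimal continuum removal: divide the spectrum by its upper convex hull
# (the "continuum"), so absorption dips become relative depths. The short
# toy spectra used below are illustrative assumptions.
import numpy as np

def continuum_removal(wl, refl):
    """Return reflectance divided by its upper convex hull."""
    hull = [0]
    for i in range(1, len(wl)):
        # Pop hull points that fall below the chord to the new point
        # (monotone-chain construction of the upper hull).
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            cross = ((wl[i2] - wl[i1]) * (refl[i] - refl[i1])
                     - (refl[i2] - refl[i1]) * (wl[i] - wl[i1]))
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    continuum = np.interp(wl, wl[hull], refl[hull])
    return refl / continuum

wl = np.array([1.0, 2.0, 3.0])       # toy wavelengths
refl = np.array([1.0, 0.5, 1.0])     # absorption dip in the middle
cr = continuum_removal(wl, refl)     # hull is flat at 1.0 here
```

The continuum-removed spectrum is always ≤ 1, with dips preserved relative to the local hull, which is what makes band depths comparable between healthy and infected canopies.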
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Coupling Light Use Efficiency Model and Random Forest for Improved Crop Yield and Biomass Estimation Accuracy at Field and Regional Scales

Authors: Manuele Ragazzi, Michele Croci, Andrea Ferrarini, Giorgio Impollonia, Professor Stefano Amaducci
Affiliations: Department of Sustainable Crop Production, Università Cattolica del Sacro Cuore di Piacenza, via Emilia Parmense, 84, 29122 Piacenza, PC, Italy
Agricultural digital twins are fundamental tools for monitoring and analyzing various aspects of agroecosystems, enabling the adoption of advanced management strategies in response to the growing demand for precise and efficient solutions. In particular, accurate estimation of crop yield and biomass is crucial for optimizing the efficiency of agricultural systems, such as in carbon farming initiatives and drought stress monitoring, as well as for enhancing natural resource management and addressing climate change challenges. Yield information aggregated at the end of the season and available only at a territorial scale, while useful for a general productivity overview, fails to capture intra-field variability, limiting the effectiveness of decisions at the local level. Satellite remote sensing, particularly Sentinel-2 imagery, offers high spatial and temporal resolution data that has proven effective for agroecosystem monitoring throughout the growing season. The Light Use Efficiency (LUE) model is a semi-empirical approach that can be easily implemented using remotely sensed data. Thanks to its lower number of inputs compared to more complex, mechanistic models, it requires limited calibration. Despite its advantages, the LUE model's accuracy can be limited at field or intra-field scales, particularly when applied across different regions. To enhance its performance, integrating the physiology-based LUE model with machine learning techniques, such as Random Forest (RF), is proposed. This study aims to evaluate whether integrating a LUE model with a RF approach can improve yield and biomass estimation accuracy. The LUE model assumes that plant productivity is a function of the Absorbed Photosynthetically Active Radiation (APAR) and its conversion efficiency, constrained by temperature and water-related stressors (Ts and Ws, respectively).
In this study, a LUE model was developed and calibrated for crop yield and biomass estimation at both field and regional scales. Different temperature and water stressors proposed in the literature were tested to optimize the model. The performance of the LUE model was then compared to two alternative approaches: i) a RF model in which only Vegetation Indices (VIs) time series were used as predictors and ii) a hybrid LUE-RF model in which only LUE-derived parameters (APAR, Ts, and Ws) were used as predictors in the Random Forest model. These methods will be applied to three crops – winter wheat, maize, and processing tomatoes – in the Pianura Padana region. Preliminary results for winter wheat in Piacenza province showed that combining the LUE model with RF significantly improved accuracy. The hybrid LUE-RF approach achieved a coefficient of determination (R²) of 0.64, compared to 0.06 for LUE alone and 0.56 for RF using only VIs. Correspondingly, the hybrid model reduced the normalized root mean square error (nRMSE) to 10.5%, from 29.2% for LUE alone and 11.1% for RF. These results highlight the potential for scaling the hybrid approach to larger regions, offering promising applications for regional-scale crop monitoring and management.
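The hybrid LUE-RF step described above can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the study's implementation: the APAR, Ts and Ws values, the eps_max constant, and the synthetic "observed" yields are all hypothetical stand-ins for the real LUE-derived inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 200  # synthetic parcels

# LUE-derived predictors: absorbed PAR and temperature/water stress scalars
apar = rng.uniform(100, 900, n)   # MJ/m^2 over the season (hypothetical)
ts = rng.uniform(0.5, 1.0, n)     # temperature stress scalar, 0-1
ws = rng.uniform(0.4, 1.0, n)     # water stress scalar, 0-1

# Classic LUE biomass: eps_max * APAR * Ts * Ws (eps_max hypothetical)
eps_max = 3.0  # g dry matter per MJ APAR
biomass_lue = eps_max * apar * ts * ws

# Pretend "observed" yield deviates nonlinearly from the pure LUE estimate
yield_obs = biomass_lue * 0.45 + 50 * np.sin(apar / 150) + rng.normal(0, 20, n)

# Hybrid step: the RF learns the residual structure from the LUE inputs alone
X = np.column_stack([apar, ts, ws])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, yield_obs)
print(f"Training R^2: {rf.score(X, yield_obs):.2f}")
```

The design choice mirrors the abstract: the RF sees only physically meaningful LUE quantities rather than raw spectra, so the learned correction stays tied to the model's structure.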
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Use of satellite multispectral imagery for delineation of production zones for site-specific crop management

Authors: Vojtěch Lukas, Petr Sirucek, Jan Goth, Vaclav Necid, Jiri Mezera, Lubomir Neudert, Jakub Elbl
Affiliations: Department of Agrosystems and Bioclimatology, Mendel University in Brno, World from Space s.r.o., ADW AGRO, a.s.
Current geo-information technologies used in crop production make it possible to address spatial variability and to adapt crop treatments in the form of site-specific crop management - precision agriculture. The effect of soil heterogeneity is most evident in crop yields, so identifying yield levels within fields is a prerequisite for the successful application of precision farming practices. A common method for identifying yield zones is based on the analysis of crop yield maps obtained during harvest. However, comprehensive collection of yield records on farms is rather rare, and the resulting yield records require advanced outlier and error filtering procedures. An alternative approach is the delineation of production zones, including quantification of the range of values, based on spatial analysis of time series of multispectral satellite data. Within the research projects TACR TQ03000882, NAZV QL25020034 and ESA TAVAP - Timing And Variable rate APplication at the Department of Agrosystems and Bioclimatology at Mendel University in Brno, an approach for delineating management/production zones based on satellite multispectral imagery from the Landsat and Sentinel-2 systems has been verified in recent years. A time series of cloudless multispectral scenes is selected based on the scene classification mask (L2A products calculated by the sen2cor or CFmask algorithm), mostly from the period of maximum vegetation (May to July, depending on the crop). From the multispectral bands, the vegetation index EVI is calculated, which showed the closest relationship to yield records among the vegetation indices tested. The delineation of areas can be carried out with available plot boundaries (e.g. from LPIS) or farm records. For small plots, PlanetScope multispectral imagery can be used to achieve sufficient pixel resolution. 
The final result is a determination of the productivity zones within each plot, corresponding to the distribution of yield levels. If yield maps are available, the percentage range of relative yield potential can be corrected for the actual mapped yields in the area. In addition, the stability of yield zones within the period of interest can also be determined and used to identify areas at risk of high inter-annual variability. The production zone maps are intended as a basis for site-specific crop management practices, for which information on the distribution of yield levels within plots is essential. These include (1) variable rate application of nitrogen fertiliser, where the fertiliser rate is adjusted based on the expected crop yield. The results of the validation during 2021-2023 showed a positive effect of reducing the intensity of nitrogen application in areas with lower yield potential, i.e. in areas where lower nutrient uptake can be expected due to lower expected yield in the long term. Adjusting fertiliser application rates in this way results in a balanced nitrogen budget and reduces the risk of N leaching, especially from permeable parts of the soil. The comparison of nitrogen balances showed a misallocation of 10 to 69 kg N per ha under uniform application. Similarly, the production zone maps were used for (2) variable rate application of phosphorus and potassium fertilizers. In this case, the level of fertilization was determined from the production zones to estimate the spatial distribution of nutrient uptake in the yield zones, combined with analysis of soil nutrient content by soil sampling. In the higher productivity zones, an increase in the fertilizer application rate was recommended to cover the higher nutrient uptake. This approach led to a reduction of the average nutrient dose by 15%, which represents an economic benefit of around 44 EUR per ha. 
Variable rate application of potassium fertilizer showed a reduction of the average dose by 65% in comparison to uniform application. A final example of the use of production zone maps is (3) variable sowing. For cereal crops, the strategy recommends reducing the seeding rate in more productive areas, while increasing it in less productive areas. For other crops (e.g. maize), the opposite strategy of increasing plant density in high-yielding areas is more common. This study was supported by research projects TACR TQ03000882, NAZV QL25020034 and ESA TAVAP - Timing And Variable rate APplication.
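The EVI-based zoning described above can be sketched compactly; this is an illustrative reconstruction with synthetic reflectance values and a simple tercile split, not the project's actual processing chain (the EVI formula itself is the standard one for surface reflectance).

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced Vegetation Index for surface-reflectance inputs."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

rng = np.random.default_rng(1)
# Synthetic stack of 5 cloud-free dates over a 20x20-pixel plot
nir = rng.uniform(0.3, 0.6, (5, 20, 20))
red = rng.uniform(0.03, 0.10, (5, 20, 20))
blue = rng.uniform(0.02, 0.06, (5, 20, 20))

mean_evi = evi(nir, red, blue).mean(axis=0)   # temporal mean per pixel

# Three production zones from terciles of the plot's mean EVI
t1, t2 = np.quantile(mean_evi, [1 / 3, 2 / 3])
zones = np.digitize(mean_evi, [t1, t2])       # 0 = low, 1 = medium, 2 = high
```

In practice the zone map would then be clipped to the LPIS plot boundary and exported as a prescription map for the variable-rate applicator.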
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Satellite data for the provision of early, area-wide and continuous information on crop yield estimates for agricultural statistics and policy advice

Authors: Heike Gerighausen, Denis Debroize, Patric
Affiliations: Julius Kühn Institute
Increasing climate variability and adverse weather patterns are posing unprecedented challenges to the agricultural sector, with negative impacts on farms, the entire value chain and consumer prices. When damaging events occur on a national and regional scale, federal and state governments provide support in the form of aid payments to mitigate farm losses and safeguard livelihoods. The results of the Special Crop and Quality Assessment (BEE) and the Crop and Farm Reporting (EBE), which are based on § 46 and § 47 of the Agricultural Statistics Act (AgrStatG), serve in these cases as a basis for decision-making. Reliable, regionalised in-season forecasts and the determination of the actual harvested yields of certain major crops would fill information gaps, support decision-making by federal and state authorities, and significantly improve planning in agricultural practice. However, the current estimation procedure underlying the EBE lacks fine spatial resolution and, in some regions, suffers from the partly declining participation of voluntary reporters. The SatErnte project aims to use new data sources and innovative methods from Earth observation, data science and yield modelling to further develop existing methods of official agricultural statistics, to reduce current shortcomings and to expand and improve the content, timeliness and comparability of official statistics. In SatErnte, two complementary methodological approaches, machine learning-based and process-based yield forecasts, are investigated for the provision of intraseasonal, regionalized yield forecasts, whereby we initially focus on machine learning. Managing the vast volumes of multi-source data necessary for yield modeling continues to pose a significant challenge, particularly for public authorities. 
Therefore, we utilize a cloud-integrated spatial data infrastructure which is built on interconnected components linking EO cloud computation and data storage, using the CODE-DE platform, with internal data cubes through web services. Our model-based yield estimation approach incorporates a multitude of both dynamic and static predictors. Analysis-ready multispectral Sentinel-2 data are used for space-borne retrieval of crop traits such as leaf area index and above-ground biomass. Geospatial data on meteorological time series are queried from our data cube, providing daily variables such as temperature, precipitation, and global radiation. External geospatial data on soil moisture and physicochemical soil properties are obtained from the Copernicus Global Land Service and SoilGrids 2.0 data portals, respectively. Crop-specific ML models are trained on multi-annual data (2019 - 2022) collected at agricultural parcel level for three crops, i.e. winter wheat, winter barley, and winter rape, in two German federal states. The ensemble of ML regressors employed includes gradient-boosted trees (CatBoost, LightGBM, XGBoost), Partial Least Squares, Random Forest, and Support Vector Machines. Parcel geometries obtained from the Integrated Administration and Control System (IACS) enable the spatially scaled application of trained yield models, covering larger administrative regions represented by two federal states. Overall, the most robustly performing ensemble learning technique was majority voting. RSQ values of the best performing models, inferred from cross-validation at parcel level, range between 0.66 - 0.74. Related normalized RMSE (nRMSE) values range between 13 - 17%. At district level, aggregated yield estimates compared against mean yields from official yield statistics reach RSQ values for the best performing models ranging between 0.60 - 0.89. Related nRMSE values range between 6 - 10%. 
In order to produce regionalised in-season forecasts, the established model ensembles are trained in a step-by-step manner, with each step corresponding to a specific week, using the available reference data from previous years. The step-by-step trained ensembles are then applied to the current season. This approach was applied within the selected test region and evaluated retrospectively for two years (2021 and 2022) using yield data from official agricultural statistics. Preliminary results are promising, suggesting several advantages compared to traditional yield estimation approaches, regarding area coverage, cost effectiveness, and timeliness. To overcome current deficits in terms of model precision and scalability, high-quality training data derived from representative sampling across the country, along with open access to IACS parcel geometries, are essential. This is of particular importance for the development of reliable and robust regionalised in-season forecasts.
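The step-by-step in-season scheme, one model per calendar week trained only on features available up to that week in previous years, might look like this in outline. Everything here is a hypothetical sketch: synthetic data and a plain least-squares fit stand in for the project's actual model ensembles.

```python
import numpy as np

rng = np.random.default_rng(7)
weeks = range(10, 26)                  # hypothetical forecast weeks
n_hist, n_now = 300, 50                # historical vs current-season parcels

# Cumulative weekly features (e.g., LAI, GDD, precipitation) per parcel
feats_hist = {w: rng.normal(size=(n_hist, 3)) for w in weeks}
yield_hist = rng.normal(7.0, 1.0, n_hist)          # t/ha, previous years
feats_now = {w: rng.normal(size=(n_now, 3)) for w in weeks}

forecasts = {}
for w in weeks:
    # Train only on features observable up to week w in the historical data
    X = np.column_stack([feats_hist[w], np.ones(n_hist)])
    coef, *_ = np.linalg.lstsq(X, yield_hist, rcond=None)
    # Apply the week-w model to the current season's week-w features
    Xn = np.column_stack([feats_now[w], np.ones(n_now)])
    forecasts[w] = Xn @ coef
```

Each weekly model is frozen before the current season starts, which is what makes the retrospective evaluation against 2021 and 2022 official statistics a fair test of in-season skill.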
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Enhancing Agricultural Insights: Combining Planet’s Crop Biomass with Complementary Environmental Variables

Authors: Margot Ridderikhoff, Dr. Pierre Guillevic, Dr. Benjamin Aouizerats, Master of Science Rogier Burger, Dr. Yoann Malbeteau
Affiliations: Planet Labs Pbc
Satellite-based monitoring of agricultural crops is essential for addressing the dual pressures of meeting global food demands and mitigating environmental impacts. While the Normalized Difference Vegetation Index (NDVI) has long been a cornerstone for assessing crop health, its reliance on cloud-free optical imagery limits its ability to provide continuous monitoring. Synthetic Aperture Radar (SAR), on the other hand, offers temporal consistency under all weather conditions but lacks the spatial accuracy needed for detailed intra-field assessments. The Crop Biomass, a relative measure of biomass, bridges this gap by integrating the temporal reliability of SAR data (Sentinel-1) with the spatial resolution of optical imagery (Sentinel-2 and PlanetScope). The methodology processes SAR and optical signals independently, dynamically combining them to ensure robust, high-resolution assessments suitable for diverse agricultural and environmental conditions. The recent addition of PlanetScope imagery enhances data quality in regions where Sentinel-2 alone cannot provide sufficient information and enables 3m spatial downscaling. These improvements unlock more granular insights into crop growth dynamics and expand the product's applicability to smallholder farmers while maintaining daily, cloud-free operational capabilities. A recent validation study (Guillevic et al., 2024) already demonstrated that the Crop Biomass can accurately capture changes in crop fresh biomass during the growing season and detect rapid growth variations caused by agricultural practices or environmental stresses such as nitrogen or water deficiencies. Building on this foundation, integrating the Crop Biomass with other near-real-time and daily variables, like Land Surface Temperature (LST) and Soil Water Content (SWC), can provide a more holistic understanding of crop growth and environmental interrelationships. 
Such an integrated approach enables the assessment of critical indicators, including crop heat stress, evapotranspiration rates, growing degree days, and crop water availability. By combining these variables, we generate valuable new insights into the complex dynamics of agricultural ecosystems for forecasting yield and optimizing resource management. With its cutting-edge fusion of satellite data and compatibility with additional constellations and datasets, the updated Crop Biomass product sets a new benchmark for precision agriculture. It delivers a scalable and comprehensive solution for monitoring agricultural practices, environmental stress, and climate adaptation, offering transformative insights for stakeholders across diverse agricultural systems worldwide.
References:
Guillevic, P. C., Aouizerats, B., Burger, R., Den Besten, N., Jackson, D., Ridderikhoff, M., Zajdband, A., Houborg, R., Franz, T. E., Robertson, G. P., & De Jeu, R. (2024). Planet’s Biomass Proxy for monitoring aboveground agricultural biomass and estimating crop yield. Field Crops Research, 316, 109511. https://doi.org/10.1016/j.fcr.2024.109511
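Of the indicators mentioned, growing degree days is the simplest to derive from a daily temperature series; a minimal sketch, where the base temperature and input values are purely illustrative:

```python
import numpy as np

def growing_degree_days(t_max, t_min, t_base=10.0):
    """Accumulate daily GDD: max(0, (Tmax + Tmin) / 2 - Tbase)."""
    t_mean = (np.asarray(t_max, float) + np.asarray(t_min, float)) / 2.0
    return float(np.maximum(t_mean - t_base, 0.0).sum())

# Three example days with mean temperatures of 12, 8 and 15 degrees C
gdd = growing_degree_days([16, 10, 20], [8, 6, 10])
print(gdd)  # 7.0  (contributions of 2 + 0 + 5 against the 10 degree base)
```

With a daily LST or air-temperature layer per pixel, the same accumulation runs per pixel and turns the temperature record into a crop-development clock.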
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: The capability of very high-resolution satellite imagery for early wheat rust disease detection, monitoring, and phenotyping in Ethiopia

Authors: Gerald Blasch, Ashenafi Gemechu, Tamirat Negash, Lidiya Tilahun, Fikrte Yirga Belayineh, Tadesse Anberbir, Girma Mamo, Yoseph Alemayehu, Adama Ndour, Francelino A. Rodrigues Jr, Sophie Bontemps, David P. Hodson
Affiliations: CIMMYT – Mexico, Debre Zeit Agricultural Research Center, Kulumsa Agricultural Research Center, Ethiopian Institute of Agricultural Research, CIMMYT – Ethiopia, Lincoln Agritech Ltd, Lincoln University, Université Catholique de Louvain, CIMMYT – Nepal
Wheat rusts are amongst the most devastating and damaging diseases occurring on staple food crops. In Ethiopia, stripe (yellow) rust and stem (black) rust pose a major threat to food security, with several devastating epidemics in recent history. In 2010, a stripe rust epidemic affected over 600,000 ha of wheat in Ethiopia resulting in an estimated loss of USD 250 million. In 2014, a stem rust epidemic affected over 30,000 ha with close to 100% crop loss on individual fields. Such devastating epidemics have a major impact on the food security and livelihoods of the 5 million households dependent on wheat in Ethiopia. To help prevent major disease outbreaks, early detection and timely control are crucial. Key challenges of crop disease management are early detection, monitoring, and the quantitative assessment of crop damage over large regions. Currently, no operational methods exist that exploit the potential of satellite imagery for crop disease early warning systems. Availability of very high (spatial and temporal) resolution satellite (VHRS) imagery, such as SkySat and Pléiades, provides for the first time the opportunity to develop new crop disease detection methods at early growth stages that could revolutionize disease early warning and response systems. The capability of multispectral UAV and VHRS systems was assessed as a high throughput phenotyping (HTP) and rapid early-stage disease detection tool for wheat rusts. The objectives were to (i) characterize the progressive development of yellow and stem rusts and associated grain yield loss using traditional visual disease estimations; (ii) assess the level of agreement of remotely sensed multispectral spectral features with visual disease scores of rust symptoms and their prediction capability to estimate visual disease scores at earlier growth stages, general disease progression, and grain yield; and finally, (iii) evaluate the potential to upscale UAV-based detection and phenotyping to VHRS systems. 
Two field experiments based on a modified randomized complete block design with varying irrigation and fungicide management were conducted in Ethiopia at the Kulumsa Research Center in 2020 and the Debre Zeit Research Center in 2023. In both experiments, six bread wheat varieties with differing rust resistance were planted. Multispectral VHRS imagery (e.g., SkySat and Pléiades) was acquired over the experimental sites, and weekly in-situ visual inspections by pathologists on disease severity were coordinated with satellite overpasses. Crop cuts at harvest for yield measurement were undertaken. The findings provide valuable insight into the capability of multispectral VHRS sensors for early disease detection, demonstrating the possibility of upscaling disease detection from plot to regional scales at early growth stages. This research was the first time that remote sensing detection methods using VHRS imagery have been tested for wheat stem rust. As wheat stem rust is now re-emerging as a disease of concern in Europe (and other regions), after an absence of over six decades, detection methods tested in Ethiopia may in the future find application and utility in other regions.
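Objective (ii), agreement between spectral features and visual disease scores, typically reduces to a correlation analysis between plot-level index values and pathologist scores. A schematic example with synthetic data (the NDVI-like index, slope, and noise levels are invented; wheat rust severity is commonly scored 0-100 on the modified Cobb scale):

```python
import numpy as np

rng = np.random.default_rng(3)
n_plots = 40

# Visual disease severity scores per plot (0-100 scale)
severity = rng.uniform(0, 100, n_plots)

# A synthetic vegetation index that declines with severity, plus noise
index = 0.85 - 0.004 * severity + rng.normal(0, 0.02, n_plots)

r = np.corrcoef(severity, index)[0, 1]   # Pearson correlation
print(f"r = {r:.2f}")                    # strongly negative on this toy data
```

A strongly negative r at an early growth stage is the evidence sought in the study: the sensor sees the infection before yield loss is locked in.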
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Integrating Earth Observation and Large Scale Statistical Surveys Through Spatiotemporal Modeling: A Comparative Study for Crop Type Classification Algorithms

Authors: Quentin Deffense, Boris Norgaard, Sophie Bontemps, Pr. Pierre Defourny
Affiliations: UCLouvain-Geomatics (Belgium)
The ESA Sen4Stat toolbox bridges the gap between Earth Observation (EO) data and National Statistical Office (NSO) statistics, offering an EO-compatible framework for agricultural monitoring. By leveraging such tools, users can harness the potential of machine learning (ML) models to process and analyse large volumes of geospatial data, with automatic quality control of in-situ data. This study builds on these capabilities to enhance crop type mapping, a critical component of agricultural statistics and informed decision-making. The focus extends beyond model accuracy metrics to the generation of unbiased acreage statistics, achieved by integrating spatially comprehensive Sentinel-2-based crop type maps with statistically sound in-situ data using a regression estimator. As ML becomes increasingly central to remote sensing applications, tools like Sen4Stat demonstrate their value in addressing challenges associated with EO data analysis. Among these applications, accurate crop type mapping stands out for its importance in supporting national agricultural statistics, resource management, agricultural policy planning, and food security. However, with the growing diversity of deep learning (DL) models, selecting the most suitable algorithm for such tasks has become a significant challenge since their performance and feasibility are highly influenced by the availability of in situ data and computational resources. This study systematically compares traditional ML models (Random Forest, CatBoost) with deep learning models across three categories: temporal models (e.g., GRU, Transformer), spatial models (e.g., ResUNet2D, ResNet2D), and spatiotemporal models (e.g., Transformer with spatial context, ResNet3D). Model ensembles are also explored. This work was carried out over the autonomous community of Castilla-y-Leon, Spain, using the statistical data coming from the official agricultural survey provided by the Spanish National Statistical Office. 
The analysis evaluates the trade-offs between model accuracy, computational efficiency, and robustness under varying in situ data availability scenarios and computational capacities, with a distinction between training cost and inference cost. Results reveal that the temporal dimension is the most influential for crop type classification, while the spatial dimension offers a smaller increase in accuracy, requiring only a small spatial context. The analysis highlights that traditional models outperform DL approaches in data-scarcity conditions (fewer than ~5000 training parcels, which corresponds approximately to 100 km²). In contrast, Transformers with a small spatial context excel in leveraging spatiotemporal features when both data and computational resources are abundant. The study further evaluates model performance from the perspective of statistical accuracy in acreage estimation, using regression estimators computed for each model and major crop type under varying in situ data availability scenarios. Additionally, it explores self-supervised pretraining as a strategy to mitigate in situ data limitations. To this end, the Presto model will be fine-tuned on the same dataset and compared with other approaches to assess its potential improvements in performance. This work provides a comprehensive framework for selecting and applying ML and DL models to crop type mapping, balancing accuracy, resource efficiency, and statistical robustness to support agricultural monitoring and decision-making.
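The regression estimator mentioned above corrects the survey mean with the map-derived population mean, combining the statistical soundness of the sample with the wall-to-wall coverage of the classification. A minimal numeric sketch with hypothetical areas:

```python
import numpy as np

# Survey segments: y = ground-observed crop area, x = map-derived crop area
y_survey = np.array([10.0, 12.0, 14.0])
x_survey = np.array([1.0, 2.0, 3.0])

# Map-derived mean over ALL segments (the full classified map)
x_pop_mean = 2.5

# OLS slope of y on x within the survey sample
b = np.cov(x_survey, y_survey, ddof=1)[0, 1] / np.var(x_survey, ddof=1)

# Regression estimator: shift the survey mean by the map-vs-sample gap
y_reg = y_survey.mean() + b * (x_pop_mean - x_survey.mean())
print(y_reg)  # 13.0
```

The correction term is what removes the classification bias: the map only needs to be correlated with the ground truth, not perfectly accurate, for the acreage estimate to improve.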
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: A Multi-Mission Dataset Leveraging the Synergy of CHIME and LSTM for Advanced Monitoring of Sugar Beets Over the Growing Season

Authors: Marco Spagnolli, Egor Prikaziuk, Tian Hu, Anne Schucknecht, Krzysztof Smykała, Żaneta Swacha, Christiaan van der Tol, Matthias Wocher
Affiliations: Ohb System Ag, University of Twente, Luxembourg Institute of Science and Technology, QZ Solutions Sp. z o. o
The European Space Agency's upcoming Copernicus Expansion Missions (CEMs) present unprecedented opportunities to advance current Earth Observation, particularly for applications in food systems and agriculture. The Advanced Agricultural Monitoring with Copernicus Expansion Missions (AgriCEM) project leverages this potential by focusing on two primary missions, CHIME (Copernicus Hyperspectral Imaging Mission for the Environment) and LSTM (Land Surface Temperature Monitoring), and exploring their synergies to develop innovative Earth Observation products. By integrating hyperspectral reflectance data from CHIME with thermal data from LSTM, the project aims to provide precise and comprehensive assessments of crop traits, including structure, biochemical composition, and evapotranspiration, which in turn are used to derive crop stress indicators of increasing complexity. The consortium includes a champion user that has already started using EO data as part of their business, and other stakeholders who could benefit from an early adoption of advanced satellite data products shortly after the missions become operational. Therefore, the project’s goal is to co-develop added-value products that meet the operational requirements and information needs of these stakeholders. The developments focus on one specific use case: the improved/advanced monitoring of sugar beets. The main objectives of the project are to deliver a) a representative dataset that reflects the unique capabilities of CHIME and LSTM in terms of spatial resolution, spectral coverage, and revisit time; b) a deployable application capable of producing, from said dataset, CHIME-based vegetation variables, LSTM-based temperature and evapotranspiration, and derived vegetation stress products. This contribution focuses on AgriCEM in general and the dataset creation methodology. The combined CHIME and LSTM datasets are fundamental for the project's goal to develop and validate synergistic EO products. 
Since both missions are still in the design phase, the dataset is produced by means of a complex, physically based end-to-end simulation workflow. It includes the collection of measured input data, the derivation of bottom-of-atmosphere (BOA) radiances, their propagation through the atmosphere, spaceborne sensor and downlink, and the processing within the ground segment. This comprehensive approach ensures the dataset's relevance and robustness for both scientific research and prototypes for operational applications. The input data will be provided by measurement campaigns including field and tower-based observations that will take place in Italy and Poland during the course of the project. As a backup, airborne campaign data (e.g., from the FLEXsense campaign) and existing satellite data from Sentinel-2 and EnMAP can also be incorporated. This variety of available data is provided as input to simulate the observable crop properties and the bottom-of-atmosphere radiation fluxes. For this purpose, the Soil-Canopy-Observation of Photosynthesis and Energy fluxes (SCOPE) model is used. SCOPE goes beyond a radiative transfer model and includes photosynthesis, stomatal regulation, and the turbulent energy exchange with the atmosphere. The inclusion of the full energy budget makes it possible to simulate both hyperspectral solar reflection and emitted thermal radiance with the same model representation of the crop. The BOA radiance outputs from SCOPE serve as inputs to the Remote sensing Image Simulation Environment (RISE) simulator, a generic simulator that generates a dataset reproducing the spectral, spatial, and radiometric properties of CHIME and LSTM. RISE first simulates the acquisition geometry, then an atmospheric transfer model is applied to obtain the top-of-atmosphere (TOA) radiance. A generic sensor module is set up for the simulation of both hyperspectral and thermal acquisition. 
The CHIME simulator is based on the heritage from the CHIME Optical Performance Simulator (OPSI) developed by OHB as instrument prime of the CHIME mission, while the LSTM simulator will be based on the heritage from the HIVE Payload Performance Simulator (PPS) developed by OHB, with adaptations to comply with the LSTM specifications. Finally, the ground processing chain is applied, including atmospheric correction for the hyperspectral data. Based on the simulated datasets, a set of EO products will be developed. First, vegetation variables are derived from CHIME data; land surface temperature and evapotranspiration are obtained from LSTM data; finally, the previous products are combined to obtain two types of vegetation stress products: “basic” indicators based on state-of-the-art methods, and “innovative” products resulting from the research in the frame of AgriCEM. These indicators will allow stakeholders, for example, to intervene in a timely manner in case of crop stress and avoid yield loss. Their direct involvement in the project will allow an exchange on how to best convey this valuable information to make it actionable for the end user. In summary, the AgriCEM project represents a significant step forward in the integration of hyperspectral and thermal Earth Observation data to address critical challenges in agricultural monitoring and management for one specific use case. By leveraging the synergies between CHIME and LSTM, and by creating a physics-based simulated dataset based on tailored field campaigns, the project aims to create innovative, stakeholder-oriented EO products with applications in sugar beet monitoring. AgriCEM sets the stage for the transformative potential of the Copernicus Expansion Missions, bridging the gap between the next generation of Sentinel data and practical, actionable solutions for global food security.
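A core step in any sensor simulator of this kind is resampling a fine-resolution radiance spectrum to the instrument's bands via its spectral response functions. The following generic numpy sketch assumes Gaussian response functions and an invented spectrum; it is not the OPSI/PPS/RISE implementation, just the underlying idea:

```python
import numpy as np

wl = np.arange(400.0, 1001.0, 1.0)           # nm, fine spectral grid
radiance = 0.1 + 0.05 * np.sin(wl / 80.0)    # synthetic BOA radiance spectrum

def band_radiance(wl, spectrum, center, fwhm):
    """Average a spectrum weighted by a Gaussian spectral response function."""
    sigma = fwhm / 2.355                      # FWHM-to-sigma conversion
    srf = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return float((srf * spectrum).sum() / srf.sum())

# Hypothetical band set: (center wavelength, FWHM) in nm
bands = [(490, 65), (560, 35), (665, 30), (842, 115)]
simulated = np.array([band_radiance(wl, radiance, c, f) for c, f in bands])
```

Real simulators add measured (non-Gaussian) response curves, radiometric noise, and spatial point-spread functions on top of this spectral convolution.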
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Spotting the Rust: Tracking disease progression by static and aerial hyperspectral imagery

Authors: Melina Maria Zempila
Affiliations: RAL Space, STFC, UKRI
Wheat is a globally significant commodity, integral to ensuring worldwide food security. The recent rapid dissemination of genetically diverse sexually derived Puccinia striiformis f. sp. tritici (Pst) races, commonly known as wheat rust, has led to an estimated loss exceeding 5 million tons of wheat crops—this figure being a conservative estimate. This underscores the critical need for enhanced monitoring of wheat fields to safeguard global wheat yields. This study introduces a regional monitoring tool designed to promptly and accurately detect the emergence and spread of wheat rust using static and drone imagery. To achieve these objectives, we have developed two distinct yet complementary experimental setups: 1. Laboratory Static Sampling: This involves the collection of healthy and diseased wheat samples within a controlled environment, utilizing a hyperspectral imaging system capable of capturing spatial features across 41 different wavelengths in the visible and short near-infrared spectral regions. 2. Drone Surveys: These surveys are conducted over wheat plots exhibiting varying levels of rust infection, employing the same hyperspectral imaging sensors (41 bands). This approach enables the identification of spectral and spatial features and the monitoring of disease progression at two growth stages. The datasets obtained from the in-lab sampling have enhanced our understanding of the spectral reflectance changes associated with the progression of rust in winter wheat. Utilizing drone-captured imagery, a machine learning algorithm was developed to identify varying degrees of disease presence. This classification algorithm effectively distinguished between healthy plants and those exhibiting low, medium, and high levels of disease severity. 
By leveraging spatial information from 41 distinct wavelengths, the algorithm demonstrated its potential for broader applications in agricultural monitoring and disease management, paving the way for more precise and large-scale implementation. Acknowledgements: The authors extend their gratitude to Drs. C. Nellist and A. Hubbard for granting access to NIAB field plots and conducting in-lab image capture, and Dr. M. Hamilton for supporting the drone field campaigns. Dr. H. Mortimer and J. Powell are also acknowledged for their contributions to the systems design and software development respectively. This research has received funding through the UK UKRI STFC FoodNetwork+ Proof of Concept 2022 program.
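A classifier of the kind described, mapping 41-band pixel spectra to four severity classes, can be sketched with scikit-learn. The spectra below are synthetic (each severity level simply shifts the mean reflectance), so this illustrates the pipeline shape rather than the study's trained model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_per_class, n_bands = 100, 41
classes = ["healthy", "low", "medium", "high"]

# Synthetic pixel spectra: each severity level shifts the mean spectrum down
X = np.vstack([
    rng.normal(loc=0.4 - 0.05 * k, scale=0.02, size=(n_per_class, n_bands))
    for k in range(len(classes))
])
y = np.repeat(classes, n_per_class)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # near-perfect on this cleanly separable toy data
```

On real drone imagery the per-class spectra overlap far more, which is why the study leans on the full 41-band spatial-spectral feature set rather than a single index.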
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Estimating Sunflower Biomass in Northwestern Turkey Using Satellite Imagery and Artificial Neural Networks

Authors: Berk Ulker, Burcu Suslu Algur
Affiliations: Agcurate BV
Accurate estimation of crop biomass is crucial for effective agricultural management and yield prediction. This study aims to estimate sunflower (Helianthus annuus) biomass in the Thrace region of northwestern Turkey by leveraging multi-source satellite imagery and artificial intelligence (AI) models. We utilize multiple data modalities, including multispectral imagery across the visible, near-infrared (NIR), and infrared (IR) spectra, as well as weather information. Time-series datasets are constructed from these diverse data sources to capture the phenological stages of sunflower growth. We employ multiple sequence modeling neural network architectures in this study. By utilizing interpretability methods, we study the impact of each input feature on biomass estimation, allowing us to evaluate feature relevance and identify which data sources contribute most significantly to the model's predictions. The models are trained to predict existing biomass at any point during the growing season and to forecast the maximum potential biomass by season's end. Ground truth biomass measurements obtained from field surveys will serve as benchmarks for model validation. We aim to demonstrate the potential of integrating multi-source satellite data with AI models for biomass estimation and highlight the insights gained from feature relevance analysis.
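The feature-relevance analysis described can be probed model-agnostically with permutation importance: shuffle one input, measure how much the error grows. A compact numpy sketch with synthetic data and a linear model standing in for the sequence networks (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
X = rng.normal(size=(n, 3))   # e.g., NIR index, temperature, pure noise
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, n)

# Fit a simple linear model (stand-in for the trained network)
coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), y, rcond=None)
predict = lambda M: np.column_stack([M, np.ones(len(M))]) @ coef
base_mse = np.mean((y - predict(X)) ** 2)

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break the feature's link to y
    importance.append(np.mean((y - predict(Xp)) ** 2) - base_mse)
```

The importances recover the construction: the strongly weighted feature dominates, the weak one contributes modestly, and the noise feature scores near zero.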
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Climate-conditioned satellite image time series encoding for robust crop type mapping

Authors: Vivien Sainte Fare Garnot, Philemon Thalmann, Prof. Jan Dirk Wegner
Affiliations: Dept. of Mathematical Modelling and Machine Learning (DM3L), University Of Zurich
Deep learning-based crop type mapping from satellite image time series has become a critical tool for monitoring agricultural production, enabling better resource management and food security assessments. Over the past decade, a variety of deep learning architectures have been developed to tackle the different tasks involved in crop type mapping. Arguably, these architectures can perform very well, provided that enough training data is available. However, the spatial distribution of high-quality training data is highly uneven, and it remains a challenge to develop models that generalise well to new geographical areas. Because of local variations in climatic conditions or farming practices, the same crop type can present a different temporal profile from one region to the next. As a result, the patterns that a deep learning model has learned to extract in a given spatial scope might very well fail to accurately identify the same crop types in another region. In this work, we argue that providing climatic data in addition to satellite observations enables such a model to learn more robust characterisations of crop types that generalise better to unseen regions. We consider specifically the task of semantic segmentation of crop types from Sentinel-2 image time series. We propose a simple extension to the U-TAE architecture, dubbed WU-TAE, that fuses a meteorological time series into the encoding of the satellite image time series. Our method aims to improve the robustness of crop type classification models by explicitly incorporating the environmental context in which crops grow. It relies on a separate encoding of the meteorological time series, and a multi-level and multi-temporal fusion with the satellite data. Using featurewise modulation, we fuse the climate context with each individual satellite observation and at multiple spatial resolution levels. We conduct our experiments on the PASTIS benchmark.
We extend this dataset of Sentinel-2 image time series with matching meteorological time series retrieved from the AgERA5 dataset of ECMWF. We split the dataset geographically so that the test set is entirely located in a region that was not seen at training time and that has a different climate. Our results show that our approach improves the mIoU performance by 10 percentage points compared to a model that does not have access to the meteorological context. We also show that our WU-TAE fusion method outperforms the conventional approaches of early (input) fusion and late (feature) fusion. Lastly, we investigate the contribution of the different meteorological variables to the different crop types included in the PASTIS dataset. This work highlights how beneficial climatic data fusion can be for more robust crop type mapping, and calls for further exploration of these techniques on larger datasets. Our approach contributes to the broader goal of developing sustainable and adaptable agricultural monitoring systems. The integration of climate data not only improves predictive robustness but also aligns with ecological and agronomic principles, recognizing the intrinsic link between crops and their environment. Finally, while we focused here on spatial generalisation, further research could explore how such integration improves temporal generalisation, especially in the context of climate change and increased climate variability.
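The featurewise modulation step described in this abstract can be sketched in a few lines: an encoded climate vector predicts a per-channel scale and shift that are applied to each satellite observation's feature map (FiLM-style conditioning). The tensor shapes, the plain linear projection, and all variable names below are illustrative assumptions, not the actual WU-TAE implementation.

```python
import numpy as np

def film_modulate(sat_features, climate_vec, W_gamma, b_gamma, W_beta, b_beta):
    """FiLM-style featurewise modulation: a climate embedding predicts a
    per-channel scale (gamma) and shift (beta) applied to satellite features.

    sat_features: (C, H, W) feature map for one satellite observation
    climate_vec:  (D,) encoding of the matching meteorological time step
    """
    gamma = W_gamma @ climate_vec + b_gamma  # (C,)
    beta = W_beta @ climate_vec + b_beta     # (C,)
    return gamma[:, None, None] * sat_features + beta[:, None, None]

# toy example: 8 feature channels on a 4x4 patch, 3 climate variables
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4, 4))
clim = rng.normal(size=3)
W_g, b_g = rng.normal(size=(8, 3)), np.zeros(8)
W_b, b_b = rng.normal(size=(8, 3)), np.zeros(8)
out = film_modulate(feats, clim, W_g, b_g, W_b, b_b)
assert out.shape == (8, 4, 4)
```

In a full network the same modulation would be repeated at several spatial resolution levels of the encoder, which matches the multi-level fusion the abstract describes.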
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Evaluating PlanetScope and UAV Multispectral Data for Monitoring Winter Wheat and Sustainable Fertilization Practices in Mediterranean Agroecosystems

Authors: Katarzyna Cyran, Italo Moletto-Lobos, Luciano Orden, Silvia Sánchez-Méndez, Belen Franch, Natacha Kalecinski, Francisco J. Andreu-Rodríguez, Miguel A. Mira-Urios, José A. Saéz-Tovar, Pierre C. Guillevic, Raul Moral
Affiliations: Global Change Unit, Image Processing Laboratory, University Of Valencia, Instituto de Investigación e Innovación Agrolimentaria y Agroambiental (CIAGRO-UMH), Universidad Miguel Hernández, Departamento de Agronomía, Universidad Nacional del Sur (UNS), Department of Geographical Sciences, University of Maryland, Planet Labs Germany GmbH
Winter wheat is a cornerstone of global food security, but its productivity faces challenges from climate change and unsustainable agricultural practices. This study explores the feasibility of using PlanetScope satellite imagery and UAV-based multispectral data to monitor experimental plots of winter wheat treated with different fertilization strategies in a Mediterranean agroecosystem. The experiment was conducted in Zaragoza, Spain, during the 2022-2023 growing season, in a randomized complete block design with three replicates (plot size 8 x 3 m) testing eleven fertilization treatments, ranging from organo-mineral fertilizer (OMF) pellets to conventional inorganic fertilizers, plus an unfertilized control. The latest version of PlanetScope sensors, known as Super Dove, was used in this study. The orthorectified surface reflectance products were processed by including pixels with >85% coverage of a given experimental plot. UAV flights were conducted at three critical wheat growth stages using the MicaSense RedEdge multispectral camera, which acquires information in five spectral bands at 2 cm resolution. Crop phenology, vegetation cover fraction, crop yield and grain quality components (starch content, protein content and hectoliter weight) were examined using the two datasets. Results demonstrated that PlanetScope effectively tracked key phenological metrics, including start (SOS) and end of season (EOS), with detection errors of 8-12 days compared to field data. NDVI time series derived from PlanetScope correctly captured seasonal dynamics and showed significant differences between the control and fertilized treatments, also reflecting the observed yields. PlanetScope achieved moderate accuracy in predicting crop yield (R² = 0.54), protein content (R² = 0.49), starch content (R² = 0.56) and hectoliter weight (R² = 0.51).
In contrast, UAV data, despite their higher spatial resolution, showed limited performance due to sensor calibration issues, achieving R² < 0.35 for most parameters. Notably, OMF treatments achieved yields (8183 kg/ha) comparable to conventional fertilizers (8618 kg/ha), demonstrating their potential as a sustainable alternative. These findings highlight the strengths of PlanetScope for monitoring small-scale experimental plots while indicating the need for further refinement of UAV-based tools for crop-specific applications. This research contributes to the integration of remote sensing and sustainable agricultural practices to support wheat production. The findings align with the European Union’s ‘Farm to Fork’ strategy by promoting sustainable fertilization approaches and improving food security in Mediterranean climates. Please refer to the published paper for additional information: https://www.mdpi.com/2072-4292/16/23/4474
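The phenology metrics reported above (SOS and EOS derived from an NDVI time series) can be illustrated with a simple amplitude-threshold rule. The threshold fraction, the crossing rule, and the synthetic curve are assumptions for illustration only; the abstract does not state the exact phenology algorithm used.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def season_bounds(doy, ndvi_ts, frac=0.5):
    """Estimate start (SOS) and end of season (EOS) as the first/last days
    when NDVI crosses a fraction of its seasonal amplitude (assumed rule)."""
    lo, hi = ndvi_ts.min(), ndvi_ts.max()
    thresh = lo + frac * (hi - lo)
    above = np.where(ndvi_ts >= thresh)[0]
    return doy[above[0]], doy[above[-1]]

# synthetic seasonal NDVI curve sampled every 8 days (day of year 0..232)
doy = np.arange(0, 240, 8)
ts = 0.2 + 0.6 * np.exp(-((doy - 120) / 40.0) ** 2)
sos, eos = season_bounds(doy, ts)
assert sos < 120 < eos
```

With real imagery, a smoothing step (e.g. a fitted double-logistic curve) would normally precede the threshold crossing to suppress cloud-contaminated observations.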
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Temporal analysis of the dormant season in European agricultural fields: characterization of soil management practices using Sentinel time series

Authors: Paolo Dal Lago, Nandika Tsendbazar, Lammert Kooistra, Kirsten de Beurs
Affiliations: Wageningen University
In recent decades, agricultural soils have been treated more as inert substrates than as the living ecosystems they truly are. According to the Soil Health Dashboard, 89% of European agricultural soils are considered unhealthy (Panagos et al., 2024). Erosion, pollution, compaction, and nutrient excess from input-intensive farming are major contributors to soil degradation. Agricultural practices during the dormant season - the period between harvest and sowing of a new crop - are crucial for soil management, especially when applied over several years. Maintaining healthy soils is key to addressing the challenges of sustainable agriculture under the pressure of a changing climate and a growing population. However, a method to temporally characterize soil management practices at a regional scale with field-level precision is still lacking. Soil management in arable agricultural fields is typically divided into three practices: leaving the soil bare after harvest until the next growing season, leaving crop residues on the field, or cultivating cover crops (Eurostat, 2022). The timing and intensity of tillage activities influence soil cover by removing or incorporating crop residues, affecting the extent of soil surface cover. Leaving soil bare in agricultural fields leads to erosion, nutrient loss, and threatens soil health (Lal, 2014). Crop residues and cover crops contribute to soil conservation by reducing erosion, retaining moisture, and preserving organic matter and nutrients (Elhakeem et al., 2023; Garland et al., 2021; Ranaivoson et al., 2017; Tribouillois et al., 2015). In 2016, 46% of arable land in the EU was covered by winter main crops, 23% was left bare without vegetation cover during winter, and only 8% and 7% had cover crops and plant residues, respectively (Eurostat, 2016). The remaining 16% was either not recorded or covered by multi-annual plants.
The short-wave infrared (SWIR) bands of Sentinel-2 and the Normalized Difference Tillage Index (NDTI) are sensitive to cellulose and lignin content in crop residues, making them useful for monitoring soil cover and tillage activities when clouds and moisture do not affect data quality. Sentinel-1 bands VH and VV are sensitive to vegetation and soil volume, moisture, and roughness, even under cloudy conditions. While reflectance and backscatter data have been used to characterize soil cover and tillage activities, these approaches have mostly been applied independently to single dates or specific locations. Further research is needed to develop a scalable methodology that integrates radar and optical satellite time series for analyzing agricultural management practices over multiple years. This study uses time series from ESA Sentinel 1 and 2 to characterize soil management during the dormant seasons. To develop a methodology that can be generalized across different regions, a data-driven model is calibrated and tested using LUCAS soil samples (Tóth et al., 2013) taken in European agricultural fields, which span various climates, soil types, and agricultural practices. These samples, specific to one field and one date in 2018, recorded the presence of crop residues, bare soil, or standing vegetation. Multispectral bands, radar backscatter, and vegetation indices were used as inputs to the model, which outputs the probability of a field being bare soil, covered by crop residues, or covered by vegetation on the sampled date. Weather data (from NASA POWER) and soil texture data (from LUCAS) were incorporated to account for frost, snow, cumulative precipitation, and soil texture, which are likely to influence reflectance and backscatter signals. Then, farm management data from the Netherlands and Italy’s Veneto region were collected to validate the model trained on LUCAS samples. The farms recorded sowing, harvest, and tillage dates for 2023 and 2024.
For these two years, Sentinel 1 and 2 time series were extracted, and dormant seasons were identified by combining radar and optical vegetation indices. The model is applied to each time step in the satellite time series to classify soil cover and tillage activities during the dormant periods. The accuracy of dormant season identification and classification is validated against farm management data, and the uncertainty quantified. The lack of temporal characterization of agricultural practices at scale prevents scientists and policymakers from tailoring research and policies to specific regions. LUCAS 2022 is expected to be released this year, with resampling conducted at the same locations as in previous campaigns. Including information on management practices for the years between consecutive samples could provide valuable insights into soil management trends over time.
Index terms – LUCAS, arable farming, management practices, bare soils, crop residues, Sentinel 1 and 2
References
Elhakeem, A., Porre, R. J., Hoffland, E., Van Dam, J. C., Drost, S. M., & De Deyn, G. B. (2023). Radish-based cover crop mixtures mitigate leaching and increase availability of nitrogen to the cash crop. Field Crops Research, 292. https://doi.org/10.1016/j.fcr.2022.108803
Garland, G., Edlinger, A., Banerjee, S., Degrune, F., García-Palacios, P., Pescador, D. S., Herzog, C., Romdhane, S., Saghai, A., Spor, A., Wagg, C., Hallin, S., Maestre, F. T., Philippot, L., Rillig, M. C., & van der Heijden, M. G. A. (2021). Crop cover is more important than rotational diversity for soil multifunctionality and cereal yields in European cropping systems. Nature Food, 2(1), 28–37. https://doi.org/10.1038/s43016-020-00210-8
Lal, R. (2014). Soil conservation and ecosystem services. International Soil and Water Conservation Research, 2(3).
Panagos, P., Borrelli, P., Jones, A., & Robinson, D. A. (2024). A 1 billion euro mission: A Soil Deal for Europe. European Journal of Soil Science, 75(1). https://doi.org/10.1111/ejss.13466
Ranaivoson, L., Naudin, K., Ripoche, A., Affholder, F., Rabeharisoa, L., & Corbeels, M. (2017). Agro-ecological functions of crop residues under conservation agriculture. A review. Agronomy for Sustainable Development, 37(4). https://doi.org/10.1007/s13593-017-0432-z
Tóth, G., Jones, A., Montanarella, L., & European Commission, Joint Research Centre, Institute for Environment and Sustainability. (2013). LUCAS topsoil survey: methodology, data and results. Publications Office.
Tribouillois, H., Cohan, J.-P., & Justes, E. (2015). Cover crop mixtures including legume produce ecosystem services of nitrate capture and green manuring: assessment combining experimentation and modelling. Springer, 7(27), 18. https://doi.org/10.1007/s
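The NDTI used in this abstract is a normalized difference of the two Sentinel-2 SWIR bands (B11, around 1610 nm, and B12, around 2190 nm). A minimal sketch follows; the reflectance values in the demo are illustrative, not measurements from the study.

```python
import numpy as np

def ndti(swir1, swir2):
    """Normalized Difference Tillage Index from Sentinel-2 SWIR reflectance
    (B11 ~1610 nm, B12 ~2190 nm); higher values indicate more crop residue,
    because cellulose/lignin absorption depresses B12 relative to B11."""
    swir1 = np.asarray(swir1, dtype=float)
    swir2 = np.asarray(swir2, dtype=float)
    return (swir1 - swir2) / (swir1 + swir2)

# illustrative reflectances: residue-covered field vs freshly tilled bare soil
residue = ndti(0.32, 0.24)
bare = ndti(0.30, 0.27)
assert residue > bare
```

Being a band ratio, NDTI accepts scalars or whole reflectance arrays unchanged, so the same function applies per pixel across a full Sentinel-2 scene.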
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Monitoring Moroccan Olive Groves Using Very High Satellite Imagery: A Case Study in the Ouezzane Province (Morocco)

Authors: Auréa Pottier, Dr Jean-Francois Girres, Dr Julien Blanco, Chaima Dalous, Joris Vallier, Cosma Zydek, Eliott Yvetot, Younès Hmimsa
Affiliations: SENS, IRD, CIRAD, Univ Paul Valery Montpellier 3, Univ Montpellier, ESPACE-DEV, Univ Montpellier, IRD, Univ Guyane, Univ Reunion, Univ Antilles, Univ Avignon, Maison de la Télédétection, Univ Paul Valery Montpellier 3, TEDAEEP Team Research, Abdelmalek Essaâdi University, Polydisciplinary Faculty of Larache
The olive tree, native to the Mediterranean Basin, is one of the oldest tree crops in the world and holds significant cultural and socio-economic importance in the countries of this region. In Morocco, olive groves are predominant in many agricultural landscapes, in particular in the Ouezzane province, as a result of a long agricultural history but also of more recent national agricultural programs such as the Green Morocco Plan. Ouezzane landscapes therefore offer a patchwork of different types of olive groves, from the oldest and most traditional ones characterized by large ancient trees to the newest and most industrial ones characterized by dense and irrigated plantations of smaller trees. In order to better understand the contemporary changes in olive grove management and their territorial consequences in the Ouezzane province, we aimed to rely on satellite imagery to (i) distinguish the different types of olive groves that coexist today and (ii) quantify their respective evolutions between 2013 and 2023. To do so, we used two optical images: one from the SPOT-6 satellite (6 m resolution in multi-spectral and 1.5 m resolution in panchromatic) and one from the Pléiades satellite (3 m resolution in multi-spectral and 0.5 m resolution in panchromatic). Both images were acquired in summer (July 13th 2013 for SPOT-6 and July 14th 2023 for Pléiades) in order to avoid misclassification due to the presence of grass or crops under the olive trees. The following method was first applied to a test area of 194 km² before being extended to the whole province. In a first step, we conducted a pixel-based classification to produce a 5-class land-cover map (tree, other vegetation, bare soil, built area and open water) for 2013 and 2023. From the tree class, we then used diverse spatial and geometric rules or features (distance between trees, area of the polygon, Thiessen polygons…) to distinguish the different types of olive groves.
In particular, this second step allowed us to differentiate: (1) continuous tree formations, i.e., wooded areas with a continuous tree crown cover (usually forests and matorrals); (2) linear discontinuous formations, i.e., wooded areas where tree crowns are distinct and aligned in rows or according to topography (usually more or less recent olive plantations); and (3) non-linear discontinuous formations, i.e., wooded areas with no geometric pattern (usually more ancient olive groves). Finally, accuracy and statistics were computed to assess the method's reliability. Research is still in progress, but analyses demonstrated good tree detection rates of 83% for Pléiades and 82% for SPOT-6. On the test area, 68.61 km² were detected as wooded area in 2013 (including 35.1% of continuous formations, 16.7% of linear discontinuous formations and 48.2% of non-linear discontinuous formations). In 2023, this area was estimated at 54.82 km² (including 27.1% of continuous formations, 12.7% of linear discontinuous formations and 60.2% of non-linear discontinuous formations). These results do not match the olive-growing trends of recent years promoted through the agricultural programs Green Morocco Plan (2010-2020) and its successor Green Generation (2020-2030). The area of land dedicated to olive growing has indeed increased by 20% between 2013 and 2020 according to the Moroccan Ministry of Agriculture and Maritime Fishing (2023), but our results show the opposite. However, we observed that the non-linear discontinuous formation is the predominant formation in both cases, which is in line with the trends and corresponds to the official numbers. Besides, these discrepancies in the detected surface areas can be explained by technical factors and do not necessarily reflect a real change in land cover. First, the sample used covered only a small portion of the province, while national statistics were computed on a national or regional scale.
In addition, the satellite images used have different spatial resolutions (0.5 m vs 1.5 m), leading to over- or under-detection. It is therefore necessary to apply the methodology to the entire Ouezzane province in order to compare the results with the official statistics. Finally, the use of a SPOT-7 image (same spatial resolution as SPOT-6) would allow a more accurate and reliable comparison between 2013 and 2023 by reducing discrepancies due to the use of heterogeneous data.
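The rule-based typing of tree formations can be illustrated with a toy nearest-neighbour decision rule: crowns closer than a touch distance form a continuous formation; otherwise, if most trees' two nearest neighbours lie in roughly opposite directions, the stand is counted as linear. The thresholds and the rule itself are simplified assumptions, not the study's actual feature set (which also uses polygon areas and Thiessen polygons).

```python
import numpy as np

def classify_formation(centroids, touch_dist=3.0, align_tol=15.0):
    """Toy classifier for tree formations from crown centroids (metres).
    Thresholds are illustrative, not the study's calibrated values."""
    pts = np.asarray(centroids, dtype=float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    # crowns effectively touching -> continuous canopy
    if np.median(d.min(axis=1)) < touch_dist:
        return "continuous"
    aligned = 0
    for i in range(len(pts)):
        j, k = np.argsort(d[i])[:2]          # two nearest neighbours
        v1, v2 = pts[j] - pts[i], pts[k] - pts[i]
        cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        ang = np.degrees(np.arccos(np.clip(cos, -1, 1)))
        aligned += abs(ang - 180.0) < align_tol  # neighbours on opposite sides
    return "linear" if aligned / len(pts) > 0.5 else "non-linear"

row = [(x * 6.0, 0.0) for x in range(6)]                  # planted row, 6 m spacing
scatter = [(0, 0), (9, 2), (4, 11), (13, 8), (7, 6), (15, 1)]  # no pattern
assert classify_formation(row) == "linear"
```

On real polygons the same logic would run per connected group of detected crowns, with thresholds tuned against reference plots.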
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Integrating SAR and Optical Remote Sensing for Soil Moisture Prediction in Vineyards

Authors: Ana Teixeira, Matus Bakon, Domingos Lopes, Joaquim Sousa
Affiliations: University of Trás-os-Montes and Alto Douro, Institute for Systems and Computer Engineering, Technology and Science (INESC-TEC), Faculty of Management and Business, University of Presov (UNIPO), insar.sk Ltd, CITAB – Centre for the Research and Technology of Agro-Environmental and Biological Sciences, Fundação Côa Parque
The accurate estimation of daily evapotranspiration is pivotal for effective water management in agriculture, particularly in crops like vineyards, where maintaining a precise water balance is imperative for grape quality. With climate change intensifying water scarcity, optimizing water usage has become crucial for ensuring both profitability and sustainability in agricultural production. Thus, there's a need to enhance monitoring and quantification of water usage in this sector. The integration of spatial data from Earth observation, notably Synthetic Aperture Radar (SAR), has gained prominence in monitoring environmental changes, including agricultural water usage. SAR's capacity to provide insights into vegetation status, soil moisture levels, and water usage characteristics has proven invaluable. By integrating SAR imagery with optical images and ground sensor data, decision-makers can refine irrigation practices and minimize environmental impact. This study focuses on utilizing SAR (Sentinel-1), optical (Sentinel-2), and in-situ soil moisture data from vineyards in northern Portugal to predict soil moisture values through remote sensing. The SAR dataset comprises 60 series of Interferometric Wide (IW) level-1 Ground Range Detected (GRD) data from three acquisition tracks. SAR backscatter data, known for its sensitivity to soil moisture, were processed to derive synthetic bands based on various polarizations from Sentinel-1. Additionally, optical data were used to compute indices such as the Normalized Difference Water Index (NDWI), Normalized Difference Infrared Index (NDII), and Normalized Difference Vegetation Index (NDVI). These values were extracted using a 3x3 window centered at each geographical coordinate, corresponding to the locations of sensors in two vineyards. The combined SAR, optical, and sensor data resulted in 174 samples for the year 2023, each containing 28 features. 
Subsequently, an artificial neural network model with 6 hidden layers was trained, tested, and evaluated using repeated k-Fold cross-validation. Performance metrics including Root Mean Squared Error (RMSE), R-squared (R2), and Mean Absolute Percentage Error (MAPE) were employed, yielding an R2 of 0.857, MAPE of 6.199%, and RMSE of 1.515 on the evaluation dataset. Figure 1 illustrates the graphical analysis of the model's predictions compared to sensor measurements. Future research endeavors will focus on expanding sensor data collection across various crops, augmenting sample size, incorporating topographic features, and enhancing model performance.
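The evaluation protocol described here (repeated k-fold cross-validation scored with RMSE, R² and MAPE) can be sketched as follows. The least-squares baseline merely stands in for the authors' 6-hidden-layer neural network, and the synthetic dataset mirrors only the stated dimensions (174 samples, 28 features); everything else is an illustrative assumption.

```python
import numpy as np

def rmse(y, p): return float(np.sqrt(np.mean((y - p) ** 2)))
def mape(y, p): return float(np.mean(np.abs((y - p) / y)) * 100)
def r2(y, p):   return float(1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2))

def repeated_kfold(X, y, fit, predict, k=5, repeats=3, seed=0):
    """Repeated k-fold cross-validation reporting mean (RMSE, R2, MAPE)."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            model = fit(X[train], y[train])
            p = predict(model, X[fold])
            scores.append((rmse(y[fold], p), r2(y[fold], p), mape(y[fold], p)))
    return np.mean(scores, axis=0)

# demo with 174 synthetic samples x 28 features and a least-squares baseline
rng = np.random.default_rng(1)
X = rng.normal(size=(174, 28))
y = X @ rng.normal(size=28) + 20 + rng.normal(scale=0.5, size=174)
fit = lambda Xt, yt: np.linalg.lstsq(np.c_[Xt, np.ones(len(yt))], yt, rcond=None)[0]
predict = lambda w, Xv: np.c_[Xv, np.ones(len(Xv))] @ w
mean_rmse, mean_r2, mean_mape = repeated_kfold(X, y, fit, predict)
assert mean_r2 > 0.8
```

Repeating the fold split several times, as done in the study, averages out the variance that a single random partition introduces on a dataset of only 174 samples.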
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Mapping Global Risk of Fusarium Wilt Under a Changing Climate With Remote Sensing and Earth System Modeling

Authors: Rocio Calderon, Hannah Brodsky, Sharifa Crandall, Ryan Pavlick, Natalie Mahowald, Katie Gold
Affiliations: Cornell University, Pennsylvania State University, NASA Headquarters
Increasing plant disease incidence, exacerbated by a changing climate, the globalization of food trade networks, and the evolution of new pathogen lineages, challenges the United Nations' vision of a world without poverty, hunger, and environmental degradation. Agricultural losses caused by plant diseases threaten food security by undermining regional nutrition, disrupting local economies and social structures, and contributing to biodiversity loss across managed and natural ecosystems. This is especially true for the fungus Fusarium oxysporum, a soilborne plant pathogen capable of causing significant Fusarium wilt diseases in 100+ crops and other plants worldwide. Risk assessment, typically conducted at the regional level, evaluates the likelihood and potential impact of pathogen introduction, establishment and spread, and can guide the prioritization of interventions and resource allocation. However, a lack of accessible information on global disease distribution, compounded by insufficient international coordination and communication networks, hampers the effectiveness of these endeavors. Using the plant disease triangle as a theoretical framework, we mapped global Fusarium wilt suitability with satellite-based remote sensing and modeled potential future distribution under climate change. We consolidated 3980 incidence reports from 30+ years of literature to represent virulent pathogen distribution and adjusted for publication bias. Bioclimatic, topographic, and edaphic variables represented an environment conducive to disease, along with cropland fraction and solar-induced chlorophyll fluorescence (SIF) to represent susceptible hosts. An ensemble model approach revealed that 20.7% of global land and 70.8% of cropland are currently suitable for Fusarium wilt (AUC = 0.998).
The top 6 variables included annual mean surface temperature, cropland, annual mean soil moisture, soil moisture seasonality, annual mean surface temperature diurnal range, and mean SIF of the driest quarter. SIF-related metrics were crucial for accurate suitability ranking. We then used an ensemble approach to model disease suitability change under various CMIP6 climate change scenarios. We found that suitable land and cropland will increase by 2100 under all predictions, driven by cropland expansion, and especially so in intermediate, regional-rivalry scenarios. The increase in disease suitability under all future climate scenarios suggests that global agricultural systems need to prepare for increased vulnerability to Fusarium wilt disease, especially in countries expecting significant cropland expansion and a high gridded relative deprivation index. These findings highlight the importance of using remote sensing to inform disease risk and climate modeling to guide future agricultural strategies and disease management under climate change.
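An ensemble suitability score of the kind reported here is commonly built by averaging member-model probabilities and scoring the result with AUC. The sketch below assumes a simple (optionally weighted) mean and an untied-score AUC, since the abstract does not detail the ensembling scheme; the member probabilities are made-up numbers.

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney rank-sum identity (assumes no tied scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def ensemble_suitability(member_probs, weights=None):
    """Weighted mean of the member models' suitability probabilities."""
    return np.average(np.asarray(member_probs, dtype=float), axis=0, weights=weights)

# three hypothetical member models scoring six locations (label 1 = disease reported)
labels = np.array([0, 0, 1, 1, 0, 1])
members = [[0.2, 0.4, 0.7, 0.9, 0.10, 0.6],
           [0.3, 0.2, 0.8, 0.7, 0.25, 0.7],
           [0.1, 0.5, 0.6, 0.8, 0.30, 0.9]]
suit = ensemble_suitability(members)
assert auc(labels, suit) == 1.0  # all positives outrank all negatives here
```

Averaging tends to cancel the idiosyncratic errors of individual members, which is the usual rationale for ensemble species-distribution modelling.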
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Mapping subsurface drained agricultural areas using remote sensing imagery with deep learning

Authors: Ask Holm Carlsen, Martin Rudbeck Jepsen, Rasmus Fensholt, Simon Stisen, Majken Caroline Looms
Affiliations: University Of Copenhagen, Geological Survey of Denmark and Greenland
Agriculture is the second most abundant land use in the EU and the most abundant in Denmark and many other EU countries. A prerequisite for the agricultural dominance of the terrestrial surface has been land amelioration, often in the form of drain installations. However, knowledge of the geographical extent and location of subsurface drainage systems is lacking at national scales, albeit sometimes available at farm level. Many drainage systems are old, unmaintained, and undocumented. Under-dimensioned or broken drainage systems often first become evident after large precipitation events, leading to waterlogging and potential loss in crop yield. Additionally, this knowledge gap is unfortunate, as drainage systems alter the hydrological cycle and act as effective transport corridors for excess nutrients to the aquatic recipients. Data concerning drained areas are currently lacking in hydrological modelling, thereby obscuring details in modelling nutrient flows and preventing cost-effective measures to improve ecological conditions in the recipients. Better hydrological modelling would benefit studies exploring pathways to mitigating the nutrient flux to coastal zones by allowing the modeller to identify optimal areas that could function as nitrogen retention areas. Besides facilitating nutrient transport, drainage systems also lower the water table, which in carbon-rich wetlands contributes to increased GHG emissions. Data on drainage in wetlands will correspondingly provide researchers and policy makers with data for better estimation of GHG fluxes and associated mitigation strategies. Classifying drainage systems with EO is a small but emerging research area, possibly hindered by the fact that classifying drained areas with EO is not a trivial task, since the objects in question, drainage systems, are located beneath the soil surface.
In addition, it is very time-consuming to gather large-scale training data due to the lack of information or registers about the extent and location of drainage systems. We present a novel method to classify drained areas based on 10 m resolution time series of Copernicus Sentinel-2 and aerial imagery from two real-world extreme scenarios: a very wet and a very dry growing season. We train a UNet on a stack of optical images covering the two growing seasons, combined with geodata such as derivatives of a digital elevation model and soil properties. The rationale is that incorporating time series of two growing seasons with contrasting precipitation amounts will enhance the signal of crop performance on drained versus undrained fields. Training data have been gathered from multiple field trips to farms in Denmark. The study area and output will cover the whole of Denmark.
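Assembling the model input described above amounts to stacking the two contrasting growing seasons with the static geodata into one multi-channel image before it enters the UNet. The channel layout, band/date counts, and per-channel standardisation below are assumed illustrations of that input design, not the paper's exact configuration.

```python
import numpy as np

def build_input_stack(wet_season, dry_season, dem_derivs, soil_maps):
    """Stack two growing seasons of optical imagery with static geodata
    into a single (channels, H, W) array for a segmentation CNN.

    wet_season, dry_season: (T, B, H, W) optical time series (T dates, B bands)
    dem_derivs, soil_maps:  (K, H, W) static layers
    """
    t, b, h, w = wet_season.shape
    stack = np.concatenate([
        wet_season.reshape(t * b, h, w),
        dry_season.reshape(t * b, h, w),
        dem_derivs,
        soil_maps,
    ], axis=0)
    # per-channel standardisation, as is typical before CNN training
    mu = stack.mean(axis=(1, 2), keepdims=True)
    sd = stack.std(axis=(1, 2), keepdims=True) + 1e-8
    return (stack - mu) / sd

wet = np.random.rand(4, 10, 64, 64)   # 4 dates x 10 Sentinel-2 bands (assumed)
dry = np.random.rand(4, 10, 64, 64)
dem = np.random.rand(3, 64, 64)       # e.g. slope, aspect, depression depth
soil = np.random.rand(2, 64, 64)
x = build_input_stack(wet, dry, dem, soil)
assert x.shape == (85, 64, 64)        # 2 * 4 * 10 + 3 + 2 channels
```

Flattening dates and bands into the channel axis lets a standard 2D UNet see both seasons jointly, which is what allows it to exploit the wet/dry contrast the abstract describes.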
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: MORERA: SPACE TECHNOLOGY TO SAVE WATER

Authors: Julián Cobos, Raquel Lladro Climent, LUIS PASCUAL, ANGEL ALVARO, GORKA LENDRINO
Affiliations: Thales Alenia Space
Making the most of a scarce resource such as water is the objective of MORERA, a comprehensive system for the agricultural sector that will allow the amount of water used for crops to be reduced by up to 25%. In most regions of the world, more than 70% of fresh water is used for agriculture. By 2050, feeding a world population of 9 billion, with 65% of people living in cities, will require an estimated 50% increase in agricultural production and a 15% increase in water withdrawals. Spain is a typical case of an industrialized country with a heavy agriculture industry (11% of Spanish GDP) that is still far from its hypothetical optimum performance. Some aspects need to be addressed with special care:
• 86% of the total Spanish area is at risk from climate change.
• 34% of the irrigated area in Spain is deficient.
• 15 million hectares are estimated to be fertilized annually.
• The commitments to the European Union within the Common Agricultural Policy, related to sustainable development in the 2030 Agenda.
MORERA is aimed primarily at those irrigated crops that are allocated less water quota than would be ideal: the vast majority in Spain. To this end, it brings together different Earth- and space-based technologies in order to provide irrigation recommendations and yield predictions. This will enable well-informed decisions that optimize resources and costs, thus maximizing productivity. The project, an Earth Observation software-defined system, integrates the complete value chain, from sensor to user, to provide daily recommendations for optimized water irrigation directly to farmers’ mobile phones. MORERA’s mission concept proposes a new and complementary approach to solving the main climate change challenges identified in agriculture, considering a highly intensive production system that uses a large amount of inputs (mainly water, fertilizers and energy) and targeting an optimization of resource usage to achieve “More Crop Per Drop”.
To face these challenges, MORERA is developed by a strong Spanish consortium and integrates:
• A new generation of Miniaturized Instruments for Remote Sensing operating in the Thermal InfraRed (TIR), capable of measuring crop and soil evapotranspiration with accuracy and enhanced agility.
• Artificial intelligence (AI) and Big Data techniques, to combine all relevant input sources (weather forecasts, historical data, ground sensors, satellite data, etc.) to build watering recommendations for each crop.
• Final personalized irrigation recommendations provided directly to the farmer, through a friendly interface that makes it possible to enter data and corrections and refine the predictions as per user needs. This shall allow all types of farms to implement Precision Farming techniques.
MORERA proposes a modular and scalable system based on an Artificial Intelligence core that will combine existing in situ and Copernicus data, as well as images captured by remote sensing instruments. This modular approach will allow adapting the MORERA system concept in a very flexible way to other applications related to climate change and the 2030 Agenda: desertification monitoring, identification of urban heat islands, optimization of energy consumption, disaster management (fires, volcanic eruptions, oil spills…), etc. The MORERA program was selected in 2021 as one of the “Science and Innovation Missions” of the Spanish CDTI (Center for the Technical Development and Innovation), a governmental program targeting solutions for deep social problems through innovation. It will conclude at the end of 2024 with the AI core architecture completed and TRL 6 (Technology Readiness Level) reached at TIR instrument level, thus ready for commercial exploitation based on currently available open data, and ready also for a first flight mission to improve the accuracy of predictions thanks to the compact TIR remote sensing instrument.
*Note: MORERA stands for “Monitorización del Riego Eficiente y el Rendimiento Agrícola” (Monitoring for Efficient Watering and Agricultural Yield)
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: A relational framework for investigating phenological development at landscape and field level via Sentinel-1 time series

Authors: Johannes Löw, Steven Hill, Christoph Friedrich, Insa Otte, Tobias Ullman, Christopher Conrad
Affiliations: Martin-Luther-University Halle-Wittenberg, Earth Observation Research Cluster, University of Würzburg
With increasingly dynamic weather patterns and extreme events such as droughts or intense rainfall, the ability to provide timely and spatially explicit information is critical for effective crop management. Remote sensing data have the potential to provide such information, but their use is often limited by spatial resolution, weather conditions, lighting challenges and the availability of adequate training data. Sentinel-1 (S1) data, acquired by a C-band Synthetic Aperture Radar (SAR) sensor, address some of these limitations by operating independently of cloud cover and illumination. To overcome the challenges associated with training data and to improve the tracking of crop phenological development, particularly for winter wheat, oilseed rape and sugar beet, a framework was developed to identify and assess phenological stages at the landscape scale. This framework analysed time series metrics (TSM), including breakpoints and extremes, across multiple years, relative orbits and S1 features, and linked these to growing degree days (GDD) to identify key stages in crop development. While our previous research has provided valuable insights into crop-specific tracking capabilities at the landscape level, it has not provided detailed spatial information on phenological progress and uncertainties at the field level. By comparing TSM and GDD data at both landscape and field scales, this study aims to improve the framework and establish a link between these scales. This approach allows the assessment of crop and landscape dynamics using a complementary set of S1 time series data, as opposed to traditional optical vegetation indices. The core of the analysis is an overlay of time windows that coincide with phenological developments. Individual time windows were derived by splitting the temporal distribution of TSM using a threshold based on the S1 revisit rate.
To bridge the gap between the field and landscape scales, the study calculated focal points of time windows at both scales using a combination of weighted means (for fields) and k-means clustering (for the landscape). The focal points were examined in relation to GDD distributions, providing insights into the correspondence between time windows at field and landscape level, the progression of phenological stages, and the variation within these time windows. In addition, the study analysed trends in this correspondence, identifying fields that either lagged behind or progressed ahead of the landscape-wide development. This investigation also highlighted outliers and common patterns, providing insight into which fields showed the greatest deviation from the landscape trend. Such information could be valuable in identifying fields that are more vulnerable to stresses such as drought or soil degradation. It also generated field-level data on trackable phenological progress and the inherent uncertainties. In summary, this study addresses the following key questions: How can TSM associated with landscape-level phenological development be applied to characterise phenological progression at the field level using only SAR data? And what is the explanatory power of such a framework when disaggregated to the field level? To this end, the study presents a new method that compares TSM occurrence from PolSAR and InSAR coherence data with an agro-meteorological baseline at both field and landscape scales. This novel approach assesses field-level tracking performance in relation to overarching landscape patterns, providing a more nuanced understanding of SAR time series and their application in crop phenology monitoring.
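The field-level focal point described above (a weighted mean of the dates at which time windows occur) can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; the function name and the choice of weights (e.g. a per-window confidence) are assumptions.

```python
import numpy as np

def field_focal_point(window_days, weights):
    """Weighted mean of time-window dates (day of year) for a single field.

    window_days: dates at which TSM-derived time windows occur
    weights: analyst-chosen weights per window (hypothetical, e.g. confidence)
    """
    d = np.asarray(window_days, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(d * w) / np.sum(w))
```

The landscape-level counterpart in the abstract clusters these dates with k-means rather than averaging them.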
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Assessing Sen-ET performance for crop evapotranspiration estimation in temperate climate: Lonzee case study

Authors: Shahla Yadollahi, Professor Bernard Tychon
Affiliations: University Of Liege, University of Liege
Sentinels for Evapotranspiration (Sen-ET) is a project funded by the European Space Agency (ESA). Its primary objective was to develop a practical methodology for estimating evapotranspiration (ET) at fine (tens of meters) spatial scale, utilizing combined observations from the Sentinel-2 and Sentinel-3 satellites. The product uses a two-source energy balance (TSEB) model, which separates the soil and vegetation components of the energy balance. In this study, the performance of this product for crop fields in a temperate climate, at Lonzée in Belgium, is assessed. The results were validated against ground-truth data from the ICOS network, which showed variation in performance. The overall correlation (R2) of ET for three consecutive years was 0.6, with a mean absolute error of 1.24 mm and a root mean square error of 1.66 mm. However, the performance for some years and some crops is considerably better, which suggests that better results could be achieved with a better representation of crop characteristics.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Satellite-Powered Modelling of Salinity Effects on Crop Development

Authors: Nicola Paciolla, Chiara Corbari, Kamal Labbassi, Justin Sheffield, Youssef Houali, Sven Berendsen, Zoltan Szantoi
Affiliations: Politecnico Di Milano, Université Chouaib Doukkali, University of Southampton, ESA - ESRIN
Soil salinity, together with water scarcity, can be a critical factor in hindering crop yield, depending on the tolerance of the plants. The combination of climate change (with increased aridity in many areas of the world and less freshwater availability) and population increase (with a consequent enhanced food demand, triggering an expansion of irrigated land) is expected to reduce the availability of freshwater for agriculture, pushing for the expansion of salinity-affected areas and posing a serious challenge for global food production. This work relates to the role of satellite data in assisting a distributed, high-resolution agro-hydrological model for crop monitoring and management. Specifically, this activity focused on an irrigation district in Morocco, which has been exposed to a prolonged drought (still ongoing for five years) and has seen an increase in the use of (partially) saline water for irrigation. Because of the drought, all available freshwater was reserved for civil use, causing a surge in groundwater pumping to satisfy the irrigation demand. This in turn has progressively increased the salinity of the groundwater reserve. The monitoring of salinity-affected areas was performed at high spatial resolution (30m) by integrating into the crop-energy-water balance model FEST-EWB-SAFY the remote sensing data of leaf area index (LAI, from Sentinel-2) and land surface temperature (LST, from Landsat-8/9 and also from Sentinel-3, downscaled to 30m using Sentinel-2) to monitor crop development. The crop-energy-water balance FEST-EWB-SAFY model couples the distributed energy-water balance FEST-EWB model, which allows computing continuously in time and distributed in space all the components of the surface energy and water balances (without requiring LST as an input, but instead computing it internally), and the SAFY (Simple Algorithm For Yield estimates) model, for crop development. 
Satellite LST data were used for the calibration and validation of the water and energy balances, whereas satellite LAI values were used both for standard calibration of crop parameters and in a data assimilation scheme, to enhance and correct the prognostic model output during the simulation itself. The model was able to pick up information on soil salinity via its effect on crops visible in the satellite imagery. The large-scale application of the FEST-EWB-SAFY model, in synergy with satellite observations of LST and LAI, made it possible to quantify the impact of variations in water availability and water quality on the final crop yield. The model can thus become a valuable tool for formulating sustainable water and food policies in areas facing the harsh consequences of climate change. This work stems from two ESA-funded projects: its roots are in the district-level water and energy balance analysis carried out during the AFRISMART project (2022-24, EO AFRICA - NATIONAL INCUBATORS EXPRO+), whereas the crop impact analyses fall within the framework of the TERESA project (2025-2027, Sentinel User Preparation call).
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Grassland Classification integration and compliance within the Area Monitoring System

Authors: Phil Baldacchino-Steward, Robert O'Loughlin, Adriaan Keurhorst
Affiliations: Compass Informatics Ltd
Context and importance of satellite imagery
Grasslands cover more than 60% of the Irish landscape. These grasslands are fundamental to livestock rearing, silage and hay production, native flora and fauna of semi-natural grasslands, water regulation and soil health. It is therefore essential to maintain and monitor these grasslands. Monitoring grasslands can be done through in-situ observations, but also from space. The advantages of space-based methods are their spatial coverage and low cost. Satellite imagery, ground observations and machine learning can help us distinguish between different types of grasslands and provide more information on the activities that occur on them. These activities can include mowing and livestock grazing.
Irish grasslands
Irish grasslands can be divided into four subclasses: paddocks, dry pastures, semi-natural pastures and rough upland grasslands. In each of these subclasses, grazing or mowing is more or less likely. For example, all paddocks are expected to be grazed at least once a year, whereas grazing is unlikely to occur on rough upland grasslands due to their remote location. Even if grazing does occur there, it is extremely difficult to detect because of the extensiveness of these grasslands. Distinguishing between these grasslands can be done using Sentinel-2 data through a supervised machine learning algorithm. Its downstream usefulness for the grazing and mowing markers is significant.
Relevance of a grassland classification
Introducing the rough upland grasslands to the grazing and mowing markers during inference of the machine learning process increases the uncertainty of the results. The markers are trained on “good” grasslands such as paddocks and dry pastures. They will not have seen these rough upland grasslands before and should therefore not be relied on to make accurate predictions on them.
Analysis has also shown that the confidence intervals of a grazing or mowing detection on rough upland grasslands are similar to those on “good” grasslands, signifying that the markers still detect activity on these grasslands. This reaffirms the importance of including a grassland classification to filter the types of grasslands passed to these markers.
Alternative solutions
An alternative solution to this issue is a different approach to the training datasets. The current methods are parcel-based algorithms, meaning that the Sentinel signals ingested into the markers are aggregated over the whole parcel. As a result, a parcel gets a single signal value (e.g. NDVI), but also a single label. A different approach would be a pixel-based method. This method has been explored, but it requires a change in the label dataset as well: the labels also need to be pixel-based, marking each pixel as mowed or not mowed, which is a very expensive and labour-intensive process. A grassland classification used as a filter for “good” and “bad” grasslands is a scalable and impactful solution for improving grazing and mowing detection in Ireland.
Relation to the Common Agricultural Policy
Compass Informatics, with project partners NEO and EOX, are the providers of the Area Monitoring System (AMS) for the Department of Agriculture, Food and the Marine (DAFM) in Ireland. AMS is a satellite-based monitoring framework used to ensure compliance with agricultural policies such as the EU's Common Agricultural Policy (CAP). AMS tracks agricultural activities like mowing, grazing and crop growth, enabling accurate, real-time assessment of land use and management practices. For this reason, it is imperative that the different land cover types in Ireland are understood and this knowledge is applied. Distinguishing between different types of grasslands is vital for providing the best possible grazing and mowing results for the Irish AMS.
This in turn allows farmers’ claims to be monitored and verified accurately. It is also a very valuable tool for checking whether a farmer is adhering to payment schemes, such as the ACRES scheme to enrich Irish biodiversity on grasslands.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Leveraging Earth Observation Data and Machine Learning for Agricultural Drought Stress Monitoring

Authors: Odunayo David Adeniyi, Valeria Gentile, Manuela Balzarolo
Affiliations: CMCC Foundation, Euro-Mediterranean Centre on Climate Change, European Space Agency
Agricultural drought poses significant challenges to food security, water resource management, and sustainable farming practices, particularly in the Euro-Mediterranean region, which is highly vulnerable to climate change. This research leverages Earth Observation (EO) data and advanced machine learning (ML) algorithms to develop a new, robust method for early detection of agricultural drought stress in plants, enabling proactive decision-making in agricultural management. The method is based on the integration of multiple EO datasets, i.e. Sentinel-1 (S-1, radar), Sentinel-2 (S-2, optical) and Landsat-5/7, grouped into five distinct feature combinations: (i) all S-1 bands, (ii) all S-2 bands, (iii) all Landsat-5/7 bands, (iv) S-1 and S-2 combined, and (v) Landsat and S-1 combined, to target the Crop Water Stress Index (CWSI), commonly used in agricultural drought detection. The CWSI is computed from the energy balance equation CWSI = 1 - LE/(LE + H), where the latent heat flux (LE) and sensible heat flux (H) are available from 38 ICOS (Integrated Carbon Observation System) flux stations. This index ranges from 0 (fully irrigated, no stress) to 1 (severe water stress), providing a quantitative measure of drought severity. To identify the optimal modelling approach, a comparative analysis of various ML algorithms was performed, including Random Forest (RF), Extreme Gradient Boosting (XGBoost), Multi-Layer Perceptron (MLP), Deep Neural Networks (DNN), Support Vector Regression (SVR), Elastic Net Regression, and Residual Neural Network (ResNet). Feature selection techniques are applied to isolate the most relevant variables for each model, while hyperparameter optimization is performed to enhance predictive accuracy. Cross-validation ensures robust model evaluation and comparison. Preliminary results suggest significant potential for integrating EO datasets and ML to enhance drought monitoring and mitigation strategies.
By identifying the best-performing feature groups and algorithms, this research aims to advance the operationalization of agricultural drought indicators tailored for the Euro-Mediterranean region. Beyond the immediate applications, this work highlights the critical role of EO data and ML in addressing global challenges in agriculture and water resource management.
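Given the flux measurements, the index defined in the abstract is a one-line computation; the sketch below simply encodes CWSI = 1 - LE/(LE + H). The function name is illustrative, not part of any ICOS or project API.

```python
def crop_water_stress_index(le, h):
    """CWSI = 1 - LE/(LE + H), from latent (LE) and sensible (H) heat fluxes.

    Ranges from 0 (fully irrigated, no stress) to 1 (severe water stress)."""
    if le < 0 or h < 0 or (le + h) == 0:
        raise ValueError("fluxes must be non-negative with LE + H > 0")
    return 1.0 - le / (le + h)
```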
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Wheat Blast and Fusarium Head Blight: a Risk Map of Wheat Pathogen Prevalence Using EO Data and Climatic Modelling

Authors: Matthew Payne, Kevin Tansey, Dr Gerardo Lopez-Saldana, Alex Cornelius, Dr Michelle Hamilton, Dr Melina Zempila, Connor McGurk, Pascale Bodevin, Dr Bryony Taylor, Dr Libertad Sánchez-Presa, Tim Beale, Elizabeth King, Professor Darren Kriticos, Will Holland, Dr Luke Barrett
Affiliations: University Of Leicester, Assimila, RAL Space, STFC, UKRI, CABI, Cervantes Agritech, CSIRO Agriculture and Food
Global agricultural production needs to double by 2050, and as wheat is the second most important crop globally, novel solutions are needed to help manage diseases of agriculture induced by fungal pathogens (Duveiller et al., 2007; Tilman et al., 2011; Ulukan 2024). Wheat Blast (WB) is a tropical disease caused by Magnaporthe oryzae Triticum, with potential for yield losses of greater than 50%. WB was discovered in Brazil in 1985 and has spread to other Latin American countries such as Bolivia, Argentina and Paraguay, before dispersing to Bangladesh and Zambia in 2016 and 2018, respectively (Pequeno et al., 2024). By comparison, Fusarium Head Blight (FHB) is a wheat disease caused by species of the genus Fusarium, which have been studied since the beginning of the 20th century and are prevalent globally, particularly in the Midwest USA and the Canadian Prairies. FHB results in yield losses of 30-100% and produces mycotoxins that are poisonous to humans and animals, representing a danger to life as well as to economic prosperity (Johns et al., 2022; Alisaac & Mahlein, 2023). The geographic range of WB is theorised to expand further across tropical countries by 2040-2070 as temperature and humidity increase due to climate change, with the prevalence of different Fusarium strains changing depending on environmental conditions (Pequeno et al., 2024). This increased pressure on the agricultural industry requires a spatially explicit and climatically sensitive risk system. As the symptoms of both WB and FHB are premature bleaching of the wheat spikelets, which is observable spectrally and visible from the nadir perspective (Bauriegel & Herppich, 2014; Islam et al., 2020), it is intuitive to use remote sensing to extract the spectral signature of the stressed wheat and downscale to map at the regional scale.
A novel interdisciplinary proof-of-concept framework is presented that leverages hyperspectral UAV data, multiple Earth Observation data sources (WorldView-3, PlanetScope, Sentinel-2 and MODIS) and in-situ climatic measurements. This is then integrated with a regional wheat classification map using Sentinel-1 backscatter and Sentinel-2 harmonic Leaf Area Index time-series weighted with MODIS observations to gap-fill cloudy Sentinel-2 observations. Finally, the potential pathogen distribution based on climate suitability is modelled using CLIMEX (Kriticos et al. 2015) based on historical climate data (CliMond), whilst disease infection on a finer temporal scale is modelled using DYMEX (Maywald et al. 2007) forced with ERA5 reanalysis weather data. The different risk components are computed depending on the area of interest and time frame, and are connected on a cloud-hosted web platform and served as a set of interactive web maps. The importance of these dynamic web maps is that in-country agronomists and advisors can readily access them, using integrated earth observation and climate modelling technologies to help with diagnostics and to frame advice to farmers.
Bibliography
Alisaac, E., & Mahlein, A.-K. (2023). Fusarium Head Blight on Wheat: Biology, Modern Detection and Diagnosis and Integrated Disease Management. Toxins, 15(3). https://doi.org/10.3390/toxins15030192
Bauriegel, E., & Herppich, W. B. (2014). Hyperspectral and Chlorophyll Fluorescence Imaging for Early Detection of Plant Diseases, with Special Reference to Fusarium spec. Infections on Wheat. Agriculture, 4(1), 32–57. https://doi.org/10.3390/agriculture4010032
Duveiller, E., Singh, R. P., & Nicol, J. M. (2007). The challenges of maintaining wheat productivity: pests, diseases, and potential epidemics. Euphytica, 157(3), 417–430. https://doi.org/10.1007/s10681-007-9380-z
Islam, M. T., Gupta, D. R., Hossain, A., Roy, K. K., He, X., Kabir, M. R., Singh, P. K., Khan, Md. A. R., Rahman, M., & Wang, G.-L. (2020). Wheat blast: a new threat to food security. Phytopathology Research, 2(1), 28. https://doi.org/10.1186/s42483-020-00067-6
Johns, L. E., Bebber, D. P., Gurr, S. J., & Brown, N. A. (2022). Emerging health threat and cost of Fusarium mycotoxins in European wheat. Nature Food, 3(12), 1014–1019. https://doi.org/10.1038/s43016-022-00655-z
Kriticos, D. J., Maywald, G. F., Yonow, T., Zurcher, E. J., Herrmann, N. I., & Sutherst, R. W. (2015). Exploring the Effects of Climate on Plants, Animals and Diseases.
Pequeno, D. N. L., Ferreira, T. B., Fernandes, J. M. C., Singh, P. K., Pavan, W., Sonder, K., Robertson, R., Krupnik, T. J., Erenstein, O., & Asseng, S. (2024). Production vulnerability to wheat blast disease under climate change. Nature Climate Change, 14(2), 178–183. https://doi.org/10.1038/s41558-023-01902-2
Sutherst, R. W., Maywald, G. F., & Bourne, A. S. (2007). Including species interactions in risk assessments for global change. Global Change Biology, 13(9), 1843–1859. https://doi.org/10.1111/j.1365-2486.2007.01396.x
Tilman, D., Balzer, C., Hill, J., & Befort, B. L. (2011). Global food demand and the sustainable intensification of agriculture. Proceedings of the National Academy of Sciences, 108(50), 20260–20264. https://doi.org/10.1073/pnas.1116437108
Ulukan, H. (2024). Wheat Production Trends and Research Priorities: A Global Perspective. In N. Zencirci, F. Altay, F. S. Baloch, M. A. Nadeem, & N. Ludidi (Eds.), Advances in Wheat Breeding: Towards Climate Resilience and Nutrient Security (pp. 1–22). Springer Nature Singapore. https://doi.org/10.1007/978-981-99-9478-6_1
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Estimation of Rice Area, Yield, and Yield Limiting Factors in West Africa. A Case Study, Based on the Synergistic Use of Remote Sensing and Crop Growth Modelling in Nigeria

Authors: Massimo Barbieri, Alessandro Cattaneo, Luca Melchiori, Francesco Holecz, Luca Gatti, Loris Copa, Emma D. Quicho, Kazuki Saito, Renaud Mathieu, Ali Ibrahim
Affiliations: sarmap sa, IRRI-Philippines, IRRI-Africa, Africa Rice Center
Nigeria is among the five countries leading rice production in West Africa, with the potential to become self-sufficient in the near future. A two-year (2022-2024) project aimed at mapping and monitoring rice throughout the wet and dry seasons in three sample states has been piloted by a consortium with a wide spectrum of expertise, spanning remote sensing, crop modelling, local agricultural practices and field data collection. Earth Observation data are an essential input for building detailed, scalable, up-to-date and unbiased products for assessing rice area, plant phenology, crop season development, and possible damage in case of natural events. The integration of EO-based information (i.e. rice area, planting time and leaf area index) and agro-meteorological data (e.g. plant variety, fertilizer, soil, water availability, planting method, etc.) within a crop growth model allows spatially explicit forecasts and estimates of yield at harvest time to be derived. This data integration process is facilitated by the support of a local team, whose knowledge of crop practices and ground survey skills have been crucial to properly calibrating the system and eventually validating product accuracy. More than 650,000 hectares of rice have been mapped by exploiting the RIICE (Remote sensing-based Information and Insurance for Crops in emerging Economies) technology. The approach is based on the integration of high resolution SAR and optical EO data with the ORYZA model. This methodology, which has already been successfully implemented during the last 15 years in several Southeast Asian countries, provides governments with reliable rice production figures as well as yield losses due to natural events. The RIICE service is also intended to reduce the vulnerability of smallholder rice farmers by contributing to the setup of affordable insurance schemes.
Compared with some Asian countries, the Nigerian case presents new complexities; one of the most relevant is the need to observe small plots of different crop types, which grow in the same period and close to each other, in an environment where the agricultural area is often interspersed with scattered groups of trees. The main objectives of the present work are: i) mapping the extent and distribution of the actually cultivated area; ii) identifying seasonal planting shifts linked to changes in climate patterns; iii) assessing the actual yield and its spatial variability; iv) analysing gaps between potential and actual yield, and proposing possible ways to reduce them. The final goal is to critically assess the positive as well as the negative aspects of the methodology, in order to identify the best way to scale it up to other agricultural systems and geographical locations.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Mapping Winter Fallow Fields for Biogas Production: A Semantic Data Cube Approach to Enhance Cover Crop Potential in Austria

Authors: Vanessa Streifeneder, Lorena Abad, Dr. Martin Sudmann, Hannah Augustin, Florian Brunner, Bernhard Stürmer, Richard Zweiler, Assoz. Prof. Dr. Dirk Tiede
Affiliations: Department of Geoinformatics_ZGIS, University of Salzburg, Kompost & Biogas Verband Österreich, Güssing Energy Technology GmbH, Austrian Cooperative Research,
In pursuit of the upcoming NetZero goals, energy production is transitioning from fossil fuels to renewable sources. Achieving this transformation requires a diverse mix of renewable energy options. Biogas generation from biomass, in particular, has the potential to play a significant role in the future of renewable energy production. Biogas plants use agricultural by-products such as straw and maize stalks, animal manure and food waste, and also energy crops (e.g. maize, sorghum). Energy crops have the advantage of a high biomass yield, but they compete with food production, and in Austria the use of such feedstocks is strictly limited. One idea to compensate for this disadvantage is to plant cover crops during the winter season, or once the main crop has been harvested and the field is left fallow. For the planning of potential sites for future biogas plants, it would be very valuable to know where fallow fields are concentrated in winter. The use of Earth Observation (EO) data to identify these areas provides a cost- and time-efficient solution. However, analysing fields and vegetation patterns in the winter months can be challenging, and is not often done, due to persistent snow and cloud cover. We overcome these data limitations with an approach implemented within a semantic data cube environment that successfully identifies fallow fields. To identify untapped potential for cover crops, we analysed the vegetation pattern on agricultural land from September to April in three different regions of Austria (Innviertel, Mostviertel, Südburgenland) from 2018 to 2023. For this purpose, we employed a semantic analysis approach using the Sen2Cube.at semantic EO data cube. The Sen2Cube.at system contains all semantically enriched Sentinel-2 images captured of Austria and provides cloud-based analysis tools (Sudmanns et al., 2021).
To obtain the field polygons, we used the INVEKOS (Integrated Administration and Control System) open data (Agrarmarkt Austria, n.d.), which contain the field parcels and crop types recorded by the applicants and serve as the basis for processing funding applications. We first filtered the INVEKOS data to include only fields that are neither too small nor planted with a main crop in winter (e.g. winter maize, fruit trees, grassland). A field is considered too small if, after subtracting a buffer of half the geometric standard deviation (to compensate for geometric inaccuracies) along the boundaries of an agricultural area, more than 60% of the pixels that were originally within the area now fall partially outside it. We then extracted time series information for each field using semantic analysis within Sen2Cube.at. The advantage of the semantic analysis is that each pixel has a category assigned, derived from the pixel’s spectral signature (e.g. cloud, barren land or built-up, water or shadow, etc.). This allows efficient and comprehensive time series analysis at the pixel level (increasing the amount of usable data compared to cloud filtering based on image-wide statistics), which can be aggregated over the agricultural fields. As a result, we obtained a time series of the percentage of bare soil, weak, medium and strong vegetation, clouds, snow and shadow per field. From this, we developed an algorithm in R to categorise the fields according to their vegetation patterns per season (autumn, winter, spring) to identify fallow fields. In total we distinguished 11 categories, including fallow, sparse and dense vegetation. Due to the high number of cloudy days, additional information was needed to improve the quality of the categorised fields. We therefore developed an event detection algorithm to detect potential turning events. Before the event detection, the time series curves are interpolated over values missing due to cloud or snow cover.
We used a moving window of 15 days, given the extent of cloud/snow cover during the winter months. The event detection algorithm works by analysing two time series curves that are expected to cross when an event occurs. For example, a sharp decrease in the barren land curve when there is new vegetation growth crosses the steep increase in the vegetation curve. The algorithm identifies the crossing date as the turning event date. A turning event can indicate whether the field has been re-cultivated in the autumn after harvest. This allowed further refinement of the field categorisation. The results were validated using high resolution PlanetScope data. In the final step we integrated expert knowledge to estimate whether the field had a cover crop or could be used for planting a cover crop, which could serve as biomass for biogas production. In the study areas, the proportion of fields with mostly bare soil during winter varies by region, though it has remained relatively stable over time, with the notable exception of Südburgenland. In this region, more than 40% of fields were bare in winter 2018, followed by a sharp decrease to approximately 5% in both 2020 and 2022, and an increase back up to 30% in other years. The Mostviertel region consistently showed the highest percentage of bare fields in winter, ranging from 40% to 45%, while the Innviertel region had the lowest, with about 7% to 20%. Our analysis suggests that the potential for cover crops is highest in the Mostviertel, where over 50% of fields could support cover crops. In contrast, the Innviertel offers a lower potential, with only about 25%–30% of fields suitable for cover crops. In Südburgenland, cover crop potential fluctuates, mirroring the variability in the proportion of bare fields during winter. In the future this approach can be used to further analyse vegetation patterns in winter and the possible impact of climate change on agricultural practices.
References:
Agrarmarkt Austria. (n.d.). INVEKOS Schläge Österreich. AMA. https://www.data.gv.at/
Sudmanns, M., Augustin, H., van der Meer, L., Baraldi, A., & Tiede, D. (2021). The Austrian Semantic EO Data Cube Infrastructure. Remote Sensing, 13(23), 4807. https://doi.org/10.3390/rs13234807
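The turning-event detection described in this abstract can be sketched as follows: given two series interpolated to a common daily grid (e.g. bare-soil and vegetation fractions per field), the event date is taken where the curves cross. This is a minimal sketch, assuming linear interpolation of the crossing date between samples; it is not the authors' R implementation, and the function name is illustrative.

```python
import numpy as np

def detect_turning_event(days, bare_frac, veg_frac):
    """Return the (interpolated) day on which the two curves cross, or None."""
    diff = np.asarray(bare_frac, dtype=float) - np.asarray(veg_frac, dtype=float)
    # indices where the sign of (bare - vegetation) changes between samples
    crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
    if crossings.size == 0:
        return None  # no turning event in this window
    i = crossings[0]
    # linear interpolation of the crossing date between samples i and i+1
    t = diff[i] / (diff[i] - diff[i + 1])
    return float(days[i] + t * (days[i + 1] - days[i]))
```

For example, a bare-soil curve falling from 0.9 to 0.1 while the vegetation curve rises symmetrically yields a crossing halfway through the window.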

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Application of remote sensing in variable-rate nitrogen treatment – optimisation of the early stage fertilization

Authors: Radek Malinowski, Mateusz Stankiewicz, Marcin Rybicki, Bartosz Buszke, Karol Łukaszewski, Michał Michalski
Affiliations: Wasat Ltd
Nitrogen is the most important macronutrient for the majority of crops in agriculture, strongly influencing crop growth as well as the quantity and quality of yields. A key aspect of nitrogen application is using the appropriate amounts of fertilizer at the right time and in the right place. This approach plays a crucial role in sustainable agriculture by reducing fertilization costs and limiting the negative impact of excessive fertilization on the environment, including nitrogen leaching. The aim of this research was to develop operational methods for variable nitrogen use for winter wheat, with an emphasis on fertilization at the early stage of crop development (the first nitrogen application of the season). Due to limited crop canopy cover and nitrogen uptake at this stage of plant development, direct measurement of plant nitrogen content based on chlorophyll concentration is difficult. Therefore, the proposed method uses field productivity as a key indicator of nitrogen distribution and retrieves productivity zones to differentiate fertilization rates. Productivity zones were developed using multitemporal Sentinel-2 imagery representing multiple seasons. Yield maps from previous seasons, collected by combine harvesters equipped with yield monitoring systems, were used as reference data for finding the relationship between yield and the spectral characteristics of satellite data. For this purpose, different spectral indices associated with crop development were calculated from cloud-free Sentinel-2 images: NDVI, NDRE, NIR, GNDVI, NDWI, VARI and MTCI (for NDWI and MTCI, the use of different Sentinel-2 bands resulted in two variants of each index). During the research, their values were used to segment the field area with the K-Means algorithm into classes representing different levels of field productivity. The mean value of each spectral index within the segments was then used to assign a class order. 
The higher the mean index value, the higher the class order and its productivity. The research has shown that the use of two images from a single season, from June to July, ensures a good representation for most crop types. Additionally, the comparison of different spectral indices has shown that MTCI2 (MERIS Terrestrial Chlorophyll Index), computed from Sentinel-2 bands as (B06−B05)/(B05−B04), has the highest cumulative correlation with crop yields. The findings were used to develop a procedure for productivity zone delineation that was further tested on a real field in a strip-trial experiment. In that experiment, the effectiveness of variable nitrogen treatment was compared with standard fixed-rate fertilization. The selected fertilization strategy was based on the Nmin method and the standard practice used by the farm manager at the experimental site. According to recommendations, nitrogen fertilization was divided into three applications targeting the following growth stages: end of tillering, beginning of stem elongation, and flag leaf. The Nmin value from a depth of 0–60 cm was used as a reference for calculating the first split dose, applied following a generally accepted distribution scheme, often in fixed proportions (e.g., 50/30/20% in three doses or 60/40% in two doses). In the case of standard fixed fertilization, the average Nmin value for the entire field was used. The variable nitrogen application was performed with three rates, set at 20% and 10% below the average dose for the low- and medium-productivity areas, respectively, and at the average Nmin-based dose for high-productivity areas. The second and third nitrogen applications of the season were set to a fixed dose for the entire field. At the end of the season the harvested yields of winter wheat were statistically analysed and compared between the applied strategies (variable vs. fixed). 
The results showed that higher yields were achieved in areas with variable nitrogen fertilisation, despite the fact that the applied amount of fertilizer was reduced in comparison to areas with fixed doses. The applied strategy therefore resulted in lower fertilization costs and environmental impact without a loss in final yield. The positive effect of the variable fertilization approach was also confirmed by additional analyses, including verification of the values of the Leaf Area Index and Nitrogen Nutrition Index, as well as DRIS analysis of the experimental field after the first nitrogen application.
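The zone-delineation logic described in this abstract (cluster per-pixel spectral-index values with K-Means, then rank clusters by mean index value) can be sketched as follows. This is a schematic reconstruction, not the authors' implementation; the helper names, array shapes and band inputs are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def mtci2(b04, b05, b06):
    """MTCI2 from Sentinel-2 bands, (B06 - B05) / (B05 - B04)."""
    return (b06 - b05) / (b05 - b04)

def productivity_zones(index_values, n_zones=3):
    """Cluster per-pixel spectral-index values into productivity classes.

    index_values: (n_pixels, n_indices) array. Returns one class per
    pixel, ordered so that a higher class number means a higher mean
    index value, i.e. higher productivity.
    """
    km = KMeans(n_clusters=n_zones, n_init=10, random_state=0)
    labels = km.fit_predict(index_values)
    # rank clusters by mean index value: 0 = lowest productivity
    order = np.argsort(km.cluster_centers_.mean(axis=1))
    rank = {cluster: r for r, cluster in enumerate(order)}
    return np.array([rank[label] for label in labels])

# six toy pixels with clearly separated index levels -> three zones
pixels = np.array([[0.10], [0.12], [0.50], [0.52], [0.90], [0.92]])
print(productivity_zones(pixels))  # [0 0 1 1 2 2]
```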

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Too little data for field scale crop yield forecasting? Not with transfer learning!

Authors: Emanuel Büechi, Prof. Wouter Dorigo, Felix Reuß, Ph.D. Lucie Homolová, Ph.D. Miroslav Pikl, Ph.D. Vojtěch Lukas, Prof. Miroslav Trnka
Affiliations: Department of Geodesy and Geoinformation - TU Wien, Global Change Research Institute CAS (CzechGlobe), The Department of Agrosystems and Bioclimatology, Faculty of AgriSciences, MENDELU
Climate change is threatening food security. To ensure food security, we must not only safeguard agricultural production but also distribute yields optimally between regions. For that, farmers and decision-makers need reliable crop yield forecasts at the field scale to know where crop yield losses occur and potentially adjust their farming practices. Earth observation and machine learning are key tools for calculating such forecasts. Sentinel-1 and Sentinel-2 data in particular have been used widely, as they provide regular high-resolution information about the state of crops and soil moisture. However, field-level crop yield data is often scarce, which limits model development. In the project “Yield Prediction and Estimation from Earth Observation” (funded by ESA - Contract No. 4000141154/23/I-EF), we tested transfer learning to reduce the dependency on field-level data. Transfer learning is a machine learning technique in which a model is trained on one domain and applied to a different domain. With this technique, we can train a crop yield forecasting model at the regional scale - where sufficient data is available - and apply it at the field scale. Here, we used Sentinel-1, Sentinel-2, and ERA5-Land data with an artificial neural network to forecast maize, winter wheat, and spring barley yields in southern Czechia. We compared four model setups: as baselines, a model trained and tested at the regional scale and one trained and tested at the field scale; and two models trained at the regional scale and tested at the field scale, one with and one without fine-tuning on field-scale data. Forecasts were calculated at four lead times (1-4 months) before harvest. We showed that transfer learning with fine-tuning demonstrates superior performance, achieving correlations of approximately 0.75 at a one-month lead time for all crops. It outperformed the field-scale trained model by 0.05-0.12. 
In addition, transfer learning required significantly less field-level data to achieve a performance comparable to the model trained at the field level: 50% of the data for spring barley and maize, and only 25% for winter wheat. A leave-one-year-out cross-validation revealed model limitations, with all setups showing low performance due to large interannual variability. Nevertheless, this transfer learning approach improves the efficiency of crop yield data utilization and thus enhances field-level crop yield forecasting.
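The pre-train-then-fine-tune idea can be illustrated with a tiny sketch: train a small neural network on abundant (here synthetic) regional-scale samples, then continue the gradient updates on scarce field-scale samples. This is not the authors' model; the network size, the random data standing in for Sentinel-1/2 and ERA5-Land features, and the use of scikit-learn's `partial_fit` for the fine-tuning passes are all assumptions made for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# synthetic stand-ins for satellite/climate features and crop yields
X_regional = rng.normal(size=(500, 6))        # abundant regional samples
y_regional = X_regional @ rng.normal(size=6)
X_field = rng.normal(size=(40, 6))            # scarce field-scale samples
y_field = X_field @ rng.normal(size=6)

# 1) pre-train on the regional domain
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_regional, y_regional)

# 2) fine-tune: continue weight updates on field-scale data only
for _ in range(50):
    model.partial_fit(X_field, y_field)

print(model.predict(X_field).shape)  # (40,)
```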

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Machine Learning Prediction of Agricultural Flood Damage With Sentinel-2 Imagery: a Case Study of the 2023 Emilia-Romagna Floods

Authors: Filippo Bocchino, Valeria Belloni, Roberta Ravanelli, Roderik Lindenbergh
Affiliations: Sapienza University of Rome, DICEA, Geodesy and Geomatics Division, University of Liège, Department of Geography (Faculty of Sciences), Geomatics Unit, Department of Geoscience and Remote Sensing, Delft University of Technology
Flooding is one of the most severe consequences of climate change [1], causing extensive damage across various sectors, including agriculture. Despite its critical impact, flood-related agricultural damage often remains overlooked [2], highlighting the need for improved assessment methods. Such analyses are particularly important in regions with intensive agricultural activities to evaluate losses effectively and support compensation for affected farmers. Remote sensing data is already widely used to assess the spread of flood events. We propose to use remote sensing data from before, during and after a flood event to automatically estimate flood damage to individual agricultural plots. This study focuses on the May 2023 flood in the Emilia-Romagna region of Italy, which claimed 15 lives and ranked among the year’s most devastating disasters. Over 5,000 farms were submerged, resulting in significant losses to crops, livestock, and agricultural infrastructure, with total damages exceeding €8.8 billion. The lack of widespread flood insurance, especially in the agricultural sector, exacerbated the situation, leaving the majority of recovery costs to individuals, businesses, and the State [3]. In the weeks following the event, experts visited the area and estimated damage percentages (0–100%) for each of 412 plots using standardized procedures incorporating various in-situ crop data [4], resulting in a dataset provided by the Institute of Services for the Agricultural and Food Market (ISMEA). We propose a machine learning (ML) based approach for assessing agricultural flood damage, using the expert dataset for training and validation. The model integrates remote sensing indices derived from Sentinel-2 multispectral imagery, flooded area maps from the Copernicus Emergency Management Service (CEMS) [5], and a Digital Elevation Model (DEM). 
Specifically, we calculated key spectral indices - Normalized Difference Vegetation Index (NDVI), Leaf Area Index (LAI), and Normalized Difference Water Index (NDWI) - using median temporal aggregation for April, May, and June. These indices allowed us to compute changes between months (e.g., ΔNDVI for April-May and May-June), providing insights into crop health [6] and water presence [7]. For each field, we derived spatially aggregated statistics, including the mean and standard deviation. Flooded area maps were created by merging delineation products from CEMS Rapid Mapping Activations EMSR659 (May 3, 2023) and EMSR664 (May 16, 2023) [5]. Using these maps, we calculated the percentage of flooded area per field. The NASADEM [8] provided median elevation values for each field. All data were processed using Google Earth Engine (GEE), a cloud computing platform offering a vast catalog of remote sensing data. A three-class Random Forest (RF) classifier was trained using these features alongside the ISMEA ground-truth dataset. The RF model classified fields into zero, medium, and high damage classes, achieving promising performance metrics: accuracy, precision, recall, and F1-score were 0.78, 0.79, 0.78, and 0.79, respectively. Although the model consistently separated zero from high damage, it showed some limitations in distinguishing between the zero and medium damage classes, suggesting that additional features could enhance performance. Regarding feature importance in the RF model, the most influential predictors were the percentage of flooded area, the ΔNDWI (April-May), and the crop type. These results highlight the crucial role of water-related features, such as the ΔNDWI, in capturing the impact of flooding on crop fields, as well as the significant influence of crop type in determining damage susceptibility. Future work will focus on expanding the dataset by incorporating more in-situ data from ISMEA. 
This might enhance the model’s performance and robustness, pave the way for more precise decision-making tools, and ensure fair compensation for affected farmers. This study's key innovation lies in leveraging a unique ground-truth dataset specific to agricultural flood damage, enabling precise model training and improving the reliability of flood impact assessments. References: [1] Anon. 2023 disasters in numbers. Technical report, Centre for Research on the Epidemiology of Disasters (CRED), 2024. [2] Stefan Klaus, Heidi Kreibich, Bruno Merz, Bernd Kuhlmann, and Kai Schröter. Large-scale, seasonal flood risk analysis for agricultural crops in Germany. Environmental Earth Sciences, 75(18), September 2016. http://dx.doi.org/10.1007/s12665-016-6096-1 [3] Chiara Arrighi and Alessio Domeneghetti. Brief communication: On the environmental impacts of the 2023 floods in Emilia-Romagna (Italy). Natural Hazards and Earth System Sciences, 24(2):673–679, February 2024. http://dx.doi.org/10.5194/nhess-24-673-2024 [4] ISMEA. Progetto pilota di standardizzazione delle procedure per le valutazioni dei danni alle colture vegetali. www.ismea.it/flex/cm/pages/ServeBLOB.php/L/IT/IDPagina/11688 , 2021. [Accessed 23-05-2024] [5] The Emergency Management Service - Mapping. https://emergency.copernicus.eu/mapping/ems/emergency-management-service-mapping [Accessed 14-09-2024] [6] Sha Huang, Lina Tang, Joseph P. Hupy, Yang Wang, and Guofan Shao. A commentary review on the use of Normalized Difference Vegetation Index (NDVI) in the era of popular remote sensing. Journal of Forestry Research, 32(1):1–6, May 2020. http://dx.doi.org/10.1007/s11676-020-01155-1 [7] S. K. McFeeters. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. International Journal of Remote Sensing, 17(7):1425–1432, 1996. https://doi.org/10.1080/01431169608948714 [8] R. Crippen, S. Buckley, P. Agram, E. Belz, E. Gurrola, S. Hensley, M. Kobrick, M. Lavalle, J. Martin, M. Neumann, Q. Nguyen, P. Rosen, J. Shimada, M. Simard, and W. Tung. NASADEM global elevation model: Methods and progress. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLI-B4:125–128, 2016. https://isprs-archives.copernicus.org/articles/XLI-B4/125/2016/
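The classification step can be sketched generically: per-field features such as the flooded-area fraction, ΔNDWI and an encoded crop type feed a three-class Random Forest. The synthetic features and damage labels below are placeholders for the ISMEA ground truth, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_fields = 300

flood_frac = rng.uniform(0, 1, n_fields)                  # flooded-area fraction
d_ndwi = rng.normal(0, 0.1, n_fields) + 0.3 * flood_frac  # ΔNDWI rises with flooding
crop_type = rng.integers(0, 5, n_fields)                  # encoded crop class
X = np.column_stack([flood_frac, d_ndwi, crop_type])

# synthetic damage labels: 0 = zero, 1 = medium, 2 = high
y = np.digitize(flood_frac, [0.33, 0.66])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["flood_frac", "d_ndwi", "crop_type"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

Because the synthetic labels are driven by the flooded fraction, the importance ranking mirrors the pattern the abstract reports (water-related features dominating).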

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: To what extent can EnMAP data differentiate between 6 similar winter cereal species? A case study in Wallonia (Belgium)

Authors: Maxime Troiani, Pierre Defourny
Affiliations: UCLouvain (Earth and Life Institute)
Crop mapping plays a key role in supporting sustainable land use by guiding policy decisions. It strengthens resilience against climate change and food security challenges by offering detailed information on the spatial distribution and potential yields in a given region. Crop type mapping by remote sensing mostly falls into two categories: a) multispectral data-based studies, with a high temporal revisit rate and medium to high spectral resolution, and b) hyperspectral data-based studies, relying on finer spectral resolution but coarser temporal resolution from single- or multiple-date selective acquisitions. This poor temporal resolution does not allow assessment of the phenological development of crops throughout the growing season, potentially limiting the ability to capture the critical growth stages required for accurate discrimination and monitoring. Nevertheless, the finer spectral resolution of hyperspectral data is a valuable asset for discriminating between species, as it captures subtle differences in reflectance that are often indistinguishable with multispectral data. In this context, this study explores the potential of EnMAP hyperspectral satellite data for in-season winter cereal species classification in the Walloon Region (Belgium) in 2023 and 2024, benchmarking it against multispectral time series from Sentinel-2. This analysis has a major advantage: access to a robust dataset comprising thousands of agricultural parcels distributed across a wide geographical area. This extensive spatial coverage eliminates potential environmental or meteorological gradient effects, ensuring a robust evaluation. By leveraging such a comprehensive dataset, the study assesses the true capabilities of hyperspectral data in discriminating closely related cereal species and provides a rigorous comparison with the established performance of multispectral data. 
Classifications based on multispectral and hyperspectral data are compared using cross-validation across several classification algorithms, such as Random Forest (RF) and Support Vector Machine (SVM). Additionally, for the hyperspectral data, multiple dimensionality reduction techniques, including Principal Component Analysis (PCA) and Partial Least Squares (PLS), are explored. Ultimately, this research aims to provide new insights into crop monitoring, setting a foundation for next-generation agricultural decision-making and precision farming, where hyperspectral imagery could redefine species-level mapping and monitoring across large-scale landscapes.
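A generic version of this benchmarking setup (dimensionality reduction followed by cross-validated classifiers) might look as follows. The random "spectra", band count, class count and number of PCA components are placeholder assumptions, not EnMAP specifics.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 224))   # one "spectrum" of 224 bands per parcel
y = rng.integers(0, 6, 200)       # six winter cereal species

# compare RF and SVM on PCA-reduced spectra with 5-fold cross-validation
for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC())]:
    pipe = make_pipeline(PCA(n_components=20), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(name, round(scores.mean(), 2))
```

With random labels the scores hover around chance level; the point is the structure of the comparison, not the numbers.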

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Dynamics of Photosynthesis Under Elevated Atmosphere CO2 in Different Cropping Systems Measured by Solar-Induced Fluorescence

Authors: Juan Manuel Romero Barrios, Deepthi Konche, Julie Krämer, Uwe Rascher, Onno Muller
Affiliations: Institute of Bio- and Geoscience 2: plant sciences (IBG-2), Forschungszentrum Jülich GmbH, CONICET, Instituto de Investigaciones Fisiológicas y Ecológicas vinculadas a la Agricultura (IFEVA), Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Química Inorgánica, Analítica y Química Física, Buenos Aires, Argentina
Increasing atmospheric CO2 concentration is one of the main drivers of climate change, but may also contribute to increased food production due to enhanced crop photosynthesis. However, the photosynthetic rate under elevated CO2 (eCO2) conditions is mainly limited by nitrogen availability, which impairs the expected positive effect of eCO2 [1]. Therefore, a promising way to exploit increasing CO2 concentrations is intercropping, the ancient practice of mixing two or more species that grow together. Intercropping allows for complementary nutritional requirements, increased light interception, and biological pest control, among other benefits. Compared to monoculture systems, crops grown in mixed stands have shown higher yields, improved weed management, and even higher carbon sequestration [2]. Given its high capacity for nitrogen fixation, leguminous intercropping has been proposed as a possible method to take advantage of the increase in CO2 concentrations in the future [3]. A rigorous study of the benefits of intercropping under eCO2 requires a detailed assessment of the underlying physiological mechanisms and the detection of sometimes subtle differences in plant-plant interaction. Also, the implementation of monitoring techniques that allow fast and sensitive data retrieval will be key under rapidly changing future environmental conditions. In this context, remote sensing techniques emerge as the most adequate tools for monitoring the effects of eCO2 on intercrop systems. Sun-induced fluorescence (SIF), given its close link to photosynthesis, is the main candidate for crop physiology assessment [4]. The performance of mixed crops has been studied previously in our group, showing how machine learning algorithms allowed the prediction of some canopy traits from reflectance and SIF data [5]. In this work we have analyzed the performance of wheat and bean crops in mono- and intercrop systems under ambient and elevated CO2 concentrations (ca. 600 ppm) in the field. 
Active and passive radiometric high-throughput ground measurements were conducted by means of FLoX and LIFT devices mounted on FieldSnake, a semi-automated positioning system [6]. Reflectance-based measurements show that intercrop signals behave as a mix of the monocrop systems, with effects of CO2 on greenness mainly observed in wheat monocrop systems. SIF yields show a strong increase for wheat cultivars towards the end of the season, possibly because of decreased photosynthetic efficiency. Increased dry biomass production is observed for all crops and systems under eCO2, and the Land Equivalent Ratio shows a more than 10% benefit of intercropping systems under both ambient and elevated CO2. In this presentation we will show how remote sensing techniques at ground level allow detailed monitoring of plant development in mixed and monocrop systems, with a special focus on how SIF tracks photosynthetic efficiency and biomass accumulation. Lastly, we will discuss how these findings can contribute to upscaling the monitoring to drone and airborne measurements. [1] Kant et al. In: Frontiers in Plant Science (2012). [2] Brooker et al. In: New Phytologist (2015). [3] Butterly. In: Plant and Soil (2015). [4] Maxwell and Johnson. In: Journal of Experimental Botany (2000). [5] Krämer et al. In: Remote Sensing of Environment, submitted 14 May 2024. [6] Knopf et al. In: Frontiers in Plant Science (2024).

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Evaluating the Copernicus High Resolution Layer on Crop Types using farmers declarations from 50 million parcels

Authors: Martin Claverie, Momtchil Iordanov, Ayshah Chan, Renate Koeble, Ludvig Forslund, Linda See, Raphaël d'Andrimont, Guido Lemoine, Marijn van der Velde
Affiliations: European Commission, Joint Research Centre (JRC), Technical University of Munich (TUM), TUM School of Engineering and Design, Chair of Remote Sensing Technology, European Environment Agency (EEA), European Commission, DG Agriculture & Rural Development (DG AGRI), International Institute for Applied Systems Analysis (IIASA), Munich Data Science Institute (MDSI)
Accurate and timely information on crop types is crucial for monitoring agriculture, evaluating policies, and assessing food security. With challenges such as climate change, biodiversity loss, and the imperative of sustainable farming confronting Europe, the use of satellite-based products for consistent and dependable monitoring is becoming ever more important. The release of the Copernicus High Resolution Layer (HRL) on Crop Types (CTY) as part of the HRL on Vegetated Land Cover Characteristics (HRL-VLCC) marks a significant advance in agricultural monitoring capabilities, with implications for sustainability and food security. This product, derived from the Copernicus Sentinel-1 and Sentinel-2 satellite missions, offers comprehensive pan-European crop type identification at 10-meter spatial resolution. The dataset encompasses 19 crop classes, including the major cultivated crops across Europe. Recognizing the considerable potential of this dataset, a rigorous evaluation is needed to determine its strengths and limitations for practical applications. Here, we present a detailed assessment of the HRL-VLCC CTY layers from 2017 to 2021 using EuroCrops, the most extensive harmonized and open dataset of Geospatial Applications (GSA), i.e. parcel-level crop type declarations collected in relation to CAP (Common Agricultural Policy) subsidy payments. Despite the growing availability of GSA datasets, challenges persist due to substantial heterogeneity in data formats and crop nomenclature, which impedes interoperability at a pan-European level. However, the Commission Implementing Regulation (EU) 2023/138, adopted under the Open Data Directive (Directive (EU) 2019/1024), mandates that Member States provide high-value datasets, such as GSA, free of charge and in machine-readable formats, thereby promoting open access to GSA data. 
EuroCrops v2, an Analysis Ready Dataset employing a hierarchical taxonomy, provides readily usable parcel-level data with harmonized crop types across up to 18 Member States from 2008 to 2023. By mapping these crop types to the 19 classes delineated by the HRL-CTY and discarding the parcels that were used for training, we enable a direct comparison using conventional statistical metrics to ascertain user and producer accuracies, as well as F1-scores for each crop, region, and year. Our analysis also provides insights into the spatial consistency of these products. Preliminary findings, particularly for major crops, indicate that the HRL-CTY product is of good quality. The overall accuracy of the product varies from 88% to 92%, depending on the year, with major crop types exhibiting even higher accuracy levels. Moreover, typically challenging crop distinctions, such as between wheat and barley, show minimal confusion, with less than a 5% error rate. Errors are predominantly associated with broad classes such as fresh vegetables and dry pulses. In addition, an anomaly was noted in the classification of sugar beet, which was frequently misclassified as a fresh vegetable. The spatial consistency of the product is extremely high, indicating that very effective post-processing techniques have been applied to the datasets. While these results confirm the high potential of the HRL-CTY for agricultural monitoring, ongoing research is essential to refine the product further and explore its full range of applications for policy and decision-making support.
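For reference, the user accuracy (precision), producer accuracy (recall) and F1-score named above can each be derived per class from a confusion matrix. The tiny label arrays below are illustrative only, not EuroCrops data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])  # reference (GSA) labels
y_pred = np.array([0, 0, 1, 2, 1, 2, 2, 1, 2, 0])  # mapped (HRL-CTY) labels

cm = confusion_matrix(y_true, y_pred)       # rows: reference, cols: map
user_acc = np.diag(cm) / cm.sum(axis=0)     # precision per mapped class
prod_acc = np.diag(cm) / cm.sum(axis=1)     # recall per reference class
f1 = 2 * user_acc * prod_acc / (user_acc + prod_acc)
print(f1.round(2))
```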

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Vitality of wild plant mixtures for biogas production compared to maize in a heterogeneous agricultural landscape in Northern Bavaria, Germany

Authors: Sofia Haag, Dr. Sarah Schönbrodt-Stitt, Florian Baumgartner, Dr. Thorsten Dahms, Dr. Maninder Singh Dhillon, Tobias Ullmann
Affiliations: Earth Observation Research Cluster, Department of Remote Sensing, Institute of Geography and Geology, University of Würzburg, Ingenieurbüro Will, Vermessung, Geoinformatik, German Federal Agency for Cartography and Geodesy (BKG)
Cultivating crops for biogas production, such as silage maize, often requires a high level of land use and landscape fragmentation, which can affect the habitat structure of flora and fauna and cause an associated loss of biodiversity. Specially developed perennial wild plant mixtures (WPMs) are becoming increasingly relevant as alternative biogas crops. Although their energy yield is lower than that of maize, they offer a range of positive ecological effects in return, from their species composition and potential for pollinators to improved soil and water quality, e.g., by saving on fertilizers, generally reducing the amount of maintenance required, and reducing the potential for soil erosion. Socio-ecologically, a positive effect on preserving small-scale farming structures is being discussed. A comprehensive ecosystem approach, coupled with consideration of economic potential, primarily requires a spatiotemporal analysis of the flowering areas cultivated with WPMs compared to fields planted with other biogas crops. This study deals, inter alia, with the satellite-based observation of WPM fields in a topographically heterogeneous area in northern Bavaria, Germany. Based on multispectral Sentinel-2 data, spectral vegetation indices (NDVI, NDWI, and NDMI) are employed to depict and describe the phenology of WPM flowering fields compared to maize fields. The integration of meteorological parameters and arable yield indices supports the analysis of different phenological stages over seven years (2017-2023) against meteorological extremes, such as drought. The results provide important insights into the vegetation development of the WPMs. For instance, they exhibit higher vitality than the maize fields even with lower precipitation, indicating a higher resilience to water stress and underlining the importance of WPMs for the energy transition, climate change and biodiversity.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: The WorldCereal Reference Data Module: Open sharing and collection of agricultural reference data globally

Authors: Juan Carlos Laso Bayas, Hendrik Boogaard, Santosh Karanam, Arun Kumar Pratihast, Christina Butsko, Jeroen Degerickx, Kristof Van Tricht, Zoltan Szantoi, Steffen Fritz
Affiliations: International Institute for Applied System Analysis, Wageningen Environmental Research (WENR), Wageningen University & Research, VITO, European Space Agency, Department of Geography & Environmental Studies, Stellenbosch University
Crop monitoring on a global scale is an important endeavor in light of SDG 2, “Zero Hunger”, enabling improved decision-making in individual countries related to food security. The use of specialized classification algorithms based on Earth Observation (EO) data for crop (type) mapping is not new, but remains barely accessible to non-experts. The ESA-funded WorldCereal project is effectively lowering the barriers in this respect by providing an open and flexible cloud-based processing system that allows the production of local to global 10-meter resolution seasonal cropland and crop type maps. To ensure the high local accuracy of these products, and hence guarantee their widespread adoption and use, the classification algorithms require high-quality reference data on land cover and crop types. The WorldCereal Reference Data Module (RDM) is the part of the project in which diverse types of reference data have been collected and harmonized, to be later sampled by the WorldCereal system to produce highly accurate global crop maps. The WorldCereal RDM offers anyone the chance to independently host and access their harmonized crop reference data through a web-based interface and API. The web-based interface offers a highly automated and AI-supported data upload and harmonization tool, ensuring, for instance, seamless compatibility of crop type labels with the official WorldCereal crop type legend. Any data sets provided to the WorldCereal RDM are initially treated as private, but they can easily be shared with the WorldCereal consortium or made public by the person or organization providing them. Upon sharing or publishing a data set, the WorldCereal RDM team supports the user in selecting an appropriate license, crediting original sources and performing semi-automated quality control. 
The scope of the RDM could be extended in the future beyond crop type mapping to host other agricultural data (e.g., biomass, yield, phenology), in turn extending the range of potential applications in the field of agricultural monitoring. As the WorldCereal project ends in 2026, there is a clear need to transfer, host and sustain the WorldCereal reference data repository and the RDM in a new infrastructure. This could be organized as a dedicated place where the GEOGLAM community: 1) has access to harmonized open in-situ reference data; 2) receives guidance on open in-situ reference data sharing, including data rights, sharing policies and agreements for best practices; and 3) can contribute additional in-situ data, via standardized interfaces, to form a harmonized repository bigger than the sum of its individual parts. The presentation will demonstrate the use of the WorldCereal RDM, accessing the web and API instances, and will also present our short-term plans and long-term roadmap for how we intend to further support and promote the collection, harmonization, centralized hosting and exchange of in-situ reference data for crop mapping purposes, through collaborations with GEOGLAM, the GEOSS Platform and FAO.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Cost-efficiency of Multiscale Crop Acreage Estimation: The Wheat Use-Case in Pakistan

Authors: Boris Norgaard, Sophie Bontemps, Pierre Houdmont, Prof. Pierre Defourny
Affiliations: UCLouvain-Geomatics
Agricultural statistics are a critical tool in shaping and monitoring agricultural policies. Timely and accurate information on crop area estimation and yield allows for evidence-based decision-making in the agricultural sector, which is essential to ensure food security. The traditional methods of data collection for agricultural statistics are often time-consuming and labour-intensive, leading to delays and inconsistencies that can hinder timely and appropriate policy responses. This work evaluates the cost-efficiency of multiscale remote sensing methods for crop acreage estimation, through a case study on irrigated wheat in Sindh province, Pakistan. Sentinel-2 (S2) imagery was combined with very high-resolution (VHR) Pléiades images and statistically sound in-situ data collected within a stratified area sampling frame to estimate wheat acreage. Thirty-four primary sampling units (PSUs) were collected within the statistical survey in an area of interest (AOI) composed of four districts: Sanghar, Khairpur, Matiari and Naushero Feroze. An acreage estimate of wheat in the AOI was first computed from the statistical survey data only, as a baseline. S2 time-series data were then ingested and pre-processed by the ESA Sen4Stat system to generate a wheat map, which was used to compute a regression estimator with the statistical survey data. This made it possible to correct the bias related to classification errors in the map and to obtain unbiased acreage estimates with a reduced confidence interval amplitude. In a third scenario, twenty-two Pléiades images were segmented and classified into three classes: wheat, other crops, and non-crop areas. The rationale behind this approach was that highly accurate segmentation of homogeneous areas in VHR images, combined with their classification, could enable the creation of EO-derived PSUs. This process, in turn, would effectively increase the sampling rate and improve the precision of in-situ data.
Using this method, 27 additional PSUs derived from Earth Observation (EO) data were generated and incorporated into another regression estimator. All variations of the regression estimator highlighted a tendency towards overestimation of wheat by the S2-based map. A cost-efficiency analysis revealed that combining Sentinel-2 and Pléiades data improved the accuracy of acreage estimates while maintaining economic feasibility. The study quantified the trade-offs between variance reduction and additional costs relative to the cost of acreage estimation based on the statistical survey only. It was shown that complementing an S2-based crop type map with exhaustive statistical in-situ data was the most cost-efficient acreage estimation method. The regression estimators using Pléiades data were, as expected, associated with a higher cost, but still achieved a marked reduction in confidence intervals with a favourable cost-benefit ratio. A second part of the work focused on characterizing the bias observed in the S2-based crop type map using the Pléiades VHR map. Pixels causing the observed overestimation in the map were identified, and it was shown that most of them were located at plot borders or resulted from salt-and-pepper effects in the classification. This work underscores the potential of multiscale remote sensing for agricultural monitoring through an original approach to cost-efficiency assessment, with scalability depending on local conditions and available resources. The results highlight the benefits of integrating remote sensing with statistical estimators to provide robust and financially viable solutions for agricultural statistics improvement.
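As a minimal illustration of the regression-estimator principle described in this abstract, the sketch below combines ground-observed wheat fractions in sampled PSUs with the corresponding map-derived fractions to correct classification bias. All numbers are invented for illustration; the actual Sen4Stat workflow is considerably more elaborate.

```python
import numpy as np

# Hypothetical wheat fractions: in-situ observations in sampled PSUs and
# the S2-map fractions for the same PSUs (invented values).
ground = np.array([0.42, 0.55, 0.38, 0.61, 0.47])  # in-situ wheat fraction per PSU
mapped = np.array([0.50, 0.63, 0.45, 0.70, 0.52])  # map wheat fraction, same PSUs
map_mean_aoi = 0.58       # mean map wheat fraction over the whole AOI (assumed)
aoi_area_ha = 1_000_000   # total AOI area in hectares (hypothetical)

# Slope of the regression of ground fractions on map fractions
b = np.cov(ground, mapped, ddof=1)[0, 1] / np.var(mapped, ddof=1)

# Regression estimator: survey mean corrected by the exhaustive map,
# which removes the map's classification bias and reduces variance
wheat_fraction = ground.mean() + b * (map_mean_aoi - mapped.mean())
wheat_area_ha = wheat_fraction * aoi_area_ha

# The variance shrinks roughly by (1 - r^2) relative to the survey-only
# estimator, where r is the ground-map correlation
r = np.corrcoef(ground, mapped)[0, 1]
var_reduction = 1 - r**2
```

The stronger the correlation between map and ground fractions, the larger the gain over the survey-only baseline, which is the mechanism behind the confidence-interval reduction reported above.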

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Crop Specific Drought Impacted Yield Assessment using Multi-Source Data and Modelling Approaches

Authors: Esther Shupel Ibrahim, Dr. rer. nat. Gohar Ghazaryan, Dr Marlene Palka, Ms. Rachel Escueta, Ms. Anna Taege, Prof. Dr. rer. nat. Claas
Affiliations: Leibniz Centre for Agricultural Landscape Research (ZALF), Humboldt University of Berlin, National Space Research and Development Agency
Drought and heat stress pose severe risks to global populations and ecosystems, with their frequency and intensity increasing due to global climate change. Europe, including Germany, is not exempt from these challenges. Droughts have remained persistent over the past two decades in Germany, especially in the north-eastern region, with farmers reporting significant yield losses and major impacts on ecosystems and food production. There has been increasing research interest in understanding the causes and predicting the occurrence of droughts in Germany using data-driven as well as process-based modelling. There is a need to expand methodologies to develop local, crop-specific indicators to assess the risks of crop failures. Therefore, this study aimed to develop a robust framework for monitoring drought, including its severity, intensity, and frequency, by integrating meteorological, hydrological, and remote sensing indicators. The ultimate goal was to address knowledge gaps in drought risk analysis, define critical thresholds, and improve crop-specific risk adaptation by combining hazard monitoring, exposure data, and yield estimates for maize, winter wheat, and rapeseed. Several datasets were used to assess the drought impact, including meteorological and hydrological observations as well as multiscale remote sensing data from MODIS, Landsat and Sentinel-2. First, drought indicators from 1991 to 2020 were analysed for their statistical importance in predicting yields. Seven key drought indices were considered: the Standardized Precipitation Index (SPI), the Standardized Precipitation Evapotranspiration Index (SPEI), the Soil Moisture Index (SMI) of the German drought monitor, the Drought Index (dMI), Soil Moisture (nFK), Actual Evapotranspiration (ETa), and the Vegetation Condition Index (VCI). Anomalies based on each indicator were estimated, and from these anomalies the frequency and the extent of the drought-affected area were derived and correlated with yields.
In addition, a random forest model was employed to analyse the relationships between the drought indicators and crop-specific yields (2015–2021). A data cube with 34, 114, and 147 monthly input features for maize, winter wheat, and rapeseed, respectively, enabled accurate modelling of drought impacts across growing seasons. Besides the random forest assessment of past droughts, a mechanistic agro-ecosystem model, MONICA, was used to simulate crop production under drought stress. For this study, we used MONICA to run maize, winter wheat, and rapeseed simulations using several CMIP6 models. In total, six historic simulations (1991–2019) and 74 future simulations (2023–2080) were generated under different CMIP6 climate scenarios. The simulations were programmed to produce two predictions for each crop, considering the impact of drought on crop yields to estimate the drought risks; to achieve this, the ‘water deficit response’ setting in MONICA was either activated or deactivated. The findings revealed a strong negative correlation between yields and drought indicators, highlighting the significant impact of droughts on crop production. Maize exhibited the highest correlation (R = −0.62), followed by winter wheat (R = −0.27) and rapeseed (R = −0.11) in the VCI analysis. The most critical drought month impacting yields is July, with all indicators showing their highest correlation in that month (R values from −0.62 to −0.85) and the most important indicators being dMI, followed by SPEI and SPI. This is corroborated by the machine learning results, with only these three indicators and SMI appearing among the top ten random forest variables of importance for all three crops. However, the most important months vary per crop due to variations in the cropping season: June–July for maize, and March and June for winter wheat and rapeseed.
The results from the random forest show maize as the most affected crop, especially in 2015, 2019 and 2020, and most critically in 2018, when all maize-cultivated areas in Saxony were predicted to be affected by drought, whereas 2016 and 2021 were predicted to have no significant drought impact. In contrast, only 2018 and small portions of 2015 were predicted with drought for winter wheat and rapeseed. The results of the agro-ecosystem model were consistent, with severe yield losses revealed in 2003, 2015, 2019 and, most severely, in 2018, mainly in the north-western region of Saxony. Yield losses of up to 40–50% were shown for maize. The future predictions using the extreme RCP8.5 scenario show up to 40 years of drought-impacted maize yields between 2023 and 2080, with yields of 5,700–12,000 kg ha-1 due to water deficit, while the predicted impact of drought was minimal for winter wheat (7 years) and rapeseed (5 years). The findings provide critical insights into the spatial and temporal dynamics of drought and its effects on crop yields, highlighting the importance of locally relevant drought indicators for early warning and adaptation strategies. Moreover, through the use of remote sensing data, hazards can be detected comprehensively and dynamically by monitoring key parameters: vegetation analysis offers near real-time assessments of drought-driven yield fluctuations, and high-resolution ETa revealed field-level drought impacts. This provides a platform for near real-time drought monitoring.
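The anomaly-and-correlation step described in this abstract can be sketched in a few lines: an indicator time series is standardized against its climatology, drought years are flagged from the anomaly, and the indicator is correlated with yields. The data below are synthetic placeholders, not the study's observations.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1991, 2021)
vci = rng.uniform(20, 80, size=years.size)                 # hypothetical July VCI values
yields = 40 + 0.5 * vci + rng.normal(0, 3, years.size)     # synthetic yields tracking VCI

# Standardized anomaly of the indicator relative to its 1991-2020 climatology
vci_anom = (vci - vci.mean()) / vci.std(ddof=1)

# Drought years flagged where the anomaly falls below -1 standard deviation
drought_years = years[vci_anom < -1]

# Pearson correlation between indicator and yields (the study reports the
# equivalent R values per crop and per month)
r = np.corrcoef(vci, yields)[0, 1]
```

The frequency and affected-area statistics reported above follow from counting flagged years (or flagged pixels) per period, and the per-month correlations are obtained by repeating the last step for each calendar month.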

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N-O)

Poster: Monitoring Citrus and Olive Phenology With Remote Sensing Methods

Authors: Marianna Hadjichristodoulou, Mr. Thrasos Stylianou, Mrs. Volha Dubovik, Mr. Filippos Charalambous, Ms. Eleni Loulli, Prof. Francisco Rovira-Más, Dr. Loukas Kanetis, Dr. Menelaos Stavrinides, Prof. Diofantos G. Hadjimitsis, Dr. Christiana Papoutsa
Affiliations: Cyprus University of Technology (CUT), Department of Civil Engineering and Geomatics, Eratosthenes Centre of Excellence (ECoE), Environment & Climate Department, Polytechnic University of Valencia, Departamento de Ingeniería Rural y Agroalimentaria, Cyprus University of Technology (CUT), Department of Agricultural Sciences, Biotechnology and Food Science
Remote sensing methods allow detailed spatial and temporal monitoring of crop phenology. By understanding and monitoring phenological changes in different crops, a baseline can be established and used to detect changes associated with biotic and abiotic stresses. The current study focuses on two economically significant crops, olives and citrus, and aims at establishing a remote sensing phenology baseline for the subsequent detection of abiotic stress as well as biotic stress due to pest and disease infections. Many citrus species typically have 4–5 flushes per year; they flower in spring and set fruit in spring to summer, while fruit maturation begins in summer and ends in autumn. These seasonal phenological patterns differ depending on rootstock and variety. In olive trees, flushing takes place in spring and autumn, flowering occurs in spring, and fruit set extends from late spring to the end of autumn. Phenological changes during the annual crop cycle are associated, among others, with variations in leaf chlorophyll content and leaf area. Such changes can be effectively monitored using remote sensing techniques by calculating various vegetation indices (VIs) from available satellite data. In the current work, a time-series analysis of VIs over a seven-year period was implemented. More specifically, multitemporal Sentinel-2 data were used to calculate the following VIs: the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), the Chlorophyll Index Green (CIgreen) and the Leaf Area Index (LAI). The aim was to identify and characterize phenological changes in the two crops. Both NDVI and EVI quantify vegetation greenness, with EVI being more sensitive in areas of dense vegetation; CIgreen provides insights into chlorophyll dynamics, and LAI reflects leaf area variations.
Time-series analyses of VIs offer a unique perspective on monitoring crops, distinguishing phenological stages and identifying trends and anomalies caused by abiotic and biotic stresses. The primary objective of the current study is to correlate the variations of the VIs, retrieved from satellite data, throughout the key phenological stages of the two crops and to evaluate whether common patterns are observed. Time-series analysis of Sentinel-2 data revealed clear seasonal variations. In citrus orchards, changes in CIgreen, NDVI, EVI and LAI were identified which are potentially related to fruit colouring taking place in September. Additionally, at the beginning of flushing, observed in March, distinct similarities became apparent in LAI and CIgreen. Furthermore, the NDVI and EVI changes can be related to the fruit developmental stage, offering insights into the physiological changes that occur during this phase of the tree cycle. In olive orchards, the NDVI, EVI, CIgreen, and LAI indices exhibited higher values from November to January than during the rest of the year; in contrast, during the flowering period, the indices showed moderate to low values. It should be noted that ground-based calibration and validation are essential to establish precise and explicit correlations with specific phenological stages for both crops. The current work underscores the potential of remote sensing technologies for non-invasive crop monitoring. By identifying critical phenological changes, the study aims to enhance the application of satellite monitoring to improve crop management across different agricultural crop systems. Acknowledgements: The authors acknowledge the ‘EXCELSIOR’: ERATOSTHENES: Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project (www.excelsior2020.eu).
The ‘EXCELSIOR’ project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No 857510, from the Government of the Republic of Cyprus through the Directorate General for the European Programmes, Coordination and Development and the Cyprus University of Technology. The present work was carried out in the framework of CERBERUS project that has received funding from the Horizon Europe HORIZON-CL6-2023-GOVERNANCE-01-16- HORIZON Research and Innovation Actions, under Grant Agreement No 101134878.
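The vegetation indices named in the abstract above reduce to simple band combinations of Sentinel-2 surface reflectance (B2 blue, B3 green, B4 red, B8 NIR). A minimal sketch with invented reflectance values (the EVI coefficients are the standard MODIS-heritage ones; LAI, being model-derived rather than a band ratio, is omitted):

```python
# Hypothetical Sentinel-2 surface reflectances over an orchard pixel
b2, b3, b4, b8 = 0.05, 0.08, 0.06, 0.40  # blue, green, red, NIR

# Normalized Difference Vegetation Index: greenness from red/NIR contrast
ndvi = (b8 - b4) / (b8 + b4)

# Enhanced Vegetation Index: adds blue-band and soil-adjustment terms,
# keeping sensitivity over dense canopies
evi = 2.5 * (b8 - b4) / (b8 + 6 * b4 - 7.5 * b2 + 1)

# Chlorophyll Index Green: NIR-to-green ratio minus one, tracking
# chlorophyll dynamics
ci_green = b8 / b3 - 1
```

Computing these per acquisition date and plotting them over the seven-year period yields the seasonal curves from which the flushing, flowering and fruit-set stages are read off.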

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: A.01.02 - POSTER - Vertical Coupling in the Whole Atmosphere System

The Earth's atmosphere extends from the Earth's surface to the upper thermosphere and ionosphere. While the various vertical domains are governed by different processes, they are coupled through chemical, physical and dynamical processes (internal waves, solar and magnetospheric forcings, etc.). The implications of these couplings for the weather and climate system are subject to ongoing research. Progress is being made in studying, understanding and modelling the atmosphere as a whole. This session invites contributions dealing with the vertical coupling mechanisms between these vertical domains.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Towards the concept of SULi index - Swarm-based ULF Lightning index for detection of magnetic disturbances triggered by thunderstorm activity

Authors: Ewa Slominska, Jan Slominski
Affiliations: OBSEE, CBK PAN
Magnetic storms, severe thunderstorms, earthquakes and volcanic eruptions belong to the class of natural hazard phenomena which generate perturbations at different temporal and spatial scales. All four contribute to the generation of electromagnetic waves that can propagate in the upper layers of the Earth's atmosphere and ionosphere. Magnetic field observations provide the most detailed description of the first of these phenomena. A vast range of indices is used for proper characterization of geomagnetic conditions and magnetic activity; they accurately quantify the strength of currents in the magnetosphere-ionosphere system and are provided with time cadences from minutes to hours. Additionally, numerous studies evaluate the intensity of magnetic activity manifested as ultra-low-frequency fluctuations, commonly known as electromagnetic pulsations (Pc waves). Pc pulsations are divided into subgroups on the basis of their typical periods, and several satellite missions have allowed the construction of global indices of this type of activity on the basis of their typical spectral features. In the case of thunderstorms and lightning activity, the most accurate quantification can be obtained with magnetic and electric field registrations conducted in the VLF range. However, worldwide lightning activity strongly contributes to the magnetic background noise level in the ELF/ULF frequency range. The Swarm mission has proved its capability to register small-scale magnetic perturbations originating from intense thunderstorms. Burst-mode registrations from the ASM instrument allow for detection of whistler-type waves, while the VFM 50 Hz data are used for monitoring a special class of lightning events called TLEs (Transient Luminous Events). TLEs, manifested as sharp peaks clearly standing out from the background noise of magnetic field intensity, cover the spectral range between 1.5 and 8 Hz.
Additionally, simultaneous registrations with the Langmuir probe made it possible to reject the hypothesis that the most severe lightning contributes to the seeding of ionospheric irregularities. A unique database of detected cases provided a set of requirements that allowed development of the SULi index (Swarm-based ULF Lightning index). The SULi index relies on the formalism of Stokes parameters and takes into account the fact that TLEs mostly affect the toroidal components of the magnetic field (δB_N, δB_E). The SULi index aims to provide a climatological representation of magnetic disturbances that originate from phenomena other than geomagnetic activity. Furthermore, representation of small-scale, locally induced magnetic perturbations is of high importance for the examination of Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) processes.
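To give a flavour of the Stokes-parameter formalism mentioned above, the sketch below computes the first two spectral Stokes parameters for a synthetic pair of toroidal components, restricted to the 1.5–8 Hz band quoted for TLE signatures. Everything here (the signal, the band-integrated proxy) is an invented illustration, not the actual SULi definition.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 50.0                                  # VFM sampling rate, Hz
t = np.arange(0, 4, 1 / fs)                # 4 s of synthetic 50 Hz data

# Hypothetical TLE-like pulse near 5 Hz riding on noise, in both components
pulse = np.exp(-((t - 2.0) ** 2) / 0.05) * np.sin(2 * np.pi * 5 * t)
b_n = pulse + rng.normal(0, 0.05, t.size)          # toroidal δB_N
b_e = 0.6 * pulse + rng.normal(0, 0.05, t.size)    # toroidal δB_E

# Fourier transforms restricted to the 1.5-8 Hz band
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 1.5) & (freqs <= 8.0)
fn, fe = np.fft.rfft(b_n)[band], np.fft.rfft(b_e)[band]

# First two spectral Stokes parameters of the (δB_N, δB_E) pair
s_i = np.abs(fn) ** 2 + np.abs(fe) ** 2    # total power per band bin
s_q = np.abs(fn) ** 2 - np.abs(fe) ** 2    # power imbalance between components

suli_proxy = s_i.sum()                     # crude band-integrated activity measure
```

A real index would additionally use the remaining Stokes parameters (cross-spectral terms) to characterize the polarization of the disturbance, which is what distinguishes TLE-driven fluctuations from geomagnetic activity.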

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Investigation of Swarm disturbances during solar quiet intervals by ground-based observations of the mesosphere and ionosphere – Results from the project QUID-REGIS

Authors: Patrick Hannawald, Jaroslav Chum, Petra Koucká Knízová, Lisa Küchelbacher, Ján Kubancák, Simon Mackovjak, Carsten Schmidt, Vladimir Truhlik, Jaroslav Urbar, Sabine Wüst
Affiliations: German Aerospace Center, German Remote Sensing Data Center, Czech Academy of Sciences, Institute of Atmospheric Physics, Slovak Academy of Sciences, Institute of Experimental Physics
The QUID-REGIS (QUiet Ionospheric Disturbances - REsearch based on Ground-based mesospheric and Ionospheric data with Swarm data) project investigates whether ionospheric disturbances, as detected by the European Space Agency's Swarm satellites, originate in the lower atmospheric layers in cases where there is no solar forcing that could cause them. To this end, various multi-year ground-based datasets are brought into context with various parameters from the Swarm satellites during solar quiet intervals, to find coherences between the different atmospheric layers, i.e. the topside ionosphere, the bottomside ionosphere and the upper mesosphere. The ground-based dataset consists of GNSS, digisonde and continuous HF Doppler radar observations of the ionosphere, as well as images and spectra from airglow imagers and spectrometers (OH, green line, red line). The instruments are spread over Europe. In order to compare the different ground-based datasets, several indices or proxies are derived to make them more comparable to each other. For example, from the OH airglow spectrometers the gravity wave potential energy density is derived as a proxy for gravity wave activity in the altitude region around 90 km (the height of the OH airglow layer), as well as a station-based dynamic activity index related to longer-period oscillations, e.g. from planetary waves. From the HF Doppler radar, the medium-scale ionospheric wave activity (MSTIDs) is derived, with overall ionospheric conditions monitored by digisondes (e.g. foF2). As a proxy for ionospheric variability, the GNSS rate of TEC index (ROTI) is also used. Further proxies and datasets constitute the database constructed for the purposes of the project. The data are investigated on a statistical basis: the datasets are aggregated daily and merged in time so that they are directly comparable.
Different time series analysis methods are applied to indicate coherences in the datasets. In addition, case studies are carried out on the basis of the higher resolution data and also on the data underlying the indices in order to complete the view of the observed variability. In this contribution we present results of the statistical analyses between the various ground-based and Swarm datasets and also show results of specific case studies.
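The ROTI proxy mentioned above is conventionally defined as the standard deviation, over a short window, of the rate of change of GNSS slant TEC. A minimal sketch with synthetic data (the sampling interval, window length and TEC values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 300, 30)                           # 30 s sampling over a 5-min window
tec = 25 + 0.01 * t + rng.normal(0, 0.2, t.size)    # hypothetical slant TEC arc, TECU

# Rate of TEC (ROT) in TECU/min, then ROTI as its windowed standard deviation
rot = np.diff(tec) / (np.diff(t) / 60.0)
roti = rot.std(ddof=1)
```

Aggregating such window values per station and per day gives the daily index that is merged with the other ground-based proxies for the statistical comparison with Swarm.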

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: A.01.01 - POSTER - Advances in atmospheric composition

High-quality global atmospheric composition observations are essential for our understanding of atmospheric responses and contributions to climate change, to document trends in greenhouse gases, their precursors, and air pollutant emissions, and to assess and underpin new policy measures for climate mitigation and adaptation.
This session will present the latest results on how atmospheric composition measurements can be used to monitor climate change. Furthermore, changes in the troposphere and stratosphere and their coupling (e.g. circulation and transport of trace gases, chemical composition, aerosol information) will be discussed. We invite presentations on data product improvements and validation aspects, as well as studies using satellite data for applications in atmospheric chemistry, composition monitoring and air quality from current and future missions.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Retrievals of Atmospheric Carbonyl Sulfide Over Land and Ocean Surfaces From IASI Satellite Observations

Authors: David P. Moore, Jeremy J. Harrison, Chris Wilson, Michael P. Cartwright, Richard J. Pope, Daniel J. L. Coxon, Martyn P. Chipperfield, John J. Remedios
Affiliations: National Centre for Earth Observation (NCEO), University of Leicester, School of Physics and Astronomy, University of Leicester, National Centre for Earth Observation (NCEO), University of Leeds, School of Earth and Environment, University of Leeds
Carbonyl sulfide (COS) is the most abundant sulfur-containing gas in the atmosphere and a dominant source of stratospheric aerosol. COS can also be used as a proxy for photosynthesis, and its measurement has the potential to quantify global gross primary productivity. In recent years there has been an improved understanding of the location and magnitude of COS fluxes, and progress has been made in incorporating satellite datasets to provide increased global coverage and aid efforts to reduce uncertainties in current budget estimates. The Infrared Atmospheric Sounding Interferometer (IASI) instruments onboard the MetOp series of satellites have provided a continuous record of nadir-viewing radiance measurements since 2007. This high-spectral-resolution dataset offers the potential to observe COS over almost 20 years. Previous work by the authors demonstrated the quality of COS retrieved from IASI-A and IASI-B radiances for 2018 using an adapted version of the University of Leicester IASI retrieval scheme (ULIRS). In that work, retrievals were performed primarily over the ocean. Here, we extend the approach to land retrievals, with a particular focus on tropical regions. We have adapted the scheme to incorporate the improved v3 Combined ASTER and MODIS Emissivity database over Land (CAMEL) and included retrievals of emissivity alongside COS to improve the quality of the fit. The retrieval of COS is complicated by overlapping CO2 spectral lines in the most suitable COS band, 2000–2100 cm-1. Here, we use new CO2 spectroscopic line parameters and show their effect on the retrieval compared to the older spectroscopy. We compare IASI COS results with TOMCAT 3-D chemical transport model simulations from the University of Leeds and with measurements from the Network for the Detection of Atmospheric Composition Change (NDACC) ground-based observation network.
Finally, we present the results of employing the land and ocean COS data in the INVICAT 4D-Var flux inversion scheme (based on TOMCAT) and discuss the improvements compared with previous studies which have only considered the ocean data.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: TROPOMI on Sentinel-5 Precursor and Lessons Learned for Sentinel-4 & Sentinel-5

Authors: Pepijn Veefkind
Affiliations: KNMI
Sentinel-5 Precursor (S5P), with its payload TROPOMI (Tropospheric Monitoring Instrument), is the first of the Sentinel missions dedicated to atmospheric composition. S5P was launched in 2017 for a 7-year nominal mission. While the mission recently completed its nominal lifetime, it is in excellent shape and, from a technical point of view, is expected to last at least 10 more years. In this presentation we will present the lessons learned from TROPOMI, which are particularly relevant now because of the upcoming launches of the Sentinel-4 UVN and Sentinel-5 UVNS instruments. Specific emphasis will therefore be given to the first part of the mission, including the commissioning of TROPOMI. We will also highlight some of the latest developments in the operational and scientific data products from TROPOMI.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Assessment of Cloud Product Impact on Tropospheric NO2, HCHO and SO2 Retrievals from OMI and TROPOMI

Authors: Huan YU, Folkert Boersma, Isabelle De Smedt, Nicolas Theys, Maarten Sneep, Pepijn Veefkind, Jonas Vlietinck, Michel Van Roozendael
Affiliations: Royal Belgian Institute for Space Aeronomy (BIRA-IASB), Royal Netherlands Meteorological Institute (KNMI)
As part of the ESA Climate Change Initiative (CCI+) Precursors project, significant efforts have been dedicated to assessing the quality of long-term trace gas datasets. These datasets include nitrogen dioxide (NO2), formaldehyde (HCHO), sulfur dioxide (SO2), glyoxal (CHOCHO), carbon monoxide (CO), and ammonia (NH3), which are key precursors for aerosols and ozone. Over the past decades, these data have been collected from various satellite missions such as GOME, SCIAMACHY, GOME-2, OMI, TROPOMI, IASI, and MOPITT. The main objective of the project is to develop consistent, harmonized, long-term data records for these precursor gases. Clouds have a significant impact on satellite measurements of tropospheric trace gases derived from UV-Vis-NIR sensors, and several studies have demonstrated that variations in cloud products contribute to biases in trace gas retrievals between different sensors. This study evaluates the influence of clouds on tropospheric NO2, HCHO, and SO2 retrievals from OMI and TROPOMI, focusing on the cloud products used in the operational processing. These include OCRA (Optical Cloud Recognition Algorithm) and ROCINN (Retrieval of Cloud Information using Neural Networks) for TROPOMI HCHO and SO2, FRESCO-S (Fast Retrieval Scheme for Clouds from the Oxygen A-band for Sentinel) for TROPOMI NO2, and OMCLDO2 (the OMI cloud product based on O2–O2 absorption at 477 nm) for OMI. Additionally, the analysis includes the TROPOMI O2–O2 cloud product, which follows an approach similar to OMI's OMCLDO2, as well as newly developed harmonized cloud datasets for OMI and TROPOMI based on O2–O2 absorption. We focus on investigating the dependence of tropospheric air mass factors (AMFs) on the cloud products. Apart from cloud parameters, the AMF calculation uses consistent input data, including the latest version of the TROPOMI surface albedo climatology and CAMS global reanalysis data for the a priori profiles. The analysis concentrates on selected polluted regions worldwide.
The aims are (1) to evaluate the importance of cloud corrections for each species, and (2) to examine the consistency of these corrections between the two sensors.
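To illustrate why the AMF, and hence the retrieved vertical column, depends on the cloud product, a minimal sketch of the standard independent-pixel cloud correction follows. All numbers are invented; operational AMFs are computed with radiative transfer lookup tables and a priori profiles rather than fixed scalars.

```python
# Hypothetical clear-sky and cloudy-sky air mass factors for a polluted scene
amf_clear, amf_cloudy = 1.8, 0.9

# Cloud radiance fraction delivered by the cloud product (e.g. OCRA/ROCINN,
# FRESCO-S or an O2-O2 product); this is where product differences enter
crf = 0.3

# Independent-pixel approximation: radiance-weighted mean of the two AMFs
amf = crf * amf_cloudy + (1 - crf) * amf_clear

# Vertical column density from the fitted slant column density
scd = 6.0e15                 # molec/cm^2, hypothetical
vcd = scd / amf
```

Because `crf` (and the retrieved cloud pressure behind `amf_cloudy`) differ between cloud products, the same slant column maps to different vertical columns, which is exactly the sensor-to-sensor bias the study quantifies.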

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: The Arctic Observing Mission - Pre-Formulation Study Progress and International Partnerships

Authors: Cristina Surdu, Dr. Ray Nassar, Dennis Dufour, Dam-Tam Nguyen, Dr. Alec Casey, Matt Arkett, Dr. Chris Sioris, Dr. Chris McLinden, Joseph Mendonca, Dr. Josep Aparicio, Dr. Shen-En Qian, Dr. Isabelle Jean, Nadhem Rojbi, Myriam Plourde, Dr Alexander Trishchenko, Lisa Marie Vaccaro
Affiliations: Environment and Climate Change Canada (ECCC), Canadian Space Agency (CSA), Natural Resources Canada (NRCan)
The Arctic Observing Mission (AOM) is a unique and innovative Canadian-led mission concept that would provide the world's most detailed view of the Arctic. Consisting of two satellites positioned in a highly elliptical orbit (HEO) to make quasi-geostationary observations, AOM would provide new observations for weather, greenhouse gases (GHGs), air quality and space weather over northern latitudes (~40-90°N) with unprecedented frequency and density. This mission concept is a collaborative effort between the Canadian Space Agency, Environment and Climate Change Canada and Natural Resources Canada. It is envisioned that the mission would be implemented in collaboration and cooperation with international partners to address observation gaps in a rapidly changing environment. AOM observations would primarily be used to inform, among other applications, operational weather forecasts, environmental monitoring, situational awareness and scientific research aligned with key priorities of the Government of Canada and prospective partners, namely US and European counterparts at this time. AOM is undergoing a pre-formulation study (PFS), scheduled to be completed by early 2025, as a step toward a future funding request. With a proposal to deploy the Advanced Baseline Imager (ABI) to HEO, AOM would extend the coverage of the National Oceanic and Atmospheric Administration's (NOAA) geostationary ABI instrument to more northerly latitudes, leading to improved weather forecasts in the Arctic. Northern GHG observations would improve the ability to detect and monitor changes in the Arctic and boreal carbon cycles, including CO2 and CH4 emissions from permafrost thaw. Air quality observations would enhance the ability to monitor anthropogenic emissions and mid-latitude pollution transport, which would improve air quality forecasts.
Space weather observations would support operational space weather forecasting to protect valuable space-based assets and improve scientific understanding of solar-terrestrial interactions. This presentation will provide an overview of AOM’s progress on technical and science aspects of the pre-formulation study as well as provide updates on international partnership scenarios and status. This will include a summary of the outcomes of AOM’s mission design contract which will end in late 2024. The objectives of the mission design contract were to assess payload and orbit options, update conceptual designs for the dispersive air quality spectrometer and the GHG Imaging Fourier Transform Spectrometer (IFTS), and develop cost estimates for the entire AOM space system and ground segment.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Improved retrievals of SO₂ plume height and column density using TROPOMI UV band 2 measurements

Authors: Lorenzo Fabris, Nicolas Theys, Lieven Clarisse, Bruno Franco, Hugues Brenot, Jonas Vlietinck, Thomas Danckaert, Huan Yu, Jeroen van Gent, Michel van Roozendael
Affiliations: Royal Belgian Institute for Space Aeronomy (BIRA-IASB), Université libre de Bruxelles (ULB)
Sulfur dioxide (SO₂), released by volcanoes, considerably affects human health, society, and atmospheric composition in general. While space nadir-viewing sensors have been providing valuable information on the SO₂ Vertical Column Density (VCD) for several decades, accurate retrieval of its Layer Height (LH) remains challenging, yet essential to further understand volcanic events, mitigate aviation hazards, constrain SO₂ emissions and assess their global impact on climate. Current UV fitting algorithms often struggle with long computation times and reduced sensitivity, especially in aerosol-rich environments. Here, we have developed an improved SO₂ LH (and VCD) retrieval algorithm for the high-resolution TROPOspheric Monitoring Instrument (TROPOMI). To this end, the second UV spectral band (BD2) is exploited instead of the third band (BD3) traditionally used for sulfur dioxide, thereby taking advantage of SO₂'s greater absorption at shorter UV wavelengths. We first performed retrievals on an extended set of synthetic spectra representative of TROPOMI measurements, using the Look-Up Table Covariance-Based Retrieval Algorithm (LUT-COBRA) [1, 2]. These analyses aimed at assessing the sensitivity of the algorithm to varying atmospheric, spectroscopic and observation conditions. We also tested alternative approaches, e.g., combining LUT-COBRA with the Optimal Estimation Method (OEM) [3]. Overall, we find that the SO₂ LHs and VCDs in the second band are more accurate and the associated retrieval errors are drastically lower, particularly for SO₂ in the Upper Troposphere/Lower Stratosphere (UTLS). Our algorithm was then applied to real TROPOMI measurements for different volcanic events, ranging from major eruptions to minor degassing activities. The results show excellent consistency with the BD3 LUT-COBRA retrievals, while having less noise and a detection limit as low as 2.0 DU.
We also note that our approach has greater sensitivity than the current TROPOMI operational product for retrieving the SO₂ LH. Furthermore, these observations are in good agreement with plume height estimates derived from the IASI, CALIOP and OMPS-LP instruments. [1] N. Theys et al. Atmospheric Chemistry and Physics, 21(22):16727-16744, 2021. [2] N. Theys et al. Atmospheric Measurement Techniques, 15(16):4801-4817, 2022. [3] Rodgers. Inverse Methods for Atmospheric Sounding: Theory and Practice, volume 2 of Series on Atmospheric, Oceanic and Planetary Physics. World Scientific, 2000.
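The covariance-based estimator at the heart of COBRA-type retrievals can be illustrated with a minimal numerical sketch (our own simplified illustration with invented variable names and values, not the LUT-COBRA code): the column amount scaling a known absorption signature k is estimated by a matched filter built from the covariance S of signal-free background spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wav, n_obs = 40, 500

# Hypothetical SO2 absorption signature across the fitting window
k = np.exp(-0.5 * ((np.arange(n_wav) - 20) / 4.0) ** 2)

# Background spectra without SO2: correlated noise standing in for
# surface, cloud and instrumental variability
base = rng.normal(size=(n_obs, n_wav)) @ rng.normal(size=(n_wav, n_wav)) * 0.01
S = np.cov(base, rowvar=False) + 1e-6 * np.eye(n_wav)  # regularized covariance
S_inv = np.linalg.inv(S)

def cobra_column(y):
    """Matched-filter estimate of the column amount scaling the signature k."""
    return (k @ S_inv @ y) / (k @ S_inv @ k)

# Synthetic measurement containing 2.5 units of the SO2 signature
y = 2.5 * k + base[0]
col = cobra_column(y)  # recovers approximately the injected 2.5 units
```

The appeal of this estimator is that the covariance absorbs all non-target spectral structure, so no explicit fit of surface or cloud parameters is needed.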

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Comparison of mean age of air in ERA5, ERA-I, MERRA2 and JRA-3Q using the BASCOE chemistry transport model and observations from MIPAS, ACE-FTS, GLORIA-B and CAIRT

Authors: Sarah Vervacke, Quentin Errera, Stefan Bender, Dr Bernd Funke, Alex Hoffmann, Dr Michael Höpfner, Gabriele Poli, Dr Felix Plöger, Dr Jörn Ungermann, Florian Voet, Dr Björn-Martin Sinnhuber
Affiliations: Royal Belgian Institute of Space Aeronomy, Instituto de Astrofísica de Andalucía, CSIC, European Space Research and Technology Centre, European Space Agency (ESA/ESTEC), Institute of Applied Physics ‘N. Carrara’, Italian National Research Council, Karlsruhe Institute of Technology, Forschungszentrum Jülich
We present an intercomparison of the mean age of air (AoA) derived from three recent reanalyses: the European Centre for Medium-Range Weather Forecasts Reanalysis version 5 (ERA5) and its predecessor (ERA-Interim), the National Aeronautics and Space Administration’s Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA2), and the Japan Meteorological Agency’s 3-Quarter Century Reanalysis (JRA-3Q). AoA is computed using an idealized clock tracer within the Belgian Assimilation System for Chemical Observations (BASCOE) chemistry transport model. We examine the simulated AoA time series with a particular focus on differences between the reanalyses. Preliminary results indicate that MERRA2 and JRA-3Q show a decreasing AoA trend between 1990 and 2000, after which all three reanalyses exhibit a relatively stable AoA with no significant trends. ERA5 consistently provides the youngest AoA, suggesting faster stratospheric transport. These findings are compared with the results of Chabrillat et al. (2018), which analyzed older reanalysis versions (ERA-Interim and JRA-55) using a previous version of the BASCOE model, and with Ploeger et al. (2019, 2021), which presented AoA from the CLaMS model driven by ERA-Interim, MERRA2, JRA-55 and ERA5. We also compare with existing limb retrievals of age of air from MIPAS and ACE-FTS, as well as simulated retrievals from the future CAIRT mission, which is currently in phase A of development. Finally, the results are compared with two GLORIA-B balloon profiles. This study provides insights into the evolution of AoA estimates in successive reanalysis products and highlights key differences in stratospheric transport representations.
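The idealized clock-tracer diagnostic behind such AoA estimates reduces to a simple lag computation, sketched here with invented numbers (not BASCOE output): a tracer whose tropospheric reference value grows linearly in time dates each stratospheric air sample.

```python
import numpy as np

# Idealized "clock" tracer (illustrative values only): its tropospheric
# reference value grows linearly in time, so the lag of a stratospheric
# sample behind the reference gives the mean age of air.
growth_rate = 1.0                          # tracer units per year
t_now, t_start = 2020.0, 2000.0            # current time and tracer start
chi_ref = growth_rate * (t_now - t_start)  # reference value today

# Tracer values sampled at three stratospheric grid points
chi_sample = np.array([16.5, 14.0, 12.2])

# Mean age of air: how long ago the reference had each sampled value
mean_age = (chi_ref - chi_sample) / growth_rate  # ages in years
```

In a chemistry transport model the sampled values come from advecting the tracer with the reanalysis winds, so differences in transport show up directly as differences in the inferred ages.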

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Recent Global Trends in Urban Nitrogen Dioxide Observed from Space

Authors: Philipp Schneider, Amirhossein Hassani, Sam-Erik Walker, Sverre Solberg, Kerstin Stebel
Affiliations: NILU
The TROPOMI instrument onboard the Sentinel-5P platform now provides a fully consistent data set of tropospheric vertical column densities of nitrogen dioxide (NO2) spanning 2018 through 2024. We use these data to study the most recent trends in NO2 for over 5,400 urban areas worldwide, including all major megacities and metropolitan areas as well as thousands of smaller cities down to a population of 100,000 inhabitants. We obtained a labelled dataset of urban area polygons and extracted consistent daily time series from the TROPOMI NO2 TVCD product for each site using areal weighting, while applying an 80% minimum coverage threshold. We then fitted a Generalized Additive Model (GAM) to each time series to a) obtain non-linear smooth trends and b) control for meteorological effects by including ERA5 meteorology as predictor variables. The results indicate significant variability between regions and between individual sites. While most sites in Europe and North America exhibited decreasing NO2 levels over the study period, multiple sites, particularly in South Asia, showed very significant increases in NO2 (e.g. Kerman in Iran, with an increase of 43% over the five-year study period). Most sites show non-linear trend behaviour. The impact of the Covid-19 pandemic is evident in some regions but not universally. While prior studies have analyzed changes in tropospheric NO₂ in large urban areas using satellite data, our study is, to the best of our knowledge, the first to: 1) do so at an unprecedented scale of over 5000 urban sites worldwide, 2) provide not only linear but non-linear trends based on a GAM model, 3) correct for the effect of changes in meteorology, and 4) utilize solely consistent time series from the TROPOMI instrument.
Through this work, we aim to demonstrate the potential of TROPOMI for consistent, global monitoring of urban air pollution levels, offering new insights into trends in air quality across a wide range of urban environments.
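The areal-weighting extraction with its 80% minimum-coverage test can be sketched as follows (a simplified single-polygon illustration; the function name and values are ours, not from the study's code):

```python
import numpy as np

def areal_weighted_mean(values, overlap_areas, min_coverage=0.8):
    """Area-weighted mean of satellite pixel values over an urban polygon.

    values        : pixel values (NaN where the retrieval is missing)
    overlap_areas : area of each pixel's overlap with the polygon
    Returns NaN when valid pixels cover less than `min_coverage` of the
    polygon, mirroring an 80% minimum-coverage threshold.
    """
    values = np.asarray(values, dtype=float)
    overlap_areas = np.asarray(overlap_areas, dtype=float)
    valid = ~np.isnan(values)
    coverage = overlap_areas[valid].sum() / overlap_areas.sum()
    if coverage < min_coverage:
        return np.nan
    return np.average(values[valid], weights=overlap_areas[valid])

# Three pixels overlap the polygon; one retrieval is missing.
v = [1.0e15, np.nan, 3.0e15]
a = [2.0, 0.5, 2.5]                   # overlap areas (arbitrary units)
result = areal_weighted_mean(v, a)    # coverage = 4.5/5.0 = 0.9 -> kept
```

Rejecting low-coverage days keeps the daily time series from being biased by a handful of pixels on cloudy scenes, which matters when the residual is fed into a trend model.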

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: 15 Years of Dust Variability in the Mediterranean, Sahara and the Middle East Seen by Different Satellites

Authors: Dominika Leskow-Czyzewska, Antonis Gkikas, Stavros-Andreas Logothetis, Anu-Maija Sundström, Simone Mantovani, Federico Fierli
Affiliations: EUMETSAT, Academy of Athens, University of Patras, FMI, MEEO S.r.l.
Mineral dust is more than dirt – it’s a powerful force shaping climate, ecosystems, and health. By interacting with radiation and clouds, it alters weather patterns and precipitation. Dust deposition nourishes ecosystems, while its transport impacts air quality, poses health risks, and disrupts visibility, with far-reaching societal effects. Many studies concerning Aerosol Optical Depth (AOD) measurements from space base their results on the MODerate resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites, which has been delivering data for over 20 years. However, with MODIS nearing its end of life, there is an emerging need to seek other datasets consistent with the results seen so far. Dust load fluctuations are influenced by atmospheric and terrestrial processes operating on various time scales, making them inherently non-stationary over time. Consequently, directly comparing results from studies representing different time periods may not be reliable. To evaluate the ability of satellite products to accurately capture trends in mineral particle loads, it is essential to establish a standardized long-term observation period, ideally spanning at least a decade. Such a framework facilitates a thorough analysis of satellite measurements, accounting for variations in observational techniques, sampling intervals, and the assumptions underlying the respective retrieval algorithms. We present an analysis based on data from several satellites: NASA’s Terra and Aqua, EUMETSAT’s MetOp, and Copernicus Sentinel-3 operated by EUMETSAT. For MODIS, we used the MIDAS dataset (Gkikas et al. 2021, 2022), which combines AOD with dust fractions from NASA’s Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis.
From MetOp, we examined data from the Polar Multi-Sensor Aerosol Optical Properties product (PMAp), which combines information from the Global Ozone Monitoring Experiment-2 (GOME-2), the Infrared Atmospheric Sounding Interferometer (IASI) and the Advanced Very High Resolution Radiometer (AVHRR), as well as from IASI itself. Finally, we included the AOD Near-Real Time product from the Sea and Land Surface Temperature Radiometer (SLSTR) onboard Sentinel-3. To complement the results, we compared them to ground-based measurements from the AErosol RObotic NETwork (AERONET). The analysis covered the years 2007-2023 and focused on the regions frequently affected by dust: the Mediterranean, the Sahara (including parts of the Atlantic Ocean) and the Middle East.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Ammonium nitrate in the lower stratosphere: Observations of Asian Monsoon outflow by the CAIRT airborne demonstrator GLORIA during the PHILEAS campaign 2023

Authors: Sören Johansson, Jörn Ungermann, Peter Preusse, Michael Höpfner, Felix Friedl-Vallon, Norbert Glatthor, Anne Kleinert, Erik Kretschmer, Tom Neubert, Markus Retzlaff, Martin Riese, Björn-Martin Sinnhuber, Franziska Trinkl, Stefan Versick, Gerald Wetzel, The GLORIA Team
Affiliations: (1) Institute of Meteorology and Climate Research, IMK-ASF, Karlsruhe Institute of Technology, (2) Institute of Climate and Energy Systems - Stratosphere (ICE-4), Forschungszentrum Jülich, (3) Central Institute of Engineering, Electronics and Analytics - Electronic Systems (ZEA-2), Forschungszentrum Jülich
We present trace gas and aerosol measurements obtained by the airborne infrared imaging limb sounder GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere), which was operated onboard HALO (High Altitude and Long Range Research Aircraft) during the PHILEAS campaign (Probing High Latitude Export of air from the Asian Summer Monsoon; August-September 2023). The GLORIA instrument is an airborne demonstrator for the ESA Earth Explorer 11 candidate mission CAIRT (Changing-Atmosphere Infra-Red Tomography Explorer). We measured outflow from the Asian Monsoon above the North Pacific and the Mediterranean, as well as pollution plumes from biomass burning events in North America. In this contribution, we present retrieval results of ammonia (NH₃), solid ammonium nitrate and other pollution trace gases (e.g. PAN) as two-dimensional distributions with high vertical resolution, derived from GLORIA observations in the UTLS (Upper Troposphere Lower Stratosphere). A highlight of the GLORIA observations is the unprecedented detection of significant amounts of solid ammonium nitrate, connected to the Asian Monsoon, in the lower stratosphere outside the Asian Monsoon Anticyclone. Measurements from a previous airborne campaign within the Asian Monsoon (StratoClim 2017) showed large enhancements of NH₃ (a precursor of ammonium nitrate) and solid ammonium nitrate in the Asian Monsoon upper troposphere. In our analysis, we compare measurements with atmospheric models to examine air mass origins and estimate transport pathways. In particular, we use artificial tracers calculated by the ICON-ART (ICOsahedral Nonhydrostatic - Aerosol and Reactive Trace gases) model.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: CSA SATELLITE EARTH OBSERVATION MISSIONS AND COLLABORATIONS

Authors: Marcus Dejmek, Taryn Tomlinson
Affiliations: Canadian Space Agency
Satellite Earth Observation missions and data analysis are critical elements of each segment of the Earth Observation value chain. New atmospheric composition observations lead to new knowledge of the physical, chemical, and biological processes of the Earth system. When used to improve Canadian Earth System models, environmental prediction and climate projection services to Canadians become better, more accurate and more robust. For over two decades, the Canadian Space Agency (CSA) has funded EO missions and scientific research to advance satellite and instrument operations, data product development and validation, and data analysis. This presentation will focus on current CSA atmospheric composition missions in the Earth system (SCISAT, OSIRIS on Odin, and MOPITT on Terra), their new data products and validation, and recent scientific advances they have enabled, along with new research results from satellite data analysis projects. The talk will also highlight academic-government collaborations, the Earth system models they advance, and the international collaborations enabled. SCISAT was launched in 2003 with the objectives of measuring atmospheric profiles of ozone and ozone-depleting substances and, with its high-inclination orbit, offering frequent sampling of northern latitudes. With a planned duration of two years and seven substances to be measured, SCISAT has greatly exceeded expectations, having celebrated its 20th anniversary last year and still measuring 70 substances to this day. Thanks to the scientific and calibration/validation teams’ continued efforts over the last decades, SCISAT’s two instruments have provided an uninterrupted series of atmospheric composition profiles. The Swedish satellite Odin was launched in 2001 into a low Earth orbit, with the Canadian OSIRIS and the SMR instruments on board.
OSIRIS measures limb-scattered sunlight in the 280–800 nm spectral region, and by its scanning motion, produces profiles of various trace gases from the upper troposphere to the lower mesosphere. Its planned life expectancy was two years, and 23 years later, it is still operating.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Innovative surface reflectance retrieval from UVN satellites

Authors: Pascal Hedelt, Dr. K.-P. Heue, Dr. R. Lutz, Dr. V. Molina García, F. Romahn, Dr. D. Loyola
Affiliations: DLR IMF-ATP
Knowledge of the surface reflectance is essential for the retrieval of atmospheric trace gases and cloud and aerosol properties from satellite sensors. In atmospheric trace gas retrievals, the surface reflectance is required for the conversion of the observed trace gas slant column to the total vertical column by means of a so-called air-mass factor. Although climatological databases based on UV satellite data exist (e.g. OMI, GOME-2), these have a low spatial resolution and are not appropriate for current and future UV satellite missions like Sentinel-5P/TROPOMI or MTG-S/UVN (Sentinel-4) with their significantly higher spatial and spectral resolution. Current climatologies used in operational retrievals provide the Lambertian Equivalent Reflectivity (LER, e.g. OMI, GOME-2, TROPOMI, see [1,2,3]) and Directional LER (DLER, e.g. GOME-2, TROPOMI, see [3,4]) for selected wavelengths in the UV-VIS range and are based on the so-called minimum LER approach, i.e. they determine the minimum surface reflectance over the measurement timeframe. We present here a new climatology based on the GE_LER (Geometry-dependent Effective Lambertian Equivalent Reflectivity, see [5]), which retrieves the GE_LER from UVN satellites over a wavelength range (i.e. a fitting window), as opposed to the single-wavelength approaches of existing climatologies. In this way, dedicated surface reflectivities for specific trace gas and cloud/aerosol parameter retrieval fitting windows can be determined. We have trained a neural network with simulated UV/VIS/NIR spectra calculated with (V)LIDORT (see [6]). The same radiative transfer model is also used for the generation of air-mass factors in the operational TROPOMI retrievals of O₃, SO₂, and HCHO as well as in the cloud parameter retrieval. GE_LER climatologies for five main trace gases and for cloud and aerosol parameters retrieved by TROPOMI will be presented.
[1] Kleipool (2010), OMI/Aura Surface Reflectance Climatology L3 Global Gridded 0.5 degree x 0.5 degree V3, Greenbelt, MD, USA, Goddard Earth Sciences Data and Information Services Center (GES DISC), Accessed: [Data Access Date], doi:10.5067/Aura/OMI/DATA3006
[2] Tilstra et al. (2017), Surface reflectivity climatologies from UV to NIR determined from Earth observations by GOME-2 and SCIAMACHY, J. Geophys. Res. Atmos., 122, 4084-4111, doi:10.1002/2016JD025940
[3] Tilstra et al. (2021), Directionally dependent Lambertian-equivalent reflectivity (DLER) of the Earth's surface measured by the GOME-2 satellite instruments, Atmos. Meas. Tech., 14, 4219-4238, doi:10.5194/amt-14-4219-2021
[4] Tilstra et al. (2024), A directional surface reflectance climatology determined from TROPOMI observations, Atmos. Meas. Tech., 17, 2235-2256, doi:10.5194/amt-17-2235-2024
[5] Loyola et al. (2020), Applying FP_ILM to the retrieval of geometry-dependent effective Lambertian equivalent reflectivity (GE_LER) daily maps from UVN satellite measurements, Atmos. Meas. Tech., 13, 985-999, doi:10.5194/amt-13-985-2020
[6] Spurr et al. (2008), LIDORT and VLIDORT: Linearized pseudo-spherical scalar and vector discrete ordinate radiative transfer models for use in remote sensing retrieval problems, Light Scattering Reviews, Volume 3, ed. A. Kokhanovsky, Springer
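The slant-to-vertical conversion referred to in the abstract is, at its core, a single division; a minimal sketch with invented numbers:

```python
# Minimal sketch of the slant-to-vertical conversion (invented values):
# the air-mass factor (AMF) encodes the viewing geometry and, crucially,
# the assumed surface reflectivity, which is where a GE_LER climatology
# enters the retrieval.
def vertical_column(slant_column, amf):
    """Convert a fitted slant column density to a vertical column density."""
    return slant_column / amf

vcd = vertical_column(slant_column=2.4e17, amf=1.6)  # molecules per cm^2
```

Because the AMF scales the final product directly, a relative error in the assumed surface reflectivity propagates almost one-to-one into the vertical column, which is why a resolution- and window-matched reflectivity climatology matters.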

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Mutual Gap-Filling of Sentinel-5p Datasets

Authors: Giorgia De Moliner, Alessandro D'Ausilio, Giovanni Lonati, Camillo Silibello
Affiliations: Politecnico Di Milano, ARIANET-SUEZ
Sentinel-4 (S4), soon to be launched under the European Union’s Copernicus program, marks a pivotal advancement in air quality monitoring and atmospheric composition analysis. Operating from a geostationary orbit, S4 will provide hourly measurements across the European domain, complementing the daily global observations of Sentinel-5p (S5p), which has been operating in a sun-synchronous orbit since 2018. Together, these satellites will offer consistent, wide-scale, and cost-effective observations, delivering critical Earth Observation (EO) datasets to inform decision-making across various sectors, from public health to climate mitigation. Despite significant advancements in remote sensing, EO datasets still suffer from complex, large-scale missing values that vary across chemical species. Frequently, the incompleteness of the data hinders its usability and reduces its reliability. These gaps stem from sensor limitations, thick clouds, snow/ice, or low-quality data caused by the challenges of retrieving column totals from spectral information. Missing data hinder trend analyses, introduce biases into observational records, and obscure physical dependencies among trace gases. Moreover, when EO data are used as predictors in Machine Learning (ML) applications, gaps can invalidate other predictors, significantly reducing the size of training datasets. High-quality gap-filling is therefore essential to unlock the full potential of EO datasets, ensuring they deliver valuable, reliable, and usable information products. To address this challenge, an advanced ML-based gap-filling methodology designed to address gaps in EOs of Land–Climate Interaction, CLIMFILL (Bessenbacher et al., 2022), was selected, and its performance in reconstructing Sentinel-5p TROPOMI Level 2 datasets was evaluated with respect to naïve and state-of-the-art spatio-temporal interpolation techniques.
CLIMFILL employs multiple Random Forest regressors and leverages temporal autocorrelation, spatial neighborhood information, and cross-variable dependencies in a truly mutual gap-filling framework. This design exploits the non-overlapping missingness patterns of trace gases and effectively carries out a multivariate analysis, an innovative approach in the field of air pollution modeling. By capturing complex non-linear relationships, CLIMFILL effectively addresses non-random missingness, hinting at the feasibility of correlating pollutants that lack strong physical linkages. The study also examined whether incorporating chemical transport model concentration maps as predictors would enhance the gap-filling process. Additionally, the study analyzed the occurrence of missing data in Sentinel-5p products, identifying patterns and potential causes, including whether clouds are the primary contributors to missing values.
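The mutual gap-filling idea, i.e. modeling each variable from the others with Random Forests and iterating, can be sketched with scikit-learn's IterativeImputer (a stand-in for the actual CLIMFILL framework, which additionally exploits spatial-neighborhood and temporal features; the toy data below are ours):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Toy multivariate dataset: three correlated "trace gas" columns with
# partially non-overlapping missingness, standing in for gridded S5p fields.
n = 300
base = rng.normal(size=n)
data = np.column_stack([base,
                        0.8 * base + 0.1 * rng.normal(size=n),
                        -0.5 * base + 0.1 * rng.normal(size=n)])
data_gappy = data.copy()
data_gappy[rng.random(data.shape) < 0.2] = np.nan

# Iterative imputation with a Random Forest regressor: each variable is
# repeatedly modeled from the others, exploiting cross-variable dependencies.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=20, random_state=0),
    max_iter=5, random_state=0)
filled = imputer.fit_transform(data_gappy)
```

Because the variables are strongly cross-correlated, each gap is reconstructed from the co-located values of the other variables rather than by purely spatial interpolation.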

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: An Extensive, Consistent Time Series of Volcanic and Anthropogenic Sulfur Dioxide (SO2) from IASI

Authors: Bruno Franco, Lieven Clarisse, Nicolas Theys, Lorenzo Fabris, Antoine Comiant, Juliette Hadji-Lazaro, Cathy Clerbaux, Pierre Coheur
Affiliations: Université Libre De Bruxelles (ULB), Royal Belgian Institute for Space Aeronomy (BIRA-IASB), LATMOS/IPSL, Sorbonne Université, UVSQ, CNRS
Atmospheric sulfur dioxide (SO2) originates both from anthropogenic activities, such as fossil fuel combustion and smelting, and from natural sources such as eruptions and quiescent degassing of volcanoes. Once in the atmosphere, SO2 rapidly oxidizes into sulfuric acid and sulfate aerosols, contributing to air pollution at local and regional scales. The altitude of SO2 injection critically influences its lifetime and its environmental and climatic effects. In the low- and mid-troposphere, SO2 leads to acid rain and cloud modification, harming air quality and ecosystems. In the stratosphere, it forms sulfate aerosols that scatter solar radiation, cooling the Earth's surface and potentially altering global climate patterns for months to years. Nadir satellite sounders operating in the thermal infrared (TIR) and ultraviolet-visible (UV-Vis) spectral ranges have been providing valuable measurements of SO2 vertical column densities (VCDs) for years. In particular, the Infrared Atmospheric Sounding Interferometer (IASI), aboard the Metop satellites, delivers bi-daily global observations and offers a reliable long-term dataset currently spanning 17 years (since 2007). We present a general algorithm for fast retrieval of both SO2 plume heights and VCDs from IASI measurements. Compared to previous works, this algorithm achieves higher accuracy of SO2 height retrieval for dense, saturated volcanic plumes, and enhanced sensitivity of the column product, especially in the lower troposphere and for weak SO2 abundances. We illustrate the robustness of the SO2 retrievals on case studies of volcanic and anthropogenic plumes, with comparisons against TROPOMI UV-Vis SO2 measurements, CALIPSO plume altitude data, and findings in the literature. Efforts were made to ensure product consistency across the full 2007–2024 period and between the three IASI instruments onboard Metop-A, -B, and -C.
The resulting dataset offers an extensive, coherent record of SO2 altitude and VCDs, with the potential of supporting diverse research and air quality monitoring applications. We present the entire 17-year IASI time series of SO2, focusing on major eruptions, the most active volcanoes, and long-term trends over large anthropogenic SO2 sources.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Synergistic Approach for Discriminating Aerosol Chemical Composition Profiles from Airborne Lidar and Polarimeter Measurements

Authors: Abou Bakr Merdji, Professor Juan Cuesta, Mr Fazzal Qayyum, PhD Anton Lopatin, Professor Oleg Dubovik, Senior Researcher Richard Ferrare, Senior Researcher Sharon Burton
Affiliations: Univ Paris Est Creteil and Université Paris Cité, CNRS, LISA, GRASP SAS, Lezennes, 59260, France, Laboratoire d’Optique Atmosphérique, UMR 8518, NASA Langley Research Center
Understanding the vertical distribution of aerosol chemical species is vital for assessing their impact on climate, air quality, and human health. Airborne measurements, with high-resolution vertical profiles, capture local and regional variations often missed by ground-based or satellite observations. These measurements are key for validating retrieval methods and improving chemistry-transport models. Airborne campaigns equipped with lidars and polarimeters provide valuable insights into aerosol properties, aiding in the refinement of retrieval algorithms and enhancing the accuracy of aerosol characterization. Under the forthcoming Atmosphere Observing System (AOS) initiative, a collaborative effort involving the National Aeronautics and Space Administration (NASA), the Centre national d'études spatiales (CNES), the Japan Aerospace Exploration Agency (JAXA), the Italian Space Agency (ASI) and the Canadian Space Agency (CSA), a tandem of advanced spaceborne lidar and multi-angular polarimeter instruments is set for deployment into a polar orbit, aiming to enhance aerosol and cloud observations. In this framework, we have developed a new synergistic approach for retrieving vertical profiles of aerosol species simultaneously present in the atmospheric column, by jointly using measurements from lidar and polarimeter instruments. We call this method Synergistic Aerosol Species Profiling using Lidar and Polarimeter Measurements (SASPro); it is based on the GRASP (Generalized Retrieval of Atmosphere and Surface Properties) retrieval framework. Recently, this method was applied to synthetic lidar/polarimeter measurements. The current work presents the first application of the SASPro/GRASP methodology to real airborne measurements.
The SASPro retrieval uses the multispectral lidar capabilities to distinguish between the profiles of aerosol modes: a fine mode containing black carbon, brown carbon, inorganic salts, and aerosol water content; a desert dust coarse mode composed of iron oxide and quartz; and a second coarse mode of sea salt and aerosol water content. By providing a statistically optimized fit within a continuous solution space, the method offers detailed insights into the vertical distribution of aerosol species and bridges observational data to aerosol chemical composition. The latter is important both for understanding the chemical evolution of particle species and for numerical simulations performed with chemistry-transport models. In this work, the SASPro methodology is applied to airborne measurements collected by two advanced instruments onboard the same aircraft: NASA Langley Research Center’s second-generation High Spectral Resolution Lidar (HSRL-2) and the GISS Research Scanning Polarimeter (RSP). The HSRL-2 provides high-precision active remote sensing of aerosol and cloud properties, while the RSP, a passive, downward-facing polarimeter, measures total radiance and linear polarization across nine spectral bands spanning the visible/near-infrared (VNIR) to shortwave infrared (SWIR) regions. By combining these complementary datasets, the results will show the capability to discriminate vertical profiles of aerosol species and to retrieve their type-specific optical and microphysical properties, setting the stage for future applications in upcoming spaceborne missions like AOS. Keywords: AEROSOL OBSERVING SYSTEM; LIDAR; POLARIMETER; AEROSOL PROFILE; AEROSOL CONCENTRATION; GRASP

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: CO hotspot detection using IASI/Metop, in preparation for IRS/MTG

Authors: Antoine Comiant, Cathy Clerbaux, Maya George, Selviga Sinnathamby, Bruno Franco, Lieven Clarisse, Pierre. Coheur
Affiliations: LATMOS/IPSL, Sorbonne Université, UVSQ, CNRS, Université libre de Bruxelles (ULB), Brussels Laboratory of the Universe (BLU-ULB) Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing (SQUARES)
Carbon monoxide (CO) is present in trace quantities in the atmosphere but plays a pivotal role in atmospheric chemistry because it is the dominant sink of the hydroxyl radical (OH). It controls the oxidizing power of the troposphere, and hence influences the lifetime of most atmospheric pollutants and greenhouse gases. CO is produced by the incomplete combustion of biomass and fossil fuels, and by the atmospheric oxidation of hydrocarbons. CO can be accurately measured remotely on regional and global scales with satellite sounders. As its lifetime (ranging from a few weeks to a few months) is relatively long, CO is well mixed in the atmosphere, making it difficult to identify CO emission hotspots from space. Since 2007, the IASI instruments aboard the Metop-A, Metop-B and Metop-C satellites have delivered a long-term record of CO concentrations, derived from nadir radiances recorded in the thermal infrared spectral range. Thanks to the recent release of a CO Climate Data Record (CDR) by EUMETSAT, this dataset is now temporally homogeneous. As with other nadir-looking instruments, these observations are known to have limited sensitivity close to the surface, especially during nighttime, owing to the low thermal contrast (i.e. the temperature difference between the surface and the air above it). In this study we have applied oversampling and supersampling methods, originally developed to track NH3 and other reactive gases, to the 17-year CO record to identify global emission hotspots over cities and non-urban areas. Bottom-up emission inventories and other publicly available databases are used to identify the largest point-source emitters. A seasonal analysis was performed, allowing a dedicated treatment of the regions and periods affected by biomass burning. This work was done in preparation for geostationary observations, in particular IRS, to be launched this summer onboard MTG.
With its more frequent overpasses over Europe (every 30 minutes) and significantly higher spatial resolution compared to IASI, this mission will be a central element in the monitoring of local sources.
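The oversampling idea, i.e. averaging many coarse footprints onto a grid finer than the native resolution so that persistent sources sharpen over time, can be sketched as follows (a simplified circular-footprint illustration with synthetic data, not the operational algorithm):

```python
import numpy as np

def oversample(lons, lats, values, radius, grid_lon, grid_lat):
    """Average many coarse observations onto a fine regular grid.

    Each grid cell receives the mean of all observations whose circular
    footprint (radius in degrees) covers it. Averaging years of
    overlapping footprints at fine resolution sharpens persistent sources.
    """
    glon, glat = np.meshgrid(grid_lon, grid_lat)
    total = np.zeros_like(glon)
    count = np.zeros_like(glon)
    for lo, la, v in zip(lons, lats, values):
        inside = (glon - lo) ** 2 + (glat - la) ** 2 <= radius ** 2
        total[inside] += v
        count[inside] += 1
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Synthetic demo: a point source at the origin sampled by coarse footprints
rng = np.random.default_rng(1)
obs_lon = rng.uniform(-1, 1, 400)
obs_lat = rng.uniform(-1, 1, 400)
obs_val = 1 + 5 * np.exp(-(obs_lon**2 + obs_lat**2) / 0.1)
grid = np.linspace(-1, 1, 21)
result = oversample(obs_lon, obs_lat, obs_val, 0.3, grid, grid)
# result peaks near the grid cell closest to the source at (0, 0)
```

The key point is that no single observation resolves the source, but the long-term average on the fine grid does, which is what makes the technique attractive for a well-mixed gas like CO.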

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Determination of SO₂ fluxes from Mt. Etna exploiting S5P-TROPOMI and ground-based UV camera observations

Authors: Maddalena Dozzo, Alessandro Aiuppa, Steffen Beirle, Marcello Bitetto, Dario Delle Donne, Giovanni Lo Bue Trisciuzzi, Thomas Wagner, Simon Warnach, Gaetana Ganci
Affiliations: Istituto Nazionale Di Geofisica e Vulcanologia, Università degli Studi di Palermo, Max Planck Institute for Chemistry, Istituto Nazionale Di Geofisica e Vulcanologia
Sulfur dioxide (SO₂) is sourced by degassing magma in the shallow crust; hence its monitoring provides information on the rates of magma ascent in the feeding conduit and on the style and intensity of eruptions, ultimately contributing to volcano monitoring and hazard assessment. Here, we retrieve time series of SO₂ fluxes exploiting the TROPOMI instrument, launched in 2017 onboard the Sentinel-5 Precursor satellite, from which atmospheric column densities of sulfur dioxide and other gases can be derived with an unprecedented spatial resolution (pixel size of 5.5 km × 3.5 km at nadir) and daily revisit time. SO₂ fluxes are determined in two ways: by the so-called divergence method and by the plume-traverse method. In both cases, wind fields at the approximate altitude of the plume are combined with the satellite SO₂ data. In addition to the sulfur dioxide column densities, we also use information on clouds (fraction and altitude) and aerosols (UV absorbing aerosol index) from TROPOMI observations. These data are used to constrain the radiative transfer simulations used for the conversion to sulfur dioxide vertical column densities and the related uncertainties. For validation, we compare our results with ground-based measurements obtained from a permanent UV camera system. The Mount Etna open-conduit volcano (Sicily, Italy) is taken as a case study, considering an area large enough to contain the major part of the plume. Etna consistently exhibits persistent degassing, primarily maintained by the passive release of gas from shallow convecting magma, emitting SO₂ at fluxes typically ranging between 500 and 5000 t/day. The year considered is 2021, when the volcano was affected by more than 60 paroxysmal events, sometimes several in a single day, producing high eruptive columns and large amounts of sulfur dioxide. If extended to other volcanoes, this work can contribute to the response to unrest and eruptions at volcanoes worldwide.
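As a rough illustration of the plume-traverse method mentioned above (not the authors' implementation), the flux through a transect drawn across the plume is the along-transect sum of column density times the wind component perpendicular to the transect:

```python
import numpy as np

def traverse_flux(vcd, wind_perp, ds):
    """SO2 flux through a transect across the plume.

    vcd       : column densities along the transect [kg/m^2]
    wind_perp : wind component perpendicular to the transect [m/s]
    ds        : along-transect pixel length [m]
    Returns the flux in kg/s (multiply by 86.4 for t/day).
    """
    return float(np.sum(vcd * wind_perp * ds))
```

The divergence method instead computes the divergence of the 2-D flux field (column density times horizontal wind) over the whole scene and integrates it around the source.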

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: The Atmospheric Chemistry Experiment (ACE): Recent Validation and Science Results

Authors: Kaley A. Walker, Paul S. Jeffery, Laura N. Saunders, Patrick E. Sheese, Jiansheng Zou
Affiliations: University Of Toronto
In February 2025, the Canadian-led Atmospheric Chemistry Experiment (ACE) mission completed its twenty-first year of measurements on board the SCISAT satellite. The long lifetime of ACE provides a valuable time series of composition measurements that contribute to our understanding of ozone recovery, climate change and pollutant emissions. These profiles of atmospheric trace gases and aerosols provide altitude-resolved data that are necessary for understanding processes that occur at specific altitudes or over limited vertical length scales. The SCISAT/ACE mission uses infrared and UV-visible spectroscopy to make its solar occultation measurements. The primary instrument on board, the ACE Fourier Transform Spectrometer (ACE-FTS), is a high-resolution (0.02 cm⁻¹) FTS operating between 750 and 4400 cm⁻¹. It also contains two filtered imagers (0.525 and 1.02 microns) to measure atmospheric extinction due to clouds and aerosols. The second instrument is a dual UV-visible-NIR spectrophotometer called ACE-MAESTRO (Measurements of Aerosol Extinction in the Stratosphere and Troposphere Retrieved by Occultation), designed to extend the ACE wavelength coverage to the 280-1030 nm spectral region. From these measurements, altitude profiles of atmospheric trace gas species, aerosol extinction, temperature and pressure are retrieved. The 650 km altitude, 74° inclination circular orbit of SCISAT provides global measurement coverage over a latitude range of ~85°N to ~85°S, with a focus on the Arctic and Antarctic regions. The ACE data set can be combined with other data sets to provide the climate data records required for long-term monitoring of ozone and related species and for initialization and testing of chemistry-climate models. In order to do this, it is essential to quantify the biases between the different instruments and investigate their changes over the operational time period.
Validation and comparison studies are a necessary component of this data assessment process. Highlights of validation and science results from the ACE mission will be presented in this paper along with mission and instrument status.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: A new FRESCO two-band cloud product to improve the TROPOMI nitrogen dioxide retrieval

Authors: Dr. Henk Eskes, Jos van Geffen, Dr Maarten Sneep, Dr Pepijn Veefkind
Affiliations: KNMI
The Sentinel-5P TROPOMI instrument provides unique observations of atmospheric composition at a high spatial resolution of about 5 km with near-daily global coverage. The Sentinel-5P nitrogen dioxide retrieval algorithm relies on the FRESCO-S (Fast Retrieval Scheme for Clouds from the Oxygen A-band – Sentinel) cloud retrieval algorithm for information on cloud altitude. The November 2024 upgrade of the TROPOMI NO2 product to version 2.8.0 introduced a major revision of the FRESCO algorithm (two-band FRESCO). Traditionally, FRESCO uses three windows (continuum, deep and weak): one covering wavelengths outside the O2-A band, one in the strongly absorbing O2-A R-branch, and one in the weaker absorbing P-branch. The performance of the pressure retrieval can be tested by comparing the retrieved scene pressures with the surface pressure derived from orography combined with weather-prediction pressure data. For FRESCO three-band retrievals this check was not very satisfactory. A FRESCO two-band retrieval, obtained by removing the deep window (which is sensitive to calibration but not very sensitive to surface/cloud pressure changes), leads to a much-improved retrieval of the surface pressure for cloud-free scenes, consistently for land, sea and sun-glint cases. In our contribution we will discuss the differences between the FRESCO implemented in versions 2.2 to 2.7.1 and the two-band FRESCO implemented in v2.8.0. The change has a major positive impact on the NO2 retrievals over snow-covered scenes (mid- and high-latitude wintertime). The changes in NO2 for other (snow-free) seasons over polluted regions will also be presented.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: A Combined Approach Using Convolutional Neural Networks for Land-Sea Mask Extraction from Hyperspectral Data

Authors: Claudio Mammone, Florian Ehmann, Elisa Lohfink, Stephan Stock, Steven Hadesty, Johannes
Affiliations: Telespazio Germany Gmbh, Eumetsat
EUMETSAT’s Meteosat Third Generation (MTG) programme is about to enter a new phase, with the upcoming launch of the MTG-S mission. One of the instruments on-board will be the InfraRed Sounder (IRS), a hyperspectral instrument in the mid-wave infrared (MWIR) and long-wave infrared (LWIR) bands. IRS will provide numerical weather prediction models with hyperspectral information about water vapour distributions that previously were not available at these spatial, spectral, and temporal resolutions. This richness of information in hyperspectral data products does, however, pose a challenge in the processing and quality assurance of said data products. EUMETSAT, supported by Telespazio Germany, is developing a suite of multi-mission frameworks integrated with mission-specific processing modules to support the commissioning and the operations of its missions. The PIQMICS framework (Performance, Image Quality, MonItoring and Characterisation System) takes care of image quality and navigation accuracy assessment. IDEAL (indepenDEnt AbsoLute geometric quality tool), a mission specific module to assess the registration performance, relies on registration of geometric features of the data, such as coastlines, to a known reference. Extracting these geometric features from hyperspectral data can prove difficult due to the usually high spectral dimension and limited information available in individual spectral channels and narrow spectral sub-ranges. EUMETSAT and Telespazio Germany are working together to overcome these challenges, each leveraging their specific expertise. EUMETSAT is refining an approach to reduce the dimensionality of the available data using techniques such as a principal component analysis to create a dimensionality-reduced subset of data that more strongly exhibits desired geometric features. Subsequently a processing chain developed by EUMETSAT segments the image into, e.g., a land-sea mask for image registration. 
So far, this registration algorithm is delivering very promising results on simulated hyperspectral data, but misclassifications in the generation of the land-sea mask can be a bottleneck that prevent the algorithm from reaching its full potential. This is where Telespazio Germany and Artificial Intelligence enter the picture. AI models, Convolutional Neural Networks (CNNs) in particular, are a well-established and powerful approach to image classification and image segmentation. As such, CNN-based algorithms form a promising improvement to geometric quality analyses by handling the extraction of geometric features. Moreover, in a suitable training loop, CNN-based models can easily learn to make use of multiple input channels and make better use of the full available spectrum. Within the scope of this activity, we create an AI model to generate land-sea masks from hyperspectral. This alleviates the bottleneck posed by the limited feature extraction performance in current approaches. The project involves (i) study of available algorithms and model architectures (ii) preparation of reference data and pipelines for data selection and augmentation for AI training loops (iii) training and evaluation of candidate algorithms and (iv) packaging of the developed algorithm into a deployment-ready format. Although currently tested only on simulated IRS data, this combined approach should significantly increase the consistency and confidence to which we can verify the geolocation of hyperspectral image data, without completely relinquishing control to a “black box” application. Moreover, the CNN-based approach is extensible to further potential applications using both supervised and unsupervised learning with, e.g., the extraction of cloud masks or occultation masks being a natural extension.
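The dimensionality-reduction step described above (projecting a high-dimensional hyperspectral cube onto a few principal components so that geometric features such as coastlines concentrate in a handful of images) can be sketched as a generic PCA. This is a minimal sketch of the standard technique, not EUMETSAT's processing chain:

```python
import numpy as np

def pca_reduce(cube, n_components=3):
    """Project a hyperspectral cube (rows, cols, bands) onto its
    leading principal components."""
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands).astype(float)
    flat -= flat.mean(axis=0)
    # Eigendecomposition of the band-covariance matrix
    cov = np.cov(flat, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Keep the components with the largest variance
    order = np.argsort(eigvals)[::-1][:n_components]
    return (flat @ eigvecs[:, order]).reshape(rows, cols, n_components)
```

A segmentation step (threshold, clustering, or the CNN discussed above) would then operate on the reduced cube instead of the full spectral dimension.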

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Comparison Between Metop-A RO and MW/IR Water Vapour Data Sets: Systematic Differences and Uncertainties

Authors: Vinícius Ludwig Barbosa, Johannes Kristoffer Nielsen, Kent Bækgaard Lauritsen
Affiliations: Danish Meteorological Institute
For more than two decades, GNSS Radio Occultation (GNSS-RO) has provided high-quality and valuable atmospheric data with applications in Numerical Weather Prediction (NWP) and in climate monitoring studies. Among the different retrieved geophysical parameters, GNSS-RO has the capability of providing vertically resolved profiles of water vapour (WV) up to the tropopause. Water vapour is the most important greenhouse gas and an essential climate variable (ECV), according to the Global Climate Observing System (GCOS), since it shapes the global environment and provides the conditions needed for a livable planet. Therefore, it is essential to understand the WV distribution and its dynamics, such as cloud formation and precipitation, to monitor and detect changes on regional and global scales. Such tasks require trustworthy and long-spanning climate data records (CDRs), such as the one provided by GNSS-RO. However, other sorts of Earth Observing Systems (EOS), e.g., those relying on microwave (MW) and infrared (IR) instruments, also produce their own WV CDRs. The fact that these CDRs are based on different remote sensing techniques means the data differ to some extent in terms of the representativeness of the measurements, i.e., the geometry of the acquisition and vertical footprint, and in inherent biases and uncertainties due to the instruments' characteristics and retrieval schemes. Since 2006, the Metop-A satellite has orbited the Earth and collected atmospheric data using the GNSS Receiver for Atmospheric Sounding (GRAS, GNSS-RO), the Infrared Atmospheric Sounding Interferometer (IASI, IR), the Advanced Microwave Sounding Unit (AMSU, MW) and the Microwave Humidity Sounder (MHS, MW). Their operation differs in frequency and geometry: GNSS-RO is a limb-viewing technique, whereas the MW/IR sensors are nadir-viewing.
Such a variety of instruments delivering independent daily WV data from the same platform offers favourable conditions for setting up an inter-comparison study of collocated WV profiles, in order to assess systematic differences between the RO and MW/IR data sets and estimate their uncertainties. In this study, the GRAS WV product processed by the Radio Occultation Meteorology Satellite Application Facility (ROM SAF) and composing the RO CDR-v1 is collocated with the Rutherford Appleton Laboratory (RAL) Infrared Microwave Sounding (IMS) data set. RO CDR-v1 is based on WV profiles retrieved using the 1D-Var method. The RAL IMS WV product is based on IASI, AMSU and MHS retrievals merged through an Optimal Estimation Method (OEM) scheme and a forward model (RTTOV). Matchups are defined by 300-km and 180-minute collocation criteria, and systematic differences are assessed taking into account aspects such as cloud coverage and land/sea location, since the RAL IMS data are sensitive to these factors. Further, RAL IMS averaging kernels (AK) are used to equalize the vertical-footprint difference with respect to the GNSS-RO WV profiles. Statistics are presented in different latitude bands and throughout the interval covered by the RAL IMS data set, v2.1, i.e., 2007 to 2016, and take GCOS Reference Upper-Air Network (GRUAN) radiosondes as the reference. The uncertainties and error covariances of the three data sets are estimated empirically using the Three-Cornered Hat (3CH) method. This assessment and characterization exercise of the RO CDR-v1 and RAL IMS WV products summarises the Danish Meteorological Institute (DMI) contribution to the ESA Water Vapour Climate Change Initiative (ESA WV_cci) project.
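The Three-Cornered Hat estimate mentioned above has a compact closed form when the errors of the three data sets are assumed mutually uncorrelated. A minimal sketch under that assumption (a generic 3CH, not the authors' implementation):

```python
import numpy as np

def three_cornered_hat(a, b, c):
    """Estimate the error variance of three collocated data sets.

    Assuming mutually uncorrelated errors, the pairwise difference
    variances combine as:
        var(a) = 0.5 * (Var(a-b) + Var(a-c) - Var(b-c))
    and cyclic permutations for var(b) and var(c).
    """
    vab, vac, vbc = np.var(a - b), np.var(a - c), np.var(b - c)
    return (0.5 * (vab + vac - vbc),
            0.5 * (vab + vbc - vac),
            0.5 * (vac + vbc - vab))
```

Note that the common "truth" signal cancels in every pairwise difference, which is why no reference data set is needed.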

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: The NH3 Daily Cycle Over Agricultural Areas in Asia Using Combined Satellite Measurements

Authors: Adriana Iorga, Jeremy Harrison, David Moore
Affiliations: University of Leicester, National Centre for Earth Observation
Ammonia (NH3) is one of the most important nitrogen pollutant species in the lower troposphere and is mostly produced from the use of synthetic fertilisers and manure spreading over agricultural areas. Most NH3 is emitted in Asia, particularly in the Indo-Gangetic Plain (IGP) in India and the North China Plain (NCP). Currently, NH3 emissions are subject to limited regulation in Asia and there are few ground monitoring stations taking NH3 measurements. However, satellite instruments offer global coverage for NH3 observations where ground stations are sparse. Observations of NH3 are essential for establishing environmental regulations for agricultural practices, as NH3 plays an important role in the formation of secondary aerosols and PM2.5 over cities, through transport from rural areas. Wet and dry deposition of NH3 on soils and water bodies is also detrimental to ecosystem biodiversity, as it leads to acidification of the environment. The remote sensing of NH3 presents numerous challenges because NH3 concentrations change rapidly over time and space due to the short lifetime of the gas, which ranges from a few hours up to a day. Studying the NH3 diurnal cycle has the potential to provide valuable information on its sources, surface exchange, deposition and transport processes, and the impact of weather and surface conditions on these; all are crucial for improving atmospheric models. Here, the NH3 daily cycles over the IGP and the NCP were studied for different seasons during 2022 using daytime and nighttime observations from the IASI and CrIS satellite instruments. The NH3 total column measurements were obtained using an optimal-estimation-based retrieval method developed at the University of Leicester, incorporating a fast NH3 detection method for selecting the a priori profile. Good NH3 signal and coverage were observed for both daytime and nighttime measurements. 
Different environmental factors, such as surface temperature, thermal contrast and planetary boundary layer height, were studied along with the NH3 concentrations. The satellite NH3 measurements were also compared with modelled NH3 data.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: C.03.06 - POSTER - Synergy of visible and infrared measurements for Earth surface applications

Current and future Copernicus Sentinels provide multiple observations of the Earth's surface in the visible and infrared domains. Sentinel-2 is the workhorse of land observations for Copernicus, providing multi-spectral high-resolution imagery in 13 channels ranging from the visible to the NIR and SWIR regions; Sentinel-3 observations from OLCI (21 channels in the visible and NIR) and SLSTR (9 channels from the visible to the thermal infrared, with dual-view capability) are crucial for operational oceanography, land and atmosphere monitoring. The Copernicus Expansion Missions CHIME and LSTM will further add to the pool of observations, covering the high spatial resolution requirements for visible-to-shortwave-infrared spectroscopy and temperature measurements, respectively, over land and coastal areas. The next-generation Sentinel-2 and Sentinel-3 missions will bring further performance improvements, with better spatial and spectral resolution (Sentinel-3 Next Gen Optical will be virtually hyperspectral in the visible and near infrared).
While observations from single instruments are at the basis of many operational processing chains that generate products for the Copernicus Services (such as ocean colour, SST, LST, vegetation indices, land cover products), the potential synergy between visible and infrared is still mostly unexploited (with the notable exception of some atmospheric applications that use VIS/IR data together in operational products, for example for aerosols).
This session will welcome all contributions that deal with the synergy of visible and infrared observations from current and future Copernicus missions to improve the characterisation of the Earth's surfaces (land, ocean, cryosphere) and prepare the user community for future synergistic operational products. Possible contributions include but are not limited to:
• Data merging for synergy: remapping, harmonization, fusion of data from different instruments and views.
• Use of dual-view observations and infrared channels to improve the atmospheric correction of ocean and land colour applications.
• Use of combined visible and infrared for improved surface classification.
• Operationalisation of new synergistic products for agriculture, water resources management, weather forecasts, climate studies (for instance, evapotranspiration).
• Thermal and optical remote sensing for urban management.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: How Can Satellite-Derived “Vegetation” Property Maps Be Compared From Local to Continental to Global Scales?

Authors: Stuart Phinn, Dr Peter Scarth, Alisa Starkey, Jason Dail
Affiliations: The University Of Queensland, Ozius Ltd, Cibo Labs
This paper addresses a growing challenge across multiple industries, levels of government, NGOs and research: how to objectively compare the growing multitude of satellite-derived maps of “vegetation” properties. This initial work is applied to Australia. In some areas, these data have a long history of established programs, with 40+ years of application for developing, implementing and revising vegetation management and associated activities at state level. Increasingly, these data are used as primary inputs to carbon accounting workflows, as well as certification of deforestation-free supply chains. The “vegetation” properties mapped are not consistent; the maps cover properties that include vegetation: extent (by [in]consistently defined vegetation types, e.g. woodland, forest, native, non-native); composition (by structure and/or floristics); canopy cover; fractional cover per pixel; foliage projective cover; leaf area index (LAI); height; and above-ground biomass. The aim of developing this approach is to provide a transparent, repeatable and fit-for-purpose comparison tool, able to be used for assessing which satellite-derived vegetation product (SDVP) is fit for purpose.
The basis of the comparison is as follows; each of the following is an attribute specified for each SDVP:
• Name
• Vegetation biophysical property estimated
• Spatial and temporal scales of SDVP
• Origin of SDVP method
• Required satellite and other data for SDVP method
• Summary of method used to produce SDVP
• Known assumptions of SDVP
• Open access reference for SDVP
• Validation status for SDVP
• Range of vegetation types SDVP developed + applied in
• Range of validated vegetation types SDVP developed + applied in
• Established limitations of SDVP
• Sources for SDVP methods and code
• Sources for SDVP data products
• Is the SDVP part of an ongoing program with review/revision
The SDVPs assessed in this trial work include statewide (4 products), Australia-wide (11 products) and global (4 products). The initial comparison shows that:
• The majority of SDVPs are focused on vegetation cover and extent.
• The majority of SDVPs operate at variable spatial and temporal scales, often ranging from daily to annual intervals and from local patches to continental extents.
• Most SDVPs have published and accessible algorithms, along with some form of validation. However, validation studies are not consistent and are challenging to compare.
• Limitations and assumptions of each SDVP, at general levels and in specific vegetation communities, are not clearly reported in all areas.
• The reported limitations and assumptions concern factors such as inaccuracies in specific vegetation types, scale-dependent discrepancies, sensor-specific errors, temporal inconsistencies, and variability in methodological approaches.
• Directions/priorities for future work emphasize the need for standardized validation protocols, improved reporting of limitations, enhancement of spatial resolution, integration of multi-source satellite data, and addressing inconsistencies across vegetation property definitions.
The initial results will be presented in a form able to be communicated via a web page in a consistent, secure and accessible location, potentially www.eoa.org.au, and published as a pre-print, with the intention to update it annually as part of JRSRP activities. Follow-on work is intended to progress this to global scales once the initial Australian work is complete.
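The attribute checklist above lends itself to a structured record so that products can be filtered and compared programmatically. A minimal sketch; the class and field names are paraphrases of the listed attributes, not the authors' schema:

```python
from dataclasses import dataclass, field

@dataclass
class SDVPRecord:
    """One row of the SDVP comparison table."""
    name: str
    property_estimated: str
    spatial_scale: str
    temporal_scale: str
    method_summary: str = ""
    validation_status: str = "unknown"
    known_limitations: list = field(default_factory=list)
    open_access_reference: str = ""

# Hypothetical example entry, not one of the 19 assessed products
fpc = SDVPRecord(
    name="Foliage Projective Cover (example)",
    property_estimated="foliage projective cover",
    spatial_scale="30 m, continental",
    temporal_scale="annual",
    validation_status="field-validated in woodlands",
)
```

A list of such records can then be sorted or filtered by any attribute, which is essentially the fit-for-purpose query the comparison tool aims to support.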

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Impact of Temporal Aggregation on High Resolution Evapotranspiration Estimation in Kenya Using Sentinel-2 and Sentinel-3

Authors: Mohammad Mirmazloumi, Vicente Burchard-Levine, Hamideh Nouri, Neda Abbasi, Abebe Demissie Chukalla, Harison Kipkulei, Gohar Ghazaryan
Affiliations: Leibniz Centre for Agricultural Landscape Research (ZALF), Tech4Agro, Institute of Agricultural Sciences (ICA), Spanish National Research Council (CSIC), UniSA-STEM, University of South Australia, Mawson Lakes, Adelaide, SA, 5095, Department of Crop Sciences, University of Göttingen, Von-Siebold-Straße 8, 37075, Department of Land and Water Management, IHE Delft Institute for Water Education, 2611 AX, Lehrstuhl für Klimaresilienz von Kulturökosystemen Zentrum für Klimaresilienz, Universität Augsburg
Evapotranspiration (ET) is a fundamental process in the Earth’s water cycle, with significant implications for climate regulation, agricultural productivity, and ecosystem health. Accurate estimation of ET at high temporal and spatial resolution is critical for various environmental and land management applications. ET is also an important variable for drought monitoring as it can indicate moisture deficits and plant stress. This study explores the development and implementation of a processing chain for downscaling Sentinel-3 Land Surface Temperature (LST) data, integrated with Sentinel-2 images and ERA5 data, to estimate daily ET over Kenya for 2021-22. The procedure is a cloud-based processing chain comprising three main steps: LST downscaling, ET estimation using the Two-Source Energy Balance (TSEB) model, and intercomparison of the estimated ET with available products. Additionally, the impact of Sentinel-2 compositing (i.e., monthly compared to daily) on LST downscaling and ET estimation is evaluated. The ET estimation for the whole of Kenya was part of the ADM-Kenya project, which focuses on co-developing solutions for monitoring crop condition and cropping systems with Earth Observation (EO) data to derive evidence-based quantitative vegetation condition estimates with high spatial and temporal resolution. In the first step, downscaling of Sentinel-3 SLSTR LST data was performed by integrating Sentinel-2-based vegetation indices. Initially, cloud-covered pixels were masked, and 1-km daily LST images and Sentinel-2 data composites were prepared as inputs for a machine learning model. The LST data access, preprocessing and downscaling steps were performed on the Copernicus Data Space Ecosystem (CDSE) and openEO services. The machine learning model was trained using five spectral indices (NDVI, NDMI, GNDVI, EVI, and NMDI) derived from Sentinel-2 monthly composites at 20-m spatial resolution. 
A random forest regressor was employed to model the relationship between 1-km LST values and the indices from the corresponding month. This trained model was then used to predict 20-m LST values for cropland pixels, enabling finer spatial resolution predictions, followed by residual corrections to enhance predictions and refine the downscaling process. The second step of the processing chain involved ET estimation using the pyTSEB library, which implements the Priestley-Taylor Two-Source Energy Balance (TSEB) model. The pyTSEB package offers a robust framework for estimating bulk fluxes such as net radiation, sensible heat flux, latent heat flux, and soil heat flux. Processed 20-m LST images, along with meteorological data from ERA5 and topographic information from the GLO-30 DEM, served as inputs for the model. The input parameters were categorized into two groups: those directly derived from Sentinel-2, Sentinel-3, and ERA5, and intermediate variables calculated using the pyTSEB framework. Considering the requirements of the TSEB model, seven meteorological variables from ERA5 were ingested, including wind speed components (horizontal and vertical), temperature (air and dewpoint), surface pressure, solar radiation, and geopotential, using the Climate Data Store (CDS), the data access facility for Copernicus services implemented by ECMWF. Two daily products have been prepared at 20-m spatial resolution for this project: downscaled LST and ET estimates. The daily, cloud-free, and accurate products were provided by an optimized random forest model utilizing monthly composites of Sentinel-2 data. Monthly compositing enabled more accurate LST prediction than daily Sentinel-2 indices, which are affected by extensive missing values caused by cloud coverage. Consequently, downscaling achieved an R2 of 0.88, an RMSE of 0.82 and an MAE of 0.55.
Additionally, ET estimates were compared with GLEAM, dekadal national data from the FAO Water Productivity Open-access portal (WaPOR) v3, and dekadal sub-national WaPOR v3 products. The inclusion of multiple high-resolution datasets (e.g., Sentinel-2, Sentinel-3) enhanced the accuracy of the ET estimation by capturing the spatial heterogeneity of surface and atmospheric conditions. Monthly compositing of Sentinel-2 images used for temporal aggregation enabled a temporal comparison (full-year time series) of our estimates with these products. The comparison highlighted a high correlation with the available products over Kenya, both temporally and spatially. The proposed approach demonstrated significant improvements in temporal and spatial coverage, resolution and accuracy of ET estimates compared to current products. The ability to predict 20-m resolution ET values provides critical insights for applications in precision agriculture and water resource management. This study highlights the potential of integrating Sentinel-2 and Sentinel-3 imagery, the impact of Sentinel-2 temporal compositing, advanced machine learning algorithms, and state-of-the-art energy balance models for high-resolution ET estimation. The developed methodology serves as a scalable solution for addressing the challenges of spatial and temporal heterogeneity in ET estimation across agricultural landscapes.
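The downscale-then-correct pattern described in this abstract (fit a regression between coarse LST and fine-scale vegetation indices, predict at fine resolution, then correct with the coarse-scale residuals) can be illustrated with ordinary least squares standing in for the random forest regressor. All names and shapes below are hypothetical:

```python
import numpy as np

def downscale_lst(lst_coarse, idx_coarse, idx_fine):
    """Sharpen coarse LST using fine-scale spectral indices.

    lst_coarse : (n,) coarse-pixel LST values
    idx_coarse : (n, k) indices aggregated to the coarse grid
    idx_fine   : (m, k) the same indices at fine resolution
    A linear model stands in for the random forest; the returned
    coarse-scale residuals would be redistributed onto the fine
    grid in a subsequent correction step.
    """
    X = np.column_stack([idx_coarse, np.ones(len(idx_coarse))])
    coef, *_ = np.linalg.lstsq(X, lst_coarse, rcond=None)
    residuals = lst_coarse - X @ coef
    Xf = np.column_stack([idx_fine, np.ones(len(idx_fine))])
    return Xf @ coef, residuals
```

The residual correction preserves the coarse-pixel mean LST, so the sharpened field remains consistent with the original Sentinel-3 observation.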

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Development of a Monoculture Crop Index Using Remote Sensing and Time-Series Analysis

Authors: Tomasz Dąbrowa, PhD Michał Lupa, eng Piotr Cięgotura, Natalia Abramowicz
Affiliations: AGH University Of Krakow
The growing challenges posed by climate change, resource scarcity, and the demand for sustainable agriculture highlight the need for advanced tools to monitor and manage monocultural crops. This study utilizes remote sensing and time-series analysis to develop an innovative crop index, incorporating spectral reflectance analysis (including, among other aspects, vegetation index analysis) and model-based segmentation, all conducted using technologies such as Google Earth Engine (GEE). The proposed index provides valuable insights into the spatial and temporal dynamics of monoculture fields, thereby enhancing decision-making in agricultural management. The research employs a comprehensive approach, starting with the delineation of agricultural fields using machine learning models and satellite imagery available through GEE and CREODIAS (provided by CloudFerro). These delineated fields serve as the foundation for analyzing the variability of key vegetation indices, including the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index 2 (EVI2), which are computed over several years and then subjected to time-series analysis. A detailed analysis of spectral reflectance data across the electromagnetic spectrum, employing various methods, allows for the detection of subtle changes in crop health and soil conditions, based on data collected over multiple years, including crop growth dynamics, seasonal variability and stress conditions. A key innovation of this project lies in its capacity to evaluate water and energy demands within agricultural systems. The index identifies areas experiencing water stress, helping to optimize irrigation practices and minimize water loss. Furthermore, it highlights regions requiring agrotechnical interventions, reducing unnecessary energy inputs and enhancing resource efficiency. 
By addressing these aspects, the index supports climate mitigation efforts by reducing greenhouse gas emissions associated with the overuse of fertilizers and pesticides. The relevance of this research extends beyond its scientific contribution. It aligns with global sustainability goals, particularly those outlined in the United Nations Sustainable Development Goals (SDGs), such as Zero Hunger (Goal 2), Clean Water and Sanitation (Goal 6), and Climate Action (Goal 13). The index serves as a practical tool for stakeholders, including policymakers, agronomists, and farmers, promoting sustainable agriculture practices while ensuring food security. This abstract presents an innovative approach to integrating remote sensing technologies with advanced analytical methods, providing a validated framework for monitoring monocultural crops. By addressing technical challenges and delivering actionable insights, this research advances agricultural science and enhances operational decision-making, helping build resilience in the face of climate and environmental pressures. The findings highlight the potential of interdisciplinary strategies in addressing key challenges related to climate, water, and energy in agricultural systems.
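The two vegetation indices named in this abstract have standard closed forms; a minimal sketch (reflectances assumed in [0, 1]), using the widely cited EVI2 coefficients:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def evi2(nir, red):
    """Two-band Enhanced Vegetation Index (EVI2), which avoids the
    blue band and saturates less than NDVI over dense canopies."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)
```

A per-field time series of either index, computed over several growing seasons, is the kind of input the proposed monoculture index aggregates.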
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Detecting and filtering cloud shadow: Impact on SYNERGY products and on data quality

Authors: Claire LEJOSNE, Julien FLOQUET, Jan WEVERS, Suman MOPARTHY, Carolien TOTE, Dr Ludovic BOURG, Jérôme BRUNIQUEL, Silvia Scifoni, Dr Steffen DRANSFELD, Dr Véronique Bruniquel
Affiliations: ACRI-ST, Brockmann Consult GmbH, VITO, SERCO, ESA/ESRIN
The Sentinel-3 SYNERGY L2 products have been acquired and provided to users since 2016, with the launch of S3A. These land products take advantage of the simultaneous acquisition by the OLCI (Ocean and Land Colour Instrument) and SLSTR (Sea and Land Surface Temperature Radiometer) instruments to obtain, over the same grid, a complex dataset with a wider radiometric and spectral range. An aerosol retrieval is then applied, using a single continental aerosol model, and surface directional reflectance associated with all OLCI and SLSTR bands (except the absorption bands and those dedicated to cloud detection) can be provided to users. Vegetation-like products are also derived from the SYNERGY processing, with a similar format and datasets to those provided by PROBA-V, thus ensuring the continuity of the SPOT-VGT time series. To improve the quality of the SYNERGY L2 products and the consistency between SYNERGY VGT-like products and PROBA-V ones, several evolutions have been introduced in recent years (update of the Spectral Response Function, compositing method using bicubic interpolation, ...). The detection of cloud shadow is the latest remaining inconsistency between the PROBA-V products and the current operational SYNERGY products. A specific prototype project has been carried out by the Optical Mission Performance Cluster (OPT-MPC, a Copernicus service contract managed by the European Space Agency (ESA) and led by ACRI-ST) to implement and analyse the impact of a cloud shadow mask on SYNERGY products. In addition to better compliance with ARD requirements, improved filtering of the radiometric OLCI and SLSTR content can be performed before the aerosol retrieval. This evolution has been implemented and tested over four months of data (each month representing one season). The impact on data quality, whether concerning the retrieval of aerosol optical depth, the quality of surface directional reflectance or the consistency with PROBA-V products, has been analysed.
After a quick overview of the cloud shadow module (based on the IDEPIX approach) applied to SYNERGY products and a review of the computed cloud shadow mask, the impact on data quality will be presented and described.
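The abstract does not detail the IDEPIX internals; as a rough illustration of the underlying geometry only, a cloud shadow can be located by projecting the cloud to the ground in the anti-solar direction (flat-terrain simplification; all parameter values below are made up, and this is not the IDEPIX algorithm):

```python
import math

def shadow_offset(cloud_height_m, sun_zenith_deg, sun_azimuth_deg, pixel_size_m):
    """Ground offset (in pixels) of a cloud's shadow, cast away from the sun.

    Simplified flat-terrain geometry: the shadow lies at distance
    h * tan(zenith) from the cloud, in the anti-solar direction.
    """
    dist = cloud_height_m * math.tan(math.radians(sun_zenith_deg))
    # Anti-solar direction: sun azimuth + 180 degrees
    az = math.radians(sun_azimuth_deg + 180.0)
    dx = dist * math.sin(az) / pixel_size_m   # east-west offset (columns)
    dy = -dist * math.cos(az) / pixel_size_m  # north-south offset (rows)
    return round(dx), round(dy)

# A 1 km cloud, sun at 60 deg zenith due south (azimuth 180), 300 m pixels:
# the shadow falls about 6 pixels to the north of the cloud
print(shadow_offset(1000.0, 60.0, 180.0, 300.0))  # (0, -6)
```

Shifting the cloud mask by such an offset gives candidate shadow pixels, which a real module would then confirm radiometrically.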
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Making EO Workflows Reproducible and Portable Across Federated High-Performance Computer and Cloud Platforms With Conda

Authors: Jean Iaquinta, Anne Fouilloux
Affiliations: IT Department, Simula Research Laboratory
Earth System Modeling (ESM) workflows, which are critical for Earth Observation (EO) and climate research, are highly complex and demand substantial computational power. These workflows typically employ domain decomposition techniques, using a combination of MPI (Message Passing Interface) and OpenMP (Open Multi-Processing) to parallelize computations efficiently. For global high-resolution models, each processor handles a small column of data (e.g., latitude-longitude and vertical levels), enabling scalability on powerful supercomputers like the LUMI EuroHPC system. Conversely, calculations for smaller domains can be performed on personal computers, underscoring the need for a portable approach that spans this computational spectrum. This work presents the use of Conda, an open-source package manager, to create reproducible, portable, and scalable computational environments for ESM workflows across federated HPC and cloud platforms. A key challenge was that default Conda MPI implementations were not optimized for the high-performance interconnects and architectures of modern supercomputers. In particular, they lacked support for advanced communication libraries such as Unified Communication X (UCX) for Mellanox InfiniBand-based clusters and OpenFabrics Interfaces (OFI) for Intel Omni-Path and other RDMA-capable networks such as those used by EuroHPC systems. To address this, we upgraded the MPI toolchains in Conda-Forge, ensuring compatibility and optimized performance on state-of-the-art HPC infrastructure. This approach significantly accelerates the development of new algorithms by allowing scientists to develop locally on their personal computers or even laptops, then seamlessly scale to the largest HPC systems without having to modify their codes and scripts due to differences in the software stack. This eliminates the typically time-intensive porting steps required to adapt workflows to HPC environments, drastically shortening the transition from research development to production.
This process should not only accelerate innovation by streamlining the full software environment management and usage but also make it more accessible to non-specialists, empowering researchers to focus on scientific discovery rather than sorting out technical software compatibility issues. Conda environments seamlessly run on a range of platforms without relying on host software stacks or requiring privilege escalation. By integrating these environments into lightweight containers, it is possible to achieve performance that matches or even exceeds native configurations. Additionally, Conda-based containers, viewed as single compressed files, reduce storage requirements and alleviate strain on storage and file systems. Through comprehensive case studies and benchmarks, we demonstrate how this approach supports reproducibility, scalability, and computational efficiency for ESM workflows. This work highlights the transformative potential of Conda and containerized solutions in enabling complex EO modeling, accelerating innovation, and fostering sustainable and scalable digital innovation across federated platforms.
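As an illustration of the approach described above, a Conda environment of this kind might be declared as follows (package names and versions are placeholders, not the authors' actual environment; the `external_*` build strings are the conda-forge convention for binding to a host-provided MPI instead of the generic Conda build):

```yaml
# environment.yml -- illustrative only; names and versions are placeholders
name: esm-workflow
channels:
  - conda-forge
dependencies:
  # Bind to the host's optimized MPI (e.g. on an HPC system) rather than
  # the generic Conda-built MPI
  - mpich=4.*=external_*
  - mpi4py
  - xarray
  - dask
  - netcdf4
```

The same file then resolves on a laptop (with a regular MPI build) or on a supercomputer (with the external binding), which is the portability property the abstract emphasises.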
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: D.03.02 - POSTER - Free Open Source Software for the Geospatial Domain: current status & evolution

#cloud-native

Free and Open Source Software has a key role in the geospatial and EO communities, fostered by organizations such as OSGeo, the Cloud Native Computing Foundation and Apache, and by space agencies such as ESA and NASA. This session showcases the status of OSS tools and applications in the EO domain, and their foreseen evolution, with a focus on innovation and support for open science challenges.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: On-demand data cubes – knowledge-based, semantic querying of multimodal Earth observation data for mesoscale analyses anywhere on Earth

#stac

Authors: Felix Kröber, Martin Sudmanns, Dirk Tiede
Affiliations: Department of Geoinformatics, University of Salzburg, Forschungszentrum Jülich, Institute of Bio- and Geosciences, IBG-2: Plant Sciences
Introduction
In recent years, Earth observation (EO) data access and processing have undergone a transformative shift, driven by the advent of novel big EO data paradigms [1,2]. With the increasing volume and variety of EO data, the significance of cloud-enabled processing frameworks that allow users to focus on the actual analysis of data, with an abstraction of technical complexities, is growing. However, many cloud-based platforms are proprietary or closed-source [3,4], imposing costs and service uncertainties, as illustrated by the unexpected shutdown of Microsoft's Planetary Computer Hub in June 2024. Open-source, free alternatives like the Open Data Cube [5], meanwhile, may require significant setup effort, dataset indexing being one reason. This effort is only justified for larger data cubes with long-term infrastructure goals, whereas for shorter-term projects it is impractical. Moreover, while current systems offer technical solutions in terms of data access and scalability of analyses, many approaches still lack image-understanding capabilities. A modern processing framework needs to provide adequate means to address the semantic complexity of EO data [6]. Analysts still grapple with raw data structures, rather than having frameworks at hand that let them focus on data meaning. In brief, there is a pressing need for open-source EO data processing frameworks that are both user-friendly and capable of representing the semantics of EO data. To this end, we introduce a novel Python package (gsemantique) for building ad hoc data cubes for semantic EO analyses. We demonstrate its utility for querying multi-modal data by focussing on the use case of forest disturbance modelling.
Design choices & Technical implementation
The technical foundations of the gsemantique package are threefold: First, data in cloud-optimised formats is fetched on demand into regularised three-dimensional data cubes.
The SpatioTemporal Asset Catalog (STAC) [7], which fosters standardisation in the structuring of geospatial metadata, is leveraged to facilitate data access. A pre-defined suite of STAC endpoints, including several common EO datasets such as Landsat, Sentinel-1 and Sentinel-2 along with additional datasets such as a global DEM, is part of the package. The way data access is modelled makes it easy to extend the set of pre-defined datasets with custom ones. Second, the creation of comprehensible, knowledge-based, transparent models is supported by providing a semantic querying language to address and model the data. Here, we build on the foundation laid by the semantique package [8], which introduced a structured approach to semantic querying of EO data. This can be used to supplement conventional, non-semantic approaches. To facilitate effective exchanges with end users and domain experts regarding the design of analyses, graphical visualisation options for models are integrated. Specifically, a model coded as a Python structure can be represented using graphical blocks as defined by Google's Blockly library [9]. Third, the scalable execution of the models is enabled by an internal tiling of the queried spatio-temporal extent into smaller chunks. The complexity of the chunking mechanism, with the decision on the dimension (i.e. chunk-by-space or chunk-by-time), the execution of the recipe and the merging of the individual chunks into a single result, is abstracted from the user. With a focus on efficiency, the chunked execution of the model supports multiprocessing. As data dependencies are not fixed and can be replaced and extended, the presented Python package offers a very flexible and portable way of performing data analyses. Big EO data archives can be analysed both on local, consumer-grade devices and on cloud-based, high-performance processing platforms, without being tied to a specific platform.
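The spatial tiling just described can be sketched as follows (a simplified, hypothetical illustration of chunk-by-space, not gsemantique's actual implementation; extent and tile size are arbitrary):

```python
def chunk_extent(xmin, ymin, xmax, ymax, tile_size):
    """Split a bounding box into tile-sized sub-extents (chunk-by-space)."""
    tiles = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            # Clip the last row/column of tiles to the extent boundary
            tiles.append((x, y, min(x + tile_size, xmax), min(y + tile_size, ymax)))
            x += tile_size
        y += tile_size
    return tiles

# A 100 x 50 km extent split into 30 km tiles -> a 4 x 2 grid of chunks,
# each processed independently (optionally in parallel) and merged afterwards
tiles = chunk_extent(0, 0, 100_000, 50_000, 30_000)
print(len(tiles))  # 8
```

Chunk-by-time works analogously along the temporal axis; in either case the recipe is executed per chunk and the partial results are merged into a single output.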
Application case: Forest disturbance analyses
To prove the value of the proposed package, we focus on the use case of analysing forest disturbances via remote sensing data. The focus here is deliberately not on the optimisation of the model, i.e. on creating the best-performing forest disturbance model. Instead, we intentionally address the example with a simple but still effective analysis model, aiming to highlight the conceptual advantages of our approach. Specifically, three beneficial properties of the processing framework are showcased. First, the entity of interest (forest) is a 4D real-world phenomenon that needs to be translated to features in the 2D image domain. This is not unique to the forest entity but applies to all entities in the 4D world (including their relationships). However, in the image domain, entities such as water bodies are spectrally distinct relative to other objects. This makes the selection of useful image features a straightforward task, even without an explicit model that translates properties of the entity to features of the object. Forests, on the other hand, represent a type of vegetation, which is more challenging to distinguish in the image domain. Similar image features may be observed for other vegetated surfaces such as meadows or bogs. Here, the advantage of knowledge-based, semantic modelling, with the possibility of an explicit definition of multiple relevant entity properties (and their translation into object features), becomes clear. We create such a model for the entity forest in an undisturbed state by defining the properties of temporal stability (translated to low radar coherence), vitality (translated to a positive NDVI) and altitude below the tree line (translated to an elevation below a thresholded DEM level). We compare this entity definition with a pre-defined one that was derived in a data-driven way.
Both entity definitions can be generated without further effort, leveraging the data connections pre-implemented in the package. The comparison of both definitions allows an estimation of the uncertainty when modelling the entity forest based on different data sets and approaches. Second, the phenomenon of disturbance is an ambiguous concept. There is no unique, crisp definition of forest disturbances, so a remote sensing expert needs to make his/her specific assumptions in modelling the phenomenon explicit and transparent in order to discuss them further with other domain experts. Also, there is no simple data-driven way to solve the task of disturbance modelling, since there is a lack of available label data. Hence, this example is well suited to a semantic, knowledge-based modelling approach that allows the resulting human-readable model to be visualised and communicated to others. Third, forest disturbances are inherently process-based, i.e. they are characterised by a temporal change in the forest's status. A datacube-based approach is therefore well positioned for this task, as it allows querying every single observation through time instead of relying on pre-processed, aggregated EO products. Using a multiannual use case design incorporating Sentinel-1, Sentinel-2 and DEM data for a spatial extent of more than 1000 km², we demonstrate the usability of our package for meso-scale analyses querying all available data references through time.
[1] H. Guo, Z. Liu, H. Jiang, C. Wang, J. Liu, and D. Liang, 'Big Earth Data: a new challenge and opportunity for Digital Earth's development', International Journal of Digital Earth, vol. 10, no. 1, pp. 1–12, Jan. 2017, doi: 10.1080/17538947.2016.1264490.
[2] M. Sudmanns et al., 'Big Earth data: disruptive changes in Earth observation data management and analysis?', International Journal of Digital Earth, vol. 13, no. 7, pp. 832–850, Jul. 2020, doi: 10.1080/17538947.2019.1585976.
[3] N. Gorelick, M. Hancher, M. Dixon, S. Ilyushchenko, D. Thau, and R. Moore, 'Google Earth Engine: Planetary-scale geospatial analysis for everyone', Remote Sensing of Environment, vol. 202, pp. 18–27, Dec. 2017, doi: 10.1016/j.rse.2017.06.031.
[4] Microsoft Open Source, R. Emanuele, D. Morris, T. Augspurger, and M. McFarland, microsoft/PlanetaryComputer: October 2022. (Oct. 2022). Zenodo. doi: 10.5281/zenodo.7261897.
[5] B. Killough, 'Overview of the Open Data Cube Initiative', in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia: IEEE, Jul. 2018, pp. 8629–8632. doi: 10.1109/IGARSS.2018.8517694.
[6] H. Augustin, M. Sudmanns, D. Tiede, S. Lang, and A. Baraldi, 'Semantic Earth Observation Data Cubes', Data, vol. 4, no. 3, p. 102, Jul. 2019, doi: 10.3390/data4030102.
[7] 'STAC: SpatioTemporal Asset Catalogs'. Accessed: Nov. 24, 2024. [Online]. Available: https://stacspec.org/en/
[8] L. Van Der Meer, M. Sudmanns, H. Augustin, A. Baraldi, and D. Tiede, 'Semantic Querying in Earth Observation Data Cubes', Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., vol. XLVIII-4/W1-2022, pp. 503–510, Aug. 2022, doi: 10.5194/isprs-archives-XLVIII-4-W1-2022-503-2022.
[9] Google, Google Blockly - The web-based visual programming editor. (Nov. 25, 2024). TypeScript. Google. Accessed: Nov. 25, 2024. [Online]. Available: https://github.com/google/blockly
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Implementing and utilizing National Copernicus services for hydrology and natural hazard monitoring at NVE using Open Source tools

Authors: Stefan Blumentrath, Aron Widforss, Solveig Havstad Winsvold, Nils Kristian Orthe, Kjetil Melvold, Liss Marie Andreassen, Sjur Anders Kolberg, Karsten Müller, Rune Verpe
Affiliations: Norwegian Water Resources And Energy Directorate (NVE)
The Norwegian Water Resources and Energy Directorate (NVE) is tasked with the management of water and energy resources in Norway, as well as with reducing the risk of damage associated with landslides and flooding. Copernicus satellite data can provide valuable insight for these tasks. The vast amount of Copernicus data, however, requires scalable and robust solutions for processing. Standardized and modular workflows help safeguard the maintainability and efficiency of service delivery. To implement operational Copernicus services at NVE, the open-source OSGeo Community project actinia was introduced, together with the open-source Apache Airflow software, as a platform for delivering operational Copernicus services on a national scale. actinia (https://actinia-org.github.io/) is a REST API for scalable, distributed, high-performance processing of time series of satellite images, as well as geographical raster and vector data. It is a modular system that mainly uses GRASS GIS for computational tasks and provides, among others, the openEO API. Apache Airflow (https://airflow.apache.org/) is an orchestration solution for programmatically authoring, scheduling and monitoring workflows. It also provides flexible means to distribute and allocate computing resources to the various tasks. User uptake is essential to justify the resources allocated to service provision. Starting with the next phase of the NVE Copernicus project in 2025, the platform will therefore be enhanced with solutions for interactive data exploitation. Together with user training, this is aimed at identifying new ways and areas in which Copernicus data can support decision-making in NVE's day-to-day operations. In the presentation, we will illustrate how Apache Airflow and actinia work together and present selected examples of current and future applications operationalized on the platform. These applications currently cover avalanches, flooding, snow cover and lake ice.
More services related to NVE's areas of responsibility are being investigated, such as landslides, slush flows, glacier lake outburst floods, and specific land cover changes. Plans for increasing user uptake, with interactive access for analysis and data extraction, as well as user training, will also be outlined. Finally, we discuss challenges and opportunities of using open-source software tools and collaborative science approaches at public authorities like NVE in national, operational services.
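As an illustration of the division of labour described above, actinia executes GRASS GIS module calls described as JSON process chains, which an Airflow task can submit via the REST API. A minimal, hypothetical two-step chain might look like this (module choices, map names and the threshold are invented for illustration, not NVE's operational chains):

```json
{
  "version": "1",
  "list": [
    {
      "id": "set_region",
      "module": "g.region",
      "inputs": [{"param": "raster", "value": "sentinel2_ndsi"}]
    },
    {
      "id": "snow_mask",
      "module": "r.mapcalc",
      "inputs": [{"param": "expression", "value": "snow = sentinel2_ndsi > 0.4"}]
    }
  ]
}
```

Airflow then takes care of scheduling such submissions (e.g. per new Sentinel-2 scene) and of monitoring and retrying the resulting jobs.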
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: The latest evolutions of the Orfeo ToolBox (OTB)

Authors: Dr Mickael Savinaud, Julien Osman, Thibaut Romain, Tristan Laurent, Yannick Tanguy, Guillaume
Affiliations: Cs Group - France
OTB (Orfeo ToolBox) is an advanced free and open-source software library designed for remote sensing image processing. It was developed primarily by CS Group under the leadership of CNES (the French space agency) in the frame of the ORFEO program (the French-Italian support program for Pléiades and COSMO-SkyMed). The toolbox is designed for geospatial applications and is capable of handling high-resolution satellite imagery thanks to its built-in streaming and multithreading mechanisms, which optimize the resources needed. Its data processing scheme is primarily based on ITK pipelines, and it uses GDAL to read and write raster and vector data. With a modular architecture, OTB is built to ensure scalability and compatibility with the formats supported by the library (at least those supported by GDAL), such as COSMO-SkyMed, Formosat, Ikonos, Pléiades, QuickBird, RADARSAT-2, Sentinel-1, SPOT 5, SPOT 6/7, TerraSAR-X and WorldView-2. The tools composing OTB integrate advanced algorithms for large-scale data analysis, including image classification, segmentation and feature extraction, for both optical and SAR imagery. The toolbox is easily installable on Windows and Linux computers and is also available with Docker. Because of its free and open-source nature, OTB is supported by a strong remote sensing community. Users can also create their own plugins thanks to OTB's remote-modules mechanism. These advantages make it an excellent choice for researchers, analysts and industry. Simple OTB algorithms can be plugged together to create complex pipelines able to handle data up to the terabyte scale, thanks to the streaming mechanism which optimizes the resources needed. The library is written in C++, but all the applications can also be accessed from Python, a command-line launcher, or a user-friendly QGIS plugin. In this presentation, we will explore the new functionality that comes with the latest and upcoming releases of OTB. 
We will see how it can be integrated into diverse workflows and how it contributes to Earth observation challenges. Practical demonstrations will highlight its performance on real-world applications and show how it can be used to advance environmental monitoring, urban planning and disaster management. This session aims to demonstrate how OTB can be integrated into modern and robust pipelines and how it can contribute to improving the data used to monitor climate change.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Processing geospatial data at scale in geoscience: taking advantage of open-source tools.

#parquet

Authors: Jean Baptiste Barré, Romain Millan, Sylvain Dupire, Bernd Scheuchl, Eric Rignot
Affiliations: BARRE, Institute of Environmental Geosciences, National Research Institute for Agriculture, Food and Environment, University of California
The field of Earth Observation (EO) is experiencing an era of unprecedented data proliferation, driven by the increasing availability and diversity of multimodal sensor data. This surge in geospatial data volume directly responds to the growing need to study territories globally with greater precision. Simultaneously, the scientific community and funding agencies advocate for the widespread adoption of Open Source technologies, aligning with the FAIR (Findable, Accessible, Interoperable, Reusable) principles. These technologies are transforming the construction of processing chains and ensuring data accessibility for scientific research, education, and outreach purposes. Additionally, practitioners across public and private sectors are increasingly embracing open-source ecosystems to foster stronger connections with research communities, thereby driving collaborative innovation. As a result, open-source tools have become a cornerstone in advancing scientific computing and addressing the challenges of big geospatial data. In this presentation, we explore the transformative impact of open scientific principles applied to geospatial data through four diverse projects in geosciences: Polartopo: Integrates multi-sensor satellite data to explore changes in Earth's cryosphere. Leveraging modern geospatial formats like GeoParquet, coupled with DuckDB and high-level tools such as Xarray/Dask, Polartopo demonstrates how heterogeneous altimetric datasets can be processed uniformly, enhancing efficiency and scalability. Ice Velocities Workflow: Monitors large-scale ice mass flow changes in Antarctica using a blend of open-source and proprietary tools for interferometric satellite radar data processing. It enhances computational flexibility and performance by integrating SQL databases and wrapping Fortran in Python, showing the ongoing reliance on proprietary tools for complex processes like interferometry. 
Fireaccess: In collaboration with firefighting professionals, this project uses the open-source Python geospatial stack for large-scale processing of dense LiDAR data in fire protection contexts. It exemplifies the design of fully open-source operational tools tailored for direct application in critical services, bridging scientific research and practical implementation. 3D Worldwide Glaciers Map: Highlights the role of open-source visualization tools in translating complex multidimensional scientific datasets into user-friendly platforms. It underscores the potential of open-source technologies in enhancing communication, outreach, and education by simplifying the representation of intricate geospatial data. Spanning 1 to 15 years, these projects showcase the adaptability, longevity, and transformative potential of open-source solutions in EO. They also illustrate the continuing interplay between open-source and proprietary tools, underscoring the need for ongoing development of open-source alternatives. Ultimately, this presentation emphasizes the pivotal role of open-source tools in fostering collaboration, driving innovation, and advancing knowledge sharing within the EO community.
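As a sketch of the GeoParquet-plus-DuckDB pattern mentioned for Polartopo (table, column and file names are hypothetical, and the query assumes DuckDB's spatial extension is installed; GeoParquet stores geometries as WKB, hence the explicit decode):

```sql
-- Illustrative only: columns and file name are made up
INSTALL spatial;
LOAD spatial;

SELECT sensor,
       avg(elevation_m) AS mean_elevation
FROM read_parquet('altimetry.parquet')
WHERE acquisition_time >= TIMESTAMP '2020-01-01'
  AND ST_Within(ST_GeomFromWKB(geometry),
                ST_GeomFromText('POLYGON((-70 -75, -60 -75, -60 -70, -70 -70, -70 -75))'))
GROUP BY sensor;
```

The appeal of this pattern is that such queries stream directly over columnar files, so heterogeneous altimetric datasets can be filtered and aggregated without a database server or a prior full load into memory.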
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Leveraging BPMN for Earth Observation Data Provisioning Workflows

Authors: Mario Winkler, Jonas Eberle, Jonas Müller
Affiliations: German Aerospace Center, DLR
Earth Observation (EO) cloud-based data exploitation platforms have become increasingly popular due to their ability to harness advanced analysis and processing technologies, allowing science and application initiatives to efficiently derive actionable information from huge volumes of satellite data. However, a major task in operating these environments is making this data available to platform users and applications in an efficient and controlled manner, so that it can be discovered, accessed and used collaboratively. This process of data provisioning often includes the harvesting of external catalogs and sources to systematically ingest data into the platform, and poses the challenges to the platform operator of integrating the data into heterogeneous platform services whilst ensuring the completeness of the data offerings and managing the ingestion process itself. Due to this complexity, such data provisioning processes are often implemented using workflow engines, which execute and manage predefined workflows, typically defined as a sequence of tasks, activities, or operations. They can handle complex, long-running tasks with conditional logic as well as user interactions and state management. Although every platform may have specific requirements for its data provisioning system, the variety of implementations may be generalized into a set of common use cases: 1) on-demand, batch-oriented provisioning of data from a time period in the past; 2) continuous or near-real-time provisioning of new data sets, keeping up with their publication at the source; and 3) continuous or on-demand consistency verification and correction between the data source and the platform, ensuring data completeness. To implement these use cases, a set of individual, dependent tasks must be designed and integrated into structured and repeatable processes. This is where BPMN comes into play.
The Business Process Model and Notation (BPMN) is a standardized graphical modeling language widely used to define, document and implement processes. By providing a clear, visual framework, it enhances the visibility of the process during both the development and the operational phase, bridging the gap between stakeholder requirements, technical implementation and operational tasks. While traditionally associated with business processes and automation, BPMN also offers significant potential for creating Earth Observation data provisioning workflows, supporting the aforementioned use cases well. This presentation will give an overview of how BPMN constructs like tasks, conditional gateways, multi-instance call activities, timer and message events can be used within the operational Earth Observation exploitation platform terrabyte to create data-driven, scalable and conditional workflows that can run continuously or on request, exchange messages with external systems, or include manual conflict-resolution steps. BPMN is also supported by the ESA EOEPCA+ project, which develops reusable components for EO exploitation platforms. The Resource Registration building block is built upon a BPMN workflow engine and fully supports the use case presented here. It is based on open standards and is available as open source software.
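As a small illustration of the continuous-provisioning use case (use case 2 above), a BPMN process combining a timer start event, service tasks and a conditional gateway might be modelled as follows (IDs, task names and the condition expression are invented, not terrabyte's actual workflow):

```xml
<bpmn:process id="harvest_new_products" isExecutable="true">
  <bpmn:startEvent id="timer_start">
    <bpmn:timerEventDefinition>
      <!-- Poll the source catalog every hour -->
      <bpmn:timeCycle>R/PT1H</bpmn:timeCycle>
    </bpmn:timerEventDefinition>
  </bpmn:startEvent>
  <bpmn:serviceTask id="query_source" name="Query source catalog" />
  <bpmn:exclusiveGateway id="new_data" name="New products?" />
  <bpmn:serviceTask id="ingest" name="Ingest into platform catalog" />
  <bpmn:sequenceFlow id="f1" sourceRef="timer_start" targetRef="query_source" />
  <bpmn:sequenceFlow id="f2" sourceRef="query_source" targetRef="new_data" />
  <bpmn:sequenceFlow id="f3" sourceRef="new_data" targetRef="ingest">
    <bpmn:conditionExpression>${count &gt; 0}</bpmn:conditionExpression>
  </bpmn:sequenceFlow>
</bpmn:process>
```

Because the same diagram is both the documentation and the executable definition, operators and stakeholders review exactly what the engine runs.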
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Efficient Satellite Data Management: The Role of the STAC Standard and EOmetadatatool in Open-Source Metadata Harmonization for the Geospatial Domain

#stac

Authors: Wojciech Bylica, Michał Bojko, Marcin Niemyjski, Christoph Reck, Jędrzej Bojanowski, Jacek Chojnacki, Kamil Monicz, Jan Musiał, Jonas Eberle, Tomasz Furtak
Affiliations: CloudFerro S.A., German Aerospace Center (DLR)
The modern era of satellite exploration generates vast amounts of data, playing a pivotal role in science, business, and public administration. The diversity of sources and the growing number of satellite missions have made efficient data management increasingly dependent on advanced solutions. The Copernicus program and its Sentinel missions are prime examples of rapidly expanding sources of satellite data, supporting numerous vital initiatives. One of the main challenges is standardizing metadata from various sources to enable effective comparison and analysis. The STAC (SpatioTemporal Asset Catalog) standard addresses this challenge by providing consistent and harmonized metadata catalogs. Built on JSON, STAC offers an open framework for describing satellite and geospatial data. Its structure, comprising collections, items, and assets, ensures clarity and flexibility. Users can search products based on various metadata, such as sensor parameters, spectral range, or spatial resolution, improving resource identification and comparison. By eliminating disparate formats, STAC simplifies navigation through satellite data, benefiting researchers and enterprises alike. To address the challenge of populating STAC-compliant catalogs with standardized data, the German Aerospace Center (DLR) initially developed EOmetadatatool. This tool was later adapted for the needs of the Copernicus Data Space Ecosystem (CDSE), tailored to meet end-user requirements, and released under an open license. EOmetadatatool is designed to integrate seamlessly with modern data infrastructures, allowing it to read data directly from S3 buckets using user credentials and process metadata asynchronously. This capability makes it ideal for handling large-scale datasets generated by satellite missions. The tool extracts and maps metadata from various formats and structures into the STAC format, ensuring compatibility and consistency.
It also supports formatting metadata into custom templates, offering flexibility to cater to specific operational requirements. EOmetadatatool further provides validation modes for STAC compliance and supports direct loading into Postgres/PostGIS stacks via DSN configurations. Supporting over 60 Sentinel mission product types, it simplifies harmonization and enables the creation or updating of catalogs. Built on stactools, a Python library and CLI for working with STAC, it inherits a robust framework from PySTAC for handling geospatial metadata. The presentation will showcase EOmetadatatool's role in metadata harmonization and its integration into STAC-compliant catalogs, emphasizing STAC's flexible structure for efficient discovery and management. Together, these solutions advance satellite data management, creating a more accessible ecosystem for the ever-growing volume of satellite resources.
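The STAC structure described above boils down to a small set of required top-level fields per item; a minimal sketch, assuming only the STAC core item spec (all identifiers, coordinates and asset paths below are invented, not EOmetadatatool output):

```python
# Minimal STAC-like item; illustrative values only
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "S2B_EXAMPLE_TILE_20240101",
    "geometry": {"type": "Point", "coordinates": [16.37, 48.21]},
    "bbox": [16.37, 48.21, 16.37, 48.21],
    "properties": {"datetime": "2024-01-01T10:30:00Z"},
    "assets": {
        "B04": {"href": "s3://example-bucket/B04.jp2", "type": "image/jp2"}
    },
    "links": [],
}

# The STAC item spec requires these top-level fields on every item
required = {"type", "stac_version", "id", "geometry", "bbox",
            "properties", "assets", "links"}
assert required <= item.keys()
```

A harmonization tool's job is essentially to map each source format's metadata fields onto this fixed shape, after which any STAC-aware client can search and load the catalog uniformly.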
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: The Earth Observation DataHub - Using Open Source Software to Make EO and Climate Data More Accessible and Usable, Supporting the Creation of New Applications and Open Science

#stac

Authors: Richard Conway, Alex Hayward, James Hinton, Philip Kershaw, Alasdair Kyle
Affiliations: Telespazio UK Ltd, Centre for Environmental Data Analysis
Telespazio UK is delivering the Earth Observation DataHub (EODH) Platform, which aims to aid the federation of national data sets, information sources and processing facilities in order to enable the growth of the Space / Earth Observation economy. The Platform is open source by design and uses, where possible, common open interface / API standards for software services, enabling improved interoperability and federation between platforms. The EODH Platform also re-uses some open-source component solutions, such as building blocks from the Earth Observation Exploitation Platform Common Architecture (EOEPCA) Reference Implementation (an ESA-funded initiative), most of which are widely used in the EO community. The EODH Platform is the core part of the Earth Observation DataHub UK Pathfinder project, which is delivering improved access to Earth Observation (EO) and Climate data to support effective decision-making. The project is supported by UKRI NERC, the Department for Science Innovation and Technology (DSIT) and the UK Space Agency. Telespazio UK would like to present the EODH Platform, which is currently in an early adopter operational phase, to demo current functionality and outline planned future functionality to potential users attending the Living Planet Symposium, to generate user uptake and gather user feedback. The EODH Platform components are built using a wide range of open-source software as well as open standards, which include:
- Identity and Access Management (aligned to the OAuth2 standard): Keycloak, Nginx, OIDC
- Resource Catalog: stac-fastapi, stac-fastapi-elasticsearch
- Data visualisation: TiTiler
- Workflow Execution: EOEPCA ADES, JupyterHub/JupyterLab, Ploomber, Calrissian
- Event generation: Argo Events
- Messaging system: Apache Pulsar
- Web Presence: Wagtail CMS
- Supporting: ArgoCD, Kubernetes, Testkube etc.
- OGC Standards: OGC Records API, OGC Best Practice for Application Packages, OGC WMTS
The presentation will focus on how open-source software and standards have been utilised in the EODH Platform and will consider their fitness-for-purpose in terms of delivering operational functionality for EO and Climate users. Additionally, the presentation will outline the future evolution of the EODH Platform as well as plans to incorporate new open-source software and to push bespoke EODH Platform enhancements upstream. The creation of a financially and operationally sustainable EODH Platform will break down data silos and allow stakeholders from government, industry and academia to work together in a centralised manner, offering a better model for research and commercial service delivery, which will support a range of sectors including green finance, energy, infrastructure, and climate change monitoring. The EODH Platform's open-source design will also allow federation opportunities with other aligned initiatives such as EarthCODE, the Open Science Persistent Demonstrator (OSPD) and Application Propagation Environments (APEx).
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Evolution of the SNAP Open Source Tools Towards a Cloud-based Modular App

Authors: Carsten Brockmann, Thomas Block, Norman Fomferra, Benjamin Lutz, Luis Veci, Cosmin Cara, Marcus Engdahl
Affiliations: Brockmann Consult GmbH, Skywatch, CS-Romania, ESA ESRIN
The landscape of Earth Observation data processing is rapidly evolving. Small apps with dedicated functionality, running close to the data in cloud systems, are standard today for many EO applications. These apps provide new, agile, modular, application-specific tools which complement classical desktop applications as well as backend heavy-lifting processing tools; the latter still keep their relevance. The SNAP toolbox is the de-facto standard for data visualisation, analysis and processing of optical and microwave EO data. In this presentation we will explain the architectural developments for the SNAP software platform towards distributed, service-based computing environments, focusing on creating a flexible and extensible system that can adapt to emerging technological requirements. This evolution has been elaborated by the SNAP developer team as an Evolution Roadmap and prototype implementation. We present the Roadmap and Prototype in order to gather feedback from the user community towards the development of a user-driven, modular, and future-proof tool. The architectural changes prioritize developing reusable components while maintaining the core engine's flexibility. By implementing a modular design, the new architecture seeks to enable SNAP to operate effectively across different computing scenarios, from traditional desktop applications to cloud-based processing platforms. The ultimate goal is to create an adaptable architecture that can respond to user demands and technological advancements, ensuring SNAP remains a cutting-edge tool for Earth Observation data analysis. SNAP's evolution will address the following five scenarios:
- Big data scenario: processing large volumes of similar data products.
- Remote data scenario: processing data stored in cloud infrastructure.
- CPU- and memory-intensive scenario: handling complex calculations requiring significant resources.
- Application interoperability scenario: accessing SNAP functionality from different programming languages and platforms.
- Advanced data analysis scenario: integration with interactive programming environments like Python and R.
The necessary architectural evolution includes the containerisation and cloudification of the SNAP Engine and the transition of SNAP Desktop into a modern web application based purely on the latest web technology. It involves developing a RESTful Server API for the SNAP Engine, allowing remote interaction with SNAP's functionality. The UI concept will be completely new: several web-based user interfaces and command-line tools will be developed as client components interacting with the Server API, facilitating access to SNAP's capabilities from various platforms. These apps can be selected and installed as needed by users. The architectural evolution further includes Server Resource Management and the extension of the Command API with (1) a Data Catalogue API, (2) a Processing API, (3) a Workspace API, (4) a Workflow API and (5) an Orchestration API.
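The Server API described in the abstract is a roadmap item rather than a released interface, so any endpoint names are necessarily invented. The sketch below only illustrates what a remote processing request against such a RESTful engine could look like; the base URL and the `/processing/jobs` path are assumptions, while `Resample` is an existing SNAP GPF operator name:

```python
import json
import urllib.request

# Hypothetical endpoint and payload: the SNAP Server API is a roadmap item,
# so the URL and path below are illustrative, not a released interface.
base_url = "http://snap-engine.example.org/api/v1"
request_body = {
    "operator": "Resample",                  # an existing SNAP GPF operator
    "parameters": {"targetResolution": 20},
    "source": "catalogue://S2A_MSIL1C_example",
}

# Build (but do not send) the HTTP request to the hypothetical Processing API.
req = urllib.request.Request(
    url=f"{base_url}/processing/jobs",
    data=json.dumps(request_body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

assert req.get_method() == "POST"
```

A client in any language that can issue such HTTP requests would gain access to SNAP functionality, which is the point of the application interoperability scenario.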
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: High Performance Desert Analytics: Characterizing Earth Surface Dynamics in Arid Regions Through ‘terrabyte’ and Multi-Sensor Earth Observation Archives

#stac

Authors: Baturalp Arisoy, Dr. Florian Betz, Univ.-Prof. Dr. Georg Stauch, Dr. Doris Klein, Univ.-Prof. Dr. Stefan Dech, Prof. Dr. Tobias Ullmann
Affiliations: Earth Observation Research Cluster, University of Würzburg, Chair of Geomorphology, University of Würzburg, German Remote Sensing Data Center, DLR
Cloud-based Earth Observation (EO) analysis has become increasingly accessible due to the rapid growth in EO datasets. By leveraging cloud-stored satellite imagery and high-speed processing capabilities, Earth Observation Data Cubes (EODCs) are emerging as a new paradigm, eliminating the need for high-end personal systems and bulky image download methods. A critical consideration in EODC development is the selection of cloud computing platforms, given the diverse range of available services. While many well-known platforms are popular within the EO community, they are often commercially driven, subjecting users to platform-specific packages and policies. This dependency poses risks, including restricted free access for academics and service discontinuations due to policy or privacy concerns. DLR's terrabyte is a scientific, non-commercial alternative: a High Performance Data Analytics (HPDA) platform accessible to partner scientists that offers openly available STAC catalogs with diverse missions, SLURM-based open-source HPC cluster management and, most importantly, a customizable development infrastructure and direct access to the German Remote Sensing Data Center (DFD). This enables users to bring their own code and environments, supporting Python, R, and open-source tools such as Open Data Cube, Xarray, and Dask for parallel computing. terrabyte is therefore a well-suited platform to handle large-scale geospatial vector and raster datasets with the support of a supercomputing facility, ensuring independent scientific work without any commercial affiliation. Additionally, the generated results can be transferred and stored in an allocated container. This contribution highlights the use of terrabyte for analyzing land surface dynamics at high spatial and temporal resolution for selected study sites in Mongolia and Kyrgyzstan. In this exemplary project, analysis-ready data cubes will be used as the main data infrastructure for the analysis of surface dynamics.
The main research question of the dryland project aims to improve the understanding of the frequency-magnitude relationship between surface dynamics and climate change. Therefore, different time series patterns will be generated from different EO data: trends to reveal erosion and sedimentation, seasonal patterns for vegetation development, and cyclic patterns for fluctuations in river discharge over multi-year cycles. Storing extensive temporal and spectral data will be beneficial in distinguishing the different time series patterns mentioned above. Moreover, storing such an amount of diverse and long-term data, along with the computation power of the HPC and recent advancements in machine learning such as foundation models, can significantly enhance our capability to extract accurate information on the Earth's surface over vast areas and at high temporal frequency. As such, terrabyte plays a central role in two key stages: first, designing EODCs for the dryland river systems in Mongolia and Kyrgyzstan; second, based on the analysis-ready data stored in the EODCs, applying Machine Learning (ML) and Deep Neural Network (DNN) as well as time series analysis algorithms to assess surface dynamics. Our code repository automates the retrieval of satellite imagery based on mission specifications, spectral/radar bands, date ranges, and areas of interest (AOIs). It processes the data by stacking raw bands, calculating spectral indices, and applying mission-specific scale factors to produce a single multidimensional file with band and time dimensions. Furthermore, the automated workflow addresses several challenges which are not sufficiently handled in existing cloud computing systems, for example co-registration of Sentinel-2 scenes, advanced cloud-masking options (particularly important in river systems), and geodesy- and GIS-related tasks.
In addition to analysis-ready optical and Synthetic Aperture Radar (SAR) imagery, the system integrates various digital elevation models (DEMs) available through STAC APIs, processed using open geomorphometry tools like the Whitebox Geospatial Analysis Tools. These outputs are then stacked as individual layers within the data cube. UAV field missions will complement this work by contributing LiDAR point cloud results and very high-resolution multispectral bands to the data cube. The data cubes also function as continuous services, automatically updating existing stacks with newly available scenes through periodic execution of the repository. A caching mechanism ensures that identical AOI requests are not redundantly processed, significantly improving workflow efficiency. Finally, the repository's dependencies are fully defined and easy for other users to install without an overwhelming IT workload, reducing technical barriers for new users and offering a scalable, reproducible framework for remote sensing professionals.
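The scale-then-index step described in the workflow (applying mission-specific scale factors before computing spectral indices) can be sketched in a few lines of pure Python. The 1e-4 scale factor matches the Sentinel-2 L2A convention, but the band values below are made up and the function names are not from the project's repository:

```python
# Sketch of the scale-then-index step described above (pure Python, no I/O).
SCALE = 1e-4  # mission-specific reflectance scale factor (Sentinel-2 L2A style)

def to_reflectance(dn_values, scale=SCALE):
    """Apply the mission-specific scale factor to raw digital numbers."""
    return [dn * scale for dn in dn_values]

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, computed per pixel."""
    return [(n - r) / (n + r) if (n + r) != 0 else 0.0
            for r, n in zip(red, nir)]

red = to_reflectance([800, 1200, 400])    # hypothetical B04 digital numbers
nir = to_reflectance([3000, 2400, 3600])  # hypothetical B08 digital numbers

values = ndvi(red, nir)
assert all(-1.0 <= v <= 1.0 for v in values)  # NDVI is bounded by definition
```

In the real workflow the same operation runs over full (time, band, y, x) cubes with Xarray and Dask rather than Python lists.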
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: S1Tiling: A Sentinel-1 Preprocessing Tool for Large Analysis Ready Data Time Series

Authors: Thierry Koleck, Luc Hermitte, Fabien Contival
Affiliations: CNES, CS-GROUP
Sentinel-1 is currently the only system to provide systematic, long-term and free SAR images over all land on Earth. Access to these time series of images opens an extraordinary range of applications. To meet the needs of a large number of applications, users need to pre-process the available Sentinel-1 data to generate "analysis-ready" time series. For many of them, this task can be very time-consuming and an obstacle to developing new applications. To simplify access to well-formed Sentinel-1 time series, CNES and CS-GROUP have developed the open-source project S1Tiling [1]. This software can easily and efficiently generate calibrated, ortho-rectified, radiometrically corrected, and speckle-filtered Sentinel-1 images over any land on Earth. Based on the open-source image processing software Orfeo Tool Box (OTB [2]), S1Tiling is a fully automatic processing chain for Sentinel-1 data, including these features:
- Automatic and smart download from many data providers (Copernicus Data Space, Geodes …)
- Radiometric calibration with thermal noise removal
- Border noise removal
- Geometric correction and orthorectification (with various Digital Elevation Models)
- Radiometric correction (local incidence angle, Gamma0 flattening)
- Speckle filtering
As output, time series are ortho-rectified on the Sentinel-2 MGRS geographic reference grid (UTM projection). As S1Tiling is based on the pipeline of Orfeo Tool Box applications, the processing is very efficient and needs only modest computing resources, even for large time series. It can therefore be used on a High Performance Computing cluster as well as on a laptop. It is considerably faster and more optimized than a SNAP-based chain with similar results, and can easily be used in script form. The software can also be adapted to user-specific processing by adding more processing steps. Developer documentation is available online to customize S1Tiling.
S1Tiling is currently used for many applications, both in research and in operational systems. As examples, S1Tiling provides Analysis Ready Data for real-time deforestation detection in the tropics [3], monitoring of rice crops in Southeast Asia [4] and monitoring of water stocks in India. In the framework of the THEIA project, S1Tiling is currently being implemented in the HESPERIDES production centre. It will produce Sentinel-1 Analysis Ready Data over different regions and distribute the time series to users. The next step in the development of S1Tiling will include Single Look Complex processing with co-registration. It will be available in the next release. References: [1] https://gitlab.orfeo-toolbox.org/s1-tiling/s1tiling [2] https://www.orfeo-toolbox.org/ [3] https://www.tropisco.org [4] https://www.vietsco.org
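The fixed, ordered nature of the chain listed above can be sketched as a sequential pipeline. The step functions below are placeholders that merely record their names, not S1Tiling's actual implementation, which drives Orfeo Tool Box applications on rasters:

```python
# Toy sketch of a fixed, ordered preprocessing chain like the one listed above.
def calibrate(history):       return history + ["radiometric calibration"]
def denoise_border(history):  return history + ["border noise removal"]
def orthorectify(history):    return history + ["orthorectification"]
def flatten_gamma0(history):  return history + ["gamma0 flattening"]
def filter_speckle(history):  return history + ["speckle filtering"]

PIPELINE = [calibrate, denoise_border, orthorectify, flatten_gamma0, filter_speckle]

def run_chain(scene_id):
    """Apply every step in order, returning the processing history."""
    history = [scene_id]
    for step in PIPELINE:
        history = step(history)
    return history

history = run_chain("S1A_IW_GRDH_example")  # hypothetical product ID
assert history[-1] == "speckle filtering"
```

Chaining steps this way, with each stage consuming the previous stage's output, is what lets such a tool stream through large time series without holding intermediate products longer than necessary.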
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Supporting Open Science for InSAR Research: An Integrated Toolkit - SARvey, Erudite, and InSAR Explorer

Authors: Andreas Piter, Dr. Mahmud Haghshenas Haghighi, Mahdi Motagh
Affiliations: Institute Of Photogrammetry And Geoinformation, Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences
Interferometric Synthetic Aperture Radar (InSAR) is a remote sensing technique used to measure displacements of the Earth's surface, typically on a scale of millimetres to centimetres. InSAR finds applications across various fields, including geophysical analysis, geological studies, and geodetic monitoring. Medium- and high-resolution sensors onboard satellite missions such as Sentinel-1 or TerraSAR-X provide researchers with image stacks, which are used to derive time series of displacement associated with natural or human activities such as mining, groundwater pumping, landslides, or tectonic and volcanic events. Over the past two decades, several InSAR time series methods have been developed, including persistent scatterer interferometry (PSI) and the small baseline subset (SBAS). The SBAS method has been implemented in several open-source software packages, including MintPy, LiCSBAS, and GMTSAR, and has been used broadly by the scientific community, mainly for geophysical and geological applications. Conversely, the PSI method has been primarily integrated into internal or proprietary software, with the notable exception of StaMPS, an open-source implementation of PSI that has been extensively used by the scientific community across diverse applications, despite its reliance on Matlab, which is commercial software. The current status of available InSAR time series software for research indicates significant opportunities for improvement regarding software accessibility, transparency, reproducibility, and quality assessment of results. First, there is a particular need for accessible software solutions tailored for engineering applications. Second, the interpretation of intermediate products of InSAR time series analysis can be complex and often non-intuitive, which may lead to misinterpretations of the resulting time series if substantial expertise is not available.
Third, comparing the results of different InSAR time series methods is typically challenging, as each software operates on its own visualisation and analysis platform. In this work, we address these three challenges. We present a toolkit of three packages that can be used to improve InSAR time series analysis, its understanding, and its visualisation. The "SARvey - survey with SAR" software is the first and core element of our toolkit, enabling InSAR time series analysis processing. The "Erudite - Exploring, Retrieving and Understanding Displacement Time series" package provides a graphical user interface for visualising different components of a SAR stack and analysing intermediate and final products, in order to understand the processing and assess the reliability of the final results. Finally, the "InSAR Explorer" package provides an interface for interactively visualising the final results and plotting time series in a GIS environment. We introduce SARvey as free and open-source software for PSI analysis. SARvey is written in Python for broad accessibility and community contribution. In alignment with the FAIR principles for sustainable research software, SARvey is findable and accessible on GitHub, with each version archived in a Zenodo repository under a globally unique and persistent identifier. Its modular structure and GPL-3.0 open-source license enhance its reusability and adaptability. Furthermore, it has a continuous integration pipeline with automatic tests that allows sustainable software development. SARvey is specifically designed for retrieving displacement time series for engineering applications, such as structural health assessment of buildings, highways, railways, bridges and dams. Nevertheless, testing has demonstrated its utility in non-engineering contexts as well. Beyond PSI analysis, SARvey is interoperable with both the MintPy and MiaplPy software for SBAS analysis and Distributed Scatterer filtering (Phase Linking).
Since its release in October 2024, several research groups have been using SARvey and its user community continues to grow. Furthermore, its functionality has already been validated in several publications. Notably, SARvey serves as the core of the automatic processing pipeline within the SAR4Infra project, funded by the German Federal Ministry of Digital and Transport, which provides an on-demand InSAR time series analysis service to the German land surveying authorities. The service is deployed on a cloud server and uses Sentinel-1 images to derive displacements at transport infrastructures in northern Germany. The Erudite package enhances the understanding of InSAR time series methods, including their inputs, intermediate products, and outputs. Designed for interoperability with SARvey, it facilitates the evaluation of consistency and the execution of quality checks on displacement time series derived from SARvey. Erudite utilises Qt for Python to provide an interactive interface for data exploration and to facilitate pre-analysis of input data. It also enables rapid on-the-fly analyses of individual pixels and comparison with the time series products of SARvey. Users can also examine intermediate products, which enhances understanding of the processing workflow and enables them to assess the results of individual pixels. Additionally, Erudite offers insights into the performance and limitations of individual components within the SARvey workflow, such as pixel selection and phase unwrapping, thereby offering better intuition about parameter settings in SARvey. Developed in Python, Erudite will be available under an open-source license on GitHub. Finally, the InSAR Explorer package offers a graphical user interface within QGIS, enabling users to interactively visualize final products and plot displacement time series. The InSAR Explorer plugin is readily accessible via the QGIS plugins repository, ensuring ease of use.
The source code is written in Python and is available on GitHub under the GPL-2.0 license, with each version additionally archived in a Zenodo repository. This plugin allows users to visualize results generated by the SARvey software, interactively plot deformation time series, and export plots in various formats. To facilitate comparisons of InSAR products across different software, InSAR Explorer supports outputs from MintPy, MiaplPy, GMTSAR, and StaMPS. This capability enables comprehensive comparison of results obtained from various InSAR time series methods and software. On the path to Open Science in InSAR time series analysis research, we are committed to providing open educational resources to complement our open-source software. This includes comprehensive documentation for the software packages, as well as test datasets and demo tutorials offered to enhance the user experience. Additionally, we aim to offer workshops and tutorials on the use of SARvey at various conferences in the future.
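The millimetre-to-centimetre sensitivity mentioned at the start of the abstract comes from the standard phase-to-displacement relation d = -λφ/(4π) for line-of-sight motion. A minimal sketch using Sentinel-1's C-band wavelength (~5.55 cm); the sign convention varies between processors, and the phase value is made up:

```python
import math

# Line-of-sight displacement from unwrapped interferometric phase:
#   d = -lambda * phi / (4 * pi)
# Sign conventions differ between processors; this is one common choice.
WAVELENGTH_M = 0.0555  # Sentinel-1 C-band radar wavelength in metres

def phase_to_displacement(phi_rad, wavelength=WAVELENGTH_M):
    """Convert unwrapped interferometric phase (radians) to LOS displacement (m)."""
    return -wavelength * phi_rad / (4 * math.pi)

# One full phase cycle (2*pi) corresponds to half a wavelength of LOS motion,
# i.e. about 2.8 cm for Sentinel-1 - hence the technique's fine sensitivity.
d = phase_to_displacement(2 * math.pi)
assert abs(abs(d) - WAVELENGTH_M / 2) < 1e-12
```

Because a fraction of a phase cycle is measurable, displacements far smaller than the ~2.8 cm half-wavelength can be resolved, which is what the PSI and SBAS time series methods exploit.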
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Uncertainty Quantification for Geospatial Deep Learning applications and the Lightning UQ Box

Authors: Nina Maria Gottschling, Nils Lehmann, Jakob Gawlikowski, Adam J. Stewart
Affiliations: MF-DAS OP - EO Data Science, Deutsches Zentrum für Luft- Und Raumfahrt (DLR), Data Science in Earth Observation, Technical University of Munich (TUM)
General machine and deep learning open-source frameworks like scikit-learn or PyTorch have led to tremendous research advancements in the application of these powerful tools to geospatial data. Over the years, the data collected by hundreds of satellites have created datasets totalling petabytes. This rich source of information allows for applications in, among others, climate change, biodiversity loss, crop yield monitoring and disaster response. Almost always these applications require decision making under uncertainty, and although neural networks have shown impressive results in a multitude of application domains, the "black box" nature of deep learning and the lack of confidence estimates are met with criticism in the geospatial community. Research on uncertainty quantification has helped elucidate the reliability of these models, and a series of open-source frameworks have appeared. Here, we summarize existing open-source software for geospatial deep learning with uncertainty quantification. As we find a lack of flexibility in these packages regarding their application to geospatial data, where data sources are much more diverse than in general machine learning datasets, we introduce Lightning UQ Box, a PyTorch-based Python library for deep learning-based UQ methods powered by PyTorch Lightning. Released under the open-source Apache-2.0 license, the library supports classification, regression, semantic segmentation, and pixelwise regression applications, and UQ methods from a variety of theoretical motivations, such as Bayesian deep learning, conformal prediction, or recent developments in generative modeling. In our presentation, we will highlight how existing deterministic PyTorch architectures can easily be coupled with an uncertainty quantification technique, such that the model outputs task-specific predictive uncertainty estimates.
Additionally, an emphasis has been placed on implementing the uncertainty techniques in such a way that they can scale with both model and dataset size. We will furthermore highlight how these uncertainty estimates should be evaluated and provide the tools to do so. With the library and presentation, we provide an entry point for practitioners new to UQ, as well as easy-to-use components and tools for scalable deep learning applications. Feel free to take a look at our GitHub repository, https://github.com/lightning-uq-box/lightning-uq-box, or documentation page, https://lightning-uq-box.readthedocs.io/en/latest/, for more information.
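One of the simplest UQ techniques in this family is a deep ensemble, where the spread of member predictions serves as the uncertainty estimate. A library-free sketch of that idea (this is not Lightning UQ Box's actual API; the toy lambdas stand in for trained networks):

```python
import statistics

# Library-free sketch of deep-ensemble uncertainty: run several "models" on the
# same input and treat the spread of their predictions as the uncertainty.
ensemble = [
    lambda x: 2.0 * x + 0.10,   # each lambda plays the role of one trained model
    lambda x: 2.1 * x - 0.20,
    lambda x: 1.9 * x + 0.05,
]

def predict_with_uncertainty(x):
    """Return (mean prediction, standard deviation across ensemble members)."""
    preds = [model(x) for model in ensemble]
    return statistics.mean(preds), statistics.stdev(preds)

mean, std = predict_with_uncertainty(1.0)
assert std > 0.0  # disagreement between members yields a nonzero uncertainty
```

A dedicated library adds what this sketch omits: training the members, scaling to large models and pixelwise outputs, and proper evaluation metrics for the resulting uncertainty estimates.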
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Overview of geospatial tools stack through Earth Observation API (eoAPI)

#stac

Authors: Emmanuel Mathot, Jonas Sølvsteen
Affiliations: Development Seed
The Earth Observation API (eoAPI) represents a modern community-standard approach to managing and analyzing complex Earth observation data, addressing critical challenges in geospatial technology and environmental monitoring. Free and open source, maintained by Development Seed, eoAPI emerges as a comprehensive solution to the challenges of satellite imagery and planetary data hosting and access at scale. At its core, eoAPI consists of modular open-source software components designed to simplify the intricate process of earth observation data management. Recognizing the fundamental challenge of underutilized imagery, where traditional methods are time-consuming and require specialized expertise, the API provides a unified, customizable solution that seamlessly integrates with existing cloud environments. This approach democratizes access to sophisticated earth observation technologies, breaking down barriers for researchers, environmental scientists, agricultural specialists, and technology companies. The individual components and their combined use have already gained significant traction among global technology leaders, with implementations by NASA IMPACT, AWS Sustainability, Microsoft Planetary Computer and Planet. eoAPI for Kubernetes constitutes an essential building block in the EO Exploitation Platform Common Architecture (EOEPCA) initiative led by the European Space Agency. These use cases demonstrate eoAPI's versatility across multiple domains, from climate research and environmental monitoring to agricultural land management and geospatial data services. Technically, eoAPI combines several state-of-the-art open-source projects from many community contributors to create a full Earth Observation API. Each service can be used and deployed independently, but eoAPI creates the interconnections between them:
● Database: The STAC database is at the heart of eoAPI and is the only mandatory service. We use the PgSTAC Postgres schema and functions, which provide functionality for STAC filters, CQL2 search, and utilities to help manage the indexing and partitioning of STAC Collections and Items.
● Metadata: The Metadata service deployed in eoAPI is built on the stac-fastapi.pgstac application. By default, the STAC metadata service has a set of endpoints to search and list STAC collections and items. All reformatted and re-engineered data will be registered via this service, making extensive use of the relevant STAC extensions for characterising and searching the Sentinels data.
● Raster: The Raster service deployed in eoAPI is built on top of titiler-pgstac. It enables raster visualisation for a single STAC Item as well as large-scale (multi collections/items) mosaics based on STAC search queries.
● Vector: The OGC Features and (Mapbox Vector) Tiles API service deployed in eoAPI is built on top of TiPg. It enables vector Features/Feature Collection exploration and visualisation for tables stored in the Postgres database (in the public schema). It is not strictly necessary in the scope of the reformatted and re-engineered data, but could be used for specific User Adoption activities.
● Browsing UI: The browsing UI deployed in eoAPI is built on the Radiant Earth STAC Browser, and provides a configurable, user-friendly interface to search across and within collections and quickly visualise single item assets.
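The CQL2 search capability provided by PgSTAC can be illustrated with a minimal STAC API `/search` request body. The structure follows the STAC API filter extension, while the collection name and the cloud-cover threshold are invented for illustration:

```python
import json

# Minimal STAC API /search request body using a CQL2 JSON filter, of the kind
# PgSTAC / stac-fastapi can serve. Collection name and threshold are illustrative.
search_body = {
    "collections": ["sentinel-2-l2a"],
    "bbox": [10.0, 50.0, 11.0, 51.0],
    "datetime": "2024-01-01T00:00:00Z/2024-12-31T23:59:59Z",
    "filter-lang": "cql2-json",
    "filter": {
        "op": "<",
        "args": [{"property": "eo:cloud_cover"}, 20],
    },
    "limit": 10,
}

# The body is plain JSON, ready to POST to a STAC API /search endpoint.
payload = json.dumps(search_body)
assert json.loads(payload)["filter"]["op"] == "<"
```

The same search expression can also drive the Raster service, since titiler-pgstac builds mosaics from exactly this kind of STAC search query.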
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: xDEM: Unifying Open-Source Software for 3D Geospatial Analysis

Authors: Alice de Bardonnèche-Richard, Emmanuel Dubois, Sébastien Dinot, Amaury Dehecq, Romain Hugonnet, Erik Schytt Mannerfelt
Affiliations: CS Group, CNES, GlacioHack Community
Did you know there are over 300 open-source window manager projects? In the field of Digital Elevation Model (DEM) analysis, at least two tools stand out: demcompare, developed by CS for CNES (the French Space Agency), and xDEM, developed by the GlacioHack community. This abundance of open-source software is not merely anecdotal; competing projects are a common feature of this ecosystem. While this competition can lead to innovative or more efficient solutions, it also results in a significant dispersion of effort and duplication of human and technological resources. To avoid this loss of energy, CNES and the GlacioHack community decided to pool their efforts to create a more robust project. Providing glaciologists and analysts with software that is efficient in terms of time and memory will enable them to address climate crisis issues under the best possible conditions. CS's team conducted a complete audit of 3D geospatial analysis tools. Following this audit, three merging strategies were proposed:
- Winner Takes All (WTA) demcompare: this approach leverages demcompare, a tool that meets CNES's needs and allows them full freedom to modify its source code. However, the community is small, and GlacioHack lacks the time to adopt this tool fully.
- New software from scratch: this would only triple the effort and resources required.
- Winner Takes All (WTA) xDEM: this benefits from a more advanced tool that partially meets CNES's needs and enjoys greater recognition within the scientific community.
Since then, development efforts have begun to unify resources in the xDEM tool, integrating parts of demcompare into xDEM. Come and discover the technical, legal, and social challenges that had to be addressed to make this merger successful without alienating part of the communities. Today, xDEM's application to operational missions is both relevant and timely, demonstrating the truth of the adage: "Many hands make light work."
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: A Dynamic and Platform-Agnostic Standard for Awesome Spectral Indices

Authors: David Montero Loaiza, Prof. Dr. Miguel D. Mahecha, Cesar Aybar, Francesco Martinuzzi, Sebastian Wieneke
Affiliations: Leipzig University, University of Valencia, German Centre for Integrative Biodiversity Research (iDiv), ScaDS.AI
Spectral indices derived from multispectral remote sensing data play a crucial role in monitoring Earth system dynamics. These indices serve a diverse array of applications, extending from vegetation analysis, the most prominent domain, highlighted by indices such as the widely recognized Normalized Difference Vegetation Index (NDVI), to the monitoring of water bodies, soil properties, urban environments, and snow cover. Their ease of computation and straightforward interpretation make spectral indices highly popular among researchers, spurring the continuous development of novel indices for various sensors, ranging from satellite-based multispectral sensors to hyperspectral sensors and even modeled spectra and simple RGB cameras mounted on drones. The rapid spread of spectral indices has driven a demand for comprehensive catalogs and computational tools, among which the "Awesome Spectral Indices" (ASI) ecosystem has emerged as a central resource. ASI provides an open catalog of spectral indices, employing a standardized, interoperable, machine-readable format. This foundation has enabled seamless extension of the ASI ecosystem to multiple platforms and programming languages, including Python (via the spyndex package), Google Earth Engine (via the spectral JavaScript module), and Julia (via the SpectralIndices.jl package), while opening the door for third parties to develop on top of it (e.g., the rsi R package, developed outside the ASI ecosystem). However, the existing ASI standard currently supports bands from only the most popular multispectral instruments, limiting its flexibility to include bands from platforms outside this standard, such as ASTER or the upcoming Landsat Next mission. In this work, we introduce a new dynamic standard for ASI that is easily and automatically extensible to a broad range of platforms, including multispectral, hyperspectral, and modeled spectra.
This enhanced standard maintains its open format, allowing integration across different programming languages while also providing the capability to link spectral indices with specific platforms and project them onto new ones based on the available spectral bands. This dynamic framework significantly facilitates the addition of new indices, enabling the remote sensing community to continuously expand the catalog as novel indices are developed and published. The proposed approach will foster transparency, traceability, and increased domain knowledge, supporting advancements in remote sensing and Earth system monitoring.
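The machine-readable catalog format described above lends itself to generic evaluation. As a rough illustration of the idea only (the dictionary layout, band keys, and `compute_index` helper below are invented for this sketch and are not the actual ASI JSON schema or the spyndex API), an index definition can be applied to any set of matching bands:

```python
# Minimal sketch of evaluating a machine-readable spectral index
# definition. The dictionary layout is illustrative only; it is not
# the actual ASI schema.
ndvi_definition = {
    "short_name": "NDVI",
    "formula": "(N - R) / (N + R)",   # N = near-infrared, R = red
    "bands": ["N", "R"],
}

def compute_index(definition, reflectances):
    """Evaluate an index formula against a dict of band reflectances."""
    missing = [b for b in definition["bands"] if b not in reflectances]
    if missing:
        raise ValueError(f"missing bands: {missing}")
    # eval() stands in for a real expression parser; formulas would come
    # from a trusted, versioned catalog.
    return eval(definition["formula"], {"__builtins__": {}}, reflectances)

ndvi = compute_index(ndvi_definition, {"N": 0.6, "R": 0.1})
# (0.6 - 0.1) / (0.6 + 0.1) ≈ 0.714
```

Because the formula and band list travel together in one record, adding support for a new platform reduces to declaring which of its bands map onto the standard band names, which is the extensibility the proposed dynamic standard targets.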

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: C.05.01 - POSTER - PROBA-V and PV-CC

The Proba-V operational mission ended in October 2021 after 8 years of successful operations, followed by an experimental mission phase (which included regular Moon sensing). The data collected during the 8+ years largely exceeded expectations, providing not only continuity with the SPOT-VGT data but also a threefold increase in resolution.
The definition of a new processor prototype for the final full data reprocessing is ongoing, also including the SPOT-VGT data, with the goal of creating a fundamental data record for vegetation (FDR4VGT) collection covering more than 20 years.
In addition, Proba-V carried other instruments, of which the Energetic Particle Telescope (EPT) is of particular interest.
In October 2023 a small satellite (PV-CC) was launched with the spare camera of the vegetation instrument.
PV-CC's purpose is to extend the Proba-V mission and to analyse and compare the results obtained by carrying the same vegetation instrument as Proba-V on board a 12U microsat. A further microsat, a hyperspectral mission, is planned, with launch foreseen by September 2025.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: A consistent cloud masking algorithm for reprocessing 20 years of Proba-V and SPOT-VGT data records

Authors: Prof. Luis Gómez-Chova, Julio Cesar Contreras Huerta, Cesar Aybar, Gonzalo Mateo-García, Dr Fabrizio Niro
Affiliations: Image Processing Laboratory, University of Valencia, International Methane Emissions Observatory, United Nations Environmental Programme, Serco for European Space Agency ESA/ESRIN
The Proba-V mission was designed to fill the gap between the SPOT-VGT and ESA Sentinel-3 satellite missions and it has exceeded all expectations, providing over 8 years of data. Now that Proba-V has reached the end of its operations, the goal is to build a fundamental data record for vegetation (FDR4VGT) covering over 20 years. The objective of the FDR4VGT project consists of the reprocessing of the VGT1/VGT2/Proba-V archives to build a 20+ year data record of coherent and harmonized TOA/TOC products. Therefore, a reliable and consistent cloud masking algorithm is one of the requirements for processing data from all three instruments. Cloud masking is a mandatory processing step for the estimation of vegetation biophysical parameters from images acquired by optical instruments onboard Earth observation satellites. However, cloud detection is particularly challenging due to the limited number of spectral bands acquired by Proba-V and SPOT-VGT: four bands in the blue, red, near infrared and shortwave infrared. During the reprocessing of Proba-V C2, several cloud masking methodologies were tested and compared to the operational Proba-V cloud mask algorithm (v001). The proposed machine learning based algorithms were identified as the most promising approach and implemented in the current operational processor. Within the framework of the FDR4VGT project, further efforts are devoted to the development of machine learning cloud masking algorithms for Proba-V and SPOT-VGT. The goal is to improve and harmonize their cloud masks by extending the Proba-V 1km cloud mask algorithm to SPOT-VGT data. To achieve this, we have created a high-quality dataset containing manually labeled clouds covering all critical situations: optically thin clouds and bright surfaces such as ice and snow. 
However, cross-mission applications require a level of data harmonization that is not easy to meet, as differences in the TOA radiance products, such as the high saturation in the blue band of Proba-V, generally hinder data sharing across sensors. Essentially, it can be thought of as a domain adaptation problem that can be mitigated by reducing the statistical differences between Proba-V and SPOT-VGT images. In this work, we have implemented a deep learning cloud detection algorithm based on fully convolutional neural networks that allows us to conveniently exploit and combine the spectral and spatial information. Acknowledgements: This work was supported by the European Space Agency (ESA VITO FDR4VGT project) and by the Spanish Ministry of Science and Innovation (PID2019-109026RB-I00, PID2023-148485OB-C21).
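To illustrate why four bands make cloud detection hard, a naive per-pixel brightness/whiteness screen — the kind of simple threshold baseline that learned methods are compared against — can be sketched as follows. Thresholds, band values, and the function name are illustrative only, not the operational Proba-V v001 algorithm:

```python
import numpy as np

# Toy per-pixel cloud screening for a four-band sensor (blue, red,
# NIR, SWIR). Thresholds are made-up illustrative values.
def toy_cloud_score(blue, red, nir, swir,
                    bright_thresh=0.3, white_thresh=0.1):
    bands = np.stack([blue, red, nir, swir])
    brightness = bands.mean(axis=0)
    # clouds are bright AND spectrally flat ("white")
    whiteness = np.abs(bands - brightness).mean(axis=0)
    return (brightness > bright_thresh) & (whiteness < white_thresh)

# Pixel 1: bright and flat (cloud-like); pixel 2: vegetated surface.
blue = np.array([0.45, 0.04]); red = np.array([0.47, 0.06])
nir = np.array([0.48, 0.45]);  swir = np.array([0.44, 0.20])
mask = toy_cloud_score(blue, red, nir, swir)
# mask → [True, False]
```

Such hand-tuned rules fail exactly in the critical situations mentioned above (thin clouds, snow and ice are also bright and flat), which is the motivation for the manually labeled dataset and the learned, spatially aware models.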

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Towards a 20+ years harmonised data record of land surface reflectances derived from VGT-1, VGT-2 and Proba-V sensors

Authors: Dr Marta Luffarelli, Nicolas Misk, Dr Yves Govaerts, Iskander Benhadj, Ms. Sindy Sterckx, Fabrizio Niro, Roberto Biasutti
Affiliations: Rayference, VITO, ESA ESRIN
The SPOT-VEGETATION (SPOT-VGT) and PROBA-V missions provide an exceptional opportunity for climate research, offering over two decades of global and continuous land surface observations. To fully exploit the potential of these observations, it is essential to assess the radiometric, geometric, and spectral consistency of the sensors, as well as the temporal stability of the archive. A metrological approach, as outlined in the FIDUCEO guidelines, is indispensable for achieving reliable quantification of anthropogenic trends in derived Essential Climate Variables (ECVs). To ensure the robustness of these data records, the ESA Fundamental Data Record for VGT (FDR4VGT) project has implemented advanced methodologies for characterizing and harmonizing observations from SPOT-VGT and PROBA-V. Central to this effort is the development of Level-1 pixel-level uncertainty estimates for all three sensors. This methodology involves the creation of uncertainty diagrams that allow the identification of all potential sources of uncertainty and their correlation structure. The improved radiometric characterization is expected to enhance the accuracy and consistency of ECV retrievals, including aerosol optical thickness and surface albedo, from the SPOT-VGT and PROBA-V archives. Such improvements build on findings from earlier research, including the ESA SPAR@MEP project, and pave the way for more precise monitoring of global climate dynamics (Luffarelli et al., 2023). A major challenge in creating a consistent Fundamental Data Record (FDR) lies in harmonizing the three sensors. Harmonization, as defined by FIDUCEO, involves calibrating all sensors against a common reference dataset that can be traced back to known and, ideally, SI-traceable standards. This process ensures that the calibrations are consistent, while preserving the unique characteristics of each sensor.
However, the lack of an SI-traceable, in-flight calibration reference - such as the kind that will be provided by the upcoming TRUTHS mission - presents a challenge. To address this, the Rayference Radiometric Calibration Reference (RRCR) has been utilized as an alternative. Within the RRCR, the surface reflectance is modeled using Rahman-Pinty-Verstraete (RPV) parameters, as described in Luffarelli et al. (2025, in preparation), with a high spectral resolution of 1 nm. Atmospheric conditions are characterized using gas profiles provided by the Copernicus Atmospheric Monitoring Service (CAMS) and an aerosol model developed by Rayference. This aerosol model integrates data from multiple sources, including AERONET and OPAC, to provide an accurate representation of atmospheric conditions. Satellite observations are then simulated using the open-source 3D radiative transfer model Eradiate. Current estimates suggest the RRCR achieves a calibration accuracy of approximately 3%, but ongoing refinements aim to reach the 1% target required by upcoming satellite missions. To facilitate harmonization, SPOT-VGT and PROBA-V observations were simulated using the RRCR, focusing on four Pseudo-Invariant Calibration Sites (PICS): Libya4, Niger2, Algeria5, and Arabia2. These sites are known for their stable surface reflectance properties, making them ideal for calibration activities. The added value of these extensive radiometric characterization and harmonization efforts is validated through the application of the CISAR algorithm (Luffarelli et al., 2022) to a diagnostic dataset. CISAR enables the retrieval of surface reflectance and aerosol properties, which are then compared against reference datasets, including ground-based observations, atmospheric models, and data from other satellite missions. This validation ensures that the improved methodologies deliver tangible benefits in terms of data quality and consistency. 
Furthermore, the quality of retrievals is evaluated in terms of pixel-level uncertainties, providing deeper insights into the confidence levels of the derived data products. The outcomes of the FDR4VGT project are expected to highlight the importance of consistent and accurate radiometric calibration for long-term satellite missions. The methodologies and tools developed within this project not only enhance the usability of the SPOT-VGT and PROBA-V archives but also set a foundation for similar efforts with future missions; in particular, extending the time series with Sentinel-3 observations could ensure the continuation of the long-term data record of land observations.
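For readers unfamiliar with pixel-level uncertainty budgets: independent standard uncertainty components are conventionally combined in quadrature (root-sum-square), which is the simplest case of the FIDUCEO-style propagation described above. The component names and values in the sketch below are made up for illustration and are not the actual FDR4VGT budget:

```python
import math

# Minimal sketch: combine independent standard uncertainty components
# in quadrature. Component names/values are illustrative only.
def combined_standard_uncertainty(components):
    """Root-sum-square of independent standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in components.values()))

budget = {
    "noise": 0.003,          # random radiometric noise
    "calibration": 0.004,    # calibration uncertainty
    "stray_light": 0.002,
}
u_total = combined_standard_uncertainty(budget)
# sqrt(0.003² + 0.004² + 0.002²) ≈ 0.00539
```

The real budgets are more involved precisely because components are not all independent: the correlation structure captured in the uncertainty diagrams determines which terms add in quadrature and which add linearly.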

Monday 23 June 17:45 - 18:30 (Frontiers Agora)

Session: F.04.21 Waves of Data: EO’s Role in Ensuring Global Food Security

Earth Observation (EO) technologies are generating a wave of essential data that is transforming agricultural practices and advancing global food security efforts. EO systems—particularly through satellite imagery—provide crucial insights into crop location, yield, health, land use changes, and environmental impacts, enabling timely responses to challenges such as food insecurity, climate change, and resource management.
As a leading technology organization, the European Space Agency (ESA) spearheads the development of advanced EO systems, including the Copernicus program, which offers high-resolution data essential for agricultural monitoring. In collaboration with global institutions such as the United Nations Food and Agriculture Organization (FAO), the Institute of Agricultural Resources and Regional Planning (IARRP) of the Chinese Academy of Agricultural Sciences (CAAS), the Group on Earth Observations Global Agricultural Monitoring (GEOGLAM), and the European Union (EU), ESA’s EO data plays a critical role in shaping policies, enhancing agricultural resilience, and addressing global food crises.
This Agora session will explore how ESA’s technological innovations—developed in partnership with FAO, CAAS, the EU, and NASA Harvest—are transforming agricultural research and development, promoting food security, and informing global decision-making. Through practical case studies, the session will showcase how state-of-the-art EO technologies are being used to monitor crop yields, assess environmental stresses, and develop strategies for sustainable agricultural practices.
Participants will be invited to engage in dynamic discussions on how EO can further drive agricultural innovation, improve data accessibility, and strengthen international collaboration. By integrating cutting-edge technology with policy frameworks, these collective efforts are helping to build a more sustainable and food-secure future for all.

Moderators:


  • Dr. Sven Gilliams, Director - GEOGLAM
  • Dr. Zoltan Szantoi - Land Applications Scientist, ESA

Panel members:


  • Dr. José A. Rosero Moncayo - Director of the Statistics Division (ESS), FAO
  • Prof Dr Wenbin Wu - Director General Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences
  • Dr. Bettina Baruth - Senior Advisor to the Director of JRC
  • Dr. Inbal Becker-Reshef - NASA Harvest Director

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: F.02.05 - POSTER - Capacity Building and Technology Transfer in Earth Observation

This session will focus on capacity building and technology transfer initiatives that aim to enhance the EO capabilities of developing countries. It will present successful programs and discuss strategies for effective knowledge and technology transfer. This includes best practices, case studies of successful capacity building initiatives and the challenges and opportunities of technology transfer.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: CADEO Project: Empowering Vietnam's Future through Geospatial and Earth Observation Innovation

Authors: Qiongjie Xu, Maria Antonia Brovelli, Dr. Vasil Yordanov, Prof. Minh Nguyen Quang, Prof. Ali Mansourian, Prof. Linh Nguyễn Hoàng Khánh, Prof. Vien Ha Thuc, Prof. Dinh Nguyen Quoc, Mr. Thanh Le Phuoc
Affiliations: POLITECNICO DI MILANO, HANOI UNIVERSITY OF MINING AND GEOLOGY, LUNDS UNIVERSITET, HUE UNIVERSITY, VIETNAMESE GERMAN UNIVERSITY, PHENIKAA UNIVERSITY, VIETNAM DENMARK VIDAGIS
Vietnam, a nation of stunning natural beauty, is increasingly vulnerable to the devastating impacts of climate change. As the sixth most affected country by natural disasters, the nation faces significant threats from extreme weather events, including floods, droughts, and typhoons. The severe consequences of these disasters, such as widespread damage and loss of life, pose a serious risk to the sustainability of Vietnam's rapid economic growth and its promising future. The CADEO (Climate change Adaptation using Digital geospatial twins and Earth Observation) project aims to tackle these challenges by harnessing advanced geospatial technologies. By leveraging innovations in geospatial data, Earth Observation (EO), and Digital Twin Earth (DTE) technologies, the project seeks to enhance Vietnam's capacity to monitor, analyze, and adapt to environmental and societal changes. The initiative focuses on equipping Vietnamese higher education institutions (HEIs) with the expertise to utilize EO and DTE frameworks, fostering a more resilient and sustainable future for Vietnam. Participating in the project are four Vietnamese Higher Education Institutions (HEIs): Hanoi University of Mining and Geology (HUMG), Hue University (HU), Vietnamese-German University (VGU), and Phenikaa University (PU), alongside VidaGIS, a prominent geospatial enterprise acting as a bridge between academia and industry for the application of advanced geospatial technologies. Leading the project are two top European HEIs in the geospatial domain: Politecnico di Milano (POLIMI), Italy (Coordinator), and Lund University (LU), Sweden. The primary goal of the project was to design and implement four innovative master-level courses: Earth Observation, Geospatial Intelligence, Geospatial Web Applications, and Digital Twin Earth.
These courses incorporate cutting-edge content and modernized teaching approaches, such as a blended learning model utilizing eLearning infrastructure, developed collaboratively among project partners. As part of the project, faculty members from Vietnamese partner higher education institutions received training to deliver these high-quality courses and enhance their research capabilities. The courses were co-developed by European and Vietnamese experts, ensuring a globally relevant and high-standard curriculum. To provide practical experience, the courses include real-world case studies and hands-on exercises, equipping students with the skills needed to address contemporary challenges. Adopting a blended learning approach accommodates diverse student needs and optimizes learning outcomes. The course materials underwent rigorous expert review to ensure their quality, relevance, and alignment with international standards. Feedback from experts was used to refine the course content and delivery methods. In addition to the master-level courses, the project offers short courses for geospatial professionals. These short courses provide opportunities for upskilling and reskilling, enabling professionals to stay updated with the latest advancements in geospatial technologies. The contribution to the Living Planet Symposium will present CADEO and its products, which are made available with an open license.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Building Earth Observation (EO) Capacity in the Philippines: Lessons from CopPhil

Authors: Carla Mae Arellano, Vanessa Streifeneder, Dr. Zahra Dabiri, Dr. Daniel Hölbling, Prof. Dr. Stefan Lang, Dr. Peter Zeil, Eva-Maria
Affiliations: Z_GIS, University of Salzburg
The National Copernicus Capacity Support Action Programme for the Philippines (CopPhil) demonstrates how targeted knowledge transfer and capacity-building initiatives can enhance the use of Earth Observation (EO) technologies for environmental management, disaster preparedness, and sustainable development. Emerging from a strategic partnership between the European Union (EU), the European Space Agency (ESA), and the Philippine Space Agency (PhilSA), CopPhil underscores the EU’s commitment to advancing geospatial capacity in partner nations. This study explores CopPhil's methodologies and outcomes, showcasing its potential as a scalable framework for global collaboration. CopPhil adopts a data-driven approach to capacity development, beginning with an EO and Geoinformation (EO/GI) maturity assessment of the Philippines during the project's bridging phase. This foundational study revealed critical gaps in technical skills, data infrastructure, and localized EO applications across sectors like agriculture, disaster management, and urban planning. To address these gaps, the project’s knowledge and skills transfer team developed a multi-phase training strategy anchored in skills gap analysis, co-creation workshops, and tailored training. The skills gap analysis involved 22 stakeholders from government agencies, academia, and institutions, identifying deficits in advanced EO data processing, programming, large-scale data management, and the application of machine learning in geospatial analysis—barriers to the effective use of EO data. To validate and refine these findings, a co-creation workshop was held in October 2024, bringing together experts and local stakeholders from key institutions. This participatory approach ensured that training efforts were aligned with real-world needs while fostering a platform for meaningful exchange. 
Based on these insights, a series of ten training courses was designed to equip participants with practical knowledge, ranging from foundational EO concepts to advanced applications. The first course, conducted in November 2024, highlighted the critical role of EO in the Philippines and introduced CopPhil’s three pilot EO services: ground motion monitoring, land cover and forest mapping, and benthic habitat analysis. Practical use cases, such as identifying land subsidence, mapping forest and crop extent changes, and conducting bathymetric assessments, directly tied the training to pressing national challenges. Rooted in the broader EU-Philippines partnership, CopPhil reflects shared goals for capacity building and sustainable development. Copernicus data integration and uptake enhance national EO infrastructure while fostering local expertise. ESA’s collaboration with PhilSA exemplifies a balanced exchange: European experts contribute technical knowledge, while Philippine stakeholders ensure relevance through contextual insights. The CopPhil model offers valuable lessons for global capacity-building efforts. By combining local stakeholder engagement, co-creation of training content, and long-term institutional support, CopPhil demonstrates how tailored EO capacity-building initiatives can drive practical applications and sustainable decision-making across diverse contexts.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Hyperspectral Remote Sensing: A Key Pillar of Earth Observation Cooperation for Capacity Building Among European Space Agencies

Authors: Deodato Tapete, Dr. Daniele Cerra, Dr. Iulia Dana Negula, Brunella Castelli, Rina Angeletti, Dr. Alina Radutu, Dr. Violeta Poenaru, Florina Dediu, Mirela Bivolaru, Dr. Daniel-Eugeniu Crunțeanu, Giacomo Lazzeri, Cristina Stancu
Affiliations: Italian Space Agency, German Aerospace Center, Romanian Space Agency, University of Agronomic Sciences and Veterinary Medicine of Bucharest, National University of Science and Technology Politehnica Bucharest
Hyperspectral (HS) remote sensing is a well-established Earth Observation (EO) domain, which has matured over the last 40 years by means of various airborne (AIS, AVIRIS, CASI, HyMAP) and spaceborne instruments (Hyperion, CHRIS). Due to a number of current (e.g. DESIS, PRISMA, EnMAP) and upcoming (e.g. CHIME) HS sensors and missions, its relevance over the next decade is expected to increase even further. Launched by the National Aeronautics and Space Administration (NASA) in 2000, Earth Observing-1 (EO-1) was the first non-military satellite mission equipped with a HS instrument called Hyperion. The instrument acquired 220 spectral bands (30 m spatial resolution) in the visible-near-infrared (VNIR) and shortwave-infrared (SWIR) spectrum. Next, the Compact High Resolution Imaging Spectrometer (CHRIS) on board Project for On-Board Autonomy-1 (PROBA-1) was launched by the European Space Agency (ESA) in 2001. Hosted on a scientific mission, CHRIS was the second hyperspectral spaceborne instrument. Currently operational, the German Aerospace Center (DLR)’s Earth Sensing Imaging Spectrometer (DESIS), the Italian Space Agency (ASI)’s PRecursore IperSpettrale della Missione Applicativa / Hyperspectral Precursor of the Application Mission (PRISMA) and the DLR’s Environmental Mapping and Analysis Program (EnMAP) are advanced hyperspectral missions that acquire data across more than 200 spectral bands, with a spatial resolution of 30 m. The first Tanager satellite was launched this year (2024) by the private company Planet. Future missions are already planned, including the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME), one of the Copernicus Sentinel Expansion Missions (planned for launch by ESA in 2028), and the Surface Biology and Geology (SBG) mission, which will be launched by NASA at the end of 2028 and encompasses an ASI-JPL cooperation.
In this lively and evolving upstream context, in which an increasing amount of HS imagery is disseminated across the international science and commercial community and, in the near future, a much more abundant data flow is expected, it is more than ever of paramount importance that investments are made to create a distributed network of scientific (researchers and academia groups) and commercial (developers, SMEs and geospatial services companies) users capable of analyzing these data, also through novel processing algorithms, testing of prototype product workflows and delivery of application products and services. The applications that can be developed based on HS satellite data are manifold, including (but not limited to) natural resources management, sustainable agriculture and food security (food nutrition and nutrition quality, sustainable use of nutrients and water, soil degradation and soil properties), raw materials (exploration and mining, mining environmental monitoring), biodiversity and ecosystem sustainability, forestry, coastal and inland waters (phytoplankton composition, harmful algae species), environmental degradation (methane monitoring, hazardous materials: oil spills, mine waste, pollutants, plastics) and hazards (landslides, volcanoes, floods, droughts, etc.), hydrology, urban surface materials, and cultural heritage. However, these developments may mostly remain unrealized opportunities for those countries where, in the absence of a long-standing tradition of HS remote sensing and of companies producing HS sensors, there is not yet a consolidated and distributed community with HS satellite remote sensing technical and hands-on expertise (except for sparse individual scholars or groups already working with these technologies or with prior expertise on airborne data only). For example, this scenario can be found at the moment in Romania and, more widely, across Eastern European countries.
While several academic and research organizations have strong competences in various strands of remote sensing – first of which multispectral optical and Synthetic Aperture Radar – and commercial companies exist that deliver applications and downstream services, HS satellite remote sensing has not yet reached the same level of maturity across the country. To stimulate an acceleration in this EO domain at the national level, one of the most effective strategies is to identify a national champion (or a group of them) that can act as a technological innovation hub, start creating a competence basis and promote further dissemination across the reference community through training initiatives. It is with this scope that the Romanian Space Agency (ROSA), as the national space body in charge of coordinating the national space research and application programs, promoting Romania's development in the space field, representing the Government in international cooperation programs and undertaking research oriented on space matters, promoted the application for and is coordinating the Horizon Europe "Reaching Excellence in Hyperspectral Remote Sensing" (EXPERT) project (https://www.expert-project.rosa.ro/). EXPERT is funded in the framework of the HORIZON.4.1 “Widening participation and spreading excellence” program, “HORIZON-WIDERA-2023-ACCESS-02-01 - Twinning Bottom-Up” topic, “HORIZON Coordination and Support Actions” funding scheme. EXPERT aims to improve the excellence capacity and resources of ROSA in the area of HS remote sensing through knowledge and best practices transfer by ASI and DLR.
Started in June 2024, EXPERT pursues this ambitious goal through international cooperation between the two space agencies – ASI & DLR – which are European leaders in HS sensor technologies, data analysis, applications and downstream services, and the national organization – ROSA – which has the institutional mandate and profile to play the above-mentioned role of technological innovation hub and accelerator in Romania. Centered around the HS area of research and innovation, the EXPERT project targets an increased science and innovation capacity for the organization based in the widening country, consequently generating a greater involvement of local actors in the research and innovation process, both at national and European levels, through the specific actions designed for the wider communities around ROSA. Additionally, ROSA will develop a research profile and technical competence in order to fully exploit the R&D opportunities of this emerging field, given the current rise of, and upcoming, HS satellite technologies and missions in Europe and beyond, under the guidance of ASI and DLR, which are at the forefront of HS remote sensing science and applications at the global level. This paper illustrates the tiered approach and the various actions undertaken in EXPERT to achieve the above objectives: • Capacity building through training delivered by ASI and DLR to ROSA scientific and technical staff members, in order to build specialist competence and hands-on technical skills in the collection, processing, post-processing and analysis of HS data collected from ground-based, airborne and spaceborne sensors; • Joint development of research use-cases focusing on the main applications that ROSA has identified as priorities to support the growth of the national satellite-based downstream and applications sector, i.e.
mining, agriculture and cultural heritage; • Outreach, networking and capacity building towards the user community, with the main focus on the Romanian academic, scientific and commercial sectors and, secondarily, neighboring countries in Eastern Europe. Specific actions include: outreach webinars raising awareness about HS remote sensing and providing basic, entry-level training; a summer school devoted to EO researchers and professionals for intensive specialist training; and networking events to expand the community benefitting from the EXPERT impact and create opportunities for future partnerships and development in HS remote sensing at the national level. Networking also encompasses liaising and partnering with other TWINNING projects, such as AI4AGRI, coordinated by the R&D Institute of Transilvania University of Brasov in Romania. It is worth noting that the capacity building, training and research use-cases rely on the joint exploitation of PRISMA and EnMAP. This choice accounts for the fact that PRISMA and EnMAP share several technical properties and are currently paving the way to the future generations of HS missions. In September 2022, ASI and DLR signed an implementation agreement to share HS data and the strategies, methods and results, and strengthen the synergies between the two missions (https://www.asi.it/en/2022/09/orbital-twinning-between-the-italian-prisma-satellite-and-the-german-enmapsatellite/). Therefore, in the context of this inter-agency twinning initiative, the EXPERT project represents an excellent opportunity to capitalize on this cooperation and widen the benefits of PRISMA and EnMAP HS data exploitation and applications development in the framework of the cooperation with ROSA. The present paper will showcase a model of cooperation between European space agencies, providing examples of good practices and knowledge transfer in EO.
The EXPERT project has received funding from the European Union's HORIZON EUROPE "Widening participation and spreading excellence" programme under Grant Agreement No. 101160059.
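To give a flavour of the hands-on HS analysis skills such training targets, the classic spectral angle mapper (SAM) compares each pixel spectrum to a reference spectrum by the angle between them, which is insensitive to illumination-driven brightness scaling. The four-band spectra below are invented for illustration; real PRISMA/EnMAP spectra have 200+ bands:

```python
import numpy as np

# Spectral Angle Mapper (SAM): a classic hyperspectral matching
# technique. Spectra below are illustrative values only.
def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    # clip guards against floating-point values slightly outside [-1, 1]
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

reference = np.array([0.1, 0.3, 0.5, 0.4])      # e.g. a target material
candidate = np.array([0.12, 0.28, 0.52, 0.38])  # similar spectral shape
unrelated = np.array([0.5, 0.4, 0.1, 0.1])      # different spectral shape

assert spectral_angle(candidate, reference) < spectral_angle(unrelated, reference)
```

A pixel is assigned to the reference class when its angle falls below a chosen threshold, one of the standard workflows for mineral and land-cover mapping with HS imagery.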

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Large-scale land cover change mapping: a new co-developed service for the Philippines

Authors: Etienne Ducasse, Yoann Courmont, Marion Sutton, Carlos Dewasseige
Affiliations: Collecte Localisation Satellite (CLS)
CopPhil (National Copernicus Capacity Support Action Program) is a European Union initiative to promote the use of Earth Observation data from the Copernicus program, implemented by the European Space Agency (ESA). Land Cover mapping is one of the three pilot services co-developed with the Philippine Space Agency (PhilSA), the National Mapping and Resource Information Authority (NAMRIA) and the Department of Human Settlements and Urban Development (DHSUD). Land Cover (LC) mapping is a solution for a wide range of practical applications such as risk assessment and land use monitoring. Combined with additional data such as Digital Elevation Models (DEM), it can be used to detect vulnerable zones such as landslide-prone terrain and fire-susceptible areas, and to implement flood zoning. Regular updates of LC maps are also needed for territorial authorities to track urban expansion and its impact on natural habitats. These regular updates are not always possible with supervised classification methods, due to the large amount of processing resources needed to produce accurate national-scale LC maps. We present an example of the co-development of a cloud-based LC mapping service for the Philippines at a national scale with a yearly update within the CopPhil program. The high temporal resolution of the Sentinel-2 satellites is used to capture the annual spectral variations of each LC class. The Philippine rainy season limits Sentinel-2 observations due to cloud cover, so Sentinel-1 IW monthly mosaics are also used to strengthen the classification process. This process uses the Iota² software, a machine-learning toolbox for LC classification (Inglada et al., 2017). The reference dataset for training and validation was created from the NAMRIA 2020 land cover map. Polygons for each class were selected and updated to the classes for the year 2023 using visual analysis of VHR images. The processing chain is cloud-based, making it accessible and configurable for future developments.
The land cover map is first produced for the reference year 2023, with 11 classes and their variable minimal mapping units (MMUs) defined by NAMRIA, over two areas of interest (AOIs) in the Luzon and Mindanao regions. Preliminary results show an overall accuracy of 0.8 across all classes, which demonstrates that high-reliability semi-automatic LC mapping in the Philippines is within reach and opens new ground for long-term monitoring of LC changes. References: Inglada, J., Vincent, A., Arias, M., Tardy, B., Morin, D., & Rodes, I. (2017). Operational High Resolution Land Cover Map Production at the Country Scale Using Satellite Image Time Series. Remote Sensing, 9(1), 95. https://doi.org/10.3390/rs9010095
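The accuracy assessment described above (overall accuracy against reference polygons) can be sketched as follows; this is a generic illustration with a hypothetical function name and toy labels, not the Iota² validation code.

```python
import numpy as np

def overall_accuracy(pred, ref):
    """Overall accuracy, confusion matrix and per-class producer's accuracy
    from predicted and reference label arrays (e.g., pixels sampled from
    rasterized validation polygons)."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    oa = float((pred == ref).mean())
    classes = np.unique(ref)
    cm = np.zeros((classes.size, classes.size), dtype=int)
    for i, ci in enumerate(classes):
        for j, cj in enumerate(classes):
            # rows: reference class, columns: predicted class
            cm[i, j] = int(((ref == ci) & (pred == cj)).sum())
    producers = cm.diagonal() / cm.sum(axis=1)  # recall per reference class
    return oa, cm, producers
```

On a real map, `pred` and `ref` would be the classified raster and the rasterized NAMRIA polygons, flattened over valid pixels.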
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Strengthening the Academic Network on Capacity Development for Climate Change Adaptation between Europe and Africa

Authors: Michael Thiel, Insa Otte, Lilly Schell, Renee van Dongen, Marlon Maranan, Andreas Fink, Thomas Gaiser, Pranav Patil, Franziska Schünemann, Elke Mertens, Ralf Löwner, Elbek Erdanaev, Luiz Bondi, Stefan Porembski, Walter Leal, Franziska Wolf, Daouda Koné
Affiliations: University Of Würzburg, International Centre for Water Resources and Global Change, Karlsruhe Institute of Technology, University of Bonn, University of Hohenheim, University of Applied Sciences Neubrandenburg, University of Rostock, Hamburg University of Applied Sciences, West African Science Service Centre on Climate Change and Adapted Land Use
Climate change is constantly presenting people around the world with new challenges. The conditions for successful agricultural cultivation are changing, increasingly frequent extreme weather events are making life in cities and rural areas more difficult, nature has to adapt to new climatic conditions and, last but not least, the economies of entire nations are facing new challenges. Meaningful and promising adaptation measures require a comprehensive assessment and analysis of existing systems and of the possibilities for adaptation. The basis for this is comprehensive education and training of stakeholders on site, whether in Europe or Africa. In our NetCDA network we aim to bring together a wide range of players involved in the academic capacity development of current and future stakeholders in climate change adaptation in Africa, and to strengthen cooperation among them. Remote sensing/Earth observation can play a thematically unifying role in such a network because it covers and links different disciplines. In addition, satellite-based remote sensing offers great potential for recording and evaluating adaptation measures over wide areas. NetCDA initially starts through project financing, with close cooperation between partners from Germany and West Africa. The next step is to open the network and define the requirements for establishing it (constitution). The basic idea is to provide a framework that strengthens existing collaborations and ensures their continued existence beyond the term of pure project financing. The cooperation with the West African Science Service Centre on Climate Change and Adapted Land Use (WASCAL) is an ideal starting point for the network in Africa, as the centre is very well networked throughout Africa, has an internal network of universities across West Africa that can be drawn upon, and covers a wide range of climate change topics. 
The current focus is on the training of doctoral and master's students; an expansion in this context, for example to bachelor's students and postdocs, is necessary in the long term and is planned for in NetCDA.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Bridging GMTSAR and LiCSBAS: A Graphical User Interface for Accessible InSAR Processing

Authors: Muhammad Badar Munir, Dr Hakan Tanyas, Dr Islam Fadel, Ms Amira Ahmed, Dr Ling Chang, Dr Cees van Westen
Affiliations: ITC, University Of Twente
Over the past couple of decades, the availability of synthetic aperture radar (SAR) data and of advanced tools for interferometric SAR (InSAR) time series analysis has significantly increased, through both free and commercial platforms. Time series analysis workflows enhance the utility of InSAR for understanding dynamic surface processes and assessing their relationship with natural and anthropogenic factors. However, time series analysis of finer spatiotemporal resolution SAR data over extended periods remains technically complex and computationally expensive, particularly for non-programmers. The advent of high-level programming languages, combined with the development of specialized open-source libraries, has paved the way to advanced solutions to these problems by offering more efficient data handling, automation, and algorithmic advancements. GMTSAR, an open-source, UNIX-based software package, is one such solution: it streamlines the processing chain by generating deformation time series with greater accuracy and reduced technical complexity, integrating the Generic Mapping Tools framework with highly efficient InSAR processing algorithms written in C. Another example is the Python-based, open-source LiCSBAS package, which performs various quality control checks during InSAR time series analysis. However, the reliance of such packages on Python or UNIX-based scripting can pose challenges for users unfamiliar with programming or UNIX environments. Many efforts have already been made to extend GMTSAR's core functionality through other means; despite these contributions, further enhancements are still needed to accommodate non-programmers and users seeking more intuitive tools. Here we present a Graphical User Interface (GUI) tailored to enhance GMTSAR's accessibility while also offering the functionality of LiCSBAS. 
The GUI supports batch processing of large datasets, integrates multicore processing for improved efficiency, and includes built-in quality control measures to ensure high output reliability. Designed for users with little to no programming experience, it enables effortless interaction with intermediate and final results while maintaining the flexibility of GMTSAR and LiCSBAS for advanced users. By providing an open-source, user-friendly platform, this interface democratizes InSAR processing for a broader audience, from beginners to experts looking to implement custom algorithms.
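The batch and multicore dispatch that such a GUI backend performs might be sketched as below; `process_pair` is a hypothetical stand-in for a call into the external GMTSAR/LiCSBAS command-line steps, which are I/O-bound and therefore parallelize well with a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def process_pair(pair):
    """Placeholder for one interferogram job; a real backend would invoke
    the external GMTSAR scripts via subprocess (hypothetical stand-in)."""
    reference, secondary = pair
    return f"ifg_{reference}_{secondary}"

def batch_process(pairs, workers=4):
    """Dispatch interferogram pairs concurrently across worker threads;
    map() preserves the input order of the pair list."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_pair, pairs))
```

For CPU-bound in-process steps, a `multiprocessing.Pool` would be the analogous choice.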
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: IEEE Geoscience and Remote Sensing Society Image Analysis and Data Fusion School on Computer Vision for Earth Observation

Authors: Silvia Liberata Ullo, Gemine Vivone, Gulsen Taskin, Ronny Hansch, Ujjwal Verma, Dalton Lunga, Claudio
Affiliations: University Of Sannio
Over the last decades, the importance of Earth observation (EO) data has steadily increased. Today, it plays a pivotal role in scientific research and in monitoring, analyzing, and modeling natural and anthropogenic processes in the atmosphere and on the surface of the Earth, including its oceans, forests, ice and snow fields, and urban areas. It is also connected to the United Nations Sustainable Development Goals, where 34 indicators across 29 targets and 11 goals can be informed with EO data, and more and more companies leverage EO data, e.g., for insurance in regions prone to natural disasters or for smart farming. Nowadays, the quality and quantity of EO imagery call for fully automatic methods that can efficiently and robustly analyze globally distributed samples. Computer vision (CV) methods, shown to be very successful in other areas such as semantic or geometric interpretation of close-range imagery, promise to fulfill exactly this need by leveraging data-driven machine learning frameworks that model the complex relationship between spectral-spatial (and sometimes temporal) input and the target variable. Training young researchers on these topics is gaining increasing attention, and the goal of the IEEE Geoscience and Remote Sensing Society (GRSS) Image Analysis and Data Fusion (IADF) School on CV for EO (CV4EO) was to provide a general overview of the many ways CV is used in EO applications, together with deep insights into modern methods for automatically processing and analyzing remote sensing images. It soon became clear, however, that the IADF School was achieving much more than student training: it was fostering global cooperation among researchers working on EO. Three editions of the School have been held to date (https://iadf-school.org/). 
The 2022 edition was held online and the 2023 edition in person at the University of Sannio in Benevento (Italy); details of both are available under Past Events on the website. The 2023 edition attracted about 170 registrants, of whom 30 students were selected to attend. The 2024 IADF School was again held at the University of Sannio, with about 250 registrants and 60 students selected. For two consecutive years, therefore, the University of Sannio in Benevento (Italy) hosted an excellent school program covering a wide range of topics, including sophisticated image processing methods, artificial intelligence (AI) applications in EO and remote sensing, and quantum machine learning applied to remote sensing. This multiday event combined experience, ideas, and real-world applications with a clear focus on EO, giving students not only a broad educational experience but, above all, collaborative learning, in which they gained practical skills through interactive sessions alongside theoretical instruction. Moreover, participants from all over the world experienced a unique immersion in an enjoyable environment where differences and obstacles of any kind were overcome in the name of science, technology transfer and shared knowledge. It is worth underlining that Italy is a special place to host such an event for many reasons: the culture, the history, the monuments (a city tour was held during the week) and the food all help to create carefree moments of sharing. The 2024 School edition hosted 60 students from five continents (Africa, Asia, the Americas, Europe, and Oceania) and 26 countries: Cameroon, India, Bulgaria, Netherlands, Morocco, UK, Italy, Nigeria, Romania, China, Australia, Iran, Norway, Egypt, Saudi Arabia, Ghana, Venezuela, Portugal, Germany, Pakistan, Spain, Armenia, Singapore, Turkey, USA, and Canada. 
IEEE provided 10 travel grants, which the organizing committee assigned to students coming from countries outside Europe. We had several sponsors: ESA (European Space Agency), Thales Alenia Space, cosine Netherlands, Intelligentia, COSMIND, Strega Alberti, SMS Engineering, and the University of Sannio. Sponsorships allowed us to award the best posters presented during a dedicated School session, which created a healthy and important competition among students eager to present their research topics. In conclusion, the school program not only provided a diverse range of opportunities for developing knowledge in geoscience and remote sensing, but also fostered a sense of community among students, professionals, and researchers. Looking back, we are inspired by the collaborative spirit, the exchange of ideas, and the topics discussed. We would like to thank GRSS and IADF for their support, and all of the lecturers who freely gave their time and expertise. A survey conducted among the participants after the School clearly showed that the event received high attention and provided an exciting experience. The next edition of the IADF School in 2025 has already been planned, again at the University of Sannio in Benevento: now a tradition. The IADF Technical Committee will also support the Summer School to be held in Tunisia in May 2025 during the IEEE JURSE Conference. Link to the pictures of the 2024 IADF School: https://drive.google.com/drive/folders/1gNIqR8AwSpwWfb6xDkt5BRUVmRGzrlUy?usp=sharing
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Romanian Excellence Center in Artificial Intelligence on Earth Observation Data for Agriculture

Authors: Mihai Ivanovici, Prof. Josiane Mothe, Fabio Del Frate
Affiliations: Transilvania University of Brasov, IRIT, University of Toulouse, Tor Vergata University of Rome
Romania, an ESA member state, has a long history of contributions to the remote sensing and Earth Observation (EO) domains, starting with the first research based on multispectral data from the first Landsat satellite, launched in 1972. The Romanian Space Agency (ROSA) develops the national strategy in these domains and specific activities to stimulate public and private initiatives. Various projects and initiatives have been implemented at national level to foster the use of EO data, with a particular focus on applications in agriculture. The Romanian Center for the Use of Remote Sensing in Agriculture (CRUTA) was founded in 1992, in the context of the MARS (Monitoring Agriculture with Remote Sensing) European programme started in 1988. Another example is the project implemented by the University of Agricultural Sciences and Veterinary Medicine (USAMV) in Bucharest at the teaching farm at Moara Domnească, Romania, in collaboration with the private company Terrasigna; within this framework, Sentinel-1 data are validated by a radar system for applications in agriculture. Other examples include the EC-funded SIRIUS project and the DIANA project. The AI4AGRI European project (2022-2025) aims at creating the Romanian Excellence Center in Artificial Intelligence (AI) on Earth Observation (EO) Data for Agriculture within the R&D Institute of Transilvania University of Brasov, Romania (UNITBV). The AI4AGRI project is a Horizon Europe Twinning between UNITBV and two high-level universities in Western Europe: the University of Toulouse, France (UT) and Tor Vergata University of Rome, Italy (UNITOV). AI4AGRI is therefore a capacity-building initiative driven by international cooperation. The AI4AGRI Excellence Center officially opened in June 2023, and its ultimate goal is to boost agricultural productivity and sustainability in Romania by leveraging AI and EO data. 
In the framework of the twinning project, the AI4AGRI Excellence Center trains early-stage researchers and young scientists, provides EO-based vegetation maps to farmers, and enhances research and innovation at UNITBV. Key activities of the AI4AGRI project include joint research in tight collaboration with the two western European partners (UT and UNITOV), staff exchanges and partner visits, training, mentoring and outreach. From a long-term perspective, AI4AGRI encompasses three major components: (i) training the future generation of researchers and experts, thus addressing capacity building at national level in Romania and in Eastern Europe; (ii) creating the premises for technological transfer between the newly created center and various stakeholders such as institutes and private companies; and (iii) research activity. The first is carried out through specific activities such as short training sessions and mentoring, as well as networking events such as summer schools and workshops. All the resulting support material is freely available as open resources for further re-use, increasing the training impact. The second is prepared through the signing of partnership agreements and the identification of innovative results that can be protected as intellectual property and then valorized through technological transfer. Ensuring the long-term sustainability of the newly created AI4AGRI Excellence Center includes attracting new funding: in the framework of the project, various proposals (both at Romanian national level and European) have been submitted, some already accepted for financing and currently in implementation, and some under evaluation. The third component, joint research, follows the EU FAIR principles: all results and outcomes of the project are open and freely distributed or accessible. The research in AI is mainly based on remote sensing and EO data. 
The EO data include Sentinel-1 and Sentinel-2 from the EU-ESA Copernicus programme, as well as hyperspectral images such as PRISMA, provided by ASI (Italy), and EnMAP, provided by DLR (Germany). Applications targeting agriculture include NDVI map generation (with new fuzzy logic-based approaches for the coloring scheme and inference), two-source NDVI validation using Monte Carlo simulations, AI-based color composite generation from multi- and hyperspectral images, AI-based crop identification with a focus on early identification, and AI-based identification of potato fields by fusion of radar and optical data. Remote sensing data include color digital images as well as synthetic, computer-generated images for two applications of interest: AI-based soil roughness estimation and AI-based emulation of a physics-based model for estimating the radar backscattering of bare soil. An output of the research activity is the datasets made publicly available, such as the color images for soil roughness estimation and a Sentinel-2 based dataset spanning five years, with ground-truth labels over 47 parcels and 12 types of agricultural crops belonging to our associated partner, the National R&D Institute for Potato and Sugar Beet, Brasov, Romania. Last but not least, project results are disseminated at international conferences and in peer-reviewed journals as open publications. Communication is performed through the AI4AGRI web site (https://ai4agri.unitbv.ro) and a dedicated LinkedIn channel. Outreach events are organized in the framework of the European Researchers’ Night and World Space Week, targeting the general public as well as pupils and their teachers from primary schools. 
As a particular impact of the project, several farmers and farmer associations were contacted to increase their awareness of EO data and AI-based tools for crop management and decision-making; NDVI maps, together with guidelines for their interpretation, are generated and delivered to them. Funded by the European Union. The AI4AGRI project entitled “Romanian Excellence Center on Artificial Intelligence in Earth Observation Data for Agriculture” received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement no. 101079136.
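The NDVI maps delivered to farmers follow the standard index definition; a minimal sketch, assuming the usual Sentinel-2 band choice of B8 for near-infrared and B4 for red (this is generic, not the project's fuzzy-logic pipeline):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red); eps guards against division
    by zero over very dark pixels. Inputs are reflectance arrays."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Values near +1 indicate dense green vegetation, values near 0 bare soil, and negative values water or clouds.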
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Capacity Building Towards Raised Excellence on Artificial Intelligence and Earth Observation for Disaster Risk Reduction in the Framework of AI-OBSERVER Project

Authors: Dr Marios Tzouvaras, Prof. Fabio Del Frate, Dr Gerd Reis, Prof. Diofantos Hadjimitsis
Affiliations: ERATOSTHENES Centre Of Excellence, Università degli Studi di Roma “Tor Vergata”, Deutsches Forschungszentrum für Künstliche Intelligenz
ERATOSTHENES Centre of Excellence (CoE) is the only established research organization in Cyprus for space-based Earth Observation (EO), with a significant track record in participation and coordination of EO related research projects through the Cyprus University of Technology. By 2026, the ERATOSTHENES CoE, through the H2020 Teaming EXCELSIOR project (https://excelsior2020.eu/), will become a world-class Digital Innovation Hub for EO and Geospatial Information, becoming a self-sustained Centre and the reference Centre in the Eastern Mediterranean, Middle East, and North Africa. Acknowledgements: The present work was conducted in the framework of AIOBSERVER project (https://ai-observer.eu/) titled “Enhancing Earth Observation capabilities of the Eratosthenes Centre of Excellence on Disaster Risk Reduction through Artificial Intelligence”, that has received funding from the European Union’s Horizon Europe Framework Programme HORIZON-WIDERA-2021-ACCESS-03 (Twinning) under the Grant Agreement No. 101079468. The authors also acknowledge the 'EXCELSIOR': ERATOSTHENES: Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project (www.excelsior2020.eu). The 'EXCELSIOR' project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 857510, from the Government of the Republic of Cyprus through the Directorate General for the European Programmes, Coordination and Development and the Cyprus University of Technology. Artificial Intelligence (AI) already has a major impact on many sectors and its influence is predicted to expand rapidly in the coming years. One area where there is considerable untapped potential for AI is in the field of Earth Observation, where it can be used to manage large datasets, find new insights in data and generate new products and services. 
In this context, the use of Sentinel data, and of data provided by other ESA missions or Third Party Missions, plays a crucial role. The main goal of the Horizon Europe AI-OBSERVER project (https://ai-observer.eu/), funded by the European Commission, is to enhance the scientific capabilities of the ERATOSTHENES CoE in the use of Artificial Intelligence (AI) for Earth Observation-based Disaster Risk Reduction by twinning with two top-class leading institutions, the German Research Centre for Artificial Intelligence (DFKI) and the University of Rome Tor Vergata (UNITOV), involving highly reputable professionals working on AI for the EO sector, and to increase the scientific and innovation capacity of the ECoE in the Research and Innovation system in Cyprus and beyond. During the three years of the project, this is being achieved through a series of capacity-building activities (workshops, webinars, short-term staff exchanges, summer schools and expert visits) offered by DFKI and UNITOV on Artificial Intelligence, designed on the basis of a gap analysis of the existing staff and scientific capacity of the ERATOSTHENES CoE researchers, covering the thematic research areas of land movements (earthquakes, landslides and soil erosion), forest fires, floods and marine pollution (oil spills, illegal waste dumping, etc.). All these will enable ERATOSTHENES CoE researchers to build AI models for large-scale image processing and Big EO Data, utilizing data from ESA and Third Party Missions as well as other auxiliary datasets. To date, over forty early-stage and senior researchers have participated in these trainings. 
The knowledge transferred to ERATOSTHENES CoE is currently utilized by its staff in a research exploratory project applying Artificial Intelligence on Earth Observation for multi-hazard monitoring and assessment in Cyprus, with the support of the advanced partners, leading to the development of the first ERATOSTHENES CoE product integrating EO and AI for Disaster Risk Reduction to address the needs of stakeholders and end-users in Cyprus.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Bringing Copernicus Data to Latin America & the Caribbean: Capacity building activities of the CopernicusLAC Centre in Panama

Authors: Nicolas Ayala Arboleda, Thomas Tanghe, Jesus Carrillo
Affiliations: CopernicusLAC Centre in Panama, Novaspace
The CopernicusLAC Centre in Panama serves as a regional hub, advancing the utilisation of free and open Earth observation (EO) data from the European Union's (EU) Copernicus programme across Latin America and the Caribbean (LAC). The Centre's activities take place within the overall context of the EU-LAC Digital Alliance, a strategic framework for promoting cooperation between the EU and the LAC region on digital and space issues under the EU Global Gateway umbrella. Within this context, the European Space Agency (ESA) is coordinating the Centre's implementation on the basis of a Contribution Agreement with the Directorate-General for International Partnerships (DG INTPA) of the European Commission, and in close collaboration with Panama's government: MIRE (Ministry of Foreign Affairs), the Government Innovation Authority (AIG), and SENACYT (National Secretariat for Science, Technology and Innovation). The Centre's capacity-building programme is one of its cornerstones; it seeks to empower stakeholders with the knowledge and skills necessary to effectively leverage Copernicus data and services. The programme encompasses a variety of training modalities, including in-person trainings, massive open online courses (MOOCs) and live online trainings, among other activities, all tailored to address the unique challenges and requirements of the LAC region. In-person training sessions play a central role in allowing beneficiaries of the Centre to gain hands-on experience with Copernicus tools and data. For instance, in July and November 2024, the Centre conducted two five-day in-person trainings, delivered in Spanish to accommodate regional participants. These sessions are instrumental in fostering practical skills and encouraging the integration of Copernicus data into various operational contexts. 
These events employ a "train the trainers" approach, enabling participants to disseminate acquired knowledge within their respective organisations, thereby amplifying the programme's impact across the region. Complementing its in-person offerings, the Centre is developing a suite of online resources for its stakeholders. In May 2024, the Centre launched its Digital Campus, a dedicated platform offering training materials and MOOCs focused on the utilisation of Copernicus data. This publicly available resource will support the Centre's training activities over a four-year span, aiming to equip participants from national institutions with the expertise needed to harness EO data for disaster risk reduction (DRR) and other critical applications. Moreover, the Centre is implementing a series of live online courses, which allow participants from across the region to learn how to employ Copernicus data in their activities. These events provide a valuable opportunity for attendees to interact with international experts and gain practical experience with EO. To further stimulate innovation and the practical application of EO data, the Centre hosts events such as hackathons. In October 2024, the inaugural CopernicusLAC Hackathon was held online, focusing on developing solutions to regional challenges using Copernicus data. Participants addressed issues such as food security, DRR, and biodiversity preservation, with the event offering over $8,000 in prizes to encourage engagement and excellence. During the hackathon, participants received constant support from experts to help them better apply Copernicus data in their projects, with an emphasis on the practical value of the solutions and on supporting their learning. The Centre also conducts tailored outreach activities to better inform policymakers about the great value of Copernicus data for their countries' and the region's welfare. 
Through its multifaceted capacity-building activities, the CopernicusLAC Centre in Panama significantly contributes to the region's ability to harness EO data for informed decision-making. By providing accessible training resources, facilitating practical applications and encouraging international collaboration, the Centre is empowering stakeholders across LAC to address a variety of challenges more effectively, ultimately fostering a more resilient and informed society.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: D.06.01 - POSTER - Orbital Intelligence for Earth Observation applications: The edge of AI In Space

In the current technological era, we are witnessing significant interest in the application of Artificial Intelligence (AI) directly on board small Earth Observation (EO) satellites. Incorporating AI at the edge of satellite constellations enables the extraction of actionable insights from spaceborne data and the generation of immediate, automated responses. This, in turn, has the potential to significantly increase the autonomy, agility, and reconfigurability of EO missions and to enable novel applications.
This session delves into the technologies, novel computing paradigms for onboard satellite processing (edge computing, neuromorphic computing, cloud computing), EO use cases, innovative mission concepts enabled by the application of AI on board EO satellites, and design of resilient and reliable AI algorithms for onboard satellite applications. In addition, it aims at fostering a collaborative environment for sharing knowledge, experiences, and potential partnerships among space industry professionals, researchers, and policymakers on the topic of orbital intelligence for EO applications.
The topics of discussion for this session, although not exhaustive, will include:

1. AI-enhanced onboard payload data processing:
- AI for efficient onboard radiometric and geometric payload data processing (e.g., band-to-band alignment, georeferencing, calibration, Synthetic Aperture Radar (SAR) data focusing)
- End-to-end AI algorithms for onboard processing of payload raw data (e.g., onboard segmentation of payload raw data, processing of partially focused or unfocused SAR data)
- Onboard payload data compression
- Other novel AI-enhanced onboard payload data processing algorithms

2. Edge AI for EO Applications:
- Near-real-time onboard data processing for early alert systems
- Edge computing for data transmission data-rates reduction
- AI-driven payload data anomaly detection and response
- Autonomous operations and decision making for automated operations
- Other novel EO applications enabled by onboard AI

3. Novel Mission Concepts and Distributed data processing and intelligence:
- Tip & cue missions and New Observing Strategies (NOS) with distributed spacecraft missions and collaborative sensor nodes
- Distributed edge and federated learning on board satellite constellations
- Swarm intelligence for multi-satellite constellations


4. AI Reliability and Resilience for onboard satellite applications:
- Addressing ethical considerations and AI governance in space missions
- Verification and validation of AI algorithms for onboard satellite EO applications
- Design of resilient AI algorithms on Commercial Off-The-Shelf hardware

5. Novel computing paradigms for onboard satellite processing:
- Cloud computing for onboard distributed data processing
- Neuromorphic computing for EO

6. Collaborative and Interdisciplinary Approaches:
- Partnerships between academia, industry, and government agencies
- Cross-disciplinary research and development initiatives
- Future directions and long-term vision for AI in Earth observation missions.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Verification of Deep Learning Approaches for Remote Sensing - An Evaluation of Possibilities and Challenges

Authors: Jakob Gawlikowski, Dr. Nina Maria Gottschling
Affiliations: German Aerospace Center (DLR)
Today, data-driven artificial intelligence forms an essential backbone of various Earth Observation and Remote Sensing applications. However, applying these methods directly on edge devices and in safety-critical domains, such as on satellites or in other missions, requires a thorough validation, verification, and certification process for the final algorithms. This is particularly challenging for data-driven approaches such as neural networks, which are considered black boxes and whose behavior can, so far, only be empirically explained through heuristic methods such as evaluations of gradients, the effects of input augmentations, or specific properties of network parameters and structure. Moreover, data-driven approaches can behave unexpectedly, especially when confronted with shifts in data distribution, a common issue in remote sensing. They can also be sensitive to adversarial attacks, where an attacker alters the network's predictions by applying minor perturbations to the input that are invisible to the human eye. Recent concept papers from the European Union Aviation Safety Agency (EASA) and the European Cooperation for Space Standardization (ECSS) outline initial plans for the certification of artificial intelligence, including machine learning and deep learning algorithms. But even though first verification methods for AI-based algorithms have been proposed, their effectiveness and applicability to remote sensing have not been evaluated. Verification of algorithms can be divided into formal verification, which aims to mathematically prove that specific properties hold for the algorithm, and experimental verification, which seeks to demonstrate these properties through empirical testing. In this work, we evaluate existing approaches for the (formal) verification of deep neural networks in satellite-based Earth Observation, covering a variety of data sources, tasks, and problem statements. We focus on three key aspects: 1. 
We investigate potential remote sensing-specific requirements and application- and task-specific challenges, from which we derive properties that neural networks must fulfill and that should be verified. 2. We evaluate existing approaches against these requirements, pointing out which aspects are already covered and what is still needed for remote sensing. These additional requirements arise partly from the unique characteristics of remote sensing data, which differ from the RGB datasets on which existing algorithms are typically tested; we also consider technical requirements that can play a role, such as computational or memory efficiency. 3. We apply various existing approaches to different remote sensing tasks, covering networks trained with standard and explicitly robust training strategies. In conclusion, we show that a variety of challenges remain in (formally) verifying neural network behavior on remote sensing tasks.
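Formal verification of the kind surveyed above is often illustrated with interval bound propagation (IBP): given a bounded input perturbation, interval bounds are pushed layer by layer through the network, and a property is proven if it holds for the whole output interval. The toy weights and bounds below are illustrative stand-ins, not values from any evaluated network:

```python
def interval_linear(lo, hi, W, b):
    """Propagate elementwise input intervals [lo, hi] through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo_acc, hi_acc = bias, bias
        for w, l, h in zip(row, lo, hi):
            if w >= 0:
                lo_acc += w * l
                hi_acc += w * h
            else:  # a negative weight swaps which bound contributes
                lo_acc += w * h
                hi_acc += w * l
        out_lo.append(lo_acc)
        out_hi.append(hi_acc)
    return out_lo, out_hi

def interval_relu(lo, hi):
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Toy "network": one linear layer + ReLU, input perturbed by eps = 0.1.
x, eps = [0.5, -0.2], 0.1
lo = [v - eps for v in x]
hi = [v + eps for v in x]
W, b = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]
lo, hi = interval_relu(*interval_linear(lo, hi, W, b))

# If a property (e.g. "output 0's lower bound exceeds output 1's upper
# bound") holds for these bounds, it holds for every perturbed input.
print(lo, hi)
```

Bounds computed this way are sound but conservative; tighter relaxations are what dedicated verifiers add on top of this basic scheme.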

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: ComeOnBoardPSG! – Advancing PRISMA Second Generation With Deep Learning and Onboard Edge Computing

Authors: Angela Cratere, Ilaria Cannizzaro, Stefania Amici, Andrea Carbone, Mark Anthony De Lunas De Guzman, Luigi Ansalone, Matteo Picchiani, Dario Spiller
Affiliations: Department of Electrical and Information Engineering, Polytechnic University of Bari, School Of Aerospace Engineering, Sapienza University Of Rome, National Institute of Geophysics and Vulcanology (INGV), Department of Astronautical, Electric and Energy Engineering, Sapienza University of Rome, Italian Space Agency (ASI)
The ComeOnBoardPSG! project – Autonomous computing with on-board machine learning for optimized acquisition and real-time analysis of hyperspectral data on PRISMA Second Generation – supported by the Italian Space Agency (ASI), aims to enhance the capabilities of the upcoming PRISMA Second Generation (PSG) hyperspectral (HS) mission by integrating advanced onboard artificial intelligence (AI) technologies for real-time data processing, optimized resource utilization, and autonomous decision-making. Leveraging commercial off-the-shelf (COTS) edge computing (EC) and deep learning (DL) techniques, the project seeks to increase the autonomy and efficiency of PSG, addressing key challenges in Earth observation (EO) missions: real-time onboard data processing for early alert systems, EC solutions for data-rate reduction, and autonomous operations through adaptive mission planning. A central focus of the project is real-time onboard data processing for early alert systems, particularly the timely detection of high-temperature events such as wildfires and volcanic eruptions. By equipping the primary HS payload with DL-based fire detection algorithms, the system provides timely and accurate warnings to support ground-based emergency responses and natural disaster management. Preliminary results using PRISMA HS imagery and convolutional neural network (CNN) models demonstrate the feasibility of this approach, achieving 98% accuracy. Edge deployment on an Nvidia Jetson TX2 platform further validates its suitability for space missions, delivering low latency (3.0 ms) and efficient power consumption (4.8 W). This capability ensures prompt identification and reporting of critical events, significantly enhancing disaster response operations. 
For EC solutions to optimize data transmission and onboard storage, the project integrates cloud detection algorithms to automatically identify and discard cloudy images, reducing bandwidth usage, optimizing storage, and ensuring efficient resource management. Results with Sentinel-2 imagery using a U-Net-based cloud segmentation model achieved 98% accuracy, a low false positive rate (1%), and efficient hardware deployment on FPGA accelerators, with inference times of 27 ms and average power consumption of 2.5 W. These results highlight the potential for implementing similar systems in resource-constrained satellite platforms, enhancing the operational efficiency of PSG. In the domain of autonomous operations and decision-making, the project investigates two possible onboard cloud detection solutions. The approach with the lower impact on the system involves equipping the primary HS payload with a DL-based computing unit to perform real-time cloud coverage assessment, automatically identifying and discarding cloudy images to optimize onboard storage and bandwidth utilization. The proposed high-impact solution enhances the PSG platform with a secondary payload, an RGB forward-looking camera strategically oriented between the nadir direction and the satellite velocity vector. This camera acquires RGB images in advance of the PRISMA satellite's overflight of a given area, and a DL-based computing unit processes these images in real time to evaluate cloud coverage. The system identifies both cloudy areas to be excluded and potential cloud-free zones for optimized HS image acquisition. This analysis can be used by the satellite's Guidance, Navigation, and Control (GNC) system to plan attitude adjustments, within the operational limits of PSG, enabling the primary payload to target only cloud-free regions. This innovative strategy avoids expending energy on capturing and storing cloudy images, while providing the mission with onboard decision-making capabilities. 
The ComeOnBoardPSG! project highlights the potential of AI-driven EC techniques in advancing EO HS missions such as the upcoming PSG. By integrating real-time data processing, efficient resource management, and autonomous decision-making, it introduces a transformative approach for resource-aware satellite operations, opening up the possibility for enhanced autonomy and capabilities in next-generation EO missions.
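The onboard fire-detection step described above reduces, at its core, to running a small CNN over image patches on an edge device. The sketch below is a minimal NumPy forward pass (one convolution, ReLU, global average pooling, sigmoid) with random stand-in weights; it illustrates the shape of such a patch classifier, not the mission's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict(patch, kernel, w, b):
    feat = np.maximum(conv2d_valid(patch, kernel), 0.0)  # conv + ReLU
    pooled = feat.mean()                                 # global average pool
    logit = w * pooled + b                               # linear head
    return 1.0 / (1.0 + np.exp(-logit))                  # sigmoid score in (0, 1)

patch = rng.random((16, 16))          # stand-in for a band-combination patch
kernel = rng.standard_normal((3, 3))  # stand-in for a learned filter
score = predict(patch, kernel, w=2.0, b=-1.0)
print(score)
```

In deployment the score would be compared against an alert threshold before a warning is generated, which keeps the downlinked product tiny.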

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Mission Persistence – A Canadian CubeSat for Edge AI

Authors: Matt Cross, Becca Bonham-Carter, Hugo Burd, Luis Chavier, Tim Heydrich, Alan Higginson, Galen O'Shea, Dr. Alexis Pascual, Kaizad Raimalwala, Andrew Macdonald, Dr. Samara Pillay, Yolanda Brown, Dr. Michele Faragalli
Affiliations: Mission Control
Deploying and maintaining reliable machine learning (ML) models poses significant challenges, particularly in space-based Earth observation (SBEO), where limited computational resources and data accessibility constraints create a uniquely demanding environment. Leveraging years of flight heritage and lessons learned from previous missions, Mission Control, a Canadian space start-up, has developed a Machine Learning Operations (MLOps) pipeline to address these challenges and build reliable ML models for space. In June 2025, Mission Control will launch Mission Persistence, a 6U satellite designed to validate its MLOps framework for spaceflight applications. The satellite's primary objective is to demonstrate the effectiveness of this matured MLOps approach in maintaining reliable and robust ML models in orbit. Mission Persistence features a custom ground segment and ML pipeline designed for a handful of computer vision tasks. The pipeline ingests millions of new datapoints, leveraging pseudo-labeling techniques to label the imagery and enable the training of robust models. The pipeline validates these models using an array of metrics and visualizations. Furthermore, the pipeline offers hardware-in-the-loop testing to bridge the domain-adaptation gap and achieve operational Technology Readiness Level (TRL). The flight software application utilizes a convolutional neural network (CNN) for image classification, processing ingested Earth observation imagery from the Dragonfly Gecko imaging payload through inference passes on the flight computer, a Xilinx UltraScale+. This longitudinal study comprises a 12-month experiment, during which the performance of ML models deployed on the satellite will be continuously monitored and compared against a static model on the ground. The results will provide valuable insights into the long-term efficacy of MLOps techniques in ensuring the robustness and reliability of ML for orbital applications. 
This research aims to facilitate the development of novel observing strategies (NOS) through enhanced AI reliability and resilience for onboard Earth observation satellite applications. The findings of this study will provide valuable insights into the application of MLOps in spaceflight, enabling the deployment of more robust and reliable ML models in future space missions. Furthermore, this research will contribute to the broader advancement of autonomous spaceflight, in which MLOps can play a critical role.
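Pseudo-labeling, named above as the technique used to label ingested imagery, generally works by letting a trained "teacher" model score unlabeled samples and keeping only confident predictions as training labels for the next model iteration. A minimal sketch (the threshold and toy teacher are illustrative assumptions, not details of Mission Control's pipeline):

```python
def pseudo_label(samples, teacher, threshold=0.9):
    """Keep only samples the teacher labels with high confidence."""
    labeled = []
    for x in samples:
        p = teacher(x)                 # predicted probability of class 1
        confidence = max(p, 1.0 - p)   # confidence in the argmax class
        if confidence >= threshold:
            labeled.append((x, int(p >= 0.5)))
    return labeled

# Toy teacher: predicted probability simply tracks the sample value.
teacher = lambda x: min(max(x, 0.0), 1.0)
samples = [0.05, 0.5, 0.95, 0.98, 0.02]
train_set = pseudo_label(samples, teacher)
print(train_set)
```

The ambiguous sample (0.5) is dropped rather than risk training on a wrong label, which is what makes the technique viable at the scale of millions of unlabeled datapoints.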

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: End-to-End Onboard Raw Data Processing for Earth Observation Applications

Authors: Mr. Domenico Barretta, Mr. Roberto Del Prete, Dr. Gabriele Meoni, Nicolas Longépé
Affiliations: ESA/ESRIN, University of Campania L. Vanvitelli, University of Naples Federico II
The integration of Machine Learning (ML) into onboard satellite data processing represents a transformative shift in Earth Observation (EO) capabilities, enabling near real-time, autonomous analytics for a variety of time-sensitive applications. Traditional EO data acquisition systems often depend on a “bent-pipe” architecture, where raw data is transmitted to ground stations for processing. This reliance introduces significant latency, primarily due to the time required for the satellite to pass over a ground station, limiting the utility of these systems in scenarios requiring rapid response. Onboard ML offers a solution by enabling satellites to process data directly in orbit, generating actionable insights without waiting for ground-based processing. This paradigm shift accelerates the “sensing-communication-decision-feedback” cycle, as insights can be delivered immediately after processing the data onboard, bypassing the latency associated with downlinking raw data. However, onboard processing presents its own challenges. Preprocessing raw data for ML algorithms is resource-intensive and introduces significant delay when applied on board EO satellites. In this work, we propose simplifying the data processing chain by minimizing computationally intensive steps. Instead, we directly apply lightweight ML models to raw satellite imagery, enabling the extraction of actionable insights with minimal preprocessing. We apply this paradigm to two EO use cases requiring near real-time response: detection of thermal anomalies and maritime surveillance. For the first use case, raw Sentinel-2 data is processed to detect and classify environmental anomalies, such as thermal hotspots, using a streamlined pipeline that includes coarse spatial registration and a deep learning model, EfficientNet-lite0. 
The system outputs binary classification maps highlighting areas of interest, achieving an average Matthews correlation coefficient (MCC) score of 0.854 and a peak MCC of 0.90 on a geographically diverse dataset. The entire pipeline processes data in just 1.8 seconds per granule with a peak power consumption of 6.4 W, demonstrating feasibility for resource-constrained satellite platforms. For the maritime use case, we aim to detect and classify vessels from raw multispectral data. By developing specialized datasets, such as VDS2Raw and VDV2Raw, derived from the Sentinel-2 and VENµS missions, respectively, and enriched with auxiliary metadata like Automatic Identification System (AIS) records, this research demonstrates the utility of raw data for training high-performance models. Statistical analyses reveal that minimal performance loss occurs when using a single spectral band, highlighting a key advantage for onboard processing. Indeed, this finding implies that complex preprocessing steps, such as band registration, can be bypassed, significantly reducing processing time and resource requirements. The models are validated on hardware representative of small satellite platforms, confirming their suitability for near real-time deployment under strict resource constraints. This streamlined approach enhances the framework’s efficiency, making it a practical solution for real-world applications. In conclusion, this research demonstrates the potential of processing raw, unprocessed data directly on board satellites via ML for different EO applications. The ability to process raw satellite data directly onboard offers improved trade-offs between latency and accuracy, reducing dependence on ground-based infrastructure and enabling timely insights. The lightweight computational requirements make this approach suitable for deployment on energy-efficient satellite hardware, enhancing scalability and adaptability across various EO scenarios. 
As edge AI technologies evolve, this methodology holds potential for broader application in EO, supporting autonomous, near real-time data processing.
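The figure of merit reported above, the Matthews correlation coefficient, follows the standard binary-classification formula from the confusion matrix; a minimal sketch on toy labels (the data are illustrative, not from the paper's dataset):

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]
print(mcc(y_true, y_pred))
```

Unlike plain accuracy, MCC stays informative on imbalanced maps (e.g. rare thermal anomalies against a large background), which is why it is a natural choice for this task.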

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Harnessing the Potential of Onboard AI-Driven Coastal Water Monitoring for Near Real-Time Risk Detection with Φsat-2 Mission

Authors: Pietro Di Stasio, Gilda Schirinzi, Silvia Liberata Ullo, Cristiano Nattero, Marco Esposito, Nicolas Longépé, Francesco Mauro, Dr Gabriele Meoni, Elisa Liparulo, Francesca Razzano
Affiliations: University of Naples 'Parthenope', University of Sannio, Cosine, ESA ESRIN, WASDI
The increasing contamination of coastal waters poses significant risks to environmental sustainability, public health, and marine ecosystems. Traditional monitoring approaches, which rely on ground-based sampling and post-acquisition satellite data processing, face critical limitations, such as insufficient spatial coverage, high operational costs, and slow response times. These limitations are particularly problematic when addressing dynamic and localized water quality issues that demand rapid intervention. However, recent advancements in onboard data processing and artificial intelligence (AI) offer a promising solution by enabling near real-time detection of contaminants directly on satellites. This study introduces an innovative framework for onboard monitoring of coastal water quality, leveraging the advanced spectral imaging capabilities of the Φsat-2 mission and integrating AI techniques for anomaly detection. The Φsat-2 satellite is a key component of the European Space Agency’s (ESA) efforts to advance onboard AI-powered environmental monitoring. Φsat-2 is equipped with a state-of-the-art multispectral instrument called MultiScape100, which is capable of capturing images across eight spectral bands, ranging from the visible to near-infrared (VIS/NIR) spectrum. This advanced imaging system is designed to provide high-resolution data that is crucial for monitoring water quality parameters like turbidity, total suspended solids, and chlorophyll concentration. The satellite’s ability to gather this data from space is vital for monitoring large-scale environmental changes, particularly in coastal waters, where contamination often occurs. A critical feature of Φsat-2 is its onboard processing capability, which allows data to be analyzed directly in space, rather than relying on post-acquisition processing on the ground. 
This onboard processing is powered by the Intel Myriad 2 Vision Processing Unit (VPU), a highly efficient hardware accelerator designed specifically for tasks involving computer vision and deep neural networks. The Myriad 2 VPU provides the computational power needed for AI-based anomaly detection and image processing directly on the satellite, significantly reducing latency and enabling faster, real-time insights. This system is crucial for the rapid detection of environmental hazards such as water contamination, as it can process the satellite's multispectral images in near real-time, identify anomalies based on predefined thresholds, and transmit only the relevant data to ground stations, reducing the volume of data sent to Earth and improving efficiency. Myriad 2 is a specialized chip designed for low-power, high-performance applications. It can handle complex AI and machine learning tasks, making it ideal for satellite missions that require both high-speed processing and energy efficiency. The VPU can run convolutional neural networks (CNNs) like AI4EDoET (Artificial Intelligence for Early Detection of Environmental Threats), a model developed for this study to detect anomalies in water quality data. The proposed AI framework is designed to process satellite data in near real-time, identify environmental anomalies, and transmit only relevant alert data to ground stations, drastically reducing latency compared to traditional systems. This could enable the transmission of actionable insights much faster in future missions equipped with inter-satellite links. Preliminary tests using simulated data and onboard hardware demonstrated the feasibility of this approach. The model achieved rapid inference times of approximately 40 milliseconds per frame, ensuring compatibility with the satellite's computational and energy constraints. 
The abstract presents three case studies:
• An initial study on Liguria, in which validation against ground-truth measurements revealed strong predictive accuracy for turbidity and pH parameters, with performance surpassing that of existing models in the literature.
• A further adaptation aiming at global scale by using simulated data from Copernicus Marine Services. This integration is fundamental to the proposed solution, as it enhances the scalability and adaptability of the monitoring system across diverse geographic and environmental contexts.
• An adaptation for WASDI: to further demonstrate the applicability of this system, a prototype will be developed for deployment on the WASDI (Web Automated Services for Data Interoperability) platform. WASDI will act as a foundation for a visual environmental risk monitoring system, providing an intuitive, real-time tool for analysing and addressing ecological threats.
The societal impact of this work is expected to be transformative. By enabling near real-time detection of water quality anomalies, this system addresses a critical gap in existing monitoring systems, delivering timely and actionable insights exactly when they are needed. This capability will enhance efforts to mitigate threats to public health and marine biodiversity, support the sustainable management of water resources, and contribute to global initiatives such as the United Nations Sustainable Development Goals.
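The threshold-based anomaly step described above can be sketched as computing a per-pixel water-quality index and flagging exceedances so that only alert pixels need downlinking. The red/green band-ratio turbidity proxy and the threshold below are illustrative placeholders, not the AI4EDoET model:

```python
def turbidity_proxy(red, green):
    """Simple per-pixel band-ratio proxy for turbidity (illustrative index)."""
    return [r / g if g > 0 else 0.0 for r, g in zip(red, green)]

def anomaly_mask(index, threshold=1.5):
    """Flag pixels whose index exceeds a predefined alert threshold."""
    return [v > threshold for v in index]

# Toy reflectance values for four pixels in the red and green bands.
red   = [0.10, 0.30, 0.45, 0.12]
green = [0.20, 0.25, 0.15, 0.24]
mask = anomaly_mask(turbidity_proxy(red, green))
print(mask)
```

Only the flagged pixels (and their locations) would be packaged into an alert product, which is what keeps the downlink volume small.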

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Onboard processing for SAR data: AI-driven focusing, detection and compression for Earth monitoring

Authors: Armando La Rocca, Gianluca Campagna, Luca Manca, Francesco Tosetti, Francesco Rossi, Daniele Latini, Giorgia Guerrisi, Raffaele Votta
Affiliations: AIKO S.r.l., GEO-K, ASI
Synthetic Aperture Radar (SAR) satellites are essential for Earth Observation (EO), providing high-resolution imagery regardless of weather or lighting conditions. However, the processing of raw SAR data presents numerous challenges. SAR generates vast amounts of complex data that require substantial computational resources for focalization—transforming raw radar signals into usable images. This task becomes even more difficult in spaceborne systems, where onboard processing power is limited. Additionally, downlinking the massive data volumes from SAR satellites to ground stations is constrained by bandwidth limitations, making efficient data compression a necessity. Onboard processing and monitoring offer a promising solution by taking advantage of SAR’s unique characteristics, allowing real-time data analysis and selective transmission of crucial information, such as ship detection in maritime surveillance. Artificial Intelligence (AI) plays a key role in this approach, enabling advanced algorithms to optimize focalization processes and compress data for efficient transmission. By leveraging AI, onboard systems can reduce the burden of raw data transmission, allowing more efficient use of satellite resources and ensuring critical information is delivered to the ground in a timely manner. This approach not only enhances mission efficiency but also opens new opportunities for monitoring applications. In this context, AIKO and GEO-K are collaborating on the AISAR (Applicazioni Innovative per il processamento a bordo di dati SAR) project, funded by the Italian Space Agency (ASI), to develop an AI-driven proof of concept for onboard processing of COSMO-SkyMed data. COSMO-SkyMed, an advanced constellation of SAR satellites, provides high-resolution radar imagery for various EO applications, including environmental monitoring, defence, and disaster management. 
The project’s primary goal is to demonstrate the feasibility of using AI to efficiently process and compress SAR data directly onboard the satellite. AIKO will focus on developing a novel algorithm for the focalization of raw COSMO-SkyMed SAR data, transforming the raw radar signals into usable images for onboard analysis. This will enable the implementation of specific monitoring use cases, such as ship detection, by processing the focalized data in real time. GEO-K, on the other hand, will develop an AI model aimed at optimizing data compression, ensuring that information can be transmitted to ground stations with minimal bandwidth usage. All algorithms will be designed for hardware efficiency, making them suitable for the limited computational resources available onboard satellites. Additionally, deployment tests will be conducted to validate the system’s performance in a relevant environment, ensuring robustness and operational readiness for future missions and applications, reaching TRL 5. The proposed focusing algorithm utilizes a hybrid approach that integrates traditional signal processing techniques with AI models to perform the focalization of SAR data. This method is capable of processing raw SAR data acquired by onboard payloads, generating actionable Single Look Complex (SLC) products directly on the platform for real-time analysis. This innovative approach has been rigorously tested and validated in the DeepSAR project, funded by the European Space Agency (ESA), where it was applied to Sentinel-1 data. The results demonstrated the algorithm's effectiveness in operational environments, including a maritime monitoring use case that successfully detected and tracked ships in SAR imagery. The entire processing chain has been tested on hardware platforms such as FPGAs and GPUs, confirming its compatibility with the computational constraints of spaceborne systems. 
Building on these successes, AIKO aims to extend this approach to evaluate its adaptability and scalability to different SAR payloads, with a focus on the COSMO-SkyMed satellites. The objective is to demonstrate the algorithm's applicability across multiple SAR missions with minimal modifications, ensuring its versatility for a wide range of EO platforms and fostering its potential use in future missions. Onboard data compression is crucial for satellite operations, especially in the context of modern missions that generate massive volumes of data. Satellites often operate with limited bandwidth for transmitting data back to Earth, making advanced data compression a desirable stage in the onboard processing pipeline: it allows satellites to maximize the efficiency of these transmissions by reducing the volume of data without losing critical information. Furthermore, compression can help optimize the use of onboard storage, ensuring that satellites can continue collecting valuable data even when immediate transmission is not possible. This capability is particularly important for high-resolution SAR imaging, which generates large datasets. Typically, to achieve high compression rates, lossy compression algorithms have to be used, resulting in a loss of information. However, to ensure the effective use of the products by end users, important information must not be lost. For this reason, Deep Learning (DL) algorithms have already been successfully applied, proving their effectiveness in compressing images while preserving fundamental details. The compression algorithm proposed here is based on a lightweight Convolutional AutoEncoder (CAE) Neural Network (NN). The CAE uses its encoding and decoding parts separately, to encode imagery on board the satellite platform and decode it at the ground station facilities. 
GEO-K has already tested this algorithm in the onboard context for the Φsat-2 mission developed by ESA, demonstrating its ability to perform efficient data compression with limited computational resources. The algorithm was tested on a hardware platform and also validated by considering the application of decompressed images in real application scenarios. The introduction of compression significantly reduces the size of the SAR product during processing, allowing for efficient and rapid transmission, enhancing mission performance, and ensuring the timely delivery of critical information to ground stations. In conclusion, the project aims to develop AI-driven onboard algorithms for SAR data focalization and compression, addressing the challenges of limited processing power and downlink bandwidth on satellites. By enabling efficient processing and transmission of COSMO-SkyMed data, the solution reduces data bottlenecks and enhances real-time decision-making. This approach, designed for hardware efficiency, can be adapted to other satellite platforms, offering a scalable solution for various space missions. The key advantage lies in boosting satellite autonomy, allowing missions to process critical data independently and transmit only essential information, significantly increasing operational efficiency and responsiveness.
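The encoder/decoder split of the CAE (encode onboard, decode at the ground station) can be illustrated with a tiny tied-weight linear autoencoder; the real system is convolutional and trained, so the weights and sizes here are random stand-ins chosen only to show the data-volume reduction:

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 64, 8  # 64 raw values compressed into an 8-value latent code

# Random stand-in weights; a trained CAE would learn these from imagery.
W_enc = rng.standard_normal((K, D)) / np.sqrt(D)
W_dec = W_enc.T  # tied weights, a common simplification

def encode_onboard(x):
    """Runs on the satellite: the latent code z is what gets downlinked."""
    return W_enc @ x

def decode_on_ground(z):
    """Runs at the ground station: reconstructs an approximation of x."""
    return W_dec @ z

x = rng.random(D)            # stand-in for a block of SAR product samples
z = encode_onboard(x)
x_hat = decode_on_ground(z)
ratio = x.size / z.size      # nominal compression ratio of the latent code
print(ratio)
```

Because only the encoder runs onboard, the satellite pays the cost of half the network, which is what makes the scheme compatible with constrained flight hardware.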

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Explainability for AI Applications in Space

Authors: Evridiki Ntagiou, Piotr Wilczyński, Dawid Płudowski, Agata Kaczmarek, Ramez Shendy, Krzysztof Kotowski, Jakub Nalepa, Paweł Skorupa, Artur Janicki, Przemysław Biecek, Aniello Fiengo
Affiliations: ESA, Warsaw University of Technology, KPLabs
AI systems must include components that enhance their explainability and interpretability. But what techniques to use, when to use them, and why? The answers to these questions can be found in the catalog of explainability techniques for space missions, which we will present in this session. A variety of AI systems are being developed for space applications, particularly in the fields of Earth Observation and Earth Science. While these systems demonstrate promising performance, significant challenges remain in deploying them in operational settings. Among these challenges are the lack of explainability and, consequently, a lower level of trust among users and stakeholders. In our work, we aim to help improve the transparency and trustworthiness of AI-driven solutions. In the first phase of our study, we compile a comprehensive catalog of explainability challenges across diverse use cases within the space domain. Some of the use cases are based on the AI for Automation Roadmap created by the European Space Operations Centre (ESOC) and were presented by ESA experts. Our document provides detailed guidelines on how to improve the explainability, understanding, and trustworthiness of AI models for space operations. It emphasizes four key data modalities integral to AI applications in this domain. First, computer vision, with examples including Synthetic Aperture Radar (SAR) imagery analysis and space debris detection. Second, time series data, on which applications such as satellite telemetry monitoring, space weather forecasting, and predictive maintenance rely heavily. Third, natural language processing, with use cases including intelligent assistants and automated generation of documents and procedures. Fourth, tabular data, covering tasks such as event classification and power consumption prediction. 
By addressing these modalities, our guidelines aim to provide operators with actionable insights to improve the explainability of AI systems, thereby strengthening user trust. In the second part of the study, we demonstrate the proposed framework and mitigation strategies through real-world scenarios from ESOC’s operations. One illustrative scenario involves the application of explainable AI attribution techniques to ship segmentation models. These models process both raw SAR images and corresponding Single Look Complex (SLC) images. Our work offers valuable insights into how and where the information about detected ships is embedded within the SAR data. By leveraging machine learning, we enable domain experts to validate their hypotheses and better understand the decision-making process of AI models. Furthermore, we organize a hackathon centered around a practical task: distinguishing between real and fake images of space debris, and later provide lessons learned from the task. Higher accuracy in differentiating between real and fake images results in increased trustworthiness of AI-driven solutions. This dual approach—identifying and addressing explainability challenges while showcasing practical applications—underscores our commitment to improving the trustworthiness of AI technologies within the space sector. By improving transparency and equipping users with the tools to validate AI outputs, we aim to bridge the gap between cutting-edge AI capabilities and the operational needs of stakeholders.
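Attribution techniques of the kind applied to the ship segmentation models estimate how strongly each input feature influences a model's output score. A minimal, library-free sketch using finite-difference saliency on a toy stand-in model (not the actual segmentation network; gradient-based methods compute the same quantity analytically):

```python
def model(x):
    """Toy stand-in score: depends mostly on features 0 and 2."""
    return 3.0 * x[0] + 0.1 * x[1] + 2.0 * x[2]

def saliency(f, x, eps=1e-4):
    """Estimate d f / d x_i by perturbing one feature at a time."""
    base = f(x)
    attributions = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        attributions.append((f(xp) - base) / eps)
    return attributions

attr = saliency(model, [0.5, 0.5, 0.5])
print(attr)
```

Mapped back onto SAR pixels, such attributions show where in the data the "ship" evidence lives, which is exactly the kind of hypothesis a domain expert can then validate.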

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: An Intelligent Tasking and Processing Chain for Generating Climate Resilience Insights Onboard

Authors: Lucy Donnell, Richard Tipper, Clare Rumsey, Hazel Jeffrey, Dr Murray Ireland, Cameron Anderson, Charlotte Crawshaw, Dr Luis Sanchez
Affiliations: Craft Prospect Ltd, Resilience Constellation Management Ltd, Omanos Analytics Ltd
Craft Prospect Ltd is leading a consortium to deliver the next mission in ESA’s OPS-SAT programme, OPS-SAT VOLT (Versatile Optical Laboratory for Telecommunications). This mission will launch in Q1 of 2026 and demonstrate multiple service concepts:
• Resilience Constellation Satellite System (RCSS) – Earth Observation as a Service. This demonstrates automation and delivery of data products from the onboard hyperspectral imager with advanced processing techniques and data assurance.
• Augmented Quantum Key Distribution from Space (AQKDS) – Cryptographic key delivery service.
• OPS-SAT Experimenter (OSE) – In-orbit laboratory enabling third-party experiments in optical communication, quantum technologies, hyperspectral tasking, and AI.
This presentation will focus on the first service concept: the RCSS, which will provide timely, continuous, location-specific information enabling businesses and governments to take appropriate actions to increase their resilience and avoid maladaptive responses to climate change. The service will deliver vital gap-filling Earth observation climate mitigation knowledge and services to country partners, and is a collaboration between:
• Craft Prospect (CPL) as mission prime, developing the satellite, service, and portal.
• Resilience Constellation Management (RCML) as the service user, delivering actionable EO insights.
• Omanos Analytics (OAL), providing in-country ground truthing and community engagement via their global networks with NGOs and communities.
• Third-party users, directly accessing the system or the services of RCML.
In 2019, CPL and RCML developed an initial satellite system concept based on CubeSats, including a distributed, smart, secure, taskable integrated sensing and communications network. 
Since then, building on EO application expertise, RCML has continued its engagement programme, focusing on opportunities to export UK capability for through-chain Earth observation, with over ten nations interested in the RCSS service. The RCSS advanced processing chain begins with a user-configured tasking request for the hyperspectral imager. On approach to the defined region of interest, onboard feature (typically cloud) detection is carried out by CPL’s intelligent sensor, the Context Imager (also known as the Forwards Looking Imager). This detection informs intelligent tasking of the hyperspectral instrument, making best use of the crucial power and data storage resources onboard. The next stage of onboard processing occurs once the hyperspectral data have been acquired: AI-driven feature detection generates actionable data products onboard (for example, a text alert and a compressed verification thumbnail indicating forest change or flood extent). These lightweight data products are delivered to the end user, providing low-latency, actionable insights to inform an appropriate on-ground response. This approach breaks the mould of space data services as the provision of advanced image data products to end users for further analysis; instead, the service targets end users in need of actionable knowledge about specific areas of interest. User engagement is strongly focused on the novel aspects of the OPS-SAT VOLT mission: onboard responsive tasking, inference generation, and salient communication of results. The RCSS processing chain, its components and capabilities will be presented in detail, with defined use case examples and benefits discussed.
Craft Prospect Ltd will also share opportunities for collaboration with the RCSS service, and participation in the OPS-SAT Experimenter (OSE) in-orbit laboratory, which will enable user defined experiments in optical communication, quantum technologies, hyperspectral tasking and applications of AI at the edge.
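The onboard tasking step described above (cloud detection on the forward-looking imager gating the hyperspectral acquisition) can be sketched as a simple threshold rule. This is an illustrative assumption, not the mission's actual logic: the function name and the 30% cloud threshold are invented for the sketch.

```python
import numpy as np

def should_task_hyperspectral(cloud_mask: np.ndarray,
                              max_cloud_fraction: float = 0.3) -> bool:
    """Decide whether to trigger a hyperspectral acquisition for a region
    of interest, given a boolean cloud mask from the forward-looking
    imager. Tasking is skipped when the cloudy fraction exceeds the
    threshold, conserving onboard power and data storage."""
    cloud_fraction = float(cloud_mask.mean())
    return cloud_fraction <= max_cloud_fraction

# Mostly clear scene: 10% cloudy pixels -> the imager is tasked.
clear_scene = np.zeros((100, 100), dtype=bool)
clear_scene[:10, :] = True
print(should_task_hyperspectral(clear_scene))  # True
```

In a real system the decision would also weigh downlink schedule and storage state; the point of the sketch is only that the gating decision is cheap relative to acquiring and storing a full hyperspectral cube.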

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Proving the feasibility of continuous AI processing on EO spacecraft

Authors: Tom Hendrix, Juan Romero-Cañas, Pablo Tomas Toledano González, Aubrey Dunne, David Rijlaarsdam
Affiliations: Ubotica Technologies
The rise of onboard processing will enable new operational paradigms for Earth Observation spacecraft by interpreting payload data at the edge and enabling the system to autonomously act on this data. While current research has shown the feasibility of AI hardware acceleration in spacecraft, quantitative measurements of accelerator hardware performance in orbit remain underreported. We present the results of the commissioning campaign of the CogniSAT-XE2 AI hardware accelerator on the CogniSAT-6 satellite, confirming for the first time the feasibility of continuous edge AI processing on resource-constrained systems such as a 6U CubeSat. These findings prove the feasibility of intensive processing workload execution on spacecraft, paving the way for intelligence-enabled satellites.
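Sustained (rather than burst) throughput is the quantity that distinguishes continuous edge processing from one-off demonstrations. A minimal way to measure it, with a matrix multiply standing in for the real accelerator workload (the actual CogniSAT-XE2 benchmarks are not public in this abstract), might look like:

```python
import time
import numpy as np

def measure_sustained_throughput(infer, batch, duration_s=2.0):
    """Run inference repeatedly for `duration_s` seconds and report
    inferences per second -- a sustained figure, not a single-shot
    latency measurement."""
    n, start = 0, time.perf_counter()
    while time.perf_counter() - start < duration_s:
        infer(batch)
        n += 1
    elapsed = time.perf_counter() - start
    return n / elapsed

# Stand-in "model": a single dense layer as a matrix multiply.
weights = np.random.rand(256, 10).astype(np.float32)
batch = np.random.rand(32, 256).astype(np.float32)
ips = measure_sustained_throughput(lambda x: x @ weights, batch, duration_s=0.5)
print(f"{ips:.0f} inferences/s")
```

On a power-constrained 6U CubeSat the interesting extra variables are thermal drift and power draw over the measurement window, which a ground-side sketch like this cannot capture.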

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: The Intuition-1 mission: Explaining and improving on-board hyperspectral image analysis using XAI for Earth observation

Authors: Hubert Baniecki, Przemyslaw Biecek, Tymoteusz Kwiecinski, Dr. Nicolas Longepe, Jakub Nalepa, Lukasz Tulczyjew, Agata M. Wijata, Vladimir Zaigrajew
Affiliations: ESA, Silesian University of Technology, KP Labs, University of Warsaw, Warsaw University of Technology
Hyperspectral missions have garnered significant attention from researchers and industry due to their wide-ranging applications, including precision agriculture, surveillance, event detection, environmental monitoring, and many more use cases, with the ultimate goal of observing Earth and providing actionable items based on in-orbit data. These missions capture detailed data across tens or hundreds of contiguous narrow spectral bands. However, the large data volume of hyperspectral images (HSIs) poses challenges for their efficient transfer and storage. Transmitting raw hyperspectral images to the ground for processing is often inefficient, as only a subset of the bands or features typically contains the most relevant information for a particular scenario. Moreover, transferring such massive datasets can be time-consuming and costly, impacting mission response times, especially in use cases like natural disaster management or wildfire detection, where speed is critical and information quickly becomes outdated. To address these challenges, on-board data processing has emerged as an extremely important Earth observation solution. By performing data analysis directly on the satellite, on hardware-, memory-, and compute-constrained edge devices, it becomes possible to compress data in a “smart” way (e.g., by automatically determining and focusing the analysis on specific regions of interest) and to extract actionable insights before any transmission. This approach reduces the data volume sent to Earth and accelerates decision-making. Furthermore, when several co-operating satellites are available, a variety of tip-and-cue applications offer exciting opportunities to build smart “swarms” of satellites that exchange useful information, e.g., pointing larger satellites to specific areas on Earth that should be observed in more detail.
Various multi- and hyperspectral satellites, such as ESA’s Φ-Sat-1 and Φ-Sat-2, EnMAP (the German spaceborne imaging spectrometer mission), and Intuition-1 by KP Labs (Poland), have been launched into space or are under preparation. The Intuition-1 mission (launched in November 2023) aims to demonstrate the capabilities of a 6U nanosatellite equipped with a hyperspectral optical instrument and advanced on-board artificial intelligence (AI) processing. Intuition-1 incorporates a smart compression system that prioritizes data transmission based on cloud cover in the target area. The mission is designed to be flexible, supporting in-orbit updates to AI algorithms and accommodating new use cases during its operation, effectively making it a “flying laboratory”. The hyperspectral optical instrument operates in push-broom mode, capturing data in the visible to near-infrared (VIS-NIR) range (465–940 nm) across 192 spectral bands with a ground sampling distance of 25 m at a 600 km orbit. The Leopard Data Processing Unit manages raw image acquisition, data storage, preprocessing, compression, and AI processing, and also handles communications. Leveraging the Xilinx Vitis AI framework, Leopard accelerates AI inference on an FPGA, achieving energy-efficient processing with the capability for in-flight model reconfiguration. Although Intuition-1 can be considered a “flying laboratory”, its initial use case focuses on estimating soil parameters from hyperspectral data. In this talk, we will discuss the entire analysis chain designed, implemented and thoroughly validated (also using data-level digital twins, reflecting specific atmospheric conditions and in-space noise that might be expected during the operations of Intuition-1).
Here, the pivotal aspects concern detecting clouds and bare soil using (extremely fast) image processing and analysis techniques, followed by the AI-powered extraction of soil parameters. Knowing such parameters for specific parcels can help optimize the fertilization process, ultimately leading to better and more sustainable crops. The in-orbit soil parameter extraction builds upon a classic machine learning approach, in which feature extractors are followed by machine learning regression models, specifically random forests. Although this algorithm is the current state of the art, as presented during the HYPERVIEW Challenge organized by KP Labs and the European Space Agency, it requires extracting thousands of image-level features which are later fed into the model. Hypothesizing that not all of these features are critical for high-quality regression, we will discuss how explainable artificial intelligence (XAI) technology can be used not only to understand the model predictions, but also to optimize the model for potential on-board deployment. We will present and walk through the insights learned using a methodology for examining AI models and their robustness (in Intuition-1, trained from weakly-labeled datasets) operating on multi-/hyperspectral images. We believe that exploiting XAI techniques for on-board AI models (both classic and deep learning-powered) will become an important and widely adopted tool to build trust in such models, as well as to robustify and optimize them for efficient in-orbit operations.
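The feature-pruning idea behind this kind of XAI analysis can be illustrated with permutation importance: shuffle one feature at a time and watch how much the regression error grows. The sketch below is not the Intuition-1 pipeline; it uses a synthetic dataset and a least-squares model standing in for the random forest, and all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 samples, 50 image-level features, but only the
# first three actually drive the (soil-parameter-like) target.
X = rng.normal(size=(200, 50))
y = X[:, 0] * 2.0 + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

# Simple least-squares regressor standing in for the random forest.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X_, y_):
    return float(np.mean((X_ @ coef - y_) ** 2))

baseline = mse(X, y)

# Permutation importance: shuffle one feature at a time and record the
# increase in error; uninformative features barely change it.
importance = np.empty(X.shape[1])
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[j] = mse(Xp, y) - baseline

# The three informative features dominate the ranking.
print(np.argsort(importance)[-3:])
```

Features whose permutation barely moves the error are candidates for removal, which is exactly what matters when thousands of image-level features must fit a compute-constrained on-board model.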

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: E.01.02 - POSTER - Earth Observation in Practice: Linking Public and Commercial Data for Earth Action

Recent advances in Earth observation – more sensors, open data policies, and refined analysis methods – have yielded considerable progress in environmental monitoring. These advances can support land managers, policy makers, and other end users, and enable impactful Earth Action. Future developments, such as the increasing range of public and commercial sensors, will further accelerate this progress; but also raise new challenges around the operational delivery of EO-based solutions. In particular, taking concepts from research and development into active practice is not simple. End users often have requirements that are unexpected, and not satisfied by conventional methods or products. Often, end users require spatial, temporal, and thematic detail at a level unobtainable from public missions alone (e.g. Sentinel-1, -2, Landsat 8 and 9), necessitating the incorporation of commercial sensors.

In line with ESA’s upcoming EO Science Strategy 2040, this session aims to explore the interface of environmental monitoring as it crosses from the research and design to practical domains. We welcome presentations that: demonstrate Earth observation-based work undertaken in collaboration with all manner of end users; discuss the issues and challenges commonly encountered in these collaborations; and/or examine the synergistic use of public and commercial EO data. Ultimately, this session aims to develop best practices for collaborations between the EO and end user communities, ensuring that investments into research and new missions can be rapidly and meaningfully implemented into end user portfolios.


Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Using Earth Observation Data and Deep Learning for Post-Fire Landslide Susceptibility Assessment: The Mt. Mario Case Study (Rome, July 31, 2024)

Authors: Serena Moretto, Saverio Romeo, Giulia Cammarata, Lidia Saavedra de Miguel
Affiliations: Serco Italia S.p.A., ISPRA
On July 31, 2024, a wildfire burned through 6.4 hectares of Mt. Mario in Rome, altering the landscape and potentially increasing its vulnerability to landslides. Landslide phenomena along the slopes of Mt. Mario have been studied extensively by numerous researchers over the years. Several historical landslides have been documented in the area (e.g., Brancaleoni et al., 2003; Bozzano et al., 2006; Amanti et al., 2008), some of which have recently reactivated. These events primarily involve the clay soils of the Monte Vaticano and Monte Mario Formations. Post-fire analysis, focused on delimiting the burned areas, was conducted by comparing pre- and post-event multispectral Sentinel-2 images. To improve the accuracy of these results, a Single-Image Super-Resolution model based on deep learning techniques was implemented, upscaling all 12 spectral bands of a single Sentinel-2 scene from the original spatial resolution (10, 20, and 60 m/px) to the target 1 m/px. Super-resolution Sentinel-2 images demonstrate how enhanced detail can significantly improve the identification of fire-affected zones and the accuracy of perimeter delineation. This level of precision can be crucial for assessing post-fire impacts and planning mitigation strategies. Satellite analyses combined with geological field studies have led to the conclusion that the slope, already prone to historical landslide processes, is now increasingly susceptible to landslides due to the state of its shallow subsurface. Indeed, a direct consequence of vegetation loss is the reduced drainage capacity of the soil, which, conversely, becomes more vulnerable to the erosive action of rainfall. This study emphasizes the practical applications of EO-based solutions, the challenges in transitioning from research to operational use, and the importance of detailed and accurate data for effective environmental monitoring and natural hazards mitigation.
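One standard way to delineate burned areas from pre- and post-event multispectral scenes (a common technique, not necessarily the exact band combination used in this study) is differencing the Normalized Burn Ratio; a minimal sketch:

```python
import numpy as np

def dnbr_burn_mask(nir_pre, swir_pre, nir_post, swir_post, threshold=0.27):
    """Flag burned pixels by differencing the Normalized Burn Ratio of
    pre- and post-fire scenes: dNBR = NBR_pre - NBR_post.
    For Sentinel-2, NIR is band 8 and SWIR band 12; 0.27 is a commonly
    cited moderate-severity dNBR threshold (USGS/FIREMON classification)."""
    nbr_pre = (nir_pre - swir_pre) / (nir_pre + swir_pre)
    nbr_post = (nir_post - swir_post) / (nir_post + swir_post)
    return (nbr_pre - nbr_post) >= threshold

# Pixel that burned: NIR collapses and SWIR rises after the fire.
print(dnbr_burn_mask(np.array([0.4]), np.array([0.1]),
                     np.array([0.15]), np.array([0.3])))  # [ True]
```

Applied to super-resolved imagery as described above, the same thresholding simply runs on a denser pixel grid, which is where the sharper perimeter delineation comes from.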

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: A Retrogressive Analysis of High-Resolution Satellite Imagery to Understand Irrigation History and Archaeology in Southern Iraq

Authors: Hope Irvine, Dr Louise Rayne, Dr Francesca Cigna, Dr. Ing. Deodato Tapete, Dr Michelle de Gruchy, Prof. Jaafar Jotheri, Mrs Jennifer Makovics
Affiliations: Newcastle University, National Research Council (CNR) - Institute of Atmospheric Sciences and Climate (ISAC), United Nations Satellite Centre (UNOSAT)
Remote sensing is increasingly being used to identify and record archaeological sites and traditional landscapes. Given the wealth of high-resolution imagery now available, from declassified historical imagery to commercial multispectral data, it is possible to analyse how these locations have evolved. This study presents the development of irrigation since ancient times and into the present day in an area of southern Iraq bounded between the western desert and the modern course of the Euphrates (for the full regional-scale remote sensing analysis, see Cigna et al., 2024), with a particular focus on the spring of Ayn Sayad and nearby irrigation systems. A complex palimpsest of irrigation systems can be identified in satellite imagery, including features pre-dating the 1st millennium AD and gravity-flow canal systems originating in the Sasanian period (224–651 AD) drawing from springs. Some of these systems were still in use until very recently. Using Hexagon imagery (1970s-80s), the layout of the irrigation systems was recorded prior to the more recent intensification of agriculture. The Hexagon imagery was orthorectified using ERDAS, with RMSEs of 1.4-2.9. With a resolution of up to 0.6 m, fine details of features could be mapped from the Hexagon data. A standardised image interpretation approach was developed to separate the systems into a) ancient canals, b) historical canals, c) 20th-century canals, and d) modern canals; the mapping of these canals provides a proxy for field system locations. These systems are now at risk due to climate change and modern development. While traditional canals may offer a sustainable alternative to modern pumping methods in the face of climate change, socio-environmental issues pose a threat to them (Rost et al. 2011: 205-206). An analysis of Landsat (from 1985) and Sentinel-2 (from 2018) imagery obtained via Google Earth Engine allowed the impact of the recent changes on these systems to be mapped and measured.
Using Pléiades Neo imagery and the sequences of imagery available on Google Earth, it was possible to identify the land use activities involved, for example the production of bricks. Six-band PNEO imagery was purchased from Airbus DS and processed in ArcGIS Pro. The multispectral bands were stacked and pansharpening was applied using the 0.3 m resolution panchromatic band. This research contributed to the Bilateral Agreement CNR/Royal Society (United Kingdom), project “Vanishing archaeological landscapes under anthropogenic and climate change threats” (Principal Investigators: F. Cigna, CNR-ISAC, and L. Rayne, UNEW), 2023-2024 (Cigna et al., 2024). Fieldwork undertaken by Jotheri was funded by Rayne’s Newcastle University Academic Track (NUACT) Fellowship. Hexagon rectification was undertaken by Lavris Makovics and funded by a NUACT-scheme PhD scholarship. Irvine’s MSc and PhD are funded by EPSRC as part of the Geospatial Data Science Centre for Doctoral Training (CDT) with Newcastle University. References: F. Cigna et al., "Assessing Anthropogenic and Climate Change Threats to Archaeological Landscapes in Iraq Using Earth Observation," 2024 IEEE Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), Oran, Algeria, 2024, pp. 424-428, doi: 10.1109/M2GARSS57310.2024.10537372. Rost, S., Hamdani, A. and George, S. (2011) ‘Traditional Dam Construction in Modern Iraq: A Possible Analogy for Ancient Mesopotamian Irrigation Practices’, Iraq, 73: 201–220.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: TransparenC Explore Map: An intuitive end-to-end mapping solution

Authors: Ron Hagensieker, Nicklas Wallendahl, Dr. Javier Sáez Gallego, David Inga, Henry Albrecht
Affiliations: TransparenC
The importance of a compelling story in the mapping and communication of data is often overlooked in the Voluntary Carbon Market (VCM). Market adoption has been impeded in large part by the immense reputational damage caused by questionable carbon projects and unrefined methodologies. Concerns about greenwashing now overshadow projects of merit and good practice - which make up the bulk of the market. In addition, within the voluntary carbon market, the communication mediums of choice are either lengthy technical PDF-based reports or high-level generic brochures. Neither medium does a good job of convincing an already skeptical audience of project integrity and impact. We believe storylined mapped data can narrow the gap between the complexity of carbon projects and people’s ability and willingness to understand geospatial information, effectively eroding unfounded greenwashing concerns. The TransparenC Platform - a data visualization and risk mitigation tool for nature-based carbon projects - is designed to address this problem by leveraging powerful remote sensing data to tell tangible and objective stories of project impact, providing a critical medium for combating greenwashing concerns, especially with the general public and non-specialized investors. Our primary focus is to use objective datasets, not for carbon credit ratings or dMRV (digital Monitoring, Reporting and Verification), but to increase understanding of project impact. We fundamentally believe that - in the absence of regulation and clear guidance - effective marketing is the key driver to unlock required capital. We work with a variety of VCM players - from investors, developers, intermediaries and rating agencies to buyers. Our clients have different motivations, but share an overarching need to access complex information in an inviting and efficient manner that requires neither in-depth methodological knowledge nor a steep learning curve.
In our first four months since launch (June 2024), we have visualized over 3.5 million hectares across five project types (ARR, REDD+, Mangroves, Seagrass, IFM) and 13 globally dispersed nations. Our interactive visualizations simplify carbon market impact and credit quality: 91% of people gain a better understanding of project impact and 87% of people are more inclined to trust its validity when visualized by TransparenC (independently run survey of 500 randomly selected employees). Our platform provides an attractive alternative for communicating project impact. To achieve this, we generate three modules: 1) a storyboarded “Project Page”, 2) an interactive “Explore Map”, and 3) a “Dashboard” of statistical graphs. Condensed information is extracted from varying sources, including 1) project reports, 2) our spatial database housing archives of EO and other geospatial data, and 3) in-house analytics. We built our platform with automation, scalability and efficiency at its core. Our data flow is triggered by ingesting project documentation (i.e. a PDF report) to conduct live extraction of data via LLMs, filling a predefined set of fields that form the basis of various queries and processing chains, to then condense information based on our client’s needs. For example, after polygons of project sites and relevant time windows are automatically identified, we query publicly available EO imagery such as Sentinel-2, which is then visualized as a multi-temporal layer in our Explore Map. Furthermore, Sentinel-2 serves as input to our pipeline to create a time series of canopy height layers (and various other derivatives). Lastly, these components undergo basic spatial queries in order to create graphs on trends of NDVI, canopy heights, etc. Parallel to this process, the Project Page is populated with KPIs and critical qualitative information lifted from the project documentation.
The final outcome is a highly condensed and intuitive platform tailored to clients’ needs, fed by a variety of free and commercial sources of geospatial data. Central components of our platform include Strapi as CMS, facilitating everything from ingestion to internal data handling. We leverage Prefect to orchestrate our workflows, including both complex CPU and GPU computations, which we distribute to Azure and Modal Cloud respectively. We also utilize Terraform with GitHub Actions for infrastructure management. We rely on openly accessible tools including GDAL, Fiona/Shapely, GeoPandas, etc., and Mapbox GL through React. Furthermore, we implement current research, such as Nico Lang’s global canopy height model, into our solution. Aside from our “off-the-shelf” layers using Analysis Ready Data, OpenStreetMap, etc., our platform enables extrapolation of targeted insights from a project basis to regional and continental scales. For example, it allows us to horizontally scale the detection of illegal airstrips from the Paraguayan Chaco to different areas of South America. At this stage, we are pursuing the integration of commercial EO imagery to create vertical deep dives as well. Low confidence in the detection of airfields, variable estimates of canopy heights, or saturated above-ground biomass levels could all trigger the acquisition and integration of relevant VHR data (SAR or optical). It is therefore important for us to remain targeted, balancing the financial burden of VHR data acquisition against the additional value VHR can provide over the freely available data from Copernicus and Landsat.
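The NDVI trend graphs mentioned above boil down to fitting a slope to a per-pixel or per-polygon time series; a minimal sketch with illustrative values (this is a generic technique, not TransparenC's actual analytics code):

```python
import numpy as np

def ndvi_trend(dates_days, ndvi):
    """Least-squares linear trend of an NDVI time series, expressed in
    NDVI units per year -- the summary statistic behind a 'trend' graph."""
    years = np.asarray(dates_days, dtype=float) / 365.25
    slope, intercept = np.polyfit(years, np.asarray(ndvi, dtype=float), 1)
    return slope, intercept

# Five observations over two years of slowly greening vegetation.
days = [0, 180, 365, 545, 730]
ndvi = [0.52, 0.55, 0.58, 0.61, 0.64]
slope, _ = ndvi_trend(days, ndvi)
print(round(slope, 3))  # ~0.06 NDVI per year
```

Cloud masking and compositing (e.g. monthly medians) would precede a fit like this on real Sentinel-2 data; the trend itself is the cheap final step.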

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Availability and use of Copernicus data in the commercial ArcGIS Platform

#stac

Authors: Guenter Doerffel
Affiliations: Esri Europe
Services based on Copernicus data in the ArcGIS Living Atlas have been available since 2018 and receive millions of requests annually. They have become an important asset for users and organizations in many industries, as well as in education and research. This presentation will focus on the forms in which the data, and access to it, are offered today, and on what we know about their usage. Capabilities available through user interfaces (mobile, web, desktop) and developer tools (Python, JavaScript, REST) will be used as samples. Generic viewer and educational apps, and apps for derived results (like global land use time series), will be referenced. The main purpose of the presentation is to share industry requirements and experiences obtained from scaling multi-modal enterprise implementations that are not tightly linked with the EO domain as such. On-demand deep learning and analytics against the services will be discussed. Experiences with integrations into platforms like Creodias and CODE-DE will be summarized. The latest additions, such as accessing the same resources through the Copernicus Data Space infrastructure (via STAC and the s3-compatible cloud stores) or offerings like the EEA Discomap portal, will complete this overview. A note for the science committee and any reader of this: easy-to-use integration into a standardized and scaling system that offers access to Copernicus data from "any desk" is massively requested and very much appreciated by our customers. This presentation is more of an experience summary from many years of engagement with these commercial enterprise users, ranging from heavy EO users (like the oil and gas industry) to complete newbies who only start engaging because they have access to pre-processed data and methodical descriptions of how to use it. Insights offered through this presentation might be valuable for attending EO companies, and offer us the ability to learn more about the needs of the community at Living Planet 2025.
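Access via STAC typically means posting a search request against a catalogue's /search endpoint. The sketch below only builds the request body; the collection id and bounding box are illustrative placeholders (check the current Copernicus Data Space STAC documentation for exact collection ids and the endpoint URL).

```python
import json

# Body of a POST to a STAC API /search endpoint. Collection name, bbox
# and date range are assumptions for illustration, not verified values.
search_body = {
    "collections": ["SENTINEL-2"],
    "bbox": [13.0, 52.3, 13.8, 52.7],  # lon/lat box, here around Berlin
    "datetime": "2024-06-01T00:00:00Z/2024-06-30T23:59:59Z",
    "limit": 10,
}
print(json.dumps(search_body, indent=2))
```

The response is a GeoJSON FeatureCollection of items whose assets carry hrefs into the s3-compatible stores, which is what makes the same resources reachable from desktop, web, and notebook clients alike.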

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Paving the Impact Pathway: Capturing the Benefits of ESA’s Early R&D EO Investments

Authors: Greg Sadlier, Alyssa Frayling, Luca Niccolai, Ewan Dallas
Affiliations: know.space
Amidst mounting climate crises, socio-economic challenges, and the growing importance of technological sovereignty, the European Space Agency (ESA) continues to invest in early-stage Research and Development (R&D) of Earth Observation (EO) technologies. These investments are crucial for strengthening Europe’s competitiveness, yet the socio-economic impacts of such early R&D activities are often difficult to quantify. Reliable impact assessments are essential for ensuring accountability, demonstrating return on investment, and securing sustained investment in these key activities that support European ambitions in Earth observation. To address this challenge, know.space has developed a conceptual framework to assess the socio-economic benefits of ESA’s early R&D efforts in the EO sector. The framework outlines a structured process for impact assessment, covering preparation, scoping, planning, data collection, analysis, reporting, and dissemination. It includes a set of indicators and impact categories aligned with the European Commission’s Better Regulation Toolbox to ensure consistency and comparability across assessments, along with detailed metrics, measurement rationale, defined data sources, and timelines for data collection. Tested across six upstream-segment case studies covering varying technology maturities, applications, and ESA Member State organisations, the framework is designed to be applicable across the broad range of EO early R&D activities. The test case studies are: 1. Large Deployable Reflector for Earth Observation (LEOB) 2. Sentinel Next-Generation Land & Ocean Optical Imaging Architecture 3. Hybrid Atom Electrostatic System for Satellite Geodesy Follow-On 4. Nightwatch - New Earth Observation Mission Ideas (NEOMI) 5. Digital Altimeter Integrated Circuit Design 6. PHI-SAT 2 Artificial Intelligence for Earth Observation. The impact analysis for each case study is synthesised into a concise report, outlining the funded activity, its applications, and the socio-economic benefits realised. A summary infographic adds visual storytelling to facilitate dissemination, making the results accessible to a wide-ranging audience, including non-experts. This presentation will introduce the framework, highlighting its relevance and addressing the methodological challenges encountered during case study testing, such as issues with data availability, accessibility, and impact quantification. It will outline how these challenges were mitigated and the subsequent improvements applied to the framework. Overall, the conceptual framework provides an innovative, comprehensive tool for assessing the full range of impacts from early EO R&D activities. It enables effective identification, assessment, and dissemination of impacts to secure sustained public funding, which is essential for ensuring continued investment in these, at times neglected, critical activities. It also informs policy, industry, and innovation strategies in EO, helping strengthen Europe’s technological sovereignty.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Transforming Biomass Estimation for Australia’s Carbon Market: Leveraging ESA’s Earth Observation data and Machine Learning

Authors: Georgina Doyle, Mr Sebastian Robertson
Affiliations: The University of New England, Earth Observation Australia
Australia’s leadership in carbon market innovation requires scalable methodologies to meet the rigorous standards of the Australian Carbon Credit Unit (ACCU) Scheme. With its vast and ecologically diverse landscapes, Australia plays a pivotal role in the global carbon market, offering unique opportunities for both domestic and international investors to develop carbon credit projects. Birdi, a leader in drone technology and GIS systems with deep expertise in capturing and analysing geospatial data, has partnered with the University of New England to present a pioneering platform that integrates advanced Earth observation technologies. Ensuring the derived model is prepared for future data partnerships with existing Copernicus constellations and ESA’s upcoming ROSE-L and BIOMASS P-band sensors, this product fuses high-resolution drone-based LiDAR datasets with optical imagery from Sentinel-2, C-band SAR from Sentinel-1, and L-band SAR from JAXA. In commercialising this biomass modelling method, Birdi aims to address the complexities industry faces in assessing biomass for carbon project origination and ongoing monitoring across diverse ecosystems in the Southern Hemisphere. The 35,000-hectare pilot study demonstrates the feasibility and scalability of site-specific estimations critical for operational success in carbon credit projects. Five distinct models were developed, integrating optical and SAR data collected within 1-3 months of drone LiDAR capture to ensure temporally aligned biomass estimations. Biomass calculations from LiDAR are tailored to each site, leveraging tree segmentation and a height-cover index informed by ecosystem and vegetation characteristics derived from the scientific literature. Results revealed that increasing the geographic range of the LiDAR sample meaningfully improved prediction performance, while lower-resolution models offered broader explanatory power.
Notably, the fusion of L-band SAR data with Sentinel-2 indices outperformed other configurations, highlighting the value of combining optical and SAR datasets for detailed vegetation analysis. Leaf Area Index (LAI) and the Inverted Red-Edge Chlorophyll Index (IRECI), calculated from Sentinel-2 spectral bands, consistently emerged as critical predictors, highlighting the importance of vegetation structural information for biomass modelling. Birdi leverages API integrations to deliver the latest ESA satellite data insights directly to commercial users via its advanced geospatial platform. By utilising real-time information and localised training data, our product provides actionable intelligence, empowering carbon market clients to assess feasibility, optimise operational workflows, and make informed, data-driven decisions based on real-time environmental conditions. The Birdi Platform empowers stakeholders, including foreign investors, to leverage Australia’s unique ecosystems for high-impact carbon credit projects, further cementing the nation’s role as a hub for innovation and sustainability in the global carbon economy. This development highlights the transformative potential of upcoming ESA sensors, such as ROSE-L and BIOMASS, which promise enhanced assessments of vegetation structure and refined biomass modelling. Our product delivers valuable insights for carbon credit stakeholders while fostering international collaboration through knowledge transfer programs. Such advancements are essential for driving innovation and bolstering global environmental monitoring in a rapidly evolving carbon market.
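IRECI, one of the predictors highlighted above, is computed directly from Sentinel-2 reflectances as (B7 - B4) / (B5 / B6) (Frampton et al., 2013); a minimal sketch with illustrative reflectance values:

```python
import numpy as np

def ireci(b4, b5, b6, b7):
    """Inverted Red-Edge Chlorophyll Index from Sentinel-2 reflectances:
    IRECI = (B7 - B4) / (B5 / B6), combining red (B4, 665 nm) with the
    red-edge bands B5 (705 nm), B6 (740 nm) and B7 (783 nm)."""
    return (b7 - b4) / (b5 / b6)

# Dense vegetation shows a strong red-edge rise, hence a high IRECI.
print(round(ireci(0.03, 0.08, 0.25, 0.35), 2))
```

Because the index leans on the steepness of the red-edge slope, it tracks canopy chlorophyll and structure more robustly than red/NIR ratios alone, which is consistent with its role as a biomass predictor here.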

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Sat4GAIA: Greek National Satellite Space Project: Axis 3 Land Monitoring Service

Authors: Vasiliki Charalampopoulou, Vassiliki Charalampopoulou, Polychronis Kolokoussis, Athanassios Ganas, Varvara Tsironi, Stavroula Alatza, Alexandros Apostolakis, David Parastatidis, Zina Mitraka, Dimitris Tsirantonakis, Maria Gkolemi, Nektarios Chrysoulakis, Charalabos Ioannidis, Anastasios Doulamis, Anastasios Temenos, Styliani Verykokou, Sofia Soile, Eugenia Papatheodorou, Nikolaos Argyropoulos, Aggeliki Kyriou, Konstantinos Nikolakopoulos, Georgios Vergos, Vassilios Foteinos
Affiliations: GEOSYSTEMS HELLAS, Foundation for Research and Technology - Hellas, National Observatory of Athens, National Technical University of Athens, University of Patras, CONSORTIS GEOSPATIAL L.P, WINGS ICT Solutions
The “SAT4GAIA Greek National Satellite Space Project: Axis 3 Land Monitoring Service” project is part of the Greek National Satellite Space Program which aims at revolutionizing the way remote sensing data and services are used with a great environmental, technological and societal impact. The aim of the project is to design, develop, validate, integrate, and deliver a holistic Land Monitoring Service for the entire Greek territory, utilizing a series of multisource remote sensing data from the Greek microsatellite program (AXIS 1&2), as well as external sources such as IoT sensor data and Copernicus data. A partnership of 8 industrial and academic partners will implement an end-to-end service value chain to address critical environmental and urban challenges across three core modules: Land Use and Land Cover (LULC) Mapping, Deformation Monitoring, and Urban Analytics. All modules and their outputs will be integrated into the Greek governmental hub IT infrastructure, helping in the formulation of national policies for sustainable development.

Land Use and Land Cover Mapping

The Land Use and Land Cover Mapping module provides a critical solution and – simultaneously – a groundbreaking innovation for monitoring and analyzing Greece’s land resources. The land cover classification service uses advanced deep learning models to accurately categorize land into over 20 distinct classes with pixel-level accuracy exceeding 80%, while the change detection service utilizes image differencing, statistical validation and post-classification analysis to monitor and evaluate temporal changes effectively. The classification maps along with the change analysis maps will be produced yearly from 2015 to 2024 and semi-annually from 2025 onward and will be crucial resources for urban management and policy planning.
Deformation Monitoring

The Deformation Monitoring module utilises remote sensing techniques, enabling the continuous observation of the Earth's surface to detect changes or patterns of displacement over time. Technologies such as SAR, optical and multispectral imagery, and LiDAR time series allow for the precise detection and analysis of movements. Persistent Scatterer Interferometry (PSI) and SBAS techniques applied to synthetic aperture radar (SAR) C-band imagery will be utilized to map ground displacement with millimeter-level precision. By comparing results over time, these techniques enhance decision-making and proactive management. Additionally, an on-demand landslide tracking service will be developed based on the offset tracking technique. The method applies a cross-correlation algorithm between a SAR reference image and secondary images, and is based on the feature similarity of homologous intensity patches. The landslide boundaries and the displacement velocity will be the outputs of this service, offering valuable information to stakeholders. These services contribute to the proactive monitoring of geohazards, such as landslides, subsidence, build-up of tectonic strain and tectonic offsets, offering critical insights for infrastructure resilience and disaster preparedness.

Urban Analytics Services

The Urban Analytics Module consists of three products, namely: a) Urban Heat Islands, b) Urban and Public Health and c) Air Quality. By providing critical insights, the Urban Analytics Services enable cities to implement sustainable strategies, boosting public well-being, and fostering resilient urban environments, which enhance quality of life and attract investments.
By addressing urban heat, health, and air quality, these services align closely with European and global policies such as the European Green Deal and the United Nations Sustainable Development Goals (SDGs), and Greece's commitments to the EU's climate and energy framework, enhancing urban resilience and national public health policies. Moreover, the Urban Analytics Service aims to support national policies such as the National Plan for Energy and Climate (Ministry of Environment) by improving urban heat management, public health monitoring, and air quality, towards achieving Greece's environmental and health objectives. Urban Heat Islands (UHIs) refer to the phenomenon of the significant increase in temperature observed in the urban environment compared to rural areas, and are closely linked to various human activities that alter the natural environment. Factors such as extensive construction, reduced vegetation, lack of green areas and the development of industrial activities are pivotal to the escalation of the Urban Heat Island. By utilizing remote sensing tools to monitor this phenomenon, areas with high heat stress can be identified and dedicated steps can be taken to improve urban living conditions and reduce UHI impact. Poor urban living conditions, overcrowding in urban centres, climate change and socioeconomic factors contribute to the worsening of public health conditions. Remote sensing technology offers a wealth of data for monitoring these factors that impact urban and public health. This information and its analysis are essential for developing targeted interventions, enhancing urban planning strategies and promoting healthier and more resilient cities. The proposed Air Quality service will offer significant advantages in various applications, including environmental management, urban monitoring, management and planning, climate change monitoring as well as disaster management.
The Air Quality service will provide several benefits and assets to policymakers, urban planners from the public and private sectors, environmental organisations and governmental agencies as well as researchers from academia. The service will enhance Copernicus data by providing a much higher resolution product derived from them. Similar available data and services lack either the temporal or the spatial resolution required for real-time monitoring at city scale. The Air Quality service will address these issues by producing air pollutant concentration maps at a spatial resolution of 100 m x 100 m while keeping the temporal resolution near real-time. In addition, Greek cities will benefit from the installation of networks of air quality monitoring ground stations that can contribute to tracking PM2.5, PM10, NO2, SO2, CO, and O3 concentrations and send timely alerts to help protect sensitive groups of the population when levels rise above safety values. The data will complement and geographically enhance the existing Air Quality Measurement Stations, creating a grid of low-cost multi-sensor stations through which Air Quality measurements in many more geographical areas may be gathered, finally allowing estimation of the total Air Quality Index with higher spatial accuracy. The project, spanning a duration of 24 months with an expected completion date in June 2026, has a profound impact across multiple domains, addressing key environmental, societal, and technological challenges. The project is being carried out under an ESA Contract in the frame of the Greek National Satellite Space Project. “Views expressed herein can in no way be taken to reflect the official opinion of the European Union/European Commission/European Space Agency/Greek Ministry of Digital Governance. Views and opinions expressed are those of the author(s) only and the European Union/European Commission/European Space Agency/Greek Ministry of Digital Governance cannot be held responsible for any use which may be made of the information contained therein.”
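The offset tracking technique mentioned in the Deformation Monitoring module rests on maximising the cross-correlation between intensity patches of a reference and a secondary SAR image. Below is a toy, integer-pixel sketch of that search; real implementations oversample the patches to reach sub-pixel precision, and the function name is illustrative:

```python
import numpy as np

def patch_offset(reference, secondary, max_shift=3):
    """Estimate the integer (row, col) offset of `secondary` relative to
    `reference` by maximising normalised cross-correlation over a small
    search window (a toy version of SAR offset tracking)."""
    best, best_score = (0, 0), -np.inf
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            # Shift the secondary patch back by the candidate offset.
            shifted = np.roll(np.roll(secondary, -dr, axis=0), -dc, axis=1)
            a = reference - reference.mean()
            b = shifted - shifted.mean()
            score = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
            if score > best_score:
                best, best_score = (dr, dc), score
    return best
```

Running such a search over a grid of patches yields a displacement field whose magnitude and extent correspond to the landslide velocity and boundary products described above.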

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Elevating the Performance of Aerial Imagery-based Building Detection with Super-resolution Networks

Authors: Ms Yuwei Cai, Zhimeng HE, Dr Brian Barrett, Dr Meiliu Wu
Affiliations: University Of Glasgow
High-resolution (HR) imagery is critical for accurate building detection in remote sensing applications, underpinning essential tasks such as urban planning, environmental monitoring, and disaster response. However, the limited availability of HR datasets poses significant challenges, particularly in disaster response scenarios. For instance, following an earthquake, HR images may not be readily accessible, leaving low-resolution (LR) imagery inadequate for detailed assessments of structural damage or identifying areas requiring immediate attention. To address these limitations, Super-Resolution (SR) networks have emerged as a pivotal solution. By enhancing spatial resolution and improving image quality, SR networks uncover finer image details, thereby enabling more precise land cover detection and improving the accuracy of object detection. The core concept of SR networks is to leverage computer vision techniques to train deep learning models capable of generating more detailed information between pixels in images. Consequently, a critical challenge lies in how to effectively leverage the information from LR images to generate these missing details. Traditional SR evaluation metrics, such as Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM), only measure the pixel-level (dis)similarity between HR and SR images, and thus fail to reflect and quantify the degree of enhancement achieved from LR to SR images (e.g., the perceptual quality improvements). In other words, the effectiveness of SR images on downstream tasks has not yet been fully studied.
To address these limitations, the Residual Feature Aggregation Network-Involution Network (RFA-InvoNet), utilizing cross-channel information for image reconstruction, was developed to improve image resolution, and its effectiveness was evaluated directly by the improvements achieved in building detection tasks, rather than purely by PSNR and SSIM. Specifically, comparative experiments were implemented with different SR networks, including Bicubic Interpolation (BI), Deep Recursive Residual Network (DRRN), Super-Resolution Convolutional Neural Network (SRCNN), Super-Resolution Residual Network (SRResNet) and Momentum Spatial Channel Attention Residual Feature Aggregation Network (MSCA-RFANet), applied to aerial imagery-based building datasets. Their performances were evaluated through a downstream building rooftop detection task, employing a U-Net for model training and testing on SR images (i.e., the WuHan University (WHU) Building Dataset, the Massachusetts (MAS) Building Dataset, and the Waterloo (WAT) Building Dataset). With superior performance across the three datasets, the proposed network achieved PSNR of 22.04 dB, 22.89 dB and 33.66 dB, SSIM of 0.50, 0.63 and 0.87, and a testing speed of 3.67, 3.63 and 3.66 FPS (frames per second), respectively. The most significant improvements were observed on the WAT building dataset, where the proposed method achieved an increase of 2.32 dB in PSNR and 0.05 in SSIM compared with DRRN.
Given these promising results, this study has advanced the field by addressing theoretical gaps in SR evaluation and demonstrating the practical value of SR networks through three key contributions: (1) tackling the scarcity of HR imagery, particularly in disaster scenarios, by leveraging SR networks to enhance LR images for building detection tasks; (2) providing a more robust evaluation framework using U-Net for building rooftop detection on SR-enhanced imagery with comprehensive metrics such as Accuracy, IoU, Precision, Recall, and F1 Score; and (3) validating the effectiveness of SR networks in improving image quality for critical applications such as urban planning, environmental monitoring, and disaster response.
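For reference, the PSNR figures quoted above follow from a simple formula; a minimal sketch, assuming images normalised to a known data range:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak Signal-to-Noise Ratio in decibels: 10*log10(range^2 / MSE)."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM, by contrast, compares local means, variances and covariances, which is why the two metrics can rank SR outputs differently, and why downstream detection performance is used here as the deciding evaluation.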

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: The Canadian Space Agency smartEarth Program

Authors: Mrs. Julie Claveau, Mr. Pierre Nadeau, Mr. Guy Aubé, Mr. Yves Crevier, Helena van Mierlo
Affiliations: Canadian Space Agency
The Canadian Space Agency’s (CSA) smartEarth program (CSA 2020) is a renewed funding initiative related to Earth observation applications development. It was initiated by CSA’s Space Utilization (SU) Directorate to foster the smart use of satellite data to develop solutions to key challenges on Earth, in our everyday lives. It also aims to increase collaboration among Canadian stakeholders, enhance expertise, growth, and competitiveness in Canada’s space sector and advance scientific knowledge. This program aligns with Canada’s Strategy for Satellite Earth Observation (CSA 2022) whose vision is for Canada to be resourceful, resilient, and ready. This vision is delivered by 1) ensuring that satellite Earth observation (EO) data is free, open, and accessible to maximize science, innovation, and economic development, 2) harnessing satellite EO to address climate change and issues that matter to Canadians, 3) strengthening the delivery of critical services to keep Canadians healthy, safe, and informed, and 4) inspiring skills and capacity development for the next generation. Within smartEarth, funding opportunities are delivered through three different tracks: Accelerator, Integrator, and Enabler. The goal of the Accelerator track is to speed up the development of innovative satellite data solutions, at different levels of readiness (up to the demonstration stage), that will respond to challenges of importance to Canada (through an application, a product, or a service). The Integrator track aims to provide complete innovative satellite data solutions on a specific thematic, that will meet national priorities and enhance federal government operations and service delivery to Canadians. Lastly, the Enabler track seeks to enhance the capacities and expertise of Canada’s for-profit organizations, not-for-profit organizations and post-secondary institutions in the use and application of satellite data – it also aims to foster competitiveness and growth. 
This presentation will provide an overview of smartEarth tracks, portfolio of projects and their impact on helping the industry and academia acquire the knowledge and expertise required to develop and exploit novel applications using space data and emerging digital technologies. Projects covered will range from Earth observation to address Canada's key environmental and socioeconomic challenges, to using satellite data to help detect and monitor the presence of North Atlantic right whales (NARWs) in Canadian waters.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Enhancing Earth Observation Access for Informed Decision-Making: The EODH Project

Authors: Federica Moscato, Philip Kershaw, John Remedios, Piotr Zaborowski, Chetan Pradhan, Owen Hawkins, Luisa Texeira, Jennifer King, Jennifer Bulpett, Alasdair Kyle
Affiliations: Ral Space, University of Leicester, Earth-i, Planet, Telespazio UK, The Open Geospatial Consortium
The Earth Observation DataHub (EODH) is a groundbreaking UK initiative aimed at enhancing access to high-quality, space-derived Earth Observation (EO) data for government bodies, businesses, and academic institutions. This first pathfinder phase, a two-year project running until March 2025, centralizes access to UK EO data, enabling stakeholders across sectors to make more informed and effective decisions. Leveraging the growing availability of satellite data, the EODH supports the UK’s efforts to address critical challenges, including climate change, urban planning, resource management, and disaster response. Developed with input from key stakeholders—businesses, academia, and government—the EODH ensures that its platform aligns with the diverse needs of its users. This collaborative approach makes the Hub a valuable and adaptable resource for supporting various sectors. End-user testing of its applications and products will begin in early 2025. These tools are designed to make EO data integration more efficient, authoritative, and sustainable, enabling data-driven decision-making. By offering a unified access point, the EODH empowers a wide range of users—from researchers and policymakers to businesses and non-governmental organizations. It aims to be a cornerstone of the UK’s digital transformation, fostering cross-sector collaboration, innovation, and sustainability.

Challenges to Accessing Earth Observation Data

Accessing Earth Observation (EO) data is not without its challenges. While EO data has tremendous potential for improving decision-making and addressing global challenges, several barriers prevent its widespread use. These challenges include the volume and complexity of the data, fragmentation of data sources, high costs associated with commercial data, licensing issues, and technical expertise requirements.
These challenges are further compounded by access restrictions, lack of awareness among potential users, and the difficulty in translating raw data into actionable insights.

Data Volume and Complexity

One of the key challenges in accessing EO data is the sheer volume and complexity of the datasets. Earth observation satellites produce vast amounts of data, and processing these datasets requires significant computational power and advanced tools. The data is often highly detailed, with varying spatial, temporal, and spectral resolutions, making it difficult to integrate and analyze without specialized knowledge. Moreover, EO data often comes in different formats, which can further complicate its use.

Fragmentation of Data Sources

Another challenge is the fragmentation of data across multiple platforms and providers. Earth observation data is stored and distributed by different organizations and satellite programs, creating difficulties in locating and accessing the right datasets. Lack of standardization across data sources further complicates the process of integrating multiple datasets into a single, coherent analysis. As a result, users may struggle to obtain the necessary data from various sources and may face difficulty combining datasets with different formats, resolutions, and metadata.

High Costs of Commercial Data

High-resolution, high-frequency EO data—often provided by commercial satellite companies—can be prohibitively expensive for small businesses, academic institutions, and even some government bodies. These data sets, which offer the most precise insights, come with significant costs associated with both licensing and the infrastructure required for analysis. While open data from government-funded projects, such as the European Space Agency’s Sentinel missions, offers an alternative, the high cost of commercial data remains a major barrier to wider adoption.

Licensing and Access Restrictions

The issue of licensing is another critical challenge.
EO data is typically licensed rather than sold, meaning users must agree to specific conditions based on the intended use. These licensing agreements can vary widely, with multi-user licenses often being expensive and academic licenses typically coming with restrictions on commercial use. Additionally, some commercial datasets are subject to national security or proprietary restrictions, further limiting access. These complex licensing models can make it difficult for users to understand what they can and cannot do with the data, particularly when accessing commercial data archives like those from Planet or Airbus.

Technical Expertise and Usability

The lack of technical expertise is a significant barrier to EO data adoption. Many potential users, including government departments and businesses, do not have the necessary skills or infrastructure to process and analyze EO data. This lack of expertise often leads to underutilization of available data. Moreover, many existing platforms for accessing EO data are not user-friendly, requiring specialized knowledge or training to navigate and effectively use the data. This usability gap can discourage potential users from adopting EO data for their work.

Awareness and Actionable Insights

Many users are also unaware of the availability of EO data or lack understanding of how to use it effectively. The abundance of data, combined with the complexity of understanding its potential applications, often leads to underutilization. Additionally, translating raw EO data into actionable insights—such as identifying climate trends, monitoring urban growth, or assessing land use—is a complex task that requires specialized tools and expertise. Bridging this gap is essential for ensuring that EO data can be used to inform decisions.

EODH's Approach to Overcoming Challenges

The EODH is specifically designed to address these barriers, acting as a crucial link between data providers and end-users.
By centralizing access to a wide range of EO datasets, the platform simplifies data discovery and integration, making it easier for stakeholders to leverage EO data effectively. Key features of the EODH include:

• Integration of commercial and public data: the platform combines open datasets with high-resolution commercial data to provide comprehensive coverage.
• Transparent licensing models: clear and accessible licensing agreements help users understand their rights and permissible uses of data.
• Quality assurance: the platform provides robust quality assurance to ensure data reliability.
• User capability building: the EODH fosters skill development, enabling users to work with multiple datasets and integrate them into their workflows seamlessly.

Key Datasets Offered by EODH

To address the needs of the UK community, the EODH provides a curated selection of key datasets, specifically chosen to reflect the UK’s expertise in Earth observation and the expected requirements of its users.

• Sentinel Data Access: The EODH facilitates access to Sentinel data through SentinelHub and the CEDA Archive. This includes Analysis Ready Data (ARD) for both Sentinel-1 and Sentinel-2 satellites, which cover the UK and its territories.
• Commercial Satellite Data: The platform also provides access to data from commercial satellite providers, such as Planet (PlanetScope and SkySat) and Airbus (optical and SAR archives). This allows users to access high-resolution, timely data for a variety of applications, from environmental monitoring to infrastructure planning.
• Climate Projections: The EODH integrates global climate projections from the CMIP6 model, regional climate projections from CORDEX (COordinated Regional Climate Downscaling Experiment), and high-resolution projections from the Met Office’s UK Climate Projections (UKCP). These datasets offer valuable insights for climate forecasting, policy planning, and adaptation efforts.
• Climate Observations: The UK Earth Observation Climate Information Service (EOCIS) provides data on 12 essential climate variables at both global and regional scales. This includes high-resolution climate data tailored specifically to the UK, which supports efforts to monitor and adapt to climate change.

Licensing and Collaboration with Data Providers

A unique aspect of the EODH is its collaboration with commercial data providers to establish a clear and transparent licensing model. By offering a well-defined licensing structure, the platform aims to address issues of accessibility and understanding, ensuring users know exactly how they can use the data. The EODH’s collaboration with data providers to offer both open and commercial data makes it an essential tool for data-driven decision-making.

Supporting Adoption and Ensuring Success

The EODH’s success depends on its ability to engage users and meet their needs. To support adoption of the platform, the Hub plans to provide training, knowledge transfer, and workshops to help users make the most of the data and tools available. This will ensure that the user community can effectively access, interpret, and apply EO data in a variety of contexts. We aim to share the lessons learned from the Earth Observation DataHub (EODH) project in this pathfinder phase in facilitating access to both commercial and open-source data. These insights will be presented from the perspectives of application suppliers and early adopters, highlighting the challenges, opportunities, and strategies that have emerged in integrating and utilizing diverse datasets effectively. In conclusion, the EODH is a vital resource that addresses many of the challenges associated with accessing and using EO data. By centralizing data access and providing user-friendly tools and training, the EODH aims to support informed decision-making across the UK, driving innovation and collaboration in addressing global challenges.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: A.10.03 - POSTER - Our solid Earth: from core to surface

This session solicits contributions ranging from the deepest layers of the Earth, the geodynamo and core dynamics, to the mantle, the lithosphere, and Earth’s surface. ESA’s Swarm trio of satellites, now exceeding 10 years in orbit, has changed our understanding of geomagnetic field evolution, providing unprecedented accuracy and coverage from space. This session invites contributions that address the applicability of current ground and satellite datasets from Swarm, GOCE, GRACE-(FO) and future missions such as NGGM and MAGIC, to study the dynamical behaviour of the solid Earth, core dynamics and the interactions between the core, mantle, and lithosphere, in order to improve process understanding at the surface. The session shall address all aspects linked to the subsurface and surface, including oceans, using a multitude of observational techniques (e.g. gravimetry, magnetometry, seismology, surface deformation) as well as physics-based models, numerical simulations of the Earth’s dynamo and theoretical advances in the field. Research related to advances in modelling the 3D conductivity of the mantle enabled by ESA’s Swarm mission, as well as the 3D structure of Earth’s layers from the lithosphere down to the core-mantle boundary and their interplay with the flow in Earth’s core, is addressed in this session. Contributions related to plate motion, volcanism, monitoring of the seismic cycle and geohazards monitoring are also encouraged.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: A time variation of the lithospheric magnetic field and tectonic interpretations of a vertically-integrated magnetic susceptibility

Authors: Josef Sebera, Aleš Bezděk, Wolfgang Szwillus, Jörg Ebbing, Alexandra Guy
Affiliations: Astronomical Institute ASCR, Kiel University, Czech Geological Survey
The Earth's lithospheric magnetic field is one of the main objectives of ESA’s ongoing satellite constellation mission Swarm (in orbit since 2013). Excluding the oceanic remanent part from this field, a continental lithospheric field can be represented by two quantities - the vertically-integrated susceptibility (VIS) and the inducing core field. The inducing core field acting on magnetically susceptible rocks of the lithosphere necessarily produces a subtle time-variable component of the lithospheric field. Thus, the lithospheric field can be considered static only over a certain period. The increasing accuracy and length of satellite magnetic time series may soon make this subtle signal observable. In this contribution we focus on two applications of VIS: the time variation of the induced lithospheric magnetic signal over a period of 25-30 years (from the launch of the CHAMP mission to the predicted lifetime of Swarm and MSS-1), and the tectonic interpretation of VIS in geologically interesting areas. First, the VIS model is obtained by inverting the lithospheric field using a given core field. Then, by altering the core field in the forward problem, one can estimate the time-variable lithospheric field. For the inversion, the latest lithospheric and core field model CHAOS-8 is used (i.e., including observations from the satellite missions Swarm, MSS-1, CSES, CryoSat-2, CHAMP, SAC-C, and Ørsted). The time prediction of the induced component can be done e.g. with IGRF-14 (including the secular variation). In the inversion of the lithospheric field two schemes will be adopted: 1) a certain degree cut-off delimiting the core and the lithospheric field (e.g. spherical harmonic degree 20 in the case of CHAOS-8), and 2) a smooth degree-wise transition between the two fields defined in the Lowes-Mauersberger spectrum.
Second, as the VIS consists of the average maximum susceptibility value of the different rocks in a given crustal column down to some surface such as the Moho, without considering the Curie temperature and depth variations within the crust, interesting correlations exist between lithospheric-scale tectonic structures and variations in the VIS signal. For example, large-scale Central Asian tectonic features can be observed in the VIS map, such as the boundary of the Siberian Craton and, to a lesser extent, the northern boundaries of the Tarim and North China cratons, which correspond to a sequence of strong gradient discontinuities. The striking pattern emerging from the VIS map in Central Asia is the continuous U-shaped belt of positive average susceptibility values corresponding to the Precambrian continental domain wrapping around the negative susceptibility of the Mongol-Okhotsk oceanic domain.
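For context, the Lowes-Mauersberger spatial power spectrum used in scheme (2) of the inversion gives the mean-square field contribution per spherical harmonic degree $n$ at radius $r$, in terms of the Gauss coefficients $g_n^m$, $h_n^m$ and the reference radius $a$:

```latex
R_n(r) = (n+1)\left(\frac{a}{r}\right)^{2n+4}\sum_{m=0}^{n}\left[(g_n^m)^2 + (h_n^m)^2\right]
```

Core and lithospheric contributions dominate at low and high degrees respectively, with a cross-over conventionally placed around degrees 13-20, which motivates both the hard cut-off and the smooth degree-wise transition.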

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: A Deep Learning-based Approach for Predicting Seismic Anomalies within Swarm and CSELF Electromagnetic data

Authors: Yaxin Bi, Wei Zhai
Affiliations: School of Computing, Ulster University, 2Lanzhou Institute of Seismology, China Earthquake Administration, Lanzhou, China
This work presents a study on the development of viable deep learning (DL) methods for detecting anomalies in space- and terrestrial-based electromagnetic data observed by the Swarm satellites and the Control Source Extremely Low Frequency (CSELF) network in China. The study also investigates the correlation between detected anomalies and earthquakes. These two data sources offer a unique opportunity to explore potential causes and effects embedded in the measured magnetic fields, particularly in relation to earthquake preparation and seismic precursors. We developed a data-driven, non-physics-informed DL approach targeting earthquake-prone areas, treating electromagnetic data as time series prediction and reconstruction tasks. Two models were proposed: one using Long Short-Term Memory (LSTM) networks and another based on Generative Adversarial Networks (GANs). To ensure the quality of the five-year record of observed electromagnetic data for the targeted areas, extensive pre-processing was performed. For predictions, time series data were processed by extending the given look-back time window using an appropriate frequency interpolation. For reconstructions, the original data window was recovered by interpolating the frequency representation of its down-sampled part. This pre-processing represents an initial step toward generating synthetic electromagnetic data. Of the two methods, the LSTM-based approach demonstrated superior performance in long-term predictions. Additionally, using equal input and output window lengths significantly enhanced the models' effectiveness. A rigorous evaluation was conducted using five years of electromagnetic data from both Swarm and CSELF. The results underscore the suitability of deep learning techniques for this task and provide a solid foundation for future efforts to predict anomalies within observed electromagnetic data.
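The prediction-based anomaly flagging described above (compare each new sample against a model forecast and flag unusually large prediction errors) can be illustrated with a deliberately simple stand-in predictor. Here a moving-average forecast replaces the trained LSTM, and the window and threshold values are illustrative only:

```python
import numpy as np

def flag_anomalies(series, window=24, k=3.0):
    """Flag indices whose one-step prediction error exceeds the mean recent
    error by k standard deviations, using a moving-average forecast as a
    stand-in for a trained LSTM predictor."""
    series = np.asarray(series, float)
    flags, errors = [], []
    for t in range(window, len(series)):
        pred = series[t - window:t].mean()          # stand-in forecast
        err = abs(series[t] - pred)                 # prediction error
        if len(errors) >= window and err > np.mean(errors) + k * np.std(errors):
            flags.append(t)
        errors.append(err)
    return flags
```

In the actual pipeline a trained LSTM (or GAN reconstruction error) would replace the moving-average forecast, with the same error-thresholding idea applied to its residuals.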
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: The inverse problem of electromagnetic induction for the determination of magnetic field at the core-mantle boundary

Authors: Zdeněk Martinec
Affiliations: Dublin Institute for Advanced Studies
We deal with the inverse problem of estimating the magnetic field (MF) at the core-mantle boundary (CMB) from the time-varying MF on the Earth's surface. This ill-posed problem is reformulated as the upward continuation problem for the iterative update of the MF at the CMB. The iterations are stopped when the time variations of the updated solution exceed a threshold. The gradient of a misfit is calculated by the so-called adjoint sensitivity method. We consider a mantle with an electrical conductivity varying in both radial and horizontal directions and estimate the effect of a 3D conducting mantle on the amplitudes and backward time shift of the magnetic field at the CMB with respect to that observed on the Earth's surface.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Lower mantle 3-D density structure from joint inversion of satellite gravity data

Authors: Wolfgang Szwillus
Affiliations: Kiel University
The density structure within Earth's lowermost mantle is a crucial component for understanding its thermo-chemical nature and thereby its dynamics. While a wide array of seismological techniques can provide insight into the mantle, resolving density remains a challenge, due to its relatively weak impact on seismic wave propagation compared to changes in seismic velocity. One of the aspects of ESA's 4D-Earth project was studying how satellite gravity data benefit studies of the lower mantle. In this presentation, the focus will be on a joint inversion of satellite gravity data with normal modes, a type of long-period seismological data with high sensitivity to the lowermost mantle. Due to mantle convection, the gravity response of density anomalies differs from a purely 'Newtonian' kernel and contains contributions from the deformed surface and CMB. As a result, the radial viscosity distribution becomes an additional unknown. In this contribution, the density models from a trans-dimensional joint inversion of normal mode and satellite gravity data for different viscosity scenarios will be presented and compared in terms of their dynamical implications.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Crust Structure and Thermal Lithosphere Thickness of the South China Sea and Adjacent Areas

Authors: Jing Hou, Carla Braitenberg, Prof. Jian Fang
Affiliations: Innovation Academy For Precision Measurement Science And Technology, Chinese Academy Of Sciences, University of Trieste
The South China Sea (SCS) is situated at the intersection of continental and oceanic crustal domains, and at the convergence of the Eurasian, Pacific, and Indian plates. It is surrounded by structural units of varying ages, characteristics, and orientations. Due to its complex tectonic evolution, the SCS and adjacent regions hold significant scientific value for studies on seismicity, lithospheric deformation, and tectonic mechanisms. Investigating the thermal structure of its lithosphere is fundamental for understanding its physical properties and geodynamics. This study analyzes the Free-air and Bouguer gravity anomalies across the SCS and its surroundings using the WGM2012 model, and inverts the Moho depth distribution. Additionally, Curie depths were derived from magnetic anomaly data from model EMAG2, and combined with the measured surface heat flow data to assess the lithospheric thermal structure. The WGM2012 model integrates satellite gravity data (GRACE, GOCE, etc.), ground and airborne gravity measurements, and terrain data to construct a high-resolution global gravity field model, while the EMAG2 model incorporates airborne and shipborne magnetic measurements, enhanced with satellite magnetic observations (CHAMP, Ørsted, etc.) and terrain correction. We combined the traditional centroid method with an improved version, modifying the selection of the fractal index and applying mean square error constraints during the forward fitting process. This approach yielded a more reliable distribution of Curie point depth, providing a more accurate data foundation for the study of lithospheric thermal structure. The macroscopic trends of the northern and southern sides of the South China Sea basin are similar but not entirely symmetrical, suggesting that these regions may have belonged to a "conjugate" continental margin prior to expansion. 
For the inversion of the Curie surface, we determined the optimal thermal conductivity in the South China Sea and adjacent areas based on the nonlinear inverse relationship between the Curie point depth and the measured heat flow, and obtained a high-resolution, uniform surface heat-flow field. By constraining surface heat flow with magnetic anomalies and employing a steady-state heat conduction equation, we derived a more refined depth distribution of the thermal lithosphere's bottom boundary. The deep thermal lithosphere of the Xisha Trough demonstrates the control of deep structures over surface features. The thermal lithosphere bottom boundary on the eastern side of the southwestern sub-basin connects with the southwestern part of the eastern sub-basin, indicating similar deep structural attributes. This comprehensive integration of gravity, magnetic, and heat flow data advances our understanding of the lithospheric thermal structure and tectonic dynamics of the SCS, offering valuable insights into regional geophysical processes.
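The inverse relationship between Curie point depth and surface heat flow follows from 1-D steady-state conduction. A minimal sketch, under the simplifying assumption of no radiogenic heat production (the conductivity and temperature values are generic illustrative choices, not those determined in the study):

```python
def heat_flow_from_curie_depth(z_curie, k_cond=2.5, t_curie=580.0, t_surf=15.0):
    """Surface heat flow q = k * (T_curie - T_surf) / z for 1-D steady conduction.

    z_curie : Curie point depth in metres
    k_cond  : thermal conductivity in W/(m K) (assumed value)
    t_curie : Curie temperature of magnetite in deg C
    t_surf  : mean surface temperature in deg C (assumed value)
    Returns heat flow in W/m^2.
    """
    return k_cond * (t_curie - t_surf) / z_curie
```

The nonlinear inverse relationship is apparent: doubling the Curie depth halves the predicted surface heat flow.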
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Magnetic Boundary Layers and Diffusion near the Top of the Core in Non-Slip Geodynamo Models

Authors: Dominique Jault, Paolo Personnettaz, Nicolas Gillet, Thea Lepage, Ilya Firsov, Mioara Mandea
Affiliations: CNRS, University Grenoble Alpes, CNES
If the electrical conductivity of the Earth's core is large enough, we can separate a boundary layer where magnetic diffusion is important from the fluid interior where it is negligible. Describing this diffusive layer is important since it acts as a filter between the time-evolving magnetic field in the core interior and the field at the core surface, where models built from geomagnetic data are available. We investigate magnetic diffusion at the top of the Earth's core in non-slip geodynamo simulations with small viscosity (Ekman number E ≤ 10^-6). We seek the dimensionless number that best predicts magnetic diffusion in this framework. Non-slip boundary conditions are appropriate to the Earth's core but require numerical models with small viscosity (to ensure that the viscous boundary layer is thin in comparison with the depth of the fluid shell) and higher radial resolution than stress-free boundary conditions, which eliminate viscous coupling between core and mantle. Therefore, the geodynamo models that have until now guided core flow inversions and provided the necessary statistics (covariance matrices for the surface flow coefficients) have employed stress-free boundary conditions. We find that the key parameter that determines the importance of diffusion in our simulations is the Elsasser number Λ = σB²/(ρΩ), where ρ and σ are the density and electrical conductivity of the fluid core, Ω the Earth's rotation rate and B the root mean square magnetic field intensity in the core interior. We compare the different terms that enter the induction equation as a function of depth below the core-mantle boundary (CMB). For Λ = O(30), as in the Earth's core, we find that magnetic diffusion is one order of magnitude weaker than the other two terms in the interior of the fluid core. We estimate the distance from the CMB over which magnetic diffusion attenuates for the radial and tangential components of the induction equation. 
Finally, we build statistics for the flow at the top of the free stream from non-slip geodynamo simulations and use these statistics to estimate core surface motions from geomagnetic models based on Swarm data.
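The Elsasser number defined in the abstract is simple to evaluate. The core property values below are rough order-of-magnitude literature estimates (assumptions, not the simulation parameters), chosen only to show that Λ of order tens follows from them:

```python
def elsasser(sigma, b_rms, rho, omega):
    """Elsasser number: Lambda = sigma * B^2 / (rho * Omega)."""
    return sigma * b_rms**2 / (rho * omega)

# rough Earth-core values (assumed, order-of-magnitude only):
# sigma ~ 1e6 S/m, B_rms ~ 4 mT, rho ~ 1.1e4 kg/m^3, Omega ~ 7.29e-5 rad/s
lam = elsasser(1e6, 4e-3, 1.1e4, 7.29e-5)
```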
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Near-Surface Fluids Movement in Porous Rocks, Observed by Satellite from Gravity Change

Authors: Gerardo Maurizio, Carla Braitenberg
Affiliations: University of Trieste
The study of the Earth's gravity field from satellite has made many advances over the years, thanks to the satellite missions GRACE and GRACE-FO, which are widely used, for example, to track Earth's water movement across the planet, providing a unique view of Earth's climate cycle. New advancements are expected in the coming years with the new satellite missions already planned, such as the MAGIC constellation, featuring a single polar pair, GRACE-C, and the enhanced ESA NGGM inclined pair. Such advancements are expected to reduce spectral noise and allow for higher spatial and temporal resolution compared to the current GRACE-FO mission. These improvements open exciting possibilities for applications in solid Earth studies. Our study aims at modeling different porosity scenarios for a sedimentary basin and calculating the variation of the gravitational field as the mass of the liquid filling the voids changes. For the scenario modelling, we started from real density data of a sedimentary basin, obtained through a 3D lithosphere density model derived from joint inversion of gravity and magnetic data. The starting model, which has a spatial resolution of 0.2°, equal to ~22 km, was interpolated onto new grids with resolutions of 0.05° and 0.01°, equal to ~5.5 km and ~1 km respectively. The depth resolution, equal to 1 km in the original model, was also refined to 50 m, to obtain as finely resolved a tesseroid model as possible. The overall sedimentary model has a volume of 8.58 × 10¹² m³, with an areal extent of 72 × 50 km and a total depth of 5 km. The porous part considers a small section of the sedimentary basin, down to a depth of 1000 m. Porosity values are assumed to vary with depth, from 30% in the superficial part to 5% in the deepest porous part. Given the volume and density of the tesseroids, we can calculate the total mass of our isolated sedimentary body. 
We then evaluate the change in the total mass of the body as the proposed porosity model changes, and calculate the gravity signal linked to that variation. We calculate the signal of the sedimentary strata considering different stages of material filling the porosity: the pores can be void, filled with water, or filled with CO₂. The changes are evaluated for the mass difference relative to consolidated rock without porosity, and for the difference between air-filled and fluid-filled pores. The comparison between porosity filled with air and filled with water could be representative of aquifer extraction/replenishment or a fluid injection/extraction experiment, while the comparison between water and CO₂ could be representative of monitoring the gravity signal at a CO₂ injection site. The generated signal can be compared with the error curves estimated for future satellite missions, to evaluate the sensitivity of the new missions to these events.
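The mass change from swapping the pore fluid, and the corresponding gravity perturbation, can be sketched as below. The linear porosity profile, CO₂ density, satellite altitude, and point-mass approximation are illustrative assumptions, not values or methods taken from the study:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def pore_mass_change(area, depth, phi_top, phi_bot, rho_old, rho_new):
    """Mass change when the pore fluid is swapped, assuming a linear
    porosity profile between phi_top and phi_bot over the given depth."""
    phi_mean = 0.5 * (phi_top + phi_bot)
    pore_volume = area * depth * phi_mean
    return pore_volume * (rho_new - rho_old)

# numbers loosely following the abstract: 72 x 50 km basin, porous upper
# 1000 m, porosity 30% -> 5%; water (1000 kg/m^3) replaced by CO2
# (~700 kg/m^3, an assumed supercritical density)
dm = pore_mass_change(72e3 * 50e3, 1000.0, 0.30, 0.05, 1000.0, 700.0)

# crude point-mass gravity change at an assumed 400 km satellite altitude
dg = G * abs(dm) / 400e3**2
```

The point-mass estimate is only an upper-bound-style order-of-magnitude check; a tesseroid forward model, as used in the study, is needed for realistic geometry.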
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Sensitivity of Long-Wavelength Dynamic Topography and Free-Air Gravity to Lateral Variations in Lower Mantle Viscosity

Authors: Clinton Conrad, Florence Ramirez, Bart Root
Affiliations: University Of Oslo, Delft University of Technology
Deflection of the Earth’s surface supported by mantle flow, known as dynamic topography, is associated with a free-air gravity anomaly because such topography is not isostatically compensated. Consequently, the ratio of the gravity anomaly to the dynamic topography, known as the admittance, has been used to estimate the amplitude of dynamic topography, which can be difficult to measure directly. However, at long wavelengths (e.g., spherical harmonic degrees 2 to 6) both dynamic topography and gravity anomalies, and thus the admittance, are sensitive to the viscosity structure of the deep mantle. Previous studies have demonstrated a reversal in sign of the free-air gravity anomaly resulting from lower mantle structures as the viscosity of the lower mantle is increased. This suggests potential complexity for inferring long-wavelength dynamic topography from observations of gravity anomalies, because the upper-lower mantle viscosity contrast is poorly constrained. Furthermore, several recent studies have suggested that the lower mantle may also feature large lateral variations in viscosity. Such variations may be associated with remnant compositional heterogeneity left over from Earth’s formation or thermal heterogeneity imprinted on the mantle by Earth’s supercontinental cycles. Such heterogeneity probably manifests at long wavelengths (spherical harmonic degrees 2 to 6) and may have amplitudes of one order of magnitude or larger. We investigated the importance of lateral viscosity variations for dynamic topography and gravity anomalies. To do this, we introduced lateral viscosity variations into a finite element model of global mantle flow. We find that the gravity anomaly above lower mantle density heterogeneity can change dramatically as we begin to introduce different models for lateral viscosity variations into the upper and lower mantle viscosity fields. 
In such models we find that the sign of the admittance varies laterally because the horizontal gradients in mantle viscosity perturb mantle flow patterns in ways that produce large changes in gravity anomalies and smaller changes in dynamic topography. This suggests that the admittance (ratio of gravity anomaly to dynamic topography) may vary laterally in addition to being highly sensitive to the viscosity stratification of the mantle. A spatially-varying admittance will greatly complicate estimation of dynamic topography from observed gravity, but it may help to explain mismatches between observations of dynamic topography and predictions made using global mantle flow models. Indeed, we suggest that the reconciliation of such mismatches may help to constrain viscosity heterogeneity in the lower mantle. This greater understanding of lower mantle structure can inform global models, for example as part of ESA’s 4D Dynamic Earth Project, and can help us to understand the dynamic interaction of the mantle with Earth’s isostatic lithosphere.
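The admittance discussed above is the degree-wise ratio of gravity to dynamic topography. A minimal sketch with synthetic spherical-harmonic coefficients (not model output), where a sign flip in the output mirrors the viscosity-dependent sign reversal described in the abstract:

```python
import numpy as np

def degree_admittance(g_lm, h_lm):
    """Admittance per spherical-harmonic degree.

    g_lm, h_lm : arrays of shape (n_degrees, n_orders) holding gravity and
    dynamic-topography coefficients. Returns the cross-power ratio
    sum(g*h) / sum(h*h) for each degree (a standard estimator; the study's
    exact definition may differ).
    """
    return np.sum(g_lm * h_lm, axis=1) / np.sum(h_lm * h_lm, axis=1)
```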
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Global simulations of temporal gravity due to mantle flow and their sensitivity to the mantle rheology

Authors: Bart Root, Cedric Thieulot
Affiliations: Delft University Of Technology, Utrecht University
With the plans for the MAGIC/NGGM mission approved, there will be several decades of satellite gravity data available. Both periodic and secular mass changes can be studied with these data, mostly surface mass changes such as hydrology, ice melt, glacial isostatic adjustment, and large earthquakes. With the increasing time span of the gravity data set, smaller processes in the signal can be detected. Therefore, we conduct sensitivity analyses on small temporal gravity signals which can be related to mass change due to mantle convection. We perform various sensitivity analysis studies to understand the added benefit of detecting mantle flow with satellite gravity change observations. A fast Stokes solver (FLAPS) has been developed that is based on an axisymmetric half-annulus geometry. The model evolves over 50 years, after which the difference between the initial and final states is used to compute the rate of change. Realistic Earth models (PREM) as well as synthetic models are tested to better understand the sensitivity of the gravity change data. To understand 3D variations in structure and viscosity, we use the open-source mantle flow software ASPECT and incorporate interior models related to ESA's 4D Dynamic Earth project. For the upper mantle, the WINTERC-G model incorporates multiple data types in a joint inversion. New analyses show data sensitivity down to the transition zone. For the lower mantle, we use available global tomography models. The gravity change observations are sensitive to the absolute viscosity state of the mantle. This is contrary to dynamic topography and geoid data, which do not have this sensitivity, and studies using these data always face an ambiguity with respect to the viscosity state. Moreover, it seems that the gravity change data are more sensitive to the lower mantle of the Earth. 3D calculations need HPC resources, and we show that the mesh resolution required to consistently account for the temporal gravity due to mantle flow places high computational demands. 
Nevertheless, the modelled magnitude of the gravity change linked to global mantle convection seems to be larger than the formal error estimates of the GRACE and GRACE-FO instrumentation. A longer acquisition period will reduce the secular errors in the ocean, atmosphere and tidal correction models, such that eventually mantle convection can be studied directly by satellite gravimetry.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Background states for Magneto-Coriolis modes in Earth's core

Authors: Felix Gerick, Phil Livermore, Mioara Mandea
Affiliations: Centre National d'Etudes Spatiales, School of Earth and Environment, University of Leeds
Magneto-Coriolis modes evolve on a quasi-steady background magnetic field and flow within the Earth's liquid core. In most numerical computations of these modes, a background flow is neglected for simplicity. However, for Magneto-Coriolis modes of long periods (say, hundreds of years) such a steady background flow can have significant implications. We present preliminary investigations of how properties of the spectrum of modes change for several background states. This work may have implications for how we interpret data from geomagnetic observatories and dedicated satellite missions such as Swarm and NanoMagSat.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Short-term Postseismic Gravity Changes Driven by Nonlinear Rheology

Authors: Kazuma Nakakoji, Yoshiyuki Tanaka, Dr. Volker Klemann, Dr. Zdeněk Martinec
Affiliations: The University of Tokyo, German Research Centre for Geosciences, Dublin Institute for Advanced Studies, Charles University
There are two major mechanisms for postseismic deformation of large earthquakes: viscoelastic deformation and afterslip. Quantitative evaluation of their contributions to postseismic deformation provides insights into the frictional properties of plate boundaries and the rheological properties of the asthenosphere. Although viscoelastic deformation was traditionally considered dominant on decadal timescales, seafloor geodetic observations after the 2011 Tohoku earthquake revealed significant short-term viscoelastic deformation within one year after the earthquake. One way to model such short-term behavior is to consider nonlinear rheology as a property of the asthenosphere. Some previous studies have successfully explained GNSS observations using nonlinear rheology. However, few of them have used satellite gravity data. Satellite gravimetry, such as GRACE, captures mass changes over land and ocean, providing complementary information to GNSS observations, particularly in ocean areas. Furthermore, few theoretical models consider nonlinear rheology in a spherical viscoelastic Earth. Therefore, the present study aims to develop a method that incorporates nonlinear rheology in a viscoelastic spherical model and investigate the extent to which gravity change signals due to nonlinear rheology are detectable by satellite gravity missions. We extended a spectral-finite element method (SFEM) by incorporating nonlinear rheology. A major advantage of SFEM is that it enables the calculation of viscoelastic deformation of a self-gravitating sphere with a 3-D viscosity structure. The constitutive equation for nonlinear rheology was derived and implemented in the SFEM using a time-domain approach. We adopted a nonlinear Burgers model that previous studies used to explain postseismic GNSS observations. The model represents nonlinear rheology through two dashpots and transient rheology through a Kelvin element. 
To evaluate the effects of nonlinear rheology, we calculated postseismic gravity changes due to a MW 8.8 dip-slip point source using the extended method, assuming a megathrust earthquake in a subduction zone with the elastic structure PREM and a slab. The results were compared with those from a linear Burgers model adjusted to yield the same peak gravity change amplitude near the epicenter. The comparison revealed that nonlinear rheology produced shorter-wavelength gravity changes near the epicenter and significantly enhanced amplitudes on the oceanic plate side. For the strike-slip case, nonlinear rheology expanded the horizontal extent of the region with dominant gravity changes by approximately 1.3 times. The method was then applied to the 2011 Tohoku earthquake (MW 9.0), and the results demonstrated a similar tendency to those obtained for the dip-slip case. In the presentation, we will discuss the detectability of the effects of nonlinear rheology by upcoming satellite missions, such as MAGIC.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Advancing Detection of Submarine Volcanism: A Novel Approach Using Remote Sensing and Machine Learning

Authors: Alice Hopkins, Dr Susanna Ebmeier, Dr Tim Craig, Dr Isobel Yeo
Affiliations: University Of Leeds, National Oceanography Centre
Around three-quarters of the volcanoes on Earth are submarine, yet they are relatively understudied in comparison with those on land. Eruptions are often only detected by passing ships or aeroplanes, hence it is likely that many erupt unnoticed and undetected. For monitoring subaerial volcanoes, systematic remote sensing draws on a range of techniques: ground deformation, SO2 output, ground temperature, etc. There are no comparable approaches currently available for submarine volcanism. However, underwater volcanic eruptions can have observable effects on the ocean surface, the main ones being pumice rafts, eruptive plumes, and seawater discolouration. These can all be observed with sufficiently high-resolution satellite imagery. The advent of higher-resolution, free-access global satellite imagery datasets, for example Sentinel-2, brings new opportunities for improving our understanding and monitoring of underwater volcanism. Here we present a combined analysis of satellite measurements and hand samples from the 2019 pumice raft-producing eruption of Volcano F, Tofua-Kermadec arc. The pumice raft is tracked using optical satellite imagery from MODIS and Sentinel-2 and its spectral properties are analysed over time. Field spectroscopy measurements of hand samples are compared with the Sentinel-2 satellite imagery to help understand which parts of the raft were saturated. A supervised machine learning model is trained on labelled MODIS imagery with the aim of detecting previously undetected eruptions from Volcano F. No previously unknown eruptions from Volcano F are detected, but the 2006 Home Reef eruption close to Volcano F is flagged. Unsupervised machine learning is used to explore how clustering can support pumice identification and to analyse the spectral properties of the clusters. 
We use the clustering and classification methods tested at Volcano F to search for both known and previously undescribed explosive submarine eruptions in the SW Pacific region, combining both radar and optical satellite imagery to overcome the region’s high cloud coverage. Various machine learning models are evaluated to determine their effectiveness in accurately detecting and classifying submarine eruptions. The detection algorithm is used to automatically update a catalogue of submarine eruptions in the area. The algorithm is also applied to other regions to evaluate its transferability and identify possible region-specific adaptations. This study demonstrates a novel combination of machine learning and satellite analysis for the detection of submarine eruptions, a significant improvement on current monitoring methods.
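As a minimal illustration of the unsupervised clustering step, a generic k-means grouping of pixel spectra can be sketched as follows. This is a textbook algorithm under assumed inputs, not the authors' pipeline, and a production workflow would use a tested library implementation:

```python
import numpy as np

def kmeans(X, k, init=None, n_iter=20, seed=0):
    """Minimal k-means for grouping pixel spectra into k clusters.

    X    : (n_pixels, n_bands) array of spectra
    init : optional (k, n_bands) initial cluster centres
    Returns (labels, centres).
    """
    rng = np.random.default_rng(seed)
    if init is None:
        centres = X[rng.choice(len(X), k, replace=False)].astype(float)
    else:
        centres = np.asarray(init, dtype=float)
    for _ in range(n_iter):
        # assign each pixel to its nearest centre
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres
```

Clusters with distinctive mean spectra (e.g. bright, high-reflectance groups) would then be candidates for pumice pixels.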
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Magnetohydrodynamic Eigenmodes in the Plesio-geostrophic Model

Authors: Jingtao Min, Prof. Andrew Jackson, Dr. Stefano Maffei
Affiliations: ETH Zurich
Temporal variations of the geomagnetic field contain contributions from multiple electrical current systems, the most dominant of which is the electric current generated by the fluid flow in the electrically conductive outer core. Understanding the dynamics of the fluid flow in the outer core is not only crucial for understanding the complete dynamics of the Earth’s interior, but also lies at the very centre of simulating the future evolution of the geomagnetic field. Current 3-D magnetohydrodynamic simulations of the Earth’s core suffer from low efficiency and high computational cost, and operate in a parameter space far from that of the real Earth. In light of this, a reduced-dimensional description of the system is desired. We present here the magnetohydrodynamic eigenmodes of the plesio-geostrophic (PG) model, a reduced model of the original 3-D model for planetary dynamos. In the Earth’s outer core, rapid rotation places the fluid near geostrophic force balance, organising the flow into columnar structures. Making use of the columnar flow ansatz, the PG model shows that, by taking axial integrals, the ideal magnetohydrodynamic equations in 3-D space can be reduced to a set of equations on 2-D manifolds. We investigate the properties of the PG system by computing the eigenmodes, linearized around certain background fields, and study the role of magnetic diffusion in the system. Comparison of the PG and 3-D results demonstrates the capability and tests the validity of the PG model. Compared to 3-D models, the PG model enjoys reduced dimensionality, allows more efficient computation, and can be run with parameters closer to those of the real Earth. These properties are especially desirable for geomagnetic data assimilation (GDA), where geomagnetic measurements at ground observatories and satellites are used to probe the underlying dynamics in the Earth’s outer core and to extrapolate the core field into the future. 
The eigenmode calculation is thus a first of several steps towards a GDA framework that utilises the PG model as the dynamical core.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Inventory Deformation Mapping of Hydrocarbon Fields in Nigeria Using InSAR and GIS Techniques

Authors: Imeime Uyo, Mahdi Motagh, Mahmud Haghighi
Affiliations: Leibniz University Hannover, Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences, Potsdam, Germany
The identification and monitoring of surface deformation over hydrocarbon fields are essential to ensuring that the benefits of hydrocarbon exploration and production are achieved in harmony with environmental sustainability. Understanding the magnitude and nature of surface deformation within production zones is crucial for effective management and mitigation of potential environmental impacts. In this study, the surface deformations around the hydrocarbon fields of the Niger Delta, Nigeria, are analyzed by leveraging a combination of InSAR (Interferometric Synthetic Aperture Radar) and GIS (Geographic Information System) techniques. The process includes (1) regional deformation detection, using Persistent Scatterer Interferometry (PSI) to generate a deformation map across the Niger Delta oil production area; this step enables a comprehensive understanding of deformation patterns across the region; (2) identification of active deformation areas (ADAs), to pinpoint the regions most prone to deformation; and (3) inventory integration and localized analysis, where an inventory of known hydrocarbon fields is combined with the deformation map using GIS to spatially correlate deformation patterns within specific production areas. The results provide critical insights into the relationship between subsidence and hydrocarbon production activities in the Niger Delta. Such insights are vital for sustainable resource management, infrastructure planning, and the mitigation of environmental impacts in hydrocarbon-rich regions. This methodology serves as a valuable tool for policymakers and stakeholders in ensuring that oil and gas exploration proceeds in a manner that minimizes risks to the environment and communities.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Using Swarm Satellite Magnetic Field Data to Unlock the Potential of Earthquake Prediction

Authors: Angelo De Santis, Saioa A. Campuzano, Gianfranco Cianchini, Homayoon Alimoradi, Loredana Perrone, Habib Rahimi
Affiliations: Istituto Nazionale Di Geofisica E Vulcanologia, Departamento de Física de la Tierra y Astrofísica, Universidad Complutense de Madrid (UCM), Institute of Geophysics, University of Tehran
Predicting earthquakes is one of the greatest challenges in seismology and a long-standing aspiration for humanity. Among various potential precursors, changes in the Earth's magnetic field have shown promise, though their reliability remains a topic of debate (e.g., De Santis et al., 2015). Advances in satellite technology now allow us to measure the magnetic field with unprecedented precision, opening new possibilities for earthquake research. In this study, we use data from the European Space Agency’s Swarm satellites to explore whether magnetic anomalies can signal an impending earthquake. Our analysis takes two complementary approaches: 1. Global Statistical Analysis: We applied a superposed epoch and spatial approach to several years of global earthquake data, paired with Swarm’s magnetic field measurements (De Santis et al., 2019; Marchetti et al., 2022). 2. Tectonic Case Study: We focused on major earthquakes from 2014 to 2023 in the tectonically active Alpine-Himalayan belt (Alimoradi et al., 2024). For both approaches, we used an automated algorithm (De Santis et al., 2017) to examine satellite data recorded 90 days (global case) and 10 days (Alpine-Himalayan case) before each earthquake. The results revealed clear magnetic anomalies preceding earthquakes. Intriguingly, in the Alpine-Himalayan case study, we found a strong link between the size of an earthquake and the duration and intensity of these anomalies: larger earthquakes tended to produce longer-lasting and stronger signals. This method achieved impressive predictive performance, with an accuracy of 79%, precision of 88%, and a hit rate of 84%. These results suggest that satellite-based magnetic field analysis could one day form the basis of an operational earthquake prediction system, offering a powerful tool to help mitigate the impact of these natural disasters.
References: Alimoradi, H., Rahimi, H., De Santis, A., Successful Tests on Detecting Pre-Earthquake Magnetic Field Signals from Space, Remote Sensing, 16(16), 2985, 2024. De Santis, A., et al., Geospace perturbations induced by the Earth: the state of the art and future trends, Phys. & Chem. Earth, 85-86, 17-33, 2015. De Santis, A., et al., Potential earthquake precursory pattern from space: the 2015 Nepal event as seen by magnetic Swarm satellites, Earth and Planetary Science Letters, 461, 119-126, 2017. De Santis, A., et al., Precursory worldwide signatures of earthquake occurrences on Swarm satellite data, Scientific Reports, 9:20287, 2019. Marchetti, D., De Santis, A., Campuzano, S.A., et al., Worldwide Statistical Correlation of eight years of Swarm satellite data with M5.5+ earthquakes, Remote Sensing, 14(11), 2649, 2022.
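The performance figures quoted in the abstract (accuracy, precision, hit rate) follow the standard confusion-matrix definitions; a sketch with invented counts, purely to show the formulas (these counts do not reproduce the study's 79/88/84% figures):

```python
def confusion_metrics(tp, fp, fn, tn):
    """Standard binary-classification scores from confusion-matrix counts.

    hit rate = recall = tp / (tp + fn); precision = tp / (tp + fp).
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    hit_rate = tp / (tp + fn)
    return accuracy, precision, hit_rate
```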
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Space It Up: an innovative project to uncover LAIC processes

Authors: Gianfranco Cianchini, Loredana Perrone, Angelo De Santis, Dr. Mariagrazia De Caro, Dr. Domenico Di Mauro, Alessandro Ippolito, Dario Sabbagh, Dr. Maurizio Soldani, Dr. Adriano Nardi, Dr. Massimo Calcara
Affiliations: Istituto Nazionale di Geofisica e Vulcanologia (INGV)
Threatening lithospheric processes such as earthquakes, volcanic eruptions, landslides, and tsunamis have significant impacts on the upper atmosphere, involving a lithosphere-atmosphere-ionosphere coupling (LAIC) and necessitating a deeper understanding of their effects across various atmospheric strata. In the framework of the Space It Up project, the INGV (Istituto Nazionale di Geofisica e Vulcanologia) LAIC Group aims to enhance the comprehension of lithospheric-atmospheric-ionospheric interactions through innovative data analyses, advanced modeling techniques, and the integration of ground-based and satellite (such as ESA’s Swarm constellation) observational systems. By leveraging INGV's expertise and contributing to Italy's global leadership in geohazard risk management via space technologies, the initiative will focus on: 1. Revising existing data and algorithms to detect and model lithospheric effects on the atmosphere. 2. Conducting gap analysis to inform the development of new sensors and measurement systems. 3. Designing AI-driven models that integrate satellite and ground observations to elucidate atmospheric and ionospheric responses to lithospheric activity. 4. Testing and validating these models on case studies of earthquakes, volcanic eruptions, and tsunamis. Specific actions include refining algorithms for studying ionospheric responses to earthquakes, analyzing the atmospheric impacts of volcanic activity during quiescence, unrest, and eruption phases, and characterizing the interactions of volcanic processes with the lower and middle atmosphere. These efforts are expected to provide valuable insights into geohazard-induced atmospheric and ionospheric phenomena, reinforcing Italy’s role as a reference point in space-based risk management technologies.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Exploring the impact of the temporal resolution of satellite gravity products on hydrological Data Assimilation (DA)

Authors: Ms Leire Retegui-Schiettekatte, Dr Fan Yang, Associate Professor Maike Schumacher, Mr Marius Schlaak, Professor Roland Pail, Ehsan Forootan
Affiliations: Geodesy group, Department of Sustainability and Planning, Aalborg University, Astronomical and Physical Geodesy, TUM School of Engineering and Design
Terrestrial Water Storage (TWS) derived from the GRACE and GRACE-FO satellite gravity missions has been used in the past to improve the water storage representation of large-scale hydrological models through state-of-the-art sequential Data Assimilation (DA). These past experiments often used monthly gravity solutions, whose temporal resolution is not short enough to capture sub-monthly and fast-evolving hydrological processes such as floods. The future gravity missions, i.e., ESA’s NGGM (a single low-low satellite-to-satellite tracking, ll-sst, gravity mission on an inclined orbit) and MAGIC (a double ll-sst gravity mission comprising NASA/DLR’s GRACE-C and ESA’s NGGM), are designed to provide gravity solutions with 5-day and monthly temporal resolution at short latency. Their spatial resolution is expected to be better, and their uncertainty smaller, than those of the GRACE-type gravity missions. Therefore, in this study, we will evaluate the added value of the NGGM and MAGIC missions in hydrological DA applications, especially for simulating water storage and river discharge during sub-weekly and sub-monthly flooding events. This assessment is important to understand the role of the NGGM and MAGIC products in future operational flood prediction systems. Our experiments will be performed using the daily 0.1° resolution W3RA water balance model, forced by ERA5 climate inputs, during floods in the Brahmaputra and Danube River Basins. As observations, 5-day and monthly GRACE-C-like, NGGM-like, and MAGIC-like TWS estimates will be used. The full error covariance matrices of the observed TWS, which are outputs of closed-loop simulations from the ESA SING project accounting for both instrumental and dealiasing errors, will be considered during the DA experiments. Validations will be performed against the synthetic hydrological water storage and river discharge time series used to generate the hydrological truth.
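Sequential DA of TWS observations with a full error covariance matrix is commonly implemented as an ensemble Kalman analysis step. The following is a generic textbook sketch, not the study's W3RA implementation; all names and dimensions are illustrative:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_cov, H):
    """Stochastic Ensemble Kalman Filter analysis step (generic sketch).

    ensemble : (n_state, n_members) prior model states (e.g. gridded TWS)
    obs      : (n_obs,) observed TWS
    obs_cov  : (n_obs, n_obs) full observation error covariance
    H        : (n_obs, n_state) linear observation operator
    """
    n = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = anomalies @ anomalies.T / (n - 1)               # sample state covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_cov)  # Kalman gain
    # Perturb observations so the analysis ensemble keeps the correct spread
    rng = np.random.default_rng(0)
    obs_pert = rng.multivariate_normal(obs, obs_cov, size=n).T
    return ensemble + K @ (obs_pert - H @ ensemble)
```

A denser (5-day) observation stream simply means this update is applied more often, which is the mechanism by which the higher temporal resolution of NGGM/MAGIC products can constrain fast flood dynamics.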
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Insights into River Sedimentation from Satellite Temporal Gravity Field Variations

Authors: Daniele Sampietro, MARTINA CAPPONI
Affiliations: Geomatics Research & Development Srl, Istituto Nazionale di Geofisica e Vulcanologia
Satellite missions designed to monitor gravity variations over time, such as GRACE and GRACE-FO, represent a significant opportunity for the scientific community to better understand several processes occurring on our planet. In fact, these missions provide information on mass redistribution processes within the Earth system, spanning multiple disciplines, including hydrology, solid earth, cryosphere, and oceanography. However, the separation of these overlapping geophysical signals still presents important challenges and will become even more critical with the advent of next-generation satellite missions such as NGGM and MAGIC. These missions promise higher temporal and spatial resolution as well as higher accuracy, revealing finer details of mass variations, but requiring more advanced techniques to disentangle different contributions. Signals related to slow deformation of the Earth’s crust, such as those caused by subsidence/uplift or sediment deposition from rivers, exemplify the difficulties of signal separation. These processes typically induce gravitational changes of the order of a few tenths of a μGal per year, often overshadowed by larger-scale signals. However, in certain cases, the spatial pattern of the expected anomaly is relatively well-known, allowing signals to be isolated by exploiting their expected position and stochastic characteristics. In this study, we focus on the sedimentation process of major world rivers. Using ocean current patterns from the ECCO2 model, we simulate the expected gravitational signal caused by sedimentation for eight major rivers. We analyse average gravity variations derived from approximately 18 years of GRACE and GRACE-FO data, synthesised up to spherical harmonic degree 96 (corresponding to a spatial resolution of roughly 200 km) from the Jet Propulsion Laboratory global temporal models, and filtered using the DDK8 filter. 
To distinguish the sedimentation signal from the noise, we first removed the most significant signals at the global scale, using both their amplitude and correlation length. After that, we applied a mask derived from the synthetic sedimentation model to the residual field to isolate the signals related to sediment accumulation. Each anomaly is then inverted in the Fourier domain to determine the annual mass density distribution, which is aggregated to compute the annual sediment discharge for each river. We achieve consistent results for rivers with expected high sediment discharge (e.g. the Amazon, Ganges and Brahmaputra), while sedimentation rates are overestimated for rivers with lower sediment discharge, such as the Nile and the Congo. We finally compare the simulated river discharge gravitational effect with the expected accuracy of ESA’s NGGM mission. Our results demonstrate that NGGM will be able to resolve sedimentation rates larger than 350 Mt/year.
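The quoted correspondence between spherical harmonic degree 96 and roughly 200 km spatial resolution follows from the standard half-wavelength rule (half the Earth's circumference divided by the maximum degree); a quick check:

```python
import math

EARTH_RADIUS_KM = 6371.0

def sh_resolution_km(max_degree: int) -> float:
    """Half-wavelength spatial resolution of a spherical harmonic expansion."""
    return math.pi * EARTH_RADIUS_KM / max_degree

print(round(sh_resolution_km(96)))  # 208, consistent with "roughly 200 km" above
```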
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Ground Deformation Analysis over Hydrocarbon Fields with Remote Sensing and Geomechanical Modeling – A Case Study of Karamay Oil Field in China

Authors: Imeime Uyo, Mahmud Haghighi, Wei Tang, Mahdi Motagh
Affiliations: Leibniz University Hannover, Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences, Potsdam, Germany, College of Geoscience and Surveying Engineering, China University of Mining and Technology, Beijing, China
The continuous production of hydrocarbons presents critical challenges for sustainable resource management and environmental safety. Changes in pore pressure, whether through a decrease caused by oil and gas extraction or an increase from water or gas injection into the reservoir, alter the effective stress on the rock framework. This adjustment in stress can result in the compaction of the reservoir, which translates to the Earth’s surface as a loss in elevation (subsidence) or a gain in elevation (uplift). An example of such surface deformation over hydrocarbon fields is observed in the Karamay oilfield in China, where changes in reservoir dynamics have resulted in notable patterns of both subsidence and uplift. Previous studies have assessed these ground deformations, linking them to the impacts of hydrocarbon production and the associated changes in reservoir pressure. In this study, we conducted a comprehensive analysis spanning a longer time period and a larger study area, providing a more detailed evaluation of the temporal dynamics of reservoir deformation in the region. The small baseline subset (SBAS) method is used to process Sentinel-1 synthetic aperture radar (SAR) data from ascending and descending orbits between 2017 and 2024 to obtain deformation velocities and time series in the line-of-sight direction. Our results confirm the findings of previous studies, while also identifying additional deformation hotspots associated with oil production. We identified several localized uplift areas with rates up to 6 cm/year and additional subsidence areas with deformation rates exceeding 4 cm/year around the oilfield. We utilize a 3D analysis method based on multi-geometry SAR data. The 3D components are obtained by combining ascending and descending line-of-sight observations, assuming that the horizontal component is oriented toward the pumping wells.
The 3D displacement field of the study area is analyzed using a linear elastic geomechanical model to infer information about the underlying reservoir pressure depletion dynamics. The resulting model offers new insights into subsidence and uplift patterns and their relationship with production operations. The findings reveal critical deformation zones, emphasize the limitations of earlier approaches, and demonstrate the necessity of 3D modeling for a comprehensive assessment of the Karamay oil field. This provides a robust framework for monitoring deformation risk in similar oilfields worldwide and paves the way for sustainable resource extraction, reservoir management, and optimization.
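Combining ascending and descending line-of-sight (LOS) velocities into east and vertical components reduces, per pixel, to a small linear system. A generic sketch of that decomposition (sign and heading conventions vary between processors; this is not the authors' code, and the north component is neglected as is common for near-polar orbits):

```python
import numpy as np

def decompose_los(v_asc, v_desc, inc_asc, inc_desc, head_asc, head_desc):
    """Solve for east and vertical velocity from two LOS observations.

    v_*    : LOS velocities (positive toward the satellite)
    inc_*  : incidence angles (radians)
    head_* : satellite heading angles (radians)
    Illustrative geometry only; real processors use their own conventions.
    """
    def los_proj(inc, head):
        # projection of (east, up) motion onto the LOS unit vector
        return np.array([-np.sin(inc) * np.cos(head), np.cos(inc)])

    G = np.vstack([los_proj(inc_asc, head_asc), los_proj(inc_desc, head_desc)])
    d = np.array([v_asc, v_desc])
    east, up = np.linalg.solve(G, d)
    return east, up
```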
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: A.09.11 - POSTER - Snow in the cryosphere: improving observations and modelling

Snow is frequently present across cryosphere surfaces such as sea ice, ice sheets and shelves, glaciers, permafrost and soil, and is an important factor in key processes including mass balance, albedo and thermal insulation. The importance of snow is well known and increasingly recognised, with snow featuring in the World Meteorological Organisation's set of Essential Climate Variables, including the recently added snow depth on sea ice. Changes in snowfall, snow cover and snow properties have far-reaching implications, including for water supply, modification of available sunlight affecting biological and chemical processes, climate feedbacks and many other aspects of the Earth system. Changes in snow physical properties can bring additional information and/or challenges for remote sensing, due to alterations in the interaction of electromagnetic radiation with snow, impacting our ability to collect satellite data over long time- and length-scales.

Studies of snow from in situ, laboratory, remote sensing and modelling studies can provide insights into the state of snow cover as well as changes, trends and future predictions. This session welcomes abstracts on any aspect of snow studies including physical properties and evolution, tools and techniques, modelling and application of statistical or AI methods, human, biological and chemical implications as well as snow dielectric properties which affect remote sensing retrievals. In this session, we invite presentations at all length scales from satellite through airborne, surface, microstructure and smaller. The session will explore insights into the status, changes and potential futures of snow on Earth and impacts e.g. on people and climate, as well as future directions for research, including techniques, modelling, instrumentation, mission design and retrieval algorithms.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Validation and Uncertainties of a Multi Frequency Altimetry Snow Depth Product over the Arctic and Antarctic Ocean

Authors: Alice Carret, Sara Fleury, Alessandro Di Bella, Jack Landy, Isobel Lawrence, Antoine Laforge, Nathan Kurtz, Florent Garnier
Affiliations: Serco, LEGOS, Université de Toulouse, IRD, CNES, CNRS, UPS, ESA-ESRIN, UiT, NASA Goddard Space Flight Center
Satellite altimetry makes it possible to estimate sea ice thickness, an essential variable for better understanding and forecasting the dynamic ice cover. Nevertheless, some sources of uncertainty remain, and one of the most important concerns the snow depth, a key parameter for converting the measured ice freeboard into sea ice thickness. Snow depth can be estimated using different altimeter frequencies with different snow penetration capabilities. We have developed several snow depth products based on the differences between the CryoSat-2 or Sentinel-3 Ku-band SAR altimeters and the ICESat-2 laser altimeter. CryoSat-2 has been monitoring the polar oceans since 2010, providing unprecedented spatial and temporal coverage. Sentinel-3 was launched in 2016 for marine and land services, and ICESat-2, which has provided data since 2018, carries a photon-counting lidar altimeter in space for the first time. We have compared these products with other altimeter-based snow depth products, but also with products based on spaceborne radiometers, models, climatologies and in situ measurements over the Arctic and Antarctic Oceans. Depending on the snow depth product, we found maximum correlations of 0.94 with BGEP moorings, 0.74 with OIB and 0.86 with ICEBird airborne measurements. We also found that using a physical retracker or a polar-dedicated product for Ku-band freeboards improves the comparisons. Moreover, using the same radar freeboard to estimate both the snow depth and the ice freeboard yields better sea ice thickness estimates. A theoretical analysis of the uncertainties also shows that combining two frequencies to simultaneously measure ice freeboard and total freeboard (ice+snow) reduces the uncertainties in ice thickness by around a factor of two compared with a single-frequency solution combined with an independent snow depth product. The snow depth uncertainties of our laser/Ku-band product are estimated to be ~6 cm on average.
This ESA-supported study should help prepare the Copernicus CRISTAL mission, which will include a Ka/Ku dual-frequency altimeter for the first time. For this upcoming mission, validation data are of major importance.
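The laser-minus-radar approach rests on simple geometry: the laser measures the total (snow surface) freeboard, the Ku-band radar approximately the ice freeboard, and their difference, corrected for the slower radar wave propagation in snow, gives the snow depth. A sketch using a commonly cited empirical density correction for dry snow (the density value and function name are illustrative, not the authors' processing):

```python
def snow_depth_dual_freq(fb_laser, fb_radar_uncorrected, snow_density=0.30):
    """Snow depth from laser (total) and uncorrected Ku radar freeboards.

    Applies the widely used empirical wave-speed ratio for dry snow,
    c/c_s = (1 + 0.51 * rho_s)**1.5, with rho_s in g/cm^3, and assumes the
    Ku-band scattering horizon is the snow-ice interface. Illustrative sketch.
    """
    wave_speed_ratio = (1 + 0.51 * snow_density) ** 1.5
    return (fb_laser - fb_radar_uncorrected) / wave_speed_ratio

# e.g. 0.50 m total freeboard, 0.30 m uncorrected radar freeboard
print(snow_depth_dual_freq(0.50, 0.30))
```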
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Positive Antarctic Mass Balance Contributions from Extreme Precipitation Observed in Short-Period ICESat-2 Data, 2019–2024

Authors: Susheel Adusumilli, Prof Helen Fricker
Affiliations: University Of Oregon, Scripps Institution of Oceanography
Projected increases in Antarctic Ice Sheet precipitation in the 21st century will modify its associated sea level contribution. However, there is limited confidence in model representations of these changes, particularly for extreme precipitation events that occur over large areas at time scales of hours to days. Although satellite altimetry and gravimetry can be used to infer mass changes from these events, current data products are only available for the full continent at monthly or longer time scales. This work improves on these products using ICESat-2 laser altimetry from 2019 to 2023 to estimate changes in ice sheet height on weekly to fortnightly time scales. These data reveal widespread increases in surface height due to extreme precipitation across many Antarctic basins. In 2022, an anomalously high number of extreme precipitation events, including the largest Atmospheric River event on record, contributed to the highest annual increase in ice mass for the 2003–2023 period.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Assimilation of Snow Tomography Data for Improvement of Snow Microstructure and Macrostructure Simulations

Authors: Mel Sandells, Esteban Alonso-González, Richard Essery, Laurent Ferro-Famil, Othmar Frey, Stefano Tebaldini, Andreas Wiesmann
Affiliations: Northumbria University, Pyrenean Institute of Ecology, CSIC, University of Edinburgh, CESBIO/ISAE-SUPAERO, GAMMA Remote Sensing, Politecnico di Milano
Snow has recently been identified as an Essential Climate Variable. There is currently no Earth Observation mission dedicated to monitoring snow, despite snow having an important impact on the climate through its high albedo and temperature-moderating effects, protecting the ecosystem from extreme air temperatures. Snow itself is an important water resource, with more than a billion people relying on its melt water. We know that snow extent in the Northern Hemisphere is decreasing in spring and summer, but observing changes in snow mass remains challenging due to vertical and horizontal variability in snow properties, which are also changing in response to meteorological conditions. New snow layers are formed by separate precipitation events, and the structure of the snow changes with the thermal gradient, the pressure of overlying snow and melt water percolation, which may refreeze, forming crusts. Strong temperature gradients cause faceting of the snow crystals and formation of depth hoar (poorly bonded crystals), increasing the risk of avalanches. Over time the size of the snow crystals generally increases, causing a darkening of the snow cover. Scattering of microwave radiation is also sensitive to the changing snow microstructure and is the basis for snow retrieval algorithms. At a point scale with accurate meteorological data, physics-based models can represent changes in snowpack properties well, i.e. temperature, microstructure size, density and stratigraphy. These can be used to simulate weak layers responsible for avalanches. On a global scale, with simpler snowpack models and meteorological information provided by weather forecasts and/or reanalysis, model outputs are less accurate. Snow also impacts the accuracy of other Earth Observation products that operate at microwave frequencies.
Numerical weather prediction systems assimilate passive microwave satellite data in atmospheric sounding channels, but these data also contain a contribution from the snow emission up to frequencies of 243 GHz and can be affected by even thin layers of fresh precipitation. Lower frequencies have larger penetration depths so are sensitive to layers deeper in the snow. A good understanding of snow layering is needed to use these data for many applications, including weather forecasts, particularly in Arctic regions. Assimilation of layering information and density in land surface models offers potential to obtain the best estimate of multiple snow parameters. This in turn will improve hydrological forecasts, snowpack stability estimates for avalanche warning and soil temperature. A new spaceborne mission “SnowCAT” has been proposed to observe main snowpack layers from tomography. This uses multiangular observations from a constellation of X-band SAR instruments to determine the 3D structure of snow. In addition to stratigraphic information, SnowCAT will also provide estimates of snow depth, SWE and density. Here we present results from an Observing System Simulation Experiment to improve estimates of snow parameters from assimilation of data with SnowCAT accuracy into the Multiple Snow Data Assimilation system. We compare different assimilation methodologies and present new methods of incorporating the novel stratigraphic information that will be available from SnowCAT. The expected output accuracy of the assimilation system to estimate snow macro- and microstructure will be presented. Finally, we demonstrate the benefits of better snowpack parameter estimates to support other Earth Observation missions e.g. CRISTAL and CIMR.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Advanced long term global snow cover climate data record from satellite data generated within CCI Snow

Authors: Dr. Thomas Nagler, Dr. Gabriele Schwaizer, Mag. Markus Hetzenecker, Mag Johanna Nemec, Richard Essery, Mag. Ursula Fasching, Maria Heinrich, Dr. Nico Mölg, Dr. Rune Solberg, Dr Anna Maria Trofaier
Affiliations: ENVEO-Environmental Earth Observation IT GmbH, University of Edinburgh, Norwegian Computing Center, European Space Agency
Seasonal snow is an important component of the global climate system. It is highly variable in space and time and sensitive to short-term synoptic-scale processes and long-term climate-induced changes in temperature and precipitation. Current satellite-based global snow cover products are generated with different algorithms applied to various satellite missions. They show major differences in snow extent, which is a source of uncertainty for climate change monitoring and climate model verification. Within the ESA Climate Change Initiative (CCI), we developed a homogenized time series of daily global snow extent maps with 1 km resolution derived from data of different medium resolution optical satellite sensors. The retrieval algorithms for fractional snow extent provide consistent daily products for snow viewable from space (viewable snow) and snow on the surface corrected for forest masking (snow on ground) with global coverage. Input data are medium resolution optical satellite images (Terra MODIS, Sentinel-3 SLSTR) from the year 2000 to present. To obtain a consistent multisensor time series, a homogenisation procedure was applied over the three-year overlap period to quantify and remove biases between snow products from different sensors. The quality of the data sets was assessed using high resolution snow products from Sentinel-2 and Landsat, generated by means of a multispectral unmixing algorithm. The selected reference images cover different seasons and solar illumination conditions, as well as different environments and surface types around the world. One of the major drawbacks of optical satellite data in monitoring the Earth’s surface is the loss of information due to cloudiness and polar night. Therefore, we developed a physically based method to fill gaps by integrating satellite-based snow extent products with snowpack models driven by gridded numerical meteorological data.
The integration starts when a pixel is detected as cloud-covered, using the observed snow cover fraction from the previous day to initialize the snowpack model. The integration is performed by setting up the snowpack model for each pixel separately. The performance of the gap-filling procedure is checked by introducing artificial data gaps in the products and comparing the interpolated values with the observed data sets. In this presentation we will explain the advancements in the algorithms and show the 25-year time series of the global and daily CCI snow extent products, version 4, with 1 km pixel spacing. The advancements in gap-filling of snow products from optical sensors using physically based snowpack models will be highlighted, and their use for monitoring the response of the snow cover to climate warming will be demonstrated. The data sets provide detailed information on trends in snow coverage, from global to regional scale.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Towards Panarctic Snow Density Retrievals From Passive Microwave Remote Sensing

Authors: Jeffrey Welch, Richard Kelly
Affiliations: University of Waterloo
Snow density information is important for a variety of applications, such as estimating snow water equivalent for hydrology, modelling thermal interactions between Earth’s surface and atmosphere, and ecological monitoring of flora and fauna. Yet, existing methods to estimate snow density are limited, especially in remote areas like the Arctic. There are currently no operational automated networks to monitor snow density in the Arctic, so it is typically measured through in situ gravimetric sampling. The manual sampling procedure to estimate snow density is labour intensive and the remote nature of the Arctic environment makes it difficult to expand over space and time. As a result, applications that require large-scale estimates of snow density typically rely on modelled data. However, the available models to estimate snow density (e.g. physical snow models and reanalysis models) do not adequately represent Arctic snow density conditions. Therefore, applications that could make use of large scale estimates of snow density in the Arctic are limited by the available snow density data. A possible approach to estimate snow density conditions across the Arctic is through passive microwave remote sensing. Microwave radiometry is sensitive to snow mass accumulation, so satellite-based passive microwave sensors can be exploited to observe snow conditions across remote areas. The wide spatial coverage and repeat observations provided by satellite remote sensing could allow for identification of spatial patterns of snow density conditions across the Panarctic that were not previously available from traditional point based manual samples. Additionally, the relatively long passive microwave data record could provide insights into how snow density conditions might be changing across the Panarctic. 
The authors recently developed a prototype snow density retrieval algorithm using satellite passive microwave radiometry and automatic weather stations (AWS) to estimate Arctic snow density conditions. The retrieval algorithm was applied at four AWS sites in the Canadian high Arctic, coincident with periodic snow course observations, and appears to replicate snow density estimates from manual sampling well. With some modifications, the retrieval algorithm could be applied across the Panarctic to provide a new dataset to support applications that require estimates of snow density across large spatiotemporal scales. Building upon the prototype algorithm, this work focuses on expanding the scope of passive microwave snow density retrievals across the Panarctic. To apply the retrieval algorithm across large spatial scales, a source of spatially continuous snow depth forcing data will need to be identified, and varying snow conditions across different Arctic environments need to be accounted for in the snowpack model. Further, snow density datasets must be assembled and curated to evaluate algorithm estimates across large spatiotemporal scales – specifically, distributed stratigraphic snow density measurements to assess the simulated density profiles from the retrieval algorithm. Ultimately, this work aims to provide high quality, continuous snow density estimates across the Panarctic at spatiotemporal scales that are currently unavailable.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Retrieving glacier-specific snow cover and snow line altitudes from optical remote sensing and Google Earth Engine

Authors: Christian Sommer, Alexander Groos, Prof. Dr. Matthias Braun
Affiliations: Friedrich-Alexander-Universität Erlangen-Nürnberg
Measurements of seasonal glacier snow cover and snowline altitude (SLA) are important for reconstructing glacier mass balance and observing the effects of global warming in mountain regions. Due to the short revisit times of current Earth observation satellites (up to 5 days), optical remote sensing has the potential to provide dense time series observations of glacier snow cover change. However, the vast amount of satellite imagery available, e.g. several tens of thousands of images for the medium-sized Randolph Glacier Inventory (RGI) 1st order region “Central Europe” over the last two decades, presents a challenge in terms of computing and storage capacity. Cloud-based platforms, such as Google Earth Engine, provide an efficient way to access the satellite image archives of the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA). Here, we use surface reflectance values from the Sentinel-2 and Landsat missions to create time series of optical images from ~2000 to the present based on glacier outlines from the Randolph Glacier Inventory V7.0 outside the polar ice sheets. Time-varying characteristics of the glacier surface, such as seasonal snow cover or bare ice zones, are observed by thresholding the reflectance ratios of the corresponding spectral bands of each satellite. To increase the spatial coverage of glacierized areas and to reduce data voids due to cloud-covered pixels, the selected satellite images are then aggregated as stacks of median surface reflectance at intervals of 15 days. Finally, the pixel values of each RGI glacier at each timestamp are classified as snow or non-snow (e.g. bare ice, rock or debris), and the transient snow line altitude (TSLA) is estimated by dividing the reclassified binary snow raster into 50 m elevation bins using a reference digital elevation model (DEM) and calculating the corresponding snow-covered area fraction (SCAF) of each bin.
To account for the vertical change in glacier surface elevation of each pixel over the ~20-year period, we assume a linear rate of change based on the Shuttle Radar Topography Mission (SRTM) NASADEM and Copernicus DEM GLO-30 global products, representing terrain elevations for the years 2000 and 2010-2015, respectively. The processing and analysis of the image raster data is carried out entirely in Google Earth Engine. This drastically reduces the computation time compared to a more traditional approach of processing data on local machines, as the satellite data can be accessed and processed directly, eliminating the need to download and store raw data or intermediate products.
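Outside Earth Engine, the final binning step can be illustrated with plain numpy. A hypothetical sketch of computing the snow-covered area fraction per 50 m elevation bin and picking a snow line from it (illustrative logic only, not the authors' implementation; the threshold value is an assumption):

```python
import numpy as np

def transient_snow_line(snow_mask, dem, bin_size=50.0, scaf_threshold=0.5):
    """Estimate a transient snow line altitude from a binary snow mask.

    snow_mask : boolean array, True where a glacier pixel is snow-covered
    dem       : elevation in metres per pixel, same shape as snow_mask
    Bins elevations into bin_size intervals, computes the snow-covered area
    fraction (SCAF) per bin, and returns the midpoint of the lowest bin whose
    SCAF reaches the threshold (None if no bin qualifies).
    """
    edges = np.arange(dem.min(), dem.max() + bin_size, bin_size)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (dem >= lo) & (dem < hi)
        if in_bin.any() and snow_mask[in_bin].mean() >= scaf_threshold:
            return (lo + hi) / 2.0
    return None
```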
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Length-scale variabilities of Antarctic summer snow on sea ice

Authors: Daria Paul, Prof. Dr. Stefanie Arndt
Affiliations: Alfred-Wegener-Institut, University of Hamburg
The snow cover on Antarctic sea ice contributes significantly to the sea ice mass budget through seasonal transition processes, strongly influences sea ice growth through its insulating properties, and modifies radar signal propagation. The snow cover varies strongly not only in depth but also in its physical characteristics such as density and stratigraphy. However, little is known about Antarctic snow properties and their seasonal and spatial variability. To quantify the heterogeneity of the Antarctic snowpack on different spatial scales, we present a study of internal snow properties over three austral summers at 39 different sites in the Weddell Sea and in varying sea ice regimes. Connecting SnowMicroPen (SMP) profiles with traditional snow pit studies allows us to upscale single-point measurements to spatial scales of up to 60 meters and conduct a spatial autocorrelation analysis. A supervised classification algorithm trained on manually snow-type-classified SMP signals enables segmentation and classification of 800 SMP profiles and the separation of SMP-derived density and specific surface area by snow layer type. Preliminary analyses of the classified datasets reveal substantial variability in snow grain type fractions within individual snow profiles, which is also reflected in the corresponding density profiles. However, a clear distinction between seasonal and perennial snow and ice regimes emerges. Specifically, seasonal ice is characterized by higher fractions of fragmented crystals, whereas melt-freeze clusters dominate the snowpack on perennial ice. The extensive dataset of vertical profiles provides a robust foundation for statistical analyses of snowpack properties in relation to prevailing snow and ice conditions.
Furthermore, data collected along transect lines – extending up to 60 meters with measurements at approximately 1- to 2-meter intervals – enable the characterization of spatial variability in snow layer properties at the floe scale. These findings offer critical insights for improving model-based and satellite remote sensing approaches aimed at estimating snow and sea ice thickness on larger scales.
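As an illustrative sketch only (not the authors' code), the spatial autocorrelation of snow properties along such a transect can be quantified with an empirical semi-variogram; the transect values below are synthetic stand-ins for measured snow properties:

```python
import numpy as np

def empirical_semivariogram(positions, values, lags, tol):
    """Empirical semi-variogram: gamma(h) = 0.5 * mean((z_i - z_j)^2)
    over all point pairs whose separation is within tol of lag h."""
    gamma = []
    for h in lags:
        sq = []
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                if abs(abs(positions[j] - positions[i]) - h) <= tol:
                    sq.append((values[i] - values[j]) ** 2)
        gamma.append(0.5 * np.mean(sq) if sq else np.nan)
    return np.array(gamma)

# Synthetic transect: one sample every ~1 m over 60 m (hypothetical values)
x = np.arange(0.0, 60.0, 1.0)
z = 0.3 + 0.05 * np.sin(x / 8.0) + np.random.default_rng(0).normal(0, 0.01, x.size)
lags = np.arange(1.0, 30.0, 1.0)
gv = empirical_semivariogram(x, z, lags, tol=0.5)
```

The lag at which the semi-variogram levels off (the range) gives one estimate of the correlation length scale of the snowpack property.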
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: How snow evolves after precipitating on sea ice: Results from autonomous measurements and 1-D model simulations in the Weddell Sea and comparisons with the Arctic

Authors: Nina Maaß, Stefanie Arndt
Affiliations: Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, University of Hamburg, Institute of Oceanography
Antarctic sea ice is characterized by a year-round, relatively thick snow cover, while snow on Arctic sea ice is thinner and usually melts during summer. Snow increases the surface albedo of sea ice and acts as an insulator for the ice underneath, hampering both melting of ice under warm conditions and growth of ice under freezing conditions. Snow on sea ice is frequently transformed into meteoric ice, especially in the Antarctic: (1) when snow is flooded with sea water and refreezes, it forms snow ice; (2) internal melting and refreezing of snow above the sea water level forms superimposed ice. Snow is thus an important factor in the sea ice energy and mass budget, and the lack of knowledge about snow and the processes it undergoes after it has precipitated on sea ice leads to substantial uncertainties in sea ice remote sensing products and climate studies. In this study, we examine 35 autonomously operating ice-tethered snow observation platforms (‘Snow Buoys’) deployed in the Weddell Sea since 2013. During their lifetime (typically several months to two years) they measure the height of the snow surface relative to the snow-ice interface at the time of deployment. As parts of the accumulated snow turn into meteoric ice (snow ice and superimposed ice), the height measured by a Snow Buoy is generally not equal to the snow thickness. To quantify the fraction of meteoric ice and the impact of the snow cover on the sea ice, we simulate the temporal evolution of the snow and ice column along the Snow Buoys’ drift trajectories using the 1-D snow cover model SNOWPACK. SNOWPACK is a multi-purpose snow and land-surface model with a detailed description of the mass and energy exchange between the snow, the atmosphere and, optionally, the underlying ground layer. The model was originally developed for alpine regions but has also been applied to polar regions and, recently, to sea ice environments. 
The SNOWPACK simulations can be driven either with observed snow heights or with precipitation from reanalyses, for example. Here, we show how driving SNOWPACK with Snow Buoy observations allows us to understand which processes lead to the measured snow height. We also present how comparing observed snow heights with reanalysis-driven simulations for different model configurations allows us to determine how SNOWPACK can best be applied to Antarctic sea ice environments, specifically the Weddell Sea. Using the resulting best model settings, we can upscale our results from the Snow Buoys’ point measurements to regional scales and, for example, estimate the fraction of snow that has turned into meteoric ice for the whole Weddell Sea. We find that, compared to the buoy observations, snow heights simulated with SNOWPACK are considerably overestimated when driven with ERA5 reanalysis precipitation. To account for possible snow removal after the snow has precipitated on sea ice, we apply a wind-speed-dependent snow drift model, optionally constrained with the available open water areas (’sinks’) around the buoy position. However, we find that the simulated and observed snow height timeseries agree best when the precipitation from the reanalysis data is simply reduced to ~50% of its value, independently of wind speed and ice concentration. We have also applied the SNOWPACK model to simulate the snow and ice evolution along the drift trajectories of 57 Arctic Snow Buoys and to compare the results with the Antarctic observations. For this comparison, we drive SNOWPACK with the buoy-observed snow heights, and we find that meteoric ice forms very frequently on Antarctic sea ice, while it is less abundant on Arctic sea ice. Snow ice is found in 62% of the Antarctic and 8% of the Arctic data points. On average, the snow ice is at its maximum 31±26 cm thick in the Antarctic, compared to 10±12 cm in the Arctic. 
Superimposed ice is found in 58% of the Antarctic and 15% of the Arctic data points. When it does form, however, the superimposed ice reaches a similar average maximum thickness in both hemispheres (9±6 cm in the Antarctic vs. 8±6 cm in the Arctic). The different behaviour of snow on sea ice in the Northern and Southern Hemispheres’ polar regions has different implications for the respective sea ice mass balances, and potential climate-change-induced changes in precipitation may have different consequences for Arctic and Antarctic regions.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Detecting snowfall events over the Arctic using optical and microwave satellite measurements

Authors: Emmihenna Jääskeläinen, Dr Kerttu Kouki, Research professor Aku Riihelä
Affiliations: Finnish Meteorological Institute
Determining precipitation over the Arctic region with high accuracy is challenging due to the sparse in situ observation network. Additionally, current climate models, atmospheric reanalyses, and direct satellite-based precipitation observations face various difficulties that hinder accurate assessment of precipitation. In this study, we undertake a proof-of-concept investigation into how accurately optical satellite observations, specifically the Sentinel-2 surface-reflectance-based, grain-size-related specific surface area of snow (SSA), and microwave-based snow water equivalent (SWE, from the ESA Snow Climate Change Initiative version 2) estimates can detect snowfall over the Arctic. Additionally, we include ERA5-Land SWE data to support the analysis. We focused on a limited area (a circle of 100 km radius around the Luosto weather radar in Northern Finland) and a short time period (March 2018) to evaluate the chosen data sources for assessing precipitation in this area. The threshold for snowfall is defined as 1 cm, based on simulations of the associated change in snow reflectivity with the Two-streAm Radiative TransfEr in Snow (TARTES) snow model. Because of metamorphism, snow grains grow larger as snow ages, leading to a decrease in specific surface area compared to fresh snow (i.e., fresh snow has higher SSA values and aged snow lower ones). Snowfall events are therefore detected by positive differences between two SSA values. Similarly, an increase in SWE indicates a snowfall event. Using this information, we classified differences between observations independently for SSA and SWE and compared the results to the weather-radar-based snowfall information, which serves as the independent reference. The initial results are promising. Situations with snowfall are classified with high recall: 64% for the satellite-based SWE, 77% for ERA5-Land-based SWE, and around 90% for SSA. 
However, it was more challenging to accurately classify cases without snowfall using satellite-based data, with a 34% recall value for satellite-based SWE and varying recall values from almost 60% to over 70% for SSA. ERA5-Land-based SWE had the highest recall value for cases without snowfall, 80%. These findings suggest that optical and microwave-based satellite observations can be used to detect snowfall events over the Arctic.
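A minimal sketch of this kind of difference-based snowfall detection and per-class recall computation, using hypothetical SWE values and radar reference labels (not the study's data or code):

```python
import numpy as np

def detect_snowfall(series, min_increase=0.0):
    """Flag a snowfall event between consecutive observations when the
    snow-sensitive quantity (SSA or SWE) increases."""
    return np.diff(series) > min_increase

def recall(predicted, reference, positive=True):
    """Recall for one class: correctly detected cases divided by all
    reference cases of that class."""
    ref = reference == positive
    if ref.sum() == 0:
        return float("nan")
    return float(((predicted == positive) & ref).sum() / ref.sum())

# Hypothetical daily SWE values (mm) and a radar-based snowfall reference
swe = np.array([50, 50, 55, 55, 60, 58, 58, 63])
radar_snowfall = np.array([False, True, False, True, False, False, True])
pred = detect_snowfall(swe)
r_snow = recall(pred, radar_snowfall, positive=True)
r_no_snow = recall(pred, radar_snowfall, positive=False)
```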
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Understanding C-band Radar Interactions with Snow in Various Environmental Conditions

Authors: Hans Lievens, Isis Brangers, Hans-Peter Marshall, Jaron Vandenbroucke, Emma Tronquo, Devon Dunmire, Bert Cox, Guus Leenders, Francesca Carletti, Mathias Bavay, Tobias Jonas, Gabriëlle De Lannoy
Affiliations: Ghent University, KU Leuven, Boise State University, WSL - SLF
Seasonal snow in mountain ranges is an essential source of water. The monitoring of snow properties, such as snow depth or snow water equivalent (SWE), can benefit a range of applications, including water resource management and renewable energy production. A promising remote sensing method for monitoring snow in mountain environments is Synthetic Aperture Radar (SAR), for instance the ESA and Copernicus Sentinel-1 satellite mission, which operates at C-band (5.4 GHz) frequency in dual polarization (VV and VH). However, Sentinel-1 retrievals of snow depth or SWE remain complex, as the scattering mechanisms of C-band microwave signals in snow are still not fully understood. In recent years, multiple C-band tower radar experiments have been carried out, aimed at improving the understanding of C-band radar interactions with different types of snow and in different environmental conditions. Timeseries experiments covering multiple snow seasons were carried out at point locations in the Idaho Rocky Mountains (2021-2024) and in the Swiss Alps (2023-2024). Spatial transects were collected in colder snowpacks around Fairbanks, Alaska, during the March 2023 NASA SnowEx campaign. The collected side-looking radar data consist of detailed time-domain profiles, revealing the scattering contributions of different snow layers and the soil surface. The timeseries experiments show that, throughout the accumulation of dry snow, the scattering from the snow volume increases over time. However, changes in scattering intensity can also be partly related to changes in snow microstructure and stratigraphy. Radar measurements in both co- and cross-polarization are affected by dry snow accumulation, but the impact is more clearly evident in cross-polarization, where the total measured backscatter is less dominated by the comparatively weak reflection from the underlying soil. 
From the transects in the shallower and colder snowpack around Fairbanks, we found VV polarization to scatter more efficiently than HH or VH polarization. Manual removal of the snowpack decreased the backscatter by 2 dB in all polarizations, supporting sensitivity of C-band data to snow and the potential use of Sentinel-1 to retrieve snow depth.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: In Situ Characterization of Snow Dielectric Properties From 100 MHz to 2 GHz

Authors: Riccardo Barella, Nicola Ciapponi, Barbara Cosciotti, Sebastian Emanuel Lauro, Emanuele Mariani, Carlo Marin, Elisabetta Mattei, Elena Pettinelli
Affiliations: Università Degli Studi Roma Tre, EURAC
Snow cover is the most extensive component of the cryosphere, and it affects numerous phenomena of the Earth system, from the radiative balance to the climate, from glacier mass balance to water resources. Monitoring the physical and dielectric properties of snow is fundamental to improve the understanding of snowpack dynamics, hydrological processes, and their interactions with the climate system. In this scenario, the characterization of the dielectric properties of snow is particularly important for remote sensing applications. In-situ measurements are crucial for improving the interpretation and validation of satellite observations, with significant implications for studies of the water cycle, water resource management, and avalanche monitoring. While various models have been proposed to describe the dielectric properties of snow and their relationship to physical characteristics like density and liquid water content, significant discrepancies exist between these models. Although all formulations agree that permittivity (both real and imaginary parts) increases with liquid water content, substantial differences in absolute values arise at varying densities, liquid water contents, and frequencies. The root causes of these discrepancies likely include: i) lack of consensus on the shape of water inclusions within the mixing models at the different water regimes; and ii) limited availability of field-acquired data encompassing all snow conditions and frequencies. As a result, the selection of a specific model often relies on pragmatic choices rather than definitive validation. This underscores the urgent need for comprehensive data to validate or develop more accurate dielectric models. This work presents a new measurement protocol for the dielectric properties of snow, based on the use of three-wire probes connected to a Vector Network Analyzer (VNA). 
By analysing the resonances of the signal propagating through the probes, it is possible to obtain accurate estimates of the real and imaginary parts of the snow dielectric permittivity in the frequency range 100 MHz – 2 GHz. The protocol was applied in two field campaigns conducted during the spring season of 2024 at two sites in Val Senales in the Italian Alps. These measurements were supported by “PNRR-M4C2-I1.1-PRIN 2022-PE10-SnowMed - Apennine snow cover in the Mediterranean climate region: multi-sensor data, observations, modeling and trend analysis-F53D23002280001-Finanziato dall'U.E. – NextGenerationEU” and carried out in collaboration with researchers from the EURAC institute in Bolzano. Over 100 measurements were performed in the field, yielding more than 300 dielectric permittivity values using probes of varying lengths (50 mm, 100 mm, 200 mm), a dataset that is unprecedented in the literature. The innovative aspect of these measurements lies in the simultaneous acquisition of snow density and liquid water content (LWC) through melting calorimetry. This measurement protocol provides a direct comparison between physical and dielectric properties. Additionally, measurements were taken throughout the day, providing insights into the temporal evolution of snowpack characteristics and allowing the acquisition of measurements spanning LWC values from about 2% to more than 10%. Preliminary results from the Senales campaign demonstrate a high correlation between the real and imaginary components of permittivity and LWC. Nevertheless, the application of different retrieval schemes, each underpinned by a distinct semi-empirical snow mixing model, leads to varying outcomes, particularly at frequencies away from 1 GHz and at intermediate LWC values. 
The portability of the instrument, the rapidity of the measurement and the possibility of an in-situ calibration procedure ensure the applicability of this method in different environmental conditions, including remote areas. The potentially large amount of data that can be collected with the proposed measurement protocol will contribute to the validation of previously proposed dielectric models and enhance accuracy and precision in estimating snow properties. Future developments will involve applying this measurement protocol to a broader range of snow conditions and integrating it with other electromagnetic techniques, such as ground-penetrating radar (GPR), to achieve multi-scale snow characterization.
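For illustration only, one widely used semi-empirical form of the snow mixing models discussed above is the refractive (Birchak) model, in which the square root of the mixture permittivity is the volume-weighted sum of the square roots of the component permittivities. The permittivity values and the helper function below are assumptions for a sketch, not the authors' retrieval scheme:

```python
import numpy as np

# Illustrative real permittivities near ~1 GHz (assumed values)
EPS_AIR, EPS_ICE, EPS_WATER = 1.0, 3.17, 80.0

def birchak_wet_snow(density, lwc):
    """Refractive (Birchak) mixing for wet snow:
    sqrt(eps_mix) = sum_i v_i * sqrt(eps_i), with volume fractions of air,
    ice and water derived from snow density (kg/m^3) and LWC (vol. fraction)."""
    v_water = lwc
    v_ice = (density - 1000.0 * v_water) / 917.0  # ice density ~917 kg/m^3
    v_air = 1.0 - v_ice - v_water
    sqrt_eps = (v_air * np.sqrt(EPS_AIR)
                + v_ice * np.sqrt(EPS_ICE)
                + v_water * np.sqrt(EPS_WATER))
    return sqrt_eps ** 2

# Dry vs. moderately wet snow at the same density
eps_dry = birchak_wet_snow(300.0, 0.0)
eps_wet = birchak_wet_snow(300.0, 0.05)
```

Even a few percent of liquid water raises the mixture permittivity noticeably, which is why the real part of the permittivity tracks LWC so strongly in the field data.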
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Harmonizing Satellite-Based Snow Cover Area Time-Series with Snow Model Input Data

Authors: Valentina Premier, Dr. Carlo Marin
Affiliations: Eurac Research
Satellite-based snow cover area (SCA) time-series provide relevant spatial information about snow presence. However, SCA derived from satellite sources still suffers from several classification errors. Notable issues include the underestimation of snow presence in forested areas, where snow beneath the canopy (i.e., snow on the ground) is not visible to satellites, and missed snow detection in shadowed regions, which can cover large portions of mountainous areas, particularly during winter. Additional challenges arise from the trade-off between spatial and temporal resolution. Achieving both daily coverage and high spatial resolution requires merging data from multiple satellite sources through downscaling and gap-filling techniques, which can introduce further errors, such as the missed identification of snow patches at the end of the season when lower-resolution satellites are used for those dates. When SCA data are integrated into snow models through data assimilation schemes, these errors and inconsistencies can propagate, undermining the accuracy of model outputs. For example, errors during melting periods, particularly at the end of the snow season, that result in intermittent snow presence caused by the misclassification of snow patches are especially problematic. These challenges complicate the integration of SCA into simple modeling frameworks, such as backward Snow Water Equivalent (SWE) reconstruction, where inconsistencies in determining the snow disappearance date can lead to significant errors. A previous attempt to harmonize SCA time-series relied on in-situ snow depth data to define accumulation and ablation periods, assuming these periods are uniform across the region (Premier et al., 2023). That method corrected inconsistencies such as snow cover increases without corresponding accumulation, or decreases during periods without melting. 
While effective for small areas, this approach is limited by the availability of in-situ data and by the validity of its assumptions over larger regions. In this work, we extend the previous harmonization technique by using observed meteorological grids that vary in time and space. Specifically, for this test we used precipitation and temperature from the EMO-1 dataset, which is used as forcing in the European Flood Awareness System (EFAS). Despite potential inaccuracies in these meteorological fields, we assume they are reliable in terms of timing and spatial extent. This method ensures the coherence of SCA time-series with snow state dynamics (accumulation, ablation) by imposing logical checks on pixel class changes. For example, during accumulation a pixel can transition from snow-free to snow only if snowfall occurs on bare ground, or remain snow if precipitation falls on existing snow. During ablation, a pixel can transition from snow to snow-free only if the snow cover disappears, or remain snow if partial melting occurs. Misclassified transitions are corrected by analyzing an appropriate time window and applying a majority rule to determine the most frequent label. The harmonization process is performed iteratively, taking previous steps into account to avoid introducing new inconsistencies during corrections. Logical masks streamline the computation, enhancing efficiency and scalability. By providing coherent, harmonized SCA time-series, this approach improves the integration of satellite-derived snow cover data into snow models, eliminating the need for further pre-processing. References: Premier, V., Marin, C., Bertoldi, G., Barella, R., Notarnicola, C., & Bruzzone, L. (2023). Exploring the use of multi-source high-resolution satellite data for snow water equivalent reconstruction over mountainous catchments. The Cryosphere, 17(6), 2387-2407.
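As a hedged illustration of the majority-rule correction step on a binary SCA time-series (the function and data are hypothetical, not the authors' implementation):

```python
import numpy as np

def majority_correct(labels, flagged, window=5):
    """Replace flagged (physically inconsistent) labels with the majority
    label of their neighbours inside a centred time window."""
    labels = labels.copy()
    half = window // 2
    for t in np.flatnonzero(flagged):
        lo, hi = max(0, t - half), min(len(labels), t + half + 1)
        neigh = np.delete(labels[lo:hi], t - lo)  # exclude the flagged sample
        labels[t] = 1 if neigh.mean() >= 0.5 else 0
    return labels

# Binary SCA series (1 = snow) with a spurious snow-free dip flagged
# during an accumulation period with no melting
sca = np.array([1, 1, 1, 0, 1, 1, 1])
flagged = np.array([False, False, False, True, False, False, False])
corrected = majority_correct(sca, flagged)
```

In the real scheme the flags would come from the logical accumulation/ablation checks against the meteorological grids rather than being supplied by hand.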
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Polarimetric Radar Altimetry of the Cryosphere

Authors: Dr Rosemary Willatt, Dr Melody Sandells, Julienne Stroeve, Dr Heather Selley, Dr Vishnu Nandan, Professor Steven Baker, Thomas Newman, Dr Robbie Mallett, Dr Amy Macfarlane, Mr Monojit Saha, Professor Anna Hogg, Lanqing Huang
Affiliations: UCL, Northumbria University, University of Manitoba, University of Leeds, UiT Arctic University of Norway, NASA
We present an analysis of data collected by the KuKa surface-based, fully polarimetric, dual-frequency radar instrument, deployed in multiple field campaigns over Arctic and Antarctic sea ice, lake ice and terrestrial snow, covering a variety of scenarios and regions of the cryosphere. Surface-based studies such as these, combining KuKa and geophysical measurements, offer the chance to compare retrieval techniques, and radar waveforms, with physical snow and ice conditions. We demonstrate that dual-polarisation techniques could provide accurate retrievals of snow depth, performing better than dual-frequency Ku- and Ka-band approaches at the surface-based scale. Earth's cryosphere comprises snow, ice and frozen ground in the polar regions and beyond. Remote sensing offers the potential to monitor areas over long time- and length-scales relevant for climate research, but interpretation of the data can pose challenges. For example, snow depth and sea ice thickness are WMO-designated Essential Climate Variables, indicating a need for consistent, accurate monitoring. There is the potential to retrieve them using satellite radar altimetry, but uncertainties in the scattering surfaces and/or volumes can cause a lack of confidence in the retrieved values. This is especially true for snow-covered Antarctic sea ice, where relatively sparse in situ data and complex snow and ice conditions further complicate retrievals. We compare our findings with a study using a different polarimetric surface-based radar, confirming that this is not an instrument- or location-dependent effect but represents a novel method for surface-based snow depth retrieval. We present data from modelling studies and consider relevant theory relating to our observations, grounding our findings in the physics behind the polarisation effects we observe in our data. 
Considering the upscaling from surface-based campaigns to larger scales, we discuss how this concept and technique might be further developed and advanced for future remote sensing of the cryosphere.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: From Sentinel 1 to CROCUS: a new data assimilation approach to wet snowline detection and monitoring

Authors: Bastien Delacroix, Fatima Karbou, Matthieu Lafaysse, Mathieu Fructus
Affiliations: CNRM CNRS Météo-France / CEN
Can we safely hike in the mountains without fear of an avalanche? Is the snowpack stable enough to prevent it from collapsing? Will there be sufficient water in the valleys this summer? What is snow's impact on our mountain ecosystems and on global climate change? These questions, and many others, are the focus of research at Météo-France's Snow Study Center, where understanding the snowpack is essential for addressing environmental and societal issues. Crocus is a state-of-the-art one-dimensional numerical snowpack model developed at Météo-France to simulate the physical properties of the snowpack and their evolution in time. Crocus performs the thermodynamic coupling between snow and soil and simulates different mass and energy exchange processes with the atmosphere and the underlying ground, such as radiative fluxes, latent and sensible heat fluxes, and precipitation. However, since the model does not yet assimilate snow observations, any errors during the winter season can carry over into the rest of the year. Analysing the dynamics of snow-covered areas, considering both spatial and temporal variations, is a highly intricate task, as it is subject to the influence of weather and terrain characteristics, among many other factors. The Sentinel missions, operated by the European Space Agency, offer a major asset in this context, thanks to their high spatial resolution and high revisit frequency. Data assimilation is a promising way of combining the output of physical snow models, such as the state-of-the-art model Crocus, with snow-sensitive satellite observations. The synthetic aperture radar (SAR) on Sentinel-1, operating in C-band, is sensitive to wet snow thanks to the attenuation of the signal caused by the high permittivity of liquid water compared with dry snow or air. This behavior is particularly useful for detecting areas of melting, which are often difficult to monitor using conventional methods. 
Our aim is to reduce model uncertainties by linking Sentinel-1 radar backscatter data to the CROCUS liquid water content (LWC) variable, using a particle-filter-based data assimilation algorithm. The methodology is based on the development of an observation operator that links Sentinel-1 data with the LWC, an intrinsic variable of CROCUS. A wet snow map is first constructed from the radar images, classifying the study area according to altitude, slope and orientation, using a digital elevation model (DEM). For each class, the percentage of wet snow is estimated and converted into LWC values. These observational data are then assimilated into CROCUS via an ensemble simulation, where each member uses a different physical parameterization to simulate the evolution of the snowpack. The particle filter adjusts member state variables to minimize discrepancies between observations and simulations, while preserving the physical links between model variables. This approach was tested on the Grandes Rousses massif in the French Alps during the 2017-2018 season, characterized by high wet snow coverage. The results show a significant improvement in snowline detection compared with simulations without assimilation. Comparisons with independent data, such as Sentinel-2 optical images or Météo-France avalanche bulletins (BERA), confirm the added value of this method. These initial results highlight the potential of integrating radar data into snowpack modeling. Building on this work, our future efforts will focus on coupling Sentinel-1 radar data with Sentinel-2 optical data, combining their complementary features to further improve the accuracy and applicability of snowpack monitoring tools in the face of the growing challenges of climate change.
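A minimal sketch of one particle-filter weight-update and resampling step of the kind described, with hypothetical ensemble-member LWC values and a Gaussian observation likelihood (not the CROCUS assimilation code):

```python
import numpy as np

def particle_filter_update(simulated_lwc, observed_lwc, obs_std, seed=0):
    """One assimilation step: weight each ensemble member (particle) by the
    Gaussian likelihood of the observation-derived LWC, normalise the
    weights, then resample members in proportion to their weights."""
    rng = np.random.default_rng(seed)
    innovation = simulated_lwc - observed_lwc
    weights = np.exp(-0.5 * (innovation / obs_std) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(simulated_lwc), size=len(simulated_lwc), p=weights)
    return simulated_lwc[idx], weights

# Hypothetical member LWC values (% vol) vs. an observation-derived value
members = np.array([0.5, 1.0, 2.0, 3.5, 5.0])
resampled, w = particle_filter_update(members, observed_lwc=2.2, obs_std=1.0)
```

Members close to the observation receive the largest weights and are duplicated in the resampling, while distant members tend to be discarded; in the full scheme each particle carries the complete snowpack state, so the physical links between model variables are preserved.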
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: A Preliminary Assessment of High-Resolution Remote Sensing Time Series for Constraining Intermediate Complexity Snow Model Simulations in Complex Terrain by Perturbing Energy Inputs

Authors: Cristian Tonelli, Valentina Premier, Carlo Marin, Carla Braitenberg
Affiliations: University of Trieste, Eurac Research
Water stored in the snowpack is an indispensable resource for sustaining human life, especially for mountainous regions like the Alps. Monitoring the snowpack over remote areas is challenging, and its heterogeneity limits the representativeness of in-situ measurements. A valid alternative is the use of physically based snow models, which provide high-resolution information on snow evolution over large areas. However, these models require highly accurate meteorological input data to produce reliable results and struggle to accurately represent snow redistribution caused by gravity and wind transport. The Flexible Snow Model (FSM2) is a multi-physics energy-balance snow model that reconstructs snowpack evolution from meteorological forcings such as temperature and radiation (Essery et al., 2024). Another option is remote sensing, which provides free access to large-scale data. Current optical sensors provide accurate information about the snow extent. By integrating multi-source satellite data, such as high-spatial-resolution snow cover derived from Sentinel-2 and daily low-resolution snow cover derived from MODIS, we can derive daily high-resolution Binary Snow Cover time series (BSC-TS) (Premier et al., 2023). In this work, in order to investigate the impact of spatialization errors in meteorological input data on snow model simulations, we analyze how perturbations in temperature and radiation affect the timing of snow disappearance (i.e., the time when SWE = 0) in a complex terrain environment. Specifically, we aim to identify the minimum (co-)perturbation required to minimize the discrepancy between snow disappearance dates simulated by the FSM2 model and those derived from remote sensing BSC-TS data. The perturbation analysis involved modifying temperature in a range from -1°C to +1°C and altering the radiation (both direct and diffuse components) by a multiplicative factor ranging from -20% to +20%. 
The area of interest is the Dischma catchment in Switzerland, where high-resolution hydrometeorological and snow data are available as input to the snow model (Magnusson et al., 2024). The results are further analyzed in relation to altitude, slope, and aspect. The analysis reveals that the model tends to overestimate the time of snow disappearance with increasing altitude and slope. As expected, the sensitivity analysis confirms that an increase in both temperature and radiation reduces the timing bias at high altitudes and on steep slopes, while lower temperature and radiation are required at low altitudes and on gentle slopes. This result might be linked to inaccuracies of interpolation techniques due to missing in-situ data, especially at high elevation, resulting in inaccurate trends of the forcings with elevation. Regarding slope, the overestimation trend can be attributed to gravitational forces, which drive lateral snow redistribution, a process not implemented in FSM2. Wind redistribution might likewise not be well represented by the method but better captured by the satellite product. For aspect, there is no clear trend, but FSM2 tends to slightly overestimate the time of disappearance for north-facing slopes, requiring an increase of temperature/energy to fit the benchmark, while it tends to underestimate for south-facing areas. The inverse was expected, i.e., a required decrease of temperature/energy for north-facing slopes, given that usual interpolation techniques do not account for cast shadows. This preliminary assessment highlights the potential for more complex data assimilation schemes while considering the limitations of the input forcing and the model. Integrating remote sensing data into snow models could enhance the accuracy of snow evolution simulations. The next step involves implementing improvements to simulate snow disappearance and the onset of melting, leveraging SAR observations from Sentinel-1. 
Further research could also explore an alternative approach that replaces meteorological forcing inputs with satellite-derived inputs. Acknowledgement: This work has been produced with co-funding from the European Union - Next Generation EU. References: Essery, R., Mazzotti, G., Barr, S., Jonas, T., Quaife, T., Rutter, N., 2024. A Flexible Snow Model (FSM 2.1.0) including a forest canopy. https://doi.org/10.5194/egusphere-2024-2546 Magnusson, J., Bühler, Y., Quéno, L., Cluzet, B., Mazzotti, G., Webster, C., Mott, R., Jonas, T., 2024. High-resolution hydrometeorological and snow data for the Dischma catchment in Switzerland. https://doi.org/10.5194/essd-2024-374 Premier, V., Marin, C., Bertoldi, G., Barella, R., Notarnicola, C., Bruzzone, L., 2023. Exploring the use of multi-source high-resolution satellite data for snow water equivalent reconstruction over mountainous catchments. The Cryosphere 17, 2387–2407. https://doi.org/10.5194/tc-17-2387-2023
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Distribution of Snow Depth Over Different Surface Types of Arctic Sea Ice

Authors: Lanqing Huang, Julienne Stroeve, Thomas Newman, Rosemary Willatt, Lu Zhou
Affiliations: University College London, University of Manitoba, University of Colorado Boulder, Utrecht University
The distribution of snow depth (SD) on sea ice is a crucial parameter influencing both sea ice thermodynamics and dynamics. Snow surfaces are roughened and textured by the wind, creating self-organized textures collectively known as snow bedforms. Different wind, time, and temperature conditions produce bedforms as opposed to flat snow surfaces (Kochanski et al., 2018). The SD distribution on sea ice, depending on the snow bedforms, is thus influenced by numerous factors, including wind redistribution, ice surface roughness, and ice type. Understanding these distributions is vital for improving climate models and interpreting satellite observations of sea ice thickness. Shalina and Sandven (2018) demonstrated SD variability across different snow bedforms. Mallett et al. (2022) derived a statistical distribution for SD over multi-year ice as a function of the mean SD value. However, the characterization of the SD distribution across the various surface types of Arctic sea ice remains limited. This study characterizes SD distribution variability across different Arctic sea ice surface types by leveraging data from field campaigns. In-situ SD data were collected from multiple test sites, each with several transects. The sites cover various snow bedforms and weather conditions. Test site 1 is Elson Lagoon, with smooth landfast ice; Test site 2 is the rougher Beaufort Sea; and Test site 3 is the Chukchi Sea, less rough than the Beaufort Sea but not as smooth as Elson Lagoon. SD measurements were also taken in the Lincoln Sea (Test Site 4), which is predominantly covered by multi-year ice >2 m in thickness. Data collected from Cambridge Bay (Test Site 5) in 2017, 2022, and 2024 indicate conditions similar to those in Elson Lagoon. At Eureka (Test Site 6), measurements were obtained over both first-year and multi-year landfast ice. During the MOSAiC expedition (Test Site 7) (Itkin et al. 2021, Hutter et al. 
2023), SD data were collected alongside surface roughness and ice thickness measurements, capturing snow depths from thicker ice (>2 m) in the Northern Loop and thinner ice (0.5-2 m) in the Southern Loop. Additionally, SD measurements from the Weddell Sea provide a comparison with the Arctic test sites. We categorize the datasets into different snow bedforms based on ice type and surface roughness. For each measured transect, SD is statistically analyzed to determine distribution patterns across different surface categories. We calculate the length scale using the semi-variogram method across the snow bedforms. Furthermore, the observed SD is fitted with Gaussian, log-normal, gamma, and skew distributions across various snow surface and ice types. Preliminary results indicate significant variability in SD distribution between thinner and thicker ice. The observed variability suggests that parameterizations of SD distribution in models should account for surface roughness and ice type to accurately simulate sea ice dynamics and the overall energy balance in the Arctic. References: [1] Kochanski, K., Robert S. Anderson, and Gregory E. Tucker. "Statistical classification of self-organized snow surfaces." Geophysical Research Letters 45.13 (2018): 6532-6541. [2] Shalina, Elena V., and Stein Sandven. "Snow depth on Arctic sea ice from historical in situ data." The Cryosphere 12.6 (2018): 1867-1886. [3] Mallett, Robbie DC, et al. "Sub-kilometre scale distribution of snow depth on Arctic sea ice from Soviet drifting stations." Journal of Glaciology 68.271 (2022): 1014-1026. [4] Itkin, Polona; Webster, Melinda; Hendricks, Stefan; Oggier, Marc; Jaggi, Matthias; Ricker, Robert; Arndt, Stefanie; Divine, Dmitry V; von Albedyll, Luisa; Raphael, Ian; Rohde, Jan; Liston, Glen E (2021): Magnaprobe snow and melt pond depth measurements from the 2019-2020 MOSAiC expedition [dataset].
PANGAEA, https://doi.org/10.1594/PANGAEA.937781 [5] Hutter, Nils; Hendricks, Stefan; Jutila, Arttu; Birnbaum, Gerit; von Albedyll, Luisa; Ricker, Robert; Haas, Christian (2023): Merged grids of sea-ice or snow freeboard from helicopter-borne laser scanner during the MOSAiC expedition, version 1 [dataset publication series]. PANGAEA, https://doi.org/10.1594/PANGAEA.950896.
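The per-transect analysis described above (fitting candidate distributions to SD and estimating a length scale from the semi-variogram) can be sketched with standard SciPy tools. The transect below is synthetic and all parameter values are illustrative, not campaign data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic snow-depth transect (m); log-normal is one of the candidate
# distributions named in the abstract
sd = rng.lognormal(mean=np.log(0.25), sigma=0.5, size=500)

# Fit two of the candidate distributions (location fixed at zero)
ln_shape, _, ln_scale = stats.lognorm.fit(sd, floc=0)
g_shape, _, g_scale = stats.gamma.fit(sd, floc=0)

# Empirical semi-variogram: sv(h) = 0.5 * mean((z(x+h) - z(x))^2);
# a length scale can be read off where the curve levels out (the sill)
def semivariogram(z, max_lag):
    lags = np.arange(1, max_lag + 1)
    sv = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    return lags, sv

lags, sv = semivariogram(sd, max_lag=50)
```

On real transects, the lag would be converted to metres using the probe spacing before reading off the length scale.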
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Modeling microwave emissions of snow at high frequencies: opportunities and challenges for satellite retrievals including frequencies above 100 GHz

Authors: Janna Rückert, Marcus Huntemann, Gunnar Spreen
Affiliations: Institute of Environmental Physics, University Of Bremen
In the microwave regime, the variability of the emissions of snow (and sea ice) is a crucial challenge for satellite remote sensing of both surface and atmospheric geophysical parameters in polar regions. Spaceborne radiometers measure brightness temperatures with contributions from both the atmosphere (such as atmospheric gases like water vapor and oxygen, cloud liquid water or ice cloud particles) and the surface (i.e., open ocean or snow-covered sea ice). The contribution of each geophysical component depends on the measured frequency. Generally, lower frequencies (<50 GHz) are very sensitive to the surface whereas higher frequencies are dominated by the atmospheric signal. In order to retrieve information from the satellite measurements, assumptions about snow and sea ice contributions to the overall brightness temperatures are necessary. This is true even for higher frequencies because the atmosphere in the polar regions is usually dry and thus more transparent to microwave radiation. New satellite missions like JAXA’s AMSR3 or ESA’s Arctic Weather Satellite measure at these higher frequencies, e.g., around the water vapor absorption line at 183 GHz or at even higher frequencies up to 325 GHz, raising the question of how the snow on the ground affects these satellite observations. Accurately representing the emitted radiation from the surface, e.g., for satellite retrievals or data assimilation purposes, is an ongoing challenge as it depends on a variety of unknown parameters such as the snow microstructure or the brine content of the sea ice. Direct field measurements are sparse in polar regions. Also, whether and how model physics or parameterizations developed for lower frequencies can be extrapolated to the higher frequencies remains to be clarified. However, in recent studies [1,2] high frequencies up to 243 GHz have been modeled successfully using the state-of-the-art microwave radiative transfer model SMRT [3].
Here, our starting point is measurements of surface emissions between 21 and 340 GHz from a ship-based campaign to the central Arctic in August-October 2024, complemented by observations of the snow and ice properties in the radiometers’ footprints. These measurements serve as a first step both for analyzing the observed variability of the emissions and for modeling the surface emissions using microwave radiative transfer models such as SMRT. We aim to develop a better representation of the emissions in a physical forward model which can be used in satellite retrievals, and to identify which microphysical snow parameters are most relevant. References: [1] Sandells, M., Rutter, N., Wivell, K., Essery, R., Fox, S., Harlow, C., Picard, G., Roy, A., Royer, A., and Toose, P.: Simulation of Arctic snow microwave emission in surface-sensitive atmosphere channels, The Cryosphere, 18, 3971–3990, https://doi.org/10.5194/tc-18-3971-2024, 2024. [2] Wivell, K., Fox, S., Sandells, M., Harlow, C., Essery, R., and Rutter, N.: Evaluating Snow Microwave Radiative Transfer (SMRT) model emissivities using observations of Arctic tundra snow, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2023-878, 2023. [3] Picard, G., Sandells, M., and Löwe, H.: SMRT: an active–passive microwave radiative transfer model for snow with multiple microstructure and scattering formulations (v1.0), Geosci. Model Dev., 11, 2763–2788, https://doi.org/10.5194/gmd-11-2763-2018, 2018.
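The partitioning of a measured brightness temperature into surface and atmospheric contributions, which underlies the frequency-dependence argument above, can be written down in the standard zeroth-order (non-scattering) form. Emissivity, temperatures, and opacities below are illustrative placeholders, not campaign values:

```python
import numpy as np

def brightness_temperature(eps, t_surf, tau, t_atm):
    """Zeroth-order, non-scattering decomposition of top-of-atmosphere Tb.

    eps    : surface emissivity (0-1)
    t_surf : physical surface temperature [K]
    tau    : atmospheric opacity along the view path; transmittance = exp(-tau)
    t_atm  : effective atmospheric temperature [K]
    """
    trans = np.exp(-tau)
    tb_up = t_atm * (1.0 - trans)        # upwelling atmospheric emission
    tb_down = t_atm * (1.0 - trans)      # downwelling (same effective T here)
    surface = eps * t_surf               # surface emission
    reflected = (1.0 - eps) * tb_down    # downwelling reflected at the surface
    return tb_up + trans * (surface + reflected)

# A dry polar atmosphere (small tau) leaves the surface term dominant,
# while an opaque channel (large tau) is dominated by atmospheric emission
tb_transparent = brightness_temperature(eps=0.85, t_surf=255.0, tau=0.05, t_atm=250.0)
tb_opaque = brightness_temperature(eps=0.85, t_surf=255.0, tau=3.0, t_atm=250.0)
```

This is only the bookkeeping behind the surface/atmosphere split; the study's forward modeling uses full radiative transfer in SMRT rather than this two-term sketch.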
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Brine Movement and Mineral Dissolution in Saline Snow on Sea Ice

Authors: Amy Macfarlane, Dr Ruzica Dadic, Dr Martin Schneebeli, Dr Melody Sandells, Michael Lombardo, Dr Robbie Mallett, Jack Landy, Dr Polona Itkin, Jullien Meloche
Affiliations: UiT The Arctic University of Norway, Northumbria University, WSL Institute for Snow and Avalanche Research SLF, Environment and Climate Change Canada
Snow is often measured to be saline on first-year and young sea ice, yet there are many outstanding questions about the motion and state of brine in the snowpack. This salty snow has the potential to influence the microwave backscatter, be a sink for CO₂, have applications in snow biogeochemistry, and influence snow bacteria and atmospheric chemistry through sea salt aerosol production. Recent studies on sea ice have shown wicking to be the likely process by which salt enters the base of the snowpack towards the start of the winter, when snow grains are small, with quasi-liquid layers and small capillaries between them. As temperatures drop during the polar winter, this likely results in salt precipitating out of the brine, creating a four-component mixture (air, ice, liquid brine, mineral salts). In this study, we use high-resolution in-situ micro-computed tomography measurements, combined with conductivity and temperature profiles, collected during the MOSAiC expedition winter period to answer critical questions about the brine pathway into the snow and mineral formation within brine pockets. The micro-computed tomography measurements revealed highly reflective inclusions within the snowpack, whose distribution aligns with the conductivity profiles. As a result, we explain the distribution and physical characteristics of these mineral deposits, which we use to develop a physics-based model of snow permittivity for the Snow Microwave Radiative Transfer model (SMRT), an essential tool in various remote sensing applications.
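The four-component mixture (air, ice, liquid brine, mineral salts) enters microwave models through the effective permittivity. As a deliberately crude sketch, the snippet below uses simple volume-weighted mixing with rough placeholder permittivities; the physics-based model developed in the study, and the mixing rules actually used with SMRT, are more sophisticated:

```python
import numpy as np

# Illustrative real permittivities at microwave frequencies
# (rough placeholders, not measured constants)
EPS = {"air": 1.0, "ice": 3.15, "brine": 45.0, "salt": 5.0}

def linear_mixing(fractions):
    """Volume-weighted (linear) permittivity of an air/ice/brine/salt mixture.

    `fractions` maps component name -> volume fraction; fractions must sum
    to 1. Real snow permittivity models use physical mixing rules; this is
    only the simplest possible sketch of how the four components combine.
    """
    if not np.isclose(sum(fractions.values()), 1.0):
        raise ValueError("volume fractions must sum to 1")
    return sum(EPS[name] * frac for name, frac in fractions.items())

# Saline snow: mostly air and ice, a little brine and precipitated salt
eps_mix = linear_mixing({"air": 0.55, "ice": 0.40, "brine": 0.03, "salt": 0.02})
```

Even in this toy form, the small brine fraction moves the effective permittivity noticeably because its permittivity is an order of magnitude above that of ice.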
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Sensitivity of L- and C-band radiometer measurements to the liquid water content of snow with SMRT

Authors: Pierre Zeiger, Ghislain Picard, Baptiste Vandecrux
Affiliations: Université Grenoble Alpes, Geological Survey of Denmark and Greenland (GEUS)
Snowmelt affects the surface mass balance of the ice sheets and has the potential to destabilize several ice shelves in Antarctica. Despite its importance for climate monitoring and the assessment of future sea level rise, the quantification of the liquid water content (LWC) in the snowpack is still very uncertain today. Melt rates have been estimated either at point scale using automatic weather station data, or regionally using regional climate models. There is now a growing interest in satellite-based LWC retrieval based on L-band radiometry that would allow a validation of model outputs. The retrieval algorithms proposed up to now are based on forward modeling of SMOS or SMAP brightness temperatures using radiative transfer theory. The snowpack is generally represented as a 3-layer medium including wet snow, dry snow and semi-infinite ice layers. This oversimplified configuration fails to represent snow heterogeneity both vertically and horizontally and so cannot reliably explain the spatial variations of brightness temperature. This affects LWC retrieval because of the highly non-linear relationship between brightness temperature and LWC. For this reason, we explored the sensitivity of L- and C-band brightness temperatures to the LWC of snow in more complex snowpack configurations. The snowpack in our modeling framework includes a number of diffusive snow layers accounting for scattering, and thin ice layers to reproduce the effect of layering. Brightness temperatures were simulated using the Snow Microwave Radiative Transfer (SMRT) model and further compared to SMOS and AMSR2 observations. The forward modeling allows us to find snowpack configurations that best explain the dynamic range of satellite measurements. Then, a series of sensitivity experiments was run to study the response of brightness temperatures to the content and distribution of liquid water in the snowpack.
We also included experiments with mixed pixels which are only partially wet. This ongoing work will help us understand the influence of surface and subsurface wetness variations on the microwave signal and thus will allow the development of more accurate snow LWC products in the near future.
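The mixed-pixel experiments mentioned above reduce, in the simplest case, to area-weighted mixing of the wet and dry brightness temperatures. The numbers here are illustrative, not SMOS or AMSR2 values:

```python
import numpy as np

def mixed_pixel_tb(tb_wet, tb_dry, wet_fraction):
    """Area-weighted brightness temperature of a partially wet pixel."""
    wet_fraction = np.asarray(wet_fraction)
    return wet_fraction * tb_wet + (1.0 - wet_fraction) * tb_dry

# Wet snow is far more emissive at L-band than dry, scattering snow, so the
# pixel-mean Tb responds linearly (and strongly) to the wet area fraction
tb = mixed_pixel_tb(tb_wet=260.0, tb_dry=220.0,
                    wet_fraction=np.linspace(0.0, 1.0, 5))
```

The non-linearity the abstract emphasizes comes from the Tb-LWC relation within the wet fraction, not from this area-weighting step, which is linear by construction.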
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Passive Microwave Emissivity Modelling of Global Scale Snow-Covered Areas Using Machine Learning

Authors: Iris de Gélis, Carlos Jimenez, Catherine Prigent
Affiliations: Estellus, Observatoire de Paris, CNRS, LERMA, Observatoire de Paris
In order to assimilate satellite passive microwave data in numerical weather prediction schemes, a full understanding of the components of the radiative transfer equation is needed. For surface-sensitive observations, forward modelling of the surface emissivity is necessary. The future Copernicus Imaging Microwave Radiometer mission (CIMR) will observe at frequencies of 1.4, 6.9, 10.6, 18.7, and 36.5 GHz, providing a unique spatial resolution for channels above 1.4 GHz and unprecedented accuracy for all measurements. This mission specifically targets polar regions. To assess its potential and prepare for the assimilation of its observations, a realistic modelling of the surface emissivities over snow is required, for the full frequency range and covering all snow-covered regions. Passive microwave signatures over snow-covered areas show a very large space and time variability, depending on the snow environments, the macro- and microstructural properties of the snow, as well as on the observation frequency and polarization. This makes snow emissivity modelling particularly challenging. Large-scale physically based models have been proposed (such as the Community Microwave Emission Model, CMEM; Drusch et al., 2009), but they have difficulties representing the variability of the snow signatures across frequencies at continental scales (Hirahara et al., 2020). The Snow Microwave Radiative Transfer model (SMRT) (Picard et al., 2018) provides simulations for both passive and active microwaves over snow, using detailed information on the snow layering and micro-properties. It makes it possible to precisely analyse the radiative transfer processes, but is not directly suitable for fast calculation at global scale with only limited snow information. Satellite-derived emissivity calculations (e.g.
monthly-mean atlases such as TELSEM2; Wang et al., 2017) are publicly available, but these climatologies by definition do not include daily variability and cannot be linked to the actual surface conditions, so their application in radiative transfer models is limited to providing an average estimate of the snowpack emission. Therefore, here we propose a machine learning methodology, based on geophysical variables available at global scale, to provide a realistic emissivity parameterization from 1.4 to 89 GHz, on a daily basis, over continental snow-covered areas. First, a satellite-derived surface emissivity database is calculated covering the frequency range from 1.4 to 90 GHz. Observations from SMOS, SMAP, and AMSR2 are collected and collocated with ECMWF meteorological reanalysis data (ERA5) to systematically calculate continental emissivities, by subtracting the atmospheric contribution to the signal and its modulation by the surface temperature. Emissivity climatologies are also computed from this database. A neural network is trained to link the satellite-derived surface emissivity with available geophysical variables. ERA5 provides snow-related geophysical parameters globally every hour. After a careful joint analysis of the microwave emissivity database and the geophysical variables, ten geophysical parameters are selected as inputs to the neural network. They include ERA5 snow variables (density, depth, albedo, …), as well as fixed surface characteristics such as orography information, lake cover, and vegetation characteristics. Information on temperature is also needed, as it plays a major role in the snow metamorphism process. The satellite-derived emissivity climatologies are also tested as an additional input, to improve the emissivity parameterization. The results are encouraging for all frequencies and polarisations.
Snow emissivities are parameterized at continental scale with a coefficient of determination (R²) above 0.8 and a root mean square error (RMSE) below 0.02 for frequencies up to 18.7 GHz and close to 0.03 for the 36.5 and 89 GHz bands. The most accurate results are achieved by combining ERA5 input parameters with satellite-derived emissivity climatologies. A comparison is also made with the SMRT physical model at a location where the macro- and microstructural properties of the snow could be estimated. Reasonable agreement is obtained between our emissivity parameterization and the SMRT simulations, with similar frequency dependence. References: Drusch, M., Holmes, T., de Rosnay, P., & Balsamo, G. (2009). Comparing ERA-40-based L-band brightness temperatures with Skylab observations: A calibration/validation study using the Community Microwave Emission Model. Journal of Hydrometeorology, 10(1), 213-226. Hirahara, Y., de Rosnay, P., & Arduini, G. (2020). Evaluation of a microwave emissivity module for snow covered area with CMEM in the ECMWF Integrated Forecasting System. Remote Sensing, 12, 2946. Picard, G., Sandells, M., & Löwe, H. (2018). SMRT: An active–passive microwave radiative transfer model for snow with multiple microstructure and scattering formulations (v1.0). Geoscientific Model Development, 11, 2763-2788. Wang, D., Prigent, C., Kilic, L., Fox, S., Harlow, C., Jimenez, C., ... & Karbou, F. (2017). Surface emissivity at microwaves to millimeter waves over polar regions: Parameterization and evaluation with aircraft experiments. Journal of Atmospheric and Oceanic Technology, 34(5), 1039-1059.
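As a much simplified stand-in for the ten-input neural network described above, the sketch below trains a one-hidden-layer network (hand-rolled in NumPy) to regress an emissivity-like target from three synthetic predictors. The data, architecture, and hyperparameters are all illustrative, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the geophysical predictors (think snow depth,
# density, temperature) and a target emissivity bounded in (0, 1)
X = rng.normal(size=(1000, 3))
y = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2])))

# One hidden layer of 16 tanh units, linear output
W1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

losses, lr = [], 0.05
for _ in range(500):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    pred = (h @ W2 + b2).ravel()          # emissivity estimate
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g_out = (2.0 / len(y)) * err[:, None]         # dLoss/dpred
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);  b1 -= lr * g_h.sum(axis=0)
```

In the study's setting, the inputs would be the selected ERA5 variables (plus the optional climatology), the target the satellite-derived emissivity per channel, and a proper framework would replace this hand-rolled gradient descent.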
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: The relation between passive microwave snow thickness and sea-ice concentration observations in the Weddell Sea, Antarctica

Authors: Dilbara Abdushukurova, Dr. Stefan Kern, Proshonni Aziz
Affiliations: University of Hamburg, Helmholtz-Zentrum Hereon
The precise determination of sea ice concentration (SIC) plays a crucial role in climate research. Satellite data provide a valuable source of information, but have uncertainties that are influenced by various factors. This study aims to gain a better understanding of the causes of these uncertainties by investigating the interplay of snow depth estimates and sea ice concentration measurement errors. The analysis focuses on the comparison of sea ice concentration data from two satellite sources: the OSI-SAF (Ocean and Sea Ice Satellite Application Facility) and manually classified Landsat 8 scenes. By comparing the satellite data in the Antarctic region, especially in the Weddell Sea in the years 2019 and 2020, differences in the data can be interpreted as sea ice concentration measurement errors. Links between these error estimates and snow depth estimates from a satellite passive microwave product are investigated as a potentially influential factor in these measurement errors. To answer this question, the differences between the sea ice concentration data from OSI-SAF and Landsat 8 were first calculated. Following this, the snow depth data were resampled using the nearest-neighbor method and interpolated to the grids of the sea ice concentration data. Statistical analyses are carried out to evaluate the correlations from more than 80 Landsat 8 scenes within the warm season (October to March). The results indicate that Landsat 8 largely records higher sea ice concentration values than OSI-SAF, which can be attributed to algorithmic differences and variations in data acquisition. Preliminary results support a very weak correlation between the snow depth estimates and the differences in the sea ice concentration measurements, which might not be significant. This would suggest a possible contribution of snow depth to the sea ice concentration values recorded by OSI-SAF.
However, the strength of the correlation might be insufficient to establish a clear relationship, and clear limitations of all involved datasets must be taken into account. It is particularly difficult to reliably retrieve snow depth on sea ice, and ambiguities between surface roughness and snow depth in passive microwave data have to be assumed. That being said, the exceptional availability of passive microwave data highlights its potential impact. The knowledge gained here provides a basis for further investigations. If substantiated, a link between the results of snow depth algorithms and errors in sea ice concentration would suggest a potential to improve SIC algorithms. This is particularly relevant because both algorithms can be based on the same measurement sensors. A better understanding of the interactions in passive microwave algorithms could support algorithm development and validation efforts and hence could lead to a better understanding of the polar system as a whole.
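The nearest-neighbor resampling and correlation analysis described above can be sketched as follows. Grids and values here are synthetic, not the OSI-SAF, Landsat 8, or snow depth products:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy import stats

rng = np.random.default_rng(1)

# SIC difference (Landsat 8 minus OSI-SAF, %) on a coarse grid, and snow
# depth (m) on its own, finer grid; both synthetic and independent here
sic_xy = rng.uniform(0, 100, size=(200, 2))
sic_diff = rng.normal(5.0, 10.0, size=200)     # Landsat 8 tends to be higher
sd_xy = rng.uniform(0, 100, size=(2000, 2))
sd = rng.normal(0.2, 0.05, size=2000)

# Nearest-neighbor resampling of snow depth onto the SIC grid
tree = cKDTree(sd_xy)
_, idx = tree.query(sic_xy, k=1)
sd_on_sic_grid = sd[idx]

# Correlation between snow depth and the SIC difference
r, p = stats.pearsonr(sd_on_sic_grid, sic_diff)
```

With independent synthetic fields, r stays near zero; the study's question is whether the real collocated fields yield a correlation that survives the significance test.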
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Bayesian inversion for Arctic Ocean sea ice and snow retrievals from satellite altimetry

Authors: Michel Tsamados, Mr Elie René-Bazin, Ms S. S. B. Aliff Raziuddin, Mr Joel Perez Ferrer, Mr Tudor Suciu, Dr Carmen Nab, Jack Landy, Dr Matt Fox, Professor Chamkaur Ghag, Dr Thomas Bodin
Affiliations: Earth Sciences, UCL, Ecole Normale Supérieure de Lyon - Laboratoire de Géologie de Lyon Terre Planètes Environnement, Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, Department of Physics, UCL
Inverse methods have been widely used in the Earth sciences, particularly in seismology. Here, we introduce a new application of inversion theory to retrieve sea ice thickness and snow depth over the Arctic Ocean using freeboards from radar and laser altimeters. We do this using the TransTessellate2D software, a Bayesian trans-dimensional approach that allows us to invert for an unknown number of model parameters. The software also allows choosing between different parameterizations of the 2D model space (Voronoi or Delaunay). This approach allows us to retrieve ice and snow thickness in one step without any assumptions about the snow depth. The inversion results are statistically encouraging compared to snow and ice validation products. Both inverted snow and ice thickness fields compare well against independent validation data and against other available thickness products. For April 2019, our sea ice thickness shows a Pearson correlation coefficient against Operation IceBridge ice thickness data similar to that of the AWI CryoSat-2 ice thickness product. Our snow depth is also similar in terms of spatial patterns to the AMSR2 product for the October-April winter months from 2018 to 2021. This method is flexible as to the number of physical parameters to invert for. We therefore extend it to also include inversion for the penetration factors of one or two satellites (as well as inverting for ice thickness and snow thickness). This paves the way for further investigations into the interaction of the radar signal with the snow and the factors that control the location of the main scattering horizons. We also investigate multi-frequency inversion using data from Ka- and Ku-band radar altimeters, paving the way for the future dual-frequency altimetry mission CRISTAL. The new inversion method is probabilistic in nature, including spatially, and can offer novel insight into covariances between the fields of interest as well as their uncertainties.
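The Bayesian machinery inverts forward relations of roughly the following form. Assuming full radar penetration to the snow/ice interface (penetration factor 1) and hydrostatic equilibrium, snow depth and ice thickness follow deterministically from the two freeboards; the densities below are nominal textbook values, not those used in the study:

```python
# Nominal densities [kg m^-3]; illustrative, not the study's choices
RHO_WATER, RHO_ICE, RHO_SNOW = 1024.0, 917.0, 300.0

def snow_and_ice_from_freeboards(fb_laser, fb_radar):
    """Deterministic forward relations underlying a dual-altimeter inversion.

    With the radar ranging to the snow/ice interface, snow depth is the
    laser-radar freeboard difference, and ice thickness follows from
    hydrostatic equilibrium of the floe.
    """
    h_snow = fb_laser - fb_radar
    h_ice = (RHO_WATER * fb_radar + RHO_SNOW * h_snow) / (RHO_WATER - RHO_ICE)
    return h_snow, h_ice

h_snow, h_ice = snow_and_ice_from_freeboards(fb_laser=0.45, fb_radar=0.20)
```

Treating the penetration factor as an extra unknown, as the abstract describes, breaks this one-to-one mapping, which is why a probabilistic inversion with spatial regularization is needed rather than a per-point solve.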
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Examining the spatiotemporal dependency of coupled snow and ice thicknesses in the Weddell Sea using distributions from observational data

Authors: Anton Birkedal, Prof. Dr. Stefanie Arndt
Affiliations: Alfred Wegener Institute, University of Hamburg
Snow on sea ice has a significant influence on heat fluxes, radiative transfer, and the overall mass budget of the sea ice. This is especially the case in the Southern Ocean, where the extensive precipitation results in regular flooding and the formation of snow-ice layers. The high spatial and structural variability of the snowpack contributes to substantial uncertainties in models and remote sensing observations. This highlights the need for detailed observational data to reduce these uncertainties and improve our understanding of snow-sea ice interactions. Through five different expeditions with the German research icebreaker RV Polarstern in the Weddell Sea from 2013 to 2022, more than 50 ice stations have been surveyed at floe scale. Among many other observations, an extensive dataset of co-located snow and ice thicknesses from different periods and regions has been retrieved using a ground-based electromagnetic induction sounding system (GEM-2) and a semi-automatic snow probe (MagnaProbe). Assuming that the surveyed areas are representative of their respective sea ice floes, they provide highly valuable insights into the relation between the snow distribution and the thickness of the underlying ice. Most of the ice stations are accessed via helicopter with no reference GPS logging. The measurements from the two instruments therefore lie in a non-inertial reference frame, drifting with the sea ice floe. Combined with the GEM-2 operating ahead of the MagnaProbe, the drift causes a spatial displacement of their respective measurements. A semi-continuous drift correction is applied by manually matching up all the transects based on visual inspection, thereby acquiring more than 27,000 data pairs. The length and sampling frequency of the transects vary between the ice stations, covering distances in a range from 0.25 to 5 km. On average, each transect covers 1.4 km with a spacing of 2.7 m between data pairs.
The comprehensive dataset makes it possible to analyze the coupled distribution, relating each snow measurement with the ice thickness below. This is first done by plotting the snow thickness as a function of ice thickness, and secondly by representing each ice station with a 2D histogram of binned snow and ice thicknesses to preserve the full distribution. Generally, the spread of the snow thickness and the frequency of outliers seem to increase with the age of the ice. Older ice in the western Weddell Sea has experienced more precipitation events, and it has more ridges, in which the snow accumulates irregularly. This also shows a slight positive correlation with the ice thickness. Somewhat counterintuitively, a strong negative correlation between the snow and ice thicknesses is observed for younger ice in the eastern Weddell Sea. This might be linked, for instance, to snow-ice formation at the snow/ice interface, to which young ice is more susceptible due to its low buoyancy and its close proximity to open water. Further analysis is part of ongoing work and will include linear regressions of the correlations to derive a parameterization of the snow cover on young Antarctic sea ice. Furthermore, since the 2D histograms seem to be generally unimodal, they will be used to fit coupled probability density functions to get a detailed parameterization of typical thickness distributions. These parameterizations will greatly benefit both active and passive satellite retrievals as well as sea ice models.
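The coupled-distribution analysis above (scatter of snow against ice thickness, then binned 2D histograms) can be sketched with NumPy. The co-located pairs below are synthetic, with a weak positive coupling built in for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic co-located pairs: ice thickness (m) and a weakly coupled
# snow thickness (m), clipped at zero
h_ice = rng.gamma(shape=4.0, scale=0.4, size=5000)
h_snow = np.clip(0.1 + 0.05 * h_ice + rng.normal(0.0, 0.05, 5000), 0.0, None)

# 2D histogram of binned snow and ice thicknesses preserves the full
# coupled distribution for a station
snow_bins = np.linspace(0.0, 0.6, 25)
ice_bins = np.linspace(0.0, 4.0, 41)
hist, _, _ = np.histogram2d(h_snow, h_ice, bins=[snow_bins, ice_bins])

# Sign and strength of the coupling, as in the regional comparison
r = np.corrcoef(h_snow, h_ice)[0, 1]
```

A negative coefficient in this setup would mimic the eastern Weddell Sea case; fitting a parametric joint density to `hist` is the natural next step the abstract describes.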
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Advances in hyperspectral remote sensing of snow from space

Authors: Alexander Kokhanovsky, Daniel Sheffler, Karl Segl, Biagio Di Mauro, Niklas Bohn, Dr. Sabine Chabrillat
Affiliations: German Research Centre for Geosciences, Institute of Polar Sciences, JPL, NASA, Hannover University
The polar territories are not easily accessed using ground, shipborne or airborne means of transportation. Therefore, spaceborne remote sensing plays an important role in providing temporal and spatial information on various surface and atmospheric parameters across both the Arctic and Antarctica. Both passive (optical, thermal infrared and microwave) and active (lidar, radar) instrumentation is employed. Various atmospheric and surface parameters are retrieved on a routine basis, providing information needed for studies in broad scientific areas ranging from climate science and biology to atmospheric chemistry, physics, and meteorology. In this work we discuss the advances in snow remote sensing achieved by the emerging technology of satellite hyperspectral remote sensing, currently performed by instruments such as PRISMA (Italian Space Agency, launched in 2019), EnMAP (German Space Agency, launched in 2022) and EMIT (NASA, launched in 2022 and operating from outside the International Space Station). Several similar instruments (e.g., CHIME (ESA) and SBG (NASA)) are to be launched in the near future. All these instruments have similar features, such as a spectral range of about 400-2500 nm and high spectral (200-250 spectral channels) and spatial (30 m) resolution. Therefore, the development of generic remote sensing software applicable to all these instruments is under way. The high spatial resolution of these instruments makes it possible to study snow characteristics in greater spatial detail compared to sensors with moderate spatial resolution such as, e.g., OLCI/S-3 (ESA) and MODIS (NASA). This makes it possible to develop more precise snow extent retrieval algorithms and to retrieve snow properties in areas with complex terrain, where moderate-resolution sensors need complex algorithms to derive the percentage of snow-covered ground.
Remote sensing of mountainous regions also benefits from hyperspectral imagery because of the possibility of selecting 100% snow-covered ground scenes. The frequency of flat terrain pixels (say, at the top of mountain ranges) also increases with increasing spatial resolution, which makes it possible to simplify the snow retrieval algorithms. In addition, the high spectral resolution makes it possible to derive snow liquid water content, impurity load, the presence of various algae species in snow, and snow grain size (including vertical snow inhomogeneity) with much better confidence compared to sensors with moderate spatial resolution. The atmospheric correction, ground scene classification and cloud screening algorithms can also be greatly improved using hyperspectral measurements. In this work we present the Generic Remote sensing Algorithm for the retrieval of Snow properties based on hyperspectral measurements (GRAS). The algorithm is based on an approximate analytical solution of the inverse radiative transfer problem for the atmosphere-underlying snow system. The snow properties derived using EnMAP, EMIT and PRISMA data are discussed.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Evaluation of VIIRS Snow Cover Products over Mountainous Areas in Europe Using Sentinel-2

Authors: Nicola Imperatore, Simon Gascoin, Matthieu Lafaysse, Marie Dumont, Jean-Baptiste Hernandez, Adrien Mauss
Affiliations: Univ. Grenoble Alpes, Université de Toulouse, Météo-France, CNRS, CNRM, Centre d’Etudes de la Neige, Centre d’Études Spatiales de la Biosphère (CESBIO), Université de Toulouse, Météo-France, Direction des Opérations pour la Prévision, Centre de Météorologie Spatiale
Remote sensing observations of the snow-covered area from moderate resolution (100-1000 m) optical sensors are relevant for many applications in hydrology, ecology and climate research. In addition, some operational models for avalanche and/or flood forecasting could benefit from the assimilation of such observations. The Visible Infrared Imaging Radiometer Suite (VIIRS), mounted on three platforms and supplying up to seven daily observations at mid-latitudes, represents the state of the art of moderate resolution optical observations. However, despite a history of more than a decade, VIIRS snow cover products are rarely used by the snow science community and systematic evaluations are lacking. This may be due to the fact that MODIS sensors are still operating well beyond their design lifetime of six years. However, MODIS data quality is decreasing fast due to orbital drift and sensor degradation. In addition, the official VIIRS snow cover products distributed by NASA (the VNP10 series) do not supply snow cover fraction, but only the Normalized Difference Snow Index (NDSI). This means that users willing to extract snow cover fraction from VNP10 must either (i) use a formula that was fitted to MODIS data twenty years ago, as in the older MODIS and VIIRS snow cover collections, or (ii) fit their own regressions on NDSI, with the associated difficulties in collecting reference data and estimating uncertainty. In this context, the recent Sentinel-2 snow collection distributed by Copernicus Land and Theia, with 20 m resolution and a 5-day revisit, offers the opportunity to validate moderate resolution observations over Europe. In this work, we aim to help bridge the gap between the availability of VIIRS observations and science or operational applications by evaluating VIIRS snow cover fraction using Sentinel-2 snow cover products as reference.
We also assess another VIIRS snow product processed by Météo-France in near real time over metropolitan France for snow and weather forecasting. This analysis covers a 7-year period in the French massifs. Additionally, we investigate the potential of using Sentinel-2 snow cover products to perform fast and accurate retrieval of spatially and temporally consistent snow cover fractions from VIIRS observations. This quantitative estimation of uncertainties will open the way towards an appropriate assimilation of VIIRS snow cover fractions in snow simulation systems in order to improve the real-time description of snow cover for all snow-sensitive applications.
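The NDSI-to-snow-cover-fraction step discussed above (option (i), the MODIS-era regression) can be sketched as follows. Band values are illustrative, and the Salomonson & Appel coefficients shown are the commonly cited ones rather than anything specific to this work:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green (~0.56 um) and
    SWIR (~1.6 um) reflectances."""
    return (green - swir) / (green + swir)

def fsc_from_ndsi(n):
    """MODIS-era linear NDSI-to-fraction regression (Salomonson & Appel):
    FSC = -0.01 + 1.45 * NDSI, clipped to [0, 1]."""
    return np.clip(-0.01 + 1.45 * n, 0.0, 1.0)

# Illustrative reflectances: bright snow, patchy snow, snow-free pixel
green = np.array([0.8, 0.5, 0.2])
swir = np.array([0.1, 0.3, 0.2])
fsc = fsc_from_ndsi(ndsi(green, swir))
```

The study's point is precisely that applying such a fixed regression to a different sensor is questionable, which is what the Sentinel-2 reference data make it possible to quantify.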

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Assessing the impact of light-absorbing particles on snow from imaging spectroscopy data

Authors: Claudia Ravasio, Olga Gatti, Roberto Garzonio, Biagio Di Mauro, Giacomo Traversa, Giovanni Baccolo, Niklas Bohn, Alexander Kokhanovsky, Riccardo Barella, Carlo Marin, Claudia Giardino, Monica Pepe, Katayoun Fakherifard, Erica Matta, Antonio Montuori, Roberto Colombo
Affiliations: University of Milan - Bicocca, National Research Council, ISP, University of Roma Tre, Jet Propulsion Laboratory, California Institute of Technology, GFZ German Research Centre for Geosciences, EURAC Research - Institute for Applied Remote Sensing, Institute for Electromagnetic Sensing of the Environment, National Research Council, Research Institute for Geo-Hydrological Protection, National Research Council of Italy, Agenzia Spaziale Italiana (ASI)
The cryosphere is a key indicator of climate variability and an essential component of the Earth's climate system. Its study has been greatly advanced by the application of diverse remote sensing techniques and instruments that provide critical insights into snow and ice dynamics. Hyperspectral remote sensing data enable the precise retrieval of snow and ice properties, such as albedo, snow and ice grain size, liquid water content, and the concentration of Light-Absorbing Particles (LAPs). LAPs, including organic impurities (e.g., cryospheric algae) and inorganic particles (e.g., mineral dust, black carbon), are recognized as significant drivers of cryospheric change. These particles absorb solar radiation, darkening snow and ice surfaces and enhancing the snow-albedo feedback mechanism. This radiative forcing impacts the timing and volume of meltwater, with significant implications for ecosystems and human communities. This study focuses on the retrieval and validation of snow parameters using hyperspectral data from the PRISMA satellite, radiative transfer modeling, and field campaigns conducted in the European Alps (Plateau Rosa, Aosta Valley) following a record-breaking Saharan dust storm. Field reflectance measurements were obtained using a Spectral Evolution spectrometer, which operates across the 300–2500 nm wavelength range. At each site, snow samples were collected to measure density, liquid water content (LWC) and grain size using a Snow Sensor. Samples were stored in a cold room at -20°C, enabling subsequent analysis of mineral dust concentration and particle size distribution using a Coulter Counter. LWC mapping was performed using the Snow Surface Water Index (SSWI), which analyzes the spectral shift of the 1030 nm absorption feature that distinguishes wet snow from dry snow (Ravasio et al. 2024). A supervised machine learning algorithm was developed and tested to estimate dust concentration and snow grain size in PRISMA imagery.
The model was trained using simulated reflectance data from the BioSNICAR radiative transfer model, incorporating parameters such as slope, aspect, grain size, and dust concentration. Topographic corrections were applied following the methodology of Picard et al. (2020, TC). Additionally, this study reviewed existing spectral indices for mineral dust and grain size retrieval, providing an evaluation of their effectiveness and limitations. The results were validated by spatial comparisons with ground measurements and intercomparisons with alternative retrieval methods based on Optimal Estimation and ART theory, yielding promising preliminary results. Finally, we quantified the impact of LAPs on snow melting by calculating the surface radiative forcing, underscoring the significant role of these particles in accelerating seasonal snowmelt. This research highlights the critical role of integrating remote sensing, field validation, and modeling to overcome challenges in snow parameter retrieval. It emphasizes the need to refine retrieval algorithms, expand field campaigns for validation, and utilize hyperspectral datasets to improve global snow monitoring. Future efforts will aim to use these tools to support climate research and promote sustainable water resource management. Research work has been carried out using original PRISMA Products - © Italian Space Agency (ASI); the Products have been delivered under an ASI License to Use. This work is carried out within Contract “SCIA” n. 2022-5-E.0 (CUP F53C22000400005), funded by the Italian Space Agency (ASI) in the “PRISMA SCIENZA” program.
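The actual SSWI formulation is given in Ravasio et al. (2024); as a simplified, hypothetical illustration of the underlying idea, one can locate the position of the ~1030 nm ice absorption feature in a reflectance spectrum, since liquid water shifts this feature relative to dry snow. The window bounds below are illustrative assumptions.

```python
import numpy as np

def absorption_feature_position(wavelengths_nm, reflectance,
                                window=(980.0, 1100.0)):
    """Return the wavelength of minimum reflectance inside `window`.

    Simplified sketch of the idea behind the Snow Surface Water
    Index (SSWI): liquid water shifts the ~1030 nm ice absorption
    feature, so the minimum falls at a different wavelength for wet
    snow than for dry snow. Not the published SSWI formula.
    """
    wl = np.asarray(wavelengths_nm, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    mask = (wl >= window[0]) & (wl <= window[1])
    # Position of the deepest point of the absorption feature
    return float(wl[mask][np.argmin(r[mask])])
```

On a synthetic spectrum with a Gaussian dip centered at 1030 nm, the helper recovers that position.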

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Change Vector-Based Convolutional Neural Network for Monitoring Glacier Surface Dynamics

Authors: Shruti Pancholi, Lukáš Brodský
Affiliations: Charles University
Glaciers are significant indicators of climate change. The established importance of glacier study has led to extensive research on the change dynamics of glacier ecosystems. Numerous studies have exploited the potential of machine learning, and recently convolutional neural networks, for remote sensing image classification and segmentation. Although pipelines have been established for monitoring glacier boundaries and classifying the status of glacier landscapes, there is rather little research on change detection methods utilizing a deep learning framework. In this study, we develop a deep-learning, change-vector-based detection technique for glacier surface change detection on Disko Island in Greenland. The input to the model is a pair of Landsat multispectral images from 1985 and 2023 along with topographical features. The change detection model is based on a convolutional neural network of the U-Net type, designed to directly segment the changes. Through this approach, we aim to eliminate the need for post-classification comparison or post-feature change detection, and the results come directly in the form of change maps. It bypasses the need to create explicit change masks or compute difference maps, streamlining the workflow. The model learns to detect changes directly from the data, capturing spatial and temporal relations simultaneously. The applied model can detect different glaciological processes occurring in the Disko Island region over two decades: we detected both retreat and advance (a surge event). By developing this method, we aim to eliminate the errors associated with each step of existing multi-stage change detection techniques, although it requires preparing training data directly from a bi-temporal image set. The area of Disko Island is about 8 578 km², and it well represents the relevant glacier types and the processes occurring on them.
The preliminary results from this study indicate a pixel accuracy of about 97.8%, a mean Intersection over Union (mIoU) of about 93.6%, and a cross-entropy loss of 0.06 for training the segmentation model on an initial pilot set of 196 augmented image tiles (128×128 pixels) over 50 epochs. The model prediction was validated against 32 image tiles, with a validation accuracy of 98.13%. This experiment shows a promising direction for glacier change monitoring, which shall be further developed into a continuous change detection technique.
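The reported pixel accuracy and IoU follow their standard definitions; a minimal sketch (not the authors' evaluation code) for a binary change mask:

```python
import numpy as np

def binary_change_metrics(pred, label):
    """Pixel accuracy and IoU for a binary change mask.

    Illustrative only: these are the textbook definitions of the
    metrics the abstract reports for the U-Net change segmentation.
    """
    pred = np.asarray(pred, dtype=bool)
    label = np.asarray(label, dtype=bool)
    accuracy = float(np.mean(pred == label))   # fraction of pixels agreeing
    inter = np.logical_and(pred, label).sum()  # |pred AND label|
    union = np.logical_or(pred, label).sum()   # |pred OR label|
    iou = float(inter / union) if union else 1.0
    return accuracy, iou
```

For multi-class outputs, the mIoU quoted above would be the mean of per-class IoU values computed this way.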

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Snow Cover Variability and Trends Over Karakoram, Western Himalaya and Kunlun Mountains: Insights From MODIS (2001–2024) and Reanalysis Data

Authors: Cecilia Delia Almagioni, Veronica Manara, Guglielmina Adele Diolaiuti, Alessia Spezza, Maurizio Maugeri
Affiliations: Environmental Science and Policy Department, University of Milan, Department of Physics “Aldo Pontremoli”, University of Milan, Environmental Sciences, Informatics and Statistics Department, Ca’ Foscari University
Monitoring snow cover variability and its trends is critical for understanding its role in sustaining rivers, as well as its response to climate change and its broader impact on the cryosphere. In this study, we utilized gap-filled MODIS snow cover data spanning the 2001–2024 period to investigate the spatial distribution and temporal evolution of three snow cover metrics: the length, start, and end of the snow cover season. The gap-filling procedure has also been evaluated. Our analysis focused on fourteen regions encompassing the Karakoram, Western Himalayas, Kunlun Mountains, and part of the Tibetan Plateau. The results revealed a highly complex pattern of variability in the snow cover metrics, with elevation emerging as the primary factor influencing their spatial and temporal distribution. Nevertheless, a single explanatory factor for the observed variability remains elusive. The average length of the snow season varies considerably across the study area, ranging from approximately 14 days in arid desert regions to about 185 days in the Karakoram. Despite high interannual variability, no significant trend was detected for the snow cover metrics across the entire study area; however, region-specific trends were identified. To gain deeper insights into the drivers of snow cover changes, we also examined meteorological variables and snow cover data derived from the ERA5 and ERA5-Land reanalyses. Our findings indicate a good correlation between MODIS-derived snow metrics and reanalysis data over the 24-year period. Notably, the Taklamakan Desert and Kunlun Mountains exhibited a significant decrease in snow cover extent, likely driven by the rising temperatures and declining precipitation observed in these regions. Conversely, the Karakoram and Western Himalayas showed a positive trend in precipitation, which could at least partially explain the absence of trends in the snow cover metrics and in snow cover extent.
Furthermore, we extended our analysis to evaluate long-term snow cover trends using reanalysis data spanning 74 years together with additional satellite records such as AVHRR. In contrast with the global trend of declining snow cover, our findings reveal no significant decrease in snow cover extent over the Karakoram and Himalayas, underscoring the unique climatic and cryospheric dynamics of this region.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Investigating the Impact of Viewing Geometry on Passive Microwave Snow Mapping

Authors: Lina Zschenderlein, Richard Kelly, Dr Kari Luojus
Affiliations: Finnish Meteorological Institute, University of Waterloo
The coarse spatial resolution of spaceborne passive microwave observations induces large uncertainties in the retrieval of geophysical land parameters, including terrestrial snow cover. Physical and microstructural snow properties, as well as land cover types, soil composition, open or frozen water fraction, and topography, vary at sub-footprint scales. This very high natural variability of snow and terrain conditions, both spatially and seasonally, can hardly be captured by current passive microwave observations. Moreover, the precise impact of the sensor viewing geometry on the accuracy of snow retrieval methodologies remains unclear. We therefore investigate whether the sensor viewing geometry can further help to counteract the general underestimation of snow cover extent, focusing on WindSat brightness temperature observations. WindSat was a spaceborne microwave radiometer with a conical scanning geometry that allowed an almost instantaneous view of the same ground scene from two different look angles. These fore- and aft-looking L1C WindSat brightness temperatures at 19 and 37 GHz are used to map snow cover extent by means of the EUMETSAT H SAF snow detection algorithm for the winter season of 2016–2017. Independent reference snow cover maps are obtained from the Interactive Multisensor Snow and Ice Mapping System (IMS), amongst others. The objectives are to examine the effect of the sensor’s look direction on snow detectability, and to quantify its impact on (dry) snow detection performance across Eurasia. Comparing snow cover maps derived from fore- versus aft-looking brightness temperatures highlights both spatial and temporal differences in detected extent. The analysis focuses on early-morning data from descending orbits, in order to minimize wet snow conditions and thereby maximize dual-look effects stemming from the ground scene composition.
Yet, more snow is detected using south-looking data regardless of overpass time, likely because such views capture predominantly north-facing landscape features with less sun exposure and hence less snowmelt. This can be observed to varying degrees for different levels of topographic relief. Especially in mountain regions, dual-look brightness temperatures can differ by up to several kelvin despite their near-instantaneous acquisition. Similarly noticeable brightness temperature differences expose mixed-pixel effects along shorelines, highlighting the need for water spill-over corrections for land applications. While the spatial variability of snow cover properties may not be fully resolved, combining fore- and aft-looking data noticeably improves snow cover extent estimates throughout the whole season, and especially during snow onset. Dual-view brightness temperatures, notably at the high spatial resolution anticipated from the Copernicus Imaging Microwave Radiometer (CIMR) mission, have the potential to further advance our understanding of terrestrial snow property retrieval.
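The operational EUMETSAT H SAF algorithm used in the study involves several checks; the core physical principle behind 19/37 GHz snow detection, however, is the classic spectral-gradient test (dry snow scatters the 37 GHz signal more strongly than the 19 GHz signal). A generic sketch of that test, with an illustrative 3 K threshold that is an assumption and not the H SAF value:

```python
import numpy as np

def dry_snow_flag(tb19v, tb37v, threshold_k=3.0):
    """Flag dry snow where the 19V-37V brightness temperature
    gradient exceeds a threshold.

    Generic spectral-gradient test only: volume scattering by dry
    snow depresses the 37 GHz brightness temperature relative to
    19 GHz. The threshold here is an illustrative assumption; the
    operational H SAF algorithm includes further checks.
    """
    tb19v = np.asarray(tb19v, dtype=float)
    tb37v = np.asarray(tb37v, dtype=float)
    return (tb19v - tb37v) > threshold_k
```

Applied per pixel to the fore- and aft-looking brightness temperatures separately, such a test yields the two snow maps whose differences the study compares.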

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Enhancing Flood Warning Service at Norwegian Water Resources and Energy Directorate (NVE) with Sentinel-3 Fractional Snow Cover Products

Authors: Kjetil Melvold, Mr Sjur Kolberg, Mr Tuomo Saloranta, Mr Stefan Blumentrath, Mr Aron Widforss, Mrs Solveig Havstad Winsvold
Affiliations: Norwegian Water Resources and Energy Directorate (NVE)
Remote sensing efficiently observes snow cover across vast and remote regions, playing an increasingly vital role in the Varsom flood warning service at the Norwegian Water Resources and Energy Directorate (NVE). Through the European Union’s Earth Observation Program, Copernicus, NVE accesses free and openly available Sentinel-3 satellite data to derive snow cover monitoring products for mainland Norway. The project NVE Copernicus Services has implemented operational fractional snow cover (FSC) algorithms at NVE. The latest version of the algorithm, applied to Sentinel-3 SLSTR satellite data, is based on a deep learning approach. The FSC product provides an estimate of the percentage of snow cover in each grid cell for mainland Norway. In parallel with the development of the FSC product, a new cloud mask based on machine learning has been developed. The new algorithm has reduced the misclassification of clouds over snow-covered terrain, which often resulted in cloud-covered areas being classified as snow. Snow is an essential component of the hydrological cycle in Norway (as in many other countries). In Norwegian mountain areas, as much as 50–60% of the annual precipitation falls as snow, while in Norway as a whole it is about 30%. It is therefore critical to have knowledge of snow amounts and their spatial distribution, particularly in relation to spring melt floods, water resources management and hydropower production planning. The spatial variability of both snow accumulation and melting is shaped by the interaction between precipitation, topography, wind transport, vegetation, and radiation. To predict this variability and the amount of snow, a multitude of physical and empirically based models are used. For nationwide simulations, the small-scale variability of snow is often described through probability distribution functions (PDFs).
In these catchment hydrological models, an estimate of the frequency distribution of snow amounts over an area is more important than their exact location. For the evaluation of such snow models, fractional snow cover from, e.g., satellite data, airborne surveys or terrestrial cameras can be used. In this study, we demonstrate how satellite-derived fractional snow cover (FSC) was used to validate our nationwide flood forecasting model and snow model. Over the past two seasons, NVE has utilized the fractional snow cover product to validate simulated snow cover in one of its flood models, the HBV model. Our HBV model runs at 1 km resolution, and the satellite-derived FSC product (originally at 500 m resolution) has been resampled to match the model resolution. This presentation will demonstrate how NVE's flood forecasters use the fractional snow cover product when issuing flood warnings during the spring flood season, typically from April to June. It will include how simulated fractional snow cover time series are compared to time series of aggregated satellite-derived snow cover across specific elevation bands within each catchment area (typically ten elevation bands). Furthermore, an approach to comparing simulated and observed snow cover across key catchment areas in Norway will be discussed, along with a visual interpretation of the products. NVE plans to use the fractional snow cover product to quantitatively calibrate its flood models and its long-term modeled snow cover maps from the seNorge grid data, with the future goal of assimilating the product into these models. However, this approach presents certain challenges, which will be discussed. Traditionally, the snow reservoir in our snow models and hydrological models is reset to zero on 1 September to prevent accumulation from building up over time. We have tested an alternative approach to updating the models by incorporating snow cover data from satellite products during the autumn, prior to the onset of a new cold season.
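The elevation-band comparison described above can be sketched as a simple aggregation step: average the satellite FSC pixels (already resampled to the model grid) within each elevation band of a catchment before comparing them to the simulated snow cover. This is an illustrative sketch, not NVE's operational code; the band edges are an input.

```python
import numpy as np

def fsc_by_elevation_band(fsc, elevation, band_edges):
    """Mean fractional snow cover per elevation band.

    `fsc` and `elevation` are co-registered arrays over one catchment;
    `band_edges` are the band boundaries in metres (the abstract notes
    that typically ten bands per catchment are used).
    """
    fsc = np.asarray(fsc, dtype=float).ravel()
    elev = np.asarray(elevation, dtype=float).ravel()
    means = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        in_band = (elev >= lo) & (elev < hi)
        # NaN if the band is empty for this catchment
        means.append(float(np.nanmean(fsc[in_band])) if in_band.any() else np.nan)
    return means
```

Repeating this per acquisition date yields the per-band FSC time series that are compared against the simulated series.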

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: C.06.09 - POSTER - CAL/VAL towards future VNIR/SWIR imaging spectroscopy

Current scientific and commercial VNIR/SWIR imaging spectroscopy missions (HISUI, PRISMA, DESIS, EMIT, EnMAP, Tanager-1) are revolutionizing Earth observation and paving the way for future hyperspectral global mapping missions like CHIME and SBG. The key to the transferability and generalizability of information across missions is a comparable and transparent calibration and validation scheme for both the currently operating and future missions. Systematic sources of error must be identified and excluded based on well-balanced and well-defined anchor points, including the propagation and budgeting of uncertainties. Hence, consistent use of protocols between missions and an expansion of synergies are crucial.
The session will provide an overview of different protocols, approaches, efforts, and findings, and will identify and explain bottlenecks, limitations, and successful and promising approaches and synergies. A complementary and holistic strategy, directly applicable to the increasing demands of future VNIR/SWIR missions, will be derived and developed from the individual lessons learned.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Hyperspectral vicarious calibration with the Eradiate radiative transfer model

Authors: Vincent Leroy, Dr Marta Luffarelli, Nicolas Misk, Dr Yves Govaerts
Affiliations: Rayference
The current high interest in imaging spectroscopy has led to the launch of many hyperspectral satellite missions, most of which lack post-launch calibration systems and are therefore calibrated through vicarious calibration. Hyperspectral instruments pose new challenges, as they cover parts of the spectral domain, usually avoided by multispectral instruments, where atmospheric absorption of radiation by molecular species plays a significant role. Molecular absorption, strongly influenced by atmospheric composition and by pressure and temperature fields, is an important contributor to the overall uncertainty budget of the radiative transfer simulations used for vicarious calibration. Moreover, hyperspectral missions with excellent radiometric accuracy (1%) will be launched soon, which further raises the requirements on radiative transfer simulation accuracy. In this context, improving vicarious calibration workflows with better radiative transfer models and more detailed input data is essential. Rayference maintains a vicarious calibration reference based on bright desert pseudo-invariant calibration sites (PICS) that has recently been extended to support the calibration of hyperspectral instruments. This reference consists of carefully curated input data and a radiative transfer simulation pipeline built on the Eradiate radiative transfer model. It can be used to simulate reference calibration time series for multi- and hyperspectral instruments, but also to explore the effects of various scene parameters on simulation results. This pipeline has been used to assess the impact of classic assumptions made when building atmospheric profiles (using a standard profile with limited rescaling of species concentrations) compared to leveraging more detailed information, e.g. sourcing species concentrations and temperature from CAMS or ECMWF reanalyses. From this, we derived modelling and data collection guidelines depending on the required accuracy.
This approach has also been used to compare PRISMA, EnMAP and EMIT observations acquired over bright desert PICS with our calibration reference. Results show that the observations agree with the simulated calibration reference to within ±5% across most of the spectral domain. This simulated calibration reference can therefore be used for the harmonization of hyperspectral observations. This research was funded by the ESA CalibrEO and QA4EO HyperPICS projects.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Verifying the Spectral Performance of the EnMAP Imaging Spectrometer using Earth Datatakes – Data Quality Control Results from 3 Years in Orbit

Authors: Martin Bachmann, Emiliano Carmona, Stefanie Holzwarth, David Marshall, Miguel Pato, Raquel De los Reyes, Maximilian Brell, Dr. Sabine Chabrillat, Luis Guanter, Vera Krieger, Laura La Porta
Affiliations: German Aerospace Center (DLR), Earth Observation Center (EOC), German Research Centre for Geosciences (GFZ), Universitat Politecnica de Valencia, Res. Inst. of Water and Env. Engineering (IIAMA), German Aerospace Center (DLR), Space Agency
In this contribution, we present the results of the data quality control within the EnMAP Ground Segment related to spectral performance, together with the underlying methods. While scene-dependent approaches with atmospheric simulations are frequently used for selected scenes, the presented approach is based on mass data, as every recorded Earth datatake is used, resulting in a database of currently over 100,000 tiles dating from the commissioning phase (CP) to the latest acquisitions. This straightforward approach was developed in the context of the DESIS ground segment and adjusted for the spectral characteristics of EnMAP, which has a wider spectral sampling distance of 6.5 nm (VNIR) and 10 nm (SWIR) and covers the full wavelength region between 420 nm and 2450 nm. In short, the approach is based on the narrow atmospheric absorption bands of the oxygen A-band at ~760 nm and CO2 at ~2060 nm, where, after a radiometric normalization step, the influence of the underlying Earth surface on the radiance signal is small. Using the pre-launch laboratory characterization, shifts of the center wavelengths of ±3.0 nm in 0.05 nm steps are simulated and applied to a MODTRAN-based typical atmospheric transmission spectrum at 0.1 nm grid resolution. For the analysis of mass data, the approach then uses the detector maps (DMs), in which the band-wise column means of the calibrated top-of-atmosphere (TOA) radiances are calculated, resulting in the mean radiance received by every spatial and spectral detector element for every Earth tile. The per-pixel center wavelength estimation at the two atmospheric features is carried out for every tile using a least-squares fit between the actual spectral absorption region and the simulated shifts. The results for the currently ~100,000 Earth datatakes show that both the spectral smile and the center wavelength positions can be estimated reliably with this method for the bands at ~760 nm and ~2060 nm.
Comparison with the more precise, dedicated spectral calibration measurements of the On-Board Calibration Assembly (OBCA) shows good agreement, with the DM-based approach naturally having the larger spread and uncertainty inherent in the use of Earth datatakes. These results are also cross-checked with selected interactive modelling results from the EnMAP independent validation team. Summarizing the findings for the EnMAP spectral performance over time, both the VNIR and SWIR center wavelengths at the given features are well within the requirements, have been stable after the initial gravity release encountered during the CP, and are adequately updated within the spectral calibration. The shape of the spectral smile was also found to be constant, indicating the good in-orbit performance of EnMAP and the high quality of the data.
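The shift-and-fit step described above (candidate shifts of ±3 nm in 0.05 nm steps applied to a reference transmission spectrum, then a least-squares comparison against the normalized observed signal) can be sketched as follows. This is a toy stand-in: the real processing operates on detector-map column means against a MODTRAN reference, whereas this sketch uses plain linear interpolation and a crude max-normalization.

```python
import numpy as np

def estimate_center_wavelength_shift(wl_obs, signal_obs, wl_ref, trans_ref,
                                     shifts=None):
    """Estimate a spectral shift by least-squares against shifted
    reference transmission spectra.

    Candidate shifts (default +/-3 nm in 0.05 nm steps, as in the
    abstract) are applied to the reference spectrum; the shift with
    the smallest residual to the normalized observed signal wins.
    """
    if shifts is None:
        shifts = np.arange(-3.0, 3.0 + 1e-9, 0.05)
    obs = np.asarray(signal_obs, dtype=float)
    obs = obs / obs.max()  # crude stand-in for radiometric normalization
    best_shift, best_cost = 0.0, np.inf
    for s in shifts:
        # Sample the reference transmission at the shifted wavelengths
        model = np.interp(np.asarray(wl_obs, dtype=float) + s, wl_ref, trans_ref)
        cost = float(np.sum((obs - model) ** 2))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```

On a synthetic absorption feature, the estimator recovers a known 0.5 nm shift to within the 0.05 nm grid step.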

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: LANDHYPERNET Data Distribution and Science Results

Authors: Dr Pieter de Vis, Dr. Agnieszka Bialek, Harry Morris, Miss Morven Sinclair, Kevin Ruddick, Dr. Kevin Alonso, Mirco Boschetti, Vittorio Brando, Gabriele Candiani, Alberto Crema, Ana Dogliotti, David Doxaran, Ivan Farace, Philippe Goryl, Joel Kuusk, Ashley Ramsay, Joe Riordan, Mohammadmehdi Saberioon, Quinten Vanhellemont, Astrid M. Zimmermann
Affiliations: National Physical Laboratory (NPL), Royal Belgian Institute of Natural Sciences (RBINS), Consiglio Nazionale delle Ricerche (CNR), Instituto de Astronomía y Física del Espacio, Consejo Nacional de Investigaciones Científicas y Técnicas (IAFE, CONICET/UBA), Laboratoire Océanographique de Villefranche, Sorbonne Université (SU/LOV), European Space Agency (ESA), University of Tartu (UT), Deutsches GeoForschungsZentrum (GFZ)
The HYPERNETS project (www.hypernets.eu) developed an innovative hyperspectral radiometer (HYPSTAR®) integrated in automated networks of water (WATERHYPERNET) and land (LANDHYPERNET) surface reflectance measurements for satellite validation. This new network of automated hyperspectral radiometers will be invaluable for the radiometric validation of surface reflectance from hyperspectral (and multispectral) satellite sensors. Here we present the LANDHYPERNET data, the LANDHYPERNET data portal and some science results, including examples of satellite validation of various products. The HYPSTAR®-XR instrument used in the LANDHYPERNET network features both a VNIR and a SWIR sensor. It measures the hemispherical-conical reflectance factor (HCRF) at a range of different viewing angles. VNIR data are provided over a wavelength range of 380–1000 nm with 0.5 nm sampling and 3 nm resolution, and SWIR data over 1000–1680 nm with 3 nm sampling and 10 nm resolution. The raw data are automatically transferred to a central server, processed in near real time to reflectance and other variables, and archived for web distribution. Furthermore, with an aspiration to achieve fiducial reference measurement (FRM) quality, the data are SI-traceable, processed with the open-source hypernets processor (https://hypernets-processor.readthedocs.io/) and accompanied by their uncertainties. The measurement uncertainty is propagated through the full processing chain, including the treatment of temporal and wavelength error covariance, a level of detail unique among such satellite validation networks. We also discuss the LANDHYPERNET data portal (www.landhypernet.org.uk), the LANDHYPERNET data distribution and license, and show how to query the data and download monthly zip files. In addition, we highlight some results comparing the LANDHYPERNET network data to satellite data at both bottom of atmosphere and top of atmosphere.
These include a study on the feasibility of using LANDHYPERNET surface reflectance data for the vicarious calibration of multispectral (Sentinel-2 and Landsat 8/9) and hyperspectral (PRISMA) satellites; a study utilising LANDHYPERNET data products to understand the seasonal dynamics of the reflectance of a deciduous broadleaf forest in the UK; and science results from the agricultural Jolanda di Savoia site in Italy. Through these science results, the usefulness of the LANDHYPERNET data is demonstrated.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Assessing Changes in the Processing Chains of Sentinel-3/OLCI and PACE/OCI Using Reference Measurements From Fixed Autonomous Hyperspectral Radiometers in Four Italian Sites

Authors: Luis Gonzalez Vilas, Vittorio Ernesto Brando, Simone Colella, Javier Alonso Concha, Ivan Farace, Mauro Bastianini, Federica Braga, Mariano Bresciani, Claudia Giardino, Salvatore Mangano, Alice Fabbretto, Andrea Pellegrino, Giorgio Di Sarra, Damiano Sferlazzo
Affiliations: CNR-ISMAR, Serco S.p.A. c/o ESA-ESRIN, CNR-ISMAR, CNR-IREA, ENEA
The calibration and validation (Cal/Val) of satellite data products using measurement networks and reference sites are fundamental to evaluate the sensor performance during the missions and to assess data quality and integrity. Autonomous in situ radiometer systems can provide extensive, representative and high-quality reference datasets to perform reliable radiometric Cal/Val activities based on match-up analysis. In the last few years, autonomous hyperspectral radiometers have been developed and deployed at fixed platforms and ships with the aim of supporting the validation of visible and near infrared bands of present and future multispectral and, especially, hyperspectral missions. Four autonomous hyperspectral radiometers have been established at four aquatic sites across diverse optical regimes in Italy: at the Acqua Alta Oceanographic Tower, in Lake Garda, in Lake Trasimeno and close to the Lampedusa Island. - Acqua Alta Oceanographic Tower (AAOT) is located on the North Adriatic Sea, 15 km off the coast of the Venice lagoon. Waters ranges from clear oligotrophic waters to moderately turbid mesotrophic waters depending on the wind and wave driven re-suspension and/or the local coastal currents, as the site is not reached directly by the river plumes under flood conditions. In addition to the AERONET-OC autonomous multispectral radiometer installed in 2022, AAOT hosts two hyperspectral above water systems: a PANTHYR since October 2019 and a HYPERNETS/HYPSTAR system since May 2021. - Lake Garda is a large deep Italian lake characterized by clear oligo/meso-trophic waters and coastal areas colonized by submerged macrophyte beds. An HYPERNETS site was deployed in the south-western basin on the “Sesarole” navigational pylon near Manerba in June 2022.  -The REmote Sensing for Trasimeno Lake Observatory (RESTO) is located on a shallow and turbid lake in Central Italy. 
Despite its shallow bathymetry, Lake Trasimeno waters are always optically deep because of wind-induced sediment resuspension and high nutrient availability facilitating the occurrence of phytoplankton blooms. The RESTO station is equipped since 2018 with a WISPStation, an hyperspectral above water radiometer system.  - The Lampedusa Oceanographic Observatory (LOO) is an elastic beacon located about 5 kilometers Southwest of the island of Lampedusa, south of the Strait of Sicily. The ocean depth at the location is 74 m in blue oligotrophic waters. A series of Satlantic hyperspectral underwater radiometers were deployed at LOO in June 2022 replacing the multispectral radiometers installed in 2019. In this work, we perform a validation analysis using the hyperspectral radiometric datasets acquired at these sites in order to assess the changes in the processing chains of two satellite products: on the one hand, EUMETSAT is updating the Sentinel-3/OLCI WFR product from collection 3 to collection 4; on the other hand, NASA is evolving the PACE/OCI ocean color products from version 2 to version 3. Changes in processing chains are evaluated by comparing the validation results derived from the match-up datasets including co-collocated in situ and satellite radiometric (i.e., remote sensing reflectance) measurements. The match-up analysis is based on the Match-up Database (MDB) approach, consisting of the following steps: 1) satellite data extraction of an area centered on the site; 2) spatio-temporal linking of satellite and in situ observations for obtaining all the potential match-ups; 3) applying of the quality control protocols to get the valid match-ups; and 4) plotting and computation of the validation metrics. Satellite quality control is based on the recommendations of the agencies, i.e., EUMETSAT and NASA, but with some small modifications for specifics sites and/or products. 
For instance, the criterion on the minimum number of valid (non-masked) pixels in the extraction window is relaxed for Lakes Garda and Trasimeno because of land proximity. In situ quality control is based on the recommendations of the site managers. Match-up datasets include data since the beginning of the measurements at each site for Sentinel-3/OLCI and since March 2024 for PACE/OCI. Due to differences in flagging between product versions (Sentinel-3/OLCI collections 3 and 4, PACE/OCI V2 and V3), there may be some discrepancies in the number of valid match-ups. Hence, the results include a coverage analysis, as well as metric comparisons based on a common set of match-ups. The results presented in this work show that fixed autonomous hyperspectral radiometer systems are becoming mature and capable of regularly acquiring good-quality data to support space agencies in assessing new satellite products as well as changes in processing chains.
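The four-step MDB match-up procedure described above can be sketched in a few lines. This is a hedged, generic illustration; the window size, time window and quality-control thresholds below are placeholder assumptions, not the EUMETSAT/NASA protocol values.

```python
# Generic sketch of the four MDB steps: extraction, spatio-temporal
# linking, quality control, and validation metrics. All thresholds are
# illustrative assumptions, not agency protocol values.
import numpy as np

def extract_window(scene, row, col, half=1):
    """Step 1: extract a (2*half+1)^2 window centered on the site pixel."""
    return scene[row - half:row + half + 1, col - half:col + half + 1]

def link_matchups(sat_times, insitu_times, max_dt_hours=3.0):
    """Step 2: pair each satellite overpass with the closest in situ
    measurement within the time window; returns index pairs."""
    pairs = []
    for i, t in enumerate(sat_times):
        dt = np.abs(insitu_times - t)
        j = int(np.argmin(dt))
        if dt[j] <= max_dt_hours:
            pairs.append((i, j))
    return pairs

def quality_control(window, min_valid_frac=0.5, max_cv=0.2):
    """Step 3: require enough valid (non-masked) pixels and low spatial
    variability (coefficient of variation) in the extraction window;
    return the window mean for valid match-ups, None otherwise."""
    valid = window[np.isfinite(window)]
    if valid.size / window.size < min_valid_frac:
        return None
    mean = valid.mean()
    if valid.std() / mean > max_cv:
        return None
    return mean

def validation_metrics(sat, insitu):
    """Step 4: basic match-up statistics (bias and RMSD)."""
    sat, insitu = np.asarray(sat), np.asarray(insitu)
    diff = sat - insitu
    return {"n": sat.size, "bias": diff.mean(),
            "rmsd": np.sqrt((diff ** 2).mean())}
```

In a real Cal/Val chain these steps run per product version, so that collection 3 and collection 4 (or OCI V2 and V3) metrics can be compared over a common set of match-ups.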

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Current and future radiometric calibration and validation of hyperspectral imaging systems at CNES

Authors: Camille Desjardins, Damien Rodat, Arthur Dick, Aimé Meygret
Affiliations: CNES
Radiometric calibration is crucial for giving physical meaning to the digital values of an image by linking them to a parameter known as top-of-atmosphere radiance. Calibration contributes to the quality of data obtained through optical remote sensing and makes it possible to combine measurements from various instruments. Hyperspectral imaging instruments, with their many bands and wide spectral range, pose significant challenges for radiometric calibration and validation. The purpose of this presentation is to showcase the current and future means of radiometric calibration developed at CNES for hyperspectral imaging systems, along with the main calibration results obtained with these techniques. Over the last two decades, using the SADE/MUSCLE calibration system [1], CNES has calibrated and cross-calibrated a wide range of multispectral imagers, including SPOT 1 to 7, MERIS, Pleiades, Pleiades-Neo, POLDER, VENµS, Sentinel-2 MSI, Sentinel-3 OLCI & SLSTR, … Furthermore, CNES participates in the radiometric validation of several systems based on multispectral photometers [2] located at La Crau and Gobabeb as part of the RadCalNet network [3]. This presentation will demonstrate how current radiometric calibration methods dedicated to multispectral imagers can be adapted to hyperspectral imagers such as PRISMA or EnMAP. As an illustration, results from cross-calibration over desert targets [4] of these hyperspectral imagers against widely used reference sensors will be presented. Radiometric validation results will also be shown using the in situ photometers installed at the instrumented sites of La Crau and Gobabeb. References: [1] Cabot, F., O. Hagolle, C. Ruffel, P. Henry, Remote Sensing Data Repository for In-Flight Calibration of Optical Sensors over Terrestrial Targets, Proceedings of SPIE, 3750, Denver, 1999. [2] Meygret, A.; Santer, R.; Berthelot, B.
ROSAS: A robotic station for atmosphere and surface characterization dedicated to on-orbit calibration. In Proceedings of the Earth Observing Systems XVI, San Diego, CA, USA, 13 September 2011. [3] Bouvet et al., RadCalNet: A Radiometric Calibration Network for Earth Observing Imagers Operating in the Visible to Shortwave Infrared Spectral Range. Remote Sens. 2019, 11, 2401. [4] Lacherade, S. et al., Cross Calibration Over Desert Sites: Description, Methodology, and Operational Implementation. IEEE Trans. Geosci. Remote Sens., vol. 51, no. 3, pp. 1098–1113, 2013.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: B.04.03 - POSTER - Integrating multi-sensor Earth Observation data to improve natural hazard monitoring

A deeper knowledge of natural hazards, which may originate from different sources and systems (such as atmospheric, hydrological, oceanographic, volcanological or seismic), is essential for accelerating actions towards the Sustainable Development Goals of the 2030 Agenda. In this framework, Earth Observation (EO) data are essential for all phases of natural hazard management, from spatial planning to post-emergency operations.
Satellite data offer extensive spatial coverage, which is highly beneficial for various scientific applications. Optical sensors on board satellites gather information in the visible and near-infrared regions of the electromagnetic spectrum, providing images that are ‘easier’ to interpret as well as valuable spectral information for accurately recognizing different objects. SAR sensors, on the other hand, are largely independent of weather and illumination conditions and can detect the roughness and dielectric characteristics of target objects.
Moving to a different platform type, UAVs allow very high-resolution, on-demand data to be acquired in near-real time. The development of different sensors (optical, multispectral, LiDAR, hyperspectral) enables different types of data to be acquired, even simultaneously.
Despite the complementarity of sensors from different platforms and their respective advantages (and disadvantages), these data sources are often used separately. Data fusion and the integration of different methodologies make it possible to address spatial and resolution gaps.
This session aims to examine and discuss multi-platform and multi-sensor data fusion, including by sharing qualitative and quantitative information to foster collaborative advances in monitoring and mapping natural hazards.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Spatiotemporal Analysis of Vegetation Dynamics in a Complex Post-Mining Landscape

Authors: Jan Blachowski, Dr Anna Buczyńska, Mrs. Aleksandra Kaczmarek, Ms. Katarzyna Reclik
Affiliations: Wroclaw University of Science and Technology
A major challenge after the end of mining is reversing the environmental degradation caused by the removal of ground and vegetation, soil contamination and water pollution (Li et al., 2019; Agboola et al., 2020). Properly managed post-mining sites can be rehabilitated to restore or introduce new species of flora and to create new habitats in artificial waterbodies (e.g. in abandoned open pits). Adverse effects of mining can manifest themselves, persist or intensify many years after the end of mineral extraction and successful reclamation (Yang et al., 2018). Examples include flooding caused by groundwater table restoration after the cessation of mine drainage, the sudden occurrence of sinkholes due to the deterioration of underground workings, erosion of waste dumps, acid mine drainage, fires in spoil heaps, and other related issues. Therefore, even rehabilitated mining sites require constant, long-term monitoring to analyse environmental trends and develop predictive models. Many consider these risk management activities to be perpetual tasks (Kretschmann and Nguyen, 2020). However, data on the environmental conditions of post-mining sites are often limited due to factors such as insufficient records of historical mining activities, unclear responsibilities following land ownership changes and, importantly, the high costs associated with ground surveys. In this study we adopted an open satellite data and geographic information system (GIS) based spatial regression approach for monitoring and modelling environmental changes in an artificially and naturally rehabilitated, complex post-mining site located in a glaciotectonic area subjected to long-term combined underground and open-pit extraction of lignite that ended in the 1970s. The area of interest was subjected to reclamation procedures that ended with partial success in the mid-1980s.
The study area, known as the Babina mine, covers 13.7 sq km, is located in Western Poland on the border with Germany and forms part of the UNESCO Geopark Muskau Arch. For our research we selected 10 cloud-free Sentinel-2 images (L2A product) for the same season in each year between 2015 and 2024, and developed geospatial data describing the known extent of mining activity, with the aim of quantitatively analysing and predicting temporal trends in vegetation condition. We developed and applied a GIS-based methodology utilising multidimensional raster map algebra statistics and machine learning spatial regression to determine trends in vegetation condition and identify zones of vegetation cover change. Based on the literature (e.g. McKenna et al., 2020; Blanche et al., 2024), we selected the following spectral indices as proxies for vegetation condition at 10 m spatial resolution: the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI) and the Soil Adjusted Vegetation Index (SAVI). We used descriptive statistics together with temporal Getis-Ord and Mann-Kendall statistics to identify statistically significant clusters of healthy, deteriorating and stable vegetation and changes in their intensity over time. We then analysed the statistical relationships between vegetation condition, expressed as the NDVI, EVI and SAVI indices, and 16 explanatory variables representing mining, topographic and weather factors (e.g. depth of mining, distance from mining, distance from waste heaps, distance from artificial lakes, aspect, monthly precipitation) using multiscale spatial regression and random forest regression models. Our results indicate a generally good condition of vegetation in the post-recovery phase of the area of interest. The combination of spatiotemporal statistics allowed us to identify statistically significant clusters of higher and lower vegetation index values and their temporal trends.
Interestingly, we identified change between vegetation cover classes over 4% of the study area. These zones represent local processes still taking place in the post-mining area that were also confirmed in the field. The identified phenomena include the encroachment of aquatic vegetation in waterlogged mining subsidence basins, the emergence of low vegetation in old pits filled with waste material, the expansion of barren earth zones on external waste dumps, as well as present-day forest management activities. Thus, we confirmed that significant vegetation changes associated with former mining activities persist even five decades later. Our study proves the applicability of spectral indices derived from open Sentinel-2 satellite data for studies of post-mining environments and for the detection and analysis, with multidimensional spatial statistics, of phenomena related to local natural landscaping processes. The methodology adopted in our research, consisting of a combination of GIS-based data mining methods, can be used in combination or separately at other mining sites, and can support their sustainable management. Literature: Agboola et al. (2020) A review on the impact of mining operation: Monitoring, assessment and management. Results in Engineering, Vol. 8, https://doi.org/10.1016/j.rineng.2020.100181. Blanche, M.F., Dairou, A.A., Juscar, N. et al. (2024) Assessment of land cover degradation due to mining activities using remote sensing and digital photogrammetry. Environ Syst Res 13, 41. https://doi.org/10.1186/s40068-024-00372-5. Kretschmann, J., Nguyen, N. (2020) Research Areas in Post-Mining – Experiences from German Hard Coal Mining. Mineral Engineering – Inżynieria Mineralna, no. 2, vol. 1, pp. 255-262, DOI 10.29227/IM-2020-02-31. Li, W., Wang, D., Li, H. (2019) Environmental Engineering Issues Induced by Abandoned Coal Mine Hidden Disasters. IOP Conf. Ser.: Earth Environ. Sci. 237:2, DOI 10.1088/1755-1315/237/2/022039. McKenna, P.B., Lechner, A.M., Phinn, S. and P.
D. Erskine (2020) Remote Sensing of Mine Site Rehabilitation for Ecological Outcomes: A Global Systematic Review. Remote Sensing 12, no. 21: 3535. https://doi.org/10.3390/rs12213535. Yang, Y.; Erskine, P.D.; Lechner, A.M.; Mulligan, D.; Zhang, S.; Wang, Z. (2018) Detecting the Dynamics of Vegetation Disturbance and Recovery in Surface Mining Area via Landsat Imagery and LandTrendr Algorithm. J. Clean. Prod., 178, 353–362.
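The index-plus-trend workflow described in this abstract can be illustrated with a short sketch. The formulas are the standard NDVI, EVI and SAVI definitions; the Mann-Kendall function computes only the S statistic (the significance test and the Getis-Ord clustering are omitted), and nothing here reproduces the authors' exact processing chain.

```python
# Standard vegetation-index formulas and the Mann-Kendall S statistic
# used for per-pixel trend detection. Inputs are surface reflectances
# (Sentinel-2 convention: B02 = blue, B04 = red, B08 = NIR).
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

def savi(nir, red, L=0.5):
    return (1 + L) * (nir - red) / (nir + red + L)

def mann_kendall_s(series):
    """Mann-Kendall S statistic for a yearly index time series: positive
    for an increasing trend, negative for a decreasing one."""
    x = np.asarray(series, dtype=float)
    s = 0
    for k in range(len(x) - 1):
        s += np.sign(x[k + 1:] - x[k]).sum()
    return int(s)
```

Applied per pixel to the ten yearly composites, the sign of S separates improving from deteriorating vegetation before any spatial clustering.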

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Flood Intensity Mapping Based on the SAR images Using Deep Learning in Rice Field

Authors: Shobha Poudel, Dr Bhogendra Mishra, Dr Bhoj Raj Ghimire, Dr Yogendra Mishra, Dr Krishna Chandra Devekota
Affiliations: Science Hub, Consiglio Nazionale delle Ricerche-Istituto per il Rilevamento Elettromagnetico dell’Ambiente (CNR-IREA), Nepal Open University, Ministry of Energy, Water Resource and Irrigation, Gandaki Provincial Policy and Planning Commission
The frequency of floods during the monsoon in South Asia has increased over the years, raising the risk of loss and damage to rice crops, the staple food for the majority of people in the world, and resulting in production uncertainty. Timely and accurate loss and damage estimation is critical for effective compensation, relief distribution and crop insurance policy development, and for ensuring food security at the national and regional level. Quantifying crop losses requires data on flood intensity, which measures the extent of damage caused by floods to rice crops. Synthetic Aperture Radar (SAR)-based flood mapping methods have been validated and are well accepted, but challenges remain in accurately mapping flood intensity. We developed a novel U-Net deep learning algorithm for automatically mapping flood intensity using SAR images together with topographic information. The deep learning approach requires a large amount of training data, which we prepared from different flood events in an ex-post flood case study of the Ganga River basin in Nepal, India and Bangladesh from 2019 to 2024. Flood extent and intensity were labelled by visual interpretation of Sentinel-1 SAR, Sentinel-2 and PlanetScope data where available. The duration of standing water and the vegetation condition were the major indicators used to manually confirm the damage level. The digitized flood intensity levels were refined through field visits and validated in stakeholder discussions. The extent of crop damage is affected by flood duration (i.e., how long standing water remains in the field) and the speed of the water current, so we used these datasets to fit the model. The U-Net model was trained with more than 60,000 Sentinel-1 10 m 64 × 64 patches across the three countries, covering different agro-ecological zones and topographic conditions in South Asia.
The model was able to precisely map flood intensity across different ecological zones and topographies. A comparative analysis between the ground truth data and the model-generated damage intensity showed high agreement: 88% overall accuracy, with an omission error (damage under-detected in high-damage areas) of 11.6% and a commission error (damage over-detected in lower-damage areas) of 13.2%. This demonstrates, for the first time, the potential of an automated method combining a deep learning algorithm and remote sensing data for flood intensity mapping. Keywords: Flood, Rice Crop, South Asia, Deep Learning
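The patch-preparation step implied by the training setup above (64 × 64 Sentinel-1 patches with co-registered intensity labels) can be sketched as follows. Array layout and the non-overlapping tiling are illustrative assumptions, not the authors' pipeline, and the U-Net itself is omitted.

```python
# Hypothetical sketch of tiling a labelled SAR scene into U-Net
# training patches. Shapes and tiling strategy are assumptions.
import numpy as np

def extract_patches(image, labels, size=64):
    """Tile a (H, W) SAR backscatter array and its co-registered flood
    intensity label map into non-overlapping size x size patches."""
    h, w = image.shape
    xs, ys = [], []
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            xs.append(image[r:r + size, c:c + size])
            ys.append(labels[r:r + size, c:c + size])
    return np.stack(xs), np.stack(ys)
```

In practice such patch pairs (image, label) would be shuffled, split into train/validation sets and fed to the segmentation network, with topographic layers stacked as extra channels.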

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Cross-sectoral connections in the service of sustainable development: satellite observation in insurance in the context of risk prevention

Authors: Katarzyna Malinowska
Affiliations: Kozminski University, Space Entrepreneurship Institute
In the field of insurance, the profound impact of new technologies, including satellite techniques, has become a ubiquitous topic of discussion. This impact spans the entire insurance value chain, from pricing and risk assessment to loss calculation. Satellite techniques enable the spread of new insurance models, such as parametric insurance, which have significant potential to reduce the insurance gap. This applies especially to agricultural insurance, which is highly dependent on climate conditions. These evolving possibilities cannot be underestimated in an era of extreme weather events and the pursuit of sustainability in all aspects of our lives. Risk prevention emerges as an area where new technologies, such as remote sensing, show significant and largely uncontroversial potential. This potential includes the identification of factors that increase the risk of damage, as well as the ability to respond quickly to such events to reduce their impact. Technologies such as advanced image analysis tools, including satellites, play a key role in this preventive aspect. The use of satellite techniques not only facilitates advanced risk management but also enables the establishment of early warning systems. These systems allow timely interventions aimed not only at reducing the frequency of insurance events but also at minimizing damage when risks materialize. As such, their use expands the role of insurance beyond simple risk transfer (protecting against the consequences of an insured event) to proactively predicting and preventing the materialization of risk or mitigating losses. The creation of predictive models, an integral part of the entire insurance cycle, should be linked to product design, distribution and marketing.
Thanks to satellite techniques, we are witnessing a groundbreaking change in the insurance paradigm, in which prevention and mitigation of risk become functions inherent and equivalent to risk transfer. Despite these capabilities, a comprehensive analysis of their impact on the preventive function of insurance is lacking. Filling this gap is crucial to the evolution of insurance as a tool for mitigating direct social risk. The motivation behind this study therefore stems from three key factors: (1) dynamic changes in the risk landscape and sustainability needs, (2) increasingly sophisticated satellite observation methods that facilitate risk prediction, and (3) a perceived regulatory gap between social needs and technological potential. Accordingly, the purpose of this research is to analyze the legal, social and technological environment in the context of insurance principles and functions, which have for hundreds of years influenced the level of losses absorbed by society and contributed to risk prevention, in the light of the opportunities offered by technologies such as earth observation. This is increasingly important in a changing climate, where limiting the function of insurance to compensating for losses is becoming ineffective and impossible to sustain satisfactorily. Combining the experience of the insurance sector with evolving earth observation techniques can help bring about the necessary paradigm shift in disaster management from a compensatory to a preventive function. While insurance has been known for centuries as a tool for transferring risk to compensate for the consequences of its materialization, its potential does not end there. After all, one of the long-established, though often perceived as ancillary, functions of insurance is risk prevention. With this in mind, we must remember that insurance has traditionally been a form of risk management, and today insurers are the risk managers of the modern world.
The experience gained by insurers has always predestined them to go beyond purely compensatory aspects. Therefore, one of the auxiliary but important functions of insurance is risk prevention, considered the most rational method of controlling losses. The preventive function stems from the social role of insurance. Developing earth observation techniques provide insurance companies, policyholders and governments with various opportunities to control losses and enable better, more efficient risk management practices. The issue of risk prevention and mitigation is increasingly being raised by all stakeholders, indicating that insurance and risk prevention are complementary. Just as insurance protects a client's business from the effects of current and future risks, loss prevention reduces those risks and the losses they can cause. Inherent to this research is the link between risk prevention, the Sustainable Development Goals and ESG strategy, that is, building an organization with financial, environmental (including climate and biodiversity), social and human factors in mind, ensuring resilience and long-term value creation. Researchers studying the Sustainable Development Goals in the context of risk management are asking profound questions: "Isn't sustainability itself the queen of risk mitigation?" or "What is more sustainable than risk transfer, mitigation/avoidance and environmental protection?" Insurance is in a position to play an undeniable role in achieving sustainability goals. Insurers have also begun to recognize their role as risk managers (beyond their traditional role as risk carriers). The insurance industry's contribution to the transition to low-carbon, resource-efficient, inclusive and disaster-resilient economies, i.e., promoting resilience, implies the need for insurers to change their approach to risk-taking: to focus more, and in a structured, institutionalized way, on loss prevention and mitigation.
While the insurance industry has historically provided leadership in minimizing risks from building fires and earthquakes, insurers today have a tremendous opportunity to develop creative loss prevention solutions and products that reduce climate change-related losses for consumers, government and themselves. The ultimate aim of this study is to show how the sectoral combination of insurance and space, with the right support from governments and regulators, has the potential to change the way society thinks about risk and to shift the focus to risk prevention and mitigation.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: The life cycle of the 2018 Kakroud landslide in northern Iran: Results from optical and radar remote sensing data

Authors: Jianan Li, Prof. Dr. Mahdi Motagh, Dr. Haonan Jiang, Dr. Bahman Akbari, Dr. Mehdi Rezaei, Dr. Sigrid Roessner
Affiliations: GFZ German research centre for Geosciences, Forest, Range and Watershed Management of Iran
Landslides are one of the most destructive geological hazards, often causing significant damage to buildings, roads, bridges and other infrastructure, as well as possible casualties. They pose a major threat, especially to communities living near steep slopes. Optical data excel in highlighting surface features, providing clear visualization of affected areas and tracing movement paths. However, their effectiveness is limited under adverse weather conditions. This limitation is addressed by Synthetic Aperture Radar (SAR), which offers all-weather, day-and-night imaging and excels in quantifying the extent and magnitude of deformation, providing precise measurements of slope instability. By combining optical and SAR data, a comprehensive understanding of landslide dynamics can be achieved, enabling more effective monitoring and analysis. In this study, we aim to evaluate pre-, co- and post-failure deformation related to the Kakroud landslide, which occurred in Gilan province in northern Iran on 16 June 2018, using multi-source remote sensing measurements. The landslide caused seven deaths, damaged a number of houses and washed out the main access road to a nearby village. Precipitation data indicate that the region experienced heavy rainfall the day before the failure, which was the main trigger for the landslide. In June 2018, rainfall in the region was 10 mm higher than the average for the same period from 2014 to 2017. Further analysis of annual precipitation reveals a cyclical pattern of rainfall in the region, alternating between dry and wet years. From 2014 to 2017, the region experienced a dry period with an average annual rainfall of 1400 mm. However, 2018 marked the beginning of a wet cycle, with total annual rainfall reaching 2000 mm. The co-failure displacement was analyzed using cross-correlation of high-resolution optical images from Planet and Sentinel-2 data.
Results show that the landslide sustained varying degrees of movement for three consecutive days between 14 and 18 June 2018, with a maximum horizontal movement of about 35 m. Because of pronounced river undercutting, the first major failure occurred in the lower left part of the landslide body, which slid into the river and blocked the river valley. Rising water flow increased erosion at the toe of the landslide, which in turn caused further failure of the landslide body. Pre-failure instability analysis using InSAR on Sentinel-1 data from October 2014 to June 2018 shows that the landslide was already active prior to its failure in June 2018, with an average deformation rate of 2 mm/yr. In conclusion, this study provides an example highlighting the importance of monitoring the early slow movement of landslides and the effectiveness of combined optical and SAR remote sensing techniques for analyzing the evolution of landslide kinematics. Keywords: Landslide; Remote Sensing; Multi-temporal InSAR; Cross-correlation
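The offset-tracking idea behind the co-failure displacement analysis above can be illustrated with a minimal FFT-based cross-correlation between two image chips. This is a generic sketch, not the processing chain used in the study; real pipelines add windowing, oversampling and subpixel peak refinement.

```python
# Generic sketch: estimate the rigid pixel shift between two co-registered
# image chips from the peak of their circular cross-correlation, computed
# in the Fourier domain.
import numpy as np

def pixel_offset(reference, shifted):
    """Return the integer (row, col) shift of `shifted` relative to
    `reference` (positive = moved down/right)."""
    cross = np.fft.ifft2(np.fft.fft2(shifted) * np.conj(np.fft.fft2(reference)))
    peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    # Map peaks in the upper half of the correlation surface to
    # negative shifts (circular wrap-around).
    rows, cols = reference.shape
    dr = peak[0] if peak[0] <= rows // 2 else peak[0] - rows
    dc = peak[1] if peak[1] <= cols // 2 else peak[1] - cols
    return dr, dc
```

Applied chip-by-chip to pre- and post-event Planet or Sentinel-2 images, such offsets (times the pixel size) yield a horizontal displacement field of the landslide body.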

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Monitoring Landslide Dynamics Using DInSAR and Historical Geotechnical Data in the Pianello Slope, Bovino, Italy.

Authors: Ester Pantaleo, Roberto Cilli, Nunzio Losacco, Loredana Bellantuono, Enes Tabak, Domenico Pomarico, Giovanni Lorusso, Riccardo Lanari, Pasquale Striano, Manuela Bonano, Francesco Casu, Claudio De Luca, Gianni Onorato, Yenni Roa, Vito Tagarelli, Federica Cotecchia, Roberto Bellotti, Amoroso Nicola
Affiliations: Dipartimento Interateneo di Fisica M Merlin, Università Degli Studi Di Bari Aldo Moro, Istituto Nazionale Fisica Nucleare, Sezione di Bari, DICATECh, Politecnico di Bari, Dipartimento di Biomedicina Traslazionale e Neuroscienze, Università degli Studi di Bari Aldo Moro, Istituto per il rilevamento elettromagnetico dell'ambiente (IREA), CNR, Dipartimento di Farmacia - Scienze del Farmaco, Università degli Studi di Bari Aldo Moro
The Pianello hillslope, located in the municipality of Bovino (Puglia), represents an emblematic case of slope instability in the Southeastern Apennines (the Daunia Apennines), characterized by slow-moving landslides that pose a significant threat to local buildings and infrastructure. The slope's low mechanical strength, resulting from its specific geological composition, and high pore water pressure are the primary factors predisposing it to deep instability. Erosion at the slope's toe by a stream, coupled with seasonal rainfall, rather than isolated extreme events, are the primary triggers for the initiation and prolonged activity of the numerous landslide bodies along the Pianello hillslope. Previous inclinometric surveys on the slope have identified shear bands along which the landslide bodies displace relative to the stable ground beneath, with displacement rates varying over time. To monitor displacements, we employed advanced satellite techniques using C-band (Sentinel-1) and X-band (COSMO-SkyMed) DInSAR data and the small baseline subset (SBAS) technique developed by CNR-IREA. SBAS InSAR time series analysis utilizing both ascending and descending orbital geometries confirmed active landslide movements. After spatial interpolation and disentanglement of the vertical and east-west velocity components, the analysis revealed an empirical correlation between velocity gradients in the east-west direction and the distribution of structural damage observed in the town of Bovino. This study underscores the value of integrating satellite-based techniques, in-situ measurements and historical data to enhance the phenomenological understanding of landslide activity and its associated damage. The combined use of Earth observation and ground-based data allows for a more comprehensive assessment of geohazards, offering complementary insights into displacement patterns, structural impacts, and the underlying mechanisms of slope instability.
Future work will focus on deriving empirical relations between surface and subsurface data, incorporating additional in-situ measurements.
The study aligns with the objectives of Spoke5 - Environment & Natural Disasters of the ICSC Foundation, which promotes the integration of advanced computing technologies and digital twins for environmental monitoring and natural risk mitigation.
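The vertical/east-west disentanglement step mentioned above can be illustrated as a 2 × 2 linear inversion of ascending and descending LOS velocities. This is a generic sketch, not the CNR-IREA SBAS implementation; neglecting the north-south component (as is common, given SAR's low sensitivity to it) and the sign convention used here are stated assumptions.

```python
# Generic sketch: invert ascending and descending line-of-sight (LOS)
# velocities for vertical and east-west components. Incidence angles
# and signs are illustrative assumptions.
import numpy as np

def decompose_los(v_asc, v_desc, inc_asc_deg, inc_desc_deg):
    """Solve G @ [v_up, v_east] = [v_asc, v_desc], where each row of G
    projects the (up, east) velocity onto one satellite line of sight.
    The east coefficient changes sign between the two viewing
    geometries (right-looking ascending vs descending passes)."""
    ta, td = np.radians(inc_asc_deg), np.radians(inc_desc_deg)
    G = np.array([[np.cos(ta), -np.sin(ta)],
                  [np.cos(td),  np.sin(td)]])
    v_up, v_east = np.linalg.solve(G, np.array([v_asc, v_desc]))
    return v_up, v_east
```

Run per pixel after interpolating both geometries onto a common grid, this yields the east-west velocity field whose gradients were correlated with building damage in Bovino.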

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: A Framework for Assessing Flood Vulnerability, Adaptation and Disaster Responses Using Remote Sensing

Authors: Msc. Akram Mahan, Prof. Christian Geiss, Dr. Inga Sauer
Affiliations: Potsdam Institute for Climate Impact Research(PIK), The German Aerospace Center(DLR)
Climate change, urbanization and population growth exacerbate flooding and other climate disasters in many regions of the world. To mitigate the impacts, adequate strategies to foster preparedness, recovery and overall resilience are required. Yet assessing progress in adaptation through hazard, exposure and vulnerability reduction is challenging. We introduce an assessment framework that combines remote sensing with physical and socio-economic indicators. We apply the framework to consecutive flood events in Chhapra, India, to assess local disaster impacts, responses and recovery. The framework evaluates flood extents by utilizing pre- and post-flood Sentinel-1 Synthetic Aperture Radar (SAR) data. Furthermore, we assess changes in population distribution, land use and land cover, with particular attention to buildings and agricultural land. We find significant changes in the land use types affected during the first and second flood events: in Chhapra, larger shares of urban areas were exposed during the second event, despite a reduction in flood extent compared to the preceding flood. The framework may be applicable at larger geographic scales and may help to identify vulnerable areas and support decisions on post-disaster support and disaster risk reduction.
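The pre/post SAR flood-extent step and the exposure analysis described above can be illustrated with a minimal sketch. The backscatter-drop threshold and the exposure measure are illustrative assumptions, not the framework's actual implementation.

```python
# Hypothetical sketch: flag flooded pixels where SAR backscatter drops
# between pre- and post-event scenes (open water appears dark), then
# measure exposure per land-use class. The 3 dB threshold is an
# illustrative assumption.
import numpy as np

def flood_extent(pre_db, post_db, drop_db=3.0):
    """Boolean flood mask: True where backscatter decreased by more
    than `drop_db` decibels between the pre- and post-flood scenes."""
    return (pre_db - post_db) > drop_db

def exposed_fraction(flood_mask, class_map, class_id):
    """Share of pixels of a given land-use class (e.g. urban) that fall
    inside the flood mask, i.e. the exposure of that class."""
    in_class = class_map == class_id
    return (flood_mask & in_class).sum() / max(in_class.sum(), 1)
```

Comparing `exposed_fraction` for the urban class across two events captures the kind of result reported above: higher urban exposure in the second flood despite a smaller total extent.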

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Monitoring Mediterranean Wildfires Impact: Burn Severity Assessment in Greece During July and August 2024 Using Sentinel-2 Imagery

Authors: Dr. Ignacio Castro-Melgar, Ms. Artemis Tsagkou, Ms. Maria Zacharopoulou, Ms. Eleftheria Basiou, Mr. Ioannis Athinelis, Mr. Efstratios-Aimilios Katris, Ms. Ioanna-Efstathia Kalavrezou, Prof.Dr. Issaak Parcharidis
Affiliations: Department of Geography, Harokopio University of Athens, Institute of Geophysics, Czech Academy of Sciences, Department of Geology and Geoenvironment, National and Kapodistrian University of Athens
In the summer of 2024, Greece experienced an alarming increase in wildfire activity, driven by extreme heatwaves, prolonged drought conditions, and strong winds. These factors created an ideal environment for rapid fire spread, causing significant damage to ecosystems, infrastructure, and human livelihoods. Wildfires during this period posed substantial threats to biodiversity, wetlands, and air quality, with smoke dispersing hazardous pollutants such as particulate matter (PM) and nitrogen dioxide (NO₂) over large areas. Greece’s Mediterranean climate, characterised by hot, dry summers and low humidity, coupled with increasingly intense heatwaves attributed to climate change, has amplified the frequency and severity of fire events in recent years. This study focuses on assessing the burn severity of six wildfires that occurred during July and August 2024, using Sentinel-2 satellite imagery from the Copernicus programme. These central fire season months were selected due to their high incidence of fire events and critical environmental impact, accounting for some of the most destructive wildfires recorded in Greece. By analysing these events, this research aims to contribute to the understanding of wildfire dynamics under changing climatic conditions, providing insights into the increasing vulnerability of Mediterranean ecosystems to wildfire damage. The methodology integrates pre- and post-fire Sentinel-2 satellite images to calculate the Normalized Burn Ratio (NBR) and the derived differenced NBR (dNBR), enabling precise delineation of burned areas and classification of burn severity levels. Sentinel-2 imagery, with its high spatial resolution and short revisit time, is particularly suited for wildfire analysis, offering detailed insights into fire-affected areas. Further refinement was achieved using cloud masking and water masking techniques to eliminate spectral interferences, ensuring accurate analysis even in challenging atmospheric conditions. 
These indices were calculated and combined using ESA's SNAP software, followed by spatial analyses conducted in a Geographic Information System (GIS) environment. This multi-step approach allowed for the quantification of burned areas and the spatial assessment of burn severity across diverse landscapes. The final results were reclassified into severity categories and visualized in vector and raster formats, providing a comprehensive view of the wildfire impact. Preliminary results indicate that the total burned area across the six main wildfire events exceeds 150 km², with varying degrees of severity. The data underline the increasing vulnerability of Mediterranean ecosystems to wildfire damage, compounded by climate change and land-use patterns. The findings of this study highlight the vital role of remote sensing in monitoring wildfire events and evaluating their environmental impact. By leveraging Sentinel-2 imagery, this approach offers an operationally efficient and replicable methodology for analysing wildfire dynamics and severity. The insights derived can inform emergency response strategies, post-fire rehabilitation efforts, and long-term policy frameworks aimed at mitigating wildfire risk in fire-prone regions.
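The core index arithmetic described above can be sketched in a few lines. This is a minimal illustration with synthetic reflectance values, not data from the study; Sentinel-2 band 8 (NIR) and band 12 (SWIR) are commonly used for NBR, and the severity breakpoints shown are the widely cited USGS-style thresholds, assumed here for illustration:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-10)  # epsilon avoids division by zero

# Synthetic surface reflectances (Sentinel-2: NIR = band 8, SWIR = band 12).
nir_pre,  swir_pre  = np.array([0.45]), np.array([0.15])   # healthy vegetation
nir_post, swir_post = np.array([0.20]), np.array([0.30])   # burned surface

# differenced NBR: pre-fire NBR minus post-fire NBR
dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Illustrative USGS-style severity breakpoints
severity = "high" if dnbr[0] > 0.66 else "moderate" if dnbr[0] > 0.27 else "low/unburned"
```

In practice the same arithmetic is applied per pixel to co-registered pre- and post-fire rasters after cloud and water masking.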
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Environmental Monitoring of Zinc and Lead Post-Mining Area Using Remotely Sensed Open Data - Case Study of the Hutki Site in Olkusz Region, Poland

Authors: Aleksandra Smentek, Prof. Jan Blachowski
Affiliations: Wrocław University of Science and Technology
Open-access multispectral satellite data from systems such as Sentinel or Landsat are available for almost the entire Earth's surface, providing images with 10–60 m spatial resolution every 5–8 days. The Polish online service of the Spatial Information Infrastructure provides open access to high-resolution imagery (0.25 m pixel size) and LiDAR data (4–12 points/m²) acquired from airborne laser scanning missions. The temporal resolution of these data is, however, inferior to satellite images: depending on the location, data are available every few years, with higher measurement frequency in large urban areas. Ongoing mining activity impacts the environment; however, the adverse effects persist after mine closure. Post-mining areas are subject to many environmental hazards, such as secondary ground movements that affect the landscape and infrastructure, air and soil pollution due to self-combustion of coal waste heaps, soil and water pollution due to acid mine drainage, and many more. Environmental changes occur at different paces and times, shortly after or years after the mining activity ends. The aim of this study is to investigate the land cover changes that occurred after the closure of the last underground mine in the Olkusz region (southern Poland). The study area, covering 3,584,703 m², is located between 535,589.84 and 537,691.16 E and between 270,363.39 and 272,069.31 N. It includes the former “Hutki II” backfilling sand mine and the former and historical zinc mining areas. The landscape has been altered for many years by mining activity, resulting predominantly in discontinuous land deformations. Additionally, reports show water appearing on the surface in early 2023, due to the gradual restoration of the underground water table after the shutdown of the water pumps (January 2022) used for mine dewatering. This research uses open-access data: multispectral images from the Sentinel-2 satellite and LiDAR data from Airborne Laser Scanning (ALS). 
To compare the environmental state before and after the closure of the Olkusz-Pomorzany mine, two satellite images were obtained: one for October 2022, before the first sign of water appearing on the surface, and one for October 2024, two years later. Due to the lack of LiDAR data after the water phenomenon, point clouds from October 2019 and April 2023 were used, with a focus on surface elevation changes. The methodology includes separate analyses of satellite imagery and LiDAR data and a merge of the results for comparison and interpretation. Satellite imagery analysis comprises visual analysis of the images in Color Infra-Red (CIR) band composition and calculation of the Modified Normalized Difference Water Index (MNDWI) for waterbody identification, including water surface area calculations. LiDAR data processing consists of merging several point clouds into a single, coherent point cloud and the subsequent creation of a Digital Elevation Model (DEM) and Differential Digital Elevation Model (DDEM) to analyze changes in ground elevation. In 2022, the MNDWI marked single pixels as water, with a total area of 2,300 m²; in 2024, the index showed visible waterbodies with a total surface area of 398,800 m². Comparing the results of the MNDWI calculation with the image in the CIR composition, it was found that the index predominantly showed water areas correctly, with local incorrect water indications in urban areas, and either unmarked or redundantly marked water at the edges of water bodies or where water mixes with vegetation (where the error is most visible). The DEMs for 2019 and 2023 show three large areas (417,000 m², 522,000 m², 197,000 m²) with similar elevations (mean minimum elevation = 299.50 m asl (meters above sea level), mean maximum elevation = 303.0 m asl), which are lower than the mean elevation of the whole study area (approx. 312.10 m asl). The DDEM indicates changes in ground elevation between 2019 and 2023, revealing one large area (approx. 
51,330 m²) of lowered ground surface, where the maximum difference reaches up to 15.00 m (mean difference of 9.75 m), and about 80 round-shaped sinkholes reaching a maximum of 12.50 m depth and up to 33.5 m diameter. Sinkhole areas range from 1 m² to 969.25 m² (mean = 182.85 m², median = 97.50 m²). Comparing the result of the MNDWI for 2024 and the DEM for 2023, there is a strong correlation between the locations of identified water and regions with lower elevation values. In October 2024, water was present on the lowest-elevation surfaces; it did not exceed the boundaries of these areas, covering between 31% and 41% of the total region area. Water covers all detected discontinuous subsidence features; however, the MNDWI does not indicate it as clearly as it is visible on the CIR composition image. Open-access data proved useful in the environmental monitoring of a zinc and lead post-mining area, with a focus on examining surface water and discontinuous land subsidence. Systematically acquired satellite images and a rich archive of historical data allow multi-temporal analyses, and the spatial and spectral resolution of Sentinel-2 data is sufficient for water surface identification and analysis. The Modified Normalized Difference Water Index can be used as a proxy for water detection and a means to obtain preliminary results on multi-temporal water surface extent. The obtained results, however, are often subject to uncertainties: image processing techniques (i.e., indices) applied to satellite data are not always accurate in detecting the desired phenomenon, due to the spatial/spectral resolution of the data, the selection of methods, or the characteristics of the study area. Introducing data from another open-access source, i.e., LiDAR data, allows different analyses to be performed on a different data type (3D point cloud), subsequently collating them to obtain more accurate results and detailed conclusions. 
Integrating data requires similar acquisition dates; due to the limited availability of LiDAR data, performing multi-source open-access remote sensing analyses of the environment is often not possible. Moreover, frequent measurements are essential in monitoring post-mining areas, as rapid changes occur shortly after mining operations end. It is crucial to acquire site imagery and measurements before and during the change occurrence, as well as periodically in the future. Due to the lack of open-access data with high temporal and spatial resolution, there is a need both for systematic airborne LiDAR/multispectral data acquisitions and for developing a methodology for accurate analysis of post-mining environments using available data with inferior resolutions.
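The two analysis strands described above, MNDWI-based water mapping and DEM differencing, reduce to simple raster arithmetic. A minimal sketch with synthetic arrays (the MNDWI > 0 water threshold and the 1 m subsidence flag are common illustrative choices, not the study's calibrated values; 10 m pixels assume Sentinel-2 bands resampled to a common grid):

```python
import numpy as np

def mndwi(green, swir):
    """Modified NDWI (Xu, 2006): (Green - SWIR) / (Green + SWIR)."""
    return (green - swir) / (green + swir + 1e-10)

# Synthetic 3x3 scene: top row is water (high green, low SWIR reflectance).
green = np.array([[0.10, 0.10, 0.09],
                  [0.06, 0.05, 0.06],
                  [0.05, 0.05, 0.05]])
swir  = np.array([[0.02, 0.03, 0.02],
                  [0.20, 0.25, 0.22],
                  [0.24, 0.21, 0.23]])
water_mask = mndwi(green, swir) > 0           # water where MNDWI is positive
water_area_m2 = water_mask.sum() * 10 * 10    # 10 m Sentinel-2 pixels

# DDEM: elevation difference between two LiDAR-derived DEMs.
dem_2019 = np.full((3, 3), 312.0)
dem_2023 = dem_2019 - np.array([[0, 0, 0], [0, 9.75, 0], [0, 0, 0]])
ddem = dem_2023 - dem_2019                    # negative values = lowered ground
subsidence = ddem < -1.0                      # flag drops larger than 1 m
```

The same masks, vectorized to polygons in a GIS, yield the water surface areas and subsidence extents reported in the abstract.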
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Operational Flood Mapping in the Valencia Metropolitan Area (Spain) Using Multi-Sensor Satellite Data Following the October 2024 Cut-Off Low Event.

Authors: Dr. Ignacio Castro-Melgar, Mr. Triantafyllos Falaras, Eleftheria Basiou, Prof. Dr. Issaak Parcharidis
Affiliations: Department of Geography, Harokopio University of Athens, Institute of Geophysics, Czech Academy of Sciences, Department of Geology and Geoenvironment, National and Kapodistrian University of Athens
In October 2024, an extreme cut-off low impacted eastern Spain, resulting in very high precipitation levels and extensive flooding across the Valencia Metropolitan Area. This weather event, the most severe recorded in this century, led to the inundation of residential, agricultural, and industrial zones, causing extensive damage to infrastructure and land cover and resulting in over 200 fatalities. Given the urgency for real-time data on flood extent and impact, satellite-based Earth Observation has proven invaluable for operational flood mapping and monitoring. This study applies a multi-sensor approach using Sentinel-1 and Sentinel-2 data from the European Space Agency’s Copernicus programme to systematically map the affected areas. Synthetic Aperture Radar (SAR) data from Sentinel-1 and multispectral (optical) imagery from Sentinel-2 were processed to detect and delineate the flooded zones. Sentinel-1’s radar capabilities allowed consistent flood detection despite atmospheric challenges such as cloud cover, while Sentinel-2’s optical imagery provided critical land cover insights. This combination enabled flood mapping and assessment of the spatial impact on various land uses, including urban, agricultural, and natural landscapes. The methodology includes the calculation of the Normalized Difference Water Index (NDWI) using pre-flood and post-flood Sentinel-2 Level-2A (L2A) atmospherically corrected images to enhance water feature detection. Cloud masking was applied to address the frequent cloud cover present during extreme events. For the Sentinel-1 Interferometric Wide Swath (IW) Ground Range Detected (GRD) data, processing involved applying a speckle filter and terrain correction, followed by threshold-based backscatter histogram segmentation to classify flooded and non-flooded areas. 
Binary flood masks were generated from both datasets and transformed into vector polygons for integration in a Geographic Information System (GIS) environment, enabling quantitative flood analysis and results extraction. The results include precise delineation of the flood extent immediately post-event. Using Corine Land Cover (CLC) data, the flood's impact on land cover and infrastructure is assessed. Preliminary analysis suggests that large areas of irrigated land, residential areas, and critical infrastructure were significantly affected, underscoring the need for rapid-response flood mapping in future events. This study highlights the effectiveness of multi-sensor satellite Earth Observation for operational flood mapping and the value of the Copernicus programme in emergency response and risk management.
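The Sentinel-1 branch of the methodology, threshold-based segmentation of the backscatter histogram, exploits the fact that open water is a specular reflector and so returns very low backscatter. A minimal sketch with synthetic backscatter values (the -14 dB cut is illustrative; in practice the threshold would be derived from the bimodal histogram, e.g. with Otsu's method):

```python
import numpy as np

# Synthetic Sentinel-1 VV backscatter in dB: smooth water surfaces scatter
# energy away from the sensor, so flooded pixels cluster at low values.
rng = np.random.default_rng(0)
land  = rng.normal(-8.0, 1.0, 500)    # typical land clutter
flood = rng.normal(-20.0, 1.0, 500)   # open/flooded water
sigma0_db = np.concatenate([land, flood])

# Threshold-based segmentation of the backscatter histogram.
flood_mask = sigma0_db < -14.0
flood_fraction = flood_mask.mean()
```

The resulting binary mask, after terrain correction and vectorization, is what gets overlaid on land cover data in the GIS step.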
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Rapid Earthquake Damage Assessment Using SAR and Optical Remote Sensing

Authors: Constantijn Vleugels, Prof Tim Wright, Andrew Hooper, Dr. Stuart King, Ms Sarah Douglas
Affiliations: COMET, School of Earth and Environment, University Of Leeds, School of Mathematics and Maxwell Institute for Mathematical Sciences, The University of Edinburgh, SatSense
Earthquakes are among the most destructive natural disasters. Approximately 1 million deaths have been caused by earthquakes during the period 2000-2020 [4]. To mitigate the impact of earthquakes, rapid damage mapping is essential. As 75% of the casualties in disasters are due to structural collapse [3], it is of particular importance to assess building damage. Ground surveys, while providing reliable information, are not effective for large-scale, rapid damage assessment. Remote sensing provides large spatial coverage on short timescales. Algorithms exploiting high-resolution (sub-meter) optical imagery have been most widely used for remote sensing damage assessment as they yield the highest accuracies. For example, for the devastating Türkiye 2023 earthquake sequence, a Microsoft team partnered with the Türkiye Ministry of Interior Disaster and Emergency Management Presidency (AFAD) to assess building damage using high-resolution optical images [13]. However, atmospheric conditions, such as the occurrence of clouds or fog, limit the effectiveness of optical satellite data for rapid damage assessment, especially in the equatorial region [11]. In contrast, Synthetic Aperture Radar (SAR) observations are less sensitive to weather conditions. Post-event SAR imagery is often available within one day following recent SAR satellite launches (e.g., ALOS-4 [9], Sentinel-1 [15] and commercial constellations such as Capella Space [2] and ICEYE [10]). SAR data availability will further improve with the launch of planned missions (e.g., NISAR [6]). Therefore, post-event SAR imagery can provide more consistent rapid disaster response than post-event optical imagery. However, the accuracy of damage detection methods using SAR data proposed to date is lower than that of methods using optical data [5]. 
Initially, SAR damage proxy maps were produced from coherence difference or intensity correlation images [7,8], which typically struggled to disentangle building damage from other causes of temporal decorrelation (such as vegetation). Moreover, before the launch of Sentinel-1 in 2014, there was no access to time series for the areas of interest. In the past few years, the use of machine learning, SAR time series and data fusion has significantly improved SAR damage mapping accuracy. Examples of recent studies include the fusion of SAR-derived damage proxy maps and USGS ShakeMaps with a Random Forest model [12] and anomaly detection using SAR time series and a Recurrent Neural Network [14]. While their accuracy is high on the test cases, the authors of these past works acknowledge that the models face difficulties in generalising to regions not seen in the training data sets. We will present a review of existing methods for earthquake damage assessment from SAR, and show preliminary results from a new algorithm under development. In our new approach, we explore the integration of SAR Sentinel-1 time series, pre-event optical imagery and USGS ShakeMaps for earthquake damage assessment within a deep learning approach. We aim to leverage the Copernicus Emergency Mapping Service building damage dataset for earthquakes from 2015-2023, which provides a diverse sample to enhance the model's generalisation capability. The goal of our work is to develop an algorithm that can be integrated within a platform such as the COMET LiCSAR portal [1], facilitating the rapid public release of damage assessment products shortly after earthquake events.

[1] COMET LiCSAR portal. https://comet.nerc.ac.uk/COMET-LiCS-portal/. Accessed: 2024-11-26.
[2] Davide Castelletti, Gordon Farquharson, Craig Stringham, Michael Duersch, and Duncan Eddy. 2021. Capella Space first operational SAR satellite. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. IEEE, 1483–1486.
[3] Andrew W. Coburn, Robin J. S. Spence, and Antonios Pomonis. 1992. Factors determining human casualty levels in earthquakes: mortality prediction in building collapse. In Proceedings of the Tenth World Conference on Earthquake Engineering, Vol. 10. Balkema, Rotterdam, 5989–5994.
[4] D. Delforge et al. 2023. EM-DAT: The Emergency Events Database. Preprint. https://doi.org/10.21203/rs.3.rs-3807553/v1. Accessed: 2024-10-21.
[5] Jiancheng Gu, Zhengtao Xie, Jiandong Zhang, and Xinhao He. 2024. Advances in Rapid Damage Identification Methods for Post-Disaster Regional Buildings Based on Remote Sensing Images: A Survey. Buildings 14, 4 (April 2024), 898. https://doi.org/10.3390/buildings14040898
[6] Kent Kellogg, Pamela Hoffman, Shaun Standley, Scott Shaffer, Paul Rosen, Wendy Edelstein, Charles Dunn, Charles Baker, Phillip Barela, Yuhsyen Shen, Ana Maria Guerrero, Peter Xaypraseuth, V. Raju Sagi, C. V. Sreekantha, Nandini Harinath, Raj Kumar, Rakesh Bhan, and C. V. H. S. Sarma. 2020. NASA-ISRO Synthetic Aperture Radar (NISAR) Mission. In 2020 IEEE Aerospace Conference, 1–21. https://doi.org/10.1109/AERO47225.2020.9172638
[7] Masashi Matsuoka and Fumio Yamazaki. 2004. Use of Satellite SAR Intensity Imagery for Detecting Building Areas Damaged Due to Earthquakes. Earthquake Spectra 20, 3 (Aug. 2004), 975–994. https://doi.org/10.1193/1.1774182
[8] Masashi Matsuoka and Fumio Yamazaki. 2005. Building Damage Mapping of the 2003 Bam, Iran, Earthquake Using Envisat/ASAR Intensity Imagery. Earthquake Spectra 21, 1_suppl (Dec. 2005), 285–294. https://doi.org/10.1193/1.2101027
[9] Takeshi Motohka, Yukihiro Kankaku, Satoko Miura, and Shinichi Suzuki. 2021. Overview of ALOS-2 and ALOS-4 L-band SAR. In 2021 IEEE Radar Conference (RadarConf21), 1–4. https://doi.org/10.1109/RadarConf2147009.2021.9454977
[10] Darren Muff, Vladimir Ignatenko, Ozan Dogan, Leszek Lamentowski, Pierre Leprovost, Matthew Nottingham, Andrea Radius, Tino Seilonen, and Valentyn Tolpekin. 2022. The ICEYE constellation: some new achievements. In 2022 IEEE Radar Conference (RadarConf22). IEEE, 1–4.
[11] Simon Plank. 2014. Rapid Damage Assessment by Means of Multi-Temporal SAR — A Comprehensive Review and Outlook to Sentinel-1. Remote Sensing 6, 6 (June 2014), 4870–4906. https://doi.org/10.3390/rs6064870
[12] Anirudh Rao, Jungkyo Jung, Vitor Silva, Giuseppe Molinario, and Sang-Ho Yun. 2023. Earthquake building damage detection based on synthetic-aperture-radar imagery and machine learning. Natural Hazards and Earth System Sciences 23, 2 (Feb. 2023), 789–807. https://doi.org/10.5194/nhess-23-789-2023
[13] Caleb Robinson, Ritwik Gupta, Simone Fobi Nsutezo, Erick Pound, Anthony Ortiz, Melissa Rosa, Kevin White, Rahul Dodhia, Andrew Zolli, Cameron Birge, and Juan M. Lavista Ferres. 2023. Turkey Earthquake Report. Technical Report MSR-TR-2023-7. Microsoft. https://www.microsoft.com/en-us/research/publication/turkey-earthquake-report/
[14] Oliver L. Stephenson, Tobias Köhne, Eric Zhan, Brent E. Cahill, Sang-Ho Yun, Zachary E. Ross, and Mark Simons. 2022. Deep Learning-Based Damage Mapping With InSAR Coherence Time Series. IEEE Transactions on Geoscience and Remote Sensing 60 (2022), 1–17. https://doi.org/10.1109/TGRS.2021.3084209
[15] Ramon Torres, Paul Snoeij, Dirk Geudtner, David Bibby, Malcolm Davidson, Evert Attema, Pierre Potin, Björn Rommen, Nicolas Floury, Mike Brown, et al. 2012. GMES Sentinel-1 mission. Remote Sensing of Environment 120 (2012), 9–24.
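A coherence-difference damage proxy map of the kind discussed in [7,8] and [14] amounts to comparing pre-event and co-event interferometric coherence: pixels that were coherent before the earthquake but decorrelate across it are flagged as possibly damaged. A minimal synthetic sketch (the 0.3 threshold and the scene values are illustrative only):

```python
import numpy as np

# Simulated coherence rasters in [0, 1].
rng = np.random.default_rng(1)
coh_pre = np.clip(rng.normal(0.8, 0.05, (50, 50)), 0, 1)  # pre-event pair: urban, coherent
coh_co  = coh_pre.copy()
coh_co[20:30, 20:30] -= 0.5        # simulated damage: strong co-event decorrelation
coh_co = np.clip(coh_co, 0, 1)

dpm = coh_pre - coh_co              # damage proxy map: large drop = likely damage
damaged = dpm > 0.3                 # illustrative threshold
```

As the abstract notes, vegetation and other temporal decorrelation produce the same signature, which is why later methods add time series, optical context, and ShakeMap priors.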
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Advancing UAV-based Natural Hazard Assessment with High-Performance Photogrammetry

Authors: Marco La Salandra
Affiliations: University Of Bari
This study explores the critical role of Unmanned Aerial Vehicles (UAVs) in addressing natural hazards, an urgent issue intensified by climate change. UAVs excel in disaster management through rapid deployment, adaptability, and cost-effectiveness, providing high-resolution spatiotemporal data essential for near real-time disaster impact assessments. However, challenges such as data processing demands necessitate innovative solutions. To address these challenges, the study introduces a high-performance UAV photogrammetry workflow based on open-source tools and implemented on the ReCaS-Bari HPC cluster. Utilizing distributed computing, GPU optimization, and a hybrid bundle adjustment strategy, the workflow significantly reduces processing time, achieving a 70% improvement over commercial software and up to an 86% reduction compared to other cluster-based methods. The system processes extensive datasets, analyzing up to 11,549 images (~100 GB) in 7.8 hours and enabling rapid surveys over large areas with exceptional accuracy. A case study of a May 2023 flood event along the Basento River, southern Italy, demonstrates the workflow's capabilities. The system facilitated morphological change detection along a 3-km river section in three hours, supporting emergency teams with timely, accurate insights. The workflow's technical foundation, leveraging parallel processing, GPU resources, and optimized file systems, delivers significant efficiency gains, particularly for large datasets. While hybrid bundle adjustment introduces minor delays for smaller datasets, its advantages for large-scale operations are pronounced. Future enhancements, such as integrating GNSS RTK georeferencing and cloud-based data transmission, could further improve performance and broaden applications. This approach represents a cost-effective, scalable tool for disaster management, bridging the gap between data collection and actionable insights. 
By delivering rapid, high-resolution analysis, it enhances emergency response, situational awareness, and resilience to natural hazards.
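The speedups described above come largely from distributing the embarrassingly parallel stages of photogrammetry (e.g. per-image feature extraction) across workers. This is a conceptual sketch only, not the authors' HPC implementation; `extract_features` is a hypothetical stand-in for the real per-image task:

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

def extract_features(image_id: int) -> tuple:
    """Stand-in for a per-image feature-extraction task; each image can be
    processed independently, so the stage scales with the worker count."""
    digest = hashlib.sha256(str(image_id).encode()).hexdigest()[:8]
    return image_id, digest

image_ids = range(64)  # a real run would list thousands of UAV frames
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(extract_features, image_ids))
```

On a cluster the executor would be replaced by a job scheduler distributing blocks of images across nodes, with GPUs handling the dense matching stages.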
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Multi-sensor assessment of vegetation dynamics, its drivers and connection to land degradation processes in Chile over the last 40 years

Authors: Daniela Rivera-Marin, Dr Booker Ogutu, Prof. Jadu Dash
Affiliations: University of Southampton
Chile's diverse ecosystems and biodiversity value represent a challenge for sustainable development, and the task of balancing economic growth, inclusion and environmental protection is critical. Chilean biodiversity faces threats from climate change and anthropogenic activities such as urbanisation, agricultural expansion, deforestation and industry-led afforestation. To develop effective policies for managing Chile's vegetation and ensuring the continued provision of ecosystem services, it is essential to understand the temporal and spatial dynamics of vegetation and their interactions with climate and human activities. This study aimed to identify the primary drivers behind vegetation dynamics that can explain different processes of land management and land degradation. It analysed 96.2% of the continental territory of Chile to assess vegetation dynamics, specifically browning and greening trends, using the AVHRR and MODIS 16-day catalogues over the last 40 years. Climate variables, such as changes, trends and patterns in precipitation and temperature, were examined using ERA5 data from ECMWF. Finally, for the analysis of anthropogenic activities, land cover and land use change over the last 40 years was evaluated using Landsat TM and OLI. All analyses were conducted over the time frame from 1981 to 2022, with each variable assessed using trend and time series analysis to determine significant trends over time. A pixel size of 5 km x 5 km was used to facilitate the categorisation of scenarios, favouring the assessment and correlation analysis for each variable used. Our findings indicated that approximately 4.1% (~30,850 km²) of Chilean territory has experienced browning over the past 40 years, distributed across various regions and vegetation types, suggesting ongoing land degradation processes. Conversely, about 7% (~59,600 km²) of the territory shows a greening effect, predominantly in temperate climates. 
Climate trends and overall change over the last 40 years in Chile are unambiguous: precipitation is decreasing and temperature is increasing along the entire country. The precipitation decline presented different behaviours across the country, which is expected considering its climate distribution, but a decrease of roughly 0.3 mm per month per year can be estimated for the entire country. Similarly, temperature registered an average increase of 0.5°C at the national level over the last four decades. The correlation between the two climate variables is consistent with these findings. Climate variations and trends do not explain the entirety of the vegetation dynamics found in the country: for both greening and browning scenarios, at least 18% of the total area correlated significantly with areas where precipitation is decreasing and temperature is increasing, while the other 82% did not present a strong and significant trend that would identify climate variation as the main driver. In terms of land cover change and anthropogenic interaction with the observed vegetation dynamics, forestry is the main cause of greening in Chile: within the 7% of greening, ~73% corresponded to a land cover change to forest, consistent with national efforts and the economic growth of the forestry industry (afforestation). For the browning area, only ~3.2% could be explained by a change towards desert/barren characteristics, while ~89% has presented desert/barren characteristics since 1984. Finally, combining both sets of results, climate and anthropogenic drivers, few browning areas combine a significant precipitation decrease and temperature increase with a significant land cover change such as a change to desert/barren cover. 
This low probability of finding both results over a browning area complicates the determination of the land degradation processes that are leading to a decrease in vegetation health, greenness and distribution. Conversely, greening dynamics show a strong combination of both factors: greening areas with a significant trend of precipitation decrease and temperature increase are also those where, in terms of land cover change, there is a change of land cover type to forest cover or use, which explains why vegetation does not display the decrease that would be expected if only climate variation were considered. In summary, four different sources of EO information were used to determine vegetation, climate and anthropogenic trends and patterns over the last 40 years in Chile. These insights are crucial for informing policy and conservation strategies aimed at mitigating land degradation, desertification and drought, and promoting sustainable land use practices in Chile.
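The greening/browning classification described above rests on per-pixel trend estimation over the NDVI time series. A minimal sketch with two synthetic pixels (the slopes and sinusoidal "noise" are invented for illustration; the study additionally tests trend significance, which is omitted here):

```python
import numpy as np

# Annual NDVI series for the 1981-2022 window, one value per year.
years = np.arange(1981, 2023)
t = years - years.mean()                            # centred time axis

ndvi_green = 0.40 + 0.002 * t + 0.01 * np.sin(t)    # synthetic greening pixel
ndvi_brown = 0.40 - 0.003 * t + 0.01 * np.sin(t)    # synthetic browning pixel

def slope(y):
    """Least-squares NDVI slope per year; sign gives greening vs browning."""
    return float(np.polyfit(t, y, 1)[0])

trend_green = slope(ndvi_green)   # positive -> greening
trend_brown = slope(ndvi_brown)   # negative -> browning
```

Applied to every 5 km x 5 km pixel, the slope sign and significance produce the browning/greening maps that are then cross-tabulated with the climate and land cover trends.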
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Solving Local Challenges with Global Insights: A Comprehensive Spaceborne and In-Situ Analysis of the Klodne Landslide, Poland

Authors: Zbigniew Perski, Petar Marinkovic, Maria Przyłucka, Piotr Necieruk, Tomasz Wojciechowski
Affiliations: Polish Geological Institute - National Research Institute, PPO.Labs
On June 1, 2010, intense and prolonged rainfall triggered the Klodne landslide in the Polish Carpathians, Beskid Wyspowy Mountains, near Limanowa. The slope had previously been considered stable; the catastrophic event displaced 17 houses and two farm buildings within days. While current displacement rates have decreased significantly, the landslide remains active, raising critical questions: will the site stabilize naturally over time, or could future extreme rainfall events reactivate hazardous movement? To address these uncertainties, the Polish Geological Institute’s Geohazards Center launched an integrated monitoring program, combining in-situ and spaceborne datasets. Local-scale measurements include GNSS, terrestrial and UAV-based laser scanning, and borehole sensors for subsurface monitoring. These efforts are enhanced by global satellite-based observations, specifically Synthetic Aperture Radar (SAR) interferometry techniques such as Persistent Scatterer Interferometry (PSI) and Small Baseline Subset (SBAS) time-series analyses, conducted using TerraSAR-X and Sentinel-1 data. To overcome the challenge of insufficient radar reflectors in vegetated areas, six radar corner reflectors (CRs) were deployed in December 2018. These CRs, designed for compatibility with both ascending and descending Sentinel-1 geometries, also function as GNSS validation points, ensuring consistency across datasets. The integration of all Sentinel-1 tracks enabled nearly daily displacement measurements, decomposed into vertical and horizontal components. These displacement data were cross-analyzed with environmental parameters, including daily precipitation, groundwater table fluctuations, soil moisture, vegetation indices (NDVI), and surface temperature. Key findings reveal a pronounced seasonal signal in surface displacements, driven by variations in the groundwater table, as well as short-term correlations between movement velocities and rainfall events. 
For example, groundwater table fluctuations of just a few meters corresponded to significant increases in surface movement velocities, with a clear lag time identified between rainfall events and slope response. These insights advance the understanding of hydrological triggers for landslides and provide actionable information for early warning systems. The novelty of this study lies in its multidisciplinary approach, which integrates high-temporal-resolution interferometric SAR data, CR-enhanced radar reflectivity, and in-situ environmental monitoring. By bridging global and local scales, the methodology offers a robust framework for landslide monitoring and predictive modeling. Importantly, the findings have broad implications, highlighting the potential to adapt similar techniques for monitoring landslides in other regions prone to hydrological instability. This presentation will detail the methodological innovations, data integration strategies, and implications of these findings for landslide risk assessment and management, demonstrating the critical role of global satellite data in solving local geohazards.
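Decomposing ascending and descending line-of-sight (LOS) displacements into vertical and east-west components, as described for the combined Sentinel-1 tracks, reduces to solving a small linear system. A sketch under simplified assumptions (a single 39° incidence angle, symmetric east projections, and neglect of the north-south component, which Sentinel-1 resolves poorly; real processing uses per-pixel geometry):

```python
import numpy as np

inc = np.deg2rad(39.0)                         # assumed incidence angle
# Rows map the [east, up] motion vector onto each LOS direction.
A = np.array([[-np.sin(inc), np.cos(inc)],     # ascending pass
              [ np.sin(inc), np.cos(inc)]])    # descending pass

d_true = np.array([0.004, -0.012])             # 4 mm eastward, 12 mm subsidence
d_los = A @ d_true                             # what the two geometries observe

d_est = np.linalg.solve(A, d_los)              # recover east and up components
```

With corner reflectors visible in both geometries, the recovered components can be validated directly against the co-located GNSS measurements.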
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Estimating the reactivation times and velocities of slow-moving landslides and investigating their potential relations with precipitation in Central Italy

Authors: Ebrahim Ghaderpour, Ms Claudia Masciulli, Dr Marta Zocchi, Professor Francesca Bozzano, Professor Gabriele Scarascia Mugnozza, Professor Paolo Mazzanti
Affiliations: Sapienza University Of Rome
Effective monitoring of slow-moving landslides is an important task that can help mitigate damage to infrastructure, including railroads, buildings, and pipelines. Following a devastating seismic sequence during 2016–2017, Central Italy has experienced many landslides, including slow-moving landslides further triggered by precipitation. Persistent Scatterer Interferometric Synthetic Aperture Radar (PS-InSAR) is an advanced remote sensing method for monitoring ground deformation. The Sequential Turning Point Detection method (STPD) is a fast and robust method for estimating turning points and their directions within time series. In this study, we analyzed PS-InSAR time series derived from Sentinel-1 and COSMO-SkyMed with STPD within areas of interest susceptible to landslides in Central Italy. We produced and classified monthly maps of significant turning points and their directions for the years 2018, 2019, 2020, and 2021. We found that more than 80% of the detected turning points in the PS-InSAR time series had a direction between −4 and 4 mm/year. In addition, we analyzed monthly Global Precipitation Measurement (GPM) images with STPD to investigate when the precipitation rate changed and how it might have reactivated slow-moving landslides. We found that Marche and Abruzzo (the coastal regions) showed an insignificant change in precipitation rate, while Umbria and Lazio showed a significant increase in precipitation from 2017 to 2023. The coastal regions also exhibit relatively lower precipitation amounts. Our findings indicate a strong correlation between the trend turning dates of the accumulated precipitation and displacement time series, especially for Lazio during summer and fall 2020, where a relatively more significant precipitation rate of change is observed. We also observed a strong Pearson correlation, greater than 0.7, between GPM (satellite-based) and local (station-based) precipitation, with similar STPD results. 
Our results may guide stakeholders and responsible authorities in risk management and in mitigating damage to infrastructure.
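STPD is the authors' published algorithm; purely as an illustration of the underlying idea, a single trend turning point and its direction (slope change) can be located in a displacement series by exhaustively testing two-segment least-squares line fits. The function names and the synthetic series below are illustrative assumptions, not the study's data or implementation.

```python
def fit_line(t, y):
    """Ordinary least-squares line fit; returns (slope, sum of squared errors)."""
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    den = sum((ti - tm) ** 2 for ti in t)
    slope = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y)) / den
    intercept = ym - slope * tm
    sse = sum((yi - (slope * ti + intercept)) ** 2 for ti, yi in zip(t, y))
    return slope, sse

def turning_point(t, y, min_seg=3):
    """Pick the breakpoint that best splits the series into two linear trends.

    Returns (turning time, slope change); the slope change is in units of y
    per unit of t (e.g. mm/year for a displacement series).
    """
    best = None
    for k in range(min_seg, len(t) - min_seg + 1):
        s1, e1 = fit_line(t[:k], y[:k])
        s2, e2 = fit_line(t[k:], y[k:])
        if best is None or e1 + e2 < best[0]:
            best = (e1 + e2, t[k], s2 - s1)
    return best[1], best[2]

# Synthetic series: stable for 12 epochs, then subsiding at 2 mm per epoch.
t = list(range(24))
y = [0.0] * 12 + [-2.0 * (i + 1) for i in range(12)]
tp, dslope = turning_point(t, y)
```

A negative slope change flags an acceleration of subsidence, which is the kind of direction information the monthly turning-point maps classify.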
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Super-Resolution for Volcano Monitoring: Enhancing Satellite Image Precision

Authors: Giovanni Salvatore Di Bella, Dr. Claudia Corradino, Dr. Ciro Del Negro
Affiliations: INGV - National Institute of Geophysics and Volcanology, Etna Volcano Observatory
Volcanic activity is among the most dynamic and hazardous natural phenomena, making continuous monitoring essential to mitigate its impacts on human health, the environment, and infrastructure. Satellites are critical tools for global volcano surveillance, particularly in remote areas, enabling thermal monitoring to detect early signs of activity such as lava flows, gas emissions, and ash clouds. High spatial resolution images, such as those from Landsat-8 and Sentinel-2, are invaluable for identifying key volcanic features. However, their low temporal resolution, with acquisition intervals of approximately 16 days, limits their ability to capture rapid changes typical of many eruptions. This temporal gap can result in the loss of crucial information about eruption dynamics and their evolving impacts. To address these challenges, advanced image enhancement techniques, particularly super-resolution methods, offer promising solutions for improving the quality of images acquired by satellites with higher temporal frequency but lower spatial resolution, such as Moderate Resolution Imaging Spectroradiometer (MODIS) and Spinning Enhanced Visible and Infrared Imager (SEVIRI). Super-resolution, powered by neural networks, increases the spatial detail and sharpness of such images, enhancing both their visual quality and the accuracy of radiometric and morphological analyses. By making fine-scale volcanic features more evident, this technique improves both manual and automated analysis, enabling more effective volcanic monitoring. Recent innovations in machine learning, such as diffusion models, further extend these capabilities. Diffusion models iteratively add and remove noise from an image, reconstructing missing details while preserving essential features. They are particularly effective for addressing degradation caused by factors such as cloud cover or sensor limitations. 
By combining data from multiple sources and resolutions, diffusion models generate high-quality, detailed images continuously over time, opening new possibilities for near real-time volcanic monitoring. The integration of super-resolution and diffusion models represents a significant advancement in satellite-based volcano monitoring. This combined approach enables precise and timely analysis of volcanic processes, enhances the understanding of eruption dynamics, and strengthens response capabilities during emergencies, marking a major step forward in volcanic hazard assessment and mitigation.
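As a hedged illustration of the baseline that learned super-resolution aims to surpass, the sketch below upsamples a coarse thermal grid by plain bilinear interpolation (toy values, pure Python); diffusion-based methods replace this fixed interpolation with learned detail reconstruction.

```python
def bilinear_resize(grid, new_h, new_w):
    """Upsample a 2-D grid (list of lists) to new_h x new_w by bilinear interpolation."""
    h, w = len(grid), len(grid[0])
    out = []
    for i in range(new_h):
        # Map output row back to a fractional position in the input grid.
        y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = int(y)
        y1 = min(y0 + 1, h - 1)
        fy = y - y0
        row = []
        for j in range(new_w):
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = int(x)
            x1 = min(x0 + 1, w - 1)
            fx = x - x0
            top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
            bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# Toy 2x2 "thermal" grid upsampled to 3x3.
up = bilinear_resize([[0.0, 2.0], [4.0, 6.0]], 3, 3)
```

Interpolation of this kind adds no new information; the gain claimed for neural super-resolution is precisely the recovery of fine-scale structure that such a baseline cannot provide.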
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Multi-source Monitoring of Harmful Algal Blooms (HABs) in the Northern Arabian Gulf

Authors: Alyaa Dashti, Adrian Bass, Brian Barrett
Affiliations: School of Geographical & Earth Sciences, University Of Glasgow
Changes in the marine environment can have a direct impact on biodiversity and human health. In Kuwait, sewage effluent along the shoreline supplies the Gulf area with nutrients, which, alongside similar effluent activities in Iraq and Iran, cause rapid growth of harmful algae populations, a phenomenon known as harmful algal blooms (HABs). Beyond the visual water discoloration caused by phytoplankton blooms due to increased light backscattering and algal pigment absorption, these events place additional strain on the water system by blocking sunlight, depleting oxygen, and releasing toxins. This ultimately causes red tides and fish-killing phenomena, leading to deleterious effects on marine life in general, fisheries and human health. It is therefore critical to monitor changes in the marine environment in order to forecast the location and magnitude of hazardous events and enable decision-makers to take mitigating action before these phenomena grow and become too difficult to deal with. This research aims to monitor algal blooms using machine learning models and multi-source satellite data. The study focuses on the northern region of the Arabian Gulf, specifically Kuwait's waterbody. Coarse spatial resolution (MODIS) and medium resolution (Sentinel-2 & Landsat) satellite imagery and several machine learning models (e.g. Long Short-Term Memory (LSTM), random forest, and XGBoost) are used to identify the optimum model for predicting essential variables (e.g., chlorophyll, surface temperature, turbidity) that can influence the formation of algal blooms. The results of this study could subsequently be incorporated into a forecasting tool to predict, manage, and inform mitigation strategies for algal bloom events. Such models can also complement field studies by suggesting the most promising areas to explore or investigate, saving time and cost.
Identifying and predicting such events will enable decision-makers to intervene early and thus potentially prevent large-scale damage, such as HAB outbreaks or fish kills.
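The abstract does not specify how the model inputs are assembled; a common pattern for feeding LSTM-, random-forest- or XGBoost-style models is to build supervised samples from lookback windows over the aligned monthly variables. The variable names and toy values below are hypothetical, purely to show the data layout.

```python
def make_supervised(series, lookback=3, target="chl"):
    """Turn aligned time series into (features, target) pairs.

    series: dict of variable name -> list of values, all the same length.
    Each sample uses the previous `lookback` steps of every variable to
    predict the next value of `target`.
    """
    n = len(series[target])
    X, y = [], []
    for i in range(lookback, n):
        row = []
        for name in sorted(series):          # fixed variable order
            row.extend(series[name][i - lookback:i])
        X.append(row)
        y.append(series[target][i])
    return X, y

# Hypothetical monthly chlorophyll and sea-surface temperature values.
series = {
    "chl": [1, 2, 3, 4, 5, 6],
    "sst": [20, 21, 22, 23, 24, 25],
}
X, y = make_supervised(series, lookback=3)
```

The resulting X/y pairs could be passed to any of the candidate regressors for the model comparison the abstract describes.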
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: IMAGE CORRELATION WEBSERVICES APPLIED TO OPTICAL AND SAR SATELLITE OBSERVATIONS: USEFULNESS FOR LANDSLIDE MONITORING

Authors: Floriane Provost, David Michéa, Jean-Philippe Malet, Elisabeth Pointal, Antoine Lamielle, Nicolas d’Oreye, Dominique Derauw, Claude Boniface
Affiliations: EOST - École et Observatoire des Sciences de la Terre, CNRS / Université de Strasbourg, ITES - Institut Terre et Environnement de Strasbourg, CNRS / Université de Strasbourg, Data-Terra / Pole de Données et de Services FormaTerre, CNRS, Centre Européen de Géodynamique et de Sismologie, ECGS, Centre Spatial de Liege, CSL, CNES - Centre National Des Etudes Spatiales, Toulouse
Measuring terrain deformation is relevant for many applications in Earth Sciences (e.g. understanding active faults, volcanoes, landslides or glaciers). Nowadays, radar interferometry has proven its capacity to monitor a variety of slow-moving phenomena from local to continental scale (e.g. with the European Ground Motion Service). However, the technique remains limited when the ground moves at faster rates. In these cases, offset tracking techniques are more suitable to monitor the ground deformation. The Research Infrastructure DATA-TERRA, through its Solid Earth data hub FormaTerre, has developed and implemented the GDM (Ground Deformation Monitoring) family of webservices, accessible to the scientific community for both techniques. The GDM-OPT services were developed to offer a complete processing chain for performing multi-temporal offset tracking of Sentinel-2 optical images. Three services were specifically designed for three main applications: 1) GDM-OPT-ETQ for measuring co-seismic deformation, 2) GDM-OPT-Slide for measuring landslide motion and 3) GDM-OPT-ICE for monitoring glacier surface velocity. To complete the portfolio of offset tracking techniques, two new services have been developed and are now operational: 1) GDM-OPT-Survey, for near-real-time and systematic monitoring of specific areas in routine mode, and 2) GDM-SAR-cor, an on-demand offset tracking service for multi-temporal SAR images. Indeed, with now nearly 9 years of operation, the Sentinel-2 satellites provide global coverage of the Earth every 5 days at 10 m resolution. This offers great potential for systematic and near-real-time monitoring and for designing early warning systems. The GDM-OPT-Survey service has been designed for this purpose: processing automatically every new cloud-free Sentinel-2 acquisition over a designated area of interest in order to compute up-to-date time series of terrain motion.
To achieve this goal, the original GDM-OPT workflow has been adapted in order to 1) detect the cloud coverage over the Area of Interest (AOI) to include or reject each image in the processing, 2) automatically detect the tiles and orbits covering the AOI in order to compute all image pairs per tile and per orbit, 3) merge the images per orbit in order to provide maximum coverage of the AOI and 4) invert the displacement time series for the archive and every new cloud-free Sentinel-2 acquisition. The service works with the MicMac photogrammetric library and the time series inversion is performed with the TIO algorithm. Sentinel-1 now offers ten years of archive with global coverage every 12 days (6 days between 2016 and 2021) at 5 m x 20 m spatial resolution. Radar images are less sensitive to clouds and atmospheric conditions in comparison to optical sensors. Consequently, the Sentinel-1 archive presents significant interest for monitoring fast motion with offset-tracking techniques. Moreover, if a signal can be measured in both ascending and descending directions, radar offset tracking provides the 3D vector of the deformation. Therefore, the GDM-SAR-cor service is designed to compute deformation time series from radar offset tracking. The workflow follows the main processing steps of the GDM-OPT services with an additional pre-processing of the Level-1 SLC images using the AMSTer mass processing chain. The correlation is performed with the MicMac photogrammetric library, with parameters adapted to SAR images. Finally, the displacement time series can be inverted with the MSBAS algorithm and with TIO. The current version of the service ingests Sentinel-1 images and can process images from the same orbit. First, we present in detail the workflows developed for these two new services. The services are tested on two landslides, the Vitor Valley (Peru) and the La Valette landslide (France), and validated with in-situ measurements and sensor inter-comparison.
Second, we show the new capabilities of the GDM offset tracking services by merging the output of the different services to retrieve regular time series of the 3D deformation at medium resolution (20 m x 20 m), offering new perspectives to monitor slow-moving landslides (> 1 m/year).
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: ENSO+IOD and Soil Moisture: Analysing the relationship between atmospheric patterns and East African Soil Moisture from 1988-2022

Authors: Zoe Helbing, Dr. José
Affiliations: University Of Valencia
Soil moisture (SM) is one of the most significant attributes driving land and atmospheric interactions. It directly affects the status of entire ecosystems and their respective agricultural productivity, which can lead to issues with food security in arid and semi-arid regions. As climate change is known to increase the frequency of extreme events like floods and droughts, understanding SM variations is essential for improving model predictions and decision-making processes. Previously described as one of the most drought-prone regions in the world, Eastern Africa has historically been affected by fluctuations in precipitation and temperature, which further exacerbate food and water availability issues. Partially responsible for the interannual rainfall fluctuations are atmospheric patterns like the El Niño-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD), which influence overall precipitation intensity and quantity. This poster aims to illustrate the effects of these atmospheric patterns over Eastern Africa, using the ESA Climate Change Initiative Soil Moisture (ESA CCI SM) combined product for SM estimates and the National Oceanic and Atmospheric Administration's (NOAA) data for the atmospheric patterns. The ESA CCI SM product was selected as it combines data from multiple microwave satellites, which reflect SM values in the first centimetres of soil, into one product. This enables further in-depth analysis due to greater image availability. The data is made available from 1978 onward, but due to limitations in image frequency during the first years, the time frame for this study was adjusted to 1988-2023. After testing the combined dataset's performance against its active and passive counterparts, as well as other sensor outputs, the relationship between the atmospheric patterns and SM was tested using cross-correlation analysis, as it allows the incorporation of potential lagging effects.
The results illustrate strong regional differences, with Somalia and Kenya showing weak positive, significant coefficients, while South Sudan has significant, weak negative values. The next steps include adding additional variables like elevation or vegetation cover to better explain regional patterns.
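A minimal sketch of the lagged cross-correlation step, assuming monthly series for a climate index and soil moisture; the series below are synthetic and the study's exact procedure may differ.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_correlation(index, sm, max_lag=6):
    """Correlate a climate index with soil moisture shifted by 0..max_lag months."""
    return {lag: pearson(index[:len(index) - lag], sm[lag:])
            for lag in range(max_lag + 1)}

# Synthetic monthly series: soil moisture follows the index with a 2-month lag.
index = [math.sin(i / 3.0) for i in range(60)]
sm = [0.0, 0.0] + index[:-2]
corr = lagged_correlation(index, sm)
best_lag = max(corr, key=corr.get)
```

Scanning the lags like this is what allows a delayed soil-moisture response to ENSO/IOD forcing to be detected rather than missed by a zero-lag correlation.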
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Remote sensing analysis of forest disturbance and recovery at volcanoes worldwide

Authors: Megan Udy, Dr Susanna Ebmeier, Sebastian Watt, Professor Andy Hooper
Affiliations: School of Earth and Environment, University Of Leeds, School of Geography, Earth and Envrionmental Sciences, University of Birmingham
Understanding the impact of volcanic eruptions on surrounding environments is crucial for improving hazard mapping and managing risks in the pursuit of the Sustainable Development Goals. Forests surrounding volcanoes are vulnerable to a range of hazards such as pyroclastic flows, lahars, tephra deposition, lava flows, and gas emissions. These hazards cause varying degrees of damage, influenced by factors such as eruption intensity, forest density, and local environmental conditions. The subsequent vegetation recovery is also influenced by both the intensity of the eruption and the environmental setting. We utilise optical and radar satellite data to analyse and compare forest damage and recovery at 15 volcanoes in varying regions and forest types. We assess how forest disturbance from volcanic eruptions varies with hazard type and environmental setting, as a way to understand volcanic deposit distribution and improve the use of vegetation damage as a proxy for volcano monitoring. Optical remote sensing, widely used to monitor vegetation health through indices like the Normalised Difference Vegetation Index (NDVI), provides valuable insights but is often hindered by persistent cloud cover in tropical regions. To address this limitation, we also utilise Sentinel-1 radar imagery, using backscatter and coherence to detect vegetation changes resulting from volcanic activity. By combining these datasets and applying time series and cluster analyses, we identify patterns in vegetation damage and recovery. This integrated approach is particularly valuable for monitoring volcanoes in remote or inaccessible settings where field campaigns are not feasible. This approach reveals insights into how the magnitude of initial damage, duration of volcanic activity and local environmental factors (including latitude and forest type) influence the amount of damage and subsequent recovery rates.
We also explore what these findings can reveal about the resilience of vegetation exposed to repeated disturbance, such as in volcanic settings. This research highlights the importance of multi-sensor data fusion in monitoring natural hazards and their ecological impacts. By bridging the capabilities of optical and radar platforms, our methodology contributes to a more integrated understanding of hazard dynamics and the use of vegetation as a proxy for monitoring volcanic activity and understanding eruption history.
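The NDVI mentioned above has a standard closed form, NDVI = (NIR − Red) / (NIR + Red). As a toy sketch, NDVI differences between pre- and post-eruption imagery can flag likely damaged forest pixels; the reflectance values below are illustrative, not the study's data.

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def vegetation_loss(pre, post):
    """Per-pixel NDVI change between pre- and post-eruption (nir, red) pairs.

    Strongly negative values flag likely vegetation damage.
    """
    return [ndvi(*b) - ndvi(*a) for a, b in zip(pre, post)]

pre = [(0.5, 0.1), (0.6, 0.1)]   # healthy forest: high NIR, low red reflectance
post = [(0.3, 0.3), (0.6, 0.1)]  # first pixel buried by tephra, second intact
loss = vegetation_loss(pre, post)
```

In cloud-prone tropical settings the same per-pixel differencing logic is applied to Sentinel-1 backscatter or coherence instead, which is why the multi-sensor combination matters.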
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Fusion of Cross-band SAR Datasets for Enhanced PS-InSAR Based Land Subsidence Monitoring in the Chiayi Region, Taiwan

Authors: Hsun-yan Liu, Prof. Ya-Lun Tsai
Affiliations: Earth Observation and Remote Sensing Lab, Surveying and Geospatial Engineering Division, Department of Civil Engineering, National Taiwan University
In 2023, the Taiwanese government designated approximately 620.6 square kilometers as significant subsidence areas, with average annual subsidence rates exceeding 3 cm. Among these affected regions, western Taiwan faces particularly serious challenges of land subsidence, with the Chiayi region emerging as one of the most critical areas. Due to excessive groundwater extraction for rice cultivation and aquaculture operations, this region experiences subsidence rates of up to 6.4 mm annually and accounts for 40% of Taiwan's total affected area. The extensive scale of land subsidence, combined with the presence of the High-Speed Rail system, has made precise subsidence monitoring and implementation of preventive measures under current regulatory standards an urgent challenge. Previous studies have demonstrated that Persistent Scatterer InSAR (PS-InSAR), compared to traditional surveying methods such as leveling and GNSS, offers advantages including cost-effectiveness, broad spatial coverage, and high monitoring density for surface subsidence monitoring. This technique enables interpretation of general deformation trends and can be further integrated with conventional monitoring data to provide three-dimensional deformation measurements with wide coverage and high spatio-temporal resolution, facilitating comprehensive subsidence monitoring and city-scale prediction. However, earlier studies have typically utilized single-wavelength radar imagery for long-term deformation detection within the same monitoring period, predominantly using Sentinel-1 imagery due to cost considerations. The limitations of different sensors' wavelengths, resolutions, and coverage areas have prevented PS-InSAR monitoring data from achieving: (1) county-scale monitoring coverage, (2) high-density monitoring points in critical areas (such as historically significant subsidence zones) and (3) high-precision monitoring across mixed land cover areas.
These limitations have impeded subsequent integration with GNSS and leveling data, as well as subsidence model development, ultimately preventing results from meeting government operational standards and practical application needs due to precision constraints. This study aims to obtain more comprehensive and high-precision radar surface subsidence information through cross-wavelength image fusion to enable subsequent value-added applications. The research utilizes one year of concurrent Sentinel-1 IW mode (C-band with 20×5 m resolution) and high-resolution TerraSAR-X StripMap mode images (X-band with 3 m resolution), applying the Persistent Scatterer InSAR (PS-InSAR) method to monitor subsidence in Taiwan's Chiayi region across different land cover types. The results were validated against leveling measurements to examine the spatial error distribution before data fusion, which reveals an accuracy of 8.77 mm/yr. Furthermore, by computing the global Moran's I, it is found that there is significant spatial autocorrelation in the error distribution (index 0.57 at the 99% confidence level), demonstrating the feasibility of high-precision cross-wavelength subsidence data fusion. The comprehensive subsidence fusion information developed in this study is expected to reduce systematic errors with spatial autocorrelation characteristics across different wavelengths, thereby providing higher-precision surface subsidence monitoring results. Also, it enables integration with other geodetic measurements (such as GNSS and leveling) through its complete coverage of the study area. This integrated approach provides more robust and diverse subsidence information, which can assist government agencies in the statutory designation of significant subsidence areas and serve as a data source for prediction models, providing timely and effective potential risk information to support decision-making by relevant authorities.
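Global Moran's I, used above to test spatial autocorrelation of the validation residuals, can be sketched with binary neighbour weights as follows. The site values and adjacency below are toy inputs; the study's actual weighting scheme is not specified in the abstract.

```python
def morans_i(values, neighbours):
    """Global Moran's I with binary (0/1) spatial weights.

    values: list of measurements (e.g. PS-InSAR minus levelling residuals).
    neighbours: dict mapping site index -> list of neighbouring indices
    (must be symmetric).
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    denom = sum(d * d for d in dev)
    num, w_sum = 0.0, 0
    for i, nbrs in neighbours.items():
        for j in nbrs:
            num += dev[i] * dev[j]   # cross-product for each weighted pair
            w_sum += 1               # total weight (binary weights)
    return (n / w_sum) * (num / denom)

# Five sites along a line with rook adjacency; a smooth gradient in the
# residuals gives positive spatial autocorrelation.
values = [1, 2, 3, 4, 5]
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
i_stat = morans_i(values, neighbours)
```

Values well above the expectation of −1/(n−1) indicate spatially clustered errors, which is the property the fusion step exploits to remove systematic, spatially correlated biases.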
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: SAR Interferometry to Detect Badlands Erosion

Authors: Angelo Sozio, Dr. Rosa Colacicco, Dr. Alberto Refice, Dr. Marco La Salandra, Dr. Fabio Bovenga, Dr. Antonella Belmonte, Dr. Domenico Capolongo
Affiliations: Department of Earth and Geoenvironmental Sciences, University Of Bari, Institute for the Electromagnetic Sensing of the Environment (IREA), National Research Council of Italy
Badlands are typical landforms on clayey, bare and sparsely vegetated slopes, characterized by high erosion rates due to water washout [1]. Their evolution and processes have important implications for land planning and resources management. In particular, monitoring and predicting erosion processes within badlands catchments is strategic for assessing downstream consequences and natural hazards, such as landslides, debris flows, piping, soil erosion, fluvial erosion and deposition. Erosion rates and processes have traditionally been studied through in situ measurements or indices derived from satellite optical imagery. However, these approaches are often associated with significant uncertainties, particularly when large areas or long periods need to be studied. Recently, coherence measurements from Interferometric Synthetic Aperture Radar (InSAR) have been proposed as a tool to observe soil erosion phenomena in badlands with high spatial and temporal resolution [2]. Time series of Sentinel-1 SAR images, acquired in interferometric wide swath (IW) mode, were collected for the study area in both ascending and descending geometries. Each time series consisted of over 300 images spanning a five-year period. The temporal resolution began at 12 days during the first year and improved to 6 days from 2016 to December 2021, enabled by the Sentinel-1B sensor (which ceased radar data operations in December 2021 due to an anomaly). Through sub-pixel precision coregistration, interferograms were generated between image pairs with short temporal baselines. Stacks of coherence images with fixed temporal baselines were analyzed alongside time series of cumulative daily rainfall data from nearby rain gauge stations. Additionally, each coherence time series was fitted to a periodic function. The average coherence over badland areas was higher than in adjacent areas with natural vegetation (shrubs or Mediterranean scrub) or cultivated land. 
Episodes of partial coherence loss in gullies were temporally correlated with cumulative rainfall data for the intervals between InSAR acquisitions. The challenging climatic conditions of the study area complicate the analysis of individual rainfall events and their impact on spatial coherence [e.g., 3]. Nevertheless, statistical analysis revealed a significant correlation between cumulative rainfall over short intervals (6–18 days) and abrupt decreases in InSAR coherence. In contrast, coherence time series in vegetated or cultivated areas exhibited lower rainfall correlation but displayed a significant seasonal trend (p-values below 0.1 over large areas). This study presents further experiments aimed at investigating badlands erosion processes. The analysis integrates satellite PS-InSAR observations, including coherence and phase measurements, with geomorphic digital elevation models and rainfall data, across various test sites in the Basilicata region of southern Italy. This region features two primary badlands morphologies: Calanchi and Biancane [4, 1]. An analysis integrating UAV-based LiDAR and RGB data was conducted to provide a robust framework to complement the Sentinel-1 InSAR analysis. These datasets facilitated the generation of high-resolution point clouds (PCs) and digital elevation models (DEMs), enabling precise quantification of morphological changes over time and the detailed mapping of surface features, including drainage density (rills and gullies) and vegetation cover. This approach was instrumental in detecting precursory signs of evolving instability by monitoring geomorphic processes and correlating them with InSAR observations. These findings confirm the ability to monitor badlands soil erosion processes with remote sensing techniques, at high spatial and temporal resolution. They also emphasize the potential for expanding this approach to a global scale, given the widespread presence of badlands and bare soils prone to surface erosion.
Through the integration of UAV-derived data with InSAR coherence time series, the study achieved a deeper and more precise understanding of erosion dynamics, enhancing the accuracy and detail of spatial and temporal erosion patterns across the study area.

References
[1] Gallicchio, S., Colacicco, R., Capolongo, D., Girone, A., Maiorano, P., Marino, M., Ciaranfi, N. (2023). Geological features of the Special Nature Reserve of Montalbano Jonico Badlands (Basilicata, Southern Italy). Journal of Maps. https://doi.org/10.1080/17445647.2023.2179435
[2] Refice, A., Partipilo, L., Bovenga, F., Lovergine, F.P., Nutricato, R., Nitti, D.O., Capolongo, D. (2022). Remotely sensed detection of badland erosion using multitemporal InSAR. IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, pp. 5989-5992. https://doi.org/10.1109/IGARSS46834.2022.9883555
[3] Cabré, A., Remy, D., Aguilar, G., Carretier, S., Riquelme, R. (2020). Mapping rainstorm erosion associated with an individual storm from InSAR coherence loss validated by field evidence for the Atacama Desert. Earth Surface Processes and Landforms, vol. 45, pp. 2091–2106. https://doi.org/10.1002/esp.4868
[4] Alexander, D.E. (1982). Difference between "calanchi" and "biancane" badlands in Italy. In: R. Bryan, A. Yair (Eds.), Badland Geomorphology and Piping, Geo Books, Norwich, UK, pp. 71-88.
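The fitting of each coherence time series to a periodic function, described in the abstract, can be sketched as an annual harmonic least-squares fit solved via 3×3 normal equations. The synthetic 6-day-repeat coherence series below is illustrative only; the study's actual periodic model is not specified.

```python
import math

def solve3(m, v):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    a = [row[:] + [val] for row, val in zip(m, v)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (a[r][3] - sum(a[r][c] * x[c] for c in range(r + 1, 3))) / a[r][r]
    return x

def fit_seasonal(t_days, y, period=365.25):
    """Least-squares fit of y ~ a + b*cos(wt) + c*sin(wt); returns (mean, amplitude)."""
    w = 2 * math.pi / period
    rows = [[1.0, math.cos(w * t), math.sin(w * t)] for t in t_days]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    a, b, c = solve3(ata, aty)
    return a, math.hypot(b, c)

# Synthetic 6-day-repeat coherence with an annual cycle (mean 0.6, amplitude 0.2).
t = list(range(0, 730, 6))
y = [0.6 + 0.2 * math.cos(2 * math.pi * ti / 365.25) for ti in t]
mean_coh, amp = fit_seasonal(t, y)
```

Subtracting such a seasonal model is one way to separate vegetation-driven coherence cycles from the abrupt, rainfall-driven coherence drops associated with erosion events.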
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Data Fusion and Change Detection Techniques Based on Optical and SAR EO Data for Damage Mapping and Multi-Temporal Assessment of the Recovery Process After Natural Disasters

Authors: Wilson Andres Velasquez Hurtado, Deodato Tapete, Armando Marino
Affiliations: Sapienza University of Rome, Italian Space Agency (ASI), University of Stirling
The assessment and appropriate analysis of the risks associated with natural disasters caused or exacerbated by climate change is crucial for mitigation planning, preparedness and organisation of the territory for adaptation to climate change. This implies that mapping damage during the emergency is no longer sufficient. More effort needs to be made to monitor what happens after the disaster: how the territory and the local community recover, and whether reconstruction brings an improvement in disaster preparedness or, instead, is undertaken without addressing the lessons learnt or, worse, in a way that increases susceptibility to the same or other hazards. Therefore, this paper presents an analysis of the impacts and recovery period after a natural disaster event and the role that satellite data, both optical and Synthetic Aperture Radar (SAR), can play. The recovery period after a natural disaster is generally divided into two phases: immediate post-disaster emergency and long-term reconstruction. Each phase requires different approaches in the use of satellite data. During the post-disaster emergency phase, optical and radar satellites are essential for the rapid identification and mapping of affected areas, as well as depiction of the first-response activities to help the affected population (e.g. installation of temporary tents, debris accumulation and clearance). These data allow critical decisions to be made for aid distribution, evacuation planning and the identification of priority areas for initial interventions. However, extended monitoring of the recovery process has received considerably less attention, creating a significant gap in the use of EO resources.
The Committee on Earth Observation Satellites (CEOS) Working Group on Disasters has developed the Recovery Observatory concept which nowadays stands as a model for EO-based monitoring after disasters (https://www.gfdrr.org/sites/default/files/publication/Use_of_EO_Satellites_012322020_D_LOW-RES.pdf). Users increasingly demand quality geoinformation to address both the emergency and post-emergency phases. Currently, the use of satellite data decreases considerably after the initial phase of first response and damage mapping, and not enough detailed and timely geoinformation is provided during the reconstruction and restoration of affected locations. This lack of continuity in data analysis and user-oriented products limits the ability to adequately assess progress and challenges in recovery. Additionally, while during emergencies it is more likely that the affected countries can receive support from organisations, NGOs and volunteer mappers, in the months to years after the event the generation of EO-based products should be undertaken by local institutions, academia and technical officers of the bodies in charge of reconstruction and land planning. However, this is a real challenge and requires, at least, EO data accessibility, IT infrastructure to manage the whole process from data storage to product generation, and operators with skills in data processing. These constraints are even more relevant in developing countries. In this context, this paper proposes a methodology that addresses the above issues, using advanced data fusion techniques applied to optical and SAR time series from the European Copernicus programme. The use of Sentinel-1 and Sentinel-2 (also in combination with Landsat collections and very high resolution commercial imagery accessed for free) was intentional, to develop a method that can be implemented in future by users without the barrier of EO data accessibility.
The combination of these EO sources allows not only accurate detection of changes in the territory, but also multi-temporal monitoring of specific areas in order to map damage, assess recovery and analyse reconstruction patterns. The methodology focuses on integrating different observation frequencies with multi-temporal EO data, during both the emergency phase and the recovery and reconstruction phase, by means of changes in the backscatter levels in radar images, further supported and combined with optical data. To illustrate its applicability, two case studies are presented. The first relates to the impact and recovery process from Hurricane Matthew that hit Haiti in 2016, while the second focuses on the 2021 and 2024 floods in the Mojana region of Colombia. These cases demonstrate how the combination of satellite techniques can contribute significantly to reconstruction process monitoring and resilience planning in post-disaster scenarios. The first case study, on Hurricane Matthew, which in 2016 devastated large areas of Haiti, causing severe damage to infrastructure, agriculture and natural ecosystems, used multi-temporal SAR and optical satellite imagery to analyse changes on the island during the reconstruction process. A detailed mapping of the immediate impacts of the hurricane was produced, which included the zones of destruction of urban infrastructure, areas flooded by the hurricane's heavy rains, loss of extensive agricultural areas with significant reduction of vegetation cover, and impact on the protected areas and natural parks. Satellite data made it possible to identify the most affected areas and to establish criteria for the assessment of the recovery process. The temporal analysis revealed differentiated patterns of recovery. In terms of infrastructure, significant reconstruction changes were observed in priority urban areas and temporary infrastructure such as refugee shelters and centres for people affected by the disaster.
In the agricultural sector, extensive areas of cultivation were identified as abandoned or transformed, reflecting not only the difficulties in restoring production systems, but also possible socio-economic changes resulting from the disaster. In terms of vegetation cover, the data showed a progressive recovery in some protected areas, mostly driven by the natural process of forest growth. The results indicate that recovery in Haiti varied significantly according to the type of damage and the priority assigned to each sector. Satellite data also helped to identify patterns of land-use change, such as transformation into residential or commercial areas during the reconstruction phase, as well as new areas that, unfortunately, appear to be built in at-risk zones. This information is crucial to assess the efficacy of recovery strategies and to guide future interventions. This first case study highlights the importance of multi-temporal data in post-disaster planning, not only to assess progress, but also to anticipate long-term challenges in land use planning. The second case study addresses flooding in the Mojana region of Colombia, which is a crucial ecosystem for agricultural production and biodiversity, and which faces increasing risks due to climate change. During the La Niña phase in 2021, this region experienced the longest flooding on record, with devastating effects on agricultural systems, livestock and local communities. Furthermore, vast areas of the territory were still flooded years after the event when, in May 2024, a second disastrous flood occurred and the International Charter was activated (https://disasterscharter.org/web/guest/activations/-/article/flood-in-colombia-activation-883-). This case study used advanced image fusion techniques based on SAR and optical data, using machine learning methods available and run in Google Earth Engine.
The analysis was therefore designed to be replicable by users without the barrier of specialised data-processing software and tools. The main objective was to map the affected areas, evaluate the magnitude of the impact and analyse the recovery process in this strategic region over 2024 and, later, 2025. The results show that the floods significantly altered the water balance of the region, creating immediate and long-term challenges. In the agricultural sector, important crops like rice suffered extensive damage, and pastures for livestock were rendered unusable for long periods due to soil saturation and their subsequent transformation into wetland areas. Comparison of pre- and post-event satellite imagery identified areas where production systems managed to partially recover and others that remained abandoned. The results also revealed impacts on the infrastructure and forests in the area, including buildings that were abandoned and then dismantled by the inhabitants themselves, as well as roads that connected different villages within the region. Analysis of the forest areas with optical data revealed that areas flooded for long periods took on a light brown colour, indicating the loss of green vegetation due to the impact of flooding. These findings suggest uneven recovery, influenced by both natural conditions and the lack of implementation of adaptive strategies by management entities. Furthermore, the results warn that the increasing frequency of floods puts the future of the region at risk, underlining the need for comprehensive policies to strengthen adaptive capacity. Analysis throughout 2025 will show how the recovery process progresses further. These case studies highlight the critical role of satellite data in the continuous monitoring of areas vulnerable to climate change.
By providing detailed multi-temporal information, these tools not only allow for the assessment of the recovery of affected areas, but also for the design of mitigation and forward planning strategies that take into account the increasing incidence of extreme events in the context of the global climate crisis.
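The backscatter-change detection underlying this kind of damage and flood mapping can be sketched with a minimal, self-contained example: a pre/post dB difference (log-ratio of linear power) of calibrated SAR images with an illustrative change threshold. The data, the 3 dB threshold and the function name are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def backscatter_log_ratio(pre_db, post_db, threshold_db=3.0):
    """Flag pixels whose backscatter changed by more than `threshold_db`.

    pre_db, post_db: 2-D arrays of calibrated backscatter in dB (e.g.
    temporally averaged pre-/post-event Sentinel-1 stacks). The threshold
    is illustrative; real studies calibrate it against reference damage
    or flood observations.
    """
    diff = post_db - pre_db            # dB difference == log-ratio of linear power
    decrease = diff < -threshold_db    # e.g. flooding darkens smooth water surfaces
    increase = diff > threshold_db     # e.g. rubble can raise backscatter
    return diff, decrease | increase

# toy example: a 4x4 scene where one pixel darkens strongly after the event
pre = np.full((4, 4), -8.0)
post = pre.copy()
post[1, 2] = -15.0                     # 7 dB drop, e.g. new standing water
diff, changed = backscatter_log_ratio(pre, post)
print(changed.sum())                   # prints 1
```

In practice the same comparison is run per acquisition date to build the multi-temporal picture of impact and recovery described above.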

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: VOLCPLUME, an interactive web platform for the multiscale monitoring of volcanic emissions and their impacts on the atmosphere

Authors: Marie Boichu, Dr Raphael Grandin, Théo Mathurin, Nicolas Pascal, Deroo Christine, Pr Colette Brogniez, Maximilien Patou, Sylvain Neut, Cédric Tétard, Pr Jérôme Riedi, Luc Blarel, Pr Philippe Goloub
Affiliations: CNRS/Univ. Lille - Laboratoire d'Optique Atmosphérique, Institut de Physique du Globe de Paris, Univ. Paris Cité, ICARE Data and Services Center, CNRS, CNES UMS 2877, Univ. Lille
The open access VOLCPLUME web platform (https://volcplume.aeris-data.fr) is part of the Volcano Space Observatory portal under development within the framework of the Horizon Europe EOSC FAIR EASE project. This web interface aims to support the near-real-time monitoring of volcanic emissions and their multi-scale impacts on atmospheric chemistry, air quality, aviation safety and climate (Boichu and Mathurin 2022). To reach this goal, VOLCPLUME allows users to jointly analyse a broad set of satellite and ground-based active/passive remote sensing observations of both volcanic gas and particles, including Low Earth and Geostationary Orbit imagery, spaceborne and ground-based lidar, as well as photometric measurements. The platform also gives access to in-situ ground-level data from air quality monitoring networks. This synergy enables users to detect and isolate a volcanic plume signature down to the ground, from the source to regional or global scales. A companion web application, named “SO2 Flux Calculator” (https://dataviz.icare.univ-lille.fr/so2-flux-calculator), has also been developed to automate the estimation of daily SO2 gas flux emissions from S5P/TROPOMI observations with robust noise estimation (Grandin et al. 2024). Such interactive tools allow users to remotely track changes in the degassing or eruptive activity of any isolated or non-instrumented volcano, crucial information for accurately initialising models of volcanic plume dispersion and robustly assessing atmospheric hazards. Regarding volcano monitoring, it represents an additional tool for better understanding eruptive dynamics and local volcanological hazards. Hence, VOLCPLUME is an interdisciplinary platform relevant for atmospheric scientists and institutions in charge of air quality or aviation hazards, but also for the Earth Science community, in particular volcanologists and volcanological observatories.
For illustration, we present different case studies including the eruptions of Cumbre Vieja (La Palma, Canary Islands), Piton de La Fournaise (La Réunion), Soufrière Saint-Vincent (Lesser Antilles) and Hunga Tonga.
References:
Boichu, M. and Mathurin, T. (2022). VOLCPLUME, an interactive web portal for the multiscale analysis of volcanic plume physico-chemical properties [Interactive Web-based Resource], AERIS, DOI: 10.25326/362. Portal access: https://volcplume.aeris-data.fr, Homepage: https://www.icare.univ-lille.fr/volcplume/
Grandin, R., Boichu, M., Mathurin, T. and Pascal, N. (2024). Automatic estimation of daily volcanic sulfur dioxide gas flux from TROPOMI satellite observations: application to Etna and Piton de la Fournaise. J. Geophys. Res.: Solid Earth, 129, e2024JB029309. https://doi.org/10.1029/2024JB029309

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Lava flow monitoring from Synthetic Aperture Radar

Authors: Edna W Dualeh, Juliet Biggs, Jemima Gosling, Simona Cariello
Affiliations: COMET, University Of Bristol, Istituto Nazionale di Geofisica e Vulcanologia
Volcanic activity can completely reshape the Earth’s surface, from the emplacement of lava flows and ash to catastrophic explosive events that destroy volcanic edifices. Satellite imaging has emerged as a powerful tool for monitoring volcanic eruptions, providing new insights where other remote sensing or ground-based observations may be limited by inaccessibility or hazardous environments. Monitoring lava flow emplacement is crucial for understanding volcanic behaviour and mitigating associated hazards, such as the destruction of infrastructure and livelihoods, and loss of life. It also aids understanding of future eruptions and can be decisive in hazard communication and eruption response. However, tracking the progression and morphology of lava flows, particularly continuously during an eruption, remains a challenge. This study demonstrates the potential of satellite data for monitoring lava flows. We focus on Synthetic Aperture Radar (SAR) backscatter, showing that pre-processing filters and temporal analysis techniques significantly improve flow extraction accuracy. Applying pre-processing filters to SAR data reduces noise and improves the accuracy of lava flow extraction, with a 2.8× improvement in extracted flow extent. Thresholding techniques, such as Otsu’s method, successfully extracted lava flow boundaries in simple cases, while more complex scenarios – characterised by varied backscatter changes or high levels of non-volcanic noise – benefited from temporal analysis. For example, at Erta ‘Ale volcano, Ethiopia, pixel-based temporal analysis achieved 79% accuracy in flow extent extraction compared to manual inspection, with rougher flow surfaces emplaced farthest from the vent.
Further, we compare our results with flow maps extracted from phase coherence, which indicates changes in the ground surface, to validate and refine our backscatter observations, and with Sentinel-2 optical data, which captures thermal anomalies, to provide additional insights into the active lava flow. The combination of flow maps from these datasets allows for a more complete and dynamic mapping of lava flow extents, as well as insight into areas of active flow, cooling and surface characteristics over time. Our findings demonstrate the suitability of SAR backscatter for monitoring lava flow progression and morphology, enabling further analysis of different flow textures across the flow field. The comparison of SAR with optical data offers an accurate approach for mapping and examining lava flow emplacement. As the global population continues to grow, the risk from volcanoes amplifies as more people live in proximity to a volcano, emphasising the need for near-real-time monitoring. Remote sensing datasets like SAR and optical data are critical tools for tracking volcanic hazards, providing frequent observations for flow models and allowing for better eruption response and risk assessment.
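Otsu's method, named above for extracting lava flow boundaries in simple cases, chooses the histogram split that maximises between-class variance. A self-contained sketch (NumPy only; the synthetic backscatter-change data and bin count are illustrative assumptions, not the study's processing chain):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: pick the histogram split that maximises the
    between-class variance of a (roughly bimodal) distribution."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()                  # bin probabilities
    w0 = np.cumsum(w)                      # class-0 weight at each split
    w1 = 1.0 - w0
    mu = np.cumsum(w * centers)            # class-0 unnormalised mean
    valid = (w0 > 1e-12) & (w1 > 1e-12)
    between = np.zeros_like(w0)
    # between-class variance: (mu_T*w0 - mu)^2 / (w0*w1)
    between[valid] = (mu[-1] * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# synthetic backscatter change: unchanged terrain near 0 dB, new lava near +6 dB
rng = np.random.default_rng(0)
change = np.concatenate([rng.normal(0.0, 0.5, 5000), rng.normal(6.0, 0.5, 1000)])
t = otsu_threshold(change)
lava_mask = change > t                     # threshold lands between the modes
```

The same idea is what `skimage.filters.threshold_otsu` implements; the temporal analysis the abstract describes replaces this single split when backscatter changes are too heterogeneous for one threshold.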

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Optimizing GNSS Processing for Integrating InSAR Persistent Scatterer Displacement into the ITRF Using Integrated Geodetic Reference Stations

Authors: Alexandru Mihai Lapadat, Dr. ir. Hans van der Marel, Freek van Leijen, Ramon Hanssen
Affiliations: Mathematical Geodesy and Positioning, Delft University of Technology
InSAR Persistent Scatterer (PS) displacement products are essential for monitoring and mapping natural hazards. Typically, these products are referenced to a reference pixel or scatterer within the radar image datum, selected using statistical metrics such as the Normalized Amplitude Dispersion (NAD). However, this approach prioritizes scatterers with stable amplitude without considering potential displacements of the reference scatterer itself. The Integrated Geodetic Reference Station (IGRS) offers a physical benchmark designed to integrate multiple geodetic techniques (such as spirit leveling, GNSS, gravimetry, airborne laser scanning, and InSAR) into a unified global reference frame, i.e., the International Terrestrial Reference Frame (ITRF). Using GNSS, the IGRS enables continuous monitoring of the 3D displacements of the benchmark itself, providing a controlled and robust alternative for the reference scatterer. This study evaluates various methods for processing raw GNSS data to optimize the integration of InSAR PS phase measurements within the ITRF and correct for the movement of the reference scatterer. A case study uses three nearby IGRSs (200 m to 3.4 km baseline distances) to test different processing scenarios, including: 1) short baseline versus precise point positioning (PPP); 2) integration in the measurement domain (displacement) versus the parameter space (e.g., velocity); 3) daily position versus instantaneous position at the time of the SAR acquisition, or versus a smoothed time series near the time of the SAR acquisition; and 4) solutions with or without earth-tide corrections. The results are first evaluated relative to each other, and subsequently compared between two approaches: (i) referencing InSAR PS displacement products to a single IGRS using the selected GNSS processing method, and (ii) referencing them to the average of a PS cluster near the same IGRS (as if only simple GNSS sensors were available).
The comparison underscores the critical role of kinematic monitoring and correction of reference scatterer movement in producing accurate InSAR PS displacement products.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: F.04.05 - POSTER - In-land Water Quality and Resources Management

Water is key to sustainable development, being critical for socio-economic development, energy and food production, and healthy ecosystems. Today, water scarcity affects more than 40 percent of the world’s population and is projected to rise further, exacerbated by climate change. Limited access to water supply and sanitation, more water-intensive patterns of consumption and production, increasing rainfall variability, and pollution are combining in many places to make water one of the greatest risks to economic progress, poverty eradication and sustainable development. As the global population grows, there is an increasing need to balance the competing demands for water resources and to manage water supply more efficiently and sustainably.

The importance of ensuring availability, quality and sustainable management of water for all has been increasingly addressed in the global political agenda, as seen with the Sustainable Development Goals (SDGs) of the UN 2030 Agenda for Sustainable Development and with the adoption of the International Decade 2018-2028 for Action on ‘Water for Sustainable Development’ by the UN General Assembly. Water touches every aspect of development and is linked to almost every Sustainable Development Goal.

Earth Observation is increasingly seen as an essential source of information which can complement national data and support countries to collect regular information on the use and changes to their water resources for more informed policy decisions on water resource management.

The session will present the latest scientific advances on the use of Earth observations for Water Quality and Water resources management, discuss opportunities and challenges which lie ahead for mainstreaming EO into sustainable management of waters and future paths of research.

Topics of interest for the session include (but are not limited to):
- Multi-sensor approaches to the monitoring of seasonal and annual changes in surface water extent,
- Monitoring of changes in surface water level from satellite radar altimetry,
- EO approaches for monitoring changes in lake volume,
- Integration of EO in hydrodynamic/hydrological modelling to infer information on river discharges,
- EO solutions for water use estimation (e.g., for irrigated crops),
- Inland water pollution (water quality),
- River sediment dynamics (erosion risk potential, sediment transport estimates),
- Impact of hydropower dams on river flows and morphology,
- Monitoring of groundwater resources (groundwater recharge modelling, groundwater estimation),
- Drought forecasting.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Towards highly precise altimetry for inland waters: A 4D approach for range determination in level-1B data

Authors: Shahin Khalili, Mohammad J Tourian, Dr.-Ing. Omid Elmi, Johannes Engels, Nico Sneeuw
Affiliations: Institute of Geodesy, University Of Stuttgart
Inland altimetry, which has expanded beyond its origins in oceanography, requires greater precision in range determination to exploit the unique possibility of monitoring the water level of lakes and rivers. The success of the inland altimetry technique depends on the performance of retracking methods, which determine the satellite-to-surface range. The Bin-Space-Time (BiST) retracker previously demonstrated robustness by integrating spatial and temporal domains in a stack of level-2 radargrams; however, challenges remain in addressing complex waveform variations in diverse inland water conditions. We extend this approach to level-1B data by incorporating a four-dimensional (4D) cost function that accounts for dependencies in the beam look, bin, space, and time domains:
- Bin-Look: considering the angle-dependent variation for enhanced discrimination between water, non-water, and secondary water regions in view;
- Space: maintaining spatial continuity between neighbouring measurements along the altimetry track;
- Time: integrating the evolution of reflected power across multiple cycles.
This additional information in the beam look domain leverages the reflected power from multiple look angles, a capability unique to SAR altimetry processing, to improve the determination of the tracker range. To find the retracking line of the FFSAR image, the overall cost function is minimized using optimization algorithms. The resulting retracking line balances the different dependencies and ensures a robust and accurate determination of the range. Unlike regular retracking of level-2 data products, analysing the 4D information simultaneously by leveraging the FFSAR technique provides significant advantages, including improved spatial resolution and reduced land contamination. We evaluate this enhanced strategy on level-1B data products over various lakes and reservoirs of different shapes and sizes in the United States, where USGS water level gauge records are available.
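As a rough illustration of the cost-minimisation idea (not the authors' 4D BiST formulation), the sketch below solves a much-simplified two-term analogue over a toy radargram: a data term rewarding returned power plus a spatial-continuity penalty between neighbouring along-track positions, minimised exactly by dynamic programming. All array shapes, weights and names are hypothetical.

```python
import numpy as np

def retracking_line(power, smooth_weight=1.0):
    """Toy analogue of a multi-domain retracking cost minimisation.

    For each along-track position we pick one range bin; the cost rewards
    high returned power (data term) and penalises jumps between the bins
    chosen at neighbouring positions (spatial-continuity term). A real
    BiST-style retracker adds look-angle and temporal terms on top.
    """
    n_pos, n_bins = power.shape
    data_cost = -power                       # prefer high power
    bins = np.arange(n_bins)
    jump = smooth_weight * np.abs(bins[:, None] - bins[None, :])
    cost = data_cost[0].copy()
    back = np.zeros((n_pos, n_bins), dtype=int)
    for i in range(1, n_pos):                # forward DP pass
        total = cost[:, None] + jump         # transition from bin j to bin k
        back[i] = np.argmin(total, axis=0)
        cost = total[back[i], bins] + data_cost[i]
    line = np.empty(n_pos, dtype=int)        # backtrack the optimal line
    line[-1] = int(np.argmin(cost))
    for i in range(n_pos - 1, 0, -1):
        line[i - 1] = back[i, line[i]]
    return line

# toy radargram: a bright return near bin 10, one spurious bright outlier
rg = np.zeros((6, 20))
rg[:, 10] = 5.0
rg[3, 10] = 0.0
rg[3, 2] = 6.0                               # off-nadir bright spot
line = retracking_line(rg, smooth_weight=1.0)
print(line)                                  # the line stays at bin 10
```

With `smooth_weight=0` the line would jump to the outlier at position 3; the continuity term keeps it on the water return, which is the role the space (and time) domains play in the full cost function.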

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A novel cyanobacteria occurrence index derived from optical water types

Authors: Davide Lomeo, Stefan G H Simis, Dr Xiaohan Liu, Nick Selmes, Mark A Warren, Anne D Jungblut, Emma J
Affiliations: King's College London, Natural History Museum, Plymouth Marine Laboratory
Cyanobacteria blooms are a threat to the water quality of lakes and reservoirs worldwide, requiring scalable monitoring solutions. Existing approaches for remote sensing of cyanobacteria focus on quantifying (accessory) photosynthetic pigments to map surface accumulations. These approaches have proven challenging to validate against in situ observations, limiting uptake in water quality management. Additionally, while individual algorithms perform well under specific conditions, they often fail to capture the full spectrum of bloom dynamics in optically complex waters, particularly when cyanobacteria are mixed within the water column or during transitional bloom phases. Optical Water Types (OWTs) have been used in inland and ocean waters to dynamically select suitable algorithms over optical gradients. However, current OWT frameworks for lakes are derived from limited in situ observations, potentially missing key optical features of cyanobacteria blooms. This is particularly evident in the inability of current frameworks to capture the various stages of bloom development, from early formation to surface accumulation, and to account for the diverse optical manifestations of cyanobacteria presence across different environmental conditions. Here, we extend an OWT framework through K-means clustering of >18 million Sentinel-3 OLCI spectra. The extended library of 25 water types captures all cyanobacteria bloom phases, including those in optically mixed conditions. We translate this framework into a novel Cyanobacteria Occurrence Index (COI) by assigning weights to key optical features such as phycocyanin absorption and surface accumulation. COI correlates strongly with established algorithms for chlorophyll-a (Maximum Peak Height; r = 0.9) and phycocyanin (Simis07; r = 0.84), while providing a more comprehensive and intuitive representation of bloom dynamics across diverse optical conditions.
The index successfully synthesises information typically derived from multiple algorithms, offering a more complete picture of cyanobacteria presence and occurrence. Examples mapped onto a three-category risk classification facilitate communication of cyanobacteria occurrence risk. Tests across diverse water bodies globally demonstrate the potential of COI for widespread use in cyanobacteria monitoring and management in optically complex waters worldwide. This new approach enables proactive management strategies by providing an intuitive understanding of bloom phases and associated risks, bridging the gap between remote sensing capabilities and water quality management needs. The ability of the framework to adapt to varying optical conditions makes it particularly valuable for operational monitoring across different geographical and environmental contexts.
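The cluster-then-weight idea behind an OWT-derived occurrence index can be sketched as follows. Everything here is a toy stand-in: three-band synthetic "spectra" instead of OLCI bands, a plain Lloyd's k-means with deterministic farthest-point initialisation instead of the 25-class library, and a green/blue ratio instead of the expert-assigned weights.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal Lloyd's k-means with greedy farthest-point initialisation
    (deterministic stand-in for clustering millions of OLCI spectra)."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

# toy "spectra" with bands [blue, green, NIR]:
# clear water (low green/NIR) vs a bloom-like class (elevated green/NIR)
clear = np.random.default_rng(1).normal([0.02, 0.01, 0.005], 0.002, (200, 3))
bloom = np.random.default_rng(2).normal([0.01, 0.04, 0.030], 0.002, (200, 3))
X = np.vstack([clear, bloom])
centers, labels = kmeans(X, k=2)

# hypothetical per-class weight expressing cyanobacteria likelihood
# (green/blue prominence of the class mean; illustrative only)
weights = centers[:, 1] / centers[:, 0]
coi = weights[labels]            # per-pixel occurrence-index value
```

Mapping each classified pixel through a per-class weight is what turns a discrete water-type map into a continuous index that can then be binned into risk categories.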

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Time series analysis of Earth Observation data for water cycle science, anthropogenic impact and climate change: test cases in Iraq and Italy

Authors: Eleonora Azzarone, Dr. Ing. Deodato Tapete, Dr Francesca Cigna
Affiliations: Università Roma La Sapienza, Italian Space Agency (ASI), National Research Council (CNR), Institute of Atmospheric Sciences and Climate (ISAC)
The Mediterranean Basin is among the regions of the world most vulnerable to climate change, with evidence (retrieved from instrumental measurements) of rising average temperatures, decreasing annual precipitation alternated with sudden very intense and heavy rainfall, and a consequent increasing frequency of extreme events such as droughts and floods. The situation is no better across the Middle East where, for example, Iraq is listed among the five countries most vulnerable to climate change impacts. These changes significantly impact the water cycle, altering the availability of water resources, water quality, and the ecological dynamics of both land and sea. The countries surrounding the Mediterranean Basin are also heavily influenced by anthropogenic activities, such as urbanization, intensive agriculture, and industrialization, which exert additional pressure on natural systems, modifying water flows and the distribution of resources. Similarly, several regions in the Middle East are witnessing the spread of intensive industrial-scale agriculture, which leads to the overexploitation of water resources and damage to the landscape due to poorly regulated modern development. Sustainable management of water resources in these regions has thus become one of the main challenges for environmental and climate policies. Given the distribution and extent of the affected regions, as well as the rapid acceleration of trends that have been developing over the past decades, Earth Observation (EO) data from missions such as Landsat and the Copernicus Sentinels, which provide global data coverage, offer an effective means to track changes with temporal regularity and wide spatial coverage.
Therefore, this study addresses the complex interactions between climate change, anthropogenic impacts, and the water cycle in the Middle East and Mediterranean Basin regions most affected by climate change and desertification, using a multi-sensor approach based on the analysis of time series of optical and Synthetic Aperture Radar (SAR) satellite data. The chosen variable to observe is the extent of natural water bodies (either ephemeral or permanent), artificial reservoirs and marshlands, for the following reasons: (i) they represent one of the main sources of water for multiple uses (potable, residential, commercial, agriculture, energy production) and, as such, have been exploited for decades. Reservoirs and dams have often been built across many Middle East and Mediterranean Basin countries to ensure water supply, frequently through huge and expensive engineering works. However, in several cases, they failed to accomplish the intended purpose and led to severe impacts on landscape and communities; (ii) they are very sensitive to drought and changed rainfall regimes, thus their fluctuations in water level can be utilised as a reliable proxy to infer climatological trends and the associated impact on water availability. Of the many areas being investigated in this research, the present paper presents and compares the results achieved over southern Iraq (from south of Baghdad to Basra) and Sicily, southern Italy. These two regions were selected in light of the ongoing water crisis that, in the case of Sicily, also led to the declaration of a state of emergency by the regional authorities in 2024. Furthermore, they both encompass several natural and artificial water bodies. In particular, southern Iraq is characterised by a wide variety of bodies, from permanent and ephemeral lakes (e.g. Hammar Lake, Najaf Sea, Hor Al-Shuwaija) and artificial reservoirs (e.g. Razzaza, El Delmej) to marshlands (i.e.
the Ahwar of Southern Iraq UNESCO World Heritage Site, which have significantly shrunk from their former extent in ancient times). The analysis of water body extent and level was undertaken by combining optical and SAR EO datasets spanning the last 40 years. Optical data from the Landsat and Sentinel-2 collections were used to derive detailed information on land cover, vegetation and surface water resources, and thus to infer seasonal variations to relate to the rainfall regimes recorded for the investigated regions. The Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI) and moisture indices enabled automatic per-pixel image classification, followed by thresholding and, when needed, manual refinement, to assess surface extent changes and spatio-temporal trends of water bodies and marshlands. To complement the analysis, SAR data were exploited. In particular, while Sentinel-1 time series were used to temporally densify the observations of water levels and extent, especially to compensate for the lack of cloud-free optical images due to adverse weather conditions, higher resolution SAR data from the Italian Space Agency (ASI)’s COSMO-SkyMed mission were used to better depict specific reservoirs in Sicily. In this case, the research benefited from the regular acquisitions collected since 2011 in the framework of the Map Italy Project. The achieved results show how EO-based estimates help unveil the complexity of the ongoing processes, and that interpretation of the observed spatio-temporal trends is not trivial. In the case of southern Iraq, although the investigated water bodies and marshlands are distributed across the same climate zone and are exposed to the same climate change forcing factors, significantly divergent behaviour is found between the various locations.
While the current condition of Razzaza Lake and the southern marshlands is the consequence of a drying and shrinking process that started in the 1990s-2000s and was driven by human actions, other lakes such as Hammar Lake and the Najaf Sea appear to be more stable bodies, especially in the last decade. However, when the EO-based water extent results are contextualised with regard to the effective use of those lakes for water supply, it emerges that water quality is poor (e.g. at Hammar Lake) and thus these bodies cannot always represent a real resource for local communities. On the other hand, the analysis of Sicilian water bodies and reservoirs reveals the clear impacts of prolonged absence of precipitation and confirms the hotspots of water scarcity. When these data are spatially related to records of the provinces where water supply reductions have been enforced, the match is apparent. The implications of this study are therefore significant for envisioning strategies for the sustainable management of water resources and the planning of climate change adaptation policies. The present research was partially undertaken in the framework of the Dragon-5 SARchaeology project [grant id. 58113] funded by ESA and the National Remote Sensing Center (NRSCC) – Ministry of Science and Technology (MOST) of the P.R. China, and the “Vanishing archaeological landscapes under anthropogenic and climate change threats” bilateral project, funded by CNR and the UK Royal Society.
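The per-pixel index-and-threshold step used for the water extent mapping can be sketched with the McFeeters NDWI on a toy scene. The reflectance values, 10 m pixel size and zero threshold are illustrative assumptions; the study combines several indices and applies manual refinement on top.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (green - NIR) / (green + NIR); water is strongly
    positive because it reflects in the green and absorbs in the NIR."""
    g = green.astype(float)
    n = nir.astype(float)
    return (g - n) / np.clip(g + n, 1e-6, None)

def water_extent_ha(green, nir, pixel_area_m2=100.0, threshold=0.0):
    """Per-pixel classification by NDWI thresholding, then surface extent.
    The zero threshold and 10 m pixels (100 m2) mirror common Sentinel-2
    practice but would normally be tuned (e.g. Otsu) and refined."""
    water = ndwi(green, nir) > threshold
    return water.sum() * pixel_area_m2 / 10_000.0   # hectares

# toy scene: 100x100 pixels with a 30x40 "lake" (high green, low NIR)
green = np.full((100, 100), 0.10); nir = np.full((100, 100), 0.30)
green[10:40, 10:50] = 0.20; nir[10:40, 10:50] = 0.05
extent = water_extent_ha(green, nir)
print(extent)   # 1200 water pixels x 100 m2 -> 12.0 ha
```

Repeating this per acquisition date yields the extent time series from which the seasonal and long-term trends discussed above are derived.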

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Improving the Co-Detection of Water Hyacinth and Traditional Fishing Traps (Acadja) With Remote Sensing in Lake Nokoué, Benin

Authors: Dr. Claire Michailovsky, Dr. Johannes van der Kwast, Dr. Belfrid Djihouessi
Affiliations: IHE Delft, Université Abomey-Calavi
Lake Nokoué is a coastal lagoon located in southern Benin. Its hydrological regime is driven by exchanges between oceanic waters and the fresh water of the Ouémé and Sô rivers, which drain an upstream catchment of approximately 45,000 km². The lake is used for many activities, including the traditional Acadja fishing technique, which involves constructing artificial fish habitats from dried branches to increase fish productivity. Unfortunately, the expansion of Acadja (covering up to 50% of the lake according to some studies) has rendered the practice unsustainable and creates environmental pressures for the lake as well as for the surrounding mangroves and forests, which are the source of the wood used in building the structures. Another challenge Lake Nokoué is facing is the proliferation of water hyacinth during high-water periods, when the salinity in the lake decreases. The water hyacinth has both environmental and economic impacts, causing water quality issues such as a reduction in dissolved oxygen, reducing natural biodiversity and making parts of the lake unnavigable. The two issues of Acadja and water hyacinth are connected, as the water hyacinth tends to get trapped and to accumulate in and around Acadja structures. In-situ monitoring of Acadja and water hyacinth is challenging due to the size of the lake and the high cost of field campaigns. Previous studies (Ovienmhada, 2020; Djihouessi et al., under review) have shown the potential of Earth Observation, particularly the Sentinel-1 (S-1) and Sentinel-2 (S-2) satellites, to detect Acadja and water hyacinth in Lake Nokoué. However, several challenges, such as cloud cover and high turbidity in parts of the lake, make it difficult to accurately monitor these issues throughout the year. The current study leverages 1) object-based classification to improve the detection of Acadja structures, and 2) the co-occurrence of Acadja and water hyacinth to improve water hyacinth monitoring.
Our classification algorithm for Acadja uses image segmentation and shape parameters calculated for the generated segments to improve the identification of the presence of Acadja on an annual basis (the structures are known to stay in place for 3 to 5 years) using S-1 and S-2 data. Once these are identified, the pixel-based classification of water hyacinth is augmented using a proximity metric to Acadja structures to reduce confusion with turbid waters and other types of vegetation. Results are compared to previous studies, which used only pixel-based classification algorithms.
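One way such a proximity metric could work is sketched below: a Euclidean distance transform to the nearest Acadja pixel feeding a distance-decaying boost of the hyacinth probability. The decay scale, the multiplicative scheme and all names are hypothetical; the abstract does not specify the study's actual metric.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def augment_hyacinth_probability(p_hyacinth, acadja_mask, pixel_size_m=10.0,
                                 decay_m=50.0):
    """Boost per-pixel water-hyacinth probability near Acadja structures.

    distance_transform_edt gives each pixel its distance to the nearest
    Acadja pixel; an exponentially decaying boost encodes the observed
    tendency of hyacinth to accumulate in and around the structures.
    The decay scale and multiplicative form are illustrative assumptions.
    """
    dist_m = distance_transform_edt(~acadja_mask) * pixel_size_m
    boost = np.exp(-dist_m / decay_m)          # 1 at a structure, -> 0 far away
    return np.clip(p_hyacinth * (1.0 + boost), 0.0, 1.0)

# toy example: uniform 0.4 prior, one Acadja pixel in the centre
prior = np.full((21, 21), 0.4)
acadja = np.zeros((21, 21), dtype=bool)
acadja[10, 10] = True
post = augment_hyacinth_probability(prior, acadja)
print(post[10, 10], post[0, 0])   # boosted at the structure, near-prior far away
```

A thresholded `post` would then replace the raw pixel-based classification wherever Acadja co-occurrence makes hyacinth more plausible than turbid water.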

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Qualitative and quantitative management of surface water from super-resolved Sentinel-2 images

Authors: Vincent Poulain, Nafissa Sfaksi, Eric Lasserre
Affiliations: Thales Services Numériques, MEOSS
This abstract presents a project to manage available water resources using Copernicus satellite data. Knowledge of available water resources and their use is essential for the implementation of various public policies, for example:
- Quantitative water management involves defining and preserving a balance between available resources, uses and the needs of aquatic environments, and to this end, controlling withdrawable volumes (structural management).
- Anticipating and managing the risk of drought involves monitoring the state of resources and implementing strategies to limit use when this state falls below a certain threshold (conjunctural management).
Both structural and conjunctural management require monitoring and knowledge of water resources. National systems enable continuous monitoring of meteorological variables, soil moisture, water table levels, river flows, etc. However, knowledge of water bodies and the status of volumes stored in reservoirs (dams and dykes) is currently very limited. This project aims to provide end-users with monthly updated information on:
- water surface area for all water bodies over 1 ha;
- stored water volumes and filling rates for dams and hillside reservoirs over 3 ha.
Area measurements are based on Sentinel-2 super-resolved images at 5 m resolution. The Sentinel-2 super-resolution algorithm was developed by Thales Services Numériques as part of an ESA-funded project. This project involved training and validation strategies for the application of a Single Image Super-Resolution (SISR) architecture to Sentinel-2 imagery using the SEN2VENμS open dataset, in order to generate 5 m resolution images from the initial 10 m and 20 m bands. The approach focused on preserving the radiometry and geometry of the input images and avoiding the introduction of artifacts. Indeed, the major difficulty in this type of training comes from the fact that pairs of Sentinel-2 and Venµs images cannot be identical.
The different viewing angles of the two satellites lead to variations in geometry and radiometry in the training dataset, despite the attention paid to these details. Hence, during initial tests on the SEN2VENµS dataset with L1 and GAN losses applied to all frequencies and a network of 18 building blocks, the generated images were visually very satisfactory, but radiometry and geometry were altered. The solution was therefore to separate the objective into low and high frequencies, with the idea that the low frequencies should retain those of Sentinel-2 while the high frequencies generated by the network should be realistic and similar to the Venµs target image. The GAN loss was thus applied only to the high-frequency part of the images, to limit the hallucination potential in the low-frequency range. In addition, the number of building blocks was reduced, limiting the network's ability to capture spatial distortions. For the SWIR bands (B11 and B12), which have no reference in the SEN2VENµS dataset, we used the Wald protocol, training the network for factor-4 super-resolution on lower-resolution data. The detection of water pixels in super-resolved Sentinel-2 images is performed by an algorithm developed by MEOSS (the WaterReserve chain). This water body detection algorithm is based on water-specific spectral indices (ANDWI, NDVI and NDPI), which are combined and then adaptively thresholded using the Otsu method. The results highlight the value of super-resolved Sentinel-2 images, which can reliably detect water surfaces of less than 1.5 ha. Overall, the results obtained with super-resolved Sentinel-2 images outperform those obtained from standard Sentinel-2 images at 10 m resolution, with an 8-point gain in accuracy for the water class (93 vs 85) and an 11-point gain in kappa (85 vs 74).
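The Otsu thresholding step mentioned above can be sketched as follows. This is an illustrative numpy-only implementation applied to a synthetic bimodal water index, not the actual WaterReserve chain (whose index combination of ANDWI, NDVI and NDPI is not reproduced here); the scene statistics are invented for the example.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold maximising the between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist.astype(float) / hist.sum()       # probability mass per bin
    cum_w = np.cumsum(w)                      # P(class "below") for each split
    cum_mu = np.cumsum(w * centers)           # cumulative mean
    mu_total = cum_mu[-1]
    # Between-class variance for every candidate split (guard against /0)
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * cum_w - cum_mu) ** 2 / (cum_w * (1.0 - cum_w))
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]

# Synthetic "water index" scene: land pixels near -0.3, water pixels near +0.5
rng = np.random.default_rng(42)
index = np.concatenate([rng.normal(-0.3, 0.1, 8000),   # land (80%)
                        rng.normal(0.5, 0.1, 2000)])   # water (20%)
t = otsu_threshold(index)
water_mask = index > t
```

Because the histogram is strongly bimodal, the threshold lands in the gap between the two modes and the resulting mask recovers the water fraction without any manually tuned cutoff.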
More importantly, SISR identifies 2,132 water bodies, compared to the 791 detected with standard data: a 160% gain in the number of detected objects and a 35% gain in detected surface area. This gain is higher for smaller water bodies, with a 40% increase in detected surface area for water bodies below 1.5 ha. Water levels, volumes and filling rates are estimated using the CNES DEM4Water open-source tool (https://github.com/CNES/dem4water). This tool establishes relationships between water surface elevation (Z), surface area (S) and reservoir volume (V). The method is based on two inputs: a digital elevation model (DEM) acquired after reservoir construction, and a water occurrence map. DEM4Water uses 3D information from a Digital Terrain Model around the water body and the shape of the surrounding valley to interpolate the estimated depth. The DEM4Water software has been tested and validated on a base of 66 instrumented sites in south-west France, Andalusia, Brazil and India. Validation shows a fill-rate uncertainty of less than 10%. Around 22,000 water bodies are currently monitored in France, and the method can be transposed to any other part of the world.
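The Z–S–V relationships at the heart of this approach can be illustrated with a simplified hypsometric sketch: given a DEM of the empty reservoir, the area at water level Z is the area of cells below Z, and the volume follows by integrating S over Z. This is not DEM4Water itself (which fits valley-shape models around the water body); the bowl-shaped DEM and the 5 m cell size are invented for the example.

```python
import numpy as np

CELL_AREA = 25.0  # m² per DEM cell (5 m resolution), illustrative

def hypsometry(dem, levels):
    """Surface area S(Z) and stored volume V(Z) from a DEM of the empty reservoir.
    S(Z) counts cells below level Z; V(Z) integrates S over the water level."""
    areas = np.array([(dem < z).sum() * CELL_AREA for z in levels])
    # Trapezoidal integration of S(Z) dZ gives the volume at each level
    volumes = np.concatenate([[0.0], np.cumsum(
        0.5 * (areas[1:] + areas[:-1]) * np.diff(levels))])
    return areas, volumes

# Toy bowl-shaped valley DEM (elevations in metres)
x = np.linspace(-1.0, 1.0, 50)
dem = 100.0 + 20.0 * (x[None, :] ** 2 + x[:, None] ** 2)

levels = np.linspace(100.0, 120.0, 41)
S, V = hypsometry(dem, levels)
fill_rate = V / V[-1]  # fraction of capacity at each water level
```

With the S(Z) and V(Z) curves established once from the DEM, each monthly satellite-derived surface area can be inverted to a level, a stored volume, and a filling rate.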

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Satellite-based Water Quality Monitoring System for East African Lakes

Authors: Emma Tebbs, Aidan Byrne, Davide Lomeo, Winnie Owoko, Kevin Obiero, Zeph Migeni, Steph Smith, Ted Laurence, John Malala, Abebe Getahun, Mulugeta Wakjira, Erick Ogello, Maurice Obiero
Affiliations: King's College London, African Centre for Aquatic Research and Education (ACARE), Kenya Marine and Fisheries Research Institute, Addis Ababa University, Jimma University
The East African lakes are vital ecosystems supporting millions of livelihoods, yet they face significant threats from eutrophication, climate change, and increasing human population pressures. Our understanding of how these lakes are changing and how to mitigate impacts is limited by a lack of long-term observations. Current ground-based monitoring is infrequent and spatially limited, highlighting the urgent need for robust satellite monitoring tailored to local conditions. This presentation summarizes efforts to establish a satellite-based water quality monitoring system for East African lakes. It covers: (1) the establishment of the African Lake Remote Sensing Network (ALReS) to facilitate knowledge sharing and capacity building, (2) the development of an initial water quality monitoring tool: the LAndsat water QUality retrieval tool for East African lakes (LAQUA), and (3) refinements to the tool through collaboration between King’s College London (KCL) and the African Centre for Aquatic Research and Education (ACARE) and other local partners. Our approach emphasizes developing localized algorithms for water quality parameters, addressing the limitations of existing global models predominantly trained and validated with North American and European data. By leveraging field data from multiple East African lakes, we assess the accuracy of various water quality retrieval algorithms, including existing global models and newly developed models specifically designed for this region. Workshops with local stakeholders and ALReS network members identified steps required to deliver a satellite-based monitoring system for African lakes, including: (1) developing local institutions’ capacity to collect ground-truth data, (2) co-creating satellite products with local partners, (3) producing open-source, user-friendly software tools for visualizing and analyzing data, and (4) delivering training to local researchers and managers. 
We introduce the first iteration of the monitoring system: the LAQUA tool, a user-friendly application developed on Google Earth Engine. This app provides stakeholders with easy access to satellite-derived water quality information to inform decision-making and facilitate sustainable management practices. Our findings demonstrate the effectiveness of tailored algorithms for chlorophyll-a, total suspended solids (TSS), and Secchi disk depth (SDD), enhancing local institutions’ capacity to monitor and manage these critical water resources. Recent developments to enhance the LAQUA app as part of the ACARE Monitoring Pilot include creating a comprehensive in situ water quality database, integrating over 7,500 observations from various sources to further validate and refine the satellite algorithms. This dataset encompasses key variables such as chlorophyll-a, water temperature, and turbidity across critical lakes like Lake Turkana, Lake Victoria, and Lake Kivu. Key advancements in the new KCL-ACARE water quality monitoring tool include integrating higher resolution Sentinel-2 data, refined water quality algorithms, and additional environmental variables like temperature and rainfall. Enhanced user interface features improve accessibility, while optimized code efficiency reduces processing time, making the tool a robust resource for tracking chlorophyll-a, TSS, and SDD in East African lakes. The tool provides easy access to satellite-derived water quality information validated with local data. Co-production and capacity-building activities have facilitated a user-centric design, ensuring the tool meets local needs through collaboration with regional stakeholders. The study demonstrates that while Landsat models showed lower predictive performance compared to Sentinel-2 models, their longer archive allows for long-term trend analysis, making both datasets complementary. 
The KCL-ACARE tool serves as a vital resource for researchers and managers working on the African Great Lakes, with significant potential for future enhancements, including additional water quality parameters and hydrological data. This project has successfully delivered a validated, user-friendly satellite-based water quality monitoring tool tailored to the African lakes and created a robust in situ water quality database. These advancements collectively enhance the capacity for effective water quality management and provide a strong foundation for future research and tool development. In conclusion, the establishment of a satellite-based monitoring system for East African lakes addresses immediate environmental concerns and empowers local communities through capacity building, co-creation of satellite products, and accessible data visualization tools. This initiative represents a significant step towards sustainable management of East African lakes, ensuring their health and productivity for future generations.
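The locally tuned retrieval models described above can take the general form of an empirical band-ratio regression. The sketch below is illustrative only: the LAQUA/KCL-ACARE coefficients are not public here, so the coefficients, bands (a red-edge/red ratio, e.g. Sentinel-2 B05/B04) and the log-linear model form are assumptions representing a common approach for turbid inland waters.

```python
import numpy as np

# Hypothetical calibration coefficients: in a locally tuned model these are
# fitted by regressing log10(Chl-a) from in situ samples against the band ratio.
A, B = 1.2, 0.4

def chl_a(red_edge, red):
    """Empirical band-ratio chlorophyll-a retrieval (mg m-3), log-linear form."""
    ratio = np.asarray(red_edge, dtype=float) / np.asarray(red, dtype=float)
    return 10.0 ** (A * np.log10(ratio) + B)

low = chl_a(0.020, 0.020)   # ratio 1.0
high = chl_a(0.030, 0.020)  # ratio 1.5 -> denser phytoplankton, higher Chl-a
```

Swapping in coefficients fitted to East African in situ data, rather than global defaults, is exactly the kind of localisation the abstract argues for.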

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Satellite-Based Combined Analysis of Water Bodies and Vegetation Dynamics Under Dry Conditions

Authors: Olga Parshina, Luca Cenci, Giuseppe Squicciarino, Luca Pulvirenti
Affiliations: Sapienza University of Rome, CIMA Research Foundation
Drought is an extreme climate event characterized by persistently unusual dry weather conditions affecting the hydrological balance. In recent years, its frequency and severity have been increasing, posing significant challenges to water resource management and food security. The main goal of this research is to assess the effects of dry periods and drought events on water bodies and vegetation. Within this framework, this study investigates the relationship between the dynamics of satellite-based indicators of a reservoir's Water Extent (WE) and the stress of the surrounding vegetation, described by means of Vegetation Indices (VIs). The VIs and the WE were derived from Sentinel-2 (S2) data, and the analysis was performed over a six-year period (June 2018–May 2024). The study area comprises the Maccheronis Reservoir (Sardinia, Italy) and the territories surrounding it (Lodè, Torpè and ten adjacent municipalities, covering ca. 1300 km²). The area's primary Land Cover (LC) classes are Tree Cover (49.6%), Grassland (26.3%), Shrubland (19.6%) and Cropland (2%), based on ESA WorldCover 2021 data. A quantitative comparison was performed between the monthly time series describing the changes in the WE of the reservoir and the following VIs: Normalized Difference Vegetation Index (NDVI), Normalized Difference Moisture Index (NDMI), and fraction of Absorbed Photosynthetically Active Radiation (fAPAR). All VIs were computed at 20 m spatial resolution and then aggregated monthly over each LC class. Subsequently, the monthly averages and the corresponding standardized anomalies were computed for WE and the VIs. Indeed, (substantial) negative anomalies can be linked to stress conditions of vegetation or water resources (according to the indicator under analysis), potentially linked to drought periods. Finally, monthly averages and anomalies of the combined indicators VIs&WE were computed as 0.5(VI+WE), to evaluate the potential of integrated S2-based indicators for drought monitoring.
Performance metrics such as Accuracy, Precision and Recall were used to evaluate the relationship between the anomalies of WE, VIs and VIs&WE and dry months defined on the basis of the Combined Drought Indicator (CDI, provided by the European Drought Observatory). In this study, a month is considered dry, or under drought conditions, when the CDI level "No drought" is present in less than a third of the territory, resulting in 33 dry months out of 72. More details about the methodology can be found in Cenci et al., 2024 and Parshina et al., 2024. The analysis of the results suggests that, in general, all the anomaly trends behave similarly. The highest scores were obtained for fAPAR&WE over Grassland, whose Precision reached values of ca. 80%, meaning that the corresponding anomalies can (quite) successfully identify dry months. This result is obtained for a LC class largely present in the area, whose dynamics are not influenced by anthropic interventions (in contrast with croplands, which are affected by irrigation and cover only a small portion of the study area). Moreover, as opposed to the Tree Cover class, Grassland is expected to react more quickly to stress conditions, which is important for capturing early signs of critical periods potentially connected to drought events. However, Accuracy and Recall not exceeding 70% suggest that anomalies are often positive during dry periods, which might be caused by the high percentage of dry months in this case study. To further investigate the temporal relations between the VIs and WE, we considered the agreement between the signs of the First Differences (FD) of the WE and VI time series. A negative FD sign is assumed to indicate stress conditions of vegetation (for the VI time series) or water resources (for the WE time series). Accuracy was computed by comparing the VI FDs against the current and the next month's values of the WE FDs.
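The indicator construction described above — per-calendar-month standardized anomalies, the 0.5(VI+WE) combination, Precision/Recall against dry-month flags, and the lagged first-difference sign agreement — can be sketched as follows. The synthetic series, the dry-month effect size and the random seed are all invented for the example; only the computations mirror the abstract.

```python
import numpy as np

def standardized_anomaly(x, months):
    """Per-calendar-month standardized anomaly: (x - monthly mean) / monthly std."""
    a = np.empty_like(x, dtype=float)
    for m in range(1, 13):
        sel = months == m
        a[sel] = (x[sel] - x[sel].mean()) / x[sel].std()
    return a

def precision_recall(pred, truth):
    """Precision and recall of boolean predictions against boolean truth."""
    tp = np.sum(pred & truth)
    return tp / max(pred.sum(), 1), tp / max(truth.sum(), 1)

# Six years of synthetic monthly series (72 months), with a seasonal cycle
rng = np.random.default_rng(0)
months = np.tile(np.arange(1, 13), 6)
season = np.sin(2 * np.pi * (months - 3) / 12)
dry = rng.random(72) < 0.45                       # CDI-style dry-month flags
vi = season + rng.normal(0, 0.3, 72) - 0.8 * dry  # vegetation index series
we = season + rng.normal(0, 0.3, 72) - 0.8 * dry  # water extent series

vi_a = standardized_anomaly(vi, months)
we_a = standardized_anomaly(we, months)
combined = 0.5 * (vi_a + we_a)                    # the VIs&WE indicator
p, r = precision_recall(combined < 0, dry)        # negative anomaly -> "dry"

# First-difference sign agreement, with WE lagging VI by one month
fd_vi, fd_we = np.diff(vi), np.diff(we)
acc_lag = np.mean(np.sign(fd_vi[:-1]) == np.sign(fd_we[1:]))
```

Standardizing per calendar month removes the seasonal cycle so that only departures from the monthly norm are flagged, which is what lets negative anomalies act as drought proxies.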
Introducing a one-month lag substantially increased Accuracy for both the monthly averages and the anomalies of all indicators. This suggests that vegetation stress, as a sign of agricultural drought, may often precede the signs of hydrological drought. These results show the potential added value of a combined S2-based monitoring of vegetation and water body dynamics to support drought management. In this regard, it is also important to highlight that the best performances were obtained by using fAPAR as the VI, an outcome that further supports the potential of this indicator for identifying vegetation water stress due to drought. Further analyses will be carried out on other water reservoirs, to better understand the potential and limitations of the presented approach which, being based solely on satellite data, may also be applied in data-scarce environments. References: L. Cenci, G. Squicciarino, L. Pulvirenti, S. Puca and A. Duro, "Validation of a Prototype Monitoring System of Water Bodies Extent for Civil Protection Applications", IGARSS 2024 – 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 3765-3769, doi: 10.1109/IGARSS53475.2024.10641198. O. Parshina, L. Cenci, G. Squicciarino and L. Pulvirenti, "Satellite-Based Monitoring of Vegetation Health and Water Bodies Extent in Dry Periods and Under Drought Conditions", IGARSS 2024 – 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 4027-4031, doi: 10.1109/IGARSS53475.2024.10640789.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: The EcoNet Project: Surface Water Monitoring Through AI Algorithms From Hyperspectral PRISMA and Multispectral Landsat 8 With Ground Sensors Analysis

Authors: Valeria La Pegna, Dario Cappelli, Martina Frezza, Davide De Santis, Fabio Del Frate, Roberto Dragone, Gerardo Grasso, Daniele Zane, Bruno Brunetti, Sabrina Foglia, Giulio Ferrara, Giorgio Licciardi, Patrizia Sacco, Dr. Ing. Deodato Tapete
Affiliations: Tor Vergata University Of Rome, Istituto per lo Studio dei Materiali Nanostrutturati, National Research Council (CNR-ISMN), Italian Space Agency (ASI)
Inland water pollution is one of the most challenging issues of recent decades and requires proper, targeted actions to protect and monitor aquatic ecosystems. Numerous industrial discharges affect freshwater basins, and a valuable way to ensure continuous monitoring is the accurate analysis of satellite data, which can observe the dynamics over time. While satellite-based products are increasingly used for monitoring water parameters, their combination with ground bio/chemosensor measurements, able to detect significant variations in selected physico-chemical parameters among different water sampling points attributed to agricultural and peri-urban anthropogenic pressures, has yet to be fully tested in pre-operational scenarios. The innovation of this work therefore lies in the combination of satellite data from different sensors, mediated by the use of Artificial Intelligence (AI), with the support of traditional in situ measurements. In particular, a multi-parametric and flexible system combining a ground-based bio/chemosensor device (the patented CNR-ISMN "Snoop" prototype) and satellite-based products for monitoring water parameters was developed and implemented. This research focuses on specific areas, included in the Natura 2000 network, located in central Italy: (i) Piediluco Lake in the municipality of Terni; (ii) the confluence of the Tiber River with the Farfa stream in the "Nazzano Tevere-Farfa" reserve in Nazzano; (iii) Mezzano Lake in the "Selva del Lamone" reserve. Their distinctive feature is that they represent natural hydrological basins of small size, such as lakes or river confluences, with a maximum distance between the banks of approximately 2 km. According to the available literature (Gitelson et al. 1993, Gholizadeh et al. 2016), chlorophyll-a (Chl-a), total suspended matter (TSM), coloured dissolved organic matter (CDOM), total nitrogen (TN) and total phosphorus (TP) were assessed in this study.
These concentrations can be retrieved from reflectance values in the visible range of the spectrum and provide substantial indicators of the trophic status of a basin. AI algorithms were implemented to retrieve significant water parameter concentrations, using MLP (Multi-Layer Perceptron) neural networks with two hidden layers and a variable number of neurons and activation functions (sigmoid or ReLU), depending on the water parameter to be assessed. A dedicated dataset was designed, exploiting hyperspectral imagery from the Italian Space Agency (ASI)'s PRISMA mission, which represents an innovative approach for assessing water quality parameters, with 63 bands at 30 m spatial resolution in the VIS/VNIR. This was accomplished by using reflectance values from multispectral Landsat 8/9 acquisitions, sensitive to the concentrations of the related parameters, as ground-truth target examples provided to the AI algorithms. Furthermore, satellite image analysis and parameter retrieval were coupled with temporally co-located in situ sampling and exploitation of the patented CNR-ISMN "Snoop" prototype, enabling the measurement of various parameters: chlorophyll-a fluorescence kinetics (the OJIP transient), whose changes can be linked to the presence of contaminating bioactive substances (e.g., pesticide residues and heavy metals), free ionic fraction, redox potential, pH, ionic conductivity, temperature, dissolved O₂, and CO₂. The results obtained from the AI model implementation are promising, showing R² higher than 0.6 in every application case considered. A further analysis comparing these outcomes with the trends of the ground data allowed us to complete the temporal investigation of fluctuations in the water basins and to confirm the reliability of the observed trends.
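A two-hidden-layer MLP regression of the kind described can be sketched from scratch in numpy. This is not the EcoNet implementation: the toy "reflectance" inputs, the synthetic target, the layer widths (16, 16), the ReLU choice and the learning rate are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in data: 5 "reflectance" inputs -> one water-parameter value
X = rng.random((200, 5))
y = (2.0 * X[:, 0] - X[:, 3] + 0.5).reshape(-1, 1)

# Two hidden layers with ReLU, echoing the MLP configuration in the abstract
sizes = [5, 16, 16, 1]
W = [rng.normal(0.0, np.sqrt(2.0 / m), (m, n)) for m, n in zip(sizes, sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]

def forward(X):
    """Return the list of layer activations; the final entry is the prediction."""
    acts = [X]
    for i, (w, bias) in enumerate(zip(W, b)):
        z = acts[-1] @ w + bias
        acts.append(np.maximum(z, 0.0) if i < len(W) - 1 else z)  # linear output
    return acts

mse0 = float(np.mean((forward(X)[-1] - y) ** 2))  # loss before training

lr = 0.05
for _ in range(500):                        # full-batch gradient descent on MSE
    acts = forward(X)
    grad = 2.0 * (acts[-1] - y) / len(X)    # dLoss/dOutput
    for i in reversed(range(len(W))):
        gW = acts[i].T @ grad
        gb = grad.sum(axis=0, keepdims=True)
        if i > 0:
            grad = (grad @ W[i].T) * (acts[i] > 0)  # backprop through ReLU
        W[i] -= lr * gW
        b[i] -= lr * gb

mse = float(np.mean((forward(X)[-1] - y) ** 2))
```

In practice a library regressor with early stopping and per-parameter hyperparameter search would replace this hand-rolled loop; the sketch only shows the two-hidden-layer structure and the training signal.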
The results were also part of a pre-operational demonstration with natural park management teams to prove the benefit of implementing such a novel multi-parametric and flexible system for multi-temporal monitoring and early warning. The experimentation outcomes can eventually contribute to the development of advanced monitoring tools for assessing water status in small water basins, wherein satellite hyperspectral and multispectral data play a key role complementary to ground measurements. Acknowledgments: The results presented in this abstract were achieved during the EcoNet project (agreement n. 2022-32-HH.0), funded by the Italian Space Agency (ASI) as part of ASI's program "Innovation for Downstream Preparation for Science" (I4DP_SCIENCE).

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: EO based estimates of water transparency to support the monitoring and reporting under EU directives

Authors: Krista Alikas, Kersti Kangro, Susanne Kratzer, Martin Hieronymi, Rüdiger Röttgers, Henning
Affiliations: University Of Tartu
The color of the water is directly influenced by the substances and particles present in it. In optically complex waters, a wide range of substances can be present at varying concentrations. Here we focus on eutrophic and absorbing waters to develop the most applicable approaches for deriving water transparency. Transparency is an important parameter for water quality and primary production, and one of the relevant parameters for estimating ecological status as required by the EU directives (e.g. the Water Framework Directive (WFD) and the Marine Strategy Framework Directive (MSFD)). Empirical, semi-analytical and machine learning models exist for deriving information about water clarity. Despite the advancement in methods, EO applications rely on accurately derived radiometric products and might exhibit high errors in the higher-order products (inherent optical properties, chlorophyll-a, total suspended matter). Based on an extensive dataset, the focus has been on deriving robust algorithms for the diffuse attenuation coefficient and Secchi depth covering a wide range of optical conditions (chlorophyll-a 5.4–105.7 mg m-3, total suspended matter 2.2–46 mg/l, absorption by coloured dissolved organic matter acdom(442) 0.5–43.7 m-1). As the errors in the radiometric products over optically complex waters are high in the blue and NIR wavelengths (median absolute percentage error > 40%), priority has been placed on green and red wavelengths, covering various bands of already operational (e.g. Sentinel-2 MSI, Sentinel-3 OLCI, PACE OCI) and future (e.g. CHIME) missions. Here we demonstrate the performance of the EO-based algorithms in deriving transparency estimates over inland and coastal waters and their potential to support monitoring and reporting under EU directives (e.g. WFD, MSFD).
The combination of EO sensors allows the monitoring of waterbodies of different shapes and sizes; the developed algorithm can switch between models suitable for each specific optical water type and produce coherent transnational datasets usable to support management and policy.
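The model-switching idea can be sketched as a two-step empirical chain (green/red band ratio → diffuse attenuation coefficient Kd → Secchi depth) with a coefficient set per optical water type. The coefficients, the power-law model form and the two water-type labels below are hypothetical; operational models are calibrated against in situ Kd and Secchi measurements.

```python
import numpy as np

# Hypothetical coefficient sets, one per optical water type
MODELS = {
    "eutrophic": dict(a=0.9, b=1.8, c=1.4),
    "absorbing": dict(a=1.3, b=1.4, c=1.2),
}

def secchi_depth(rrs_green, rrs_red, water_type="eutrophic"):
    """Two-step empirical sketch: band ratio -> Kd -> Secchi depth (m).
    Switches coefficient sets by optical water type, as the abstract describes."""
    m = MODELS[water_type]
    ratio = np.asarray(rrs_red, dtype=float) / np.asarray(rrs_green, dtype=float)
    kd = m["a"] * ratio ** m["b"]   # diffuse attenuation coefficient (1/m)
    return m["c"] / kd              # inverse Kd-Secchi coupling

clear = secchi_depth(0.02, 0.005)   # low red/green ratio -> clearer, deeper
turbid = secchi_depth(0.02, 0.015)  # high red/green ratio -> shallower Secchi
```

In a multi-sensor chain, a classifier would first assign each pixel an optical water type and the retrieval would then pick the matching coefficient set, keeping the output coherent across lakes and sensors.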

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Advancing Inland Water Quality Monitoring through Hyperspectral Earth Observation: Insights from Early Science Applications

Authors: Ryan Hammock, Jeremy Kravitz, Megan Gallagher, Amy
Affiliations: Pixxel
Inland water bodies are critical for biodiversity, human health, and economic activities, but face growing challenges from pollution, eutrophication, and climate change. Effective water quality monitoring is essential for mitigating these threats, yet traditional methods such as in-situ sampling and multispectral imaging often fall short of providing the detail, frequency, and scalability required. Hyperspectral imaging (HSI) addresses these challenges, offering the ability to capture many narrow spectral bands that reveal subtle variations in water quality parameters, enabling precise, actionable insights. This study presents the applications of HSI for inland water quality monitoring, focusing on early science results from Pixxel’s hyperspectral imaging satellites. A case study of Utah Lake, the largest freshwater body in the US State of Utah, underscores the value of HSI in tackling pressing water quality challenges. Utah Lake has long been impacted by recurring harmful algal blooms (HABs), driven by nutrient pollution from agricultural runoff and urban stormwater, exacerbated by climate change. Using HSI data, key indicators such as chlorophyll-a concentrations, phytoplankton absorption, and phytoplankton carbon distributions were mapped to identify bloom hotspots, track bloom dynamics, and assess their severity. The spectral resolution of HSI enables the distinction of cyanobacteria-dominated areas, which are of concern due to their potential to produce toxins harmful to humans and wildlife. These results can provide actionable intelligence for local water resource managers to develop targeted mitigation strategies, such as reducing nutrient inputs from key pollution sources. Furthermore, this case study showcases how HSI can enhance ongoing monitoring efforts, improving the detection and management of HABs in Utah Lake. 
By providing a comprehensive view of water quality dynamics, HSI supports the development of proactive water management practices that ensure the long-term sustainability of inland aquatic ecosystems. Looking forward, Pixxel's planned constellation of hyperspectral satellites will further advance water quality monitoring. The planned constellation will deliver global, high-resolution imagery with daily revisit capability, addressing the spatial and temporal variability of inland water systems. This unique capability will enable the operationalization of applications such as near real-time early warning systems for HABs, tracking of nutrient runoff impacts, and monitoring of emerging contaminants. The Utah Lake case study demonstrates HSI's potential, with opportunities to expand its use to other critical water systems worldwide. The ability of spaceborne HSI to provide detailed, frequent, and scalable water quality insights is directly aligned with global sustainability initiatives, particularly United Nations Sustainable Development Goal 6 (Clean Water and Sanitation). By enhancing the ability to monitor and manage water quality, HSI supports the achievement of targets related to reducing water pollution, improving water resource management, and protecting aquatic ecosystems. This work underscores the importance of hyperspectral Earth observation in supporting sustainable development and fostering resilient communities in the face of growing environmental pressures.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Benefits from the use of Sentinel in water-related cases: findings from the Sentinel Benefits Study

Authors: Ms. Lauriane Dewulf, Mr. Lefteris Mamais, Mr. Geoff Sawyer
Affiliations: Evenflow, EARSC
Through a series of case studies conducted under the Sentinel Benefits Study (SeBS), led by EARSC for ESA, we have examined water-related challenges such as lake water quality monitoring and groundwater resource management, highlighting the tangible economic, societal, and environmental impacts of Sentinel data in real-world cases. The SeBS methodology uses a "bottom-up" value chain approach to highlight the diverse benefits of Copernicus Sentinel data. Unlike previous "top-down" studies, this method captures monetary and non-monetary benefits experienced by stakeholders, ranging from public administrations to citizens and ecosystems. By focusing on water quality monitoring in lakes and on groundwater management, SeBS demonstrates how Sentinel data effectively delivers societal, environmental, and economic benefits. Three SeBS cases cover lakes, an important feature of our natural environment: they are a source of drinking water and irrigation for crops, they provide leisure facilities, and they are a strong factor in maintaining biodiversity and sustaining both flora and fauna. Water quality is affected by many human activities, of which farming has perhaps the biggest impact through fertilizer and pesticide runoff. These cases look at how water quality is managed using Sentinel data in Germany, Finland and the Netherlands. We have seen from the three cases that the introduction of EO data has been a game-changer in lake water quality monitoring, especially when it comes to Harmful Algal Blooms (HABs). Data from Sentinel-2 and Sentinel-3, as well as other missions, is being used to develop a wider and more frequent picture of water quality. In this way, it complements traditional sampling, offering a comprehensive picture of water quality over large areas, faster detection of events, and better decision-making and communication.
Benefits include cost savings, improved environmental monitoring, enhanced policy-making, and stronger public trust through transparency, ultimately supporting biodiversity and citizens. The total estimated benefits in these cases range from €4 million to €37 million per year per country. The Water Framework Directive (WFD) could also be a key driver in developing EO utilisation across other countries; however, it currently remains a barrier because it does not yet recognize the use of satellite-based measurements. While only a few countries have embraced satellite data, its potential for better monitoring is evident and deserves broader acceptance. SeBS has also studied closely the subject of aquifer management, a topic relatively unknown to the average person. Yet it is truly a critical subject, considering that nearly half of all drinking water in the world and over 40% of the water used for irrigation is sourced from groundwater. Moreover, with many areas of the world experiencing more frequent and more severe droughts – a direct result of climate change – the sustainable management of groundwater resources becomes essential for the well-being of the communities depending on them. However, the interests of these communities often conflict, with farmers, residents and tourists competing against each other – especially in large urban areas – over increasingly scarce water resources. Unlike the case studies on water quality management, the two case studies on aquifer management showcase the benefits brought by Sentinel-1 (instead of Sentinel-2) to diverse stakeholders across the value chain. A first study covering Catalonia (Spain) shows how EO is transforming groundwater resource management through innovative applications developed by the Catalan Water Agency (ACA) and the Cartographic and Geological Institute of Catalonia (ICGC).
Using Sentinel-1's SAR technology, these agencies can detect ground deformation caused by aquifer exploitation with millimetre precision, providing critical insights into groundwater extraction levels. This capability enables the detection of illegal wells, supports the redefinition of water concession rights, and ensures aquifer levels are maintained above critical thresholds to prevent saltwater intrusion and land compaction. Ultimately, this initiative helps ensure the economic viability and social cohesion of many communities and industries across the region. Municipalities and water utilities gain from reduced risks of subsidence damage and improved groundwater management. Farmers benefit from more efficient irrigation practices and sustainable access to water, critical for maintaining agricultural yields during drought periods. Citizens and communities experience improved social cohesion, lower water costs, and a more reliable water supply. Furthermore, Groundwater-Dependent Ecosystems (GDEs) are protected from over-extraction. The economic benefits of the use of EO quantified in this case are estimated at €1.78 to €3.58 million per year in Catalonia. A second case study, covering the region of Murcia (Spain), focuses on damage caused by ground deformation due to groundwater exploitation. The benefits associated with the use of Sentinel-enabled InSAR monitoring of ground subsidence are significant in this case. Economic benefits, extrapolated to Spain, amount to €31.7–71.8 million annually; these relate to avoided costs of investment in GNSS total stations and, very importantly, to avoided costs of damage to buildings. Ultimately, the SeBS cases on water highlight the key role of Copernicus Sentinel data in water management.
In both the water quality and aquifer monitoring cases, Sentinel data enables more frequent and spatially denser measurements, offering high precision for ground movement detection and broader coverage for water quality monitoring. While water quality measurements may be less precise, this is more than compensated for by the ability to monitor many more lakes at higher frequencies. Historically, both water quality and aquifers were monitored using a limited number of point-based measurements, such as monitoring stations or sampling, which lacked the comprehensive coverage provided by Sentinel data. Sentinel data overcomes these limitations, empowering stakeholders with a holistic view to make informed decisions that ensure water quality and sustainable water use. This in turn preserves social cohesion, safeguards ecosystems and biodiversity, improves ecosystem benefits, and enhances resilience to climate change. SeBS was an ESA project funded by the European Commission.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Earth Observation Satellite Imagery and Cloud Computing to Monitor Human Activities Impacts on Water Quality in the Major Hydrological Basins in Ghana

Authors: Beatrice Asenso Barnieh, Francesca Elisa Leonelli, Anke Schickling, Giuseppe Ottavianelli, Emmanuel Obuobie
Affiliations: European Space Agency, Council for Scientific and Industrial Research, Water Research Institute
Intensive human activities in recent years have presented enormous challenges to the quality of water resources in Ghana. The rate at which discharge from unregulated small-scale mining activities is polluting the major river basins in Ghana, particularly the Pra hydrological basin, is unprecedented. This has serious implications for the quality of the water bodies, as previous studies have detected heavy-metal contamination and risks of acid drainage along the major hydrological basins in the mining areas. Unreliable rainfall patterns have exacerbated the situation, as many farmers in Ghana are gradually shifting from rain-fed agriculture to irrigated farming. Excessive use of fertilizers and pesticides for farming poses a further threat to water bodies and soil moisture in the hydrological basins. Deteriorated water quality threatens human health and aquatic life, which raises concerns for the future of water resources in Ghana. Therefore, continuous monitoring and assessment of surface water quality are of paramount importance in the country. Earth observation (EO) and remote sensing (RS) techniques have been applied widely to measure the qualitative parameters of water bodies. Despite the possibility of applying EO and RS to assess water quality over large areas, these applications have not been fully utilized for water quality assessment in Ghana. The current approach for assessing water quality in Ghana is conventional in-situ measurement. Despite its high accuracy, in-situ assessment of water quality is labor-intensive, costly and slow to analyse, and is therefore unable to offer a continuous and concurrent database of water quality indicators. Furthermore, this method is limited by spatial and temporal variations in water quality.
A comprehensive review of in-situ water quality data in Ghana suggests that data on water quality is available but limited in time and space. The questions this ongoing research is answering are: 1) Can EO data enhance in-situ measurements of optically active water quality constituents (total suspended matter, Chl-a and coloured dissolved solids) both spatially and temporally (i.e., covering all water bodies where no in-situ data is available, coupled with continuous monthly information)? 2) Can we infer from EO data the impact of human activities (i.e., uncontrolled use of chemicals in mining and excessive use of fertilizer for intensive crop farming) on water quality? Given the limited availability of in-situ data in time and space to validate a robust and reliable empirical model for large-scale water quality assessment from EO data, this research applies the Case 2 Regional Coast Colour (C2RCC) machine-learning neural network algorithm to retrieve optically active water quality constituents and water-leaving reflectance from Sentinel-2 Level-1C data, applies different band indices for water quality assessment, and compares the results with the limited in-situ data to establish the best indices for water quality assessment in Ghana. Furthermore, we are developing regression relationships between water quality parameters extracted from EO data and changes in human activity indicators (uncontrolled mining, crop farming and settlement expansion) to assess the impact of human activities on water quality. The research is expected to provide information (maps, figures and tables) about the best indices and empirical algorithm for large-scale, continuous water quality assessment from EO data in time and space. The analysis is also expected to characterise the relationship between water quality parameters and human activity indicators, so that the impact of changing human activities on water quality can be inferred.
Such information is essential for comprehensive assessments and management of water resources in Ghana.
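The index-versus-in-situ comparison described above can be sketched in a few lines. This is an illustrative outline only, not the project's code: the red-edge ratio is one of many candidate indices, and the matchup values below are hypothetical.

```python
import numpy as np

def red_edge_ratio(rrs_665, rrs_705):
    """Simple band ratio from Sentinel-2 B4 (665 nm) and B5 (705 nm) Rrs."""
    return rrs_705 / rrs_665

def fit_index_to_chla(index_values, chla_in_situ):
    """Fit a linear empirical model chl-a = a * index + b against in-situ samples."""
    a, b = np.polyfit(index_values, chla_in_situ, deg=1)
    return a, b

# Hypothetical matchups: index values and coincident in-situ chl-a (mg m^-3)
idx = np.array([0.8, 1.0, 1.2, 1.5, 1.9])
chla = np.array([4.0, 8.0, 12.0, 18.0, 26.0])
a, b = fit_index_to_chla(idx, chla)
```

Comparing the fit statistics of several such indices against the limited in-situ record is one simple way to select the best-performing index per basin.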

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Towards Improved Transparency in Large Optical Water Quality Processing Chains – Bringing Calimnos to the Community

Authors: Nick Selmes, Professor Stefan Simis, Dr. Mark Warren, Dr. Xiaohan Liu
Affiliations: Plymouth Marine Laboratory
Calimnos (from καλή λίμνη, ‘good lake’ in Greek) is the processing chain currently used for production of lake water quality products in the Copernicus Land Monitoring Service (CLMS) and ESA Lakes_cci. As data from these sources have become increasingly widely used, it has also become clear that users should be able to use Calimnos to generate their own complementary data over specific regions of interest. Having been optimised for in-house computing (when satellite data did not have a firm presence in cloud environments), Calimnos has never been available as a standalone version for users to deploy on their own systems. Whilst each processing step is described in the respective service documentation, wider community adoption has therefore not been possible. Here, we discuss our ongoing work to broaden the accessibility and applicability of Calimnos, providing new opportunities for water quality monitoring and research, and for community collaboration. One of the main challenges in operating and improving Calimnos is its versatility: Calimnos combines into a single processing chain the processes of spatial resampling and subsetting by target area (individual water bodies or regions), radiometric and atmospheric corrections, pixel identification (land/cloud/water/ice), optical water type classification, individual algorithms (per parameter and water type), algorithm blending, conversion and spatiotemporal aggregation. One of the main advantages of this approach is the wide applicability of the system, as algorithms are trained across water bodies and local calibration is no longer required. This is particularly relevant at the global scale, as most waterbodies are not a priori characterised to determine which algorithmic approaches are valid. However, many of the algorithms adopted in the processing chain have complex requirements for the host system.
The number of individually configurable components means there are many ways to set up a valid processing chain, but also many that would fail or produce undesirable results. To facilitate the use of Calimnos by end users we have developed a standalone version that is independent of specific infrastructure. System dependencies are now encapsulated in containers, enhancing portability. The user interface has been simplified and configuration options made clearer. Host-specific systems have been removed from Calimnos and are now incorporated in designated wrappers. Most importantly, documentation of each key configuration choice has been improved. Here, we present our progress to date and invite interested users to join the conversation. This can range from testing to co-development, as well as providing feedback for further refinement ahead of a full public release.
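The configurable-chain pattern described above, where steps run in sequence and the optical water type selects the retrieval algorithm, can be pictured schematically as follows. All function names, step names and numbers here are invented for illustration; this is not Calimnos code.

```python
# Illustrative chain pattern: each step is a function scene -> scene,
# and the configuration chooses which steps (and which per-water-type
# algorithm) make up a valid chain. All names are hypothetical.

def resample(scene):
    scene["resampled"] = True          # stand-in for spatial resampling
    return scene

def classify_owt(scene):
    # Placeholder optical water type classification
    scene["owt"] = "turbid" if scene["rrs_665"] > 0.01 else "clear"
    return scene

def retrieve_chla(scene, algorithms):
    # Pick the per-water-type algorithm, as in an OWT-switched chain
    scene["chla"] = algorithms[scene["owt"]](scene)
    return scene

def run_chain(scene, steps):
    for step in steps:
        scene = step(scene)
    return scene

algorithms = {
    "clear": lambda s: 50.0 * s["rrs_665"],
    "turbid": lambda s: 20.0 * s["rrs_665"] / s["rrs_560"],
}
steps = [resample, classify_owt, lambda s: retrieve_chla(s, algorithms)]
result = run_chain({"rrs_665": 0.02, "rrs_560": 0.04}, steps)
```

The point of the sketch is the combinatorics: every slot (correction, pixel identification, per-type algorithm, blending) is swappable, which is exactly why some configurations are valid and others fail.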

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Project BIGFE - using remote sensing to evaluate water quality in German water bodies

Authors: Kurt Friese, Susanne Schmidt, Tanja Schröder, Rebecca D. Kutzner, Pia Laue, Hendrik Bernert, Kerstin Stelzer, Karsten Rinke
Affiliations: Helmholtz Centre For Environmental Research Gmbh - Ufz, Institute for Lake Research, Institute for Hygiene and Environment, EOMAP GmbH & Co. KG, Brockmann Consult GmbH, Faculty of Environment and Natural Sciences, Brandenburg University of Technology
Remote sensing of water bodies is increasingly used. Optical sensors can be applied for monitoring water quality, e.g. for evaluating the algal blooms that occur frequently in summer and which may consist of toxic cyanobacteria. However, questions regarding validation and best-use practices remain. We compared the remote sensing products Secchi depth, turbidity and chlorophyll-a of the ESA Copernicus satellites Sentinel-2 and Sentinel-3, i.e. the MSI and OLCI instruments. Geographically, we limited the processing to 109 German lakes and reservoirs, and temporally to the period from 2016 to 2020. Thirteen federal states contributed in situ and laboratory data, which we compared with the remote sensing data. The processing was carried out using two different processing lines, provided by the eoApp AQUA® from EOMAP GmbH and the CyanoAlert® portal from Brockmann Consult GmbH, in order to also analyse the influence of different algorithms. The underlying signal processing in the two lines follows fully published procedures; the way in which the software is implemented in a high-performance environment is the intellectual property of the two companies. These implementations are available for scientific, consulting, or regulatory purposes. For defining a sophisticated validation scheme, we performed a statistical comparison of different spatial resolutions (3×3, 5×5 and 15×15 macropixels up to whole-waterbody scale) and temporal matches (exact day; ±1 day; ±5 days). Our results show good (Chl-a) to very good (Secchi depth, turbidity) agreement, depending on the satellite/sensor and lake type. We recommend using the 3×3 macropixel or waterbody-scale evaluation and the exact day, but errors remained low when deviating from these recommendations. Most of the results are reliable and convincing for trophic classification, enabling routine application.
Within BIGFE we demonstrated the applicability of, and hurdles to, satellite-derived water quality products within governmental monitoring programs and the water quality management frameworks of reservoir authorities and water administrations.
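The macropixel and temporal-matchup scheme used in the validation above can be sketched generically; the array and station values below are invented for illustration, and the sketch assumes the window lies fully inside the image.

```python
import numpy as np
from datetime import date

def macropixel_mean(image, row, col, size=3):
    """Mean of a size x size window centred on (row, col), ignoring NaN pixels."""
    h = size // 2
    window = image[row - h:row + h + 1, col - h:col + h + 1]
    return float(np.nanmean(window))

def matchups(sat_date, in_situ, max_days=1):
    """In-situ values within +/- max_days of the satellite overpass."""
    return [v for d, v in in_situ if abs((d - sat_date).days) <= max_days]

# Toy 3x3 scene with one invalid (e.g. cloud-flagged) pixel
img = np.array([[1.0, 2.0, 3.0],
                [4.0, np.nan, 6.0],
                [7.0, 8.0, 9.0]])
mean_3x3 = macropixel_mean(img, 1, 1, size=3)

samples = [(date(2019, 7, 1), 5.2), (date(2019, 7, 4), 6.0)]
same_day = matchups(date(2019, 7, 1), samples, max_days=0)
```

Widening `size` to 5 or 15, or `max_days` to 5, reproduces the sensitivity tests of the validation scheme.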

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Atmospheric Correction Assessment of Sentinel-2 MSI Imagery and Chlorophyll-a Monitoring During an Algal Bloom in Lake Balaton

Authors: Stamatina Tounta, Nicola Ghirardi, Dirk Tiede
Affiliations: Department of Geoinformatics - Z_GIS, Paris Lodron University of Salzburg, CNR–Institute for Electromagnetic Sensing of the Environment, CNR–Institute of BioEconomy
Remote sensing data plays a crucial role in studying and monitoring the natural environment, offering a synoptic and multispectral view in a cost-efficient and timely manner. One application of remotely sensed data is water quality monitoring in inland water bodies. Atmospheric correction (AC) represents a critical step for accurate water quality (WQ) assessments, as the signal received by the satellite sensors is affected by different parameters, such as high concentrations of Optically Active Constituents (OACs), sun- and sky-glint effects, and the adjacency effect. The primary aim of this study is to evaluate the accuracy of the ACOLITE AC processor applied to Sentinel-2 MSI imagery in a productive turbid lake. To evaluate the AC results and the water quality products, a band-combined empirical index was derived and compared with in-situ measurements of both chlorophyll-a concentration and water reflectance. The test area is Lake Balaton, a shallow lake in western Hungary, usually characterized by a significant spatially distributed trophic gradient. In the summer of 2019, the lake experienced a significant algal bloom event whose detection is important to study eutrophication processes, which can cause the proliferation of algae, the degradation of water quality, oxygen depletion, and more generally a reduction in biodiversity. For this study, three Sentinel-2 MSI Level-1C images (02-07-2019, 31-08-2019, 15-10-2019) acquired during the algal bloom were downloaded through the Earth Observation Data Access Gateway (EODAG 2.12.1) API, choosing the Copernicus Data Space Ecosystem as the data provider. The date in July was selected to coincide with the in-situ data that were available for that month. Imagery data were processed with ACOLITE, software specifically tailored for atmospheric correction over inland and coastal waters that can be applied to imagery data gathered from several satellite sensors.
In particular, the Sentinel-2 images were atmospherically corrected, employing the Dark Spectrum Fitting (DSF) algorithm to obtain Remote Sensing Reflectances (Rrs) data. In-situ chlorophyll-a concentration measurements (chl-a, μg/L) and Rrs (1/sr) from the beginning of July 2019 were retrieved from the “In situ bio-optical data in Lake Balaton, Horizon 2020 MONOCLE project” (Hunter et al., 2021) dataset. To analyze the algal bloom event, the Normalized Difference Chlorophyll Index (NDCI) was calculated from the Rrs data. The index utilizes the chl-a absorption peak between 665 nm and 675 nm and the low absorption at 708 nm. The temporal changes of this index helped detect and track the evolution of the algal bloom. From the comparison between the ACOLITE-retrieved Rrs and the in-situ Rrs measurements, a good correlation emerged (r = 0.83, R²= 0.69) and a similar shape in the spectral signatures (mean Spectral Angle (SA) = 22.7°). The analysis of the NDCI index in July revealed elevated values in areas where in-situ chl-a concentrations were also high, and this pattern was particularly evident in the southwestern basin of the lake, where the algal bloom was developing. Trends indicated elevated values by late August, coinciding with the peak of the algal bloom. Finally, by October, NDCI values had declined, signaling the bloom’s dissipation. These findings highlight the utility of NDCI as an effective indicator of high phytoplankton concentration. The next phase of the study will focus on applying established algorithms (OC2, OC3, and MDN) to estimate chl-a concentrations in Lake Balaton during the summer of 2019.
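The two metrics used above, NDCI from the 665/708 nm band pair and the spectral angle between retrieved and in-situ spectra, can be written compactly. This is a generic sketch of the standard formulas, not the study's code:

```python
import numpy as np

def ndci(rrs_665, rrs_708):
    """Normalized Difference Chlorophyll Index: exploits the chl-a
    absorption near 665 nm against the low absorption near 708 nm."""
    return (rrs_708 - rrs_665) / (rrs_708 + rrs_665)

def spectral_angle_deg(spectrum_a, spectrum_b):
    """Spectral angle between two Rrs spectra, in degrees (0 = same shape)."""
    a = np.asarray(spectrum_a, dtype=float)
    b = np.asarray(spectrum_b, dtype=float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))
```

The spectral angle is insensitive to a constant scaling of the spectrum, which is why it complements r and R² when judging whether an atmospheric correction preserves spectral shape.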

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Long Short-Term Memory (LSTM) models and ESA CCI products for assessing the impacts of wildfires on lake water quality

Authors: Daniela Stroppiana, Monica Pinardi, Lorenzo Parigi, Bahman Amirsardary, Milad Ramezani Ziarani, Giovanna Venuti, Rossana Caroni, Claudia Giardino, Mariano Bresciani
Affiliations: Consiglio Nazionale delle Ricerche - Istituto per il Rilevamento Elettromagnetico dell'Ambiente (CNR-IREA), Politecnico di Milano, Dipartimento di Ingegneria Civile e Ambientale (DICA), European Space Agency (ESA) Climate Office, ECSAT
This study analyses the influence of wildfires on lake water quality as observed in global products generated within the ESA Climate Change Initiative (CCI). In particular, the study exploits long-term CCI products of burned area, chlorophyll-a and turbidity, namely the ESA Lakes_cci (https://climate.esa.int/en/projects/lakes/) and FireCCI (https://climate.esa.int/en/projects/fire/about/) products. Long Short-Term Memory neural network (LSTM-NN) models are trained and validated to assess the contribution of the burned-area information to better modelling the relationship between lake Chl-a and turbidity and meteo-climatic variables (precipitation, runoff, lake surface temperature, wind speed, surface solar radiance). Parameters that characterize the lake basin and catchment, such as lake depth and surface area, are obtained from the HydroBASINS and HydroLAKES datasets (https://www.hydrosheds.org/products/hydrobasins, https://www.hydrosheds.org/products/hydrolakes); meteorological variables are extracted from the ERA5 dataset delivered by the European Centre for Medium-Range Weather Forecasts (ECMWF), which provides the major drivers of lake water quality: total daily precipitation and surface runoff over the lake catchment, and daily aggregated surface solar radiance, lake surface water temperature and wind speed over the lake surface. Global datasets for both lake catchments and lake basins are aggregated at a weekly time step, and spline interpolation is applied to fill missing values. The lake database is split into burned and unburned catchments, and lakes with the most weekly observations are selected. Unburned catchments (lakes not affected by fires occurring in the catchment) were selected to train an LSTM-NN model able to predict Chl-a concentration and turbidity in undisturbed conditions.
This model is then applied to both unseen burned and unburned lake catchments. Moreover, a new model is trained including fire as a predictor variable and applied to both burned and unburned lakes to show how predictions improve over burned catchments. Model performance is analyzed in terms of R² and rRMSE. Results highlight that performance improves when the burned area (BA) variable is included as a predictor in the LSTM-NN model. Although the coefficient of determination is low for all models, the rRMSE of the models' predictions is significantly better for burned lakes. This work will be continued and applied within the ESA XFires cross-ECV project (Modelling multidimensional causes and impacts of extreme fires in the climate system through X-ECV analysis).
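As a reminder of the mechanics behind the LSTM-NN models used above, a single LSTM step can be written in a few lines of NumPy. This is a didactic sketch only; a study like this would use a standard deep-learning library, and the dimensions here are arbitrary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.

    x: input vector (e.g. weekly precipitation, runoff, temperature, ...);
    h, c: hidden and cell states; W (4H x D), U (4H x H) and b (4H,)
    hold the stacked input, forget, output and candidate gate parameters.
    """
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])           # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:4 * H])   # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# With all-zero parameters and states, one step leaves the state at zero
D, H = 5, 3
h, c, x = np.zeros(H), np.zeros(H), np.zeros(D)
h1, c1 = lstm_step(x, h, c, np.zeros((4 * H, D)), np.zeros((4 * H, H)), np.zeros(4 * H))
```

The forget gate `f` is what lets the model carry catchment memory (e.g. a past fire) across weeks, which is the property being exploited when BA is added as a predictor.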

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: High spatial resolution mapping of French water bodies surface temperature through Landsat imagery: toward exploitation of the future TRISHNA and LSTM missions

Authors: Flore Bray, Guillaume Morin, Dr. Tristan Harmel, Gilles Larnicol, Anne-Sophie Dusart, Guillaume Fassot, Robin Buratti
Affiliations: Magellium
The quality of inland water bodies is a major issue today, particularly in the context of climate change. It is of paramount importance to know as much as possible about the quantity and water quality of the numerous hydrosystems at country scale in order to manage their use, preserve their ecological state and leverage conservation actions. Many parameters can provide information about the state of a water body, such as temperature, chlorophyll-a content and transparency. Dedicated in situ instrumentation only provides partial data over time and space, and sometimes requires prohibitive effort for maintenance. Therefore, exploitation of Earth observation satellites provides much greater spatial and temporal coverage. As part of the France2030 investment plan, and in collaboration with the French Ministry of Ecological Transition and the French space agency (CNES), an ongoing project aims to set up near-real-time production of water quality and temperature products from remote sensing data to provide geoinformation to a large number of key actors to manage the aquatic environment of the French territory. The operational processing chain is already functional for the main water quality parameters except for water surface temperature. This work focuses on the implementation of the retrieval algorithm of water skin temperature from high-spatial-resolution satellite missions. Water temperature estimated from remote-sensing methods requires measurements in the thermal infrared, so we based our developments on the exploitation of the Landsat 8 and 9 missions, which have two bands in this part of the spectrum at 100-m spatial resolution. The application to inland water adds further difficulties, such as the limitation caused by the resolution of the images, the “mixed” pixels containing water and land, and the influence of the environment surrounding the lake on the signal received.
All these issues need to be taken into account to obtain the most accurate temperature data possible. Several algorithms already exist for deriving the surface temperature of continental water from the top-of-atmosphere signal. In particular, the algorithm proposed by Jiménez-Muñoz et al. (2014) was successfully tested and evaluated thanks to matchups with in situ data collected by the thermal monitoring of over 400 French water bodies operated by national services (Prat et al., 2020). We have therefore implemented this method, which uses a ‘split window’ algorithm, i.e., it uses the signal in two thermal-infrared bands to perform the atmospheric correction. Ancillary data, such as the water vapour content of the air, were taken from the Copernicus Atmosphere Monitoring Service database. The entire operational chain was implemented on the WEkEO cloud solution (designed to provide easy access to Copernicus data and climate, marine, terrestrial or atmospheric products): (i) recovery of L1C images, (ii) masking of continental pixels, clouds, snow, etc. to retain only the water pixels, (iii) calculation of the temperature, (iv) validation of the products obtained and (v) large-scale distribution. Product quality has been investigated at various sites in France, including the Etang de Berre and the Etang de Thau, two Mediterranean lagoons, and Lake Geneva. A comparison of satellite and in situ data, collected by French institutional and scientific organizations, revealed a systematic bias in the calculated data, which was corrected to minimise the error. In addition, the retrieved surface temperature was compared to Landsat Level-2 products provided by NASA. We were able to show that the products calculated using the ‘split window’ algorithm proposed in the 2014 article are closer to field measurements than the temperature supplied directly by NASA.
Since monitoring water temperature is essential for effective surveillance, the method used can be adapted to future missions. In particular, the TRISHNA and LSTM missions will take measurements in five thermal-infrared bands, offering a revisit time of a few days at most with a finer spatial resolution than the Landsat satellites. This should allow us to obtain more regular and accurate results, thereby improving monitoring. A user-friendly portal was developed to analyze the produced temperatures of all the main lakes of the French territory. This makes the data accessible to as many people as possible, regardless of their knowledge of remote sensing. All those involved in monitoring and protecting water bodies in mainland France can therefore access water quality data for their zone of interest, and act accordingly.
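The ‘split window’ idea referenced above combines the brightness temperatures of two thermal-infrared bands, whose differential atmospheric absorption carries the correction. The sketch below follows the general structure of Jiménez-Muñoz-type split-window algorithms; the coefficients are purely illustrative placeholders (operational values are fitted per sensor and atmosphere), so this exercises the formula, not the published parameterisation.

```python
def split_window_lst(tb_i, tb_j, emis_mean, emis_diff, wv, c):
    """Generic split-window surface temperature (Kelvin).

    tb_i, tb_j: brightness temperatures in the two TIR bands (K);
    emis_mean, emis_diff: mean and difference of the band emissivities;
    wv: atmospheric water vapour content (g cm^-2);
    c: coefficients c[0]..c[6], sensor- and atmosphere-specific fits.
    """
    dt = tb_i - tb_j
    return (tb_i
            + c[1] * dt + c[2] * dt ** 2 + c[0]
            + (c[3] + c[4] * wv) * (1.0 - emis_mean)
            + (c[5] + c[6] * wv) * emis_diff)

# Illustrative (not fitted) coefficients, just to exercise the formula
c_demo = [0.0, 1.5, 0.1, 50.0, -2.0, -30.0, 5.0]
lst = split_window_lst(tb_i=295.0, tb_j=294.0, emis_mean=0.99,
                       emis_diff=0.002, wv=2.0, c=c_demo)
```

For water pixels the emissivity terms are small and well constrained, which is part of why the approach transfers well to lakes once mixed land/water pixels are masked.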

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Towards widespread phytoplankton monitoring in small lakes: a case study comparing satellite imagery from Planet SuperDoves and ESA Sentinel-2

Authors: Daniel Atton Beckmann, Professor Evangelos Spyrakos, Peter Hunter, Ian Jones
Affiliations: University Of Stirling
Satellite remote sensing has enabled widespread monitoring of important aquatic processes such as algal dynamics in larger water bodies; until recently, however, the spatial resolution of available sensors has not been sufficient to extend this to smaller lakes. This study therefore investigated the potential of a new dataset of high-resolution, metre-scale imagery for monitoring phytoplankton at spatial and temporal scales previously impossible with satellite data. Specifically, data from the Planet SuperDoves satellite constellation [1] is used to monitor a small (0.069 km²), eutrophic lake over the period from 2021 to 2024. A suite of atmospheric correction and chlorophyll-a (chl-a) retrieval algorithms were tested on both SuperDoves and Sentinel-2 [2] data against in situ reflectance and chl-a data. The currently available atmospheric correction procedures for SuperDoves data presented some challenges for this water body. Specifically, atmospheric correction errors were generally higher in the wavebands above 700 nm, which resulted in poor performance of chl-a algorithms reliant on these wavelengths. However, chl-a was still successfully retrieved (MAPD = 18.0%) using the NASA Ocean Colour 3 algorithm [3], which was competitive with the best-performing Sentinel-2 chl-a algorithms (MAPD = 19.2%). Additionally, an evaluation of the algal bloom detection capabilities of both sensors was undertaken using citizen science records (Bloomin’ Algae, UK Centre for Ecology and Hydrology) and environment agency data (Scottish Environment Protection Agency) for comparison. Both Sentinel-2 and SuperDoves data were found to be equally effective for algal bloom detection, each with an F1-score of 0.89 at a chl-a bloom threshold of 40 µg L⁻¹.
Not only does this show the capability of both sensors for monitoring smaller water bodies; the correspondence between citizen science records and satellite data also points to a low-cost means of validating widespread monitoring data, whilst generating an increased sense of shared environmental stewardship in society. This study demonstrates that metre-scale satellite monitoring of algal dynamics is possible even in challenging and optically complex environments such as this small, shallow lake. The increased temporal and spatial resolution offered by new microsatellite constellations such as the Planet SuperDoves therefore opens the potential for a step change in the number of remotely monitorable inland water bodies. This would be a significant advancement for global lake science, environmental management and public health protection efforts.
[1] Planet Labs PBC, ‘Planet Application Program Interface: In Space for Life on Earth’. Planet, 2018. [Online]. Available: https://api.planet.com
[2] M. Drusch et al., ‘Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services’, Remote Sensing of Environment, vol. 120, pp. 25–36, May 2012, doi: 10.1016/j.rse.2011.11.026.
[3] J. E. O’Reilly et al., ‘Ocean color chlorophyll algorithms for SeaWiFS’, Journal of Geophysical Research: Oceans, vol. 103, no. C11, pp. 24937–24953, 1998, doi: 10.1029/98JC02160.
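The Ocean Colour 3 family of algorithms cited above is a fourth-order polynomial of a maximum blue/green band ratio. The sketch below shows the generic OCx form; the default coefficients are given as approximate published OC3M values for illustration only, and operational work should take sensor-specific coefficients from NASA's ocean colour documentation.

```python
import math

def oc3_chla(rrs_443, rrs_489, rrs_555,
             a=(0.2424, -2.7423, 1.8017, 0.0015, -1.2280)):
    """OC3-style chl-a (mg m^-3) from a maximum blue/green band ratio.

    a: polynomial coefficients a0..a4 of log10(chl-a) in
    X = log10(max(Rrs443, Rrs489) / Rrs555); defaults are approximate
    OC3M values, quoted for illustration.
    """
    x = math.log10(max(rrs_443, rrs_489) / rrs_555)
    log_chla = sum(ai * x ** i for i, ai in enumerate(a))
    return 10.0 ** log_chla
```

Because the ratio relies on bands below 700 nm, it sidesteps the red-edge atmospheric correction errors noted above, which is consistent with its strong performance on SuperDoves data.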

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Intercomparison of bio-optical primary production models in freshwater and coastal ecosystems.

Authors: Jonas Wydler, Mortimer Werther, Dr Tiina Nõges, Daniel Odermatt
Affiliations: Eawag, University of Zurich, Estonian University of Life Sciences
Monitoring primary production (PP) in inland and coastal waters provides a measure of biological productivity and helps track changes in nutrient dynamics and carbon cycling. Satellite remote sensing offers a means to expand PP monitoring across various water bodies. However, remote sensing models for PP estimation have not been extensively validated in optically complex inland and coastal waters due to a lack of reference data. In this study, we evaluate primary production models originally designed for marine waters and specific lakes to assess their consistency across different optically complex water bodies. Our aim is to improve bio-optical models and their parameterizations, enabling more accurate estimates of aquatic primary production. We use in situ datasets of vertically resolved PP incubation measurements, chlorophyll-a, temperature, and Secchi depth, representative of typical inland and coastal conditions: Lake Geneva (oligo-mesotrophic), coastal sites in the Baltic Sea (transitional waters), and Estonian lakes (eutrophic). Using these datasets, we evaluated four established primary production models: (1) the Vertically Generalized Production Model (VGPM), (2) a simplified absorption-based model with global parameterizations for photosynthetic efficiency and light absorption, (3) Arst’s semi-empirical chlorophyll-a-based model, and (4) the Carbon, Absorption, and Fluorescence Euphotic-resolving (CAFE) model. By standardizing the description of the underwater light field and unifying constants, we ensured consistent intercomparisons while preserving key differences in how the models handle photosynthesis-irradiance relationships and other parameterizations. We generated depth-resolved primary production profiles and interannual time series for each model and study site. 
Various model settings were evaluated to assess both the models' ability to estimate total carbon fixation and their accuracy in capturing the shape of vertical primary production profiles across different water bodies. For these datasets, a simple depth-integrated model like the VGPM achieves comparable accuracy to more complex models when estimating total carbon fixation. However, more complex models offer potential advantages in capturing details of vertical and temporal variability. The VGPM performs well due to its reliance on few and well-resolved input variables such as chlorophyll-a, the diffuse attenuation coefficient, and temperature. In contrast, more complex models either require additional input variables or depend on assumptions for parameters like spectral phytoplankton absorption and carbon assimilation efficiencies, as measurements of these parameters in conjunction with PP measurements are very limited. This highlights the importance of data availability and resolution to fully utilize the capabilities of more complex models. In conclusion, we provide aligned model implementations that allow for direct comparison of popular PP models. While the scarcity of PP validation data remains a challenge, PP algorithms applied to Earth observation data could significantly support user needs in water quality monitoring.
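The depth-integrated VGPM evaluated above has a compact closed form (Behrenfeld & Falkowski, 1997), which illustrates the point made about its few, well-resolved inputs; the transcription below assumes the optimal carbon fixation rate is supplied directly rather than parameterised from temperature.

```python
def vgpm(chla, pb_opt, e0, z_eu, day_length):
    """Euphotic-zone integrated primary production (mg C m^-2 d^-1), VGPM form.

    chla: surface chlorophyll-a (mg m^-3);
    pb_opt: maximum carbon fixation rate (mg C (mg chl)^-1 h^-1);
    e0: daily surface PAR (mol quanta m^-2 d^-1);
    z_eu: euphotic depth (m); day_length: photoperiod (h).
    """
    return 0.66125 * pb_opt * (e0 / (e0 + 4.1)) * chla * z_eu * day_length
```

Every input is either directly observable (chl-a, PAR, day length) or derivable from Secchi/attenuation data (euphotic depth), whereas the more complex models discussed above additionally need spectral absorption and assimilation-efficiency assumptions.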

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Evaluation of Seasonal and Spatial Lake Level Change Using Multitemporal Satellite and UAV Images in Lake Burdur (Turkey)

Authors: Erhan Şener, Prof. Dr. Şehnaz Şener
Affiliations: Süleyman Demirel University Remote Sensing Center, Süleyman Demirel University Water Institute
In recent years, the use of satellite images in studies on the quantity and quality of water resources has become quite widespread. In particular, temporal changes in surface water resources can be detected more quickly and economically through satellite images, which are also used extensively in quality assessments (chlorophyll-a, turbidity, suspended solids, etc.) of surface waters. As the effects of global climate change have been felt much more strongly in recent years, the balance between protection and utilization of water resources has become more important. The level of Lake Burdur, located in the region defined as the Lakes Region in the southwest of Turkey, has decreased significantly in recent years: since 1960, the lake level has fallen by approximately 15 metres. Accordingly, the lake area has decreased by about 70 km² and the lake water volume by 2178 hm³. Since the lake waters, which are Na₂SO₄-rich salty waters, are not used for agricultural irrigation or drinking water, evaporation due to climate change is one of the most important drivers of lake level change. In this study, seasonal and spatial changes in the lake shoreline were determined using Landsat, PlanetScope and Sentinel satellite images and unmanned aerial vehicles. For this purpose, high-resolution orthophotos were produced from the data obtained from autonomous flights with unmanned aerial vehicles in pilot regions around the lake. In addition, the efficiency of high-resolution orthophotos and satellite imagery in determining changes in the shoreline of Lake Burdur was investigated.
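Shoreline and area change from optical imagery is commonly derived with a water index; the McFeeters NDWI sketch below is one standard approach (not necessarily the one used in this study), with toy reflectance arrays invented for illustration.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: positive over open water, negative over land/vegetation."""
    return (green - nir) / (green + nir)

def water_area_km2(green, nir, pixel_size_m=10.0, threshold=0.0):
    """Count NDWI > threshold pixels and convert to surface area in km^2."""
    mask = ndwi(green, nir) > threshold
    return float(mask.sum()) * (pixel_size_m ** 2) / 1e6

# Toy 2x2 scene: top row water (green > NIR), bottom row land (NIR > green)
green = np.array([[0.10, 0.09], [0.04, 0.03]])
nir = np.array([[0.02, 0.03], [0.20, 0.25]])
area = water_area_km2(green, nir, pixel_size_m=10.0)
```

Differencing the masks from two acquisition dates gives the shoreline retreat polygon, and the pixel size term is where the resolution difference between Landsat, Sentinel-2, PlanetScope and UAV orthophotos enters the area estimate.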

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Development of a Methodological Framework for Hyperspectral Estimation of Water Quality Parameters in Diverse Inland Waters

Authors: Raúl Alejandro Carvajal Téllez, Giovanni Laneve, Ashish Kallikkattil Kuruvila, Emilio D'ugo, Fabio Magurano, Alessandro Ursi, Patrizia Sacco, Dr. Deodato Tapete
Affiliations: University Of Sapienza, Istituto Superiore di Sanità, Italian Space Agency
Water quality is a fundamental indicator of ecosystem health, with significant influence on biodiversity conservation, public health, and sustainable management of natural resources. Parameters such as chlorophyll-a (Chl-a) and total suspended solids (TSS) are crucial for monitoring changes in aquatic ecosystems and identifying critical issues like eutrophication and harmful algal blooms (HABs). These processes are exacerbated by climate change, which intensifies extreme events, alters hydrological cycles, and increases nutrient runoff, with profound impacts on water bodies globally. In this context, the present study proposes the development of specialized algorithms to estimate water quality parameters in various types of inland waters, such as clear, turbid, and chlorophyll-rich waters. The implemented methodology establishes an advanced processing framework to predict water quality parameters in environments with diverse optical properties. To classify optical water types (OWTs), a Random Forest classifier was trained using the GLORIA database (a globally representative hyperspectral in situ dataset), preprocessed with unsupervised clustering algorithms and reflectances simulated for the Italian Space Agency (ASI)'s hyperspectral PRISMA sensor. Subsequently, regression models based on Random Forest, Support Vector Machine (SVM), and XGBoost were trained to predict key parameters such as Chl-a concentration and TSS. Independent variables for these models were selected through a sensitivity analysis of the simulated PRISMA sensor channels. Results were achieved over a wide selection of lakes in Italy (e.g. Trasimeno Lake) and around the world, with more than 2,000 in-situ samples. OWT Classification: spectral clustering succeeded in capturing a greater diversity in optical behavior and also maintained reduced variance for each of the selected classes.
Although an unsupervised approach allows identification of patterns inherent to the dataset, exploring supervised classifications based on the spectral similarity of modeled OWTs offers greater control in training a classifier. Model Performance Chl-a: Cross-validation evaluations indicate that the Random Forest model offers the best performance, with R² higher than 0.6. However, in waters with Chl-a concentrations below 10 mg/m³, overestimations of around 5 mg/m³ were observed, while for concentrations above 200 mg/m³, the model tends to underestimate by approximately 90 mg/m³. Sensitivity Analysis: The analysis revealed how well individual PRISMA channels correlated with Chl-a concentrations across the nine OWTs. Based on the coefficient of determination (R²), certain PRISMA channels are more sensitive to Chl-a in specific OWTs. This insight highlights the importance of spectral resolution and the need to tailor models based on water type for accurate water quality assessments. This approach therefore appears promising in addressing the challenge of accurately estimating water quality parameters in different aquatic environments, tailoring the algorithms to the specific characteristics of each water type. As a result, it enhances the precision of parameter estimation and optimizes the use of hyperspectral data for local applications. This research is performed in the framework of the SatellOmic project, funded by the Italian Space Agency (ASI), Agreement n. 2023-36-HH.0, as part of the ASI's program "Innovation for Downstream Preparation for Science" (I4DP_SCIENCE).
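The two-step scheme described above — classify the optical water type first, then regress Chl-a — can be sketched in a few lines. This is a minimal illustration using synthetic reflectance data and scikit-learn defaults; it is not the SatellOmic implementation, and the toy spectra stand in for the GLORIA/PRISMA-simulated data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for GLORIA-style data: 300 spectra, 30 simulated channels.
n, bands = 300, 30
owt = rng.integers(0, 3, n)                        # three toy optical water types
spectra = rng.normal(owt[:, None] * 0.02, 0.005, (n, bands))
chla = 10.0 ** (owt + rng.normal(0, 0.1, n))       # log-spaced Chl-a per type, mg/m3

# Step 1: classify optical water type from reflectance.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(spectra, owt)

# Step 2: regress log10(Chl-a) from reflectance (one global model here for
# brevity; the study trains separate models and selects bands per OWT).
reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(spectra, np.log10(chla))

pred_owt = clf.predict(spectra)
pred_chla = 10.0 ** reg.predict(spectra)
```

In practice the regression step would be fitted per OWT, with the input channels chosen by the sensitivity analysis rather than using all bands.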
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A deep learning approach to near real-time water quality monitoring of Dutch water bodies using Sentinel-2 imagery

Authors: David Modiano, Miguel Dionisio Pires, Dr. Thijs Perenboom
Affiliations: 52impact, Deltares, Aeres University of Applied Sciences
Monitoring water quality of inland freshwater bodies is critical for preserving aquatic ecosystems and addressing challenges like eutrophication, biodiversity loss, and human health. Traditional water quality monitoring methods, benefitting from significant local expertise and precise measurements, are limited in spatial and temporal coverage. As part of a Dutch government-funded research innovation initiative and piloted in collaboration with Dutch water boards, the WaterWatch project aimed to address these limitations by integrating multispectral satellite data with ground-truth information through machine learning. We leveraged Sentinel-2 imagery and in-situ water quality data provided by Dutch water authorities (years 2013–2024, >1,000,000 training points). Pilot sites Plas de Eend and Strijkviertelplas were selected with the Utrecht Province water board, due to their severe water quality issues. Although the model is designed to work for any water quality parameter, focus was primarily on chlorophyll-a, and also on dissolved oxygen, turbidity, and electrical conductivity. Nineteen spectral features were derived from Sentinel-2 imagery (years 2017-2024), comprising 12 raw band reflectances and 7 derived spectral features. Ground-truth and satellite data were pre-processed by temporally aligning timeseries with Savitzky-Golay filters to smooth data and address in-situ/satellite sampling discrepancies. The dataset was split into training (2017–2022) and testing (2023–2024) subsets. A fully connected feedforward neural network deep learning framework was developed to model the relationships between input features and water quality parameters. Unlike traditional regression techniques, deep learning models are effective at capturing highly complex relationships within datasets, such as associations between optical signals and water quality indicators. The model architecture was optimized through extensive testing with variable depth (layers) and complexity (nodes). 
Models are trained independently for each water body and parameter, enabling mapping of Sentinel-2 data to specific water quality indicators, such as chlorophyll-a concentrations in mg/m3. Model performance was evaluated using R² and Spearman correlation (rs). The highest performance was achieved for chlorophyll-a, with R²=0.75 and rs=0.89 at Plas de Eend, and R²=0.82 and rs=0.70 at Strijkviertelplas. Optically active indicators like chlorophyll-a outperformed non-optically active parameters, though moderate performance (R²~0.6) was still observed for the latter (likely due to correlations with some optically active proxies). Established models were applied spatially to all available Sentinel-2 imagery for each water body. Results revealed clear temporal and spatial water quality dynamics in line with in-situ data: seasonal increases in chlorophyll-a were detected, with concentrations at Plas de Eend consistently exceeding 200 mg/m³ each summer, while Strijkviertelplas peaked at 70–80 mg/m³ (except for 2023, which reached 130 mg/m³), whereas spatial variability was limited but occasionally significant. Real-time outputs over the 2024 summer demonstrated WaterWatch as an early warning system, detecting high-risk concentrations several days before in-situ data (Plas de Eend, 26-06-2024). Chlorophyll-a predictions were additionally integrated into an algal bloom forecasting tool using meteorological parameters to predict blooms with up to 80% accuracy up to weeks in advance. Combining this tool with chlorophyll-a maps enables spatial forecasts, allowing for more localized and precise early-warning systems for water management. WaterWatch demonstrates the potential of integrating satellite data and machine learning to provide scalable, real-time water quality monitoring, and collaboration with local stakeholders ensures the system is tailored to real operational water management needs.
As WaterWatch is designed for scalability across inland freshwater bodies worldwide, future work will focus on expanding test sites and further automating processes. Additionally, methodologies will be prepared for upcoming ESA hyperspectral missions (such as CHIME) to advance water quality monitoring.
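The temporal-alignment step described above — smoothing noisy series with a Savitzky-Golay filter before pairing in-situ and satellite samples — can be illustrated with a toy example. The synthetic chlorophyll-a series, window length and polynomial order below are illustrative assumptions, not the project's actual settings:

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical daily chlorophyll-a series (mg/m3) with a summer bloom peak.
t = np.arange(365)
truth = 40 + 60 * np.exp(-0.5 * ((t - 200) / 25) ** 2)
noisy = truth + np.random.default_rng(1).normal(0, 8, t.size)

# Savitzky-Golay smoothing: fit a low-order polynomial in a sliding window,
# preserving the bloom shape while suppressing sampling noise.
smooth = savgol_filter(noisy, window_length=31, polyorder=3)

# The smoothed series tracks the underlying bloom better than the raw one.
print(np.abs(smooth - truth).mean() < np.abs(noisy - truth).mean())
```

The polynomial fit is what lets the filter follow a sharp seasonal peak that a plain moving average would flatten.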
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Realignment of MSI derived Chl-a and Turbidity algorithms for the Copernicus Land Monitoring Service Lake Water Quality using coincident OLCI data

Authors: Dr. Mark Warren, Dr Xiaohan Liu, Nick Selmes, Professor Stefan Simis, Dr Eirini Politi, Dr Dagmar Müller
Affiliations: Plymouth Marine Laboratory, Brockmann Consult
The Copernicus Land Monitoring Service (CLMS) provides lake water quality estimates derived from both Sentinel-3 OLCI and Sentinel-2 MSI sensors for more than 4000 lakes (OLCI) and 200+ MSI tiles. These products include turbidity and chlorophyll-a concentration in addition to water leaving reflectance, using globally tuned algorithms blended on the basis of optical water type similarity. Typically, up to ten algorithms are used to optimise the retrieval of a product across the optical water types, and each algorithm is tuned to provide optimal performance within its optical water type. With the anticipated release of CLMS version 3.0 products in 2025 (inheriting algorithms from the Lakes cci), the chlorophyll-a and turbidity algorithm coefficients of the spectrally and radiometrically less sensitive MSI data have been re-tuned to align with the ocean-colour-quality OLCI-derived data, ensuring consistency across observation scales. The tuning of algorithm coefficients is performed through bootstrap analysis, using a globally representative sample of 64 lakes, addressing all observed optical water types. The selection of lakes includes those over a range of latitudes and across six continents, sizes ranging from 17 km² to 29,000 km² and across elevations of -15 m to 4960 m. The tuning of the coefficients is performed using the level-2 water leaving reflectance derived from the Polymer atmospheric correction algorithm, using equivalent parameters for both MSI and OLCI. The OLCI and MSI scenes are selected such that they are acquired within minutes of each other, to reduce temporal change between the images. The mean and/or median coefficients from the bootstrap analysis are selected, and the standard deviation gives an impression of the expected quality of fit for each algorithm and optical water type pairing. The results of the algorithm alignment are then validated across in situ data and lakes that were not used within the tuning and alignment procedure.
Metrics of uncertainty (root mean square error, bias, mean absolute error, etc.) between OLCI, MSI and in situ are then determined. The resulting algorithm coefficients are being implemented in CLMS version 3.0 of the 100-m resolution water quality products, anticipated in 2025.
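The bootstrap coefficient tuning described above can be sketched with a hypothetical log-log band-ratio algorithm: resample the tuning lakes with replacement, refit the coefficients each time, and take the median as the tuned value and the standard deviation as an indicator of fit quality. The model form and numbers below are illustrative assumptions, not the CLMS algorithms:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical matchups: OLCI-quality Chl-a vs an MSI band ratio, following a
# log-log linear model chl = 10 ** (a + b * log10(ratio)); one sample per lake.
ratio = 10 ** rng.uniform(-0.5, 0.5, 64)
chl = 10 ** (0.3 + 1.8 * np.log10(ratio)) * 10 ** rng.normal(0, 0.05, 64)

coefs = []
for _ in range(500):
    idx = rng.integers(0, 64, 64)          # resample lakes with replacement
    b, a = np.polyfit(np.log10(ratio[idx]), np.log10(chl[idx]), 1)
    coefs.append((a, b))
coefs = np.array(coefs)

a_med, b_med = np.median(coefs, axis=0)    # tuned (median) coefficients
a_sd, b_sd = np.std(coefs, axis=0)         # spread ~ expected quality of fit
```

In the real procedure one such tuning is run per algorithm and optical water type pairing, using the Polymer-derived reflectances.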
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: C.03.01 - POSTER - The Sentinel User Preparation Initiative for High-Priority Applications

The Sentinel User Preparation (SUP) initiative is part of the FutureEO programme of ESA Earth Observation. The SUP initiative will ensure that European entities are optimally placed to exploit the opportunities offered by the future Copernicus Sentinel Expansion and Next Generation missions, responding to the recommendations of the relevant stakeholder communities. SUP will strengthen expertise in diverse application domains to prepare future downstream services addressing high-priority societal challenges, and to enable rapid uptake by end-users and stakeholders of the derived information products. The objective is to develop and test methodologies to exploit multi-mission approaches of Copernicus Sentinel Expansion class datasets in high-priority applications (Food systems and agriculture; Ecosystem and biodiversity monitoring; Soil management; Inland water management; Coastal management; GHG and air quality; Forest management; Urban resilience; Critical infrastructure management; Mining and extractives; Arctic operations; Natural hazard management), to consolidate the added value of the Copernicus Sentinel Expansion within these applications, and to build the relevant experience and expertise within both supply-side and demand-side stakeholders.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: User preparation for novel aquatic applications based on synergistic use of CHIME and LSTM data

Authors: Stefan Muehlbauer, Dr Claudia Giardino, Dr Mariano Bresciani, Dr Peter Gege, Stefan Plattner, Dr Ian Somlai Schweiger, Dr Christian Schmidt, Dr Rohini Kumar
Affiliations: EOMAP GmbH & CO KG, CNR - National Research Council, DLR - German Aerospace Centre, UFZ - Helmholtz-Centre for Environmental Research
Two thirds of the planet are covered by water, the single most important element to sustain life. It is a habitat and a provider of food, energy and recreation, but also the source of disasters. The changing climate and pollution diminish the availability of usable water, and thus increase the potential for user conflicts and public health problems. Preserving and promoting good water quality not only benefits the environment but also safeguards public health and ensures the preservation of water resources for future generations. There is a need for information on the potential of natural climate protection measures, but also regarding preparedness for threats such as toxic algal blooms or human contamination, which pose significant risks to water quality, aquatic life and their habitats. Remote sensing by satellites can bring globally applicable, transboundary and large-scale monitoring solutions for current and long-term trend analysis. By combining satellite data with modelling approaches, even future projections can be calculated and applied for ecosystem assessments. The six new Sentinel Expansion missions are being developed to tackle prospective societal challenges such as global warming and its impact on nature and humans, urbanisation and food security. The combined data exploitation of these missions allows for the development of new applications bringing benefit to their end-users and thereby addressing EU policies and gaps in Copernicus user needs. In this context, the objective of the presented ESA-funded AQUASUP project is to develop and validate novel concepts for the integrated use of data from the CHIME and LSTM missions, to overcome prevailing information deficiencies in inland water and coastal monitoring and management, focusing on enhanced water quality and ecosystem assessment.
Combining hyperspectral and thermal imagery for advanced applications presents a promising pathway for addressing a multitude of societal challenges, holding the potential to understand various impacts, such as the response of both inland and coastal ecosystems to population growth and climate change, and to establish mitigation action plans. The project presents a first step in preparing the uptake of the new Copernicus Expansion missions together with selected stakeholders. The novel applications will be based on simulated CHIME and LSTM data and focus on the mapping and monitoring of aquatic environments, both inland and coastal. Four applications will be developed: A) Algae Classification and Quantification: The hyperspectral optical signature of algal blooms makes it possible to quantify phytoplankton pigments (extent, types and concentrations), including those common to potentially toxic species, while the tolerance of phytoplankton to thermal changes is captured by combining CHIME and LSTM observations. Accurate calculations of chlorophyll-a, turbidity and water visibility (transparency) are used to create advanced products related to the quantification of cyanobacteria and the differentiation of phytoplankton functional types and pigments. B) Benthic Classification: The limiting constraint of water depth on the classification of benthic habitats, caused by the narrowing of the usable spectral range with increasing depth, is expected to be reduced using hyperspectral data. Improved retrieval algorithms are developed considering the noise characteristics of CHIME. The novel approach will improve the classification of benthic types and monitor the extent and spread of macrophytes and mussels in shallow water. A major goal is to distinguish quagga mussels from macrophytes in Lake Constance to monitor the spread of these invasive mussels.
C) Water Temperature Models for ecosystem assessment in rivers: Water temperature is a critical parameter that influences the structure, function, and stability of river ecosystems, and is a main factor influencing other water quality parameters such as oxygen as well as adverse ecological events such as harmful algal blooms, as seen in the Oder River in summer 2022. Based on the simulated LSTM land and water temperature data sets, a data-driven approach employing spatiotemporal, process-guided deep-learning techniques will be developed to estimate river temperatures distributed spatially along the Elbe River stream network. The improved water temperature information will be combined with hyperspectral data to assess deterioration of water quality conditions through algal bloom detection and the discrimination of blooms into functional groups. An alert system will be established indicating critical ecological water states. D) Synthesis Products for early warning and risk indications in sea grass meadows: Enhancing the identification of biological species, including macrophytes, phytoplankton, and cyanobacteria, can be achieved by considering hydro-meteorological factors and physical boundary conditions like water temperature. High-frequency, higher-resolution water temperature measurements from the LSTM sensor, used as a potential stressor and as input to a biomass model, combined with benthic habitat mapping from the hyperspectral CHIME sensor, have the potential to provide useful insights into the growth cycle and trends of seagrass meadows. The application serves to initialise and answer scientific questions on the growth and biomass uptake (including C sequestration) of seagrass meadows in the German Baltic and their stressors. The development of the new approaches will be based on simulated data from sensors with specifications similar to EnMAP and PRISMA for the hyperspectral data, and on TIR data from Landsat or airborne sensors for the thermal data.
The developed products will be tested and evaluated for different applications in selected representative use cases in Germany and Italy, covering lakes, rivers and coastal water bodies across different ecoregions. The selected users represent various stakeholder groups and are influential players within their respective fields. Their participation in testing and evaluation activities will ensure that the new approaches are tailored to address the stakeholders' real-world challenges, enabling the demonstration of viability and benefits concerning the Copernicus expansion missions. The participating users are: i) Baden-Württemberg State Institute for the Environment, Survey and Nature Conservation, ii) the State office of Environment of the region of Schleswig-Holstein, iii) the Regional Agency for the Protection of the Environment of Lombardy, iv) Helmholtz Centre for Environmental Research, and v) XYLEM. The described applications aim to prepare users to fulfil current and future regulations and policies imposed by global, EU, and regional initiatives, including the United Nations' Sustainable Development Goal (SDG) 6 on a global scale, the Water Framework Directive (WFD)—the most critical EU directive for water body protection and management—alongside the Nitrates Directive (ND) and the Drinking Water Directive (DWD) on a European level, and NATURA 2000, the Habitat Directive and Environmental Impact Assessments on a local scale.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Monitoring Landslides with Multiplatform L-Band Radar Techniques

Authors: Tazio Strozzi, Rafael Caduff, Othmar Frey, Philipp Bernhard, Urs Wegmüller, Federico Agliardi, Alessandro de Pedrini, Christian Ambrosi, Andrea Manconi
Affiliations: Gamma Remote Sensing, University of Milano Bicocca, Scuola Universitaria Professionale della Svizzera Italiana, Swiss Federal Institute for Forest, Snow and Landscape Research
Large-scale rock slope instabilities are widespread in alpine areas. Accurate information on the surface displacement of these phenomena is important to analyse and interpret the associated hazard potential. Satellite SAR interferometry is an attractive technology for monitoring surface deformation in large areas, which is now entering an advanced operational phase triggered by the increasing availability of satellite data, especially from Sentinel-1. However, C-band sensors have limited performance in vegetated areas and for comparatively large rates of movement (e.g. a few cm/year). In these cases, L-band sensors can supplement the high-frequency systems. The MODULATE project (MOnitoring lanDslides with mUltiplatform L-Band rAdar Techniques) was initiated in April 2024 within the framework of the ESA's Earth Observation Science for Society component of the FutureEO-1 Segment 2 Programme. The aims of the MODULATE project are to:


  • identify current shortcomings of C-band SAR interferometry for use in alpine landslide monitoring;
  • perform a detailed analysis of the benefits, limitations, and improvements of L-band SAR interferometry from different platforms (spaceborne, airborne and terrestrial) for specific use in monitoring alpine landslides in preparation of the Copernicus Sentinel Expansion mission ROSE-L;
  • define guidelines on the use of different radar products (multi-frequency (L-, C-, X-, Ku-band) and multi-platform (spaceborne, airborne and terrestrial)) to interpret slope deformation in the different phases of the monitoring program to support the decisions taken by stakeholders.

In collaboration with the regional authorities, three test areas in the canton of Ticino (Switzerland), the canton of Graubünden (Switzerland) and the region of Lombardy (Italy) were selected for the analysis.
For the three sites, we obtained L-band ALOS-2 PALSAR-2 and SAOCOM-1 data since 2014 and 2021 respectively and performed Persistent Scatterer Interferometry (PSI) processing and the calculation of differential interferograms (DInSAR) with multiple temporal intervals. In addition, car-borne SAR measurements at L-band were performed. A compilation of in-situ data, including Sentinel-1 PSI, is currently being carried out to validate the satellite and terrestrial L-band DInSAR results. Preliminary results clearly show the potential of L-Band DInSAR and PSI for monitoring surface deformation in alpine regions as a complement to the nationwide land deformation maps that have been produced based on regular acquisitions of Sentinel-1 data since 2014. Our results can support the efficient planning and utilisation of future L-band SAR missions (e.g. ROSE-L) in services for landslide monitoring.
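The core phase-to-displacement relation behind these DInSAR measurements, and one reason L-band tolerates faster motion than C-band, can be shown in a short sketch. The wavelengths below are approximate nominal values for illustration, not mission specifications:

```python
import numpy as np

def los_displacement(phase_rad, wavelength_m):
    """Convert unwrapped differential interferometric phase to line-of-sight
    displacement (m); one fringe (2*pi) corresponds to half a wavelength."""
    return -wavelength_m * phase_rad / (4 * np.pi)

l_band, c_band = 0.236, 0.056      # approx. L-band (PALSAR-class) and C-band (Sentinel-1) wavelengths, m
one_fringe = 2 * np.pi

# Per fringe: ~11.8 cm of LOS motion at L-band vs ~2.8 cm at C-band, so fast
# alpine slopes stay resolvable (unwrappable) for longer at L-band.
print(round(abs(los_displacement(one_fringe, l_band)) * 100, 1))   # → 11.8
print(round(abs(los_displacement(one_fringe, c_band)) * 100, 1))   # → 2.8
```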
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Estimating Canopy Water Availability From Earth Observation For Supporting Forest And Agriculture Management Practices

Authors: Paulina Bartkowiak, Anne Orban, Sébastien Dandrifosse, Mariapina Castelli
Affiliations: Eurac Research, Centre Spatial de Liège, Centre Wallon de Recherches Agronomiques
Acute droughts observed in recent years represent one of the most significant vegetation disturbances and are expected to intensify under climate change, even in cooler regions like northern Europe and the Alps, often referred to as Europe's 'Water Tower'. Limited precipitation, higher temperatures and reduced soil moisture largely affect vegetation vigour and the water cycle. Increased drought events negatively impact canopy health, leading to larger plant water demands, elevated risk of insect outbreaks and vegetation mortality, wildfires, and also shifts in mountain forest composition. In the framework of the recently funded ESA SENWISE project (New Sentinel Missions for Optimized Wildfire Hazard, Forestry and Agriculture Monitoring), we aim to develop a high-resolution canopy water index (≤ 100 m) for forestry and agriculture management practices by leveraging sensing capabilities of the two ESA Copernicus Expansion missions: the Land Surface Temperature Radiometer (LSTM) and the Copernicus Radar Observing System for Europe at L-band (ROSE-L), supported by in-situ observations and airborne campaigns. Our approach involves two major steps: consolidating representative remote sensing datasets for LSTM and ROSE-L instruments and integrating the multi-sensor products in two-source energy balance modeling of green water flux combined with water-sensitive EO indices for monitoring vegetation water status in forests and grasslands. To ensure long-term and frequent observations, we combine land surface temperature (LST) imagery through synergistic use of high-resolution NASA Landsat and ECOSTRESS data, complemented by the Copernicus Sentinel-3 mission. In parallel, the representative datasets for ROSE-L will be derived from NISAR and CONAE SAOCOM dual-pol and quad-pol acquisitions to estimate vegetation biophysical properties, including tree canopy height and biomass.
Initial simulations using 100-m sharpened Sentinel-3 LST data, Sentinel-2 reflectances, and LIDAR GEDI-derived canopy height show promising results. Modeled ET correlates well with eddy covariance measurements (correlation coefficient of nearly 0.6), with potential improvements expected from incorporating high-resolution inputs like Landsat and ECOSTRESS LST data and L-band derived structural parameters. The multi-sensor ET-based index will be linked with in-situ indicators of plant water availability (i.e., soil moisture and temperature, sap flow, and turbulent fluxes) to quantify its sensitivity to drought periods, vegetation type, and climatic conditions. This will provide a foundation to upscale the EO vegetation water index across study areas in South Tyrol (Italy), Wallonia Region (Belgium), and Provence (France). Following the Sentinel User Preparation initiative, SENWISE results will be shared with local stakeholders active in forestry and agricultural sectors to evaluate the practical utility of the developed product with respect to their needs and requirements.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Multisensor Water Discharge project within the ESA Sentinel User Preparation initiative

Authors: Maria Kremezi, Professor Vassilia Karathanassi, Dr. Giulio Ceriola, PhD Antonello Aiello, Elisabetta Lamboglia
Affiliations: Laboratory of Remote Sensing, National Technical University Of Athens, Planetek Italia s.r.l., European Space Research and Technology Centre (ESTEC), European Space Agency
Anthropogenic activities and some natural events can introduce liquids into aquatic systems such as seas, rivers, and lakes, posing challenges to environmental monitoring and management. Examples include industrial leakages, urban sewage overflow, and illegal dumping, as well as natural events like heavy rainfall that cause overflow from sewage systems. Information about such discharges can be broadly categorised into three main needs: early warning, continuous monitoring, and post-event assessment. In coastal areas, remote sensing has proven valuable, with discharges often detected using optical sensors that reveal plumes or anomalies in water colour, transparency and sediment patterns, or thermal sensors that highlight temperature irregularities. However, the application of remote sensing for large lakes and rivers is less developed. This is primarily due to the spatial, temporal, and spectral limitations of current Earth Observation (EO) missions, which constrain the ability to detect and monitor such events effectively in these environments. The "Multisensor Water Discharges" project, part of the ESA Sentinel User Preparation (SUP) initiative, leverages upcoming Copernicus Expansion missions—CHIME (hyperspectral) and LSTM (thermal)—to support and enhance the capabilities of current EO missions, such as Sentinel-2/3, Landsat 8/9 and ASTER. The project focuses on enhancing the detection of water discharges in inland freshwater bodies and coastal areas, with the goal of improving monitoring and management practices. To generate representative simulated datasets for CHIME and LSTM sensor data, the project utilises PRISMA and EnMAP hyperspectral imagery for hyperspectral simulations, and Landsat 8/9 and ASTER thermal bands for thermal data simulations. Additionally, data from airborne campaigns are processed to supplement remote sensing datasets, offering significant flexibility in planning and execution.
Airborne campaigns allow targeted coverage of specific sites of interest, enabling temporal and spatial alignment with field data collection or other end-user-relevant activities. However, the high costs associated with flights and sensor deployment require meticulous planning and strategic trade-offs to maximise the benefits while maintaining cost efficiency. For CHIME hyperspectral data simulation, two scenarios have been developed. The first scenario utilises satellite data from other orbiting sensors, while the second relies on data acquired during airborne campaigns. These scenarios are similarly applied to LSTM thermal data simulations. However, for LSTM, the first scenario is limited to leveraging ASTER data, which includes five bands in the thermal infrared (TIR) region. A third scenario is introduced for LSTM simulations, incorporating a combination of Landsat 8/9 thermal bands and airborne data to enhance the simulation process and improve the reliability of results. Simulating data using airborne imagery involves several critical steps: a) Analysis of Satellite Sensor Characteristics, including spatial resolution, spectral bands, radiometric resolution, and other sensor-specific parameters; b) Resampling Airborne Imagery to match the spatial resolution of the satellite imagery; c) Adjusting Spectral Bands using the Spectral Response Function (SRF) of the sensor. If the SRF is not available, a hypothetical SRF is used as a baseline for obtaining a suitable simulated image (i.e., a normal distribution based on the bandwidth and the central wavelength of each band); and d) Simulating Atmospheric Effects by adding synthetic atmospheric effects to the airborne imagery to mimic the radiance recorded by the satellite sensor. For simulating atmospheric effects, Radiative Transfer Models (RTMs) are used, such as the MODTRAN and/or RTTOV models for the hyperspectral data. The latter will also be used for thermal data.
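Step (c) above — simulating a satellite band from fine-resolution spectra with a hypothetical Gaussian SRF built from each band's central wavelength and bandwidth — can be sketched per pixel spectrum. The grid, toy spectrum and band parameters below are illustrative assumptions:

```python
import numpy as np

def simulate_band(wl, spectrum, center, fwhm):
    """Integrate a fine-resolution spectrum under a Gaussian SRF defined by a
    band's central wavelength and FWHM (the hypothetical-SRF fallback when the
    sensor's measured SRF is unavailable). Assumes a uniform wavelength grid."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))           # FWHM -> Gaussian sigma
    srf = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return float((srf * spectrum).sum() / srf.sum())      # SRF-weighted mean

wl = np.arange(400.0, 1001.0, 1.0)                        # airborne grid, nm
spectrum = np.linspace(0.02, 0.32, wl.size)               # toy linear reflectance

# For a linear spectrum and a symmetric SRF, the simulated band equals the
# spectrum value at the band centre (0.17 at 700 nm here).
print(round(simulate_band(wl, spectrum, center=700.0, fwhm=10.0), 3))   # → 0.17
```

Repeating this over every target band's (centre, FWHM) pair yields the full simulated multiband image for that pixel.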
Simulating satellite hyperspectral images from PRISMA and EnMAP data is relatively straightforward, as both sensors and their orbital characteristics closely align with those planned for the CHIME mission. However, adjustments are necessary to account for differences in spectral band configurations between CHIME and PRISMA/EnMAP. This process involves simulating CHIME's spectral bands by applying the satellite sensor's spectral response function (SRF) to reproduce the sensor's spectral characteristics accurately. Simulating thermal LSTM data from Terra's ASTER thermal data involves a downscaling process to generate thermal infrared (TIR) bands with a spatial resolution of 50 meters. It is achieved using Wald's protocol, which assumes that the effectiveness of downscaling methods remains consistent regardless of the spatial resolution of the fused products. The process specifically employs a conditional Generative Adversarial Network (cGAN) to train a model for producing 50m TIR bands. During training, the visible (VIS), near-infrared (NIR), and shortwave infrared (SWIR) bands are resampled to a spatial resolution of 90m. In comparison, the TIR bands are resampled to 162m (a resolution ratio of 1.8). Once trained, the model leverages the original bands—50m resolution for VIS, NIR, and SWIR, and 90m resolution for TIR—to generate TIR bands at the desired 50m spatial resolution. For simulating LSTM thermal data from Landsat data, we introduce a novel approach. It involves the use of airborne thermal acquisitions combined with deep learning algorithms. Airborne acquisitions do not need to be synchronised with the water discharge event but must align with Landsat acquisitions. The simulation of LSTM satellite data in this scenario involves two key steps: a) generation of thermal bands that are not present in Landsat imagery, and b) image downscaling. 
Landsat imagery includes only two thermal bands, so image-to-image translation deep learning architectures are required to generate the three additional bands. The conditional Generative Adversarial Network (cGAN) architecture, especially the Pix2Pix model, is extensively used for generating near- or thermal-infrared (IR) imagery from optical data. A key feature of Pix2Pix is incorporating the generator input into the adversarial loss calculation. SatGAN (Satellite Image Generation using Conditional Adversarial Networks) enhances the cGAN by adding a perceptual reconstruction loss to the L1 loss, aiming to improve the structural integrity of the generated images. Moreover, InfraGAN, designed for generating thermal IR images, leverages the GAN framework to convert visible images into the infrared domain, utilising the Structural Similarity Index Measure (SSIM) and the pixel-based L1 norm as loss functions to enhance the IR-like quality of the architecture's output. Within the earlier ESA SPOTTED project, the authors experimented with various hyperparameters of the cGAN architecture to generate 11 Sentinel-2 bands in the VIS, NIR, and SWIR spectrum from 4-band VIS-NIR Planet images. The training process utilised over 150 images of landfill areas and floating debris on water, with another 300 images for validation. The project achieved high accuracy metrics, including an ERGAS index below 80, an SSIM index exceeding 0.85, and a CC index above 0.90. To our knowledge, the cGAN model has not yet been evaluated for generating TIR bands. This challenge is being actively addressed within the framework of the "Multisensor_Water_Discharges" project. The spatial resolution of Landsat TIR bands (120m) needs to match the spatial resolution of the LSTM satellite imagery (50m). Three approaches for downscaling Landsat thermal bands will be evaluated within this project.
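An InfraGAN-style reconstruction loss of the kind mentioned above, combining SSIM with a pixelwise L1 norm, might look roughly as follows. The global single-window SSIM and the weighting `lam` are simplifying assumptions for illustration, not values from the cited work:

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Global (single-window) SSIM between two images scaled to [0, 1].
    Real implementations use a sliding Gaussian window; this is a sketch."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx**2 + my**2 + c1) * (vx + vy + c2)))

def generator_loss(fake, real, lam=0.84):
    """Hypothetical combined reconstruction loss mixing the pixelwise
    L1 norm with an SSIM term; `lam` is an assumed weighting."""
    l1 = np.abs(fake - real).mean()
    return lam * (1.0 - ssim_global(fake, real)) + (1.0 - lam) * l1

rng = np.random.default_rng(1)
img = rng.random((64, 64))
print(generator_loss(img, img))  # identical images -> loss ~0.0
```

In a full GAN this reconstruction term would be added to the adversarial loss; only the structural part is shown here.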
In the first, a cGAN network will be trained to estimate 50m TIR bands from Landsat VIS-NIR-SWIR bands resampled to 50m spatial resolution. The simulated LSTM data from the airborne images will be used as ground truth for the training process. The second approach will be employed if the available airborne acquisitions are insufficient. As with Terra's ASTER thermal data downscaling, Wald's protocol will be applied: the cGAN network will be trained to estimate 120m TIR bands from Landsat VIS-NIR-SWIR bands resampled to 288m (ratio: 2.4) spatial resolution. The original 120m Landsat TIR bands will be used as ground truth for the training process. This way, the trained model can then be used to estimate 50m TIR bands from Landsat VIS-NIR-SWIR bands resampled to 50m spatial resolution. In the third approach, the single-image super-resolution (SISR) technique will be applied to reconstruct a high-resolution image from a single low-resolution input. The simulated LSTM data from airborne imagery will be used as ground truth for the training process. Some of the most popular and widely used SISR methods include Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and deep learning with perceptual losses. The simulated CHIME and LSTM data will be integrated into methodologies designed to leverage multi-mission approaches using Copernicus Sentinel Expansion class datasets for high-priority applications. These methodologies will be developed and assessed for their effectiveness in monitoring a) inland waters and b) coastal areas. Inland water monitoring focuses on overseeing areas near water treatment plants and sewage outputs to detect anomalies and ensure water quality remains within acceptable standards. This also includes monitoring "restricted" basins, especially those designated for drinking water, to identify and address illegal or undesirable discharges.
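As a point of reference for the SISR approaches above, the classical non-learned baseline is plain interpolation; learned SISR models aim to recover the high-frequency detail that interpolation cannot. A self-contained bilinear upsampler (illustrative only, with toy array sizes) is sketched below:

```python
import numpy as np

def bilinear_upsample(img, factor):
    """Naive bilinear upsampling: the classical baseline that learned
    SISR models (CNNs, GANs, perceptual-loss networks) aim to beat."""
    h, w = img.shape
    out_h, out_w = h * factor, w * factor
    # Map output pixel centres back into input coordinates
    ys = (np.arange(out_h) + 0.5) / factor - 0.5
    xs = (np.arange(out_w) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    tl = img[y0][:, x0]       # top-left neighbours
    tr = img[y0][:, x0 + 1]   # top-right
    bl = img[y0 + 1][:, x0]   # bottom-left
    br = img[y0 + 1][:, x0 + 1]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

# e.g. a coarse TIR tile upsampled by a factor of 2
tile = np.arange(16.0).reshape(4, 4)
print(bilinear_upsample(tile, 2).shape)  # (8, 8)
```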
Coastal area monitoring, on the other hand, involves evaluating regions near water treatment plants, sewage outlets, and industrial facilities to ensure compliance with water quality regulations. It further encompasses detecting unauthorized or harmful discharges and assessing pollution levels associated with agricultural runoff and related activities.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Overview on Objectives and Status of ROSE-L: the Copernicus SAR Mission in L-Band

Authors: Malcolm Davidson, Lorenzo Iannini, Julia Kubanek, Gianluigi Di Cosimo, Nico Gebert, Steve Osborne, Silvia Mezzasoma, Adriano Carbone, Tobias Bollian
Affiliations: European Space Agency, Aurora Technology BV @ Esa/estec
The Radar Observing System for Europe at L-Band (ROSE-L) is aimed at serving the needs for L-band SAR data for Land Monitoring, Emergency Management, Marine Environment Monitoring and Security Services. The ROSE-L SAR mission cannot be considered in isolation but needs to build on existing and planned Copernicus observation capabilities and infrastructure to derive maximum benefits for users and services. Observational gaps filled by ROSE-L require careful combination of the new information with information provided by existing Sentinel missions. The enhanced continuity also requires harmonised, coordinated and systematic acquisitions in conjunction with other Sentinel data, in particular those provided by the C-band radar aboard Sentinel-1. ROSE-L consists of a constellation of two satellites, so far named PFM (Proto-Flight Model) and FM2 (Flight Model 2). In the system-of-systems perspective, the mission has been designed to sense the ground from approximately the same incidence angles as Sentinel-1. This will be achieved by following the same ground-track as Sentinel-1, and hence by adopting the same sun-synchronous orbit. A combined repeat cycle of 6 days will be achieved with the two spacecraft. Both the option of a Sentinel-1/ROSE-L convoy and that of a constellation with different orbit phasing remain open at the current date. The mission is designed to guarantee a minimum revisit frequency of 6 days at all covered latitudes, 3 days over Europe, and 1 day over the Arctic. The ROSE-L instrument envisions, as the current baseline, three different SAR imaging modes: the ROSE-L Interferometric Wide Swath (RIWS) mode, the Quad-pol Interferometric Wide Swath (QWS) mode and the ROSE-L Wave Mode. The first two are wide-swath imaging modes based on ScanSAR illumination, whereas the third acquires small vignettes over the ocean, emulating the Sentinel-1 WV mode. As the acronym suggests, the RIWS mode is designed to fully overlap with the IWS swath.
Following the same approach as Sentinel-1, a consistent and systematic use of the modes over the whole globe will be adopted, in order to generate geometrically and interferometrically coherent time-series for land and sea monitoring applications. More details on the mission status and objectives as of the end of Phase C will be provided during the presentation.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Collaborative Development of Arctic information Products Leveraging the CIMR, ROSE-L, and CRISTAL Missions

Authors: Till Andreas Soya Rasmussen, Suman Singha, Maxime Christian Jean-Marie Beauchamp, Lasse Rabenstein, Matilde Brandt Kreiner, John Yackel, Jens Jakobsen, Imke Sievers, Hoyeon Shi, Eva Howe, Tore Wulf, Jacob L Høyer, Jakob Bünger, Henning Weidenhöfer, Ronan Fablet
Affiliations: Danish Meteorological Institute, Drift+Noise GmbH, University of Calgary, IMT Atlantique
One of the focus areas for the Copernicus expansion missions and the Sentinel User Preparation (SUP) is Arctic operations. This presentation and the project CROSCIM (CRistal rOSe-l CIMr) highlight the benefits of ROSE-L (L-band Synthetic Aperture Radar), CIMR (passive microwave) and CRISTAL (altimeter). Each of the three missions contributes to improved observations of Arctic environmental conditions. The development of the products within CROSCIM is guided by the so-called “champion users”, namely the Greenland Ice Service at the Danish Meteorological Institute and Drift+Noise Polar Services GmbH, including their ice information app IcySea. In addition to guidance, the champion users will also provide links to other end users. CROSCIM develops a processor that produces a gap-free Level 4 nowcast and forecast of four parameters, namely sea ice concentration, sea ice thickness, snow thickness and sea surface height. Input is based on a representative dataset that simulates the expected output of the Copernicus Expansion Missions and similar missions. The processor is based on a state-of-the-art neural scheme (4DVarNet) initially designed for gap-filling Level 4 reconstructions. Within CROSCIM it is extended in order to create predictions based on data fusion of multiple datasets and additional covariates. It is essential to prepare for the Copernicus expansion missions and to support product development before the launch of the missions, in order to benefit from the new observations when data is ready. For this purpose, CROSCIM develops a representative dataset based on existing similar missions and physical model data. The representative dataset will include Level 2 data that have the same spatial and temporal coverage as the Copernicus expansion missions in focus. In addition, it is expected that the data will include biases and uncertainties when the observed signals are transformed to physical properties.
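The gap-filling task that a scheme like 4DVarNet learns end-to-end can be illustrated, very crudely, with a non-learned stand-in: iterative neighbour averaging that fills unobserved pixels while holding observed ones fixed. All variable names and the observation mask are illustrative assumptions:

```python
import numpy as np

def diffusion_fill(field, mask, iters=500):
    """Fill observation gaps (mask == False) by iterative neighbour
    averaging: a crude stand-in for the variational reconstruction
    that a learned scheme such as 4DVarNet performs."""
    filled = np.where(mask, field, field[mask].mean())
    for _ in range(iters):
        padded = np.pad(filled, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1]
               + padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        filled = np.where(mask, field, avg)  # keep observed pixels fixed
    return filled

rng = np.random.default_rng(2)
sic = rng.random((32, 32))        # stands in for sea ice concentration
obs = rng.random((32, 32)) > 0.3  # ~70 % of pixels observed
out = diffusion_fill(sic, obs)
print(out.shape)  # (32, 32)
```

A learned scheme replaces the fixed averaging operator with a trained prior and can fuse several input datasets, which is the extension pursued in CROSCIM.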
Based on input from the Mission Advisory Groups (MAG), this will be added to the representative dataset. This presentation will introduce the co-development and how the upcoming satellites will improve Arctic information for the champion users and other downstream users. In addition, the presentation will discuss the representative dataset and the output of the processor. The new simulated data products and processing pipelines will be demonstrated based on two use cases: 1) a near-coastal domain, Disko Bay, located on the western side of Greenland; and 2) a pan-Arctic domain. The two use cases illustrate how the Copernicus expansion missions will meet existing challenges of observing Arctic environmental conditions.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: The Sentinel User Preparation Initiative for Copernicus Expansion Missions

Authors: Anke Schickling, Giuseppe
Affiliations: European Space Agency - ESRIN
The Sentinel Users Preparation (SUP) is an initiative within the FutureEO programme of ESA Earth Observation. The SUP initiative will strengthen expertise in diverse application domains to prepare future downstream services addressing high-priority societal challenges, and to enable rapid uptake by end-users and stakeholders of the derived information products. The objective is to develop and test methodologies that exploit multi-mission approaches to Copernicus Sentinel Expansion mission datasets in high-priority applications such as food systems and agriculture; ecosystem and biodiversity monitoring; soil management; inland water management; coastal management; GHG and air quality; forest management; urban resilience; critical infrastructure management; mining and extractives; Arctic operations; and natural hazard management. In addition, the added value of the Copernicus Sentinel Expansion missions is consolidated within these applications, and the relevant experience and expertise is built up within the stakeholder communities.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: F.05.04 - POSTER - Earth Observation in Governance: Bridging Data and Decision-Making

This session aims to bring together practitioners from government agencies, research organizations, and commercial satellite data providers who utilize remote sensing data to enhance public sector services and efficiency. Case studies that showcase the tangible benefits of integrating EO data into public sector operations using up-to-date technologies and approaches, such as advanced data processing techniques - including Machine Learning and Deep Learning - are particularly welcome.
These studies should illustrate how remote sensing has led to measurable improvements in areas such as environmental protection, crowd management, law enforcement, and natural disaster management or response.
The focus will be on quantifying the socio-economic impacts of Earth Observation (EO) technologies on the public sector, such as achieving efficiency savings, improving public services, stimulating economic growth, and enhancing decision-making processes.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Precision in Peril: SAR Strategies for Disaster and Emergency Operations

Authors: Nicolai Holzer, James Slater
Affiliations: NV5 Geospatial, NV5 Geospatial
The increasing frequency and intensity of climate-induced natural disasters are placing unprecedented strain on resources and severely testing the resilience of communities worldwide. As these challenges grow more complex, effective crisis management demands the adoption of advanced technologies. Among the most critical tools in this effort is satellite-based remote sensing, with Synthetic Aperture Radar (SAR), and especially the Copernicus Sentinel-1 mission of the European Space Agency, standing out as an indispensable asset due to its unique technical advantages and broad application potential. SAR is able to acquire data under almost any conditions – unlike passive optical satellite sensors that require daylight and clear skies to function. Its signals can penetrate clouds and even smoke, and as an active sensor technology, SAR can acquire data both day and night. Recent advancements in SAR technology, including higher spatial and temporal resolution and improved information extraction algorithms, have further expanded its applications, making it an essential tool in managing crises effectively. Continuous data acquisition is especially critical in emergency situations, where every moment counts to ensure uninterrupted situational awareness. Active all-weather satellite imaging makes SAR ideal for capturing disaster information at nighttime, during disasters that cause large amounts of smoke such as fires or volcanic eruptions, in cloudy equatorial regions and in polar darkness. For instance, SAR excels in precisely delineating water and land boundaries, making it possible to map flooded areas with high precision. This capability is crucial for coordinating relief efforts and directing resources to the most affected areas, making SAR-based mapping indispensable for disaster management in flood-prone regions. Environmental applications of SAR are equally important.
For instance, SAR is used to combat illegal deforestation, particularly in areas where dense cloud coverage obscures optical imaging. By penetrating these barriers, SAR enables authorities to track changes in forest cover over time, supporting efforts to protect vital ecosystems. In wildfire management, SAR’s ability to operate under smoky conditions and during the night is invaluable. SAR data can provide real-time situational awareness, helping responders contain fires more effectively and minimize damage. In addition to its imaging capabilities, SAR stands out through additional insights that optical sensors cannot provide: through SAR interferometry techniques, geohazards related to ground surface movements, such as those from earthquakes or volcanic activity, can be precisely mapped. Following an earthquake, for instance, SAR can reveal ground deformation patterns, offering critical insights into seismic and fault-line activity and helping prioritize areas for emergency response. Volcanic eruptions pose a significant hazard since they can trigger natural disasters on a local and global scale. SAR data allows scientists to monitor surface elevation changes, track lava flow paths, and anticipate further geological activity, providing timely warnings to affected communities. Extending pairwise SAR interferometry to the analysis of multi-temporal observations improves the measurement sensitivity from centimeters to millimeters. This is crucial when monitoring critical infrastructure, such as dams, power plants, and transportation corridors, where even minor shifts can indicate impending structural failures. For instance, hydrological dams and mine tailing dams are often located in remote areas and prone to catastrophic failures. SAR’s ability to detect subtle ground movements allows for early identification of potential issues, providing crucial time for mitigation and prevention.
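The interferometric measurement principle mentioned above rests on a simple conversion from unwrapped phase to line-of-sight displacement, d = -φλ/(4π). A one-function sketch, assuming Sentinel-1's C-band wavelength of roughly 5.5 cm:

```python
import numpy as np

def phase_to_displacement(phase_rad, wavelength_m=0.055):
    """Convert unwrapped interferometric phase to line-of-sight
    displacement: d = -phi * lambda / (4*pi). The 5.5 cm wavelength is
    approximately the C-band value of Sentinel-1; an L-band sensor
    such as ROSE-L uses a wavelength of roughly 24 cm."""
    return -phase_rad * wavelength_m / (4.0 * np.pi)

# One full fringe (2*pi) of C-band phase corresponds to about half a
# wavelength of line-of-sight motion, i.e. ~2.75 cm.
print(phase_to_displacement(2.0 * np.pi))
```

The longer L-band wavelength makes each fringe correspond to more motion, which is one reason L-band interferometry remains coherent over vegetated terrain where C-band decorrelates.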
Beyond natural disasters, SAR has demonstrated its unique capabilities in conflict zones. During the ongoing conflict in Ukraine, for instance, high-resolution SAR imagery is used to assess infrastructure damage, track changes and monitor troop movements over time. The ability to acquire data independent of daylight and cloud conditions makes SAR an essential resource for operational planning and humanitarian efforts in the region. Despite these transformative capabilities, SAR technology is not without challenges. Interpreting SAR data requires advanced analytic tools and considerable expertise, underscoring the need for continued investment in training and technology development. As SAR sensors continue to evolve, ensuring accessibility and usability will be key to unlocking their full potential in crisis management. In this presentation, we explore the pivotal role of SAR in crisis and emergency operations, drawing on real-world examples from NV5, its partners and customers, to demonstrate its transformative impact. From mapping flood extents and combating deforestation to assessing earthquake deformation and supporting conflict-zone surveillance, SAR has proven itself as a vital tool in addressing the most pressing global challenges. This discussion will emphasize SAR’s ability to enhance decision-making, accelerate response times, and ultimately save lives in an era where precision and resilience are more critical than ever.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Sentinel data for Highways, Forest and Lake Water Management - Transversal Analysis within the Sentinel Benefits Study

Authors: Ms. Lauriane Dewulf, Mr Lefteris Mamais, Mr. Geoff Sawyer
Affiliations: Evenflow, EARSC
Europe has invested more than €6 billion in the flagship Copernicus programme. This investment has allowed the continuous provision of world-class data from the Copernicus Sentinel satellites and the development of the operational Copernicus services supporting a wide range of users. Copernicus is also a key enabler for the development of innovative downstream solutions. All this underpins Copernicus’ contribution across a wide range of topics, making it an invaluable tool for policy making and in support of operational processes. Copernicus is today considered the world’s most advanced system for gathering Earth Observation data and delivering information services. Against this backdrop, an important question arises: have European taxpayers received value for money for this investment? The Sentinel Benefits Study (SeBS), which ran from March 2017 to July 2024, was designed to provide some answers to this question through a “bottom-up”, value-chain approach. Previous, top-down studies have shown the value of the programme, but these macro-level studies do not capture the detailed, and very real, value being felt across various stakeholder groups. However, the individual cases studied by SeBS were, by definition, limited: each traced the impact of a product derived from Sentinel data along an extended, but finite, value chain. Geographically, they were bounded by national borders, reflecting the public-service nature of many of the products. With a growing number of cases across different geographic and application areas, SeBS was increasingly able to make comparisons between the cases. The more cases that were available to consult, the richer such comparisons became. Comparison across cases has led to the refinement and homogenization of the levels of benefits attributable to the single cases and an improvement in the interpretation of the cases themselves, as well as their specificities and boundaries of application.
For these reasons, a separate Transversal Analysis was undertaken, which compared and contrasted cases within three application domains, namely “Highways Management”, “Forest Management”, and “Lake Water Management”. Overall, this Transversal Analysis provides further substantive evidence for decision-makers on the efficacy and utility of Sentinel data, encouraging both uptake and further investment. On the topic of Highways Management, three cases were analysed, in Norway, Italy and Portugal. The transversal analysis on this topic showed that the use of ground motion maps derived from Sentinel satellite data can provide significant benefits for the actors concerned and for society at large. This is especially relevant for continually monitoring ground movement over large areas and with high precision, since doing so by other means is not possible. The benefits were similar in nature across cases and, whilst not all countries have the same degree of problems and can expect the same scale of benefits, it is clear that there are advantages to be gained from adopting ground-movement measuring technology. However, the analysis did find differing priorities and additional types of use being explored across countries, which enrich the notion of the benefits derived from using satellite technology and also suggest that there remains more fertile ground for its exploitation. On the topic of Forest Management, three cases were analysed: in Sweden, in Portugal, and on ESG reporting for deforestation in Indonesia and Malaysia. The analysis found that experts on both the demand and supply sides working on forest management are generally aware of the benefits that can be obtained from the use of satellite-based monitoring. This technology provides a capability to monitor forests country-wide that is not possible using traditional in-situ measurements.
In-situ measurement is not replaced but complemented, and hence an investment and annual budget are required to take advantage of satellite information. Moreover, whilst the studied cases were centred around clear-cut monitoring, many other applications were found to be emerging, most notably for monitoring the effects of fires and the spread of bark beetle infestations. On the topic of Lake Water Management, the analysis found from three SeBS cases in Finland, the Netherlands and Germany that the introduction of EO data has been a game-changer in lake water quality monitoring. Satellites can provide broad-scale coverage, (almost) real-time data and historical data, all of which bring valuable insights for water quality monitoring, especially when it comes to the monitoring of harmful algal blooms (HAB). With regard to the products and services, improvements are expected from new AI methods for the early detection and automated communication of HAB events (and other phenomena requiring special attention from those in charge and the public). The higher spatial resolution, temporal frequency and measurement precision offered by commercial satellites and the next generations of Copernicus missions will also help improve the services. One of the main barriers to such evolution is the lack of explicit acceptance of EO data for reporting purposes in the relevant regulations, especially with regard to the Water Framework Directive. In the Netherlands and in Germany, regional authorities are advocating for the use of EO at national level and its implementation in the water plans/acts. Some key lessons learned via this analysis revolved around why and how certain organisations, in particular public authorities, adopt EO data and others don’t. It was found that in order for any organisation to successfully integrate a new technology into its operational processes, several conditions should exist that deploy according to a fairly recognisable pattern.
The entity must first recognise a need for improvement of a service; then an “agent for change” (either an individual or a group) is required to champion the new technology by actively raising awareness of how it may help to meet one or more of the organisation’s needs. Evaluation of the technology’s fitness for purpose is then required and finally, having proven its suitability, the new technology can be adopted and implemented into the internal processes of the organisation. In conclusion, building a portfolio of SeBS cases permitted a broader analysis over and above the cases themselves. The Transversal Analysis elucidated the common benefits, shared limitations and differing approaches to the application of Sentinel data across three domains, providing deeper insights into where and how value is being created. This project was managed by the European Space Agency and funded by the European Commission.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: DINAMIS, the French National Facility for Shared Access to Very High Resolution Imagery: Maximizing Benefits for Research Applications and Public Policy Making

Authors: Jean-francois Faure, Anna Cristofol, Jean-Paul Sempere, Etienne Berthier, Nadia Landachoco, Pierre Maurel, Jean-Philippe Caminade, Stéphane
Affiliations: DINAMIS-IRD
The French National Institutional Facility for Shared Access to Satellite Imagery (DINAMIS) is a cross-functional component of the Data Terra Research Infrastructure (RI). It was created in 2018 under the impetus of six founding French organizations: CNES, CNRS, IGN, INRAE, IRD and CIRAD. It takes over from pioneering data-access arrangements by harmonizing and pooling resources: old archives, reception means, IT infrastructures and organization. DINAMIS ensures the supply of satellite images to all public institutions in France, for scientific or non-scientific purposes. It serves the needs of the federated scientific community gathered in the Data Terra RI: the Land surfaces (THEIA), Ocean (ODATIS), Solid Earth (FORMATERRE) and Atmosphere (AERIS) hubs. It thus contributes to the development of new thematic products and services derived from Earth observation for the benefit of national and territorial public policies. The Executive Office and the permanent technical teams are coordinated by the Executive Secretary of DINAMIS; these bodies ensure the daily operation of acquisition, dissemination, planning and resource-management activities. To benefit from the data offered by the Facility, eligible organizations (French public bodies, French R&D private corporations, international scientific laboratories) must submit a membership form to DINAMIS, in order to verify compliance with policy and licensing agreements. After acceptance, end-users from these organizations create individual accounts giving access to DINAMIS online services. DINAMIS acquires and disseminates to its end-users Very High-Resolution imagery from Spot 6-7, Pléiades, Pléiades Neo and upcoming programmes (Pléiades World Heritage) or sensors (CO3D). All products are pooled in a unified Catalog, which also relays Spot World Heritage datasets and complementary historic imagery (RapidEye France, and products from ALOS-2, TerraSAR-X and COSMO-SkyMed).
Regularly updated with new acquisitions, the DINAMIS Catalog also harvests data from the Airbus Defence and Space commercial catalog to identify archives of potential interest that may be requested by end-users. For requests for new imagery, the DINAMIS online services allow members either to specify Airbus archives of interest or to task the Pléiades, Pléiades Neo or Spot satellites. All products are associated with specific DINAMIS licensing terms that allow all end-users to access and process the datasets delivered by the Facility. Downloading data from the DINAMIS Catalog is free of charge for all authorized end-users. New imagery requests (Airbus archives, tasking) from end-users are granted by DINAMIS on the basis of a permanent call that distributes free quotas of data. The free quotas are revised annually in accordance with DINAMIS financial resources. If end-user requests exceed the free quotas, the end-user is invited to contribute financially to the acquisition of the datasets, within tailor-made pricing negotiated by DINAMIS with its providers. After acquisition, all new data is made available to all end-users through the Catalog. Services provided by DINAMIS range from planning national coverages, data acquisition and storage, data cataloguing and dissemination, satellite tasking with Airbus, and production of archive images, to follow-up of the call for new imagery addressed to DINAMIS end-users, management of imagery requests (tools and procedures), and technical support to end-users (memberships, accounts, use of the Catalog). DINAMIS regularly broadcasts webinars to raise awareness of and inform about the Facility, in complement to its website, LinkedIn account and YouTube channel.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Enhancing Climate Resilience, Water Scarcity Management, and Urban Planning at the local and regional level through the Strategic Integration of Satellite Data: The Role of the Copernicus4regions Initiative

Authors: Ms. Roya Ayazi, Ms. Margarita Chrysaki
Affiliations: NEREUS – Network of Regions Using Space technologies
In view of climate change, growing water scarcity, and expanding urban landscapes, robust policy and strategic frameworks are urgently required to enhance regional resilience across Europe. Satellite-based Earth observation, particularly through the Copernicus programme, is pivotal in delivering timely and high-quality data essential for evidence-based policy and strategic decision-making. Since its inception in 2012, the Copernicus4Regions project, a joint initiative between the Network of European Regions Using Space Technologies (NEREUS), the European Commission (EC), and the European Space Agency (ESA), has played a central role in promoting the uptake of Copernicus satellite data among European regions and making regional experiences and capabilities more transparent. By producing targeted publications of public user stories (in 2012, 2018, and with a new activity in 2023), Copernicus4Regions explores how other regions in Europe have managed to tackle common challenges and shows the benefits of the Copernicus programme. The new edition (2023-2027) aims to reflect on new user experiences, showcasing how innovative applications of Copernicus data can improve a wide range of public domains, such as climate change, urban planning, or water scarcity. Another important aspect is to collect successful approaches for sustainably integrating Copernicus usage into the workflows of public administrations with a long-term perspective. However, in order to maximise the impact of the stories across regions and ease the efforts of Copernicus users, this section aims to discuss the following challenges inspired by the new stories. Firstly, this section will elaborate on the need for the development of a legal environment (laws, regulations, and policies) friendly to Copernicus data, to ensure data accessibility, standardization, and knowledge sharing.
For instance, addressing water scarcity and water management (an intensifying issue) necessitates not just real-time data on surface and groundwater resources, which the Copernicus Sentinel satellites can offer, but also a proper legal regime that mandates, rather than merely suggests, the use of such data. The use case Monitoring the Health of Water and Sewerage Networks from the Lombardy region explains how Copernicus Sentinel-1 SAR data monitored ground displacements near Milan's sewer network, revealing water infrastructure damage. Additionally, climate resilience and urban planning require actionable insights on land use, deforestation rates, and urban sprawl; such data is increasingly available through Copernicus but cannot be sufficiently integrated at regional and local levels due to fragmented frameworks and insufficient cross-regional coordination. The use case Measures of Surface Movements in Catalonia Using Sentinel-1 Data showcases how regional surface movements are monitored as a risk-management decision-support tool. Secondly, outreach towards European and regional policymakers, emphasizing the system's public-policy value and supporting its broader adoption across public administrations in Europe, is crucial for realising all the previous points. Copernicus4Regions events at the European Parliament and the Committee of the Regions, demonstrating real-world user experiences, have highlighted Copernicus' role in empowering regions and engaging the political level. Thirdly, the discussion will also revolve around the key role of regional space ecosystems. Model case studies from the Copernicus4Regions initiative will demonstrate that collaboration amongst public administration, industry and academia can create a user community in which the flow of data is transparent and accessible, while bridging public users and service providers.
In this respect, the initiative builds on workshops and events that bring these stakeholders together to better understand the challenges and opportunities on both the demand and supply sides. The growing integration of satellite-based data also requires targeted training and education programmes to ensure a workforce well equipped to use space data in view of climate and environmental pressures. Fourthly, long-term and increased public investment in the uptake of space data can ensure a robust and autonomous public system in which the use of Copernicus data is regarded as a fundamental component. The new stories bring up innovative space-based solutions that require more funding to continue contributing to climate resilience. Emphasis on Public-Private Partnership (PPP) programmes, or on ways to motivate public authorities to launch public procurements, can create a positive atmosphere for investing in the data value chain and accelerate the adoption of data solutions in the market. Acknowledgements: This activity was managed by the Network of European Regions Using Space Technologies (NEREUS) under a contract from the European Space Agency. The activity is funded by the European Union, in collaboration with NEREUS. It builds on a voluntary effort from the partner organisations and all regional stakeholders and representatives. In particular, NEREUS expresses its appreciation to Alessandra Tassa, ESA Officer, for her dedication to the initiative.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Beyond damage assessments: Remote sensing and archaeological heritage management

Authors: Benjamin Ducke, Pouria Marzban, Simon Crutchley, Jean-Noël Anslijn
Affiliations: Deutsches Archäologisches Institut, Historic England, Agence wallonne du Patrimoine
For more than a decade, the disastrous effects of armed conflicts on vulnerable archaeological sites in Syria, Ethiopia, Sudan, and other countries have dominated media reports on the role of remote sensing for the protection of the world's cultural heritage. There should be no doubt that the wealth of recent media coverage has been of great importance when it comes to raising public awareness of looting and wanton destruction of archaeological sites. The same holds true for the publicly available damage assessments issued by national and international remote sensing agencies on world heritage sites such as Palmyra in Syria. Unfortunately, however, such activities can, by their very nature, contribute little to counteracting the continuous, creeping destruction that is happening around the globe in a less extreme but equally alarming manner. All archaeological sites are embedded in our living planet's landscape, and thus subject to the same forces of nature that can preserve or destroy what the past has built. With global warming progressing quickly, in step with sprawling urbanization and unmitigated land use, the world's vulnerable archaeological heritage is set to suffer from increasingly frequent and detrimental events. Recent examples include the rock-cut tombs of Xerxes the Great in Iran (under threat from land subsidence), the early urban centre of Mohenjo-daro in Pakistan (under threat from extreme weather events), and ancient Olympia (perennially threatened by wildfires). On the territories of most of the world's nations, the management of archaeological sites and artefacts is regulated by law and entrusted to public services. Heritage agencies, national museums and archives, universities and public trusts work to identify, protect and make accessible sites and monuments, for the benefit of current and future generations.
In their endeavours, public agencies might take different approaches, but they all face the same challenge of managing an enormously diversified, dense, and globally distributed archaeological record (most of it still hidden below the surface) with comparatively meagre resources. To make matters worse, the number of sites that become inaccessible due to political instability is rising. This worrisome trend affects some of the world's archaeological treasure troves, such as Afghanistan, the Levant, vast swaths of northern Africa, and Central America. Under these circumstances, the role of remote sensing in support of global archaeological heritage protection and management moves into focus more sharply than ever. Beyond academic case studies on the usefulness of spaceborne sensors in archaeology, and beyond well-defined digital products such as damage assessments, the full potential of remote sensing tools and innovations remains untapped by public cultural heritage services. The reasons for this lie in financial as much as technological and educational barriers. Recent activities at EU and member-state level aim to alleviate these problems and to permanently strengthen cultural heritage agencies' capabilities in spatial planning, risk preparedness and policymaking via the Copernicus programme. In 2019 and 2020, the Copernicus Cultural Heritage Task Force (CHTF), representing the heritage institutions of all EU member states, carried out a detailed review of the European cultural heritage sector's needs and requirements with regard to remote sensing and Copernicus. Our contribution provides current insights into remote sensing innovations and issues in archaeological heritage management, with special consideration of the CHTF's recommendations and matching national initiatives.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Copernicus-based monitoring at municipal level of the capacity of urban ecosystems to provide regulating and cultural ecosystem services (the case of Sofia-capital and Ruse-trans-border Euro-region)

Authors: Ass.Prof.Msc.Arch Kristian Milenov, Independent researcher Viktor
Affiliations: ASDE-ECOREGIONS
This study investigates the potential of ESA/Copernicus-based data sets to provide consistent observations in support of the national monitoring of land changes in urban areas. The objective is to assess the compliance of management and investment activities with mandatory legal indicators. It further investigates the capacity of urban ecosystems to provide regulating and cultural ecosystem services at the municipal level. The work focuses on two cities in Bulgaria: Sofia (the capital) and Ruse (a trans-border city within the Euro-region). An ISO-based approach to describing land cover/land use, together with a globally approved land cover classification model, is also applied, thus establishing a user-oriented descriptive system applicable at national and global levels. The Copernicus EO Programme, with its extensive satellite data and monitoring tools, combined with local and national mandatory territorial-planning elements, provides objective and sufficient information, offering critical insights into urban ecosystems and critical infrastructure, as well as urban, nature and agriculture strata. This enables the evaluation of services such as sustainable territorial management, soil protection, water regulation, resource control, biodiversity support, and human health/recreational opportunities. This research aims to explore how these tools can support national and local authorities in managing and enhancing territorial sustainability and resilience to natural and man-made risks. Support is also provided to urban resilience and to the ecological functions that urban ecosystems offer to citizens. Through the integration of Copernicus data with digital mapping (GIS) and ecosystem service modeling, the study analyzes the availability and spatial distribution of ecosystem services (NBS) in Sofia and Ruse.
Additionally, the research investigates the potential application of artificial intelligence through specific algorithms, and simulates the impacts of urban development and climate change on the capacity of these cities to provide such services, providing a foundation for more sustainable urban planning. The results demonstrate that Copernicus-based monitoring can effectively capture the dynamics of urban ecosystems, helping local policymakers and urban planners to prioritize green infrastructure, enhance urban resilience to climate change, and improve the overall quality of life for residents. This approach has the potential to inform future strategies for sustainable urban development, bridging the gap between environmental management, ecosystem services, and urban well-being in Sofia, Ruse, and similar cities across Europe.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (ESA Agora)

Session: Icebreaker

Lounge, Bossa, Soul & Jazz set the tone as LPS25 kicks off the week of “From Observation to Climate Action and Sustainability for Earth”.
The LPS25 Icebreaker is a moment to connect, unwind, and ease into the week. At the end of the day, while exploring the poster session or grabbing a bite, the ESA Agora will be the stage for a musical moment.
The band Bossa Club ft Didier will set the scene with their unique blend of smooth grooves — think Rolling Stones with a twist or a Whitney Houston classic reborn, all delivered in Didier’s unmistakably soulful voice and laid-back style.
A perfect moment to relax, celebrate the launch of LPS25, and set the tempo for a week of inspiration, innovation, and meaningful connections.

Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: F.02.02 - POSTER - C- and L-band SAR synergies - ESA-JAXA cooperation and beyond

In 2020, the Japan Aerospace Exploration Agency (JAXA) and the European Space Agency (ESA) signed an agreement on the “Cooperation for the Use of Synthetic Aperture Radar Satellites in Earth Science and Applications”. The cooperation focuses on the joint analysis of L-band data acquired by JAXA’s ALOS-2/PALSAR-2 satellite together with C-band data from ESA’s Sentinel-1 satellites for various applications. Research areas include polar and ocean monitoring, snow water equivalent retrieval, forest and wetland monitoring, surface soil moisture and agriculture monitoring, as well as the monitoring of geohazards and urban areas.

The key objective of the ESA-JAXA cooperation is to develop a better understanding of the benefits of combining L-band and C-band data over various areas and for different thematic applications. We invite contributions using L- and C-band SAR data, not limited to Sentinel-1 and ALOS-2 but also including other available data such as from SAOCOM, RCM and potentially NISAR. A comparison with ground-based campaign data is envisaged to validate the results.

The session aims to provide insights for the development of future (L-band) SAR satellite missions, such as JAXA’s ALOS-4 satellite and ESA’s ROSE-L mission as well as synergies with existing and future spaceborne C-band SAR missions including Sentinel-1 and Sentinel-1 Next Generation.

This jointly chaired session shall give the involved scientists the opportunity to present ongoing research and results and foster the collaboration and exchange between European, Japanese and international participants.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Characterization of L and C-band radar backscatter of Arctic sea ice during the melt season

Authors: Truls Karlsen, Malin Johansson, Johannes Lohse, Anthony Doulgeris
Affiliations: UiT - The Arctic University Of Norway
Synthetic aperture radar (SAR) is widely used for sea ice monitoring, due to its ability to acquire images independent of daylight and cloud conditions. Currently, C-band is the most widely used radar frequency, but with several upcoming L-band missions, this wavelength is becoming more readily available for sea ice monitoring. The aim of this study is to better characterize the SAR signal at L- and C-band during the summer melt season, and to evaluate the separability of ice types at the two wavelengths. In addition, we are investigating the benefits of combining both bands in a multi-frequency ice type classification approach. By evaluating overlapping SAR and optical imagery, we characterize how the different signals change over time. In particular, we want to link the presence of melt ponds to changes in the SAR signal. For this, we use the optical imagery to derive the melt pond fraction. Compared to C-band, the longer L-band wavelength interacts with surface roughness on larger scales and penetrates deeper into the snow and ice surface. As such, C- and L-band SAR provide complementary information about sea ice conditions. It has been shown that during the cold winter season, both C- and L-band SAR can reliably separate sea ice types based on the degree of deformation [1, 2]. However, once the snow and/or ice melt begins, scattering from the wet snow dominates the C-band signal, making separation of ice types challenging. At the same time, L-band SAR can still provide high separability due to the signal’s larger penetration depth [1, 3]. As the melt season progresses, neither wavelength provides separability on the same level as during the winter season. At C-band, a non-linear temporal trend in backscatter intensity can be observed during the advanced melt season, which must be accounted for when evaluating ice type separability. The driver behind these changes in backscatter is not yet fully understood and hence requires more in-depth investigation.
The usefulness of L-band SAR during this period also requires further investigation. With several L-band missions upcoming (NISAR and ROSE-L) as well as the recent launch of ALOS-4, there is a strong incentive to better understand how L-band SAR can be used for sea ice monitoring during the summer season. A hindrance to evaluating L-band SAR in detail during the melt season has been the sparse amount of available data, making it challenging to characterize the backscatter evolution during this period. In this study, we use 21 L-band SAR images over Belgica Bank, off the north-east coast of Greenland, from the period May – August 2024. This is complemented by 82 C-band SAR images from the same period. The time series enables us to investigate the temporal evolution of the SAR signals at both frequencies during the melt season. For both datasets, we use wide-swath SAR images (ALOS-2 ScanSAR and Sentinel-1 Extra Wide, respectively) to obtain large spatial coverage. This image type is the most commonly used type of SAR imagery for large-scale monitoring of Arctic sea ice. The area of interest, Belgica Bank, is a fast ice area where grounded icebergs keep the sea ice immobile. This enables us to study the same sea ice over time, without having to account for ice drift. To supplement the SAR time series and to better understand and interpret the physical sea ice situation on the ground and the corresponding scattering mechanisms, we use a combination of overlapping fully polarimetric ALOS-2 L-band SAR imagery and cloud-free Sentinel-2 optical imagery from the same time period, as well as temperature data from the nearby weather station at Station Nord. The optical imagery is used to identify different ice types and to estimate the melt pond fraction during the advanced melt season.
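The melt-pond-fraction step described above can be illustrated with a minimal sketch. A common approach is to threshold a normalised difference water index (NDWI) computed from green and near-infrared reflectance, since ponded ice appears water-like; the index choice and the 0.16 threshold below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def melt_pond_fraction(green, nir, ice_mask, ndwi_threshold=0.16):
    """Estimate melt pond fraction on sea ice from optical reflectance bands.

    NDWI = (green - nir) / (green + nir); pixels above the (illustrative)
    threshold within the ice mask are counted as ponded.
    """
    ndwi = (green - nir) / (green + nir + 1e-9)  # small epsilon avoids /0
    ponded = (ndwi > ndwi_threshold) & ice_mask
    n_ice = ice_mask.sum()
    return ponded.sum() / n_ice if n_ice else 0.0

# Toy 2x2 scene, all pixels classified as sea ice; only the first pixel
# is strongly "water-like".
green = np.array([[0.60, 0.20], [0.60, 0.60]])
nir = np.array([[0.10, 0.50], [0.55, 0.50]])
ice = np.ones((2, 2), dtype=bool)
print(melt_pond_fraction(green, nir, ice))  # -> 0.25
```

In practice the threshold would be tuned per scene (illumination, ice type), and the ice mask would come from a prior surface classification.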
We are also examining the temporal evolution of the optical reflectance for the different ice types, linking the changes in SAR backscatter to changes in reflectance and air temperature.

References
1. Karlsen, T., Johansson, M., Lohse, J., & Doulgeris, A. P. (2024). Incidence angle dependency and seasonal evolution of L- and C-band SAR backscatter over landfast sea ice. Annals of Glaciology, 1-14.
2. Lohse, J., Taelman, C., Everett, A., & Hughes, N. E. (2024). Automated Sentinel-1 sea ice type mapping and in-situ validation during the CIRFA-22 cruise. Annals of Glaciology, 1-12.
3. Casey, J. A., Howell, S. E., Tivy, A., & Haas, C. (2016). Separability of sea ice types from wide swath C- and L-band synthetic aperture radar imagery acquired during the melt season. Remote Sensing of Environment, 174, 314-328.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: SMOSAR retrieval of SSM from Sentinel-1 and ALOS-2/PALSAR-2 time series

Authors: Francesco Mattia, Anna Balenzano, Davide Palmisano, Giuseppe Satalino, Carsten Montzka, Mehdi Rahmati, Julia Kubanek
Affiliations: National Research Council of Italy (CNR), Institute for Electromagnetic Sensing of the Environment (IREA), Bari, Italy, Forschungszentrum Jülich GmbH, European Space Agency, ESTEC
Soil moisture dynamics influence the hydrological cycle, ecosystem function, and climate processes. In dry and semi-arid regions, such as the majority of the Mediterranean basin, soil moisture directly restricts surface evapotranspiration. This can significantly affect terrestrial energy, water, and biochemical cycles in a warmer climate. Monitoring soil moisture can therefore enhance our understanding of the energy feedback between the Earth's surface and the atmosphere, mitigate the impact of extreme events, and improve meteorological forecasts and water resource management in agriculture. Currently, satellite soil moisture products at coarse resolution (e.g., 10–40 km) are operational, while those at higher resolution (e.g., 0.1–1.0 km) are transitioning from research to systematic application. At high resolution, SAR systems at C- and L-band are the most suitable for retrieving soil moisture. The retrieved soil moisture refers to the volumetric water content (m³/m³) in the top soil layer (e.g., 0-5 cm), known as surface soil moisture (SSM). The variable crop cover and soil characteristics of land surfaces make retrieving SSM at high resolution more challenging than at coarse resolution, where the effects of surface parameters such as roughness and vegetation cover are averaged out. A crucial obstacle to the uptake of high-resolution SSM products has been the inadequate observational characteristics, such as revisit time and spatial coverage, of most past spaceborne SAR systems. Sentinel-1 (S-1) is the first such system to attract significant interest from a wide user community and to stimulate synergistic interaction with low-resolution SSM products. Since 2018, the Copernicus Land Service has supplied a soil water index (SWI) product at 1 km resolution and European scale derived from S-1 data (Bauer-Marschallinger et al., 2018).
Despite this substantial progress, the Copernicus product is prone to the effects of temporal changes in vegetation and roughness that are not accounted for. Other approaches that mitigate the impact of vegetation and roughness changes on S-1 SSM retrieval performance have been developed (e.g., Balenzano et al., 2021; Madelon et al., 2022). However, the impact of crop canopies in modulating C-band backscatter remains an important disturbance for S-1 SSM retrieval. The upcoming Radar Observing System for Europe at L-band (ROSE-L) SAR constellation can overcome this limitation because, at L-band, the interaction of the wavelength with vegetation is much weaker than at C-band. S-1 and ROSE-L will share the same orbit and image the same areas, making it crucial to devise methods that integrate C- and L-band SAR acquisitions to maximize retrieval performance. The agreement between the Japan Aerospace Exploration Agency (JAXA) and the European Space Agency (ESA) on the “Cooperation for the Use of Synthetic Aperture Radar Satellites in Earth Science and Applications” has allowed the acquisition of consistent time series of ALOS-2/PALSAR-2 data over a few selected sites since 2020. This study aims to retrieve and assess SSM from time series of Sentinel-1 and ALOS-2/PALSAR-2 data and from C- and L-band airborne SAR acquisitions.

Test Sites and Data Sets
This study focuses on two sites: the Apulian Tavoliere (Southern Italy) and the Rur catchment (Western Germany). Both sites host hydrologic networks measuring SSM at various depths. Meteorological data, crop maps, DEMs, and soil texture maps are available. In preparation for the ROSE-L launch, the ESA SARSENSE airborne campaign, featuring SAR acquisitions at C- and L-band and an intensive collection of ground data, was carried out near the village of Selhausen in the Rur catchment (Mengen et al., 2021).
A similar effort was undertaken over the Apulian Tavoliere site, where in 2022 the SARSIMHT-NG campaign (Balenzano et al., 2024) included F-SAR overflights in April and June. Fully polarimetric C- and L-band data were acquired during each overflight. In situ data include crop maps, SSM measurements (both from ground stations and from sampling with portable probes over regular grids), meteorological data, irrigation scheduling, and volumes of water consumption.

ALOS-2/PALSAR-2 Data
Time series of ALOS-2/PALSAR-2 products have been gathered from two orbits over the Apulian Tavoliere from April 2021 to October 2023, and from two orbits over the Rur catchment from April 2021 to December 2023. The dataset has been divided into subsets of interferometric data with a temporal baseline of 28 days: eight subsets over the Apulian Tavoliere and seven over the Rur catchment.

Retrieval
The retrieval has been performed using the SMOSAR software developed at CNR-IREA (Balenzano et al., 2013 & 2021). The code implements short-term change detection to transform time series of dual-polarized SAR observations into time series of SSM and the associated uncertainty. The algorithm assumes that SSM may change between two successive SAR observations while all other surface parameters affecting radar backscatter (e.g., vegetation layer, soil roughness) remain constant; the shorter the revisit, the better the approximation. Moreover, the method uses dynamic masking of vegetated areas dominated by volume scattering, which obscures SSM sensitivity. Currently, SMOSAR can work on C- or L-band SAR data independently.

Results
A first experiment produced and analyzed SSM time series derived from Sentinel-1 and ALOS-2 separately and then aggregated them into an interleaved SSM product. The focus was on time series acquired over the Apulian Tavoliere and Rur catchment from 2019 to 2021.
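The short-term change-detection idea described above can be sketched in a few lines: if roughness and vegetation are assumed constant between two close acquisitions, the backscatter change (in dB) maps approximately linearly to an SSM change. The sensitivity value below is an illustrative assumption, not the SMOSAR calibration, and the sketch omits the dynamic vegetation masking and uncertainty propagation that the real algorithm performs.

```python
import numpy as np

# Assumed linear sensitivity of sigma0 (dB) to SSM (m3/m3) over bare or
# sparsely vegetated soil; 25 dB per m3/m3 is an illustrative value only.
SENSITIVITY_DB = 25.0

def ssm_time_series(sigma0_db, ssm_init):
    """Short-term change detection: convert backscatter changes between
    successive acquisitions into SSM changes, assuming all other surface
    parameters stay constant over the (short) revisit interval."""
    ssm = [ssm_init]
    for t in range(1, len(sigma0_db)):
        delta = (sigma0_db[t] - sigma0_db[t - 1]) / SENSITIVITY_DB
        # clip to a physically plausible volumetric water content range
        ssm.append(float(np.clip(ssm[-1] + delta, 0.0, 0.5)))
    return ssm

# A wetting event (+2 dB) followed by dry-down (-1 dB):
series = ssm_time_series([-14.0, -12.0, -13.0], ssm_init=0.15)
print(series)
```

Note that this only yields relative SSM changes anchored to an initial value; an absolute retrieval additionally needs an anchoring strategy (e.g., long-term dry references).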
The performance was assessed using in situ data measured by the hydrologic networks deployed at the two sites. Results indicate an RMSE of 0.06 m³/m³ and a Pearson correlation of 0.6 estimated at the network scale. The presentation will discuss detailed results comparing the performance of SMOSAR SSM retrieval from Sentinel-1 and ALOS-2/PALSAR-2 data over the entire dataset and from airborne acquisitions over Selhausen and the Apulian Tavoliere.

References
Balenzano, Anna, et al. "On the use of temporal series of L- and X-band SAR data for soil moisture retrieval. Capitanata plain case study." European Journal of Remote Sensing 46.1 (2013): 721-737.
Balenzano, Anna, et al. "Sentinel-1 soil moisture at 1 km resolution: a validation study." Remote Sensing of Environment 263 (2021): 112554.
Bauer-Marschallinger, Bernhard, et al. "Toward global soil moisture monitoring with Sentinel-1: Harnessing assets and overcoming obstacles." IEEE Transactions on Geoscience and Remote Sensing 57.1 (2018): 520-539.
Madelon, Remi, et al. "Soil moisture retrieval at 1-km resolution making a synergistic use of Sentinel-1/2/3 data." EGUsphere 2022 (2022): 1-29.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: ALOS-2 PPP DEMONSTRATION PROJECTS STATUS

Authors: Shin-ichi Sobue, Yuta Tochigi
Affiliations: JAXA
The Advanced Land Observing Satellite-2 (ALOS-2), launched on May 24, 2014, is a follow-on mission from ALOS. To improve data accessibility and to stimulate usage of the archive, JAXA agreed to mirror all ALOS-2 standard products for the Japanese region to the Japanese governmental cloud platform “Tellus”, operated by Tellus Inc. To promote more practical and operational use of these archived data by domestic governments and other Japanese users, JAXA launched a public-private partnership (PPP) project named the “ALOS-2 PPP demonstration project” in June 2022, in cooperation with intermediate users and end-users including the private sector, local governments and academia. In 2024, we have been working on fifteen demonstration projects related to digital transformation and green innovation use cases, including land movement and infrastructure monitoring and agriculture monitoring. This paper describes the status of the ALOS-2 PPP demonstration projects (Phase A) and the aims of the ALOS-4/2 PPP demonstration project (Phase B) starting in 2025.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Update of Cal/Val and research activities of the Advanced Land Observing Satellite-2 (ALOS-2)

Authors: Takeo Tadono, Dr. Masato Ohki, Dr. Takeshi Motohka, Dr. Shinichi Sobue, Dr. Osamu Isoguchi
Affiliations: Jaxa, Remote Sensing Technology Center of Japan
The Advanced Land Observing Satellite-2 (ALOS-2) is the follow-on mission to the Phased Array type L-band Synthetic Aperture Radar (PALSAR) onboard ALOS and was launched in May 2014. ALOS-2 continues to operate smoothly with PALSAR-2 beyond its design life of seven years. PALSAR-2 onboard ALOS-2 is an L-band SAR, little affected by clouds and rain. This all-weather observing capability is suitable for rapidly monitoring disasters, and it can obtain information on vegetation and the ground surface, partially penetrating vegetation. The observation capability of PALSAR-2 was a dramatic improvement over PALSAR, e.g., a finer spatial resolution of 1 x 3 m in its Spotlight observing mode, a wider swath of 490 km in ScanSAR mode, and right- and left-looking capability. Sensor calibration is extremely important to take advantage of these capabilities. This paper presents an overview and the major accomplishments of ALOS-2/PALSAR-2. The key to achieving these results is evaluating and improving its observation accuracy, i.e., calibration and validation, and planning systematic global observations based on a basic observation scenario. Its design is based on the experience gained during the Japanese Earth Resources Satellite-1 (JERS-1) and ALOS operations and has been carried over to ALOS-4, successfully launched in July 2024. To utilize observed data, users need to understand its accuracy. For this reason, calibration and validation are critical issues for data providers. JAXA performs calibration and sensor characterization of PALSAR-2 continuously, from the initial calibration phase after launch to operational calibration during satellite operation, and publishes the results of the accuracy evaluation on the web. Image quality is regularly evaluated during calibration and covers the geometric, radiometric, and polarimetric accuracy of the standard product.
This requires measuring the resolution, sidelobe level, noise equivalent sigma zero (NESZ), and ambiguity. Corner Reflectors (CRs) and Active Radar Calibrators (ARCs) with theoretically well-defined reflective properties are used as artificial targets in these evaluations. Precise measurements of the geolocations of these targets are stored for later geometric accuracy evaluation and radiometric calibration. JAXA routinely installs CRs with a length of about 3 m at Tomakomai City and Gotenba City in Japan and has developed portable CRs and ARCs for experimental periodic calibration, mainly in the Kanto region. Based on Research Announcements (RAs) coordinated with the International Calibration and Validation Science Team (CVST), CRs and other sites that members routinely set up are used for ALOS-2 calibration. Major examples of research using ALOS-2 include disaster prevention, disaster monitoring, forest observation, and estimation of ocean winds during cyclones and typhoons. For example, when flooding occurs in Japan, ALOS-2 conducts emergency observations and automatically extracts inundated areas from PALSAR-2 data and archived data in the same area, and provides this information to disaster management agencies. The agencies are implementing emergency response measures based on the information obtained. The Geospatial Information Authority of Japan (GSI) also regularly monitors crustal deformation based on 3-meter resolution observations by ALOS-2, which is used as a reference for mountain entry restrictions when volcanic activity is active, for example. In the forest observation by ALOS-2, data from ALOS-2 and other sources have been used to create a global forest above-ground biomass map in the Climate Change Initiative (CCI) as a result of ESA-JAXA collaboration research. 
Another result of the ESA-JAXA collaboration is the cooperation with the Meteorological Research Institute of Japan (MRI) and the French National Institute for Ocean Science and Technology (IFREMER) on the estimation of ocean winds during typhoons and cyclones. IFREMER and ESA are conducting C-band SAR ocean wind estimation on an operational basis. In Japan, JAXA and MRI are working on a similar theme in a joint research project. This activity will also involve collaboration with the RADARSAT Constellation Mission (RCM) of the Canadian Space Agency (CSA). In this presentation, we report on the latest status of ALOS-2 operations, including calibration, validation, and utilization research.
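As an illustration of the radiometric calibration step mentioned above, PALSAR-2 amplitude products are commonly converted to sigma-naught using the calibration factor published by JAXA (CF = -83.0 dB for the standard 16-bit amplitude products); treat the exact value as product-dependent and check JAXA's official calibration results. A minimal sketch:

```python
import numpy as np

# Published PALSAR-2 calibration factor for 16-bit amplitude products (dB);
# verify against JAXA's calibration results for the specific product level.
CF_DB = -83.0

def dn_to_sigma0_db(dn):
    """Convert PALSAR-2 digital numbers (amplitude) to sigma-naught in dB:
    sigma0 = 10 * log10(DN^2) + CF."""
    dn = np.asarray(dn, dtype=float)
    return 10.0 * np.log10(dn ** 2) + CF_DB

print(dn_to_sigma0_db(1000.0))  # -> -23.0 dB
```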
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Advancing Tropical Forest Disturbance Monitoring: Integrating Multi-Source and Multi-Wavelength SAR Data for Improved Detection and Timeliness

Authors: Johannes Balling, Bart Slagter, Sietse van der Woude, Martin Herold, Dr. Johannes Reiche
Affiliations: Wageningen University, Helmholtz GFZ German Research Centre for Geosciences
Tropical forests play a vital role in the global carbon cycle and biodiversity, acting as important carbon sinks and providing habitats for diverse flora and fauna. However, high deforestation rates in many tropical regions continue to threaten these critical ecosystems. To effectively monitor and protect these forests, spatially and temporally accurate information on disturbances is crucial, supporting sustainable forest management and enabling law enforcement to combat illegal deforestation. Synthetic Aperture Radar (SAR) based remote sensing has become an invaluable tool in tropical forest monitoring. Unlike optical remote sensing, SAR can penetrate cloud cover and acquire data at night, making it highly suited for consistently tracking forest disturbances in tropical regions where cloud cover is frequent (Reiche et al., 2024). The emergence of freely accessible Sentinel-1 data has made large-scale SAR-based forest monitoring feasible, overcoming past challenges posed by high costs and inconsistent data archives. In recent years, various SAR-based forest disturbance monitoring and alerting systems (e.g., RADD, TropiSCO, JJ-FAST, and Deter-B) (Doblas et al., 2020; Mermoz et al., 2021; Reiche et al., 2021; Watanabe et al., 2021) have been developed to detect changes as quickly as possible with minimal omission and commission errors. To achieve scalability, monitoring and alerting systems rely on a pixel-based approach and use single-wavelength SAR data (either C- or L-band), focusing on changes in backscatter levels. However, this approach can result in delayed or missed detections, especially in areas where post-disturbance tree remnants are present. This study examines the potential of combining multi-source information to improve large-scale forest disturbance detection, specifically using integrated backscatter and backscatter-based Grey-Level Co-occurrence Matrix (GLCM) textural features (Balling et al., 2023).
Additionally, the study explores the use of multi-wavelength SAR data (C- and L-band) (Balling et al., 2024) to improve large-scale forest disturbance detection. A key challenge in current pixel-based SAR monitoring and alerting systems is the frequent occurrence of omission errors and delayed detections due to post-disturbance tree remnants, which can produce backscatter levels similar to undisturbed forest. To address this issue, we examined the effectiveness of incorporating GLCM textural information alongside traditional pixel-based backscatter data. Using a probabilistic Bayesian change algorithm on C-band Sentinel-1 data, we tested various GLCM features and found that the GLCM Sum Average with a kernel size of 5 performed best across multiple disturbance types, including fires and large-scale logging events, effectively reducing tree-remnant-based omission errors. This integrated approach lowered omission errors by up to 36% and improved detection timeliness, reducing the average delay by approximately 30 days. We further explored the benefits of integrating C-band and L-band SAR data for large-scale forest disturbance monitoring, a task now made feasible by the recent free availability of the ALOS-2 ScanSAR Level 2.2 L-band dataset. Previously, access to L-band data, such as from ALOS, was costly, limiting its use in large-scale monitoring and alerting systems. By combining Sentinel-1 C-band with ALOS-2 ScanSAR L-band data, we detected forest disturbances using a probabilistic Bayesian algorithm. The dual-wavelength approach increased detection rates by up to 38% and reduced the average detection delay by 16.5 days. L-band data proved especially valuable in reducing omission errors in cases involving post-disturbance tree remnants, where C-band data alone often proved insufficient.
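The GLCM Sum Average feature referred to above can be sketched as follows. The offset, symmetric counting, and toy quantisation below are illustrative choices (the study applied the feature with a 5x5 kernel on quantised Sentinel-1 backscatter).

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalised grey-level co-occurrence matrix for one offset.

    img must already be quantised to integer grey levels 0..levels-1.
    """
    p = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            a, b = img[i, j], img[i + dy, j + dx]
            p[a, b] += 1
            p[b, a] += 1  # symmetric counting
    return p / p.sum()

def sum_average(p):
    """Haralick 'Sum Average': expectation of i + j under the GLCM,
    i.e. sum over k of k * p_{x+y}(k) with k = i + j (0-based levels)."""
    levels = p.shape[0]
    k = np.arange(levels)[:, None] + np.arange(levels)[None, :]
    return float((k * p).sum())

# Toy 3x3 image quantised to 4 grey levels, horizontal offset (1, 0):
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 3, 3]])
p = glcm(img, levels=4)
print(sum_average(p))
```

In a sliding-window setting, this feature would be computed per pixel over the local kernel, yielding a texture band that is then fed into the change-detection algorithm alongside the backscatter itself.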
Conversely, C-band detections were able to capture even small-scale disturbances that were largely missed by L-band, which has a relatively coarse spatial resolution of around 100 m in ALOS-2 ScanSAR data. The potential of multi-wavelength SAR data will be further amplified by upcoming L-band missions, such as NISAR (2025) and ROSE-L (2028), which will provide freely accessible L-band data with improved spatial resolution. Our findings suggest that relying solely on traditional pixel-based methods using single-source (pixel-based backscatter) and single-wavelength (C- or L-band) data is insufficient for accurate, large-scale SAR-based forest disturbance monitoring. We demonstrated that incorporating additional features, such as textural information, or combining multiple SAR wavelengths can improve detection performance. These approaches have the potential to be integrated into existing monitoring and alerting systems, improving both detection accuracy and timeliness. In doing so, monitoring and alerting systems can provide stronger support for carbon mitigation efforts and assist law enforcement in safeguarding vulnerable tropical forest ecosystems.
References
Balling, J., Herold, M., & Reiche, J. (2023). How textural features can improve SAR-based tropical forest disturbance mapping. International Journal of Applied Earth Observation and Geoinformation, 124, 103492. https://doi.org/10.1016/j.jag.2023.103492
Balling, J., Slagter, B., van der Woude, S., Herold, M., & Reiche, J. (2024). ALOS-2 PALSAR-2 ScanSAR and Sentinel-1 data for timely tropical forest disturbance mapping: A case study for Sumatra, Indonesia. International Journal of Applied Earth Observation and Geoinformation, 132, 103994. https://doi.org/10.1016/j.jag.2024.103994
Doblas, J., Shimabukuro, Y., Sant’Anna, S., Carneiro, A., Aragão, L., & Almeida, C. (2020). Optimizing Near Real-Time Detection of Deforestation on Tropical Rainforests Using Sentinel-1 Data. Remote Sensing, 12(23), 3922. https://doi.org/10.3390/rs12233922
Mermoz, S., Bouvet, A., Koleck, T., Ballère, M., & Le Toan, T. (2021). Continuous Detection of Forest Loss in Vietnam, Laos, and Cambodia Using Sentinel-1 Data. Remote Sensing, 13(23), 4877. https://doi.org/10.3390/rs13234877
Reiche, J., Balling, J., Pickens, A. H., Masolele, R. N., Berger, A., Weisse, M. J., Mannarino, D., Gou, Y., Slagter, B., Donchyts, G., & Carter, S. (2024). Integrating satellite-based forest disturbance alerts improves detection timeliness and confidence. Environmental Research Letters, 19(5), 054011. https://doi.org/10.1088/1748-9326/ad2d82
Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N.-E., Odongo-Braun, C., Vollrath, A., Weisse, M. J., Stolle, F., Pickens, A., Donchyts, G., Clinton, N., Gorelick, N., & Herold, M. (2021). Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters, 16(2), 024005. https://doi.org/10.1088/1748-9326/abd0a8
Watanabe, M., Koyama, C. N., Hayashi, M., Nagatani, I., Tadono, T., & Shimada, M. (2021). Refined algorithm for forest early warning system with ALOS-2/PALSAR-2 ScanSAR data in tropical forest regions. Remote Sensing of Environment, 265, 112643. https://doi.org/10.1016/j.rse.2021.112643
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: RadWet: A novel approach for mapping inundated vegetation dynamics in grassland and forested wetlands using C-Band and L-Band radar imagery.

Authors: Gregory Oakes, Andy Hardy, Dr Pete Bunting, Dr Ake Rosenqvist
Affiliations: Aberystwyth University, SoloEO
Radar remote sensing offers unique capabilities for monitoring wetland inundation dynamics at fine spatial scales and at high temporal frequency. The use of radar imagery for flood monitoring is well established, due to the characteristically low backscatter of the water’s surface. But there are also opportunities for mapping inundation underneath vegetation canopies, because surface water and emergent vegetation can produce a distinct double-bounce backscatter mechanism. By exploiting complementary data streams from C- and L-band radar systems, inundation can be mapped in both grassland and forested wetland environments. We present the RadWet radar-based wetland inundation mapping system. RadWet isolates key backscatter characteristics, using dual-polarised indices, split-based thresholding and temporal variability metrics for open water and emergent vegetation, to automatically generate training data. These data are used to train a fully optimized machine learning classifier for computationally efficient mapping of inundation on a scene-by-scene basis. RadWet was applied to serial Sentinel-1 C-band image acquisitions over two grassland-dominated wetlands: 1) the Barotseland Floodplain, Western Zambia, and 2) the Upper Rupununi Wetlands, Guyana. This provided wetland inundation maps at a 12-day frequency from 2017-21, achieving classification accuracies of 88.68% (95% confidence interval 88.65-88.72%) and 80.15% (95% confidence interval 80.06-80.23%) respectively. The resulting maps provide new insights into the inter- and intra-annual spatiotemporal dynamics of inundation within vegetated areas, including periods of drought with significant implications for local populations. Additionally, we found that ~80% of the total wetted extent is inundated vegetation, which is not mapped by existing inundation mapping systems such as the Global Open Surface Water Layer.
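The automatic-threshold idea behind the training-data generation can be illustrated with an Otsu-style split of a bimodal backscatter histogram into a dark (open-water) and a bright (non-water) mode. This is a generic sketch of histogram thresholding, not RadWet's actual split-based algorithm:

```python
import numpy as np

def otsu_threshold(x, bins=256):
    """Threshold that maximises between-class variance of a 1-D sample
    (e.g. sigma0 backscatter in dB); values below it ~ open water."""
    hist, edges = np.histogram(x, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                     # class-0 weight per candidate split
    w1 = w0[-1] - w0                         # class-1 weight
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.where(w0 == 0, 1, w0)      # class-0 mean (guarded division)
    mu1 = (m0[-1] - m0) / np.where(w1 == 0, 1, w1)
    between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
    return float(centers[np.argmax(between)])
```

Pixels darker than the returned threshold could then seed the water class of the training set, with temporal-variability metrics filtering out transient low-backscatter artefacts.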
The RadWet mapping approach was further developed and applied to serial ALOS-2 PALSAR-2 ScanSAR L-band data over the forest-dominated wetlands of the Amazon Basin. This produced a unique dataset of fine-scale (50 m) wetland maps at a 42-day frequency between 2019-23 over the entire basin. A strong correlation was demonstrated between RadWet-mapped inundation extent and auxiliary datasets: 1) NASA-GFZ GRACE-FO measurements of water-equivalent-thickness mass anomaly (Pearson’s r = 0.96), 2) lake water levels measured by USDA G-REALM (Pearson’s r between 0.83 and 0.91), and 3) in-situ river stage data (Pearson’s r between 0.78 and 0.94). The resulting maps revealed important intra- and interannual dynamics, including the response of the basin to El Niño events. RadWet provides a tractable solution for mapping wetland dynamics in both grassland and forested environments. Its transferability across multiple sites and large scales gives confidence that this system can be applied at continental or pan-tropical scales. This is particularly relevant given the range of C- and L-band radar systems due to come online in the near future, including Sentinel-1 Next Generation, ROSE-L, ALOS-4 PALSAR-3, and NISAR. Providing new data on inundation dynamics in this way can transform our understanding of how our changing climate is impacting tropical wetlands, particularly in the face of climatic shocks like strong El Niño events, and the subsequent impact on ecosystems, livelihoods, and global greenhouse gas emissions.
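The correlations quoted above are standard Pearson product-moment coefficients between the mapped inundation time series and each auxiliary record; for reference, a minimal NumPy version:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson product-moment correlation between two equal-length series."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```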
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: DEM error estimation with independent component analysis applied to time-series InSAR data of Sentinel-1 and ALOS-2 in Miyakejima volcano

Authors: Yutaro Shigemitsu, Matthew Gaddes, Andrew Hooper, Takeo Tadono
Affiliations: Japan Aerospace Exploration Agency, Earth Observation Research Center, University of Leeds, COMET, School of Earth and Environment
DEM error is a noise component that should be removed in time-series interferometric SAR (InSAR) analysis. Its removal is particularly important when the perpendicular baseline between two acquisitions is large, as in the case of ALOS data, where baselines are on the order of kilometres. For satellites whose orbits are controlled to within a few hundred metres, such as Sentinel-1 and ALOS-2, the DEM error component is therefore often not discussed in noise removal. However, since a non-zero perpendicular baseline exists regardless of which satellite is used, the DEM error component should not be ignored; this issue has not been sufficiently discussed in previous studies. Here, we conducted time-series InSAR analysis using Sentinel-1 and ALOS-2 data to verify whether DEM error components are present in each dataset. We used Sentinel-1 data acquired in 2022-2024 in Path 46 (descending orbit), ALOS-2 data acquired in Path 46 (descending orbit) in 2015-2024, and ALOS data acquired in Path 57 (descending orbit) in 2006-2010 over Miyakejima volcano, Japan. Miyakejima is a small island measuring just 2 km by 2 km, and so is ideal for minimising the computation required for InSAR analysis. We use two methods to robustly verify the extraction of DEM error components. First, we use LiCSAlert (Gaddes et al., 2019), which is based on independent component analysis (ICA), to identify independent components that correlate with the DEM error contained in the InSAR data. We modify LiCSAlert so that the correlation coefficient is calculated by comparing the time series of each extracted independent component with the perpendicular baseline (Liang et al., 2019). Next, we create a spatial image of the DEM error component using the conventional method of Fattahi and Amelung (2013), and compare it with the DEM-error-correlated independent components extracted by ICA in terms of spatial pattern. We have verified the validity of our method using ALOS data. The results show that the highest correlation coefficient between an independent component and the perpendicular baseline in the ALOS InSAR analysis was 0.9, and the time series of that component closely tracks the perpendicular baseline history, making it clear that the DEM error component is extracted. Next, we applied our method to ALOS-2, which uses the same wavelength as ALOS, and found a correlation coefficient of 0.9 as well, indicating that the method can remove DEM error components with high accuracy in ALOS-2 data too. Finally, applying the same method to C-band Sentinel-1 data, the correlation coefficient is smaller, at about 0.5, but still higher than that of the other independent components; we conclude that the DEM error component can very likely be extracted there as well. To test the validity of our method in terms of the spatial characteristics of the DEM error components, we compare the results with the DEM error spatial images produced using the method of Fattahi and Amelung (2013). The spatial characteristics of both methods are in good agreement for the ALOS, Sentinel-1, and ALOS-2 InSAR analyses. Our results show that a DEM error component is present in time-series InSAR data not only from ALOS but also from Sentinel-1 and ALOS-2, which have relatively small perpendicular baselines. Although the DEM error component can be removed using conventional methods, which require perpendicular baseline data, our method can remove it using only the phase component.
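The component-selection step described above amounts to correlating each independent component's time course with the perpendicular-baseline history, since the DEM-error phase scales with the baseline. A minimal sketch with hypothetical arrays (not the LiCSAlert code itself):

```python
import numpy as np

def dem_error_component(ic_timecourses, bperp):
    """Return the index of the independent component whose time course best
    tracks the perpendicular baseline, plus its absolute Pearson correlation."""
    b = (bperp - bperp.mean()) / bperp.std()
    corrs = [abs(float(np.mean(((tc - tc.mean()) / tc.std()) * b)))
             for tc in ic_timecourses]
    best = int(np.argmax(corrs))
    return best, corrs[best]
```

A high correlation (as the 0.9 found for ALOS and ALOS-2 above) flags that component as the DEM-error term to be removed from the time series.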
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: The Impact of ESA-JAXA Cooperation on Wall-to-wall Mapping of Terrestrial Carbon

Authors: Maurizio Santoro, Oliver Cartus, Jukka Miettinen, Dr Ake Rosenqvist, Frank Martin Seifert, Osamu Ochiai, Takeo Tadono, Kazufumi Kobayashi
Affiliations: Gamma Remote Sensing, VTT Technical Research Centre of Finland, soloEO, ESA ESRIN, Japan Aerospace Exploration Agency, Remote Sensing Technology Center of Japan
Timely and accurate quantification of carbon stored in vegetation is of paramount importance to understand climate pathways and support policies developing natural climate solutions. Measuring carbon at continental to global scale is a major challenge that in practice cannot be mastered with field campaigns alone. Remote sensing from space fills the gap with repeated observations, which however do not represent a direct measure of the carbon stored in the vegetation. Only by interfacing diverse sets of observations from space with field measurements and mathematical models is it possible to obtain a reliable, albeit still uncertain, portrait of the spatial distribution of terrestrial carbon. The temporal consistency of the remote sensing observations used to estimate carbon is fundamental to correctly pick up dynamics over decades. Currently, the most suited combination of satellite observations to characterize carbon at high resolution includes images acquired by Synthetic Aperture Radar (SAR) at different frequencies and LiDAR-based metrics of canopy structure from footprint-level measurements. Although data acquired by SARs in space are characterized by weak sensitivity to forest structural variables, the use of repeated observations has been proven to yield an accurate estimation of forest biomass well beyond the level considered possible in earlier studies. Availability of and access to hyper-temporal ESA C-band data collected by the Envisat ASAR and Sentinel-1 SAR and repeated JAXA L-band data collected by ALOS-1/2 PALSAR-1/2 have enabled unique wall-to-wall mapping of forest carbon at (sub)hectare resolution for the last two decades. The most prominent examples are the global datasets of forest aboveground biomass by the Climate Change Initiative (CCI) Biomass project and the European-wide dataset of forest biomass variables by the Forest Carbon Monitoring (FCM) project, both supported by ESA-JAXA cooperation on biomass mapping.
The key to obtaining such data products, which are becoming a reference for several downstream activities, is the complementarity of the SAR observations and the capability of retrieval models to handle the physics behind the observations. In this presentation, we will illustrate the design of the CCI and FCM methods, with emphasis on the impact of joint C- and L-band observations on the carbon data products. We will then discuss the evolution of terrestrial carbon mapping from space, taking into consideration current and future missions that provide global observations suited to support its estimation.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: C.03.05 - POSTER - Copernicus CRISTAL: Operational Monitoring for Cryospheric Science, Hydrology, and Oceanography from Coasts to Polar Seas

The Copernicus polaR Ice and Snow Topography ALtimeter (CRISTAL) will significantly enhance Copernicus’ monitoring capabilities over the cryosphere and provide invaluable data over oceans and inland waters. CRISTAL will be the first operational radar altimeter for the cryosphere, the first Ka-band SAR mode altimeter in space, and the first Ku/Ka dual-frequency altimeter. CRISTAL’s Ku channel will be interferometric, and the mission will also provide the first combination of an altimeter and radiometer poleward of 82°, thanks to the AMR-CR radiometer provided by NASA/JPL. CRISTAL’s altimeter builds on the heritage of CryoSat with several improvements, including the addition of the Ka channel for snow depth measurements, a high bandwidth of 500 MHz in both Ku and Ka channels, flexible tracking (closed/open loop), open-burst mode over sea ice (allowing fully-focused SAR altimetry processing), and full coverage of ocean, coasts, and all hydrological targets. The mission is now in the implementation phase, and the first of two satellites, CRISTAL-A, is expected to be ready by mid-2027.

This session will welcome contributions showcasing advances from CRISTAL and preparing the user community to exploit its data. Possible contributions include:
• Studies refining geophysical algorithms for CRISTAL, including dual-band and microwave measurements.
• CRISTAL performance analysis based on models, simulations, and in situ data.
• Preparatory activities for calibration and validation over the cryosphere, including campaigns.
• Studies on the expected impact of CRISTAL on Copernicus Services and specific user applications.
• Synergy studies with other Copernicus Missions, especially CIMR and ROSE-L.
• Joint exploitation of IRIS and AMR-CR measurements over the cryosphere.
• Ocean studies refining algorithms for open and coastal areas, and performance analysis.
• Preparatory oceanographic cal/val activities, including campaigns.
These contributions will ensure readiness for CRISTAL data exploitation and highlight its significant impact on monitoring the cryosphere and hydrological targets, as well as its support to oceanography.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Monitoring the cryosphere with CRISTAL: Advancing calibration and validation through the CRISTAL IN-PROVA Project

Authors: Claire Miller, Henriette Skourup, Sebastian B. Simonsen, Julien Renou, Geir Moholdt, Dr. Christian Haas, Vincent Favier, Emma Woolliams, Eva Le Merle, Vincent Boulenger, Nathan Lalloué, Julien Sarrau, Javier Pastor, Jean-Christophe Poisson, Sajedeh Behnia, Renée M. F. Hansen, Ghislain Picard, Emmanuel Lemeur, Laurent Arnaud, Nicolas Taburet, Mahmoud El Hajj, Jérôme Bouffard, Michele Scagliola, Alessandro Di Bella, Filomena Catapano
Affiliations: NOVELTIS, DTU Space, CLS, NPI, AWI, IGE, NPL, vorteX-io, ESA ESRIN
The polar regions are undergoing rapid and dramatic changes due to climate change, with profound implications for global systems. Monitoring these transformations is essential but remains challenging due to the remoteness and harsh conditions of these environments. The CRISTAL (Copernicus Polar Ice and Snow Topography Altimeter) mission, ESA’s ice mission developed as part of the European Union Copernicus programme, represents a significant advancement in cryosphere observation. As the first high-resolution, operational radar altimeter mission for the cryosphere, CRISTAL will provide invaluable data on sea ice, icebergs and land ice, as well as on the ocean and inland-water domains. The CRISTAL IN-PROVA (CRISTAL In-Flight Level-2 Products Validation Plan) project is designed to contribute to the mission’s success by establishing a calibration and validation roadmap for CRISTAL’s Level-2 data products. These products cover the cryosphere (sea ice, icebergs and land ice) and inland waters (rivers, lakes). CRISTAL IN-PROVA employs a validation strategy that combines two approaches: direct comparison with Fiducial Reference Measurements (FRM) and cross-validation with other satellite altimetry missions and Copernicus Expansion Missions (CEM). FRM are independent, tailored and fully characterised reference measurements with traceable uncertainty, serving as a benchmark for CRISTAL’s products. Cross-validation expands this effort by comparing CRISTAL’s products with data from missions such as Sentinel-6, CryoSat-2, ICESat-2, SWOT, and Sentinel-3 NG TOPO, as well as from CEMs like CIMR and ROSE-L, where compatible. To strengthen the validation framework, the project is developing innovative tools. A Validation Performance Simulator enables pre-assessment of the calibration and validation methods by generating simulated and co-located datasets, ensuring their alignment with the CRISTAL Mission Requirements Document.
Additionally, a Match-Up System facilitates the co-location of CRISTAL data with external measurements, enabling precise comparisons to confirm data accuracy. This presentation will provide an overview of the CRISTAL IN-PROVA project’s objectives, showcasing current progress and future directions.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Impact of assimilating sea-ice thickness on the ECMWF system – DANTEX/CRISTAL

Authors: Tsz Yan Leung, Hao Zuo, Philip Browne, Patricia de Rosnay, Stephen English, Filomena Catapano
Affiliations: ECMWF, ESA
Consistent exploitation of interface observations across different Earth System components in our coupled data assimilation system is a key strategic development at ECMWF (European Centre for Medium-Range Weather Forecasts). Recent advances in coupled data assimilation methodology are enabling more ambitious use of current and future observations. In collaboration with ESA, the new Data Assimilation and Numerical Testing for Copernicus Expansion Missions (DANTEX) project will develop novel ways to use so-called interface observations from a range of satellite instruments, including the Copernicus polaR Ice and Snow Topography Altimeter (CRISTAL) mission. The snow and sea-ice thickness data from CRISTAL will provide information complementary to the sea-ice concentration data already widely used in Earth System data assimilation. In DANTEX we are exploring assimilation of CRISTAL data over sea ice. Developments have started by using existing observations (e.g. CryoSat-2 and SARAL/AltiKa) as proxies. The impact on analyses and coupled forecasts of adding new sea-ice thickness and snow depth information is being assessed by carrying out observing system experiments (OSEs) with the new Ocean ReAnalysis System-6 (ORAS6). ORAS6 uses an ensemble variational ocean data assimilation system that employs a hybrid background error covariance model with flow-dependent variances and correlation scales, which are critical for balancing contributions between satellite surface observations and sub-surface in-situ observations. Care has been taken in applying single-category sea-ice increments to the multi-category sea-ice model, and in maintaining balance between sea-ice and ocean state variables. Assessment of observation impact on the ECMWF coupled forecasting system has been carried out by initializing the ocean and sea-ice states from different OSE reanalyses.
In the future, we will explore assimilation of lower-level data from CRISTAL (alongside L1 assimilation of other interface observations, e.g. from CIMR) with a coupled data assimilation approach, including novel approaches to forward modelling in support of sea-ice thickness and snow depth assimilation.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: The ESA Permanent Facility for Altimetry Calibration in the service of CRISTAL Cal/Val

Authors: Stelios Mertikas, Dr. Craig Donlon, Paolo Cipollini, Dr. Franck Borde, Dr. Constantine Mavrocordatos, Dr. Fabrice Collard, Dr. Costas Kokolakis, Dr. Dimitrios Piretzidis, Dr. Simmon Williams, Mr. Achilles Tripolitsiotis, Mr. Xenophon Frantzis
Affiliations: Technical University of Crete, Space Geomatica, European Space Agency/ESTEC, Space Geomatica, OceanDataLab, National Oceanography Center
For more than two decades, the ESA Permanent Facility for Altimetry Calibration (ESA-PFAC) has supported the absolute and relative calibration and validation of all ESA and international altimeters in a consistent, continuous and reliable manner. This ground infrastructure includes active and passive reference targets and ocean calibrating regions with transponders, corner reflectors, coastal sites, sea-state optical monitoring devices, reference clocks, GNSS, a DORIS beacon and meteorological networks. It covers a region of 260 km east-west and 150 km north-south across the island of Crete, Greece. A continuously operating network of reference GNSS stations, at times complemented by ground radiometers, also validates the performance of the onboard radiometers. As a matter of usual practice, this infrastructure applies the ESA principle of Fiducial Reference Measurements and monitors satellite observations for altimetric range, datation and backscatter coefficient, as well as products such as sea-surface height and significant wave height. Past reference and currently operational satellite altimeters are routinely compared and monitored at different ground locations and settings, with diverse, redundant and complementary instrumentation and data processing. This work presents the unique capabilities that the ESA-PFAC can offer for the quality assurance of CRISTAL observations and products during its commissioning and operational phases. Specifically:
• The infrastructure is large enough to frequently cover the drifting orbits of CRISTAL and calibrate its measurements whenever the satellite passes over its calibrating region.
• Complementary and redundant infrastructure with transponders and corner reflectors calibrates range, sigma0 and datation for the Ku-band and Ka-band altimeters.
• Six coastal Cal/Val sites validate CRISTAL’s products in the open ocean.
• The length and orientation of the CRISTAL interferometric baseline are to be determined using collocated reference targets.
• Finally, uncertainties for all CRISTAL Cal/Val results are to be reported in accordance with the FRM principle prescribed by ESA.
Demonstration of all these ESA-PFAC capabilities will be provided using examples from CryoSat-2, Sentinel-6, Sentinel-3A/B and tandem Cal/Val activities.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Inland Water Extent Measurements for the CRISTAL Mission

Authors: Adrià Gómez Olivé, Charlie McKeown, Ferran Gibert, Albert Garcia-Mondéjar, Malcolm MacMillan, Michele Scagliola
Affiliations: isardSAT SL., isardSAT, Ltd., Lancaster University, RHEA, ESA ESRIN
The CLEV2ER LI+IW (CRISTAL LEVel-2 procEssor prototype and R&D - Land Ice and Inland Water) project aims at developing, implementing, and supporting the evolution of the CRISTAL Level-2 products and algorithms associated with the thematic Ground Processor Prototypes (GPPs) over Land Ice and Inland Water surfaces. Within this framework, one of the objectives is to study the potential of, and develop, valuable FF-SAR applications [1] to estimate the extent of inland water bodies from FF-SAR waveforms. The Fully-Focused SAR (FF-SAR) technique has revolutionised the processing of altimetry data. Using FF-SAR, pulse echoes can be coherently integrated, significantly improving along-track resolution while simultaneously reducing waveform contamination and enhancing the signal-to-clutter ratio, allowing not only the retrieval of water level but also the precise measurement of total water surface extent. Within this scope, the study introduces a methodology to showcase the capability of FF-SAR altimetry data in estimating and monitoring the size and spatial distribution of inland water bodies characterized by significant seasonal variability. This approach broadens the scope of radar altimetry beyond traditional water level measurements, enabling a comprehensive assessment of total water surface extent. Sentinel-6 FF-SAR data, acquired in open-burst mode to minimise the impact of secondary-lobe effects, was selected to better assess the applicability and benefits of the method. Additionally, interferometric altimetry data from the CryoSat-2 mission has been evaluated, as it offers a valuable approach for tracking near-nadir water bodies. The dual-antenna system of CryoSat-2 plays a critical role in acquiring angle-of-arrival information, allowing for the resolution of iso-range ambiguities and the precise geolocation of each pixel on the correct side of the satellite track.
FF-SAR radargrams were segmented using unsupervised techniques, such as K-means clustering, to classify pixels based on shape or intensity similarities and extract the corresponding water regions. Subsequently, the obtained water pixels were projected onto the ground to estimate the total extent of the water bodies. Cloud coverage does not affect the measurements, as radar signals penetrate clouds unaltered, allowing consistent and continuous data collection over time. Various case studies were analysed to validate and demonstrate the effectiveness of the approach, involving direct comparison of water extent measurements derived from Sentinel-6 and CryoSat-2 FF-SAR products against corresponding multispectral optical imagery from the Sentinel-2 satellites and in-situ data, specifically volumetric and water level measurements within the water regions provided by local institutions. This multi-dimensional validation process, integrating satellite observations and ground-based measurements, further enhanced the robustness and reliability of the results. [1] A. Egido and W. H. F. Smith, ‘Fully Focused SAR Altimetry: Theory and Applications’, IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 1, 2017, doi: 10.1109/TGRS.2016.2607122.
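The unsupervised segmentation step can be illustrated with a two-class K-means on pixel power: over water the nadir return is strong and specular, so the brighter cluster approximates the water class. This is a toy NumPy sketch under that assumption; the actual processor works on full FF-SAR radargrams and may use more clusters or shape features:

```python
import numpy as np

def kmeans_water_mask(radargram, iters=20):
    """Two-class 1-D K-means on power; the brighter cluster ~ water pixels."""
    x = radargram.ravel().astype(float)
    c = np.array([x.min(), x.max()], dtype=float)   # initialise at extremes
    for _ in range(iters):
        # assign each pixel to the nearest cluster centre
        labels = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()        # update cluster centres
    return labels.reshape(radargram.shape) == int(np.argmax(c))
```

The resulting mask would then be geolocated pixel by pixel (using the angle-of-arrival information in the interferometric case) and projected to the ground to sum the water extent.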
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Radar altimeter backscatter modeling for reducing uncertainties in sea ice thickness estimation

Authors: Rasmus Tonboe, Vishnu Nandan, John Yackel, Hoyeon Shi, Suman Singha
Affiliations: Denmark's Technical University, Amrita University, University of Calgary, Danish Meteorological Institute
This study uses a first-order backscatter model in combination with a waveform simulator and Archimedes’ principle. Our focus is to understand microwave interactions with snow-covered sea ice towards improved accuracy of sea ice thickness retrievals. The simulations are relevant for the European Space Agency (ESA)’s currently operational CryoSat-2 and the forthcoming Ka- and Ku-band dual-frequency CRISTAL radar altimeter missions. The scattering model is forced with snow depth data from the ESA Climate Change Initiative (CCI) round-robin data package, in which NASA's Operation IceBridge (OIB) data and climatology are included, and with detailed snow geophysical property profiles and other in situ observations from the Canadian Arctic. The insights gleaned from the model can aid in reducing uncertainties when assimilating freeboard in sea ice models, when radar altimeter data are combined with other remote sensing data for snow and ice thickness retrievals, and when planning efficient sampling strategies for field campaigns. The influence of mechanisms such as radar extinction in the snow, interface roughness at the air/snow and snow/sea-ice interfaces and within-snow layers, and ice floe buoyancy on Ka- and Ku-band radar altimeter backscatter from snow-covered sea ice is simulated separately. The spatial variability of the snow cover within the footprint is also simulated. Subsequently, their impact on the Ka- and Ku-band radar scattering horizon or “track point” (i.e. the scattering layer depth detected by the radar re-tracker) is evaluated. We then discuss implications for (1) the use of snow climatology in the conversion of radar freeboard into sea ice thickness, (2) first-year and multi-year ice types for the radar freeboard variability, and (3) sampling procedures for snow and sea ice parameters relevant for modeling during future sea ice field campaigns.
Finally, we discuss the use of radar altimeter data in combination with other remote sensing data for snow and ice thickness retrievals.
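The hydrostatic (Archimedes) balance underlying the freeboard-to-thickness conversion mentioned above can be written as h_i = (rho_w * h_fb + rho_s * h_s) / (rho_w - rho_i). The sketch below uses illustrative density values; the actual retrieval uses mission-specific retracker output and snow data:

```python
def freeboard_to_thickness(h_fb, h_snow,
                           rho_w=1024.0, rho_i=917.0, rho_s=300.0):
    """Sea-ice thickness (m) from ice freeboard and snow depth via hydrostatic
    balance: h_i = (rho_w * h_fb + rho_s * h_snow) / (rho_w - rho_i).
    Densities (water, ice, snow) in kg/m^3 are illustrative defaults."""
    return (rho_w * h_fb + rho_s * h_snow) / (rho_w - rho_i)
```

Because the denominator (rho_w - rho_i) is small, freeboard errors are amplified roughly tenfold in thickness, which is why the scattering-horizon ("track point") uncertainty discussed above matters so much.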
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Monitoring Ice Sheet Surface Melt and Snow Grain Growth with the CRISTAL AMR-CR Radiometer

Authors: Alamgir Hossan, Andreas Colliander, Federica Polverari, Mary Morris, Zhimeng Zhang, Shannon T. Brown, Sahra Kacimi
Affiliations: Jet Propulsion Laboratory, California Institute Of Technology, California Institute Of Technology
The ESA Copernicus polaR Ice and Snow Topography ALtimeter (CRISTAL) mission will ensure continued monitoring of the polar ice sheets beyond current polar altimeter missions such as CryoSat-2 and ICESat-2. In addition to the dual-frequency radar altimeter operating at Ku- and Ka-band frequencies (IRIS), the CRISTAL payload will include the Advanced Microwave Radiometer (AMR-CR), providing nadir radiometric observations across frequency channels ranging from 18.7 to 166 GHz. While the primary objective of the radiometer is to correct radar propagation delays caused by tropospheric water vapor, it will greatly enhance the monitoring of ice sheet surface properties across a wide range of frequencies. To support our analysis, we use observations from the radiometer onboard the Sentinel-6 satellite, which is identical to the AMR-CR. Sentinel-6 was launched on November 21, 2020 and operates in a non-sun-synchronous orbit with an inclination of 66°. It therefore provides coverage up to ±66° latitude, including the southern part of the Greenland Ice Sheet (GrIS), but it does not cover the Antarctic Ice Sheet (AIS). To address this limitation, we plan to incorporate comparable observations at 19, 37, 91, and 150 GHz from the Special Sensor Microwave Imager Sounder (SSMIS) onboard the Defense Meteorological Satellite Program (DMSP) F-18 satellite. We will assess the performance of commonly used threshold-based algorithms for mapping ice sheet melt using the AMR-CR low-frequency channels (18.7 and 34 GHz). These melt algorithms will further be evaluated using surface energy balance models forced by in situ observations from automatic weather stations (AWS), specifically PROMICE/GC-Net for the GrIS and AntAWS for the AIS. To estimate grain size and growth, we will perform radiative transfer inversion at AMR-CR-like frequency bands (18.7 to 166 GHz).
We will also evaluate the effectiveness of the Microwave Grain Size Index, defined as the normalized difference between two frequency channels. As a final assessment, the retrieved microwave grain size will be converted into an equivalent optical grain size/specific surface area (SSA) and compared with available in situ observations from AWS for both the GrIS and AIS.
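The normalized-difference index described above lends itself to a one-line implementation. The sketch below is illustrative only: the channel pairing and the function name are assumptions for demonstration, not the authors' algorithm.

```python
import numpy as np

def grain_size_index(tb_low, tb_high):
    """Normalized difference between two brightness-temperature channels [K].

    tb_low, tb_high: brightness temperatures at a lower and a higher
    frequency (e.g. 18.7 and 37 GHz -- an illustrative choice). Larger
    grains scatter more strongly at the higher frequency, depressing
    tb_high and increasing the index.
    """
    tb_low = np.asarray(tb_low, dtype=float)
    tb_high = np.asarray(tb_high, dtype=float)
    return (tb_low - tb_high) / (tb_low + tb_high)

# Example: coarse grains depress the higher-frequency channel
print(grain_size_index(250.0, 200.0))  # 50/450 ≈ 0.111
```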
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: In Preparation for CRISTAL Coastal Ocean Processing: Multi-Year Analysis of Interferometric CryoSat-2 Data Around Cuba

Authors: Pablo García Arnaud, Mr. Michel Guerra, Alba Granados, Albert Garcia-Mondéjar, Mònica Roca i Aparici, Pierre Femenias, Bruno Lucas, Jeremie Aublanc, Francesco Nencioli, Ester Vendrell
Affiliations: isardSAT, ESA-ESRIN, EUMETSAT, CLS
Coastal zones present unique challenges for altimetry studies, which aim to obtain precise geophysical retrievals of the ocean surface, including Sea Surface Height (SSH), Significant Wave Height (SWH), and wind velocities. Various factors influence the quality of these scientific parameters in coastal environments, such as coastal topography, the angle of attack of the ground track relative to the coastline, the specularity of water surfaces near the coast or highly reflective ground targets, sea state conditions, onboard operational modes, science chronograms, and other variables. Efforts have been made over time to manage undesired interferences within coastal zone science datasets. One effective strategy involves isolating the ocean-signal segment of the echo, significantly enhancing data quality by filtering out most interferences. In the CryoSat-2 Plus for Ocean (CP4O) project, funded by ESA, isardSAT proposed a coastal altimetry processing method using SAR interferometry (SARin) data. This approach leverages SARin's ability to determine the across-track angle of arrival for all range bins within the echoes, a capability that has not been available in any ESA altimetry mission since CryoSat-2 (CS2). The CP4O coastal processor identifies a segment of the original science echo characterized by high coherence, high power, and a low angle of arrival. This method assumes that the nadir backscattering represents the ocean signal during overflight, effectively exploiting SARin capabilities. During the second phase of the CP4O project, an enhanced version of the SARin retracker was introduced, extending its capabilities to identify and manage coastal environmental interferences in modes beyond SARin. By incorporating the Window Delay parameter, the coastal processor can be adapted to meet the requirements of various altimetry missions.
Subsequent significant enhancements to the coastal processor, rebranded as CORS, included updates to the retracking algorithm. Validation exercises were conducted after adapting the processor to the Copernicus missions Sentinel-3A/B (S3) and Sentinel-6 (S6), with support from ESA and EUMETSAT. This presentation highlights recent validations performed on CS2 data over a SARin-mode region around the Cuban archipelago. The integration of CP4O's SARin-specific methodology and the enhanced CORS framework led to the validation of SSH and SWH parameters through multiple approaches, including a comparison against the operational products. The results demonstrated a reduction in SSH noise and improved power spectral density (PSD) across all wavelengths, although rough sea states tend to degrade the level of improvement. On this occasion, we will conduct a multi-year analysis of the coastal processor, making use of the extensive CS2 SARin data series for this area of interest, with a particular focus on mean sea level rise. The upcoming CRISTAL altimetry mission, the first to incorporate SARin mode since CS2, will benefit from the validation efforts presented here, supporting the future development of coastal processing solutions tailored specifically for the mission.
References:
Ray et al. (2015). SAR altimeter backscattered waveform model. IEEE Trans. Geosci. Remote Sens. 53, 911-919. https://doi.org/10.1109/TGRS.2014.2330423
Garcia et al. (2018). SARin mode, and a window delay approach, for coastal altimetry. Advances in Space Research, 62(6), 1358-1370. https://doi.org/10.1016/j.asr.2018.03.015
Makhoul et al. (2018). Evaluation of the precision of different Delay-Doppler Processor (DDP) algorithms using CryoSat-2 data over open ocean. Advances in Space Research, 62(6), 1464-1478. https://doi.org/10.1016/j.asr.2018.04.004
Garcia et al. (2022). The CORS processor outcomes: improving the coastal ocean SSH & SWH series from the Copernicus altimetry constellation. 2022 OSTST Meeting. https://ostst.aviso.altimetry.fr/fileadmin/user_upload/OSTST2022/Presentations/COA2022-Coastal_Processing_from_the_Copernicus_Altimeters__the_CORS_processor_outcomes.pdf
Garcia et al. (2023). HYDROCOASTAL CCN2 Technical Note: coastal and FF-SAR processing of S6 data. https://satoc.eu/projects/hydrocoastal/docs/Hydrocoastal_isardSAT_TN_CCN2_Final.pdf
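The bin-selection criterion at the heart of the CP4O coastal processor (high coherence, high power, low angle of arrival) could be sketched as a simple per-bin mask; the threshold values, array names and function name below are illustrative assumptions, not the CP4O/CORS implementation.

```python
import numpy as np

def select_ocean_bins(power, coherence, aoa_rad,
                      coh_min=0.8, pwr_frac=0.1, aoa_max=2e-4):
    """Boolean mask of range bins likely containing the nadir ocean
    return (sketch; all thresholds are illustrative assumptions).

    power     : waveform power per range bin
    coherence : interferometric coherence per range bin
    aoa_rad   : across-track angle of arrival per range bin [rad]
    """
    power = np.asarray(power, dtype=float)
    return ((np.asarray(coherence) >= coh_min) &
            (power >= pwr_frac * power.max()) &
            (np.abs(np.asarray(aoa_rad)) <= aoa_max))

# Toy waveform: bin 2 is a bright, coherent, near-nadir return;
# bin 0 is a bright but off-nadir (coastal) interference.
mask = select_ocean_bins(power=[9.0, 0.1, 10.0, 0.2],
                         coherence=[0.95, 0.3, 0.9, 0.2],
                         aoa_rad=[5e-3, 0.0, 1e-5, 0.0])
print(mask)  # only bin 2 is kept
```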
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Validation and performance assessment of the future CRISTAL mission over inland waters using simulated measurements based on SWOT products

Authors: Julien Renou, Nicolas Taburet, Claire Miller, Valentin Fouqueau, Filomena Catapano
Affiliations: Collecte Localisation Satellites (CLS), Noveltis, Vortex-io, ESA-ESRIN
The CRISTAL Copernicus Expansion Mission (CEM), planned for launch in 2027, has the primary objective of monitoring sea ice thickness and land ice elevation, and as a secondary objective the monitoring of inland waters. Indeed, its onboard SAR altimeter, which measures surface topography with enhanced spatial resolution, has a proven ability to accurately estimate water surface height over lakes and rivers, as demonstrated by the Copernicus Sentinel-3 and Sentinel-6 missions. While validation approaches for these repetitive-orbit missions have made large progress over the last few years through ESA projects such as St3TART, quantifying the uncertainty budget for a geodetic orbit such as CRISTAL's remains challenging. As the drift of the satellite ground track implies a return to the same location only once a year, the standard validation, based on comparing water surface height time series estimated from altimetric measurements with time series provided by in-situ stations, cannot be performed for small lakes and rivers. In the frame of the ESA CRISTAL IN-PROVA project, we are developing a validation plan for the CRISTAL Level-2 data products over inland waters with the objective of assessing their performance. In order to provide a first assessment of compliance with the mission requirements, we first put in place an innovative method to generate simulated CRISTAL products based on SWOT L2 PIXC products. The main benefit of using SWOT KaRIn data is the provision of reliable measurements over most of the world's rivers and lakes at least every 21 days, which is valuable for reproducing the observations of the CRISTAL mission on its geodetic orbit. The associated uncertainty is simulated by adding specific instrument noise as well as by reproducing uncertainty that has already been quantified over inland waters for SAR altimetry missions with similar instruments.
Second, we propose validation methodologies for the simulated CRISTAL Level-2 products to be confronted with the mission requirements. A combination of existing validation approaches developed for the CryoSat-2 mission within the ESA Cryo-TEMPO project and innovative validation methods currently applied to Sentinel-3 hydrology thematic products for the ESA MPCS3 project is used. These methods rely, for instance, on both comparisons with Fiducial Reference Measurements (FRM) and cross-validation with other missions such as Sentinel-3A/B, Sentinel-6 and ICESat-2. Finally, validation metrics quantifying the performance of CRISTAL measurements are derived to provide key elements for the monitoring of water surface height.
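The product-simulation step described above (sampling SWOT-derived water-surface heights along a CRISTAL-like track and adding instrument noise) could be sketched as below; the 5 cm noise level and the function name are illustrative assumptions, not the IN-PROVA method.

```python
import numpy as np

def simulate_cristal_wse(swot_wse, noise_std=0.05, seed=0):
    """Turn SWOT-derived water-surface elevations [m] into pseudo-CRISTAL
    observations by adding zero-mean Gaussian instrument noise.
    (Sketch: the 0.05 m noise level is an illustrative assumption.)
    """
    rng = np.random.default_rng(seed)
    swot_wse = np.asarray(swot_wse, dtype=float)
    return swot_wse + rng.normal(0.0, noise_std, size=swot_wse.shape)

truth = np.full(1000, 120.0)   # a constant 120 m lake level
sim = simulate_cristal_wse(truth)
print(round(sim.mean(), 2))    # close to the 120.0 m truth
print(round(sim.std(), 3))     # close to the 0.05 m noise level
```

Validation metrics (bias, noise) can then be computed by differencing the simulated observations against the known truth, which is the main advantage of a simulation-based pre-launch assessment.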
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Geophysical retrievals from CRISTAL over Cryosphere and inland waters

Authors: Michele Scagliola, Filomena Catapano, Alessandro Di Bella, Jérôme Bouffard, Paolo Cipollini
Affiliations: Starion For Esa, ESA
The Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL) is a satellite mission developed as part of the European Union Copernicus programme expansion activities. The main objectives of CRISTAL are to measure and monitor the variability of Arctic and Southern Ocean sea ice thickness and its snow depth, and to measure and monitor the surface elevation, and changes therein, of polar glaciers, ice caps and the Antarctic and Greenland ice sheets. Built on the heritage of CryoSat, the CRISTAL instrument technical solution is based on a dual-frequency synthetic-aperture radar altimeter with interferometric capabilities at Ku-band and a second frequency at Ka-band. Additionally, CRISTAL will carry a passive microwave radiometer to support atmospheric corrections and surface-type classification. In this abstract, the current plan for the design, development and validation of the ESA CRISTAL Level-2 products is described. The CRISTAL Level-2 products will include different geophysical parameters in the sea ice, land ice, hydrology and ocean domains. ESA is responsible for the generation of the Level-2 Thematic Data Products (TDP) related to the sea ice, land ice and hydrology aspects of the mission. The CRISTAL Level-2 TDP and algorithms are planned to be provided in a two-step approach prior to launch: a ground processor prototype will be designed and developed in a first stage, with the primary objective of defining the Level-2 product format and up-to-date algorithms for geophysical variable retrieval that leverage the full capabilities of the CRISTAL instruments, while the operational processor will be developed in a second stage, with the objective of optimizing the computational performance and guaranteeing the integration into the Copernicus Ground Segment. At this moment the CRISTAL LEVel-2 procEssor prototype and R&D (CLEV2ER) activities, which are defined on a thematic-domain basis for Sea Ice & Icebergs and Land Ice & Inland Waters respectively, are ongoing.
Another key task to be carried out is the definition of the methods and protocols for the validation of the CRISTAL Level-2 TDP. To this aim, Pre-Flight and In-Flight validation will be prepared. The Pre-Flight validation will be based on the validation of the Level-2 products generated by the ground processor prototype starting from simulated altimeter radar echoes, in order to preliminarily assess the end-to-end performance. In the context of the IN-flight level-2 PROducts VAlidation plan (IN-PROVA) activity started in 2024, the In-Flight validation plan will be prepared to define methods and protocols for the validation of the CRISTAL Level-2 products against Fiducial Reference Measurements (FRM) and in synergy with other missions, primarily the other Copernicus Expansion Missions operating in the polar domain, namely CIMR and ROSE-L. The scientific activities related to geophysical algorithms and validation in preparation for the CRISTAL mission will take advantage of cooperation among ESA, EUMETSAT and NASA/JPL. This strategic collaboration aims to leverage the strengths and expertise of ESA, EUMETSAT and NASA/JPL, ensuring the success of the CRISTAL mission through a coordinated and comprehensive approach focused in particular on R&D studies and joint validation activities.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: CRISTALair waveform generator

Authors: Timur Mulic, Valeria Gracheva, Michele Scagliola
Affiliations: ESA-ESTEC, ESA-ESRIN
CRISTALair will act as the airborne demonstrator for the Interferometric Radar Altimeter for Ice and Snow (IRIS), the main payload of the Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL) mission. The instrument is scheduled to be completed next year. With ASIRAS (Airborne SAR/Interferometric Radar System), which had previously served as the airborne demonstrator for CryoSat, ceasing operations in 2019, the introduction of CRISTALair has become crucial. The main objective of CRISTALair is to support CRISTAL processing concepts and scientific hypotheses by providing dual-frequency-band L1B data from dedicated science campaigns. This demonstrator will be capable of operating in Low and High Resolution Modes simultaneously in Ku- and Ka-band. By placing dual receive antennas side by side in an across-track configuration, this setup also enables interferometric capabilities for both frequency bands. An altimeter typically measures at nadir, producing a backscattered signal that forms an altimeter waveform detected by the receiver. This waveform contains valuable information about the characteristics of the surface. Retracking algorithms are then commonly used to extract geophysical variables characterizing the overflown surface, such as the significant wave height (SWH). One crucial part of the CRISTALair radar altimeter processing chain is the set of L1B processors, which includes a low-resolution processor (L1B-LROS), a high-resolution Delay-Doppler processor (L1B-HR) and a Fully-Focussed SAR processor (L1B-FF). To simulate the low-resolution mode product (L1B-LROS), an analytical waveform generator which produces conventional LRM waveforms will be presented.
Here, the three-fold convolution of the point target response, the surface elevation probability density function (PDF) and the "rough surface" impulse response is applied according to [1]. Moreover, a further improvement of the model introduced by Amarouche [2] will be used to generate CRISTAL, CryoSat-2, ASIRAS and CRISTALair LRM waveforms with different parameters. In particular, the dependence of the reflected-impulse waveform on the antenna beam width, the SWH, the mispointing angle and the distance from the radar to the sea surface will be presented. As mentioned, CRISTALair will have SAR processing capabilities. For Delay-Doppler processing (UF-SAR), coherent integration of pulses within a burst, combined with incoherent integration between bursts, provides multi-look waveforms referred to a specific location on the ground. FF-SAR altimeter waveforms, in contrast, coherently integrate the echoes over a time longer than a burst to obtain a higher along-track resolution. While the literature offers facet-based numerical models for simulating delay-Doppler (SAR) altimeter echoes [3], which require high computation time, we decided to implement the semi-analytical model proposed by Wingham [4]. This model has usually been implemented for spaceborne altimeters to generate LRM and UF-SAR altimeter waveforms. Notably, it appears to be the only model that also includes interferometric capabilities. These will be tested over open ocean, snow, land and sea ice surfaces. For small mispointing pitch and roll angles the same author introduced a semi-analytical echo model in [5]. Modified by [6], this model is subject to further analysis for CRISTALair, taking into account the specific constraints of airborne altimeters, such as high roll mispointing angles and the acquisition geometry. Airborne altimeters generally have roll angles significantly larger (by about an order of magnitude) than those of satellites. This will be accounted for in the design of the CRISTALair waveform generator.
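The three-fold convolution structure of the LRM waveform model can be illustrated numerically. The sketch below is a toy Brown-style mean return using simplified Gaussian shapes and illustrative parameter values; it is not the CRISTALair generator, only a demonstration of how a wider surface-elevation PDF (higher SWH) stretches the leading edge.

```python
import numpy as np

C = 299792458.0  # speed of light [m/s]

def brown_waveform(t, ptr_width, swh, tau):
    """Toy Brown-style mean return: convolution of a Gaussian point-target
    response (PTR), a Gaussian surface-elevation PDF whose width follows
    the significant wave height (SWH), and a step-like "rough surface"
    impulse response exp(-t/tau)*u(t). All shapes and parameters are
    illustrative; t is a uniform time grid [s].
    """
    dt = t[1] - t[0]
    tk = t - t[len(t) // 2]                       # kernel axis, centred mid-array
    ptr = np.exp(-0.5 * (tk / ptr_width) ** 2)
    sigma_tau = max((swh / 4.0) * 2.0 / C, dt)    # height std -> two-way delay std
    pdf = np.exp(-0.5 * (tk / sigma_tau) ** 2)
    fsir = np.where(t >= 0.0, np.exp(-np.maximum(t, 0.0) / tau), 0.0)
    w = np.convolve(np.convolve(fsir, pdf, "same"), ptr, "same")
    return w / w.max()                            # normalized waveform

t = np.linspace(-5e-8, 2e-7, 2001)
w_calm = brown_waveform(t, ptr_width=3e-9, swh=2.0, tau=5e-8)
w_rough = brown_waveform(t, ptr_width=3e-9, swh=8.0, tau=5e-8)
# The higher-SWH waveform rises over a visibly wider leading edge.
```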
Another key consideration is that most semi-analytical models are designed for ocean applications, where the surface PDF is commonly approximated by a Gaussian distribution. However, for sea ice observations a lognormal PDF offers a more accurate representation of the surface characteristics, and this will be implemented in the model for each scenario. Simulated across-track elevation topography and off-nadir leads will also be discussed in the final presentation. The new waveform generator opens the possibility of simulating altimeter waveforms over sea ice, land ice and open ocean for airborne altimeters. It is worth noting that a test flight campaign is planned for next year during the CRISTALair activity, with the goal of testing the instrument's capabilities over snow and frozen lakes. Preliminary results and performance assessments will be presented and compared to the outcomes of the waveform generator. In summary, this contribution will present an analytical waveform generator producing LRM waveforms and a semi-analytical waveform generator producing delay-Doppler (UF-SAR) altimeter waveforms. Expanding on this work as the next step, both the power and the phase difference of FF-SAR altimeter waveforms will be demonstrated. This waveform generator is adapted to the specific constraints of airborne altimeters (high roll angle and acquisition geometry) and optimized for CRISTALair (high across-track and along-track beamwidth) across various surface types.
References:
[1] G. S. Brown, "The Average Impulse Response of a Rough Surface and Its Applications," IEEE Transactions on Antennas and Propagation, vol. AP-25, no. 1, January 1977.
[2] L. Amarouche, P. Thibaut, O. Z. Zanife, J.-P. Dumont, P. Vincent, and N. Steunou, "Improving the Jason-1 Ground Retracking to Better Account for Attitude Effects," Marine Geodesy.
[3] J. C. Landy, M. Tsamados, and R. K. Scharien, "A Facet-Based Numerical Model for Simulating SAR Altimeter Echoes From Heterogeneous Sea Ice Surfaces," IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 7, July 2019.
[4] D. J. Wingham, L. Phalippou, C. Mavrocordatos, and D. Wallis, "The Mean Echo and Echo Cross Product From a Beamforming Interferometric Altimeter and Their Application to Elevation Measurement," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 10, p. 2305, October 2004.
[5] D. J. Wingham, K. A. Giles, N. Galin, R. Cullen, T. W. K. Armitage, and W. H. F. Smith, "A Semianalytical Model of the Synthetic Aperture, Interferometric Radar Altimeter Mean Echo, and Echo Cross-Product and Its Statistical Fluctuations," IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 5, p. 2539, May 2018.
[6] L. Recchia, M. Scagliola, D. Giudici, and M. Kuschnerus, "An Accurate Semianalytical Waveform Model for Mispointed SAR Interferometric Altimeters," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 9, p. 1537, September 2017.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: CRISTALair: A Dual-frequency Airborne Demonstrator for the CRISTAL Altimeter

Authors: Valeria Gracheva, Michele Scagliola, Paolo Cipollini, Dr. Ferran Gibert, Ester Vendrell, Albert Garcia-Mondéjar, Filippo Speziali
Affiliations: ESTEC-ESA, ESRIN-ESA, isardSAT, MetaSensing
CRISTALair will be the airborne demonstrator for the Interferometric Radar Altimeter for Ice and Snow (IRIS), which is the primary payload of the Copernicus polaR Ice and Snow Topography ALtimeter (CRISTAL) mission. CRISTAL is one of the six Copernicus Expansion Missions being developed to address EU policy and evolving Copernicus user needs, and it will be the first mission to carry a dual-frequency (Ku- and Ka-band) high Pulse Repetition Frequency (PRF) radar altimeter, which allows SAR processing of the received echoes. The main objective of CRISTALair is to support CRISTAL science by providing dual-frequency L1B data from dedicated science campaigns in support of both pre- and post-launch cal/val activities. CRISTALair data can also be used for L2 algorithm development and validation, as well as for L1 algorithm verification, during the current Phase D of CRISTAL development. CRISTALair will acquire data simultaneously in Ku- and Ka-band, and it will be an interferometric radar in both frequency bands. The instrument will operate at a bandwidth of 1 GHz, correctly sampling the full bandwidth at a sampling rate of 1.28 GHz, and it will have a 10 kHz PRF. CRISTALair will therefore acquire data simultaneously with 4 channels (2 channels for Ku and 2 channels for Ka). CRISTALair's range window will be 300 m after on-board matched filtering. To improve CRISTALair's interferometric performance, the instrument will be mounted on a stabilized platform. Additionally, CRISTALair will be equipped with a high-precision IMU, an airborne laser scanner and a colour-infrared digital camera to support data validation and interpretation. The main performance drivers of CRISTALair are the elevation uncertainty and the point target response. CRISTALair's L1 ground processor will generate L1B products with the same format and processing modes as envisaged for CRISTAL, namely low-resolution, Delay/Doppler and fully-focussed Ku and Ka waveforms.
CRISTALair is currently in the final stage of its development. Final laboratory tests and first flight tests will be completed in Q1-Q2 2025. These results will be illustrated at LPS25.
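As a back-of-the-envelope check on the numbers quoted in the abstract, a 1 GHz bandwidth corresponds to roughly 15 cm of raw range resolution, so a 300 m range window spans on the order of 2000 range gates. The sketch below is simple arithmetic based on those quoted figures, not part of the instrument design.

```python
C = 299792458.0        # speed of light [m/s]

bandwidth_hz = 1.0e9   # CRISTALair chirp bandwidth (from the abstract)
window_m = 300.0       # range window after matched filtering (from the abstract)

range_res_m = C / (2.0 * bandwidth_hz)  # deltaR = c / (2B)
n_gates = window_m / range_res_m

print(round(range_res_m, 3))  # roughly 0.15 m per range gate
print(round(n_gates))         # on the order of 2000 gates across the window
```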
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Optimised Iceberg detection for CRISTAL mission

Authors: Pol Villalvilla, Juan Pedro López-Zaragoza, Albert García-Mondéjar, Jean Tournadre, Michele Tsamados, David Brockley, Michele Scagliola, Jérôme Bouffard, Paolo Cipollini
Affiliations: isardSAT, IFREMER, University College London, Mullard Space Science Laboratory, RHEA/ESA-ESRIN, ESA-ESRIN, ESA-ESTEC
The Copernicus Polar Ice and Snow Topography (CRISTAL) mission, expected to be launched in 2027, will carry on the heritage of CryoSat-2 in the study of the cryosphere. It will carry a dual Ku/Ka-band altimeter as its main payload, with interferometric capabilities in Ku-band, and will implement a continuous transmission/reception of pulses, as was done in Sentinel-6, to enhance the along-track capabilities. This continuous transmission/reception mode (Open Burst mode), which is planned to operate over sea ice and iceberg areas, allows Fully Focussed SAR processing, which yields an improved along-track resolution. Icebergs are a major contributor to the freshwater flux into the oceans, especially in the Antarctic [Tournadre et al. 2016], and play a major role in the Earth's ecosystem. Current iceberg detection methods using satellite altimetry data are based on the work described in [Tournadre et al. 2008]. This method detects the iceberg signature in the thermal-noise part of the L1B waveforms: since the iceberg is at a shorter range than the surrounding ocean, its signature appears before the leading edge of the ocean waveforms. Thanks to the OB mode and FF-SAR processing, the CRISTAL altimeter will be able to increase the along-track resolution down to 10 metres, compared with the nominal 300 metre resolution of unfocussed SAR waveforms. This improvement opens the door to a better determination of the iceberg shape, and therefore of its area and volume, which in the end allows a more precise estimation of the iceberg contribution to the freshwater flux into the oceans. The CLEV2ER Sea Ice and Icebergs project will include the state-of-the-art algorithms for iceberg detection and volume estimation in the CRISTAL IRIS L2 Ground Prototype Processor (GPP). This allows an early assessment of the performance of these algorithms with data simulated or adapted from other missions.
These algorithms will be further tuned to CRISTAL data, and detection and estimation methods suitable for FF-SARIn data will be developed. Ongoing research using Fully Focussed processed CryoSat-2 and Sentinel-6 data, together with CRISTAL simulated data, shows early promising results regarding the improved volume estimation of icebergs. In this work we will assess how this improvement in resolution impacts the estimation of key iceberg characteristics.
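The detection principle of [Tournadre et al. 2008], looking for echo power in the thermal-noise gates ahead of the ocean leading edge, can be sketched as below; the threshold, gate counts and function name are illustrative assumptions, not the operational algorithm.

```python
import numpy as np

def iceberg_flag(waveform, leading_edge_gate, n_sigma=5.0, noise_gates=20):
    """Flag a possible iceberg signature ahead of the ocean leading edge
    (sketch of the Tournadre et al. 2008 principle; thresholds and gate
    counts are illustrative assumptions).

    An iceberg, being closer to the radar than the sea surface, produces
    power in the thermal-noise region *before* the leading edge.
    """
    w = np.asarray(waveform, dtype=float)
    noise = w[:noise_gates]                 # assume the first gates are pure noise
    mu, sigma = noise.mean(), noise.std() + 1e-12
    pre_edge = w[noise_gates:leading_edge_gate]
    return bool(np.any(pre_edge > mu + n_sigma * sigma))

# Toy waveform: flat noise, an iceberg spike at gate 60, ocean edge at gate 100
rng = np.random.default_rng(1)
wf = rng.normal(1.0, 0.05, 128)
wf[100:] += 10.0   # ocean return
wf[60] += 3.0      # iceberg signature ahead of the leading edge
print(iceberg_flag(wf, leading_edge_gate=100))  # True
```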
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: CRISTAL pre-launch performance assessment: an end-to-end simulation approach

Authors: Albert Garcia-Mondéjar, Gorka Moyano, Stephanie Urien, Juan Pedro López-Zaragoza, Alessio Izzo, Pietro Guccione, Verena Lieb, Bernhard Ristow, Enriko Mank, Michele Scagliola, Alessandro Di Bella, Marco Fornari, Carlo Zelli, Jerome Bouffard, Paolo Cipollini, Franck Borde
Affiliations: isardSAT, Aresys, Airbus Defence and Space, RHEA, ESA/ESRIN, RHEA, ESA/ESTEC, ESA/ESRIN, ESA/ESTEC
The Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL) mission, planned to be launched in 2027, will incorporate a dual Ku/Ka-band interferometric altimeter with specific transmission pulse sequences designed to enhance the performance over sea and land ice. The open burst mode will enable the generation of Fully Focussed products over sea ice, with snow depth retrievals derived from the comparison of Ku and Ka waveforms instead of being taken from external auxiliary data. As in CryoSat-2, the closed burst interferometric mode over land ice will allow the generation of swath elevations for the whole of Greenland and Antarctica, improving the coverage of the current CryoSat-2 swath products, which are only produced at the ice margins. At this stage of the mission design (Phase D), the expected performances are being evaluated against the requirements to verify the effectiveness of the mission configuration and to further evaluate the room for improvement from the data processing point of view. In this framework, an end-to-end validation environment has been designed. It is composed of the System and Instrument Simulator (SIS), the Ground Processor Prototype (GPP) and the Performance Assessment Tool (PAT). Following the validation plan defined during the first stage of the project, the SIS is in charge of generating datasets for the different scenarios foreseen to be of interest for the mission performance assessment (e.g. point targets, sea ice with different snow properties, ice sheets with small slope and uniform snow and ice characteristics, glaciers of different sizes, slopes and orientations, ocean tracks with different SWH and wind conditions, rivers and lakes of specific size and geometry). Additional scenarios are also being generated in order to satisfy the needs of the scientists working on the L2 processor development and of the Mission Advisory Group.
The GPP will process the instrument science data packets using different processing chains to ensure compliance with the functional and performance requirements. It is composed, among others, of Level-1 calibration chains, Level-1 low-resolution chains (LR-RMC, LR over-sampled and conventional LR), a Level-1 Delay-Doppler chain, a Level-1 Fully Focussed chain, a Level-2 retrackers module (a compilation of different retrackers tailored to the different thematic surfaces), and Level-2 geophysical corrections and retrievals (translating the information from the retrackers into sea ice, land ice, ocean and inland water measurements). Calibration data measured during the instrument characterisation can be processed and analysed to ensure the quality of the PTR echoes. The PAT closes the end-to-end chain: it cross-checks each of the geophysical parameters generated by the GPP against the corresponding requirement, starting from the knowledge of the simulated parameters, thereby assessing and validating the end-to-end performance. This presentation will give an overview of the expected performances of the CRISTAL mission based on the end-to-end validation activity carried out in this project.
Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: B.03.08 - POSTER - Earth Observation in support of the energy transition

The energy sector is the largest contributor to greenhouse gas emissions, making the transition to green, sustainable energy imperative, while ensuring its resilience and affordability. Countries worldwide have set ambitious targets and established associated regulatory frameworks to drive the transition, which requires a transformation and reshaping of the energy sector and market, driven by tailwinds such as increased investments and financial incentives, digitalization and technological innovation. This session explores the role of Earth observation integrated services in navigating the complexities of the energy transition landscape and providing insights to support decision making among various energy stakeholders, including policymakers and industrial actors. Topics in the scope of this session include (but are not limited to):

• Relevant EO initiatives, products and integrated solutions serving the energy transition in Europe and beyond. Examples can cover the use of EO in diverse applications related to energy transition, sustainability and resilience across various segments of the energy value chain (e.g. energy policy formulation and enforcement, energy planning and resource management - including demand characterisation, site selection and resource assessment - energy production and operations, storage, transportation, distribution, and consumption, energy efficiency and performance monitoring, environmental impact assessment, infrastructure monitoring and resilience, hazard and risk assessment, decommissioning, etc.). Examples focusing on operational use cases and solutions addressing final users and stakeholders are encouraged to illustrate the current uptake of EO-integrated solutions for the energy transition.

• Evolving user requirements in the energy sector and gaps in current EO capabilities, along with potential developments and solutions to effectively address them.

• Challenges in scaling the adoption of EO-integrated services in the energy sector and fostering full vertical integration, including challenges in resource alignment, difficulties in effectively combining EO and non-EO data and products, and concerns related to data accessibility, standardization, licensing, privacy and capacity barriers. Potential recommendations and best practices on EO data and EO-integrated service provision tailored to the energy sector are also within the scope of this session.

• Leveraging technological advances and innovative analytics to accelerate the energy transition (e.g. AI-driven predictive analytics).

This session is addressed to individuals, stakeholders and energy sector representatives with interest in the use of EO for the energy transition including:

• Policy-making entities: Governmental authorities and agencies, national or regional regulatory bodies.

• Industrial stakeholders: Grid operators, energy and utility companies, energy investors, traders and asset owners, energy project development agencies, energy consulting firms etc.

• EO data and service providers for energy-related applications from both public and commercial offerings, as well as energy system integrators.

Add to Google Calendar

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Leveraging Semantic Segmentation for Photovoltaic Plants Mapping in Optimized Energy Planning

Authors: Martina Aiello, Giulia Ronchetti, Alberto Maldarella
Affiliations: Ricerca Sul Sistema Energetico - RSE S.p.A.
In recent years, to achieve decarbonization goals, there has been a significant increase in photovoltaic (PV) installations, which represent a key renewable energy source for the energy transition. For optimized energy planning, territorial characterization is crucial, including knowledge of the locations of currently installed ground-mounted PV systems. Despite the growing interest in this technology, both in Italy and globally, there is a lack of available data on the distribution of PV installations across the territory. When such data are accessible, they are frequently outdated, rendering them less effective in a rapidly changing environment. While operators are aware of the locations and capacities of these installations, there is often an absence of publicly accessible data that would facilitate the analysis of the spatial distribution of existing installations and their development over time, posing challenges for effective energy planning. In the literature, several examples can be found where remote sensing (RS) data were applied to estimate the PV potential of a territory, to detect and monitor failures in PV systems, and to map ground-mounted or rooftop PV installations. Thanks to the spatial coverage of the data, the update frequency, the availability of multiple types of free acquisitions and the wide range of data processing techniques and algorithms, satellite or aerial RS can be successfully applied to the identification of PV installations for energy planning purposes. This study presents a methodology for the automatic recognition of ground-mounted PV systems in Italy, using semantic segmentation and Sentinel-2 10 m resolution RGB images. The detection algorithm is to be improved through the integration of power estimation models, which would enable the mapping of both PV location and capacity data, thereby facilitating a precise geographical representation for energy planning objectives.
Thus, the primary aim of this research is to develop a model that not only effectively identifies ground-mounted PV installations but is also designed for continuous data updating on a wide scale. Regular monitoring and integration of both new and existing systems is essential for maintaining an accurate and current understanding of the installed photovoltaic assets. Furthermore, the methodology must be flexible enough to accommodate various study areas, facilitating regular automatic updates that account for changes in the distribution and capacity of installations. To this end, we developed a semantic segmentation model, based on a U-Net architecture, trained on a Sentinel-2 RGB dataset from 2019. The model achieved 99% accuracy in training (including on the validation dataset) and was then tested on two distinct cases, involving different imagery dates and areas. The methodology relies on a multi-temporal approach, deploying the semantic segmentation model on a set of images collected throughout the year. Model outputs are then aggregated into a final map representing the probability of PV plant detection. This method allows for flexible accuracy optimization, depending on the application needs: lower probability thresholds can be chosen to increase Producer Accuracy (PA), while higher thresholds improve User Accuracy (UA). Furthermore, lower probability thresholds ensure continuous area detection, proving more effective in estimating PV power output using an area-to-power relationship. Such thresholds are also advantageous for identifying new installations, despite the trade-off of increased false positives, which can be mitigated through post-processing techniques, such as filters for recognizing plastic-covered greenhouses. Nevertheless, results indicate that the proposed methodology has certain limitations.
Detection of small installations is constrained by image resolution, and some specific PV plant layouts necessitate a broader training dataset, more specialized model architectures, or the integration of additional spectral data. Another challenge lies in the model’s generalizability across diverse landscapes. To address this issue, future work will focus on re-training the model with images from various environmental contexts to facilitate PV plant recognition across different landscapes and PV configurations. Additionally, the complexity of terrain morphology has emerged as a significant obstacle: integrating topographic information, such as elevation maps, slope and hill-shade data, is proposed to enhance detection accuracy and robustness.
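The multi-temporal aggregation and thresholding described above can be sketched in a few lines. This is an illustrative outline under assumed conventions (binary per-date masks, a probability threshold tau), not the authors' implementation:

```python
import numpy as np

def aggregate_detections(masks):
    """Stack per-date binary segmentation masks (H, W) and return the
    per-pixel detection probability: fraction of dates flagged as PV."""
    return np.stack(masks, axis=0).astype(float).mean(axis=0)

def threshold_map(prob, tau):
    """Binarize the probability map. A low tau favours Producer Accuracy
    (fewer missed plants); a high tau favours User Accuracy (fewer false
    positives), mirroring the trade-off described in the abstract."""
    return prob >= tau

# Toy example: four acquisition dates over a 2x2 tile.
masks = [np.array([[1, 0], [1, 0]]),
         np.array([[1, 0], [0, 0]]),
         np.array([[1, 1], [1, 0]]),
         np.array([[1, 0], [1, 0]])]
prob = aggregate_detections(masks)   # [[1.0, 0.25], [0.75, 0.0]]
high_pa = threshold_map(prob, 0.25)  # permissive map
high_ua = threshold_map(prob, 0.75)  # strict map
```

With the permissive threshold, every pixel detected on at least one date is kept, which suits continuous-area and area-to-power estimation; the strict map retains only persistent detections.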

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Wave Power Density Assessments in the Spanish Atlantic Using High-Resolution Altimetry Data

Authors: Andrés Orejarena, Dr Maria Panfilova, Sonia Ponce de León
Affiliations: CENTEC-IST
Assessing wave energy potential is crucial for identifying and quantifying ocean regions with the highest energy availability for harnessing. However, vast areas of the ocean lack in-situ data, making accurate evaluations challenging. High-resolution satellite altimetry offers a promising alternative for estimating wave energy potential. In this study, we utilized the OCRE EO high-resolution satellite altimetry data distributed by Earth Console’s Altimetry Virtual Lab (AVL), which are reprocessed data from two missions—CryoSat-2 and Sentinel-3AB—using the SAMOSA+ retracker. Additionally, we integrated in-situ coastal wave buoy data from CMEMS and applied the empirical method of Gommenginger to derive the wave period required for estimating wave energy potential. Our focus was on the Spanish Atlantic, a key region within the WAPOSAL (Wave Power and Satellite Altimetry) project (https://eo4society.esa.int/projects/waposal/). The results of this work accurately identified specific locations and seasonal windows where wave energy extraction is most viable. These insights support the diversification of renewable energy sources and contribute to advancing the transition toward clean energy solutions.
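For reference, wave power density per unit crest length is commonly computed from significant wave height Hs and energy period Te with the deep-water approximation P = rho g^2 Hs^2 Te / (64 pi). The sketch below is a generic illustration of that formula, not code from the WAPOSAL processing chain, and the density value is an assumed typical figure:

```python
import math

RHO = 1025.0  # sea-water density, kg/m^3 (assumed typical value)
G = 9.81      # gravitational acceleration, m/s^2

def wave_power_density(hs_m, te_s):
    """Deep-water wave energy flux per metre of wave crest, in W/m:
    P = rho * g^2 * Hs^2 * Te / (64 * pi)."""
    return RHO * G ** 2 * hs_m ** 2 * te_s / (64.0 * math.pi)

# Hs = 3 m with Te = 10 s yields roughly 44 kW per metre of crest.
p_wm = wave_power_density(3.0, 10.0)
```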

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: High-Resolution Altimetry Analysis of Wave Energy Potential Around the British Isles

Authors: Dr Maria Panfilova, Dr Andrés Orejarena, Dr Lotfi Aouf, Roberto Sabia, Marco Restano, Jérôme Benveniste, Dr Sonia Ponce de León
Affiliations: CENTEC-IST, METEOFRANCE, ESA-ESRIN, COSPAR
Wave energy represents a significant renewable resource for coastal regions, particularly for island nations that can leverage their advantageous geographic positioning. Assessing wave energy potential in coastal zones is vital for optimizing renewable energy strategies. This study, conducted as part of the European Space Agency's (ESA) ongoing WAPOSAL project (https://eo4society.esa.int/projects/waposal/), investigates the wave energy potential in the Atlantic waters surrounding the British Isles. We utilize CryoSat-2 and Sentinel-3 high-resolution synthetic aperture radar altimetry data from the OCRE-EO database, processed and distributed by ESA’s Altimetry Virtual Lab hosted on EarthConsole®. These data were generated using the advanced SAMOSA+ retracker, which provides high-resolution measurements that enable us to retrieve wave parameters close to the coast. Significant wave height is derived directly from these altimeter measurements, while the wave period is calculated using data on normalized radar cross-section and significant wave height based on the method proposed by Gommenginger et al. (2003). To validate the altimeter-derived wave period estimates, we incorporated observational data from ocean buoys and wave period measurements from the SWIM radar onboard the CFOSAT satellite. Our analysis spans the period from January 2011 to December 2022, evaluating wave power density along altimeter tracks, generating spatial maps, and examining seasonal variations in wave energy potential. This research identifies the most favorable regions and seasons for wave energy harvesting around the British Isles, providing critical insights to support renewable energy development and policy planning.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Validation of Wave Energy Estimates From High-Resolution Altimeter Data Using Wave Buoy Measurements and ERA5 Data

Authors: Dr Maria Panfilova, Dr Andrés Orejarena, Dr João Bettencourt, Dr Sonia Ponce de León
Affiliations: CENTEC-IST, Geophysical Institute, University of Bergen
Amid the ongoing energy crisis, wave energy conversion emerges as a viable and sustainable alternative for energy generation, contributing significantly to the advancement of the Blue Economy. Satellite altimetry data is a promising tool for obtaining measurements of sea wave parameters all over the globe. Significant wave height (SWH) is directly measured by the altimeter, while wave period is evaluated using both SWH and the normalized radar cross section. Wave period estimation from altimetry data needs validation, and wave buoy measurements serve as ground truth data. However, in some regions buoy data are not available; in such cases we use ERA5 reanalysis data. This study, undertaken as part of ESA's ongoing WAPOSAL project (https://eo4society.esa.int/projects/waposal/), aims to develop a method for the validation of wave energy obtained from altimeter measurements, using ERA5 reanalysis data. The research focuses on a region encompassing the Baltic Sea, the North Sea, and part of the Atlantic Ocean near Norway. Data from January 2011 to December 2022 were used in the research. Triple collocation of buoy, ERA5 and altimeter data was performed. First, buoy and ERA5 wave period and SWH were compared, and discrepancies between the two data sources are discussed. Secondly, the ERA5 and buoy data are used to evaluate the accuracy of the wave power density obtained from the altimeter data. As a result, a method to verify wave energy assessments from altimeter data using ERA5 data is developed that can be applied in other coastal zones where buoy measurements are not available.
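The covariance-based form of triple collocation named above can be sketched as follows. The estimator is the standard one for three collocated systems with mutually independent errors; the synthetic SWH series and noise levels are invented for the example and are not the study's data:

```python
import numpy as np

def triple_collocation(x, y, z):
    """Error variances of three collocated measurement systems
    (e.g. buoy, ERA5, altimeter SWH), assuming independent errors."""
    c = np.cov(np.vstack([x, y, z]))
    ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex2, ey2, ez2

# Synthetic truth plus independent noise per system (illustrative only).
rng = np.random.default_rng(0)
truth = 2.0 + rng.standard_normal(100_000)
buoy = truth + 0.3 * rng.standard_normal(truth.size)
era5 = truth + 0.5 * rng.standard_normal(truth.size)
alt = truth + 0.4 * rng.standard_normal(truth.size)
e_buoy, e_era5, e_alt = triple_collocation(buoy, era5, alt)
# The recovered error standard deviations approach 0.3, 0.5 and 0.4.
```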

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Detecting Wind Turbine Motion Using Sentinel-2 Data: A Case Study in Germany

Authors: Dr. Philipp Gärtner, Felix Nahrstedt
Affiliations: German Environment Agency, Accenture DACH
The ability to monitor wind turbine activity is crucial for improving energy efficiency, facilitating maintenance, and supporting environmental monitoring. However, proprietary wind turbine data is often inaccessible, limiting opportunities for broader analysis. This study presents a novel method for detecting wind turbine motion using Sentinel-2 satellite data, leveraging the temporal offset between spectral bands caused by the unique geometrical layout of the MultiSpectral Instrument focal plane. Specifically, the method focuses on analysing motion between the blue, green, and red bands to detect turbine activity through a convolutional neural network. The staggered arrangement of Sentinel-2’s detectors results in a temporal offset of up to 1.005 seconds between bands, which has previously been utilized for analysing fast-moving objects such as airplanes or ships. By extending this approach to wind turbines, the proposed method identifies subtle motion by focusing on the movement of turbine blades and their shadows. A DenseNet architecture was fine-tuned for binary classification of spinning and non-spinning turbines, achieving an accuracy of approximately 96% on the validation dataset. The dataset used for training and validation was created specifically for this study, encompassing more than 4,500 manually labelled wind turbines and their activity in Germany. The shadows of turbine blades were used during the labelling stage due to their enhanced visibility compared to the blades themselves, although the final model was trained to detect blade motion instead of relying solely on shadow information. The study highlights key challenges, including shadow visibility in complex environments (e.g., forests, water, or snow), occlusion effects, and the rotational alignment of turbine blades relative to the satellite's perspective.
An occlusion sensitivity analysis confirmed that the model focuses on turbine-specific features rather than irrelevant environmental factors. Despite limitations, such as reduced accuracy for smaller turbines or environments with poor visibility, the method demonstrates robust performance across varying conditions. This work highlights the potential of satellite-based remote sensing for monitoring wind turbine activity in a scalable and non-invasive manner. By providing independent access to turbine activity data, this approach offers valuable applications for the wind energy industry, environmental monitoring, and regulatory compliance. Future research could address speed estimation, energy output prediction, and expanded datasets to include decommissioned or stationary turbines, further enhancing the model's capabilities and global applicability.
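The underlying motion cue, apparent displacement between bands acquired fractions of a second apart, can be illustrated with a simple phase-correlation shift estimate. Note that this is not the study's method (which classifies image chips with a DenseNet), and the 1.005 s used below is the quoted maximum inter-band offset, not necessarily the offset of any particular band pair:

```python
import numpy as np

GSD = 10.0  # Sentinel-2 visible-band ground sampling distance, m
DT = 1.005  # assumed inter-band time offset, s (quoted maximum)

def band_shift(img_a, img_b):
    """Integer-pixel shift between two co-registered band images,
    estimated by FFT phase correlation."""
    f = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    dy = dy - h if dy > h // 2 else dy  # wrap into signed range
    dx = dx - w if dx > w // 2 else dx
    return dy, dx

def apparent_speed(dy, dx, dt=DT):
    """Ground speed (m/s) implied by a pixel shift over dt seconds."""
    return GSD * float(np.hypot(dy, dx)) / dt

# Toy scene: a bright target displaced by two pixels between 'bands'.
a = np.zeros((64, 64)); a[30, 30] = 1.0
b = np.zeros((64, 64)); b[30, 32] = 1.0
dy, dx = band_shift(a, b)
speed = apparent_speed(dy, dx)  # about 20 m/s for a 2-pixel shift
```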

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Advancing Solar Energy Monitoring With Satellite Imagery: Bridging Earth Observation and the Commercial Sector

Authors: Dora Hegedus, Dr Gareth Thomas, Dr Brian Kerridge, Jack Barber, Konrad Hayes
Affiliations: RAL SPACE, STFC, Amira Technologies Ltd, Ecovision Asset Management Limited
The Satellite Imagery for Solar Energy Monitoring (SISEM) project, an Actionable Information Project supported by the UK Earth Observation Climate Information Service (EOCIS), aims to enhance solar energy forecasting and monitoring systems (FMS) for small and medium-sized solar power installations across the UK. By leveraging satellite-derived surface insolation data, the project seeks to improve the accuracy of solar photovoltaic (PV) performance predictions and monitoring. The FMS integrates extensive historical performance data from solar sites and incorporates near-real time (NRT) information from satellite observations to forecast power output at individual sites, accounting for variables such as cloud cover, weather conditions, and geographical location. SISEM is a collaborative effort involving RAL Space, which provides NRT radiative flux data from the SEVIRI (Spinning Enhanced Visible and Infrared Imager) instrument aboard Meteosat-10; Amira Technologies Ltd, the FMS developer; and Ecovision Asset Management Limited, which oversees approximately 18,000 solar installations across the UK. The system ingests NRT solar insolation data together with meteorological forecasts to predict output expected from individual solar installations. These predictions are compared with site-specific performance monitoring information and the results are fed back into the Ecovision Asset Management System (EAMS). This feedback loop enables efficient identification of outlier sites, irradiation-adjusted benchmarking, and prioritization of maintenance activities to optimize system performance within the EAMS. A key advancement of the SISEM project is the transition from daily ground-based solar irradiance estimates based on World Meteorological Organization (WMO) data to hourly, high-resolution satellite-derived inputs. 
Early results demonstrate significant improvements, including a decreased RMSE (Root Mean Square Error) between expected and measured output, and a marked increase in the R² coefficient, underscoring enhanced model accuracy. This presentation will provide an overview of the SISEM project, detailing the development and implementation of the satellite-driven FMS prototype, operational improvements and limitations, as well as the observed impact on fault detection and system efficiency within the EAMS. This pilot phase can serve as a springboard for future advancements in integrating Earth observation data with operational solar energy monitoring systems.
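The two skill scores reported, RMSE and the R² coefficient, are standard and can be computed as below; the expected/measured site outputs are invented purely to show the calculation:

```python
import math

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def r_squared(pred, obs):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

expected = [4.2, 5.1, 3.8, 6.0]  # modelled daily site output, kWh (illustrative)
measured = [4.0, 5.3, 3.5, 6.2]  # metered output, kWh (illustrative)
err = rmse(expected, measured)
fit = r_squared(expected, measured)
```

A falling RMSE and a rising R² against metered output are exactly the improvements the SISEM pilot reports when moving from daily WMO-based inputs to hourly satellite-derived irradiance.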

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: CAMS radiation service for solar energy: Exploring the error space with data-driven and spatially resolved methods and service evolution

Authors: Faiza Azam, Jorge Lezaca, Marion Schroedter-Homscheidt, Antti Arola, Antti Lipponen, Laurent Saboret, Mireille Lefevre, Yves-Marie Saint-Drenan
Affiliations: German Aerospace Center (DLR), Institute of Networked Energy Systems, Oldenburg, Germany, Finnish Meteorological Institute (FMI), Vaisala, MINES Paris, PSL University
The Copernicus Atmosphere Monitoring Service (CAMS) provides open data access to solar irradiances through its CAMS Radiation Service (CRS). Data is accessible via both the CAMS Atmospheric Data Store and user-specific Python libraries such as PV-LIB. The services meet the needs of European and national policy development and the requirements of partly commercial downstream services in the solar energy sector, e.g. for planning, monitoring, efficiency improvements, and integration of renewable energies into the energy supply system. Observations from meteorological satellites are combined with modelled aerosols, water vapor, and ozone from the CAMS Integrated Forecasting System to retrieve Surface Solar Irradiance (SSI). SSI time series at 1 min, 15 min, hourly and daily temporal resolution are produced 'on-the-fly' on user request at the desired location inside the field of view of the MSG and HIMAWARI satellites. In addition to the standard access as time series, a gridded dataset at 0.1° and 15 min resolution for the years 2004-2023 is available for the land surfaces of Europe and Africa. Validation of the CRS is done regularly on a quarterly basis, with a six-month delay between data acquisition and assessment due to the delay in ground observation data provision from ground networks. This validation focuses on the direct, diffuse and global irradiation components against ground observations and provides classical, station-wise metrics such as bias, RMSE, and correlations. To widen the coverage of the quality assessment of the CRS, the ground observation database is continuously being extended with additional networks. Nevertheless, special care must be taken, as these observations may have uncertainties of the order of the observed uncertainty in the CRS. In an operational setup, the physical retrieval of SSI is limited in its treatment of heterogeneous or multi-layer clouds and of poorly defined aerosol conditions.
Moreover, in a general setting the errors at the pixel level are hard to estimate. Europe’s CAMEO project supports the CRS service evolution through a detailed assessment of the error patterns. In this project, data-driven methods, e.g. SHAP analysis, are being exploited across all input parameters to shed light on the error space of the CRS. In addition, sensitivity studies and ML-based surrogate models provide further insights into the errors. Mainly, we use retrieval-inherent information to provide a single retrieval uncertainty estimate or a physics-informed bias correction. We aim for methods that need no auxiliary information beyond what is available in a stand-alone, real-time satellite-based retrieval process. Furthermore, the CRS team is currently evaluating the 0.5 km visible channel of HIMAWARI to prepare for the upcoming MTG capabilities. The impact of this high-resolution information on solar radiation accuracy will be addressed, either as input for an ML-based uncertainty quantification or alternatively for a direct high-resolution retrieval product.
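Conceptually, a cloud-affected SSI is obtained by scaling a clear-sky irradiance with a satellite-derived cloud modification factor. The toy function below illustrates that Heliosat-style idea only; it is far simpler than the actual CRS retrieval, and the overcast floor value is an assumption:

```python
def surface_solar_irradiance(ghi_clear_sky, cloud_index):
    """Heliosat-style scaling: SSI = Kc * clear-sky GHI, where the
    clear-sky index Kc decreases as the satellite-derived cloud index
    grows. The 0.05 floor for heavily overcast skies is an assumption."""
    kc = max(0.05, 1.0 - cloud_index)
    return kc * ghi_clear_sky

# 800 W/m^2 clear-sky irradiance under a 30% cloud index -> 560 W/m^2.
ssi = surface_solar_irradiance(800.0, 0.3)
```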

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Nocturnal remote sensing as a tool to monitor energy use

Authors: Alejandro Sánchez de Miguel
Affiliations: Universidad Complutense de Madrid
Nighttime remote sensing provides a critical window into understanding energy usage by tracking artificial light emissions, which serve as a proxy for human activity and environmental change. Despite the utility of instruments like VIIRS (Visible Infrared Imaging Radiometer Suite) and DMSP/OLS (Defense Meteorological Satellite Program/Operational Linescan System), these platforms exhibit significant limitations. Because it is panchromatic, VIIRS cannot reliably reproduce light intensity as the human eye perceives it, nor distinguish between spectrally different sources, while DMSP/OLS suffers from saturation and a lack of calibration, which complicates long-term analyses and limits resolution. These constraints underscore the need for more advanced platforms. SDGSAT-1 and the International Space Station (ISS) provide valuable data for monitoring the energy transition, particularly through their capabilities for capturing nighttime lights and other environmental indicators. The ISS, with data spanning from 2003 to the present, offers a long-term perspective on changes over two decades, making it especially powerful for tracking trends in energy use and efficiency. The two platforms contribute in several complementary ways.
1. Long-term monitoring of nighttime lights with the ISS. Since 2003, the ISS has continuously collected nighttime imagery, providing a unique long-term dataset that captures changes in light emissions worldwide. These long-standing observations allow urban areas to be monitored over many years, capturing the effects of changes such as the transition to LED lighting: trends in lighting intensity, spatial expansion, and shifts in light color are all indicative of the adoption of energy-efficient technologies. The continuous imagery can also be used to assess the growth of urban areas and the corresponding increase in energy demand; comparing images taken over two decades shows how cities have expanded, how energy use has changed, and which regions are making progress in adopting efficient technologies.
2. Complementary spectral analysis with SDGSAT-1. While the ISS provides an unparalleled timeline of nighttime lights, SDGSAT-1 brings complementary spectral capabilities. It can detect multiple bands of artificial light, allowing different lighting types (e.g., sodium vapor, LED) to be distinguished. This is important for identifying areas transitioning to more energy-efficient lighting sources, which often produce light with distinct spectral characteristics. Combining the two datasets also allows long-term trends to be validated: if ISS data show a reduction in brightness in a region, SDGSAT-1 spectral data can help confirm whether the reduction is due to a switch to efficient LED lights or to other measures.
3. Renewable energy infrastructure monitoring. ISS daytime imagery, along with SDGSAT-1's high-resolution capabilities, can be used to detect the growth of renewable energy infrastructure such as solar and wind farms. Long-term ISS data help establish when and how quickly these installations have expanded, providing insight into the pace of renewable adoption. The high-resolution cameras operated by astronauts aboard the ISS, coupled with SDGSAT-1 data, can also monitor changes in vegetation and land use due to the construction of renewable energy facilities, offering a more comprehensive view of their sustainability.
4. Longitudinal light pollution analysis. The long ISS record is ideal for assessing the effectiveness of policies aimed at reducing light pollution and energy wastage: trends over two decades can be correlated with public policy initiatives, such as regulations on outdoor lighting or programs promoting energy-efficient light bulbs. The extensive record also allows high-emission areas that contribute significantly to light pollution to be identified; once identified, SDGSAT-1 multispectral data can characterize the specific types of lights in use, helping policymakers target interventions more effectively.
5. Community-level energy transition tracking. Comparing ISS images from 2003 to the present makes it possible to track how small towns and neighborhoods are transitioning to more energy-efficient lighting, and to compare energy use trends between urban and rural areas, identifying disparities and targeting regions where energy efficiency measures are lagging.
6. Supporting the Sustainable Development Goals. SDGSAT-1, designed to support the United Nations' Sustainable Development Goals (SDGs), works in tandem with ISS data to provide a more complete picture of the global energy transition. The ISS's long-term records offer historical context, while SDGSAT-1 adds the capability to analyze the present situation in detail, particularly regarding the spectral characteristics of nighttime lights, helping assess progress toward clean and affordable energy for all (SDG 7).
7. Public engagement and awareness. Images from the ISS are visually compelling and widely accessible, having been collected for over two decades. They help raise public awareness of energy use and efficiency trends by showing the visible impact of energy consumption over time, while SDGSAT-1's advanced spectral analysis complements these visuals with additional context on the types of lighting in use.
In conclusion, the ISS provides crucial long-term data for monitoring energy transition trends over the last two decades, while SDGSAT-1's more recent multispectral capabilities offer detailed insights into the types of lights and their efficiencies. Together, they provide a comprehensive view of the energy transition, capturing both the historical changes in lighting and the current state of energy use and efficiency across different regions.

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Automated Detection and Analysis of Ground-Mounted Photovoltaic Systems in Germany

Authors: Johannes Albert, Dr. Philipp Gärtner, Dr. Stephan Klingner, Chantal Schymik, Jan Siegismund, Claudius Wehner, Katja Bunzel, Jeremias Kempt, Rene Höfer
Affiliations: German Environment Agency, Federal Agency for Nature Conservation
The current trend towards more renewable energy infrastructure, particularly ground-mounted photovoltaic (PV) systems, has highlighted the necessity for effective monitoring tools to assess environmental impacts and enable data-driven policy decisions in Germany. According to the Renewable Energy Sources Act (EEG), the installed capacity of solar systems is expected to increase from 81.7 GW in 2023 to 400 GW by 2040. Despite the German Federal Network Agency's data register (Marktstammdatenregister) encompassing all energy generation plants, it lacks detailed spatial information such as module-covered area, park area and row spacing. As a result, the absence of a suitable Germany-wide data basis for an assessment of existing PV systems poses significant challenges for the German Federal Agency for Nature Conservation and the German Environment Agency in analysing the status quo and reviewing the adoption and effectiveness of nature conservation measures and regulations. Given the rapid expansion dynamics and potential conflicts between PV system development and nature conservation, addressing this gap is highly urgent to enable more informed management of sustainable energy expansion strategies. To address this critical need, our team at the newly founded Application Laboratory for AI and Big Data at the German Environment Agency developed an automated object detection and segmentation pipeline. This pipeline leverages advanced remote sensing and deep learning techniques to provide precise, up-to-date spatial data on PV systems in Germany. Specifically, we applied the state-of-the-art deep learning model YOLO (You Only Look Once) to detect and segment PV systems in high-resolution monthly PlanetScope composite satellite imagery, using high-quality manually mapped PV park outline polygons for labelling. The inference results from PlanetScope were then used to pre-select digital orthophotos with a resolution of 20 cm, enabling a more detailed analysis of PV systems.
By applying the zero-shot foundation model SAM (Segment Anything Model) to the orthophoto imagery, we obtained highly detailed masks. The masks were subsequently clustered and classified as PV modules using spectral and geometric characteristics. Finally, the proportion of PV modules in the total plant area was calculated. The high-resolution vector products generated through this process are available for addressing further nature conservation-related questions and studies. Applying the processing pipeline to new imagery on a regular basis enables more effective monitoring of the dynamics of installed ground-mounted PV systems in Germany.
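The final ratio, module area over total plant area, reduces to a direct mask calculation. A minimal sketch assuming boolean rasters at the 20 cm orthophoto resolution (the function name and toy masks are hypothetical, not the agency's code):

```python
import numpy as np

def module_area_fraction(module_mask, park_mask, pixel_size_m=0.2):
    """Areas (m^2) of modules and park from boolean rasters, plus the
    fraction of the park polygon covered by PV modules."""
    px_area = pixel_size_m ** 2
    modules_m2 = float(module_mask.sum()) * px_area
    park_m2 = float(park_mask.sum()) * px_area
    return modules_m2 / park_m2, modules_m2, park_m2

# Toy 10x10 tile at 20 cm GSD: modules cover 40 of 100 park pixels.
park = np.ones((10, 10), dtype=bool)
modules = np.zeros((10, 10), dtype=bool)
modules[2:6, :] = True
frac, mod_m2, park_m2 = module_area_fraction(modules, park)  # 0.4, 1.6, 4.0
```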

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Potential of Abandoned Agricultural Lands for New Photovoltaic Installations

Authors: Giulia Ronchetti, Martina Aiello, Elisabetta Garofalo
Affiliations: RSE Spa
The achievement of 2030 decarbonization goals and the energy transition toward a more sustainable system involve increasing Renewable Energy Sources (RES) capacity and require a careful analysis of the integration of renewable energy plants into the environment. Two key elements should be addressed: on the one hand, the availability of resources; on the other, the availability of areas suitable for hosting renewable energy plants, while respecting environmental and landscape constraints and considering competing land uses. In the case of solar plants, the need for landscape protection and sustainable land use contrasts with the necessity of finding adequate spaces for utility-scale ground-mounted photovoltaic (PV) systems, which are generally installed in agricultural areas. To balance agricultural and energy policies, PV development should not limit agricultural purposes, allowing sustainable exploitation under specific technological and environmental conditions, particularly in areas of actual or potential abandonment. Agricultural land abandonment is a complex and heterogeneous phenomenon of land cover change, generally driven by a combination of geographic, demographic, socio-economic, agro-ecological and environmental factors, or by issues related to poor land management. A widely accepted definition of agricultural land abandonment is the one provided by the Food and Agriculture Organization of the United Nations (FAO), which defines as “abandoned” agricultural land (i.e., land dedicated to food production for humans) whose condition deteriorates for a period exceeding five years. Studying agricultural abandonment is challenging due to its multifaceted nature, the lack of a clear definition, and issues in acquiring cartographic data. This research introduces and compares two methodologies to identify abandoned agricultural areas, aiming to delineate macro-areas of potential abandonment and examine patterns for conversion to energy use.
The study focuses on Toscana, a region (NUTS 2) in central Italy, which has experienced cropland reduction unrelated to urbanization in the last 20 years. The first simplified approach analyses land cover changes from 2000 to 2018, while the second method provides a more detailed abandonment detection by means of medium spatial resolution satellite imagery from the Harmonized Landsat and Sentinel-2 dataset. The first method identifies abandoned agricultural land with an analysis of changes in land cover classes derived from Corine Land Cover (CLC) maps available every six years. The last four versions of the CLC maps are considered, referring to the years 2000, 2006, 2012 and 2018, discretized in points representing 100 m x 100 m cells. Any variations in land cover classes of each map compared to the previous one are analysed, according to a simplified version of the Land Cover Flow (LCF) methodology proposed by the European Environment Agency (EEA). Through a trajectory-based approach, agricultural points that have been transformed into areas with shrubby and/or herbaceous vegetation or sparse vegetation are classified as potentially abandoned agricultural areas. However, this estimate includes some inaccuracies caused mainly by the spatial resolution and update frequency of the analysed data. The second method identifies abandoned agricultural land by mapping the evolution of agricultural areas through satellite imagery time-series, providing a more detailed abandonment detection. The Harmonized Landsat and Sentinel-2 (HLS) dataset (30 m spatial resolution) provided by NASA is used, with a cloud-free time-series of more than 10 years ranging from 2013 to 2023. Spectral bands and spectral indices (i.e., BSI, NBR, NDVI, Brightness, Greenness, Wetness) are used to extract annual metrics (i.e., minimum, maximum, mean, median, standard deviation, 20th and 80th percentiles) for all the images.
The training/validation dataset is composed of samples randomly selected from triennial land cover maps generated from stable land cover points extracted from available Corine Land Cover maps. A Random Forest classifier, based on the previously extracted metrics and combined with Object-Based Image Analysis (OBIA), is applied to the satellite data to map annual active/non-active croplands, providing results at the agricultural field scale. Annual maps are then validated with a trajectory-based approach to detect agricultural land abandonment. This second methodology can help provide spatially and temporally meaningful estimates of abandoned agricultural areas to be recovered for energy purposes, promoting sustainable growth of PV systems.
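The per-pixel annual metrics and the abandonment trajectory rule can be sketched as follows. The metric set matches the one listed in the abstract; the five-year rule is an illustrative reading of the FAO definition, and the function names and sample values are invented:

```python
import numpy as np

def annual_metrics(index_series):
    """Annual statistics of a spectral-index time series (e.g. NDVI),
    matching the metric set named in the abstract."""
    s = np.asarray(index_series, dtype=float)
    return {"min": s.min(), "max": s.max(), "mean": s.mean(),
            "median": np.median(s), "std": s.std(),
            "p20": np.percentile(s, 20), "p80": np.percentile(s, 80)}

def is_abandoned(annual_active, min_years=5):
    """Trajectory rule: abandoned if the most recent run of
    non-active years reaches min_years (FAO-style threshold)."""
    run = 0
    for active in annual_active:
        run = 0 if active else run + 1
    return run >= min_years

metrics = annual_metrics([0.21, 0.25, 0.30, 0.34, 0.31, 0.27, 0.22])
trajectory = [True, True, False, False, False, False, False]  # yearly active flags
abandoned = is_abandoned(trajectory)
```

In the study itself, the annual active/non-active labels come from the Random Forest classifier rather than a single index threshold; the sketch only shows how metrics and trajectories combine.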

Monday 23 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Automated Detection of Wind Turbines Across Germany Using High-Resolution Satellite Imagery

Authors: Dr. Philipp Gärtner, Johannes Albert, Jan Siegismund, Johannes Zschache, Claudius Wehner, Max
Affiliations:
According to the current Renewable Energy Sources Act, 115 gigawatts (GW) of onshore wind energy are to be installed in Germany by the end of 2030. This will require an annual increase of around 9 GW gross or 7 GW net. It is also expected that old wind power plants with a capacity of around 17 GW will be dismantled by the end of 2030. The data register ‘Marktstammdatenregister’ (MaStR), managed by the German Federal Network Agency, is Germany's central database for all energy systems, including wind power. It stores detailed information on wind turbines, such as their locations, installed capacity, rotor diameter, hub height, and details on the operational status of existing, planned and decommissioned systems. The registration of energy plants is carried out by the plant operators themselves. Unfortunately, these data entries can be incomplete or faulty. Such data errors can impede the expansion of renewable energies and make planning and funding cumbersome. The lack of a reliable data basis for monitoring wind turbines in Germany therefore poses major challenges for the German Environment Agency in analysing the status quo and deriving data-based policy advice. To address this critical need, our team at the newly founded Application Laboratory for AI and Big Data at the German Environment Agency developed an automated approach to detect existing, newly constructed or dismantled wind turbines across Germany on a regular basis. Utilizing carefully hand-labelled training data with YOLO (You Only Look Once), a state-of-the-art object detection framework, on high-resolution (4.77 m) PlanetScope satellite imagery time series, our method achieves a detection rate of 97%. While new wind turbines are detected with high confidence, older turbines, which often have shorter blades and smaller hub heights, are occasionally missed due to their less prominent appearance in the satellite imagery.
We outline our process pipeline, detailing the step-by-step refinement of the labels used and the resulting improvements in model performance. By comparing our results with the entries in the MaStR, discrepancies such as missing, outdated, or incorrect information can be identified and corrected. Our approach not only highlights inaccuracies in the register but also provides a reliable mechanism to improve its overall quality and accuracy. Moreover, this detection process can be repeated whenever new satellite scenes become available, allowing for the continuous monitoring of wind turbines and enabling an up-to-date data basis for decision making. The study also examines potential limitations, addresses the challenges associated with remote sensing-based monitoring, and underscores its importance for the wind energy sector and environmental monitoring.
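The register comparison described above can be illustrated with a simple coordinate-matching sketch. The greedy nearest-neighbour rule and the 100 m tolerance are our own assumptions for illustration, not the authors' method:

```python
import math

def match_detections(detections, register, max_dist_m=100.0):
    """Greedily match detected turbine positions to register entries
    (both as (x, y) coordinates in metres, e.g. in a UTM projection).
    Returns matched index pairs, register entries with no detection
    (candidate dismantled or faulty records), and unregistered detections."""
    unmatched_reg = set(range(len(register)))
    matches, extra_detections = [], []
    for i, (dx, dy) in enumerate(detections):
        best, best_d = None, max_dist_m
        for j in unmatched_reg:
            rx, ry = register[j]
            d = math.hypot(dx - rx, dy - ry)
            if d <= best_d:
                best, best_d = j, d
        if best is None:
            extra_detections.append(i)
        else:
            matches.append((i, best))
            unmatched_reg.discard(best)
    return matches, sorted(unmatched_reg), extra_detections

register = [(0.0, 0.0), (500.0, 500.0), (1000.0, 0.0)]     # register positions
detections = [(10.0, -5.0), (995.0, 8.0), (2000.0, 2000.0)]
matches, missing, extra = match_detections(detections, register)
```

Here the second register entry has no nearby detection (a candidate outdated record), while the last detection has no register entry (a candidate missing registration).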

Monday 23 June 17:45 - 18:05 (EO Arena)

Demo: D.03.30 DEMO - Raster and Vector Data Cubes Across Spatial Data Science Languages - Part 2

Geospatial datasets can be represented as either raster or vector data cubes, depending on the workflow requirements. Raster cubes store multi-dimensional arrays with coordinates like longitude and latitude, capturing the spatial aspects of a dataset. On the other hand, vector data, represented as geometry objects such as points, lines, and polygons, is traditionally stored as tables. Vector data cubes generalize this concept by having multi-dimensional arrays that include a geometry dimension to represent the spatial domain.

In this hands-on training, attendees will learn how to work with raster data cubes that cover large spatial extents to derive vector data cubes that focus on specific areas of interest, allowing us to observe how geospatial features and their attributes change over time. The participants will go through workflows for sampling and aggregating raster data cubes using vector geometries to produce vector data cubes, and explore different functions to show the advantages this provides over using raster data cubes, leveraging the languages R, Python, and Julia. In this session, we will also highlight the advantages of using Quarto as a publishing tool for facilitating cross-language resources in spatial data science.
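The raster-to-vector-cube aggregation described above can be sketched in plain NumPy; a real workflow would use xarray/stars with rasterized polygon geometries, and the toy grid and masks here are assumptions:

```python
import numpy as np

# Toy raster data cube: (time, y, x), e.g. a Leaf Area Index series
rng = np.random.default_rng(1)
n_time, ny, nx = 4, 10, 10
raster_cube = rng.uniform(0.0, 6.0, (n_time, ny, nx))

# Two "farm polygons" as boolean masks on the raster grid (a real workflow
# would rasterize vector geometries instead of hard-coding rectangles)
farm_a = np.zeros((ny, nx), dtype=bool); farm_a[2:5, 2:5] = True
farm_b = np.zeros((ny, nx), dtype=bool); farm_b[6:9, 1:8] = True

# Aggregating each time slice over each geometry yields a vector data cube:
# an array whose spatial dimension is the set of geometries, not pixels
vector_cube = np.stack([
    raster_cube[:, farm_a].mean(axis=1),   # mean value per time step, farm A
    raster_cube[:, farm_b].mean(axis=1),   # mean value per time step, farm B
])                                          # -> (geometry, time) = (2, 4)
```

The resulting (geometry, time) array is the essence of a vector data cube: time series of attributes indexed by geometry rather than by pixel.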

The presented case study was developed in the EU Horizon Europe Project ScaleAgData (101086355). We demonstrate how to create raster data cubes from Sentinel-2 derived Leaf Area Index (LAI) and Sentinel-1 catalogues that cover large spatial extents. We focus on demonstrating how to handle such datasets, which can be computationally challenging, by subsetting them to vector data cubes, limiting the analysis to the target areas within our farm boundary polygons and visualizing our results. Furthermore, we use the vector data cube for computing different statistical metrics, and we present how they can be used for machine learning modelling.

Content contributors: Yomna Eid, Mohammad Alasawedah, Abhishek Singh, Felix Cremer, Edzer Pebesma

Speakers:


  • Yomna Eid - Institute for Geoinformatics, University of Münster
  • Mohammad Alasawedah - Institute for Geoinformatics, University of Münster
  • Abhishek Singh - Eurac Research
  • Felix Cremer - Max-Planck Institute for Biogeochemistry
  • Lazaro Alonso - Max-Planck Institute for Biogeochemistry

Monday 23 June 18:00 - 19:30 (Arcotel Kaiserwasser, 1220)

Social: Earth Engine User Meetup
Register to attend here

Join us for an evening of networking opportunities with fellow Earth Engine enthusiasts who are attending the Living Planet Symposium.

Whether you're a beginner or an advanced user, this event is perfect for anyone looking to connect with like-minded individuals and learn from experts in the field.

Monday 23 June 18:07 - 18:27 (EO Arena)

Demo: C.06.19 DEMO - Sentinel-2 Open-Source Viewing Model Tool (Sen2VM)

Sen2VM enables the geolocation of Sentinel-2 Level-1B (L1B) products within a proper latitude/longitude/(elevation) reference system by computing a direct location grid as additional L1B raster bands. It also provides the capability to generate inverse location grids over a specified area.

The tool is open-source and is intended for expert users with access to L1B products. It offers two modes of use:
- as a standalone command-line tool
- as a SNAP plugin.

Speakers:


  • Antoine Burie - CS GROUP

Tuesday 24 June

1134 events

Tuesday 24 June 08:30 - 10:00 (Room 1.31/1.32)

Session: F.02.12 Achieving EO uptake in Latin America and Caribbean through partnerships

ESA is partnering with the Directorate General for International Partnerships (DG-INTPA) under the Global Gateway initiative to address disaster risks through the use of Earth Observation (EO) in the LAC region. The overall objective is to transfer EO skills and expertise to Latin America and the Caribbean (LAC) through the joint development of a regional CopernicusLAC Centre. The initiative initially aims to enhance the resilience of the Latin America and Caribbean region by making use of Copernicus data and services, with a primary focus on disaster risk and recovery (DRR) activities.

The CopernicusLAC Centre is currently co-developing a range of EO services for DRR with 14 mandated organizations in the region (addressing floods, drought, wildfires, landslides, subsidence and exposure), alongside knowledge transfer at continental level through trainings, hackathons and private sector engagement. These activities will help shape the LAC ecosystem that will be developed around the Centre. The objective is to demonstrate how the CopernicusLAC Centre can fulfil the needs of regional and international organisations with a mandate in DRR (UNDRR, CEPREDENAC, CDEMA) as well as those of national DRR entities in LAC. Therefore, this session will showcase how CopernicusLAC is:
• Supporting a regional ecosystem of EO stakeholders, from government agencies to researchers and civil society, through targeted engagement and training;
• Translating local challenges into operational services, including pilot applications in areas such as disaster risk management, environmental monitoring, and urban resilience;
• Building a sustainable bridge between European EO capabilities and LAC priorities, underpinned by co-designed platforms and strategic policy alignment.

Speakers:


Intro


  • Alex Chunet - ESA, Earth Observation Applications Engineer

CopernicusLAC Panama Center


  • Claudia Herrera - LAC Panama center, Liaison Officer of the Copernicus

CopernicusLAC Stakeholder engagement and Knowledge activities


  • Nicolás Ayala Arboleda - Novaspace, Consultant
  • Jesús Carrillo Vázquez - Novaspace, Consultant

CopernicusLAC Service Development activities:


  • Alberto Lorenzo - Indra, Project Manager
  • Caterina Peris - Indra, Senior Engineer

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Session: A.08.01 Advances in Swath Altimetry - PART 1

The NASA and CNES Surface Water and Ocean Topography (SWOT) mission, launched in December 2022, is the first in-orbit experience of a swath altimeter. The SWOT mission has revealed the capability of swath altimeters to measure ocean and inland water topography in an unprecedented manner. The onboard Ka-band interferometer (KaRIn) observes wide-swath sea surface height (SSH) with a sub-centimetre error. It is already unveiling the small mesoscale ocean circulation that is missing from current satellite altimetry. SWOT has already carried out a satellite calibration and validation (Cal/Val) campaign including ground truth and airborne measurements.
ESA’s Sentinel-3 Next Generation Topography (S3NGT) mission is being designed as a pair of two large spacecraft carrying nadir-looking synthetic aperture radar (SAR) altimeters and across-track interferometers, enabling a total swath of 120 km, in addition to a three-beam radiometer for wet tropospheric correction across the swath, and a highly performant POD and AOCS suite.
With a tentative launch date of 2032, the S3NGT mission will provide enhanced continuity to the altimetry component of the current Sentinel-3 constellation, with open ocean, coastal zones, hydrology, sea ice and land ice, all as primary objectives of the mission.
This session is dedicated to the presentation of advances in swath altimetry (including airborne campaigns) and the application of swath altimetry to the primary objectives of the mission, i.e. open ocean and coastal processes observation, hydrology, sea ice and land ice. We also invite submissions for investigations that extend beyond these primary objectives, such as the analysis of ocean wave spectra, internal waves, geostrophic currents, and air-sea interaction phenomena within swath altimeter data.

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: Overview of SWOT ocean surface topography performance

Authors: Emeline Cadier, Pierre Prandi, Dr Francesco Nencioli, Benjamin Flamant, Etienne Jussiau, Matthias Raynal, Albert Chen, Alexander Fore, Curtis Chen
Affiliations: CLS, CNES, Jet Propulsion Laboratory, California Institute of Technology
The launch of the Surface Water and Ocean Topography (SWOT) mission in December 2022 represented a major breakthrough in satellite altimetry. Its Ka-band Radar Interferometer (KaRIn) provides for the first time 2-dimensional images of ocean surface topography, at kilometre resolution and over a 120 km wide swath. These novelties also represent a major paradigm shift for Cal/Val activities compared to conventional 1-dimensional nadir altimeters. As part of the mission performance activities, the quality and performances of the level 2 ocean surface topography have been assessed and monitored throughout the first two years of the mission. The analysis is performed using 2 km resolution products collected during both the Cal/Val (1-day repeat orbit) and the science (21-day repeat orbit) phases. Observations from the SWOT nadir altimeter and other current altimeters, such as Sentinel-3, during the same period are also used as terms of comparison for our analysis. All mission performance metrics highlight the excellent performance of KaRIn measurements. Here we will provide a synthesis of this assessment. First, a brief summary of the data availability and validity throughout the mission lifetime will be given, listing the main events impacting KaRIn data. The end-to-end performance metrics on the main topographic mission variables (e.g. sea level, significant wave height, sigma0) will then be presented. These include residual estimates at crossovers as well as analysis of the wavenumber spectrum. The assessment of the cross-calibration algorithm at level 2 will also be presented. In the process of continuously improving the data quality, the processing of KaRIn ocean topography measurements has undergone several updates over the last year. In October 2024, the product generation executable (PGE) was upgraded from the PIC version to PIC2. It will be further upgraded to PID in January 2025.
This version will be used for the next full mission reprocessing, planned for the first half of 2025. The main evolutions introduced with these upgrades, such as state of the art geophysical correction models and the novel KaRIn-dedicated wave algorithm, will be presented along with their impact on the data performances. Finally, the remaining known limitations of interest to science users will be discussed.
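The wavenumber-spectrum analysis mentioned above can be illustrated with a minimal periodogram sketch (Hann window, one-sided PSD); the toy SSH profile is an assumption for illustration only:

```python
import numpy as np

def wavenumber_spectrum(ssh, dx_km):
    """One-sided power spectral density of a 1-D SSH profile sampled every
    dx_km kilometres; a Hann window reduces spectral leakage."""
    n = len(ssh)
    w = np.hanning(n)
    spec = np.fft.rfft((ssh - ssh.mean()) * w)
    psd = 2.0 * np.abs(spec) ** 2 * dx_km / np.sum(w ** 2)
    k = np.fft.rfftfreq(n, d=dx_km)          # wavenumber in cycles/km
    return k, psd

# A toy SSH profile with a single 50 km wavelength signal
x = np.arange(0.0, 1000.0, 2.0)              # 500 samples, 2 km spacing
k, psd = wavenumber_spectrum(np.sin(2 * np.pi * x / 50.0), dx_km=2.0)
peak_k = k[np.argmax(psd)]                   # peak near 1/50 = 0.02 cycles/km
```

In practice such spectra, computed along-track or across-swath, are the basis for characterizing the noise floor and the resolved scales of KaRIn SSH.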

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: Performances of the Swath Altimeter SAOOH on board the Sentinel 3 Next Generation Topography Mission

Authors: Alexandre HOUPERT, Franck Demeestere, Dr Laurent Phalippou, Marc Deschaux-Beaume, Laurent Rys, Laurent Rey, Pierre Dubois, Laiba Amarouche, Pierre Thibaut, Pierrik Vuilleumier, Alejandro Egido
Affiliations: Thales Alenia Space, CLS, ESA/ESTEC
The Sentinel-3 Next Generation Topography mission addresses the need for a timely extension of the current Sentinel-3 capability in terms of stability and continuity, while improving performances and increasing the quantity and quality of geophysical products. Sentinel-3 Next Generation belongs to the Copernicus “enhanced continuity” missions of the European Union. The baseline concept relies on Thales Alenia Space state-of-the-art technology, with an altimetry payload composed of Poseidon-5 (POS5), a SAR nadir altimeter for continuity, and SAOOH, the Swath Altimeter for Operational Oceanography and Hydrology, for enhanced sampling, coverage, revisit, and topography products. In order to achieve the specified five-day revisit time, two satellites will operate simultaneously on a dawn-dusk sun-synchronous orbit, with the same ground tracks as Sentinel-3 First Generation. SAOOH is currently being developed, under ESA contract, by Thales Alenia Space and benefits from heritage from the CryoSat (SIRAL), Sentinel-3 (SRAL), Sentinel-6 (POS4), SWOT (KaRIn) and CRISTAL (IRIS) missions. SAOOH improves the spatial/temporal sampling of the ocean and inland waters with respect to nadir altimeters. The products are available over a typical swath of 120 km centred on nadir. Swath altimeters also make it possible to observe continental water extent, and to measure surface elevation and river slopes. New observation capabilities for significant wave height (SWH), wave spectrum, sea ice and land ice are anticipated, as shown by preliminary SWOT data. SAOOH is a Ka-band multibeam swath altimeter using one transmit antenna and two receive antennas with a 3 m interferometric baseline. The SAOOH antenna design results in a high signal-to-noise ratio (SNR) to comply with the random error requirement while keeping a relatively short interferometric baseline. The thermally regulated low-noise amplifier front-ends (LNA), integrated very close to the antenna feeds, further maximise the SNR.
It is then possible to accommodate SAOOH without a deployment mechanism, leading to a simple mechanical design and excellent antenna stability. This paper addresses the SAOOH design with a focus on achievable performances at radar level and the related end-to-end geophysical product accuracies (L2 level). The measurement principle, including the external calibration, is addressed. Random errors and systematic errors are also presented and discussed with respect to the mission requirements.
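The cross-track interferometric measurement principle underlying SAOOH can be sketched in simplified form. The Ka-band wavelength value, the flat-Earth zero-roll geometry, and the single-baseline phase model below are illustrative assumptions (only the 3 m baseline comes from the abstract):

```python
import numpy as np

WAVELEN = 0.00837    # m, Ka-band (~35.8 GHz) -- assumed value
BASELINE = 3.0       # m, interferometric baseline (from the abstract)
ALTITUDE = 815e3     # m, platform height (from the mission description)

def height_from_phase(phase, slant_range):
    """Invert an unwrapped interferometric phase into surface height for a
    flat-Earth, zero-roll, single-baseline geometry (illustration only):
    phase = (2*pi/lambda) * B * sin(theta),  h = H - r * cos(theta)."""
    sin_theta = phase * WAVELEN / (2.0 * np.pi * BASELINE)
    return ALTITUDE - slant_range * np.sqrt(1.0 - sin_theta**2)

# Self-consistency check: forward-model a sea-level point 60 km off nadir
x_cross = 60e3
slant = np.hypot(x_cross, ALTITUDE)
phase = 2.0 * np.pi / WAVELEN * BASELINE * (x_cross / slant)
h = height_from_phase(phase, slant)           # recovers ~0 m
```

The short wavelength relative to the baseline is what makes the phase so sensitive to geometry, and hence why attitude (roll) knowledge dominates the systematic error budget.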

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: SWOT validation and analyses during the fast-sampling phase in the western Mediterranean Sea with high-resolution observations

Authors: Laura Gómez Navarro, Elisabet Verger-Miralles, Daniel R. Tarry, Barbara Barcelo-Llull, Nikolaos Zarokanellos, Lara Diaz-Barroso, Guiomar Lopez, Irene Lizaran, Emma Reyes, Baptiste Mourre, Ananda Pascual
Affiliations: IMEDEA (UIB-CSIC), Applied Physics Laboratory and UW, SOCIB
The FaSt-SWOT field campaigns sampled the western SWOT pass in the western Mediterranean cross-over region of the fast-sampling phase. Two campaigns took place between 25-28 April and 7-10 May 2023, with the aim of collecting multi-platform in situ observations of meso- and submesoscale ocean structures in the area covered by the SWOT satellite during its initial fast-sampling phase. The data collected during the campaigns came from both multi-scale ship-based instruments (CTD, Moving Vessel Profiler, thermosalinograph, ADCP) and autonomous platforms (surface drifters and gliders). Complementary information was also retrieved from satellite observations (SST, ocean colour and nadir altimetry products). The sampling focused on a small (~20 km in diameter) anticyclonic eddy detected under the SWOT swath thanks to satellite imagery and drifter trajectories. Several cross-sections from the ship-based instruments, namely the Moving Vessel Profiler, and underwater gliders provided insights into the structure of the temperature and salinity fields and the associated signals in chlorophyll and dissolved oxygen. This allowed the comparison of the in situ measurements with the observation of this small-scale eddy by SWOT. Two gliders were programmed to perform back-and-forth sections over a three-week period with a one-day delay between them. This gives us the opportunity to evaluate the temporal variability of the ocean fields at the same frequency as SWOT’s fast-sampling phase repeat cycle time. In total, 45 surface drifters were deployed during the two phases to evaluate in situ surface currents and their associated convergence and divergence in the vicinity of the eddy. The continuity of the drifter dataset after the campaigns allows us to further evaluate the SWOT data and the surface dynamics by comparing surface drifters (CARTHE and HEREON) with the 15 m-drogued drifters (SVP-B).
Furthermore, this dataset also allows us to analyse the velocities derived from SWOT and evaluate the limit of the geostrophic assumption in our region of study. Lastly, to evaluate the improvements of SWOT, we compare the fields with the DUACS dataset. Our observations provide new insights into SWOT's ability to detect fine-scale structures, including previously unobserved features and improved characterization of structures identified in earlier altimetric products. While conventional altimetry could already partly detect the sea level signature of the eddy observed during the campaigns, initial SWOT measurements indicate an improved detection capability by this new satellite. In addition, we can take advantage of regional, high-resolution numerical simulations which reproduce a small anticyclonic eddy with similar characteristics to the observed eddy. These simulations are used to provide a more general understanding of the situation and help us to evaluate SWOT observations before/after the field campaigns. This study presents an overview of the FaSt-SWOT dataset, offering a multi-platform perspective for validating and comparing SWOT observations with in situ and remote sensing data. The data obtained during the FaSt-SWOT project, in addition to helping with SWOT’s Cal/Val activities, help to better characterize and understand the fine-scale dynamics not observed before in this region, which is characterized by a small Rossby radius of deformation. We also highlight the impacts of the SWOT swath corrections, which can affect the proper characterization of small mesoscale structures, and how wide-swath altimetry is also opening the door to new challenges.
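The velocities derived from SWOT SSH under the geostrophic assumption follow the standard balance u = -(g/f) ∂η/∂y, v = (g/f) ∂η/∂x. A minimal f-plane sketch, with a Gaussian toy anticyclone as an illustrative assumption (not the study's actual processing):

```python
import numpy as np

G = 9.81             # gravitational acceleration, m/s^2
OMEGA = 7.2921e-5    # Earth's rotation rate, rad/s

def geostrophic_velocity(eta, dx, dy, lat_deg):
    """Surface geostrophic currents from an SSH field eta (ny, nx) on a
    regular grid (spacings dx, dy in metres), f-plane at latitude lat_deg:
    u = -(g/f) dEta/dy,  v = (g/f) dEta/dx."""
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))
    deta_dy, deta_dx = np.gradient(eta, dy, dx)
    return -G / f * deta_dy, G / f * deta_dx

# Toy anticyclone: +10 cm Gaussian SSH bump, ~20 km scale, at 39 N
y, x = np.mgrid[-50e3:50e3:101j, -50e3:50e3:101j]
eta = 0.1 * np.exp(-(x**2 + y**2) / (2.0 * (20e3) ** 2))
u, v = geostrophic_velocity(eta, dx=1e3, dy=1e3, lat_deg=39.0)
```

For a Northern Hemisphere anticyclone the sketch yields the expected clockwise rotation: eastward flow north of the centre, southward flow east of it.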

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: The Sentinel-3 Next Generation Topography Copernicus Altimetry Mission: Enhancing Continuity, Performance and Observational Capabilities

Authors: Dr. Alejandro Egido
Affiliations: European Space Agency
The European Space Agency (ESA), under EU Council and Parliament directives, is mandated to define the Copernicus space component (CSC) architecture based on user requirements coordinated by the Commission. In collaboration with the EC, EUMETSAT, and Member States, ESA has identified key elements of the CSC Long-Term Scenario (LTS) (ESA, 2020). A pivotal element of the CSC-LTS is the Next Generation Topography Constellation, comprising the Sentinel-3 Next Generation Topography (S3NG-T) mission. The S3NG-T mission is designed to ensure an enhanced continuity of the Copernicus Sentinel-3 nadir-altimeter measurements in the 2030s-2050s timeframe. Recognizing the current constellation's limitations in temporal and spatial coverage, S3NG-T aims to significantly upgrade global-scale altimeter sampling. Additionally, hydrology has been elevated to a primary mission objective, introducing a new set of stringent requirements to the mission. To achieve the sampling and revisit time requirements, the S3NG-T mission is being designed as a constellation of two large spacecraft, embarking a nadir-looking synthetic aperture radar altimeter (POS-5) for baseline continuity and an across-track interferometer swath altimeter (SAOOH). The two spacecraft will fly in a sun-synchronous orbit (6 pm local time of ascending node, LTAN) at a mean orbital height of 815 km, with an orbital phase difference of 140 deg, achieving an interleaved ground track between both satellites and matching the Sentinel-3A/3B ground tracks. Featuring two continuous swaths of 50 km on each side of the track, the S3NG-T mission is expected to provide an almost complete coverage of the ocean every 5 days. There are two main operational modes for the SAOOH instrument: a low-resolution (LR) mode, meant for open ocean and land ice; and a high-resolution (HR) mode, meant for coastal zones, sea ice, inland water and land ice margins.
For across-track swath interferometers, errors in baseline length and attitude knowledge translate into systematic SSH errors. As an example, a roll error translates directly into uncertainty in the angle of arrival and causes a linear SSH error across the track. A roll knowledge error of 1 μrad corresponds to ~6 cm of SSH error at the outer edge of the swath, and to ~3.8 cm rms of SSH error within the swath. Ensuring an absolute knowledge error of the baseline vector roll better than 1 μrad is challenging and cannot rely on the AOCS and instrument performances alone; hence, in-orbit cross-calibration should be applied to meet the performance requirements. This presentation covers the essential elements of the mission, design considerations, and overall performance for the S3NG-T mission. Reference: ESA, 2020, “The next phase of Copernicus”, Updated Copernicus Space Component (CSC) Long-Term Scenario, ESA/PB-EO(2020)41, Paris, 9 September 2020.
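The quoted roll-error figures follow from small-angle geometry: the height error grows linearly across track, δh(x) = x·δroll. The sketch below reproduces both numbers; the 10 km inner swath edge is our own assumption, chosen as consistent with the quoted ~3.8 cm rms for two 50 km swaths:

```python
import math

ROLL_ERR = 1e-6       # rad, roll-knowledge error quoted in the abstract

# A roll error tilts the retrieved surface linearly across track:
# dh(x) = x * droll, so at the outer swath edge x = 60 km:
edge_err = 60e3 * ROLL_ERR                    # 0.06 m, i.e. ~6 cm

# RMS within the swath, assuming two 50 km swaths spanning 10-60 km either
# side of nadir (the 10 km inner edge is our assumption):
x_in, x_out = 10e3, 60e3
mean_x2 = (x_out**3 - x_in**3) / (3.0 * (x_out - x_in))
rms_err = math.sqrt(mean_x2) * ROLL_ERR       # ~0.038 m, i.e. ~3.8 cm
```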

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: SWOT mission overview and status

Authors: François Boy
Affiliations:

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: Performance Assessment of the Copernicus Sentinel-3NG Topography mission

Authors: Pierre THIBAUT, Pierre DUBOIS, Laiba Amarouche, Franck Demeestere, Laurent Phalippou, Alejandro Egido, Pierrik Vuilleumier
Affiliations: Collecte Localisation Satellites, Thales Alenia Space, ESA
The Sentinel-3 Next Generation Topography (S3NG-T) mission addresses the need for a timely extension of the current Sentinel-3 capability in terms of stability and continuity, while increasing the quantity and quality of products and services. It will provide fundamental measurement data in an operational context to enable Copernicus Services to achieve their aims and objectives. In the frame of the S3NG-T Phase A/B1 study, CLS was in charge of assessing the performances of the mission over different targets on Earth (ocean, continental water, polar regions). This assessment was made possible by the development of several complementary simulation tools taking into account the instrumental design and the different processing steps applied to the measurements (on-board and on-ground), and accounting for the geophysical properties of the targets (different sea state conditions, inland water and sea ice geometries, and backscattering contrasts). In particular, large-scale ocean simulations including the cross-over calibration method have allowed us to assess the impact of various platform attitude characteristics on the final performance. At the end of Phase A, ESA decided to embark swath altimetry technology on board the S3NG-T mission, which is designed as a constellation of two satellites, each of them hosting a swath altimeter and a nadir altimeter. This choice was confirmed at the Mission Gate Review held in spring 2024, bolstered by the excellent results of the CNES/NASA altimetry SWOT mission. This presentation will address the different elements contributing to the final Sea Surface Height performance (water surface height in hydrology) of SAOOH, including altimeter random error on range, geophysical correction errors, and interferometer pointing errors. All the results obtained in this study confirm that the S3NG-T mission performance requirements will be fully met.

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Session: A.06.01 Geospace dynamics: modelling, coupling and Space Weather - PART 1

This session aims to capture novel scientific research outcomes in the field of Geospace dynamics, encompassing modelling and coupling of the atmosphere, ionosphere, thermosphere, and magnetosphere. A significant contribution is expected from Space Weather science using, but not limited to, data from ESA Earth Observation missions such as Swarm (in particular FAST data) and SMOS. The objective of the session is to collect recent findings that improve the knowledge and understanding of the dynamics and coupling mechanisms of the middle and upper atmosphere and their link with the outer regions that are mainly driven by the Sun and the solar cycle, as well as a focus on data validation and on Space Weather events. We also solicit results from simulations, ground-based observatories or other heliophysics missions, in particular demonstrating synergistic combinations of these elements.

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: The geomagnetic and ionospheric effects of the May 2024 Mother’s Day superstorm over the Mediterranean sector

Authors: Luca Spogli, Tommaso Alberti, Paolo Bagiacchi, Lili Cafarella, Claudio Cesaroni, Gianfranco Cianchini, Igino Coco, Domenico Di Mauro, Rebecca Ghidoni, Fabio Giannattasio, Alessandro Ippolito, Carlo Marcocci, Michael Pezzopane, Emanuele Pica, Alessio Pignalberi, Loredana Perrone, Vincenzo Romano, Dario Sabbagh, Carlo Scotto, Sabina Spadoni, Roberta Tozzi, Massimo Viola
Affiliations: Istituto Nazionale Di Geofisica e Vulcanologia, Alma Mater Studiorum - Università degli studi di Bologna
On 8 May 2024, the solar active region AR13664 started releasing a series of intense solar flares. Those of class X released between 9 and 11 May 2024 gave rise to a chain of fast Coronal Mass Ejections (CMEs) that proved to be geoeffective. The Storm Sudden Commencement (SSC) of the resulting geomagnetic storm was registered on 10 May 2024 and is, to date, the strongest event since November 2003. The May 2024 storm, hereafter named the Mother's Day storm, peaked with a Dst of -412 nT and stands out as a “standard candle” storm affecting modern-era technologies prone to Space Weather threats. Moreover, the recovery phase exhibited almost no substorm signatures, making the Mother's Day storm a textbook example of a perfect storm. In this paper we concentrate on the Space Weather effects over the Mediterranean sector, with a focus on Italy. In fact, the Istituto Nazionale di Geofisica e Vulcanologia manages a dense network of GNSS receivers (including scintillation receivers), ionosondes and magnetometers in the Mediterranean area, which facilitated a detailed characterization of the modifications induced by the storm. Concerning the geomagnetic field, observatories located in Italy recorded an SSC with a rise time of only 3 minutes and a maximum variation of around 600 nT. The most notable ionospheric effect following the arrival of the disturbance was a significant decrease in plasma density on 11 May, resulting in a pronounced negative ionospheric storm registered in both foF2 and Total Electron Content. Another negative effect was recorded on 13 May, while no signatures of positive storm phases were reported. These negative ionospheric phases are ascribed to neutral composition changes and, specifically, to a decrease of the [O]/[N2] ratio.
The IRI UP IONORING data-assimilation procedure, recently developed to nowcast the critical F2-layer frequency (foF2) over Italy, proved to be quite reliable during this extreme event, being characterised only by an overestimation during the main phase of the storm, when the electron density and the height of the F region decreased and increased, respectively. Relevant outcomes of the work relate to the Rate of TEC change Index (ROTI), which showed unusually high, spatially distributed values on the nights of 10 and 11 May. The ROTI enhancements on 10 May might be linked to Stable Auroral Red (SAR) arcs and an equatorward displacement of the ionospheric trough. Instead, the ROTI enhancements on 11 May might be triggered by the joint action of low-latitude plasma pushed poleward by the pre-reversal enhancement (PRE) in the post-sunset hours and wave-like perturbations propagating from the north. Furthermore, the storm drew immediate public attention to Space Weather effects, including mid-latitude visible phenomena such as SAR arcs. This paper outlines the report of the Space Weather Monitoring Group (SWMG) of the INGV Environment Department and its effort to disseminate information about this exceptional event.
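The ROTI used above as an irregularity proxy is conventionally the standard deviation of the rate of TEC change (ROT, in TECU/min) over 5-minute windows. A minimal sketch, with toy TEC series as illustrative assumptions:

```python
import numpy as np

def roti(tec, dt_s=30.0, window_s=300.0):
    """Rate of TEC change Index: standard deviation of the rate of TEC
    change (ROT, TECU/min) over sliding windows (conventionally 5 min)."""
    rot = np.diff(tec) / (dt_s / 60.0)         # TECU per minute
    n = int(window_s / dt_s)
    return np.array([rot[i:i + n].std() for i in range(len(rot) - n + 1)])

# A smooth (quiet) TEC series gives near-zero ROTI; superposed fast
# fluctuations raise it, mimicking a disturbed night
t = np.arange(0.0, 1800.0, 30.0)               # 30 min at 30 s sampling
quiet = 20.0 + 0.001 * t                       # slowly drifting TEC (TECU)
disturbed = quiet + 0.5 * np.sin(2.0 * np.pi * t / 120.0)
roti_quiet, roti_disturbed = roti(quiet), roti(disturbed)
```

Because the slow trend contributes a constant ROT, it cancels in the windowed standard deviation, which is why ROTI isolates irregularity-driven fluctuations.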

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: The SMOS L-Band Solar Radio Burst Database

Authors: Federica Guarnaccia, Roberta Forte, Raffaele Crapolicchio
Affiliations: Serco Italia SpA, ESA-ESRIN
The SMOS (Soil Moisture and Ocean Salinity) mission was launched in 2009 and has been collecting full polarimetric brightness temperature images at 1.413 GHz for over 15 years. Most acquisitions include information about the Sun signal: the latter has thus been removed and used to evaluate the solar flux at high time resolution, with one measurement cycle every 5 seconds. This resolution is compatible with the detection of solar radio bursts (SRBs), enabling the development of a dedicated detection algorithm. The full SMOS solar radio burst database has been compared to the Radio Solar Telescope Network (RSTN) L-Band channel, revealing similar yet complementary performances. The detection rate of the SMOS mission is mainly hindered by two factors: 1) Radio Frequency Interference (RFI) in the L-Band, which deteriorates the quality of the collected signal, preventing the detection of smaller events; 2) the Sun signal transitioning from the antenna front lobe to the back lobe, greatly limiting the signal-to-noise ratio. The extension of the detection algorithm in the antenna’s back lobe is discussed; depending on the event’s intensity and the Sun elevation angle, solar radio burst detection is still possible, but with a reduced success rate. Radio burst events missed by the RSTN L-Band channel and detected by SMOS are analyzed and correlated with the full RSTN database, including both higher and lower frequency channels. The SMOS database has also been compared to the Geostationary Operational Environmental Satellite (GOES) X-ray flare list. Thanks to the Microwave Imaging Radiometer with Aperture Synthesis (MIRAS) payload, the SMOS radio burst database includes a unique additional layer of information: the Degree of Circular Polarization (DoCP). Prominent polarized radio burst events in the SMOS database have been identified: the DoCP may indicate the source radiation mechanisms and propagation processes that originated the events.
The polarized radio burst subset has been cross-correlated with reported contemporaneous Global Navigation Satellite Systems (GNSSs) signal degradation: only right-hand circularly polarized events are expected to have a significant impact on GNSSs. Overall, despite challenges such as RFI and variable signal-to-noise ratios along its orbit, the SMOS satellite provides a valuable asset for solar radio burst detection in the L-Band, both as a stand-alone system and in synergy with state-of-the-art databases.
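The DoCP mentioned above is conventionally defined as the ratio of the fourth Stokes parameter to the total intensity. A minimal sketch of that definition, assuming full-Stokes brightness temperatures are available (the function name is hypothetical, not part of the SMOS processing chain):

```python
def degree_of_circular_polarization(Th, Tv, T4):
    """DoCP from full-Stokes brightness temperatures [K].

    The first Stokes parameter is I = Th + Tv; the fourth Stokes
    parameter (T4, often written V) carries the circularly polarized
    component. The sign of the ratio distinguishes right- from
    left-hand circular polarization.
    """
    return T4 / (Th + Tv)
```

A strongly polarized event would show |DoCP| approaching 1, while thermal emission from the quiet Sun is essentially unpolarized.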

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: Introducing QUID-REGIS: Contribution to the understanding of unexpected variability in the ionosphere during solar-quiet periods by atmospheric dynamics from below.

Authors: Lisa Küchelbacher, Jaroslav Chum, Patrick Hannawald, Petra Koucka Knizova, Jan Kubancak, Simon Makovjak, Carsten Schmidt, Vladimir Truhlik, Jaroslav Urbar, Sabine Wüst, Michael Bittner
Affiliations: German Aerospace Center, DLR-DFD, Institute of Atmospheric Physics, CAS-IAP, Slovak Academy of Sciences, Institute of Experimental Physics, IEP-SAS
The day-to-day variability of the quiet-time ionosphere is surprisingly high even during periods of negligible solar forcing. Swarm measurements have allowed the characterization of upper atmospheric and ionospheric conditions and dynamics for more than 10 years now. The analysis of Swarm data also showed that the ionosphere is sometimes disturbed even during solar-quiet periods: the electron density and electric field, for instance, can show significant variability that currently remains unexplained. Whenever there is unexpected variability in Swarm data, lower atmospheric dynamics might serve as a source region of disturbances causing these variabilities through vertical coupling processes. With QUID-REGIS we aim for a better quantification of the role of upper mesosphere/lower thermosphere (UMLT) dynamics in the occurrence of solar-quiet ionospheric disturbances, along with a better representation of baseline ionospheric conditions. The use of Swarm data measured in the topside ionosphere is supported by an extensive set of ground-based measurements of both the upper mesospheric/lower thermospheric region and the ionospheric D-, E- and F-regions. These measurements comprise airglow observations representative of the neutral atmosphere in the UMLT (80-100 km), magnetic field (and other) observations representative of the ionosphere (85-300 km), as well as airglow observations from 200-300 km altitude. Thereby, we contribute to characterizing the atmospheric state during these quiet periods. Thus, QUID-REGIS contributes to the understanding of disturbances in the upper atmosphere and clarifies whether these are, at least in part, a result of neutral atmospheric dynamics from the lower atmosphere at mid-latitudes. The obtained findings are used to modify inputs to the International Reference Ionosphere model and to assess possible improvements. We give an overview of the main project goals, challenges and first results.
This involves essentially three steps: first, we identify solar-quiet periods by evaluating various solar parameters; second, we look for unexpectedly high variability in Swarm measurements; third, on this basis we select case studies for a detailed analysis of the dynamic state of the atmosphere below.
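The three-step selection described above could be sketched as follows; the thresholds and the variability measure are hypothetical placeholders, not the actual QUID-REGIS criteria:

```python
import numpy as np

# Hypothetical quiet-time thresholds (illustrative only).
F107_QUIET = 90.0   # solar radio flux [sfu]
KP_QUIET = 2.0      # geomagnetic activity index

def select_case_studies(f107, kp, ne, window=27):
    """Sketch of a three-step case-study selection.

    1. Flag solar-quiet days (low F10.7 and Kp).
    2. Flag days whose electron-density variability is unexpectedly
       high relative to a running baseline.
    3. Candidate case studies are days satisfying both.

    Inputs are 1-D daily arrays of equal length; returns day indices.
    """
    f107, kp, ne = map(np.asarray, (f107, kp, ne))
    quiet = (f107 < F107_QUIET) & (kp < KP_QUIET)
    # Day-to-day relative variability of electron density.
    var = np.abs(np.diff(ne, prepend=ne[0])) / np.maximum(ne, 1e-30)
    baseline = np.convolve(var, np.ones(window) / window, mode="same")
    disturbed = var > 2.0 * baseline
    return np.flatnonzero(quiet & disturbed)
```

The real selection would of course use additional solar parameters and a more careful variability statistic; the sketch only illustrates the screening logic.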

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: Augmenting thermosphere mass density and crosswind observations derived from accelerometer and GNSS tracking data with uncertainty information

Authors: Natalia Hladczuk, Sabin Anton, Dr.ir. Jose van den IJssel, Dr.ir. Christian Siemes, Prof.dr.ir. Pieter Visser
Affiliations: Delft University of Technology
Accurate knowledge of thermosphere mass density and crosswind is essential for understanding the coupling of Earth's thermosphere and ionosphere and for constructing the empirical models used in space operations. Accelerometer measurements, together with precise GNSS data, allow in-situ neutral thermosphere mass density and crosswind to be derived. TU Delft maintains a database of precise thermosphere density and crosswind observations, which we continuously strive to improve by upgrading processing components. Currently, the datasets from the CHAMP, GRACE, GOCE, Swarm, and GRACE-FO satellites are included. These datasets are, however, provided without comprehensive uncertainty specifications. Quantifying the observational uncertainty is complex, considering the number and diversity of error sources. Recently, a method was developed to propagate various error sources (such as measurement noise and errors in the satellite specification, thermosphere models, and radiation flux data) and quantify their impact on thermosphere density derived from accelerometer and GNSS tracking data (Siemes et al., 2024). While the method was successfully applied to a few sample datasets from the GRACE-B satellite, the current implementation treats errors in atmospheric conditions such as temperature, density of constituents, and wind in a simplified way. In this presentation, we propose an extended and more advanced approach to model these often correlated variables. Moreover, we will extend the current method to quantify the uncertainty not only of density observations but also of crosswind observations. Finally, we will demonstrate the method's output by analyzing various use cases, e.g. for missions such as GOCE and GRACE-FO, for which new density and crosswind data were recently processed and released. The developed method will allow existing datasets to be supplemented with uncertainty information.
This will benefit both data users and data assimilation applications. Moreover, the method can be used to predict the capacity of future missions to observe thermosphere density and crosswind.
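The density retrieval underlying these datasets rests on inverting the drag equation, and one way to picture the error-propagation idea is a simple Monte Carlo sketch. This is not the formal method of Siemes et al. (2024); all parameter values and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def density_from_drag(a_drag, cd, area, mass, v_rel):
    """Invert the drag equation a = 0.5 * rho * Cd * (A/m) * v^2 for rho."""
    return 2.0 * mass * a_drag / (cd * area * v_rel**2)

def density_uncertainty(a_drag, cd, area, mass, v_rel, rel_sigma, n=20000):
    """Monte Carlo propagation sketch: perturb each input by an assumed
    relative 1-sigma error and report the spread of the derived density.
    `rel_sigma` maps input name -> relative sigma (hypothetical values)."""
    samples = density_from_drag(
        a_drag * (1 + rel_sigma.get("a_drag", 0.0) * rng.standard_normal(n)),
        cd     * (1 + rel_sigma.get("cd", 0.0)     * rng.standard_normal(n)),
        area   * (1 + rel_sigma.get("area", 0.0)   * rng.standard_normal(n)),
        mass, v_rel)
    return samples.mean(), samples.std()
```

For example, a 5% uncertainty on the drag coefficient alone propagates into roughly a 5% relative uncertainty on the retrieved density; correlated atmospheric-condition errors, the focus of the abstract, require the more advanced treatment proposed there.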

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: Ionospheric Joule heating and neutral density variations at low Earth orbits during geomagnetic storms

Authors: Heikki Vanhamäki, Marcus Pedersen, Anita Aikio, Lei Cai, Milla Myllymaa
Affiliations: University of Oulu
Auroral Joule heating is one of the main energy sinks in the solar wind - magnetosphere - ionosphere system. During geomagnetic storms, intense Joule heating causes thermal expansion of the upper atmosphere, thus increasing thermospheric density and satellite drag at low Earth orbits (LEO). This chain of events often begins as geoeffective solar wind transients such as high-speed streams/stream interaction regions (HSS/SIR) or interplanetary coronal mass ejections (ICME) impact Earth’s space environment. The “Joule heating effects on ionosphere-thermosphere coupling and neutral density (JOIN)” research project is part of ESA’s “4D Ionosphere” initiative. In the JOIN project we determine the statistical distribution of auroral Joule heating in the northern hemisphere during geomagnetic storms using SuperMAG, SuperDARN and AMPERE data. This is correlated with the large-scale atmospheric density variations at LEO observed by the Swarm, GRACE and GRACE-FO satellites. The geomagnetic storms are further divided into categories based on the solar wind driver: HSS/SIR, and magnetic clouds or sheath regions inside ICMEs. Based on a superposed epoch analysis of 231 geomagnetic storms between 2014 and 2024, it is found that the Joule heating in the ionospheric E-region and the neutral density enhancements at the altitude of the Swarm and GRACE satellites show different characteristics depending on the geomagnetic storm driver. The Joule heating increases faster at the beginning of the storm main phase when the storm is initiated by a HSS/SIR or the sheath region of an ICME, while a more gradual and longer-lasting increase is found in storms driven by magnetic clouds within ICMEs. This is in line with previous results on the total field-aligned and ionospheric currents during storms (Pedersen et al., 2021, 2022).
The thermospheric density increases gradually during the storm main phase, and the enhancements are typically largest and longest-lasting for storms driven by magnetic clouds, due to the prolonged interval of increased Joule heating.
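The superposed epoch technique referred to above can be sketched in a few lines; this is a generic illustration (array shapes and window lengths hypothetical), not the JOIN project code:

```python
import numpy as np

def superposed_epoch(series, onsets, before=24, after=72):
    """Superposed epoch analysis sketch.

    Stack a time series in windows around each storm-onset index and
    average across events, so the common storm-time response emerges
    while uncorrelated variability averages out. `series` is a 1-D
    array (e.g. hourly Joule heating); `onsets` are sample indices of
    the storm main-phase onsets.
    """
    stacks = [series[t - before:t + after]
              for t in onsets
              if t - before >= 0 and t + after <= len(series)]
    # Epoch axis runs from -before to +after-1 relative to onset.
    return np.mean(stacks, axis=0)
```

Splitting `onsets` by driver category (HSS/SIR, sheath, magnetic cloud) before averaging yields the driver-dependent epoch profiles discussed in the abstract.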

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: Vertical Total Electron Content maps from SMOS radiometric data: Analysis of geomagnetic storms

Authors: Verónica González-Gambau, Dr. Nuria Duffo, Dr. Lorenzo Trenchi, Dr. Ignasi Corbella, Roger Oliva, Manuel Martín-Neira, Raffaele Crapolicchio
Affiliations: Barcelona Expert Center, Institute of Marine Sciences, CSIC, Universitat Politècnica de Catalunya, ESA-European Space Research Institute, Zenithal Blue Technologies, ESA-European Space Research and Technology Centre
As microwave radiation from Earth propagates through the ionosphere, the electromagnetic field components are rotated by an angle called the Faraday Rotation Angle (FRA). At the SMOS operating frequency (1.4135 GHz), the FRA is not negligible and must be compensated for to obtain accurate geophysical retrievals. The FRA can be estimated using a classical formulation [Le Vine et al., 2002] that makes use of the total electron content and geomagnetic field data provided by external databases and models. Alternatively, it can be retrieved from full polarimetric radiometric data [Yueh et al., 2000]. The possibility of retrieving the FRA from SMOS radiometric data opens up the opportunity to also estimate the VTEC (Vertical Total Electron Content) of the ionosphere, using an inversion procedure based on the FRA measured in the SMOS field of view. However, estimating the Faraday rotation from SMOS radiometric data for each pixel is not straightforward because of the presence of spatial errors in SMOS images. A new methodology was proposed to derive VTEC maps from the SMOS radiometric measurements, which uses optimized spatio-temporal filtering techniques to be robust against the thermal noise and image reconstruction artifacts present in SMOS images [Rubino et al., 2020; 2022]. These derived VTEC maps can then be re-used in the SMOS level 2 processor for the correction of the FRA in the mission. We generated three years of SMOS-derived VTEC maps and, using these maps instead of the VTEC data from GPS measurements, analyzed the impact on the stability of brightness temperatures over the oceans. The results of this analysis showed that the usage of these new SMOS-derived VTEC maps allowed a significant enhancement in the quality of the brightness temperatures, which will lead to an improvement in salinity retrievals.
More recently, further improvements have been introduced in the methodology: (i) applying filtering/Radio-Frequency Interference mitigation techniques and (ii) estimating the uncertainty of the derived VTEC products, to improve the quality of the derived VTEC maps over strongly contaminated regions. This methodology will be implemented in the SMOS L1 processor to operationally generate the new SMOS-derived VTEC product. Solar activity can influence ionization levels in the upper atmosphere, leading to variations in ionospheric properties. Additionally, space weather events, such as solar flares and geomagnetic storms, can further contribute to ionospheric disturbances [Zhai, 2023]. In this context, VTEC maps derived from satellite observations can be very useful for studying the ionospheric response to geomagnetic storms, complementing measurements from ground stations. Besides the SMOS VTEC maps, the Swarm magnetic field mission, launched in 2013, is also providing VTEC data. Currently, we are comparing VTEC data from the SMOS and Swarm satellites, taking into account that they are obtained using different technologies (GNSS signals in the case of Swarm, a microwave radiometer in SMOS) and that they probe different layers of the ionosphere (Swarm: ~500 km upward, SMOS: ~750 km downward). First results show that under quiet geomagnetic conditions, VTEC data from both missions are consistent, with larger SMOS VTEC values (as expected, since it measures the lower and denser ionospheric layers), and both clearly measure the intensification of density around the magnetic equator. We have also used Swarm and SMOS VTEC data for monitoring geomagnetic storms (for example, the storms in November 2021 and May 2024). Preliminary results show that both capture a clear intensification of VTEC, with peaks spreading poleward during the main and early recovery phases of the storms.
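For orientation, the first-order magneto-ionic relation behind FRA-based TEC retrieval links the rotation angle to the electron content along the line of sight. The sketch below assumes SI units and a single effective line-of-sight field value, and is a simplification of the actual SMOS inversion:

```python
# First-order magneto-ionic constant (SI units): Omega [rad] ~
# 2.36e4 * B_parallel [T] * TEC [el/m^2] / f^2 [Hz^2].
K = 2.36e4
F_SMOS = 1.4135e9  # SMOS operating frequency [Hz]

def faraday_rotation(tec, b_par, f=F_SMOS):
    """First-order Faraday rotation angle [rad] for line-of-sight
    electron content `tec` [el/m^2] and field component `b_par` [T]."""
    return K * b_par * tec / f**2

def tec_from_fra(omega, b_par, f=F_SMOS):
    """Invert the relation above: TEC [el/m^2] from a measured FRA."""
    return omega * f**2 / (K * b_par)
```

With a typical field of 4e-5 T and 10 TECU (1e17 el/m^2), the FRA at the SMOS frequency comes out to a few degrees, which is why it is non-negligible for salinity retrieval.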

Tuesday 24 June 08:30 - 10:00 (Hall N1/N2)

Session: D.05.05 CDSE User Review Meeting - Annual User Review: Co-Creating the Copernicus Data Space Ecosystem

This session centers on the important role of users in actively shaping the Copernicus Data Space Ecosystem. By co-creating tools, services, and data solutions, users are directly influencing how the ecosystem grows to meet real-world needs. Additionally, through their feedback, users are shaping the way the consortium develops the tools and datasets offered in the ecosystem. We will showcase feedback from our user community and share specific examples of how these insights have led to impactful improvements and innovations. We will present the results of a comprehensive Yearly User Review Survey that captures diverse perspectives from across the ecosystem, user experiences, valuable user insights, emerging needs, and priorities, followed by a Live Interactive User Session moderated by the Copernicus Data Space Ecosystem team, inviting participants to join the discussion, ask questions, and share their perspectives in a dynamic, collaborative setting as a vital part of shaping the CDSE direction.

Presentations and speakers:


Opening of the CDSE User Review Meeting 2025 by ESA


  • ESA/EC

Latest advancements in the Copernicus Data Space Ecosystem (CDSE)


  • Jurry de la Mar – T-Systems
  • Jan Musial – CloudFerro
  • Grega Milcinski – Sinergise

Results of the CDSE User Satisfaction Survey 2025


  • Dennis Clarijs – VITO Remote Sensing

The State of Earth Observation Platforms: Towards Data Fusion, AI-Readiness and Vertical Specialization


  • Aravind Ravichandran – TerraWatch Space

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Session: C.06.06 Global Digital Elevation Models and geometric reference data

One of the most important factors for the interoperability of any geospatial data is accurate co-registration. This session will be dedicated to recent developments and challenges around the availability, quality, and consistency of reference data required for accurate co-registration of remotely sensed imagery at global scale. It will provide a forum for the results of studies performed under CEOS-WGCV, EDAP, and other initiatives aiming at the quality assurance and harmonisation of DEMs, GCPs and reference orthoimagery used by different providers worldwide.


Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: Generative Modelling of Terrain with Sentinel-2 and COP-DEM

Authors: Paul Borne-Pons, Mikolaj Czerkawski, Rosalie Martin, Romain Rouffet
Affiliations: ESA, Φ-lab, Adobe Research
Terrain modelling is a common and challenging task in the creative industry, particularly relevant in domains such as video games and VFX (Visual Effects). It is a complex and time-consuming task, particularly when it involves large-scale landscapes. Large scenes are becoming more and more common with the current boom in popularity of open-world games. The current state of the art in terrain modelling relies mainly on procedural and simulation methods, which tend not to scale well beyond a certain point (becoming too computationally expensive or lacking realism) and, most importantly, most often fail to capture the variety of the landscapes the world offers. Recent advances in generative machine learning, especially denoising diffusion models, have paved the way for tools that can learn and model visual representations directly from the data. In this work, we leverage these advances and the availability of rich data on the Earth's terrain in the Copernicus programme. Specifically, the representation of terrain is defined here as a 2.5D combination of the optical visible bands and a supporting channel containing elevation information. By abstracting the complexity of the underlying physical processes that interact to shape terrain, the model can generate patterns and mutual dependencies between terrain features, hence achieving perceptual realism. In this work, a generative diffusion model is trained on a global collection of Sentinel-2 Level-2A data and the Copernicus DEM. To provide a source of AI-ready training data, an expansion dataset with cropped and reprojected global COP-DEM 30 m data has been built and released openly for free on the HuggingFace platform (1,837,843 images in total), formatted as an expansion dataset of Major TOM (an open AI-ready dataset project conceived in ESA Φ-lab). Subsequently, a set of text captions was obtained for each of the Sentinel-2+DEM pairs.
During training, the model learnt to recreate the pair of DEM and Sentinel-2 image from the corresponding text description, and can thus generate new Sentinel-2 and COP-DEM pairs based on text captions. Preliminary results demonstrate a high quality of the newly generated data, which in general matches the user text prompt well. Further mechanisms for quantitative evaluation are currently being investigated. As a result, creative professionals, such as game designers, are now able to use the model to quickly prototype terrains or use the output of the model for further post-processing. At the current stage, the model is primarily designed for creative applications; however, its potential benefit in the context of scientific applications is going to be explored in future work.

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: TerraSAR-X and TanDEM-X Mission Overview and System Status Update

Authors: Allan Bojarski, Dr. Markus Bachmann, Johannes Böer, Christopher Wecklich, Dr Patrick Klenk, Dr Kersten Schmidt
Affiliations: German Aerospace Center (DLR) - Microwaves and Radar Institute
TerraSAR-X and its almost identical twin satellite TanDEM-X continue to acquire high-resolution radar images and Digital Elevation Models with unprecedented accuracy, far beyond their expected lifetimes. Since launch, the Synthetic Aperture Radar image quality and resolution have remained constant, owing to a very stable instrument but also to an elaborate ground segment. As a result, the bistatic mission, which was designed to run for 5.5 years to generate a single global Digital Elevation Model, could be extended. In the following and still ongoing TanDEM-X 4D phase, constant updates of the global dataset are acquired, showing in particular the three-dimensional elevation and terrain changes over time. This is performed especially in areas where significant changes are expected, for example in mountainous areas or on glaciers. In this way, the mission has generated a unique dataset by adding change layers to the existing global Digital Elevation Model, showcasing natural and man-made topography transformations over the last decade. Furthermore, various timelines could be dedicated specifically to scientific acquisitions focusing on fast-changing areas such as forest or permafrost regions and enabling the application of experimental modes such as concurrent imaging. Since both satellites are still in good condition, the ongoing missions will be continued, providing continuous updates and extending this unique dataset as long as possible. In order to give an outlook on the remaining life expectancy, this contribution will give a detailed overview of the current status of the satellites’ systems, the remaining onboard consumables, and recent as well as upcoming operational challenges. We will present geometric calibration results using on-ground targets as well as antenna pattern measurements and baseline calibration datatakes to demonstrate the excellent radiometric stability of the system.
Furthermore, since the battery is the fastest-depleting resource, the discussion of consumables will focus particularly on the progression of battery ageing, the assessment of battery capacity, and the implications for mission planning. For completeness, the remaining fuel and the corresponding lifetime limitation will also be addressed. Additionally, recent events of external and internal nature will be discussed which temporarily affect the position determination and the synchronization of the satellites. In this context, solutions to overcome any correlated acquisition constraints will be given. In summary, the objectives of this contribution are to validate the good condition of both satellites after 14 and 17 years in orbit, as well as to demonstrate the strategies and adaptations used to extend their lifetime while maintaining image quality.

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: The TanDEM-X 4D Phase – Input for a potential future update of the Copernicus DEM

Authors: Markus Bachmann, Maximilian Schandri, Thomas Kraus, Allan Bojarski, Johannes Böer, Manfred Zink
Affiliations: German Aerospace Center (DLR)
TanDEM-X Mission

The TanDEM-X mission is the first bistatic SAR mission with two satellites [1]. It was realized by placing a second satellite (TDX) in close formation with TerraSAR-X (TSX) [2]. The primary mission goal was to deliver a global Digital Elevation Model (DEM) at a 12 m posting with a relative vertical accuracy of 2 m/4 m for terrain slopes less/steeper than 20%. The first global coverage was mainly acquired between 2010 and 2015 with outstanding height performance [3].

Copernicus DEM – relation to the TanDEM-X DEM

The Copernicus DEM is a publicly available global digital elevation model with 30 m horizontal resolution. It was generated on the basis of the TanDEM-X Global DEM. The bistatic TanDEM-X data acquired between 2010 and 2015 was processed into a global, consistent DEM dataset until 2016, called the TanDEM-X Global DEM. It is available for scientific users via the TanDEM-X Science Service [4]. The DEM dataset was then edited by Airbus Defence and Space GmbH and provided to commercial customers in the form of the WorldDEM. Water bodies and cities were flattened in a largely manual editing process. In addition, data gaps in the DEM were filled using external reference DEMs [5]. Finally, this data was provided to ESA and published as the Copernicus DEM.

TanDEM-X DEM 2020 (2017 – 2020)

In the years 2015 until 2017, several additional scientific coverages of forest regions and the cryosphere as well as high-resolution DEMs for demonstration purposes were generated. This was followed by a second global coverage between 2017 and 2020. Using the experience from the preceding global coverages, the acquisitions were performed with an optimized acquisition strategy, respecting acquisition time constraints for areas with large seasonal variation, like the Arctic region or temperate and boreal forests [6]. As a result, the consistency of the data could be further improved and the need for reacquisitions due to seasonal effects was minimized.
The acquisitions were processed and are publicly available in the form of DEM Change Maps generated by DLR/IMF, providing the height change with respect to the TanDEM-X Global DEM [7]. In addition, a dataset similar to the global TanDEM-X DEM is currently being generated by DLR/DFD as the “TanDEM-X DEM 2020” [8].

TanDEM-X 4D Phase (2021 – 2028)

Currently, the TanDEM-X mission is in its TanDEM-X 4D phase. As the fourth dimension, the time aspect is incorporated through the regular monitoring of dedicated areas. For this purpose, regions of high scientific interest are acquired repeatedly in a bi-yearly alternating sequence. In each first year, the global forests and Arctic areas are acquired. During each second year, other areas with large height changes are the focus of the satellites. The remaining areas, about one third of the Earth, are interspersed over the years in order to obtain a third global coverage by 2028 and allow a homogeneous utilization of the satellites. One main constraint in this phase is the advanced age of the two satellites. Consequently, the utilization of satellite resources like propellant consumption and the battery has been optimized in order to prolong the mission as long as possible. The final paper and the presentation will focus on these challenges and give an outlook on the future observation plan.

References

[1] G. Krieger et al. (2007) TanDEM-X: A Satellite Formation for High Resolution SAR Interferometry, TGRS, Vol. 45, no. 11.
[2] M. Zink et al. (2021) TanDEM-X: 10 Years of Formation Flying Bistatic SAR Interferometry. JSTARS, DOI: 10.1109/JSTARS.2021.3062286
[3] P. Rizzoli et al. (2017) Generation and performance assessment of the global TanDEM-X digital elevation model. JPRS, DOI: 10.1016/j.isprsjprs.2017.08.008
[4] TanDEM-X Science Service, https://tandemx-science.dlr.de/, last accessed 2024-12-02
[5] E. Fahrland, “Copernicus DEM Product Handbook,” 25 June 2020
[6] M. Bachmann et al. (2021) The TanDEM-X Mission Phases - Ten Years of Bistatic Acquisition and Formation Planning. JSTARS, DOI: 10.1109/JSTARS.2021.3065446
[7] M. Lachaise et al. (2024) The TanDEM-X 30m DEM Change Maps: applications and further developments, EUSAR, Garching, Germany, 2024.
[8] B. Wessel et al. (2022) The new TanDEM-X DEM 2020: generation and specifications, EUSAR, Leipzig, Germany, 2022.

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: The impact of differences between Global Digital Elevation Models in geolocation with ESA EOCFI

Authors: Ludo Visser, Montserrat Pinol Sole, Michele Zundo
Affiliations: Akkodis for ESA/ESTEC, ESA/ESTEC
Global Digital Elevation Models (GDEMs) play an important role in geolocation in the processing of sensor data in Earth observation missions. The models provide elevation information and, sometimes, surface masks for a given latitude and longitude. The ESA Earth Observation CFI software libraries (EOCFI), which are used in the data processing pipelines of many Earth observation missions, provide a standardized interface for querying various DEMs, including the ACE-2, GETASSE and Copernicus global elevation models, at multiple resolutions. Together with the routines for spacecraft state propagation (on-orbit position, velocity and attitude) and instrument pointing, the EOCFI libraries provide a complete toolbox for geometry and geolocation in the data processors. However, their versatility in handling the various available elevation models naturally leads to the question of whether all models are equally suitable and, if not, which model should then be used. To aid in answering these questions, we explore a few aspects of the available elevation models in relation to the geolocation routines in EOCFI, such as the reference (geoid versus ellipsoid) and resolution. The handling of “water surfaces” is also discussed: the presence of a surface (land/sea) mask, the presence of bathymetric data, and the interpretation of “sea level” water pixels near and far away from the coast. The discussion points are illustrated by examples. These will show the impact of elevation model characteristics on geolocation, and how “similar” elevation models (e.g., models with the same spatial resolution) can lead to very different geolocation results. The impact of spacecraft attitude and instrument pointing is also considered. The purpose of the discussion is not only to create awareness about the fundamental differences between the available DEMs, but more importantly to help develop an understanding of how these differences affect geolocation in EOCFI.
This will hopefully aid in choosing a suitable elevation model for a particular processor, considering mission design and observation objectives.
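One of the reference differences mentioned (geoid versus ellipsoid) can be illustrated with the basic height conversion; a minimal sketch, not EOCFI code:

```python
def ellipsoidal_height(orthometric_h, geoid_undulation_n):
    """Convert a geoid-referenced (orthometric) elevation H into an
    ellipsoidal height h via h = H + N, where N is the geoid
    undulation (height of the geoid above the reference ellipsoid)
    at that location. N varies roughly between -107 m and +85 m
    globally, so mixing the two references without conversion can
    bias the intersection of the viewing ray with the terrain by
    tens of metres in height."""
    return orthometric_h + geoid_undulation_n
```

For an off-nadir viewing geometry, such a height bias maps directly into a horizontal geolocation shift, which is why the vertical reference of each DEM must be known before it is queried.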

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: Sentinel-2 GRI: Guidelines for Optimal Use and Outcomes of the Study for a New Version

Authors: Emmanuel Hillairet, Sebastien Clerc, Antoine Burie, Simon Rommelaere, Seetha Palla, Silvia Enache, Rosalinda Morrone, Valentina Boccia
Affiliations: CS Group, ACRI-ST, STARION Group, ESA/ESRIN
Due to their worldwide regular acquisitions and accurate geolocation (<5 m CE90), the Sentinel-2 satellites provide an interesting reference for other missions. MicroCarb and TRISHNA, institutional missions involving the French space agency (CNES), are using Sentinel-2 acquisitions as geometric and radiometric references. New Space missions (for instance Copernicus Contributing Missions: CCM) are also using, or intend to use, Sentinel-2 as a reference. On behalf of ESA, the OPT-MPC (Optical Mission Performance Cluster) team publicly provides the Copernicus Sentinel-2 Global Reference Image (GRI, via https://sentinels.copernicus.eu/web/sentinel/global-reference-image, and soon via the Copernicus Data Space Ecosystem: CDSE). These mono-spectral (red band) databases of L1C multi-layer tiles and GCPs correspond to acquisitions from the early stages of the mission (2015-2017). Even if the current GRI is considered of great interest due to its worldwide extent and accurate geolocation, some limitations appear (unequal GCP density, some temporally unstable GCPs, non-negligible cloud cover, landscape changes since its generation), and a new GRI is under study, aiming at taking advantage of the full available archive of Sentinel-2 acquisitions and at applying stricter rules to extract the GCPs. The proposed presentation will address the following topics: first, an introduction to the current GRI: how it was built, its geolocation performance, and the advantages and limitations of its use; then, proposed guidelines for optimal use: depending on the context (sensor bands, resolution and swath), whether to use GCPs, tiles, or even a more suitable product in the archive (more recent, adapted spectral band).
Finally, we will present the outcomes of the study dedicated to the generation of a new, up-to-date GRI: statistics provided by the screening of the full archive, ways to tackle remaining cloudy areas, and methods to extract and validate the GCPs from the selected tiles. This presentation will also be an opportunity to initiate discussions with users, or potential users, of Sentinel-2 acquisitions as references for their missions, and to exchange experience and feedback, with the objective of continuously improving the proposed Global Reference Image.
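One generic way to screen GCP image chips for temporal stability is normalized cross-correlation across acquisitions. The sketch below is a hypothetical illustration of that idea, not the OPT-MPC processing chain (the threshold value is invented):

```python
import numpy as np

def ncc(chip_a, chip_b):
    """Normalized cross-correlation between two co-located image chips;
    values near 1 indicate a well-matched, stable scene."""
    a = chip_a - chip_a.mean()
    b = chip_b - chip_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def stable_gcp(chips, threshold=0.8):
    """Keep a candidate GCP only if every acquisition correlates well
    with the first (reference) chip -- a simple proxy for a temporal
    stability rule; the 0.8 threshold is a placeholder."""
    ref = chips[0]
    return all(ncc(ref, c) >= threshold for c in chips[1:])
```

A rule of this kind, applied over the full archive, would reject GCPs on fields, shorelines, or other changing landscapes while keeping those on stable features.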

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: Towards a multi-source and multi-scale DTM/DSM

Authors: Dr. Serge Riazanoff, Mr. Kévin Gross, Mr. Axel Corseaux
Affiliations: VisioTerra
During the last decade, numerous Medium Resolution (MR) global Digital Elevation Models (DEMs) and Very High Resolution (VHR) DEMs of cities, regions and countries of the world have been released publicly, with very permissive licenses. In the framework of the Earthnet Data Assessment Project (EDAP) and the DEM Intercomparison eXercise (DEMIX), local VHR DEMs (1 to 2.5 m resolution) have been used as reference data to assess the quality of well-known global DEMs (1 arcsecond, approximately 30 m resolution at the equator). These assessments underlined the various criteria of interest for DEM users, from very generic metrics (horizontal and vertical accuracies as RMSE or linear/circular error) to thematic ones (accurate depiction of coasts, crests, buildings, vegetation, etc.). Nowadays, these user requirements cannot be fulfilled by a static DEM, as they depend on the scale, features and possibly time of interest. The profusion of public DEMs offers an opportunity to create a multi-source and multi-scale DEM fitting a wide variety of users’ needs. While promising, this idea raises several technical challenges, such as the collection of input elevation data, the geometric transformations to international datums, the resampling methods, the generation of Digital Terrain Models (DTM) from Digital Surface Models (DSM), and even cross-border merging algorithms. Through DEMIX and EDAP, VisioTerra has performed assessments regarding the geometry and transformations of local VHR DEMs, as well as the impact of resampling methods on DEM products. Additionally, VisioTerra has developed the DEMIX Operations Platform, which allows users to retrieve a selection of DEMs with custom export parameters such as the Ground Sampling Distance (GSD), Coordinate Reference System (CRS), Vertical Reference System (VRS), resampling method (interpolation or aggregation) and pixel type (point or area).
The experience gained from these studies and tools, along with ongoing and future works about DEM merging and surface to bare-earth conversion algorithms, could pave the way to the creation of a multi-scale and multi-source DSM / DTM.
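The export parameters above distinguish two resampling families: interpolation when refining the GSD and aggregation when coarsening it. As a toy illustration of the difference (a minimal stdlib sketch, not the DEMIX platform's implementation), bilinear interpolation estimates elevations between posts, while block-mean aggregation averages posts into a coarser grid:

```python
# Toy sketch of the two resampling families: interpolation (upsampling)
# vs. aggregation (downsampling). Illustrative only.

def bilinear(grid, x, y):
    """Bilinear interpolation of a 2D elevation grid at fractional (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bottom = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def aggregate_mean(grid, block):
    """Aggregate a DEM by averaging block x block windows (coarser GSD)."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(0, rows, block):
        row = []
        for c in range(0, cols, block):
            vals = [grid[rr][cc]
                    for rr in range(r, min(r + block, rows))
                    for cc in range(c, min(c + block, cols))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

dem = [[10.0, 12.0], [14.0, 16.0]]
print(bilinear(dem, 0.5, 0.5))   # centre of the 2x2 cell -> 13.0
print(aggregate_mean(dem, 2))    # one coarse pixel -> [[13.0]]
```

A production merger would additionally handle nodata values, datum shifts, and the point-vs-area pixel convention mentioned above.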

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Session: D.03.04 Innovative technologies, tools and strategies for scientific visualisation and outreach

In recent years there has been an advancement in the field of communicating science-based information and facts to citizens through open-access and open-source web-based solutions and mobile applications. In Earth observation, these solutions use innovative ways of presenting EO-based indicators and data, often coupled with storytelling elements to increase accessibility and outreach. Additionally, such innovations, coupled with data access and computation on cloud-based EO platforms, are very effective tools for scientific data dissemination as well as for education in EO and Earth Science. In this session we welcome contributions from innovative web-based solutions, dashboards, advanced visualisation tools and other new and innovative technologies and use cases for scientific communication, dissemination and education. In particular, we seek to explore how such solutions help to increase the impact of science, create and grow communities and stimulate adoption of EO. We also look towards the future, exploring trends and opportunities to connect with non-EO communities and adopt new technologies (e.g. immersive tools, AR, VR, gaming engines, etc.). The session will be an opportunity to exchange experiences and lessons, and to explore opportunities for further collaboration.

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: Biodiversity and Climate Change: Coral Reef Visualization Using Immersive Digital Twins (VR)

Authors: Rainer Ressl, Thomas Heege, Knut Hartmann, Eva Haas, Genghis Borbolla, Veronica Aguilar, José Davila, Raúl Jimenez
Affiliations: EOMAP GmbH, National Commission for the Knowledge and Use of Biodiversity - CONABIO
Immersive virtual reality (VR) has proven to be an effective tool for enhancing understanding by creating a sense of spatial presence, where users feel as “being there” in a virtual environment. This capability makes VR particularly valuable for visualizing geospatial phenomena. However, much of the existing research on immersion and presence occurs in laboratory conditions, while studies focusing on virtual representations of real-world environments remain limited. Integrating VR with data from real locations with true geometries opens new opportunities for scientific research, public engagement, and environmental conservation. VR visualization offers not only realistic and immersive experiences but also enables effective monitoring of fragile ecosystems by combining data from Earth Observation (EO) satellites and biodiversity surveys. Such integration provides a platform for presenting complex scientific findings in accessible and visually engaging formats, making them understandable to broader audiences. This fosters environmental awareness and supports decision-making for conservation efforts. Our prototype demonstrates the potential of immersive VR by visualizing the biodiversity of a tropical coral reef of the Mexican Caribbean coast. Using precise 3D models derived from geospatial data, we recreate the geometry and habitats of the reef with unparalleled detail. This combination of EO-derived baseline information, such as bathymetry and benthic habitat maps, with in-situ biodiversity data allows users to explore coral reefs in a highly realistic way. The result is an authentic VR environment that provides a deeper understanding of these ecosystems. Future enhancements will incorporate data from Sentinel-3 on sea surface temperatures, enabling users to explore the impacts of climate change scenarios, such as coral bleaching events, on species and habitats. 
By visualizing these dynamics, users gain a clearer understanding of how climate stressors affect coral reefs and their associated biodiversity. The immersive VR experience goes beyond static visualization by allowing users to interact with ecosystems under different environmental conditions or future scenarios. This interaction educates users about the significance of these ecosystems, builds empathy, and inspires a sense of stewardship. It also promotes sustainable tourism by emphasizing the importance of conserving natural resources. By highlighting the ecological value of these ecosystems, our immersive VR prototype contributes to both public awareness and environmental protection policies. While the tropical coral reef serves as an initial showcase, the methodology can be applied to other fragile ecosystems, such as mangroves or other coastal ecosystems. This flexibility ensures broad applicability for future developments in ecosystem monitoring and conservation.

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: Xcube UI: The Next-Generation Interactive Visualization & Communication Platform

Authors: Norman Fomferra, Yogesh Kumar Baljeet Singh, Gunnar Brandt
Affiliations: Brockmann Consult
Effective communication and dissemination of Earth Observation (EO) data in the digital era requires appealing, intuitive, and interactive visualisation tools. Numerous toolkits, applications, and services exist to manage and visualise the vast volumes of EO data and other sources. These range from simple plotting libraries used during the development phase to sophisticated online viewers, dashboards, and cloud-based visualisation services. However, creating and disseminating impactful, tailored visualizations of raw EO data paired with interactive analysis and processing features often requires multiple specialized tools and substantial expertise in diverse techniques. As a consequence, typical research users often need technical support from external experts or third-party services, which makes the entire process cumbersome, costly, and time-consuming. Consequently, many research findings remain insufficiently visualised, impairing their impact. To address this issue, we extended the xcube ecosystem with xcube UI, an open-source user-interface framework that enables Python-literate researchers to easily create customized online viewers, dashboards, and even toolboxes. The xcube ecosystem facilitates data access and processing of EO data, e.g., within Jupyter notebooks, and the new visualisation framework makes developing and displaying tailored visualisation apps as straightforward as using common plotting libraries. Researchers can leverage xcube datastores for convenient data access, even from remote cloud sources, resulting in xarray datasets that can be further processed and eventually handed over to the xcube server, which is the back-end for any xcube visualisation app. This setup enables interactive visualisations with just two lines of code. In a JupyterLab, the app can be either started and operated in full-screen mode in a separate browser tab or embedded inline in a notebook.
The xcube UI framework provides a complete EO data viewing and analysis application out of the box, which offers an interactive map alongside configurable charts, featuring numerous tools such as layer and colourmap management, animations, feature data integration, on-the-fly computation of user-defined variables, and split screen. Additionally, users can tailor the app by adding and configuring extra panel components that can comprise any chart(s) from the powerful Vega-Altair library combined with Material UI components, with the data of their choice, thus enabling the creation of high-quality, customizable diagrams. Other charting, visualization, or UI component libraries can be easily supported. This flexibility allows users to prototype individualized visualisation apps on their own by creating individual configurations, thus eliminating the need for external assistance. These tailored visualisation apps can subsequently be deployed publicly without further adaptations, either on the user's own infrastructure or through platforms that offer hosting services, such as DeepESDL. In our presentation, we will showcase an end-to-end workflow for publishing data cubes with EO data, alongside several examples that highlight the adaptability of the xcube framework and the advantages of its powerful yet user-friendly approach to develop interactive, tailored web applications for data visualization and dashboards. The xcube UI framework's open licensing, seamless integration into Python's data science ecosystem, and compatibility with Jupyter or as a standalone web application greatly facilitate the dissemination of EO data and research results to a wider audience. By enabling researchers to independently create and deploy interactive visualisations and toolboxes, the framework supports the democratization of research and aligns with the European Space Agency's ambition to foster open science and boost collaborative innovation.

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: An Interactive Scientific Visualization Toolkit for Earth Observation Datasets

#zarr

Authors: Lazaro Alonso, Jeran Poehls, Nuno Carvalhais, Markus Reichstein
Affiliations: Max-Planck Institute for Biogeochemistry
Visualizing and analyzing data is critical for identifying patterns, understanding impacts, and making informed predictions. Without intuitive tools, extracting meaningful insights becomes challenging, diminishing the value of collected information. FireSight[1], an open-source prototype developed within the Seasfire[2] project, addresses these challenges by offering a data-driven visual approach to fire analysis and prediction. Other tools, such as LexCube[3], which focuses on visualizing 2D textures in a 3D space, or CarbonPlan[4], which specializes in 2D maps from Zarr stores, provide additional methods for interacting with spatial data. While these tools excel in specific areas, FireSight's comprehensive visualization enhances multidimensional analysis, enabling users to derive deeper insights from complex datasets. The toolkit leverages advanced web technologies to deliver interactive and visually compelling 3D volumetric renders. Its design allows users to easily customize the interface by integrating modern user interface (UI) components. The platform provides intuitive browser experiences powered by React, OpenGL Shading Language, and ThreeJS. Through a web-based interface, users can interactively select variables from different data stores, dynamically explore data in 2D and 3D where applicable, and calculate relationships between variables. A key objective is to enhance the visualization of observational data and modeling outputs, supporting the interpretation and communication of results. The visualization toolkit offers several key features: (1) users can dynamically explore data, selecting any variable and viewing it in both 2D and 3D when a time dimension is available. (2) Relationships between variables can be calculated, enhancing analytical capabilities for deeper data insights.
(3) The tool supports the visualization of various Earth observation datasets, which can serve as inputs for modeling frameworks, ensuring flexibility in data exploration. (4) Finally, the code base is released on GitHub as the open-source FireSight, with detailed instructions for installation and operation. Currently, plotting is restricted to the entire spatial extents of datasets, requiring a local dataset for streaming information. However, the chunking method of the Zarr data format offers potential for cloud-based EO platforms to enable pixel-level exploration. This capability would facilitate the visualization of complex modeling outputs without excessive data transfer. Aligned with Open Science principles, FireSight development incorporates community-driven libraries such as React, ThreeJS, and Zarr.js, while actively contributing to repositories like react-tweakpane[5]. The platform emphasizes modularity to ensure adaptability for future EO applications and interdisciplinary outreach. This presentation will explore the platform's design philosophy, technical implementations, and future expansion plans, including the integration of pyramid data schemes for high-resolution datasets. These advancements pave the way for next-generation scientific data exploration. By fostering open innovation, FireSight aims to bridge the gap between Earth Observation researchers, educators, and non-specialist communities, amplifying the impact of scientific endeavors and encouraging cross-disciplinary collaborations. [1] https://github.com/EarthyScience/FireSight [2] https://seasfire.hua.gr/ [3] https://www.lexcube.org/ [4] https://carbonplan.org/blog/maps-library-release [5] https://github.com/MelonCode/react-tweakpane/pull/3
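The pixel-level exploration idea rests on simple chunk arithmetic: in a chunked store such as Zarr, a client can compute which single chunk contains a pixel and fetch only that chunk rather than the whole array. A minimal stdlib sketch of that arithmetic (hypothetical chunk sizes, not FireSight or Zarr library code):

```python
# Why chunking enables pixel-level reads: one pixel -> one chunk to fetch.
# The (time, lat, lon) cube and its 10 x 256 x 256 chunking are hypothetical.

def chunk_for_pixel(pixel, chunk_shape):
    """Return (chunk index, offset within chunk) for an N-D pixel coordinate."""
    idx = tuple(p // c for p, c in zip(pixel, chunk_shape))
    off = tuple(p % c for p, c in zip(pixel, chunk_shape))
    return idx, off

chunks = (10, 256, 256)
idx, off = chunk_for_pixel((42, 1000, 513), chunks)
print(idx)  # (4, 3, 2) -> only the chunk keyed "4.3.2" must be transferred
print(off)  # (2, 232, 1) -> position of the pixel inside that chunk
```

In a Zarr store each chunk index maps to a separately addressable object, which is what makes this selective transfer possible from cloud storage.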

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: Lexcube: A Multi-Platform Ecosystem for Interactive Data Cube Visualization and Exploration

Authors: Maximilian Söchting, Prof. Dr. Gerik Scheuermann, David Montero, Miguel D. Mahecha
Affiliations: Remote Sensing Centre for Earth System Research (RSC4Earth), Leipzig University, Institute for Earth System Research and Remote Sensing, Leipzig University, Image and Signal Processing Group, Leipzig University, ScaDS.AI (Center for Scalable Data Analytics and Artificial Intelligence), German Centre for Integrative Biodiversity Research (iDiv)
In Earth observation (EO) and modeling, the exploration and understanding of large-scale, multi-dimensional data remains a significant challenge. Data visualization and interactive exploration tools are crucial for enabling data interpretation, model development and scientific workflows, but face technological challenges such as increasing dataset sizes and resolutions, complex user interfaces, and poor integration into existing scientific workflows. We present Lexcube, an ecosystem of tools designed to bridge the gap between complex Earth observation data sets and intuitive visual exploration through interactive 3D data cube visualization approaches. The Lexcube ecosystem consists of five main components: (1) a public web-based platform (lexcube.org), (2) an open-source Python-based Jupyter notebook plugin, (3) a customized data cube visualization interface suitable for museum exhibits, (4) a physical interactive touch cube and (5) data cube paper craft templates as visualization outputs. The web platform (1) provides immediate access to preset datasets, enabling users to explore and understand high-resolution Earth observation data through the interactive 3D data cube visualization, even on smartphones and tablets. The Jupyter plugin (2) extends these capabilities to any gridded dataset compatible with Xarray or NumPy, allowing researchers to seamlessly employ interactive visualizations in their existing workflows in Jupyter notebooks. Utilizing a customized data cube visualization interface with explainer texts and a simplified user experience (3), Lexcube has been successfully exhibited at various German institutions and demonstrated its capability for science communication, allowing visitors to interactively explore local or global EO data.
Combining the tactility of physical interaction with the existing visualization capabilities, we conceptualized and built an interactive touch cube (4) that brings a fully interactive data cube into the real world, transforming the digital visualization into a tangible experience. Similarly, any Lexcube deployment allows users to export the current data cube visualization as a paper craft template (5), enabling users to create physical representations of their data cubes and subselections, providing an engaging method for science communication and education. The ecosystem shares a common Lexcube application core which enables consistent functionality and user experience across all platforms. This approach facilitates the deployment of new features and improvements across the ecosystem. Current developments focus on implementing 3D volume visualization capabilities, specifically designed for exploring extreme events within datasets. This enhancement will allow users to "look inside" their data cubes, providing new insights into the spatial and temporal characteristics of extreme events across all Lexcube deployments. In this contribution, we demonstrate how the Lexcube ecosystem advances data visualization in Earth observation, scientific workflows and outreach by providing multiple, complementary approaches to data exploration. We showcase how the integration of digital and physical interactions enhances data understanding and communication across different user groups, from researchers to the general public.

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: Innovative discovery and analysis tools for multisensor exploitation

Authors: Lucile Gaultier, Fabrice Collard, Dr. Craig Donlon, Ziad El Khoury Hanna, Sylvain Herlédan, Guillaume Le Seach
Affiliations: Oceandatalab, ESA
Nowadays, a wide variety of observations from different sensors at different processing levels, as well as in-situ observations and models, provide us with an estimation of ocean geophysical variables at different scales, from submesoscale (a few hundred meters) to large scale (hundreds of kilometers). For instance, the Sentinel 1-2-3-6 program encompasses sensors such as SAR, ocean colour, brightness temperature, and altimeters, each with an individual long revisit time but a rapid revisit from a constellation perspective. Exploiting the synergy of these various sources is essential to our understanding of ocean dynamics while being aware of their respective potential and limitations. Despite the wealth of data, discovering, collocating, and analyzing a heterogeneous dataset can be challenging and act as a barrier for potential users wishing to leverage Earth Observation (EO) data. Accessing low-level data and preparing them for analysis requires a diverse set of skills. Addressing this challenge, the Ocean Virtual Laboratory Next Generation (OVL-NG) project has developed two tools, which will be introduced. Firstly, online data visualization websites, such as https://ovl.oceandatalab.com, have been made publicly accessible. The OVL portal empowers users to explore various satellite, in-situ, and model data with just a few clicks. Users can navigate through time and space, easily compare hundreds of products (some in Near Real-Time), and utilize drawing and annotation features. The OVL web portal also facilitates sharing interesting cases with fellow scientists and communicating about captivating oceanic structures using the embedded SEAShot tool. Secondly, a complementary tool named SEAScope offers additional features for analyzing pre-processed data and user-generated data. SEAScope is a free and open-source standalone application compatible with Windows, Linux, and macOS. It allows users to collocate data in time and space, rendering them on a 3D globe.
Users can adjust rendering settings on the fly, extract data over a specific area or transect, and interface with external applications like Jupyter notebooks. This functionality enables users to extract data on a shared grid, analyze them, and import the results back into SEAScope for visualization alongside the input data. Join us at the ESA booth for live demonstrations of the OVL tools—explore and interact with Earth Observation data firsthand!

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: Earth Observation Science Storytelling with Dashboards

Authors: Dr Anca Anghelea, Manil Maskey, Naoko Sugita, Shinichi Sobue, Daniel Santillan
Affiliations: ESA, NASA, JAXA, EOX IT Services
The role of Earth Observation information in supporting societal decision making and action has expanded significantly in recent years. EO data is no longer an instrument reserved for scientists and experts, but has penetrated several aspects of society and the economy. Having easy and convenient access to open global EO data empowers citizens and companies to look deeper into aspects related to their immediate environments, and make informed decisions about their lives and businesses. Yet in order to achieve real use of EO-based information we need to provide it in a form consumable by the end user - potentially higher level information, abstracted from the technicalities of dealing with the EO data itself - and through channels that are commonly accessed - such as mobile and web applications. Open data policies (e.g. Copernicus Programme) are making it possible to develop sophisticated tools that connect to these vast open data archives, analyse and process the data to transform it into meaningful information, package and represent this information through visualisations that are intuitive, and allow the users to configure what and how they want to consume this information. The ESA-NASA-JAXA EO Dashboard is one such solution that builds on informational and technology solutions developed by the three collaborating agencies, to bring to the general public an open and intuitive tool for exploring global changes associated with human activity, based on EO data. The EO Dashboard is accessible at https://eodashboard.org and has been developed and expanded over the past 4 years of the collaboration. The project started as an initiative to identify and communicate changes due to the COVID-19 pandemic that were observable with EO data. In 2020, the impact of the information shown on the EO Dashboard through powerful visualisations (such as the air pollution decrease in major urban areas) was global.
It demonstrated the impact of coupling EO Data with Web Technologies, through the use of interoperable standards and protocols, and delivered via suitable visualisation and exploration tools. The EO Dashboard has since evolved beyond COVID-19 to cover several other domains including: atmosphere, oceans, cryosphere, biomass, agriculture, economy, and the most recent one - extreme events. The tool has also expanded its visualisation and interactive capabilities, providing now not only an exploration interface but also scientific storytelling based on EO mission data, and a self-service tool for scientists, journalists, and citizens to create and publish their own findings using EO data and interactive maps, coupled with a variety of multimedia elements, and linked to applications developed on EO cloud platforms. In this presentation we will demonstrate the EO Dashboard and all its features and capabilities to support scientific communication and storytelling with EO Data, and will showcase some of the most compelling user-contributed as well as official tri-agency stories published on the platform. We will also present plans for future features (such as the integration of AI-based capabilities). Some example features can be explored at: - Explore Data interface: https://www.eodashboard.org/explore?indicator=GRDI1&x=962284.36551&y=5994420.53067&z=5.23679 - Thematic Storytelling - Oceans: https://www.eodashboard.org/oceans - User contributed storytelling - Extreme Events: https://www.eodashboard.org/story?id=hunga-tonga-aerosols

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Session: A.08.07 Ocean Health including marine and coastal biodiversity - PART 1

Ocean Health, defined as the Ocean’s condition that allows it to continuously provide services for humans in a sustainable way, while preserving its intrinsic well-being and its biodiversity, is under considerable threat. Decades of pollution, overexploitation of resources and damaging coastal environment use have severely degraded the condition of both coastal and offshore marine ecosystems, compromising the Ocean’s capacity to provide its services. This degradation is being further exacerbated by Climate Change, whose effects on the Oceans are numerous. The many sensors on board currently operating satellites (altimeters, radiometers, scatterometers, synthetic aperture radars, spectrometers) are highly relevant for Ocean Health and Biodiversity studies, providing continuous, global and repetitive measurements of many key parameters of the physical (temperature, salinity, sea level, currents, wind, waves) and biogeochemical (Ocean Colour related variables) marine environment, including high-resolution mapping of key marine habitats (coral reefs, kelp forests, seagrass…). In this context, this session welcomes contributions demonstrating how satellite data can be used to better monitor Ocean Health, including the retrieval of Essential Biodiversity Variables and the estimation of the many different stressors, including marine litter, impacting Ocean Health and marine and coastal biodiversity. The capability of single sensors is further amplified when they are used in synergy with other space and in-situ measurements, or together with numerical modelling of the physical, biogeochemical and ecological ocean state, so the session encourages multi-sensor and multi-disciplinary studies. The session is also open to contributions demonstrating how EO-derived products can be used to support management actions to restore and preserve Ocean Health and marine and coastal biodiversity.

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: An exceptional phytoplankton bloom in the southeast Madagascar Sea driven by African dust deposition

Authors: John Gittings, Dr Giorgio Dall’Olmo, Dr Weiyi Tang, Joan Llort, Dr Fatma Jebri, Dr Eleni Livanou, Dr Francesco Nencioli, Dr Sofia Darmaraki, Mr Iason Theodorou, Dr Robert J. W. Brewin, Prof Meric Srokosz, Prof Nicolas Cassar, Prof Dionysios E. Raitsos
Affiliations: National and Kapodistrian University Of Athens, Sezione di Oceanografia, Istituto Nazionale di Oceanografia e Geofisica Sperimentale – OGS; Borgo Grotta Gigante, Trieste, 34010, Italy, Department of Geosciences, Princeton University; Guyot Hall, Princeton, NJ 08544, United States of America, Barcelona Supercomputing Center; Plaça d'Eusebi Güell, 1-3, Les Corts, 08034 Barcelona, Spain, National Oceanography Centre; Southampton, SO14 3ZH, United Kingdom, Collecte Localisation Satellites; 31520 Ramonville-Saint-Agne, France, Centre for Geography and Environmental Science, Department of Earth and Environmental Science, Faculty of Environment, Science and Economy; University of Exeter, Cornwall, United Kingdom, Division of Earth and Climate Sciences, Nicholas School of the Environment, Duke University; Durham, NC, United States of America
Rising surface temperatures are projected to cause more frequent and intense droughts in the world’s drylands. This can lead to land degradation, mobilization of soil particles, and an increase in dust aerosol emissions from arid and semi-arid regions. Dust aerosols are a key source of bio-essential nutrients, can be transported in the atmosphere over large distances, and ultimately deposited onto the ocean’s surface, alleviating nutrient limitation and increasing oceanic primary productivity. Currently, the linkages between desertification, dust emissions and ocean fertilization remain poorly understood. Here, we show that dust emitted from Southern Africa was transported and deposited into the nutrient-limited surface waters southeast of Madagascar, which stimulated the strongest phytoplankton bloom of the last two decades during a period of the year when blooms are not expected. The conditions required for triggering blooms of this magnitude are anomalous, but current trends in air temperatures, aridity, and dust emissions in Southern Africa suggest that such events could become more probable in the future. Together with the recent findings on ocean fertilization by drought-induced megafires in Australia, our results point towards a potential link between global warming, drought, aerosol emissions, and ocean blooms.

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Information content analysis of hyperspectral data for identification of microalgae and cyanobacteria species: from laboratory experiments to PRISMA and EnMAP satellite applications for super blooms monitoring

Authors: Pierre Gernez, Dr. Tristan Harmel, Victor Pochic, Dr. Martin Hieronymi, Dr. Maria-Laura Zoffoli, Dr. Thomas Lacour, Dr. Amalia Sacilotto Detoni, Dr. Antonio Ruiz-Verdù, Dr. Ruediger Roettgers
Affiliations: Nantes University, Magellium, Helmholtz-Center Hereon, Consiglio Nazionale delle Ricerche, Ifremer, University of Valencia
Optical sensing of the aquatic environment relies on the radiation exiting the water body, measurable by radiometers above the water surface or on board satellites. Light propagation within the water column is controlled by the balance between absorption and scattering, eventually leading to the water-leaving radiance. It is common practice to approximate the radiative transfer equation by non-linear equations relating the water-leaving radiance (or the remote sensing reflectance) to a ratio of the bulk absorption and backscattering coefficients. In the case of phytoplankton "super blooms" (i.e. highly concentrated blooms typically dominated by one or two microalgal species), the absorbing pigments of the microalgae or cyanobacteria greatly alter the water-leaving radiance, with conspicuous discoloration visible from space. Over decades, numerous efforts have been made to collate and document the species-specific absorption properties of phytoplankton in relation to their absorbing pigment concentration. In contrast, far less data have been collected and analysed concerning phytoplankton scattering properties. The objectives of this study were twofold: first, to analyse how the taxonomical determination of the bloom-dominating taxon could be retrieved from the absorption spectrum, and second, to assess the importance of the scattering properties and their impact on the detectable water-leaving radiance for identifying dominant species in super blooms or very productive waters from space. To achieve the absorption-related objective, a dataset of 164 hyperspectral absorption measurements of bloom-forming species was obtained from monospecific culture data, compiling published and new measurements. The level of taxonomic information amenable to absorption-based analysis was assessed and compared with pigment-based classification. In particular, the ability to distinguish dinoflagellates from diatoms, prymnesiophytes, and raphidophytes was demonstrated.
This is an important result because Dinophyceae are known for their ability to form super blooms, and are recognized as notoriously challenging to discriminate from other phytoplankton classes. To address the scattering-related objective, the analysis was based on innovative measurements of the volume scattering function (from LISST-VSF) and the hyperspectral backscattering coefficient (from the HiFi-bb, a newly developed instrument to measure the backscattering coefficient at hyperspectral resolution) obtained for several species in a new laboratory experiment. The impact of absorption on scattering (anomalous dispersion) was first investigated. Then, the dataset was used within radiative transfer and its backscattering-absorption ratio approximation to study the impact of species-specific scattering and the potential identification of dominant species. Case studies and validation are discussed based on applications to in situ and PRISMA/EnMAP satellite hyperspectral data, as a demonstration of potential global applications to the next operational hyperspectral missions such as CHIME (ESA) and SBG (NASA). Altogether, this study contributes to quantifying the optimal potential of hyperspectral remote sensing to identify super bloom events in the absence of field information.
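The non-linear approximation of radiative transfer mentioned above is commonly written as Rrs ≈ g · bb / (a + bb), relating remote sensing reflectance to the bulk backscattering and absorption coefficients. A minimal sketch, with an illustrative coefficient g and example values that are not from this study, showing how strong pigment absorption depresses reflectance at the absorbing wavelength:

```python
# Toy sketch of the backscattering-absorption ratio approximation.
# g (~0.09 sr^-1 here) and the band values are illustrative assumptions.

def rrs_approx(a, bb, g=0.09):
    """Approximate remote sensing reflectance (sr^-1) from bulk
    absorption a and backscattering bb (both m^-1)."""
    return g * bb / (a + bb)

# A strongly absorbing pigment band vs. a weakly absorbing window band,
# at equal backscattering:
pigment_band = rrs_approx(a=1.0, bb=0.02)   # high absorption
window_band = rrs_approx(a=0.1, bb=0.02)    # low absorption
print(pigment_band < window_band)  # True: absorption features shape the spectrum
```

This is why species-specific pigment absorption leaves identifiable dips in hyperspectral reflectance, while bb modulates the overall reflectance level — the scattering effect the study set out to quantify.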
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Retrieving Phytoplankton Functional Groups and Size Classes in Optically Complex Waters Using Sentinel-2 and Sentinel-3 imagery

Authors: Tiit Kutser, Karolin Teeveer, Laura Argus, Birgot Paavel, Kaire Toming, Martin Ligi, Külli Kutser, Tuuli Soomets, Ele Vahtmäe
Affiliations: Estonian Marine Institute, University of Tartu
Knowing the phytoplankton functional groups (PFTs) and/or their size classes (PSCs) is important for multiple purposes, e.g. for understanding the biological carbon pump, studying the ecology of waterbodies, and predicting possible changes that may happen due to climate change and other stressors. The models that are used to study the above-mentioned problems require input data with large spatial coverage (often global) and high temporal frequency. The latter is especially critical in inland and near-coastal waters, where processes happen at hourly scales and the spatial heterogeneity of PFTs and PSCs may be high. It is obvious that in situ data from ships/boats cannot provide sufficient spatial and temporal coverage. Remote sensing is the only tool that could provide sufficient spatial and temporal coverage of PFT and PSC data, and it has been used in the open ocean context, where the main optically active water constituent is phytoplankton and the other constituents (coloured dissolved organic matter, CDOM, and suspended particulate matter, SPM) are phytoplankton degradation products. In inland and coastal waters, most of the CDOM and SPM originates from nearby land or is resuspended from the bottom. Thus, the concentrations of the optically active constituents vary independently of each other and over a huge range, making interpretation of remote sensing imagery (including retrieval of PFTs and PSCs) extremely difficult. PFT retrieval from remote sensing reflectance is usually based on detecting pigments that are specific to a certain phytoplankton group, based on the absorption features each pigment causes in water reflectance spectra. PSC retrieval is based on the fact that particles of different sizes scatter light differently and change the absolute value of water reflectance. This allows us to make assumptions on the size of particles in the water under investigation.
There is also an alternative empirical method for retrieving PFTs from phytoplankton biomass (expressed as the concentration of chlorophyll-a, Chl-a) using relationships between Chl-a and the relative biomass of some PFTs obtained from laboratory measurements. This method is used by the Copernicus Marine Service (CMEMS), where the relative biomasses of some PFTs are calculated from the Chl-a product retrieved from Sentinel-3 and Sentinel-2 data. We have studied the species composition of phytoplankton in more than 200 sampling stations in the Baltic Sea. The species composition was determined based on microscopy data. We measured water reflectance (TriOS Ramses) and inherent optical water properties (WetLabs ac-s, eco-bb3, eco-vsf3, CTD) in each of the sampling stations and took water samples for determining Chl-a and other chlorophylls, CDOM, SPM and its inorganic and organic components (SPIM and SPOM), as well as carbon fraction data (DOC, POC, TOC, DIC, PIC) in recent years. Analysis of the published PFT-specific pigments shows that most of them absorb light at the same wavelengths. For example, CMEMS divides phytoplankton into seven PFTs. Four characteristic pigments (zeaxanthin, alloxanthin, β,ε-carotene and β,β-carotene) have nearly identical specific absorption spectra, while the fucoxanthin absorption spectrum is very similar to those four. It is practically impossible to separate these pigments from each other even using hyperspectral sensors and pure pigments. Natural assemblages are mixtures of many phytoplankton species, blurring the effect of these pigments on water reflectance. In the Baltic Sea (and in many lakes) the water-leaving signal is almost missing in the blue part of the spectrum, where the above-mentioned pigments absorb, due to high absorption by CDOM. Moreover, at the spectral resolution of the Sentinel-3 OLCI or Sentinel-2 MSI sensors, the optical effect of these pigments becomes even less distinguishable.
Thus, in optically complex waters like the Baltic Sea it is impossible to separate PFTs from each other using the group-specific pigment absorption features. We showed more than two decades ago that blooms of cyanobacteria are separable from blooms of other phytoplankton thanks to the phycocyanin absorption feature at 620 nm. On the other hand, we also showed that the biomass has to be very high (Chl-a concentration of at least 8-9 mg/m3), even in the clearest parts of the Baltic Sea, in order to make the phycocyanin absorption feature detectable by hyperspectral satellites. Our current results show that this absorption feature is often gone in the later stages of a bloom, preventing the separation of cyanobacterial blooms from blooms of other phytoplankton. Moreover, there are other phytoplankton groups (e.g. diatoms) that contain pigments such as chlorophyll-c1 and c2 that also absorb light at 620 nm, meaning that the 620 nm absorption feature in water reflectance is not specific to cyanobacteria alone. We divided the PSCs into three groups – picoplankton, nanoplankton and microplankton – based on the microscopy results. At present we have not been able to find relationships between backscattering measured in situ and the dominating PSC. This is not surprising, as in most of our samples the inorganic component of SPM is larger or much larger than the organic component. Moreover, in coastal waters not all the SPOM is phytoplankton: there are many organic particles originating from land or from the sea bottom. Consequently, the backscattering of light in water is hardly related to phytoplankton size or abundance, as the phytoplankton contribution is often negligible compared to mineral and other organic particles. We are in the process of validating the CMEMS PFT product for the Baltic Sea, but it is unlikely that it can provide reasonable results.
First of all, the laboratory relationships between the relative biomass of different PFTs and Chl-a are not very strong (r² < 0.3), and the correlation between the CMEMS Chl-a product and in situ data is 0.24 according to their own validation document. Such weak relationships cannot provide a reliable product. As a result, we may say that obtaining reliable PFT and PSC products for optically complex waters like the Baltic Sea is rather unlikely, although we continue to explore the capabilities of different machine learning methods that should allow us to find more sophisticated relationships between the water reflectance obtained by Sentinel-3 OLCI and Sentinel-2 MSI and the PFTs and PSCs.
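The 620 nm phycocyanin feature discussed above is commonly quantified as the depth of the absorption dip relative to a local baseline drawn between two shoulder wavelengths. The sketch below is a hypothetical illustration of such a continuum-removal index; the shoulder wavelengths and the linear baseline are assumptions for illustration, not the authors' method:

```python
import numpy as np

def band_depth(wavelengths, reflectance, feature=620.0, left=600.0, right=640.0):
    """Depth of an absorption feature (e.g. phycocyanin near 620 nm) in a
    reflectance spectrum, measured against a linear baseline interpolated
    between two shoulder wavelengths. Positive values indicate a dip below
    the baseline; shoulders here are illustrative choices.
    """
    wl = np.asarray(wavelengths, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    r_left = np.interp(left, wl, r)            # reflectance at left shoulder
    r_right = np.interp(right, wl, r)          # reflectance at right shoulder
    r_feat = np.interp(feature, wl, r)         # reflectance at the feature
    # linear baseline value at the feature wavelength
    baseline = r_left + (r_right - r_left) * (feature - left) / (right - left)
    return baseline - r_feat
```

As the abstract notes, in practice a detectable depth requires high biomass, and chlorophyll-c pigments can produce a similar dip, so the index alone does not prove cyanobacterial dominance.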
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Trends of phytoplankton community structure over the ocean colour satellite era, an inter-comparison perspective

Authors: Stéphane Doléac, Luther Ollier, Laurent Bopp, Roy El Hourany, Marina Lévy
Affiliations: Sorbonne Université, LOCEAN-IPSL, Ecole des Ponts, Université Littoral Côte d'Opale, LOG, Ecole Normale Supérieure, LMD-IPSL
Phytoplankton community structure plays a critical role in the natural carbon cycle and in the sustainability of marine ecosystems, making it central to overall ocean health. This structure varies across time and space, influenced by environmental factors such as temperature and nutrient availability. While climate change is already affecting these factors, its long-term impact on phytoplankton community structure remains poorly understood. Since 1997, numerous algorithms have been developed to estimate both total phytoplankton abundance and community structure from ocean colour remote sensing. These tools have enabled 25 years of continuous, global-scale observations, providing invaluable insights into the impacts of climate change on phytoplankton. However, leveraging these datasets for long-term trend analysis remains challenging. Temporal inconsistencies and discontinuities in satellite time series, caused by sensor transitions or decommissioning, introduce biases that hinder the accurate detection of trends. In this study, we evaluate four distinct algorithms for retrieving not only total chlorophyll, representative of the entire phytoplankton community, but also its partitioning into key phytoplankton groups. Temporal consistency is ensured by excluding regions with inconsistent observations since 1998 or by filling data gaps. Trends are then analyzed in an inter-comparison framework to identify robust patterns across products. Our results show that the assumptions underlying each algorithm significantly influence the detected trends, leading to substantial inter-product differences. Notably, we find that the assumption of a strong dependency of community structure on total phytoplankton abundance, at the basis of some algorithms, does not hold in others. Overall, little inter-product agreement is found, which calls into question our current ability to monitor these changes.
These findings highlight the need for a better understanding of the drivers controlling the long-term evolution of phytoplankton community structure, in order to develop more robust algorithms. Developing such products would enhance our ability to reliably monitor and predict the impacts of climate change on phytoplankton and ocean health.
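The per-product trend extraction and inter-comparison described above can be illustrated with a deliberately simplified sketch: a least-squares slope per product and a check on whether products agree on the trend direction. The real analysis additionally handles deseasonalization, gap-filling and significance testing; everything below is an illustrative assumption:

```python
import numpy as np

def linear_trend(values):
    """Least-squares slope of a time series (units per time step).

    A minimal stand-in for the trend-detection step; operational
    analyses would first deseasonalize and test significance.
    """
    t = np.arange(len(values), dtype=float)
    v = np.asarray(values, dtype=float)
    slope, _intercept = np.polyfit(t, v, 1)
    return slope

def trends_agree(slopes):
    """True when all products show the same trend direction."""
    return len({np.sign(s) for s in slopes}) == 1
```

Applied per pixel across the four products, such a sign-agreement test is one simple way to flag the "robust patterns across products" the abstract refers to.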
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Coastal Phytoplankton Super Blooms At High Resolution: What Can We Learn From Space?

Authors: Anastasia Tarasenko, Pierre Gernez, Victor Pochic, Tristan Harmel
Affiliations: CNES, Nantes University, Magellium
Recent advances in Earth Observation have improved our ability to analyse phytoplankton blooms in remarkable detail. Phytoplankton, a vital component of marine ecosystems, can also pose serious threats to coastal zones when it develops as a harmful algal bloom (HAB), creating hypoxia zones or releasing toxins. HABs can become very concentrated locally, to such an extent that the optical properties of the water are determined almost solely by one phytoplankton species – a phenomenon we will refer to as “super blooms”. Although conspicuous, these super blooms may cover a relatively small area and last just a few days. Together with the hydrodynamical complexity of coastal waters, this makes their detection extremely challenging for “classic” medium-resolution satellite missions, as well as for in situ monitoring. Building upon a recent “optical bloom type” approach (Gernez et al., 2023), we used Sentinel-2’s high spatial resolution and advanced spectral capabilities to construct time series of super blooms, characterize their optical properties, and determine their environmental drivers. The French Atlantic coastal zone was used as the main study area: it is a region influenced by eutrophication where super blooms frequently occur, and for which phytoplankton taxonomic composition has been documented over the past decades thanks to in situ phytoplankton observation networks. Our research highlights the possibility of automatically detecting super blooms and identifying their dominant optical type from their reflectance signature. By adapting processing techniques specifically for turbid waters, we enhance the robustness of bloom detection and spectral characterization in complex environments influenced by river plumes. The advantages of hyperspectral over multispectral data are also analysed based on the precursor satellite missions PRISMA and EnMAP, in preparation for further exploitation of operational hyperspectral missions (ESA CHIME, NASA SBG).
We discuss the possibility of using additional parameters (surface temperature, currents, etc.) from other satellite missions to complement Sentinel-2 observations and better characterize the environmental forcings driving the spatio-temporal variability of super blooms. Of particular interest is the inclusion of the future high-resolution thermal TRISHNA mission, to be launched in 2025, which will provide quasi-daily acquisitions over the study area. This multi-sensor approach is expected to provide a deeper understanding of the relationships between phytoplankton blooms and environmental factors such as river inputs, water temperature, and ocean surface dynamics. This capability is crucial for implementing timely monitoring and response strategies aimed at mitigating HAB impacts. References: Gernez, P., Zoffoli, M.L., Lacour, T., Fariñas, T.H., Navarro, G., Caballero, I. and Harmel, T., 2023. The many shades of red tides: Sentinel-2 optical types of highly-concentrated harmful algal blooms. Remote Sensing of Environment, 287, p.113486.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Session: B.03.06 Climate, Environment, and Human Health - PART 1

It is well known that many communicable and non-communicable diseases have a seasonal component. For example, flu and the common cold tend to increase in autumn and winter, whilst vector-borne diseases like Dengue and West Nile Virus tend to peak in late summer when the vectors are at their most abundant. Under monsoon regimes, many diseases peak during the rainy season. Hay fever, spring-time allergies and other respiratory disorders also have seasonality related to the abundance of pollens and other allergens in the air. Environmental conditions in water, air and land have a role in regulating the variability in the presence and abundance of pathogenic organisms or material in the environment, as well as of the agents of disease transmission like mosquitoes or birds. For example, air temperature and relative humidity are linked to flu outbreaks. Water quality in coastal and inland water bodies impacts outbreaks of many water-borne diseases, such as cholera and other diarrheal diseases, associated with pathogenic bacteria that occur in water. This seasonality has inter-annual variabilities superimposed on it that are difficult to predict. Furthermore, in the event of natural disasters such as floods or droughts, there are often dramatic increases in environmentally linked diseases, related to the breakdown of infrastructure and sanitation conditions.

Climate change has exacerbated issues related to human health, with shifting patterns in environmental conditions, changes in the frequency and magnitude of extreme events, such as marine heat waves and flooding, and impacts on water quality. Such changes have also led to geographic shifts of vector-borne diseases, as vectors move into areas that become more suitable for them as they warm, or retreat from areas that become too hot in the summer. The length of the seasons during which diseases may occur can also change as winters become shorter. There are growing reports of the incidence of tropical diseases at higher latitudes as environmental conditions become favourable for the survival and growth of pathogenic organisms.

Climate science has long recognised the need for monitoring Essential Climate Variables (ECVs) in a consistent and sustained manner at the global scale and with high spatial and temporal resolution. Earth observation via satellites has an important role to play in creating long-term time series of satellite-based ECVs over land, ocean, atmosphere and the cryosphere, as demonstrated, for example, through the Climate Change Initiative of the European Space Agency. However, the applications of satellite data for investigating shifting patterns in environmentally-related diseases remain under-exploited. This session is open to contributions on all aspects of investigation into the links between climate and human health, including but not limited to, trends in changing patterns of disease outbreaks associated with climate change; use of artificial intelligence and big data to understand disease outbreaks and spreading; integration of satellite data with epidemiological data to understand disease patterns and outbreaks; and models for predicting and mapping health risks.

This session will also address critical research gaps in the use of Earth Observation (EO) data to study health impacts, recognizing the importance of integrating diverse data sources, ensuring equitable representation of various populations, expanding geographic scope, improving air pollution monitoring, and understanding gaps in healthcare delivery. By addressing these gaps, we aim to enhance the utility of EO data in promoting health equity and improving health outcomes globally.

The United Nations (UN) defines climate change as the long-term shift in average temperatures and weather patterns caused by natural and anthropogenic processes. Since the 1800s, human emissions and activities have been the main causes of climate change, mainly due to the release of carbon dioxide and other greenhouse gases into the atmosphere. The United Nations Framework Convention on Climate Change (UNFCCC) is leading international efforts to combat climate change and limit global warming to well below 2 degrees Celsius above pre-industrial levels (1850–1900), as set out in the Paris Agreement. To achieve this objective and to make decisions on climate change mitigation and adaptation, the UNFCCC requires systematic observations of the climate system.

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to provide an objective source of scientific information about climate change. The Synthesis Report, the final part of the IPCC's sixth Assessment Report (AR6), released in early 2023, stated that human activities have unequivocally caused global warming, with global surface temperature reaching 1.1°C above pre-industrial levels in 2011–2020. Additionally, AR6 described EO satellite measurement techniques as relevant Earth system observation sources for climate assessments, since they now provide long time series of climate records. Monitoring climate from space is a powerful role for EO satellites, since they collect global, time-series information on important climate components. Essential Climate Variables (ECVs) are key parameters that characterize the state of the Earth's climate. The measurement of ECVs provides empirical evidence of the evolution of the climate; therefore, ECVs can be used to guide mitigation and adaptation measures, to assess risks and to enable attribution of climate events to underlying causes.

An example of an immediate and direct impact of climate change is human exposure to high outdoor temperatures, which is associated with morbidity and an increased risk of premature death. The World Health Organization (WHO) reports that between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from malnutrition, malaria, diarrhoea and heat stress alone. WHO data also show that almost all of the global population (99%) breathe air that exceeds WHO guideline limits. Air quality is closely linked to the Earth's climate and ecosystems globally; therefore, if no adaptation occurs, climate change and air pollution combined will exacerbate the health burden at an increasing rate in the coming decades.
Therefore, this LPS25 session will include presentations that demonstrate how EO satellite insights can support current climate actions and guide the design of climate adaptation and mitigation policies to protect and ensure the health of people, animals, and ecosystems on Earth (e.g., WHO's One Health approach).
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: Supporting Urban Heat Adaptation with Earth Observation

#zarr #stac

Authors: Daro Krummrich, Adrian Fessel, Malte Rethwisch
Affiliations: OHB Digital Connect
Climate change has ubiquitous effects on the environment and on human life. While the increased frequency of extreme weather events and droughts has immediate and drastic consequences, the direct effect of rising ambient temperatures on humans is more subtle and affects demographics unequally. The direct influence of rising temperatures is most significant in cities and in environments that are heavily shaped by humans, partly because of a lack of awareness of climate change and partly because planning and redesign processes did not consider those changes. A typical phenomenon in urban environments is the urban heat island, which manifests as a microclimate affecting the surface and the atmosphere above the urban space. Urban heat islands are indicated by average temperatures and thermal behavior that significantly exceed those of the surrounding rural areas, and can be attributed in part to the ubiquitous presence of artificial surface types suppressing natural soil function and the regulatory functions of water bodies and vegetation, and to an altered radiation budget. Further, the atmospheric modifications brought about by urban heat islands affect air quality and may even influence local weather patterns, such as rainfall. Mitigation of urban heat islands can in principle be achieved by altering urban planning to integrate more green spaces and water surfaces and to avoid certain man-made surface types. However, despite the intensity with which heat islands affect human life, redesign of existing urban environments is rarely a practical solution. Nevertheless, the need to act has been recognized by administrators, leading to novel regulations foreseeing, for instance, the implementation of heat action plans, which contain immediate measures during heat waves and guidelines for more sustainable future planning.
In this presentation, we highlight the status and results of two complementary initiatives devised to support urban heat adaptation. First, we present the “Urban Heat Trend Monitor”, a GTIF capability striving to ease the integration of satellite Earth observation data into adaptation strategies. Recognizing that spaceborne Earth observation cannot deliver thermal infrared data at spatial resolutions appropriate for urban spaces, we introduce the thermal infrared sensor RAVEN as the second focus. RAVEN is a custom SWaP-C-sensitive multiband sensor for airborne Land Surface Temperature retrieval in urban environments, which can help fill the gaps where spaceborne sensors struggle. In line with digitalization efforts across virtually all sectors, the efficiency and efficacy of adaptation measures can be supported via the provision of accessible and actionable information from spaceborne Earth observation, also in conjunction with information from local sources including, for instance, demographic data or airborne acquisitions. This is one objective of ESA's GTIF (Green Transition Information Factories), which drives the cloud integration, standardization, and commercialization of a diverse set of capabilities targeted at green transition venues, including the domain of sustainable cities. Focusing on efforts within the ongoing “GTIF Kickstarters: Baltic” project, we present the development status of the “Urban Heat Trend Monitor”, a capability which exploits data from ESA's Copernicus Sentinel-2 and Sentinel-3 satellites to provide users with easy-to-interpret maps of urban climate information that can be integrated into administrative processes and facilitate sustainable urban planning. Super-resolution imaging is used to enhance the resolution of the satellite imagery, allowing the analysis of heat islands and temperature fluctuations at the level of individual neighborhoods.
Complementing streamlined access to raster data, the focus of the heat trend monitor is to enable users to extract, analyze and compare time series data for purposes such as the comparison of regions, the identification of problematic trends or the analysis of landcover changes. As an alternative to trend extraction in user-defined regions of interest or administrative boundaries, we propose a spatial partitioning method based on a superpixel approach to identify meaningful regions based on thermally homogeneous behavior. We approach time series analysis and trend identification using Generalized Additive Models, a data-driven approach balancing predictive power and explainability. GTIF capabilities are developed in close cooperation with stakeholders (in the case of the Urban Heat Trend Monitor, from the Baltic region) to meet their needs, and build on a technology stack aimed at interoperability and reusability. To this end, we adhere to standards including openEO and STAC, and cloud-optimized storage formats like Zarr. Our second focus, RAVEN (“Remote Airborne Temperature and Emissivity Sensor”), was devised as an efficient solution to enable Land Surface Temperature retrieval at a scale appropriate for urban environments (resolution at typical operating altitudes: 0.5-4 m). RAVEN employs a multi-band sensing and retrieval scheme typically reserved for spaceborne sensors and airborne demonstrator instruments, yet implemented with relatively low-cost COTS hardware, enabling future use on unmanned airborne platforms. We report on the conceptualization and implementation of the sensor, including geometric and radiometric calibration efforts, as well as on results from a 2024 airborne campaign conducted in Valencia within the scope of the Horizon 2020 project CityCLIM, and elaborate on their relevance for urban adaptation.
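The trend-versus-season separation that the Generalized Additive Models perform can be illustrated, in much simplified form, by a harmonic regression: fitting a linear trend plus one annual harmonic by least squares. This is only a stand-in sketch (the actual capability uses penalized splines; a single harmonic with a 12-step period is an assumption for monthly data):

```python
import numpy as np

def trend_plus_season(t, y, period=12):
    """Fit y ~ intercept + linear trend + one seasonal harmonic by
    ordinary least squares.

    A deliberately simplified stand-in for a GAM decomposition of an
    urban LST time series: it separates a long-term warming trend from
    the seasonal cycle, but without the penalized-spline flexibility
    of a real GAM. Returns [intercept, slope, sin_amp, cos_amp].
    """
    t = np.asarray(t, dtype=float)
    X = np.column_stack([
        np.ones_like(t),                 # intercept
        t,                               # long-term trend
        np.sin(2 * np.pi * t / period),  # seasonal harmonic (sine)
        np.cos(2 * np.pi * t / period),  # seasonal harmonic (cosine)
    ])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef
```

The fitted slope is the quantity a user of the monitor would compare across regions or superpixels to spot problematic warming trends.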
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: BRIDGING GAP IN AIR POLLUTION HEALTH RISK ASSESSMENT: INTEGRATING EARTH OBSERVATION, MOBILITY DATA AND SPATIAL ANALYSIS FOR AIR POLLUTION HEALTH RISK ANALYSES

Authors: Lorenza Gilardi, Thilo Erbertseder, Dr. Frank Baier, Prof. Dr. Heiko Paeth, Prof. Dr. Tobias Ullmann, Prof. Dr. Hannes Taubenböck
Affiliations: German Aerospace Center, University of Würzburg
When performing a health risk assessment related to air pollution by means of Earth Observation (EO) data, two of the main challenges are: (1) accurately assessing population exposure and (2) the inconsistent spatial and temporal coverage of air pollution data, especially in remote areas. Health risk is determined by the interaction of three components: hazard, exposure, and vulnerability. When investigating the health risk from outdoor air pollution, exposure at the individual level is often challenging to quantify. Therefore, epidemiological studies are typically conducted using static residential data, neglecting the dynamics of human mobility and their impact on exposure. In recent years, the popularity of epidemiological studies exploiting an ecological approach has risen due to the increasing availability of publicly available geospatial data. This approach brings several advantages, such as easy scalability and the possibility of addressing cumulative exposure to multiple stressors. To support this approach, a case study was conducted to evaluate the long-term population exposure to PM2.5, NO2, and O3 in two European regions, Lombardy (Italy) and Germany, covering the period from 2013 to 2022, using the Copernicus Atmosphere Monitoring Service (CAMS) European Air Quality Reanalysis data. These datasets provide consistent and reliable estimates of pollutant concentrations, enabling a detailed evaluation of exposure considering both a static (residential) and a dynamic (commuting habits included) population. The analysis integrated commuting data from national statistical institutes as well as the remote-sensing-derived global settlement mask, the World Settlement Footprint. The results highlight significant disparities between exposure estimates for a static and a dynamic population, emphasizing the importance of accounting for mobility in health risk assessments.
Furthermore, the study demonstrates widespread exceedances of the World Health Organization's updated air quality guidelines, particularly for PM2.5, and underscores the spatial variability in exposure levels. To further investigate these variations, the study proposes the use of spatial analysis techniques, particularly the Local Indicators of Spatial Association (LISA), to study the temporal evolution of air pollution hotspots and cold spots. By applying this method, the research aims to identify spatial clusters of pollutants such as particulate matter (PM2.5), nitrogen dioxide (NO2), and ozone (O3), as well as to produce multi-hazard maps of areas where hotspots of multiple pollutants converge. These analyses provide critical insights into regions with heightened health risks and inform strategies for mitigating exposure. As an exploratory approach, the LISA spatial analysis is extended to additional areas worldwide, exploring NO2 column data from satellite missions such as the Ozone Monitoring Instrument (OMI) and the Sentinel-5P TROPOspheric Monitoring Instrument (TROPOMI). These provide consistent, worldwide air quality data, offering opportunities to overcome the limitations of traditional ground-based monitoring networks.
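The LISA technique named above is based on the local Moran's I statistic. A minimal numpy sketch on a gridded pollutant field, assuming rook (4-neighbour) contiguity and row-standardized weights (an operational analysis would add permutation-based significance testing, e.g. as in PySAL's esda package):

```python
import numpy as np

def local_morans_i(grid):
    """Local Moran's I for each cell of a 2-D field.

    I_i = z_i * (spatial lag of z at i) / m2, where z are deviations
    from the grid mean and m2 is their mean square. Large positive
    values flag high-high (hotspot) or low-low (cold spot) clusters;
    negative values flag spatial outliers.
    """
    x = np.asarray(grid, dtype=float)
    z = x - x.mean()
    m2 = (z ** 2).mean()
    # sum of rook neighbours, zero-padded at the edges
    zp = np.pad(z, 1)
    nsum = zp[:-2, 1:-1] + zp[2:, 1:-1] + zp[1:-1, :-2] + zp[1:-1, 2:]
    # count of in-grid neighbours per cell, for row standardization
    op = np.pad(np.ones_like(z), 1)
    ncnt = op[:-2, 1:-1] + op[2:, 1:-1] + op[1:-1, :-2] + op[1:-1, 2:]
    lag = nsum / ncnt
    return z * lag / m2
```

Applied year by year to, say, gridded NO2 columns, the sign and magnitude of the local statistic trace how hotspot and cold-spot clusters evolve over time.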
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: Advanced ecosystem restoration: Blending phytoremediation with satellite-based and non-imaging-based remote sensing in the Himalayas of PIN Valley National Park, India

Authors: Abhinav Galodha, Deepika Sharma, Dr. Jagdeep Verma
Affiliations: School of Interdisciplinary Research (SIRe), Indian Institute of Technology, IIT Delhi, Department of Botany, Himachal Pradesh University (HPU), Department of Botany, Sardar Patel University
Heavy metal pollution presents a formidable challenge to global ecosystems, threatening biodiversity, soil and water quality, and human health. In regions with ecological sensitivity or limited access, traditional remediation techniques often fall short due to their resource-intensive nature and potential environmental disturbance. In response, phytoremediation emerges as an innovative and sustainable solution. Advanced remote sensing techniques, spanning proximal, airborne, and spaceborne data collection, enhance the prediction accuracy of contamination levels by correlating spectral reflectance data with metal concentrations. Proximal sensing, involving laboratory and field-based spectroradiometers combined with drone and satellite insights, permits exhaustive coverage and detail, crucial for monitoring shifts in land use and surface cover. Despite challenges such as spectral complexity and atmospheric variability, spectral data delineates metal-induced stress markers in vegetation, underscoring phytoremediation's potential. Against this background, this study explores the viability of phytoremediation as an environmentally friendly alternative strategy, with a focus on Pin Valley National Park in Himachal Pradesh, India. Here, we target plant species that naturally accumulate heavy metals, effectively detoxifying the environment. To enhance the accuracy and efficiency of contamination assessment, we employed cutting-edge remote sensing technologies, integrating proximal, airborne, and spaceborne data collection systems. Proximal sensing was conducted using a spectroradiometer that provided high-resolution spectral data directly from the field. This was supplemented by data from drones, which offered flexibility in covering large and varied terrains, and from satellites such as Landsat-8, Landsat-9, and Sentinel-2, which offered extensive temporal and spatial coverage.
These tools enabled comprehensive monitoring of changes in land use and vegetation cover over an extended period from 2010 to 2023. The analysis used several indices to determine plant health and the extent of environmental degradation, including the Normalized Difference Vegetation Index (NDVI), Normalized Difference Red Edge (NDRE), and Soil-Adjusted Vegetation Index (SAVI). These indices were instrumental in evaluating vegetation vigour and health. Additionally, the Heavy Metal Index, Iron Oxide Index, and Hydrothermal Index were applied to directly measure contamination levels. Our results showed a significant correlation between heavy metal concentration and stress markers in vegetation. For instance, areas with high NDVI often coincided with low heavy metal presence, indicating healthier vegetation capable of successful metal uptake for phytoremediation. The species identified as most effective in this environment included Indian mustard (Brassica juncea) and hemp (Cannabis sativa), which demonstrated remarkable capacities to absorb lead (Pb) and cadmium (Cd), two of the most problematic contaminants in the area. Specifically, Brassica juncea achieved a biomass lead accumulation of up to 2,500 mg/kg, while Cannabis sativa showed cadmium uptake levels reaching 900 mg/kg, suggesting their efficacy for targeted phytoremediation practices. Small traces of heavy metals such as Yttrium (3-11 ppb), Strontium (20-32 ppb), Rubidium (0.050-0.155 ppb), and Cadmium (0.045-0.170 ppb) could be retrieved from the identified site locations. The application of remote sensing technology ensured precise mapping of these metal concentrations and plant health, thereby optimizing phytoremediation efforts. In addition to supporting immediate remediation efforts, the integration of remote sensing technology provided valuable longitudinal data, revealing trends in environmental recovery. 
Over the observed period, reclaimed lands showed a gradual increase in NDVI values, from an average of 0.35 to 0.65, indicating significant improvement in vegetation cover and health. These positive trends were corroborated by reductions in Heavy Metal Index values, confirming a decrease in soil and water contamination levels to safer thresholds over the decade-long study. Moreover, the study's findings underscore the critical role of remote sensing in ongoing environmental monitoring. By processing and analyzing collected data, we were able to identify contamination hotspots rapidly, optimize plant selection and placement, and ensure efficient resource allocation. The proximal sensing data aligned closely with drone and satellite data, ensuring consistent and reliable results across the various scales and technologies applied. In conclusion, this research affirms the feasibility and effectiveness of combining phytoremediation with remote sensing technologies to manage and mitigate heavy metal contamination. Our approach offers a scalable framework for environmental monitoring, capable of being adapted to various ecological contexts and contaminant profiles. The successful application of this methodology in Pin Valley National Park illustrates its potential as a practical tool for supporting the restoration of similar polluted regions worldwide, ultimately promoting ecological resilience and sustainability. Continued refinement and adaptation of these technologies hold the promise of enhancing global efforts to combat heavy metal pollution and support sustainable land management practices. This study not only contributes to academic knowledge but also offers actionable insights for policymakers and environmental managers committed to preserving natural ecosystems.
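The vegetation indices named above (NDVI, NDRE, SAVI) follow standard definitions; a minimal sketch of how they could be computed from band reflectances is shown below. The band values are illustrative, not from the study:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized Difference Red Edge index."""
    return (nir - red_edge) / (nir + red_edge)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil-brightness factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)

# Toy reflectances for a healthy vs a metal-stressed pixel
healthy = ndvi(np.array([0.45]), np.array([0.05]))
stressed = ndvi(np.array([0.30]), np.array([0.15]))
```

A stressed canopy reflects more red and less near-infrared light, so its NDVI drops, which is the stress signal the abstract exploits.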
Keywords: Environmental Monitoring, Metal Contamination, Phytoremediation, Pin Valley NP, Hyperspectral, Normalized Difference Red Edge Index (NDRE), Normalized Difference Vegetation Index (NDVI), Soil-adjusted Vegetation Index (SAVI), Strontium, Rubidium, Yttrium.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: AeDES2.0 - An enhanced climate-and-health service for monitoring and forecasting environmental suitability of Aedes-borne disease transmission

Authors: Javier Corvillo, Dr. Ángel Muñoz, Dr. Verónica Torralba, Dr. Alba Llabrés-Brustenga, Dr. Ana Rivière Cinnamond
Affiliations: Barcelona Supercomputing Center, Pan American Health Organization
Aedes-borne diseases, such as dengue, Zika and Chikungunya, pose a grave threat to millions of people worldwide each year. Given potential compounding effects with other important diseases, such as COVID-19, it has become imperative for health authorities to maintain a detailed surveillance of key environmental variables that can trigger epidemic episodes. While disease transmission is generally conditioned by multiple socioeconomic factors, the environmental suitability for vectors and viruses to proliferate is a necessary, although not sufficient, condition that needs to be closely monitored and forecasted. As such, a comprehensive service that allows stakeholders to analyze and visualize environmental suitability in affected hotspots is crucial for communities to better prepare for present and future outbreaks. The newest version of the Aedes-borne Diseases Environmental Suitability (AeDES2) climate-and-health service is a next-generation, fully-operational monitoring system that reproduces and improves upon the previous version (Muñoz et al., 2020), broadening both the temporal and spatial scope while simultaneously enhancing both observational and forecasting quality. With AeDES2, users can consult the historical evolution of the environmental suitability values on any grid point of interest, as well as the expected future evolution up to three seasons in advance. Aside from the environmental suitability values, health authorities can additionally analyze the estimated incidence or the percentage of population at risk relative to a threshold, a key indicator for governing bodies to trigger the implementation of control measures to reduce the spread of the disease in an affected population. AeDES2 incorporates four state-of-the-art environmental suitability models, considering both epidemiological factors for transmission probability and climate variables such as temperature.
On the monitoring side, AeDES2 provides a continuously updated monthly historical sequence of environmental suitability values by generating an ensemble from multiple observational references, hence providing uncertainty estimates in the monitoring system, an improvement over the previous version. On the prediction side, still under development, AeDES2 builds on its predecessor’s pattern-based multi-model calibration approach, incorporating new machine learning calibration methods, such as neural networks, that aim to reliably reproduce the key non-linear patterns used as predictors in the cross-validated forecast system.
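The ensemble-over-observational-references idea described above can be illustrated with a toy computation; the reference names and suitability values below are invented for illustration and are not AeDES2 data:

```python
import numpy as np

# Hypothetical monthly suitability values (0-1) for one grid point,
# one row per observational reference dataset (names are illustrative).
references = {
    "ref_A": [0.62, 0.70, 0.81, 0.77],
    "ref_B": [0.58, 0.66, 0.85, 0.74],
    "ref_C": [0.65, 0.72, 0.79, 0.80],
}
stack = np.array(list(references.values()))  # shape: (references, months)

ensemble_mean = stack.mean(axis=0)  # best estimate per month
ensemble_std = stack.std(axis=0)    # spread across references = uncertainty
```

The spread across references is what turns a single monitoring sequence into one with an attached uncertainty estimate.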
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: Inland Cholera Seasonality, North India: Role of Climate and Environmental Factors

Authors: Dr Neelam Taneja, Dr Arti Mishra, Dr Nisha Singh
Affiliations: PGIMER
One-third of India’s population lives under the threat of cholera. The region around Chandigarh is a hot spot for cholera and has experienced a resurgence from 2002 onwards. In the freshwater environment of north India, cholera appears seasonally in the form of clusters as well as sporadically, accounting for a significant piece of the puzzle of cholera epidemiology. Cholera cases occur during hot and humid months, peaking with the monsoon. This region does not exhibit the bi-annual cycle (pre- and post-monsoon) of coastal cholera due to distinct climatic factors and experiences a single peak only during the monsoon months. The ecology of Vibrio cholerae in the freshwater aquatic environs is poorly understood. We conducted environmental and ecological surveillance in our region to understand the seasonality of cholera. The influence of environmental parameters, including abiotic factors (temperature, salinity, pH, rainfall) and biotic factors (phytoplankton and zooplankton), on the prevalence and isolation of V. cholerae was measured. The northern part of India has a dense network of many major rivers and several freshwater lakes. This region receives heavy rainfall during the monsoon months (July–October) and exhibits high temperatures >30 °C during the summer and rainy season (April–October). The winter months (November–March) exhibit temperatures below 20 °C and little rainfall. Clinical cholera cases coincided with elevated rainfall, chlorophyll concentration, and air temperature, whereas isolation of V. cholerae non-O1 non-O139 from water was dependent on temperature (p < 0.05) but was independent of rainfall and pH (p > 0.05). However, isolation from plankton samples correlated with increased temperature and pH (p < 0.05). A lag period of almost a month was observed between rising temperature and increased isolation of V. cholerae from the environment, which in succession was followed by the appearance of cholera cases in the community a month later.
All the abiotic and biotic factors vary in this region with season, except salinity, which was almost constant throughout the year. The isolation of V. cholerae non-O1 non-O139 varied across seasons, with a peak during summer (69%) and the monsoon (46.5%), and was minimal in winter (15.5%, p < 0.05). It was during this peak that V. cholerae O1 could be isolated from sewage and drinking water samples. With the onset of rainfall, the chances of a breach in sewage and contamination of drinking water supplies increase. On multivariate regression analysis, rainfall was found to be an independent predictor of cholera outbreaks, whereas elevated temperature had a significant effect only when combined with rainfall. Chlorophyll also exhibited a significant correlation with the occurrence of cholera outbreaks. The pH of water significantly increased with plankton blooms, although the actual changes were small. Nevertheless, there was a significant correlation between the plankton bloom, which governs pH changes, and an increase in the isolation percentage of V. cholerae from plankton. We conclude that environmental parameters play a significant role in the emergence and spread of cholera and the abundance of V. cholerae in the environment. However, more detailed analyses of other climatic factors and genomic analysis are needed to understand the links between environmental Vibrio cholerae and clinical cholera.
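The roughly one-month lag between rising temperature and increased V. cholerae isolation reported above is the kind of relationship a lagged correlation analysis exposes; a minimal sketch with synthetic monthly series (not the study's data) follows:

```python
import numpy as np

def lagged_corr(x, y, max_lag):
    """Pearson correlation of y against x shifted by 0..max_lag steps.

    A peak at lag k suggests y follows x by k time steps.
    """
    out = {}
    for k in range(max_lag + 1):
        if k == 0:
            a, b = x, y
        else:
            a, b = x[:-k], y[k:]
        out[k] = np.corrcoef(a, b)[0, 1]
    return out

# Synthetic monthly series in which isolation tracks temperature one
# month later (the shift is constructed, purely for illustration).
temp = np.array([18, 22, 27, 31, 33, 30, 26, 21], dtype=float)
isolation = np.roll(temp, 1)
corrs = lagged_corr(temp, isolation, max_lag=2)
```

Because the synthetic isolation series is an exact one-month shift of temperature, the correlation peaks at lag 1 rather than lag 0, mirroring the lag structure the surveillance data revealed.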
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: EO4Health Resilience: Leveraging Earth Observation for Public Health Preparedness

Authors: Filipe Brandao, Joao Vitorino, Annamaria Conte, Carla Ippoliti, Luca Candeloro, Shubha Sathyendranath, Dhriti Sengupta, Gunnar Brandt, Tejas Morbagal Harish, Marcello Maranesi, Rafaelle Scarano, William Wint, Simone Calderara, Marco Marchetti, Stefano Ferretti
Affiliations: GMV, Istituto Zooprofilattico Sperimentale dell'Abruzzo e del Molise (IZSAM), Plymouth Marine Laboratory, Brockmann Consult, GMATICS, Environmental Research Group Oxford, UNIMORE, Albertitalia Foundation, European Space Agency
The EO4Health Resilience project, funded by the European Space Agency (ESA), aims to evaluate the suitability of Earth Observation (EO) imagery in supporting public health decision-making, scenario analysis, and impact and risk assessments. By addressing scientific gaps and challenges while aligning with the needs of key stakeholders, the project seeks to conceptualize a practical, long-term initiative that integrates EO technology into health resilience strategies. Building on extensive knowledge from previous ESA activities, EO4Health Resilience focuses on developing and implementing value-added services that use EO data and Artificial Intelligence to identify patterns for accurately predicting the spatio-temporal dynamics of vector-borne and water-borne diseases. The project addresses two thematic domains: Vector-Borne Diseases (VBD) and Water-Borne Diseases (WBD). The VBD services center on implementing a model originally developed for Italy, which assesses the probability of West Nile Virus circulation under suitable conditions. In EO4Health Resilience, this model has been expanded to cover a broader Area of Interest, encompassing significant parts of North Africa and Europe. Its outputs are validated using ground truth data from official sources. Moreover, the consortium, in collaboration with ESA, is engaging stakeholders such as the UN Food and Agriculture Organization (FAO) to synergize with existing initiatives like the Rift Valley Fever Early Warning Decision Support Tool (RVF DST). In the WBD domain, services focus on environmental risks associated with cholera, Escherichia coli, flooding in Vembanad Lake, and non-cholera Vibrio infections in the Baltic Sea region. These services enhance existing models with geospatial data, offering valuable insights often underutilized in public health risk assessments. The project also explores novel applications of very high-resolution imagery for both VBD and WBD themes.
Preliminary results demonstrate the potential for spatially detailed insights, strengthening the links between environmental conditions and disease emergence. These advancements have been made possible through ESA’s support, including access to commercial satellite imagery that would otherwise be unattainable. A group of relevant advisors is following all activities to ensure the scientific and technical advancements are aligned with best practices. This group provides expertise across the domains of disease knowledge and advanced EO data processing, helping to tailor the project’s outputs to more effectively address the challenges of public health. On the engineering front, EO4Health Resilience is advancing public health capabilities through the "Resilience & Earth Observation Virtual Observatory." This web-based platform serves as a centralized hub for project activities. It provides access to EO and health-related data, supports the full implementation of VBD and WBD services, and integrates additional tools for analyzing patterns associated with emerging diseases. Designed to be user-friendly, the observatory enables even non-specialists in EO and geographic data to access and utilize critical information, promoting the wider uptake of project outputs. Through innovative EO applications, strategic partnerships, and user-centric tools, EO4Health Resilience is paving the way for a more resilient public health system equipped to anticipate and mitigate disease risks globally.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Session: A.02.05 Peatland

Peatlands cover only 3% of the world’s land, mainly in the boreal and tropical zones, but they store nearly 30% of terrestrial carbon and twice the carbon stored in forests. When drained and damaged they exacerbate climate change, emitting 2 Gt of CO2 every year, which accounts for almost 6% of all global greenhouse gas emissions. The unprecedented observations collected by the Copernicus Sentinel family and other sensors allow new ways to monitor and manage peatlands. Emphasis will be put on advances in improved mapping and monitoring of intact, degraded and cultivated peatlands for conservation, management and restoration in a global and a specific climate zone (e.g. boreal, temperate, tropical) context. This session will showcase some of the most recent key achievements including methods/algorithms, science and applications.

Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Monitoring Tropical Peatland Hydrology With Spaceborne L-band SAR

Authors: Antje Uhde, Dr. Laura Hess, Dr. Alison Hoyt, Prof. Christiane Schmullius, Dr. Euridice N. Honorio Coronado, Edmundo Mendoza, Dr. Gerardo Flores Llampazo, Prof. Timothy Baker, Prof. Susan Trumbore, Dr. Scott Winton
Affiliations: Department of Biogeochemical Processes, Max Planck Institute For Biogeochemistry, Department of Geography, Friedrich Schiller University, Earth Research Institute, University of California Santa Barbara, Department of Earth System Science, Stanford University, Royal Botanic Gardens, Kew, Department of Environmental Studies, University of California Santa, Instituto de Investigaciones de la Amazonía Peruana, School of Geography, University of Leeds
Globally, peatlands store large amounts of carbon (C), but the fate of said C is highly uncertain. While many studies focus on high latitude peatlands, recent work shows that tropical peatlands store large amounts of C that can be vulnerable to rapid loss if hydrological conditions change. In tropical peatlands, water table levels drive greenhouse gas (GHG) emissions. During low water table conditions, C is emitted to the atmosphere through oxidation, while under high water and anaerobic conditions, methane (CH4) is produced. Knowledge of water table dynamics provides important information on the processes regulating tropical peatland GHG dynamics, necessary to assess the impact of global warming and climate extremes on these ecosystems. However, very few field observations are available on water table levels in tropical lowland peatlands. In this study, we used time-series data of in-situ water table dynamics for 12 sites in the Pastaza-Marañón Foreland Basin in Peru (2018 – 2021) and 9 sites in the eastern lowland of Colombia (2023 – 2024). These were combined with information on ecosystem structure to model changes in tropical peatland above-ground water tables using L-band HH backscatter. We processed each PALSAR-2 orbit separately to account for varying incidence angles, which increases the total number of water level vs. backscatter time-series to 41. Using a k-means clustering analysis we found two ecosystem types where water table changes correlate linearly with changes in PALSAR-2 ScanSAR L-band HH backscatter. The first cluster (1) consists of short-statured forest with a GEDI L2A relative height 95 (rh95) of 6.5 m – 12 m, a multi-temporal standard deviation of L-band HV backscatter > 0.42, combined with a multi-temporal mean NDVI < 0.82 (6 sites with a total of 10 time-series). The second cluster (2) is characterized by a GEDI rh95 of 21 m – 28 m, and a GEDI L2B foliage height diversity (fhd) index > 3 (7 sites with a total of 9 time-series).
For sites outside these criteria, we observed logarithmic, exponential or no correlation with PALSAR-2 HH backscatter. We next used a multiple linear regression model to predict the sensitivity of the L-band HH backscatter to changes in above-ground water table (i.e. the slope of the linear regression). In our preliminary model, the highest correlation coefficients were obtained for the GEDI L2B variables (total canopy cover, foliage height diversity) and the L-band HV multi-temporal mean backscatter and standard deviation. The PALSAR-2 incidence angle had a larger effect on the multiple linear regression for cluster 2 sites than for cluster 1. We used a leave-one-out cross validation and obtained an average mean absolute error of up to 6 cm per dB, meaning we predicted the change in water table per 1 dB increase in L-band HH backscatter with an average accuracy of ±6 cm. With this model we can monitor changes in tropical peatland above-ground water table dynamics using freely available Earth observation data. In addition to spatial extrapolation beyond our measurement sites, we also want to test temporal extrapolation by applying a temporal cross-validation for sites with multiple years of water table data. This would allow us to assess the influence of past and future El Niño-Southern Oscillation (ENSO) extreme events, as well as global warming, on tropical peatland water table dynamics. Knowledge of seasonal water table dynamics, and changes therein, enables us to draw conclusions on changes in the GHG balance of tropical peatland ecosystems.
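The leave-one-out cross-validation used above can be sketched as follows; the single predictor, site values, and units are illustrative stand-ins, not the study's GEDI/PALSAR-2 variables:

```python
import numpy as np

def loo_mae(X, y):
    """Leave-one-out MAE for an ordinary least-squares linear model.

    Each site is held out in turn; a model fitted on the remaining
    sites predicts its value, and the absolute errors are averaged.
    """
    n = len(y)
    errors = []
    for i in range(n):
        mask = np.arange(n) != i
        # OLS fit with intercept via lstsq on the training fold
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = coef[0] + coef[1] * X[i]
        errors.append(abs(pred - y[i]))
    return float(np.mean(errors))

# Toy data: sensitivity of water level to HH backscatter (cm per dB)
# against one structural predictor (e.g. canopy cover); values invented.
canopy = np.array([0.3, 0.5, 0.6, 0.7, 0.8, 0.9])
slope_cm_per_db = np.array([4.0, 6.1, 7.2, 7.9, 9.1, 10.2])
mae = loo_mae(canopy, slope_cm_per_db)
```

Because every site is predicted by a model that never saw it, the resulting MAE is an honest estimate of how well the regression would transfer to new, unmeasured sites.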
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Multi-temporal Mapping of Peatland Species Abundance and Condition After Rewetting

Authors: Christina Hellmann, Dr. Bernd Bobertz, Enna Drege, Duc-Viet Nguyen, Dr Vu-Dong Pham, Malin Stephan, Ariane Tepaß, Dr. Marcel Schwieder, Dr Sebastian van der Linden
Affiliations: Institute of Geography and Geology, University of Greifswald, Partner in the Greifswald Mire Centre, Friedrich-Ludwig-Jahn-Str. 16, Thünen Institute of Farm Economics, Bundesallee 63
Peatlands are huge carbon sinks but, due to anthropogenic impact, also massive sources of greenhouse gas emissions. Peatlands around the world have been drained, e.g., for agriculture in temperate Europe. Although drained peatlands cover only 0.5% of the global land surface, they cause 4% of total GHG emissions. In the north-eastern German state of Mecklenburg-Western Pomerania their share amounts to almost 40% of the total emissions. To stop these emissions, peatlands need to be rewetted, but rewetting measures are not always successful. Therefore, monitoring concepts are required. The occurrence of typical peatland vegetation gives an indication of abiotic environmental factors, especially hydrological conditions or nutrient supply. Further, phenological stages can differ within species at a given time, indicating, e.g., stress induced by low water levels or shifted phenological development following differences in micro-climate. Hyperspectral satellite missions, such as EnMAP, offer a good opportunity for monitoring rewetted peatlands. EnMAP imagery offers high spectral resolution for relatively large areas and multiple dates per year. The high spectral resolution and range promise a high information content regarding vegetation composition and state. We quantified the abundance and condition of typical peatland genera or species for drained and rewetted peatlands in the Peene and Trebel Valley in Mecklenburg-Western Pomerania. Multi-temporal hyperspectral EnMAP imagery from two years (June and August 2023, May and September 2024) were used in a two-step approach, both steps based on regression-based unmixing with synthetic training data. (1) We derived the abundance of Phragmites australis, Phalaris arundinacea, Typha spp., Carex spp., and other wetland vegetation, including, e.g., Juncus effusus, Glyceria maxima, Iris spp., and Agrostis stolonifera, within the 30-m pixels for each point in time.
(2) We disentangled the pixel-wise fractions of green vegetation (GV), non-photosynthetic vegetation (NPV) and water for each point in time. By combining the two products we can compare species-wise GV-NPV fractions over time. The abundances of GV and NPV at a given time both give an indication of the phenological stage. These vary between species, but also within species. Phenological differences within species highlight small-scale differences in abiotic environmental factors that may indicate stress from insufficient rewetting or, conversely, successful rewetting. This way, the multi-date hyperspectral data complements research on peatland management with essential information on characteristic vegetation. Our results show that the suggested approach, i.e., the unmixing of multi-temporal hyperspectral satellite data, supports ongoing peatland research. Mapping and monitoring of peatland vegetation and rewetting processes benefit from the increasing availability of multi-temporal hyperspectral satellite data. Upcoming hyperspectral satellite missions such as CHIME, with improved spatial and temporal coverage, will make it possible to extend the approach for monitoring rewetted peatlands to large areas.
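The study's regression-based unmixing is more elaborate (synthetic training data, multiple dates), but the core idea of decomposing a mixed pixel into endmember fractions can be sketched with simple non-negative least squares; the spectra, band count, and class names here are invented:

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative endmember spectra (rows: bands, columns: classes).
# Real EnMAP spectra have ~224 bands; 4 toy bands are used here.
endmembers = np.array([
    # Phragmites, Carex, water  (made-up reflectances)
    [0.05, 0.06, 0.02],
    [0.45, 0.30, 0.03],
    [0.40, 0.28, 0.02],
    [0.20, 0.25, 0.01],
])

# A mixed 30-m pixel: 60 % Phragmites, 30 % Carex, 10 % water
true_frac = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_frac

fractions, _ = nnls(endmembers, pixel)  # non-negative least squares
fractions /= fractions.sum()            # normalise to sum to one
```

With noise-free toy data the true fractions are recovered exactly; real pixels add noise, spectral variability, and within-class mixing, which is what the regression-based approach with synthetic training data addresses.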
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Return to origins: restored peatlands align with intact peatlands in satellite-derived albedo and land surface temperature over time, but not in vegetation properties

Authors: Iuliia Burdun, Mari Myllymäki, Rebekka R.E. Artz, Mélina Guêné-Nanchen, Leonas Jarašius, Ain Kull, Erik A. Lilleskov, Kevin McCullough, Mara Pakalne, Jiabin Pu, Jurate Sendzikaite, Liga Strazdina, Miina Rautiainen
Affiliations: School of Engineering, Aalto University, Natural Resources Institute Finland (Luke), Ecological Sciences, James Hutton Institute, Department of Plant Sciences, Peatland Ecology Research Group (PERG) and Centre for Northern Studies (CEN), Université Laval, Foundation for Peatlands Restoration and Conservation, University of Tartu, Institute of Ecology and Earth Sciences, USDA Forest Service, Northern Research Station, University of Latvia, Botanical Garden, Department of Earth and Environment, Boston University
Restoring degraded peatlands presents a powerful opportunity for climate change mitigation. As a result, global initiatives to restore peatlands have been showing significant growth, especially in northern regions where degradation is most extensive. To ensure the success of these restoration efforts, continuous and comprehensive spatial monitoring is crucial. Remote sensing offers a powerful tool for enabling this type of monitoring, providing consistent, large-scale data across regions. Capturing essential climate variables over time allows us to track restoration progress with precision and continuity. In our work, we aimed to uncover restoration-induced changes in essential climate variables of degraded northern peatlands. We hypothesized that, prior to restoration, degraded peatlands with different initial land cover types display more pronounced differences compared to intact peatlands, but these differences diminish as restoration progresses. Utilizing over two decades of satellite data, we analyzed climate variables to track changes in restored peatlands, evaluating whether they are progressing toward their original, natural conditions in Finland, Estonia, Latvia, Lithuania, the United Kingdom, Canada, and the United States of America. By leveraging a long-term dataset across a wide geographical range of degraded northern peatlands, encompassing four distinct pre-restoration land cover types, we observed significant restoration-driven changes. Overall, we found that restored peatlands tended to resemble intact peatlands more closely after a decade following restoration. Our findings highlighted diverse and complex restoration-induced changes in satellite-derived observations. Restoration impacts were particularly notable in vegetation cover, surface temperature, and albedo, with the latter two showing the strongest indications of peatlands gradually recovering their natural state over time. 
Such changes have the potential to impact local and regional climate dynamics, especially in areas where large-scale restoration efforts are underway. With the increasing number of restored peatlands, particularly in Europe and North America, it becomes essential to incorporate these factors into evaluations of the climatic effects of land-use change.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Large Scale Assessment of Fire Impacts on Siberian Peatlands Carbon Through High-Resolution Datasets

Authors: Thu-Hang Nguyen, Clement J. F. Delcourt, Chunjing Qiu, Sander Veraverbeke, Filipe Aires, Philippe Ciais, Emilio Chuvieco, Amin Khairoun
Affiliations: Universidad de Alcalá, Environmental Remote Sensing Research Group, Department of Geology, Geography and the Environment, Laboratoire des Sciences du Climat et de l’Environnement, UMR 1572 CEA-CNRS-UVSQ, Université Paris-Saclay, Research Center for Global Change and Complex Ecosystems, School of Ecological and Environmental Sciences, East China Normal University, LERMA, CNRS/Observatoire de Paris/Sorbonne University, Faculty of Science, Vrije Universiteit Amsterdam
Peatlands are the world’s largest natural terrestrial carbon sink. Arctic fires represent one of the major agents responsible for carbon release from permafrost. Coarse-resolution Burned Area (BA) and emission datasets revealed that fire had largely affected carbon-rich peatlands in the Siberian arctic region, leading to striking belowground carbon emissions in recent years. However, accurate evaluations of the impacts of these fires on peatland carbon stocks using high-resolution data are lacking. In this work, we present a wall-to-wall assessment of fire impacts over the entire Siberian region, extending over around 9 Mkm2 (extent: 63°E-180°E; 60°N-74°N), for the period 2001-2023 using new high-resolution maps of BA and peatland cover. We analyse peat fire trends over time, their impacts on belowground carbon stocks, and the drivers of spatio-temporal variability. We found that the yearly BA shows large variability, ranging from 0.48 Mha in the year 2015 to 10.58 Mha in 2021, with an average of 4.68 Mha and a coefficient of interannual variation reaching more than 57%. A significant increase was observed in the years 2019-2021 that was mainly linked to extremely anomalous dry summers. Our BA estimates were higher than coarse-resolution BA products (88.92 ± 24.81 and 62.76 ± 16.41% higher than MCD64A1 and FireCCI51, respectively) while the trends were similar. Notably, 2020 emerged as the most striking fire season for peatlands as a result of extensive fires in carbon-rich permafrost above the Arctic Circle. Overall, peat fires accounted for 33.96 ± 2.31% of total BA. Carbon emissions from fires and burn depth were modelled using a variety of predictors including climate, soil, biomass and fire properties. Annual carbon emissions ranged from 6.12 to 158 Mt C, of which 69.25 ± 2.28% were attributed to belowground carbon emissions of burned peatlands, in contrast to the GFED4s and GFED5 fractions, which do not exceed 4.41% and 12.57%, respectively.
A causal inference model revealed that drought and fire weather indicators control 58% of the interannual variability of peat fire occurrence in three distinct zones of Siberia (western, central and eastern). Additionally, the model accounts for 56% and 46% of the conditional variability (50% and 28% marginal variability) in belowground carbon emissions and the fraction of peat fires relative to total BA, respectively. This analysis highlights the critical role of fire in peatland degradation with improved certainty over previous studies.
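The coefficient of interannual variation reported for burned area is a simple statistic; a sketch follows, with invented annual values chosen only so the mean matches the reported 4.68 Mha (the study covers 23 years, not 7):

```python
import numpy as np

# Hypothetical annual burned area in Mha (illustrative, not the
# study's 2001-2023 record).
burned_area = np.array([2.1, 0.48, 3.9, 5.2, 10.58, 4.7, 5.8])

mean_ba = burned_area.mean()
# Coefficient of variation: sample standard deviation over the mean,
# expressed as a percentage.
cv_percent = 100 * burned_area.std(ddof=1) / mean_ba
```

A CV above 50% says the typical year-to-year swing is more than half the long-term mean, which is why single-year snapshots of Siberian fire activity can be so misleading.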
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Earth Observation for Peatlands: An Integrated Framework for Validation of Peatland Properties

Authors: Harika Ankathi, Professor Kevin Tansey, Gerardo Lopez Saldana, Ian Jory, Yara Al Sarrouh, Susan Page, Michel Bechtold, Fred Worrall, Lisa Beccaro, Cristiano Tolomei, Stefano Salvi, Christian Bignami
Affiliations: University Of Leicester, Assimila Ltd, KU Leuven, Durham University, Istituto Nazionale di Geofisica e Vulcanologia
Storing over 600 gigatons of carbon, twice that held in all global forest biomass, peatlands are critical yet increasingly threatened ecosystems that demand urgent, sophisticated monitoring solutions. With significant areas facing degradation, their effective management is hampered by the lack of consistent, high-quality monitoring systems. The ESA WorldPeatland Project addresses this critical challenge by pioneering an integration of Earth Observation (EO) technologies to develop a standardized, global framework for comprehensive peatland assessment. Through systematic stakeholder engagement, the project identifies critical knowledge gaps and develops innovative solutions for mapping, monitoring, and assessing peatland conditions across diverse biomes, marking a significant advance in peatland science and conservation. From the monitoring needs identified through this engagement, the ESA WorldPeatland Project generates a comprehensive suite of Earth Observation products to address them. Our foundational work involves peatland extent analysis, where we systematically compare and validate peat extent mapping using multi-source satellite data. Our SM_L4-Sentinel-1/2 water level dynamics product, delivered at 1km resolution, addresses the crucial stakeholder need for monitoring peatland hydrology and assessing restoration effectiveness. For tracking peatland degradation, we implement ground motion measurements using both E-PS and ISBAS techniques, enabling detailed subsidence monitoring for possible carbon loss assessments. Responding to requirements for vegetation and biodiversity monitoring, we generate bio-geophysical parameters including Leaf Area Index (LAI), Land Surface Temperature (LST), and specialized vegetation indices using data from MODIS, Sentinel-2, and Landsat satellites. These parameters provide essential information for assessing revegetation progress and habitat development.
To address the stakeholder need for holistic peatland assessment, these individual products are synthesized into integrated health indicators that combine hydrological, ecological, and geophysical parameters. All products maintain consistent temporal resolutions from daily to monthly observations and spatial resolutions ranging from 10m to 1km, directly addressing user requirements for both broad-scale monitoring and detailed site analysis. Following stakeholder feedback emphasizing accessibility, these products are delivered through standardized, web-based applications designed for both technical and non-technical users. The ESA WorldPeatland Project focuses on advancing global peatland monitoring through systematic validation and intercomparison with existing datasets. Our validation framework encompasses comparison with ground measurements, high-resolution reference data, and other operational products. For initial assessments of peatland extent mapping, we adopted a systematic approach that integrates multi-source satellite datasets. We detail comparisons of Global Peatland Map (GPM), UKCEH, CORINE, Congo-Peat, ESA, MODIS, ESA CCI, and ESRI Land Cover datasets. Moving beyond peat extent for carbon stock estimation, we aim to integrate satellite-derived vegetation metrics with soil carbon models to quantify carbon storage and emissions. Peat depth mapping will utilize a combination of radar backscatter, LiDAR data, and field surveys to refine depth estimates across varying peatland types. To address fire risk and disturbance monitoring, we propose leveraging thermal anomaly datasets from MODIS and Sentinel-3 alongside vegetation dryness indices. Furthermore, methane flux modelling will be explored by combining wetness indicators with climate variables to assess greenhouse gas emissions more accurately. These efforts will further enhance the utility of our framework for addressing diverse stakeholder needs. 
Initial validation across eleven test sites highlights the framework's adaptability to diverse peatland ecosystems. For peat extent mapping, we conducted agreement and disagreement analysis using multiple reference datasets, including Global Peatland Map (GPM), UKCEH, CORINE, Congo-Peat, ESA, MODIS, ESA CCI, and ESRI Land Cover datasets. By comparing these datasets, we identified significant discrepancies in peat extent estimates, particularly in regions with complex hydrology and vegetation cover. For instance, in the UK, notable differences were observed between GPM and UKCEH, emphasizing the need for robust validation and harmonization. Our analysis provides a valuable training dataset to improve the accuracy and consistency of future peatland mapping efforts, contributing to a more reliable assessment of global peat carbon stocks and climate mitigation potential. This work contributes to advancing global peatland conservation by bridging the gap between Earth Observation capabilities and stakeholder requirements. The developed methodology provides a foundation for consistent, long-term monitoring of peatland ecosystems, supporting both policy decisions and practical conservation efforts. Future developments will focus on expanding the validation network through continued stakeholder engagement and incorporating emerging satellite sensors for enhanced monitoring capabilities.
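The agreement/disagreement analysis described in this abstract can be sketched as a pairwise comparison of co-registered binary peat-extent rasters. The snippet below is an illustrative sketch only: the arrays are synthetic stand-ins for grids such as GPM, UKCEH and CORINE, and the function name and disagreement rates are assumptions, not part of the WorldPeatland pipeline.

```python
import numpy as np

def pairwise_agreement(maps):
    """Fraction of pixels on which each pair of binary peat-extent maps agrees."""
    names = list(maps)
    scores = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            scores[(a, b)] = float(np.mean(maps[a] == maps[b]))
    return scores

# Synthetic 1-bit "peat / non-peat" rasters standing in for real, co-registered products.
rng = np.random.default_rng(0)
base = rng.random((100, 100)) > 0.5
maps = {
    "GPM": base,
    "UKCEH": base ^ (rng.random((100, 100)) > 0.9),   # ~10% of pixels flipped
    "CORINE": base ^ (rng.random((100, 100)) > 0.8),  # ~20% of pixels flipped
}
scores = pairwise_agreement(maps)
```

Pixels on which all products agree can serve as higher-confidence training data, while systematic disagreement flags regions (e.g. complex hydrology or vegetation cover) where validation and harmonization are most needed.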
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: SAR coherence and backscatter time series for monitoring restored, rewetted, abandoned and natural peatlands

Authors: Koreen Millard, Vincent Ribberink, Tauri Tampuu, Ain Kull
Affiliations: Carleton University, KappaZeta, University of Tartu
Although peatlands are gaining increased recognition as important habitats and stocks of carbon, they continue to be threatened due to anthropogenic pressures, including the extraction of peat, drainage for agriculture and forestry, and climate change. The status of restoration and rewetting is important to monitor in order to ensure biodiversity conservation and greenhouse gas emissions reduction targets are met [1]. Peatlands of different status (natural, abandoned extractions, restored, rewetted) exhibit different vegetation, soil and ecohydrological characteristics. Synthetic Aperture Radar (SAR) offers a method to explore the spatial and temporal changes in surface and vegetation conditions within these ecosystems. This research demonstrates the use of time series SAR backscatter and coherence at natural and restored peatlands across Canada. At seven natural peatland sites across Canada, soil moisture data were acquired from Ameriflux and field data collection efforts spanning the period of Sentinel-1 data availability (spring 2017 to the end of the respective soil moisture time series, where end dates varied by station). No soil moisture data are available for restored and rewetted sites; however, spatial locations are available for >200 sites across Canada where the Moss Layer Transfer Technique (MLTT) and other restoration and rewetting techniques (e.g. ditch blocking) were applied. To provide a comparison to these restored and rewetted sites, data for natural peatlands, abandoned extraction sites, and active extraction sites were also acquired within 10 km of the restored/rewetted sites. 
Correlation between soil moisture and backscatter in natural sites was highly variable by site. Soil moisture time series were compared with Sentinel-1 backscatter time series using Seasonal and Trend decomposition using Loess (STL) [2] to analyze long-term trends in both soil moisture and backscatter, with the goal of determining whether any sites were in drought, undergoing long-term wetting, or exhibiting no changes in hydrology. The STL method allows us to remove the variation due to seasonality (e.g. phenology) and focus on the change over time that is not regular [3]. This analysis indicated many similar trends in backscatter and soil moisture, but significant differences between peatlands with different surface conditions (e.g. peatlands with many ponds vs peatlands dominated by Sphagnum lawns). Interferometric coherence indicates the similarity of a pixel between the two acquisitions of a 12-day image pair, and the presence of vegetation (trees and shrubs) usually results in low coherence. Despite the trees and shrubs that exist in some types of natural peatlands, these peatlands often exhibit high coherence, and InSAR coherence and displacement have been used in peatland ecosystem mapping [4], [5] and in estimating surface height changes due to bog breathing [6] and water table and soil moisture conditions in peatlands [7], [8]. In this study, coherence for the natural peatland, active extraction and restored sites was extracted at 50 m spatial resolution for each date pair between 2017 and the end of the soil moisture time series at natural sites, and until August 2024 at disturbed sites (e.g. restored/rewetted/abandoned). Generally, coherence was significantly lower in rewetted sites than in the other classes, and these sites demonstrated the greatest within-site variability in coherence. This likely indicates that the water regime is becoming less homogeneous and that significant vegetation (e.g. upland trees such as birch) has grown in some parts of the rewetted sites. 
This pattern was not observed in the MLTT-restored or drier abandoned sites. The other classes were similar to each other in fall and spring, but in summer there was a clear distinction in coherence between the Natural Peatland and MLTT classes on the one hand and the abandoned, rewetted and active extraction classes on the other. While natural peatlands did demonstrate significantly higher coherence than restored sites in summer, the restored peatlands were more similar to natural peatlands than to other classes, indicating that these sites are beginning to show vegetation and moisture conditions similar to those of natural peatlands. An analysis of coherence against time since restoration indicated lower coherence beyond 15 years since rewetting, but otherwise no significant differences in coherence appeared to be related to time since restoration. This may also highlight the likelihood that these sites are gradually being claimed by upland vegetation. These findings underscore the potential of SAR backscatter and coherence time series analysis to provide critical insights into the ecohydrological dynamics of peatlands under various management conditions. By distinguishing trends in moisture and vegetation, SAR backscatter and coherence can support the effective monitoring of restoration efforts in these vital carbon-rich ecosystems. [1] E. B. Barbier and J. C. Burgess, “Economics of Peatlands Conservation, Restoration and Sustainable Management,” SSRN Electron. J., 2024, doi: 10.2139/ssrn.4695533. [2] R. J. Hyndman and G. Athanasopoulos, “6.6 STL decomposition,” in Forecasting: Principles and Practice (2nd ed). Accessed: Nov. 30, 2024. [Online]. Available: https://otexts.com/fpp2/stl.html [3] K. Millard, S. Darling, N. Pelletier, and S. Schultz, “Seasonally-decomposed Sentinel-1 backscatter time-series are useful indicators of peatland wildfire vulnerability,” Remote Sens. Environ., in press, 2022. [4] K. Millard, P. Kirby, S. Nandlall, A. Behnamian, S. Banks, and F. 
Pacini, “Using Growing-Season Time Series Coherence for Improved Peatland Mapping: Comparing the Contributions of Sentinel-1 and RADARSAT-2 Coherence in Full and Partial Time Series,” Remote Sens., vol. 12, no. 15, Art. no. 15, Jan. 2020, doi: 10.3390/rs12152465. [5] N. J. Pontone, “The Classification and Characterization of Canadian Boreal Peatland Sub-classes,” Master of Science, Carleton University, Ottawa, Ontario, 2023. doi: 10.22215/etd/2023-15739. [6] T. Tampuu, F. De Zan, R. Shau, J. Praks, M. Kohv, and A. Kull, “CAN Bog Breathing be Measured by Synthetic Aperture Radar Interferometry,” in IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia: IEEE, Jul. 2022, pp. 16–19. doi: 10.1109/IGARSS46834.2022.9883421. [7] T. Tampuu, J. Praks, F. De Zan, M. Kohv, and A. Kull, “Relationship between ground levelling measurements and radar satellite interferometric estimates of bog breathing in ombrotrophic northern bogs,” Mires Peat, vol. 29, no. 17, pp. 1–28, Aug. 2023, doi: 10.19189/MaP.2022.OMB.Sc.1999815. [8] T. Tampuu, J. Praks, R. Uiboupin, and A. Kull, “Long Term Interferometric Temporal Coherence and DInSAR Phase in Northern Peatlands,” Remote Sens., vol. 12, no. 10, Art. no. 10, Jan. 2020, doi: 10.3390/rs12101566.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall L1/L2)

Session: F.02.09 The Space for Climate Observatory Initiative: accelerating the deployment of digital solutions for climate change adaptation

The Space for Climate Observatory (SCO) is an international initiative that aims to support the development of Earth-observation-based operational tools for climate adaptation, mitigation and monitoring at the local level, as close as possible to users. Currently, 53 signatories are part of the International Charter, representing 28 countries and 6 international organizations. The SCO comprises a portfolio of 123 projects covering many thematic areas: ocean, coastal areas, biodiversity, extreme events, agriculture, water, etc.

To operate effectively, the SCO has established global governance bodies but relies primarily on local implementations with varying degrees of structure. These local implementations are crucial for generating projects and fostering synergies between private-sector ecosystems, research, public policies, public funding, and local climate challenges.



This session will present how local interfaces help bridge the gap between science, users, and decision-makers, with examples from Europe, France, the UK and Norway. It will also showcase projects that have delivered concrete tools to end-users.



Agenda:

1. Introduction – Presentation of the SCO with a focus on SCO France

2. Roundtable – From Science to Users: the role of SCO and local interfaces in turning space data into action

Speakers: NOSA, Space4Climate, ESA, ACRI-ST, and a researcher on EO governance.

3. Project Pitches – Operational tools from SCO addressing real-world needs (e.g. agriculture, carbon, coasts, forests).

Speakers: MEOSS, GlobEO, Hytech-Imaging, CNES, Hydromatters

Convenors: Claire Macintosh (ESA), Frédéric Bretar (CNES)

Moderators:


  • Frédéric Bretar - Head of the Space for Climate Observatory (SCO), CNES (French Space Agency)
  • Alexia Freigneaux - International Development Officer for the Space for Climate Observatory (SCO), CNES (French Space Agency)

Speakers:


  • Susanne Mecklenburg - Head of the Climate Office, ESA (European Space Agency)
  • Anja Sundal - Senior Adviser, Science and Earth Observation, NOSA (Norwegian Space Agency)
  • Krupa Nanda Kumar - Climate Services Development Manager, Space4Climate
  • Antoine Mangin - Scientific Director, ACRI-ST
  • Dorian Droll - Researcher, CNES-INSP
  • Thomas Ferrero - CEO, MEOSS
  • Stéphane Mermoz - CEO and Research Scientist, GlobEO
  • Marie Jagaille - Product Line Manager, Hytech-Imaging
  • Vincent Lonjou - Earth Observation Downstream Application Project Manager, CNES (French Space Agency)
  • Adrien Pâris - HydroMatters
  • Swed-Coast Blue Carb - TBC

Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall G2)

Session: C.05.09 EO National Missions Implemented by ESA - Setting the Scene

The session will introduce the different National Projects under implementation at ESA and provide an opportunity to exchange views on the challenges and opportunities ahead.

Speakers:


  • S Lokas – ESA
  • Konstantinos Karantzalos – Secretary General, Greek Ministry of Digital Governance and Greek Delegate to the ESA Council
  • Dimitris Bliziotis – Hellenic Space Centre and Greek delegate to PBEO
  • G. Costa – ESA
  • F. Longo – ASI
  • D Serlenga – ESA
  • Head of Delegation to ESA – MRiT
  • R. Gurdak – POLSA
  • L. Montrone – ESA
  • N. Martin Martin / J.M. Perez Perez – (Affiliation not specified)
  • Pedro Costa – CTI
  • Betty Charalampopoulou – Geosystems Hellas CEO and BoD Hellenic Association of Space Industry
  • Dr. hab. inż. Agata Hościło – Institute of Environmental Protection – National Research Institute
  • A. Taramelli – ISPRA
  • V. Faccin – ESA
  • R. Lanari – CNR/IREA
  • M. Manunta – CNR/IREA
  • L. Sapia – ESA
  • E. Cadau – ESA
  • Rosario Quirino Iannone – ESA
  • Mario Toso – ESA
  • Enrique Garcia – ESA
  • Ana Sofia Oliveira – ESA
  • Ariane Muting – ESA
  • V. Marchese – ESA
  • Jolanta Orlińska – POLSA
  • G. Grassi – ESA
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall G1)

Session: C.03.08 The European Copernicus Space component: status, future prospects and challenges - PART 1

Copernicus is the European Earth monitoring program that opened a new era in Earth Observation, providing continuous and accurate monitoring of our planet and improving continuously to respond to the new challenges of global change.
Since it became operational in 2014 with the launch of the first dedicated satellite, Sentinel-1A, Copernicus has provided a wealth of essential, timely and high-quality information about the state of the environment, allowing borderless environmental and emergency monitoring, and enabling public authorities to take decisions when implementing European Union policies.
The intense use and increased awareness for the potential of Copernicus have also generated great expectations leading to an evolved Copernicus system that has embraced emerging needs, new user requirements and a new commercial dimension.
This future evolution of the Copernicus program will fill observational gaps and will help monitor the “pulse” of our planet for the decades to come, but to do so, programmatic and budgetary commitments will need to be maintained.

Presentations and speakers:



Sentinel-1C transfer of ownership side event


  • S. Cheli - ESA, Director of Earth Observation Programmes
  • M. Facchini - EC, DG DEFIS

S. Cheli's introductory key speech


The European Union in the Copernicus Space Component


  • M. Facchini - EC, DG DEFIS

ESA and the Copernicus Space Component: present and future perspectives


  • P. Potin - ESA, Head Copernicus Space Office

The future Copernicus Sentinel satellite missions


  • P. Bargellini - ESA, Copernicus Space Segment Programme Manager

The Copernicus Sentinel missions and data management framework: European excellence in high quality data and services


  • B. Rosich - ESA, Head Copernicus Ground Segment and Data Management Division

The Copernicus current Sentinel satellite missions: Sentinel-1


  • N. Miranda - ESA, Sentinel-1 Mission Manager
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.14)

Session: A.03.07 The Carbon Budget Reconciliation Challenge

Earth Observation plays a critical role in supporting the estimation of greenhouse-gas fluxes between land, ocean and atmosphere. Space-based optical, radar and lidar instruments currently provide information on vegetation state (biomass), land-use change, land dynamics, and greenhouse-gas concentrations that are used in bottom-up or top-down modelling approaches to support scientific understanding of the carbon cycle and hence inform policy applications.

The Carbon Budget Grand Challenge has been established to help reconcile bottom-up and top-down estimates of greenhouse gas emissions in response to one of the key recommendations from the Fourth Carbon from Space workshop. It coincides with the conclusion of the Global Carbon Project’s second REgional Carbon Cycle Assessment and Processes (RECCAP2) study (AGU special collection). RECCAP2 has identified several challenges that can be addressed to improve the timeliness, coordination, and methodologies used for RECCAP3 (2020-2029). These include the support and training of early career scientists, the provision of datasets using cloud-based tools and standard formats (including datacubes), and the development of a low-latency workflow for implementing tiered budgets of varying complexity at annual to multi-annual cadence. The Reconciliation Challenge will address the challenges raised by RECCAP2 through a partnership that will provide coordination and early career support to address the following key tasks:

· A synthesis of RECCAP2 in the context of reconciling bottom-up and top-down budgets including lessons learned especially in the context of EO. The synthesis will help prioritize planning for RECCAP3 in terms of data needs from EO and identify a path forward for sub-regional and national scale GHG budgets.
· A dedicated effort to coordinate EO contributions to the development of annual updates on the status and dynamics of the terrestrial carbon cycle that leverage existing and planned NASA, ESA and other space agency satellite missions (OCO-2/3, ICESat-2, GEDI, TROPOMI, Sentinel-1/2, BIOMASS, NISAR, SWOT), as well as identifying datasets and modelling frameworks to improve top-down and bottom-up reconciliation in intermediate years leading to the third RECCAP study.
· The establishment of a low-latency framework for GHG budgets to help align the RECCAP process better with the Global Carbon Budget exercise.
· The provision of leadership opportunities and involvement to Early Career Scientists who contributed to RECCAP2 and to those who are enthusiastic to be involved in RECCAP3.

Agenda


Introduction to the Carbon Budget Reconciliation Challenge


  • Stephen Plummer - ESA

Science Talks


GCB, RECCAP, Insights from TRENDY and the need for Benchmarking


  • Mike O’Sullivan

Establishing the NRT Budget scheme


  • Philippe Ciais

The carbon cycle viewed from the US


  • Ben Poulter

NextGenCarbon and CONCERTO


  • NextGenCarbon
  • Manuela Balzarolo

EO-LINCS and data harvesting for RECCAP


  • Jake Nelson
  • Sujan Koirala

FLUXCOM-X data-driven carbon budgets


  • Sophia Walther

Round Table


Why is the Carbon Budget Reconciliation Needed, What is the problem?


  • Ben Poulter
  • Philippe Ciais
  • Mike O’Sullivan
  • Sophia Walther
  • Manuela Balzarolo

Open discussion


Establishment of a coordinated approach from EO


Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Session: D.01.01 Collaborative Innovation: building a Digital Twin of the Earth System through Global and Local Partnerships

The concept of a Digital Twin of the Earth System holds immense potential for revolutionizing our understanding and management of our planet. However, building such a complex and comprehensive system requires a global effort. This session explores the power of collaborative innovation in bringing together diverse stakeholders to create a robust and impactful Digital Twin Earth.

In this session, we invite contributions to discuss the following key topics:

- International Collaborations and Global Initiatives
We seek to highlight major international collaborations, such as ESA's Digital Twin Earth and the European Commission's Destination Earth, which exemplify the collective effort needed to develop these advanced systems. Contributions are welcome from successful international projects that demonstrate the potential for global partnerships to significantly advance the development and application of the Digital Twin Earth.

- Public-Private Partnerships (Industry and Academia Collaborations)
We invite discussions on innovative models for funding and resource allocation within public-private partnerships, which are crucial for sustainable development and effective environmental monitoring. Contributions from tech companies and startups that have been instrumental in developing key technologies for the Digital Twin Earth are especially welcome, showcasing the private sector's vital role in this global initiative.

- Local and Community Engagement
Engaging local communities and fostering grassroots initiatives are essential for the success of the Digital Twin Earth. We invite contributions that discuss the role of citizen scientists in data collection, monitoring, and validation efforts. Examples of training and capacity-building programs that empower local communities and organizations to actively participate in and benefit from these advanced technologies are also sought. Additionally, we welcome examples of successful local collaborations that highlight the positive impact of digital twin technologies on environmental monitoring and resilience.

- Multi-Disciplinary Approaches
Addressing the complex challenges of developing a Digital Twin Earth requires a multi-disciplinary approach. We seek contributions that integrate diverse expertise from climate science, data science, urban planning, and public policy to create comprehensive digital twin models. Discussions on developing standards and protocols for interoperability and effective data sharing among stakeholders are critical for holistic problem-solving and are highly encouraged.

- Policy and Governance Frameworks
We invite contributions that explore policy and governance frameworks supporting the development of policies for sustainable development and climate action. Effective governance structures that facilitate collaboration across different levels of government, industry, and academia are crucial. Additionally, we seek discussions on addressing ethical, privacy, and regulatory considerations to ensure the responsible use of digital twin technologies.

By fostering international collaborations, leveraging public-private partnerships, engaging local communities, integrating diverse expertise, and developing robust policy frameworks, this session aims to collectively advance the development of the Digital Twin Earth. This holistic approach ensures that the Digital Twin Earth is not only a technological marvel but also a collaborative, inclusive, and impactful tool for sustainable development and environmental resilience.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: GTIF Austria: Bridging International Developments in Snow Science and Hydrology With Local Decision-Making in the Hydropower Sector Through a Digital Twin Framework.

Authors: Maxim Lamare, Dr Matteo Dall’Amico, Federico Di Paolo, Stefano Tasin, Nicolò Franceschetti, Dr Johannes Schober, Dr Mario Strigl, Dr Gerhard Triebnig, Konstanze Fila
Affiliations: Sinergise Solutions GmbH, Waterjade Srl, TIWAG-Tiroler Wasserkraft AG, EOX IT Services GmbH, FFG (Austrian Research Promotion Agency)
The climate crisis stands as one of the most pressing global challenges we face, profoundly impacting ecosystems, economies, and societies worldwide. In response to this urgent need for climate action and sustainability, ESA launched the Space for a Green Future (S4GF) Accelerator in 2021, aiming to harness Europe’s space innovation to accelerate the Green Transition towards a carbon-neutral, sustainable and resilient society. At the heart of this initiative lie the Green Transition Information Factories (GTIF), which are based on a cloud-based platform infrastructure providing a portfolio of digital, geo-related information services (“GTIF Capabilities”). GTIFs focus on the use of data from Earth Observation in order to empower decision-making for climate change adaptation and ecological change. Building on the developments of the first GTIF demonstrator (https://gtif.esa.int/), the current GTIF-Austria (also referred to as “Digital Twin of Austria”) initiative is expanding the existing set of capabilities with a focus on the transition to carbon neutrality by 2050. Amongst the numerous “Capabilities” of the GTIF-AT being implemented over the period 2024-2026, the “Energy transition with hydropower” project focuses on enhancing water management and hydropower operations by integrating advanced snowpack data into hydrological forecasting systems, aiming to improve reservoir management, optimise energy production, and improve flood management. The main service providing information about the snowpack stems from the international ESA-funded Digital Twin Alps demonstrator (https://digitaltwinalps.com/) and produces daily maps of modelled and forecasted snow metrics including snow water equivalent (the amount of water stored in the snowpack), snow depth, snow-covered area and melt rates. 
These snow products are then integrated into a short-term and seasonal runoff forecast model, improving knowledge of the water stored in the catchment and, indirectly, of flood risk and energy production potential. While numerous Digital Twin initiatives such as the Digital Twin Alps have been innovative in their technological approaches, they often lack strong connections with stakeholders and fall short in translating their potential into practical, real-world applications. In this GTIF-Austria project, the services developed are being put directly to use by TIWAG, a state-owned electricity generation and distribution company in Tyrol, Austria, to improve their hydrological models, including a flood forecast system jointly used by the state of Tyrol. By directly involving the stakeholder as a partner within the project consortium, we ensure direct alignment between the features of the Digital Twin and their practical applicability. Iteratively developing the information services of the GTIF hand-in-hand with the stakeholder will ensure that the hydrological solutions available in the platform fit the needs of the market, align with operational realities, and deliver tangible value. Furthermore, by leveraging TIWAG’s extensive network within the Alps, the project will expand its reach, fostering collaboration and enabling the adoption of these advanced solutions across the broader hydropower sector.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: UrbanSquare: An Integrated Climate Risk Assessment Tool for Urban Areas on the Destination Earth Platform

Authors: Fabien Castel, Camille Lainé, Hugo Poupard, Sabrina Outmani, Leon Staerker, Melih Uraz, Mariana Damova, Dr Stanko Stankov, Hristo Hristov, Hermand Pessek, Dr Emil Stoyanov
Affiliations: Murmuration, Sistema, Mozaika
UrbanSquare is an innovative use case funded by ESA to be deployed on the Destination Earth platform, developed by a consortium led by Murmuration in partnership with Sistema, Mozaika, and Imperative Space. Designed to offer urban planners a comprehensive tool to assess and monitor climate risks in urban environments, UrbanSquare provides a holistic view of critical risk factors: urban heat islands, flooding, sea level rise and storm surges, air pollution, infrastructure deterioration, and increased resource demand. The service is built around a modular architecture where each risk theme is addressed by a component developed with a specific demonstrator and end-user acting as the product owner. UrbanSquare operationalizes risk assessment by integrating data from diverse sources, including the Destination Earth Digital Twins, Copernicus datasets, Landsat observations, ESA WorldCover, and additional open or commercial data such as Planet HR imagery, OpenStreetMap, and Eurostat socio-economic data. UrbanSquare is designed to scale by leveraging standardized, globally available datasets and state-of-the-art software that is seamlessly integrated into the Destination Earth System Platform (DESP). While five of the six components are natively integrated within DESP to exploit its data, services, and ICT infrastructure, the flood component stands out as a federated application using the platform’s data and service APIs. A key feature of UrbanSquare is its dual temporal approach, providing not only a retrospective analysis of historical and current data but also forward-looking projections of future what-if scenarios. This functionality empowers urban planners to anticipate climate risks and develop proactive adaptation and mitigation strategies. The initial implementation focuses on local demonstrator sites, where the service can be refined and validated. 
However, the use of globally consistent datasets ensures the tool’s scalability, enabling rapid adaptation to a growing number of deployments worldwide. UrbanSquare thus represents a significant step toward equipping cities with actionable insights for climate resilience, fostering informed decision-making and sustainable urban development in the face of accelerating climate change. The air quality monitoring component utilizes an AI-driven super-resolution model to enhance the spatial resolution of NO₂ concentration data from 10 km (Copernicus Atmosphere data) to 1 km. It integrates various datasets, including meteorological data (ERA5), environmental factors (topography, land cover), and human activity indicators (traffic, population density). The system delivers daily, near-real-time 1-km air quality maps, supporting interactive visualization, data export, and detailed time-series analysis. Users can simulate "what-if" scenarios, such as changes in traffic patterns, urban density, or climate conditions, to assess the potential impacts on air quality. The platform includes features for evaluating influential factors and allows customization according to WHO or national air quality standards. Designed for ease of use, the tool empowers users to monitor, analyze, and project air quality trends for informed decision-making in urban and environmental planning. The urban heat monitoring component provides a heat exposure indicator to identify urban heat islands, utilizing land use and vegetation data. It employs Land Surface Temperature (LST) data from Landsat 8/9, combined with climate change projections from the DestinE platform, to model and project heat wave impacts under various Shared Socioeconomic Pathways (SSPs). Through pixel-wise linear regression, the tool computes LST projections for moderate to extreme heat waves (30°C to 45°C) and calculates the annual frequency of extreme heat days under future climate scenarios. 
It supports interactive visualization, enabling urban planners to compare thermal conditions across neighborhoods, assess the impact of urban development and renovation, and evaluate policy measures such as vegetation management. Designed with an interactive dashboard, it empowers stakeholders to simulate scenarios and make informed decisions for urban climate resilience. The Sea Level Rise and Storm Surges component is built upon long-term mean sea surface height projections and aims to produce a comprehensive depiction of inundation risk in coastal areas, which are particularly vulnerable to climate change. The globally available tool allows users to generate what-if scenarios between 2040 and 2150 by varying SSPs and storm surge heights, which are added to the predicted sea level retrieved from the IPCC dataset (AR6). The latter is integrated with Copernicus Digital Elevation Models (DEMs) and ESA Waterbodies layers to compute inundation maps. Furthermore, datasets from the Copernicus Global Human Settlement and ESA WorldCereals are used to produce exposure assessments in terms of population, built-up surface and cultivated areas affected by the floods. Thanks to the integration with DestinE Climate Change and Adaptation DT data, higher resolution inundation products can be generated over Europe, providing a better understanding of the impact of sea level rise. The Flood component provides advanced forecasting and simulation capabilities for managing flood risks. 
It employs the novel ISME-HYDRO® (http://isme-hydro.com) EO4AI method, which combines Earth observation data on meteorological features relevant to the hydrological status of rivers (precipitation, soil moisture, vegetation index, snow cover) with in-situ measurements, feeding them into pipelines of neural network architectures to generate forecasts of river discharge and water level, with models predicting hydraulic status up to 30 days ahead; digital elevation data are then introduced to obtain the projected flood plains. To simulate flood scenarios, the tool, integrated into DESP, offers customizable simulations, allowing users to specify water levels, precipitation increases, and flood events for specific locations and timeframes. These simulations generate detailed maps and data tables and visualize their impacts, supporting municipalities in planning and responding effectively. The ISME-HYDRO® application, based on a complex intelligent e-Infrastructure that demonstrates federated integration with DESP (http://destine.isme-hydro.com), helps users predict flood extents, evaluate flood risks, and assess affected land and infrastructure in high-risk areas. With its user-friendly interface, interactive maps, and scenario-building features, the component is essential for policymakers and municipal officials to mitigate flood damage, enhance preparedness, and protect communities and assets under diverse flood conditions. The Resources component is blended into the Flood component by integrating socio-economic data to quantify the potential impacts of climate risks, focusing primarily on floods. For example, it provides insight into the damage caused by floods by identifying and quantifying the infrastructure (buildings and roads) that falls inside the flooded areas, as well as affected agricultural fields and forest areas. 
The component supports municipal officials in three key areas: evaluating flood damages, estimating recovery resources, and planning recovery efforts. The Resources component streamlines data analysis for critical decision-making, enabling municipalities to effectively allocate resources and design recovery strategies post-flood events. Its goal is to bridge data insights and actionable recovery measures, ensuring efficient response and mitigation. The Infrastructure component aims at producing updated maps of roads from high resolution satellite images, analysing the land cover along the roads and highlighting the areas that would need action or restoration. By harnessing the capabilities of the Destination Earth platform and leveraging a robust consortium-driven approach, UrbanSquare aims at making a transformative impact on how urban planners address the multifaceted challenges of climate risks in cities worldwide.
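The inundation mapping step described above, comparing a projected sea level plus a storm surge height against a DEM, can be sketched in a few lines. This is a minimal "bathtub" model written as an illustration; the function names and the toy DEM are assumptions, not the UrbanSquare implementation, which also integrates waterbody layers and exposure datasets:

```python
def inundation_mask(dem, projected_sea_level_m, storm_surge_m):
    """Return a boolean mask of cells flooded in a what-if scenario.

    dem: 2-D list of elevations in metres (e.g. sampled from a DEM tile).
    A cell is flagged as inundated when its elevation falls at or below
    the projected sea level plus the chosen storm surge height.
    A bathtub model ignores hydraulic connectivity to the sea, so it
    overestimates flooding in enclosed depressions.
    """
    water_level = projected_sea_level_m + storm_surge_m
    return [[cell <= water_level for cell in row] for row in dem]

def exposed_area_km2(mask, cell_area_km2):
    """Sum the area of inundated cells for a basic exposure assessment."""
    return sum(cell for row in mask for cell in row) * cell_area_km2

# Toy 3x3 DEM (metres) with a 1.2 m sea-level projection and 0.8 m surge
dem = [[0.5, 1.5, 3.0],
       [1.0, 2.5, 4.0],
       [2.1, 3.5, 5.0]]
mask = inundation_mask(dem, 1.2, 0.8)
```

In a real workflow the same thresholding would run per scenario (SSP, year, surge height) over full-resolution DEM tiles, and the mask would be intersected with settlement and cropland layers to derive exposure figures.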
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: EnvironTwin: A Digital Twin for Environmental Monitoring Project

Authors: Mahtab Niknahad, Simone Tritini, Michele Claus, Roberto Monsorno, Abraham Mejia-Aguilar
Affiliations: Eurac Research
The EnvironTwin project seeks to enrich the Environmental Data Platform (EDP) [1] by implementing a Digital Twin (DT) service to represent, model, and forecast key alpine environmental scenarios. EDP was previously developed for the FAIR management of environmental data resources at Eurac Research, to provide stakeholders with actionable insights into ecosystem dynamics, risks, and management strategies. EnvironTwin leverages technologies such as in-situ sensors, proximal sensing, satellite imagery, cloud computing, and advanced data modeling. The project addresses critical challenges posed by human activities and their impacts on agriculture, forestry, and environmental conservation, as well as certain technological limitations. Objectives: The project aims to: (i) Commission advanced instrumentation and sensor technology for environmental monitoring and digital shadowing. (ii) Establish and integrate computing infrastructure within the existing Environmental Data Platform. (iii) Implement a Continuous Integration and Continuous Deployment (CI/CD) environment to streamline digital twin creation. (iv) Combine heterogeneous data sources into a unified framework for robust digital twin modeling. Challenges: Mountain and Alpine environment monitoring systems face technological, organizational, and operational limitations. The sudden change in orography, the weather variability, different climate zones, and a wide variety of human activities make it difficult to use one single monitoring strategy. EnvironTwin therefore integrates instrumented systems of different natures, from ground-based to remote sensing approaches. However, one of the primary challenges is acquiring high-quality, heterogeneous data and building an operative infrastructure to effectively integrate these data into simulation models that suggest different possible scenarios in four key alpine use cases: Grasslands, Forestry, Agrovoltaic and Natural Hazards. 
Above all, every use case is followed by a scientific supervisor, making EnvironTwin a unique multi-disciplinary approach. Use Cases: (i) Grassland management detection: Grasslands in South Tyrol are managed by small farming businesses, whose practices include grazing, mowing and harvesting, and fertilization. However, every farmer follows a different management strategy, making it challenging to generalize monitoring systems. EnvironTwin evaluates the effectiveness of high-resolution Planet satellite data in capturing these spatially and temporally variable management events in South Tyrol. The optimization will be achieved by incorporating Sentinel-2 imagery and webcam-derived reference data, thereby refining the spatial and temporal resolution for agricultural applications. (ii) Forest structural diversity and modeling: Ground- and proximal-sensing-based data on forest structural diversity foresee the monitoring of individual trees distributed along a 1500-meter elevation profile. EnvironTwin integrates heterogeneous data sources into forest dynamics models, enabling predictions of forest adaptation and growth under changing climatic conditions. This data supports the creation of a digital twin of forests for current and future scenario adaptation. (iii) Agri-Voltaic Systems: By modeling the interactions between photovoltaic (PV) systems and agricultural practices, digital twins optimize dual land use for energy generation and crop cultivation. Climatic and weather variability, including droughts and extreme temperatures, are considered to ensure resource efficiency and sustainability. The main objective is to forecast energy and fruit production based on IoT data. (iv) Rock glacier deformation: Traditional methods to monitor rock glacier deformation rely on the monitoring of medium-sized boulders using GPS. 
Nevertheless, the integration of proximal sensing (LiDAR, Thermal, and RGB) makes it possible to identify hot spots, as well as to monitor rotational and gravitational movement, material deposition, and geological structures at a very small scale. In this use case, these technologies are applied in the Lazaun Senales Valley, Italy, to extract environmental variables such as temperature and time. The collected data is incorporated into a digital twin (DT) model, which provides valuable insights for developing strategies to mitigate and manage natural hazard risks. Pilot Study: Alongside the four use cases planned for the project, a pilot case has been developed to test the infrastructure, which will then be adapted to other application fields. The pilot study was set up in the Laimburg field in South Tyrol, Italy, and it focuses on predicting soil moisture and precipitation using in situ sensors, satellite data (temperature, water, and soil moisture indices), and weather station observations. Algorithms such as LightGBM, CNNs, LSTMs, and regression models are applied to analyze data, predict irrigation needs, and enhance water management. Impact: EnvironTwin fosters collaboration among researchers, public authorities, and businesses by creating synergies with predictive modeling experts and showcasing the potential of digital twin technology through the research and development ecosystem. The project’s outcomes include scalable, data-driven tools for managing environmental challenges, validated through interdisciplinary case studies. By demonstrating its applicability across diverse fields, EnvironTwin establishes a foundation for sustainable, resource-efficient environmental management. 
The research leading to these results has received funding from the European Regional Development Fund, Operational Programme Investment for jobs and growth ERDF 2021-2027 under Project number ERDF1045 Service for the development of digital twins to predict environmental management scenarios through dynamic reconstruction of the Alpine environment, EnvironTwin. [1] FAIRsharing.org: EDP; Environmental Data Platform, DOI: 10.25504/FAIRsharing.e4268b, Last Edited: Monday, October 9th 2023, 18:54, Last Accessed: Wednesday, November 29th 2023, 10:19
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: How Earth observation, citizen science, automated sensors and models are bringing Lake Geneva to life

Authors: Daniel Odermatt, Abolfazl Irani Rahaghi, Damien Bouffard, James Runnalls, Laurence Haller, Pasche Natacha
Affiliations: Eawag - Swiss Federal Institute of Aquatic Research, Department of Geography, University of Zurich, Eawag - Swiss Federal Institute of Aquatic Research, Limnology Center, École Polytechnique Fédérale de Lausanne (EPFL)
Lakes are fascinating ecosystems with discrete yet permeable spatial boundaries, within which biological, chemical and physical processes overlap. Therefore, lake research is geographically focussed, but thematically diverse. To meet the challenges associated with this interdisciplinarity, we pursue a vision for lake research that utilises and combines various sources of information, including in situ measurements, remote sensing and model simulations. To this end, we have developed a digital infrastructure in the ESA projects CORESIM and AlpLakes, which, in the case of Lake Geneva, can be used as a digital twin for the detailed simulation of limnological processes. A similar yet reduced approach was scaled to more than 80 other lakes in the Alpine region (www.alplakes.eawag.ch). Lake Geneva is regarded as one of the first subjects of limnological research in the nineteenth century, when standing waves were first investigated in the lake. In the second half of the 20th century, like many lakes in Europe, Lake Geneva was subject to pronounced eutrophication. The Franco-Swiss Commission CIPEL was founded in 1962 to curb this development by means of international coordination, and in 1980, civil society organised itself into another cross-border NGO for the protection of Lake Geneva, the Association pour la Sauvegarde du Léman (ASL). Nevertheless, the lake's degree of eutrophication remains high, which occasionally leads to disturbing algal blooms. Added to this is the warming of the lake due to climate change, which is affecting its vertical mixing, and the introduction of invasive species such as the quagga mussel (since 2015). This complex interplay of impairments is both hard to understand and difficult to communicate. Using the concept of a digital twin, we aim to improve the scientific understanding of this interplay, and to support the exchange of knowledge between researchers, authorities and civil society. 
We thereby depend on the exceptional availability of measurements, model simulations and Earth observation data products for Lake Geneva. Five limnological research institutes jointly installed the LéXPLORE research platform (https://lexplore.info/) off the northern shore of the lake in February 2019, where they are acquiring a unique variety of measurements for all domains of lake research. Earth observation satellite data has been used to support research on Lake Geneva for more than two decades. Today, we benefit from the availability of a variety of continuous LéXPLORE measurements for the validation and interpretation of Earth observation data, including unique hyperspectral measurements of aquatic absorption and scattering that are acquired by a profiler several times a day. The use of operational three-dimensional models provides deeper insights into visible changes on the lake surface and the underlying causes. An extraordinary bloom of golden algae, namely Uroglena sp., in September 2021 provides a vivid application example for the concept of digital twins. Such a bloom had not occurred since 1999 and was hardly expected due to declining phosphorus concentrations and the changing climate. Sentinel-2 images show the pronounced spatial heterogeneity of the bloom, due to which representative in situ measurements are rare. Using hydrodynamic simulations, we reconstructed the circulation around the time of the bloom, and the tracking of particles indicated its geographic origin. The analysis of hydrological and meteorological data provided further indications of the combination of external conditions that enabled the bloom, conditions that had not occurred in this combination since 1999. The use of various data and tools has therefore made a decisive contribution to improving process understanding. A lively exchange between science and the public in the Léman region was established with the support of ASL and the University of Lausanne in the past year. 
We registered 600 participants. We have equipped 250 citizen scientists, who are regularly out on the lake, with Secchi discs to measure transparency. After transmission with a smartphone, their measurements are visualized in a web portal alongside coinciding Sentinel-3 Secchi depth products (https://lemanscope.org/). Among over 2000 measurements, 400 match Sentinel-3 overpasses. Their unique spatial distribution enables an improved estimation of the uncertainties of Sentinel-3 products, in particular their dependence on the proximity to the shore. In turn, improved water transparency products from Sentinel-3 can be used to parameterize hydrodynamic models. In regular seminars, we inform participants about visible properties of the lake, such as the strong turbidity caused by inorganic particles in summer 2024, and how they affect the measurements. For other current topics, such as the spread of invasive quagga mussels, we organise lectures by selected specialists. The interdisciplinary use of different information technologies enables new insights in research and is an important tool for communicating with the public. But it is also complex and costly, and scaling it requires collaborative approaches. This is why all the software we use is openly available. In the new EU Interreg AlpineSpace project DiMark (https://www.alpine-space.eu/project/dimark/), we promote the further dissemination of our tools for lakes across the entire Alpine region.
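Pairing citizen Secchi readings with Sentinel-3 overpasses, as described above, amounts to a time-window matchup. The sketch below is a minimal illustration of that idea; the 3-hour window, record layout, and sample timestamps are assumptions, not the Lémanscope protocol:

```python
from datetime import datetime, timedelta

def matchups(in_situ, overpasses, max_gap=timedelta(hours=3)):
    """Return (measurement, overpass) pairs closer in time than max_gap.

    in_situ: list of (timestamp, secchi_depth_m) citizen measurements.
    overpasses: list of satellite overpass timestamps.
    Each measurement is matched to its nearest overpass and kept only
    if the time gap is within the window.
    """
    pairs = []
    for ts, depth in in_situ:
        nearest = min(overpasses, key=lambda op: abs(op - ts))
        if abs(nearest - ts) <= max_gap:
            pairs.append(((ts, depth), nearest))
    return pairs

# Two hypothetical readings; only the first falls near an overpass.
readings = [(datetime(2024, 7, 1, 10, 30), 4.2),
            (datetime(2024, 7, 2, 18, 0), 3.9)]
passes = [datetime(2024, 7, 1, 10, 5), datetime(2024, 7, 2, 10, 5)]
matched = matchups(readings, passes)
```

Real matchup protocols additionally filter on cloud cover, pixel distance to shore, and product quality flags before the pairs are used for uncertainty estimation.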
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: Digital twin politics: Unlocking the full potential of digital twins for sustainable ocean futures

Authors: Associate Professor Alice Vadrot, PhD Researcher Carolin Hirt, PhD Researcher Emil Wieringa Hildebrand, PhD Researcher Felix Nütz, PhD Researcher Wenwen Lyu
Affiliations: University of Vienna
The concept of a Digital Twin of the Ocean (DTO) represents a potentially significant leap in advancing ocean knowledge and fostering sustainable action, while having the potential to significantly reshape the interface between science and politics: Under many multilateral environmental agreements, DTOs can be crucial for supporting intergovernmental efforts to monitor progress in achieving environmental protection goals, including in the areas of marine biodiversity, deep-seabed mining, fishing, shipping and plastic pollution. Despite rapid technological progress and a rapidly expanding range of potential applications, research into the social and political dimensions of DTOs remains underdeveloped. This gap is particularly concerning, as we argue that DTOs are inherently contested, ambiguous and political: Firstly, DTOs can risk exacerbating global inequalities, given the unequal capacities to develop, access, and utilize ocean data, information, and DTO models and technologies. Secondly, they introduce a range of legal and political challenges, including uncertainties around data access, ownership, security, and sharing. Thirdly, to ensure ethical use of DTOs, they require a robust framework of norms, rules, and values. All these aspects, we argue, often remain overlooked amid the current “twin rush.” To address these aspects and the overall lack of empirical social science research on the development and use of digital twins, the ERC project TwinPolitics at the University of Vienna re-conceptualizes DTOs as a socio-technical relation shaped by specific institutional, political, and economic conditions within a hybrid environment of research, data, and observation. TwinPolitics seeks to unpack the emergence of so-called “digital twin politics” in international environmental governance by tackling key questions: How and why are DTOs developed by governments and utilized in marine scientific research? How are they designed to inform decision-making? 
To what extent are they, or could they be, integrated into multilateral governance? This presentation introduces the project’s innovative methodology to track the development of DTOs across multiple field sites, policy levels, and spatial scales. By addressing this critical research gap, TwinPolitics aims to provide valuable insights into the role of digital practices in contemporary data-driven policymaking, fostering more equitable and responsible implementations of DTOs. TwinPolitics will transform our understanding of data-driven policy making by producing fine-grained analyses and visualisations of the socio-technical making of digital twins, and of how they can serve multilateralism.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: CITYNEXUS: Empowering Sustainable Urban Development through Digital Twin Technology

Authors: Mr Ludovico Lemma, Dr Simone Fratini, Dr Alessandra Feliciotti, Dr Mattia Marconcini, Mr Francesco Asaro, Mr Josselin Stark, Mr Andreas Altenkirch, Dr Claudia Vitolo
Affiliations: MindEarth s.r.l., Solenix Engineering GmbH, European Space Agency
Destination Earth (DestinE), a flagship initiative jointly promoted by the European Commission and the European Space Agency (ESA), is transforming the use of Earth Observation (EO) data to generate actionable insights for sustainable urban planning. As part of the DestinE ecosystem, CITYNEXUS, an advanced urban digital twin use case developed for the city of Copenhagen, has been tailored to address the specific needs and requirements of Copenhagen’s Amager Vest district, which is currently facing a key moment of transition towards healthier and more sustainable mobility. CITYNEXUS aims to address critical urban challenges, including traffic congestion, air pollution, and equitable land use planning, by deploying advanced simulation technologies. The platform integrates cutting-edge AI models with a diverse range of data sources, including Earth Observation (EO) data and ground-based environmental observations, together with High-Frequency Location-Based (HFLB) mobility data, that is, anonymized, high-frequency geolocation data collected from mobile devices, which provides detailed insights into population movements and traffic patterns, enabling dynamic urban modeling and improved decision-making. To monitor air quality and pollutant emissions, CITYNEXUS exploits Sentinel-5P TROPOMI Level 2 data to track pollutants such as NO₂, CO, O₃, and SO₂, alongside meteorological parameters from ECMWF ERA5 reanalysis data, including temperature, precipitation, wind, and solar radiation. These EO datasets are further enriched with CORINE land cover data and the Copernicus Digital Elevation Model (DEM), facilitating detailed spatial and environmental analyses. Ground-based air quality observations from the Danish Environmental Protection Agency monitoring network provide high-resolution, real-time validation for the EO-derived pollutant data, ensuring robust and actionable insights. 
The platform further incorporates historical datasets from Google's Environmental Insights Explorer, collected before the COVID-19 pandemic, which serve as valuable baseline measurements for urban emissions and mobility trends. To dynamically model the relationship between urban mobility and pollutant distribution and intensity, CITYNEXUS integrates a Deep Gravity Model, a sophisticated deep learning framework able to predict origin-destination flows by analyzing the interplay between spatial factors such as population density, land use, and infrastructure connectivity, serving both as a baseline for existing conditions and as a foundation for simulating changes under different scenarios. Complementing this framework is SUMO (Simulation of Urban MObility), which uses the output of the Deep Gravity Models to simulate vehicular traffic patterns, congestion levels, and associated emissions. The combined approach provides a comprehensive understanding of urban dynamics, enabling detailed analysis of traffic and its environmental impact. To ensure accuracy, the models are validated using traffic camera counts from the City of Copenhagen, aligning simulated outputs with observed traffic flows and ground conditions. The platform’s unique strength lies in its ability to empower users to configure and explore “what-if” scenarios tailored to the needs of Copenhagen’s Amager Vest district. Policymakers and urban planners can simulate road closures, introduce tunnels, adjust speed limits, and redefine land use distributions across residential, commercial, and industrial zones. CITYNEXUS also allows users to modify traffic compositions, such as increasing the proportion of bicycles or electric vehicles, and analyze the effects of these changes across different temporal settings, including specific time slots or weekdays versus weekends. 
Outputs from these simulations include pollutant concentrations for NO₂, CO₂, PM₁₀, and PM₂.₅, as well as metrics for traffic congestion, fuel consumption, and noise pollution. These results are presented through dynamic, interactive maps, enabling stakeholders to visualize and compare the impacts of various interventions in a risk-free virtual environment. Explainable AI (XAI) is a cornerstone of CITYNEXUS, enhancing transparency and usability by providing clear explanations for simulation results and actionable recommendations for optimization. In its first stage, the XAI module quantifies the environmental impacts of user-defined modifications, such as the reduction in NO₂ levels resulting from a road closure or speed limit adjustment. In the second stage, it suggests strategies to mitigate adverse outcomes or amplify positive effects, enabling stakeholders to make informed, evidence-based decisions. The XAI module also generates intermediate outputs, such as mobility trajectories, which provide deeper insights into traffic and pollutant dispersion patterns, fostering trust and confidence in the platform’s predictions. Developed in close collaboration with stakeholders, CITYNEXUS reflects the specific needs of the Amager Vest district. This diverse area, characterized by its mix of residential, commercial, and institutional spaces, faces significant challenges from high-speed thoroughfares that disrupt connectivity and exacerbate air quality issues. Working closely with the Local Council of Amager Vest, CITYNEXUS has been instrumental in exploring transformative interventions, such as tunneling major roads and reallocating reclaimed space for green areas or residential development. These initiatives align with Copenhagen’s ambitious goals to achieve carbon neutrality by 2025 while enhancing urban livability and resilience. Beyond its immediate application in Amager Vest, CITYNEXUS demonstrates scalability and adaptability for broader urban contexts. 
Its integration with the DestinE Service Platform (DESP) ensures robust computational performance and seamless data handling, making the platform transferable to other cities facing similar challenges. Engaging with regional and international networks like ICLEI, EUROCITIES, and the Global Covenant of Mayors, CITYNEXUS extends its impact, fostering collaboration and knowledge exchange to drive sustainable urban transformation. Furthermore, its alignment with the EU Mission “100 Climate-Neutral and Smart Cities by 2030” positions CITYNEXUS as a replicable model for cities across Europe and beyond. The technical foundation of CITYNEXUS ensures scientific rigor and operational relevance. Its integration of EO data with deep learning models like the Deep Gravity Model and SUMO enables precise, high-resolution simulations that capture the complex interplay between mobility, infrastructure, and environmental factors. Validation efforts, including cross-referencing outputs with traffic camera data and ground-based air quality observations, further reinforce the platform’s accuracy and reliability. By addressing urban challenges with precision, transparency, and adaptability, CITYNEXUS exemplifies the transformative potential of digital twin technologies in achieving sustainable urban futures. CITYNEXUS represents a paradigm shift in urban planning, combining innovative data integration, advanced modeling, and user-centric design within a collaborative framework. It operationalizes the synergies between the ESA DTE and DestinE ecosystems to address pressing urban challenges, offering a scalable, technically robust, and impactful solution. As a flagship use case, CITYNEXUS highlights the potential of digital twins to empower cities worldwide in their pursuit of sustainability and resilience.
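The what-if scenario parameters described above (road closures, speed limits, traffic composition, temporal settings) can be pictured as a small configuration object handed to the simulation backend. The field names below are illustrative assumptions for the sake of the sketch, not the CITYNEXUS API:

```python
from dataclasses import dataclass, field

@dataclass
class WhatIfScenario:
    """Illustrative container for a CITYNEXUS-style what-if run."""
    closed_roads: list = field(default_factory=list)     # road identifiers
    speed_limit_kmh: dict = field(default_factory=dict)  # road -> new limit
    bicycle_share: float = 0.2   # fraction of trips made by bicycle
    ev_share: float = 0.1        # fraction of electric vehicles
    weekend: bool = False        # weekday vs weekend temporal setting

    def validate(self):
        """Basic sanity checks before handing the scenario to a simulator."""
        assert 0.0 <= self.bicycle_share <= 1.0
        assert 0.0 <= self.ev_share <= 1.0
        return self

# Hypothetical example: close one road, lower a speed limit,
# and raise the bicycle share to 35% of trips.
scenario = WhatIfScenario(closed_roads=["Amager Blvd"],
                          speed_limit_kmh={"Artillerivej": 30},
                          bicycle_share=0.35).validate()
```

In a pipeline of this shape, such a scenario object would parameterize the origin-destination prediction and the traffic simulation, and the resulting pollutant and congestion outputs would be compared against a baseline run.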
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F1)

Session: C.03.11 Sentinel-1 Mission: Sentinel-1C In-Orbit Commissioning Phase Results and beyond

Sentinel-1 is the space radar observatory of the Copernicus Programme. It is a constellation of two polar-orbiting satellites carrying a C-band synthetic aperture radar as the main payload. The Sentinel-1 mission started in 2014 with the launch of the first unit (the "A" unit), followed by the second unit (the "B" unit) two years later. Sentinel-1 aims to provide free and open radar backscatter over two decades. To this end, the first two units will gradually be replaced by the C and D recurrent units.

The replenishment of the constellation will start in 2024 with the launch of the long-awaited Sentinel-1C unit and will continue in 2025 with Sentinel-1D. Sentinel-1C is expected to be launched in Q4 2024 with the Vega-C Return to Flight. The in-orbit commissioning phase will last four months, with the ambition of having the spacecraft operated at its full capacity soon after.

This session will present the activities and results achieved during the commissioning phase in terms of instrument performance, calibration and validation. It will also present the new capabilities offered by the specific AIS payload carried by Sentinel-1C. First results from the use of Sentinel-1C during and after the commissioning will also be addressed.

Presentations and speakers:


Return to a 6-Day-Repeat Sentinel-1 Constellation: An Overview of the Sentinel-1C In-Orbit Commissioning


  • Tobias Bollian - ESA

S1C Elevation and Azimuth Pointing Verification during the Commissioning Phase using Data and Antenna Model


  • Beatrice Mai - Aresys

Introduction to the Sentinel-1 AIS Payload; Commissioning and Performance Results


  • Stefan Graham - ESA

InSAR Methods and Preliminary Results for Sentinel-1C In-Orbit Validation


  • Marco Manzoni - PoliMi

DLR’s Independent Calibration of the Sentinel-1C System – First Results from S1C Commissioning Phase Activities


  • Patrick Klenk - DLR
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Session: F.04.03 Desertification, land degradation and soil management

Desertification and land degradation pose a major threat to food security, ecosystem services and biodiversity conservation. Soil is not a renewable resource when viewed on a time scale of a couple of decades, and it is threatened worldwide by climate change, natural hazards and human activities. The consequences are increased soil loss due to wind and water erosion, landslides, and reduced soil quality due to organic matter loss, contamination and soil sealing. The EU Soil Monitoring Law on the protection and preservation of soils aims to address key soil threats through sustainable soil use and the preservation of soil quality and functions. Space-based earth observation data, together with in-situ measurements and modelling, can be used in an operational manner by national and international organizations with the mandate to map, monitor and report on soils. With the advent of operational EO systems with a free and open data policy, as well as cloud-based access and processing capabilities, the need for systematic, large-area mapping of topsoil characteristics at high spatial resolution that goes beyond recording degradation processes can be addressed.

We encourage submissions related to the following topics and beyond:
- Advanced earth observation-based products to monitor desertification and land degradation at a large scale
- Specific earth observation-based methods for soil-related topics such as soil parameter mapping and soil erosion mapping, as well as other soil health indicators in different pedo-climatic regions and biomes.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: High Resolution Land Degradation Neutrality Monitoring – Achievements of the ESA SEN4LDN Project

Authors: Carolien Toté, Dr. Ruben Van De Kerchove, Daniele Zanaga, Giorgia Milli, Cai Zhanzhang, Lars Eklundh, Katja Berger, Martin Herold, Nandika Tsendbazar, Panpan Xu, Gabriel Daldegan, Marc Paganini
Affiliations: VITO, University of Lund, GFZ, Wageningen University, Conservation International, ESA-ESRIN
The 2030 Agenda for Sustainable Development is fundamentally based on 17 Sustainable Development Goals (SDGs), targets agreed upon by the UN members regarding various interlinked objectives that must be ensured to achieve sustainable development. Diminished overall productivity and reduced resilience in the face of climate and environmental change have made addressing land degradation a global priority, formalized by the United Nations Convention to Combat Desertification (UNCCD) and the SDGs. To this end, the 2030 Agenda for Sustainable Development defined target 15.3 of SDG 15, ‘Life on Land’, which strives to reach Land Degradation Neutrality (LDN) by 2030. Efficient monitoring of Land Degradation (LD) requires constant monitoring of various biophysical and biochemical characteristics of the land. These disturbances range from rapid land cover change (e.g., fire or logging) to continuous and slower degradation of soil and land quality. While monitoring these at larger scales is logistically impossible without Earth Observation (EO) data, there are still several challenges and opportunities to address, particularly related to increasing spatial and temporal resolution and the diversity of sensor types. The European Space Agency (ESA) Sentinels for Land Degradation Neutrality (SEN4LDN) project aimed to address these two limitations by developing and showcasing a novel approach for improving both the spatial and temporal resolution of the data required for LD monitoring. While LDN is agreed between the SDG signatories, each region/country has its own specific challenges and drivers of LD, making the inclusion of local partners in the product development extremely important. Therefore, SEN4LDN engaged with three pilot countries – Colombia, Uganda and Portugal – to participate in the project as early adopters. 
These stakeholders provided insights on the user requirements and feedback on the final product and its actual usability for SDG 15.3.1 reporting, and have been actively engaged in the project through three iterative rounds of Living Labs. The SEN4LDN national demonstration products consist of a series of output products on the three sub-indicators of land degradation as defined by the UNCCD – trends in land cover, trends in land productivity, and trends in carbon stocks – and a combined integrated LDN indicator. Trends in land cover between 2018 and 2023 are evaluated based on an automated algorithm to map land cover dynamics at 10 m resolution that combines deep learning and a pixel classifier on pre-processed Sentinel-2 imagery and ancillary input layers. Post-processing is performed to mitigate class fluctuations, resulting in consistent annual land cover maps. Land cover probabilities are used to generate land cover transition (probability) layers, which are further processed into discrete and continuous land cover degradation products. The land cover and land cover transition maps were validated against independent reference datasets. Overall accuracies of the land cover map in the three demonstration countries ranged between 69.6%±5.5% and 90.1%±3.4%, and the LC transition map achieved an overall accuracy of 73.7%, validated with a ground reference dataset collected by Ugandan experts. To evaluate trends in land productivity, the seasonal accumulated production of green biomass is estimated from a Sentinel-2 derived index, which is an indicator of photosynthetic activity and overall ecosystem functionality. The trend of vegetation productivity is estimated for the period 2018-2023 at 10 m spatial resolution. The performance of vegetation productivity is assessed by comparing local productivity to that of similar land units over a larger area. Discrete and continuous land productivity degradation maps are generated based on the combination of the former two. 
Validation of these products is based on internal consistency analysis and indirect validation with external datasets. The concept of carbon stocks in LDN assessments primarily relates to the soil carbon pool and its changes. However, since soil organic carbon (SOC) stock change estimates from remote sensing are not (yet) readily available, SEN4LDN explored the use of above-ground biomass (AGB) changes as a proxy for carbon stock changes, providing an estimate independent of the other two sub-indicators. Two approaches were combined to quantify trends in carbon stocks: a stock change approach based on ESA CCI biomass maps, and a gain-loss approach based on a carbon flux model. Results from this hybrid approach, estimating AGB evolution between 2010 and 2018/20 at 100 m spatial resolution, have been generated for the three countries as a feasibility assessment. Finally, the outputs of the trends in land cover and trends in land productivity sub-indicators are integrated to generate a product that allows calculation of the extent of land degradation for reporting on UN SDG indicator 15.3.1, expressed as the proportion (percentage) of land that is degraded over total land area. In SEN4LDN two methods were tested: (i) the so-called one-out-all-out method, in which a significant reduction or negative change in any one of the sub-indicators is considered to constitute land degradation, and (ii) a continuous sub-indicator integration method that combines the continuous land cover degradation and land productivity degradation products into a continuous land degradation probability index. This allows for a more in-depth interpretation of the combined product, including an assessment of the magnitude or probability of degradation and restoration, and for an interpretation of possibly contrasting sub-indicators. SEN4LDN has shown that there is great interest in LDN indicators at high spatial resolution and high temporal frequency, and that these can be delivered from Sentinel input data. 
Country engagement and interactions through the rounds of Living Labs have been instrumental in understanding the potential and limitations of the different datasets generated. All three data streams (land cover/change, productivity, carbon stocks) have been found useful and should now be operationalized and integrated with national reference data in LDN monitoring frameworks.
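The one-out-all-out integration and the SDG 15.3.1 proportion described above can be sketched in a few lines (a minimal illustration with hypothetical boolean sub-indicator rasters; the array names and toy data are assumptions, not SEN4LDN code):

```python
import numpy as np

def one_out_all_out(lc_deg, prod_deg, carbon_deg):
    """A pixel is degraded if ANY sub-indicator flags degradation."""
    return lc_deg | prod_deg | carbon_deg

def sdg_1531_proportion(degraded, valid_land):
    """SDG 15.3.1: percentage of degraded land over total land area."""
    return 100.0 * degraded[valid_land].sum() / valid_land.sum()

# Toy 3x3 sub-indicator maps (True = degraded)
lc = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 1]], dtype=bool)
prod = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=bool)
carbon = np.zeros((3, 3), dtype=bool)
land = np.ones((3, 3), dtype=bool)

deg = one_out_all_out(lc, prod, carbon)
print(round(sdg_1531_proportion(deg, land), 1))   # 33.3 (3 of 9 pixels)
```

The continuous integration method described in (ii) would instead combine per-pixel degradation probabilities rather than booleans.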
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Enhancements to the European soil organic carbon monitoring system Worldsoils

Authors: Prof. Dr. Bas van Wesemael, Dr. Asmaa Abdelbaki, Prof. Dr. Eyal Ben-Dor, Prof. Dr. Sabine Chabrillat, Dr. Pablo d'Angelo, Prof. Dr Jose A.M. Dematte, Dr. Giulio Genova, Dr. Asa Gholizadeh, Dr. Uta Heiden, Dr. Paul Karlshoefer, Dr. Robert Milewski, Dr. Laura Poggio, Dr. Marmar Sabetizadeh, Adrián Sanz, Dr. Peter Schwind, Dr. Nikolaos Tsakiridis, Prof. Dr. Nikolaos Tziolas, Dr. Julia Yagüe Ballester, Dr. Daniel Žižala
Affiliations: GMV Aerospace and Defence S.A.U., Earth and Life Institute, Université Catholique de Louvain, Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, Fayoum University, Tel Aviv University. Porter School of Environment and Earth Science, Leibniz University of Hannover, Institute of Earth System Science, Department of Soil Science, German Aerospace Center (DLR), Remote Sensing Technology Institute (IMF), University of São Paulo. Luiz de Queiroz College of Agriculture, Department of Soil Science, University of Life Sciences Prague, Aristotle University of Thessaloniki. Laboratory of Remote Sensing, Spectroscopy, and GIS, Department of Agriculture, University of Florida. Southwest Florida Research and Education Center, Department of Soil, Water and Ecosystem Sciences, Institute of Food and Agricultural Sciences, ISRIC - World Soil Information
The EU Soil Monitoring Law, proposed on July 5, 2023, incorporates the use of Copernicus data to enhance the monitoring of Soil Organic Carbon (SOC). The law mandates that Member States utilize satellite data from the Copernicus program to complement ground-based measurements, thus providing a more comprehensive and accurate assessment of the SOC/clay ratio across regions and ensuring consistent and reliable data for soil health monitoring. In this context, the ESA-funded Worldsoils project (Contract No. 400131273/20/I-NB, 2020-24) has developed a pre-operational SOC monitoring system in a cloud environment capable of: (i) predicting topsoil organic carbon content at regional and continental scales from Earth Observation (EO) satellite data with continuous cover over Europe, (ii) leveraging multitemporal soil-spectral data archives and modelling techniques and, (iii) consulting with end users and EO experts to develop soil indices relevant for monitoring topsoil. This abstract summarizes the results obtained in the first version of the system and presents the enhancements to be implemented in the second version during 2025. Soil/land cover types analysed included croplands, grasslands and forests. The system utilized spectral models for croplands and a digital soil mapping approach for permanently vegetated areas (i.e. grasslands and forests, although forests were not included in the independent validation). Models strongly rely on soil reflectance composites from the Sentinel-2 multispectral instrument. The composites provide the median reflectance for all valid pixels over a period of three years in the main growing season (from March to October). The bare soil frequency, a proxy for the degree of crop cover during the growing season, is lower in Mediterranean regions with extensive cover of winter cereals and fodder crops. 
Key outcomes of Worldsoils v1 include:
• A graphical user interface that provides the SOC content and the 90% uncertainty ratio for 50 m pixels in three pilot regions (Wallonia, Central Macedonia and the Czech Republic) and 100 m pixels for the rest of Europe: https://gui.world-soils.com/.
• Evidence that the SOC prediction remains stable, as expected for the short three-year period.
• Reasonably good performance of the SOC prediction algorithms compared to others at continental scale (R²: 0.41 for croplands and 0.28 for permanently vegetated areas).
• Accurate attribution of pixels to one of the two SOC prediction models (i.e. spectral vs digital soil mapping), except for tree crops in Macedonia (Mediterranean regions).
• Evaluation of predicted SOC contents against independent datasets from the National Reporting Centers on Soils in the three pilot regions.
• Satisfactory evaluation of the SOC prediction in Wallonia (Belgium; R² 0.51), although hindered by the limited SOC range in croplands in Greece and the Czech Republic.
• Lower bare soil frequency in Greece because of abundant tree crops, cereals and fodder crops.
• Reproduction of spatial patterns in SOC content like those obtained from detailed regional algorithms using new-generation hyperspectral satellites.
Worldsoils v1 SOC prediction results and their evaluation over the land covers tested in Europe, although promising, will be expanded to a global scale in an enhanced version, Worldsoils v2. 
For this reason, the following objectives are pursued during 2025: (i) to improve the models and SOC products over extended areas in America, Africa and Asia; (ii) to provide a SOC/clay ratio model over Europe that leverages satellite data; (iii) to achieve the production and validation of Worldsoils v2 SOC maps; (iv) to transfer the results, tools and algorithms to ESA’s Application Propagation Environment (APEx); and (v) to import the soil compositing processor into an APEx-compliant service implementation. It is foreseen that at the time of the ESA LPS 2025, the team can report on Worldsoils v1 results and v2 specifications of indices, models, data and system design, as well as the implementation of the system, i.e. coding of indices and models, training and testing results, cloud system adjustments, and improvements to the graphical user interface.
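The median compositing step described above can be sketched as follows (an illustrative simplification, assuming a hypothetical per-acquisition bare-soil mask; the operational SCMaP/Worldsoils processors involve considerably more screening):

```python
import numpy as np

def soil_reflectance_composite(stack, bare_mask):
    """Per-pixel median reflectance over a multi-year image stack,
    restricted to acquisitions where the pixel is flagged as bare soil.
    stack: (time, y, x) reflectance; bare_mask: (time, y, x) boolean."""
    masked = np.where(bare_mask, stack, np.nan)
    composite = np.nanmedian(masked, axis=0)   # median over valid times
    frequency = bare_mask.mean(axis=0)         # bare-soil frequency proxy
    return composite, frequency

# Toy stack: 4 acquisitions of a single pixel; the 0.9 outlier is masked out
stack = np.array([0.1, 0.2, 0.3, 0.9]).reshape(4, 1, 1)
bare = np.array([True, True, True, False]).reshape(4, 1, 1)
comp, freq = soil_reflectance_composite(stack, bare)
print(float(comp[0, 0]), float(freq[0, 0]))   # 0.2 0.75
```

The bare-soil frequency returned alongside the composite corresponds to the crop-cover proxy discussed in the abstract.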
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Leveraging Artificial Intelligence and Earth Observation for Accessible, Predictive Soil Management Insights

Authors: Nikos Tziolas, Anastasia Kritharoula, Giannis Gallios
Affiliations: Department of Soil, Water and Ecosystem Sciences, Institute of Food and Agricultural Sciences, University of Florida
As agricultural systems face growing anthropogenic and environmental pressures, effective soil management is essential for sustainable food production. Recent advancements in digital soil mapping, primarily driven by the integration of cloud computing and Artificial Intelligence (AI) in Earth Observation (EO) data analysis, have opened new possibilities. These innovations have significantly improved the efficiency and accuracy of monitoring soil health, providing better tools for managing and preserving soil resources in response to environmental challenges. However, there are fundamental limitations in current processes. Despite the availability of several spatial products, users often find it difficult to access these maps for informed decision-making related to soil management. This is primarily due to several factors: non-experts often struggle to interact effectively with complex geospatial systems, which require specialized knowledge; access to EO data and generated products can be slow and cumbersome; and many existing tools lack user-friendly interfaces that facilitate quick and intuitive exploration of geospatial information. In this work, we introduce GAIA (Geospatial Artificial Intelligence Analysis), a cutting-edge AI conversational platform developed to simplify and enhance soil management through a chat-based interface. GAIA integrates multispectral Sentinel-2 data with AI-driven predictive models, using convolutional neural networks to estimate key soil properties like Soil Organic Carbon (SOC), pH, cation exchange capacity (CEC), and clay content. Subsequently, the proposed approach focuses on utilizing Large Language Models (LLMs) to transform general stakeholder inquiries about soil health into specific, actionable research questions. By fine-tuning LLMs, we aim to automate the interpretation of natural language queries into technical questions related to soil health indicators, enhancing geospatial data retrieval. 
This process allows stakeholders, such as growers, to easily access and analyze soil health data without deep technical knowledge. The system has been developed by training open-source LLMs, integrating expert guidelines, and enhancing the GAIA platform for efficient, real-time data retrieval. For instance, a grower might inquire, “I want to monitor the health state of my soil in my field to optimize crop production,” and the system would respond by presenting maps and statistics of key soil properties. The GAIA system provides actionable insights without requiring users to have specialized technical knowledge. By leveraging AI’s predictive capabilities, it enhances the accuracy of soil health assessments and simplifies the interpretation of complex geospatial data through natural language queries, making the process more accessible and efficient for non-experts. GAIA's early findings demonstrate its ability to optimize soil sampling and deliver real-time insights on soil conditions, enhancing decision-making in agriculture. Florida has been selected as a demonstration site, showcasing the system's practical applications. Its user-friendly chat interface could potentially empower farmers, land managers, and policymakers to access critical data swiftly, allowing timely responses to environmental challenges. By integrating AI with geospatial data, GAIA bridges the gap between complex analytics and practical use, supporting sustainable farming practices and revolutionizing how stakeholders interact with soil health data.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Earth Observation as a Tool for Monitoring and Reporting on SDG Indicator 15.3.1

Authors: Brian O’Connor, Coleen Carranza, Sara Minelli, Barron
Affiliations: United Nations Convention to Combat Desertification
The United Nations Convention to Combat Desertification (UNCCD) is the custodian agency for SDG Indicator 15.3.1, defined as the proportion of degraded land over total land area. SDG Indicator 15.3.1 is the sole indicator used to measure progress towards SDG Target 15.3, which strives to achieve a land degradation-neutral world by 2030. UNCCD continues to be a significant ‘consumer’ and advocate of Earth Observation (EO) data for sustainable development and land. Since the launch of the SDG Indicator 15.3.1 reporting process in 2018, the UNCCD has accumulated valuable insights into the use of EO data at the science-policy interface, both in terms of the opportunities afforded and where countries experience the greatest challenges. The UNCCD is also a broker of EO data for its 196 country Parties to access free and open global data sources on the three sub-indicators of SDG Indicator 15.3.1: trends in land cover, trends in land productivity (or functioning of the land), and trends in carbon stocks above and below ground. Through provision of free and open global EO data for these sub-indicators as ‘default’ data for national reporting by the UNCCD, countries can report even when access to national datasets is limited. This ‘default data’ model has been very successful, with 115 national estimates of degraded land reported in 2022 and 118 reported in 2018. 
In this presentation, UNCCD will focus on the following key aspects of Earth Observation as a tool for monitoring and reporting on SDG Indicator 15.3.1: (i) how EO has been a game changer for informing evidence-based policy messages on the status and trends in land degradation globally and regionally; (ii) how the use of EO is challenging in certain regional geographies, hampering the ability of countries in hyper-arid regions and small island developing states in particular to report effectively; (iii) challenges faced by countries in using EO data at the national level; and (iv) recommendations to EO data providers to enable more countries to report on SDG Indicator 15.3.1, set evidence-based land degradation neutrality voluntary targets and track progress towards their achievement.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Advancing Soil Organic Carbon Monitoring and Modeling with Hyperspectral Earth Observation: Insights for Policy and Practice

Authors: Robert Milewski, Kathrin J Ward, Asmaa Abdelbaki, Pia Gottschalk, Marta Gómez Giménez, David de la Fuente Blanco, Asa Gholizadeh, Judith Walter, Robert Müller, Albrecht Bauriegel, Sabine Chabrillat
Affiliations: GFZ Helmholtz Centre for Geosciences, GMV Aerospace and Defence, Czech University of Life Sciences Prague, Landesamt für Bergbau, Geologie und Rohstoffe (LBGR), Dezernat Bodengeologie, GFZ Helmholtz Centre for Geosciences & Leibniz University Hannover
Soils are essential for food production and ecosystem services, storing approximately 30% of global terrestrial carbon. Accurate mapping and monitoring of soil properties are critical for assessing soil quality and achieving the goals of policies like the European Directive on Soil Monitoring and Resilience, which aims for 100% healthy soils in Europe by 2050. Among the key indicators suggested by the EU Soil Monitoring Law is the ratio of soil organic carbon (SOC) to soil clay content, which relates the carbon present in the soil to its storage potential. This metric is particularly relevant for addressing soil degradation in agricultural contexts. Spectroscopy in the VNIR-SWIR (400–2500 nm) spectral range has proven highly effective for the precise determination of SOC and clay content, utilizing narrow absorption features that are diagnostic of clay minerals. This approach also overcomes the limitations of broadband sensors by improving the differentiation of complex soil compositions and surface conditions, such as soil moisture, vegetation cover, and soil sealing. Current hyperspectral spaceborne sensors like EnMAP and PRISMA offer a unique opportunity to test and refine soil mapping capabilities, paving the way for global soil monitoring with forthcoming missions like ESA’s Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) and NASA/JPL’s SBG. This research leverages advanced hyperspectral capabilities to explore innovative methods for SOC monitoring and modeling, supporting evidence-based policymaking. Under the EU-funded MRV4SOC project, hyperspectral data from EnMAP, PRISMA, and airborne HySpex sensors (plane and UAV) are utilized to estimate SOC, clay content, and carbon stocks for a demonstration site in NE Germany (Demmin). 
By integrating multitemporal hyperspectral soil property mapping and Sentinel-2 time-series data used in an agro-crop model and the RothC carbon process model, the research delivers accurate assessments of SOC/clay ratios and insights into carbon dynamics influenced by climate conditions and farming practices. Seasonal carbon inputs, derived from end-of-season dry biomass estimates using Sentinel-2 data, enable dynamic evaluations of land management practices, such as cover cropping and low-tillage farming. For the state of Brandenburg, SOC and soil texture maps were generated in collaboration with the Landesamt für Bergbau, Geologie und Rohstoffe Brandenburg (LBGR). This effort, based on over 170 cloud-free EnMAP scenes collected over 2.5 years, produced high-resolution maps that are vital for agricultural and environmental management. Meanwhile, hyperspectral UAV surveys provided finer-scale insights into field heterogeneity, offering a complementary perspective to satellite EO data. These case studies underscore the operational potential of hyperspectral EO missions in delivering actionable insights into soil health and carbon dynamics. They also highlight the critical synergies between current missions like Sentinel-2, EnMAP, and PRISMA, and future missions like CHIME, which promise enhanced spectral, spatial, and temporal resolutions. This research emphasizes the transformative role of hyperspectral EO data in bridging the gap between science and policy. By refining methodologies and generating robust indicators of soil degradation, it directly supports EU environmental initiatives such as the Soil Monitoring Law and the Common Agricultural Policy. The integration of hyperspectral data into advanced models and decision-support tools showcases the indispensable role of EO in monitoring and mitigating environmental degradation.
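The SOC/clay ratio indicator highlighted above is a simple per-pixel computation once SOC and clay maps are available; a minimal sketch with hypothetical values (units and data are purely illustrative, and the division guards against zero clay content):

```python
import numpy as np

def soc_clay_ratio(soc_gkg, clay_gkg):
    """Per-pixel SOC/clay ratio from SOC and clay content maps (same units);
    pixels with zero clay content are set to NaN."""
    return np.divide(soc_gkg, clay_gkg,
                     out=np.full_like(soc_gkg, np.nan),
                     where=clay_gkg > 0)

soc = np.array([[12.0, 8.0], [20.0, 5.0]])       # hypothetical SOC, g/kg
clay = np.array([[150.0, 200.0], [100.0, 0.0]])  # hypothetical clay, g/kg
ratio = soc_clay_ratio(soc, clay)
print(ratio[0, 0])   # 0.08
```

The resulting ratio map relates the carbon present to its storage potential, as described in the abstract; any health threshold applied to it would be policy-defined and is not assumed here.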
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: High resolution soil property maps and their uncertainty for Europe

Authors: Dr. Laura Poggio, Dr. Uta Heiden, Dr. Pablo d'Angelo, Dr. Paul Karlshoefer, Fenny van Egmond
Affiliations: ISRIC - World Soil Information, German Aerospace Center
High-resolution, reliable soil data is crucial for addressing climate change and sustainable land management. Integrating high resolution remote sensing data, such as from Copernicus Sentinel, is essential for improving accuracy and relevance. This study presents an overview of our Digital Soil Mapping (DSM) approach and its innovations. We combine satellite imagery, environmental covariates (e.g., elevation, weather data), and ground truth observations (e.g., LUCAS and other European and national datasets) to create high-resolution soil property maps using statistical models. These maps encompass primary properties (e.g., organic carbon, pH, texture), derived properties, and soil health indicators. We used the Soil Composite Mapping Processor (SCMaP) to derive soil reflectance composites from Sentinel-2 time series. These composites aid in identifying bare soil areas and estimating their spectral reflectance, spectral dynamics and frequency of occurrence, serving as a proxy for land management. Random Forest models, in particular Quantile Random Forests for uncertainty assessment, are employed to predict soil properties. This study delves into the advantages and challenges of using high-resolution remote sensing data with limited ground truth data. We also provide insights into product uncertainty assessment at a continental scale, including accuracy, spatial patterns, and user evaluation. We focus in particular on the relevance of finer resolution and accuracy for continental products.
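The uncertainty idea behind Quantile Random Forests can be approximated by inspecting the spread of individual tree predictions, as in this sketch (synthetic stand-in covariates and target, not LUCAS data; a true Quantile Regression Forest uses the full leaf distributions rather than per-tree predictions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in covariates (e.g. reflectance, elevation) and a
# stand-in soil property to predict
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(500, 3))
y = 2.0 * X[:, 0] + rng.normal(0.0, 1.0, size=500)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Spread of per-tree predictions as a cheap stand-in for QRF quantiles
X_new = rng.uniform(0.0, 10.0, size=(5, 3))
per_tree = np.stack([tree.predict(X_new) for tree in rf.estimators_])
lo, med, hi = np.percentile(per_tree, [5, 50, 95], axis=0)  # 90% band
```

Reporting `lo` and `hi` alongside the prediction is what allows the per-pixel uncertainty layers described in the abstract.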
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Session: C.02.07 FORUM- ESA's 9th Earth Explorer

The FORUM mission will improve the understanding of our climate system by supplying, for the first time, most of the spectral features of the far-infrared contribution to the Earth’s outgoing longwave radiation, particularly focusing on water vapour, cirrus cloud properties, and ice/snow surface emissivity. FORUM’s main payload is a Fourier transform spectrometer designed to provide a benchmark top-of-atmosphere emission spectrum in the 100 to 1600 cm-1 (i.e. 6.25 to 100 µm) spectral region, filling the observational gap in the far-infrared (100 to 667 cm-1, i.e. 15 to 100 µm), which has never been observed from space, spectrally resolved and in its entirety. The focus of this session is on the scientific developments in the frame of this mission and the outlook into the future.
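The wavenumber/wavelength equivalences quoted above follow directly from λ(µm) = 10⁴ / ν(cm⁻¹); a one-line check:

```python
def wavenumber_to_wavelength_um(nu_cm1: float) -> float:
    """Convert wavenumber in cm^-1 to wavelength in micrometres (1 cm = 1e4 um)."""
    return 1e4 / nu_cm1

print(wavenumber_to_wavelength_um(1600))           # 6.25  (short-wave edge)
print(wavenumber_to_wavelength_um(100))            # 100.0 (long-wave edge)
print(round(wavenumber_to_wavelength_um(667), 1))  # 15.0  (far-ir boundary)
```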

Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: FORUM Science: Current status and future plans

Authors: Prof Helen Brindley
Affiliations: Imperial College London, National Centre for Earth Observation
The far infrared (far-ir, defined here as wavelengths between 15 and 100 μm) plays a pivotal role in determining the Earth’s energy balance with, in the global mean, approximately half of our planet's emission to space occurring within this wavelength range. Despite this, the Earth’s outgoing far-ir radiation spectrum has never been systematically measured from space. ESA’s 9th Earth Explorer, the Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission, will change this, opening a new window on climate science by measuring, for the first time, the Earth’s outgoing energy spectrum across the far-ir with high spectral resolution and unprecedented radiometric accuracy. The dominant role of the far-ir in determining the Earth’s Outgoing Longwave Radiation (OLR) is in part due to the strong water vapour rotation band at wavelengths > 16.5 μm. This in turn means that radiative emission in the far-ir is particularly sensitive to water vapour in the climatically important upper troposphere/lower stratosphere (UTLS) region. Similarly, clear-sky longwave radiative cooling through the mid and upper troposphere is dominated by the contribution from the far-ir. FORUM observations offer the potential to both improve our knowledge of UTLS water vapour concentrations and couple them to their associated radiative impact. While for much of the globe this water vapour absorption means that far-ir surface emission cannot be sensed from space, in dry, clear-sky conditions this is no longer the case. As water vapour concentrations reduce, micro-windows in the far-ir become progressively more transmissive such that the surface emission in these regions of the spectrum can propagate to space. Recent studies show that properly accounting for the contribution of surface emissivity in the far-ir may be critical to both reduce persistent climate model biases and determine the pace of high-latitude climate change. 
Moreover, ice clouds, crucial players in determining current and future climate, have emitting temperatures that place the peak of their radiative emission within the far-ir. Our ability to correctly simulate the interaction of the radiation spectrum with ice cloud relies on our capability to adequately represent their macrophysical and microphysical properties. The latter are critically dependent on the complex ice-crystal shapes and their size distributions within the clouds. Recent advances in ice cloud optical modelling have attempted to capture their bulk microphysical properties spanning the entire electromagnetic spectrum. However, while there are many space-based observations of the reflected visible and emitted near- and mid-infrared radiation in the presence of ice clouds that can be exploited to test these developments, there are no such observations that span the far-ir. This represents a major barrier to improving our confidence in our ability to understand and monitor ice cloud properties and their interaction with the Earth’s outgoing longwave energy, particularly since the contrast in ice and water refractive indices between the far-ir and mid-infrared implies that unique information relating to cloud classification and microphysics can be leveraged from measurements of the far-ir spectrum. Dedicated studies are ongoing to understand exactly how FORUM can benefit these areas and to prepare the tools needed to fully exploit the observations. In this talk I will summarise these efforts, providing a high-level overview of recent and planned scientific activities in support of the mission.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: Principal Component Analysis of Infrared Spectra for the Evaluation of Climate Model’s Variability: Application to IASI and ARPEGE-CLIMAT

Authors: Lucie Leonarski, Pascal Prunet, Claude Camy-Peyret, Sarah Pipien, Quentin Libois, Romain Roehrig
Affiliations: Spascia, Institut Pierre-Simon Laplace, Centre National de Recherches Météorologiques / Météo-France
Climate models are key tools for understanding the past and present climate and for performing climate projections. For decades, these models have been validated against broadband measurements of the radiative budget from space, for instance with the Clouds and the Earth’s Radiant Energy System (CERES) instrument. While broadband measurements have proven their utility, they can hide spectral error compensation. To avoid this, spectrally-resolved radiative fluxes could be used. Spectrally-resolved spectra, especially in the infrared (IR), contain valuable information about the climate system, such as the spatial and temporal variability of temperature, water vapour, clouds and other atmospheric constituents, that could be used to investigate climate model deficiencies through systematic model/observation comparison. Such measurements can be obtained from spaceborne instruments like the Infrared Atmospheric Sounding Interferometer (IASI) series, which measures the infrared spectrum between 645 and 2760 cm-1 with high spectral resolution and will be complemented by the future FORUM satellite mission (100-1600 cm-1). Comparing high-resolution spectra can be complex. However, the increasing amount of satellite data used by the Earth observation community has raised interest in compression methods based on Principal Component Analysis (PCA). Beyond data compression, this statistical method has also been successfully employed for noise reduction, instrument artefact removal, and the detection and monitoring of extreme atmospheric events such as fires, volcanic eruptions and pollution episodes. By putting forward the principal directions of variability (eigenvectors), PCA provides an objective tool for analysing climate variability. In this study, we have used PCA to compare the spatial and temporal variability of the IR spectra modelled by the ARPEGE-Climat model to that captured by IASI. 
The outputs of a 7-year AMIP simulation are used with the RTTOV radiative transfer code to produce synthetic clear-sky spectra. Eigenvectors generated from monthly averaged model spectra and IASI measurements are compared using canonical principal angles. The spatial distribution of the principal components as well as their annual cycle are investigated. An approach is proposed for the geophysical interpretation of eigenvectors, to extract useful information from the model/observation comparison about climate model deficiencies in terms of thermodynamical variables. The potential for the evaluation and intercomparison of climate models is discussed in the perspective of the FORUM and IASI-NG missions.
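Comparing leading PCA eigenvectors via canonical (principal) angles can be sketched as follows (synthetic stand-in spectra sharing a few dominant modes; variable names and dimensions are assumptions, not the study's code):

```python
import numpy as np
from scipy.linalg import subspace_angles

def leading_eigenvectors(spectra, k):
    """PCA via SVD of mean-centred data (rows = scenes, columns = spectral
    channels); returns the k leading eigenvectors as columns."""
    centred = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[:k].T                                   # (channels, k)

# Synthetic "observed" and "modelled" spectra sharing three dominant modes
rng = np.random.default_rng(1)
k, n_scenes, n_channels = 3, 120, 40
modes = np.linalg.qr(rng.normal(size=(n_channels, k)))[0]
scores = rng.normal(size=(n_scenes, k)) * np.array([10.0, 6.0, 4.0])
obs = scores @ modes.T + 0.05 * rng.normal(size=(n_scenes, n_channels))
model = scores @ modes.T + 0.05 * rng.normal(size=(n_scenes, n_channels))

# Canonical angles between the two leading-eigenvector subspaces:
# small angles mean the model reproduces the observed variability structure.
angles = subspace_angles(leading_eigenvectors(obs, k),
                         leading_eigenvectors(model, k))
print(np.degrees(angles).max())
```

Large canonical angles would instead flag directions of variability present in the observations but missing from the model.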
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: W-band, HiSRAMS, AERI, FIRR-2, FINESSE and FIRMOS Experiment on Remote Sensing (WHAFFFERS): multi-frequency, multi-platform campaign overview

Authors: Natalia Bliankinshtein, Cuong Nguyen, Keyvan Ranjbar, Paloma Borque, Leonid Nichman, Kenny Bala, Yi Huang, Lei Liu, Benjamin Riot Bretcher, Eve Bigras, Helen Brindley, Jonathan Murray, Zen Mariani, Jean-Pierre Blanchet, Yann Blanchard, Adrian Loftus, Marco Barucci, Claudio Belotti, Giovanni Bianchini, Luca Palchetti, Silvia Viciani, Laura Warwick, Hilke Oetjen, Dirk Schuettemeyer
Affiliations: National Research Council Canada, McGill University, Imperial College London, Environment and Climate Change Canada, Université du Québec à Montréal, NASA Goddard Space Flight Center, CNR National Institute of Optics, European Space Agency
Far-infrared measurements from space at present constitute a measurement gap, which will be addressed by a number of new and planned space missions, in particular, ESA’s Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission, NASA’s Polar Radiant Energy in the Far-InfraRed Experiment (PREFIRE) and Canada’s Thin Ice Cloud in the Far InfraREd (TICFIRE) instrument on NASA’s Atmosphere Observing System (AOS) satellite mission. Airborne measurements with a far-infrared sensor would provide valuable information for advancement of these missions. WHAFFFERS (W-band, HiSRAMS, AERI, FIRR-2, FINESSE and FIRMOS Experiment on Remote Sensing) is a multi-platform, multi-frequency field campaign planned for January-February 2025 in Canada. The campaign will include data collection at two ground-based sites, one in Ottawa, Ontario and one at the Gault reserve near Montreal, Quebec, as well as flight data collection by the NRC Convair-580, timed with overpasses of PREFIRE satellites. WHAFFFERS research objectives include radiative closure experiments in microwave, infrared and far-infrared bands, observations of snow and ice surface emissivity, as well as synergy of active and passive sensors for atmospheric retrievals. National Research Council Canada’s Convair-580, responsible for airborne data collection, is a twin-engine, pressurized turbo-prop aircraft equipped with a state-of-the-art suite of atmospheric probes. In-situ measurements include basic aircraft state and atmospheric state variables, as well as cloud bulk and microphysics parameters and aerosols. Remote sensors onboard the aircraft include NRC airborne W- and X-band radars, the Radar-Enhanced W-band Airborne Radiometer Detection System (REWARDS), a 355 nm elastic cloud lidar and the High Spectral Resolution Airborne Microwave Sounder (HiSRAMS). 
In addition to the typical suite of airborne instruments, the campaign will use the Far-InfraRed Radiometer-2 (FIRR-2), a passive instrument belonging to Environment and Climate Change Canada (ECCC). FIRR-2 is an innovative atmospheric radiometer that provides radiometrically-calibrated data in eight spectral channels in the range of 360-1265 cm-1. Measurements are enabled by a far-infrared detector based on microbolometers developed by the Institute of Optics of Canada (INO). This new technology is compact, low-cost, and can operate remotely 24/7 without any human intervention. In addition to post-processed temperature and moisture profiles, FIRR-2 can also provide cloud microphysics information on Arctic Thin Ice Clouds (TIC). The instrument was modified by its manufacturer and redesigned for nadir viewing onboard the Convair-580 for the WHAFFFERS campaign. The ground-based component of the campaign is two-fold. The primary sensors are McGill University's extended Atmospheric Emitted Radiance Interferometer (AERI), 425-3000 cm-1, and Imperial College's Far Infrared Spectrometer for Surface Emissivity (FINESSE), 400-1600 cm-1, deployed at the Gault site, as well as the National Institute of Optics' Far-Infrared Radiation Mobile Observation System (FIRMOS), 100-1600 cm-1, deployed near the Ottawa International Airport. Secondary observations include a specialized surface observation site at the Ottawa location, deployed by ECCC and McGill University. Additionally, two Climate Sentinels network stations operated by McGill University and Université du Québec à Montréal will collect critical surface data to complement the campaign measurements. 
With four sensors measuring at far-infrared frequencies, WHAFFFERS targets applications that benefit from integrated sensing, building synergies between ground-based and airborne measurements and simultaneous overpasses of the PREFIRE satellite mission, both in clear conditions and in the presence of thin clouds. This presentation will provide an overview of the WHAFFFERS campaign and a first look at the data collected during the campaign period.

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: FORUM mission development status

Authors: Kotska Wallace, Paolo Laberinti, Felice Vanin, Michael Miranda, Dulce Lajas, David Buehler, Cedric Salenc, Hilke Oetjen
Affiliations: European Space Agency
The Far-infrared Outgoing Radiation Understanding and Monitoring mission, FORUM, is being developed within FutureEO, ESA’s Earth Observation research and development programme. It will deliver spectrally resolved measurements of the Earth’s emission spectrum in the far infrared, with continuous spectral coverage, to help scientists understand and quantify Earth’s radiative processes. In 2022 the contract for the FORUM space segment was signed with an Airbus-led consortium. This presentation will describe the development status of the instrument and platform subsystems and the preparation of the mission and ground segment. FORUM’s science payload consists of two instruments. The nadir-pointing FORUM Sounding Instrument (FSI), developed by OHB, is based on a Fourier-transform spectrometer hosting a double-pendulum interferometer as the core sub-unit. For calibration purposes its view can be directed to a cold space view or to the absolute reference of a floating-temperature blackbody unit, whose temperature is precisely measured. The interferograms produced by the interferometer are imaged using pyroelectric detectors. The FORUM Embedded Imager (FEI) is being developed by Leonardo for scene heterogeneity assessment. It is a long-wave infrared camera which uses a microbolometer array to form a 36 by 36 km image with 600 m ground sample resolution. Results from several component- and subsystem-level tests became available at the end of 2024. The spectrometer Instrument Development Model has been assembled for performance characterisation of radiometric sensitivity, including the detector assembly, optical components and the alignment. An Engineering Qualification Model of the Interferometer Mechanism Assembly has also been used to conduct synchronisation tests to de-risk the electro-functional interface for the interferogram acquisition, checking the timing performance of the Scan Unit, Pointing Unit and Interferometer Mechanism Assembly.
A performance model for the FSI has been used to generate performance predictions, based on component- and system-level data. The first modules of the FORUM end-to-end performance simulator, which includes a prototype of the data processor, have been delivered. Elaboration of the preliminary design of the payload data ground segment is underway. The Instrument Critical Design Review was held in 2024 and a launch contract was signed for Vega-C. The FORUM System CDR will take place in the first half of 2025.

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: EE9 FORUM L1/L2 data processors and E2E simulations: development of test data to prepare for FORUM launch

Authors: Bram Sanders, Hilke Oetjen, Dulce Lajas, David Buehler, Kotska Wallace
Affiliations: ESA - ESTEC
The FORUM mission aims to measure the Earth’s emission spectrum in the far infrared, spectrally resolved and with continuous spectral coverage. The spectral range from 100 to 1600 cm-1, i.e. 100 to 6.25 μm, is measured with a resolution better than 0.5 cm-1, which enables retrieval of UTLS water vapour, high-altitude ice cloud properties and surface emissivities in polar regions. Another geophysical data product is the TOA spectral flux, intended specifically for time series analysis and climate model benchmarking. FORUM therefore aims to measure radiances with a high absolute radiometric accuracy, better than 0.1 K in terms of brightness temperature. FORUM carries two instruments: an imager dedicated to scene heterogeneity assessment and a Fourier-transform spectrometer measuring in nadir. The latter samples a FOV with a diameter of 15 km every 100 km. Spectral fluxes are derived from nadir radiances using atmospheric profiles retrieved from the same measurement. The FORUM satellite will fly in loose formation with IASI-NG on Metop-SG, and both FORUM stand-alone and synergy L2 products with IASI-NG are foreseen. The primary benefit of their combination is the extension of the spectral range into the mid infrared to complete coverage of the outgoing longwave radiation spectrum. In this presentation we will introduce ESA’s development approach to the FORUM End-to-End performance Simulator (FEES) and will discuss the FORUM L1 and L2 data products. The FEES is a comprehensive software tool to support the development of the FORUM prototype processors and the FORUM ground segment implementation by embedding them in a closed-loop simulation environment that includes the FORUM Instrument Simulator Modules (ISMs) for the spectrometer and the imager and a sophisticated Scene Generator Module (SGM). In addition, a dedicated FEES module simulates the IASI-NG spatial collocation and synergy retrievals.
Finally, we will discuss the status of the FEES for the generation of test data for the user community to prepare for the availability of the operational FORUM data for scientific applications.

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: Improvement of the Long-term Traceability to the SI of the FORUM on-board Blackbody

Authors: Lars Bünger, Bruno Rohloff, Dirk Fehse, Max Reiniger, Daniela Narezo Guzman, Christian Monte
Affiliations: PTB
Ensuring the traceability of outgoing longwave radiation (OLR) measurements and achieving the lowest uncertainties during the entire duration of missions is crucial for the new generation of Earth observation satellites and for inter-mission comparability. To maintain very high accuracy in radiometric measurements in the infrared spectral range, regular checks and recalibrations of the on-board spectrometers are required. The calibration concept of the FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) mission is based on an optimised and precisely characterised on-board blackbody. This paper describes the implementation of a strategy to maintain high standards of temperature measurement of the on-board blackbody over the duration of the FORUM mission. The strategy consists of enhancing the reliability and accuracy of the blackbody contact temperature measurement by using additional sensor technology besides the previously chosen Pt2000 type. Having two sets of sensors allows the identification of sensor-to-sensor variation within the same set and of set-to-set variation possibly caused by different aging processes among sensor types. Identifying these variations improves the reliability and accuracy of the blackbody temperature measurement. The design of the flight blackbody was finalized prior to the selection of the additional sensor technology. Thus, compatibility of the additional sensors with the pre-defined electrical instrumentation was required. In addition, the geometric dimensions of these sensors had to be comparable to or smaller than those of the Pt2000 sensors, to allow the usage of Pt2000 sensors if no suitable candidate could be found. Four different types of thermistors (NTC) were pre-selected as suitable candidates for the additional implementation. Three of these were epoxy-encapsulated and space-qualified, and one type was glass-encapsulated and not space-qualified.
After the initial plausibility, stability, reproducibility and self-heating measurement campaign on the 95 sensors, two thermistor types were excluded. The two remaining types, including the glass-encapsulated type, were further analyzed. Three sensors of each pre-selected NTC type, together with two Pt2000 sensors, were mounted in a specially developed test plate based on the design of the emitting backplate of the flight blackbody. This simulated the mechanical integration into the satellite's blackbody, including the various adhesives and cover plates. Furthermore, the test plate was geometrically designed in such a way that it could be integrated into various PTB measurement setups. The sensors integrated into the test plate were investigated for the effects of self-heating, thermal cycling, thermal vacuum, mechanical shock and vibration. The remaining NTC sensors of the two pre-selected types were subjected to thermal cycling and characterized for self-heating and hysteresis. Finally, the glass-encapsulated NTC was recommended as the additional sensor type for the FORUM on-board blackbody. A group of selected sensors was calibrated and fully characterized with respect to repeatability, reproducibility, drift and residual errors (hysteresis effects) and provided to the integrator of the on-board blackbody. Furthermore, another outcome of the project was the recommendation to modify the on-board electronics to provide varying sensing currents. This enables the on-board measurement of the self-heating effect. By monitoring the self-heating at regular intervals, the thermal contact of the sensors can be checked for stability. This recommendation will also be implemented in the flight hardware. Financial support of this work by the ESA project Novel Reference/Calibration System to Measure Spectral Radiance on the Range 4 μm to 100 μm - CCN1 is gratefully acknowledged.

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Session: A.07.07 Advancements in Observation of Physical Snow Parameters

Comprehensive quantitative observations of the physical properties of the seasonal snow cover are of great importance for water resources, climate impact and natural hazard monitoring activities. This has been emphasized by the Global Energy and Water EXchanges (GEWEX) project (a core project of the World Climate Research Programme, WCRP) for decades and highlighted by the Global Precipitation Experiment (GPEX) (a WCRP Lighthouse Activity) launched in October 2023. Satellite-based observation systems are the only efficient means of obtaining the required high temporal and spatial coverage over the global snow cover. Due to their sensitivity to dielectric properties and their penetration capabilities, SAR systems are versatile tools for snow parameter observations. Significant advancements have been achieved in SAR-based retrieval algorithms and their application to operational snow cover monitoring. Additionally, lidar backscatter measurements have been proven to provide accurate observations of snow height and its changes. However, there is still a need for improvement of snow cover products addressing physical parameters such as snow depth, SWE, liquid water content, freezing state and snow morphology. In this session the current status of physical snow cover products will be reviewed and activities towards further improvements will be presented, taking into account satellite data of current and future satellite missions. In this context, a broad range of observation techniques is of interest, including methods based on backscatter intensity, polarimetry, interferometry, tomography, as well as multi-frequency and multi-sensor approaches.

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Snow Water Equivalent (SWE) Retrieval Algorithms Based on Volume Scattering Approach from Dual Frequency Radar Measurements.

Authors: Firoz Borah, Prof. Leung Tsang, Dr. Edward Kim, Dr. Michael Durand
Affiliations: University Of Michigan, NASA Goddard Space Flight Center, The Ohio State University
In this paper we describe two retrieval algorithms for snow water equivalent (SWE) based on the volume scattering of snow at X (9.6 GHz) and Ku (17.2 GHz) bands. The significance of the algorithms is that neither a prior on grain size nor one on scattering albedo is required - an important advancement, as accurate estimates of grain sizes are difficult to obtain, especially on a global basis. The two algorithms are validated with 4 sets of airborne data and 3 years of tower-based time series measurements. In the algorithms, the rough surface scattering from the snow/soil interface is first subtracted to obtain the net volume scattering of the snowpack. The physically-based bi-continuous DMRT model was used to generate a lookup table of backscattering coefficients as functions of several snow parameters. Using the lookup table, the model was further parametrized; the parameterized model gives the X- and Ku-band co-polarization backscatter as a pair of equations in terms of two parameters: SWE and the scattering albedo at X band (ωX). We present a critical analysis of the solution space, and illustrate the domains within the solution space where SWE may be uniquely estimated. We show that the uniqueness is further improved by using time series radar measurements. The robustness of the no-prior approach was validated with airborne observations, by using a prior SWE value that is intentionally far (75% different) from the true SWE. Validation using tower-based data was also conducted, using time series observations from the NoSREx experiment in Sodankylä, Finland. In this case, the SWE of the previous time step is used to choose between the two solutions for the current time step. The cost function thus incorporates the previously retrieved SWE, initialized with a time series starting from a snow-free condition. Recently we have also performed full wave simulations of rough surface scattering of the snow/soil interface up to rms heights of 1.5 wavelengths.
This is 5 times larger than the previous upper limit of 0.3 wavelength. The results can be used for L band, C band, X band and Ku band. The model results improve the accuracy of rough surface scattering subtraction of the SWE retrieval algorithms. These advances in snow retrieval algorithms were central factors in a recent global SWE mission concept being rated “selectable” for the first time by either NASA or ESA.
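The two-parameter inversion described above can be sketched numerically. The forward functions below are purely illustrative stand-ins (invented coefficients, not the authors' DMRT-derived parameterization); they only show how a pair of X- and Ku-band co-polarized backscatter observations can be searched for a unique (SWE, ωX) pair without any prior:

```python
import math

def sigma0_db(swe_mm, albedo_x, band):
    """Hypothetical co-pol backscatter model (dB); a toy stand-in for the
    DMRT lookup-table parameterization described in the abstract."""
    # Assumption: the Ku-band albedo scales nonlinearly with the X-band one,
    # so the two bands constrain (SWE, albedo) independently.
    w = albedo_x if band == "X" else min(1.0, 1.6 * albedo_x ** 1.2)
    return -25.0 + 10.0 * math.log10(1.0 + 3.0 * w * swe_mm / 100.0)

def retrieve_swe(obs_x_db, obs_ku_db):
    """Grid search for the (SWE, albedo_X) pair matching both bands."""
    swe_grid = range(0, 301, 10)                         # SWE in mm
    albedo_grid = [round(0.05 * i, 2) for i in range(1, 11)]
    best, best_cost = None, float("inf")
    for swe in swe_grid:
        for w in albedo_grid:
            cost = ((sigma0_db(swe, w, "X") - obs_x_db) ** 2
                    + (sigma0_db(swe, w, "Ku") - obs_ku_db) ** 2)
            if cost < best_cost:
                best, best_cost = (swe, w), cost
    return best
```

With synthetic observations generated from the same toy model, the search recovers the generating pair without a prior on albedo, mirroring the no-prior property claimed above; the real algorithm additionally subtracts the rough-surface term and uses the DMRT-derived lookup table.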

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Sensitivity Analysis of Snow-Parameter Retrieval by Means of Tomographic Profiling Using KAPRI

Authors: Esther Mas Sanz, Dr. Marcel Stefko, Othmar Frey, Irena Hajnsek
Affiliations: Institute of Environmental Engineering, Swiss Federal Institute of Technology (ETH), GAMMA Remote Sensing AG, Microwaves and Radar Institute, German Aerospace Center (DLR)
Snow is a key cryosphere parameter, covering about 31% of the land surface [1], and is one of the major drivers of our planet’s climate. It has a direct influence on fresh water reservoirs, sea level variations and Earth’s energy budget, to name a few. Furthermore, snow is known to have a direct impact on the climate by affecting the planet’s albedo [2]. Given the importance of snow for better understanding Earth’s past, present and future climate, there have been many efforts within the radar community over several decades to characterize the backscattered signal of the snowpack and to establish links to specific properties such as density, grain size, crystal structure or layering (e.g. [3]). Depending on these specific properties of the snowpack and the frequency of the electromagnetic waves, different penetration depths will be achieved. In the case of dry snow, it is expected that the electromagnetic waves can penetrate up to a few meters even at high frequencies [4]. Radar tomographic profiling is an imaging technique that allows the vertical profile of snow to be reconstructed from multibaseline interferometric acquisitions [5]. This technique has a clear advantage over conventional methods for snow profiling, e.g. snow pit observations, as, first, it does not destroy the snowpack and, second, it provides a complete image of the area of interest (and not a single data point). The literature shows that, over the last decade, there has been a growing interest in tomographic profiling experiments using ground-based systems to fully characterize snow-covered areas (e.g. [6],[7]). In this experiment, we have acquired tomographic radar data of snow on the Aletsch glacier using KAPRI, the Ku-band Advanced Polarimetric Radar Interferometer, a real-aperture frequency-modulated continuous-wave (FMCW) ground-based radar operating at a central frequency of 17.2 GHz, based on a polarimetric version of the Gamma Portable Radar Interferometer (GPRI) [8-10].
The experiment setup consists of two GPRI units deployed on a pair of exterior terraces of the Jungfraujoch High Altitude Research Station with a vertical separation of 4.2 meters. One of the units acts as the primary device, with transmission and reception capabilities, while the other unit acts as a secondary device, as a passive receiver. The effective perpendicular baselines vary substantially with the incidence angle. This, in combination with the dependency on range, leads to varying unambiguous heights and tomographic resolutions from near to far range. The aim of this experiment is twofold: first, to investigate the capabilities of bistatic KAPRI for retrieving snow layering in a glacier environment; second, if the height resolution is favorable, to investigate the snowpack signatures derived from the snow vertical profiling. [1] Tsang, L. et al. (2022). Review article: Global monitoring of snow water equivalent using high-frequency radar remote sensing. Cryosphere, 16(9), 3531–3573. https://doi.org/10.5194/TC-16-3531-2022 [2] ESA - Snow grain size – it matters. (n.d.). Retrieved July 27, 2023, from https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-3/Snow_grain_size_it_matters [3] Mätzler, C. (1987). Applications of the interaction of microwaves with the natural snow cover. Remote Sensing Reviews, 2(2), 259–387. https://doi.org/10.1080/02757258709532086 [4] Rignot, E., et al. (2001). Penetration depth of interferometric synthetic-aperture radar signals in snow and ice. Geophysical Research Letters, 28(18), 3501–3504. https://doi.org/10.1029/2000GL012484 [5] Griffiths, H. D., & Baker, C. J. (2006). Fundamentals of Tomography and Radar. NATO Security through Science Series A: Chemistry and Biology, 171–187. https://doi.org/10.1007/1-4020-4295-7_08 [6] Tebaldini, S., et al. (2013). High resolution three-dimensional imaging of a snowpack from ground-based SAR data acquired at X and Ku Band.
International Geoscience and Remote Sensing Symposium (IGARSS), 77–80. https://doi.org/10.1109/IGARSS.2013.6721096 [7] Frey, O. et al. (2023). Analyzing Time Series of Vertical Profiles of Seasonal Snow Measured by SAR Tomographic Profiling at L/S/C-Band, Ku-Band, and Ka-Band in Comparison With Snow Characterizations. International Geoscience and Remote Sensing Symposium (IGARSS), 754–757. https://doi.org/10.1109/IGARSS52108.2023.1028302z [8] Werner, C., et al. (2008). A real-aperture radar for ground-based differential interferometry. International Geoscience and Remote Sensing Symposium (IGARSS), 3(1), 210–213. https://doi.org/10.1109/IGARSS.2008.4779320 [9] Baffelli, S., et al. (2018). Polarimetric calibration of the ku-band advanced polarimetric radar interferometer. IEEE Transactions on Geoscience and Remote Sensing, 56(4), 2295–2311. https://doi.org/10.1109/TGRS.2017.2778049 [10] Stefko, M., et al. (2022). Calibration and Operation of a Bistatic Real-Aperture Polarimetric-Interferometric Ku-Band Radar. IEEE Transactions on Geoscience and Remote Sensing, 60, 1–19. https://doi.org/10.1109/TGRS.2021.3121466
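The dependence of unambiguous height on baseline, range and incidence angle mentioned above follows the textbook InSAR relation. The sketch below is a generic illustration rather than the KAPRI processing chain, and the 1 km slant range and 45° incidence used as inputs are assumed for illustration only:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def height_of_ambiguity(freq_hz, slant_range_m, inc_angle_deg, b_perp_m, p=1):
    """Textbook InSAR height of ambiguity: lam * R * sin(theta) / (p * B_perp).
    Conventionally p = 2 for a monostatic repeat-pass pair and p = 1 for a
    single-pass bistatic pair (one transmitter, one passive receiver)."""
    lam = C / freq_hz
    return lam * slant_range_m * math.sin(math.radians(inc_angle_deg)) / (p * b_perp_m)

# KAPRI-like numbers: Ku-band at 17.2 GHz, 4.2 m vertical separation,
# assumed 1 km slant range and 45 deg incidence.
h_amb = height_of_ambiguity(17.2e9, 1000.0, 45.0, 4.2)
```

Under these assumed numbers the unambiguous height is on the order of 3 m; in practice the effective perpendicular baseline, and hence this value, varies from near to far range as the abstract notes.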

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Modelling polarimetric Ku- and L-band synthetic aperture radar observations of snow-covered Arctic terrain using airborne CryoSAR instrument data and field measurements

Authors: Richard Kelly, Jeffrey Welch, Dan Kramer, Alex Langlois, Gabriel Hould Gosselin, Nick Rutter, Christian
Affiliations: University Of Waterloo
The Ku-band backscatter from snow at 13.5 GHz is strongly influenced by the snow water equivalent (SWE) of accumulated dry snow. The physical connection between SWE and the Ku-band backscatter response is generally understood. However, there is significant interest in the development of methods to retrieve SWE from Ku-band SAR observations when coupled with snowpack microstructure data that represent the state of a multi-layered snowpack. Although snow microstructure can be approximated using models of snow physics such as SNOWPACK, direct in situ measurements of microstructure provide an opportunity to force microwave electromagnetic models, such as the snow microwave radiative transfer (SMRT) model, with observed field data that are unsmoothed and represent the high spatiotemporal variability of snow microstructure. SMRT model estimates and their behaviour over a range of observed states can then be compared with the SAR observations, thereby testing the model skill. This paper describes an experiment to compare SAR backscatter data with SMRT estimates of backscatter using field-observed snow microstructure measurements at Trail Valley Creek, Northwest Territories, and at Cambridge Bay, Nunavut. Wintertime snow microstructure data were acquired at these sites when Ku- and L-band polarimetric airborne SAR observations were made. The snowpack at Cambridge Bay is generally moderate to shallow, with snow thicknesses ranging from 30 cm to 70 cm. The Trail Valley Creek snow sites were unusually thick in 2024, with significant accumulations across the landscape and especially in the river gulleys. Ku-band co-polarized data demonstrate a positive correlation with snow thickness, although it is moderated by the microstructure layering (layer snow grain specific surface area and density). The L-band response is generally unaffected by the snow but responds to the sub-surface roughness scattering and volume scattering.
This research has important implications for SAR retrievals of SWE using Ku-band and L-band polarimetric radar sensors. The Terrestrial Snow Mass Mission (TSMM) is a dual-frequency Ku-band system, in early planning stages, designed for SWE estimation. To achieve its goal, the Ku-band instruments will be used to estimate SWE and snow microstructure, while the mission plans to leverage C-band SAR (such as Sentinel-1 or the RADARSAT Constellation Mission) to characterize the freeze-thaw and roughness state of the underlying soil. L-band data from NISAR or the planned Radar Observing System for Europe in L-band (ROSE-L) mission will have the capacity to quantify the sub-nivean soil roughness and freeze-thaw status more fully, by virtue of the longer wavelength and, therefore, deeper soil penetration. The CryoSAR instrument, therefore, provides an opportunity to test the multi-frequency responses of the snow-covered terrestrial environment to support ongoing planned Ku- and L-band missions for snow mass estimation.

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Addressing spatiotemporal challenges of InSAR Snow Water Equivalent retrieval using MultiChannel and Maximum A Posteriori estimators

Authors: Jorge Jorge Ruiz, Manu Holmberg, Juha Lemmetyinen, Ioanna Merkouriadi, Anna Kontu, Jouni Pulliainen, Jaan Praks
Affiliations: Finnish Meteorological Institute, Aalto University
The retrieval of changes in Snow Water Equivalent (ΔSWE) using repeat-pass Interferometric Synthetic Aperture Radar (d-InSAR) can be achieved by simple inversion of the interferometric phase [1]. Furthermore, little ancillary data is required, making it an interesting approach for high-resolution monitoring of seasonal snow accumulation. However, snow-covered surfaces are highly affected by decorrelation due to the fast-changing nature of the snow [2,3,4]. Environmental effects such as wind [2,4], snow melt between passes [5,6], or air temperature [2,3,6] have been linked to increased decorrelation. This often results in noisy interferograms where heavy spatial filtering needs to be applied, hindering detection of the spatial accumulation pattern. Low frequencies are less prone to suffer from these effects and, consequently, are regarded as more suitable for the d-InSAR retrieval technique, as they can conserve coherence over longer temporal baselines. However, low frequencies lack precision due to the long wavelength, and small deviations (either systematic or stochastic) in the interferometric phase can translate into errors of several millimeters in ΔSWE. Conversely, high frequencies are more affected by temporal decorrelation but offer higher retrieval precision. Furthermore, high frequencies are increasingly affected by the interferometric phase exceeding 2π for typical temporal baselines, causing lost phase cycles in the signal [7]. MultiChannel (MCh) techniques refer to the combination of multiple, statistically independent measurements to improve accuracy and reduce ambiguities and retrieval noise [8]. By exploiting the statistics of the measurements, MCh allows the formulation of a Maximum Likelihood Estimator (MLE) that enables the robust estimation of parameters. Furthermore, spatial constraints can also be applied in Maximum A Posteriori (MAP) estimators, accounting for the contextual information.
This can be done, e.g., by favoring similarity among nearby pixels. Additionally, the use of Markov Random Fields has been proposed to deal with noisy interferograms in a MCh context [9]. In the upcoming years, the availability of SAR satellites will increase dramatically [10] and new opportunities and mission concepts enabling multi-frequency SAR will emerge. For example, NISAR (the new NASA-ISRO SAR) carries L- and S-band radars, ESA's ROSE-L (L-band) satellites may orbit alongside the existing Sentinel-1 (C-band), and TSMM from the CSA will carry a dual-frequency Ku-band radar. This makes MCh an interesting technique to improve retrievals. To support the investigation, we made use of SodSAR [11], a tower-based SAR system with InSAR capabilities located in Northern Finland. The radar performed a measurement every 6 hours at L-, S-, C-, and X-bands over a non-vegetated area. Four configurations of MCh were considered: L- and S-bands, L- and C-bands, C- and X-bands, and all bands. We investigated the effect of the temporal baseline on the retrieval. For each temporal baseline, all possible pairs from the dataset's images were generated and ΔSWE retrieved. The retrieved ΔSWE was then compared to an in-situ snow scale. Results indicate an r²>0.8 for up to 20 days of temporal baseline. To evaluate the MAP estimators, we performed simulations using semi-synthetic data based on an ALOS2 L-band interferometric pair and coincident SnowModel simulations [12]. The coherence from ALOS2 was used along with the ΔSWE from SnowModel to generate noisy interferometric phase maps. To generate the coherence for the other frequency bands, we degraded the coherence from ALOS2, and generated noisy interferometric phase maps for L-, S-, C-, and X-bands. We performed the inversion of ΔSWE at pixel level and by locally fitting planes over the neighboring pixels.
We included the prior of ΔSWE as a mixture of normal distributions and investigated the implementation of MRFs based on Local Gaussian Markov Random Fields (LGMRF). Both priors are controlled by hidden variables, or hyperparameters, which need to be estimated from the noisy data. To solve the optimization problem, we employed the Monte Carlo Expectation Maximization (MCEM) algorithm. A Metropolis-Hastings algorithm was used to generate samples from the posterior distribution, which were directly used for the hyperparameter estimation. The result from the inversion of these techniques was compared to the same SnowModel map. The combination of local planes and the normal-distribution prior provided the best solution in terms of RMSE and Pearson correlation while keeping the computational burden relatively simple. The LGMRF provided good results at the cost of an increased computational load. [1]. T. Guneriussen, K. A. Hogda, H. Johnsen, and I. Lauknes, “Insar for estimation of changes in snow water equivalent of dry snow,” in IGARSS 2000. IEEE 2000 International Geoscience and Remote Sensing Symposium. Taking the Pulse of the Planet: The Role of Remote Sensing in Managing the Environment. Proceedings (Cat. No.00CH37120), vol. 2, 2000, pp. 463–466 vol.2. [2]. J. J. Ruiz, J. Lemmetyinen, A. Kontu, R. Tarvainen, R. Vehmas, J. Pulliainen, and J. Praks, “Investigation of environmental effects on coherence loss in sar interferometry for snow water equivalent retrieval,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2022. [3]. S. Leinss, A. Wiesmann, J. Lemmetyinen, and I. Hajnsek, “Snow water equivalent of dry snow measured by differential interferometry,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 8, pp. 3773–3790, 2015. [4]. S. Oveisgharan, R. Zinke, Z. Hoppinen, and H. P.
Marshall, “Snow water equivalent retrieval over idaho, part a: Using sentinel-1 repeat-pass interferometry,” The Cryosphere Discussions, vol. 2023, pp. 1–19, 2023. [Online]. Available: https://tc.copernicus.org/preprints/tc-2023-95/ [5]. Hoppinen, Z., Oveisgharan, S., Marshall, H.-P., Mower, R., Elder, K., and Vuyovich, C.: Snow water equivalent retrieval over Idaho – Part 2: Using L-band UAVSAR repeat-pass interferometry, The Cryosphere, 18, 575–592, https://doi.org/10.5194/tc-18-575-2024, 2024. [6]. J. Jorge Ruiz, I. Merkouriadi, J. Lemmetyinen, J. Cohen, A. Kontu, T. Nagler, J. Pulliainen, and J. Praks, “Comparing insar snow water equivalent retrieval using alos2 with in situ observations and snowmodel over the boreal forest area,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–14, 2024. [7]. K. Belinska, G. Fischer, G. Parrella and I. Hajnsek, "The Potential of Multifrequency Spaceborne DInSAR Measurements for the Retrieval of Snow Water Equivalent," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 2950-2962, 2024, doi: 10.1109/JSTARS.2023.3345139 [8]. F. Baselice, G. Ferraioli, V. Pascazio, and G. Schirinzi, “Contextual information-based multichannel synthetic aperture radar interferometry: Addressing dem reconstruction using contextual information,” IEEE Signal Processing Magazine, vol. 31, no. 4, pp. 59–68, 2014. [9]. G. Ferraiuolo, V. Pascazio and G. Schirinzi, "Maximum a posteriori estimation of height profiles in InSAR imaging," in IEEE Geoscience and Remote Sensing Letters, vol. 1, no. 2, pp. 66-70, April 2004, doi: 10.1109/LGRS.2003.822882. [10]. R. Wilkinson, M. Mleczko, R. Brewin, K. Gaston, M. Mueller, J. Shutler, X. Yan, and K. Anderson, “Environmental impacts of earth observation data in the constellation and cloud computing era,” Science of The Total Environment, vol. 909, p. 168584, 2024. [Online]. 
Available: https://www.sciencedirect.com/science/article/pii/S0048969723072121 [11]. Jorge Ruiz, J.; Vehmas, R.; Lemmetyinen, J.; Uusitalo, J.; Lahtinen, J.; Lehtinen, K.; Kontu, A.; Rautiainen, K.; Tarvainen, R.; Pulliainen, J.; et al. SodSAR: A Tower-Based 1–10 GHz SAR System for Snow, Soil and Vegetation Studies. Sensors 2020, 20, 6702. https://doi.org/10.3390/s20226702 [12]. Liston, Glen E.; Elder, Kelly. 2006. A distributed snow-evolution modeling system (SnowModel). Journal of Hydrometeorology. 7(6): 1259-1276.
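The simple phase inversion of [1] that underlies these retrievals can be sketched as follows (the sign convention varies between processors; the relation holds for dry snow, with the incidence angle θ in radians):

```python
import math

def dswe_from_phase(dphi_rad, wavelength_m, inc_angle_rad):
    """Invert the dry-snow d-InSAR phase relation of Guneriussen et al. (2000), ref. [1]:
        dphi = -(4 * pi / lam) * dSWE * (1.59 + theta**2.5)
    Returns the change in SWE in the same length unit as the wavelength."""
    alpha = 1.59 + inc_angle_rad ** 2.5
    return -dphi_rad * wavelength_m / (4.0 * math.pi * alpha)
```

A full 2π phase cycle at C-band (λ ≈ 5.5 cm) corresponds to a centimeter-scale ΔSWE, which illustrates why the abstract notes that high frequencies wrap (exceed 2π) over typical temporal baselines while low frequencies trade that robustness for precision.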

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: High-resolution snow depth profiles from ICESat-2

Authors: Désirée Treichler, Marco Mazzolini, Simen Aune, Yves Bühler, Luc Girod, Zhihao Liu, Livia Piermattei, Clare Webster
Affiliations: Department of Geosciences, University of Oslo, WSL Institute for Snow and Avalanche Research SLF, Department of Geography, University of Zürich
To date, snow depth mapping requires manual measurements or targeted field campaigns. The ATLAS sensor onboard the ICESat-2 satellite acquires profiles of surface elevation measurements of very high accuracy. When compared with an accurate digital elevation model (DEM) from snow-free conditions, these data translate directly into snow depth measurements. Previous studies used the spatially summarised ICESat-2 data products ATL06 and ATL08 with an along-profile resolution of 20-100 m. They found that ICESat-2 can provide average snow depth estimates at the watershed scale with decimeter-level uncertainties. Uncertainties and bias were found to increase strongly with slope and to depend on the reference DEM accuracy and the quality of the co-registration of the datasets. This study uses the individual photon data product ATL03, which has an along-track resolution of 0.7 m and thus the potential to provide high-resolution snow depth profiles. Each overpass results in a profile pair separated by 90 m laterally and consisting of a weak and a strong beam, the latter with approximately four times as many return photons. We compare different photon filtering and co-registration methods for five field sites in Norway, Finland, and Switzerland, including alpine terrain of varying topography and both sparse and dense forest. Given careful pre-processing, we find that ATL03 data can yield snow depth profiles with a few meters' spatial resolution that closely match profiles derived from reference snow depth maps from uncrewed aerial vehicles (UAVs) acquired within 5-10 days of an ICESat-2 overpass. The goodness of fit depends on the chosen filtering method, which should be adjusted to the acquisition conditions and the presence of vegetation and snow on trees to yield optimal results. Dataset co-registration is crucial for steep terrain, and in some areas the strong and weak beams have different spatial offsets that require separate co-registration for each beam. 
Given locally optimised pre-processing, the uncertainty (mean absolute deviation, MAD) for ICESat-2/UAV data pairs ranges from 0.05 m in a sparsely forested, flat site to 0.4 m in a very steep alpine site. Bias (median residual) ranges from a few centimetres to several decimetres. For most sites the snow depths retrieved from strong and weak beams are consistently offset, with a vertical difference of up to 25 cm between strong/weak snow depth profiles for the same filtering/co-registration method and site, suggesting inconsistencies in the geolocation of beams of the same pair in ATL03 data. Improved low-level processing on the data provider's side in future ATL03 data versions may reduce the differences between the beams. This study finds that, given accurate reference DEMs, ICESat-2 data can not only provide catchment-scale snow depth averages but also high-resolution snow depth profiles in areas where no measurements currently exist, showcasing the potential of spaceborne laser altimetry for providing high-resolution snow depth measurements globally.
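The core of the retrieval described above is a vertical co-registration of photon heights to a snow-free DEM, followed by a simple difference. A minimal sketch of that idea, with synthetic data and illustrative names (this is not the study's processing chain, which also involves photon filtering and horizontal co-registration):

```python
import numpy as np

def coregister_and_difference(photon_h, dem_h, snow_free_mask):
    """Vertically co-register photon heights to a snow-free DEM, then
    difference to obtain snow depth. A median offset over known
    snow-free ground absorbs geolocation/datum bias (illustrative)."""
    bias = np.median(photon_h[snow_free_mask] - dem_h[snow_free_mask])
    return (photon_h - bias) - dem_h

# Synthetic 1-D profile: gently sloping DEM, 0.5 m snow everywhere
# except a snow-free road, and a 0.3 m vertical bias on the photons.
x = np.linspace(0, 100, 200)
dem = 0.01 * x
snow = np.full_like(x, 0.5)
road = (x > 40) & (x < 50)
snow[road] = 0.0
photons = dem + snow + 0.3   # observed heights include the bias

depth = coregister_and_difference(photons, dem, road)
```

After removing the bias estimated over the road, the differenced profile recovers the 0.5 m snow depth off-road and zero depth on the road.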
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Ku-band Radar for Snow Water Equivalent (and other) Applications: Status of the Terrestrial Snow Mass Mission

Authors: Stephen Howell, Chris Derksen, Benoit Montpetit, Vincent Vionnet, Vincent Fortin, Courtney Bayer, Marco Carrera, Nicolas Leroux, Julien Meloche, Jean Bergeron, Fauve Strachan, Patrick Plourde, Ralph Girard, Roger DeAbreu, Shawn MacArthur, Richard Kelly
Affiliations: Environment And Climate Change Canada, Canadian Space Agency, Natural Resources Canada, University of Waterloo
Freshwater delivered by seasonal snow melt is of the utmost importance for the health and well-being of people and ecosystems across midlatitude, northern, and mountain regions, yet poses risks by contributing to costly and damaging flood events. The current lack of information on how much water is stored as snow (expressed as the ‘snow water equivalent’ or SWE), and how it varies in space and time, limits the hydrological, climate, and weather services provided by Environment and Climate Change Canada (ECCC). To address this knowledge gap, ECCC, the Canadian Space Agency (CSA), and Natural Resources Canada (NRCan) are working in partnership to implement a Ku-band synthetic aperture radar (SAR) mission presently named the ‘Terrestrial Snow Mass Mission’ – TSMM. A technical concept capable of providing dual-polarization (VV/VH), moderate resolution (500 m), wide swath (~250 km), and high duty cycle (~25% SAR-on time) Ku-band radar measurements at two frequencies (13.5; 17.25 GHz) is under development. Ku-band radar is a desirable approach for a terrestrial snow mass mission because these measurements are sensitive to SWE through the volume scattering properties of dry snow and can identify the wet versus dry state of snow cover. This presentation will provide an update on the mission status, including: (1) Implementation of computationally efficient SWE retrieval techniques, based on the use of physical snow modeling to provide initial estimates of snow microstructure which can effectively parameterize forward model simulations for prediction of snow volume scattering. (2) Testbed experiments facilitated by the recently developed TSMM simulator. (3) Analysis of airborne Ku-band radar measurements acquired across Canada with the ‘CryoSAR’ instrument operated by the University of Waterloo. 
(4) Advancements to the technical readiness and mission concept of operations. (5) Key policy drivers which have anchored mission development, including resilient adaptation to climate change, enhanced environmental prediction, and ensuring that strategic water supply information is available to support industry in meeting future clean energy regulations in Canada.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Session: A.02.03 EO for Agriculture Under Pressure - PART 3

The human impact on the biosphere is steadily increasing. One of the main human activities contributing to this is agriculture. Agricultural crops, managed grasslands and livestock are all part of the biosphere and our understanding of their dynamics and their impacts on other parts of the biosphere, as well as on the wider environment and on the climate is insufficient.
On the other hand, today’s Agriculture is Under Pressure to produce more food in order to meet the needs of a growing population with changing diets – and this despite a changing climate with more extreme weather. It must make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and its negative impact on the environment, and deliver accessible, affordable and healthy food.
Proposals are welcome from activities aiming at increasing our understanding of agriculture dynamics and at developing and implementing solutions to the above-mentioned challenges of agriculture, or supporting the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross site research and benchmarking studies, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM) are welcome.

The session will hence cover topics such as:
- Impact on climate and environment
- Crop stressors and climate adaptation
- Food security and Sustainable Agricultural Systems
- New technologies and infrastructure
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: Assessing the value of surface soil moisture products for the prediction of Spring Barley yield in Central Europe

Authors: Felix Reuß, Emanuel Buecchi, Dr. Mariette Vreugdenhil, Wolfgang Wagner
Affiliations: Department of Geodesy and Geoinformation, TU Wien
Spring Barley is the third most common crop in terms of cultivated area and total production in Europe. Its economic value and resilience to environmental stressors contribute to its importance in global food security and agricultural systems. It is used for food production, livestock feed, and the brewing industry and therefore plays a pivotal role in the food chain. Compared to other cereals, barley is considered a drought-tolerant crop type. Yet, proper water supply is still crucial for Spring Barley growth. A key variable indicating the water supply is soil moisture (SM). Various studies have already used SM products, either modelled or satellite-derived, as an input for yield prediction. However, a detailed assessment of the value of SM for Spring Barley yield prediction has not yet been carried out. The aim of this study is therefore to assess the value of two different SM products for the prediction of Spring Barley yield in Central Europe. In detail, we evaluate: (1) how much of Spring Barley yield variability can be explained solely by SM; (2) how yield prediction accuracies differ spatially; and (3) how Spring Barley yield prediction based on a modelled SM product compares to prediction based on a satellite-derived SM product. This study was carried out within the ESA-funded project Yield Prediction and Estimation from EO. Spring Barley yield at NUTS3 level for Germany and NUTS4 level for Austria and Czechia, for the years 2016-2022, was used as a reference. The ASCAT-based HSAF SM products for the top soil layer and the ERA5 soil water index level 1 (SWVL 1) product were used as predictors. From these datasets, 14 daily minimum, maximum and mean soil moisture values were extracted. A neural network was used to train distinct models for the two SM products. All models were finally evaluated by calculating the Pearson correlation (R), explained variability (R²), root mean square error, and mean absolute error. 
The results indicate the following: 1) A high explained variability (R²) of 0.36 was achieved for ERA5 SWVL1; the HSAF product, in contrast, achieved an R² of 0.21. 2) The lowest accuracies are achieved in regions with low average yields due to unfavourable conditions for Spring Barley, e.g. regions frequently affected by drought. 3) The ERA5 product showed higher accuracies, especially in the alpine region, where the SM noise of the HSAF product is especially high. In summary, the study has shown that SM-based models can explain over one-third of the Spring Barley yield variability in Central Europe, thus underlining the high value of SM products for Spring Barley yield prediction. The ERA5 SWVL1 overall showed better performance and provides more accurate results, especially in mountainous regions. The results from this study can contribute to the development of more precise and robust yield prediction models not only for Spring Barley but also for other crops. Future work will focus on understanding the performance differences between the two SM products in more detail and on assessing performance in drought years.
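The evaluation metrics named in this abstract are standard; for reference, they can be computed as follows (generic formulas with made-up example values, not the authors' data or code):

```python
import numpy as np

def eval_metrics(y_true, y_pred):
    """Pearson R, explained variance (R²), RMSE and MAE, as commonly
    used to evaluate yield models (generic textbook formulas)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    r = np.corrcoef(y_true, y_pred)[0, 1]
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R": r,
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": np.sqrt(np.mean((y_true - y_pred) ** 2)),
        "MAE": np.mean(np.abs(y_true - y_pred)),
    }

# Hypothetical reference vs. predicted yields (t/ha)
m = eval_metrics([4.0, 5.0, 6.0, 7.0], [4.2, 4.9, 6.3, 6.8])
```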
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: Innovative Space Methods to Monitor Crop Diversity for Resilient Agriculture

Authors: Bingfang Wu
Affiliations: Aerospace Information Research Institute, Chinese Academy of Sciences
The pressing question for science remains: how to feed humanity on current croplands without increasing greenhouse gas emissions or destroying biodiversity and nature? Agriculture systems often face a trade-off between specialization and diversification. Specialization, with its focus on high-yield monocultures and economies of scale, offers efficiency and cost-effectiveness through volume but often leads to biodiversity loss, soil degradation, and vulnerability to pests and climate extremes. Moreover, traits such as drought tolerance, heat resistance, and high nutrient density, which are crucial for future agriculture, have disappeared from many modern commercial crops. On the other hand, diversification aims at economies of scope, where efficiencies are formed by variety and not just by volume. It promotes crop diversity and nutritional diversity, enabling more stable yields, enhanced soil health and improved ecosystem resilience, but requires more complex management and may involve relatively lower productivity in the short term compared to specialized systems. Nutritional composition and productivity differences among crop varieties, alongside their socio-economic impacts, play a crucial role in agricultural sustainability. Resilient agricultural practices are aligned with the UN FAO’s “four betters”: better production, better nutrition, a better environment and a better life, and crop diversity plays a crucial role in achieving these goals. The need to preserve crop diversity at individual farm, national and global scales is more urgent than ever to safeguard genetic diversity, strengthen agroecosystem resilience, and ensure the security of global food supplies, nutrition and health. Therefore, deep understanding and knowledge, together with effective monitoring of crop diversity, are vital. 
Earth Observation (EO) offers unprecedented potential for the continuous monitoring of crop diversity, enhancing our understanding and knowledge and thus supporting informed policy measures for resilient agriculture. However, technical challenges remain in attaining a cost-effective, reliable data retrieval system. High-resolution datasets, such as UAV hyperspectral and LiDAR data, combined with satellite integration, are promising for capturing fine-scale crop diversity, particularly in fragmented and heterogeneous farming systems. While new-generation satellite sensors enable high-resolution crop type mapping at continental scales, distinguishing crop varieties remains a challenge due to spectral similarities and mixed pixels, especially in smallholder landscapes. To overcome these challenges, advanced deep learning models and expanded ground sampling networks are essential. Additionally, EO-derived proxies for functional traits, such as leaf chlorophyll or grain micronutrient content, require standardized field protocols, including trait-spectral libraries for calibration and validation. While UAV data hold promise for precision agriculture, a more effective data integration strategy is needed to scale small-scale UAV insights to regional or global levels. Moreover, innovative methods, open-access data policies, cost-effective technologies and capacity building for local stakeholders are critical to address the challenges of monitoring crop and cropping diversity in diverse climatic and farming contexts. Understanding the nexus of crop diversity, dietary patterns and livelihoods, particularly human health, is essential for informing evidence-based policymaking that enhances agroecosystem resilience and promotes sustainable agriculture. Agriculture is an indispensable part of the rural economy in major parts of Asia and the Pacific, contributing 29 percent of GDP and 65 percent of all employment. 
Rice-growing areas in Southeast Asia, characterized by fragmented fields and diverse cropping practices, present unique challenges and opportunities for enhancing crop diversity and resilience. Selecting Southeast Asia as a focal region with pilot sites across different dietary cultures and climate zones, the initiative of “Promoting crop biodiversity through innovative space applications (CropBio)” was launched. This initiative employs co-designed, inter- and transdisciplinary approaches, aiming to establish a protocol for crop diversity monitoring and assessment, which includes innovative methods to detect crop traits and varieties and an assessment framework to evaluate the impact of crop diversity on the environment, economy, society, and livelihoods, including human health. Recommendations will be developed to explore nature-based agricultural solutions addressing the global challenge of feeding humanity sustainably while showcasing the evidence of the importance of crop diversity.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: Synergistic use of optical and SAR imagery for near real-time green area index retrieval in maize

Authors: Jean Bouchat, Quentin Deffense, Thomas De Maet, Pierre Defourny
Affiliations: European Space Agency (ESA), European Space Research Institute (ESRIN), Earth and Life Institute, Université catholique de Louvain
The green area index (GAI) is a key biophysical variable for crop monitoring, widely used to assess crop health, growth, and productivity. Most large-scale and cost-effective methods for estimating the GAI rely on optical remote sensing data; consequently, frequent cloud cover can severely limit their reliability. This challenge is ever more pressing in tropical regions, where timely vegetation monitoring is essential for food security and where cloud cover often overlaps with key phases of vegetation growth. In these instances, synthetic aperture radars (SARs) offer a valuable alternative, as their cloud-penetrating capabilities enable the generation of dense time series that can enhance the spatial and temporal coverage of optical data. In recent years, various methods have been proposed to address gaps in time series of biophysical variables retrieved from optical data, including fusion techniques that aim to reproduce or enhance optical imagery using ancillary Earth observation data to compensate for cloud cover. However, there have been relatively few efforts to systematically leverage dense SAR time series to directly fill gaps in GAI time series, despite the potential for reducing modeling errors and production time. In this study, a method is proposed to fill these gaps as they occur along the crop growing season with current and past SAR data as well as past GAI values. The focus on near real-time gap-filling ensures enhanced temporal resolution and timeliness, addressing critical needs in operational crop monitoring. The approach involves the use of a transformer encoder, a deep learning architecture that exploits the sequential nature of the values of the target variable and its complex relationship with SAR backscatter and interferometric coherence. Sentinel-1 and Sentinel-2 data acquired from 2018 to 2021 over the Hesbaye region of Belgium are used for cross-validation. The results demonstrate the robustness of the method. 
The model can successfully retrieve the GAI at the parcel level on an unseen growing season with a mean R² of 0.88 and RMSE of 0.71. External validation with in situ data collected from 10 maize fields in 2018 in Belgium further confirms its accuracy, outperforming traditional approaches based on Water Cloud model inversion. These promising results not only highlight the immediate applicability of this approach but also its potential for broader impact. While this study focused on maize, a high-biomass crop that has been shown to be challenging to monitor using C-band SAR, the method shows promise for extension to other major temporary crops. Additionally, future advancements incorporating multi-frequency SAR data, using the L-band data from the forthcoming NISAR and ROSE-L missions, are anticipated to further enhance its performance. Ultimately, by enabling the generation of accurate and dense GAI time series throughout the crop growing season, this method has the potential to significantly advance the capability of operational crop monitoring systems in cloud-prone regions, where timely delivery of information on crop condition is critical for informed decision-making.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: Monitoring best management practices using Earth Observation for improving estimates of greenhouse gas emissions and sinks from Canadian agriculture

Authors: Dr. Catherine Champagne, Dr. Emilly Lindsay, Dr. Bahram Daneshfar, Dr. Jiangui Liu, Dr. Jiali Shang, Dr. Andrew
Affiliations: Agriculture And Agri-food Canada
Agricultural land plays an important role in mitigating climate change. Agriculture and Agri-Food Canada maintains key national data sets derived from Earth observation on land cover and land use across the agricultural regions of Canada and is developing new information and models to expand our knowledge of changing agroecosystems. Understanding changes in yield and productivity, crop rotations and best management practices, and changes in the transition between agricultural and other land uses is helping us better evaluate the environmental and economic performance of the agricultural sector and support resilience to climate change. While many methods to quantify key aspects of agricultural performance are found in the literature, developing robust methods that can reliably estimate these parameters over diverse agricultural regions and over long periods of time requires adaptation and robust validation. This presentation will cover how methods are being developed and adapted over pilot sites in Canadian agricultural regions to support measurement and verification of agricultural resilience indicators for national greenhouse gas inventories. The work covers methods development and validation in three sub-areas: landscape productivity, grassland and perennial rotations, and parameterization of process-based models using seeding and harvest dates. Seeding dates were derived from dense time series of seasonal data built from multi-temporal optical observations from Sentinel-2, Landsat and MODIS in different regions of Canada. A multi-parameter curve-fitting function was used to estimate the start, end and magnitude of canopy greenness. These data were integrated with reanalysis data and a biometeorological time scale model to estimate seeding dates. Results were evaluated over different regions of Canada with different crop calendars and crop mixes. 
Landscape productivity was estimated using multi-temporal optical data from Sentinel-2 and Landsat combined with a radiation use efficiency model to estimate net primary productivity. This was compared with published methods using leaf area index and normalized difference vegetation index to estimate relative productivity and crop yield, in order to estimate carbon sequestration in agricultural canopies. Grasslands and perennial agricultural classes were evaluated using a time series of annual crop classifications derived from Landsat, Sentinel-2, Sentinel-1 and Radarsat-2 data. A field-based crop rotation model was developed using image segmentation to classify rotations based on the frequency of perennial agricultural classes. A more detailed classification of native and seeded forage was developed for the Canadian Prairies using a multi-year time series of synthetic aperture radar and optical data to capture short- and long-term trends in land conversion in agricultural regions. This presentation will discuss how long-term Earth Observation data sets are being integrated into robust Earth Observation data services to improve reporting of greenhouse gas emissions, and the challenges of method development for operational data services.
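The abstracts above describe estimating the start, end and magnitude of canopy greenness from dense optical time series. As a simplified stand-in for the multi-parameter curve-fitting function the authors use (whose exact form is not given), a threshold-on-seasonal-amplitude extraction can be sketched with synthetic data and illustrative names:

```python
import numpy as np

def season_metrics(doy, greenness, frac=0.5):
    """Estimate the start, end and magnitude of the greenness season
    as the days where the curve crosses a fraction of its seasonal
    amplitude. A simplified stand-in, not the authors' method."""
    g = np.asarray(greenness, float)
    base, peak = g.min(), g.max()
    thresh = base + frac * (peak - base)
    idx = np.where(g >= thresh)[0]
    return doy[idx[0]], doy[idx[-1]], peak - base

# Synthetic NDVI-like seasonal curve over day of year:
# a rising logistic around day 140 and a falling one around day 260.
doy = np.arange(1, 366)
rise = 1.0 / (1.0 + np.exp(-0.15 * (doy - 140)))
fall = 1.0 / (1.0 + np.exp(0.15 * (doy - 260)))
g = 0.2 + 0.6 * rise * fall

start, end, amp = season_metrics(doy, g)
```

On this synthetic curve, the estimated season runs from roughly day 140 to day 260 with an amplitude near 0.6.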
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: Leveraging multi-year Sentinel-2 time series for mapping organic farmland

Authors: Jan Hemmerling, Dr. Marcel Schwieder, Dr Patrick Hostert, Dr. Stefan Erasmi
Affiliations: Thünen Institute of Farm Economics, Thünen Earth Observation (ThEO), Germany, Humboldt-Universität zu Berlin, Earth Observation Lab, Geography Department, Germany
Organic farming plays a crucial role in achieving a more sustainable agriculture, offering benefits such as reduced greenhouse gas emissions, improved soil health, and enhanced ecosystem services. These attributes align with the European Union's ambition to transition towards sustainable agriculture, as outlined in the European Green Deal. A key target is to bring 25% of agricultural land under organic management by 2030. An effective assessment of the drivers and impact of an extension of organic farmland at national scale requires comprehensive, spatially explicit digital data on organic agricultural land. However, such data remain scarce in many countries. We seek to address this gap by leveraging multispectral Sentinel-2 time series data to differentiate between organic and conventional farming practices. Organic farming is characterized by specific management practices: minimized use of pesticides and synthetic fertilizers, an emphasis on mechanical weed control, and more diverse crop rotations, thereby supporting soil health and naturally reducing pest pressures. On the other hand, organic farming results in significantly reduced cropland yields compared to conventional farming. Direct or indirect expressions of these differences in management offer potential for differentiation by means of remote sensing but have, so far, not been explored. In this work we combine intra-annual and multi-annual multispectral remote sensing features in order to differentiate between the two management systems. We analyze organic and conventional farming practices across seven German federal states, using extensive sampling. We use the Integrated Administration and Control System (IACS) of the Common Agricultural Policy (CAP) framework between 2018 and 2022 to establish a reference for permanently organically managed fields. 
We interpolate all available Sentinel-2 imagery with less than 75% cloud cover between 2018 and 2022 into equidistant time series with 10-day intervals at 10-meter spatial resolution. These data are then processed in a two-stage classification model. The first stage focuses on intra-annual management and phenology differences; its output is then utilized by the second stage, which is thereby enabled to identify crop rotation patterns indicative of organic practices. For both stages we use a Vision Transformer based approach, which we compare against a Random Forest baseline model. We evaluate how the integration of multi-year data enhances classification accuracy compared to the application of annual time series. So far, our findings show that differentiation between organic and conventional farming is partly possible using one-year multispectral Sentinel-2 time series alone, whereby the differentiation accuracy heavily depends on the cultivated crop type. While crops like winter wheat, winter rye and spring oat show the best results, with F1-scores above 0.8 indicating good separability, permanent grassland, hops and orchards show F1-scores below 0.2, indicating a poor distinction between organic and conventional farming. Future work will refine this approach by integrating multi-annual data to enhance mapping accuracy and expand the methodology for nationwide organic farming assessments. This research underscores the value of Sentinel-2 time series in supporting the EU’s sustainable agriculture goals.
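The resampling step described above, irregular clear-sky observations interpolated onto an equidistant 10-day grid, can be illustrated with a minimal linear-interpolation sketch (the NDVI values and function name are made up; the authors' actual interpolation scheme is not specified beyond the grid spacing):

```python
import numpy as np

def to_equidistant(doy_obs, values, step=10, start=1, end=361):
    """Linearly interpolate irregular clear-sky observations onto an
    equidistant grid (10-day intervals here). Illustrative only."""
    grid = np.arange(start, end, step)
    return grid, np.interp(grid, doy_obs, values)

# Hypothetical clear-sky NDVI observations at irregular days of year
doy_obs = np.array([5, 32, 60, 95, 150, 210, 280, 330])
ndvi = np.array([0.2, 0.25, 0.3, 0.5, 0.8, 0.7, 0.4, 0.25])

grid, series = to_equidistant(doy_obs, ndvi)
```

Note that `np.interp` clamps to the first/last observed value outside the observed range; a production pipeline would handle edge gaps and outliers more carefully.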
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: ESTIMATING DAILY HIGH-RESOLUTION LEAF AREA INDEX (LAI) FOR WHEAT USING PLANETSCOPE DATA

Authors: Rhianna McAneny, Qiaomin Chen, Marie Weiss, Raul Lopez-Lozano, Jérémy Labrosse, Dan Smith, Mingxia Dong, Scott Chapman, Shouyang Liu, Alexis Comar
Affiliations: INRAE, Université d'Avignon, UMR EMMAH 1114, HIPHEN SAS, School of Agriculture and Food Sciences, The University of Queensland, Engineering Research Center of Plant Phenotyping, Ministry of Education, State Key Laboratory of Crop Genetics & Germplasm Enhancement and Utilization, Jiangsu Collaborative Innovation Center for Modern Crop Production, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Agriculture and Food, CSIRO, Queensland Bioscience Precinct
The emergence of new commercial satellite constellations such as PLANETSCOPE enables the acquisition of satellite imagery with high temporal resolution (daily) and high spatial resolution (3 meters per pixel). However, they often suffer from inconsistencies in data calibration, which complicates the estimation of the Leaf Area Index (LAI). We show that PLANETSCOPE spectral sampling is not sufficient to reach the same accuracy as obtained with SENTINEL-2, and we investigate the added value of such a constellation for wheat LAI estimation using different prior information (e.g. crop specific, soil specific) to compensate for this effect. Our study relies on the BV-NNET algorithm, designed for global-level application over any vegetation type and based on the inversion of the PROSAIL radiative transfer model using artificial neural networks. This algorithm was initially developed for SENTINEL-2 and has been adapted to align with PLANETSCOPE’s spectral and orbital characteristics. In order to train the neural network, we gathered a comprehensive LAI dataset from different phenotyping experiments including a variety of wheat genotypes, environmental conditions (France, China, Australia) and phenological stages. To address the differences between PLANETSCOPE and SENTINEL-2 regarding radiometric accuracy and the number of available bands, we compare two harmonization approaches by fitting linear regressions (i) at the reflectance level and (ii) at the product level (LAI). We show that harmonizing at the product level is more efficient, leveraging the presence of SWIR bands in SENTINEL-2. We then explore how the estimation of LAI can be improved by developing a wheat-specific algorithm that consists of adding prior information into BV-NNET (e.g. soil reflectance, vegetation biochemical content and joint distributions between those variables). 
We explore different scenarios for the prior information and show that, with the appropriate joint distribution, the model performance in estimating LAI can be significantly improved. Finally, the best performance is achieved when combining the prior information with harmonization against SENTINEL-2 at the product level.
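Harmonization at the product level, as described above, amounts to fitting a linear regression (gain and offset) between paired LAI estimates from the two sensors and applying it to the PlanetScope product. A minimal sketch with made-up values (not the study's data or code):

```python
import numpy as np

# Hypothetical paired LAI estimates over the same pixels and dates
lai_s2 = np.array([0.5, 1.2, 2.0, 3.1, 4.0, 5.2])  # Sentinel-2 reference
lai_ps = np.array([0.8, 1.6, 2.3, 3.6, 4.3, 5.8])  # PlanetScope, biased

# Fit gain/offset at the product level, then correct the PlanetScope LAI
gain, offset = np.polyfit(lai_ps, lai_s2, 1)
lai_ps_harmonized = gain * lai_ps + offset

rmse_before = np.sqrt(np.mean((lai_ps - lai_s2) ** 2))
rmse_after = np.sqrt(np.mean((lai_ps_harmonized - lai_s2) ** 2))
```

By construction, the least-squares fit cannot increase the residual error against the reference, so `rmse_after <= rmse_before` on the fitting sample.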
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Session: F.04.06 Wetlands: from Inventory to Conservation

Wetlands are an essential part of our natural environment. They are scattered across the world in all bio-geographic regions, providing a range of critically important ecosystem services and supporting the livelihoods and well-being of many people. For much of the 20th century, wetlands were drained and degraded.

The Ramsar Convention on wetlands is an intergovernmental treaty that provides the framework for national actions and international cooperation for the conservation and wise use of wetlands, as a means to achieving sustainable development. The 172 countries signatory to the convention commit, through their national governments, to ensure the conservation and restoration of their designated wetlands and to include the wise use of all their wetlands in national environmental planning.

Wetland inventory, assessment and monitoring constitute essential instruments for countries to ensure the conservation and wise use of their wetlands. Earth Observation has revolutionized wetland inventory, assessment and monitoring. In recent years, the advent of continuous streams of high-quality, free-of-charge satellite observations, in combination with the emergence of digital technologies and the decreasing cost of computing, has offered unprecedented opportunities to improve our collective capacity to efficiently monitor changes and trends in wetlands globally.

The importance of EO for wetland monitoring has been stressed by Ramsar in a recently published report on the use of Earth Observation for wetland inventory, assessment and monitoring.

The SDG monitoring guidelines on water related ecosystems (SDG target 6.6) also largely emphasize the role of EO, while the EO community is getting organised around the GEO Wetlands initiative to provide support to wetlands practitioners on the use of EO technology.

The Wetland session will review the latest scientific advancements in using Earth observations for wetland inventory, assessment, and monitoring to support effective wetland conservation. It will also discuss strategies for integrating Earth observations into the sustainable management of wetland ecosystems.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Empowering National Wetland Inventorying through Earth Observation

Authors: Christian Tottrup, Cécile Kittel, Alexander Kreisel, Adam Pasik, Anis Guelmami, Nina Bègue
Affiliations: DHI A/S, GeoVille, Tour du Valat
The Earth Observation for Wetland Inventory (EO4WI) is an application project funded by the European Space Agency (ESA) that aims to empower countries with the capacity to independently leverage Earth Observation (EO) data and tools for national wetland inventorying. These inventories are critical for meeting the reporting requirements of multiple international frameworks, including the post-2020 Global Biodiversity Framework (GBF), the Ramsar Convention, SEEA Ecosystem Accounting, and the 2030 Agenda for Sustainable Development (e.g., SDG 6.6.1). EO4WI focuses on developing robust methodologies and flexible tools to maximize the utility of the recent surge in radar and optical satellite data availability, the development and increased accessibility of advanced machine-learning classification algorithms, and the continuous improvements in computational capacity and availability of cloud-based platforms. By aligning national wetland inventory processes with global policy requirements, the project addresses the dual needs of building local capacity and demonstrating a coherent data and ICT infrastructure capable of supporting national and regional wetland mapping efforts. The EO4WI project has been implemented in collaboration with national stakeholders (Early Adopters), regional Ramsar/NGO networks, and global domain experts to bridge the gap between local knowledge, in-situ data, and EO technologies. This inclusive approach aimed to harmonize data production and knowledge-sharing across geographic scales and scientific disciplines, ultimately fostering a better understanding of wetland processes and trends. EO4WI envisions a transformative impact on wetland management by equipping nations with the tools and data necessary to support multiple policy agendas.
Furthermore, the project seeks to drive meaningful actions for the protection and restoration of critical wetland ecosystems by advancing large-scale wetland inventorying and ensuring that wetland-related data are integrated into global environmental decision-making frameworks. The aim of this presentation is to review the EO4WI implementation approach and present several country demonstrations showcasing how the EO4WI mapping solution has been used to enhance national wetland inventorying data and capacity, and thereby contributing to national efforts related to wetland conservation and restoration as well as to national monitoring and reporting obligations.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Global Mangrove Watch: Updated Global Mangrove Extent and Change 1990-2024

Authors: Dr Pete Bunting, Lammert Hilarides, Dr Ake Rosenqvist, Paula Castro Brandão Vaz dos Santos, Daniele Zanaga, Dr. Ruben Van De Kerchove, Jessica Rosenqvist, Nathan Thomas, Tom Worthington, Richard Lucas
Affiliations: Aberystwyth University, Wetlands International, soloEO, VITO, Edge Hill University, University of Cambridge
The Global Mangrove Watch (GMW) was initiated in 2011 as part of the Japan Aerospace Exploration Agency (JAXA) Kyoto & Carbon Initiative and is now undertaken as an independent project led by Aberystwyth University, Wetlands International, soloEO and The Nature Conservancy (TNC). The GMW team have produced several global mangrove extent and change products over the past 8 years (Bunting et al., 2018, 2022a,b, 2023), with each update improving the mapping methodology and accuracy of the products. The latest version 4.0 GMW products have a significantly revised and improved methodology, resulting in greater map accuracy. Improvements include a new baseline for 2020, with a spatial resolution of 10 m, and change products extending to 1990 (previously 1996). Understanding historical changes in the extent of mangrove forests is vital for effective restoration and conservation efforts. Accurately mapping these changes is essential, as regions that have experienced mangrove loss are frequently identified as potential sites for mangrove restoration and blue carbon projects. The GMW v3.0 change products presented several limitations, mainly due to misregistration in the global L-band SAR mosaic data previously used as the basis for the change mapping. This misregistration contributed to significant uncertainty in the area mapped as mangrove change. Consequently, it was advised that only net changes, rather than separate gains and losses, be reported when utilizing the GMW v3.0 data. Additionally, for many end users, for whom the GMW v3.0 mapping constitutes their primary source of up-to-date and readily available mangrove extent data for their region of interest, the accuracy at the local level and the minimum mapping unit are often insufficient. Therefore, the primary aim of GMW v4.0 has been to improve the local relevance of the products.
To achieve this aim, the new GMW v4.0 baseline map for 2020 has a spatial resolution of 10 m and the historical change layers are now produced using a combination of Landsat and JAXA L-band SAR data. JAXA has reprocessed the JERS-1, ALOS PALSAR and ALOS-2 PALSAR-2 global mosaic datasets to remove the misregistration from the products used to generate the GMW v3.0 maps. JAXA has also provided all the available observations of JERS-1 data from 1992 to 1998 rather than the single mosaic for 1996, which was used previously. For the new 2020 baseline, a time series composite was generated for each Sentinel-2 granule. This composite included the 10th, 50th, and 90th percentiles for each reflectance band and index, such as NDVI, NDWI, NBR, NDGI, EVI, and ANIR. A global XGBoost classifier was trained, and Boruta SHAP feature selection was utilized to reduce the number of variables in the model. A transfer learning step was applied, in which the global XGBoost model was further trained using local reference samples for each granule. The resulting map underwent visual checks and refinements to produce the final GMW v4.0 baseline. An accuracy assessment was conducted using 44 globally distributed sites, providing a total of 49,600 reference points. The true class for each point was identified with reference to PlanetScope and other higher-resolution image sources. The accuracy of the mangrove class was estimated to be 95.3% (with a 95% confidence interval of 94.9% to 95.7%). In comparison, using the same reference points, the accuracy of the GMW v3.0 2020 map was estimated at 81.4% (with a 95% confidence interval of 80.4% to 82.2%). Change mapping was conducted using the newly defined GMW v4.0 baseline by combining Landsat imagery with JAXA L-band SAR data. Landsat reflectance composites were created for 1990, 1995, 2000, 2005, 2010, 2015, 2020, and 2024 using Google Earth Engine. 
A Multivariate Alteration Detection (MAD) change detection method was applied to each Landsat composite to identify pixels that had not changed from 2020. The 2020 training samples within these no-change regions were selected, and a global XGBoost classifier was trained and applied to each Landsat composite. The classification results and change detection from Landsat were then merged with the 2020 baseline classification to produce a final classification for each Landsat composite. These classifications were utilized to constrain the map-to-image change detection approach applied to the L-band SAR data, following the GMW v3.0 change methodology. The resulting mangrove change maps extend the time series of mangrove changes back to 1990 and significantly increase the number of observations within the 1990s, thereby enhancing the confidence in the mangrove extent estimates for that decade. The new change results also enable independent calculations of annual mangrove gains and losses, allowing the GMW v4.0 datasets to be used as Activity Data for national reporting to the UNFCCC. The new v4.0 products also significantly improved the mapping of landward mangrove changes. For example, areas such as the Andaman and Nicobar Islands, which experienced significant in-land mangrove changes after the 2004 Boxing Day Earthquake and Tsunami, were not captured by the GMW v3.0 products.
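The per-pixel temporal percentile compositing described above (10th, 50th and 90th percentiles per band over a time series, ignoring cloud-masked observations) can be sketched as follows. This is an illustrative stand-in with toy data, not the GMW production code; the function name is an assumption.

```python
import numpy as np

def percentile_composite(stack, percentiles=(10, 50, 90)):
    """Reduce a (time, rows, cols) reflectance stack to per-pixel temporal
    percentiles, ignoring cloud-masked (NaN) observations."""
    return np.nanpercentile(stack, percentiles, axis=0)

# Toy stack: 5 scenes over a 2 x 2 tile, one cloud-masked observation
stack = np.arange(20, dtype=float).reshape(5, 2, 2)
stack[0, 0, 0] = np.nan  # simulate a cloud-masked pixel
comp = percentile_composite(stack)  # shape (3, 2, 2): one layer per percentile
```

The same reduction applies per spectral band or index (NDVI, NDWI, etc.); cloud masking simply becomes NaN insertion before compositing.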
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: A pan-European monitoring of the wetland use intensity in coastal zones

Authors: Jonas Franke, Kevin Kuonath, Anis Guelmami, Nina Bègue
Affiliations: Remote Sensing Solutions, Tour du Valat
All major European Union (EU) policies acknowledge the critical role of wetlands in achieving the EU's goals related to climate neutrality, biodiversity conservation, pollution reduction, flood regulation, and the circular economy. As a result, evaluating the current extent and condition of European wetlands, including their capacity for long-term mitigation through restoration or other conservation measures, is a top priority for the EU in addressing climate change. With increasing extreme weather events, sea level rise and intensified land use, coastal wetlands are playing an important role as buffer zones with a wide range of ecosystem services. Land cover and land use are indicators for wetland status, since land use is a main driver of coastal wetland degradation and loss. However, land cover/use categories cannot reflect the wide variation of the actual wetland use intensity. For example, the intensity of grassland use (e.g. number of mowings), the type of agricultural land use or any disturbances (such as fires or deforestation) have a major influence on the ecosystem. To complement the monitoring of wetlands with information beyond land cover/use categories, time series of Sentinel-2 data were leveraged. This new approach, developed in the Horizon Europe RESTORE4Cs project, resulted in a pan-European layer on coastal wetland use intensity (WUI) for the year 2023. Wetland use intensity refers to the degree or extent to which wetlands are utilized for various purposes and indicates how wetlands are being impacted or exploited through various human activities which may affect the ecological health or function of the wetland. The pan-European WUI layer for coastal wetlands differentiates intensively used wetland areas, such as crop cultivation, burned areas, peat extraction, etc. from less intensively used areas, such as grazing areas for livestock farming as well as natural/semi-natural areas and permanent water. 
WUI can be described as the magnitude of changes in spectral properties over time. An increasing WUI can impact the health of the wetland, potentially leading to degradation, loss of biodiversity, or changes in the hydrological and biogeochemical functions of the ecosystem. Thus, knowledge about the WUI is fundamental for managing wetland use and is crucial for prioritizing protection and restoration efforts and for maintaining the balance between human needs and ecological preservation. The WUI is based on a time-series analysis algorithm, the Mean Absolute Spectral Dynamics (MASD), developed by Franke et al. (2012). The MASD algorithm assesses the average spectral change (absolute magnitude) across selected spectral bands across a certain number of timesteps over a growing season. A timestep is the time between two cloud-free or cloud-masked scenes, for which spectral change is measured. Ideally, all timesteps are four weeks long, with the first scene capturing the start of the growing season and the last one the end of the growing season. For the cloud-based processing of the pan-European coastal WUI layer, an automatic Sentinel-2 scene selection procedure was developed that follows this logic as closely as possible. Areas with persistent cloud cover, in which the time series did not meet the minimum requirement of coverages per season, were indicated as such in the final WUI layer. Since the MASD is sensitive to the observation length and observation density of the satellite time series, it was modified here to minimize the impact of varying time series inputs. Averaged daily MASD values provided more temporally stable values that allowed for scaling up and for more comparable WUI values across regions. Since the aim was to assess WUI mainly in vegetated wetland areas, only vegetation-sensitive bands were selected to calculate the MASD.
To account for a balance between the spectral ranges and the spatial resolution of the Sentinel-2 bands, the green and red bands in the VIS, two bands from the red-edge and near infrared (NIR) and two bands in the short-wave infrared (SWIR) were used. In order to generate a WUI layer that focuses on coastal wetlands and is spatially coherent with other wetland information layers, it was processed within areas likely to host wetland habitats, with a high probability of presence calculated using the potential wetland areas (PWA) layer (Guelmami, 2023). Since surface water dynamics in the coastal wetlands also cause spectral changes that can lead to confusion in the WUI interpretation through high MASD values, seasonal and permanent water bodies were identified in the coastal wetlands for the same year (2023) and considered complementary to the WUI. All Sentinel-1 scenes were used to assess the surface water dynamics, following an approach described in Tøttrup et al. (2022). The final WUI layer provides insights into wetland use and status beyond classic land use categories. It is a dataset complementary to other wetland information layers that can be used to identify pressures from agricultural activities and over-use of resources, and it can help to find priority areas for restoration actions and protection enforcement within the wetlands. Produced annually, the WUI can indicate trends and be used for impact assessment of protection or restoration measures.
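The MASD idea described above can be sketched for a single pixel as follows. The exact formulation is given in Franke et al. (2012); the per-day normalisation here is a simplified stand-in for the "averaged daily MASD" modification the abstract describes, and the data are toy values.

```python
import numpy as np

def masd(series, days):
    """Mean Absolute Spectral Dynamics sketch: absolute change between
    consecutive scenes, averaged across spectral bands, then normalised
    to a daily value over the observed season.

    series: (n_scenes, n_bands) reflectances for one pixel
    days:   acquisition day-of-year for each scene
    """
    per_step = np.abs(np.diff(series, axis=0)).mean(axis=1)  # change per timestep
    return per_step.sum() / (days[-1] - days[0])             # daily-averaged value

# Toy pixel: three scenes, two vegetation-sensitive bands, 20 days apart
series = np.array([[0.2, 0.3],
                   [0.4, 0.1],
                   [0.3, 0.2]])
days = np.array([100, 120, 140])
value = masd(series, days)
```

Intensively used pixels (mowing, harvest, burning) show large band-wise jumps between scenes and hence high values; stable natural vegetation stays low.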
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Global Wetland Watch – A new system for globally mapping and monitoring changes to wetland ecosystems

Authors: Daniel Druce, Puzhao Zhang, Gyde Krüger, Dr. Cécile Kittel, Torsten Bondo, Christian Tøttrup
Affiliations: DHI
Wetlands are vital ecosystems that play a crucial role in regulating water quality, supporting biodiversity, and acting as carbon sinks. However, they are often poorly mapped and characterized, creating a significant information gap. To address this, DHI, in collaboration with UNEP and funded by Google.org, is developing the Global Wetland Watch (GWW) system. This innovative tool utilizes satellite earth observation data and AI to map over 20 different wetland types, providing the first high-resolution and globally consistent wetland assessment. The data, which will be released freely as a public good, are vital for efforts in conservation, sustainable development, biodiversity, and climate change mitigation. They will also support national policies and legal frameworks aimed at protecting and restoring wetland ecosystems, helping countries meet targets set out in important global agendas such as the Ramsar Convention and the UN 2030 Agenda on Sustainable Development (SDG 6.6). The GWW methodology prioritises a time-series approach to harness the spectral, phenological, and hydrological characteristics unique to wetlands. This approach employs harmonic regression through the continuous change detection and classification (CCDC) algorithm, utilizing data from multiple satellite sensors: Sentinel-1, Sentinel-2, PALSAR-2 and Landsat 8/9. Additionally, the system incorporates supplementary layers designed to enhance decision making. These layers provide interpretable, explainable, and adaptable data on key wetland characteristics, such as surface water extent dynamics, Height Above Nearest Drainage (HAND), and coastal geomedian and inundation frequencies, ensuring the system remains user-centric and accessible for diverse stakeholders.
Through engagement with multiple pilot countries, the methodology and relevance of the work has been refined, and the system will serve as a benchmark for wetland inventories and monitoring by producing the world’s first high-resolution, multi-class, globally consistent wetland assessment. It will provide a foundation for future advancements, ensuring the system remains accurate, relevant, and capable of supporting ongoing conservation strategies, policy decisions and global initiatives.
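The harmonic-regression core of CCDC-style time-series modelling mentioned above can be sketched as a first-order least-squares fit. This is a minimal illustration on synthetic data, not the GWW production code (CCDC additionally tracks model breaks and higher-order harmonics).

```python
import numpy as np

def fit_harmonic(t, y):
    """Least-squares fit of y ~ c0 + c1*cos(2*pi*t) + c2*sin(2*pi*t),
    with t in fractional years -- the first-order harmonic model fitted
    per pixel and band in CCDC-style change detection."""
    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * t),
                         np.sin(2 * np.pi * t)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Synthetic seasonal reflectance with known coefficients
t = np.linspace(0.0, 3.0, 60)
y = 0.5 + 0.2 * np.cos(2 * np.pi * t) - 0.1 * np.sin(2 * np.pi * t)
c = fit_harmonic(t, y)  # recovers approximately [0.5, 0.2, -0.1]
```

A change is then flagged when new observations deviate persistently from the fitted model's prediction.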
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: National Mapping of Wetland Habitats in mainland France

Authors: Anis Guelmami, Nina Bègue, Sébastien Rapinel, Léa Panhelleux, Guillaume Gayet, Laurence Hubert-Moy, Hugo Potier
Affiliations: Tour du Valat, University of Rennes 2, PatriNat
Wetlands face increasing threats and degradation globally, with an estimated 35% decline in area between 1970 and 2015 (Darrah et al., 2019), rising to 48% in the Mediterranean region during the same period (MWO, 2018). In Europe, this loss is primarily driven by urban sprawl and agricultural expansion and intensification (Čížková et al., 2013). Conservation efforts have led to the designation of protected areas such as Ramsar and Natura 2000 sites, but their ecological effectiveness remains underexplored due to the absence of wetland-specific evaluation tools (Munguía & Heinen, 2021). Furthermore, wetland monitoring is often incomplete and imprecise, focusing mainly on heritage sites through field observations (Perennou et al., 2012; Darrah et al., 2019) or relying on coarse-scale global land use and land cover (LULC) maps. Such maps frequently fall short for wetland monitoring, as they fail to provide a national picture of the total extent of wetland habitats (Perennou et al., 2012; Perennou et al., 2018; Rapinel et al. 2023). This study, focused on mainland France, aims to address these gaps by (i) developing a detailed, fine-scale national map of wetland habitats across mainland France, (ii) characterizing and differentiating wetland habitats using advanced EO techniques combined with archival field data for robust modeling, and (iii) providing spatial data to support decision-making for sustainable wetland management, and habitat and biodiversity conservation and restoration. To achieve these objectives, the proposed approach combines advanced Earth Observation (EO) techniques with underutilized archival field plots to produce a detailed 10 m resolution national map of wetland habitats across mainland France.
It is built upon an existing methodology to map wetland ecosystems extent across mainland France (Rapinel et al., 2023) and integrates spectral variables derived from Sentinel-2 time series, bioclimatic variables generated using Worldwide Bioclimatic Classification System (Perrin et al., 2020), phenological variables derived from NDVI indices (Orusa et al., 2023; Peng et al., 2023), and topo-hydrological metrics such as Topographic Wetness Index (TWI), Multi-Resolution Topographic Position Index (MRTPI), and Vertical Distance to Channel network (VDC). Geological data from the national geodatabase, as well as surface water dynamics derived from global datasets, are also integrated. Additionally, vegetation height data, produced by Lang et al. (2020) and freely available globally on Google Earth Engine, are used to refine habitat characterization. The study uniquely utilizes underused national archives, including historical biodiversity surveys and local wetland inventories, to calibrate and validate habitat classification models. This is enhanced by new field data collected during the implementation of this study, ensuring recent and accurate inputs for algorithm processing. The methodology involved processing Sentinel-2 2017-2022 dense time series to create pseudo-annual composites for robust habitat discrimination. A machine learning based hierarchical Random Forest model was applied to classify habitats using the EUNIS nomenclature. Post-processing, including segmentation for the smoothing of the classification rasters and expert validation, enhanced the reliability of the results and ensured the accurate identification and inclusion of rare or underrepresented wetland habitat types. 
The study delivered a 10 m resolution map of wetland habitats at the national scale of mainland France, offering essential data for evaluating ecosystem functions, guiding sustainable management, prioritizing areas for wetlands preservation and/or restoration and informing policies and strategies related to water and land use planning. It marks a major step forward in national-scale wetland ecosystem assessment. Keywords: Wetland habitats, National mapping, Earth Observation, EUNIS, Sentinel-2, Field data archives.
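Among the topo-hydrological metrics listed above, the Topographic Wetness Index has a standard closed form, TWI = ln(a / tan β), with a the upslope contributing area per unit contour width and β the local slope. A minimal sketch (the 0.1-degree slope floor is an illustrative choice, not part of the study's method):

```python
import numpy as np

def twi(upslope_area, slope_deg):
    """Topographic Wetness Index, ln(a / tan(beta)). A small slope floor
    (0.1 degrees) avoids division by zero on perfectly flat cells."""
    beta = np.radians(np.maximum(slope_deg, 0.1))
    return np.log(upslope_area / np.tan(beta))

# Same contributing area, gentle vs. steep slope: the gentle cell is "wetter"
gentle = twi(np.array([500.0]), np.array([0.5]))
steep = twi(np.array([500.0]), np.array([30.0]))
```

High TWI flags flat, convergent terrain where water accumulates, which is why it is a useful predictor of potential wetland presence.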
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Mapping Tropical Wetlands Extent and Dynamics over 10 Years by ALOS-2 PALSAR-2

Authors: Ake Rosenqvist, Greg Oakes, Jessica Rosenqvist, Dr Pete Bunting, Dr Andy Hardy, Bruce Forsberg, Pedro Barbosa, Thiago Silva, Kazufumii Kobayashi, Takeo Tadono
Affiliations: solo Earth Observation (soloEO), Japan Aerospace Exploration Agency (JAXA), Aberystwyth University, soloEO, Inst. for Amazonia Research (INPA), Université du Quebec (UQAM), University of Stirling, Remote Sensing Technology Center (RESTEC)
Earth observation provides opportunities to address the information needs of the Ramsar Convention for the monitoring and reporting on key wetland indicators, including progress on the Sustainable Development Goals, where SDG Indicator 6.6.1 – Change in the extent of water-related ecosystems over time – is of particular relevance to the work presented here on mapping and monitoring major inland freshwater (floodplain) wetlands. Floodplain forests are a dominant ecosystem in meandering river basins with moderate topography, where they provide important habitats for aquatic flora and fauna, and critical ecosystem services for riverine communities. Seasonal inundation is a dominant environmental factor affecting floodplain forest ecosystems and the timing, duration and amplitude of flooding vary spatially on the floodplain as a function of fluctuations in river stage height and topography. Floodplain forests sequester carbon as they grow, but are also significant sources of methane (CH4) and other trace gases essential to climate regulation. In river basins with low topography, floodplain forests can constitute more than 10% of the total basin area, e.g. corresponding to around 600,000 km2 in the Amazon Basin alone [1, 2]. While previous studies have mapped the maximum and minimum extents of inundation in the Amazon Basin with significant detail, what remains lacking is data on the temporal and spatial dynamics of the inundation patterns, both within years and between different years. Recent increases in the intensity and duration of droughts due to climate change are threatening the integrity of the whole floodplain ecosystems, as manifested by the historical low water levels in the Amazon Basin in 2023 and 2024, and a synoptic view of how the flooding patterns across the basin are changing is needed. 
L-band SAR has a long, proven track record in mapping and detecting forest inundation, thanks to the capacity of the long (23.5 cm) wavelength signal to penetrate a forest canopy and interact with the ground or a water surface below. As part of JAXA’s systematic acquisition strategies for ALOS PALSAR and ALOS-2 PALSAR-2, L-band SAR data have been acquired across the entire pan-tropical zone in the Americas, Africa and Asia-Pacific on a regular (every 6 weeks) basis since 2007, with additional historical coverage by JERS-1 SAR available from the 1990s. Within the project described here, ALOS-2 PALSAR-2 wide-swath (ScanSAR) data acquired between 2014 and 2024 over the Amazon basin were used, corresponding to about 90 coverages over the 8 million km2 basin. The data were processed by JAXA to CEOS Analysis Ready Data (CEOS-ARD) Normalised Radar Backscatter (NRB) format, which includes full geometric and radiometric terrain corrections, and provided as HH and HV polarisation gamma-0 backscatter as 1 x 1 degree image tiles at 50 m pixel spacing. The image classification method employed, termed “RadWet-L”, was developed by Aberystwyth University. RadWet-L uses the PALSAR-2 tile data together with ancillary datasets, such as hydrological terrain (HAND) metrics, DEMs and Land Cover maps, to automatically generate training data for open water and inundated vegetation. These data are then used to train an XGBoost machine learning classifier, which is subsequently applied to serial PALSAR-2 tiles across the area of interest. The RadWet-L algorithm and methods for proxy validation are described in [3]. The RadWet-L software is light-weight and transferable, and was run as a Docker image on a 4-core laptop computer. Processing of a single observation cycle over the Amazon Basin, typically comprising some 800-1000 PALSAR-2 image tiles, takes about 5 hours.
The classified tiles featured three main classes: inundated vegetation, open water, and an “other” class comprising masked areas and anything not classified as either of the first two. The tiles were subsequently mosaicked into 90 basin-wide inundation extent maps, one for each PALSAR-2 observation cycle. The nine inundation extent maps for each year were in turn merged into annual inundation duration maps, where the value for each pixel represents the number of times that particular geographic location has been classified as inundated vegetation in the annual time-series. The inundation duration maps thus describe both the spatial and the temporal characteristics of inundation and constitute a unique new data source for forested wetlands. Potential applications include, amongst others, assessment of inundation patterns and change trends, ecosystem stratification and habitat mapping, and input to regional models for CH4 and other trace gas emissions, thus addressing information needs of international conventions and frameworks, such as Ramsar, the CBD GBF, and, not least, SDG Indicator 6.6.1. At the time of writing (30 Nov 2024) the Amazon data processing has just been completed and analysis of the extent and duration data is about to begin. Next steps in the project also include extension to other significant wetland areas, including the Congo river basin, the Pantanal and Sudd wetlands, and forested wetlands in Southeast Asia, as well as extending the time-series by use of ALOS PALSAR data from 2007-2010. Initial results from these studies will be presented at the session.
Acknowledgement: This work has been undertaken within the framework of the JAXA Kyoto & Carbon Initiative. The ALOS-2 PALSAR-2 ScanSAR data have been provided by JAXA EORC.
[1] Hess L.L., Melack J.M., Affonso A.G., Barbosa C., Gastil-Buhl M. and Novo E.M.L.M. Wetlands of the Lowland Amazon Basin: Extent, Vegetative Cover, and Dual-season Inundated Area as Mapped with JERS-1 Synthetic Aperture Radar. Wetlands (2015) 35:745–756. doi:10.1007/s13157-015-0666-y
[2] Rosenqvist J., Rosenqvist A., Jensen K., and McDonald K. Mapping of Maximum and Minimum Inundation Extents in the Amazon Basin 2014–2017 with ALOS-2 PALSAR-2 ScanSAR Time-Series Data. Remote Sens. 2020, 12, 1326. https://doi.org/10.3390/rs12081326
[3] Oakes, G.; Hardy, A.; Bunting, P.; Rosenqvist, A. RadWet-L: A Novel Approach for Mapping of Inundation Dynamics of Forested Wetlands Using ALOS-2 PALSAR-2 L-Band Radar Imagery. Remote Sens. 2024, 16, 2078. https://doi.org/10.3390/rs16122078
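The per-pixel construction of the annual inundation duration maps described in this abstract amounts to counting inundated-vegetation classifications across a year's observation cycles. A minimal sketch with toy data (the integer class codes are assumptions for illustration):

```python
import numpy as np

def inundation_duration(class_stack, inundated=1):
    """Per-pixel count of observation cycles classified as inundated
    vegetation across one year's stack of classified maps."""
    return (class_stack == inundated).sum(axis=0)

# Toy year: nine observation cycles over a 2 x 2 area
# (assumed codes: 0 = other, 1 = inundated vegetation, 2 = open water)
rng = np.random.default_rng(0)
stack = rng.integers(0, 3, size=(9, 2, 2))
duration = inundation_duration(stack)  # values between 0 and 9 cycles
```

With nine roughly evenly spaced cycles per year, a pixel's count approximates its inundation duration in units of ~6-week intervals.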
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.14)

Session: F.05.05 Copernicus4regions: meet the community of the Copernicus regional and local users and providers

The joint ESA/EC/NEREUS Copernicus4regions initiative has succeeded in establishing a lively community over the years, which meets regularly at events at the European Parliament and at NEREUS regional symposia to showcase regional experiences, including to high-level political representatives (see https://www.nereus-regions.eu/copernicus4regions/). Currently, NEREUS is working on the selection of new Copernicus User Stories, to refresh and enrich its broad collection of examples, as well as on the organisation of events at the European Parliament and at the Committee of the Regions. This early breakfast provides a chance to meet the community and discover inspiring new user stories that showcase the impact of Copernicus on citizens and on the work of regional administrations.

Moderators:


  • Alessandra Tassa - ESA
  • Roya Ayazi - NEREUS
  • Margarita Chrysaki - NEREUS

Speakers:


  • Macjek Mysliviek - Space Agency
  • Marcel Simoner - UIV Urban Innovation Vienna GmbH
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Session: A.01.05 Ozone and its precursors through the Atmosphere: Advances in understanding and methods

Ozone is a fundamentally important constituent of the atmosphere. In the troposphere it is a greenhouse gas and a pollutant detrimental to human health and to crop and ecosystem productivity; tropospheric data are available from ozonesondes, aircraft, and satellites, but high levels of uncertainty and bias remain. In the stratosphere, ozone protects the biosphere from UV radiation, and long-term observations from satellites and the ground have confirmed that the long-term decline of stratospheric ozone was successfully halted as a result of the Montreal Protocol. Future stratospheric ozone levels depend on many factors, which vary with latitude and include interactions with the troposphere and potentially the mesosphere.

This session is devoted to the presentation of methods and results for furthering the understanding of the distribution of ozone and its precursors throughout the atmosphere using remote sensing techniques, with particular emphasis on advanced methods for past and current missions such as OMI and Sentinel-5P, and on preparations for future missions such as ALTIUS and Sentinels 4 & 5 and their synergies with other missions.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Machine Learning to Construct Daily, Gap-Free, Long-Term Stratospheric Trace Gases Data Sets

Authors: Sandip Dhomse, Professor Martyn Chipperfield
Affiliations: University Of Leeds
Understanding the complex relationship between ozone and the various trace gases influencing its concentration necessitates continuous and reliable datasets. However, obtaining comprehensive long-term profiles for key trace gases is a significant challenge. Our research addresses this issue by merging data from a Chemical Transport Model (CTM) and satellite instruments (HALOE and ACE-FTS). This integration results in the creation of daily, gap-free datasets for six crucial gases: ozone (O3), methane (CH4), hydrogen fluoride (HF), water vapour (H2O), hydrogen chloride (HCl), and nitrous oxide (N2O) from 1991 to 2021. Chlorofluorocarbons (CFCs) are a critical source of the chlorine that controls stratospheric ozone losses. Currently, ACE-FTS is the only instrument providing sparse but daily measurements of these gases. Monitoring changes in these ozone-depleting substances, which are now banned, helps assess the effectiveness of the Montreal Protocol. As a subsequent step, we have initiated the construction of gap-free stratospheric profile data for CFC-11. We use an XGBoost regression model to estimate the relationship between various tracers in a CTM and the differences between the CTM output field and the observations, assuming all errors are due to the CTM setup. Once the regression model is trained on observational collocations, it is used to estimate biases for all the CTM grid points. To enhance accuracy, we employed various regression models and found that XGBoost outperforms the other methods. ACE-FTS v5.2 data (2004-present) are used to train (70%) and test (30%) the XGBoost performance. Our results demonstrate excellent agreement between the constructed profiles and satellite measurement-based datasets. Biases in the TCOM data sets, when compared to evaluation profiles, are consistently below 10% at mid-to-high latitudes and below 50% at low latitudes, across the stratosphere.
The constructed daily zonal mean profile datasets, spanning altitudes from 15 to 60 km (or pressure levels from 300 to 0.1 hPa), are publicly accessible through Zenodo repositories:

CH4: https://doi.org/10.5281/zenodo.7293740
N2O: https://doi.org/10.5281/zenodo.7386001
HCl: https://doi.org/10.5281/zenodo.7608194
HF: https://doi.org/10.5281/zenodo.7607564
O3: https://doi.org/10.5281/zenodo.7833154
H2O: https://doi.org/10.5281/zenodo.7912904
CFC-11: https://doi.org/10.5281/zenodo.11526073
CFC-12: https://doi.org/10.5281/zenodo.12548528
COF2: https://doi.org/10.5281/zenodo.12551268

In an upcoming iteration, we are enhancing the algorithm (e.g. hyperparameter tuning, feature engineering, neural networks) as well as adding more species to the current setup. We believe these datasets will provide valuable insights into the dynamics of stratospheric trace gases, furthering our understanding of their behaviour and impact on the ozone layer.

References:
Dhomse, S. S., et al.: ML-TOMCAT: machine-learning-based satellite-corrected global stratospheric ozone profile data set from a chemical transport model, Earth Syst. Sci. Data, 13, 5711–5729, https://doi.org/10.5194/essd-13-5711-2021, 2021.
Dhomse, S. S. and Chipperfield, M. P.: Using machine learning to construct TOMCAT model and occultation measurement-based stratospheric methane (TCOM-CH4) and nitrous oxide (TCOM-N2O) profile data sets, Earth Syst. Sci. Data, 15, 5105–5120, https://doi.org/10.5194/essd-15-5105-2023, 2023.
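The bias-correction scheme described in the abstract can be sketched in a few lines. The snippet below uses synthetic stand-in data, and scikit-learn's GradientBoostingRegressor as a stand-in for XGBoost; the feature choices and values are illustrative, not the TCOM configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for CTM tracer fields at observation collocations
# (columns could be, e.g., modelled O3, CH4 and temperature at a grid point).
X = rng.normal(size=(2000, 3))
# Synthetic "CTM minus observation" bias that depends on the tracers.
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.05, size=2000)

# 70/30 train/test split, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

# Once trained on collocations, the model estimates the bias at every CTM
# grid point; the corrected field is the CTM output minus the predicted bias.
predicted_bias = model.predict(X_test)
print("R^2 on held-out collocations:", model.score(X_test, y_test))
```

The corrected, gap-free field is then simply the daily CTM output with the regressed bias subtracted at every grid point.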
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Total Column Ozone Retrieval Using the BTS Array Spectroradiometer and a Custom Double Ratio Technique

Authors: Dr. Luca Egli, Dr. Julian Gröbner, Dr. Eliane Maillard Barras
Affiliations: Physikalisch Meteorologisches Observatorium and World Radiation Center, Davos, Federal Office of Meteorology and Climatology, MeteoSwiss
Over the past five years, PMOD/WRC has developed and extensively validated a groundbreaking system named Koherent for the precise measurement of total column ozone (TCO). This innovative system is built around a compact, cost-efficient, and low-maintenance commercial array spectroradiometer, offering a robust solution for long-term atmospheric monitoring. During its five-year operational period, Koherent demonstrated exceptional reliability, achieving over 99% data acquisition uptime. The system employs a BTS-2048-UV-S-F array spectroradiometer developed by Gigahertz-Optik GmbH. This spectroradiometer is integrated with an optical fiber connected to a lens-based telescope mounted on a sun tracker. This configuration enables the measurement of direct ultraviolet (UV) irradiance in the wavelength range of 305–345 nm, a critical band for ozone studies. A key innovation of Koherent is the implementation of the Custom Double Ratio (CDR) technique, a novel algorithm that utilizes four specifically selected wavelengths from the spectral data to derive TCO. This algorithm is calibrated against ultraviolet reference instruments, achieving accuracy comparable to that of a single-monochromator Brewer spectrophotometer. Koherent’s flexibility is enhanced by its ability to be field-calibrated during campaigns using reference instruments such as Brewer spectrophotometers. Through a two-point calibration method combined with adjustments to the absorption coefficient and extraterrestrial constant, the system demonstrates excellent agreement with existing TCO monitoring networks. Notably, a five-year comparison of TCO measurements between Koherent and Brewer 156 in Davos, Switzerland, revealed an impressive average agreement within 0.05% ± 0.88%. Beyond ozone concentration, the CDR algorithm enables the determination of effective ozone layer temperature with a daily average precision of 3 K, utilizing parameterizations derived from historical balloon soundings. 
This capability underscores Koherent's multifaceted utility. The integration of a state-of-the-art instrument with an advanced retrieval algorithm makes Koherent a promising candidate for the next generation of TCO monitoring systems. Its high reliability, accuracy, and operational efficiency position it as a valuable tool for global atmospheric studies and long-term environmental monitoring.
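The abstract does not spell out the CDR equations, but a generic four-wavelength double-ratio retrieval can be illustrated as follows. All numbers here (absorption coefficients, extraterrestrial constants, air mass) are hypothetical toy values, not Koherent's calibration.

```python
import numpy as np

# Toy double-ratio retrieval using four wavelengths in the 305-345 nm band.
alpha = np.array([2.0, 0.9, 0.3, 0.1])   # O3 absorption coefficients [1/(atm cm)]
E0 = np.array([1.0, 1.1, 1.2, 1.3])      # extraterrestrial constants

def simulate_irradiance(tco_atm_cm, mu):
    """Beer-Lambert direct irradiance with ozone absorption only (toy model)."""
    return E0 * np.exp(-alpha * tco_atm_cm * mu)

def retrieve_tco(E, mu):
    """Invert the double ratio for total column ozone in atm-cm."""
    dr = np.log((E[0] / E[1]) / (E[2] / E[3]))
    dr0 = np.log((E0[0] / E0[1]) / (E0[2] / E0[3]))
    dalpha = (alpha[0] - alpha[1]) - (alpha[2] - alpha[3])
    return (dr0 - dr) / (dalpha * mu)

mu = 2.0                                  # relative ozone air mass
E = simulate_irradiance(0.300, mu)        # 300 DU = 0.300 atm-cm
print("retrieved TCO:", 1000 * retrieve_tco(E, mu), "DU")
```

The double ratio cancels wavelength-independent attenuation (e.g. clouds, aerosol to first order), which is the usual motivation for combining two wavelength pairs rather than one.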
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Ozone Recovery from Merged Observational Data and Model Analysis (OREGANO)

Authors: Mark Weber, Brian Auffarth, Carlo Arosio, Alexei Rozanov, Andreas Richter, Martyn Chipperfield, Sandip Dhomse, Wuhu Feng, Viktoria Sofieva, Monika Szelag, Andreas Chrysanthou, Kleareti Tourpali, Edward Malina
Affiliations: Institute of Environmental Physics (IUP), University of Bremen, School of Earth and Environment, University of Leeds, Finnish Meteorological Institute (FMI), Aristotle University Thessaloniki, ESA ESRIN
Stratospheric ozone (the "ozone layer") protects the biosphere from harmful ultraviolet (UV) radiation. It is expected to recover due to the Montreal Protocol signed in 1987 and its Amendments regulating the phase-out of ozone-depleting substances (ODS). The amount of stratospheric halogen (mainly bromine and chlorine) released by ODSs reached its maximum abundance in the middle of the 1990s. Observations from satellites and the ground confirmed that the long-term decline of stratospheric ozone was successfully stopped. Future stratospheric ozone levels depend not only on changes in ODS but also on changes in greenhouse gases (GHG) and possibly stratospheric aerosols. The latter modify both the chemistry and dynamics (transport, circulation) of ozone. The rate of ozone recovery thus depends on the geographic region and altitude. According to most chemistry-climate models, ozone in some altitude domains, like the lower tropical stratosphere, will likely continue to decline. At middle latitudes, the current trends in lower stratospheric ozone remain highly uncertain, in part due to larger uncertainties in observational data and larger year-to-year variability in ozone. A clear sign of ozone recovery is evident in the upper stratosphere. The major goal of the OREGANO project is to advance our understanding of ozone recovery using a combination of observations and model analyses.
The following topics will be highlighted in this presentation:

• Long-term ozone column and profile trends up to the end of 2024 from models and observations, in support of the upcoming WMO/UNEP Ozone Assessment;
• Impact of atmospheric dynamics and chemistry on polar and extrapolar ozone;
• Tropospheric ozone trends in support of the IGAC Tropospheric Ozone Assessment Report Phase 2 (TOAR-2);
• Role of tropospheric ozone in column ozone trends;
• Evaluation of the bromine monoxide/chlorine monoxide (BrO-ClO) cycle using nadir BrO and chlorine dioxide (OClO) observations;
• Impact of aerosol and GHG changes on stratospheric ozone trends.

Recommendations for future satellite missions and programs will be made to maintain continued ozone monitoring.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Extension of the S5P-TROPOMI CCD tropospheric ozone retrieval to mid-latitudes

Authors: Swathi Maratt Satheesan, Kai-Uwe Eichmann, Mark Weber
Affiliations: University of Bremen
Tropospheric ozone, a key atmospheric pollutant and greenhouse gas, shows significant spatio-temporal variability on seasonal, inter-annual, and decadal scales, posing challenges for satellite observation systems. Traditional methods like the Convective Cloud Differential (CCD) and Cloud Slicing Algorithms (CSA) are effective for Tropospheric Column Ozone (TCO) retrieval but are typically restricted to the tropical region (20°S-20°N). The CCD approach has been successful with satellite sensors like Aura OMI, MetOp GOME-2, and Sentinel-5 Precursor TROPOMI. In this study, we present the first application of the CCD retrieval method outside the tropical region, introducing CHORA-CCD (Cloud Height Ozone Reference Algorithm-CCD) to retrieve TCO from TROPOMI in the mid-latitudes. The approach uses a local cloud reference sector (CLCD, CHORA-CCD Local Cloud Decision) to estimate the stratospheric (above-cloud) column ozone (ACCO), which is then subtracted from the total column under clear-sky scenes to determine TCO. This method minimizes the impact of stratospheric ozone variations. An iterative process automatically selects an optimal local cloud reference sector around each retrieval grid point, varying the radius from 60 to 600 km to estimate the mean TCO. Due to the prevalence of low-level clouds in mid-latitudes, the estimation of TCO is constrained to the column up to a reference altitude of 450 hPa. In cases where cloud-top heights in the local cloud sector are variable, an alternative approach is introduced to directly estimate the ACCO down to 450 hPa using Theil-Sen regression. This method allows for the combination of the CCD approach with the CSA. The algorithm dynamically selects between CCD and the Theil-Sen method for ACCO estimation based on an analysis of cloud characteristics. The CLCD algorithm is further optimised by incorporating a homogeneity criterion for total ozone, addressing potential inhomogeneities in stratospheric ozone. 
Monthly averaged CLCD-TCOs for the time period from 2018 to 2022 were calculated from TROPOMI for the mid-latitudes (60°S–60°N). The accuracy of the CLCD algorithm was assessed by comparing the retrieved TCO with spatially collocated HEGIFTOM-SHADOZ/WOUDC/NDACC ozonesonde data from thirty-two stations. The validation results demonstrate that TCO retrievals at 450 hPa using the CLCD method show good agreement with the ozonesonde measurements at most stations. This study demonstrates the advantages of using a local cloud reference sector in mid-latitudes, providing an important basis for systematic applications in current and future geostationary satellite missions.
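The Theil-Sen step for estimating the above-cloud column ozone (ACCO) down to the 450 hPa reference level can be sketched with SciPy's robust estimator. The data below are synthetic and the slope, offsets and column values are illustrative only.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(1)

# Synthetic cloudy-scene sample: above-cloud column ozone (ACCO, in DU)
# grows roughly linearly with cloud-top pressure in the free troposphere.
cloud_top_hpa = rng.uniform(300, 700, size=200)
true_slope = 0.02                        # DU per hPa (illustrative value)
acco_du = 280.0 + true_slope * cloud_top_hpa + rng.normal(scale=1.0, size=200)

# Robust Theil-Sen fit of ACCO against cloud-top pressure.
slope, intercept, lo, hi = theilslopes(acco_du, cloud_top_hpa)

# Extrapolate the above-cloud column down to the 450 hPa reference level,
# then subtract it from a clear-sky total column to obtain tropospheric ozone.
acco_450 = intercept + slope * 450.0
total_column_du = 300.0                  # hypothetical clear-sky total column
tco_450 = total_column_du - acco_450
print(f"ACCO(450 hPa) = {acco_450:.1f} DU, TCO = {tco_450:.1f} DU")
```

Theil-Sen (median of pairwise slopes) is preferred over ordinary least squares here because individual cloudy pixels can carry large outlier errors.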
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Synergistic Use of Limb and Nadir Observations for Studying Stratospheric Ozone Intrusions in the Himalayan Region.

Authors: Dr Liliana Guidetti, Dr Erika Brattich, Dr Simone Ceccherini, Dr Michaela Hegglin, Dr Xiaodan Ma, Piera Raspollini, Dr Cecilia Tirelli, Ing Nicola Zoppetti, Ugo Cortesi
Affiliations: IFAC-CNR, Università di Bologna, Forschungszentrum Jülich GmbH ICE-4
Stratospheric intrusions play a crucial role in the exchange of ozone between the stratosphere and troposphere, with significant implications for surface air quality, radiative forcing, and climate dynamics. These events are particularly relevant in regions like the Himalayas, recognized as one of the main hotspots because of its unique topography and complex meteorological conditions. Despite their importance, our comprehension of stratospheric ozone intrusion processes and their impacts on tropospheric ozone variability is still limited, especially due to deficiencies and limitations in our observational networks and modeling techniques. Within this framework, satellite remote sensing observations can help address these limitations by providing extensive spatial and temporal coverage, filling the gaps left by ozonesondes, which, while offering high vertical resolution, suffer from sparse coverage. This study investigates the potential of combining limb-viewing measurements from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) with nadir-viewing observations from the Infrared Atmospheric Sounding Interferometer (IASI) for the detection and characterization of stratospheric intrusion events. The limb observation geometry of MIPAS provides high vertical resolution, which is particularly critical for capturing the fine-scale ozone gradients at the tropopause. On the other hand, the nadir-viewing capabilities of IASI ensure broad horizontal coverage, which complements the spatial limitations of MIPAS. By integrating these two distinct datasets, we aim to harness their complementary strengths, enabling a more comprehensive analysis of stratospheric ozone intrusions. These datasets are fused using the Complete Data Fusion (CDF) method, an algebraic approach rooted in optimal estimation theory. This method harmonizes the individual retrievals from MIPAS and IASI into a unified dataset.
Our study begins by identifying a specific intrusion event that occurred within the overlapping operational period of the two instruments (2008–2012). The fused dataset is then validated using two methodologies: comparisons with model reanalysis profiles, and comparisons with independent ozone measurements from radiosonde profiles, providing insights into its accuracy and reliability. We finally exploit the newly fused dataset in combination with meteorological and composition variables from different models, including ERA5 and CAMS reanalyses and EMAC model simulations, in order to gain a more robust understanding of these phenomena. These comparisons highlight the potential of the fused dataset to bridge observational and model-based approaches.
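The core of covariance-weighted fusion can be illustrated in a few lines. This is a simplified sketch: the full Complete Data Fusion method also accounts for averaging kernels and a priori information, and the toy profiles below merely stand in for MIPAS-like (tight errors) and IASI-like (looser errors) retrievals.

```python
import numpy as np

def fuse(x1, S1, x2, S2):
    """Inverse-covariance-weighted fusion of two retrieved profiles."""
    S1_inv = np.linalg.inv(S1)
    S2_inv = np.linalg.inv(S2)
    S_fused = np.linalg.inv(S1_inv + S2_inv)           # fused covariance
    x_fused = S_fused @ (S1_inv @ x1 + S2_inv @ x2)    # fused profile
    return x_fused, S_fused

# Coarse 4-level toy ozone profiles [ppmv] with diagonal error covariances.
x_limb = np.array([1.0, 3.0, 6.0, 8.0])      # "MIPAS-like": small errors
S_limb = np.diag([0.1, 0.1, 0.1, 0.1]) ** 2
x_nadir = np.array([1.4, 2.6, 6.5, 7.5])     # "IASI-like": larger errors
S_nadir = np.diag([0.5, 0.5, 0.5, 0.5]) ** 2

x_f, S_f = fuse(x_limb, S_limb, x_nadir, S_nadir)
print("fused profile:", np.round(x_f, 3))
```

As expected from optimal estimation, the fused profile sits closest to the better-constrained (limb) retrieval, and its error covariance is smaller than either input's.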
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Geostationary Satellites Total Ozone Observations: First Results and Ground-based Validation Efforts for TEMPO and GEMS

Authors: Chris McLinden, Xiaoyi Zhao, Debora Griffin, Vitali Fioletov, Xiong Liu, Junsung Park, Irina Petropavlovskikh, Tom Hanisco, James Szykman, Lukas Valin, Alexander Cede, Martin Tiefengraber, Manuel Gebetsberger, Itaru Uesato, Xiangdong Zheng, Soi Ahn, Limseok Chang, Won-Jin Lee, Jae Hwan Kim, Kanghyun Baek, Alberto Redondas, Masatomo Fujiwara, Ting Wang, Sum Chi Lee
Affiliations: Environment and Climate Change Canada, Harvard & Smithsonian Astrophysical Observatory, NOAA, NASA, US-EPA, LuftBlick, Japan Meteorological Agency, Chinese Meteorological Agency, National Institute of Environmental Research, Pusan National University, State Meteorological Agency, Hokkaido University, Institute of Atmospheric Physics Chinese Academy of Sciences
The Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite instrument, launched in April 2023, is the first geostationary atmospheric monitoring instrument over North America. It forms part of a global geostationary constellation with Asia’s Geostationary Environment Monitoring Spectrometer (GEMS), launched in 2020, and Europe’s upcoming Sentinel-4. TEMPO and GEMS offer hourly, high-resolution air pollution and ozone monitoring from space, improving on the once-daily observations of instruments like the TROPOspheric Monitoring Instrument (TROPOMI). This study presents the analysis of TEMPO’s total ozone data, demonstrating TEMPO’s ability to observe sudden changes in ozone (and thus UV index). Further, the first validation of TEMPO and GEMS ozone is presented using ground-based networks (Brewer, Dobson, and Pandora). Results show good correlations between the geostationary datasets and ground observations, but also highlight latitude-dependent discrepancies (-2% to 2% for TEMPO, -1% to -3% for GEMS) and solar zenith angle (SZA) dependency issues. Both TEMPO and GEMS data require SZA corrections, though the magnitude of these corrections differs. After applying SZA corrections, both instruments show good agreement with ground-based measurements in capturing diurnal variations. For latitude dependency, TEMPO data can be effectively corrected using a viewing zenith angle (VZA) empirical approach, as it lacks pronounced seasonal variation. In contrast, GEMS exhibits latitude dependency with a seasonal component, necessitating more advanced correction methods in future work. Findings are further validated using TROPOMI and reanalysis data sets (ECMWF’s ERA5 and NASA GMAO’s MERRA-2). Overall, the data quality (accuracy and precision) from these geostationary satellite instruments appears to be good, indicating their potential for reliable ozone and UV index monitoring.
While these new satellite instruments can observe ozone well, some issues affecting data quality can be identified, especially when they are compared to the more established polar-orbiting satellite instruments.
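An empirical angle-dependent correction of the kind described above can be sketched as a polynomial fit of the satellite-minus-ground bias against SZA. The bias model and noise level below are synthetic and purely illustrative, not TEMPO or GEMS statistics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic matchups: satellite-minus-ground relative bias (%) that drifts
# with solar zenith angle, mimicking an SZA-dependency of the retrieval.
sza_deg = rng.uniform(20, 75, size=500)
bias_pct = 0.001 * (sza_deg - 40) ** 2 + rng.normal(scale=0.3, size=500)

# Empirical correction: fit a low-order polynomial to bias vs. SZA
# and subtract the fitted curve from the satellite retrievals.
coeffs = np.polyfit(sza_deg, bias_pct, deg=2)
corrected = bias_pct - np.polyval(coeffs, sza_deg)

print("residual mean bias [%]:", corrected.mean())
print("residual std       [%]:", corrected.std())
```

The same fit-and-subtract pattern applies to the VZA correction mentioned for TEMPO, with the viewing zenith angle replacing SZA as the predictor.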
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Session: B.04.01 Satellite based terrain motion mapping for better understanding geohazards. - PART 1

Better understanding geohazards (such as landslides, earthquakes, volcanic unrest and eruptions, coastal lowland hazards and inactive-mine hazards) requires measuring terrain motion in space and time, including at high resolution, with multi-year historical analysis and continuous monitoring. Several EO techniques can contribute, depending on the context and the type of deformation phenomena considered, and some can provide wide-area mapping (e.g. thanks to Sentinel-1). Advanced InSAR and pixel offset tracking using radar imagery, including newly available missions with different sensing frequencies (e.g. L-band), can help provide relevant geoinformation. The same is true of optical stereo-viewing and optical correlation techniques, including for wide-area mapping. There is a need to assess new EO techniques for retrieving such geoinformation both locally and over wide areas, and to characterise their limitations. New processing environments able to access and process large data stacks have increased user awareness, acceptance and adoption of EO, and have created opportunities for collaboration, including co-development and increased combination of data sources and processing chains. With this in mind, we need to understand the agenda of geohazard user communities and the barriers to reaching their goals.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: InSAR-based regional land subsidence risk assessment in the Emilia Romagna (Italy)

Authors: Roberta Bonì, Andrea Taramelli, Leila Goliraeisi, Dr. Francesca Cigna, Prof Pietro Teatini, Dr Roberta Paranunzio, Dr Claudia Zoccarato
Affiliations: Department of Science, Technology and Society (STS), University School for Advanced Studies (IUSS), Institute of Atmospheric Sciences and Climate (ISAC), National Research Council (CNR), Department of Civil, Environmental and Architectural Engineering (ICEA), University of Padua (UNIPD)
According to the Sendai Framework for Disaster Risk Reduction, risk is a function of the combined effects of hazards, the assets or individuals exposed to these hazards, and the vulnerability of those exposed elements. Priority 1 of this framework focuses on "Understanding Disaster Risk". The Sendai Framework for Disaster Risk Reduction 2015-2030 report highlights the growing exposure of people and assets across all countries, which is increasing at a rate that outpaces the reduction of vulnerability, leading to new risks and a consistent rise in socio-economic and environmental losses. Urban areas constructed in regions affected by groundwater pumping-induced subsidence may experience damage if they cannot support differential settlements beneath their foundations. Therefore, assessing building-level risk in these areas is crucial for enhancing current awareness and informing future urban planning. In this study, we present a new methodology for assessing land subsidence risk at the regional scale. This approach has been developed and tested in the Emilia Romagna region, which is located in the Po Plain, a sedimentary basin characterized by significant ground deformation exhibiting high spatial and temporal variations due to both natural and anthropogenic factors. We utilize measurements of vertical and horizontal ground deformation obtained from Interferometric Synthetic Aperture Radar (InSAR) data, collected between 2018 and 2022 via Copernicus’ European Ground Motion Service, to calculate the hazards associated with differential subsidence. Additionally, for exposure-vulnerability, global datasets, such as the Global Human Settlement Layer and the World Settlement Footprint (WSF) Evolution, along with a regional dataset provided by the Emilia Romagna Region, are employed to determine building types (i.e., residential versus non-residential), periods of construction, and building heights.
The hazard map generated from land subsidence is then combined with the exposure-vulnerability map using a risk matrix to evaluate four risk levels, ranging from very low to very high (R1 to R4). The results of the proposed approach provide a basis for evaluating land subsidence risks in other urbanized areas vulnerable to this phenomenon. This facilitates geohazard assessments and enhances understanding of associated risks. This work is funded by the European Union – Next Generation EU, component M4C2, in the framework of the Research Projects of Significant National Interest (PRIN) 2022 National Recovery and Resilience Plan (PNRR) Call, project SubRISK+ (grant id. P20222NW3E), 2023-2025 (CUP B53D23033400001).
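The matrix combination of hazard and exposure-vulnerability classes amounts to a simple table lookup. The matrix entries below are illustrative, not the calibrated values of the study.

```python
import numpy as np

# Toy risk matrix mapping a subsidence hazard class (rows, 1..4) and an
# exposure-vulnerability class (columns, 1..4) to risk levels R1 (very low)
# through R4 (very high). Entries are illustrative only.
RISK_MATRIX = np.array([
    [1, 1, 2, 2],
    [1, 2, 2, 3],
    [2, 2, 3, 4],
    [2, 3, 4, 4],
])

def risk_level(hazard_class, expvuln_class):
    """Look up the risk level (1-4) for 1-based class indices."""
    return int(RISK_MATRIX[hazard_class - 1, expvuln_class - 1])

# Example: high differential-subsidence hazard over a highly exposed,
# vulnerable building stock -> very high risk (R4).
print("R" + str(risk_level(4, 4)))
```

Applied per pixel or per building footprint, this lookup turns the hazard and exposure-vulnerability maps into a regional risk map.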
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: Estimating Lava Extent and Quantifying Terrain Changes Using Daily Ground Track Repeat SAR Time Series: Fagradalsfjall Volcano, Reykjanes Peninsula, Iceland

Authors: Dr Melanie Rankl, Valentyn Tolpekin, Michael Wollersheim, Qiaoping Zhang, Angel Johnsy, Vincent Drouin
Affiliations: ICEYE OY, Icelandic Met Office
SAR Interferometry (InSAR) technology has been widely used in volcanic activity monitoring, including assessment of eruption risk and location prediction using deformation InSAR, monitoring eruptions and assessing the impact of lava flows using coherent change detection, and estimating the volume of hardened lava using topographic InSAR [1-2]. In this presentation we demonstrate the unique advantage of high-revisit SAR time series collected by ICEYE’s Daily Ground Track Repeat (DGTR) configuration for lava progression mapping during the major eruption of Fagradalsfjall Volcano, Reykjanes Peninsula, Iceland in 2021. On March 19th, 2021, a volcanic eruption began in Geldingadalur, close to Mount Fagradalsfjall, ending an 800-year pause in eruptive activity on the Reykjanes Peninsula. Prior to the eruption, increased seismicity had been recorded on the peninsula since mid-December 2019. The series of earthquakes culminated in a magnitude MW 5.64 event on February 24, 2021, followed by a high rate of deformation due to inflow of magma into a vertical dyke. However, deformation and seismicity gradually decreased prior to the onset of the eruption on March 19th, 2021 [3]. The eruption lasted until September 2021. It was relatively small compared to other eruptions in Iceland. However, it had a bigger impact and proved more challenging for local civil protection agencies than other eruptions of its size, as it was easily accessible and attracted 356,000 tourists [4]. In this presentation we focus on ICEYE’s DGTR SAR time series covering the period March 2021 until March 2022, with a total of 296 Spotlight acquisitions (1 m ground resolution). The SAR images collected by DGTR have matching geometry, radiance and phase, and offer a unique advantage for frequent and persistent monitoring of natural catastrophes. With this time series we were able to monitor the full event in 2021 with a coherent acquisition almost every day.
We show results of lava extent changes using amplitude and coherent change detection, and changes in the volume of deposited lava at different phases of the eruption using InSAR-derived Digital Elevation Models (DEMs). The latter have been derived from multiple image pairs with 1-day temporal baseline and varying interferometric baselines, and thus varying altitude of ambiguity. The most suitable interferometric baselines are presented and discussed. ICEYE DEM results are evaluated using aerial surveys performed multiple times in the same period. In addition, we present methods to mitigate the atmospheric phase delay on the phase signal. Without mitigating the atmospheric delay on the phase, significant errors might affect InSAR-derived measurements of height and velocity [5,6]. Thanks to the long DGTR SAR time series we use Persistent Scatterer Interferometry and Distributed Scatterer Interferometry to extract the Atmospheric Phase Screen by modeling it over time [7,8]. We also show results from averaging multiple DEMs derived from many interferograms in order to correct for the atmospheric noise.

References:
[1] Zhang, Q., Tolpekin, V., Wollersheim, M., Angeluccetti, I., Ferner, D., Fischer, P., 2022. Daily Repeat Pass Spaceborne SAR Interferometry for La Palma Volcano Monitoring [conference presentation abstract]. 10th International Conference on Agro-Geoinformatics and 43rd Canadian Symposium on Remote Sensing, July 11-14, 2022, Quebec City, Canada.
[2] Drouin, V., Tolpekin, V., Parks, M., Sigmundsson, F., Leeb, D., Strong, S., Hjartardóttir, Á., Geirsson, H., Einarsson, P., Ófeigsson, B., 2022. Conduits feeding new eruptive vents at Fagradajsfjall, Iceland, mapped by high-resolution ICEYE SAR satellite in a daily repeat orbit. EGU22 General Assembly, May 23-27, 2022, Vienna, Austria.
[3] Sigmundsson, F., Parks, M., Hooper, A., Geirsson, H., Vogfjörd, K. S., Drouin, V., Ófeigsson, B., Hreinsdóttir, S., Hjaltadóttir, S., Jónsdóttir, K., Einarsson, P., Barsotti, S., Horálek, J., Ágústsdóttir, T., 2022. Deformation and seismicity decline before the 2021 Fagradalsfjall eruption. Nature 609, 523–528. https://doi.org/10.1038/s41586-022-05083-4.
[4] Barsotti, S., Parks, M.M., Pfeffer, M.A., Óladóttir, B.A., Barnie, T., Titos, M., Jónsdóttir, K., Pedersen, G., Hjartardóttir, Á. R., Stefansdóttir, G., Johannsson, T., Arason, Þ., Gudmundsson, M. T., Oddsson, B., Þrastarson, R. H., Ófeigsson, B. G., Vogfjörd, K., Geirsson, H., Hjörvar, T., von Löwis, S., Petersen, G. N., Sigurðsson, E. M., 2023. The eruption in Fagradalsfjall (2021, Iceland): how the operational monitoring and the volcanic hazard assessment contributed to its safe access. Nat Hazards 116, 3063–3092. https://doi.org/10.1007/s11069-022-05798-7.
[5] Li, Z., Duan, M., Cao, Y., Mu, M., He, X., Wei, J., 2022. Mitigation of time-series InSAR turbulent atmospheric phase noise: A review. Geodesy and Geodynamics, Volume 13, Issue 2, Pages 93-103.
[6] Li, Z., Cao, Y., Wei, J., Duan, M., Wu, L., Hou, J., Zhu, J., 2019. Time-series InSAR ground deformation monitoring: Atmospheric delay modeling and estimating. Earth-Science Reviews, Volume 192, Pages 258-284.
[7] Hanssen, R.F., 2001. Radar interferometry: data interpretation and error analysis (Vol. 2). Springer Science & Business Media.
[8] Liu, S., Hanssen, R.F., Samiei-Esfahany, S., Hooper, A., Van Leijen, F.J., 2011. Separating non-linear deformation and atmospheric phase screen (APS) for InSAR time series analysis using least-squares collocation. In Proceedings of the Advances in the Science and Applications of SAR Interferometry, ESA Fringe 2009 Workshop, ESA.
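The dependence of DEM sensitivity on the interferometric baseline mentioned above follows from the height of ambiguity, h_a = λ R sin(θ) / (2 B⊥) for repeat-pass InSAR. The quick numeric sketch below uses illustrative X-band parameters, not the actual Fagradalsfjall acquisition geometry.

```python
import numpy as np

# Illustrative repeat-pass InSAR geometry for an X-band sensor.
wavelength_m = 0.031        # X-band wavelength
slant_range_m = 600e3       # slant range (assumed)
incidence_deg = 30.0        # incidence angle (assumed)
b_perp_m = 150.0            # perpendicular baseline (assumed)

# Height of ambiguity: elevation change producing one 2*pi topographic fringe.
h_a = (wavelength_m * slant_range_m * np.sin(np.radians(incidence_deg))
       / (2 * b_perp_m))

# Converting unwrapped topographic phase to height above the reference surface.
phase_rad = 4.2
height_m = h_a * phase_rad / (2 * np.pi)
print(f"height of ambiguity: {h_a:.1f} m; {phase_rad:.1f} rad -> {height_m:.1f} m")
```

Larger perpendicular baselines shrink h_a, making the DEM more sensitive to topography but also more sensitive to phase noise and unwrapping errors, which is the trade-off behind selecting "best suitable" baselines.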
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: Studying the dike intrusion in the Fentale volcano (Ethiopia) via DInSAR and seismic data

Authors: Fernando Monterroso, Derek Keir, Alessandro La Rosa, Carolina Pagli, Hua Wang, Atalay Ayele, Elias Lewi, Martina Raggiunti, Manuela Bonano, Claudio De Luca, Pasquale Striano, Michele Manunta, Francesco Casu
Affiliations: IREA CNR, School of Ocean and Earth Science, University of Southampton, Department of Earth Sciences, University of Florence, Department of Earth Sciences, University of Pisa, College of Natural Resources and Environment, South China Agricultural University, Institute of Geophysics, Space Science and Astronomy (IGSSA), Addis Ababa University, National Institute for Geophysics and Volcanology (INGV), IREA CNR
In this study, we measured and modeled the ground displacements of the Earth's surface during an intense phase of magmatic and seismic unrest that occurred in September-November 2024 at the Fentale volcano in the northern Main Ethiopian Rift (MER), a young continental rift extending at 5 mm/yr. We used DInSAR displacement maps and seismic records during the Fentale intrusion in the MER to analyze the behavior of magma-assisted rifting at slow extension rates. The data indicate that a ~10 km-long dike of magma was injected into the rift, causing it to widen by 2 meters in three and a half weeks. The upper-crustal diking began propagating northward along the rift from mid-September 2024 and was accompanied by seismic activity. Prior to the intrusion, from January 2021 to June 2024, the Fentale volcanic complex had experienced an uplift of up to 6 cm. We used descending Sentinel-1 acquisitions (Track 79) from the European Copernicus Program and ascending data from the Italian COSMO-SkyMed constellation to measure the line-of-sight (LOS) surface displacement in the Fentale region. The interferograms were then inverted to quantify the spatio-temporal pattern of the dike intrusion and fault kinematics. DInSAR source modeling revealed an initial 3 km-long intrusion along the dike, starting 10 km northeast of the Fentale volcano. This intrusion was accompanied by low-intensity seismic activity. Between September 24th and October 18th, 2024, the deformation progressively expanded northward, becoming more complex. Our models suggest that the dike's opening increased to 2 meters and extended 8 km northward, accompanied by faulting above the dike and beyond its northern end. We complemented our models with analysis of global and local seismic recordings, suggesting that dike propagation and opening accelerated from late September through the first week of October. DInSAR models indicate that the dike opening accounts for over 90% of the total geodetic moment release.
However, the models also require the presence of normal faults to fully explain the observed deformation field. This evidence suggests that rapid magma movements, occurring approximately every hundred years, play a significant role in continental separation, even in young rifts extending at slow rates. Further developments will be shown at the meeting.
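The geodetic moment quoted above can be reproduced to order of magnitude from the dike dimensions in the abstract (~10 km length, ~2 m opening); the down-dip extent and shear modulus in this sketch are assumed values, not the modelled ones.

```python
import numpy as np

# Geodetic moment of a tensile dike: M0 = mu * area * mean opening.
mu_pa = 3.0e10              # shear modulus (assumed 30 GPa)
length_m = 10e3             # dike length from DInSAR modelling (abstract)
depth_extent_m = 5e3        # down-dip extent (assumed)
opening_m = 2.0             # modelled opening (abstract)

m0 = mu_pa * (length_m * depth_extent_m) * opening_m

# Equivalent moment magnitude (Hanks & Kanamori scale).
mw = (2.0 / 3.0) * (np.log10(m0) - 9.1)
print(f"geodetic moment: {m0:.2e} N*m, equivalent Mw = {mw:.2f}")
```

Comparing this geodetic moment with the cumulative seismic moment from the earthquake catalog is what supports the statement that the dike opening dominates the total moment release.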
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: Geodetic Imaging as Monitoring Component of Santorini Volcano Observatory

Authors: Michael Foumelis, Dr Elena Papageorgiou, Costas Papazachos, Georgios E. Vougioukalakis, Christos Pikridas, Stylianos Bitharis, Jose Manuel Delgado Blasco, Giorgos Siavalas, Fabrice Brito, Fabrizio Pacini
Affiliations: Aristotle University Of Thessaloniki (AUTh), Hellenic Survey of Geology & Mineral Exploration (HSGME), Terradue s.r.l.
We are advancing efforts to complement Santorini Volcano's seismological, GNSS, and in-situ monitoring networks operated by ISMOSAV. The goal is to strengthen our near real-time capabilities in detecting unrest signals and documenting events previously undetectable without proper instrumentation or systematically acquired Earth Observation data. Utilizing Interferometric SAR measurements through both platform-based and in-house automated processing schemes, we address this requirement. Presently, the volcano seems to be in a post-unrest deflation since the 2011 unrest event. The ongoing development of a multi-parametric monitoring system, incorporating satellite, seismological, and in-situ observations, aims to robustly characterize the volcano's state. Our objective is to deepen our understanding of volcano’s dynamics and enhance alert capabilities for potential unrest. The system incorporates a user-friendly web interface for result visualization and dissemination, tailored to individual information needs. The Institute for the Study and Monitoring of the Santorini Volcano (ISMOSAV) is a non-profit organization established in the summer of 1995. Its primary objective is to continue the operation of the Volcanological Observatory and volcano monitoring networks. The main goal of ISMOSAV is to advance volcanic research on the island, specifically focusing on achieving the most accurate assessment of volcanic phenomena and increasing the likelihood of precise prediction of any future volcanic eruption. ISMOSAV operates a comprehensive monitoring system, crucial for timely prediction of a potential volcanic eruption. The permanent monitoring system (Fig. 
1) incorporates local seismic and GNSS networks, as well as in-situ instruments that measure CO2 emissions and temperature at various depths, established and maintained mainly by the Aristotle University of Thessaloniki (AUTh) and the Hellenic Survey of Geology & Mineral Exploration (HSGME) with the support of local authorities. As part of its activities, ISMOSAV undertakes various communication initiatives to improve the local community's comprehension of the volcano and to raise awareness of its behavior. To further enhance the monitoring capabilities of the observatory, a geodetic imaging component based on InSAR was added to the existing ISMOSAV monitoring system. The InSAR monitoring component comprises distinct solutions addressing both long-term monitoring and rapid response. For the long-term monitoring of Santorini volcano, the SNAPPING service of AUTh, integrated on the GEP platform, is used. The operational SNAPPING services generate average Line-of-Sight (LoS) motion rate maps and displacement time series based on the Persistent Scatterer Interferometry (PSI) technique at both reduced spatial resolution (PSI Med; approx. 100 m) and full sensor resolution (PSI Full). A well-defined set of processing parameters, optimized for the specific environment, is defined, and a dedicated application on GEP is designed to execute SNAPPING PSI Med in a monitoring framework based on a user-defined temporal step. The rapid-response solution is triggered when InSAR observations, real-time GNSS and in-situ networks provide relevant indications. During unrest events, near-real-time PSI solutions can be generated whenever a new Sentinel-1 acquisition becomes available, based on both the online hosted SNAPPING service and in-house multi-temporal interferometric processing. 
Additionally, conventional differential interferograms are generated to provide access to basic interferometric products, such as wrapped interferograms, alongside the advanced measurements for which specific processing assumptions are made. In this process, products are created and published at various spatial resolutions, with access levels tailored to the characteristics of each user. Following the testing of the system and the assessment of its operational performance, our next steps focus on developing a web graphical interface for the visualization and dissemination of the results. This will include user-friendly tools for basic exploitation of the measurements, co-visualization together with other data and, finally, dissemination options. The interface will address hierarchical access to measurements, customized according to the level of information required by each user.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: On-demand Sentinel-1 Interferogram Generation Service for Monitoring of Volcano Deformation

Authors: Raphael Grandin, Marie Boichu, Théo Mathurin, Nicolas Pascal, Roland Akiki, Jérémy Anger, Carlo de Franchis
Affiliations: Institut de physique du globe de Paris, Univ. Paris Cité, UMR 7154, CNRS, Laboratoire d'Optique Atmosphérique, UMR 8518, ICARE Data and Services Center, CNRS, CNES UMS 2877, Kayrros SAS
Sentinel-1 interferograms now represent standard products to assess the extent and magnitude of ground deformation in volcanic areas. These measurements provide quantitative constraints on the volume of material accumulating at depth in the magma reservoir, which is essential to anticipate the magnitude of an impending eruption [1]. However, the typical area of interest (AOI) for volcano analysis is much smaller than the 250 km x 200 km Interferometric Wide (IW) Sentinel-1 SAFE products, the latter spanning three adjacent subswaths and around 30 bursts. Existing InSAR services, such as the SNAPPING service on the Geohazards Exploitation Platform (GEP) [2] or CNES-FormaTerre Flatsim [3], can process entire Sentinel-1 products but do not allow users to narrow down the processing to a small AOI within a single or a few consecutive bursts. To complement these existing general-purpose services, there is a need for an efficient and flexible tool capable of responding to the specific needs of volcano monitoring strategies. Institut de physique du globe de Paris (IPGP) and Université de Lille are developing an online open-access service for on-demand computation of Sentinel-1 interferograms over small AOIs centered on volcanic areas, accessible through a web application. The service back-end is deployed redundantly on the computing cluster S-CAPAD (IPGP) and in the AERIS/ICARE facility (Université de Lille). It relies on the EOS-SAR Python library developed at Kayrros. EOS-SAR implements an accurate Sentinel-1 geometric model [4] accounting for fine timing corrections, which enables native co-registration and stitching of bursts, resulting in a time series of well-aligned, geometrically consistent Sentinel-1 burst mosaics. The processing can be restricted to arbitrarily small AOIs, within a single or a few consecutive bursts and adjacent sub-swaths, which saves time, computing resources and storage space. 
The service leverages the Copernicus Data Space Ecosystem (CDSE) S3 object storage service for efficient data access. A Sentinel-1 image crop, located within a burst, can be read from a sub-swath measurement TIFF file stored on S3 through a single HTTP range request. The service front-end lets users select a volcano of interest, a Sentinel-1 ground track, and the list of dates to process. Existing ground tracks and dates for the selected volcano are retrieved from CDSE catalog APIs. Once the selection is made, a configuration file with the input parameters is sent to the back-end and triggers the processing. After processing completion, results are returned to the user via the interactive web interface, and products (interferograms, coherence maps, orbital fringes, topographic fringes, amplitude maps, etc.) can be downloaded. Planned developments include the optional correction of the atmospheric phase delay from the ERA-5 atmospheric model [5], retrieved via the Copernicus Climate Data Store API. The Sentinel-1 interferogram generation service for volcanic areas is developed as part of the “Volcano Space Observatory” platform, funded in the framework of the Horizon Europe EOSC FAIR-EASE project [6], led by the French Research Infrastructure “Data Terra”. The service aims to offer a practical and efficient solution for the on-demand processing of InSAR products on volcanic targets. Anticipated end-users of the service include volcano observatory teams, scientists and researchers from academia, and students training in the field of volcanology and remote sensing. - - - - - - - - - - - - - - - References [1] Shreve, T., Grandin, R., Boichu, M., Garaebiti, E., Moussallam, Y., Ballu, V., ... & Pelletier, B. (2019). From prodigious volcanic degassing to caldera subsidence and quiescence at Ambrym (Vanuatu): The influence of regional tectonics. Scientific Reports, 9(1), 18868. [2] Foumelis, M., Delgado Blasco, J. 
M., Brito, F., Pacini, F., Papageorgiou, E., Pishehvar, P., & Bally, P. (2022). SNAPPING Services on the Geohazards Exploitation Platform for Copernicus Sentinel-1 Surface Motion Mapping. Remote Sensing, 14(23), 6075. [3] Thollard, F., Clesse, D., Doin, M. P., Donadieu, J., Durand, P., Grandin, R., ... & Specht, B. (2021). FLATSIM: The ForM@Ter large-scale multi-temporal Sentinel-1 interferometry service. Remote Sensing, 13(18), 3734. [4] Akiki, R., Anger, J., de Franchis, C., Facciolo, G., Morel, J. M., & Grandin, R. (2022, July). Improved Sentinel-1 IW Burst Stitching through Geolocation Error Correction Considerations. In IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium (pp. 3404-3407). IEEE. [5] Jolivet, R., Grandin, R., Lasserre, C., Doin, M. P., & Peltzer, G. (2011). Systematic InSAR tropospheric phase delay corrections from global meteorological reanalysis data. Geophysical Research Letters, 38(17). - - - - - - - - - - - - - - - Acknowledgements Support from the AERIS/ICARE Data and Services Centre, for the co-development of the “Volcano Space Observatory” platform, and from the Horizon Europe FAIR-EASE Project (Grant 101058785) is acknowledged. - - - - - - - - - - - - - - -
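The burst-level data access described in the abstract above can be illustrated with a small sketch. Assuming the byte offset and length of the desired crop within the measurement TIFF are already known (in practice they would come from the TIFF/COG index structures), a single HTTP range request suffices. The URL and byte offsets below are purely hypothetical, not real CDSE paths:

```python
import urllib.request

def build_range_request(url: str, offset: int, length: int) -> urllib.request.Request:
    """Build an HTTP GET limited to `length` bytes starting at `offset`.

    A server that honours the Range header replies with status 206
    (Partial Content) and only the requested byte span, so a small
    crop can be read without downloading the full measurement file.
    """
    req = urllib.request.Request(url)
    # HTTP byte ranges are inclusive: bytes=first-last
    req.add_header("Range", f"bytes={offset}-{offset + length - 1}")
    return req

# Hypothetical object-storage URL and crop location (illustration only):
req = build_range_request(
    "https://example-s3-endpoint/s1-iw-slc/measurement/iw2-vv.tiff",
    offset=1_048_576,   # byte offset of the crop within the TIFF
    length=262_144,     # number of bytes to read
)
print(req.get_header("Range"))  # bytes=1048576-1310719
```

Issuing many such small requests, one per crop and date, is what lets the processing stay confined to the AOI instead of pulling entire SAFE products.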
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: When radar observation is needed: Unravelling long-term spatiotemporal deformation and hydrological triggers of slow-moving reservoir landslides

Authors: Fengnian Chang, Dr. Shaochun Dong, Dr. Hongwei Yin
Affiliations: School of Earth Sciences and Engineering, Nanjing University, COMET, School of Earth and Environment, University of Leeds
Active landslides pose significant global risks, underscoring the need for precise displacement monitoring for effective geohazard management and early warning. In China’s Three Gorges Reservoir Area (TGRA)—a pivotal section of the world's largest water conservancy project—unique hydrogeological conditions and reservoir operations have triggered thousands of landslides [1]. Many of these landslides exhibit a north-south orientation and are covered by seasonal vegetation, posing challenges to conventional remote sensing-based displacement monitoring, especially in estimating three-dimensional (3D) deformation and long-term displacement time series. To address these challenges, we propose a framework that integrates interferometric synthetic aperture radar (InSAR), pixel offset tracking (POT), stacking, and a topography-constrained model. This approach leverages phase and amplitude information from multi-platform, multi-band SAR datasets (i.e., L-band ALOS-1, C-band Sentinel-1, and X-band TerraSAR-X). Using this framework, we investigated the long-term spatiotemporal deformation and evolution mechanisms of two slow-moving, north-south-oriented reservoir landslides in the TGRA. First, we applied the SBAS InSAR method to ascending ALOS-1 (2007–2010) and Sentinel-1 (2015–2021) data to reconstruct LOS displacement velocities and time series [2]. POT and stacking methods were used with TerraSAR-X data (2019–2021) to derive azimuth velocities [3]. Due to the exclusive availability of ascending orbit data in the study area, we combined LOS and azimuth velocities during the overlapping period under the surface parallel motion (SPM) assumption to reconstruct average 3D velocity fields of the landslides [4]. From these fields, we determined the average sliding direction for each landslide pixel over the monitoring period, which served as a reference for projecting LOS displacement time series. 
This approach enabled the reconstruction of the actual landslide deformation evolution along the average sliding direction. By introducing temporal constraints, we bridged a five-year observation gap between ALOS-1 and Sentinel-1, reconstructing—for the first time—the 15-year displacement evolution of the landslides pre- and post-reservoir impoundment. Our findings reveal spatiotemporal heterogeneity in landslide deformation driven by hydrologic triggers. The reservoir impoundment in September 2008 induced transient acceleration in both landslides, followed by a relatively stable, step-like deformation pattern influenced by rainfall and reservoir water level (RWL) fluctuations. Finally, using Singular Spectrum Analysis (SSA) [5] and cross-correlation analysis, we quantitatively assessed the response of landslide deformation to hydrologic triggers. Rainfall, with a lag of approximately 20 days, predominantly influenced both landslides, while RWL fluctuations primarily affected deformation at landslide toes. Notably, the impact of RWL diminished with increasing distance from the reservoir, with lag times ranging from 8 to approximately 40 days. This quantitative characterization of landslide responses to hydrologic triggers represents a crucial step towards improved hazard mitigation capabilities. References: [1] Tang, H.; Wasowski, J.; Juang, C.H. Geohazards in the three Gorges Reservoir Area, China–Lessons learned from decades of research. Eng. Geol. 2019, 261, 105267. [2] Berardino, P.; Fornaro, G.; Lanari, R.; Sansosti, E. A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms. IEEE Trans. Geosci. Remote Sensing 2002, 40, 2375-2383. [3] Chang, F.; Dong, S.; Yin, H.; Ye, X.; Zhang, W.; Zhu, H.-h. Temporal stacking of sub-pixel offset tracking for monitoring slow-moving landslides in vegetated terrain. Landslides 2024, 1-17. [4] Samsonov, S.; Dille, A.; Dewitte, O.; Kervyn, F.; d'Oreye, N. 
Satellite interferometry for mapping surface deformation time series in one, two and three dimensions: A new method illustrated on a slow-moving landslide. Eng. Geol. 2020, 266, 105471. [5] Vautard, R.; Ghil, M. Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series. Physica D 1989, 35, 395-424.
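The surface-parallel-motion decomposition used in this abstract can be sketched numerically: one LOS projection, one azimuth projection, and the SPM constraint (velocity component along the local surface normal is zero) give three equations for the three velocity components. The unit vectors and observed values below are illustrative placeholders, not values from the study:

```python
def solve3(A, b):
    """Solve a 3x3 linear system A v = b via Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    result = []
    for i in range(3):
        Ai = [row[:] for row in A]      # replace column i with b
        for r in range(3):
            Ai[r][i] = b[r]
        result.append(det(Ai) / d)
    return result

# Illustrative geometry (east, north, up components of unit vectors):
los = [0.62, -0.11, 0.78]   # radar line-of-sight direction
azi = [-0.17, -0.98, 0.0]   # along-track (azimuth) direction
nrm = [0.26, 0.12, 0.96]    # local surface normal from a DEM

d_los, d_azi = 12.0, -35.0  # observed LOS and azimuth velocities (mm/yr)

# Rows: LOS projection, azimuth projection, SPM constraint v . n = 0
v = solve3([los, azi, nrm], [d_los, d_azi, 0.0])
print(v)  # recovered (east, north, up) velocity
```

The recovered 3D velocity then defines the average sliding direction onto which the denser LOS time series can be projected, as described in the abstract.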
Add to Google Calendar

Tuesday 24 June 09:00 - 10:00 (ESA Agora)

Session: F.02.19 Austrian Space Cooperation Day - Connectivity & Secure Communications, Navigation, Space Safety

The Austrian space community and international testimonials take a kaleidoscopic look at products and services “made in Austria”, highlighting existing collaborations and inviting future ones within international partner networks. With a view to the ESA Ministerial Conference in 2025, the great importance of ESA programmes for maintaining and improving Austria's excellence in space will be illustrated using technological and commercial success stories. In the FFG/AUSTROSPACE exhibition, Earth observation space hardware and software products manufactured in Austria are presented (next to the Agora area and the ESA booth in the Main Entrance Hall).

Speakers:


Keynote


  • Josef Aschbacher - Director General at European Space Agency

Opening statements by Austrospace board


  • Dieter Grebner - Peak Technology
  • Hans Martin Steiner - Terma

Overview of Austrian activities in the area of Navigation, Space Safety and Telecommunications


  • Georg Grabmayr - Beyond Gravity
Add to Google Calendar

Tuesday 24 June 09:00 - 09:20 (EO Arena)

Demo: C.01.27 DEMO - Sen2Like Tool & data harmonization workflow

The Sen2Like demonstration processor has been developed by ESA in the framework of the EU Copernicus programme (https://www.copernicus.eu/).
The main goal of Sen2Like is to generate Sentinel-2-like harmonised/fused surface reflectance products at a higher periodicity, thanks to the integration of additional Sentinel-2-compatible optical mission sensors.
The Sen2Like software meets community expectations for the production of fit-for-purpose multi-source spatiotemporal datasets, so-called Analysis Ready Data (ARD).
With this scope, the Sen2Like software performs standardized pre-processing steps derived from Calibration/Validation algorithms. With this approach, the user is relieved of the complexity of algorithm and software development and implementation and can confidently focus on their own thematic analysis.
The Sen2Like software delivers Copernicus Sentinel-2 L2H/L2F products
(https://sentinels.copernicus.eu/sentinel-data-access/sentinel-products/copernicus-sentinel-2-msi-level-2h-and-level-2f-1). Products are generated for a given temporal period and geographic location as specified by the user.
Basically, the Sen2Like software has been designed as a processing framework: the user is able to configure the processing workflow, and processing algorithms can be selected, removed and, for some of them, tuned. The processing algorithms address many Cal/Val topics, for instance geometric correction, radiometric calibration, spectral correction, BRDF correction, slope correction and data fusion.
One major objective of Sen2Like ARD is to ease the analysis of temporal changes. The Sen2Like processing enables pixel-based analysis even if the data stream comes from different missions. Moreover, the Sen2Like approach enables users to perform multi-year analysis. Finally, harmonization of the data reduces temporal noise and thereby enables detection of short-term changes.
The scope of this training is to demonstrate the added value of the Sen2Like tool in the context of multi-temporal analysis. Use cases are defined in such a way that, for a given location and temporal period, results obtained with different workflows are computed. We will demonstrate that harmonization of data is important for certain application types.
The breakdown of the training is as follows:
• General introduction regarding the sen2like tool
• Definition of processing workflow as part of configuration
• Selected Test data set (Glacier Area, Amazonia, Maricopa Fields …)
• Region of interest definition and use case definition
• Inspect and discuss time series from use case results

Speaker:


  • Sébastien Saunier
Add to Google Calendar

Tuesday 24 June 09:22 - 09:42 (EO Arena)

Demo: C.04.03 DEMO - Handling observations in BUFR format

BUFR (Binary Universal Form for data Representation) is a data format maintained by the WMO. It is self-describing and uses tables to encode a wide variety of meteorological data: land and ship observations; aircraft observations; wind profiler observations; radar data; climatological data. It is therefore used as the primary data format for operational real-time global exchange of weather and satellite observations.

This tutorial is designed to enhance participants' understanding and practical skills in encoding and decoding meteorological data in BUFR. To handle BUFR data efficiently, participants will learn how to use the ecCodes library, developed by the European Centre for Medium-Range Weather Forecasts (ECMWF), and its Python API.

The tutorial begins with a comprehensive introduction to the BUFR format, including its structure and the definition of its descriptors and templates. Participants will learn how to use the ecCodes tools for command-line operations. Through practical exercises, they will learn to decode BUFR messages and extract relevant data by developing Python software for automated data processing.

Additionally, participants will also explore best practices for encoding meteorological datasets in BUFR by applying WMO observations data governance.

By the end of the tutorial, attendees will be equipped with a technical understanding of BUFR and ecCodes, allowing them to use this knowledge efficiently for data processing.
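Even before touching ecCodes, the self-describing structure of BUFR shows up in its fixed eight-octet Section 0: the literal `BUFR`, a 3-octet total message length, and the edition number. A minimal pure-Python sketch (the message below is synthetic, not a real observation):

```python
def parse_bufr_section0(msg: bytes) -> dict:
    """Decode BUFR Section 0: 4 octets 'BUFR', a 3-octet total
    message length (big-endian), and a 1-octet edition number."""
    if msg[:4] != b"BUFR":
        raise ValueError("not a BUFR message")
    total_length = int.from_bytes(msg[4:7], "big")
    edition = msg[7]
    return {"total_length": total_length, "edition": edition}

# Synthetic 20-byte message: Section 0, zero-padded body, and the
# '7777' terminator that closes every real BUFR message (Section 5).
msg = b"BUFR" + (20).to_bytes(3, "big") + bytes([4]) + b"\x00" * 8 + b"7777"
print(parse_bufr_section0(msg))  # {'total_length': 20, 'edition': 4}
```

Real decoding of the table-driven Sections 1-4 is exactly what ecCodes handles, so in practice one would stop here and switch to the library.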

Speaker:


  • Marijana Crepulja - ECMWF
Add to Google Calendar

Tuesday 24 June 09:45 - 10:05 (EO Arena)

Demo: C.01.25 DEMO - DGGS: Scalable Geospatial Data Processing for Earth Observation

#zarr

Objective:
This demonstration will introduce the DGGS (Discrete Global Grid System) framework, highlighting its ability to process and analyze large Earth Observation (EO) datasets efficiently. The demo will focus on DGGS’ scalability, data accessibility, and potential to improve EO workflows by leveraging hierarchical grid structures and efficient data formats like Zarr.

Demonstration Overview:
Introduction to DGGS:
Brief overview of the DGGS framework and its hierarchical grid system designed to handle large-scale geospatial data efficiently.
Application to Earth Observation Data:
Demonstrating DGGS' ability to transform and process EO datasets, with an emphasis on its potential for improved data storage and access.
Visualization and Analytics:
Showcasing basic visualization and analytic capabilities within the DGGS framework, demonstrating its ease of use for EO data exploration.
Future Potential:
Explaining and discussing how DGGS could enhance future EO workflows, particularly for climate monitoring and large-scale environmental data analysis.
Format:
The presenter will guide the audience through the demonstration, highlighting DGGS' features and potential for real-world applications.
A short Q&A session will allow for audience interaction.
Duration:
20-minute slot.
This demonstration will showcase DGGS as a promising tool for scalable and efficient Earth Observation data processing, offering a glimpse into its potential applications and future benefits.
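The hierarchical indexing idea behind a DGGS can be illustrated with a deliberately simplified sketch: recursively halving a lat/lon extent yields a quadtree-style cell identifier whose prefix at any shorter length is the parent cell. This toy grid is only an illustration of the hierarchy; real DGGS implementations use equal-area cells on the sphere and pair them with chunked formats such as Zarr:

```python
def cell_id(lat: float, lon: float, level: int) -> str:
    """Return a quadtree-style cell identifier of the given depth.

    Each digit (0-3) selects a quadrant of the current cell, so
    cell_id(lat, lon, k) is always a prefix of cell_id(lat, lon, k+1):
    the parent/child hierarchy needed for multi-resolution
    aggregation falls out of simple string operations.
    """
    lat0, lat1, lon0, lon1 = -90.0, 90.0, -180.0, 180.0
    digits = []
    for _ in range(level):
        mid_lat = (lat0 + lat1) / 2
        mid_lon = (lon0 + lon1) / 2
        q = 0
        if lat >= mid_lat:
            q += 2
            lat0 = mid_lat
        else:
            lat1 = mid_lat
        if lon >= mid_lon:
            q += 1
            lon0 = mid_lon
        else:
            lon1 = mid_lon
        digits.append(str(q))
    return "".join(digits)

# Nearby points fall in the same cell at coarse levels:
print(cell_id(48.85, 2.35, 3))  # Paris
print(cell_id(48.86, 2.29, 3))  # nearby point, same level-3 cell
```

Grouping EO pixels by such cell identifiers (at whatever level matches the analysis resolution) is the basic operation that makes DGGS aggregation and cross-dataset joins scale.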

Speaker:


  • Tina Odaka - Ifremer
Add to Google Calendar

Tuesday 24 June 10:00 - 11:30 (Plenary - Hall D)

Session: Breaking Barriers by Working Together in Earth Science

This Plenary session will convene international partners who already have good connections to ESA/EOP and bring them together for a discursive panel session on international cooperation. The session will focus on what barriers, if any, are present in creating and maintaining effective partnerships, either bilaterally or multilaterally. Examples of successful collaboration are expected to be reported and celebrated during the session, but those that haven’t worked are also welcome to form part of the narrative. Where there have been issues or barriers, practical measures that have been put in place can be tabled so that others can learn from these experiences and mitigate against any similar situations in the future. Looking ahead, panellists will be asked to identify any problem areas on the horizon that are of concern and how we can mitigate these as a collective EO community to support more effective collaboration on EO moving forward.

This session will be accessible via live captioning at the following link: HERE

Due to limitations in the app, this is clickable only from the web version of the programme


Panel Members:


  • Tidiane Ouattara - President, AfSA
  • Paul Bate - CEOS Chair; DG, UKSA
  • Kandasri Limpakom - Deputy Executive Director, GISTDA
  • Christian Feichtinger - Executive Director, IAF
  • Hironori Maejima - Senior Chief Officer of EO missions, JAXA
  • Karen St Germain - Earth Science Division Director, NASA
  • Ariel Blanco - Director Space Information Infrastructure Bureau, PhilSA
  • Lorant Czaran - Scientific Affairs Officer, UNOOSA
Add to Google Calendar

Tuesday 24 June 10:00 - 11:30 (ESA Agora)

Session: E.03.02 New approaches to support commercialisation

Europe has invested heavily over the last three decades in the European Space Programme, which implements world-class space activities in the fields of Earth Observation (e.g. Copernicus), Satellite Navigation (e.g. Galileo), Connectivity, and Space Research and Innovation.
However, the return on investment for European countries, entities and citizens, at least on the Earth Observation side, could be improved.

A main reason is the existing gap between space stakeholders and end-users, who are often unaware that space can contribute to the provision of essential services for their operations.
End users want to access information when they want, where they need it, at an affordable cost, and through known, common interfaces. They should not be responsible for selecting the relevant space assets to be used and activated, as this is not their core business.
It is up to the space community to work together to provide what is required by end users, which involves speaking their languages and understanding their requirements.

Earth Observation data, products and services have therefore become increasingly relevant to addressing the evolving needs of public institutions and private entities.
In today's competitive landscape, where rapid development and deployment are essential, new approaches to support commercialisation emerge to cover those needs.
The upcoming session will feature a dynamic panel discussion, bringing together a diverse group of stakeholders from both industry and institutional backgrounds. The panel will represent the whole EO value chain, end to end, from end users to space assets, identify the different roles involved, and give examples of stakeholders and use cases from the panellists.
The discussion will also address their needs, the challenges that may arise, and solutions foreseen such as public private partnerships, public anchor tenancy or accelerating market pull, paving the way for innovative solutions in an ever-evolving landscape.

Speakers:


  • Monika Krzyżanowska - CloudFerro, Business Development Director
  • Beate Tempel - Constellr, VP Product
  • Daniel Van Der Maas - Ellipsis Drive, Co-founder & CTO
  • Mila Luleva - Rabobank, Head of Remote Sensing
  • Antonella Calvia-Götz - Principal Advisor at European Investment Bank
Add to Google Calendar

Tuesday 24 June 10:00 - 10:45 (Nexus Agora)

Session: F.05.07 Women Trailblazers Round Tables - Session 1 - Stimulating Cooperation

“Women trailblazers: the present and the future of ...” is a series of round tables focused on the specific disciplines of Earth Observation Remote Sensing, Science, Engineering, Policy making, Entrepreneurship etc. that recognizes women’s significant contributions including senior and young promising professionals, whilst promoting female role models in our community.
The session will bring together prominent figures from diverse organisations, academia, industries, and associations to engage in a focused dialogue on collaborative strategies to address climate change and promote sustainable development. The main objective is to inspire and to discuss the current status and future development in Earth observation data and technologies to address climate change and promote sustainable development.

Speakers:


  • Aarti Holla-Maini - Director of United Nations Office for Outer Space Affairs (UNOOSA)
  • Fani Kallianou de Jong - Climate Strategy and Delivery department at European Bank of Reconstruction and Development (EBRD)
  • Susanne Mecklenburg - Head of the Climate & Long-Term Action Division, European Space Agency (ESA)
  • Rakiya Babamaaji - Deputy Director at Strategic Space Applications Department, National Space Research and Development Agency (NASRDA) Nigeria
Add to Google Calendar

Tuesday 24 June 10:00 - 10:45 (Frontiers Agora)

Session: E.01.09 Space for Energy Sector Transformation, Sustainability, and Resilience

Transformation of the energy sector is crucial for a sustainable and green future, relying on a low-carbon energy mix and enabling sustainable development, economic growth, and resilience.

This session explores the role of space technology in driving the transformation of the energy sector, underpinning integrated solutions to support decision-making and operational processes for the energy transition. Through expert insights from different stakeholder groups in the energy sector, the session will shed light on opportunities for the adoption and scaling of space solutions and identify barriers which must be overcome. The scope is broad and will include societal, technical, business, and regulatory challenges.

Discussions will address how innovative space technologies, digitalisation and artificial intelligence are impacting the energy sector and how to fully leverage their potential. The session will also discuss collaboration opportunities between the space and the energy sector, laying the ground for further networking among diverse energy actors from both the supply and demand sides.

Chairs:


  • Richard Eyers - Richard Eyers Geoscience & Photography
  • Zaynab Guerraou - ESA

Speakers:


  • Maziar Golestani - Head of Metocean & Site and System Design Project Management, Vattenfall
  • Itziar Irakulis Loitxate - IMEO Scientist, UNEP
  • Julien Fiore - Remote Sensing Team Lead, TotalEnergies France
  • Werner Hoffman - Head of Institute for Strategic Management, WU Wien.
Add to Google Calendar

Tuesday 24 June 10:07 - 10:27 (EO Arena)

Demo: D.04.14 DEMO - ESA WorldCereal: Effortless Crop Mapping from Local to Global Scales

Join the ESA WorldCereal team for a dynamic demonstration showcasing the system's capabilities in generating precise crop maps by integrating public and private reference data. Designed for researchers, policymakers, and stakeholders, this session will illustrate how to efficiently create tailored cropland and crop type maps using WorldCereal’s user-friendly tools.
We will begin with an introduction to the cloud-based WorldCereal processing system, an open platform for training and applying cropland and crop type detection models using open Earth Observation and complementary datasets. Attendees will learn how to access and integrate public and private reference datasets from the WorldCereal Reference Data Module to train their own models.
The demonstration will include a step-by-step walkthrough of the WorldCereal Processing Hub, a web interface that simplifies the launch and monitoring of cloud-based processing jobs. Participants will observe how to initiate crop mapping tasks directly from the hub, streamlining workflows and boosting productivity. For users preferring a Python environment, we will also showcase how Jupyter Notebooks support flexible and customized model training and processing.
Throughout the session, we will highlight the system’s support for diverse crop types, its adaptability to various geographic regions, and its capability to produce high-resolution, seasonally updated crop maps at a 10-meter spatial resolution. These features are invaluable for agricultural monitoring, food security assessments, environmental research, and policymaking.

Speakers:


  • Kristof Van Tricht - VITO
  • Jeroen Degerickx - VITO
Add to Google Calendar

Tuesday 24 June 10:30 - 10:50 (EO Arena)

Demo: A.08.18 DEMO - OVL-NG portals: online web portals for EO data discovery

Numerous new satellites and sensors have arisen during the past decade. The satellite constellation has never been better, providing us with a wide range of views of the ocean surface from the coast to the open ocean, at various scales and from physical to biological processes. A good example is the Sentinel 1-2-3 program, which covers various sensors such as SAR, optical, infrared and altimeter instruments with a repeat subcycle of only a few days.

OVL-NG portals are publicly available portals allowing anyone to visually explore a large amount and variety of EO data, without the difficulty of handling huge and heterogeneous files.
OVL-NG also offers some drawing and annotation capabilities, as well as the ability to create web links that users can share to communicate about beautiful oceanic structures or use as support for discussing interesting cases with other scientists.
There is also the capability to easily share analyses and interesting test cases using short links or the SEAShot tool (https://seashot.odl.bzh).

During this demo, we will showcase how you can navigate in time and space to explore the synergy between the different Sentinel sensors (e.g. https://odl.bzh/Y_d9phB9 ) or compare different sources of surface current derived from models, in-situ data and satellites (e.g. https://odl.bzh/uWiicyJO ) using the drawing capabilities, and share your analyses using the SEAShot tool.

Discussions and feedback are more than welcome and will drive the future evolution of these tools, so don't hesitate to come to the ESA booth and discuss with us!

Speaker:


  • Lucile Gaultier - OceanDataLab
Add to Google Calendar

Tuesday 24 June 10:45 - 11:30 (Frontiers Agora)

Session: D.03.08 Open Science in the Making

Open Science is made live at LPS, at the “Open Science in the Making” booth! Here scientists and open source software developers will meet to discuss best practices in Open Science, collaborate to build user-driven expansions to open source tools and algorithms, integrate and share their applications and data, and discuss the most popular Open Source projects helping scientific research and digital innovation advance further!

This Agora will present the tools and projects the “Open Science in the Making” booth will focus on, its organization, and the opportunities for engaging with strategic initiatives like EarthCODE, APEx or EOEPCA by joining in. Participating in the “Open Science in the Making” activities will be an excellent opportunity to collaborate, learn about the potential of Open Science and Free and Open Source Software (FOSS) to support your own activities, and, why not, sharpen your coding skills!

This Agora will showcase the variety of ways to contribute during “Open Science in the Making”: testing code, filing and fixing bugs, proposing and adding new features, improving documentation, or simply asking the developers for more information about a FOSS software or Open Science project and its tools, their inner workings, and how they can fit your use case.

At this Agora, you will also be able to discuss the “Open Science in the Making” booth agenda, which will include experts from different projects and activities, such as the EarthCODE initiative, the APEx platform, the EOEPCA Building Blocks, popular OSGeo software packages, open standards, and more. Come to this Agora or pass by the “Open Science in the Making” booth to learn more!

Speakers:


  • Salvatore Pinto - ESA
  • Anca Anghelea - ESA

Tuesday 24 June 10:45 - 11:30 (Nexus Agora)

Session: F.01.08 Climate Call Card Game - Session 2

Come play the Climate Call card game and see which activities from our daily lives actually affect the climate. Let's reveal common misconceptions and be inspired by how one can communicate research findings through games and positive visualisations. Karl Sterner Isaksson, Operational Manager at Climate Call, is also happy to brainstorm ways to gamify or visualise your research or work.

Speakers:


  • Karl Sterner Isaksson - Climate Call

Tuesday 24 June 10:52 - 11:12 (EO Arena)

Demo: D.02.25 DEMO - Freedom to apply complex calculations and ML models on EO data

Machine Learning (ML) is revolutionizing Earth Observation (EO), but deploying models efficiently and at scale can be challenging. How can we simplify this for researchers and developers?
To make ML more accessible for EO practitioners, openEO supports the concept of user-defined functions (UDFs). At the same time, to remain lightweight, an openEO backend does not ship the dependencies needed to run every possible model, which is where a portable model format comes in.
This session will demonstrate how to bring ML into your EO processing chain using openEO's standardized interface. No ML expertise is required, just an interest in leveraging scalable, efficient, and portable AI for geospatial analysis.

Why Attend?
• Unlock Scalable AI for EO: Learn how to apply advanced ML models to EO data without needing heavy infrastructure or expert-level ML knowledge.
• Run Anywhere with ONNX: Discover how openEO leverages the ONNX format to deploy models flexibly across backends.
• Customize with UDFs: See how user-defined functions enable powerful, tailored processing within the openEO ecosystem.
• Simplify Deployment: Avoid complex setup—process your models server-side without worrying about dependencies.

Join us to see how openEO + ONNX + UDFs can make your geospatial ML workflows smoother, faster, and more scalable than ever!
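To give a feel for the UDF pattern described above (a hedged sketch, not the official openEO API: the datacube chunk and the "model" below are stand-ins), a user-defined function receives a chunk of the datacube as an array, runs inference, and returns an array with the same spatial shape, while the backend takes care of chunking and parallelisation:

```python
import numpy as np

def udf_apply_model(cube: np.ndarray) -> np.ndarray:
    """Toy user-defined function: collapse the band dimension of a
    (bands, y, x) chunk into a single prediction per pixel.

    In a real openEO UDF the model would be loaded from an ONNX file
    (e.g. with onnxruntime) instead of this hand-written stand-in.
    """
    bands, height, width = cube.shape
    # Stand-in "model": a fixed linear combination of the input bands.
    weights = np.linspace(1.0, 2.0, bands)
    prediction = np.tensordot(weights, cube, axes=([0], [0]))
    return prediction.reshape(1, height, width)  # one output band

# The backend would call the UDF once per chunk; we simulate one chunk here.
chunk = np.random.default_rng(42).random((4, 64, 64))  # 4 bands, 64x64 pixels
result = udf_apply_model(chunk)
print(result.shape)  # (1, 64, 64)
```

In the actual openEO Python client, such a function is attached to a process graph (e.g. via `apply_neighborhood`) and executed server-side, so the heavy ML dependencies live only inside the UDF runtime rather than in the backend itself.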

Speakers:


  • Hans Vanrompay - VITO

Tuesday 24 June 11:15 - 11:35 (EO Arena)

Demo: D.04.19 DEMO - Visualizing Sentinel satellite imagery and data products in desktop GIS with the Copernicus Data Space Ecosystem QGIS Plugin

Quantum GIS (QGIS) is a widely used open source desktop geographical information system software. The Copernicus Programme offers open and free satellite imagery. These two resources can be connected with the Copernicus Data Space Ecosystem QGIS Plugin. This tool is powered by the Sentinel Hub API family and OGC standards. The plugin enables users to view and download Sentinel imagery, filtering by date and cloud cover. It is particularly suited to visual interpretation of satellite imagery and raster-vector integration. The plugin is directly available in the QGIS plugin repository and connects with the user's Copernicus Data Space Ecosystem account.
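Under the hood, tools like this typically talk to the imagery service through standard OGC requests. As a rough sketch (the endpoint URL and layer name below are hypothetical placeholders, not the plugin's actual configuration), a WMS GetMap request for a true-colour Sentinel layer with date filtering can be assembled like this:

```python
from urllib.parse import urlencode

# Hypothetical WMS endpoint; a real deployment would use the URL and
# credentials tied to the user's Copernicus Data Space Ecosystem account.
WMS_ENDPOINT = "https://example.invalid/wms"

params = {
    "SERVICE": "WMS",
    "REQUEST": "GetMap",
    "VERSION": "1.3.0",
    "LAYERS": "TRUE_COLOR",           # hypothetical layer name
    "BBOX": "45.9,13.3,46.1,13.7",    # bounding box in CRS axis order
    "CRS": "EPSG:4326",
    "WIDTH": "512",
    "HEIGHT": "512",
    "FORMAT": "image/png",
    "TIME": "2024-06-01/2024-06-30",  # date filtering, as in the plugin
}

getmap_url = f"{WMS_ENDPOINT}?{urlencode(params)}"
print(getmap_url)
```

Fetching this URL (with a valid endpoint and account) would return a rendered PNG tile that a desktop GIS can display directly as a raster layer.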

Speakers:


  • András Zlinszky - Community Evangelist, Sinergise Solutions GmbH

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Session: D.01.03 Synergies between ESA DTE Programme and DestinE Ecosystem

This session shows the potential of dynamic collaborations between ESA DTE Programme activities and the opportunities provided by the DestinE Platform. It includes presentations on the capabilities available in the DestinE Platform and on the framework defined to grow its ecosystem of services through onboarding opportunities for ESA and non-ESA activities. It also includes presentations on the pre-operational innovative services and applications developed under ESA DTE activities (such as the Digital Twin Components) and their synergies with the DestinE Platform.

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: Empowering Climate Insights: Integrating Digital Twin Earth and DestinE Services through DEA's API-Driven Storytelling

Authors: Cristina Arcari, Arturo Montieri, Monica Rossetti
Affiliations: Alia Space Systems
Destination Earth (DestinE) Platform makes available innovative services aimed at exploiting the potential of the Destination Earth initiative. The goal of the initiative is to build a near-real digital twin of our planet so that it is possible to simulate environmental changes at an unparalleled level of detail. The European Space Agency (ESA) Digital Twin Earth programme aims to underpin this ecosystem, making the capabilities offered by the latest Earth Observation (EO) satellite missions freely available to all users. The integration between the services provided by the ESA Digital Twin Components (DTC) and the services offered by the DestinE Platform is crucial for maximizing the effort to make citizens and policymakers aware of the consequences related to climate change and give insights into planning effective mitigation strategies. In this context, DEA, the DestinE web-based storytelling service, was developed with the ambition to make data understandable for users, allowing them to craft engaging stories as interactive presentations by combining datasets provided by the tool with their own assets. Although DEA has a graphical interface, it is designed with an API-driven approach. This feature facilitates the programmatic creation of stories using the exposed APIs. Moreover, DEA also provides dedicated endpoints for generating standard plots and qualitative graphs such as climate spirals, climate stripes, and anomaly bars on both global and local scales. In this way, users can easily integrate these visualizations into their stories or utilize them in various contexts and applications to emphasize topics related to climate change. Similarly, most of the DTC and the DestinE services provide APIs that can be exploited by DEA to generate new content or to collect data ready to be shown in a story. 
This capability fosters the service chaining by integrating Digital Twin Components and DestinE services and encourages the development of applications based on Artificial Intelligence (AI) and agents based on Large Language Models (LLMs). We propose to demonstrate how to create seamless workflows combining features offered by the Digital Twin Earth and DestinE services with the APIs provided by DEA to generate visual insights.
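To give a feel for what a "climate stripes" visualization encodes (an illustrative sketch, not DEA's actual API or colour scale), each year's temperature anomaly is mapped to a position on a blue-to-red scale:

```python
def stripe_colors(anomalies):
    """Map yearly temperature anomalies (degrees C) to RGB stripe colors,
    linearly interpolating from blue (coolest year) to red (warmest)."""
    lo, hi = min(anomalies), max(anomalies)
    span = (hi - lo) or 1.0  # avoid division by zero for a constant series
    colors = []
    for a in anomalies:
        t = (a - lo) / span          # 0.0 = coolest year, 1.0 = warmest
        red = round(255 * t)
        blue = round(255 * (1 - t))
        colors.append((red, 0, blue))
    return colors

# Toy anomaly series for four years
colors = stripe_colors([-0.3, -0.1, 0.2, 0.5])
print(colors[0], colors[-1])  # (0, 0, 255) (255, 0, 0)
```

A storytelling endpoint like the one described above would render one vertical stripe per year using such a mapping, turning a plain anomaly time series into an immediately readable graphic.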

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: Earth’s Digital Future: Insights into Destination Earth and ESA DTE

Authors: Franka Kunz
Affiliations: ESA
Destination Earth (DestinE) and the ESA Digital Twin Earth (DTE) Programme are transforming the landscape of Earth observation and simulation, offering unprecedented tools to address global environmental challenges. These initiatives mark a significant leap forward in leveraging digital technologies to understand, predict, and respond to complex planetary phenomena. The Destination Earth initiative, led by the European Commission, aims to create a high-precision digital replica of the planet. DestinE is designed to empower decision-makers with actionable insights by providing advanced simulations and predictive capabilities for diverse applications, including climate change mitigation, disaster risk reduction, and sustainable development. With its cutting-edge platform and technical framework, DestinE establishes a robust foundation for understanding and managing Earth's systems. The ESA Digital Twin Earth (DTE) Programme, part of the ESA Earth Watch Programme, complements and strengthens DestinE by advancing the use of Earth Observation (EO) data and technologies to create pre-operational digital twins that monitor and predict environmental changes. ESA DTE demonstrates their value for applications including agricultural management, urban development, and environmental management. By leveraging data from ESA’s Earth Explorer Missions and integrating it into the DestinE Platform, the program ensures high-quality EO data serves as the foundation for advanced digital twin development. The collaboration between DestinE and the ESA DTE Programme is crucial for advancing their objectives. DestinE serves as a comprehensive platform for integrating and operationalising digital twin applications, while the DTE Programme contributes by developing pre-operational digital twins and providing high-quality Earth Observation data. 
These collaborations enhance the overall digital twin ecosystem, enabling the integration of advanced Earth observation data, AI-driven analytics, and simulation technologies. This synergy fosters innovation and ensures the ecosystem supports diverse applications, from real-time decision-making to long-term strategic planning. This overview of both programs highlights their complementary roles in advancing Earth system science and fostering a sustainable future. By uniting cutting-edge technologies, high-quality data, and innovative frameworks, DestinE and ESA DTE are paving the way for a new era in Earth observation, simulation, and decision-making.

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: Synergies between ESA DTE Programme and DestinE Ecosystem: The Role of the HIGHWAY Project

Authors: Luca Girardo, Henry de Waziers
Affiliations: Esa Esrin, adwäisEO
The European Space Agency’s (ESA) Digital Twin Earth (DTE) Programme serves as an innovative research companion to the operational DestinE platform, fostering dynamic collaborations that enhance Earth observation capabilities. At the forefront of this collaboration is the HIGHWAY project, which plays a crucial role in transforming and harmonizing Earth Explorer datasets into Digital Twin Analysis Ready Cloud Optimized (DT-ARCO) formats. This process ensures the seamless integration of diverse datasets into the DestinE platform, enabling advanced digital twin simulations and the development of pre-operational services. HIGHWAY contributes to both ESA and non-ESA activities by aligning its data transformation and harmonization efforts with evolving cloud-optimized formats for Copernicus and Earth Explorer missions. Through this harmonization, datasets from missions such as SMOS, Proba-V, Aeolus, CryoSat-2, and EarthCARE are made accessible to the DestinE platform via OpenSearch, WMS, and WCS interfaces. These efforts support the expansion of DestinE’s ecosystem, facilitating the onboarding of additional services and fostering interoperability across the platform. Additionally, HIGHWAY is exploring the integration of High-Performance Computing (HPC) systems, particularly in the context of harmonizing workflows between the HPC MeluXina system and the DestinE Core Service Platform (DESP). This integration is key to enabling the large-scale data processing and real-time simulations required for sophisticated digital twin applications. By bridging HPC and cloud-optimized data infrastructure, HIGHWAY is setting the stage for innovative pre-operational services that align with DestinE’s mission. In this session, we will explore how HIGHWAY’s role in transforming and harmonizing datasets, along with its contributions to HPC integration, strengthens the synergy between ESA DTE activities and the DestinE platform. 
Attendees will gain insights into how these efforts are unlocking new opportunities for service growth, collaboration, and the future of Earth observation services within the DestinE ecosystem.

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: DestinEStreamer - a new paradigm for storing, disseminating and accessing big data in Earth Observation and Climate Science domains

Authors: Andreas Walli, Dr. Wolfgang Kapferer
Affiliations: Geoville
PROBLEM

The growing volume of Earth Observation and climate data presents significant challenges in storing, managing, and processing this information. As satellite and sensor technologies advance, and as more missions are launched, the amount of data generated in the field of Earth Observation is increasing exponentially, leading to unprecedented storage demands. A similar trend is evident in Climate Sciences, where advancements in supercomputing facilities enable higher-resolution simulations in both space and time, generating vast volumes of data. This surge is accompanied by rising costs for data storage and dissemination, and by increasing complexity in accessing and distributing the information. Ensuring timely and equitable access to this vital data across sectors such as research, industry, and decision-making is becoming increasingly difficult as access challenges and costs continue to rise. Traditional storage solutions, such as single-artifact object stores with millions of individual files, are insufficient to meet the needs of modern machine learning and artificial intelligence applications. There is therefore an urgent need for innovative solutions that streamline data handling, reduce complexity and costs, and enable more efficient and effective use of this valuable data.

LEARNING FROM OTHER INDUSTRIES

When it comes to data dissemination over the internet, one industry dominates: video streaming, which accounts for nearly half of global internet traffic (Sandvine 2023). The streaming revolution was enabled by several major advances: next-generation compression algorithms that reduce the amount of data while preserving the quality of the content; adequate video container formats, which determine how different types of media (video, audio, subtitles, etc.) are packaged together into one object; and broad accessibility to the streams via standardization such as HTML5 Video APIs or standard libraries (e.g. OpenCV). The streaming industry has achieved remarkable scalability, serving hundreds of millions of users globally across diverse devices. Could the key success factors of this industry be harnessed to revolutionize scientific domains such as Earth Observation and Climate Sciences? The answer is yes. By adopting similar technologies and software architectures, Earth Observation and climate data can be compressed, packaged, and disseminated with unprecedented efficiency. This requires innovative preprocessing techniques to optimize data for next-generation compression algorithms, robust metadata management, and an adaptable access layer. Such a layer, built with libraries and web applications, enables seamless transformation of data into target formats like GeoTIFFs or in-memory representations such as arrays. This is precisely the work carried out by the DestinEStreamer service, demonstrating how these advancements can drastically reduce archive sizes, accelerate data access and downloads, and unlock new edge-device capabilities. Ultimately, this approach paves the way for groundbreaking services in the Earth Observation and Climate domains, yet to be imagined.

ENHANCING THE DESTINE PLATFORM WITH THE DESTINESTREAMER SERVICE

The DestinEStreamer service significantly enhances the capabilities of the DestinE platform by achieving impressive compression ratios of 1:14 to 1:27 compared to the original, already-compressed datasets. This enables the platform to store and deliver a far greater volume of online datasets. These results are achieved through meticulous preprocessing of the data and alignment of their temporal structure to leverage state-of-the-art video compression codecs. The core approach involves a block-based transformation, organizing data into three frame types (I-frames, B-frames, and P-frames) within a repeating Group of Pictures (GOP) sequence. By removing temporal redundancies, especially for Earth Observation and climate data that naturally align as temporal stacks over the same spatial region, this technique enables exceptionally high compression ratios. While the compression is lossy, it allows adjustable quality settings. In the DestinEStreamer service, the default configuration ensures a Structural Similarity Index (SSIM) above 0.99 and less than 0.01% data difference, delivering high fidelity. With these settings, storage capacities increase over 30-fold, and network bandwidth can handle 30 times more data. Decompression and data access are as seamless as streaming a movie (e.g. 24 frames per second).

HOW TO USE THE SERVICE

The DestinEStreamer service provides access to data streams on the DestinE Platform through multiple vectors:
• Web Application: This interface enables users to explore dataset variables (e.g. ERA5 and Climate Digital Twins) as visual streams, facilitating fast temporal scans and interactive map capabilities. Within the application, users can follow links to the Jupyter Hub and the Insula service for deeper analysis. A dedicated Python module, along with example scripts, is provided to support data conversion. These scripts illustrate how to extract specific timesteps or time series, georeference the data, and interact with it via collections like xarray.
• API: For each dataset variable, an API provides comprehensive metadata and quality metrics, ensuring transparency and facilitating automated workflows.
As part of ESA's DUNIA initiative, the technology supports continental-scale data delivery for Africa, including Sentinel-1 and Sentinel-2 datasets. This access vector is designed for low-bandwidth environments, enabling efficient data streaming to clients, even on mobile devices. The service also includes features to mitigate the effects of unstable network connections, allowing users to download and convert Sentinel data streams with minimal bandwidth requirements.

SUMMARY AND OUTLOOK

The DestinEStreamer service, with its advanced access methods, delivers a groundbreaking solution for storing and disseminating massive volumes of georeferenced raster and climate simulation data. By leveraging cutting-edge technologies from the over-the-top video streaming industry, it introduces a transformative approach to Earth Observation and climate data management. This innovative technological foundation opens the door to entirely new, previously unimaginable services. Combined with the capabilities of edge computing devices within distributed infrastructures, DestinEStreamer paves the way for the applications and services of the future, enabling a fully connected and integrated Earth Observation and Climate Data ecosystem.
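The temporal-redundancy idea behind the GOP structure can be sketched in a few lines (a lossless toy version; real video codecs add motion estimation, transforms and lossy quantization): store the first frame of a group as-is (the I-frame) and each following frame only as its difference from the previous one (a P-frame). Slowly changing geophysical fields make these differences mostly near zero, which is what compresses well.

```python
def encode_gop(frames):
    """Encode a temporal stack as one I-frame plus P-frame deltas."""
    i_frame = frames[0]
    deltas = [
        [cur - prev for cur, prev in zip(frames[k], frames[k - 1])]
        for k in range(1, len(frames))
    ]
    return i_frame, deltas

def decode_gop(i_frame, deltas):
    """Reconstruct the original stack by accumulating the deltas."""
    frames = [list(i_frame)]
    for delta in deltas:
        frames.append([p + d for p, d in zip(frames[-1], delta)])
    return frames

# Toy "pixel rows" of a slowly evolving field: the deltas are small integers.
stack = [[10, 12, 11], [10, 13, 11], [11, 13, 12]]
i_frame, deltas = encode_gop(stack)
assert decode_gop(i_frame, deltas) == stack  # lossless round trip
print(deltas)  # [[0, 1, 0], [1, 0, 1]]
```

In a real codec, the mostly-zero delta frames are then transformed and entropy-coded, which is where the large compression ratios quoted above come from.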

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: ESA EO-based Digital Twin Components of the Earth System

Authors: Dr Martin Wearing, Dr Diego Fernandez Prieto
Affiliations: ESA, ESRIN
The ESA Digital Twin Earth (ESA DTE) programme aims at ensuring that the latest Earth observation satellite capabilities play a major role in the design, implementation and future evolution of DestinE, the flagship initiative of the European Commission to develop a highly accurate digital twin of the Earth to monitor and simulate natural phenomena, hazards and the related human activities. This presentation will provide an overview of the progress and achievements so far of ESA's EO-based Digital Twin Components (DTCs), each of which focuses on a particular element of the Earth system. Current activities are split into Lead and Early Development Actions. The Lead Development Actions cover Agriculture; Forests; Hydrology and hydro-hazards; Ice Sheets and regional/global impacts; and Coastal processes and extremes. The Early Development Actions cover Air quality & health; Arctic (land and ocean); Energy sector; Geo-hazards; Mountain glaciers; and Urban areas and smart cities. These DTCs will offer high-precision digital replicas of Earth system components, boosting our capacity to understand the past and monitor the present state of the planet, assess changes, and simulate the potential evolution under different (what-if) scenarios at scales compatible with decision making. This overview will highlight the progress of DTC activities, the opportunities for the use of EO data in digital twins, the use of DestinE services and the implementation of DTCs in the DestinE ecosystem, and engagement with users, stakeholders and the development of use cases, along with recommendations for future developments.

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: Unlocking the potential of Destination Earth: An analysis of how ESA DTE could strengthen Destination Earth attractiveness

Authors: Dr Rochelle Schneider, Christophe Taillandier, Madleen Bultez, Pierre Arnaud, Nicolas Monzali
Affiliations: ESA, MEWS Partners
The European Commission’s Destination Earth (DestinE) initiative represents a groundbreaking effort to simulate and analyse Earth's processes with unprecedented accuracy. It aims to provide a scalable platform and services to enable users to interact with digital twins for decision-making in areas like climate action, sustainability, and disaster resilience. By simulating specific Earth systems using real-time Earth Observation (EO) data from ESA satellites and other sources, the ESA Digital Twin Earth should serve as a technical enabler for DestinE by completing the foundational digital twin technology and scientific methods. While ESA DTE primarily targets researchers and experts focused on Earth system science, climate modeling, and high-end computational simulations, there exist strong potential synergies with DestinE that could be leveraged to improve DestinE attractiveness. In particular, both systems aim to address critical societal challenges, from climate change mitigation to disaster management, by enabling data-driven decision-making at scale. Strong technological synergies exist between the two initiatives. For instance, ESA DTE provides key digital twin technology that could then be deployed in real world applications through DestinE. This proposed presentation will explore the synergies of the two initiatives with the objective of improving DestinE attractiveness. It will analyse design, accessibility, usability, and value proposition aspects and highlight possible recommendations. In particular, the following dimensions will be assessed. It begins with APIs and interfaces, evaluating the effectiveness of interfaces and tools to access the two systems. Then, it explores data synergies with ESA’s DTE, highlighting the enhanced value generated by integrating ESA’s DTE data assets with DestinE’s advanced simulation capabilities, enabling deeper insights and more robust decision-making across applications. 
This is followed by an analysis of user engagement synergies, investigating the drivers and barriers to fostering collaboration and cross-fertilization between ESA’s DTE and DestinE, with a focus on unlocking shared value for diverse user groups with varying needs and expertise. Finally, the impact potential is assessed by evaluating the tangible outcomes of potential new use cases, leveraging the ‘best of both worlds’ from DTE and DestinE. Our findings will aim to contribute to the ongoing implementation of DestinE, ensuring its relevance and effectiveness within the broader ESA DTE ecosystem. This presentation will conclude with recommendations for maximizing the impact of DestinE through enhanced integration with ESA DTE, user-centric innovation, and targeted outreach.

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Session: A.08.01 Advances in Swath Altimetry - PART 2

The NASA and CNES Surface Water and Ocean Topography (SWOT) Mission, launched in December 2022, is the first in-orbit demonstration of a swath altimeter. The SWOT mission has revealed the capability of swath altimeters to measure ocean and inland water topography in an unprecedented manner. The onboard Ka-band Radar Interferometer (KaRIn) observes wide-swath sea surface height (SSH) with sub-centimetre error. It is already unveiling the small mesoscale ocean circulation that is missing from current satellite altimetry. SWOT has already carried out a calibration and validation (Cal/Val) campaign for the satellite, including ground truth and airborne campaigns.
ESA’s Sentinel-3 Next Generation Topography (S3NGT) mission is being designed as a pair of two large spacecraft carrying nadir-looking synthetic aperture radar (SAR) altimeters and across-track interferometers, enabling a total swath of 120 km, in addition to a three-beam radiometer for wet tropospheric correction across the swath and a highly performant POD and AOCS suite.
With a tentative launch date of 2032, the S3NGT mission will provide enhanced continuity to the altimetry component of the current Sentinel-3 constellation, with open ocean, coastal zones, hydrology, sea ice and land ice, all as primary objectives of the mission.
This session is dedicated to the presentation of advances in swath altimetry, including airborne campaigns, and to the application of swath altimetry to the primary objectives of the mission, i.e. open ocean and coastal processes observation, hydrology, sea ice and land ice. We also invite submissions on investigations that extend beyond these primary objectives, such as the analysis of ocean wave spectra, internal waves, geostrophic currents, and air-sea interaction phenomena within swath altimeter data.

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: Validation of HR SWOT Data over Inland Waters, an Opportunity to Assess the Future Performance of S3NG-T Swath Altimetry Missions

Authors: Maxime Vayre, Julien Renou, Dr Gabriel CALASSOU, Marie Chapellier, Nicolas Taburet, François Boy, Roger Fjortoft, Nicolas Picot, Claire Pottier, Noemie Lalau
Affiliations: CLS, CNES, Magellium
The Surface Water and Ocean Topography (SWOT) mission, conducted by CNES and NASA, was successfully launched on 16 December 2022. The Ka-band Radar Interferometer (KaRIn) provides unprecedented 2D observations of sea-surface height and sub-mesoscale structures, as well as water surface elevation, water stock estimates and discharge over continental water surfaces. SWOT's performance is extremely encouraging for the upcoming Sentinel-3 Next Generation Topography (S3NG-T) mission, which aims to ensure the continuity of the current Sentinel-3 nadir altimeter. This mission represents the future of swath altimetry, and SWOT is an excellent opportunity to assess its potential performance. The SAOOH instrument of S3NG-T has its own specificities compared to KaRIn, and differences in specifications, such as random and systematic errors, should be considered. Assessing the performance of HR SWOT product estimates and their ability to track water bodies is of major interest for S3NG. SWOT KaRIn product validation is part of the global performance assessment of the HR SWOT products managed by CNES on the French side. In situ networks are first compared to SWOT measurements over lakes and rivers. Data from the French (SCHAPI), Swiss (BAFU) and American (USGS) in situ networks are used to estimate the performance of SWOT HR elevation data. In addition, we use measurements from current nadir altimetry missions (Sentinel-3A/B, Sentinel-6, ICESat-2) to assess the performance over a large number of water bodies. SWOT HR data are compared with the station networks available on the Copernicus Global Land Service. ICESat-2 data complete the analysis, based on tens of thousands of lakes. We have also set up an innovative method to level existing gauges with ICESat-2, thereby increasing the number of in situ references over which KaRIn accuracy can be estimated.
Based on these results and understanding the random and systematic noise affecting KaRIn Pixel Cloud data, we will present the simulation of SAOOH data performed over a variety of rivers and lakes. The performance assessment metrics are then applied, and we obtain qualitative indicators on the expected S3NG-T SAOOH performances. In this presentation, after introducing the metrics developed with CNES to validate SWOT KaRIn performances over continental waters, we will present an original method that we developed within the ESA S3NG-MPUA project, to simulate S3NG-T SAOOH data and assess the performance based on KaRIn products and knowledge of the planned S3NG instrumental characteristics.

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: KaRIn Noise Reduction Using a Convolutional Neural Network for the SWOT 2km and 250m Ocean Product

Authors: Anaelle Treboutte, Gaetan Meis, Marie-Isabelle Pujol, Gerald Dibarboure
Affiliations: CLS, CNES
The recent launch of the new altimetric satellite SWOT (Surface Water and Ocean Topography) was a revolution in oceanography. SWOT can observe ocean dynamics at mesoscale and submesoscale by measuring the Sea Surface Height (SSH) with its KaRIn (Ka-band Radar Interferometer) instrument. It provides two-dimensional measurements of SSH at high resolution: 2 km and 250 m (the latter also known as the Unsmoothed product). For each product, the SSH field is affected by a noise that comes from the instrument and is referred to as KaRIn noise. For the 2 km product, the KaRIn noise is correlated, lower than expected, and does not have a significant impact on the SSH itself. However, SSH derivatives quickly amplify the millimeter-scale noise and are not usable without denoising. The methods currently used in conventional nadir altimetry must be revised and adapted for these new data. Therefore, a neural network model based on a U-Net architecture was developed, trained and tested with simulated data in the North Atlantic. The U-Net described in Tréboutte et al. (2023) gives satisfactory results on real SWOT data except where wave heights are large (Dibarboure et al., 2024). Validation and improvement of the U-Net are ongoing. The KaRIn noise on the Unsmoothed product is a random centimeter-scale noise, and the SSH field must be denoised in order to exploit finer scales. The U-Net was adapted and retrained for this product. All denoised SSH fields are available in the Level-3 product on the Aviso website. Dibarboure, G., Anadon, C., Briol, F., Cadier, E., Chevrier, R., Delepoulle, A., Faugère, Y., Laloue, A., Morrow, R., Picot, N., Prandi, P., Pujol, M.-I., Raynal, M., Treboutte, A., and Ubelmann, C.: Blending 2D topography images from SWOT into the altimeter constellation with the Level-3 multi-mission DUACS system, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2024-1501, 2024.
Tréboutte, A., Carli, E., Ballarotta, M., Carpentier, B., Faugère, Y., Dibarboure, G., 2023. KaRIn Noise Reduction Using a Convolutional Neural Network for the SWOT Ocean Products. Remote Sens. 15, 2183. https://doi.org/10.3390/rs15082183
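The claim that derivatives quickly amplify small-scale noise is easy to verify numerically (a generic sketch, not the actual SWOT processing): on a fine grid, differencing effectively divides the noise by the small grid spacing, so a smooth field with tiny additive noise can have a gradient dominated by that noise.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 1000)     # fine spatial grid
signal = np.sin(x)                           # smooth "SSH-like" field
noise = 0.01 * rng.standard_normal(x.size)   # small additive noise (1% level)

# Noise level in the field itself: tiny.
field_noise_std = noise.std()

# The same noise after differentiation: amplified roughly by 1/dx.
grad_noise_std = np.gradient(noise, x).std()

amplification = grad_noise_std / field_noise_std
print(f"noise amplification by the gradient: ~{amplification:.0f}x")
```

With 1000 grid points over the interval, the gradient of the noise ends up on the order of a hundred times larger than the noise in the field, while the signal's gradient stays order one, which is why derived quantities need denoising even when the SSH field itself looks clean.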

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: Inland Water Monitoring with the Surface Water and Ocean Topography Satellite

Authors: Hind Oubanas, Tamlin Pavelsky
Affiliations: INRAE G-EAU, University of North Carolina
The Surface Water and Ocean Topography (SWOT) satellite mission represents a significant advancement in hydrological sciences as the first wide-swath satellite designed to investigate surface water within the global water cycle. Utilizing Ka-band radar interferometry, SWOT provides, for the first time, simultaneous, high-resolution maps of water surface elevation and inundation extent across rivers, lakes, reservoirs, and wetlands globally. Over the past decade, the hydrologic remote sensing community has developed new methodologies and scientific frameworks to fully leverage the potential of SWOT data, enhancing our understanding of global water fluxes and fundamentally transforming how we perceive and analyze surface water dynamics. In this presentation, we will highlight what SWOT's different products have added to inland water observation compared to previous satellite missions. We will explore SWOT's performance over rivers and lakes in measuring water surface elevation, extent and slope, and in estimating discharge at the global scale. We will present the latest advances and achievements using SWOT data from this community effort, led through multiple SWOT working groups. Several investigations using this new dataset are already uncovering valuable insights into hydrologic processes, with performance exceeding the mission's pre-launch science requirements for certain variables. However, challenges remain, such as the presence of dark water associated with specular reflectance and other sources, misalignment of SWOT pixels to rivers when they were actually collected over other surfaces, and the outward propagation of very strong signals collected directly beneath the satellite, known as nadir ringing. These issues are actively being addressed through ongoing research and algorithm refinement to improve future data releases and ensure the highest possible data quality.
By presenting SWOT's capabilities and the collaborative efforts of the scientific community, this presentation aims to illustrate how wide-swath altimetry is expanding our understanding of Earth's changing water systems and revolutionizing the field of hydrology.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: Investigating the impact of sea state on SWOT-KaRIn measurements of Significant Wave Height and Sea Surface Height

Authors: Eva Le Merle, Daria Andrievskaia, Adrien Martin, Mahmoud El Hajj, Yannice Faugere
Affiliations: NOVELTIS, CNES
The Ka-band Radar Interferometer (KaRIn) onboard the Surface Water and Ocean Topography (SWOT) mission delivers groundbreaking high-resolution, two-dimensional ocean topography data, revealing mesoscale to sub-mesoscale processes with unprecedented detail. However, the precision of sea surface height (SSH) measurements is affected by sea state-induced distortions, collectively known as Sea State Bias (SSB). Understanding and mitigating SSB is critical to unlocking the full potential of SWOT and KaRIn for oceanographic research. In addition to topographic measurements, KaRIn is also capable of providing sea state information such as significant wave height (SWH) measurements, a key parameter for SSB estimation. This study focuses on the statistical analysis of SWOT nadir and KaRIn significant wave height measurements, as influenced by cross-track distance and sea state characteristics. By comparing SWH data from SWOT (KaRIn and nadir) with model outputs, CFOSAT observations, and wave buoy records, we quantify measurement differences and identify the driving conditions behind these variations. Initial comparisons between SWOT data (KaRIn and nadir) and ERA5 model outputs reveal several findings. First, there is strong agreement between SWOT nadir and ERA5 data. However, comparisons between KaRIn-derived significant wave height (SWH) and those from ERA5 and nadir indicate that KaRIn SWH measurements tend to exhibit a systematic high bias on average. Interestingly, the largest discrepancies do not align with the most extreme sea states (highest SWH). Further analyses reveal that the spread and positive bias of KaRIn SWH measurements increase with cross-track distance, with the standard deviation rising from 0.2 m near nadir to 0.5 m at the far edge of the swath. Additionally, differences between KaRIn and SWOT nadir SWH were examined as a function of proximity to the coast.
While no clear trends emerged, it appears that the most significant differences generally occur closer to the coast. Lastly, we assessed the impact of these discrepancies on sea surface height (SSH) accuracy in relation to wave parameters such as wavelength and direction in order to revisit SSB models. Beyond its immediate implications for SSB correction, this work paves the way for leveraging SWOT’s high-resolution SWH 2D maps to explore near-coastal wave dynamics; a domain currently underexplored due to observational limitations. By validating SWOT’s wave products, this study addresses a key barrier to improving SSH accuracy and our understanding of high-resolution wave dynamics, ensuring that SWOT data can robustly support global and coastal oceanography.
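The cross-track dependence of the KaRIn SWH bias and spread described above amounts to a binned statistic of the KaRIn-minus-reference difference. The sketch below illustrates that computation only: all arrays (`swh_karin`, `swh_ref`, `cross_track_km`) are synthetic stand-ins, not SWOT products, and the simulated bias and noise model is an assumption chosen to mimic the reported behaviour.

```python
import numpy as np

# Hypothetical, synthetic inputs (illustrative names, not SWOT product fields):
# KaRIn SWH, a reference SWH (e.g. a model), and cross-track distance in km.
rng = np.random.default_rng(0)
cross_track_km = rng.uniform(10, 60, 5000)
swh_ref = rng.gamma(2.0, 1.0, 5000)              # reference SWH in metres
# Simulated KaRIn SWH: bias and spread grow with cross-track distance
swh_karin = (swh_ref + 0.1 + 0.002 * cross_track_km
             + rng.normal(0.0, 0.2 + 0.006 * cross_track_km))

def swh_stats_by_distance(swh, ref, dist_km, edges):
    """Mean bias and standard deviation of (swh - ref) per cross-track bin."""
    diff = swh - ref
    idx = np.digitize(dist_km, edges) - 1        # bin index for each sample
    stats = []
    for i in range(len(edges) - 1):
        d = diff[idx == i]
        stats.append((edges[i], edges[i + 1], d.mean(), d.std(ddof=1)))
    return stats

for lo, hi, bias, std in swh_stats_by_distance(swh_karin, swh_ref,
                                               cross_track_km,
                                               np.arange(10, 70, 10)):
    print(f"{lo:>2.0f}-{hi:<2.0f} km: bias = {bias:+.2f} m, std = {std:.2f} m")
```

On the synthetic data the printed table shows both the mean bias and the standard deviation increasing from the near-nadir bin to the far-swath bin, the same qualitative pattern the abstract reports.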
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: Improvements in SWOT HR water classification and area estimation

Authors: Dr Roger Fjørtoft, Dr Claire Pottier, Dr Nicolas Gasnier, Dr Damien Desroches, Lucie Labat-Allée, Dr Mathilde de Fleury, Dr Manon Delhoume
Affiliations: Centre National d'Etudes Spatiales, CS Group
Water classification and estimation of waterbody area are key steps in the operational processing of KaRIn HR images from the SWOT altimetry mission [1, 2]. SWOT was launched in December 2022, and since the end of March 2023 has provided geolocated water surface elevation and extent for continental water surfaces, globally and repeatedly, with two or more observations per 21-day orbit cycle. A first global performance assessment was presented at the SWOT Science Validation Meeting in June 2024, with accuracies meeting or close to the Science Requirements [3]. Since then, the algorithms and auxiliary data used for operational processing have been further improved. We here present these evolutions and their impact on performance, focusing mainly on the estimation of lake area. Some options for future improvements are also addressed. The algorithm selected for operational water detection in SWOT HR images is a binary Bayesian classifier, with Markov Random Field (MRF) regularization and iterative estimation of water and land backscattering characteristics [4, 5, 6]. It processes thousands of images every day, with quite satisfactory results, and only minor algorithm modifications and parameter adjustments have been made so far. The basic underlying hypothesis is that water is much brighter than the surrounding land surfaces [7], which is mostly, but not always, true for such Ka-band images at near-nadir incidences (~1°-4°). An important exception is so-called “dark water”, which occurs when there is neither wind at the water surface nor swell, so that it acts like a mirror and practically no signal gets back to the radar [8]. By comparing the detected parts of a waterbody with a prior water probability map (based mainly on the Global Water Surface Occurrence mask [9]), the missing “dark water” parts can be flagged [6, 10] and accounted for in the subsequent area estimates [11, 12].
This mechanism was present in the processing chain from the very beginning of the mission, and contributed to the area performances presented at the 2024 Validation Meeting. However, since then, JPL members of the SWOT Algorithm Development Team have improved the projection of the thresholded prior water probability map into radar geometry, using phase information from the SWOT HR data rather than just prior water surface elevations [6]. Another challenge is so-called “bright land”, which can be erroneously detected as water. This typically occurs for urban areas and other man-made structures, but can also come from topographic layover and humid soil. A prior bright land mask (based mainly on the World Settlement Footprint [13] and ESA WorldCover [14] masks) has been used to flag such pixels. Since the Validation Meeting, the prior bright land mask has been updated, and its projection into radar geometry reworked. As the SWOT HR reception window includes nadir, and despite the attenuation by the antenna pattern, a phenomenon referred to as “specular ringing” may occur, typically when there is a waterbody at nadir, generating a characteristic bright stripe in the first part of the nominal swath (10-60 km), thereby causing false water detections and subsequently degraded river widths and lake extents. Improved flagging of this phenomenon [6], and better use of the flag in downstream processing to generate vector products for rivers and lakes [11, 12], have been implemented in recent algorithm updates. Other recent improvements with impact on area performances include updates to the SWOT River Database (SWORD) [15], version 17, and the Prior Lake Database (PLD) [16, 17], version 2.00.
Using the same approach as for the SWOT Science Validation Meeting, relying on reference water masks derived from high-resolution optical and radar images, segmented into river reaches and lakes based on the above-mentioned prior databases [15, 16, 17], the positive effect of these improvements on water classification and area estimation performance will be presented, focusing on lakes. Potential long-term improvements for SWOT, or in the perspective of the S3-NG mission, will also be addressed, including a water detection algorithm that relies more heavily on prior data to detect and estimate the extent of narrow rivers [18, 19].
References:
[1] L.-L. Fu, D. Alsdorf, R. Morrow, E. Rodriguez, and N. Mognard, “SWOT: The surface water and ocean topography mission: Wide-swath altimetric elevation on Earth,” Jet Propulsion Laboratory, National Aeronautics and Space Administration, Washington, D.C., USA, JPL Publication 12-05, 2012.
[2] M. Durand, L. Fu, D. P. Lettenmaier, D. E. Alsdorf, E. Rodriguez, and D. Esteban-Fernandez, “The surface water and ocean topography mission: Observing terrestrial surface water and oceanic submesoscale eddies,” Proc. IEEE, vol. 98, no. 5, pp. 766–779, May 2010.
[3] Jet Propulsion Laboratory, “Surface water and ocean topography mission (SWOT): Science requirements document,” JPL D-61923, Rev. B, SWOT NASA/JPL Project, Pasadena, CA, 2018.
[4] S. Lobry, “Markovian models for SAR images: Application to water detection in SWOT satellite images and multi-temporal analysis of urban areas,” Télécom ParisTech, Paris, France, 2017.
[5] S. Lobry, L. Denis, B. Williams, R. Fjørtoft, and F. Tupin, “Water Detection in SWOT HR Images Based on Multiple Markov Random Fields,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019.
[6] Jet Propulsion Laboratory, “Algorithm Theoretical Basis Document: L2_HR_PIXC Level 2 Processing,” JPL D-105504, Pasadena, CA, 2024.
[7] R. Fjørtoft et al., “KaRIn on SWOT: Characteristics of near-nadir Ka-band interferometric SAR imagery,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 4, pp. 2172–2185, Apr. 2014.
[8] Jet Propulsion Laboratory, “SWOT Science Data Products User Handbook,” JPL D-109532, Pasadena, CA, 2024.
[9] J.-F. Pekel, A. Cottam, N. Gorelick and A. S. Belward, “High-resolution mapping of global surface water and its long-term changes,” Nature, no. 540, pp. 418-422, 2016.
[10] Jet Propulsion Laboratory, “SWOT Level 2 KaRIn high rate water mask pixel cloud product (L2_HR_PIXC),” JPL D-56411, Pasadena, CA, 2024.
[11] Jet Propulsion Laboratory, “SWOT Level 2 KaRIn high rate river single pass vector science data product,” JPL D-56413, Pasadena, CA, 2024.
[12] Centre National d’Etudes Spatiales, “SWOT Level 2 KaRIn high rate lake single pass vector science data product,” SWOT-TN-CDM-0674-CNES, Toulouse, France, 2024.
[13] “World Settlement Footprint,” 2019. [Online]. Available: https://geoservice.dlr.de/web/maps/eoc:wsf2019
[14] “ESA WorldCover,” 2020. [Online]. Available: https://esa-worldcover.org/en
[15] E. H. Altenau, T. M. Pavelsky, M. T. Durand, X. Yang, R. P. d. M. Frasson and L. Bendezu, “The Surface Water and Ocean Topography (SWOT) mission River Database (SWORD): A global river network for satellite data products,” Water Resources Research, vol. WRCS25408, 2021.
[16] Centre National d’Etudes Spatiales, “SWOT Prior Lake Database,” SWOT-IS-CDM-1944-CNES, Toulouse, France, 2024.
[17] J. Wang, C. Pottier, C. Cazals, M. Battude, Y. Sheng, C. Song, M. S. Sikder, X. Yang, L. Ke, M. Gosset, R. Reis, A. Oliveira, M. Grippa, F. Girard, G. H. Allen, S. Biancamaria, L. C. Smith, J.-F. Créteaux and T. M. Pavelsky, “The Surface Water and Ocean Topography Mission (SWOT) Prior Lake Database (PLD): Lake mask and operational auxiliaries,” Water Resources Research, 2024 (submitted).
[18] N. Gasnier, L. Denis, R. Fjørtoft, F. Liège and F. Tupin, “Narrow River Extraction From SAR Images Using Exogenous Information,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 5720-5734, 2021.
[19] N. Gasnier, R. Fjørtoft, B. Williams, D. Desroches, L. Labat-Allée, J. Maxant, “Early Results on Water Detection in SWOT HR Images,” in Proc. IGARSS, Athens, Greece, 2024.
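As a rough illustration of the classification principle the abstract describes (a binary Bayesian water/land labelling with MRF regularization and iterative re-estimation of the two classes' backscatter statistics), the sketch below runs an ICM-style optimisation on a synthetic bright-water scene. It is not the operational SWOT HR processor [4, 5, 6]; the Gaussian data model, the Ising-type prior, and every parameter value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def neighbor_votes(labels):
    """Number of 4-connected neighbours labelled water, per pixel."""
    p = np.pad(labels, 1)
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]

def mrf_water_classify(img, beta=1.5, n_iter=10):
    """Binary Bayesian labelling with an Ising-type MRF prior, ICM optimisation."""
    labels = (img > img.mean()).astype(int)          # crude initial threshold
    for _ in range(n_iter):
        # Re-estimate per-class mean/std from the current labelling
        mu = [img[labels == k].mean() for k in (0, 1)]
        sig = [img[labels == k].std() + 1e-6 for k in (0, 1)]
        votes = neighbor_votes(labels)
        # Energy = Gaussian negative log-likelihood - beta * agreeing neighbours
        energy = []
        for k in (0, 1):
            data = 0.5 * ((img - mu[k]) / sig[k]) ** 2 + np.log(sig[k])
            agree = votes if k == 1 else 4 - votes
            energy.append(data - beta * agree)
        new = np.argmin(np.stack(energy), axis=0)    # pick the lower-energy label
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Synthetic near-nadir scene: a bright water disk on darker, noisy land
img = rng.normal(1.0, 0.5, (64, 64))
yy, xx = np.mgrid[:64, :64]
water_truth = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
img[water_truth] += 4.0
labels = mrf_water_classify(img)
print("detected water fraction:", labels.mean())
```

The MRF term (`beta`) suppresses isolated false detections that a pure per-pixel threshold would keep, which is the motivation for the regularization in the operational classifier.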
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Session: F.01.03 Trends in Earth Observation Education and Capacity Building: Embracing Emerging Technologies and Open Innovations - PART 1

Education activities in recent years have undergone a significant transformation related to the global digitalization of education and training. Traditional teaching methods, such as face-to-face training provided to small groups of students, are being complemented or even replaced by massive open online courses (MOOCs) with hundreds of participants following the course at their own pace. At the same time, the Earth observation sector continues to grow at a high rate; in Europe, the European Association of Remote Sensing Companies (EARSC) reported in 2023 that the sector had grown by 7.5% over the past five years.
This session will cover new trends in modern education in the Space and EO domains as well as methods, use cases, and opportunities to cultivate Earth observation literacy in diverse sectors, such as agriculture, urban planning, public health, and more. It will focus on new methods and tools used in EO education and capacity building, such as: EO data processing in the cloud, processing platforms and virtual labs, dashboards, new and innovative technologies, challenges, hackathons, and showcase examples which make successful use of EO data. Participants will also have the opportunity to share and discuss methods for effective workforce development beyond typical training or education systems.
Based on the experience of Space Agencies, international organisations, tertiary lecturers, school teachers, universities and companies working in the domain of space education, this session will be an opportunity to exchange ideas and lessons learnt, discuss future opportunities and challenges that digital transformation of education has brought, consolidate recommendations for future education and capacity building activities, and explore opportunities to further collaborate, build EO literacy in new users outside of the Earth and space science sector and expand the impact of EO across sectors.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Trends in Earth Observation Education and Capacity Building: Embracing Collaboration and Innovation

Authors: PhD. Terefe Hanchiso Sodango, Prof. Effiom Oku, PhD Fabiola D. Yépez Rincón, PhD Rishiraj Dutta, PhD Mark Higgins, PhD Ganiy Agbaje, PhD Jean Danumah, PhD William Straka III, Álvaro Germán Soldano, PhD CM Bhatt, PhD Luca Brocca, Martyna A. Stelmaszczuk-Górska, Erin Martin, Yakov M. Moz, PhD Altay Özaygen, Dr. Nesrin Salepci
Affiliations: Wolkite University, University of Abuja, Universidad Autónoma de Nuevo León, Asian Disaster Preparedness Center (ADPC), European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), National Space Research and Development Agency, African Regional Centre for Space Science and Technology Education (ARCSSTE-E), Université Félix Houphouët Boigny, University of Wisconsin–Madison, National Commission on Space Activities (CONAE), Indian Institute for Remote Sensing (IIRS/ISRO), National Research Council of Italy, Research Institute for Geo-Hydrological Protection, Friedrich Schiller University Jena, Erin Martin Consulting, Booz Allen Hamilton, Metis Analytica, Université Paris Saclay, Institut Mines-Télécom Business School, LITEM
The accelerating reality of climate change and the increasing disaster risks observed on our living planet underscore the urgent need to connect users with Earth Observation (EO) technologies, education, and capacity-building efforts. These actions are critical to enabling meaningful impact, ensuring no community is left behind, and contributing to sustainable development. This challenge demands new, inclusive, and collaborative approaches that transcend traditional methods. The Earth Observation Training, Education, and Capacity Development Network (EOTEC DevNet) is addressing this challenge by fostering global and regional collaboration, sharing knowledge, and aligning EO resources with regional and thematic priorities. EOTEC DevNet operates as a virtual network of networks through Regional Communities of Practice (CoPs) in Africa, the Americas, Asia-Oceania, and Europe. These CoPs unite educators, policymakers, researchers, and data users to share experiences, identify regional needs, and develop strategies for leveraging EO to solve societal challenges. By linking regional priorities with global frameworks such as the 2030 Agenda for Sustainable Development, the Sendai Framework for Disaster Risk Reduction, and the Paris Agreement, EOTEC DevNet strengthens EO education and capacity-building efforts worldwide.
Practical Tools and Methods
EOTEC DevNet helps users access and use tools and methods tailored to specific regional challenges. For instance, the Flood Tools Tracker and Drought Tools Matrix simplify navigation through existing EO resources, making it easier for decision-makers and practitioners to identify the tools and training materials they need. Interactive dashboards and digital platforms further enhance accessibility and usability, enabling stakeholders to integrate valuable data and information into their operations.
Sharing Success Stories
EOTEC DevNet gathers and shares case studies and guidance documents that highlight successful EO capacity-building efforts. Among these is the Global Flood Extent Use Case, which includes five regional event analyses that demonstrate the practical application of EO data in addressing flood risks and offers a replicable model for others to enhance disaster preparedness and response. Another notable effort is the ongoing development of Needs Assessment Guidance, designed to help stakeholders systematically identify and address gaps in EO capacity building. These efforts are practical examples that others can adapt and apply in their own regions.
Promoting Engagement and Knowledge Sharing
EOTEC DevNet’s strength lies in its ability to bring people together. Through regular online meetings, spotlight presentations, task teams, webinars, and thematic discussions, the network creates opportunities for sharing ideas and developing solutions. Spotlight presentations feature top practitioners and researchers who share their experience on the latest trends, tools, and strategies for addressing disasters. These presentations are particularly useful in helping members stay informed, resolve queries, interact with experts, and identify practical approaches for their own work. The result is a network of collaborators who learn from each other, explore new trends, and strategize on solutions to regional and global challenges. Since its establishment, EOTEC DevNet has organized over 105 events, engaging more than 1,300 participants as of November 2024. These events include 43 Task Team meetings across the four regions (Africa, Americas, Asia-Oceania, and Europe), as well as 43 meetings of the Flood Working Groups and a global webinar on flood disaster risk management. The network has also supported five events for the Drought Working Group, consisting of global consultation meetings, a kick-off event, and webinars. Additionally, 13 Global Task Team meetings and a webinar on the Copernicus Data Space Ecosystem were held to foster broader collaboration. Every meeting features at least one Spotlight presentation or multiple presentations, showcasing the latest tools, strategies, and success stories shared by experienced practitioners. These Spotlights serve as a cornerstone for knowledge exchange and engagement, helping participants stay informed and interact with experts.
Lessons from Collaboration
EOTEC DevNet uses tools such as social network analysis to understand how stakeholders connect, identify gaps in capacity-building efforts, and improve knowledge flows. This ensures that collaborations are effective and that resources and training efforts align with the diverse needs of communities.
Supporting Global Goals
EOTEC DevNet’s work aligns closely with global frameworks for sustainable development, disaster risk reduction, and climate action. By bridging gaps in EO knowledge and resource availability across regions, the network contributes to achieving the Sustainable Development Goals (SDGs) and enhancing climate resilience. Its collaborative model brings together global, regional, and local stakeholders, minimizing duplication and fostering partnerships, maximizing the value of EO resources.
Looking Ahead
As EO technologies continue to evolve, EOTEC DevNet is committed to expanding its impact by scaling up knowledge-sharing initiatives and embracing digital innovations. The network will continue to address barriers to resource access, ensuring that communities worldwide benefit from EO tools and training. By fostering collaboration and innovation, EOTEC DevNet empowers communities to tackle the complex challenges of climate change, disasters, and sustainability. This contribution will share insights into how EOTEC DevNet operates, emphasizing its role as an innovative hub for collaboration and knowledge sharing. It will demonstrate the value of community-based approaches in driving change, with practical examples showcasing how the network fosters capacity building through open innovation and promotes global and regional cooperation.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Online and in-person learning for decision making: The NASA Applied Remote Sensing Training (ARSET) Program

Authors: Brock Blevins, Melanie Follette-Cook, Suzanne
Affiliations: NASA Applied Remote Sensing Training Program (ARSET)
The NASA Applied Remote Sensing Training (ARSET) Program, part of the NASA Earth Action Capacity Building Program, has sought to close the gap between the availability of remote sensing data and the use of Earth observations for informed decision making by providing cost-free training targeted to working professionals. Since 2009, ARSET has trained more than 140,000 participants from 186 countries and over 19,000 unique organizations across seven Earth Science themes: health and air quality, climate and resilience, agriculture, water resources, disasters, ecological conservation, and wildland fires. Trainings are designed with a learner-centered, goal-driven approach that employs a combination of teaching strategies, including lecture, demonstrations, in-class and independent exercises, and case-study analysis. A post-training homework assignment provides participants with opportunities for learning reinforcement and self-assessment of mastery of learning goals, and provides the ARSET training team with insights on impact to inform continuous improvement. ARSET trainings are offered in three formats (virtual live instructor-led, virtual asynchronous self-paced, and in-person) and at a range of levels according to technical skill, from introductory to advanced. In this presentation, we will discuss the challenges and opportunities of ARSET training modalities, and best practices and approaches to training design, delivery, and impact evaluation.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: GDA Knowledge Hub: A Platform To Support Global EO Capacity Building in International Development

Authors: Ravi Kapur
Affiliations: Imperative Space
The ESA Global Development Assistance (GDA) programme is focussed on accelerating uptake of Earth Observation within the International Development arena, and on ‘mainstreaming’ its use at all levels of the operational value chain in development funding and assistance. However, to achieve this through effective knowledge exchange and training, several obstacles must be overcome, including:
• Lack of widespread awareness in the development and International Financial Institution (IFI) communities about the fundamental capabilities available from EO.
• Lack of common language and unified taxonomies between the development and EO arenas.
• The need to enable easy access to EO training resources for integration into existing IFI-based capacity building.
• The need to access ‘certified’ EO expertise for specific development use case contexts.
To address these and other related challenges, Imperative Space has led the development of the GDA ‘Knowledge Hub’. This is a new platform that leverages clear UX design, carefully structured taxonomies, bespoke training support tools and cutting-edge LLM technology to enable rapid access to relevant information and training resources. This session will provide an overview of the GDA Knowledge Hub platform, its key features and available training resources, and how it sets a new template for providing sector-specific EO knowledge and capacity building support. Key features of the platform to be demonstrated in the session will include:
Main GDA Knowledge Hub Library: This is an extensive and interactive repository of European EO Service Capabilities, Use Cases, Case Studies and associated Training Resources. It is intended to support a wide range of user needs and interests at every level of the international development value chain, and to support IFIs in their own EO capacity building and skills development activities in client states. It also includes an archive of GDA-linked webinars and publications.
Training Module Assembler (TMA): The ‘TMA’ is an innovative bespoke new tool created for IFI-based trainers to gather, assemble, re-order and export training resources available within the Knowledge Hub. This simplifies the process of gathering valuable information for use in training activities in their own context. Additionally, users have the option to create personalised collections, allowing them to categorise and organise their saved resources by themes, projects, or specific areas of interest. There is an option to download any training content from the training modules directly to a device for offline use or project integration.
Consultation Rooms: The ‘Consultation Rooms’ feature offers an interactive space for users to engage with expert knowledge and personalised assistance. It provides access to a multi-tiered support system, initially via an advanced NLP-based chatbot, but progressing to the option to book one-on-one training sessions with EO experts online.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Advancing Earth Observation Literacy: A Strategic Approach to Skills Development in the Downstream Space Sector

Authors: Martyna A. Stelmaszczuk-Górska, Assoc. Prof. Dr Angela Aragon-Angel, Eva-Maria Steinbacher, Gabriella Povero, Bärbel Deisting, Danny Vandenbroucke, Dr. Carsten Pathe, Prof. Dr. Christiane Schmullius
Affiliations: Friedrich Schiller University Jena, Technical University of Catalonia, Paris Lodron Universität Salzburg, LINKS Foundation, bavAIRia e.V., Katholieke Universiteit Leuven, Earth Observation Services GmbH
The rapid growth of the downstream space sector and its expanding applications in industries such as agriculture, urban planning, public health, and disaster management highlight an urgent need to address the workforce skills gap. To fully unlock the potential of Earth Observation (EO) technologies, Satellite Communications (SatCom), and Global Navigation Satellite Systems (GNSS), innovative strategies are essential to foster downstream space literacy. A key distinguishing feature of SpaceSUITE is its deliberate shift from treating these subdomains in isolation to adopting an integrated approach, leveraging combined expertise to address overarching challenges. The SpaceSUITE project—SPACE downstream Skills development and User uptake through Innovative curricula in Training and Education—provides a comprehensive framework for building these skills. By combining data-driven methodologies and collaborative approaches, the project aims to meet the evolving demands of the workforce and strengthen the downstream space sector. A central pillar of SpaceSUITE’s strategy for pursuing an in-demand, adaptive workforce is the Skills Intelligence Mechanism, a data-driven approach that analyzes labor market trends and evaluates educational offerings. This mechanism identifies skills gaps, supports the design of reskilling and upskilling programs, and ensures alignment with industry trends. By bridging connections between educational providers, industry leaders, and policymakers, the mechanism promotes sustainable and inclusive workforce development, equipping professionals to adapt to the challenges and opportunities of the downstream space sector. In addition, SpaceSUITE employs a persona-driven methodology to create targeted training programs tailored to specific professional needs. Personas, developed through detailed market analysis and stakeholder input, represent key user profiles such as data analysts, sustainability experts, and technical operators. 
These personas guide the design of curricula and training materials to address both technical and transversal skills gaps, ensuring relevance and impact across diverse professional contexts. A business strategy is being developed to enable the mainstreaming and scaling of the training and educational environments created during the project, ensuring their long-term sustainability. Training materials will be integrated into a digital repository, providing seamless access to curated learning content and practical applications. The SpaceSUITE digital platform will support continuous learning by helping professionals navigate available training opportunities and engage with EO, SatCom, and GNSS concepts, applying them effectively in their fields. The results of SpaceSUITE’s initiatives demonstrate the value of integrating EO, SatCom, and GNSS literacy into diverse professional settings. Reactive training packages have already been deployed, covering topics such as the basics of remote sensing with hands-on disaster applications, image processing and machine learning for building management, an introduction to satellite communications, and GNSS data analysis, including quality assessment and massive data processing. These programs have equipped participants with the tools and knowledge needed to address immediate challenges effectively. Looking ahead, proactive activities will focus on scaling these efforts by incorporating emerging technologies and refining methodologies to keep pace with the rapidly evolving landscape of the downstream space sector. Furthermore, new skills for the digital and green transitions of Europe’s economy will be promoted through lifelong learning opportunities facilitated by the SpaceSUITE digital platform. This platform will help share best practices and monitor both available and needed skills to support ongoing workforce development.
This contribution will showcase how SpaceSUITE integrates innovative methodologies, tools, and collaborative strategies to enhance EO, SatCom, and GNSS literacy and capacity building. By emphasizing open innovation, cooperation, and cross-sectoral synergies, SpaceSUITE demonstrates how targeted education and vocational training efforts can advance workforce development, amplify the societal benefits of space technologies, and contribute to a more sustainable and resilient future.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Large Language Models in Digital Education: Assessing Reliability, Efficiency, and Content Quality

Authors: Robert Eckardt, Dr. Carsten Pathe, Dr. Henryk Hodam, Dr. Nesrin Salepci, Dr. Martyna Anna Stelmaszczuk-Górska, Jun.-Prof. Dr. Andreas Rienow, Prof. Dr. Christiane Schmullius
Affiliations: Friedrich Schiller University Jena, Ruhr-Universität Bochum, EOS Jena GmbH, ignite education GmbH
The integration of Large Language Models (LLMs) into digital education is fundamentally transforming the production, personalization, and dissemination of educational content. As education increasingly adopts digital tools, the role of LLMs has become pivotal in reshaping how content is created and tailored to diverse audiences. This study critically examines the potential of LLMs within the Earth Observation (EO) educational content production lifecycle, focusing on their influence on reliability, operational efficiency, content quality, and learner engagement. By embedding LLMs throughout various stages - ranging from conceptualization and scripting to post-production and the development of interactive modules - this research evaluates both the opportunities and the inherent challenges associated with deploying LLMs in EO education. The use of LLMs presents transformative possibilities in the way EO educational content is conceptualized and developed. During the initial phases of content creation, LLMs can support the rapid generation of ideas and provide initial drafts for educational materials, thereby accelerating the overall production cycle. By leveraging vast databases of information, LLMs can suggest relevant topics, highlight critical areas of interest, and assist in drafting outlines that align with pedagogical objectives. This preliminary support helps educators and content creators streamline their workflows, saving time and ensuring that the content produced is grounded in the latest scientific findings and educational best practices. Furthermore, LLMs are capable of generating diverse content variations, allowing educators to choose from multiple approaches and select the one that best fits the learning objectives and target audience. Findings from controlled experiments demonstrate that LLMs substantially enhance efficiency by automating routine processes, such as script generation, voiceover production, and visual content creation.
These automated processes not only save time but also introduce a level of consistency that can be challenging to achieve manually, particularly when content is produced at scale. Such automation enables educators to concentrate on high-value tasks, including creative storytelling, contextual adaptation, and ensuring scientific accuracy. The creative aspects of educational content production, such as developing narratives that resonate with learners or contextualizing information to make it more relatable, benefit greatly from human involvement. Educators are thus able to focus their expertise on enriching the content, ensuring that it meets the cognitive and emotional needs of the learners, rather than being bogged down by repetitive tasks. However, sustained human oversight remains imperative to safeguard the quality and precision of specialized content, particularly in areas where domain-specific expertise is indispensable. While LLMs provide significant efficiencies, they are not without limitations. The nuances involved in interpreting EO data, particularly when conveying complex geospatial and environmental concepts, require a depth of understanding that current AI models may not fully possess. Human experts are crucial in verifying the accuracy of AI-generated content, ensuring that it adheres to educational standards and effectively communicates intricate concepts. This is particularly important in specialized domains like EO, where inaccuracies can lead to misconceptions and undermine the educational value of the content. Therefore, a hybrid approach that combines the scalability of LLMs with the precision of human expertise is essential for producing high-quality educational materials. Moreover, the incorporation of LLMs significantly augments translation capabilities, facilitating the creation of multilingual EO content and thereby expanding the global accessibility of these resources. 
The ability of LLMs to translate content into multiple languages with contextual accuracy is a major advantage for EO education, which often targets a diverse, international audience. By reducing language barriers, LLMs contribute to the democratization of knowledge, allowing learners from different linguistic backgrounds to access high-quality educational materials. This capability is particularly beneficial for regions where access to EO education is limited due to language constraints, thereby promoting inclusivity and broadening the impact of EO training programs. Additionally, LLMs can be used to localize content, adapting not only the language but also cultural references and examples to better resonate with specific audiences, thus enhancing learner engagement and comprehension. Despite these benefits, challenges such as the risk of inaccuracies in AI-generated content and the ethical considerations regarding data privacy underscore the necessity for a balanced approach to AI integration. LLMs, by virtue of being trained on large datasets, may inadvertently produce content that includes outdated or incorrect information, especially in a rapidly evolving field like EO. This necessitates rigorous validation by subject matter experts to ensure the accuracy and reliability of the educational content. Furthermore, ethical issues related to data privacy, bias in training data, and the potential for misuse of AI-generated materials must be carefully managed. Transparency in the use of AI tools, along with clear guidelines on data handling and content validation, is crucial to maintaining trust in AI-assisted educational processes. These challenges highlight the importance of developing robust frameworks for AI integration that prioritize both technological innovation and ethical responsibility. The integration of LLMs into EO education also holds potential for enhancing personalized learning experiences. 
By analyzing learner data, LLMs can adapt content to individual learning styles, providing tailored feedback and customized learning pathways. This adaptability makes learning more responsive to the unique needs of each student, fostering a more engaging and effective educational experience. For instance, LLMs can generate adaptive quizzes that adjust in difficulty based on learner performance, or provide supplementary materials that cater to areas where a learner may be struggling. Such personalized interventions help ensure that all learners, regardless of their starting point, can progress at their own pace and receive the support they need to master complex EO topics. Additionally, LLMs can facilitate the development of interactive learning modules, incorporating elements such as simulations and scenario-based activities that are particularly well-suited to EO education. Interactive modules allow learners to explore EO data in a hands-on manner, fostering deeper understanding through practical application. For example, LLMs can help generate interactive exercises where learners manipulate satellite imagery to observe environmental changes over time or analyze geospatial data to draw conclusions about climate patterns. These kinds of active learning opportunities not only enhance engagement but also help learners develop critical skills in data analysis and interpretation, which are crucial for understanding EO concepts. Furthermore, LLMs can be instrumental in supporting collaborative learning environments. By integrating with digital platforms, LLMs can facilitate discussion forums, group projects, and peer-to-peer interactions. They can summarize group discussions, suggest relevant resources, or even moderate debates by providing factual clarifications. This capability enhances the collaborative dimension of learning, allowing students to benefit from diverse perspectives while ensuring that discussions remain focused and informative. 
In EO education, where interdisciplinary understanding is often required, the ability of LLMs to provide contextually relevant information in real-time can significantly enhance the learning experience. Another significant advantage of LLMs is their ability to streamline the iterative refinement of educational content. Given the dynamic nature of EO, where new research and data are continually emerging, educational materials must be updated regularly to remain relevant. LLMs can assist in this process by quickly analyzing new research findings and integrating them into existing content. This ensures that learners always have access to the most current information, without placing an excessive burden on educators to manually revise materials. The iterative updating facilitated by LLMs not only keeps the content fresh but also allows educators to respond swiftly to changes in the field, maintaining the quality and relevance of EO education. Ultimately, this study presents elements of a comprehensive framework for leveraging LLMs to optimize the EO educational content lifecycle, emphasizing that AI, when judiciously integrated with human expertise, can enhance the efficiency, scalability, and inclusivity of digital education. By automating routine aspects of content creation, LLMs free educators to focus on the more nuanced and creative components of teaching, which are critical for fostering deep learning and engagement. This synergy between AI and human educators not only improves the quality and reach of EO education but also ensures that learning experiences are adaptable, culturally sensitive, and aligned with the needs of diverse learner populations. The potential for LLMs to personalize learning pathways, provide real-time feedback, and adaptively respond to learner inputs further underscores their role as a transformative tool in the evolving landscape of digital education. 
As educational institutions continue to explore the integration of AI, this study provides valuable insights into how LLMs can be effectively utilized to advance educational practices while maintaining the core values of human-centered learning. The findings of this study also have broader implications for the future of digital education beyond EO. As LLMs continue to evolve, their applications are extending to various other domains that require specialized knowledge and adaptability. The principles outlined in this research - such as the importance of human oversight, the need for ethical AI practices, and the benefits of personalization - are applicable across a wide range of educational contexts. By understanding how to effectively integrate LLMs into EO education, educators and policymakers can develop best practices that can be adapted to other fields, thus contributing to the broader transformation of education in the digital age. The scalability and flexibility offered by LLMs provide an opportunity to rethink traditional educational models, making them more inclusive, engaging, and responsive to the needs of learners.
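The adaptive quizzes described above, which "adjust in difficulty based on learner performance", can be illustrated with a minimal sketch. The class name, level range, and streak threshold below are illustrative assumptions, not part of the study:

```python
# Hypothetical sketch of the adaptive-quiz logic described in the abstract:
# difficulty rises after a streak of correct answers and falls after a wrong
# one. All names and thresholds here are invented for illustration.

class AdaptiveQuiz:
    def __init__(self, levels=5, start=1, streak_to_advance=2):
        self.levels = levels                      # difficulty levels 1..levels
        self.level = start                        # current difficulty
        self.streak = 0                           # consecutive correct answers
        self.streak_to_advance = streak_to_advance

    def record_answer(self, correct: bool) -> int:
        """Update the difficulty from one answer; return the new level."""
        if correct:
            self.streak += 1
            if self.streak >= self.streak_to_advance:
                self.level = min(self.levels, self.level + 1)
                self.streak = 0
        else:
            # a wrong answer drops the difficulty and resets the streak
            self.level = max(1, self.level - 1)
            self.streak = 0
        return self.level
```

In a real system the per-level question pools and the advancement thresholds would themselves be generated and tuned with LLM support, as the abstract suggests.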
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Expanding Access to EO Education: The Impact of IEEE GRSS Webinars on Global Learning

Authors: Gunjan Joshi, Stephanie Tumampos, Fairouz Stambouli, Keely Roth
Affiliations: Helmholtz-Zentrum Dresden-Rossendorf, Technical University of Munich, German Aerospace Center, Planet Labs, PBC
The IEEE Geoscience and Remote Sensing Society (GRSS) webinars have become a cornerstone in disseminating remote sensing education globally, emerging as a pivotal resource during the COVID-19 pandemic. Since their inception in 2020, these webinars have featured contributions from over 120 experts spanning 25 countries and have reached a global audience of enthusiasts, students, early researchers, and professionals. These webinars are made possible through the collaborative efforts of IEEE GRSS's eight technical committees and various other initiatives, which bring together diverse experts to deliver high-quality state-of-the-art research content. In the past year alone, the GRSS webinars have attracted over 6,500 registrations from 137 countries across all 7 continents, with over 4,000 participants joining the live sessions. The webinars have been met with overwhelmingly positive reception, as evidenced by consistently high ratings and enthusiastic feedback from attendees. In addition, a significant majority of participants expressed willingness to recommend the webinars to others and indicated strong interest in attending future sessions, underscoring the program’s impact and value to the community. These webinars, free for both members and non-members, foster open and inclusive access to knowledge and discussion. The webinars are also posted on YouTube, where they have garnered wide viewership and continue to serve as invaluable resources for asynchronous learning. These webinars have not only focused on fundamental topics but have also embraced a diverse range of emerging subjects, including Earth Observation (EO) data platforms, geospatial foundation models, digital twins, unmanned aerial vehicles (UAVs), hyperspectral remote sensing technologies, geospatial artificial intelligence, climate and environment, and emerging satellite missions. 
Additionally, they offer capacity- and skill-building sessions on academic writing, publishing, and career development. We highlight the role of IEEE GRSS webinars in leveraging innovative digital tools and global collaboration to democratize EO education, and explore audience metrics and thematic trends to understand how these initiatives contribute to capacity building in today’s evolving digital education landscape. The webinars strive to make knowledge universally available, ensuring that anyone with internet access and a curiosity about our planet can easily access information. Looking ahead, IEEE GRSS webinars aim to improve accessibility and interactivity further, adapt to the changing needs of the EO and geospatial community, and promote a truly inclusive and borderless knowledge-sharing community.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Session: A.01.03 Fourier Transform Spectroscopy for Atmospheric Measurements

Fourier Transform Spectroscopy (FTS) is a powerful technique for atmospheric observations, allowing the Earth's and the atmosphere's thermal radiation to be sampled with high spectral resolution. This spectral range carries profile information on many atmospheric gases (water vapour, carbon dioxide, nitrous oxide, methane, ammonia, nitric acid, ...), but also information on clouds (e.g. phase or liquid/ice water path) and aerosols (e.g. dust optical depth). Measurements have been performed from satellites (nadir and limb), from the ground, and from airborne platforms for several decades, and have recently come into the foreground in ESA's Earth Explorer (EE) programme with the EE9 FORUM mission and the EE11 candidate CAIRT, both aiming to fly in convoy with the FTS IASI-NG on MetOp-SG. The Infrared Sounder (IRS) will be launched on MTG-S1 in 2025. In addition, new airborne and ground-based instruments have become available with performance and versatility that allow for innovative research applications. This session invites presentations on:
- retrieval algorithms and methods for uncertainty quantification including calibration/validation techniques for existing and future missions,
- new spectrometer developments for field work and satellite applications.
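As background to the session topic: the core of FTS processing, recovering a spectrum from a recorded interferogram via a Fourier transform, can be sketched with NumPy. This is a toy monochromatic example with arbitrary numbers, not any mission's actual processing chain:

```python
import numpy as np

# Toy FTS illustration: a monochromatic source at wavenumber nu0 produces a
# cosine interferogram I(x) ~ cos(2*pi*nu0*x) over optical path difference x;
# a Fourier transform of I(x) recovers a spectral line at nu0.

L = 1.0                        # max optical path difference (cm); resolution ~ 1/L cm^-1
N = 4096                       # number of interferogram samples
x = np.arange(N) / N * L       # optical path difference grid (cm)
nu0 = 667.0                    # source wavenumber (cm^-1), e.g. near the CO2 band centre

interferogram = np.cos(2 * np.pi * nu0 * x)

spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(N, d=L / N)   # spectral axis in cm^-1

recovered = wavenumbers[np.argmax(spectrum)]
# recovered lies within one spectral bin (1/L = 1 cm^-1) of nu0
```

The 1/L spectral-bin width is why long optical path differences give the high spectral resolution that the session description emphasizes.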

Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Experimental validation of HiSRAMS and REWARDS all-sky airborne measurements in synergy with active remote sensors and in-situ probes

Authors: Natalia Bliankinshtein, Lei Liu, Philip Gabriel, Cuong Nguyen, Keyvan Ranjbar, Yi Huang, Kenny Bala, Leonid Nichman, Mengistu Wolde, Dirk Schuettemeyer
Affiliations: National Research Council Canada, McGill University, Horizon Science and Technology, European Space Agency
Microwave sounders contribute most information to operational numerical weather prediction models (Saunders, 2021). Following recent advancements in radio frequency (RF) technologies, research organizations around the world are currently undertaking proof-of-concept studies (e.g., Henry et al., 2023; Pradhan et al., 2024) to explore the benefits of hyperspectral microwave radiometers for atmospheric profiling, which involves building prototype instruments and their suborbital testing. The High Spectral Resolution Airborne Microwave Sounder (HiSRAMS) is a prototype instrument (Auriacombe et al., 2022) developed by AAC Omnisys, the National Research Council Canada (NRC) and McGill University under European Space Agency (ESA) funding and operated onboard the NRC Convair-580 research aircraft. HiSRAMS consists of two spectrometers covering absorption bands of oxygen (49.6-58.3 GHz) and water vapor (175.9-184.6 GHz) respectively. It can be configured to measure single-polarized or dual-polarized radiance at up to 305 kHz spectral resolution. HiSRAMS brightness temperature spectra and Optimal Estimation Method retrievals of temperature and humidity profiles were validated in clear-sky conditions in a dedicated flight campaign in 2021-2022 (Bliankinshtein et al., 2023; Liu et al., 2024). In 2023, NRC undertook a follow-up study to extend HiSRAMS validation to cloudy atmospheres. To assist with validation, NRC and ProSensing Inc. implemented a single-channel passive W-band radiometer at 94.05 GHz in one of the channels of the NRC Airborne W-band (NAW) cloud radar, by re-equipping one receiver channel of the NAW with a dedicated radiometer RF frontend and modifying the radar electronics and software. This modification of NAW added the capability to run a radiometer mode using existing antenna ports, which was part of the NAW's original 10-port switching design.
The resulting Radar-Enhanced W-band Airborne Radiometer Detection System (REWARDS) is effectively a noise-diode-calibrated W-band radiometer (‘passive channel’) which can operate in an interleaving mode with the radar (‘active channel’) with very fine temporal and spatial resolution. The information provided by REWARDS mainly derives from cloud particles, complementing that obtained in the G band. Calibrated brightness temperatures of REWARDS show great sensitivity to liquid clouds, as shown by staircase sampling of a warm stratocumulus cloud deck (Bliankinshtein et al., 2024). Results from two warm stratocumulus cloud flights highlight the synergy of REWARDS and HiSRAMS with the NRC Convair-580's advanced instrument suite, which includes in-situ atmospheric state sensors, cloud microphysics probes, the NAW radar, and a 355 nm Airborne Elastic Cloud Lidar. Particle size distributions from in-situ probes and cloud boundaries from radar and lidar are incorporated in the forward radiative transfer model to simulate microwave observations of HiSRAMS and REWARDS. W-band radiances, being particularly sensitive to cloud information, are used to fine-tune the model setup, which is subsequently applied to HiSRAMS radiation closure tests. The resulting uncertainties reveal the challenge of experimentally validating all-sky radiances in a heterogeneous environment. Multi-sensor representations of clouds in all-sky radiative transfer models are shown to be of great importance to achieve radiation closure and analyze its uncertainties. This study highlights the value of hyperspectral radiometry in enhancing our understanding of cloud microphysics and atmospheric processes, with significant implications for future satellite missions. References: Auriacombe, Olivier, et al. "High spectral resolution airborne microwave sounder (HiSRAMS)." 2022 47th International Conference on Infrared, Millimeter and Terahertz Waves (IRMMW-THz). IEEE, 2022. Bliankinshtein, Natalia, et al. 
"Airborne validation of HiSRAMS atmospheric soundings." IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2023. Bliankinshtein, Natalia, et al. "Calibration and Flight Test of NAW W-Band Radiometric Mode." IGARSS 2024-2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024. Henry, Manju, et al. "Development of a hyperspectral microwave sounder for enhanced weather forecasting." IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2023. Liu, Lei, et al. "Radiative closure tests of collocated hyperspectral microwave and infrared radiometers." Atmospheric Measurement Techniques 17.7 (2024): 2219-2233. Pradhan, Omkar, et al. "Hyperspectral Microwave Radiometer for Airborne Atmospheric Sounding." IGARSS 2024-2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024. Saunders, Roger. "The use of satellite data in numerical weather prediction." Weather 76.3 (2021).
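The Optimal Estimation Method retrievals of temperature and humidity mentioned above follow the standard linear-Gaussian update; a minimal sketch with toy matrices (the numbers below are illustrative, not HiSRAMS values):

```python
import numpy as np

# Minimal linear Optimal Estimation sketch: prior state xa with covariance Sa,
# measurement y = K x + noise with covariance Se; the retrieved state is
#   x_hat = xa + Sa K^T (K Sa K^T + Se)^-1 (y - K xa)

K = np.array([[1.0, 0.2],
              [0.1, 0.9]])          # Jacobian of the forward model
Sa = np.diag([4.0, 4.0])            # prior covariance (loose prior)
Se = np.diag([1e-4, 1e-4])          # measurement noise covariance (precise data)

xa = np.array([0.0, 0.0])           # prior state (e.g. T and q perturbations)
x_true = np.array([1.5, -0.8])
y = K @ x_true                      # noise-free synthetic measurement

G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)   # gain matrix
x_hat = xa + G @ (y - K @ xa)

A = G @ K    # averaging kernel; its trace gives the degrees of freedom
# with precise measurements, x_hat approaches x_true and trace(A) approaches 2
```

In a real sounder retrieval K is the radiative transfer Jacobian over many channels and the update is iterated, but the structure of the estimate is the same.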
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: First Flight - First Light: the Novel Limb-imaging FTIR Sounder GLORIA-Lite Crossing the Atlantic

Authors: Felix Friedl-Vallon, Erik Kretschmer, Tom Neubert, Jörn Ungermann, Michael Höpfner, Thomas Gulde, Sören Johansson, Anne Kleinert, Guido Maucher, Christof Piesch, Peter Preusse, Markus Retzlaff, Martin Riese, Georg Schardt, Gerald Wetzel, Wolfgang Woiwode
Affiliations: Institute of Meteorology and Climate Research, Karlsruhe Institute of Technology, Central Institute of Engineering, Electronics and Analytics-Electronics Systems (ZEA2), Forschungszentrum Jülich, Institute of Climate and Energy Systems - Stratosphere (ICE4), Forschungszentrum Jülich
A new remote-sensing instrument, GLORIA-Lite, was developed by the Institute of Meteorology and Climate Research (IMK-ASF) at the Karlsruhe Institute of Technology (KIT), in collaboration with the ICE4 and ZEA2 institutes at Forschungszentrum Jülich (FZJ). It was launched within the TRANSAT2024 field campaign on board a large stratospheric balloon by a team of the Centre National d'Études Spatiales (CNES) from the European Space and Sounding Rocket Range (ESRANGE, Swedish Space Corporation) on June 22, 2024. The balloon ascended to an altitude of 40 km, traveling from Kiruna, northern Sweden, to Baffin Island, Canada, where it safely landed on June 26. GLORIA-Lite is an advanced limb-imaging Fourier-Transform Infrared instrument, extending the decades-long legacy of its predecessors, GLORIA (airborne/balloon) and MIPAS (airborne/balloon). By leveraging state-of-the-art infrared detectors, customized electronics, and innovative manufacturing techniques, GLORIA-Lite achieves a significant reduction in size and weight compared to its predecessors. This miniaturization enables its deployment on transcontinental balloon flights, sharing a gondola with multiple other instruments. The alignment of the fully reflective optical system is performed during manufacturing, ensuring consistent performance over the wideband longwave spectral range of the infrared detector array. The quasi-monolithic design approach eases the thermal constraints of instrument operation. The electronics controlling the instrument are being developed towards further miniaturisation into a Multi-Processor System-on-Chip architecture, with the goal of processing the data on the fly up to Level 1. GLORIA-Lite is capable of analyzing infrared emissions of more than 20 different molecules and aerosols in the atmosphere. The instrument is designed to enhance our understanding of dynamic and chemical processes occurring from the middle troposphere deep into the stratosphere. 
In times of accelerating climate change, it is particularly important to study its impacts on the middle atmosphere and to monitor them through long-term measurement series. Additionally, GLORIA-Lite serves as a technology demonstrator for the CAIRT satellite project, a candidate mission of the European Space Agency (ESA). CAIRT aims to bring these advanced atmospheric observing capabilities to a global scale. We will provide a detailed account of the instrument's technical development and characterization, along with the results obtained from retrieving geophysical parameters, such as trace-gas distributions, during its first flight.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: The Universal InfraRed Airborne Spectrometer (UNIRAS): Mid-to-Far-Infrared spectral radiance measurements from aircraft

Authors: Jonathan Murray, Prof Helen Brindley, Stephane Lantagne, Mikael Zubilewich, Oleg Kozhura, Arthur Zielinski, Dirk Schuettemeyer, Hilke Oetjen
Affiliations: National Centre for Earth Observation, Department of Physics, Imperial College London, ABB Canada, Facility for Airborne Atmospheric Measurements, National Centre for Atmospheric Science, European Space Agency
New mission concepts, designed to provide new insights into the controls on atmospheric composition, clouds and surface properties, will push observational capacity towards higher precision and higher spatial resolution. With the advent of NASA’s PREFIRE and ESA’s FORUM missions, a new observational window covering the far-infrared (100-667 cm-1) will be investigated for the first time. Given these developments, there is a clear need to develop the capability to deliver proxy observations that can (a) test and refine new concepts, including the retrieval methods being developed to deliver the level 2 products, and (b) offer a calibration/validation framework for the level 1 radiances that are the actual measurand of the satellite instruments the UNIRAS initiative seeks to support. UNIRAS, the UNiversal InfraRed Airborne Spectrometer, jointly funded by NERC and ESA, is capable of measuring spectrally resolved radiances in the 100 cm-1 to 1600 cm-1 wavenumber range and will be flown on the FAAM Airborne Laboratory's atmospheric research aircraft, a modified BAe 146. UNIRAS is a combination of two main units: a spectrometer, assembled at ABB, Canada, and the calibration and scene selection unit currently being assembled at Imperial College. The spectrometer design concept for UNIRAS is based on one of the industrial studies undertaken for FORUM phase A. It is a 4-port Fourier Transform Spectrometer, comprising two input ports and two output ports. For UNIRAS, one of the input ports uses a steering mirror to select between calibration targets and the atmospheric signal of interest; the second input port is filled using a fixed-temperature thermoelectrically cooled plate at -25 °C. The FORUM spectrometer employs uncooled DLaTGS detectors at each of the two output ports. To expand the UNIRAS instrument's versatility beyond FORUM and PREFIRE, UNIRAS employs configurable detectors on the output ports, allowing a choice of two from three available detectors. 
These are an uncooled DLaTGS, 100 cm-1 – 1600 cm-1, similar to the FORUM detectors, a Stirling-cooled longwave-extended MCT, 400 cm-1 – 1600 cm-1, and a standard Stirling-cooled MCT detector, 600 cm-1 – 1600 cm-1. This allows the user to switch between a Mid-to-Far-IR configuration and an extended Mid-IR configuration, the latter offering higher temporal and spatial resolution and improved signal to noise. The spectrometer is scheduled to be delivered to Imperial College in early March 2025, where the front-end calibration system will be integrated and ground testing will start. This presentation will provide first-light spectral performance based on ground tests, including measurements of the noise equivalent spectral radiance (NESR), instrument line shapes, the calibration strategy and ground-based zenith-view measurements under available sky conditions. Finally, we will detail the objectives and plans for UNIRAS deployment.
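The NESR measurement mentioned above is conceptually simple: record many calibrated spectra of a stable scene and take the per-channel standard deviation. A sketch with entirely synthetic numbers (not UNIRAS values):

```python
import numpy as np

# Sketch of a noise equivalent spectral radiance (NESR) estimate: repeated
# calibrated spectra of a stable blackbody scene, standard deviation per
# spectral channel. All numbers below are synthetic and illustrative.

rng = np.random.default_rng(0)
n_spectra, n_channels = 200, 1500          # e.g. 100-1600 cm^-1 at ~1 cm^-1 spacing
true_radiance = np.full(n_channels, 50.0)  # flat scene radiance (arbitrary units)
noise_sigma = 0.3                          # per-channel radiometric noise

spectra = true_radiance + rng.normal(0.0, noise_sigma, (n_spectra, n_channels))

nesr = spectra.std(axis=0, ddof=1)         # NESR estimate for each channel
# the mean of nesr converges to noise_sigma as n_spectra grows
```

In practice the scene is a well-characterized blackbody and the NESR varies strongly across the band with detector response, which is why it is reported per channel rather than as a single number.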
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: A Versatile Fourier Transform Spectrometer Model for Future Earth Observation Missions

Authors: Mr. Tom Piekarski, Dr. Christophe Buisset, Dr. Anne Kleinert, Dr. Felix Friedl-Vallon, Mr. Arnaud Heliere, Dr. Julian Hofmann, Dr. Ljubiša Babić, Mr. Micael Dias Miranda, Dr. Tobias Guggenmoser, Mr. Daniel Lamarre, Dr. Flavio Mariani, Felice Vanin, Dr. Ben Veihelmann
Affiliations: European Space Agency, Institute of Meteorology and Climate Research, Karlsruhe Institute of Technology
Infrared Fourier Transform Spectrometers (FTS) have been widely used in previous Earth Observation space missions to monitor the abundance of atmospheric gases (e.g., TANSO-FTS, ACE-FTS, MIPAS, IASI) and are planned to be used in several future missions such as MTG-IRS, FORUM or CAIRT. We have developed an FTS model that uses instrumental parameters as inputs, generates interferograms and reconstructs a scene spectrum with associated noise. This model has been implemented as a Python code with the primary objective of providing a tool for engineers and scientists i) to estimate FTS instrument performance in early mission phases, ii) to understand how the instrument design and error sources affect the signal quality, and iii) to select the most suitable calibration strategy and interferogram numerical apodization. The model comprises a forward and a backward model. The forward model simulates the interferograms provided by the FTS and estimates the associated noise for three kinds of sources: the cold and hot calibration black bodies and the Earth scene. These interferograms include the contributions of the detector, the thermal background and instrumental effects such as self-apodization, and differential shearing, tilt and wavefront error. This forward model has been validated through comparison with real measurements from the airborne FTS GLORIA, a joint development of the Karlsruhe Institute of Technology and Forschungszentrum Jülich. The forward model has also been used at ESA for the assessment of the Earth Explorer 11 CAIRT radiometric performance. The backward model reconstructs the scene radiance from these interferograms by i) simulating the radiometric calibration, ii) retrieving the spectral radiance, including the interferogram numerical apodization and zero-filling, and iii) propagating the associated noise. The outputs of this model are the reconstructed spectral radiance, gain and offset with their associated noise. 
The FTS forward and backward theoretical models, together with the Python code, have been developed in the frame of a Young Graduate Traineeship at ESA. They have been fully documented with the aim of being released in 2025-2026 to support future missions’ performance assessments.
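The radiometric calibration step simulated by the backward model is, at its core, a two-point calibration against the cold and hot blackbodies. A schematic version, assuming a linear detector and using illustrative numbers only (not the model's actual code):

```python
# Two-point radiometric calibration sketch: a linear instrument measures
# signal S = gain * L + offset; observing cold and hot blackbodies of known
# radiance B_c and B_h lets us solve for gain and offset, then invert the
# Earth-scene signal to radiance. Numbers are illustrative only.

B_c, B_h = 10.0, 100.0                # known blackbody radiances (arb. units)
true_gain, true_offset = 2.5, 7.0     # hidden instrument response

S_c = true_gain * B_c + true_offset   # measured cold-blackbody signal
S_h = true_gain * B_h + true_offset   # measured hot-blackbody signal

gain = (S_h - S_c) / (B_h - B_c)      # recovered instrument gain
offset = S_c - gain * B_c             # recovered instrument offset

S_scene = 157.0                       # Earth-scene signal
L_scene = (S_scene - offset) / gain   # calibrated scene radiance
```

In the full model the gain and offset are complex and spectrally resolved, and the noise on S_c, S_h and S_scene is propagated into the calibrated radiance, which is exactly what the backward model's outputs describe.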
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: CAIRT Earth Explorer 11 candidate / Impact Study for volcanic ash

Authors: Ilaria Mazzotti, Enzo Papandrea, Umberto Rizza, Lorenzo Buriola, Marco Menarini, Piera Raspollini, Tiziano Maestri, Giorgia Proietti Pelliccia, Guido Masiello, Lorenzo Cassini, Stefano Corradini, Lorenzo Guerrieri, Björn-Martin Sinnhuber, Dr. Bianca Maria Dinelli
Affiliations: National Research Council of Italy-Institute of atmospheric sciences and climate (CNR-ISAC), National Research Council of Italy-Institute of Applied Physics "Nello Carrara", Department of Physics and Astronomy “Augusto Righi”, University of Bologna, Department of Engineering, University of Basilicata, Department of Civil, Building and Environmental Engineering, University “La Sapienza”, National Institute of Geophysics and Volcanology (INGV), Karlsruhe Institute of Technology (KIT)
Volcanic eruptions emit large amounts of gases and particles into the atmosphere, causing severe impacts on human health, the environment and the climate. Volcanic ash clouds are also well known for being dangerous for aviation; indeed, current regulations in Europe state that airlines must operate in ash concentrations below 2 mg/m³ (Beckett et al., 2020). It is therefore very important to identify the ash plume and to characterize its extent, geometry and concentration. Geostationary instruments are routinely used to infer the spatial extent of volcanic ash clouds, their effective altitude and the ash columnar abundance (Guerrieri et al., 2024). However, information on the vertical distribution and extent of volcanic ash clouds is still missing. The Changing-Atmosphere InfraRed Tomography explorer (CAIRT) is one of the two candidates for the Earth Explorer 11 selection. One of the secondary objectives of CAIRT is to provide information on clouds. By exploiting the limb viewing geometry, CAIRT will make it possible to estimate the thickness of the plume which, combined with the ash columnar abundance retrieved from a geostationary instrument like SEVIRI, can be used to compute the ash concentration. In the frame of the CAIRT phase A studies, we have investigated whether, in the case of volcanic eruptions, CAIRT can detect the ash cloud and provide information on the vertical extent of volcanic clouds. The study is organized as follows: first, to characterize the sensitivity of the radiance measured by CAIRT to the main parameters of volcanic ash clouds, simulations are run using the GBB-clouds Radiative Transfer Model (RTM) (Dinelli et al., 2023). Subsequently, with the same RTM and also using the sigma-IASI RTM (Masiello et al., 2024), radiance fields as observed by CAIRT and SEVIRI are generated considering a realistic volcanic eruption (the Etna 23 November 2013 event) simulated with the WRF-Chem code (Grell et al., 2005). 
The assumed ash optical properties are obtained from the Volz (1973) refractive index. Finally, the CAIRT and SEVIRI simulated data are used for the retrieval of the ash cloud thickness and columnar abundance respectively, and then merged to estimate the ash concentration. The latter parameter will also be cross-compared with the WRF model simulations of the same case study. References: Beckett, F.M., Witham, C.S., Leadbetter, S.J., Crocker, R., Webster, H.N., Hort, M.C., Jones, A.R., Devenish, B.J., Thomson, D.J., 2020. Atmospheric Dispersion Modelling at the London VAAC: A Review of Developments since the 2010 Eyjafjallajökull Volcano Ash Cloud. Atmosphere 11, 352. https://doi.org/10.3390/atmos11040352. Dinelli, B.M.; Del Bianco, S.; Castelli, E.; Di Roma, A.; Lorenzi, G.; Premuda, M.; Barbara, F.; Gai, M.; Raspollini, P.; Di Natale, G. GBB-Nadir and KLIMA: Two Full Physics Codes for the Computation of the Infrared Spectrum of the Planetary Radiation Escaping to Space. Remote Sens. 2023, 15, 2532. https://doi.org/10.3390/rs15102532. Guerrieri, L., Corradini, S., Theys, N., Stelitano, D., & Merucci, L. (2023). Volcanic Clouds Characterization of the 2020–2022 Sequence of Mt. Etna Lava Fountains Using MSG-SEVIRI and Products’ Cross-Comparison. Remote Sensing, 15(8), 2055. https://doi.org/10.3390/RS15082055/S1. Masiello, G., Serio, C., Maestri, T., Martinazzo, M., Masin, F., Liuzzi, G., Venafra, S., 2024. The new σ-IASI code for all sky radiative transfer calculations in the spectral range 10 to 2760 cm-1: σ-IASI/F2N. Journal of Quantitative Spectroscopy and Radiative Transfer 312, 108814. https://doi.org/10.1016/j.jqsrt.2023.108814. Volz, F. E., Infrared optical constants of ammonium sulfate, Sahara dust, volcanic pumice and fly ash. Appl. Opt. 12, 564-568 (1973).
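The CAIRT/SEVIRI synergy described above, columnar abundance divided by plume thickness, reduces to simple arithmetic; a sketch with made-up numbers against the 2 mg/m³ aviation limit cited in the abstract:

```python
# Sketch of the ash-concentration estimate described in the abstract
# (illustrative numbers, not retrieval results):
#   concentration = columnar mass loading / plume vertical thickness

column_loading_g_m2 = 4.0    # ash columnar abundance from a SEVIRI-like retrieval (g/m^2)
plume_thickness_m = 1500.0   # plume vertical extent from a CAIRT-like limb retrieval (m)

concentration_mg_m3 = column_loading_g_m2 / plume_thickness_m * 1000.0  # g -> mg

exceeds_aviation_limit = concentration_mg_m3 > 2.0   # 2 mg/m^3 (Beckett et al., 2020)
```

The difficulty, of course, lies entirely in retrieving the two inputs, which is what the sensitivity and retrieval studies in the abstract address.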
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: NH3 point source emissions and lifetimes derived from 15 years of IASI observations

Authors: Antoine Honet, Lieven Clarisse, Martin Van Damme, Cathy Clerbaux, Pierre Coheur
Affiliations: Université libre de Bruxelles (ULB), Brussels Laboratory of the Universe (BLU-ULB), Spectroscopy, Quantum chemistry and Atmospheric Remote Sensing (SQUARES), Royal Belgian Institute for Space Aeronomy (BIRA-IASB), LATMOS/IPSL, Sorbonne Université, UVSQ, CNRS
Ammonia (NH3) is a short-lived atmospheric constituent with a lifetime of a few hours. Despite its devastating effects on air quality and the environment, its global concentration continues to increase. Knowledge of its diverse emission sources is crucial for guiding and implementing effective legislation. Particularly large NH3 emissions originate from animal feedlots and housing, and from a variety of industries related to the production of, e.g., synthetic fertilizers, coke, steel, and soda ash. Emissions from these super emitters are currently not well constrained in bottom-up inventories. Satellite measurements offer an attractive means for quantifying point sources. In this work, we present the latest version of the NH3 point source catalogue based on 15 years of IASI measurements. Over 750 hotspots were identified using a wind-rotated supersampling technique that significantly increases the spatial resolution of the measurements beyond the native resolution of the sounder. These were subsequently categorized into 12 distinct source categories through careful study of visible imagery, publicly available inventories and various online sources. Specific region-dependent periods were excluded from the analysis to avoid contributions from fires, enabling the identification of sources that are otherwise difficult to detect. Each source was classified based on its geographical extent into one of the following categories: point source, extended point source, or cluster of point sources. We estimated the atmospheric lifetimes and emissions for each by fitting an Exponentially Modified Gaussian (EMG) function to the observed NH3 distributions. The results are analyzed as functions of geographical location, season, and source category. In addition, yearly emissions are derived and compared to those presented in the European Pollutant Release and Transfer Register (E-PRTR) for the sources in Europe. 
We quantify the uncertainty in our estimates by propagating uncertainties in the input parameters. This includes the systematic uncertainty originating from the satellite measurements and the uncertainty in the fitting parameters such as the fitting domain and the chosen reference wind speed. The estimated uncertainties are source-dependent and can therefore be used to identify the sources whose emissions and resulting NH3 residence time are tightly constrained, making these particularly useful for the evaluation of emission inventories.
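The EMG fit described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `emg` parameterization (amplitude `a`, e-folding distance `x0`, Gaussian width `sigma`, shift `mu`) follows the form commonly used for wind-rotated satellite line densities, and the lifetime then follows from dividing the fitted e-folding distance by an assumed reference wind speed.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(x, a, x0, sigma, mu):
    """Exponentially Modified Gaussian line density downwind of a point
    source: an exponential decay (e-folding distance x0, km) convolved
    with a Gaussian (width sigma, km) centred at mu."""
    arg = sigma**2 / (2 * x0**2) - (x - mu) / x0
    return (a / (2 * x0)) * np.exp(arg) * erfc(
        sigma / (np.sqrt(2) * x0) - (x - mu) / (np.sqrt(2) * sigma))

# Synthetic, noise-free "observed" NH3 line density (arbitrary units)
x_km = np.linspace(-50, 200, 251)
y = emg(x_km, a=1000.0, x0=30.0, sigma=15.0, mu=0.0)

# Fit the four EMG parameters from a rough initial guess
popt, _ = curve_fit(emg, x_km, y, p0=[800.0, 20.0, 10.0, 5.0])
a_fit, x0_fit, sigma_fit, mu_fit = popt

wind_ms = 5.0                                     # assumed reference wind speed
lifetime_h = x0_fit * 1000.0 / wind_ms / 3600.0   # tau = x0 / w
```

With a 30 km e-folding distance and a 5 m/s wind, the implied lifetime is about 1.7 hours, consistent with the few-hour NH3 lifetime quoted above.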
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Session: A.10.01 EO for Mineralogy Geology and Geomorphology

Earth observation is an important source of information for new and improved geological, mineralogical, regolith, geomorphological and structural mapping, and is essential for assessing the impact of environmental changes caused by climatic and anthropogenic threats. Given the increasing demand for mineral and energy resources and the need for sustainable management of natural resources, the development of effective methods for monitoring, and for cost-effective and environmentally friendly extraction, is essential.
In the past, the use of multispectral satellite data from Landsat, ASTER, SPOT, ENVISAT, Sentinel-2 or higher resolution commercial missions, also in combination with microwave data, has provided the community with a wide range of possibilities to complement conventional soil surveys and mineralogical/geological mapping and monitoring, e.g. for mineral extraction. In addition, discrimination capabilities have been enhanced by hyperspectral data (pioneered by Hyperion and PROBA), which are now available through several operational research satellites and will be extended once CHIME is commissioned.
The session aims to collect contributions presenting different techniques to process and simplify large amounts of geological, mineralogical, and geophysical data, to merge different datasets, and to extract new information from satellite EO data, with a focus on mine site lifecycles.

Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: VSWIR and TIR imaging spectroscopy data to characterize surface mineralogy over a geothermally active area

Authors: Federico Rabuffi, Simon J. Hook, Massimo Musacchio, Kerry Cawse-Nicholson, Malvina Silvestri, Maria Fabrizia Buongiorno
Affiliations: Jet Propulsion Laboratory, California Institute of Technology, Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Nazionale Terremoti
Parco Naturalistico delle Biancane (PNB), part of the Larderello geothermal field in Italy, is an area of both industrial and scientific interest due to the ongoing geothermal energy production. However, the use of remote sensing images to characterize surface mineralogy is challenging due to the limited bare soil exposure, which primarily occurs over the geothermally active area. The aim of this study is to use a combination of Visible to ShortWave InfraRed (VSWIR: 0.4 to 2.5 μm) and Thermal InfraRed (TIR: 7.5 to 12 μm) imaging spectroscopy data to characterize surface mineral composition and generate complementary mineral and lithotype maps of the exposed portion of the PNB geothermal area, where geothermal features such as fumaroles and mineral alteration occur at both regional and local scales. Datasets used for the analysis include VSWIR and TIR spectral libraries derived from laboratory measurements of field samples representing the main outcropping lithotypes, and remotely sensed spectroscopic data acquired by the Airborne Visible InfraRed Imaging Spectrometer - Next Generation (AVIRIS-NG) and the Hyperspectral Thermal Emission Spectrometer (HyTES). The AVIRIS-NG and HyTES data have very high spatial resolutions of 5.7 m and ~1 m, respectively. The classification maps were derived using the Material Identification and Characterization Algorithms (MICA) from the United States Geological Survey (USGS), which identify materials based on a set of diagnostic features in the reflectance spectra, starting from the spectral libraries and remotely sensed data. The use of high-resolution spectroscopy images in the VSWIR and TIR results in very detailed classification maps of the lithology and mineralogy outcropping in the area, with the identification of specific minerals such as oxide minerals (hematite), clay minerals (alunite, kaolinite, smectite), hydrothermal silica, gypsum and sulfur. 
Furthermore, the availability of high spatial resolution surface temperature maps (collected from a drone survey) allowed verification that the occurrences of mineral alteration coincide with the locations of the main active thermal areas. This study provides an example of what should be possible using the complementary information that can be retrieved from several planned future satellite sensor systems providing VSWIR and TIR data, including PRISMA-2, SBG-VSWIR and CHIME for the VSWIR and TRISHNA, SBG-TIR, LSTM and LandsatNext for the TIR.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: Ladakh ophiolites: Martian analogue site mapped for degree of serpentinization using PRISMA hyperspectral satellite imagery and lab spectroscopy

Authors: Dr. Mamta Chauhan, Dr. Aakansha S. Borkar, Dr. Giorgio Antonino Licciardi, Patrizia Sacco, Dr. Deodato Tapete
Affiliations: Agenzia Spaziale Italiana (ASI), Indian Institute of Remote Sensing (IIRS), Indian Space Research Organisation (ISRO)
1. OPHIOLITES AS MARTIAN ANALOGUES The Ladakh terrain, with its unique geology, has witnessed a sequence of magmatic activities, the oldest among them being the ophiolites. This unique variety of igneous rock assemblage is generated by partial melting in the Earth’s mantle and exposed during obduction of oceanic lithosphere onto the continental crust. Their composition ranges from ultramafic to mafic rocks that are rich in ferromagnesian minerals such as olivine, pyroxene and plagioclase, and oxides such as spinel and chromite. They occur in association with sedimentary rocks, represented by deep sea sediments present towards their top. The crusts of differentiated planetary bodies, including Mercury, Mars, the Moon, Venus and Vesta, are composed of mafic (e.g., basaltic) and ultramafic rocks. Ophiolite terranes, whose composition represents the lower oceanic crust and upper mantle section, therefore serve as the most accessible terrains for detailed characterization of these systems. These mafic and ultramafic rocks react with water to form serpentine, along with heat and hydrogen gas, both being potential energy sources for chemosynthetic microorganisms [1, 2]. Serpentinization therefore creates conditions amenable to both abiogenic and microbial synthesis of organic compounds. Serpentine-bearing outcrops have been detected on Mars using remote sensing [3]. 2. LADAKH OPHIOLITES AND STUDY AREAS The ophiolites of Ladakh form part of the Indus-Tsangpo Suture Zone (ITSZ) of the Himalaya, the main tectonic unit of the northern Himalayas that separates the Indian plate from the Eurasian plate. The Ladakh ophiolites were emplaced during the late Cretaceous period and occur as remnants of the Neo-Tethyan ocean that closed during the Early Cretaceous period as a result of obduction [4, 5]. Among these exposed dismembered tectonic slices, the present research has selected the Nidar and Spongtang Ophiolite Complexes, belonging to the south Ladakh group of ophiolites. 
The Nidar Ophiolite Complex (NOC) (~32-33°N and 78°-79°E) lies towards SE Ladakh in the form of an eye-shaped exposure within the Nidar valley, whereas the Spongtang Ophiolite Complex (SOC) (~33-34°N and 76°39’-76°54’E) lies in the district of Leh, ~30 km south of the Indus-Tsangpo suture zone, near Photoskar [4, 6]. These complexes display the complete ophiolite sequence from ultramafic-mafic to volcano-sedimentary rocks [6, 7]. 3. DATA AND METHODOLOGY The present study has primarily utilized hyperspectral data from the PRISMA (PRecursore IperSpettrale della Missione Applicativa) mission for mineral mapping. In addition, multispectral data from LANDSAT-8 OLI and ASTER-TIR, as well as spectroradiometer measurements, have also been used for characterizing lithology. PRISMA is a pushbroom imaging spectrometer launched in 2019 by the Italian Space Agency (ASI). It provides hyperspectral images over a swath of 30 x 30 km per frame, in the 400-2500 nm continuous spectral range, at 30 m spatial resolution in 240 spectral channels with 10 nm spectral resolution, along with a corresponding panchromatic image at 5 m spatial resolution [8, 9]. This study has utilized the atmospherically corrected L2D (Geolocated Surface Reflectance data cube) product. The available cloud-free scenes were downloaded from ASI’s official PRISMA portal. The processing of PRISMA data included bad band removal, size reduction (subsetting) of the data (both spatially and spectrally), noise and dimensionality reduction, extraction of pure pixels, collection of endmembers and matching with a standard spectral library. The software used comprised ENVI® 5.3, ArcGIS® 10.4 and Python programming. Additionally, the spectra obtained from the hyperspectral images were compared with field spectra derived from rock samples collected in the field for calibration and validation. 4. RESULTS Multispectral analysis of band composites and spectral indices, and the generated (mafic, quartz, carbonate, ultramafic) maps for the NOC and SOC, allowed highlighting and discriminating the prominent lithology dominated by mafic, ultramafic and carbonate rocks, serpentines and other hydrothermally altered phases (clay minerals). Spectral analysis of the PRISMA data enabled identification and mapping of the dominant primary and altered minerals based on their unique spectral signatures in the visible and near-infrared region. Furthermore, the application of a machine learning-based classification approach helped in characterizing the various lithological units present, based on their spectral variability. The degree of serpentinization has been assessed using hyperspectral data, wherein the absorption depths of olivine/pyroxene (900-1100 nm), OH (1400 and 1900 nm) and serpentine minerals (2300 nm) were collected and, together with the results from field-based spectroscopic data, used to compute the percentage degree of serpentinization from the relative proportion of primary and altered phases. 5. DISCUSSION AND CONCLUSIONS The study of serpentinization through specific spectral wavelengths relies on the identification of distinctive absorption features that correspond to the mineralogical changes occurring during the process. The degree of serpentinization refers to the extent to which an ultramafic rock has undergone serpentinization and is positively correlated with the depth of the absorption features at 1000 and 2300 nm. As serpentinization increases, there is a corresponding rise in overall reflectivity and in the depth of these absorption bands. The study of the Ladakh ophiolites and associated serpentinization is useful for understanding the geological and chemical processes that occur on the Martian surface. 
The possibility of a linkage between biological processes and the precipitated secondary phases makes this terrain remarkable for investigating the habitability conditions of Mars. Finally, the study underscores the added value brought by hyperspectral satellite data collected with PRISMA, enhancing the discrimination capabilities. The achieved results show what could be performed in future over wider remote regions as soon as new global hyperspectral missions such as CHIME become operational. Acknowledgement: This research was part of the ASI-ISRO joint Earth Observation Working Group (EOWG) [10, 11], project HYP_4 MARTIAN ANALOGUES. The study has used data from ORIGINAL PRISMA Products© Italian Space Agency (ASI) delivered under an ASI License to the User. 6. REFERENCES [1] Kelley, D.S. et al. “A serpentinite-hosted ecosystem: the Lost City hydrothermal field”. Science, 307, 1428-1434, 2005. http://dx.doi.org/10.1126/science.1102556. [2] Sleep, N.H., Meibom, A., et al. “H2-rich fluids from serpentinization: geochemical and biotic implications”. Proc. Natl. Acad. Sci. USA, 101, 12818-12823, 2004. [3] Ehlmann, B.L., et al. “Identification of hydrated silicate minerals on Mars using MRO-CRISM: Geologic context near Nili Fossae and implications for aqueous alteration”. J. Geophys. Res., 114, E00D08, 2009. doi:10.1029/2009JE003339. [4] Chauhan, M., Sur, K., Chauhan, P., et al. “Lithological mapping of Nidar ophiolite complex, Ladakh using high-resolution data.” Adv. Space Res., 73 (08), 4091-4105, 2024. [5] Gansser, A. “Geology of the Himalayas”. Wiley Interscience, London, 289 p., 1964. [6] Thakur, V.C. and Misra, D.K. “Tectonic framework of Indus and Shyok Suture Zones in eastern Ladakh, Northwest Himalaya”. Tectonophysics, 101, 207-220, 1984. [7] Catlos, Elizabeth J., et al. “Nature, age and emplacement of the Spongtang ophiolite, Ladakh, NW India.” J. Geol. Soc., 176.2, 284-305, 2019. [8] Loizzo, R., Daraio, M., et al. “Prisma Mission Status and Perspective.” IEEE International Geoscience and Remote Sensing Symposium, 4503-4506, 2019. [9] Caporusso et al., “The Hyperspectral Prisma Mission in Operations,” 2020 IEEE International Geoscience and Remote Sensing Symposium, pp. 3282-3285, 2020. [10] Tapete, D., Kumar Jaiswal, R., et al. “Scientific research and applications development based on exploitation of PRISMA data in the framework of ASI – ISRO Earth Observation Working Group Hyperspectral Activity.” IEEE International Geoscience and Remote Sensing Symposium, 1648-1651, 2023. 979-8-3503-2010-7/23. [11] Tapete, D., Kumar Jaiswal, R., et al. “ASI – ISRO cooperation in Earth Observation: status, achievements and new avenues”. 75th International Astronautical Congress (IAC), Milan, Italy, 14-18 October 2024, IAC-24-B1.1.4, 12 pp.
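The band-depth computation underlying the serpentinization assessment can be sketched as follows (a minimal, illustrative reconstruction, not the authors' pipeline): the continuum is taken as a straight line between two shoulder wavelengths, and the band depth is one minus the ratio of measured to continuum reflectance at the absorption centre, here taken as the 2300 nm serpentine feature.

```python
import numpy as np

def band_depth(wl_nm, refl, left_nm, center_nm, right_nm):
    """Continuum-removed absorption band depth: fit a linear continuum
    between the two shoulder wavelengths and evaluate 1 - R/Rc at the
    band centre."""
    r = np.interp([left_nm, center_nm, right_nm], wl_nm, refl)
    frac = (center_nm - left_nm) / (right_nm - left_nm)
    continuum = r[0] + frac * (r[2] - r[0])  # continuum reflectance at centre
    return 1.0 - r[1] / continuum

# Toy spectrum with an absorption feature centred at 2300 nm
wl = np.array([2150.0, 2300.0, 2400.0])
refl = np.array([0.50, 0.30, 0.52])
depth = band_depth(wl, refl, 2150.0, 2300.0, 2400.0)
```

Deeper 2300 nm band depths relative to the olivine/pyroxene features would indicate a higher proportion of altered phases, i.e. a higher degree of serpentinization.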
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: Detailed Geological Mapping of the State of Qatar at Various Mapping Scales by Combining Multi-Spectral Sentinel-2 Imagery with Very High Spatial Resolution Pleiades Imagery

Authors: Charalampos Kontoes, Martha Kokkalidou, Katerina-Argyri Paroni, Nikolaos Stasinos, Stavroula Alatza, Constantinos Loupasakis, Katerina Kavoura, Dimitris Vallianatos, Dorothea Aifantopoulou, Alper Gürbüz, Ökmen Sümer, Ismat Sabri, Yassir Elhassan, Ali Feraish Al-Salem, Ali Anmashhadi, Elalim Abdelbaqi Ahmed, Umi Salmah Abdul Samad
Affiliations: National Observatory of Athens, Institute for Astronomy and Astrophysics, Space Applications and Remote Sensing, Center BEYOND for EO Research and Satellite Remote Sensing, National Technical University of Athens, School of Mining and Metallurgical Engineering, Laboratory of Engineering Geology and Hydrogeology, EDGE in Earth Observation Sciences, Ankara University, Faculty of Engineering, Department of Geological Engineering, Dokuz Eylül University, Faculty of Engineering, Department of Geological Engineering, STS Survey Technologies, Ministry of Municipality
This paper presents a comprehensive approach to surficial geological mapping of the State of Qatar and selected areas at multiple scales, leveraging High-Resolution (HR) Sentinel-2 and Very High-Resolution (VHR) Pleiades satellite imagery through the integration of remote sensing techniques and geospatial technologies, underpinned by ground-truth data from extensive field surveys. Sentinel-2 multispectral imagery (10-meter spatial resolution) enabled broad-scale geological mapping at a 1:100,000 scale, while Pleiades RGB-NIR imagery (0.5-meter spatial resolution) supported detailed mapping in critical areas at scales of 1:50,000 and 1:20,000. Due to the size of individual VHR Pleiades images, mosaicking of the imagery was performed using GDAL to ensure seamless processing. Resampling the imagery to a larger pixel size (2 meters) was necessary for the analysis, while the spectral resolution was preserved in its original bit depth, ensuring no loss of radiometric accuracy. The high spatial resolution of the acquired imagery enabled the identification of five primary geological units at the surface, previously documented in the literature: the Rus, Dammam, Dam and Hofuf formations, along with Quaternary deposits. The higher spatial and spectral resolution of the acquired imagery made possible, for the first time at such precision, an enhanced delineation of the boundaries of the surficial exposures of these geological formations and sediments. This multiscale mapping approach was informed by ground-truth data collected across diverse field locations, which helped capture the range of spectral signatures and validation sample data within each formation. All data were imported into a geospatial database, enhancing data integration and facilitating sharing with the end user. This enables coherent integration with GIS software and tools, improving accessibility and interoperability for decision-makers and researchers. 
The classification process initially employed a combination of unsupervised clustering and Principal Component Analysis (PCA) to highlight spectral distinctions among formations, providing a first assessment of surface exposures. To enhance the spectral contrast between classes and enable precise formation identification, spectral indices, such as the Geology Index and mineral indices, were calculated, forming an extended feature space. Fieldwork conducted by geological experts facilitated the correlation of remote sensing data with geological structures and formations. These field data were then refined both spectrally and statistically, extracting distinct spectral signatures to support highly accurate supervised classifications. The supervised classification utilized the Maximum Likelihood Classification (MLC) algorithm, selected for its proven ability to effectively differentiate complex geological features. Following classification, GIS-based filtering techniques were applied to refine the results, removing noise and correcting minor misclassifications. This refinement led to high-accuracy final geological maps, validated by geological experts through blind photointerpretation and intensive, targeted field visits. Accuracy assessments were conducted by means of confusion matrices to quantify key metrics such as overall accuracy, producer’s accuracy, user’s accuracy and the kappa coefficient, demonstrating the reliability of the classification results at each mapping scale, with accuracies exceeding 90%. In the areas selected for detailed mapping at the scales of 1:50,000 and 1:20,000, VHR imagery from the Pleiades mission was utilized. The increased spatial resolution offered by Pleiades enabled even more precise boundary delineation, despite some limitations in band availability for spectral index calculation. 
The geological maps produced at various scales will be securely hosted within NOA/BEYOND’s ArcGIS Enterprise installed on our premises and can be retrieved through the ArcGIS Enterprise API. This API provides a robust framework for securely accessing, managing, and analysing spatial data through RESTful services. It supports CRUD operations on hosted layers, advanced spatial analysis, and map visualization services with custom symbology. With secure token-based authentication and OAuth 2.0, as well as advanced querying capabilities, the ArcGIS Enterprise API ensures efficient and scalable geospatial solutions tailored to the needs of managing and accessing geological maps. The overall methodology highlights the effectiveness of combining HR Sentinel-2 images (covering the whole of Qatar in only 4 scenes) for classification, and then using the result to guide, equally successfully, the classification of the numerous VHR Pleiades images (> 750 scenes), in order to deliver a detailed geological map over the State of Qatar at various scales. The approach provides a strong foundation for further stratigraphic, sedimentological, structural, and mineral exploration studies, facilitating resource management across Qatar and establishing a model for comprehensive geological assessment in similarly complex and arid regions.
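Maximum Likelihood Classification, as used above, can be sketched in a few lines (an illustrative sketch, not the project's implementation; the class values and band counts are invented): each class is modelled as a multivariate Gaussian estimated from training pixels, and each pixel is assigned to the class with the highest log-likelihood.

```python
import numpy as np

def mlc_fit(X, y):
    """Estimate per-class mean and covariance from training pixels
    X (n_samples, n_bands) with integer labels y."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def mlc_predict(X, params):
    """Assign each pixel to the class maximising the Gaussian
    log-likelihood (equal class priors assumed)."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, cov = params[c]
        diff = X - mu
        inv = np.linalg.inv(cov)
        # log-likelihood up to a constant: -0.5*(log|cov| + Mahalanobis^2)
        ll = -0.5 * (np.log(np.linalg.det(cov))
                     + np.einsum("ij,jk,ik->i", diff, inv, diff))
        scores.append(ll)
    return np.array(classes)[np.argmax(np.stack(scores), axis=0)]

# Two synthetic "formations" in a 4-band feature space
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.2, 0.02, (50, 4)),
                     rng.normal(0.6, 0.02, (50, 4))])
y_train = np.repeat([0, 1], 50)
params = mlc_fit(X_train, y_train)
pred = mlc_predict(np.array([[0.21, 0.20, 0.19, 0.20],
                             [0.61, 0.60, 0.59, 0.60]]), params)
```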
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: Improving mine lifecycle monitoring using advanced InSAR phase closure approaches

Authors: Fei Liu, Dr Rachel Holley
Affiliations: Viridien
Interferometric Synthetic Aperture Radar (InSAR) is a remote sensing technique that measures phase differences between two or more Synthetic Aperture Radar (SAR) images to retrieve the surface displacements between the acquisition times with millimetre-level precision. Given its reliability and cost-effectiveness, InSAR has been increasingly used throughout the mine lifecycle for monitoring ground deformation. However, due to the decorrelation noise often caused by surface changes across parts of a mine site over time, InSAR phase measurements may become unreliable in some areas. The noisy pixels in the interferograms have to be identified and masked to prevent the propagation of errors into surrounding pixels during further processing (e.g., filtering and phase unwrapping). This results in loss of coverage, which remains one of the main limitations of current InSAR mine site monitoring. Here, we propose an innovative approach to effectively select high-quality pixels and improve the coverage of InSAR results, based on the properties of the phase closure residual. Phase closure is a property of a closed loop formed by three interferograms (e.g., AB, BC, and CA) between three SAR images (A, B, and C). In multilooked images the sum of phases around the loop (the closure) is non-zero due to the decorrelation noise, and its residual (the non-zero part) is related to the characteristics of the surface change. By analysing the temporal evolution of phase closure residuals, we can evaluate the quality of each pixel and track its changes over time. Based on this pixel quality evaluation, we can improve the coverage of InSAR measurements and reduce the resulting measurement uncertainties. We apply this new phase closure approach across a range of mine sites, and find significant coverage and measurement quality improvements compared to the conventional InSAR results. 
This new approach does not involve any assumptions about the deformation regime, or heavy computation: our results show that it works for different datasets (X-, C-, and L-band SAR data) and is efficient even for large-volume datasets. In summary, this advanced phase closure approach extends the versatile capabilities of InSAR in the mining sector, emphasizing its role in proactive monitoring and risk management.
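The closure residual of an interferogram triplet can be computed directly from the complex interferograms (a minimal illustrative sketch, not Viridien's processing chain): multiplying AB, BC and CA and taking the phase wraps the sum φ_AB + φ_BC + φ_CA into (-π, π].

```python
import numpy as np

def closure_phase(ifg_ab, ifg_bc, ifg_ca):
    """Closure residual of a triplet of complex interferograms; zero
    for consistent single-look phases, non-zero in multilooked data
    when averaging mixes decorrelating scatterers."""
    return np.angle(ifg_ab * ifg_bc * ifg_ca)

# Consistent single-look phases: the closure is identically zero,
# so any residual in real multilooked data flags surface change.
phi_a, phi_b, phi_c = 0.3, 1.1, -0.7
ab = np.exp(1j * (phi_a - phi_b))
bc = np.exp(1j * (phi_b - phi_c))
ca = np.exp(1j * (phi_c - phi_a))
residual = closure_phase(ab, bc, ca)
```

In practice the inputs would be multilooked complex interferogram arrays, and the residual map would be tracked through time to score pixel quality.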
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: Assessment of machine learning methods for mineral mapping using different hyperspectral satellite systems

Authors: Saeid Asadzadeh, Anna Buczyńska, Raymond Kokaly, Sabine
Affiliations: German Research Centre for Geosciences GFZ
With the increasing availability of spaceborne hyperspectral imaging data, there is an immediate need for sensor-agnostic algorithms to map the diversity, composition, and quantity of minerals on the exposed Earth's surface, which comprises approximately one-third of the land masses. The USGS Material Identification and Characterization Algorithm (MICA) is an established method for mineral mapping: an expert system designed to identify minerals in airborne and spaceborne hyperspectral imaging data using a custom spectral library. With recent developments in machine learning and deep learning techniques, an important question is how effectively these algorithms can be trained/tailored for automated mineral classification/mapping on a global scale. To address this question, we acquired and processed hyperspectral imaging datasets from the Cuprite test site using machine learning methods, comparing the results with those yielded by the MICA algorithm applied to the AVIRIS-Classic airborne system. The sensors investigated include the Environmental Mapping and Analysis Program (EnMAP), Earth Surface Mineral Dust Source Investigation (EMIT), PRecursore IperSpettrale della Missione Applicativa (PRISMA), Hyperspectral Imager Suite (HISUI), and the Advanced Hyperspectral Imager aboard China's GaoFen-5 satellite. The employed algorithms encompass Random Forest (RF), Extra Trees (ET), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and the U-Net deep learning method. Training and testing data were obtained from MICA-generated maps applied to ground-adjusted AVIRIS-C data, covering six distinct mineral/mixed classes in the VNIR and fourteen in the SWIR range. The performance of the algorithms was evaluated using overall accuracy and kappa coefficient metrics. The comparison of results indicated that the SVM algorithm with a polynomial kernel gives results closest to the MICA products for all sensors in both the VNIR and SWIR ranges. 
The overall accuracy and kappa coefficient remained above 90%, regardless of sensor type and noise level. The best performance was observed for minerals with distinctive absorption features in the SWIR, although mixed classes such as calcite + montmorillonite and pyrophyllite + kaolinite showed the lowest classification accuracy. By applying the same algorithms to standard and ground-adjusted reflectance data, it was observed that the SVM-polynomial (and the majority of other techniques) is insensitive to the quality of atmospheric correction, meaning that good results can be obtained using standard reflectance products. However, for sensors with accurate atmospheric correction and high SNR, such as EnMAP, it yields higher classification accuracy. By decreasing the number of pixels used for training, it was observed that, in contrast to the deep learning algorithm, the SVM-polynomial maintained its good performance even with a fraction (25%) of the original training data. This study indicates that sensor-agnostic algorithms such as SVM can be effectively used for mapping mineralogy from spaceborne hyperspectral data. We now plan to evaluate the performance of the same algorithms using datasets acquired over Cuprite at different times, and from different areas, to evaluate the stability of the models across space and time. We also aim to include more deep learning methods to identify the most promising algorithm for mineral mapping. The ideal algorithm should not rely on training data from the scene, should have the capability for pre-training with a limited number of inputs, and should be robust to noise and fast enough to enable global-scale mineral mapping.
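The best-performing configuration reported above, an SVM with a polynomial kernel, can be sketched with scikit-learn (an illustrative sketch on toy spectra, not the study's code; the class names, band count and reflectance values are invented):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Toy "spectra": two mineral-like classes separated by a reflectance
# offset across 20 SWIR bands (invented values, for illustration only)
n_bands = 20
class_a = rng.normal(0.45, 0.02, (60, n_bands))  # e.g. kaolinite-like
class_b = rng.normal(0.30, 0.02, (60, n_bands))  # e.g. calcite-like
X = np.vstack([class_a, class_b])
y = np.repeat([0, 1], 60)

# Polynomial-kernel SVM, the configuration found to best reproduce
# the MICA reference maps in the study
clf = SVC(kernel="poly", degree=3).fit(X, y)
accuracy = clf.score(X, y)
```

In a real workflow, training labels would come from the MICA-generated AVIRIS-C maps and the classifier would be applied per-pixel to each satellite sensor's reflectance cube.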
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: A Geological System Analysis for Better Understanding of Mineral-rich Sedimentary Basins by Using Multistage Data and Space-based Imaging Systems: A Case Study in Türkiye

Authors: Tamer Özalp
Affiliations: Researchturk Space Co.
Tertiary basins characterized by lacustrine, fluvial, and volcanic facies are prevalent in the interior and western regions of Anatolia. These basins are significant not only for their contributions to the understanding of the tectonic evolution and depositional history of the area, but also for the economic mineral deposits, such as bentonite and zeolite, that they contain. However, the absence of traditional exploration data hinders the enhancement of the basins' economic potential and the identification of tectono-sedimentary events. The study focuses on the geological system analysis of the Kalecik-Hasayaz Basin in Ankara, Türkiye, utilizing space-based imaging systems (optical and radar) to enhance the understanding of mineral-rich sedimentary basin evolution. A comprehensive geological study was designed to assess various parameters such as the stratigraphy, lithology, structure and topography of the area. The exploration for economic mineral deposits relies on specific parameters and criteria that either directly indicate or suggest surface conditions favorable to the formation of such deposits. The research emphasized the use of imaging radar for geological mapping, successfully distinguishing key rock units and formations through spectral (optical) and physical (radar) signatures. Optical data were processed to facilitate the detection and identification of rock types and structural features, as well as to correlate their spatial distributions with independent data regarding their structural configurations. Techniques such as band ratios were employed to create color-ratio composite images for better lithofacies discrimination. 
Radar imagery proved to be particularly advantageous for two primary purposes: first, it aided in delineating the unique lithological characteristics and boundaries required for the surface mapping and establishment of geological formations; second, it enabled the identification of structural signatures and lithological exposures through the spatial signatures derived from Synthetic Aperture Radar (SAR) imagery. The three-dimensional sensing capabilities of radar technology enhanced the analysis of surface morphology, thereby improving the spatial mapping of radar-derived information pertaining to the study area. Surface backscattering profiles, roughness changes and 3-D SAR models were used to better determine the rock unit maps and tectonic features with fault indications. The underlying materials and structures can be effectively characterized using C-band SAR, provided that the appropriate physical conditions are met. The radar imagery has proven to be an effective tool for delineating geological characteristics in the region under investigation. The integration of image data obtained across various wavelength bands significantly augmented the geological information available for the study area. The SAR and optical data results demonstrated the links between surface developments and remote sensing in the visible, infrared, and microwave spectra. The results indicated that a multistage approach is the most effective method for comprehending the geological evolution of the area, with results aligning well with field observations and existing geological literature. By employing advanced imaging techniques, the research aims to provide insights into the geological processes and historical developments of the basin, contributing to a more comprehensive understanding of sedimentary environments. The findings from this case study could have implications for broader geological research and applications in the region.
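The band-ratio colour composites mentioned above can be sketched as follows (illustrative only; the specific numerator/denominator bands vary by sensor and target lithology, and the defaults below are invented placeholders, not the ratios used in the study):

```python
import numpy as np

def ratio_composite(bands, r=(0, 1), g=(2, 3), b=(4, 5)):
    """Build an RGB colour-ratio composite from a band stack
    (n_bands, rows, cols): each display channel is one band ratio,
    percentile-stretched to the 0-1 range."""
    eps = 1e-6
    channels = []
    for num, den in (r, g, b):
        ratio = bands[num] / (bands[den] + eps)
        lo, hi = np.percentile(ratio, [2, 98])
        channels.append(np.clip((ratio - lo) / (hi - lo + eps), 0, 1))
    return np.stack(channels, axis=-1)  # (rows, cols, 3)

# Synthetic 6-band scene, 100 x 100 pixels
rng = np.random.default_rng(1)
scene = rng.uniform(0.1, 0.9, (6, 100, 100))
rgb = ratio_composite(scene)
```

Ratioing suppresses topographic shading common to both bands, which is why ratio composites often discriminate lithofacies better than raw band composites.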
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Session: C.05.03 ALTIUS: ESA's Ozone Mission

In the 1970s, scientists discovered that the ozone layer was being depleted, particularly above the South Pole, resulting in what is known as the ozone hole. To address the destruction of the ozone layer, the international community established the Montreal Protocol on ozone-depleting substances. Since then, the global consumption of ozone-depleting substances has been reduced by about 98%, and the ozone layer is showing signs of recovering. However, it is not expected to fully recover before the second half of this century. It is imperative that concentrations of stratospheric ozone, and how they vary with the seasons, are monitored continually, not only to assess the recovery process, but also for atmospheric modelling and for practical applications including weather forecasting.
The Atmospheric Limb Tracker for Investigation of the Upcoming Stratosphere (ALTIUS) mission fills a very important gap in the continuation of limb measurements for atmospheric sciences. The ALTIUS mission will provide 3-hour latency near-real time ozone profiles for assimilation in Numerical Weather Prediction systems, and consolidated ozone profiles for ozone scientific analysis. Other trace gases and aerosols extinction profiles will also be provided.
The focus of this session is the mission and its status, together with the implemented technical and algorithmic solutions to image the Earth limb and retrieve the target chemical concentration, as well as the ongoing preparations for the calibration/validation of the mission products.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: ALTIUS O3, NO2 and aerosol extinction retrieval algorithms and expected in-flight performance

Authors: Antonin Berthelot, Noel Baker, Didier Fussen, Pierre Gramme, Nina Mateshvili, Didier Pieroux, Kristof Rose, Sotiris Sotiriadis, Emmanuel Dekemper
Affiliations: BIRA-IASB
ALTIUS (Atmospheric Limb Tracker for the Investigation of the Upcoming Stratosphere) is an atmospheric limb mission being implemented in ESA's Earth Watch programme. The mission is in its implementation phase, with both the space and ground segments having reached the critical design review (CDR). The launch is foreseen on a Vega-C rocket in 2026-2027, in a dual configuration with another ESA mission, FLEX. The primary objective of the mission is to provide near-real-time and consolidated high-resolution stratospheric ozone concentration profiles. Secondary objectives include stratospheric aerosols, H₂O, NO₂, NO₃, temperature, OClO, BrO, and mesospheric ozone, as well as the detection of polar mesospheric and stratospheric clouds (PMCs and PSCs). The instrument consists of three spectral imagers: UV (250-355 nm), VIS (440-675 nm) and NIR (600-1020 nm) channels. Each channel is able to take a snapshot of the scene independently of the other two channels, at a desired wavelength and with the requested acquisition time. It comes with excellent vertical sampling (<1km at the tangent point), and allows straightforward in-flight pointing calibration, usually a key driver of the error budget of limb instruments. The agility of ALTIUS allows for series of high vertical resolution observations at wavelengths carefully chosen to retrieve the vertical profiles of species of interest. ALTIUS is a single payload mission which gives numerous options for the observation scenarios. ALTIUS will perform measurements in different geometries to maximize global coverage: observing limb-scattered solar light in the dayside, solar occultations at the terminator, and stellar, lunar, and planetary occultations in the nightside. The baseline mission plan combines 100 limb-scatter observations on the day side, 2 solar occultations, and 5 stellar/planetary/lunar occultations in the night side (typical numbers). 
We will present the mission, focusing on its relevance for the stratospheric ozone community. Limb-scatter and occultation retrieval algorithms will be presented, and the expected performance of the mission will be discussed based on end-to-end simulations. Additionally, the current status of the retrieval algorithms of the secondary objective species will be discussed, with a focus on NO2 and aerosols. The data from previous UV-VIS-NIR limb instruments will also be processed by the ALTIUS L2-processor to assess the expected in-flight performance of ALTIUS. A comparison with the results from GOMOS/ENVISAT, OMPS-LP/JPSS-2, and SAGE-III/ISS will be presented.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: ALTIUS Mission: Project Status

Authors: Daniel Navarro Reyes, Michael François, Luciana Montrone, Stefano Santandrea, Hilke Oetje, Didier Fussen, Emmanuel Dekemper
Affiliations: ESA/ESTEC, BISA
ALTIUS, ESA’s ozone mission, is an atmospheric limb sounder for monitoring the distribution and evolution of stratospheric ozone number density profiles in support of operational services and long-term trend monitoring. The mission will provide detailed stratospheric ozone profile information at high vertical resolution, which will add valuable information to the total column ozone used in data assimilation systems by operational centres. Secondary products, i.e. vertical profiles of NO2, BrO, OClO, NO3, H2O, mesospheric O3, aerosol extinction, and temperature, will also be provided. Currently, only a few operational atmospheric satellite missions provide limb measurements, and several of them might terminate within the next few years. ALTIUS will therefore fill an upcoming data gap. The ALTIUS data will also be of high importance for the atmospheric chemistry modelling community, for use as input to climate models and for their validation. ALTIUS data will extend the existing GCOS (Global Climate Observing System) ozone profile ECV (Essential Climate Variable) as produced with the ESA CCI (Climate Change Initiative) ozone project. The ALTIUS mission is under implementation by ESA within the Earth Watch Programme, with participation of Belgium, Canada, Luxembourg, and Romania. The launch is expected in April 2027. The current status and planning with respect to flight model manufacturing, ground segment qualification and deployment, launch, commissioning, validation, and routine operations will be presented. An Announcement of Opportunity for calibration/validation will be issued in the second half of 2025; details will be communicated in oral presentations/posters during the Living Planet Symposium 2025.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: ALTIUS: The Next Generation of Atmospheric Limb Sounders

Authors: Emmanuel Dekemper, Noel Baker, Antonin Berthelot, Didier Fussen, Pierre Gramme, Nina Mateshvili, Didier Pieroux, Kristof Rose, Sotiris Sotiriadis, Adam Bourassa, Doug Degenstein, Daniel Zawada, Michael François, Luciana Montrone, Daniel Navarro-Reyes, Hilke Oetjen
Affiliations: BIRA-IASB, University of Saskatchewan, ESA-ESTEC
The atmospheric limb sounding community is bracing itself for an era of limited availability of new measurements of atmospheric trace gas concentration profiles with high vertical resolution. Indeed, several limb sounding satellite instruments are about to put an end to their exceptionally long records of observations. Soon, OMPS-LP and SAGE-III will remain as the main providers of O3, aerosol, and temperature profiles (both), and NO2 and H2O profiles (SAGE-III only). Still, the stratosphere keeps changing with, for instance, different recovery rates of the O3 layer between the lower and upper stratosphere, or its cooling and moistening caused by the Hunga Tonga eruption. ALTIUS is ESA's upcoming Earth atmospheric limb mission. The primary objective of the mission is to provide near-real-time and consolidated stratospheric ozone profiles. Secondary objectives include stratospheric aerosols, H2O, NO2, NO3, temperature, OClO, BrO, and mesospheric ozone. The mission is in its implementation phase, with both the space and ground segments having reached the critical design review (CDR). The launch is foreseen on a Vega-C rocket in 2027. The mission has some unique features intended to better tackle the common problems faced by previous UV-VIS-NIR limb sounders. First, it is a single payload mission on an agile platform, which therefore gives many options for the observation scenarios. The baseline mission plan combines 100 limb-scatter observations on the day side, 2 solar occultations, and 5 stellar/planetary/lunar occultations on the night side (typical numbers). Second, the instrument is a three-channel spectral imager tuneable from 250 nm to 1020 nm. It comes with excellent vertical sampling (<1 km at the tangent point) and allows straightforward in-flight pointing calibration, usually a key driver of the error budget of limb instruments. We will present the mission, focusing on its relevance for the stratospheric ozone community.
Synergies with the existing and future limb sounders will be discussed, as for example on the complementarity of the spatial coverages. A crucial point for the stratospheric community is the extension of the time series that started more than 30 years ago with SAGE-II. One key application of the consolidated scientific products of the mission is to contribute to this uninterrupted record.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: ALTIUS Geophysical Validation Plan

Authors: Jean-Christopher Lambert, Dr Steven Compernolle, Dr Daan Hubert, Dr Tijl Verhoelst, Dr Quentin Errera, Dr Arno Keppens, Dr Antje Inness, Dr Natalya Kramarova, Prof Kaley Walker, Prof Kimberly Strong, Dr Robert Koopman, Dr Daniel Navarro-Reyes, Dr Hilke Oetjen, Dr Claus Zehner
Affiliations: Royal Belgian Institute for Space Aeronomy (BIRA-IASB), European Centre for Medium-Range Weather Forecasts (ECMWF), National Aeronautics and Space Administration Goddard Space Flight Center (NASA/GSFC), Department of Physics, University of Toronto, European Space Agency (ESA-ESTEC), European Space Agency (ESA-ESRIN)
Atmospheric Limb Tracker for Investigation of the Upcoming Stratosphere (ALTIUS) is a gap filler mission responding to the pressing need to ensure, after the upcoming termination of historical limb missions, the global, long-term monitoring of stratospheric ozone, other trace gases and aerosols at the vertical resolution of 1 km. Implemented as an ESA Earth Watch mission for the 2026-2030 period, ALTIUS will contribute near-real-time data to the Copernicus Atmosphere Monitoring Service (CAMS) and the Belgian Assimilation System of Chemical ObsErvations (BASCOE). ALTIUS also aims at providing consolidated ozone data records to the Copernicus Climate Change Service (C3S) and to international assessments sponsored by the World Meteorological Organization (WMO) and the World Climate Research Programme (WCRP), as well as new research data needed for a better understanding of polar processes and the upper atmosphere. This contribution describes the plan envisioned for the geophysical validation of the ALTIUS profile data for stratospheric O₃, NO₂, H₂O, BrO, OClO, NO₃, aerosols, polar stratospheric and mesospheric clouds, mesospheric O₃, and temperature. After an overview of the mission and user requirements against which ALTIUS data will be validated, the overall validation approach will be described, which will combine: (i) comparisons to Fiducial Reference Measurements collected from ground-based monitoring networks (CANDAC, NDACC, SHADOZ…) and during dedicated validation campaigns, (ii) cross-validation with other profiling satellites extending the ground-based validation to the global domain, and (iii) quality assessments using modelling support from the CAMS and BASCOE data assimilation systems and from the OSSSMOSE metrology simulator. 
A dedicated validation service will provide (i) baseline monitoring of ALTIUS ozone data quality performed by an operational validation system, and (ii) in-depth validation to support the evolution of the data products and associated retrieval algorithms. The operational validation element for ozone data products will be complemented by validation activities to be proposed by the scientific community in response to the upcoming ESA Announcement of Opportunity (AO) for the calibration and validation (Cal/Val) of ALTIUS. The AO Call encompasses the validation of other data products than ozone and aims to open the ALTIUS Cal/Val to a wider range of external data and activities.
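Profile validation of the kind outlined above (comparisons against fiducial reference measurements and other satellites) is commonly summarised as per-level relative-difference statistics. A minimal sketch, with synthetic profiles standing in for real collocations and all numbers purely illustrative:

```python
import numpy as np

def relative_difference_stats(sat, ref):
    """Per-level median and interquartile range (in %) of (sat - ref) / ref,
    a common summary in satellite profile validation studies."""
    rd = 100.0 * (sat - ref) / ref
    med = np.median(rd, axis=0)
    iqr = np.percentile(rd, 75, axis=0) - np.percentile(rd, 25, axis=0)
    return med, iqr

# Toy ensemble: 200 collocated profiles on 30 levels, 2% bias, 5% random noise
rng = np.random.default_rng(2)
ref = np.full((200, 30), 4e12)                       # hypothetical reference values
sat = ref * (1.02 + 0.05 * rng.normal(size=ref.shape))
med, iqr = relative_difference_stats(sat, ref)
print(med.round(1)[:3])
```

The median/IQR pair is preferred over mean/standard deviation because collocation mismatches produce heavy-tailed difference distributions.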
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: Preparations at ECMWF for the use of ALTIUS data within CAMS

Authors: Christopher Kelly, Roberto Ribas, Antje Inness, Richard Engelen, Johannes Flemming, Martin Suttie
Affiliations: ECMWF
The Copernicus Atmosphere Monitoring Service (CAMS), operated by the European Centre for Medium-Range Weather Forecasts (ECMWF) on behalf of the European Commission, provides daily analyses and 5-day forecasts of atmospheric composition as well as reanalysis datasets covering past years at global and regional scales. Satellite observations of various trace gases including ozone are routinely assimilated into the CAMS system in support of these products. The continuity of high-quality total-column ozone observations for data assimilation is covered by the Sentinel missions (Sentinel-4, -5 and -5 precursor) for years to come. However, there is an impending break in the availability of high-quality ozone limb observations in the stratosphere when the long-serving Aura-Microwave Limb Sounder (MLS) mission ends. Stratospheric ozone limb observations are particularly valuable to the CAMS system, providing a crucial insight into the vertical structure of ozone in the atmosphere. The ALTIUS mission is set to provide the next generation of ozone limb observations, enabling a timely transfer from the assimilation of MLS stratospheric ozone profiles to the use of ALTIUS stratospheric ozone profiles within CAMS. In this presentation, we explain how CAMS satellite observations are used at ECMWF and provide a specific update on the technical preparations being made for the use of ALTIUS data. The data flow of CAMS satellite observations is divided into three key stages. Firstly, acquisition – the seamless transfer of observations from data provider to ECMWF ahead of the two 12-hour assimilation windows that are run each day. Secondly, ingestion & pre-processing – the transformation of observations from their native format into Binary Universal Form for the Representation of meteorological data (BUFR) format using bespoke converter-decoder software. 
Finally, analysis & forecast – the use of data assimilation to combine the observations with a state-of-the-art atmospheric composition model powered by a supercomputer. ALTIUS pre-launch test data has provided an essential resource for the mission preparation at ECMWF. Our discussion highlights the key technical challenges from this work, such as converting the profiles to partial columns, handling data from different viewing geometries and working with the synthesis data product.
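One of the preparation steps named above, converting retrieved ozone profiles to partial columns, amounts to integrating number density over layer boundaries. A minimal sketch assuming an idealised Gaussian ozone layer and simple trapezoidal integration; this is an illustration of the calculation, not the ECMWF converter-decoder software:

```python
import numpy as np

DU = 2.687e20  # molecules per m^2 in one Dobson unit

def partial_columns(z_m, n_m3, edges_m):
    """Integrate an ozone number-density profile into partial columns (DU).

    z_m: altitude grid (m), ascending; n_m3: number density (molecules/m^3);
    edges_m: layer boundaries (m). Trapezoidal integration within each layer.
    """
    cols = []
    for lo, hi in zip(edges_m[:-1], edges_m[1:]):
        zz = np.linspace(lo, hi, 50)
        nn = np.interp(zz, z_m, n_m3)
        cols.append(np.sum((nn[1:] + nn[:-1]) * np.diff(zz)) / 2.0 / DU)
    return np.array(cols)

# Idealised Gaussian ozone layer peaking near 25 km (illustrative numbers only)
z = np.linspace(0, 60e3, 601)
n = 5e18 * np.exp(-((z - 25e3) / 5e3) ** 2)
pc = partial_columns(z, n, np.array([15e3, 20e3, 25e3, 30e3, 35e3]))
print(np.round(pc, 1))
```

Partial columns in DU are the natural bridge between a limb sounder's profiles and the column-oriented observation operators used in assimilation systems.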
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: Accounting for surface reflectivity inhomogeneities in stratospheric ozone retrieval from limb scattering observations

Authors: Carlo Arosio, Alexei Rozanov, Vladimir Rozanov, Andrea Orfanoz-Cheuquelaf, John Burrows
Affiliations: Institute of Environmental Physics, University of Bremen
This study investigates and mitigates a retrieval artefact identified in tropospheric ozone column data and ozone limb profiles retrieved from OMPS-LP observations at the University of Bremen (IUP). This artefact is associated with inhomogeneities in the surface reflectivity along the satellite line of sight (LOS). At IUP, a tropospheric ozone column (TrOC) product has been produced by exploiting the limb-nadir matching technique applied to OMPS observations. In this data set, we noticed an artefact in the tropical Pacific region, i.e. higher ozone columns in the [0°N, 5°N] latitude band, where the tropospheric ozone is expected to be fairly homogeneous. This issue was traced back to the stratospheric profiles, which show a lower ozone content at their peak altitude. This feature is also visible in the Atlantic, though less pronounced, and exceeds the typical uncertainty of the TrOC, being of the order of 5-7 DU. Other stratospheric ozone column (SOC) and TrOC data sets, e.g. the NASA OMPS and SCIAMACHY TrOC products, show a similar pattern in the tropical Pacific. In preliminary studies we associated this pattern with the semi-permanent presence of the Inter-Tropical Convergence Zone (ITCZ), a region of high surface reflectivity crossing the satellite LOS. The present contribution belongs to the ESA ENFORCE project, which aims to implement in the IUP radiative transfer model SCIATRAN the ability to take into account variations of the surface reflectivity along the satellite LOS (2D mode) and thereby mitigate the described artefact. The final goal is the improvement of the TrOC product derived from satellite limb scattering measurements, and the outcome could be of interest for any limb scattering instrument, e.g. SCIAMACHY and ALTIUS. In this presentation, we show the first results of the retrievals performed using the SCIATRAN 2D mode.
First, we used simulated case studies to better investigate the impact of different idealized distributions of surface reflectivity on the retrieved profiles. Then, we compare the results obtained with the SCIATRAN 2D mode on a subset of OMPS observations with the standard 1D SCIATRAN retrievals and with collocated MLS observations. Finally, we address the impact of the implemented correction on TrOC derived using the limb-nadir matching technique.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:30 (Hall G2)

Session: C.05.07 EO National Missions and Programmes

In this session, national missions’ objectives, strategy, and future plans from a programmatic point of view and under Member States’ management will be presented. Representatives from various national space agencies will present their current Earth observation programmes as well as their missions and strategy for the future.

Speakers:


  • Simonetta Cheli - ESA, Director of Earth Observation Programmes
  • Thomas Geist - FFG - Austrian Research Promotion Agency
  • Jean-Christophe Schyns - BELSPO, STEREO Programme Manager
  • Joost Vandenabeele - BELSPO, STEREO Programme Manager
  • Godela Rossner - German Space Agency at DLR, Head of Earth Observation Department
  • Selma Cherchali - CNES, Deputy Director of Earth Observation Programmes
  • Daniel Kristof - Lechner Knowledge Centre, Head of Earth Observation Department
  • Francesco Longo - ASI, Head of Earth Observation Department
  • Dag Anders Moldestad - Norwegian Space Agency, Lead Copernicus Programme
  • Carolina Sá - Portuguese Space Agency, EO Project Officer
  • Anton (Toni) Horžen - Slovenian Space Office
  • Krištof Oštir - University of Ljubljana and Slovenian Space Office
  • Harshbir Sangha - UKSA, Director of Missions and Capabilities (EO, PNT and Space Resilience)
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Session: A.06.01 Geospace dynamics: modelling, coupling and Space Weather - PART 2

This session aims to capture novel scientific research outcomes in the field of Geospace dynamics, encompassing the atmosphere, ionosphere, thermosphere, and magnetosphere, and their modelling and coupling. A significant contribution is expected from Space Weather science using, but not limited to, data from ESA Earth Observation missions such as Swarm (in particular FAST data) and SMOS. The objective of the session is to collect recent findings that improve the knowledge and understanding of the dynamics and coupling mechanisms of the middle and upper atmosphere and their link with the outer regions that are mainly driven by the Sun and the solar cycle, with a focus on data validation and Space Weather events. We also solicit results from simulations, ground-based observatories and other heliophysics missions, in particular those demonstrating synergetic combinations of these elements.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: The Spectral Shape of Auroral Plasma Turbulence and its Relation to GPS Scintillations

Authors: Magnus Ivarsen, Professor Glenn Hussey, Professor Jean-Pierre St-Maurice
Affiliations: University Of Saskatchewan
At times, turbulence permeates geospace, Earth's plasma environment, and may be present at all available scale sizes. The phenomenon has been studied for decades, but reliable multi-scale measurements of auroral turbulence are still hard to come by. In this presentation, I will present new measurements of a composite spectrum of plasma turbulence near the aurora borealis, on scale sizes ranging from 10 meters to 10 kilometers. Through space-ground conjunctions we are able to directly connect the composite spectrum to structuring in the large-scale electrical currents that flow with the aurora. We discuss the topic, concluding that a characteristic turbulent shape seems to follow the ionosphere-magnetosphere mapping closely.
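The "spectral shape" of turbulence is usually characterised by its spectral index: the slope of the power spectrum in log-log space. A minimal sketch on a synthetic power-law spectrum (illustrative only, not Swarm or SuperDARN data):

```python
import numpy as np

def spectral_index(k, power):
    """Least-squares slope of log10(power) vs log10(k): the spectral index."""
    slope, _intercept = np.polyfit(np.log10(k), np.log10(power), 1)
    return slope

# Synthetic power-law spectrum with index -5/3 (the classic Kolmogorov value)
k = np.logspace(-4, -1, 40)       # wavenumbers spanning ~10 m to ~10 km scales (1/m)
p = 1e3 * k ** (-5 / 3)
print(round(spectral_index(k, p), 2))  # -1.67
```

A change in the fitted index across a scale break is one way a composite multi-instrument spectrum reveals different structuring regimes.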
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: Observations of Plasma Structures of Varying Scale Size in the High-Latitude Ionosphere with a Suite of Instrumentation

Authors: Sophie Maguire, Alan Wood, David Themens, Derek McKay
Affiliations: University Of Birmingham, Sodankylä Geophysical Observatory
Within the high-latitude ionosphere, large-scale plasma structures, such as polar cap patches and blobs, have been observed. These large-scale structures can seed smaller-scale irregularities in the presence of instability mechanisms. It is these smaller-scale irregularities which can lead to the scintillation of trans-ionospheric radio signals, such as those used for Global Navigation Satellite Systems (GNSS). Irregularities which lead to scintillation are on much smaller scale sizes than high-latitude structuring such as polar cap patches. Thus, the Scales of Ionospheric Plasma Structuring (SIPS) experiment was conducted in January 2024 to observe the multi-scale ionosphere and its effects on scintillation. Given that the aim of the SIPS experiment was to observe the ionosphere across various scale sizes, an extensive suite of instrumentation was needed. This experiment utilised a variety of both space-based and ground-based instrumentation, including the European Space Agency’s Swarm satellites, incoherent scatter radars and radio telescopes, in combination with data modelling techniques. In this experiment, the large-scale structures were observed using the European Incoherent SCATter (EISCAT) radars, the medium-scale structures with the Kilpisjärvi Atmospheric Imaging Receiver Array (KAIRA), and the smaller-scale structures with the Swarm satellites and GNSS receivers. The combination of these instruments in conjunction with modelling techniques gives unprecedented coverage of the varying scale sizes, which is not possible with individual instrumentation alone. This presentation showcases the results from this experiment, explaining the relationship between structures of varying scale sizes and their associated scintillation effects.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: Swarm-VIP-Dynamic: Models for Ionospheric Variability, Irregularities Based on the Swarm Satellite Data

Authors: Alan Wood, Wojciech Miloch, Yaqi Jin, Daria Kotova, Gareth Dorrian, Lucilla Alfonsi, Luca Spogli, Rayan Imam, Eelco Doornbos, Kasper van Dam, Mainul Hoque, Jaroslav Urbar
Affiliations: Space Environment and Radio Engineering (SERENE) group, University of Birmingham, Department of Physics, University of Oslo, Istituto Nazionale di Geofisica e Vulcanologia, The Royal Netherlands Meteorological Institute (KNMI), German Aerospace Center (DLR), Institute of Atmospheric Physics CAS
The ionosphere is a highly complex plasma containing electron density structures with a wide range of spatial scale sizes. The variability and structuring of this plasma depends on forcing from above and below. Coupling of the ionosphere with the Earth’s magnetosphere and the solar wind, as well as to the neutral atmosphere, makes the ionosphere highly dynamic and highly dependent on the driving processes. Thus, modelling the ionosphere and capturing its full dynamic range considering all spatiotemporal scales is challenging. Swarm is the European Space Agency’s (ESA) first constellation mission for Earth Observation (EO), comprising multiple satellites in Low Earth Orbit (LEO). Numerous data products are available, including measures of the ionosphere at a range of spatial scales and the density of the thermosphere. These data products mean that Swarm is uniquely placed to investigate coupling between the dynamic ionosphere and the neutral atmosphere. The Swarm-VIP-Dynamic project started in early 2024 and it focuses on the variability, irregularities, and predictive capabilities for the dynamic ionosphere. In this project, we develop a suite of models for capturing the ionosphere structuring and dynamics at various spatiotemporal scales. In addition to the Swarm data, we will use datasets from other satellites and ground-based instruments for validation and to explore the added value of space instrumentation with various observation and sampling characteristics. We will also test the feasibility of the models to be used in a real-time environment. Recent results from the Swarm-VIP-Dynamic project are presented, including the model concepts, as well as prospects of further development in the context of space weather and predicting ionospheric space weather effects.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: A Decade-long Model of the Fast-varying Ionospheric and Magnetospheric Magnetic Fields Constrained by Ground and Satellite Observations

Authors: Jingtao Min, Dr. Alexander Grayver
Affiliations: ETH Zurich, University of Cologne
The time-varying geomagnetic field is a superposition of contributions from multiple internal and external current systems. A major source of geomagnetic field variations at periods less than a few years is the current systems external to the solid Earth, namely the ionospheric and magnetospheric currents, as well as the associated currents induced in the Earth’s mantle. Understanding these current systems is at the centre of geospace modelling and space weather studies, and is also crucial for electromagnetic induction studies of the Earth's interior. We present here reconstructed decade-long models of the ionospheric, magnetospheric and induced magnetic fields for the time period 2014-2023. While the separation of these three sources is mathematically underdetermined using either ground or satellite measurements alone, it is made tractable with our new geomagnetic field modelling approach, which combines both ground and multi-satellite datasets. Our modelling approach is not confined to data from specific magnetic conditions or local times, nor does it impose harmonic behaviour in time, as is typical in previous models. The resulting new field models provide continuous time series of the ionospheric, magnetospheric and induced field spherical harmonic coefficients, covering all local times and magnetic conditions, without any prescribed time-harmonic behaviour. These new time series unravel complex non-periodic dynamics of the external magnetic fields during global geomagnetic storms, as well as periodicities in the magnetospheric and ionospheric magnetic fields associated with solar activity and lunar tides, respectively. As such, our new model contributes to a better picture of the dynamics of the external current systems and magnetosphere-ionosphere interactions. Our new modelling approach is highly versatile and flexible, allowing for on-the-fly estimation and generation of geomagnetic field models with high temporal resolution.
The new approach and the published model will hence be relevant for space physics, and can facilitate space weather operations.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: Equatorward Closure of Region 2 Birkeland Currents

Authors: David Knudsen, Yhihenew Getu
Affiliations: University Of Calgary
The classic picture of the Birkeland current system includes a poleward (R1) and an equatorward (R2) sheet at most local times [Iijima and Potemra, 1976a], with an additional poleward sheet near noon [Iijima and Potemra, 1976b] and midnight (in the Harang region). Away from noon and midnight, the R1/R2 currents are generally considered to form a nearly-balanced pair, with a fraction of the R1 currents closing across the polar cap, and R2 comprising the Birkeland system’s equatorward boundary. However, using precision magnetic field measurements from Swarm, we find that approximately 20% of auroral zone traversals display evidence of an additional sheet equatorward of R2, occurring at all local times and having the opposite polarity of R2, indicating partial closure of the R2 sheet in the equatorward direction. In this presentation we explore the dependence of these “Region 3” currents on local time and other parameters.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: Self-Organized Criticality and Intermittency in the Integrated Power of High-Latitude Ionospheric Irregularities

Authors: Hossein Ghadjari, David Knudsen, Georgios Balasis
Affiliations: University Of Calgary, National Observatory of Athens
This study investigates the statistical properties of plasma density fluctuations in the auroral and polar cap regions using data from the entire Swarm mission. A key focus is to characterize the probability distribution functions of these fluctuations and extract insights into the occurrence and nature of extreme plasma density events. These events, often associated with significant ionospheric disturbances, will be analyzed to evaluate their effects on Swarm GPS receiver performance. By examining the spatial and temporal patterns of extreme events, this research aims to further our understanding of the dynamics driving extreme irregularities in the high-latitude ionosphere.
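A standard way to characterise probability distribution functions of density fluctuations, as described above, is the excess kurtosis: near zero for Gaussian fluctuations, and increasingly positive for the heavy-tailed distributions associated with intermittency and extreme events. A minimal sketch with synthetic samples (not Swarm data):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardised moment minus 3; > 0 indicates heavy-tailed fluctuations."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return (d ** 4).mean() / (d ** 2).mean() ** 2 - 3.0

rng = np.random.default_rng(1)
gauss = rng.normal(size=100_000)    # non-intermittent reference (excess kurtosis ~ 0)
heavy = rng.laplace(size=100_000)   # heavy-tailed stand-in for intermittent fluctuations
print(excess_kurtosis(gauss), excess_kurtosis(heavy))
```

In practice the statistic would be computed on detrended in-situ density residuals within a sliding window, so that its spatial distribution maps where extreme irregularities occur.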
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Session: A.09.01 The mountain cryosphere in peril – improved monitoring of snow and ice in complex terrain to address societal challenges in the face of climate change

The impact of climate change on the cryosphere in mountain areas is increasing, affecting billions of people living in these regions and in downstream communities. The latest Intergovernmental Panel on Climate Change Assessment Report highlights the importance of monitoring these changes and assessing trends for water security, as well as the risks of geo-hazards such as glacial lake outburst floods (GLOFs), landslides, and rockfalls.

This session will explore advanced methods and tools for monitoring physical parameters of snow, glaciers, and permafrost in mountainous regions using data from current satellites. We will also discuss the potential of satellites to be launched in the near future to enhance these observations and fill any gaps. By improving our understanding of water availability in mountainous areas and identifying key risks, we can develop strategies to adapt to the changing conditions and better protect these vulnerable regions.

We welcome contributions on advanced geophysical observations of snow, glaciers and permafrost variables in mountainous regions around the world using different satellite data and their impact on water resources and the increasing risks posed by geo-hazards under changing climate conditions.

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Deep learning for automated mapping of marginal snow in Sentinel-2 satellite imagery

Authors: Leam Howe, Prof Richard Essery
Affiliations: University Of Edinburgh
Mountain snow plays vital roles as a water reservoir, habitat, and recreational area, but it also poses significant risks like floods and avalanches and is highly sensitive to climate change. The task of measuring and forecasting snow cover is complicated by the high spatial variability of mountain snow compared to the resolutions available from satellite sensors and models, especially in complex terrain where accurate data are critical. Existing remote sensing products, often developed and validated in regions with persistent seasonal snow cover, face challenges in regions with marginal or ephemeral snow. Such products can struggle as the 'reasonable' omissions/errors made in areas of abundant snow cover become significant when applied to areas with variable or fleeting snow cover. In the face of ongoing climate change, many permanent seasonal snowpacks are transitioning to marginal and ephemeral conditions. There is, therefore, a need to develop remote sensing products that perform well in these challenging regions. To address this issue, we train a U-Net-based machine learning model to map snow and cloud cover in Sentinel-2 imagery. Our model was trained on a relatively small dataset of late-lying snow cover in the Highlands of Scotland. Despite the dataset's limited size, our approach achieved a high overlap score on our testing set and reduced the error in snow cover areal extent by an order of magnitude compared to NDSI-based methods, demonstrating high accuracy with modest computational demand (approximately 15 minutes of training time on a GPU). The results also show that our model better accommodates the diverse locations and spectral properties of snow found under Scotland's temperate maritime climate, and can accurately identify snow under challenging atmospheric conditions and cloud effects. 
Preliminary tests in other climatically and geographically diverse regions such as Greenland and Australia suggest that our model maintains consistent performance, though further validation is required to confirm its generalisability. This proof-of-concept establishes that machine learning, and specifically deep learning with convolutional neural networks, can capture the numerous spectral and spatial characteristics of mountain snow found in optical satellite data, and could improve projections and studies in regions experiencing transitional snow conditions.
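For context, the NDSI baseline that the deep learning model is compared against can be sketched as follows. The Sentinel-2 band choice (B3 green, B11 SWIR) and the 0.4 threshold are common conventions in the literature, not details taken from the abstract:

```python
import numpy as np

def ndsi_snow_mask(green, swir, threshold=0.4):
    """Binary snow mask from the Normalized Difference Snow Index.

    green : Sentinel-2 band 3 reflectance (array-like)
    swir  : Sentinel-2 band 11 reflectance (array-like)
    The 0.4 threshold is a widely used default, not tuned for the
    marginal snow conditions discussed in the abstract.
    """
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    ndsi = (green - swir) / (green + swir + 1e-12)  # guard division by zero
    return ndsi > threshold

# toy example: a bright-green/dark-SWIR pixel classifies as snow
mask = ndsi_snow_mask([0.8, 0.3], [0.1, 0.4])
```

It is precisely the rigidity of such a global threshold that the abstract argues breaks down in marginal and ephemeral snow regions.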

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Improved monitoring of seasonal snow characteristics in mountainous terrain by means of satellite data

Authors: Gabriele Schwaizer, Thomas Nagler, Markus Hetzenecker, Ursula Fasching, Maria Heinrich, Johanna Nemec, Tanja Rahim, Andrea Scolamiero, Helmut Rott
Affiliations: ENVEO IT GmbH
Mountain regions act as water towers of the surrounding lowlands, supplying several million people downstream with fresh water. Precise mapping of snow characteristics in mountain terrain is of high relevance for supporting many different applications linked to water management, hydrology, natural hazards, and hydropower generation. The Copernicus Sentinel-1/-2/-3 satellites provide an excellent data basis for continuous monitoring of the seasonal snow from local to global scale. The complexity of mountainous terrain requires advanced methods to retrieve high quality information on the spatial distribution of the seasonal snow cover and its properties. Optical satellite data, as available from Sentinel-2 MSI and Sentinel-3 SLSTR & OLCI sensors, can be used for monitoring the seasonal snow extent at different spatial and temporal scales. To improve the snow cover classification in mountain terrain, particularly in cast-shadow areas, an improved retrieval method based on multi-spectral unmixing with locally adaptive end-member selection has been developed. End-members of fully snow-covered and snow-free pixels in high Alpine terrain are selected per scene, separately for illuminated and shaded areas. Based on the multi-spectral reflectance information of the resulting four end-members, the snow cover fraction is estimated for all remaining pixels with a multi-spectral unmixing procedure. This method significantly improves the snow cover fraction estimation in cast-shadow areas of mountain regions. The method is applicable to optical satellite sensors having spectral bands from the visible to the shortwave infrared range, and the algorithm can consider all available reflective spectral bands for the unmixing procedure. 
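As a minimal illustration of the unmixing step, the snow cover fraction of one pixel can be estimated by least squares against end-member spectra. This two-endmember sketch (function name and numbers are illustrative) simplifies the four-endmember, illumination-aware selection described above:

```python
import numpy as np

def snow_fraction(pixel, em_snow, em_free):
    """Estimate snow cover fraction by linear spectral unmixing.

    Solves pixel ~ f * em_snow + (1 - f) * em_free in a least-squares
    sense over all reflective bands, then clips f to [0, 1].
    Two end-members only; the operational algorithm selects four
    (snow/snow-free, illuminated/shaded) per scene.
    """
    pixel, em_snow, em_free = (np.asarray(a, float)
                               for a in (pixel, em_snow, em_free))
    d = em_snow - em_free                      # direction between end-members
    f = np.dot(pixel - em_free, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

# a pixel halfway between the two end-member spectra -> fraction 0.5
f = snow_fraction([0.5, 0.45, 0.3], [0.9, 0.8, 0.5], [0.1, 0.1, 0.1])
```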
Snow cover fraction maps generated from Sentinel-2 and Landsat data over the Alps were validated with very high resolution reference snow maps from WorldView-2/3 images, showing a bias close to 0% and an overall root mean square error of about 15%. Additionally, the algorithm was tested with data from different medium resolution optical satellite sensors, including Sentinel-3 SLSTR & OLCI, Terra MODIS, and S-NPP VIIRS. The resulting snow cover fraction maps were compared with the products from Sentinel-2 and Landsat data, providing consistent information about the snow-covered areas at the different spatial and temporal resolutions. While optical satellite data enable the classification of the total snow area, Synthetic Aperture Radar (SAR) satellite data, such as the C-band data available from Sentinel-1, allow the identification of melting snow areas. The wet snow retrieval is based on a change detection algorithm optimized for high mountain terrain, since liquid water within the snowpack reduces the backscatter signal compared to dry snow or snow-free conditions. An important step for wet snow classification from SAR data is the preparation of a reference backscatter map per track, using repeat-pass SAR data acquired under dry snow or, optionally, snow-free conditions as the database. The backscatter ratio between a SAR image acquired under melting conditions and the reference backscatter map of the same orbit forms the basis for identifying wet snow areas. Combining the VV and VH polarization Sentinel-1 SAR data as a function of the local incidence angle helps to reduce the impact of the local incidence angle on the backscatter ratio and thus improves the wet snow retrieval, in particular at small local incidence angles. A threshold is applied to separate wet snow from other surfaces. Dry snow and snow-free areas cannot be discriminated from SAR satellite data. 
The comparison of the wet snow classification from SAR satellite data with snow cover fraction maps retrieved from high resolution optical satellite data during the main melting season in Alpine terrain, assuming melt conditions for the complete snowpack, resulted in an overall accuracy of more than 90% and F-scores close to 90%. To exploit the snow information from different satellite sensors over mountain terrain, the daily snow covered area from Sentinel-3 data can be combined with the snow melt extent information from Sentinel-1 data. Thus, snow free areas, melting snow areas which potentially contribute to the runoff and areas still covered by dry snow can be discriminated. High resolution snow cover extent information from cloud-free Sentinel-2 data can be used to get further details about the spatial distribution of the snow area. We will present the methods for the improved retrieval of snow characteristics in mountain terrain observable from the Copernicus Sentinel-1/-2/-3 satellites, demonstrate the improvements of each individual approach and highlight the added value by the combination of the different snow information sources for potential applications. Further, we will demonstrate the applicability of the presented methods in other selected mountain regions around the world, based on activities in support of the Common Observing Period Experiment (COPE) of the International Network for Alpine Research Catchment Hydrology (INARCH).
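The change-detection core of the wet snow retrieval described above can be sketched as a dB-ratio threshold against the per-track reference. The -3 dB default below is a value commonly cited in the literature, not necessarily the threshold used by the authors, and the incidence-angle-dependent VV/VH combination is omitted:

```python
import numpy as np

def wet_snow_mask(sigma0_melt, sigma0_ref, threshold_db=-3.0):
    """Flag wet snow where backscatter drops below a dry-snow reference.

    sigma0_melt, sigma0_ref : linear-scale backscatter from the same
    orbit/track. A pixel is classified as wet snow when the ratio
    (expressed in dB) falls below `threshold_db`. The -3 dB default is
    illustrative; the operational method additionally combines VV and VH
    as a function of the local incidence angle.
    """
    ratio_db = 10.0 * np.log10(np.asarray(sigma0_melt, float) /
                               np.asarray(sigma0_ref, float))
    return ratio_db < threshold_db

# toy example: strong attenuation (0.02 vs 0.20 -> -10 dB) flags wet snow
mask = wet_snow_mask([0.02, 0.18], [0.20, 0.20])
```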

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Snow Line Elevation Trends in the Alps, Pyrenees, and Andes Mountains, derived from 40-year Landsat snow cover time series

Authors: Andreas Dietz, Sebastian Roessler, Jonas Koehler
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center (DFD)
Climate change is affecting the snow cover in mountain regions all around the world. With temperatures increasing, snow melt starts earlier every year in both hemispheres, leading to various effects such as changes in the runoff regime, albedo, vegetation dynamics, animal habitats, floods, and impacts on tourism and hydropower generation. Because temperatures are expected to increase even further in the upcoming years, a detailed trend analysis of past developments is needed to understand the potential effects in the future. Because climate models are often too coarse to produce reliable results for the complex terrain of mountain regions, time series of high-resolution remote sensing data offer a great alternative. At the German Aerospace Center (DLR), methods to derive Snow Line Elevation (SLE) statistics from long-term time series of Landsat data have been developed, which can be utilized to derive monthly SLEs for every mountain catchment around the globe where Landsat data is available. The challenges when dealing with Landsat time series comprise considerable data gaps caused by cloud cover, differing data availability across years and Landsat generations, and the generally difficult-to-handle conditions in steep mountain terrain. The derived SLEs can be analyzed to identify trends in autumn or spring, delineating the extent to which the snow cover is retreating each year. These trends can be further analyzed to assess their significance, or used to predict potential future SLE retreat. The developed methods have been applied to the European Alps, the Pyrenees, and some catchments in the Chilean Andes close to Santiago de Chile. The analysis of the SLEs has revealed significant trends in all three regions, with SLEs retreating by up to 20 meters per year during spring. These snow cover changes can pose significant challenges to flora, fauna, and humans in the affected regions and beyond. 
The presentation will outline the general methodology behind the SLE retrieval, and will then focus on the trends detected within the three study regions and potential future developments that can be expected there.
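The trend estimation step can be sketched as a simple least-squares fit to a monthly SLE series. This is an illustrative sketch under the assumption of a linear trend; the operational gap handling and significance testing (e.g. Mann-Kendall) are omitted:

```python
import numpy as np

def sle_trend(years, sle_m):
    """Least-squares linear trend of snow line elevation (m per year).

    years : observation years; sle_m : SLE in metres for a fixed month.
    A positive slope means the snow line rises (retreats upslope) over
    time, as reported for the spring months in the abstract.
    """
    slope, intercept = np.polyfit(np.asarray(years, float),
                                  np.asarray(sle_m, float), 1)
    return slope, intercept

# synthetic series rising by 20 m/yr, matching the reported spring maximum
years = np.arange(1984, 2024)
sle = 2000.0 + 20.0 * (years - 1984)
slope, _ = sle_trend(years, sle)
```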

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Beyond snow and glaciers: Quantifying aufeis thickness in the Trans-Himalaya of Ladakh, India

Authors: Dr. Dagmar Brombierstäudl, Dr. Susanne Schmidt, Dr. Mohd Soheb, Prof. Dr. Marcus Nüsser
Affiliations: Department of Geography, South Asia Institute (SAI), Heidelberg University, Heidelberg Centre for the Environment, Heidelberg University, Heidelberg
Aufeis is often associated with permafrost and cold-arid conditions and is one of the least studied components of the Trans-Himalayan cryosphere. These seasonal laminated sheet-like ice masses are formed in winter by the successive freezing of overflowing water that seeps from the ground or a spring, or emerges from river ice. They are an important water source for field irrigation and pastoral communities in Ladakh. In some villages, aufeis accumulation has been enhanced in ice reservoirs (commonly known as “artificial glaciers”) for decades to store the winter baseflow for crop irrigation during the water-scarce period in spring. Despite this importance, research on aufeis in the region is still in its early stages. In previous studies we have mapped a total aufeis-covered area of almost 400 km² across the Trans-Himalaya between 4000 and 5500 m a.s.l. The number and size distribution shows a distinct increase towards the Tibetan Plateau, indicating the importance of cold-arid climatic conditions for their development. The largest individual aufeis field covers an area of 14 km², almost triple the size of the largest high-altitude glaciers in Central Ladakh. While mapping the maximum spatial extent of aufeis is comparatively straightforward, thickness estimations are more challenging. In this study we demonstrate the unexplored potential of differencing digital elevation models (DEMs) from very high-resolution stereo Pléiades satellite data and terrestrial photographs for aufeis studies. We selected four case study sites: two ice reservoirs (Igoo and Phuktse) and two catchments (Gya and Sasoma) with natural aufeis occurrence. While Pléiades data was available for all sites, terrestrial imagery was only acquired for the ice reservoirs due to the limited accessibility of the Gya and Sasoma catchments. In total six stereo images were acquired - three during the ice-free reference period (September/October 2022) and three during the aufeis-covered season (February/March 2023). 
Due to strict UAV regulations, 5700 terrestrial photographs were taken at five-meter intervals by walking around the slopes of the ice reservoirs during the summer and winter field surveys. The calculation of DEMs from both datasets relied on the Structure-from-Motion technique from computer vision and photogrammetry, which reconstructs DEMs from overlapping 2D imagery. The Pléiades DEMs were computed with the open-source NASA Ames Stereo Pipeline, and the DEMs from the terrestrial photographs with the commercial Agisoft Metashape software. DEM differencing revealed ice thicknesses of up to 2.8 m in both ice reservoirs, while natural aufeis fields occasionally reach even greater thicknesses of over 3 m. Aufeis volumes across the four study sites range from 34,106 ± 13,440 m³ in Phuktse up to 105,790 ± 28,511 m³ in Sasoma, indicating substantial amounts of water that need to be considered in future hydrological studies. The results from very high-resolution stereo satellite imagery are promising for aufeis studies on large spatial scales. Their usage can fill an observation gap caused by the remoteness and inaccessibility of many aufeis-prone areas and the large sizes of individual aufeis fields. Point clouds and DEMs from terrestrial photographs revealed a high level of detail that is especially useful for in-depth studies of aufeis morphology and seasonal dynamics. In the context of ice reservoirs, this could even have practical implications for the development of sustainable water management strategies. This study not only represents the first quantification of aufeis thickness in the Trans-Himalaya, but also contributes to the ongoing scientific efforts to implement existing and well-established remote sensing methods for aufeis studies on regional and global scales. It also highlights the importance of studying this lesser-known cryosphere component to improve our understanding of mountain hydrology. 
It might also help to shed light on factors that play a significant role in aufeis formation and persistence, such as permafrost and groundwater distribution, which remain unknown for most parts of the Trans-Himalaya.
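The DEM-differencing step behind the thickness and volume estimates can be sketched as follows. The clipping of negative differences and the toy grid are illustrative assumptions; the study's co-registration and uncertainty propagation are omitted:

```python
import numpy as np

def aufeis_thickness_volume(dem_winter, dem_ref, cell_area_m2):
    """Ice thickness and volume from DEM differencing.

    dem_winter : DEM from the aufeis-covered season (m)
    dem_ref    : ice-free reference DEM (m), co-registered on the same grid
    Elevation gain is interpreted as ice thickness; negative differences
    (noise, erosion) are clipped to zero in this simplified sketch.
    """
    dh = np.asarray(dem_winter, float) - np.asarray(dem_ref, float)
    thickness = np.where(dh > 0.0, dh, 0.0)
    volume_m3 = float(thickness.sum() * cell_area_m2)
    return thickness, volume_m3

# toy 2x2 grid with 1 m^2 cells: 2.0 m + 1.5 m of ice -> 3.5 m^3
thickness, vol = aufeis_thickness_volume([[102.0, 100.0], [101.5, 99.8]],
                                         [[100.0, 100.0], [100.0, 100.0]],
                                         1.0)
```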

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Ensemble-based cryospheric reanalysis to infer global snow mass

Authors: Kristoffer Aalstad, Esteban Alonso-González, Joel Fiddes, Gregoire Guillet, Andreas Kääb, Norbert Pirk, Désirée Treichler, Stefan Wunderle, Sebastian Westermann, Yeliz Yilmaz
Affiliations: University Of Oslo, Pyrenean Institute of Ecology, SLF, University of Bern
Snow, glaciers, and permafrost are essential climate variables (ECVs) that regulate the global cycles of energy, carbon, and water. At the same time, these cryospheric ECVs are only partially observable by satellites due to gaps, noise, and indirect retrieval algorithms. Data assimilation (DA), namely the Bayesian fusion of uncertain models and noisy observations, presents a natural solution to the problem of partial observability. Nonetheless, unlike in numerical weather prediction, DA has received relatively little attention from the Earth observation (EO) community despite its potential as a generalized retrieval framework that adds value by filling gaps and inferring latent ECVs with uncertainty quantification. This contribution presents active research on applying ensemble-based DA techniques to carry out cryospheric reanalyses constrained by satellite data. Our future goal with these efforts is to generate consistent global high-resolution reanalyses of seasonal snow mass (also known as snow water equivalent, or simply SWE), glacier mass balance, and the thermal state of permafrost. In doing so, we seek to leverage a myriad of data streams including Earth observing satellites, global atmospheric reanalyses, airborne retrievals, and in-situ observations. Here we focus on seasonal snow, since accurate global snow mass estimation, particularly in mountainous terrain, remains a major unsolved problem in snow hydrology. To help tackle this problem, we are developing a global-scale ensemble-based snow reanalysis product by assimilating ESA Snow_cci fractional snow-covered area retrievals. Using a simple snow model, we are able to devote considerable computational resources to generating a global snow reanalysis at daily temporal resolution and kilometric spatial resolution, while still being able to afford promising iterative ensemble-based DA schemes. 
For the latter, we explore recent developments in hybridizing iterative ensemble Kalman and particle methods to provide robust Bayesian posterior inference. In particular, we demonstrate how these schemes can be used as nested smoothers to hierarchically infer prior hyperparameters related to snow climatology. The new reanalysis approach is evaluated using independent spaceborne, airborne, and in-situ validation data by comparing its performance to state-of-the-art regional snow reanalyses and existing global snow mass products in the form of ERA5-Land reanalysis data and ESA Snow_cci SWE retrievals. Unlike existing global snow products, this new snow reanalysis product is uncertainty-aware, assimilates snow satellite data, and specifically targets a key knowledge gap concerning mountain snow mass. Concurrent efforts to adapt this DA framework towards the generation of global ensemble-based reanalyses for glaciers and permafrost will also be showcased to emphasize synergies across these terrestrial cryospheric reanalysis efforts. We highlight that, by combining EO with cryospheric models, ensemble-based DA can transform largely untapped climate data into actionable climate information on cryospheric ECVs. The baked-in uncertainty quantification in this probabilistic climate information empowers us to make decisions in response to climate change and its perilous impacts on the mountain cryosphere.
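The ensemble Kalman building block of such schemes can be sketched as a single stochastic analysis step updating an SWE ensemble with one fractional snow-covered area (fSCA) retrieval. The observation operator, numbers, and variable names below are purely illustrative; the product described above uses iterative ensemble smoothers rather than this one-step sketch:

```python
import numpy as np

def enkf_update(swe_ens, fsca_ens, fsca_obs, obs_var, rng):
    """One stochastic ensemble Kalman analysis step.

    swe_ens  : prior SWE ensemble, shape (n,)
    fsca_ens : predicted fSCA for each member, shape (n,)
    fsca_obs : assimilated fSCA retrieval (scalar) with error variance obs_var
    Each member is nudged toward a perturbed copy of the observation,
    with the Kalman gain built from ensemble (co)variances.
    """
    swe_ens = np.asarray(swe_ens, float)
    fsca_ens = np.asarray(fsca_ens, float)
    cov_xy = np.cov(swe_ens, fsca_ens)[0, 1]       # state-observation covariance
    var_y = np.var(fsca_ens, ddof=1)               # predicted-observation variance
    gain = cov_xy / (var_y + obs_var)
    obs_pert = fsca_obs + rng.normal(0.0, np.sqrt(obs_var), swe_ens.size)
    return swe_ens + gain * (obs_pert - fsca_ens)

rng = np.random.default_rng(0)
prior = rng.normal(100.0, 30.0, 500)             # prior SWE ensemble (mm)
pred_fsca = np.clip(prior / 200.0, 0.0, 1.0)     # toy observation operator
post = enkf_update(prior, pred_fsca, 0.9, 0.01**2, rng)
```

With this toy operator, an fSCA observation of 0.9 pulls the posterior SWE mean well above the prior mean while shrinking the ensemble spread, which is the gap-filling, uncertainty-quantifying behaviour the abstract highlights.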

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Using temporal interpolation on optical-derived labels improves snow detection on SAR images with a deep learning method

Authors: Swann Briand, Flora Weissgerber, Sylvain Lobry, Jérôme Idier
Affiliations: DTIS, ONERA, Université Paris Saclay, SIP, LIPADE, Université Paris Cité, LS2N, CNRS
Snow detection is important in many domains as it is a key variable for climate monitoring [Aguirre2018]. It also allows us to assess available water resources for human consumption or hydroelectricity generation [Rouhier2018]. Using optical data, snow can be detected thanks to its high reflectance in the visible spectrum and low reflectance in the shortwave infrared spectrum. The Normalized Difference Snow Index (NDSI), which exploits these spectral characteristics [Hall1995], is commonly used to create binary or fractional snow cover maps, but is highly sensitive to cloud cover, resulting in unevenly spaced time series. Synthetic Aperture Radar (SAR) data can be acquired at night and through clouds. However, snow detection with SAR is challenging, as dry snow is almost transparent to SAR and most of the observed signal comes from the ground. A method to retrieve dry snow depth using ratios between VV and VH backscatter compared to a reference built from means of snow-free acquisitions was proposed by [Lievens2019], but it needs prior information about snow presence. When snow melts, its liquid water content increases and most of the signal is scattered in the specular direction, strongly attenuating the backscattered signal. In [Nagler2016], the authors exploit this attenuation by combining ratios of both polarization backscatters with their reference to detect wet snow with a thresholding method. As it is a pixel-wise decision, it can be noisy. Deep learning methods can be used to detect wet snow from optical-derived labels in a semantic segmentation task, using ratios between backscatter and reference as input, which has the advantage of being independent of the incidence angle [Lê2023]. In [Montginoux2023], the [Nagler2016] and [Lievens2019] ratios were concatenated to detect wet and dry snow, and topographic information was added in [Gallet2024]. 
These previous methods show promising results; however, the optical-derived label maps used for training the network are always patchy due to high cloud cover. In this study, we investigate whether increasing the number of labels by temporal interpolation of the NDSI improves wet and dry snow detection results, even if these labels are uncertain. We compare two temporal interpolation methods on NDSI: Closest Neighbours Interpolation (CNI) using a three-day window, and a Kalman smoother. CNI only fills small gaps, taking little risk, while the Kalman smoother estimates an NDSI value for each date regardless of the gap size. The NDSI maps are then thresholded to obtain binary snow cover maps and projected onto the SAR geometry using the LabSAR algorithm [Weissgerber2022]. To conduct this study, we first assess which set of input channels gives the best results when training a network with non-interpolated labels. We consider four sets of input channels: the one used in [Montginoux2023] (A), the one used in [Lê2023] (B), a concatenation of VV and VH backscatter (C), and channel set C concatenated with their references (D). After identifying the best set, we use it to assess the effect of label interpolation. As the performance of machine learning methods is highly dependent on training data, we test the robustness of our method under spatial and temporal domain shift. We use pairs of SAR acquisitions and optical label maps from the Guil basin, located in the Queyras massif in the French Alps, from September 2018 to June 2019 as our main domain, and split it temporally into a training set, a validation set to avoid overfitting and tune model hyperparameters, and a test set to evaluate it. To evaluate temporal shift robustness, we use acquisitions from the same basin between September 2019 and June 2020, and for spatial shift, the Gyronde basin in the Ecrins massif between September 2018 and June 2019. 
The Sentinel-1 data is acquired in interferometric wide swath mode, with a range-azimuth ground resolution of 5x20 m and a temporal resolution of 6 days. Three orbits cover each basin, so we have 6 acquisitions every 12 days combining both ascending and descending orbits. Reference images are computed for each year, basin, and orbit using snow-free acquisitions between June and August of the respective year. The optical data is from the MOD10A1 dataset [Hall2021] from the National Snow and Ice Data Center (NSIDC), which provides daily NDSI and cloud cover maps for both basins at 500 m ground resolution. When investigating channel sets, each yields accuracies over 0.85 without domain transfer, with D performing best at 0.899 accuracy. With temporal transfer, performances do not change much, as we have more mono-class dates which are easy to segment, and D remains the best channel set. Spatial transfer is a harder task, but D is still the best channel set with an accuracy of 0.866. From qualitative evaluation of the predicted maps, we see that models trained with channel set C can miss snow, as reference information about snow-free ground is needed. Models trained with channel set D always predict better maps than those trained with channel sets A and B, which are less precise and noisier. Keeping the reference as an independent channel in channel set D allows better segmentation, since using a ratio between backscatter and reference removes incidence angle variability and thus topography information. For the rest of the study, we use channel set D as input, and train models using CNI and the Kalman smoother with different regularization parameter values. All interpolation methods improve on using non-interpolated labels, and CNI performs best, with accuracies over 0.9 for all domains. 
Qualitatively, we see a clear improvement in the predicted snow maps, which are smoother thanks to the better spatial regularization learned during training, where the network sees less patchy label maps. Using any label interpolation is better than none, but the Kalman smoother performs worse than CNI. While it provides more labels than CNI, filling all the gaps in a pixel's time series increases the risk of introducing label noise through misclassification. To improve our method, we could use the estimation variance output by the smoother to model the confidence in a label and use this information during training. 
References:
  • [Aguirre2018] F. Aguirre et al., "Snow Cover Change as a Climate Indicator in Brunswick Peninsula, Patagonia", Front. Earth Sci., vol. 6, Sept. 2018, doi: 10.3389/feart.2018.00130.
  • [Gallet2024] M. Gallet, A. Atto, F. Karbou, and E. Trouvé, "Wet Snow Detection From Satellite SAR Images by Machine Learning With Physical Snowpack Model Labeling", IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing, vol. 17, pp. 2901-2917, 2024, doi: 10.1109/JSTARS.2023.3342990.
  • [Hall1995] D. K. Hall, G. A. Riggs, and V. V. Salomonson, "Development of methods for mapping global snow cover using moderate resolution imaging spectroradiometer data", Remote Sensing of Environment, vol. 54, no. 2, pp. 127-140, Nov. 1995, doi: 10.1016/0034-4257(95)00137-P.
  • [Hall2021] D. K. Hall and G. A. Riggs, "MODIS/Terra Snow Cover Daily L3 Global 500m SIN Grid, Version 61", NASA National Snow and Ice Data Center Distributed Active Archive Center, 2021, doi: 10.5067/MODIS/MOD10A1.061.
  • [Lê2023] T. T. Lê, A. Atto, E. Trouvé, and F. Karbou, "Deep Semantic Fusion of Sentinel-1 and Sentinel-2 Snow Products for Snow Monitoring in Mountainous Regions", in IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA: IEEE, July 2023, pp. 6286-6289, doi: 10.1109/IGARSS52108.2023.10282065.
  • [Lievens2019] H. Lievens et al., "Snow depth variability in the Northern Hemisphere mountains observed from space", Nat. Commun., vol. 10, no. 1, p. 4629, Oct. 2019, doi: 10.1038/s41467-019-12566-y.
  • [Montginoux2023] M. Montginoux, F. Weissgerber, S. Lobry, and J. Idier, "Évaluation du couvert neigeux à partir d'images SAR par apprentissage profond basé sur des images optiques de référence" (Snow cover assessment from SAR images by deep learning based on optical reference images), in 29e colloque GRETSI, Grenoble, France, Aug. 2023. [Online]. Available: https://hal.science/hal-04256105
  • [Nagler2016] T. Nagler, H. Rott, E. Ripper, G. Bippus, and M. Hetzenecker, "Advancements for Snowmelt Monitoring by Means of Sentinel-1 SAR", Remote Sensing, vol. 8, no. 4, Apr. 2016, doi: 10.3390/rs8040348.
  • [Rouhier2018] L. Rouhier, "Régionalisation d'un modèle hydrologique distribué pour la modélisation de bassins non jaugés. Application aux vallées de la Loire et de la Durance" (Regionalization of a distributed hydrological model for ungauged basins, applied to the Loire and Durance valleys), PhD thesis, Sorbonne Université, 2018. [Online]. Available: https://theses.hal.science/tel-02409965
  • [Weissgerber2022] F. Weissgerber, L. Charrier, C. Thomas, J.-M. Nicolas, and E. Trouvé, "LabSAR, a one-GCP coregistration tool for SAR-InSAR local analysis in high-mountain regions", Front. Remote Sens., vol. 3, p. 935137, Sept. 2022, doi: 10.3389/frsen.2022.935137.
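The Closest Neighbours Interpolation used above can be sketched as follows, assuming a daily NDSI series with NaN gaps. The three-day window comes from the abstract; the function name and other details are illustrative:

```python
import numpy as np

def closest_neighbour_fill(ndsi, max_gap_days=3):
    """Fill NaN gaps in a daily NDSI series from the closest valid day.

    A missing day is filled only if a valid observation exists within
    `max_gap_days`; days farther than that from any observation stay NaN
    (taking "little risk", unlike a Kalman smoother that estimates a
    value for every date).
    """
    ndsi = np.asarray(ndsi, float).copy()
    valid = np.flatnonzero(~np.isnan(ndsi))    # indices of observed days
    for i in np.flatnonzero(np.isnan(ndsi)):
        if valid.size == 0:
            break
        j = valid[np.argmin(np.abs(valid - i))]  # closest observed day
        if abs(int(j) - int(i)) <= max_gap_days:
            ndsi[i] = ndsi[j]
    return ndsi

# short gaps are filled; the centre of a long gap stays NaN
series = [0.8, np.nan, np.nan, 0.2] + [np.nan] * 10 + [0.9]
filled = closest_neighbour_fill(series)
```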

Tuesday 24 June 11:30 - 13:00 (Room 1.34)

Session: F.02.13 International Cooperation in Spaceborne Imaging Spectroscopy

Building on the outcomes of the 3rd Workshop on International Cooperation in Spaceborne Imaging Spectroscopy (WICSIS-2024; https://hyperspectral2024.esa.int/), held at ESA-ESTEC (Netherlands) on 13-15 November 2024, this insight session will continue to explore opportunities and challenges for international collaboration in this field.
Imaging spectroscopy from space in the visible-to-shortwave-infrared has emerged as a powerful tool for monitoring the Earth's surface. In recent years, the availability of high spatial resolution (i.e. ~30 m pixel size) imaging spectroscopy data from space, accessible to users for scientific or commercial purposes, has increased tremendously thanks to the successful deployment of PRISMA (ASI), DESIS (DLR), HISUI (METI), EnMAP (DLR) and EMIT (NASA/JPL), paving the way for the development of future missions such as PRISMA Second Generation (ASI), SBG (NASA/JPL) and CHIME (ESA/EC). The exploitation of these growing data streams creates immense opportunities for scientific and operational users and stakeholders. However, to fully meet the growing demand for ever higher temporal frequency of observations, and to bridge the gap in spatial resolution with multi-spectral products, a combination of data from different missions, and the integration of growing constellations of commercial satellites, will be necessary.
This session aims to bring together key stakeholders from government agencies, research institutions, and industry to discuss the latest advancements, challenges, and opportunities in spaceborne imaging spectroscopy, with a focus on medium/high spatial resolution VSWIR products and the work carried out within the CHIME-SBG cooperation. Topics will include the development of instrument-agnostic algorithms and interoperable products, the validation of global products, and open science approaches. By facilitating open dialogue and the exchange of ideas, we aspire to build stronger partnerships and lay the groundwork for even stronger future collaboration among Agencies and interactions with the user community.

Presentations and speakers:


Instrument-Agnostic Science: International Cooperation with the SBG-VSWIR Mission


  • David R. Thompson - NASA/JPL

International scenario on hyperspectral missions: maximizing users' benefits


  • Simona Zoffoli - ASI

Equality in imaging spectroscopy missions: needs and perspectives


  • Monica Pepe - CNR

Example of applications using time-series from spaceborne imaging spectrometers


  • Sabine Chabrillat - GFZ

EnMAP synergies with hyperspectral missions and international campaigns


  • Vera Krieger - DLR

Tuesday 24 June 11:30 - 13:00 (Hall N1/N2)

Session: D.05.05 CDSE User Review Meeting - User Innovations in Action

This session highlights the scientific work of experienced users leveraging the Copernicus Data Space Ecosystem to address pressing environmental and societal challenges. Champion users from diverse fields will share their work, methodologies, and findings, illustrating how Copernicus data is being applied to advance knowledge in different areas. Attendees will gain insight into applied case studies and innovative uses of the platform, fostering a deeper understanding of the ecosystem’s capabilities and its role in supporting impactful scientific work and operational services. This session will be driven by an open call allowing users to submit their Unique User Story, which will be presented by sharing specific examples of how these user insights have led to impactful improvements and innovations. Join us to explore how user-driven contributions are making the Copernicus Data Space Ecosystem more accessible, responsive, and effective for diverse applications across sectors.

Presentations and speakers:


Use of Copernicus Data Space Ecosystem Data and Services in the Common Agricultural Policy Paying Agency of Castile and Leon


  • Alberto Gutierrez García – Instituto Tecnológico Agrario de Castilla y León

ESA WorldCereal: Effortless Crop Mapping with OpenEO and CDSE


  • Kristof Van Tricht and Jeroen Degerickx - VITO Remote Sensing

CDSE and Euro Data Cube


  • Gunnar Brandt - Brockmann Consult

The Space Planter Dashboard - Earth observation data in support of agriculture


  • Kostas Gružas, Ričardas Mikelionis, and Marius Survila - Statistics Lithuania, Eurostat Hackathon team

Interactive panel session


  • CDSE Team and User Community
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Session: A.02.02 Terrestrial and Freshwater Biodiversity - PART 1

Preserving the integrity and health of natural ecosystems and the biodiversity they host is crucial not only for the vital services they provide to sustain human well-being, but also because natural ecosystems with a high degree of integrity and diversity tend to exhibit elevated levels of productivity and resilience. The importance of safeguarding biodiversity is increasingly recognised in many Multilateral Environmental Agreements (MEAs), which all place great emphasis on the sustainable management, restoration and protection of natural ecosystems.

The pivotal role of ecosystems in maintaining ecological balance and supporting human well-being is a unifying theme in MEAs. Taking note that despite ongoing efforts, biodiversity is deteriorating worldwide and that this decline is projected to continue under business-as-usual scenarios, Parties to the Convention on Biological Diversity (CBD) adopted, at the 15th Conference of the Parties in December 2022, the Kunming-Montreal Global Biodiversity Framework (GBF). The GBF represents the most ambitious and transformative agenda to stabilise biodiversity loss by 2030 and allow for the recovery of natural ecosystems, ensuring that by 2050 all the world’s ecosystems are restored, resilient, and adequately protected. In Europe, the EU Biodiversity Strategy for 2030 aims to put Europe’s biodiversity on the path to recovery by 2030, by addressing the main drivers of biodiversity loss.

The emergence of government-funded satellite missions with open and free data policies and long-term continuity of observations, such as the Sentinel missions of the European Copernicus Program and the US Landsat programme, offers an unprecedented ensemble of satellite observations, which together with very high resolution sensors from commercial vendors, in-situ monitoring systems and field work, enable the development of satellite-based biodiversity monitoring systems. The combined use of different sensors opens pathways for a more effective and comprehensive use of Earth Observations in the functional and structural characterisation of ecosystems and their components (including species and genetic diversity).

In this series of biodiversity sessions, we will present and discuss the recent scientific advances in the development of EO applications for the monitoring of the status of and changes to terrestrial and freshwater ecosystems, and their relevance for biodiversity monitoring, and ecosystem restoration and conservation. The development of RS-enabled Essential Biodiversity Variables (EBVs) for standardised global and European biodiversity assessment will also be addressed.

A separate LPS25 session on "Marine Ecosystems" is also organised under the Theme “1. Earth Science Frontiers - 08 Ocean, Including Marine Biodiversity”.

Topics of interest mainly include (but are not limited to):
• Characterisation of change patterns in terrestrial and freshwater biodiversity.
• Integration of field and/or modelled data with remote sensing to better characterise, detect changes to, and/or predict future biodiversity in dynamic and disturbed environments on land and in the water.
• Use of Earth Observation for the characterisation of ecosystem functional and structural diversity, including the retrieval of ecosystem functional traits (e.g., physiological traits describing the biochemical properties of vegetation) and morphological traits related to structural diversity.
• Sensing ecosystem function at diel scale (e.g. using geostationary satellites and exploiting multiple individual overpasses in a day from low Earth orbiters and/or paired instruments, complemented by subdaily ground-based observations).
• Assessment of the impacts of the main drivers of change (i.e., land use change, pollution, climate change, invasive alien species and exploitation of natural resources) on terrestrial and freshwater ecosystems and the biodiversity they host.
• Understanding of climate-biodiversity interactions, including the impact of climate change on biodiversity and the capacity of species to adapt.
• Understanding of the evolutionary changes of biodiversity and better predictive capabilities on biodiversity trajectories.
• Understanding of the ecological processes of ecosystem degradation and restoration.
• Multi-sensor approaches to biodiversity monitoring (e.g. multi-sensor retrievals of ecosystem structural and functional traits).
• Validation of biodiversity-relevant EO products (with uncertainty estimation).
• Algorithm development for RS-enabled Essential Biodiversity Variables (EBVs) on terrestrial and freshwater ecosystems.
• Linking EO with crowdsourced information for biodiversity monitoring.

Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Global Environmental Drivers of 3D Structural Biodiversity Traits

Authors: Atticus Stovall, Dr Shukhrat Shokirov, Dr Xin Xu, Dr John Armston, Dr Lisa Patrick Bentley, Kim Calders, Professor Mathias Disney, Dr Lola Fatoyinbo
Affiliations: NASA Goddard Space Flight Center, University of Maryland, TIIAME National Research University, Sonoma State University, Ghent University, University College London
Conservation of forest biodiversity at a global scale is directly dependent on understanding the factors influencing habitat structure. Yet, the standard metrics for assessing biodiversity (Essential Biodiversity Variables) do not capture 3D ecosystem complexity and are constrained to simplistic measures of ecosystem structure (e.g. canopy cover or tree height). Understanding the factors influencing more complex tree architectural traits in forests will support mapping and monitoring of forest biodiversity and the effectiveness of conservation efforts. Here, we present recent findings from a community-built global database containing thousands of ground-based laser scanning plots (the Global Terrestrial Laser Scanning Database; GTLS) from which we derive tree-level and plot-level architectural traits important for biodiversity, or structural biodiversity traits (SBTs), across environmental gradients. The ultimate aim of the GTLS database is to address a clear lack of 3D tree-level trait data at a global scale. We now have an improved automatic trait extraction pipeline enabling tree extraction and modeling for thousands of trees per study site, providing a standardized, quality-controlled, open-source method that can be implemented across the scientific community. Currently, our database has ~20,000 3D trees with more than 10 SBTs per tree, focused on characterizing the structural signature of forest biodiversity. We will provide the newest results from an extensive laser scanning field campaign in South Africa, highlighting some preliminary trends in convergent and divergent allometric scaling relationships in dry forest ecosystems around the globe. In addition, we will discuss recently funded work that will dramatically improve our regional sensitivity to drivers of 3D biodiversity traits in Mediterranean forests.
The focus on environmental drivers of 3D biodiversity traits will enable us to further understand future climate impacts on forest ecosystem biodiversity. The Global TLS Database is becoming a critical means of improving our fundamental understanding of drivers of tree-level architecture and forest biodiversity, while directly supporting conservation efforts. With broad community support for the GTLS database, we aim to directly inform EO observations and mapping of global forest biodiversity traits.
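As a rough illustration of the kind of plot-level structural summaries such a trait-extraction pipeline produces, the sketch below computes a few common metrics from a vector of return heights. The metric names (RH percentiles, cover fraction) and the 2 m cover threshold are illustrative assumptions, not the GTLS database's actual trait definitions.

```python
import numpy as np

def plot_structural_metrics(heights_m, cover_threshold_m=2.0):
    """Summarise a plot's point-cloud return heights into simple
    structural metrics: maximum height, relative-height percentiles,
    and the fraction of returns above a cover threshold.
    (Illustrative only; not the GTLS trait definitions.)"""
    z = np.asarray(heights_m, dtype=float)
    return {
        "height_max": float(z.max()),
        "rh50": float(np.percentile(z, 50)),
        "rh98": float(np.percentile(z, 98)),
        "cover_fraction": float(np.mean(z > cover_threshold_m)),
    }

# Hypothetical return heights (m) for one plot.
print(plot_structural_metrics([0.0, 1.0, 2.0, 3.0, 4.0]))
```

A real pipeline would of course start from full 3D point clouds and per-tree segmentation rather than a bare height vector.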
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Spaceborne and In-Situ Remote Sensing for Monitoring Enhanced Forest Structural Complexity Promoting Biodiversity in Central European Forests

Authors: Patrick Kacic, Dr. Ursula Gessner, Dr. Christopher R. Hakkenberg, Stefanie Holzwarth, Prof. Dr. Jörg Müller, Dr. Kerstin Pierick, Prof. Dr. Dominik Seidel, Dr. Frank Thonfeld, Dr. Michele Torresani, Claudia Kuenzer
Affiliations: University of Würzburg, Institute of Geography and Geology, Department of Remote Sensing, German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), School of Informatics, Computing & Cyber Systems, Northern Arizona University, Field Station Fabrikschleichach, Biocenter, Department of Animal Ecology and Tropical Biology, University of Würzburg, Bavarian Forest National Park, Department for Spatial Structures and Digitization of Forests, Faculty of Forest Sciences, Georg-August-Universität Göttingen, Department for Silviculture and Forest Ecology of the Temperate Zones, Faculty of Forest Sciences, Georg-August-Universität Göttingen, Free University of Bolzano/Bozen, Faculty of Agricultural, Environmental and Food Sciences
Enhancing the structural complexity of forests has been identified as a key management technique to increase biodiversity, support multifunctionality and strengthen the resilience towards disturbances. In the context of the interdisciplinary research project BETA-FOR, experimental silvicultural treatments with increased diversity of light structures (distributed and aggregated cuttings) and deadwood features (no deadwood, downed and standing structures, habitat trees) have been implemented in central European broad-leaved forests. The standardized treatments, mimicking old-growth structures and accelerating their development, enable a novel understanding of human-forest interactions, i.e. how monitoring of forest management towards structural complexity can be implemented. For continuous, cost-effective, and across-scale monitoring of forest structure, remote sensing offers complementary perspectives to local measurements. In the present study, multi-source remote sensing analyses comprising in-situ (mobile and terrestrial laser scanning) and spaceborne data (Sentinel-1; Sentinel-2; Global Ecosystem Dynamics Investigation, GEDI) were conducted to investigate enhanced forest structural complexity in BETA-FOR treatments. More precisely, the change patterns in forest structural complexity due to the implementation of the experimental silvicultural treatments were characterized based on satellite time-series. Bayesian time-series analyses (BEAST, Bayesian Estimator of Abrupt change, Seasonal change, and Trend) of Sentinel-1 and Sentinel-2 metrics (combination of spectral indices and spatial statistics) demonstrate the identification of enhanced structural complexity in aggregated treatments comprising no or downed deadwood structures (stumps, logs, crowns), as well as standing deadwood structures (snags, habitat trees).
Furthermore, we integrated in-situ measurements from mobile and terrestrial laser scanning to assess relationships among spaceborne and in-situ indicators of forest structural complexity. We found strong correlations among in-situ and spaceborne data on structural complexity after carrying out different analyses (bi- and multi-variate correlations, unsupervised clustering). Our findings demonstrate the great potential of multi-source remote sensing to monitor forest structure along different gradients (light conditions, deadwood structures) of enhanced structural complexity. We identified several indicators of forest structural complexity from spaceborne remote sensing that accurately bridge in-situ remote sensing measurements. Those forest structural complexity indicators have the potential to guide adaptive forest management towards structural complexity according to the EU Biodiversity Strategy for 2030 from local (in-situ) to global (spaceborne) observations.
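BEAST itself fits a Bayesian model of trend, seasonal and abrupt-change components; as a toy illustration of the underlying idea only (not the method used in the study), the sketch below locates the single strongest mean shift in a spectral-index time series.

```python
import numpy as np

def strongest_breakpoint(series):
    """Return the split index where the difference between the means of
    the two segments is largest, with that difference.
    A toy mean-shift detector, not BEAST."""
    x = np.asarray(series, dtype=float)
    best_i, best_gap = 1, 0.0
    for i in range(1, len(x)):
        gap = abs(x[:i].mean() - x[i:].mean())
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i, best_gap

# Synthetic index time series with an abrupt jump after a cutting treatment.
print(strongest_breakpoint([1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0]))
```

A real analysis would, as in the abstract, run change detection per pixel on Sentinel-1/2 metric time series and model seasonality and uncertainty, which this toy detector ignores.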
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Mapping individual tree species using high-resolution sensors and deep learning

Authors: Daniel Ortiz-Gonzalo, Dr. Dimitri Gominski, Dr. Martin Brandt, Prof. Dr. Rasmus
Affiliations: University Of Copenhagen
Accurate classification of tree species from remote sensing represents a major advancement in ecological monitoring. Traditional large-scale mapping efforts have primarily relied on species distribution models and pixel-based analyses, which are often constrained by the lack of ground-truth data and the limitations of coarse-resolution imagery. These approaches struggle to capture critical structural details, such as canopy edges and other nuanced variations in visual traits, that are essential for precise tree species identification. The integration of high-resolution, multi-sensor remote sensing with advanced deep learning techniques for extracting high-level semantic information provides tailored solutions to these challenges, enabling a more accurate mapping of tree species across diverse ecosystems and land uses. In this study, we develop a tree species classifier at the individual tree level by integrating high-resolution aerial imagery, airborne LiDAR, and National Forest Inventory (NFI) data from Spain. Specifically, we use 25-cm resolution orthophotos from the Spanish National Plan of Aerial Orthophotography (PNOA) and airborne LiDAR data with a density of 3-5 points per square meter. A key challenge lies in the misalignment between tree positions recorded in the NFI data and the visual features in aerial imagery, complicating direct object detection training. To address this, we decouple the detection and classification tasks: detection models are trained using labeled data from other countries, while the Spanish dataset is dedicated to species classification. To enhance the accuracy of tree matching, we incorporate NFI-derived traits such as tree height and align them with canopy height models generated from LiDAR data. Additionally, we leverage recent advances in deep semi-supervised learning to enhance species recognition, reducing the reliance on extensive labeled data and ensuring scalability and efficiency.
Our individual tree-level approach outperforms traditional pixel and patch-level analyses in estimating tree diversity indices. Metrics such as species richness, Shannon index, Simpson index, and Pielou’s Evenness are captured more accurately across diverse ecosystems and land use systems, including forests, agriculture, and urban areas. This study lays the groundwork for a national tree species map at the individual tree level, offering an unprecedented level of detail for monitoring tree diversity and advancing ecological research.
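The tree diversity indices named above (species richness, Shannon index, Simpson index, Pielou's evenness) follow directly from per-tree species labels; a minimal sketch, with a made-up species list as input:

```python
import math
from collections import Counter

def diversity_indices(species_labels):
    """Richness, Shannon entropy, Gini-Simpson index, and Pielou's
    evenness from a list of per-tree species labels."""
    counts = Counter(species_labels)
    n = sum(counts.values())
    p = [c / n for c in counts.values()]
    richness = len(counts)
    shannon = -sum(pi * math.log(pi) for pi in p)
    simpson = 1.0 - sum(pi * pi for pi in p)  # Gini-Simpson form
    pielou = shannon / math.log(richness) if richness > 1 else 0.0
    return {"richness": richness, "shannon": shannon,
            "simpson": simpson, "pielou": pielou}

# Hypothetical per-tree labels for one map tile.
trees = ["oak"] * 5 + ["pine"] * 3 + ["beech"] * 2
print(diversity_indices(trees))
```

Note that the Simpson index is reported here in its Gini-Simpson (1 - sum p^2) form; the abstract does not state which convention the authors use.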
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: A Dataset on the Structural Diversity of European Forests

Authors: Gonzalo Oton Azofra, Marco Girardello, Matteo Piccardo, Mark Pickering, Agata Elia, Guido Ceccherini, Mariano Garcia, Mirco Migliavacca, Alessandro Cescatti
Affiliations: European Commission, Joint Research Centre (JRC), The University of Dublin, Trinity College Dublin, Department of Geography, Consultant of European Commission, Joint Research Centre, European Space Research Institute, ESA-ESRIN, Universidad de Alcala, Department of Geology, Geography and the Environment, Environmental Remote Sensing Research Group
Forest structural diversity, defined as the heterogeneity of canopy structural elements in space, is an important axis of functional diversity and is central to understanding the relationship between canopy structure, biodiversity, and ecosystem functioning. Despite the recognised importance of forest structural diversity, the development of specific data products has been hindered by the challenges associated with collecting information on forest structure over large spatial scales. However, the advent of novel spaceborne LiDAR sensors like the Global Ecosystem Dynamics Investigation (GEDI) is now revolutionising the assessment of forest structural diversity by providing high-quality information on forest structural parameters with a quasi-global coverage. Whilst the availability of GEDI data and the computational capacity to handle large datasets have opened up new opportunities for mapping structural diversity, GEDI only collects sparse measurements of vegetation structure. Continuous information on forest structural diversity over large spatial domains may be needed for a variety of applications. The aim of this study was to create wall-to-wall maps of canopy structural diversity in European forests using a predictive modelling framework based on machine learning. We leverage multispectral and Synthetic Aperture Radar (SAR) data to create a series of input features that were related to eight different structural diversity metrics, calculated using GEDI. The models proved to be robust, indicating that active radar and passive optical data can effectively be used to predict structural diversity. Our dataset finds applications in a range of disciplines, including ecology, hydrology, and climate science. As our models can be regularly rerun as new images become available, the dataset can be used to monitor the impacts of climate change and land use management on forest structural diversity.
In conclusion, we generated a spatially-explicit dataset on eight forest structural diversity metrics at multiple resolutions (10 km, 5 km, 1 km) encompassing temperate, Mediterranean, and continental regions of Europe.
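The mapping workflow described above (relate multispectral and SAR features to GEDI-derived diversity metrics at sparse footprints, then predict wall-to-wall) can be sketched with an ordinary least-squares model standing in for the study's machine-learning framework; every array below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "GEDI footprint" training set: rows = footprints,
# columns = synthetic Sentinel-1/Sentinel-2 input features.
X_train = rng.normal(size=(200, 4))
true_w = np.array([0.8, -0.3, 0.5, 0.1])
# Target: one structural diversity metric per footprint, with noise.
y_train = X_train @ true_w + rng.normal(scale=0.05, size=200)

# Fit least squares with an intercept column (linear stand-in for the
# machine-learning models used in the study).
A = np.c_[X_train, np.ones(len(X_train))]
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# "Wall-to-wall" prediction over every pixel of a feature grid.
grid = rng.normal(size=(50, 50, 4))
pred_map = (grid.reshape(-1, 4) @ w[:4] + w[4]).reshape(50, 50)
print(pred_map.shape)
```

The same fit-at-footprints / predict-everywhere pattern carries over unchanged when the linear model is swapped for the study's actual machine-learning regressor.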
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Biodiversity from Space: Understanding Large-Scale Patterns of Ecosystem Structure and Diversity with Remote Sensing

Authors: Fabian D Schneider, Ryan P Pavlick, Ting Zheng, Antonio Ferraz, Natalie Queally, Ethan Shafron, Morgan Dean, Laura Berman, Zhiwei Ye, Giulia Tagliabue, Philip A Townsend
Affiliations: Aarhus University, NASA Jet Propulsion Laboratory, California Institute of Technology, NASA, University of Wisconsin-Madison, University of Montana, University of California Los Angeles, University of Milano-Bicocca
Biodiversity is under pressure from anthropogenic activity and climate change, yet monitoring and predicting these changes globally is challenging due to knowledge gaps in biodiversity’s spatial and temporal dynamics. New remote sensing instruments offer large-scale measurements of plant canopy structure, functional traits, and ecosystem functioning from space. For example, spaceborne lidar, like the GEDI instrument, provides detailed views of plant canopy structure and diversity across landscapes. I will present results and challenges from mapping forest structural diversity in California and Central Africa using GEDI, at scales from 1 to 25 km, covering Mediterranean and tropical forests. We found GEDI’s RH98, Cover, and FHD metrics were most effective for capturing canopy height, density, and layering. GEDI generally captured canopy structure well in closed forests on flat terrain, though challenges arose in open forests and complex terrain. We identified high structural diversity in mid-elevation and coastal forests in the US and in volcanic ranges and forest-savanna transitions in Africa. GEDI revealed patterns of structural diversity that aligned with ecological processes, including the influence of wildfire in the US and topographic variation in Africa. In addition to ecosystem structure, we developed methods using imaging spectroscopy to map leaf biochemical and biophysical traits, revealing patterns of plant functional diversity. Testing with airborne data from AVIRIS Classic across the Sierra Nevada mountains, we assessed the potential of spaceborne instruments like EnMAP, PRISMA, and future NASA SBG and ESA CHIME missions. I will present results that give insights into mapping foliar traits at large spatial scale and the role of trait-trait relationships in mapping plant functional diversity.
We found that there are at least three relevant functional axes of variation that should be represented in functional diversity analyses, and that the relationship among those axes and functional plant strategies is context dependent. We also found that patterns of functional diversity were related to elevation gradients and disturbance patterns, especially related to wildfire. Combining these new measurements with ground-based data will help to better understand biodiversity patterns and change over time. I will present examples of new analyses of remotely sensed patterns of plant functional and structural diversity, and their relationship to other dimensions of biodiversity and ecosystem functions, that demonstrate the value and potential of new remote sensing instruments and methods for biodiversity monitoring from space.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Benchmarking plant functional diversity estimation from space with a Biodiversity Observing System Simulation Experiment

Authors: Javier Pacheco-Labrador, Ulisse Gomarasca, Daniel E. Pabon-Moreno, Wantong Li, Martin Jung, Dr Gregory Duveiller
Affiliations: Spanish National Research Council, Max Planck Institute for Biogeochemistry
As global and regional vegetation diversity loss threatens essential ecosystem services under climate change, monitoring biodiversity dynamics is crucial in evaluating its role and providing insights into climate adaptation and mitigation. However, biodiversity monitoring is resource-intensive and unable to provide the coverage and resolution necessary to understand biodiversity responses to environmental changes. In this context, remote sensing (RS) has arisen as a potential opportunity to assess long-term and large-scale biodiversity dynamics. However, results in the literature are contrasting and reveal a strong effect of spatial resolution on the estimation of different vegetation diversity metrics. Filling the existing methodological gaps is hampered by the lack of ad hoc, consistent, global, and spatially matched ground diversity measurements that enable testing and validating generalizable methodologies. To address this problem, we have developed the Biodiversity Observing System Simulation Experiment (BOSSE). BOSSE simulates synthetic landscapes featuring communities of various vegetation species, the seasonality of the vegetation traits in response to meteorology and environmental factors, and the corresponding remote sensing imagery linked to the traits via radiative transfer theory. Thereby, BOSSE enables users to evaluate the capability of different methods to estimate plant functional diversity (PFD) from RS. BOSSE simulates hyperspectral reflectance factors (R), sun-induced chlorophyll fluorescence (SIF), and land surface temperature (LST). The simulated images can be further convolved to the bands of specific RS missions. In this work, we use BOSSE to answer five methodological questions regarding the quantification of PFD with RS. We found BOSSE a valuable tool for evaluating different methods and shedding light on the best approaches and the limitations of RS to infer PFD.
In particular, we learned that: 1) at the landscape scale, diversity indices should be computed over small windows and averaged rather than using large windows; 2) leaf area index (LAI) is a better proxy of species abundance than the surface covered by each species; 3) optical traits (traits estimated from RS) and hyperspectral reflectance are likely the best estimators for PFD; 4) PFD estimation uncertainty maximizes at the phenological minimum (low LAI values); and 5) PFD estimation is strongly affected when RS pixels combine signals from different species, but correlations with PFD are robust if field data are gridded to the pixel size as long as pixels are less than three times larger than plants. In summary, we prove BOSSE is a valuable tool for testing novel methods regarding RS monitoring of plant diversity, facilitating advances in this new area of research.
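Finding 1 above (compute diversity over small windows, then average) can be illustrated with a simplified proxy: within-window variance of a single trait map. This is a made-up stand-in for BOSSE's actual functional-diversity metrics, on a synthetic landscape.

```python
import numpy as np

def windowed_trait_diversity(trait_map, win=4):
    """Mean within-window variance of a trait map: diversity is scored
    in small windows and then averaged, per finding 1 above.
    (Simplified proxy, not BOSSE's actual metrics.)"""
    h, w = trait_map.shape
    scores = [
        trait_map[i:i + win, j:j + win].var()
        for i in range(0, h - win + 1, win)
        for j in range(0, w - win + 1, win)
    ]
    return float(np.mean(scores))

# A landscape of two homogeneous species patches: each small window is
# internally uniform, so windowed diversity is 0, while a single large
# window (whole-image variance) would report spurious diversity.
patchy = np.zeros((8, 8))
patchy[:, 4:] = 1.0
print(windowed_trait_diversity(patchy), patchy.var())
```

The contrast between the two printed values is the point: small windows separate within-community heterogeneity from between-patch differences that a single large window conflates.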
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.31/1.32)

Session: C.01.17 Creating the Perfect Bouquet of Innovation: Designing the Next EO Technology Demonstration Mission - Session 2

What are the next generation of promising technologies to be demonstrated in space? Are you working on new technologies for space demonstration? How can these developments contribute to strengthening Earth Observation efforts worldwide?

This session is designed to gather ideas for potential technology demonstration missions that could be developed within three years, with an estimated launch in 2030. The session will include a series of activities combining individual and group efforts, applying a design-thinking approach and creative facilitation methods to foster unconventional ideas and maximize innovation.
The goal is to collect a broad range of ideas and refine them into realistic, feasible mission concepts within the given timeline.

What happens after?
The top ideas will be presented on Friday, 27th June, and reviewed by a panel of ESA experts.

Speakers:


  • Emma De Cocker - ESA
  • Tuur Strobbe - ESA
  • Sofia Lembo - ESA
  • Paolo Bazzocchi - ESA
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall F1)

Session: C.03.13 Sentinel-1C Preliminary User Assessment: Early Insights and Feedback from the Community

The Sentinel-1C spacecraft is scheduled for launch in December 2024. Considering the context in which the Sentinel-1 mission is running, ESA intends (if spacecraft commissioning allows) to increase the sensing capacity beyond the commissioning needs and to release the data to users before the end of the In-Orbit Commissioning Phase, planned for May 2025.

At the time of the LPS 2025 symposium, users will have access to three months of pre-qualified Sentinel-1C data. This session will provide an early evaluation of its usability, performance, and added value as experienced by the user community.

Following the conclusion of its in-orbit commissioning (IOC) phase in late May 2025, the mission’s new capabilities and datasets will be assessed by initial users from various application domains, offering valuable insights into its impact on operational and scientific workflows.

This session will highlight the feedback and experiences of pioneering users who have accessed and utilized Sentinel-1C data in the months following its release. Presentations will address key aspects of the mission, including:

- Data Quality and Continuity: Initial observations on the consistency and reliability of Sentinel-1C data compared to earlier mission units, with a focus on calibration, noise characteristics, and cross-mission compatibility.

- Operational Integration: Insights from early adopters on integrating Sentinel-1C into existing processing pipelines, highlighting challenges, lessons learned, and potential improvements.

- Preliminary Use Cases: Demonstrations of how Sentinel-1C data is being applied in fields such as disaster response, agriculture, forest monitoring, urban analysis, and climate studies.

The session will provide a forum for the Earth observation community to share preliminary experiences with Sentinel-1C, identify early successes, and discuss the challenges associated with onboarding a new spacecraft unit within the Sentinel-1 constellation.

Presentations and speakers:


Preliminary AIS-fused satellite ship detection capabilities by Sentinel-1C


  • Carl Torbjorn Stahl - EGEOS

On the validation and assimilation of Sentinel-1C wave data in the operational wave model MFWAM


  • Lotfi Aouf - Meteo-France

Early results of Sentinel-1C one day radar interferometry for grounding line delineation in polar ice


  • Eric Rignot - Univ. California Irvine

Sentinel 1C boosting Near Real Time Ice Products


  • Keld Quistgaard - DMI

Early data uptake in the agriculture, forestry and Ukraine war context


  • Guido Lemoine - JRC
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Session: A.08.07 Ocean Health including marine and coastal biodiversity - PART 2

Ocean Health, defined as the Ocean’s condition that allows it to continuously provide services for Humans in a sustainable way, while preserving its intrinsic well-being and its biodiversity, is under considerable threat. Decades of pollution, overexploitation of resources and damaging use of coastal environments have severely degraded the condition of both coastal and offshore marine ecosystems, compromising the Ocean’s capacity to provide these services. This degradation is being further exacerbated by Climate Change, whose effects on the Oceans are numerous. The many sensors on board currently operating satellites (Altimeters, Radiometers, Scatterometers, Synthetic Aperture Radars, Spectrometers) have high relevance for Ocean Health and Biodiversity studies, providing continuous, global and repetitive measurements of many key parameters of the physical (temperature, salinity, sea level, currents, wind, waves) and biogeochemical (Ocean Colour related variables) marine environment, including also high resolution mapping of key marine habitats (coral reefs, kelp forests, seagrass, …). In this context, this session welcomes contributions demonstrating how satellite data can be used to better monitor Ocean Health, including the retrieval of Essential Biodiversity Variables and the estimation of the many different stressors, also including marine litter, impacting Ocean Health and marine and coastal biodiversity. Single-sensor capability is amplified even further when used in synergy with other space and in-situ measurements, or together with numerical modelling of the physical, biogeochemical and ecological ocean state, so the session encourages multi-sensor and multi-disciplinary studies. The session is also open to contributions demonstrating how EO-derived products can be used to support management actions to restore and preserve Ocean Health and marine and coastal biodiversity.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: CIAO: A Machine-Learning Algorithm for Mapping Arctic Ocean Chlorophyll-a from Space

Authors: Maria Laura Zoffoli, Vittorio Brando, Gianluca Volpe, Luis González Vilas, Bede Ffinian Rowe Davies, Robert Frouin, Jaime Pitarch, Simon Oiry, Jing Tan, Dr Simone Colella, Christian Marchese
Affiliations: Consiglio Nazionale delle Ricerche, Istituto di Scienze Marine (CNR-ISMAR), 00133; Institut des Substances et Organismes de la Mer, ISOMer, Nantes Université, UR 2160, F-44000; Scripps Institution of Oceanography, University of California San Diego, La Jolla
The Arctic Ocean (AO) is warming faster than any other region on Earth, influencing phytoplankton communities and potentially triggering cascading effects throughout the marine trophic web, with global climate repercussions. Despite its critical importance, limited sampling in this vast and challenging oceanic region has hindered understanding of these changes. Ocean color (OC) remote sensing, with over 26 years of continuous daily acquisitions, is a crucial tool to bridge these knowledge gaps, offering insights into long-term trends and seasonal variability in phytoplankton abundance, as indexed by Chlorophyll-a concentration (Chl), at a Pan-Arctic scale. However, current algorithms for retrieving Chl from satellite data in the AO have shown significant limitations, including high levels of uncertainty and inconsistent accuracy across different regions. These inaccuracies in Chl retrievals propagated further, affecting primary production estimates, climate and biogeochemical modeling. In this study, we quantified uncertainties of seven existing algorithms using harmonized, merged multi-sensor satellite remote sensing reflectance (Rrs) data from the ESA Climate Change Initiative (CCI) spanning 1998–2023. These estimations provide environmental modelers with more effective tools for understanding and managing the propagation of uncertainties. The existing algorithms exhibited varying performance, with Mean Absolute Differences (MAD) ranging from 0.756 to 4.209 mg m-3. To improve upon these results, we developed CIAO (Chlorophyll In the Arctic Ocean), a machine learning-based algorithm specifically designed for AO waters and trained with satellite Rrs data. The CIAO algorithm uses Rrs at four spectral bands (443, 490, 510 and 560 nm) and Day-Of-Year (DOY) to account for seasonal variations in the bio-optical relationships. 
CIAO significantly outperformed the seven existing models, achieving a MAD of 0.519 mg m⁻³, thereby improving Chl retrievals by at least 30% compared to the best-performing existing algorithm. Furthermore, CIAO produced consistent spatial patterns and provided more reliable Chl estimates in coastal waters, where other algorithms tend to overestimate. This enhanced accuracy improves the tracking of seasonal variability at the Pan-Arctic scale. By improving the precision of satellite-derived Chl data, the CIAO algorithm enables more accurate assessments of the ecological impacts of climate change in the AO, contributing to more robust ecological and climate projections.
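The abstract names CIAO's inputs: Rrs at four spectral bands plus Day-Of-Year. One way such inputs might be assembled into a feature vector for an ML regressor is sketched below, with DOY encoded cyclically so that late December and early January land close together in feature space. The function name and the encoding are illustrative assumptions, not the published CIAO implementation:

```python
import numpy as np

def ciao_style_features(rrs443, rrs490, rrs510, rrs560, doy):
    """Assemble an illustrative feature vector for a Chl regressor.

    Uses the four Rrs bands named in the abstract plus a cyclic
    Day-Of-Year encoding, so day 365 and day 1 are neighbours in
    feature space rather than maximally distant.
    """
    angle = 2.0 * np.pi * (doy / 365.0)
    return np.array([rrs443, rrs490, rrs510, rrs560,
                     np.sin(angle), np.cos(angle)])

# Example: a mid-July (DOY 196) spectrum in sr^-1 (values illustrative)
x = ciao_style_features(0.004, 0.005, 0.004, 0.003, 196)
```

Any standard regressor (random forest, gradient boosting, neural network) could then be trained on such vectors against in-situ Chl; the cyclic encoding is what lets a single model capture the seasonal shifts in bio-optical relationships mentioned above.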
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Advancing Ecosystem-Based Management With Satellite-Based Habitat Mapping and Transfer Learning: Insights From the Horizon Europe EFFECTIVE Project

Authors: Gyde Krüger, Lisbeth Tangaa Nielsen, Marie Lund Larsen, Silvia Huber
Affiliations: DHI A/S
Ocean health faces critical challenges due to pollution, habitat destruction, and climate change. The EU-funded EFFECTIVE-project (https://effective-euproject.eu/) has the primary objective of Enhancing social well-being and economic prosperity by reinforcing the eFFECTIVEness of protection and restoration management in Mediterranean Marine Protected Areas. The four-year project aims to develop a comprehensive scientific knowledge base and practical guidelines combining science, technological nature-based solutions, digitalisation and social impacts for the application of ecosystem-based management to promote the establishment of large-scale marine protected areas in the European seas. One aspect of the project focuses on satellite-based habitat mapping of shallow coastal areas. These areas, including coral reefs and seagrass meadows, provide numerous benefits to local communities and the global environment, including storm protection, food security, water quality regulation, recreation and support for rich biodiversity. Protecting and restoring these ecosystems is essential for combating climate change and ensuring healthy coastal environments. Satellite-based habitat mapping is relevant because it provides comprehensive, regular information that is crucial for monitoring and managing marine ecosystems cost-efficiently at large spatial scales, ensuring accurate assessments and informed decision-making for conservation efforts. While advanced machine learning (ML) methods are increasingly used for satellite-based habitat mapping, the diversity and complexity of these ecosystems challenge the performance of generic models for large-scale applications across Europe. With experience gained in Scandinavia (1) and Southeast Asia (2), we have further enhanced our approach and developed a transfer learning method for efficient scaling of satellite-based habitat mapping.
A Convolutional Neural Network was trained on multi-temporal optical satellite imagery and metocean data for four pilot sites in the Mediterranean and then applied to 10-meter Copernicus Sentinel-2 imagery to map critical shallow habitats across the Mediterranean. The developed approach provides a cost-effective tool for regular monitoring of these critical ecosystems. In this contribution, we will briefly introduce the EFFECTIVE-project and present our marine habitat mapping approach, highlighting ML model development and results, share lessons learned, and provide an outlook on future developments and next steps of the activity. 1) https://setac.onlinelibrary.wiley.com/doi/10.1002/ieam.4493 2) https://oceaninnovationchallenge.org/cohort-3-mpas-area-based-management-and-blue-economy#cbp=/ocean-innovations/mapping-and-monitoring-ecosystems-scale-copernicus-sentinel-2-imagery-tropical
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Improving the Prediction of Ocean Ecosystem Indicators by Assimilation of Satellite Observations Into a Biogeochemical Model

Authors: Lars Nerger, Sophie Vliegen, Yuchen Sun, Anju Sathyanarayanan
Affiliations: Alfred Wegener Institute, Helmholtz Center for Polar and Marine Research
To improve the prediction of ocean ecosystem indicators related to the biogeochemistry and nutrients, satellite observations of sea surface temperature and chlorophyll are assimilated into an ocean biogeochemical model. We focus on the North Sea and Baltic Sea, utilizing the operational model system of the Monitoring and Forecasting Center for the Baltic Sea of the Copernicus Marine Service (CMEMS), which consists of the ocean model NEMO coupled to the biogeochemical model ERGOM running at a resolution of 1.8 km. To incorporate observational data, the model is coupled to data assimilation functionality provided by the parallel data assimilation framework (PDAF, https://pdaf.awi.de). We leverage ensemble data assimilation, in which the uncertainty of the model state is estimated by a dynamic ensemble of 30 model state realizations. The uncertainty in the biogeochemical fields is represented utilizing perturbed uncertain process parameters. The satellite observations of sea surface temperature and chlorophyll from Sentinel satellites, provided via CMEMS, are assimilated daily. The data assimilation lets the model learn from the satellite data and directly improves predictions of both observed fields, temperature and chlorophyll. It also influences the other model variables and ecosystem indicators, which are less easily validated due to limited independent observations. The assimilation also reduces the uncertainties of the indicators as estimated by the spread of the ensemble of model states. We assess the impact of the assimilation on the forecast skill with a focus on the biogeochemical variables. For chlorophyll we find an improved forecast skill for up to 14 days, which also relates to an improved representation of the phytoplankton community simulated by ERGOM. In addition, ecosystem indicators, like trophic efficiency, pH, phytoplankton community structure, and oxygen are analyzed.
Here, particular changes are visible in the plankton community structure and the relative abundance of zooplankton, i.e. trophic efficiency. In addition, effects on the oxygen and nutrient concentrations are visible. Apart from the scientific results, the program code for the assimilation into the NEMO model, as well as NEMO and PDAF, are available as open source software (https://pdaf.awi.de provides links to PDAF and the NEMO-PDAF code). This supports further applications and cooperation as well as operationalization.
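The ensemble scheme described above (30 model state realizations, daily updates of the observed fields) can be illustrated with a minimal stochastic Ensemble Kalman Filter analysis step for a single scalar observation. This is a generic textbook sketch, not the NEMO/ERGOM/PDAF implementation; all names and values below are illustrative:

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_err_var, h, rng):
    """One stochastic Ensemble Kalman Filter update for a scalar observation.

    ensemble    : (n_state, n_members) array of model state realizations
    obs         : observed value (e.g. satellite chlorophyll at one point)
    obs_err_var : observation error variance
    h           : (n_state,) linear observation operator
    """
    n_members = ensemble.shape[1]
    hx = h @ ensemble                              # predicted observations, (n_members,)
    x_anom = ensemble - ensemble.mean(axis=1, keepdims=True)
    hx_anom = hx - hx.mean()
    p_xy = x_anom @ hx_anom / (n_members - 1)      # state-observation covariance
    p_yy = hx_anom @ hx_anom / (n_members - 1) + obs_err_var
    gain = p_xy / p_yy                             # Kalman gain, (n_state,)
    # Perturbing the observation per member keeps the analysis spread consistent
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_err_var), n_members)
    return ensemble + np.outer(gain, obs_pert - hx)
```

Because the gain couples all state components to the observed one through the sample covariance, an update of observed chlorophyll also shifts unobserved variables, which is the mechanism by which the assimilation influences nutrient and oxygen fields and narrows the ensemble spread of derived indicators.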
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Towards Operational Monitoring Of Shallow Marine Habitats - Integrating Remote Sensing Into The Danish National Monitoring Program

Authors: Silvia Huber, Rasmus Fenger-Nielsen, Lisbeth Tangaa Nielsen, Troels Lange, Lars Boye Hansen
Affiliations: DHI A/S, The Danish Agency for Green Transition and Aquatic Environment
Shallow marine habitats, like seagrass meadows and rockweed beds, are vital for supporting biodiversity, controlling erosion, enhancing disaster resilience, and offering habitat and food for a variety of marine species. With increasing climate pressures and human impacts related to eutrophication, overfishing and habitat fragmentation, the coverage and health of coastal habitats have rapidly declined. For example, seagrasses alone are being lost at a rate of 1.5% per year and have already lost about 30% of their estimated historical global coverage (https://www.thebluecarboninitiative.org/). A cornerstone for effective management and conservation is access to accurate and timely information about the status and trends of shallow marine habitats. Since the late 1980s, the Danish national marine environmental monitoring has been based on manual monitoring techniques, with measurements taken from ships and by divers. This monitoring includes several hundred transects, mapping and annually reporting the occurrence, density, and depth distribution of primarily seagrasses and macroalgae. Seagrass is one of the key parameters, as its depth distribution is a primary indicator of ecological status. For macroalgae, new indicators reflecting species richness and changes in the accumulated coverage with depth are being further developed based on evaluation at the EU level. These environmental indicators currently assess the ecological status of marine flora and comply with the EU's Water Framework Directive (WFD) requirements. However, despite the high costs of the current monitoring program, it does not necessarily provide an accurate representation of the environmental state of the shallow coastal zone, as the monitoring is carried out with relatively low temporal and spatial coverage and without assessment of the areal distribution of submerged vegetation.
Therefore, the Danish Agency for Green Transition and Aquatic Environment has been working to develop a cost-effective and scalable approach to collecting marine environmental data that supports the monitoring program, incorporating remote sensing technology (airborne and spaceborne) for Danish coastal waters. With the Agency's support, DHI has been developing a cloud-based, digital platform to regularly map the distribution of submerged aquatic vegetation (SAV) at nation-wide scale, using systematic Copernicus Sentinel-2 satellite data. We are currently enhancing this system with deep learning and time-series analyses for robust, operational SAV mapping, aiming for integration into the Agency’s ongoing operational monitoring program by the end of 2026. Annual calculations of the areal distribution of SAV will support the development of an area-based indicator. Alongside the indicator for seagrass depth distribution, this will help assess the condition of coastal waters according to the WFD, the EU Habitats Directive (HD), and the EU Nature Restoration Law (part of the EU Biodiversity Strategy), which commits member states to restore at least 20 percent of their land and sea areas by 2030. All this will support the Agency’s ongoing administration of Danish marine water areas to meet conservation and environmental objectives and safeguard our coastal ecosystems, which are key for both climate mitigation and adaptation strategies. In our presentation, we will showcase the cloud-based digital platform for operational mapping of marine underwater vegetation using remote sensing technology. We will discuss the implemented methods and how they support Denmark's marine environmental monitoring program, particularly through the development of an area-based indicator for status reporting.
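An area-based indicator of the kind described amounts to counting vegetated pixels over a year of classified Sentinel-2 scenes. A simplified sketch of such a computation follows, assuming 10 m pixels and a boolean per-date detection stack; the aggregation rule, threshold and names are illustrative assumptions, not DHI's operational method:

```python
import numpy as np

def annual_sav_area_km2(detections, pixel_size_m=10.0, min_fraction=0.5):
    """Areal extent of submerged aquatic vegetation from a detection stack.

    detections : (n_dates, rows, cols) boolean array of per-date SAV maps.
    A pixel counts as SAV for the year if it is detected in at least
    `min_fraction` of the dates, which damps single-date misclassifications
    caused by turbidity or sun glint.
    """
    frequency = detections.mean(axis=0)        # per-pixel detection rate
    sav_mask = frequency >= min_fraction
    return sav_mask.sum() * (pixel_size_m ** 2) / 1e6
```

Tracking this number year over year is the essence of an area-based status indicator: trends in the returned km² value, alongside the depth-distribution indicator, feed directly into WFD-style condition assessments.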
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: The Wadden Sea - the detection of seagrass & co. in a changing environment

Authors: Kerstin Stelzer, Dr Marcel König, Dr Jörn Kohlus
Affiliations: Brockmann Consult GmbH, Schleswig-Holstein Agency for Coastal Defence, National Park and Marine Conservation, National Park Authority
Seagrass meadows are among the most productive ecosystems in the world and offer a great number of ecosystem services, and their systematic monitoring is critical for environmental protection, climate research and coastal management. Seagrass meadows support biodiversity by providing habitat, shelter and food for many marine organisms. Dense seagrass meadows reduce coastal erosion by reducing wave energy and stabilizing sediments. Seagrass is also an efficient carbon sink and plays a pivotal role in the global carbon cycle and for climate protection. Seagrass meadows respond quickly to changing environmental conditions, which makes them an important water-quality bio-indicator for coastal ecosystems, used in many regions including the European Union within the EU Water Framework Directive. Their detection and characterization are therefore part of environmental monitoring programmes. In the Wadden Sea along the North Sea coast of Denmark, Germany and The Netherlands, seagrass is distributed very unequally. While large and dense seagrass meadows cover the North Frisian part in Germany during the summer months, they hardly occur in other parts. Sentinel-2 is already being operationally used for seagrass mapping in the German Wadden Sea in Schleswig-Holstein, but the limited spectral resolution of MSI complicates distinguishing between different types of aquatic vegetation. Here, multi-temporal and hyperspectral information is currently being added to the classification scheme of the existing services. The multi-temporal approach enables the differentiation of seagrass and microphytobenthos (diatoms) as well as brown macroalgae such as Fucus vesiculosus. After the seagrass perishes in winter, a new lifecycle starts with strong growth in the following May and a maximum in August/September, while the microphytobenthos often has a maximum occurrence in spring. Parts of this work are funded by the EU project FOCCUS - Forecasting and observing the open-to-coastal ocean for Copernicus users.
While the multi-temporal approaches use the seasonality of different species, the hyperspectral information is used to separate species, or at least algae groups, by their characteristic absorption features. We will investigate the inter- and intraclass spectral variability and separability to improve the identification of different intertidal habitats in the Wadden Sea, including seagrass beds, mussel beds, saltmarshes and accumulations of brown and green algae, based on existing field spectroscopy and well-known targets present in the EnMAP imagery. Alternatively, we will explore data-driven machine learning approaches based on extensive field data coming from the operational monitoring. Hyperspectral EnMAP observations offer novel opportunities to improve the operational seagrass mapping service and to prepare for operational satellite missions such as CHIME. This activity is performed within the SEK project, starting in early 2025 and co-funded by the Federal Ministry for Economic Affairs and Climate Action via the German Space Agency. The Wadden Sea is a challenging environment: very dynamic, water-covered half of the time and characterized by gradual transitions. Seagrass and macroalgae coverage can vary between 5 and 100%, can be partly water-covered, can be covered by thin mud layers after calm weather conditions and can occur in different proportions. In addition to the known species, the Wadden Sea is experiencing the occurrence of invasive species such as the red macroalga Gracilaria vermiculophylla. Therefore, the separation of different species becomes more and more important for the assessment of the ecological health of the Wadden Sea. A good concept for using complementary data from ground-based measurements and from Earth Observation techniques is key for a concise monitoring of a sensitive ecosystem between land and ocean.
We will present the status of the work leading to an improved operational service for the monitoring of the Wadden Sea in Schleswig-Holstein.
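Interclass spectral separability of the kind discussed above is often screened with the spectral angle between (mean) class spectra, which compares spectral shape independently of overall brightness — useful under the variable water cover and mud films described for the Wadden Sea. This is a generic, widely used metric, not necessarily the one the SEK project will adopt:

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral angle (radians) between two reflectance spectra.

    Insensitive to a multiplicative brightness change, so it compares
    spectral shape, e.g. the absorption features that distinguish
    brown, green and red algae groups.
    """
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Small angles between the mean spectra of two candidate classes flag pairs (e.g. seagrass vs. green algae) that hyperspectral EnMAP bands would still struggle to separate, guiding where multi-temporal cues must carry the classification instead.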
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Earth Observation for Advanced Marine Habitat Mapping

Authors: Branimir Radun, Kristina Matika Marić, Luka Raspović, Josipa Židov, Zrinka Mesić, Ivan Tekić, Ivona Žiža, Bruno Ćaleta, Ivan Tomljenović, Ante Žuljević, Ivan Cvitković, Dragan Bukovec
Affiliations: Oikon Ltd. - Institute Of Applied Ecology, Department of Wildlife Management and Nature Conservation, Karlovac University of Applied Sciences, Laboratory for Benthos, Institute of Oceanography and Fisheries
Early 2024 marked the publication of the official map of coastal and benthic marine habitats of Croatia, encompassing the national coastal sea and the Croatian Exclusive Economic Zone (EEZ). Spanning 51% of the Adriatic Sea under Croatian jurisdiction—approximately 30,278 km²—this map represents one of the most extensive and intricate marine habitat mapping efforts in Europe. It was produced over a period of 25 months and provides habitat information at three scales (1:25,000, 1:10,000, and 1:5,000), tailored to the varying protection levels and management needs of different marine areas. The mapping process was built on the foundation of Earth Observation technologies and spatial analytics, integrating Satellite-based Earth Observation, Aerial Photogrammetry, in-situ surveys, and acoustic methods. Remote Sensing was pivotal for mapping habitats up to depths of 20 meters, utilizing multispectral imagery from the Sentinel-2 satellite constellation. Data for deeper regions were captured using acoustic methods, including multibeam and side-scan sonar, while over 4,000 in-situ transects were conducted for ground truthing and validation. Advanced methodologies such as Object-Based Image Analysis (OBIA) and Pixel-Based Image Analysis (PBIA) were employed to achieve high spatial resolution and detailed habitat classification. OBIA was used to process aerial ortho-maps at 0.5 m resolution, enabling precise segmentation and habitat delineation. PBIA leveraged 110 seasonal Sentinel-2 images to analyze temporal dynamics and classify seagrass species such as Cymodocea nodosa and Posidonia oceanica. The integration of these datasets was performed using advanced Geographic Information System (GIS) tools and spatial statistics, resulting in a high-resolution map with up to three habitat types assigned per spatial feature. The cartographic generalization algorithm custom-developed for this project ensured spatial, topological, and thematic accuracy in the final product. 
Notably, key elements of the methodologies applied in this effort were initially developed through ESA-funded activities, highlighting the crucial role of space-based data and technologies in advancing marine conservation and resource management. The resulting map provides a valuable tool for Natura 2000 site management, ecological network planning, marine spatial planning, and sustainable resource management. Furthermore, it establishes a scalable model for marine habitat mapping that can be adapted to other regions, addressing the growing need for robust, data-driven conservation solutions. By bridging Earth Observation, field surveys, and advanced spatial analytics, this project provides critical insights for biodiversity stakeholders, helping to mitigate climate and human-induced pressures on marine ecosystems.
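A multi-temporal pixel-based classification such as the one above, which leveraged 110 seasonal Sentinel-2 scenes, can be consolidated with a simple per-pixel vote across dates. The sketch below is a generic illustration of that idea, not the project's actual PBIA chain:

```python
import numpy as np

def temporal_majority_class(class_stack, n_classes):
    """Per-pixel majority vote across a multi-temporal classification stack.

    class_stack : (n_dates, rows, cols) integer class labels, one map per date.
    Returns the most frequent label per pixel — a simple way to exploit
    temporal dynamics when mapping stable benthic habitats such as
    Posidonia oceanica, while suppressing single-date errors.
    """
    counts = np.stack([(class_stack == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)
```

Seasonally varying covers (e.g. Cymodocea nodosa meadows) would instead call for per-season voting or phenology-aware features, but the majority map already gives a robust baseline for merging many partly cloudy acquisitions.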
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall L1/L2)

Session: F.02.16 GFOI Session on Tropical Forest Monitoring

Countries are working to advance Forest Monitoring for emissions and removals measurement, reporting and verification (MRV). The objective of the MRV is typically to quantify the country’s contribution to meeting the Paris Agreement goals and/or to gain access to climate finance. The GFOI is facilitating a process for country-led planning (CLP) where countries work towards building institutions and strengthening functional aspects of forest monitoring.

The principal objective of the agora is to advance knowledge exchange and joint learning among countries on technical aspects surrounding forest MRV. A broader discussion between science and practical implementation will be fostered.

The agora will be organized together with the GFOI Office and in close collaboration with GFOI partners and represented developing countries.

Moderators:


  • Daniela Requena Suarez - GFZ
  • Frank Martin Seifert - ESA

Panelists:


  • Daniela Requena Suarez - GFZ
  • Frederic Achard - JRC
  • Javier Garcia Perez - FAO
  • Sarah Carter - WRI
  • Natalia Malaga Duran - GFZ
  • Andy Dean - Hatfield
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall E1)

Session: C.03.07 The Copernicus Sentinels: from first to Next Generation missions - development status and technology challenges

The status of development of ESA missions will be outlined.
Across four sessions of 1.5 hours each (equivalent to a full day), participants will be offered a unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and together with industrial/science partners the status of mission development activities will be presented.

Presentations and speakers:


Sentinel-1: Mission Continuity through Next Generation Enhancements


  • Ramon Torres
  • Malcolm Davidson
  • Dirk Geudtner
  • Tobias Bollian

Sentinel-2: development, technology & Next Generation mission status: evolutions from Sentinel-2


  • Janice Patterson
  • Francisco Reina

Sentinel-3 Optical: development, technology & Next Generation mission status: Sentinel-3 (AOLCI and ASLSTR)


  • Nic Mardle
  • Simone Flavio Rafano Carna

Sentinel-6: development, technology & mission status: The technology behind the sea level record


  • Alejandro Egido
  • Pierrik Vuilleumier
  • Julia Figa
  • Lieven Bydekerke

Sentinel-3 Topography: development, technology & Next Generation mission status: On the way towards operational swath altimetry


  • Alejandro Egido
  • Pierrik Vuilleumier

Sentinel 6 Next Generation: Status of Mission definition and next steps


  • Bernardo Carnicero Dominguez
  • Agathe Carpentier
  • Robert Cullen
  • Alejandro Egido
  • Valeria Gracheva
  • Marcel Kleinherenbrink
  • Martin Suess
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Session: B.03.06 Climate, Environment, and Human Health - PART 2

It is well-known that many communicable and non-communicable diseases have a seasonal component. For example, flu and the common cold tend to increase in autumn and winter whilst vector borne diseases like Dengue and West Nile Virus tend to peak in late summer when the vectors are at their most abundant. Under monsoon regimes, many diseases peak during the rainy season. Hay fever, spring-time allergies and other respiratory disorders also have seasonality related to the abundance of pollens and other allergens in the air. Environmental conditions in water, air and land have a role in regulating the variability in the presence or absence and abundance of pathogenic organisms or material in the environment, as well as the agents of disease transmission like mosquitoes or birds. For example, air temperature and relative humidity are linked to flu outbreaks. Water quality in coastal and inland water bodies impacts outbreaks of many water-borne diseases, such as cholera and other diarrheal diseases, associated with pathogenic bacteria that occur in water. The seasonality has inter-annual variabilities superimposed on it that are difficult to predict. Furthermore, in the event of natural disasters such as floods or droughts, there are often dramatic increases in environmentally-linked diseases, related to the breakdown of infrastructure and sanitation conditions.

Climate change has exacerbated issues related to human health, with the shifting patterns in environmental conditions, and changes in the frequency and magnitude of extreme events, such as marine heat waves and flooding, and impacts on water quality. Such changes have also led to the geographic shifts of vector-borne diseases as vectors move into areas that become more suitable for them, as they become less cool, or retract from those that become too hot in the summer. The length of the seasons during which diseases may occur can also change as winters become shorter. There are growing reports on the incidence of tropical diseases from higher latitudes as environmental conditions become favourable for the survival and growth of pathogenic organisms.

Climate science has long recognised the need for monitoring Essential Climate Variables (ECVs) in a consistent and sustained manner at the global scale and with high spatial and temporal resolution. Earth observation via satellites has an important role to play in creating long-term time series of satellite-based ECVs over land, ocean, atmosphere and the cryosphere, as demonstrated, for example, through the Climate Change Initiative of the European Space Agency. However, the applications of satellite data for investigating shifting patterns in environmentally-related diseases remain under-exploited. This session is open to contributions on all aspects of investigation into the links between climate and human health, including but not limited to, trends in changing patterns of disease outbreaks associated with climate change; use of artificial intelligence and big data to understand disease outbreaks and spreading; integration of satellite data with epidemiological data to understand disease patterns and outbreaks; and models for predicting and mapping health risks.

This session will also address critical research gaps in the use of Earth Observation (EO) data to study health impacts, recognizing the importance of integrating diverse data sources, ensuring equitable representation of various populations, expanding geographic scope, improving air pollution monitoring, and understanding gaps in healthcare delivery. By addressing these gaps, we aim to enhance the utility of EO data in promoting health equity and improving health outcomes globally.

The United Nations (UN) defines Climate Change as the long-term shift in average temperatures and weather patterns caused by natural and anthropogenic processes. Since the 1800s, human emissions and activities have been the main causes of climate change, mainly due to the release of carbon dioxide and other greenhouse gases into the atmosphere. The United Nations Framework Convention on Climate Change (UNFCCC) is leading international efforts to combat climate change and limit global warming to well below 2 degrees Celsius above pre-industrial levels (1850–1900), as set out in the Paris Agreement. To achieve this objective and to make decisions on climate change mitigation and adaptation, the UNFCCC requires systematic observations of the climate system.

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to provide an objective source of scientific information about climate change. The Synthesis Report, the final part of the IPCC's Sixth Assessment Report (AR6), released in early 2023, stated that human activities have unequivocally caused global warming, with global surface temperature reaching 1.1°C above pre-industrial levels in 2011–2020. Additionally, AR6 described Earth Observation (EO) satellite measurement techniques as relevant Earth system observation sources for climate assessments, since they now provide long time series of climate records. Monitoring climate from space is a key role of EO satellites, as they collect global, time-series information on important climate components. Essential Climate Variables (ECVs) are key parameters that describe the Earth's climate state. The measurement of ECVs provides empirical evidence of the evolution of the climate; therefore, they can be used to guide mitigation and adaptation measures, to assess risks and to enable attribution of climate events to underlying causes.

An example of an immediate and direct impact of climate change is on human exposure to high outdoor temperatures, which is associated with morbidity and an increased risk of premature death. The World Health Organisation (WHO) reports that between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from malnutrition, malaria, diarrhoea and heat stress alone. WHO data also show that almost all of the global population (99%) breathe air that exceeds WHO guideline limits. Air quality is closely linked to the Earth's climate and ecosystems globally; therefore, if no adaptation occurs, climate change and air pollution combined will exacerbate the health burden at a higher speed in the coming decades.
Therefore, this LPS25 session will include presentations that demonstrate how EO satellite insights can support current climate actions and guide the design of climate adaptation and mitigation policies to protect and ensure the health of people, animals, and ecosystems on Earth (e.g., WHO’s One Health approach).
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Signatures of Cholera Outbreak in Long-Term Seasonal Rainfall Trends and Urban Built Patterns in Chandigarh, India

Authors: Dhritiraj Sengupta, Dr Neelam Taneja, Aswin Sachidanandan, Dr Shubha Sathyendranath, Dr Gemma Kulk, Dr Anas Abdulaziz, Dr Nandini Menon, Ranith Rajamohananpillai, Dr Craig Baker Austin, Dr Nicholas Thomson, Elin Meek
Affiliations: Plymouth Marine Laboratory, Plymouth, UK, Post Graduate Institute of Medical Education and Research,, National Centre for Earth Observation, Plymouth Marine Laboratory, National Institute of Oceanography, Nansen Environmental Research Centre, Centre for Environment, Fisheries and Aquaculture Science (CEFAS),, The Wellcome Trust Sanger Institute, Wellcome Trust Genome Campus,
Cholera, caused by Vibrio cholerae, is highly sensitive to environmental factors, particularly rainfall, which influences water contamination and disease transmission. This study examines the role of long-term rainfall patterns in Cholera outbreaks in Chandigarh, India, over 21 years (2002–2023). Chandigarh, located in northern India, sits at the foothills of the Shivalik range of the Himalayas. Geographically, it lies near the Indo-Gangetic plain, with a terrain that transitions between flat fertile plains and low rolling hills. The city’s location gives it a humid subtropical climate, characterized by distinct seasonal variations, including hot summers, a monsoon season, and mild winters. Using satellite-derived rainfall data from CHIRPS: Rainfall Estimates from Rain Gauge and Satellite Observations, the research analyzes the impact of extreme seasonal variations on Cholera incidence. Chandigarh experiences an average annual rainfall of 1,100 mm, with significant peaks during the monsoon (July–September). The study identifies a strong correlation between excessive weekly rainfall, particularly when precipitation exceeds 100 mm, and spikes in Cholera cases. Urban flooding caused by heavy rainfall events contaminates water supplies and creates favorable conditions for V. cholerae proliferation, especially in areas with dense, poorly planned settlements. Such environmental conditions are closely linked to heightened transmission risk. Analysis reveals that weekly rainfall trends during monsoon weeks (22–35) explain Cholera outbreaks more effectively than annual averages. These rainfall events disrupt urban drainage systems, exacerbating waterborne disease risks. The findings underscore that the timing and intensity of rainfall anomalies, rather than cumulative rainfall, are critical predictors of Cholera outbreaks in Chandigarh. This study highlights the interplay between climatic variability and urban infrastructure in driving disease transmission. 
Increasing monsoon extremes, coupled with urban congestion, amplify the risk of Cholera outbreaks. The insights underscore the need for targeted public health interventions, such as improved drainage, access to clean water, and early warning systems. By integrating climatic data with health records, the research provides a framework for understanding and mitigating the impact of extreme weather events and congested urban planning on public health. These findings offer valuable implications for early-warning mechanisms and disease management in similar urban settings across South Asia.
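The study's key predictor — weekly rainfall totals exceeding 100 mm during monsoon weeks 22–35 — is simple enough to sketch as a screening step for an early-warning pipeline. The function below is an illustrative reconstruction from the abstract's stated thresholds, not the authors' actual analysis code:

```python
import numpy as np

def flag_outbreak_risk_weeks(daily_rain_mm, monsoon_weeks=range(22, 36),
                             threshold_mm=100.0):
    """Flag high-risk weeks from one year of daily rainfall totals.

    daily_rain_mm : array of at least 364 daily totals (weeks 1..52).
    Returns the 1-based week numbers of monsoon weeks whose summed
    rainfall exceeds the threshold the study links to Cholera spikes.
    """
    weekly = np.asarray(daily_rain_mm, dtype=float)[:364].reshape(52, 7).sum(axis=1)
    return [w for w in monsoon_weeks if weekly[w - 1] > threshold_mm]
```

Fed with CHIRPS-style gridded rainfall aggregated over a city, such a flag could trigger targeted surveillance of water supplies in flood-prone settlements during the weeks the study identifies as highest risk.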
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Leveraging earth observation for understanding forest disturbances and malaria vector ecology in the Malaysian Borneo

Authors: Edgar Manrique, Benny Obrain Manin, Clarissa Balinu, Kamruddin Ahmed, Jason Matthiopoulos, Brian Barret, Kimberly Fornace
Affiliations: School of Biodiversity, One Health and Veterinary Medicine, University of Glasgow, Borneo Medical and Health Research Centre (BMHRC), Universiti Malaysia Sabah, School of Geographical & Earth Sciences, University of Glasgow, Glasgow, Scotland, UK, Saw Swee Hock School of Public Health, National University of Singapore
Earth Observation (EO) is a vital component of One Health, linking human, animal, and environmental health through large-scale monitoring of environmental conditions affecting disease ecosystems. Malaria, a persistent global health issue, is strongly influenced by Land Use and Land Cover (LULC) changes, particularly deforestation, which increases disease risk and vector abundance. Traditional studies often focus on human settlements, neglecting mosquito interactions with diverse environments and leading to gaps in understanding transmission patterns. Effective malaria control requires data collection beyond households to include human activities in forest fringes, where mosquito-host interactions and exophagic mosquito behavior heighten transmission risk. EO technologies provide valuable insights to address these complexities and enhance vector control strategies. The most critical landscape features influencing malaria vector distribution are the availability of breeding sites and forest disturbances, such as deforestation. Optical satellite imagery often fails to detect small aquatic habitats and is limited in cloudy regions, emphasizing the need for finer spatial resolutions and complete time series. Differentiating drivers of forest disturbances, such as selective logging, forest fires, or land-use changes, is essential to understanding their environmental impact. This study integrates spaceborne SAR and optical imagery from Sentinel-1, ALOS-2, and Sentinel-2, along with drone-collected imagery, to overcome these challenges and accurately identify vector habitats and deforestation events. The approach highlights the utility of advanced satellite and drone technologies to improve environmental monitoring and malaria vector control. This study focuses on the region of Sabah in the Malaysian Borneo, where malaria transmission persists in forested regions. 
The aim is to understand how changes in forest structure impact malaria vector dynamics and develop a novel surveillance system using EO data. In Malaysia, the increase in Plasmodium knowlesi, a zoonotic malaria, is strongly linked to deforestation. The primary vector, Anopheles balabacensis, is found in various land cover types and is mostly exophagic. The study leverages Bayesian geostatistical models and machine learning algorithms to evaluate and predict mosquito abundance. Initial mosquito data from the SAFE repository (2012-2014), monthly collections using Mosquito Magnet traps from 2023 to 2025, and combined EO datasets will inform simulations used to evaluate the surveillance approach and study the spatio-temporal dynamics of Anopheles abundance and its relation to forest disturbances. The study reveals significant differences in mosquito abundance across five land cover types, emphasizing the need for detailed environmental monitoring. Anopheline mosquitoes were more abundant in pulpwood plantations and secondary forests compared to built-up areas, while oil palm plantations and primary forests showed no significant differences. Vector diversity was highest near secondary and primary forests, with pulpwood plantations showing low diversity. Spatial clustering of abundance and diversity near land-use transitions, such as forest edges and plantations, highlights the impact of landscape changes on malaria vector distribution.
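As a toy illustration of the land-cover comparison reported above, a first-pass summary of trap counts per class might look like the following. The helper name and data are hypothetical; the study itself uses Bayesian geostatistical models and machine learning, not simple group means.

```python
from collections import defaultdict

def mean_abundance_by_cover(records):
    """records: (land_cover, mosquito_count) pairs from trap collections.
    Returns the mean count per land-cover class, a crude first look
    before any formal geostatistical modelling."""
    totals = defaultdict(lambda: [0, 0])  # class -> [sum, n]
    for cover, count in records:
        totals[cover][0] += count
        totals[cover][1] += 1
    return {c: s / n for c, (s, n) in totals.items()}

traps = [("pulpwood", 12), ("pulpwood", 8),
         ("built_up", 1), ("built_up", 3),
         ("secondary_forest", 9)]
summary = mean_abundance_by_cover(traps)
```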

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Using satellite observations to improve air quality through policy relevant research

Authors: Rosa Gierens, Hubert Thieriot, Lauri
Affiliations: Centre For Research On Energy And Clean Air
Air pollution is harmful to human health, causing illness and premature death. The Centre for Research on Energy and Clean Air (CREA) is an independent research organisation that uses scientific data, research, and evidence to support the efforts of governments, companies, and campaigning organisations worldwide to move towards clean energy and clean air. Our purpose is to provide data and analysis relevant to the current decisions being made. Satellite observations are a great asset, as they provide continuous monitoring of key air pollutants on a global scale, independently of the existence of, or access to, ground-level monitoring. However, some technical skills are required to work with earth observation datasets, especially for more advanced analyses. CREA is therefore building its capacity to provide data and analysis derived from satellite data, to bring the right results to the right people at the right time. In many countries accurate air pollution emission monitoring is missing, making it difficult to develop effective policies or to enforce air pollution regulations. Furthermore, emission data is required for conducting health and environmental impact assessments. Various approaches for estimating emissions from satellite data can be found in the scientific literature. At CREA, we have implemented the flux divergence method following Beirle et al. (2023, https://doi.org/10.5194/essd-15-3051-2023) to estimate NOx emissions using TROPOMI NO2 and horizontal wind fields from ERA5. We chose the flux divergence method because it provides good accuracy for point sources, and is computationally relatively inexpensive. In some regions of interest in Southeast Asia, the low availability of TROPOMI NO2 data makes emission estimation challenging, and we are currently working on finding ways to mitigate this issue. Furthermore, we are evaluating our NOx point source emissions against other data sources from the USA, EU, Taiwan, and Australia. 
Preliminary results suggest that the agreement between the top-down and bottom-up emission estimates is in the expected range. Datasets for NOx emissions derived from satellite data already exist in the public domain, but they are often made available with considerable delay. In rapidly developing regions the timeliness of the data is particularly important. For example, in Southeast Asia, we can identify several point sources in 2023 for locations at which no NOx emissions could be detected for previous years. Our main goal is to make timely NOx emission data available to the public for the health impact assessments conducted at CREA. Furthermore, we aim to be able to identify facilities that have (not) installed pollution abatement technologies, to detect possible breaches in regulations. In the future, we plan to also use the flux divergence method for estimating point source SO2 emissions. In other work, we also utilise satellite observations to understand air pollution on a regional scale. We use readily available L3 data products for SO2, NO2, and aerosol optical depth to describe the spatial distribution of pollutants and changes over time. For the most recent example of how satellite data can provide valuable context for other analyses, see Manojkumar (2024, https://tinyurl.com/creafdg).
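The flux divergence method referenced above can be summarised as the divergence of the horizontal NO2 flux plus a lifetime sink term. The sketch below is a minimal illustration of that idea, not CREA's code; the grid handling and the assumed NOx lifetime value are illustrative only.

```python
import numpy as np

def flux_divergence(vcd, u, v, dx, dy, tau_s=14400.0):
    """Steady-state NOx emission proxy in the spirit of the flux-divergence
    method: E = d(V*u)/dx + d(V*v)/dy + V/tau, where V is the NO2 vertical
    column (e.g. mol/m^2), (u, v) the horizontal wind (m/s), dx/dy the grid
    spacing (m), and tau an assumed NOx lifetime (s). Illustrative only."""
    fx = vcd * u                                  # zonal flux
    fy = vcd * v                                  # meridional flux
    div = np.gradient(fx, dx, axis=1) + np.gradient(fy, dy, axis=0)
    return div + vcd / tau_s                      # divergence + sink term

# Uniform column, wind increasing eastward: emission = V*du/dx + V/tau
vcd = np.full((3, 3), 1e-5)
u = np.tile(np.array([0.0, 1.0, 2.0]), (3, 1))
E = flux_divergence(vcd, u, np.zeros((3, 3)), dx=1000.0, dy=1000.0)
```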

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Temporal modeling of surface water bacteriological quality and diarrheal diseases in West Africa using remote sensing and machine learning methods

Authors: Marc-Antoine MANT, Elodie Robert, Edwige Nikiema, Manuela Grippa, Laurent Kergoat, Moussa Boubacar Moussa, Beatriz Funatsu, Javier Perez-Saez, Emma Rochelle-Newall, Marc Robin
Affiliations: LETG - CNRS - Nantes Université, GET, Université Toulouse III, CNRS, IRD, CNES, IRD - iEES-P, Université Joseph Ki-Zerbo, Hôpitaux universitaires de Genève
In 2021, diarrheal diseases were responsible for around 1.17 million deaths worldwide (GBD, 2024). Sub-Saharan Africa is one of the most impacted regions. In 2024, they caused some 440,000 deaths in this region. This high mortality rate can be explained by 1) significant bacteriological pollution of surface waters by pathogenic micro-organisms responsible for diarrheal diseases (E. coli, Salmonella spp., Shigella spp., etc.), 2) high concentration of suspended solids which provides a substrate and refuge for these pathogens, 3) widespread use of untreated water for domestic, washing and horticultural purposes, and 4) lack of sanitation and community health infrastructures. In addition, the social and political insecurity and major demographic changes facing sub-Saharan Africa make access to drinking water and healthcare difficult for part of the population. Finally, ongoing climate change is likely to have a negative impact on water resources, both in terms of quantity and quality, and potentially increase the presence, dissemination and transmission of pathogens. Indeed, climate change is expected to increase the relative risk of diarrheal disease in tropical and subtropical regions from 22% to 29% by 2070-2099 (Kolstad & Johansson, 2011). Tele-epidemiology, i.e. the combination of satellite observations and epidemiology, is a powerful tool for studying climate-environment-health relationships and for understanding and predicting the spatio-temporal distribution of pathogens and diseases through the use of satellite and in-situ data. We aim to use satellite and in-situ data to indirectly monitor water quality and reveal environmental factors conducive to the emergence of critical health situations by modeling the dynamics of E. coli and cases of diarrhea in West Africa. E. coli is considered the best indicator of faecal contamination (IFC) in temperate zones, and is recommended by the World Health Organization as a proxy for assessment of water contamination and the associated risk of diarrheal disease. In Burkina Faso, Robert et al. (2021) demonstrated a significant correlation between E. coli, intestinal enterococci and cases of diarrhea. E. coli therefore appears to be a good IFC in West Africa, and would be relevant for predicting diarrheal diseases. The first objective is to study the links that exist between E. coli concentration in water and certain environmental parameters 1) measured in-situ in Burkina Faso, specifically in the Bagré reservoir, from 2018 to 2024 (dissolved oxygen, concentration of suspended particulate matter - SPM, water conductivity, water temperature, particulate organic carbon - POC, dissolved organic carbon - DOC etc.), 2) measurable by satellite (NDVI or surface water reflectances measured by Sentinel-2 to inverse SPM, POC) or 3) estimable by satellite algorithm (precipitation estimated by IMERG, hydrometeorological parameters estimated by GLDAS - specific air humidity, soil moisture, surface runoff, etc.). We also investigate the relationships between these environmental parameters (in-situ, remote sensing and earth observation data) and the number of cases of diarrheal diseases (data obtained from three health centers in the area). We then model the concentration of E. coli in the Bagré reservoir and cases of diarrheal disease over several years, first using these key environmental parameters, and then using satellite data only, to study the robustness of the models. Random Forest and Gradient Boosting regression trees were used for modeling. These machine learning algorithms learn the non-linear relationships between variables and then estimate the variable to be explained (E. coli and cases of diarrhea) on an unseen dataset. 
The best model (Random Forest) revealed that the dynamics of E. coli in the Bagré reservoir depend mainly on POC, SPM and air humidity, which are parameters that can be derived by satellite. The model showed an R² of 0.84 (RMSE 0.4 log10 MPN/100mL) using in-situ and satellite data, and an R² of 0.62 with satellite data only (RMSE 0.62 log10 MPN/100mL). Concerning the cases of diarrhea, the first PLS model, applied to one year of monthly data, showed an R² of 0.81 using in-situ and satellite data, and an R² of 0.76 with satellite data only (Robert et al. 2021). The challenge now is to test machine learning methods on 7 years at a finer time step (daily or weekly) and with more types of data to obtain a finer temporal prediction. This work will allow the creation of health hazard indices that can be used by public health actors, firstly in West Africa without the need to collect data in the field, and then more generally for other sites facing similar public health problems.
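The two skill scores quoted above follow the standard definitions; a minimal sketch of both, assuming log10-transformed concentrations as inputs:

```python
import math

def r2_and_rmse(y_true, y_pred):
    """Coefficient of determination and root-mean-square error, the two
    metrics used to compare model skill (here in log10 MPN/100 mL)."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)

# A perfect prediction gives R^2 = 1 and RMSE = 0
perfect = r2_and_rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```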

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Survey on sanitation and microbial pollution for assessment of risk from climate change and water-borne diseases - case study from Kerala, India

Authors: Dr Nandini Menon, Ranith Rajamohananpillai, Farzana Harris, Vishal Vasavan, Dr Anas Abdulaziz, Jasmin Chekidhenkuzhiyil, Grinson George, Gemma Kulk, Dr Shubha Sathyendranath
Affiliations: Nansen Environmental Research Centre India, CSIR-National Institute of Oceanography, ICAR-Central Marine Fisheries Research Institute, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre of Earth Observation, Plymouth Marine Laboratory
Climate change and associated extreme events pose risks to public health through disruption of sanitation facilities and hygiene practices, especially in low-lying coastal and inland regions that are prone to flooding due to rising sea levels, storm surges and intense precipitation. The impacts of these challenges on sanitation, microbial water quality and hygiene factors were investigated in the districts adjoining a large water body, Vembanad Lake, as well as in selected coastal areas of the Arabian Sea, in the state of Kerala, India. A digital household survey on sanitation, hygiene and disease prevalence was conducted using the mobile application ‘CLEANSE’, whereas the microbial and physical quality of drinking water was assessed using another mobile application ‘Aquadip’ from about 500 households in the study area. The results showed that not only extreme weather events, but even spring tides flood a majority of the houses in the study area, contaminating surface water and sewage-disposal systems. The microbial quality of the water correlated significantly with the prevalence of waterborne diseases such as diarrhoea. The vulnerability of communities to risk of waterborne diseases was assessed using an Analytical Hierarchy Process (AHP), employing nine variables that significantly influenced sanitation and hygiene. The results indicated that open defaecation, source of water used for household activities, proximity of drinking water source to septic tanks, improper management of sewage and wastewater, and disposal of solid wastes were the major factors contributing to water contamination and poor health. Around 5% of surveyed households lacked proper septic tank systems, resulting in the discharge of untreated sewage and pollutants into the drains, backwater and coastal sea. Presence of rodents was high in the areas contaminated with solid waste. 
Though most of the participants of the survey reported minimal occurrences of cholera or leptospirosis, those with drinking water heavily contaminated with faecal indicator bacteria (i.e., too numerous to count with the most probable number method) often had disease symptoms, including diarrhoea and vomiting. In the absence of clinical data, it is difficult to pinpoint the source of the symptoms, but such discrepancies highlight a disconnect between the perception and the reality of risk, underscoring the need for regionally targeted awareness campaigns, since existing risks could be reduced by changing the habits and behaviour of the affected population. Spatial risk maps produced using the results of the study identify areas that are vulnerable or at risk of waterborne diseases. Climate change adaptation and climate change mitigation strategies should also go hand in hand to limit the complex disease dynamics.
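AHP priority weights such as those used above are commonly approximated from a reciprocal pairwise-comparison matrix. The sketch below uses the row geometric-mean method with a toy three-variable matrix standing in for the study's nine variables; the matrix values are invented for illustration.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix (Saaty scale) via row geometric means."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Toy matrix: variable A twice as important as B, four times C; B twice C.
m = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = ahp_weights(m)  # weights sum to 1, ordered A > B > C
```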

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: A remote sensing assessment of floating macrophyte cover dynamics in Lake Vembanad, India

Authors: Emma Sullivan, Thomas Jackson, Gemma Kulk, Dr Nandini Menon, Jasmin C, R Ranith, Dr Shubha Sathyendranath, Dhriti Sengupta, Varunan Theenathayalan, Victor Martinez Vicente
Affiliations: Plymouth Marine Laboratory, National Centre of Earth Observation, EUMETSAT Climate Services Group, Nansen Environmental Research Centre, Environment and Climate Change Canada, Canada Centre for Inland Waters
Invasive floating vegetation, particularly water hyacinth (Eichhornia crassipes), has become an environmental and socioeconomic challenge in many water bodies in India. These macrophytes form dense interlocking mats that can spread quickly, adversely impacting ecosystems, livelihoods, and human health. In the Vembanad-Kol wetland system in Kerala, southern India, water hyacinth has become a major environmental problem. The Vembanad-Kol wetland system is India’s second-largest Ramsar site, an area of wetland designated to be of international importance. The region’s waterways are an essential part of daily life, serving as transportation routes and supporting livelihoods such as fishing, tourism, and paddy cultivation. However, the spread of water hyacinth can block these waterways, causing disruption to incomes and requiring labour-intensive and expensive removal operations. Water hyacinth may also pose public health risks by degrading water quality and creating conditions favourable for harbouring disease vectors. Understanding the spatial and temporal dynamics of floating vegetation is the foundation for exploring the possible drivers dictating its distribution, links with environmental and human health, and potential management interventions. However, monitoring has traditionally used field-based measurements which are time-consuming and spatially limited by site accessibility and resources. In large aquatic systems such as Lake Vembanad (>2000 km²), regular assessment using field surveys alone is not feasible for logistical and economic reasons. Consequently, little information is available on the distribution and temporal changes in floating vegetation over long periods and at the scale of the entire wetland system. Remote sensing is a uniquely capable tool for mapping invasive plant species as it can provide information over large areas at regular intervals at a lower cost than intensive field surveys. 
In this work, we use the satellite data from Sentinel-2 to assess the temporal and spatial dynamics of floating macrophytes over Lake Vembanad from 2016 to mid 2024. This study demonstrates how Sentinel-2 data can be used to describe the temporal and spatial dynamics of floating macrophytes at an internationally important wetland site. The Floating Algae Index (Hu, 2009) was used to separate water and possible vegetation cover. Next, a scaling approach was used to calculate the percent cover of floating vegetation for each pixel. Estimates of vegetation cover from Sentinel-2 data were validated using in situ observations and high-resolution WorldView satellite data. Monthly composites were used to explore the phenology of, and trends in, the floating vegetation in Lake Vembanad as a whole, and also separately for the northern and southern parts of the lake, which have different hydrodynamic regimes. Climatological patterns in lake vegetation cover were compared with ancillary environmental data to explore potential drivers of floating vegetation proliferation. Initial results suggest there has been a significant increase in floating vegetation cover over the study period. The analysis highlights particular hotspots of vegetation accumulation, and demonstrates a clear seasonality, associated with changes in salinity, in floating macrophyte cover which varies between lake regions. In situ monitoring of water hyacinth indicates that the root system can harbour pathogenic bacteria and larvae of mosquitoes and gastropods, which are vectors of many diseases, pointing to a health implication associated with the spread of water hyacinth in the region. The data and analysis produced from this study can help future work to model floating vegetation distributions, investigate potential connections to human health, and inform management decisions on possible interventions. 
Ongoing efforts to acquire in situ data will continue to support product validation as the record extends into the future. We also hope that future studies might build on this work to explore the feasibility of identifying the dominant species of floating macrophytes with the current sensor technology.
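The Floating Algae Index used in this work is the NIR reflectance minus a baseline linearly interpolated between the red and SWIR bands (Hu, 2009). A minimal sketch, assuming Sentinel-2 bands B4/B8/B11 (665/842/1610 nm) as the red/NIR/SWIR choice; the exact band selection in the study may differ.

```python
def floating_algae_index(r_red, r_nir, r_swir,
                         lam_red=665.0, lam_nir=842.0, lam_swir=1610.0):
    """FAI = NIR reflectance minus a baseline interpolated between the
    red and SWIR bands at the NIR wavelength. Positive values indicate
    floating vegetation; open water is typically negative."""
    baseline = r_red + (r_swir - r_red) * (lam_nir - lam_red) / (lam_swir - lam_red)
    return r_nir - baseline

# Dense floating vegetation: strong NIR plateau, FAI well above zero
veg = floating_algae_index(0.05, 0.40, 0.20)
# Open water: reflectance decreasing into the infrared, FAI below zero
water = floating_algae_index(0.02, 0.01, 0.005)
```

Per-pixel percent cover could then be scaled between water and full-vegetation FAI endmembers, as the abstract describes.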

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Session: B.04.01 Satellite based terrain motion mapping for better understanding geohazards. - PART 2

Better understanding geohazards (such as landslides, earthquakes, volcanic unrest and eruptions, coastal lowland hazards and inactive mine hazards) requires measuring terrain motion in space and time, including at high resolution, with multi-year historical analysis and continuous monitoring. Several EO techniques can contribute, depending on the context and the type of deformation phenomena considered, and some can provide wide-area mapping (e.g. thanks to Sentinel-1). Advanced InSAR or pixel offset tracking using radar imagery, including newly available missions with different sensing frequencies (e.g. L-band), can help provide relevant geoinformation. This is also the case with optical stereo-viewing and optical correlation techniques, including for wide-area mapping. There is a need to assess new EO techniques to retrieve such geoinformation both locally and over wide areas, and to characterise their limitations. New processing environments able to access and process large data stacks have increased user awareness, acceptance and adoption of EO, and have created opportunities for collaboration, including co-development and increased combination of data sources and processing chains. With this in mind, it is necessary to understand the agenda of geohazard user communities and the barriers to reaching their goals.

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Enhancing the P-SBAS Processing Chain for L-Band DInSAR Time Series Retrieval: Insights from the SAOCOM-1 Constellation

Authors: Marianna Franzese, Ph.D Claudio De Luca, Yenni Lorena Belen Roa, Ph.D Manuela Bonano, Ph.D Francesco Casu, Ph.D Pablo Euillades, Ph.D Leonardo Euillades, Ph.D Michele Manunta, Ph.D Muhammad Yasir, Ph.D Giovanni Onorato, Ph.D Pasquale Striano, Ph.D Luigi Dini, Dr Deodato Tapete, Ph.D Riccardo Lanari
Affiliations: Istituto per il Rilevamento Elettromagnetico dell'Ambiente (IREA), CNR, Università degli Studi di Napoli Federico II, Istituto per il Rilevamento Elettromagnetico dell'Ambiente (IREA), CNR, Conicet, Instituto CEDIAC, Facultad de Ingenierìa, Universidad Nac de Cuyo, Italian Space Agency (ASI)
In the current Earth Observation scenario, the Differential Synthetic Aperture Radar Interferometry (DInSAR) technique is widely recognized for investigating the surface displacements affecting large areas of the Earth's surface with high accuracy. Such a technique is particularly useful in both natural and anthropogenic hazard scenarios, thanks to its capability to retrieve ground displacements with centimeter (in some cases millimeter) accuracy at rather limited costs. Originally developed to analyze single deformation episodes such as an earthquake or a volcanic unrest event, the DInSAR methods are also capable of investigating the temporal evolution of surface deformations. Indeed, the so-called Advanced DInSAR techniques properly combine the information available from a set of multi-temporal interferograms relevant to an area of interest, in order to compute the corresponding deformation time series. Among several Advanced DInSAR algorithms, a widely used approach is the one referred to as the Small BAseline Subset (SBAS) technique, whose computationally efficient algorithmic solution is referred to as the Parallel Small BAseline Subset (P-SBAS) approach. However, the effectiveness of the DInSAR technique can be limited by temporal decorrelation phenomena, which arise from changes in the electromagnetic response of the imaged scene over time. In this context, low-frequency SAR sensors, such as those operating at the L-band, characterized by a significantly longer wavelength (~23 cm) compared to the X-band (~3 cm) and C-band (~5.6 cm) ones, are well-suited to mitigate the temporal decorrelation effects, thanks to their capacity to maintain interferometric coherence over long periods in vegetated areas and, in some cases, in snow/ice-covered zones. 
Moreover, the long wavelength makes the L-band highly effective for detecting, assessing, and, in some cases, monitoring rapid or large deformations associated with various geohazards, including landslides, earthquakes, and volcanic unrest. The crucial role of L-band systems in hazard scenarios is pushing space agencies worldwide to invest in the development of this technology. The forthcoming missions ROSE-L (developed by ESA), NISAR (a joint NASA-ISRO project), as well as the already operative SAOCOM-1, ALOS-2, ALOS-4, and Lutan-1, highlight the growing importance of L-band SAR sensors, particularly for interferometric applications. The presented work focuses on extending the original P-SBAS interferometric processing chain to handle L-band SAR image time series, particularly those acquired by the Argentinean SAOCOM-1 constellation, which consists of two twin, full-polarimetric L-band SAR satellites operating in the same orbits as the COSMO-SkyMed first and second generations (CSK and CSG, respectively). In particular, we present several improvements of the available P-SBAS processing chain to expand its monitoring capability beyond the exploitation of X-band (e.g., CSK-CSG and TerraSAR-X) and C-band (e.g., ERS-1/ERS-2, ENVISAT, RADARSAT-1/2) StripMap SAR images, enabling the analysis of L-band data from the SAOCOM-1 constellation. The algorithmic advancements focus on the merging of adjacent Single Look Complex (SLC) image slices and the improvement of the quality of the orbital information. Furthermore, an advanced method to estimate and correct ionospheric effects, which are particularly pronounced in L-band SAR datasets, has been developed. Regarding the generation of the Area of Interest (AoI) SLC image, it is important to note that SAOCOM-1 images are provided as “slices” with a typical azimuth extension of approximately 70 to 100 km. 
Consequently, particularly for large scale DInSAR analysis, these slices must be properly merged into a single SLC image corresponding to the AoI. While slice merging is an ordinary and common procedure in DInSAR applications, it presents specific challenges when working with SAOCOM-1 SLC data. To address these aspects, two key steps were included in the P-SBAS processing chain: - Slice resampling on a common temporal grid; - Phase shift estimation and compensation. The low accuracy of the SAOCOM-1 state-vector information results in an incorrect estimation of the topographic phase component during DInSAR interferogram generation and therefore introduces artefacts in the interferometric phase (which, to first order, can be represented as a phase ramp) that may significantly degrade the quality of the DInSAR products if no appropriate correction is applied. To address orbital phase artifacts in SAOCOM-1 data, a two-step correction process is applied, which benefits from the redundancy of the generated SBAS interferograms and retrieves an orbit correction for each SAR acquisition of the exploited dataset. Finally, this work also aims to highlight the importance of detecting and mitigating ionospheric disturbances, which can cause distortions in L-band SAR images, particularly in the Earth's low- and high-latitude regions due to intense solar radiation and geomagnetic interactions. The presence of ionospheric effects in SAR images can significantly impact both the phase and amplitude of the acquired data, compromising the accuracy of ground displacement measurements obtained through DInSAR, Multi-Aperture Interferometry (MAI), and Pixel Offset Tracking (POT) methods. It is important to emphasize that the proposed algorithmic solutions may play a very relevant role in light of the upcoming availability of new L-band satellite SAR systems for geohazards analysis and monitoring.
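A first-order orbital artefact of the kind described, a phase ramp, can be estimated and removed by least-squares plane fitting. The following is a minimal sketch of that idea, not the P-SBAS implementation, and it assumes an already-unwrapped phase.

```python
import numpy as np

def remove_phase_ramp(phase):
    """Fit and subtract a best-fit plane a*x + b*y + c from an unwrapped
    interferogram, as a first-order model of orbit-induced artefacts."""
    ny, nx = phase.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Design matrix for the plane: columns [x, y, 1]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(phase.size)])
    coeffs, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
    ramp = (A @ coeffs).reshape(ny, nx)
    return phase - ramp

# A pure synthetic ramp is removed almost completely
y, x = np.mgrid[0:8, 0:8]
residual = remove_phase_ramp(0.01 * x + 0.02 * y + 1.0)
```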

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Operationalisation of Satellite Interferometry for Geotechnical Monitoring During Subway Construction in Prague

Authors: Jan Kolomaznik, Ph.D. Ivana Hlavacova, Ph.D. Juraj Struhar
Affiliations: Gisat s.r.o.
A new subway line is under construction in Prague, Czechia, traversing from the city’s peripheral regions to its central districts through geologically complex and challenging conditions. The line frequently intersects stratigraphic layers and boundaries of heterogeneous geological units and aquifers of varying ages and mechanical properties. Furthermore, the tunnel tubes and stations are situated at varying depths, resulting in diverse deformation patterns and intensities that influence the surrounding buildings and infrastructure. As part of the comprehensive geotechnical monitoring for the Metro Line D route between Pankrác and Olbrachtova stations, GISAT is conducting operational surface deformation monitoring through satellite radar interferometry (InSAR). Interferometric measurements from TerraSAR-X/PAZ and Sentinel-1 satellites are being analysed using a customised Persistent Scatterer Interferometry (PS-InSAR) algorithm to detect temporal changes in displacement. The monitoring is organised into several stages, beginning with a retrospective “passportization stage,” which leverages archived SAR datasets to establish baseline conditions. Subsequent stages adaptively modify the stage duration and frequency of satellite acquisitions to capture evolving deformation patterns during construction. Each monitoring phase yields thousands of temporally persistent scatterer (t-PS) measurements, complementing conventional geodetic monitoring conducted at over 900 stabilised points. This dual approach enhances spatial coverage and facilitates the identification of deformation anomalies beyond anticipated impact zones. The resulting deformation maps capture the chronology and patterns of surface and building movement with high precision and synoptic spatial coverage, without requiring point stabilisation. The observed tunnel-induced displacement patterns exhibit significant spatial heterogeneity and temporal nonlinearity. 
In areas of heightened subsidence risk, subsurface concrete injections frequently induce temporary uplift, reversing surface motion trends. This spatial and temporal variability in displacement subjects structures to complex and differential strain forces, with potential implications for structural integrity. Some of the detected subsidence is attributable to tunnelling activity indirectly; hydrological effects, such as aquifer drainage, contribute to displacement phenomena in areas distant from construction zones. The nonlinear characteristics of displacement trends present challenges for InSAR analysis, particularly in X-band data, where the wavelength is commensurate with the magnitude of deformation, leading to phase unwrapping errors. These errors compromise spatial interpretation, elevate noise levels, and diminish the reliability of results. To address these challenges, GISAT has introduced innovative enhancements to the PS-InSAR methodology, including advanced segmentation of time series data considering statistically significant differences in displacement velocities or noise levels. Segments classified as unreliable due to noise or unwrapping artefacts are excluded from interpretation. Additional insights are derived from complementary C-band sensors to enhance robustness. Validation confirms strong agreement between displacement trends measured through InSAR and conventional geotechnical methods.
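The time-series segmentation mentioned above, splitting a displacement record where the velocity changes significantly, can be illustrated with a toy two-segment least-squares split. The function name and the single-breakpoint restriction are illustrative; the operational method handles noise classification and multiple segments.

```python
import numpy as np

def best_breakpoint(t, d, min_seg=3):
    """Return the index that best splits a displacement time series (t, d)
    into two linear segments, by minimising the combined least-squares
    residual of the two per-segment line fits."""
    def sse(ts, ds):
        p = np.polyfit(ts, ds, 1)          # per-segment linear velocity fit
        r = ds - np.polyval(p, ts)
        return float(r @ r)
    best_k, best_cost = None, np.inf
    for k in range(min_seg, len(t) - min_seg):
        cost = sse(t[:k], d[:k]) + sse(t[k:], d[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Stable point that starts subsiding at t = 10
t = np.arange(20.0)
d = np.where(t < 10, 0.0, -(t - 10))
k = best_breakpoint(t, d)
```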

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Enhancing the understanding of present-day and future urban subsidence risk in Italy based on multi-scale satellite InSAR workflows and advanced modelling

Authors: Dr Francesca Cigna, Dr Roberta Bonì, Prof Pietro Teatini, Dr Roberta Paranunzio, Dr Claudia Zoccarato
Affiliations: National Research Council (CNR) - Institute of Atmospheric Sciences and Climate (ISAC), University School for Advanced Studies (IUSS) - Department of Science, Technology and Society (STS), University of Padua (UNIPD) - Department of Civil, Environmental and Architectural Engineering (ICEA), National Research Council (CNR) - Institute of Atmospheric Sciences and Climate (ISAC)
Italy is among the countries with the largest estimated groundwater extractions worldwide. When groundwater withdrawal and natural discharge exceed recharge rates, aquifer systems are over-exploited, resulting in resource depletion, storage loss and compaction of confining clay beds. The induced land subsidence may cause direct and indirect impacts on urban landscapes (ground depressions, earth fissures, damage to structures, increased flood risk, loss of land to water bodies) and economic loss. High to very high subsidence susceptibility and hazard levels characterize many Italian regions, and a number of subsidence hotspots have been observed using satellite Interferometric Synthetic Aperture Radar (InSAR) methods, such as the Po River and Florence-Prato-Pistoia plains. The novel project SubRISK+ (https://www.subrisk.eu) innovates in this field by providing new EO-derived products and tools that aim to enhance the understanding of subsidence risk in major urban areas of Italy, towards sustainable use of groundwater resources and sustainable urban development. The project is assessing current and future subsidence risk in Italy using a multi-scale methodology, with implementation spanning from the national to the local scale. Risk is estimated with matrix-based risk assessment approaches, embedding InSAR-derived ground deformation observations (e.g. Copernicus' European Ground Motion Service, EGMS) and hydrogeological, topographic and land use data. Hazard levels are estimated by computing the angular distortion and horizontal strain induced on urban infrastructure by differential settlement, as derived from InSAR datasets. Exposure and vulnerability are assessed based on the type and height of urban infrastructure, and geospatially combined with hazard information via a risk matrix to derive risk levels from very low to very high (e.g. R1 to R4). Statistics on the population living within the various risk categories are finally extracted.
At the regional scale, accurate detection of hotspots and drivers is enabled by implementing advanced geostatistics and exploiting time series analysis tools, including Independent Component Analysis (ICA) and Optimized Hot Spot Analysis. An advanced numerical model coupling 3D transient groundwater flow and the geomechanics of heterogeneous aquifer systems is also being developed to quantify the effects of groundwater usage on land deformation and to estimate uncertainties at the local scale in a highly vulnerable city. The output from the groundwater flow model serves as input to the deformation model, utilizing the same computational grid and distribution of mechanical parameters to create a consistent flow-deformation model. A calibration procedure is implemented in which uncertainties associated with available piezometric records, InSAR measurements and the modelling approach are integrated to estimate the parameters of both the groundwater flow and deformation models. A tailored socio-economic impact analysis is being developed to quantify market and non-market, direct and indirect losses at national, regional and local scales, based on the affected areas' exposure, vulnerability and resilience. Future subsidence risk by 2050 and 2100 under climate change (RCP4.5/8.5, medium/high emissions), demographic change and urban development is assessed for the metropolitan cities and, locally, by adapting the numerical model to support long-term risk predictions under different scenarios. Predictions of future land use changes using socioeconomic and environmental parameters will contribute to an integrated, indicator-based approach at city scale that will enable assessment of urbanization patterns and identification of areas prone to future subsidence. The results for the 15 metropolitan cities of Italy, the whole Emilia-Romagna region and the city of Bologna will showcase the potential of the developed methodology and its benefits for informing water resource management and decision making.
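As a toy version of the hazard-to-risk chain described above: angular distortion is computed from differential settlement between neighbouring InSAR points, binned into a hazard class, and combined with a vulnerability class through a matrix lookup. The thresholds and the matrix below are placeholders, not the SubRISK+ values:

```python
import numpy as np

def angular_distortion(settlement_mm, distance_m):
    """Angular distortion (dimensionless): differential settlement between
    two neighbouring points divided by their horizontal separation."""
    return abs(settlement_mm[1] - settlement_mm[0]) / 1000.0 / distance_m

def hazard_class(beta):
    """Bin angular distortion into hazard classes H1..H4 (placeholder limits;
    serviceability limits in the literature are often of order 1/500)."""
    for h, limit in enumerate([1 / 2000, 1 / 1000, 1 / 500], start=1):
        if beta < limit:
            return h
    return 4

# placeholder 4x4 risk matrix: hazard rows x vulnerability columns -> R1..R4
RISK = [[1, 1, 2, 2],
        [1, 2, 2, 3],
        [2, 2, 3, 4],
        [2, 3, 4, 4]]

def risk_level(hazard, vulnerability):
    """Map 1-based hazard and vulnerability classes to a risk class R1..R4."""
    return f"R{RISK[hazard - 1][vulnerability - 1]}"

beta = angular_distortion(np.array([0.0, -12.0]), 8.0)  # 12 mm over 8 m
level = risk_level(hazard_class(beta), 2)               # vulnerability class 2
```

Population statistics per risk class would then follow from intersecting the resulting risk polygons with census data, as the abstract outlines.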
This work is funded by the European Union – Next Generation EU, component M4C2, in the framework of the Research Projects of Significant National Interest (PRIN) 2022 National Recovery and Resilience Plan (PNRR) Call, project SubRISK+ (grant id. P20222NW3E), 2023-2025 (CUP B53D23033400001).

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Free and Open-Access OPERA Surface Motion Data over North America: A Geohazard Perspective

Authors: David Bekaert, Scott Staniewicz, Grace Bato, Sara Mirzaee, Simran Sangha, Talib Cabrera, Jin-Woo Kim, Piyush Agram, Marin Govorcin, Geoffrey Gunter, Alexander Handwerger, Matthew Calef, Heresh Fattahi, Zhong Lu, Batuhan Osmanoglu, Elodie Macorps, Steven Chan, Emre Havazli
Affiliations: Jet Propulsion Laboratory/ Caltech, Southern Methodist University, Goddard Space Flight Center, Descartes Lab
Remote sensing satellites provide key data that can be used to better understand the Earth, respond to disaster events, and inform decisions on pressing climate and environmental issues. For decades, many space agencies have provided high-quality remote sensing data free of charge to their end-users. Although these data have been accessible and widely used, the raw remote sensing measurements can be challenging to use and analyze, particularly for non-specialists. Thus, projects such as the European Ground Motion Service (EGMS) and the NASA JPL Observational Products for End Users from Remote Sensing Analysis (OPERA) leverage state-of-the-art processing environments to create Earth observation products at continental scales spanning Europe and North America, respectively. The OPERA project (https://www.jpl.nasa.gov/go/opera/) is processing multiple streams of data from optical (Landsat 8, Sentinel-2 constellation) and SAR-based (Sentinel-1 constellation, NISAR) missions to produce higher-level Analysis Ready Datasets (ARD). These datasets aim to address the remote sensing needs of US Federal agencies under the Satellite Needs Working Group (SNWG). OPERA products relevant to the theme of the session "satellite based terrain motion mapping" include a coregistered and geocoded SLC product suite, a surface displacement product suite, and a vertical land motion product suite from both Sentinel-1 and NISAR. With a geographical focus on North America, a temporal record spanning the complete sensor record, and product updates made as new data are acquired, the products enable broad applicability for mapping and monitoring of geohazards. OPERA's stakeholder engagement program is used to interact with end-users, ease co-development and, importantly, facilitate adoption by stakeholders through focused one-on-one engagements and workshops.
We will present an overview of the salient product features, with illustrations for geohazards in the context of stakeholder applications such as wildfires, landslides, subsidence and volcanoes, and share experiences from our stakeholder interactions.

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Classification of Ground Deformation Phenomena at Continental Scale from European Ground Motion Service Data

Authors: Riccardo Palamà, María Cuevas-González, Anna Barra, José Antonio Navarro, Oriol Monserrat, Michele Crosetto
Affiliations: Centre Tecnologic De Telecomunicacions De Catalunya (CTTC)
This work proposes a supervised classification of ground motion (GM) phenomena using the European Ground Motion Service (EGMS) datasets as its main input. The availability of such an extensive dataset enables wide-area tools to detect and classify GM phenomena, helping potential users evaluate hazards and mitigate risks. This work implements a wide-area ground motion classifier (GMC) that categorizes areas affected by GM phenomena into four main classes: slow-moving slope deformation phenomena (mostly represented by deep-seated gravitational slope deformation, DSGSD), landslides, subsidence and uplift. The classification is preceded by the identification of active deformation areas (ADAs) through the ADAfinder tool [2], which yields a European map of ADA polygons. ADAs that are detected in the same area by both ascending and descending Sentinel-1 orbits are merged; otherwise they are processed separately. In this way, we provide a tool that maximises, where possible, the information associated with the Sentinel-1 data at both orbit trajectories. The training dataset for the GMC is obtained by matching the European ADA map with ground truth/labelling data. The labels for the two landslide-related classes are obtained by matching the unlabelled data with the polygons of the Italian National Landslide Inventory [3], whereas the labels for the subsidence class are given by the subsidence map of the Emilia-Romagna region (Italy) [4]. Finally, the uplift class labels are obtained from known uplift areas across Europe, e.g. the dewatering areas in the United Kingdom [5]. The ADAs that remain unlabelled at this stage form the dataset whose classification is the main goal of this work: a model is trained and then deployed to obtain a European map of ground motion phenomena. The supervised ground motion classifier is implemented through the Extreme Gradient Boosting (XGB) technique.
XGB belongs to the ensemble learning family and is used in various applications due to its good performance, versatility, and ability to cope with missing values. In this work, the CatBoost implementation [6] was chosen due to its better performance. The classifier employs spatio-temporal features extracted, for each ADA polygon, from different data sources: the EGMS PS-InSAR data (e.g. mean velocity, acceleration, seasonality, temporal coherence of the PS displacement values), the CORINE Land Cover map, a Digital Elevation Model (DEM) and its derived terrain attributes (local slope and aspect). The performance of the implemented model is demonstrated by high metric values (accuracy of 92%, precision of 91%, F1-score of 88% and recall of 86%). It should be noted that lower recall values are associated with the DSGSD class (64%), which shares properties with the landslide class. Furthermore, to explore the explainability of the classification algorithm, we have evaluated feature importance (FI) values, revealing that the most relevant features are the DEM (FI ~22%), slope (FI ~20%) and the mean velocity of the PS displacement time series (FI ~18% for both trajectories). The trained models are deployed to classify the unlabelled ADA polygons over the whole territory covered by the EGMS, producing a European map of classified ground motion data. Furthermore, a softmax operator is added to the final stage of the deployed classifier to provide the probability that an ADA polygon belongs to each of the considered ground motion classes. This allows the confidence of the classification to be evaluated, since higher probability values are associated with higher confidence in the classification.
On the other hand, lower probability values for the dominant class mean that the classification is less reliable, which may be due to mixed properties between two or more GM classes, or may indicate that the ADA is associated with a deformation phenomenon not considered in this work. It should be noted that the four GM classes employed represent a good portion of known deformation phenomena at local scale, but the presence of deformation phenomena of a different nature (e.g. seasonal deformation), whose labelling is not always feasible, lowers the classification probability values. The European map of classified ground motion phenomena has been validated using available independent datasets in the north-west of Italy (Piemonte and Valle d'Aosta regions) and in Spain, revealing good agreement with the existing datasets. The obtained map will be published through a Web Mapping Service (WMS), made available online for a wide audience including final users (e.g. geologists and civil protection entities) and developers wishing to train their algorithms on labelled GM data. This work has been funded by the European Space Agency under the Living Planet Fellowship awarded to Riccardo Palamà, with the project titled "Wide-Area Sentinel-1 Deformation Classification for Advanced Data Exploitation".
References
[1] M. Crosetto, L. Solari, M. Mróz, et al. (2020). The evolution of wide-area DInSAR: From regional and national services to the European Ground Motion Service. Remote Sensing, 12(12).
[2] A. Barra, L. Solari, M. Béjar-Pizarro, et al. (2017). A methodology to detect and update active deformation areas based on Sentinel-1 SAR images. Remote Sensing, 9, 1002.
[3] A. Trigila et al. (2010). Quality assessment of the Italian Landslide Inventory using GIS processing. Landslides, 7, 455-470.
[4] G. Bitelli, F. Bonsignore, I. Pellegrino, L. Vittuari (2015). Evolution of the techniques for subsidence monitoring at regional scale: the case of Emilia-Romagna region (Italy). Proceedings of the Ninth International Symposium on Land Subsidence, Nagoya (Japan), November 15-19, IAHS, 92, 1-7.
[5] N. Anantrasirichai et al. (2021). Detecting Ground Deformation in the Built Environment Using Sparse Satellite InSAR Data With a Convolutional Neural Network. IEEE Transactions on Geoscience and Remote Sensing, 59(4), 2940-2950. doi: 10.1109/TGRS.2020.3018315.
[6] L. Prokhorenkova, G. Gusev, A. Vorobev, A. V. Dorogush, A. Gulin (2018). CatBoost: unbiased boosting with categorical features. Proceedings of the 32nd International Conference on Neural Information Processing Systems, 6639-6649.

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Surface Deformation and Micro-Seismic Activity Driven by Groundwater Level Changes at the Gardanne Post-Mining Site

Authors: Michael Foumelis, Dr. Eng. Jose Manuel Delgado Blasco, Dr Elena Papageorgiou, Dominique Pascale, Dr Pavlos Bonatis, Dr Daniel Raucoules, Dr Marcello de Michele, Dr Eleftheria
Affiliations: Aristotle University of Thessaloniki (AUTh), Randstad c/o ESA-ESRIN, French Geological Survey (BRGM)
Post-mining surface deformation and associated microseismicity are critical concerns for former mining sites, with changes in groundwater levels being an important influencing factor. This study investigates these phenomena at the Gardanne coal mining site in Provence, southern France. Following the closure of the mine in 2003 and the cessation of groundwater pumping, flooding occurred gradually and stabilized in 2010. However, subsequent pumping activities to prevent flooding resulted in fluctuating water levels that triggered periodic seismic swarms and surface deformation. Strong temporal and spatial relationships between seismic activity and changes in groundwater levels have been observed since 2010. An expanded seismic monitoring network recorded almost 2,700 events between 2013 and 2018, most of which occurred in swarms of small-magnitude events. Advanced multi-temporal Interferometric Synthetic Aperture Radar (InSAR) methods were applied using Copernicus Sentinel-1 satellite data from 2015 to 2022: Persistent Scatterer Interferometry (PSI) and Distributed Scatterer (DS) approaches were utilized to analyze surface motion across the study area. The findings showed patterns of surface displacement aligned with seismic clusters, particularly during the 2016-2017 seismically active periods. During this period, cumulative vertical displacements reached up to 26.4 mm. Geometric decomposition of ascending and descending measurements confirmed predominantly vertical motion with negligible horizontal displacement, consistent with the local tectonic stress regime and the focal mechanisms of seismic events. Temporal analysis revealed a strong correlation between surface motion and seismic swarms, with deformation signals often preceding or persisting beyond seismic activity.
Seasonal subsidence patterns unrelated to seismicity were also observed, particularly during the summer months, likely related to groundwater management practices. These findings underline the complex relationship between hydrogeological and tectonic processes influencing surface stability in post-mining environments. This study demonstrates the effectiveness of advanced InSAR techniques for monitoring subtle surface deformation and provides critical insights for managing groundwater levels and mitigating risks in post-mining areas. Future work should focus on refining models to separate seismic and aseismic contributions to surface deformation, while examining seasonal influences on displacement patterns.
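The geometric decomposition mentioned above amounts to solving a small linear system: each line-of-sight (LOS) observation is a projection of the (vertical, east) motion vector, and near-polar SAR orbits are almost blind to north-south motion. A sketch with illustrative Sentinel-1-like geometry; the incidence and heading angles are assumptions, not those of the Gardanne study:

```python
import numpy as np

def decompose_vertical_east(d_asc, d_desc, inc_asc=39.0, inc_desc=39.0,
                            head_asc=347.0, head_desc=193.0):
    """Invert ascending/descending LOS displacements (mm, positive towards
    the satellite) for vertical and east-west motion, neglecting the
    north-south component. Angles in degrees; right-looking geometry."""
    def row(inc, head):
        i, h = np.radians(inc), np.radians(head)
        return [np.cos(i), -np.sin(i) * np.cos(h)]  # coefficients for [up, east]
    A = np.array([row(inc_asc, head_asc), row(inc_desc, head_desc)])
    up, east = np.linalg.solve(A, [d_asc, d_desc])
    return up, east

# pure subsidence of 26.4 mm projects identically into both geometries here
los = -26.4 * np.cos(np.radians(39.0))
up, east = decompose_vertical_east(los, los)
```

When both LOS series show the same signal, the inversion returns purely vertical motion with negligible east-west displacement, matching the behaviour reported above.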

Tuesday 24 June 11:30 - 13:00 (Hall G1)

Session: C.03.08 The European Copernicus Space component: status, future prospects and challenges - PART 2

Copernicus is the European Earth monitoring program, which opened a new era in Earth Observation with continuous, accurate monitoring of our planet and ongoing improvements to respond to the new challenges of global change.
Since it became operational in 2014 with the launch of the first dedicated satellite, Sentinel-1A, Copernicus has provided a wealth of essential, timely and high-quality information about the state of the environment, allowing borderless environmental and emergency monitoring, and enabling public authorities to take decisions when implementing European Union policies.
The intense use of Copernicus and increased awareness of its potential have also generated great expectations, leading to an evolved Copernicus system that embraces emerging needs, new user requirements and a new commercial dimension.
This future evolution of the Copernicus program will fill observational gaps and will help monitor the “pulse” of our planet for the decades to come, but to do so, programmatic and budgetary commitments will need to be maintained.

Presentations and speakers:



Unleashing the potential of Copernicus Sentinel Data: Fuelling Europe's Digital Future



  • J. Martin - ESA, CSC Data Access System Architect

The Copernicus Contributing Missions: present and future


  • P. Fischer - ESA, EO Third Party Missions Manager

The Copernicus current Sentinel satellite missions: Sentinel-2


  • F. Gascon - ESA, Sentinel-2 Mission Manager

The Copernicus current Sentinel satellite missions: Sentinel-3


  • J. Bouffard - ESA, Sentinel-3 Mission Manager
  • H. Wilson - EUMETSAT, Sentinel-3 Project Manager

The Copernicus current Sentinel satellite missions: Sentinel-5P



  • C. Zehner - ESA, Sentinel-5P Mission Manager

The Copernicus current Sentinel satellite missions: Sentinel-6



  • B. L. Bydekerke - EUMETSAT, Copernicus Programme Manager

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Session: A.03.06 Exploring ground-based, airborne and satellite observations and concepts for the carbon cycle

The remote sensing community is abuzz with innovative observation concepts for the carbon cycle, aiming to collect the crucial data at different spatial and temporal scales required to study and improve understanding of the underlying geophysical processes. Observations from new airborne and ground-based instruments play a vital role in developing new applications that benefit from integrated sensing.

These new concepts need to go hand in hand with the mathematical understanding of the theoretical frameworks including uncertainty estimates. This session invites presentations on:
- innovative observations of geophysical products focussing on the carbon cycle
- innovative applications based on integrated sensing
- feedback and lessons learned from ongoing or planned developments, as well as from first ground-based or airborne campaigns.

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Towards an automated sea-based ocean lidar network

Authors: Davide Dionisi, Dr. Cédric Jamet, Dr. Kelsey Bisson, Peng Chen, Paolo Di Girolamo, Yongxiang Hu, Dong Liu, X Lu, Sayoob Vadakke-Chanat, Siqi Zhang, Z. Zhang, Yudi Zhou, Iwona Stachlewska
Affiliations: CNR - ISMAR, Univ. Littoral Côte d’Opale, LOG, Ocean Biology and Biogeochemistry program, NASA Headquarters, State Key Laboratory of Satellite Ocean Environment Dynamics, School of Engineering, University of Basilicata, Science Directorate, Lidar Science Branch, NASA Langley Research Center, Ningbo Innovation Center, Zhejiang University, University of Warsaw
Lidar techniques have proven to be a reliable and valuable tool for the study of marine environments [1]. Over the past decade, numerous Ocean Color (OC) research initiatives utilizing space-borne lidar measurements, originally dedicated to atmospheric missions (e.g. CALIPSO), have not only established a solid proof of concept for ocean space-based applications but also yielded significant scientific discoveries [2,3]. These results ushered in a 'new era of lidar in satellite oceanography' [4], highlighting the potential of these technologies for understanding oceanographic processes and ecosystem dynamics. Specifically, the lidar technique can overcome some limitations of passive ocean color remote sensing, as it enables nighttime observations and can resolve the vertical structure of phytoplankton, thus reducing uncertainties in global estimates of phytoplankton biomass and net primary production. Internationally established lidar networks for monitoring the atmosphere, such as the European Aerosol Research Lidar Network (EARLINET) and the Network for the Detection of Atmospheric Composition Change (NDACC), are well known. Similarly, networks dedicated to radiometric observations of ocean color, like the Aerosol Robotic Network - Ocean Color (AERONET-OC), are also actively operating. The latter can be used for validation of NASA and ESA ocean color space missions by providing the spectrum of the light backscattered by the ocean surface. However, information on the vertical structure of the upper ocean and on the inherent optical properties of seawater (IOPs) cannot be retrieved. Only ship-borne observations provide IOPs, but these are scarce in space and time, as they are time- and labor-consuming. What we propose is a vision of an ocean lidar network: a network of in-situ lidar observations that will be complementary to the current ocean color networks. It will provide parameters currently not observed and, more importantly, those observations will be vertically resolved.
Our presentation explores the potential for developing a network of observations utilizing lidar technology. It will outline the current status and developmental needs of an automated lidar network specifically designed for oceanic applications. Furthermore, we will provide recommendations regarding the requirements for such a network, covering aspects of instrumentation, geographical locations, frequency of observations, ocean variables to be monitored, acceptable levels of uncertainty in data products, and quality assurance procedures. This comprehensive discussion aims to highlight the essential components necessary for establishing an effective lidar network to enhance oceanographic studies. The proposed observations will yield innovative insights into geophysical products related to the ocean carbon cycle, which could be instrumental in validating current and future ocean color space missions. Additionally, they will support next-generation missions based on lidar technology, such as the Cloud Aerosol Lidar for Global Scale Observations of the Ocean-Land-Atmosphere System (CALIGOLA) mission [5,6], currently being developed by the Italian Space Agency in collaboration with NASA. [1] Jamet, C., et al., 2019. Going beyond Standard Ocean color observations: Lidar and polarimetry. Front. Mar. Sci. 6, 251. https://doi.org/10.3389/fmars.2019.00251 [2] Behrenfeld, et al., 2019. Global satellite-observed daily vertical migrations of ocean animals. Nature 576, 257–261. https://doi.org/10.1038/s41586-019-1796-9 [3] Dionisi, D., et al., 2020. Seasonal distributions of ocean particulate optical properties from spaceborne lidar measurements in Mediterranean and Black Sea. Remote Sens. Environ. 247, 111889 https://doi.org/10.1016/j.rse.2020.111889. [4] Hostetler, et al., 2018. Spaceborne Lidar in the study of marine systems. Ann. Rev. Mar. Sci. 10, 121–147. https://doi.org/10.1146/annurev-marine-121916-063335. [5] Behrenfeld, et al., 2023.
Satellite Lidar measurements as a critical new Global Ocean climate record. Remote Sens. (Basel) 15, 5567. https://doi.org/10.3390/rs15235567. [6] P. Girolamo, et al., 2022. Introducing the Cloud Aerosol Lidar for Global Scale Observations of the Ocean-Land-Atmosphere System: CALIGOLA, in Proceedings of the 30th International Laser Radar Conference. Atmospheric Sciences. Springer, Cham pp. 625–630.

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Estimating methane fluxes from Arctic-boreal wetlands using observations from the CoMet 2.0 Arctic airborne mission

Authors: Andreas Fix, Sebastian Wolff, Leah Kanzler, Christoph Kiemle, Mathieu Quatrevalet, Fruck Christian, Martin Wirth, Paul Waldmann, Kerstin Hartung, Anna-Leah Nickl, Bastian Kern, Mariano Mertens, Patrick Jöckel, Sven Krautwurst, Heinrich Bovensmann, Michał Gałkowski, Christoph Gerbig
Affiliations: German Aerospace Center (DLR), Institute of Atmospheric Physics, Institute of Environmental Physics (IUP), University of Bremen, Max Planck Institute for Biogeochemistry
CoMet 2.0 Arctic (www.comet2arctic.de) was successfully conducted within a six-week intensive operation period from August 10th to September 16th, 2022, targeting greenhouse gas emissions from boreal wetlands and permafrost areas in the Canadian Arctic, from wildfires, and from oil, gas, and coal extraction sites. Using the German research aircraft HALO with an innovative combination of scientific instruments onboard, a total of 135 flight hours were performed. A valuable data set was thus acquired to help understand the methane and carbon dioxide cycles in the Arctic and emissions from a variety of sources responsible for accelerated climate warming, particularly at high latitudes. CoMet 2.0 Arctic is embedded within the transatlantic AMPAC (Arctic Methane and Permafrost Challenge) initiative of the US and European space agencies, NASA and ESA, which promotes co-operation among Canadian, US and European research institutes in this research area. A selection of the research flights concentrated on the Hudson Bay Lowlands, a vast region of boreal wetlands and permafrost in northern Canada that stores significant amounts of carbon in the form of peat deposits and frozen organic matter. However, as permafrost thaws due to climate change, natural CH4 emissions from the area are expected to increase substantially, in particular through enhanced ebullition and increasingly frequent wildfires. Monitoring these emissions is crucial for understanding the magnitude and variability of permafrost-carbon feedbacks, which could amplify global warming if left unmanaged. Regular observations of greenhouse gas fluxes in this region are therefore essential to improve predictions of future emissions and inform strategies for mitigating climate change. Detecting emission patterns is also critical for validating process-based wetland models, which simulate the dynamics of carbon cycling and methane production in these ecosystems.
These models rely on understanding the complex interactions between soil moisture, temperature, and vegetation, as well as the role of microbial communities in producing and consuming greenhouse gases. Here, we report on our attempts to infer methane fluxes from that region and compare our in-situ and remote sensing observations against the results of the online-coupled, one-time nested global and regional chemistry-climate model MECO(n). The regional model domain was set up for CoMet 2.0 Arctic with a 50 km (0.44 x 0.44°) grid over North America. The simulated tracers carry methane emissions from different process-based wetland models, as well as from specific methane source categories (e.g. anthropogenic, biomass burning, wetlands). In the future, improved observation capabilities from satellites are foreseen, including the German-French MEthane Remote sensing Lidar missioN (MERLIN), which offers a promising tool for monitoring CH4 gradients over large spatial scales, particularly at high latitudes, since it does not depend on sun illumination and is much less affected by clouds and aerosol than passive remote sensing. The results shown here are therefore a step towards the utilization of upcoming satellite data to understand greenhouse gas cycles, e.g. at high northern latitudes.

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Evaluating the impact of phenological shifts on gross primary productivity across Europe

Authors: Getachew Mehabie Mulualem, Dr. Jadu Dash
Affiliations: School of Geography and Environment, University of Southampton
Phenology, the study of the timing of biological events, plays a critical role in understanding ecosystem productivity and carbon fluxes. Changes in climatic variables such as temperature and precipitation affect the timing of these phenological events, and hence overall productivity during the growing season. In the northern high latitudes, increased temperatures have led to earlier spring and delayed autumn events. However, the impact of these phenological shifts on vegetation carbon dynamics remains poorly understood. This study therefore utilizes data from 57 flux towers within the ICOS network from 2017 to 2023 (7 years) to provide detailed insights into the relationship between the length of the growing season and Gross Primary Productivity (GPP). Discrete Fourier Transform analysis was applied to smooth the data and minimize noise. The start of the growing season (SOS) was identified at the 25th percentile slope on the greening curve, while the end of the growing season (EOS) was determined at the 75th percentile slope on the senescing curve for each site. GPP values across the growing season were integrated to provide annual GPP for each site. The analysis showed a significant positive correlation (r=0.54) between annual GPP and the length of the growing season. However, this relationship varies by land cover type: among natural vegetation, Evergreen Needleleaf Forests (ENF) showed the strongest positive correlation (r=0.74) and Deciduous Broadleaf Forests (DBF) the weakest (r=0.60). Inter-annual variations in phenology metrics also varied across land cover types. The SOS for Grassland (GRA) stations advanced by 2.9 days per decade, while the EOS advanced by 12.8 days per decade. In DBF, the SOS was delayed by 17.6 days and the EOS by 20.8 days per decade. For ENF, the SOS advanced by 11.5 days, whereas the EOS advanced by only 0.8 days per decade.
All trends were statistically significant at the 0.05 level. Annual GPP in GRA and ENF decreased significantly, which is consistent with the shortening of the growing season. Conversely, in DBF, the annual GPP increased, aligning with the lengthening of the growing season. Across all sites, annual GPP was found to be negatively correlated with SOS and positively correlated with EOS and LOS. However, the inter-annual variation of GPP was inconsistent across different land covers, with an overall significant increase of 8.77% in GPP observed. While advanced SOS, delayed EOS, and extended LOS are often attributed to climate warming, the relationship between phenology shift and GPP is complex and influenced by several environmental factors. Future activity will explore these relationships at a regional scale and incorporate additional environmental variables to better understand the intricate interplay between phenology and carbon dynamics.
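The retrieval chain described above can be sketched on a synthetic GPP season: Fourier low-pass smoothing, then threshold crossings for SOS and EOS. Reading the '25th/75th percentile slope' criterion as crossings at 25% and 75% of the seasonal amplitude is an assumption, as is the toy data:

```python
import numpy as np

def smooth_dft(y, n_harmonics=3):
    """Low-pass a seasonal trajectory by keeping the mean and the first
    few Fourier harmonics (one simple reading of the DFT smoothing step)."""
    Y = np.fft.rfft(y)
    Y[n_harmonics + 1:] = 0
    return np.fft.irfft(Y, n=len(y))

def sos_eos(doy, gpp, lo=0.25, hi=0.75):
    """SOS: first day the smoothed curve rises past `lo` of the seasonal
    amplitude; EOS: first day after the peak it falls below `hi`."""
    s = smooth_dft(gpp)
    thr = s.min() + np.array([lo, hi]) * (s.max() - s.min())
    peak = int(np.argmax(s))
    sos = doy[np.argmax(s[:peak] >= thr[0])]
    eos = doy[peak + np.argmax(s[peak:] <= thr[1])]
    return sos, eos

doy = np.arange(1, 366)
gpp = np.clip(10 * np.sin((doy - 90) * np.pi / 180), 0, None)  # toy half-sine season
gpp += np.random.default_rng(1).normal(0, 0.4, doy.size)       # observation noise
sos, eos = sos_eos(doy, gpp)  # season bounds in day-of-year
```

Integrating `gpp` between `sos` and `eos` would then give the annual GPP used in the correlation analysis above.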

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: CarboCatch: assessing tree biomass carbon using remote sensing and machine learning in an interactive platform

Authors: Liv Toonen, Ramadhan -, Waas Thissen
Affiliations: Space4good, Louis Bolk Instituut
Keywords: agroforestry monitoring, aboveground biomass, carbon sequestration, remote sensing, satellite monitoring, LiDAR, fieldwork, data integration, MRV, artificial intelligence, random-forest, allometric equation, tree detection, suitability map, online platform

The high costs of monitoring, reporting, and verification (MRV) pose a significant challenge to the profitability of carbon credits in agroforestry systems. Traditional MRV methods rely heavily on extensive manual tree measurements, creating a substantial burden for agroforestry farmers and project developers. Given that carbon farming in agroforestry is a relatively new practice, with monitoring methodologies differing from forestry-based carbon farming, research was necessary to assess the value of remote sensing-based models and determine the most effective approaches. CarboCatch was developed to address this challenge, leveraging remote sensing technologies and artificial intelligence (AI) trained on ground-truth data to automate biomass and carbon stock estimation, with the potential to reduce or replace costly fieldwork. CarboCatch integrates multiple datasets to improve the accuracy of its machine learning (ML) models. Among these, freely available LiDAR-derived canopy height models (CHM) from the Actueel Hoogtebestand Nederland (AHN) provide detailed structural information about tree heights and biomass distribution. High-resolution multispectral imagery, capturing spectral bands such as red, green, red edge, and near-infrared (NIR), adds critical insights into vegetation characteristics. Ground-truth measurements collected on five walnut agroforestry farms in the Netherlands by the Louis Bolk Institute were used to label the remote sensing data with biomass values. These measurements included tree diameter at breast height (DBH), tree height, age, and species, ensuring the integration of accurate field data with remote sensing datasets. 
The development of CarboCatch involved an iterative modeling process to identify the most effective configurations for agroforestry applications. Various combinations of model features were tested, such as CHM and multispectral image bands. The performance of different machine learning algorithms, including random forests and other regression models, was evaluated. Experiments with train:test data splits optimized model validation, while diverse tree species, soil types, and regions were included to improve scalability. Techniques for filtering outliers ensured data quality and model reliability. Comparisons between original and pan-sharpened resolutions revealed that higher-resolution imagery often introduced noise, adversely affecting model performance. This systematic approach allowed the team to refine models and identify the best-performing configurations. The final models were validated using testing datasets, with predictions compared to field-based above-ground biomass (AGB) measurements. Metrics such as R² and mean absolute error (MAE) demonstrated the reliability of the models. Models integrating LiDAR-derived CHM and multispectral bands performed particularly well, achieving R² values exceeding 0.90 for plot-specific datasets. However, soil-specific models, such as those distinguishing clayey versus sandy soils, showed better generalization across landscapes. These soil-specific models achieved R² values of 0.80 (sandy) and 0.88 (clayey), with MAE values of 3.68 Ton/Ha and 1.53 Ton/Ha, respectively. Interestingly, models using 1.2-meter resolution imagery outperformed those with 0.3-meter pan-sharpened imagery, which was prone to introducing reflectance distortions and noise. The results underline the potential of integrating diverse datasets, including LiDAR, multispectral imagery, and field data, for scalable and cost-effective MRV in agroforestry systems. However, further work is necessary to enhance the applicability and usability of CarboCatch. 
Expanding training datasets to include additional field plots from diverse regions and soil types across the Netherlands is a priority. Certification of the methodology by SNK will ensure alignment with official standards, making the platform more accessible to project developers. Planned platform enhancements include personalized dashboards that restrict access to managed plots, protecting user privacy while providing tailored insights. Advanced modeling features, such as survival rate visualizations and species recognition algorithms, will further improve usability. CarboCatch demonstrates the transformative potential of AI-driven solutions for reducing MRV costs while maintaining high accuracy. By integrating cutting-edge remote sensing technologies with robust field data, CarboCatch is positioned to revolutionize carbon sequestration monitoring in agroforestry systems, paving the way for scalable and profitable carbon farming.
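A random-forest regression of the kind described (CHM plus multispectral features predicting AGB, evaluated with R² and MAE) could be prototyped along these lines. Everything here is synthetic: the features, the allometric-style relation generating the "ground truth", and the train/test split are illustrative stand-ins, not CarboCatch's actual model or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins: LiDAR canopy height (m) and one multispectral index.
chm = rng.uniform(0, 20, n)
ndvi = rng.uniform(0.2, 0.9, n)

# Hypothetical allometric-style relation generating "ground truth" AGB (Ton/Ha).
agb = 3.5 * chm + 20 * ndvi + rng.normal(0, 2, n)

X = np.column_stack([chm, ndvi])
X_tr, X_te, y_tr, y_te = train_test_split(X, agb, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
r2, mae = r2_score(y_te, pred), mean_absolute_error(y_te, pred)
```

In practice the feature matrix would hold per-tree or per-pixel CHM statistics and spectral bands, with soil-type stratification (clayey vs sandy) handled by fitting separate models, as the abstract reports.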

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: The Greenhouse gas Emissions Monitoring network to Inform Net-zero Initiatives for the UK (GEMINI-UK): a new national capability for ground-based remote sensing of greenhouse gases

Authors: Neil Humpage, Paul Palmer, Alex Kurganskiy, Liang Feng, Jerome Woodwark, Will Morrison, Douglas Finch, Stamatia Doniki, Damien Weidmann, Robbie Ramsay
Affiliations: National Centre for Earth Observation, University Of Leicester, National Centre for Earth Observation, University of Edinburgh, University of Edinburgh, RAL Space, Rutherford Appleton Laboratory, NERC Field Spectroscopy Facility, University of Edinburgh
The UK has a long-term goal in place to achieve net-zero greenhouse gas (GHG) emissions by 2050. As part of the UK Greenhouse gas Emissions Measurement Modelling Advancement programme (GEMMA), which aims to provide timely, frequent, and open emissions data to inform progress towards this target, the National Centre for Earth Observation have set up a network of ground-based shortwave infrared spectrometers around the UK. This new network, called GEMINI-UK (Greenhouse gas Emissions Monitoring network to Inform Net-zero Initiatives for the UK), will provide continuous observations of the column concentrations of carbon dioxide and methane during cloud-free conditions. The motivation for this network is to provide data that will, along with in-situ measurements collected by the UK's existing tall tower network, help quantify regional net GHG emissions across the country. Together, these data will form the backbone of a pre-operational GHG emissions monitoring framework for the UK. Through the GEMMA programme, data from GEMINI-UK will be used in a Bayesian inversion framework to constrain regional flux estimates of carbon dioxide and methane. We have designed the measurement network to deliver the biggest uncertainty reductions in carbon dioxide flux estimates, working closely with host partners that include UK universities, a research institute and a secondary school to promote the open access and transparency of the collected data. The network comprises ten new Bruker EM27/SUN spectrometers, which we operate in automated weatherproof enclosures using a design developed by University of Edinburgh researchers, allowing year-round autonomous observations across multiple sites. 
A further two EM27/SUNs located in London, operated by the NERC Field Spectroscopy Facility, are also contributing data to GEMINI-UK. In this presentation we describe the status, network design, first data, and longer-term goals of GEMINI-UK, including an ongoing evaluation of the GEMINI-UK station located alongside a Bruker IFS 120/5 HR TCCON (Total Carbon Column Observing Network) spectrometer at the Rutherford Appleton Laboratory in Harwell, Oxfordshire. We also highlight the opportunities that GEMINI-UK will provide for validation of existing and future greenhouse gas observing satellite missions including S5P TROPOMI, MicroCarb, and CO2M.

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Resilience of forests across species: recovery curves for tree cover and biomass in France

Authors: Agnès Pellissier-Tanon, Ibrahim Fayad, Fajwel Fogel, Martin Schwartz, Yidi Xu, François Ritter, Dr. Philippe Ciais
Affiliations: Laboratoire des Sciences du Climat et de l’Environnement, LSCE/IPSL, CEA-CNRS-UVSQ, Department of Geosciences and Natural Resource Management, University of Copenhagen
Forests face a growing number of natural and human-induced perturbations, including wildfires, storms, pests, deforestation, and land-use changes, all of which are exacerbated by climate change [1,2]. These disturbances threaten forest health and their ability to act as critical carbon sinks, making regeneration efforts essential [3]. Ensuring the recovery and resilience of forests is not only vital for maintaining biodiversity and ecosystem services but also for mitigating climate change by restoring their capacity to sequester carbon effectively [4]. Accurately quantifying forest contributions to the carbon cycle requires detailed monitoring of growth, recovery, and carbon sequestration dynamics [5,6]. However, the best available global estimates of carbon removal from natural forest regrowth do not sufficiently capture the spatial and temporal variability [7]. The Intergovernmental Panel on Climate Change (IPCC) provides standard carbon removal rates for only two time periods - young (< 20 years) and older (20 to 100 years) secondary forests - and only at the continental and ecozone scales [8]. Some studies have concentrated on the impact of forest functional type on growth [9,10]. However, these studies utilise field data, which only partially represents the full range of forest age, species and environment (climate and soil) [7,10,11]. To address this limitation, satellite data provides maps of forest metrics such as height, biomass [12–15] and forest history dating back to the 1980s [16]. Consequently, they offer a substantial source of continuous spatial information to study secondary forest growth [17,18]. This study presents secondary growth curves for three key forest variables: tree cover, height, and AGB in France. These curves, differentiated by species, reveal distinct growth rates for various age classes and tree species, emphasizing the varied dynamics of forest regrowth. 
By analyzing a variety of forest metrics, we identify the extent to which the time required to close the canopy differs from the time required for biomass generation. This allows for the extraction of several measures of resilience according to the level of recovery under study. In order to elucidate the sources of variability in recovery curves, an investigation is conducted into the potential influence of environmental factors, including NDVI, soil characteristics, and climate. This approach establishes a link between forest growth dynamics and the underlying ecological drivers, thereby enhancing the interpretability and predictive power of recovery models. Furthermore, the study estimates the maximum potential AGB for different regions and species, mapping the deviation of current forest states from this theoretical maximum. This gridded recovery analysis provides a spatially explicit understanding of forest recovery potential, enabling more effective management strategies, particularly with regard to tree species for enhancing carbon sequestration. Finally, using a bookkeeping approach, we estimate C losses and gains following disturbances in the last decades and compare the results with those of the French NFI. The findings provide a comprehensive framework for the refinement of carbon budgeting and the enhancement of understanding with regard to forest resilience at the species and regional levels. This, in turn, facilitates the development of more informed and sustainable forest management practices. [1] Forzieri G, Dakos V, McDowell NG, Ramdane A, Cescatti A. Emerging signals of declining forest resilience under climate change. Nature 2022;608:534–9. https://doi.org/10.1038/s41586-022-04959-9. [2] Cerioni M, Brabec M, Bače R, Bāders E, Bončina A, Brůna J, et al. Recovery and resilience of European temperate forests after large and severe disturbances. Glob Change Biol 2024;30:e17159. https://doi.org/10.1111/gcb.17159. 
[3] Puhlick JJ, Weiskittel AR, Kenefic LS, Woodall CW, Fernandez IJ. Strategies for enhancing long-term carbon sequestration in mixed-species, naturally regenerated Northern temperate forests. Carbon Manag 2020;11:381–97. https://doi.org/10.1080/17583004.2020.1795599. [4] Lewis SL, Wheeler CE, Mitchard ETA, Koch A. Restoring natural forests is the best way to remove atmospheric carbon. Nature 2019;568:25–8. https://doi.org/10.1038/d41586-019-01026-8. [5] Zhu K, Zhang J, Niu S, Chu C, Luo Y. Limits to growth of forest biomass carbon sink under climate change. Nat Commun 2018;9:2709. https://doi.org/10.1038/s41467-018-05132-5. [6] Bukoski JJ, Cook-Patton SC, Melikov C, Ban H, Chen JL, Goldman ED, et al. Rates and drivers of aboveground carbon accumulation in global monoculture plantation forests. Nat Commun 2022;13:4206. https://doi.org/10.1038/s41467-022-31380-7. [7] Cook-Patton SC, Leavitt SM, Gibbs D, Harris NL, Lister K, Anderson-Teixeira KJ, et al. Mapping carbon accumulation potential from global natural forest regrowth. Nature 2020;585:545–50. https://doi.org/10.1038/s41586-020-2686-x. [8] Requena Suarez D, Rozendaal DM, De Sy V, Phillips OL, Alvarez‐Dávila E, Anderson‐Teixeira K, et al. Estimating aboveground net biomass change for tropical and subtropical forests: Refinement of IPCC default rates using forest plot data. Glob Change Biol 2019;25:3609–24. [9] Robinson N, Drever R, Gibbs D, Lister K, Esquivel-Muelbert A, Heinrich V, et al. Protect young secondary forests for optimum carbon removal 2024. https://doi.org/10.21203/rs.3.rs-4659226/v1. [10] Chen X, Luo M, Larjavaara M. Effects of climate and plant functional types on forest above-ground biomass accumulation. Carbon Balance Manag 2023;18:5. https://doi.org/10.1186/s13021-023-00225-1. [11] Heinrich VHA, Dalagnol R, Cassol HLG, Rosan TM, de Almeida CT, Silva Junior CHL, et al. Large carbon sink potential of secondary forests in the Brazilian Amazon to mitigate climate change. Nat Commun 2021;12:1785. 
https://doi.org/10.1038/s41467-021-22050-1. [12] Schwartz M, Ciais P, De Truchis A, Chave J, Ottlé C, Vega C, et al. FORMS: Forest Multiple Source height, wood volume, and biomass maps in France at 10 to 30 m resolution based on Sentinel-1, Sentinel-2, and GEDI data with a deep learning approach. Earth Syst Sci Data Discuss 2023:1–28. https://doi.org/10.5194/essd-2023-196. [13] Liu S, Brandt M, Nord-Larsen T, Chave J, Reiner F, Lang N, et al. The overlooked contribution of trees outside forests to tree cover and woody biomass across Europe. Sci Adv 2023;9:eadh4097. https://doi.org/10.1126/sciadv.adh4097. [14] Lang N, Jetz W, Schindler K, Wegner JD. A high-resolution canopy height model of the Earth. Nat Ecol Evol 2023:1–12. https://doi.org/10.1038/s41559-023-02206-6. [15] Potapov P, Li X, Hernandez-Serna A, Tyukavina A, Hansen MC, Kommareddy A, et al. Mapping global forest canopy height through integration of GEDI and Landsat data. Remote Sens Environ 2021;253:112165. https://doi.org/10.1016/j.rse.2020.112165. [16] Viana-Soto A, Senf C. The European Forest Disturbance Atlas: a forest disturbance monitoring system using the Landsat archive. Earth Syst Sci Data Discuss 2024:1–42. https://doi.org/10.5194/essd-2024-361. [17] Xu Y, Ciais P, Li W, Saatchi S, Santoro M, Cescatti A, et al. Biomass recovery after fires dominates the carbon sink of boreal forests over the last three decades 2023:EGU-9695. https://doi.org/10.5194/egusphere-egu23-9695. [18] Pellissier-Tanon A, Ciais P, Schwartz M, Fayad I, Xu Y, Ritter F, et al. Combining satellite images with national forest inventory measurements for monitoring post-disturbance forest height growth. Front Remote Sens 2024;5. https://doi.org/10.3389/frsen.2024.1432577.

Tuesday 24 June 11:30 - 13:00 (Room 0.14)

Session: F.05.09 Case Studies on the Economic Impacts of Earth Observation

Earth observation (EO) data and solutions provide benefits for commercial and governmental end-users across a wide range of key sectors such as insurance, finance, energy, climate, forestry, agriculture, mining etc. Several studies have been conducted to identify the economic, strategic and environmental impacts of EO. However, due to the open nature of data (e.g. from the Copernicus programme), the economic impacts of EO are most likely underestimated.

This session will feature interactive panel discussions with stakeholders across both sides of the EO value chain - EO data and solution providers as well as EO end-users who will share case studies on the current value of EO for their organisations, along with an outlook on how EO is set to transform their businesses in the future.

Speakers:


  • Aravind Ravichandran - founder of Terrawatchspace
  • Geoff Sawyer - Strategic Advisor to the EARSC Board
  • Grinson George Padinjakara ARS - Director ICAR-Central Marine Fisheries Research Institute
  • Gopal Erinjippurath - Founder and CTO at SustGlobal
  • David Fernandes - GIS Consultant at EDP
  • Eduard Escalona Zorita - Space Downstream Market Officer at EUSPA
  • Dmytro Shemet - CEO of Cropwise Operations, Syngenta

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Session: A.02.03 EO for Agriculture Under Pressure - PART 4

The human impact on the biosphere is steadily increasing. One of the main human activities contributing to this is agriculture. Agricultural crops, managed grasslands and livestock are all part of the biosphere and our understanding of their dynamics and their impacts on other parts of the biosphere, as well as on the wider environment and on the climate is insufficient.
On the other hand, today’s Agriculture is Under Pressure to produce more food in order to meet the needs of a growing population with changing diets – and this despite a changing climate with more extreme weather. It is required to make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and its negative impact on the environment, and to deliver accessible, affordable and healthy food.
Proposals are welcome from activities aiming at increasing our understanding of agriculture dynamics and at developing and implementing solutions to the above-mentioned challenges of agriculture, or supporting the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross site research and benchmarking studies, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM) are welcome.

The session will hence cover topics such as:
- Impact on climate and environment
- Crop stressors and climate adaptation
- Food security and Sustainable Agricultural Systems
- New technologies and infrastructure

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: Mapping Sahelian Agricultural Landscapes

Authors: Altaaf Mechiche Alami
Affiliations: Center for Sustainability Studies, Lund University
Transforming agricultural systems to produce more affordable and nutritious foods in a sustainable manner while being climate resilient is one of the main challenges the world is facing today. Concepts of sustainable intensification, climate smart agriculture, Sustainable Land Management (SLM) - including agroecology - are increasingly being promoted in order to achieve this goal. However, the way such strategies and policies are being implemented, as well as their broader socio-ecological impacts on the ground, have been hard to track globally. Agricultural monitoring and yield estimation become particularly challenging in data-scarce regions such as the Sahel, where agriculture forms complex systems with high spatial heterogeneity and strong dependence and vulnerability to climatic variability. Senegal placed agricultural transformation at the heart of its economic growth strategy (The Emerging Senegal Plan since 2013) by establishing agricultural growth poles (agropoles), promoting private sector investments and supporting smallholders in accessing organic inputs, improved seeds and irrigation to reform selected value chains and improve nutritional intake, achieve self-sufficiency and increase incomes. The country has also implemented SLM practices in the context of the Great Green Wall initiative since its inception in 2008 with a strong emphasis on land restoration and agroforestry projects. This research aims to inventory the resulting changes in Senegal's agricultural landscapes over the past decade. It considers changing management practices (rotations, fallows, irrigation, fertilizer use and SLM) and their potential environmental and livelihood impacts. This is done by building on approaches used in Agro-Ecological, Livelihood and Socio-Ecological Land System zoning, and utilizing various data sources from satellite imagery, climate products, national statistics and agricultural census to complement a small in-situ dataset (JECAM). 
The project experiments with the use of various Machine Learning algorithms (supervised and unsupervised) and satellite-based features in an attempt to characterize agricultural landscape composition, vegetation-climate interactions and management practices. Output annual maps inform on the location, scale and pace of agricultural transformations from intensification, expansion and use of SLM. Such information is not only necessary for agricultural monitoring but also serves as a basis for analyzing food and nutrition security and evaluating farming systems’ response and resilience to climatic changes. They also have the potential of improving the representation of farming systems in Dynamic Global Vegetation Models, enabling the evaluation of management practices as well as climate impacts on yields and ecosystem health (carbon and nitrous oxide emissions and nitrogen leaching).
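An unsupervised zoning step of the kind mentioned above could be sketched as below: hypothetical per-pixel features are standardised and clustered into landscape classes with k-means. The feature set, value ranges, and cluster count are invented for illustration and do not reflect the project's actual inputs.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

# Hypothetical per-pixel features: seasonal NDVI amplitude, annual rainfall (mm),
# and a cropping-intensity proxy (number of cropping seasons detected).
features = np.column_stack([
    rng.normal(0.4, 0.15, 1000),
    rng.normal(500.0, 150.0, 1000),
    rng.integers(1, 3, 1000).astype(float),
])

# Standardise so rainfall (hundreds of mm) does not dominate the unitless indices.
X = StandardScaler().fit_transform(features)
zones = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
```

Mapping the resulting cluster labels back to pixel coordinates would yield the kind of annual landscape-zoning map the abstract describes; supervised classifiers would replace the clustering where in-situ JECAM labels are available.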

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: EO4Nutri: Remote Sensing for nutrient estimation and sustainable crop monitoring

Authors: Mariana Belgiu, Associate Professor Michael Marshall, Dr. Gabriele Candiani, Dr. Francesco Nutini, Dr. Monica Pepe, Dr. Mirco Boschetti, Associate Professor Micol Rossini, Dr. Luigi Vignali, Chiara Ferrè, Dr. Cinzia Panigada, Prof. Tobias Hank, Dr. Stefanie Steinhauser, Dr. Stephan Haefele, Professor Murray Lark, Dr. Alice Milne, Dr. Grace Kangara, Kwasi Appiah-Gyimah Ofori-Karikari, Professor Alfred Stein, Dr. Raian Vargas Maretto, Associate Professor Chris Hecker, Prof Andy Nelson
Affiliations: Faculty of Geo-information Science and Earth Observation (ITC), University Of Twente, Institute for Electromagnetic Sensing of the Environment (IREA), National Research Council of Italy (CNR), Department of Earth and Environmental Sciences (DISAT), University of Milano-Bicocca, Ludwig Maximilian University of Munich, Rothamsted Research, University of Nottingham
The EAT-Lancet Commission on Food, Planet, and Health developed the so-called planetary health “plate” in an attempt to address the question: "Can we provide a healthy diet for a future population of 10 billion people within planetary boundaries?". The commission emphasizes the critical link between dietary choices and climate change, advocating for sustainable food systems to mitigate environmental impacts. The planetary health plate is divided evenly between vegetables and fruits on one side and grains, plant protein sources, unsaturated plant oils, and limited amounts of animal protein sources on the other. This proposed diet aims to meet the global population's required calorie and nutrient intake. The quantity of agricultural yield is regularly measured and monitored at various geographic and temporal scales. Unfortunately, the quality of agricultural yield has received less attention. To realize the vision of the planetary health plate, it is essential to develop, test, and implement effective solutions for evaluating crop nutrient levels. Information on crop nutrients is critical for identifying potential deficiencies that may lead to micronutrient deficiency, also known as hidden hunger. This form of malnutrition is associated with serious health issues, including impaired physical and mental development, premature death, immune dysfunction, and reduced learning capacity. It affects more than 3 billion people around the world. Conventional methods for measuring nutrient levels typically consist of collecting grains at the maturity phase and performing a wet chemical analysis in the laboratory. Unfortunately, this method is time-consuming, destructive, and cost-prohibitive and, consequently, is not suitable for consistent quantification of nutrients across large spatial extents and across time. The EO4Nutri project focuses on addressing this challenge. 
Specifically, it aims at advancing our understanding of the lifecycle of nutrients from the soil to crop canopy and further to crop grains with data-driven and remotely sensed data. Target nutrients are those with high relevance to human nutrition and plant growth, namely Calcium (Ca), Iron (Fe), Magnesium (Mg), Nitrogen (N), Phosphorus (P), Potassium (K), Selenium (Se), Sulphur (S), and Zinc (Zn). The target crops are maize, rice and wheat. The EO4Nutri team conducted extensive measurements at the Jolanda di Savoia farm, operated by Bonifiche Ferraresi S.p.A. in the Po Valley (Emilia-Romagna region), over two growing seasons (2022/2023 and 2023/2024). The collected data included soil samples, pre-sowing soil and plant spectral measurements, plant biophysical parameters, and plant and grain samples. Proximal spectral measurements were taken with a handheld spectrometer during the vegetative, reproductive, and maturity stages of each crop in both growing seasons. In addition to these measurements, PRISMA and EnMAP satellite images were acquired at the three key growth stages. Important biophysical parameters, such as Leaf Area Index (LAI) and Leaf Chlorophyll Content (LCC), were measured during the vegetative and reproductive stages, and biomass samples were collected and weighed fresh. At the maturity stage, total fresh and dry biomass, along with yield, were recorded. Laboratory analysis was performed to determine nutrient content in plants and grains. We developed Partial Least Square Regression (PLSR) models for each nutrient, crop, and growth stage using proximal spectral measurements. The models yielded promising results (R² > 0.5) for Mg, Zn, P, S, and N across all growth stages, whereas Ca, Fe, K, and Se were accurately predicted only during the vegetative stage. Two-band vegetation indices (TBVIs) were also utilized to explore the relationship between plant and grain nutrients and various vegetation indices derived from PRISMA, EnMAP, and Sentinel-2. 
Consistent with proximal spectral measurements, promising predictions were achieved for Mg, Zn, P, S, N, K, and Ca across all growth stages, whereas Fe and Se predictions were less accurate. The most relevant spectral bands for estimating target nutrients were in the shortwave infrared (SWIR) and near-infrared (NIR) ranges, with the optimal band combinations varying by growth stage, crop, and nutrient. The EO4Nutri project demonstrates the potential of integrating spectral analysis and remote sensing to accurately predict critical crop nutrients, providing a scalable solution to support sustainable agriculture.

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: Agricultural Drought Monitoring in the Marchfeld Region Using Sentinel-2 Imagery and Deep Learning

Authors: Omid Ghorbanzadeh, Francesco
Affiliations: BOKU
Agricultural drought is defined by water deficits relative to crop needs and is worsened by climate change; it poses severe challenges to agriculture and water resources, and consequently to the economy and the environment. Drought can lead to yield losses of up to 90% in crops like maize, and irrigation is necessary in most agricultural fields to mitigate its adverse impacts. The Marchfeld region in Lower Austria exemplifies these challenges: it has a semi-arid climate with unique pedo-climatic conditions and is an important agricultural area for food production in Austria. Rainfall from May to September averages only 250–300 mm. This often results in water shortages in summer and requires increased irrigation. However, groundwater in this region is shared with the urban and industrial sectors, which complicates water management. Earth Observation (EO) data, such as Sentinel-2 images, are used for drought monitoring by assessing information from crop development. Integration of Sentinel-2 imaging spectrometry with advanced techniques offers solutions to assess plant water status and monitor drought conditions. However, traditional models for tracking drought often rely on single indicators like soil moisture or vegetation, which limits their ability to capture drought complexity. To address this, models integrate multiple indices, such as NDVI, NDWI, and EVI, alongside approaches like the Scaled Drought Condition Index (SDCI) to enhance drought monitoring. Recent advances in machine/deep learning (ML/DL) algorithms have improved drought monitoring by incorporating more indicators, such as precipitation and vegetation, offering sophisticated analysis of EO data. In this study, Sentinel-2 satellite data and ancillary datasets are combined with ML/DL techniques to enhance drought intensity mapping in maize fields in the Marchfeld region. 
We developed two DL algorithms, namely a Deep Neural Network (DNN) and a One-dimensional Convolutional Neural Network (1D CNN). Their performance was compared to that of common ML algorithms such as Random Forest (RF), optimized for this task. The research integrates complementary datasets, such as climate and soil data, with nearly 30 indices derived from time-series Sentinel-2 imagery as input data for our supervised algorithms. The study applies and compares these models to predict drought, evaluating their accuracy and performance for agricultural drought monitoring in the Marchfeld region. Key indices and hyperparameters were identified based on their effectiveness in drought prediction for maize fields. Finally, the research generates detailed drought maps for the growing season, providing pixel-level drought probabilities as accurate assessments of drought conditions. The findings reinforce the effectiveness of Sentinel-2 imagery in detecting and assessing drought stress, supporting its applicability in drought monitoring. Among the tested models, the DNN achieved the highest accuracy, followed by the 1D CNN and RF, proving effective for detecting and assessing drought stress.
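As a rough stand-in for the supervised drought classifier described above (the abstract does not specify the actual architectures), the sketch below trains a small multilayer perceptron on synthetic index features and outputs pixel-level drought probabilities. The labelling rule that generates the synthetic "drought" class is invented purely for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
n = 600

# Synthetic per-pixel indices; the labelling rule below is hypothetical:
# call a pixel "drought" when vegetation and water indices are jointly low.
ndvi = rng.uniform(0.0, 1.0, n)
ndwi = rng.uniform(-0.5, 0.5, n)
evi = rng.uniform(0.0, 1.0, n)
drought = ((ndvi < 0.35) & (ndwi < 0.0)).astype(int)

X = np.column_stack([ndvi, ndwi, evi])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, drought, test_size=0.25, random_state=0, stratify=drought)

# Small fully connected network standing in for the study's DNN.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]   # pixel-level drought probability
acc = clf.score(X_te, y_te)
```

In the actual study the feature matrix would hold nearly 30 time-series indices plus climate and soil covariates, and the per-pixel probabilities would be rendered as the seasonal drought maps the abstract describes.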

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: From Field Samples to Production Estimates: Evaluating Yield Estimation Models for Sub-National Statistics

Authors: Pierre Houdmont, Sophie Bontemps, Pierre Defourny
Affiliations: Université Catholique de Louvain - Earth And Life Institute
Estimating agricultural production is crucial for decision-makers and policymakers, particularly in managing stock importation, exportation, and international aid. This is especially important for ensuring food security in developing countries, where agricultural statistics are often unreliable. The era of satellite imagery has significantly improved the accuracy of crop acreage estimation, and numerous studies indicate that it has also enhanced yield estimation at subnational levels, making possible a global improvement in the estimation of agricultural production. Sen4Stat is an open-source toolbox offering various modules to support National Statistical Offices (NSOs) in improving their agricultural statistics using models that incorporate satellite Earth observation data. Two specific modules have been developed to ensure the most accurate yield estimation based on the data collected by the NSOs. The first module (Parcel-level model) relies on georeferenced yield measurement data for model calibration. Obtaining such data requires costly field measurement campaigns. The second module (Regional model) is designed for NSOs that are unable to carry out these campaigns. It uses the NSOs' historical estimates at sub-national levels, such as district or region, to calibrate the models. For comparison and evaluation purposes of the two yield modules proposed in Sen4Stat, a study was conducted on the estimation of wheat in France across 41 departments over 5 years (2017-2021). Calibration datasets for the parcel-level module were derived from yield data at the field and farm levels, provided by farmers as part of surveys for French national agricultural statistics. For the regional-level module, the calibration datasets were directly obtained from French national agricultural statistics. Both modules rely on machine learning regressors to link the variables of interest with the measured yield as a reference. 
Yield explanatory variables were defined based on meteorological data, soil moisture, and vegetation proxy variables derived from Earth observation (Sentinel-2). Parcel-level and departmental leaf area index (LAI) time series were smoothed using Savitzky-Golay interpolation. These time series enabled the identification of three phenological periods (vegetative, reproductive, and senescence), which were then used to calculate the variables of interest. We compared the performance of three regression algorithms proposed in the Sen4Stat yield modules: Random Forest, Support Vector Machine, and multi-linear regression. An evaluation of gradient boosting is ongoing, as it is expected to outperform Random Forest with a smaller calibration set. Leave-one-year-out validation was performed to assess each year individually. Field-level estimations were aggregated to the department scale for comparison with the regional module's estimations. Preliminary results showed that Random Forest outperformed Support Vector Machine and linear regression for estimating yield. Both modules demonstrated good temporal transferability. At the departmental level, aggregated field-level estimations outperformed the regional module's estimations, highlighting the potential of using large datasets at a small scale to improve sub-national yield statistics. However, the regional module demonstrated that accurate estimations can still be achieved despite the lack of detailed data. For each year, models from both modules explained more than 75% of the yield variability during the season and achieved high scores using only variables computed before senescence, making them suitable for forecasting.
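The leave-one-year-out scheme described above can be sketched in a few lines. This is a hedged illustration on synthetic data, not the Sen4Stat implementation; the feature count, coefficients, and model settings are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the calibration set: per-parcel explanatory variables
# (e.g. LAI, meteorological and soil-moisture features) and measured yields.
years = np.repeat(np.arange(2017, 2022), 200)   # 5 seasons, 200 parcels each
X = rng.normal(size=(len(years), 6))            # 6 explanatory variables
true_coefs = np.array([0.8, -0.5, 0.3, 0.2, 0.0, 0.1])
y = 7.0 + X @ true_coefs + 0.3 * rng.normal(size=len(years))  # yield, t/ha

# Leave-one-year-out: hold out each season in turn to test temporal transferability.
scores = {}
for test_year in np.unique(years):
    train, test = years != test_year, years == test_year
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train], y[train])
    scores[int(test_year)] = r2_score(y[test], model.predict(X[test]))

print(scores)
```

Holding out an entire season, rather than a random subset of parcels, is what makes the score a measure of temporal transferability.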
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: Enhancing Sustainable Agriculture Through Earth Observation: Insights From the CRISP (Consistent Rice Information for Sustainable Policy) Initiative

Authors: Giaime Origgi, Francesco Holecz, Massimo Barbieri, Luca Gatti, Renaud Mathieu, Emma Quicho, Sushree Sagarika Satapathy, Alessandro Marin, Roberto Di Rienzo, Gaetano Pace
Affiliations: sarmap sa, IRRI-Africa, CGI Italia, IRRI-Philippines
Consistent Rice Information for Sustainable Policy (CRISP) is a two-year ESA-funded project designed to address Indicator SDG 2.4.1, which measures the proportion of agricultural land under productive and sustainable agriculture. CRISP aims to contribute to achieving sustainable food production systems and resilient agricultural practices by 2030. The project collaborates with key Early Adopters, including AfricaRice, GEOGLAM, GIZ, the Syngenta Foundation, IFAD, WFP, and SRP, to ensure its solutions align with diverse stakeholder needs. The initiative focuses on scaling up advanced and cost-effective Earth Observation (EO) solutions to deliver vital information on seasonal rice planted areas, growing conditions, yield forecasts, and production at harvest. CRISP adopts a user-oriented approach, emphasizing the importance of active involvement from Early Adopters. This collaborative process helps users understand the capabilities and limitations of the proposed solutions, reduces the risk of setting unrealistic expectations, and ensures successful endorsement of the services. User requirements, carefully identified during the needs assessment phase, have been translated into algorithms and workflows tailored to address diverse and complex demands. Central to this effort is the adoption of a multi-mission EO strategy, leveraging existing operational rice area-yield services such as RIICE (Remote Sensing-based Information and Insurance for Crops in Emerging Economies). By integrating data from multi-mission EO systems, including Sentinel-1 and Sentinel-2, the solution provides flexibility and adaptability to meet a variety of user demands. Its robustness was tested across five distinct sites in South-East Asia, India, and Africa, demonstrating its ability to cater to the heterogeneous needs of stakeholders in different contexts. 
To validate the proposed solution, CRISP carried out use-case demonstrations across various test sites, each representing different scenarios and challenges. Specifically, CRISP addressed the following:
  • Evaluating the impact of drought on yield in the Luzon region (Philippines).
  • Generating yield maps and Start-of-Season (SoS) information for irrigated areas in the Senegal River Valley (Senegal).
  • Estimating yield loss following a flood event in Andhra Pradesh (India).
  • Providing yield maps and SoS information in the largest irrigated area of Mwea (Kenya).
  • Estimating yield in rainfed systems in the Kano region (Nigeria).
In its final phase, CRISP focuses on addressing the need for an operational service by integrating its solutions into an end-to-end processing chain deployed on a cloud computing infrastructure. This infrastructure is optimized for scalability and equipped with advanced analytical tools. Additionally, the project includes knowledge transfer activities through the organization of dedicated Living Labs. These labs aim to demonstrate the platform's potential while enabling users to familiarize themselves with the technology and its usability, fostering long-term adoption and impact.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: Early detection of soil salinization by means of EnMAP hyperspectral imagery and laboratory spectroscopy

Authors: Giacomo Lazzeri, Dr. Robert Milewsky, Dr. Saskia Foerster, Prof. Sandro Moretti, Prof. Sabine Chabrillat
Affiliations: University of Florence, Department of Earth Sciences, Helmholtz Center Potsdam GFZ German Research Centre for Geosciences, Umweltbundesamt (UBA) - German Environment Agency, Institute of Soil Science, Leibniz University Hannover
Soil salinization is the build-up of soluble salts in the topsoil, measured as electrical conductivity (EC, dS/m). Progressively increasing concentrations of salt in soil lead to decreases in crop productivity and, ultimately, soil sterility. From a global perspective, food production is predicted to increase by 62% by 2050, while soil salinization increased by 16.8% in the period 1986-2016, posing a serious threat to the future of soil health and food production. Salt-affected soils present complex spectral characteristics, with limited absorption features and strong modifications of surface reflectance. The magnitude of these spectral modifications is a function of salt concentration. The available literature shows that successful salinity detection applications rely on very high salt concentrations (9.80 dS/m) to maximize salt spectral evidence and detection capabilities. With the deployment of EnMAP, its unprecedented radiometric and spectral characteristics have opened new possibilities for the detection of salt-related spectral modifications. Therefore, we investigated EnMAP's detection capabilities for low levels of salinization, corresponding to the early stages of the phenomenon. To benchmark the prediction performance of the spaceborne-derived models, we used laboratory-derived spectral modelling results as a reference. The area under analysis is located in central Italy, in the Tuscany region, province of Grosseto. Extensive agriculture, combined with an evapotranspiration-to-precipitation deficit of -400 mm, has resulted in overexploitation of the groundwater reservoir. The area's proximity to the coast and the numerous channels allow for seawater intrusion during storm surges, contributing, together with other geological sources, to the total cation and anion budget. We conducted field acquisitions at the apex of the dry season, in September 2023, to maximize the probability of surface salt efflorescence occurrence.
Field samples were collected within 3 days of the EnMAP acquisition, with no rainfall occurring in between. Soil samples were processed according to the FAO salinity assessment guidelines (FAO, 2021. Standard operating procedure for soil electrical conductivity, soil/water, 1:5. Rome) and EC was measured. For the same field samples, we acquired laboratory spectra according to the procedure described by Gholizadeh et al. (Gholizadeh, A., Neumann, C., Chabrillat, S., van Wesemael, B., Castaldi, F., Borůvka, L., Sanderman, J., Klement, A., & Hohmann, C., 2021. Soil organic carbon estimation using VNIR–SWIR spectroscopy: The effect of multiple sensors and scanning conditions. Soil and Tillage Research, 211, 105017). Concomitantly, EnMAP image spectra were extracted at the locations of the collected field samples. Both laboratory- and EnMAP-derived spectra were tested to define the best preprocessing-regression algorithm combination for salinization detection. The preprocessing methods tested included Savitzky-Golay filters, continuum removal, PCA, and Norris gap derivatives. Similarly, the regression models used were PLSR, 2D correlograms, and a hyperparameter-tuned Random Forest regressor. Model results for the laboratory-derived spectra were taken as the reference for maximum model prediction capability, allowing us to assess the satellite-derived model predictions. Among the models tested, the correlogram-derived best band-index combination resulted in an R2 of 0.88 for laboratory data and 0.63 for EnMAP data. PLSR had the worst performance on both datasets. The Random Forest regressor proved its capability in detecting complex spectral features, with R2 scores of 0.72 for laboratory data and 0.60 for EnMAP. Considering only the EnMAP-derived spectra, the best correlogram-derived index, when applied to the whole spaceborne image, resulted in poor generalization of the salinity spatial extent and concentration.
In contrast, the trained Random Forest regressor, when deployed on the whole image, was able to capture the spatial variability of the phenomenon, with predicted concentration values in accordance with field observations and expert knowledge. Overall, the results attest to EnMAP's data quality. The similar statistical performance of the models tested supports our hypothesis on the feasibility of spaceborne early detection of topsoil salinization. Regarding the research outlook, the high spatial and temporal variability of salinity requires extensive field sampling efforts to capture the state of the phenomenon. Consequently, validating the predictions with new field evidence represents a challenge we will tackle in future research. In addition, the acquisition of new and numerous salt-affected soil spectra could be beneficial to increase the models' prediction and generalization capabilities, potentially allowing a transition from a site-calibrated model to one capable of generalizing across previously unseen areas.
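Of the preprocessing steps named above, continuum removal is the simplest to sketch: the spectrum is divided by its upper convex hull, so absorption features become depths relative to a 1.0 baseline. A minimal numpy version (not the authors' code; the wavelengths and synthetic spectrum below are assumptions for illustration):

```python
import numpy as np

def continuum_removal(wl, refl):
    """Divide a reflectance spectrum by its upper convex hull (the continuum)."""
    hull = [0]
    for i in range(1, len(wl)):
        # Pop hull points that fall below the chord from the previous hull
        # point to point i (monotone-chain construction of the upper hull).
        while len(hull) >= 2:
            x1, y1 = wl[hull[-2]], refl[hull[-2]]
            x2, y2 = wl[hull[-1]], refl[hull[-1]]
            if (refl[i] - y1) * (x2 - x1) >= (y2 - y1) * (wl[i] - x1):
                hull.pop()
            else:
                break
        hull.append(i)
    continuum = np.interp(wl, wl[hull], refl[hull])
    return refl / continuum

# Synthetic spectrum: sloping baseline plus one absorption feature near 1400 nm.
wl = np.linspace(400.0, 2500.0, 211)
refl = 0.3 + 1e-4 * (wl - 400.0) - 0.1 * np.exp(-((wl - 1400.0) / 80.0) ** 2)
cr = continuum_removal(wl, refl)
```

The continuum-removed spectrum equals 1.0 at the hull points and dips below 1.0 inside absorption features, which is what makes salt-related band depths comparable across spectra with different overall brightness.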
Add to Google Calendar

Tuesday 24 June 11:37 - 11:57 (EO Arena)

Demo: D.04.31 DEMO - NoR Updates and Road Map - session 2

This session will showcase the ESA Network of Resources initiative's current status, major updates, and road map.

Speaker:


  • Francesco Barchetta - Starion for ESA
Add to Google Calendar

Tuesday 24 June 12:00 - 12:20 (EO Arena)

Demo: D.04.15 DEMO - Dunia: an all-in-one processing and dissemination platform for EO data over Africa

Dunia stands as a comprehensive platform designed for Africans, welcoming both beginners and experts in Earth observation. Focused exclusively on the African continent, it empowers users to discover, build, and exchange valuable geospatial insights across Africa. Those interested can start with a free trial, diving into a web map browser and an innovative streaming solution that highlights the immense potential of Earth observation data. As users engage with the platform, they find a dynamic development environment where they can create tailored solutions for large-scale data processing. This capability fosters creativity and innovation, allowing individuals to transform data into impactful applications. At the same time, Dunia encourages collaboration within the African Earth observation community, offering a vibrant marketplace for data and applications. Discover, build and exchange yourself at https://dunia.esa.int.
During the session we will dive into all three core elements of Dunia: we will discover streamable datasets, look at example Jupyter notebooks in the Dunia Sandbox, build our own workflows in the Dunia Application Hub, and offer them in the Dunia Marketplace to the African EO community.

Speakers:


  • Johannes Schmid - IT Service and Operations Manager, GeoVille Information Systems and Data Processing GmbH
Add to Google Calendar

Tuesday 24 June 12:22 - 12:42 (EO Arena)

Demo: C.03.21 DEMO - SentiBoard: Your Real-Time Window into Copernicus Operations

This session will showcase the Copernicus Operations Dashboard, an online platform designed to provide real-time visibility into the operational status of the CSC-EOF (Copernicus Space Component - Earth Observation Framework). Known as SentiBoard, the dashboard integrates information from across the data acquisition, processing, and dissemination chain, offering users a unified and intuitive view of mission operations.
Through a guided live demonstration, we will explore the main features and navigation structure of the platform, highlighting how it supports monitoring activities and situational awareness. The session will include an overview of the dashboard's different sections, such as planned and actual acquisitions, publication statistics, and dissemination status, and will demonstrate how to access mission-specific insights and performance indicators.
The goal is to show how SentiBoard translates complex operational data into accessible and actionable information. Whether you're involved in satellite operations, mission planning, performance analysis, or simply interested in the infrastructure behind Copernicus data delivery, this session will offer a clear and engaging introduction to the tool.
Attendees will leave with a practical understanding of how to:
• Navigate the dashboard efficiently
• Interpret key visual indicators and metrics
• Access up-to-date information about mission activities
Join us to discover how the Copernicus Operations Dashboard enhances transparency and supports informed decision-making across the EO community.

Speakers:


  • Salvatore Tarchini - Serco
  • Daniele Rotella - Serco
  • Alessandra Paciucci - Serco
  • Rosa Fontana - Serco
Add to Google Calendar

Tuesday 24 June 12:45 - 13:05 (EO Arena)

Demo: D.01.15 DEMO - TourismSquare, monitor and anticipate the practicability of tourist activities according to environmental conditions and climate projections

The demonstration session will showcase how TourismSquare integrates satellite Earth observation data to help tourism stakeholders monitor environmental conditions, assess the feasibility of tourist activities, and anticipate climate impacts.
Attendees will explore the user-friendly web interface, which provides six key indicators (Human Activity, Air, Biodiversity, Climate, Land, and Water), and a digital-twin approach with simulation capabilities powered by predictive analytics, enabling data-driven tourism planning and supporting territorial management.
The live demonstrations will illustrate how the tool calculates practicability scores for different tourism activities, optimizes seasonal travel planning, and supports strategic decision-making for local authorities and businesses.
The session will conclude with a Q&A segment, offering participants the opportunity to discuss specific use cases and explore how TourismSquare can be tailored to their region’s tourism needs.

Link to a presentation video: https://youtu.be/sokdaEf2mSE

Speakers:


  • Fabien Castel
Add to Google Calendar

Tuesday 24 June 13:00 - 13:45 (Frontiers Agora)

Session: E.03.05 Shaping the Future of EO: Digital Systems & Disruptive Public-Private Models

This Agorà session invites participants to coalesce around the challenge of designing a future Earth Observation (EO) mission architecture that embraces new space paradigms: fresh perspectives and frameworks that redefine how space exploration, utilization, and governance are approached, together with innovative models that leverage technology, creativity, and collaboration to create a sustainable and accessible space ecosystem.
The goal is to promote a dynamic exchange of ideas to identify the key capabilities and synergies needed to tackle global scientific challenges, environmental sustainability, and cost-effectiveness.
A key theme for the Agorà is the role of digital infrastructure as an enabler of the next-generation EO ecosystem. Ideas around digital twins, cloud-based platforms, and blockchain-driven data traceability will be explored, focusing on their potential to enhance data sharing, transparency, and societal value.
The session will also delve into the transformative potential of Public-Private Partnerships (PPPs) in the realm of EO Missions. By exploring innovative collaboration models, stakeholders can envision how governments and private sector actors might co-create and operationalize EO systems, sharing risks and accelerating the deployment of cutting-edge technologies. Discussions will focus on the mutual benefits of these partnerships, from cost reduction to the rapid adaptation of services to emerging needs.
Through this participatory session, the Agorà seeks to align diverse perspectives, identifying priorities and innovative approaches to realize a collaborative, forward-looking EO architecture that meets both scientific and societal needs.

Moderators:


  • Emmanuel Pajot - EARS

Speakers:


  • Giovanni Sylos Labini - Planetek
  • Dominique Gillieron - ESA
  • Pierre Philippe Mathieu - ESA
  • Francesco Longo - Italian Space Agency (ASI)
  • Maria Santos - University of Zurich

Add to Google Calendar

Tuesday 24 June 13:00 - 13:45 (Nexus Agora)

Session: B.01.01 Amplifying impact through EO integration in international development finance mechanisms

Discover how Earth Observation (EO) technology is inducing and amplifying impact at scale in international development finance mechanisms. For over 15 years, the European Space Agency (ESA) has cultivated strategic partnerships with International Financial Institutions (IFIs) – such as the World Bank and regional development banks – to embed EO as a key contributor in development assistance operations. These collaborations have been bolstered by ESA-funded initiatives, most recently the Global Development Assistance (GDA) programme, which mobilises Europe's collective EO expertise to support global development and climate action – as well as complementary IFI activities, following jointly agreed cooperation principles to align efforts and resources.
Considering the potential of space-based applications to contribute to climate action and sustainable development, this agora will explore experiences and success stories on how integrating EO into financing mechanisms enhances decision-making, drives innovation, and accelerates impact across development activities. Join us to learn how ESA’s and its partners’ efforts are paving the way for scalable, sustainable EO adoption within global development cooperation frameworks.
This agora will highlight impact stories resulting from cooperation activities under ESA’s GDA programme and discuss with partner IFIs on how they take ownership in integrating those EO services to inform their operations and transfer it to their client countries. The discussion will focus on required steps to further foster wide-scale adoption and integration at the country level, in order to maximise socio-economic impact and stimulate growth of local digital economies.

Agenda:


Opening


  • Christoph Aubrecht – ESA, Programme Coordinator Global Development Assistance
  • Rune Floberghagen – ESA, Head of Climate Action, Sustainability and Science Department

Panel


  • Renaud Seligmann – World Bank, Director Strategy & Operations, Planet
  • Eric Quincieu – ADB, Principal Water Resources Specialist
  • Rafael Anta – IDB, Principal Specialist in Science, Technology & Innovation
  • Gladys Morales Guevara – IFAD, Global Head of Innovation
  • Fani Kallianou de Jong – EBRD, Principal Manager Climate Strategy & Delivery
Add to Google Calendar

Tuesday 24 June 13:00 - 14:30 (ESA Agora)

Session: D.02.14 AI and Earth observation - where to now?

Artificial Intelligence (AI) has entered all areas of society, and Earth Observation (EO) is no exception. The insights contained in multi-sensor (multimodal) data can complement and empower traditional physical models and work in unison towards solutions that are accurate, explainable, and able to enhance scientific discovery. The rise of language models in EO also opens new opportunities for interaction with users, or to mine the massive data archive with semantics. In this session, members of the Phi-Lab and invited professors will discuss the latest advances and engage critically (and provocatively) with the audience in a forward-looking discussion about the future at the interface of AI and remote sensing.

Speakers:


  • Konrad Schindler (ETH, Switzerland) and XiaoXiang Zhu (TUM, Germany) : foundation models
  • Gustau Camps-Valls (Universitat de València, Spain) and Mihai Datcu (University POLITEHNICA of Bucharest, Romania) : interpretable AI and causality
  • Fabio del Frate (Università di Tor Vergata, Italy) and Bertrand Le Saux (DG Connect, European Commission) : physics-driven models
  • Devis Tuia (EPFL, Switzerland) Jan van Rijn (Leiden University, the Netherlands) and Nicolas Longepe (ESA) : user-centric AI
Add to Google Calendar

Tuesday 24 June 13:07 - 13:27 (EO Arena)

Demo: D.03.31 DEMO - SNAP in Action - Various Application Examples throughout the week demonstrating the power of SNAP for EO data visualisation, analysis and processing - session 2

SNAP is the ESA toolbox for visualising, analysing and processing optical and microwave EO data. SNAP supports a large number of current and past satellite sensors as well as generic data formats. SNAP addresses all kinds of users, from early-stage students, through experienced researchers, up to production managers responsible for public and commercial EO processing services.

In a series of demonstrations we showcase this breadth of possibilities across various real-life land and water applications. Demonstrations will be repeated multiple times to allow as many participants as possible to join a specific demonstration. We will tailor the daily programme from a set of prepared demonstrations according to the themes of the days, and user needs if expressed during the conference.

The following list gives a glimpse of the demonstrations from which we can select:
1. Sentinel-1 ETAD processing with SNAP
2. Change Detection Monitoring
3. Supporting new SAR missions with SNAP
4. “Live” fire evolution in Los Angeles using Sentinel-2 image
5. Burned Areas Detection – Mehedinti, Romania
6. Monitoring Drought Evolution – Dobrogea, Romania
7. Water Quality in urban areas at the example of the city of Hamburg
8. Interpreting Hyperspectral Data for coastal habitat mapping

Speakers:


  • Diana Harosa - CS Romania
  • Cosmin Cara - CS Romania
Add to Google Calendar

Tuesday 24 June 13:30 - 13:50 (EO Arena)

Demo: D.04.24 DEMO - Streamlining Snow monitoring with openEO and CDSE

Snow monitoring plays a crucial role in water resource management. The increasing availability of remote sensing data offers significant advantages but also introduces challenges related to data accessibility, processing, and storage. For operational use, scalable workflows are essential to ensure global applicability.
Leveraging a cloud-based platform such as the Copernicus Data Space Ecosystem (CDSE) enables efficient data processing directly where the data are stored, without data download. Our workflows are built using the openEO API, which provides a standardized interface for accessing and processing large Earth observation datasets worldwide.
In this demonstration, we will showcase key applications for snow monitoring. Specifically, we will explore snow and ice cover classification, snow cover fraction downscaling, wet snow detection, and snow albedo estimation. The session will illustrate how different sensors and methodologies can be leveraged to achieve reliable outputs while demonstrating the power and scalability of cloud computing platforms. A particular focus will be placed on how our workflow leverages cloud scalability to reconstruct long-term time series at high spatial resolution—crucial for monitoring snow over large areas and extended periods.
This demo is suited for researchers, practitioners, and decision-makers interested in snow monitoring, as well as those looking to integrate openEO-based workflows into their environmental data processing pipelines. Participants will gain insights into how cloud-based infrastructures streamline large-scale Earth observation analysis.
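The snow and ice cover classification mentioned above is commonly based on the Normalised Difference Snow Index, NDSI = (green - SWIR) / (green + SWIR), since snow is bright in the green band and dark in the shortwave infrared. The sketch below is a simplified thresholding illustration, not the Eurac/VITO openEO workflow; the 0.4 threshold and the toy reflectance tiles are assumptions:

```python
import numpy as np

def ndsi_snow_mask(green, swir, threshold=0.4):
    """Classify snow where NDSI = (green - swir) / (green + swir) exceeds a threshold."""
    ndsi = (green - swir) / np.maximum(green + swir, 1e-6)
    return ndsi > threshold

# Toy reflectance tiles: top row is snow-like (bright green, dark SWIR),
# bottom row is snow-free ground.
green = np.array([[0.8, 0.7], [0.2, 0.1]])
swir = np.array([[0.1, 0.1], [0.2, 0.1]])
print(ndsi_snow_mask(green, swir))
```

In an openEO workflow the same band arithmetic would be expressed as a process graph on a data cube, so it runs at scale next to the archive instead of on downloaded arrays.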

Valentina Premier1, Riccardo Barella1, Stefaan Lippens2, Emile Sonneveld2, Carlo Marin1, Michele Claus1, Alexander Jacob1, Jeroen Dries2
1Eurac research, Institute for Earth Observation, Bolzano (Italy)
2VITO Remote Sensing, Mol (Belgium)

Speakers:


  • Valentina Premier - EURAC
  • Riccardo Barella - EURAC
Add to Google Calendar

Tuesday 24 June 13:52 - 14:12 (EO Arena)

Demo: D.04.17 DEMO - Interactively visualise your project results in Copernicus Browser in no time

In this demo, we will demonstrate how to interactively visualize and explore your project results using Copernicus Browser. Copernicus Browser is a frontend application within the Copernicus Data Space Ecosystem, designed to explore, visualize, analyze, and download Earth Observation data.

We will guide you through the necessary steps to prepare your data for ingestion, introduce various services within the Ecosystem, including one that supports data ingestion (the Bring Your Own COG API), and show you how to configure your data for interactive visualization. This includes setting up a configuration file, writing an Evalscript, and creating a legend.

Finally, we will demonstrate how to visualize and analyze results within Copernicus Browser.

Speakers:


  • Daniel Thiex - Sinergise
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Session: A.01.08 Planetary Boundary Layer from Space

The planetary boundary layer (PBL) plays an essential role in weather and climate, which are critical to human activities. While much information about the temperature and water vapor structure of the atmosphere above the PBL is available from space observations, EO satellites have been less successful in accurately observing PBL temperature and water vapor profiles and in constraining PBL modelling and data assimilation. Improved PBL models and parameterizations would lead to significantly better weather and climate prediction, with large societal benefits.

In the latest US National Academies’ Earth Science Decadal Survey, the PBL was recommended as an incubation targeted observable. In 2021, the NASA PBL Incubation Study Team published a report highlighting the need for a global PBL observing system with a PBL space mission at its core. To solve several of the critical weather and climate PBL science challenges, there is an urgent need for high-resolution and more accurate global observations of PBL water vapor and temperature profiles, and PBL height. These observations are not yet available from space but are within our grasp in the next decade. This can be achieved by investing in optimal combinations of different approaches and technologies. This session welcomes presentations focused on the PBL, from the observational, modeling and data assimilation perspectives. In particular, this session welcomes presentations focused on future EO PBL remote sensing missions and concepts, diverse observational approaches (e.g., active sensing, constellation of passive sensors, hyperspectral measurements, high-altitude pseudo satellites) and potential combinations of techniques to optimally depict the 3D structure of PBL temperature and water vapor.

Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Synergistic Use of Satellite Data at EUMETSAT for improved Planetary Boundary Layer Detection

Authors: Axel Von Engeln
Affiliations: EUMETSAT
EUMETSAT has been operating satellites since 1977, initially only in geostationary orbit and, with the addition of the EUMETSAT Polar System (EPS) in 2006, also in low Earth orbit. In particular, the EPS satellites provide a rich data source, with several instruments providing collocated measurements over a swath of up to 2900 km. The EPS instruments cover cloud information (e.g., the AVHRR-3 instrument), temperature and water vapour profile information at microwave, infrared, and GPS frequencies (e.g., the IASI, MHS, AMSU-A, and GRAS instruments), as well as several instruments operating in the UV, visible, and near infrared (e.g., the GOME-2 and AVHRR-3 instruments). This dataset spans more than 15 years; several years have had 2-3 EPS satellites in orbit. Regular reprocessing activities ensure that earlier datasets are consistently processed with the latest available processor, so long-term data assessments are possible. Following the identification of the Planetary Boundary Layer (PBL) as an incubation targeted observable within the latest Decadal Survey in the US, EUMETSAT has started to assess its data records and has identified possible synergistic uses for improved PBL detection. This includes the combination of microwave, infrared, and radio occultation instruments for temperature and water vapour profiling (where AVHRR-3 provides cloud coverage information), as well as the use of radio occultations for PBL height detection (working together with the Radio Occultation Meteorology Satellite Application Facility, ROM SAF). Additionally, the use of the UV, visible, and near infrared instrument fleet will be assessed to provide aerosol, cloud, total column water vapour and other information.
The presentation will give an overview of the EUMETSAT data sets that can already be exploited for PBL retrievals, present selected results obtained from single instruments, provide an overview of the planned next steps, and also discuss use of the next-generation satellites in geostationary (MTG-S) and polar (EPS-SG) orbit.
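One widely used radio-occultation approach to PBL height detection, of the kind referred to above, locates the altitude of the sharpest negative vertical gradient of refractivity, which marks the strong moisture drop at the PBL top. A toy numpy sketch on a synthetic profile (an illustration of the gradient method, not an operational ROM SAF algorithm; the profile shape and numbers are assumptions):

```python
import numpy as np

def pbl_height_from_refractivity(z, N):
    """Return the altitude of the minimum (most negative) vertical gradient dN/dz."""
    dNdz = np.gradient(N, z)
    return z[np.argmin(dNdz)]

# Synthetic refractivity profile: smooth exponential decay plus a sharp drop
# near 1.5 km, mimicking the moisture gradient at the PBL top.
z = np.linspace(0.0, 10.0, 501)                                   # altitude [km]
N = 300.0 * np.exp(-z / 8.0) - 15.0 / (1.0 + np.exp(-(z - 1.5) / 0.05))
print(pbl_height_from_refractivity(z, N))
```

The same idea can be applied to bending angle instead of refractivity; both quantities show a sharp vertical structure at the PBL top that passive sounders smooth out.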
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Profiling the Planetary Atmospheric Boundary Layer From Space: the Perspective of “Space It Up!”

Authors: Domenico Cimini, Maria Pia De Natale, Francesco Di Paola, Donatello Gallucci, Sabrina Gentile, Edoardo Geraldi, Salvatore Larosa, Saverio T. Nilo, Elisabetta Ricciardelli, Filomena Romano, Mariassunta Viggiano, Dr Thomas August, Axel Von Engeln
Affiliations: CNR-IMAA, ESA, EUMETSAT
"Space It Up!" is a multidisciplinary project funded by the Italian Ministry of University and Research (MUR) and the Italian Space Agency (ASI), aiming at developing space-borne solutions with breakthrough potential, ranging from Earth observations (EO) to human extraterrestrial exploration. "Space It Up!" is organised in thematic Spokes, among which Spoke 7 ("Space for the sustainable development of the planet") aims to increase the technology readiness level of EO solutions to improve current capabilities in process observation and prediction and fostering the achievement of heterogeneous sustainable development goals (SDGs). Activities have started in August 2024 and will last for three years. The importance of Planetary Atmospheric Boundary Layer (PABL) profiling is recognized for several SDGs and societal needs, such as climate action, severe weather hazards, renewable energy, air pollution, and food production. ABL profiling from ground-based remote sensing has increased in the last decade, leading to the establishment of international networks, such as the European EPROFILE program (Rüfenacht et al., 2021). However, ABL profiling from ground is limited to instrumented sites and lacks global coverage. In this framework, a review of the current technologies available for PABL profiling from space is being performed, looking at advantages, limitations, and future perspectives. In particular, a study is being performed to investigate the potential of combined microwave and infrared (MW-IR) satellite observations to detect PABL height and retrieve PABL temperature and humidity profiles. A machine learning approach is applied to simulated observations from MW-IR sensors on current EUMETSAT Polar System (EPS) MetOp series, namely the Advanced Microwave Sounding (AMSU), the Microwave Humidity Sounder (MHS), and the Infrared Atmospheric Sounding Interferometer (IASI). 
In addition, the same approach is applied to instruments to be launched on the EPS Second Generation (EPS-SG) series, i.e., the Microwave Sounder (MWS) and IASI Next Generation (IASI-NG), which provide enhanced spatial and spectral resolution, as well as lower instrumental noise, with respect to their predecessors, offering increased potential for PABL profiling. This presentation will provide the perspective of “Space It Up!” on PABL profiling from space. The opportunity to validate the space-borne retrievals against nearly continuous ground-based observations will also be discussed, presenting the available products from E-PROFILE networks of hundreds of ceilometers and Doppler wind lidars (delivering PABL height) and tens of microwave radiometers (delivering PABL temperature and humidity profiles) in Europe. Rüfenacht, R., Haefele, A., Pospichal, B., Cimini, D., Bircher-Adrot, S., Turp, M., Sugier, J.: EUMETNET opens to microwave radiometers for operational thermodynamical profiling in Europe. Bull. of Atmos. Sci. & Technol. 2, 4, https://doi.org/10.1007/s42865-021-00033-w, 2021.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Hyperspectral PBL Thermodynamic Structure Observations from Photonic Integrated Circuit Microwave Radiometers

Authors: Patrick Stegmann, Narges Shahroudi, Alexander Kotsakis, Stephen Nicholls, Fabrizio Gambini, Antonia Gambacorta
Affiliations: NASA GSFC, UMD ESSIC, SSAI, UMBC
Remote sensing of the full three-dimensional (3D) thermodynamic structure of the Planetary Boundary Layer (PBL) segment of the terrestrial atmosphere on the basis of passive satellite-based radiometers remains a significant challenge to the scientific community. The 2017 NASA Decadal Survey (DS, NASEM, 2018a) and the NASA PBL Incubation Study Team Report (STR) (Teixeira et al., 2021) have identified retrievals of PBL 3D thermodynamic structure with enhanced horizontal and vertical resolution, and of PBL height with enhanced fidelity, as cornerstones for future advances in Earth System Science (ESS). Passive hyperspectral microwave (MW) radiometers offer a solution for 3D PBL thermodynamics retrievals under all-sky conditions, i.e. not only for a clear-sky atmospheric state without clouds or aerosols present in the scene. However, conventional technologies do not permit the construction of instruments that are compact enough for deployment in orbit while retaining a sufficient signal-to-noise ratio (SNR). Photonic Integrated Circuit technology employed at NASA Goddard Space Flight Center (GSFC) offers a means to construct such instruments while simultaneously reaching an unprecedented Size, Weight, Power and Cost (SWaPC) minimum for a potential smallsat deployment. Hyperspectral MW instruments currently in testing and development at NASA GSFC are CoSMIR-H, HyMPI, and AURORA Pathfinder. The first CoSMIR-H observation data were recently collected during the WH2yMSIE field campaign over the US West Coast and Rocky Mountains. At the same time, algorithmic infrastructure to process these hyperspectral MW observations and retrieve PBL thermodynamic profiles is under development based on the Community Radiative Transfer Model (CRTM). Following this approach ensures operational readiness of the instrument data pipeline and compatibility with the infrastructure of NASA partners, such as NOAA.
The CRTM is integrated into the Microwave Integrated Retrieval System (MIRS) used operationally at NOAA for retrievals from conventional MW instruments such as ATMS. However, several sources of uncertainty remain in the interpretation of the PBL observations, and the sheer amount of hyperspectral data will pose a significant challenge for any conventional operational retrieval algorithm.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Profiling Arctic Tropospheric Water Vapor Using the Differential Absorption G-band Radar GRaWAC

Authors: Sabrina Schnitt, Mario Mech, Jens Goliasch, Thomas Rose, Linnu Bühler, Nils Risse, Susanne Crewell
Affiliations: Institute for Geophysics and Meteorology, University of Cologne, Radiometer Physics GmbH
Low-tropospheric water vapor is a central component of multiple feedback processes known to contribute to amplified warming in the Arctic. Continuous, highly resolved all-weather profiling observations are key to advancing the understanding of the Arctic water cycle in a rapidly changing Arctic climate and to improving the representation of PBL mixed-phase clouds in modeling. Yet current state-of-the-art measurement techniques are limited by the occurrence of clouds, precipitation, and polar night, or lack the needed temporal or vertical resolution. The newly emerging Differential Absorption Radar (DAR) technique can overcome some of these challenges, as in-cloud water vapor profiles can be derived continuously. We illustrate the advantages of this novel technique for the Arctic PBL based on recent measurements obtained with the novel and unique G-band Radar for Water vapor and Arctic Clouds (GRaWAC). GRaWAC is a Doppler-capable, FMCW G-band radar with simultaneous dual-frequency operation at 167 and 175 GHz. Our recent measurement suite includes observations from the AWIPEV station in Ny-Ålesund, from the central Arctic aboard an RV Polarstern cruise, and along the Norwegian coast aboard AWI’s Polar-6 research aircraft. We apply the DAR technique to our measurements to derive temporally continuous in-cloud profiles in cloudy and precipitating conditions. When deployed from aircraft, we additionally retrieve the column amount in clear-air conditions. We investigate advantages and limitations of water vapor profiles derived from the stand-alone DAR technique, including cloud properties, retrieval resolution, and accuracy. Additionally, we illustrate alternative water vapor retrieval methods by making use of the synergy with passive microwave radiometer or conventional cloud radar measurements.
By embedding GRaWAC measurements in a multi-frequency cloud radar synergy, we find fingerprints of precipitation-forming processes, and highlight the potential of our measurements for future model evaluation studies.
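The core of the DAR retrieval described above can be illustrated with a minimal numerical sketch: water vapor density follows from the range derivative of the log reflectivity ratio between the more and less absorbed G-band frequencies, divided by twice the differential absorption coefficient (the factor of two accounts for the two-way path). The coefficient value, array shapes, and function name below are illustrative assumptions, not GRaWAC's actual calibration or processing chain.

```python
import numpy as np

def dar_wv_profile(ze_on, ze_off, range_m, dkappa=2.1e-3):
    """Water vapor density (kg/m^3) from dual-frequency radar reflectivities.

    ze_on / ze_off: linear-unit reflectivities at the more / less absorbed
    frequencies; dkappa: assumed differential mass absorption coefficient
    (m^2/kg) between the two channels -- an illustrative placeholder value.
    """
    log_ratio = np.log(ze_on / ze_off)       # two-way differential attenuation
    slope = np.gradient(log_ratio, range_m)  # range derivative of the log ratio
    return -slope / (2.0 * dkappa)           # factor 2: two-way propagation
```

A synthetic test with constant humidity (so the log ratio is linear in range) recovers the prescribed value exactly, which is a useful sanity check before applying the estimator to noisy measured spectra.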
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Daytime convective development over land: The role of surface forcing

Authors: Wojciech Grabowski
Affiliations: NCAR
Water availability at the Earth's surface determines the partitioning of the surface heat flux into its sensible and latent components, that is, the surface heat flux Bowen ratio. The two components affect the surface buoyancy flux differently, and thus the development and growth of the convective boundary layer. As a result, the Bowen ratio has a critical impact on daytime dry and moist convection development over land. We use two canonical modeling test cases, one for shallow convection and one for the shallow-to-deep convection transition, to document the impact of the surface Bowen ratio on daytime convection development. A simple approach is used in which results from simulations featuring the original setup are contrasted with simulations in which the surface sensible heat flux takes on the values of the latent heat flux and vice versa. Such a swap illustrates the key impact of surface water availability without changing the total surface heat flux, which itself affects convective boundary layer development. Because of the larger surface buoyancy flux, simulations with the reversed surface heat fluxes feature faster deepening of the convective boundary layer and wider clouds once moist convection develops. The mean cloud-base width of cumulus clouds increases as the boundary layer deepens. A simple explanation is provided of why a deeper well-mixed convective subcloud layer results in wider clouds: the key is the larger width of coherent boundary layer updraft structures when the convective subcloud layer is deeper. We also document an important role of the lower-tropospheric horizontal flow, which affects the cloud-base width of convective clouds because of the contrasting organization of boundary layer eddies in purely convective versus mixed convective/shear-driven boundary layers.
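The effect of swapping the sensible and latent heat fluxes while keeping their sum fixed can be quantified with the standard surface buoyancy flux approximation. The constants and flux magnitudes below are illustrative assumptions, not taken from the paper's simulations.

```python
# Illustrative constants (not the paper's model configuration)
g, theta0 = 9.81, 300.0          # gravity (m/s^2), reference potential temperature (K)
rho_air, cp, Lv = 1.2, 1004.0, 2.5e6  # air density, heat capacity, latent heat

def buoyancy_flux(shf, lhf):
    """Kinematic surface buoyancy flux (m^2/s^3) from sensible and latent heat
    fluxes in W/m^2, using w'b' ~ (g/theta0) * (w'theta' + 0.61*theta0*w'q')."""
    w_theta = shf / (rho_air * cp)   # kinematic heat flux (K m/s)
    w_q = lhf / (rho_air * Lv)       # kinematic moisture flux (kg/kg m/s)
    return (g / theta0) * (w_theta + 0.61 * theta0 * w_q)

# Same total surface flux (500 W/m^2), reversed partitioning:
moist = buoyancy_flux(100.0, 400.0)  # Bowen ratio 0.25 (moist surface)
dry = buoyancy_flux(400.0, 100.0)    # Bowen ratio 4.0 (reversed fluxes)
```

With these numbers the reversed case yields roughly three times the buoyancy flux of the moist case, consistent with the faster boundary-layer deepening the abstract reports, because the sensible component contributes far more to buoyancy per W/m² than the moisture term.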
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Session: B.03.06 Climate, Environment, and Human Health - PART 3

It is well-known that many communicable and non-communicable diseases have a seasonal component. For example, flu and the common cold tend to increase in autumn and winter, whilst vector-borne diseases like Dengue and West Nile Virus tend to peak in late summer when the vectors are at their most abundant. Under monsoon regimes, many diseases peak during the rainy season. Hay fever, spring-time allergies and other respiratory disorders also have seasonality related to the abundance of pollens and other allergens in the air. Environmental conditions in water, air and land have a role in regulating the variability in the presence or absence and abundance of pathogenic organisms or material in the environment, as well as of the agents of disease transmission like mosquitoes or birds. For example, air temperature and relative humidity are linked to flu outbreaks. Water quality in coastal and inland water bodies impacts outbreaks of many water-borne diseases, such as cholera and other diarrheal diseases, associated with pathogenic bacteria that occur in water. The seasonality has inter-annual variability superimposed on it that is difficult to predict. Furthermore, in the event of natural disasters such as floods or droughts, there are often dramatic increases in environmentally-linked diseases, related to the breakdown of infrastructure and sanitation conditions.

Climate change has exacerbated issues related to human health, with shifting patterns in environmental conditions, changes in the frequency and magnitude of extreme events, such as marine heat waves and flooding, and impacts on water quality. Such changes have also led to geographic shifts of vector-borne diseases, as vectors move into areas that become more suitable for them as these warm, or retreat from areas that become too hot in summer. The length of the seasons during which diseases may occur can also change as winters become shorter. There are growing reports of the incidence of tropical diseases at higher latitudes as environmental conditions become favourable for the survival and growth of pathogenic organisms.

Climate science has long recognised the need for monitoring Essential Climate Variables (ECVs) in a consistent and sustained manner at the global scale and with high spatial and temporal resolution. Earth observation via satellites has an important role to play in creating long-term time series of satellite-based ECVs over land, ocean, atmosphere and the cryosphere, as demonstrated, for example, through the Climate Change Initiative of the European Space Agency. However, the applications of satellite data for investigating shifting patterns in environmentally-related diseases remain under-exploited. This session is open to contributions on all aspects of investigation into the links between climate and human health, including but not limited to, trends in changing patterns of disease outbreaks associated with climate change; use of artificial intelligence and big data to understand disease outbreaks and spreading; integration of satellite data with epidemiological data to understand disease patterns and outbreaks; and models for predicting and mapping health risks.

This session will also address critical research gaps in the use of Earth Observation (EO) data to study health impacts, recognizing the importance of integrating diverse data sources, ensuring equitable representation of various populations, expanding geographic scope, improving air pollution monitoring, and understanding gaps in healthcare delivery. By addressing these gaps, we aim to enhance the utility of EO data in promoting health equity and improving health outcomes globally.

The United Nations (UN) defines climate change as the long-term shift in average temperatures and weather patterns caused by natural and anthropogenic processes. Since the 1800s, human emissions and activities have been the main causes of climate change, mainly due to the release of carbon dioxide and other greenhouse gases into the atmosphere. The United Nations Framework Convention on Climate Change (UNFCCC) is leading international efforts to combat climate change and limit global warming to well below 2 degrees Celsius above pre-industrial levels (1850–1900), as set out in the Paris Agreement. To achieve this objective and to make decisions on climate change mitigation and adaptation, the UNFCCC requires systematic observations of the climate system.

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to provide an objective source of scientific information about climate change. The Synthesis Report, the final part of the Sixth Assessment Report (AR6) of the IPCC, released in early 2023, stated that human activities have unequivocally caused global warming, with global surface temperature reaching 1.1°C above pre-industrial levels in 2011–2020. Additionally, AR6 described Earth Observation (EO) satellite measurement techniques as relevant Earth system observation sources for climate assessments, since they now provide long time series of climate records. Monitoring climate from space is a powerful role for EO satellites, since they collect global, time-series information on important climate components. Essential Climate Variables (ECVs) are key parameters that characterise the state of the Earth’s climate. The measurement of ECVs provides empirical evidence of the evolution of the climate; therefore, ECVs can be used to guide mitigation and adaptation measures, to assess risks, and to enable attribution of climate events to underlying causes.

An example of an immediate and direct impact of climate change is on human exposure to high outdoor temperatures, which is associated with morbidity and an increased risk of premature death. The World Health Organization (WHO) reports that between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from malnutrition, malaria, diarrhoea and heat stress alone. WHO data also show that almost all of the global population (99%) breathe air that exceeds WHO guideline limits. Air quality is closely linked to the Earth’s climate and ecosystems globally; therefore, if no adaptation occurs, climate change and air pollution combined will exacerbate the health burden at an accelerating rate in the coming decades.
Therefore, this LPS25 session will include presentations that demonstrate how EO satellite insights can support current climate actions and guide the design of climate adaptation and mitigation policies to protect and ensure the health of people, animals, and ecosystems on Earth (e.g., WHO’s One Health approach).
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Integrating Hydrological Simulations and High-resolution Water Quality Parameters to Characterize the Influence of River Plumes on Aquaculture Sites in the Coastal Waters of Abruzzo Region, Italy

Authors: Carla Ippoliti, Susanna Tora, Federico Filipponi, Romolo Salini, Barbara Tomassetti, Federica Di Giacinto, Alessio Di Lorenzo, Annalina Lombardi, Carla Giansante, Annamaria Conte
Affiliations: Istituto Zooprofilattico Sperimentale "G. Caporale" - Teramo, National Research Council (CNR), Center of Excellence in Telesensing of Environment and Model Prediction of Severe Events (CETEMPS), University of L’Aquila
River plumes result from the dispersion of sediments, nutrients, and other materials carried by rivers into the sea: they can be identified by their distinct salinity, temperature, turbidity, and optical properties, which are measurable via Satellite Earth Observation (SEO). These plumes can contain a high concentration of nutrients for phytoplankton (nitrogenous and phosphorous substances), but also organic discharges, and can therefore influence the productivity and health of aquaculture systems. Monitoring the spatial distribution of suspended solids and the concentration of phytoplankton, through the estimation of the chlorophyll-a parameter, near aquaculture sites is essential to assess the growth potential of molluscs and to mitigate potential health risks. Such knowledge can guide the development of targeted prevention measures and adaptive management strategies, ensuring the sustainability and resilience of aquaculture operations in coastal regions under climate change conditions. In addition, it can also contribute to identifying the most suitable production areas. Notably, aquaculture is becoming increasingly important as a source of high-quality food, and hence for supporting and maintaining a healthy population. In this study, we integrated river discharge data, modelled through the Cetemps Hydrological Model (CHyM) for the main rivers in the Abruzzo region, with high-resolution water quality parameters estimated from Copernicus Sentinel-2 imagery over the coastal area in the central Adriatic Sea. This approach allowed us to characterize, at 10 m spatial resolution, the distribution of turbidity and chlorophyll-a concentrations reaching mollusc farms located along the coastal area in the period 2016–2024.
To improve the accuracy of satellite-based mapping of parameters estimated from water inherent optical properties, the Case-2 Regional Coast Colour (C2RCC) algorithm was regionally calibrated using a set of in situ acquisitions along the Abruzzo coast, collected during 12 boat campaigns (2019–2020) at 20 sampling points distributed between the Pescara river mouth and a mussel farm. We generated time series of turbidity and chlorophyll-a concentration at 10 m spatial resolution, using all available cloud-free Sentinel-2 MSI satellite acquisitions in the period 01 July 2016 – 31 December 2024. The CHyM model estimates hourly discharge rates (m³/h) at the estuaries of Abruzzo's main rivers. These estimates are obtained through hydrological simulations forced by high-resolution rain gauge and temperature observations. The fine spatial and temporal resolution of these observations allows for a realistic representation of precipitation patterns and their impact on river flow. For each aquaculture site, the distributions of the two parameters were summarised, highlighting anomalies. These anomalies were identified across the time series and correlated with river discharge quantities. Typically, according to the in situ data, turbidity decreases and salinity increases when moving away from the coast. This general trend, expected when the fresher and colder waters of the river mix with the saltier and warmer marine waters, is well captured by SEO imagery: turbidity values in the coastal waters (0–3 nautical miles, NM) of the Abruzzo region have a mean of 4.48 FNU and a standard deviation of 1.58. Chlorophyll-a has a mean of 0.20 and a standard deviation of 0.04 within the same 0–3 NM coastal waters, indicating oligotrophic waters. On specific dates, river discharge significantly influenced turbidity and chlorophyll-a distribution, as detected through combined analysis of Sentinel-2 and CHyM outputs.
This integration of hydrological and SEO data highlights the value of multi-source approaches for monitoring coastal ecosystems. This approach would help identifying and quantifying deviations during extreme events, when the plume may extend further or shift direction, posing potential risks to the aquaculture sites and molluscs production.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Earth Observation Insights on Climate-Induced Shifts in Culicoides imicola Distribution: A Vector-Borne Disease Perspective in Europe and the Mediterranean

Authors: Lara Savini, Annamaria Conte, Maria Goffredo, Tommaso Orusa, Michela Quaglia, Miguel Ángel Miranda, Thomas Balenghien, Luca Candeloro
Affiliations: Istituto Zooprofilattico Sperimentale "G. Caporale" - Teramo, Applied Zoology and Animal Conservation research group, University of the Balearic Islands, Palma, CIRAD, UMR ASTRE, F-34398 Montpellier
Vector-borne diseases (VBDs) represent an increasing global health threat, with climate change significantly altering the habitats of key insect vector species. Among these, the biting midge species Culicoides imicola stands out as a major field transmitter of several viral diseases affecting livestock, such as Bluetongue, Epizootic Hemorrhagic Disease, and African Horse Sickness, and has been involved in Schmallenberg virus transmission. Climatic factors are crucial in driving the global distribution of C. imicola, which is expected to shift significantly under climate change scenarios. This study focuses on modeling the climatic and environmental suitability of C. imicola as part of a broader effort within the Horizon Europe WiLiMan-ID project, which aims to integrate pathogen, host, and climatic-environmental data to address high-priority animal diseases, contributing to global food security, economic stability, and public health. We modeled the climatic and environmental suitability of C. imicola across Europe and the Mediterranean Basin, producing publicly accessible raster datasets with a spatio-temporal resolution of 1 km and an 8-day interval. These datasets span over six decades, covering the period from 1960 to the present. Our approach integrates Earth Observation (EO) data with machine learning (ML) techniques. Livestock density data, derived from FAO's global livestock distribution maps, were combined with key climatic-environmental predictors — including temperature, precipitation, vegetation indices, soil moisture, solar radiation, surface vapor pressure deficits, and wind speed — identified through an extensive literature review and sourced from reliable EO datasets such as MODIS, ERA5, CHIRPS, and VIIRS. The 8-day temporal resolution captured critical seasonal dynamics influencing C. imicola suitability, allowing us to identify favorable periods for vector occurrence.
Presence-absence data were sourced from the Italian entomological surveillance plan (2000–present) and enriched with records from other countries within the study area. ML algorithms were trained over the past two decades of observations to predict the probability of C. imicola occurrence, and predictions were extrapolated backward to assess changes since 1960. This framework enabled the analysis of long-term trends and the evaluation of climate change impacts on the vector’s distribution. Preliminary results highlight a significant expansion in C. imicola climatic and environmental suitability over time, accompanied by an extended seasonal activity window. These trends demonstrate the strong correlation between climate change and vector adaptation, with critical implications for VBD transmission in both newly colonized and historically affected areas. In these established areas, climate change extends the favorable conditions for vector survival and reproduction over longer periods of the year, leading to significantly larger vector populations and an elevated risk of disease transmission throughout an expanded timeframe when conditions are favorable for virus replication. The open-access raster datasets generated in this study provide resources for epidemiological modeling and proactive vector surveillance. By elucidating the interplay between climate dynamics and vector ecology, this work supports the development of targeted control strategies and policies to mitigate the impact of emerging VBD threats in a changing world.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: The impact of extreme weather on the spread of water-associated diseases in a tropical wetland region

Authors: Gemma Kulk, Dr Anas Abdulaziz, Dr Shubha Sathyendranath, Dr Nandini Menon, Ranith Rajamohananpillai, Jasmin Chekidhenkuzhiyik, Grinson George
Affiliations: Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre of Earth Observation, Plymouth Marine Laboratory, CSIR-National Institute of Oceanography, Nansen Environmental Research Centre India, ICAR-Central Marine Fisheries Research Institute
Water is an essential natural resource, but increasingly water also poses a threat to human populations. Global warming, shifts in precipitation patterns and extreme weather conditions lead to water stress, including natural disasters such as floods or droughts that can cause severe damage to the environment, property and human life. A less studied aspect of such events is the impact on human health through water-associated diseases and on wellbeing through mental health problems. Action to reduce the risk is urgently needed, with more frequent floods and droughts already leading to climate refugees. Earth Observation has the potential for developing cost-effective methods to monitor risks to human health from water stress, with free and open data available at the global scale. In this study, we present the application of remote sensing observations to map flooded areas, using the tropical Vembanad-Kol-Wetland System in the southwest of India as a case study. In August 2018, this region experienced an extremely heavy monsoon season, which caused once-in-a-century floods that led to nearly 500 deaths and the displacement of over a million people. We investigate the use of different satellite sensors to increase the temporal coverage of flood maps, and combine this information with field measurements of human pathogens, such as Vibrio cholerae, Escherichia coli and Leptospira, and information on disease outbreaks to further study the contamination of natural water bodies during the course of 2018. Further analysis of the satellite data record from 2016 to 2024 showed increased flood risk in the region surrounding Lake Vembanad during this period, with potential consequences for the spread of water-associated diseases and impact on human health.
The results indicate the need to improve sewage treatment facilities and city planning in flood-prone areas, to avoid the mixing of septic sewage with natural waters during extreme climate events and to prevent water stagnation and waterlogging even during the normal monsoon season.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Earth Observation for risk-based Vector-Borne Disease surveillance under a changing climate

Authors: Luca Candeloro, Carla Ippoliti, Laura Amato, Francesco Valentini, Susanna Tora, Valentina Zenobio, Chadia Wannous, Rachid Bouguedour, Paolo Calistri, Annamaria Conte, Alessandro Ripani
Affiliations: Istituto Zooprofilattico Sperimentale "G. Caporale", World Organisation for Animal Health, Sub-Regional Representation for North Africa, World Organisation for Animal Health
Climate change is increasingly reshaping environmental conditions, profoundly impacting the seasonality, distribution, and frequency of vector-borne diseases (VBDs) and their associated vector species. Leveraging Earth Observation (EO) data to map climate, water, and landscape features offers invaluable insights for developing epidemiological models and tools to support targeted surveillance and enable Early Warning Systems for VBDs. In this context, the World Organisation for Animal Health (WOAH) funded the PROVNA project, an initiative to support North African Veterinary Services by creating an innovative tool to optimize the surveillance and control of vector-borne, climate-sensitive diseases. The study area, encompassing Algeria, Egypt, Libya, Mauritania, Morocco, and Tunisia, was classified into ecoregions, defined as zones with homogeneous ecological and climatic conditions and thus potentially suitable for hosting the same vectors responsible for viral transmission. Rift Valley fever (RVF), a zoonotic disease of significant concern, was selected as the primary VBD of interest. EO data products from 2018 to 2022 (e.g., MODIS Land Surface Temperature, MODIS Normalized Difference Vegetation Index, SMAP Soil Moisture, TAMSAT Rainfall, MODIS Normalized Difference Water Index) at 250 m/16-day resolution were processed, aggregated and standardized at seasonal and annual levels. An unsupervised neural network clustering method, the Super Self-Organizing Map (Super-SOM), was employed to create an interpretable, topology-preserving map of North Africa. Initially, a detailed 40×40 neuron grid comprising 1600 nodes was trained to identify 1600 distinct ecoregions. Subsequently, an affinity propagation clustering algorithm was applied to the 1600 nodes, reducing them to 55 ecoregions per year. The results, shared with national authorities through webinars, bilateral discussions, and an in-person workshop, included the delineation of ecoregions and temporal analyses across countries.
These analyses identified areas more susceptible to interannual variation, allowing for the prioritization of specific surveillance strategies based on the unique characteristics of each ecoregion. By integrating advanced EO data analytics with epidemiological insights, the tool supports Veterinary Services in implementing targeted, risk-based surveillance, optimizing both financial and human resources through strategic planning. Climate change exacerbates the geographic shifts and seasonal dynamics of VBDs, underscoring the importance of long-term monitoring of essential climate variables via satellite. This project exemplifies the potential of EO-based approaches to adapt surveillance strategies to the challenges posed by climate variability and extreme events, aligning with WOAH’s regional strategy for controlling vector-borne and transboundary animal diseases.
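The two-stage clustering described above, a fine SOM grid followed by affinity propagation over the trained nodes, can be sketched on toy data. The SOM here is a minimal hand-rolled version (not the Super-SOM used in the project), the grid is shrunk from 40×40 to 8×8, and the input features are synthetic stand-ins for the standardized EO composites.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), n_iter=500, lr=0.5, sigma=2.0):
    """Minimal SOM: returns node weights of shape (grid_x * grid_y, n_features)."""
    gx, gy = grid
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    w = rng.normal(size=(gx * gy, data.shape[1]))
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))     # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)  # grid distance to BMU
        h = np.exp(-d2 / (2 * sigma ** 2)) * lr * (1 - t / n_iter)
        w += h[:, None] * (x - w)                       # pull BMU neighbourhood
    return w

# Toy "pixels" of standardized features (stand-ins for the 250 m composites)
data = rng.normal(size=(300, 5))
nodes = train_som(data)                      # stage 1: many fine prototypes
eco = AffinityPropagation(random_state=0).fit(nodes)
labels = eco.labels_                         # stage 2: merged "ecoregion" labels
```

The design point is the same as in the abstract: the SOM compresses millions of pixels into a small, topology-preserving set of prototypes, so the more expensive affinity propagation step only has to cluster the prototypes rather than the raw pixels.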
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Leveraging Earth Observation Data and Explainable AI for Predicting Mosquito-Borne Disease Outbreaks

Authors: Dimitrios Sainidis, Konstantinos Tsaprailis, Lazaros Stamogiorgos, Dr. Charalampos
Affiliations: National Observatory Of Athens
In the early 21st century, rapid urbanization, the expansion of international trade, and increased human mobility, coupled with the changing climate, have significantly altered the distribution of disease vectors such as mosquitoes, ticks, and fleas. These changes have allowed the vectors to expand into new regions, increasing the spread of the pathogens they carry and the diseases they transmit to humans. Vector-borne diseases (VBDs) are a major global public health concern, accounting for 17% of all infectious diseases worldwide and causing approximately 700,000 deaths annually. In recent years, these diseases have begun to affect previously unaffected countries, many of them in Europe. Among VBDs, the majority of deaths are caused by mosquitoes, which transmit diseases such as West Nile Virus, Malaria, Zika, Dengue, and Chikungunya. This makes mosquitoes the deadliest animals on the planet. Consequently, much of the VBD research community's efforts are focused on mosquitoes and mosquito-borne diseases (MBDs). One of the most critical and challenging problems in controlling MBDs is accurately predicting the risk of future outbreaks. Timely and accurate risk maps enable health authorities to implement appropriate mitigation strategies to prevent outbreaks. Recent research has focused extensively on predicting mosquito abundance and estimating the likelihood of human virus outbreaks using machine learning models, as these two factors are closely correlated. Another significant challenge is making sense of these risk maps, especially when they are created using complex machine learning models. Recent advances in explainable AI (XAI) provide valuable tools to interpret and analyze these predictions. XAI helps uncover the key factors influencing the model's outputs, making the results more transparent and easier to understand.
This clarity allows public health officials and policymakers to trust the models' findings and make more informed decisions, ultimately improving the effectiveness of strategies to prevent and control disease outbreaks. This work addresses both challenges of accurate outbreak prediction and model interpretability by developing an interpretable Early Warning System (EWS) for predicting the risk of West Nile Virus (WNV) outbreaks. The EWS is designed to operate at an exceptionally fine spatial resolution of 2 × 2 km and a temporal prediction window of one month. It integrates diverse data sources, including big Earth Observation (EO) data, statistical census data, in-situ mosquito population observations, predicted mosquito abundance data, and historical WNV case records from Greece. By leveraging this comprehensive dataset, machine learning models are trained to predict WNV outbreak risk. To ensure the model's outputs are interpretable and actionable, the SHAP (SHapley Additive exPlanations) methodology is applied to analyze the predictions at both local and global levels. This analysis identifies the most influential factors contributing to WNV outbreaks, offering insights into the environmental, demographic, and biological drivers of transmission. The resulting system not only provides accurate, high-resolution risk maps but also enhances transparency and trust in the predictions, enabling public health officials to make informed decisions. The predictive model was trained using historical West Nile Virus (WNV) outbreak data spanning 2010 to 2021, at a spatial resolution of 2 × 2 km across the Greek territory. The model generates a risk score ranging from 0 to 1 for each grid cell, representing the likelihood of a WNV outbreak. To evaluate the model's performance, binary classification metrics were employed.
Using a classification threshold of 0.5, the model achieved a recall score of 0.87, demonstrating its effectiveness in identifying areas at risk of WNV outbreaks. Additionally, the area under the precision-recall curve (PR-AUC) was 0.7, which is noteworthy given the significant class imbalance in the dataset, where the ratio of non-case to case instances was approximately 500:1. These results indicate the model's robustness in accurately predicting outbreak risks even in the presence of highly imbalanced data, highlighting its potential as a reliable tool for early warning and public health interventions. The explainability analysis revealed several key environmental factors influencing the risk of West Nile Virus (WNV) outbreaks. One of the most significant factors identified by the model is the nighttime surface temperature of the previous month. There is a strong positive correlation between higher nighttime temperatures and increased WNV risk, suggesting that warmer conditions may enhance mosquito activity or virus transmission dynamics. Another critical factor is elevation, where a clear inverse relationship is observed: areas at higher elevations are associated with lower WNV risk. Rainfall patterns also exhibit complex relationships with WNV transmission. Accumulated rainfall since January shows a nonlinear effect, where rainfall below 500 mm has a negative influence on WNV risk, while amounts between 500 mm and 1100 mm increase the risk. However, rainfall exceeding 1100 mm again reduces the risk, indicating that both very low and very high rainfall levels can suppress WNV transmission. This aligns with the known effect of rainfall on the mosquito breeding cycle, where the formation of stagnant water increases potential mosquito breeding sites while heavy rainfall tends to flush larvae from the breeding sites.
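The evaluation described above (recall at a 0.5 threshold, plus PR-AUC under heavy class imbalance) can be sketched as follows. This is an illustrative, self-contained example using only the standard library; the function names and toy data are invented here and are not the authors' pipeline.

```python
# Sketch: recall at a fixed threshold and the area under the
# precision-recall curve, for an imbalanced binary classifier.

def recall_at_threshold(y_true, scores, threshold=0.5):
    """Fraction of actual positives whose risk score reaches the threshold."""
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= threshold)
    positives = sum(y_true)
    return tp / positives if positives else 0.0

def pr_auc(y_true, scores):
    """Area under the precision-recall curve via a threshold sweep,
    integrated over recall with the trapezoidal rule."""
    # Rank by descending score; each prefix of the ranking is the
    # positive-predicted set at some threshold.
    ranked = sorted(zip(scores, y_true), reverse=True)
    positives = sum(y_true)
    tp = fp = 0
    points = []  # (recall, precision) pairs
    for _, y in ranked:
        if y == 1:
            tp += 1
        else:
            fp += 1
        points.append((tp / positives, tp / (tp + fp)))
    auc, prev_r, prev_p = 0.0, 0.0, 1.0
    for r, p in points:
        auc += (r - prev_r) * (p + prev_p) / 2.0
        prev_r, prev_p = r, p
    return auc

# Toy example with a 9:1 class imbalance (the real dataset is ~500:1).
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
scores = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.4, 0.2, 0.1, 0.9]
print(recall_at_threshold(y_true, scores))  # 1.0: the single case is found
```

With imbalanced classes, PR-AUC is more informative than ROC-AUC because precision directly penalises the flood of false alarms that a rare-event predictor can produce.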
Additionally, the annual precipitation from previous years plays an important role, with accumulated rainfall below 1100 mm having a slight positive effect on risk, whereas rainfall above this threshold shows a slight negative influence. These insights highlight the intricate interactions between environmental factors and WNV transmission, providing valuable guidance for targeted surveillance and intervention strategies. In conclusion, this study presents a robust and interpretable Early Warning System (EWS) for predicting West Nile Virus (WNV) outbreaks, leveraging high-resolution data and advanced machine learning techniques. By integrating diverse datasets and employing SHAP analysis for explainability, the model not only achieves good predictive capability but also provides actionable insights into the environmental and geographical factors driving WNV transmission. The system’s ability to generate transparent, fine-grained risk maps enhances trust and usability for public health officials, enabling proactive and targeted interventions. Because the system relies only on Earth Observation and MBD case data, it can be applied anywhere and, with few modifications, to a wide range of mosquito-borne diseases. These advancements underscore the potential of combining machine learning and explainable AI to address critical challenges in controlling mosquito-borne diseases and safeguarding public health.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Extreme weather events, changing land use patterns and microbial pollution escalate outbreaks of leptospirosis in coastal regions along the southwest coast of India

Authors: Anas Abdulaziz, P Sreelakshmi, Nizam Ashraf, Ranith Rajamohananpillai, Dhritiraj Sengupta, Gemma Kulk, Dr Nandini Menon, Grinson George, Jasmin Chekidhenkuzhiyil, Dr Shubha Sathyendranath
Affiliations: CSIR-National Institute of Oceanography, Nansen Environmental Research Centre India, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre of Earth Observation, Plymouth Marine Laboratory, ICAR- Central Marine Fisheries Research Institute
Environment plays a major role as an intermediary in the spread of zoonotic diseases, with pathogens derived from a reservoir host being released into the environment and then infecting a new host or population. Leptospirosis is a zoonotic water-associated disease that is becoming increasingly prevalent in regions susceptible to extreme weather events. An analysis of decadal datasets on the incidence of leptospirosis in the areas surrounding Vembanad Lake, a Ramsar site along the southwest coast of India, indicates that the disease is endemic to this region. The Vembanad Lake and its surrounding areas experience significant rainfall during the monsoon season (June to September) and often suffer from flash floods during this period. The established risk factor for leptospirosis is contact with water contaminated by Leptospira, which usually occurs in flood situations in places with rodents and other zoonotic animals and poor sanitation. We monitored 13 stations along Vembanad Lake at 20-day intervals over a one-year period spanning 2018 to 2019, during which the region experienced a once-in-a-century flood in August 2018. Molecular surveillance of Leptospira in the water column showed that the pathogen was present in the lake year-round. Interestingly, the distribution of these pathogens was notably higher during the warm season (November to April), while the incidence of the disease peaks during the rainy season. In 2018, around 50% of the reported leptospirosis cases occurred following the flood, confirming that during the rainy season humans are frequently exposed to water contaminated with Leptospira, leading to infection and disease outbreaks. Changes in land use patterns and inadequate solid waste management may exacerbate the prevalence of Leptospira in the region. Additionally, extreme weather events facilitate greater contact between pathogens and humans, resulting in increased disease incidence.
This study highlights the urgent need to address environmental factors that influence host-microbe interactions to curb the rise of zoonotic water-associated diseases. Climate emergencies must be tackled with approaches that account for environmental influences on issues of medical and ecological importance.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Session: A.09.04 Glaciers - the other pole - PART 1

Glaciers are distributed in mountainous areas around the world, from the Tropics through the mid-latitudes to the polar regions, and number approximately 250,000. Glaciers are currently the largest contributors to sea level rise and have direct impacts on run-off and water availability for a large proportion of the global population.

This session is aimed at reporting the latest research using EO and in situ observations for understanding and quantifying change in glacier presence, dynamics and behaviour, including responses to changes in climate, both long-term (since the Little Ice Age) and in the recent satellite period. EO observations of glaciers come from a large variety of sources (SAR, altimetry, gravimetry, optical) and are used to derive estimates of ice velocity, surface mass balance, area, extent and dynamics of both accumulation and ablation, characteristics such as surging, glacier failure, and downwasting, as well as associated observations of snowpack development and duration, lake formation, glacial lake outburst floods (GLOFs) and slope stability.

Presentations will be sought covering all aspects of glacier observations, in particular efforts to derive consistent global databases, e.g. glacier mass balance (GlaMBIE) and ice velocity and area (Randolph Glacier Inventory), as well as variation in run-off and water availability, and interfaces between these observations and glacier modelling to forecast possible future glacier changes and their impact on hydrology and sea-level rise.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Regional Glacier Elevation Changes Assessment from Optical DEM Time Series

Authors: Livia Piermattei, Francesco Ioli, Clare Webster, Lucas Kugler, Désirée Treichler, Enrico Mattea, Robert McNabb
Affiliations: Department of Geography, University of Zurich, Department of Geosciences, University of Oslo, Department of Geosciences, University of Fribourg, School of Geography and Environmental Sciences, Ulster University
This study is part of the Glacier Mass Balance Intercomparison Exercise (GlaMBIE), which aims to collect, homogenise, and estimate global and regional assessments of glacier mass balance using the main observation methods. Here, we present our assessment of glacier elevation change using the geodetic method (DEM differencing) based on spaceborne optical data. We exploited the potential of the SPOT-5 satellite, operational from 2002 to 2015, which provided global coverage. Since 2021, the SPOT 1-5 image archive has been freely available as part of the SPOT World Heritage program run by CNES. However, observation periods vary across regions, limiting temporal coverage to less than five years in some areas. Iceland was selected as a pilot study due to its extensive SPOT-5 temporal coverage, further complemented by ArcticDEM data. Our methodology is also applied to other regions with sufficient temporal coverage, and historical aerial images are incorporated to extend the analysis back to the last century. The workflow starts with generating DEM time series at a regional scale and homogenising the data, including DEM co-registration, selection, noise filtering and void filling. To address challenges posed by sparse DEM time series, especially when including historical DEMs, we developed a method to extrapolate elevation changes over 10-year intervals using the combined DEM time series. This method relies on the assumption that a relationship exists between elevation change and elevation; therefore, an elevation trend can be derived for elevation bands. We extract median elevations for fixed elevation bands (i.e., 100 m bins) from the DEM time series and interpolate these values over time using linear regression. Elevation data are then extrapolated for each band over pre-defined periods, and area-weighted mean elevation changes are calculated for each glacier using RGI 7.0.
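The band-wise trend method described above can be sketched roughly as follows. All function names and numbers here are hypothetical illustrations of the idea (per-band linear trends from sparse DEM observations, extrapolated to a fixed interval and area-weighted), not GlaMBIE code.

```python
# Sketch: per-elevation-band linear trends from a sparse DEM time series,
# extrapolated over a fixed period and combined into an area-weighted mean.

def linear_trend(years, values):
    """Ordinary least-squares slope and intercept of values vs. years."""
    n = len(years)
    my, mv = sum(years) / n, sum(values) / n
    slope = sum((y - my) * (v - mv) for y, v in zip(years, values)) / \
            sum((y - my) ** 2 for y in years)
    return slope, mv - slope * my

def band_elevation_change(series, t0, t1):
    """Per-band elevation change over [t0, t1].
    `series` maps band -> list of (year, median_elevation) observations."""
    changes = {}
    for band, obs in series.items():
        slope, _ = linear_trend([y for y, _ in obs], [e for _, e in obs])
        changes[band] = slope * (t1 - t0)  # extrapolated change, metres
    return changes

def area_weighted_mean(changes, band_areas):
    """Glacier-wide mean change, weighting each band by its area."""
    total = sum(band_areas.values())
    return sum(changes[b] * band_areas[b] for b in changes) / total

# Toy data: two 100 m bands observed in different years (sparse coverage).
series = {
    1500: [(2004, 1520.0), (2010, 1517.0), (2014, 1515.0)],
    1600: [(2004, 1640.0), (2012, 1636.0)],
}
dh = band_elevation_change(series, 2005, 2015)          # 10-year interval
print(area_weighted_mean(dh, {1500: 2.0, 1600: 1.0}))   # band areas, km²
```

The band-level regression is what allows DEMs acquired in different years over different parts of a region to be combined into a single, consistent change estimate for a pre-defined period.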
For comparability, we also applied our approach to derive elevation changes from time series of ASTER DEMs and compared our results with the pixel-based multi-temporal approach of Hugonnet et al. (2021) over a common observation period. Regional and individual glacier estimates from both methods are evaluated. This work discusses key challenges in using spaceborne optical data for regional glacier elevation change assessments, including limitations in the temporal coverage of SPOT-5, issues with DEM generation, co-registration, noise filtering, void filling, and methods for estimating mean elevation changes. Our findings contribute to improving regional assessments of glacier mass balance and advancing geodetic approaches using optical DEM time series.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Monitoring glaciers with CryoSat-2 altimetry – opportunities and challenges

Authors: Sophie Dubber, Livia Jakob, Morag Fotheringham, Noel Gourmelen, Carolyn Michael, Andrea Incatasciato, Julia Bizon, Jérôme Bouffard, Alessandro Di Bella, Tommaso Parinello
Affiliations: Earthwave, University of Edinburgh, ESA
The large footprint of radar altimeters has traditionally limited their use to ice sheets; however, the launch of CryoSat-2 – the first altimetry mission to carry a synthetic aperture radar interferometer – enabled the monitoring of environments beyond the two ice sheets. By applying Swath processing to CryoSat-2 data, we can measure changes in elevation over rough terrain such as ice caps and mountain glaciers, and do so at high spatial and temporal resolution. This approach provides unique opportunities to better understand the behaviour of these regions. Additionally, constantly advancing processing techniques continue to add to the expanding toolbox of global glacier mass balance measurements, by enabling monitoring of extremely challenging terrains. As a result of these advances, it is now possible to perform assessments of global glacier volume and mass changes, as presented in Jakob & Gourmelen (2023). Here we present an updated version of this assessment, which provides global glacier changes between 2010 and 2023. This new study uses updated glacier outlines from v7.0 of the Randolph Glacier Inventory, includes 5 additional regions and utilises improved algorithms. The expansion of this work to new regions is possible due to additional coverage recently added to the CryoTEMPO EOLIS (Elevation Over Land Ice from Swath) products. This Swath-processed CryoSat-2 dataset now includes additional coverage of mountain glacier regions with challenging terrain conditions: Scandinavia, Western Canada & US, Central Europe, Low Latitudes and New Zealand. We will show how the addition of these regions in this updated assessment enables further insights into the global picture of glacier mass change. We will also give an introduction to the suite of CryoTEMPO EOLIS products, including case studies of how they can be utilised over small mountain glaciers and rapidly changing ice caps.
We will discuss the opportunities and challenges of radar altimetry as a tool to measure glacier changes, as well as looking forward to even more detailed monitoring of glaciers’ health using the upcoming CRISTAL mission.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Advancing reconciled regional & global glacier mass changes with the second Glacier Mass Balance Intercomparison Exercise (GlaMBIE-II)

Authors: Livia Jakob, Michael Zemp, Inés Dussaillant, Samuel Nussbaumer, Sophie Dubber, Noel Gourmelen
Affiliations: Earthwave, University of Zurich, University of Edinburgh
Glacier changes are a sign of climate change and have an impact on the local hazard situation, regional runoff, and global sea level. In previous reports of the Intergovernmental Panel on Climate Change (IPCC), the assessment of glacier mass changes was hampered by spatial and temporal limitations as well as by the restricted comparability of different observing methods. The Glacier Mass Balance Intercomparison Exercise (GlaMBIE; https://glambie.org) aims to overcome these challenges in a community effort to reconcile in-situ and remotely sensed observations of glacier mass changes at regional to global scales. GlaMBIE is now entering its second phase (GlaMBIE-II), aiming to improve on the approach and results of the first phase's data-driven, reconciled estimates of regional and global mass change from glaciological, DEM-differencing, altimetric, and gravimetric methods. This presentation will highlight GlaMBIE’s findings, emphasising its implications for regional glacier mass loss and global sea-level rise. It will also explore lessons learned from the first phase, discuss persistent differences among observational methods, and identify other ongoing challenges. Additionally, preliminary results from a pilot study aiming to enhance the spatial and temporal resolution of glacier mass balance data will be presented.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Towards a Flexible, Data Assimilation Framework for Global Glacier Modelling

Authors: Patrick Schmitt, Fabien Maussion, Lea Hartl, Dan Goldberg
Affiliations: University Of Innsbruck, University of Bristol, Austrian Academy of Sciences, University of Edinburgh
Mountain glaciers are crucial to the Earth's water systems. As they shrink and lose ice globally, they contribute to rising sea levels and pose challenges for water supply, hydropower, agriculture, and natural disaster management. To address these challenges effectively, dynamic glacier models are essential. Recent advances in Earth observation (EO) products, including geodetic glacier mass balance, glacier outlines and ice velocity, offer new opportunities to improve global glacier models. Assimilating heterogeneous datasets in a dynamically consistent modelling framework remains very challenging. This contribution highlights findings from a recent study (preprint: [https://doi.org/10.5194/egusphere-2024-3146]) focusing on the data-rich Ötztal and Stubai ranges in western Austria. By adapting the Open Global Glacier Model (OGGM) to include these high-resolution, multitemporal observational datasets, the model's performance significantly improved compared to using global, lower-resolution data. For the first time, the model simultaneously matched observed area and volume changes on a regional scale, boosting confidence in regional projections. Projections for the region show that only 2.7% of the 2017 glacier volume will remain by 2100 under a +1.5 °C global warming scenario, a more pessimistic outlook than previous studies. Under a +2 °C scenario, this volume is reached roughly 30 years earlier, with near-total deglaciation by 2100 (0.4% of the 2017 volume remaining). The presented approach represents a significant step forward compared to earlier regional assimilation methods. However, it is tailored to specific observations and lacks flexibility to accommodate additional or alternative datasets. To overcome this limitation, we are developing the Open Global Glacier Data Assimilation Framework (AGILE). This framework iteratively adjusts control variables to minimize discrepancies with observations using a cost function. 
AGILE leverages automatic differentiation through the machine learning framework PyTorch, enabling efficient computation of control variable sensitivities. Its flexibility allows it to integrate temporally and spatially diverse observational datasets and control variables, such as glacier bed heights, mass-balance parameters, and initial ice thickness. While AGILE's capabilities are currently being demonstrated in idealized experiments, the ultimate goal is for it to serve as the assimilation engine for a potential Digital Twin Component for Glaciers, part of ESA's Digital Twin Earth program.
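The variational idea behind such a framework (iteratively adjusting control variables to minimise a cost function measuring misfit to observations) can be illustrated with a deliberately minimal toy. AGILE itself uses PyTorch autodiff and a real ice-flow model; in this assumed simplification the forward operator is a trivial bed-plus-thickness relation, the gradient is analytic, and all names are invented for illustration.

```python
# Sketch: gradient descent on J(x) = sum((forward(x) - obs)^2), where the
# control variable x is a set of glacier bed heights.

def assimilate(obs, forward, grad, x0, lr=0.1, steps=500):
    """Iteratively adjust control variables to reduce the cost function."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x, obs)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Toy forward model: observed surface elevation = bed height + a constant
# ice thickness of 100 m (a real model would be a full ice-flow simulation).
THICKNESS = 100.0

def forward(x):
    return [b + THICKNESS for b in x]

def grad(x, obs):
    # Analytic gradient of the squared-misfit cost w.r.t. each bed height;
    # an autodiff framework would compute this automatically.
    return [2.0 * (f - o) for f, o in zip(forward(x), obs)]

obs = [2350.0, 2410.0]                  # "observed" surface elevations, m
beds = assimilate(obs, forward, grad, x0=[2000.0, 2000.0])
print([round(b, 1) for b in beds])      # converges toward [2250.0, 2310.0]
```

Automatic differentiation matters precisely because, for a realistic ice-flow model with many control variables (bed heights, mass-balance parameters, initial thickness), writing `grad` by hand is impractical.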
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Rapid response of Svalbard glaciers to ocean warming

Authors: Geir Moholdt, Josephine Maton, Jack Kohler, Øyvind Foss, Adrian Luckman, Marta Majerska, Alex S. Gardner, Johannes Fürst
Affiliations: Norwegian Polar Institute, Swansea University, Institute of Geophysics Polish Academy of Sciences, NASA Jet Propulsion Laboratory, University of Erlangen-Nuremberg
About one third of the glacier area of the Arctic drains towards ocean-terminating fronts that ablate by calving and melting above and below the waterline. This frontal ablation is a significant but poorly quantified part of the overall mass budget of Arctic glaciers, as well as an important source of freshwater and calved ice for marine ecosystems. We present a detailed analysis of frontal ablation for all Svalbard’s ~200 tidewater glaciers for 2013-2024, a period with abundant availability of satellite imagery. We account for changes in frontal position, surface velocity and ice thickness at time scales from monthly to yearly, and we separate the results into components of glacier retreat and ice discharge. Although the ice discharge can be high year-round, especially for surging glaciers, we find that almost all frontal ablation occurs from late summer to autumn when the ocean is warmer. This represents a delayed freshwater flux to the fjords and open ocean compared to surface meltwater runoff which is more confined to the peak of the atmospheric summer season. Annual frontal ablation was exceptionally high during 2016-2018 and 2022-2024, which coincides with periods of high inflow of Atlantic water and warmer temperatures in the upper ocean. Links with air temperature and meltwater runoff are less clear. The observed variability in frontal ablation demonstrates how reactive these glaciers are to ocean warming and that this should be considered in studies of marine environments and future glacier retreat.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Multi-mission Investigation of a Recent Giant Glacier Collapse and Ice Avalanche in Tibet

Authors: Andreas Kääb, Luc Girod, Juditha Aga, Désirée Treichler
Affiliations: Department of Geosciences, University of Oslo
The most extreme type of glacier instability is the large-volume detachment of an entire low-angle tongue, usually no steeper than 20°. This process was first documented internationally for the 2002 collapse of Kolka Glacier, Caucasus, whose tongue suddenly failed and sent a 130 × 10⁶ m³ ice-rock avalanche travelling at up to 300 km/h down the valley; after 18 km it reached the village of Karmadon and transformed into a 15 km long mudflow, claiming around 120 lives in total. This event was long believed to be a unique disaster, until large parts of a low-angle glacier in the remote Aru range, western Tibet, suddenly detached on 17 July 2016. The consequent 68 × 10⁶ m³ ice avalanche killed nine herders and hundreds of their livestock on its ~6 km long runout into Lake Aru. While the investigation of this event had only just started, the neighbouring glacier detached in a very similar way to the first, causing an 83 × 10⁶ m³ ice avalanche reaching ~5 km. The Aru twin glacier collapses directed attention to these surprising detachments of low-angle glaciers and triggered closer research, which found that in total a dozen comparable events, including the above three, had occurred worldwide in recent decades, in Eurasia and North and South America. The most recent of these events known to date was the 130 × 10⁶ m³ detachment of the Sedongpu Glacier, Nyenchen Tanglha Mountains, south-east Tibet, which dammed the Yarlung Tsangpo / Brahmaputra river for several days in late 2018. Comparison of these roughly one dozen events revealed a number of individual differences but also suggested similarities. Among the most prominent commonalities is the connection between catastrophic low-angle glacier detachments and glacier surges: most detached glaciers showed signs of surge-like flow behaviour prior to their failure and/or have surge-type glaciers in their vicinity.
Second, for several of the detachment sites, particularly fine and soft sediments at the glacier bed have been found or suggested, or appear possible from the lithological setting. Here, we describe and analyse a previously unnoticed glacier collapse of around 40 × 10⁶ m³ in eastern Tibet that happened in 2022 and provides important new insights into the processes involved in the detachment of low-angle glaciers. We highlight how the synergistic use of data and products from multiple satellite remote sensing missions enabled a close investigation of the event, despite it having happened in one of the most remote regions on Earth. Sentinel-1 data were key to detecting the event at all. Data from Sentinel-2, ASTER, and low-resolution sensors such as Sentinel-3 OLCI, MODIS and Suomi NPP VIIRS constrained the event date and time so that we could find it in seismic records. TanDEM-X data and the Copernicus DEM, optical stereo images from ASTER and very-high-resolution sensors, and ICESat-2 laser altimetry elevations led to collapse volume estimates. Time series of surface velocities on the failing glacier tongue could be reconstructed from repeat Sentinel-2 and Planet optical data, showing an exponential increase in speed, up to an average of 46 m/day over the 24 hours before detachment. Many of the above types of image data further helped clarify important details of the event, such as the consequent lake impact wave and its shore run-up, or the finding that no fine sediments seem to have been involved in the detachment, in contrast to most other glacier detachments known so far.
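An exponential pre-failure speed-up of the kind reported above is commonly characterised by fitting v(t) = v₀·exp(k·t) to the velocity time series, which reduces to linear regression on log-velocities. The sketch below is a stand-alone illustration with invented numbers, not the authors' processing chain.

```python
# Sketch: fit an exponential speed-up v(t) = v0 * exp(k * t) to a
# pre-failure velocity time series via least squares on log(v).
import math

def fit_exponential(days, speeds):
    """Return (v0, k) for the best-fit v(t) = v0 * exp(k * t)."""
    logs = [math.log(v) for v in speeds]
    n = len(days)
    mt, ml = sum(days) / n, sum(logs) / n
    k = sum((t - mt) * (l - ml) for t, l in zip(days, logs)) / \
        sum((t - mt) ** 2 for t in days)
    return math.exp(ml - k * mt), k

# Synthetic series: velocity doubling every 10 days before failure.
days = [0, 10, 20, 30, 40]
speeds = [1.0, 2.0, 4.0, 8.0, 16.0]  # m/day
v0, k = fit_exponential(days, speeds)
print(round(v0, 3), round(k, 4))
```

A well-constrained growth rate k is what makes such accelerations useful for retrospective timing of failure, and potentially for early warning.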
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Session: D.01.04 Using Earth Observation to develop Digital Twin Components for the Earth System - PART 1

Climate change represents one of the most urgent challenges facing society. The impacts of climate change on the Earth system and society, including rising sea levels, increasing ocean acidification, more frequent and intense extreme events such as floods, heat waves and droughts, are expected not only to have a significant impact across different economic sectors and natural ecosystems, but also to endanger human lives and property, especially for most vulnerable populations.

The latest advances in Earth Observation science and R&D activities are opening the door to a new generation of EO data products, novel applications and scientific breakthroughs, which can offer an advanced and holistic view of the Earth system, its processes, and its interactions with human activities and ecosystems. In particular, those EO developments together with new advances in sectorial modelling, computing capabilities, Artificial Intelligence (AI) and digital technologies offer excellent building blocks to realise EO-based Digital Twin Components (EO DTCs) of the Earth system. These digital twins shall offer high-precision digital replicas of Earth system components, boosting our capacity to understand the past and monitor the present state of the planet, assess changes, and simulate the potential evolution under different (what-if) scenarios at scales compatible with decision making.

This session will feature the latest developments from ESA’s EO-based DTCs, highlighting:
- Development of advanced EO products
- Integration of EO products from a range of sensors
- Innovative use of AI and ML
- Advanced data assimilation
- Development of tools to address needs of users and stakeholders
- Design of system architecture
- Creation of data analysis and visualization tools
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Presentation: Hydr’Avatar, toward a digital twin of hydrological systems using multi-complexity modelling and advanced EO datasets.

Authors: Adrien Paris, Pierre André Garambois, Brice Mora, Chloe Campos, Ludovic Cassan, Jean-François Crétaux, Catherine Fouchier, Laetitia Gal, Jérémie Hahn, Kevin Larnier, Thomas Ledauphin, Jérôme Monnier, Fabrice Papa, Vanessa Pedinotti, Jean Christophe Poisson, Sophie Ricci, Hélène Roux, Malak Sadki, Guy Schumann, Paolo Tamagnone, Nicolas Vila, Hervé Yesou
Affiliations: Hydro Matters, INRAE, CS Group, RSS-Hydro, CERFACS/CNRS, LEGOS UT, CNES/CNRS/IRD/UT3, SERTIT, INSA, Magellium, Vortex.io, Toulouse INP
As the hydrological cycle is changing worldwide, and the consequences of these changes directly impact communities and their activities, there is an urgent need for a comprehensive representation of its different components. Major gaps still exist in domains such as the observability of continental water fluxes, both from ground observations (GO) and satellite data (EO), and the seamless, exhaustive use of such data in models at varying spatio-temporal scales. In Hydr’Avatar, an ESA-funded project, we propose a core of hydrologic/hydraulic modelling fed by diverse and heterogeneous datasets to provide stakeholders with pertinent information on specific issues related to continental waters. Here, we focus on four geographic zones: the Garonne River Basin (GRB), the Maroni River Basin (MRB), the Rhine River (RR) and the joint Niger and Chad River Basins (NCRB). We will deploy distributed (SMASH, Colleoni et al., 2022; Huynh et al., 2024) and/or semi-distributed (MGB, Paiva et al., 2013; Siqueira et al., 2018) modelling at the basin level to represent the vertical energy balance and propagation in main rivers, coupled where relevant with DassFlow (see Larnier et al., 2023) and/or Telemac (see Nguyen et al., 2023) for fine 1D and/or 2D hydraulics of river streams and floodplains. A large set of information-rich datasets will be used for these set-ups, ranging from multi-sourced precipitation products (pure EO, gauge-corrected and model-based) to water levels and slope from nadir and large-swath altimeters, soil moisture and groundwater, flooded areas, etc. Advanced processing algorithms (FloodSENS, …) will be employed to process these datasets and produce high-level information for model calibration, validation and data analysis. All the data produced by these methods will be encompassed within a 4D dataset. After thorough validation, the 4D dataset will be employed to answer the scientific questions raised by the identified stakeholders.
The platform, integrated into the ESA DESP system, will provide stakeholders (and the wider community) with a clear interface and tools allowing them to (i) retrieve pertinent information on climate change impacts on the hydrological cycle through simulations of different pathways, (ii) analyze flood risk and potentially damaged areas, (iii) manage transboundary rivers and provide users with adequate guidance, and (iv) access insightful comparisons of EO-derived datasets against dense in situ networks. We propose a versatile framework in which multi-complexity models and complex hydrological datasets can be operated efficiently, providing a large range of users with cutting-edge information from strategic, operational and scientific perspectives.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Presentation: SaveCrops4EU: an Agricultural DTC Component for Enhanced Decision Making

Authors: Sander Rouwette, Dr. Mauro Sulis, Martin Schlerf, Prof. Dr. Harrie-Jan Hendricks Franssen, Dr. Jochem Verrelst, Brian Maguire, Dr. Ir. Yannick Curnel, MSc Márton Tolnai, Dr. Ir. Louise Leclere
Affiliations: Thales Alenia Space Luxembourg - Digital Competence Center, Remote Sensing and Natural Resources Modelling Group, Luxembourg Institute of Science and Technology, Agrosphere (IBG-3), Forschungszentrum Julich GmbH, University of Valencia, Image Processing Laboratory (IPL), Laboratory of Earth Observation (LEO), Walloon Agricultural Research Centre (CRA-W), ‘Agriculture, territory and technologies integration’ Unit, CropOM Research Team
The SaveCrops4EU Digital Twin Component (DTC) represents a groundbreaking initiative aimed at addressing some of the pressing challenges posed by climate change to the agricultural sector. As society confronts the multifaceted impacts of climate change, the need for innovative solutions is increasingly urgent. The SaveCrops4EU project focuses on creating high-precision digital replicas of cropland ecosystems to enhance agricultural decision-making, contributing significantly to sustainable practices aligned with European policies like the Common Agricultural Policy and the Green Deal. Central to the SaveCrops4EU DTC is the establishment of a framework with monitoring, forecasting, and scenario testing capabilities, to provide data to inform agricultural practices. The initiative aims at offering real-time insights on crop water and nitrogen status, phenological development, and potential yield for major cultivated crops across Europe. These capabilities shall empower farmers and stakeholders to make informed decisions in response to abiotic stressors related to climate change. The DTC uses cutting-edge EO-based monitoring techniques, utilizing a wide variety of sensors and enhanced spatial and spectral resolutions. This approach enables the delivery of richer and more accurate data, allowing for a comprehensive understanding of how environmental factors influence crop health and productivity. The EO-based data will be used to adapt the state trajectory of physically based models and to estimate model parameters, resulting in an overall better agreement between model predictions and reality. In addition, a physically consistent and stochastic dataset generated by a crop module of a land surface model will be utilized to train a suite of machine learning approaches. Critical outputs on crop phenology, water stress, and the carbon-nitrogen cycle, will enhance the predictive capabilities for crop yields and provide stakeholders with actionable insights. 
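The assimilation step just described, adapting a model's state trajectory toward EO retrievals, can be illustrated in its simplest form as a Newtonian nudging update. The sketch below is illustrative only: the LAI variable, gain value and numbers are assumptions, not the project's actual assimilation scheme.

```python
def nudge_state(model_state, eo_observation, gain=0.3):
    """Relax a modelled state variable (e.g. LAI) toward an EO retrieval.

    `gain` in [0, 1] weights how strongly the observation corrects the model:
    gain=0 ignores the observation, gain=1 replaces the state with it.
    """
    return model_state + gain * (eo_observation - model_state)

# Toy trajectory: the model drifts high while EO retrievals arrive each step.
state = 3.0                       # modelled leaf area index (m2/m2), hypothetical
retrievals = [2.8, 2.7, 2.9, 2.6]  # EO-derived LAI, hypothetical
for obs in retrievals:
    state = nudge_state(state, obs)
```

Operational systems typically derive the gain from model and observation error statistics (e.g. ensemble Kalman filtering) rather than fixing it, but the corrective principle is the same.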
Recognizing the importance of inclusivity, the project places significant emphasis on engaging diverse stakeholders from both public and private sectors. Feedback is necessary to ensure that the DTC remains responsive to the evolving needs of end users, facilitating the development of practical tools aligned with real-world agricultural practices. This stakeholder engagement is crucial for developing DTC solutions that are not only innovative but also adoptable. The architectural design of the SaveCrops4EU DTC allows for organic growth while ensuring maintainability and robustness. The project aims to guarantee long-term scalability and adaptability, accommodating the changing needs of the agricultural sector and technological advances within the associated scientific fields. This strategic approach reinforces the initiative's commitment to sustainability and efficiency. Using four representative Use Cases of major cultivated crops in Europe, this presentation will showcase some functionalities of the proposed approach to bolster our ability to understand, monitor and respond to the complexities surrounding abiotic stressor response. Insights will be given into the technologies and the collaborative approaches integral to the SaveCrops4EU initiative, setting the stage for enhanced decision-making that benefits both agriculture and the environment, ultimately supporting more sustainable food systems in Europe and beyond.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Presentation: Forest Digital Twin Component for DesinE

Authors: Dr. Matti Mõttus, Mr. Renne Tergujeff, Dr. Benjamin Brede, Lauri Häme, Dr. Lucie Homolová, Dr. José Ramón González, Dr. Francesco
Affiliations: VTT Technical Research Centre of Finland
VTT Technical Research Centre of Finland together with consortium partners – Terramonitor, German Research Centre for Geosciences (GFZ, Helmholtz Centre Potsdam), Global Change Research Institute of the Czech Academy of Sciences (CzechGlobe), Forest Science and Technology Centre of Catalonia, and Yucatrote – are implementing a Forest Digital Twin Component (DTC) on the Destination Earth (DestinE) User Platform (DESP) with funding from ESA. The implementation, based on the forest digital twin precursor, focuses on forest growth and carbon cycle at the spatial resolution of Sentinel-2 (10 m) using Earth observation data. The processes in an undisturbed forest are slow: changes occur over years to decades, and reaching a stable state can take hundreds of years. Twinning such a system requires reliable data on the environmental variables, such as those produced by the Copernicus system and Climate Change DT – one of the core elements of DestinE. The spatial resolution of a forest system, however, needs to be much higher than that of a climate system and approach the size of a single tree. Temporal evolution, on the other hand, is slower. Hence, the output temporal resolution of the forest in the DTC is set to one year, with growth simulations based on daily weather data. A yearly cycle is also foreseen in simulating forest management actions. The computational requirements of the system allow it to be run on existing cloud computers without the need for high-performance computing. The initial state of the forest will be retrieved from Earth Observation (EO) data. The model will make use of the rapidly increasing volume of EO-based forest variable datasets. If the user has more robust information for a specific region, custom EO data processing will be available via collaboration with other online systems. The forest maps will be updated with the most recent EO data to account for possible disturbances.
At least two physically-based forest growth models will be implemented in Forest DTC. The user needs for a forest digital twin were mapped in a precursor project, which ended in 2021. During Forest DTC, we will update the information on user needs and requirements, incorporate new data sources such as hyperspectral imagery and tree maps based on individual tree detection, and include new model components (forest fire fuel, pest damage risk). The system will be modular to include user- or biome-specific growth models, management simulators and end-user extensions. In the forthcoming years, a key question remains the integration of the largely differing spatial and temporal resolutions of the Forest DTC with those of the many other components of the digital twin of the Earth, such as the atmospheric or hydrological processes. In the future, the system should be available on DESP to all DestinE users for understanding the future of forests.
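The yearly output cycle driven by daily weather data, as described above, can be sketched as a degree-day accumulation. This is purely illustrative: the base temperature, growth rate and volume numbers are invented, not taken from the project's growth models.

```python
import math

def annual_increment(daily_temps_c, base_temp=5.0, growth_rate=1e-4):
    """Very simplified annual relative growth increment from daily weather.

    Accumulates growing degree-days above `base_temp` and converts them to a
    relative increment; both parameters are illustrative, not from any
    calibrated forest model.
    """
    gdd = sum(max(0.0, t - base_temp) for t in daily_temps_c)
    return growth_rate * gdd

# One simulated year at a daily step, reported at the DTC's yearly resolution.
daily = [10 + 10 * math.sin(2 * math.pi * d / 365) for d in range(365)]
volume = 120.0  # stem volume (m3/ha), hypothetical initial state from EO
volume *= 1 + annual_increment(daily)
```

The same yearly loop is where management actions (thinning, harvest) would be applied in a modular simulator.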
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Presentation: Earth Observation based Digital Twin for Resilient Agriculture under Multiple Stressors

Authors: Gohar Ghazaryan, Dr. Maximilian Schwarz, Dr. Philippe Rufin, Jonas Schreier, Florian Pötzschner, Dr. Michele Croci, Dr. Irene Salotti, Prof Paola Battilani, Dr Tobias Landmann, Dr Patrick Hostert, Prof Stefano Amaducci, Claas Nendel
Affiliations: Leibniz Centre for Agricultural Landscape Research, Remote Sensing Solutions GmbH, Humboldt-Universität zu Berlin, Università Cattolica del Sacro Cuore, International Centre of Insect Physiology and Ecology, Institute of Biochemistry and Biology, University of Potsdam, Potsdam, Germany, Integrative Research Institute on Transformations of Human-Environment Systems (IRI THESys), Humboldt-Universität zu Berlin
In the face of growing agricultural challenges due to climate change and rising global food demand, there is an increasing need for innovative approaches to enhance agricultural resilience. To address this, our contribution presents the Digital Twin for Agriculture (DT) framework, designed to monitor agricultural systems under multiple stressors. By integrating both data-driven and process-based models with Earth Observation (EO) data, the DT framework offers significant advancements over the current state of the art in assessing agricultural systems and understanding the impact of environmental stressors. The approach encompasses four distinct use cases, each illustrating the practical application of DT in monitoring and managing various stressors affecting agricultural productivity. Over the past two decades, and especially since 2018, Germany has experienced several periods of drought with a severe impact on ecosystems and food production. To mitigate these impacts at the national level, the MOdel for NItrogen and Carbon in Agro-ecosystems (MONICA) simulates drought impacts using meteorological, soil, and crop data, calculating variables such as actual and potential evapotranspiration to identify drought intensity. The DT also integrates high-resolution crop condition monitoring from Sentinel-3, Sentinel-2 and EnMAP data, providing insights into crop health, drought risk and vulnerabilities. The combination of these simulations with a dedicated drought model enables accurate risk assessments and early warning for agricultural stakeholders. Moreover, the impact of different management practices, particularly irrigation, is evaluated to determine how water use can be optimized to mitigate yield losses under drought conditions. Water management is another critical component of the project, especially as abiotic stressors become more frequent across Europe.
The DT framework addresses this challenge through detailed field-level assessments of actual evapotranspiration (ET). Using EO data from Sentinel-2, Sentinel-3, and Landsat, the framework estimates ET, which is validated against ground observations such as data from eddy covariance stations and irrigation records. This high-resolution monitoring facilitates improved irrigation practices, enhancing water use efficiency and reducing the impact of droughts on crop productivity. By assessing crop-specific water needs and understanding the spatial variability of water stress, the DT supports sustainable water resource management, ensuring that irrigation is applied where and when it is most needed. In the Po Valley in Italy, a region that is crucial for national food production, the DT use case addresses the compounded effects of drought and disease outbreaks. The valley's agricultural productivity is at risk due to increasing drought frequencies and a humid climate conducive to disease outbreaks, which can lead to mycotoxin contamination and affect both crop yield and food safety. The DT integrates Sentinel-1 and Sentinel-2 data for early mapping of crops with different models, such as DAISY, a light-use efficiency (LUE) model and a mathematical disease model, to derive essential biophysical parameters and simulate soil-water-plant interactions. Additionally, the system uses a high-resolution soil property database and a decade-long dataset of ground truth information to enhance model accuracy. Information on crop conditions, phenology, LAI and biomass accumulation estimated at high spatial resolution from EO data is used as input to spatialize mechanistic models for crop disease prediction driven by weather data. In Kenya, the DT framework is applied to support smallholder farmers using the push-pull cropping system, a sustainable and climate-smart approach to pest control. Here, EO and Internet of Things (IoT) technologies are used to continuously monitor field conditions.
IoT devices collect real-time data on soil moisture, nutrient levels, electrical conductivity, and pest densities, while EO data from Sentinel are used to track crop development. The DT employs AI-driven models to simulate crop growth and vigor, forecast yields, and assess the impact of different management practices on productivity. This approach supports integrated pest management (IPM), enhancing productivity while promoting eco-friendly agricultural practices. These use cases highlight the potential of the Digital Twin for Agriculture. By seamlessly integrating EO data, advanced modeling, and user-oriented tools, the DT framework provides a platform for improving agricultural resilience, optimizing resource use, and supporting sustainable food production.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Presentation: A first view of the EO-driven digital twin for ice sheets

Authors: Sebastian B. Simonsen
Affiliations: DTU Space, Earthwave, United Kingdom, Lancaster University, United Kingdom, University of Edinburgh, United Kingdom, Katholieke Universiteit Leuven, Belgium, ENVEO, Austria, Geological Survey of Denmark and Greenland, Denmark, Greenland Survey – Asiaq, Greenland
The response of ice sheets and shelves to climate change profoundly influences global human activities, ecosystems, and sea-level rise. As such, ice sheets are a vital component of the Earth system, making them a cornerstone for developing a future Digital Twin Earth. Here, we present the initial steps toward an Earth Observation (EO)-driven Digital Twin Component (DTC) for Ice Sheets, marking an effort to understand and predict the behavior of the Greenland Ice Sheet and Antarctic ice shelves. To meet the diverse needs of stakeholders, DTC Ice Sheets will adopt a modular design comprising 10 Artificial Intelligence/Machine Learning (AI/ML) and Data Science modules, all targeting the four initial use cases that will drive the development of DTC Ice Sheets. These initial use cases are: (1) Greenland Hydropower Potential: By modeling and monitoring ice sheet hydrology and meltwater runoff, the DTC Ice Sheets will evaluate Greenland’s renewable energy opportunities and provide actionable insights for sustainable hydropower development. (2) EU Sea Level Response Fingerprint: The DTC Ice Sheets will deliver region-specific insights into how ice sheet mass loss will contribute to global sea level rise, focusing on the implications for coastal infrastructure across Europe. (3) State and Fate of Antarctic Ice Shelves: Through detailed stability analysis, the DTC Ice Sheets will investigate the vulnerability of Antarctic ice shelves to climatic and oceanic changes, shedding light on their role in regulating ice sheet mass loss and global sea level. (4) Enhanced Surface Climate: Leveraging EO data and climatology, the DTC Ice Sheets will improve understanding of surface climate interactions, advancing predictions of feedback loops between ice sheets, the atmosphere, and the ocean. The DTC Ice Sheets implementation on the DestinE Core Service Platform (DESP) will consist of interconnected modules to serve the use cases.
Still, when fully implemented, it will also provide a holistic view of an ice sheet digital twin. Hence, DTC Ice Sheets aims to provide high-resolution insights into ice sheets' past, present, and future states, align with stakeholders, and foster interdisciplinary collaboration by interfacing with other thematic Digital Twin Earth systems, such as ocean and coastal processes. The DTC Ice Sheets will empower stakeholders to explore what-if scenarios addressing climate change's impacts and feedback mechanisms, all grounded in current state-of-the-art EO data of ice sheets.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Session: B.04.05 Remote sensing for disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters - PART 1

Every year, millions of people worldwide are impacted by disasters. Floods, heat waves, droughts, wildfires, tropical cyclones and tornadoes cause increasingly severe damage. Civil wars and armed conflicts in various parts of the world, moreover, lead to a growing number of refugees and large changes in population dynamics. Rescue forces and aid organizations depend on up-to-date, area-wide and accurate information about hazard extent, exposed assets and damage in order to respond quickly and effectively. In recent years, it has also been possible to prepare for specific events or to monitor vulnerable regions of the world on an ongoing basis thanks to the rapidly growing number of satellites launched and their freely available data. Providing information before, during or after a disaster in a rapid, scalable and reliable way, however, remains a major challenge for the remote sensing community.
Obtaining an area-wide mapping of disaster situations is time-consuming and requires a large number of experienced interpreters, as it often relies on manual interpretation. Nowadays, the amount of remote sensing data and related suitable sensors is steadily increasing, making it impossible in practice to assess all available data visually. Therefore, increased automation of (potential) impact assessment methods using multi-modal data opens up new possibilities for effective and fast disaster response and preparedness workflows. In this session, we want to provide a platform for research groups to present their latest research activities aimed at addressing the problem of automatic, rapid, large-scale, and accurate information retrieval from remotely sensed data to support disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters/conflicts.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: DeepFuse: Harnessing AI and Earth Observation for Enhanced Flood Inundation Monitoring

Authors: Prof. Dr.-ing. Antara Dasgupta, Dr.-Ing. Rakesh Sahu, Paul Hosch, Prof. Dr. Björn Waske
Affiliations: Institut für Wasserbau und Wasserwirtschaft, RWTH Aachen Universität, Computer Science and Engineering Department, Galgotias University, Institute of Informatics, Universität Osnabrück
Despite the growing number of Earth Observation satellites equipped with active microwave sensors suitable for flood mapping, the observation frequency remains a limitation for effectively characterizing inundation dynamics. Capturing critical events such as the flood peak or maximum inundation extent continues to be challenging, representing a significant research gap in flood remote sensing. However, the rapid expansion of multimodal satellite hydrology archives, coupled with advancements in deep learning, offers a promising avenue to address this limitation in observation frequency. DeepFuse is a scalable data fusion methodology that utilizes deep learning (DL) and Earth Observation data to generate daily flood inundation maps at high spatial resolution. This proof-of-concept study demonstrates the potential of Convolutional Neural Networks (CNNs) to model flood inundation at the spatial resolution of Sentinel-1 (S1). By integrating temporally frequent but coarse-resolution datasets such as soil moisture and accumulated precipitation data from NASA’s SMAP and GPM missions, alongside static predictors like topography and land use, a CNN was trained on flood maps derived from S1 to predict high-resolution inundation patterns. The proposed methodology was applied to two sites, including one in southwest France, focusing on the December 2019 flood event at the confluence of the Adour and Luy rivers, and one in Germany, focusing on the Christmas floods of 2023 in Lower Saxony. Predicted high-resolution flood maps were independently validated using flood masks derived from Sentinel-2, created using a Random Forest classifier. Initial results indicate that the CNN can generalize some hydrological and hydraulic processes driving inundation, even in complex topographical regions, enabling the bridging of spatiotemporal resolution gaps in satellite-based flood monitoring.
We also demonstrate model transferability in space and in time, showcasing the potential of using such approaches in typically data-scarce regions. Achieving daily flood monitoring at high resolution will enhance the understanding of spatial inundation dynamics and facilitate the development of more effective parametric hazard re/insurance products, helping to address the flood protection gap.
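Bridging the resolution gap between coarse dynamic predictors (e.g. SMAP soil moisture) and the Sentinel-1 grid requires co-registration before the channels can be stacked as CNN input. A minimal nearest-neighbour resampling sketch (grid sizes and values are illustrative, not the study's actual preprocessing):

```python
def upsample_nearest(coarse, factor):
    """Nearest-neighbour resampling of a coarse grid (e.g. soil moisture at
    ~9 km) onto a finer target grid so it can be stacked channel-wise with
    high-resolution predictors. Purely illustrative; real pipelines reproject
    into a common map grid with a geospatial library."""
    fine = []
    for row in coarse:
        fine_row = [v for v in row for _ in range(factor)]
        fine.extend([list(fine_row) for _ in range(factor)])
    return fine

# A 2x2 soil-moisture tile upsampled 3x to match a 6x6 high-resolution patch.
sm = [[0.20, 0.35],
      [0.15, 0.40]]
sm_fine = upsample_nearest(sm, 3)
```

After resampling, each fine-grid pixel carries a full predictor vector (soil moisture, precipitation, elevation, land use), which is what the CNN consumes.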
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: Enhancing Rapid Tsunami Hazard Estimation: the ALTRUIST Project

Authors: Michela Ravanelli, Elvira Astafyeva, Mattia Crespi
Affiliations: Sapienza University of Rome, IPGP, Université Paris Cité
Tsunamis are among the most devastating geo-hazards, posing significant threats to coastal communities. The financial losses and human toll are immense, as exemplified by the 2004 Sumatra-Andaman earthquake and tsunami, which caused over 200,000 fatalities. Critically, the first tsunami waves reached Sri Lanka's coasts within two hours without any warning issued. This highlights the urgent need for reliable and timely tsunami warning systems, especially in earthquake-prone regions. Complementary capabilities that enhance or support existing warning systems could significantly improve coastal safety. However, failures in tsunami warning systems - which rely only on seismic and sea level data - over the past 15 years underline the necessity of exploring new paradigms for ocean monitoring and tsunami hazard estimation to reinforce traditional observational techniques. Over the last 30 years, GNSS (Global Navigation Satellite Systems), thanks to their dense deployment and high temporal resolution, have played a pivotal role in analyzing the spatial and temporal dynamics of geo-hazards, capturing variations across time scales from decades to sub-seconds and spatial scales from local to global. Specifically, in the past decade, GNSS Ionospheric Seismology has made remarkable progress in detecting earthquake and tsunami signatures in the ionosphere, the upper part of Earth's atmosphere. By analyzing GNSS-TEC (Total Electron Content) observations, this field studies the ionospheric response to geo- and human-induced hazards. Tsunamis and earthquakes generate acoustic and gravity waves (AGWs), which, due to the decrease in atmospheric density with altitude, can propagate to the ionosphere, causing TEC disturbances. This enables the remote sensing of the ionosphere, allowing the imaging of Total Electron Content and providing valuable insights for geo-hazard assessments.
The ALTRUIST (totAL variomeTry foR tsUnamI hazard eStimaTion) project fits squarely into this context, aiming to improve the reliability and accuracy of real-time tsunami warning systems by leveraging the GNSS Total Variometric Approach (TVA) methodology. Developed at Sapienza University of Rome, TVA combines two innovative algorithms: VADASE (Variometric Approach for Displacement Analysis Stand-Alone Engine) and VARION (Variometric Approach for Real-Time Ionosphere Observation) [1]. These algorithms process the same real-time GNSS data streams to simultaneously estimate ground motion, including co-seismic displacements, and ionospheric TEC disturbances caused by earthquakes and tsunamis. This dual-layer capability allows TVA to bridge geospheric observations and support traditional tsunami warning systems. Indeed, the ground motion analysis (through VADASE) provides velocity and displacement data, estimating the magnitude and direction of ground motion critical for seafloor displacement evaluation and tsunamigenic potential assessment, while the ionospheric TEC monitoring (through VARION) tracks TEC anomalies, offering insights into vertical sea surface displacement and validating tsunami potential within 10 minutes of seismic rupture. The TVA methodology was tested in a real-time scenario during the 2015 Mw 8.3 Illapel earthquake and tsunami, demonstrating its potential to enhance tsunami genesis estimation and contribute significantly to preparedness workflows and disaster risk reduction strategies. Currently, TVA is being implemented in real-time through the ALTRUIST project, supported by the AXA Research Fund and UNESCO-IOC within the United Nations Ocean Decade [2]. ALTRUIST is being piloted using the GNSS network of the Observatoire Volcanologique et Sismologique de Guadeloupe (IPGP) in the French Caribbean.
ALTRUIST integrates a front-end dashboard for real-time and interactive data visualization and a modular, scalable back-end layer for real-time and historical data management, allowing easy integration with external modules to expand system capacity. This architecture supports simultaneous monitoring of ground motion and ionospheric TEC disturbances, marking a breakthrough in multi-sphere geospheric analysis. ALTRUIST leverages the full capabilities of multi-constellation GNSS systems, including Galileo, to enable comprehensive global monitoring, even in remote and underserved regions. By incorporating Galileo's advanced features such as enhanced signal accuracy and robust availability, ALTRUIST ensures reliable and real-time applicability wherever GNSS data access is available, significantly enhancing tsunami early warning systems worldwide. ALTRUIST represents a significant leap in GNSS technology, transitioning from academic research to practical applications. Its cost-effective implementation leverages existing GNSS networks, addressing sustainability challenges. The project's scalability makes it particularly relevant for regions like the South Pacific, where traditional warning systems often fall short. Finally, by providing additional resources for tsunami hazard estimation and integrating multi-geospheric observations, ALTRUIST establishes a new benchmark for real-time tsunami hazard assessment. It has the potential to complement traditional tsunami early warning systems, strengthen collaboration within global tsunami alert frameworks, and contribute to enhancing the safety of coastal communities worldwide. [1] Ravanelli M. et al. (2021). GNSS Total Variometric Approach: First Demonstration of a Tool for Real-Time Tsunami Hazard Estimation, Scientific Reports, 11(1). [2] https://axa-research.org/funded-projects/climate-environment/mitigating-tsunamis-threats-and-destructive-impacts-through-enhanced-navigation-satellite-system
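The variometric idea behind VARION, differencing carrier-phase observations between consecutive epochs so that the constant phase ambiguities cancel, can be sketched for TEC as follows. This is a simplified illustration using the GPS L1/L2 frequencies and the standard geometry-free combination, not the VARION implementation itself.

```python
def delta_stec(l4_prev_m, l4_curr_m, f1=1575.42e6, f2=1227.60e6):
    """Epoch-to-epoch change in slant TEC (in TECU) from the geometry-free
    carrier-phase combination L4 = lambda1*phi1 - lambda2*phi2 (in metres).

    Differencing consecutive epochs cancels the constant phase ambiguities,
    which is the core idea of variometric processing. Frequencies default to
    GPS L1/L2; 40.3e16 converts metres of L4 delay to TEC units (1 TECU =
    1e16 electrons/m^2).
    """
    k = (f1**2 * f2**2) / (40.3e16 * (f1**2 - f2**2))
    return k * (l4_curr_m - l4_prev_m)
```

For GPS L1/L2 a change of about 0.105 m in the geometry-free combination corresponds to roughly 1 TECU, so small ionospheric disturbances produce measurable phase signatures.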
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: Advancing Drought Resilience in South Africa: The ANIN Project and its Earth Observation-Based Early Warning System

Authors: Juan Suarez, Carlos Domenech, PhD Èlia Canoni, PhD Beatriz Revilla-Romero, PhD Pablo Torres, Mr Jesús Ortuño, Mr Mxolisi Mukhawana, PhD Ndumiso Masilela, PhD Christina Botai, Mr Jaco de Wit, Mr Thomas Tsoeleng, Mr Morwapula Mashalane, MSc Sibonile Sibanda, PhD Andy Dean, Mr Vangelis Oikonomopoulos, Mr Emile Sonnenveld, Wai-Tim Ng, MSc Fabrizio Ramoino, PhD Clement Albergel
Affiliations: GMV, Department of Water and Sanitation, South Africa Weather Service, South Africa National Space Agency, Hatfield Consultants Africa, Hatfield Consultants, AgroApps, VITO, ESA-ESRIN, ESA-ECSAT
The ANIN Project, also referred to as the South Africa Drought Monitoring National Incubators, is an initiative financed by the European Space Agency (ESA) Earth Observation for Africa (EO Africa) programme. This project sought to bolster South Africa's resilience to droughts by creating a comprehensive drought early warning system specifically designed to meet the needs of South African stakeholders. The aim was to develop an advanced Earth Observation (EO)-based solution that would enable the country to better prepare for, mitigate, and respond to droughts, an escalating concern in the face of increasing climate variability in southern Africa. A key outcome of the project is an open-source drought monitoring system that uses EO data to generate drought indices. This system provides near real-time insights into the state of drought conditions across South Africa. Recognising the multifaceted nature of drought, ANIN employs a multi-pronged approach to evaluate various drought types, including meteorological, soil moisture (agricultural/ecological), and hydrological droughts. This approach is based on the understanding that these drought types are interconnected and propagate through the hydrological cycle. Meteorological drought is evaluated using the Standardised Precipitation Index (SPI) and the Standardised Precipitation-Evapotranspiration Index (SPEI). The SPI, a widely used indicator, compares current precipitation accumulations to historical data, revealing precipitation deficits. The SPEI, in contrast, incorporates both precipitation and potential evapotranspiration (PET), reflecting the impact of temperature on water demand. Both the SPI and SPEI are calculated at various time scales, enabling the detection of both short-term and long-term drought conditions. Soil moisture drought, which has direct implications for agriculture and ecosystems, is monitored in ANIN using the Vegetation Condition Index (VCI) and the Combined Drought Indicator (CDI). 
The VCI compares current Normalised Difference Vegetation Index (NDVI) values to historical ranges, indicating the health of vegetation and levels of stress. The CDI integrates SPI, Soil Moisture Anomaly (SMA), and FAPAR anomaly data to provide a comprehensive assessment of agricultural drought risk and recovery stages. Hydrological drought is monitored using the Standardised Streamflow Index (SSFI) and the Standardised Groundwater Index (SGI). The SSFI assesses streamflow anomalies in relation to long-term averages, providing insights into river discharge conditions. Similarly, the SGI evaluates groundwater level anomalies, reflecting the state of groundwater resources. The ANIN system is fully integrated into the SANSA Digital Earth South Africa (DESA) infrastructure. This integration enables seamless management and analysis of EO data for drought monitoring. The collaborative nature of ANIN was crucial in achieving these outcomes. From the outset, the project was designed to incorporate local knowledge and needs (data, analysis, infrastructure gaps) and to build local capacity. This was achieved by involving South African partners in the co-design and co-development of the system, culminating in its deployment within the SANSA infrastructure. ANIN has significantly enhanced South Africa's drought monitoring capabilities by providing a more precise and efficient method for tracking drought conditions in near real-time. User feedback indicates that the system's drought indices accurately reflect conditions across different regions, supporting informed decision-making in water management, agriculture, and disaster response. The impact of ANIN extends beyond data provision; the collaboration between European and South African partners has empowered local stakeholders to independently manage and utilise EO technology. The success of the project has created opportunities for potential upscaling to encompass the Southern African Development Community (SADC) region. 
There is considerable interest in expanding ANIN's reach to address similar environmental challenges faced by the sixteen SADC member countries. This expansion would require establishing partnerships with relevant agencies, customising the system for regional needs, and establishing a unified platform for drought monitoring across Southern Africa.
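Several of the indices listed above are simple rescalings of an observed variable against its climatology. The VCI, for instance, scales the current NDVI within its historical min-max range for the same location and period (the standard formulation; the values below are illustrative):

```python
def vci(ndvi_now, ndvi_history):
    """Vegetation Condition Index: 0 = worst observed vegetation conditions
    in the historical record, 100 = best. `ndvi_history` holds NDVI values
    for the same pixel and period over previous years."""
    lo, hi = min(ndvi_history), max(ndvi_history)
    return 100.0 * (ndvi_now - lo) / (hi - lo)
```

The SPI and SGI follow the same standardisation logic but first fit a probability distribution to the historical record before transforming to a standard normal score.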
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: Integrating Remote Sensing and Tsunami Numerical Simulations for Building Damage Mapping

Authors: Bruno Adriano, Shunichi Koshimura
Affiliations: International Research Institute of Disaster Science, Tohoku University
Assessing damaged buildings after a devastating disaster is essential for prompt and effective rescue and relief efforts. Recently, combined approaches based on machine learning and Earth observation technologies have increasingly performed well in automatic damage recognition over large affected areas. However, machine learning methods require many human expert-labeled training samples, which are often unavailable. Collecting them after a disaster strikes is not feasible because affected areas frequently become isolated or present dangers to early field survey missions. To address this challenge, previous studies have leveraged existing benchmark datasets built from previous disasters and trained machine-learning models that are expected to perform well when applied to new disaster events. Although such approaches have shown success in some cases, the generalization ability of these machine learning models still needs improvement, often requiring a minimum amount of training samples collected from the affected areas to guarantee acceptable performance. In this context, this study presents another approach to addressing the challenge of collecting ground truth samples soon after a disaster occurs. Numerical simulation using physics-based computational models can simulate the intensity of given disasters, such as peak ground acceleration in earthquake events and inundation depth in the case of flood disasters. This work introduces a novel building damage mapping method for tsunami disasters that uses disaster intensity as complementary information to train a machine learning classifier. In the absence of training data, the primary assumption is that disaster intensity is correlated with the degree of building damage and can be used as additional data in a weakly supervised scheme. We evaluate the performance of our proposed method on two tsunami disasters, namely the 2011 Tohoku Tsunami and the recent 2024 Noto Peninsula Tsunami, both events in Japan.
The experimental results showed that our method performs similarly to a fully supervised scenario in which training samples are available.
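The weak-labeling idea above can be sketched as follows: simulated disaster intensity (here, inundation depth) is binned into pseudo-labels that stand in for expert annotations. The thresholds and three-class scheme below are illustrative assumptions, not values from the study.

```python
import numpy as np

def weak_labels_from_intensity(depth, thresholds=(0.5, 2.0)):
    """Map simulated inundation depth (m) to weak damage classes:
    0 = undamaged, 1 = moderate, 2 = severe.
    The thresholds are illustrative placeholders, not study values."""
    labels = np.zeros(depth.shape, dtype=int)
    labels[depth >= thresholds[0]] = 1
    labels[depth >= thresholds[1]] = 2
    return labels

# Toy example: per-building simulated inundation depths (m)
depth = np.array([0.1, 0.8, 3.2, 0.0, 2.5])
y_weak = weak_labels_from_intensity(depth)
```

These pseudo-labels could then train any standard classifier on remote sensing features, in place of (or alongside) scarce ground-truth samples.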
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: A Novel Two-Stage Approach for Buildings and Roads Damage Assessment in Remote Sensing Imagery

Authors: Lorenzo Innocenti, PhD. Edoardo Arnaudo, Jacopo Lungo
Affiliations: LINKS Foundation
Natural disasters, such as earthquakes, hurricanes, and floods, can cause widespread destruction, disrupting communities and critical infrastructure. The rapid assessment of infrastructure damage following natural disasters is crucial for effective emergency response and resource allocation. Traditionally, damage assessment has relied on satellite imagery and human analysis. This approach has two main drawbacks: the coarse time resolution of satellite imagery and the time-consuming nature of manual image analysis. Machine learning (ML) models can significantly speed up the image analysis process, which is critical in emergency situations, and can also aid manual assessment by highlighting areas of concern that might be overlooked by humans, increasing the overall effectiveness of damage assessment. Unmanned aircraft systems (UAS) can further expedite the process by capturing high-resolution images without the delay associated with satellite imagery. These aerial platforms provide high-resolution imagery that benefits both human analysts and ML models, enabling more precise and timely information about the extent and nature of infrastructure damage. In this study, we propose a neural network (NN) model designed for damage assessment in disaster scenarios. The model takes as input two images: a pre-disaster image from a very high-resolution (VHR) satellite, such as Maxar, and a post-disaster image, which can be either from another VHR satellite or a downscaled aerial image. By analyzing these images, the model generates a map that highlights the locations and extent of damage to the infrastructure present within the area. To address the problem of damage assessment on both buildings and roads, we propose a novel two-stage approach combining infrastructure segmentation and change detection.
The first stage consists of an infrastructure segmentation model that classifies each pixel in remote sensing imagery into three categories: background, road, or building. While there are existing datasets focused on damage assessment of buildings, there is a notable lack of datasets for road damage assessment and even for road segmentation in remote sensing. This gap is significant because damaged road infrastructure can severely hamper emergency response efforts by cutting off affected areas from aid and connectivity after a natural disaster. Therefore, developing comprehensive datasets that include road damage assessment is as important as developing those focused solely on buildings. To overcome the dataset limitation, we develop a novel annotation pipeline utilizing state-of-the-art foundation models to automatically generate training data. Specifically, we employ the Microsoft Buildings Footprint dataset and the Microsoft Road Detection dataset as prompts for a segmentation foundation model to generate training label images. The Microsoft Buildings Footprint dataset is a collection of building footprints derived from satellite imagery, which includes over 1.4 billion building footprints detected from Bing Maps imagery between 2014 and 2024, using data from sources like Maxar, Airbus, and IGN France. The Microsoft Road Detection dataset consists of road detections derived from Bing Maps aerial imagery between 2020 and 2022, and contains approximately 48.9 million kilometers of roads. Both are freely available under the Open Data Commons Open Database License (ODbL). To generate image labels from infrastructure footprints, we utilize the Efficient Segment Anything Model (ESAM), an advanced iteration of the Segment Anything Model. ESAM is a highly versatile general segmentation model, supporting both point and text prompts. This enables users to perform segmentation tasks by marking specific points on an image or providing text descriptions.
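Preparing point prompts from footprint geometries can be sketched as below. The actual ESAM prompting interface is not specified in the abstract, so `footprint_point_prompts` is a hypothetical helper that simply derives one point per polygon.

```python
import numpy as np

def footprint_point_prompts(footprints):
    """Derive one point prompt per footprint polygon (vertex centroid).

    `footprints` is a list of (N, 2) arrays of (x, y) vertices. A plain
    vertex mean is used for brevity; an area-weighted centroid (or a
    guaranteed-interior point) would be more robust for concave shapes.
    """
    return [poly.mean(axis=0) for poly in footprints]

# Toy example: one square building footprint
square = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
prompts = footprint_point_prompts([square])
```

Such points (one per building or road vertex chain) would then be passed to the promptable segmentation model together with the text prompts mentioned in the abstract.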
In our application, we use points from the buildings and roads dataset, along with a series of words representing roads and buildings. This annotated dataset was then used to train a lightweight segmentation model tailored for infrastructure identification. The model consists of a ConvNeXt-based neural network, which is a modern convolutional neural network (CNN) designed for high performance in vision tasks. It follows a U-Net architecture with skip connections, allowing it to extract hierarchical features in the encoder and reconstruct the segmentation in the decoder using higher-resolution data from the skip connections. Our version also features an encoder pretrained on ImageNet, which improves its performance and reduces the training time required for our task. The second stage employs a change detection model trained on an existing remote sensing change detection dataset. For this, we used a similar model, also based on ConvNeXt with a pretrained encoder. The model first extracts features from both pre- and post-event images; the decoder then takes as input the absolute difference of the pre- and post-event features, allowing the change detection to work both ways (i.e., detecting changes where a building is present in the pre-event image but not in the post-event image, and vice versa). For training the change detection model, we utilized the SYSU-CD dataset. This dataset contains 20,000 pairs of 0.5-meter resolution aerial images, each sized 256×256 pixels, taken between 2007 and 2014 in Hong Kong. The types of changes captured in the dataset include newly built urban buildings, suburban expansion, groundwork before construction, road expansion, and similar changes.
By combining the change detection scores with the infrastructure segmentation information, our system can identify and categorize building damage: the system takes the segmented buildings and roads from the pre-disaster images, checks the damage assessment score and, if the pixels inside the infrastructure have a high score, marks them as damaged or destroyed. This tool is part of the OVERWATCH project, which aims to provide an immersive and intuitive operational crisis asset management tool for public authorities responsible for civil safety and emergency services. Our tool, integrated into the project platform, provides a web-based dashboard that incorporates an automatic Earth observation-based pipeline. Public authorities can upload pre- and post-disaster images, which are then processed to generate infrastructure damage scores. This process is entirely automated, requiring no human interaction beyond the initial image upload. The platform ensures that emergency responders have on-demand access to accurate and timely damage assessments, enhancing decision-making and situational awareness during crises. This research was conducted within the framework of the Horizon EU OVERWATCH project (Grant ID. 101082320).
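The final combination step can be sketched as a simple mask-and-threshold operation; the 0.5 threshold and the class encoding are illustrative assumptions, not the project's actual configuration.

```python
import numpy as np

def damage_map(seg, change_score, threshold=0.5):
    """Mark infrastructure pixels (seg > 0: 1 = road, 2 = building)
    whose change-detection score exceeds `threshold` as damaged.

    Returns 0 = background/intact, 1 = damaged road, 2 = damaged building.
    The 0.5 threshold is an illustrative placeholder.
    """
    damaged = (seg > 0) & (change_score > threshold)
    out = np.zeros_like(seg)
    out[damaged] = seg[damaged]
    return out

# Toy 2x2 scene: one intact road, one damaged and one intact building
seg = np.array([[0, 1], [2, 2]])
score = np.array([[0.9, 0.2], [0.8, 0.4]])
dm = damage_map(seg, score)
```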
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: Investigating all-weather rapid flood mapping with Sentinel-1 Ground Range Detected and Single Look Complex data.

Authors: Nikolaos Ioannis Bountos, Maria Sdraka, Angelos Zavras, Ilektra Karasante, Andreas Karavias, Themistocles Herekakis, Angeliki Thanasou, Dimitrios Michail, Professor Ioannis
Affiliations: National Technical University Of Athens, Harokopio University of Athens, National Observatory of Athens
Global floods, driven by climate change, pose significant risks to human lives, infrastructure, and ecosystems. Recent disasters in Pakistan and Valencia emphasize the pressing need for accurate flood mapping to support recovery efforts, assess vulnerabilities, and improve preparedness. The Sentinel missions deliver abundant remote sensing data, presenting a vital opportunity to address this challenge. Sentinel-1's Synthetic Aperture Radar data is particularly well-suited, offering all-weather, day-and-night imaging capabilities ideal for the task. The significant advancements in deep learning, which have already provided major milestones in both computer vision and remote sensing, present a powerful opportunity to address this critical challenge. Its application to flood mapping, however, is limited, mainly due to the lack of large curated datasets. To address this gap, we curate time series data from Sentinel-1 SAR imagery for 43 flood events worldwide, manually annotated by SAR experts. The dataset includes two SAR products: a) Ground Range Detected (GRD) SAR, optimized for flood mapping, and b) minimally processed Single Look Complex (SLC) SAR, retaining both phase and amplitude signals. These products are paired with reference annotation maps classifying each pixel into one of the following categories: “Flood”, “Permanent water”, “No water”. We name the resulting dataset “Kuro Siwo”. We enhance Kuro Siwo with an extensive unlabeled set of SAR samples augmenting both products to explore the advances in large-scale self-supervised pretraining for remote sensing [1,2]. The annotated dataset features 67,490 time series and 202,470 unique SAR samples, stored as 224 × 224 tiles, with rich metadata such as acquisition dates, climate zones, and elevation information. Combined, the full dataset offers 533,847 time series and 1,601,511 unique SAR samples, making it a groundbreaking resource for flood mapping and beyond.
Building on Kuro Siwo, we construct a framework to evaluate the capabilities of GRD and SLC products for rapid flood mapping by developing a comprehensive benchmark of state-of-the-art models inspired by the semantic segmentation, change detection, and temporal modeling domains. Our benchmark includes both convolutional and transformer-based architectures, e.g., U-Net [3] and UPerNet [4], implemented with various backbone variants like the ResNet [5] and Swin Transformer [6] families, providing strong baselines for future research. As expected, heavily processed GRD data are better suited for rapid flood mapping with conventional real-valued architectures, achieving an ~83.85% F1 score for the binary water/no-water classification and 80.12% and 78.24% for the flood and permanent water categories respectively. However, our experiments demonstrate that deep learning models can effectively classify even unrefined SLC data when paired with high-quality annotations, like those in Kuro Siwo. For example, a standard U-Net with a ResNet18 backbone achieves an F1 score of ~79.94% on binary water detection, and ~71.20% and ~76.76% on the flood and permanent water categories, respectively. These results are particularly noteworthy, as the models in our benchmark were not specifically optimized for SLC’s unique characteristics. Investigating SAR’s complex-valued data with methods tailored to this domain is a promising avenue for future work. This comparative study sets a high standard for future GRD- and SLC-based methods for the critical application of rapid flood mapping. [1] Cong, Yezhen, et al. "Satmae: Pre-training transformers for temporal and multi-spectral satellite imagery." Advances in Neural Information Processing Systems 35 (2022): 197-211. [2] Bountos, Nikolaos Ioannis, Arthur Ouaknine, and David Rolnick. "FoMo-Bench: a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models." arXiv preprint arXiv:2312.10114 (2023).
[3] Ronneberger, O., Fischer, P., and Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. [4] Xiao, T., Liu, Y., Zhou, B., Jiang, Y., and Sun, J. (2018). Unified perceptual parsing for scene understanding. Proceedings of the European conference on computer vision (ECCV), pages 418–434 [5] He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778 [6] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE/CVF international conference on computer vision, pages 10012–10022.
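The per-class F1 scores reported above can be computed from predicted and reference maps as in this plain NumPy sketch; the integer class encoding is an assumption for illustration.

```python
import numpy as np

def f1_per_class(y_true, y_pred, n_classes=3):
    """Per-class F1 for pixel-wise maps; the class encoding
    (0 = no water, 1 = permanent water, 2 = flood) is an assumption."""
    scores = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        # F1 = 2TP / (2TP + FP + FN); define F1 = 0 for an absent class
        scores.append(2 * tp / denom if denom else 0.0)
    return scores

# Toy 5-pixel example
y_true = np.array([0, 0, 1, 2, 2])
y_pred = np.array([0, 1, 1, 2, 0])
scores = f1_per_class(y_true, y_pred)
```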
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Session: C.06.01 Sentinel-1 mission performance and product evolution

The Sentinel-1 mission, a joint initiative of the European Commission (EC) and the European Space Agency (ESA), comprises a constellation of two polar-orbiting satellites operating day and night, performing C-band synthetic aperture radar imaging that enables them to acquire imagery regardless of the weather. The C-band SAR instrument can operate in four exclusive imaging modes with different resolution (down to 5 m) and coverage (up to 400 km). It provides dual polarization capability, short revisit times and rapid product delivery. Since the launch of Sentinel-1A and Sentinel-1B, in 2014 and 2016 respectively, many improvements have been made to the mission performance, and the products have evolved in many respects. Sentinel-1B experienced an anomaly in December 2021 which rendered it unable to deliver radar data, and the launch of Sentinel-1C is planned for 2023. This session will present the recent improvements related to a) the upgrade of the product characteristics, performance and accuracy, b) the better characterization of the instrument with the aim of detecting anomalies or degradation that may impact data performance, c) the anticipation of performance degradation by developing and implementing mitigation actions and d) the explorative activities aiming at improving the product characteristics or expanding the product family to stay on top of the evolving expectations of the Copernicus Services.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: Enhancing Sentinel-1-derived Soil Moisture Product Validation: Upscaling Methodologies and Insights from the Copernicus GBOV Service

Authors: Ana Pérez-Hoyos, Rémi Grousset, Christophe Lerebourg, Dr Marco Clereci, Nadine Gobron, Ernesto Lopez-Baeza
Affiliations: Albavalor, ACRI-ST, European Commission Joint Research Centre
The Copernicus Ground-Based Observations for Validation (GBOV) service (https://gbov.land.copernicus.eu) aims to develop and disseminate robust in-situ datasets from a network of ground-based monitoring sites. These datasets enable systematic and quantitative validation of Earth Observation (EO) products generated by the Copernicus Land Monitoring Service. The GBOV service provides two types of datasets: Reference Measurements (RMs), consisting of raw ground observations from diverse contributing networks, and Land Products (LPs), which are upscaled variables specifically processed for EO validation purposes. This presentation focuses on one of the seven GBOV Land Products, soil moisture (SM), identified as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS). Specifically, it emphasizes the collection and processing of Surface Soil Moisture (< 5 cm depth) to generate high-quality daily soil moisture RMs (RM-10) and precipitation RMs (RM-11) with hourly temporal resolution. These datasets are compiled from over 40 globally distributed sites, covering a comprehensive range of land cover types and climatic conditions. A core component of this work involves developing an upscaling methodology to transform in-situ RM soil moisture data (RM-10) into a Land Product (LP-6). The LP-6 dataset provides surface soil moisture (SSM) values aggregated over a 1 km grid aligned with the 0.1° × 0.1° Copernicus grid. The upscaling approach incorporates parameters derived from the Sentinel-3 Sea and Land Surface Temperature Radiometer (SLSTR), namely the Normalized Difference Vegetation Index (NDVI), Land Surface Temperature (LST), and the Temperature-Vegetation Dryness Index (TVDI). Three modelling scenarios were evaluated: i) TVDI as a stand-alone proxy for soil moisture, ii) NDVI and LST as input variables, and iii) a more comprehensive model incorporating all three proxies (i.e., NDVI, LST and TVDI).
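As a sketch of scenario (i), the TVDI can be computed from NDVI and LST once the dry and wet edges of the NDVI-LST "triangle" are known; the edge coefficients below are illustrative assumptions, not GBOV values.

```python
import numpy as np

def tvdi(ndvi, lst, dry_edge, wet_edge):
    """Temperature-Vegetation Dryness Index.

    TVDI = (LST - LST_wet) / (LST_dry - LST_wet), where the dry and wet
    edges are linear functions of NDVI, given as (slope, intercept)
    pairs fitted to the NDVI-LST scatter. Coefficients here are assumed
    inputs, not values from the GBOV processing.
    """
    lst_dry = dry_edge[0] * ndvi + dry_edge[1]
    lst_wet = wet_edge[0] * ndvi + wet_edge[1]
    return (lst - lst_wet) / (lst_dry - lst_wet)

# Toy example: dry edge LST = -10*NDVI + 320 K, wet edge flat at 290 K
val = tvdi(np.array([0.5]), np.array([305.0]), (-10.0, 320.0), (0.0, 290.0))
```

TVDI near 0 indicates wet conditions and near 1 dry conditions, which is what makes it usable as a stand-alone soil moisture proxy.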
A wide range of statistical and machine learning (ML) algorithms were evaluated to establish robust transfer functions that model the relationship between soil moisture measurements and remote sensing variables. These algorithms captured both linear and non-linear patterns in the data; they included linear regression, polynomial regression (2nd and 3rd degree), logarithmic models, interaction terms, regularized regression (i.e., ridge regression), random forest, extreme gradient boosting (XGBoost) and support vector machines. Additionally, a categorical monthly dummy variable was incorporated into the linear, polynomial and logarithmic models to account for seasonality. Model performance was evaluated using two key metrics: the coefficient of determination (R²), which measures the proportion of variance in soil moisture explained by the model, and the Root Mean Squared Error (RMSE), which quantifies the average magnitude of prediction error. Results indicated that the most effective model was a second-degree polynomial algorithm incorporating a monthly temporal component, which proved critical for capturing seasonal patterns and significantly enhancing model accuracy. Logarithmic and 3rd-degree polynomial models showed similar results. While ML models demonstrated strong performance, they yielded lower R² values than the polynomial models when tested on independent datasets. This suggests that simpler approaches may provide more reliable predictions for soil moisture upscaling. Performance was site-specific, with high R² values (>0.6) observed at locations such as Litchfield, Valencia, Saint Felix, and Montaut. In contrast, lower R² values (<0.3) were recorded at sites like Barlad, Calarasi, Darabani, and Tereno. Final models were validated against independent datasets such as ECMWF ERA-5, providing a robust comparison and ensuring reliable validation of the models.
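A minimal sketch of the best-performing configuration (2nd-degree polynomial with a monthly dummy), fitted by ordinary least squares on synthetic data; the variable names and the synthetic SSM generator are assumptions for illustration only.

```python
import numpy as np

def design_matrix(x, month):
    """2nd-degree polynomial terms plus one-hot monthly dummies
    (reference month 1 dropped to avoid collinearity with the intercept)."""
    poly = np.column_stack([np.ones_like(x), x, x**2])
    dummies = np.eye(12)[month - 1][:, 1:]   # columns for months 2..12
    return np.hstack([poly, dummies])

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)                   # e.g. a TVDI-like proxy
month = rng.integers(1, 13, 200)
# Synthetic SSM: linear in the proxy plus a seasonal month effect
sm = 0.4 - 0.2 * x + 0.05 * np.sin(2 * np.pi * month / 12)

X = design_matrix(x, month)
coef, *_ = np.linalg.lstsq(X, sm, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((sm - pred) ** 2) / np.sum((sm - np.mean(sm)) ** 2)
```

Because the monthly dummies absorb any per-month offset, seasonality is captured without assuming a functional form for it, mirroring the role of the "monthly temporal component" described above.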
A first assessment of the CLMS Surface Soil Moisture 1 km product obtained from Sentinel-1 C-band SAR backscatter will be presented.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: An overview of Sentinel-1 instruments status, L1 product performance and evolution

Authors: Muriel Pinheiro, Antonio Valentino, Guillaume Hajduch, Pauline Vincent, Andrea Recchia, Martin Steinisch, Riccardo Piantanida, Kersten Schmidt, Christoph Gisinger, Jakob Giez
Affiliations: ESA/ESRIN, Starion Group, CLS, Aresys, DLR-HR, DLR-IMF
The Copernicus Sentinel-1 (S-1) mission ensures the continuity of C-band SAR observations over Europe. The routine operations of the constellation are on-going and performed at the maximum capacity allowed by the Sentinel-1A unit. The Sentinel-1B unit has not been operational since December 2021 and the new Sentinel-1C unit is going to be launched in December 2024. The mission is characterized by large-scale and repetitive observations, systematic production and a free and open data policy. Sentinel-1 data are routinely used by Copernicus and many other operational services, as well as in the scientific and commercial domains. A key aspect of the Copernicus program is the constant provision of open and free high-quality data. This requires long-term engagement to carefully monitor, preserve, and improve the system and product performance. The Sentinel-1 SAR Mission Performance Cluster (SAR-MPC) is an international consortium of SAR experts in charge of the continuous monitoring of the S-1 instruments' status, as well as the monitoring of the quality of the L1 and L2 products. This is typically done by analyzing the variation of key parameters over time using dedicated auxiliary products or standard data available to the public, e.g., antenna monitoring through RFC products, and radiometry and geolocation using standard data and dedicated Fiducial Reference Measurements (FRM). The SAR-MPC is also responsible for implementing any actions necessary to prevent or minimize quality degradation, e.g., in the event of an instrument anomaly. This includes updates of processor configuration files and updates of the S-1 Instrument Processing Facility (IPF) algorithms and/or their implementation [1]. A Sentinel-1A platform anomaly impacting the thruster in charge of the orbit inclination control occurred in 2024. After the event, ESA decided, in agreement with the European Commission, to suspend the orbit inclination control manoeuvres for spacecraft safety reasons.
This decision has further consequences on the Sentinel-1 orbit which had, since the beginning of the mission, been maintained within a 200 m RMS diameter tube. The impact on interferometry, in particular due to the increased perpendicular baselines, has been analysed and considered acceptable. Starting mid-April 2024, the orbit inclination is naturally evolving following a yearly pattern further modulated by a secular drift. The monitoring of baseline and burst synchronization continues to be done routinely and has evolved to better track the effects of the change in orbit control, e.g., to better identify latitude dependencies of baseline and burst synchronization. The monitoring of burst synchronization has also been extended to verify variations within the data-take, which are along-track baseline dependent, and to include verification using information from the Orbit (on-board) Position Schedule (OPS) angle, which allows monitoring of the instrument's capability to perform synchronization. The monitoring of both the SAR antenna health status and of the SAR instrument is carried out exploiting the dedicated auxiliary products and helps minimize degradation of SAR data quality originating from instrument aging or element failures. In the case of antenna health, the analysis is performed using the RF Characterization (RFC) products, which allow assessment of the status of the 280 TRMs composing the SAR antenna. In April 2024, monitoring of the antenna error matrices obtained using S1A RFC products identified a failure of a single antenna TRM module of Sentinel-1A. The identification of the anomaly was followed by a dedicated quality impact assessment that confirmed no appreciable degradation of the performance. A small degradation of one element in H pol of Sentinel-1A has been observed since January 2021 (loss of about 3 dB gain in Rx and 1 dB gain in Tx), but with no impact on the data quality at the moment.
In general, the antenna monitoring shows that there has been no considerable degradation since 2017 for Sentinel-1A. The instrument status is monitored through the internal calibration and noise products, which can be used, for example, to generate time series of the PG product. Current analysis shows that the overall behavior of both instruments is quite stable, with the slope of the PG gain trend below 0.1 dB/year for both units. The radiometric and geolocation performance of L1 products is assessed using standard Sentinel-1A data and is also stable and within specifications. In particular, the DLR calibration site composed of transponders and corner reflectors is used to assess the stability of the radiometry, and current analysis including data from 2017 until 2024 shows a mean value of -0.1 dB and standard deviations below 0.25 dB for both units. In addition to the point-target analysis, gamma measurements over uniformly distributed targets like rainforest are also used to assess the relative radiometric accuracy of Sentinel-1 products. Based on the flatness of such profiles, updates of the antenna patterns and processing gains are performed in order to ensure radiometric accuracy. The geolocation accuracy is monitored using dedicated acquisitions over additional corner reflector calibration sites such as Surat Basin, Australia, and includes the compensation of known instrument and environmental effects, e.g., propagation through the troposphere and ionosphere or solid Earth deformation signals [2]. Current analysis of the point targets shows an absolute mean value of less than 20 cm in azimuth and less than 10 cm in range for the Sentinel-1A unit, with respective standard deviations of less than 10 cm and 30 cm.
The regular monitoring also shows a few centimeters of impact of the presently very high solar activity on Sentinel-1 geolocation performance, which is attributed to accuracy limitations in the ionospheric delay corrections applying the GNSS-based Total Electron Content (TEC) maps. Toward the beginning of 2024, Doppler jumps larger than usual (up to 50 Hz) were observed between different star-tracker (STT) configurations. An STT re-calibration was therefore proposed and implemented in June 2024 and shows positive results in terms of Doppler time series continuity. In general, with the only exception of a small degradation of the orbital tube of Sentinel-1A, the SAR-MPC monitoring activities show that the performance is nominal and stable. The quality of the L2 products is also continuously monitored by the SAR-MPC (see dedicated presentation in [4]). The IPF has also continuously evolved to improve the data quality and its usability. The latest version is IPF 3.9, which was deployed on November 25th, 2024. The main evolutions included in the latest IPF versions are:
- Support of specific timelines for S-1C and D
- Annotation of used L0 A/C/N products in the manifest
- Correction of the ANX date annotated in the manifest
- Improved robustness of the burst ID annotation
- Compensation for the effect of RFI in the denoising vector annotation
- Correction and calibration of denoising vectors
Refer to https://sar-mpc.eu/processor/ipf/ for a full list of deployed changes. Together with the deployment of S1-IPF v3.9, the configuration of the SW module for Radio Frequency Interference (RFI) detection and mitigation has been updated. The change consists of a fine-tuning of the parameters aimed at reducing mis-detections, which currently typically affect less than 2% of the slices. The SAR-MPC also maintains a set of tools to support its own monitoring and expert analysis of Sentinel-1 data.
Recently a new tool has been developed with two main purposes:
- to generate the engineering products (L0N) that are needed to exploit rank echoes for the de-noising of products acquired before 2018, and
- to generate accurate de-noising vectors starting from L1 products and exploiting the updated algorithms and the latest calibration data.
The tool will be made available to the public, e.g., to support ad hoc generation of noise vectors for archive products. [1] Sentinel-1 Annual Performance Report 2023, on-line document, https://sentiwiki.copernicus.eu/web/document-library [2] R. Piantanida et al., "Accurate Geometric Calibration of Sentinel-1 Data," EUSAR 2018; 12th European Conference on Synthetic Aperture Radar, 2018 [3] Franceschi et al., "Operational RFI Mitigation Approach in Sentinel-1 IPF", submitted to EUSAR 2022 [4] A. Benchaabane, "Sentinel-1 Level 2 Ocean Products Performance Monitoring: current status and evolutions", submitted to LPS2022. Acknowledgement: The results presented here are an outcome of the ESA contract Sentinel-1 / SAR Mission Performance Cluster Service 4000135998/21/I BG, funded by the EU and ESA. The views expressed herein can in no way be taken to reflect the official opinion of the European Space Agency or the European Union.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: Sentinel-1 Level-2 Ocean Products Performance Monitoring: current status and short-term evolutions

Authors: Amine Benchaabane, Romain Husson, Guillaume Hajduch, Charles Peureux, Pauline Vincent, Antoine Grouazel, Alexis Mouche, Frédéric Nouguier, Geir Engen, Anna Fitch, Harald Johnsen, Yngvar Larsen, Fabrice Collard, Gilles Guitton, Beatrice Mai, Andrea Recchia
Affiliations: CLS, IFREMER, NORCE, OceanDataLab, ARESYS
Introduction
Copernicus Sentinel-1 is a constellation of two C-band Synthetic Aperture Radars (SAR) operated by the European Space Agency; S1A was launched in March 2014 and flew in tandem with S1B from 2016 until December 2021, when S1B ceased operating. Their Level-1 products consist of High-Resolution (HR) radar images distributed with either GRD (Ground Range Detected) or SLC (Single-Look Complex) processing. The knowledge acquired over decades on SAR acquisition over the ocean allows for the measurement of sea surface wind vectors, wave spectra and radial velocity, which are all provided in a single Level-2 product referred to as the OCN (OCeaN) product. L2 OCN data quality is constantly evolving thanks to radiometric and algorithmic improvements performed either at the Level-1 or Level-2 processing steps. This work aims to present the strategy, current performance and short-term evolutions of the so-called Sentinel-1 L2 OCN products from the Mission Performance Cluster Service (S-1 MPC). This activity both depends on and benefits from evolutions of the Level-1 product quality (presented in a dedicated presentation). The first results and performances for the recently launched Sentinel-1C will be provided (not available at the time of writing this abstract).
Ocean Wind
The estimation of wind vectors is made possible by the knowledge of GMFs (Geophysical Model Functions) relating the calibrated NRCS to the wind in a statistical way. These functions are precisely known and have been evolving for decades. Their knowledge allows us to estimate a SAR wind vector at a resolution of 1 km from a Bayesian inversion using the co-polarized channel and an a priori provided by a Numerical Weather Prediction (NWP) model: the ECMWF Integrated Forecasting System (IFS). The validation strategy relies on massive comparisons with atmospheric model forecasts at various resolutions: ECMWF 10 km 3-hourly forecast, NCEP 10 km 3-hourly forecast, AROME and ARPEGE.
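The Bayesian inversion described above can be sketched as a one-dimensional cost minimization over wind speed (direction omitted for brevity). The `toy_gmf` below is an illustrative stand-in, not a calibrated C-band GMF such as CMOD5.N, and the error standard deviations are assumed values.

```python
import numpy as np

def invert_wind(sigma0_obs, u_prior, gmf, sig_s0=0.1, sig_u=2.0):
    """Minimal Bayesian wind-speed inversion by brute-force search.

    Minimizes
        J(u) = ((sigma0_obs - gmf(u)) / sig_s0)**2
             + ((u - u_prior) / sig_u)**2
    i.e. a data term against the GMF plus a prior term against the
    NWP first guess. All noise parameters are illustrative.
    """
    u = np.linspace(0.0, 30.0, 3001)
    cost = ((sigma0_obs - gmf(u)) / sig_s0) ** 2 \
         + ((u - u_prior) / sig_u) ** 2
    return u[np.argmin(cost)]

# Illustrative stand-in for a GMF (NRCS in dB vs wind speed); NOT CMOD5.N
toy_gmf = lambda u: -25.0 + 12.0 * np.log10(u + 1.0)

# Observation consistent with 8 m/s, model first guess of 9 m/s
u_hat = invert_wind(toy_gmf(8.0), 9.0, toy_gmf, sig_s0=0.2)
```

The retrieved speed sits close to the observation-consistent value but is pulled slightly toward the prior, which is exactly the intended behaviour of the Bayesian cost function.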
Numerous statistical diagnostics have been developed to monitor product performance on both wind speed and direction. Strategies are being developed for the incorporation of in-situ data in the Sentinel-1 CAL/VAL chain, especially weather buoy data and other satellite missions. The impact of the radiometric calibration is also specifically investigated to assess its effect on the wind products. Radiometric discontinuities, particularly visible at subswath edges or within the subswath overlap using SLC data, are quantified over a large dataset and can lead to Sigma0 inconsistencies of the order of several tenths of a dB, equivalent to several metres per second for the downstream wind speed. On the algorithmic side, the recent improvements for the operational products concern:
- an increase of the update frequency of the ECMWF wind forecast used as a first guess in the ocean wind measurement process,
- the activation of RFI mitigation applied to input Level-1 products (a dedicated presentation on RFI mitigation is planned during the symposium),
- an ad hoc calibration of GR2 for HH and HV aiming to compensate for a general bias on wind speed.
The short-term improvements for the operational products concern:
- the flagging of rain-impacted regions that can otherwise affect the wind estimates,
- the bright-target removal for the cross-polarization channel to prepare its inclusion in the wind inversion,
- an update of the Bayesian cost function used for wind inversion,
- the choice of the polarization ratio for acquisitions in HH,
- the addition of new variables in the L2 OCN products to enable a potential homogeneous Sigma0 re-calibration of past and recent products.
Ocean Surface Waves
The Sentinel-1 derived wave measurements provide 2-D ocean swell spectra (2-D wave energy distribution as a function of wavelength and direction) as well as classical integrated parameters such as the significant wave height of the observed swell partition, dominant wavelength, and direction.
Several dedicated methodologies have been set up for validation. (i) For each Sentinel-1 ocean swell spectrum measurement, a directional spectrum is systematically produced with the co-located WAVEWATCH III numerical wave model. (ii) As very few acquisitions are available in coastal areas where in-situ buoys are deployed, we perform a dynamical co-location. This method propagates wave measurements acquired by Sentinel-1A in the open ocean up to the closest in-situ buoy for comparison. Such methods are particularly interesting for cross-comparison and inter-calibration of swell measurements from Sentinel-1A and B, or from ascending and descending orbits. (iii) More classical methods such as co-location against altimeters are also used and presented. We present here the calibration and validation methodology, the main improvements put in place over the last years and the improvements planned for the coming months. On the algorithmic side, the recent improvements to the operational products concern:
- the implementation of a new methodology to calibrate the simulated cross spectra used in the quasi-linear inversion of the swell, to calibrate the SAR Modulation Transfer Function (MTF) as a function of the SAR-derived wind speed, and to tune the wave partition quality flags,
- the revisit of the two algorithms used to estimate the wind-sea component of the significant wave height (Hs wind sea) and the so-called "total Hs" from the Wave Mode acquisitions using machine learning techniques.
The short-term improvements to the operational products concern the production of inter-burst and intra-burst cross-spectra with associated variables for the TOPS modes (IW and EW). This paves the way to the estimation of directional ocean wave spectra for these coastal acquisition modes.
Radial Velocity
The so-called radial velocity (RVL) is related to the velocity of the scatterers in the line of sight of the SAR antenna.
Over the ocean, a strong dependence on surface currents and wind-waves is expected. Unfortunately, the Sentinel-1 Level-2 RVL measurements are currently contaminated by the Doppler centroid (DC) frequency derived from the AOCS and by the antenna DC bias (electronic mispointing). This prevents the current version of the Level-2 processor from providing calibrated RVL estimates. We present here the status of the performance achieved from Sentinel-1 measurements in the OCN products and the foreseen way forward. The overall status is that the calibration of the RVL measurement suffers from insufficient spacecraft attitude knowledge at the time of generation of the OCN products. Analyses of ways to collect the attitude information with the required accuracy are ongoing. Some experiments were performed to achieve a better calibration, inter alia by:
- compensating some of the Doppler jumps observed during the activation of thermal compensation in the instrument,
- compensating variations of the DC along the orbit,
- ensuring continuity of the DC measurement along the data takes using land areas as reference points.
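Once the instrumental contributions above are removed, the surface radial velocity follows directly from the residual geophysical Doppler anomaly; a minimal conversion sketch for C-band:

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s
F_CARRIER = 5.405e9      # Sentinel-1 C-band carrier frequency, Hz

def radial_velocity(doppler_anomaly_hz):
    # Line-of-sight surface velocity from the geophysical Doppler
    # anomaly: v_r = lambda * f_dc / 2
    wavelength = C_LIGHT / F_CARRIER  # ~5.5 cm
    return wavelength * doppler_anomaly_hz / 2.0

v = radial_velocity(36.0)  # a ~36 Hz anomaly maps to ~1 m/s
```

The scaling makes clear why the calibration is demanding: at C-band, every uncorrected 36 Hz of attitude-induced Doppler aliases into roughly 1 m/s of apparent surface motion.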

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: Generation of accurate de-noising vectors for S-1 data: 10 years of activities

Authors: Andrea Recchia, Beatrice Mai, Laura Fioretti, Kersten Schmidt, Guillaume Hajduch, Pauline Vincent, Muriel Pinheiro, Antonio Valentino
Affiliations: Aresys, DLR, CLS, ESA, Starion for ESA
The Sentinel-1 first generation is composed of four satellite units developed in two batches. The first two units (S-1A and S-1B) were launched two years apart, in 2014 and 2016. The third and fourth units (S-1C and S-1D) will be launched between 2024 and 2025. Sentinel-1 is characterized by large-scale and repetitive observations, systematic production, and a free and open data policy, i.e., the mission is designed to acquire data globally and to systematically process and deliver products with a timeliness compliant with operational use. The S-1A and S-1B constellation performed nominally up to the failure of the S-1B spacecraft in December 2021. The Sentinel-1 satellites carry on board an advanced C-band Synthetic Aperture Radar instrument providing fast scanning in elevation and in azimuth to enable the implementation of the TOPSAR acquisition mode. The raw data acquired by the SAR instrument are packetized on board into the so-called Instrument Source Packets (ISPs), downlinked to the ground through dedicated ground stations and finally included in the SAFE Level-0 products that are available to users. The Level-0 products are then ingested by the S-1 Instrument Processing Facility (IPF), the sub-system of the S-1 Payload Data Ground Segment (PDGS) responsible for the generation of the Level-1 and Level-2 products. The SAFE L1 products, freely available to all users, provide high-resolution radar images of the Earth's surface for land and ocean services. The high coherence of the data is exploited for interferometric applications, whereas other applications rely only upon the image intensity. The latter, also exploiting the availability of polarimetric data (e.g., change detection), are gaining importance and reaching levels of performance similar to those based on optical images. Applications exploiting the data intensity to retrieve geophysical parameters, such as soil moisture or wind speed over the ocean, require calibrated SAR images.
Furthermore, for scenes with low backscatter, the instrument thermal noise level must be properly removed to obtain unbiased measurements. The S-1 IPF does not operationally perform the noise subtraction but provides, in the product annotations, the relevant information needed for the operation. The noise information is retrieved from dedicated pulses in the S-1 acquisition timeline. S-1 noise characterization and calibration is one of the many tasks of the SAR Mission Performance Cluster (SAR MPC), an international consortium of SAR experts in charge of the continuous monitoring of the S-1 instruments' status and of L1 and L2 product quality. The MPC is responsible for detecting any potential issue and for implementing the necessary actions (e.g., processor configuration file updates) to ensure that no data quality degradation occurs for the users [3]. One of the long-term activities of the SAR MPC has been improving the quality of the de-noising vectors annotated in the products to further improve the data quality. The present contribution will provide a summary of all the noise-related activities that have been performed since the launch of S1A:
• Several improvements have been introduced in the processing chain, including: the introduction of 2D vectors to capture the azimuth variation of the noise level in TOPSAR data, the proper normalization of the noise vectors for the different processing levels (SLC and GRD at different resolutions) and the introduction of noise pulse filtering in case of RFI contamination.
• The usage of TOPSAR rank echoes was introduced in 2018 to capture the observed dependency of noise power on the imaged scene. The noise power is about 1 dB higher over land than over the ocean due to the larger Earth brightness temperature. The operational usage of rank echoes in the noise vector generation allows the noise variations within long data takes to be tracked more accurately.
• Several calibration campaigns over data with very low backscatter have been performed over time to ensure that the generated noise vectors are correctly aligned with the data.
Over the last 10 years, the above-mentioned activities have led to several changes in the noise vectors annotated in the products. To make it possible for users to generate high-quality noise vectors for past data, two new tools are currently under development:
• A tool to generate the engineering products (L0N) needed to exploit rank echoes for the de-noising of products acquired before 2018 has already been developed.
• A tool to generate accurate de-noising vectors starting from L1 products, exploiting the updated algorithms and the latest calibration data, is currently under development.
These tools will be made available to the public to support the ad hoc generation of noise vectors for archive products. Acknowledgements: The SAR Mission Performance Cluster (MPC) Service is financed by the European Union, through the Copernicus Programme implemented by ESA. Views and opinions expressed are however those of the author(s) only, and the European Commission and/or ESA cannot be held responsible for any use which may be made of the information contained therein.
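As a rough illustration of what annotated de-noising vectors are used for, the sketch below interpolates a sparsely annotated noise vector to the pixel grid, subtracts it from the squared digital numbers and applies the calibration, clipping negative radiometry to zero. This is a simplified single-line model with illustrative variable names, not the IPF algorithm:

```python
import numpy as np

def denoise_sigma0(dn, noise_vals, noise_cols, cal_sigma0):
    # Interpolate the sparsely annotated noise vector to every pixel,
    # subtract it from the squared digital numbers, clip negative
    # radiometry to zero, then apply the sigma0 calibration values.
    cols = np.arange(dn.size)
    eta = np.interp(cols, noise_cols, noise_vals)   # densified noise vector
    power = np.clip(dn.astype(float) ** 2 - eta, 0.0, None)
    return power / cal_sigma0 ** 2

# Toy check: DN^2 = noise + sigma0 * A^2 with sigma0 = 0.5 and A = 10
dn = np.sqrt(np.full(8, 100.0 + 0.5 * 100.0))
s0 = denoise_sigma0(dn, noise_vals=[100.0, 100.0], noise_cols=[0, 7],
                    cal_sigma0=np.full(8, 10.0))
```

The clipping step matters for low-backscatter scenes: without it, noise over-subtraction produces unphysical negative backscatter.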

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: Development of an InSAR Phase Bias Correction Processor

Authors: Yasser Maghsoudi, Professor Andrew Hooper, Professor Tim Wright, Dr. Milan Lazekcy, Dr. Muriel Pinheiro
Affiliations: Department of Earth and Environmental Sciences, University of Exeter, Penryn Campus, COMET, School of Earth and Environment, University of Leeds, European Space Agency (ESA)
Phase bias in interferometric synthetic aperture radar (InSAR) can significantly impact the accuracy of ground displacement measurements, particularly in areas with dense vegetation or temporal decorrelation. Addressing this challenge, we developed and consolidated a correction strategy through a project funded by the European Space Agency (ESA), aiming at a universally applicable phase bias mitigation approach. The processor estimates bias terms using short-term wrapped interferograms, with calibration factors estimated from long interferograms, and applies these terms to correct interferograms over various acquisition patterns, including 6-day and 12-day intervals. The algorithm incorporates temporal smoothing constraints to handle gaps and missing interferograms, ensuring robust performance across diverse datasets. We applied the method to three study areas: the Azores (Portugal), Campi Flegrei (Italy), and Tien Shan (China). Results show that phase bias effects are significantly reduced, with corrected velocities closely aligning with benchmark estimates derived from the eigendecomposition-based maximum-likelihood estimator (EMI) phase-linking method. In the Azores and Campi Flegrei regions, characterized by dense vegetation and shorter acquisition intervals, our approach effectively mitigated apparent artifacts, such as false subsidence and uplift patterns, caused by phase bias. In Tien Shan, where a 12-day acquisition pattern is used, minimal correction was required due to reduced vegetation density and lower susceptibility to phase bias. We further evaluated the algorithm's robustness through an analysis of calibration parameters, demonstrating that slight variations in these parameters do not significantly affect the corrected velocities.
We also explored the selection of long-term interferograms to ensure minimal bias during parameter estimation, finding that interferograms with a temporal baseline exceeding 250 days provide reliable zero-bias references. This ESA-funded work represents a significant advancement in InSAR phase bias correction, offering a robust framework applicable to various terrains and acquisition patterns. The results hold relevance for advancing scientific understanding, improving applications in geophysical monitoring, and supporting policy and decision-making processes in hazard assessment and land deformation analysis.
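A standard diagnostic for this kind of short-baseline bias (not necessarily the processor's exact formulation) is the closure phase of interferogram triplets: for consistent wrapped interferograms the triplet combination wraps to zero, while a systematically non-zero closure is the signature of phase bias. A minimal sketch:

```python
import numpy as np

def closure_phase(ifg_12, ifg_23, ifg_13):
    # Wrapped closure phase of an interferogram triplet:
    # phi_12 + phi_23 - phi_13, re-wrapped to (-pi, pi].
    return np.angle(np.exp(1j * (ifg_12 + ifg_23 - ifg_13)))

rng = np.random.default_rng(0)
phi1, phi2, phi3 = (rng.uniform(-np.pi, np.pi, 100) for _ in range(3))

# Consistent (bias-free) interferograms: the closure vanishes.
c = closure_phase(phi2 - phi1, phi3 - phi2, phi3 - phi1)
```

In practice the spatially averaged closure over many triplets is what reveals the bias; adding a fixed offset to any one interferogram in the triplet shifts the closure away from zero.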

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: Auto-calibrated estimation of radial velocity for the Sentinel-1 TOPS mode

Authors: Geir Engen, Dr. Yngvar Larsen
Affiliations: Norce
The radial velocity (RVL) component of the Sentinel-1 OCN product is derived from a precise estimate of the Doppler centroid frequency. To achieve the required precision, it is currently necessary to use an internal SLC product processed to full azimuth bandwidth with a uniform azimuth window. This processing step is a computational bottleneck. Furthermore, for the IW and EW imaging modes, it is challenging to fully avoid azimuth filtering due to the spectral mosaicking procedure used in the TOPS-mode focusing algorithm. In this work, we present a method for estimating the RVL from an SLC with less stringent requirements on the processed azimuth bandwidth. The new algorithm requires that a sufficiently wide ideal bandpass or lowpass filter is applied, and that the extent of the ideal part of the filter is annotated with high precision. A narrower azimuth spectrum utilization has several advantages. First, for the TOPS mode, the extent of the burst overlaps in the azimuth direction can be significantly increased. When the overlaps contain sufficient data to provide reliable stand-alone Doppler centroid estimates, we can derive two independent estimates in each burst overlap. This can be exploited for data-driven auto-calibration of the Doppler centroid estimates, since it can be assumed that the geophysical Doppler does not change in the ~3 seconds between two consecutive bursts in the same swath. Furthermore, the negative impact of azimuth aliasing is reduced, simplifying the sideband correction procedure. However, if the bandwidth becomes too narrow, the precision of the Doppler centroid estimator is degraded. In this contribution, we explore the tradeoff between the utilized azimuth bandwidth and the statistical performance of the RVL estimation. In addition, we present the proposed auto-calibration approach using Doppler estimates from the burst overlap zones.
A long data take containing land at both its beginning and end will be used to evaluate the effectiveness of this approach. Finally, we demonstrate the advantages of a workflow starting directly from Level-0 data, optimizing the focusing algorithm specifically for the RVL estimation. This approach eliminates the need for the internal SLC product, leading to significantly improved throughput. In addition, we show that after specific tuning of the focusing algorithm, the artificial discontinuities between consecutive Doppler estimates in the azimuth direction observed in some Sentinel-1 RVL products are no longer present.
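The burst-overlap auto-calibration idea can be illustrated with a toy model: if the geophysical Doppler is identical across each overlap, the difference between the two independent estimates isolates the per-burst instrumental bias up to one free constant. A sketch under that assumption (illustrative only, not the proposed estimator):

```python
import numpy as np

def burst_biases(overlap_diffs):
    # overlap_diffs[i]: difference between the two independent Doppler
    # centroid estimates in overlap i (burst i+1 minus burst i). With
    # identical geophysical Doppler across each overlap, integrating the
    # differences recovers per-burst biases up to a common constant,
    # here fixed by forcing the biases to zero mean.
    b = np.concatenate([[0.0], np.cumsum(overlap_diffs)])
    return b - b.mean()

true = np.array([2.0, -1.0, 0.5, 1.5])  # per-burst instrument biases, Hz
diffs = np.diff(true)                   # what the overlaps can observe
est = burst_biases(diffs)               # equals true - mean(true)
```

The remaining constant is exactly why the abstract's land-tie approach matters: land areas of known (zero) geophysical Doppler pin down the absolute level that overlap differences alone cannot.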

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Session: B.01.02 Earth Observation accelerating Impact in International Development Assistance and Finance - PART 1

In this session, attendees will delve into an impact-oriented approach to accelerating the use of Earth Observation (EO) in support of international development assistance, including its integration in financing schemes. Presenters will provide in-depth insights into real-world application use cases across multiple thematic domains, implemented in developing countries in coordination with development and climate finance partner institutions. The session will prioritise examples showcasing the tangible impact on end-users in developing countries and the successful uptake of EO products and services by their counterparts. Counterparts here can be national governments or International Financial Institutions (IFIs), such as multilateral development banks (World Bank, ADB, IDB, EBRD) and specialised finance institutions (e.g. IFAD), as well as Financial Intermediary Funds (FIFs), most notably the large global climate and environment funds (GCF, GEF, CIF, Adaptation Fund). Attendees can expect to gain valuable insights into how the process of streamlining EO in development efforts is (1) opening new market and operational roll-out opportunities for the EO industry, and (2) translating into impactful change on the ground, driving sustainable development outcomes worldwide.

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: From Earth Observation Insights to Impact: GDA Water Resources for International Development Assistance

Authors: Eva Haas, Fabian von Trentini, PhD Beatriz Revilla-Romero, Èlia Cantoni i Gómez, Alexander Kreisel, Kerstin Stelzer, Jorrit Scholze, Tomas Soukup, Tomas Bartalos, Fränz Zeimetz, Seifeddine Jomaa
Affiliations: EOMAP GmbH & Co. KG, GMV, Gisat, GeoVille, Brockmann Consult GmbH, Gruner Stucky, Helmholtz Centre for Environmental Research (UFZ)
Water is one of the most vital resources for sustaining life on Earth. It is a habitat, can be a provider of energy or recreation, but is also a source of natural hazards. A changing climate and pollution diminish the availability of usable water and thus increase the potential for user conflicts. The successful implementation and monitoring of Integrated Water Resources Management (IWRM) initiatives, disaster risk reduction and good water quality require access to reliable data and information on water-related issues. There is growing awareness that EO data have the potential to serve these data needs, especially in the context of International Financing Institutions (IFIs) and Official Development Assistance (ODA), which normally operate in regions where policies and management decisions are often based on sparse and inconsistent information. While past programmes with IFIs were important to assess client requirements and demonstrate capabilities at different scales, the ESA GDA programme's main objective is to build on existing state-of-the-art services and transform them into meaningful pre-operational prototype solutions. As a result of the GDA Water Resources project, a targeted operational information basis was provided, enabling scale-up to support operations and analytics across the IFIs' work on water resources. The process of getting EO-based information developments into actual protocols and processing lines at the IFIs, starting with the WBG and ADB, has been successfully initiated.
The following case studies and real-world applications have been delivered through the GDA AID Water Resources consortium:
• Botswana: Groundwater resources monitoring and quantification
• Lake Victoria: Water quality management
• Pakistan: Integrated Water Resources Management
• Georgia: Supporting the set-up of a Hydro-Agro Informatic Centre
• Timor-Leste: Assessment of surface water extent variability and drought impact on agriculture
• Zambezi River: Enhanced water quality and discharge monitoring
• Peru: Sedimentation and discharge monitoring for reservoir lifetime assessments
• Cameroon: Sedimentation and discharge monitoring for reservoir lifetime assessments
• Mexico: Drought analyses and sustainable water management
• Uzbekistan: Digital monitoring of a reservoir's storage and water quality
This presentation will highlight tangible impacts on end-users in selected case studies and show how the successful uptake of EO products and services by the IFIs and national actors opens new markets and operational roll-out opportunities.

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Earth Observation for Proactive Desert Locust Management in East Africa

Authors: Koen De Vos, Roel Van Hoolst, Hadgu Teferi, Koen Van Rossum, Kasper Bonte, Laurent Tits
Affiliations: VITO
The desert locust remains one of the world's most destructive agricultural pests, with swarms capable of devastating cropland and natural vegetation over vast areas, heavily impacting local livelihoods and food security. Large locust outbreaks between 2019 and 2022 in Eastern Africa highlighted the need for innovative and transboundary strategies in pest monitoring and control, particularly as climate change is expected to increase the frequency and intensity of such events. ESA's Global Development Assistance (GDA) programme includes targeted Agile EO Information Development (GDA AID). To mitigate the recurring threat of locust invasions, GDA AID enabled the co-development of two EO-based services with the Intergovernmental Authority on Development (IGAD) in Eastern Africa, targeted at two distinct stages in the life cycle of the desert locust. By combining in-situ data from FAO's Desert Locust Hub with Sentinel-2 and Metop ASCAT satellite data in a MaxEnt model, we were able to identify environmental conditions linked to the presence of hoppers, the ground-bound stage of the desert locust. Specific soil moisture, soil texture, and air temperature conditions were identified as important indicators because of their relevance to the locusts' life cycle. The presence of minimal vegetation cover was found to be important as a food source for developing hoppers. Using the OpenEO platform, we produced hopper habitat suitability maps at 1 km resolution for the IGAD countries, Egypt, and the Arabian Peninsula, at regular 10-day intervals. This innovative approach allows for flexible adaptation to localized conditions and was co-created in agile development cycles with local stakeholders to ensure operational relevance.
This near real-time (NRT) service is prepared for integration into IGAD's East Africa Hazards Watch (EAHW) platform, thereby providing actionable insights that can empower regional governments and transboundary institutes to better allocate locust control measures. Simultaneously, we developed a tool to assess damage to crops caused by locust swarms. Time-series analysis of Sentinel-2 NDVI was combined with meteorological information and available locust swarm sightings to distinguish locust impacts from those of other harmful events (e.g., agricultural drought). By combining this tool with dedicated crop type information in a user-friendly platform, decision makers can further detail the impact on crop production and food security and set up mitigation measures for future planning.
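A first-cut version of such damage screening can be sketched as an NDVI anomaly test against a per-date climatology. This is an illustrative simplification, not VITO's operational tool; flagged dates would still need cross-checking against meteorology and swarm sightings, as described above:

```python
import numpy as np

def ndvi_anomaly_flag(ndvi_series, clim_mean, clim_std, z_thresh=-2.0):
    # Flag dates whose NDVI falls more than |z_thresh| standard
    # deviations below the per-date climatology: a crude indicator
    # of sudden vegetation loss.
    z = (ndvi_series - clim_mean) / clim_std
    return z < z_thresh

obs = np.array([0.55, 0.52, 0.20, 0.50])  # sudden NDVI drop at index 2
clim_mean = np.full(4, 0.52)
clim_std = np.full(4, 0.05)
flags = ndvi_anomaly_flag(obs, clim_mean, clim_std)
```

The attribution step (locust versus drought) is precisely what this simple z-score cannot do on its own, which is why the abstract combines NDVI with meteorological data and sighting records.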

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Combined use of EO, OSINT/SOSINT to develop new application products generating indicators of crisis and early triggers in fragile countries

Authors: Luisa Bettili, Annalaura Di Federico, Adriano Benedetti Michelangeli, Federica Pieralice, Nicola Pieroni, Chiara Francalanci, Annekatrin Metz-Marconcini, Alix Leboulanger, Anne-Lynn Dudenhoefer, Gerhard Backfried, Roel Van Hoolst, Alessandro Marin
Affiliations: e-GEOS, Cherrydata, DLR, Janes, Hensoldt-Analytics, Vito, CGI
It is recognized that fragile and conflict-affected countries face several bottlenecks in meeting the Sustainable Development Goal targets related to unmet basic needs. Fragility, conflict and violence already threaten to reverse development gains, and the situation stands to become even worse considering that the share of the extreme poor living in conflict-affected situations is expected to rise above 50% by 2030. In this context, a key role is played by the entities operating in international development assistance, supporting countries affected by conflict and fragility by providing the financing tools and knowledge needed to rebuild resilient institutions and economies, and remaining engaged during active conflict, recovery and transition. Against this background, ESA launched the GDA Fragility, Conflict, and Security project under ESA's Global Development Assistance Agile EO Information Development (AID) Programme. The initiative aims at integrating Earth Observation data into IFI (International Financial Institution) operations in fragile settings by developing new application products coupling EO and OSINT/SOSINT to generate indicators of crisis and early triggers, allowing for context analysis and situational awareness in a variety of fragility-, conflict- and security-related scenarios in developing countries. Altogether, EO combined with OSINT/SOSINT can contribute to building a complete information framework that allows a more organic, reliable and enhanced scenario analysis, under the assumption that coupling OSINT/SOSINT techniques with EO results in enhanced information by validating the respective outcomes, filtering the results and confirming the initial hypotheses against tangible complementary observations.
Several use cases were fruitfully developed and co-designed with the IFIs to address the needs of the World Bank (WB), the Asian Development Bank (ADB) and the International Fund for Agricultural Development (IFAD). In the food security domain, a use case was developed with the World Bank to support the emergency locust response programme in the eastern African regions, identifying egg-breeding sites and monitoring habitats using soil moisture and crop damage assessment maps. As far as situational awareness is concerned, three use cases were developed. In Cameroon, for the World Bank, information on major security events obtained by OSINT fed periodic Road Security Assessment Briefings. In Ukraine, land grabbing issues were analysed through a multidisciplinary approach that made extensive use of all available information deriving from EO and social media data to monitor land tenure changes and the ongoing conflict, identifying EO/OSINT indicators of land grabbing, expropriation, transactions and forced land abandonment. Finally, a scenario was designed with the Asian Development Bank at the border between Tajikistan and Afghanistan, combining HR and VHR data to provide information at different scales for checkpoint monitoring and monthly migration-flow monitoring, to consistently allocate Tajikistan's financial resources. In the asset, population and exposure domain, three use cases were developed. In Pakistan, a decision support system was designed, integrating Very High Resolution data and analytics to support a small-scale infrastructure reconstruction project funded by the World Bank. For the Cox's Bazar Analytical Program managed by the World Bank, a scenario was developed to estimate the effect of the displaced population on the local economy. The analysis focused on roads around the camps as well as multitemporal observation of changes in the settlement extent, as roads and built-up areas can be considered proxies of positive economic impact.
A use case was developed with IFAD, the UN International Fund for Agricultural Development, in Colombia, a country affected by conflict-related events and drug trafficking. For this use case, land use and land cover analysis and temporal changes, together with context-analysis indicators, supported the users in assessing the results of their policies related to the identification of coca crops and the assessment of yearly trends. IFAD was also supporting livestock migration monitoring in Sudan, to monitor the dynamics and impacts of conflicts between herders and farmers through EO-based analysis, assessing how the livestock routes interfere with agricultural areas. Exploiting SAR coherence from Sentinel data to detect terrain disruption by livestock, and combining the outcomes with optical-based analysis, provided a more comprehensive picture of the dynamics in terms of typical animal-track features such as texture and width, allowing them to be clearly distinguished from other kinds of tracks. To conclude, the integration of EO/OSINT data has proven very fruitful in some cases and challenging in others. Specifically, if the phenomenon to be investigated is small in size, VHR data are needed for adequate monitoring and analysis on both the OSINT and the EO side. The integration of these two sources offers undoubted added value to the decision maker and is a very powerful tool when applied to appropriate use cases and scenarios. Applications of this multi-source integration to scenarios related to wars, conflicts, terrorism, insurgency and crisis areas were proposed from the OSINT perspective. Likewise, satellite capabilities unlocked the potential to investigate complex and relevant phenomena, where there is broader scope and room for manoeuvre in data collection. Scenarios such as the war in Ukraine, the current conflict between Israel and Gaza, the crisis in Nagorno-Karabakh or the clashes in Sudan appear in this sense as potentially successful use cases.
Besides the IFIs, there is potential to extend the benefits of the products generated in the context of the ESA GDA Fragility, Conflict and Security project to other entities, such as UN organizations, local and FCS-related research entities, NGOs and local experts, for training and capability development on GIS and for the overall benefit produced by the EO/OSINT integrated products.

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: GDA-AID Marine Environment and Blue Economy: Advancing Sustainable Coastal and Marine Ecosystem Management in Cambodia, Indonesia, and Tunisia through EO

Authors: Dr. Giulio Ceriola, PhD Antonello Aiello, PhD Gamal Abdelnasser Allam Abouzied, PhD Tania Casal, Daniela Drimaco
Affiliations: Planetek Italia s.r.l., ESA
Coastal ecosystems are vital for maintaining biodiversity, protecting shorelines from erosion, supporting fisheries, and serving as natural buffers against storms and rising sea levels. The use cases presented here demonstrate the application of EO technologies to monitor and manage coastal and marine ecosystems in Cambodia, Indonesia, and Tunisia under the Global Development Assistance Agile EO Information Development (GDA-AID) program. Leveraging high-resolution datasets from the European Union's Copernicus program, including the Sentinel-1, Sentinel-2, and Sentinel-3 satellites, the EO-based services provide insights into ecosystem health, water quality, environmental drivers, and the impacts of anthropogenic pressures. The findings aim to support sustainable development, biodiversity conservation, and climate mitigation through actionable geospatial analysis. In Cambodia, the study supported the "Cambodia: Sustainable Coastal and Marine Fisheries Project" funded by the Asian Development Bank, whose objective is to identify and promote sustainable fishery and aquaculture practices with respect to the coastal environment. The activity focused on evaluating mangrove ecosystems across Kampot, Kep, Koh Kong, and Preah Sihanouk provinces. Mangrove ecosystems were analyzed using Sentinel-1 and Sentinel-2 imagery, with mapping conducted for the years 2017 and 2021. The analysis revealed significant mangrove loss, primarily attributed to anthropogenic pressures such as shrimp farming, charcoal production, urban expansion, and salt pan construction. Restoration efforts were constrained by inappropriate site selection and adverse environmental conditions. Key restoration challenges included high mortality rates in replanted mangroves and the unsuitability of target areas. The integration of EO data into future restoration planning is recommended to identify optimal restoration zones.
EO data were used to analyze key parameters such as salinity, sea surface temperature, chlorophyll concentration, turbidity, and wave height from 2016 to 2022. While these parameters displayed spatial and temporal variability, no consistent correlation with mangrove changes was identified. This highlights the complexity of ecosystem dynamics and the need for more detailed investigations combining EO data with socio-economic and biological factors. In Indonesia, the study supported the "Infrastructure Improvement for Shrimp Aquaculture" initiative led by the Asian Development Bank (ADB) and the Ministry of Marine Affairs and Fisheries (MMAF). The project focused on shrimp farming in the Lampung and Banten regions, integrating EO technologies to assess water quality, salinity, and the environmental impacts on shrimp ponds and adjacent coastal areas. Sentinel-2 imagery was used to generate turbidity maps at a spatial resolution of 10 meters. Forty-seven turbidity maps were derived for Lampung in 2021–2022, categorizing water quality into four turbidity ranges. Shrimp ponds with high turbidity were identified as potentially having poor water quality, which could negatively impact shrimp production. Copernicus Marine Environment Monitoring Service (CMEMS) data allowed us to evaluate sea salinity and sea level variations. The study revealed significant spatial and temporal variability in both, which emerged as critical factors influencing shrimp pond conditions. A potential-impact index was developed to assess the salinity levels of shrimp ponds based on sea level and salinity changes. Concerning the impact of shrimp farming on coastal areas, Copernicus Sentinel-3 data revealed seasonal eutrophication in some coastal areas of Lampung due to riverine nutrient inflow. Persistent nutrient enrichment in Jakarta Bay was attributed to urban and agricultural runoff.
Key indicators such as chlorophyll concentration, water transparency, and the trophic state index (TSI) were used to identify areas under environmental stress, with Jakarta Bay showing year-round high nutrient levels and Lampung displaying seasonal patterns linked to agricultural cycles. In Tunisia, the Gulf of Gabes served as a use case for evaluating EO technologies in the estimation of blue carbon. The region, known for its seagrass meadows and phytoplankton populations, was analyzed to assess ecosystem health and carbon sequestration potential. High-resolution Sentinel-2 imagery and historical data revealed a 5% decline in seagrass coverage between 2017 and 2022, corresponding to a loss of over 165 km². Despite this decline, the remaining seagrass meadows sequester approximately 1.35 million tons of CO₂ annually, emphasizing their critical role in climate mitigation. The loss of seagrass was linked to human activities, including coastal development, overfishing, and pollution, highlighting the need for targeted conservation policies. CMEMS data were used to map phytoplankton populations, which are vital for marine primary productivity and ecosystem health. The EO-based information obtained for the years 2018 and 2022 reveals clear seasonal patterns in net primary production (NPP) and phytoplankton concentration, with distinct temporal and spatial variations. The Gulf of Gabes consistently emerged as the most productive region in both years. At the same time, coastal waters within 12 nautical miles exhibited higher biomass densities than the more extensive EEZ area. Although the 2022 data showed higher NPP and biomass carbon compared to 2018, it is too early to determine whether this represents a long-term trend or a result of short-term environmental variability. Seasonal blooms were observed, driven by nutrient availability and climatic conditions.
The findings provide insights into the trophic dynamics of the Gulf of Gabes and the broader implications for fisheries and marine biodiversity. The use case underscores the need for integrating EO data into marine spatial planning to support biodiversity conservation and sustainable resource management. In the Gulf of Gabes, actionable insights from EO analyses can inform policies to mitigate seagrass loss, regulate nutrient loading, and preserve critical habitats. The findings and recommendations have been incorporated into the national government’s blue economy roadmap in Tunisia, developed with World Bank support. Across all regions, the GDA-AID Marine Environment and Blue Economy activity highlighted the transformative potential of EO technologies in ecosystem monitoring, particularly when combined with in-situ data for validation. The findings emphasized the need for tailored conservation strategies, informed by geospatial analyses, to mitigate anthropogenic pressures such as urbanization, agriculture, and unsustainable aquaculture practices. By supporting biodiversity, improving water quality, and enhancing climate resilience, EO technologies provide an invaluable resource for sustainable development in regions under increasing ecological stress. These methodologies are replicable in other coastal areas and can be scaled for global environmental monitoring and policymaking.
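The trophic state index mentioned above is typically derived from chlorophyll-a concentration. As a minimal sketch, assuming Carlson's (1977) chlorophyll-based TSI and commonly used class breakpoints (the abstract does not state which TSI variant or thresholds were applied):

```python
import math

def tsi_chlorophyll(chl_ug_per_l: float) -> float:
    """Carlson (1977) trophic state index from chlorophyll-a in ug/L."""
    return 9.81 * math.log(chl_ug_per_l) + 30.6

def trophic_class(tsi: float) -> str:
    """Map a TSI value to a trophic class (breakpoints assumed for illustration)."""
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"

# A chlorophyll-rich coastal sample (50 ug/L) classifies as eutrophic
label = trophic_class(tsi_chlorophyll(50.0))
```

Applied per pixel to EO-derived chlorophyll maps, a classification of this kind would yield the stress maps contrasting Jakarta Bay and Lampung.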
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Methodologies and Lessons Learned: Measuring the Impact of Earth Observation in Climate Action and Sustainable Development

Authors: David Taverner
Affiliations: Caribou Space
Introduction The integration of Earth Observation (EO) in international development assistance continues to unlock transformative opportunities for climate action & sustainable development, supported by programmes such as ESA’s Global Development Assistance (GDA). However, measuring and communicating that impact, particularly within multi-stakeholder collaborations involving organisations like ESA, International Financial Institutions (IFIs), and the EO service sector, requires innovative impact measurement approaches. This presentation highlights specific methodologies, best practices, and lessons learned from the GDA programme, including data synthesis, stakeholder alignment, and impact communication, offering actionable insights for EO programmes focusing on end-user adoption and long-term climate action & sustainable development. Methodologies, Best Practices, and Lessons Learned The presentation will highlight the following generalisable learnings that are of most benefit to a wide range of LPS audiences, including: 1. Evidence Landscape Mapping: Tools like impact literature reviews and evidence maps synthesise the existing knowledge base, showcasing where and how EO contributes to climate action & sustainable development. This approach fosters stakeholder alignment around shared objectives and directs efforts toward areas of greatest impact. 2. Theory of Change and Indicator Frameworks: A robust Theory of Change links programme outputs to longer-term outcomes, while indicator frameworks provide measurable pathways to assess progress. 3. Data Management and Synthesis: Efficient systems for managing and communicating indicators across consortium members and IFIs, such as the GDA Dashboard, enable end-to-end data collection and dissemination. 4.
Tracking Demand: Monitoring end-user EO adoption via procurement data (e.g., World Bank and Asian Development Bank databases, EARSC surveys) provides actionable insights into the level of mainstreaming of EO technologies over time. 5. Stakeholder Collaboration: Aligning indicators with external organisations, such as the World Bank, translates technical and scientific outcomes into tangible benefits for global sustainable development efforts. 6. Public Evaluations: Published evaluations, such as the GDA Midterm Evaluation, provide transparency and wider access to lessons beyond those directly involved. Highlights include that, to date, 167 Earth Observation Information Developments (EOIDs) have supported 83 IFI projects in 69 countries. 7. Communication of Impact: Bite-sized, interactive formats such as dynamic/online evaluations, dashboards and case studies enhance the accessibility of EO impacts, helping non-experts, including policymakers and governments, engage with actionable insights. Latest GDA Programmatic Results If desired by the LPS and GDA team, the latest results and findings from the newly drafted GDA Evaluation (planned for publication in winter/spring 2025) can be included, with updated information regarding EO usage and IFI financing alignment. Degree of Innovation This presentation introduces a distinct perspective within the LPS community, focusing on methodologies for measuring the impact of EO usage rather than the development of EO technologies themselves. By addressing the operationalisation of EO in large-scale, multi-stakeholder programmes, it advances the understanding of how EO technologies translate into tangible climate action & sustainable development outcomes. The innovation lies in its emphasis on cross-sectoral collaboration, practical impact measurement, and scalable methodologies, offering new insights and evidence into a sector and technology that is ripe for mainstreaming.
Technical Correctness and Validation Outcome All methodologies and results discussed in this presentation have undergone rigorous validation through practical application and multi-level review processes. These include: ● Quality assurance of input data from EO service providers. ● Review and critique by ESA team members, including Technical Officers. ● External validation by IFIs, ensuring alignment with institutional requirements and development objectives. Caribou Space’s decade-long expertise in impact measurement in the EO sector underscores the reliability and technical robustness of this impact assessment work. Relevance of Results For the EO Sector: An increased understanding of the importance of, and the means by which it is possible to, measure and communicate the impact of EO technological developments, in order to increase end-user adoption and ultimately procurement. For ESA: Understanding impact measurement methodologies, best practices, and lessons learned from GDA as an end-user-focused EO programme. Partnerships with non-EO Organisations: Highlighting principles for impact measurement in multi-stakeholder partnerships and cooperation between ESA, the EO service sector, IFIs and other development agencies. Conclusion The European, and global, EO industry has historically been focused on technological R&D and communication of its impact from a scientific perspective. However, as the industry, and ESA with it, evolves from a focus on technological R&D to end-user uptake and usage, measurement and communication of end-user impact, particularly within climate action & sustainable development, is a key pillar of success.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: The Geospatial Planning and Budgeting Platform (GPBP) use case within the GDA Climate Resilience ESA Project

Authors: Stefano Natali, Kai Kaiser, Ramiro Marco Figuera, Robert James Johnsen, Parvathy Krishnan Krishnakumari, Melih Uraz, Imanol Uriarte Latorre
Affiliations: SISTEMA GmbH, The World Bank
The Geospatial Planning and Budgeting Platform (GPBP, https://gpbp.adamplatform.eu/) represents a significant milestone in climate resilience efforts under the Global Development Assistance (GDA) Climate Resilience project, developed in collaboration with the European Space Agency (ESA) and the World Bank. This innovative platform addresses the pressing need for decision-support tools capable of screening climate conditions and their potential impacts on critical infrastructure across various sectors. The GPBP consists of two main modules. The Data-as-a-Service (DaaS) module integrates historical climate data from ERA5-Land and future projections from CMIP6 into country-specific Country DataCubes. These datacubes enable efficient analysis and data access tailored to national contexts. The Platform-as-a-Service (PaaS) module provides a processing API for climate change screening. This API enables users to assess potential disruptions to infrastructure by analyzing asset types, geographic footprints, thresholds, and climate variables such as wind speed, precipitation, and temperature. The API is accessible via multiple interfaces: a user-friendly web application with graphical representations of results and a Jupyter notebook integration for advanced analytics. Together, these features empower stakeholders to evaluate climate risks efficiently and adapt decision-making processes accordingly. In its current version (v0.4.0), GPBP allows users to conduct a complete climate change screening workflow over 24 countries worldwide, from asset information input to the export of the assessment results. The platform has been successfully demonstrated in various presentations and dissemination activities, showcasing its potential across sectors such as agriculture, insurance, and urban planning. Starting in January 2025, the platform will undergo a three-year development phase to enhance its functionality.
Planned upgrades include modularization, enabling third-party entities to integrate individual services into their own platforms, and the introduction of new features to scale up to wider domains. The GPBP exemplifies the growing demand for data-driven tools to address the challenges of climate change. Its adaptable framework offers a robust foundation for supporting decision-making in diverse scenarios, paving the way for enhanced climate resilience worldwide.
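The threshold-based screening described above can be illustrated with a short sketch; the function and report fields below are hypothetical stand-ins for illustration, not the actual GPBP API:

```python
import numpy as np

def screen_asset(series: np.ndarray, threshold: float) -> dict:
    """Count exceedances of an asset's disruption threshold in a climate series.
    Hypothetical helper for illustration; the real GPBP processing API differs."""
    exceed = series > threshold
    return {"n_exceedances": int(exceed.sum()),
            "fraction_exceeded": float(exceed.mean())}

# Daily maximum wind speed (m/s) screened against a 20 m/s asset threshold
wind = np.array([12.0, 25.5, 18.9, 30.2, 7.4, 21.0])
report = screen_asset(wind, threshold=20.0)
```

In practice such a screening would be run per asset footprint against the ERA5-Land history and CMIP6 projections held in the Country DataCubes.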
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Session: A.06.02 Enhancing Space Weather Understanding: Insights from LEO Satellite-Based Operational and Pre-Operational Products

Space weather and space climate refer to the interactions between the Sun and Earth over timescales ranging from minutes to decades. Predicting extreme space weather and developing mitigation strategies is crucial, as space assets and critical infrastructures, including satellites, communication systems, power grids, aviation, etc., are vulnerable to the space environment.

This session focuses on assessing the current status of the space weather forecast and nowcast products obtained from LEO satellite measurements, alongside other missions and ground-based technologies, and pushing forward with innovative concepts. We strongly encourage contributions that promote a cross-disciplinary and collaborative approach to advancing our understanding of space weather and space climate. Moreover, we welcome presentations that investigate the effects of space weather on diverse applications in Earth's environment, such as space exploration, aviation, power grids, auroral tourism, etc.

Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: Use of the Swarm ionospheric gradient product to model scintillation at high latitudes

Authors: Dmytro Vasylyev, Martin Kriegel, Mainul Hoque, Andres Cahuasqu, Jens Berdermann
Affiliations: German Aerospace Center (DLR)
The Global Ionospheric Scintillation Model (GISM) is planned to operate as an ionospheric scintillation modelling and prediction tool as part of the Ionospheric Monitoring and Prediction Center (IMPC) services at DLR. Currently, the GISM model only covers the low-latitude regions, and our efforts are focused on extending the model to high and polar latitudes before it can be used in the operational service. For this purpose, it is planned to use the recently developed method of phase gradient screens, which allows simulation of the refractive type of scintillation caused by scattering on strong ionospheric gradients [1]. The climatology of the required gradient field will be derived from the in-situ electron density measurements on board the Swarm satellites, which cover a period of 10 years of data collection. In this context, the method of empirical orthogonal functions has been used to relate the gradient values to the relevant driving parameters such as the solar flux index, solar wind coupling parameter, geomagnetic field strength, etc. We present some recent results on high-latitude scintillation modelling and validation studies. [1] D. Vasylyev et al., “Scintillation modeling with random phase gradient screens”, J. Space Weather Space Clim. 14, 29 (2024).
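The empirical orthogonal function step can be sketched as an SVD of the gradient anomaly field, followed by a least-squares fit of the leading principal components to the driving parameters. The data below are synthetic stand-ins for the Swarm-derived gradient field and driver time series, chosen only to make the sketch self-contained:

```python
import numpy as np

# Synthetic space-time field of ionospheric gradients (time x grid cells)
rng = np.random.default_rng(1)
n_t, n_s = 200, 50
drivers = rng.normal(size=(n_t, 2))       # e.g. solar flux index, coupling parameter
patterns = rng.normal(size=(2, n_s))      # spatial response patterns
field = drivers @ patterns + 0.1 * rng.normal(size=(n_t, n_s))

# EOF analysis: SVD of the anomaly matrix
anom = field - field.mean(axis=0)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt                                  # spatial patterns (EOFs)
pcs = U * S                                # principal-component time series

# Relate the leading PCs to the driving parameters by least squares
coef, *_ = np.linalg.lstsq(drivers, pcs[:, :2], rcond=None)
```

With a rank-2 signal plus weak noise, the first two EOF modes capture nearly all the variance, and `coef` encodes how each driver excites each mode.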
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: Use of satellite observations to study the effects of the large geomagnetic storms of 2024

Authors: Eelco Doornbos
Affiliations: KNMI, Royal Netherlands Meteorological Institute
A combined visualisation of various satellite constellation observations sheds light on the effects of large geomagnetic storms and the extent of the aurora. We have focused on the May 10/11 and October 10/11 storms of 2024. In particular, we have looked at thermosphere-ionosphere observations and field-aligned currents measured by the three Swarm satellites, ionospheric ultraviolet emissions from NASA's GOLD as well as the SSUSI instruments on two DMSP satellites, and visible emissions by the aurora from the Day/Night Band on the VIIRS instruments carried by three JPSS satellites. Combined with ground magnetometer and GNSS receiver total electron content data, this provides a large variety of perspectives on the storm-time dynamics. Geomagnetic storms are space weather events that result from the interactions between the solar wind and Earth's magnetosphere. They are characterised by global perturbations in measurements of Earth's magnetic field, resulting from large current systems in the magnetosphere, which via field-aligned currents in the auroral regions reach into the upper atmosphere and connect via ionospheric current systems. The field-aligned currents bring energetic protons and electrons into the thermosphere-ionosphere, where they cause the aurora. But the storm-time current systems have other effects as well. They result in localised high-latitude heating of the thermosphere, exceeding the day-side heating due to solar EUV irradiation, and thereby completely altering the global thermosphere dynamics. The energy is globally redistributed from auroral latitudes to other latitudes via large-scale waves, resulting in a global expansion of the neutral upper atmosphere and greatly increased LEO satellite drag at fixed altitudes. For the Swarm satellites, the May and October storms resulted in the largest measured peak drag accelerations since the start of the more than 10-year mission, a factor of 2 larger than during previous storms.
Further effects of storms on the thermosphere-ionosphere also include complex dynamics, resulting in regions of enhanced and depleted electron densities and large electron density gradients, affecting radio signal propagation, including those used by Global Navigation Satellite Systems (GNSS) services, and reducing the reliability and availability of satellite navigation augmentation systems. Finally, the magnetic field fluctuations can create geomagnetically induced currents, which in extreme cases have been known to cause stability issues or even permanent damage to high voltage power grid infrastructure. Because of their rare occurrence, very large geomagnetic storms have been difficult to study. For example, based on ground magnetometer observations, the geomagnetic storms of 1859 and 1921 were most likely much stronger than any so far observed during the space age. Anecdotal evidence of auroral sightings from very low latitude locations during these and even older events is part of the traditional narrative warning of possible space weather impacts in highly populated lower latitude locations, should such an extreme storm reoccur in our modern technology-driven society. During the strong, but not extreme, May and October storms, there were also many eyewitness accounts, as well as photographic evidence, of aurora visible from mid to low latitudes. The recent satellite observations of the 2024 storms by VIIRS, SSUSI, GOLD and Swarm help to put these observations into context, as they prove that strong auroral emissions occurred overhead down to at least +/-45 degrees quasi-dipole magnetic latitude as well as up to 1300 km in altitude. It seems that extreme heights of auroral emissions during larger storms can play a significant role in lower latitude visibility.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: ESA's Distributed Space Weather System - Missions and Data

Authors: Melanie Heil
Affiliations: European Space Agency
ESA's Space Safety Programme aims at protecting space and ground assets against adverse effects from space. The Space Weather Segment is focussing on such effects due to the activity of our Sun. Monitoring of the Earth's and Sun's environment is an essential task for the now- and forecasting of Space Weather and the modelling of interactions between the Sun and the Earth. Due to the asymmetry and complexity of Earth's magnetosphere, the involved particle environment and its dynamics, it is necessary to capture the state of the magnetic field and the particle distribution in a sufficiently large number of sampling points around the Earth, such that it allows state-monitoring and modelling of the involved processes with sufficient accuracy and timeliness. ESA is implementing a space weather monitoring system, including the establishment of a Distributed Space Weather Sensor System (D3S) to observe the effects of solar activity within Earth's vicinity. D3S is a system of systems with a variety of mission types. Space Weather instrumentation for in-situ measurements is typically rather compact and of low resource need. This characteristic makes it easy to accommodate such instruments on spacecraft as a secondary payload. Hosted payload missions, in which an ESA-provided space weather instrument is flying on a mission managed outside of the Space Safety Programme, are a cost-effective way to address individual measurement requirements. Currently, hosted payload missions are implemented on GEO-KOMPSAT-2A, EDRS-C, Sentinel-6, Hotbird 13F&G as well as MTG-I1, with a collaboration with EUMETSAT to extend radiation monitoring to all MTG and Metop-SG satellites. Hosted payload missions need to be complemented by dedicated space weather missions to achieve coverage of the D3S measurement requirements.
In particular, the wide span of observations to be performed in LEO and the data timeliness requirement driving the mission architecture in this orbit make dedicated missions the optimal solution. These missions could be performed on platforms spanning from nano- to micro-satellites with masses up to 200 kg. Current missions in implementation are Aurora, to provide continuous monitoring of both auroral ovals, and SWING, ESA's first space weather nanosatellite, to provide data on the ionosphere. A second nanosatellite mission is in preparation, as well as a GTO mission, called SWORD, to provide nowcasts of the radiation belts. The current configuration and planned implementation of D3S using hosted payloads, SmallSats and NanoSats will be presented.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: Comparing Thermospheric Density Variations from GRACE-FO and Swarm Missions During Low and High Solar Activity

Authors: Myrto Tzamali, Alexi Glover, Juha-Pekka Luntama
Affiliations: ESOC-ESA
In situ thermospheric densities from two key LEO missions, GRACE-FO and Swarm, which operate in near-polar orbits at similar altitudes, are provided by Delft University. GRACE-FO densities are derived from high-precision accelerometer measurements, while Swarm densities are calculated using GPS observations. This study analyses the densities provided by both missions during their overlapping operational period from 2018 to 2024, covering both low and high solar activity periods. This temporal coverage enables a comparison of residuals under different levels of solar activity. The analysis focuses on variations beyond the dominant orbital and diurnal periodicities. These primary periodicities are removed using Least Squares and Weighted Least Squares methods, facilitating the isolation of short-term variations. After the removal of these dominant periodicities, the residuals reveal patterns associated with equatorial disturbances, terminator effects, pole crossings, and geomagnetic storms. Equatorial signals are observed in the residuals of both missions, either in ascending or descending orbits; however, these signals are not consistent between the two missions. The dependency of residuals on local time is evaluated to investigate day-night variations and their impact on density perturbations. A comparison between POD- and accelerometer-derived densities is conducted, with particular focus on their ability to capture high-frequency density variations during geomagnetic storms. Results indicate that GRACE-FO densities detect disturbances even at Kp = 3 in mid- and high-latitude regions, whereas Swarm densities exhibit weaker responses under similar conditions. During moderate geomagnetic storms, density residuals for both missions can increase by up to three orders of magnitude due to significant disturbances in the thermosphere. 
A correlation analysis between geomagnetic indices, such as Hp30 and Kp, and the residual densities highlights the importance of high-cadence geomagnetic indices for accurately capturing short-term density fluctuations. The analysis is repeated with and without incorporating error information in the density measurements, and the residuals are compared with the standardized residuals to demonstrate the critical role of realistic uncertainties in improving the reliability of thermospheric density datasets.
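The removal of the dominant orbital and diurnal periodicities by least squares, as described above, can be sketched as a harmonic regression at the known periods. The density series below is synthetic and the amplitudes illustrative:

```python
import numpy as np

def remove_periodicities(t, y, periods):
    """Least-squares fit and removal of sinusoids at known periods (illustrative).
    Returns the residuals, i.e. the short-term variations."""
    cols = [np.ones_like(t)]
    for p in periods:
        w = 2.0 * np.pi / p
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coef

# Synthetic normalised density: orbital (~93 min) plus diurnal (24 h) signal, in hours
t = np.arange(0.0, 240.0, 0.05)            # 10 days sampled every 3 minutes
rng = np.random.default_rng(2)
y = (1.0 + 0.3 * np.sin(2 * np.pi * t / (93.0 / 60.0))
         + 0.2 * np.cos(2 * np.pi * t / 24.0)
         + 0.01 * rng.normal(size=t.size))
resid = remove_periodicities(t, y, periods=[93.0 / 60.0, 24.0])
```

After the harmonic terms are removed, only the noise-level short-term variations remain, which is where storm-time and terminator signatures would show up in the real data.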
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: How the ESA Swarm mission can contribute to Space Weather

Authors: Dr Anja Strømme, Enkelejda Qamili, Roberta Forte, Vincenzo Panebianco, Antonio De la Fuente
Affiliations: ESA, Serco for ESA
The Near Earth Space Environment is a complex and interconnected system of systems, and the home of a multitude of physical processes all contributing to space weather and space climate effects; hence, collaboration across traditional boundaries is essential in order to progress in our understanding of, and our capability to predict, Space Weather. The ESA Swarm Earth Explorer mission, launched on 22 November 2013, has completed a full solar cycle in orbit and is by its nature a true system science mission. After more than a decade in space, Swarm is still in excellent shape and continues to contribute to a wide range of scientific fields, from the core of our planet, via the mantle and the lithosphere, to the ionosphere and interactions with the solar wind. In 2023 a “fast” processing chain was introduced, providing Swarm Level 1B products (orbit, attitude, magnetic field and plasma measurements) with a minimum delay with respect to acquisition. In 2024 the generation of Swarm Level 2 products (Field-Aligned Current, Total Electron Content) was also implemented in the “fast” chain; these products are available through the Swarm dissemination server. In this presentation we will highlight the contributions the Swarm mission has had and continues to have for the space weather community through constantly evolving products and services, with a specific focus on the “fast” data, as these products add significant value in monitoring present Space Weather phenomena and help modelling and nowcasting the evolution of several geomagnetic and ionospheric events.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: Total Root Electron Content Obtained From Lightning-Generated Whistlers in the Extremely Low Frequencies From the Swarm Mission and Future NanoMagSat Opportunities

Authors: Martin Jenner, Pierdavide Coïsson, Gauthier Hulot, Louis Chauvet, Robin Deborde
Affiliations: Université Paris Cité, Institut de physique du globe de Paris, CNRS
Electromagnetic signals of opportunity propagating through the Earth's ionosphere can be used to measure its parameters. Strong lightning can excite whistler signals that are detected by the Swarm satellites during burst-mode (250 Hz sampling) acquisition campaigns of the Absolute Scalar Magnetometer (ASM). These acquisition campaigns have been conducted regularly since 2019, with an entire week of continuous burst-mode acquisition run every month on each of the Alpha and Bravo Swarm satellites. Electromagnetic propagation below the lower hybrid frequency of the plasma and above the gyrofrequency of the dominant positive ion is dispersed, and the temporal separation of the various frequency components of the lightning whistler signals reaching the satellite can be measured. The corresponding dispersion depends on both the properties of the plasma crossed and the propagation path of individual frequencies. We recently demonstrated [Jenner et al., 2024] that within this frequency range the propagation time is proportional to the integral of the square root of the plasma electron density, a quantity that we called Total Root Electron Content (TREC). This TREC can be recovered by relying on the measured whistler dispersion and computing their propagation path using numerical ray tracing through the climatological International Reference Ionosphere (IRI) and a dipolar magnetic field approximation based on the International Geomagnetic Reference Field (IGRF) model. The recovered TREC values have been validated using independent ionosonde data to infer the bottomside ionospheric profile and Swarm in-situ plasma densities from its Langmuir probes to constrain its topside. Swarm whistler detections are currently provided by the WHI Swarm mission Level 2 product, which we used to obtain such Swarm-derived TREC estimates. The random occurrence of exploitable whistler detections limits TREC availability mostly to low latitudes.
These are, however, regions where ionospheric dynamics are strong and ionospheric data from other sources are limited, making TREC a valuable parameter for future applications. In this presentation, we will present results obtained so far from the Swarm mission and discuss the additional enhanced opportunities that the ESA Scout NanoMagSat mission will provide by continuously acquiring magnetic scalar and vector components with a 2 kHz sampling from a constellation of three satellites to be launched at the end of 2027 at 545 km altitude, one on a polar orbit and two on 60° inclined orbits. References Jenner, M., P. Coïsson, G. Hulot, D. Buresova, V. Truhlik, and L. Chauvet (2024), Total Root Electron Content: A new metric for the ionosphere below Low Earth Orbiting satellites, Geophysical Research Letters, 51(15), e2024GL110559, doi:10.1029/2024GL110559.
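Once a propagation path is known, the TREC itself is simply the path integral of the square root of the electron density. A minimal numerical sketch, using a synthetic Chapman-like vertical profile in place of the ray-traced IRI densities:

```python
import numpy as np

def trec(ne, ds_km):
    """Total Root Electron Content: path integral of sqrt(Ne).
    Simple trapezoidal sum over a uniformly sampled path; the actual
    product uses numerical ray tracing through IRI/IGRF."""
    root = np.sqrt(ne)
    return 0.5 * ds_km * (root[:-1] + root[1:]).sum()

# Illustrative Chapman-like vertical profile (electrons per m^3, 1 km steps)
h = np.arange(100.0, 500.0, 1.0)
z = (h - 300.0) / 50.0
ne = 1e12 * np.exp(1.0 - z - np.exp(-z))
value = trec(ne, ds_km=1.0)
```

Because of the square root, TREC weights the lower-density topside more evenly than the ordinary TEC integral of Ne, which is what makes it recoverable from whistler dispersion.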
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Session: A.08.01 Advances in Swath Altimetry - PART 3

The NASA and CNES Surface Water and Ocean Topography (SWOT) Mission, launched in December 2022, is the first in-flight swath altimeter in orbit. The SWOT mission has revealed the capability of swath altimeters to measure ocean and inland water topography in an unprecedented manner. The onboard Ka-band interferometer (KaRIn) observes wide-swath sea surface height (SSH) with a sub-centimetre error. It is already unveiling the small mesoscale ocean circulation that is missing from current satellite altimetry. SWOT has already carried out a satellite calibration and validation (Cal/Val) campaign, including ground truth and airborne measurements.
ESA’s Sentinel-3 Next Generation Topography (S3NGT) mission is being designed as a pair of two large spacecraft carrying nadir-looking synthetic aperture radar (SAR) altimeters and across-track interferometers, enabling a total swath of 120 km, in addition to a three-beam radiometer for wet tropospheric correction across the swath, and a highly performant POD and AOCS suite.
With a tentative launch date of 2032, the S3NGT mission will provide enhanced continuity to the altimetry component of the current Sentinel-3 constellation, with open ocean, coastal zones, hydrology, sea ice and land ice, all as primary objectives of the mission.
This session is dedicated to the presentation of advances in swath altimetry, including airborne campaigns, and the application of swath altimetry to the primary objectives of the mission, i.e. open ocean and coastal processes observation, hydrology, sea ice and land ice. We also invite submissions for investigations that extend beyond these primary objectives, such as the analysis of ocean wave spectra, internal waves, geostrophic currents, and air-sea interaction phenomena within swath altimeter data.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Geophysical contents of the SWOT Doppler measurements; observables as complementary information to topography.

Authors: Pierre Dubois, Alejandro Bohe, Fabrice Ardhuin
Affiliations: CLS, CNES, LOPS/IFREMER
The Surface Water and Ocean Topography (SWOT) mission data product provides two separate Doppler measurements [Peral et al., 2024]. First, the topography (Sea Surface Heights, SSH) estimation process includes a Doppler centroid estimation (fractional part) using the pulse-pair estimation method [Zrnic, 1977], performed on-board on the raw data. This estimate (two values per swath, every 25 km along-track) is used during the on-board azimuth compression to re-center the Doppler spectrum approximately at 0 Hz. This is meant to maximize SNR, which minimizes SSH noise. The second Doppler measurement, the mitigation or “high-resolution” Doppler, is also estimated on board using a pulse-pair algorithm on the range-compressed data (yet before the on-board azimuth compression algorithm) on a resolution grid of 2x2 km. Here, the mitigation Doppler is processed and exploited for the first time to evaluate its intrinsic value in providing information about the sea state and its potential future use in improving the Sea State Bias correction and, as a result, the SSH data product. We modeled the platform and antenna contributions to the Doppler, the so-called Non Geophysical (NG) Doppler, and evaluated it with the use of orbit and attitude reconstruction. The Doppler corrected from this NG contribution represents the Geophysical Doppler. It is the contribution of the ocean surface velocities projected onto the radar direction. Ocean surface velocities include surface currents and a wave-induced velocity, which appears in radar data due to a correlation between local backscattering power and local orbital velocities of the waves [Chapron et al., 2005]. Because of the near-nadir looking geometry, the horizontal surface current has a small projected component in the radar direction; a 1 m/s surface current aligned in the radar direction gives a Doppler from 8 to 45 Hz, depending on the cross-track distance.
The wave-induced velocity contributes to the observed Geophysical Doppler with values typically ranging from 10 to 50 Hz in the cross-track direction [Nouguier et al., 2018]. For comparison, an error as small as 1 mdeg in the knowledge of the pitch of the platform, or of the antenna, already results in an error of about 60 Hz in the NG correction. In this presentation, we demonstrate that our NG correction is accurate enough to obtain Geophysical Doppler estimates dominated by geophysical content. To validate our NG correction, we perform statistical comparisons between our Geophysical Doppler estimates and theoretical predictions of the wave-induced Doppler computed from wave model data. At near-nadir radar incidences, the wave-induced Doppler has been shown to be determined by the local wind and Stokes current vectors [Nouguier et al., 2018], which are parameters directly available from ocean circulation/wave models (ECMWF, WW3, …). The conversion also involves a model for the derivative of the Normalized Radar Cross Section (NRCS) in Ka band with respect to radar incidence and azimuth angles, which we compute from Global Precipitation Measurement (GPM) mission data. Accumulating differences over a large number of SWOT passes allows us to assess the quality of the NG correction. Residual instrumental errors, mainly driven by the change in solar beta angle (the angle at which the sun shines on the spacecraft, which determines its thermal behavior), are characterized, and a strategy to calibrate them out is presented. The noise in the measurements is also characterized. With this statistical validation in hand, we then discuss the extent to which SWOT’s geophysical velocity estimates allow us to infer properties of the sea state and of surface currents. We find that regions of strong currents may exhibit signatures large enough for the data to put useful constraints on current maps.
Away from those regions, the velocity estimates may be dominated by the wave-induced contribution, which provides information on the direction of the wind relative to the radar direction, a quantity of geophysical value in itself and potentially an input for sea state bias correction. There is also theoretical evidence that the height/NRCS correlation that drives the electromagnetic bias is, at near nadir, primarily related to the long-wave orbital velocity variance [Chapron et al., 2001]. The long-wave orbital velocity dependence of the radar-observed wave-induced velocities may then provide a complementary estimate of the effect of sea state on SSH.
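The fractional Doppler centroid described in this abstract relies on the phase of the lag-one autocorrelation of the pulse train (the pulse-pair method [Zrnic, 1977]). A minimal sketch of that estimator on a synthetic signal; the PRF and tone frequency below are illustrative, not SWOT values:

```python
import numpy as np

def pulse_pair_doppler(z, prf):
    """Estimate the fractional Doppler centroid of a complex pulse train z
    sampled at the pulse repetition frequency prf (Hz), from the phase of
    the lag-one autocorrelation (pulse-pair method)."""
    r1 = np.sum(z[1:] * np.conj(z[:-1]))       # lag-1 autocorrelation
    return prf * np.angle(r1) / (2.0 * np.pi)  # phase -> frequency

# Synthetic check: a pure tone at 40 Hz observed at PRF = 4000 Hz.
prf = 4000.0
n = np.arange(2048)
z = np.exp(2j * np.pi * 40.0 * n / prf)
print(round(pulse_pair_doppler(z, prf), 3))    # → 40.0
```

Because only the phase of the autocorrelation is used, the estimate is unambiguous only within ±PRF/2, which is why it recovers the fractional part of the centroid.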
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Next generation mean sea surface with swath altimetry

Authors: Bjarke Nilsson, Ole Baltazar Andersen, Per Knudsen
Affiliations: DTU Space
Mean sea surface (MSS) references have been continuously refined as new nadir altimeters have been launched over the last 30 years. Even with the breakthrough of the second-generation SAR satellite altimeters, improvements in quality have been limited by multiple factors. First, conventional altimeters are locked to the nadir profile of the satellite, limiting either the spatial or the temporal resolution. Secondly, because of the profile sampling, the resolution is not isotropic: it primarily favors the north-south direction, while the east-west resolution is limited. Lastly, the ability to resolve the sea surface height from the return power has limited the vertical resolution of the suite of nadir altimeters. With the launch of the Surface Water and Ocean Topography (SWOT) mission in 2022, all of the abovementioned limitations have been challenged, with swath altimetry providing wide coverage, a low noise level and excellent cross-track resolution. Thanks to the dual antennas, small-scale two-dimensional sea surface features are resolvable at a resolution not previously possible. Including these high-resolution observations in the geodetic references reveals features in the ocean surface not previously resolvable. With the future launch of the next-generation topography Sentinel missions, the benefit of swath altimetry will only increase. Around one and a half years of global SWOT data are currently available. This is much shorter than the 30 years of conventional satellite altimetry used to produce the current MSS models. We therefore explore combining the long time series of the current MSS models, which determine the longer spatial scales, with the high-resolution SWOT data, which determine the finer scales. We present a new mean sea surface reference utilising the highest-resolution data currently available.
Our analysis shows a noise floor an order of magnitude lower than those derived using purely nadir altimetry, as well as being able to sample closer to the coast than ever before. Using an updated reference with the highest resolution data available will be of critical importance for oceanographic research, geodetic mapping and climate science.
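Combining a long-record MSS at long wavelengths with SWOT at fine scales can be pictured as a spectral blend. The 1-D profiles, 2 km posting and 100 km cutoff below are illustrative assumptions, not the authors' actual merging scheme:

```python
import numpy as np

def blend_profiles(mss_long, swot_highres, cutoff_km, dx_km):
    """Blend a legacy MSS profile (trusted at long wavelengths) with a
    SWOT-derived profile (trusted at short wavelengths) by swapping
    spectral content at a chosen cutoff wavelength."""
    n = len(mss_long)
    k = np.fft.rfftfreq(n, d=dx_km)              # spatial frequency, cycles/km
    low = k < 1.0 / cutoff_km                    # long-wavelength bins
    spec = np.where(low, np.fft.rfft(mss_long), np.fft.rfft(swot_highres))
    return np.fft.irfft(spec, n)

# Illustrative 1-D profiles: 2 km posting, 100 km cutoff wavelength.
x = np.arange(512) * 2.0                             # km
legacy = np.sin(2 * np.pi * x / 256.0)               # long-wavelength signal only
swot = legacy + 0.05 * np.sin(2 * np.pi * x / 16.0)  # adds resolved fine scales
blended = blend_profiles(legacy, swot, cutoff_km=100.0, dx_km=2.0)
```

Here the blended profile keeps the legacy model's long scales and inherits the 16 km feature only SWOT resolves; a real merge would work in 2-D and handle the transition band more gently than a hard cutoff.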
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Observations of swells resolved by SWOT’s HR mode and simulations of S3NG-Topo’s wave mode products

Authors: Louise Yu, Dr Alejandro Bohe, Dr Damien Desroches, François Boy
Affiliations: Centre National d'Etudes Spatiales
The study of ocean waves is important for understanding the energy distribution and fluxes within the ocean and with the atmosphere and the coasts. Swath altimetry, pioneered by the SWOT (Surface Water and Ocean Topography) mission launched in December 2022, offers the possibility to obtain concurrent 2-dimensional measurements of sea surface height and backscatter coefficient. In particular, SWOT's HR (High Rate) mode delivers such products at a 10-to-60-m pixel size in a swath of 120 km, derived from fully-focused SAR (Synthetic Aperture Radar) interferograms. This study focuses on the observation of swell regimes of wavelengths ~100 m and up through the HR mode of SWOT. While HR data are typically acquired by SWOT over hydrological targets and coastal areas and do not cover the ocean globally, several patches of HR data have been acquired over the open ocean, particularly during the Cal/Val phase of the mission. This offers a fantastic opportunity to examine how SWOT resolves swells down to such small wavelengths and to tackle interesting questions regarding wave physics. To this end, we confront SWOT’s data with simulations from the CNES radar simulator Radarspy and with results from analytical approaches. From the SWOT HR products, we compute 2-dimensional spectra of the concurrent measurements of ocean height and backscattered power, as well as cross-spectra between the two. These observables contain a wealth of information about the underlying wave spectrum and about wave physics. However, similarly to what happens for traditional SAR modulation spectra, a number of distortions caused by the observing mechanism (in this case, near-nadir interferometric SAR) make the interpretation of these spectra indirect. While these distortions are well understood and documented in the literature for traditional SAR, they remain to be accurately modelled for height spectra derived from interferometric SAR acquisitions.
Here, we discuss our effort to qualitatively understand and quantitatively predict these distortions and the main structures that they create in the spectrum, through rigorous analytical derivation and numerical simulations. The work presented here has several applications. First, it provides a model to invert the underlying wave spectrum from SWOT’s HR acquisitions. We show, both analytically and numerically, that spectra of HR data notably showcase harmonics of the original swell spectrum as well as a low-frequency component that results from convolutions of the swell spectrum with itself, and which should not be interpreted as actual wave energy in the underlying spectrum. The effect of the motion of the surface, leading among other effects to an exponential suppression of the wave energy in the along-track direction (usually referred to as the azimuth cutoff effect) is also accounted for in our analysis. Second, SWOT’s HR data presents a unique opportunity to shed some light on the hydrodynamical backscatter modulation phenomenon whereby the backscattered power is higher at the troughs of long-wavelength waves due to a lower local roughness induced by non-linear wave interactions. Indeed, at least for swells of wavelengths longer than say 100m, SWOT simultaneously measures the wave height and backscattered power along the wave profile. Constraining this physical height/backscatter correlation is an important step towards improving corrections of the so-called sea state bias, which affects sea surface height measurements in altimetry. However, some of the distortions due to the imaging mechanism also generate a correlation between the height and backscattered power measured by the instrument. In particular, velocity bunching caused by the surface motion typically induces correlations that completely dominate those from the hydrodynamical modulation and therefore needs to be accurately modelled in order to obtain useful information about the latter. 
We present our effort to tackle this issue through both an analytical approach and simulations, and a first application case on SWOT observations. Finally, this work helps prepare the future mission S3NG-T (Sentinel-3 Next Generation Topography), which will include a wave mode to deliver 2-dimensional wave spectra from observations at HR spatial resolution. In the last part of this talk, we present a few simulations of this wave mode using Radarspy and discuss the similarities and differences we can expect to see in the S3NG-T data compared to the SWOT spectra.
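The height/backscatter cross-spectrum central to this abstract, whose phase encodes where along the wave profile the backscatter modulation sits, can be computed directly with 2-D FFTs. A toy example with a monochromatic swell and a quarter-wavelength-shifted backscatter modulation (all numbers illustrative):

```python
import numpy as np

def cross_spectrum(h, s):
    """2-D cross-spectrum between collocated height (h) and backscatter (s)
    images; its phase gives the spatial shift between the two modulations."""
    H = np.fft.fft2(h - h.mean())
    S = np.fft.fft2(s - s.mean())
    return H * np.conj(S)

# Monochromatic swell, 8 cycles across a 128x128 scene; the backscatter
# modulation lags the elevation by a quarter wavelength.
ny = nx = 128
_, x = np.mgrid[0:ny, 0:nx]
k = 2 * np.pi * 8 / nx
h = np.cos(k * x)                              # elevation
s = np.cos(k * x - np.pi / 2)                  # shifted backscatter modulation
phase = np.angle(cross_spectrum(h, s)[0, 8])   # bin of the 8-cycle swell
print(phase)                                   # ≈ +pi/2
```

On real HR data the distortions discussed in the abstract (harmonics, self-convolution low-frequency lobes, azimuth cutoff, velocity bunching) overprint this simple picture, which is precisely why they must be modelled before interpreting the phase.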
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Sea State Bias In Wide Swath Altimetry

Authors: Samuel Osina, Frederic Nouguier, Alejandro Bohe, Fabrice Ardhuin
Affiliations: LOPS
Since the launch of SWOT in December 2022, astonishing bi-dimensional maps of highly resolved Sea Surface Height (SSH) have been revealed, showing the complex structure of the ocean and the multi-scale processes occurring at its surface. The highly resolved data provided by KaRIn offer a unique opportunity to monitor, study, and quantify such structures. One important source of error in altimetry, related to the presence of surface waves, is the so-called Sea State Bias (SSB), which induces a negative shift in the measured surface height of the order of several percent of the significant wave height (SWH). The SSB is the signature of various complex physical properties of ocean waves, among which their non-Gaussian nature and the fact that the local roughness (mean square slope) varies along the wave profile due to non-linear interactions between the waves (hydrodynamical modulation), inducing differences in the electromagnetic signal backscattered to the radar that bias the measurement (electromagnetic bias). In the context of conventional (or SAR) nadir altimetry, empirical corrections (essentially derived by tabulating differences between height measurements at cross-overs as a function of sea state parameters) are used to reduce the error due to SSB. The measurement principle used by KaRIn (interferometry) is significantly different from that of nadir altimetry and, as a result, the SSB created by waves could be different. Pre-launch studies using somewhat simplified wave physics (Gaussian waves and a simple model for the backscatter modulation) concluded that the SSB would be similar enough to the one affecting nadir altimetry to allow the empirical table derived from the AltiKa mission (also in Ka band) to be used as the initial correction (in the operational processing since launch). In this work, we revisit this issue using refined wave physics.
Since KaRIn’s processing uses a SAR compression to achieve its azimuth resolution, we pay particular attention to the impact of surface motion, which has recently been shown to be an important contributor to SSH errors on sensors using Doppler processing (Buchhaupt et al. 2021, Marié et al. 2024). We first derive an analytical model for the bias affecting an interferometric SAR instrument like KaRIn in the presence of waves, which are characterized by their four-dimensional (elevation, slopes and vertical velocity) joint probability density functions (PDFs). We use our model to compute the SSB as a function of cross-track distance and Doppler beam for a variety of sea state conditions (significant wave height, wind speed, wind direction, ...). This allows us to quantitatively investigate the contribution of instrument characteristics (PTR, pointing...) and wave physics ingredients to the SSB. Our analysis includes the effect of the non-linear interactions and couplings between the slopes and velocities of the large waves, while higher-order effects are left for future work. Specifically, we have derived and used the wave PDFs under second-order Eulerian and first-order Lagrangian approximations. We will present the results of these analyses, comparing and analyzing the dependences of the predicted SSB on cross-track distance, wind speed, wind direction and wave Doppler. By understanding the impact of each wave physics ingredient, we aim to progress toward a more complete SSB model that can be used for correction in the long term.
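The electromagnetic-bias mechanism at the heart of the SSB can be illustrated with a one-line toy calculation: when backscatter is stronger at wave troughs, the power-weighted mean elevation seen by the radar sits below the true mean sea level. This sketch assumes a simple linear backscatter modulation and is far simpler than the joint-PDF model described in the abstract:

```python
import numpy as np

# Uniformly sampled long wave over 10 full wavelengths.
x = np.linspace(0.0, 2000.0, 20000, endpoint=False)  # along-track distance, km
eta = 1.0 * np.sin(2 * np.pi * x / 200.0)            # elevation, m (1 m amplitude)
sigma0 = 1.0 - 0.05 * eta                            # backscatter: stronger at troughs (assumed 5 %/m)
ssb = np.average(eta, weights=sigma0)                # power-weighted mean elevation
print(ssb)   # → -0.025 m: a 2.5 cm low bias although the true mean is zero
```

The bias scales with the height/backscatter covariance, consistent with the abstract's statement that the SSB is of the order of a few percent of the significant wave height.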
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Improving the assimilation of SWOT into Mercator Ocean International global forecasting system

Authors: Ergane Fouchet, Mounir Benkiran, Elisabeth Remy, Pierre-Yves Le
Affiliations: Mercator Ocean International
Ocean analyses and forecasts are crucial for a wide range of applications, including maritime transport, offshore operations, weather prediction, risk assessment, resource management and protection of marine biodiversity. The accuracy of ocean model predictions heavily depends on data assimilation processes, which in turn depend on in situ and satellite observations. A clear understanding of the physical content of both models and observations, along with precise estimation of their respective errors, is critical for improving predictive capabilities. For over 30 years, sea level anomalies (SLA) from conventional nadir altimetry have been widely used to constrain ocean models through assimilation. The Surface Water and Ocean Topography (SWOT) mission, launched in 2022, builds on altimetry techniques to produce unprecedented high-resolution, two-dimensional mapping of sea surface height across the global oceans. These observations allow for new detailed spatial analyses of mesoscale to submesoscale ocean processes. Beyond the breakthrough in spatial resolution and coverage, the mission’s 3-month Cal/Val phase, characterized by a one-day repeat cycle, has also provided valuable insights into high-frequency ocean variability from a temporal perspective. A frequency analysis of the fast-sampling phase demonstrated the presence of both balanced and unbalanced internal tide residuals within the SWOT L3 SLA product, which can dominate the mesoscale signals in certain regions. As tidal processes are either not represented in the model or cannot be constrained through data assimilation, they should be treated as red noise, and the observation error in the system must be adapted accordingly. The Mercator Ocean assimilation system has been adapted to fully leverage the potential of SWOT measurements in the global 1/12° forecasting model.
The first tests of SWOT KaRIn data assimilation (1-day and 21-day phases) have shown a significant improvement in the accuracy of ocean analyses and forecasts. The objective of this study is to enhance the assimilation of SWOT SLA in preparation for its operational integration into the Copernicus Marine Service. This requires a precise understanding of how assimilating high-resolution and high-frequency data impacts the system. To address this, we evaluate the model's analyses and forecasts as a function of the temporal resolution of the data and of the observation error, particularly when internal tide residuals are finely characterized.
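The role of the observation-error setting discussed above can be seen in the simplest possible (scalar) analysis update: inflating the observation-error variance R to absorb unresolved internal-tide residuals reduces the weight given to the SWOT innovation. A sketch with illustrative, assumed variances:

```python
def scalar_analysis(xb, y, b_var, r_var):
    """One-variable analysis: background xb (variance b_var) updated with
    observation y (error variance r_var)."""
    k = b_var / (b_var + r_var)      # Kalman gain
    return xb + k * (y - xb)

# Same 10 cm SLA innovation, two observation-error settings.
xb, y = 0.0, 0.10                                # metres
a_trusted = scalar_analysis(xb, y, 1e-4, 1e-4)   # R matched to B  -> 0.05
a_inflated = scalar_analysis(xb, y, 1e-4, 4e-4)  # R inflated 4x   -> 0.02
print(a_trusted, a_inflated)
```

In the operational system the errors are correlated in space and time (hence "red noise" rather than a single variance), but the same principle governs how strongly SWOT pulls the analysis.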
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Retrievals of Internal Solitary Wave Amplitudes from SWOT KaRIn observations

Authors: José da Silva, Jorge Magalhaes, Armel Bosser, Renan Huerre, Carina de Macedo, Ariane Koch-Larrouy, Chloe Goret, Camila Artana, Michel Tchilibou, Simon Barbot, Souley Diallo
Affiliations: University of Porto, CIIMAR, Instituto Nacional de Pesquisas Espaciais (INPE), Université Claude Bernard Lyon1, CECI, CERFACS
The Earth's climate is closely linked to the global ocean overturning circulation, which is driven by differences in density and includes deep convection in specific areas, connecting surface circulation with deep currents. However, the deep convection observed in the ocean is not enough to maintain the global overturning circulation; a huge amount of mechanical mixing of the colder, denser deep waters with the warmer, less dense surface waters is also required. One of the traditionally accepted mechanisms for this mixing is internal waves, which induce vertical fluxes of heat and other physical properties of seawater. Therefore, to understand the global overturning circulation and parameterize ocean circulation models for predicting future climate scenarios, it is crucial to have a detailed understanding of internal waves. A special class of internal waves, usually referred to as Internal Solitary Waves (ISWs), is typically observed in satellite images and in situ data to have time and space scales comparable to those of many other ocean processes. For instance, they can propagate across basin-scale distances and yet display a highly turbulent character in their propagation, meaning their energetics could naturally link the larger scales of the global overturning circulation with the smaller scales of turbulent motion. In addition, ISWs provide (by far) the ocean's largest vertical velocities (up to 1 m/s) over vertical scales exceeding one hundred meters, producing intense localized mixing. The satellite sensor that has excelled at ISW observations is the Synthetic Aperture Radar (SAR), for various reasons that include high spatial resolution and operation in all weather conditions. However, sea surface height, being proportional to the ocean pressure field, is more directly relevant for studying ocean interior dynamics than the surface roughness captured by standard SAR systems.
The new SWOT mission carries two advanced Ka-band SAR interferometer (KaRIn) antennas separated by a 10-meter mast in orbit, providing the first two-dimensional, high-resolution and low-noise measurement from space of surface water elevations owing to ISWs. Instrumental noise for a footprint 3 km wide has a standard deviation σ_3km ≤ 0.40 cm (Chelton, 2024), which is much lower than that of traditional and SAR altimeters. KaRIn measurements over a swath of 120 km (with a 20 km nadir gap that is sampled at coarse resolution by a conventional along-track altimeter) are converted into image pixels of both surface elevation and roughness at 250-meter resolution over the ocean. In theory, the coherent elevations measured by KaRIn allow inference of currents and internal wave amplitudes at global scale. We will demonstrate how SWOT KaRIn may be used to measure thermocline displacements and ISW current fields, making simultaneous use of ocean surface topography and radar backscatter. With knowledge of the vertical density stratification, for example from the Argo (Array for Real-time Geostrophic Oceanography) program, and resorting to fully-nonlinear, easy-to-use models such as the Dubreil-Jacotin-Long (DJL) model (Long, 1953), we develop a method to retrieve wave amplitudes from KaRIn measurements. The method is based on both surface elevations measured as sea surface height anomalies (ssha) and modelled internal amplitudes derived from the density stratification. For the ocean region off the Amazon shelf, in the tropical Atlantic Ocean, ISW amplitudes retrieved by our method can exceed 120 meters, in agreement with independent measurements. In situ measurements recently obtained in the same study region within the framework of the international project AMAZOMIX, including deployed equipment (ADCP and thermistor-chain moorings) spanning more than a year, help support the methodology presented in this paper.
Furthermore, we provide evidence that the ssha is not correlated with surface roughness (sigma0), a major concern that needs clarification before proceeding to operational use.
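As a rough order-of-magnitude check of the amplitudes quoted above (not the DJL-based retrieval itself), a two-layer scaling relates the surface expression of an internal wave to the interface displacement through the relative density jump across the thermocline. All numbers below are assumed for illustration:

```python
# Two-layer scaling: surface expression ~ (drho/rho0) x interface displacement,
# so the interface displacement is the measured ssha divided by drho/rho0.
rho0 = 1025.0    # kg/m^3, reference sea-water density (assumed)
drho = 2.0       # kg/m^3, density jump across the thermocline (assumed)
ssha = 0.25      # m, surface elevation anomaly over the ISW (assumed)
amplitude = ssha / (drho / rho0)   # interface displacement, m
print(round(amplitude, 1))         # → 128.1
```

A surface signal of a few tens of centimetres thus maps to a thermocline displacement of order 100 m, consistent with the >120 m amplitudes reported for the Amazon shelf region; the actual method uses the fully-nonlinear DJL solution rather than this linear scaling.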
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: SWOT - a new global ocean radar imager for air-sea interaction applications in synergy with present and future ESA ocean SAR missions

Authors: Justin Stopa, Doug Vandemark, Ralph Foster, Alexis Mouche, Paco Lopez Dekker, Bertrand
Affiliations: The University Of Hawai`i At Manoa
It has been known since the early SEASAT synthetic aperture radar (SAR) mission that radar imaging of the sea surface can reveal a wide range of air-sea interaction processes, with applications that span from search and rescue to ocean wave and weather prediction. A new joint NASA/CNES imager aboard the Surface Water and Ocean Topography (SWOT) satellite is now providing the widest global ocean coverage of any Earth-observing radar system yet launched. This study describes the central characteristics of ocean radar backscatter imagery collected using SWOT's primary ocean sampling mode, including the scope of oceanic, atmospheric, and air-sea interaction phenomena that the radar can resolve. SWOT Ka-band interferometric SAR images differ substantially from previous satellite SAR measurements in four key respects: spatial resolution (500 m), operating frequency (36 GHz), and incidence angle (near-nadir). The final difference is spatial coverage, and this is arguably the most important. SWOT imagery is provided continuously along the satellite track over a nearly 120 km swath, opening new opportunities to systematically investigate sub-mesoscale air-sea interaction embedded within synoptic weather systems as well as over regions of strong ocean-atmosphere exchange such as western boundary current systems. We will illustrate and discuss the benefits and limitations of SWOT through a comparison between coincident SWOT and Sentinel-1 C-band SAR WV-mode imagery. It is expected that image interpretation of wind-wave signatures is simplified using these low-incidence-angle Ka-band data. We will show that SWOT offers several new capabilities to the Earth observing system, and provide a first list of potential applications using this new sensor.
We will also address how SWOT data may both complement and extend ESA SAR mission datasets, including Sentinel-1 and Harmony, to advance the understanding of submesoscale air-sea interaction processes.
Add to Google Calendar

Tuesday 24 June 14:00 - 16:15 (Hall G2)

Session: C.05.06 Status ESA Mission development: National Programmes managed by ESA - PART 1

The status of development of ESA missions will be outlined.
Across four sessions of 1h30 each (the equivalent of a full day), participants will be offered the unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch), and the status of mission development activities will be presented together with industrial/science partners.

Speakers:


  • G. Costa - ESA
  • V. Faccin - ESA
  • R. Lanari - CNR/IREA
  • M. Manunta - CNR/IREA
  • A. Taramelli - ISPRA
  • L. Sapia - ESA
  • E. Cadau - ESA

Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall N1/N2)

Session: D.05.05 CDSE User Review Meeting - Navigating the Copernicus Data Galaxy: Insights and Innovations from the Copernicus Data Space Ecosystem

We invite all users to join us to explore key achievements, identify upcoming opportunities, and discuss how user feedback drives the continuous enhancement of Earth observation capabilities. In this session, we present insights and Earth observation trends from the ESA perspective, and the future path forward for the Copernicus Data Space Ecosystem.
This retrospective and forward-looking discussion will highlight key milestones, recent developments, and upcoming innovations aimed at empowering users worldwide with advanced Earth observation data.
Join us for an in-depth session exploring the evolution, opportunities and future trajectory of the Copernicus Data Space Ecosystem.

Presentations and speakers:


Keynote by ESA and European Commission


  • ESA and European Commission

Copernicus for water monitoring - Ocean Virtual Laboratory


  • Fabrice Collard - OceanDataLab

From Sentinel-1 mosaics to VHR imagery: New data sources and downstream data products in CDSE


  • András Zlinszly - Sinergise

Keynote by EEA


  • Matteo Mattiuzzi - EEA

Keynote by EC-JRC


  • Peter Strobl - EC-JRC

Low-Cost, High-Impact: Advanced Copernicus Data Analysis with openEO on the Cloud


  • Jeroen Dries - VITO Remote Sensing
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Session: A.03.04 Model-data interfaces and the carbon cycle

The increasing provision of the synergistic observations from diverse and complementary EO observations relevant to the carbon cycle underlines a critical challenge in generating consistency in multi-variate EO datasets whilst accounting for the differences in spatial scales, times of acquisition and coverage of the different missions. It also entails the requirement to improve models of the carbon cycle to ensure they can fully exploit the observation capabilities provided by both EO data and enhanced global ground networks. This implicitly means increasing the spatial resolution of the models themselves to exploit the spatial richness of the data sources as well as improving the representation of processes, including introducing missing processes especially those for describing vegetation structure and vegetation dynamics on both long and short timescales, while ensuring consistency across spatial scales (national, regional, global).

Understanding and characterisation of processes in the terrestrial carbon cycle, especially with reference to estimation of key fluxes, requires improved interfaces between models, in situ observations and EO. It also requires research to ensure an appropriate match is made between what is observed on the ground, what is measured from space, their variability in space and time and how processes that explain this dynamism are represented in models and hence to allow the assessment of the impacts of scale in particular how processes, operating at fine scale, impact global scale carbon pools and fluxes. This implicitly involves a close collaboration between the Earth observation community, land surface and carbon modellers and experts in different disciplines such as ecosystems, hydrology and water cycle research.

This session is dedicated to progress in model-data interfaces and the appropriate coupling of EO observations of different types, processes and variables with in-situ observations and models to ensure the observations collectively and the models are consistent and compatible.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: A model-data fusion diagnosis for spatial distribution of biomass carbon and net biome production across the world’s largest savanna

Authors: Mathew Williams, Dr David Milodowski, Dr Luke Smallman, Dr Iain McNicol, Prof Kyle Dexter, Casey Ryan, Prof Gabi Hegerl, Mike O'Sullivan, Stephen Sitch, Dr Aude Valade
Affiliations: University of Edinburgh, University of Exeter, University of Montpellier
Southern African woodlands (SAW) are the world's largest savanna, covering ~3 M km2, but their carbon balance and its spatial variability are poorly understood. Here we quantify the dynamics of the regional carbon cycle, diagnosing stocks and fluxes and their interactions with climate and disturbance by combining Earth observations of components of the C cycle with a process model. We address the following questions: 1. How do fluxes and net exchanges of CO2 vary across the SAW region and covary with climate, fire, and functional characteristics? 2. How do carbon stocks and their longevity covary with climate, fire, and functional characteristics? 3. How does the data-constrained analysis of ecosystem C cycling compare to TRENDY land surface model estimates for the region? Using 1500 independent 0.5° pixel model calibrations, each constrained with local Earth observation time series of woody carbon stocks (Cwood) and leaf area, we produce a regional C analysis (2006-2017). The regional net biome production is neutral, 0.0 Mg C/ha/yr (95% confidence interval −1.7 to 1.6), with fire emissions contributing ~1.0 Mg C/ha/yr (95% CI 0.4–2.5). Fire-related mortality driving fluxes from total coarse wood carbon (Cwood) to dead organic matter likely exceeds both fire-related emissions from Cwood to the atmosphere and non-fire Cwood mortality. The emergent spatial variation in biogenic fluxes and C pools is strongly correlated with mean annual precipitation and burned area. But there are multiple, potentially confounding, causal pathways through which variation in environmental drivers impacts the spatial distribution of C stocks and fluxes, mediated by spatial variations in functional parameters such as allocation, wood lifespan and fire resilience. Greater Cwood in wetter areas is caused by positive precipitation effects on net primary production and on parameters for wood lifespan, but is damped by a negative effect whereby rising precipitation increases fire-related mortality.
Compared to this analysis, land surface models (LSMs) showed marked differences in the spatial distributions and magnitudes of C stocks and fire emissions. The current generation of LSMs represents savanna as a single plant functional type, missing important spatial functional variations identified here. Patterns of biomass and C cycling across the region are the outcome of climate controls on production and of vegetation-fire interactions which determine residence times, linked to spatial variations in key ecosystem functional characteristics. The C budgets generated in this analysis can also support more robust and observationally consistent national reporting in the SAW region for the Paris Agreement of the UNFCCC. The detailed resolution of the outputs, with locally valid functional characteristics, can enhance national CO2 emission factors for fire disturbance, for instance. Working closely with national agencies, these approaches could deliver Tier 3 estimates of national C budgets to support countries and climate action worldwide. The application of the SAW approach across the wider dry tropics will be discussed, noting biogeographical variations in diagnostics.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: Toward the development of coupled carbon and water cycle land data assimilation in the ECMWF Integrated Forecast System (IFS) by leveraging machine learning and new types of Earth observations

Authors: Sebastien Garrigues, Patricia de Rosnay, Peter Weston, David Fairbairn, Ewan Pinnington, Souhail Boussetta, Anna Agusti-Panareda, Jean-Christophe Calvet, Cedric Bacour, Richard Engelen
Affiliations: ECMWF, CNRM, Université de Toulouse, Météo-France, CNRS, LSCE
The CO2MVS Research on Supplementary Observations (CORSO) project aims at reducing the uncertainties in land carbon sink estimates, which represent the largest source of uncertainty in the global carbon budget. One of the objectives of CORSO is to consistently constrain water and carbon fluxes over land with the assimilation of microwave and solar-induced chlorophyll fluorescence (SIF) satellite observations in the Integrated Forecast System (IFS) developed at ECMWF. Assimilating satellite observations requires an observation operator to predict the model-simulated counterpart of the remotely sensed observation from the model fields. Machine learning (ML)-based observation operators are a good alternative to process-based models, which are generally computationally expensive, more complex and associated with large uncertainties over land. In this work, we present results on the assimilation at global scale of (1) the normalized backscatter at 40° from ASCAT onboard METOP-B and -C and (2) SIF derived from TROPOMI onboard Copernicus Sentinel-5p, using ML-based observation operators. The work consists of (1) developing an ML-based observation operator for each type of observation to predict the model counterpart of the satellite signal from the IFS model fields at global scale; (2) implementing the ML-based observation operators in the IFS to jointly analyse soil moisture and Leaf Area Index (LAI); and (3) evaluating the impacts on the forecasts of Gross Primary Production (GPP) and low-level meteorological variables (2 m temperature and humidity). Training databases, which consist of the IFS model fields collocated with the satellite observation at the spatial and temporal resolution of the satellite observation (25 km, daily for ASCAT; 10 km, 8-day for SIF), were produced to design each ML-based observation operator. Samples from orographic areas, snow, frozen soil and water bodies, for which the satellite signal is uncertain or difficult to interpret, were excluded.
Features were selected using process-based knowledge and explainability methods (SHAP) to identify the most influential features for the satellite signal. Gradient-boosted trees (XGBoost) and feedforward neural network (NN) models were tested. The IFS model fields used to predict ASCAT backscatter at 40° include soil moisture and soil temperature in the first 3 soil layers (up to 1 m depth) and Leaf Area Index (LAI). Latitude and longitude are included as additional features to represent local observation conditions. A NN with 4 hidden layers of 60 neurons each was trained over the 2016-2018 period and tested on 2019. For SIF, the predictors were selected from process-based knowledge of the SIF drivers at canopy scale, which include LAI, shortwave downwelling radiation, 2 m temperature and humidity, soil moisture, root-zone soil moisture, soil temperature, and the fractions of low and high vegetation. An XGBoost model was trained over 2019-2020, tuned on 2021 and tested on 2022. Both the SIF and ASCAT ML observation operators show good performance at global scale, with mean absolute errors within the expected instrument error. The spatial distributions of the satellite observations are accurately reproduced, as is their seasonal evolution. Performance is slightly lower for SIF than for ASCAT backscatter, indicating larger uncertainties and a lack of information content in the IFS model fields to accurately predict the SIF satellite signal at global scale. The prediction of SIF is generally more accurate over mid-latitude cropland and grassland, for which LAI and solar radiation are the main drivers of SIF at canopy scale. Lower correlations between predicted and observed SIF are reported for tropical rainforest (Amazon, Central Africa) and semi-arid regions (Central Australia).
The assimilation of SIF to update LAI in the IFS is conducted in two steps: (1) SIF is first assimilated in the offline Land Data Assimilation System (LDAS) to update the low and high vegetation LAI variables of the IFS; (2) the updated LAI variables are used in IFS forecast-only experiments to evaluate their impacts on the prediction of carbon fluxes (GPP) and low-level meteorological variables (2 m humidity and temperature). The assimilation of SIF provides realistic spatiotemporal patterns of low and high vegetation LAI increments, such as the enhancement of the greening of the Sahel region and Western Europe in spring. The updated LAI shows better agreement with the Copernicus satellite LAI product over Northern Eurasia and scattered regions in North and South America, Central Europe and Eastern and Southern Australia. Lower performance is obtained over tropical rainforest (Amazon) and sparse-vegetation regions, where the prediction of SIF by the ML observation operator is more uncertain. However, the magnitude of the produced increments is too low to have an impact on NWP and carbon flux forecasts. A possible reason is the lack of sensitivity of the observation operator to LAI. The assimilation of ASCAT is conducted directly in the IFS coupled experiments to update both soil moisture and LAI. The implementation of the observation operator in the IFS and the tuning of the data assimilation system (e.g. the cross-correlation between LAI and soil moisture) are ongoing, and results will be presented at the symposium. This work highlights the potential of ML techniques to quickly implement and evaluate the assimilation of new types of observation in NWP models. An important lesson learned is that evaluating the prediction performance of the observation operator is not sufficient: testing the observation operator in the data assimilation system is paramount to verify that it provides enough sensitivity to the analysed variable (here, LAI).
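As an illustrative sketch of the operator design described above (a feedforward NN with 4 hidden layers of 60 neurons, trained on one period and tested on a later one), the following uses scikit-learn on purely synthetic data; the feature set, target relation and all values are stand-ins for the real IFS training database, not the authors' implementation:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the IFS training database: 9 features mimic
# soil moisture and temperature in 3 layers, LAI, latitude, longitude.
n = 5000
X = rng.uniform(size=(n, 9))
# Toy backscatter target responding mainly to surface soil moisture and LAI.
y = 0.6 * X[:, 0] + 0.3 * X[:, 6] + 0.05 * rng.normal(size=n)

# Split by period, analogous to training on 2016-2018 and testing on 2019.
scaler = StandardScaler().fit(X[:4000])
model = MLPRegressor(hidden_layer_sizes=(60, 60, 60, 60),
                     max_iter=500, random_state=0)
model.fit(scaler.transform(X[:4000]), y[:4000])

pred = model.predict(scaler.transform(X[4000:]))
# Mean absolute error on the held-out period, to be compared against
# the expected instrument error, as in the abstract.
mae = mean_absolute_error(y[4000:], pred)
```

In the real system the learned operator replaces a process-based forward model inside the assimilation loop; the sketch only demonstrates the train-on-one-period, evaluate-on-another pattern.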
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: Fire carbon emission constraints from space-based carbon monoxide retrievals in the CarbonTracker data assimilation system: a case-study for the 2019 Amazonia dry season

Authors: Anne-Wil van den Berg, Joram Hooghiem, Maarten Krol, Guido van der Werf, Jost Lavric, David Walter, Hella van Asperen, Wouter Peters
Affiliations: Meteorology and Air Quality group, Wageningen University, Institute for Marine and Atmospheric Research, Utrecht University, Acoem GmbH, Multiphase Chemistry Department, Max Planck Institute for Chemistry, Department Biogeochemical Processes, Max Planck Institute for Biogeochemistry, Centre for Isotope Research, University of Groningen
Fires play a key role in the regional Amazonian carbon budget. Accurately quantifying their emissions is crucial for a better understanding of their role in the regional carbon cycle and for emission reporting, monitoring, and verification. Building on the work of Naus et al. (2022), who analysed fire carbon monoxide (CO) emissions in the Amazon between 2003 and 2018, we examine the 2019 dry season. The 2019 dry season was marked by a rapid increase in fire activity and deforestation rates compared to previous years. We show how space-based total column CO retrieval products (XCO from MOPITT, TROPOMI) can complement bottom-up fire carbon estimates and report on the timing, location, and strength of these remote sensing constraints on the 2019 Amazon fires. For the first time, we combine the new GFED5 (beta) fire emission dataset, which uses dynamic savannah emission factors and new higher-resolution burned area data, with XCO retrievals in an atmospheric inversion. We perform a two-step CO inversion using the CTDAS global multi-species inversion framework, with coupled CO/CO₂ budgets inside the TM5 transport model. In this framework, we separately target the different timescales of the budget components of CO and CO₂. The first step (i.e., long-window) constrains month-to-multi-year scale CO variations using flask measurements, and limited subsets of satellite data. The second step (i.e., short-window) is designed to use the high spatiotemporal resolution retrievals of XCO to capture the CO variability related to fire events. This approach allows us to integrate diverse types of datasets to better understand CO and fire emission variability on multiple timescales. Our inversions result in an estimate of the carbon fluxes of an exceptional Amazonian dry season and we quantify the emission contributions of fires in the savannah (Cerrado), tropical forests, and the transition region where deforestation dominates. 
References: Naus, S, L G Domingues, M Krol, I T Luijkx, L V Gatti, J B Miller, E Gloor, et al. “Sixteen Years of MOPITT Satellite Data Strongly Constrain Amazon CO Fire Emissions.” Atmospheric Chemistry and Physics 22, no. 22 (2022): 14735–50. https://doi.org/10.5194/acp-22-14735-2022.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: Novel Earth observation data-model fusion approaches reveal dominant role of woody debris in fire emissions in the Amazon and Cerrado

Authors: Matthias Forkel, Christine Wessollek, Vincent Huijnen, Niels Andela, Jos de Laat, Daniel Kinalczyk, Christopher Marrs, Dave van Wees, Ana Bastos, Dr. Philippe Ciais, Dominic Fawcett, Johannes Kaiser, Carine Klauberg, Erico Kutchartt, Rodrigo Leite, Wei Li, Carlos Silva, Stephen Sitch, Jefferson Goncalves De Souza, Sönke Zaehle, Stephen Plummer
Affiliations: TUD Dresden University of Technology, Royal Netherlands Meteorological Institute (KNMI), BeZero Carbon Ltd., Vrije Universiteit, Universität Leipzig, Laboratoire des Sciences du Climat et de l'Environnement, Swiss Federal Institute for Forest Snow and Landscape Research WSL, Klima- og miljøinstituttet NILU, University of Florida, Forest Science and Technology Centre of Catalonia (CTFC), University of Padova, NASA Goddard Space Flight Center, Tsinghua University, University of Exeter, Max Planck Institute for Biogeochemistry, European Space Agency, ESRIN
Emissions of greenhouse gases and air pollutants from wildfires are produced by the interplay of the chemical composition of vegetation fuels, fuel moisture, fire behaviour, and burning conditions. Established and operational Earth observation-based fire emission approaches over-simplify those processes by representing fuels in terms of simplified biome maps or by using fixed emission factors that relate the burned biomass to emissions of specific trace gases. Our aim was to better represent the complexity of fuels or burning conditions in the quantification of fire emissions by making use of several European Earth observation products. Therefore, we developed within the ESA-funded Sense4Fire project several approaches to quantify fire emissions. First, we adapted the Global Fire Atlas approach by taking active fire observations from VIIRS to map fire spread and size and to classify several fire types such as forest fires, savannah fires, deforestation (GFA-S4F). Second, we developed a satellite data-model fusion approach for fuels loads, fuel moisture, fuel combustion and fire emissions (TUD-S4F). TUD-S4F integrates Leaf Area Index time series from Proba-V and Sentinel-3, land cover and biomass maps from ESA-CCI, soil water index from ASCAT and burned area maps in a simple model of ecosystem fuel carbon and fuel moisture pools. Additionally, the model is calibrated against satellite observations of canopy height from GEDI, fire radiative energy derived from VIIRS, live fuel moisture content from VOD2LFMC, above-ground biomass (ESA CCI) and against databases of field and laboratory measurements on fuel moisture, litter loads, fuel consumption and emission factors. Unlike other fire emission approaches, TUD-S4F computes emission factors dynamically based on the chemical composition (i.e. lignin, cellulose, volatiles) of different fuels (litter, woody debris, herbaceous and woody biomass). 
Third, we made use of Sentinel-5P TROPOMI observations to estimate carbon monoxide (CO) and nitrogen oxide (NOx) emissions in a top-down approach in order to benchmark the GFA-S4F and TUD-S4F emission estimates (KNMI-S5p approach). In addition, we use the IFS-COMPO atmospheric chemistry model to simulate the transport and distribution of the fire emission estimates in the atmosphere and then validate the resulting CO fields against those observed by TROPOMI. We applied all approaches in the Amazon and Cerrado, in southern Africa, southern Europe, and in a region in eastern Siberia. Here we describe the results for the Amazon and Cerrado for the year 2020 (a large fire year) and for the year 2024 (an extreme fire year). The CO and NOx emissions from the GFA-S4F and TUD-S4F approaches agree well with the top-down estimate from S5p for the year 2020. For the main fire season from August to October 2020, CO emissions are 43.7 Tg in TUD-S4F and 41.6 Tg in GFA-S4F, both consistent with the KNMI-S5p estimate of 43.6 Tg (with an uncertainty estimate of 25%). Those uncertainties are much smaller than the range of emission estimates from other established fire emission approaches (27 to 49.7 Tg CO, 8 approaches) and dynamic global vegetation models (16.8 to 57.1 Tg CO, 3 models). In the extreme fire year 2024, GFA-S4F and TUD-S4F agree well with atmospheric fields of CO from Sentinel-5P in August, whereas CO emissions from the operational Global Fire Assimilation System (GFAS) strongly underestimate atmospheric CO. In September 2024, GFA-S4F and TUD-S4F emissions also show an increasing underestimation of atmospheric CO, which is, however, much smaller than that from GFAS. The uncertainties in fire emission estimates mainly originate from understorey forest fires and deforestation fires, while all approaches show higher agreement for savannah fires.
Using the TUD-S4F approach, we further investigated the contribution of different fuels to fire emissions: 75% of the total regional fire emissions over the Amazon and Cerrado originate from the burning of woody debris, with a higher contribution of woody debris in forest and deforestation fires than in savannah fires. A validation with field data shows that TUD-S4F can represent the biome-level spatial patterns in woody debris loads. Woody debris loads are the main factor affecting the spatial patterns of emission factors. From the computation of dynamic emission factors in TUD-S4F, we derive increasingly incomplete combustion, i.e. more smouldering fires, with increasing loads of woody debris. The statistical distribution of emission factors corresponds to the distribution reported from field and laboratory measurements. Based on those findings, we hypothesise that the underestimation of CO in September 2024 originates from an under-detection of low-temperature smouldering combustion that occurs after the initial burning detected by active fire observations. Our results emphasise how novel Earth observation approaches to fuel and fire dynamics and atmospheric trace gas observations reduce uncertainties in fire emission estimates and help to diagnose the representation of fuels, wildfire combustion and its effects on atmospheric composition in fire emission approaches and in global vegetation-fire models. Datasets of fire emissions from the approaches developed in Sense4Fire (TUD-S4F, GFA-S4F and KNMI-S5p) are available at https://sense4fire.eu/
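The idea of fuel-dependent, dynamically blended emission factors can be illustrated in a few lines; the emission-factor values below are invented for the example and are not the calibrated TUD-S4F numbers:

```python
# Fuel-dependent CO emission factors (g CO per kg dry matter burned).
# Illustrative values only: woody debris burns more by smouldering,
# so its CO emission factor is set highest.
EF_CO = {"herbaceous": 65.0, "litter": 90.0, "woody_debris": 130.0}

def fire_co_emissions(consumed_kg):
    """Total CO emissions (g) from per-fuel consumed dry matter (kg)."""
    return sum(consumed_kg[fuel] * EF_CO[fuel] for fuel in consumed_kg)

def effective_ef(consumed_kg):
    """Consumption-weighted CO emission factor for the fire as a whole."""
    return fire_co_emissions(consumed_kg) / sum(consumed_kg.values())

# Two hypothetical fires with the same total fuel consumption (1000 kg):
grass_fire = {"herbaceous": 900.0, "litter": 80.0, "woody_debris": 20.0}
slash_fire = {"herbaceous": 100.0, "litter": 200.0, "woody_debris": 700.0}
```

A woody-debris-dominated fire ends up with a higher effective CO emission factor than a flaming grass fire, which is the qualitative pattern the abstract reports.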
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: A Mechanistic Model-Data Approach to Understand the Global Pattern of the Ocean’s Biological Carbon Pump’s Transfer Efficiency

Authors: Anna Rufas, Samar Khatiwala
Affiliations: University Of Oxford
The ocean’s biological carbon pump (BCP) transfers large amounts of carbon from the atmosphere into the ocean’s interior, contributing to oceanic carbon sequestration. Through biologically mediated processes, the BCP generates sinking particles that transport carbon from the surface to the deep ocean (>1000 m), where it can be sequestered long-term. As anthropogenic CO₂ emissions rise, understanding how much of the BCP-generated particulate organic carbon (POC) flux reaches the deep ocean has become increasingly important. Despite significant advances in observational and modelling capabilities over the past decade, the mechanisms controlling the transfer efficiency of the POC flux to the deep ocean (Teff) remain poorly understood. Here, we integrate ESA’s satellite-derived surface ocean carbon data with a novel stochastic particle tracking model developed within the BCP framework, extending the satellite-based representation of surface carbon to the ocean interior. Our goal is to understand the marine particle dynamics, surface ocean ecosystem structure and environmental factors that control the global patterns of Teff. The model tracks discrete Lagrangian marine particles as they sink through the water column, mechanistically interacting with their biogeochemical environment (through processes such as phytoplankton photosynthesis, zooplankton grazing, egestion, heterotrophic remineralisation, dissolution and solubilisation) and with other particles (through aggregation and disaggregation). These particles represent various forms of biological material, including living and dead phytoplankton, zooplankton faecal pellets, dead zooplankton and combinations of those, which aggregate through sorption, aided by sticky transparent exopolymer carbon.
We validate the model at six data-rich ocean locations, using observations of three particulate tracers (POC, particulate inorganic carbon, and biogenic silica) and the vertical distribution of particle numbers by size class. The model successfully reproduces local patterns and is applied globally. Our results show that phytoplankton community composition and grazing dynamics significantly influence Teff, challenging the conventional focus on temperature as the primary control.
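A minimal stochastic sketch of the central quantity, Teff (the fraction of the export flux surviving to 1000 m), for particles with random sinking speeds subject to first-order remineralization. The parameter values are illustrative only; the full model described above adds particle types, aggregation, grazing and other interactions:

```python
import numpy as np

def transfer_efficiency(n_particles=10_000, w_mean=50.0, k=0.1,
                        z_export=100.0, z_deep=1000.0, seed=0):
    """Fraction of the POC flux at z_export surviving to z_deep for
    particles with lognormally distributed sinking speeds (m/day)
    and first-order remineralization at rate k (1/day)."""
    rng = np.random.default_rng(seed)
    w = rng.lognormal(mean=np.log(w_mean), sigma=0.5, size=n_particles)
    transit_days = (z_deep - z_export) / w
    # Carbon remaining on each particle after its transit, averaged
    # over the Monte Carlo ensemble.
    return np.exp(-k * transit_days).mean()

teff = transfer_efficiency()
```

Faster-sinking particles spend less time exposed to remineralization, so raising w_mean raises Teff; the study's point is that ecosystem structure (which sets particle types and sinking speeds) can matter as much as or more than temperature (which modulates k).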
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: Satellite-constrained Dynamic Global Vegetation Models for a Near-real time Global Carbon Budget

Authors: Mike O'Sullivan, Stephen Sitch, Jefferson De Souza Goncalves, Philippe Ciais, Ana Bastos, Myriam Terristi, Sonke Zaehle, Wei Li, Luiz Aragao
Affiliations: University of Exeter, LSCE, University of Leipzig, MPI BGC, Tsinghua University, INPE
Understanding the terrestrial carbon cycle is critical for assessing climate impacts and guiding mitigation strategies. This work advances dynamic global vegetation model (DGVM) simulations with near real time (NRT) capability to evaluate recent extreme events and their effects on carbon fluxes. By prescribing satellite-derived burned areas in DGVMs, we improve the representation of global fire emissions’ magnitude and trends while also capturing the full carbon cycle dynamics, including legacy emissions and subsequent regrowth—processes that products like GFED do not provide. Integrating satellite-based observations for burned area enhances model performance, reducing uncertainties in fire emissions and also improves simulated biomass stocks and trends. This work stems from several ESA projects (NRT Carbon Extremes, RECCAP2, and EO LINCS) and demonstrates our growing ability to constrain recent major events in the carbon cycle with EO-DGVM synergy. Here we focus on the 2024 extreme events in Brazil, which experienced unprecedented forest degradation carbon losses, driven by large-scale drought conditions and widespread fires. A key strength of our methodology is that DGVMs allow us to attribute flux anomalies to key processes, such as net primary productivity, soil respiration, and fire activity, as well as study post-disturbance dynamics. Unlike traditional national inventory approaches, this method provides rapid, low-latency insights into carbon flux variability after extreme events, offering a critical advantage for policymakers. By delivering timely inputs to frameworks like the Paris Agreement’s Global Stocktake and national greenhouse gas inventories (NGHGIs), this approach supports better tracking of climate mitigation progress. The integration of satellite observations into DGVMs bridges the gap between data and actionable policy, providing a comprehensive view of carbon losses, recovery, and trends. 
This work paves the way for operational NRT carbon monitoring systems, crucial for managing ecosystems and responding to extreme events. By combining process based models with Earth Observation (EO) data, we take a significant step forward in understanding and managing the terrestrial carbon cycle, supporting ambitious climate targets in an era of rapid environmental change.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall G1)

Session: F.05.10 50 Years of ESA/100 year Roy Gibson, Session - Roy Gibson - The Golden Age of EO

Session Agenda:
1. Welcome by Josef Aschbacher, Director General ESA
2. Message from Roy Gibson read by Volker Liebig, former Director EO, ESA, Institute of Space Systems, University of Stuttgart
3. Roy Gibson and Earth Observation by Stephen Briggs, Reading University, Department of Meteorology
4. Earth Observation Ground Breaking Science Discoveries by Maurice Borgeaud, Chair Earth Science Panel, European Space Science Committee
5. Discussion on what it means to continue the Golden Age of Earth Observation introduced by Simonetta Cheli, Director EO Programmes ESA

Speakers:


  • Dr. Josef Aschbacher - Director General ESA
  • Prof. Volker Liebig - Honorary Professor, Institute of Space Systems, University of Stuttgart, former EO Director, ESA
  • Prof. Stephen Briggs - Visiting Professor, Reading University, Department of Meteorology, Cambridge University, Department of Chemistry
  • Prof. Maurice Borgeaud - Chair Earth Science Panel European Space Science Council (ESSC), former Head of Science, Applications and Climate Activities, ESA
  • Dr. Simonetta Cheli - Director Earth Observation Programmes, ESA
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Session: A.02.02 Terrestrial and Freshwater Biodiversity - PART 2

Preserving the integrity and health of natural ecosystems, and the biodiversity they host, is crucial not only for the vital services they provide to sustain human well-being, but also because natural ecosystems with a high degree of integrity and diversity tend to exhibit elevated levels of productivity and resilience. The importance of safeguarding biodiversity is increasingly recognised in many Multilateral Environmental Agreements (MEAs), which all place great emphasis on the sustainable management, restoration and protection of natural ecosystems.

The pivotal role of ecosystems in maintaining ecological balance and supporting human well-being is a unifying theme in MEAs. Noting that, despite ongoing efforts, biodiversity is deteriorating worldwide and that this decline is projected to continue under business-as-usual scenarios, the Parties to the Convention on Biological Diversity (CBD) adopted the Kunming-Montreal Global Biodiversity Framework (GBF) at the 15th Conference of the Parties in December 2022. The GBF represents the most ambitious and transformative agenda to halt biodiversity loss by 2030 and allow for the recovery of natural ecosystems, ensuring that by 2050 all the world’s ecosystems are restored, resilient, and adequately protected. In Europe, the EU Biodiversity Strategy for 2030 aims to put Europe’s biodiversity on the path to recovery by 2030 by addressing the main drivers of biodiversity loss.

The emergence of government-funded satellite missions with open and free data policies and long-term continuity of observations, such as the Sentinel missions of the European Copernicus Programme and the US Landsat programme, offers an unprecedented ensemble of satellite observations which, together with very high resolution sensors from commercial vendors, in-situ monitoring systems and field work, enables the development of satellite-based biodiversity monitoring systems. The combined use of different sensors opens pathways for a more effective and comprehensive use of Earth Observations in the functional and structural characterisation of ecosystems and their components (including species and genetic diversity).

In this series of biodiversity sessions, we will present and discuss the recent scientific advances in the development of EO applications for the monitoring of the status of and changes to terrestrial and freshwater ecosystems, and their relevance for biodiversity monitoring, and ecosystem restoration and conservation. The development of RS-enabled Essential Biodiversity Variables (EBVs) for standardised global and European biodiversity assessment will also be addressed.

A separate LPS25 session on "Marine Ecosystems" is also organised under the Theme “1. Earth Science Frontiers - 08 Ocean, Including Marine Biodiversity”.

Topics of interest include (but are not limited to):
• Characterisation of the change patterns in terrestrial and freshwater biodiversity.
• Integration of field and/or modelled data with remote sensing to better characterise, detect changes to, and/or predict future biodiversity in dynamic and disturbed environments on land and in the water.
• Use of Earth Observation for the characterisation of ecosystem functional and structural diversity, including the retrieval of ecosystem functional traits (e.g., physiological traits describing the biochemical properties of vegetation) and morphological traits related to structural diversity.
• Sensing ecosystem function at diel scale (e.g. using geostationary satellites and exploiting multiple individual overpasses in a day from low Earth orbiters and/or paired instruments, complemented by subdaily ground-based observations).
• Assessment of the impacts of the main drivers of change (i.e., land use change, pollution, climate change, invasive alien species and exploitation of natural resources) on terrestrial and freshwater ecosystems and the biodiversity they host.
• Understanding of climate-biodiversity interactions, including the impact of climate change on biodiversity and the capacity of species to adapt.
• Understanding of the evolutionary changes of biodiversity and better predictive capabilities on biodiversity trajectories.
• Understanding of the ecological processes of ecosystem degradation and restoration.
• Multi-sensor approaches to biodiversity monitoring (e.g. multi-sensor retrievals of ecosystem structural and functional traits).
• Validation of biodiversity-relevant EO products (with uncertainty estimation).
• Algorithm development for RS-enabled Essential Biodiversity Variables (EBVs) on terrestrial and freshwater ecosystems.
• Linking EO with crowdsourced information for biodiversity monitoring.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Testing AVIRIS-4 for Monitoring Grassland Biodiversity Through Imaging Spectroscopy

Authors: Tiziana Koch, Christian Rossi, Andreas Hueni, Marius Vögtli, Maria J. Santos
Affiliations: University Of Zurich, Swiss National Park
Advances in remote sensing technologies, particularly airborne imaging spectroscopy, offer great opportunities for biodiversity monitoring. Recent improvements in airborne sensors and their processing pipelines, such as for the Airborne Visible InfraRed Imaging Spectrometer 4 (AVIRIS-4), provide enhanced data quality, which is crucial for capturing fine-scale biodiversity and ecosystem dynamics in complex environments. In this study, we present the first application of AVIRIS-4 data to measure biodiversity in alpine grasslands, focusing on the Swiss National Park, a hotspot for ecological research and conservation. AVIRIS-4, operated by the Airborne Research Facility for the Earth System (ARES) platform at the University of Zurich, acquires image data across a broad spectral range (380–2490 nm) with a spectral sampling of 7.5 nm and (sub-)meter scale spatial resolution, enabling detailed analysis of key grassland plant community traits and ecological processes. We leverage this technology to investigate relationships between spectral signatures and biodiversity metrics in alpine grassland ecosystems. The airborne data, collected under optimal cloud-free conditions, were pre-processed using a comprehensive state-of-the-art reflectance retrieval workflow that incorporates corrections for atmospheric and topographic effects. We then compare AVIRIS-4 data with comprehensive in situ ecological data collected from 80 grassland plots during the summer of 2024. Field measurements include biomass and canopy spectral reflectance measurements using field spectrometers. The biomass samples were weighed and analysed in the laboratory for various plant functional traits, such as nitrogen, potassium, and lignin content. We then use Partial Least Squares Regression (PLSR) to model the relationships between reflectance measurements from the airborne sensor and in situ measurements, including handheld spectroscopy and the chemical analysis of plant traits.
Our preliminary results demonstrate the capability of AVIRIS-4 data to accurately map grassland plant trait distributions at high spatial resolution, offering new opportunities for assessing biodiversity and its change, and contributing to monitoring at the landscape scale. By integrating high-resolution imaging spectroscopy data with ground-based observations, we provide a robust framework for assessing biodiversity in alpine grasslands. This approach allows us to examine how well AVIRIS-4 can predict key ecological traits that are indicative of biodiversity patterns, which is particularly important in the context of ongoing environmental changes, where timely and precise monitoring is essential. Moreover, since AVIRIS-4 is being used as a precursor for the upcoming Copernicus Hyperspectral Imaging Mission (CHIME), our findings have broader implications: the relationships established in this study, particularly with regard to plant trait upscaling, can inform future spaceborne missions, enabling global biodiversity assessments and advancing the use of remote sensing in conservation and ecosystem management.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Dynamics as foundation of riverine biodiversity: towards system scale analysis of the dynamic interaction between hydromorphology and vegetation controlling ecosystem functioning and services in river corridors

Authors: Florian Betz, Baturalp Arisoy, Magdalena Lauermann, Rafael Schmitt, Prof. Dr. Tobias Ullmann
Affiliations: University of Würzburg, Catholic University Eichstätt-Ingolstadt, University of California Santa Barbara
Despite covering only 2% of the Earth's surface, freshwater ecosystems are home to approximately 10-12% of all described species. A key driver of this high biodiversity is the inherent dynamism of river corridors (i.e. rivers and their floodplains), where steep gradients of hydro-geomorphic disturbance create a diverse mosaic of habitats. Today, the biodiversity of river corridors is critically endangered and is declining at a high rate. According to the WWF Living Planet Index, the biodiversity of freshwaters has declined by 83% since the 1970s, more than twice the decline in terrestrial ecosystems. To support nature-positive decision making and the conservation as well as restoration of river corridors, a comprehensive assessment of their structure and dynamics across a range of spatial and temporal scales is required. However, current studies tend to either focus on small scales or oversimplify the role of river dynamics in maintaining ecosystem functioning. In this study, we use the Aral Sea basin in Central Asia as an example to demonstrate the use of state-of-the-art, cloud-computing-enabled satellite remote sensing and digital geomorphometry for understanding river corridor structure and dynamics at the scale of the basin’s entire network of major rivers, spreading along more than 15,000 km. We introduce innovative methods such as an approach driven by spaceborne LiDAR and satellite time series for the unsupervised classification of riparian habitats in large-scale studies of data-scarce regions. In addition, we leverage the entire Landsat archive to analyse the hydrologic and geomorphic dynamics of the entire river network. Using UAV- and field-mapping-derived ground truth data along with the Clay foundation model enables us to accurately predict grain-size patterns of river corridor surfaces, soil moisture, and vegetation structure and biomass.
This allows us, for instance, to assess the provision of rejuvenation habitat and gross primary productivity, two fundamental ecosystem functions crucial for the long-term sustainable provision of ecosystem services. The results show that the river network of the Aral Sea basin is highly heterogeneous. In the upland part, near-natural braided and braided-anastomosing rivers dominate, with high inundation dynamics and high morphodynamics. These river segments are associated with a high degree of habitat diversity and large primary productivity. In the significantly modified river segments of the lower Aral Sea basin, habitat heterogeneity decreases, and gross primary productivity is significantly lower compared to the upstream segments. These differences can be clearly attributed to anthropogenic modifications of the river corridors. Beyond the case study level, our study paves the way for a quantitative, spatio-temporal perspective on rivers and their floodplains which would not be feasible without leveraging the potential of state-of-the-art remote sensing based on dense satellite time series, cloud computing and recent advances in foundational deep learning models. Our remote sensing approach enables scientists and practitioners to better understand the role of the complex feedbacks between hydrologic, geomorphic and ecologic processes that form the basis of riverine biodiversity. It thereby supports informed, nature-positive decision making at system scale on the path to implementing dynamic, process-oriented targets in river conservation and restoration.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: A Europe-Wide Analysis Integrating Soil Biodiversity and Earth Observation-Derived Indicators

Authors: Nikolaos Tsakiridis, Ms Eleni Kalopesa, Dr Nikiforos Samarinas, Prof Emeritus George Zalidis, Prof Christian Mulder, Prof Maria Tsiafouli
Affiliations: Aristotle University Of Thessaloniki, Inter-Balkan Environment Center - Green Innovation Hub, University of Catania
Introduction: Understanding the complex interplay between soil biodiversity (SOB) and ecosystem state and services is critical for advancing sustainable land management and mitigating biodiversity loss. This study aims to increase understanding of how soil biodiversity measured at the sample/field scale is linked to ecosystem/environmental descriptors measured at the landscape scale through EO.
Data: The study focuses on soil nematodes, a taxonomically and functionally diverse group that is representative of the entire soil food web. Specifically, we used open-access nematode community data [1] and calculated concrete biodiversity indicators, such as trophic/functional diversity and metabolic footprints, as proxies for functional processes from across Europe (ca. 1800 samples). These data were then integrated with Earth Observation (EO)-based high-resolution geospatial data, i.e., reflectance bands, NDVI and EVI from MODIS, topography, land use (CORINE land cover), topsoil health descriptors, soil temperature offsets (SoilTemp) [2], and other climatic variables.
Methods: AI-driven modeling approaches were used to examine how nematode community metrics correlate with EO-derived data and how these relationships vary across land uses and spatial and temperature gradients. To this end, we examined the popular Random Forest and XGBoost regressors, which were optimized using a 5-fold grid search approach, and we also examined the use of a-priori feature selection mechanisms.
Results: The first results demonstrate that EO-derived spatial indicators were critical for scaling field observations and capturing spatial variability in soil biodiversity, highlighting the value of combining in situ biodiversity measurements with EO technologies.
In particular, Pearson's correlations indicated a relevance between the abundance of trophic groups and the mean temperatures of the wettest and driest quarters derived from SoilTemp (ρ ≈ 0.18), MODIS reflectance data and vegetation indices (ρ ≈ 0.25), and C stock (ρ ≈ 0.20). The SoilTemp data, particularly the temperature of the warmest quarter and month, exhibited the highest correlation with the production and respiration components of the metabolic footprint (ρ ≈ 0.18). The best AI models attained an R² of ~0.85 in the training set and a mean R² of ~0.40 in the out-of-fold validation sets.
Final remarks: The study provides first insights into a scalable and replicable framework for upscaling our understanding of soil biodiversity and its links to ecosystem state and services, offering valuable insights for policymakers and land managers aiming to address biodiversity loss and enhance ecosystem sustainability across Europe. In the future, we aim to utilize the SOB4ES dataset to augment these data with more points from across the European continent and to apply this methodological framework to it. Another research avenue is to utilize other Copernicus EO data and potentially products from the upcoming Vegetated Land Cover Characteristics category.
References:
[1] van den Hoogen, J., Geisen, S., Routh, D. et al. Soil nematode abundance and functional group composition at a global scale. Nature 572, 194–198 (2019). https://doi.org/10.1038/s41586-019-1418-6
[2] Maclean, I.M.D., Suggitt, A.J., Wilson, R.J., Duffy, J.P. and Bennie, J.J. (2017), Fine-scale climate change: modelling spatial variation in biologically meaningful rates of warming. Glob Change Biol, 23: 256-268. https://doi.org/10.1111/gcb.13343
Keywords: soil biodiversity, ecosystem services, Earth Observation, SOB4ES, nematodes, metabolic footprint, AI modeling, European soil data.
Acknowledgement: This work has received funding from the European Union’s Horizon Europe programme under the project SOB4ES (Grant agreement no 101112831).
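The model-optimization step described above (Random Forest tuned by a 5-fold grid search, scored on out-of-fold R²) can be sketched as follows. The predictors, target, and hyperparameter grid below are synthetic placeholders, not the study's actual data or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic stand-ins: EO-derived predictors (e.g. reflectance, indices,
# soil temperature) and a nematode community metric as the target
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = 0.5 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.5, size=300)

# 5-fold grid search over a small, illustrative hyperparameter grid,
# scored by R^2 on the held-out folds
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=5,
    scoring="r2",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 2))
```

The gap between training-set R² and out-of-fold R² reported in the abstract (~0.85 vs ~0.40) is exactly what this cross-validated score is designed to expose.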
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Vegetation Dynamics in an Alpine Protected Area, the Gran Paradiso National Park (NW Italy) from a Remote Sensing Perspective

Authors: Chiara Richiardi, Dr. Consolata Siniscalco, Maria Patrizia Adamo
Affiliations: Laboratory Biodiversity and Ecosystems, Division Anthropic and Climate Change Impacts, ENEA, Department of Life Sciences and Systems Biology, University of Torino, National Research Council (CNR), Institute of Atmospheric Pollution Research (IIA), c/o Interateneo Physics Department,
Alpine ecosystems are highly sensitive to environmental changes, making long-term monitoring essential for biodiversity conservation and ecosystem management. This study presents an analysis of 39 years (1985–2023) of vegetation dynamics in the Gran Paradiso National Park (GPNP), the oldest protected area in Italy. Using multispectral Landsat imagery (Landsat 4–9) at 30 m resolution, we examined land cover changes with a focus on ecological and climatic drivers. Seasonal composite images were developed for the growing (June 15–August 31) and senescence (September 15–November 30) seasons, employing a refined Best Available Pixels (BAP) methodology. Terrain-corrected images were processed using the improved cosine algorithm, and snow/cloud masks were applied to enhance data quality. Eight land cover types were classified using a Random Forest algorithm, trained with a dataset built from the high-resolution (0.5 m) land cover cartography provided by GPNP. Rigorous validation was conducted using confusion matrices and independent photointerpreted datasets, yielding consistently high accuracy (Overall Accuracy >96%; Cohen's Kappa >0.90). Key spectral indices, such as the Enhanced Vegetation Index (EVI) and the Normalized Difference Snow Index (NDSI), and topographic variables derived from Digital Terrain Models (DTMs) were instrumental in improving classification performance. Results highlight significant land cover trends: a loss of grasslands (-10 ha/year), largely due to shrub encroachment (+10 ha/year), and the expansion of rocky habitats (+8.6 ha/year), likely driven by glacier retreat. These patterns varied across altitudinal zones, with grassland loss most pronounced in the subalpine belt (1900–2300 m a.s.l.) and shrub encroachment prevalent at mid-elevations. Spatial analysis revealed distinct regional dynamics, with the Piedmont side experiencing greater grassland declines than the Aosta Valley.
Change detection identified three pixel categories: stable (65%), mixed (18%), and transitional (17%), providing insights into vegetation stability and transition processes. Shrublands exhibited the lowest stability (19%), reflecting their high sensitivity to climate and land-use changes. Further analysis showed a strong correlation between land cover dynamics and anthropogenic drivers, such as land abandonment, alongside climatic factors, including snow cover duration (SCD) and glacier retreat. This study offers novel insights into the mechanisms shaping alpine landscapes under environmental and anthropogenic pressures. It underscores the critical role of remote sensing for long-term monitoring of protected areas and biodiversity, highlighting its relevance for conservation policies and adaptive management strategies. These findings contribute to advancing Earth observation methodologies, demonstrating their potential for scaling up to other alpine and protected regions globally.
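The two spectral indices named above follow standard formulas (EVI with the usual MODIS/Landsat coefficients, NDSI from green and SWIR1 reflectance). A minimal sketch with toy surface-reflectance values, which are illustrative only and not taken from the study:

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced Vegetation Index, standard coefficients."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def ndsi(green, swir1):
    """Normalized Difference Snow Index."""
    return (green - swir1) / (green + swir1)

# Toy surface-reflectance values (0-1) for two pixels: vegetated, snow-covered
nir   = np.array([0.45, 0.60])
red   = np.array([0.08, 0.55])
blue  = np.array([0.04, 0.50])
green = np.array([0.07, 0.65])
swir1 = np.array([0.20, 0.10])

print(np.round(evi(nir, red, blue), 2))   # high EVI for the vegetated pixel
print(np.round(ndsi(green, swir1), 2))    # high NDSI flags the snow pixel
```

Such per-pixel index values, stacked with topographic variables, are the kind of features a Random Forest classifier would consume.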
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Woody Cover Dynamics in Land-Water Interfaces Across Pan-Europe (1990–2024)

Authors: Xiang Liu, Dr Matthias Baumann, Dr Sonja Jähnig, Tobias
Affiliations: Humboldt University of Berlin, Leibniz-Institute of Freshwater Ecology and Inland Fisheries
The land-water interface (LWI) is a critical ecological transition zone where terrestrial and aquatic ecosystems converge, playing a pivotal role in maintaining biodiversity, regulating hydrological processes, and mitigating flood risks (Bänziger, 1995). However, these areas face mounting threats from habitat fragmentation, ecological degradation (Dreyer & Gratton, 2014), and public health risks such as zoonotic disease spread (Karr & Schlosser, 1977). Climate change and intensified flooding are also reshaping the structure and function of LWIs, impacting vegetation dynamics and ecosystem resilience. Understanding these changes, particularly concerning woody cover dynamics, is essential for informed conservation and land-use management. This study presents the first high-resolution (30 m) map of the LWI across Europe, developed using integrated Digital Elevation Models (DEM) and remote sensing data. LWIs cover 11.51% of Pan-Europe's total area, with significant spatial variability. Northern and Eastern Europe exhibit dense, natural LWIs dominated by wetlands and riparian zones, while Western and Southern Europe show extensive fragmentation driven by urbanization and agricultural expansion. Analysis reveals that climate-influenced flooding patterns contribute to the persistence of natural LWIs in certain regions while exacerbating degradation in others. From 1990 to 2024, woody cover within LWIs exhibited significant net growth, with 441,368.7 km² showing marked increases. Northern and Eastern Europe saw the most pronounced gains, driven by rewilding, natural regeneration, and conservation efforts. However, urbanized and agricultural LWIs experienced limited increases due to intensive land use and reduced ecological connectivity. These findings underscore the dual influence of conservation policies and the increasing variability of flooding and climatic conditions on vegetation recovery and ecosystem resilience.
By integrating spatial and temporal analyses, this study comprehensively assesses LWI extent, woody cover dynamics, and the impacts of land use change, climate, and flooding. The findings offer critical insights for mitigating ecological and socio-economic risks, enhancing flood resilience, conserving biodiversity, and guiding sustainable development in Europe’s land-water boundaries.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Tracking Lake Phytoplankton Blooms: A Global Remote Sensing Approach

Authors: Jelle Lever, Stefan Simis, Dr. Luis J. Gilarranz, Dr. Petra D’Odorico, Christian Ginzler, Dr. Achilleas Psomas, Prof. Dr. Alexander Damm, Arthur Gessler, Dr. Yann Vitasse, Dr. Daniel Odermatt
Affiliations: Swiss Federal Research Institute WSL, Swiss Federal Research Institute Eawag, Plymouth Marine Laboratory, University of Zürich
The impacts of climate change, eutrophication, and other anthropogenic factors on the timing, duration, intensity, and spatial extent of lake phytoplankton blooms are a growing global concern. Changes in these key bloom characteristics are especially problematic because they may form a threat to water quality and aquatic ecosystems. This, in turn, can lead to economic losses, health hazards, reduced drinking water quality, and, in some cases, toxicity to aquatic life. These properties are, therefore, critical indicators of how lake ecosystems are responding to environmental stressors. The high inter-annual variability in bloom dynamics, however, makes it difficult to detect consistent trends across years and attribute changes to specific drivers. In addition, regional variability in the underlying environmental factors – such as temperature, radiative forcing, nutrient loading, and hydrology – further complicates this analysis. Therefore, reliable global data across an extensive time period are needed alongside robust analytical methods to better understand these dynamics and inform effective management strategies. The goal of this study is to develop a comprehensive global dataset that allows for the analysis of multi-decadal change in bloom properties across a wide range of biogeographic and environmental conditions. To this end, we analyze data from 2,024 lakes across the globe. The satellite data used in this analysis are derived from the Medium Resolution Imaging Spectrometer (MERIS) and the Ocean and Land Colour Instrument (OLCI) on European Space Agency (ESA) satellites, covering the years 2002-2012 and 2016-2022, respectively. By analysing daily chlorophyll-a estimates from these sensors, we extract phenology metrics that represent the onset and decline of peak chlorophyll-a concentrations, as well as the magnitude of the fitted time series for each pixel, among other properties.
Using these metrics as a basis, we then proceed with identifying bloom events at the lake scale. This involves detecting clusters of chlorophyll-a peaks across pixels and years that occur during the same seasonal period, which are indicative of recurring bloom events. Ultimately, this enables us to obtain information on changes in the characteristics of recurring phytoplankton blooms – particularly their timing, duration, and extent – at the lake level for the above-mentioned time periods. The large number of lakes analyzed with consistent methods provides a solid basis for further research. This research could, for example, attempt to disentangle the relative effects of different environmental drivers and the interplay between them. By incorporating environmental data such as temperature, precipitation, and nutrient concentrations, we may be able to understand the relative contributions of climate change, land use change, and other anthropogenic factors to the observed trends in bloom dynamics. This knowledge will be crucial for guiding policy decisions aimed at mitigating the impacts of harmful algal blooms, improving water management practices, and protecting freshwater ecosystems. Moreover, by providing a global perspective on algal bloom dynamics, our research will contribute to the growing body of knowledge on the intersection of climate change, eutrophication, and aquatic ecosystem health. In conclusion, our study underscores the importance of satellite remote sensing in advancing our understanding of global lake phytoplankton bloom dynamics. By tracking lake phytoplankton bloom characteristics, we aim to provide critical insights that will help inform management strategies at local, regional, and global scales. This work is a step toward better quantifying the impacts of environmental change on freshwater ecosystems and developing more effective policies to mitigate the threats posed by harmful algal blooms.
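Extracting onset, peak, and decline from a chlorophyll-a series can be illustrated with a minimal threshold-crossing sketch. The Gaussian bloom shape and the 20%-of-amplitude threshold below are assumptions made for the illustration, not the study's actual phenology criteria.

```python
import numpy as np

# Synthetic daily chlorophyll-a series (mg/m^3) with one bloom centered on day 150
days = np.arange(365)
chla = 1.0 + 8.0 * np.exp(-0.5 * ((days - 150) / 20.0) ** 2)

# Define onset/decline as crossings of 20% of the seasonal amplitude (assumed)
baseline, peak_val = chla.min(), chla.max()
threshold = baseline + 0.2 * (peak_val - baseline)

above = chla > threshold
onset = days[above][0]          # first day above threshold
decline = days[above][-1]       # last day above threshold
peak_day = days[np.argmax(chla)]

print(onset, peak_day, decline, round(peak_val - baseline, 1))
```

Applied per pixel and per year, metrics like these are what would then be clustered across pixels to identify recurring lake-scale bloom events.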
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Session: A.08.03 Ocean Salinity

Ocean salinity is a key variable within the Earth's water cycle and a key driver of ocean dynamics. Sea surface salinity (SSS) has been identified as an Essential Climate Variable by the Global Climate Observing System (GCOS) and an Essential Ocean Variable by the Global Ocean Observing System (GOOS). Through the advent of new observing technologies for salinity and the efforts to synthesize salinity measurements with other observations and numerical models, salinity science and applications have significantly advanced over recent years.
This session will foster scientific exchanges and collaborations in the broad community involved in ocean salinity science and applications, widely encompassing satellite salinity (e.g., SMOS and SMAP) data assessment and evolution, multi-mission merged product generation (e.g., CCI-salinity), exploitation of in-situ assets for calibration and validation and related platforms (e.g., Salinity PI-MEP), and ultimately broad salinity-driven oceanographic/climatic applications and process studies.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Estimating Sea Surface Salinity in Cold Seas with the CryoRad 0.4–2 GHz Wideband Radiometer

Authors: Jean-Luc Vergely, Dr Jacqueline Boutin, Stéphane Ferron, Dr Marie-Laure Frery, Dr Giovanni Macelloni, Dr Marco Brogioni, Eric Jeansou, Veronique Bruniquel
Affiliations: ACRI-ST, CNRS, CNES, CNR-IFAC
Salinity in polar oceans is changing. Sea ice melt and increased continental runoff are responsible for a decrease of sea surface salinity (SSS) in most regions of the Arctic Ocean. In the Southern Ocean, recent changes in Antarctic sea ice extent and thickness are also likely to modify SSS and to increase upper-ocean stratification. These changes have strong implications for the oceanic circulation and for the ocean's capability to absorb atmospheric heat and carbon, with large consequences for Earth's climate. A particularly important aspect is the influence of SSS on the collapse of the Atlantic meridional overturning circulation, whose timing could be earlier than predicted by climate models. Improved SSS estimates in polar seas are required to monitor the evolution of freshwater fluxes at ocean boundaries (sea ice melting and formation, river runoff, precipitation effects), the variability of surface hydrography that controls deep water formation and overturning circulation, exchanges with other ocean basins, and their impact on the global climate. The current generation of climate models poorly reproduces high-latitude water mass properties because of their crude representations of physical processes such as lateral mixing, convection, and entrainment (especially in the marginal ice zone). These limitations impair the modeled response to climate change. SSS is recognized as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS) and as an Essential Ocean Variable by the Global Ocean Observing System (GOOS). Current 1.4 GHz (L-band) radiometer missions have provided unprecedented SSS measurements over the global ocean at 40-150 km scales with a revisit time of 3 to 8 days, and continuity of observations is recognized as a high priority that will be partially addressed by the CIMR Copernicus mission.
However, for cold waters, the sensitivity of the L-band brightness temperature to SSS decreases (by roughly a factor of 3 between 30°C and 0°C), leading to greater uncertainties in polar SSS. The CryoRad mission, selected as an ESA EE12 mission candidate, includes a radiometer covering an extended frequency range between 0.4 and 2 GHz, one aim of which is to improve the accuracy of SSS measurements in cold waters by at least a factor of 2 compared with L-band measurements. As part of the CNES ‘Salinity estimate in cold seas using multiband 0.4-2GHZ’ research and technology study (R&T DTN/CD/AR-2024.0009418) and of the ESA CryoRad Earth Explorer 12 Phase 0 Science and Requirements Consolidation Study (SciReC) (ESA 4000145903/24/NL/IB/ar), we performed simulations based on a simplified CryoRad instrument model in order to demonstrate CryoRad's contribution to SSS estimation at high latitudes. We simulate SSS retrieval uncertainties taking into account various contributors to the radiometric measurements, such as sea surface temperature, wind speed, and atmospheric influences, as derived from radiative transfer model elements well validated at L-band and propagated to lower frequencies using physically based considerations. This simulator is used to carry out an initial sensitivity study for level 2 and level 3 salinity estimation. We will present the way in which this simulator has been implemented (direct model, inverse model, and inversion strategy) and the performance obtained in estimating the SSS in the framework of an academic study.
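The loss of sensitivity in cold water translates directly into retrieval uncertainty via first-order error propagation: σ_SSS = σ_TB / |dTB/dSSS|. The sketch below uses illustrative sensitivity and noise values (assumptions, not CryoRad's actual figures) to show the roughly threefold degradation quoted above.

```python
# First-order uncertainty propagation for a salinity retrieval:
# sigma_SSS = sigma_TB / |dTB/dSSS|
def sss_uncertainty(sigma_tb_K, dtb_dsss_K_per_psu):
    return sigma_tb_K / abs(dtb_dsss_K_per_psu)

sigma_tb = 0.3     # radiometric noise (K), assumed value
sens_warm = -0.75  # dTB/dSSS at ~30 degC (K/psu), illustrative
sens_cold = -0.25  # dTB/dSSS at ~0 degC (K/psu), illustrative (factor ~3 weaker)

warm = sss_uncertainty(sigma_tb, sens_warm)
cold = sss_uncertainty(sigma_tb, sens_cold)
print(round(warm, 2), round(cold, 2))  # cold-water uncertainty ~3x larger
```

Lowering the observation frequency strengthens |dTB/dSSS| in cold water, which under the same propagation shrinks σ_SSS; that is the effect the CryoRad simulations quantify.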
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Ocean-induced magnetic field: Spatio-temporal characteristics and sensitivity to ocean flow and salinity

Authors: Jakub Velímský, Ondřej Kureš, Veronika Ucekajová, Christopher Finlay, Clemens Kloss, Rasmus Møller Blangsbøll
Affiliations: Department of Geophysics, Faculty of Mathematics and Physics, Charles University, Department of Space Research and Technology, Technical University of Denmark
Satellite magnetic field observations have the potential to provide valuable information on dynamics, heat content, and salinity throughout the ocean. Here we present the expected spatio-temporal characteristics of the ocean-induced magnetic field at satellite altitude at periods of months to decades. To characterize the expected ocean signal, we make use of advanced numerical simulations taking high-resolution oceanographic inputs and solve the magnetic induction equation in 3D, including galvanic coupling and self-induction effects. We compare the magnetic field calculated for several different ocean models and isolate spatio-temporal features which are consistent across the inputs. We also investigate the sensitivity of the ocean-induced magnetic field to the sea surface salinity constrained by satellite observations (CCI+SSS).
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Maritime continent water cycle as a key forcing for decadal variation of upper-ocean salinity in the southeast Indian Ocean

Authors: Tong Lee, Dr. Sreelekha Jarugula, Dr. Ou Wang, Dr. Severine Fournier
Affiliations: NASA Jet Propulsion Laboratory
Argo measurements illustrate pronounced decadal variation of salinity in the southeast Indian Ocean (SEIO) that is coherent in the upper 200 m, with freshening from the mid-2000s to the early 2010s followed by salinification afterwards. The SEIO decadal salinity variation contributed over half the magnitude of the decadal sea level variation in this region. Sea surface salinity (SSS) from SMOS captures the SEIO salinification after the early 2010s with a much better-defined spatial structure than that depicted by Argo. SMOS data also reveal the linkage of the decadal SSS signal in the SEIO with that in the maritime continent, which is not sampled by Argo. Previous studies suggested several possible factors contributing to the SEIO decadal salinity signal: SEIO local winds, remote winds in the tropical Pacific forcing the Indonesian throughflow (ITF) that advects the salinity signal into the SEIO, SEIO local evaporation-precipitation (E-P), and remote E-P in the maritime continent. These studies did not agree on a key forcing mechanism. In particular, a recent study suggested that SEIO local wind stress is the key forcing mechanism. However, the finding was based on the association of forcing with salinity variability without demonstrating causality. Here, we attribute the decadal variation of SEIO salinity by isolating the contributions of E-P and wind stress forcings through forcing sensitivity experiments using the ECCO ocean modeling and state estimation system (https://ecco.jpl.nasa.gov). Our causality analysis reveals that maritime continent E-P is the key forcing for the decadal variation of SEIO salinity. Decadal variation in winds, suggested by some previous studies, plays little role. Therefore, the climatological ITF (forced by climatological winds) carrying the decadal variation of freshwater content from the maritime continent into the SEIO is the main oceanic process that transmits the maritime continent water cycle effect to the SEIO.
We further strengthen our finding through a budget analysis of the SEIO salinity.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Monitoring Freshwater Variability in Southwest Greenland Using Satellite and In-Situ Observations

Authors: Fabrice Bonjean, Gilles Reverdin, Jacqueline Boutin, Jean-Luc Vergely, Sébastien Guimbard, Nicolas Kolodziejczyk
Affiliations: CNRS, ACRI-ST, Oceanoscope, LOPS
Understanding the variability of Sea Surface Salinity (SSS) in the subpolar North Atlantic is critical for assessing freshwater transport and its role in the global climate system. This study focuses on the region south of Greenland, which is influenced by significant freshwater inputs from the Arctic and Greenland ice melt. Using SSS data from the Climate Change Initiative (CCI) alongside a comprehensive set of in-situ observations, we analyzed key events and variability in the area, exploring mechanisms driving the exchange between the shelf and the open ocean. The CCI SSS product, which integrates data from multiple satellite missions, effectively captures the seasonal and interannual variability of SSS beyond 50 km from the coast. Our analysis of a well-sampled freshwater event in fall 2021 highlights the capability of satellite SSS to track the transfer of fresh shelf waters into the open ocean. This "fresh blob" event, driven by strong northwesterly winds, resulted in the transport of freshwater from the East Greenland Current into the interior Labrador Sea. Weekly CCI SSS fields capture the westward progression of this anomaly, a feature corroborated by in-situ data from Argo floats and drifters. Despite these successes, challenges remain in coastal regions where biases in CCI SSS highlight the need for improved absolute calibration. Positive biases near the Greenland shelf have been linked to limitations in the previous mean calibration against the ISAS climatology, which struggled to capture small-scale variability near the coast. A recently updated ISAS climatology has now been utilized for the calibration of the latest CCI SSS version, and updated results addressing these issues will be presented. Nonetheless, these findings underscore the need for higher-resolution satellite sensors to resolve the finer-scale processes governing coastal freshwater transport. 
Building on these results, this study serves as a foundation for broader applications of satellite-derived SSS in monitoring high-latitude freshwater variability. Future efforts will extend the methodology to other regions in the Northern Hemisphere, applying a systematic approach to integrate satellite and in-situ observations for enhanced tracking of freshwater anomalies. Examples of these extensions will be presented, showcasing their potential to improve understanding of freshwater pathways and their influence on the subpolar gyre and global thermohaline circulation.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: CCI+SSS: Expanding Sea Surface Salinity Research to Meet Climate Challenges

Authors: Jacqueline Boutin, Dr. Nicolas Reul, Dr Rafael Catany, Dr Roberto Sabia
Affiliations: CNRS/LOCEAN, IFREMER, ARGANS, ESA
Sea Surface Salinity (SSS) is an increasingly used Essential Ocean and Climate Variable. The Soil Moisture Ocean Salinity (SMOS), Aquarius, and Soil Moisture Active-Passive (SMAP) satellite missions provide SSS measurements with very different instrumental features, leading to specific measurement characteristics. The ESA-funded Climate Change Initiative Salinity project (CCI+SSS) aims to produce an SSS Climate Data Record (CDR) based on those satellite measurements. The instrumental differences are carefully adjusted to generate a homogeneous CDR [Boutin et al., 2021]. An optimal interpolation in the time domain, without temporal relaxation to reference data or spatial smoothing, is applied. This allows for preserving the original dataset's variability. CCI+SSS fields are well suited for monitoring weekly to interannual signals at spatial scales ranging from 50 km to the basin scale. In this presentation, we review scientific findings from the CCI+SSS project team in recent years. We also detail the improvements included in the CCI+SSS version 5.5 dataset, which covers the 2010-2023 period and will be delivered at the end of 2024. Since CCI+SSS version 4, and following users' recommendations, global SSS fields are provided on a rectangular 0.25° grid. Polar SSS fields on the EASE polar grid are also provided. Compared with previous CCI+SSS versions, version 4 was of noticeably better quality in high-latitude regions. The Climate Research Group used this dataset to show a relationship between the interannual SSS variability in the Barents Sea and the regional sea ice coverage. CCI+SSS enabled us to monitor the spatio-temporal evolution of a fresh event west of Greenland in the fall of 2021. In the tropics, the application of a new RFI contamination correction (Bonjean et al. 2024) allowed the restoration of the interannual SSS variability related to ENSO, which was, in previous versions, masked by RFI contamination around the island of Samoa.
The CCI+SSS fields have been used to assess model results at a global scale with or without data assimilation (GLORYS model) and in river plume regions such as the Amazon plume (NEMO-PISCES biogeochemical model, Gévaudan et al. 2022) and the eastern tropical Atlantic Ocean (Thouvenin-Masson et al. 2024). A main uncertainty for simulating SSS interannual variability has been identified as coming from uncertainty in river discharges. CCI version 5 (2010-2023) uses SMOS SSS derived with a dedicated reprocessing and the recent SMAP version 5.3 SSS to improve temporal stability. In regions contaminated by RFI, SMOS SSS variability is recovered using a methodology adapted from Bonjean et al. (2024). Systematic latitudinal-seasonal SSS corrections, as well as temperature- and wind-related effects, are also accounted for. This leads to significant improvements, especially at high latitudes.
Participants in the CCI+SSS team are: J. Boutin (1), N. Reul (2), R. Catany (3), A. Martin (4), J. Jouanno (5), L. Bertino (6), F. Rouffi (7), F. Bonjean (1), G. Corato (8), M. Gévaudan (2), S. Guimbard (9), P. Hudson (4), N. Kolodziejcyk (2), M. Martin (10), X. Perrot, R. Raj (6), E. Rémy (11), G. Reverdin (1), A. Supply (2), C. Thouvenin-Masson (1), J.L. Vergely (7), J. Vialard (1), R. Sabia (12), S. Mecklenburg (12). (1) LOCEAN, (2) LOPS/IFREMER, (3) ARGANS, (4) NOC, (5) LEGOS, (6) NERSC, (7) ACRI-st, (8) ADWAISEO, (9) OCEANSCOPE, (10) METOFFICE, (11) MERCATOR OCEAN INTERNATIONAL, (12) ESA
References:
Bonjean et al. (2024), "Recovery of SMOS Salinity Variability in RFI-Contaminated Regions," IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2024.3408049.
Boutin, J., et al. (2021), Satellite-Based Sea Surface Salinity Designed for Ocean and Climate Studies, Journal of Geophysical Research: Oceans, 126(11), e2021JC017676, https://doi.org/10.1029/2021JC017676.
Gévaudan et al. (2022), Influence of the Amazon-Orinoco discharge interannual variability on the western tropical Atlantic salinity and temperature, Journal of Geophysical Research: Oceans, 127, e2022JC018495, https://doi.org/10.1029/2022JC018495.
Thouvenin-Masson et al. (2024), Influence of river runoff and precipitation on the seasonal and interannual variability of sea surface salinity in the eastern North Tropical Atlantic, Ocean Sci., 20, 1547–1566, https://doi.org/10.5194/os-20-1547-2024.
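The temporal-domain optimal interpolation behind the CCI+SSS fields can be illustrated with a minimal one-dimensional sketch: each daily estimate is a covariance-weighted combination of nearby observations around a constant mean, with no relaxation toward a reference field. The correlation scale, signal and noise levels, and the synthetic observations below are invented for the illustration and are not the CCI+SSS processing parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
t_obs = np.sort(rng.uniform(0, 30, 40))  # irregular observation times (days)

# Synthetic "true" SSS: a 35 psu mean with a small seasonal-like oscillation
truth = lambda t: 35.0 + 0.5 * np.sin(2 * np.pi * t / 30.0)
y = truth(t_obs) + rng.normal(scale=0.1, size=t_obs.size)  # noisy observations

L, sig_s, sig_n = 3.0, 0.5, 0.1  # correlation scale (days), signal/noise std
cov = lambda a, b: sig_s**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / L) ** 2)

# OI analysis: est = mean + K C^{-1} (y - mean), with C the obs-obs covariance
C = cov(t_obs, t_obs) + sig_n**2 * np.eye(t_obs.size)
t_grid = np.arange(0.0, 30.0, 1.0)
K = cov(t_grid, t_obs)  # grid-to-observation covariance
est = 35.0 + K @ np.linalg.solve(C, y - 35.0)

print(np.round(est[:5], 2))
```

Because the weights come only from the temporal covariance of the data themselves, the interpolated series is not pulled toward an external climatology, which is how the variability of the original dataset is preserved.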
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Session: F.01.03 Trends in Earth Observation Education and Capacity Building: Embracing Emerging Technologies and Open Innovations - PART 2

Education activities in recent years have undergone a significant transformation related to the global digitalization of education and training. Traditional teaching methods, such as face-to-face training provided to small groups of students, are being complemented or even replaced by massive open online courses (MOOCs) with hundreds of participants following the course at their own pace. At the same time, the Earth observation sector continues to grow at a high rate; in Europe, the European Association of Remote Sensing Companies (EARSC) reported in 2023 that the sector had grown by 7.5% over the past 5 years.
This session will cover new trends in modern education in the Space and EO domains as well as methods, use cases, and opportunities to cultivate Earth observation literacy in diverse sectors, such as agriculture, urban planning, public health, and more. It will focus on new methods and tools used in EO education and capacity building, such as: EO data processing in the cloud, processing platforms and virtual labs, dashboards, new and innovative technologies, challenges, hackathons, and showcase examples which make successful use of EO data. Participants will also have the opportunity to share and discuss methods for effective workforce development beyond typical training or education systems.
Drawing on the experience of space agencies, international organisations, tertiary lecturers, school teachers, universities and companies working in the domain of space education, this session will be an opportunity to exchange ideas and lessons learnt, discuss the future opportunities and challenges that the digital transformation of education has brought, and consolidate recommendations for future education and capacity building activities. It will also explore opportunities to collaborate further, build EO literacy among new users outside the Earth and space science sector, and expand the impact of EO across sectors.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: Tools in action: Tailoring user-friendly solutions for varied educational environments

Authors: Tobias Gehrig, Maike Petersen, Johannes Keller, Alexander Siegmund
Affiliations: Heidelberg University Of Education, Heidelberg Center for the Environment (HCE) & Institute of Geography, Heidelberg University
Modern approaches to Earth Observation (EO) hold significant potential for enhancing our understanding of the Earth system in the context of the Sustainable Development Goals (SDGs). They provide a wealth of opportunities for climate and environmental education, offering unique insights into the state of, and changes occurring at, virtually any location on Earth. Additionally, these applications create numerous educational opportunities by linking Geography, STEAM education (Science, Technology, Engineering, Arts, Mathematics), and education for sustainable development (ESD). However, while EO data are accessible and visually compelling, their interpretation requires an understanding of complex technical, environmental, social, and ethical contexts (Ohl, 2013). Consequently, their implementation in education is often hindered by time constraints, a lack of expertise among teachers, and the absence of suitable teaching examples and applications that let students analyse EO data (Dannwolf et al., 2020). By addressing these barriers, the Institute for Geography and Geocommunication – rgeo at Heidelberg University of Education aims to bridge the gap between science and education, bringing EO approaches into classrooms. To this end, rgeo develops and continuously optimizes tailor-made digital tools that offer a user-friendly experience. These tools are the basis for teaching with EO data and include a student-friendly web-based application for analysing EO data, an adaptive e-learning platform, and an app that combines EO and field work. Based on these applications, we design teaching material, e-learning modules, lesson plans, workshops and further training courses to ease the implementation of this highly motivating and visually appealing methodology. While the main target groups of rgeo's projects are teacher trainees, university students and secondary school students, it increasingly also addresses vocational trainees as well as experienced teachers.
All these target groups have specific challenges which need to be addressed and considered during project planning and implementation. This talk presents two approaches to implementing EO in education. While the use of EO data in secondary schools is already common and encouraged by its inclusion in the curricula of several German federal states, the methods used are mostly confined to Google Earth/Maps to provide a first overview of a geographical phenomenon. Additional potential, such as the use of different time steps for change detection, UAS data, or multispectral data, remains largely untapped. Using such methods from actual scientific projects can, however, aid in conveying topics such as resource conflicts. The translation from a scientific project to a teaching example needs to manage a triple complexity of methodological, content-related, and ethical considerations. Students must first grasp the technical and physical foundations necessary for interpreting EO data (Keller et al., 2023). The subject matter should be aligned with key geographic concepts to help students acquire meaningful knowledge (Fögele, 2017). Furthermore, when addressing ethical issues, students need to learn how to articulate ethical questions clearly and appropriately (Barth, 2022). This presentation will therefore use a teaching example focused on the causes and consequences of land use change in West Pokot (Kenya) to illustrate how to navigate this triple complexity effectively. A second challenge lies in the applicability of EO content to vocational training. While many trainees are likely to encounter EO data in their future occupations or would benefit from this methodology, EO is not part of most vocational training programmes. Implementing such approaches in vocational training is even more restricted by time constraints than is the case for secondary schools.
Most teachers in this educational field are unaware of the potential of EO and need to be convinced of the benefits for their students. Thus, courses should be developed in co-design with the teachers to meet their specific needs and address their concerns. Courses are also more appealing if they are designed as project studies in which students can pursue their own ideas and topics relevant to their specific occupations. Finally, the presentation will highlight key design principles and lessons learned that play a crucial role in the development and implementation of EO-based educational approaches. These principles provide valuable guidance for overcoming the challenges described above and enable the sustainable integration of EO data into different educational formats.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: Teacher’s Training in the Projects Copernicus4schools and EUthMappers

Authors: Alberta Albertella, Lorenzo Amici, Jesus Rodrigo Cedeno Jimenez, Quang Huy Nguyen, Alberto Vavassori, Prof Maria Antonia Brovelli
Affiliations: Politecnico Di Milano
Over the last two years, in two different European projects, Politecnico di Milano has been involved in designing training activities for European secondary school teachers. The first project, Copernicus4Schools, is an FPCUP (Framework Partnership Agreement on Copernicus User Uptake) project involving several high schools in different countries, with the aim of stimulating students and teachers to use and better understand the Copernicus Programme and the possibilities offered by Earth Observation. It focuses in particular on climate, climate change and their consequences, illustrating how satellite images are used to monitor our planet. Several European partners are involved in the project and, among them, Politecnico di Milano has been in charge of developing the teaching materials for the teacher trainings. These were prepared as a web-book available in English and in the languages of the countries involved in the project. The 15-hour course introduces GIS and QGIS, focusing on satellite data analysis and emergency management. Participants learn how to access and analyse Copernicus satellite imagery, and how to retrieve and integrate datasets such as flood delineation maps, land cover data, and population information. They also learn how to use these tools in QGIS to evaluate the impact of flooding on land use and population, providing valuable insights for emergency response and planning. All topics are introduced theoretically and applied in an exercise on real data (from a flood event that took place in Italy in 2020), described step by step in all its aspects. The second project, EUthMappers, is an ERASMUS+ project with the aim of increasing the interest of secondary school pupils in STEM topics and enhancing their digital skills and their environmental and civic engagement.
It introduces them to open-source geospatial tools aimed at the development of open, collaborative and inclusive mapping projects based on the OpenStreetMap (OSM) platform. Initially, teachers from five schools in Italy, Spain, Slovakia, Romania and Portugal are introduced to OSM through a workshop and a handbook developed within the project. They then guide their students in developing local mapping projects, from ideation and the creation of an online mapping project on the Tasking Manager through to data acquisition and visualisation. Throughout this process, the pupils are trained to improve their teamwork abilities, think creatively and develop their own method of gathering data. In the final step, to broaden the pupils' abilities, the five schools not only cooperate within their own classes but also work together on one collaborative humanitarian project led by UN Mappers. The simultaneous mapping efforts by students also enhance their global collaboration and awareness. Participants acquire the skills and competencies needed to work together on an international scale and to organise themselves within a group, ensuring they are equipped to manage future collaborative projects beyond the scope of the project.
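The flood-impact exercise described above (overlaying a flood delineation map with land cover and population data) can be illustrated with a minimal numpy sketch. The arrays here are synthetic stand-ins for the real Copernicus rasters used in the course, and all names are illustrative, not the course's actual code:

```python
import numpy as np

# Synthetic stand-ins for three co-registered rasters:
# a flood delineation mask, a land-cover map, and a population grid.
rng = np.random.default_rng(42)
flood_mask = rng.random((100, 100)) > 0.8          # True where flooded
land_cover = rng.integers(1, 5, size=(100, 100))   # classes 1..4
population = rng.poisson(3, size=(100, 100))       # persons per cell

# Flooded area (cell counts) per land-cover class.
flooded_by_class = {
    int(c): int(np.count_nonzero(flood_mask & (land_cover == c)))
    for c in np.unique(land_cover)
}

# Population living in flooded cells.
affected_population = int(population[flood_mask].sum())

print(flooded_by_class)
print(affected_population)
```

In QGIS the same cross-tabulation would typically be done with raster overlay or zonal statistics tools rather than code, but the underlying logic (mask, intersect by class, sum) is the same.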
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: SatSchool: Observing the Earth from Space, in the Classroom

Authors: Leam Howe, Alex Lewis, Rebecca Wilks, Hannah Picton, Samuel Bancroft, Catherine Mercer, Laura Bentley, Maria Paula Velasquez, Yvonne Anderson, Emily Dowd, Bryony Freer, Calum Hoad, Morag
Affiliations: University Of Leeds
SatSchool is an outreach initiative aimed at engaging lower secondary school pupils (aged 11-14) with Earth observation (EO) science, while highlighting the relevance of STEM subjects and showcasing the diverse pathways into EO careers. Initially spearheaded by PhD students from the Satellite Data in Environmental Science Centre for Doctoral Training (SENSE CDT), SatSchool has evolved into a collaborative effort involving early career researchers from institutions across the UK, including the Universities of Edinburgh, Leeds, Stirling, Glasgow, the National Oceanography Centre, and the British Antarctic Survey. SatSchool offers the opportunity for PhD students to engage in outreach as part of a supportive network, alleviating the time constraints and stress associated with individual organisation of such activities. Supported by funding totalling £23,850 from sources including NERC, SENSE CDT, the Ogden Trust, and SAGES, SatSchool has already made a significant impact, having reached over 2000 students across 37 schools in Scotland and England, and thousands more engagements through festivals and online events. SatSchool’s outreach package contains six bespoke modules (Introduction to EO, Hands on with Data, Cryosphere, Biosphere, Atmosphere, and Oceans), which draw from the broad expertise and creativity of SENSE CDT students and have been enhanced by liaison with school teachers and the European Space Education and Resources Office UK (ESERO-UK). All resources are open-source, with modular exercises enabling educators and PhD demonstrators to flexibly create EO lessons. At LPS 2025, we will showcase our open-access outreach materials, present insights gained from the development of SatSchool, and outline our future objectives.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: Space fuels learning jewels: Gaining spatial literacy through gamified learning with Earth Observation

Authors: Eva-Maria Steinbacher, Thomas Strasser, Isabella
Affiliations: Paris Lodron University of Salzburg
Satellite-based Earth observation (EO) offers a wide variety of application areas for monitoring spatio-temporal environmental phenomena, such as the effects of climate change, pollution, or the loss of biodiversity. Many of these applications inherently address pressing real-world challenges for society, both current and future. EO serves as a data source for historical and up-to-date information on environmental status and changes, and is thus ideal as a learning framework for educators to teach young people about environmental changes, human impact and the consequences. This is of utmost importance, since children and adolescents from about 8 to 18 years of age are the next cohort who will shape the future by transforming their personal knowledge into action. In this context, the iDEAS:lab, the lab for science communication at the Paris Lodron University of Salzburg (PLUS), provides informal education to professionals. Emphasis is put on work within open, experience-based learning environments for integrating EO-based, gamified and experiential learning offers. These professionals operate in learning spaces that are more exploratory and less constrained, thus providing opportunities for interactive, hands-on learning experiences. In education, emotional triggers are essential for engaging deeply with a topic and fostering behavioral change through critical reflection. Such triggers can arise from factors like spatial proximity, personal relevance, or topics that align with our interests. What matters are subjects we can connect with and integrate meaningfully into our cognitive understanding, scaling from our personal surroundings up to a perspective on the world. The use of EO in a learning environment requires basic skills from educators to interpret and contextualise the provided information. Motivation for integrating it into lessons, however, is a critical asset.
For the education and training of educators, this means providing inspiration and ideas that are easy to implement - both in terms of material preparation and the resources used. At the same time, the examples introduced below aim to present current societal and environmental topics in a fresh context: through the use of geospatial media, both analogue and digital, presented in an engaging manner that fosters personal relevance, identification, and interest. Spatio-temporal literacy is developed by shifting perspectives on familiar surroundings, enabling learners to explore and analyze their environment from new angles. Educators focus on fostering spatial literacy through three distinct contexts: children and adolescents examine familiar locations using satellite imagery, learning to identify distinguishing features and contrasting them with their well-known ground-level perspectives. The complexity of these topics is simplified to ensure that educators can easily apply or adapt the content using conventional media. Another key aspect of the training approach is the playful method of teaching, where learning happens implicitly and is often initiated by the children and adolescents themselves. This transforms learning into an experience - an enjoyable, exploratory, and playful journey. In the following, four sustainable games for education with EO are introduced. The places to be explored can either be guided, accompanied by intriguing stories or facts, or freely chosen by the children and adolescents. Popular options often include revisiting previous places of residence, exploring past or future vacation destinations, or other locations of personal significance. Focusing on specific locations, such as infrastructure or topographical features, can be achieved effectively through a brief "Space Travel" between points of interest.
This approach becomes particularly engaging when implemented with a playful element like "Space Bingo": during a space travel guided by the educators, participants play a bingo game, identifying infrastructure elements such as railways, bus networks, industrial sites, or recreational facilities like stadiums, tennis courts, or swimming pools from a bird's-eye perspective. Using a bingo card filled with infrastructure elements or symbols, the task is to spot the corresponding elements on satellite images. Initial feedback on "Space Bingo", which can be conducted easily and quickly using freely available tools for EO data investigation (e.g. virtual globes like Google Earth), has been overwhelmingly positive in educators' practical work. Both educators and participating children and adolescents have reported high levels of interest and enthusiasm for this interactive learning activity. The Satellite Image Matching Game - also adaptable as a memory game - pairs first-person views (in-situ images) with views of the Earth from space (satellite image maps). The game is easily designed around familiar, prominent locations or created collaboratively with children and adolescents. In an initial step, the in-situ images can be matched openly with satellite image maps, focusing on the identification and recognition of landmarks. A subsequent round of Satellite Image Memory then provides an opportunity to reinforce this knowledge at a higher level of difficulty. For teenagers, "Earth Observation - The Case Stories" offers a chance to delve deeper into specific topics. For instance, wildfires or floods can be analysed as small case studies using accessible, open-source tools like the EO Browser. In this exercise, personal connections play a key role in fostering engagement with these topics.
Examples include relatives or acquaintances affected by wildfires during vacations, or flooding events in the participants' own communities. Such personal relevance enhances identification with the subject matter and deepens understanding of its implications. In conclusion, satellite-based Earth observation provides a powerful and engaging tool for educators to teach young people about environmental and societal challenges, while offering immersive, hands-on learning experiences that foster spatio-temporal literacy. Using EO data in gamified approaches, educators can create appealing learning opportunities such as "Space Bingo" and "Satellite Image Memory", where learning is both enjoyable and meaningful, and adaptable to interests and age groups.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: Expanding the Access to Hyperspectral Remote Sensing: Open Science and Education Initiatives by the EnMAP Science Segment

Authors: Theodora Angelopoulou, Arlena Brosinsky, Akpona Okujeni, Saskia Foerster, Katrin Koch, Daniel Scheffler, Kathrin Ward, Robert Milewski, Karl Segl, Saeid Asadzadeh, Alexander Khokanovsky, Tobias Hank, Stefanie Steinhauser, Astrid Bracher, Marianna Soppa, Najoro Randrianalisoa, Benjamin Jakimow, Andreas Janz, Michael Bock, Nicole Pinnel, Vera Krieger, Sabine Chabrillat
Affiliations: German Research Centre for Geosciences (GFZ), Helmholtz Centre, Humboldt-Universität zu Berlin (HU), German Environment Agency (UBA), German Weather Service (DWD), Ludwig-Maximilians-Universität München (LMU), Alfred-Wegener-Institute Helmholtz Centre for Polar and Marine Research (AWI), Institute of Environmental Physics, University Bremen, German Aerospace Center (DLR), Earth Observation Center (EOC), German Aerospace Center (DLR), German Space Agency, Leibniz University Hannover, Institute of Earth System
Hyperspectral remote sensing offers novel opportunities to develop innovative products and services within the framework of the European Copernicus programme, addressing global environmental challenges and supporting policy implementation. Recent years have witnessed rapid advancements, driven by the launch of scientific hyperspectral satellite missions such as EnMAP, PRISMA and DESIS, paving the way for ESA's upcoming flagship mission CHIME. Despite significant advances within the scientific community, widespread use of hyperspectral remote sensing remains limited due to challenges such as technical complexity, restricted access to tools and data, and a lack of tailored educational resources. Addressing these challenges requires dedicated efforts in education and user engagement to bridge the gap between scientific innovation and practical applications, enabling broader use by non-experts. The EnMAP Science Segment has embraced Open Science principles to make hyperspectral knowledge and tools more accessible. As part of the EnMAP mission, a comprehensive scientific programme fostering Open Science activities has been established, coordinated by the German Research Centre for Geosciences (GFZ), supported by the German Space Agency at the German Aerospace Center (DLR), and partnered with leading institutions including the Ludwig-Maximilians-Universität München (LMU), the Alfred-Wegener-Institute Helmholtz Centre for Polar and Marine Research (AWI), and Humboldt-Universität zu Berlin (HU). This programme encompasses the development and provision of algorithms and applications, the free and open-source EnMAP-Box software, benchmark datasets, and the HYPERedu training initiative. HYPERedu plays a pivotal role both in bringing scientific developments into education and in preparing users for the effective uptake of hyperspectral (EnMAP) data in research, as well as in addressing the needs of public authorities and of potential industry players developing commercial applications based on hyperspectral EnMAP data.
HYPERedu addresses graduate students at master's level and professionals in academia, industry, and governmental institutions, offering a variety of freely accessible learning resources. These include Massive Open Online Courses (MOOCs), annotated slide collections, hands-on tutorials built on the EnMAP-Box software, educational films and screencasts, as well as interactive graphics. All materials are freely available under a CC-BY license and hosted on the EO-College platform, facilitating their integration into university curricula, professional training programmes, and self-paced learning. We present an overview of the ongoing efforts by the EnMAP science community to translate scientific knowledge into educational tools through HYPERedu, with MOOCs being the most developed educational initiatives. These MOOCs are designed for flexible, self-paced learning, combining fundamental knowledge with hands-on exercises using the EnMAP-Box, with participants earning a certificate upon completion. The first MOOC, "Beyond the Visible: Introduction to Hyperspectral Remote Sensing," launched in November 2021, covers the fundamentals of imaging spectroscopy. Subsequent MOOCs have addressed agricultural applications (2022), EnMAP data access (2023), and soil applications (2024). Further applied topics, such as forestry, geology, and coastal waters, will expand the resources in the near future. HYPERedu and its MOOCs are well received and currently spearhead education in hyperspectral remote sensing, and further enhancements are being considered to stay aligned with evolving educational trends. Through this contribution, we further aim to actively gather insights and foster discussion on future directions for expanding content and integrating innovative learning technologies.
Bridging scientific development with education offers a valuable opportunity to ensure broader access and optimize the use of modern Earth Observation technologies. In this context, the EnMAP Science Segment, through its HYPERedu initiative, underscores the importance of education in facilitating the widespread adoption of hyperspectral remote sensing, making this discipline accessible to a diverse user community.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: Edusat Challenge: Empowering Classrooms with Satellite Earth Observation

Authors: Rosa Olivella, Laura Olivas
Affiliations: SIGTE-University of Girona
Within the framework of the Edusat project (https://www.edu-sat.com/?lang=en), designed as an innovative educational resource for learning about Earth observation through satellite imagery, the Edusat Challenge emerges as a transformative initiative. Its goal is to empower teachers and students to integrate these resources effectively into primary, secondary, and high school education, fostering engagement with global environmental phenomena and solutions. THE CONTEXT The Edusat project is grounded in the belief that understanding the Earth's dynamic processes is essential for addressing the challenges of global environmental change. By familiarizing students with the study of natural and human-induced phenomena, Edusat bridges the gap between theoretical knowledge and real-world application. The increasing availability of daily satellite imagery from around the world enables the identification and monitoring of critical phenomena such as forest fires, floods, glacier melting, deforestation, and urban expansion. These processes significantly impact the Earth's surface, providing invaluable learning opportunities. Edusat focuses on utilizing freely available satellite imagery, primarily from the European Space Agency's (ESA) Copernicus program and its Sentinel satellites, while also allowing for the integration of data from NASA’s Landsat program. With the knowledge and skills gained through Edusat, participants can envision objectives for new space missions. Edusat’s Key Objectives: 1. Promoting STEAM Education: Inspire curiosity and interest in Science, Technology, Engineering, Arts, and Mathematics (STEAM) among students by providing hands-on experiences with cutting-edge tools. 2. Raising Environmental Awareness: Equip students with the ability to identify and analyze the effects of global environmental changes, fostering responsible global citizenship. 3. 
Fostering Creativity and Collaboration: Encourage critical thinking, problem-solving, and teamwork among teachers and students through interdisciplinary projects. 4. Empowering Educators: Provide teachers with the training, resources, and confidence needed to incorporate Earth observation into their curricula, making advanced concepts accessible at all educational stages. THE EDUSAT CHALLENGE Building on the success and potential of the Edusat project, the Edusat Challenge has been developed to extend its reach and impact. This initiative focuses on training secondary and upper primary school teachers to bring satellite-based Earth Observation techniques into the classroom, with an emphasis on climate change and environmental impacts. A MULTI-DIMENSIONAL LEARNING EXPERIENCE The Edusat Challenge integrates several key elements: - Blended Learning: Teachers engage in a 40-hour training program combining online modules and in-person workshops. - Mentorship Support: Teachers receive continuous guidance to design and implement impactful activities tailored to their students' needs. - Interdisciplinary Focus: Activities bridge science, geography, technology, and environmental studies, providing a holistic learning experience. The ultimate aim is to enhance students' understanding of global issues while fostering a passion for scientific inquiry—particularly among underrepresented groups such as girls and students from disadvantaged backgrounds. PROGRAM TIMELINE AND PHASES The pilot program spans from November 2024 to May 2025, with a structured approach designed to ensure meaningful engagement and measurable outcomes. Phase 1: Presentation and Registration. Teachers are introduced to the program, its goals, and the resources available. Interested participants register to join the initiative. Phase 2: Training. Participants receive specialized training focused on the use of satellite imagery to study Earth observation and global environmental change. 
This training is designed to cover: - Understanding the principles of remote sensing. Participants will gain a foundational understanding of how remote sensing works, including the science behind satellite data and its applications. - Exploring case studies in Earth observation. Practical examples will demonstrate how remote sensing can be applied to monitor and analyze Earth systems, such as land use changes, deforestation, urban growth, and climate impacts. - Learning to analyze natural phenomena using satellite imagery. Participants will use tools like Copernicus Browser to explore and interpret satellite data, empowering them to investigate natural events such as floods, wildfires, and vegetation cycles. - Documenting and communicating findings through storytelling tools. They will learn to create engaging and informative narratives using storytelling platforms like ArcGIS StoryMaps, enabling them to share their findings effectively with diverse audiences. - Adapting activities for classroom implementation. Guidance will be provided on how to translate these skills into classroom activities, equipping educators to integrate Earth observation and environmental monitoring into their teaching. This phase equips educators with the technical and pedagogical tools necessary to bring these concepts into the classroom, fostering a deeper understanding of environmental issues and satellite technology among their students. Phase 3: Challenge Launch and Mentoring. Teachers lead hands-on activities in their classrooms, with mentorship provided to address challenges and optimize outcomes. Students explore environmental phenomena, analyze satellite data, and draw meaningful conclusions. Phase 4: Results Presentation and Feedback. In the final phase, participating teams present their findings. This collaborative session includes constructive feedback from peers and mentors, fostering a culture of shared learning and continuous improvement. 
PILOT IMPLEMENTATION The pilot phase will engage 34 teachers from 27 schools across Catalonia, representing a diverse range of educational contexts. - Training Phase: Training sessions will be held in November and December 2024, ensuring that all participants are equipped with the necessary skills to carry out activities in their classrooms. - Mentorship and Classroom Activities: From January to March 2025, teachers will implement activities in the classroom, supported by mentors to address challenges and ensure successful integration of the program. - Final Presentation and Evaluation: The program will culminate in May 2025 with a presentation of the results. This event will celebrate the achievements of students and teachers while providing a platform for exchanging insights and refining approaches for future iterations. COLLABORATION AND LONG-TERM VISION The final session will not only celebrate the program's successes but also serve as a collaborative forum where participants share insights, challenges, and innovative approaches. This exchange will provide a foundation for continuous improvement and scalability in future editions of the Edusat Challenge. LEADERSHIP AND PARTNERS The Edusat Challenge is spearheaded by the Geographic Information Systems and Remote Sensing Service (SIGTE) of the University of Girona, under the umbrella of the NewSpace Educational Program promoted by the Government of Catalonia (Secretariat for Digital Policies of the Department of Business and Labor and the STEAMcat program of the Department of Education and Professional Training), in collaboration with the Institute of Space Studies of Catalonia (IEEC), within the NewSpace Strategy of Catalonia.
The NewSpace Strategy of Catalonia is coordinated by the Government of Catalonia (Secretariat for Digital Policies) in collaboration with the IEEC, the i2cat Foundation, and the Cartographic and Geological Institute of Catalonia, with the objective of creating a pole of innovation in the new economy of space, to bring economic growth to the country and to improve the citizens’ lives thanks to the solutions and benefits that this sector provides. By fostering innovation and collaboration, the Edusat Challenge aims to make advanced Earth observation tools accessible to classrooms, shaping a new generation of environmentally aware and scientifically skilled global citizens.
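As a rough illustration of the kind of satellite-image analysis the Edusat training covers (e.g. investigating floods in the Copernicus Browser), the sketch below computes a simple water index from two synthetic band arrays. The band names follow the Sentinel-2 convention (B03 = green, B08 = near-infrared), but the data and thresholds here are illustrative assumptions, not Edusat course material:

```python
import numpy as np

# Synthetic reflectance arrays standing in for Sentinel-2 bands
# (B03 = green, B08 = near-infrared) over a 50x50 scene.
rng = np.random.default_rng(0)
green = rng.uniform(0.02, 0.4, size=(50, 50))
nir = rng.uniform(0.02, 0.5, size=(50, 50))

# NDWI (McFeeters, 1996): positive values tend to indicate open water,
# which makes it a common classroom entry point for flood mapping.
ndwi = (green - nir) / (green + nir)
water_mask = ndwi > 0.0

print(f"{water_mask.mean():.1%} of the scene classified as water")
```

The same index can be evaluated interactively in the Copernicus Browser via a custom script, which keeps the classroom workflow code-free while resting on the identical formula.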
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E1)

Session: C.03.07 The Copernicus Sentinel Expansion missions development: status and challenges - PART 1

The status of development of ESA missions will be outlined.
In four sessions of 1h30 each (equivalent to a full day), participants will be offered a unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and the status of activities related to the mission developments will be presented together with industrial and science partners.

Presentations and speakers:


CO2 Monitoring Mission Overview


  • Valerie Fernandez
  • Yannig Durand

CO2 Monitoring Mission: The Ground Segment architecture


  • Angela Birtwhistle
  • Daniela Taubert
  • Cosimo Putignano

CHIME Mission and Project Status


  • Jens Nieke
  • Marco Celesti

CHIME: Satellite, Instrument and Performances


  • Laurent Despoisse
  • Heidrun Weber

LSTM mission and project status


  • Ana Bolea
  • Miguel Such
  • Benjamin Koetz

LSTM L1 and L2 products and Algorithms


  • Itziar Barat
  • Steffen Dransfeld
  • Ignacio Fernandez Nunez
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Session: A.07.08 Global and regional water cycle in the integrated human-Earth system, estimation of hydrological variables and hyper-resolution modelling - PART 1

Water in all three phases and its cycling through the Earth system are essential to weather, climate and climate change, and to life itself. The water cycle is closely coupled with the energy and carbon cycles. Over continents, the water cycle includes precipitation (related to clouds, aerosols, and atmospheric dynamics), water vapor divergence and change of column water vapor in the atmosphere, land surface evapotranspiration, terrestrial water storage change (related to snowpack, surface and ground water, and soil moisture change), and river and groundwater discharge (which is linked to ocean salinity near the river mouth). Furthermore, the terrestrial water cycle is directly affected by human activities: land cover and land use change; agricultural, industrial, and municipal consumption of water; and construction of reservoirs, canals, and dams.

The EO for hydrology community is working towards datasets describing hydrological variables at steadily increasing quality and spatial and temporal resolution. In parallel, water cycle and hydrological modellers are advancing towards “hyper-resolution” models, approaching 1 km resolution or even finer. In some cases such efforts are not just taking place in parallel but in collaboration. This session aims at presenting advances from each of the communities as well as demonstrating and promoting collaboration between the two.

Presentations are welcome that focus on at least one of the following areas:
- The global and regional water cycle and its coupling with the energy and carbon cycles in the integrated human-Earth system based on satellite remote sensing, supplemented by ground-based and airborne measurements as well as global and regional modeling
- New advances on the estimation of hydrological variables, e.g. evapo(transpi)ration, precipitation (note that there is another, dedicated session for soil moisture);
- Suitability of different EO-derived datasets to be used in hydrological models at different scales;
- Capacity of different models to take benefit from EO-derived datasets;
- Requirements on EO-derived datasets to be useful for the modelling community (e.g. related to spatial or temporal resolution, quality or uncertainty information, independence or consistency of the EO-derived datasets, …);
- Downscaling techniques;
- Potential of data from future EO missions and of newest modelling and AI approaches (including hybrid approaches) to improve the characterisation and prediction of the water cycle.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Implementing the three-source energy balance model with Copernicus-based inputs for improved evapotranspiration modeling over savanna ecosystems

Authors: Vicente Burchard-Levine, Héctor Nieto, M.Pilar Martín, Benjamin Mary, M.Dolore Raya-Sereno, Miguel Herrezuelo, Arnaud Carrara
Affiliations: Institute of Agricultural Sciences (ICA), Spanish National Research Council (CSIC), Environmental Remote Sensing and Spectroscopy Laboratory (SpecLab), Spanish National Research Council (CSIC), Fundación Centro de Estudios Ambientales del Mediterráneo (CEAM)
Accurate evapotranspiration (ET) estimates are key to better understanding ecosystem function, managing terrestrial water resources and providing early indicators for drought events. Recent ESA projects such as SenET and ET4FAO have made great strides in improving operational ET modeling by merging shortwave and thermal infrared (TIR) imagery from Sentinel-2 and 3. Given the current lack of an operational high-spatial-resolution TIR sensor (<100 m) with a frequent revisit time (<1 week), SenET proposed a data mining approach to sharpen Sentinel-3 LST (1 km) to 20 m using Sentinel-2’s spectral bands. These ET algorithms implement the TIR-based two-source energy balance (TSEB) model, which has been demonstrated to provide robust ET retrievals at reasonable accuracy across a range of ecosystems and conditions. However, savannas or tree-grass ecosystems (TGEs), composed of a clumped and open tree canopy superimposed on an herbaceous understory, have inherent structural and phenological complexities, which have been shown to contribute to increased model uncertainties when applying conventional remote sensing approaches. In light of this, the three-source energy balance (3SEB) model, an adaptation of TSEB, was proposed to better characterize the multiple vegetation layers present in TGEs. 3SEB adds an additional vegetation source to TSEB, allowing it to directly incorporate the distinct structural and phenological traits of the two co-existing plant functional types. 3SEB was previously evaluated using tower-based inputs across a range of flux sites along with inputs stemming from geostationary satellites (i.e. 0.05° MSG-SEVIRI). The main objective of this study was to assess the performance of 3SEB when forced at medium to high spatial resolution (20-300 m) using the Sentinel constellation, as similarly applied within the SenET/ET4FAO context.
Model performance was evaluated at both the sharpened 20 m and 300 m spatial scales using Sentinel imagery, along with Landsat images (100 m), over a range of TGE eddy-covariance (EC) sites acquired from FLUXNET, ICOS, AmeriFlux and OzFlux. Preliminary results at Spanish TGE sites showed robust estimates from 3SEB, with root-mean-square errors (RMSEs) of modelled sensible and latent heat fluxes ranging between 70-80 W m-2 and no significant differences in accuracy when using sharpened Sentinel or Landsat inputs. These results highlight the potential to apply 3SEB operationally at high spatial resolution with Copernicus data to improve our understanding of these complex but highly valuable ecosystems, especially with regard to the effects of global change and increased drought frequencies.
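Common to the TSEB/3SEB family of models described above is the surface energy balance, in which latent heat can be obtained as the residual of the other flux terms. A minimal sketch of that closure, with illustrative midday values (this is the general principle, not the authors' implementation):

```python
def latent_heat_residual(rn, g, h):
    """Latent heat flux (W m-2) as the residual of the surface energy
    balance: Rn = G + H + LE  =>  LE = Rn - G - H,
    where Rn is net radiation, G the soil heat flux, and H sensible heat."""
    return rn - g - h

# Illustrative midday values over a savanna canopy (W m-2)
le = latent_heat_residual(rn=550.0, g=60.0, h=210.0)
print(le)  # 280.0
```

In the two- and three-source schemes the same balance is solved separately for each soil/vegetation source before the fluxes are aggregated.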
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Advancing large-scale, high-resolution modelling of the water cycle

Authors: Peter Salamon, Stefania Grimaldi, Cinzia Mazzetti, Christel Prudhomme, Carlo Russo, Ervin Zsoter, Jesus Casado Rodriguez, Corentin Carton de Wiart, Juliana Disperati, Nikolaos Mastrantonas, Mohamed Azhar, Goncalo Gomes, Christoph Schweim, Tim Sperzel, Carina-Denise Lemke, Markus Ziese, Alejandro Serratosa, Tomas Jacobson, Francesca Moschini, Berny Bisselink, Davide Bavera, Andrea Ficchì, Marco Radke-Fretz, Antonio
Affiliations: European Commission Joint Research Center
Hydrological models are essential tools for assessing the water cycle. They provide relevant information to decision makers for floods, droughts, and water resource management and enable the analysis of scenarios of how a hydrological system might behave under varying natural and anthropogenic constraints. One example is the open-source hydrological model OS-LISFLOOD, which is used to generate flood forecasts and drought indicators for the European and Global Flood Awareness Systems (EFAS & GloFAS) as well as the European and Global Drought Observatories (EDO & GDO) of the Copernicus Emergency Management Service (CEMS). OS-LISFLOOD is a distributed, physically based rainfall-runoff model able to represent all the main hydrological processes. It requires as input meteorological forcings as well as surface fields encompassing (i) catchment morphology and river networks, (ii) land use, (iii) vegetation cover type and properties, (iv) soil properties, (v) lake and reservoir information, and (vi) water demand. Like other hydrological models, it requires calibration of a set of parameters against river discharge observations to adjust model behavior to specific climatic and physiographic conditions. Being used in an operational service, the hydrological model and its European and global model domain set-ups benefit from regular upgrades, with major changes in the hydrological modelling chain introduced as ‘version releases’. In its current operational version, the global model set-up (GloFAS v4.x) uses a spatial resolution of 3 arcminutes (~5.4 km) and a daily time step, whereas the European model set-up (EFAS v5.x) uses a spatial resolution of 1 arcminute (~1.8 km) and a 6-hourly time step. Both set-ups have been calibrated using discharge observations at gauging stations (1995 in GloFAS and 1903 in EFAS).
In ungauged catchments where no discharge observations were available, model parameters were regionalized using climatic similarity and geographical proximity as criteria. Both set-ups are used to provide a hydrological reanalysis as well as hydrological predictions spanning different time ranges, from short-term and medium-range predictions to monthly and seasonal outlooks. A Wiki page is available for users providing detailed information about each version release, including model set-up and skill performance (EFAS – GloFAS). In addition, extensive model documentation, a user guide, and test catchments for OS-LISFLOOD are available on the OS-LISFLOOD webpage. A specific feature of the European and global model set-ups of OS-LISFLOOD is that not only the model and associated tools for pre-/post-processing, calibration, etc. are open-source, but the required input and calibrated parameter maps are also freely accessible. This allows users to benefit from the latest developments and innovations and, more importantly, enables a wider community to contribute to further extending and improving the model and its set-up. In this presentation we describe the next major evolution of OS-LISFLOOD and its set-up for the European (EFAS v6.x) and global domain (GloFAS v5.x). The main foreseen changes can be grouped into three categories: 1) model input; 2) model improvements; and 3) calibration and regionalization. The main changes in the model input concern the meteorological forcings. For the European domain, the meteorological forcings benefit from an increased number of meteorological observations, improved quality control, and a modified interpolation method. In the global model domain, enhancements include a correction of spurious rainfall and a modified downscaling of ERA5 meteorological variables.
Furthermore, changes in the surface fields related to soil properties, lakes and reservoirs, as well as water demand for anthropogenic use, integrating the latest available datasets, have been included. Hydrological model advancements focus on river routing, in particular for mildly sloping rivers, and a modified reservoir routine. Furthermore, the model state initialization has been enhanced and a new modelling routine called transmission loss, which accounts for transpiration by macrophytes and riparian vegetation as well as groundwater recharge through river channels, has been added. For model calibration and regionalization, it is foreseen to increase the number of calibration stations, improve the overall performance of the objective function along the whole flow duration curve, add more hydrological performance statistics (e.g. from the Budyko framework), and utilize the power of deep learning in the calibration of process-based hydrological models. It is expected that all these changes together will contribute to a further, significant improvement in modelling the water cycle using OS-LISFLOOD at the European and global scale. In line with the current version, the improved model and its new set-up will be freely available. The hydrological model reanalysis and predictions of the upgraded set-ups will be made available on the CEMS Early Warning Data Store. Their release as part of the flood and drought prediction and monitoring systems (EFAS, GloFAS, EDO, GDO) of CEMS is foreseen during 2025.
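The calibration described above optimizes model parameters against observed discharge via an objective function. The abstract does not name the function used, but the Kling-Gupta efficiency (KGE) is a common choice in large-scale hydrological calibration; the following is an illustrative sketch of that metric, not the CEMS implementation:

```python
import math

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    where r is the linear correlation between simulated and observed
    discharge, alpha the ratio of their standard deviations, and beta the
    ratio of their means (bias). KGE = 1 indicates a perfect fit."""
    n = len(obs)
    mean_s, mean_o = sum(sim) / n, sum(obs) / n
    std_s = math.sqrt(sum((s - mean_s) ** 2 for s in sim) / n)
    std_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / n)
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(sim, obs)) / n
    r = cov / (std_s * std_o)
    alpha, beta = std_s / std_o, mean_s / mean_o
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

print(kge([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # effectively 1.0 (perfect fit)
```

A calibration loop would then search the parameter space for the parameter set maximizing such a score at the gauging stations.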
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: A new approach to retrieve evapotranspiration of crops from solar-induced fluorescence and hyperspectral reflectance data

Authors: Dr Bastian Siegmann, Prikaziuk Egor, Oscar Hartogensis, Mary Rose Mangan, Jim Buffat, Joaquim Bellvert, Julie Krämer, Juan Quiros Vargas, Juliane Bendig, Patrick Rademske, Uwe Rascher, Christiaan van der Tol
Affiliations: Forschungszentrum Jülich, University of Twente, Wageningen University and Research, Institute of Agrifood Research and Technology
Keywords: Latent heat flux, solar-induced fluorescence, evapotranspiration, hyperspectral remote sensing, airborne data, SCOPE, machine-learning regression, emulation
Challenge
The increase in extreme weather events has a strong impact on the exchange of water and energy in agricultural ecosystems. Evapotranspiration (ET), as a key hydrological variable, is an important component of the energy, water and carbon cycles and provides important information for predicting and monitoring drought events. In recent decades, methods for determining ET have been combined with various types of Earth observation data to create spatial ET estimates. While most of the available approaches are based on optical reflectance and thermal remote sensing (RS) data, the use of solar-induced fluorescence (SIF) as an additional data source for ET estimation is still an under-explored field. SIF is directly emitted from the core of the photosynthetic machinery of plants and can therefore be used as a proxy of photosynthesis (PS). Since transpiration and PS are coupled processes, SIF remote sensing data provide important information for improving ET estimates. With the launch of ESA’s Earth Explorer satellite mission FLEX in 2026, which will provide high-quality SIF data from space, now is the right time to further investigate how RS SIF data can contribute to ET estimation. In this contribution, we present a new approach that uses a combined radiative transfer, photosynthesis and energy fluxes model to determine ET from airborne SIF and reflectance data. The results are compared to ET estimates derived from eddy-covariance (EC) data and corresponding ‘Sentinels for Evapotranspiration’ (Sen-ET) products derived from Sentinel-2 and 3 data.
Methodology
A time-series consisting of seven airborne SIF and reflectance data sets covering an agricultural area in the northeast of Spain was recorded by the FLEX airborne demonstrator HyPlant during the LIAISE project field campaign between 15 and 27 July 2021. The Soil-Canopy-Observation of Photosynthesis and Energy fluxes (SCOPE) model was used to derive ET, expressed as latent heat, from the airborne SIF and reflectance data. First, 3,000 reflectance simulations were generated with the combined leaf and canopy radiative transfer model (RTM) implemented as one module in SCOPE. Subsequently, a hybrid inversion scheme was applied that combines the SCOPE simulations with support vector regression (SVR) to retrieve biophysical leaf and canopy parameters from the airborne reflectance data. The inverted parameters for each pixel and meteorological data from a weather station were then used to run the full SCOPE model in forward mode for a single alfalfa field of 54,000 pixels equipped with an EC station to produce spatial estimates of latent heat (LESCOPE). For each pixel, numerous simulations were made using different input parameter combinations in the leaf biochemistry module of SCOPE to estimate LE. In the end, LE for each pixel was selected from the simulation for which the corresponding simulated gross primary productivity (GPP) and SIF values showed the best fits with measured GPP from the EC station and SIF retrieved from the HyPlant airborne image data, respectively. To determine the best fit for each pixel we used a cost function based on the Levenberg-Marquardt algorithm. Since running SCOPE on a per-pixel basis is very time-consuming, in a second step we built a SCOPE emulator using a Gaussian process regression (GPR) model trained with 10,000 SCOPE simulations to predict LE of all alfalfa fields covered by the airborne image data.
In that respect, an emulator can be regarded as a statistical learning model that mimics the input-output relationships of SCOPE. Once the emulator was trained, it was applied to the airborne image data, which allowed us to produce LE maps of all alfalfa fields within the study site (LEEMU) in less than ten minutes. Both LE estimates (LESCOPE, LEEMU) were finally compared to LE derived from instantaneous flux measurements of the EC station located in the investigated alfalfa field at the time of the aircraft overflights (LEEC). Furthermore, we converted and up-scaled the instantaneous airborne LE maps to daily ET maps and compared them to the corresponding Sen-ET products.
Results
The inversion of the SCOPE model in the first step led to a good match between the spatial estimates of leaf area index (LAI) and leaf chlorophyll content (LCC) and field measurements of the same parameters collected from the investigated alfalfa field (LAI: R2 = 0.86, RMSE = 0.62 m2 m-2; LCC: R2 = 0.69, RMSE = 11.37 µg cm-2). Furthermore, the comparison of the averaged simulated and measured reflectance of the field showed a high level of agreement (RMSE = 0.0222). The two spatial LE predictions from HyPlant, compared to the LE reference data derived from the EC station, also showed good agreement. The LESCOPE model provides a high R2 (0.87) and a relatively low RMSE (67.58 W m-2), but the derived latent heat fluxes from the airborne data, especially for the later observations (22-27 July), are overestimated compared to the LEEC estimates. Although the LEEMU model is characterized by a slightly lower R2 (0.84), the RMSE is lower (33.76 W m-2), and the slope of the regression model is closer to 1 compared to the LESCOPE model. The comparison of the converted and up-scaled LESCOPE fluxes to daily ETSCOPE values with the corresponding Sen-ET product also resulted in a moderate level of agreement (R2 = 0.74, RMSE = 0.67 mm day-1).
Outlook
The results of this study illustrate that additional RS-based SIF information can complement conventional RS data to improve spatial estimates of LE/ET from SIF-measuring satellites in the near future. This is especially important for the early detection of drought events and the development of adapted irrigation strategies in agriculture. In future research, the presented approach will be transferred to other crops and different climatic regions to investigate the full potential of the developed emulator. In addition, the development of a SIF-based LE/ET product could be of great interest for the FLEX satellite mission, as ESA will only deliver products up to level 2 and encourages the scientific community to develop innovative level 3 and 4 products. Although further research is needed, we are convinced that the presented approach has the potential to become such an innovative level 3 product derived from FLEX satellite data.
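The emulation step described in the methodology replaces expensive per-pixel SCOPE runs with a trained Gaussian process regressor. As an illustration of the idea only (the study's actual emulator, its inputs, kernel and hyperparameters are not specified here), a NumPy-only sketch of a GP posterior mean with an RBF kernel on a synthetic one-dimensional response:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential (RBF) kernel between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior_mean(x_train, y_train, x_query, length_scale=1.0, noise=1e-6):
    """GP regression posterior mean: K*^T (K + noise*I)^-1 y."""
    k = rbf_kernel(x_train, x_train, length_scale)
    k_star = rbf_kernel(x_train, x_query, length_scale)
    alpha = np.linalg.solve(k + noise * np.eye(len(x_train)), y_train)
    return k_star.T @ alpha

# Toy "emulator": learn a smooth latent-heat-like response to one input
x = np.linspace(0.0, 4.0, 20).reshape(-1, 1)
y = 100.0 + 80.0 * np.sin(x[:, 0])   # synthetic, smooth response curve
x_new = np.array([[1.5]])
print(gp_posterior_mean(x, y, x_new))  # posterior mean near the true curve
```

Once trained on a few thousand simulator runs, such a surrogate evaluates in milliseconds per pixel, which is what makes field-scale LE mapping in minutes feasible.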
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Diurnal Asymmetry Analysis Combining Energy-Water Balance Models and Geostationary Land Surface Temperature Data

Authors: Pedro Torralbo, Christian Bossung, Philippe Pinheiro, Kaniska Mallick, Chiara Corbari
Affiliations: DICA, Politecnico di Milano - POLIMI, Remote Sensing & Natural Resources Modeling, Department ERIN, Luxembourg Institute of Science and Technology, Geocomputing Research, Department ERIN, Luxembourg Institute of Science and Technology
Accurate evapotranspiration (ET) data are essential for global water management, and ET has recently been recognized as an Essential Climate Variable (ECV). To monitor ET, satellite-based operational models are commonly used, relying on instantaneous land surface temperature (LST) retrievals, which are limited to clear-sky days. Moreover, these models rely solely on daily data and fail to capture the full dynamics of ET throughout the day. This incomplete representation is directly linked to the fact that ET dynamics often exhibit an asymmetry between radiation and evaporation, a phenomenon more pronounced in arid regions, where it is influenced by a complex interplay of environmental factors such as air temperature, vapor pressure deficit and net radiation, and biophysical variables such as vegetation biophysical conductances. Interpreting this asymmetry requires models capable of representing ET dynamics without dependence on LST data and cloud conditions. This study presents the methodology and preliminary results of the ESA-funded UNITE project, which aims to address the limitations of estimating evapotranspiration (ET) and land surface temperature (LST) under cloudy conditions. The proposed approach integrates physical water-energy balance modeling at daily and hourly scales with geostationary satellite LST data from MSG, improving the interpretation of energy balances and daily dynamics. The study integrates data across an aridity gradient and ecological transect from northern Europe to southern Africa. Two models were applied: the analytical Surface Temperature Initiated Closure (STIC) model (Mallick et al., 2018, 2024), which is based on the Penman-Monteith and Shuttleworth-Wallace formulations, and the prognostic FEST-EWB energy-water-balance (EWB) model. The FEST-EWB model continuously simulates soil moisture and ET over time and space, resolving LST by ensuring the closure of the energy-water balance equations (Corbari et al., 2011).
The study analyzed the differences and similarities in ET estimates from both models across regions at eddy covariance sites with varying aridity indices during the 2019-2023 study period, aiming to validate model performance under different climate conditions. The results not only address the challenges of estimating cloudy-sky ET and LST but also offer relevant insights into the variations in diurnal hysteresis between evaporation and plant responses to daily water stress across diverse climates. These findings, spanning humid to arid regions, will contribute to the development of advanced ET products for improved agricultural water management.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: The Global Atmospheric River Network: A Complex Network Approach to Global Moisture Transport Dynamics

Authors: Dr. Tobias Braun, Sara M. Vallejo-Bernal, Prof. Sebastian Sippel, Prof. Miguel D. Mahecha
Affiliations: University Leipzig, Potsdam Institute for Climate Impact Research
As the global water cycle intensifies, the frequency and severity of hydrological extremes, such as heavy precipitation events, are increasing. This poses profound challenges to terrestrial ecosystems and human systems. Atmospheric rivers (ARs) – narrow corridors of enhanced vapor transport in the lower troposphere – are a key driver of these extremes. In extratropical regions, ARs are the main moisture transport mechanism, accounting for more than 90% of the water vapor transported towards the poles. While previous research has significantly advanced our understanding of their role in the global water cycle, the transport patterns of ARs at the global scale, as well as their land-surface impacts, remain underexplored. In this talk, I will present current progress on the ‘Living Planet Fellowship’-funded ARNETLAB project, which leverages innovative methods from complexity science to disentangle the interplay between atmospheric dynamics and land surface processes. In analogy to terrestrial river networks, the pathways that ARs follow through the Earth’s atmosphere can be effectively represented by a transport network. Generally, the paradigm of complex networks encodes interactions between the units of a system through interlinked nodes. Recent applications illustrate that complex networks have provided novel insights into climate teleconnection patterns, synchronization of extremes and vegetation-atmosphere feedbacks. We draw on the vast array of existing methods from complex network theory to reveal the “global atmospheric river network”. It is defined on a hexagonal grid to avoid distortions due to the Earth’s spherical geometry. Multiple AR catalogs can be integrated seamlessly. Using effective measures of node and edge centrality, we reconstruct the global transport infrastructure of ARs, including prominent pathways, basins, and scale-dependent regional clusters of AR dynamics.
To assess the significance of our findings, we simulate ensembles of random walkers diffusing along the AR network’s edges. This approach allows us to create a hierarchy of effective null models and to define network measures that are tailored to detecting more intricate regions that are vital for AR transport. Our preliminary findings highlight regions where AR dynamics could be less predictable, showcase how climate oscillations control AR network topology, and unveil how the AR network is evolving in a changing climate. They underscore the potential of complexity science to advance our understanding of ARs as critical components of the integrated human-Earth system. In a next step, the global atmospheric river network formalism enables us to study AR-driven moisture and heat transport networks. To this end, we systematically aggregate AR moisture and heat transport budgets along their most frequented routes. This holds particular relevance for ARs reaching the poles: here, the triggered precipitation as well as the released sensible and latent heat fluxes can exacerbate glacier melt and slow down Arctic sea ice recovery. It furthermore informs on which ARs feed hydrological and heat extremes. I will close my talk with these first advances towards AR-driven moisture and heat transport networks. These will finally help us to link the developed network framework to Earth’s land ecosystem variables, given by the full range of remote-sensing derived suite of ESA land-surface data as they are curated in the Earth System Data Lab. Overall, this talk situates ARs within the broader context of global water cycle dynamics and highlights their coupling with terrestrial and energy cycles, offering novel perspectives on the interplay between atmospheric dynamics and land surface processes.
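The random-walker ensembles used above to build null models can be sketched on a toy directed network, with visit counts as a crude proxy for random-walk centrality (the topology, walker counts and metric below are illustrative, not the ARNETLAB implementation):

```python
import random
from collections import Counter

def random_walk_visits(edges, start, steps, n_walkers, seed=0):
    """Count node visits by n_walkers random walkers, each taking `steps`
    uniform-random moves along outgoing edges of a directed network.
    Frequently visited nodes are central to the transport structure."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(n_walkers):
        node = start
        for _ in range(steps):
            node = rng.choice(edges[node])
            visits[node] += 1
    return visits

# Toy "transport network": hub node B collects most of the traffic
edges = {"A": ["B"], "B": ["C", "D"], "C": ["B"], "D": ["B"]}
visits = random_walk_visits(edges, start="A", steps=50, n_walkers=200)
print(visits.most_common(1)[0][0])  # "B": the hub is the most-visited node
```

Comparing observed centralities against ensembles of such walkers (rather than a single network statistic) is what allows regions vital for AR transport to be separated from chance concentrations.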
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Session: A.02.03 EO for Agriculture Under Pressure - PART 5

The human impact on the biosphere is steadily increasing, and agriculture is one of the main human activities contributing to this. Agricultural crops, managed grasslands and livestock are all part of the biosphere, and our understanding of their dynamics and their impacts on other parts of the biosphere, as well as on the wider environment and the climate, is insufficient.
On the other hand, today’s Agriculture is Under Pressure to produce more food to meet the needs of a growing population with changing diets – and this despite a changing climate with more extreme weather. It is required to make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and negative impact on the environment, and to deliver accessible, affordable and healthy food.
Proposals are welcome for activities aiming to increase our understanding of agricultural dynamics, to develop and implement solutions to the above-mentioned challenges, or to support the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross-site research and benchmarking, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM), are welcome.

The session will hence cover topics such as:
- Impact on climate and environment
- Crop stressors and climate adaptation
- Food security and Sustainable Agricultural Systems
- New technologies and infrastructure
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: Sen4Stat for leveraging the use of Earth Observation data for improved agricultural statistics: outcomes and lessons learned from 2 years of demonstration across the world

Authors: Sophie Bontemps, Pierre Defourny, Boris Norgaard, Cosmin Cara, Pierre Houdmont, Laurentiu Nicola, Cosmin Udroiu, Zoltan Szantoi
Affiliations: UCLouvain-Geomatics, CS GROUP - ROMANIA, ESA-ESRIN
Over the last decade, food security has become one of the world’s greatest challenges. By 2050, the world’s population will be 34 percent higher than today, and this massive increase will mainly affect developing countries and increase food demand. Reliable, robust and timely information on food production, agricultural practices and natural resources is required. For years, the potential of satellite Earth Observation (EO) for agricultural statistics has been recognized, but it has not yet led to the adoption of this technology by National Statistical Offices (NSOs). The open-source Sentinels for Agricultural Statistics (Sen4Stat) toolbox aims to facilitate the uptake of Sentinel EO-derived information in the official processes of NSOs, from the early stages of the agricultural surveys through to the production of the statistics. It automatically ingests and processes Sentinel-1 and Sentinel-2 time series in a seamless way for operational crop mapping and yield modelling, using ground data provided by national statistical surveys. It then integrates these EO products with the survey dataset to improve the statistics. Different types of improvements are targeted by the system: (i) reduction of the amplitude of the estimates’ confidence interval, (ii) disaggregation of the representativity level to smaller administrative units, (iii) provision of timely crop area and yield estimators, (iv) optimization of sampling design by leveraging maps to build or update sampling master frames. The system has been tested and demonstrated in various countries around the world, thus addressing a wide diversity of both cropping systems and agricultural data collection protocols. In Spain, in-situ data come from the ESYRCE database, which is an integrated list and area frame survey, including square segments (700m - 250m) divided into agricultural plots. A national crop type map was generated from a Sentinel-2 dataset using a random forest algorithm.
F1 scores of the main crop type classes were most often higher than 0.8. These maps were then coupled with the ESYRCE crop data and allowed a significant reduction in the uncertainty of the crop acreage estimates. As an example, the barley acreage estimate based on the ESYRCE survey only in the Castilla y León region is 980,081 hectares, with a 95% confidence interval of +/- 56,644 hectares. When using both the survey and the EO map, the barley estimate is of the same order of magnitude (923,026 hectares) but with a significantly smaller confidence interval (+/- 23,663 hectares). The coupling with EO data also enabled the spatial disaggregation of the acreage statistics to the municipality level, which was not possible using only the ESYRCE data due to the lack of samples for obtaining accurate estimates. Similarly, we were able to demonstrate that estimating yield on a larger sample of data (i.e. with EO data) can improve the confidence in aggregate statistics by virtually increasing the number of data points collected in the survey. Finally, a map of irrigation was also produced at national scale in order to support an update of the sampling master frame by the NSO. In Senegal, the Agricultural Annual Survey (AAS) is a list frame survey, and parcels are identified by geolocalized points. We worked for two successive years with the NSO to make the survey protocol more compatible with EO data, for instance by registering parcel boundaries. These adjustments, implemented over a regional extent, allowed generating a crop type map with good accuracy for the main crops and deriving acreage estimates with reduced error. The Sen4Stat system was also demonstrated in the Sindh province, in Pakistan, with the support of the World Bank (WB). The focus was on irrigated wheat during the winter season and on the main summer crops. The demonstration started with the design of an area sampling frame and the implementation of the survey, including quality control of the collected data.
Seasonal crop type maps were generated and acreage estimates were computed; the same decrease in confidence interval amplitude was observed for both seasons. FAO also supported the uptake of the tool in different countries, mainly in Africa. Depending on the country, the focus was put on the adjustment of the survey protocol or on the statistical estimates. All these demonstrations have confirmed the high potential of EO data for improving statistics. In all countries, the integration of EO data significantly reduced the confidence interval around the estimates. The spatial disaggregation and timeliness gained with EO data were also successfully demonstrated. The demonstrations have also highlighted the importance of having in situ data compatible with EO data; extensive work has been done with NSOs to evaluate their protocols and test adjustments that allow the integration of EO data. Clearly, the Sen4Stat system can meet the requirements for the reliable, robust and timely information needed to strengthen food security. Nevertheless, the adoption of such new technologies by NSOs or other national stakeholders requires a mid-term perspective, so that progress can be made step by step. In that context, the support received from international funders such as FAO, CIMMYT, the World Bank and development banks opens new possibilities for a wide and impactful Sen4Stat uptake.
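The variance reduction obtained by combining the ground survey with the EO map can be illustrated with a standard regression estimator. The sketch below uses purely synthetic numbers (not the ESYRCE values) and a simplified per-segment setup; the point is only that a crop map correlated with the survey shrinks the standard error of the acreage estimate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustration: survey-only vs. survey+EO acreage estimation
# via a regression estimator. All numbers are synthetic.
n = 400                                          # surveyed segments
true_frac = 0.30                                 # mean crop fraction per segment
map_frac = true_frac + rng.normal(0, 0.05, n)    # EO-map fraction (auxiliary)
survey_frac = map_frac + rng.normal(0, 0.03, n)  # ground-surveyed fraction

# Survey-only estimate of the mean crop fraction and its standard error
direct_mean = survey_frac.mean()
direct_se = survey_frac.std(ddof=1) / np.sqrt(n)

# Regression estimator: the EO map is known everywhere, so it can serve
# as an auxiliary variable that absorbs part of the sampling variance
b = np.cov(survey_frac, map_frac)[0, 1] / np.var(map_frac, ddof=1)
map_mean_everywhere = map_frac.mean()            # in practice: full-region mean
reg_mean = direct_mean + b * (map_mean_everywhere - map_frac.mean())
resid = survey_frac - (direct_mean + b * (map_frac - map_frac.mean()))
reg_se = resid.std(ddof=1) / np.sqrt(n)

print(f"survey-only SE: {direct_se:.4f}, survey+EO SE: {reg_se:.4f}")
```

With a well-correlated map, the residual variance (and hence the confidence interval) is much smaller than the survey-only variance, mirroring the reduction reported above.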
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: From JECAM site to the region – Vegetation Conditions analysis using Sentinel-2, Sentinel-1, ECOSTRESS, and Copernicus Land Monitoring Service (CLMS) data for yield prediction through AI Applications

Authors: Prof Katarzyna Dabrowska-Zielinska, MSc Konrad Wróblewski, PhD Ewa Panek-Chwastyk, PhD Maciej Bartold, PhD Sandhi
Affiliations: Institute Of Geodesy And Cartography
Satellite data will be integrated with AI systems to enhance the classification of different vegetation types, monitor crop growth stages, and forecast yields. With frequent satellite passes, farmers receive regular updates on field conditions, enabling them to optimize management practices throughout the growing season. Data integration will include meteorological data and regional variations using ERA5. Climate prediction models, based on 20 years of historical data and utilizing Random Forest techniques, will be developed. This will allow for comprehensive climate change analysis, examining changes in agricultural structures, such as variations in crop fields over time. Additionally, soil moisture dynamics will be analyzed using ECOSTRESS data and a soil moisture model developed at the Institute, integrating Sentinel-1 imagery and crop classification information. This approach demonstrates the potential of Sentinel-2, Sentinel-1, and ECOSTRESS satellite data, combined with Copernicus Land Monitoring Service (CLMS) products like the Leaf Area Index (LAI), to assess biomass variability and predict yields. Different vegetation indices derived from satellite data will be examined. The study takes environmental variables into account to accurately predict yields for different crops. The results will be validated with reference data collected in the field, including biomass measurements and harvest dates. Analysis of seasonal variation in LAI revealed significant differences in crop growth dynamics, allowing key stress periods to be identified and potential yield losses to be estimated. Predictive models, integrated with satellite data, achieved high accuracy, demonstrating the effectiveness of remote monitoring in precision agriculture. 
This study aligns with the goals of the GEOGLAM initiative by showcasing how advanced remote sensing, AI, and environmental modeling enhance global agricultural monitoring, precision farming, and resource management while addressing climate change impacts on agriculture.
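The yield-forecasting idea described above (satellite vegetation indicators combined with ERA5-style weather aggregates) can be sketched with a plain least-squares model standing in for the Random Forest; all feature names, value ranges and coefficients are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the feature integration described above: per-field
# satellite vegetation indicators plus ERA5-style weather aggregates.
# Ordinary least squares is used here in place of the Random Forest.
n = 200
lai_peak = rng.uniform(1.0, 6.0, n)        # peak-of-season LAI (CLMS-like)
rain_mm = rng.uniform(100.0, 400.0, n)     # seasonal rainfall (ERA5-like)
t_mean = rng.uniform(12.0, 25.0, n)        # mean season temperature (degC)

# Synthetic yield (t/ha) with known coefficients plus noise
y = 1.0 + 0.9 * lai_peak + 0.003 * rain_mm - 0.04 * t_mean + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), lai_peak, rain_mm, t_mean])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R2 on synthetic data: {r2:.2f}")
```

A non-parametric learner such as a Random Forest would replace the linear fit when the feature-yield relationship is nonlinear, but the data layout (one feature row per field, yield as target) is the same.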
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: Estimation of Key Crop Traits from spaceborne Hyperspectral imagery with Neural Network Models: investigating the impact of ground and synthetic training datasets

Authors: Lorenzo Parigi, Gabriele Candiani, Luca Miazzo, Mirco Boschetti
Affiliations: Institute for Electromagnetic Sensing of the Environment, National Research Council, Department of Civil, Constructional and Environmental Engineering, Sapienza University of Rome, University of Milan
Nitrogen fertilisation is a crucial element in maintaining crop productivity. However, it also represents a significant source of water pollution and contributes to the formation of greenhouse gases. It is therefore essential to reduce the quantity of nitrogen distributed on agricultural land. The reduction and rationalisation of fertilisation is within the scope of sustainable agriculture, aligned with the aim to "produce more with less" as included in the European Farm-to-Fork strategy. Satellite images are a valuable tool for generating spatially explicit information to assess the variability of crop status within fields. These maps can then be used to inform on crop development and nutritional status as the basis for a more rational approach to determining crop needs and hence the distribution of fertiliser. In recent years, two scientific hyperspectral satellite sensors have been launched (ASI-PRISMA and DLR-EnMAP) as precursors of the new generation of operational missions, Copernicus-CHIME and NASA-SBG, that will provide a wall-to-wall mapping solution for the assessment of crop condition and agricultural productivity. Hyperspectral data are rich in information, allowing for detailed exploration of the spectral signature of the crop and enhancing the quantitative estimation of plant biophysical parameters (biopars). The high number of bands is beneficial for improving estimation; however, there are issues regarding the high collinearity among bands and the non-linear relationship between biopars and the measured remotely sensed spectra. In this framework, the objective of this work is to develop retrieval solutions able to handle such limitations and fully exploit hyperspectral spaceborne data to estimate biopars as a fundamental input to support rational and smart farming applications. In this context, we opted to utilise artificial neural networks (ANNs) for their capability to model complex scenarios and to handle regressor redundancy and noise in the data.
The use of ANNs also presents certain challenges, including the need for a large and diverse training dataset, which for crop trait estimation implies the acquisition of ground data at the same time as the satellite acquisition. To overcome these limitations, the use of synthetic training data generated by a vegetation radiative transfer model (RTM) has been proposed as a feasible solution. The objective of this study is to evaluate the performance of various dataset scenarios for ANN training. To this end, data-driven and hybrid approaches are employed to develop ANN models using real and synthetic data, respectively, testing different model architectures. The data employed in this study were pairs of spectra and biopars, classified according to the origin of the spectra as i) field (GRD), ii) satellite (SAT), and iii) synthetic (HYB) datasets. The biopars of interest are the Leaf Area Index (LAI), Canopy Chlorophyll Content (CCC) and Canopy Nitrogen Content (CNC). The field dataset comprises wheat data, composed of spectra acquired with a hand-held spectrometer and biopars collected on the ground at different locations in Italy over a two-year period (2022-2023), for a total of 200 samples for LAI and 100 samples for CCC and CNC. The satellite dataset includes ground biopar measurements acquired at satellite scale and collected at the same time as PRISMA overpasses on different crops (multi-crop, 2020-2024); it consists of approximately 200 samples for all biopars and is used for model training (2020-2021), validation (2021-2022) and testing (2023-2024). The synthetic dataset is generated by PROSAIL-PRO, using biologically constrained combinations of input parameters to simulate 50,000 samples. These datasets were used to train three separate ANN models, validated and tested on the PRISMA dataset. The preliminary tests yielded some interesting results.
The SAT models, trained on PRISMA, yielded satisfactory results when tested on an independent multi-crop dataset. The GRD model, trained on field spectra, demonstrated good performance on PRISMA spectra when tested on wheat (2023-2024), exhibiting a relative Root Mean Squared Error (rRMSE) of 15, 14, and 20% and a coefficient of determination (R²) of 0.75, 0.7, and 0.65 for LAI, CCC, and CNC, respectively. The HYB model, trained on synthetic spectra, yielded the most favourable outcomes on the other crops (rice and corn) when tested on PRISMA data, with an rRMSE of 12 and 12% and an R² of 0.65 and 0.85 for LAI and CCC, respectively (CNC data were not available for those crops). The favourable outcomes achieved with the GRD model indicate that it is feasible to use field spectra to train ANN models suitable for application to satellite data. However, these models may exhibit limited transferability to other crops, as evidenced by their lower performance on rice and corn; nevertheless, they can be valuable for single-crop prediction. Conversely, the HYB-ANN solution demonstrated robust performance, suggesting the potential for a more transferable model in a multi-crop scenario. Finally, a proof of concept of the utility of the ANN-estimated CNC maps is proposed for the generation of wheat nitrogen fertilisation maps. The actual nitrogen uptake derived from CNC maps is used together with crop model scenarios, according to soil properties and weather data, to assess crop needs.
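The hybrid idea above (training an ANN on synthetic spectrum/biopar pairs) can be sketched with a tiny numpy network. Everything here is a stand-in: the "RTM" is a toy exponential function rather than PROSAIL-PRO, and the architecture and sizes are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy synthetic training set: a made-up "RTM" maps LAI to a spectrum
n, n_bands = 2000, 20
lai = rng.uniform(0.0, 7.0, (n, 1))                       # target biopar
wl = np.linspace(0.0, 1.0, n_bands)
spectra = np.exp(-0.4 * lai * wl) + rng.normal(0, 0.01, (n, n_bands))

# One-hidden-layer MLP trained by full-batch gradient descent on MSE
h = 16
W1 = rng.normal(0, 0.5, (n_bands, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1));       b2 = np.zeros(1)
lr = 0.03

def forward(X):
    z = np.tanh(X @ W1 + b1)
    return z, z @ W2 + b2

losses = []
for _ in range(500):
    z, pred = forward(spectra)
    err = pred - lai
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation through the two layers
    gW2 = z.T @ err / n; gb2 = err.mean(0)
    dz = (err @ W2.T) * (1 - z ** 2)
    gW1 = spectra.T @ dz / n; gb1 = dz.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE first/last epoch: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In the hybrid (HYB) scenario described in the abstract, such a network would be trained on RTM simulations and then applied to real satellite spectra, which is what makes the approach transferable across crops.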
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: Are Radiometric Landscapes Mirrors of Agrarian Systems?

Authors: Aurelle Sedegnan, Hervé Sossou, Simon Madec, Nestor AHOYO ADJOVI, Agnes Begue
Affiliations: Cirad, University of Montpellier, INRAB
In Benin, identifying agro-ecological zones and agricultural development poles is crucial for implementing effective agricultural policies. However, current large-scale zoning methods for agrarian systems rely on heterogeneous data sources and often involve subjective selection of socio-economic and environmental variables. These approaches face challenges in representativeness and reproducibility, limiting their utility for policy and planning. To overcome these limitations, we propose a novel approach grounded in the principle that landscapes - as a reflection of the interplay between biophysical and human factors - can serve as proxies for land use and agricultural practices in rural areas. This makes landscape zoning a viable tool for approximating agrarian system zoning. Recognizing that traditional landscape mapping relies on extensive, multi-scale data with varying degrees of accuracy, we introduce an innovative method called radiometric landscape mapping (Lemettais et al., 2024). This approach derives landscapes exclusively from remote sensing data, bypassing the need for measured variables (e.g., climate data) or interpreted products (e.g., land cover maps). It offers a statistically robust, scalable, and cost-effective solution that is applicable across different locations and scales. Data and Methods: Radiometric landscapes were calculated using the first principal components of a series of MODIS NDVI (Normalized Difference Vegetation Index) images from 2018 to 2022. These analyses resulted in the identification of 36 homogeneous radiometric landscapes, which were subsequently classified into nine broader radiometric zones. For comparative analysis, we utilized data from the 2017 « Typologie des exploitant(e)s des sites de recherche et développement du Bénin » survey (Sossou et al., 2019), which covered 477 villages.
This survey collected data on agricultural households, focusing on socio-economic characteristics (e.g., household composition, assets) and agricultural practices (e.g., crop types, mechanization, irrigation). The analysis identified three primary farming system types: irrigated systems, mechanized systems, and intensive systems relying on chemical inputs. Results and Implications: The comparison of the distribution of agrarian system types with the radiometric zoning revealed strong alignment in terms of land cover composition and agricultural intensification. Radiometric zones effectively discriminated between land cover types and provided a robust framework for analyzing agrarian systems. Conclusion: These findings highlight the potential of radiometric zoning to redefine zoning frameworks for agricultural and land-use planning policies. By offering a replicable, data-driven, and scalable approach, radiometric landscapes present a promising tool for supporting sustainable agricultural development in Benin and beyond.
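The core of radiometric landscape mapping (principal components of an NDVI time series, then unsupervised clustering) can be sketched as follows. The data here are synthetic two-class phenologies, not MODIS NDVI, and the cluster count is illustrative (the study derives 36 landscapes grouped into nine zones).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic NDVI stack: two made-up "landscape" phenologies plus noise
n_pixels, n_dates = 5000, 46
t = np.linspace(0, 2 * np.pi, n_dates)
label = rng.integers(0, 2, n_pixels)
ndvi = np.where(label[:, None] == 0,
                0.4 + 0.3 * np.sin(t),        # single-season profile
                0.5 + 0.2 * np.sin(2 * t))    # double-season profile
ndvi = ndvi + rng.normal(0, 0.05, (n_pixels, n_dates))

# PCA via SVD on the centered stack; keep the first components
centered = ndvi - ndvi.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pcs = centered @ vt[:3].T

# Plain k-means on the principal components
k = 2
centers = pcs[rng.choice(n_pixels, k, replace=False)]
for _ in range(20):
    d = np.linalg.norm(pcs[:, None, :] - centers[None], axis=2)
    assign = d.argmin(axis=1)
    centers = np.array([pcs[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])

# The two synthetic phenologies should be cleanly separated
agreement = max((assign == label).mean(), (assign != label).mean())
print(f"cluster/label agreement: {agreement:.2f}")
```

Because the clusters are derived purely from the radiometric time series, no land cover map or climate variable enters the zoning, which is the method's stated advantage.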
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: Field-level crop yield estimation using phenometrics from LAI time-series analysis and weather data in a machine learning framework

Authors: Francesco Nutini, Federico Filipponi, Andrea Ferrarini, Michele Croci, Piero Toscano, Mirco Boschetti
Affiliations: CNR-IREA, CNR-IGAG, Università Cattolica del Sacro Cuore, CNR-IBE
Predicting crop yield is a compelling challenge due to climate change, a growing global population and fluctuations in commodity prices. Observing yield variability in its spatial and temporal dimensions is a prerequisite for understanding the underlying phenomena and for thoughtful management of cropping systems. In this context, Earth observation (EO) programs have transformed crop monitoring because they offer a unique opportunity to monitor crops at reasonable spatial resolution and near-weekly temporal frequency, allowing analysis at farm/field level. Indeed, remote sensing imagery has been used in several data-driven yield estimation approaches, leveraging techniques such as parametric regression, machine learning algorithms, and statistical models. Past scientific literature highlights some important advice for estimating yield with EO data, which was followed in this work, such as exploiting time series (comprehensive information on seasonal dynamics) of biophysical parameters (direct quantitative indicators of plant growth and production) rather than single dates of vegetation indices. The primary objective of the work presented here is to estimate the yield of cereals at field level by exploiting time series of LAI and meteorological variables with a non-parametric regression approach. Side goals are 1) to identify which features and data are most important in yield estimation and 2) to demonstrate that the use of phenometrics derived from LAI time series improves yield estimation compared with seasonal indicators derived from a fixed crop calendar. The analysis was conducted in northern Italy on two areas of interest (AOI). The first covers the largest Italian farm (~3800 ha) in Ferrara province (north-east Italy) and the second comprises an area in Piacenza province (north-west Italy) for which field observations were available.
In the former, 223 field records of winter cereals (spelt, wheat, barley) were manually collected at plot level in the framework of various scientific projects at the end of 5 cropping seasons (2020-2024). This collection of high-quality field data is used as the calibration dataset. The testing dataset is structured to assess model performance and exportability in time (Time Test, TT: same location, different season) and space-time (Spatial Test, ST: different location and season). The test set comprises 107 field-level yields from the first (TT, 57 samples) and second (ST, 50 samples) AOI, provided by farm archives from the 2020 to 2022 cropping seasons. Over the two AOIs, Sentinel-2 L2 data were downloaded from the THEIA archive (atmospherically corrected with MAJA) and LAI maps computed using the biophysical processor (Weiss et al., 2016). Copernicus ERA5 datasets of daily temperature and rainfall were downloaded from Google Earth Engine's data catalogue. The aim here is to use these data to depict crop growth during the season (LAI) and abiotic stressors (meteorological data), highlighting drought conditions and areas that faced water shortage. These datasets provide the predictor variables for yield estimation, while yield is the dependent target variable. LAI time series corresponding to the yield data (i.e. 223 plots in the calibration set and 107 fields in the test set) were exploited to compute phenological metrics (phenometrics). To do so, gap filling and interpolation of LAI were done with the R package {sen2rts} (https://ranghetti.github.io/sen2rts), while phenometrics (e.g. start and peak of season) were obtained with a method inherited from the R package {phenopix} (Filippa et al., 2016). The phenometrics were used to compute 40 regressors from the LAI (e.g. LAI at peak of season) and ERA5 time series (e.g. cumulated rainfall before flowering).
Moreover, other regressors were computed over fixed periods according to the cropping calendar, in order to check the actual contribution of the dynamic information in space and time provided by spatially explicit EO-derived phenology. After running a feature selection algorithm (Boruta test) to discard autocorrelated features, the selected regressors were exploited in multiple random forest (RF) trials (R package {caret}). Accumulated Local Effects (ALE) plots are used to investigate how the selected features (production/growth proxies) influence yield. The best model is then compared with the 107 field averages, aiming to test the RF on data of different origin (farm declarations rather than experimental field sampling) and different AOIs. First tests conducted with RF show promising results in cross-validation (R² = 0.66, RMSE = 1.44) and indicate that the most important regressors are LAI-related: 1) LAI value at peak of season, 2) LAI cumulated from start to end of season and 3) rate of senescence. A further driving proxy is the rain cumulated between the peak and end of the cropping season. A first validation shows moderate results on the first (R² = 0.53, RMSE = 1.23) and second (R² = 0.44, RMSE = 0.95) AOIs. Investigation of the presence of outliers and unreliable data in the validation dataset is still to be done. The top-ranked LAI regressors correspond to features that have long been exploited in EO time-series studies to monitor agro-ecosystems (e.g. seasonal LAI cumulate, see Prince 1991), and the ALE plots show that, as expected, the higher these proxies, the higher the yield. On the other hand, the ALE plot for the best weather proxy shows that the wetter the end of the cropping season, the lower the production. This can highlight unfavourable abiotic conditions (e.g. floral sterility, lodging) and potential biotic stressors (e.g. fungi) that impact plant health and grain filling processes.
These aspects should be further investigated with plant physiologists to ensure biologically sound interpretations and to select meteorological metrics a priori based on expert knowledge. Potential constraints on the approach arise under conditions where the LAI behaviour does not match the obtained yield (i.e. high yield with a low maximum LAI and vice versa). These peculiar cases and their potential causes (e.g. flower sterility, plant lodging, field samples not representative at satellite scale) will be shown and discussed. Future activities will first focus on thoroughly validating the estimation on the two AOIs and on obtaining more yield data to check RF robustness on other AOIs. Moreover, more meteorological variables from ERA5, such as temperature for heat stress and potential and actual evapotranspiration as indicators of water stress, will be included in the RF trials. Once calibrated, our aim is to apply the model to the whole Po valley (Italy) to identify which districts faced yield loss during the severe drought event that impacted northern Italy in 2022, and which are therefore more likely to face the same issue in the near future.
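Phenometric extraction of the kind performed with {sen2rts}/{phenopix} can be sketched in a few lines: locate the peak, define start/end of season by a threshold on the seasonal amplitude, and cumulate LAI over the season. The synthetic curve and the 20% threshold below are illustrative, not the packages' actual algorithms.

```python
import numpy as np

# Synthetic smoothed LAI season: Gaussian-shaped curve sampled every 5 days
doy = np.arange(0, 365, 5)                                # acquisition days
lai = 5.5 * np.exp(-((doy - 150) ** 2) / (2 * 40 ** 2))   # toy LAI profile

# Peak of season
peak_idx = lai.argmax()
peak_doy, peak_lai = doy[peak_idx], lai[peak_idx]

# Start/end of season: first/last crossing of 20% of seasonal amplitude
thresh = 0.2 * (lai.max() - lai.min()) + lai.min()
above = lai >= thresh
sos = doy[above.argmax()]                                 # first day above
eos = doy[len(above) - 1 - above[::-1].argmax()]          # last day above

# Seasonal cumulated LAI via trapezoidal integration between SOS and EOS
season = (doy >= sos) & (doy <= eos)
seg, d = lai[season], doy[season]
cum_lai = np.sum((seg[1:] + seg[:-1]) / 2 * np.diff(d))

print(f"SOS={sos}, peak DOY={peak_doy} (LAI={peak_lai:.2f}), EOS={eos}, "
      f"cumulated LAI={cum_lai:.0f}")
```

Metrics like these (peak LAI, seasonal cumulate, senescence rate) are exactly the kind of regressors the abstract reports as most important in the RF trials.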
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: Characterization of crop sequences in Argentina over six growing seasons using satellite-derived crop type maps

Authors: Diego De Abelleyra
Affiliations: Instituto Nacional de Tecnología Agropecuaria (INTA)
Soybean area in Argentina has increased significantly over the last decades, from 5 Mha in 1990 to 16 Mha in 2022. Nowadays, soybean is the main planted crop, followed by maize (9 Mha) and wheat (7 Mha). When soybean is planted as a single crop in a season, it produces very few residues, which are quickly degraded because of their low C/N ratio. In contrast, cereal crops generate significantly more residues, which are degraded slowly. Increases in commodity prices can lead to the continuous planting of soybean because of its differential gross margin, generating risks of soil degradation and uncertainty regarding the sustainability of agricultural production. This work analyzed crop sequences over six growing seasons using the Argentina National Map of Crops (2018/2019 to 2023/2024) as base information. These maps were generated with supervised classification methods using Landsat 8 and 9 and Sentinel-2 satellite images, and in situ data obtained from on-road surveys throughout the agricultural regions of Argentina, registering georeferenced samples at different times in each season. Mapped classes included soybean, maize, winter cereals, sunflower, peanut, common bean, cotton and sugarcane. Summer crops can be planted as single crops, or as double crops with a preceding winter/spring crop such as wheat, barley or sunflower. As the combination of 12 classes of crops over six growing seasons resulted in nearly 60,000 observed sequences, several indices were used to map and describe them: i) cropping intensity, ii) proportion of early soybean and iii) proportion of cereals in the sequence. The 20 most frequent sequences represented nearly 25% of the agricultural area and included only three crops: soybean, maize and winter cereals. Two crop rotations accounted for the five most frequent sequences. A rotation of two crops, with maize and soybean as single crops per season, represented nearly 8% of the area.
A three-year rotation of i) single-crop maize, ii) single-crop soybean, and iii) a winter cereal / soybean double crop represented 5% of the area. Other relevant sequences included a higher proportion of single-crop soybean. Cropping intensity showed that nearly 36% of the agricultural area is planted with only one crop per season. This was partly observed in areas with lower precipitation, but also in areas with high precipitation in the agricultural belt. Nearly 20% of the area showed sequences with four or more single soybean crops, which can pose a risk to the sustainability of production. These cases were mostly located in the agricultural belt, near ports and agro-industrial areas. The number of cereals in the sequence showed that nearly 70% of the agricultural area had three years with cereals (mainly maize or wheat) over a six-year sequence; these areas were predominantly observed in high-precipitation regions, which allow the planting of double crops. Even though some areas showed undesirable proportions of early soybean in their sequences, most of the agricultural area included at least one cereal every two years. This ensures certain levels of carbon inputs that can contribute to maintaining soil health and sustaining production levels. There is a margin for improving sequences, for example by including more crops per year or reducing the frequency of early soybean. Achieving this requires considering not only environmental aspects such as precipitation, but also socio-economic aspects that are also relevant to farmers' planting decisions.
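The sequence indices described above can be sketched from per-pixel crop labels over six seasons. The labels and example sequence below are illustrative (not the National Map of Crops classes), and the single-soybean index stands in for the early-soybean index of the study.

```python
# Minimal sketch of sequence indices computed from six seasons of crop
# labels; a double crop is written as 'wheat/soybean'.
CEREALS = {"maize", "wheat", "barley"}

def sequence_indices(seq):
    """seq: one label per season, e.g. 'soybean' or 'wheat/soybean'."""
    crops_per_season = [s.split("/") for s in seq]
    n_crops = sum(len(c) for c in crops_per_season)
    single_soy = sum(1 for c in crops_per_season if c == ["soybean"])
    cereal_seasons = sum(1 for c in crops_per_season if CEREALS & set(c))
    return {
        "cropping_intensity": n_crops / len(seq),
        "prop_single_soybean": single_soy / len(seq),
        "prop_cereal_seasons": cereal_seasons / len(seq),
    }

# The three-year rotation described above, repeated over six seasons
seq = ["maize", "soybean", "wheat/soybean"] * 2
print(sequence_indices(seq))
```

Applied per pixel, such indices summarize tens of thousands of distinct sequences into a handful of mappable sustainability indicators.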
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Session: D.02.13 AI-based Methods for EO data compression

Earth Observation (EO) sensors are acquiring big data volumes at very high data rates (e.g., the Copernicus missions produce 12 TB of data per day). In particular, next-generation SAR systems will offer a quantum leap in performance using large bandwidths and digital beamforming techniques in combination with multiple acquisition channels. These innovative spaceborne radar techniques have been introduced to overcome the limitations imposed by classical SAR imaging for the acquisition of wide swaths and, at the same time, finer resolutions, and they are currently being widely applied in studies, technology developments and even mission concepts at various space agencies and in industry. Such significant developments in system capabilities are clearly associated with the generation of large volumes of data gathered in a shorter time interval, which in turn imposes harder requirements on the onboard memory and downlink capacity of the system. Similar considerations apply to optical sensors, such as multispectral and hyperspectral ones, which nowadays provide large amounts of high-resolution imagery. Therefore, the proper quantization/compression of the acquired data prior to downlink is of utmost importance, as it defines, on the one hand, the amount of onboard data and, on the other hand, directly affects the quality of the generated EO products.

EO data show unique features posing important challenges and potentials, such as learning data models for optimal compression that preserve data quality and avoid artefacts hindering further analysis. For instance, based on the peculiarities of the imaged scene (in radar imaging these are characterized by the reflectivity, polarization and incidence angle, but also by the specific system architecture, which may offer opportunities for efficient data quantization; multispectral data, in contrast, are characterized by the land cover or the presence of clouds), a more efficient data representation can be achieved by searching for the best quantizer and ad-hoc tuning of the inner quantization parameters. Additionally, onboard preprocessing of the acquired data into a sparse domain (e.g., range compression in the case of SAR data) can also lead to a more compact data representation, which could aid small missions with limited on-board memory.

Artificial Intelligence (AI) represents one of the most promising approaches in the remote sensing community, enabling scalable exploration of big data and bringing new insights into information retrieval solutions. In the past three decades the EO data compression field progressed slowly, but the recent advances in AI are now opening the perspective of a paradigm change in data compression. AI algorithms and onboard processing could be exploited to generate and discover novel, more compact data representations, to obtain an EO data quality that satisfies the cal/val requirements ensuring the consistency of the physical parameters to be extracted, and to open new perspectives for on-board intelligence and joint ground-space processing, i.e., edge computing.

This session aims to bring to the field new methodologies for both lossless and lossy compression of remote sensing data. Several data compression topics are welcome in the session, including (but not limited to): data-driven and model-based compression methods, Kolmogorov complexity-based algorithms, source coding with side information, neural data compression, compression of correlated sources, integrated classification and compression, semantic coding, big data compression and application-oriented compression.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Efficient Raw Data Compression for Future SAR Systems

Authors: Dr. Michele Martone, Nicola Gollin, Student Rafael Bueno Cardoso, M. Sc. Marc Jäger, Dr. Rolf Scheiber, M. Sc. Simon Nolte, Dipl. Wirtsch.-Ing. Jamin Naghmouchi, Dr. Gerhard Krieger, Dr. Paola Rizzoli
Affiliations: German Aerospace Center (DLR), Universität zu Lübeck (UzL)
Synthetic aperture radar (SAR) is nowadays a well-established technique for a broad variety of remote sensing applications, being able to acquire high-resolution images of the Earth's surface independently of daylight and weather conditions. In recent decades, innovative spaceborne radar techniques have been proposed to overcome the limitations which typically constrain the capabilities of conventional SAR for imaging wide swaths while, at the same time, achieving fine spatial resolutions. In addition, present and future spaceborne SAR missions are characterized by the employment of multi-static satellite architectures, large bandwidths, multiple polarizations and fine temporal sampling. This inevitably leads to the acquisition of an increasing volume of on-board data, which poses hard requirements in terms of the on-board memory and downlink capacity of the system. This paper presents an overview of the research activities in the field of SAR raw data compression which have been developed in recent years or are currently under investigation at the Microwaves and Radar Institute of the German Aerospace Center (DLR). In particular, we investigate strategies for data volume reduction in multi-channel SAR [1], [2]. These systems allow for high-resolution imaging of a wide swath, at the cost of the acquisition and downlink of a huge amount of data. Together with the intrinsic requirements related to resolution and swath width, the high data volume is due to the fact that the effective pulse repetition frequency (PRF) generated by the multiple channels is typically higher than the processed Doppler bandwidth, which introduces a certain oversampling of the raw data in azimuth. In this context, convenient data volume reduction strategies are proposed, based on Doppler-based transform coding (TC) or linear predictive coding (LPC), which aim to exploit the existing correlation between subsequent azimuth samples.
We consider realistic multi-channel SAR system architectures and simulate multi-channel raw data using synthetic as well as real backscatter data from TanDEM-X. We analyze the statistical properties (such as autocorrelation and Doppler power spectrum) exhibited by the multi-channel raw signal and discuss the impact of relevant system parameters, highlighting the potential and limitations of the proposed approaches as a trade-off between achievable data volume reduction and performance degradation. Furthermore, we address some of the above-mentioned challenges and limitations in terms of data transfer and downlink in the frame of the Horizon Europe project SOPHOS (Smart On-Board Processing for Earth Observation Systems) [3]. The main goals of SOPHOS are the design and implementation of enabling technology for high-end data products generated on board spacecraft, via the implementation of power-efficient, high-performance space processing chains for various Low-Earth Orbit (LEO) missions. The main focus is on SAR and, in this scenario, we develop algorithms aimed at improving on-board SAR raw data compression. For this purpose, the performance-optimized block-adaptive quantization (PO-BAQ), recently developed by the authors, is proposed. PO-BAQ [4] extends the concept of the state-of-the-art block-adaptive quantizer (BAQ) and allows for jointly optimizing the resource allocation and the resulting SAR image degradation due to quantization. Since quantization errors are significantly influenced by the local distribution of the SAR intensity, such an optimization is achieved by exploiting a priori knowledge of the SAR backscatter statistics of the imaged scene. Given the severe constraints imposed by the downlink capacity, the optimized on-board data compression proposed in the SOPHOS project allows for better, customizable data quality for large-scale, global monitoring and time-series applications.
Furthermore, the SAR performance-optimized quantization optimizes the overall, global product quality for a given downlink budget and, in this way, allows for an increase of the system acquisition capability and, ultimately, for more continuous observations. In the frame of SOPHOS, efficient on-board SAR image formation is also considered: for this purpose, SAR acquisitions may need to be processed in blocks due to constraints imposed by the available computational resources, and the necessary processing steps are applied to each block of SAR raw data such that the outputs are concatenated to obtain the final image formation result. The overall SAR image formation workflow consists of a fixed sequence of processing steps, including range and azimuth compression, antenna pattern compensation and image generation; the resulting image is stored on board for later transmission to ground, also allowing for a significant reduction of the resulting data volume. In addition, we investigate the suitability and potential of SAR raw data transformations, with a focus on JPEG2000 and polar-based compression, with the goal of optimizing and potentially reducing the resulting data rate. Finally, DLR is a member of the Consultative Committee for Space Data Systems (CCSDS), a multi-national forum for the development of communications and data systems standards for spaceflight. In particular, the authors currently support the Data Compression Working Group in collaboration with other research institutions (including ESA, NASA, CNES) with the main objective of defining and standardizing data compression methods for SAR systems. At the Symposium we will present the latest investigations as well as an outlook on the future activities of the Working Group. [1] M. Martone, M. Villano, M. Younis, and G. Krieger, Efficient onboard quantization for multichannel SAR systems, IEEE Geoscience and Remote Sensing Letters 16 (12), pp. 1859-1863, Dec. 2019. [2] M. Martone, N. Gollin, E. Imbembo, G.
Krieger, and P. Rizzoli, Data Volume Reduction for Multi-Channel SAR: Opportunities and Challenges, EUSAR 2024; 15th European Conference on Synthetic Aperture Radar, Munich, Germany, pp. 243-248, Apr. 2024. [3] M. Martone, N. Gollin, M. Jäger, R. Scheiber, M. Taddiken, O. Bischoff, D. Smith, O. Flordal, M. Persson, C. Bondesson, V. Kollias, N. Pogkas, S. Nolte, and J. Naghmouchi, Smart On-Board Processing for Earth Observation Systems: the SOPHOS Project. On-Board Payload Data Compression (OBPDC) Workshop, Las Palmas de Gran Canaria, Spain, Oct. 2024. [4] M. Martone, N. Gollin, P. Rizzoli and G. Krieger, Performance-Optimized Quantization for SAR and InSAR Applications, IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-22, Jun. 2022.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Using adaptive grids for the compression of ERA5 meteorological reanalysis data

Authors: Farahnaz Khosrawi, Adrian Kolb, Lars Hoffmann, Siegfried Müller
Affiliations: Jülich Supercomputing Centre, Forschungszentrum Jülich, Institut für Geometrie und praktische Mathematik, RWTH Aachen
The continuous increase in computational power comes with an equivalent demand for storage space. However, the ability to store data has hardly increased in recent years. This makes the demand for efficient storage solutions even more pressing, especially for, e.g., meteorological reanalysis data. The current European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 reanalysis already poses a considerable challenge for the community, but with the upcoming ERA6, which will have a much higher resolution, significantly more storage space will be needed. An efficient way to reduce storage requirements is to use either lossy or lossless data compression. To compress the meteorological data, we perform a multiresolution analysis using multiwavelets on a hierarchy of nested grids. Since the local differences become negligibly small in regions where the data is locally smooth, we apply hard thresholding for data compression. Thereby, we transform the data from a regular Cartesian grid to an adaptive grid that keeps a fine resolution in areas where it is necessary, but otherwise coarsens the grid. This approach results in a high compression rate while preserving the accuracy of the original data. The compression strategy has been implemented into the Lagrangian model for Massive-Parallel Trajectory Calculation (MPTRAC) and successfully applied to ERA5 data. Applications to the upcoming ERA6 data and to satellite observations are planned for the future.
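As a minimal illustration of the multiresolution idea described above (a 1-D Haar analysis with hard thresholding, not the multiwavelet scheme implemented in MPTRAC), smooth regions produce negligible detail coefficients, which can be discarded:

```python
import numpy as np

def haar_decompose(x, levels):
    """Multiresolution analysis: split repeatedly into coarse averages and details."""
    coeffs = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse approximation
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # local differences (details)
        coeffs.append(d)
        x = a
    coeffs.append(x)                           # coarsest approximation last
    return coeffs

def haar_reconstruct(coeffs):
    x = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        y = np.empty(2 * x.size)
        y[0::2] = (x + d) / np.sqrt(2)
        y[1::2] = (x - d) / np.sqrt(2)
        x = y
    return x

def compress(x, levels, threshold):
    """Hard thresholding: zero out details where the data is locally smooth."""
    coeffs = haar_decompose(x, levels)
    kept = [np.where(np.abs(c) > threshold, c, 0.0) for c in coeffs[:-1]] + [coeffs[-1]]
    nonzero = sum(int(np.count_nonzero(c)) for c in kept)
    return kept, x.size / max(nonzero, 1)      # compression-ratio proxy

# smooth field plus one sharp local feature: details survive only near the feature
t = np.linspace(0, 1, 1024)
field = np.sin(2 * np.pi * t) + np.exp(-((t - 0.5) / 0.01) ** 2)
kept, ratio = compress(field, levels=5, threshold=0.01)
recon = haar_reconstruct(kept)
max_err = np.max(np.abs(recon - field))
```

Keeping only the significant coefficients is equivalent to the adaptive grid: fine resolution where the field varies sharply, coarse resolution elsewhere, with the reconstruction error controlled by the threshold.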
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Presentation: AI for Performance-Optimized Raw Data Quantization in Future SAR Systems

Authors: Nicola Gollin, Dr. Michele Martone, Max Ghiglione, Dr. Gerhard Krieger, Dr. Paola Rizzoli
Affiliations: Microwaves and Radar Institute, German Aerospace Center (DLR), Radio Frequency Payloads and Technology Division, European Space Agency (ESA)
In next-generation synthetic aperture radar (SAR) systems, performance is advancing through increased bandwidth, multiple polarizations, and more complex acquisition methods exploiting digital beamforming (DBF) as well as multichannel and multistatic configurations. These technologies enable high-resolution wide-swath polarimetric and interferometric acquisitions, significantly enhancing temporal sampling and data coverage. In upcoming missions like NISAR and Sentinel-1 Next Generation, this capability introduces, as a drawback, a substantial increase in data volume, which requires on-board storage and high-speed downlink and therefore efficient onboard data quantization methodologies. Block-Adaptive Quantization (BAQ) [1] is a state-of-the-art SAR raw data quantization method, achieving a balance between complexity, signal fidelity and resulting data volume by adapting quantization levels to raw data block statistics. The main limitation of BAQ is the use of a uniform quantization rate throughout the scene, resulting in varying performance depending on the backscatter variability across the imaged area. A further development of this method is the Flexible Dynamic BAQ (FDBAQ) [2], which is implemented in Sentinel-1 and includes an adaptive bit allocation based on the scene's signal-to-thermal-noise ratio (STNR), exploiting look-up tables (previously derived from global backscatter maps). However, FDBAQ carries out the bitrate allocation without considering the actual performance degradation in the resulting high-level SAR products and applications. In particular, the local variability and inhomogeneities in the backscatter distribution strongly impact the resulting quantization degradation, requiring a direct link between the quantization settings and the focused SAR domain to be properly handled.
An attempt to close this gap is represented by the Performance-Optimized BAQ (PO-BAQ) [3], which is based on the estimation of a two-dimensional, spatially variant bitrate allocation map in the SAR raw data domain, depending on the final performance requirement defined on the higher-level SAR and InSAR products. In order to estimate the local distribution of the SAR intensity and, in particular, its degree of homogeneity, the PO-BAQ exploits a priori knowledge of the SAR backscatter statistics of the imaged scene. This information allows for deriving two-dimensional bitrate maps (BRM) which must be available on board (either stored or uplinked) before commanding. For these reasons, the PO-BAQ is not fully adaptive to the acquired scene, since the quantization settings are derived from prior considerations and do not directly account for the local conditions at the time of the SAR survey. In recent years, deep learning (DL) methods have shown promise for data compression. While traditionally applied to fully focused SAR images, recent efforts aim to adapt DL methods for SAR raw data compression. Nevertheless, the topic has remained largely unexplored, mainly due to the lack of spatial correlation and self-similarity among samples typically observed in the raw data domain, which complicates the task of pattern recognition. In this work, we propose a novel deep learning-based method for performing a dynamic and adaptive onboard bitrate allocation to feed a space-varying BAQ. The principle is that a direct link between the raw data and the focused domains can be achieved through a DL model, without the need for complete SAR focusing.
This allows for achieving a certain desired performance in the final focused SAR product thanks to a dynamic allocation of quantization bits, which only depends on the raw data characteristics and on the desired quality of the output SAR/InSAR products (e.g., in terms of signal-to-quantization noise ratio, interferometric phase error and noise equivalent sigma zero). In this contribution, different examples of SAR and InSAR target performance parameters are considered to train, validate and test a specific DL architecture on a real TerraSAR-X and TanDEM-X uncompressed raw dataset (i.e., acquired without applying any quantization after digitization and hence free of quantization noise) covering different landcover types, in order to perform efficient performance-optimized bit allocation. [1] - Kwok, R., & Johnson, W. T. (1989). Block adaptive quantization of Magellan SAR data. IEEE Transactions on Geoscience and Remote Sensing, 27(4), 375-383. [2] - Attema, E., Cafforio, C., Gottwald, M., Guccione, P., Guarnieri, A. M., Rocca, F., & Snoeij, P. (2010). Flexible dynamic block adaptive quantization for Sentinel-1 SAR missions. IEEE Geoscience and Remote Sensing Letters, 7(4), 766-770. [3] - Martone, M., Gollin, N., Rizzoli, P., & Krieger, G. (2022). Performance-optimized quantization for SAR and InSAR applications. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-22.
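The baseline BAQ principle referenced in this abstract — a coarse uniform quantizer whose step size adapts to per-block signal statistics — can be sketched as follows (a toy real-valued version with an assumed block length and clipping range, not the flight implementation):

```python
import numpy as np

def baq(raw, block_len=128, n_bits=3):
    """Toy block-adaptive quantizer: per-block std estimate drives the step size."""
    n_levels = 2 ** n_bits
    out = np.empty_like(raw)
    for start in range(0, raw.size, block_len):
        block = raw[start:start + block_len]
        sigma = block.std() + 1e-12            # block statistic
        step = 2 * 2.5 * sigma / n_levels      # cover roughly +/- 2.5 sigma
        idx = np.clip(np.round(block / step), -(n_levels // 2), n_levels // 2 - 1)
        out[start:start + block_len] = idx * step   # dequantized samples
    return out

rng = np.random.default_rng(0)
# raw SAR samples are approximately zero-mean Gaussian; emulate varying
# backscatter power from one region of the scene to the next
raw = rng.normal(size=4096) * np.repeat([0.5, 4.0, 1.0, 0.2], 1024)
deq = baq(raw, n_bits=3)
sqnr_db = 10 * np.log10(np.mean(raw ** 2) / np.mean((raw - deq) ** 2))
```

Because the step size rescales with each block's power, the signal-to-quantization-noise ratio stays roughly constant across bright and dark regions at a fixed bit rate, which is exactly the property that a uniform (non-adaptive) quantizer lacks.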
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Presentation: CHIMERA: AI-Based Lossless Data Compression Revolutionizing Efficiency and Scalability for Big Data Applications and the Space Industry

Authors: Andrea Cavallini, Marco Uccelli, Angie Catalina Carrillo Chappe, Leticia Pérez Sienes
Affiliations: Starion Group
Data compression has become a cornerstone of modern information technology, addressing the global demand for efficient storage and transmission of ever-increasing data volumes. As industries worldwide generate and rely on vast datasets, effective compression solutions are essential to overcome limitations in capacity and bandwidth. This need is particularly acute in big-data-driven sectors, where managing the scale, complexity, and accessibility of information is critical to ensuring seamless global operations and technological advancement. In the space industry, this challenge is even more pronounced. With advancements in satellite technology, higher-resolution sensors, and the increasing number of satellites in constellations, the volume of collected data has surged dramatically. Managing this data places immense pressure on storage systems and transmission bandwidth, making efficient compression systems indispensable. Moreover, the storage of this vast amount of data carries significant economic implications, as it demands highly capable and often costly storage solutions. These demands are pushing industries across the board, from telecommunications to Earth Observation, to adopt innovative compression techniques that enable the handling of massive datasets without compromising quality or accessibility. Artificial Intelligence (AI) is now transforming this domain by introducing adaptive, learning-based methods that optimize efficiency and handle diverse data types. Unlike traditional techniques, AI can learn intricate patterns in data, and do so faster, enabling compression that preserves information and scales to the growing demands of modern applications. The potential of AI is particularly evident in fields like Earth observation, where satellites are transitioning from static images to continuous video, creating massive data streams. Data compression has evolved to meet the need for efficiency while maintaining quality.
Early lossless methods, like Run-Length Encoding (RLE), ensured data could be perfectly reconstructed, making them ideal for text or scientific use. The rise of multimedia led to lossy techniques like JPEG and MP3, which traded fidelity for higher compression ratios. Despite these advances, traditional methods rely on fixed rules, limiting their adaptability to diverse and complex datasets. Furthermore, traditional compression systems often struggle with already highly optimized or compressed data. For example, compressing a JPEG image or H.264/H.265 video file with tools like ZIP typically results in minimal, if any, size reduction. This limitation underscores the need for more sophisticated approaches that can extract further redundancies even from optimized data. More recently, algorithms like cmix and NNCP (Neural Network Compression) have demonstrated the potential of neural networks in data compression. cmix, a state-of-the-art lossless compressor, combines prediction models with large-scale neural networks to achieve remarkable compression ratios, albeit with significant computational demands. Similarly, NNCP leverages neural networks to predict data patterns, enabling high compression efficiency while retaining the lossless property. Losslessness is crucial in fields like scientific research, medical imaging, legal documentation, and space exploration, where even minimal data loss can compromise accuracy, integrity, or safety. These applications demand advanced compression techniques that preserve every bit of information while improving efficiency, ensuring data remains intact and usable for critical tasks. Building on these advancements, our approach introduces a versatile AI-driven lossless compression algorithm based on transformer networks, capable of handling diverse data types, including text, audio, and images.
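The two points above — classic lossless coding thriving on redundancy but failing on already-optimized data — can be seen with a minimal RLE round trip (illustrative only, unrelated to the algorithm presented in this abstract): long runs compress dramatically, while high-entropy input, which mimics already-compressed data, actually expands:

```python
import os

def rle_encode(data: bytes) -> bytes:
    """Run-length encoding as (count, value) byte pairs; lossless by construction."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def rle_decode(enc: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(enc), 2):
        out += bytes([enc[i + 1]]) * enc[i]   # repeat value `count` times
    return bytes(out)

redundant = b"A" * 1000 + b"B" * 1000     # long runs: compresses well
high_entropy = os.urandom(2000)           # statistically similar to compressed data

ratio_runs = len(redundant) / len(rle_encode(redundant))
ratio_rand = len(high_entropy) / len(rle_encode(high_entropy))
```

On the random input nearly every run has length one, so the (count, value) pairs roughly double the size — the same effect seen when zipping a JPEG.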
Leveraging the power of transformers, the algorithm dynamically adapts to the unique features of each data type, ensuring efficient compression while maintaining precise reconstruction. Through technical refinements in the network's architecture and optimization techniques, our approach improves both compression performance and computational efficiency. Initial testing demonstrates that the algorithm achieves compression results comparable to cmix and NNCP, while significantly outperforming both in encoding and decoding speeds. The algorithm’s performance has been evaluated using established metrics, with results demonstrating its competitiveness across all tested domains. In terms of compression ratio, the algorithm achieves performance on par with or exceeding the state of the art, showing an average improvement of approximately 3% over the best existing methods. Additionally, encoding and decoding speeds have been significantly optimized, delivering an average increase in efficiency of 60% compared to previous approaches. Testing was conducted on standard benchmarks representing a diverse range of data modalities, including text, audio, and images, evaluated across both homogeneous and heterogeneous datasets. The algorithm consistently achieved substantial reductions in data size while losslessly reconstructing the original data, affirming its reliability and precision. These results highlight its adaptability and robustness, establishing it as a versatile solution for compression across multiple domains. This technology offers significant advantages for a wide range of stakeholders, including space agencies, private satellite operators, and research institutions managing vast satellite data collections. Its adaptability makes it particularly well-suited for applications such as environmental monitoring, disaster response, and Earth observation, where robust data handling is critical. 
Additionally, the algorithm’s ability to operate on-board satellites allows for original data compression at the source, significantly reducing downlink bandwidth requirements or enabling the transmission of greater data volumes without the need to expand existing downlink capabilities. This feature optimises the utilisation of communication resources, ensuring that critical information reaches the ground more efficiently. Beyond its operational advantages, the technology directly addresses the high costs and complexities of long-term data preservation. By ensuring secure, efficient, and accessible storage of valuable datasets over extended periods, it mitigates the financial and logistical burdens associated with maintaining massive data archives. Moreover, its versatility is evident in scientific research fields such as genomics, climate modelling, and astrophysics, where managing vast datasets is essential for driving innovation and advancing knowledge. By supporting both on-board compression for real-time data optimisation and sustainable long-term archiving, this technology provides a transformative solution for data-intensive industries and research disciplines, enhancing efficiency and impact at every stage of the data lifecycle. It offers a transformative solution to managing the growing volume of data generated across industries, paving the way for more accessible and actionable information in the years ahead.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Complex-Valued Autoencoder-Based Data Compression Scheme for SAR Raw Data

Authors: Dr. Reza Mohammadi Asiyabi, Prof. Andrei Anghel, Mihai Datcu, Dr. Adrian Focsa, Dr. Michele Martone, Dr. Paola Rizzoli, Ernesto Imbembo
Affiliations: Politehnica Bucharest, CEOSpaceTech, German Aerospace Center (DLR), European Space Agency (ESA-ESTEC)
Next-generation SAR systems will offer improved performance, using large bandwidths, digital beamforming techniques, and multiple acquisition channels. These new radar systems are designed to overcome the limitations of traditional SAR imaging sensors, enabling wider coverage and better resolution, and are being widely explored by space agencies and industry for the next generation of SAR missions. Such significant developments in terms of system capabilities lead to large volumes of data acquired in a shorter time interval, which, in turn, implies harder requirements for the onboard memory and downlink capacity of the system. Consequently, the proper quantization and compression of SAR raw data is of utmost importance, as it defines, on the one hand, the amount of onboard data to be transferred or stored, while, on the other hand, it directly affects the quality of the generated SAR products. These two aspects must be traded off due to the constrained acquisition capacity and onboard resources of the system. Lossy data compression techniques are employed to reduce the size of the acquired SAR raw data without sacrificing critical information. By compressing the data, the required downlink bandwidth is significantly reduced, enabling efficient transmission of SAR data from the satellite to the ground station. Moreover, data compression is essential for onboard memory management. SAR satellites have limited onboard storage capacity, and efficient data compression algorithms allow for storing larger amounts of data within the available memory. This enables longer data acquisition periods and increased mission flexibility, as SAR systems can acquire and store more data before the need for data offloading. Effective data compression techniques are essential for maximizing the utility of SAR systems.
In this work we present a complex-valued autoencoder-based data compression method (developed in the ESA project ARTISTE - “Artificial Intelligence for SAR Data Compression”), which introduces a new perspective for SAR data compression that goes beyond the complex-valued numbers and basic SAR processing. It unlocks the huge potential of complex-valued networks for the development of neural data compression methods for raw data compression, while preserving the original properties and phase information of the SAR data. The developed method is a standalone data compression method based on the complex-valued autoencoder architecture that can replace conventional data compression techniques and provide efficient data compression for future SAR missions from an AI perspective. Additionally, with the increasing interest in onboard processing, complex-valued deep architectures (e.g., autoencoder data compression) can lay the foundation for deep learning-based onboard processing (e.g., classification and object recognition). In lossy data compression algorithms, an alternative representation of the data in another space is usually found and then quantized. Conventional data compression algorithms use a fixed transformation model and cannot be adapted to the statistics of the data. However, in neural data compression methods, a neural network is trained to transform the data into the embedded features, considering the statistics and distribution of the data and providing a more adaptive transformation model, hence a lower data loss. In the proposed autoencoder network, the encoder architecture comprises several complex-valued convolutional layers followed by complex-valued Generalized Divisive Normalization (GDN) layers. The encoder represents the input image patch in the latent space as the embedded features. A quantization module is then used to quantize the embedded features into a discrete-valued representation.
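The complex-valued convolutions at the heart of such layers reduce to four real convolutions. A minimal 1-D sketch (the actual network uses 2-D layers with GDN activations; this only illustrates the arithmetic):

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex convolution via four real convolutions:
    (xr + i*xi) * (wr + i*wi) = (xr*wr - xi*wi) + i*(xr*wi + xi*wr)."""
    real = np.convolve(x.real, w.real, "same") - np.convolve(x.imag, w.imag, "same")
    imag = np.convolve(x.real, w.imag, "same") + np.convolve(x.imag, w.real, "same")
    return real + 1j * imag

rng = np.random.default_rng(3)
x = rng.normal(size=64) + 1j * rng.normal(size=64)   # complex input (like I/Q samples)
w = rng.normal(size=5) + 1j * rng.normal(size=5)     # complex kernel
y = complex_conv1d(x, w)

# sanity check against numpy's native complex convolution
ref = np.convolve(x, w, "same")
```

Treating I and Q as a single complex channel in this way is what lets the network preserve the phase structure of the raw data, rather than processing the two components independently.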
Since the derivative of the quantization function is zero almost everywhere, during training the quantization module is replaced by uniform noise to maintain the gradient for the backpropagation algorithm and to train the network. However, after training, actual quantization is used. The quantized embedded feature maps are discrete-valued and can be losslessly compressed into a bitstream, using an entropy coding method such as arithmetic encoding. The resulting bit stream is the compressed data and is transferred or stored. To decompress and reconstruct the data, the arithmetic decoder (with the same entropy model) recovers the embedded feature maps from the compressed bit stream. Then the decoder, consisting of several complex-valued transpose convolutional and complex-valued inverse GDN layers, is used to reconstruct the data from the embedded representation. Rate-Distortion (RD) loss is used for training the complex-valued autoencoder-based compression network. The RD loss has two main terms: the rate loss and the distortion loss. The rate term in the loss function estimates the minimum number of bits required on average to store the embedded bitstream, based on the distribution of the embedded features. However, since this distribution is unknown, the rate term is estimated using the Shannon cross-entropy between the real distribution and the estimated distribution model of the symbols in the embedded features. On the other hand, the distortion term is the pairwise distortion metric between the input and the output images and is computed using the well-known Mean Square Error (MSE) measure. There is a trade-off between the rate and distortion terms, where a higher rate allows for a lower distortion, and vice versa. So, the loss function used for training the compression network is the weighted sum of these two terms.
The weight controls the tradeoff between these two losses and enables us to achieve different rates for different applications. The ability of the complex-valued deep architectures to learn the complex distribution of SAR data and preserve the original properties and phase information of the SAR data is evaluated using Sentinel-1 data acquired in stripmap mode. The dataset used to train the autoencoder consists of three Sentinel-1 SAR scenes acquired over Chicago and Houston, United States (US), and Sao Paulo, Brazil, to include various landcovers (e.g., different constructed areas, agriculture, vegetation, and water bodies). To use the dataset in the deep architecture, the SAR scenes are divided into non-overlapping patches of 256×256 pixels. Different Sentinel-1 scenes are then used as test data to evaluate the performance of the trained network (one scene acquired over the island of Fogo, Cape Verde, and one over Amsterdam, The Netherlands). It is worth mentioning that the hardware implementation feasibility and efficiency of the developed complex-valued autoencoder-based data compression method is also evaluated within the ARTISTE project. The unavailability of uncompressed SAR raw data is a limiting factor for the development of novel data compression techniques. For instance, available raw SAR data from Sentinel-1 (Level-0 products) are already FDBAQ compressed, and the decoded raw data has non-uniform quantization. As a result, we define a procedure [1] to add quantization noise to the decoded raw data in order to obtain uniformly quantized raw data that resembles the statistics of the uncompressed raw data on board the SAR missions. In this way, we obtain raw data samples represented with the right number of bits and with statistics similar to the uncompressed raw data. The developed method is compared and benchmarked against the well-accepted BAQ as well as the JPEG2000 compression standards.
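The training objective described above — noise-based soft quantization plus a weighted rate-distortion loss with an entropy model — can be sketched in simplified real-valued form (a histogram stands in for the learned entropy model, and `lam` is an arbitrary illustrative weight):

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_quantize(latent, training=True):
    """Training: additive uniform noise in [-0.5, 0.5) stands in for rounding so
    the gradient survives backpropagation; at inference, hard rounding is used."""
    if training:
        return latent + rng.uniform(-0.5, 0.5, size=latent.shape)
    return np.round(latent)

def rate_bits(symbols):
    """Cross-entropy rate proxy: total bits to code the symbols under a fitted
    histogram model of their distribution."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(counts * np.log2(p)).sum())

def rd_loss(latent, recon, target, lam=100.0):
    """Weighted sum of the rate term (entropy proxy) and the distortion term (MSE)."""
    R = rate_bits(np.round(latent))
    D = float(np.mean((recon - target) ** 2))
    return R + lam * D, R, D

# a peaky latent (low entropy) costs fewer bits than a spread-out one
peaky = rng.normal(scale=0.3, size=1000)
spread = rng.normal(scale=8.0, size=1000)
bits_peaky = rate_bits(np.round(peaky))
bits_spread = rate_bits(np.round(spread))
loss, R, D = rd_loss(peaky, recon=peaky, target=peaky)   # perfect reconstruction: D = 0
```

Sweeping `lam` traces out the rate-distortion curve: a larger weight on distortion pushes the network toward higher-rate, lower-error operating points, and vice versa.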
A few studies have utilized the JPEG2000 algorithm for detected SAR data compression, and JPEG2000 has also been used as a baseline method for comparison in many studies focused on detected SAR data compression. Within the ARTISTE project we extended the application of the JPEG2000 compression method into the realm of complex-valued SAR raw data and employed JPEG2000 for SAR raw data compression (applied separately to the real and imaginary components (i.e., I and Q) of the SAR raw data) [2]. The reason why JPEG2000 also works for raw data is related to the presence of the low-pass filters from the wavelet decomposition in the JPEG2000 processing chain. In baseband, the instantaneous frequency of a range chirp varies linearly and reaches zero frequency in the middle of the chirp (the azimuth chirps have a similar behavior, but the zero-frequency point may be slightly shifted due to the Doppler centroid). Each chirp signal (corresponding to a target) has a relatively slow variation around the zero-frequency point. This slowly varying region responds quite well to a “partial” matched filter consisting of a boxcar window (a low-pass filter), which can be easily implemented with a sliding-window summation. The resulting 2D signal is a badly focused (low-resolution) SAR image that exploits only a narrow range/azimuth bandwidth around the zero-frequency points of the chirps. Hence, the low-pass filters of the JPEG2000 algorithm generate badly focused images that have some degree of spatial correlation, which can be exploited for compression. At the Living Planet Symposium, we aim to present the architecture of the complex-valued autoencoder, the procedure used to generate uniformly quantized data starting from the Sentinel-1 FDBAQ compressed and decoded data, and the obtained data compression results in comparison with the BAQ and JPEG2000 compression methods.
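The boxcar intuition above can be checked numerically with a 1-D sketch (assumed chirp rate and window length, chosen only for illustration): a sliding-window summation preserves the slowly varying region around the zero-frequency point of a baseband chirp while averaging out the fast-oscillating edges.

```python
import numpy as np

# baseband linear chirp: instantaneous frequency k*t crosses zero mid-pulse
n = 2048
t = np.linspace(-0.5, 0.5, n)
k = 1000.0                                  # assumed chirp rate
chirp = np.exp(1j * np.pi * k * t ** 2)

# boxcar "partial" matched filter: a sliding-window summation (crude low-pass)
L = 64
box = np.ones(L) / L
filtered = np.convolve(chirp, box, mode="same")

# the slowly varying centre survives the low-pass; the fast-oscillating
# edges of the chirp average out to near zero
center_mag = np.abs(filtered[n // 2 - 16: n // 2 + 16]).mean()
edge_mag = np.abs(filtered[: n // 8]).mean()
```

The retained low-frequency content around each target is what gives the "badly focused" image its spatial correlation, which the rest of the JPEG2000 chain can then exploit.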
This study is under review for publication in the IEEE Journal of Selected Topics in Signal Processing (J-STSP). [1] R. M. Asiyabi et al., "Adaptation of Decoded Sentinel-1 SAR Raw Data for the Assessment of Novel Data Compression Methods," IGARSS 2024, Athens, Greece, 2024, pp. 2541-2545. [2] R. M. Asiyabi et al., "On the use of JPEG2000 for SAR raw data compression," EUSAR 2024; 15th European Conference on Synthetic Aperture Radar, Munich, Germany, 2024, pp. 249-253.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Session: F.04.20 EO in support of the regulation on Deforestation-free products (EUDR, EU 2023/1115) - PART 1.

Faced with mounting global environmental concerns and the urgency of addressing climate change, the EU has introduced the ground-breaking regulation on Deforestation-free products (EUDR, EU 2023/1115) targeting global deforestation. The EUDR ensures that seven key commodities – cattle, cocoa, coffee, palm oil, soy, timber, and rubber – and their derived products like beef, furniture, and chocolate, entering the EU market from January 2026 onwards, are not linked to deforestation after a defined cut-off date (December 2020).
To achieve this goal, the regulation obliges operators to establish robust due diligence systems that guarantee deforestation-free and legal sourcing throughout their supply chains. Verifying compliance with these standards is crucial. The EUDR mandates using the EGNOS/Galileo satellite systems and exploiting the Copernicus Earth Observation (EO) program for this purpose. This involves, among others, cross-referencing the geographic locations of origin for these commodities and products with data from satellite deforestation monitoring.
By providing precise and detailed information on deforestation linked to commodity expansion, Copernicus and other EO data/products will help to detect fraud and strengthen the implementation of the policy by diverse stakeholders.
This session will delve into the latest scientific advancements in using EO data to support due diligence efforts under the regulation, including global forest and commodities mapping.
Topics of interest include (but are not limited to):

- Classification methods for commodities mapping using EO data;
- World forest cover and land use mapping with EO data;
- Deforestation and GHG/carbon impacts related to commodity expansion;
- Field data collection strategies for EUDR due diligence;
- Practical examples of EO integration in global case studies;
- Machine learning / AI for deforestation detection and change analysis;
- EUDR compliance strategies: Integrating EO data with other datasets;
- Traceability in the Supply Chain: EO Data for Transparency.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Mapping Coffee Farms in Colombia: How Does Agroforestry Design Affect RS-Based Coffee Detection?

Authors: Yaqing Gou, Yudhi Tanasa, Claudia Paris, Xi Zhu, Mila Luleva
Affiliations: Rabobank, ITC, University of Twente
The application of remote sensing to the monitoring of agricultural fields represents an important advance in improving agricultural management and productivity. Recent demand for remote sensing technology to support companies' due diligence processes and meet compliance requirements has posed new requirements on the reliability of remote sensing data and the associated risks due to uncertainty. Environmental and sustainability regulations such as the EUDR focus heavily on the monitoring of permanent crops, including coffee and cocoa. Increasing effort has been put into remote-sensing-based coffee monitoring, with successful examples utilising emerging technologies such as deep learning and data fusion. Accurate mapping of coffee farms nevertheless remains challenging, due to the complex structural characteristics (e.g. tree height, density) related to the shaded agriculture system. How the coffee crops are cultivated along with the shade trees is heavily influenced by the agroforestry design. In this study, we explore whether we can map coffee farms with various ratios of shade trees in Colombia using medium- to high-resolution optical and SAR data. First, we developed a deep learning model using the spectral bands, vegetation indices and texture information derived from PlanetScope and Sentinel-2 imagery, backscatter from the Sentinel-1 and ALOS PALSAR radar sensors, and tree height information from the recently released tree height product from Meta. The training data are derived from lidar and orthophotos. The model is validated using in-situ data collected in 2021, including information on the tree species and the number of trees per species. We reclassified the in-situ data by the ratio of shade trees into four groups: 0-25%, 25-50%, 50-75%, and 75-100%. The model's accuracy is validated for each group.
The results indicate that vegetation indices (including NDVI, EVI, LSWI, NDRE, and MSI) and textural information are important features for coffee classification. The model's accuracy dropped from 80% to 56% as the ratio of shade trees increased.
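For reference, the vegetation indices named above can all be derived from surface-reflectance bands; a minimal sketch (band choices follow common Sentinel-2 usage, and the function and argument names are illustrative, not from the study):

```python
import numpy as np

def safe_ratio(num, den):
    """Elementwise ratio that returns 0 where the denominator is 0."""
    return np.divide(num, den, out=np.zeros_like(num, dtype=float), where=den != 0)

def vegetation_indices(blue, red, red_edge, nir, swir1):
    """Standard index formulas; inputs are reflectance arrays in [0, 1]
    (for Sentinel-2: blue=B2, red=B4, red_edge=B5, nir=B8, swir1=B11)."""
    return {
        "NDVI": safe_ratio(nir - red, nir + red),
        "EVI": 2.5 * safe_ratio(nir - red, nir + 6 * red - 7.5 * blue + 1),
        "LSWI": safe_ratio(nir - swir1, nir + swir1),
        "NDRE": safe_ratio(nir - red_edge, nir + red_edge),
        "MSI": safe_ratio(swir1, nir),
    }
```

In a workflow like the one described, each index layer would be stacked with the spectral bands, texture measures and SAR backscatter to form the feature input to the classifier.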
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Starling

Authors: Montaine Foch (Product Marketing Manager Sustainability), Patryk Jaskula
Affiliations: Airbus
Born in 2016 from a collaborative venture between Airbus and Earthworm Foundation, Starling is a geospatial solution designed to measure environmental impact across entire supply chains, aiding the delivery of deforestation-free and net-zero commitments. Starling uses various sources of satellite imagery, such as Sentinel data and Airbus' own constellation including Pléiades Neo, to monitor vegetation cover down to 30 cm resolution. With an unrivalled level of detail, Starling's basemap includes a reference layer that differentiates natural forest from forest plantation, planted forest and agroforestry, enabling accurate detection of deforestation. With Starling, companies rely on accurate and actionable data to engage their suppliers, mitigate risks and verify commitments. Starling delivers easy-to-use intelligence through an intuitive digital platform and tailored reports. Starling also supports companies in complying with the European Union Deforestation-free Regulation (EUDR) by monitoring suppliers' activities to help achieve no-deforestation commitments worldwide. We provide this service for companies involved in industries such as palm oil, coffee, cocoa, rubber, soy, timber, as well as pulp and paper. Furthermore, our partner Earthworm Foundation can provide accurate analysis of Starling data through their in-house team of experts and local field staff in key producing areas worldwide. Leveraging data exported from the Starling platform, Earthworm Foundation provides companies with tools to evaluate their zero-deforestation commitments. Action and progress reports can effectively communicate their strategy to internal or external stakeholders. Starling's 20+ years of time series data on land use change provides an ideal primary data source for calculating carbon footprints. These time series are used by third-party carbon specialists to model carbon stocks as of a specific date, along with carbon sinks and sources over a designated period.
Whether companies are already working with a GHG consultant or not, Starling can assist in establishing the most accurate and scientifically approved approach to calculating carbon footprints, adhering to industry standards such as SBTi and the GHG Protocol. In parallel, leveraging our high and very high-resolution satellite imagery, Starling automatically generates key analytics on land cover evolution and helps monitor the progress of your forest-positive projects. Airbus' satellite constellation, including Pléiades Neo with its best-in-market 30 cm resolution, ensures exceptional accuracy of information. Our imagery and technology provide a global view of your projects, with a resolution that is ideal for tree counting and tree detection. We collaborate with third-party carbon experts to offer tailor-made solutions that meet your specific needs. Together with Earthworm Foundation, Airbus combines over 10 years of experience in monitoring deforestation worldwide. Our team, located across multiple countries, is dedicated to offering unmatched support to help customers meet their no-deforestation and net-zero target commitments. Leveraging our expertise and technology, we provide a reliable, unbiased and global solution.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Natural Forests of the World: A 2020 Baseline for Deforestation Monitoring and EUDR Compliance

Authors: Maxim Neumann, Anton Raichuk, Yuchang Jiang, Melanie Rey, Petra Poklukar, Keith Anderson, Charlotte Stanton, Dan Morris, Drew Purves, Katelyn Tarrio, Nick Clinton, Radost Stanimirova, Michelle Sims, Sarah Carter, Dr. Liz Goldman
Affiliations: Google Deepmind, University of Zurich, Google Research, Google Geo, World Resources Institute (WRI)
Effective conservation strategies and efforts to mitigate climate change require accurate and comprehensive understanding of global forest cover. This study presents a novel methodology for mapping the extent of natural forests in 2020 at 10 m resolution. Natural forests encompass both primary forests (those that have remained undisturbed by human activity) and secondary forests (those that have regenerated naturally following disturbance). The methodology employs a state-of-the-art deep learning model based on a multi-modal, multi-temporal vision transformer. This innovative approach leverages multiple remote sensing data sources and captures short- and long-term temporal dynamics to provide a nuanced representation of natural forest cover, characterizing both the probability of the area being natural forest and the model's intrinsic uncertainty about its predictions. The resulting global natural forest map for the year 2020, developed to be in alignment with the European Union's Deforestation Regulation (EUDR) and following forest definitions from FAO FRA 2020, provides a critical baseline for monitoring deforestation activities and informing conservation initiatives. This baseline map enables the tracking of changes in forest cover over time, facilitating the identification of areas experiencing deforestation or degradation. Such information is essential for targeting conservation efforts, enforcing regulations, and promoting sustainable land-use practices. Our approach harmonizes ~30 label data sources by training a model for pattern matching of spectral, temporal and spatial/texture signatures of the natural forests. After defining the target label maps, we create the training and evaluation datasets including short-term (seasonal variations) and long-term (multi-year) time series information from Sentinel and Landsat satellites and auxiliary layers (climate, topography). 
When evaluating on a hold-out dataset of validation examples, which are geographically separate from the training data, we obtain a global F1-score of 85.2% (precision: 83.8%, recall: 86.7%). When evaluating on the completely separate Global Forest Management (GFM) validation dataset from 2015, which was not seen during training, we obtained an F1-score of 79.3% (precision: 80%, recall: 78.5%) after class reprojection. To further support users, a layer of model uncertainty is provided alongside the estimated probabilities of natural forest. This uncertainty layer acknowledges the inherent limitations of any modelling approach and encourages cautious interpretation of the results. By explicitly quantifying uncertainty, the study promotes transparency and helps decision-makers assess the level of confidence associated with the mapped forest areas. To date, there are few global forest and forest-type maps suitable for EUDR purposes. Many existing products rely on combining diverse data sources into a single global layer, which results in varying levels of quality and spatial inconsistencies. We suggest using an AI model that, when combined with forest mapping from other sources, can lead to higher-quality and more consistent results (consistent definitions, temporal coverage, etc.). Our approach supports the implementation of the EUDR by providing a baseline from which deforestation and degradation can be identified. It can also support other voluntary commitments, conservation initiatives, and efforts to protect and restore our most valuable forest ecosystems. At the symposium we will present the methodology, the generated product, its evaluation, and insights gained from this "Natural forests of the world" map.
References:
1. European Union, Regulation of the European Parliament and of the Council on the making available on the Union market and the export from the Union of certain commodities and products associated with deforestation and forest degradation, and repealing Regulation (EU) No 995/2010. https://data.consilium.europa.eu/doc/document/PE-82-2022-INIT/en
2. FAO: Global Forest Resources Assessment (FRA 2020), Terms and Definitions. https://openknowledge.fao.org/server/api/core/bitstreams/531a9e1b-596d-4b07-b9fd-3103fb4d0e72/content
3. Bourgoin, Clement; Verhegghen, Astrid; Degreve, Lucas; Ameztoy, Iban; Carboni, Silvia; Colditz, Rene; Achard, Frederic (2024): Global map of forest cover 2020 - version 2. European Commission, Joint Research Centre (JRC) [Dataset]. PID: http://data.europa.eu/89h/e554d6fb-6340-45d5-9309-332337e5bc26
4. Hunka, Neha, Laura Duncanson, John Armston, Ralph Dubayah, Sean P. Healey, Maurizio Santoro, Paul May, et al. 2024. "Intergovernmental Panel on Climate Change (IPCC) Tier 1 Forest Biomass Estimates from Earth Observation." Scientific Data 11 (1): 1127.
5. Mazur, Elise, Michelle Sims, Elizabeth Goldman, Martina Schneider, Marco Daldoss Pirri, Craig R. Beatty, Fred Stolle, and Martha Stevenson. n.d. "SBTN Natural Lands Map - Technical Documentation." https://sciencebasedtargetsnetwork.org/wp-content/uploads/2024/09/Technical-Guidance-2024-Step3-Land-v1-Natural-Lands-Map.pdf
6. Lesiv, Myroslava, Dmitry Schepaschenko, Marcel Buchhorn, Linda See, Martina Dürauer, Ivelina Georgieva, Martin Jung, et al. 2022. "Global Forest Management Data for 2015 at a 100 m Resolution." Scientific Data 9 (1): 199.
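As a quick consistency check, the reported F1-scores follow from the stated precision and recall values, since F1 is their harmonic mean:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (here in percent)."""
    return 2 * precision * recall / (precision + recall)

# Hold-out evaluation: precision 83.8 %, recall 86.7 %
print(round(f1_score(83.8, 86.7), 1))  # 85.2, as reported
# GFM 2015 evaluation: precision 80 %, recall 78.5 %
print(round(f1_score(80.0, 78.5), 1))  # 79.2 (79.3 reported, within rounding of the inputs)
```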
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Are Freely Accessible Global Forest Maps Suitable as Reference Tools for EUDR Compliance in Deforestation Monitoring?

Authors: Juliana Freitas Beyer, Dr. Margret Köthke, Dr. Melvin
Affiliations: Waldwirtschaft - Thünen-Institut, Waldwirtschaft - Thünen-Institut, Waldwirtschaft - Thünen-Institut
Forest ecosystems provide a wide range of services that are crucial for maintaining life on Earth. Therefore, regulatory efforts that may contribute to reducing deforestation and forest degradation, with the ultimate goal of eradicating them globally, are essential for preserving the ecological balance of the planet. A recent measure from the European Union (EU), the regulation on deforestation-free supply chains (EUDR, Regulation (EU) 2023/1115), is one of the latest attempts at promoting policy that aims to decrease deforestation and forest degradation. The regulation aims to prevent unsustainably produced commodities (palm oil, soy, cocoa, coffee, rubber, cattle, wood and their derivatives) from entering the EU market if they were produced after December 31, 2020 (the "cut-off date"). For that, EU-based companies can only import, export, or distribute EUDR-regulated products if they provide a due diligence statement confirming the products are deforestation-free, free from forest degradation, and compliant with national laws. To verify the statements submitted by operators, competent authorities may choose to conduct individual analyses of forest conditions using earth observation approaches such as assessing time-series satellite images and global forest maps. However, verifying and proving deforestation-free production using global map products are data-driven decisions that are subject to technical limitations. Notably, the EUDR does not stipulate any specific map as a binding regulatory decision mechanism. This suggests that multiple global, regional or local forest maps may be used to support the verification of deforestation-free production. For this reason, we perform a review of publicly available global Forest/Non-Forest (FNF) and Land Use/Land Cover (LULC) maps and their capability to match the EUDR requirements regarding mapping traits.
Although similar studies exist, they do not follow a systematic approach to the use of FNF and LULC as reference maps aligned with the EUDR framework. Our objectives are to identify, collect, describe and evaluate publicly available global FNF and LULC reference layers on their capability to match the EUDR requirements, based on two groups of relevant indicators: EUDR parameters (temporal proximity, spatial detail, forest cover definition) and technical parameters (reported accuracy metrics). To achieve our objectives, we first compile a comprehensive list of publicly available global FNF and LULC datasets and gather specific information for each dataset (steps 1 and 2). Following that, we assess the suitability of these datasets as potential EUDR reference maps based on the EUDR parameters. This serves as the initial filtering stage, identifying datasets that do not meet most EUDR requirements (referred to as "filtered datasets I"). Next, we analyze the filtered datasets I against their reported accuracy values and further refine them based on additional criteria, resulting in a set of "shortlisted" datasets (filtered datasets II). We finalize our assessment by comparing the mapped forest areas from the shortlisted datasets against the forest estimates for 2020 reported by the Food and Agriculture Organization of the United Nations (FAO). We identify 21 global datasets, 11 FNF and 10 LULC, spanning from 1992 to 2024 and with spatial resolutions ranging from 1 to 300 meters, of which 6 are considered not suitable as reference maps for deforestation detection in the EUDR framework, based on temporal proximity to the "cut-off date" and adequate spatial resolution (EUDR parameters). Regarding the accuracy metrics, for most datasets the ratio of producer's accuracy to user's accuracy (PA/UA) is above one.
A PA/UA ratio above one suggests a tendency to falsely classify non-forest areas as forest cover (false positives), which can lead to production areas being wrongly flagged as non-compliant. Shortlisted datasets generally overestimate forest areas compared to FAO reports, especially in Central America, the Caribbean, North America, and Europe, with less variability observed in South America. The results underscore the capability of global FNF and LULC datasets to function as deforestation verification tools within the EUDR context, based on different indicators and mapped forest area. The study also highlights the expected limitations of datasets based on the chosen indicators. It suggests that using multiple datasets at different scales is a better approach, as global datasets may fail to represent national definitions of forests or fragmented and heterogeneous landscapes, leading to false accusations of violations of deforestation-free production or improper detection of existing deforestation. No single dataset is flawless; however, certain maps are better suited to specific EUDR applications than others.
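The PA/UA diagnostic used above follows directly from the forest class of a confusion matrix; a minimal sketch with illustrative counts (the function name and numbers are not from the reviewed datasets):

```python
def forest_pa_ua(tp, fp, fn):
    """Producer's and user's accuracy for the forest class.

    tp: forest correctly mapped as forest
    fp: non-forest mapped as forest (commission error)
    fn: forest mapped as non-forest (omission error)
    """
    pa = tp / (tp + fn)   # producer's accuracy = 1 - omission error
    ua = tp / (tp + fp)   # user's accuracy = 1 - commission error
    return pa, ua, pa / ua

# More commission than omission errors -> PA/UA > 1:
# the map tends to overestimate forest extent.
pa, ua, ratio = forest_pa_ua(tp=900, fp=200, fn=100)
print(round(pa, 2), round(ua, 2), round(ratio, 2))  # 0.9 0.82 1.1
```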
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Operational EO-based commodity crop mapping to support land-use regulation: the case of soy

Authors: Xiao-Peng Song, Matthew Hansen, Bernard Adusei, Jeffrey Pickering, Andre Lima, Peter Potapov, Yu Xin, Laixiang Sun, Stephen Stehman, Marcos Adami, Carlos Di Bella
Affiliations: University of Maryland, The State University of New York College of Environmental Science and Forestry, INPE, University of Buenos Aires
Commodity crop expansion is the main driver of tropical deforestation, which is a major cause of global climate change and biodiversity loss. Removing deforestation from agricultural supply chains has been gaining momentum in academia and non-governmental organizations as well as the private sector. The Amazon Soy Moratorium in Brazil and the Palm Oil Moratorium in Indonesia have proven effective in reducing deforestation. Recently, the European Union (EU) has introduced the EU Regulation on Deforestation-free products (EUDR) with a wider scope and coverage. Soy is one of the seven listed commodities for the EUDR, as it is one of the key commodities driving deforestation in South America. Many satellite missions such as Landsat and Sentinel-2 provide consistent Earth Observations (EO) over the globe. However, the capability to convert EO data into high-quality crop maps over large areas and over time has been lacking. We have developed an end-to-end workflow for national-to-continental-scale commodity crop mapping using satellite data and statistical field surveys. Major components of the workflow include satellite Analysis Ready Data (ARD) generation, machine learning application for crop classification, statistical sampling, in situ crop surveys, crop area estimation and crop map validation. The method can generate internally consistent crop maps with validated accuracy and crop area estimates with known uncertainty. Using soy as an example, we illustrate the production of soy maps at 30 m and 10 m resolutions in an annual operational mode with > 95% overall accuracy. The maps over South America can be viewed and downloaded at: https://glad.earthengine.app/view/south-america-soybean. We use the soy maps to quantify and analyze crop expansion dynamics and associated natural vegetation loss at the biome and finer scales.
In South America, soy area has been growing consistently in all major biomes, including the Brazilian Amazon, Atlantic Forests, Cerrado, Chaco, Chiquitania and Pantanal. Soy-driven deforestation is concentrated at the active frontiers, with nearly half located in the Brazilian Cerrado. Soy area will continue to grow due to persistent global demand. We integrate satellite-based land-use change maps with economic modeling to evaluate the effectiveness of supply chain policies. Our analysis suggests that existing forest conservation policies effectively limited soy-induced deforestation in the Brazilian Amazon.
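The "crop area estimates with known uncertainty" mentioned above typically come from a stratified reference sample; a minimal sketch of the standard stratified estimator (the function name, stratum weights and counts are illustrative, not the project's numbers):

```python
import math

def stratified_area_estimate(strata):
    """Estimate a class's area proportion and its standard error
    from a stratified random reference sample.

    strata: list of (weight, n_sampled, n_class) tuples, where
    weight is the stratum's share of total map area and n_class
    is the number of sample units confirmed as the target class.
    """
    p_hat = sum(w * (k / n) for w, n, k in strata)
    var = sum(w**2 * (k / n) * (1 - k / n) / (n - 1) for w, n, k in strata)
    return p_hat, math.sqrt(var)

# Illustrative two-stratum design: "mapped soy" and "mapped non-soy"
p, se = stratified_area_estimate([(0.2, 100, 95), (0.8, 100, 2)])
print(round(p, 3), round(se, 4))  # proportion and its standard error
```

Multiplying the proportion and its standard error by the total region area yields an area estimate with a confidence interval, which is what gives the map-independent "known uncertainty".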
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: User Requirements From EU Member State Authorities for Verification of Due Diligence for EUDR

Authors: Dr. Sharon Gomez, Katja Berger, Christophe Sannier, Fabian Enßle, Martin Herold
Affiliations: GAF AG, GFZ Helmholtz Centre for Geosciences
As part of the European Green Deal strategy to reduce emissions, the European Union (EU) Deforestation-free Supply Chain Regulation (EUDR) came into force in June 2023. The new Regulation (EU) 2023/1115 requires companies exporting the commodities cattle, cocoa, coffee, oil palm, rubber, soya and wood to the EU to ensure that they are produced in areas that are deforestation-free and, in the case of wood, free of forest degradation, from December 2020 onwards. The 'operators' or 'traders' have the responsibility to submit due diligence declarations to show compliance with the different aspects of the Regulation, which are wide-reaching. The Member States (MS) have designated Competent National Authorities (CNAs), who will then have to undertake the verification of these submissions. The ESA World AgroCommodities (WAC) project, initiated in September 2024, has the objective of supporting the MS with Earth Observation (EO) based tools that can be used in the verification process. In this context, the Consortium engaged with several CNAs and undertook guided interviews to better understand both the legal and technical requirements for implementation of the Regulation. The feedback was compiled and presented to them in a Living Lab workshop to ensure completeness. The main topics on which the Consortium requested feedback were: overall requirements for EO tools; which commodities and related countries are considered priorities for developing and testing EO tools; which data/methods are needed for plausibility checks on the geolocation and area of the reported commodity; which data/methods are needed to verify whether deforestation has occurred; which frequency of updates is needed (yearly, monthly, near real time); and what would be the optimal spatial resolution for deforestation/agricultural mapping? The outcome of the user requirements shows an overall need for a two-step verification process.
First, an EO-based system should be developed to rapidly identify high-risk areas/polygons. Second, there is a requirement for tools to support more detailed inspection-level work. Additionally, there was a request to monitor land use changes, especially deforestation, before and after the cut-off date of December 31, 2020. The system should also assess potential changes in forest cover. The tools should all be user-friendly, with the potential to be integrated into the EU information system TRACES. Regarding forest degradation, it is crucial to detect the conversion of primary forests into plantations. Additionally, identifying signs of degradation, such as selective logging and clear-cutting, and differentiating between primary and secondary forest types is highly relevant. The time lag between wood harvest and regrowth would also be important to capture. CNA groups require regular updates and timely access to data. Efficient processing of large volumes of EO data is also essential. The preferred spatial resolution is between 4 and 10 meters, with higher resolutions for detailed inspections. The desired temporal resolution is mainly quarterly, in some cases monthly, but also yearly updates. Another main outcome of the guided interviews was the compilation of priority countries and related commodities for the test and demonstration site selection. These will be presented as part of this paper. The development of the EO-based monitoring system in the project will follow an iterative approach, closely adhering to agile principles and involving CNAs at different development stages. This collaborative process will ensure continuous refinement and alignment with user requirements and the underlying policy framework, ultimately delivering an optimal EO-based solution. Before the technical approach is finalized, a benchmarking of different state-of-the-art EO solutions will be undertaken.
This will require assessing how the methods can meet the main criteria required by the CNAs, as noted above. Results of the benchmarking exercise will then lead to the selection of technical approaches to be demonstrated and validated. The collection of these user requirements from key CNAs in the EU provides one of the first consultative efforts to specifically gather the geospatial needs of the agencies who will ultimately be responsible for ensuring compliance with the EUDR.
Add to Google Calendar

Tuesday 24 June 14:00 - 16:00 (Room 1.33)

Session: D.02.31 Edge-SpAIce Workshop - Solving Maritime Littering with AI & EO

This is an open workshop organized by Edge-SpAIce project’s consortium to discuss problems and solutions around marine plastic litter detection & tracking from space.
The workshop will start with a brief introduction to Edge-SpAIce: how it handles AI training, DNN optimisation, and the deployment and operation of AI on a satellite for marine litter detection.
Then, depending on the attendees, we will split into groups covering (1) marine litter problems, (2) image analysis to detect litter from space, (3) AI solutions and algorithm deployment to satellites for autonomous operations, and (4) policy enforcement for environmental and health benefits. We will start from the experience developed through Edge-SpAIce and go beyond its scope towards an idealistic solution for a clean and healthy Earth.

Everyone is invited to join the activity. Project info: https://edgespaice.eu/

Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Session: F.02.01 Harnessing the Power of Remote Sensing for Research and Development in Africa

Harnessing the power of remote sensing technology is instrumental in driving research and development initiatives across Africa. Remote sensing, particularly satellite imagery and big data analytics, provides a wealth of information crucial for understanding various aspects of the continent's environment, agriculture, and natural resources. This data-driven approach facilitates evidence-based decision-making in agriculture, land management, and resource conservation. Overall, remote sensing serves as a powerful tool for advancing research and development efforts in Africa, contributing to sustainable growth, environmental stewardship, and improved livelihoods across the continent. In this session, we aim to promote various initiatives fostering collaboration between African colleagues and those from other continents.

Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: User-Integrated National Scale Drought Modelling Framework in Kenya

Authors: Maximilian Schwarz, Gohar Ghazaryan, Dr. Tobias Landmann, Waswa Rose Malot, Tom Dienya
Affiliations: Remote Sensing Solutions Gmbh, Leibniz Centre for Agricultural Landscape Research (ZALF), icipe, Regional Centre for Mapping of Resources for Development, Ministry of Agriculture of Kenya
Accurate monitoring of agricultural systems is indispensable for humanity and sustainability, as it is key to addressing several Sustainable Development Goals, including those on hunger, biodiversity loss, and climate change. To close the yield gap, it is critical to understand how climate extremes and field management impact yield in a spatially explicit and scalable way. The increasing volume of freely available Earth Observation (EO) data offers opportunities to monitor intra-seasonal changes in abiotic stressors in croplands accurately and frequently by tracking subtle changes in time series. According to the Intergovernmental Panel on Climate Change (IPCC), drought is set to increase globally in frequency and severity due to climate change. Both drought frequency and severity increased notably in the previous decades, while drought risk is amplified by numerous factors such as population growth or fragmented governance in water and resource management. Monitoring drought hazard and impact is highly critical due to the widespread effects of drought on various sectors of the agroecological system. One of the most vulnerable sectors impacted by drought is agriculture: droughts are a significant threat as they can adversely impact food security by reducing agricultural production. For efficient drought management, including understanding which agricultural sectors are most affected, comprehensive drought characterization and monitoring are essential. It is especially important to continuously monitor droughts in vulnerable areas, where poverty, food insecurity, and income inequality can result in adverse conditions. In this context, Africa represents one of the regions where climate change has the largest impact on locations and communities globally.
Within the scope of the ADM – Kenya (Integrated use of multisource remote sensing data for national scale agricultural drought monitoring in Kenya) project, Kenya was chosen as a representative country to implement and integrate drought monitoring activities. Food security and the economic sector in Kenya heavily rely on agricultural output, while the country has been struck by widespread drought in recent years. Kenya is characterized by different ecological biomes ranging from humid to arid climatic conditions, making it a challenging but also exemplary country in the African context for national-scale drought monitoring. Despite recent advances in drought risk and impact modelling, there is still a lack of coherent and explicit information on drought hazard, vulnerability, and risk across larger areas. Producing spatially explicit information on drought hazard, vulnerability and risk thus poses multiple challenges. Whereas global models do not allow for the characterization of regional drought events due to low spatial resolution, local and regional models are often not transferable to other countries or regions. To address this gap, we developed a spatially explicit drought hazard, vulnerability, and risk modelling framework for agricultural land, grass- and shrubland areas based on rainfall and vegetation index anomalies. The original model was initially developed for the USA, Zimbabwe and South Africa. While the original model was based solely on MODIS (Moderate Resolution Imaging Spectroradiometer) data, it was extended during the ADM – Kenya project by incorporating Sentinel-3 data, to ensure the future sustainability of the framework and to provide reliable and accurate results moving forward. Through this extension, the modelling framework is now one of the first to take advantage of a 20+ year time series of EO data combining MODIS and Sentinel-3.
The newly developed drought modelling framework is based on TAMSAT rainfall data (SPI3 – 3-monthly Standardized Precipitation Index), MODIS and Sentinel-3 data (NDII – Normalized Difference Infrared Index, NDVI – Normalized Difference Vegetation Index, LST – Land Surface Temperature), and national yield statistics provided by the FAO (Food and Agriculture Organization). Due to the lack of in-situ data, the model results were successfully cross-verified against global drought models such as the GDO (Global Drought Observatory), FEWS NET data (Famine Early Warning Systems Network), and national drought reports. With this submission we present the results of an accurate and reliable drought modelling framework developed for future use in Kenya; close collaboration with national incubators vastly improved its performance. The drought modelling framework provides monthly drought probability maps on a national scale that can also be used in NRT (Near Real Time) applications and can be further integrated into EWS (Early Warning Systems). As drought impact and vulnerability can be reduced by the implementation of different management practices, it is furthermore important to provide monitoring tools that support decision making for sustainable management. To this end, a second product, mapping irrigation systems on a national scale, was developed in close collaboration with the national incubators. The Sentinel-2 based product maps irrigated and rainfed cropland at 10 m spatial resolution for Kenya. While this product stands alone in providing information on areas in need of support and intervention for political decision makers, it also feeds directly into the Sentinel-3 based drought modelling framework. With this submission we want to present a comprehensive drought modelling framework that not only addresses drought hazard, but also incorporates farming practices and supports future drought impact mitigation strategies.
To ensure the future usage, uptake and integration of the proposed products, all of them were developed in close collaboration with national incubators in Kenya. While advanced processing chains were developed during the project, detailed documentation and guidance were provided to users for a seamless uptake. The presentation will therefore also give a brief overview of how these freely available products and processing chains can be used and accessed by local incubators, as was already demonstrated in a user workshop in Kenya during the project.
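The SPI3 rainfall input used by the framework can be sketched as follows; this is a nonparametric variant that uses empirical plotting positions rather than the gamma fit applied operationally, and the function name is illustrative:

```python
from statistics import NormalDist

def spi_empirical(totals):
    """Nonparametric SPI for a series of 3-month rainfall totals.

    Each total's empirical cumulative probability (Hazen plotting
    position) is mapped to a standard-normal quantile, so values
    below -1 indicate moderate drought and below -2 extreme drought.
    The operational SPI fits a gamma distribution instead, but the
    rank-based variant behaves similarly for monitoring purposes.
    """
    n = len(totals)
    order = sorted(range(n), key=lambda i: totals[i])
    ranks = [0] * n
    for position, i in enumerate(order):
        ranks[i] = position + 1            # rank 1 = driest period
    inv = NormalDist().inv_cdf
    return [inv((r - 0.5) / n) for r in ranks]

# The driest period in the record gets the most negative SPI:
print(min(spi_empirical([12.0, 85.0, 40.0, 0.0, 60.0])))
```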
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: Advancing Earth Observation in Africa: Mid-term achievements of the WG Africa Copernicus Training of Trainers program in three languages

Authors: Linda Tomasini, Ali Arslan Nadir, Agnès Bégué, Nico Bonora, Catarina Duarte, Maria Daraio Girolamo, Jean-François Faure, Philippe Gisquet, Carlos Gonzales Inca, Eric Hallot, Michal Krupinski, Cécilia Leduc, Marc Leroy, Benoît Mertens, Benjamin Palmaerts, Marietta Papakonstantinou, Cristina Ponte Lira, Carolina Sa, Dimitra Tsoutsou
Affiliations: CNES, FMI, CIRAD, ISPRA, Air Centre, ASI, IRD, Visioterra, University of Turku, ISSEP, CBK PAN, IDGEO, Space4Dev, NOA, University of Lisbon, PT Space, PRAXI Network
The WG Africa project is a collaborative initiative involving 12 national institutions from 8 European countries. Its aim is to support and enhance the utilization of Copernicus data and services in Africa through a "training of trainers" program, which is funded by the European Commission and implemented in three languages: French, Portuguese, and English. The primary goal is to assist African academic or private trainers in integrating Copernicus-based modules into their training programs or curricula. This initiative complements other capacity-building efforts in the sector of Earth Observation from space in Africa, such as GMES & Africa. The project started in October 2022 and has completed its initial phase. During 2023, 30 future trainers originating from 18 African countries were selected, and an extensive 10-week training program on the use of Copernicus data and services was developed and delivered to them online. Thematic training sessions on topics such as agriculture, forests, disaster management, hydrology, health, and mangrove monitoring were also organized and delivered by experts from different European institutions and research entities. The project has now entered its second phase, in which each African trainer has been developing and implementing his or her own local training sessions with the support of European partners. In 2024, 15 local training sessions addressing topics such as hydrology and land use/land cover management were implemented by the African trainers in 12 countries.
In total, more than 1000 students, practitioners and decision makers were introduced to and trained in the use of Copernicus data and services, to the benefit of scientific studies, policy making or enforcement, and private initiatives in favor of Copernicus applications within the targeted countries. With a relatively small budget, this action has already proven effective thanks to the cooperation scheme and sharing of training contents among the European partners, and to the co-development approach with the African partners, which has enabled a leveraging effect and multiplying factor in reaching out to new Copernicus users. A digital learning platform hosting training contents, lecture recordings, exercises and various training resources has also been developed to support the training activities and is a valuable asset showcasing Copernicus use cases in Africa. It has been opened to the GMES & Africa community. In addition to these training-related activities, WG Africa is also organizing webinars throughout the project to raise awareness and promote the use of Copernicus data and services to a wider audience in Africa. New local training sessions will take place in 2025 and the project will end in October. As continuity of activities and collaboration is essential to build sustainable Copernicus user uptake in Africa, it is the authors' intention to maintain the WG Africa network beyond the project and to apply to be part of the Copernicus Ambassadors network in Africa.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: Utilizing the Potential of Hyperspectral and Thermal EO Data for Drought and Crop Water Stress Monitoring in Africa – Results From the ARIES Project

Authors: Silke Migdall, Veronika Otto, Dr. Heike Bach, Jeroen Degerickx, Louis Snyders, Aolin Lia, Kaniska Mallick
Affiliations: Vista-Geo GmbH, VITO, LIST
ARIES is part of ESA’s EO AFRICA (African Framework for Research Innovation, Communities and Applications), an initiative that focuses on building African-European R&D partnerships and facilitating the sustainable adoption of Earth Observation (EO) and related space technology in Africa. The focus of the project was on exploring the potential of upcoming satellite missions (particularly hyperspectral and thermal) to address water management and food security issues in Africa, as climate change is already affecting many regions of the continent. Some already drought-prone regions in the Sahel zone have become even drier, while regions where water was formerly plentiful, such as Zambia, have recently experienced drought conditions. Within ARIES, experimental EO analysis techniques for hyperspectral and thermal data have been developed and validated as a first step towards a new and deeper understanding of the crop water situation under different climatic and management conditions. The new ARIES products are (1) a thermal drought indicator derived from ECOSTRESS data, (2) a high-resolution crop water stress indicator from a combination of Sentinel-3, Sentinel-2 and ECOSTRESS data, and (3) high-resolution leaf area, canopy and leaf water content from hyperspectral data (PRISMA, EnMAP). In terms of scientific advancements in thermal remote sensing, we have developed a new approach to derive information on drought using the concept of thermal inertia and have gained a better understanding of the importance of properly accounting for directionality effects. ARIES worked together with Early Adopters covering western (AGRHYMET Regional Centre and AAH Action Against Hunger in Niger) and southern (Zambian Agricultural Knowledge and Training Centre LTD in Zambia) Africa. Thereby, the developed algorithms and approaches could be validated, tested and evaluated in different geographic regions.
The newly developed products show clear potential in supporting farmers in optimizing water productivity at individual field scale. To ensure that the algorithms and results can be further utilized, they have been integrated as processors on a cloud platform, the Food Security Explorer, which also hosts the ECOSTRESS data. After in-depth discussions with the Early Adopters and other stakeholders, and the compilation of a policy matrix, possible policy impacts of the EO analyses were deduced. The potential, limitations and recommendations for the Copernicus Expansion missions CHIME and LSTM were analysed. In this presentation, the most prominent results of the activities within ARIES will be shown and the insights from the project towards the planning of future missions will be shared. The project ARIES is one of the EO AFRICA Explorers and is funded by ESA under ESA Contract No: 4000139191/22/I-DT.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: Harnessing Remote Sensing for Mangrove Mapping and Restoration in Support of Protected Area Management in West Africa

Authors: Celio De Sousa, Dr Lola Fatoyinbo, Abigail Barenblitt, Dr Adia Bey, Dr Neha Hunka
Affiliations: Nasa Goddard Space Flight Center, University of Maryland Baltimore County, University of Maryland
Despite their well-documented ecological, economic, and social benefits, mangroves continue to experience alarming rates of degradation and destruction, with global losses of 1-2% per year, rates that surpass those of terrestrial tropical forests. West and Central Africa hold approximately 11% of the world’s mangrove area, which serves as a vital global carbon sink. Their protection is critical in the context of climate change, yet conservation efforts face persistent challenges. Coastal conservation projects within Marine Protected Areas (MPAs) in the region struggle with insufficient local funding, relying heavily on international funding sources, a model that is unsustainable in the long term. Consequently, identifying durable, locally driven funding solutions for these protected areas has become a pressing priority. To address these challenges, this study explores the use of remote sensing technology to support mangrove mapping, restoration, and the effective management of MPAs. Our approach integrates advanced satellite-based mapping and monitoring techniques to assess the potential for initiating blue carbon projects while improving regional cooperation for climate change mitigation and adaptation. Using a Landsat-based compositing approach (LandTrendr) combined with machine learning classifiers, we developed annual land cover maps spanning 2000 to 2022 for approximately 275,000 km² of coastline, covering more than 235 MPAs between Mauritania and the Democratic Republic of Congo. This analysis identified eight key land cover classes, including mangrove forests. The results provide a detailed understanding of annual trends in land cover and mangrove extent, offering critical insights for prioritizing restoration areas and identifying key MPAs where carbon-financed projects could be developed. As an example, we focused on Guinea-Bissau, where mangroves are one of the main land cover classes within protected areas.
We found that some protected areas promoted the restoration of mangroves over time. These findings underscore the potential of remote sensing to guide sustainable mangrove conservation and restoration initiatives, strengthen protected area management, and contribute to regional climate adaptation and mitigation efforts.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: Harnessing remote sensing for monitoring turbidity dynamics in small reservoirs to inform agriculture and aquaculture development in sub-Saharan Africa

Authors: Stefanie Steinbach, Anna Bartels, Jun.-Prof. Dr. Valerie Graw, Jun. Prof. Dr. Andreas Rienow, Dr. Bartholomew Thiong'o Kuria, Dr. Sander Zwart, Prof. Dr. Andrew Nelson
Affiliations: Institute of Geography, Ruhr University Bochum, International Institute for Geo-Information Science and Earth Observation (ITC), University of Twente, Institute of Geomatics, GIS and Remote Sensing, Dedan Kimathi University of Technology, International Water Management Institute
The widespread construction of small dams across sub-Saharan Africa, driven by their affordability and simplicity, has led to the establishment of thousands of small reservoirs. These reservoirs are crucial for supporting smallholder agriculture, farmer-led irrigation, fishing and aquaculture, playing a significant role in rural livelihoods. Sustained water supply during periods of scarcity contributes to the resilience of rural populations in times of climate change and variability. However, water quality challenges, including turbidity fluctuations, cause small reservoir performance to vary strongly. Turbidity, a measure of water clarity, indicates soil erosion and pollution from human activities like farming. Elevated turbidity levels can severely impact aquaculture by increasing fish mortality rates, making effective turbidity monitoring crucial. However, monitoring turbidity in small reservoirs and understanding its underlying drivers remain significant challenges. Seasonal turbidity monitoring, in particular, is essential for the successful use of small reservoirs in irrigation and aquaculture. Yet, the lack of such data, the low temporal resolution of existing datasets, and the high costs of continuous measurement contribute to the high risk of failure in irrigation and aquaculture projects. Remote sensing-based turbidity monitoring, if successfully implemented, provides a cost-effective means to track turbidity dynamics across large areas and numerous reservoirs, thereby serving as a valuable tool to mitigate risks and inform sustainable investments. Building on this potential, this study investigates the applicability of remote sensing and machine learning-based approaches for estimating turbidity and its influencing factors, developed using data from Kenya, and evaluates their transferability to a study site in northern Ghana.
Sentinel-2 time series images were processed with the Case 2 Regional Coast Colour (C2RCC) processor in ESA SNAP and calibrated against water samples collected during the rainy-to-dry season transition: in 10 reservoirs in the central highlands of Kenya in January 2023 and in 15 reservoirs in northern Ghana in October 2024. The findings highlight contrasting turbidity patterns both within and across the study sites, offering insights into localized and regional factors influencing turbidity. Elevated turbidity levels, frequently or even permanently exceeding safe thresholds during the observation period from 2017 to 2023, were observed in the northern part of the Kenyan study site and across the Ghana study site. Turbidity dynamics were linked to factors such as land management, meteorology, and topography, with varying degrees of influence. Our results highlight the complex interactions driving water quality in small reservoirs and demonstrate the capacity and applicability of scalable, site-specific turbidity monitoring using openly accessible tools and remote sensing data. Our methodology provides critical insights for evaluating the suitability of small reservoir sites for irrigation agriculture and aquaculture and their potential future development. By enhancing water resource management with continuous turbidity monitoring, this work can support the improved planning and success of small reservoir initiatives, ultimately contributing to agricultural productivity and rural livelihoods in sub-Saharan Africa.
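The calibration step described above, relating a satellite-derived water quality proxy to in-situ turbidity samples, can be sketched as a simple regression. The sample values below are invented for illustration and do not come from the study; only the general approach (fit, then predict over the full time series) is what the abstract describes.

```python
# Hypothetical calibration of a satellite-derived turbidity proxy (e.g. a C2RCC
# suspended-matter output) against matched in-situ turbidity measurements.
import numpy as np

# Invented matchups: satellite proxy (g/m^3) and in-situ turbidity (NTU)
proxy   = np.array([2.1, 5.3, 8.7, 12.0, 20.5, 33.2])
in_situ = np.array([4.0, 9.8, 16.1, 22.5, 39.0, 61.7])

# Ordinary least-squares fit: turbidity ~ a * proxy + b
a, b = np.polyfit(proxy, in_situ, deg=1)
predicted = a * proxy + b
r2 = 1 - np.sum((in_situ - predicted) ** 2) / np.sum((in_situ - in_situ.mean()) ** 2)
print(f"turbidity = {a:.2f} * proxy + {b:.2f},  R2 = {r2:.3f}")
```

Once fitted, such a relation can be applied to every cloud-free Sentinel-2 acquisition to build the multi-year turbidity record the study analyses.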
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: Facilitating African-European R&D Partnership in Earth Observation Through Collaborative Research: EO AFRICA R&D Research Calls

Authors: Dr. Serkan Girgin, Dr. Mahdi Farnaghi, Dr. Diana Chavarro Rincon, Dr. Zoltan Vekerdy
Affiliations: University of Twente, Hungarian University of Agriculture and Life Sciences
The EO AFRICA R&D Facility is the flagship of the EO AFRICA initiative that aims to facilitate the sustainable adoption of EO and related space technology in Africa through an African-European R&D partnership. For this purpose, the Facility supports capacity development for research by organizing tailor-made domain-specific training courses and webinars, and capacity development through research by enabling research projects co-developed and run by African and European research tandems. During its first phase in 2020-2023, the Facility launched two research project calls to support African-European collaborative efforts in developing innovative, open-source EO algorithms and applications offering African solutions to African challenges, using cutting-edge cloud-based data access and computing infrastructure. The calls aimed at addressing emerging research topics in food security and water scarcity, making full use of the digital transformation in Africa and the observation capabilities of the ESA and Third Party EO missions. More than 100 project proposals were submitted by African and European co-investigators affiliated with public and private research institutions in 29 African and 17 European countries, covering a wide range of topics such as crop monitoring, yield forecasting, climate change, flood mapping, livestock mapping, soil monitoring, lake monitoring, and biodiversity. Following an exhaustive peer-review and evaluation process considering 33 criteria grouped under 6 categories, including qualifications of the project team, scientific quality of the proposed work, innovation and impact potential, use of EO data, use of cloud-based ICT infrastructure, and budget, the Facility provided financial support and ICT infrastructure to 30 research projects. Each project developed an innovative algorithm or research workflow delivered as open-source research code, preferably as interactive notebooks, together with open-access research data.
The results of the research projects are also published as open-access scientific publications. In this talk, the details of the research calls will first be described, from the preparation of the calls up to the closure of the research projects, with a special emphasis on the evaluation process. Then, an overview of the submitted proposals will be provided, including research questions, EO data and analysis methods, work and budget distribution, geographical distribution, and gender balance. The results of the funded projects will be summarized and, finally, the lessons learned from the research calls will be discussed in detail, including challenges in using cloud computing infrastructure, performing collaborative research as tandems, budget utilization, and Open Science practices.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.14)

Session: A.08.13 Multiple stressors on Ocean Health and Marine Biodiversity: Lessons Learned and Path Forward

The ocean plays a critical role in mitigating climate change by absorbing heat and human-induced CO2 emissions, while also providing essential ecosystem services that support human well-being. With rising anthropogenic pressures, the ocean is warming, acidifying and losing its oxygen content. In addition, pollution and overfishing further modify the biological environment. The intensity and frequency of extreme conditions have consequently been rising, posing a significant threat to marine ecosystems.
Cumulative stressors affect a wide range of ecosystem services, operating across multiple scales from cellular-level physiological responses to broader community dynamics. Interactions between stressors are complex and their cumulative effects are not always additive, leading to non-linear, synergistic, or antagonistic outcomes. Thus, predicting the combined impact of stressors on marine ecosystems remains a significant challenge.
Recent advancements in integrating scientific approaches, such as in-situ observations, Earth Observation, numerical modelling, and Artificial Intelligence, have enhanced our understanding of cumulative impacts of these stressors on marine biodiversity and ecosystem services. However, a comprehensive understanding of how marine ecosystems respond to multiple stressors is still lacking. This uncertainty hinders efforts to accurately assess marine environmental status and ocean health.

Our goal is to bring together experts to consolidate current knowledge and address future challenges. Specifically, we aim to:
• Identify gaps in knowledge, observation, technology, and methodology that need to be addressed to improve monitoring and assessment of ocean health and marine biodiversity.
• Pinpoint primary stressors that require detection and monitoring, and explore how EO-based techniques can support their identification.
• Strengthen our understanding of mechanistic links between physical, biogeochemical, and biological processes affecting marine biodiversity, improving predictive capabilities for future ocean health scenarios.

By working in collaboration, we can enhance our ability to monitor, understand, and define mitigation strategies for the impacts of multiple stressors on the health of the ocean and its ecosystems.

Chairs:


  • Federico Falcini - Institute of Marine Sciences, National Research Council of Italy, Rome, Italy
  • Angela Landolfi - Institute of Marine Sciences, National Research Council of Italy, Rome, Italy
  • Victor Martinez Vicente, Earth Observation Science and Applications, Plymouth Marine Laboratory, Plymouth, United Kingdom

Speakers:


  • Bror F. Jönsson - Ocean Processes Analysis Laboratory, University of New Hampshire
  • Yolanda Sagarminaga - AZTI Marine Research, Basque Research and Technology Alliance
  • Branimir Radun - Oikon Ltd., Institute of Applied Ecology
  • Laura Zoffoli - Institute of Marine Sciences, National Research Council of Italy

Add to Google Calendar

Tuesday 24 June 14:15 - 14:35 (EO Arena)

Demo: D.03.32 DEMO - NASA-ESA-JAXA EO Dashboard

#stac

This demonstration will showcase the features of the NASA-ESA-JAXA EO Dashboard. It will cover the following elements:
- Dashboard exploration - discovering datasets, using the data exploration tools
- Browsing interactive stories and discovering scientific insights
- Discovering Notebooks in the stories and how to execute them
- Creating new stories using the story-editor tool
- Browsing the EO Dashboard STAC catalogue
- Exploring the documentation
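Browsing a STAC catalogue, as in the list above, amounts to walking a tree of JSON documents: a catalogue links to collections, which link to items (individual scenes or granules). The sketch below walks a tiny in-memory mock of that structure; a real client such as pystac-client would instead resolve "href" links over HTTP against the actual EO Dashboard catalogue, and the ids shown here are invented.

```python
# Minimal mock of a STAC tree. Real STAC links point to other JSON documents
# via "href"; here the child collection is embedded inline for simplicity.
catalog = {
    "type": "Catalog",
    "id": "eo-dashboard",  # hypothetical id, for illustration only
    "links": [
        {"rel": "child", "collection": {
            "id": "no2-monthly",
            "items": [
                {"id": "no2-2020-01", "datetime": "2020-01-01T00:00:00Z"},
                {"id": "no2-2020-02", "datetime": "2020-02-01T00:00:00Z"},
            ],
        }},
    ],
}

def list_items(cat):
    """Yield (collection_id, item_id) pairs reachable from the catalogue root."""
    for link in cat["links"]:
        if link["rel"] == "child":
            coll = link["collection"]
            for item in coll["items"]:
                yield coll["id"], item["id"]

for coll_id, item_id in list_items(catalog):
    print(coll_id, item_id)
```

The same traversal pattern underlies the dashboard's dataset discovery tools.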

Speakers:


  • Diego Moglioni - Starion for ESA
  • Anca Anghelea - ESA


The demo will be performed by the joint ESA, NASA and JAXA development team.
Add to Google Calendar

Tuesday 24 June 14:30 - 15:30 (ESA Agora)

Session: F.02.19 Austrian Space Cooperation Day - Human & Robotic Exploration, Space Transportation

The Austrian space community and international testimonials take a kaleidoscopic look at products and services “made in Austria”, highlighting existing cooperation and inviting future cooperation within international partner networks. With a view to the ESA Ministerial Conference in 2025, the great importance of ESA programmes for maintaining and improving Austria's excellence in space will be explained using technological and commercial success stories. In the FFG/AUSTROSPACE exhibition, Earth observation space hardware and software products manufactured in Austria are presented (next to the Agora area and the ESA booth in the Main Entrance Hall).

Speakers:


Keynote


  • Hermann Ludig Möller - Director of the European Space Policy Institute

Human and Robotic Exploration: Framework and Austrian Success Stories


  • Christian Fidi - TTTech

Space Transportation: Framework and Austrian Success Stories


  • Georg Grabmayr - Director Institutional Sales at Beyond Gravity

Panel Discussion


  • Andreas Geisler - Head of ALR within FFG
  • Carmen Possnig - Austrian reserve astronaut
  • Hermann Ludig Möller - Director of the European Space Policy Institute
  • Christian Fidi - TTTech
  • Georg Grabmayr - Director Institutional Sales at Beyond Gravity
Add to Google Calendar

Tuesday 24 June 14:37 - 14:57 (EO Arena)

Demo: D.02.26 DEMO - Putting the A.I. in F.A.I.R.: Unlocking Reproducible Machine Learning through openEO

The integration of Machine Learning (ML) and Deep Learning (DL) in Remote Sensing has revolutionized the way a vast amount of Earth Observation (EO) data is processed and analyzed. These advanced computational techniques have not only enabled faster and more efficient data processing but have also significantly improved the accuracy and scalability of insights derived from remote sensing data. ML-driven approaches are particularly valuable for scene classification, object detection, and segmentation applications, where identifying complex spatial patterns and subtle variations is critical.
To make ML more accessible for EO practitioners, openEO integrates key algorithms such as Random Forest, a widely used classification model known for its robustness and accuracy. This method enhances EO data classification by combining predictions from multiple decision trees, reducing the need for deep ML expertise. Additionally, the growing demand for more sophisticated ML techniques has led to the adoption of foundation models pre-trained on massive datasets and fine-tuned for EO applications. These models enable more generic, scalable, and automated classification pipelines without sacrificing precision.
This demonstration will showcase real-world mapping projects that have successfully implemented ML-powered classification workflows using openEO. Attendees will gain insights into how foundation models are being integrated to push the boundaries of EO analysis, offering new possibilities for large scale and automated geospatial data processing.
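The Random Forest idea mentioned above, many decision trees voting on the class of each pixel's spectral feature vector, can be illustrated stand-alone with scikit-learn (this is not the openEO API itself, and the band values below are synthetic):

```python
# Toy pixel classification: two spectral features [red, NIR] for two classes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Synthetic per-pixel reflectances: water is dark in NIR, vegetation is bright
water = rng.normal(loc=[0.05, 0.03], scale=0.01, size=(100, 2))
veg   = rng.normal(loc=[0.06, 0.45], scale=0.05, size=(100, 2))
X = np.vstack([water, veg])
y = np.array([0] * 100 + [1] * 100)  # 0 = water, 1 = vegetation

# An ensemble of 50 trees; each tree sees a bootstrap sample and votes
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify two unseen pixels: a dark water-like one and a bright-NIR one
print(clf.predict([[0.05, 0.02], [0.07, 0.50]]))  # prints [0 1]
```

In an openEO workflow the same training and prediction steps run server-side over full data cubes rather than on in-memory arrays.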

Speakers:


  • Victor Verhaert - VITO
Add to Google Calendar

Tuesday 24 June 15:00 - 15:20 (EO Arena)

Demo: D.03.26 DEMO - The Geo-Quest mobile application: Easy and accurate Earth Observation-enhanced ground data collection

Geo-Quest is a mobile app for the collection of high-quality reference data on the ground. Such reference data are needed to improve AI models and remote-sensing applications. The quests in the Geo-Quest app are campaigns related to different themes, where the common denominator is the collection of information on the ground. The app combines information from different sensors in the phone, such as the camera, the GPS receiver, the gyroscope, and the accelerometer, to support the user’s ground data collection, including augmented reality. Examples of current quests include Crop Capture, Tree-Quest, and Forest-Quest. Crop Capture allows users to record agricultural information such as the crop type and location, including parcel delineation over satellite imagery, and can be supplemented by pictures taken in the field being surveyed. Some of the data from Crop Capture will support the ESA-funded WorldCereal project, which provides very high-resolution global crop maps. Tree-Quest allows users to collect measurements of individual tree attributes such as the tree diameter, tree height, and tree species, which are then used to derive above-ground biomass, while Forest-Quest is used to measure the basal area of a forest plot. Geo-Quest can also store satellite imagery for offline use in areas where the internet is not available: measurements can be made on the ground and uploaded once the user is back online, after which they are made available to the community through the quest’s web platform. Thus, all the information collected in Geo-Quest is openly available for anyone to use.

This demonstration will allow users to download the application and test the available quests. It will include a slide presentation and a Q&A session, followed by hands-on testing of the app on-site. A video showcasing the capabilities of the app will also be running in the background.
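The plot-level quantity Forest-Quest targets reduces to simple geometry: each stem's basal area is its cross-section at breast height, pi * (d/2)^2, and plot basal area is the sum of stem cross-sections scaled to per-hectare units. A sketch with invented sample diameters (the app's internal computation may differ):

```python
# Plot basal area from diameters at breast height (DBH).
import math

def basal_area_per_ha(diameters_cm, plot_area_m2):
    """Sum of stem cross-sections (m^2) scaled to one hectare (10,000 m^2)."""
    per_stem = (math.pi * (d / 100 / 2) ** 2 for d in diameters_cm)
    return sum(per_stem) * 10_000 / plot_area_m2

diameters = [12.0, 18.5, 25.0, 31.2]  # DBH of the stems in the plot, cm
print(round(basal_area_per_ha(diameters, 400), 2))  # prints 4.09 (m^2/ha, 20 m x 20 m plot)
```

Above-ground biomass in Tree-Quest would additionally combine diameter, height, and species via allometric equations.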

Speaker:


  • Juan Carlos - IIASA

Add to Google Calendar

Tuesday 24 June 15:22 - 15:42 (EO Arena)

Demo: D.03.35 DEMO - Introducing EarthCODE

The objective of this brief demonstration is to introduce EarthCODE.

The Open Science and Innovation Vision included in ESA’s EO Science Strategy (2024) addresses 8 key elements: 1) openness of research data; 2) open-source scientific code; 3) open access papers with data and code; 4) standards-based publication and discovery of scientific experiments; 5) scientific workflows reproducible on various infrastructures; 6) access to education on open science; 7) community practice of open science; and 8) EO business models built on open-source. EarthCODE (https://earthcode.esa.int) is a strategic ESA EO initiative to support the implementation of this vision.

EarthCODE (Earth Science Collaborative Open Development Environment) will form part of the next generation of cloud-based geospatial services, aiming towards an integrated, cloud-based, user-centric development environment for the European Space Agency’s (ESA) Earth science activities. EarthCODE aims to maximise the long-term visibility, reuse and reproducibility of the research outputs of such projects by leveraging FAIR and open science principles, thus fostering a sustainable scientific process. EarthCODE proposes a flexible and scalable architecture developed with interoperable open-source blocks, with a long-term vision of evolving by incrementally integrating industrially provided services from the portfolio of the Network of Resources.

During this 20-minute demo, we will cover how collaboration and federation are at the heart of EarthCODE. As EarthCODE evolves, we expect to provide solutions allowing the federation of data and processing. EarthCODE has the ambition to deliver a model for a collaborative open development environment for Earth system science, where researchers can leverage the wide range of EO platform services available to conduct their science, while also making use of FAIR open science tools to manage data, code and documentation, create end-to-end reproducible workflows on platforms, and discover, use, reuse, modify and build upon the research of others in a fair and safe way.

Speakers:


  • Deyan Samardzhiev - Lampata
  • Ewelina Agnieszka Dobrowolska - Serco
  • Anne Fouilloux - Simula Labs
Add to Google Calendar

Tuesday 24 June 15:30 - 16:15 (ESA Agora)

Session: C.01.29 Crafting the European Earth Observation Ecosystem 2040+: Needs, Offers, Gaps leading to ideas for a future EO Ecosystem architecture

What should our European Earth Observation Ecosystem look like in 2040+?
Which future users’ needs and societal challenges will drive the system-of-systems?
Which components of the ecosystem will be the game-changer?
Which key characteristics are essential?
The European EO Ecosystem 2040+ (“The European Blueprint for Earth Observation”) is a cross-cutting vision for the future of EO in Europe. It will help join forces across the various EO actors (of scientific, commercial and operational nature) and highlight future needs for scientific research and development, innovative new EO mission ideas and technologies, and mission data exploitation with applications that address new Earth system science and deliver societal benefits.
This Agora aims to identify and discuss actions, in support of European citizens and policies, to implement, sustain, operate, and evolve the performance and capacity of Earth Observation in Europe as the most advanced living system-of-systems in the world.
The vision of a European EO Ecosystem is thereby founded on a critical assessment of optimised, sustainable and affordable growth. This is achieved using a scenario-based approach to consider potential evolution in the 2040+ timeframe (e.g. business-as-usual, enhanced continuity, optimised reduction), while at the same time identifying key drivers and benchmark tools for a sustainable and unique European Ecosystem 2040+.
We will identify the key characteristics of the European EO Ecosystem as an adaptable approach, including elements such as long-term data preservation, complementarity, interconnection, standards compliance, verification, performance, modularity and scalability, reusability, best practices, and affordability, to name a few.

Panel discussion with:


Connecting the dots between science needs and the EO Ecosystem


  • Craig Donlon - ESA

Green solutions, actions and policies


  • Inge Jonckheere - ESA

Future science needs


  • Markus Rapp - ACEO member and speaker of the DLR Earth Observation research institutes

A commercial perspective


  • Representative from industry

Add to Google Calendar

Tuesday 24 June 15:30 - 16:15 (Nexus Agora)

Session: F.02.11 Enhancing Earth Observation Uptake in the Philippines and ASEAN Region

Southeast Asia, including the Philippines, Indonesia, and Thailand, is among the most disaster-prone areas globally, severely affected by tropical typhoons, flooding, volcanic activity, and other climate change impacts. A leading reinsurance company recently identified the Philippines as highly exposed to significant economic losses from disasters (as a percentage of GDP). In the face of these shared challenges, timely decision-making, environmental monitoring, and effective policy implementation are important for building resilience across the region.
Jointly with the Directorate General for International Partnerships (DG-INTPA) and the government of the Philippines, the European Space Agency (ESA) has set up the National Copernicus Capacity Support Action Programme for the Philippines, known as CopPhil. The national CopPhil centre, hosted by the Philippine Space Agency (PhilSA), was inaugurated in October 2024. It provides access to the complete Sentinel data of the European Copernicus Programme and is co-designing three EO services together with mandated institutions of the Philippine government:
• Ground Motion Monitoring: Utilising InSAR to monitor landslides, earthquakes, ground movement, and volcanoes, enhancing disaster preparedness and mitigation strategies.
• Land Monitoring, Forests, and Crop Mapping: Monitoring forest extent, types, health, and deforestation, as well as mapping high-value crops and land use changes to support sustainable land management and agricultural productivity.
• Benthic Habitat Monitoring: Mapping coastal ecosystems and detecting coral bleaching events to protect marine biodiversity and support fisheries management.

Building on CopPhil's success and recognising shared regional challenges, the EU-ASEAN Sustainable Connectivity Package (SCOPE) Digital initiative aims to adapt, transfer, and scale these solutions. SCOPE Digital focuses on Indonesia and Thailand as pilot countries, partnering with the National Research and Innovation Agency (BRIN) and the Geo-Informatics and Space Technology Development Agency (GISTDA) respectively. This regional expansion leverages the CopPhil experiences and tools to enhance EO data processing and digital connectivity, promoting sustainable solutions to environmental and economic challenges across ASEAN.

Moderator:


  • Casper Fibæk - ESA, Earth Observation Application Specialist

Speakers:


  • Ariel C. Blanco - Director of the Space Information Infrastructure Bureau (SIIB) of PhilSA
  • Kandasri Limpakom - Deputy Executive Director, GISTDA
  • Rokhis Khomarudin - Head of the Geoinformatics Research Center, BRIN
  • Thibault Valentin - Programme Responsible, DG-INTPA
  • Eric Quincieu - Principal Water Resources Specialist, ADB
Add to Google Calendar

Tuesday 24 June 15:30 - 16:15 (Frontiers Agora)

Session: F.04.26 Towards Operational Greenhouse Gas Monitoring for Policy

The Committee on Earth Observation Satellites, CEOS, and the Coordination Group on Meteorological Satellites, CGMS, have demonstrated that high-quality, systematic satellite observations of atmospheric carbon dioxide (CO2) and methane (CH4) are essential for building a truly integrated global greenhouse gas (GHG) monitoring system. These observations are fundamental for ensuring data accuracy, tracking collective climate progress, and supporting the Enhanced Transparency Framework under the Paris Agreement.

Their commitment to sustaining long-term monitoring of greenhouse gases is clearly reflected in the recently updated Greenhouse Gas (GHG) Roadmap. This updated Roadmap aims to further support the Paris Agreement’s Global Stocktakes by integrating key lessons learned from the first Global Stocktake and leveraging recent advancements in satellite infrastructure and data processing capabilities.

The Roadmap emphasizes enhanced engagement and co-development with stakeholders and stronger partnership with key organizations like the World Meteorological Organization’s Global Greenhouse Gas Watch (WMO G3W) and the United Nations Environment Programme’s International Methane Emissions Observatory (UNEP IMEO). It also provides an overview of the space-based greenhouse gas observing architecture, capable of delivering GHG emissions information at global, regional, and facility scales through both public and non-governmental missions.

Additionally, it outlines the efforts needed to transition the current framework from research to operations in support of sustained and operational GHG Monitoring and Verification Support systems that serve stakeholders across science, inventory, policy, and regulatory communities.

In this Agora session, we will engage with international and European stakeholders and discuss how we will move towards operational greenhouse gas monitoring providing policy-relevant and actionable information.

Speakers:


  • Yasjka Meijer - ESA
  • Gianpaolo Balsamo - WMO-G3W
  • Itziar Irakulis Loitxate - UNEP-IMEO
  • Mark Dowell - JRC
  • Tomohiro Oda - USRA

Add to Google Calendar

Tuesday 24 June 15:30 - 16:45 (Plenary - Hall D)

Session: Outlook for ESA's Earth Observation programmes - CM25

This session will focus on the outlook for ESA’s Earth Observation Programmes as regards what is planned to be proposed to Member States for funding at the next Ministerial Council in November. The general context of the Ministerial Council, including the overall package of programmes for the Agency, will be described by the ESA Director General. More detailed information on the EO programmes and initiatives which will be open for Member State subscription will be given by the Director of ESA’s Earth Observation Programmes. These programmes will seek to ensure support for the development of future scientific, institutional, and commercial missions as well as support for the exploitation of the satellite data collected by past and current missions. The session will end with some views expressed on ESA’s plans by representatives of the scientific and commercial communities. This session complements others held during the Symposium which focus on individual ESA programmes, missions and initiatives as well as the longer term strategy of ESA, particularly as regards Earth Science.

This session will be accessible via live captioning at the following link: HERE
Note: due to limitations in the app, this link is clickable only from the web version of the programme.

Speakers:


  • Josef Aschbacher - Director General, ESA
  • Simonetta Cheli - Director of Earth Observation Programmes, ESA
  • Andrew Shepherd - Head of the Department of Geography and Environment, Northumbria University
  • Charles Galland - Policy Manager, ASD-Eurospace
Add to Google Calendar

Tuesday 24 June 15:45 - 16:05 (EO Arena)

Demo: C.06.17 DEMO - Pi-MEP: A Comprehensive Platform for Satellite Sea Surface Salinity Validation and Analysis

The Pilot-Mission Exploitation Platform (Pi-MEP) for salinity (https://www.salinity-pimep.org/) provides a powerful web-based environment for validating and analyzing satellite-derived sea surface salinity (SSS) data. Originally developed in 2017 to support ESA's Soil Moisture and Ocean Salinity (SMOS) mission, Pi-MEP has evolved into a comprehensive reference platform serving multiple satellite missions including SMOS, Aquarius, and SMAP.

Pi-MEP addresses three core functions essential for oceanographic applications:
1. Centralizing diverse datasets required for satellite SSS validation
2. Generating systematic comparison metrics to monitor SSS product quality
3. Providing intuitive visualization tools for exploring both SSS data and validation results

The platform integrates extensive in situ measurements from Argo floats, drifters, thermosalinographs, and saildrones, alongside complementary datasets for precipitation, sea surface temperature, and ocean currents. Users can access pre-generated validation reports covering 30 predefined oceanic regions through the platform's intuitive web interface.

Through an ESA-NASA partnership established in 2019, Pi-MEP has undergone significant enhancements, including implementation of triple-collocation analysis, advanced match-up criteria, and integration of data from field campaigns like SPURS, EUREC4A, and SASSIE.
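The triple-collocation analysis mentioned above can be illustrated with a self-contained sketch of the classic covariance-based estimator. This is run on synthetic numbers and is not Pi-MEP's actual implementation: three collocated datasets observing the same truth with independent, zero-mean errors allow each dataset's error variance to be isolated from the cross-covariances.

```python
import random

# Hedged sketch of covariance-based triple collocation, the general
# technique named above, NOT Pi-MEP's implementation; synthetic data.
# Three collocated datasets x, y, z observe the same truth with
# independent zero-mean errors, so cross-covariances isolate each
# dataset's error variance.

random.seed(42)
n = 100_000
truth = [random.gauss(35.0, 1.0) for _ in range(n)]   # synthetic "true" SSS (psu)
x = [t + random.gauss(0, 0.30) for t in truth]        # e.g. a satellite product
y = [t + random.gauss(0, 0.20) for t in truth]        # e.g. Argo floats
z = [t + random.gauss(0, 0.40) for t in truth]        # e.g. a model field

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

def tc_error_variances(x, y, z):
    """Error variance of each dataset: var_x = Cxx - Cxy * Cxz / Cyz, etc."""
    ex = cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z)
    ey = cov(y, y) - cov(x, y) * cov(y, z) / cov(x, z)
    ez = cov(z, z) - cov(x, z) * cov(y, z) / cov(x, y)
    return ex, ey, ez

ex, ey, ez = tc_error_variances(x, y, z)
print(ex, ey, ez)  # recovers roughly 0.09, 0.04, 0.16
```

With enough collocated samples, the estimator recovers the prescribed error variances without ever seeing the truth, which is what makes the technique valuable for validating satellite SSS against in situ data.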

Our demonstration will showcase Pi-MEP's latest capabilities and user interface, highlighting new tools for characterizing representation errors across satellite salinity products. Attendees will see how oceanographers can efficiently access, validate, and analyze SSS data for applications ranging from river plume monitoring to mesoscale boundary current dynamics and salinity evolution in challenging regions.

Speaker:


  • Sebastien Guimbard - OceanScope
Add to Google Calendar

Tuesday 24 June 16:07 - 16:27 (EO Arena)

Demo: D.04.28 DEMO - Exploring Copernicus Sentinel Data in the New EOPF-Zarr Format

#zarr #stac #cloud-native

Overview:
This demonstration will showcase the Earth Observation Processing Framework (EOPF) Sample Service and the newly adopted cloud-native EOPF-Zarr format for Copernicus Sentinel data. As ESA transitions from the SAFE format to the more scalable and interoperable Zarr format, this session will highlight how users can efficiently access, analyze, and process Sentinel data using modern cloud-based tools.

Objective:
Attendees will gain insight into:
- The key features of the Zarr format and its advantages for cloud-based workflows.
- How the transition to EOPF-Zarr enhances scalability and interoperability.
- Accessing and exploring Sentinel data via the STAC API and S3 API.
- Using Jupyter Notebooks for interactive data exploration and analysis.
- Running scalable Earth observation workflows on cloud platforms.
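As a hedged illustration of why a chunked format like Zarr suits the cloud-based workflows listed above (a toy sketch, not the EOPF service itself): a Zarr array is stored as a grid of separately addressable chunk objects, so a window request maps to a handful of object keys rather than one monolithic file.

```python
# Toy sketch (not the EOPF service): why chunked storage suits the cloud.
# A Zarr array is laid out as a grid of chunk objects keyed "row.col";
# reading a window only fetches the chunks that overlap it.

def chunks_for_slice(chunk_shape, row_range, col_range):
    """Return the chunk keys overlapping a 2-D half-open slice request."""
    ch_r, ch_c = chunk_shape
    r0, r1 = row_range
    c0, c1 = col_range
    return [f"{cr}.{cc}"
            for cr in range(r0 // ch_r, (r1 - 1) // ch_r + 1)
            for cc in range(c0 // ch_c, (c1 - 1) // ch_c + 1)]

# Hypothetical 10980 x 10980 Sentinel-2 band in 1024 x 1024 chunks:
# a 512 x 512 window touches 4 of the 121 chunk objects.
print(chunks_for_slice((1024, 1024), (2000, 2512), (3000, 3512)))
# ['1.2', '1.3', '2.2', '2.3']
```

An S3 or HTTP client therefore downloads a few small objects per request, which is what enables the scalable, parallel access patterns the demo highlights.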

Interactive Discussion & Feedback:
Following the demonstration, there will be a dedicated time for discussion and feedback. Attendees can share their experiences, ask questions, and provide valuable input on the usability and future development of the EOPF-Zarr format. This is a great opportunity to learn about next steps in the transition process, future developments, and how to integrate EOPF-Zarr into your own workflows.

Join us to explore how EOPF-Zarr is changing access to Copernicus Sentinel data and enabling scalable Earth observation workflows, and contribute your thoughts on shaping the next phase of this transformative technology!

Speaker:


  • Anne Fouilloux - Simula
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E1)

Session: C.03.07 The Copernicus Sentinel Expansion missions development: status and challenges - PART 2

The status of development of the Copernicus Sentinel Expansion missions will be outlined.
Across four sessions of 1.5 hours each (together equivalent to a full day), participants will have a unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and the status of mission development activities will be presented together with industrial and science partners.

Presentations and speakers:


CRISTAL general status presentation


  • Kristof Gantois

CRISTAL instrument and mission E2E performance


  • Frank Borde
  • Paolo Cipollini

ROSE-L Mission and Project status


  • Gianluigi Di Cosimo
  • Malcolm Davidson

ROSE-L SAR Instrument


  • Nico Gebert

CIMR Mission and Project status


  • Craig Donlon

CIMR Spacecraft & Instrument


  • Mariel Triggianese
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Session: B.04.05 Remote sensing for disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters - PART 2

Every year, millions of people worldwide are impacted by disasters. Floods, heat waves, droughts, wildfires, tropical cyclones and tornadoes cause increasingly severe damages. Civil wars and armed conflicts in various parts of the world, moreover, lead to a growing number of refugees and large changes in population dynamics. Rescue forces and aid organizations depend on up-to-date, area-wide and accurate information about hazard extent, exposed assets and damages in order to respond fast and effectively. In recent years, it has also been possible to prepare for specific events or to monitor vulnerable regions of the world on an ongoing basis thanks to the rapidly growing number of satellites launched and their freely available data. Providing information before, during or after a disaster in a rapid, scalable and reliable way, however, remains a major challenge for the remote sensing community.
Obtaining an area-wide mapping of disaster situations is time-consuming and requires a large number of experienced interpreters, as it often relies on manual interpretation. Nowadays, the amount of remote sensing data and related suitable sensors is steadily increasing, making it impossible in practice to assess all available data visually. Therefore, increased automation of (potential) impact assessment methods using multi-modal data opens up new possibilities for effective and fast disaster response and preparedness workflows. In this session, we want to provide a platform for research groups to present their latest research activities aimed at addressing the problem of automatic, rapid, large-scale, and accurate information retrieval from remotely sensed data to support disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters/conflicts.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: Assessing in situ national drought monitoring services in Central Europe against satellite-based drought indicators and a new drought impact database

Authors: Nirajan Luintel, Piet Emanuel Bueechi, Pavan Muguda Sanjeevamurthy, Wolfgang Preimesberger, Wouter Dorigo
Affiliations: TU Wien
Droughts have severe impacts on the environment and economy, particularly in regions with high water demand and low annual precipitation. Central Europe is one such region, where droughts have reportedly led to losses in crop yield and biodiversity, disruptions in water transport, and shortages of drinking water, among other impacts. To mitigate these impacts, national weather and environmental agencies in the region have developed national drought monitoring tools. Most drought monitoring products (such as the Standardized Precipitation Index and the Standardized Precipitation Evapotranspiration Index) are based on weather observation stations. However, these stations are not homogeneously distributed. Alternatively, satellite remote sensing allows for monitoring droughts contiguously over large areas. Satellite-borne sensors provide data on precipitation, vegetation condition, evapotranspiration, and soil moisture, all of which are useful for drought monitoring. Among these, soil moisture-based drought indicators are particularly valuable, as soil moisture is estimated with all-weather satellite sensors and is a good indicator of plant water availability. However, the performance of these satellite-based drought indicators should be evaluated before integrating them into existing drought monitoring systems. This study provides a quantitative assessment of national drought monitoring products and satellite-based standardized soil moisture indices, derived from the new disaggregated ASCAT surface soil moisture product and the ESA-CCI v09.1 gap-filled soil moisture product, by comparing them with a novel impact database developed for the region within the Clim4Cast project [1]. The database synthesizes impacts of drought, heatwaves and forest fires on various sectors (agriculture, hydrology, household water supply, economy and technology, wildlife, and soil, among others) reported in national newspapers published between 2000 and 2023.
We assess the drought indicators on two fronts: their ability to capture the severity of a drought and their ability to detect drought. First, for each reported drought event, we correlate the drought severity with the number of reported impacts in the database. Drought severity is defined as the drought indicator values during the drought event (when the values remain below a given threshold, set at -1 in this study) accumulated over time. The correlation value shows how well the drought indicator captures the severity of a drought. Second, the timing of drought impact reporting in the impact database is used to evaluate each indicator's ability to detect observed impacts. This evaluation is performed using the area under the receiver operating characteristic curve (ROC-AUC), derived from the plot of the true positive rate against the false positive rate at various classification thresholds of the drought definition. The AUC value reveals how well the reported drought events are detected by the drought indicator. Our results show differences among drought indicators in their ability to detect drought signals (AUC values) and their ability to capture the severity of observed impacts (correlation values). Some drought indicators are better at detecting the occurrence of drought, while others are better at capturing its severity. In some regions, the drought indicators from national monitoring systems outperform those from satellite products, while in other regions the reverse is true. Furthermore, regardless of the drought indicator chosen, the geographical characteristics of a region, such as complex terrain, pose challenges to effective drought monitoring. [1] This work is supported by Interreg Central Europe and the European Union in the framework of the project Clim4Cast (grant number CE0100059).
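The ROC-AUC evaluation described above can be illustrated with a minimal, self-contained sketch (synthetic numbers, not the study's data): because low indicator values mean severe drought, "drought" is predicted whenever the indicator falls below a sweeping threshold, and the resulting curve of true positive rate against false positive rate is integrated.

```python
# Minimal illustration of the ROC-AUC evaluation described above
# (synthetic numbers, not the study's data). Low indicator values mean
# severe drought, so "drought" is predicted when the indicator falls
# below a sweeping threshold; the ROC curve is then integrated.

def roc_auc(indicator, observed):
    """AUC for a drought indicator where LOW values predict drought.

    indicator: index values (e.g. standardized soil moisture anomaly)
    observed:  0/1 flags (1 = impact reported in the database)
    Assumes distinct indicator values (no tie handling).
    """
    pairs = sorted(zip(indicator, observed))      # most droughty first
    pos = sum(observed)
    neg = len(observed) - pos
    tp = fp = 0
    curve = [(0.0, 0.0)]                          # (FPR, TPR) points
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        curve.append((fp / neg, tp / pos))
    # trapezoidal integration under the ROC curve
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(curve, curve[1:]))

# Impacts cluster at dry (low) indicator values -> AUC well above 0.5
indicator = [-2.1, -1.8, -1.5, -0.9, -0.4, 0.1, 0.6, 1.2]
observed  = [1, 1, 1, 0, 1, 0, 0, 0]
print(roc_auc(indicator, observed))  # 0.9375
```

An AUC of 0.5 would mean the indicator detects reported impacts no better than chance, while values approaching 1.0 indicate reliable detection, which is how the AUC values in the study separate stronger from weaker indicators.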
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: Landslide Hunter: a fully automated EO platform for rapid mapping of landslides in semi-cloudy conditions

Authors: Dr. Serkan Girgin, Dr. Ali Özbakır, Dr. Hakan Tanyaş
Affiliations: University of Twente
Landslides are a common natural hazard mostly triggered by seismic, climatic, or anthropogenic factors. The impacts of landslides on nature, the built environment, and society call for effective hazard management to improve our preparedness and resilience. Accurate landslide risk analysis methods are necessary to identify the elements at risk, and effective early warning systems are needed to prevent loss of life and economic damage. Landslide catalogs provide valuable information on past events that can be exploited for better hazard assessment and early warning. However, creating a landslide catalog is a time-consuming process, especially after major disasters. Several semi-automated landslide mapping methods using cloud-free optical satellite images have been developed recently that benefit from advancements in image processing and AI technologies. However, such methods are mostly tested only in specific study areas, and it is uncertain whether they can meet analysis needs globally. Compiling cloud-free images by combining many semi-cloudy images also requires significant time. Additionally, most landslides occur in mountainous regions, which are typically characterized by heavy rainfall patterns. As a result, finding cloud-free images that cover these areas in their entirety is quite difficult. The Landslide Hunter is a prototype online platform designed to rapidly detect landslides using an innovative method, which analyzes consecutive partially cloudy optical Earth observation (EO) images to identify visible landslide extents and then automatically integrates these partial extents to determine the complete extent of the landslides. The platform continuously monitors online resources for events capable of triggering landslides (e.g., major earthquakes), pinpoints regions where landslides are likely to have occurred following such events, and initiates the collection of EO data for these identified areas from public EO data portals.
Whenever a new image becomes available, it is downloaded and processed automatically to detect landslide areas. Proximity to cloudy regions is used to determine if a landslide is partially visible or not, and partial extents are marked for further tracking. By combining information from successive analyses, the full extents of landslides are determined. This allows timely first detection of landslides and their effective monitoring under cloudy conditions. The platform allows the integration of various models for landslide detection, ranging from simple index-based approaches (e.g., NDVI) to advanced machine learning and deep learning techniques utilizing image segmentation. The results are published in an open-access landslide catalog, available through a user-friendly web portal for individuals and a REST API for machine access. This catalog is continuously updated and offers faster updates compared to any existing conventional catalog. The platform enables stakeholders, such as researchers, public authorities, and international organizations, to receive notifications when new landslides are detected in their areas of interest. In addition to supporting and expediting rapid damage assessment efforts, the data provided can contribute to landslide prediction initiatives, ultimately enhancing the safety of communities and the built environment. This presentation will offer an in-depth exploration of the design principles and operational framework of the Landslide Hunter platform. It will cover the platform's core features, functional capabilities, and user interface, along with a comprehensive overview of the data access methods designed to enhance interoperability and seamless integration with other systems. Furthermore, a live demonstration of the operational platform will highlight its practical applications and effectiveness. 
The demonstration will showcase how the platform enables the automatic identification and tracking of landslides without relying on cloud-free optical satellite imagery and how it facilitates near real-time monitoring of landslide evolution, contributing to the global mapping and cataloging of such events.
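The simple index-based detection mentioned above (e.g., NDVI) can be sketched in a few lines. This is an illustrative toy with made-up reflectance values, not the platform's actual model: a strong pre-to-post NDVI drop marks vegetation loss consistent with a landslide, while cloud-covered pixels stay "unresolved" so a later acquisition can fill them in, mirroring the partial-extent tracking idea.

```python
# Hedged sketch of the simple index-based detection mentioned above
# (illustrative reflectance values, not the platform's actual model):
# a strong pre-to-post NDVI drop marks vegetation loss consistent with
# a landslide; cloud-covered pixels stay "unresolved" so a later image
# can fill them in, a toy version of the partial-extent tracking.

def ndvi(nir, red):
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def classify(pre, post, cloud_mask, drop_threshold=0.4):
    """Per-pixel labels: 'landslide', 'stable', or 'unresolved' (cloudy).

    pre/post: lists of (NIR, red) reflectance pairs; cloud_mask: 1 = cloudy.
    """
    labels = []
    for (n0, r0), (n1, r1), cloudy in zip(pre, post, cloud_mask):
        if cloudy:
            labels.append("unresolved")
        elif ndvi(n0, r0) - ndvi(n1, r1) > drop_threshold:
            labels.append("landslide")
        else:
            labels.append("stable")
    return labels

pre  = [(0.50, 0.10), (0.48, 0.12), (0.45, 0.11)]  # healthy vegetation
post = [(0.20, 0.25), (0.47, 0.12), (0.30, 0.28)]  # pixels 0 and 2 disturbed
print(classify(pre, post, cloud_mask=[0, 0, 1]))
# ['landslide', 'stable', 'unresolved']
```

When the next image arrives with pixel 2 cloud-free, rerunning the classifier on that pixel completes the landslide extent, which is the integration step the platform automates.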
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: Spatio-temporal Extrapolation of Time-series Data with Deep Learning

Authors: Lea Schollerer, Prof. Dr. Christian Geiß, Patrick Aravena Pelizari, Dr. Yue Zhu, Prof. Dr. Hannes Taubenböck
Affiliations: German Aerospace Center, Earth Observation Center, Singapore-ETH Centre
There has been an increase in the number of disasters induced by natural hazards in the last decades. Such events can cause huge losses, especially in human settlements with high population densities. This situation can be expected to intensify in the future as the world's population grows in numerous hazard-prone regions across the globe and climate change increases the number of both single- and multi-hazard situations. As a result, more people will likely be exposed to natural hazards in the future than ever before. In order to develop mitigation strategies for possible future damage events, detailed information on the future spatial distribution of the population and properties of the built environment, i.e., future exposure, is required. Current Earth observation (EO) datasets, in particular long time series with high temporal and thematic resolution, and new techniques from the field of Artificial Intelligence (AI), like Long Short-Term Memory cells, offer innovative options to extrapolate exposure information spatiotemporally. Here we leverage EO time series data that describe changes in global population and land use since around 2000 while enabling high spatial, temporal, and thematic resolution. The different datasets are preprocessed to the same temporal (~years 2000 - 2020) and spatial extent. In combination with static features, the time series then serves as the basis for a novel AI model that identifies characteristic change trajectories in the target variables over time and can extrapolate the target variables spatiotemporally into the future (Geiß et al., 2024). By combining multiple target variables, the developed model can exploit multi-task learning, which allows for improved prediction by encoding the interdependencies between the multiple target variables.
As a case study, we focus on the megacity of Istanbul, a highly dynamic urban center susceptible to earthquakes and landslides. In a future perspective, the resulting exposure dataset can then be used for an early and sustainable urban planning, risk assessment, and risk reduction efforts in the future, as well as for evaluating the systemic risk and vulnerability of human settlements. For instance, this can be done by linking it to models of natural hazards, to show how many people will be affected in the future. Reference: Geiß, C., Maier, J., So, E., Schoepfer, E., Harig, S., Gómez Zapata, J.C., Zhu, Y., 2024. Anticipating a risky future: long short-term memory (LSTM) models for spatiotemporal extrapolation of population data in areas prone to earthquakes and tsunamis in Lima, Peru. Natural Hazards and Earth System Sciences 24, 1051–1064.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: Urban Flood Analysis through SAR Data and Super Resolution DEM Integration

Authors: Ira Karrel San Jose, Sesa Wiguna, Sandro Groth, Marc Wieland, Bruno Adriano, Erick Mas, Shunichi Koshimura
Affiliations: Department of Civil and Environmental Engineering, Tohoku University, International Research Institute of Disaster Science (IRIDeS), Tohoku University, German Remote Sensing Data Center, German Aerospace Center (DLR)
As a direct consequence of extreme weather events such as heavy precipitation and tropical cyclones, flooding is considered one of the most devastating disasters affecting numerous countries globally. According to the EM-DAT International Disaster Database created by the Center for Research on the Epidemiology of Disasters, flood events have dominated in terms of the frequency of disaster occurrence since as early as the 1970s. In recent years, the growing frequency and severity of urban flood cases have affected millions of people worldwide, causing substantial environmental and economic damage. Therefore, reliable and timely estimation of flood extent after a heavy rainfall event is crucial for disaster management, early warning systems, and post-disaster recovery. Urban environments, characterized by dense infrastructure, pose significant challenges for accurate flood mapping using optical satellite imagery, particularly in areas obscured by clouds or canopies. Moreover, high-resolution digital elevation models (DEMs), essential for precise inundation extent and depth approximation in urban flood mapping, are typically unavailable due to high acquisition costs. To address these limitations, this research proposes an integrated framework that capitalizes on globally available remote sensing datasets and deep learning techniques to improve urban flood mapping. The first phase of the research involves the construction of a convolutional neural network (CNN) that integrates a low-resolution DEM, optical imagery, and synthetic aperture radar (SAR) data to generate a higher-resolution DEM. Given the ability of SAR signals to penetrate clouds and dense canopies, the addition of SAR data such as coherence and intensity is expected to augment the feature extraction process, alongside the spectral signatures and terrain information derived from the optical images and the low-resolution DEM, respectively.
This approach aims to reconstruct high-resolution terrain detail from a low-resolution DEM, creating an enhanced DEM suitable for urban flood mapping and broader hydrological and geological applications. Using the enhanced DEM, the second phase implements a flood segmentation network to detect visible flooded areas captured by optical imagery. For regions obscured by urban infrastructure and dense vegetation, SAR information is integrated into the network to fully delineate the flood extent in the affected sites. The framework is applied to Joso Town, Ibaraki Prefecture, Japan, a region severely devastated by Typhoon Etau on September 10, 2015. The heavy rains brought by the typhoon resulted in flood depths of up to five meters, displacing 22,000 residents and inundating approximately 1,000 buildings. Validation results indicate that the proposed model provided a better approximation of the flood extent using the enhanced DEM than results relying solely on the low-resolution DEM. The proposed methodology offers a scalable and cost-effective solution for improving urban flood risk management through multi-source remote sensing data and deep learning. The approach enhances flood mapping accuracy in data-scarce regions, addressing gaps in the implementation of remote sensing-based flood modelling in urban areas. Future research will test the model's transferability in different geographic contexts, particularly in areas lacking access to high-resolution DEMs, to ensure its robustness and global applicability.
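To make "enhancing a low-resolution DEM" concrete, here is a sketch of the standard baseline a learned super-resolution model is compared against: plain bilinear upsampling. The toy elevation grid is invented for illustration and is not the study's data or its CNN; interpolation only smooths between samples, whereas the learned model aims to add back real terrain detail.

```python
# Sketch of the baseline a learned super-resolution model aims to beat:
# plain bilinear upsampling of a low-resolution DEM (toy elevation grid,
# not the study's data). Interpolation smooths between samples; the CNN
# described above would add back terrain detail interpolation cannot.

def bilinear_upsample(dem, factor):
    """Upsample a 2-D elevation grid by an integer factor (align corners)."""
    rows, cols = len(dem), len(dem[0])
    out_r, out_c = (rows - 1) * factor + 1, (cols - 1) * factor + 1
    out = []
    for i in range(out_r):
        y = i / factor
        y0 = min(int(y), rows - 2)
        fy = y - y0
        row = []
        for j in range(out_c):
            x = j / factor
            x0 = min(int(x), cols - 2)
            fx = x - x0
            top = dem[y0][x0] * (1 - fx) + dem[y0][x0 + 1] * fx
            bot = dem[y0 + 1][x0] * (1 - fx) + dem[y0 + 1][x0 + 1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

coarse = [[10.0, 20.0],
          [30.0, 40.0]]              # 2x2 low-resolution DEM (metres)
fine = bilinear_upsample(coarse, 2)  # 3x3 enhanced grid
print(fine[1])  # [20.0, 25.0, 30.0]
```

A validation like the one reported would compare flood extents modelled on such an interpolated DEM against those from the CNN-enhanced DEM.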
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: High resolution flood maps through commercial UAV imagery and deep learning

Authors: Lisa Landuyt, Bart Beusen, Tanja Van Achteren
Affiliations: VITO
Throughout the past years, an increase in both the frequency and intensity of flood events has become clear. Targeted crisis response is thus critical in order to limit human and material losses. Satellite imagery can provide crisis managers a bird's-eye view, and the added value of the Sentinel-1 constellation for flood mapping is beyond doubt. However, in the context of crisis response, the fixed acquisition scheme of Sentinel-1 can be a bottleneck to providing timely insights and even capturing the flood (peak). Moreover, its resolution is rather coarse for scattered landscapes and urban regions. Lately, several companies have emerged that provide high-resolution X-band SAR imagery with flexible tasking, two properties that complement the Sentinel-1 drawbacks. However, this imagery typically comes in a single polarization, and the flexible tasking also implies that a reference image to perform change detection is not generally available. While many algorithms have been developed for flood delineation on Sentinel-1 imagery, studies considering high-resolution SAR imagery are limited. SAR-based flood mapping approaches are traditionally thresholding-based, complemented by refinements based on auxiliary data using e.g. region growing, decision trees and fuzzy logic. Recently, several studies have demonstrated the superior performance of deep learning architectures. Moreover, self-supervised learning techniques and foundation models, which aim to overcome the limitation of scarce labeled data, are emerging. This study focuses on the usage of high-resolution SAR imagery for flood mapping. A set of 50+ images, provided by Capella Space, is considered to train and assess several deep learning-based workflows. Labels are obtained using a combination of semi-automated labeling and existing maps (e.g. from the Copernicus Emergency Management Service).
We compare deep learning architectures, including U-Net with different backbones and Swin Transformer, and assess the added value of auxiliary inputs like incidence angle, reference optical imagery, land cover and building footprints. In addition, we investigate the added value of pre-training using a masked autoencoder objective on both accuracy and transferability. This work was conducted in the context of the FLOWS project, funded by the Belgian Science Policy Office (BELSPO). The authors would like to thank Capella Space for supporting this research.
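The traditional thresholding approach that the abstract contrasts with deep learning can be sketched with a generic Otsu threshold: open water appears dark in SAR backscatter, and Otsu's method picks the cut between the dark water mode and the brighter land mode of the histogram. The backscatter values below are synthetic, not the Capella imagery or the study's pipeline.

```python
import random

# Hedged sketch of the classic thresholding baseline mentioned above:
# open water is dark in SAR backscatter, so Otsu's method (a generic
# algorithm, not the study's pipeline) separates the water and land
# modes of the histogram. Synthetic backscatter values in dB.

def otsu_threshold(values, bins=64):
    """Threshold maximizing between-class variance over a histogram."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    total_sum = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = lo, -1.0, 0, 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        sum0 += (lo + (i + 0.5) * width) * hist[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance (unnormalized)
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t

random.seed(0)
water = [random.gauss(-18.0, 1.5) for _ in range(4000)]  # dark water pixels
land  = [random.gauss(-8.0, 2.0) for _ in range(6000)]   # brighter land pixels
t = otsu_threshold(water + land)
print(round(t, 1))  # threshold falls between the water and land modes
flood_mask = [v < t for v in water + land]
```

The refinements the abstract lists (region growing, decision trees, fuzzy logic) would then clean up this raw mask using auxiliary data, which is exactly where the deep learning alternatives take over.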
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: Enhancing Situational Awareness in Emergency Response: Combining Remote Sensing and Teleoperated Systems

Authors: Magdalena Halbgewachs, Lucas Angermann, Dr. Konstanze Lechner
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center
The application of remote sensing and geospatial technologies is becoming increasingly crucial in addressing multifaceted challenges across environmental monitoring, humanitarian aid, and disaster response. With growing demands for rapid, data-driven decision-making in remote and crisis-prone areas, advanced Earth observation capabilities enable more efficient planning, coordination, and resource deployment. The combination of satellite imagery, high-resolution aerial data, and advanced analytics within situational awareness platforms provides novel opportunities for comprehending and responding to dynamic, often unpredictable environments. The development of an advanced web application for a Global Mission Operation Centre (GMOC) is central to both the MaiSHU and RESITEK projects, demonstrating the progressive enhancement of situational awareness tools. Initially, in the MaiSHU project, the web application was used to support teleoperators in navigating amphibious SHERP vehicles in complex and unstructured environments where traditional humanitarian efforts face operational limitations. In June 2024, during a field campaign in Northern Bavaria, Germany, two realistic scenarios highlighted this application: a food delivery mission to a flood-isolated village in South Sudan, supported by the United Nations World Food Programme (WFP), and a flood evacuation exercise in a dangerous environment with the Bavarian Red Cross (BRK), simulating recent flood events in southern Germany. The interactive web application at the GMOC combines multi-layered geospatial data to provide continuous situational awareness, enable high level route planning, and support real-time operations. It integrates and visualizes multi-layered geospatial and remote sensing data, thereby forming a comprehensive situational picture that is essential for mission preparation and execution. 
This includes simulated flood masks for the exercise region, which facilitated detailed visualizations of prospective flood zones and aided in the identification and planning of accessible routes for the teleoperated vehicle. Time-series of optical satellite imagery provided valuable insights into the region's evolving landscape, enabling the monitoring of environmental changes that could impact mission safety and route stability. The additional incorporation of high-resolution aerial imagery, taken by DLR aircraft, facilitated a more comprehensive understanding of the terrain and infrastructure. The specific characteristics of the local terrain allowed for the implementation of more precise route adjustments. Further, the integration of up-to-date drone imagery, captured before and during the event itself, proved to be a crucial element in the provision of situational updates. The addition of these datasets in the GMOC web application facilitated the creation of three-dimensional terrain models, thereby enhancing the visual and spatial understanding of the environment necessary for detailed route planning. The described data layers enabled mission planners to optimize SHERP vehicle routes based on variables such as terrain slope, surface type, and radio signal coverage. A designated route segment was designed to traverse a river, validating the SHERP’s amphibious capabilities and testing the continuity of operations under varying surface conditions. The pre-planned routes were transmitted to a Local Mission Operation Centre (LMOC), where remote drivers received precise navigation information, guiding the SHERP vehicle along the safest and most efficient paths based on the analysed remote sensing and geospatial data. The web application’s capabilities included real-time GPS monitoring of the SHERP, providing continuous updates on its location, orientation, and previously travelled paths. 
This tracking data allowed operators to refine navigation strategies and make informed, real-time adjustments. Supplementary real-time geotagged photo uploads from the field augmented the situational overview, while an integrated communication layer ensured continuous connectivity by highlighting areas with signal coverage. Building on the advancements of MaiSHU, the RESITEK project further develops the web application by integrating additional functionalities and additional vehicle types, including ground and aerial units, both manned and unmanned. The platform in RESITEK is undergoing enhancements to serve as an integrative tool for diverse data visualization and situational analysis, thereby supporting continuous monitoring and user-centric planning. This progression is intended to demonstrate comprehensive interoperability in a collaborative exercise involving various stakeholders and realistic emergency scenarios. The project incorporates AI-based image analysis for real-time monitoring and damage detection, highlighting crisis-relevant information in complex 2D and 3D displays to optimize decision-making during disaster response. The joint developments in MaiSHU and RESITEK underline the essential role of geospatial and remote sensing data in crisis management and emergency response. Together, these projects demonstrate the transformative role of Earth observation technologies—including satellite-based crisis information, real-time data updates, and multi-modal situational displays—in enhancing teleoperated and multi-vehicle mission planning, ultimately facilitating more effective and reliable operations in remote and inaccessible areas.

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Session: A.09.04 Glaciers - the other pole - PART 2

Glaciers are distributed around the world in mountainous areas, from the tropics through the mid-latitudes to the polar regions, and number approximately 250,000. Glaciers are currently the largest contributors to sea level rise and have direct impacts on run-off and water availability for a large proportion of the global population.

This session is aimed at reporting the latest research using EO and in situ observations to understand and quantify change in glacier presence, dynamics and behaviour, including responses to changes in climate, both long term (since the Little Ice Age) and in the recent satellite period. EO observations of glaciers come from a large variety of sources (SAR, altimetry, gravimetry, optical) and are used to derive estimates of ice velocity, surface mass balance, area, extent and dynamics of both accumulation and ablation; characteristics such as surging, glacier failure and downwasting; and associated observations of snowpack development and duration, lake formation, glacial lake outburst floods (GLOFs) and slope stability.

Presentations will be sought covering all aspects of glacier observations, in particular efforts to derive consistent global databases of mass balance (e.g. GlaMBIE), ice velocity and area (Randolph Glacier Inventory), as well as variation in run-off and water availability, and interfaces between these observations and glacier modelling to forecast possible future glacier changes and their impact on hydrology and sea-level rise.


Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Measure Glacier Elevation Change in Karakoram using TanDEM-X InSAR Data

Authors: Shiyi Li, Dr. Philippe Bernhard, Prof. Dr. Irena Hajnsek
Affiliations: Institute of Environmental Engineering, ETH Zurich, Microwave and Radar Institute, German Aerospace Center, Gamma Remote Sensing AG
Accurately measuring glacier elevation change is essential for understanding glacier mass balance and its links to climate change, water resources, and sea-level rise. The Karakoram region, home to over 20,000 km² of glaciers, is the most extensively glaciated area outside the polar regions and plays a crucial role in regional hydrology and global sea-level dynamics. Unlike many other glaciated regions, the Karakoram exhibits anomalous behavior, with glaciers showing stable or even positive mass balance in recent years—a phenomenon often referred to as the "Karakoram anomaly." This unique behavior underscores the need for high-quality elevation change measures to better understand the region's glacier dynamics and their implications for water resources and climate systems. In this work, we present glacier elevation changes in the Karakoram measured using Digital Elevation Models (DEMs) generated from TanDEM-X data. Publicly available global DEMs often suffer from temporal ambiguities due to mosaicking and post-processing, which can introduce errors in glacier studies. To address this, we generated DEMs directly from the raw TanDEM-X CoSSC data using the InSAR technique. This approach ensured high temporal precision by preserving the acquisition time of each DEM. We used TanDEM-X data from the global missions conducted during 2011–2014 and 2017–2020. By calculating elevation differences between these periods, we produced high-resolution, time-sensitive elevation change measurements for the past decade. However, maintaining time awareness in DEMs posed significant challenges in data coverage and uncertainty control, particularly in the Karakoram's complex mountainous terrain. To balance time sensitivity with data coverage, we carefully selected single-season DEMs for mosaicking to minimize seasonal bias and cross-year uncertainty. 
We further developed a Gaussian Process Regression (GPR)-based void-filling algorithm to address missing values in the seasonal mosaic of Differenced DEMs (dDEMs). The uncertainties in the derived dDEMs were rigorously assessed, accounting for heteroscedasticity and spatial correlations, before converting height changes into mass balance. The generated dDEM covered 1,763 glaciers in the Karakoram, spanning 14,614.40 km², equivalent to 67% of the total glaciated area. The mean mass balance for the covered glaciers is -0.035 ± 0.15 m w.e. a^(-1), and elevation changes exhibited strong spatial variability among individual glaciers. High-resolution (10 m) dDEM maps revealed detailed local glacier dynamics, including kinematic waves of surge-type glaciers and terminus advancements or retreats. This study provides continued observations of glacier elevation changes over the Karakoram during the past decade (2011–2019). The processing strategy ensures the time sensitivity of elevation change measurements and enables robust evaluation of regional glacier volume and mass changes. This comprehensive dataset contributes to a deeper understanding of regional glacier volume and mass changes and their contributions to sea-level rise. References: [1] E. Berthier and F. Brun, “Karakoram geodetic glacier mass balances between 2008 and 2016: persistence of the anomaly and influence of a large rock avalanche on Siachen Glacier,” Journal of Glaciology, vol. 65, no. 251, pp. 494–507, Jun. 2019, doi: 10.1017/jog.2019.32. [2] G. Krieger et al., “TanDEM-X: A radar interferometer with two formation-flying satellites,” Acta Astronautica, vol. 89, pp. 83–98, Aug. 2013, doi: 10.1016/j.actaastro.2013.03.008. [3] S. Leinss and P. Bernhard, “TanDEM-X: Deriving InSAR Height Changes and Velocity Dynamics of Great Aletsch Glacier,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 4798–4815, 2021, doi: 10.1109/JSTARS.2021.3078084.
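The GPR-based void filling described above can be illustrated with a minimal sketch. This is not the authors' implementation: the tile, kernel choice, and noise levels are hypothetical, and scikit-learn's GaussianProcessRegressor is used here purely for demonstration of the idea of predicting void pixels from valid neighbours together with a per-pixel uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic 20x20 dDEM tile (m): smooth thinning signal plus noise.
y, x = np.mgrid[0:20, 0:20]
ddem = -2.0 + 0.05 * x + 0.1 * rng.standard_normal(x.shape)

# Punch voids, mimicking layover/decorrelation gaps.
voids = rng.random(ddem.shape) < 0.2
ddem[voids] = np.nan

# Fit a GP on valid pixels; the RBF kernel captures spatial
# correlation, the WhiteKernel absorbs measurement noise.
coords = np.column_stack([x.ravel(), y.ravel()]).astype(float)
valid = ~np.isnan(ddem.ravel())
gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=5.0) + WhiteKernel(noise_level=0.01),
    normalize_y=True,
)
gpr.fit(coords[valid], ddem.ravel()[valid])

# Predict the voids together with a per-pixel uncertainty estimate.
fill, fill_std = gpr.predict(coords[~valid], return_std=True)
filled = ddem.ravel().copy()
filled[~valid] = fill
filled = filled.reshape(ddem.shape)
```

The predictive standard deviation returned alongside the fill values is one way the filled pixels could feed into the abstract's heteroscedastic uncertainty accounting.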

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: A new inventory of the glaciers of Pakistan in 2022 from Sentinel-2

Authors: PhD Davide Fugazza, Anees Ahmad, Blanka Barbagallo, Maria Teresa Melis, Luca Naitza, Marco Casu, Maurizio Gallo, Riaz Ul Hassan, Mohammed Aurang Zaib, Sadia Munir, Arif Hussain, Guglielmina Diolaiuti
Affiliations: Università Degli Studi Di Milano, University of Cagliari, EvK2CNR, EvK2CNR Pakistan
Pakistan is one of the countries suffering the greatest impacts from climate change but also holding one of the largest glacier reservoirs outside the polar regions, across the mountain ranges of the Hindukush, Karakoram and Himalayas; yet the only glacier inventory previously covering the entire country – the GAMDAM inventory – is centred around 2000 and covers a large time span (1993-2009), making area comparisons over time difficult. As part of an international cooperation project, “glaciers and students”, implemented by EvK2CNR and realized by the University of Milan together with the University of Cagliari, Karakoram International University and the University of Baltistan Skardu, we produced a new inventory covering all the glaciers in Pakistan, using a Sentinel-2 mosaic from late summer 2022 as a basis and combining a segmentation approach based on Sentinel-2 optical data and indices with Sentinel-1 interferometric coherence to improve the mapping of debris-covered glaciers. In the inventory, we catalogued more than 13,000 glaciers, covering an area larger than 13,000 km². Almost all of these glaciers drain into the Indus, while a small fraction (3%) drains to the Tarim basin of Central Asia. A large number of glaciers (44%) are smaller than 0.1 km², which puts them at increasing risk from rising temperatures, while only 32 glaciers are larger than 50 km². The inventory further reveals large differences in glacier distribution across basins and elevations, mainly driven by topography and the different climatic influences of the area, namely the South Asian Monsoon and the Westerlies, and the complex interplay between these two factors. A preliminary comparison with the GAMDAM inventory, albeit hampered by the large time span of the older inventory, shows relatively stable glacier areas in the Karakoram, with locally large variations mostly caused by surging glaciers. In contrast, in the Himalayan region glacier losses prevail.
As part of the project, automatic weather stations were also installed in the Hunza Valley, around Passu, Ghulkin, Shishpar and Pissan Glaciers, and on and around Baltoro Glacier on the route to K2. While the new inventory provides a baseline for future comparisons, the combination with meteorological data will help assess ice volume and meltwater, thus improving the management of water resources in the country and in high mountain Asia.

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: DL4GAM: a multi-modal Deep Learning-based framework for Glacier Area Monitoring, trained and validated on the European Alps

Authors: Codrut-Andrei Diaconu, Harry Zekollari, Jonathan L. Bamber
Affiliations: Earth Observation Center, German Aerospace Center (DLR), School of Engineering and Design, Technical University of Munich, Department of Water and Climate, Vrije Universiteit Brussel, Laboratory of Hydraulics, Hydrology and Glaciology (VAW), ETH Zürich, Bristol Glaciology Centre, University of Bristol
The ongoing retreat of glaciers in the European Alps, about 1.3% per year from 2003 to 2015 according to recent inventories, underscores the urgent need for accurate and efficient monitoring techniques. Traditional methods, often relying on manual correction of semi-automated outputs from satellite imagery like Sentinel-2, are time-consuming and susceptible to human biases. In recent years, significant progress has been made in developing fully automated glacier mapping techniques using Deep Learning. In this work we propose DL4GAM: a multi-modal Deep Learning-based framework for Glacier Area Monitoring, available open-source. It includes uncertainty quantification through ensemble learning and a procedure to automatically identify the imagery with the best mapping conditions independently for each glacier. We then use DL4GAM to investigate the evolution of the glaciers in the Alps from 2015 to 2023. When evaluating the model on unseen data, we find good agreement between the estimated areas and those of the inventory. We also apply DL4GAM to a small set of glaciers from the Swiss glacier inventory (SGI2016) and show that our results align well with their round-robin experiment, demonstrating high accuracy in the estimated areas and reliable uncertainty estimates. We then analyse the limitations of traditional approaches and highlight the benefit of using elevation change maps as complementary inputs to further improve the mapping of debris-covered areas. Next, we apply the models to data from 2023 and, based on these predictions, estimate the area change at both the individual glacier and regional level. However, fully automated methods still face numerous challenges, such as cast shadows, clouds, and difficulties in distinguishing debris-covered segments from surrounding rocks or seasonal snow from glacier ice. We therefore implemented an outlier filtering scheme to remove the glaciers for which the models perform poorly.
Finally, we provide annual area change rates over 2015-2023 for ca. 1000 glaciers, covering around 84% of the region. Based on these, we estimate a regional retreat of -1.97 ± 0.67% per year, with significant inter-glacier variability, which illustrates the high sensitivity of the glaciers in this region to climate change.
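The reported annual area change rates and ensemble-based uncertainties could be computed along the following lines. This is a hedged sketch: whether DL4GAM uses compound or linear annual rates, and how ensemble spread is aggregated, are assumptions here, and the glacier areas are hypothetical.

```python
import numpy as np

def annual_area_change_rate(area_start, area_end, n_years):
    """Compound annual rate of area change (fraction per year).
    One common convention; not necessarily the one used by DL4GAM."""
    return (area_end / area_start) ** (1.0 / n_years) - 1.0

def ensemble_area(member_areas):
    """Ensemble-based estimate with uncertainty: mean and sample
    standard deviation over areas predicted by the model ensemble."""
    a = np.asarray(member_areas, dtype=float)
    return a.mean(), a.std(ddof=1)

# Hypothetical glacier: 10.0 km^2 in 2015, 8.5 km^2 in 2023.
rate = annual_area_change_rate(10.0, 8.5, 8)   # roughly -2% per year
mean_a, std_a = ensemble_area([8.4, 8.5, 8.6, 8.5])
```

The spread across ensemble members is one simple proxy for the per-glacier uncertainties that feed the regional ±0.67% figure quoted above.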

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Glacier surge activity over Svalbard in the period 1991-2015 interpreted using heritage satellite radar missions and comparison to the period 2015-present (Sentinel era)

Authors: Tazio Strozzi, Oliver Cartus, Dr Maurizio Santoro, Thomas Schellenberger, Erik Schytt Mannerfelt, Andreas Kääb
Affiliations: Gamma Remote Sensing, Department of Geosciences, Oslo University
Glacier surging refers to strongly enhanced ice flow speeds over time periods of months to years. Knowing where and when glaciers show surge-type flow instabilities is important for a number of scientific and applied reasons. The mechanisms of glacier surging and the conditions leading to it are still incompletely understood, and questions arise as to whether and how climate change could impact surge initiation, frequency and magnitude, and therefore the response of glaciers to climate change. Glacier surges are identified and mapped using a number of (often combined) indicators such as looped moraines, specific landforms in the glacier forefield, exceptional and major glacier advance, exceptional crevassing, sheared-off glacier tributaries or particular patterns of elevation and surface velocity change. Leclercq and others (http://doi.org/10.5194/tc-15-4901-2021) introduced a method to detect surge-type glacier flow instabilities through the change in backscatter that they cause in repeat satellite SAR images. The method was developed based on Sentinel-1 C-band backscatter data between consecutive years. First, aggregated images of maximum backscatter values for each pixel location over the 3-month winter period (January to March for the northern hemisphere), when glaciers typically show little other backscatter change due to cold and dry conditions, were created. Then, the normalized difference between the two aggregated maximum images was calculated to search for changes in backscatter and eventually identify surge activity. To minimize varying topographic effects, the analysis is preferably performed with images taken from the same nominal orbit. Due to Sentinel-1's systematic acquisition strategy, a consistently large number of observations is available over Svalbard every winter for the same nominal orbits.
Using this approach, Kääb and others (http://doi.org/10.1017/jog.2023.35) mapped 25 surge-type events over Svalbard in the period 2017-22, a number that appears higher than in previously published inventories or studies (https://doi.org/10.1016/j.geomorph.2016.03.025). The question therefore arises as to whether the increasing number of detected surge events is related to changing environmental or climatic conditions over Svalbard or simply to improved observation capacity in the Sentinel era since 2015. To answer this research question and extend the record back before the Sentinel-1-based inventory, we considered heritage satellite radar missions in the period 1991-2015. In particular, at GAMMA we have already processed all the ENVISAT ASAR Image Mode (IM), Wide Swath Mode (WSM) and Global Monitoring Mode (GMM) data available on ESA's G-POD, as part of ESA CCI Land Cover, to 150 m resolution (http://doi.org/10.1016/j.rse.2015.10.031). We produced over Svalbard ENVISAT ASAR winter backscatter average and change images between 2004 and 2010. Subsequently, with the support of JAXA, we also processed the global JERS-1 data archive to provide winter backscatter average and change images between 1993 and 1998. More recently, we requested access from ESA’s Heritage Space Programme to 17,709 ERS-1/2 products and 10 TB of data over Svalbard in the period 1991-2011. A fully automated processing chain was implemented to process the ERS SLC data to radiometrically terrain-corrected level. Because few repeated winter observations from the same orbital track in consecutive years are available for the ERS-1/2 mission, we needed to consider differences over time scales longer than one year to obtain wall-to-wall coverage of all Svalbard. Finally, we also processed a series of Radarsat-2 scenes acquired over Svalbard in Wide mode and Wide Fine mode between 2012 and 2016.
Using the ERS-1/2 SAR, JERS-1 SAR and ENVISAT ASAR data we mapped over Svalbard 20 surge-type events in the period 1991-2011. Using the Radarsat-2 SAR data we mapped a further 5 surge-type events over the period 2012-2015. Eliminating duplicates with the inventory published by Kääb and others (http://doi.org/10.1017/jog.2023.35), updating start dates and completing the Sentinel-1 analysis with new images from 2023 onwards, we recorded over Svalbard 25 surge-type events in the pre-Sentinel-1 period 1991-2015 (25 years) and 28 surge-type events in the Sentinel-1 period 2016-2024 (9 years), which corresponds to about a threefold increase in the rate of detected surge events in the latter period. In our contribution, we will briefly recall the available satellite data and processing steps, and present and discuss the surge catalogue. Acknowledgements: This research has been supported by ESA through Glaciers CCI (grant no. 4000109873/14/I-NB). We thank the ESA’s Heritage Space Programme for provision of the ERS-1/2 data archive.
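The winter-composite change detection described above can be sketched as follows. The toy backscatter values and the 0.3 decision threshold are illustrative assumptions, not values taken from the cited studies.

```python
import numpy as np

def winter_max_composite(stack):
    """Per-pixel maximum backscatter (linear power) over a stack of
    winter (January-March) acquisitions, shape (n_scenes, rows, cols)."""
    return np.nanmax(stack, axis=0)

def surge_change_index(winter_y1, winter_y2):
    """Normalized difference between consecutive winter max composites;
    large absolute values flag candidate surge-related backscatter change."""
    return (winter_y2 - winter_y1) / (winter_y2 + winter_y1)

# Toy example: three scenes per winter over a 4x4 tile with constant
# background backscatter; one pixel brightens strongly in year 2
# (e.g. fresh crevassing on a surging glacier).
w1 = winter_max_composite(np.full((3, 4, 4), 0.15))
stack2 = np.full((3, 4, 4), 0.15)
stack2[:, 0, 0] = 0.8
w2 = winter_max_composite(stack2)

nd = surge_change_index(w1, w2)
candidates = np.abs(nd) > 0.3  # threshold is illustrative only
```

Restricting both composites to the same nominal orbit, as the abstract notes, would keep the topographic contribution to `nd` roughly constant between years.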

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Glacier Snowline Mapping from Sentinel-2 images by Machine Learning

Authors: Prashant Pandit, Dr Thomas Schellenberger, Dr Mattia Callegari, Lorenzo Bruzzone
Affiliations: University of Trento, Institute for Earth Observation, Eurac Research, Department of Geosciences, University of Oslo
Snow accumulation and ablation play a pivotal role in the mass balance of mountain glaciers. Accurate mapping of snow extent on glaciers enhances our understanding of climate change impacts (Larocca et al.) and serves as valuable input for glacier surface mass balance models (Rabatel et al.). For example, the snowline altitude at the end of the summer season serves as a proxy for the Equilibrium Line Altitude and is strongly correlated with the annual mass balance (Rabatel et al.). The largest study to date mapped 3,489 snowlines on 269 of the ~275,000 glaciers worldwide from Landsat data between 1984 and 2022 and found an increase in snowline elevation of approximately 150 meters. This significant rise indicates a reduction in glacier accumulation zones, suggesting negative mass balance trends driven by rising temperatures and shifting precipitation patterns. Regional variability in snowline changes underscores the complex interplay of climatic and local factors, emphasizing the importance of snowline monitoring for glacier health and climate change studies (Larocca et al.). However, manual delineation is time-consuming, and band- and index-threshold-based approaches often fail in areas of steep and complex mountainous terrain and in homogeneous snow conditions. The spectral similarity to firn and ice and varying illumination conditions in steep terrain pose significant challenges for accurate large-scale and long-term snow monitoring. Advanced machine learning techniques offer promising solutions to address these limitations by automating snowline detection and improving accuracy under diverse conditions (Prieur et al.). Designed to overcome these limitations by improving accuracy and scalability, this study presents first steps towards region-scale snow cover extent mapping using machine learning.
We manually digitized 312 snowlines on 41 glaciers in Scandinavia (13), Svalbard (9), and the European Alps (19) from Sentinel-2 data in the period 2015 to 2023, encompassing a wide range of seasonal snow conditions. Using this benchmark, we trained several machine learning models, including pixel-based classifiers such as Support Vector Machine, Random Forest, and XGBoost (Chen et al.), as well as U-Net (Ronneberger et al.), a fully convolutional neural network, and compared them against threshold-based approaches as baselines. The results from Scandinavia demonstrate the superiority of machine learning methods. While threshold-based approaches, such as the Normalized Difference Snow Index (NDSI > 0.4) and Near-Infrared (NIR > 0.11), achieved an Intersection over Union (IoU) score of 0.7147, U-Net significantly outperformed them with an IoU of 0.9456. Random Forest was the next best-performing method with an IoU of 0.8957, followed by XGBoost (0.8899) and SVM (0.8887). Adding elevation models and slope data to the classifiers resulted in only marginal performance improvements. This significant improvement highlights the potential of U-Net to accurately capture fine-scale snowline features, especially in heterogeneous and complex mountainous environments with spectrally similar classes such as firn and ice, and paves the way for accurate, low-cost, automated and large-scale snow mapping on glaciers. Keywords: Cryosphere, Snow, Snowline, Sentinel-2, Machine Learning References Larocca, L. J., Lea, J. M., Erb, M. P., McKay, N. P., Phillips, M., Lamantia, K. A., & Kaufman, D. S. (2024). Arctic glacier snowline altitudes rise 150 m over the last 4 decades. The Cryosphere, 18(8), 3591-3611. A. Rabatel, J.-P. Dedieu, and C. Vincent, “Using remote-sensing data to determine equilibrium-line altitude and mass-balance time series: validation on three french glaciers, 1994–2002,” Journal of Glaciology, vol. 51, pp.
539–546, 2005. Rabatel, A., Bermejo, A., Loarte, E., Soruco, A., Gomez, J., Leonardini, G., ... & Sicart, J. E. (2012). Can the snowline be used as an indicator of the equilibrium line and mass balance for glaciers in the outer tropics?. Journal of Glaciology, 58(212), 1027-1036. Prieur, C., Rabatel, A., Thomas, J. B., Farup, I., & Chanussot, J. (2022). Machine learning approaches to automatically detect glacier snow lines on multi-spectral satellite images. Remote Sensing, 14(16), 3868. Chen, T., & Guestrin, C. (2016, August). Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining (pp. 785-794). Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18 (pp. 234-241). Springer International Publishing.
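The threshold-based baseline and the IoU metric quoted above can be sketched as follows. The Sentinel-2 band roles (green, SWIR, NIR) and the combination of the two thresholds are assumptions; the abstract does not state exactly how they are applied, and the reflectance values below are toy data.

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green and SWIR reflectance."""
    return (green - swir) / (green + swir)

def snow_mask(green, swir, nir, ndsi_thr=0.4, nir_thr=0.11):
    """Threshold baseline as in the abstract: NDSI > 0.4 and NIR > 0.11
    (combined with a logical AND here, which is an assumption)."""
    return (ndsi(green, swir) > ndsi_thr) & (nir > nir_thr)

def iou(pred, truth):
    """Intersection over Union between two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy reflectance tile: snow is bright in green/NIR, dark in SWIR.
green = np.array([[0.8, 0.8], [0.2, 0.2]])
swir  = np.array([[0.1, 0.1], [0.15, 0.15]])
nir   = np.array([[0.7, 0.7], [0.05, 0.05]])
truth = np.array([[True, True], [False, False]])

pred = snow_mask(green, swir, nir)
print(iou(pred, truth))  # → 1.0
```

The same `iou` function serves to score the machine learning classifiers against the digitized snowline benchmark.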

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Estimation of SAR Signal Penetration Depth over Snow/Ice Land Cover Areas using Volume Decorrelation computed from Geocoded TanDEM-X Products

Authors: Nerea Ibarrola Subiza, Lukas Krieger, Marie Lachaise, Dana Floricioiu, Thomas Fritz
Affiliations: German Aerospace Center (DLR)
Digital elevation models (DEMs) derived from SAR interferometry (InSAR) can be significantly affected by volume decorrelation in different land cover areas. Volume decorrelation occurs when radar signals interact with multiple scattering surfaces at different heights and orientations. This effect causes the radar signals to scatter in multiple directions, leading to a loss of coherence. The scattering phase center is consequently located beneath the surface, resulting in biased elevation calculations for the actual land surface. In bistatic radar systems like TanDEM-X, volume decorrelation can be derived from the interferometric total coherence by considering the various decorrelation sources affecting the overall coherence [1]. The availability of this interferometric parameter as a layer in the TanDEM-X products is highly valuable for a range of applications, such as the estimation of penetration depth into snow- or ice-covered surfaces or into forest canopy. This study presents results obtained by directly using geocoded products within the TanDEM-X DEM Change Map processing chain [2] to compute volume decorrelation for each acquisition. Applying the equations from [3], volume decorrelation, together with the height of ambiguity, is used to derive the penetration depth on ice and snow. This additional information helps to correct the bias between the actual terrain surface and the measured phase center, leading to more realistic elevation estimation. A recent study on Aletsch glacier [4] has observed the elevation bias due to signal penetration in an X-band derived DEM by comparing it to a coincident DEM acquisition from Pléiades optical imagery. Here the elevation bias – averaged per elevation bin – can reach up to 4–8 m in the accumulation area, with a mean elevation difference of -5.59 m, which can in turn be reduced to about -4.29 m if additional local fine co-registration corrections are applied in this area of complex topography.
We use these results to validate the circumstances under which a signal penetration correction layer can be used to generate bistatic X-band DEMs that reflect the actual ice/snow surface. Keywords: TanDEM-X DEM, Bistatic Interferometric Coherence, Volume Decorrelation, Penetration Depth, Glacier Mass Change. [1] Rizzoli, Paola, Luca Dell’Amore, Jose-Luis Bueso-Bello, Nicola Gollin, Daniel Carcereri, and Michele Martone. “On the Derivation of Volume Decorrelation From TanDEM-X Bistatic Coherence.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15 (2022): 3504–18. https://doi.org/10.1109/JSTARS.2022.3170076. [2] Schweisshelm, Barbara, and Marie Lachaise. “Calibration of the Tandem-X Craw DEMs for the Tandem-X DEM Change Maps Generation.” In IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, 291–94. Kuala Lumpur, Malaysia: IEEE, 2022. https://doi.org/10.1109/IGARSS46834.2022.9883204. [3] Dall, Jorgen. “InSAR Elevation Bias Caused by Penetration Into Uniform Volumes.” IEEE Transactions on Geoscience and Remote Sensing 45, no. 7 (July 2007): 2319–24. https://doi.org/10.1109/TGRS.2007.896613. [4] Bannwart, Jacqueline, Livia Piermattei, Inés Dussaillant, Lukas Krieger, Dana Floricioiu, Etienne Berthier, Claudia Roeoesli, Horst Machguth, and Michael Zemp. “Elevation Bias Due to Penetration of Spaceborne Radar Signal on Grosser Aletschgletscher, Switzerland.” Journal of Glaciology, April 30, 2024, 1–15. https://doi.org/10.1017/jog.2024.37.
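For a uniform, infinitely deep volume, the model of Dall [3] relates the magnitude of the volume coherence to the height of ambiguity and the penetration bias; one common form of that inversion is sketched below. This is a simplified reading of [3], not the TanDEM-X processing chain, and the example numbers are purely illustrative.

```python
import math

def penetration_depth(gamma_vol, height_of_ambiguity):
    """Invert a uniform-volume coherence model,
        |gamma_vol| = 1 / sqrt(1 + (2*pi*d / h_amb)^2),
    for d, the elevation bias (penetration depth) in metres.
    Assumes an infinitely deep, uniform volume as in Dall [3]."""
    if not 0.0 < gamma_vol <= 1.0:
        raise ValueError("volume coherence must be in (0, 1]")
    h = abs(height_of_ambiguity)
    return (h / (2.0 * math.pi)) * math.sqrt(1.0 / gamma_vol**2 - 1.0)

# Illustrative numbers: a height of ambiguity of 45 m and a volume
# coherence of 0.95 give a bias of a few metres, comparable in order
# of magnitude to the Aletsch accumulation-area values quoted in [4].
d = penetration_depth(0.95, 45.0)
```

A coherence of exactly 1 (no volume decorrelation) correctly yields zero bias, while decreasing coherence or a larger height of ambiguity increases the inferred penetration depth.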

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Session: C.06.03 Validation of GNSS-RO and GNSS-R observations from small sats

GNSS Radio Occultation (RO) for atmospheric sounding has become the subject of the first Pilot Project integrating institutional (e.g. from MetOp) and commercial RO measurements into operational Numerical Weather Prediction (NWP), led by NOAA and EUMETSAT. This achievement was preceded by a number of studies on calibration, data quality and validation through impact assessments, including complementary observations from other sensors. Innovation continues in GNSS-RO, for example with Polarimetric RO, and further ongoing studies can be presented in this session.

A number of commercial GNSS-Reflectometry (GNSS-R) missions have been launched in the last 10 years, mostly driven by wind-speed applications, and more are planned from 2025, such as the ESA Scout HydroGNSS mission, with significant innovations and primary objectives related to land applications. As for GNSS-RO, a number of data quality and validation studies are ongoing or being planned; if successful, GNSS-R could also make it into operational systems.

This session is intended for the presentation of such studies, assessing GNSS measurements typically made by miniaturised GNSS EO receivers in commercial initiatives.

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: Recent Validation Activities for GNSS-R and -RO Products from Spire

Authors: Matthieu Talpe, Philip Jales, Vu Nguyen, Jessica Cartwright, Giorgio Savastano, Sanita Vetra-Carvalho, Claudio Navacchi, Ben Yeoh
Affiliations: Spire Global
Spire Global is an Earth Observation company operating multi-purpose nanosats in a variety of orbits. One of the core activities of Spire’s constellation is passive remote sensing of L-band signals using its in-house designed and manufactured multi-GNSS STRATOS receiver. After several successful data pilots spanning 2016 to 2019, Spire Global has been providing RO measurements to the weather agencies NOAA and EUMETSAT for operational data buys since 2020 and 2021, respectively. Over the last few years, Spire has further collaborated with agencies on the development and validation of ionospheric, near-nadir GNSS-R (ocean winds, soil moisture), and Polarimetric RO products. The aim for these products is to contribute to global assimilation models, whether for space weather models (such as GloTEC) or numerical weather prediction (NWP) models. In 2022, the NOAA Space Weather Prediction Center conducted a weather data pilot on absolute slant Total Electron Content (TEC), electron density profiles, and S4 scintillation indices from Spire and PlanetiQ. It was shown that the TEC products exhibit good accuracy and greatly improve spatial coverage. The study also generated recommendations to improve the detection of scintillation events. In 2023, Spire operated new polarimetric (dual-polarized H and V) antennae onboard three nanosats and collected novel precipitation-sensitive RO datasets. The next (and currently ongoing) step is to refine forward operators and enable the assimilation of these PRO datasets into NWP models. This work is carried out at ECMWF. The NOAA Ocean Surface Winds Data Pilot conducted between 2023 and 2024 evaluated the suitability of operational Near Nadir GNSS-R for ocean winds and Mean Sea Slope (MSS) products. At least 500 tracks of GNSS-R Level 1 calibrated radar cross section products were delivered daily between 25 January and 24 July 2024 with strict latency (<3 hr) requirements.
Over 17 institutional and governmental groups participated in this pilot to use the Level 1B and Level 2 GNSS-R data over ocean and land surfaces. A recent NASA Commercial SmallSat Data Acquisition (CSDA) program evaluation demonstrated agreement between the Level 2 Soil Moisture estimates from Spire GNSS-R reflections and SMAP, and of the L2 Ocean product with respect to ECMWF, ERA5 and CYGNSS. Lastly, the ESA EarthNet Data Assessment Pilot (EDAP) has provided assessments of GNSS-R datasets and is in the process of evaluating PRO and grazing-angle reflection products. The Spire constellation continues to be replenished, with six new RO and R satellites launched on SpaceX Transporter 11 in August 2024, including the first two combined R+RO platforms, and several more in upcoming rideshare launches. Spire encourages and supports the research community to further explore the Spire GNSS datasets. Free access is provided by the NASA CSDA program for US-funded researchers. The ESA Third Party Missions programme also provides access to a select set of datasets for researchers worldwide.

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: Meta-Mission of GNSS-R Satellites: Investigating the Potential of 40+ LEO Satellites with Reflectometry Payloads

Authors: Estel Cardellach, Tianlu Bai, Yan Cheng, Philip Jales, Cheng Jing, Weiqiang Li, Wenqiang Lu, Manuel Martín-Neira, Dallas Masters, Chris Ruf, Martin Unwin
Affiliations: Institute of Space Sciences (ICE-CSIC, IEEC), Tianmu Aerospace (Chongqing) Satellite Technology Co., Ltd., Yunyao Aerospace, Spire Global, Inc., CAST-Xi'an, National Satellite Meteorological Centre (NSMC, CMA), European Space Agency (ESA), Muon Space, Inc., University of Michigan, Surrey Satellite Technology Ltd.
Scientists and commercial and non-commercial GNSS reflectometry (GNSS-R) data providers are collaborating to demonstrate the potential of the full set of current GNSS-R systems. This ‘mission of missions’ consists of over 40 satellites in low Earth orbit (LEO) with different architectures, platform sizes and payload designs, whose data are shared through a neutral, multi-lateral arrangement. The experiment, called METACONRef, aims at analysing them as a single large meta-constellation of reflectometers to answer the following main questions:
- What are the benefits of a 40+ GNSS-R satellite meta-constellation? What is the resulting spatio-temporal coverage? Which scientific cases could be resolved with such a density of observations that cannot be properly monitored with the current Earth Observation system?
- What are the challenges of a GNSS-R meta-constellation? How limiting are factors such as differences in format, differences in instrumental parameters, inter-calibration of the power measurements, or inhomogeneous quality control and uncertainty characterization? Are further homogenization actions required for common use of such a diverse system of systems?
- What is the comparative cost-benefit of such a system? Does the improvement in performance justify the cost of incremental satellites?
The analysis will be made on actual GNSS-R data collected from spaceborne platforms belonging to missions developed by both national space agencies and commercial companies: NASA CYGNSS (currently seven satellites), Spire Global near-nadir satellites (four satellites), Muon Space (two satellites), FengYun-3 (three satellites), BuFeng-1A/B (two satellites), Tianmu (23 satellites), Yunyao (two satellites), UK DoT-1 (one satellite), and other GNSS-R payloads in LEO providing open-access data (e.g., TRITON) or to be joined later in the experiment (e.g., ESA HydroGNSS and EOS-08, TBC).
Comparative studies between the different missions will be avoided; the effort focuses instead on the combined performance over particular case studies to explore the spatio-temporal resolution and sensitivity of the Level-1 (observables) products. Different inter-calibration strategies are investigated, and the re-calibrated observables are exploited over small-scale, quickly evolving events to test the limits of the current resolution. This can cover different applications such as flooding, storms and wildfires. Other challenging applications can also benefit from large oversampling to enhance their performance, such as ocean altimetry (e.g., under conditions that limit the performance or operability of active dedicated sensors). Focusing on Level-1 observables reflects a two-fold strategy: on the one hand, we foresee difficulties in homogenizing the different retrieval algorithms behind the Level-2 products independently produced from each of the missions. On the other hand, other GNSS remote sensing techniques (e.g., radio occultation) have proven it feasible and even desirable to assimilate Level-1 products in operational modelling systems, rather than Level-2 retrievals. The analysis of the impact on operational services (e.g., numerical weather prediction, NWP) is not considered in this first experiment due to (1) the relatively low maturity of GNSS-R Level-1 assimilation into operational services and (2) the long time series required to properly test the impact (at least six months, three during northern hemisphere summer/southern hemisphere winter and three more during northern hemisphere winter/southern hemisphere summer). We believe this is a timely experiment whose output has the potential to assist space and science funding agencies in tracing the roadmap towards an optimal use of this opportunistic and cost-effective technique.
This comes at a moment when new missions are about to be launched (e.g., the ESA twin-satellite HydroGNSS, Brazil's Amazonia-1B) or are being considered for development (e.g., the Spanish component of the Atlantic Constellation), while commercial operators have started delivering data through pilot contracts with major operational and scientific agencies. Recommendations will be issued based on the outcome of the experiment.
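As a back-of-the-envelope illustration of the coverage question, the number of specular-point samples available per day scales roughly linearly with the number of receivers. The toy Python sketch below makes that explicit; all parameters (tracks per receiver, sampling rate) are hypothetical assumptions, not mission specifications.

```python
# Toy estimate of daily specular-point sample counts for a GNSS-R
# meta-constellation. All parameters are illustrative assumptions,
# not mission specifications.

def daily_specular_points(n_receivers, tracks_per_receiver=8,
                          seconds_per_day=86400, sample_hz=1):
    """Upper-bound count of specular-point samples collected per day."""
    return n_receivers * tracks_per_receiver * seconds_per_day * sample_hz

single_mission = daily_specular_points(7)   # a CYGNSS-sized constellation
meta = daily_specular_points(40)            # the 40+ satellite meta-constellation
print(f"coverage gain: {meta / single_mission:.1f}x")  # scales linearly
```

The real gain depends on orbit geometry, tracked transmitters and duty cycles, which this linear count deliberately ignores.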

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: GNSS-R land data assimilation at ECMWF

Authors: Patricia de Rosnay, Estel Cardellach, David Fairbairn, Sébastien Garrigues, Eleni Kalogeraki, Nazzareno Pierdicca, Peter Weston
Affiliations: ECMWF, ECMWF, ICE-CSIC, IEEC, INGV
This paper presents activities starting at ECMWF (European Centre for Medium-Range Weather Forecasts) to investigate the impact of GNSS-R data assimilation for Numerical Weather Prediction (NWP) and future climate reanalyses. The ECMWF NWP system relies on a coupled land-atmosphere-ocean model. In terms of data assimilation, a dedicated land data assimilation system (LDAS) is used to analyse soil moisture and temperature variables using a simplified Extended Kalman Filter (SEKF) approach. Here, we are developing the ECMWF LDAS capability to assimilate Level-1 GNSS-R observations. A machine-learning-based observation operator is being developed to simulate Level-1 GNSS-R data, using model features that include soil moisture and vegetation leaf area index (LAI). The training dataset uses an ECMWF land surface reanalysis conducted in preparation for ERA6-Land, together with CYGNSS GNSS-R data. The machine learning approach, presented here, is based on a gradient-boosted decision tree (XGBoost). A detailed information content analysis is conducted to identify the most important features that contribute to the signal. Plans to ingest GNSS-R data into the ECMWF SEKF are presented. The work includes a detailed analysis of the Jacobians, development of the data quality control in the data assimilation system, and an update of the SEKF data assimilation to include GNSS-R data in the observation vector. The work is being conducted in the context of the preparation of HydroGNSS, using existing data (e.g. CYGNSS here, extended to Spire in the future). GNSS-R land data will be assimilated in the ECMWF system to evaluate the impact on both land surface variables and atmospheric NWP. The plan is also to combine GNSS-R and L-band passive microwave data from SMOS (Soil Moisture and Ocean Salinity) observations and to assess the impact of each observation type either individually or combined. Potential benefits for both NWP and future climate reanalyses will be discussed.

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: The Radio Occultation Modeling Experiment (ROMEX)

Authors: Christian Marquardt, Hui Shao, Dr. Benjamin Ruston, Richard Anthes
Affiliations: EUMETSAT, UCAR
The international radio occultation (RO) community is conducting a collaborative effort to explore the impact of a large number of RO observations on numerical weather prediction (NWP). This effort, named the Radio Occultation Modeling Experiment (ROMEX), was endorsed in 2022 by the International Radio Occultation Working Group (IROWG), a scientific working group under the auspices of the Coordination Group for Meteorological Satellites (CGMS), in close coordination with the user community, such as WMO, IOC-UNESCO, and other user entities. ROMEX seeks to answer some of the more pressing technical and programmatic questions facing the community and help inform the near- and long-term strategies for RO missions and acquisitions by NOAA, EUMETSAT, and other CGMS partners. Most important among these questions is to quantify the benefit of increasing the quantity of RO observations. ROMEX is envisioned to consist of at least two three-month periods during which all available RO data are collected, processed, archived, and made available to the global community free of charge for research and testing. Although the primary purpose is to test the impact of varying numbers of RO observations on NWP, the three months of RO observations will be a rich data set for research on many atmospheric phenomena. The first ROMEX period (ROMEX-1) covers September through November 2022, which contains a number of tropical cyclones that can be studied. The international community and representatives of the IROWG are currently finalising the execution of ROMEX-1. RO data providers have sent their data to EUMETSAT for repackaging and, in some cases, reprocessing in a uniform way. The processed data (phase, bending angle, and refractivity) were made available to registered ROMEX participants by the ROM SAF. The data were also processed independently by both the UCAR COSMIC Data Analysis and Archive Center (CDAAC) and the NOAA STAR division.
The data are available to all participants at no charge, with the conditions that the providers be acknowledged and the data not be used for any commercial or operational purposes. This presentation will provide an overview of GNSS-RO data currently available from both public and commercial sources and introduce the rationale for ROMEX. We will then summarise the results obtained so far.

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: Developing a forward operator for GNSS polarimetric radio occultation observations

Authors: Katrin Lonitz, Dr Sean Healy, Estel Cardellach, Ramon Padullés
Affiliations: ECMWF, ECMWF, Institute of Space Studies of Catalonia (IEEC)
GNSS radio occultation (GNSS-RO) measurements are now an established component of the global observing system, providing vertical profiles of atmospheric temperature and water vapour content. Increasing volumes of GNSS polarimetric radio occultation (GNSS-PRO) observations are becoming available as GNSS receivers on low-Earth orbiters gain the ability to measure the GNSS signals in both the vertical and horizontal polarisation directions. These measurements extend the information content of conventional GNSS-RO and enable the retrieval of oriented hydrometeor particle information along the ray path (Cardellach et al., 2019). They are of potential interest for operational numerical weather prediction (NWP). As a first step towards using these observations in both data assimilation and model diagnostics, a forward operator for the GNSS-PRO observable, the polarimetric differential phase shift, has been developed, as shown by Hotta et al. (2024). In this forward operator, which is designed for operational NWP applications, ‘effective’ values of hydrometeor density and axis ratio are used to calculate the specific differential polarimetric phase shift (Kdp). Here, we show how large the impact is when refining these values for the different hydrometeors. We also explore a new formulation of Kdp based on particle scattering, using hydrometeor habits as provided in ARTS (Eriksson et al., 2018). The advantage of this approach is that the assumptions are consistent with those in other modules of the NWP models, such as the radiative transfer (Geer et al., 2021). Ultimately, we show the differences between the formulations for some case studies. The implications for assimilating GNSS-PRO observations will be discussed.
References:
Cardellach, E., et al. (2019). Sensing heavy precipitation with GNSS polarimetric radio occultations. Geophysical Research Letters, 46(2), 1024-1031.
Eriksson, P., et al. (2018). A general database of hydrometeor single scattering properties at microwave and sub-millimetre wavelengths. Earth System Science Data, 10(3), 1301-1326.
Hotta, D., K. Lonitz, and S. Healy (2024). Forward operator for polarimetric radio occultation measurements. Atmos. Meas. Tech., 17, 1075–1089.
Geer, A. J., et al. (2021). Bulk hydrometeor optical properties for microwave and sub-millimetre radiative transfer in RTTOV-SCATT v13.0. GMD, 14, 7497–7526.
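Conceptually, such a forward operator integrates the specific differential phase shift Kdp along the ray path. The minimal sketch below illustrates only that integration step; the linear Kdp-water-content coefficient is a hypothetical placeholder, not a validated scattering result from the papers cited above.

```python
# Minimal sketch of a polarimetric-RO forward operator: integrate the specific
# differential phase shift Kdp (deg/km) along the ray path to obtain the
# accumulated polarimetric phase shift (deg). The linear Kdp coefficient is a
# hypothetical placeholder, not a validated scattering result.
import numpy as np

def delta_phi(hydrometeor_wc, segment_km, coeff=0.2):
    """hydrometeor_wc: water content per ray segment (g/m^3);
    segment_km: segment lengths (km); coeff: assumed deg/km per g/m^3."""
    kdp = coeff * np.asarray(hydrometeor_wc, float)   # 'effective' linearised Kdp
    return float(np.sum(kdp * np.asarray(segment_km, float)))

# A ray crossing a 100 km precipitating layer in ten 10 km segments:
wc = [0.0, 0.1, 0.5, 1.0, 1.5, 1.5, 1.0, 0.5, 0.1, 0.0]
print(f"delta-Phi = {delta_phi(wc, [10.0] * 10):.1f} deg")
```

The refinements discussed in the abstract amount to replacing the linear `coeff` with hydrometeor-dependent effective values or with particle-scattering calculations.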

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: Comprehensive analysis of spaceborne GNSS reflectometry for precision altimetry

Authors: Dr. Sajad Tabibi, Dr Raquel N. Buendia
Affiliations: Faculty of Science, Technology and Medicine, University of Luxembourg
Global Navigation Satellite System-Reflectometry (GNSS-R) has emerged as a versatile and cost-effective technique that complements traditional remote sensing methods. By using reflected GNSS signals, GNSS-R enables all-weather operation and supports the monitoring of diverse surface types. Its applications span soil moisture estimation, ocean altimetry, and ice dynamics monitoring. This study explores the potential of Grazing-Angle GNSS-R (GG-R) carrier-phase measurements for precision altimetry, focusing on retrieving sea-level anomalies (SLA) and monitoring polar regions. Data from Spire Global Inc.’s Radio Occultation (RO) constellation are compared with conventional radar altimetry missions, including Sentinel-3A/3B, SARAL, and CryoSat-2. Over a period of more than two years, SLA data analyzed at 1-day intervals and 10 km spatial resolution using dual-frequency GPS measurements yielded an average RMSE of approximately 47 cm. The analysis highlights complementary strengths between the two methods, with GG-R providing valid measurements in scenarios where radar altimetry is unavailable, and vice versa. A focused evaluation of GG-R-specific collocated events confirmed the consistency of Spire’s retrievals, achieving an RMSE of 25 cm. These findings demonstrate GG-R’s ability to enhance spatio-temporal resolution and address coverage limitations in conventional systems, establishing it as a valuable tool for advancing altimetry in challenging environments.
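The RMSE figures quoted above come from comparing collocated sea-level anomalies; stripped of the matchup logic, the computation reduces to the sketch below. Values are synthetic, and the 10 km / 1-day collocation step is replaced by pre-matched arrays for brevity.

```python
# Sketch of the collocation comparison described above: RMSE between
# grazing-angle GNSS-R sea-level anomalies (SLA) and collocated radar-altimetry
# values. Synthetic numbers; the 10 km / 1-day matchup is reduced to
# pre-matched arrays for brevity.
import numpy as np

def rmse(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(1)
sla_altimetry = rng.normal(0.0, 0.10, 1000)              # metres
sla_gnssr = sla_altimetry + rng.normal(0.0, 0.25, 1000)  # assumed 25 cm noise

print(f"RMSE = {100.0 * rmse(sla_gnssr, sla_altimetry):.0f} cm")
```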

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Session: D.01.04 Using Earth Observation to develop Digital Twin Components for the Earth System - PART 2

Climate change represents one of the most urgent challenges facing society. The impacts of climate change on the Earth system and society, including rising sea levels, increasing ocean acidification, and more frequent and intense extreme events such as floods, heat waves and droughts, are expected not only to have a significant impact across different economic sectors and natural ecosystems, but also to endanger human lives and property, especially for the most vulnerable populations.

The latest advances in Earth Observation science and R&D activities are opening the door to a new generation of EO data products, novel applications and scientific breakthroughs, which can offer an advanced and holistic view of the Earth system, its processes, and its interactions with human activities and ecosystems. In particular, those EO developments together with new advances in sectorial modelling, computing capabilities, Artificial Intelligence (AI) and digital technologies offer excellent building blocks to realise EO-based Digital Twin Components (EO DTCs) of the Earth system. These digital twins shall offer high-precision digital replicas of Earth system components, boosting our capacity to understand the past and monitor the present state of the planet, assess changes, and simulate the potential evolution under different (what-if) scenarios at scales compatible with decision making.

This session will feature the latest developments from ESA’s EO-based DTCs, highlighting:
- Development of advanced EO products
- Integration of EO products from a range of sensors
- Innovative use of AI and ML
- Advanced data assimilation
- Development of tools to address needs of users and stakeholders
- Design of system architecture
- Creation of data analysis and visualization tools

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: An EO-informed Digital Twin Component for Glaciers

Authors: Fabien Maussion, Julia Bizon, Inés Dussaillant, Alexander Fischer, Noel Gourmelen, Livia Jakob, Richard Lane, Thomas Nagler, Samuel Nussbaumer, Carlos Pereira, Patrick Schmitt, Gabriele Schwaizer, James Thomas, Michael Zemp
Affiliations: University of Bristol, Earthwave, World Glacier Monitoring Service (WGMS), University of Innsbruck, ENVEO
Mountain glaciers are critical elements of the Earth’s hydrological and climate systems. The retreat and mass loss of glaciers globally not only contribute significantly to sea-level rise but also have profound implications for water resources, hydropower, agriculture, and natural hazards. The rapid changes in glaciers due to climate change challenge our ability to monitor and address the associated risks effectively. To address these challenges, we present the Digital Twin Component for Glaciers (DTC Glaciers), a pioneering initiative under ESA’s Digital Twin Earth (DTE) programme. Leveraging the latest Earth Observation (EO) data, advanced modelling techniques, and AI, DTC Glaciers will assimilate heterogeneous information from in-situ observations and EO to produce a centralised product that transcends the capabilities of individual datasets. Users will be able to interrogate the DTC to derive valuable insights into glacier changes in area, volume, mass and runoff, and their implications for communities and ecosystems. In this presentation, we showcase how our DTC prototype can be used to address two main challenges faced by scientists and stakeholders in the Alps and in Iceland. The first challenge is the estimation of glacier runoff, which is calculated by integrating EO products and meteorological information into models. Here we show how a data assimilation platform informed by EO reduces uncertainties compared to currently available approaches to estimate daily runoff, a critical variable for water management in glaciated mountain basins, and essential information for hydropower operation and downstream irrigation. The second challenge relates to the fundamental capacity of DTCs to adapt to user actions and near real-time changing conditions in the physical world.
In this presentation, we will show how our DTC prototype leverages cloud computing to allow interaction with the twin, permitting users to inform the tool with independent data - in a first step, in-situ observations of glacier mass balance. DTC Glaciers not only aims at advancing glacier monitoring but also demonstrates the transformative potential of digital twins in addressing global climate challenges. While its initial focus is regional and at the demonstrator level, the scalable design of DTC Glaciers positions it as a blueprint for future global-scale implementations, ensuring its relevance for both scientific research and operational decision-making within the broader context of ESA’s Digital Twin Components and the Destination Earth (DestinE) initiative of the European Commission.
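The idea of informing the twin with an in-situ mass-balance observation can be illustrated with a scalar variance-weighted (optimal interpolation) update; the numbers below are illustrative, not DTC Glaciers values.

```python
# Scalar optimal-interpolation update: blend a modelled glacier mass balance
# with an in-situ observation, weighting by their error variances.
# Values are illustrative, not DTC Glaciers output.
def assimilate(model_mb, model_var, obs_mb, obs_var):
    """Mass balance in m w.e. per year; returns (analysis, analysis variance)."""
    gain = model_var / (model_var + obs_var)          # Kalman-style gain
    analysis = model_mb + gain * (obs_mb - model_mb)  # pull towards observation
    analysis_var = (1.0 - gain) * model_var           # reduced uncertainty
    return analysis, analysis_var

analysis, var = assimilate(model_mb=-1.2, model_var=0.30 ** 2,
                           obs_mb=-0.8, obs_var=0.15 ** 2)
print(f"analysis = {analysis:.2f} m w.e./yr, variance = {var:.3f}")
```

The analysis lands closer to the (more certain) observation, and its variance is smaller than either input's: the basic mechanism by which independent user data can sharpen the twin's state.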

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: Digital Twin Earth: Coastal Processes and Extremes

Authors: Dr Daniel Morton, Jan Jackson, Martin Jones, Dr Steve Emsley, Anne-Laure Beck, Antoine Mangin, Jean-Michelle Gauzan, Jean-Michelle Rivet, Dinesh Kumar-Babu, Natascha Mohammedi, Dr Dominique Durand, Dr Andres Payo Garcia, Patrick Matgen, Jefferson Wong, Professor Ivan Haigh, Dr Hachem Kassem, Dr Claire Dufau, Laurine Maunier, Isabella Zough, Dr Oscar Serrano Gras
Affiliations: Argans Ltd, ACRI-ST, Covartech, AdwäisEO, LIST, CLS, British Geological Survey, The University of Southampton, Biosfera, ONACC
The European Space Agency's (ESA) Digital Twin Earth (DTE) project is a cutting-edge initiative to create a highly accurate digital replica of Earth, designed to simulate physical, biological and social systems and support the analysis of the planet's dynamics in near-real time. It will integrate vast amounts of data from satellite observations, ground measurements, advanced computation, AI/ML and process models to provide insights into Earth's systems, such as climate, oceans, forests, and human activities. Under the DTE project ARGANS (UK), with sister companies adwäisEO (Luxembourg) and ACRI-ST (France), partnered with Biosfera (Spain), the British Geological Survey (UK), CLS (France), COVARTEC (Norway), LIST (Luxembourg), ONACC (Cameroon) and the University of Southampton (UK), has been given the opportunity to develop a digital twin to represent Coastal Processes and Extremes. This involves designing and implementing a digital twin architecture within the ESA DestinE platform to showcase four coastal use cases: EO-supported models of (i) coastal erosion, (ii) coastal flooding, plus (iii) mangrove and (iv) sargassum dynamics, to understand their effects upon ecosystem health, biodiversity and consequent economic impacts. Outputs from the digital twin will enhance disaster preparedness and response by improving predictions of storm surges and flooding, by providing information to support evacuation scenarios and by identifying vulnerable infrastructure and communities to enable pre-emptive measures. It will support climate change adaptation by tracking changes in coastline dynamics, identifying areas for adaptive infrastructure such as seawalls and green buffers, and in doing so help policy makers weigh trade-offs between development and environmental conservation under ‘what if?’ scenarios.
It will advance environmental conservation by tracking marine and coastal habitats, enabling targeted conservation efforts to support mangrove restoration and sargassum control, bringing potential benefits of healthier marine ecosystems, increased biodiversity, sediment stabilisation and carbon storage. Accessible dynamic visualisations will make scenarios easy to understand and promote education and public awareness, allowing communities to participate in planning and advocate for their interests. This project began in early 2025, so here we report on early progress and on technical challenges and solutions to date, so that other digital twin practitioners can learn from our experience. We shall contrast our digital twin with other coastal modelling and digital twin activities to emphasise the scientific and societal benefits brought by the digital twin concept and increase awareness of the wider activities in this scientific area. Finally, we shall share thoughts on how the coastal processes and extremes digital twin can integrate with partner projects to realise a fully integrated operational system of systems (SoS) to replicate the Earth’s dynamics.

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: Geohazard DTC: the GET-it project

Authors: Salvatore Stramondo, Hugues Brenot, Stefano Corradini, Arnau Folch, Gaetana Ganci, Fabrizio Pacini, Elisa Trasatti, Daniela Fucilla
Affiliations: Istituto Nazionale di Geofisica e Vulcanologia, Terradue, Consejo Superior de Investigaciones Científicas, Royal Belgian Institute for Space Aeronomy
The ESA GET-it (Geohazards Early Digital Twin Component) project proposes a holistic approach to a DTC (Digital Twin Component) system, building upon the exploitation of multi-sensor EO data and AI techniques. It is designed to leverage Copernicus data and advanced algorithms to generate information services for geohazards that address real needs of institutional and commercial stakeholders. GET-it benefits from the long-standing experience of leading researchers from the well-established geohazard community represented by INGV (Istituto Nazionale di Geofisica e Vulcanologia), CSIC (Consejo Superior de Investigaciones Científicas) and BIRA (Royal Belgian Institute for Space Aeronomy), with over two decades of experience in integrating satellite, airborne, and ground-based observations along with complex simulations to better understand geohazard processes and develop solutions for the preservation of lives and the protection of valuable assets. Terradue, the technological partner of GET-it, leads the innovative services for data-intensive applications. GET-it offers a portfolio of cutting-edge processors and products fully exploiting EO data, encompassing the whole spectrum of volcano- and seismic-related geohazards, in order to support information services. The Geohazard DTC is designed as a customizable environment supporting stakeholder communities in designing accurate and actionable adaptation strategies and mitigation measures. GET-it addresses the needs of public institutions, decision makers and private customers (among others, aviation stakeholders, engine manufacturers, the insurance sector, road/infrastructure authorities, and energy providers) dealing with geohazards. The increased complexity of modern society and the key role of infrastructures (energy, transportation, and services) in the welfare of citizens demand proper management of the impact of different hazards.
In particular, Critical Infrastructures (CI) are technological systems which ensure the production and delivery of primary services to citizens. The roadmap for satellite EO data in geohazards management and the expected developments of EO in the forthcoming decades were first traced at the International Forum on Satellite Earth Observation and Geohazards (the Santorini Conference) in 2012. The occurrence of a geohazard implies sudden, unpredictable, and cascading effects. The impact of geohazards on activities and infrastructure depends on exposure and vulnerability. The Sendai Framework recommends a number of actions at the State level, based on the concept that government policies should evolve from merely managing disasters to managing risks, i.e., establishing effective prevention measures. Therefore, a fundamental, detailed comprehension of all risk elements relating to disasters is essential. This applies to geohazards, and it is the guiding principle behind the proposed Geohazard DTC deployed in GET-it. In the wider scope of the ESA DTC Programme element, GET-it specifically pertains to the creation of a "Digital Twin Earth" that simulates the Earth system based on EO data. This not only aims to provide a virtual representation of Earth but also to predict future environmental conditions and occurrences, focusing on natural disasters such as volcanic eruptions and earthquakes. The Geohazard DTC is a component of this larger framework and serves the following specific functions:
- What-If Analysis for Disaster Preparedness: A key feature of the DTC will be its ability to perform what-if analyses, allowing users to assess potential interventions and their impacts on disaster outcomes, thereby enhancing preparedness and mitigation strategies.
- Integration with Global Efforts: The DTC will be designed to integrate seamlessly with international monitoring and response efforts, providing a tool that complements and enhances global capabilities to manage geohazard risks.
- Support for ESA Policies and Directives: The development of the Geohazard DTC supports ESA’s policies on disaster risk reduction, climate change, and sustainable development, making it a strategic component of the Earthwatch Programme.
GET-it has engaged several stakeholder communities, including policy and decision makers, emergency managers and scientists. GET-it will be demonstrated in three use cases: the 2018 eruption of Mount Etna (Italy), the 2016 Central Italy earthquake sequence, and the 2021 eruption at La Palma (Canary Islands).

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: Hydrology analyses in mountain basins for a Decision Support System in a Digital Twin of Alps

Authors: Matteo Dall'Amico, Maxim Lamare, Federico Di Paolo, Stefano Tasin, Nicolò Franceschetti, Luca Brocca, Silvia Barbetta, Sara Modanesi, Bianca Bonaccorsi, Jean-Philippe Malet, Clément Michoud, Thierry Oppikoffer, Philippe Bally
Affiliations: Waterjade Srl, Sinergise Solutions GmbH, Research Institute for Geo-Hydrological Protection, CNR-Irpi, Institut Terre et Environnement de Strasbourg - University of Strasbourg, Ecole et Observatoire des Sciences de la Terre - University of Strasbourg, Terranum srl, European Space Agency - Esrin
The Alps are the most densely populated mountain range in Europe. As a result, hydrological hazards constitute a major threat to human activity, and water resources play a central role in socio-economic developments (agriculture, tourism, hydropower production...). Furthermore, the Alps are particularly sensitive to the impacts of climate change. Over the last century, temperatures have risen twice as fast as the northern-hemisphere average, whereas precipitation has increased non-linearly. Because of the increasing pressure on human settlements and infrastructure, there is a strong priority for policy-makers to implement climate change adaptation strategies from the local to the regional scale. To support and improve the decision-making process, numerical decision support systems provide valuable information derived from observations or models to better manage increasing threats and weaknesses. For this reason, through the Digital Twin Earth programme (https://dte.esa.int/), ESA is encouraging the development of technological projects aimed at the implementation of operational Digital Twin ecosystems. The main objective of the ESA-funded Digital Twin of Alps project (https://digitaltwinalps.com/) is to provide a roadmap for the implementation of future Digital Twin Earth (DTE) instances, with a focus on the Alpine context. A demonstrator has been developed to act as a decision support system representing the major environment-related risks and impacts faced by populations living in the Alps, as well as water resource management indicators. Regarding hydrology, different parameters evaluated through Earth Observation, in-situ data and physical modeling are reported for reanalysis, monitoring, forecasting and decision-making purposes.
Reanalysis and monitoring of snow parameters (snow depth, snow cover area, Snow Water Equivalent - SWE) and hydrological variables (soil moisture, river discharge) enable quasi real-time tracking of the evolution of the water content in a basin. The nowcast (+3 days) of snow melt is used as an input for the landslide monitoring and risk assessment. Hydrology-related anomalies with respect to the historical mean (i.e., SWE, soil moisture, evapotranspiration and precipitation) enable a quick understanding of the current water budget in the user-selected area, and can be used as a starting point to predict future evolution (e.g., the SWE anomaly at the end of the winter can give information about the possibility of drought during the summer). Finally, two Decision Support Systems (DSSs) help the user in evaluating possible future scenarios, to efficiently tackle the problem of water-related extreme events. The flood DSS evaluates the flooded area around a river as a function of precipitation return period, freezing level and soil moisture content. The drought DSS evaluates the river discharge in a section (expressed as percentiles calculated with respect to the average historical value over the 2002-2022 period) depending on temperature, precipitation, snow storage content and the presence of hydraulic works. The future development of a DTE will enable a comprehensive tool for monitoring and predicting extreme events related to natural hazards, enabling timely and effective mitigation of the associated risk.
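The percentile logic behind the drought DSS can be sketched in a few lines; the discharge values below are synthetic, not from the 2002-2022 record.

```python
# Percentile-style drought indicator: express a current river discharge as a
# percentile of the historical distribution for the same river section.
# Values are synthetic, not the 2002-2022 record.
import numpy as np

def discharge_percentile(current, historical):
    """Percent of historical discharges at or below the current value."""
    hist = np.asarray(historical, float)
    return 100.0 * float(np.mean(hist <= current))

historical = np.linspace(5.0, 50.0, 21)  # 21 yearly mean discharges, m^3/s
print(f"{discharge_percentile(12.0, historical):.0f}th percentile (low flow)")
```

A low percentile flags drought-leaning conditions for that section; the operational DSS additionally conditions the value on temperature, precipitation, snow storage and hydraulic works.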

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: Development of an Agriculture Digital Twin Infrastructure Model

Authors: Rajat Bindlish, Dr. Pang-wei Liu, Dr. Jessica Erlingis, Meijian Yang, Dr. Shahryar Ahmad, James Geiger, Luke Monhollon, Sujay Kumar, Alex Ruane, Zhengwei Yang, Gary Feng, Yanbo Huang
Affiliations: NASA Goddard Space Flight Center, Science Systems and Applications, Inc., Earth System Science Interdisciplinary Center, University of Maryland, NASA Goddard Institute for Space Studies, Center for Climate Systems Research, Climate School, Columbia University, Science Applications International Corporation, Kellogg Brown & Root (KBR), US Department of Agriculture, National Agricultural Statistics Service, US Department of Agriculture, Agriculture Research Service
Crop growth, yield, and production information is critical for commodity markets, food security, economic stability, and government policy formulation. Current agricultural models require weather forcings such as precipitation, temperature, and solar radiation, along with historical data, as key field parameters to develop estimates for field operations schedules, from seeding to harvesting, with fertilizer and herbicide treatments in between. Although current crop growth models provide rigorous modules to simulate crop development, they lack rigorous water balance and hydrologic processes. On the other hand, hydrology models lack in-depth simulation of crop development stages and farm management. Coupling hydrology and crop growth models with interdependent constraints will leverage their complementary strengths to improve estimates of hydro-agricultural variables. The Land Information System (LIS) was coupled with the Decision Support System for Agrotechnology Transfer (DSSAT) model to estimate crop growth stages, biomass, and crop yield for different conditions. The coupled model framework can directly utilize LIS's built-in modules to assimilate remotely sensed data such as soil moisture and LAI to update and improve the model simulations. In the presentation, we will demonstrate the capability of the developed digital twin framework and explore the impact of weather (precipitation, soil moisture, temperature) and climate on crop yield. The framework will provide an unprecedented tool to support best management practices for farming systems and productivity outlooks for agricultural decision makers.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: Towards a Digital Twin of Tropical Wetland Methane Emissions

Authors: Rob Parker, Cristina Ruiz Villena, Khunsa Fatima, Chandana Pantula, Nic Gedney, Paul Palmer
Affiliations: National Centre for Earth Observation, School of Physics and Astronomy, University Of Leicester, Met Office Hadley Centre, National Centre for Earth Observation, School of Geosciences, University of Edinburgh
Recent unexplained and significant increases in atmospheric methane (CH₄) highlight an increasingly urgent need to understand how tropical wetlands are responding to climate change and how potential methane-climate feedbacks are driving such increases. As we try to achieve Net Zero targets and meet commitments to the Methane Pledge, it is vital that we understand the background of underlying natural emissions upon which anthropogenic emissions are added. Climate feedbacks which accelerate natural emissions could undermine any benefit from reducing anthropogenic emissions and significantly change advice given to policymakers. To address this challenge, we propose to combine state-of-the-art modelling capabilities with the wealth of observational data and make intelligent use of machine-learning analysis methods. We will accomplish this by developing a novel, dedicated and focused Tropical Wetland Digital Twin. Environmental Digital Twins are an emerging paradigm, incorporating Earth System modelling, Earth Observation (EO) and Artificial Intelligence (AI), to provide new environmental insights and give stakeholders the ability to ask data-driven and evidence-led questions. Our Digital Twin will bring together our best capabilities for observing and predicting wetland emissions and make these results useful to researchers, policymakers, or anyone who needs to ask questions about how the Earth System responds to changes. It will enable new types of analysis (emulators providing understanding and explainability); generation of new data (wetland extent maps); new modelling capabilities (wetland methane-climate feedbacks in climate projections); and improved decision support (widely democratised access to tools and data). This work details the first steps towards such a Digital Twin, focusing on our development of machine-learning based emulators for the JULES land surface model.
The development of such emulators allows: fast and efficient simulations of large ensembles to explore the complex parameter space; exploration of driving factors through Explainable AI; model-data fusion incorporating EO data with model responses; and the deployment as Digital Twin components into wider climate services.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Session: C.02.06 Swarm - ESA's extremely versatile magnetic field and geospace explorer

This session invites contributions dealing specifically with the Swarm mission: mission products and services, calibration, validation and instrument-related discussions. It is also the session in which the future and evolution of the mission, and the future beyond Swarm will be discussed. Particularly welcome are contributions highlighting observational synergies with other ESA and non-ESA missions (past, current and upcoming), in addition to ground-based observations and modelling.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: Swarm Investigation of Ultra-Low-Frequency (ULF) Pulsation and Plasma Irregularity Signatures Potentially Associated With Natural Hazards

Authors: Georgios Balasis, Angelo De Santis, Constantinos Papadimitriou, Zoe Boutsi, Gianfranco Cianchini, Omiros Giannakis, Stelios M. Potirakis, Mioara Mandea
Affiliations: Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens, Istituto Nazionale di Geofisica e Vulcanologia, Department of Physics, National and Kapodistrian University of Athens, Department of Electrical and Electronics Engineering, University of West Attica, Centre National d’Etudes Spatiales
Launched on 22 November 2013, Swarm is the fourth in a series of pioneering Earth Explorer missions and also the European Space Agency's (ESA's) first constellation to advance our understanding of the Earth's magnetic field and the near-Earth electromagnetic environment. Swarm provides an ideal platform in the topside ionosphere for observing ultra-low-frequency (ULF) waves, as well as equatorial spread-F (ESF) events or plasma bubbles, and thus offers an excellent opportunity for space weather studies. For this purpose, a specialized time–frequency analysis (TFA) toolbox has been developed for deriving continuous pulsations (Pc), namely Pc1 (0.2–5 Hz) and Pc3 (22–100 mHz), as well as ionospheric plasma irregularity distribution maps. In this presentation, we focus on the ULF pulsation and ESF activity observed by Swarm satellites during a time interval centered around the occurrence of the 24 August 2016 Central Italy M6 earthquake. Because the Swarm satellites passed close to the earthquake epicenter a few hours before the earthquake occurred, data from the mission may offer a variety of interesting observations around the time of the event. These observations could be associated with the occurrence of this geophysical event. Most notably, we observed an electron density perturbation occurring 6 h prior to the earthquake. This perturbation was detected when the satellites were flying above Italy. The results obtained here pave the way for exploring other types of events using satellite data, as ionospheric processes and the space-based detection of natural hazards continue to be a multidisciplinary research area. The short- and long-term prospects are promising, even though our current understanding of the coupling between the lithosphere, atmosphere, and ionosphere remains limited.
This applies not only to the generation of co-seismic and co-volcanic ionospheric disturbances, which are of particular interest, but also to other solid Earth phenomena, such as slow-slip earthquakes and landslides. To enhance our understanding of this complex coupling, it is essential to investigate the formation mechanisms of these ionospheric disturbances. Moreover, a deeper study of how this coupling varies with solar activity levels, atmospheric conditions, and other factors is necessary. In terms of observations, combining electromagnetic measurements with other data, such as high-resolution GNSS or gravity data, is crucial. This combination could provide new insights into the generation and evolution of ionospheric disturbances caused by natural hazard events and how they develop with altitude. For more details please see: Balasis, G.; De Santis, A.; Papadimitriou, C.; Boutsi, A.Z.; Cianchini, G.; Giannakis, O.; Potirakis, S.M.; Mandea, M. Swarm Investigation of Ultra-Low-Frequency (ULF) Pulsation and Plasma Irregularity Signatures Potentially Associated with Geophysical Activity. Remote Sensing 2024, 16, 3506. https://doi.org/10.3390/rs16183506.
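As a rough illustration of the Pc band definitions quoted above (Pc1: 0.2–5 Hz, Pc3: 22–100 mHz), the sketch below isolates the Pc3 band from a synthetic 1 Hz magnetometer series with a simple brick-wall FFT filter. The actual TFA toolbox uses a dedicated time–frequency analysis; all signal parameters here are invented for the example.

```python
import numpy as np

FS = 1.0                    # sampling rate of 1 Hz magnetic data
PC3_BAND = (0.022, 0.100)   # Pc3 continuous pulsations: 22-100 mHz

def extract_band(series, fs=FS, band=PC3_BAND):
    """Isolate one pulsation band with a brick-wall filter in the
    frequency domain: zero all FFT bins outside the band."""
    spec = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(len(series), d=1.0 / fs)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0
    return np.fft.irfft(spec, n=len(series))

# Synthetic example: a 50 mHz Pc3-like wave plus an out-of-band 5 mHz swell
t = np.arange(3600.0)                            # one hour of 1 Hz samples
pc3_wave = 2.0 * np.sin(2 * np.pi * 0.050 * t)   # inside the Pc3 band
swell = 10.0 * np.sin(2 * np.pi * 0.005 * t)     # below the band
recovered = extract_band(pc3_wave + swell)       # ~ pc3_wave alone
```

Note that 1 Hz sampling (Nyquist 0.5 Hz) is sufficient for Pc3 but not for the upper part of the Pc1 band, which is why Pc1 studies rely on higher-rate (e.g. burst-mode) data.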
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: The Swarm Satellite Trio and Related Spacecraft for Exploring Earth’s Magnetic Field and Its Environment

Authors: Dr Anja Strømme, Dr. Nils Olsen
Affiliations: ESA, DTU
Launched in November 2013, the Swarm satellite trio has provided continuous, accurate measurements of the magnetic field for more than one solar cycle. These measurements are accompanied by plasma and electric field data, precise navigation, and accelerometer observations. Over the years, the constellation has undergone various orbital configurations. These include co-rotating orbits between the side-by-side flying satellites Swarm Alpha + Charlie and Swarm Bravo in 2014, “orthogonal orbital planes” (6 hours difference in Local Time) in 2017, counter-rotating orbits (12 hours difference) in 2021, and the current configuration (June 2025) with a 6-hour difference. These different configurations enable investigations into various aspects of Earth’s magnetic field and geospace, from small-scale to large-scale, covering both solar minimum and maximum conditions. In addition to providing simultaneous measurements of the geomagnetic field from different locations in space, the highly accurate absolute Swarm magnetic data allow for the calibration of data from navigational magnetometers onboard satellites like Cryosat-2, GRACE, GOCE, and GRACE-FO. This further enhances the space-time sampling of magnetic data provided by LEO satellites, though (due to the reduced absolute accuracy of these additional data) mainly for investigations of ionospheric and magnetospheric sources. Since May 2023, the polar-orbiting Swarm satellites have been augmented with the low-inclination MSS-1 (Macau Science Satellite 1), significantly extending the coverage in space and time. Additionally, the NanoMagsat constellation, consisting of one near-polar and two low-inclination satellites, is in the pipeline as an ESA Scout mission for launch within the next few years.
In this presentation, we will report on the status and future plans for the Swarm mission, including opportunities for cross-mission data calibration and validation, joint analysis of multi-spacecraft data, and plans for the upcoming years when low-altitude data will also allow for improved characterization of the lithospheric magnetic field.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: Supporting open science with VirES and SwarmPAL

Authors: Ashley Smith
Affiliations: University Of Edinburgh
As the Swarm mission enters its second decade of operations, we face increasing complexity and ambition. This comes from different directions: serving a greater number of higher-level data products; more software tools and on-demand processing; growing importance of system and complexity science; and more spacecraft - both in the form of utilising platform magnetometers on other LEO missions, and synergy with related new missions such as the Macau Science Satellites and the NanoMagSat ESA Scout mission. The coordinated development of open-source software is critical to tackling these challenges in a sustainable and collaborative manner. ESA has supported the development of the VirES system to aid in disseminating Swarm products (see related presentation "VirES: Data and model access for the Swarm mission and beyond"). The service has been instrumental in making Swarm more accessible by providing unified interfaces to the complex product portfolio. As well as data access, VirES can provide auxiliary information such as magnetic coordinates computed on demand, as well as forward evaluation of geomagnetic models. Such calculations can be performed on the server, with the implementation details hidden from the user. For more in-depth processing, such as deriving higher-level products, more flexibility is needed, and such processing is often more appropriate to perform on the client side. To this end, we are developing a Python package, SwarmPAL: the Swarm Product Algorithm Laboratory, as a home for higher-level analysis code. SwarmPAL enables algorithms to be applied to data from VirES or from any HAPI server, and includes simple visualisations for quick views of input and output data. We are approaching data access and analysis across multiple layers - web-based GUIs, APIs, Python libraries, Jupyter notebooks - backed by infrastructure including a free-to-use JupyterHub.
These tools are developed openly and foster collaboration between scientists and software engineers, which is essential to enabling more open science.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: Lessons Learnt From Building a DGRF 2020 Candidate Model (and Parent 2013-2024 Model) Entirely Based on Swarm ASM Experimental Vector Mode Data

Authors: Gauthier Hulot, Louis Chauvet, Robin Deborde, Jean-Michel Léger, Thomas Jager
Affiliations: Université Paris Cité, Institut De Physique Du Globe De Paris, CNRS, CEA-Leti, Université Grenoble Alpes, MINATEC
ESA Swarm satellites carry a magnetometry payload consisting of an absolute scalar magnetometer (ASM), a relative fluxgate vector magnetometer (VFM), and a set of star trackers (STR). The primary role of the ASM is to provide precise 1 Hz absolute field intensity measurements, while the VFM and STR provide the additional data needed to accurately reconstruct the vector field and its orientation. This magnetometry payload has provided a remarkable set of nominal vector data, which has been used extensively for multiple investigations. Each ASM instrument, however, can also produce its own self-calibrated 1 Hz experimental vector data, or, when requested, 250 Hz burst-mode scalar data. Self-calibrated 1 Hz experimental vector data have been produced routinely ever since launch and are still produced whenever the ASM instruments are not in burst mode. The availability of such an alternative source of calibrated magnetic vector data on board the Swarm satellites provides a unique opportunity to validate the nominal data of the mission, either by directly comparing VFM-based nominal vector data with ASM experimental vector mode data or by building “twin” field models that can next also be compared. Here we report on the lessons learnt from such intercomparisons, which we carried out in the process of building a DGRF 2020 candidate model in response to the IGRF 2025 call for candidate models. These comparisons revealed slight disagreements between both data sets even after correcting for the already well-known Sun-related thermoelectric effect that affects both instruments. This slight disagreement cannot be attributed to a similar effect and is best explained in terms of a subtle calibration issue affecting both instruments in opposite ways. We designed an empirical approach to independently “post-calibrate” each data set, and showed that once “post-calibrated”, both data sets are in significantly better agreement.
This strategy was implemented to build a parent field model entirely based on “post-calibrated” ASM experimental vector mode data, which we used to propose our DGRF 2020 candidate model. This candidate model turns out to be in striking agreement with the recently released official DGRF 2020 model. This agreement is all the more remarkable given that the many other models that went into the making of this official DGRF model made use of different data sets, either nominal Swarm vector field data or data from other satellites and ground observatories. This suggests that calibration strategies currently used on board missions such as Swarm could possibly still be improved to produce data sets of even better quality.
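The idea of post-calibration, i.e. mapping one data set onto a reference via an empirically fitted correction, can be illustrated with a minimal least-squares scale-and-offset fit. The correction form and the synthetic numbers below are assumptions for illustration only, not the authors' actual procedure.

```python
import numpy as np

def fit_scale_offset(x, y):
    """Least-squares fit of y ~ s * x + o, one simple possible form of an
    empirical 'post-calibration' mapping measurements x onto reference y."""
    A = np.vstack([x, np.ones_like(x)]).T
    (s, o), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s, o

# Synthetic scalar field series with a subtle scale error and a small bias
truth = np.linspace(45000.0, 52000.0, 100)   # field intensities in nT (illustrative)
measured = 1.0004 * truth - 3.0              # hypothetical miscalibrated instrument
s, o = fit_scale_offset(measured, truth)
recal = s * measured + o                     # post-calibrated series
```

In practice, such a correction would be estimated independently for each data set (and each axis of a vector instrument), and only then would the two post-calibrated data sets be compared.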
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: Ocean-induced magnetic field: Swarm data processing and field modelling experiments

Authors: Chris Finlay, C Kloss, R.M. Blangsbøll, N. Olsen, J. Velímský, O. Kureš, V. Ucekajová
Affiliations: DTU Space, Charles University Prague
The motions of the ocean through Earth's core-generated magnetic field produce electrical currents that depend on the details of the ocean flow as well as on the ocean's temperature and salinity, via its electrical conductivity. Low-Earth orbit magnetic survey satellites such as the Swarm trio record magnetic fields resulting from the integrated effects of such motionally induced currents and their closure in the electrically conducting solid Earth. These ocean-induced magnetic fields (OIMF) thus carry remote information on ocean flow dynamics, temperature and salinity. OIMF signals due to a number of ocean tidal components have now been convincingly extracted, but detection of the OIMF signal due to the more general ocean circulation has remained elusive. In this contribution we present ongoing efforts in the Swarm for Ocean Dynamics project to detect the OIMF signal using observations made by the Swarm satellites. This involves (i) a scheme to correct as well as possible for other geomagnetic signals (from the core, crust, ionosphere and magnetosphere), (ii) time-dependent field modelling, with a focus on spherical harmonic degrees 15 to 30 and periods of 60 days up to 5 years, and model regularization designed for studies of the OIMF, and (iii) post-processing filtering to highlight the OIMF signal. We will present results of experiments with both synthetic satellite data and real Swarm observations. A particular focus will be regions such as the Indian Ocean where strong OIMF signals are expected.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: Large-scale ionosphere and magnetospheric currents during the May 2024 storm obtained from assimilation of magnetic ground and multi-satellite data

Authors: Dr. Alexander Grayver, Jingtao Min, Nils Olsen, Federico Munch, Ashley Smith
Affiliations: University Of Cologne, ETH Zurich, DTU Space, University of Edinburgh
We present a high-cadence (20 min) model of mid-latitude ionospheric and magnetospheric currents obtained using a novel geomagnetic modelling method based on the variational assimilation principle. We use both ground and multi-satellite magnetic data from the Swarm, CryoSat-2, GRACE-FO and Macau Science Satellite (MSS) missions to enable consistent separation of ionosphere and magnetosphere sources with an unprecedented space-time resolution. The data are fit to a set of spatial basis functions that represent solutions of the governing Maxwell's equations for electric currents in the ionosphere and magnetosphere, parameterized with Spherical Harmonic modes. Using a prior 3-D subsurface conductivity model allows for self-consistent co-modelling of the secondary, internal magnetic field components induced in the 3-D solid Earth and oceans. The resulting framework enables the retrieval of magnetic and electric fields across the model domain, facilitating analysis of ionosphere-magnetosphere interactions during all phases of the May 2024 storm, and supports the modelling of ground electric fields associated with space weather hazards.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Session: A.05.05 Tipping points and abrupt change in the Earth system

There are elements of the Earth system, including ecosystems, that can undergo rapid transition and reorganisation in response to small changes in forcings. This process is commonly known as crossing a tipping point. Such transitions may be abrupt and irreversible, and some could feed back to climate change, representing an uncertainty in projections of global warming. Their potentially severe outcomes at local scales - such as unprecedented weather, ecosystem loss, extreme temperatures and increased frequency of droughts and fires - may be particularly challenging for humans and other species to adapt to, worsening the risk that climate change poses. Combining satellite-based Earth Observation (EO) datasets with numerical model simulations is a promising avenue of research to investigate tipping elements, and a growing number of studies have applied tipping point theory to satellite time series to explore the changing resilience of tipping systems in the biosphere as an early warning indicator of an approaching tipping point. This session invites abstracts on tipping points and resilience studies based on or incorporating EO, as well as recommendations from modelling groups that can be taken up by the remote sensing community, for example on early warning signals, products needed for model assimilation or novel tipping systems to investigate further using EO.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Shifting Dynamics: Decoupling of Carbon and Water Cycles in the Amazon Rainforest

Authors: Sarah Worden, Dr. Sassan Saatchi, Dr. Nima Madani, Dr. Yan Yang
Affiliations: NASA Jet Propulsion Laboratory/ California Institute of Technology, UCLA / JIFRESSE, Ctrees
To survive, plants must employ a range of strategies under different conditions to balance carbon uptake for photosynthesis with water loss via transpiration. Across forest stands, these processes—represented by gross primary productivity (GPP) and evapotranspiration (ET), respectively—are key ecosystem fluxes that determine vegetation water use efficiency. These fluxes are generally assumed to be tightly coupled across all vegetation types as plants exchange water for carbon via stomatal (small pores at the leaf surface) conductance, and models often explicitly incorporate this coupling. Changes in these fluxes significantly affect the terrestrial biosphere's capacity to respond to and to influence future regional and global climate change. Here, we evaluate 40 years of the relationship between GPP and ET across the Amazon Basin, representing the carbon and water fluxes associated with photosynthesis and transpiration respectively. We show that 64% of the Amazon Basin exhibits weak (R<0.3) or negative (R<0) correlations between GPP and ET. We verify these results using direct satellite measurements of photosynthesis from solar-induced fluorescence (SIF) and ET calculated using water-balance measurements, as well as using flux tower GEP and ET from the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) flux tower sites. To further refine our analysis, we examine GPP and ET correlations across regions categorized by average maximum cumulative water deficit (WD) levels. The areas with the weakest WD (i.e., the most water) display the weakest (or even negative) correlations, while the areas with the strongest WD display the strongest correlations. When analyzing the seasonal behavior of GPP and ET across these WD categories, we additionally find that GPP and ET are strongly coupled seasonally in the strong WD region.
Additionally, differences in the timing of seasonal increases (and decreases) in GPP and ET drive the anti-correlations seen within the weaker WD regions. This is primarily due to larger seasonal variability in ET, as GPP does not show large seasonal variability within the weaker WD regions. Finally, we demonstrate that GPP and ET coupling has weakened over time, with the most pronounced changes in the southwestern Amazon. This region has experienced long-term increases in VPD, severe droughts and increasing vulnerability over recent decades. Such changes in coupling strength may signal a decline in ecosystem resilience under pressure from climate and land use changes. Understanding the relative contributions of photosynthesis, transpiration, and evaporation to this decoupling, along with distinguishing climate and anthropogenic drivers, is essential to better assess shifts in the dynamics of the Amazon carbon-climate system.
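The per-pixel coupling classification implied by the thresholds quoted above (R < 0: negative, R < 0.3: weak) could be sketched as follows. The function name and the toy monthly series are hypothetical; only the thresholds come from the abstract.

```python
import numpy as np

def coupling_class(gpp, et):
    """Classify GPP-ET coupling for one pixel from its time series,
    using the thresholds quoted in the abstract:
    R < 0 -> negative, R < 0.3 -> weak, otherwise -> coupled."""
    r = np.corrcoef(gpp, et)[0, 1]
    if r < 0:
        return r, "negative"
    if r < 0.3:
        return r, "weak"
    return r, "coupled"

# Illustrative monthly series (hypothetical values, not Amazon data)
gpp = np.array([2.1, 2.3, 2.6, 2.8, 2.7, 2.4])
et_coupled = gpp * 30.0 + 5.0        # tracks GPP -> strong positive R
et_anti = -gpp + 10.0                # opposes GPP -> negative R
r1, label1 = coupling_class(gpp, et_coupled)
r2, label2 = coupling_class(gpp, et_anti)
```

Applied pixel-wise over a basin, the fraction of "weak" plus "negative" pixels is the kind of statistic behind the 64% figure quoted in the abstract.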
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Suitability of Remotely Sensed Vegetation Indicators for CSD-based Resilience Analyses of Tropical Forests

Authors: Lana Blaschke, Sebastian Bathiany, Marina Hirota, Niklas Boers
Affiliations: Technical University Of Munich, Potsdam Institute for Climate Impact Research, Universidade Federal de Santa Catarina
Tropical forests are vital for climate change mitigation as carbon sinks. Yet, research suggests that climate change, deforestation and other human influences threaten these systems, potentially pushing them across a tipping point beyond which the tropical vegetation might collapse into a low-tree-cover state. Signs of this trend are reductions in resilience, defined as the system's capability to recover from perturbations. If resilience decreases, dynamical systems theory implies that critical slowing down (CSD) induces changes in statistical properties such as variance and autocorrelation. This allows resilience changes to be examined indirectly, in the absence of observations of strong perturbations. Yet, deriving estimates of resilience changes based on CSD imposes several assumptions on the system under observation. For tropical vegetation, it is not obvious that these assumptions are fulfilled. Moreover, the conditions of tropical rainforests make observing the vegetation difficult: cloud cover, aerosols, and the dense vegetation hinder the reliable retrieval of vegetation indicators, especially from data gathered in the optical spectrum. This implies that the data might not be suitable for CSD-based resilience analyses, and that theoretical estimators of resilience might not align with actual recovery rates. We investigate the different assumptions of CSD and test them on a diverse set of remotely sensed vegetation indicators. Thereby, we establish a framework for selecting ideal combinations of theoretical estimators and vegetation indicators. Based on this selection, we assess the resilience change of tropical forests in recent years.
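The two classical CSD indicators mentioned above, variance and lag-1 autocorrelation, are straightforward to compute over a sliding window. The sketch below applies them to a synthetic AR(1) series whose memory increases over time, mimicking a system losing resilience; all parameters are illustrative.

```python
import numpy as np

def csd_indicators(x, window):
    """Rolling variance and lag-1 autocorrelation, the two classical
    critical-slowing-down (CSD) indicators. Both tend to rise as a
    system loses resilience and approaches a tipping point."""
    var, ac1 = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        var.append(np.var(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# Synthetic AR(1) series: the lag-1 coefficient drifts upward, so the
# series recovers from noise ever more slowly (declining "resilience")
rng = np.random.default_rng(0)
n = 2000
phi = np.linspace(0.2, 0.95, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()
var, ac1 = csd_indicators(x, window=400)   # both trend upward
```

In real applications, the series is first detrended and deseasonalized, and significance of the indicator trends is assessed (e.g. with Kendall's tau against surrogates), steps omitted here for brevity.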
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Tipping points in tidal wetland vulnerability: A multi-sensor, multi-scale forecasting approach

Authors: Rusty Feagin, Raymond Najjar, Wenzhe Jiao, Maria Herrmann, Joshua Lerner
Affiliations: Texas A&M University, Pennsylvania State University
Tidal wetlands are highly productive ecosystems and play disproportionately large roles in coastal biology and biogeochemical cycling relative to their small areas. However, they are also vulnerable to episodic disturbances and can rapidly transition from a terrestrial to an aquatic state. While site-specific studies can identify the reasons for vegetative productivity loss, no existing methods can predict when a tidal wetland will reach a tipping point in time, or how that tipping point will then propagate across broader spatial scales. We are using a remote sensing, multi-scale approach to (1) detect tipping points as tidal wetlands rapidly transition from a terrestrial to an aquatic state, (2) detect the micro-tipping points that accumulate to precipitate a broader-scaled transition, and (3) forecast future tipping points in tidal wetland vulnerability before they happen. We are addressing these objectives across all tidal wetlands in the conterminous United States (CONUS) over the period 2000–2025. Our approach detects and identifies tipping points in several off-the-shelf remote sensing products using a novel synthesis of Early Warning Signals (EWS) analysis and traditional ecosystem resilience metrics. We are using a gross primary productivity (GPP) dataset from the publicly available Oak Ridge National Laboratory Distributed Active Archive Center for Biogeochemical Dynamics (DAAC) and the Harmonized Landsat and Sentinel-2 (HLS) dataset. We are using the GPP dataset to identify historical tipping points across the CONUS at 250 m and 16-day resolution, and then exploring them in greater spatial and temporal detail using the HLS dataset (10–30 m and 2–3 day resolution). Using the knowledge gleaned from this work, we are then predicting future tipping points. With these datasets, we are also testing several hypotheses about tipping points.
We have hypothesized that we can predict the timing of an approaching tipping point when (H1) the frequency of perturbations to tidal wetland productivity increases, (H2) the effect size (magnitude) of the productivity response increases, (H3) the return time for productivity recovery to an equilibrium condition increases, and (H4) the return time for productivity recovery, relative to the effect size, increases. We have additionally hypothesized that (H5) a sudden drop in vegetative cover and productivity at finer scales warns of a potential micro-tipping point origin, and (H6) an increasing spatial variance warns of cascading micro-tipping points that accumulate into coarser-scaled transitions. Our preliminary results suggest that several of these hypotheses are valid, but that others can be rejected. Moreover, while scientists often think about tipping points as causing ecosystem loss, we have found that change can also occur in a positive direction. In the case of our work, we have detected both tipping points that lead towards decreasing productivity and wetland loss, as well as tipping points that lead towards increasing productivity and wetland gain. The tipping points that we have found correlate reasonably well with disturbance frequency, but also with changes in several long-term meteorological trends. Our maps and analyses show spatial and temporal variability in tipping points across the CONUS, but also how micro-tipping points cascade to initiate broader scale change in tidal wetland cover. This NASA-supported work helps extend our current understanding of using remote sensing-based tipping point analyses in mixed aquatic and terrestrial ecosystems. This work also helps scientists to better link wetland carbon losses with broader-scale implications for ocean biogeochemistry.
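Hypotheses H2 and H3 above suggest two simple per-event metrics: the effect size of a perturbation and the return time to equilibrium. A minimal sketch follows, with a hypothetical GPP trace and a recovery tolerance chosen purely for illustration (the study's actual EWS metrics are not reproduced here).

```python
import numpy as np

def effect_size_and_return_time(series, baseline, event_idx, tol=0.05):
    """Effect size (drop below baseline at the perturbation) and return
    time (steps until the series is back within tol * baseline of the
    baseline), rough analogues of hypotheses H2 and H3."""
    effect = baseline - series[event_idx]
    for k, v in enumerate(series[event_idx:]):
        if abs(v - baseline) <= tol * baseline:
            return effect, k
    return effect, None   # never recovered within the record

# Hypothetical GPP trace: equilibrium at 1.0, disturbance at step 5,
# then exponential recovery back toward equilibrium
t = np.arange(30)
gpp = 1.0 - 0.5 * np.exp(-0.3 * np.maximum(t - 5, 0))
gpp[:5] = 1.0
effect, rt = effect_size_and_return_time(gpp, baseline=1.0, event_idx=5)
```

Under H2-H4, a wetland approaching a tipping point would show such effect sizes growing, return times lengthening, and return time growing relative to effect size across successive perturbations.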
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Tipping Points in Southern Ocean Overturning

Authors: Rafael Catany, PhD Alessandro Silvano, Professor Hugues Goose, Professor Alberto Naveira Garabato, PhD Sarah Connors
Affiliations: Albavalor, University of Southampton, Université catholique de Louvain, ESA ECSAT
The Southern Ocean Overturning Circulation (SOOC) is a critical component of Earth's climate system, regulating oceanic heat and carbon uptake over decadal to millennial timescales and influencing global sea level rise. It consists of two main overturning cells: the upper and lower cells. In the upper branch, Subantarctic Mode Water (SAMW) and Antarctic Intermediate Water (AAIW) are subducted into intermediate depths (500–2000 m), contributing to the uptake and storage of over 70% of anthropogenic heat and 40% of anthropogenic carbon from the atmosphere. This process regulates atmospheric CO2 over decadal to multidecadal timescales. Antarctic Bottom Water (AABW) forms near Antarctica and replenishes the abyssal layers of the ocean, allowing for long-term carbon storage. This mechanism stabilises Earth's climate and protects Antarctic glaciers and ice sheets from warm ocean waters, helping to reduce mass loss in most regions. A slowdown or collapse of the SOOC represents a tipping point with far-reaching consequences, including accelerated global warming, sea level rise from increased Antarctic Ice Sheet melting, and ecosystem loss due to reduced oxygen and nutrient supplies to the abyssal ocean. Despite its critical importance, understanding of the SOOC and its tipping points remains limited due to the challenges of observing subsurface properties beneath sea ice and the sparsity of in situ data in the remote Southern Ocean. Traditional models often have biases in accurately simulating the dynamics of Antarctic sea ice and AABW formation. This limitation makes it harder to identify early warning signals of potential tipping points. This presentation introduces the Tipping Points in Southern Ocean Overturning (TiPSOO) project. TiPSOO addresses these challenges by employing advanced Earth Observation (EO) and modelling approaches to study the dynamics and vulnerabilities of the SOOC.
TiPSOO leverages satellite data from the ESA Climate Change Initiative (CCI)—including temperature, salinity, sea ice, altimetry, and GRACE-derived mass changes—to detect critical variations in sea surface height and density. By integrating these data with idealised modelling experiments, TiPSOO seeks to identify early warning signals and collapse fingerprints associated with AABW formation and SOOC disruptions. The main objectives of TiPSOO are to evaluate how sea ice dynamics and freshwater fluxes affect the formation of AABW and to enhance our scientific understanding of tipping points in the Southern Ocean. The project will also conduct feasibility studies using EO data to monitor changes in the SOOC. Additionally, TiPSOO will demonstrate innovative EO methods for detecting tipping points in the SOOC and analysing changes in AABW formation over the past two decades. The findings from the TiPSOO project will significantly enhance our scientific understanding of tipping points in the Southern Ocean, provide critical data and information for climate policymakers, and strengthen confidence in IPCC assessments. By identifying and quantifying the risks associated with a slowdown or collapse of the SOOC, TiPSOO aims to improve resilience in climate systems and support informed decision-making. The project's multidisciplinary approach, which combines satellite data, modelling expertise, and collaborative partnerships, ensures robust and actionable insights into one of the most pressing challenges in climate science.
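The abstract does not specify which early warning indicators TiPSOO will use, but a standard approach in tipping-point research is to track rising variance and lag-1 autocorrelation ("critical slowing down") of a state variable within a rolling window. The sketch below shows only that generic technique on a synthetic time series; it is not the project's method:

```python
import numpy as np

def rolling_ews(series, window):
    """Rolling-window variance and lag-1 autocorrelation, two
    standard early-warning indicators of an approaching tipping point."""
    n = len(series)
    var = np.full(n, np.nan)
    ac1 = np.full(n, np.nan)
    for i in range(window, n + 1):
        w = series[i - window:i]
        w = w - w.mean()
        var[i - 1] = w.var()
        denom = (w[:-1] ** 2).sum()
        if denom > 0:
            ac1[i - 1] = (w[:-1] * w[1:]).sum() / denom
    return var, ac1

# Synthetic example: AR(1) noise whose persistence grows over time,
# mimicking critical slowing down before a transition.
rng = np.random.default_rng(0)
n = 600
phi = np.linspace(0.1, 0.9, n)   # slowly increasing AR(1) coefficient
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

var, ac1 = rolling_ews(x, window=100)
# Both indicators trend upward as the system "slows down".
```

In practice the input would be a deseasonalised, detrended observable (e.g. a density or sea-surface-height anomaly), and a significance test on the indicator trends would follow.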
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: An Early Warning System for Tipping Points in the Greenland Ice Sheet and the North Atlantic Subpolar Gyre: Exploring the Edge of the Possible with AEROSTATS

Authors: Christine Gommenginger, David McCann, Adrien Martin, José Marquez Martinez, Samantha Lavender, Christian Buckingham, Alice Marzocchi, Louis Clément, Simon Josey
Affiliations: National Oceanography Centre, NOVELTIS, Radarmetrics, Pixalytics
The climate system is approaching dangerous tipping points, with the predicted collapse within decades of critical components like the Greenland Ice Sheet and the North Atlantic Subpolar Gyre posing severe risks to European weather and global climate stability. Despite the pressing need for early warning systems and robust predictive models, significant gaps persist between current observational capabilities and the data required to enhance climate forecasting. This disconnect hampers efforts to build confidence in climate predictions and implement effective mitigation and adaptation strategies. Earth-orbiting satellites and in situ observations provide valuable insights into broad-scale changes in the ocean, cryosphere and atmosphere. However, these systems often struggle to capture extreme events and small-scale processes in complex, dynamic regions such as sea ice margins. These regions play a crucial role in governing water, heat and momentum exchanges at the ocean-cryosphere-atmosphere interfaces that connect the Greenland Ice Sheet and the North Atlantic Subpolar Gyre. This paper introduces AEROSTATS (Aerial Experimental Remote sensing of Ocean Salinity, heaT, Advection, and Thermohaline Shifts), a UK-led, innovation-driven international initiative to demonstrate long-term, low-cost, low-carbon monitoring in the dynamic Greenland ocean-ice margins. Funded as a high-risk, forward-thinking project, AEROSTATS leverages autonomous platforms, airborne systems, spaceborne sensors, and high-resolution models and reanalyses to address critical observational challenges in this region. Central to the initiative is a groundbreaking field campaign in 2028, featuring an extensive deployment of autonomous sensors to provide year-round observations of total surface current vectors, winds, salinity, ocean colour, and sea surface temperature—key variables governing exchanges in these critical regions. 
By integrating multi-platform observations with high-resolution models, reanalysis data, and advanced digital tools like machine learning, AEROSTATS represents a major step forward in improving our understanding of, and predictive capability for, tipping points. Starting in 2025, the five-year project is actively seeking collaborations that can amplify its impact, for example through coincident deployments of airborne demonstrators, High-Altitude Pseudo-Satellites, and autonomous aerial, surface or subsurface vehicles. AEROSTATS represents a transformative step in developing Earth Observation data to advance climate system understanding and build the robust monitoring systems needed to confidently forecast and mitigate the impacts of catastrophic climate tipping points.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Earth Observations Reveal Mixing Anomalies and Regime Shifts in Dimictic Lakes

Authors: Elisa Calamita, Michael Brechbühler, Iestyn Woolway, Dr. Clément Albergel, Laura Carrea, Daniel Odermatt
Affiliations: Eawag, Swiss Federal Institute of Aquatic Science and Technology, University of Tübingen, Bangor University, European Space Agency Climate Office, University of Reading
Climate change significantly impacts lake ecosystems, driving responses that range from gradual adaptations to abrupt shifts in ecological states. These transitions, often triggered when lakes cross critical tipping points, can lead to profound modifications of established dynamics, cascading through ecosystem processes and affecting the services these systems provide to human well-being. Such shifts in lake behaviour can disrupt biodiversity, nutrient cycling, and water quality, with far-reaching implications for ecological stability and societal reliance on these systems. Despite their critical importance, a comprehensive understanding of climate-induced lake shifts remains limited, largely due to a lack of systematic global data. To address this gap, we conducted a literature review focusing on climate-related lake shifts and explored the contributions of satellite Earth Observation (EO) in this research domain. Our analysis revealed that only 9% of studies on lake shifts utilize EO data, though its application has grown since 2012. EO data is most commonly used to assess shifts in surface extent, ice coverage, or phytoplankton phenology. Beyond direct observations, EO data can also provide indirect insights into processes such as the vertical mixing of lake water, which can be inferred from surface thermal patterns. To demonstrate this, we used EO data alone to detect and study mixing regime shifts. Mixing regimes regulate nutrient distribution, energy flow, and oxygen levels in lakes. Specifically, we investigated dimictic lakes, which typically stratify during summer and exhibit inverse stratification in winter. Under warming conditions, these lakes are increasingly at risk of shifting to a monomictic regime, where winter stratification fails and fall mixing continues until spring. Such regime shifts can disrupt nutrient cycling and oxygen dynamics, with severe ecological consequences.
To track mixing anomalies, we utilized satellite-derived lake surface water temperatures and a thermal front tracking method to identify patterns indicative of failed winter stratification. By analyzing global EO data from 2000 to 2022, we present the first comprehensive assessment of mixing anomalies in dimictic lakes. Our results demonstrate that spatial gradients in EO data are effective for detecting these anomalies on a global scale. Moreover, we found that lakes that exhibit higher frequencies of mixing anomalies are more susceptible to regime shifts under ongoing climate warming. Our findings highlight the potential of EO as a tool for early detection and monitoring of lake ecosystem shifts. Although EO data lacks intrinsic predictive capabilities, its ability to identify lakes prone to mixing regime shifts underscores its utility as an early warning system. We propose a susceptibility index based on the statistics of the winter stratification length over the past two decades, showing a positive correlation between the number of mixing anomalies and the likelihood of future regime shifts. By identifying lakes experiencing mixing anomalies, EO data can play a pivotal role in monitoring ecosystem stability and anticipating the impacts of climate change on global lake systems. This approach offers a pathway to enhance adaptive management and conservation efforts for freshwater ecosystems.
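As an illustration of such a susceptibility index (the function name, the 30-day anomaly cut-off, and the equal weighting below are assumptions for demonstration, not the authors' formulation), a score could be built from yearly winter stratification lengths like this:

```python
import numpy as np

def susceptibility_index(winter_strat_days, anomaly_threshold=30):
    """Toy susceptibility index for a dimictic lake.

    winter_strat_days : yearly winter stratification lengths (days/year)
    A year counts as a mixing anomaly when winter stratification is
    shorter than `anomaly_threshold` days (assumed cut-off).  The index
    combines anomaly frequency with the downward trend in stratification
    length, each scaled to [0, 1] and weighted equally."""
    y = np.asarray(winter_strat_days, dtype=float)
    years = np.arange(len(y))
    anomaly_freq = np.mean(y < anomaly_threshold)
    slope = np.polyfit(years, y, 1)[0]              # days/year trend
    trend_term = min(max(-slope / 5.0, 0.0), 1.0)   # a 5 d/yr decline maps to 1
    return 0.5 * anomaly_freq + 0.5 * trend_term

# A lake losing its winter stratification over ~two decades (synthetic):
record = np.linspace(80, 10, 23) + np.random.default_rng(1).normal(0, 5, 23)
idx = susceptibility_index(record)
```

A lake with stable, long winter stratification would score near zero, while a lake with frequent anomalies and a steep decline approaches one.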
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Session: A.05.01 Using earth observation to assess climate change in cities

The Intergovernmental Panel for Climate Change (IPCC) Sixth Assessment report concluded that "Evidence from urban and rural settlements is unequivocal; climate impacts are felt disproportionately in urban communities, with the most economically and socially marginalised being most affected (high confidence)." (IPCC, WG2, Chapter 6)

In its Seventh Assessment Cycle, the IPCC will produce a Special Report on Climate Change and Cities to further develop the role of climate and its interactions with the urban environment. The report will cover topics that include:
- Biophysical climate changes;
- Impacts and risks, including losses and damages and compounding and cascading aspects;
- Sectoral development, adaptation, mitigation and responses to losses and damages;
- Energy and emissions;
- Governance, policy, institutions, planning and finance; and
- Civil society aspects.

This session calls for abstracts demonstrating how Earth Observation is being used to understand how climate change is impacting cities, and how EO can be used to adapt to and mitigate further climate change at the city scale. Abstracts should explicitly link the use of EO data to an assessment of its usefulness for fine-scale urban information.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Analysis of Local Climate Zones and the Urban Heat Island through Geomatic Techniques: the Italy - Vietnam LCZ-UHI-GEO project

Authors: Prof Maria Antonia Brovelli, Mr. Matej Žgela, Alberto Vavassori, Deodato Tapete, Dr. Patrizia Sacco, Dr. Thy Pham Thi Mai, Dr. Nguyen Lam Dao
Affiliations: Politecnico di Milano, Agenzia Spaziale Italiana, Vietnam National Space Center
The localised temperature increase in urban areas compared to the surrounding rural or natural environments is known as the Urban Heat Island (UHI) phenomenon. The spatial and temporal distribution of temperatures varies within cities depending on many factors, including the morphology of built-up areas, the construction materials, and the presence and distribution of vegetation. The Local Climate Zone (LCZ) concept is a well-established system that classifies urban and suburban areas based on their physical and thermal characteristics. LCZ maps are commonly generated by processing multispectral satellite images, integrated with morphological information on the built environment and vegetation. Recently, the results from the project “Local Climate Zones in Open Data Cube” (LCZ-ODC), a collaboration between the Italian Space Agency (ASI) and Politecnico di Milano (POLIMI) – Agreement n. 2022-30-HH.0 – in the framework of ASI’s program “Innovation for Downstream Preparation for Science (I4DP_SCIENCE)”, have demonstrated that the use of hyperspectral images from ASI’s PRISMA mission can significantly improve the accuracy of LCZ maps compared to traditional multispectral Sentinel-2 images. Within such a project, a methodological procedure was proposed and implemented for generating LCZ maps using PRISMA and Sentinel-2 satellite images, integrated with multiple geospatial layers known as Urban Canopy Parameters (UCP), which describe the morphological characteristics of urban surfaces. This procedure was tested in the Metropolitan City of Milan, Italy, showing better performance in terms of map accuracy than the state-of-the-art LCZ Generator tool. Air temperature distribution and differences among the LCZs were also assessed through statistical tests, enabling the quantification of the maximum UHI intensity across different times of the day and seasons. 
Building upon the results of the LCZ-ODC project, the project “Analysis of Local Climate Zones and the Urban Heat Island through Geomatic Techniques” (LCZ-UHI-GEO) began in 2024 and is expected to conclude in 2026. The project is funded by the Italian Ministry of Foreign Affairs and International Cooperation (MAECI) and Vietnam’s Department of International Cooperation of the Ministry of Science and Technology (MOST). The project involves POLIMI, ASI, and the Vietnam National Space Center (VNSC). LCZ-UHI-GEO aims to replicate and expand the methodologies developed in LCZ-ODC and to test them comparatively in Italian and Vietnamese cities. This bilateral collaboration will facilitate the study of diverse urban climatic contexts and, by leveraging the integration of different expertise, will advance research into the correlation between LCZ maps and air temperature maps, generated from in-situ and satellite data. Regarding the study areas, the project focuses on two Italian cities, i.e., Milan and Rome, and two Vietnamese cities, i.e., Hanoi and Ho Chi Minh City, to test the scalability of the procedure in urban areas with a significantly different structure and extent, population density, terrain morphology, and background climate. For the LCZ classification, multi-temporal PRISMA images acquired specifically for the LCZ-UHI-GEO project will be used to map seasonal variations across the test areas. The acquisition plan is ongoing for all four cities, aiming to guarantee at least one usable image per season, weather permitting. Corresponding Sentinel-2 images will also be used for co-registration and comparative analysis. Multiple open geospatial data will also be used to calculate the UCPs. The geospatial data used within the LCZ-ODC project can be exploited for the Italian cities. For the Vietnamese cities, global datasets (e.g. JRC Global Human Settlement Layer Building Height) or, where available, more detailed local data will be used. 
Air temperature analysis will rely on official sources, such as regional agencies (e.g., ARPA Lombardia and ARPA Lazio for the Italian case studies), as well as crowdsourced air temperature observations (e.g., Netatmo). The use of crowdsourced data is meant to increase the spatial coverage of the observations, allowing us to compute and validate continuous air temperature maps. Additionally, the project fosters international collaboration between the Italian and Vietnamese research teams by sharing knowledge and technical skills. In this context, LCZ-UHI-GEO aims to raise awareness of the UHI problem through workshops and training events, providing stakeholders with useful tools to implement mitigation strategies within, e.g., urban master plans and renewal projects. This activity has already started: the first workshops have been held in Vietnam with representatives of public institutions and administrations, in order to identify user requirements to account for during the generation of LCZ maps and to understand how the LCZ-UHI-GEO products may contribute to decision-making towards improved urban resilience to UHI.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Using Downscaled Geostationary Land Surface Temperature for a High Spatio-temporal Approach to Study Surface Urban Heat Islands

Authors: Alexandra Hurduc, Dr. Sofia Ermida, Dr. Carlos DaCamara
Affiliations: Instituto Português do Mar e da Atmosfera (IPMA), Instituto Dom Luiz, Faculdade de Ciências, Universidade de Lisboa
As urbanization has transformed surface cover materials and cities have emerged and expanded, their influence on the environment has also intensified. Thermal remote sensing is often used to evaluate land surface temperature (LST) at the most varied temporal and spatial scales. Although the uses of remotely sensed LST observations have been countless, their utility depends on the characteristics of the sensor and whether these are adequate to represent the surface. Geostationary sensors provide sufficient observations throughout the day for a diurnal analysis of temperature but lack the spatial resolution needed for highly heterogeneous areas such as cities. Polar-orbiting sensors have the advantage of a higher spatial resolution, enabling a better characterization of the surface, while only providing one to two observations per day. A multi-layer perceptron (MLP) based method is used to downscale geostationary-derived LST using a polar-orbiting-derived product as reference. The MLP is a classical neural network architecture and was trained on a pixel-by-pixel basis. The rationale behind this choice relates to the complexity of the relationships between surface variables over large areas. With a single model, the parameters are trained to best represent those relationships, finding the best compromise between model accuracy and generalization over a large area. This compromise may result in regional biases. Moreover, a large array of variables would be required to correctly represent the high complexity of the land surface (such as vegetation structure and health, surface materials and their heat capacity and emissivity, and soil water content, amongst others). Most of these variables are not readily available at the desired spatio-temporal resolution, which means that there may not be enough information in the input data to obtain good performance with a single model.
A pixel-wise model needs only to optimize training at the local scale, drastically decreasing the complexity required of the model. This approach was used to downscale SEVIRI LST for the city of Madrid, from approximately 4.5 km to 750 m. The resulting dataset was used to assess the enhancement of the surface urban heat island (SUHI) effect during heat waves (HWs) compared to normal conditions. The increased spatial and temporal resolution allows a more detailed analysis of the impact of these extreme events within the city, providing information on the more susceptible areas and the time of day when the SUHI is most intense. This information is crucial to support policymakers in developing prevention strategies to reduce the impact of HWs on the city's population.
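The per-pixel idea can be sketched as follows: a minimal single-hidden-layer MLP in plain NumPy, trained independently on one pixel's time series. The synthetic data, predictor names, and network size are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

def train_pixel_mlp(X, y, hidden=8, lr=0.1, epochs=1000, seed=0):
    """Fit a tiny one-hidden-layer (tanh) MLP to one pixel's time series.
    X : (n_times, n_features) predictors for this pixel
    y : (n_times,) target fine-resolution LST"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)              # hidden activations
        pred = h @ W2 + b2
        err = pred - y                        # gradient of 0.5 * MSE
        gW2 = h.T @ err / n
        gb2 = err.mean()
        gh = np.outer(err, W2) * (1 - h ** 2)  # backprop through tanh
        gW1 = X.T @ gh / n
        gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2

# One model per fine-resolution pixel, each trained on that pixel's own
# history of coarse LST plus auxiliary predictors (synthetic here; the
# feature names are hypothetical, e.g. coarse LST, NDVI, solar time).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=200)
model = train_pixel_mlp(X, y)
rmse = np.sqrt(np.mean((model(X) - y) ** 2))
```

Because each model only has to fit one pixel's local relationships, a very small network suffices, which is the complexity argument made in the abstract.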
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Atlantic SENSE: towards an integrated geospatial intelligence solution

Authors: Caio Fonteles, Bruno Marques, Sofia Aguiar, Dr. Ana Oliveira
Affiliations: CoLAB +Atlantic
In an era of big data acquisition—satellite, in-situ, wearables—climate change and environmental risks have become much easier to map. Domain knowledge, on the other hand, is usually supplied by the academic sector, which offers novel methodologies for hazard mapping and prediction, although translating these science-driven findings for public administration and society at large remains difficult. Hence, public policies and public domain knowledge, including the implementation and monitoring of regulatory frameworks, often lag behind the scientific state of the art. As such, citizens are left ‘in the dark’ about the environmental or climatic risks surrounding them, even though about 40% of the world’s population lives within 100 km of the coast, subject to sea level rise or exposed to other weather and climate extremes such as heatwaves and droughts. Furthermore, the pressure for further urbanisation and the efforts to preserve a rich natural capital are often at odds. Atlantic SENSE builds upon these notions, leveraging state-of-the-art scientific knowledge on data acquisition, machine learning (ML), and metocean predictions to address the key environmental and climatic challenges we face, and to become a live platform with real-time natural hazard and risk information, readily available to the community. The main objectives of the work are:
- OBJ-1: Offer an integrated geospatial information web-based tool for municipalities and citizens.
- OBJ-2: Translate geospatial and in-situ data into impact indicators on multiple climate and environmental hazards.
- OBJ-3: Ensure scalability, transparency, and affordability of the results.
Building upon the results of several projects and initiatives, such as Horizon Europe (EC), Destination Earth (ECMWF and ESA) and the EU Digital Twin Ocean (Mercator Ocean International), a proof-of-concept of the Atlantic SENSE platform has been deployed over mainland Portugal.
Furthermore, in the scope of the PRR New Space Portugal Agenda, a participatory approach with early adopters has been kick-started to ensure fitness for purpose. Several modules are already operational and being tested:
- AIR: temperature extremes health indicators, urban heat island forecast and scenarios, air quality monitoring;
- LAND: land use/land cover change monitoring, ecosystem services;
- COAST: coastal erosion monitoring, coastline evolution, sea level rise scenarios;
- OCEAN: physics and biogeochemical forecasts of ocean health indicators such as marine heatwaves.
The resulting product is a Geospatial Multi-Hazards Information System, based on data fusion between EO imagery and altimetry, in-situ measurements (including IoT and other traditional sensors) and model data, that delivers weather- and climate-related risk maps pertaining to these Earth Climate System domains, integrated into a geospatial visualization web-based tool for multi-criteria analysis, with querying options to benchmark risk profiles across neighbourhoods as well as at the municipal level. With this, we hope to bridge the gap between the international push towards the adoption of a Global Goal on Adaptation (in agreement with the Sendai Framework and Early Warnings for All initiatives) and the regional and local capacity to respond to climate change in a cost-efficient but scientifically accurate manner.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Urban Development Through EO and Natural Experiments: the UDENE Project and its case studies

Authors: Lorenzo Amici, Maria Antonia Brovelli, Dr. Vasil Yordanov, Nikola Obrenović, Branislav Pejak, Onur Lenk, Yücel Erbay, Mohamed Rahji, Ahmed El Fadhel, Anaïs Guy, Murat Ozbayoglu, Ali Turker
Affiliations: Department of Civil and Environmental Engineering (DICA), Politecnico di Milano, BioSense Institute, University of Novi Sad, Istanbul University, Institute of Marine Sciences and Management, NiK Insaat Ticaret Ltd. Şti., Tunisian Space Association, Eurisy, Department of Artificial Intelligence, TOBB University of Economics and Technology, WeGlobal
The Urban Development Explorations using Natural Experiments (UDENE) project is an innovative initiative that combines Earth Observation (EO) technologies with urban planning to address critical challenges faced by cities today. Funded under the Horizon Europe program, this initiative aims to bridge critical gaps in the ability of urban planners, policymakers, and researchers to assess the impacts of urban interventions. By leveraging EO data from Copernicus satellites and linking it with structured local datasets organized as data cubes, UDENE provides a robust framework to enable evidence-based decisions. The project applies "natural experiments," defined as real-world changes or events analysed as if they were controlled experiments, offering unique insights into the causal effects of urban development policies. A key objective of UDENE involves structuring in-situ urban data into interoperable data cubes and integrating them into the Copernicus data cube federation. This linkage facilitates seamless exploration of urban development impacts across time and geographic locations. The project partners develop advanced sensitivity analysis algorithms to verify and operationalize multivariate causal models, enabling accurate predictions of how urban interventions influence critical outcomes. These outcomes include air quality, heat load, mobility, and resilience to natural hazards, such as earthquakes. UDENE also seeks to close the gap between high-level EO technologies and the practical needs of urban planners. At the core of UDENE’s framework are two tools tailored for practical use: the UDENE Exploration Tool and the UDENE Matchmaking Tool. The Exploration Tool allows urban planners, developers, and decision-makers to test, validate, and visualize their ideas. Through a user-friendly interface, users can assess the impacts of urban development options on various metrics, such as air pollution, traffic patterns, and temperature regulation.
Users explore potential outcomes of specific urban strategies by combining EO data and advanced causal models. The Matchmaking Tool links the Exploration Tool with existing EO products, services, and applications by identifying relevant downstream EO applications and service providers. The project’s objectives and tools demonstrate their utility through three use cases in Serbia, Tunisia, and Turkey. These case studies highlight the versatility of UDENE’s approach in addressing diverse urban challenges, ranging from environmental concerns to disaster preparedness. In Novi Sad, Serbia, the project analyses the environmental impacts of major transportation infrastructure changes. Two interventions are at the centre of this study: constructing bypass bridges to redirect heavy traffic and converting a central street into a pedestrian zone. While these changes aim to reduce air pollution and enhance mobility, they also raise concerns about potential congestion and disruption to existing transportation patterns. The case study integrates EO datasets, including Sentinel-5P data for tracking nitrogen dioxide (NO2) emissions, with in-situ traffic and air quality data from local monitoring stations. In addition, advanced regression and machine learning models estimate changes in air quality and assess causal relationships between traffic interventions and pollutant reductions. For the same goal at the microscopic level, agent-based simulations use inputs such as local demographic and transportation data to model traffic redistribution and identify its consequences for pollutant emissions. In Greater Tunis, Tunisia, the study addresses the issue of urban heat islands (UHIs), where cities experience elevated temperatures compared to surrounding rural areas. The focus is on assessing the impact of a linked park system in mitigating heat loads across Local Climate Zones (LCZs).
EO data is central to this analysis, with Sentinel-2 imagery providing vegetation indices like the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), and Landsat missions supplying land surface temperature (LST) data. Urban spectral indices, such as the Normalized Difference Built-up Index (NDBI), quantify the extent of built-up areas and their impact on heat distribution. These EO datasets are combined with in-situ temperature, humidity, and wind measurements to evaluate the cooling effects of green spaces and assess the efficiency of the park system. Random Forest regression models explore the relationships between LCZ characteristics, green infrastructure, and heat mitigation. In Istanbul, Türkiye, the focus is on assessing the resilience of high-rise districts to possible seismic risks. Located near the active west segment of the North Anatolian Fault Zone (NAFZ), the study area includes the Kadıköy, Ataşehir, and Üsküdar districts, characterized by dense urbanization and numerous high-rise buildings. As part of the pre-earthquake scenario, the use case simulates the impacts of a potential Mw ≥ 7.0 earthquake, estimating building damage, casualties, and economic losses through the implementation of models associated with Earthquake Loss Estimation Routines (ELER). High-resolution satellite imagery and local data from both private and public organizations, as well as the Copernicus Data Space Ecosystem, are employed for land use and building information. Casualties are estimated using models such as those developed under the HAZUS framework, which account for building type, damage level, and injury severity, integrated with ground motion prediction equations. To validate the results, the outputs will be compared with damage assessments from recent earthquakes, such as the 2023 Kahramanmaraş earthquakes.
By combining EO technologies with local seismic and building data, this use case offers insights into how urban development policies enhance disaster preparedness. In addition to these case studies, UDENE emphasizes collaboration and partnership-building by actively involving European and non-European stakeholders from the public and private sectors with the aim to enhance the usability and scalability of EO technologies. By addressing challenges like climate change adaptation, air quality improvement, and disaster resilience, UDENE’s aims and objectives align closely with global ones, such as the UN Sustainable Development Goals (SDGs). The project’s contributions are not only academic but also provide practical solutions for cities worldwide.
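The spectral indices used in the Tunis case study follow standard band-ratio definitions; the sketch below shows those conventional formulas (the Sentinel-2 band assignments noted in comments are the usual ones, and the reflectance values are toy numbers, not project data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index (Sentinel-2: B8 = NIR, B4 = red)."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index (Sentinel-2: B11 = SWIR, B8 = NIR)."""
    return (swir - nir) / (swir + nir)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Toy surface reflectances for a vegetated and a built-up pixel:
veg = dict(nir=0.45, red=0.05, blue=0.03, swir=0.20)
urb = dict(nir=0.25, red=0.20, blue=0.15, swir=0.35)

veg_ndvi = ndvi(veg["nir"], veg["red"])   # high for vegetation
urb_ndbi = ndbi(urb["swir"], urb["nir"])  # positive for built-up surfaces
```

All three functions also work element-wise on NumPy arrays, so they can be applied directly to whole reflectance bands.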
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Urban Nighttime Temperature Trends Derived from 20 Years of ESA-CCI LST Data

Authors: Panagiotis Sismanidis, Benjamin Bechtel, Marzie Naserikia, Negin Nazarian, Melissa Hart, Iphigenia Keramitsoglou, Darren Ghent
Affiliations: Institute of Geography, Ruhr University Bochum, Australian Research Council Centre of Excellence for Climate Extremes, University of New South Wales, School of Built Environment, University of New South Wales, Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens, National Centre for Earth Observation, Department of Physics and Astronomy, University of Leicester, Australian Research Council Centre of Excellence for 21st Century Weather, University of Tasmania, Climate Change Research Centre, University of New South Wales, Australian Research Council Centre of Excellence for 21st Century Weather, University of New South Wales
Cities are generally warmer than their surroundings. This phenomenon is known as the Urban Heat Island (UHI) and is one of the clearest examples of human-induced climate modification. UHIs increase the cooling energy demand, aggravate thermal discomfort, and influence air quality. As such, they impact the health and welfare of the urban population and increase the carbon footprint of cities. The relative warmth of the urban atmosphere, surface, and substrate leads to four distinct UHI types, each governed by a different mix of physical processes: the canopy layer, boundary layer, surface, and subsurface UHI. Surface UHIs (SUHIs) result from modifications of the surface energy balance at urban facets, canyons, and neighborhoods. They exhibit complex spatial and temporal patterns that are strongly related to land cover and are usually estimated from remotely sensed Land Surface Temperature (LST) data. In the context of ESA’s Climate Change Initiative LST project (LST_cci), we investigate how the LST of cities has changed over the last ~20 years (2002-2019) using nighttime data from Aqua MODIS. We focus on nighttime conditions, when the agreement between LST and near-surface air temperature over cities is strongest. Our results reveal a consistent warming trend across all cities, on average (± SD) 0.06 ± 0.02 K/year. Cities located in continental climates exhibit the most pronounced warming, of about 0.08 K/year, while those in tropical climates show the least (~0.04 K/year). Our results also suggest that cities in the Northern Hemisphere warm faster than those in the Southern Hemisphere, and that the cities with the strongest increase in nighttime LST are all concentrated in the Middle East, where we estimated trends as high as 0.15 K/year (Doha, Qatar).
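Warming rates in K/year like those quoted above are typically obtained as the ordinary-least-squares slope of a deseasonalised LST time series. A minimal sketch of that step (synthetic data with cloud gaps, not the LST_cci processing chain):

```python
import numpy as np

def lst_trend_k_per_year(dates_years, lst_k):
    """Least-squares linear trend of an LST anomaly series, in K/year.
    dates_years : observation times in decimal years
    lst_k       : LST anomalies (K); NaNs (e.g. cloudy scenes) are skipped."""
    t = np.asarray(dates_years, dtype=float)
    y = np.asarray(lst_k, dtype=float)
    ok = ~np.isnan(y)
    slope, _intercept = np.polyfit(t[ok], y[ok], 1)
    return slope

# Synthetic monthly anomalies over 2002-2019 with a 0.06 K/yr warming trend:
rng = np.random.default_rng(3)
t = 2002 + np.arange(12 * 18) / 12.0
y = 0.06 * (t - t[0]) + rng.normal(0, 0.5, t.size)
y[rng.random(t.size) < 0.2] = np.nan     # simulate missing cloudy scenes
trend = lst_trend_k_per_year(t, y)       # recovers roughly 0.06 K/yr
```

Running this per city (or per pixel) over the ~18-year record yields the trend maps from which the per-climate-zone averages are computed.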

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Urban Air Temperature Prediction Leveraging Machine Learning and Remote Sensing Technologies

Authors: Lorenzo Innocenti, Giacomo Blanco, Mr. Luca Barco, Claudio
Affiliations: LINKS Foundation
Urban heat islands (UHIs) pose significant challenges to environmental sustainability and public health, manifesting as localized areas within a city where temperatures are significantly higher than in their surroundings. These thermal hotspots amplify energy consumption, escalate health risks, and stress urban infrastructure. Addressing these challenges necessitates predictive tools capable of delivering precise temperature forecasts to support urban planning and policy decisions. Despite the potential of satellite-based land surface temperature (LST) monitoring, existing data from the ESA Copernicus Sentinel-3 mission are constrained by two critical limitations: inadequate spatial resolution for urban-scale thermal differentiation, with twice-daily LST measurements at a resolution of 1 km per pixel, and the fundamental disparity between land surface and air temperatures. While higher resolution LST satellites, such as Landsat 8, exist, they offer a coarser temporal resolution, with data available every 16 days. This research introduces a machine learning model designed to predict maximum daily air temperatures at a high spatial resolution of 20 meters per pixel. This resolution is sufficient to understand temperature dynamics within the city, allowing for the recognition of temperature differences between individual city blocks. Each day inference is run, the model produces a temperature forecast seven days into the future. Our technology utilizes a visual transformer-based architecture, which distinguishes itself by being more compact and computationally efficient than traditional convolutional neural networks (CNNs), achieving a mean absolute error (MAE) of 2°C across seven-day temperature predictions for three major European cities. The model takes as input multiple remote sensing and weather forecast data.
The first input is the aforementioned LST data from the Sentinel-3 satellite constellation, in particular the morning passage data, collected between the hours of 9 AM and 11 AM. The model also takes data from Sentinel-2, again from the Copernicus program, which offers a high spatial resolution of 10 to 60 meters and a temporal resolution of five days. In particular, the Normalized Difference Vegetation Index (NDVI) is used, calculated from the red and near-infrared bands, which are sensitive to vegetation health and density. To ensure data quality and minimize cloud interference, the median value of the monthly measurements with cloud cover less than 10% is used. The meteorological data utilized in this study originates from the Visual Crossing provider, using their Visual Crossing Weather data service, which incorporates variables such as forecasted temperature, pressure, humidity, wind, and others. Regarding topographic data, two sources have been utilized. The first is the Digital Elevation Model (DEM), which provides information on the terrain's altitude. The second source is the Copernicus Urban Atlas, which classifies land use in urban environments into 27 distinct classes. All input data is resized to the required dimensions and combined into a single 3D tensor for the model. The land cover (LC) data is transformed from 27 classes into four broader categories and processed into a four-channel matrix, where each pixel's value represents the percentage of the class present within that pixel. To incorporate temporal context, circular encoding is used for the day of the year, day of the week, and time of day of the Sentinel-3 passage. All inputs except the weather data are stacked, combined with the weather data for the day being predicted, and passed to the model. This process is repeated for each of the seven days to generate the seven temperature predictions.
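Two of the inputs described above can be stated precisely: the NDVI computed from the red and near-infrared bands, and the circular encoding of cyclic time variables such as the day of year. A sketch of both (illustrative helper functions, not the project's code):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    red, nir = np.asarray(red, dtype=float), np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

def circular_encode(value, period):
    """Encode a cyclic quantity (e.g. day of year) as (sin, cos) so that
    the period boundary (day 365 -> day 1) stays adjacent in feature space."""
    angle = 2.0 * np.pi * value / period
    return np.sin(angle), np.cos(angle)

print(ndvi(0.05, 0.45))          # dense vegetation, ≈ 0.8
print(circular_encode(1, 365))   # early January
print(circular_encode(364, 365)) # late December, close to day 1 in (sin, cos)
```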
The temperature measurements, which are used as the target for the ML training, have been sourced from the Weather Underground temperature crowdsourcing portal. The stations provide temperature data at intervals ranging from 3 to 10 minutes, depending on the specific location. This data is processed into a 2D matrix composed of pixels with values equal to the average of the maximum temperature recorded by each station within the area covered by the pixel for that day. If no station is active in the area, the pixel is marked as invalid. For each valid pixel, the mean squared error (MSE) loss between the predicted temperature from the model and the ground truth is computed and used to update the model weights. An image-to-image regression neural network architecture is used to translate these multidimensional inputs into a set of two-dimensional temperature maps. The architecture features an encoder-decoder structure, where the encoder extracts hierarchical features from the input data and the decoder reconstructs the spatial information. The chosen encoder is a Mixed Transformer model (MiT), which features attention blocks as the main computational units and convolutional layers for downsampling stages, as a lighter and more efficient alternative to CNN-based encoders. The decoder reconstructs this information using a simple cascade of convolution-upsample blocks, incorporating higher resolution features via skip connections. The model is embedded within a continuous processing pipeline designed for uninterrupted operation. Its daily workflow automatically retrieves data, performs preprocessing, and generates temperature mappings. Seven-day temperature forecasts are uploaded to a geospatial dashboard, presenting predictions as overlays on the target urban landscapes.
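The training target described above is sparse: only pixels covered by at least one active station contribute to the loss. A NumPy sketch of such a masked MSE (the actual training presumably uses a deep learning framework; this only illustrates the masking logic):

```python
import numpy as np

def masked_mse(pred, target, valid):
    """Mean squared error over valid pixels only.

    pred, target : 2-D temperature maps (°C)
    valid        : boolean mask, True where a station was active
    """
    diff = (pred - target)[valid]
    return float(np.mean(diff ** 2))

pred = np.array([[20.0, 22.0], [25.0, 30.0]])
target = np.array([[21.0, 22.0], [0.0, 28.0]])   # 0.0 marks an unused pixel
valid = np.array([[True, True], [False, True]])
print(masked_mse(pred, target, valid))  # (1 + 0 + 4) / 3 ≈ 1.67
```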
This solution is currently implemented within UP2030 (Urban Planning and Design Ready for 2030), a project supported by the European Union's Horizon Europe research and innovation program, which seeks to guide cities through socio-technical transitions aligned with climate neutrality objectives. By integrating this temperature forecasting service into urban planning frameworks, the project offers a tool for mitigating urban heat island impacts and fostering sustainable urban development. This work was funded in part by the Horizon Europe project UP2030 (grant agreement n.101096405) and in part by the Project NODES through the MUR—M4C2 1.5 of PNRR Grant ECS00000036.

Tuesday 24 June 16:15 - 17:45 (Room 0.14)

Session: A.01.12 EE9 FORUM - WHAFFFERS campaign networking event

WHAFFFERS (the W-band, HiSRAMS, AERI, FIRMOS, FINESSE, and FIRR-2 Experiment on Remote Sensing) is the largest effort to date to bring together far-infrared (FIR), mid-infrared (MIR), and microwave (MW) instrumentation that mimics aspects of FORUM’s formation flying with MetOp-SG’s IASI-NG and MWS. The atmosphere and snow surface were simultaneously characterised by a suite of independent ground-based and airborne instrumentation, i.e. lidar, radar, MW radiometry, aircraft in-situ ice cloud measurements, and soundings for temperature and water vapour profiles, together with micro- and macrophysics of ice and water clouds and snow grain size measurements to help interpret surface snow emissivity.
This campaign is a joint endeavour between ESA, NASA, the National Research Council (NRC) Canada, ECCC, CNR Italy, McGill University, Université du Québec à Montréal, and Imperial College London. The campaign took place at the ground stations at Ottawa Airport and the Gault Nature Reserve close to Montreal, with overflights of the instrumented NRC Convair-580 research aircraft, in the January/February 2025 timeframe.
The objectives of the campaign are to support the development of FORUM by: 1) Radiative closure experiments in clear and cloudy conditions, 2) Retrieval information content analysis (FIR only, FIR+MIR, FIR+MW, …), and 3) Snow and ice emissivity assessment.
WHAFFFERS addresses the FORUM scientific development by creating a benchmark data set for: 1) assessment of FORUM retrievals, 2) community on-boarding through provision of data, and 3) validation preparation.

In the first half of this networking session, highlights of the campaign will be presented, followed by discussions about the next steps for data interpretation, uncertainties, and applications. This networking session also welcomes scientists new to WHAFFFERS and EE9 FORUM.

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Session: C.05.04 Landsat Program and Science Applications

Landsat satellites have been providing continuous monitoring of the Earth’s surface since 1972. The free and open data policy of the Landsat program enables the global land imaging user community to explore the entire 52-year long-term data record to advance our scientific knowledge and explore innovative use of remote sensing data to support a variety of science applications. This session will focus on Landsat mission collaboration, data and science applications of Landsat products that provide societal benefits, and efforts by European and U.S. agencies to maximize those benefits alongside comparable European land imaging missions such as Copernicus Sentinel-2.

A diverse set of multi-modal science applications has been enabled with Landsat and Sentinel-2 harmonization and fusion with SAR, LiDAR, high-resolution commercial imagery, and hyperspectral imagery among others. Rapid progress has been achieved using the entire Landsat archive with access to high-end cloud computing resources. Landsat data and applications have revealed impacts from humans and climate change across the globe in land-cover, land-use, agriculture, forestry, aquatic and cryosphere systems.

Building on the 52+ year legacy and informed by broad user community needs, Landsat Next’s enhanced temporal (6-day revisit), spatial (10–60 m), and superspectral (21 visible to shortwave infrared and 5 thermal bands) resolution will provide new avenues for scientific discovery. This session will provide updates on Landsat missions and products, and collaboration activities with international partners on mission planning, data access, and science and applications development.

We invite presentations that demonstrate international collaboration and science advancements on the above topics. We also invite presentations on innovative uses of Landsat data alone or in combination with other Earth observation data modalities that meet societal needs today and in coming decades.

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Global Scale Deforestation Monitoring for Seasonal and Deciduous Forests Using Sentinel-2 and Landsat

Authors: Vincent Schut, PhD Luca Foresta, Berto Booijink, PhD Niels Anders, Niklas Pfeffer, Niels Wielaard, Rens Masselink
Affiliations: Satelligence
Deforestation monitoring by the private sector has in recent years mainly focused on deforestation in Tropical Moist Forest, as most of the attention was on palm oil, cocoa and soy in Latin America. Since the announcement of the EUDR, however, the demand for satellite deforestation monitoring solutions for the other EUDR commodities, such as rubber, coffee and wood, has increased. With the expansion to these commodities also comes a shift in the forest types that need to be monitored. Commodities like coffee are grown in areas of dry tropical forests, and wood products in more temperate regions with deciduous forests. The change of forest types in many cases also means that some change detection algorithms are no longer effective because they classify leaf-off periods as deforestation, a misclassification especially prone to happen in optical data such as Sentinel-2 and Landsat. We present a new SpatioTemporalAdaptiveBareness (STAB) methodology, which uses a combination of Landsat and Sentinel-2 data to determine whether pixels were deforested. First, the seasonality (STAB Factors) of an area is calculated based on a period of 6 years preceding the monitoring period. These STAB Factors model the temporal (seasonal) behaviour of the bareness per pixel relative to the regional "reference bareness" of surrounding forest(like) areas, where bareness is an index based on the SWIR and NIR bands, and reference bareness is the median of the bareness for all forest-like pixels within an entire Landsat or Sentinel-2 scene. During dry or leaf-off periods, this reference bareness will be high for the entire scene, while during a wet season when vegetation is green, it will be low. The reference bareness thus represents the overall regional seasonality, and the STAB Factors model if and how a single pixel's bareness values follow that regional seasonality, or not.
Second, during the monitoring period, an expected per-pixel bareness can be calculated by applying the STAB Factors to the current reference bareness. For each pixel, the SpatioTemporal Adaptive Bareness value is then calculated by dividing the pixel's raw bareness by the expected bareness. Whenever this ratio crosses a certain threshold, the pixel is flagged as deforested. The change detection algorithm has successfully detected changes in moist tropical forest, dry tropical forests and savanna-type woodlands such as the Cerrado and Chaco areas in Latin America. The algorithm has been successfully applied at continental scale in Latin America, Africa and Asia. We will show examples of the change detection and compare them to other (open) datasets available.
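The abstract leaves the exact bareness formula unspecified (only that it is based on the SWIR and NIR bands), and the STAB Factors are described as a per-pixel seasonal model. The core decision rule can nonetheless be sketched as follows; the normalized-difference bareness, the linear expected-bareness model, and the threshold value are all assumptions for illustration, not the Satelligence implementation:

```python
def bareness(swir, nir):
    # Assumed placeholder: a normalized difference of SWIR and NIR;
    # the actual STAB bareness index is not specified in the abstract.
    return (swir - nir) / (swir + nir)

def stab_flag(pixel_bareness, reference_bareness, stab_factor, threshold=1.5):
    """Flag a pixel as deforested when its observed bareness exceeds the
    bareness expected from regional seasonality by more than `threshold`.
    `expected = stab_factor * reference_bareness` assumes a linear STAB model.
    """
    expected = stab_factor * reference_bareness
    ratio = pixel_bareness / expected
    return ratio > threshold

# A pixel tracking the regional seasonality is not flagged...
print(stab_flag(0.21, reference_bareness=0.2, stab_factor=1.0))  # False
# ...while a pixel far barer than expected is.
print(stab_flag(0.60, reference_bareness=0.2, stab_factor=1.0))  # True
```

Because the expected bareness follows the scene-wide reference, a leaf-off season raises both the numerator and the denominator of the ratio, which is what keeps seasonal forests from being misclassified.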

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Do We Really Have Enough Data for Long-term Analyses: Deep Dive Into Global Per-pixel Availability of Usable Landsat and Sentinel-2 Data

Authors: Dr. Katarzyna Ewa Lewińska, Stefan Ernst, Dr. rer. nat David Frantz, Prof. Dr. rer. nat. Ulf Leser, Patrick Hostert
Affiliations: Geography Department, Humboldt-Universität zu Berlin, Department of Forest and Wildlife Ecology, University of Wisconsin-Madison, Geoinformatics –Spatial Data Science, Trier University, Department of Mathematics and Computer Science, Humboldt-Universität zu Berlin, Integrative Research Institute on Transformations of Human-Environment Systems (IRI THESys), Humboldt-Universität zu Berlin
The Landsat data archive provides over 40 years of medium-resolution multispectral data, making it the longest continuously running Earth observation program and one of the most commonly used data sources for long-term analyses. Free and open access to Landsat data combined with technological advancements in data storage and processing capabilities have facilitated the development of new algorithms and approaches that utilize dense time series of satellite observations. The shift towards using all available data has unlocked new monitoring capacities. Yet, the aptness and accuracy of any analysis hinge on the availability and quality of the data. Landsat historical global data coverage is extremely variable due to the limitations and priorities of past missions. Although the availability of Landsat products is well known on a per-tile basis, users lack easily accessible and queryable information on net data availability at the per-pixel level, which is driven by cloudiness, cloud shadows, snow, and other highly variable disturbances. Aggregated information on data availability for different time windows is not routinely and accessibly provided at the per-pixel level for Landsat data holdings. The same limited overview of usable data applies to Sentinel-2, which is nowadays frequently used in synergy with Landsat to capitalize on more frequent data acquisition. This implies that each study based on Landsat or Sentinel-2 data needs to a priori quantify the availability of cloud-, shade-, and snow-free data specific to its area and time period of interest to understand the limitations of analytical approaches and to correctly parametrize algorithms.
Critically, the increasing accessibility of interpolation and data reconstruction approaches, often seamlessly incorporated into cloud processing environments and software, allows for extensive augmentation of missing data with unquestioned confidence and accuracy and without any per-pixel quality information, creating ‘perfect’ time series but simultaneously potentially jeopardizing the quality of the final results. Convergent with the growing popularity of machine learning and deep learning applications and statistical classifiers, this could have a detrimental effect on the credibility of results. To improve the discoverability of per-pixel data availability and to demonstrate the opportunities and limitations in the 1982-2024 Landsat and 2015-2024 Sentinel-2 archives, we performed a systematic global pixel-based assessment of usable data (i.e., cloud-, shade-, and snow-free) across both data holdings. Using a global 0.18° sampling scheme, our overview highlights differences in data availability, evaluating region-specific limits for time series analyses. Importantly, we focus on both the feasibility of long- and medium-term analysis windows, as well as annual and sub-annual data availability, analyzing data as available in the archives and assuming different degrees of interpolation. To ensure wide usability, we provide insights on how time series densities vary when using Landsat and Sentinel-2 data separately and jointly. Finally, we determine whether increased data availability after 2014 and combining Landsat and Sentinel-2 data have an impact on long-term trends in NDVI (Normalized Difference Vegetation Index), which is commonly used in studies on vegetation greening and implications of climate change.
Overall, our results evaluate and highlight spatio‑temporal heterogeneity in the availability of two critical environmental satellite missions, draw attention to the feasibility of analyses in some regions and over specific time periods including the most recent years, as well as challenge the quality of results of some land cover and land use analyses and applications. We accordingly emphasize the importance of analysis-specific thorough data availability evaluation and critical assessment of the applicability of algorithms of choice. Specifically, our results provide an urgently needed perspective on data availability-driven limitations and opportunities for analyses based on Landsat and Sentinel-2 data archives, which will continue to be at play in the future with the next mission building on the existing legacy. To ensure maximum usability and impact, our data availability dataset is freely available to the community as a sharable dataset and through a cloud-based interactively-browsable interface.

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Toward operational Landsat aquatic reflectance science products for advancing global inland water and coastal ocean observations

Authors: Benjamin Page, Christopher Crawford, Danika Wellington, Saeed Arab, Gail Schmidt, Chris Barnes
Affiliations: Earth Space Technology Services (ESTS), U.S. Geological Survey (USGS), KBR, Inc., Earth Resources Observation and Science (EROS) Center
In April 2020, the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center released provisional Aquatic Reflectance (AR) science products for the Landsat 8 Operational Land Imager (OLI) via EROS Science Processing Architecture (ESPA) On Demand Interface. This was envisioned as an expansion to the current suite of Landsat Level-2 atmospherically corrected products. Landsat science products that are considered provisional are accessible to the public but are actively under USGS evaluation and remote sensing community validation. Provisional algorithms and the generated outputs may undergo further modifications, improvements, or design before being considered as operationally ready for scientific use. The release of the Landsat 8 and Landsat 9 provisional AR products marks the first step toward structuring a standardized processing pathway designed to produce AR measurements from OLI’s 30-meter spatial resolution image data. This initiative is anticipated to enhance aquatic science and authoritative environmental monitoring efforts, particularly in the areas of coastal mapping and lake management practices. The Science Algorithms to Operations (SATO) process for USGS Landsat data products enables smooth transition of researched, developed, and matured science algorithms from a provisional state into operational readiness. The Sea-viewing Wide Field-of-view Sensor Data Analysis System (SeaDAS), developed by NASA’s Ocean Biology Processing Group (OBPG), has been the flagship atmospheric correction processor for generating Collection 1 and Collection 2 provisional AR products for Landsat 8 and Landsat 9. 
Here, we evaluate, within the parameters defined by the SATO process together with the Landsat Science Office (LSO), whether SeaDAS is the optimal pathway for current, heritage, and upcoming Landsat missions in terms of suitability for emerging scientific needs and standards that require reliable analysis-ready data for both inland and coastal water quality mapping applications. Preliminary assessments of alternative atmospheric correction solutions that can generate AR products from Landsat imagery suggest that there may be more suitable processing options for operational Landsat AR global science products. The purpose of this contribution is to communicate with aquatic scientists and the broader Earth observation community on the origins, requirements, challenges, successes, and objectives for standardizing global AR science data products for Landsat satellite missions.

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Rapid glacier shrinkage on Baffin Island from 2000 to 2019 as observed from Landsat 8 and Sentinel-2

Authors: Dr. Frank Paul, Philipp Rastner
Affiliations: University of Zurich
The glaciers and ice caps on Baffin Island contribute substantially to sea-level rise, but glacier area was poorly constrained in the widely used RGI 6.0 (e.g. debris-covered regions were often missing and ice divides were at the wrong location) and area changes over the past two decades were basically unknown. To improve the situation, we have (a) created a revised version of the RGI 6.0 outlines from ‘around the year 2000’ for the new RGI 7.0 and (b) compiled a new inventory for 2019 to perform a change assessment. For the revised year 2000 inventory in RGI 7.0, the temporal coverage could be reduced from 52 years (1958-2010) to 4 years (1999-2002) using Landsat 7 scenes, and for 2019 to about one week using 12 Landsat 8 along with 3 Sentinel-2 scenes. We also substantially revised the ice divides by applying watershed analysis to the Arctic DEM in combination with maps of flow velocities from Sentinel-2. Topographic information for each glacier was also derived from the Arctic DEM. When excluding Barnes Ice Cap (which lost 117 km² or 2% of its area), the mapped glacier area is 29,781 km² in 2000 and 26,067 km² in 2019, i.e. a reduction of 3,715 km² or -12.5% (-0.66%/a). Relative area losses strongly increased towards smaller glaciers, reaching -75% for glaciers <0.1 km² and -25% (-1.3%/a) for all glaciers <10 km². Many ice caps disintegrated into smaller pieces, and 2140 ice bodies (total area 190 km²) melted away completely from 2000 to 2019. Apart from the methods and results obtained for the two inventories, we will also present the challenges of glacier mapping in this region (e.g. separating glaciers from attached ice patches or identifying debris-covered ice in shadow) along with the difficulties of calculating glacier-specific area changes from two inventories with a spatial mismatch.
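The relative loss and annual rate quoted above follow directly from the two mapped areas; a quick arithmetic check (not the inventory processing itself):

```python
def area_change(area_t0_km2, area_t1_km2, years):
    """Relative glacier area change (%) and mean annual rate (%/a)."""
    rel = 100.0 * (area_t1_km2 - area_t0_km2) / area_t0_km2
    return rel, rel / years

# Baffin Island (excluding Barnes Ice Cap), 2000 -> 2019
rel, rate = area_change(29781.0, 26067.0, 19)
print(f"{rel:.1f}% total, {rate:.2f}%/a")  # → -12.5% total, -0.66%/a
```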

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: The NASA Harmonized Landsat and Sentinel-2 Version 2.0 surface reflectance dataset

Authors: Junchang Ju, Qiang Zhou, Brian Freitag, Madhu Sridhar, Christopher
Affiliations: University of Maryland
The United States National Aeronautics and Space Administration (NASA) Harmonized Landsat and Sentinel-2 (HLS) project is entering its 10th year of offering 30-m surface reflectance data. The HLS project was initiated in the early 2010s to produce more frequent land measurements by combining the observations from the US Landsat 8 Operational Land Imager (OLI) and the European Copernicus Sentinel-2A MultiSpectral Instrument (MSI), and currently from two OLI and two MSI sensors, by applying atmospheric correction to top-of-atmosphere (TOA) reflectance, masking out clouds and cloud shadows, normalizing bi-directional reflectance view angle effects, adjusting for sensor bandpass differences with the OLI as the reference, and providing the harmonized data in a common grid. Several versions of HLS datasets have been produced in the last ten years; the newest HLS dataset, tagged Version 2.0 and incorporating improvements to almost all the harmonization algorithms, was completed in the summer of 2023 and for the first time achieves near-global coverage (excluding Antarctica). The data harmonization efficacy was assessed by examining how the reflectance difference between contemporaneous Landsat and Sentinel-2 observations was successively reduced by each harmonization step, for 545 pairs of globally distributed same-day Landsat/Sentinel-2 image samples from 2021 to 2022. Compared to the TOA data, the HLS atmospheric correction slightly increased the reflectance relative difference between Landsat and Sentinel-2 for most of the spectral bands, especially for the two blue bands and the green bands. The subsequent bi-directional reflectance view angle effect normalization effectively reduced the between-sensor reflectance difference present in the atmospherically corrected data for all the spectral bands, and notably to a level below the TOA differences for the red, near-infrared (NIR), and the two shortwave infrared (SWIR) bands.
The bandpass adjustment only had a modest effect on reducing the between-sensor reflectance difference. In the final HLS products, the same-day reflectance difference between Landsat and Sentinel-2 was below 4.2% for the red, NIR, and the two SWIR bands, all smaller than the difference in the TOA data. However, the between-sensor differences for the two blue and the green bands remain slightly higher than in TOA data, and this reflects the difficulty in accurately correcting for atmospheric effects in the shorter wavelength visible bands. The data consistency evaluation on a suite of commonly used vegetation indices (VI) calculated from the HLS V2.0 reflectance data showed that the between-sensor VI difference is below 4.5% for most of the indices. HLS is under continuous refinement. Additional updates include a global production of HLS VI data scheduled to start in early 2025, a 10-day surface reflectance composite in prototyping, a 6-hour low-latency HLS production under development, and research in atmospheric correction and topographic correction improvement.
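The between-sensor percentages above are relative reflectance differences between same-day Landsat and Sentinel-2 observations. The abstract does not give the exact statistic; a plausible sketch, assuming a mean absolute relative difference with respect to the pair mean (the values here are hypothetical, not HLS results):

```python
import numpy as np

def mean_relative_difference(landsat_refl, sentinel2_refl):
    """Mean absolute relative reflectance difference (%) between
    contemporaneous Landsat and Sentinel-2 samples of one band."""
    l = np.asarray(landsat_refl, dtype=float)
    s = np.asarray(sentinel2_refl, dtype=float)
    return float(100.0 * np.mean(np.abs(l - s) / ((l + s) / 2.0)))

# Two hypothetical same-day red-band reflectance pairs
print(round(mean_relative_difference([0.30, 0.40], [0.31, 0.39]), 1))  # → 2.9
```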

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Landsat Next Programmatic Update

Authors: Timothy Newman
Affiliations: U.S. Geological Survey
The USGS National Land Imaging Program Coordinator, Timothy Newman, will provide a programmatic update on the Landsat Next mission. Landsat Next is the next-generation Landsat mission currently in development at NASA and USGS, intended to provide Landsat data continuity through the 2030s and beyond for hundreds of thousands of Landsat users in the United States and around the world and delivering billions of dollars of economic benefit annually. Landsat Next is projected to deliver better than twice the spectral, spatial and temporal resolution of Landsat 9, meeting the evolving needs of research and operational users across a host of applications, including crop health and production, tracking water use, documenting urban growth, mapping wildfires, monitoring the well-being of forests, assessing the impact of industrialization, and informing efforts to reduce hunger globally. The programmatic update will include a detailed description of the mission, including agency roles and responsibilities, the development history of the mission, and the latest programmatic schedules and milestones.

Tuesday 24 June 16:15 - 17:45 (Hall G2)

Session: C.05.06 Status ESA Mission development: National Programmes managed by ESA - PART 2

The status of development of ESA missions will be outlined.
Across four sessions of 1.5 hours each (together equivalent to a full day), participants will be offered the unique opportunity to gain valuable insights into technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch), and the status of activities related to mission development will be presented together with industrial/science partners.

Speakers:


  • Dimitris Bliziotis - HSC
  • Rosario Quirino Iannone - ESA
  • Mario Toso - ESA
  • Enrique Garcia - ESA
  • Ana Sofia Oliveira - ESA
  • Ariane Muting - ESA
  • V. Marchese - ESA
  • R. Gurdak - POLSA
  • Jolanta Orlińska - POLSA
  • L. Montrone - ESA
  • G. Grassi - ESA

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Session: A.07.08 Global and regional water cycle in the integrated human-Earth system, estimation of hydrological variables and hyper-resolution modelling - PART 2

Water in all three phases and its cycling through the Earth system are essential to weather, climate and climate change, and to life itself. The water cycle is closely coupled with the energy and carbon cycles. Over continents, the water cycle includes precipitation (related to clouds, aerosols, and atmospheric dynamics), water vapor divergence and change of column water vapor in the atmosphere, land surface evapotranspiration, terrestrial water storage change (related to snowpack, surface and ground water, and soil moisture change), and river and groundwater discharge (which is linked to ocean salinity near the river mouth). Furthermore, the terrestrial water cycle is directly affected by human activities: land cover and land use change; agricultural, industrial, and municipal consumption of water; and construction of reservoirs, canals, and dams.

The EO for hydrology community is working towards datasets describing hydrological variables at a steadily increasing quality and spatial and temporal resolution. In parallel, water cycle and hydrological modellers are advancing towards “hyper-resolution” models, going towards 1 km resolution or even higher. In some cases such efforts are not just taking place in parallel but in collaboration. This session aims at presenting advances from each of the communities as well as demonstrating and promoting collaboration between the two communities.

Presentations are welcome that focus on at least one of the following areas:
- The global and regional water cycle and its coupling with the energy and carbon cycles in the integrated human-Earth system based on satellite remote sensing, supplemented by ground-based and airborne measurements as well as global and regional modeling
- New advances on the estimation of hydrological variables, e.g. evapo(transpi)ration, precipitation (note that there is another, dedicated session for soil moisture);
- Suitability of different EO-derived datasets to be used in hydrological models at different scales;
- Capacity of different models to take benefit from EO-derived datasets;
- Requirements on EO-derived datasets to be useful for modelling community (e.g. related to spatial or temporal resolution, quality or uncertainty information, independence or consistency of the EO-derived datasets, …);
- Downscaling techniques;
- Potential of data from future EO missions and of newest modelling and AI approaches (including hybrid approaches) to improve the characterisation and prediction of the water cycle.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Towards Operational Water Vapour Products from Optical Imager

Authors: Juergen Fischer, René Preusker
Affiliations: Spectral Earth GmbH, Free University Berlin
We present recent results of EUMETSAT’s Scientific Framework "Operational Water-Vapour Products from Optical Imagers". First, we focus on improving the scientific quality of the COWa Sentinel-3 OLCI Level-2 Total Column Water Vapour (TCWV) product and on the operational generation of the TCWV products. The updated COWa TCWV algorithm considers the different spectral characteristics of OLCI-A and OLCI-B and introduces a temporal evolution of the spectral models of each camera of both instruments. The retrieval is based on Optimal Estimation, allowing a pixel-by-pixel retrieval diagnostic, including uncertainty and information content estimates. The COWa TCWV retrievals are compared with ground-based GNSS measurements (Ware et al. 2000), water vapour from AERONET (Pérez-Ramírez et al. 2014, Holben et al. 1998), and water vapour from ground-based microwave radiometers of the Atmospheric Radiation Measurement (ARM) programme (Turner et al. 2003, Turner et al. 2007). An extensive validation exercise demonstrates the high performance of the COWa water vapour retrieval. We discuss a comparison of OLCI TCWV retrievals with ECMWF TCWV on a global scale over more than 4 years. The seasonal patterns of the TCWV observations and of the ECMWF analysis demonstrate impressively the potential impact of assimilating OLCI and other satellite TCWV observations in NWP and climate models.
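The Optimal Estimation core of such a retrieval can be sketched in a few lines. Everything below is an illustrative toy (a linear two-channel forward model with a one-element state), not the operational COWa/OLCI configuration; it only shows how the Gauss-Newton update, the posterior covariance (uncertainty) and the information content (degrees of freedom from the averaging kernel) are obtained.

```python
import numpy as np

def oe_step(x, xa, y, F, K, Sa, Se):
    """One Gauss-Newton optimal-estimation update of state x toward the MAP solution."""
    Sa_inv = np.linalg.inv(Sa)
    Se_inv = np.linalg.inv(Se)
    # Posterior covariance: also serves as the per-pixel uncertainty estimate
    S_hat = np.linalg.inv(Sa_inv + K.T @ Se_inv @ K)
    dx = S_hat @ (K.T @ Se_inv @ (y - F(x)) - Sa_inv @ (x - xa))
    # Averaging kernel: trace(A) gives the degrees of freedom for signal
    A = S_hat @ K.T @ Se_inv @ K
    return x + dx, S_hat, np.trace(A)

# Hypothetical linear forward model y = K x (2 channels, 1 state variable, e.g. TCWV)
K = np.array([[0.8], [0.5]])      # Jacobian
F = lambda x: K @ x               # linear, so a single step converges
xa = np.array([20.0])             # prior TCWV (kg/m^2)
Sa = np.array([[25.0]])           # prior variance
Se = 0.01 * np.eye(2)             # radiometric noise covariance
y = F(np.array([28.0]))           # noise-free synthetic measurement of the "truth"

x_hat, S_hat, dof = oe_step(xa.copy(), xa, y, F, K, Sa, Se)
```

For a linear model this single step already returns the maximum a posteriori state; the operational retrieval iterates the same update with a non-linear radiative transfer forward model.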
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Towards high resolution evaporation data integrating satellite observations and hybrid modelling

Authors: Diego Miralles, Oscar Baez Villanueva, Olivier Bonte, Joppe Massant, Fangzheng Ruan, Maximilian Söchting, Prof. Dr. Miguel Mahecha
Affiliations: Ghent University, Leipzig University
Terrestrial evaporation (E) is an essential climate variable linking the water, carbon, and energy cycles. It regulates precipitation and temperature, influences feedbacks from water vapor and clouds, and drives the occurrence of extreme events such as droughts, floods, and heatwaves. For water management, E represents a net loss of water resources, while in agriculture, transpiration determines irrigation demands. Despite its importance, global E estimates remain uncertain due to the limited availability of field measurements, the complex interplay of physiological and atmospheric processes, and challenges in capturing E through satellite observations. These gaps have driven innovation in modeling approaches that blend satellite data, in situ observations, and state-of-the-art algorithms. The fourth generation of the Global Land Evaporation Amsterdam Model (GLEAM4) enables the estimation of E and its components globally using a hybrid framework. The dataset spans 1980–2023 at a 0.1° resolution, offering improved representations of critical processes such as interception, atmospheric water demand, soil moisture dynamics, and groundwater access by plants. GLEAM4 integrates machine learning techniques to capture evaporative stress, leveraging eddy-covariance and sapflow data, while maintaining water balance and thermodynamic constraints. By reconciling the interpretability of physics-based models with the adaptability of machine learning, GLEAM4 provides a scalable solution for estimating E across ecosystems. Validation against hundreds of eddy-covariance sites demonstrates its robustness. Global land evaporation is estimated at 71 x 10³ km³ yr⁻¹, with 63% attributed to transpiration. In addition to E, the dataset provides complementary variables such as soil moisture, potential evaporation, sensible heat flux, and evaporative stress, facilitating diverse applications in hydrology, ecology, and climate science. 
Building upon GLEAM4, a new generation of high-resolution datasets is under development to meet the growing demand for actionable data in agriculture, water management, and climate adaptation. In this presentation, a 1-km resolution pilot dataset across Europe and Africa will be introduced, and its skill to capture the fine-scale dynamics of evaporation and soil moisture will be evaluated. Innovations include the assimilation of Sentinel-1 backscatter data to account for irrigation impacts, enabling precise evaporation estimates in agricultural regions, and the dynamic downscaling of radiation forcing using Land Surface Analysis Satellite Applications Facility (LSA SAF) and Moderate Resolution Imaging Spectroradiometer (MODIS) data. This high-resolution dataset will allow better characterization of droughts, heatwaves, and water resource distribution, particularly in regions vulnerable to climate variability, offering a valuable tool to manage water resources and mitigate climate impacts. Outputs from these efforts will be disseminated openly and include an interactive 3D data cube visualization, enabling timely access for researchers, policymakers, and stakeholders. This research is framed within the ESA Digital Twin Earth initiative and the Belgian Science Policy Office (BELSPO) STEREO IV programme.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: GIRAFE v1: A global precipitation climate data record from satellite data including uncertainty estimates

Authors: Marc Schroeder, Hannes Konrad, Anja Niedorf, Stephan Finkensieper, Remy Roca, Sophie Cloche, Giulia Panegrossi, Paolo Sano, Christopher Kidd, Rômulo Augusto Jucá Oliveira, Karsten Fennig, Madeleine Lemoine, Thomas Sikorski, Rainer Hollmann
Affiliations: Deutscher Wetterdienst, LEGOS, IPSL, CNR-ISAC, NASA/GFSC, Hydro Matters
We present a new precipitation climate data record (CDR), called GIRAFE (Global Interpolated Rainfall Estimation), which has recently been released by EUMETSAT’s Satellite Application Facility on Climate Monitoring (CM SAF). It covers a period of 21 years (2002-2022) with global coverage, daily temporal resolution and 1° x 1° spatial resolution. GIRAFE is a completely satellite-based data record obtained by merging infrared (IR) data from geostationary satellites and passive microwave (PMW) radiometer data from polar-orbiting satellites. In addition to daily accumulated and monthly mean precipitation, a sampling uncertainty at the daily scale is provided within the range of the geostationary satellites (55°S - 55°N). The implementation of a continuous extension of GIRAFE via a so-called Interim CDR service has almost been completed, and the associated data will become available soon. For retrieving instantaneous rain rates from PMW observations, three different retrievals were used for microwave imagers (HOAPS) and sounders (PNPR-CLIM, developed by CNR-ISAC in the Copernicus C3S_312b_Lot1 project, and PRPS). Quantile mapping is applied to the instantaneous rain rates estimated from the observations of the 19 different PMW platforms to achieve stability over time. The IR observations from the geostationary satellites undergo a dedicated quality control procedure. The uncertainty estimation is based on decorrelation ranges from variograms in the spatial and temporal dimensions. The merging of PMW and IR data and the technique for uncertainty estimation in GIRAFE are based on the methods of the Tropical Amount of Precipitation with an Estimate of ERrors (TAPEER) algorithm. Here, we present details of the GIRAFE algorithm and results of the quality assessment activity, comprising comparisons against other established global, regional and local precipitation products.
A focus will be on the analysis of the homogeneity of the GIRAFE data record relative to a variety of reference data records. Finally, results from the analysis of the consistency between precipitation extremes and surface temperature are presented and discussed.
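The quantile-mapping step mentioned above can be illustrated with a minimal empirical sketch: rain rates from one platform are mapped onto the distribution of a reference so that the multi-platform record stays stable over time. The gamma-distributed "platform" and "reference" samples below are synthetic stand-ins, not GIRAFE data.

```python
import numpy as np

def quantile_map(values, src_sample, ref_sample, n_q=101):
    """Map `values` from the source distribution onto the reference distribution."""
    q = np.linspace(0, 1, n_q)
    src_q = np.quantile(src_sample, q)   # quantiles of the biased source (one platform)
    ref_q = np.quantile(ref_sample, q)   # quantiles of the reference
    # Monotone piecewise-linear transfer function between the two CDFs
    return np.interp(values, src_q, ref_q)

rng = np.random.default_rng(0)
ref = rng.gamma(2.0, 2.0, 20000)         # "reference" rain rates, mm/h (synthetic)
src = rng.gamma(2.0, 2.5, 20000)         # biased "platform" rain rates (synthetic)
corrected = quantile_map(src, src, ref)  # platform record adjusted to the reference
```

In practice the transfer function would be built per platform (and typically per region/season) from matchup samples; mapping the source sample onto itself here simply demonstrates that the corrected distribution reproduces the reference quantiles.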
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Intercomparison of Earth Observation products for hyper-resolution hydrological modelling over Europe

Authors: Wouter A. Dorigo, Almudena García-García, Pietro Stradiotti, Federico Di Paolo, Paolo Filipppucci, Milan Fischer, Matěj Orság, Luca Brocca, Jian Peng, Alexander Gruber, Bram Droppers, Niko Wanders, Arjen Haag, Albrecht Weerts, Ehsan Modiri, Oldrich Rakovec, Félix Francés, Matteo Dall'Amico, Luis Samaniego
Affiliations: Helmholtz-zentrum Für Umweltforschung Gmbh - Ufz, Department of Geodesy and Geoinformation, TU Wien, Waterjade Srl, National Research Council of Italy, Research Institute for Geo-Hydrological Protection, Global Change Research Institute CAS, Department Computational Hydrosystems, UFZ - Helmholtz Centre for Environmental Research GmbH, University of Potsdam, Institute of Environmental Science and Geography, Department of Physical Geography, Utrecht University, Operational Water Management, Deltares, Hydrology and Environmental Hydraulics group, Wageningen University & Research, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Research Institute of Water and Environmental Engineering (IIAMA), Universitat Politècnica de València, Remote Sensing Centre for Earth System Research, Leipzig University
The increasing frequency and severity of hydrological extremes demand the development of early warning systems and effective adaptation and mitigation strategies. Such systems and strategies require high-spatial-resolution hydrological predictions, mostly provided by hydrological models. However, current state-of-the-art hydrological predictions remain limited in their spatial resolution. A proposed solution is the integration of high-resolution Earth observation (EO) products in hydrological modelling in order to reach hyper-resolution (approximately 1 km²). Nonetheless, proper use of these data in hydrological modelling requires a comprehensive characterisation of their uncertainties. Here, we present results from the 4DHydro project evaluating the performance of high-resolution EO products of four hydrological variables (precipitation, snow cover area, surface soil moisture, and evapotranspiration) against observational references. Two merged EO precipitation products at 1 km resolution (merged IMERG-SM2A and merged ERA5-IMERG-SM2A) reached correlation coefficients of more than 0.5 with the benchmark reference over most areas and are recommended for hyper-resolution hydrological modelling over Europe. The MODIS (250 m resolution) and Sentinel-2/Landsat-8 (20 m resolution) snow cover products showed the highest classification accuracy and were selected as the best choice for the use of snow cover area products in hyper-resolution hydrological modelling. For surface soil moisture, the NSIDC SMAP product at 1 km resolution yielded correlation coefficients of more than 0.6 at most stations and is recommended for hyper-resolution hydrological modelling. Finally, the MODIS-Terra (MOD16A2) evaporation product at 500 m resolution, showing correlation coefficients higher than 0.8 at most eddy covariance towers, is recommended for the assimilation of ET in models.
The assimilation of the proposed high-resolution products in models, individually or in combination, could improve the performance of hyper-resolution modelling. Still, integration workflows need to be developed to overcome difficulties related to scale mismatches and data gaps.
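The kind of per-location skill screening used above to rank EO products can be sketched as follows: compute the Pearson correlation of each product's time series against a benchmark and flag locations exceeding a skill threshold (0.5 here, as in the precipitation evaluation). The daily series at 50 locations are synthetic.

```python
import numpy as np

def correlation_skill(product, benchmark):
    """Pearson correlation per location; inputs are arrays of shape (time, location)."""
    p = product - product.mean(axis=0)
    b = benchmark - benchmark.mean(axis=0)
    return (p * b).sum(axis=0) / np.sqrt((p ** 2).sum(axis=0) * (b ** 2).sum(axis=0))

rng = np.random.default_rng(1)
bench = rng.normal(size=(365, 50))               # daily benchmark at 50 locations (synthetic)
prod = bench + 0.8 * rng.normal(size=(365, 50))  # noisy EO product (synthetic)

r = correlation_skill(prod, bench)
usable = r > 0.5                                 # locations meeting the skill threshold
```

A full evaluation would of course also consider bias, unbiased RMSE and categorical scores (as done for snow cover classification), but the correlation screen above captures the headline numbers quoted in the abstract.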
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Next Generation Hydrographic Mapping to Support Hyper-Resolution Hydrological Modelling Across Europe - The New EU-Hydro 2.0

Authors: Linda Moser, Bernhard Lehner, Achim Roth, Amelie Lindmayer, Guia Marie Mortel, Leena Julia Warmedinger, Stephanie Wegscheider, Günther Grill, Carolin Keller, Stefan Ram, Antje Wetzel, Maria Kampouraki, Jose Miguel Rubio Iglesias, Joanna Przystawska, Inés Ruiz, Veronica Sanz
Affiliations: GAF AG, Confluvio Consulting Inc., German Remote Sensing Data Center, German Aerospace Center (DLR), European Environment Agency (EEA)
EU-Hydro is a hydrographic reference dataset and part of the Copernicus Land Monitoring Service (CLMS) portfolio, implemented by the European Environment Agency (EEA). It offers detailed information on the geographical distribution and spatial characteristics of water resources throughout Europe, such as river networks, surface water bodies and watersheds. EU-Hydro was initially developed in 2012, with subsequent updates aimed at improving data accuracy and network topology. However, inconsistencies remained: while EU-Hydro can be used for mapping applications, its use for hydrological modelling remains limited. It is currently being updated to produce an improved and upgraded version of this unique European reference dataset. Highlighting the importance of water mapping and modelling, the new version of EU-Hydro (EU-Hydro 2.0) shall meet the requirements of a modern reference product within the pan-European hydrological domain, serving various use cases, such as supporting water quality and availability analysis; runoff modelling or flood modelling and prediction; and environmental assessments related to river connectivity or the evaluation of anthropogenic impacts. The coastline can serve as input for analytical purposes for various applications. Moreover, policy areas such as nature restoration and climate adaptation can be tackled, all with the goal of strengthening water resilience across Europe. EU-Hydro 2.0 will build upon a latest-generation Digital Elevation Model (DEM) to provide highly detailed and high-quality topographic input data: the Copernicus DEM, a pan-European DEM available at 10 m resolution, based on the TanDEM-X mission, supported by the Copernicus DEM at 30 m resolution for catchments upstream and downstream that flow into and out of the EEA38+UK area (EU27 + European Free Trade Association (EFTA) + Western Balkans + Turkey + UK).
The production of EU-Hydro 2.0 will involve the best possible ancillary data of hydrography, land cover, and infrastructure to allow seamless integration into the DEM editing process, as well as VHR satellite data for quality control and validation. The product suite consists of eight main layers: The three main raster products are the hydrologically conditioned DEM (Hydro-DEM), the Flow Direction (Hydro-DIR) and the Flow Accumulation (Hydro-ACC) maps, supported by additional raster layers for expert hydrological use. The five vector products are the river network (Hydro-NET), water bodies (Hydro-WBO), basins and sub-watersheds (Hydro-BAS), a product on artificial hydrographic structures (Hydro-ART) and a coastline (Hydro-COAST). First, the process of DEM editing and hydro-conditioning involves refining the DEM to ensure it accurately represents the water flow and natural hydrological features. This includes correcting artefacts that interfere with flow connectivity, such as bridges and dams, filling sinks in the DEM that are caused by inherent uncertainties, and adjusting elevation data to create a hydrologically consistent surface, which is critical for accurate water flow analysis and watershed delineation. In particular, novel methods are employed to remove noise and distortions from the DEM due to vegetation cover and urban build-up, to enforce flow paths using high-resolution cartographic layers of water surfaces, rivers and lakes, and to centre drainage lines in the middle of larger water bodies. In a next step, the raster layers (i.e., the flow direction and accumulation maps as well as further advanced hydrological layers) are derived from the hydrologically conditioned DEM surface, and subsequently, the vector layers Hydro-NET and Hydro-BAS are extracted from them. Ancillary data are needed to generate Hydro-WBO, Hydro-COAST and Hydro-ART, which can only be partially derived from the Hydro-DEM. 
To ensure a homogeneous approach across Europe regardless of geographical differences, and to find the best possible methodologies, the greatest expected challenges are a) the correct delineation of river networks and watershed boundaries in regions with mostly flat terrain, low topographic variation, and dense vegetation cover which affects DEM accuracy, such as large floodplains; b) the correct interpretation of flow topology in highly modified landscapes such as urban or irrigated areas where artificial canals can dominate over elevation-derived flow paths; and c) the correct detection and interpretation of special flow features such as inland depressions, underground flow connections in karst areas, or the complex structures of deltaic systems. Furthermore, inconsistencies among and within European countries related to the quality and completeness of available ancillary datasets used to improve the river and watershed delineations may introduce some regional differences in achievable accuracies. The derived raster layers will be a significant enhancement and novelty of EU-Hydro 2.0, alongside other key additions such as hierarchically nested watersheds, an updated coastline dataset, and detailed maps of water bodies and artificial structures, i.e., the vector products, all integrated within a topologically consistent river network. All layers will be interrelated, scalable and logically consistent. The approach aims at transparency and automation to the extent possible, supported by manual corrections where needed to increase quality and meet user requirements. This will ensure efficient and reproducible data processing and facilitate further updates of EU-Hydro in the future. The pan-European production is targeted to be finalized by summer 2026. The upgraded EU-Hydro 2.0 suite of products will constitute a harmonized, homogeneous and consistent reference dataset for Europe.
It will usher in a new era not only for water mapping in the framework of CLMS, further Copernicus Services and other fields, but will also target hydrological modelling, hydrologic risk assessments, climate change studies, water resource management, environmental protection strategies, and infrastructure planning, hence supporting the implementation of the EU Biodiversity Strategy, in particular the Nature Restoration Law. Its potentially crucial role for society is further emphasised by the rising frequency of natural disasters, such as droughts and floods, the latter being one of the use cases for modelling. With its high-quality, free, and openly accessible data, EU-Hydro 2.0 will help address these challenges, which urgently call for enhanced water resilience in Europe.
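The core raster derivation described above (flow direction and flow accumulation from a hydrologically conditioned DEM, the Hydro-DIR/Hydro-ACC idea) can be sketched with the classic D8 algorithm. This toy version on a tiny grid ignores projections, nodata, flat areas and pit handling, all of which the operational workflow must treat carefully.

```python
import numpy as np

# The eight neighbour offsets of the D8 scheme
NBRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_downstream(dem):
    """For each cell, the (i, j) of its steepest-descent neighbour, or None for outlets."""
    n, m = dem.shape
    down = {}
    for i in range(n):
        for j in range(m):
            best, target = 0.0, None
            for di, dj in NBRS:
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < m:
                    slope = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                    if slope > best:
                        best, target = slope, (ni, nj)
            down[(i, j)] = target
    return down

def flow_accumulation(dem):
    """Number of cells draining through each cell (the cell itself included)."""
    down = d8_downstream(dem)
    acc = np.ones_like(dem, dtype=int)
    # Visit cells from highest to lowest so every upstream cell is counted first
    for cell in sorted(down, key=lambda c: dem[c], reverse=True):
        if down[cell] is not None:
            acc[down[cell]] += acc[cell]
    return acc

dem = np.array([[3., 3., 3.],
                [3., 2., 3.],
                [3., 1., 0.]])   # toy conditioned DEM draining to the corner
acc = flow_accumulation(dem)     # acc[2, 2] collects the whole 3x3 catchment
```

The river network (Hydro-NET) then falls out of such an accumulation grid by thresholding, and watershed boundaries (Hydro-BAS) from tracing the flow directions upstream.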
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Ensemble irrigation modeling with AquaCrop v7.2 in NASA’s Land Information System, verified using in situ and satellite observations

Authors: Louise Busschaert, Prof. Gabriëlle J. M. De Lannoy, dr. Sujay V. Kumar, dr. Martha Anderson, Michel Bechtold
Affiliations: Department of Earth and Environmental Sciences, KU Leuven, Hydrological Science Laboratory, NASA Goddard Space Flight Center, Agricultural Research Service, Hydrology and Remote Sensing Laboratory, US Department of Agriculture
Irrigation in agriculture represents the largest component of anthropogenic water use, a demand expected to increase under a changing climate and growing population. Despite its critical importance, accurately estimating irrigation (at fine spatial and temporal scales) remains a significant challenge. Previous research has explored the use of satellite remote sensing, modeling, or a combination of both to estimate irrigation water use, from field to regional levels. While irrigation modeling can offer estimates at all times and locations, it relies on assumptions and parameters that typically vary in space and time, and that are ultimately farmers’ decisions. Furthermore, even with an optimally parametrized model, there is still a large uncertainty in the model input, such as the meteorological forcings. Therefore, this research explores the potential of ensemble modeling to better constrain the uncertainty of irrigation estimates. The ensemble is generated by perturbing (1) the meteorological forcings (radiation, precipitation), and (2) selected irrigation parameters, such as the irrigation threshold and the time interval between irrigation events. This study leverages the integration of AquaCrop v7.2, the latest version of the Food and Agriculture Organization (FAO) crop growth model, into NASA’s Land Information System (LIS). The integration of AquaCrop into the LIS framework makes it possible to perform ensemble simulations with AquaCrop over any domain and at any resolution. In this study, the model is run at field-scale resolution (< 1 km²) for selected regions with intense irrigation in Europe over the last decade. An ensemble verification is performed using field-level irrigation observations and satellite-based evapotranspiration retrievals. More specifically, it is evaluated whether the mean model estimates and their uncertainty envelop the reference data.
It is discussed how to best choose the spread in the various perturbed input variables and parameters to create a realistic ensemble of irrigation and evapotranspiration. The verification evaluates the robustness of the ensemble and is a first step towards a data assimilation system intended to estimate irrigation.
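The ensemble strategy described above can be sketched with a deliberately simple stand-in for the crop model: perturb the precipitation forcing, sample the irrigation-trigger parameter, and run a daily soil-water bucket that irrigates when soil moisture drops below a threshold. AquaCrop/LIS are far richer; every parameter value and the bucket model itself are illustrative assumptions.

```python
import numpy as np

def bucket_irrigation(precip, et, threshold, capacity=100.0, dose=20.0):
    """Daily soil-water bucket (mm); returns total irrigation applied over the season."""
    sm, irrigation = 0.6 * capacity, 0.0
    for p, e in zip(precip, et):
        sm = max(min(capacity, sm + p) - e, 0.0)   # rain in, evapotranspiration out
        if sm < threshold * capacity:              # farmer's trigger rule
            sm += dose
            irrigation += dose
    return irrigation

rng = np.random.default_rng(2)
precip = rng.gamma(0.5, 4.0, 180)     # mm/day over one growing season (synthetic)
et = np.full(180, 3.5)                # mm/day atmospheric demand (synthetic)

ensemble = []
for _ in range(50):
    p_pert = precip * rng.lognormal(0.0, 0.3, precip.size)  # forcing perturbation
    thr = rng.uniform(0.35, 0.55)                           # parameter perturbation
    ensemble.append(bucket_irrigation(p_pert, et, thr))
ensemble = np.array(ensemble)         # spread approximates irrigation uncertainty
```

Verification then amounts to checking whether the reference irrigation amounts fall within this ensemble envelope, which is the property the abstract's rank-histogram-style evaluation targets.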
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Session: A.08.12 Advances and applications of sea surface temperature and the Group for High Resolution Sea Surface Temperature

Sea surface temperature (SST) is a fundamental physical variable for understanding, quantifying and predicting complex interactions between the ocean and the atmosphere. SST measurements have been performed operationally from satellites since the early 1980s and benefit a wide spectrum of applications, including ocean, weather, climate and seasonal monitoring/forecasting, military defense operations, validation of atmospheric models, sea turtle tracking, evaluation of coral bleaching, tourism, and commercial fisheries management. The international science and operational activities are coordinated within the Group for High Resolution Sea Surface Temperature (GHRSST) and the CEOS SST Virtual Constellation (CEOS SST-VC) to provide daily global SST maps for operational systems, climate modeling, and scientific research. GHRSST promotes the development of new products and the application of satellites for monitoring SST by enabling SST data producers, users and scientists to collaborate within an agreed framework of best practices.

New satellites with a surface temperature observing capacity are currently being planned for launch and operation by ESA and EUMETSAT, such as CIMR, Sentinel-3C/D, and Sentinel-3 Next Generation Optical. In addition, new ultra-high-resolution missions are being planned, such as TRISHNA and LSTM. These satellite missions will continue to contribute to the provision of high-quality SST observations and open up opportunities for further applications. However, this will also require new developments and innovations in retrievals, validation, etc. It is therefore important that developments in high-resolution SST products are presented and coordinated with the ongoing international SST activities. Research and development continue to tackle problems such as instrument calibration, algorithm development, diurnal variability, derivation of high-quality skin and depth temperature, the relation with sea-ice surface temperature (IST) in the marginal ice zone, and areas of specific interest such as the high latitudes and coastal areas.

This session is dedicated to the presentation of applications and advances within SST and IST observations from satellites, including the calibration and validation of existing L2, L3 and L4 SST products in GHRSST Data Specification (GDS) and preparation activities for future missions. We also invite submissions for investigations that look into the harmonization and combination of products from multi-mission satellites.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Global satellite-based sea and sea-ice surface temperatures since 1982

Authors: Pia Englyst, Ioanna Karagali, Ida L. Olsen, Jacob L. Høyer, Guisella Gacitúa, Alex Hayward
Affiliations: Danish Meteorological Institute
Sea surface temperature (SST) and sea-ice surface temperature (IST) are both essential climate variables (ECVs), and long-term stable observational records of these (and other ECVs) are crucial to monitor, characterize and understand the state of the climate as well as its variability and changes. We present a 43-year climate data record (CDR, 1982-2024) of global combined sea and sea-ice surface temperature which has been produced from satellite observations (independent of in situ measurements) within the Copernicus Climate Change Service (C3S). Satellite observations from both infrared and microwave sensors have been blended using an optimal interpolation scheme to provide daily gap-free fields of combined SST and IST on a global 0.05° regular latitude-longitude grid. Efforts have been put into improving the surface temperature estimates over sea ice and the marginal ice zone, and into improving the uncertainties of the surface temperatures over both sea and sea ice. For consistency with existing L4 SST products, the global C3S SST/IST CDR also includes an estimate of the under-ice water temperature (UISST) in sea-ice covered regions, which is based on an improved methodology using the sea ice concentration and a monthly climatology of salinity. The derived surface temperatures have been validated against independent in situ observations from a wide range of sources, including ships, drifting/moored buoys and Argo floats over the open ocean, and flight campaigns, ice mass balance buoys and other drifting buoys/platforms over sea ice. The global CDR performs similarly to existing ESA Climate Change Initiative (CCI) SST datasets over the open ocean, and similarly to an earlier (Arctic-only) version of this dataset produced within the Copernicus Marine Service (CMS) over sea ice. The combination of SST and IST provides a much better and more consistent indicator of climate change and surface temperature trends in the high latitudes, where the coverage of sea ice changes rapidly.
The global combined sea and sea-ice surface temperature has risen by about 0.5°C over the period 1982-2024, which is ~25-30% more than observed in existing global L4 SST products considering the global ocean (using the under-ice SSTs) and the region between 60°S and 60°N. This highlights the importance of the combined sea and sea-ice surface temperature indicator for monitoring the actual surface temperature trends in the high latitudes.
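The optimal interpolation blending used to produce such gap-free L4 fields can be illustrated in one dimension: a background field is updated with sparse observations using Gaussian error covariances. The 1-D transect, length scale and variances below are illustrative assumptions, not the operational configuration.

```python
import numpy as np

def oi_analysis(grid, background, obs_loc, obs_val, L=300.0, sig_b=0.5, sig_o=0.2):
    """1-D optimal interpolation: analysis = background + K (obs - background at obs)."""
    # Gaussian background-error covariances between obs points and grid/obs points
    B_oo = sig_b**2 * np.exp(-((obs_loc[:, None] - obs_loc[None, :]) / L) ** 2)
    B_go = sig_b**2 * np.exp(-((grid[:, None] - obs_loc[None, :]) / L) ** 2)
    R = sig_o**2 * np.eye(obs_loc.size)          # observation-error covariance
    innov = obs_val - np.interp(obs_loc, grid, background)
    K = B_go @ np.linalg.inv(B_oo + R)           # gain matrix
    return background + K @ innov

grid = np.linspace(0, 2000, 401)                 # km along a transect
background = np.full(grid.size, 2.0)             # first-guess SST, deg C (synthetic)
obs_loc = np.array([400.0, 1200.0])              # two satellite retrievals
obs_val = np.array([3.0, 1.2])
analysis = oi_analysis(grid, background, obs_loc, obs_val)
```

Near each observation the analysis is pulled most of the way toward it (the gain depends on the background/observation error ratio), and it relaxes back to the background beyond the correlation length scale, which is exactly how gaps between swaths are filled.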
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Have we been underestimating midlatitude air-sea interaction?

Authors: Cristina González Haro, Javier García-Serrano, Aina García-Espriu, Antonio Turiel
Affiliations: Institut Ciències Del Mar (ICM-CSIC), Institut Català per la recerca i governança del mar (ICATMAR), Group of Meteorology (METEO-UB), Universitat de Barcelona
Some traditional, climate-oriented sea surface temperature (SST) observational datasets do not generally include satellite data and are typically based on in-situ observations at coarser spatial resolution (1 to 2 degrees), prominent examples being the Extended Reconstructed SST from NOAA (ERSST) and the Hadley Centre SST, version 3 (HadSST3). Other datasets combine both in-situ and satellite observations, such as the Hadley Centre Sea Ice and Sea Surface Temperature dataset (HadISST). The main objective of this work is twofold. First, we globally characterize and compare SST climatology and variability at grid-point level, considering seasonal averages (DJF, MAM, JJA, SON), between two standard, climate-oriented datasets, HadISST (1° resolution) and ERSST v5 (2° resolution), and the GHRSST product developed by the European Space Agency Climate Change Initiative (CCI) (0.05° resolution). Second, we assess the impact of temporal and spatial resolution on such SST characterization as well as on air-sea interaction, estimated by correlating SST with turbulent heat flux (THF; latent plus sensible). The study spans 1982-2016 (35 years), corresponding to the record of the satellite product (CCI). Our results show that the coarser datasets (ERSST-HadISST) overall have a warmer mean state, except in the more dynamically active oceanic regions, such as the western boundary currents, where they yield a colder SST climatology. More interestingly, the high-resolution dataset (CCI) markedly displays larger SST variability in these dynamically active oceanic regions, which is consistent along the seasonal cycle. Likewise, we also find higher correlations between SST and THF over the western boundary currents in CCI as compared to ERSST-HadISST, indicating a stronger ocean-atmosphere coupling. Our results suggest that the high temporal and spatial resolution provided by remote sensing is key to better resolving air-sea interaction.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Exceptional Global Sea Surface Warming Driven by Earth’s Energy Imbalance

Authors: Owen Embury, Christopher Merchant, Richard Allan
Affiliations: University Of Reading, National Centre for Earth Observation
Sea surface temperature (SST) is a fundamental parameter within the Earth’s climate system, making global mean SST (GMSST) a key diagnostic for analyzing current climate change. The change in GMSST is not steady but shows both multi-decadal changes in warming trends and year-to-year fluctuations reflecting chaotic internal variability, such as the El Niño Southern Oscillation (ENSO), and external forcing, including solar, volcanic, and anthropogenic effects. During the record-breaking ocean surface temperatures of 2023 and 2024, the GMSST exceeded previously observed seasonal maxima for approximately 15 months, with a maximum margin of 0.3 K. As with previous record-breaking periods (1997/98 and 2015/16), this was triggered by a strong El Niño episode. However, the degree of warming observed in the 2023/24 event cannot be explained by ENSO variability alone: the 2023/24 event was the weakest of the three El Niño episodes, but the strongest in terms of record-breaking GMSST amplitude and duration. We present an assessment of the last 40 years of GMSST based on the new SST climate data record from the European Space Agency Climate Change Initiative and a statistical model using known drivers of variability and change, showing that the increase in GMSST is accelerating and that the long-term trend in SST cannot be assumed linear. The accelerating GMSST trend is physically linked to the growth in the Earth Energy Imbalance (EEI), allowing changes in GMSST to be predicted for future scenarios of EEI. These indicate that GMSST will continue to increase faster than expected from a linear extrapolation of the previous four decades. Even under a "mitigated" EEI scenario, the GMSST is likely to increase by 0.6 K over the next two decades, compared to 0.26 K from the linear fit.
Policy makers and wider society should be aware that the rate of global warming over recent decades is a poor guide to the faster change that is likely over the decades to come, underscoring the urgency of deep reductions in fossil-fuel burning.
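The central point above, that a linear fit to an accelerating record under-predicts future change, can be demonstrated with a few lines. The anomaly series below (a quadratic trend plus noise) is synthetic and illustrative, not the CCI record, and the fitted numbers are not the abstract's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1985, 2025)
t = years - 1985
# Synthetic GMSST anomaly (deg C): linear trend + small acceleration + noise
anom = 0.015 * t + 0.0002 * t**2 + rng.normal(0, 0.03, years.size)

lin = np.polynomial.Polynomial.fit(years, anom, 1)    # linear fit to the record
quad = np.polynomial.Polynomial.fit(years, anom, 2)   # fit allowing acceleration

# Projected warming over the next two decades under each fit
d_lin = lin(2045) - lin(2025)
d_quad = quad(2045) - quad(2025)
```

Because the quadratic term keeps growing outside the fitting window, `d_quad` exceeds `d_lin` by a wide margin, which is the qualitative behaviour the abstract reports for the EEI-driven projection versus the linear extrapolation.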
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Development of Retrieval Algorithms for Level-2 Sea Surface and Lake Surface Water Temperature for CIMR

Authors: Emy Alerskans, Jacob Høyer, Pia Englyst, Ida Lundtorp Olsen
Affiliations: Danish Meteorological Institute
Observations of sea surface temperature (SST) from passive microwave (PMW) sensors are important complements to traditional infrared (IR) observations. However, the resolution of current microwave imagers is not sufficient to capture sub-mesoscale to mesoscale variability. Furthermore, they suffer from coastal and sea-ice contamination. The Copernicus Imaging Microwave Radiometer (CIMR) is currently being prepared by the European Space Agency (ESA) as part of the Copernicus Expansion programme for the European Union, with an expected launch in 2029. CIMR is designed to provide high-resolution and high-accuracy PMW measurements of a selected range of geophysical variables. SST is one of the key parameters for monitoring and understanding climate change. Temperature changes due to climate change are most pronounced in the Polar regions, which is why it is essential to have accurate estimates of SST in these regions. SST is therefore one of the main parameters of the CIMR mission. Furthermore, CIMR will also provide PMW measurements of Lake Surface Water Temperature (LSWT). LSWT is an important indicator of lake hydrology and biogeochemistry and can be used as an indicator of how climate change affects lakes. Furthermore, variations in LSWT can impact the weather and climate of the surrounding areas. However, due to the resolution of current microwave imagers, LSWT products have not been developed from PMW measurements before. The enhanced resolution of CIMR will therefore make it possible to produce an LSWT product for large lakes. Currently, the retrieval algorithms for CIMR Level-2 SST and LSWT are being developed. The CIMR SST retrieval algorithm is a 2-step statistically based algorithm with so-called localised algorithms, which make it possible to take into account non-linear relationships between the brightness temperatures (TBs) and other variables, such as wind speed.
The LSWT retrieval algorithm is based on the SST algorithm and is tuned to lake properties using a matchup dataset with reference lake temperatures, TBs and auxiliary data, such as numerical weather prediction (NWP) data. In the first phase, the retrieval algorithms are developed using AMSR2 TBs and are thereafter fine-tuned using simulated CIMR data. Validation is performed using both AMSR2 TBs from matchup datasets and simulated CIMR TBs, making use of two kinds of demonstration reference scenario scenes:
- Artificial test scenes, consisting of typical brightness temperatures for different surface types arranged in artificial patterns corresponding to real-world scenarios, such as ocean-land and ocean-sea ice transitions; and
- Realistic test scenes, consisting of simulated CIMR brightness temperatures.
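The two-step, localised-regression idea behind the retrieval can be illustrated with a toy example. Everything below is a hypothetical sketch, not the operational CIMR algorithm: the forward model, channel coefficients and bin width are invented purely to show how locally fitted regressions capture a non-linear TB-SST relationship that a single global regression misses.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy matchup dataset: two brightness-temperature (TB) channels related
# to SST through a wind-dependent, mildly non-linear forward model.
n = 5000
sst = rng.uniform(271.0, 303.0, n)              # "true" SST [K]
wind = rng.uniform(0.0, 15.0, n)                # wind speed [m/s]
tb1 = 0.55 * sst + 0.30 * wind + rng.normal(0.0, 0.3, n)
tb2 = (0.40 * sst - 0.10 * wind
       + 0.002 * (sst - 285.0) ** 2 + rng.normal(0.0, 0.3, n))
X = np.column_stack([np.ones(n), tb1, tb2])

# Step 1: a single global linear regression gives a first-guess SST.
coef_global, *_ = np.linalg.lstsq(X, sst, rcond=None)
first_guess = X @ coef_global

# Step 2: "localised" algorithms -- separate regression coefficients
# fitted within bins of the first-guess SST, so non-linear TB-SST
# relationships are approximated piecewise.
bin_edges = np.arange(270.0, 305.0, 5.0)
idx = np.digitize(first_guess, bin_edges)
retrieved = np.empty(n)
for b in np.unique(idx):
    sel = idx == b
    coef_local, *_ = np.linalg.lstsq(X[sel], sst[sel], rcond=None)
    retrieved[sel] = X[sel] @ coef_local

rmse_global = np.sqrt(np.mean((first_guess - sst) ** 2))
rmse_local = np.sqrt(np.mean((retrieved - sst) ** 2))
print(f"global RMSE {rmse_global:.3f} K, localised RMSE {rmse_local:.3f} K")
```

Because each local fit minimises the residual within its own bin, the localised step can only improve on the global regression wherever the TB-SST relationship is non-linear.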

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Using information from microwave imager radiances to improve the ocean analysis in a coupled atmosphere-ocean model

Authors: Tracy Scanlon, Niels Bormann, Alan Geer, Philip Browne, Tony McNally
Affiliations: ECMWF
Microwave imagers play a key role in Numerical Weather Prediction (NWP) systems, providing information about atmospheric humidity, temperature, cloud and precipitation as well as surface information such as skin temperature and sea ice. Recently, low-frequency microwave channels (6.9 and 10.65 GHz) from AMSR2 and GMI have been included under the all-sky (clear and cloudy) route into the ECMWF NWP system with a view to exploiting their surface information content over open oceans. Knowledge of the ocean surface skin temperature is vital to the accurate use of satellite radiances in weather forecasting, and it helps improve the quality of forecasts for both the ocean and atmosphere. The RadSST method utilises skin temperature increments generated in the atmospheric 4D-Var using a sink-variable approach. These increments are then passed to the ocean component of the coupled system to update the ocean state via NEMOVAR. The skin temperature increments generated in the atmospheric 4D-Var are shown to address the time delay between the input SST retrieval products used to describe the ocean in the uncoupled system and the time of the microwave imager observations, particularly in the region of tropical instability waves. When assimilated within the coupled system, these increments are demonstrated to improve the fit of the ocean background to in-situ observations from Argo floats. Building on the improvements seen in the coupled system, work is ongoing to further understand the relationship between the bulk (foundation) SST used as an input to the NWP system and the skin temperature seen by the microwave imagers. This is explored using a machine learning approach, which is expected to provide information allowing the current skin temperature parameterisation to be updated so that it is more applicable to the microwave imager channels used.
The framework to use MW radiances to inform the ocean analysis will also be expanded to other upcoming sensors, such as AMSR3 and the future CIMR instrument. The latter activities will be performed under the new Data Assimilation and Numerical Testing for Copernicus eXpansion missions (DANTEX) project.

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Monitoring the sea surface temperature from IASI for climate application

Authors: Virginie Capelle, Jean-Michel Hartmann, Cyril
Affiliations: Ecole Polytechnique/Laboratoire de Météorologie Dynamique/IPSL
The sea surface (skin) temperature (SST) is a key parameter in climate science, meteorology and oceanography. Being at the ocean-atmosphere interface, it plays a crucial role in the variability and regulation of climate, and its knowledge is essential for understanding heat, momentum and gas exchange processes between the ocean and the atmosphere. As such, it is recognized as one of the essential variables for which accurate and global measurements are needed for the understanding, monitoring and forecasting of climate evolution, as well as for numerical weather prediction. Within this framework, satellite remote sensing, by providing daily and global observations over long time series, offers good opportunities. In particular, the excellent calibration and stability of the IASI instrument and the planned long observation time series provided by the suite of three satellites Metop-A, -B and -C are fully consistent with the quality requirements. We analyze here 18 years of the SST time series retrieved from IASI on board the three Metop satellites using a fully physically based algorithm. This dataset is characterized by: (i) total independence from in-situ measurements or models; (ii) high accuracy, assessed by systematic comparison with in-situ depth-temperature measurements, with a mean difference below 0.05 K and a robust standard deviation of 0.25 K; (iii) excellent stability of the time series, with a trend of the bias relative to in-situ measurements below 0.05 K/decade over the 2007-2024 period; and (iv) excellent consistency between the three generations of IASI on board Metop-A, -B and -C, where monthly comparisons over their overlapping periods give a mean SST difference below 0.02 K and a standard deviation of 0.3 K. Altogether, these results satisfy the prerequisites required to consider an SST time series a climate data record.
This opens promising perspectives by demonstrating the possibility of providing an accurate and stable SST time series from IASI over the planned 20 years of the Metop suite, which will be followed by two more decades of the IASI-New Generation missions.
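The validation statistics quoted above (mean difference, robust standard deviation, bias trend in K/decade) can be reproduced on synthetic matchups as follows. This is a sketch under the common convention that the "robust standard deviation" is the scaled median absolute deviation (1.4826 × MAD); the abstract does not state which robust estimator was used, so that choice is an assumption, and all numbers are synthetic.

```python
import numpy as np

def matchup_stats(sst_sat, sst_insitu):
    """Mean difference and robust spread of satellite-minus-in-situ SST.

    Robust standard deviation is taken as 1.4826 * MAD (an assumption:
    one common convention for SST validation statistics)."""
    diff = np.asarray(sst_sat, float) - np.asarray(sst_insitu, float)
    bias = diff.mean()
    robust_std = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    return bias, robust_std

def bias_trend_per_decade(years, annual_bias):
    """Linear fit of annual bias vs. time; slope converted to K/decade."""
    slope = np.polyfit(years, annual_bias, 1)[0]
    return 10.0 * slope

# Synthetic matchups: a 0.03 K bias and 0.25 K Gaussian scatter.
rng = np.random.default_rng(0)
truth = rng.uniform(275.0, 300.0, 10000)
sat = truth + 0.03 + rng.normal(0.0, 0.25, truth.size)
bias, rstd = matchup_stats(sat, truth)
print(f"bias = {bias:+.3f} K, robust std = {rstd:.3f} K")
```

The MAD-based spread is preferred over the plain standard deviation in such comparisons because a small fraction of bad matchups (cloud contamination, erroneous buoys) would otherwise dominate the statistic.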

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Session: A.05.07 Sea level change from global to coastal scales and causes

Sea level changes at global and regional scales have been routinely measured by high-precision satellite altimetry for more than three decades, leading to a broad variety of climate-related applications. Recently, reprocessed altimetry data in the world's coastal zones have also provided novel information on decadal sea level variations close to the coast, complementing the existing tide gauge network. Since the early 2010s, the ESA Climate Change Initiative programme has played a major role in improving the altimetry-based sea level data sets at all spatial scales, while also supporting sea level-related cross-ECV (Essential Climate Variable) projects dedicated to assessing the closure of the sea level budget at global and regional scales. Despite major progress, several knowledge gaps remain, including for example:
• Why is the global sea level budget not closed since around 2017?
• Why is the regional sea level budget not closed in some oceanic regions?
• How can altimetry-based coastal sea level products be further improved?
• How can we enhance the spatial coverage of these products, which are currently limited to satellite tracks?
• To what extent do small-scale sea level processes impact sea level change in coastal areas?
• Can we provide realistic uncertainties on sea level products at all spatial scales?
• What is the exact timing of the emergence of anthropogenic forcing in observed sea level trends at regional and local scale?
In this session, we encourage submissions dedicated to improving multi-mission altimetry products and associated uncertainties, as well as assessing sea level budget closure at all spatio-temporal scales. Submissions providing new insights into processes acting on sea level at different spatial and temporal scales are also welcome. In addition to altimetry-based studies, submissions using other space-based and in-situ data, as well as modelling studies, are highly encouraged.

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Why Are Interannual Sea Level Variations at the U.S. Northeast and Southeast Coasts Uncorrelated?

Authors: Dr. Ou Wang, Tong Lee, Dr. Thomas Frederikse, Dr. Rui Ponte, Dr. Ian Fenty, Dr. Ichiro Fukumori, Dr. Ben Hamlington
Affiliations: NASA Jet Propulsion Laboratory, University of California Los Angeles, Atmospheric and Environmental Research
The magnitude of interannual sea-level anomaly (SLA) along the East Coast of the United States (U.S.) can be comparable to that of global mean sea-level rise over a few decades. These interannual SLA variations contribute to more frequent nuisance floods that affect coastal communities. Altimetry measurements suggest that interannual SLAs at the U.S. East Coast are highly correlated along the northeast or southeast sectors (separated by Cape Hatteras) but are uncorrelated between the two sectors. These features are reproduced by the Estimating the Circulation and Climate of the Ocean (ECCO) ocean state estimate, which is constrained by altimetry data. Here we use the ECCO state estimate and sensitivity analysis to pinpoint the atmospheric forcing types and forcing regions that make interannual SLAs at the Northeast and Southeast U.S. Coasts correlated or uncorrelated. We find that nearshore winds north of Cape Hatteras cause interannual SLAs in the northeast and southeast sectors to co-vary, because these winds generate coastally trapped waves that propagate down the U.S. East Coast on timescales of weeks. Offshore winds are the major factor causing uncorrelated interannual SLAs between the Northeast and Southeast U.S. Coasts because (1) offshore winds affect SLA in the southeast sector much more strongly than SLA in the northeast sector, and (2) open-ocean baroclinic Rossby waves generated by offshore winds take months to years to reach the U.S. East Coast. Overall, buoyancy forcing is much less important than winds in driving interannual SLAs at the Northeast and Southeast U.S. Coasts, although surface heat flux can induce marine heatwaves that cause SLAs as large as wind-generated SLAs in the northeast sector. The insight gained from our causal analysis provides information helpful for developing machine-learning-based prediction models for interannual sea-level variation along the U.S. East Coast.

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Reconciling Satellite-based Measurements of the Ice Sheets’ Contribution to Sea Level Rise – Update from the Ice Sheet Mass Balance Intercomparison Exercise (IMBIE)

Authors: Inès Otosaka, Andrew Shepherd
Affiliations: Centre For Polar Observation And Modelling
The Greenland and Antarctic Ice Sheets remain the most uncertain contributors to future sea level rise; they are projected to contribute between 0.08 to 0.59 m and between 0.02 to 0.56 m to global mean sea level by 2100 according to the IPCC Sixth Assessment Report (AR6), respectively. Producing an observational record of ice sheet mass changes is thus critical for constraining projections of future sea level rise. The Ice Sheet Mass Balance Inter-Comparison Exercise (IMBIE) led by ESA and NASA aims at reconciling estimates of ice sheet mass balance from satellite altimetry, gravimetry, and the mass budget method through community efforts. Building on the success of the three previous phases of IMBIE – during which satellite-based estimates of ice sheet mass balance were reconciled within their respective uncertainties and which showed a 6-fold increase in the rate of mass loss during the satellite era – IMBIE has now entered its fourth phase. The objectives of this new phase of IMBIE, supported by ESA CCI, are to (i) provide annual assessments of ice sheet mass balance, (ii) partition mass changes into dynamics and surface mass balance processes, (iii) produce regional assessments and (iv) examine the remaining biases between the three geodetic techniques, all in order to provide more robust and regular estimates of ice sheet mass balance and their contribution to global mean sea level rise. In this paper, we report on the recent progress of IMBIE-4. We present an updated time-series of mass changes of Greenland and Antarctica from the 1970s until the end of 2023. We examine the drivers of Greenland and Antarctica mass trends, showing that while ice dynamics remain the main driver of Antarctica’s mass loss, in Greenland, ice losses from reduced surface mass balance have exceeded ice dynamics losses for the first time during the last decade.

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Extrapolation of the Satellite Altimeter Record to Understand Regional Variations in Future Sea Level Change

Authors: Robert Steven Nerem, Ashley Bellas-Manley, Benjamin Hamlington
Affiliations: University Of Colorado, Jet Propulsion Laboratory
We perform a quadratic extrapolation of sea level on a regional scale based on satellite altimeter observations spanning 1993-2022, including corrections for internal variability and a rigorous assessment of the uncertainties associated with serially correlated formal errors, GIA, and satellite altimeter measurement errors. The uncertainty ranges of these regional extrapolations are relatively narrow, and we show significant overlap with the regional projections from the most recent IPCC Sixth Assessment Report. The extrapolations are completely data-driven and model-independent, and show the trajectory of sea level change over the last 30 years extrapolated into the future. These extrapolations suggest that sea level rise in 2050 relative to 2020 will be 24 ± 10 cm in the North Indian Ocean, 22 ± 5 cm in the mid-North Atlantic, 22 ± 4 cm in the North Pacific, 16 ± 4 cm in the South Atlantic, 12 ± 4 cm in the South Pacific, 11 ± 4 cm in the Tropical Pacific, and 10 ± 4 cm in the Antarctic Circumpolar Ocean. The regional results may differ from each other by more than 100% and in most cases differ significantly from the extrapolated global mean sea level rise of 17 ± 4 cm. The results highlight the importance of considering regional variations in estimates of future sea level and provide an additional line of evidence when considering how representative the range of climate model projections is of near-term sea level rise.
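A minimal sketch of the core idea, fitting a quadratic (trend plus acceleration) to an altimeter record and extrapolating it forward, is shown below with a synthetic regional time series; the corrections for internal variability and the full error treatment described above are omitted.

```python
import numpy as np

# Synthetic regional sea-level record, annual means 1993-2022 [mm]:
# 3 mm/yr trend, 0.08 mm/yr^2 acceleration, 4 mm noise (all invented).
years = np.arange(1993, 2023)
t = (years - 1993).astype(float)
rng = np.random.default_rng(1)
sla = 3.0 * t + 0.5 * 0.08 * t**2 + rng.normal(0.0, 4.0, t.size)

# Quadratic fit: coefficients give trend and acceleration at t = 0.
c2, c1, c0 = np.polyfit(t, sla, 2)
trend, accel = c1, 2.0 * c2
print(f"trend = {trend:.2f} mm/yr, acceleration = {accel:.3f} mm/yr^2")

def extrapolate(year):
    tt = year - 1993.0
    return c2 * tt**2 + c1 * tt + c0

# Extrapolated rise between 2020 and 2050, in cm.
rise_2050 = (extrapolate(2050) - extrapolate(2020)) / 10.0
print(f"extrapolated 2020-2050 rise: {rise_2050:.1f} cm")
```

Even in this toy case the acceleration term is poorly constrained by a 30-year record with realistic noise, which is why the serially correlated error treatment mentioned in the abstract matters for the quoted uncertainty ranges.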

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Level-2-Based Gridded GRACE Ocean Mass Change Estimates and Their Uncertainty Characterisation to Assess the Closure of the Sea Level Budget

Authors: Thorben Döhne, Martin Horwath, Marie Bouih
Affiliations: TU Dresden University of Technology, Magellium
The assessment of the global sea-level budget reveals the extent to which all significant causes of sea-level variability are identified and whether their combined effects correspond to observed total sea-level changes. It also helps validate the underlying measurement systems. The misclosure of the global sea level budget in recent years motivates the need for a deeper understanding of the individual measurement systems. Assessing the sea level budget on regional scales can also help focus investigations on certain regions, and inversion methods applied to the sea level budget components can further help identify time periods in which individual components differ from the others. A robust and reliable uncertainty characterisation is crucial for this assessment. GRACE-derived mass changes provide a unique observation of the ocean-mass component but are afflicted by signal leakage stemming from the limited spatial resolution and the required filtering. Signal leakage is particularly pronounced along the ocean margins and at smaller scales. Mass changes derived for the global ocean traditionally counteract the leakage of land signals into the ocean by employing a buffer zone along the ocean margins, which is excluded from the subsequent integration. For gridded mass changes that include the regions along the ocean margins, more sophisticated analysis methods have been developed. Mascon solutions provide a framework for such gridded mass change solutions and have become convenient and popular for many users. However, design choices inherent to these traditionally Level-1-based solutions are difficult for users to assess, or to adapt, with regard to specific applications such as regional ocean-mass changes. Mascon solutions based on Level-2 gravity field solutions allow more access to, and control of, design choices by a wider range of scientists.
We derive such gridded mass changes based on GRACE Level-2 spherical harmonics by extending the method of tailored sensitivity kernels from regional mass changes to globally distributed mascons. During the analysis, different design choices are implemented to realise a compromise between propagated GRACE Level-2 solution errors and leakage errors. We present the impact of two design choices on ocean mass change and signal leakage across the land-ocean margin: (a) the amendment of a-priori mascon patterns by their sea-level fingerprints, and (b) the choice of signal variances and covariances. The resulting sensitivity kernels, which describe the weighting functions used to integrate the input data, allow a direct interpretation of the mass integration step of individual mascons. We further use the resulting sensitivity kernels to assess time-dependent error variances and covariances of integrated ocean-mass changes. The considered temporal correlations range from uncorrelated monthly noise to fully correlated long-term trend errors and include the following error sources: (a) noise propagated from the GRACE Level-2 solutions; (b) errors propagated from low-degree harmonics; (c) leakage errors; and (d) errors of geophysical corrections. We present our gridded mass change solutions, the resulting global and regional ocean mass changes, and our uncertainty assessment in the form of error variance-covariance matrices. We also highlight preliminary results of an assessment of the closure of the sea-level budget within ESA’s Sea Level Budget Closure CCI+ project.
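The role of a sensitivity kernel as a weighting function, and the propagation of an input error covariance to the variance of an integrated ocean-mass change, can be sketched in a few lines. The kernel, field and covariance below are toy constructions (not derived from GRACE Level-2 data); the covariance combines an uncorrelated monthly-noise term with a fully correlated term, mirroring the two ends of the temporal-correlation range discussed above.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200                        # toy number of mascons / grid cells
k = rng.uniform(0.0, 1.0, n)   # sensitivity kernel = integration weights
k /= k.sum()                   # normalise: weights sum to one

x = rng.normal(5.0, 1.0, n)    # gridded mass-change field [Gt] (toy)

# Toy error covariance: uncorrelated noise term plus a fully
# correlated (trend-error-like) term.
sigma_noise, sigma_corr = 0.8, 0.3
C = sigma_noise**2 * np.eye(n) + sigma_corr**2 * np.ones((n, n))

mass = k @ x            # integrated ocean-mass change
var = k @ C @ k         # propagated error variance
print(f"mass = {mass:.2f} Gt, sigma = {np.sqrt(var):.3f} Gt")
```

Note how the fully correlated component contributes its full variance regardless of the number of cells (since the kernel sums to one), while the uncorrelated part averages down; this is why correlated error sources dominate the uncertainty of integrated mass changes.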

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Observed regional sea level trends in the tropical Pacific Ocean over 2014-2023: causes and associated mechanisms

Authors: William Llovel, Antoine Hochet
Affiliations: LOPS/CNRS
High-precision satellite altimetry data have revolutionized our understanding of regional sea level changes on seasonal to decadal timescales. For the first time, satellite altimetry reveals large-scale spatial patterns in regional sea level trends. Some regions (e.g., the tropical Pacific Ocean) have experienced a linear rise three times as large as the global mean sea level trend. Steric sea level change has been identified as one of the major contributors to the regional variability of sea level trends observed by satellite altimetry over the past decades. The temperature contribution to sea level (known as thermosteric sea level) has generally been found to be more important than the salinity effect (i.e., halosteric sea level). The salinity contribution to regional sea level trends has been less studied than the temperature contribution, because the halosteric contribution to global mean sea level is close to zero and because of the lack of historical salinity measurements. In this study, we investigate regional sea level trends inferred from satellite altimetry data and from Argo floats since 2005 to assess their temperature and salinity contributions. We focus our analysis on large-scale halosteric sea level trends in the tropical oceans, which we link to the surface atmospheric forcing. Over 2014-2023, we find a particularly large halosteric sea level decrease in the tropical Pacific Ocean that is associated with a salinification of the upper 200 m. We find a local decrease in precipitation, and we also highlight an increase in trade winds in the central tropical Pacific Ocean. We hypothesize that the positive sea surface salinity anomalies responsible for the halosteric sea level trends are advected by a strengthened upper-ocean circulation induced by the increase in surface wind stress.
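The halosteric contribution can be sketched as the vertical integral of the haline contraction coefficient times the salinity anomaly. The profile and the constant value of the coefficient below are illustrative assumptions (in reality the coefficient varies with temperature, salinity and pressure):

```python
import numpy as np

# eta_halo = -integral( beta * dS ) dz over the upper ocean, where beta
# is the haline contraction coefficient. A constant beta and the decay
# profile below are illustrative assumptions.
beta = 7.6e-4                       # haline contraction [1/(g/kg)]
z = np.linspace(0.0, 200.0, 101)    # depth levels [m], upper 200 m
ds = 0.2 * np.exp(-z / 80.0)        # salinity anomaly [g/kg], salinification

# Trapezoidal integration of beta * dS over depth.
integral = np.sum(0.5 * (ds[1:] + ds[:-1]) * np.diff(z))
eta_halo = -beta * integral         # halosteric anomaly [m]
print(f"halosteric sea-level anomaly: {eta_halo * 1000.0:.2f} mm")
```

The negative sign captures the mechanism in the abstract: a salinification of the upper 200 m increases density and lowers halosteric sea level.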

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Sea Level Rise from Altimetry and Aspects for Future Missions

Authors: Remko Scharroo, Estelle Obligis, Bojan Bojkov, Julia Figa, Alejandro Egido, Craig Donlon
Affiliations: EUMETSAT, ESA/ESTEC
The era of precise satellite altimetry is generally regarded as starting with the launch of TOPEX/Poseidon in 1992. Since then, a continuous series of missions, Jason-1, -2 and -3 and Sentinel-6 Michael Freilich, have been monitoring global and regional mean sea level from what is called the "altimetry reference orbit" at 1336 km altitude and 66° inclination. Many consecutive improvements in satellite instrumentation and design, as well as in the precise orbit determination systems on board, have contributed to increasing accuracy and precision. By flying the successive missions in tandem with a separation of 30 seconds to 30 minutes, it was possible to cross-calibrate those missions to within a few millimetres or better, thus ensuring the long-term stability of the now 32-year record. Other external factors have also contributed to the continued success of the altimetric sea level record: the ever-increasing precision and accuracy of atmospheric and other geophysical modelling, and the availability and maintenance of a number of tide gauges against which any drift of the altimetric sea level measurements can become evident. But the most overlooked sources of critical validation of the reference missions, as well as contributors to the long-term record, have been nine other missions that have operated over the same period from much lower altitudes (ERS-1, ERS-2, Envisat, GFO, CryoSat-2, SARAL/AltiKa, Sentinel-3A and -3B, and SWOT), generally in low-Earth, high-inclination sun-synchronous orbits, conditions that were widely thought to prohibit an accurate retrieval of global mean sea level. Over the course of time, technologies, background models and orbit determination have evolved. For example, on the reference orbit, Sentinel-6 makes the transition to High Resolution altimetry, which is intrinsically more precise, while also providing continuity Low Resolution measurements.
Sentinel-3 also introduced global High Resolution altimetry on the polar orbit, deviating slightly from the previous polar orbits of the ERS/Envisat/SARAL heritage. On top of these measurement evolutions, there have also been changes in processing, transitioning from the traditional Low Resolution MLE4 retracking to numerical retracking, and various ways of processing the High Resolution altimetry. That poses the following questions regarding error sources and their evaluation in the establishment of our altimetric sea level record:
- How well can we currently determine sea level rise and its acceleration?
- Is there a distinction between the reference and polar altimeters?
- How do various measurement techniques and processing affect the sea level measurements?
- How relevant is the selection of the orbit for the continuation of the sea level record?
- What does this all mean for the design of the Next Generation altimeter missions?
- How does this inform us about the error budget still lurking in the sea level record?
This presentation makes a statistical analysis of the sea level record that can now be established from various combinations of the 14 altimeters mentioned here. It highlights the most compelling results of the altimetric sea level rise measurements, summarises some of the essentials of their success, and discusses the way forward to maintain this record for the next decades with the Sentinel-3 Next Generation Topography Mission and Sentinel-6 Next Generation.

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Session: A.02.03 EO for Agriculture Under Pressure - PART 6

The human impact on the biosphere is steadily increasing, and one of the main human activities contributing to this is agriculture. Agricultural crops, managed grasslands and livestock are all part of the biosphere, and our understanding of their dynamics and of their impacts on other parts of the biosphere, as well as on the wider environment and on the climate, is insufficient.
On the other hand, today’s Agriculture is Under Pressure to produce more food in order to meet the needs of a growing population with changing diets, and this despite a changing climate with more extreme weather. It is required to make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and its negative impact on the environment, and to deliver accessible, affordable and healthy food.
Proposals are welcome from activities aiming at increasing our understanding of agricultural dynamics, at developing and implementing solutions to the above-mentioned challenges of agriculture, or at supporting the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross-site research and benchmarking, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM), are welcome.

The session will hence cover topics such as:
- Impact on climate and environment
- Crop stressors and climate adaptation
- Food security and Sustainable Agricultural Systems
- New technologies and infrastructure

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: Quantifying the Impact of the 2022 Mega-Heatwave on Indian Wheat Yields Using Satellite Sun-Induced Chlorophyll Fluorescence and Environmental Data

Authors: Ben Mudge, Dr Harjinder Sembhi, Darren Ghent, Dr Dan Potts
Affiliations: School of Physics and Astronomy, University Of Leicester, National Centre for Earth Observation
Between 2010 and 2050, global food demand is predicted to increase by 35-56% in line with a growing global population. Environmental pressures, such as those caused by a changing climate and more frequent extreme heat events, threaten future food security. Heatwaves and water stress cause many negative plant responses, such as decreased transpiration and photosynthetic inhibition, leading to reduced crop yields. Satellite observations of solar-induced fluorescence (SIF) are an effective way to monitor global vegetation changes: SIF provides information on plant photosynthetic efficiency, which can potentially help us better understand the timescales and intensity of heat stress on crops and act as an early warning system for plant stress. This project primarily focuses on understanding how agricultural water and heat stress manifests in SIF. Across India, 70% of rural households rely primarily on agriculture for their livelihoods. Many states in India adopt agriculturally intensive rice-wheat cropping systems, where up to 80% of a state’s land is dedicated to growing rice in the summer and up to 70% to growing wheat in the winter. During the 2022 mega-heatwave, India was forced to stop wheat exports due to nation-wide crop yield losses. Heatwave conditions combined with torrential rains in 2023 resulted in a predicted ban extension until March 2025. By combining multiple coincident satellite observations, we explore the relationships between SIF, land surface temperature (LST), the normalised difference vegetation index (NDVI), vapour pressure deficit (VPD) and soil moisture (SM) in baseline and extreme water stress conditions across agricultural regions of the country. A multivariate analysis of Sentinel-5P TROPOSIF, VIIRS LST, NDVI, SM and ERA5-derived VPD will be presented in the context of government wheat yield statistics. Early results indicate that SIF is the parameter most strongly correlated with state crop yield information.
Regional and time-series analyses from 2018 to 2024, along with the results of the statistical analysis, will be used to demonstrate the timescales over which SIF and other parameters capture heat and water stress impacts, and how these stresses can be better monitored and predicted.
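The correlation screening described above can be sketched as follows; the state-level data are synthetic stand-ins (no real yield statistics or satellite retrievals are used), constructed only to illustrate correlating yield with seasonal means of each candidate variable:

```python
import numpy as np

rng = np.random.default_rng(3)
n_states = 20

# Synthetic state-level seasonal means: yield increases with SIF, and
# LST is anti-correlated with SIF (hotter canopies fluoresce less).
sif = rng.normal(1.0, 0.2, n_states)
lst = 310.0 - 5.0 * sif + rng.normal(0.0, 0.8, n_states)
yield_t_ha = 2.0 + 1.5 * sif + rng.normal(0.0, 0.1, n_states)

# Screen candidate predictors by their Pearson correlation with yield.
corrs = {name: np.corrcoef(x, yield_t_ha)[0, 1]
         for name, x in {"SIF": sif, "LST": lst}.items()}
for name, r in corrs.items():
    print(f"corr({name}, yield) = {r:+.2f}")
```

In a real analysis each predictor would be aggregated over the crop growing season and per state before correlating against the official yield statistics.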

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: Optimising Light Use Efficiency Models for Crop Productivity Estimation Under Heat Stress

Authors: Peiyu Lai, Dr. Michael Marshall, Dr. Roshanak Darvishzadeh, Dr. Andrew Nelson
Affiliations: Faculty of Geo-Information Science and Earth Observation, University of Twente
The increasing frequency and intensity of extreme heat events highlight the need for reliable global estimates of crop productivity under heat stress. Light use efficiency (LUE) models are increasingly used for macroscale crop yield estimation due to their ease of parameterisation using satellite-derived vegetation indices and gridded meteorological data. However, their performance often suffers under heat stress, primarily due to limitations in model structure, parameters and input data. Addressing these uncertainties is essential for enhancing model accuracy and ultimately improving food security assessments in a warming climate. This study evaluates the impacts of uncertainties from model structure, parameters and input data on the performance of LUE models under heat stress. Firstly, based on eddy covariance flux tower data spanning 177 crop growth seasons across 18 globally distributed sites, LUE model structures (component representations) were assessed and optimised for heat stress periods characterised by high temperatures, excluding confounding factors such as low soil moisture and unfavourable light conditions. Secondly, the optimised model was validated for key outputs, gross primary productivity (GPP), dry above-ground biomass (AGB) and crop yield, to quantify parameter-driven uncertainties at the field level with 145 samples over 14 years. Finally, input data uncertainties were analysed by comparing three remote sensing sources (MODIS, Landsat 8 and Sentinel-3) and three meteorological datasets (station data, ERA5 and LSA SAF EUMETSAT), focusing on differences in spatial and temporal resolution, data quality and representativeness. Results show that incorporating the Enhanced Vegetation Index (EVI)-based fraction of photosynthetically active radiation (FPAR), the evaporative fraction (EF)-based moisture constraint and an inverse double-exponential temperature function significantly improved GPP and AGB estimation under heat stress.
The optimised model outperformed three commonly used models, the Vegetation Photosynthesis Model (VPM), the eddy covariance-light use efficiency (EC-LUE) model and the Carnegie-Ames-Stanford Approach (CASA) model, reducing RMSE by 34%, 39% and 57% and increasing R² by 9%, 8% and 44%, respectively. These enhancements also improved GPP and AGB estimation under normal growth conditions. Analysis of the parameter-driven uncertainties revealed that literature-based parameters for converting AGB to crop yield often underestimate crop yields: the optimised model, while accurately estimating GPP and AGB, still underestimated crop yields, whereas EC-LUE, which overestimated GPP, provided more accurate yield estimates. This highlights the critical role of accurately estimating the parameters related to dry matter allocation, which is often treated as an empirical, crop-specific constant across all conditions. The influence of heat stress on the harvest index should be incorporated in future model refinements. This study provides critical insights into improving crop productivity estimation under heat stress and can inform large-scale adaptation strategies to mitigate the impacts of a warming climate.
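A generic LUE model has the form GPP = PAR × FPAR × ε_max × f(T) × f(W). The sketch below uses this structure with an EF-based water scalar, as in the optimised model, but the specific functional forms and constants are illustrative stand-ins, not the optimised ones from the study:

```python
import numpy as np

def lue_gpp(par, fpar, t_air, ef, eps_max=2.0):
    """Generic LUE model: GPP = PAR * FPAR * eps_max * f(T) * f(W).

    The scalar forms and constants are illustrative, not the optimised
    ones from the study."""
    # Peaked temperature response built from two exponential ramps,
    # declining above an assumed 25 C optimum (heat-stress decline).
    f_t = ((1.0 - np.exp(-0.2 * t_air))
           * np.exp(-0.05 * np.maximum(t_air - 25.0, 0.0)))
    f_t = float(np.clip(f_t, 0.0, 1.0))
    # EF-based moisture constraint: evaporative fraction used directly.
    f_w = float(np.clip(ef, 0.0, 1.0))
    return par * fpar * eps_max * f_t * f_w

# Heat-stressed (35 C, EF 0.4) vs. favourable (22 C, EF 0.9) conditions.
gpp_hot = lue_gpp(par=10.0, fpar=0.8, t_air=35.0, ef=0.4)
gpp_ok = lue_gpp(par=10.0, fpar=0.8, t_air=22.0, ef=0.9)
print(f"GPP heat-stressed: {gpp_hot:.1f}, GPP favourable: {gpp_ok:.1f}")
```

The multiplicative structure is what makes the choice of each scalar matter: an over-permissive temperature or moisture function directly inflates GPP under heat stress.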

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: Addressing soil stressors on rice crops through hyperspectral remote sensing: a comparison of EnMAP, PRISMA and Sentinel-2 missions

Authors: Francisco Canero, Dr. Victor Rodriguez-Galiano, Mr. Aaron Cardenas-Martinez, Daniel Arlanzon, Jose Manuel Ollega-Caro
Affiliations: Department of Physical Geography and Regional Geographic Analysis, Universidad de Sevilla
Rice (Oryza sativa L.) serves as a primary source of food for over half of the world's population. Anthropogenic climate change poses a threat to rice crops and food security, increasing the risk of damage by abiotic stressors such as soil salinity and nitrogen or carbon deficits. Advances in spaceborne hyperspectral technologies such as the EnMAP and PRISMA scanners may improve the characterization, mapping and understanding of these phenomena compared with forerunner multispectral missions such as Sentinel-2 or Landsat. Moreover, there is a growing demand for hyperspectral imaging for abiotic stress detection in view of the forthcoming operational ESA hyperspectral mission CHIME. This study aims at mapping three agricultural soil properties (soil salinity, total carbon, and available nitrogen) acting as soil stressors of rice crops in a 34,477.51 ha agricultural area in a Mediterranean climatic setting. A second objective is to evaluate how the EnMAP and PRISMA hyperspectral missions compare with the operational multispectral ESA mission Sentinel-2 for mapping these soil properties. The field campaign was carried out in the Bajo Guadalquivir, a plain located in the estuary of the Guadalquivir River in southern Spain, under the ESA-funded EO4Cereal Stress project. Stakeholders have expressed concern about the impact of salinity on rice yield across the area's 4,201 plots, which have an average size of 8.21 ha. One hundred soil samples were collected in May–June 2023 and their spectra measured under laboratory conditions. Bare soil images were acquired during a drought year (with no rice harvest) for EnMAP, PRISMA and Sentinel-2. Two spectral preprocessing methods to enhance specific absorption features were applied to the hyperspectral images: Continuum Removal and Multiplicative Scatter Correction.
To address the high dimensionality of hyperspectral data together with the limited number of soil samples, a two-step dimensionality reduction workflow combining recursive feature extraction and PCA was built. This workflow was tested with five modelling algorithms: Linear Regression, Partial Least Squares Regression, Random Forest, Support Vector Regression and a Multilayer Perceptron Neural Network. To detect key spectral bands for each soil stressor, a model-agnostic interpretation method based on permutation feature importance was applied. Dimensionality reduction, hyperparameter tuning, and model performance were evaluated using R² and RMSE. Uncertainty was assessed by selecting the models with positive R² and evaluating the Z-score deviation within each pixel. Hyperspectral images from EnMAP and PRISMA provided more reliable mapping estimations of the soil stressors than Sentinel-2, with results similar to those obtained with laboratory spectroscopy. EnMAP provided a better prediction for soil salinity, while PRISMA achieved more accurate soil carbon and nitrogen maps. The most important bands fell within spectral regions captured by Sentinel-2, indicating that an enhanced spectral resolution might be required to accurately assess soil stressors of rice. Among the modelling algorithms, Partial Least Squares Regression obtained the highest accuracy overall: soil salinity using EnMAP-MSC data (R² = 0.574, RMSE = 2.647 dS/m), soil carbon using PRISMA data (R² = 0.717, RMSE = 0.259%) and soil available nitrogen using PRISMA (R² = 0.88, RMSE = 1.35 mg/kg). The best R² per variable and image source were as follows. Soil salinity: laboratory 0.79, EnMAP 0.57, PRISMA 0.50, Sentinel-2 0.10. Soil carbon: laboratory 0.89, EnMAP 0.571, PRISMA 0.717, Sentinel-2 0.14. Soil available nitrogen: laboratory 0.69, EnMAP 0.57, PRISMA 0.88, Sentinel-2 0.16.
The most important variables for salinity were the 645 and 1609 nm bands; for soil carbon, 855, 2437 and 1706 nm; and for soil available nitrogen, the key features were within the 611–628 nm range. In summary, these results highlight the importance of the hyperspectral information provided by EnMAP and PRISMA for soil mapping aimed at detecting abiotic stressors. They also underpin the need for further development of the operational hyperspectral mission CHIME to fulfil stakeholder needs, and can serve as inputs for delimiting the importance of different stressors in rice crops.
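The two-step dimensionality reduction described above (feature selection followed by PCA, then a regression model scored with R² and RMSE) can be sketched as follows. A simple correlation-based ranking stands in for the recursive feature extraction step, and ordinary least squares stands in for the tested regressors; all parameter values are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def reduce_and_fit(X, y, n_keep=20, n_pc=5):
    """Two-step reduction sketch: rank bands by |correlation| with y
    (a lightweight stand-in for recursive feature extraction), keep
    the best n_keep, project onto n_pc principal components, and fit
    ordinary least squares. Returns training R2 and RMSE."""
    # Step 1: univariate ranking of spectral bands.
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(corr)[::-1][:n_keep]
    Xk = X[:, keep]
    # Step 2: PCA via SVD on centred data.
    Xc = Xk - Xk.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_pc].T
    # Fit OLS (with intercept) on the component scores.
    A = np.column_stack([Z, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
    r2 = float(1 - np.sum((pred - y) ** 2) / np.sum((y - y.mean()) ** 2))
    return r2, rmse
```

The same scaffold accepts any of the five regressors tested in the study in place of the OLS step.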

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: Human and Environmental Causal Effects on Food Security in Africa

Authors: Jordi Cerdà-Bautista, Vasileios Sitokonstantinou, Homer Durand, Gherardo Varando, Dr Gustau Camps-Valls
Affiliations: Universitat De València
Understanding the complex interplay between human and environmental factors affecting food security is crucial for designing effective, context-sensitive interventions, especially in vulnerable African regions. This study utilizes cutting-edge causal machine learning (ML) methods to estimate the impacts of anthropogenic and environmental variables on a comprehensive food security index. By estimating Average Treatment Effects (ATE) and Conditional Average Treatment Effects (CATE), we provide detailed insights into the socio-economic and climatic drivers of food security and their relative contributions [Hernan, 2020]. Our analysis focuses on three regions with distinct socio-environmental dynamics and food security challenges: the Horn of Africa, the Sahel, and South Africa. Leveraging a newly developed dataset that integrates socio-economic indicators (such as food prices, conflict levels, and internal displacements) and climate variables (such as precipitation, evaporation, temperature, and vegetation indices), we investigate spatial heterogeneity in causal effects, identifying distinct regional variations. Additionally, we employ innovative techniques such as Granger PCA [Varando, 2021] to cluster areas with similar climatic responses to El Niño Southern Oscillation (ENSO) patterns. This approach enables us to capture heterogeneity in the causal effects of treatments on food security outcomes across regions with analogous climatic behavior. We perform ATE and CATE analyses across multiple regions and apply robustness tests to ensure the validity of the estimations. Our results highlight the spatial heterogeneity of treatment effects on food security, providing quantitative and spatially explicit evaluation. These findings offer nuanced insights into how diverse socio-environmental factors interact and influence food security in the selected areas of interest.
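The core quantity here, the Average Treatment Effect, can be illustrated with a minimal regression-adjustment (T-learner style) estimator on synthetic data. The abstract does not specify which estimators the study uses, so this sketch only demonstrates the ATE concept; the function and variable names are assumptions.

```python
import numpy as np

def ate_regression_adjustment(X, t, y):
    """Minimal T-learner sketch of the Average Treatment Effect:
    fit separate linear outcome models for treated (t=1) and control
    (t=0) units, predict both potential outcomes for every unit, and
    average their difference."""
    def fit(Xs, ys):
        # Ordinary least squares with an intercept column.
        A = np.column_stack([Xs, np.ones(len(ys))])
        coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
        return coef
    b1 = fit(X[t == 1], y[t == 1])  # treated outcome model
    b0 = fit(X[t == 0], y[t == 0])  # control outcome model
    A = np.column_stack([X, np.ones(len(y))])
    return float(np.mean(A @ b1 - A @ b0))
```

On synthetic data with a known additive treatment effect, the estimator recovers that effect, which is the sanity check one would run before applying such estimators to observational food-security data.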
This research advances the application of causal inference to complex socio-environmental systems, providing evidence-based knowledge for policy-making. By evaluating the spatial and contextual dependencies of food security drivers, our study emphasizes the importance of tailored strategies to address the multifaceted challenges facing Africa’s food systems.
References:
- Sitokonstantinou, Vasileios, et al. "Causal machine learning for sustainable agroecosystems." arXiv preprint arXiv:2408.13155 (2024).
- Varando, Gherardo, Miguel-Angel Fernández-Torres, and Gustau Camps-Valls. "Learning Granger causal feature representations." ICML 2021 Workshop on Tackling Climate Change with Machine Learning, 2021.
- Cerdà-Bautista, Jordi, et al. "Assessing the Causal Impact of Humanitarian Aid on Food Security." IGARSS 2024 IEEE International Geoscience and Remote Sensing Symposium, IEEE, 2024.
- Pearl, J. "Causality: Models, Reasoning, and Inference." Cambridge University Press, vol. 19, 2000.
- Hernán, M. A., and Robins, J. M. "Causal Inference: What If." Boca Raton: Chapman & Hall/CRC, 2020.

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: CropSHIFT - Climate Change impact on crop growing patterns in Europe

Authors: Andreas Walli, Dr. Edurne Estévez
Affiliations: Geoville
The effects of climate change (rising temperatures, altered precipitation patterns, and an increased frequency of extreme weather events) present significant challenges for agriculture and have strong consequences for crop yields, food security, and rural livelihoods. These changes in agroclimatic conditions induce substantial shifts in crop phenology, crop suitability and risks, and yield potentials. The consequences are becoming increasingly evident, particularly in changing spatial patterns of crop cultivation. For example, regions once considered suitable for cultivation may become less viable, while new areas may emerge as more favorable for agricultural production. Additionally, some crop types may no longer be suitable for particular regions but thrive in new ones, enhancing agricultural production. Given the scale and urgency of these changes, especially in Europe, it is critical to advance our understanding of how climate change influences agricultural systems. CropSHIFT identifies, quantifies, and visualizes shifts in crop growth and predicts heat- and drought-risk-induced yield reduction at a regional level in Europe based on the latest climate prediction model calculations. The prediction service will combine EO data (Copernicus-based high-resolution crop-type growing areas), weather-related crop growing parameters obtained from ARIS (the Agricultural Risk Information System; Eitzinger et al., 2024), and climatic information (the Climate DT developed by ECMWF). The resulting hybrid model is the first to combine the latest climate scenarios with crop information of unprecedented spatial resolution, and it allows potential shifts of the ideal growing regions in the upcoming decades to be predicted more accurately and in a spatially explicit way. It quantifies the changing growing conditions and climate-related risks for a selection of crop types.
This service will be essential not only for addressing the immediate challenges faced by agriculture, developing adaptive strategies and mitigating risks, but also for aligning with key Sustainable Development Goals (SDGs), including SDG 2 (Zero Hunger), SDG 12 (Responsible Consumption and Production), and SDG 15 (Life on Land). Moreover, it will enable strategic land-use decisions informed by local contexts and needs, supporting the sustainable and resilient management of agricultural resources at every level of implementation. It will be of great use to a wide range of stakeholders, including agricultural ministries (policy level), agricultural insurers, logistical operators, agri-food actors such as seed producers and distributors, federal authorities for water management, and farmers themselves. The collaboration with the Agro Innovation Lab will help connect with these diverse stakeholders to better understand their needs and tailor the service accordingly.

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: Earth Observation for Rice Stress: Evaluating EnMAP Hyperspectral Mission to Detect the Effects of Salinity and Nutrient Deficit in Crop Biophysical Traits

Authors: Dr. Victor Rodriguez-Galiano, Mr. Daniel Arlanzon-Quiroz, Ms. Ana Martin-Gonzalez, Mr. Aaron Cardenas-Martinez, Mr. Francisco Canero-Reinoso, Mr. Manuel Lobeto-Martin
Affiliations: Department of Physical Geography
Soil salinity, caused by natural factors and agricultural mismanagement (e.g., inadequate irrigation and drainage), and nutrient deficits significantly impair crop development. In the Guadalquivir marshes, rice fields are irrigated with water from the Guadalquivir River, which is influenced by tidal seawater infiltration, exacerbating salinity stress. Nitrogen (N) deficits further hinder growth, reducing photosynthetic efficiency and grain filling. This study evaluates the performance of hyperspectral (EnMAP) and multispectral (Sentinel-2) satellite missions in monitoring salinity and nitrogen deficit impacts on rice crops in the Guadalquivir marshlands (Southern Spain). Hyperspectral and multispectral imagery from summer 2023 were complemented by five field campaigns (July 24–September 22) across three fields representing optimal, suboptimal, and poor conditions. Nine Elementary Sampling Units (3×3 grids, 30×30 m) were sampled per field, including analyses of nitrogen (N), pigments (chlorophyll-a and chlorophyll-b [Chla, Chlb], carotenoids [CAR]), water content (leaf water content [LWC]), and canopy traits such as the Leaf Area Index (LAI). Crop traits were estimated using a hybrid approach combining PROSAIL-PRO radiative transfer models (RTMs), dimensionality reduction techniques, and active learning to optimize machine learning (ML) algorithms. Principal component analysis (PCA) was applied to hyperspectral imagery to reduce spectral redundancy. The best models achieved R² > 0.6, with Gaussian Processes excelling in carotenoids (CAR; R² = 0.934, normalized root mean square error [NRMSE] = 7.899) and leaf nitrogen content (LNC; R² = 0.916, NRMSE = 11.128). Other traits, such as LWC (R² = 0.901) and leaf chlorophyll content (LCC; R² = 0.866), also performed strongly, whereas canopy traits like canopy chlorophyll content (CCC; R² = 0.642) and canopy nitrogen content (CNC; R² = 0.69) showed moderate agreement, likely due to challenges in LAI estimation (R² = 0.71). 
A case-control study compared stressed and non-stressed zones, evaluating salinity and combined stress (salinity + N deficit). A two-tailed t-test revealed significant impacts on CAR (p = 1.20e-08), LCC (p = 3.50e-06), and LAI (p = 3.07e-13) under salinity stress, with stronger effects under combined stress (LCC: p = 1.65e-18, CCC: p = 1.91e-03). Sentinel-2 corroborated most trends but showed discrepancies in CNC under combined stress (p = 1.94e-10), highlighting EnMAP’s superior spectral resolution. These findings demonstrate the potential of hyperspectral sensors for sustainable agriculture, supporting future advancements with ESA’s upcoming CHIME mission.
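The significance tests above are two-tailed, two-sample comparisons of crop traits between stressed and non-stressed zones. A minimal Welch-style version (unequal variances) is sketched below; for brevity the p-value uses a normal approximation to the t distribution, and the study's exact test implementation is not stated in the abstract.

```python
from statistics import NormalDist, fmean, variance

def welch_t_two_tailed(a, b):
    """Two-sample Welch t statistic and an approximate two-tailed
    p-value for comparing a crop trait (e.g. LCC) between stressed
    and non-stressed sampling units. The normal approximation to the
    t distribution keeps this dependency-free; it is adequate for
    moderately large samples."""
    na, nb = len(a), len(b)
    se = (variance(a) / na + variance(b) / nb) ** 0.5  # Welch standard error
    t = (fmean(a) - fmean(b)) / se
    p = 2.0 * (1.0 - NormalDist().cdf(abs(t)))
    return t, p
```

With an exact t distribution (e.g. scipy.stats.ttest_ind with equal_var=False) the p-values would be slightly larger for small samples, but the stressed vs non-stressed decision logic is the same.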

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Session: F.04.20 EO in support of the regulation on Deforestation-free products (EUDR, EU 2023/1115) - PART 2.

Faced with mounting global environmental concerns and the urgency of addressing climate change, the EU has introduced the ground-breaking regulation on Deforestation-free products (EUDR, EU 2023/1115) targeting global deforestation. The EUDR ensures that seven key commodities – cattle, cocoa, coffee, palm oil, soy, timber, and rubber – and their derived products like beef, furniture, and chocolate, entering the EU market from January 2026 onwards, are not linked to deforestation after a defined cut-off date (December 2020).
To achieve this goal, the regulation obliges operators to establish robust due diligence systems that guarantee deforestation-free and legal sourcing throughout their supply chains. Verifying compliance with these standards is crucial. The EUDR mandates using the EGNOS/Galileo satellite systems and exploiting the Copernicus Earth Observation (EO) program for this purpose. This involves, among others, cross-referencing the geographic locations of origin for these commodities and products with data from satellite deforestation monitoring.
By providing precise and detailed information on deforestation linked to commodity expansion, Copernicus and other EO data/products will help to detect fraud and strengthen the implementation of the policy by diverse stakeholders.
This session will delve into the latest scientific advancements in using EO data to support due diligence efforts under the regulation, including global forest and commodities mapping.
Topics of interest include (but are not limited to):

- Classification methods for commodities mapping using EO data;
- World forest cover and land use mapping with EO data;
- Deforestation and GHG/carbon impacts related to commodity expansion;
- Field data collection strategies for EUDR due diligence;
- Practical examples of EO integration in global case studies;
- Machine learning / AI for deforestation detection and change analysis;
- EUDR compliance strategies: Integrating EO data with other datasets;
- Traceability in the Supply Chain: EO Data for Transparency.

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Insights into EUDR Implementation at BLE: Challenges of applied geodata-analyses for deforestation monitoring

Authors: Niklas Langner, Stefanie
Affiliations: Federal Office for Agriculture and Food Germany (BLE)
The Regulation on deforestation-free products (EUDR) (EU 2023/1115) establishes new requirements to mitigate deforestation and forest degradation associated with the consumption of key commodities in the EU. The regulation requires operators and traders to submit a Due Diligence Statement (DDS) ensuring that products containing relevant commodities such as timber, rubber, soy, beef, palm oil, cocoa, and coffee are deforestation-free prior to being placed on the EU market or exported. The DDS must include detailed geolocation data of the production area to facilitate traceability and ensure compliance with the regulation. The German Federal Office for Agriculture and Food (BLE) is the designated competent national authority (CNA) responsible for implementing and enforcing the EUDR in Germany. As part of this role, the BLE, in collaboration with the Thünen Institute of Forestry and several partners, is developing a digital control process including a system for analysing geolocation data, aiming at a high level of automation. This system requires a robust monitoring framework capable of providing reliable risk assessment at multiple levels. The first level of analysis involves an automated comparison with forest cover products, such as the GFC2020 map, and additionally integrates information from initiatives like the ESA Agro Commodities project. By automatically analysing geolocation data, this step focuses on identifying low-risk cases and minimising the overall volume of controls. The second level targets unclear and higher-risk cases through case-specific processing of Copernicus satellite imagery. This processing uses the Copernicus Open Data and Exploitation Platform – Germany (CODE-DE), employing established algorithms for the classification of relevant commodities as well as deforestation and degradation.
Unclear and high-risk cases identified at this stage undergo further examination in a third processing step, involving detailed analysis and manual interpretation using very high-resolution (VHR) data and ensuring legal reliability where required. This presentation outlines the conceptual framework, methodological approaches, and challenges associated with the monitoring system. The digital federal infrastructure is highly complex and sets strict limitations in terms of data security; CODE-DE, with its ISO 27001 certification, must therefore be used. The platform offers a secure working environment for processing remote sensing data and access to the data archive via a scalable processing environment, a mandatory element for data management in the federal environment given the EUDR requirements. However, development and implementation are demanding, as challenges arise from compliance requirements, the high standards of federal information security (BSI), and EU legal regulations ensuring secure and lawful data storage. Of particular importance are the challenges related to the access, usability, and integration of digital forest cover maps and commodity-specific maps: these maps play a crucial role in the verification process, as their applicability and accuracy are essential for effective risk assessment and monitoring.

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: High-Resolution Global Maps of Cocoa Farms Extent

Authors: Robert Masolele, Dr Johannes Reiche, Camilo Zamora, Dr. Diego Marcos, Dr. Liz Goldman, Katja Berger, Martin Herold
Affiliations: Wageningen University, Helmholtz GFZ German Research Centre for Geosciences, Remote Sensing and Geoinformatics Section, Inria, World Resources Institute
Cocoa cultivation serves as a cornerstone of many agricultural economies across the globe, supporting millions of livelihoods and contributing significantly to global cocoa production. However, accurately mapping cocoa farm locations remains challenging due to the complex and heterogeneous nature of the landscapes where cocoa is cultivated. Traditional mapping techniques often fall short in capturing the intricate spatial patterns of cocoa farming amidst dense vegetation, varying land cover types, farming practices and growing stages (Masolele et al., 2024). Moreover, current mapping efforts mainly focus on the two major producing countries, Côte d’Ivoire and Ghana (Kalischek et al., 2023). Thus, little is known about the location of cocoa farms in other cocoa-producing regions, posing a challenge to the sustainability and economic contributions of the cocoa crop. To address this challenge, we first present a benchmarking approach for mapping commodity crops worldwide, comparing different spectral, spatial, temporal and spatio-temporal methods. The benchmarking is based on variable combinations of Sentinel-1 and Sentinel-2 with locational and environmental variables (temperature and precipitation), using a comprehensive reference dataset spanning 36 cocoa-producing countries. Higher accuracy (F1-score of 87%) is obtained with a model that employs spatio-temporal remote sensing images plus locational and environmental information, compared to models without locational and environmental information. Secondly, for demonstration, we employ the developed deep learning methodologies to map the locations of cocoa farms across the globe with an F1-score of 88%. By leveraging the rich spatio-temporal information provided by Sentinel-1 and Sentinel-2 satellite data, complemented by location encodings, temperature and precipitation data, we have developed a robust and accurate cocoa mapping framework.
The developed deep learning algorithm extracts meaningful features from multi-source satellite imagery and effectively identifies cocoa farming areas. The integration of Sentinel-1 and Sentinel-2 data offers a synergistic approach, combining radar and optical sensing capabilities to overcome the limitations of individual sensor modalities. Furthermore, incorporating location encodings into the modeling process enhances the contextual understanding of cocoa farm distributions within their geographical surroundings. Through this research effort, we provide the first high-resolution global cocoa map, giving valuable insights into cocoa farm locations and facilitating sustainable cocoa production practices, land management strategies, and conservation efforts across the pan-tropical forests where cocoa farming occurs. The work aligns with recent European Union (EU) regulations to curb the EU market’s impact on global deforestation and provides valuable information for monitoring land use following deforestation, crucial for environmental initiatives and carbon neutrality goals (European Commission, 2022). Specifically, our product can support monitoring and compliance under the EU Regulation on Deforestation-free Products (EUDR, No 2023/1115) by identifying previously existing cocoa farms and cocoa farm expansion after the cut-off date of 31 December 2020. Within the framework of the ESA-funded WorldAgroCommodities project, this mapping approach is now being converted into an operational cloud-based service on the Copernicus Data Space Ecosystem, allowing easy access to these crucial tools for the National Competent Authorities enforcing the EUDR.
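The headline accuracies quoted above are F1-scores for the cocoa class. As a quick reminder of the metric, a minimal computation from binary labels might look like this (the example labels are synthetic, not project data):

```python
def f1_score_binary(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive
    (here: cocoa) class, the accuracy metric reported above."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Because F1 ignores true negatives, it is less inflated than overall accuracy when cocoa pixels are rare relative to the surrounding landscape, which is why it is a common choice for commodity mapping.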
Furthermore, our findings hold significant implications for cocoa farmers, agricultural policymakers, and environmental stakeholders, paving the way for informed decision-making and targeted interventions to support the resilience, sustainability and traceability of cocoa farming systems worldwide.
References:
- Masolele, Robert N., Diego Marcos, Veronique De Sy, Itohan-Osa Abu, Jan Verbesselt, Johannes Reiche and Martin Herold (2024). Mapping the diversity of land uses following deforestation across Africa. Sci Rep 14, 1681. https://doi.org/10.1038/s41598-024-52138-9
- Kalischek, Nikolai, Nico Lang, Cécile Renier, Rodrigo Caye Daudt, Thomas Addoah, William Thompson, Wilma J. Blaser-Hart, Rachael Garrett, Konrad Schindler, and Jan D. Wegner (2023). Cocoa plantations are associated with deforestation in Côte d’Ivoire and Ghana. Nat Food 4, 384–393. https://doi.org/10.1038/s43016-023-00751-8
- European Commission (2022). Proposal for a Regulation of the European Parliament and of the Council on the making available on the Union market as well as export from the Union of certain commodities and products associated with deforestation and forest degradation and repealing Regulation (EU) No 995/2010. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021PC0706

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Mapping Global Forest Management Practices in support of EUDR

Authors: Myroslava Lesiv, Wanda De Keersmaecker, Luc Bertels, Dmitry Schepaschenko, Dr Linda See, Sarah Carter, Elizabeth Goldman, Elise Mazur, Ruben Van De Kerchove, Steffen Fritz
Affiliations: IIASA, VITO, World Resources Institute (WRI)
Interest in using Earth Observation (EO) data for forest monitoring to support policies and regulations, such as the European Union Regulation on Deforestation-free products (EUDR), has surged in recent years. While new global and regional forest maps have been released, their quality is variable, and information on forest types or the use of forest land is often not available. Within this context, we have been updating the global forest management layer for 2015 developed by Lesiv et al. (2021) and, for the first time, utilizing both Sentinel-1 and Sentinel-2 data to create an updated global map of forest management practices for the year 2020. Our approach involves not only incorporating new remote sensing data but also testing various classification models, such as CatBoost, to identify the optimal model for this global mapping effort. These models are evaluated using different data configurations (Sentinel-2 alone, Sentinel-1 alone, and a combination of Sentinel-1 and Sentinel-2), with further performance comparisons between global and regional models. Baseline information on forest and forest types for the year 2020 is essential in order to identify potential deforestation and degradation. To ensure compliance with the EUDR, we have refined the forest definitions to include specific management classes: naturally regenerating forests without management signs (including primary forests); managed forests (e.g., logging or clear cuts); planted forests (rotation >15 years); woody plantations (rotation <15 years); agroforestry; and two new classes, rubber plantations and fruit tree plantations. We have also updated the 2015 training dataset to 2020 by revisiting areas where deforestation has occurred, adding the new classes, and collecting additional training data in regions with lower accuracy. Finally, we have integrated feedback from the initial map version to enhance training data quality.
We aim to achieve a minimum of 80% accuracy per class through an iterative improvement process. The results we will present hold significant value for the scientific community engaged in EO-based forest mapping and land-use assessment, as this marks the first global effort to map forest management practices using combined Sentinel-1 and Sentinel-2 imagery. Additionally, our insights on reference data collection may offer valuable support for the EUDR’s due diligence processes. We will share the validation results, discuss avenues for improving map quality, and outline the remaining research gaps on the way to next-generation products for policy and decision-making.
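The 80%-per-class target corresponds to a per-class (producer's) accuracy check. A minimal sketch of that check on synthetic labels is shown below; the class names are placeholders, not the project's validation data.

```python
def per_class_accuracy(y_true, y_pred):
    """Per-class producer's accuracy: for each reference class, the
    fraction of its samples that received the correct map label."""
    out = {}
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        out[c] = sum(1 for i in idx if y_pred[i] == c) / len(idx)
    return out

def meets_target(acc_by_class, target=0.80):
    """True only when every class reaches the minimum accuracy target,
    mirroring a per-class (rather than overall) acceptance criterion."""
    return all(a >= target for a in acc_by_class.values())
```

Evaluating per class rather than overall prevents dominant classes (e.g. naturally regenerating forest) from masking poor accuracy in rare classes such as agroforestry.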

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Development of EO-based forest and crop monitoring tools to support Competent National Authorities: the ESA World AgroCommodity Project.

Authors: Christophe Sannier
Affiliations: GAF
Tropical forests are an important habitat with multiple functions, playing a major role as global carbon sinks that offer solutions to the ongoing challenges of climate change mitigation. Several international conventions and policy frameworks, such as the United Nations Framework Convention on Climate Change (UNFCCC) mechanism for reducing emissions from deforestation and degradation (REDD+), the UN Convention on Biological Diversity (UNCBD) and UN Sustainable Development Goals (SDGs) 13 and 15, address forest protection and management. Because the move towards Zero Deforestation (ZD) and improved traceability has been voluntary, progress towards deforestation-free supply chains has been slow. The new EU regulation on deforestation-free supply chains (EUDR), which came into force in June 2023 as part of the EU Green Deal, aims to reduce the EU’s contribution to GHG emissions from deforestation and forest degradation worldwide. The regulation requires companies to ensure that specific target commodities (soy, beef, palm oil, wood, cocoa, coffee, rubber) and derived products (leather, chocolate or furniture) are sourced from areas where no deforestation occurred after 31 December 2020. The implementation of the EUDR, planned for 31 December 2024, is likely, at the time of writing this abstract, to be postponed by 12 months to leave stakeholders more time to prepare. The EUDR requires operators and traders to produce Due Diligence Statements (DDS). These DDS will be subject to inspections by the Competent National Authorities (CNAs) designated for each EU Member State. The inspections are implemented through annual plans according to the origin of the products and the risk level, which is itself assessed against three sets of criteria: i) rate of deforestation and forest degradation; ii) rate of expansion of agricultural land for relevant commodities; iii) production trends of relevant commodities and relevant products.
The primary focus of the ESA AgroCommodities project is on the development of a pre-operational monitoring system to support the implementation of the EUDR by EU Member States to support the checks to be made as part of the DDS inspection process. This system will align with the requirements and needs of the Competent National Authorities (CNAs) responsible for monitoring the compliance of operators and traders. A comprehensive consultative process with CNAs was initiated at the project onset to gather detailed requirements and will continue throughout the duration of the project. The detailed objectives of the project are as follows: - Engage with a representative number of European Competent National Authorities (CNAs) who will provide the user requirements for the project and commit to the collaboration with the Consortium. - In a consultative manner with CNAs, identify potential test and demonstration site for the seven commodities- beef, cocoa, coffee, oil palm, soya, rubber, and wood - where deforestation could have occurred after December 31 2020 and which will form the basis for the mapping work in the project. - Map and validate the seven commodities in different geographic regions (at least in 4 different countries), identify location where deforestation has occurred after December 2020, using EO-based (Copernicus data) methods and open source tools. - Conduct a knowledge transfer to the CNAs on the methods and open source solutions developed. - Undertake promotion and outreach of the methods and project outcomes with a broader audience than the CNAs; this will include a project website, webinars, the presentation of policy briefs and scientific publications The user requirement phase identified two main steps of the inspection process on which to focus: i) a fully automated tool to sieve through DDS to identify those requiring more detailed inspection through the identification of potential non-compliance (e.g. 
deforestation post-2020 and/or inconsistencies with the declared commodity); ii) inspection-level work to support the identification of non-conformance. Initial tests will be carried out on representative test sites across the world, based on priority countries identified with the CNAs using a set of criteria aimed at ensuring a representative sample of test sites across different production systems and regions. An objective and structured approach was adopted for the selection of test sites, using the H3 level 5 hexagonal grid (representing 153km² at the equator) as an analytical framework to integrate available datasets representing each of the selected criteria, combining deforestation risk, commodity presence and production systems with in situ data availability. At least 10 sites with a minimum area of 100km² (identified from H3 grid cells) will be selected, and several methods to identify deforested areas and commodity types will be tested through a benchmarking approach. The preliminary design of the system is based on a dynamic mapping approach in which the CNA inspector will be able to run a ML/DL model on the fly to identify deforested and commodity areas. The system will adopt a cloud-based, platform-agnostic architecture to allow integration within the CNAs' own systems. Preliminary results from the benchmarking process will be presented, as well as the selected architecture for the prototype system. The next steps will be to implement the selected approach over larger geographical areas covering at least 4 countries for each commodity, validate the results and develop a series of use cases in collaboration with the CNAs.
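The grid-based site-selection step described above can be sketched in a few lines. The code below is purely illustrative (not the project's actual tool): it ranks hypothetical grid cells by a weighted combination of min-max-normalized criterion values; the cell identifiers, criterion values and weights are all invented for the example.

```python
# Illustrative sketch: ranking candidate grid cells for test-site selection
# by combining normalized criteria, as in the H3-based analytical framework.

def normalize(values):
    """Rescale a list of raw criterion values to the 0..1 range (min-max)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def rank_cells(cells, weights):
    """cells: {cell_id: {criterion: raw_value}}; weights: {criterion: w}.
    Returns cell ids sorted by weighted composite score, best first."""
    criteria = list(weights)
    # Normalize each criterion column across all cells.
    columns = {c: normalize([cells[k][c] for k in cells]) for c in criteria}
    scores = {}
    for i, k in enumerate(cells):
        scores[k] = sum(weights[c] * columns[c][i] for c in criteria)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical cells scored on the three criteria named in the abstract.
cells = {
    "cell_A": {"deforestation_risk": 0.9, "commodity_presence": 0.7, "in_situ_data": 0.2},
    "cell_B": {"deforestation_risk": 0.4, "commodity_presence": 0.9, "in_situ_data": 0.8},
    "cell_C": {"deforestation_risk": 0.1, "commodity_presence": 0.2, "in_situ_data": 0.9},
}
weights = {"deforestation_risk": 0.5, "commodity_presence": 0.3, "in_situ_data": 0.2}
ranked = rank_cells(cells, weights)  # best-first ordering of cell ids
```

In practice the criterion values would be zonal statistics aggregated per H3 cell, but the weighted-score ranking shown here is the core of such a structured selection.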
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Global forest maps for year 2020 in support to the EU deforestation-free regulation: Improvements and accuracy

Authors: Rene Colditz, Clement Bourgoin, Astrid Verhegghen, Lucas Degreve, Iban Ameztoy, Silvia Carboni, Frederic Achard
Affiliations: Joint Research Center, European Commission, ARHS Developments Italia, European Dynamics Luxembourg
The EU regulation on deforestation-free supply chains (EU, 2023) prohibits placing or making available on the market, or exporting, certain commodities and relevant products if they are not deforestation-free, legally produced and covered by a due diligence statement. Due diligence by operators comprises the collection of information (including the geolocation of the sourcing area), risk assessment and risk mitigation measures to ensure that commodities and products do not originate from land deforested after 31 December 2020. Member States’ competent authorities will check a certain percentage of due diligence statements. Even though geospatial data on forest presence or forest types is not required for the operation of the regulation, it may be a helpful source at various stages of the implementation. The JRC develops and maintains the EU Observatory on Deforestation and Forest Degradation, which provides access to global forest maps and spatial forest and forestry-related information and facilitates access to scientific information on supply chains. Building on a few existing global layers mostly derived from Earth Observation data, including the WorldCover map 2020 (Zanaga et al., 2021), the map of global forest cover for the year 2020 (Bourgoin et al., 2024) indicates forest presence or absence, meeting the definition of forest as set out in the regulation. Operators could use this globally consistent, harmonized layer alone or in combination with other geospatial sources for risk assessment of deforestation (Verhegghen et al., 2024), i.e. the conversion of forest into agricultural land for commodities and products in scope. Based on a first version released in December 2023, the JRC improved the map with new or updated input layers and user feedback and released a second version in December 2024 (JRC, 2024). 
To support the risk assessment of areas subject to forest degradation, the JRC also undertakes work on mapping forest types in line with the definitions set out in the regulation and by FAO (FAO, 2018). In November 2024, the JRC released a preliminary version of a global map of forest types for the year 2020 with three main classes (primary forests, naturally regenerating forests and planted forests). An accuracy assessment of the global forest cover map is an important but resource-intensive part of the mapping exercise. The JRC interpreted more than 21,000 sample locations for forest presence or absence and several sub-categories, aiming to allow for a statistically robust global assessment. In this presentation we will inform the audience about the latest data and methodology updates and product accuracy. In addition, we will outline the next phases for the global forest cover and global forest type maps for the year 2020. We will link to cases where this map is used with other sources of information in the risk assessment phase to be conducted for commodities such as cattle, cocoa, palm oil and wood. - Bourgoin C et al., 2024. Mapping Global Forest Cover of the Year 2020 to Support the EU Regulation on Deforestation-free Supply Chains. Publications Office of the European Union, Luxembourg. - EU, 2023. Regulation (EU) 2023/1115 of the European Parliament and of the Council of 31 May 2023 on the making available on the Union market and the export from the Union of certain commodities and products associated with deforestation and forest degradation. - FAO, 2018. Global Forest Resources Assessment 2020 - Terms and Definitions. Forest Resources Assessment Working Paper 188, Food and Agriculture Organization of the United Nations, Rome. - JRC, 2024. EU Observatory on Deforestation and Forest Degradation. 
https://forest-observatory.ec.europa.eu/forest/rmap - Verhegghen A et al., 2024. Use of national versus global land use maps to assess deforestation risk in the context of the EU Regulation on Deforestation-free products: case study from Côte d'Ivoire. Publications Office of the European Union, Luxembourg. - Zanaga D et al., 2021. ESA WorldCover 10 m 2020 v100. https://doi.org/10.5281/zenodo.5571936
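To illustrate why a large reference sample such as the >21,000 interpreted locations matters, the sketch below computes overall accuracy with a 95% confidence interval under a simple-random-sampling assumption (the JRC's actual design may be stratified; the agreement counts here are hypothetical).

```python
# Hedged sketch: overall map accuracy with a normal-approximation 95% CI.
import math

def overall_accuracy_ci(n_correct, n_total, z=1.96):
    """Return (accuracy, CI half-width) for a simple random sample."""
    p = n_correct / n_total
    half_width = z * math.sqrt(p * (1 - p) / n_total)
    return p, half_width

# Hypothetical figures: 19,530 of 21,000 samples agree with the map.
acc, hw = overall_accuracy_ci(19_530, 21_000)
# yields an accuracy of 93.0% with a half-width of roughly 0.35 points
```

With the same agreement rate but only 500 samples, the half-width would be roughly seven times larger, which is why statistically robust global statements require samples of this size.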
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Monitoring commodity-related deforestation and carbon emissions in Colombia

Authors: Camilo Zamora, Robert Masolele, Katja Berger, Johannes Reiche, Martin Herold, Louis
Affiliations: GFZ - Helmholtz-Zentrum Potsdam - Deutsches GeoForschungsZentrum
Deforestation and subsequent land-use changes, particularly for agricultural production, are significant contributors to global greenhouse gas (GHG) emissions, exacerbating global warming and climate change. In the tropics in particular, the expansion of commodity crops such as soy, palm oil, rubber, cocoa and coffee, among others, has been a primary driver of deforestation and associated carbon emissions. The European Union (EU) Deforestation-Free Regulation (Regulation EU-2023/1115 on deforestation-free products, hereafter ‘EUDR’) aims to reduce the EU’s contribution to global deforestation and biodiversity loss by restricting the entry into, and commercialization on, the EU market of commodities linked to deforestation and forest degradation. Understanding the environmental impact of these commodities on deforestation is crucial for developing effective regulatory frameworks and supporting current efforts to mitigate the effect of food production on deforestation-related emissions. Disaggregated measurements of GHG emissions provide more accurate estimations of the climate impact of specific agricultural commodities, enabling targeted interventions and the evaluation of policies aimed at reducing emissions, such as the EUDR. This research aims to quantify the spatial and temporal dynamics of commodity crop expansion and to estimate the associated carbon emissions and removals from changes in land use in Colombia. Our methodological approach integrates a comprehensive reference dataset of crop types with a diverse array of remote sensing data (Landsat, Sentinel-1/2) and environmental variables to train state-of-the-art machine learning algorithms to classify land use types, particularly commodity crops, across diverse geographic regions. We then integrate these results with ancillary data, such as the European Space Agency's Climate Change Initiative (ESA-CCI) and Global Forest Watch (GFW), to estimate carbon emissions associated with post-deforestation land-use changes. 
Our analysis reveals significant variation in carbon loss among different crop types and subregions of Colombia, with pasture, maize and palm oil being the main drivers of carbon loss compared to crops like cacao and coffee. The Amazon subregion shows the highest carbon loss, highlighting the importance of enhancing sustainable land management practices in this threatened ecosystem. Our study demonstrates that disaggregated emission estimations associated with different crop types and land-use changes could contribute to the refinement of national GHG emission inventories. Expanding this study to regions with fragile or endangered ecosystems, particularly other tropical areas vulnerable to deforestation driven by land conversion for agricultural commodities, could facilitate effective policy implementation to reduce deforestation-related emissions and align with global climate goals.
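The disaggregation step described above amounts to attributing committed emissions to the post-clearing land use. The sketch below shows the standard bookkeeping (emission = cleared area × carbon-stock loss × 44/12 to convert carbon to CO₂); the crop names, areas and carbon densities are hypothetical and are not the study's results.

```python
# Illustrative sketch: disaggregating committed CO2 emissions from
# deforestation by the commodity that replaced the forest.

CO2_PER_C = 44.0 / 12.0  # molecular-weight ratio converting tC to tCO2

def emissions_by_crop(records):
    """records: iterable of (crop, area_ha, carbon_loss_tC_per_ha).
    Returns {crop: total committed emissions in tCO2}."""
    totals = {}
    for crop, area_ha, carbon_loss in records:
        totals[crop] = totals.get(crop, 0.0) + area_ha * carbon_loss * CO2_PER_C
    return totals

# Hypothetical deforestation records (crop, hectares cleared, tC/ha lost).
records = [
    ("pasture", 1200.0, 110.0),
    ("palm_oil", 300.0, 95.0),
    ("cacao", 150.0, 60.0),
    ("pasture", 800.0, 120.0),
]
totals = emissions_by_crop(records)  # per-crop tCO2 totals
```

Real estimates would replace the per-hectare loss with spatially explicit carbon densities (e.g. from ESA-CCI biomass layers) differenced against the post-conversion land use, but the per-crop aggregation is the same.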
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Session: B.01.02 Earth Observation accelerating Impact in International Development Assistance and Finance - PART 2

In this session, attendees will delve into an impact-oriented approach to accelerating the use of Earth Observation (EO) in support of international development assistance, including its integration in financing schemes. Presenters will provide in-depth insights into real-world application use cases across multiple thematic domains, implemented in developing countries in coordination with development and climate finance partner institutions. The session will prioritise examples showcasing the tangible impact on end-users in developing countries and the successful uptake of EO products and services by their counterparts. Counterparts here can be national governments or International Financial Institutions (IFIs), such as multilateral development banks (World Bank, ADB, IDB, EBRD) and specialised finance institutions (e.g. IFAD), as well as Financial Intermediary Funds (FIFs), most specifically the large global climate and environment funds (GCF, GEF, CIF, Adaptation Fund). Attendees can expect to gain valuable insights into how the process of streamlining EO in development efforts is (1) opening new market and operational roll-out opportunities for the EO industry, and (2) translating into impactful change on the ground, driving sustainable development outcomes worldwide.

Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: GDA Analytics & Processing Platform: supporting Agile EO Information Development activities

Authors: Simone Mantovani, Alessia Cattozzo, Mirko Sassi, Fabio Govoni, Hanna Koloszyc, Judith Hernandez, Carlos Doménech García, Patrick Griffiths
Affiliations: MEEO, GeoVille, earthpulse, GMV, ESA-ESRIN
The Analytics and Processing Platform (APP) is a user-oriented analytical environment developed under the European Space Agency’s Global Development Assistance (GDA) programme. In adherence to FAIR and open science principles, the Platform provides ten open-source, scalable and generic analytical EO capabilities. Additional capabilities will be integrated in the future through the Earth Observation Training Data Lab, GDA Agile EO Information Development activities, and other application package providers. This expandable ecosystem, powered by the European Space Agency's Network of Resources, embodies GDA's commitment to capacity building and collaborative development. The architecture of the APP ensures that users can interact with EO data regardless of their technical background. More specifically, the Platform offers intuitive widgets for quick capability execution, a webGIS for visualising outputs and comparing them with CDSE data and WMS layers, Jupyter notebooks for advanced analytical workflows, as well as a Swagger page for direct API consumption. Ongoing stakeholder engagement has already revealed promising application scenarios, including infrastructure damage assessment (in Sudan) and the monitoring of desertification/revegetation efforts (in Syria). By continuously exploring stakeholders’ information needs and working practices, the APP strives to advance GDA’s mission of mainstreaming EO in international development assistance and fostering equitable access to satellite-derived insights.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Accelerating the Impact of Earth Observation for Public Health in Support of International Development Assistance

Authors: Eirini Politi, Carsten Brockmann, Carlos Doménech, Juan Suárez, Guido Riembauer, Lennart Meine, Markus Eichhorn, Ali Ahmad, Guillaume Dubrasquet-Duval, Michel Bénet, Rachel Lowe, Bruno Moreira De Carvalho, Jolita Jancyte, Georgina Charnley, Pia Laue
Affiliations: Brockmann Consult GmbH, GMV, mundialis GmbH & Co. KG, Diginove, Barcelona Supercomputing Center
Increasing public health risks due to climate change and the sensitivity of infectious diseases to changing environmental factors are adding pressure to the existing socioeconomic challenges that public health faces around the world. International Financial Institutions (IFIs), such as the World Bank and the Asian Development Bank, have introduced agendas that target these challenges. By strengthening health systems, improving access to health infrastructure, increasing disease preparedness and resilience to climate-induced health risks, improving nutrition and providing sustainable solutions to strengthen health infrastructure, governance and financing, IFIs provide aid to national health agencies and directly affected local communities. Pivotal to the work financed by IFIs is access to relevant data and synoptic information on health and its background environmental or socioeconomic triggers. Earth Observation (EO) has been recognised as an essential source of information that can complement national data and support countries in the monitoring of key indicators related to health risks or factors of vulnerability. For example, EO is used in the surveillance, prevention and control of infectious diseases through the development of early warning systems and risk maps for diseases like malaria and dengue, both of which can be accelerated by the impacts of climate change. EO also helps assess the likelihood or severity of airborne and waterborne health hazards such as air pollution, wildfires, dust storms and algal blooms. Indirectly, climate also affects food and water access through more severe extreme events like droughts, flooding, storms, strong winds and sea level rise, whose risk and impact on population health can be assessed by combining EO data with other information sources. 
EO applications also support assessments of health infrastructure accessibility and vulnerability, particularly during natural disasters or crises, and provide useful information on nutrition and food security, the lack of which increases public health risks. The Global Development Assistance (GDA) Agile EO Information Development (AID) Public Health thematic activity aims to provide suitable, tailored and robust EO services and developments to IFI client projects, enabling them to improve existing public health assessment and improvement initiatives or add new context to them. This talk will present the specific real-world case studies we have been developing in collaboration with the World Bank and the Asian Development Bank and their client state beneficiaries, and how our impact-oriented approach helps to accelerate the use of EO in support of international development assistance. Even though the activity is still at an early stage, we will discuss our plan to maximise uptake of our EO products and services at the end of the activity.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: GDA Forest Management - Contributing to International Conventions and Regulations

Authors: Fabian Enßle, Dr. Sharon Gomez, Christophe Sannier
Affiliations: GAF AG
The ESA Global Development Assistance (GDA) programme has the overall objective of fully capitalising on the utility of Earth Observation (EO) in international development programmes. Building on the experiences and lessons learned within the precursor programme Earth Observation for Sustainable Development (EO4SD), the GDA aims for the adoption of EO in the global development initiatives of International Financing Institutions (IFIs) and the streamlining of EO in future development efforts. The GDA Forest Management thematic cluster (GDA Forest), initiated in September 2024 and led by GAF AG with a consortium of European partners, has two overall goals: 1) demonstrating the value of mainstreaming EO-based forest products and services in IFI programmes for improved forest management in Client States (CS); 2) assisting IFIs and CS with understanding, acceptance and adoption of the EO technology, its costs and sustainability, which will support the integration of the technology into IFI-funded initiatives and decision making in CS. The GDA Forest activity is jointly developing new EO-based Information Developments (EOIDs) to support IFIs and counterparts in addressing existing challenges. Monitoring forests is crucial, and satellite EO represents a cost-effective solution, providing global, comprehensive, accurate, repeatable and timely information that is invaluable in the planning, implementation and impact assessment of forest management activities at larger scales. Based on the product portfolio jointly developed during the EO4SD-Forest Monitoring cluster, the GDA Forest consortium is further enhancing the service and product specifications to comply with IFI needs and steer the adoption of EO-based solutions for forest monitoring along four main themes: Reducing Emissions from Deforestation and forest Degradation (REDD+), Forest Landscape Assessment and Planning, Mangrove & Protected Areas, and Zero Deforestation (ZD). 
These themes are supported by different EO-based products, which are further improved and aligned to user needs within selected GDA Forest Use Cases. These products include Forest Cover and Forest Area Change assessment, Tree Cover Density (TCD) mapping, Land Use and Land Cover Change information, Near Real Time (NRT) Tree Cover Disturbance detection, as well as Mangrove Area and Change assessments. A high priority is given to the use of Copernicus Sentinel satellite data, which is openly accessible while providing the temporal and spatial resolution needed to address the identified information requirements. Such data are used to enhance the efficiency and effectiveness of forest inventories (including mangroves), and GDA Forest will provide general forest resource and use information (data, map products, etc.) for sustainable forest management, planning and harvesting. In the domain of Landscape Spatial Planning and Sustainable Management, products support natural capital accounting, spatial planning, land-use modelling approaches and forest governance, and provide measures of the performance and effectiveness of the related initiatives. The use and integration of EO products into REDD+ workflows helps to track and verify the impacts of the forest sector and to ensure that forest and non-forest emissions are not underestimated or omitted in the forest sector layer. GDA Forest products can help to enhance the overall accuracy of deforestation estimations for the elaboration of Forest Reference Emission Levels (FREL) submissions, and could contribute to the implementation of innovative digital Measurement, Reporting and Verification (MRV) systems. 
The demonstration of EO-based early warning mechanisms through near real time monitoring of forest cover disturbance using Sentinel-1 radar data supports the analysis of potential drivers of deforestation, including identification of the expansion of agricultural land, growth of urban areas or development of illegal artisanal small-scale mining (ASM) activities. The NRT information is also important for ensuring that specific commodity value chains (e.g. cocoa, coffee, palm oil, wood) are free from deforestation, supporting countries in the implementation of policies related to the new Regulation (EU) 2023/1115 on deforestation-free products (EUDR). A first set of user engagement activities has been initiated with the World Bank, and there is evidence of high interest in the GDA Forest portfolio as a potential support to the newly launched Global Challenges Programme for ‘Forests for Development, Climate and Biodiversity’, which will focus on three main regions in Africa, South America and South-East Asia. Additionally, specific projects have been put forward by Bank experts for collaboration. The paper will present the final selected Use Cases of the GDA Forest activity and the selected EO applications. The use of EO in IFI programmes and its potential for market uptake will be discussed alongside the Use Case demonstrations.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Asset-Level Climate Risk Analysis of Energy Infrastructure Using Smart Tracing and Satellite Imagery

Authors: Anders Pedersen, Mr. Parth Khare, Ms. Clara Ivanescu, Mr. Laurens Hagendoorn, Ms. Elke Krätzschmar
Affiliations: ESMAP - World Bank, World Bank, NEO BV., IABG
The World Bank's Energy Sector Management Assistance Program (ESMAP), in partnership with ESA's Global Development Assistance (GDA) programme, has developed a transformative Earth Observation methodology that addresses critical infrastructure mapping challenges in resource-constrained environments. The Smart Tracing Energy Asset Mapping (STEAM) methodology demonstrates how innovative Earth Observation applications can deliver substantial cost efficiencies while maintaining high accuracy in power infrastructure mapping. ESMAP and ESA-GDA leveraged STEAM to deploy an asset-level method for assessing climate-related risks of floods, landslides and high winds to energy infrastructure. The STEAM methodology is a pioneering solution for cost-effective, large-scale detection of transmission infrastructure using satellite imagery and deep learning. Traditional mapping of transmission lines typically incurs prohibitive costs and resource demands, limiting applicability in low- and middle-income regions. STEAM addresses these challenges by leveraging a Tower Probability Map, a probabilistic model that selectively targets areas for high-resolution imagery acquisition, resulting in cost savings of up to 92% for pilots conducted in Bangladesh and the Dominican Republic. In Bangladesh, the World Bank and ESA-GDA team used the STEAM framework to integrate geospatial data on energy infrastructure with climate risk models to identify hyper-localized vulnerabilities within a 50-meter buffer zone around energy assets. This approach overlays asset locations with environmental and hazard data, such as flood exposure and landslide susceptibility, enabling targeted climate resilience and disaster response planning. The analytical framework can improve grid resilience, the hardening of current energy assets and O&M, and inform site selection for future energy infrastructure development. 
The methodology integrates Earth observation data with open-source platforms like OpenStreetMap to deliver actionable insights for infrastructure monitoring and planning. STEAM's design focuses on efficient data use and scalability, reducing reliance on comprehensive ground surveys and enabling its adaptation to diverse geographies and data availability contexts. These attributes make STEAM an essential tool for addressing infrastructure gaps in resource-constrained settings, offering replicable processes for governments and development agencies. The STEAM methodology demonstrates significant potential for transforming the detection and mapping of transmission infrastructure in resource-constrained environments. This novel approach addresses critical challenges in infrastructure mapping by integrating probability maps, deep learning and strategic sampling. The methodology's success in reducing imagery acquisition needs while maintaining high accuracy underscores its value for utilities and policymakers, particularly in developing countries where comprehensive grid data is often lacking. When applied to the pilot countries, STEAM demonstrated significant cost savings by reducing the required satellite imagery coverage, and hence image acquisition costs, to approximately 10 percent of each country's area, focusing the analysis on high-probability areas. Post-processing and quality assurance steps ensured the accuracy and completeness of the final maps; rigorous post-processing significantly enhanced results, with accuracy improvements ranging from 12 to 28 percentage points compared to the results before post-processing. This cost-effective and efficient approach is particularly well suited to countries lacking comprehensive geospatial data on their energy infrastructure. 
The innovative approach addresses critical challenges faced by utilities, policymakers, and communities in developing countries by providing accurate, cost-effective, and timely information on transmission assets. The high-precision mapping achieved through the STEAM methodology translates into tangible operational benefits for utilities. This level of detail enables utilities to make more informed decisions, potentially leading to significant improvements in grid efficiency and reliability. Ultimately, this technology has the potential to transform energy infrastructure management globally, leading to more resilient, efficient, and sustainable power systems. Granular, tower-level geospatial data is a game-changer for utility operations and asset management. By replacing outdated, approximate information with precise, up-to-date data, utilities can significantly enhance grid performance and efficiency. Optimized maintenance scheduling, efficient crew dispatch, and accelerated disaster response become possible through accurate tower location data. The precise location of each transmission tower is crucial for efficient crew dispatch, both for routine maintenance and emergency repairs, particularly in challenging terrains. In the event of natural disasters or other grid disturbances, accurate infrastructure locations can significantly reduce response times and improve service restoration, for example in mountainous terrain, where a metre's difference can significantly affect topography and therefore flooding risk (U.S. Department of Energy, 2024). Moreover, predictive maintenance strategies, enabled by correlating asset conditions with environmental factors, contribute to cost savings and improved grid reliability (Shayesteh et al., 2018). 
The data outputs can also support innovative asset management services such as drone-based inspections and can enable grid technologies for enhanced power flow optimization, real-time monitoring, and demand response (Mokhade et al., 2020). With this detailed asset-level data, utilities can achieve substantial improvements in operational efficiency, reducing costs while enhancing grid reliability and resilience against evolving challenges. The value of this methodology extends beyond day-to-day utility operations and could play a crucial role in climate resilience planning. As climate-related risks to energy infrastructure increase, accurate and regularly updated geospatial data becomes essential for identifying vulnerable sections of the grid based on terrain, vegetation, and climate projections. This information allows utilities and policymakers to develop targeted hardening strategies for at-risk infrastructure and improve disaster response planning (PLOS Climate, 2023). By providing a comprehensive and up-to-date view of transmission networks, our methodology supports more informed decision-making in climate adaptation strategies for the energy sector. In developing regions, where energy access remains a significant challenge, our methodology can support more efficient infrastructure planning and expansion efforts. A comprehensive mapping of existing infrastructure networks can help in planning for new lines, considering factors such as terrain, existing settlements, and environmental sensitivities (Gorsevski, P. V. et al., 2013). This can be particularly useful in the context of lower and middle-income countries, where reliable georeferenced data on energy infrastructure is often incomplete or missing entirely. Moreover, the ability to detect transmission infrastructure remotely can be critical, for example in areas affected by conflict or natural disasters and where on-the-ground georeferencing is not possible (Xu et al., 2024). 
The methodology can provide baselines for rapid damage assessments and help prioritize grid rehabilitation efforts, thereby enhancing the resilience of energy systems to external shocks. Governments can significantly benefit from the availability of accurate and accessible geospatial data on transmission infrastructure. By leveraging this information, governments can optimize strategic planning, enhancing grid reliability and disaster response capabilities. Furthermore, this data can be instrumental in accelerating the transition to a clean energy future by facilitating the identification of suitable locations for renewable energy projects and assessing grid integration challenges (IRENA, 2023). Ultimately, the combination of effective planning, resource optimization, and a focus on clean energy contributes to robust economic development and improved quality of life for citizens. Multilateral development banks (MDBs) can, together with ESA-GDA as geospatial partner, leverage this data to identify investment opportunities, assess project feasibility, and monitor the performance of energy infrastructure projects. MDBs and ESA-GDA can therefore play a crucial role in enabling data-driven decision making for energy infrastructure investments in developing countries. By providing detailed information on the existing grid, MDBs can give utilities and governments the information needed for investment decisions on grid expansions and new interconnections. Moreover, with access to precise geospatial data, MDBs can, in partnership with ESA-GDA, effectively monitor the performance of energy infrastructure projects, ensuring that investments deliver the expected outcomes, and identify areas for improvement. Ultimately, this data-driven approach strengthens MDBs' capacity to support the development of resilient and efficient energy systems in their target countries. 
Making detailed power grid information accessible to the public serves multiple purposes, from enhancing community safety to fostering innovation. This democratization of data empowers individuals and communities to make informed decisions about land use and development, ensuring safe coexistence with power infrastructure (Broto & Kirshner, 2020). By enhancing transparency in the energy sector, this data supports informed policymaking, investment planning, and public engagement. Moreover, it accelerates academic research and innovation in energy systems, enabling comprehensive studies on grid expansion, vulnerability assessments, and renewable energy integration (Heylen et al., 2018). This open approach to energy infrastructure data creates a foundation for cross-border energy planning and coordinated disaster response efforts. The publication of the method as a public good can contribute to innovations in grid management and planning, and thereby to a more sustainable and equitable energy future. The STEAM methodology contributes to the growing toolkit for energy system planning and management, with potential for global application. As the energy sector continues to evolve, facing challenges in sustainability, accessibility, and resilience, data-driven approaches like the one presented in this paper will play an increasingly important role. While further refinement and validation across diverse global contexts are necessary, this approach represents a significant step towards more informed decision-making in energy infrastructure planning and management worldwide. Future research could focus on expanding the application to a broader range of geographical contexts to validate its global robustness and expand its impact. Looking ahead, ESA-GDA can play a key role in expanding the use and impact of the methodology developed with ESMAP, thereby contributing to the climate resilience of energy infrastructure globally.
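The asset-level overlay described in the abstract (hazard data intersected with a 50-meter buffer around each asset) can be sketched as follows. This is an assumed, minimal illustration, not the STEAM implementation: the tower and flood coordinates are invented, and a production system would use projected coordinates and a spatial index rather than a brute-force haversine check.

```python
# Minimal sketch: flag assets whose 50 m buffer contains a hazard point,
# using great-circle (haversine) distance on lat/lon coordinates.
import math

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def exposed_assets(assets, hazard_points, buffer_m=50.0):
    """assets: {asset_id: (lat, lon)}; hazard_points: [(lat, lon), ...].
    Returns ids of assets with at least one hazard point in the buffer."""
    return {
        aid for aid, (alat, alon) in assets.items()
        if any(haversine_m(alat, alon, hlat, hlon) <= buffer_m
               for hlat, hlon in hazard_points)
    }

# Hypothetical towers and flood-exposure points.
towers = {"T1": (23.8100, 90.4125), "T2": (23.9000, 90.5000)}
floods = [(23.8101, 90.4125)]  # roughly 11 m north of T1
at_risk = exposed_assets(towers, floods)
```

The same per-asset loop generalises to any hazard layer (landslide susceptibility, wind) once the layer is reduced to points or sampled raster cells near each asset.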

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Connecting people – EO as a driver for knowledge-based finance decisions for multiple infrastructure projects in Uganda

Authors: Kristin Fleischer, Peter Schauer, Mattia Marconcini, Elke Kraetzschmar
Affiliations: IABG, DLR
Northern Uganda is, like few other regions, affected by a significant influx of refugees from neighbouring, conflict-affected countries, many of whom become part of the population in the long term. The latest UN figures, from November 2024, count over 1.7 million refugees hosted in the country as a whole. Within this setting, the existing infrastructure serving social and economic needs must be expanded. The World Bank is engaging in the region to extend and facilitate safe access of local communities to schools, markets, hospitals and other social services, in order to develop a safe and viable livelihood and to foster economic growth. The aim of making these investments sustainable and beneficial for as many people as possible is reflected in various Sustainable Development Goals, i.e. SDG 3 Good Health and Well-being, SDG 4 Quality Education, SDG 5 Gender Equality, and SDG 11 Sustainable Cities and Communities. Sustainable investment requires comprehensive knowledge of the status quo and dynamics in the focus region. The choice of input data, ideally ranging from an up-to-date situation picture and prior developments within the region to the environmental preconditions, greatly influences the impact an investment can achieve. Economists widely use statistical information during the project planning phase, often linked to administrative units, and combine it with on-site investigations once the investment projects enter the preparation phase. Within the ESA GDA - Transport and Infrastructure project, collaborations between the GDA team and the WB counterparts started to emphasise the use of highly granular EO data and information, as well as to raise awareness of the potential this data has for the decision-making process. The IFI activities focus on sustainable development for the local population, peacebuilding, and the integration of displaced persons into the communities, while addressing long-term needs for social stability and economic growth.
The presented results support multiple World Bank teams and projects in Uganda, where decision-making processes depend on reference data and statistics that overlap with reality only to a limited extent. While national statistics represent the local population, UN-based statistics focus solely on monitoring refugee camps. The baseline for the analysis outlined here is DLR's World Settlement Footprint (WSF) tracker, derived by jointly exploiting Sentinel-1 and Sentinel-2 imagery, which systematically outlines settlement extent growth at 10 m spatial resolution at a six-month pace from July 2016 to (so far) July 2024. Whereas statistics keep the local population and refugees separate, the inclusive WSF supports understanding of the established refugee camps (location and extent) and their impact on the region over time. One main objective of this engagement is to provide key figures on the distribution and categories of schools and their adequate accessibility to pupils (distance and time, safety). The team was fortunate to benefit from the latest Uganda Census 2024, conducted by the Uganda Bureau of Statistics, which provides details on population structure. Linking the latter to the settlement extent allows local demand and deficiencies to be estimated, considering in-depth aspects such as age structure and gender, as well as potential population growth in the coming years. Another WB project addressed here focuses on the expansion and extension of the road network to support local transport and to lower barriers to exchange and connection within and between the cities and refugee camps. The geospatial analysis conducted provides valuable input for planning impactful investments.
These investments may relate:
• to encouraging favourable economic preconditions by supporting and simplifying public transport, in order to enhance the mobility of people (labour force) and goods; or
• to selecting the sites best suited for new schools or the extension of existing ones, positively influencing public transport (extension and densification as a commuter medium), making targeted investments in safer transport infrastructure (e.g. traffic lights or speed reduction along pupils' commuting routes), and at the same time preventing road extensions that would create more dangerous routes to school.
The examples shown include and combine open information layers, considering their advantages and limitations, and conjoin these with the most recent EO data and geospatial analytics to arrive at highly transferable solutions. Although the approaches are kept generic in a first step, their ability to be subsequently tailored to various specifics is essential. Tailoring here ranges from (a) the theme addressed (e.g. schools, hospitals, social services, markets, or other commercial centres), to (b) the response to local dynamics (urban growth characteristics), and (c) enrichment with additional information (such as data collected via citizen science, e.g. counting bus passengers, road conditions, etc.). The engagement process with the bank teams, as an agile development approach, enables both sides to tackle new ideas and options within the development process. It allows the service provider to better understand the challenges of the bank teams and local stakeholders, and to respond appropriately to their needs (different service combinations, scale and information depth), keeping in mind the overarching aspiration to achieve transferable, scalable results.
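The school-accessibility figures described above (distance from settlements to the nearest school) come down to a nearest-facility query. A minimal sketch, with invented coordinates and an assumed 2 km threshold rather than the project's actual data or parameters:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical projected coordinates (metres) of settlement locations
# (e.g. WSF pixels) and of the schools serving them.
settlements = np.array([[0.0, 0.0], [1000.0, 500.0], [9000.0, 9000.0]])
schools = np.array([[200.0, 100.0], [3900.0, 4100.0]])

# Distance to (and index of) the nearest school for every settlement.
tree = cKDTree(schools)
dist, idx = tree.query(settlements)

# Flag settlements beyond an assumed 2 km threshold as under-served.
WALK_THRESHOLD_M = 2000.0
under_served = dist > WALK_THRESHOLD_M
```

A real analysis would replace straight-line distance with travel time over the road network, but the query structure stays the same.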

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Operationalizing the Use of Earth Observation Data for Agricultural Statistics: The Case of Acreage Estimates in Pakistan

Authors: Boris Norgaard, Sophie Bontemps, Pierre Houdmont, Olivier Durand, Doctor Babur Wasim, Pierre Defourny
Affiliations: UClouvain-Geomatics, World Bank, Asian Development Bank
Pakistan is the fifth most populous country in the world and is expected to experience significant population growth in the coming decades, posing serious challenges to food security. At the same time, climate change is increasing the frequency of extreme weather events, such as droughts and floods, which jeopardize food production. Agriculture thus plays a crucial role in the country's resilience and development while experiencing significant pressures. In addition to these food security issues, the Ministry of Agriculture faces the additional challenge of transitioning to more sustainable practices, particularly in water management. The two major staple crops in Pakistan are wheat and rice, with cotton and sugarcane the two major cash crops. The agricultural calendar comprises two main cropping seasons, i.e. Kharif from April to October-November and Rabi from October-November to April. More than 82% of the cultivated land is irrigated and 18% is rainfed, which emphasizes the importance of reaching sustainable water use in the near future. Pakistan has significantly lower water availability than other countries, classifying it as “water-stressed” and approaching “water scarcity”. The “National Water Policy 2018” has identified this emerging water crisis and aims to provide an overall policy framework and guidelines for comprehensive policy action. In this context, the integration of technologies such as remote sensing into agricultural monitoring systems presents a significant opportunity for evidence-based decision-making. The ability to provide timely, accurate, and cost-efficient data on crop acreage can complement traditional survey methods, enabling better planning and resource allocation. Within the ESA Global Development Assistance (GDA) programme, a collaboration was initiated with the World Bank to demonstrate the usefulness of EO data for estimating winter wheat acreage during the Rabi season in Sindh Province.
This collaboration was then widened to the Asian Development Bank (ADB) in order to scale up over four provinces (Punjab, Sindh, Balochistan and Khyber Pakhtunkhwa), focusing on the main crops of the summer season at provincial level. For both experiments, area sampling frames were designed to collect statistically sound data compatible with EO data, with the aim of estimating the acreage of the main crops in both cropping seasons. During the Rabi season, the survey was jointly conducted by the Sindh Crop Reporting Service and our team, ensuring capacity building in the field. In total, 2,240 points were collected by two enumeration teams during a 16-day field mission. For the 2024 Kharif season field campaign, the survey was conducted autonomously by the Sindh CRS team, supervised remotely by us. The data were collected as expected, in sufficient quantity and with good quality, showing that the lessons from the Rabi season had been taken up by the CRS staff. In the other provinces, the ADB staff were trained and then conducted the field campaign in coordination with local provincial staff. For this season, more than 21,000 points were collected by enumeration teams scattered across the four provinces. The ESA Sen4Stat toolbox was used to automate the EO data processing pipeline and generate seasonal crop type maps using state-of-the-art methods. The accuracy of the obtained maps was good, reaching an F1-score of 0.85 for the Rabi season wheat map, and between 0.85 and 0.93 for rice, cotton and sugarcane in the Kharif season. Naturally, the quality of the collected ground data contributed to the quality of these maps. The maps were then combined with the agricultural surveys through regression estimators to obtain acreage estimates. For the Rabi season in Sindh, the estimate obtained was 445,000 hectares of wheat, which is aligned with the official statistics provided by the CRS.
For the summer crops, the acreage estimates obtained were also reliable and of a comparable order of magnitude to official sources. The added value of EO data highlighted by these pilots was the increased reliability of the statistics (reduced estimation error when using EO data), the timeliness (estimates were available a few weeks after the end of the season), and the possibility of obtaining these estimates by district and not only at province level. Capacity building for the uptake of EO technologies is currently ongoing. Operationalizing the use of Sen4Stat – and of EO data in general – requires a step-by-step approach, starting with regional pilots and specific objectives in order to raise awareness and to demonstrate and convince the governments and local stakeholders of the added value of EO-based information. This step has been successfully achieved, and we now need to expand both the size of the pilots and their complexity to make sure the proposed solution is fully operational and performs as expected. Capacity building will be fully part of this process, to ensure that the skills for proper use of the new technologies are established and integrated locally.
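The regression estimator mentioned above can be illustrated with toy numbers (the fractions, map mean and area below are invented, not the Pakistan figures): the survey-observed crop fraction on sampled segments is regressed on the map-derived fraction, and the survey mean is corrected using the map mean over the whole province.

```python
import numpy as np

# Hypothetical per-segment fractions of wheat: y from the field survey,
# x from the EO-derived crop map, on the same sampled segments.
y = np.array([0.30, 0.45, 0.10, 0.60, 0.25])
x = np.array([0.28, 0.50, 0.12, 0.55, 0.20])

# Map-derived wheat fraction over the whole province, and its area.
X_bar_pop = 0.35
total_area_ha = 1_000_000

# Regression coefficient fitted on the sampled segments.
b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Regression estimator: correct the survey mean by the map discrepancy.
y_reg = y.mean() + b * (X_bar_pop - x.mean())
acreage_ha = y_reg * total_area_ha
```

Because the map covers every segment while the survey covers only a sample, the correction term typically reduces the estimation variance relative to the survey mean alone.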

Tuesday 24 June 16:15 - 17:45 (Hall F1)

Session: C.03.12 Sentinel-1 Mission: Advances in Remote Sensing After a Decade in Space

The Sentinel-1 mission has reached a decade in space. Since its launch, Sentinel-1 has revolutionized SAR-based remote sensing, becoming a cornerstone of Earth observation with its unparalleled capabilities and global coverage.

The session will address the way Sentinel-1 has transformed our understanding of the Earth's surface dynamics and enabled groundbreaking applications across various domains. From land cover monitoring to mapping natural disasters, assessing agricultural practices, studying urban ground motion, evaluating forest resources, and exploring coastal and marine environments, Sentinel-1 has been instrumental in advancing our knowledge and addressing critical societal challenges.

The session will present cutting-edge research and innovative methodologies, showcasing the latest developments in geophysical retrieval techniques, data fusion with complementary sensors, and the integration of machine learning and artificial intelligence approaches for enhanced analysis and interpretation of Sentinel-1 data.

Moreover, this session will highlight the importance of international cooperation in leveraging Sentinel-1 data for global initiatives and fostering collaboration among diverse stakeholders. Through collaborative efforts, we can maximize the potential of Sentinel-1 and amplify its impact on environmental monitoring, disaster management, and sustainable development worldwide.

Presentations and speakers:


A decade of advancing Forest Disturbance Monitoring and Alerting with Sentinel-1: Progress and Future Directions


  • Johannes Reiche - WUR

Why Sentinel-1 has been a game changer for monitoring dynamic hydrological processes


  • Wolfgang Wagner - TUW

Sentinel-1 reveals climatic changes in the Arctic sea ice at unprecedented detail


  • Anton Korosov - NERSC

Sentinel-1 operational DInSAR services for monitoring surface displacements of the Italian volcanoes: 10 years of observations and data analysis


  • Riccardo Lanari - IREA / CNR

A Decade of Ice Sheet Monitoring Using Sentinel-1 SAR Data: Advancements and Opportunities


  • Thomas Nagler - Enveo

Fostering Tropical Cyclone research and applications with Synthetic Aperture Radar


  • Alexis Mouche - Ifremer

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Session: D.02.11 Super-resolution in Earth Observation: The AI change of paradigm

The design of the Sentinel-2 sensor, with spatial resolutions of 10 m, 20 m and 60 m for different spectral bands, was, in the context of the resources now offered by deep learning methods, a key turning point for the field of super-resolution. Spatial resolution is a characteristic of the imaging sensor, i.e. the bandwidth of its transfer function; super-resolution means enlarging the range of spatial frequencies and the bandwidth of that transfer function. Classical approaches treated this mainly in two ways: i) as an ill-posed inverse problem, with solutions constrained by strong hypotheses that are very seldom fulfilled in practical cases; or ii) based on a physical model, as in pansharpening, the design of optical sensors with a half-pixel shift in the array, or, in the case of SAR, wavenumber tessellation or the use of information from the side lobes of multistatic SAR. In reality, super-resolution is a much broader area: it may also refer to the wavelength bandwidth of multi- or hyperspectral sensors, the radiometric resolution, the characterization of single-pixel cameras based on compressive sensing, 3D estimation in SAR tomography, an enhanced “information” resolution (e.g., estimating tree density from a low-resolution observation instead of counting trees in very high resolution), or enhanced resolution of ocean wind estimation from SAR observations.

With the advent of deep learning, super-resolution entered a new era. Deep models with huge numbers of parameters, trained on big datasets, opened a new alternative for super-resolution: data prediction applied to a low-resolution sensor by training a model with high-resolution data. The new paradigm no longer requires strong hypotheses, but it suffers from the black-box syndrome of deep learning. New methods are therefore required, such as hybrid methods using the sensor image formation models, deriving consistency criteria for the physical parameters, and verifying cal/val criteria for the super-resolved products. The session invites submissions for any type of EO data and will address these new challenges for the Copernicus and Earth Explorer or related sensors.

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: Learning Sentinel-2 Multi-Date Super-Resolution by Self-Supervision

Authors: Jérémy Anger, Anderson Nogueira Cotrim, Gabriele Facciolo
Affiliations: Kayrros, ENS Paris-Saclay, University of Campinas
Super-resolution (SR) is an important task in satellite imagery analysis, enhancing spatial resolution to recover finer details essential for applications like environmental monitoring, urban planning, and disaster response. While many state-of-the-art SR methods rely on cross-sensor datasets, this dependency introduces challenges such as radiometric and spectral inaccuracies, geometric distortions, and temporal mismatches. To address these issues, we previously proposed a self-supervised framework for single-frame SR on Sentinel-2 L1B imagery [1], leveraging overlapping regions between the Multi-Spectral Instrument (MSI) detectors to train an SR network without requiring ground-truth high-resolution data. Building on this foundation, we now present significant advancements that improve performance and broaden the applicability of our approach. First, we extend the framework to Sentinel-2 L1C and L2A imagery, increasing usability for real-world applications. During training, paired patches from overlapping regions of L1C and corresponding L1B imagery are used. As in [1], the L1B imagery is used to supervise the training, containing complementary aliased information with high radiometric accuracy despite minor geometric misalignments caused by the sub-second acquisition delay between detectors. Dense optical flow estimation is employed to correct these disparities, ensuring accurate alignment. Remarkably, the method generalizes well to L2A imagery, even though training is conducted on L1C inputs. Second, we incorporate the 20m spectral bands of Sentinel-2, previously excluded in [1]. These bands are upsampled and concatenated to the 10m input bands. The self-supervised training framework is adapted to include these additional inputs, achieving a restoration of both the 10m and the 20m bands to 5m/pixel resolution. Experiments demonstrate that the inclusion of 10m bands enhances the restoration quality of the 20m bands, leading to better overall performance. 
Finally, we extend our method to a multi-frame SR setting by adopting a state-of-the-art architecture. Using a permutation-invariant network, our model supports both single-image and multi-date scenarios, handling between 1 and 15 frames as input. Multi-frame inputs mitigate the inherent limitations of single-frame SR by leveraging complementary information across time, even in suboptimal acquisitions. We evaluate restoration quality, temporal stability, and robustness against scene changes, demonstrating the method's suitability for tasks like change detection and land monitoring. These improvements significantly advance the state of self-supervised super-resolution for Sentinel-2 imagery, providing superior accuracy, versatility, and resilience for a wide range of satellite imagery applications. References: [1] Nguyen, Ngoc Long, et al. "L1BSR: Exploiting detector overlap for self-supervised single-image super-resolution of Sentinel-2 L1b imagery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
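The detector-overlap supervision described above can be caricatured in a few lines: the complementary view must be realigned before it can supervise the loss. In this sketch a known integer shift stands in for the dense optical flow estimation, and `view_a` stands in for the network prediction; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic overlap crop seen by two adjacent detectors: the second view
# is the same scene displaced by a small shift (known here; estimated by
# dense optical flow in the actual method).
scene = rng.random((64, 64))
shift = (0, 2)
view_a = scene
view_b = np.roll(scene, shift, axis=(0, 1))

# Realign the complementary view before using it as the training target.
aligned_b = np.roll(view_b, (-shift[0], -shift[1]), axis=(0, 1))

# L1 supervision: without alignment the loss is dominated by the
# misregistration rather than by reconstruction error.
l1_unaligned = np.abs(view_a - view_b).mean()
l1_aligned = np.abs(view_a - aligned_b).mean()
```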

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: Toward Real-World Hyperspectral Image Super-Resolution

Authors: Paweł Kowaleczko, Maciej Ziaja, Daniel Kostrzewa, Michal Kawulok
Affiliations: KP Labs, Silesian University of Technology, Warsaw University of Technology
Hyperspectral images (HSIs) are a valuable source of information that has proven useful in a variety of remote sensing applications, including Earth surface classification, precision agriculture, environmental monitoring, and more. However, the high spectral resolution of HSIs is achieved at the cost of decreased spatial resolution, which is insufficient in many practical scenarios. The problem of super-resolving HSIs, aimed at increasing the spatial resolution of the spectral bands, is therefore an actively explored field of remote sensing. This process can be performed either by relying solely on a hyperspectral cube, or by exploiting an auxiliary source of high-resolution (HR) information, as is done in pansharpening. In both cases, the state-of-the-art techniques are based on deep learning, and their reconstruction quality heavily depends on the available training data. An important limitation of super-resolution (SR) of HSIs lies in the use of simulated data for training: low-resolution (LR) spectral bands are obtained by treating an original HSI (later used as the HR reference) with Wald's protocol, which degrades the individual channels and decreases their spatial resolution. Although this process allows amounts of data sufficient for training deep models to be generated, the reported results are often overoptimistic and cannot be reproduced for original (i.e., not downsampled) HSIs due to the domain gap between simulated and real-world datasets. This problem is also inherent in SR of single-channel or multispectral images, and an increasing number of such methods have already been trained with real-world datasets comprising LR and HR images acquired by sensors of different resolutions.
While a few such benchmarks exist (e.g., the PROBA-V dataset published by the European Space Agency, or the WorldStrat and MuS2 benchmarks that match Sentinel-2 images with HR references acquired with SPOT and WorldView-2 data), creating real-world datasets composed of LR and HR HSIs would be much more challenging and costly. In the research reported here, we focus on developing real-world HSI SR methods. First, our efforts are concerned with task-oriented validation, in which we evaluate the super-resolved HSIs in specific use cases. These include real-life applications that exploit various features of HSIs, thereby verifying whether SR allows for information gain in the spatial domain and whether the spectral properties are preserved. Furthermore, we demonstrate how the existing real-world datasets can be exploited for training deep networks that super-resolve HSIs: we use them alongside the simulated hyperspectral data, and we employ them to improve the simulation itself. Finally, we show that multi-image SR techniques trained on real-world datasets can be applied to panchromatic images in order to enhance the high-frequency details of the pansharpened spectral bands. In our study, we exploit HSIs acquired within the PRISMA mission and report both quantitative and qualitative results that overall confirm the effectiveness of the proposed approaches.
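Wald's protocol, as described above, turns an original HSI into an LR/HR training pair by degrading and decimating each band. A minimal sketch, where the Gaussian blur and factor-4 decimation are illustrative stand-ins rather than the authors' exact simulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# Hypothetical HR hyperspectral cube: (bands, rows, cols).
hr = rng.random((4, 32, 32))

def wald_degrade(cube, factor=4, sigma=1.0):
    """Simulate the LR acquisition: band-wise Gaussian blur (a stand-in
    for the sensor MTF), then decimation by the resolution factor."""
    blurred = np.stack([gaussian_filter(band, sigma) for band in cube])
    return blurred[:, ::factor, ::factor]

# LR input for training; the original cube serves as the HR reference.
lr = wald_degrade(hr)
```

The domain gap discussed in the abstract arises precisely when this synthetic degradation does not match the real sensor's blur and noise.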

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: Machine learning for population displacement assessment in northern Afghanistan.

Authors: Maximilien Houël
Affiliations: SISTEMA GmbH
In crisis contexts, such as Afghanistan from 2021 onwards, monitoring population migration is necessary to understand local (national, sub-national, cross-border) pressure. Such contexts make on-site assessment difficult, but Earth Observation (EO) offers capabilities to follow the evolution of a territory over time, enabling near-real-time monitoring. Within the Copernicus programme, the Sentinel-2 family provides continuous optical imagery with high spatial and temporal resolution (10 m, at most every 5 days). Such data cannot monitor population directly, but it provides information on geographic objects that can be used as a proxy. Indeed, Sentinel-2 allows formal and informal settlements to be identified and used as an indicator of population settling in or leaving an area. The proposed work focuses on the border cities between Afghanistan, Tajikistan and Uzbekistan, namely Mazar, Kholm, Konduz and Khwahan in Afghanistan, Balkh and Khorog in Tajikistan, and Termiz in Uzbekistan. The analysis was performed for the year 2022, with 2020 and 2021 as references for changes. The methodology foresees three main steps. First, Sentinel-2 provides optical imagery at 10 m; to improve the image analysis, a resolution enhancement was applied through a Super-Resolution (SR) model. The model is trained with Sentinel-2 visible bands as input and a mixed dataset of PlanetScope and WorldView-2 imagery as reference. The architecture corresponds to the state-of-the-art Enhanced Deep Super-Resolution (EDSR), known to preserve the overall structure of the input data as much as possible. This step yields Sentinel-2 imagery at a new spatial resolution of 3.3 m with a consistent spectral signature, used as input to the subsequent processing steps. Second, on top of the super-resolution, a UNet with ResNet blocks was developed to perform a segmentation task focused especially on buildings.
The reference corresponds to a mix of several open building-layer datasets, such as OpenStreetMap, the Microsoft building dataset and Google Open Buildings. With the super-resolved Sentinel-2 images, an automatic detection of buildings is generated. This model provides building masks for all dates over the area of interest and can then identify potential changes over time. Finally, an object-based detection algorithm is used with the building layers to extract the changes specifically: new or removed buildings over time. The data analysis workflow allowed new settlements to be identified on the edge of the Tajikistan cities that are directly connected with Afghan cities, leading to the hypothesis of population movement from one country to the other during the analysed years. In the city of Balkh in Tajikistan, three main areas with changes were identified: two, in the east and south of the city, showing the installation of shelters over agricultural fields; the third closer to the city centre, with a decrease in urban vegetation for new settlements, as well as an increase in greenhouses directly linked to increased agricultural activity over the area, itself linked to a growing population to sustain. The city of Mazar in Afghanistan shows increased urbanization in both the north and south of the city following road arrangements; moreover, a block organization of urbanization can be spotted being filled in through the years. Termiz in Uzbekistan shows a new neighbourhood with increasing urbanization in the north-west, with new settlements and new concrete roads linking the neighbourhood to the city centre. The other investigated areas, such as Kholm, Konduz, Khwahan and Khorog, did not show major changes over time, even though they are located close to the borders and on main road links. The developed methodology provides a generic and automatic pipeline to increase the flexibility and speed of analysis.
The workflow can then be applied to any area of interest, providing material for reports and maps for decision-making purposes.
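At pixel level, the change-extraction step above reduces to boolean differences between the per-date building masks; the object-based analysis then groups and filters these pixels. A toy sketch with made-up masks:

```python
import numpy as np

# Hypothetical binary building masks predicted for two dates.
mask_2020 = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 0, 0]], dtype=bool)
mask_2022 = np.array([[1, 1, 0, 1],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]], dtype=bool)

# New buildings: built-up at the later date only; removed: the reverse.
new_buildings = mask_2022 & ~mask_2020
removed_buildings = mask_2020 & ~mask_2022

new_area_px = int(new_buildings.sum())
removed_area_px = int(removed_buildings.sum())
```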

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: A data fusion method for Sentinel 2 super-resolution via diffusion models learned using harmonized NAIP images

Authors: Muhammad Sarmad, Michael C. Kampffmeyer, Arnt-Børre Salberg
Affiliations: Norwegian Computing Center, UiT The Arctic University of Norway
The escalating demand for high-resolution Earth Observation (EO) data for various applications has significantly influenced advancements in image processing techniques. This study proposes a workflow to super-resolve the 12 spectral bands of Sentinel-2 Level-2A imagery to a ground sampling distance of 2.5 m. The method leverages a hybrid approach, integrating advanced diffusion models with image fusion techniques. A critical component of the proposed methodology is the super-resolution of the Sentinel-2 RGB bands; the resulting super-resolved RGB image subsequently serves in the image fusion pipeline that super-resolves the remaining spectral bands. The super-resolution algorithm is based on a diffusion model and is trained on the extensive, freely available National Agriculture Imagery Program (NAIP) dataset of aerial images. To make the super-resolution algorithm, trained on NAIP images, applicable to Sentinel-2 imagery, image harmonisation and degradation were necessary to compensate for the inherent differences between NAIP and Sentinel-2 imagery. To address this challenge, we utilised a sophisticated degradation and harmonisation model that accurately simulates Sentinel-2 images from NAIP data, ensuring the harmonised NAIP images closely mimic the characteristics of Sentinel-2 observations after resolution reduction. To investigate whether learning the diffusion model on a large dataset of airborne images like NAIP provides better results than learning it on a smaller satellite-based dataset like WorldStrat's high-resolution SPOT images, we performed a comparative analysis. The results demonstrate that models trained with harmonised and correctly simulated datasets like NAIP significantly outperform not only those trained directly on SPOT images but also other existing super-resolution models.
This finding reveals that learning with more data can be beneficial if the data is properly harmonised and degraded to match the Sentinel-2 images. We performed a comprehensive evaluation using the recently established open-SR test methodology to validate the proposed model across multiple super-resolution metrics. This testing framework rigorously evaluates the super-resolution model on metrics beyond the traditional PSNR, SSIM, and LPIPS: the open-SR test measures the model's consistency, synthesis, and correctness. The proposed super-resolution model outperformed several current state-of-the-art models in this comprehensive framework. In addition, visual comparison further established the superior performance of our model in both urban and rural scenarios. An important component of the proposed model is the super-resolution of all 12 Sentinel-2 Level-2A bands, contrary to previous work, which has mainly focused on RGB band super-resolution. The proposed fusion pipeline successfully utilises the super-resolved image to obtain an enhanced 12-band Sentinel-2 image, similar to pansharpening techniques. We show qualitative and quantitative results on all 12 bands that demonstrate the seamless performance of the fusion method in super-resolution. This study not only showcases the potential of combining AI-driven super-resolution models with image fusion techniques for enhancing EO data resolution but also addresses the critical challenges posed by the diversity of data sources and the necessity for accurate generative models in training neural networks for super-resolution tasks.
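The radiometric side of the harmonisation step can be hinted at with a per-band linear gain/offset fit between co-registered samples. This is a deliberately minimal stand-in (the abstract describes a considerably more sophisticated harmonisation model), and all numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic co-registered reflectance samples for one band: aerial
# (NAIP-like) values and the satellite (Sentinel-2-like) values they
# should resemble after harmonisation.
naip = rng.random(500)
s2 = 0.8 * naip + 0.05 + rng.normal(0.0, 0.01, 500)

# Least-squares gain/offset so that gain * naip + offset ≈ s2.
A = np.vstack([naip, np.ones_like(naip)]).T
(gain, offset), *_ = np.linalg.lstsq(A, s2, rcond=None)

harmonized = gain * naip + offset
residual = np.abs(harmonized - s2).mean()
```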

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: Challenges in Sentinel-2 Single Image Super-Resolution for quantitative remote-sensing

Authors: Julien Michel, Ekaterina Kalinicheva, Jordi Inglada
Affiliations: CESBIO (Université de Toulouse, CNES, CNRS, INRAE, IRD, UT3)
Deep-learning based Single Image Super-Resolution (SISR) of Sentinel-2 images has received a lot of attention over the past decade, with target resolutions ranging from 5 m down to 1 m. Training and evaluating such deep-learning models relies either on simulated datasets, where Sentinel-2 images are simulated from High Resolution images of another sensor at the target resolution, or on so-called cross-sensor datasets, leveraging near-simultaneous acquisitions of Sentinel-2 images and images from another sensor at the target resolution. Examples of such cross-sensor datasets include the Sen2Venµs dataset [1] and the WorldStrat dataset [2]. With both simulated and cross-sensor datasets, however, inconsistencies between Sentinel-2 and the high-resolution sensor, also referred to as a domain gap, can impair proper training and evaluation. For simulated datasets, this gap mostly occurs at inference time: a model trained with simulated Sentinel-2 data may react poorly to real Sentinel-2 data due to a misfit or incomplete simulation process. With cross-sensor datasets, the gap occurs during the training stage: unwanted geometric and radiometric distortions inherent to the cross-sensor setting will be learned by the model during training, resulting in geometric distortion and loss of radiometric precision at inference time. Moreover, evaluating model performance using cross-sensor datasets is also affected by the domain gap, as the usual Image Quality metrics may be affected by radiometric and geometric distortion. In the frame of the Horizon Europe EVOLAND project, which has a dedicated work package on the super-resolution of Sentinel-2 images, we have made several findings and contributions towards solving the domain gap issues caused by cross-sensor datasets in SISR. Our main contributions are as follows.
1) We demonstrated that most Image Quality (IQ) metrics usually used for SISR evaluation are sensitive to radiometric and geometric distortions. For instance, Peak Signal to Noise Ratio (PSNR), one of the most widely used metrics, can no longer properly rank images with different levels of blur with respect to a reference image if there is more than 1 high resolution pixel of registration error. Such metrics cannot be trusted for the evaluation of cross-sensor SISR.
2) We proposed a new set of spatial frequency domain metrics to measure the spatial resolution improvement. These metrics are insensitive to radiometric and geometric distortions.
3) We proposed an auxiliary Optical Flow UNet that can be used to control geometric distortion during training, and also to measure the amount of learnt geometric distortion during evaluation.
4) We proposed a training and evaluation framework for cross-sensor SISR that prevents geometric and radiometric distortions from leaking into the model during training and from impairing proper evaluation.
5) Using a vanilla ESRGAN [4] on both the Sen2Venµs and Worldstrat datasets, we demonstrated that, unless a proper training strategy such as the one proposed above is used, the geometric and radiometric distortions of cross-sensor datasets are indeed learnt by the models, which then distort the input Sentinel-2 images at inference time.
These contributions are summarized in a journal paper [3] currently under review. Additionally, we developed a model that super-resolves 10 Sentinel-2 spectral bands, including the Red-Edge and SWIR bands, to 5 m, using a simulated dataset derived from Sen2Venµs; it can be compared to cross-sensor models thanks to the proposed metrics.
While it performs only a modest super-resolution factor, this model is, to the best of our knowledge, the only one to jointly process 10 Sentinel-2 bands, and it shines in its radiometric faithfulness with respect to the input Sentinel-2 images. To facilitate the use of this model, we have published open-source inference code [5] that allows applying the model to full Sentinel-2 products. In this talk, we will present an overview of these findings, focusing on the lessons learned during the development of the SISR models in EVOLAND. In particular, we will focus on the challenges posed by the domain gap in cross-sensor datasets and how they can be overcome, for more faithful SISR models as well as more confidence and reliability in the comparison of SISR models in future research. End users and developers of downstream applications will also learn more about the quality of SISR images and about our ready-to-use, publicly available model.
[1] Michel, J., Vinasco-Salinas, J., Inglada, J., & Hagolle, O. (2023). Correction: Michel et al. SEN2VENµS, a Dataset for the Training of Sentinel-2 Super-Resolution Algorithms. Data 2022, 7, 96. Data, 8(3), 51. https://doi.org/10.3390/data8030051
[2] Cornebise, J., Oršolić, I., & Kalaitzis, F. (2022). Open high-resolution satellite imagery: The WorldStrat dataset – with application to super-resolution. Advances in Neural Information Processing Systems, 35, 25979-25991.
[3] Michel, J., Kalinicheva, E., & Inglada, J. (2024). Revisiting remote sensing cross-sensor Single Image Super-Resolution: the overlooked impact of geometric and radiometric distortion. ⟨hal-04723225⟩ (Submitted to IEEE TGRS)
[4] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., ... & Change Loy, C. (2018). ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops.
[5] https://github.com/Evoland-Land-Monitoring-Evolution/sentinel2_superresolution
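The sensitivity of PSNR to misregistration (contribution 1 above) can be illustrated with a toy example; this is a hedged sketch with synthetic images, not the study's data or metric code. A perfectly sharp copy shifted by one pixel scores worse than a heavily blurred but aligned copy:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# High-frequency reference: a binary checkerboard (worst case for shifts).
n = 32
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
ref = ((ii + jj) % 2).astype(float)

# Candidate 1: perfectly sharp copy, misregistered by a single pixel.
shifted = np.roll(ref, 1, axis=1)

# Candidate 2: aligned but heavily blurred copy (3x3 box filter, wrap borders).
blurred = sum(np.roll(np.roll(ref, di, axis=0), dj, axis=1)
              for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0

sharp_score = psnr(ref, shifted)   # 0.0 dB: the shift flips every pixel
blurry_score = psnr(ref, blurred)  # ~7.0 dB: heavy blur still scores higher
```

PSNR ranks the blurred image above the sharper one, which is why such metrics cannot be trusted for cross-sensor SISR evaluation.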
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: Trustworthy Super-Resolution of Sentinel-2 Products Using Latent Diffusion and Their Applicability to Building Delineation and Flood Detection

Authors: Simon Donike, Cesar Aybar, Enrique Portalés-Julià, Samuel Hollendonner, Luis Gómez-Chova, Dr. Freddie Kalaitzis
Affiliations: University Of Valencia, University of Oxford, TU Vienna
The accessibility of high temporal-resolution Sentinel-2 (S2) multispectral imagery contrasts starkly with the scarcity of high-resolution satellite data, which is often commercially restrictive. Bridging this gap through super-resolution techniques offers a transformative potential for various remote sensing applications, from environmental monitoring to urban planning and disaster management. This research introduces a novel approach employing latent diffusion models (LDMs) to enhance the spatial resolution of S2 imagery by a factor of four, achieving 2.5 m resolution from the nominal 10 m of the RGB-NIR bands. Included in the final product are pixel-wise confidence metrics, giving users the ability to judge the SR accuracy for their specific downstream tasks. In addition, an extension of this project uses the introduced high-frequency detail of the RGB-NIR bands to enhance the 20 m bands of S2. Our method adapts latent diffusion techniques to the unique challenges of multispectral remote sensing data, which necessitates maintaining high spectral fidelity while introducing realistic textural details. Our approach exploits the generative capabilities of LDMs guided by an encoding and conditioning mechanism specifically designed for remote sensing imagery. This mechanism ensures spectral consistency by utilizing low-resolution images to condition the diffusion process, thereby aligning generated high-resolution details closely with the ground-truth data. The core of our model, LDSR-S2, is designed to process the additional complexity of multispectral data, including the visible and near-infrared bands, essential for accurate remote sensing analysis. To circumvent the computational demands of diffusion models, which traditionally limit their applicability, we implement the diffusion process in a compressed latent space. This adaptation not only drastically reduces inference times but also allows the handling of large-scale datasets effectively.
A distinctive feature of our approach is the integration of uncertainty estimation in the super-resolution process. The stochastic nature of diffusion models allows us to sample the distribution of likely generations, leading to a higher sampling diversity in uncertain regions and therefore a lower certainty score. By generating pixel-level uncertainty maps, our model provides a quantifiable measure of confidence in the super-resolved images, which is critical for applications where decision-making depends on the reliability of the data. Empirical results demonstrate that our model achieves superior performance in both spectral and spatial fidelity compared to existing state-of-the-art methods. The LDSR-S2 not only outperforms in terms of visual quality but also in the robustness of the details added, as evidenced by comprehensive testing across varied landscapes and conditions of S2 data. To further validate the practical utility of the LDSR-S2 model, we explored its application in a building delineation task. We trained different segmentation models using the SEN2NAIP dataset and the Microsoft Buildings Dataset. Each model was trained on low-resolution (LR), high-resolution (HR), and super-resolved (SR) imagery to enable a fair comparison. As expected, the model trained on HR imagery exhibited the best performance due to the higher detail and clarity, which facilitates feature recognition and segmentation. Conversely, the LR model performed the least effectively, struggling with feature extraction due to the lower spatial resolution. The SR model demonstrated significantly better performance than the LR model, although inferior to the HR model. This improvement underscores the value of the super-resolved images, as the introduced high-frequency details evidently aid the segmentation model in learning and identifying building features more effectively than when using the original LR images. 
Not only is the general detection of buildings improved; small buildings that are not detectable in the LR imagery are detected in the SR imagery, with the detection rate for buildings smaller than 4 pixels improving by over 10%. This result highlights that super-resolution can substantially enhance the performance of downstream remote sensing tasks by providing richer information and enabling more accurate analyses. To further validate the results, we applied the model to a natural disaster use case. In October 2024, a significant flooding event occurred in Valencia, captured by an S2 pass two days after the flood. The urgency of the situation necessitated rapid and accurate flood mapping to facilitate emergency response and damage assessment. Traditionally, such efforts would rely on very high-resolution (VHR) satellite acquisitions or aerial imagery, which are not only costly but also suffer from longer revisit times or a limited swath. Using our LDSR-S2 model, we were able to immediately super-resolve the available S2 imagery, effectively reducing the waiting time for high-resolution data, and applied flood mapping models on the SR product. The super-resolved imagery enabled more precise detection and delineation of flood extents, enhancing the accuracy of the flood detection models used. This use case exemplifies how super-resolution can play a critical role in time-sensitive environmental monitoring and disaster response, providing high-quality data swiftly when it is most needed.
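The pixel-wise confidence layer described in this abstract follows a generic recipe: draw several stochastic generations and use their per-pixel spread as uncertainty. A minimal sketch of that recipe, in which `sample_sr_stub` is a hypothetical stand-in for a stochastic diffusion sampler (not the actual LDSR-S2 model):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_sr_stub(lr, noise_scale=0.05):
    """Hypothetical stand-in for one stochastic diffusion SR draw:
    2x upsampling by pixel repetition plus a random perturbation."""
    hr = np.repeat(np.repeat(lr, 2, axis=0), 2, axis=1)
    return hr + rng.normal(0.0, noise_scale, hr.shape)

lr = rng.uniform(0.0, 1.0, (16, 16))   # toy low-resolution patch

# Draw an ensemble of likely generations and summarise it per pixel.
samples = np.stack([sample_sr_stub(lr) for _ in range(8)])
sr_mean = samples.mean(axis=0)         # the delivered SR product
uncertainty = samples.std(axis=0)      # higher spread = lower confidence
```

Regions where the sampler disagrees with itself get a larger standard deviation, which is exactly the "higher sampling diversity in uncertain regions" the abstract describes.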
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall N1/N2)

Session: D.05.05 CDSE User Review Meeting - Becoming Part of the Copernicus Data Space Ecosystem: Opportunities, Collaboration, and Community Guidelines

This session offers a comprehensive guide for individuals, researchers, and businesses across both public and private sectors seeking to engage with the Copernicus Data Space Ecosystem. We’ll outline the opportunities for collaboration, the resources and tools available, and the ecosystem’s key participation rules and best practices. Additionally, this session will cover pitches by onboarded and future Ecosystem members, where attendees will learn how to leverage open-access Copernicus data to efficiently co-develop their applications and services, and how to build partnerships that contribute to this dynamic, user-driven ecosystem. Join us to discover how to offer your datasets, services and knowledge while adhering to ecosystem standards, as we grow an impactful Copernicus community together.

Presentations and speakers:


Joining the Ecosystem: A Comprehensive Overview


  • Jurry de la Mar and Uwe Marquard - T-Systems

Presentation by one of the Ecosystem Members


  • Sander Niemeijer – S&T

Interactive panel session


Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.14)

Session: F.04.32 Toward an Aquatic Carbon Roadmap as a key integrated contribution to the GST

Following the 2015 Paris Agreement, the Committee on Earth Observation Satellites (CEOS) released a Carbon Strategy to guide the coordination of satellite data efforts supporting the Global StockTake (GST) process.
In this context, significant effort has been undertaken in the past years to understand how Earth Observation data can best support the GST implementation, notably through the writing of a Greenhouse Gas (GHG) Roadmap in 2020 focusing on the provision of atmospheric GHG datasets to the GST process. The Agriculture, Forestry and Other Land Uses (AFOLU) Roadmap followed in 2021. Considering the key role of the Aquatic realm (open and coastal oceans, inland waters) in the global Carbon cycle, ESA, NASA and JAXA are now coordinating the writing of an Aquatic Carbon Roadmap whose objective is to provide a framework with a long-term vision (~ 15+ years) to support space agencies in coordinating and defining the science, observation and policy needs to improve our understanding of the role and changes of carbon in aquatic environments.
This insight session will spotlight the developing Aquatic Carbon Roadmap and bring together contributors from the other CEOS roadmaps to highlight synergies and interconnections across the three efforts towards an enhanced understanding of the Earth as a System within the framework of the global stocktake. It will offer an opportunity to meet, exchange ideas, put the roadmaps in context of other efforts, and advance the efforts of the Aquatic Carbon Roadmap.

Presentations and speakers:


Introduction and CEOS context


  • Marie-Helene Rio - ESA

Global StockTake


  • Ben Poulter - NASA
  • Rosa Roman - JRC

The Greenhouse Gas Roadmap


  • Yasjka Meijer - ESA

The AFOLU roadmap


  • Clement Albergel - ESA

The Aquatic Carbon Roadmap


  • Jamie Shutler - U. of Exeter

Panel discussion


  • Moderator: Laura Lorenzoni - NASA
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Session: A.02.02 Terrestrial and Freshwater Biodiversity - PART 3

Preserving the integrity and health of natural ecosystems, and the biodiversity they host, is crucial not only for the vital services they provide to sustain human well-being, but also because natural ecosystems with a high degree of integrity and diversity tend to exhibit elevated levels of productivity and resilience. The importance of safeguarding biodiversity is increasingly recognised in many Multilateral Environmental Agreements (MEAs), which all place great emphasis on the sustainable management, restoration and protection of natural ecosystems.

The pivotal role of ecosystems in maintaining ecological balance and supporting human well-being is a unifying theme in MEAs. Noting that, despite ongoing efforts, biodiversity is deteriorating worldwide and that this decline is projected to continue under business-as-usual scenarios, Parties to the Convention on Biological Diversity (CBD) adopted, at the 15th Conference of the Parties in December 2022, the Kunming-Montreal Global Biodiversity Framework (GBF). The GBF represents the most ambitious and transformative agenda to stabilise biodiversity loss by 2030 and allow for the recovery of natural ecosystems, ensuring that by 2050 all the world’s ecosystems are restored, resilient, and adequately protected. In Europe, the EU Biodiversity Strategy for 2030 aims to put Europe’s biodiversity on the path to recovery by 2030, by addressing the main drivers of biodiversity losses.

The emergence of government-funded satellite missions with open and free data policies and long-term continuity of observations, such as the Sentinel missions of the European Copernicus Program and the US Landsat programme, offers an unprecedented ensemble of satellite observations, which together with very high resolution sensors from commercial vendors, in-situ monitoring systems and field work, enable the development of satellite-based biodiversity monitoring systems. The combined use of different sensors opens pathways for a more effective and comprehensive use of Earth Observations in the functional and structural characterisation of ecosystems and their components (including species and genetic diversity).

In this series of biodiversity sessions, we will present and discuss the recent scientific advances in the development of EO applications for the monitoring of the status of and changes to terrestrial and freshwater ecosystems, and their relevance for biodiversity monitoring, and ecosystem restoration and conservation. The development of RS-enabled Essential Biodiversity Variables (EBVs) for standardised global and European biodiversity assessment will also be addressed.

A separate LPS25 session on "Marine Ecosystems" is also organised under the Theme “1. Earth Science Frontiers - 08 Ocean, Including Marine Biodiversity”.

Topics of interest mainly include (not limited to):
• Characterisation of change patterns in terrestrial and freshwater biodiversity.
• Integration of field and/or modelled data with remote sensing to better characterise, detect changes to, and/or predict future biodiversity in dynamic and disturbed environments on land and in the water.
• Use of Earth Observation for the characterisation of ecosystem functional and structural diversity, including the retrieval of ecosystem functional traits (e.g., physiological traits describing the biochemical properties of vegetation) and morphological traits related to structural diversity.
• Sensing ecosystem function at diel scale (e.g., using geostationary satellites and exploiting multiple individual overpasses in a day from low Earth orbiters and/or paired instruments, complemented by subdaily ground-based observations).
• Assessment of the impacts of the main drivers of change (i.e., land use change, pollution, climate change, invasive alien species and exploitation of natural resources) on terrestrial and freshwater ecosystems and the biodiversity they host.
• Understanding of climate-biodiversity interactions, including the impact of climate change on biodiversity and the capacity of species to adapt.
• Understanding of the evolutionary changes of biodiversity and better predictive capabilities on biodiversity trajectories.
• Understanding of the ecological processes of ecosystem degradation and restoration.
• Multi-sensor approaches to biodiversity monitoring (e.g., multi-sensor retrievals of ecosystem structural and functional traits).
• Validation of biodiversity-relevant EO products (with uncertainty estimation).
• Algorithm development for RS-enabled Essential Biodiversity Variables (EBVs) on terrestrial and freshwater ecosystems.
• Linking EO with crowdsourcing information for biodiversity monitoring.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Integrating biodiversity cubes into Earth Observation

Authors: Quentin Groom, Lissa Breugelmans, Rocio Beatriz Cortes Lobos, Michele Di Musciano, Maarten Trekels, Duccio Rocchini
Affiliations: Meise Botanic Garden, Alma Mater Studiorum - University of Bologna, Department of Biological, Geological and Environmental Sciences, Department of Life, Health & Environmental Science, University of L'Aquila
Effective biodiversity management and policymaking require timely, accurate, and comprehensive data on the status, trends, and threats to biodiversity. This data must be delivered in actionable formats, incorporating measures of uncertainty and projections under various scenarios. Despite global policy initiatives such as the Kunming-Montreal Global Biodiversity Framework and IPBES assessments underscoring the urgent need for improved biodiversity monitoring, significant challenges remain in integrating biodiversity data into the broader environmental observation landscape. Biodiversity data originate from diverse sources, including citizen scientists, researchers, conservation organisations, and automated technologies such as sensors, eDNA, and satellite tracking. However, these datasets often lack standardisation, hindering interoperability with remote sensing and environmental data layers. The Essential Biodiversity Variables (EBV) framework offers a structured approach to transforming raw occurrence data into robust, policy-relevant indicators. We particularly focus on occupancy, that is, the presence or absence of a taxon in a grid cell over a particular timeframe. Occupancy is included within the species populations EBV class. Although it is only weakly related to the population size of the taxon, it provides valuable information about its distribution. This distribution is strongly linked to the spatial patterns of the biotic and abiotic environment. Furthermore, occupancy data is probably the most abundant and comprehensive form of data we have on biodiversity, covering many decades and most of the terrestrial and coastal environment. Another advantage of occupancy data is that it can easily be standardised, aggregated, and harmonised with environmental variables, enabling deeper insights and improved monitoring capabilities.
This presentation explores advancements in integrating biodiversity and environmental observation data through the use of automated workflows and biodiversity occupancy cubes. By leveraging these tools, data inconsistencies can be identified and addressed, facilitating reproducible and scalable analysis aligned with FAIR principles. One of our aims is to enable collaborative, cost-effective processing, supporting the rapid transformation of primary data into usable knowledge. This is particularly relevant for rapid alert systems on biodiversity, as well as for delivering cost-effective solutions for biodiversity monitoring and policy reporting. The B-Cubed project, funded under Horizon Europe, exemplifies these principles by fostering interoperability between in situ biodiversity observations, remote sensing and other environmental datasets. Through the development of open-source workflows and tools, B-Cubed aims to democratise biodiversity data products, reducing analytical burdens and supporting global biodiversity assessments. By integrating biodiversity data into the broader environmental observation landscape, this approach facilitates informed policymaking, enabling swift responses to pressing challenges such as climate change, biological invasions, and biodiversity-related disease outbreaks. Among its objectives, the project also focuses on future biodiversity modelling. To this end, we developed the Suitability Cube, a structured, multi-dimensional array that integrates environmental data from diverse sources, such as the Copernicus Program and WorldClim, and organizes it across key ecological dimensions, including species occurrences, spatial coordinates, temporal scales, and suitability scores. This format simplifies the modelling of species distributions under current and future global change scenarios, providing crucial insights to guide conservation strategies.
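A biodiversity occupancy cube of the kind discussed here can be sketched as a boolean array over taxon, time and grid dimensions. The records and grid below are illustrative assumptions, not B-Cubed's actual schema:

```python
import numpy as np

# Toy occurrence records as (taxon, year, row, col) grid indices.
records = [(0, 0, 1, 1), (0, 0, 1, 2), (0, 1, 1, 1),
           (1, 0, 0, 0), (1, 1, 3, 3), (1, 1, 3, 3)]  # last one is a duplicate

n_taxa, n_years, n_rows, n_cols = 2, 2, 4, 4
cube = np.zeros((n_taxa, n_years, n_rows, n_cols), dtype=bool)
for taxon, year, row, col in records:
    cube[taxon, year, row, col] = True  # occupancy: presence only, not abundance

# Fraction of grid cells occupied per taxon and year: a simple indicator
# in the species-populations EBV class, ready to join with environmental layers.
occupancy_fraction = cube.mean(axis=(2, 3))
```

Note how duplicate records collapse automatically, one reason occupancy data is easy to standardise and aggregate.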
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: An Earth Observation- and Insect-based Framework for Biodiversity Intactness Reporting in Africa

Authors: Tobias Landmann, Mrs Faith Ashiono, Mr Vincent Magomere, Dr Komi Mensah Agboka
Affiliations: International Centre of Insect Physiology and Ecology (ICIPE)
We pioneer the integration of multi-sensor Earth Observation (EO) data with curated insect occurrence datasets from citizen science, GenBank, and in-house databases to monitor insect-based biodiversity intactness and ecosystem vulnerability across Africa. Insects, being the most abundant taxa, are excellent biodiversity indicators due to their sensitivity to global change drivers such as unsustainable farming practices, urbanization, and logging. Moreover, they occupy diverse micro-habitats and are present across all climate zones. Leveraging high-resolution EO and drone data, fine-scale spatial indicators for mapping insect micro-habitats and habitat suitability over time and space can be developed effectively. These EO-based insect diversity indicators were found to be highly suitable for assessing overall ecosystem biodiversity status (Landmann et al., 2023). The UN Convention on Biological Diversity (CBD) emphasizes the need for scalable, unbiased biodiversity indicators that integrate drivers of biodiversity loss, planetary boundaries, and ecosystem service assessments. Similarly, the Kunming-Montreal Global Biodiversity Framework calls for tools to link biodiversity loss with ecosystem integrity. Despite these global efforts, wide-scale data on invertebrate biodiversity loss remains unavailable for Africa. To address this gap, we collated comprehensive datasets on Lepidoptera (butterflies and moths; n = 18,300), Odonata (dragonflies; n = 12,300), and Coleoptera (beetles; n = 15,332). Predictor variables included spectral indices from 10–20 m Sentinel-2 imagery, 25 m canopy height data from GEDI (Global Ecosystem Dynamics Investigation), and 4 km climate data from TerraClimate. Using a regression boosting model, we predicted insect diversity (iD) patterns for each order and across all orders (scaled from 0 to 1). The iD predictions were compared with potential pre-human impact diversity patterns from biome distribution models (Hengl et al., 2018).
Herein, the potential (or pre-human) insect diversity (p) values for major habitat types were estimated as follows: tropical and coastal forests (p = 1.0), wetlands (p = 0.9), savanna (p = 0.8), shrublands (p = 0.7), grasslands (p = 0.6), and deserts (p = 0.5). Diversity model accuracies exceeded 0.86, and the resultant insect-based intactness maps (current insect diversity divided by pre-human insect diversity) correlated significantly (p < 0.05) with global forest intactness products. For example, mean biodiversity intactness in Namibia was 75%, indicating a 25% decline in native insect abundance compared to the pre-human period. In Senegal, intactness was lower, at 37%. This new insect-based biodiversity intactness product offers a valuable tool for national biodiversity conservation programs and ecosystem restoration initiatives. It can support biodiversity status reporting, prioritize restoration efforts, and inform actions to maintain ecosystem services such as pollination. Efforts are underway to facilitate policy uptake of the results using policy endowment and biodiversity focal points in individual African countries.
Hengl, T., Walsh, M. G., Sanderman, J., Wheeler, I., Harrison, S. P., & Prentice, I. C. (2018). Global mapping of potential natural vegetation: an assessment of machine learning algorithms for estimating land potential. PeerJ, 6, e5457. https://doi.org/10.7717/peerj.5457
Landmann, T., Schmitt, M., Ekim, B., Villinger, J., Ashiono, F., Habel, J. C., & Tonnang, H. E. (2023). Insect diversity is a good indicator of biodiversity status in Africa. Communications Earth & Environment, 4(1), 234.
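The intactness map described above boils down to a ratio of current to pre-human diversity. A hedged arithmetic sketch: the `current_id` values below are hypothetical model outputs, while the `p` values are those listed for the habitat types in the abstract.

```python
import numpy as np

# Hypothetical current insect-diversity predictions (iD, scaled 0-1)...
current_id = np.array([0.60, 0.72, 0.35])
# ...against the stated pre-human potentials: savanna, wetland, desert.
potential_p = np.array([0.8, 0.9, 0.5])

# Intactness = current diversity / pre-human diversity, capped at 1
# where a model might overshoot its potential value.
intactness = np.clip(current_id / potential_p, 0.0, 1.0)
# e.g. 0.60 / 0.8 = 0.75, the same arithmetic as the 75% Namibia figure
```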
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Multisensor Approach for Quantifying Floral Resources in Hedgerows at Regional Scale

Authors: Julien Radoux, Léna Jeannerod, Maxime Buron, Pr Anne-Laure Jacquemart, Pr Yannick Agnan, Pr Pierre Defourny
Affiliations: Université catholique de Louvain - Earth and Life Institute
The decline of pollinators is of major concern for the resilience of several ecosystems and for sustaining the food production of major crops. A large number of plant species indeed depend on these pollinators to complete their life cycle. Several traits of wild bees make them particularly efficient pollinators. Among the different factors affecting the fitness of wild bee colonies, the availability of pollen and nectar at the different stages of growth of the colonies plays a major role. In this study, the amount of pollen and nectar coming from flowering hedges is estimated by combining field observation, airborne Lidar, airborne RGB images and spaceborne optical images. Samples of pollen and nectar are collected in the field and analyzed in the lab to determine their nutritional quality. The quantity of these floral resources is measured per flower, and the number of flowers per cubic meter of hedge is estimated. In order to predict the pollen and nectar availability at the scale of the landscape, it is then necessary to use remote sensing data. First, flowering ligneous vegetation is classified within a deep learning framework on very high resolution (25 cm) RGB orthophotos. RetinaNet, MMDetection and SingleShotDetection are compared to detect flowering hedgerow species on yearly mosaics of Wallonia (approximately 16900 km²). The best method is selected based on its area under the ROC curve with a calibration dataset obtained by photointerpretation. The validation is performed on an independent dataset composed of 20 sites with field surveys and additional photointerpretation at random locations. The orthophoto mosaics used as input are, however, composed of different flights at different dates from the end of winter to the middle of summer. Because the flowering period of ligneous vegetation is relatively short, the optimal dates for the species of interest are selected across 5 consecutive years of acquisition.
Second, Lidar data (50 cm resolution) is used to compute the volume of the flowering hedges and delineate them more precisely. This information is combined with the results of the field survey to determine the available resources per square meter on the ground. These values are aggregated with a moving circular average of 500 m radius, which corresponds to the average foraging distance of wild bees reported in the literature, in order to highlight the nectar and pollen resources from flowering hedgerows across the whole of Wallonia. As mentioned above, the short period of time during which flowering occurs hinders the detection of flowering hedges. Therefore, we also used Sentinel-2 images for the subpixel detection of the flowering period. Based on the contributive proportion of the hedges inside the pixels (based on the point spread function) and assuming that surrounding pixels of the same land cover are homogeneous, it becomes possible to highlight when hedges are flowering or when they are “green”. The high temporal resolution of Sentinel-2 can then be used to estimate the duration of the resource availability. Unfortunately, this is only possible in cloud-free years, so we had to assume that there is no change in the hedgerows over a period of 5 years. The cumulative uncertainty is assessed, from protein content to the spatio-temporal extent of the hedges, highlighting the diverse sources of improvement. Nevertheless, estimates at the landscape level demonstrate the major role of indigenous hedge species in sustaining wild bee colonies.
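The 500 m circular average described above amounts to a focal mean with a disk-shaped window. A minimal numpy sketch, under stated assumptions (toy resource values, wrap-around borders; a production version would handle edges and use a convolution library):

```python
import numpy as np

def focal_mean(arr, radius_px):
    """Mean within a circular window of radius_px pixels (wrap borders)."""
    offsets = [(di, dj)
               for di in range(-radius_px, radius_px + 1)
               for dj in range(-radius_px, radius_px + 1)
               if di * di + dj * dj <= radius_px * radius_px]
    acc = sum(np.roll(np.roll(arr, di, axis=0), dj, axis=1)
              for di, dj in offsets)
    return acc / len(offsets)

# Nectar resource per 50 m pixel (toy values); 500 m foraging radius = 10 px.
resource = np.zeros((40, 40))
resource[20, 20] = 100.0                      # one flowering hedge segment
available = focal_mean(resource, radius_px=10)
```

The single hedge's resource is spread over every pixel within foraging distance, which is exactly how the map expresses what a colony at each location can reach.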
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Capabilities and Limitations of Sentinel-2 for Monitoring Invasive Plants: Ragweed (Ambrosia artemisiifolia) and False Indigo Bush (Amorpha fruticosa) Case Study

Authors: Ivan Tekić, Mr Branimir Radun, Ms Nela Jantol, Ms Ivona Žiža, Mr Ivan Tomljenović, Mr Vladimir Kušan
Affiliations: Oikon Ltd - Institute for Applied Ecology
The application of satellite-based remote sensing for detecting and monitoring invasive alien species (IAS) in Croatia remains largely underutilized. Current methods rely heavily on labor-intensive and costly field surveys, which can be inefficient and challenging over large areas. This study evaluates the capabilities and limitations of Sentinel-2 imagery for identifying, monitoring, and quantifying plant IAS, focusing on two problematic species: common ragweed (Ambrosia artemisiifolia) and false indigo bush (Amorpha fruticosa).

Amorpha fruticosa, a perennial shrub that invades open flood-prone habitats, forms dense monocultures that suppress the growth of native vegetation and pose a significant biodiversity threat. These dense stands make it a suitable candidate for detection using satellite imagery, as they often cover large areas. However, the optimal flowering phase, which provides the strongest spectral signature for differentiation, is frequently obscured by cloud cover during early summer. In addition, the flood-prone nature of its habitat and forestry activities introduce environmental variability that must be accounted for when performing detection through time series. To overcome these challenges, late-summer Sentinel-2 imagery was utilized when conditions were more stable, allowing for clear data collection. The detection model focused on separating young and mature stands of A. fruticosa from co-occurring tree species such as oak (Quercus robur) and ash (Fraxinus angustifolia), which often intermingle with the shrub in forest clearings. Red-edge and near-infrared (NIR) indices, sensitive to chlorophyll content, enabled high differentiation between A. fruticosa and the surrounding vegetation. Dense monocultures were readily identified, and the model also performed well in more complex environments where intermixing with grass and young oak or ash trees occurred. The model achieved over 90% accuracy in distinguishing young and mature stands of A. fruticosa, emphasizing Sentinel-2’s capability to detect chlorophyll-rich vegetation effectively.

Ambrosia artemisiifolia, an annual plant and the leading cause of allergic rhinitis in Croatia, primarily invades agricultural areas, fallow lands, and field edges. Unlike A. fruticosa, it does not form large, contiguous patches, often growing in narrow strips along field margins or roadsides. Sentinel-2’s spatial resolution of 10 meters poses a significant limitation for detecting these narrow, linear growth patterns. A time-series approach was employed to address the limitations of single-date imagery, leveraging Sentinel-2 images from August to November to capture the phenological changes of A. artemisiifolia. During August, the plant retains high water content, enabling differentiation from maturing crops. By October and November, its withered stems were distinguished from surrounding healthy vegetation. The model incorporated indices such as the Normalized Difference Infrared Index (NDII), Normalized Burn Ratio (NBR), and Red Edge Simple Index (REDSI), which utilized red-edge, NIR, and shortwave infrared (SWIR) bands to track changes in water and chlorophyll content. This approach successfully identified larger ragweed clusters within agricultural fields, achieving over 90% accuracy in heavily invaded areas. However, the model struggled to detect ragweed in urban environments, along narrow field margins, and in sparsely covered areas.

This study demonstrates Sentinel-2’s potential for detecting both annual and perennial invasive species. While indices derived from red-edge, NIR, and SWIR bands show strong potential for distinguishing A. fruticosa and A. artemisiifolia from other vegetation, challenges such as cloud cover, spatial resolution, and species intermixing highlight the limitations of this approach. Future work should focus on expanding the temporal range of analysis, incorporating additional ground truth data, and refining models to improve performance in complex and mixed environments. The findings of this research provide a foundation for developing reliable, near-real-time services to map spatial extent, monitor trends, and inform effective management strategies for invasive species.
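The NDII and NBR formulas used in studies like this one are standard; as an illustration only (the Sentinel-2 band choices B8/B11/B12 are the conventional ones, not taken from this abstract, and REDSI is omitted because its formulation varies in the literature), the water- and senescence-sensitive indices can be computed from reflectance arrays as:

```python
import numpy as np

def ndii(nir_b8, swir_b11):
    """Normalized Difference Infrared Index: sensitive to canopy water content."""
    return (nir_b8 - swir_b11) / (nir_b8 + swir_b11)

def nbr(nir_b8, swir_b12):
    """Normalized Burn Ratio: here useful for tracking withered vegetation."""
    return (nir_b8 - swir_b12) / (nir_b8 + swir_b12)

# Toy reflectance values (0-1) for a water-rich pixel and a senescent pixel.
nir = np.array([0.45, 0.30])
b11 = np.array([0.20, 0.28])
b12 = np.array([0.10, 0.22])

print(ndii(nir, b11))  # higher where canopy water content is high
print(nbr(nir, b12))   # drops as vegetation withers
```

Time-series detection then reduces to thresholding how these index values change between August and November acquisitions.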
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Novel Applications of Wildlife Population Estimation Methods to Satellite Imagery

Authors: Rebecca Wilks, Stuart King, Professor Ruth King, Dr Niall McCann, Dr Michael Chase, Dr David Williams, Dr Murray Collins
Affiliations: University Of Edinburgh, National Park Rescue, Elephants Without Borders, University of Leeds, Space Intelligence
Wildlife population estimation methods are a key branch of statistical ecology, enabling statistically rigorous estimates of population counts (abundance) [1]. These methods have a long history of being applied to data commonly collected in a conservation context, such as camera trap images, acoustic surveys and transect flight surveys. We investigate two use cases involving novel ideas developed from abundance methods in statistical ecology, applied to satellite imagery of wildlife.

Firstly, we present a framework for abundance estimation via satellite surveys of large wildlife in large-scale heterogeneous landscapes. Traditionally, wildlife surveys are undertaken using a time-consuming aerial process underpinned by distance sampling techniques; satellites, which can easily image huge areas, are therefore attractive, since they may represent a cost/effort saving. This is further fuelled by the recent demonstration by Duporge et al. [2] of the use of CNN-based object detection to automate detection of endangered African savannah elephants in very high-resolution (VHR) 30 cm satellite imagery (Pléiades Neo by Airbus, WorldView by Maxar). However, such satellite detections alone are not sufficient to provide robust abundance estimates. By design, wildlife detections from a point-in-time satellite image differ from detections from an aircraft moving through the landscape, and this change of observation method must be accounted for in the abundance estimation framework. Although satellites have already been used to detect populations of several species, including Emperor Penguins [3], polar bears [4], and wildebeest [5], these studies focus on groups usually found in open terrain, meaning counts can be treated as a total count.
We therefore provide the first theoretical framework for an end-to-end satellite abundance survey of large wildlife over large heterogeneous areas, which accounts for survey design (stratification), imperfect automated object detection, and partial obstruction of wildlife to the satellite (availability).

Secondly, we investigate whether Capture-Recapture (CR) abundance methods can be used to obtain confidence-bound counts for object detection in satellite imagery. Object detection algorithms have been applied to detect various features within satellite imagery, such as cars [6] and ships [7]; yet whilst object detectors are powerful, they are also prone to false negatives (missed objects) and false positives (wrong objects). They do not provide rigorous confidence bounds on these quantities, and so raw detection counts are traditionally corrected using ad-hoc methods, such as precision and recall rates calculated on a test set. CR is an extremely common method within statistical ecology for estimating total population sizes. It consists of capturing and marking a sample of individuals from a population, then recapturing a new sample at a second observation time, and noting which marked individuals were re-captured. Traditionally, this requires physically marking individuals in the field (e.g. leg rings for birds [8]); however, recently individuals of some species have been identified in imagery purely by their distinctive markings, for example manta rays [9]. CR methods then enable a confidence-bound estimation of total counts, rigorously accounting for false negatives (missed animals), which are an inevitability when surveying wild animal populations. We take the CR principles of multiple observation occasions and apply them in a novel way to object detection of generic objects in a single image.
Using different object detection algorithms as individual observers, we draw links with ensemble modeling and investigate whether an extended CR methodology can be applied to generate confidence bounds for objects in object detection. References [1] Ruth King and Rachel McCrea. “Chapter 2 - Capture–Recapture Methods and Models: Estimating Population Size”. In: Handbook of Statistics. Ed. by Arni S. R. Srinivasa Rao and C. R. Rao. Vol. 40. Integrated Population Biology and Modeling, Part B. Elsevier, Jan. 1, 2019, pp. 33–83. doi: 10.1016/bs.host.2018.09.006. [2] Isla Duporge et al. “Using very-high-resolution satellite imagery and deep learning to detect and count African elephants in heterogeneous landscapes”. In: Remote Sensing in Ecology and Conservation 7.3 (Sept. 1, 2021). Publisher: John Wiley & Sons, Ltd, pp. 369–381. issn: 2056-3485. doi: 10.1002/rse2.195. [3] Peter T. Fretwell et al. “An Emperor Penguin Population Estimate: The First Global, Synoptic Survey of a Species from Space”. In: PLOS ONE 7.4 (Apr. 13, 2012). Publisher: Public Library of Science, e33751. issn: 1932-6203. doi: 10.1371/journal.pone.0033751. [4] Seth Stapleton et al. “Polar Bears from Space: Assessing Satellite Imagery as a Tool to Track Arctic Wildlife”. In: PLoS One 9.7 (July 2014). Num Pages: e101513 Place: San Francisco, United States Publisher: Public Library of Science Section: Research Article, e101513. doi: 10.1371/journal.pone.0101513. [5] Zijing Wu et al. “Deep learning enables satellite-based monitoring of large populations of terrestrial mammals across heterogeneous landscape”. In: Nature Communications 14.1 (May 27, 2023). Number: 1 Publisher: Nature Publishing Group, p. 3072. issn: 2041-1723. doi: 10.1038/s41467-023-38901-y. [6] Sébastien Drouyer. “VehSat: a Large-Scale Dataset for Vehicle Detection in Satellite Images”. In: IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium. 
IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium. ISSN: 2153-7003. Sept. 2020, pp. 268–271. doi: 10.1109/IGARSS39084.2020.9323289. [7] Z. Hong et al., "Multi-Scale Ship Detection From SAR and Optical Imagery Via A More Accurate YOLOv3," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 6083-6101, 2021, doi: 10.1109/JSTARS.2021.3087555. [8] Cleminson, A. & Nebel, S. (2012). “Bird Banding”. Nature Education Knowledge 3(8):1. [9] Edy Setyawan et al. “Population estimates of photo-identified individuals using a modified POPAN model reveal that Raja Ampat’s reef manta rays are thriving”. In: Frontiers in Marine Science 9 (Nov. 15, 2022). Publisher: Frontiers. issn: 2296-7745. doi: 10.3389/fmars.2022.1014791.
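The two-occasion capture-recapture logic described above can be illustrated with the classic Chapman estimator (a standard CR estimator [1], not necessarily the authors' exact model), treating two object detectors run on the same image as the two observation occasions:

```python
import math

def chapman_estimate(n1, n2, m, z=1.96):
    """Two-occasion Chapman estimator of total population size, with a
    normal-approximation confidence interval.

    n1: objects detected ("marked") on occasion 1
    n2: objects detected on occasion 2
    m:  objects detected on both occasions (recaptures)
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    half = z * math.sqrt(var)
    return n_hat, (n_hat - half, n_hat + half)

# Detector A finds 80 objects, detector B finds 75, and 60 are found by both.
n_hat, (lo, hi) = chapman_estimate(80, 75, 60)
print(round(n_hat), (round(lo), round(hi)))
```

The interval widens as the overlap m shrinks, which is exactly the "missed objects" uncertainty that raw detection counts fail to express.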
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Uncertainties in Remote Sensing of Biodiversity: Definitions, Sources and Methods

Authors: Christian Rossi, Andreas Hueni, Tiziana L. Koch, Maria J. Santos
Affiliations: University of Zurich, Swiss National Park
Recent advances in remote sensing of biodiversity and biodiversity-related products have significantly enhanced our capacity to monitor and understand biodiversity. Typical remote sensing products directly related to biodiversity are spectral features and plant traits, and their diversity in space, i.e., spectral diversity and functional diversity. Hence, remote sensing of biodiversity involves measuring biophysical quantities from signals recorded by a sensor in response to radiation reflected from the Earth’s surface. As with any other measurement, the biodiversity quantities measured via remote sensing are inherently uncertain. From the digital numbers recorded by the detector, through the processing to obtain surface reflectance products, to the final biodiversity output, various sources of uncertainty can arise. For example, uncertainties related to the sensor and preprocessing of remote sensing data can account for as much as 10% in the near- and shortwave infrared regions, where there is less solar radiation and thus inherently lower signal-to-noise ratios. After applying atmospheric corrections, spectral regions highly sensitive to water vapor can display uncertainties of up to 20%, which can increase through the process that leads to the derivation of biodiversity products. Failing to account for such uncertainties may lead to over- or underestimates of diversity, with downstream repercussions on management strategies and policy making. Nevertheless, uncertainties are rarely quantified in remotely sensed biodiversity products, limiting our understanding of biodiversity processes and their detection. Sparse quantification of uncertainties is further exacerbated by the confusion arising from the inconsistent and improper use of uncertainty terms. Here, we clarify the concept of uncertainty by defining what it is and what it is not, outline its typologies, and introduce metrological principles in the remote sensing of biodiversity.
We highlight sources of uncertainty and provide examples of uncertainty estimation and propagation in remotely sensed biodiversity products. Finally, we discuss the critical need for product uncertainty requirements and reliable reference measurements. In particular, uncertainties are needed to compare and consolidate different quantities being measured and support the evaluation of product conformity. Providing uncertainties is essential for effectively and consistently communicating the strengths and limitations of remote sensing products.
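As a generic illustration of the propagation step discussed in this abstract (not taken from the authors' products; NDVI and the toy reflectance values are chosen purely for concreteness), first-order Gaussian uncertainty propagation from band reflectances to a derived index looks like:

```python
import math

def ndvi_with_uncertainty(nir, red, sigma_nir, sigma_red):
    """First-order propagation of uncorrelated reflectance uncertainties
    to NDVI = (nir - red) / (nir + red)."""
    s = nir + red
    ndvi = (nir - red) / s
    # Partial derivatives of NDVI with respect to the two bands:
    d_nir = 2.0 * red / s ** 2
    d_red = -2.0 * nir / s ** 2
    sigma = math.sqrt((d_nir * sigma_nir) ** 2 + (d_red * sigma_red) ** 2)
    return ndvi, sigma

ndvi, sigma = ndvi_with_uncertainty(0.40, 0.08, 0.02, 0.02)
print(f"NDVI = {ndvi:.3f} +/- {sigma:.3f}")
```

Even modest band-level uncertainties inflate noticeably in the derived product, which is why the authors argue for carrying uncertainty through to the final biodiversity output rather than reporting point values.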
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.11/0.12)

Session: A.01.10 Copernicus Sentinel-5P 7.5 Years in Orbit: Mission Status

On 13 October 2024, Copernicus Sentinel-5P marked its seventh anniversary in orbit. The aim of this session is to inform the community about the excellent in-orbit performance of this mission, its expected lifetime, details of data acquisition, processing, operational product QA monitoring and data dissemination, and several scientific highlights achieved so far.

Session Schedule:


In Orbit Functional Performance and Lifetime Evaluation of the Sentinel-5P mission


  • K. Symonds - ESA

Sentinel-5P Mission Operations - a Success Story


  • D. Mesples - ESA

Global Atmospheric Composition Changes Observed by TROPOMI on Sentinel-5 Precursor


  • P. Veefkind - KNMI

TROPOMI Sentinel-5P SWIR Highlights, in Relation to Policy and Action


  • I. Aben - SRON

New Era of Air Quality Monitoring over Europe: Combining Daily Sentinel-5 Precursor and Hourly Sentinel-4 Observations


  • D. Loyola - DLR

Advances in Sentinel-5 Precursor Air Quality Data Products and their Validation


  • M. van Roozendael - BIRA/IASB
Add to Google Calendar

Tuesday 24 June 16:30 - 16:50 (EO Arena)

Demo: A.01.16 DEMO - How to add your own forward model in the GRASP version 2.0.0 retrieval framework

GRASP (Generalized Retrieval of Atmosphere and Surface Properties, Dubovik et al., 2021) is a flexible tool, designed to retrieve aerosol, gas and surface properties for a wide variety of sensors and the combination of them. It is a proven tool, applied to a wide range of different combinations of instruments, for example: active and passive (lidar and sunphotometer), spectrometers and photometers (Pandora and AERONET), Multi-Angular Polarimeters and hyperspectral sensors (CO2M/MAP and CO2M/CO2I, S5/UVNS and 3MI).
In the framework of the OPERA-S5 project, GRASP version 2.0.0 has been developed to transform the original code into a fully modular architecture in which every part of the forward model can easily be replaced. GRASP version 2.0.0 allows the user to plug new radiative transfer schemes, new surface models, AI-based approaches, or any other innovative modelling code into GRASP. The interfaces and tools around GRASP version 2.0.0 have been designed for a user-friendly experience, making it easier for scientists to adapt and extend GRASP to their specific needs and new ideas.
During the tutorial session, users will become familiar with the possibilities of GRASP version 2.0.0 by following a step-by-step guide in which all participants implement a new forward model in GRASP, covering how to access the code, the input, the output, and the internal interfaces. To keep the session as agile as possible, the activity will be carried out on the DIVA platform (https://cloud.grasp-sas.com/). This is a Jupyter-notebook-based virtual environment, accessible from the browser, with all the configuration and tools pre-installed, which users will use as the baseline for the developments.

Speakers:


  • Masahiro Momoi
  • Marcos Herreras-Giralda
Add to Google Calendar

Tuesday 24 June 16:52 - 17:12 (EO Arena)

Demo: D.03.33 DEMO - RACE Dashboard Demonstration

The RACE Dashboard is a joint initiative of ESA and EC DG-DEFIS to illustrate new indicators on economy, society and the environment, based on Earth Observation.

It is accessible at race.esa.int.

This demonstration will showcase how the RACE Dashboard integrates industrially provided indicators. The focus will be on demonstrating the novelty and innovation of the indicators, as well as the mechanisms by which they are provided to the RACE dashboard, and the various business models - supported by the Network of Resources.
Selected examples will illustrate the high diversity of services and capabilities in European industry, including, for example, environmental monitoring, health and pollution, natural disaster management, agriculture, and many more.

The demonstration will also include elements of gamification and storytelling.

Speakers:


  • Sara Aparicio - Solenix for ESA
  • Anca Anghelea - ESA
Add to Google Calendar

Tuesday 24 June 17:00 - 17:45 (ESA Agora)

Session: F.02.19 Austrian Space Cooperation Day - Earth Observation

The Austrian space community and international testimonials take a kaleidoscopic look at products and services “made in Austria”, highlighting existing and inviting future cooperations within international partner networks. With a view to the ESA Ministerial Conference in 2025, the great importance of ESA programmes for maintaining and improving Austria's excellence in space will be explained using technological and commercial success stories. In the FFG/AUSTROSPACE exhibition, Earth observation space hard- and software products manufactured in Austria are presented (next to Agora area and ESA booth in Main Entrance Hall).

Speakers:


Welcome


  • Christian Briese - EODC

Destination Earth Data Lake: Core Capabilities and Edge Services


  • Michael Schick - Eumetsat

Global Flood Monitoring: Service and Updates in 2025


  • Florian Roth - TU Wien

Vegetation change dynamics product from Sentinel-2 time series: A multi-year basemap for Austria generated using a semantic Earth Observation data cube


  • Dirk Tiede - University of Salzburg

Summary & Closure


  • Christian Briese - EODC
Add to Google Calendar

Tuesday 24 June 17:15 - 17:35 (EO Arena)

Demo: D.03.28 DEMO - Lexcube viewer: Interactive Data Cube Visualization – using Lexcube as standalone or in a Jupyter notebook

Lexcube is an open-source tool designed for interactive visualization of 3D data cubes, either as a stand-alone application or within Jupyter notebooks. It enables Earth system scientists to explore large, high-dimensional datasets efficiently. The Jupyter version allows Lexcube to be integrated seamlessly into Python-based workflows. By leveraging chunked data access, caching, and LZ4 compression, Lexcube ensures real-time interaction even with large-scale datasets.

A key component of the tool is its interactive 3D visualization capabilities, allowing users to explore, manipulate, and extract insights from data cubes. Participants will learn to navigate core functionalities, including dynamic selection of spatial and temporal subsets, customizable colour maps, and exporting visualizations and sub-cubes for further analysis. Unlike traditional 2D visualization tools, Lexcube enables intuitive inspection of complex, multidimensional data for model evaluation, anomaly detection, and scientific discovery. By attending this session, participants will gain hands-on experience with Lexcube and Lexcube for Jupyter, learning how to apply it to their research while exploring its latest features and developments.
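A minimal sketch of the kind of (time, lat, lon) data cube Lexcube consumes, built with xarray; the widget call is shown only as a comment because the exact class name and signature (`Cube3DWidget` here) should be checked against the Lexcube documentation for your installed version:

```python
import numpy as np
import xarray as xr

# Build a small synthetic (time, lat, lon) data cube: the type of
# labelled, multidimensional array that Lexcube visualizes.
data = xr.DataArray(
    np.random.rand(12, 30, 40).astype("float32"),
    dims=("time", "lat", "lon"),
    name="toy_variable",
)

# In a Jupyter notebook (API name is an assumption; check the Lexcube docs):
#   import lexcube
#   widget = lexcube.Cube3DWidget(data, cmap="viridis")
#   widget  # renders the interactive 3D cube

print(data.dims, data.shape)
```

Spatial/temporal subsetting before visualization is ordinary xarray slicing (e.g. `data.isel(time=slice(0, 6))`), which is what makes the notebook integration convenient for model-evaluation workflows.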

Speaker:


  • Maximilian Söchting - Uni.Leipzig
Add to Google Calendar

Tuesday 24 June 17:37 - 17:57 (EO Arena)

Demo: D.04.26 DEMO - Accessing Copernicus Contributing Missions, Copernicus Services and other complementary data using CDSE APIs: OData, STAC, S3, OGC, openEO

Copernicus Data Space Ecosystem offers a wide portfolio of datasets complementary to the “core” Sentinel products. Their characteristics may differ from the Sentinel datasets, and some of them may not be available through all of the CDSE APIs. The aim of this demonstration session is to facilitate usage of the complementary datasets on the CDSE platform by explaining the main differences between them and the Sentinel data, based on selected data access scenarios. Code snippets in the CDSE JupyterLab will be provided to allow CDSE users to utilize them in their own applications.
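As a hedged sketch of the STAC access path mentioned above (the endpoint URL and the collection name are assumptions to be verified against the CDSE documentation), a STAC item-search request body can be assembled like this and POSTed to the catalogue:

```python
import json

# Assumed CDSE STAC search endpoint; check the CDSE docs for the exact URL.
STAC_SEARCH_URL = "https://catalogue.dataspace.copernicus.eu/stac/search"

def build_search(collection, bbox, start, end, limit=10):
    """Build a STAC API /search payload (POST body)."""
    return {
        "collections": [collection],
        "bbox": bbox,                  # [west, south, east, north]
        "datetime": f"{start}/{end}",  # RFC 3339 interval
        "limit": limit,
    }

payload = build_search("SENTINEL-2", [14.0, 49.9, 14.6, 50.2],
                       "2024-06-01T00:00:00Z", "2024-06-30T23:59:59Z")
print(json.dumps(payload, indent=2))
# POST this JSON to STAC_SEARCH_URL (e.g. with requests.post) and read the
# "features" list of the returned GeoJSON FeatureCollection; complementary
# collections use the same mechanism but different collection identifiers.
```

The same payload shape works for any STAC-compliant catalogue, which is what makes STAC attractive for mixing Sentinel and complementary datasets in one workflow.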

Speaker:


  • Jan Musiał - CloudFerro
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: F.01.03 - POSTER - Trends in Earth Observation Education and Capacity Building: Embracing Emerging Technologies and Open Innovations

Education activities in recent years have undergone a significant transformation related to the global digitalization of education and training. Traditional teaching methods, like face-to-face trainings provided to small groups of students, are being complemented or even replaced by massive open on-line courses (MOOCs) with hundreds of participants following the course at their own pace. At the same time, the Earth observation sector continues to grow at a high rate; in Europe, the European Association of Remote Sensing Companies (EARSC) reported in 2023 that the sector grew by 7.5% in the past 5 years.
This session will cover new trends in modern education in the Space and EO domains as well as methods, use cases, and opportunities to cultivate Earth observation literacy in diverse sectors, such as agriculture, urban planning, public health, and more. It will focus on new methods and tools used in EO education and capacity building, such as: EO data processing in the cloud, processing platforms and virtual labs, dashboards, new and innovative technologies, challenges, hackathons, and showcase examples which make successful use of EO data. Participants will also have opportunity to share and discuss methods for effective workforce development beyond typical training or education systems.
Based on the experience of Space Agencies, international organisations, tertiary lecturers, school teachers, universities and companies working in the domain of space education, this session will be an opportunity to exchange ideas and lessons learnt, discuss future opportunities and challenges that digital transformation of education has brought, consolidate recommendations for future education and capacity building activities, and explore opportunities to further collaborate, build EO literacy in new users outside of the Earth and space science sector and expand the impact of EO across sectors.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Advanced Environmental Assessment: Integrating Satellite and IoT Data

Authors: Valeria Pia Vevoto, Massimiliano Ferrante, Francesco Mauro, Silvia Liberata
Affiliations: University of Sannio, European Space Agency (ESA)
This study investigates the integration of electronics, Internet of Things (IoT) sensors, Earth Observation data, software development, and machine learning (ML) algorithms to enhance environmental monitoring. Regarding the IoT sensors, an IoT device called AQP (Air Quality Platform) has been developed. The AQP is an assembly kit (hardware and software) capable of sensing local ambient parameters and sending data to an ESA central server. It was designed and built internally at the ESA ESRIN EO Laboratory for the ESA Living Planet Symposium 2019 School Laboratory, for educational purposes. Currently, the AQP project consists of about 200 AQP platforms spread across Europe (https://aqp.eo.esa.int/map/). The primary objective of this activity is to assess the quality of AQP data and explore correlations among AQP, ARPA, and Sentinel-5P measurements. An electronic interface was developed alongside software implemented in Python and C to facilitate data transmission to the ESA web server. We evaluate the visualization and management of in-situ data collected from the ARPA Lazio website and the AQP, as well as remote data from Sentinel-5P. Additionally, we apply ML techniques, specifically the CatBoost algorithm, to analyse the correlation between nitrogen dioxide levels detected by ARPA and Sentinel-5P in the Rome area during 2024. The results underscore the potential of combining these methodologies for improved accuracy in environmental assessments. Human and technological progress has responded to this problem with concrete actions, but the challenges that climate change poses should not be overlooked. This study therefore presents the analysis of remote and in-situ data both through graphs and through the use of ML techniques, and proposes future goals for improving data acquisition techniques and tools so that environmental pollution can be monitored over the years.
After a brief description of the Copernicus space programme and the Sentinel-5P satellite, and of the Italian national environmental protection system (ARPA), the focus turns to the architecture and purpose of the ESA Air Quality Platform (AQP). The core of the work is not only presenting the data collected by the ground stations (ARPA Lazio and AQP) and by the Sentinel-5P satellite, but also the correlation between them. To enable a more accurate estimation of in-situ values from the satellite parameters, the CatBoost ML algorithm was used. Conclusions and proposals for future activities are also described. The data acquisition process has been divided into two parts: the first involves the collection of in-situ data (from AQP IoT sensors and ARPA Lazio), while the second focuses on the acquisition of remote sensing data (from ESA Earth Observation data, specifically Sentinel-5P). The in-situ data was obtained from the official websites of ARPA Lazio and from the ESA web server for the AQP. ARPA Lazio is responsible for the institutional task of monitoring the environmental situation across the region, with 54 stations distributed throughout. Each station is assigned an identification number and can monitor a variety of pollutants, depending on its location. To facilitate comparison, data from the stations located in the Rome agglomeration were selected and compared with the in-situ data from AQP devices in the same area. For each pollutant, data was downloaded in .txt format covering the period from 1999 to 2023. Using a Python script, the data was transformed into a DataFrame, and a merge operation was performed to combine it into a single database. The merged data was then converted into .csv format. This process was repeated for pollutants common to both AQP and Sentinel-5P data (NO2, SO2, CO, O3), as well as for PM10, a harmful pollutant present in high concentrations in the atmosphere.
After processing the .csv files for each pollutant, the correlation between the various ARPA stations was calculated, revealing a strong correlation. To explore the relationship between in-situ data and remote Sentinel-5P data for the ARPA stations, a virtual reference station was created, termed the "reference station." Regarding the AQP project, real-time and historical data can be downloaded from the official ESA website. By selecting the desired platform, users can choose the start and end dates and download the data in CSV format, which includes the station number, location, and pollutant values. After establishing the AQP reference station, data for the period from April to May 2024 was downloaded, focusing specifically on the newly installed sensors: CH4, O3, and HCHO. The data is provided every 60 seconds, with daily, monthly, and annual averages available for the selected parameters. Regarding EO data, the EO Browser is the tool used to download data from Copernicus satellites, specifically Sentinel-5P data in our case. The EO Browser allows users to instantly visualize satellite data or download it based on their preferred configuration. For Sentinel-5P data, users can navigate to the area of interest, select pollutants from a list (AER AI, CH4, CO, HCHO, NO2, O3, SO2), and choose the desired time range. After inspecting the data in the browser, the user can download statistical information in a CSV file. By analyzing the data from both in-situ stations (AQP and ARPA) and remote sensing (Sentinel-5P), correlations between the datasets can be determined. This activity led to the successful acquisition of data, resulting in important technical conclusions. The AQP project has proven to be a valuable tool for citizen science applications. Notably, there was a strong correlation (greater than 0.90) between the ARPA and AQP data for digital temperature, humidity, pressure, and particulate sensors.
A good correlation (approximately 0.7) was also observed for the analog sensors, particularly for carbon monoxide, ozone and nitrogen dioxide. In terms of time sampling, the AQP, which acquires data at one-minute intervals, performs better than ARPA, which collects data once a day. It is important to emphasize that while the AQP can record all statistical values within a day (maximum, minimum, average, standard deviation, and trends), ARPA Lazio provides only a single data point, limiting the ability to understand the dynamics of events. However, one challenge encountered was the difficulty in establishing a valid correlation with methane, as the Sentinel-5P satellite did not acquire a significant amount of data during the March-April 2024 period. Despite this, the research has laid the groundwork for future work that could improve upon the current data in just a few months. Future efforts could include installing new sensors on the reference AQP, calibrating existing sensors by interpolating current conversion curves with those from professional instruments, optimizing the ARPA reference, and using Sentinel-5P Level 3 products for improved correlation. The work completed thus far not only represents a concrete achievement but also serves as a solid foundation for the future. It has highlighted the importance of understanding the air quality we breathe and marks the beginning of a journey full of opportunities. REFERENCES [1] J. Awewomom, F. Dzeble, Y. D. Takyi, W. B. Ashie, E. N. Y. O. Ettey, P. E. Afua, L. N. Sackey, F. Opoku, and O. Akoto, “Addressing global environmental pollution using environmental control techniques: a focus on environmental policy and preventive environmental management,” Discover Environment, vol. 2, no. 1, p. 8, 2024. [2] K. Protocol, “Kyoto protocol,” UNFCCC Website. Available online: http://unfccc. int/kyoto protocol/items/2830. php (accessed on 1 January 2011), pp. 230–240, 1997. [3] C. Poirier, M. Hermes, and M. 
Aliberti, “The role of space-based data in European climate policies,” Acta Astronautica, vol. 214, pp. 439–443, 2024.5 [4] G. Kaplan and Z. Y. Avdan, “Space-borne air pollution observation from sentinel-5p tropomi: Relationship between pollutants, geographical and demographic data,” International Journal of Engineering and Geosciences, vol. 5, no. 3, pp. 130–137, 2020. [5] B. W. Bodah, A. Neckel, L. S. Maculan, C. B. Milanes, C. Korcelski, O. Ram´ırez, J. F. Mendez-Espinosa, E. T. Bodah, and M. L. Oliveira, “Sentinel-5p tropomi satellite application for no2 and co studies aiming at environmental valuation,” Journal of Cleaner Production, 2022. [6] G. Balsamo, A. Agusti-Parareda, C. Albergel, G. Arduini, A. Beljaars, J. Bidlot, E. Blyth, N. Bousserez, S. Boussetta, A. Brown et al., “Satellite and in situ observations for advancing global Earth surface modelling: A review,” Remote Sensing, vol. 10, no. 12, p. 2038, 2018. [7] M. Amann, Z. Klimont, and F. Wagner, “Regional and global emissions of air pollutants: Recent trends and future scenarios,” Annual Review of Environment and Resources, 2013. [8] I. Cohen, Y. Huang, J. Chen, J. Benesty, J. Benesty, J. Chen, Y. Huang, and I. Cohen, “Pearson correlation coefficient,” Noise reduction in speech processing, pp. 1–4, 2009. [9] M. Francesco, R. Luigi, J. Fjoralba, S. Alessandro, and U. S. Liberata, “Estimation of ground NO2 Measurements from Sentinel-5P Tropospheric Data through Categorical Boosting,” 2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAIN), 2023. [10] Earthscience, “Convert NO2 concentration in Sentinel-5P data from mol/m2 to μg/m3,” 2020. [Online]. Available: https://earthscience.stackexchange.com/questions/19391/convert-no2-concentration-in-sentinel-5p-data-from-mol-m2-to-%ce%bcg-m3-on-the-ground
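The merge-and-correlate workflow described in this abstract can be sketched with toy data (the station names, column names and values below are illustrative stand-ins, not the project's actual files):

```python
import pandas as pd

# Toy stand-ins for two co-located NO2 daily series (units: ug/m3).
arpa = pd.DataFrame({
    "date": pd.date_range("2024-04-01", periods=6, freq="D"),
    "no2_arpa": [31.0, 28.5, 35.2, 40.1, 26.7, 30.3],
})
aqp = pd.DataFrame({
    "date": pd.date_range("2024-04-01", periods=6, freq="D"),
    "no2_aqp": [30.1, 27.9, 36.0, 41.4, 25.9, 29.8],
})

# Merge on the timestamp (the single-database step described above), then
# compute the Pearson correlation between the two stations.
merged = arpa.merge(aqp, on="date", how="inner")
r = merged["no2_arpa"].corr(merged["no2_aqp"])  # Pearson by default
print(f"Pearson r = {r:.3f}")
```

With real data the same pattern extends to the Sentinel-5P series, after converting the satellite column densities to ground-level concentrations as in reference [10].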
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Earth Observation in the Framework of COSPAR Capacity Building

Authors: Jérôme Benveniste, Dr Carlos Gabriel
Affiliations: COSPAR
Scientific data, preserved in public archives and made freely accessible to researchers around the world, serve as a critical foundation for the capacity-building initiatives organized by the Committee on Space Research (COSPAR). Since its inception, COSPAR has actively sought to democratize space science by ensuring that developing countries have the opportunity to participate in and benefit from global scientific advancements. The COSPAR Capacity Building (CB) initiative plays a central role in this mission, aiming to enhance the scientific and technical capabilities of emerging countries by providing training, access to resources, and fostering international collaborations. One of the most significant ways COSPAR achieves this goal is through its organization of regional workshops across a wide range of space science disciplines. These workshops are designed to equip postgraduate students, young researchers, and emerging scientists with the necessary tools, skills, and knowledge to conduct high-quality scientific research, despite limited access to resources. COSPAR’s CB workshops, which typically last two weeks, provide hands-on training in space science, with an emphasis on using publicly available data and open-source tools. The goal is to ensure that participants, even with minimal resources, can conduct advanced research using basic computer equipment and internet access. By teaching participants how to analyze and interpret space science data, COSPAR enables them to continue their work independently, fostering sustainable research and development in their home countries. One of the focuses of COSPAR’s Capacity Building initiative is Earth Observation (EO) from space. EO data plays a crucial role in addressing global challenges such as environmental monitoring, climate change, and disaster management. COSPAR’s workshops train participants in processing and analyzing EO data to track changes in the Earth’s surface, atmosphere, and oceans.
This enables scientists in developing countries to apply EO data to local issues. In addition, participants gain insight into the broader scientific, technological, and policy implications of using EO data for global decision-making. The COSPAR CB workshops have been held in various emerging countries, contributing to the creation of a global network of scientists skilled in EO data analysis. The focus on collaboration ensures that the knowledge gained extends beyond individual projects, contributing to global efforts to address research and development challenges. In addition to the workshops, COSPAR offers a Capacity Building Fellowship Program. This program supports former workshop participants by providing additional opportunities for research, collaboration, and professional development. Fellows are encouraged to continue their work and collaborate with international experts, ensuring that the skills and knowledge gained during the workshops have a lasting impact. The fellowship program also strengthens the global network of young space scientists, fostering ongoing collaboration and exchange of ideas. COSPAR’s recent Small Satellite Program further strengthens its capacity-building efforts. This initiative helps universities and research institutions in developing countries establish satellite laboratories, enabling them to design, build, and launch small satellites. These satellites provide a cost-effective means for engaging with space science, offering hands-on experience in satellite technology and data collection. The program encourages self-reliance and innovation, as well as international collaboration, helping developing countries build local expertise in satellite-based research. Looking ahead, COSPAR plans to expand its Capacity Building initiatives, with a continued focus on Earth Observation. 
Future workshops and summer schools will explore new applications of EO data in areas such as oceanography, the cryosphere, atmospheric sciences, land hydrology and the water cycle, agriculture, urban planning, and extreme events. COSPAR’s Capacity Building initiative has been instrumental in strengthening space science in developing countries. Through its workshops, fellowships, and programs like the Small Satellite Initiative, COSPAR empowers young scientists to contribute to global space science, addressing critical environmental and sustainability issues while building lasting research capacity. The COSPAR Capacity Building initiative at large will be presented with specific examples from Earth Observation, along with prospects for future co-sponsored workshops and summer schools.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Progress made and future steps of the HYPERedu learning initiative

Authors: Theodora Angelopoulou, Arlena Brosinsky, Akpona Okujeni, Saskia Foerster, Katrin Koch, Vera Krieger, Sabine Chabrillat
Affiliations: GFZ, German Research Centre for Geosciences, Helmholtz Centre, German Environment Agency (UBA), German Weather Service (DWD), DLR German Aerospace Center, Space Agency, Leibniz University Hannover, Institute of soil science
The increasing availability of imaging spectroscopy data from sources such as EnMAP, PRISMA, DESIS, EMIT, and PACE has ignited widespread interest in hyperspectral data analysis across various fields. However, there is a notable shortage of accessible training courses and educational resources. To address this gap, HYPERedu was established in 2019 as part of the EnMAP science program, offering online learning for hyperspectral remote sensing. HYPERedu provides comprehensive, free learning materials tailored for students, researchers, and professionals in academia, industry, and public institutions, from master level upwards. These resources include annotated slide collections and hands-on tutorials using the EnMAP-Box software, available in PDF and video formats. Continuously expanding to meet diverse learning needs, these materials are increasingly integrated into university curricula, professional training programs, and self-directed learning paths. HYPERedu has developed a series of Massive Open Online Courses (MOOCs). The first MOOC, "Beyond the Visible: Introduction to Hyperspectral Remote Sensing", launched in November 2021, covers fundamental principles of imaging spectroscopy, sensor technologies, data acquisition techniques, and software tools; since 2024, the course is also available in German. Designed for flexible, self-paced learning, the course requires 5–8 hours to complete, with participants earning a certificate and a diploma supplement upon completion. Subsequent MOOCs have focused on agricultural applications (2022), EnMAP data access and preprocessing techniques (2023), and soil applications (2024). Upcoming MOOCs will explore topics such as forestry, geology, and inland and coastal waters. All HYPERedu resources are hosted on the EO College platform, a hub for Earth Observation education, and are freely accessible under a CC-BY license.
The MOOCs are available both as interactive online courses and downloadable offline documents (PDF format), allowing participants to engage with the material even without a stable internet connection.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Geospatial Intelligence for Sustainable Futures: Smart Data and AI Applications in Geographic Education

Authors: Lars Tum, Torben Dedring, Jun.Prof. Dr. Andreas Rienow
Affiliations: Ruhr University Bochum
In an era of rapid urbanisation and digital transformation, harnessing the power of geospatial data and artificial intelligence (AI) is essential to addressing global challenges such as sustainable urban planning, climate resilience, and environmental monitoring. This presentation highlights an integrated educational initiative designed to equip MSc and PhD students with essential skills in geospatial data analysis, AI-driven geographic research, and modern research data management practices. The program delves into the analysis of urban transformations through geospatial data sources such as volunteered geographic information (VGI), social media geographic information (SMGI), and Earth observation (EO) data. It explores how AI techniques, including machine learning and deep learning, can be applied to address critical geographic questions. Specific case studies include the use of Sentinel-1 data for flood detection, the application of OSMnx for road network analysis, and the implementation of convolutional neural networks (CNNs) for ship detection in radar imagery. Additionally, the program covers predictive modeling techniques for urban growth and environmental changes, such as water level predictions and housing market dynamics, showcasing AI's role in enhancing geospatial analysis for sustainability and resilience. At the core of the learning process are Python-powered Jupyter Notebooks and educational videos delivered in a Massive Open Online Course (MOOC) format. These resources provide learners with structured, interactive content, guiding them through theoretical concepts and practical applications. The videos contextualise complex topics with real-world examples, while the Jupyter Notebooks facilitate hands-on experimentation with datasets, algorithms, and neural network models. Together, these tools ensure learners can independently implement and adapt the methodologies in their own research and projects. 
These and many more educational resources are part of the NFDI4Earth project, a research initiative advancing digital transformation in Earth System Science (ESS). By following NFDI4Earth’s principles, the courses impart a simple, efficient, open, and FAIR (Findable, Accessible, Interoperable, and Reusable) approach to their learners, making ESS innovation-friendly and user-driven. Through a combination of theoretical instruction and practical exercises, participants master the processing, analysis, and visualisation of heterogeneous geospatial datasets, implement machine learning algorithms such as Random Forest and Support Vector Machines, and build simple neural networks for geographic applications. This initiative not only bridges the gap between traditional geospatial analysis and cutting-edge AI methods but also prepares students for independent, interdisciplinary research. By integrating tools, formats, and FAIR practices, this program contributes to the advancement of geospatial education and the responsible application of digital innovation in ESS.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Master's in Spatial Information Applications: Insights after 100 Graduates Across South America and Italy

Authors: Lic. Camilo Barra, PhD Elisabet Benites, Lic. Rodrigo Chani, Eng. Luis Cruz, Lic. Guadalupe Escalante, Lic. Bernardita Rivera, Lic. Manuel Zeballos, Lic. Christian Escobares, Lic. Julieta Motter, Lic. Silvana Palavecino, MSc Gaston Gonzales Kriegel, Eng. Veronica Schuller, PhD Santiago Seppi, Ximena Porcasi, PhD Anabella Ferral, PhD Marcelo Scavuzzo, PhD Fernanda Garcia Ferreyra
Affiliations: CONAE, Instituto Gulich
The Master’s in Spatial Information Applications (MAIE), jointly organized by the Gulich Institute and the Faculty of Mathematics, Astronomy, Physics, and Computer Science (FAMAF), is an evolution of the Master’s in Space Applications for Early Warning and Emergency Response (MAEARTE), offered since 2009. This program has graduated over 100 professionals who actively contribute to research centers and organizations worldwide. Spanning two years, MAIE combines advanced coursework with research tutorships, immersing students in globally relevant projects. Its interdisciplinary approach encompasses areas such as agricultural and forestry resource management, meteorology and oceanography, environmental emergencies and monitoring, cartography, geological studies, and human health. Graduates are distinguished by their robust technical training, adaptability, and innovative capabilities—essential traits in the rapidly evolving field of space technologies. Located at the Gulich Institute within CONAE’s Teófilo Tabanera Space Center (Falda del Cañete, Córdoba, Argentina), MAIE provides a unique educational experience. Students collaborate with professionals from CONAE, ASI, and other national and international entities, and many have the opportunity to undertake training and research stays in Italy, supported by the Italian Space Agency (ASI) and the Italian Government. MAIE not only offers exceptional academic training but also fosters a diverse and collaborative community, with students from over 10 countries. This international and interdisciplinary focus, enriched by hands-on experience and strong ties to institutions such as ASI, positions MAIE as a leading program for training specialists in space technologies across Latin America and beyond. We celebrate MAIE's profound impact, not only in academic excellence but also in building a professional network committed to addressing the challenges of managing and applying space technologies for societal benefit.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Digital Geomedia in Vocational Education and Training: Blended learning concepts to promote sustainable development through modern geotechnologies

Authors: M.A. Tobias Gehrig, Dr. Maike Petersen, Prof. Dr. Alexander Siegmund
Affiliations: Institute for Geography and Geocommunication – Research Group for Earth Observation (rgeo), Heidelberg University of Education, Germany, Heidelberg Center for the Environment (HCE) & Institute of Geography, Heidelberg University, Germany
Current global challenges such as climate change, land transformation, and loss of biodiversity force educational systems to reinvent their approaches (Habibullah, Din, Tan, and Zahid, 2020). Emphasis on digitalization and Education for Sustainable Development (ESD) are two of the main trends currently gaining momentum (Ahel and Lingenau, 2020). While the need for digitalization has been acknowledged by most fields of education (from primary through tertiary education), ESD is rarely incorporated into the curricula of vocational training (Schmidt and Tang, 2020). However, as almost 500,000 people complete vocational training in Germany every year, they account for a substantial portion of the workforce (Federal Ministry of Education and Research, 2023). Digital geomedia, such as earth observation (EO), geographic information systems (GIS), and mobile geotools, offer an ideal opportunity to integrate Education for Sustainable Development and digitalization. Despite their considerable professional relevance, however, digital geomedia are not widely employed in vocational education and training and have thus far played a relatively minor role. Such tools offer trainees a robust connection to their immediate surroundings and have significant potential for professional and academic preparation (Ministry of Education, Youth and Sport Baden-Wuerttemberg, 2024). For example, digital geomedia, predominantly EO tools like Google Earth, can be employed not only in everyday contexts but also in professional settings, such as state spatial planning or market analysis for companies. Integrating EO skills into vocational training is crucial for managing and preserving cultural landscapes, such as traditional meadow orchards, which are vital for biodiversity and local ecosystems.
Equipped with EO skills, trainees can effectively use technologies such as unmanned aerial systems (UAS) to monitor environmental changes, manage land use sustainably, and contribute to the conservation of these valuable landscapes. Innovative cooperation between science, the educational system, and training companies is essential to provide these skills and promote sustainable development within various sectors. Thus, the project DiGeo:BBNE focuses on embedding the use of digital geomedia to support sustainable practices in vocational training through blended learning. To achieve this, it developed and implemented hybrid teaching-learning settings that combine location- and time-independent e-learning programmes with practice-oriented learning on site. Modules were specifically designed to introduce trainees from various fields, such as landscape management, regional product marketing, and care professions, to digital geomedia. These include interactive e-learning modules on EO, GIS, and mobile geotools, conveying the basics of these technologies through differentiated examples. Additionally, various in-person courses on location analysis, business start-ups, and educational geocaches have been developed and conducted to combine learning with hands-on experience in the field. These courses have been tailored to meet the needs of vocational trainees and provide them with practical skills directly applicable to their future careers. A notable example is a course with trainees from a local automobile manufacturer in Neckarsulm, which introduced UAS flight planning for analyzing traditional meadow orchards. Trainees learned the basics of EO and UAS technology in a workshop, then collected and analyzed UAV imagery from a traditional meadow orchard. This hands-on approach not only provided practical skills but also highlighted the importance of EO technologies in preserving valuable cultural landscapes.
The courses are evaluated with a particular focus on deep structures of teaching, such as cognitive activation, which involves engaging trainees in higher-order thinking processes. Additionally, the evaluations examine student motivation, assessing how the courses inspire and sustain their interest and enthusiasm for learning. The presentation aims to introduce the developed modules targeting vocational training in different sectors as a best-practice example. It will discuss challenges faced when approaching partners from vocational education and strategies to address these, such as time constraints, geography not being part of curricula, and scepticism toward geographical topics. By incorporating these skills into vocational training, we can better prepare trainees for the demands of the modern work environment. The presentation will also present initial results from the course evaluations. These results will help embed the use of digital geomedia within the vocational education system to promote sustainable economic action. Ultimately, this will become another pillar to equip Germany’s workforce with the skills needed to face current and future challenges. References: - Ahel, O., & Lingenau, K. (2020). Opportunities and Challenges of Digitalization to Improve Access to Education for Sustainable Development in Higher Education. In W. L. Filho, A. Lange Salvia, R. W. Pretorius, B. L. Londero, E. Manolas, F. Alves, . . . A. Do Paco, Universities as Living Labs for Sustainable Development. Supporting the Implementation of the Sustainable Development Goals (pp. 341-356). Berlin: Springer. - Federal Ministry of Education and Research. (2023). Report on Vocational Education and Training 2023. Bonn: BMBF. - Habibullah, M. S., Din, B. H., Tan, S.-H., & Zahid, H. (2020). Impact of climate change on biodiversity loss: global evidence. Environmental Science and Pollution Research, pp. 1073-1086. - Ministerium für Kultus, Jugend und Sport Baden-Württemberg. (2024).
Allgemeine Informationen zur Beruflichen Bildung. Retrieved from https://km.baden-wuerttemberg.de/de/schule/berufliche-bildung/allgemeine-informationen-zur-beruflichen-bildung - Schmidt, J. T., & Tang, M. (2020). Digitalization in Education: Challenges, Trends and Transformative Potential. In M. Harwardt, P. F.-J. Niermann, A. M. Schmutte, & A. Steuernagel, Führen und Managen in der digitalen Transformation. Trends, Best Practices und Herausforderungen (pp. 287-312). Berlin: SpringerGabler.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: GEO ART – EARTH FROM SPACE: Earth Observation Data of Kruger National Park From 30 Years Captured on Canvas

Authors: Christiane Schmullius, Susen Reuter, Jussi Baade, Izak Smit
Affiliations: University Jena, Photography and Art, South African National Parks
In 1994, Kruger National Park was one of the scientific supersites of a radar remote sensing experiment onboard the Space Shuttle Endeavour, SIR-C/X-SAR. Fabulous, unprecedented images were taken and later explored in cooperation projects of SANParks scientists and international Earth observation scientists. The cooperation was continued towards, e.g., woody cover mapping and surface model calculations with the satellite generations that followed, in the SANParks projects SARvanna 2008-2011, ARS-Africa-EO 2012-2017 and COLD-EMS 2018-2023. These remote sensing scenes from Copernicus Sentinel-1 and Sentinel-2 as well as JAXA's PALSAR radar images illustrate the magnificent beauty of savanna surfaces. The rich colours of these false colour images stimulated the project “GEO ART – EARTH FROM SPACE”, an extraordinary meeting between art and science: https://www.susenreuter.com/themen/geo-art/. The German visual artist Susen Reuter (member of the Federal Association of Fine Arts) transformed the digital scenes into several large-format canvas paintings. The series of artworks is entitled "Landscapes in motion" and is constantly being expanded to include new motifs. Reuter uses several techniques, such as scumbling and pouring, as well as different materials for her paintings, such as gouache, acrylic and pigments. The GEO ART paintings have been shown in several exhibitions in Germany, e.g. at the Friedrich Schiller University Jena and at the Nicolaus Copernicus Planetarium Nuremberg. The aim of the project is for art to act as a door opener to science and thus to enable the public to experience science in a completely different way. This talk will introduce the satellite images and explain the transition to the paintings and their striking appeal to public viewers. In total, 12 paintings are being presented (possibly also in an exhibition on site at the LPS25 conference facilities!).
The data sources and image processing techniques are reflected upon with respect to their transformation onto canvas. Vice versa, the most obvious features in the acrylic paintings are re-connected to their digital origins from the Earth observation satellites and to the meaning of the remote sensing products.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: GATHERS project – multi-tool educational and networking experience

Authors: Maya Ilieva, Freek van Leijen, Ramon Hanssen, Norbert Pfeifer, Mattia Crespi, Iwona Kudłacik, Jan Kapłon, Grzegorz Jóźków, Witold Rohm
Affiliations: UPWr, TU Delft, TU Wien, Sapienza University of Rome
The GATHERS project ran between 2019 and 2024 and was funded by the European Commission TWINNING programme under Horizon 2020 (http://www.gathers.eu/). The main scientific goal of GATHERS was the development of a methodology for the integration of geodetic and imaging techniques for monitoring and modelling the Earth’s surface deformations and seismic risk. The three techniques that founded the methodology were Interferometric Synthetic Aperture Radar (InSAR), Light Detection and Ranging (LiDAR) and Global Navigation Satellite System for seismological applications (GNSS-seismology). At the same time, within the frame of the project we aimed to gather and train a strong group of young scientists with interests in the field of geodesy and remote sensing. The GATHERS project was an initiative between several European universities and involved as partners the Delft University of Technology (TU Delft, Netherlands), Technische Universität Wien (TU Wien, Austria), Sapienza University of Rome (Sapienza, Italy) and Wroclaw University of Environmental and Life Sciences (UPWr, Poland), with the latter also serving as project coordinator. The training plan, developed by the project partners, included various approaches for realising the project’s goals, with the main activity being short- to mid-term trainings of experienced researchers (ERs), PhD students and MSc students. The realisation of this mission was strongly influenced by the outbreak of the COVID-19 crisis, which coincided with the start of the project. Nevertheless, the good cooperation between the project partners enabled the completion of 21 trainings – 8 ER, 7 PhD and 6 MSc – performed in hybrid mode. Most of the UPWr trained staff, as well as the mentors and lecturers from TU Delft, TU Wien and Sapienza, took part in the preparation of a series of Summer/Winter Schools, Hackathons, and virtual and in-person Roadshows as part of the Knowledge Integration task.
The COVID-19 pandemic posed multiple challenges for our lecturers in these initiatives. The GATHERS partnership developed several strategies to overcome the restrictions on human mobility and gatherings imposed by the virus spread in 2020-2022. The analysis of possible solutions, including virtual schooling, revealed strong fatigue with online studying during the global lockdown, against the background of a shift to an almost fully virtual life in every sphere. Social isolation triggered additional anxiety and insecurity and lowered effectiveness; the need for socialising and teamwork was back on the table. As one of the main goals of the project was to create a geoscience alumni community, the post-pandemic period of the project (2022-2024) was devoted to designing a progressive strategy for delivering the planned events. The most prominent example of this strategy's implementation was the organisation of the final so-called Super Event, which comprised an Advanced Winter School building on two previous basic Summer Schools, a workshop, a B2B meeting and a hackathon. The Super Event was an in-person event hosted by Sapienza.
Important elements for the successful and smooth realisation of the Super Event were: 1) evaluation of the feedback gathered during the previous basic editions of the GATHERS Schools and Hackathon; 2) virtual pre-school and pre-hackathon technical meetings to establish the requirements for programming skills and tools; 3) shared data spaces and platforms for the data and tools used in the practical exercises; 4) formation of working groups based on the skill level and expertise of the participants; 5) communication (a shared messaging environment) during the events; 6) many in-person initiatives acting as ice-breakers and reinforcing teamwork and collaboration; 7) fair and open communication and feedback gathering during and after the events; 8) and, last but not least, responsibilities for individual modules of the course shared across institutions, which proved successful and deepened the cooperation between the project partners. The staff trainings, fulfilled in hybrid mode, resulted in 13 scientific publications, 3 defended PhD theses and 1 MSc thesis, 2 new PhDs started, and increased scientific and educational capacity in each partner’s unit, including the formation of a new generation of mentors and educators. In addition, the Knowledge Integration Activities – schools, hackathons and roadshows – involved 150 BSc, MSc and PhD students and young researchers, mainly from Europe. Twenty-four of them took part in one Basic School and the Advanced School, building up their knowledge and skills, and 25 took part in a School (1 or 2) and a hackathon (1 or 2), demonstrating a determined ability to apply the theory in practice. Nine of these remarkable young scientists have continued their collaboration by working on publishing their results.
The GATHERS project is a valuable showcase of applying thorough analysis, supported by cooperative brainstorming among the main players, to provide flexible educational tools suited to the environment, needs and expectations of the target groups.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Echoes in Space – A Narrative Introduction to Radar Remote Sensing With 14 Exercise Blocks

Authors: Christiane Schmullius, Clemence Dubois, Robert Eckardt, Christian Thiel
Affiliations: University Jena, Deutsches Zentrum für Luft- und Raumfahrt
Essence of the Manuscript: Adventures from a female scientist's life explain milestones of a fascinating Earth observation methodology. Genre: Creative Non-Fiction Textbook. Topic: Introduction to the basics and applications of radar remote sensing. Content: Since 2015, a new fleet of European radar satellites (Sentinel-1A/B/C and following) has enabled operational applications of this Earth observation technique for environmental science. This brings a physically complex technology into everyday use in research and management. This textbook uses fascinating experiences to explain the basics of this Earth observation technique and provides an easy introduction to the almost unimaginable possibilities of advanced processing techniques and to the professional field. Planned book size: 500 pages with many illustrations (campaign photos, tabular overviews, exciting results graphics) and numerous QR codes for further browsing (web pages, animations, videos). Dates: Book publication in September 2025 in German and during 2026 in English. Audience: Earth observation has an expanding user community - not only in all environment-related sciences, but also in environmental planning and management. To date, radar remote sensing has deterred many users because the technology seems exclusive. This combined non-fiction book and textbook aims to overcome this perceived “inaccessibility”. The target audience therefore ranges from BSc and MSc students and PhD candidates to private and public sector stakeholders. It is intended to serve as a textbook for university lectures and seminars (including a semester-long compendium of 14 exercises), as well as to give interested laypersons an introduction to the possible applications. Market Analysis: There is no comparable book on the market – neither in German nor in English.
The idea of a narrative description of exciting technological milestones in which the author herself was involved is especially novel: challenging airplane campaigns, three Space Shuttle missions, the time-critical transport of a receiving station to Mongolia, the innovative "orbital dance" of satellites in space, and the development of innovative environmental monitoring with data from the new European Copernicus satellite fleet in Germany and through international cooperations in China, Mexico, Siberia and South Africa. Marketing: In addition to the publisher’s advertising, trade fairs and conferences, the authors' own international networks and committees will be used, especially in the remote sensing education sector (the international learning platform eo-college.org and the UN consortium EOTEC Devnet).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Teaching and Learning Remote Sensing with SNAP and Sentinel-2 Data – A case study from Anhalt University of Applied Sciences

Authors: Sophie Prokoph, Marvin Gabler, Josef Palmer, Arne Scheuschner, Prof. Dr. Marion Pause
Affiliations: Anhalt University Of Applied Sciences
Hands-on remote sensing data analysis is a key part of the undergraduate studies in surveying and geoinformatics at Anhalt University of Applied Sciences in Germany. At present, the majority of our students study within a dual academic program and are supported by federal authorities and engineering companies in Germany. The “dual” students are therefore an excellent interface for distributing and enhancing knowledge and expertise about available Copernicus EO data in various fields of application. To increase motivation and the quality of learning remote sensing in an academic environment, we developed a lecture concept that allows students to analyse their home region (or any area of individual interest) and simultaneously provide material for scientific communication. The undergraduate lectures focus on optical remote sensing. The students gain knowledge about different data sources (sensors, platforms), data parameters, land cover classification and spectral indices. This content is deepened in the exercises using Sentinel-2 data and the open-access software SNAP. How the concept works: at the beginning, there are two face-to-face introductory exercises to get familiar with SNAP and discuss different RS data products. This is followed by six exercises with Sentinel-2 data (free choice of study area) on the topics of understanding multispectral data, classification and spectral indices. Finally, two exercises showcase special areas of application for RS (e.g., creating a building mask, analysing the influence of irrigation and tillage). The results and insights of the individual student exercises are collected in a presentation and finally evaluated by the lecturer.
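As an illustration of the spectral-index exercises mentioned above: a normalized difference vegetation index (NDVI) can be computed from Sentinel-2 reflectances in a few lines of Python. This is a minimal sketch with made-up toy reflectance values (the course itself works in SNAP, not in code); the band pairing B8/B4 follows the Sentinel-2 convention of near-infrared and red.

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # eps guards against division by zero over water/shadow pixels
    return (nir - red) / (nir + red + eps)

# Sentinel-2 band convention: B8 ~ NIR (842 nm), B4 ~ red (665 nm).
# Toy reflectances: healthy vegetation reflects strongly in the NIR.
b8 = np.array([0.45, 0.30, 0.10])   # NIR reflectance per pixel
b4 = np.array([0.05, 0.10, 0.09])   # red reflectance per pixel
print(np.round(ndvi(b8, b4), 3))    # high values indicate vegetation
```

In SNAP the same computation is done interactively via a band-maths expression such as `(B8 - B4) / (B8 + B4)`; NDVI is bounded in [-1, 1], with dense vegetation near the upper end.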
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: The INTEGRAL Project: Synergies Between European and Asian Academia for Building Geo-Technologies Capacity Towards Resilient Agricultural Adaptation to Climate Change in Lao PDR

Authors: Marinos Kavouras, Eleni Tomai, Margarita Kokla, Vassilios Tsihrintzis, Christina Karakizi, Athanasia Darra, Maria Bezerianou, Ali Mansourian, Jean-Nicolas Poussart, Avakat Phasouysaingam, Phetsavanh Somsivilay, Nouphone Manivanh, Khandala Khamphila, Sisomphone Southavong
Affiliations: National Technical University of Athens, Lund University, National University of Laos, Savannakhet University, Souphanouvong University, Champasack University
Lao PDR’s economic growth is based primarily on natural resources, while the vast majority of the country’s population depends on smallholder agriculture for employment and income. However, agriculture in Lao PDR is increasingly affected by natural hazards such as droughts, floods, and erratic, intense rainfall, all of which have been exacerbated in recent years by the effects of climate change. Shifting agricultural practices toward climate-change resilience while ensuring food security is a vital priority for the country. Intelligent geo-technologies, which combine Earth Observation, Geographic Information Systems, Artificial Intelligence and large-scale mapping, can offer a broad suite of tools and applications to support climate-resilient agricultural practices. However, the use of such crucial knowledge and tools in Lao PDR Higher Education Institutions (HEIs) is very limited. The main objective of the EU-funded project “Intelligent Geotechnologies for Resilient Agricultural Adaptation to Climate Change in Lao PDR - INTEGRAL” is to build the capacity of four Higher Education Institutions in Lao PDR in the field of resilient agriculture, supporting the country’s effective adaptation to climate change by exploiting a broad toolkit of geospatial technologies and by introducing and putting into practice hybrid teaching techniques that combine physical/in-campus and distance learning, while at the same time promoting project-, problem- and investigation-based learning. The project consortium consists of two European HEIs – the National Technical University of Athens in Greece and Lund University in Sweden – and four HEIs in Lao PDR: the National University of Laos, Souphanouvong University, Savannakhet University, and Champasack University. The INTEGRAL project started in February 2023 and has a duration of 36 months. The main achievements of the project so far can be summarized as follows: i.
The development of four innovative modular courses of 7.5 ECTS each, which will be integrated into the curricula of the four participating Lao PDR HEIs; ii. The equipping of four Lao HEIs, resulting in the formation of four brand-new geo-technology laboratories. The acquired equipment comprises a common core for all HEIs, as well as additional equipment tailored to each university’s specific needs; iii. The delivery of the “Building training capacity” (BTC) agenda, through which several consortium-level as well as local (in Lao PDR) training events and activities have been implemented to enhance the knowledge and skills of Lao PDR HE staff. BTC activities have employed the new geo-technology laboratories and e-learning equipment acquired by the universities in Lao PDR and are closely linked with the developed course material. The number of trained academic and administrative staff has already surpassed the design target. During the first half of the project, the commitment of Lao PDR beneficiaries to sustaining the project’s impact has been strongly demonstrated. The participating Lao PDR universities are eager to integrate the capacity built and the innovative courses developed by the project into their broader effort to update and enhance the relevance and appeal of their curricula and teaching methods. The Lao PDR HE system has acquired new competences, skills, infrastructure and resources to better equip graduates in interdisciplinary scientific fields and has developed and/or improved its potential for online and remote training, thereby extending its scope to larger parts of the population. In the future, we envision that the impact will increase transversally once the four developed courses are taught in Lao PDR HEI classrooms, reaching wider audiences and being tested in vivo. ------- Funded by the European Union (Ref: 101082841 — INTEGRAL — ERASMUS-EDU-2022-CBHE). 
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: A Federated Learning Environment for Earth Observation Students: A Success Story from Austria

#stac #pangeo

Authors: Martin Schobben, Luka Jovic, Nikolas Pikall, Joseph Wagner, Clay Taylor Harrison, Davide Festa, Felix David Reuß, Sebastian Hahn, Gottfried Mandlburger, Christoph Reimer, Christian Briese, Matthias Schramm, Wolfgang Wagner
Affiliations: Department of Geodesy and Geoinformation, Technische Universität Wien, Earth Observation Data Centre for Water Resources Monitoring GmbH
Establishing an effective learning environment for Earth Observation (EO) students is a challenging task due to the rapidly growing volume of remotely sensed, climate, and other Earth observation data, along with the evolving demands from the tech industry. Today’s EO students are increasingly becoming a blend of traditional Earth system scientists and "big data scientists", with expertise spanning computer architectures, programming paradigms, statistics, and machine learning for predictive modeling. As a result, it is essential to equip educators with the proper tools for instruction, including training materials, access to data, and the necessary computing infrastructure to support scalable and reproducible research. In Austria, research and teaching institutes have recently started collaborating to integrate their data, computing resources, and domain-specific expertise into a federated system and service through the Cloud4Geo project, which is funded by the Austrian Federal Ministry of Education, Science, and Research. In this presentation, we will share our journey towards establishing a federated learning environment and the insights gained in creating teaching materials that demonstrate how to leverage its capabilities. A key aspect of this learning environment is the use of intuitive and scalable software that strikes a balance between meeting current requirements and maintaining long-term stability, ensuring reproducibility. To achieve this, we follow the Python programming philosophy as outlined by the Pangeo community. In addition, we need to ensure that the environment is accessible and inclusive for all students, and can meet the demands of both an introductory BSc-level course on Python programming and an MSc research project focused on machine learning with high-resolution SAR data. 
We accomplished this by combining the TU Wien JupyterHub with a Dask cluster at the Earth Observation Data Centre for Water Resources Monitoring (EODC), deployed close to the data. A shared metadata schema, based on the SpatioTemporal Asset Catalog (STAC) specifications, enables easy discovery of all federated datasets, creating a single entry point for data spread across the consortium members. This virtually “unlimited” access to data is crucial for dynamic and up-to-date teaching materials, as it helps spark the curiosity of students by opening up a world full of data. Furthermore, the teaching materials we develop showcase the capabilities of the federated system, drawing on the combined resources of the consortium. These materials feature domain-relevant examples, such as the recent floods in central Europe, and incorporate scalable programming techniques that are important for modern EO students. These tutorials are compiled into a Jupyter Book, the “EO Datascience Cookbook”, published by the Project Pythia Foundation, which allows students to execute notebooks in our federated learning environment with a single click. Beyond serving as teaching material, the Jupyter Book also acts as a promotional tool to increase interest in EO datasets and their applications. We are already seeing the benefits of our federated learning environment: 1) it enhances engagement through seamless, data-driven storytelling, 2) it removes barriers related to computing resources, 3) it boosts performance by breaking complex tasks into manageable units, and 4) it fosters the development of an analytical mindset, preparing students for their future careers. We hope that this roadmap can serve as a model for other universities, helping to preserve academic sovereignty and reduce reliance on tech giants, such as Google Earth Engine. Federated learning environments are essential in training the next generation of data-driven explorers of the Earth system.
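To illustrate how a shared STAC-style metadata schema turns holdings spread across consortium members into a single searchable entry point, the sketch below filters minimal STAC-like items by bounding box and time window. The dataset names, hosts, and values are hypothetical placeholders; a real deployment would query a STAC API endpoint (e.g. via pystac-client) rather than an in-memory list.

```python
from datetime import datetime

# Minimal STAC-style items (bbox + properties.datetime), one per federated
# dataset. Collection names and "host" values are invented for illustration.
catalogue = [
    {"id": "sig0-20240915", "collection": "sentinel1-sig0",
     "bbox": [9.5, 46.4, 17.2, 49.0],  # roughly Austria, lon/lat degrees
     "properties": {"datetime": "2024-09-15T05:30:00Z", "host": "EODC"}},
    {"id": "dem-tile-33N", "collection": "austria-dem",
     "bbox": [13.0, 47.0, 14.0, 48.0],
     "properties": {"datetime": "2021-01-01T00:00:00Z", "host": "TU Wien"}},
]

def intersects(a, b):
    """Axis-aligned bounding-box overlap test ([min_lon, min_lat, max_lon, max_lat])."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def search(items, bbox=None, start=None, end=None):
    """Filter items the way a STAC API /search endpoint would."""
    hits = []
    for item in items:
        if bbox is not None and not intersects(item["bbox"], bbox):
            continue
        t = datetime.fromisoformat(item["properties"]["datetime"].replace("Z", "+00:00"))
        if start is not None and t < start:
            continue
        if end is not None and t > end:
            continue
        hits.append(item)
    return hits

# Find flood-season SAR backscatter over eastern Austria, wherever it is hosted.
hits = search(catalogue,
              bbox=[16.0, 47.5, 17.0, 48.5],
              start=datetime.fromisoformat("2024-09-01T00:00:00+00:00"))
print([i["id"] for i in hits])  # only the Sentinel-1 item matches
```

The point of the single entry point is exactly this: students query one catalogue by space and time and never need to know which consortium member hosts the data.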

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Enabling High Resolution Air Quality Forecasts using Advanced Machine Learning Algorithms for Improved Decisions through SERVIR Capacity Building Activities in Southeast Asia

Authors: Ashutosh Limaye, Alqamah Sayeed, Daniel Irwin, Peeranan Towashiraporn, Aekkapol
Affiliations: NASA
SERVIR strives to build the capacity of partners to use satellite data to address critical challenges in food security, water security, weather and climate resilience, ecosystem and carbon management, and air quality and health. A partnership of NASA, USAID, and leading technical organizations in Asia, Africa, and Latin America, SERVIR develops innovative solutions to specifically address user needs. Several efforts across the NASA Earth Action portfolio, and beyond, successfully bridge the gap between science and decision making. SERVIR has a unique way of linking the latest and most appropriate science to the decision-making of users and regional technical and scientific collaborators through services that build sustained capacity in SERVIR regions. In this presentation, we will focus on an example - SERVIR’s air quality explorer tool in Southeast Asia. The web-based tool applies data from NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS), Visible Infrared Imaging Radiometer Suite (VIIRS), and Goddard Earth Observing System (GEOS) Air Quality Forecasts to track and predict air quality in the Southeast Asian region, which includes Vietnam, Cambodia, Thailand, Myanmar, and Lao PDR. Starting in Thailand, SERVIR worked with partners at the Thailand Pollution Control Department (PCD) to co-develop actionable air quality forecasts grounded in the in-situ observations collected by PCD. From the beginning, the purpose of this effort was to enable users, such as PCD, to become proficient in generating the air quality forecasts on their own. Effective use of the ground observations, satellite data, and forecasts in a computationally efficient manner was a need expressed by the users. 
The use of neural network machine learning and deep learning algorithms enabled us to stitch together a system that brings the different elements together in a computationally efficient manner to estimate air quality forecasts. The modeling system has now found a firm user base and has been used to develop specific, targeted application systems based on the air quality forecasts. For example, in northern Thailand, the forecast data was used to provide guidance on agricultural burns, enabling the continuation of traditional agricultural practices while ensuring the resulting smoke impacts will not further exacerbate the air quality for downwind residents. In this presentation, we will discuss our experience collaborating with users to strengthen their capacity to develop a system that fits their needs.
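The core idea of grounding a model forecast in station observations can be shown in miniature: fit a statistical correction of the raw forecast against in-situ measurements, then apply it to new forecasts. The sketch below uses a simple least-squares linear fit on synthetic PM2.5 values purely for illustration; the operational system described above uses neural networks and additional satellite inputs, and all numbers here are invented.

```python
import numpy as np

# Synthetic stand-ins: a raw model PM2.5 forecast and co-located station
# "truth" with a systematic bias plus noise (values invented for the demo).
rng = np.random.default_rng(0)
model_pm25 = rng.uniform(20, 120, size=200)                       # ug/m3
station_pm25 = 0.7 * model_pm25 + 5 + rng.normal(0, 3, size=200)  # ug/m3

# Fit station = a * model + b by least squares on a training split.
X = np.vstack([model_pm25[:150], np.ones(150)]).T
a, b = np.linalg.lstsq(X, station_pm25[:150], rcond=None)[0]

# Apply the learned correction to held-out forecasts and compare errors.
corrected = a * model_pm25[150:] + b
raw_err = np.abs(model_pm25[150:] - station_pm25[150:]).mean()
cor_err = np.abs(corrected - station_pm25[150:]).mean()
print(f"mean abs error: raw {raw_err:.1f} -> corrected {cor_err:.1f}")
```

Swapping the linear fit for a neural network follows the same pattern: train on (forecast, observation) pairs, then run the trained correction inside the forecast pipeline.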

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Breaking down time-series analyses, UAV, and hyperspectral data for schools

Authors: Johannes Keller, Christian Plass, Dr. Maike Petersen, Prof. Dr. Alexander Siegmund
Affiliations: Institute for Geography & Geocommunication – Research Group for Earth Observation (rgeo), Heidelberg University of Education, Institute for Geography & Geocommunication – Research Group for Earth Observation (rgeo), Heidelberg University of Education and Heidelberg Center for the Environment (HCE) & Institute of Geography, Heidelberg University
Modern approaches to Earth Observation (EO), including time-series analyses, UAV, and hyperspectral data, hold significant potential for enhancing our understanding of the Earth’s system in the context of the Sustainable Development Goals (SDGs). Time-series analyses are instrumental in assessing the impact of climate change on the environment (Winkler et al., 2021), while drone data aids farmers in adopting resource-efficient cultivation methods with precision farming (Harsh et al., 2021). Furthermore, hyperspectral data from new satellites like EnMAP can assist in identifying minerals necessary for the sustainable energy transition (Asadzadeh et al., 2024). These applications create numerous educational opportunities by linking Geography, STEAM education (Science, Technology, Engineering, Arts, Mathematics), and education for sustainable development (ESD). However, the implementation of these EO applications in education is often hindered by time constraints, a lack of expertise among teachers, and the absence of suitable teaching examples (Dannwolf et al., 2020). A key solution to address these limitations is the development of user-friendly web applications for analysing EO data. For instance, there is a pressing need for a web-based tool that enables students to utilize the over 200 bands of EnMAP for analysis without causing confusion or overwhelm. Additionally, these applications should provide easily accessible EO data, include clear explanations for both teachers and students, and be integrated into ready-to-use educational materials. E-learning plays a crucial role in this context, as it alleviates the burden on teachers and facilitates personalized learning for students (Dannwolf et al., 2020). The project "EOscale3" at the Institute for Geography and Geocommunication - rgeo at Heidelberg University of Education aims to integrate satellite image time series, UAV, and EnMAP into educational settings. 
To achieve this, the user-friendly EO analysis web application BLIF has been expanded to offer a broader range of EO data and innovative tools for analysis. Throughout this process, various challenges have been encountered in integrating different data sources into a cohesive web application and providing suitable analytical tools for students. Additionally, adaptive e-learning modules and a virtual classroom have been developed, where students learn to apply EO data to solve real-world problems using the newly created application. The aim of this presentation is to demonstrate how time-series analyses, UAV, and hyperspectral data can be effectively integrated into classrooms. It will detail the development of appropriate tools and the challenges addressed throughout this process. Finally, the presentation will showcase the e-learning modules and the virtual classroom developed within the project, designed to assist educators in effectively incorporating these new tools.
References:
Asadzadeh, S., Koellner, N. & Chabrillat, S. (2024). Detecting rare earth elements using EnMAP hyperspectral satellite data: a case study from Mountain Pass, California. Scientific Reports, 14(1), 20766. https://doi.org/10.1038/s41598-024-71395-2
Dannwolf, L., Matusch, T., Keller, J., Redlich, R. & Siegmund, A. (2020). Bringing Earth Observation to Classrooms—The Importance of Out-of-School Learning Places and E-Learning. Remote Sensing, 12(19), 3117. https://doi.org/10.3390/rs12193117
Harsh, S., Singh, D. & Pathak, S. (2021). Efficient and Cost-effective Drone – NDVI system for Precision Farming. International Journal of New Practices in Management and Engineering, 10(04), 14–19. https://doi.org/10.17762/ijnpme.v10i04.126
Winkler, K., Fuchs, R., Rounsevell, M. & Herold, M. (2021). Global land use changes are four times greater than previously estimated. Nature Communications, 12(1), 2501. https://doi.org/10.1038/s41467-021-22702-2

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Closing the Geospatial Data Literacy Gap in Digital Farming: Lessons Learned

Authors: Dr. Julia Wagemann, Sabrina H. Szeto, Julian Blau
Affiliations: thriveGEO GmbH, BASF Digital Farming GmbH
The use of geospatial and Earth Observation (EO) data is growing. Digital Farming is a key industry where EO data are used, enabling farmers to make data-driven decisions. The growing demand has resulted in a shortage of workers with data literacy skills, leading to challenges with hiring and a need for upskilling. This poster highlights training needs in EO identified through an ongoing training collaboration between BASF Digital Farming, a digital farming business, and thriveGEO, a training and consulting company. It also provides an overview of the growing skills gap in EO as well as the areas of skills development that are needed.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Fostering Earth Observation Literacy: Lessons from SERVIR’s Curriculum Development Initiative

Authors: Kelsey Herndon, Micky Maganini, Dr. Tom Loran, Dr. Rob Griffin, Eric Anderson, Dr. Freek van der Meer, Dan Irwin, Dr. Roshanak Darvishzadeh, Claudia Paris, Roelof Rietbroek, Dr. Margarita Huesca Martinez, Michael Schlund
Affiliations: The University Of Alabama In Huntsville, NASA Marshall Space Flight Center, University of Twente
Open-access Earth Observation (EO) data represent a transformative resource for decision-making across a wide range of sectors, including NGOs, environmental organizations, and government agencies. Initiatives such as NASA's and ESA's Open Science programs have dramatically reduced barriers to accessing these data, creating opportunities to address complex challenges in areas like climate resilience, natural resource management, and disaster mitigation. Despite this progress, significant obstacles remain for non-experts in operationalizing EO data to inform actionable decision-making. These challenges, including limited technical expertise, insufficient tools, and a lack of tailored training, often prevent users from fully leveraging the rich temporal depth, global geographic coverage, and diverse environmental insights offered by remote sensing datasets. SERVIR, a joint initiative between NASA and the United States Agency for International Development (USAID), is designed to bridge these gaps by partnering with leading geospatial institutions worldwide to support the integration of EO data into decision-making processes. SERVIR employs a holistic approach that extends beyond traditional training and workshops, combining cutting-edge technology, local capacity building, and strategic partnerships to enhance EO literacy. One such partnership is with the University of Twente’s Faculty of Geo-Information Science and Earth Observation (ITC), a leading educational institution aimed at building technical expertise in using EO data to improve environmental decision making. Since 2018, SERVIR and ITC have worked together to integrate operational tools and services developed by SERVIR into ITC’s graduate curriculum. In addition, SERVIR has developed asynchronous, virtual modules as part of ITC’s Geoversity platform – an educational platform targeted at professionals to increase their operational capacity to use EO data and tools in their workflows. 
Key SERVIR tools integrated into ITC’s curriculum include ClimateSERV, a tool for accessing climate data and visualizations to analyze trends and inform climate adaptation strategies; Collect Earth Online (CEO), a collaborative tool for land-use monitoring and environmental assessments; HYDRAFloods, a hydrological modeling tool supporting flood risk assessment and water resource management; and the Radar, Mining, and Monitoring Tool (RAMI), which tracks mining activities and their environmental impacts. Each tool was selected based on its potential to address specific regional and sectoral challenges, enabling ITC students and professionals to tackle real-world issues using EO data. To ensure effective integration of these tools, SERVIR employs a structured Curriculum Development Initiative Framework comprising six key phases: Assessment, Outreach, Development, Review, Implementation, and Evaluation. This phased approach ensures that the curriculum is both relevant and impactful, fostering sustainable capacity-building. In addition to in-person courses, SERVIR has developed asynchronous, virtual modules for ClimateSERV, CEO, and HYDRAFloods as part of ITC’s Geoversity platform. This platform provides professionals with flexible learning opportunities, enabling them to incorporate EO data into their workflows regardless of geographic or time constraints. These modules are designed with practical applications in mind, emphasizing hands-on exercises and real-world case studies. Initial outcomes from this initiative are promising with participant surveys indicating increased confidence in applying EO tools to their areas of interest. Furthermore, the initiative has highlighted the importance of tailoring content to diverse user groups, recognizing that the needs of professionals in government agencies may differ significantly from those in NGOs or academia. 
Key lessons learned include the value of blending technical instruction with contextual examples to bridge the gap between theory and application, the importance of iterative feedback from participants to refine training materials, and the need for ongoing support and mentorship to ensure sustained use of EO tools after initial exposure. SERVIR’s experience underscores the critical role of strategic partnerships and innovative educational models in advancing EO literacy. By integrating EO tools into both traditional academic programs and professional training platforms, SERVIR is fostering a new generation of decision-makers equipped to harness the full potential of EO data. This approach not only addresses immediate capacity gaps but also lays the groundwork for sustained, scalable impact across diverse sectors and geographies. In this presentation, we will delve deeper into the Curriculum Development Initiative Framework, share detailed outcomes from participant evaluations, and discuss the broader implications of our work for cultivating EO literacy globally. By sharing these insights, we aim to inspire other organizations to adopt similar approaches, contributing to a more EO-literate workforce capable of addressing today’s pressing environmental and societal challenges.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: The ESA Stakeholder Engagement Facility

Authors: Phillip Harwood, Michelle Hermes, Diego Carcedo, Francesca Elisa Leonelli
Affiliations: Evenflow, EARSC, SERCO, ESA-ESRIN
The European Space Agency, ESA, funds user-focused application projects across a wide range of topics, including food security, ecosystem monitoring and SDG indicators. Each of these projects typically works with a set of early adopters to ensure that the project’s outputs are tailored to real user needs and respond to real policy requirements. However, ESA’s experience has shown several areas where stakeholder interaction could be improved: engaging with stakeholders in the context of a single project misses opportunities to interest them in a wider range of multi-project solutions that could meet their needs; many projects produce outputs which are of interest to a wider group of stakeholders than those originally involved in the project; and when projects end, interactions with their stakeholders are not systematically maintained, so stakeholders often find that services they are interested in are not maintained, or that support is no longer available. To help address these issues ESA has created the Stakeholder Engagement Facility, SEF, with the aim of maintaining and expanding engagement with a diverse range of user communities. The SEF is run under contract from ESA by a consortium consisting of Evenflow (Belgium), SERCO (Italy and Czechia) and EARSC (Belgium). The SEF initially focuses on four priority themes: Food Systems; Ecosystems and Biodiversity; Carbon, Energy and the Green Transition; and Sustainable Development Goals. The SEF was kicked off in November 2023 and started full operations in May 2024. Since then it has performed a series of user-focused activities, aiming not to duplicate existing ESA efforts but instead to bring in new stakeholders. Rather than following a predefined set of activities, the aim is to work with the community to identify their needs and blocking points, then to define an action plan to address these. 
Each community has different needs: in some cases the blocking point is simply lack of awareness of the tools available, for others there may be a need for training and capacity building, while in other cases the tools are understood but there is a need to establish trust in the outputs. In many cases the blocking points are not technical, but relate to management or legal issues. The SEF adapts its activities according to the state of the community, and over the last year has performed activities including:
- Providing presentations and demonstration desks at events of the target communities, working on the principle of going to the users rather than expecting the users to come to the EO community.
- Organising a series of webinars presenting EO-based services to stakeholders in the fields of ecosystem conservation and city management.
- Providing bespoke training to users to get them started in the use of EO-based tools.
- Undertaking other tasks identified by the community as being needed, such as compiling inventories of EO-based services.
The SEF works closely with another project (APEx: Application Propagation Environment), which ensures the continued availability of the data, tools and services developed by different projects. Taken together, the two projects ensure that users of ESA-funded projects should no longer experience a sharp transition at the end of a project lifetime, allowing time to build up the case for operational use of the services developed. In addition, ESA has identified a need for an improved mapping of policies and stakeholders, to help ensure that ESA’s funds are effectively directed towards meeting key policy needs. The SEF provides a tool where the relations between key policies and stakeholders are mapped, also including thematic areas and project outputs, which allows an improved understanding of the overall landscape that EO-based tools are addressing. 
Over time this should allow the SEF to better target its activities towards those stakeholders that are most relevant for the desired policy outcomes.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: D.01.04 - POSTER - Using Earth Observation to develop Digital Twin Components for the Earth System

Climate change represents one of the most urgent challenges facing society. The impacts of climate change on the Earth system and society, including rising sea levels, increasing ocean acidification, more frequent and intense extreme events such as floods, heat waves and droughts, are expected not only to have a significant impact across different economic sectors and natural ecosystems, but also to endanger human lives and property, especially for most vulnerable populations.

The latest advances in Earth Observation science and R&D activities are opening the door to a new generation of EO data products, novel applications and scientific breakthroughs, which can offer an advanced and holistic view of the Earth system, its processes, and its interactions with human activities and ecosystems. In particular, those EO developments together with new advances in sectorial modelling, computing capabilities, Artificial Intelligence (AI) and digital technologies offer excellent building blocks to realise EO-based Digital Twin Components (EO DTCs) of the Earth system. These digital twins shall offer high-precision digital replicas of Earth system components, boosting our capacity to understand the past and monitor the present state of the planet, assess changes, and simulate the potential evolution under different (what-if) scenarios at scales compatible with decision making.

This session will feature the latest developments from ESA’s EO-based DTCs, highlighting:
- Development of advanced EO products
- Integration of EO products from a range of sensors
- Innovative use of AI and ML
- Advanced data assimilation
- Development of tools to address the needs of users and stakeholders
- Design of system architecture
- Creation of data analysis and visualization tools

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: A digital twin of Svalbard’s cryosphere (SvalbardDT)

Authors: William Harcourt, Georgios Leontidis, Dr Eirik Malnes, Dr Robert Ricker, Ward Van Pelt, Veijo Pohjola, Adrian Luckman, Noel Gourmelen, Livia Jakob, Ashley Morris, Morag
Affiliations: University Of Aberdeen, NORCE Norwegian Research Centre, Uppsala University, Swansea university, University of Edinburgh, EarthWave, Svalbard Integrated Arctic Earth Observing System (SIOS)
The Svalbard archipelago, which sits at the boundary between the warm midlatitudes and the cold polar region, is warming six times faster than the global average. This is driving mass loss from glaciers, diminishing sea ice extent, and reducing seasonal snow cover, significantly altering the interconnected systems within Svalbard’s cryosphere. Furthermore, Svalbard is considered a super site of in situ observations in a pan-Arctic context owing to the permanent infrastructure and long history of international scientific collaboration on the archipelago. Combined with the recent exponential increase in satellite data, now is the time to exploit this dense set of observations to build improved digital representations of the Arctic cryosphere. In this contribution, we will present an update on the development of a Digital Twin Component (DTC) of Svalbard’s cryosphere (SvalbardDT) as part of the Destination Earth (DestinE) initiative. The development of a DTC that can map the current state of the cryosphere and analyse the physical processes interconnecting the different sub-systems has profound implications for marine and terrestrial decision-making as well as our understanding of the fundamental physical processes that govern Svalbard’s cryosphere. The DTC will be optimised using Svalbard’s extensive observational record, which will enable us to undertake a thorough validation of the DTC outputs. We will construct a new DTC of the ice and snow of Svalbard’s cryosphere in the 21st Century through an automated data management system to ingest, harmonise, and analyse data products ready for delivery to the Digital Twin Earth Service Platform (DESP). Earth Observation (EO) data products describing glacier dynamics, snow cover, and sea ice variability combined with atmospheric reanalysis data will be ingested into our DTC. These data products are multi-modal, i.e. they are collected at different resolutions, scales, and spatial/temporal extents. 
Therefore, the DTC will utilise a deep learning approach to ingest the relevant data products and harmonise them into a 4D data cube describing the data set variable, x-dimension, y-dimension, and its changes over time. Our aim is to generate weekly data cubes describing 16 parameters. With spatially modelled data cubes, we will next initiate a feedback loop to train the DTC using multi-modal learning. After building the DTC infrastructure and AI models, we will focus on the application of two case studies. Firstly, we will study the impacts of extreme weather events on Svalbard’s cryosphere, such as Rain on Snow and Ice, by analysing ‘emergent behaviour’ from the AI models that may elucidate new understanding of these physical processes. The second use case will focus on developing the DTC as a tool for optimising marine and terrestrial navigation across Svalbard and associated waters in response to changing snow and ice conditions. This will involve engagement with local stakeholders. We will present the architectural design of our DTC, the data products used and the first results of the AI models used to harmonise these multi-modal data sets.
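A toy sketch of how such a weekly four-dimensional cube (time, variable, y, x) might be laid out in memory is shown below. The parameter names, grid size, and values are illustrative placeholders only, not the project's actual schema or data.

```python
import numpy as np

# Hypothetical cube layout: 52 weekly time steps, a handful of the 16
# planned parameters, on an invented 100 x 120 grid. NaN marks "not yet
# ingested", so gaps in the harmonised products stay visible.
variables = ["snow_depth", "sea_ice_conc", "glacier_velocity"]
weeks, ny, nx = 52, 100, 120
cube = np.full((weeks, len(variables), ny, nx), np.nan, dtype=np.float32)

def ingest(cube, week, variable, field):
    """Write one harmonised product into its (time, variable) slot."""
    cube[week, variables.index(variable)] = field

# Ingest one synthetic snow-depth field (0.35 m everywhere) for week 0.
ingest(cube, 0, "snow_depth", np.zeros((ny, nx)) + 0.35)

# Slicing the cube along time for one parameter, as an analysis would.
snow_series = cube[:, variables.index("snow_depth")]
print(snow_series.shape)  # (52, 100, 120)
```

In practice a labelled-array library (e.g. xarray) would carry the dimension names and coordinates explicitly, but the indexing pattern is the same.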

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: The IRIDE Cyber Italy project: an enabling PaaS for Digital Twin Applications

#cloud-native

Authors: Stefano Scancella, Fabio Lo Zito, Stefano Marra, Davide Foschi, Fabio Govoni, Simone Mantovani
Affiliations: Serco Italia S.p.A., CGI Italia S.r.l., MEEO S.r.l.
The IRIDE Cyber Italy project represents a significant national step forward to develop and implement Digital Twins (DT) of the Earth, by leveraging Earth Observation data and cloud technologies and services to build a scalable and interoperable reference Framework enabling the use of DTs in diverse thematic domains. As part of the Italian EO space program funded by the European Union's National Recovery and Resilience Plan (PNRR) and managed by the European Space Agency (ESA) in collaboration with the Italian Space Agency (ASI), the project demonstrates Italy’s commitment to advancing EO applications and fostering digital innovation. The Cyber Italy Framework aims to provide an enabling Platform as a Service (PaaS) solution to exploit Digital Twin capabilities, with practical applications in fields such as risk management, environmental monitoring, and urban planning. A Digital Twin, as a digital replica of Earth, integrates data-driven models to simulate natural and human processes, thereby allowing advanced analyses, predictive capabilities, and insights into the interactions between Earth's systems and human activities. SERCO leads the consortium, composed of e-GEOS, CGI, and MEEO. Phase 1 of the project, completed in 2024 after 12 months, focused on prototyping a hydro-meteorological Digital Twin, showcasing the power of a DT framework and its application to flood simulation and management. Phase 2, ongoing and lasting an additional 12 months, evolves the framework prototype into a pre-operational system, by: • enhancing the Framework’s scalability, elasticity and interoperability; • setting up a DevOps environment over a cloud-based infrastructure; • demonstrating the usability of the Framework by integrating an additional DT (Air Quality Digital Twin), developed by a third party. 
The final Phase 3, lasting 10 months and ending in 2026, will focus on the full operationalization of the Framework as a platform for the integration of any additional DTs, to expand thematic coverage. The project adopts a cloud-native, container-based architecture, leveraging the continuous integration, delivery and deployment (CI/CD) approach to ensure efficient updates and system adaptability. The infrastructure, based on OVHcloud technologies, is designed to support both horizontal and vertical scalability and elasticity, allowing it to handle increasing data volumes and concurrent user sessions seamlessly by a Kubernetes-based orchestration. The Digital Twin framework is powered by Insula, the CGI Earth Observation (EO) Platform-as-a-Service, which has been successfully implemented in various ESA projects, including DestinE DESP. Insula provides a comprehensive suite of APIs designed to support hosted Digital Twins (DTs) with functionalities such as data discovery, data access, processing orchestration, and data publishing. Beyond these foundational capabilities, Insula also enables the seamless integration of custom processors, allowing users to extend the platform's analytical capabilities to meet specific project requirements. Complementing its robust APIs, Insula offers an advanced user interface tailored for complex big data analytics. This UI leverages a scalable and cloud-native backend, empowering users to perform intricate analyses efficiently and at scale, thus making Insula a key technology for operationalizing Digital Twin frameworks. Interoperability is a key concept of Cyber Italy Framework, facilitated by the integration in the Framework of the ADAM platform developed by MEEO, which adopts both Harmonised Data Access (HDA) and Virtual Data Cube (VDC) approaches, ensuring consistent and fully customizable handling of input data, supporting the integration of distributed data sources and diverse DTs while enhancing long-term flexibility. 
ADAM is largely adopted as key technology within relevant European Commission initiatives (WEkEO, DestinE Service Core Platform, …) and ESA projects (ASCEND, HIGHWAY, GDA APP, …) to generate and deliver Analysis Ready Cloud Optimised (ARCO) products to support multi-domain and temporal analyses. One of the key features of the CyberItaly Framework is the ability to define and implement "what-if" scenarios, which provide stakeholders with critical tools to simulate conditions, predict outcomes, and make data-driven decisions. These scenarios are instrumental in addressing challenges like hydro-meteorological events, offering precise predictions for flood risks or air quality previsions, such as emissions or traffic pollution estimation, enabling more effective planning and response strategies. The IRIDE Cyber Italy project goal is to create a robust and versatile digital ecosystem, integrating cutting-edge EO technologies and seeks to demonstrate the potential of Digital Twins in supporting a sustainable Earth system and environmental management. By leveraging cloud-native architectures, and emphasizing standardization and scalability, the IRIDE Cyber Italy project is creating a versatile platform for DTs. This project represents a crucial step towards by creating a comprehensive framework capable of supporting a wide range of Digital Twins. Future applications could extend the use of Digital Twins on a wide range of sectors, such as urban planning, agriculture, and natural resource management, contributing to the global vision of using EO technologies to advance Earth system understanding and management.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Sentinel-3 OLCI observation-based digital twin component for aquatic carbon in the land-sea continuum

Authors: Martin Hieronymi
Affiliations: Helmholtz-Zentrum Hereon
Water constituents exhibit diverse optical properties across ocean, coastal, and inland waters, which alter their remote-sensing reflectance obtained via satellites. Optical water type (OWT) classifications utilized in satellite data processing aim to mitigate optical complexity by identifying fitting "ocean" color algorithms tailored to each water type. This facilitates comprehension of biogeochemical cycles ranging from local to global scales. We present a novel neural network- and OWT-based processing chain for Sentinel-3 OLCI data of the aquatic environment. Using a data set of daily-aggregated Sentinel-3A & 3B OLCI data of the entire North Sea and Baltic Sea region with adjacent land areas for the time span June to September 2023, we introduce the retrieved optical properties and their relationships with concentrations of the water constituents, for example on dissolved and particulate organic carbon. Moreover, we show the great potential of a novel OWT analysis tool for the differentiation of phytoplankton diversity, understanding aquatic carbon dynamics, and assessing the uncertainties of satellite products. The OWT analysis can directly be used to draw conclusions about the trophic state of lakes (albeit based on colour and not on a concentration range) or potentially harmful algal blooms, e.g. intense cyanobacteria blooms in the Baltic Sea. This would provide a handle for possible warnings for bathing waters, drinking water treatment or the fishing and aquaculture industry. Together with additional information, e.g. on the water depth, the OWT analysis also serves to develop meaningful new flags for areas where the model assumptions and algorithms are not valid. In the future, it will also be important to demonstrate the performance of satellite retrievals through water type-specific validation. 
The presented data set with OWT analysis can serve as a blueprint for a holistic view of the aquatic environment and some pools of carbon and is a step towards an observation-based digital twin.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Forest Digital Twin – From TLS data to 3D tree representation for Radiative Transfer Modelling

Authors: Tomáš Hanousek, Jan Novotný, Barbora Navrátilová, Růžena Janoutová
Affiliations: Global Change Research Institute CAS, Department of Geography, Faculty of Science, Masaryk University
The concept of a forest digital twin represents a transformative approach to understanding and managing forest ecosystems by bridging the gap between field data and predictive models. Terrestrial Laser Scanning (TLS) provides high-resolution, three-dimensional (3D) data of forest structure, enabling detailed and precise reconstruction of individual trees. By converting this data into digital 3D tree representations, it enables simulating ecosystem processes using Radiative Transfer Models (RTMs) within various parts of the forest and calibrate the influence of tree structure on outputs. These simulations play an important role in studying forest metrics, RTM, and energy exchange within forests. The key challenge is to develop a streamlined workflow from TLS data acquisition to the production of accurate and usable 3D tree representations for RTM applications. We are introducing a comprehensive workflow for reconstruction of 3D tree representations from TLS data based on already tested practices combined with new approaches. 
The workflow was tested on 9 plots from the Těšínské Beskydy region, Czech Republic where TLS data were acquired using Riegl VZ-400 scanner and employs following steps: • From the acquired TLS data, we estimated the Leaf Area Index (LAI) value using the VoxLAD model and extracted its value for individual trees • Separation of individual trees, we used 3D graph-based method to isolate individual trees with optimisation to Central European forests • Foliage and wood segmentation, we labelled TLS data using semantic segmentation and deep learning algorithm • To build the tree branch structure, we used Treegraph for deciduous trees and our own algorithm for coniferous trees • Spatial distribution of leaves was done using our own algorithm with usage of LAI value to distribute the appropriate number of leaves Reconstructed 3D tree representations are ready for use as scalable RTM models, such as the Discrete Anisotropic Radiative Transfer (DART) model. The workflow allows users to define their own LAI values and select different leaf models to simulate different conditions. In addition, the leaf models provided in workflow are optimised to balance accuracy and computational efficiency, ensuring that the models can be used in computationally demanding scenarios without compromising the reliability of the simulation results. However, they can be replaced with more detailed leaf models to further improve accuracy. We present a comprehensive workflow for reconstructing 3D tree representations from TLS data as a method for creating a forest digital twin. By integrating advanced segmentation, structural modelling, and leaf spatial distribution techniques, our proposed workflow uses a combination of innovative methods to accurately reconstruct individual tree in forest and develop Digital Twin of Forest for RTM models. 
This approach represents a significant advance in ecological modelling and forest management, providing a reliable basis for studying canopy dynamics and forest metrics in complex forest systems.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: A DTC Urban - SURE Smart Urban Resilience Enhancement.

Authors: Jan Geletič, Manuela Ferri, Aniello De Luca, GABRIELE MURCHIO, Alessia Tricomi, Daniele Oxoli, Prof Maria Antonia Brovelli, Patrick Matgen, Marco Chini
Affiliations: E-geos, Politecnico di Milano, LIST-Lyxembourg Institute of Science and Technology
SURE has been proposed in the framework of ESA-DTE-B-02 EARLY DTCS DEVELOPMENT ACTIONS. It is managed by e-GEOS and a significant Consortium including Politecnico di Milano, Luxembourg Institute of Science and Technology and Stefano Boeri Architetti. The project focuses on two use cases: • Modelling the effects of Urban Heat Islands (UHIs) in a test area of Milan, addressing both city scale and neighbourhood-scale impacts. • Simulating flood effects in a suburban area of Luxembourg (Urban Floods). Concerning the UHI use case, city scale simulations aim at demonstrating the capability to provide insights into macro-changes in Land Surface Temperature (LST) triggered by extensive interventions (e.g., green area expansions, urban forestation). LCZ (Local Climate Zone) and UHI maps will be generated based on Sentinel-2 with different Urban Canopy Parameters and MODIS/Landsat-8/9 thermal data, respectively. A simulation workflow will be set up based on urban vegetation/built-up characteristics to replicate the LST response. Instead, the neighbourhood scale will be addressed through urban microclimate modelling by leveraging the PALM model system (https://palm.muk.uni-hannover.de). The Urban Flood Use Case focuses on creating a flood risk assessment framework using EO data, AI algorithms (e.g., Sentinel-1 SAR), and climate models to predict flood hazards in Luxembourg’s Alzette and Sure river floodplains. It integrates precipitation, temperature, soil moisture, and river discharge data to produce flood hazard and risk maps. An ensemble approach addresses uncertainties, with results displayed on a dashboard to evaluate impacts on infrastructure. Both Use Cases will be articulated in a set of “what if” scenarios, according to the User’s requirements collected during the first steps of the project. 
For each scenario a dedicated Digital Twin Component environment will be realized, in order to give to the final users a well-organized and complete framework in which the results of the modelling may be visualized and combined to get further information. The project addresses the needs of different users’ categories, such as private entities (banks, insurances, professionals, urban planners) and public administrations involved in the management of the territory. As stakeholders already involved in the project, it is possible to mention for the UHI Use Case: - Città Metropolitana of Milan; - Studio Boeri Architetti (partner and stakeholder); for the Flood Use Case - Spuerkees Bank On the UHI Use Case we registered the active interest of the Municipality of Milan. The Consortium is getting in touch with Luxembourg Public Administrations involved in water management and civil protection, in order to widen the stakeholder’s participation. SURE will do wide use of satellite datasets, jointly with all other useful available dataset, in all the phases of the project, starting from the training and validation of the models to the realization of the DT environment. What realized during the project will be compliant to the specifications of DestinE Core Service Platform (DESP) for future integration.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Digital Twin Earth for Climate Change Adaptation: Downscaling for Human Activities

Authors: Prof. Mihai Datcu, Research Assoc. Prof. Vaduva Corina, Dr. Ing. Alina Radutu, Prof. Eden Mamut, Prof. Ionescu Constantin, Prof. Liviu
Affiliations: POLITEHNICA Bucharest
The presentation is introducing the results of the recently established national Competence Center for Climate Change Adaptation in Romania. Climate models describe changes at scales of thousands of kilometers and for periods of centuries. However, adaptation measures shall be applied at human activities scales, from few meters to kilometer and for periods of days to months. It is in the scope of the project to contribute to the EC Adaptation to Climate Change Agenda with specific measures of implementation using coupled models across domains and spatiotemporal scales. The Competence Centre will promote in its agenda the opportunities offered by the availability of Big EO Data, with a broad variety of sensing modalities, global coverage, and more than 40 years of observations. The project is in line with the „Destination Earth'' initiative (DestinE) that promotes the use of Digital Twins (DT) and with the EC-ESA Joint Initiative on Space for Climate Action developing new AI4EO paradigms. The Compentence Center activities are supported by an actionable digital media, a system of federated DTs. The DTs implement a virtual, dynamic models of the world, continuously updated, enabling simulations while providing more specific, localized and interactive information on climate change and how to deal with its impacts. These DTs will also be a tool to largely interact with local administrations, and also directly with people, raising awareness and amplifying the use of existing climate and EO data and knowledge services, for the elaboration of local and specific adaptation. That is a step towards a citizen driven approach with an increased societal focus. 
The Competence Center is currently implementing 5 DTs in 5 synergetic projects: “Artificial Intelligence in Earth Observation for Understanding and Predicting Climate Change” (AI4DTE), “Active Measures for Restoring Sweet-Water Lakes and Coastal Areas affected by Eutrophication addressing the Enhancement of Resilience to Climate Change and Biodiversity” (Act4D-Eutrophication), “Exploitation of Satellite Earth Observation data for Natural Capital Accounting and Biodiversity Management” (EO4NATURE), “The Research centEr for climAteChange due to naTuraldIsasters and extreme weather eVEnts” (REACTIVE), “Assessing climate change impact on the vector-borne diseases in the One-Health context” (VeBDisease). This is a federated DT systems. A first DT Earth (DTE) will maximize the information extracted from EO data. The methodology is focused on hybrid Physics Informed AI methods, time series, prediction, causality discovery, a “what-if” engine to monitor, forecast or simulate climate change effects. A second DT will model the eutrophication of sweet-water lakes and Black Sea West coast waters. A third DT produce adaptation knowledge as requested for actions aiming to protect plants biodiversity and ecosystems. A fourth DT will monitor coupled atmosphere-hydrosphere-lithosphere processes, it will provide for the first time an integrated view of how climate-change-stimulated phenomena can be monitored using seismic sensor networks. And a fifth DT will model and anticipate and fight vector-borne emerging animal and zoonotic infectious diseases, implementing an integrated One-Health approach considering the links between human health, animal health, and environmental health. The use case to be presented is covering the region of Dobrogea encompassing the region of the Black Sea coast from Suitghiol lake to the Danube Delta and Babadag forest, a very diverse region where the DTs complementarity is demonstrated. 
The initial data used comprises multi annual Satellite Image Time Series from Sentinel-1 and Sentinel-2, GEDI measurements, infrasound and seismic continues records, GNSS data, in-situ water quality measurements, biodiversity in-situ parameters, in-situ information on mosquito species and pathogens, wind maps, and meteorological data. The analysis is made by the fusion and the joint analysis of the spatio-temporal patterns of environmental parameters estimated from the collected data and predictions based on DNN models: water bio-, chemical- parameters, canopy height, spectral indices, land cover classes, wind speed prediction al low altitude, sea currents and wind speed, or extreme weather effects detection and characterisation. The coupled DTs system will support the scope of the Competence Center to promote a geographical diversity approach, involving various regions and communities, following a systemic approach converging several cross-modalities themes and areas of innovation, implemented as an inclusive methodology to bring together public administrations, private sector, civil society, and finally the citizens in person.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Fields of The World and fiboa: Towards interoperable worldwide agricultural field boundaries through standardization and machine-learning

#parquet #fiboa

Authors: Matthias Mohr, Michelle Roby, Ivor Bosloper, Hannah Kerner, Prof. Dr. Nathan Jacobs, Caleb Robinson
Affiliations: Taylor Geospatial Engine, Arizona State University, Washington University in St. Louis, Microsoft
In this talk, we present two closely related initiatives that aim to facilitate datasets for worldwide standardized agricultural field boundaries: the fiboa data specification and the Fields of The World (FTW) benchmark dataset and models. Both initiatives work in the open and all data and tools are released under open licenses. Fiboa and FTW emerged from the Taylor Geospatial Engine’s Innovation Bridge Program’s Field Boundary Initiative [1]. This initiative seeks to enable practical applications of artificial intelligence and computer vision for Earth observation imagery, aiming to improve our understanding of global food security. By fostering collaboration among academia, industry, NGOs, and governmental organizations, fiboa and FTW strive to create shared global field boundary datasets that contribute to a more sustainable and equitable agricultural sector. Field Boundaries for Agriculture (fiboa) [2] is an initiative aimed at standardizing and enhancing the interoperability of agricultural field boundary data on a global scale. By providing a unified data schema, fiboa facilitates the seamless exchange and integration of field boundary information across various platforms and stakeholders. At its core, fiboa offers an openly developed specification for representing field boundary data using GeoJSON and GeoParquet formats. This specification has the flexibility to incorporate optional 'extensions' that specify additional attributes. This design allows for the inclusion of diverse and detailed information pertinent to specific use cases. In addition, fiboa encompasses a comprehensive ecosystem that includes tools for data conversion and validation, tutorials, and a community-driven approach to developing extensions. This allows a community around a specific subject to standardize datasets. By using datasets with the same extensions, the tools can validate attribute names, coding lists, and other conventions. 
The fiboa initiative goes beyond providing specifications and tooling by developing over 40 converters for both open and commercial datasets [3]. These converters enable interoperability between diverse data sources by transforming them into the fiboa format. This significant effort ensures that users can integrate and utilize data more efficiently across different systems and platforms. All open datasets processed through this initiative are made freely accessible via Source Cooperative [4], an open data distribution platform. Fields of The World (FTW) [5] is a comprehensive benchmark dataset designed to advance machine learning models for segmenting agricultural field boundaries. Spanning 24 countries across Europe, Africa, Asia, and South America, FTW offers 70,462 samples, each comprising instance and semantic segmentation masks paired with multi-date, multi-spectral Sentinel-2 satellite images. Its extensive coverage and diversity make it a valuable resource for developing and evaluating machine learning algorithms in agricultural monitoring and assessment. FTW also provides a pretrained machine learning model for performing field boundary segmentation. This model is trained on the diverse FTW dataset, enabling it to generalize effectively across different geographic regions, crop types, and environmental conditions. Additionally, ftw-tools - a set of open-source tools accompanying the benchmark - simplifies working with the FTW dataset by providing functions for download, model training, inference, and other experimental or explorative tasks. Fiboa (Field Boundaries for Agriculture) and Fields of The World (FTW) complement each other in advancing agricultural technology. fiboa provides a standardized schema for field boundary data. FTW, with its benchmark dataset and pretrained machine learning model, generates field boundary data from satellite imagery to fill global data gaps. 
FTW’s source polygons used to create the benchmark dataset and output ML-generated field boundaries are fiboa-compliant. Together, both projects form a powerful ecosystem: fiboa ensures data consistency and usability, while FTW supplies the tools and insights to produce and refine this data. This synergy supports precision farming, land use analysis, land management, and food security efforts, driving innovation and sustainability in agriculture worldwide. The vision is to develop a continuously evolving global field boundary dataset by combining the open field boundaries converted into the fiboa format with the output datasets generated by FTW. References: [1] https://tgengine.org [2] https://fiboa.org [3] https://fiboa.org/map [4] https://source.coop [5] https://fieldsofthe.world
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Introduction to the Early Digital Twin Component EO4ER ("Earth Observation for Energy Risks")

Authors: Ingo Schoolmann
Affiliations: Ohb Digital Services Gmbh
In line with the goals of ESA Digital Twin Earth Program, the project EO4ER (“Earth Observation for Energy Risks”) contributes to the Early Digital Twin Component (DTC) objectives, by the implementation of a DTC prototype for the energy sector. Solar energy being one of the leading renewable energy sources while being affected by the temperature increase caused by climate change, this project focuses on two use cases regarding solar energy systems resulting in an interactive digital reconstruction and simulation with respect to • anticipating the impact of variable power production by photovoltaic systems on the active operation of low voltage networks using hour-scale solar power production forecasting based on the novel EO capabilities given by the MTG satellite series, and, • operational risks for large PV plants due to extreme situations and the long-term solar potential assessment based on climate projection datasets and what-if-scenarios. The targeted main stakeholders are represented by individual prosumers as well as infrastructure, grid and distribution system operators. For the validation of the developed methods, a field test area in Germany (covering more than 80 PV systems) is considered. The scope of the EO4ER project is in line with the increased emphasis on renewable energy and zero emissions anticipated by the European Green Deal as well as national plans for clean energy transitions. The prime OHB Digital Services GmbH together with Reuniwatt SAS and Technische Hochschule Ulm (THU) form the EO4ER consortium.
Affiliation(s): Ohb Digital Services Gmbh
LPS Website link:
Introduction to the Early Digital Twin Component EO4ER ("Earth Observation for Energy Risks")&location=X5+–+Poster+Area+–+Zone+T" class="text-info" target="_blank">Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Mirroring natural and anthropogenic phenomena with CyberItaly

Authors: Enrico Cadau, Mr Pascal Gilles, Rita Chirico, Luigi Conte, Luciano Rempicci, Irene Luongo, Dario Bottazzi, Marco Pischedda, Luca Benenati, Antonio Monteleone, Ivan Federico, Rosalia Maglietta, Martina Montinaro, Giovanni Coppini
Affiliations: ESA-ESRIN, Capgemini, Nais, CMCC, ATG-Europe for ESA
The increasing pollution, environmental fragility, climate change, and associated extreme weather events like flash-floods and persistent droughts are putting significant strain on our communities. To build a more sustainable future, public common policies and solutions are needed [1]. The integration of Digital Twin (DT) technology, big data analytics, AI, and Earth observation systems offers a promising approach to address these challenges. By leveraging these technologies, we can analyse complex phenomena, assess the impacts of various actions, and provide policymakers and civil protection agencies with crucial insights for timely and informed decision-making. The notion of DT emerged in the early 2000s and it is gaining popularity in different branch of engineering [2]. In earth science domains the DT [3] is defined as an information system that exposes users to a digital replication of the state and temporal evolution of the Earth system constrained by available observations and the laws of physics. DTs can be viewed as a new class of simulations based on expert knowledge and data that are continuously gathered from the physical system to accurately represent the phenomena of interest in different scales of time and space [4]. DTs allow users to explore hypothetical simulation scenarios and engage in complex “what-if” analysis to improve our understanding, prediction and, consequently, to extend our ability to reduce natural/anthropogenic risks [5]. DTs require the availability of high-quality data to capture the complexity of the reality. Early DT projects like ESA's Destination Earth [5]. and NASA's Earth System DT [6] primarily relied on satellite EO data. The CyberItaly project is part of the IRIDE programme initiated under the framework of the Italy’s National Recovery and Resilience Plan (PNRR). 
It introduces an innovative approach to data collection that integrates traditional satellite-based observations with ground-level sensor data collected from multiple sources, including regional and municipal institutions. The proposed approach opens the possibility to create more comprehensive digital representations of complex systems. Available DT implementations typically involve the simulation of complex chemical-physical and (multi-) physics processes. These simulations require significant time and computational resources to complete. In addition, it is also necessary to adopt advanced data assimilation techniques to integrate heterogeneous observational data with physical system simulations and to generate comprehensive state estimations. CyberItaly follows a different approach and fosters the adoption of surrogate models based on machine learning techniques. ML models are trained using synthetic data generated through complex simulations and ensure the possibility to represent common system scenarios with acceptable fidelity. Running ML models requires limited time/resources, and ML-based simulations enable users can gain comprehensive insights into system complexity. This enables the exploration of different public policies to increase safety/sustainability of the environment. After thoroughly exploring various scenarios through the surrogate model, users can validate their findings using traditional high-fidelity simulations. In this paper we will present these ideas and will introduce case studies in the field of air quality management and costal protection. • Air Quality DT—This DT forecasts air pollution from traffic in metropolitan areas. It gathers data from various sources like traffic, road maps, weather, and elevation maps. The system creates an emission model for each street and uses a kernel based surrogate model to estimate pollutant diffusion. 
It also allows analysis of hypothetical scenarios to assess the impact of traffic restrictions policies, vehicles fleet evolution and buildings on pollutant dispersion. So far, we applied this DT to the urban area of Bologna and Genova, which can benefit of pollutant dispersion map computed at 20 meters resolution or better, for both past and forecast capabilities implemented. • Coastal Protection DT—The Coastal Protection DT forecasts erosion, flooding, sediment transport and water quality. The system leverages cutting-edge multi-resolution EO data and in situ observations and models from Copernicus Marine Service (CMS) and Emodnet. Advanced modeling approaches, including wave, ocean circulation, sediment transport and coastal inundation models, simulate complex environmental processes, while an AI-based emulator enables the rapid generation of "what-if" scenarios. These features empower stakeholders to evaluate the impacts of coastal restoration and Nature-Based Solutions (NBS), infrastructure modifications and climate adaptation measures in near-real time, ensuring timely and effective decision-making. The DT is currently applied to two pilot areas in Italy, the Rimini coastline and the Manfredonia zone. References [1] IPCC, “Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change,” H. Lee and J. Romero (eds.), IPCC Technical Report, Geneva, Switzerland, 2023. [2] M. Grieves and J. Vickers, “Digital twin: Mitigating unpredictable, undesirable emergent behaviour in complex systems,'” in Transdisciplinary Perspectives on Complex Systems: New Findings and Approaches, F.-J. Kahlen, S. Flumerfelt, and A. Alves, Eds., Springer Int., Aug. 2016, pp. 85–113. [3] P. Bauer, B. Stevens, W. Hazeleger, “A digital twin of Earth for the green transition,” Nature Climate Change, Vol. 11, Feb. 2021, pp. 80–83. [4] T. Gabor, L. Belzner, M. Kiermeier, M. T. Beck, and A. 
Neitz, “A simulation-based architecture for smart cyber-physical systems,” in Proc. IEEE International Conference on Autonomic Computing (ICAC), Wurzburg, Germany, Jul. 2016, IEEE Press, pp. 374–379. [5] Nativi, Stefano, Paolo Mazzetti, and Max Craglia. 2021. "Digital Ecosystems for Developing Digital Twins of the Earth: The Destination Earth Case" in Remote Sensing, MDPI, Vol. 13, No. 11, May 2021. [6] J. Le Moigne, “NASA'S Advanced Information Systems Technology (AIST): Combining New Observing Strategies and Analytics Frameworks to Build Earth System Digital Twins,” in proc. 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia July 2022, pp. 4724-4727.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Towards a Digital Twin for the Alps to simulate landslide occurrences for hazard adaptation strategies.

Authors: Jean-Philippe Malet, Clément Michoud, Thierry Oppikofer, Floriane Provost, Maxim Lamare, David Michéa, Aline Déprez, Michael Foumelis, Fabrizio Pacini, Philippe
Affiliations: Ecole et Observatoire des Sciences de la Terre / CNRS, Terranum Sàrl, Data-Terra / THEIA Continental Surfaces Data and Service Hub / CNRS, Sinergise Solutions Gmbh, School of Geology, AUTh / Aristotle University of Thessaloniki, Terradue, European Space Agency - ESA, Esrin
The Alps are the most densely populated mountain range in Europe and are particularly sensitive to the impacts of climate change, and thus to hydro-meteorological hazards such as landslides, floods, droughts and glacier-related processes. Moreover, these phenomena are expected to intensify in the near future. These hazards constitute a major threat to human activity. Over the last century, temperatures in the Alps have risen twice as fast as the northern hemisphere average, while precipitation has increased non-linearly and become more discontinuous, with a rise in the number of extreme rainfall events. Because of the increasing pressure on human settlements and infrastructure, implementing hazard adaptation strategies from the local to the regional scale is a strong priority for policy-makers. To support and improve decision-making, numerical decision support systems can provide valuable information derived from multi-parametric observations (in-situ sensors, satellite data) and models, linked to computing environments, in order to better manage increasing risks. In this context, a demonstrator has been developed to simulate landslide occurrences by combining in-situ sensor data, satellite EO-derived products and process-based models. The demonstrator targets three applications: 1) quantifying complex landslide motion from space by combining advanced InSAR analysis techniques (SNAPPING) and advanced optical offset-tracking techniques (GDM-OPT) to monitor low and high ground motion rates, respectively; 2) assessing and forecasting, with a daily lead time and at regional scale, the occurrence of heavy-rainfall-induced shallow landslides, in terms of slope failure probability and sediment propagation towards the valleys; and 3) predicting the activity (e.g. velocity) of large, deep-seated and continuously active landslides under extreme rain events through a combination of physics- and AI-based simulation tools. The analysis and simulation tools have been embedded in the Digital Twin for the Alps (DTA) platform together with advanced visualization tools (maps and time series) specifically implemented to enable easy exploration of the products by several categories of stakeholders. Use cases in southern Switzerland and southern France demonstrate the capabilities of the platform. The data, services and technologies used to provide tailored information to the landslide operational and science communities will be presented.
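Regional forecasting of rainfall-induced shallow landslides, as in application 2), is commonly grounded in empirical rainfall intensity-duration thresholds of the form I = α·D^(−β). A minimal sketch using Caine's classic global parameter values (illustrative only; the DTA demonstrator's actual model and parameters are not described in this abstract):

```python
# Caine-type intensity-duration threshold for shallow landslide triggering.
# ALPHA/BETA are Caine's (1980) global values, used here purely as an example;
# they are not the parameters of the DTA demonstrator.
ALPHA, BETA = 14.82, 0.39

def critical_intensity(duration_h: float) -> float:
    """Rainfall intensity (mm/h) above which shallow landslides become likely."""
    return ALPHA * duration_h ** (-BETA)

def exceeds_threshold(rain_mm: float, duration_h: float) -> bool:
    """Compare the mean observed intensity of a storm against the I-D threshold."""
    intensity = rain_mm / duration_h
    return intensity > critical_intensity(duration_h)

# A 24 h storm delivering 200 mm (~8.3 mm/h) vs. the 24 h threshold (~4.3 mm/h):
print(exceeds_threshold(200.0, 24.0))  # True
```

Operational systems replace a single global curve with regionally calibrated thresholds and convert exceedance into a slope failure probability.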
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Digital Twin Component in Urban Flood Modelling - A Proof-of-Concept

Authors: Yu Li, Jefferson See Wong, Thanh Huy Nguyen, Joao Vinholi, Anis Amziane, Marco Chini, Patrick Matgen
Affiliations: Luxembourg Institute Of Science And Technology
Urban flood forecasting faces growing challenges due to rapid urbanization, climate change, and the increasing frequency of extreme rainfall events. The integration of Digital Twin (DT) technologies in flood forecasting offers a transformative approach by creating real-time, virtual replicas of urban systems that can simulate, predict, and mitigate flood risks with higher precision and adaptability. A core aspect of the Digital Twin in this study is its ability to combine real-time data with historical flood patterns to generate dynamic flood simulations. Through the integration of sensor data, such as rainfall and river gauge measurements, the DT system is capable of adjusting simulations as new data becomes available, thus improving accuracy over time. The system also incorporates urban infrastructure models, including drainage networks, building maps, and surface elevation data, to simulate water flow and identify vulnerable areas in the catchment. As part of the ESA project (Urban DTC/SURE), we explore the integration of a Digital Twin Component for urban flood modelling within the Alzette catchment, focusing on improved flood identification and risk assessment through a combination of Earth Observation (EO) data, hydrological and hydraulic modeling, and scenario analysis. The research presents innovative techniques to address urban flood hazards using advanced analysis of SAR and multispectral satellite data, calibrated with field measurements, and enhanced with future climate change scenarios. 
The methodological framework involves four components: 1) leveraging high-resolution data from Sentinel-1 SAR data and Sentinel-2 multispectral imagery to identify and map buildings and floodwaters in urban areas, 2) calibrating a hydrological model using a blend of EO-derived data and field measurements and simulating discharge as the boundary condition for the hydraulic model, 3) calibrating a hydraulic model with the use of EO-derived maps from 1), high-resolution building maps, and simulated discharge from 2) to provide more detailed predictions of flood inundation extent and water depth, and 4) assessing flood hazard and risk under different future climate change scenarios. One of the novel aspects of this DTC is the exploration of ‘what if’ scenarios to evaluate flood hazard and risk under different future climatic conditions and mitigation measures. Using near-future and mid-term climate change projections, the research assesses the potential impacts of changing precipitation and temperature patterns on flood dynamics. This scenario-based approach allows for the testing of adaptive strategies to mitigate the increased flood risks expected with climate change. Further ‘what if’ scenarios examine the effects of modifications to river geometry, such as changes to channel shape or bank reinforcement, on flood hazard and risk. These simulations help assess how modifications in river conveyance capacity can influence urban flood behavior, providing critical insights for future urban infrastructure planning. Flood hazard maps have been pre-computed for each climate change and river conveyance scenario, providing a comprehensive toolset for estimating the impact of climate physical risks and floodplain development projects in the selected urban areas. To do this we make use of the inventory of physical assets prone to be affected by flooding. 
The demonstration consists of displaying, for each scenario selected by the user, the hazard and risk maps and summarizing the associated risk metrics on a dashboard (e.g. number of persons affected, critical infrastructure impacted). A scenario consists of a time period (i.e. reference/near future/mid-term future), a Representative Concentration Pathway (i.e. RCP2.6, RCP4.5, RCP8.5) and a river conveyance change (i.e. an increase or decrease of x%). The reference hazard and risk maps correspond to the 1981-2000 period. For all RCP and river conveyance scenarios, changes with respect to the reference period will be highlighted on a map and dashboard as part of the demonstration. The research demonstrates the potential of combining EO data, hydrological modeling, and climate scenario analysis within a Digital Twin framework to improve urban flood modelling and risk management. The Alzette catchment serves as a testbed for validating this approach, offering a pathway for cities globally to adopt similar strategies for proactive flood mitigation, enhanced disaster preparedness, and informed urban planning in response to the growing threat of climate-induced urban flooding.
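The scenario space described above (time period × RCP × river conveyance change) maps naturally onto a keyed lookup of pre-computed hazard maps. A minimal sketch, with hypothetical conveyance steps and placeholder file names (the DTC's actual data model is not given in the abstract):

```python
from dataclasses import dataclass
from itertools import product

PERIODS = ("reference", "near_future", "mid_term")
RCPS = ("RCP2.6", "RCP4.5", "RCP8.5")
CONVEYANCE = (-10, 0, 10)  # hypothetical +/- x% changes in river conveyance

@dataclass(frozen=True)  # frozen -> hashable, usable as a dict key
class Scenario:
    period: str
    rcp: str
    conveyance_pct: int

# Pre-computed hazard maps, keyed by scenario (file names are placeholders).
hazard_maps = {
    Scenario(p, r, c): f"hazard_{p}_{r}_{c:+d}pct.tif"
    for p, r, c in product(PERIODS, RCPS, CONVEYANCE)
}

def lookup(period: str, rcp: str, conveyance_pct: int) -> str:
    """Return the pre-computed hazard map for the user-selected scenario."""
    return hazard_maps[Scenario(period, rcp, conveyance_pct)]

print(lookup("mid_term", "RCP8.5", -10))
```

In the actual DTC, the dashboard would attach the associated risk metrics (persons affected, critical infrastructure impacted) to each scenario key in the same way.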
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Advancing water resources management and flood control merging earth observations and modelling reservoir behaviour in digital twins

Authors: Arjen Haag, Athanasios Tsiokanos, Albrecht H. Weerts
Affiliations: Operational Water Management, Deltares
Reservoirs play an important role in water security, flood risk, energy supply and natural flow regimes around the world. Reservoir surface area can be observed from space (e.g. see https://www.globalwaterwatch.earth/), and reservoir volumes and/or levels can be estimated from it, as developed for instance in the ESA project Surface-2-Storage (Winsemius and Moreno Rodenas, 2024). Reservoir storage/volume can also be simulated (e.g. van der Laan et al., 2024) using a fast, advanced distributed hydrological model (Imhoff et al., 2024). We present results and ongoing activities from digital twin projects, among others DTC Hydrology Next, in which we integrate Earth observations of reservoir storage, estimated from reservoir surface area, with modelled reservoir behaviour. These observations help to build a better understanding of the actual situation (including reservoir operation rules) and to provide actionable information for water resources management and flood control.

References
Winsemius, H.C. and A. Moreno Rodenas, 2024. Surface-2-Storage - Final Report. Deltares, 11207650-002-ZWS-0020, 7 May 2024.
van der Laan, E., P. Hazenberg and A.H. Weerts, 2024. Simulation of long-term storage dynamics of headwater reservoirs across the globe using public cloud computing infrastructure. Science of The Total Environment, 10.1016/j.scitotenv.2024.172678.
Imhoff, R.O., J. Buitink, W.J. van Verseveld and A.H. Weerts, 2024. A fast high resolution distributed hydrological model for forecasting, climate scenarios and digital twin applications using wflow_sbm. Environmental Modelling & Software, 10.1016/j.envsoft.2024.106099.
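Converting an observed reservoir surface area into a storage estimate typically goes through an area-storage (hypsometric) relationship. A minimal sketch assuming a power-law fit V = a·A^b to hypothetical bathymetry-derived pairs (illustrative; not the Surface-2-Storage retrieval itself):

```python
import numpy as np

# Hypothetical (area km^2, volume Mm^3) pairs from a reservoir's bathymetry.
area = np.array([2.0, 5.0, 9.0, 14.0, 20.0])
volume = np.array([8.0, 35.0, 90.0, 170.0, 280.0])

# Fit log V = log a + b log A by least squares (power law V = a * A^b).
b, log_a = np.polyfit(np.log(area), np.log(volume), 1)
a = np.exp(log_a)

def storage_from_area(observed_area_km2: float) -> float:
    """Estimate reservoir volume (Mm^3) from a satellite-observed water area."""
    return a * observed_area_km2 ** b

# Interpolating inside the fitted range:
print(round(storage_from_area(10.0), 1))
```

Such a curve, fitted once per reservoir, lets each new satellite area observation be assimilated as a storage estimate into the modelled reservoir behaviour.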
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Validation of geohazards products as part of the Digital Twin Component solution of the ESA GET-it project

Authors: Hugues Brenot, Nicolas Theys, Stefano Corradini, Arnau Folch, Daniela Fucilla, Gaetana Ganci, Fabrizio Pacini, Elisa Trasatti, Salvatore Stramondo
Affiliations: Royal Belgian Institute for Space Aeronomie (BIRA), Istituto Nazionale di Geofisica e Vulcanologia (INGV), Consejo Superior de Investigaciones Científicas (CSIC), Terradue
The development of services based on the exploitation of multi-sensor Earth observation data into models is essential to provide key information in the event of geohazards with an impact on people and society (such as volcanic eruptions or earthquakes). The ESA Geohazards Early Digital Twin Component (GET-it) project is dedicated to the implementation of a prototype system designed to provide an interactive tool to users, e.g. institutional and commercial stakeholders. For two types of scenarios, seismic and volcanic crises, GET-it aims at providing solutions to users to help in decision making and eventually mitigate the impact of geohazards. To do so, GET-it relies on four modules, which provide information on surface deformation, damaging events, quantitative forecasts of volcanic ash/SO2 clouds, and the thermal and rheological evolution of lava flows. These modules are built on 10 toolboxes targeting surface deformation and topographic monitoring (5 toolboxes based on Interferometric Synthetic Aperture Radar – InSAR, Global Navigation Satellite System – GNSS, and optical imagery), damage (1 toolbox based on a combination of imagery), volcanic cloud occurrence and characterisation (2 toolboxes based on thermal infrared – IR – data from geostationary sensors), and ground thermal anomaly and lava flow monitoring (2 toolboxes based on medium IR data from polar orbiting and geostationary sensors). This presentation shows the first results of the validation of the GET-it system for three case studies (the 2018 eruption of Mount Etna, Italy; the 2016 earthquake sequence in central Italy; and the 2021 eruption of La Palma, Canary Islands, Spain). This validation is a two-step process. The first step is the validation, error analysis and uncertainty quantification of the outputs of the four GET-it scenario modules. The second step concerns the Digital Twin Component (DTC) solution and its associated functionalities. 
This validation is based on parameters that characterise the quality of the solution. An important part is verifying that the DTC solution matches the needs and requirements expressed by the users. Another aspect relates to the possible scaling of the DTC in an operational environment and whether the proposed solution would be fit to handle real-time situations.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: From Mobile LiDAR Point Clouds to Urban Digital Twins: Advancing 3D Reconstruction With Quality Optimization

Authors: Xinyi He, Dr. Alvaro Lau Sarmiento
Affiliations: Wageningen University & Research
In recent years, aerial and satellite remote sensing products have gained popularity for various urban research applications. Nonetheless, there is a need for a cheaper and more readily accessible data source for small-scale, high-precision datasets, such as streets, buildings, and infrastructure. LiDAR provides just such a data source in the form of point clouds. Among the various LiDAR technologies, mobile laser scanning (MLS) offers much flexibility because it can be based on moving platforms such as backpacks and vehicles. However, MLS point clouds are sparse and noisy, limiting their application in urban digital twins. Therefore, a comprehensive improvement in the quality of MLS point clouds is essential for enhancing the accuracy of 3D reconstruction and advancing the usability of MLS in digital twins. Nowadays, typical steps for processing point clouds include alignment, denoising and filtering, segmentation and classification, feature extraction, and object extraction; improving point cloud quality focuses on the initial stages. Significant limitations exist in previous studies: 1) there are not enough studies on point clouds from MLS sources; 2) a significant proportion of existing studies rely on static objects from publicly available datasets, such as models of a rabbit or mechanical gears; 3) many studies evaluate the effectiveness of denoising algorithms by artificially introducing noise, whereas complex urban environments pose additional challenges, such as interference from dynamic objects (e.g., vehicles and pedestrians); and 4) urban point clouds also face the problems of massive data storage and the trade-off between modelling accuracy and simplicity. To address these challenges, this paper proposes a systematic and holistic data processing workflow tailored to MLS point clouds in urban contexts. 
This workflow covers multiple processing steps, including advanced denoising, filtering, and surface reconstruction techniques to improve the point clouds' geometric and reconstruction accuracy. Firstly, articles and techniques on improving the quality of point clouds are systematically and comprehensively summarised through a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) review. The PRISMA screening covered 7,593 publications, published between 2023 and 10 August 2024, drawn from five academic platforms: Google Scholar, Web of Science, Scopus, IEEE Xplore, and ArXiv. The search terms included 'LiDAR,' 'Point Cloud,' and 'Denoising.' The validated denoising methods include Random Sampling-based algorithms, KNN (K-Nearest Neighbors) based algorithms, CNN (Convolutional Neural Networks) based algorithms, PointNet-based algorithms, Manifold-based algorithms, and Normal-based algorithms. Secondly, experiments are conducted to determine which methods are particularly effective for the MLS point cloud of the streets of Leeuwarden in the Netherlands, our study area. Specifically, several algorithms drawn from the PRISMA screening results were implemented to find those most suitable for processing specific MLS point clouds. The experiments evaluate the root mean square error (RMSE) and denoising rate by comparing the target MLS point cloud against a reference Terrestrial Laser Scanning (TLS) point cloud of the same area. This study concludes that PointNet and its derivative algorithm, PointFilter, perform best in processing MLS point cloud data. Subsequently, the MLS point cloud obtained from the alignment and denoising process is subjected to 3D reconstruction. Then, we measure the absolute trajectory error, relative position error, and surface distance error on the produced 3D models. 
These assessments validate the effectiveness of the proposed point cloud quality enhancement methodologies and highlight their contribution to improving the accuracy and reliability of 3D reconstruction for building digital twins of urban areas. By proposing a comprehensive workflow that improves the quality of MLS data and targets the accuracy of 3D reconstruction, this study establishes the groundwork for using MLS point clouds as a cost-effective data source for urban digital twins. Providing a high-precision, low-cost 3D database can significantly accelerate the development and implementation of urban digital twins.
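The cloud-to-cloud RMSE evaluation against a TLS reference reduces to nearest-neighbour distances between the two point clouds. A brute-force NumPy sketch on synthetic data (real MLS/TLS clouds would need a KD-tree for tractable point counts):

```python
import numpy as np

def cloud_to_cloud_rmse(source: np.ndarray, reference: np.ndarray) -> float:
    """RMSE of each source point's distance to its nearest reference point.

    source, reference: (N, 3) and (M, 3) arrays of XYZ coordinates.
    Brute force O(N*M); fine for a sketch, not for millions of points.
    """
    # Pairwise distance matrix, shape (N, M).
    d = np.linalg.norm(source[:, None, :] - reference[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))

rng = np.random.default_rng(0)
tls = rng.uniform(0, 10, size=(500, 3))          # stand-in TLS reference cloud
mls = tls[:200] + rng.normal(0, 0.05, (200, 3))  # noisy MLS subset of it
print(round(cloud_to_cloud_rmse(mls, tls), 3))
```

Running the same metric before and after denoising gives the improvement figure; the denoising rate additionally tracks how many points were removed as outliers.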
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: A.05.05 - POSTER - Tipping points and abrupt change in the Earth system

There are elements of the Earth system, including ecosystems, that can undergo rapid transition and reorganisation in response to small changes in forcings. This process is commonly known as crossing a tipping point. Such transitions may be abrupt and irreversible, and some could feed back on climate change, representing an uncertainty in projections of global warming. Their potentially severe outcomes at local scales - such as unprecedented weather, ecosystem loss, extreme temperatures and increased frequency of droughts and fires - may be particularly challenging for humans and other species to adapt to, worsening the risk that climate change poses. Combining satellite-based Earth Observation (EO) datasets with numerical model simulations is a promising avenue of research for investigating tipping elements, and a growing number of studies have applied tipping point theory to satellite time series to explore the changing resilience of tipping systems in the biosphere as an early warning indicator of an approaching tipping point. This session invites abstracts on tipping points and resilience studies based on or incorporating EO, as well as recommendations from modelling groups that can be taken up by the remote sensing community, for example on early warning signals, products needed for model assimilation or novel tipping systems to investigate further using EO.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Investigating Regime Shifts in Atlantic Sargassum

Authors: Brian Barnes, Chuanmin Hu, Yingjun Zhang, Deborah Goodwin, Amy Siuda, Jeffrey Schell
Affiliations: University Of South Florida, Eckerd College, Sea Education Association
Located in the subtropical North Atlantic Ocean, the Sargasso Sea (SS) draws its name from the floating brown macroalgae, Sargassum spp. Sargassum aggregations have been observed in the region for centuries, and are an integral component of the local biology and ecology - providing habitat for numerous marine species. In 2011, the footprint of Atlantic Sargassum increased to include a now-persistent population seasonally spanning the tropical North Atlantic, termed the Great Atlantic Sargassum Belt (GASB). As a result of this expansion, nearshore locations within the GASB domain now face devastating ecological and economic impacts when portions of this floating habitat inundate coastal environments. In this study, we investigate the formation of the GASB, as well as additional large-scale shifts in Atlantic Sargassum as observed in both satellite data and long-term in situ net tows. In particular, we document dramatic changes in the abundance and seasonality of SS Sargassum occurring since 2015. Both the satellite and in situ net tow data indicate a substantial decline in Sargassum abundance in the North SS during the fall / winter period, accompanied by an increase during spring / summer. Similarly, the abundance has also dramatically increased in the South SS, particularly during this spring / summer period. Notably, the timing of the SS Sargassum increase matches that of the GASB seasonality. However, the long-term in situ observations indicate that the Sargassum morphotype most commonly observed in the GASB is rarely found in the SS. As such, the changes in SS Sargassum distribution are not sufficiently explained by transport from the GASB alone, with internal dynamics also driving the seasonal and long-term abundance cycles in the SS. Disentangling the forcings underlying these regime shifts may improve predictions of Sargassum distribution. 
Additionally, understanding these changes may provide insight into future climate-related alterations in North Atlantic Sargassum and subsequent impacts to associated fauna and flora.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: El Niño-driven cascading effects on global ecosystem resilience

Authors: Xiaona Wang, Miguel Mahecha, Yongmei Huang, Chaonan Ji, Dongxing Wu, Xiuchen Wu
Affiliations: Leipzig University, Remote Sensing Centre for Earth System Research, Beijing Normal University, Faculty of Geographical Science, German Centre for Integrative Biodiversity Research (iDiv), Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Wuhan University, School of Resource and Environmental Sciences
El Niño-Southern Oscillation (ENSO), as a dominant driver of interannual natural climate variability, profoundly influences global weather and climate patterns, as well as terrestrial ecosystems. However, a quantitative determination of the cascading effects of El Niño on the dynamics of terrestrial ecosystems, required for understanding the imprints of large-scale climate variability on the Earth system, is still lacking. To address this, we constructed directed and weighted climate networks using near-surface air temperature and soil moisture, allowing us to systematically evaluate how El Niño-driven climate anomalies affect the dynamics of ecosystem resilience. We first identified the influence patterns of El Niño on the variations in air temperature and soil moisture. We found that El Niño produces a significant teleconnection pattern, characterized by increased anomalies in temperature and decreased anomalies in moisture. These effects are globally pervasive across terrestrial biomes and account for most of the global hotspots. During extreme El Niño phases, most terrestrial ecosystems experienced marked changes in their resilience. Furthermore, we quantified the cascading effects of El Niño on ecosystem resilience mediated by the variations in temperature and moisture. The cascading strength did not vary significantly with geographical distance, highlighting the global reach of these effects. The cascading processes were predominantly mediated by changes in soil moisture and air temperature, underscoring their pivotal roles in the loss of ecosystem resilience. Finally, we evaluated the global hotspots derived from state-of-the-art Earth system models (ESMs) under future scenarios. We found that El Niño-induced warming and drying global hotspots are expected to expand spatially in the future, potentially leading to a further decline in ecosystem resilience. 
This study is vital for improving the investigation and prediction of the imprints of El Niño-driven climate anomalies on ecosystem destabilization, and aims to advance understanding of the dynamic interactions among the natural components of the Earth system.
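Directed, weighted climate networks of the kind used in this study are commonly constructed from lagged cross-correlations between grid-point time series, with the lag that maximises the correlation setting the edge direction and its magnitude the weight. A minimal two-node sketch (illustrative; the authors' exact network construction is not specified in the abstract):

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])

def directed_edge(x, y, max_lag=10):
    """Return (weight, lag) for the lag maximising |corr(x(t), y(t+lag))|.

    A positive best lag is read as an x -> y link (x leads y).
    """
    lags = range(-max_lag, max_lag + 1)
    best = max(lags, key=lambda L: abs(lagged_corr(x, y, L)))
    return lagged_corr(x, y, best), best

rng = np.random.default_rng(1)
x = rng.normal(size=500)                          # e.g. a temperature anomaly series
y = np.roll(x, 3) + 0.1 * rng.normal(size=500)    # y lags x by 3 steps
w, lag = directed_edge(x, y)
print(lag)  # 3: x leads y, so the edge points x -> y
```

Repeating this over all grid-point pairs yields the adjacency structure on which cascading strength can then be measured against geographical distance.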
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Vegetation resilience: What does it mean, how can we measure it, and how can it change? Conceptual simulations with a complex dynamic vegetation model

Authors: Sebastian Bathiany, Lana Blaschke, Andreas Morr, Niklas Boers
Affiliations: Technical University of Munich, Potsdam Institute for Climate Impact Research, University of Exeter
There have been many recent studies that aim to estimate vegetation resilience and its changes over time from satellite data. Typically, they define resilience as the ability of vegetation to recover from externally induced perturbations like fires or droughts. It can in principle be measured quantitatively as the rate of recovery after such events. In simple dynamical systems, other indirect metrics can also diagnose resilience even in the absence of large perturbations. The most important of these metrics is autocorrelation. A loss of resilience over time ("slowing down") can thereby be detected as increasing autocorrelation. In simple dynamical systems, the resilience loss is also associated with an increasing sensitivity of a system’s stable state to external conditions. This is particularly meaningful in systems with catastrophic tipping points, where the stable state disappears at a critical parameter value. For example, there has been concern that the Amazon rainforest may be approaching such a tipping point due to global warming and deforestation. Recent studies, using the normalised difference vegetation index (NDVI) and/or vegetation optical depth (VOD), have shown that resilience seems to be higher in wet regions of tropical forests than in drier regions, and that resilience has been decreasing in vast parts of the Amazon rainforest. Observations also show that there is a relationship between autocorrelation and the empirical recovery rates after perturbations, which confirms the high practical relevance of theoretical expectations. However, it is still unclear which properties of the vegetation and which processes determine the observed autocorrelation, its spatial differences, and its trends over time. For example, different vegetation indicators and frequency bands capture different parts and properties of the vegetation, and the nature of empirical disturbances is often unknown. 
Our contribution discusses idealised simulations with the state-of-the-art dynamic vegetation model LPJmL to illuminate how the resilience of natural forests and its indicators can depend on (i) different climates, (ii) vegetation composition (mix of plant functional types), (iii) the vegetation property considered, and (iv) the nature of the perturbation(s). We find that autocorrelation is typically in good agreement with the recovery time from large negative perturbations that affect all combined tree types similarly. However, there are exceptions where any of the factors listed above can play a role. In these cases, recovery rates or autocorrelation do not necessarily agree with each other, nor with the forest’s sensitivity to climate change. In particular, perturbations that change the relative abundance of tree types can yield different recovery rates than perturbations affecting all tree types in the same way. Also, vegetation variables that recover quickly when perturbed on their own (e.g. fluxes like net primary productivity) can still co-evolve with slower variables they depend on (e.g. the carbon stored in trees). We will reveal important mechanisms causing these features in the model, and test their relevance by conducting simulations in a more realistic setup (i.e. by forcing the model with observed climate in a geographically realistic domain), and by discussing the relevance of these mechanisms in the real world. Our results remind us that in high-dimensional systems, there is only one autocorrelation for each variable, but many possible perturbations and hence resiliences, unless we are already very close to a tipping point. Our results also highlight the need to understand the nature of perturbations and trends (e.g. climate- or ecologically induced) in real ecosystems, and the mechanisms and properties captured by satellite-derived indicators. 
Such knowledge needs to be combined with improved resilience monitoring methods to allow us to draw reliable conclusions about the future response of ecosystems to human interferences.
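The lag-1 autocorrelation indicator central to this line of work is computed in a sliding window over a (deseasonalised) time series; a sustained rise signals critical slowing down. A minimal sketch on a synthetic AR(1) series whose memory parameter increases over time:

```python
import numpy as np

def lag1_autocorr(x: np.ndarray) -> float:
    """Lag-1 autocorrelation of a 1-D series."""
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

def rolling_ac1(x: np.ndarray, window: int) -> np.ndarray:
    """Lag-1 autocorrelation in a sliding window (one value per window end)."""
    return np.array([lag1_autocorr(x[i - window:i])
                     for i in range(window, len(x) + 1)])

# Synthetic AR(1): x_t = phi_t * x_{t-1} + noise, with phi ramping 0.2 -> 0.9,
# mimicking a gradual loss of resilience.
rng = np.random.default_rng(42)
n = 2000
phi = np.linspace(0.2, 0.9, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

ac = rolling_ac1(x, window=400)
print(ac[0] < ac[-1])  # autocorrelation rises as resilience is lost
```

On real NDVI or VOD data the series must first be detrended and deseasonalised, otherwise the seasonal cycle dominates the autocorrelation estimate.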
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: A.07.08 - POSTER - Global and regional water cycle in the integrated human-Earth system, estimation of hydrological variables and hyper-resolution modelling

Water in all three phases and its cycling through the Earth system are essential to weather, climate and climate change, and to life itself. The water cycle is closely coupled with the energy and carbon cycles. Over continents, the water cycle includes precipitation (related to clouds, aerosols, and atmospheric dynamics), water vapor divergence and changes in column water vapor in the atmosphere, land surface evapotranspiration, terrestrial water storage change (related to snowpack, surface and ground water, and soil moisture change), and river and groundwater discharge (which is linked to ocean salinity near the river mouth). Furthermore, the terrestrial water cycle is directly affected by human activities: land cover and land use change; agricultural, industrial, and municipal consumption of water; and construction of reservoirs, canals, and dams.

The EO for hydrology community is working towards datasets describing hydrological variables at a steadily increasing quality and spatial and temporal resolution. In parallel, water cycle and hydrological modellers are advancing towards “hyper-resolution” models, going towards 1 km resolution or even higher. In some cases such efforts are not just taking place in parallel but in collaboration. This session aims at presenting advances from each of the communities as well as demonstrating and promoting collaboration between the two communities.

Presentations are welcome that focus on at least one of the following areas:
- The global and regional water cycle and its coupling with the energy and carbon cycles in the integrated human-Earth system based on satellite remote sensing, supplemented by ground-based and airborne measurements as well as global and regional modeling
- New advances on the estimation of hydrological variables, e.g. evapo(transpi)ration, precipitation (note that there is another, dedicated session for soil moisture);
- Suitability of different EO-derived datasets to be used in hydrological models at different scales;
- Capacity of different models to take benefit from EO-derived datasets;
- Requirements on EO-derived datasets to be useful for modelling community (e.g. related to spatial or temporal resolution, quality or uncertainty information, independence or consistency of the EO-derived datasets, …);
- Downscaling techniques;
- Potential of data from future EO missions and of newest modelling and AI approaches (including hybrid approaches) to improve the characterisation and prediction of the water cycle.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Assessment of future EO mission needs for the study of the water cycle

Authors: Laura Soueidan, Dr. Vanessa Keuck, Dr. Armin Loescher, Dr. Craig Donlon
Affiliations: ESA
Earth Observation data has long been instrumental in advancing our understanding of the water cycle, with missions like SMOS, GRACE/GRACE-FO, ICESat-2 and SWOT enabling the estimation of key hydrological variables such as evapotranspiration, soil moisture, river discharge, and terrestrial water storage anomalies. From a science perspective, these datasets are critical for the study of the water cycle, especially in the context of climate change, where more frequent and extreme hydrological events are expected. High-resolution EO datasets are also becoming increasingly important for evaluating governments' compliance with public policies related to water resource management, climate adaptation, and sustainability. To address the evolving needs of the science community, an Earth Observation Reference Architecture is being developed as a standardized framework to support a European EO Ecosystem. This Reference Architecture provides a comprehensive set of design principles, guidelines, and best practices for creating a collaborative and flexible EO framework. It is designed to support the delivery of high-quality EO data with improved spatial and temporal resolution, accuracy, and rigorous uncertainty quantification. By facilitating interoperability across satellite constellations, ground segments and relevant non-space systems, it promotes holistic monitoring of the Earth system and its feedback loops. Scenario-based analyses form the basis of this work, providing insights into future Earth Observation requirements for hydrological science: by defining potential climate scenarios, we identify critical and supporting hydrological parameters, as well as knowledge and observation gaps, in order to derive the standards that future EO missions need to meet. Potential synergies between sensors and satellite constellations are also explored for their ability to enhance data quality, coverage and continuity. 
Finally, this study investigates the potential of the assimilation of high-resolution EO data within existing land surface models, improving the characterization and prediction of the water cycle, with a particular focus on extreme events like floods and droughts.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Improving River Network Accuracy Using Graph Neural Networks and Multi-Sensor Remote Sensing Data

Authors: Hamidreza Mosaffa, Prof Christel Prudhomme, Dr Matthew Chantry, Prof Liz Stephens, Prof Christoph Rüdiger, Dr Michel Wortmann, Prof Florian Pappenberger, Prof Hannah Cloke
Affiliations: Department of Meteorology, University of Reading, Department of Geography and Environmental Science, University of Reading, European Centre for Medium-Range Weather Forecasts (ECMWF), European Centre for Medium-Range Weather Forecasts (ECMWF)
Rivers are dynamic systems that evolve over multiple timescales, from slow meandering processes to rapid flood-induced changes. However, most river networks are represented as static maps derived from Digital Elevation Models (DEMs), often failing to capture critical braided river systems and artificial channels. This limitation poses challenges for accurate hydrological and hydraulic modelling, flood management, and water resource management. We propose to leverage Earth Observation data to refine river networks by using multi-temporal Sentinel-2 and Sentinel-1 SAR data at 30m resolution, which capture water bodies and flood extents across various flow regimes. By treating rivers as graph structures with nodes and edges, we use Graph Neural Network (GNN) models to identify and predict missing river connections (edges). We extract multiple features such as water extent, Normalized Difference Water Index (NDWI), Normalized Difference Vegetation Index (NDVI), flow direction, and flow accumulation over time, utilizing the GRIT river network dataset as a baseline for modifications. Through our methodology, potential nodes and edges are identified, and GNN algorithms—such as Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and GraphSAGE—are tested to predict the probability of missing edges. Validation is performed using OpenStreetMap (OSM) river data, ensuring the accuracy of the predicted network. Our case study focuses on Pakistan, a region characterized by extensive artificial channels and frequent flooding. The results demonstrate that our approach successfully identifies missing river segments, particularly artificial channels, and improves the completeness and accuracy of the river network. The promising outcomes of this study provide a scalable solution for global river network prediction and have significant implications for hydrological modelling, flood risk assessment, and water resource management.
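To make the graph formulation concrete, here is a minimal, illustrative sketch of GCN-style message passing with a dot-product edge decoder, in pure Python. The toy graph, feature values, and scoring are invented for illustration and are not the authors' implementation (which uses trained GCN/GAT/GraphSAGE models on Sentinel-derived features):

```python
# Illustrative only: one GCN-style propagation step plus a dot-product
# decoder for scoring candidate (missing) river edges. The toy graph and
# feature values are invented; no training is performed.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def gcn_layer(adj, X):
    """Mean aggregation over neighbours (with self-loops), the core GCN idea."""
    n = len(adj)
    A = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    A = [[v / sum(row) for v in row] for row in A]
    return matmul(A, X)

def edge_score(H, i, j):
    """Dot-product decoder: higher score suggests a more plausible edge."""
    return sum(a * b for a, b in zip(H[i], H[j]))

# Nodes 0-1-2 form a known channel; node 3 is a candidate junction.
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]
X = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.75, 0.25]]  # e.g. NDWI, flow accumulation
H = gcn_layer(adj, X)
print(round(edge_score(H, 2, 3), 3))
```

In practice the node features would come from the multi-temporal Sentinel-1/2 descriptors listed in the abstract, and edge probabilities would come from a trained link-prediction head rather than a raw dot product.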

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Using high-resolution precipitation product for characterizing and modeling flow behavior in karst environments

Authors: Vianney Sivelle
Affiliations: Université de Montpellier
Groundwater constitutes the hidden part of the water cycle, so assessing its contribution raises important challenges such as (i) describing the Groundwater–Surface Water (GW-SW) interactions and (ii) closing the water budget with meteorological/climate forcings. To this day, the contribution of karst groundwater to the continental water cycle is unknown and deserves to be properly characterized. Karst environments are of primary importance for world heritage, ecosystem development, as well as freshwater supply for around 9% of the world population. Characterizing and modelling flow processes in karst groundwater systems requires high spatial and temporal resolution of meteorological forcings due to (i) the small to meso-scale dimension of the recharge area (order of magnitude from 1 to 1000 km2), and (ii) the predominance of quick-flow processes, requiring infra-daily monitoring of environmental variables (e.g., spring discharge, piezometric head, physico-chemical parameters of karst water). Therefore, neither Land Surface Models (LSMs) nor climate models yet explicitly represent the contribution of karst groundwater systems in their estimation of the various components of the continental water cycle, while karst domains represent around 12% of the continental surface. The recent development of high-resolution precipitation products (1 km, 1 day) creates new opportunities for karst hydrology, taking advantage of a suitable spatial resolution to better assess the spatial heterogeneity of recharge processes. The latter may occur either as diffuse recharge or concentrated recharge, playing an important role in the overall flow behavior of karst groundwater systems. Consequently, characterizing the concentrated recharge following significant precipitation events is of prime importance to assess quick-flow processes, and hence both the transfer of potential pollution from the surface to groundwater and flood processes.
The present work aims to showcase some recent advances in karst hydrology, including (i) the evaluation of precipitation products for characterizing and modeling the flow behavior of karst environments and (ii) the use of satellite precipitation data to better constrain the meteorological forcing of hydrogeological modeling exercises.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Assessing uncertainty in WaPOR global evapotranspiration data: Insights from using triple collocation and in-situ measurements

Authors: Bich Tran, Dr. Solomon Seyoum, Dr. Johannes van der Kwast, Prof. Dr. Remko Uijlenhoet, Prof. Dr. Graham Jewitt, Dr. Marloes Mul
Affiliations: IHE Delft Institute for Water Education, Delft University of Technology
Evapotranspiration (ET) is a key process linking the water, energy, and carbon cycles of the Earth. Accurately estimating ET is essential for hydrological studies but remains challenging due to its complex, scale-dependent processes. Satellite remote sensing (RS) has been applied in several process-based models to estimate spatial ET, producing a range of continuously updated global data products (e.g., MODIS16, GLEAM, SSEBop, and WaPOR). Among these, the Food and Agriculture Organization of the United Nations (FAO)’s portal to monitor water productivity through open access of remotely sensed derived data (WaPOR) provides global ET data (WaPOR-ET) at a high spatial resolution (300 m) and 10-day temporal intervals. The availability of such hyper-resolution data at global coverage offers significant potential for many hydrological and agricultural applications. However, comprehensive information on the quality or uncertainty of WaPOR-ET remains limited, posing a challenge for its assimilation into hydrological models. The most common method for assessing RS-ET uncertainty involves direct comparison with ET estimates from eddy covariance (EC) measurements. While valuable, this approach is constrained by the sparse distribution of EC sites in many regions and inherent uncertainties, including energy balance closure and flux footprint issues (Tran et al., 2023). To address these limitations, this study evaluated WaPOR version 3 global ET data by direct comparison with EC measurements from FLUXNET regional networks, accounting for EC uncertainties. We analyzed WaPOR-ET uncertainty across land cover types, climate regions, and elevation ranges. In addition, we compared direct comparison with triple collocation analysis using multiple high-resolution (30 m) ET models from the OpenET project over the contiguous United States (Volk et al., 2024).
Using the extended triple collocation method, we characterized uncertainty information spatially and examined how different triplet combinations affected the results. Our findings show good agreement between the two approaches at perennial cropland sites. Meanwhile, results for seasonal cropland, forest, grassland, and shrubland sites varied greatly depending on the triplets used. In general, triplets combining WaPOR (a two-source Penman-Monteith model) with one-source surface energy balance models showed greater divergence from EC-based uncertainty estimates. No single triplet consistently aligned with direct comparison results across all land cover types. These results highlight the capabilities and limitations of uncertainty assessment methods and contribute to the roadmap of quality assessment for RS-ET products, which helps address the uncertainty requirements of the hydrological modelling community.
References:
Tran, B.N., Van Der Kwast, J., Seyoum, S., Uijlenhoet, R., Jewitt, G. and Mul, M., 2023. Uncertainty assessment of satellite remote-sensing-based evapotranspiration estimates: a systematic review of methods and gaps. Hydrology and Earth System Sciences, 27(24), pp.4505-4528.
Volk, J.M., Huntington, J.L., Melton, F.S., Allen, R., Anderson, M., Fisher, J.B., Kilic, A., Ruhoff, A., Senay, G.B., Minor, B. and Morton, C., 2024. Assessing the accuracy of OpenET satellite-based evapotranspiration data to support water resource and land management applications. Nature Water, 2(2), pp.193-205.
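For readers unfamiliar with the technique, the classical triple collocation estimator underlying this kind of analysis can be sketched as follows. This is a simplified illustration with synthetic data, not the study's extended-TC implementation:

```python
import random

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def triple_collocation(x, y, z):
    """Classical TC error-variance estimates, assuming the three products
    observe the same truth with independent, zero-mean errors."""
    ex2 = cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z)
    ey2 = cov(y, y) - cov(y, x) * cov(y, z) / cov(x, z)
    ez2 = cov(z, z) - cov(z, x) * cov(z, y) / cov(x, y)
    return ex2, ey2, ez2

random.seed(0)
truth = [random.gauss(3.0, 1.0) for _ in range(20000)]   # synthetic "true" ET, mm/day
x = [t + random.gauss(0, 0.3) for t in truth]            # e.g. a WaPOR-like product
y = [t + random.gauss(0, 0.5) for t in truth]            # e.g. an OpenET model
z = [t + random.gauss(0, 0.7) for t in truth]            # e.g. an EC-based estimate
ex2, ey2, ez2 = triple_collocation(x, y, z)
print(ex2, ey2, ez2)  # should approach the true error variances 0.09, 0.25, 0.49
```

The extended TC variant used in the study additionally resolves multiplicative calibration differences between products; the independence assumption is exactly why the choice of triplet matters, as the abstract reports.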

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Surface Temperature and Soil Moisture Estimates Across Cropland and Agroforestry: UAV-borne Imagery and Ground Sensors Synergy

Authors: Jiri Rous, Jan Komarek
Affiliations: Department of Spatial Sciences, Faculty of Environmental Science, Czech University Life Sciences Prague, Kamýcká 129, Praha – Suchdol 165 00, Czech Republic
Challenge: Long-term environmental monitoring is essential for understanding soil-vegetation-atmosphere interactions in agroforestry systems, where tree strips alternate with crop fields and soil moisture and temperature vary significantly, driven by complex microclimatic and ecological processes. Unmanned Aerial Vehicles (UAVs) with thermal and multispectral sensors offer high spatial resolution for capturing these dynamics, but their deployment faces critical challenges. Atmospheric influences, particularly humidity and wind, distort sensor readings, complicating data accuracy. Applying correction methods is necessary to address these issues and extract reliable information. The goal is to develop a reliable identification key for soil moisture and temperature estimation using UAV sensors, enabling precise and scalable monitoring. Achieving this would support improved land management, water efficiency, and deeper insights into the functioning of integrated landscapes.
Methodology: Following lab calibration, seventeen Tomst TMS data loggers were deployed, seven in forested and ten in agricultural strips. Monthly UAV flights at 300 m above ground height were conducted using senseFly DuetT and Micasense RedEdge MX sensors. The imagery was captured around solar noon, processed in image-matching software, georeferenced using ground control points (GCPs), and radiometrically calibrated. The first dataset includes temperature measurements at three levels (14 cm below ground, on the ground, and 14 cm above ground) and soil moisture data. The second dataset comprises thermal and multispectral UAV mosaics, created monthly from July 2023 to October 2024, focusing on July to September 2023 and May to September 2024. Supplementary meteorological data—wind speed, humidity, temperature, and precipitation—from a nearby station enhance the dataset. The study area is a 1.42 ha strip pattern of forested and agricultural land within the Amalie Smart Landscape (CZU).
A Generalized Additive Model (GAM) was employed to investigate relationships between UAV-based temperature, strip type, and meteorological factors. Multispectral data, particularly the NIR band, were used to identify vegetation (crops, trees, or shrubs), a key determinant of temperature variability. RMSE analysis was performed to compare UAV temperatures with ground-sensor measurements, with and without adjustment for humidity effects.
Expected results: Initial analysis explored correlations between soil moisture and ground-level temperatures, revealing coefficients of -0.39 (ground) and -0.45 (below ground). These findings informed the decision to integrate both UAV sensors. Preliminary findings suggest that UAV-estimated temperatures correlate best with below-ground sensors, yielding an RMSE of 2.86 °C. Unexpectedly, above-ground temperatures exhibited higher RMSE values (11.2 °C), prompting further investigation into sensor calibration and environmental influences. Humidity correction significantly improved agreement between above-ground sensor data and UAV temperatures, reducing the RMSE to 4 °C. GAM results indicate that vegetation presence, rather than its type or height, drives the temperature variations detected by the UAV, highlighting the dominant role of canopy cover in moderating soil and surface temperatures through shading and evapotranspiration.
Outlook for the future: This research demonstrates the potential of UAV-based thermal and multispectral imaging for soil parameter estimation but also reveals significant challenges in aligning UAV and ground-based measurements. These insights underscore the complexity of synergizing UAV-based estimations with in-situ data and highlight the need for robust correction factors to account for environmental variability. Future work will focus on refining humidity correction models, expanding the analysis to include seasonal trends, and exploring machine learning techniques for enhanced prediction accuracy.
Additionally, further calibration and validation of UAV sensors will aim to reduce discrepancies with in-situ data. By advancing UAV-enabled environmental monitoring, this study contributes to scalable and non-invasive approaches for understanding landscape-level soil and vegetation dynamics.
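The RMSE comparison between UAV and ground-sensor temperatures reduces to a simple computation, sketched below; all temperature values are invented and the "humidity-corrected" series is purely hypothetical, shown only to illustrate how a correction can shrink the error metric:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between two equally long series."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

# Synthetic UAV vs. above-ground sensor temperatures (deg C), before and
# after a hypothetical humidity correction (values invented for illustration):
uav_raw       = [30.0, 35.0, 41.0, 28.0]
uav_corrected = [24.0, 27.5, 31.0, 22.5]
ground        = [23.0, 26.0, 30.0, 22.0]
print(round(rmse(uav_raw, ground), 2), round(rmse(uav_corrected, ground), 2))
```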

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: The new HydroSHEDS v2.0 database derived from the TanDEM-X DEM

Authors: Carolin Keller, Leena Warmedinger, Larissa Gorzawski, Martin Huber, Bernhard Lehner, Günther Grill, Michele Thieme, Birgit Wessel, Achim Roth
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center, Company for Remote Sensing and Environmental Research (SLU), McGill University, Department of Geography, Confluvio Consulting Inc., World Wildlife Fund
The increased availability and accuracy of recent remote sensing data accelerates the development of high-quality data products for hydrological modelling. Accurate representation of the Earth's surface, including all water-related features, is crucial for simulating runoff and other hydrological processes. In this contribution we introduce HydroSHEDS v2.0, the second and refined version of the well-established HydroSHEDS dataset. It provides global, seamless, high-resolution hydrographic information and is developed through an international collaboration involving the German Aerospace Center (DLR), McGill University, Confluvio Consulting, and World Wildlife Fund. HydroSHEDS v2.0 builds on the TanDEM-X mission's digital elevation model (DEM) to offer enhanced accuracy and expanded geographic coverage compared to its predecessor. While the first HydroSHEDS version relied on the Shuttle Radar Topography Mission (SRTM) DEM, HydroSHEDS v2.0 benefits from the TanDEM-X DEM, which provides a higher resolution of 0.4 arc-seconds globally and includes regions beyond 60°N latitude, previously uncovered by SRTM. Advanced pre-processing techniques ensure that HydroSHEDS v2.0 preserves the high-resolution details of the TanDEM-X DEM. These techniques include the generation of a global inland water mask and its use for filling invalid and unreliable DEM areas, delineating global coastlines with manual quality control, and reducing distortions caused by vegetation and urban areas. A sequence of automated hydrological conditioning steps further refines the DEM, incorporating void filling, outlier correction, and algorithms to optimize hydrological consistency. Finally, extensive manual corrections using various ancillary data sources improve river network delineation in areas where high uncertainties exist for DEM-derived products, such as areas with flat terrain or anthropogenically modified landscapes.
The resulting hydrologically conditioned DEM has a resolution of 1 arc-second and ensures accurate derivation of hydrologic flow connections, forming the basis for core products such as flow direction and flow accumulation maps. In the final HydroSHEDS product, these gridded datasets will be complemented by secondary vector-based information on river networks, nested catchment boundaries, and associated hydro-environmental attributes. Together, these products create a standardized, multi-scale database in the same structure and format as the original version and support applications ranging from local to global scales. In our presentation we will give an overview of the production and present a demonstration of the novel data products and the pre-processing workflow for selected test sites. The new HydroSHEDS v2.0 dataset offers a consistent and easy-to-use framework for hydrological and hydro-ecological research. The main release, scheduled to start in 2025 under a free license, will provide researchers and practitioners with a robust tool for diverse applications. The HydroSHEDS v2.0 dataset will be available at www.hydrosheds.org.
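As background, flow direction and flow accumulation products of this kind follow a standard steepest-descent logic, sketched here on a toy 1-D elevation profile; the real HydroSHEDS processing operates on 2-D global grids with D8 directions and the extensive conditioning described above:

```python
# Toy illustration of the two core DEM derivatives: flow direction
# (steepest-descent neighbour) and flow accumulation (upstream cell count).
# A 1-D profile keeps it short; real products use 2-D D8 grids.

def flow_direction(dem):
    """For each cell, index of the lowest lower neighbour, or None at a sink."""
    d = []
    for i, z in enumerate(dem):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(dem) and dem[j] < z]
        d.append(min(nbrs, key=lambda j: dem[j]) if nbrs else None)
    return d

def flow_accumulation(dem, direction):
    """Number of cells draining through each cell (including itself)."""
    acc = [1] * len(direction)
    # Visit cells from highest to lowest so upstream totals are final first.
    for i in sorted(range(len(direction)), key=lambda i: -dem[i]):
        if direction[i] is not None:
            acc[direction[i]] += acc[i]
    return acc

dem = [5.0, 4.0, 3.0, 2.0, 3.5]      # a small valley draining to index 3
fdir = flow_direction(dem)
facc = flow_accumulation(dem, fdir)
print(fdir, facc)
```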

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Sensitivity of Sentinel-1 σ0 backscattering to crop phenology and row orientation in irrigated fields

Authors: Martina Natali, Prof.dr.ir. Susan Steele-Dunne, Sara Modanesi, Gabrielle De Lannoy, Alessio Domeneghetti, Christian Massari
Affiliations: CNR-IRPI, Department of Civil and Environmental Engineering, University of Perugia, Department of Geosciences and Remote Sensing , Faculty of Civil Engineering and Geosciences, TU Delft, Department of Earth and Environmental Sciences, KU Leuven, Department of Civil, Chemical, Environmental and Materials Engineering, Alma Mater Studiorum - University of Bologna
In recent years, the availability of high-resolution satellite remote sensing observations has led to an increasing number of applications at very high spatial resolutions. In hydrological modeling, precision agriculture and irrigation applications, resolutions of about 1 km or below are relevant to account for the high spatial variability of soil moisture and provide more accurate estimates of the components of the water cycle. Synthetic aperture radar (SAR) platforms such as Sentinel-1 provide high-resolution backscattering observations (~20 m) which are not hampered by clouds and the atmosphere and are used to estimate soil moisture in all weather conditions via retrieval algorithms or data assimilation in land surface models. However, σ0 values are sensitive to surface roughness, vegetation water content, plant structure and, in agricultural areas, also to the orientation of field rows. In regions characterized by high spatial heterogeneity of plots with different landcover and crop types, retrievals of bio-geophysical quantities with methods that do not distinguish between parcels may be affected by uncertainties which remain poorly investigated. In this study we explored the behavior of Sentinel-1 σ0 data over several fields and crops in irrigated agricultural areas in northern Italy. Fields are either arable land, orchards or vineyards, with areas from ~1 ha to ~20 ha. The study areas are covered by several Sentinel-1 orbits with local incidence angles varying from 30° to 45°. For each individual parcel we evaluated the sensitivity of σ0 values and their variance with respect to vegetation indices such as NDVI from Sentinel-2, and we explored their potential use for early-season crop classification. Furthermore, we estimated the bias on the mean value of σ0 due to different field row orientations for different observational geometries.
To account for differences in the mean incidence angle of the orbits we applied an angle-based bias removal procedure to the individual fields, and we assessed its impact on the above-mentioned experiments with respect to results obtained with the original biased data. Quantifying these effects on backscattering, which are usually not accounted for at the 1 km scale, has potential applications in row orientation identification and early-season crop classification, and can help to better estimate the uncertainty in soil moisture, irrigation and vegetation parameter retrieval over intensively cultivated and heterogeneous areas, contributing to precision agriculture applications and water resource management.
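As an illustration of angle-based bias removal, the textbook cosine-power normalization of σ0 to a reference incidence angle can be sketched as below. The exponent, reference angle, and σ0 values are assumptions for the sketch; the study's actual procedure may differ:

```python
import math

# Hedged sketch: refer sigma0 (in dB) observed at incidence angle theta to a
# common reference angle using the widely used cosine-power law
#   sigma0_ref = sigma0 * (cos(theta_ref) / cos(theta))**n   (linear units),
# i.e. an additive correction in dB. n=2 and theta_ref=37.5 deg are assumed.

def normalize_sigma0(sigma0_db, theta_deg, theta_ref_deg=37.5, n=2.0):
    corr = n * 10.0 * (math.log10(math.cos(math.radians(theta_ref_deg)))
                       - math.log10(math.cos(math.radians(theta_deg))))
    return sigma0_db + corr

# The same field seen from two orbits, at 30 deg and 45 deg incidence:
print(round(normalize_sigma0(-10.0, 30.0), 2))
print(round(normalize_sigma0(-12.0, 45.0), 2))
```

After normalization, σ0 values from different orbits become directly comparable, which is the purpose of the bias-removal step described in the abstract.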

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: The LSA SAF evapotranspiration and surface energy fluxes in drought monitoring across the field of view of the Meteosat Second Generation satellite

Authors: José-Miguel Barrios, Alirio Arboleda, Jan De Pue, Françoise Gellens-Meulenberghs
Affiliations: Royal Meteorological Institute Of Belgium
Alterations to normal weather patterns have been reported from virtually all regions of the world in recent years. Such alterations occur often (but not exclusively) in the form of anomalies in the intensity and frequency of precipitation and/or increased evaporative demand (due to higher temperatures). These anomalies in precipitation and temperature patterns may lead to droughts, posing multiple challenges to socio-economic development and ecosystem functioning. Evapotranspiration impacts the humidity conditions at the Earth’s surface and drives the partitioning of the net incoming radiation into the latent (LE) and sensible (H) heat fluxes returning to the atmosphere. Dry conditions result in a higher weight of H in the partitioning of outgoing surface heat fluxes and produce higher temperatures. Conversely, humid conditions at the surface yield a higher LE to H ratio, which entails a cooling effect. In consequence, the analysis of the energy partitioning at the surface can be informative about the humidity conditions in time and space and, therefore, useful in drought monitoring. This study explored the relationship between LE and H in time and space and its potential to detect the extent and intensity of abnormally dry conditions. The analyzed metric was the evaporative fraction (EF), and the data source was the near-real-time LE and H estimates generated in the frame of the LSA SAF operational service (https://lsa-saf.eumetsat.int) for Europe, Africa and Eastern South America. The LSA SAF evapotranspiration and surface energy fluxes are largely based on observations by the Meteosat Second Generation (MSG) satellite, in addition to meteorological fields and ancillary datasets. The LSA SAF data are generated and made available in near-real time and cover the period from 2004 to the present day.
The study analyzed the drought occurrences in recent years by computing the anomaly in EF with respect to statistical aggregates derived from the LSA SAF LE and H in the period between 2004 and 2020; i.e. the first 17 years of operations of the MSG satellite (Barrios et al., 2024). Anomalous EF conditions detected in the analysis were contrasted to drought reports derived from commonly used drought indicators for the analyzed period. The study revealed the sensitivity of the LSA SAF LE and H estimates to droughts when processed in the form of EF anomalies. A significant degree of correspondence to drought events reported for the study period (for instance, the extended drought in Europe in 2022) was observed when compared to reports from other sources. The relevance of this finding is related to the timeliness of the LSA SAF products (near-real time) and suggests the potential of this dataset to support drought monitoring across the field of view of the MSG satellite.
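The evaporative fraction diagnostic at the core of this analysis reduces to a simple computation, EF = LE / (LE + H), compared against a per-pixel, per-period climatology. The sketch below uses invented flux and climatology values, not LSA SAF data:

```python
# Illustrative EF-anomaly computation; all numbers are invented.

def evaporative_fraction(LE, H):
    """EF = LE / (LE + H): the latent-heat share of the turbulent fluxes."""
    return LE / (LE + H)

def ef_anomaly(ef, clim_mean, clim_std):
    """Standardised anomaly: strongly negative values suggest drought."""
    return (ef - clim_mean) / clim_std

# Hypothetical climatology for one pixel and period (e.g. built from 2004-2020):
clim_mean, clim_std = 0.55, 0.08
ef_now = evaporative_fraction(LE=60.0, H=140.0)   # W m-2, a dry situation
print(round(ef_now, 2), round(ef_anomaly(ef_now, clim_mean, clim_std), 2))
```

A low LE relative to H pushes EF down, and the standardized anomaly quantifies how unusual that dryness is for the location and time of year, which is how the abstract's EF anomalies flag drought conditions.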

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Evaluating Water and Energy Fluxes Using ECOSTRESS LST Imagery: Validation Against the ICOS’ Warm Winter 2020 Database

Authors: Héctor Nieto, Vicente Burchard-Levine, Benjamin Mary, Miguel Ángel Herrezuelo, Radoslaw Guzinski
Affiliations: CSIC, DHI
The assessment of evapotranspiration (ET) at a reasonable accuracy is crucial to reliably monitor and manage irrigation and fresh water resources. In recent ESA projects (Sen-ET/ET4FAO) and publications, we showed that merging Sentinel-2 shortwave with Sentinel-3 thermal capabilities has proven useful for field-scale monitoring of ET at regional and national levels. However, the lack of an operational thermal mission with high spatial resolution (<100 m) and frequent revisit time (<1 week) still poses some limitations for water use management. Both limitations should be addressed by upcoming thermal missions such as the Copernicus Land Surface Temperature Mission (LSTM). In order to support the upscaling and transfer of products and services to these future satellite missions, several initiatives have been promoted by ESA that will pave the way for downstreaming future operational applications to end users and stakeholders. These initiatives aim to use datasets that could approximate the information captured by future missions. In the particular case of ET and water resource monitoring, the ECOSTRESS mission, onboard the International Space Station, is key as it can provide robust measurements of Land Surface Temperature at high spatial and temporal resolutions, and with variable overpass times. This study was performed in the scope of ESA’s EO MAJI and MULTIWATER projects, in which we are evaluating the performance of the TSEB modelling framework using ECOSTRESS LST, with additional inputs generated using the methodology inspired by the Sen-ET and ET4FAO projects. Firstly, biophysical traits are derived using Sentinel-2 imagery under a hybrid model inversion of ProspectD+4SAIL, including the derivation of total and green LAI.
Then, weather forcing from ERA5 is topographically corrected and brought to a blending height of 100 m above ground, while canopy height for forests is derived from the 2019 GEDI Global Forest Canopy Height product, and other ancillary canopy parameters in TSEB, such as fractional cover for clumped vegetation and effective leaf width, are derived from a Look-Up-Table based on IGBP land cover type. Furthermore, we used the ICOS Warm Winter 2020 database as the validation dataset, since it covers up to 43 sites over the ECOSTRESS spatial extent, in order to include a large number of water-limited cases and varying biomes in the evaluation. The results presented here show that TSEB produced reasonable results, with an RMSE of 79 W m⁻² in λE (Pearson r=0.8) and less than 1 mm error for daily ET. Furthermore, this approach proved to be robust in semi-arid and water-limited conditions such as savannas and open shrublands, in which the model showed no significant decrease in performance with reducing soil water content and increasing climatic aridity.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Digital Twin Earth Hydrology precipitation: overcoming single products limitations

Authors: PhD Paolo Filippucci, PhD Luca Ciabatta, PhD Christian Massari, PhD Luca Brocca
Affiliations: Istituto di Ricerca per la Protezione Idrogeologica (IRPI), Consiglio Nazionale delle Ricerche (CNR)
In recent years, the European Union (EU) Green Deal and the EU Data Strategy have called for the development of Digital Twins of the Earth (DTE) to integrate the latest advancements in Earth Observation (EO) systems, models, AI and computing capacities. These digital models are necessary to visualize, monitor and forecast natural and human activities on the planet, supporting sustainable development and mitigating ongoing climate change impacts. In this context, the European Space Agency (ESA) proposed the DTE Hydrology project, which focuses specifically on the water cycle, hydrology and its different applications. Within this project, accurate, high-resolution (1 km, daily) data for key variables of the water cycle are collected to simulate the water cycle, hydrological processes, and their interactions with human activities. Among these variables, precipitation is of paramount importance due to its impact on agriculture, water resource management, socio-economic development and disaster mitigation. However, in-situ monitoring stations are declining globally and are insufficiently dense in most countries to provide adequate data. Satellite-based precipitation estimates are hence crucial to bridge the spatial and temporal data gaps affecting such regions. To address this, DTE Hydrology precipitation measurements are derived from various EO satellite sources and approaches, which are merged with reanalysis data to create an optimal product which overcomes the limitations of the individual datasets. Specifically, precipitation information from IMERG Late Run, SM2RAIN-ASCAT (H SAF) and ERA5-Land is first downscaled and then merged. The downscaling process leverages high-resolution ancillary information on precipitation spatial variability obtained from the Climatologies at High resolution for the Earth’s Land Surface Areas (CHELSA) climate dataset, while the merging weights are derived using Triple Collocation.
The resulting product was assessed through comparison with multiple datasets, including coarse-resolution ones such as H SAF, IMERG-LR, ERA5, E-OBS, PERSIANN, CHIRP and GSMaP, as well as high-resolution products like EMO, INCA, SAIH, COMEPHORE, MCM and 4DMED, confirming its strong performance.
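As a schematic of the merging step, a least-squares combination weights each downscaled product by the inverse of its triple-collocation error variance. All numbers below are invented for illustration and do not come from the actual DTE Hydrology processing:

```python
# Illustrative inverse-error-variance merging of three precipitation products.

def merge_weights(err_vars):
    """Least-squares weights proportional to 1/error-variance, summing to 1."""
    inv = [1.0 / v for v in err_vars]
    s = sum(inv)
    return [w / s for w in inv]

# Hypothetical TC error variances for three products (e.g. a satellite
# product, a soil-moisture-derived product, and a reanalysis):
weights = merge_weights([1.0, 2.0, 4.0])
estimates = [5.0, 6.0, 8.0]   # mm/day at one 1 km pixel, invented
merged = sum(w * p for w, p in zip(weights, estimates))
print([round(w, 3) for w in weights], round(merged, 3))
```

The more reliable a product is judged by triple collocation, the more it pulls the merged estimate toward its own value, which is how the merged product can outperform each individual dataset.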

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Precipitation rate estimation from SWOT: a pixel-wise data-driven approach using random forest with boosting

Authors: Aurélien Colin, Romain Husson, Bruno Picard
Affiliations: Collecte Localisation Satellite, Fluctus
Precipitation is a key component of the Earth’s hydrological cycle, influencing water resource management, agriculture, and disaster risk mitigation. Accurate rainfall estimation is vital for weather forecasting, flood prediction, and climate modeling. It is also of tremendous importance for remote sensing applications, as rainfall often interferes with other phenomena of interest, leading to misinterpretation of observations, such as the noise introduced by rainfall in wind estimation from SAR imagery. While ground-based systems like NEXRAD provide detailed precipitation data, their coverage is limited, particularly over oceans. Satellite missions, such as the Surface Water and Ocean Topography (SWOT) mission, offer global coverage and open possibilities for rainfall estimation over the ocean. Though SWOT’s primary mission is to measure surface water and ocean topography, its Ka-band Radar Interferometer (KaRIn) can potentially provide indirect precipitation data. This study explores the use of SWOT’s radar measurements for rainfall estimation. By collocating SWOT radar data with ground-based NEXRAD observations, an ensemble learning algorithm, XGBoost, is trained to estimate precipitation rates. Estimates are performed pixel-wise at a resolution of 2 km/pixel, using a set of features that are either local (e.g., the Normalized Radar Cross Section, NRCS) or computed over patches (e.g., the first four moments of the NRCS distribution). The input features also include a wind speed prior obtained from an atmospheric model. Since the resulting model struggles to reproduce the extremes of the precipitation rate distribution, a quantile-mapping post-processing step is applied to ensure accurate predictions for both low and high precipitation rates. Given the exponential decrease in the proportion of pixels with increasing precipitation rates, the model is evaluated using the Pearson Correlation Coefficient (PCC) of the logarithm of precipitation rates.
This approach accounts for both low and high precipitation values. The PCC reaches 52.8%, which is on par with the correlation between two NEXRAD systems observing the same areas (52.9%). From a classification perspective, considering three categories based on thresholds of 1 mm/h and 10 mm/h, the model achieves an accuracy of 44.8% for the [1, 10] mm/h category, with a non-detection probability of 51.1% and an overestimation probability of 4.1%. In comparison, the corresponding results for two NEXRAD systems observing the same area are 53.3%, 42.7%, and 4.0%, respectively. Collocations of SWOT with observation systems sensitive to precipitation rates currently remain limited, as the satellite was launched only two years ago; however, the increasing availability of data is expected to further enhance the performance of the rainfall detection system. Since observations are acquired globally, SWOT has the potential to serve as an additional source of information for hydrographic studies.
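The quantile-mapping step described in the abstract can be sketched as empirical CDF matching between model output and reference observations. The following is a minimal illustration, not the authors' implementation; the function name and toy data are hypothetical.

```python
import numpy as np

def quantile_map(predictions, ref_model, ref_obs):
    """Map predictions through the empirical model-to-observation quantile relation.

    ref_model: model outputs on a calibration set
    ref_obs:   collocated reference observations (e.g. NEXRAD rain rates)
    """
    # Empirical quantile of each prediction within the calibration model output
    q = np.searchsorted(np.sort(ref_model), predictions) / len(ref_model)
    # Read the same quantile off the observed distribution
    return np.quantile(ref_obs, np.clip(q, 0.0, 1.0))

# Toy example: a "model" that compresses the rain-rate distribution by half
rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.5, scale=4.0, size=5000)   # skewed, rain-rate-like
raw = obs / 2.0                                    # biased model output
corrected = quantile_map(raw, ref_model=raw, ref_obs=obs)
```

After the mapping, the corrected values recover the spread of the observed distribution, including the heavy tail that the raw output compressed.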

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Towards an updated ESA Earth System Model: Showcasing the Improvements in the Hydrological Model of LISFLOOD on the Example of Central Asia around Lake Issyk-Kul

Authors: Eva Boergens, Laura Jensen, Robert Dill, Tilo Schöne, Alexander V. Zubovich, Linus Shihora, Henryk Dobslaw
Affiliations: GFZ Helmholtz Centre For Geosciences, Central-Asian Institute for Applied Geosciences (CAIAG)
The ESA Earth System Model (ESA-ESM) is a synthetic model of the time-variable gravity field of the Earth. ESA-ESM consistently combines models of mass transport in hydrology, oceans, atmosphere, ice, and the solid Earth. The current version was published in 2015 with model input that was up to date at the time. The hydrology in the current ESA-ESM version is modelled by the Land Surface Discharge Model (LSDM) but will be replaced by the hydrological model OS LISFLOOD in the new ESA-ESM version 3.0. Here, we test and validate the hydrological update of ESA-ESM in a test region characterized by hydrological processes that are challenging to model, as well as by good observational coverage. Western and Central Asia contain the largest endorheic region, including the Caspian Sea basin, the Tarim Basin, the Central Asian Internal Drainage basin, and the Lake Issyk-Kul basin. Although Lake Issyk-Kul (Kyrgyzstan) lies at 1600 m elevation in the Tianshan Mountains, it does not freeze over in the winter months. The hydrology of the region is dominated by the storage in several large to medium-sized endorheic lakes (e.g., Lake Balkhash, ~400 km to the north), artificial reservoirs (e.g., Kapshagay Reservoir, ~150 km to the north), and the snow cover during the winter months. In addition, melting glaciers play a major role in the region’s hydrology. The lake and its surrounding mountains are well observed with in-situ stations maintained in cooperation between the Central-Asian Institute for Applied Geosciences (CAIAG), Kyrgyzstan, and the GFZ Helmholtz Centre for Geosciences, Germany. The in-situ observations and the ice-free winters have already made Lake Issyk-Kul an ideal test site for calibrating satellite altimetry. Snow and glacier storage, endorheic lakes, and reservoirs with unpublished discharge rates all pose difficulties for hydrological modelling, which makes the Issyk-Kul region a suitable test region for new hydrological model developments.
Compared to the previously used LSDM, OS LISFLOOD produces more realistic terrestrial water storage estimates and offers several further advantages. OS LISFLOOD is an open-source project developed by the Joint Research Centre of the European Commission. The current version of OS LISFLOOD runs at a spatial resolution of 0.05°, compared to the 0.5° resolution of LSDM. Surface water storage of lakes and reservoirs is more realistic due to a significantly larger number of lakes and reservoirs included in the model (currently, globally, 463 lakes and 667 reservoirs in OS LISFLOOD vs. 28 lakes and one reservoir in LSDM). A further advantage of OS LISFLOOD is the newly developed inclusion of endorheic lakes. Around 18% of the land surface drains into endorheic lakes, so their consideration is a large step toward more realistic storage estimates. In contrast to LSDM, OS LISFLOOD also models anthropogenic water use for domestic, industrial, and livestock consumption and for irrigation, although the latter plays a minor role in the northern part of the test region. We investigate how these advances in hydrological modelling influence the performance of the ESA-ESM in the test region.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Low-Rank Matrix Completion for Denoising, Gap-Filling, and Temporal Extension of Hydro-Variable Time Series.

Authors: Dr. Karim Douch, Dr. Peyman Saemian
Affiliations: ESA ESRIN, GIS, University of Stuttgart
Over the past two decades, the GRACE and GRACE-FO missions have revolutionized terrestrial water cycle monitoring by introducing a novel independent observable: monthly terrestrial water storage anomalies (ΔS). However, these time series face limitations, including observation gaps, significant errors, and insufficient length for robust climatic studies. In parallel, the proliferation of Earth observation data and advancements in computational modelling have led to numerous hydrological products estimating variables such as precipitation (P) and evaporation (E). Despite their utility, these products often exhibit substantial discrepancies due to their model-dependent nature. Consequently, practitioners aiming to analyse regional hydrological trends must carefully select datasets based on factors like spatial resolution, temporal coverage, and, more often than not, their ability to achieve water balance closure at the basin scale, a persistent challenge in hydrological studies. To address these issues, we propose a statistical approach grounded in low-rank matrix approximation and completion. This method enables simultaneous data imputation, back-extension of the GRACE(-FO) time series, and denoising of P, E, and ΔS products, along with in-situ discharge measurements. The core concept of the proposed algorithm is that these four quantities can be represented by only three empirical functions by virtue of mass conservation. Consequently, a matrix comprising multiple estimates of these variables should be well reconstructed by a low-rank representation. In this study, we applied our approach and conducted extensive numerical analyses on 46 river basins worldwide, utilizing five precipitation products along with four evaporation and four TWS datasets spanning 1995–2022. We evaluated the impact of matrix rank selection and of discharge time series gaps on imputation accuracy.
Additionally, we explored the benefits of embedding these time series into a Hankel matrix to incorporate temporal autocorrelation. Our findings demonstrate that a rank-3 or rank-4 matrix strikes an optimal balance between data fitting and extrapolation, reducing the average water balance misclosure by at least 30%, even during periods requiring data imputation. This approach offers a robust framework for improving the accuracy and usability of hydrological datasets in basin-scale studies.
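The core idea of low-rank completion can be sketched with a simple "hard impute" loop: fill the missing entries, project onto the best rank-r approximation via truncated SVD, restore the observed entries, and repeat. This is an illustrative sketch, not the authors' algorithm; the function and toy matrix are hypothetical.

```python
import numpy as np

def low_rank_complete(X, rank, n_iter=500):
    """Impute NaN entries of X by iterating a truncated-SVD ("hard impute") projection."""
    missing = np.isnan(X)
    filled = np.where(missing, np.nanmean(X), X)        # crude initial fill
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-`rank` fit
        filled = np.where(missing, approx, X)           # keep observed entries fixed
    return filled

# Toy example: an exactly rank-3 matrix with ~20% of entries removed
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 40))
holes = rng.random(A.shape) < 0.2
X = np.where(holes, np.nan, A)
recovered = low_rank_complete(X, rank=3)
```

With the correct rank, the missing entries of this noiseless toy matrix are recovered almost exactly, mirroring the rank-3/rank-4 balance the abstract reports for real hydro-variable matrices.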

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Upgrading of water resources assessment including green water quantification evaluated thanks to Earth Observation

Authors: Dr. Veronique Miegebielle, Dr. Odile Rambeau
Affiliations: TotalEnergies
The preservation of freshwater resources is a pressing topic that concerns everyone on the planet. Global freshwater consumption is commonly apportioned as follows: 10% to be preserved for humans as a vital resource, 20% for industry, and 70% for agriculture. Securing the 10% of freshwater resources dedicated to the population is the responsibility of the authorities of each country. Decrees can be issued to restrict water withdrawals, limiting the productivity of industry but also of the agriculture that feeds the population. A balance must be found between the vital needs of the population and the available water resources. In current calculations of the water footprint of human activities, a major part of the rainfall is not considered because it is difficult to quantify, even though it represents 60 to 70% of the precipitation in the water cycle. This part of the water, not considered in our predictions and recommendations on water use, is "green water". Green water is the part of the water that infiltrates the soil, is used by vegetation to grow, and is evapotranspired into the atmosphere before condensing and precipitating again. With a better estimation of the water footprint that includes the green water volume, the water footprint of human activities would be more representative of reality. The aim of this project is to explore different methodologies proposed in the literature in order to establish a valid green water model based on remote sensing data (satellite and drone images) supported by field data. Using Earth observation image acquisition and analysis, the evapotranspiration of vegetation has been calculated over different areas of interest in the southwest of France. Different models have been used. Satellite imagery has become widely used to study green water and evapotranspiration. Nagler et al. proposed, in 2004, to calculate evapotranspiration (ET) from temperature (°C) and the Enhanced Vegetation Index (EVI).
Other models based on remote sensing indicators or on-site sensors also exist, such as the Two-Source Energy Balance (TSEB) or FAO-56 (Allen et al. 1998, Colaizzi et al. 2014, Alhousseine 2018), focusing on agriculture with good results. To explore and validate the models, in-situ field measurements have been planned. The instruments used include a pyranometer measuring solar radiation; weather stations measuring rainfall, air temperature, and moisture; capacitance probes measuring both temperature and the water content of the soil at various depths; and anemometers measuring wind direction and speed. Remote sensing analyses have been carried out using optical multispectral satellite images and drone multispectral acquisitions. This paper presents the results of daily green water quantification on a test field during part of the year, covering two seasons, comparing model results with field measurements.
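Of the models named above, FAO-56 (Allen et al. 1998) defines a standard daily reference-evapotranspiration equation. The sketch below uses the published FAO-56 Penman-Monteith form; the input values are illustrative, not field data from this study.

```python
import math

def fao56_reference_et(T, u2, Rn, G, rh_mean, pressure=101.3):
    """Daily FAO-56 Penman-Monteith reference evapotranspiration ET0 [mm/day].

    T: mean air temperature [degC]; u2: wind speed at 2 m [m/s];
    Rn, G: net radiation and soil heat flux [MJ m-2 day-1];
    rh_mean: mean relative humidity [%]; pressure: air pressure [kPa].
    """
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))   # saturation vapour pressure [kPa]
    ea = es * rh_mean / 100.0                         # actual vapour pressure [kPa]
    delta = 4098.0 * es / (T + 237.3) ** 2            # slope of the es(T) curve [kPa/degC]
    gamma = 0.000665 * pressure                       # psychrometric constant [kPa/degC]
    return (0.408 * delta * (Rn - G)
            + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)) \
           / (delta + gamma * (1.0 + 0.34 * u2))

# Illustrative summer day (made-up values, not measurements from this study)
et0 = fao56_reference_et(T=24.0, u2=2.0, Rn=14.0, G=0.1, rh_mean=60.0)
```

The weather-station variables listed above (radiation, rainfall, temperature, humidity, wind) are exactly the inputs this equation consumes, which is why FAO-56 pairs naturally with the planned field instrumentation.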

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Is It Possible to Translate Sentinel-1 Images to Field-Scale ET Product Using Transformers Trained With EEFlux data?

Authors: Fatma Selin Sevimli, Mustafa Serkan Isik, Prof. Dr. Esra Erten
Affiliations: Istanbul Technical University, Geomatics Engineering, Heavy Finance UAB, OpenGeoHub Foundation
Evapotranspiration (ET) is an essential component of the water cycle, as it helps improve agricultural productivity and maintain the balance of climate and ecosystems. Given its tight connection to surface temperature, one of the most commonly used remote sensing approaches for ET estimation is the METRIC (Mapping Evapotranspiration at High Resolution with Internalized Calibration) method. METRIC integrates satellite-derived Land Surface Temperature (LST) and optical imagery with weather data to estimate ET at high spatial resolution, such as the 30 m resolution Landsat-based Analysis-Ready Data (ARD) of the Earth Engine Evapotranspiration Flux (EEFlux). However, EEFlux data often contain spatial gaps because of the limitations of Landsat's thermal band acquisitions, and cloud coverage, which limits the temporal resolution, remains a persistent challenge. To address these issues, a weakly supervised U-Net architecture was recently trained to learn EEFlux-based ET data from Sentinel-1 images for field-scale ET estimation [1]. Although the initial results are promising, additional research is required to assess the impact of long-term phenological and meteorological information on the learned representation. In this study, spatio-temporal modeling of ET time series was conducted using multisource Earth Observation (EO) data collected for cotton fields in Sanliurfa, a city in the Southeastern Anatolia Region of Turkiye, covering more than 13K fields. These cotton fields are heavily dependent on irrigation due to the low rainfall and high evaporation rates in the area. The dataset contains multiscale biophysical and geophysical characteristics derived from high spatial resolution Sentinel-1 (S1) backscatter data, ERA5-Land meteorological data, and high resolution (30 m) soil type data [2].
All dynamic features were matched to the temporal resolution of the target variable (16 days) through the cotton phenology, which runs from April to November, and merged with static features duplicated along the time domain to form the training dataset. ET data were extracted from the EEFlux database to develop forecasting models for long-term irrigation planning and water resource management, ensuring that cotton crops do not experience water stress, which is critical to maintaining optimal growth and yield. Two types of deep learning models for sequential data, Long Short-Term Memory (LSTM) and Transformer architectures, were applied to predict future ET values from the EO-based time series data. Both models capture temporal dependencies and generate accurate forecasts, while the Transformer model, utilizing attention mechanisms, learns broader contextual relationships to provide more precise predictions. In the study, 20% of the total cotton fields are allocated for testing not only the aforementioned sequential models but also the SAR2ET model [1]. In particular, cotton fields that the SAR2ET model has not encountered during either validation or testing are targeted to ensure an unbiased performance evaluation. In this way, this study aims to form the basis for data-driven ET estimation using Sentinel-1 images, while highlighting the important EO modalities. [1] S. Cetin, B. Ülker, E. Erten and R. G. Cinbis, "SAR2ET: End-to-End SAR-Driven Multisource ET Imagery Estimation Over Croplands," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 14790-14805, 2024, doi: 10.1109/JSTARS.2024.3447033. [2] Xuemeng Tian, Sytze de Bruin, Rolf Simoes et al., "Spatiotemporal prediction of soil organic carbon density for Europe (2000–2022) in 3D+T based on Landsat-based spectral indices time-series," 23 September 2024, PREPRINT (Version 1), available at Research Square [https://doi.org/10.21203/rs.3.rs-5128244/v1]
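The attention mechanism the Transformer relies on reduces to scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A minimal numpy sketch follows; the shapes and data are illustrative, not the study's model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise time-step similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over time steps
    return weights @ V, weights

# Toy self-attention over a sequence of 16 time steps (e.g. 16-day composites),
# 8 features each; real inputs would be the S1/ERA5-Land/soil feature vectors
rng = np.random.default_rng(42)
x = rng.normal(size=(16, 8))
out, w = scaled_dot_product_attention(x, x, x)
```

Because every output step is a weighted average over all input steps, the model can relate, say, early-season backscatter to late-season ET directly, which is the "broader contextual relationships" advantage the abstract attributes to the Transformer over the strictly recurrent LSTM.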

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Satellite canopy water content from Sentinel-2, Landsat-8 and MODIS

Authors: Hongliang Ma, Marie Weiss, Daria Malik, Béatrice Berthelot, Dr Marta Yebra, Rachel Nolan, Dr Arnaud Mialon, Jiangyuan Zeng, Håkan Torbern Tagesson, Xingwen Quan, Dr Albert Olioso, Frederic Baret
Affiliations: INRAE, UMR1114, EMMAH, Magellium, Fenner School of Environment & Society, Australian National University, School of Engineering, Australian National University, Hawkesbury Institute for the Environment, Western Sydney University, Centre d'Etudes Spatiales de la Biosphère (CESBIO), Université de Toulouse (CNES/CNRS/INRAE/IRD/UPS), State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Department of Physical Geography and Ecosystem Science, Lund University, School of Resources and Environment, University of Electronic Science and Technology of China
This study proposes a unified algorithm for canopy water content (CWC) mapping at both decametric and coarse spatial resolution from several widely used optical satellites. Similar to the algorithm implemented in the SNAP toolbox to derive LAI and fAPAR from Sentinel-2, we trained Artificial Neural Networks (ANN) with PROSAIL radiative transfer model simulations. The algorithm was improved to better ensure the representativeness of the simulations through a better-parameterized distribution of the canopy and vegetation input variables (i.e., leaf traits and soil background) of the PROSAIL model. We relied on the largest open integrated global plant (TRY) and soil spectral (OSSL) databases. We directly used the kernel density estimation (KDE) method to approximate the probability density function (PDF) of each leaf trait in TRY. We also reduced the dimension of the OSSL database (around 35,000 spectra) to avoid over-representation of similar soil spectra, using the soil brightness concept. We found that 47 soil spectra were sufficient to represent the range of spectral shapes with good accuracy. We also stabilized the algorithm's predictions by computing the median value of a series of 12 ANNs trained with the same dataset, thus regularizing the inversion process. We found little impact of diverse band combinations, as well as of the inclusion of optical indices, on CWC estimation. The performance of this algorithm was first evaluated at decametric resolution based on ground measurements distributed over five ground campaigns corresponding to diverse climate and biome types. The retrieved CWC from Sentinel-2 and Landsat-8 exhibits satisfying performance, with a correlation coefficient (R) of 0.81 and an RMSE of 0.046 g/cm².
We then evaluated CWC at 500 m resolution from MODIS by comparing it with Landsat-8 and Sentinel-2 aggregated values over a globally distributed selection of LANDVAL sites, representative of the existing biome types combined with a range of precipitation, soil moisture, and vegetation density conditions. The MODIS CWC global maps show reasonable seasonal and spatial patterns compared to multi-frequency microwave-based vegetation optical depth (VOD), and clear improvements over conventionally and extensively used optical indices such as NDWI. Despite the satisfying results obtained in this study (e.g., spatio-temporal behavior and a direct validation exercise, although limited to the few available sites), three main issues can still be identified: (i) the representativeness of the training database (e.g., possible bias in TRY and OSSL spatio-temporal sampling, co-distribution of PROSAIL input variables); (ii) the accuracy and footprint of the ground measurements; and (iii) the saturation effect for dense canopies.
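The KDE step can be illustrated with a plain 1-D Gaussian kernel density estimate using Silverman's bandwidth rule. The synthetic values below stand in for a TRY leaf trait; all names and numbers are illustrative, not the study's data.

```python
import numpy as np

def gaussian_kde_pdf(samples, grid):
    """Evaluate a 1-D Gaussian kernel density estimate of `samples` on `grid`."""
    n = len(samples)
    h = 1.06 * samples.std(ddof=1) * n ** (-0.2)   # Silverman's rule-of-thumb bandwidth
    # One Gaussian kernel per sample, summed at each grid point
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))

# Synthetic skewed "leaf trait" (e.g. a water content in g/cm^2) standing in for TRY
rng = np.random.default_rng(3)
trait = rng.lognormal(mean=-3.3, sigma=0.4, size=2000)
grid = np.linspace(0.0, 0.15, 500)
pdf = gaussian_kde_pdf(trait, grid)
```

Sampling PROSAIL inputs from such an empirical PDF, rather than from an assumed parametric distribution, is what lets the training database follow the skew actually present in the trait archive.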

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Altimeter DREAMing in River Basins - Focus on Africa

Authors: Prof Philippa Berry, Jérôme Benveniste
Affiliations: Roch Remote Sensing (RRS), COSPAR
The role of satellite altimetry in measuring river and lake heights is well-established, with data forming vital inputs to river basin models. However, additional information is encoded in altimeter echoes over these surfaces. In order to exploit this resource, DRy EArth Models (DREAMs) were developed. Originally created to investigate satellite altimeter backscatter from desert and semi-arid terrain, DREAMs have now been crafted over river basins, using multi-mission satellite data and ground truth to model the response of a completely dry surface to Ku band nadir illumination. The first DREAM with significant hydrological content was created over the Kalahari desert, including the Okavango river basin. DREAMcrafting was then attempted over the Congo and Amazon basins. Comparing the Congo basin DREAM with independent data (Dargie et al., 2017) revealed a wealth of DREAM surface hydrological information. It was realised that these DREAMs could be used to assess and interpret altimeter data over rivers and wetlands. Detailed masks have now been generated from the DREAMs to classify pixels as lake/river, wetland/seasonally flooded and soil/rock surface types to facilitate altimeter data analysis. For example, in the latest Congo model, 30 - 35% of the DREAM is identified as wetland/seasonally inundated surface (depending on mask classification criteria) with 14 - 18% as rivers. This paper seeks to answer the following questions: 1) What can DREAMing add to our information store in river basins? 2) How effectively do current and previous altimeter missions recover height and backscatter data over these river basins? 3) What proportion of the overflown river basin surfaces must be monitored to optimise retrieval of these data over rivers and wetlands? New DREAMs over Africa now extend the coverage, encompassing more than 30 river basins including the Congo, Niger, Okavango, Zambezi and Volta. 
DREAMs have also been crafted over parts of Australia, the Amazon basin and Arabia. As over 85% of Africa has now been DREAMed, this paper focuses on African rivers, lakes and wetlands, showcasing multiple river basins in a range of surface conditions. Envisat, ERS-1/2, Jason-1/2, CryoSat-2 and Sentinel-3A/B altimeter data were utilised in this study, together with a database of over 86,000 graded altimeter River and Lake height time series. The recently developed puddle filter (created to filter out altimeter echoes where small puddles of surface water ‘contaminate’ altimeter soil moisture estimates) is found to show clear temporal patterns mirroring local rainfall or river height changes. Altimetry presents a unique information source in this regard in rainforest areas, as the nadir reflection is dominated by the ground return. Very detailed DREAM models are required to capture the intricate structure in river basins. It is noted that smaller tributaries in major river basins are below the current 10 arc second spatial resolution of the DREAMs, and are classified with their surrounding terrain as wetland pixels. Within the constraints of satellite orbit and repeat period, data can be successfully gathered over the majority of these overflown DREAM surfaces. The highest altimeter data retrieval rate over river basin DREAMs for all missions, for all areas where data were gathered, is found over ‘river’ and ‘wetland’ pixels, with lower percentages over ‘soil’ pixels. This is an expected outcome, as targeting ‘soil’ pixels selects for rougher topography. Of prior missions, Envisat performed best, recovering data from a high proportion of river, lake and wetland surfaces even in rough terrain; ERS-1 and ERS-2 were also very successful. For current missions, the Sentinel-3A/3B OLTC masks are found to preclude monitoring of the vast majority of ‘soil’ pixels over all DREAMs. Of substantive concern, the majority of wetland surfaces and smaller tributaries are also excluded.
For example, in the Congo basin the current DREAM shows that monitoring is required over 48-49% of the overflown surface to acquire wetland and river data, with an additional requirement to monitor ‘puddles’. The ability of nadir-pointing altimeters to penetrate the vegetation canopy gives a unique perspective in rainforest areas. Along-track time series of surface inundation and of soil moisture can be generated at the spatial resolution of the underlying DREAMs, currently 10 arc seconds. The major constraint, as with altimeter height measurements, is the spatio-temporal sampling, so use is envisaged in combination with other remotely sensed and in-situ data. The monitoring capabilities of the current generation of SRAL altimeters are not being fully realised over inland water due to critical constraints on the OLTC masks. In this era of climate change, the observation strategy should be focussed towards global monitoring. Evolving climate patterns and changing user requirements in river basins can alter monitoring priorities in unforeseen ways, and time series of prior measurements are essential to provide baseline data. Dargie, GC; Lewis, SL; Lawson, IT; Mitchard, ET; Page, SE; Bocko, YE; Ifo, SA (2017). Age, extent and carbon storage of the central Congo Basin peatland complex. Nature 542, pp. 86-90. doi:10.1038/nature21048

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Development of a high resolution European Drought Monitor

Authors: Pallav Kumar Shrestha, Prof. Dr. rer. nat. habil. Luis Samaniego, Dr. Ehsan Modiri
Affiliations: Helmholtz Centre For Environmental Research - UFZ
Droughts are among the costliest natural disasters in Europe, contributing to losses of 621 million euros per drought event. In 2022-2023, 23 countries worldwide declared a drought emergency, including eight in Europe (UNCCD, 2023). Besides cutting crop yields (e.g., a 16% loss in Germany's 2018 drought), droughts fuel wildfires and heatwaves. The ability to monitor, model, and forecast the occurrence of droughts seamlessly, across several scales in space (1 km to 25 km) and time (weeks to seasons to decades), constitutes one of the great challenges in European hydro-meteorological sciences. We respond with the European Drought Monitor (EDM), with high resolution (up to 1 km) and a latency of a few days. The EDM is based on the precursor system of the German Drought Monitor (Zink et al., 2016; Boeing et al., 2022) and on previous scientific demonstrations and analyses at the European level (Thober, Kumar, Sheffield, et al., 2015; Samaniego, Kumar, et al., 2017; Thober, Kumar, Wanders, et al., 2018; Wanders et al., 2019; Samaniego et al., 2018; Samaniego, Thober, et al., 2019; Rakovec et al., 2022). The system consists of a single continental modelling domain of Europe employing the mesoscale hydrological model (mHM, https://mhm-ufz.org), incorporating major European reservoirs and a new irrigation module. Advanced Earth observation (EO) products are used in conjunction with downscaled 1 km ERA5-Land data, both bias-corrected using EMO-1, to cut the latency of the meteorological forcings for near-real-time initialization of the hydrological model. Furthermore, EO products are used to estimate irrigation demand and command area, as well as to validate model output, including evaporation (CGLS at 1 km), soil moisture (ESA CCI SM v08.1 at 25 km), snow water equivalent (SMOS at 50 km), and total water storage (GRACE-FO at 100 km).
Once the reliability of the system is demonstrated, the EDM will generate drought indicators such as the soil moisture index (SMI) and the heat wave index (HWI) as proxies for impacts on irrigation for the end-users. For reproducibility, the EDM backend is powered by ecFlow, the workflow management system developed by ECMWF. The comprehensive EO augmentation demonstrated in the EDM aligns with the Sendai Framework for Disaster Risk Reduction, which, among others, calls for improved monitoring of hazards using EO data. With the EO integration, we expect the operationalized EDM to generate timely warnings and to help us find novel solutions to the challenge of evolving European droughts.
References
Boeing, Friedrich, Oldrich Rakovec, Rohini Kumar, Luis Samaniego, Martin Schrön, Anke Hildebrandt, Corinna Rebmann, Stephan Thober, Sebastian Müller, Steffen Zacharias, et al. (2022). "High-Resolution Drought Simulations and Comparison to Soil Moisture Observations in Germany". In: Hydrology and Earth System Sciences 26.19, pp. 5137-5161. doi: 10.5194/hess-26-5137-2022.
Rakovec, Oldrich, Luis Samaniego, Vittal Hari, Yannis Markonis, Vojtěch Moravec, Stephan Thober, Martin Hanel, and Rohini Kumar (2022). "The 2018-2020 Multi-Year Drought Sets a New Benchmark in Europe". In: Earth's Future 10.3, e2021EF002394. doi: 10.1029/2021EF002394.
Samaniego, L., S. Thober, R. Kumar, N. Wanders, O. Rakovec, M. Pan, M. Zink, J. Sheffield, E. F. Wood, and A. Marx (2018). "Anthropogenic Warming Exacerbates European Soil Moisture Droughts". In: Nature Climate Change 8.5, pp. 421-426. doi: 10.1038/s41558-018-0138-5.
Samaniego, Luis, Rohini Kumar, Stephan Thober, Oldrich Rakovec, Matthias Zink, Niko Wanders, Stephanie Eisner, Hannes Müller Schmied, Edwin Sutanudjaja, Kirsten Warrach-Sagi, et al. (2017). "Toward Seamless Hydrologic Predictions across Spatial Scales". In: Hydrology and Earth System Sciences 21.9, pp. 4323-4346. doi: 10.5194/hess-21-4323-2017.
Samaniego, Luis, Stephan Thober, Niko Wanders, Ming Pan, Oldrich Rakovec, Justin Sheffield, Eric F. Wood, Christel Prudhomme, Gwyn Rees, Helen Houghton-Carr, et al. (2019). "Hydrological Forecasts and Projections for Improved Decision-Making in the Water Sector in Europe". In: Bulletin of the American Meteorological Society 100.12, pp. 2451-2472. doi: 10.1175/BAMS-D-17-0274.1.
Thober, Stephan, Rohini Kumar, Justin Sheffield, Juliane Mai, David Schäfer, and Luis Samaniego (2015). "Seasonal Soil Moisture Drought Prediction over Europe Using the North American Multi-Model Ensemble (NMME)". In: Journal of Hydrometeorology 16.6, pp. 2329-2344. doi: 10.1175/JHM-D-15-0053.1.
Thober, Stephan, Rohini Kumar, Niko Wanders, Andreas Marx, Ming Pan, Oldrich Rakovec, Luis Samaniego, Justin Sheffield, Eric F. Wood, and Matthias Zink (2018). "Multi-Model Ensemble Projections of European River Floods and High Flows at 1.5, 2, and 3 Degrees Global Warming". In: Environmental Research Letters 13.1, p. 014003. doi: 10.1088/1748-9326/aa9e35.
UNCCD (2023). Global Drought Snapshot 2023: The Need for Proactive Action. Tech. rep. United Nations Convention to Combat Desertification.
Wanders, Niko, Stephan Thober, Rohini Kumar, Ming Pan, Justin Sheffield, Luis Samaniego, and Eric F. Wood (2019). "Development and Evaluation of a Pan-European Multimodel Seasonal Hydrological Forecasting System". In: Journal of Hydrometeorology 20.1, pp. 99-115. doi: 10.1175/JHM-D-18-0040.1.
Zink, Matthias, Luis Samaniego, Rohini Kumar, Stephan Thober, Juliane Mai, David Schäfer, and Andreas Marx (2016). "The German Drought Monitor". In: Environmental Research Letters 11.7. doi: 10.1088/1748-9326/11/7/074002.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Satellite-based optical characterization of a RAMSAR lagoon in Argentina

Authors: Sofía Paná, Francisco Nemiña, Dr Nicola Ghirardi, Mariano Bresciani, Claudia Giardino, Matias Bonansea, Inés del Valle Asís, Anabella Ferral
Affiliations: Mario Gulich Institute for Advanced Space Studies, Centre for Research and Studies on Culture and Society (CIECS), National Scientific and Technical Research Council (CONICET), Córdoba National University (UNC), Argentina's Space Activities Commission (CONAE), CNR–Institute for Electromagnetic Sensing of the Environment, CNR–Institute of BioEconomy, Institute of Earth Sciences, Biodiversity and Environment (ICBIA; CONICET-UNRC), Department of Geology, Faculty of Exact, Physical-Chemical and Natural Sciences, National University of Río Cuarto (UNRC)
Located in the province of Cordoba (Argentina), the Mar Chiquita Lagoon is one of the largest saltwater lakes in the world, characterized by exceptional salinity and intense climatic conditions, such as salt storms. The lagoon, along with the northern coastal areas and the estuaries of its main tributaries (the Suquía and Xanaes Rivers), is included in the Ansenuza National Park and, together with the Dulce River wetlands, has been classified as a Wetland of International Importance under the Ramsar Convention on Wetlands of International Importance Especially as Waterfowl Habitat. The Mar Chiquita Lagoon is currently threatened by human activities such as agriculture and urbanization within its catchment area. Additionally, the region is identified as a global hotspot for the impacts of climate change. Remote sensing tools are particularly valuable for the study of water resources, enabling continuous monitoring that allows researchers to track seasonal variations and long-term trends in water quality. These data are essential for understanding the dynamics of water bodies and ecosystems in response to environmental change. This study investigates the potential of three satellite sensors for classifying water types in the Mar Chiquita Lagoon by implementing a water classification system based on optical property parameters. Optical Water Type (OWT) classifications aim to manage optical complexity by identifying appropriate ocean colour algorithms tailored to each water type, facilitating the understanding of biogeochemical cycles from local to global scales. Two complementary methods were used to achieve this objective. First, a Principal Component Analysis (PCA) was performed on the full set of spectral bands for Sentinel-2 (S2), Sentinel-3 (S3) and PACE to identify the general optical characteristics of the lagoon. An unsupervised classification (K-means) was then performed on the basis of the PCA results.
Finally, an OWT classification was applied to the resulting clusters. The effectiveness of these methods was evaluated using S2, S3 and PACE satellite data acquired on 18 April 2024, to test their applicability over different spatial and spectral scales. The results revealed that the lagoon can be spectrally classified into three distinct zones: an initial zone influenced by the Suquía and Xanaes Rivers, a mixing zone, and a final zone influenced by the Dulce River. The K-means cluster analysis identified three homogeneous zones for S3, while four homogeneous zones were detected for S2 and PACE. These zones were further classified into OWT categories 5A and 5B. The study highlights the importance of satellite-based remote sensing tools in monitoring water quality in the Mar Chiquita Lagoon. The employed techniques also provide insights into the zones of influence of each river, analyzed across varying temporal and spatial scales.
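The PCA-then-K-means sequence described in the abstract can be sketched as follows. This is a minimal numpy-only illustration on synthetic spectra; the pixel count, band count, number of retained components and number of clusters are assumptions for the example, not the study's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical reflectance spectra for lagoon water pixels: (pixels, bands)
refl = rng.random((300, 12))

# Step 1: PCA via SVD on mean-centred spectra
centred = refl - refl.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[:3].T  # project onto the first three components

# Step 2: unsupervised K-means (Lloyd's algorithm) on the PCA scores
k = 3
centroids = scores[rng.choice(len(scores), size=k, replace=False)]
for _ in range(50):
    dists = np.linalg.norm(scores[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centroids = np.array([scores[labels == j].mean(axis=0)
                          if np.any(labels == j) else centroids[j]
                          for j in range(k)])
```

The resulting cluster labels would then be mapped to OWT categories in a final step, which requires the OWT reference spectra and is omitted here.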

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: RainGNSS: an In-Situ Network for Altimetry, Water Vapor and Precipitation Validation of Satellite-Based Observations.

Authors: Bruno Picard, Julianna Devillers, Jean-Christophe Poisson, Yannick Riou, Valentin Fouqueau
Affiliations: Fluctus Sas, vorteX-io
Accurate in-situ rainfall data and dense river monitoring networks are crucial for managing the increasing frequency of extreme weather events such as floods and droughts, driven by climate change. These measurements enable precise, high-resolution datasets critical for predicting localized hydrological phenomena, improving early warning systems, and validating satellite observations. Using Global Navigation Satellite System (GNSS) signals to estimate water vapour and precipitation is a promising application for meteorology and hydrology. The RainGNSS project uses VorteX.io Micro-Stations equipped with low-cost precise GNSS receivers to provide Zenith Total Delay (ZTD) and precipitable water vapor (PWV) estimations. Distributed along rivers, these stations enable continuous high-frequency measurements and offer a complementary product for monitoring flash floods and validating satellite observations, particularly for the SWOT mission. The addition of ZTD-derived rainfall estimation complements existing hydrological measurements (water surface elevation, water surface velocimetry, water surface temperature), enhancing flood prediction capabilities. GNSS data processing is based on a chain using RTKLIB and the Precise Point Positioning (PPP) method, adapted to low-cost GNSS receivers. ZTD retrieval relies on a precise troposphere model (Saastamoinen) for the dry component and the retrieval of the wet component as an unknown parameter. We present the developments to refine the accuracy and dynamics of the data. Then, we discuss the performance of the GNSS-derived rainfall feature and its validation against independent datasets such as ECMWF analyses and measurements from a Davis Vantage Vue weather station. Finally, we discuss the generalization of the approach to a larger network and its validation against weather radar precipitation.
This project represents a step towards cost-effective, scalable river monitoring networks capable of providing critical data for flood risk management and climate resilience.
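A conversion of the kind described, subtracting a Saastamoinen hydrostatic (dry) delay from ZTD and scaling the remaining wet delay to PWV, might look like the sketch below. The constants and example inputs are standard textbook values, not the project's actual processing chain:

```python
import math

def zhd_saastamoinen(p_hpa, lat_deg, h_m):
    """Zenith Hydrostatic Delay [m] from surface pressure (Saastamoinen model)."""
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.28e-6 * h_m
    return 0.0022768 * p_hpa / f

def pwv_from_ztd(ztd_m, p_hpa, lat_deg, h_m, tm_k):
    """Precipitable water vapour [mm]: subtract the hydrostatic delay from
    ZTD, then scale the wet delay by the dimensionless factor Pi(Tm) ~ 0.15."""
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_deg, h_m)
    k2_prime = 0.221   # K/Pa (= 22.1 K/hPa)
    k3 = 3.739e3       # K^2/Pa
    rho_w, r_v = 1000.0, 461.5  # water density [kg/m^3], vapour gas constant [J/(kg K)]
    pi = 1.0e6 / (rho_w * r_v * (k3 / tm_k + k2_prime))
    return pi * zwd * 1000.0    # m -> mm

# Example: ZTD of 2.40 m at sea-level pressure, mid-latitude, Tm = 270 K
pwv = pwv_from_ztd(2.40, 1013.0, 45.0, 100.0, 270.0)
```

In practice the mean temperature Tm would be derived from surface temperature or a numerical weather model rather than fixed.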

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Empirical Orthogonal Function (EOF) Analysis of Water Vapor Data from GPS and MODIS

Authors: İlke Deniz
Affiliations: Zonguldak Bülent Ecevit University
Water vapor is the most abundant natural greenhouse gas in the atmosphere. The distribution of water vapor in the atmosphere shows a high degree of temporal and spatial variability. Current studies include EOF analysis, modeling, propagation characteristics, trends of dynamic structures such as the atmosphere, and their application in data filtering. Furthermore, the "signal" and "noise" in a time series may be reliably identified using EOF analysis. It is also possible to ascertain the precision of a time series by using this characteristic. The patterns of the important principal components can be examined for their physical meanings. In this study, MODIS and GPS water vapor data from 10 TUSAGA-Active stations located in the Western Black Sea Region are used. The data consist of twice-daily observations between 16 February and 23 April 2023. The water vapor time series of MODIS, of GPS, and of the GPS-MODIS difference are evaluated by EOF analysis. Based on the EOF analysis of the MODIS, GPS, and GPS-MODIS difference time series, the distributions of the "residuals" computed for each station are consistent with each other. The variance ratio of the significant principal component PC1, as determined by the EOF analysis of MODIS data, was 0.29. The root mean square error (RMSE) was found to be ± 4.25 mm with 95% reliability. Moreover, the variance ratio of the significant principal component PC1 was 0.83 in the EOF analysis of GPS data. With 95% reliability, the RMSE was determined to be ± 1.31 mm. As for the EOF analysis of the GPS-MODIS difference time series, the variance ratio of the significant principal component PC1 was 0.41, and the RMSE was found to be ± 4.54 mm with 95% reliability. The precisions are consistent with the findings of Xu and Liu (2024) and Zhu et al. (2021). The significant principal components of the MODIS, GPS, and GPS-MODIS time series are shown to have patterns that are compatible with the test area's topography.
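An EOF decomposition of a station time-series matrix of this shape (132 twice-daily epochs × 10 stations) can be computed via the SVD; the PC1 variance ratios quoted above correspond to `variance_ratio[0]` below. The data here are synthetic stand-ins, not the study's observations:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical PWV anomaly matrix: (epochs, stations), e.g. twice-daily
# observations at 10 stations, with the station means removed
data = rng.standard_normal((132, 10))
data = data - data.mean(axis=0)

# EOF analysis via SVD: columns of u scaled by s are the principal
# components (time series); rows of vt are the spatial patterns (EOFs)
u, s, vt = np.linalg.svd(data, full_matrices=False)
variance_ratio = s**2 / np.sum(s**2)  # fraction of variance per component
pc1 = u[:, 0] * s[0]                  # leading principal component
eof1 = vt[0]                          # leading spatial pattern
```

With real data, the leading EOF pattern would be inspected against the region's topography, as done in the study.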

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: A New Upper Tropospheric Humidity Dataset Based on Passive Microwave Sounders

Authors: Dr Elizabeth Good, Philip Whybra, Rob King, Stephen
Affiliations: Met Office
Version 2 (v2.0) of the global Upper Tropospheric Humidity (UTH) dataset, produced within the framework of the EUMETSAT Satellite Application Facility for Climate Monitoring (CM SAF), is presented. This climate data record provides a time series of estimates of UTH derived from passive microwave (MW) sounders. UTH contributes significantly to the atmospheric greenhouse effect through its strong influence on outgoing longwave radiation, despite the upper troposphere's smaller water vapour concentration by mass compared with the lower troposphere. The CM SAF UTH v2.0 dataset is based on data from twelve MW sounders operating at 183 GHz in polar orbit that are combined into a single time series covering the period 6 July 1994 to 31 December 2018. It is accessible via the CM SAF website (https://www.cmsaf.eu/EN/Home/home_node.html). The MW sounders include the Special Sensor Microwave - Humidity (SSM/T-2) onboard DMSP-F[11, 12, 14, 15], the Advanced Microwave Sounding Unit B (AMSU-B) onboard NOAA-[15, 16, 17], the Microwave Humidity Sounder (MHS) onboard NOAA-[18, 19] and Metop-[A, B], and the Advanced Technology Microwave Sounder (ATMS) onboard SNPP. The CM SAF UTH v2.0 is a near-global 1°x1° latitude-longitude dataset that is available at both hourly and daily time steps. Observations from the twelve different sensors have been assigned to the nearest Coordinated Universal Time (UTC) hour, and therefore hourly observations are not available at all hours for all grid cells. Where observations from more than one sensor are available for a single grid cell, a priority ordering is applied in which the best-performing sensor is used. This priority ordering is ascertained by evaluating each sensor independently against a reference dataset based on the ERA5 reanalysis. The daily product is derived from the mean of all available hourly UTH observations on that day, where there are a minimum of two hourly observations.
For both the hourly and daily products, uncertainty components that capture sources of independent (or random), structured (or locally correlated) and common (or systematic) errors in the data are also provided for each grid cell. These uncertainties have been propagated from the input MW top of atmosphere brightness temperatures and through the UTH retrieval; uncertainties due to the retrieval itself are also included in these uncertainty components. In addition, an estimation of the spatial sampling uncertainty is provided. Invalid observations affected by deep convective or precipitating clouds, and/or radiation emitted from the surface, together with any spurious observations from individual MW sounders, have been removed from the data set. The UTH provided typically represents a broad atmospheric layer between 500 and 200 hPa. However, the exact height of this layer depends on the atmospheric conditions at the time of the observation. An optional fixed layer approximation adjustment is supplied that users can apply to provide an estimated mean relative humidity (RH) between ±60° latitude for a fixed layer between 500 and 200 hPa (mean_RH). However, users are advised to take care using this correction, especially outside of the tropics where the mean_RH is of lower quality. Users are also advised to take care using UTH observations above ±60° latitude as the retrieval is sometimes less reliable at high latitudes.
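The sensor-priority merge and the minimum-observation daily averaging described above can be sketched as follows. Function and sensor names are hypothetical, NaN marks unobserved cells, and the grids are toy-sized; the actual product logic also carries the uncertainty components, which are omitted here:

```python
import numpy as np

def merge_hourly(obs_by_sensor, priority):
    """Per grid cell, keep the value from the highest-priority sensor.
    obs_by_sensor maps sensor name -> 2-D field with NaN where unobserved."""
    merged = np.full(next(iter(obs_by_sensor.values())).shape, np.nan)
    for sensor in priority:                      # best-performing first
        field = obs_by_sensor[sensor]
        take = np.isnan(merged) & ~np.isnan(field)
        merged[take] = field[take]
    return merged

def daily_mean(hourly_stack, min_obs=2):
    """Daily UTH: mean over the hour axis, requiring >= min_obs valid values."""
    count = np.sum(~np.isnan(hourly_stack), axis=0)
    total = np.nansum(hourly_stack, axis=0)
    return np.where(count >= min_obs, total / np.maximum(count, 1), np.nan)

# Two sensors over a 2x2 grid; 'mhs' takes priority over 'atms'
mhs = np.array([[50.0, np.nan], [np.nan, np.nan]])
atms = np.array([[40.0, 30.0], [np.nan, 20.0]])
hourly = merge_hourly({"mhs": mhs, "atms": atms}, priority=["mhs", "atms"])
```

In the example, the first cell takes the MHS value (50) despite ATMS also observing it, while cells seen only by ATMS fall through to the lower-priority sensor.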

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: The use of EO-derived irrigation maps to assess irrigation impacts on water availability of the Rhine basin

Authors: Devi Purnamasari, Albrecht Weerts, Willem van Verseveld, Ryan Teuling, Joost Buitink, Brendan Dalmijn, Frederiek Sperna Weiland
Affiliations: Deltares, Wageningen University
The Rhine Basin has experienced summer droughts that have led to concerns about water availability and increased irrigation water demand. However, in temperate basins like the Rhine, high-resolution maps of irrigated areas are often lacking. Here, as part of the Horizon Europe project STARS4Water, we developed maps of irrigated areas in the Rhine Basin at 1 km resolution for water availability assessment using a hydrological model. The approach uses the difference between modelled Land Surface Temperature (LST) without irrigation and observed LST from the Moderate Resolution Imaging Spectroradiometer (MODIS). These LST differences provide distinct features for classification with a random forest algorithm by excluding evapotranspiration driven primarily by precipitation. The irrigated-area maps were evaluated against national agricultural statistics and compared with existing maps of irrigated areas. The resulting irrigation maps provide insights into the interannual variability of irrigated cropland extent and location in the Rhine Basin. Subsequently, the derived irrigation maps are used in wflow_sbm to assess and model agricultural water use in the region. The wflow_sbm model with irrigation now also includes other water uses (e.g. domestic, industrial, livestock). Finally, we compare the hydrological fluxes and state variables of wflow_sbm with and without water use (including irrigation) against point and EO-based observations such as discharge, total water storage, and LST.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Exploring the potential of sub-daily microwave remote sensing observations for estimating evaporation

Authors: Emma Tronquo, Susan Steele-Dunne, Hans Lievens, Niko E.C. Verhoest, Diego G. Miralles
Affiliations: Hydro-Climate Extremes Lab (H-CEL), Ghent University, Department of Geoscience and Remote Sensing, Delft University of Technology
Evaporation (E) plays a key role in the terrestrial water, energy, and carbon cycles, and modulates climate change through multiple feedback mechanisms. Its accurate monitoring is thus crucial for water management, meteorological forecasts, and agriculture. However, traditional in situ measurements of E are limited in terms of availability and spatial coverage. As an alternative, global monitoring of E using satellite remote sensing, while indirect, holds the potential to fill this need. Today, different models exist that yield E estimates by combining observable satellite-based drivers of this flux, but they typically work at daily or even monthly time scales. As natural evaporation processes occur at sub-daily resolution, there is a need to estimate evaporation at finer temporal scales to capture the diurnal variability of this flux and to monitor water stress impacts on transpiration. Likewise, interception loss shows high intra-day variability, mainly concentrated during precipitation events and shortly after. Moreover, the moisture redistribution within the soil–plant–atmosphere continuum as a consequence of transpiration is highly non-linear and has a strong daily cycle. Sub-daily microwave data could inform about these short-term processes, and as such improve process understanding and monitoring of E and its different components, while providing all-sky retrievals. The Sub-daily Land Atmosphere INTEractions (SLAINTE) mission, an ESA New Earth Observation Mission Idea (NEOMI), aims to provide sub-daily SAR observations of soil moisture, vegetation optical depth (VOD) and wet/dry canopy state, enabling a more accurate estimation of E and the potential to advance E science beyond its current boundaries. This study investigates the potential value of future SLAINTE observations for improving the estimation of E at four eddy covariance sites. In this regard, Observing System Simulation Experiments (OSSEs) are assembled.
In total, three experiments using synthetic microwave observations are implemented, focusing on the role of (1) sub-daily surface soil moisture in improving bare soil evaporation and transpiration estimates, (2) sub-daily VOD in improving transpiration estimates, and (3) sub-daily microwave observations that inform about the wetness state of the canopy, to address the uncertainties related to rainfall interception loss. The Global Land Evaporation Amsterdam Model (GLEAM; Miralles et al., 2011) is used for the simulations. GLEAM is a state-of-the-art E model that estimates the different E components (mainly transpiration, bare soil evaporation, and interception loss) using satellite data, including microwave observations of surface soil moisture and VOD. The model is here adapted to work at sub-daily resolution. The results of the OSSEs illustrate that prospective sub-daily microwave data have the ability to improve the estimation of E and its separate components, even if based on current-generation E models, and highlight the need for satellite missions providing sub-daily microwave data, like SLAINTE, to better comprehend the flow of water in ecosystems.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Observation-Based Evaluation of Anthropogenic Land- and Water-Use Scenarios in Regional Water Budgets over Europe

Authors: Benjamin D. Gutknecht, Jürgen Kusche, Francis Lopes, Jane Roque Mamani, Yikui Zhang
Affiliations: Collaborative Research Centre 1502 DETECT, Institute of Geodesy and Geoinformation, University of Bonn, Institute of Geosciences, Meteorology Section, University of Bonn, Institute of Bio- and Geosciences, Agrosphäre (IBG-3), Forschungszentrum Jülich, Centre for High-Performance Scientific Computing in Terrestrial Systems, Geoverbund ABC/J
There is broad consensus that increased atmospheric greenhouse gas concentrations lead to considerable change in the Earth’s climate system and, thus, to an intensification of the water cycle. At regional scales, however, other human actions have also recently been found to be significant in this regard. Decades of anthropogenic land-use change and increased water use have impacted the regional water and energy cycles, including across the atmospheric–terrestrial boundary. To strengthen our understanding of the relevant quantities, a series of different reanalyses, free-run and coupled model scenarios over the EURO-CORDEX region have been developed in the framework of the Collaborative Research Centre (CRC) 1502 DETECT. These include the Terrestrial Systems Modelling Platform (TSMP) with COSMO, ICON, (e)CLM and ParFlow, and assume systematic forcing-parameter variations and different irrigation and land-use patterns. Here, we evaluate essential climate variables such as precipitation (P), evapotranspiration (ET), terrestrial water storage change and river discharge by means of kernel-integrated monthly water mass fluxes in the terrestrial water budget equation over multiple time scales. In what may be understood as a combined consistency check based on a sound physical assumption, we compare the modelled boundary net flux, i.e. P-ET, against terrestrial water storage change as observed by GRACE satellite gravimetry and observed river discharge. Hereby it is not only possible to identify model ensemble outliers and systematic offsets, but also to assess the significance of differences in budget residuals between model runs with and without human interaction. The choice of primary target regions comprises major European river catchments over the Iberian Peninsula, Western Europe, and Central Continental Europe with varying types of prevalent climate and seasonality.
Our preliminary findings indicate that (1) the choice of sea surface temperature forcing in regional climate models can lead to very localised differences in boundary net fluxes, (2) varying assumptions about irrigation can have a strong, regionally differing influence on the variability of accumulated precipitation, and (3) the inclusion of anthropogenic water use in a coupled Earth system model leads to season-dependent changes of the multi-annual mean boundary net water flux on the order of >10 mm per month on local to regional scales (both reduction and intensification). Moreover, we highlight challenges of the budget method stemming from spatio-temporal gaps in measured data, which further emphasises the potential of and need for space-based alternatives for continuous observations.
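The consistency check described here rests on the terrestrial water budget: over a catchment, P − ET should balance storage change plus discharge, so the residual quantifies the misfit between modelled net flux and the GRACE/discharge observations. A minimal sketch (function name and units are illustrative; the study works with kernel-integrated catchment means):

```python
import numpy as np

def budget_residual(p, et, ds_dt, q):
    """Terrestrial water budget residual, all inputs in mm/month over a
    catchment: P - ET - dS/dt - Q. A value near zero indicates closure
    between modelled net flux and observed storage change and discharge."""
    return (np.asarray(p, float) - np.asarray(et, float)
            - np.asarray(ds_dt, float) - np.asarray(q, float))

# Perfectly closed month: 80 mm precipitation, 50 mm ET, 10 mm storage
# gain, 20 mm discharge -> residual 0
print(budget_residual(80.0, 50.0, 10.0, 20.0))
```

Applied to monthly time series for a model run with and without human water use, differences in these residuals are what the study tests for significance.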

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Seasonal Analysis of Precipitation Partitioning Using a Storage-Adjusted Budyko Framework.

Authors: Dr. Karim Douch
Affiliations: ESA ESRIN
Global warming and land use changes are intensifying the hydrological cycle by altering water flux rates within the Earth system. This intensification manifests in many regions as more frequent extreme precipitation events and prolonged dry spells. While these changes are increasingly well-documented, an open question remains whether and how the partitioning of precipitation into runoff and evaporation is shifting. The Schreiber-Oldekop hypothesis, also known as the Budyko framework, offers a model for understanding the long-term evolution of this partitioning. In this framework, the evaporative index (E/P), representing the evaporation (E) to precipitation (P) ratio at the basin scale, is modelled as a function of the aridity index (Ep/P), defined as the potential evaporation (Ep) to precipitation ratio. Traditionally, this relationship is derived using annual or longer-term averages of these variables to minimize the impact of water storage changes (dS/dt), assuming 1 – Q/P ≈ E/P, where Q is the total basin outflow. However, this annual averaging may mask potentially divergent dynamics in the seasonal partitioning of water between evaporation and runoff. In this study, we address this gap by conducting a seasonal analysis of the Schreiber-Oldekop hypothesis across 46 basins worldwide, utilizing Earth observation data. At the seasonal scale, water storage changes (dS/dt) become a significant component of the water balance equation and are treated as a source or sink of water in addition to precipitation. These storage changes are estimated using terrestrial water storage anomaly time series derived from GRACE and GRACE-FO observations over the last two decades. Our methodology comprises two main steps. First, we construct consistent time series for precipitation, evaporation and water storage change (dS/dt) for each basin, spanning 1995–2023. This involves filling gaps and extending GRACE(-FO) time series back to 1995 while improving water mass balance closure.
Second, depending on the basin’s climatic zone, we segment the time series into two (dry and wet) or more seasons, each covering at least three months. The modified Budyko framework is then applied to analyse the partitioning dynamics for each season.
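One classical closed form of the Budyko relationship named in the abstract is Schreiber's curve, E/P = 1 − exp(−Ep/P). The storage adjustment shown below, treating the seasonal water supply as P − dS/dt rather than P alone, is an illustrative assumption, since the abstract does not spell out the exact modified formulation:

```python
import math

def schreiber_evaporative_index(aridity_index):
    """Schreiber's classical Budyko curve: E/P = 1 - exp(-Ep/P)."""
    return 1.0 - math.exp(-aridity_index)

def seasonal_aridity_index(ep, p, ds_dt):
    """Storage-adjusted aridity index: at the seasonal scale the available
    water supply is taken as P - dS/dt rather than P (illustrative form)."""
    return ep / (p - ds_dt)

# Wet-season example: Ep = 60 mm, P = 200 mm, storage gain dS/dt = 40 mm
phi = seasonal_aridity_index(60.0, 200.0, 40.0)   # aridity index 0.375
evap_index = schreiber_evaporative_index(phi)
```

In the annual-average case the dS/dt term is negligible and the expression reduces to the traditional Budyko formulation.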

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: A.07.07 - POSTER - Advancements in Observation of Physical Snow Parameters

Comprehensive quantitative observations of physical properties of the seasonal snow cover are of great importance for water resources, climate impact and natural hazard monitoring activities. This has been emphasized by the Global Energy and Water EXchanges (GEWEX) project (a core project of the World Climate Research Programme (WCRP)) for decades and highlighted by the Global Precipitation Experiment (GPEX) (a WCRP Lighthouse Activity) launched in October 2023. Satellite-based observation systems are the only efficient means for obtaining the required high temporal and spatial coverage over the global snow cover. Owing to their sensitivity to dielectric properties and their penetration capabilities, SAR systems are versatile tools for snow parameter observations. Significant advancements have been achieved in SAR-based retrieval algorithms and their application to operational snow cover monitoring. Additionally, lidar backscatter measurements have been proven to provide accurate observations of snow height and its changes. However, there is still a need for improvement of snow cover products, addressing physical parameters such as snow depth, SWE, liquid water content, freezing state and snow morphology. In this session the current status of physical snow cover products will be reviewed and activities towards further improvements will be presented, taking into account satellite data of current and future satellite missions. In this context, a broad range of observation techniques is of interest, including methods based on backscatter intensity, polarimetry, interferometry, tomography, as well as multi-frequency and multi-sensor approaches.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: An intercomparison exercise of Snow Cover Area maps from high-resolution Earth Observation over the Alps

Authors: Federico Di Paolo, Matteo Dall'Amico, Stefano Tasin
Affiliations: Waterjade Srl
In Europe, the majority of winter precipitation falls as snow above 1,000 m altitude, where it accumulates and remains stored in the snowpack until the melting season, when it returns to the hydrological cycle and is partly used to sustain downstream water demands. Being the component of the cryosphere experiencing the largest seasonal variation in spatial extent, snow is highly connected to climate change, particularly in low- and mid-elevation mountain regions, both because changes in climate and rising temperatures can decrease winter snow amounts, and because any change in snow presence and melt can directly affect water availability in snow-fed basins. In recent decades, climate change has been driving an increase in the frequency and magnitude of hydrological extremes across the globe, such as droughts and floods. In particular, droughts are one of the main water-related geophysical issues in Europe, recently showing an increase in the Mediterranean area and impacting the entire economy at different levels. Currently, a comprehensive description and prediction of droughts is challenging, and many approaches have been developed to indicate areas exposed to droughts in the near future. Many European regions rely on water streams that originate in mountains, where snow is a dominant variable in the water cycle. In such areas, snow cover estimation is one of the main indicators for predicting possible drought conditions, governing many hydrology-related risks at different time horizons: i) droughts, caused by reduced precipitation during a season, occurring over months; ii) floods, due to intense snowmelt, occurring over days; iii) avalanches, due to intense snowfalls, occurring over hours to days. In recent decades, the large and open availability of Earth Observation (EO) data has enabled the monitoring of snow at a global scale.
Different snow-related physical variables have been retrieved from EO images, such as: (i) Snow Cover Area (SCA), extracted from multispectral sensors such as MODIS, Sentinel-2 and the Landsat constellation, having a medium to high spatial resolution (from tens to hundreds of metres); (ii) wet snow cover area, retrieved from Synthetic Aperture Radars (SARs), such as Sentinel-1, to detect the presence of wet snow; (iii) Snow Water Equivalent (SWE), estimated from passive microwave sensors and gravity measurements; (iv) snow depth (HS), recently retrieved from Sentinel-1 SAR data at kilometre/sub-kilometre resolution and from stereo satellite imagery. The use of EO-retrieved snow parameters is vast, and such data can be applied to diverse topics and time horizons, such as climatology, long-term snow cover variability, drought assessment, snow evolution monitoring, melting dynamics, ingestion in hydrological models for snow evolution, and validation of hydrological models. Given the large number of EO-retrieved snow products and the diversity of processing approaches, a comparison between different products is important to assess performance and limitations. The so-called Round Robin Exercise is an intercomparison method for jointly evaluating the performance of different products. In this work we present an intercomparison between high-resolution (i.e., from 250 to 20 m) EO-retrieved snow products. In particular, we focus on SCA products extracted from multispectral sensors (i.e., Sentinel-2 and Landsat-8) and SAR (i.e., Sentinel-1). A medium-resolution (i.e., 250 m) MODIS SCA product evaluated over the Alps is also included in the dataset for comparison with the higher-resolution Sentinel/Landsat products.
Since the resolution of the considered EO products varies from 20 to 250 m, the SKYE approach for validating EO-retrieved snow variables against an in-situ measurement reference, proposed under the SnowPEx project, is used, with the HS dataset published by Matiu et al. (2020) as a benchmark. The analysis has been carried out over the Alps for the winter seasons 2017/2018 and 2018/2019. All the analyzed EO-retrieved SCA products showed good metrics with respect to the in-situ benchmark. Our results show that, in retrieving the SCA value from HS or Fractional Snow Cover (FSC) maps obtained through EO: a threshold of HS > 2 cm should be used, as it provides the best metrics and enables a more conservative snow identification from multispectral sensors (for HS < 2 cm, the snow cover may not be uniform); and a threshold of FSC > 15% or 25% should be used, as it provides the best metrics and is a conservative value for the identification of snow.
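The recommended thresholding rules can be written down directly; the threshold values come from the results above, while the function names and example arrays are hypothetical:

```python
import numpy as np

def sca_from_hs(hs_cm, threshold_cm=2.0):
    """Binary snow cover from a snow-depth (HS) map: snow where HS > 2 cm."""
    return np.asarray(hs_cm) > threshold_cm

def sca_from_fsc(fsc_percent, threshold_percent=15.0):
    """Binary snow cover from fractional snow cover: snow where FSC > 15%
    (the results also support 25% as a conservative threshold)."""
    return np.asarray(fsc_percent) > threshold_percent

hs_mask = sca_from_hs([0.5, 1.9, 2.1, 30.0])   # snow only in the last two cells
fsc_mask = sca_from_fsc([5.0, 15.0, 40.0])     # snow only in the last cell
```

Note that both thresholds are strict inequalities, so a pixel at exactly 2 cm or 15% is classified as snow-free, matching the conservative intent.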

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: A New Method for Assimilating Satellite Snow Extent Data in NWP

Authors: Niilo Siljamo, Mikael Hasu, Ekaterina Kurzeneva, Laura Rontu
Affiliations: Finnish Meteorological Institute
The H SAF portfolio includes several operational snow extent products. Since 2008, the geostationary H31 (MSG/SEVIRI) product has provided snow extent data across the full SEVIRI disk. Better polar coverage has been provided by H32 (Metop/AVHRR), available from 2015. The latest addition is the H43 (MTG/FCI) snow extent, which provides daily full-disc snow extent coverage for MTG/FCI. These products are suitable for hydrological and meteorological applications, such as numerical weather prediction (NWP). The key challenge has been the assimilation of the satellite snow data into NWP models. Using these observations, particularly in autumn and spring, could enhance the Snow Water Equivalent (SWE) field in NWP models. The idea is to adjust the analysis fields by using statistical interpolation between snow barrels, SYNOP observations (weather stations), and the model background. The innovative "snow barrels" are pseudo-observations of snow extent produced from the H SAF H32 (Metop/AVHRR) intermediate (single-image) snow extent product. Since 2022, the FMI has provided snow barrel data to MetCoOp servers, enabling testing within the MetCoOp Ensemble Prediction System (MEPS). MetCoOp is a collaboration between the FMI, Met Norway, SMHI, ESTEA, and LEGMC, with the goal of jointly operating a limited-area NWP model for the Nordic region. After successful development and testing, snow barrels were ready for use in NWP models and were integrated into MetCoOp’s operational production in spring 2024. Snow barrels condense snow extent data from multiple (10×10) satellite pixels into a single classification distribution. Each "barrel" contains the observation time, average location, and number of classifications (snow, no snow, partial snow) in the selected 100-pixel area. For NWP purposes, this data is converted into a format suitable for assimilation, which influences the model’s SWE field.
Since snow barrels do not directly measure SWE, a fixed snow depth range of 0–10 cm is assumed, corresponding to SWE values of 0–31 kg/m² depending on snow density. Snow density is estimated using typical monthly climatological values, which generally range from 140 to 310 kg/m³. The fraction of pixels classified as "snow" is multiplied by the maximum SWE value for a 10 cm snow depth. For example, in spring, when the average snow density is approximately 250 kg/m³, if all pixels are classified as "snow," the resulting pseudo-observation is 25 kg/m². If only half the pixels are classified as snow, the pseudo-observation is 12.5 kg/m². Barrels are primarily applied when they contradict the model’s background field. If satellite observations were used indiscriminately, the model analysis might incorrectly reduce the overall snow amount, as the barrels represent a maximum SWE of only approximately 14-31 kg/m². The winter of 2024-2025 marks the first full northern hemisphere winter in which the snow barrel approach will be operational. Snow barrels are expected to bring several benefits during this period. They will enhance the analysis of snow cover in areas with thin or patchy snow, allowing the model to more effectively track rapid changes in snow cover, even over smaller regions. This capability is especially valuable during melting periods, near coastal areas, or after fresh snowfall. Early results from this year already demonstrate improvements in the model’s snow analysis statistics, such as lower RMSE and bias. It will be interesting to observe how the improved snow field impacts other model parameters, such as temperature, over an entire winter season. However, snow barrel observations do have certain limitations. During the darkest months (November to January), data collection is not possible due to insufficient daylight at the latitudes within the model domain. For this reason, the barrels are not expected to have a significant impact before February.
Nonetheless, this is not a significant disadvantage given how the barrels function. As previously mentioned, barrels detect snow cover but not snow depth, meaning that during midwinter, when the snow cover is thick and stable, the barrels provide little additional value. Next year, our NWP model cycle is likely to be updated, which will influence how snow barrel data is assimilated. Currently, snow observations are processed in units of kg/m², but in the new model setup, the unit will change to centimetres. Preliminary results from this update are expected in spring, based on our pre-operational system.
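The barrel-to-SWE arithmetic described above (fraction of "snow" pixels times the SWE of a 10 cm snowpack at the climatological density) can be reproduced directly; the function name is hypothetical:

```python
def barrel_pseudo_obs(n_snow, n_total, density_kg_m3, max_depth_m=0.10):
    """SWE pseudo-observation [kg/m^2] from one snow barrel: the fraction
    of pixels classified as snow times the SWE of a 10 cm deep snowpack
    at the given climatological density."""
    max_swe = density_kg_m3 * max_depth_m   # e.g. 250 kg/m^3 -> 25 kg/m^2
    return (n_snow / n_total) * max_swe

# Spring examples from the text (density ~250 kg/m^3, 10x10 pixel barrel)
print(barrel_pseudo_obs(100, 100, 250.0))  # all pixels snow -> 25.0
print(barrel_pseudo_obs(50, 100, 250.0))   # half the pixels -> 12.5
```

With the stated density range of 140–310 kg/m³, the maximum pseudo-observation spans 14–31 kg/m², matching the limits quoted in the abstract.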

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Machine learning based GNSS-IR retrieval in complex terrain: Initial results for snow heights in Switzerland

Authors: Matthias Aichinger-Rosenberger, Laura Crocetti
Affiliations: Chair of Space Geodesy, Institute of Geodesy and Photogrammetry, ETH Zurich
GNSS-Interferometric Reflectometry (GNSS-IR) represents an innovative method for environmental remote sensing. The technique makes use of ground-reflected signals from Global Navigation Satellite Systems (GNSS) to sense different environmental parameters such as snow and soil moisture. These parameters belong to the essential climate variables defined by the Global Climate Observing System (GCOS) and observations are of great value for understanding the hydrological cycle and the impacts of climate change on it. However, one major limitation of the classic GNSS-IR algorithm is the restriction of its applicability to stations located in flat terrain, which makes the technique unusable for sites in mountainous areas. This is unfortunate since measurement networks are typically already sparse in these regions, which are particularly vulnerable to the effects of climate change. Thus, the ability to use existing station infrastructure in alpine regions for snow and soil moisture monitoring would be beneficial for hydrological monitoring and climate studies. In this contribution, we present the newest results from the GCOS Switzerland project “Machine-learning based Advancement and usability assessment of GNSS Interferometric Reflectometry for Climatological studies in Switzerland” (MAGIC-CH). This includes a validation of snow height products from our machine learning model as well as from the classic retrieval, by comparison to standard in-situ and satellite products. Furthermore, we showcase the potential benefits of incorporating local terrain information, in the form of a high-resolution Digital Elevation Model (DEM), in the retrieval process.
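As a rough illustration of the classic GNSS-IR retrieval referred to above: the SNR interference pattern oscillates with sin(elevation) at a frequency proportional to the reflector height, so the height can be recovered from a spectral peak. The sketch below uses synthetic data and assumed values (GPS L1 wavelength, a 1.5 m reflector height); it is not the MAGIC-CH implementation:

```python
import numpy as np

lam = 0.1903    # GPS L1 carrier wavelength [m]
H_true = 1.5    # assumed antenna height above the reflecting snow surface [m]

# Classic GNSS-IR: detrended SNR oscillates with sin(elevation) at a
# frequency f = 2*H/lam (cycles per unit sin(e)).
s = np.linspace(np.sin(np.radians(5)), np.sin(np.radians(25)), 512)  # uniform in sin(e)
snr = np.cos(4.0 * np.pi * H_true / lam * s)  # synthetic SNR fringes

# Locate the dominant fringe frequency with a zero-padded FFT (in practice a
# Lomb-Scargle periodogram handles the unevenly sampled real data).
d = s[1] - s[0]
spec = np.abs(np.fft.rfft(snr - snr.mean(), n=16384))
freqs = np.fft.rfftfreq(16384, d)
f_peak = freqs[np.argmax(spec)]

H_est = f_peak * lam / 2.0  # retrieved reflector height [m]
```

Snow height then follows as the difference between the snow-free reference height and the retrieved reflector height.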

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: An Innovative Concept of High Spatial Resolution Measurements of Snow Depth and Snow Density from Optical Remote Sensing

Authors: Dr. Yongxiang Hu, Xubin Zeng, Parminder Ghuman, Keith Murray, Chris Edwards, Jared Entin, Craig Ferguson, Thorsten Markus, Knut Stamnes, Xiaomei Lu, Sunny Sun-Mack, Carl Weimer, Yuping Huang
Affiliations: University Of Arizona, NASA Langley Research Center, NASA Earth Science Technology Office, Code YS NASA Headquarters, Stevens Institute of Technology, BAE Systems, Inc.
Visible light travels diffusively inside snow. This diffusive propagation process can be formulated precisely through random walk theory. Using this theory, we have established a simple theoretical formulation that links snow depth to the diffusive photon path distribution of visible light scattered by snow particles. The theory suggests that, for backscattering measurements of a space-based lidar (e.g., ICESat-2), the average photon path length of laser light traveling inside the snow is equal to twice the snow depth. Using Monte Carlo simulations [1] and theoretical radiative transfer modeling analysis [2], we demonstrated that this simple theory applies universally to snowpacks with different snow grain sizes, shapes, and densities. We have derived snow depths from ICESat-2 data using the theory, and the results agree with snow depth measurements from field campaigns. Sunlight reflected by snow is sensitive to absorption by the snow, which is a function of wavelength in the visible and solar infrared spectral range. Compared with shortwave IR wavelengths, snow is less absorbing at shorter wavelengths, so the reflectance there is more sensitive to the deeper part of the diffuse photon path distribution. Thus, the average diffusive photon path length (twice the snow depth) can also be measured from the spectral reflectance of solar radiation at visible and infrared wavelengths. This talk introduces an innovative high-spatial-resolution snow depth and snow density measurement concept from optical remote sensing, both with lidars (active remote sensing) and with broad-swath spectral measurements of sunlight reflected by snow (passive remote sensing, trained by collocated lidar measurements) from space, such as the spectral measurements from NASA’s EMIT, PACE and SBG missions, in order to establish a global snow depth data record at very high spatial resolution (e.g., up to 30 m resolution with spectral measurements of NASA’s EMIT and SBG missions).
We also introduce techniques to derive snow density from the spectral reflectance and lidar measurements, machine-learning-based synergistic lidar/spectrometer and lidar/microwave-radiometer snow depth retrievals, and a quantum annealing technique we developed for enhancing the SNR of the lidar measurements. References: [1] Hu Y, Lu X, Zeng X, Stamnes SA, Neuman TA, Kurtz NT, Zhai P, Gao M, Sun W, Xu K, Liu Z, Omar AH, Baize RR, Rogers LJ, Mitchell BO, Stamnes K, Huang Y, Chen N, Weimer C, Lee J and Fair Z (2022) Deriving Snow Depth From ICESat-2 Lidar Multiple Scattering Measurements. Front. Remote Sens. 3:855159. doi: 10.3389/frsen.2022.855159 [2] Hu Y, Lu X, Zeng X, Gatebe C, Fu Q, Yang P, Weimer C, Stamnes S, Baize R, Omar A, Creary G, Ashraf A, Stamnes K and Huang Y (2023), Linking lidar multiple scattering profiles to snow depth and snow density: an analytical radiative transfer analysis and the implications for Front. Remote Sens. 4:1202234. doi: 10.3389/frsen.2023.120223
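The core path-length relation stated in the abstract (average diffusive photon path equals twice the snow depth) reduces to a simple conversion; the numbers below are illustrative, not campaign values:

```python
def snow_depth_from_mean_path(mean_path_m):
    """Average two-way diffusive photon path length ~ 2 x snow depth."""
    return mean_path_m / 2.0

def swe_from_depth(depth_m, density_kg_m3):
    """SWE [kg/m^2] from snow depth [m] and bulk snow density [kg/m^3]."""
    return depth_m * density_kg_m3

depth = snow_depth_from_mean_path(1.2)   # mean photon path of 1.2 m -> 0.6 m snow
swe = swe_from_depth(depth, 300.0)       # with an assumed bulk density of 300 kg/m^3
```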

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Improving Snow Water Equivalent retrievals and our understanding of terrestrial snow mass in the ESA CCI+ Snow project

Authors: Dr. Kari Luojus, MSc. Pinja Venäläinen, Jouni Pulliainen, Matias Takala, Mikko Moisander, Lina Zschenderlein, Chris Derksen, Colleen Mortimer, Lawrence Mudryk, Thomas Nagler, Gabriele Schwaizer
Affiliations: Finnish Meteorological Institute, Environment and Climate Change Canada, ENVEO IT GmbH
Reliable information on snow cover across the Northern Hemisphere and Arctic and sub-Arctic regions is needed for climate monitoring. Warming surface temperatures during recent decades have driven a substantial reduction in the extent and duration of Northern Hemisphere snow cover. These changes in snow cover affect Earth’s climate system via the surface energy budget and influence freshwater resources across a large proportion of the Northern Hemisphere. In contrast to snow extent, reliable quantitative knowledge of seasonal snow mass and its trend has remained relatively uncertain until recent years. FMI has been working with ECCC to improve the retrieval of terrestrial snow mass using passive microwave radiometers in several ESA projects, currently within the ESA Snow CCI. The ESA Snow CCI project, initiated in 2018, strives to further improve the retrieval methodologies for snow water equivalent (SWE) from satellite data and to construct long-term climate data records (CDRs) of terrestrial snow cover for climate research purposes. The efforts to improve satellite-based retrieval of snow water equivalent have resulted in an enhanced-resolution SWE record spanning 1979-2022 with 0.1° x 0.1° spatial resolution (Venäläinen et al. 2023, Luojus et al. 2024). The retrieval applies the FMI-developed GlobSnow approach, which combines satellite-based data with ground-based snow depth observations and now includes dynamic snow density consideration in the retrieval process. Further, the team has updated the bias-correction approach presented in Pulliainen et al. 2020 to improve the reliability of the long-term climate data records of terrestrial snow mass. The approach is being further improved in the ESA Snow CCI project. The new SWE data record and upcoming Snow CCI datasets will improve our estimates of satellite-era snow mass changes and trends for the Northern Hemisphere.
References: Luojus, K.; Venäläinen, P.; Moisander, M.; Pulliainen, J.; Takala, M.; Lemmetyinen, J.; Derksen, C.; Mortimer, C.; Mudryk, L.; Schwaizer, G.; Nagler, T. (2024): ESA Snow Climate Change Initiative (Snow_cci): Snow Water Equivalent (SWE) level 3C daily global climate research data package (CRDP) (1979 - 2022), version 3.1. NERC EDS, (2024). https://dx.doi.org/10.5285/9d9bfc488ec54b1297eca2c9662f9c81 Pulliainen, J., Luojus, K., Derksen, C., Mudryk, L., Lemmetyinen, J., Salminen, M., Ikonen, J., Takala, M., Cohen, J., Smolander, T. and Norberg, J., “Patterns and trends of Northern Hemisphere snow mass from 1980 to 2018”. Nature 581, 294–298 (2020). https://doi.org/10.1038/s41586-020-2258-0 Venäläinen, P., Luojus, K., Mortimer, C., Lemmetyinen, J., Pulliainen, J., Takala, M., Moisander, M. and Zschenderlein, L., 2023. "Implementing spatially and temporally varying snow densities into the GlobSnow snow water equivalent retrieval", The Cryosphere, 17(2), pp.719-736. https://doi.org/10.5194/tc-17-719-2023

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Radar measurements using WBSCAT for supporting multi-frequency snow water equivalent retrieval and GEO- and LEO SAR development

Authors: Juha Lemmetyinen, Jorge Jorge Ruiz, Thomas Nagler, Charles Werner, Anna Kontu, Julia Kubanek
Affiliations: Finnish Meteorological Institute, ENVEO IT GmbH, GAMMA Remote Sensing AG, ESA ESTEC
Seasonal snow cover is a dynamic and unpredictable part of the terrestrial cryosphere, showing considerable year-to-year variations in its extent and duration (Brown et al., 2017). Around one-sixth of the global population depends on seasonal snow as their primary source of freshwater (Barnett et al., 2005). The amount of water stored in snow is commonly expressed as Snow Water Equivalent (SWE), or snow mass. However, current methods for assessing SWE—using satellite sensors, ground-based networks, and Earth system models—provide insufficient accuracy with significant discrepancies and biases across approaches (Mudryk et al., 2015). This represents a critical knowledge gap, particularly as projected changes in temperature and precipitation in the northern hemisphere are expected to increase fluctuations in seasonal snow mass, jeopardizing its reliability as a freshwater resource (Mankin et al., 2015; Sturm et al., 2017). A promising method for monitoring SWE from space is based on an interferometric SAR (InSAR) technique from repeat-pass orbits (Guneriussen et al., 2001). The approach has been demonstrated to provide SWE estimates using ground observations (Leinss et al., 2015), airborne campaigns (Hoppinen et al., 2024) and recently, space-borne SAR (Jorge Ruiz et al., 2024; Oveisgharan et al., 2024). Several upcoming satellite missions will dramatically increase the availability of suitable, low-frequency SAR observations which are optimal for the method; these missions include the NASA-ISRO Synthetic Aperture Radar (NISAR) and the Radar Observing System for Europe at L-band (ROSE-L). Moreover, Hydroterra+, a candidate mission for the European Space Agency’s 12th Earth Explorer opportunity, proposes to use a SAR in geosynchronous orbit to observe the water cycle, including snow cover. The high temporal frequency of SAR imaging enabled by the orbit configuration makes observations of SWE possible even at the applied C band. 
Together with ROSE-L and NISAR, Hydroterra+ presents unique new opportunities for monitoring snow cover parameters, including SWE, making use of advantages presented by different wavelengths (Belinska et al., 2024). In order to study the opportunities presented by multi-frequency SAR observations, as well as the high temporal frequency provided by a mission such as Hydroterra+, the ESA Wide-Band Scatterometer (WBSCAT) ground-based radar system (Werner et al., 2019) has been deployed in Sodankylä, Finland, to observe snow cover signatures for three winter seasons. WBSCAT is a polarimetric scatterometer operating at 1-40 GHz, installed on a pan/tilt positioner which provides the capability to obtain measurements from a range of angles in azimuth and elevation. In Sodankylä, the instrument has been deployed on a platform at a height of 5 m overlooking a boreal wetland (fen). The wetland typically freezes over the winter, presenting a relatively smooth subnivean surface. Snow depth at the site is typically up to 80 cm in midwinter. Radar measurements are supported by instruments collecting ancillary data on snow depth, SWE, snow and ground temperature profiles, and weather conditions. Automated measurements are complemented by weekly manual snow profile observations, which measure the stratigraphy, density and temperature profiles, grain size and grain type, liquid water content, and the specific surface area (SSA) of snow. In addition, a suite of passive microwave radiometers operating at 1.4, 10.65, 18.7, 21, 36.5, 89 and 150 GHz are operated at the site, observing the same area as WBSCAT. 
In this study, we present initial results from the WBSCAT campaign in Sodankylä, including an analysis of:
- radar coherence conservation versus environmental factors (snow accumulation and redistribution, snow melt events) at X-, C- and L-band, analyzing the impact on InSAR SWE retrieval at different frequencies and frequency combinations;
- C-band SAR image quality for long-term focusing intervals during variable surface conditions such as snowfall, snow melt and freeze;
- the response of radar backscatter at L- to Ka-band to changing physical snow parameters (snow melt events and liquid water content, snow height, SWE, changing ground conditions, etc.);
- radar signatures against passive microwave observations at different wavelengths.

References:
Barnett, T.P., Adam, J.C., Lettenmaier, D.P., 2005. Potential impacts of a warming climate on water availability in snow-dominated regions. Nature 438, 303–309. https://doi.org/10.1038/nature04141
Belinska, K., Fischer, G., Parrella, G., Hajnsek, I., 2024. The Potential of Multifrequency Spaceborne DInSAR Measurements for the Retrieval of Snow Water Equivalent. IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing 17, 2950–2962. https://doi.org/10.1109/JSTARS.2023.3345139
Brown, R., Schuler, D., Bulygina, O., Derksen, C., Luojus, K., Mudryk, L., Wang, L., Yang, D., 2017. Arctic terrestrial snow cover, in: Snow, Water, Ice and Permafrost in the Arctic (SWIPA) 2017. Arctic Monitoring and Assessment Programme, Oslo, Norway.
Guneriussen, T., Hogda, K.A., Johnsen, H., Lauknes, I., 2001. InSAR for estimation of changes in snow water equivalent of dry snow. IEEE Trans. Geosci. Remote Sensing 39, 2101–2108. https://doi.org/10.1109/36.957273
Hoppinen, Z., Oveisgharan, S., Marshall, H.-P., Mower, R., Elder, K., Vuyovich, C., 2024. Snow water equivalent retrieval over Idaho – Part 2: Using L-band UAVSAR repeat-pass interferometry. The Cryosphere 18, 575–592. https://doi.org/10.5194/tc-18-575-2024
Jorge Ruiz, J., Merkouriadi, I., Lemmetyinen, J., Cohen, J., Kontu, A., Nagler, T., Pulliainen, J., Praks, J., 2024. Comparing InSAR Snow Water Equivalent Retrieval Using ALOS2 With In Situ Observations and SnowModel Over the Boreal Forest Area. IEEE Trans. Geosci. Remote Sensing 62, 1–14. https://doi.org/10.1109/TGRS.2024.3439855
Leinss, S., Wiesmann, A., Lemmetyinen, J., Hajnsek, I., 2015. Snow Water Equivalent of Dry Snow Measured by Differential Interferometry. IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing 8, 3773–3790. https://doi.org/10.1109/JSTARS.2015.2432031
Mankin, J.S., Viviroli, D., Singh, D., Hoekstra, A.Y., Diffenbaugh, N.S., 2015. The potential for snow to supply human water demand in the present and future. Environ. Res. Lett. 10, 114016. https://doi.org/10.1088/1748-9326/10/11/114016
Mudryk, L.R., Derksen, C., Kushner, P.J., Brown, R., 2015. Characterization of Northern Hemisphere Snow Water Equivalent Datasets, 1981–2010. Journal of Climate 28, 8037–8051. https://doi.org/10.1175/JCLI-D-15-0229.1
Oveisgharan, S., Zinke, R., Hoppinen, Z., Marshall, H.P., 2024. Snow water equivalent retrieval over Idaho – Part 1: Using Sentinel-1 repeat-pass interferometry. The Cryosphere 18, 559–574. https://doi.org/10.5194/tc-18-559-2024
Sturm, M., Goldstein, M.A., Parr, C., 2017. Water and life from snow: A trillion dollar science question. Water Resources Research 53, 3534–3544. https://doi.org/10.1002/2017WR020840
Werner, C., Suess, M., Wegmuller, U., Frey, O., Wiesmann, A., 2019. The Esa Wideband Microwave Scatterometer (Wbscat): Design and Implementation, in: IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. IEEE, Yokohama, Japan, pp. 8339–8342. https://doi.org/10.1109/IGARSS.2019.8900459
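The InSAR SWE sensitivity underlying the retrievals discussed in this abstract can be illustrated with the first-order dry-snow phase model of Guneriussen et al. (2001), cited above; the wavelengths and incidence angle below are assumed example values:

```python
import math

# First-order dry-snow phase-to-SWE relation (Guneriussen et al., 2001):
#   dphi = -2 * (2*pi/lam) * (1.59 + theta**2.5) * dSWE,  theta in radians.
def swe_per_fringe(lam_m, theta_deg):
    """SWE change [m of water] corresponding to one 2*pi interferometric fringe."""
    theta = math.radians(theta_deg)
    return lam_m / (2.0 * (1.59 + theta ** 2.5))

# Illustrative sensitivities at an assumed 35 deg incidence angle:
c_band = swe_per_fringe(0.0555, 35.0)   # C band (e.g. Sentinel-1): ~15 mm SWE per fringe
l_band = swe_per_fringe(0.2384, 35.0)   # L band (e.g. ALOS-2):     ~63 mm SWE per fringe
```

The longer L-band wavelength yields more SWE change per fringe, which is why low-frequency missions such as NISAR and ROSE-L are attractive for this method: deeper accumulation can be tracked before phase ambiguities occur.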

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: 30-years (1991-2021) Snow Water Equivalent Dataset in the Po River District, Italy through EO images, in-situ data and physical modeling

Authors: Matteo Dall'Amico, Stefano Tasin, Federico Di Paolo, Marco Brian, Paolo Leoni, Francesco Tornatore, Giuseppe Formetta, John Mohd Wani, Riccardo Rigon, Gaia Roati
Affiliations: Waterjade Srl, Po River Basin District Authority, Department of Civil, Environmental and Mechanical Engineering - University of Trento, C3A - Center Agriculture Food Environment - University of Trento
Snow is a critical component of the mountain cryosphere, playing a significant role in shaping hydrology and climate dynamics. As an essential interface between the Earth’s surface and the atmosphere, snow influences other cryospheric elements such as glaciers and permafrost. The snowpack functions as a vital water reservoir, accumulating during the winter and gradually releasing water during the melt season, thereby sustaining downstream water demands. However, snow is highly sensitive to climate change, particularly in low- and mid-elevation mountain regions. Changes in snow occurrence, melt timing, and variability can directly affect water availability in snow-fed basins, with significant implications for both ecosystems and human populations. In Europe, most river basins originate in the Alps, often referred to as the “water tower of Europe”. In the Alps, snow is an important cryospheric component, playing an essential role in meeting the agricultural, domestic and industrial water needs of the lowlands. Generally, the amount of water stored in a snowpack is defined in terms of snow water equivalent (SWE), i.e., the equivalent amount of water that would result from melting the entire snowpack. We present a long-term SWE dataset for the Po River District, Italy, spanning 1991 to 2021 at a daily time step and 500 m spatial resolution, partially covering the mountain ranges of the Alps and Apennines. The Po river basin, the largest in Italy, is considered the second most sensitive area in Europe after the Rhone river basin, and it has been exposed to severe drought in recent years. The dataset has been generated using a hybrid modelling approach integrating the physically-based GEOtop model, preprocessing of the meteorological data, and assimilation of in-situ snow measurements and Earth Observation (EO) snow products to enhance the quality of the model estimates.
In particular, EO-retrieved Snow Cover Area (SCA) maps are used to correct the melting rate in the physical model, in order to correctly retrieve the SWE variations, especially during the melting season. A rigorous quality assessment of the dataset has been performed at different control points selected on the basis of reliability, quality, and territorial distribution. The comparison between simulated and observed snow depth across control points shows the accuracy of the dataset in simulating both normal and relatively high snow conditions. Additionally, satellite snow cover maps have been compared with simulated snow depth maps as a function of elevation and aspect. 2D validation shows accurate values over time and space, expressed in terms of the snowline along the cardinal directions. This dataset, the longest time series of coherent SWE cartography at daily aggregation in Italy, fills an important gap in the scientific understanding of the hydrology of the area and can be used for hydrological and climatological purposes such as drought characterization in mid-latitude Mediterranean areas.
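A toy illustration of the kind of SCA-based constraint described above: where the EO snow-cover map shows bare ground, residual modeled snow is removed. The simple zeroing rule and the array names are assumptions for illustration only; the actual melt-rate correction in the GEOtop-based system is more involved:

```python
import numpy as np

def apply_sca_correction(swe_model, sca_mask):
    """Force modeled SWE to respect an EO snow-cover mask (1 = snow, 0 = bare)."""
    swe = swe_model.copy()
    swe[sca_mask == 0] = 0.0   # satellite sees bare ground: melt out residual snow
    return swe

swe_model = np.array([[30.0, 5.0], [0.0, 12.0]])   # modeled SWE [kg/m^2]
sca_mask = np.array([[1, 0], [0, 1]])              # EO snow-cover area map
swe_corrected = apply_sca_correction(swe_model, sca_mask)
```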

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: A.05.01 - POSTER - Using earth observation to assess climate change in cities

The Intergovernmental Panel for Climate Change (IPCC) Sixth Assessment report concluded that "Evidence from urban and rural settlements is unequivocal; climate impacts are felt disproportionately in urban communities, with the most economically and socially marginalised being most affected (high confidence)." (IPCC, WG2, Chapter 6)

In its Seventh Assessment Cycle, the IPCC will produce a Special Report on Climate Change and Cities to further develop the role of climate and its interactions with the urban environment. The report will cover topics that include:
- Biophysical climate changes;
- Impacts and risks, including losses and damages and compounding and cascading aspects;
- Sectoral development, adaptation, mitigation and responses to losses and damages;
- Energy and emissions;
- Governance, policy, institutions, planning and finance; and
- Civil society aspects.

This session calls for abstracts demonstrating how Earth Observation is being used to understand how climate change is impacting cities, and how EO can be used to adapt to and mitigate further climate change at the city scale. Abstracts should explicitly link the use of EO data to an assessment of its usefulness for small-scale urban/city information.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: CLIM4cities: from Citizen Science, Machine Learning and Earth Observation towards Urban Climate Services

Authors: Dr. Ana Oliveira, Mark Payne, Manvel Khudinyan, João Paixão, Inês Girão, Rita Cunha, Bruno Marques, Maria Castro, Élio Pereira, Caio Fonteles, Peter Thejll, Irene Livia Kruse, Hjalte Sørup, Erika Hayashi, Rune Zeitzen, Kasper Stener, Chiara Bearzotti, Sebastian
Affiliations: CoLAB +Atlantic, DMI
As climate change prospects point towards the pressing need for local adaptation strategies, exposure to extreme weather events becomes one of the most important aspects in determining our society’s resilience in the future. Globally, we are already experiencing changing patterns of exposure to certain types of extremes (e.g., wildfires in high latitudes, droughts in midlatitudes, flash floods in riverine and coastal areas); and, at the European level, recent historical weather measurements are already showing a changing climate, where heatwaves (HW) are becoming longer, more frequent and intense, while cold waves (CW) show only minor or non-significant changes. This increase in temperature amplitude is a major challenge for our highly urbanised and ageing society, as the health and energy sectors are deeply affected by air temperature conditions. At the local level, these conditions are strongly influenced by the energy exchanges between the lower atmosphere and our strongly modified urban surfaces. Indeed, such extremes can lead to significant impacts, such as excess mortality/morbidity and unmet peak electricity demand. To address these challenges, CLIM4cities - a European Space Agency (ESA)-funded project under the call for Artificial Intelligence (AI) Trustworthy Applications for Climate - aims to pioneer the development of Machine Learning (ML) and Artificial Intelligence (AI) models designed to downscale air and land surface temperature predictions in urban areas. This initiative serves as a preliminary step towards the implementation of cost-effective Integrated Urban Climate and Weather components into local Digital Twin Systems. By leveraging crowdsourced data obtained from citizen-owned weather stations, Earth Observation and weather forecasting models, we offer spatio-temporal data fusion models that can solve the unmet need for a low-cost, efficient and scalable Urban Climate prediction system.
To achieve this, CLIM4cities has tailored its solution to the requirements of local early adopters, who state the need for tools that offer both early-warning weather forecast capabilities and scenario-making capabilities to evaluate climate adaptation measures, namely the impact of blue-green infrastructures on the Urban Heat Island effect. Currently, CLIM4cities has already developed the first version of its coupled ML-based near-surface Air Temperature (henceforth, T2m) and Land Surface Temperature (LST) downscaling models, targeting four metropolitan areas in Denmark, proving the concept’s reliability and scalability to other urban regions. For the LST results, Sentinel-3 LST and Synergy (NDVI) products were used to train several data-driven models, and performances were compared with Landsat 8/9 for unbiased validation purposes. To improve the accuracy and precision of LST downscaling, we combined ML techniques with disaggregation algorithms (DisTrad) to create a Non-Linear DisTrad (henceforth NL-DisTrad) method, in order to capture spatial and vegetation-related temperature patterns. This hybrid approach enhances the downscaling process by enabling the non-linear modelling of relationships between LST and auxiliary variables such as NDVI, while retaining DisTrad’s structured spatial disaggregation. The results show that model performance varies with season - R² was 0.67, 0.51 and 0.56 in summer, spring and autumn, respectively. Concerning the T2m results, based on the evaluation metrics for both the time and space fine-tuning datasets, the Random Forest (RF) model also achieved the best results. In particular, overall performance was compared to that during HW and CW days, and a sensitivity analysis was conducted on the hyperparameters. The best results were achieved with a maximum RF depth of 50, giving an overall R² of 0.98, which is maintained during HW but reduced to 0.97 during CW events.
In terms of error metrics, the MAE was 0.74K, 0.63K and 0.81K (overall, HW and CW subsets, respectively) denoting the model’s good performance during extreme heat conditions. Together, these results offer not only an improved level of spatial detail, but also enhance the accuracy of local measures of T2m and LST, compared to the lower resolution inputs. In the next steps, CLIM4cities will demonstrate how this concept can be transformed into a scalable solution for European metropolitan areas, by using Danish case studies to pave the way for a Proof of Concept Application featuring downscaled short-term predictions and climate change scenarios. This Proof of Concept framework builds on previous pilot implementations in Lisbon and Naples, providing a strong foundation for future development as a tool for local authorities to: (i) identify short-term critical areas of the city during heat- and coldwave events, (ii) test the performance of urban development scenarios in response to climate change and urban spatial planning policies, and (iii) translate essential climate variables into impact indicators relevant to the health and energy sectors. Overall, CLIM4cities contributes to scientific advances and user uptake of Earth Observation data. We are adding to international efforts on addressing key urban climate monitoring and local climate change adaptation challenges by bringing the Earth Observation and crowdsourced-based Machine Learning models closer to user requirements for local Digital Twin-ready operational climate and weather monitoring services.
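The evaluation metrics quoted above (R² and MAE) can be reproduced with a few lines of NumPy; the temperature arrays below are synthetic placeholders, not project data:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the inputs (here kelvin)."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - residual / total sum of squares."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

obs = np.array([271.2, 273.5, 276.1, 280.4, 284.9])    # station T2m [K], synthetic
pred = np.array([271.5, 273.1, 276.4, 280.1, 285.2])   # downscaled T2m [K], synthetic
```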

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: UpGreen: EO-based Urban Green Assessment, Prediction and Vision

Authors: M.Sc. DAMIAN HRUBAN, M.A. Jan Labohý, Ph.D. MIKULAS MURON, Ph.D. LENKA FOLTYNOVA, M.A. MARTIN VOKRAL
Affiliations: World from Space, ASITIS, Atregia
World from Space is developing UpGreen, a service that accurately assesses, predicts, and proposes urban green infrastructure using advanced Earth Observation and geospatial methods. Currently, it is being piloted over Copenhagen and Lisbon. A comprehensive understanding of the dynamics of urban greenery is ensured by using multi-sensor, multi-resolution, and multi-temporal approaches. The service comprises three interdependent modules: UpGreen Assessment, UpGreen Prediction, and UpGreen Vision. The UpGreen service is based on open, city-owned and commercial EO and non-EO data. Temporal series of PlanetScope and Landsat 8/9 are processed in a data cube: cloud masking, co-registration, and spectral index computation. These time series are cleaned via drop handling and smoothing methods. Incorporated open data include OpenStreetMap, CMIP6 and ERA5, and Global Canopy Height data. City data include administrative divisions, RGBI aerial imagery, socio-economic data, elevation models and tree inventories for validation. UpGreen Assessment uses the introduced data to provide detailed delineation and segmentation of greenery segments, and performs allocation of attributes and ecosystem services of urban green spaces. The information gathered per green segment includes, for instance: greenery productivity state and trend, height, biomass, cooling effect, connectivity, accessibility for citizens, carbon sequestration, shaded area cast and others. UpGreen Prediction applies advanced trend analyses and arborist-based causal relationships to vast amounts of EO and other data to forecast future scenarios for urban green 3 years ahead and for the 2035 and 2050 horizons. UpGreen Vision provides actionable insights for optimal urban green planning based on the city's preferred ecosystem service targets. The recommendations include suggesting the most effective distribution and quantity of tree placements to maximize environmental and socio-economic benefits.
The service is a technical response to domain requirements gathered in the preceding ESA Feasibility Study. In summary, those are (1) holistic understanding and strategic planning of urban green, (2) trend analysis and forecasting of urban green health, and (3) data interoperability for better stakeholder engagement. The UpGreen demonstration pilot is currently being developed within an ESA project: Development and Verification of Urban Analytics (4000143727/24/I-DT). It has been validated against tree inventory ground-truth data collected by arborists (consortium partner Atregia) and the Department of Xylogenesis and Biomass Allocation (Czech Global Change Research Institute). Beyond that, a comparative analysis against 360° ground panorama views from multiple time slices has been successfully performed. The core metrics, the productivity state and productivity trend indicators, capture the overall productivity/photosynthetic activity of urban greenery, reflecting greenness, physiological age, dimensions, stress and vitality, assessed through spectral-based phenology metrics derived from satellite data. The prototype, visualised in the user interface on a sample AOI within Copenhagen, can be found under a non-public link: http://wfs-upgreen-website.s3-website.eu-central-1.amazonaws.com/ The business model with go-to-market activities and first partnerships is already set up, and the fully operational commercial product-as-a-service is scheduled to be completed after the end of the project. A consortium partner, ASITIS, will be UpGreen's product manager. More information on the ESA project and designed product: https://business.esa.int/projects/upgreen UpGreen will assist cities in making informed decisions towards sustainable urban development by enhancing ecosystem services, urban resilience, and citizen well-being through efficient nature-based solutions.
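The "drop handling and smoothing" of the spectral index time series mentioned above can be sketched as follows; the window length, the interpolation-based gap filling and the sample values are illustrative assumptions, not the UpGreen configuration:

```python
import numpy as np

def clean_ndvi(series, window=3):
    """Interpolate over dropped (NaN) samples, then apply a moving average
    (edges are implicitly zero-padded by np.convolve)."""
    s = np.asarray(series, dtype=float)
    idx = np.arange(s.size)
    valid = ~np.isnan(s)
    filled = np.interp(idx, idx[valid], s[valid])      # drop handling
    kernel = np.ones(window) / window
    return np.convolve(filled, kernel, mode="same")    # smoothing

# A short synthetic series with one cloud-masked (NaN) observation:
ndvi = [0.42, 0.45, np.nan, 0.51, 0.49, 0.55, 0.58]
smooth = clean_ndvi(ndvi)
```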
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: T4 version of intelligent space-borne data-fusion for Smart Cities governance

Authors: Paolo Caporossi, Gerardo Ferraioli, Ing. Stefano Cruciani, Prof. Lorenzo Lipparini, Giovanni Quacquarelli
Affiliations: Titan4 S.r.l., Department of Earth Science, University of Roma Tre
Metropolitan cities are at the forefront of the global shift toward Smart Cities, leveraging cutting-edge technologies to enhance sustainability, resilience, and competitiveness (Kucherov et al., 2017; Josipovic & Viergutz, 2023). Within this context, the Space Economy plays a pivotal role by integrating satellite-derived data with advanced digital technologies to mitigate natural hazards impacting critical infrastructures such as transportation networks (roads and railways), energy grids, and water systems. These systems are increasingly vulnerable to geohazards, including landslides, floods, subsidence, and heatwaves - phenomena exacerbated by the ongoing effects of climate change. This study presents a comprehensive methodology that combines satellite data (radar, multispectral optical, infrared (IR), and atmospheric sensors) and field observations, using advanced technologies such as Artificial Intelligence (AI) and Data Meshing within a unified operational ecosystem. This integrative approach represents a vital step toward the efficient management and sustainable maintenance of urban infrastructures. Radar satellites, such as Sentinel-1, provide millimetric precision in monitoring ground and structural deformations via Interferometric Synthetic Aperture Radar (InSAR), allowing for the early identification of geotechnical and structural risks. Multispectral optical sensors, such as Sentinel-2, deliver critical insights into land cover, permeability levels, and vegetation, supporting hydrogeological risk mitigation. Infrared sensors, including Landsat and MODIS, detect thermal anomalies indicative of urban heat islands, water leakage, and energy network irregularities. Atmospheric monitoring satellites like Sentinel-5P offer precise data on pollutants (e.g., NO2, CO, O3), essential for environmental mitigation strategies. 
The integration of satellite data with field measurements—including geotechnical surveys, hydrometeorological data, and structural information—further enriches the analytical framework. Additionally, Internet of Things (IoT) devices, equipped with distributed sensors, provide real-time data on atmospheric, hydrological, and infrastructural conditions, significantly enhancing situational awareness. These heterogeneous datasets, increasingly available through open-source platforms and at high temporal and spatial resolutions, are integrated in this study within Geographic Information Systems (GIS) platforms. Advanced machine learning algorithms enable the analysis and synthesis of these data, generating predictive models and detailed risk assessments. This research employed custom-developed software, utilizing Python-based scripts to ensure full interoperability across diverse systems and seamless data exchange between devices. A dedicated monitoring dashboard was designed to consolidate sensor data, processing outputs, and risk analysis into an intuitive and actionable interface, facilitating operational planning and decision-making. The proposed approach can support a substantial improvement in the capacity for monitoring and managing metropolitan areas, potentially enabling near real-time assessment of natural hazard impacts on critical infrastructure. By leveraging AI-driven analysis and integrated visualization tools, this approach appears able to provide actionable insights, supporting the prioritization of interventions and the optimization of resource allocation. This framework can also empower urban administrations and infrastructure managers to implement targeted interventions, enhance resource efficiency, and promote resilience and sustainability, aligning with global objectives for sustainable development and climate change mitigation. 
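As an illustration of the kind of data fusion such a dashboard consolidates, the following sketch combines several normalised hazard-indicator grids into a single composite risk index; the layer names and simple weighting scheme are hypothetical, not the study's actual AI models:

```python
import numpy as np

def composite_risk(layers, weights):
    """Combine hazard-indicator grids (hypothetical examples: InSAR
    deformation rate, thermal anomaly) into one composite risk index in
    [0, 1]: min-max normalise each layer, then take a weighted average.
    Sketch only; the study's actual models are not public."""
    first = np.asarray(next(iter(layers.values())), dtype=float)
    total = np.zeros_like(first)
    wsum = float(sum(weights.values()))
    for name, grid in layers.items():
        g = np.asarray(grid, dtype=float)
        rng = g.max() - g.min()
        norm = (g - g.min()) / rng if rng > 0 else np.zeros_like(g)
        total += weights[name] * norm
    return total / wsum
```

A real system would learn the weights (or a nonlinear combination) from labelled hazard events rather than fixing them by hand.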
The findings highlight the importance of integrating Space Economy resources with digital innovation to advance the resilience and sustainability of Smart Cities.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Study of Erosion in Oil Extraction Fields Based on Interferometric Techniques - The Case of the Ghawar Oil Field (Saudi Arabia)

Authors: Joana Vines
Affiliations: Tecnico Lisboa
The world's largest oil reservoir, Ghawar, stretches across a vast area of about 280,000 square kilometers in Saudi Arabia's Eastern Province. With an estimated 75 billion barrels of recoverable oil reserves, it is a major source of energy. Since production began in 1951, Ghawar has been a crucial player in the global energy market. Its complex reservoir structure, with multiple oil-bearing layers extending from 1,000 meters to an impressive 6,000 meters, presents both challenges and opportunities for oil exploration and extraction. In the Ghawar Field, waterflooding is the main approach for extracting oil: over a million barrels of water are pumped into the reservoir every day to force the oil towards extraction wells. Since this is an extremely arid, desert area, the water brought in artificially may be influencing the local rate of erosion. On the other hand, the permanence of this production field and its workplaces along the reservoir may be reinforcing the development of neighbouring conurbations. To assess the intensity and significance of the erosion processes in this area, Sentinel-1 imagery will be leveraged. Using the SNAP tool, annual Digital Elevation Models (DEMs) will be derived from Synthetic Aperture Radar (SAR) images acquired between 2015 and 2024 through interferogram generation. This approach will allow for a detailed examination of the land-profile evolution, providing critical insights into changes in topography and the progression of erosion over nearly a decade. Sentinel-2 imagery is used to study the urban development of Al Hufüf, the metropolis closest to Ghawar. This study seeks to examine the growth dynamics and land-use transformations in Al Hufüf using high-resolution multispectral data from the Sentinel-2 satellite. 
The study employs remote sensing classification techniques, such as Support Vector Machines, to classify and monitor urban land cover, ensuring precise identification of urban features such as roads, buildings, vegetation, and barren land. The application of the Normalized Difference Vegetation Index (NDVI) will aid in differentiating vegetated areas from non-vegetated ones, offering valuable insights into the city's green spaces.
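The NDVI mentioned above is a simple band ratio; a minimal sketch using Sentinel-2 band conventions (B8 = near-infrared, B4 = red) might look like this (illustrative, not the study's own code):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from reflectance arrays
    (Sentinel-2: B8 = NIR, B4 = red). Returns 0 where both bands are 0."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)  # avoid division by zero
    return np.where(denom == 0, 0.0, (nir - red) / safe)
```

Dense vegetation gives values near +1, while bare soil and built-up surfaces fall near zero or slightly below, which is what makes the index useful for separating green spaces from other urban land cover.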
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Predictability of the Summer 2022 Yangtze River Valley Heatwave in Multiple Seasonal Forecast Systems

Authors: Prof. Lijuan Chen
Affiliations: China Meteorological Administration Key Laboratory for Climate Prediction Studies, National Climate Centre, CMA, Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters (CIC-FEMD), Nanjing University of Information Science and Technology
Predictability of the Summer 2022 Yangtze River Valley Heatwave in Multiple Seasonal Forecast Systems
Jinqing ZUO¹², Jianshuang CAO³⁴, Lijuan CHEN*¹², Yu NIE¹², Daquan ZHANG¹², Adam A. SCAIFE⁵⁶, Nick J. DUNSTONE⁵, and Steven C. HARDIMAN⁵
1 China Meteorological Administration Key Laboratory for Climate Prediction Studies, National Climate Centre, Beijing 100081, China
2 Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters (CIC-FEMD), Nanjing University of Information Science and Technology, Nanjing 210044, China
3 Nansen-Zhu International Research Centre, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China
4 University of Chinese Academy of Sciences, Beijing 100049, China
5 Met Office Hadley Centre, Exeter EX1 3PB, United Kingdom
6 University of Exeter, Exeter EX4 4QF, United Kingdom
ABSTRACT: The Yangtze River Valley (YRV) includes large and medium-sized cities in China, and meteorological disasters in the region have a major impact on economic development and people's lives. In July and August 2022, the region experienced record-breaking heatwaves. The characteristics, causes, and impacts of this extreme event have been widely explored, but its seasonal predictability remains elusive. This study assessed the real-time one-month-lead prediction skill of the summer 2022 YRV heatwaves using 12 operational seasonal forecast systems. Results indicate that most individual forecast systems and their multi-model ensemble (MME) mean exhibited limited skill in predicting the 2022 YRV heatwaves. Notably, after the removal of the linear trend, the predicted 2-m air temperature anomalies were generally negative in the YRV, except for the Met Office GloSea6 system, which captured a moderate warm anomaly. 
While the models successfully simulated the influence of La Niña on the East Asian–western North Pacific atmospheric circulation and associated YRV temperature anomalies, only GloSea6 reasonably captured the observed relationship between the YRV heatwaves and an atmospheric teleconnection extending from the North Atlantic to the Eurasian mid-to-high latitudes. Such an atmospheric teleconnection plays a crucial role in intensifying the YRV heatwaves. In contrast, other seasonal forecast systems and the MME predicted a distinctly different atmospheric circulation pattern, particularly over the Eurasian mid-to-high latitudes, and failed to reproduce the observed relationship between the YRV heatwaves and Eurasian mid-to-high latitude atmospheric circulation anomalies. These findings underscore the importance of accurately representing the Eurasian mid-to-high latitude atmospheric teleconnection for successful YRV heatwave prediction.
Key words: the summer 2022 YRV heatwaves; real-time prediction skill; operational seasonal forecast systems; Eurasian mid-to-high latitude teleconnection
Article Highlights: (1) Most models predicted cold anomalies after removing the long-term trend for the 2022 record-breaking heatwaves in the YRV. (2) GloSea6 stands out as the only model predicting a moderate warm anomaly after removing the linear warming trend. (3) The underestimated warm anomalies are linked to the deficiency of the models in simulating the relation between YRV heatwaves and the Eurasian teleconnection.
*presenter & corresponding author, chenlj@cma.gov.cn
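The detrending step the abstract relies on (judging whether a predicted anomaly is warm or cold after removing the linear trend) can be sketched as follows; this is a generic least-squares detrend, not the study's processing chain:

```python
import numpy as np

def detrended_anomaly(series):
    """Remove a least-squares linear trend from a yearly series and
    return the residuals, so that a warm/cold anomaly can be judged
    independently of the long-term warming trend. Generic sketch."""
    y = np.asarray(series, dtype=float)
    x = np.arange(y.size)
    slope, intercept = np.polyfit(x, y, 1)  # fitted linear trend
    return y - (slope * x + intercept)      # detrended anomalies
```

A record year then shows up as a positive residual even when the raw series is dominated by a steady warming trend.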
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Projection of Precipitation and Temperature in Major Cities in Pakistan Using Multi-Model Ensembles

Authors: Fahad Shah
Affiliations: Graduate School of Smart Society Practical Sciences, Hiroshima University
This study evaluates future projections of variation in monthly precipitation and average temperature in major cities of Pakistan. Using 16 General Circulation Models (GCMs) from Coupled Model Intercomparison Project Phase 6 (CMIP6), the analysis constructs multi-model ensembles (MMEs) by selecting GCMs that best match observed historical data through an Artificial Neural Network (ANN) based statistical downscaling approach. The performance of these models was assessed using five statistical metrics: Correlation Coefficient, Nash–Sutcliffe Efficiency, Root Mean Squared Error, Kling–Gupta Efficiency, and the Modified Index of Agreement. The results show that MMEs outperform individual GCMs in simulating historical temperature and precipitation trends across the cities. Projections for 2024–2100, based on four Shared Socioeconomic Pathways (SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5), reveal a decline in annual precipitation by 39.22%, 48.79%, 36.27%, and 38.08%, respectively. In terms of temperature, maximum temperature is projected to rise by 5.95% (+1.85°C), 12.79% (+3.97°C), 9.86% (+3.06°C), and 16.22% (+5.04°C), while minimum temperature is projected to decrease by 4.25% (-0.76°C) and 0.74% (-0.13°C) under SSP1-2.6 and SSP2-4.5, respectively. However, under SSP3-7.0 and SSP5-8.5, the results show that minimum temperature is expected to increase by 0.20% (+0.04°C) and 7.26% (+1.30°C), respectively. The greatest potential for precipitation decline is seen in Islamabad, Multan, and Sialkot. At the same time, higher increases in maximum temperature are expected in high-altitude cities like Quetta and Peshawar compared to low-altitude areas. This study provides essential insights to help policymakers and stakeholders develop targeted strategies for addressing the impacts of climate change in cities.
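Two of the five evaluation metrics named above, the Nash–Sutcliffe Efficiency and the Kling–Gupta Efficiency, can be sketched in their standard textbook formulations (not the study's own code):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect match, 0 means the
    simulation is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency combining correlation r, variability
    ratio alpha, and bias ratio beta; 1 is a perfect match."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

Ranking GCMs by such metrics against observations is what allows the better-matching models to be selected for the multi-model ensemble.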
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: A.01.03 - POSTER - Fourier Transform Spectroscopy for Atmospheric Measurements

Fourier Transform Spectroscopy (FTS) is a powerful technique for atmospheric observations, allowing the Earth's and atmosphere's thermal radiation to be sampled with high spectral resolution. This spectral range carries profile information on many atmospheric gases (water vapour, carbon dioxide, nitrous oxide, methane, ammonia, nitric acid, ...), but also information on clouds (e.g. phase or liquid/ice water path) and aerosols (e.g. dust optical depth). Measurements have been performed from satellites (nadir and limb), from the ground, or from airborne platforms for several decades, and have recently come into the foreground in ESA's Earth Explorer (EE) programme with the EE9 FORUM mission and the EE11 candidate CAIRT, both aiming to fly in convoy with the FTS IASI-NG on MetOp-SG. The Infrared Sounder (IRS) will be launched on MTG-S1 in 2025. In addition, new airborne and ground-based instruments have become available with performance and versatility that allow for innovative research applications. This session invites presentations on:
- retrieval algorithms and methods for uncertainty quantification including calibration/validation techniques for existing and future missions,
- new spectrometer developments for field work and satellite applications.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Stratospheric and upper tropospheric measurements of long-lived tracers and photochemically active species of the nitrogen, chlorine, and bromine families with GLORIA-B

Authors: Gerald Wetzel, Sören Johansson, Felix Friedl-Vallon, Michael Höpfner, Jörn Ungermann, Tom Neubert, Valéry Catoire, Cyril Crevoisier, Andreas Engel, Thomas Gulde, Patrick Jacquet, Oliver Kirner, Anne Kleinert, Erik Kretschmer, Dr Johannes Laube, Guido Maucher, Hans Nordmeyer, Christoph Piesch, Peter Preusse, Markus Retzlaff, Tanja Schuck, Wolfgang Woiwode, Martin Riese, Peter Braesicke
Affiliations: Institute of Meteorology and Climate Research Atmospheric Trace Gases and Remote Sensing (IMKASF), Karlsruhe Institute of Technology, Institute of Climate and Energy Systems - Stratosphere (ICE-4), Forschungszentrum Jülich, Central Institute of Engineering, Electronics and Analytics - Electronic Systems (ZEA-2), Forschungszentrum Jülich, Laboratoire de Physique et Chimie de l’Environnement et de l’Espace (LPC2E/CNRS), Université Orléans, Laboratoire de Météorologie Dynamique, IPSL, CNRS, Institute for Atmospheric and Environmental Sciences, Goethe Universität, Scientific Computing Center (SCC), Karlsruhe Institute of Technology, Business Area Research and Development, Deutscher Wetterdienst
The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA) is a limb-imaging Fourier-Transform spectrometer (iFTS) providing mid-infrared spectra with high spectral sampling (0.0625 cm-1 in the wavenumber range 780-1400 cm-1). GLORIA is a demonstrator for the Changing-Atmosphere Infra-Red Tomography Explorer (CAIRT), one of the remaining two candidates for the ESA Earth Explorer 11 mission. A version of GLORIA dedicated to deployment on aircraft has been successfully flown on seven research campaigns up to 2023, with a further one planned for March 2025. In order to extend the vertical range of GLORIA to observations in the middle stratosphere while still reaching down to the middle troposphere, the instrument was adapted to measurements from stratospheric balloon platforms. GLORIA-B performed its first flight from Kiruna (northern Sweden) in August 2021 and its second flight from Timmins (Ontario/Canada) in August 2022 in the framework of the EU Research Infrastructure HEMERA. The objectives of GLORIA-B observations for these campaigns have been its technical qualification and the provision of a first imaging hyperspectral limb-emission dataset from 5 to 36 km altitude. Scientific objectives are discussed, which are, amongst many others, the diurnal evolution of photochemically active species belonging to the nitrogen (N₂O₅, NO₂), chlorine (ClONO₂), and bromine (BrONO₂) families and the retrieval of SF₆, an important molecule for determining the mean age of air. In this contribution we demonstrate the performance of GLORIA-B with regard to level-2 data of the flight in August 2021, consisting of retrieved altitude profiles of a variety of trace gases. We will show examples of selected results together with uncertainty estimations, altitude resolution as well as long-lived tracer comparisons to accompanying in-situ datasets. 
Combined error bars of the instruments involved were calculated in order to determine whether a detected difference between measurements of the instruments is significant or not. In addition, diurnal variations of photochemically active gases are compared to simulations of the chemistry climate model EMAC. Calculations largely reproduce the temporal variations of the species observed by GLORIA-B.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Observations of dichloromethane-rich air masses transported from the Asian summer monsoon region across the Pacific, Alaska and Canada

Authors: Wolfgang Woiwode, Sören Johansson, Jörn Ungermann, Markus Dick, Felix Friedl-Vallon, Norbert Glatthor, Jens-Uwe Grooß, Thomas Gulde, Michael Höpfner, Jan Kaumanns, Anne Kleinert, Erik Kretschmer, Valentin Lauther, Guido Maucher, Tom Neubert, Hans Nordmeyer, Christof Piesch, Felix Plöger, Peter Preusse, Markus Retzlaff, Sebastian Rhode, Heinz Rongen, Georg Schardt, Björn-Martin Sinnhuber, Johannes Strobel, Franziska Trinkl, Ronja van Luijt, Stefan Versick, Bärbel Vogel, Michael Volk, Gerald Wetzel, Peter Braesicke, Peter Hoor, Martin Riese
Affiliations: Institute of Meteorology and Climate Research - Atmospheric Trace Gases and Remote Sensing (IMK ASF), Karlsruhe Institute of Technology, Institute for Climate and Energy Systems, Stratosphere (ICE-4), Forschungszentrum Jülich, Central Institute of Engineering, Electronics and Analytics - Electronic Systems (ZEA-2), Forschungszentrum Jülich, Institute for Atmospheric and Environmental Research, University of Wuppertal, Institute for Atmospheric Physics, Johannes Gutenberg University
Dichloromethane (CH₂Cl₂) is known to be the most abundant chlorinated very short-lived substance (VSLS) in the atmosphere and is capable of delaying the recovery of the stratospheric ozone layer. Recent studies have shown that CH₂Cl₂ emissions in East Asia have been rising rapidly over the last decades. Here, we present unique 2-dimensional observations of the mesoscale structure of CH₂Cl₂-rich airmasses over the Pacific, Alaska and Canada that originated from the Asian summer monsoon (ASM) region. Observations by the infrared limb imager GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) aboard the German research aircraft HALO (High Altitude and LOng Range Research Aircraft) during the PHILEAS (Probing High Latitude Export of Air from the Asian Summer Monsoon) campaign document the size and structure of CH₂Cl₂-rich air masses that were transported over the Pacific in August and September 2023. High CH₂Cl₂ mixing ratios exceeding 450 pptv (~700% of the northern hemispheric background) are found far away from their anticipated source. Together with chemistry transport modelling, backward trajectories and in situ observations, the GLORIA observations provide new insights into the long-range transport of CH₂Cl₂-rich airmasses from the ASM region in the free troposphere and tropopause region and show indications of mixing with lowermost stratospheric air. The combined results underline the importance of monitoring CH₂Cl₂ emissions. The GLORIA instrument is an airborne demonstrator for the ESA Earth Explorer 11 candidate CAIRT (Changing-Atmosphere Infra-Red Tomography explorer), which is currently under study in Phase A. CAIRT would provide global and continuous observations of a multitude of trace gases, including chlorinated species, and thus be very helpful to investigate important factors associated with stratospheric ozone recovery and atmospheric composition changes in general.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: New Experimentally Derived Temperature-Dependent Refractive Index of Ice in the Infrared

Authors: Cecilia Taverna, Dr Jean-Blaise Brubach, Dr Marine Verseils, Dr Quentin Libois, Dr Laurent Labonnote, Dr Pascale Roy
Affiliations: Synchrotron Soleil, Laboratoire d’Optique Atmosphérique, Centre National de Recherches Météorologiques
The spectral distribution of the infrared energy emitted by the Earth is of utmost importance for understanding the planet's energy budget, especially under climate change. In this context, ice clouds are among the atmospheric constituents that have the largest impact on the infrared emission of Earth. To accurately estimate their radiative properties, detailed knowledge of the ice complex refractive index (CRI) is required, as it fundamentally controls cloud-radiation interactions. However, the ice CRI is still poorly known, especially in the far-infrared (FIR, 100-700 cm-1) and at temperatures relevant for ice clouds (170-270 K), for which the CRI variation is expected to be significant. This lack of data over a critical temperature range is due to experimental issues. Indeed, most previous experimental determinations of the ice CRI used direct condensation of water vapor on a cold plate (around 100 K) to form an ice film, making it impossible to reach temperatures above 195 K because ice sublimates at that level of vacuum (around 10-6 mbar). To overcome the technical limits of previous studies, we developed a dedicated cell allowing the formation of ice films from 10 K up to 270 K. The cell consists of a copper body with a central hole in which two diamond windows are placed, separated by a polypropylene spacer whose thickness defines the thickness of the sample (typically from 0.5 microns to 50 microns). In this configuration the sample stays at ambient pressure during the entire experiment, allowing us to overcome the sublimation temperature limit. Furthermore, the copper body of the cell guarantees good thermal conductivity, and the diamond windows are transparent from the far-infrared up to the visible range. The latter allows the sample thickness to be determined in situ from interference fringes, by shining a UV-visible lamp on the sample. 
This new cell, combined with the synchrotron radiation of the AILES beamline at the SOLEIL synchrotron facility, allows us to obtain transmission spectra across the full infrared range (20-10000 cm-1) with optimized signal-to-noise ratio. Furthermore, with our setup we can also directly measure the fundamental parameters of the experiment (the thickness and temperature of the ice film). Finally, we implemented a retrieval algorithm to estimate the CRI of ice from these transmission measurements, accounting for the complexity of the multi-layer system used. We were thus able to obtain, for the first time from experimental data, the refractive index of water ice across the full infrared in the temperature range 150-270 K. In this presentation, we will display the new optical indices over a wide temperature range and compare them with the data available in the literature.
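The in-situ thickness determination from interference fringes rests on a standard relation: for a plane-parallel film at normal incidence, the fringe spacing in wavenumbers is inversely proportional to the optical thickness. A minimal sketch (illustrative only; the actual UV-visible setup may differ in detail):

```python
def film_thickness_cm(fringe_spacing, refractive_index):
    """Thickness of a plane-parallel film from the spacing of channel
    (interference) fringes at normal incidence:
        t = 1 / (2 * n * delta_nu)
    with delta_nu the fringe spacing in cm^-1 and t in cm. Illustrative
    only; the actual in-situ determination may differ in detail."""
    return 1.0 / (2.0 * refractive_index * fringe_spacing)
```

For example, with an assumed visible-range index of ice near 1.31, a fringe spacing of 500 cm^-1 corresponds to a film a few microns thick, consistent with the 0.5-50 micron spacers mentioned above.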
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Independent Performance Validation of the Instrument Simulator Model of CAIRT’s End to End Performance Simulator

Authors: Jonathan Flunger, Felix Friedl-Vallon, Alex Hoffmann, Anne Kleinert, Valère Mazeau
Affiliations: EOP Climate Action, Sustainability and Science Department, European Space Agency (ESA/ESTEC), EOP Future Missions and Architecture Department, European Space Agency (ESA/ESTEC), Institute of Meteorology and Climate Research, Karlsruhe Institute of Technology (KIT)
The Changing-Atmosphere Infra-Red Tomography Explorer (CAIRT) is one of two candidate missions from which ESA’s Earth Explorer 11 will be selected for implementation in September 2025. CAIRT’s overarching science goal is to reveal, resolve, and unravel the complex coupling between composition, circulation, and climate – from the mid-troposphere to the lower thermosphere. To achieve this, CAIRT carries a hyperspectral infrared limb imaging Fourier Transform Spectrometer (FTS) that will observe Earth’s limb simultaneously between 5 and 115 kilometres of altitude, with unprecedented horizontal and vertical sampling. With this innovative instrument, CAIRT will produce a unique three-dimensional dataset of numerous trace gases, aerosols, and temperature, that will greatly improve our understanding of atmospheric gravity waves, circulation and mixing; the coupling with the upper atmosphere; the impacts of solar variability and space weather; and aerosols and pollutants in the upper troposphere and lower stratosphere. The evaluation of the CAIRT mission requirements and of the observation concepts is a crucial step in the early mission phases, and is conducted, among other tools, with the CAIRT end-to-end performance simulator (CEEPS). This simulator is a collection of software modules, simulating each part of the observation process from geophysical scenes to retrieved data products such as trace gas concentrations and temperature. At the heart of CEEPS lies the Instrument Simulator Module (ISM), which simulates the measurement process of the FTS. While the ISM has been verified and partially validated in the frame of CEEPS, an additional independent validation further increases confidence in the quality of the CEEPS assessment. Here, we present the results of a study in which we evaluate, cross-compare, and critically discuss the performance of the CEEPS ISM.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: The CAREVALAB mission to examine the UTLS by 3-D tomography

Authors: Jörn Ungermann, Sören Johansson, Samuele Del Bianco, Michael Höpfner, Peter Preusse, Piera Raspollini, Sebastian Rhode, Björn-Martin Sinnhuber, Gerald Wetzel
Affiliations: Forschungszentrum Jülich GmbH, Karlsruhe Institute of Technology, Istituto di Fisica Applicata “Nello Carrara” (IFAC), Consiglio Nazionale delle Ricerche (CNR)
The CAREVALAB project focuses on aircraft measurements planned within the framework of the Arctic Springtime Chemistry Climate Investigations (ASCCI) campaign in March 2025. Among other in-situ and remote sensing instruments, the GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) instrument will be deployed on the German High Altitude and LOng range research aircraft (HALO). GLORIA is a limb-imaging Fourier-transform spectrometer covering the spectral range from 780 to 1400 cm-1 with a spectral sampling of up to 0.0625 cm-1. With its 2-D detector it measures more than 6000 spectra simultaneously. It is mounted in a gimbal, which allows stabilization on an aircraft and full control over the pointing, including the capability to scan horizontally and measure in nadir direction. Apart from its use in a variety of scientific campaigns, it also serves as a prototype for the Earth Explorer 11 mission proposal CAIRT: over the past decade, we have benefited greatly from operating GLORIA to assess the possible performance of the proposed satellite mission. So far GLORIA has used dedicated circular flight patterns to derive 3-D trace gas volume mixing ratios in a manner similar to computed tomography. This is efficient in terms of flight time, but poses significant mathematical hurdles for the ill-posed inversion process due to the three-dimensionality of the problem and limitations in the available measurement angles, as it requires treating the whole 3-D volume and all measurements as a single mathematical optimization problem. The measurement pattern of limb-sounding satellites allows a much simpler retrieval considering only 2-D cross-sections, as demonstrated by operating satellites such as MLS. 
Here, we plan to showcase the first measurements by GLORIA replicating the simpler measurement pattern of limb-sounding satellites, in particular the proposed CAIRT satellite, which will allow a 3-D volume to be derived by processing separate 2-D atmospheric slices along the satellite track. This is only feasible by dedicating the full flight time of a HALO measurement flight to acquiring the necessary data. The data are enhanced by nadir-pointing measurements and co-located IASI spectra to allow a synergistic retrieval bypassing the respective shortcomings of limb and nadir sounders. In this contribution, we will show the results of studies using synthetic data and first results from actual measurements acquired during the ASCCI measurement campaign.
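The ill-posed inversion mentioned above is commonly stabilised by regularisation; the following toy sketch shows the mathematical idea (Tikhonov-regularised least squares), not GLORIA's actual retrieval code:

```python
import numpy as np

def tikhonov_retrieval(K, y, reg=1e-2):
    """Regularised least-squares solution of the ill-posed linear
    inversion y = K x: minimise ||K x - y||^2 + reg * ||x||^2 via the
    normal equations. A toy sketch of the mathematical idea only."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + reg * np.eye(n), K.T @ y)
```

In a tomographic retrieval, K would map the 3-D atmospheric state to all limb radiances at once, which is exactly why restricting the problem to independent 2-D slices along the track makes the inversion so much simpler.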
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: The project CASIA for exploring the synergy between CAIRT and IASI-NG

Authors: Piera Raspollini, Flavio Barbara, Elisa Castelli, Ugo Cortesi, Francesco De Cosmo, Samuele Del Bianco, Bianca Maria Dinelli, Elisa Fabbri, Andrea Faggi, Marco Gai, Liliana Guidetti, Giuliano Liuzzi, Tiziano Maestri, Guido Masiello, Marco Menarini, Enzo Papandrea, Marco Ridolfi, Luca Sgheri, Sabrina Vittori
Affiliations: Istituto di Fisica Applicata Nello Carrara, Consiglio Nazionale delle Ricerche (IFAC-CNR), Istituto di Scienze dell’Atmosfera e del Clima, Consiglio Nazionale delle Ricerche (ISAC-CNR), Istituto per le Applicazioni del Calcolo, Consiglio Nazionale delle Ricerche (IAC-CNR), Section of Florence, University of Bologna, Department of Physics and Astronomy “Augusto Righi”, University of Basilicata, Department of Engineering, Istituto Nazionale di Ottica, Consiglio Nazionale delle Ricerche (INO-CNR)
The Changing-Atmosphere Infra-Red Tomography Explorer (CAIRT) is one of the two candidates for ESA’s Earth Explorer 11. CAIRT aims to investigate the coupling between circulation and composition in the middle atmosphere and to study their interaction with climate change. For this purpose, 3-dimensional knowledge of the atmosphere with high spatial resolution is needed. If selected, CAIRT will fly in loose formation with the MetOp-SG satellite, which carries IASI-NG together with several other nadir-looking instruments. The two satellites will fly in the same orbit, offset by about 27°, to match the IASI-NG field of view with the region of the CAIRT tangent points (for each line of sight of a given acquisition, the tangent point is the point closest to the surface). Both CAIRT and IASI-NG rely on Fourier Transform Spectroscopy (FTS) and have very similar characteristics in terms of spectral range (718 cm-1 to 2200 cm-1 for CAIRT and 645 cm-1 to 2760 cm-1 for IASI-NG) and resolution (0.4 cm-1 for CAIRT and 0.25 cm-1 for IASI-NG after apodisation). Both instruments exploit imaging detectors: in particular, for the first time from space, CAIRT will use a detector that simultaneously measures limb-emission spectral radiance in two spatial dimensions, in altitude and horizontally (across-track, with a swath of about 400 km), and will allow closely spaced (50 km apart) consecutive acquisitions along track. In this way, a horizontal resolution of 50 km, unprecedented for limb measurements, will be possible both along track and across track. IASI-NG will cover an even wider swath (about 2200 km) with a spatial sampling of about 25 km. 
The main difference between the two instruments is the observation geometry: CAIRT sounds the limb of the atmosphere, allowing measurements with high vertical resolution over a vertical extent from 5 km up to 115 km; IASI-NG performs nadir measurements, characterised by higher horizontal but lower vertical resolution, and provides information on the lower and middle troposphere. Together, CAIRT and IASI-NG can provide information on several trace species from the lowest layers of the atmosphere to the top of the atmosphere, both during day and night. Other advantages of the synergy were demonstrated during CAIRT Phase 0 for ozone and other trace species using the rigorous Complete Data Fusion technique, with the combined products characterised by smaller total error and better spatial resolution. The synergy can also help in studying clouds: CAIRT can provide information on the altitude and thickness of clouds and aerosol plumes. It excels in optically thin conditions but can also deal well with typical cirrus clouds in the upper troposphere, and it can detect volcanic ash plumes. In turn, IASI-NG, for the same scattering layers, can provide information on the total column amount and on optical and microphysical properties. Within this framework, the project CASIA (CAIRT and Synergy with IASI-NG), funded by the Italian Space Agency (ASI), has the objective to prepare a set of tools to study and fully exploit the complementary information of the two instruments. CASIA aims at developing an innovative and validated forward model, fast and accurate, for the simulation of CAIRT and IASI-NG measurements, both in clear sky and in the presence of scattering layers, and for the computation of 2D (possibly 3D) analytical derivatives ready to be included in a 2D (3D) retrieval of temperature, trace species, and possibly optical and microphysical properties of clouds in the MIR spectral range. 
The activity aims to contribute to the development of the CAIRT mission by consolidating secondary objectives of CAIRT, such as the study of the synergy between limb and nadir measurements, also in the presence of clouds. We will present the work carried out within this project and its first findings. Acknowledgements: This work is carried out within the ASI-funded project agreement “CASIA” n. 2023-3-HB.0, CUP n. F93C23000430001.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: F.02.01 - POSTER - Harnessing the Power of Remote Sensing for Research and Development in Africa

Harnessing the power of remote sensing technology is instrumental in driving research and development initiatives across Africa. Remote sensing, particularly satellite imagery and big data analytics, provides a wealth of information crucial for understanding various aspects of the continent's environment, agriculture, and natural resources. This data-driven approach facilitates evidence-based decision-making in agriculture, land management, and resource conservation. Overall, remote sensing serves as a powerful tool for advancing research and development efforts in Africa, contributing to sustainable growth, environmental stewardship, and improved livelihoods across the continent. In this session, we aim to promote various initiatives fostering collaboration between African colleagues and those from other continents.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: SLIM but Mighty: Transforming Zambia’s Future with EO Solutions

Authors: Tomas Soukup, Tomas Bartalos, Stepan Bubak
Affiliations: GISAT, People In Need
The Sustainable Landscape through Integrated Management (SLIM) initiative, a cornerstone of the Team Europe Initiative (TEI) "Climate Action for Inclusive Green Recovery and Growth in Zambia," exemplifies the transformative potential of Earth Observation (EO) technologies in addressing the challenges of sustainability, climate change, and resilience. Jointly funded by the European Union and the Czech Republic, SLIM operates as a bold collaboration for integrated landscape management, ecosystem restoration, and community-based natural resource governance. Spanning 2023 to 2027, SLIM aligns closely with Zambia’s 8th National Development Plan (8NDP), Vision 2030, and revised Nationally Determined Contributions (NDCs) under the Paris Agreement on Climate Change. It directly supports the EU-Zambia Forest Partnership, focusing on sustainable forest management, biodiversity conservation, and fire and land-use monitoring. These priorities are embedded within the wider EU “Green Partnership and Investment Programme” framework, addressing agriculture, forestry, biodiversity, water, and climate at their nexus to drive ecological resilience and socio-economic development. At its core, SLIM leverages EO as a key enabler to catalyze change across multiple sectors. EO technologies provide vital data and insights, enabling precise land and water monitoring, fire prediction and mitigation, drought management, and land-cover change assessment; the ultimate goal is to integrate these EO-driven solutions into Zambia's decision-making processes, empowering local institutions such as the National Remote Sensing Center (NRSC) and the Ministry of Green Economy and Environment to achieve impactful, data-driven resource management.
EO as a Catalyst for Impactful Change
EO plays a pivotal role in the SLIM initiative, offering unique capabilities for data-driven management of Zambia’s natural resources. 
The initiative focuses on leveraging existing EO-based resources from the Copernicus programme and beyond, together with advanced methodologies, including AI and machine learning, to process and analyze EO data, uncovering patterns and trends critical for effective resource management. These approaches are complemented by a focus on integrating EO insights with local data sources and expertise, ensuring relevance and practicality for decision-makers. By demonstrating EO’s value in diverse application areas, SLIM will showcase how space-based data can transform traditional approaches to environmental and disaster management. This holistic use of EO highlights its potential to bridge the gap between scientific research and operational implementation, providing actionable intelligence for policymakers and stakeholders at all levels.
A Multi-Disciplinary Approach
The success of SLIM lies in its team’s multidisciplinary expertise, spanning remote sensing, geospatial analysis, environmental science, and capacity building, with strong involvement of local partners. Led by People in Need (PIN), the Czech-based international humanitarian and development organization, under the stewardship of the Czech Development Agency (CzDA), this collaboration ensures that SLIM benefits from both cutting-edge technology and on-the-ground knowledge, enabling tailored solutions for Zambia’s unique challenges. SLIM’s integration into the Green Nexus framework further amplifies its impact, emphasizing the interconnectedness of water, food, and energy systems; SLIM contributes by enhancing resource efficiency, reducing vulnerabilities, and promoting equitable growth.
Capacity Building and Technology Transfer
A core component of SLIM is its commitment to capacity building and technology transfer. 
Training programs, workshops, and user-centric service co-creation are embedded into the initiative, ensuring that Zambian institutions acquire the skills and knowledge needed to independently manage and sustain EO-based solutions. The initiative emphasizes collaboration with local stakeholders, fostering a sense of ownership and ensuring that the developed systems and methodologies are both practical and sustainable. By transferring technology and know-how to Zambian institutions, SLIM strengthens their ability to leverage EO for improved decision-making, contributing to long-term resilience and self-reliance.
Driving Qualitative Change
SLIM is not just about addressing immediate environmental challenges; it aims to foster systemic change in how data and evidence are used to inform policy and practice in Zambia. The initiative’s emphasis on user engagement and co-creation ensures that EO-derived products are actionable and relevant, closing the gap between data availability and utilization. This contributes to a culture of evidence-based decision-making, promoting greater sustainability, resilience, and equity.
Conclusion
The SLIM initiative exemplifies the transformative potential of EO as a tool for sustainable development and resilience building. By combining cutting-edge EO technologies with local expertise and capacity-building efforts, SLIM delivers practical solutions that address Zambia’s environmental challenges while empowering its institutions and communities.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Integrated Use of Multisource Remote Sensing Data for National Scale Agricultural Drought Monitoring in Kenya

Authors: Gohar Ghazaryan, Dr. Maximilian Schwarz, S. Mohammad Mirmazloumi, Harison Kipkulei, Dr Tobias Landmann, Henry Kyalo, Rose Waswa, Tom Dienya
Affiliations: Leibniz Centre for Agricultural Landscape Research, Remote Sensing Solutions GmbH, International Centre of Insect Physiology and Ecology, Regional Centre for Mapping of Resources for Development, Ministry of Agriculture and Livestock Development, Geography Department, Humboldt-Universität zu Berlin
Drought significantly affects agricultural systems, threatening crop yields, food security, and socio-economic stability. The availability of Earth Observation (EO) data has greatly enhanced drought monitoring by providing near real-time information on crop conditions. However, monitoring efforts have primarily focused on identifying drought hazards rather than assessing their broader impacts and risks. While MODIS data has been instrumental in drought assessment, its mission is nearing completion, necessitating the integration of other datasets for effective decision-making. Moreover, a comprehensive understanding of drought risk and impact requires context-specific information, such as irrigation practices and cropping systems. Within the EO Africa National Incubator project ADM-Kenya, we co-developed solutions with several actors and stakeholders to create EO-based products assessing drought risk and impacts. Four operational EO-based products were produced at a national scale in Kenya: monthly drought hazard and risk maps, high-resolution crop condition, an irrigated/rainfed farming systems map, and downscaled evapotranspiration (ET). Additionally, a demonstration product for mapping mono- and mixed-cropping systems was generated specifically for Busia County. We selected Sentinel-2 and Sentinel-3 data as the primary sources for drought impact and risk assessment and for deriving agriculturally relevant information, including crop condition and evapotranspiration as well as information on farming systems, i.e., irrigated/rainfed and mono/mixed cropping. 
Complementary datasets, such as yield statistics, meteorological data, and phenological information, were integrated. Sentinel-2 time series and vegetation indices tracked intra-seasonal changes in croplands, classifying drought-affected areas using random forest with severity thresholds derived from baseline conditions. Furthermore, machine learning and a two-source energy balance model were employed to derive daily 20-m ET. Hazard and impact data were linked to spatially explicit farming systems, enriched with socioeconomic and environmental information, to support comprehensive drought risk assessments. Active participation from key stakeholders, including the International Centre of Insect Physiology and Ecology (icipe), the Regional Centre for Mapping of Resources for Development (RCMRD), and the Ministry of Agriculture, played a critical role in the co-design and validation of these products. Several rounds of user validation were carried out, in which structured feedback was collected on the accuracy and usability of the outputs. Based on this feedback, the products were improved by adjusting the specifications and/or implementation steps. This ensured the tools addressed local needs and enhanced stakeholders' capacity to use EO data for agricultural monitoring and reporting, complemented by further capacity-building efforts carried out by RCMRD. As part of routine reporting, the Ministry of Agriculture is planning to incorporate these datasets into its monthly Food Security and Agricultural Status Bulletin, improving decision-making and policy support. The project outputs also contribute to addressing data gaps in pest-climate interactions, supporting icipe’s climate-smart pest-resilient push-pull technology. Policy reports were an additional output of the project, focusing on drought impact and risk assessment, as well as irrigation and water use. 
These reports provided actionable recommendations for integrating EO-derived insights into national agricultural policies and strategies. Training sessions and knowledge-sharing initiatives strengthened the capacity of local stakeholders to utilize EO data for agricultural monitoring and decision-making, ensuring the long-term sustainability of the project outputs. These advancements illustrate how EO-based solutions, supported by robust capacity building and co-development, can enhance drought monitoring and risk assessment.
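The idea of classifying drought severity against thresholds derived from baseline conditions, mentioned above, can be sketched as follows. This is a minimal, hypothetical NumPy illustration with made-up percentile thresholds and data; the actual ADM-Kenya workflow uses random forest classification on Sentinel-2 vegetation index time series.

```python
import numpy as np

# Hypothetical sketch: flag drought severity for one pixel/zone by comparing
# the current season's mean vegetation index (VI) against percentiles of a
# multi-year baseline. The 10th/30th percentile cut-offs are illustrative.

def classify_drought(baseline_vi: np.ndarray, current_vi: float) -> str:
    """baseline_vi: seasonal-mean VI values for the reference years (1-D)."""
    p10, p30 = np.percentile(baseline_vi, [10, 30])
    if current_vi < p10:
        return "severe"
    if current_vi < p30:
        return "moderate"
    return "unaffected"

baseline = np.array([0.52, 0.55, 0.58, 0.50, 0.61, 0.57, 0.54, 0.59])
print(classify_drought(baseline, 0.45))  # severe
print(classify_drought(baseline, 0.56))  # unaffected
```

In an operational setting the same comparison would be run per pixel and per time step, with thresholds calibrated against reported impacts rather than fixed percentiles.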
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Empowering Africa with Hyperspectral Data: Satellite Integration, Capacity Building, and Collaborative Research for Sustainable Agriculture

Authors: Jakub Dvořák, Petr Boháček
Affiliations: TRL Space
Satellite remote sensing technology is a cornerstone for research and development across Africa, providing critical data for addressing environmental, agricultural, and natural resource challenges. As a satellite integrator, TRL Space is committed to contributing to these efforts through its TRL Space Rwanda branch for local satellite manufacturing and through collaborations with local communities, experts, and international partners. This contribution highlights key initiatives that demonstrate the value of international partnerships in harnessing the potential of remote sensing for sustainable growth across the continent. A central element of TRL Space’s engagement in Africa is capacity building, both in satellite integration and in hyperspectral data analysis. Through initiatives like our upcoming workshop at the Rwanda Institute for Conservation Agriculture (RICA), we aim to empower local stakeholders with the knowledge and tools needed to utilize remote sensing technology effectively. This workshop, conducted in collaboration with experts from Charles University, will focus on critical skills such as reference data collection, hyperspectral data acquisition, and data analysis using locally acquired UAV datasets. By bridging technical expertise with local knowledge, the program enhances regional capacity for leveraging advanced Earth observation (EO) technologies. In parallel, TRL Space collaborates closely with local partners on research activities, including crop mapping and monitoring in Rwanda. Utilizing data from our hyperspectral satellite TROLL, we provide high-resolution insights into crop patterns at a national scale. With its unique combination of ~4.75 m spatial resolution and 32 adjustable VNIR bands, TROLL enables precision agriculture applications and enhances climate resilience. These efforts address key challenges in food security, sustainable land management, and environmental stewardship. 
The insights gained from these collaborations have the potential to extend beyond Rwanda, offering scalable solutions and shared expertise that can benefit the broader region. By integrating cutting-edge remote sensing technologies with local engagement and international expertise, these initiatives showcase the transformative power of cooperation in driving research and development across Africa.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: High-Resolution AI-Driven Crop Segmentation in Nyeri County, Kenya: Enhancing Agricultural Monitoring Through Deep Learning

Authors: Anna Bartels, Marcus Goebel, Dr. Bartholomew Thiong’o Kuria, Jun.-Prof. Dr. Andreas Rienow
Affiliations: Ruhr-University Bochum, Dedan Kimathi University of Technology
Artificial Intelligence (AI) is advancing the potential of Earth observation and geoinformation data. Deep learning methods are particularly widely used in land use and land cover mapping. They address challenges such as handling complex and high-dimensional input data, capturing spatial and temporal variability, automating feature extraction, and classifying land cover classes. Accurate and continuous agricultural land use mapping is essential for sustainable land management, food security, and climate adaptation, particularly in regions where agriculture is a key economic driver. In Nyeri County, Kenya, agriculture drives the local economy and is characterised by heterogeneous, small-scale farming systems that depend heavily on rainfall. Climate variability exacerbates planning uncertainties for farmers, making accurate, continuous monitoring of crop patterns critical for food security and climate adaptation strategies. Additionally, with coffee and tea being valuable export products, political pressure for traceability of these products is growing. The EU law adopted in 2023 defines clear regulations for the certification of export products, including verification that no deforestation was practised. The presentation will introduce the implementation of a deep learning approach using high-resolution Jilin-1 satellite imagery to accurately map heterogeneous cash crops and to capture, monitor and analyse the dynamics in land use and crop patterns for effective decision-making. A neural network architecture was created, trained and validated on ground truth data for land use in the Muringato sub-catchment area. The data were sampled during a field campaign in January 2023 and pre-processed to create training data containing crop masks and Jilin-1 satellite imagery patches with a spatial resolution of 0.5 m. 
Among the different models tested, U-Net, with its encoder-decoder architecture and skip connections, best captured local and global features, which is crucial for the heterogeneous landscapes typical of the agricultural structure in Nyeri County. With an overall accuracy of 0.985 and an Intersection over Union of 0.973, the U-Net model generated precise segmentation masks, enabling automated identification of crop fields. These results highlight the model's robustness in heterogeneous landscapes, offering a foundation for real-time agricultural monitoring systems in regions where traditional methods are challenged by access limitations or financial constraints. In terms of total area (8524 ha), large coffee plantations (346 ha) slightly exceed the smaller, scattered tea fields (329 ha) in the prediction mask for the study area in Nyeri. These results underscore the importance of precise crop mapping in distinguishing between key agricultural products, particularly in regions with diverse farming systems. This automated and scalable approach not only supports real-time agricultural monitoring but also facilitates the automation of workflows for mapping valuable export products and ensuring traceability, which is crucial for complying with evolving certification standards. The automation of the workflow also makes it adaptable to other satellite data sources, offering flexibility for various agricultural contexts. Looking ahead, European satellite systems, including ESA’s Earth observation missions, have the potential to enhance this approach further. By providing additional high-resolution satellite data, these systems can support the automation of crop mapping, improve traceability for valuable export products, and assist in meeting evolving certification requirements, such as those set by the EU for sustainable agricultural practices. 
The synergy between AI-driven analysis and European space-based assets offers great promise for advancing agricultural monitoring, improving traceability, and ensuring compliance with international sustainability standards.
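The two evaluation metrics reported for the U-Net (overall accuracy and Intersection over Union) can be computed from a predicted and a reference mask as in this minimal sketch; the code is illustrative, not the authors' evaluation pipeline.

```python
import numpy as np

def overall_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels where the predicted label matches the reference."""
    return float((pred == truth).mean())

def iou(pred: np.ndarray, truth: np.ndarray, cls: int) -> float:
    """Intersection over Union for one class: |P ∩ T| / |P ∪ T|."""
    p, t = pred == cls, truth == cls
    union = np.logical_or(p, t).sum()
    return float(np.logical_and(p, t).sum() / union) if union else 1.0

# Toy 2x3 masks: class 1 = crop, class 0 = background.
truth = np.array([[1, 1, 0], [1, 0, 0]])
pred  = np.array([[1, 1, 0], [0, 0, 0]])
print(overall_accuracy(pred, truth))  # 5 of 6 pixels agree -> 0.833...
print(iou(pred, truth, cls=1))        # intersection 2, union 3 -> 0.666...
```

For multi-class segmentation the per-class IoU values are typically averaged (mean IoU), which is how a single score such as the reported 0.973 is usually obtained.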
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Enhancing Pastoral Resilience in Northern Kenya through Integrated Use of Earth Observation and Local Knowledge

Authors: dr.ir. Anton Vrieling, Florian Ellsäßer, Claudia Paris, Luc Boerboom, Malumbo Chipofya, Alex Sayon Lengarite, Simon Piers Simpkin, Sake Godana Duba, Benjamin Loloju, Bob Sammy Munyoki Mwende, George Kinyanjui Ngugi, Clinton Ouma Ogola, Teyie Sharon
Affiliations: University of Twente, Faculty ITC, Mercy Corps Kenya, Save the Elephants, Directorate of Resource Surveys and Remote Sensing, National Drought Management Authority
Residents of northern Kenya's semi-arid rangelands face numerous challenges, including climate variability, land degradation, and resource conflicts. Their livelihoods are largely dependent on livestock, with seasonal herd migration being the main mechanism to secure sufficient food and water intake for their animals. However, frequent droughts, flooding, erosion, disease outbreaks, and the proliferation of non-palatable invasive species jeopardize the sustainable provision of sufficient levels of ecosystem services to meet the needs of the various pastoral groups, while armed conflict over scarce resources is common. To strengthen the livestock sector and enhance the sustainable management of rangelands in northern Kenya, the Embassy of the Kingdom of the Netherlands in Kenya is funding a 5-year (2024-2028) €15 million project called RANGE (Resilient Approaches in Natural ranGeland Ecosystems). The project targets Isiolo, Marsabit, and Samburu counties and is led by the non-governmental organization Mercy Corps in partnership with a) the Frontier Counties Development Council (FCDC), a regional economic bloc in Kenya composed of county governments, and b) the University of Twente’s Faculty of Geo-Information Science and Earth Observation (ITC). Recognizing that effective decision-making requires high-quality data on rangeland conditions and use, ITC’s contribution will focus on supporting and improving existing Earth observation solutions for obtaining such data, leveraging both in-situ (sensors, surveys) and satellite-based sources. RANGE will also build capacity at county and sub-county level to enhance spatial planning through data collection and analysis. Beyond the consortium partners, we engage in active collaboration with county governments, mandated governmental institutes (e.g. DRSRS, NDMA), local universities, research organizations, and conservancies. 
The RANGE project integrates capacity building with research, supporting six Kenyan PhD candidates (all listed as co-authors) and nine MSc students. Their research will strengthen institutional partnerships and contribute to sustainable development in the region. Below, we outline the research focus of the six PhD candidates, who work on interconnected topics: 1) Exploring scalable technologies for assessing livestock dynamics, with the aim of contributing to enhanced planning of rangeland utilization. In collaboration with candidate 2, the project will aim to establish a LoRaWAN (Long Range Wide Area Network) network, building on previous efforts by the Northern Rangelands Trust (a membership organization of conservancies). LoRaWAN enables the transmission of small data packets over long distances, which will be used for livestock tracking. Additionally, high-resolution imagery from PlanetScope and Sentinel-2 will support spatial and temporal analysis of livestock enclosures, where animals are kept overnight for protection from wildlife and raiders. 2) LoRaWAN can support data transmission from a variety of sensors. A second candidate will focus on establishing a LoRaWAN-enabled sensor network to monitor weather, moisture, and forage conditions, providing better ground data and insights into the links between water availability and the status of rangeland vegetation. Automated photo-interpretation procedures, using data from phenoCams and existing transect surveys, will also be developed. These in-situ sources will be combined with satellite image time series to create more accurate assessments of forage conditions across large areas, supporting drought monitoring and insurance programs. 3) A third candidate will use existing household surveys collected monthly by NDMA to assess how precursor events may lead to impacts; for example, how climatic fluctuations (teleconnections) result in meteorological drought, which then affects agricultural and socio-economic conditions. 
This will involve analyzing the relationship between remote-sensing-derived forage availability and household welfare indicators from the surveys. The insights gained will help design and test an improved drought forecasting system. 4) Understanding how drought affects pastoral resources requires identifying key areas for forage production and the timing of their use. Candidate 4 will map key dry- and wet-season grazing areas and herd migration patterns. This mapping will involve participatory input from communities and elders, and will assess how grazing areas have changed over time due to land tenure or climate shifts. The resulting spatial understanding will enhance satellite monitoring of forage scarcity and drought. 5) Multiple initiatives aim to improve the effectiveness of ecosystem services in northern Kenya’s rangelands, including support for grazing management planning, soil and water conservation, and invasive species control. However, evidence on the effectiveness of these interventions is often lacking. For example, water conservation measures may unintentionally promote the proliferation of invasive species. Candidate 5 will collaborate with local communities to identify rangeland health indicators for assessing intervention success. The candidate aims to scale a subset of these indicators to remotely sensed data to evaluate the long-term impacts of interventions over large areas. This work will provide recommendations for designing and scaling interventions across broader landscapes. 6) Improved data acquisition and analysis do not guarantee better decision-making. Candidate 6 will develop a participatory regional planning tool aimed at promoting sustainable economic investments. Following Kenya’s 2010 Constitution, which decentralized government power and empowered counties, counties are now required to produce five-year County Integrated Development Plans. 
However, there is significant potential to better utilize spatial data (such as that collected by the other five PhD candidates) to enhance spatial planning at both the county and sub-county levels. While the RANGE project will leave multiple challenges unresolved, such as scaling data collection across all three counties and integrating data effectively into decision-making to design climate-adaptive interventions, we are confident in the partnership model. By jointly executing research and development with multiple Kenyan organizations, the project has the potential to create lasting impact, providing services and insights that can enhance the resilience of pastoralists in the northern rangelands.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: EOCap4Africa – Earth Observation in Africa: Capacity building in the field of remote sensing for the conservation of ecosystems and their services.

Authors: Insa Otte, Prof. Dr. Emily Bennitt, Prof. Dr. Eric Forkuo, Prof. Dr. Jean-Paul Kibambe Lubamba, Dr. Ange Félix Nsanziyera, Dr. Janina Kleemann, Dr. Doris Klein, Dr Martin Wegmann, Michael Thiel
Affiliations: University of Würzburg, Okavango Research Institute, University of Botswana, Kwame Nkrumah University of Science and Technology (KNUST), University of Kinshasa, Institute of Applied Sciences (INES), Martin Luther University Halle-Wittenberg, German Aerospace Center (DLR) - Data Center (DFD)
Remote sensing is an important tool for recording landscape changes and creating a basis for the management of ecosystems and their services. This is particularly relevant for the African continent. Climate change, population growth, pollution and a growing demand for natural resources are leading to rapid landscape changes in many African countries and regions, often to the detriment of natural ecosystems. Wetlands, as one example of these ecosystems, provide valuable services for the local population by supplying food and drinking water, protecting against droughts and floods, and providing habitats for a large number of protected animal and plant species. Remote sensing technologies enable an inventory of ecosystems and thus create a basis for sustainable management, restoration and sustainable use. Despite the immense potential of remote sensing for wetland management in Africa, there is still a need for capacity development to further exploit the available technologies for management and recovery. The aim of the project “EOCap4Africa” is to strengthen the capacities of future conservation managers in applying information generated by remote sensing for the protection and sustainable use of ecosystems, with a focus on wetlands and their services. We developed a curriculum for a Master’s remote sensing module in close cooperation with our African partners from the university sector. The idea is to spread knowledge about the potential of remote sensing data via students of relevant courses and to increase its application in the medium term. In addition, the project pursues an approach in which senior and junior scientists and practitioners are integrated into the EO work at the African partners. On the one hand, this is intended to ensure professional excellence in the development of the module; on the other, it builds the capacities of young scientists and promotes the exchange of knowledge and experience. 
In EOCap4Africa we cooperate with four partner institutions in Africa, namely the Kwame Nkrumah University of Science and Technology in Kumasi (Ghana), the University of Kinshasa (DR Congo), the Institute of Applied Science in Ruhengeri (INES; Rwanda) and the University of Botswana in Gaborone/Maun (Botswana).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Investigating air pollution and climate change on the African continent

Authors: Pieternel Levelt, Prof. Dr. Eloise Marais, Dr. Helen Worden, Dr. Wenfu Tang, Dr. David Edwards, Henk Eskes, Dr. Pepijn Veefkind, Dr. Steve Brown, Dr. Collins Gameli Hodoli, Dr. Allison Hughes, Dr. Barry Lefer, Dr. Sheldon Drobot, Associate Research Professor Dan Westervelt
Affiliations: NSF NCAR, KNMI, TU Delft, University College London, NOAA CSL, Department of Built Environment at the University of Environment and Sustainable Development (UESD), Clean Air One Atmosphere, Department of Physics, School of Physical and Mathematical Sciences, College of Basic and Applied Sciences, University of Ghana, NASA HQ, Space & Mission Systems, BAE Systems, Inc., Columbia University
In the next few decades a large increase in population is expected on the African continent, with the current population doubling to reach 2.5 billion by 2050. At the same time, Africa is experiencing substantial economic growth. As a result, air pollution and greenhouse gas emissions will increase considerably, with significant health impacts for people in Africa. In the decades ahead, Africa’s contribution to climate change and air pollution will become increasingly important. The time has come to determine the evolving role of Africa in global environmental change. We are building an Atmospheric Composition Virtual Constellation, as envisioned by the Committee on Earth Observation Satellites (CEOS), by adding geostationary satellites in the Northern Hemisphere to our polar-orbiting satellites: GEMS over Asia (launched 2020), TEMPO over the USA (launched 2023), and Sentinel-4 over Europe, to be launched in the 2025 timeframe. However, there are currently no geostationary satellites envisioned over Africa and South America, where we expect the largest increase in emissions in the decades to come. At the recent CEOS AC-VC meeting, extending the GEO constellation over the Global South was positively received. In this paper, the scientific need for geostationary satellite measurements over Africa will be described, based in part on several recent research achievements related to Africa using space observations and modeling approaches, as well as first assessments using GEMS data over Asia and TEMPO data over the USA. Our ambition is to develop an integrated community effort to better characterize air quality and climate-related processes on the African continent.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Forecasting Agricultural Drought Impact in Africa through Machine Learning and Earth Observation

Authors: Koen De Vos, Sarah Gebruers, Jeroen Degerickx, Marian-Daniel Iordache, Jessica Keune, Francesca Di Giuseppe, Hendrik Wouters, Francesco Pereira, Else Swinnen, Koen Van Rossum, Laurent Tits
Affiliations: VITO, European Centre for Medium-Range Weather Forecasts (ECMWF)
Agricultural systems across Africa are increasingly vulnerable to the impacts of climate change, including prolonged and severe droughts and dry spells, which threaten food security, economic stability, and social resilience. Alongside a rising demand for food from a growing population, agriculture must adapt to a growing frequency of extreme weather conditions while minimizing its environmental footprint and ensuring sustainable resource use. The capacity to predict and manage agricultural droughts before they affect crops and livestock is therefore crucial for safeguarding food systems and the livelihoods of those relying on food production for income. In response to these challenges, this study presents a model that combines Earth Observation (EO) with machine learning (ML) techniques to estimate lower-tail anomalies in the Normalized Difference Vegetation Index (NDVI), a key indicator of vegetation health used to monitor agricultural drought impacts. Focusing on croplands and grasslands in Mali, Mozambique, and Somalia, we developed a zone-based system that integrates near real-time satellite data with meteorological (re-)forecasts into a gradient-boosted autoregressive model. This approach allows the prediction of NDVI anomalies up to three months in advance, offering valuable lead time for decision-making and resource allocation in drought-prone areas. The model combines environmental information such as soil moisture, elevation, and soil texture with information on meteorological droughts (e.g., SPI and SPEI), alongside phenological and land cover data, to better represent the expected impact. This integration improves predictive accuracy over conventional near real-time NDVI monitoring, substantially reducing the root mean square error (RMSE) across different time horizons (10 days, 1 month, and 3 months ahead).
These advancements represent a critical step towards transitioning from reactive monitoring of agricultural impact to proactive forecasting systems. By allowing early assessment of agricultural drought impacts, our study supports the development of informed drought management strategies, helping stakeholders, from policymakers to farmers, make timely interventions. This capability is particularly important for regions reliant on rainfed agriculture, where the consequences of delayed drought responses are often severe. Additionally, the model's integration of meteorological ensemble forecasts offers uncertainty quantification, further enhancing its value in risk assessment and resource planning. As such, this study contributes to ongoing efforts to build more resilient agricultural systems, aligning with global initiatives such as GEOGLAM and FEWS NET that aim to enhance food security through advanced monitoring and forecasting.
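The forecasting idea described in this abstract, an autoregressive model driven by lagged NDVI anomalies and a meteorological drought index and fitted with gradient boosting, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' VITO/ECMWF implementation; the variable names (`spi`, `ndvi_anom`) and the scikit-learn model choice are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic dekadal (10-day) NDVI anomaly series for one zone, loosely
# driven by a synthetic SPI-like meteorological index (both hypothetical).
n = 360
spi = rng.normal(size=n).cumsum() * 0.05
ndvi_anom = 0.6 * np.roll(spi, 3) + rng.normal(scale=0.1, size=n)

# Autoregressive design matrix: past NDVI anomalies plus the current
# meteorological index, one row per dekad.
lags = 3
X = np.column_stack(
    [np.roll(ndvi_anom, k) for k in range(1, lags + 1)] + [spi]
)[lags:]
y = ndvi_anom[lags:]

# Fit a gradient-boosted regressor on all but the final dekad, then
# produce a one-step-ahead anomaly forecast for that held-out dekad.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[:-1], y[:-1])
pred = model.predict(X[-1:])
```

In the operational setting described above, the meteorological predictors would come from (re-)forecasts rather than observations, which is what extends the lead time to several months.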
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Enhancing Sugarcane Stress Detection with Hyperspectral and Thermal Data: Insights from the PRISMA4AFRICA Project

Authors: Dr Roberta Bruno, Dr Raffaele Casa, Dr Francesca Fratarcangeli, Dr Saham Mirzaei, Francesco Palazzo, Dr Simone Pascucci, Dr Stefano Pignatti, Dr Nitesh Poona, Dr Chiara Pratola, Dr Zoltan Szantoi, Dr Alessia Tricomi
Affiliations: e-GEOS S.p.A., Department of Agriculture and Forestry Sciences (DAFNE), University of Tuscia, Institute of Methodologies for Environmental Analysis (IMAA), Italian National Research Council (CNR), Serco S.p.A., South African Sugarcane Research Institute (SASRI), Science, Applications & Climate Department, European Space Agency (ESA)
The PRISMA4AFRICA project aims to establish a partnership between African and European organizations to advance the adoption and use of Earth Observation (EO) technologies for precision farming and food security. This initiative is designed to address user needs while leveraging the opportunities and challenges offered by recent EO data processing and modelling. In the framework of this project, we develop and disseminate tools based on thermal and hyperspectral EO data to detect plant stress, with a particular focus on stresses impacting sugarcane plantations. Sugarcane is widely cultivated in all the collaborating countries (Gabon, Mozambique, and South Africa), which are represented by AGEOS, INIR, IIAM, and SASRI, the African Early Adopters (AEA). The SASRI team has identified key stress factors affecting sugarcane, including yellow sugarcane aphid infestations and Eldana damage, while INIR is particularly interested in water stress. This collaboration is fundamental for validating the products using in-situ data collected in quasi-real time with the hyperspectral acquisitions. To this end, an online training session was organized to share the theory and practice of data collection with African collaborators. Hyperspectral and thermal data, generally less exploited than multispectral data due to their complexity and current limitations in revisit time, nevertheless offer significant opportunities because they enable the retrieval of (a) crop biochemical and biophysical vegetation parameters (e.g., LAI, FAPAR, LCC/CCC and CWC), (b) soil properties (e.g., soil organic carbon, SOC), and (c) evapotranspiration (ET) and the Evaporative Stress Index (ESI).
The goal of the joint activity is to generate stress maps by combining outputs from different processing and modelling chains that use as input PRISMA (the ASI Italian mission) and EnMAP (the DLR German programme) for the hyperspectral (0.4-2.5 μm) dataset, and ECOSTRESS (the Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station, on the ISS) and Landsat-8/9 for the thermal one (8-12 µm). Crop biochemical and biophysical parameter retrieval was achieved through a hybrid approach. The radiative transfer model PROSAIL was used to generate a training dataset encompassing different illumination and geometry configurations, which was then used to train Machine Learning Regression models (tree-based models and Gaussian Process Regression (GPR) models, depending on the target variable). These models have been validated in a different country, achieving promising results: RMSE = 0.38 m²/m², R² = 0.82 for LAI; RMSE = 0.093, R² = 0.805 for FAPAR; RMSE = 0.019 g/cm², R² = 0.77 for CWC; and RMSE = 0.38 µg/cm², R² = 0.695 for chlorophyll. SOC, in contrast, was estimated using a 1-Dimensional Convolutional Neural Network (1D-CNN), trained on an extensive global PRISMA dataset combined with SOC values from the ICRAF and KKSL (https://soilspectroscopy.org/) spectral libraries. Transfer learning was subsequently applied to refine retrieval for the specific areas of interest. A preliminary test of the methodology was conducted in South Africa, yielding an R² value of 0.47. ECOSTRESS L3/L4 standard products were exploited to extract information about ET and water stress. Unfortunately, these data are not always produced because of the lack of ancillary layers required by the ECOSTRESS processing chain for the ET calculation. For this reason, an ad hoc workflow was set up to derive both albedo and LAI directly from PRISMA images.
To improve the spatial resolution, the Data Mining Sharpener (DMS) algorithm was successfully applied to sharpen the LST products using PRISMA-derived 30 m NDVI. While crop vegetation parameters and soil properties will be validated using in-situ data collected by the AEA, the absence of Eddy Covariance or ET stations within the study areas limits the evaluation of the retrieved ET/ESI to a qualitative assessment. This evaluation will therefore involve comparisons with values from the FAO WaPOR portal (https://data.apps.fao.org/wapor) or cross-validation against data from better-instrumented reference sites. Preliminary results clearly show that the 30 m ET products derived by combining PRISMA and ECOSTRESS using the Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) algorithm are of good quality in terms of dynamic range and spatial pattern. The PRISMA-ECOSTRESS ET product shows a high correlation with the ESA-STIC and NASA PT-JPL products, with RMSEs of 32 and 19 W/m², respectively. To conclude, this study evaluates the potential of thermal and hyperspectral data for detecting stress and damage in crop fields such as sugarcane. By integrating these advanced technologies, it becomes possible to provide critical insights that enhance the resilience of plantations against various stressors and contribute to food security efforts. Furthermore, with upcoming hyperspectral missions such as CHIME (ESA) and SBG (NASA), as well as thermal missions such as LSTM (ESA), SBG-TIR (NASA), and TRISHNA (ISRO), these tools pave the way for an operational monitoring system in the framework of precision farming and food security.
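The hybrid retrieval strategy described above, simulating spectra with a radiative transfer model and training a regression model to invert them, can be sketched as below. A toy analytic forward model stands in for PROSAIL, and the Gaussian Process Regression setup is a generic scikit-learn sketch, not the project's actual configuration; all numbers and function names are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def toy_forward_model(lai, n_bands=20):
    """Stand-in for PROSAIL: maps LAI to a reflectance-like spectrum.
    A saturating exponential mimics the canopy reflectance response."""
    bands = np.linspace(0.4, 2.5, n_bands)  # wavelengths in µm
    return 0.5 * (1 - np.exp(-0.6 * lai))[:, None] * np.exp(-bands)[None, :]

# 1) Simulate a training set over the expected LAI range.
lai_train = rng.uniform(0.0, 7.0, 300)
spectra = toy_forward_model(lai_train) + rng.normal(scale=0.005, size=(300, 20))

# 2) Train a GPR model to invert spectra back to LAI.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(spectra, lai_train)

# 3) Retrieve LAI (with an uncertainty estimate) from a "measured" spectrum.
obs = toy_forward_model(np.array([3.0]))
lai_hat, lai_std = gpr.predict(obs, return_std=True)
```

In the real workflow the same trained model would be applied pixel-wise to PRISMA or EnMAP imagery, and tree-based regressors would replace the GPR for some target variables.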
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Assessing EO Maturity in Sub-Saharan Africa

Authors: Irena Drakopoulou, Lefteris Mamais, Peter Zeil, Cecilia
Affiliations: Evenflow
Africa has experienced significant economic growth in recent years, along with a significant increase in innovation output. For this growth to be sustained in the face of a host of challenges, informed decisions need to be made. Here EO has a major role to play, informing policymakers and supporting innovators across a wide range of sectors. For all this to materialise, however, it is vital that African countries have a solid understanding of their current EO capabilities and maturity. The EO Maturity Indicators (EOMI) framework, a proven and structured tool for tracking progress and identifying areas for growth in the EO sector, was utilized as part of the ongoing collaboration between the European Commission's Directorate-General for International Partnerships (DG INTPA) and the African Union. This framework, previously employed in similar contexts in several countries in Europe, North Africa, the Middle East, and South-East Asia, enables a comprehensive evaluation of EO maturity levels. To that end, our research assessed the EO landscape in nine Sub-Saharan African (SSA) countries: South Africa, Nigeria, Kenya, Rwanda, Gabon, Ivory Coast, Tanzania, Namibia, and Botswana. The study's primary aim was to provide a clear and comprehensive understanding of the space sector's status in these countries, with the goal of fostering EO-driven innovation and sustainable development. Using the tailored EO Maturity Indicators methodology, the research evaluated three fundamental pillars of the space sector: ecosystem capacity, infrastructure, and policy frameworks. A mixed-methods approach was adopted, combining stakeholder consultations, in-depth desk research, and validation by national experts to ensure context-specific and accurate insights. The findings for each of the indicators revealed a diverse range of EO maturity levels across the region.
South Africa and Nigeria emerged as regional leaders, showcasing strong policy frameworks, active academic ecosystems, and successful international collaborations that have significantly advanced their EO sectors. Other countries, such as Tanzania, Namibia, and Botswana, are at earlier stages of development. While they face several challenges, they also present opportunities for targeted investments and capacity building, which can unlock their future potential to fully leverage EO for national and regional benefits. Despite these disparities, the study identified promising opportunities for growth. Existing regional collaborations, international partnerships, and academic and training programs represent key strengths that can be further scaled. EO applications in critical areas such as agriculture, disaster management, urban planning, and climate monitoring underscore the potential for transformative socio-economic impact. Targeted investment in these applications can drive sustainable development and innovation across the region. To address the challenges, the study offers key recommendations, including enhancing funding mechanisms, fostering cross-border partnerships, and investing in capacity building at institutional and individual levels. Strengthening support for small and medium enterprises (SMEs) and closing gaps in research and innovation are equally critical. Furthermore, improving policy frameworks and infrastructure is essential to developing a resilient and competitive EO sector capable of meeting the diverse needs of Sub-Saharan Africa. This research underscores the strategic importance of Earth Observation in promoting sustainable growth, environmental stewardship, and improved livelihoods across Africa. By fostering regional and international cooperation, the EO sector can catalyze innovation and economic development while addressing global challenges.
The findings of this study provide a roadmap for policymakers, industry leaders, and academic stakeholders to align their efforts and fully realize the transformative potential of EO for the region. In that regard, the study has already informed upcoming investments by the European Commission in the context of the EU/Africa space flagship programme, as well as country-specific action plans, such as those under development in Kenya. Finally, as widely recognized by the stakeholders involved at country and regional level, the study provided them with an invaluable opportunity to learn about each other's activities in a structured way and drive positive developments going forward.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Perspectives on Critical Remote Sensing and Mixed Methods for Development Studies in Africa. Assessing the Land Dynamics of Middle Scale Farms in the Nacala Corridor, Mozambique.

Authors: Ricardo Gellert Paris, Jun.Prof. Dr. Andreas Rienow
Affiliations: Ruhr University Bochum
Remote sensing methods, such as satellite imagery and time-series analysis, have become indispensable in data-poor contexts, particularly for understanding Africa's infrastructural, agricultural, and socio-spatial dynamics. While these tools are often celebrated for their objectivity and scalability, critical perspectives highlight their potential to reinforce dominant societal metanarratives about modernization and development. Instead, we argue for approaches that enable satellite imagery to contest these dominant discourses, fostering political debate rather than foreclosing it and opening pathways for more inclusive and participatory narratives. Our research investigates the socio-political potential of critical remote sensing applications through their integration with ethnographic and field-based methods, emphasizing the need for context-driven and participatory approaches. The Nacala Development Corridor is the major infrastructural project in Mozambique, connecting mining extraction sites to a newly built port. Alongside the logistics infrastructure, the government and international agencies implemented development projects to foster the agricultural sector, including land titling and technology transfer. In investigating these dynamics, we were confronted by two issues: i) the multiple spatial and temporal scales of territorial changes and ii) conflicting perceptions of autonomy and dependency among peasants and middle-scale farmers, reflecting broader tensions in resource access and control. To bridge physical and social changes driven by mega infrastructural projects, we utilized remote sensing time series to identify the expansion of agricultural operations and land-use heterogeneity in disputed territories. Specifically, we analyzed 10 years of Sentinel-2 imagery alongside rainfall estimates (CHIRPS) and topographic data (SRTM) to assess land cover changes over time. Georeferenced field data further contextualized these observations.
We conducted a six-month fieldwork residency in collaboration with local research institutions, immersing ourselves in towns and villages along the Nacala Corridor. This immersive approach enabled us to build trust and foster meaningful engagement with local communities. Our methods included in-depth interviews with a diverse range of stakeholders and transect walks through selected study areas, allowing us to traverse the boundaries between commercial and family farming operations. This combination of techniques provided nuanced insights into the socio-spatial dynamics and power relations shaping land use and livelihoods along the Corridor. By intersecting remote sensing data with on-the-ground narratives and experiences, we question not only the spatial dynamics of agricultural intensification but also who has access to the benefits and how they are shared among local actors. Stable production and the mitigation of risks related to weather unpredictability, we argue, rely on access to natural resources and technology. Middle-scale farms exemplify this condition, for example by reaching increases of up to 180% in median NDVI values after the implementation of mechanized irrigation. However, this access is often mediated by established power structures that favour certain groups over others. By critically assessing the strengths and limitations of combining remote sensing with ethnographic fieldwork, this research reframes remote sensing as more than a neutral tool for data extraction. Instead, it demonstrates its potential as a platform to amplify marginalized voices, challenge inequities, and contribute to more inclusive and sustainable development practices.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: FAO PLAN-T: Advancing Climate Adaptation for Maize Cultivation in Zambia with Innovative Tools and Methodologies for Better Decision-Making

Authors: Dr. Ramiro Marco Figuera, Kimani Bellotto, Melih Uraz, Giulio Genova, Marco Venturini, Alessandro Moser, Marcello Petitta, Dr. Sandra Corsi, Michela Corvino, Zoltan Szantoi
Affiliations: SISTEMA GmbH, AMIGO Climate, FAO, ESA
The FAO PLAN-T project is an initiative aimed at improving climate adaptation strategies for maize cultivation in Zambia by leveraging advanced climate and agronomic data. The project focuses on assessing crop yield potential for nine maize varieties across the country, utilizing high-resolution ERA5-Land climate data and FAO WaPOR to evaluate climate variables critical to maize growth, in combination with soil water retention characteristics, fertility, and salinity. With a detailed spatial resolution of 250 m per pixel, the project assesses maize yield on a pixel-by-pixel basis, providing localized insights to farmers and policymakers. FAO's AquaCrop model is a key component of the project, utilizing precipitation, air temperature, evapotranspiration, soil water retention characteristics, soil fertility, and soil salinity to assess crop performance. For each pixel, AquaCrop examines three climate scenarios (dry, average, and wet) and computes the mean to determine optimal sowing dates and maximum yields for each maize variety. The model generates detailed maps that display expected crop yields and recommended sowing dates, providing guidance on the best planting times to maximize yield potential. An interactive web-based application enables users to select specific locations and dates to assess suitability for planting based on real-time and forecast soil moisture and water retention data. This decision-support tool allows farmers to adjust planting decisions based on current soil and moisture conditions and high-resolution weather forecasts (up to 10 days ahead) from ECMWF, improving resilience to climatic fluctuations. To further support farmers, the project has developed an innovative module that quantifies risks from extreme climate events during maize growth stages. This module evaluates the impact of climate stressors on each phenological phase of maize growth, from planting to the latest growth stage.
Working in near-real time, it enables farmers to predict potential yield losses and implement timely adaptation measures, improving decision-making capabilities in response to climate variability. Moreover, it allows the identification of the prevailing climate baseline (dry, average or wet) throughout the growth season and the detection of any emerging climate scenarios. The FAO PLAN-T project represents a significant advancement in agricultural decision-support tools, providing localized, data-driven insights that empower Zambian farmers to optimize maize yields and build climate resilience.
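The per-pixel logic described in this abstract (run three climate scenarios for each candidate sowing date, average the simulated yields, and recommend the date with the highest mean) can be illustrated schematically. The yield function below is a purely hypothetical stand-in for an AquaCrop simulation; all numbers and names are illustrative, not project values.

```python
import numpy as np

# Candidate sowing dekads and the three climate scenarios used per pixel.
sowing_dates = np.arange(12)            # 12 candidate dekads
scenarios = ["dry", "average", "wet"]

def toy_yield(date, scenario):
    """Stand-in for an AquaCrop run: yield peaks at a scenario-dependent
    optimal sowing dekad (purely illustrative numbers, in t/ha)."""
    peak = {"dry": 4, "average": 5, "wet": 6}[scenario]
    return 8.0 * np.exp(-0.5 * ((date - peak) / 2.0) ** 2)

# Mean yield across the three scenarios for each candidate date ...
mean_yield = np.array(
    [np.mean([toy_yield(d, s) for s in scenarios]) for d in sowing_dates]
)
# ... and the recommended sowing date is the one maximising that mean.
best_date = int(sowing_dates[np.argmax(mean_yield)])
```

Repeating this argmax over every 250 m pixel yields exactly the kind of recommended-sowing-date map the abstract describes.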
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Earth Observation-Based Characterization of Social-Ecological Systems in the Kavango-Zambezi Transfrontier Conservation Area

Authors: Achim Röder, M.Sc. Jari Mahler, M.Sc. Chidinma Akah, M.Sc. Henrike Dierkes, M.Sc. Aaron Nikultsev, JProf. Dr. David Frantz, Prof. Dr. Richard Fynn, Dr. Stephanie Domptail, M.Sc. Sakeus Kadhikwa, Prof. Dr. Jonathan Kamwi, Prof. Dr. Nichola Knox, Prof. Dr. Vincent R. Nyirenda, Dr. Antonio Chipita
Affiliations: Trier University, Environmental Remote Sensing and Geoinformatics, Trier University, Geoinformatics - Spatial Data Science, University of Botswana, Okavango Research Institute, University of Gießen, Institute for Agricultural Policy and Market Research, Namibia University of Science and Technology, Dep. of Geo-Spatial Sciences and Technology, The Copperbelt University, School of Natural Resources, Associacao de Conservacio do Ambiente Desenvolvimento Integrado Rural
The Kavango-Zambezi Transfrontier Conservation Area (KaZa-TFCA), established in 2012, spans five countries and is one of the largest transboundary protected areas worldwide, including iconic national parks such as the Okavango Delta and Kafue National Park. It is a network of protected areas whose goal is to advance biodiversity conservation while sustainably managing the Kavango-Zambezi ecosystem and supporting the livelihoods of its 3 million people. The vast majority of these livelihoods depend on small-scale farming, often in highly dynamic setups combining shifting cultivation and horticulture. Market integration is very limited, and more intensive (irrigation) agriculture only plays a role in some parts of Zambia. Conflicts over resources, ongoing human-wildlife conflicts, and failing conservation and development objectives jeopardize the effectiveness of KAZA. The SASSCAL-II project ELNAC (Enhanced Livelihoods and Natural Resource Management under Accelerated Climate Change – A Large Landscape Social-Ecological Systems Approach) addresses these issues by bringing together ecological, socio-economic and geoinformatics research with diverse stakeholders to support the implementation of community-based natural resource management (CBNRM) concepts as an efficient means of empowering local communities. It is based on the assumption that the current conservation paradigm often stigmatises local communities as degrading agents and excludes them from managing the natural resources surrounding them. This in turn may compromise conservation outcomes, since disenfranchised communities will resist conservation objectives. While CBNRM is mostly local, one goal of ELNAC's earth observation component is to develop data products for use in diverse applications at different scales. This is essential, since understanding and managing socio-ecological systems requires a robust data foundation.
In this context, earth observation plays a critical role in bridging knowledge gaps by providing consistent, large-scale data that can complement local insights. By integrating advanced remote sensing technologies, ELNAC enhances the ability to monitor ecological and land-use changes, enabling informed decision-making at both community and regional levels. The FORCE framework (Framework for Operational Radiometric Correction for Environmental monitoring) was utilized to establish a Landsat and Sentinel-2 based datacube for the entire region. After radiometric and topographic processing, all images were organized in a consistent tiling structure, amounting to ~3 million images within ~3,500 30 × 30 km² tiles. These include surface reflectance, cloud and cloud shadow masks, and other auxiliary data. The datacube supported the development of different Level-3 products, such as land surface phenology metrics (LSP), spectral-temporal metrics (STM) and best-available-pixel composites (BAP), which form the basis for a wide range of analyses. Our analytical framework consists of three components to characterize the social-ecological system of the region: i) vegetation structure as a key resource for livelihoods and wildlife; ii) a conceptual look at the large-scale transformation frontiers and related patterns; and iii) production and fallow cycles in agricultural systems. To map vegetation structure, data from the Global Ecosystem Dynamics Investigation (GEDI) were used in combination with land surface phenology and spectral-temporal metrics derived from Sentinel-2 and extensive field work. Random forest models were then used to produce maps of canopy cover, height and foliage height diversity across the Okavango Delta. Despite GEDI being developed for forest systems, we found the results also represent well the structures in the flooded grasslands of the Delta region.
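The structure-mapping step, linking sparse GEDI footprints to wall-to-wall Sentinel-2 metrics via random forest regression, can be sketched on synthetic data as follows. The predictor names and the simple linear relationship are assumptions for illustration, not the ELNAC models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Synthetic training table: each row is a GEDI footprint with
# Sentinel-2-derived predictors (illustrative LSP/STM metrics).
n_footprints = 500
ndvi_peak = rng.uniform(0.2, 0.9, n_footprints)     # phenology metric
swir_median = rng.uniform(0.05, 0.3, n_footprints)  # spectral-temporal metric
canopy_height = (
    30 * ndvi_peak - 20 * swir_median + rng.normal(0, 1.5, n_footprints)
)

X = np.column_stack([ndvi_peak, swir_median])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, canopy_height)

# Apply the trained model to every pixel of a (tiny) predictor raster to
# obtain a wall-to-wall canopy height map.
grid = np.column_stack([
    rng.uniform(0.2, 0.9, 100),
    rng.uniform(0.05, 0.3, 100),
])
height_map = rf.predict(grid).reshape(10, 10)
```

The same pattern extends to canopy cover and foliage height diversity by swapping the response variable.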
To characterize agricultural systems, different mapping approaches were implemented based on Landsat time series, since these allow evaluating the time span between 1990 and 2023; earlier years were discarded due to a shortage of data. Analysis of land use and land cover conversions made use of the frontier metric concept developed for South America (Baumann et al. 2023, Environmental Research Letters), which we adapted to reflect the finer texture of processes found in the KAZA region. We found six metrics to represent well the transformation patterns in the region: year of deforestation onset; speed, diffusion and activity of deforestation; and land use after deforestation. In contrast to other parts of the world, leapfrogging of the deforestation frontier was not found to play a major role, while regularly progressing frontiers dominated. Improving agricultural practices and mitigating human-wildlife conflicts is one of the most important objectives of KAZA, necessitating monitoring of the effectiveness of implemented measures and suggestions for improved resource use. Thus, besides mapping and characterizing agriculture-related deforestation, it is of equal importance to better understand the dynamics within agricultural systems. Using dry-season STMs we mapped major land use and land cover categories for the 1990 to 2023 period, and then applied the LandTrendr time series algorithm (Kennedy et al. 2010, Remote Sensing of Environment) to the cropland probabilities to smooth and segment the time series. These segments were then linked to the major production phases in small-scale agricultural systems and, complemented by extensive surveys carried out in communities in Namibia, Zambia and Angola, revealed much higher dynamics than traditionally assumed in many areas. These manifest in shorter and more frequent cropping-fallow cycles on existing fields and the gradual development of new cropping areas.
It is particularly noteworthy that often these also extend into areas assigned to other uses in regional land use zoning schemes, casting doubt on the effectiveness of regional land use planning.
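Frontier metrics of the kind adapted here are computed from a per-pixel map of deforestation years. The sketch below shows the idea on a hypothetical raster; the metric definitions are simplified in the spirit of Baumann et al. (2023), not the exact adapted formulas used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 1 km grid of deforestation years (0 = never deforested),
# standing in for a Landsat-derived change map of one landscape unit.
defor_year = rng.choice(
    [0, 1995, 2000, 2005, 2010, 2015, 2020],
    size=(50, 50),
    p=[0.6, 0.05, 0.05, 0.1, 0.1, 0.05, 0.05],
)

cleared = defor_year > 0
years = defor_year[cleared]

# Simplified frontier metrics for the landscape unit:
onset = int(years.min())                  # year of deforestation onset
duration = int(years.max()) - onset + 1   # active years of the frontier
speed = cleared.sum() / duration          # cleared pixels per active year
activity = float(np.mean(years >= 2010))  # share cleared in the later period
```

In practice these summaries are computed per moving window or landscape unit and then classified into frontier archetypes (e.g. regularly progressing vs. leapfrogging).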
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Satellite observations for supporting air quality monitoring in East Africa

Authors: Anu-Maija Sundström, Pie Celestin Hakizimana, Deborah Nibagwire, Dessydery Mngao, Katja Lovén, Henrik Virta, Iolanda Ialongo, Seppo Hassinen
Affiliations: Finnish Meteorological Institute, Rwanda Environment Management Authority, Tanzania Meteorological Authority
Significant advancements in space-based atmospheric composition monitoring have created new opportunities to utilize satellite data in various societal applications, such as supporting air quality monitoring or assessing the impacts of air pollution on public health. To fully leverage the potential of satellite observations, active collaboration between the scientific community and stakeholders is essential. The role of satellite observations in supporting air quality monitoring is especially valuable in Africa, where rapidly growing cities face increasing air pollution levels but ground-based air quality measurements are often very limited or unavailable. The Finnish Meteorological Institute's project FINKERAT, funded by the Ministry for Foreign Affairs of Finland, aims to increase East African societies' preparedness for extreme weather events and to improve air quality monitoring in Kenya, Rwanda and Tanzania. Satellite observations play a key role in this project by supporting the assessment of air quality in each country, providing information on various air quality related parameters, especially over areas where ground-based observations are not available. Satellite observations of aerosols, fires, and trace gases provide valuable information on emission hotspots, seasonal pollutant variations, and long-term trends over East Africa. Particular focus has been placed on aerosol observations, as the main pollutant in the area is often particulate matter. In this work the main outcomes of the long-term satellite observation analysis are presented, along with how these observations can be used to support air quality monitoring. Capacity building is also an essential part of the FINKERAT project, and several hands-on training sessions on satellite data analysis have been organized by the FMI in Kigali, Nairobi, and Dar es Salaam as well as in Helsinki.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A.08.01 - POSTER- Advances in Swath Altimetry

The NASA and CNES Surface Water and Ocean Topography (SWOT) mission, launched in December 2022, is the first in-flight experience of a swath altimeter in orbit. The SWOT mission has demonstrated the capability of swath altimeters to measure ocean and inland water topography in an unprecedented manner. The onboard Ka-band interferometer (KaRIn) observes wide-swath sea surface height (SSH) with sub-centimetre error. It is already unveiling the small mesoscale ocean circulation that is missing from current satellite altimetry. SWOT has already carried out a calibration and validation (Cal/Val) campaign for the satellite, including ground truths and airborne campaigns.
ESA’s Sentinel-3 Next Generation Topography (S3NGT) mission is being designed as a pair of two large spacecraft carrying nadir-looking synthetic aperture radar (SAR) altimeters and across-track interferometers, enabling a total swath of 120 km, in addition to a three-beam radiometer for wet tropospheric correction across the swath and a highly capable POD and AOCS suite.
With a tentative launch date of 2032, the S3NGT mission will provide enhanced continuity to the altimetry component of the current Sentinel-3 constellation, with open ocean, coastal zones, hydrology, sea ice and land ice, all as primary objectives of the mission.
This session is dedicated to the presentation of advances in swath altimetry, including airborne campaigns, and the application of swath altimetry to the primary objectives of the mission, i.e. open ocean and coastal process observation, hydrology, sea ice and land ice. We also invite submissions for investigations that extend beyond these primary objectives, such as the analysis of ocean wave spectra, internal waves, geostrophic currents, and air-sea interaction phenomena within swath altimeter data.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: On the assessment of swath altimetry spectral requirements: lessons learned from the SWOT Cal/Val phase

Authors: Francesco Nencioli, Matthias Raynal, Clément Ubelmann, Emeline Cadier, Pierre Prandi, Gerald Dibarboure
Affiliations: Collecte Localisation Satellites, Centre National d'Etudes Spatiales, Datlas
The launch of the Surface Water and Ocean Topography (SWOT) mission on 16 December 2022 opened the era of wide-swath altimetry. One of the main objectives of SWOT’s new Ka-band Radar Interferometer (KaRIn) is to resolve two-dimensional sea surface height signals over a swath of 120 km and down to 15-30 km wavelengths, well below those that can be observed from the current nadir altimeter constellation. Reaching this objective required a major paradigm shift in terms of mission error requirements which, for the first time, were defined in spectral form. Cal/Val activities performed during the first two years of the SWOT mission proved that assessing such requirements is a particularly challenging task, since it demands simultaneous and co-located independent SSH observations over up to 1000 km along the SWOT swath. SWOT Cal/Val activities included various approaches based on comparisons with in-situ measurements (e.g. a dedicated mooring array and airborne lidar) as well as with remote sensing observations. Here we mostly focus on the comparison with SWOT nadir and Sentinel-3 SRAL sea surface observations. Despite the coarser resolution and the higher noise levels of those along-track nadir observations, our comparison showed very good overall KaRIn performance: specifically, minimum-resolvable scales reduced ten-fold and error magnitudes below the measured signal at all scales. Furthermore, SWOT observations seem to indicate that below 100 km ocean processes are more energetic than what could be inferred from traditional nadir altimeters, implying that mission requirements, as currently defined, are likely too stringent at those scales. Despite the good results, these first analyses evidenced non-negligible limitations associated with each approach. The largest source of uncertainty of each approach comes from the lack of reference measurements that are exactly simultaneous and co-located with the SWOT ones.
Because of that, natural ocean variability (spatial and/or temporal) must be accounted for when estimating SWOT error spectra. Estimating such a contribution is non-trivial, especially down to the small scales observed by SWOT. Observations from the initial “fast-repeating” phase of the mission (1-day repeat orbit) proved to be extremely important for assessing the natural ocean variability for the satellite-based approaches. The daily-repeating observations were also extremely valuable for the in-situ-based approaches, since they allowed the error spectrum to be reconstructed by averaging several realizations of otherwise noisy individual spectra. Overall, evaluating the error spectra at scales smaller than 100 km remained a challenging task for satellite-based approaches. Those scales are poorly resolved by nadir observations and, although partially resolved by SAR altimetry, are characterized by fast decorrelation scales in both space and time, making them particularly elusive to methods relying on spectral differences. Results from in-situ experiments represent an important complementary source of information at those scales, even though the short duration and localized spatial extent of field campaigns limit the frequency range, resolution and accuracy at which the error spectra can be estimated. All these lessons learned are particularly relevant in the perspective of future swath altimetry missions (such as Sentinel-3 Next Generation) and should be taken into consideration when defining future mission spectral requirements and how they are assessed.
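The spectral-averaging idea described here can be sketched in a few lines of numpy. The code below is a simplified illustration, not the authors’ processing: it estimates an along-track error spectrum by averaging the power spectral densities of the differences between repeated, co-located SSH measurements and a reference, mirroring how repeat-orbit realizations beat down the noise of individual spectra.

```python
import numpy as np

def along_track_psd(ssh, dx):
    """One-sided PSD of a 1D along-track SSH segment (Hanning-windowed).

    ssh: 1D array in metres; dx: along-track spacing in metres.
    Normalised so that sum(psd) * df approximates the signal variance.
    """
    n = ssh.size
    window = np.hanning(n)
    coeffs = np.fft.rfft((ssh - ssh.mean()) * window)
    psd = (np.abs(coeffs) ** 2) * 2.0 * dx / (window ** 2).sum()
    freq = np.fft.rfftfreq(n, d=dx)  # cycles per metre
    return freq, psd

def error_psd_from_differences(measured, reference, dx):
    """Average the PSDs of (measured - reference) over many repeat passes.

    Averaging many realisations reduces the noise of a single spectrum,
    as with the 1-day repeat orbit observations discussed above.
    """
    spectra = [along_track_psd(m - r, dx)[1] for m, r in zip(measured, reference)]
    freq = along_track_psd(measured[0] - reference[0], dx)[0]
    return freq, np.mean(np.array(spectra), axis=0)
```

For uncorrelated measurement noise, the integral of the averaged error PSD converges to the noise variance as the number of passes grows.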
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A CNN-Based Approach for Improving SWOT-Derived Sea Level Observations Using Drifter Velocities

Authors: Sarah Asdar, Bruno Buongiorno
Affiliations: CNR - Istituto di Scienze Marine
Satellite altimetry has fundamentally transformed our understanding of ocean dynamics by providing extensive coverage of sea surface height (SSH) data. The launch of the Surface Water and Ocean Topography (SWOT) mission on December 16, 2022, marked a significant milestone, offering unprecedented spatial resolution of sea level anomalies (SLA). With the ability to resolve features down to scales of 15–20 km, SWOT captures fine-scale ocean processes, including internal waves and tides, which can dominate the signal at these smaller scales. At such scales, however, the geostrophic approximation becomes less valid as other dynamics become more dominant (e.g., nonlinear advection and ageostrophic motions), posing challenges for deriving geostrophic velocities, which assume a balance between the Coriolis force and pressure gradients, a condition typically applicable to large-scale flows. SWOT data are further impacted by instrumental noise and processing artefacts, which disproportionately affect smaller spatial scales, as well as by the aliasing of high-frequency signals such as tides and inertial oscillations. These factors necessitate robust filtering techniques to isolate low-frequency geostrophic flows. Moreover, deriving geostrophic velocities requires taking spatial derivatives of SSH, a process that amplifies high-frequency noise and underscores the need for effective smoothing strategies to reduce this amplification and ensure reliable velocity estimates. To address these limitations and leverage advancements in machine learning, we developed a convolutional neural network (CNN)-based filtering technique to enhance the accuracy of satellite-derived sea level data. CNNs, known for their efficacy in capturing complex spatial patterns, form the backbone of our methodology.
Our primary goal is to reduce the error between geostrophic velocities calculated from SWOT SLA and in-situ velocity measurements from drifters (from the Global Drifter Program), thereby generating refined sea level maps as outputs. A key innovation of our approach lies in the custom loss function developed for the CNN model, explicitly tailored to minimize velocity discrepancies. By integrating drifter data to constrain the satellite-derived velocities, our approach ensures that the resulting sea level fields are more representative of actual oceanic conditions. This strategy not only improves SWOT-derived observations but also addresses longstanding challenges in remote sensing-based oceanography. By combining satellite capabilities with advanced machine learning techniques, we present a powerful framework for improving SWOT-derived observations globally. This work provides a clear example of how machine learning can address critical challenges in oceanography, advancing our capacity to monitor and understand global ocean circulation dynamics.
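For context, the geostrophic velocities compared against the drifters are obtained from the spatial derivatives of SSH. A minimal numpy sketch of this standard derivation (illustrative only, not the authors’ processing chain) is:

```python
import numpy as np

G = 9.81           # gravitational acceleration (m/s^2)
OMEGA = 7.2921e-5  # Earth rotation rate (rad/s)

def geostrophic_velocity(ssh, dx, dy, lat_deg):
    """Geostrophic velocities from a 2D SSH field on a regular grid.

    u = -(g/f) * d(ssh)/dy,  v = (g/f) * d(ssh)/dx
    ssh: 2D array (ny, nx) in metres; dx, dy: grid spacing in metres;
    lat_deg: latitude used for the Coriolis parameter f.
    """
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))  # Coriolis parameter
    deta_dy, deta_dx = np.gradient(ssh, dy, dx)    # axis 0 is y, axis 1 is x
    u = -(G / f) * deta_dy
    v = (G / f) * deta_dx
    return u, v
```

On a uniform along-x SSH slope this returns a constant meridional velocity, which illustrates why noise in the SSH derivatives propagates directly into the velocity field.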
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Exploring the Capabilities of SWOT KaRIn for Monitoring Lake Ice and Snow Depth

Authors: Jaya Sree Mugunthan, Claude Duguay, Dr Benjamin M Jones, Justin Murfitt, Elena Zakharova
Affiliations: H2O Geomatics, University of Waterloo, Hydro-EO, University of Alaska Fairbanks, EOLA
Lakes are a vital component of the Earth’s hydrological and climate systems. As highly sensitive indicators of climate change, lakes are classified by the Global Climate Observing System (GCOS) as an essential climate variable (ECV), with lake ice cover (LIC) and lake ice thickness (LIT) being two of its thematic products. In northern high-latitude regions, where lakes cover a significant portion of the landscape, the presence/absence of LIC and its thickness influence local/regional weather patterns, climate dynamics, hydrological processes, permafrost conditions, transport between northern communities, recreational activities, and tourism. Given their importance, accurate and frequent monitoring of LIC and LIT is critical. However, there has been a significant decline in field measurements of lake ice and overlying snow properties over recent decades. This reduction underscores the need for alternative monitoring approaches. Additionally, there has been a long-standing need for retrieving snow properties overlying lake ice, namely snow depth and snow mass (the product of snow depth and snow density), to improve the simulation of LIT from lake models used in standalone mode or as lake parameterization schemes in numerical weather forecasting and climate models. Despite advancements in the retrieval of LIC and LIT from optical and microwave (Ku- to L-band) satellite remote sensing data, including radar altimetry, the sensitivity of Ka-band observations to snow-covered lake ice remains largely unexplored. With the high-resolution wide-swath altimetry measurements provided by the Surface Water and Ocean Topography (SWOT) mission’s novel Ka-band Radar Interferometer (KaRIn), the above-mentioned knowledge gap could be addressed. This study builds on our prior work where we demonstrated the sensitivity of SWOT KaRIn signals to lake ice and overlying snow during the Cal/Val period, focusing on Teshekpuk Lake in Alaska. 
This period, however, was limited to late ice growth and break-up phases. In the current study, we extend the temporal scope to include both the Cal/Val and Scientific Phases, thereby capturing the complete ice phenology cycle—from initial freeze-up to the transition to ice-free conditions. Besides Teshekpuk Lake, this study also investigates the Dettah Ice Road, a critical winter transportation route connecting communities in Canada’s Northwest Territories. By exploring KaRIn-derived parameters including height and backscatter, we further investigate spatio-temporal patterns observed during the ice phenology period with a focus on snow accumulation and underlying surface ice properties. To better understand and support the KaRIn results, we use complementary satellite, meteorological station and field campaign data. The outcomes of this study will benefit researchers working on the estimation of lake water level, LIC, and LIT from radar altimetry and numerical lake models. Keywords: SWOT, lakes, lake ice, snow, wide-swath altimetry
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: SWOT-KaRIn Level-3 and Level-4 Algorithms and Products Overview

Authors: Cécile Anadon, Anaëlle Treboutte, Robin Chevrier, Antoine Delepoulle, Clément Ubelmann, Maxime Ballarotta, Marie-Isabelle Pujol, Gérald Dibarboure
Affiliations: Collecte Localisation Satellites (CLS), Datlas, Centre National d'Etudes Spatiales (CNES)
The DUACS system (Data Unification and Altimeter Combination System) produces, as part of the CNES/SALP project, the Copernicus Marine Service and the Copernicus Climate Change Service, high quality multi-mission altimetry Sea Level products for oceanographic applications, climate forecasting centers, and the geophysics and biology communities. These products consist of directly usable and easy-to-manipulate Level-3 (L3; along-track cross-calibrated SSHA) and Level-4 products (L4; multiple sensors merged as maps or time series). Level-3 algorithms used for nadir altimeters have been extended to handle SWOT’s unique swath-altimeter data: upgrades with state-of-the-art Level-2 corrections and models from the research community, a data-driven and statistical approach to the removal of spurious and suspicious pixels, a multi-satellite calibration process that leverages the strengths of the pre-existing nadir altimeter constellation, and a noise-mitigation algorithm based on a convolutional neural network. The objective of this presentation is to present the uniqueness of the Level-3 algorithms and datasets and the regular changes made twice a year with reprocessings.
The changes introduced by version 2 of the L3 products, published in December 2024/January 2025, are as follows:
- Geophysical standards changes:
  - Mean Sea Surface model 2024
  - Internal tides model HRET14
  - Quick fix of the SSB/SSHA offset in polar transitions
  - Addition of a 5 cm offset on MDT and ADT to be consistent with other L3 products
- Coverage improved:
  - Eclipse data gaps retrieved with good quality
  - Polar and coastal regions
- Cross-calibration improved, especially for coastal areas and polar seas
- Coastline and distance to coast improved
- Addition of surface classification (ice/leads) in the editing flag
- Addition of new variables:
  - Unfiltered geostrophic velocities
  - Internal tide model
  - Cross-track distance

2D topography images from SWOT have been added to nadir altimeter data inside mapping algorithms (MIOST, 4DvarNET, 4DvarQG) to produce Level-4 products. The wide-swath data provided by the SWOT mission help to reduce mapping errors, mainly in energetic ocean currents, to better position oceanic structures (eddies, fronts…) and to achieve finer resolution in maps.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: SWOT's contribution to the study of coastal ocean circulation, and more specifically the Northern Current (NW Mediterranean Sea)

Authors: Léna Tolu, Florence Birol, Claude Estournel, Fabien Léger, Mathilde Cancet, Rosemary Morrow
Affiliations: CNRS-LEGOS, Université de Toulouse
The monitoring of ocean currents is a key component in many coastal applications, ranging from biogeochemical resources to marine pollution or search and rescue. During the last three decades, satellite altimetry has played an essential role in the understanding and monitoring of ocean currents at global scale. But its use is still limited in coastal areas due to poorer data quality as we approach the coast, and a spatio-temporal data resolution considered coarse relative to the scales of coastal dynamical features. However, many recent studies addressing the different issues related to the derivation and exploitation of altimeter-derived coastal current velocities have shown that they efficiently complement coastal velocity fields derived from in-situ data (e.g., hydrographic observations, surface drifters and moored or ship-based acoustic Doppler velocities) or from shore-based HF radars. Indeed, one of the major advantages of this measurement technique is that it provides long time series (i.e. > 30 years) of spatially and temporally homogeneous information about the circulation and is available at near-global scale. The coastal altimetry data quality problem can be partially overcome thanks to dedicated processing with adequate corrections. Additionally, merging data from multiple missions has been shown to improve the spatial and temporal resolution. However, few data sets combining coastal processing and several altimetry missions exist. The SWOT mission represents the beginning of a new class of altimeters. Associated with substantial improvements in terms of spatial resolution (including in 2D, while all other altimetry missions provide 1D information) and data accuracy, it could considerably change the situation in terms of coastal applications. In this study, we quantify the ability of SWOT to observe coastal currents, compared with conventional nadir missions, on a case study: the Northern Current (NW Mediterranean Sea).
In particular, we take advantage of the 1-day repeat orbit during the Fast Sampling Phase as a prototype to explore what such temporal resolution can bring to coastal oceanography.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: An enhanced Mean Sea Surface model developed by combining SWOT KaRIn and nadir altimetry data

Authors: Rémy Charayron, Philippe Schaeffer, Maxime Ballarotta, Antoine Delepoulle, Alice Laloue, Marie-Isabelle Pujol, Gerald Dibarboure
Affiliations: Collecte Localisation Satellites, Centre National d'Etudes Spatiales
The data from the Ka-band Radar Interferometer (KaRIn) instrument on the Surface Water and Ocean Topography (SWOT) mission is expected to mark a significant breakthrough in our understanding of the oceans. SWOT KaRIn offers two key advantages. First, it provides observations with unprecedented precision, enabling better resolution of small-scale ocean features. Second, it delivers two-dimensional observations, allowing ocean features to be viewed in their entirety, unlike traditional one-dimensional nadir altimeter data, which only offer cross-sectional views. In particular, SWOT KaRIn data is expected to help in the development of an enhanced Mean Sea Surface (MSS) model, which is essential for improving the precision of Sea Level Anomaly (SLA) measurements. This study introduces a novel MSS model derived from the integration of SWOT KaRIn data and 30 years of nadir altimetry observations. By leveraging the unparalleled spatial resolution of SWOT with the long-term temporal coverage of nadir altimetry, the new MSS model aims to provide a more detailed and comprehensive representation of mean sea surface topography. The process uses a gridded draft MSS to capture large-scale content, refining it with two kinds of innovations applied selectively based on wavelength. The first approach takes advantage of the SWOT KaRIn science phase mean profile, while the second relies on the static component of the Sea Surface Height (SSH) signal obtained through the Multiscale Inversion of Ocean Surface Topography (MIOST) mapping method. Qualitatively, compared to the state-of-the-art MSS Hybrid 2023, the new MSS reveals previously undetected seamounts and significantly reduces geodetic residuals in SWOT KaRIn science SLA signals. Quantitatively, on SWOT KaRIn science data, the new MSS reduces the integrated low-mesoscale SLA power density spectrum by 8.44% and the integrated low-mesoscale MSS error power density spectrum by 66.67%, compared to the MSS Hybrid 2023. 
Additionally, it reduces local SLA variance by up to 30% over geodetic structures. These improvements have also been validated using independent data from nadir altimeters and on SWOT KaRIn’s calibration and validation phase data.
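The wavelength-selective refinement described above can be illustrated with a toy 1D blend. This is our own sketch under simplified assumptions (a boxcar low-pass and a single cutoff wavelength), not the actual MSS processing: it keeps the draft MSS at long wavelengths and takes shorter-wavelength content from an innovation field.

```python
import numpy as np

def blend_mss(draft, innovation, cutoff_km, dx_km):
    """Blend two 1D surface profiles selectively by wavelength.

    Keeps `draft` content at wavelengths longer than `cutoff_km` and takes
    shorter-wavelength content from `innovation` (toy boxcar filtering).
    """
    width = max(1, int(round(cutoff_km / dx_km)))  # boxcar ~ cutoff wavelength
    kernel = np.ones(width) / width
    draft_low = np.convolve(draft, kernel, mode="same")          # long scales
    innov_high = innovation - np.convolve(innovation, kernel, mode="same")
    return draft_low + innov_high
```

In practice a sharper spectral filter and careful edge handling would be needed; the boxcar is only the simplest way to show the long/short wavelength split.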
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Imaging and altimetric multi-mission synergy, including SWOT, Sentinel-6 and Sentinel-2, for reservoir monitoring: applications to the Grand lacs de Seine reservoirs (France)

Authors: Sabrine Amzil, Carlos Yanez, Thomas Ledauphin, Maxime Azzoni, Nicolas Picot, Emilie Mangold, Claire Pottier, Jérome Maxant, Herve Yesou
Affiliations: ICube Sertit, Centre National d’Etudes Spatiales
Reservoirs are key tools in the management of water resources. They provide a means of reducing the effects of inter-seasonal and inter-annual flow fluctuations, thereby facilitating water supply, flood control, power generation, recreation and other water uses. For more than 20 years, satellite radar altimetry has been an effective technique for monitoring variations in the elevation of continental surface waters, such as inland seas, lakes and reservoirs, rivers and, more recently, wetlands. This paper presents a case study on the capabilities of current imaging and altimetry satellites, including the breakthrough SWOT mission and the recent Sentinel-6, to monitor water surfaces, heights and variations in water stocks. It also highlights the contribution of multi-sensor synergy. The demonstration of this potential is ongoing over the largest French reservoir, the Der lake in Champagne (NE France). At 48 km2, the Lac du Der-Chantecoq, also known as the Marne reservoir, is the largest man-made lake in mainland France. Together with the Orient, Amance and Auzon lakes, it is part of the system of large Seine lakes intended to protect Paris from flooding. These reservoirs are fully controlled, and information regarding water heights and the remaining water volume in the reservoirs was provided by the Grand lacs de Seine authorities in charge of their management. Water is drawn from the Marne from November/December to June, filling the reservoir. From July to October, water is released to support the flow of the rivers. As a result, the water surface area changes considerably throughout the year, from about forty square kilometers during the high-water period to less than ten square kilometers during the very low-water period. It is worth noting that the Lac du Der comprises three sub-basins separated by dykes: the central basin, the largest in terms of retained volume (185 million m3); the northern basin (9 million m3); and the southern basin (7 million m3).
The work presented focuses on the central basin, which exhibits the most interesting dynamics for this type of study. This reservoir is under the track of several altimetry satellites, in particular SWOT and Sentinel-6. In the case of the SWOT data, the reservoir theoretically lies in the No Acquisition diamond area, but the signal from this large water body is clearly visible in the SWOT L2_HR_PIXC products and the Lake SP product, and the SWOT nadir track passes over the lake. The analysis of the heights measured by SWOT was carried out both on the PIXC PGC0/PIC0 point cloud products (class 4) and on the Lake SP Prior vector products over a period of more than one year, from July 2023 to mid-September 2024. However, it should be noted that the vector data is not systematically available, which means that there are gaps in the time series of Lake SP Prior products. Both types of products reproduce well the hydrological evolution of the central reservoir, with a decrease in water levels from September to December, then an increase in levels with water storage and a maximum reached during the summer. Between the summers of 2023 and 2024, the level variations are just over 8 m. To begin with, a few outliers were removed from the time series. The SWOT water level values were then compared with in-situ data. A bias of about twenty centimeters was observed between the in-situ data and the SWOT data. The origin of this bias is not fully understood; possible explanations include an instrumental origin, such as an extreme position on the track, or inaccurate levelling of the in-situ stations, and further investigations are in progress. Nevertheless, the accuracy/quality of the SWOT measurements is very good, with an RMSE of 0.09 m and 0.30 m at one sigma. The Lac du Der has also been tracked by satellites from the Jason series, including the most recent, Sentinel-6, since March 2021, allowing its water levels to be derived at a ten-day frequency.
Sentinel-6 data was first processed using FFSAR to achieve fine resolution of the radargram in the along-track direction, and then a retracker specially designed for this focused data was applied to estimate the water height. Comparison of the in-situ water level data and the Sentinel-6 heights shows a constant 60 cm offset between the two data sets; once this bias is corrected, the consistency of the data is impressive, with an RMS of 0.02 m (median and one sigma at 0.03 m) based on the analysis of over 14 months of data. It is interesting to compare these values with those obtained during previous work on the same Lac du Der site using Jason-3 data, for which the comparison with in-situ data presented an RMSE of 0.36 m. Furthermore, a more detailed analysis of the Sentinel-6 data over time has revealed a very specific feature differing from previous years, when the curve had a relatively smooth convex shape. This saw-like appearance is not an instrumental anomaly; it corresponds to a very specific management regime in 2024, with a close alternation of water release and impoundment, which was crucial in the context of the Paris Olympic Games, for which the management of the Seine was an essential parameter. It was only thanks to the high temporal revisit of the Sentinel-6 mission, i.e. 10 days, that it was possible to show that water stocks were managed differently in the summer of 2024. When comparing the height series derived from SWOT and Sentinel-6, the finer temporal granularity of Sentinel-6, with a revisit every ten days, is immediately visible. As a result, the effects of the 2024 summer management of the Der reservoir are only observable in the Sentinel-6 series, and are smoothed out in the SWOT series. The second part of this work deals with the analysis of surface areas as observed from SWOT data.
The first step is to validate the PIXC classifications and compare the surfaces obtained from SWOT with those derived from the Sentinel-2 (10 m) time series. The first results indicate a slight overestimation of water surface areas based on SWOT data, mainly during the low-water period, due to confusion between open water and the muddy, wet edges of the lake. The next steps will focus on combining these surface and altimetric data to generate a surface/height hypsometric curve for the lake, and also to monitor variations in the volume of this reservoir. The values derived from the Sentinel-2/Sentinel-6 satellite solution will then be compared with the DREAL estimates. The results obtained illustrate the strong current capabilities of altimetric satellite data such as SWOT and Sentinel-6, whether or not combined with optical data from Sentinel-2, to access water surfaces, heights and volumes over relatively small reservoirs.
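To sketch the volume-monitoring step: once a hypsometric (area as a function of height) curve is available, the volume change between two water levels follows by integrating the curve. The helper below is our own illustrative example, not the authors' processing; the sampling density and units are assumptions.

```python
import numpy as np

def volume_change(curve_h, curve_a, h0, h1, nsteps=200):
    """Volume change between water levels h0 and h1 (same units as curve_h)
    from a hypsometric curve, by trapezoidal integration.

    curve_h: ascending heights; curve_a: corresponding surface areas.
    Returns the integral of area over height (e.g. km^2 * m).
    """
    h = np.linspace(h0, h1, nsteps)
    a = np.interp(h, curve_h, curve_a)  # piecewise-linear area vs. height
    return np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(h))
```

With SWOT supplying areas and Sentinel-6 supplying heights, such a curve is exactly what the combination of the two data sources would feed.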
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: First Quality Data Assessment of SWOT Products Over the Gironde Estuary

Authors: Lucie Caubet, Florent Lyard, Nadia Ayoub, Robin Chevrier
Affiliations: LEGOS laboratory, CNRS/UPS/CNES/IRD, CLS
Estuaries form the last sections of rivers before reaching the ocean. These environments are therefore influenced both by hydrological processes, mainly the river discharge, and by ocean processes, namely tides along with storm surges and waves. Understanding their complex dynamics remains of prime importance, as estuaries often combine special ecosystems to be protected with highly urbanised zones. Nevertheless, as physical estuarine processes occur on a wide range of temporal and spatial scales, their study requires a large number of measurements to improve our ability to capture most of them. So far, in-situ data, field campaigns and numerical modelling approaches have mainly been used to achieve this. Very few estuarine studies have been conducted through satellite data analysis, either because of a lack of temporal resolution or because of the poor quality of altimetry satellite observations in coastal regions. However, the new SWOT (Surface Water and Ocean Topography) mission, launched in December 2022, which carries an innovative altimetry radar combining SAR (Synthetic Aperture Radar) and interferometry techniques, is the first altimetry mission to serve both dedicated hydrology and oceanography purposes. Indeed, SWOT provides 2D topography measurements over two 50 km swaths on either side of the nadir track, instead of the one-dimensional along-track sampling of conventional altimetric missions. By providing high-resolution 2D snapshots, SWOT therefore represents an incredible opportunity to study both the longitudinal and transverse variability of water level in estuaries. It also represents a unique dataset to test and calibrate numerical models. As a first step, however, it remains of prime importance to verify the quality and the relevance of the existing products for the oceans (LR products) and for continental waters (HR products), especially as none of these products were specifically designed for estuaries.
We focus on the Gironde estuary, for which different types of datasets (i.e. tide gauges and a numerical model) are available and facilitate this validation process. We chose to start with the Level 2 LR-unsmoothed ocean products at 250 m posting, as ocean products are easier to handle than HR products and the spatial resolution remains acceptable in the lower reaches of estuaries compared to the 2 km resolution ocean products. The objective of our work is to evaluate the usability of this product in the Gironde estuary, to assess the accuracy of the sea level data and some of the corrections, and to propose good practices for the use of these products. Our method relies on tide gauge observations as well as on numerical simulations with the 2D data-assimilated T-UGOm model. For this purpose, it is important to provide this first SWOT error budget for all SWOT pixels that are relevant for further physical analysis (i.e. water pixels). This first requires editing out spurious SWOT data (i.e. both land pixels and pixels contaminated by land). Using the SWOT sea surface height quality flag proved to be too restrictive to achieve this. The ancillary surface classification flag is not appropriate in estuaries either, because it corresponds to a static land/water mask, whereas the actual mask changes according to the phase of the tides. We thus propose two methods to compute a dynamic land/water mask specific to each cycle and pass. One is based on a thresholding of the backscatter radar response (i.e. sigma0), while the other relies on the SWOT grid distortion anomaly. In addition to editing the data, these dynamic masks demonstrate some potential for identifying intertidal zones. Secondly, by comparing the resulting edited SWOT data to tide gauges, we found that the SWOT deviation from in-situ measurements of total water levels (i.e. sea surface height including tides, waves and the dynamic atmosphere effect) is of the order of ten cm all along the estuary.
It should be noted that the SWOT cross-calibration has to be applied to avoid deviations above one meter. This is consistent with the comparison with the T-UGOm model, for which the SWOT deviation is of the same order over the whole swath. Moreover, the comparison with the model enables a spatial quantification of the SWOT disagreements, and we finally detect some potential errors in the model that could not have been revealed by tide gauge comparisons alone.
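The sigma0-based dynamic mask can be sketched as a simple threshold on backscatter: at SWOT's near-nadir Ka-band incidence, water is typically a bright (specular) target while land is darker. The function and the default threshold value below are our own illustrative assumptions, not the authors' method, which selects the threshold per cycle and pass.

```python
import numpy as np

def dynamic_water_mask(sigma0_db, threshold_db=5.0):
    """Per-pass water mask from a sigma0 threshold (illustrative sketch).

    sigma0_db: 2D array of backscatter in dB (NaN for invalid pixels).
    Pixels brighter than the threshold are flagged as water; the 5 dB
    default is a placeholder, not a calibrated value.
    """
    sigma0 = np.asarray(sigma0_db, dtype=float)
    return np.isfinite(sigma0) & (sigma0 > threshold_db)
```

Because the mask is recomputed for each pass, it follows the tide-dependent shoreline, which is what makes this kind of approach useful for spotting intertidal zones.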
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: SWOT Lake Processing and Products

Authors: Claire Pottier, Dr Mathilde De Fleury, Manon Delhoume, Dr Jean-François Crétaux, Dr Roger Fjørtoft, Dr Damien Desroches, Lucie Labat-Allée
Affiliations: CNES, CS Group
The SWOT altimetry mission [1, 2] was launched in December 2022 and, since the end of March 2023, has provided a product specific to lakes [3], globally and repeatedly, with two or more observations per 21-day orbit cycle. It is computed from the pixel cloud [4], which provides height, corrections and uncertainties for pixels classified as water and for pixels in a buffer zone around these water bodies, as well as in systematically included areas (defined by an a priori water mask), for each water feature observed by SWOT and not assigned to a regular river. The lake product consists of polygon shapefiles, delineating the lake boundary and providing the area and average height of each observed lake. A Prior Lake Database (PLD) [5] makes it possible to link the SWOT observations to known lakes and monitor them over time. The first two steps of the lake processing are crucial in the shaping of the lake object [6]. The first one is an accurate selection of pixels from the pixel cloud. This includes the removal of pixels related to rivers: these are mainly defined in the Prior River Database [7], but some remain outside and are currently handled in the lake processing. This selection also highly depends on the quality flags of the pixels. As an example, “specular ringing” pixels, i.e. pixels for which the interferogram quality is degraded due to range point-target-response side-lobe ringing from a bright target near nadir, introduced large errors in water surface elevation as well as area. They are now discarded if they fall outside a thresholded prior water probability mask, but kept otherwise, to better reflect the actual extent of the lake feature. The second step is to identify all separate water regions in the water mask previously obtained. After a simple separation of each water region, an additional segmentation based on height is performed to handle lakes that are layovered in radar geometry.
To separate such mixed lakes with different heights, the Otsu method [8] is used to perform automatic height histogram thresholding. This segmentation is not straightforward because the height of some pixels must be discarded due to their classification (dark water or low-coherence pixels) or quality flags, yet these pixels are part of the observation of the lake and must therefore be kept to compute the lake extent. Moreover, the lake processing now uses the Lake-TopoCat dataset [9] to improve the assignment of detected water objects to PLD lakes: this dataset provides, for each prior lake, polygons that take into account hydrological constraints and topography, while the previous dataset was based on distances between lakes. A first global performance assessment was presented at the SWOT Science Validation Meeting in June 2024, achieving accuracies meeting or close to the Science Requirements [10]. Since then, the algorithms and auxiliary data used for operational processing have been further improved. Here we present these evolutions and their impact on performance, focusing mainly on the estimation of lake water surface elevation. Some options for future improvements are also addressed. References: [1] L.-L. Fu, D. Alsdorf, R. Morrow, E. Rodriguez, and N. Mognard, “SWOT: The surface water and ocean topography mission: Wide-swath altimetric elevation on Earth,” Jet Propulsion Laboratory, Nat. Aeronautics Space Administ., Washington, D.C., USA, JPL Publication 12-05, 2012. [2] M. Durand, L. Fu, D. P. Lettenmaier, D. E. Alsdorf, E. Rodriguez, and D. Esteban-Fernandez, “The surface water and ocean topography mission: Observing terrestrial surface water and oceanic submesoscale eddies,” Proc. IEEE, vol. 98, no. 5, pp. 766–779, May 2010. [3] Centre National d’Etudes Spatiales, “SWOT Level 2 KaRIn high rate lake single pass vector science data product (L2_HR_LakeSP)”, SWOT-TN-CDM-0674-CNES, Toulouse, France, 2024. 
[4] Jet Propulsion Laboratory, "SWOT Level 2 KaRIn high rate water mask pixel cloud product (L2_HR_PIXC)," JPL D-56411, Pasadena, CA, 2024. [5] J. Wang, C. Pottier, C. Cazals, M. Battude, Y. Sheng, C. Song, Md S. Sikder, X. Yang, L. Ke, M. Gosset, R. Reis, A. Oliveira, M. Grippa, F. Girard, G. Allen, S. Biancamaria, L. Smith, J.-F. Crétaux, T. Pavelsky, “The Surface Water and Ocean Topography Mission (SWOT) Prior Lake Database (PLD): Lake mask and operational auxiliaries”, Water Resources Research, In review. [6] Centre National d’Etudes Spatiales, "Algorithm Theoretical Basis Document: L2_HR_LakeSP Level 2 Processing," SWOT-NT-CDM-1753-CNES, Toulouse, France, 2024. [7] E. H. Altenau, T. M. Pavelsky, M. T. Durand, X. Yang, R. P. d. M. Frasson and L. Bendezu, "The Surface Water and Ocean Topography (SWOT) mission River Database (SWORD): A global river network for satellite data products," Water Resources Research, vol. WRCS25408, 2021. [8] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Sys. Man. Cyber., vol. 9, no. 1, p. 62–66, 1979. [9] S. Sikder, J. Wang, G. H. Allen, Y. Sheng, D. Yamazaki, C. Song, M. Ding, J.-F. Crétaux and T. M. Pavelsky, "Lake-TopoCat: A global lake drainage topology and catchment," Earth System Science Data Discussion, 2023. [Online]. Available: https://essd.copernicus.org/preprints/essd-2022-433/essd-2022-433.pdf. [10] Jet Propulsion Laboratory, “Surface water and ocean topography mission (SWOT): Science requirements document,” JPL D-61923, Rev. B, SWOT NASA/JPL Project, Pasadena, CA, 2018.
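The height-based segmentation step described above can be sketched with a compact NumPy reimplementation of the classical Otsu algorithm [8]; this is an illustrative sketch, not the operational SWOT lake processor, and the NaN handling (flagged pixels excluded from the histogram but kept for the extent) is our assumption about how discarded heights could be represented:

```python
import numpy as np

def otsu_height_threshold(heights, nbins=128):
    """Classical Otsu threshold on a 1D array of pixel heights.

    NaN heights (e.g. dark-water or low-coherence pixels) are excluded
    from the histogram but would still count toward the lake extent.
    """
    h = heights[np.isfinite(heights)]
    counts, edges = np.histogram(h, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = counts.sum()
    w0 = np.cumsum(counts)                    # pixels below each candidate cut
    w1 = total - w0                           # pixels above
    m0 = np.cumsum(counts * centers)          # cumulative height sums
    mu0 = np.where(w0 > 0, m0 / np.maximum(w0, 1), 0.0)
    mu1 = np.where(w1 > 0, (m0[-1] - m0) / np.maximum(w1, 1), 0.0)
    between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
    return centers[np.argmax(between)]

# Two layovered lakes with a ~2 m height difference (synthetic example)
rng = np.random.default_rng(0)
heights = np.concatenate([rng.normal(100.0, 0.4, 500),
                          rng.normal(102.0, 0.4, 500),
                          [np.nan] * 50])     # flagged pixels, kept for extent
t = otsu_height_threshold(heights)
```

The returned threshold falls between the two height modes, splitting the mixed water region into two lake objects.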

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Toward Comprehensive Understanding of Air-Sea Interactions Under Tropical Cyclones: On the Importance of High Resolution 2D Sea Surface Height measurements

Authors: Bertrand Chapron, Clément Combot, Alexis Mouche, Dr. Nicolas Reul
Affiliations: Hokkaido University, Ifremer
Cold wakes are distinctive footprints of the air-sea interactions occurring during the passage of moving Tropical Cyclones (TCs), with intense near-inertial waves dispersing through the ocean column. Strong shear currents at the base of the mixed layer can reach 1–3 m/s, penetrate deeply into the thermocline and erode the initial stratification, quite systematically leaving persistent sea surface anomalies: cooling, chlorophyll bloom and salinity rise. At depth, isopycnal displacements leave thermocline ridges that strengthen the injection of subsurface anomalies, leading to measurable sea level depressions. Both barotropic (column-integrated current) and baroclinic modes contribute to sea surface height anomalies (SSHA), but the latter is largely dominant in open ocean conditions. While sea surface temperature anomalies (SSTA) have been extensively documented, SSHA remain somewhat overlooked. In the wakes of TCs, baroclinic signatures mostly range around 10–20 cm and peak at 40 cm. Deeper anomalies correspond to the barotropic response. These measurable signatures are directly linked to the inner-core TC dynamics and the ocean stratification. Moreover, TC SSHAs are persistent enough to be easily monitored by the current fleet of altimeter instruments, capabilities largely augmented by SWOT's recent swath-altimetric enhancements. Indeed, the SWOT instrument can provide unique 2D maps of TC wakes. This eases the analysis and the automation of the SSHA extraction method, enabling more precise TC wake SSHA monitoring. Importantly, the measured SSHA dynamics integrates and reduces the air/sea interactions during the TC passage into a single observable metric. SSHA mostly encodes the cyclonic wind forcing and the interior ocean state, providing new means to better analyze and understand air-sea interactions under TCs. 
In this presentation, we shall highlight SWOT's new capabilities, especially during its fast sampling phase, to uniquely provide high-resolution spatio-temporal 2D SSHA imprints generated in the aftermath of TCs on time scales of weeks. The 2D SWOT SSHA measurements more precisely evidence the SSHA depression in the center of the storm wake, balanced by opposite SSHAs outside of the wake. Moreover, the propagation of the SSHA depressions can be compared jointly with SSTAs and also with sea surface salinity anomalies. All signatures are generally found to propagate westward at a speed depending on the TC latitude, suggesting that the resulting TC disturbances transport all upper ocean material properties (SST, SSS).
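The latitude dependence of the westward drift mentioned above is consistent with long baroclinic Rossby wave dynamics; a back-of-the-envelope estimate can be sketched as follows (the first-baroclinic gravity wave speed of ~2.8 m/s is a typical open-ocean value assumed by us, not given in the abstract):

```python
import math

def long_rossby_speed(lat_deg, c1=2.8):
    """Westward phase speed of long first-baroclinic Rossby waves,
    c = beta * Rd**2 with Rd = c1 / |f| (standard mid-latitude scaling).

    lat_deg : latitude in degrees; c1 : baroclinic gravity wave speed (m/s).
    """
    omega, a = 7.2921e-5, 6.371e6          # Earth rotation rate (rad/s), radius (m)
    phi = math.radians(lat_deg)
    f = 2.0 * omega * math.sin(phi)        # Coriolis parameter
    beta = 2.0 * omega * math.cos(phi) / a # meridional gradient of f
    rd = c1 / abs(f)                       # baroclinic Rossby radius
    return beta * rd ** 2                  # westward speed, m/s

# Wakes at lower latitude drift westward noticeably faster
c15, c30 = long_rossby_speed(15.0), long_rossby_speed(30.0)
```

The ratio between the two latitudes (several-fold faster at 15° than at 30°) is the kind of latitude dependence the SSHA propagation analysis can test against.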

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A Variational method for reconstructing and separating Balanced Motions and Internal Tide from wide-swath Altimetric Sea Surface Height Observations

Authors: Valentin Bellemin-Laponnaz, Florian Le Guillou, Clément Ubelmann, Pr. Éric Blayo, Dr. Emmanuel Cosme
Affiliations: Institut des Géosciences de l'Environnement - UGA/CNRS/IRD/INRAE, Datlas, Laboratoire Jean Kuntzmann - UGA/CNRS/INRIA
Mapping Sea Surface Height (SSH) from satellite altimetry is crucial for numerous scientific and operational applications. At the fine scales observed by wide-swath altimeters, SSH variations are primarily driven by two types of dynamics: nearly geostrophic balanced motions and the wavy motion of the internal tide. These two processes influence ocean dynamics in different ways and their contributions to SSH variations must be separated for applications. While this separation is now standard practice with high-frequency outputs of numerical simulations, it remains an unresolved challenge for SSH maps derived from satellite observations, which are sparse in both space and time. This study introduces an innovative method to separate balanced motions and internal tide components in SSH altimetric observations, including wide-swath altimetry. The method is based on a data assimilation system combining two models: a quasi-geostrophic model for the balanced motions and a linear shallow-water model for the internal tide. The inversion is performed using a weak-constraint four-dimensional variational (4DVar) approach, with two different sets of control parameters adapted to each regime. A major expected benefit of this approach is its potential to capture the non-stationary part of the internal tide component. The method produces hourly SSH and surface velocity fields for both components over a specified domain. The study focuses on the North Pacific Ocean, a region characterized by strong mesoscale and sub-mesoscale activity, including the two dynamics of interest. First, Observing System Simulation Experiments (OSSEs) were conducted over 20°×20° domains surrounding the SWOT crossovers. These experiments included both conventional nadir and wide-swath SSH measurements, interpolated from the LLC4320 MITgcm simulation. The performance of the mapping algorithm was evaluated by comparing its outputs with the MITgcm reference fields. 
In a subsequent step, the method is planned to be applied to real SWOT SSH measurements, with the Californian Current System 1-day phase crossover serving as a case study. As this work forms part of a PhD project, the most recent results will be presented at the conference.
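A schematic, textbook form of the weak-constraint 4DVar cost function underlying such a two-model inversion may help fix ideas (notation ours, not taken from the abstract; x_b and x_t denote the control vectors of the quasi-geostrophic and shallow-water components):

```latex
J(\mathbf{x}_b, \mathbf{x}_t) =
  \sum_{k} \left\| H_k(\mathbf{x}_b + \mathbf{x}_t) - \mathbf{y}_k \right\|^2_{\mathbf{R}_k^{-1}}
  + \left\| \mathbf{x}_b \right\|^2_{\mathbf{B}_b^{-1}}
  + \left\| \mathbf{x}_t \right\|^2_{\mathbf{B}_t^{-1}}
```

Here H_k maps the combined state to the SSH observations y_k (nadir or wide-swath) at time k, R_k is the observation-error covariance, and the weak-constraint background terms B_b and B_t encode the respective model-error statistics for each regime, which is what allows the internal-tide component to remain non-stationary.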

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Kilometer and Sub-kilometer Scale Precipitation Observations by the SWOT Ka-band Radar Interferometer: Detection and Precipitation Rate Retrieval Using Artificial Intelligence Approaches.

Authors: Bruno Picard, Colin Aurélien, Romain Husson, Gerald Dibarboure
Affiliations: Fluctus Sas, CLS, CNES
Satellite altimetry missions have measured sea surface height (SSH) at global scale with increasing accuracy since 1992. The attenuation of the microwave radar pulse depends on the moist air refractivity index, which is strongly impacted by precipitation. A new step forward was made with the launch, at the end of 2022, of the Surface Water and Ocean Topography (SWOT) mission. Thanks to the technological breakthrough enabled by the Ka-band Radar Interferometer (KaRIn), the Ka-band backscattering coefficient is now available on a two-dimensional grid spanning 70 km on either side of the nadir track, with two grid cell resolutions available (250 m and 2 km). By way of comparison, the swath of KaRIn is smaller than that of the Ka-band precipitation radar (KaPR) of the Global Precipitation Measurement mission (250 km), without its range slicing capability (250/500 m for the KaPR) but with a much finer spatial resolution (5 km for the KaPR). We will present results on the impact of precipitation on the SWOT mission. Defining a method to estimate the attenuation from the radar backscatter coefficient, we will quantify the threshold above which the SSH measurements are no longer valid and characterize the occurrences of non-valid observations, statistically and geographically. Then, we will present a first approach to retrieve precipitation rates from SWOT observations. As in similar work on Sentinel-1 (Colin et al. 2024, submitted), a convolutional neural network for the regression of precipitation rate is trained on a large dataset of SWOT observations collocated with ground observations from NEXRAD, the weather radar network operated in the USA. This model is trained in a multi-objective framework to mitigate the discrepancies between both types of sensors. 
The model is constrained to ensure that the mean and maximum precipitation rates both match the ground truth, and contains an adversarial loss to provide an implicit prior on the ground truth distribution. The precipitation rate is then used to find the threshold above which the observation of SSH is no longer valid. This presentation will highlight how the experience gained from the first year of the SWOT mission could benefit future missions based on the same concept, as well as the broader scientific community focusing on precipitation.
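The mean/maximum-rate constraints described above can be illustrated with a framework-agnostic NumPy sketch; this is a toy stand-in for the actual CNN training loss (the adversarial term is omitted and the weights are hypothetical):

```python
import numpy as np

def rain_rate_loss(pred, truth, w_mean=1.0, w_max=1.0):
    """Toy multi-objective loss for precipitation-rate regression:
    pixel-wise MSE plus penalties tying the scene mean and scene maximum
    of the predicted rain rate to the ground-truth (NEXRAD-like) field.
    """
    mse = np.mean((pred - truth) ** 2)
    mean_term = (pred.mean() - truth.mean()) ** 2   # match scene-mean rate
    max_term = (pred.max() - truth.max()) ** 2      # match peak rate
    return mse + w_mean * mean_term + w_max * max_term

truth = np.zeros((8, 8)); truth[3, 3] = 10.0        # one intense cell
biased = truth * 0.5                                # underestimates the peak
perfect = truth.copy()
```

The extra terms penalize systematic under- or over-estimation of scene statistics more strongly than the pixel-wise MSE alone would.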

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Desaliasing of tides and tidal currents using wide-swath altimetry

Authors: Perrine Abjean, Loren Carrere, Florent Lyard, Gerald Dibarboure
Affiliations: CLS, LEGOS, CNES
The accuracy of tidal models has improved considerably over the last 25 years, but tidal errors remain, mainly in shelf seas and polar regions, where new databases are still valuable for the development of future tide models. In this context, and knowing that tides and tidal currents are a predominant signal in shallow and shelf regions with critical applications and societal interests, this study analyzes the interest of new satellite missions for the observation of tidal signals. The present analysis evaluates the potential for desaliasing tidal signals from various wide-swath satellites. We consider the orbit of the Odysea mission for tidal currents, and the orbits of SWOT and the future S3NG for tidal elevations. The analysis is based on an OSSE experiment using the IBI36 regional simulation of the North-East Atlantic Ocean (provided by Mercator Ocean), which allows taking into account the tidal signal as well as other oceanic variability that can prevent a proper tide estimation from satellite measurements due to crossed aliasing issues. The topography missions studied have different characteristics: SWOT is a single-satellite, non-sun-synchronous mission, while the future S3NG mission will be a two-satellite constellation on a sun-synchronous orbit. It is well known that sun-synchronous nadir missions do not properly sample the tidal signal: they are characterized by bad aliasing frequencies for most tidal waves, and some solar waves are not even observable with these orbits (such as the S1 and S2 waves). The local multiple sampling allowed by the wide swaths of these missions and, in the case of S3NG, by a two-satellite constellation, makes it possible to break these aliasing issues and allows a more accurate observation of tides. 
Regarding the Odysea mission, the aim of the study is to quantify the proportion of tidal currents that will be observed, taking into account the multiple local sampling allowed by the wide swath. The final objective is to assess the interest of Odysea's tidal current measurements for the tidal community and potentially envision assimilation of these data in ocean tidal models.
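The aliasing issue discussed above follows from the classical alias-frequency formula; as an illustrative check (ours, not from the abstract), the function below folds a tidal frequency against a repeat-sampling interval:

```python
def alias_period_days(tide_period_hours, repeat_days):
    """Alias period (days) of a tidal constituent sampled every
    `repeat_days` days: the tidal frequency is folded to the nearest
    integer number of cycles per revisit.
    """
    f = 24.0 / tide_period_hours                     # cycles per day
    f_alias = abs(f - round(f * repeat_days) / repeat_days)
    return float('inf') if f_alias == 0.0 else 1.0 / f_alias

# M2 (12.4206 h) sampled at the TOPEX/Jason 9.9156-day repeat: ~62 days
m2_alias = alias_period_days(12.4206, 9.9156)
# S2 (12.0000 h) with an exactly 1-day-commensurate sun-synchronous
# sampling: infinite alias period, i.e. the wave is not observable
s2_alias = alias_period_days(12.0, 1.0)
```

Multiple local samplings inside a wide swath effectively add extra, shorter revisit intervals, which shifts these alias frequencies and is what breaks the degeneracy for the solar constituents.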

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Calibration of the SWOT systematic errors: current performances and limitations.

Authors: Matthias Raynal, Benjamin Flamant, Pierre Prandi, Etienne Jussieu, Emeline Cadier, Clément Ubelmann, Gerald Dibarboure
Affiliations: CNES, CLS, DATLAS
The launch of the Surface Water and Ocean Topography (SWOT) mission in December 2022 represented a major breakthrough in satellite altimetry. Its Ka-band Radar Interferometer (KaRIn) provides, for the first time, 2-dimensional images with unprecedented resolution and precision. SWOT will provide the first global view of freshwater bodies to monitor water resources and better characterize the water cycle. Over the ocean, it complements existing measurements from the nadir altimeter constellation to observe small-scale topography structures (down to 15 km), which are important contributors to air-sea interactions and ocean vertical mixing. However, in comparison with nadir altimetry, the KaRIn measurements also contain new sources of error that contaminate the topography measurements at spatial wavelengths above a thousand kilometers. These are referred to as the systematic errors. For oceanographers interested in SWOT measurements, they are of secondary importance as they do not impact the small-scale topography measured. However, for hydrology applications they significantly contaminate the measurements of water-body height and slope. In the SWOT ground processing center, this correction is ensured by the crossover calibration (XCAL) algorithm. It relies on the analysis of uncalibrated Sea Surface Height (SSH) differences at KaRIn crossovers, plus a comparison with the SWOT nadir measurement. The objective here is to present the results of the Level-2 XCAL calibration assessment over both land and ocean surfaces and to characterize the residual errors after calibration. To achieve this, several studies have been conducted, for example the comparison of KaRIn topography measured over land with respect to a Digital Elevation Model (DEM) and the definition of a virtual continent over the Pacific Ocean. 
The results obtained are compared with the mission requirements defined for hydrology and discussed to highlight their complex variability, driven among other factors by orbital parameters (such as the beta angle) and by the quality of KaRIn topography measurements over the ocean (and thus the performance of the geophysical models and corrections used to calculate it). We finally discuss how this calibration method can be improved in the near future.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A new chapter in satellite altimetry: monitoring small lakes and coastal zones with SWOT HR PIXC data

Authors: Simon Jakob Köhn, Karina Nielsen
Affiliations: DTU Space
The Surface Water and Ocean Topography (SWOT) mission is the first satellite mission to provide 2D spatially distributed elevation measurements on a 21-day orbit. At higher latitudes like Denmark, even the smallest lakes are typically covered by 3.5 different passes within one orbital cycle, resulting in revisit times of ca. 9 days. Small lakes play a vital role in the global freshwater cycle and are thus important to understand, given rapidly accelerating climate change affecting freshwater dynamics worldwide and the increased demand for freshwater driven by population growth. Using SWOT’s unprecedented abundance of water surface elevation (WSE) measurements, we validate a method for deriving high-accuracy, robust WSE time series from the SWOT HR PIXC 2.0 data on 40 gauged Danish lakes with surface areas between 0.25 and 40 km². Furthermore, we explore the minimum lake size required for SWOT to correctly measure the WSE. With sufficient aggregation of individual point measurements, we find that spatial undulations do not deteriorate the accuracy of the WSE time series with respect to gauges. We quantify the WSE time-series performance using the summary measures root mean square error (RMSE) and Pearson correlation coefficient (PCC). We find a median RMSE of 5.76 cm and PCC of 0.93 using all 40 lakes and 13 months of SWOT data. Besides the enormous potential to observe even the smallest of lakes, SWOT is the first satellite to effectively bridge the gap between inland water and the ocean. Its 2D elevation measurements show spatial WSE variations in complex coastal areas that can be used to constrain hydrological models and boost flood prevention measures. Limfjorden is a large fjord/estuary in Denmark, stretching 180 km with a tidal signal of up to 30 cm. It features multiple side arms and encompasses the island of Mors. Wind patterns and local land boundaries, such as chokepoints, largely influence the spatial WSE characteristics of Limfjorden. 
SWOT, for the first time, enables us to observe them. Significant water build-up is especially observable at chokepoints and at interfaces with the open ocean under appropriate conditions. WSE levels along two pseudo-centerlines (north and south of Mors) are validated using gauge data corresponding to each respective SWOT acquisition time. Our analysis shows that water levels remain consistent and converge to the same value whether traveling north or south of Mors. Particularly interesting are three SWOT acquisitions before and after the October 2023 Baltic Sea storm, allowing us to investigate its build-up and aftermath. Furthermore, we investigate how SWOT can observe the WSE in Øresund, particularly in and around Copenhagen harbor. We can spatially observe a tide surge passing through Øresund with a WSE gradient of ca. 1.5 m. In Copenhagen, we observe an abrupt drop in WSE instead of the gradual decline seen around the island of Amager, which shields Copenhagen. The abrupt drop is caused by a tide lock installed in the southern part of Copenhagen. Harbors in Denmark, and worldwide, are exposed to an ever-increasing flood risk, necessitating mitigation work and better flood and tide modeling. Our example proves that SWOT can observe small, complex areas, such as harbors, with high spatial fidelity. This new data can constrain flood models and help governments and local communities enact better mitigation measures.
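The two summary measures used in this validation, RMSE and PCC, are standard and can be computed with plain NumPy; the gauge/SWOT series below are synthetic stand-ins for illustration, not the study's data:

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two WSE time series."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def pcc(a, b):
    """Pearson correlation coefficient between two WSE time series."""
    return float(np.corrcoef(a, b)[0, 1])

# Synthetic seasonal WSE signal and a noisy SWOT-like retrieval of it
rng = np.random.default_rng(1)
gauge = np.sin(np.linspace(0, 4 * np.pi, 50))
swot = gauge + rng.normal(0, 0.05, 50)
```

With 5 cm noise on a ~1 m seasonal signal, the RMSE stays near the noise level while the PCC remains close to one, the same regime as the reported 5.76 cm / 0.93 medians.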

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Is Ultrawide-Swath Precise 2D Altimetry Possible using Multiple GNSS-R Satellites in Flight Formation?

Authors: Estel Cardellach, Yiqing Wan, William Hill, Nicolo Bernardini, Martin Unwin, Camille Pirat
Affiliations: Institute of Space Sciences (ICE-CSIC, IEEC), Surrey Satellite Technology Ltd. (SSTL), European Space Agency (ESA)
GNSS reflectometry (GNSS-R) is an opportunistic technique that benefits from the signals transmitted by navigation satellites (GNSS constellations) at L-band (~0.2 m wavelength) to remotely sense different variables of the Earth surface in a cost-effective way (e.g., only the receiving chain is deployed) from small satellites (demonstrated from platforms as small as 3-unit CubeSats). One of the strong points of this remote sensing approach is its multi-static capability: from a single receiver one can potentially collect information from as many reflection points as there are GNSS satellites in view, covering different regions simultaneously. However, being opportunistic signals not designed for remote sensing of the Earth, some of their features are suboptimal (e.g., available power and bandwidth). The forward-scattering geometry of the observations induces what is called the delay-Doppler ambiguity, that is, the impossibility of mapping the received power (at given delay and Doppler frequencies) to a unique point/zone on the surface. In order to investigate whether a small swarm of six GNSS-R satellites could break these ambiguities and thus provide geo-located 2D information, the HydroSwarm mission concept was proposed to ESA in response to the OSIP ‘The Preparation Campaign on CubeSat swarm mission concepts’ Call in February 2023. A short contract (ESA CN 4000142425/23/NL/AS/ov, Oct 2023 - Mar 2024) initiated early studies, including the identification of feasible formation-flight orbit configurations and signal processing approaches, the optimal performance of which was tested with the impulse-response approach. GLITTER is a Horizon Europe MSCA Doctoral Network project which kicked off in March 2024 and will continue studying the potential of swarms of GNSS-R satellites over four years. Should this concept prove feasible and show sufficient performance, it would be very attractive due to its unprecedented broad-swath capabilities. 
If each of the transmitter-surface-swarm links could generate 2D information across a ~100 km wide swath, the joint swath considering all the simultaneously visible GNSS satellites would cover on the order of ~1000 km, with few gaps and some overlapping regions (good for redundancy). The HydroSwarm study identified a technique to break the ambiguity and generate 2D information, and also proposed a precise-altimetry processing approach. Simple simulation scenarios confirmed that this altimetric technique could optimally yield cm-level altimetric resolution at 1 km spatial resolution over wide-swath zones with surface height variations (anomalies) of the order of 10 cm. The studies are now continuing under the GLITTER project, to add elements not considered during the initial simulations, such as instrumental effects (clock synchronization, antenna phase centre and phase patterns) and other scattering-related effects (coherence issues) that will certainly degrade the optimal performance. The technique will be presented, showing the results from its initial simple simulations and the progress towards adding more realistic effects, while identifying the most critical aspects to be considered.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Examining ice breakup on Arctic rivers using SWOT’s high-resolution altimetry

Authors: Linda Christoffersen, Prof. Louise Sandberg Sørensen, Prof. Peter Bauer-Gottwein, Dr. Karina Nielsen
Affiliations: Technical University Of Denmark, DTU Space, University of Copenhagen
The primary goal of this research is to investigate the capabilities of the Surface Water and Ocean Topography (SWOT) mission in observing Arctic rivers during the critical ice breakup and melt season, combined with optical satellite imagery. Arctic rivers, typically ice-covered for much of the year, undergo a rapid and complex breakup process in spring. This process significantly alters the water surface elevation (WSE) and flow patterns, influencing hydrological and cryospheric processes in these regions. This research aims to provide a deeper understanding of SWOT's capabilities to monitor ice breakup and to advance our ability to monitor and predict these changes. Using SWOT’s high-resolution interferometric synthetic aperture radar (InSAR) data, this study explores the potential of SWOT to measure ice and water surface elevations in Arctic rivers. SWOT's spatial resolution of approximately 10 meters allows for precise measurements of ice and water surface elevation, enabling the monitoring of ice dynamics during the breakup season. This level of detail is not achievable with conventional satellite radar altimetry. In this research, we investigate how the breakup of ice within a single channel of a braided river system affects the flow dynamics and water levels in neighbouring channels. By analyzing both ice and water surface elevations over time, we track the evolution of the WSE during the ice breakup period. SWOT’s high-resolution data enables us to observe the complex changes in ice cover, from fully intact to partially broken and ice-free conditions, and how these stages impact the flow dynamics within the river system. This work contributes to a better understanding of the hydrological and cryospheric processes at play during the ice breakup season. By tracking the changes in both ice and water elevations on the Lena River, this research enhances our ability to model the flow dynamics of Arctic rivers in a changing climate.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Global assessment of SWOT performance at the small scale via synergy with surface chlorophyll observations

Authors: Aurélien Deniau, Francesco Nencioli, Pierre Prandi, Dr Maxime Ballarotta, Matthias Raynal
Affiliations: CLS, CNES
Altimetry entered a new era with the launch of the Surface Water and Ocean Topography (SWOT) mission in December 2022. With a wide swath of 120 km, the Ka-band Radar Interferometer (KaRIn) provides, for the first time, 2D observations of sea surface elevation at a resolution of the order of the kilometer scale. This allows for the observation of small-scale ocean processes (< 100 km) currently not resolved by nadir altimeter products, either single-satellite along-track or multi-satellite gridded fields, such as those generated by the DUACS production system. Currently, one of the main challenges with SWOT observations is to assess the nature of the small-scale features detected by KaRIn and to prove the benefits of two-dimensional measurements compared to standard along-track ones. Specifically, it remains unclear how much of the observed sea level signal at scales below 100 km can be effectively associated with surface ocean currents. To address this challenge, we combined SWOT L3 2 km sea level observations collected during the “Science” phase (21-day repeat orbit) with remote sensing observations of surface ocean tracers. The hypothesis underlying this approach is that the two fields are tightly related: assuming geostrophic balance, SSH is directly associated with surface velocities; in turn (to first order), these currents regulate the geographical distribution of the surface tracers. The tracer included in this study is chlorophyll concentration, retrieved from the CMEMS GlobColour multi-satellite product. As this product has a resolution of the order of the kilometer scale, chlorophyll concentration can resolve sea surface structures down to scales analogous to those observable by SWOT. A comparison between DUACS SSH and surface chlorophyll was also included in the analysis and used as a reference. 
Our analysis investigates the spatial distribution of the correlation coefficient between SSH (both SWOT and DUACS) and chlorophyll concentration over segments of 120 km along the SWOT swath. As the initial results revealed that the distribution of this correlation was primarily driven by the large-scale meridional gradient, the altimetry and chlorophyll fields were band-pass filtered to retain only the scales between 100 and 15 km relevant to the study. Our results show that the SSH/CHL correlations obtained for the unfiltered products have overall similar values and geographical patterns for both the DUACS and SWOT products. The strongest negative correlations are found over the upwelling regions and western boundary currents, while strong positive ones occur in the subtropical bands of the southern Indian and Pacific Oceans. The same comparison performed with the band-pass filtered fields shows strongly degraded performance for the DUACS product, while the SWOT product maintains larger correlation coefficients and a similar geographical distribution. This is a clear indication that SWOT observations perform better than the DUACS product at capturing the small-scale circulation at scales between 100 and 15 km over most of the global ocean. A notable exception is the equatorial band, where the DUACS product shows stronger correlations. Detailed analysis of the SSH fields indicated that in this band the small scales observed by SWOT are predominantly ageostrophic signals (e.g. internal waves) which are not associated with surface currents and, consequently, do not affect the distribution of the surface CHL field.
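The band-pass step described above can be emulated with a simple difference of running means; this is a crude stand-in for whatever filter the authors actually used, assuming a 2 km grid so that ~100 km and ~15 km correspond to 51- and 7-point windows:

```python
import numpy as np

def running_mean(x, w):
    """Centered moving average with edge padding (odd window w)."""
    pad = w // 2
    xp = np.pad(x, pad, mode='edge')
    return np.convolve(xp, np.ones(w) / w, mode='valid')

def band_pass(x, short_pts=7, long_pts=51):
    """Retain scales between the two windows: smooth away sub-15 km
    noise, then remove the >100 km large-scale component."""
    smooth = running_mean(x, short_pts)
    return smooth - running_mean(smooth, long_pts)

# Synthetic 240 km segment on a 2 km grid: large-scale trend + 40 km eddy
km = np.arange(0, 240, 2.0)
ssh = 0.002 * km + 0.1 * np.sin(2 * np.pi * km / 40.0)
chl = -0.1 * np.sin(2 * np.pi * km / 40.0)     # tracer anti-correlated at eddy scale
r = np.corrcoef(band_pass(ssh), band_pass(chl))[0, 1]
```

Without the band-pass, the correlation would be dominated by the large-scale trend; after filtering, the strong negative eddy-scale relationship emerges, mirroring the effect reported for the filtered SWOT/CHL comparison.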

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: SWOT KaRIn Level-3 Calibration Algorithm and Updates

Authors: Cécile Anadon, Antoine Delepoulle, Clément Ubelmann, Marie-Isabelle Pujol, Gérald Dibarboure
Affiliations: Collecte Localisation Satellites (CLS), Datlas, Centre National d'Etudes Spatiales (CNES)
Sea Surface Height Anomaly (SSHA) images provided by KaRIn can be biased or skewed by a few centimeters to tens of centimeters. The main source of these errors is an uncorrected satellite roll angle, which explains why the KaRIn errors mainly consist of a linear variation with cross-track distance. There are various other sources of error, such as interferometric phase biases or thermo-elastic distortions in the instrument baseline and antennas. To mitigate these topography distortions, a calibration mechanism is applied. Two variants of KaRIn calibration have been developed: the SWOT mono-mission or Level-2 algorithm, and the multi-mission or Level-3 algorithm. As the names imply, the former is used in the SWOT ground segment and L2 products, whereas the latter is specific to Level-3 processors. The L2 algorithm was primarily designed to meet hydrology requirements, and it is considered optional over the ocean since it is not necessary to meet SWOT's ocean requirements from 15 to 1000 km. This algorithm is based on SWOT data only, because a ground segment cannot depend on external satellites. In contrast, the Level-3 algorithm was designed to leverage better algorithms and external satellites: not only Sentinel-6, the so-called climate reference altimeter, but also all other altimeters in operation (Sentinel-3A/3B, HY2B/C, SARAL, CryoSat-2). The L3 correction is generally more robust and stable than the Level-2 variant thanks to the thousands of daily multi-mission crossover segments provided by the constellation. The Level-3 calibration algorithm is regularly updated to solve failure cases and to improve calibration quality. For version 1 of the Level-3 product, published in July 2024, static and orbit corrections were added into the calibration for geodesy applications. 
The version 2 updates, published in December 2024/January 2025, improve the editing applied before calibration so that only valid pixels are used, improve coverage by including eclipse data in the calibration, and reduce the number of degrees of freedom to focus on systematic errors rather than geophysical error residuals.
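The dominant roll-induced error described above is, to first order, a linear function of cross-track distance. A per-line least-squares removal of such a trend can be sketched as follows; this is purely illustrative, not the operational L2/L3 algorithm, and the function and argument names are hypothetical:

```python
import numpy as np

def remove_linear_crosstrack_trend(ssha, cross_track_km):
    """Fit and remove a per-line linear variation with cross-track
    distance from an SSHA image (an illustrative roll-error proxy).

    ssha: 2D array (along_track, cross_track), may contain NaNs
    cross_track_km: 1D array of cross-track distances
    """
    calibrated = np.empty_like(ssha, dtype=float)
    for i, row in enumerate(ssha):
        valid = np.isfinite(row)
        # Least-squares slope + offset against cross-track distance
        coeffs = np.polyfit(cross_track_km[valid], row[valid], 1)
        calibrated[i] = row - np.polyval(coeffs, cross_track_km)
    return calibrated
```

The actual L2/L3 calibrations constrain such corrections with crossover data rather than removing all linear content, which would also remove real geophysical slopes.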
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: The spatial organization of Sargassum aggregations by ocean frontal dynamics: insights from SWOT data

Authors: Pierre-Etienne Brilouet, Julien Jouanno
Affiliations: CNRS - LEGOS, Université de Toulouse, LEGOS (CNES/CNRS/IRD/UT3)
Unprecedented massive landings of Sargassum floating algae have been observed since 2011 in large amounts off the coasts of the Lesser Antilles, Central America, Brazil and West Africa, with tremendous negative environmental and socioeconomic impacts. Satellite remote sensing is essential for observing, understanding and forecasting the extent of Sargassum blooms in the Atlantic. Currently, Sargassum detection by remote sensing is mainly based on ocean color indices which rely on the difference in optical properties between Sargassum and the surrounding waters. At the Tropical Atlantic basin scale, the observability of Sargassum is assessed with the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Ocean and Land Colour Instrument (OLCI) sensors, with spatial resolutions of 1 km and 300 m, respectively. However, these optical data can only be acquired during the day and are deeply affected by clouds and their shadows, which are prevalent over the region of interest. In this context, we propose an innovative approach based on Synthetic Aperture Radar (SAR) images to improve the detection of Sargassum rafts and better understand the structuring of the mats in the open ocean. Indeed, SAR provides high-resolution observations of the ocean surface during day and night, regardless of weather conditions and cloud cover. The SAR backscatter signal captures the sea surface roughness signature of the Sargassum mats, most probably because they inhibit the small waves at the ocean surface. The capacity of the SAR to detect Sargassum is verified using a multi-sensor approach, through comparison with ocean-color-based Sargassum detections. In this study, the emphasis is on the recent Surface Water and Ocean Topography (SWOT) mission. Indeed, the SWOT mission is a breakthrough in radar remote sensing, as the onboard sensors provide, on the same wide swath, both the sea surface height (SSH) and the backscatter signal at high resolution (250 m).
This provides an invaluable framework for investigating the spatial organization of Sargassum mats and the associated frontal ocean dynamics. Once our Sargassum identification algorithm had been validated, we focused on selected case studies in order to improve basic knowledge of the processes driving Sargassum transport and aggregation, especially the relative contributions of oceanic frontal dynamics and winds in shaping the Sargassum mats at the ocean surface.
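Because floating mats damp the small surface waves, they should appear as locally dark patches in SAR backscatter. A minimal, purely illustrative sketch of such a detection is given below; the scene-median background and the 3 dB threshold are assumptions for illustration, not values from this study:

```python
import numpy as np

def detect_dark_rafts(sigma0_db, drop_db=3.0):
    """Flag pixels whose backscatter (dB) is markedly below the scene
    median, consistent with wave damping by floating mats.

    Returns a boolean mask; drop_db is an assumed threshold.
    """
    background = np.nanmedian(sigma0_db)
    return (background - sigma0_db) > drop_db
```

A real detector would use a local (moving-window) background and account for wind-driven variability across the scene.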
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: SWOT hydraulic visibility on a densely instrumented reach of the Rhine canal: accurate flow lines and wave propagation signature

Authors: Thomas Ledauphin, Pierre-André Garambois, Kevin Larnier, Charlotte Emery, Maxime Azzoni, Nicolas Picot, Sabrine Amzil, Jérome Maxant, Roger Fjortof, Herve Yesou
Affiliations: ICube Sertit, INRAE, HYdroMatters, CS Group, CNES
The Surface Water and Ocean Topography (SWOT) satellite mission, a joint endeavor between NASA and the French space agency CNES, is set to revolutionize our understanding of Earth’s water cycle by providing unprecedented data over continental water bodies and oceans. Launched in 2022, SWOT measures the elevation of water bodies with exceptional accuracy and resolution, offering a comprehensive view of surface water dynamics across the globe at scales never before achieved from space. SWOT data, mostly based on the fine pixel cloud (PIXC) acquired during the Cal/Val orbit, i.e. the 101 days from March to July 2023, were exploited in order to analyze the achievable accuracy of hydraulic visibility on a densely instrumented river, and to evaluate the possibility to observe, recognize and characterize hydraulic waves in normal and extreme hydrological contexts. This analysis was carried out over 180 km of the Rhine River at the French-German border. This area is particularly interesting as two hydraulic systems run side by side. The western one, the canalized Rhine, presents a succession of 10 hydropower dams, i.e. a succession of gently sloping basins, and returns to a free river course after the Iffezheim dam. On the eastern side, the Old Rhine, a by-passed segment, flows in more natural conditions, beginning with a first free segment of 50 km. The Old Rhine then consists of heavily anthropized segments of about ten kilometers, flowing at an altitude about ten meters lower. The difference in gradient is recovered by a series of metric weirs. At the level of a hydroelectric dam, the north-south offset is 12-14 meters, with a lateral east-west offset of 8-10 m.
To analyze and validate SWOT-derived parameters, i.e. water height and slope values, a database was assembled containing water elevations from 44 water level gauges collected from French and German agencies (VNF, DREAL, WSV, LUBW and EDF …) along the 180 km long river, providing WSE time series at a 15-minute time step, all leveled in EGM2008, which is the SWOT reference. Several stand-alone stations and additional gauges were also installed, as well as a few limnimetric scales (OECS). Drone flights were also carried out to measure the water surface elevation and slopes. This very rich dataset provides a relatively rare and spatially dense reference measurement of the WSE profile along the Rhine, which is of great interest for analyzing SWOT data. In addition, as the SWORD river database (V12 to V16) presented an inaccurate centerline that did not respect the true river morphology, a customized river database, with a more realistic description of the Rhine's parallel courses and of the position of structures (dams, locks, weirs), was set up and used to reprocess the SWOT HR L2 River products using the open-source RiverObs tool. Precise Water Surface Elevation (WSE) profiles of the river are then obtained at a daily temporal resolution over the Cal/Val period, clearly revealing the channelized Rhine profile with its successive dams, and compared with the numerous available in situ time series. Slope profiles are simply computed by downstream finite difference. Both PIXC and node WSE products are first illustrated in terms of longitudinal profiles. At a given date, a PIXC snapshot depicts the channelized Rhine WSE profile closely matching in situ WSE at the gauging stations, which are well distributed over the various reaches. This figure also displays the hydraulic visibility in terms of temporal variation, over the 103 consecutive days of the Cal/Val period, of longitudinal WS elevation and slope profiles Z(x) and S(x).
These profiles obtained from node products represent relatively accurate hydraulic information at high spatial resolution; nevertheless, the slope profile variability at high spatial frequency might be attributable to measurement noise, since smoother variations are expected for the gradually varied open channel flows observed. Remarkably, SWOT provides quite fine visibility of temporal variations of WSE at each station. Local WSE measurements of the node product are relatively accurate, with a median WSE error of -0.03 m, a standard deviation of 0.42 m, and a 1-sigma (i.e. 68th percentile of absolute errors) of 0.1 m. This is a remarkable accuracy for node data obtained by spatial averaging of the PIXC over a relatively small spatial polygon with the RiverObs algorithm, while the SWOT science requirement on WSE error is 10 cm for data spatially averaged over 1 km2. The node-scale product, i.e. resulting from a spatial aggregation of the PIXC, shows a median WSE error of -0.07 m, a standard deviation of 0.24 m, and a 1-sigma of 0.12 m. For the average WSE at reach scale, the errors are even lower, with a median WSE error of -0.06 m, a standard deviation of 0.43 m, and a 1-sigma (i.e. 68th percentile of absolute errors) of 0.10 m, sufficient to identify hydraulic phenomena with relatively good confidence. Remarkably, two hydraulic propagation phenomena are visible in SWOT snapshots over the studied period, and this is corroborated by the close fit with the available in situ elevation data within the reach of interest. First, the signature of a flood hydrograph propagating from upstream to downstream is clearly visible in the WS profiles, with a local intumescence characterized by locally higher slopes. Second, oscillations in the WSE profile appear to be due to wave propagation from downstream to upstream following operations on the dam downstream of the reach.
This is corroborated by downstream water level temporal variations and also by engineers in charge of hydraulic structures on the Rhine. This highlights the remarkable capability of SWOT to depict fine signatures associated with hydraulic propagation. This impressive hydraulic visibility of WS deformations with SWOT also enables identification of surface signatures triggered by natural hydraulic controls, as illustrated with the "Old Rhine", for which the WSE profile in low flow conditions shows meaningful spatial variability, with main slope breaks clearly corresponding to the signatures of the main hydraulic controls that can be associated with morphological variability of the main channel (riffles, contractions, sandbars and ledges), while flatter WS zones correspond to pools. These results obtained over the Rhine Tier 1 Cal/Val site confirm the quality of SWOT measurements in terms of absolute water level. We also observe that the SWOT signal allows identification of changes in the river profile over time, as a function of flow and river level (slope, flow waves, ...), but also of series of small riffles about one meter high, and successive pools, alluvial deposits, sandbanks, and recharge areas. The consolidation of the results has already started using data from the Science phase over this area and over narrow rivers (between 30 and 60 meters wide, such as the Meurthe, Moselle and Ill rivers in France), which confirm the quality of SWOT measurements. We will soon continue the comparison with an HR lidar topo-bathymetric DEM and an annual analysis of the SWOT profile to see whether changes in the river surface profile can reflect changes in the riverbed topography. All these very promising results highlight the potential of SWOT data for calibrating large-scale hydraulic models on rivers in the near future.
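The slope computation (downstream finite difference) and the error metrics quoted above (median, standard deviation, and 1-sigma as the 68th percentile of absolute errors) can be sketched as follows; the function names are illustrative, not from the authors' processing chain:

```python
import numpy as np

def downstream_slope(wse_m, chainage_m):
    """Slope profile by downstream finite difference of WSE (m)
    over along-stream distance (m)."""
    return np.diff(wse_m) / np.diff(chainage_m)

def wse_error_stats(swot_wse, insitu_wse):
    """Median, standard deviation, and '1-sigma' (68th percentile of
    absolute errors) of SWOT-minus-in-situ WSE differences."""
    err = np.asarray(swot_wse) - np.asarray(insitu_wse)
    return np.median(err), np.std(err), np.percentile(np.abs(err), 68)
```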
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Ocean tides at the interface of inland and coastal waters from wide-swath satellite altimetry

Authors: Michael Hart-Davis, Richard Ray, Daniel Scherer, Christian Schwatke, Tamlin Pavelsky, Denise Dettmering
Affiliations: DGFI-TUM, Geodesy and Geophysics Laboratory, NASA Goddard Space Flight Center, University of North Carolina
The land-sea interaction of water is a complex system that is crucial for a wide range of biogeochemical phenomena, ranging from compound flooding to feeding patterns to pollution distributions. Ocean tides are a natural phenomenon that plays a major role in the dynamics of water in the land-ocean continuum. Studying ocean tides from satellite altimetry has traditionally been difficult in coastal regions, mainly due to the complexity of tides in these regions, limited spatial coverage of in-situ and satellite observations, and land contamination of the satellite radar returns. Significant efforts have been made by modellers to better resolve the ocean tides closer to the coast by employing enhanced algorithms for coastal altimetry and leveraging more accurate bathymetry products. In late 2022, the Surface Water and Ocean Topography (SWOT) satellite launch introduced the Ka-band Radar Interferometer (KaRIn), marking a significant leap beyond traditional altimetry by offering high-resolution, two-dimensional sea surface measurements. This presentation demonstrates the use of these wide-swath data for tidal research at unprecedented spatial scales within complex coastal environments. The validation of the results from SWOT is encouraging, as errors with respect to gauges are reduced compared to both global and regional models. The Cal/Val phase of SWOT has also provided the opportunity to evaluate nonlinear tidal effects. The high-resolution products from SWOT are also useful for tidal research within the land-ocean continuum, particularly in estuaries, rivers and fjords. Exploiting the SWOT inland products, we derive empirical estimates of ocean tides in estuarine and river systems, allowing us to calculate the extent of tidal influence within inland waters, which is crucial for a variety of biogeochemical processes and compound flood predictions.
These results demonstrate further opportunities to use wide-swath measurements to advance the understanding of tidal dynamics in these critical areas. Furthermore, the incorporation of current and future wide-swath data into tide models will be crucial to advance the tidal corrections in the coastal region.
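Empirical tide estimates of the kind described above rest on harmonic analysis of SSH time series. A minimal single-constituent least-squares fit (here the M2 period; a sketch, not the authors' implementation) could look like:

```python
import numpy as np

def fit_tidal_constituent(t_hours, ssh, period_hours=12.4206012):
    """Least-squares fit of one tidal constituent (default: M2 period)
    to an SSH time series; returns (amplitude, phase in radians).

    A minimal sketch of empirical harmonic analysis; real analyses fit
    many constituents simultaneously with nodal corrections.
    """
    t = np.asarray(t_hours, dtype=float)
    omega = 2.0 * np.pi / period_hours
    # Design matrix: in-phase, quadrature, and mean sea level terms
    A = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
    (a, b, _mean), *_ = np.linalg.lstsq(A, np.asarray(ssh, dtype=float), rcond=None)
    return float(np.hypot(a, b)), float(np.arctan2(b, a))
```

With wide-swath data this fit can be repeated per pixel, yielding two-dimensional amplitude and phase maps.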
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Sentinel-3 Next Generation Topography Mission Performance and Uncertainty Assessment (S3NGT-MPUA)

Authors: Noemie Lalau, Thomas Vaujour, Michaël Ablain, Clément Ubelmann, Lucile Gaultier, Fabrice Collard, Nicolas Taburet, Julien Renou, Maxime Vayre, Emma Woolliams, Sajedeh Behnia, Frédéric Nouguier, François Boy, Louise Yu, Alejandro Egido, Craig Donlon, Robert Cullen
Affiliations: Magellium, DATLAS, ODL, CLS, NPL, Ifremer, CNES, ESA-ESTEC
The Sentinel-3 Next Generation Topography (S3NGT) mission is designed to ensure the continuity of the existing Copernicus Sentinel-3 nadir-altimeter measurements from 2030 to 2050 while also improving measurement capabilities and performance. This mission consists of two large spacecraft equipped with an across-track interferometer swath altimeter (SAOOH), a synthetic aperture radar (SAR) nadir altimeter (Poseidon-5 (POS-5)), a multi-channel microwave radiometer, and a precise orbit determination suite. The SAOOH instrument builds upon the swath altimetry advancements pioneered by the Surface Water and Ocean Topography (SWOT) mission, launched in December 2022 with the KaRIn instrument. However, the SAOOH instrument differs from KaRIn in several key aspects, including a shorter baseline (3 m instead of 10 m) and a different Signal-to-Noise Ratio (SNR) of the antenna compared to SWOT/KaRIn. Given these differences, it is imperative to pay close attention to the performance of the S3NGT mission before its launch to ensure its success. The ESA-supported S3NGT preliminary mission performance and uncertainty assessment (S3NGT-MPUA) study is ongoing, and significant progress has been made since its initiation in May 2023. Here, we outline the key achievements for each objective of this project. The first objective of the study is to conduct a preliminary assessment of the performance of S3NGT Level-2 products before the mission's launch. This assessment focuses on ocean surfaces and inland waters. To achieve this, we first developed a strategy to generate S3NGT-like data. For ocean surfaces, we employed a strategy that utilized inflight data from the SWOT mission (level-3 products) to create a first dataset and an Ocean General Circulation Model (OGCM) to simulate S3NGT-like data for a second dataset. We introduced S3NGT-specific instrumental uncertainties into the OGCM data and SWOT inflight level-3 data using a scientific simulator developed by ODL.
These two complementary approaches respectively provide lower and upper bounds of S3NGT performance. Additionally, we evaluated the behavior of swath measurements in high sea state conditions by degrading SWOT Level-1B data with instrumental characteristics similar to the SAOOH ones. For inland waters, we developed a similar strategy to generate S3NGT-like data from SWOT L2 Pixel cloud products, incorporating specific S3NGT uncertainties. This activity also included defining key metrics to describe the mission’s performance for several variables, including sea surface height, sea state, systematic errors, and inland water surface elevation. We evaluated these metrics with our generated S3NGT-like data and compared the results to the S3NGT Mission Requirements Document (MRD). This approach provides valuable insights into the expected capabilities of S3NGT products and highlights areas where the mission design can be refined to meet operational requirements. The second study objective is to develop a comprehensive uncertainty model and budget following established metrological principles. The first part entails a metrological assessment of the S3NGT mission, while the second part focuses on verifying and validating the S3NGT uncertainty budget. This has involved creating a clear metrological traceability diagram representing the swath altimeter and using it to identify (and later quantify) individual sources of uncertainty. The third study objective is to evaluate options for in-orbit calibration of the S3NG-TOPO mission. This includes methods such as using orbit crossovers to correct for known systematic errors in water elevation over the ocean and inland water bodies. The evaluation must consider the different latencies of the S3NG-TOPO products. Indeed, given the stricter latency requirements for the S3NG-TOPO mission than for SWOT, the number of available orbit crossovers for cross-calibration measurements is limited.
The final study objective is to assess the uncertainty in cross-calibrating S3NG-TOPO with the current S3 constellation and reference mission (S6) using established or innovative methods. The primary goal is to ensure the continuity of S3NGT measurements, including an effective cross-calibration between the current S3 constellation and the future S3NG-TOPO constellation, particularly for the nadir altimeter system incorporating the microwave radiometer (MWR).
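The strategy of degrading "truth" fields (OGCM output or SWOT L3 data) with S3NGT-specific uncertainties can be caricatured with a simple white-noise sketch. The real ODL simulator injects far richer, correlated error structures; `noise_std_m` here is an assumed illustrative parameter, not a mission value:

```python
import numpy as np

def simulate_s3ngt_like(ssh_truth, noise_std_m, rng=None):
    """Degrade a 'truth' SSH field with white instrument noise,
    a crude stand-in for injecting instrument-specific uncertainties
    into OGCM or SWOT L3 fields.

    ssh_truth: array of SSH values (m); noise_std_m: assumed noise level.
    """
    rng = np.random.default_rng(rng)
    return ssh_truth + rng.normal(0.0, noise_std_m, size=np.shape(ssh_truth))
```

Comparing metrics computed on the degraded field against the same metrics on the truth field then bounds the expected performance loss.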
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: New Insights into Cryosphere Applications of the Surface Water and Ocean Topography (SWOT) Mission

Authors: Mohammed Dabboor
Affiliations: Science And Technology Branch, Environment And Climate Change Canada, Government of Canada
Monitoring sea ice is critical for advancing our understanding of climate change, maintaining polar ecosystems, and ensuring safe navigation in the Arctic and Antarctic regions. Satellite remote sensing technologies play a pivotal role in sea ice monitoring by providing consistent and reliable observations. These technologies offer detailed information on ice type, thickness, extent, and movement, enabling scientists to track changes over time and identify emerging trends. Such data are essential for improving the accuracy of climate models, supporting ecosystem studies, and enhancing the safety and efficiency of maritime operations in these challenging environments. By offering a comprehensive view of sea ice dynamics, satellite remote sensing significantly contributes to informed decision-making and policy development for polar and global sustainability. The NASA/CNES international Surface Water and Ocean Topography (SWOT) satellite mission, launched on December 16, 2022, was initially designed to support ocean and hydrology applications. Its primary sensor, a Ka-band near-nadir radar system, provides across-track interferometric measurements for two swaths on either side of the satellite's nadir. Beyond its primary objectives, the SWOT mission has demonstrated promising potential for applications in polar and Nordic environments. Its high-resolution data could enhance our understanding of ice dynamics, particularly in the Canadian Arctic. By integrating SWOT's data with existing satellite imagery, such as from conventional radar systems, it is possible to achieve a more comprehensive understanding of sea ice conditions. This integration could support strategic and informed decision-making for Arctic management, navigation, and ecosystem protection, addressing both regional and global challenges. This presentation offers a preliminary assessment of the feasibility of using data products from the SWOT satellite for sea ice analysis in the Arctic. 
We provide both qualitative and quantitative evaluations of SWOT measurements for ice detection, including an analysis of scattering profiles from the Ka-band radar across various ice types. Additionally, we incorporate an analysis of coincident imagery from the SWOT satellite and the RADARSAT Constellation Mission (RCM). This combined approach leverages SWOT's high-resolution data and RCM's advanced radar imaging capabilities to enhance the detection and characterization of sea ice. By integrating these complementary datasets, we aim to improve our understanding of ice dynamics, contributing to more robust monitoring and modeling of Arctic ice conditions. Preliminary results from coincident SWOT and RCM observations over the Beaufort Sea demonstrate the promising capability of SWOT for detecting multiyear ice floes and leads.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Development of an integrated method to validate SWATH altimetry over inland water: A new approach from SWOT Cal/Val first results

Authors: Valentin Fouqueau, Gabrielle Cognot, Eliot Lesnard-Evangelista, Jean-Christophe Poisson, Nicolas Picot, François Boy, Roger Fjortoft, Laurent Froideval, Christophe Connessa
Affiliations: Vortex-io, CNES, CNRS-INSU
For many years now, satellite altimetry has been increasingly used to monitor inland waters all over the globe. The SWOT mission represents a technological breakthrough compared to all previous altimetry missions, especially for hydrology. Inland water elevation measurements are no longer taken at specific points corresponding to intersections between the satellite ground track and rivers, but across entire river segments. This technology paves the way for a significant increase in altimetric measurements for inland waters, enabling unprecedented new applications of altimetry data. The SWOT mission has demonstrated the full potential of this measurement technology. This is evidenced by the decision to equip the next-generation Sentinel-3 mission of the Copernicus program with swath altimeters. The validation of swath altimetry data presents new challenges. Advances in satellite measurement technology necessitate a novel approach to in-situ data collection for swath altimetry validation. Traditionally, validating nadir altimetry required point in-situ measurements, which were directly compared to satellite data. More advanced processing methods, such as those developed in the St3TART project, have further refined nadir altimetry validation by combining fixed and moving sensor measurements. For swath altimetry, however, the process requires the acquisition of longitudinal water surface height (WSH) profiles that are temporally and spatially collocated with the satellite overpass. This can be achieved through field campaigns deploying moving sensors, such as the vorteX-io altimeter, to capture WSH along these profiles, or other types of moving sensors (CalNaGeo, Cyclopée, airborne LiDAR, etc…). Such an approach has been successfully employed during the SWOT Cal/Val campaign. However, the main limitation of these methods is the logistical demand, as field teams must be mobilized for each swath altimeter pass to capture the necessary longitudinal profiles.
Moreover, the acquisition times of such in-situ means are not comparable to satellite measurements. In the framework of the SWOT Calibration and Validation phase, vorteX-io has developed a method to validate swath altimetry measurements over extended river segments. This approach relies on robust in-situ instrumentation using two sensors developed by the vorteX-io team. The core concept is to reconstruct the river's longitudinal profile over a long section for each swath altimeter overpass. This reconstruction combines simultaneous measurements from fixed micro-stations with data from moving sensors, using a historical database of longitudinal profiles. This database, created prior to the Cal/Val period, captures river topography across a wide range of water levels to ensure comprehensive coverage. In this presentation, we will detail the instrumentation required to construct the combined profiles, outline the method and discuss the results obtained from the existing super sites. The first case study focuses on Marmande, a Cal/Val super site used for both SWOT and Sentinel-3 Cal/Val. Additionally, we will present the goals of extending this approach to other rivers in various regions to further validate the swath altimetry validation methodology. This effort to develop a network of super sites is being carried out in collaboration with CNES as part of the S3NG-T project.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Monitoring the Arctic Ocean with SWOT - A comparison with conventional altimeter measurements in the ice-covered ocean

Authors: Felix Müller, Denise Dettmering, Florian Seitz
Affiliations: Technical University of Munich (DGFI-TUM)
In the last decades, satellite altimetry has become a significant data source for monitoring the polar sea level and ocean currents. Since the early nineties, satellite altimetry has been used for monitoring Arctic Ocean sea surface heights (SSH), the declining sea ice cover and changing ocean circulation with continuously improved observation techniques and increased accuracy. While classical pulse-limited altimeters were used at the beginning (e.g. from ERS-2 and Envisat), the era of Delay-Doppler altimeters began in 2010 with the ESA Earth Explorer mission Cryosat-2, which opened new possibilities, e.g., in lead detection or freeboard calculation. A game changer is the use of swath altimetry, available since December 2022 from the Ka-band radar interferometer observations of SWOT, including in ice-covered oceans. With SWOT it is now possible to retrieve SSHs across a 2D swath and not only along a 1D pass. Thanks to its various datasets, it is possible to capture pixel-based height information with a spatial resolution of 250 metres in the Arctic peripheral seas up to a latitude of 78°N. Although SWOT does not cover the central Arctic Ocean, it nevertheless opens up new possibilities for the detection of leads (i.e., elongated water openings within the sea ice) to gain new insights into the ocean’s dynamic topography and further into the Arctic Ocean circulation. This contribution utilises SWOT Level-2 LR Unsmoothed and Expert ocean datasets to investigate the lead detection and sea surface height determination capabilities of SWOT. The work, embedded in the SWOT Science Team within the SMAPS project, aims to gain a first impression of the extent to which SWOT is suitable for polar ocean applications. Therefore, SWOT observations will be compared pointwise with conventional Ka-band altimetry (i.e.
SARAL) and Delay-Doppler Ku-band altimetry from Cryosat-2 and Sentinel-3 (Sea-Ice Thematic Product), as well as with lidar altimetry from NASA's ICESat-2, in terms of computed sea level anomalies (SLA), sigma0 (i.e. backscatter) and water surface (i.e. leads, polynyas) detections. The first step is to find crossover locations between SWOT and the other missions. For this task, SWOT’s Cal/Val (1-day repeat) and science phase (21-day repeat) are considered. In order to have the same sea ice conditions during the overflights, a maximum acquisition time difference of 30 minutes is set. Despite these temporal boundary conditions, a sufficient number of suitable comparisons can be identified, for example, about 80 for ICESat-2 and about 600 for SARAL during the Cal/Val phase. Only crossover locations with enough valid nadir observations will be kept to enable the creation of reliable statistics. In the next step, for all crossover locations, pointwise comparisons are performed. This includes signal and data analyses by means of RMSE or correlation computations between SWOT swath data and observed SLA from contemporaneous satellite altimetry missions (i.e., SARAL, Cryosat-2, Sentinel-3A/B, ICESat-2, and SWOT nadir). For this purpose, the same atmospheric and geophysical corrections are applied to the SWOT KaRIn observations before they are interpolated to the conventional altimeter observation locations. Moreover, analyses are carried out with regard to backscatter and the results of the unsupervised nadir altimeter surface type classifications (Müller et al., 2017). Finally, it will be analysed to what extent the different missions detect and reproduce open-water features such as leads of different sizes.
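The crossover selection with a 30-minute maximum time difference can be sketched as a brute-force pairing on acquisition times; this is illustrative only (the actual processing also matches locations, which is omitted here):

```python
import numpy as np

def match_crossovers(t_swot, t_nadir, max_dt_minutes=30.0):
    """Return index pairs (i, j) of crossover candidates whose
    acquisition times differ by at most max_dt_minutes, so both
    missions see comparable sea-ice conditions.

    Times are in minutes since an arbitrary common epoch.
    """
    dt = np.abs(np.asarray(t_swot)[:, None] - np.asarray(t_nadir)[None, :])
    i, j = np.nonzero(dt <= max_dt_minutes)
    return list(zip(i.tolist(), j.tolist()))
```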
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Performance of the Surface Water and Ocean Topography (SWOT) Mission for Monitoring Small Lakes in West Africa

Authors: Félix Girard, Laurent Kergoat, Ibrahim Mainassara, Maxime Wubda, Hedwige Nikièma, Amadou Abdourhamane Touré, Julien Renou, Maxime Vayre, Nicolas Taburet, Nicolas Picot, Manuela Grippa
Affiliations: Géosciences Environnement Toulouse (GET), Collecte Localisation Satellites (CLS), HydroSciences Montpellier (HSM), Université Joseph Ki-Zerbo, Université Abdou Moumouni, Centre National d’Etudes Spatiales (CNES)
In West Africa, the volume variability and hydrological functioning of the region’s thousands of lakes are poorly monitored. The recently launched Surface Water and Ocean Topography (SWOT) mission, carrying a wide-swath Ka-band radar interferometer, offers new opportunities for large-scale monitoring of lake water resources and overcomes the spatial coverage limitations of nadir altimeters. Here, we evaluate the performance of two SWOT data products, namely the PIXel Cloud (PIXC) and the Lake Single Pass (LakeSP), over sixteen small and medium-sized lakes in the Central Sahel. Excellent agreement of elevation with in-situ data is found for both products, with 1-sigma errors (68th percentile of absolute errors) between 0.06 and 0.11 m, consistent with the mission science requirements. When compared to Sentinel-3 elevation, the PIXC product shows better results than LakeSP, with 1-sigma differences of 0.16 m and 0.32 m, respectively. SWOT LakeSP surface area estimates show a large variability and a general overestimation, with a median bias of 17.2% compared to Sentinel-2 measurements. SWOT pixel classification errors related to bright land contamination or dark water due to low signal return are found to affect both elevation and area estimates, especially for LakeSP. Restrictive spatial filtering, combined with the use of appropriate quality flags included in the SWOT PIXC product, makes it possible to mitigate the classification errors and produce robust water surface elevation estimates. Elevation-area relationships derived from combined SWOT PIXC and Sentinel-2 data compare well with in-situ measurements (RMSEs below 0.28 m), highlighting the capabilities of SWOT for monitoring lake volume changes once complemented by external water masks. Finally, the SWOT PIXC product is used to derive seasonal water level amplitude estimates for more than 600 lakes of various sizes, 80% of which are relatively small (< 1 km²).
With 25% of the study lakes showing a water depletion greater than the average evaporation-induced water loss, these results provide unprecedented large-scale information on lake water use and also highlight the spatial coverage potential of SWOT.
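The elevation-area relationships used to track volume changes can be illustrated with a simple linear rating fit; the linear form and unit conventions below are simplifying assumptions for illustration, not the method of the study (real hypsometries are often nonlinear):

```python
import numpy as np

def fit_elevation_area(elev_m, area_km2):
    """Fit a linear elevation-area rating A(h) = c0 + c1*h from paired
    elevations (e.g. SWOT) and areas (e.g. Sentinel-2)."""
    c1, c0 = np.polyfit(elev_m, area_km2, 1)
    return c0, c1

def volume_change_mcm(h1, h2, c0, c1):
    """Volume change between water levels h1 and h2 (m) by integrating
    the fitted rating dV = A(h) dh; 1 km^2*m = 1e6 m^3, so the result
    is in millions of cubic metres."""
    return c0 * (h2 - h1) + 0.5 * c1 * (h2**2 - h1**2)
```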
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Using SWOT Data to Assess the Impact of Ocean Tides and Sea Level Change on Upstream Rivers and Estuaries

Authors: Robert Steven Nerem, Toby Minear, Martin Kolster, Eduard Heijkoop
Affiliations: University of Colorado
The upstream-most backwater effects of ocean tides on inland rivers and estuaries are presently not well known and pose an additional threat from sea level rise. As sea levels are projected to rise by as much as ~1 m by the year 2100, tides will propagate farther upstream, extending into freshwater waterbodies and wetland areas, adding to flooding threats far from the sea. In addition, even small increases in salinity can have serious impacts on groundwater, wetlands, agriculture and human populations living in these areas. With SWOT, we can, for the first time, observe the extent to which tidal backwater effects influence upstream estuaries and rivers. There are many factors that shape this effect, including the height of the high tide on a particular day, the slope of the geoid, potential storm surge, and the shape of the estuary and its connectivity to nearby wetlands, in addition to other factors. We are investigating the use of SWOT data to map the present-day spatial extent of tidal influence on upstream surface waters, as well as the spatially distributed mean water surface elevation and tidal range (difference between low and high tides) from the coast through the estuary and river system. These three key variables can then be used to project the impacts of sea level rise for these inland surface water regions. We will show initial results from this study and discuss plans for future research.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Assessing SWOT satellite performance against tide gauge observations in the Western Mediterranean Sea

Authors: Diego Vega Gimenez, Antonio Sánchez Román, Laura Gómez Navarro, Angel Amores, Ananda Pascual
Affiliations: IMEDEA (UIB-CSIC)
Over the past three decades, radar altimetry has revolutionized sea level monitoring globally and regionally. However, traditional altimeters face challenges near the coast, particularly within 20 km of the shoreline, where land contamination and complex geophysical conditions degrade data quality. This gap has limited the understanding of coastal sea level dynamics, crucial for assessing hazards, tracking climate-driven trends, and improving coastal resilience. The Western Mediterranean Sea, with its intricate bathymetry and energetic mesoscale features, provides a valuable testbed for evaluating advancements in altimetry. The Surface Water and Ocean Topography (SWOT) mission, developed through international collaboration, employs the innovative Ka-band Radar Interferometer (KaRIn) to produce high-resolution, two-dimensional sea surface height (SSH) maps. This breakthrough enables the resolution of coastal mesoscale and sub-mesoscale phenomena, offering new opportunities to monitor sea level variability close to shore. This study utilizes SWOT data from its intensive 90-day Calibration/Validation (Cal/Val) phase (April–July 2023) to validate Level-3 Sea Level Anomalies (SLA) against observations from 21 tide gauges distributed along the Western Mediterranean coasts. These tide gauge records, provided by the Copernicus Marine Service, were adjusted to remove atmospheric contributions from pressure and wind, isolating the sea level signal for direct comparison. Results reveal strong correlations and low Root Mean Square Differences (RMSD) between SWOT-derived SLAs and tide gauge data, demonstrating the mission’s capability to capture coastal sea level anomalies with high precision. 
SWOT’s swath-based observation method overcomes the spatial limitations of traditional nadir altimetry, extending valid measurements closer to shore and improving the resolution of small-scale coastal processes shaped by bathymetry, shoreline configuration, and regional atmospheric dynamics. This research highlights the significant advancements made possible by SWOT in coastal altimetry. The mission’s ability to provide high-resolution SSH measurements near the coast bridges critical observational gaps, offering valuable insights into coastal variability and mesoscale processes. By validating SWOT’s performance in the Western Mediterranean, this study underscores its pivotal role in advancing coastal monitoring and informing strategies for managing the impacts of sea level rise and climate change.
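The tide-gauge comparison described above (removing the atmospheric contribution, then computing correlation and RMSD against SWOT SLA) can be sketched as follows. The inputs are hypothetical co-located, co-temporal series, and the simple demeaning step is an illustrative assumption, not necessarily the authors' exact processing:

```python
import numpy as np

def compare_sla(swot_sla, gauge_sl, gauge_atmos):
    """Compare SWOT sea level anomalies with a tide gauge record after
    removing the atmospheric (pressure + wind) contribution.
    Returns (Pearson correlation, RMSD); all inputs in metres."""
    adjusted = np.asarray(gauge_sl) - np.asarray(gauge_atmos)
    adjusted = adjusted - adjusted.mean()           # anomalies about the mean
    swot = np.asarray(swot_sla) - np.mean(swot_sla)
    r = np.corrcoef(swot, adjusted)[0, 1]
    rmsd = np.sqrt(np.mean((swot - adjusted) ** 2))
    return r, rmsd
```

In practice the two series must first be interpolated to a common time base and the SWOT pixels averaged around the gauge location, steps omitted here for brevity.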
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Flood event analysis based on SWOT PIXC products: the May 2024 Sarre (Northeast France) and October 2024 Valencia Province (Spain) flood events

Authors: Sabrine Amzil, Jérome Maxant, Pierre Andre Garambois, Kevin Larnier, Alessandro Caretto, Maxime Azzoni, Nicolas Picot, Herve Yesou
Affiliations: ICube Sertit, INRAE, HydroMatters, CNES
For several decades now, Earth observation data has been integrated into services to support relief efforts following natural or man-made disasters, such as the International Charter on Space and Major Disasters or the European Commission's Copernicus Emergency Management Service (EMS) Rapid Mapping. In the case of floods, they can be used to map the extent of the flooding and to characterize the areas affected. These maps can support decision-making at the highest levels and help make the best possible use of human and material resources. However, current Sentinel-1/2 or higher-resolution imagery such as Pleiades NEO, CSK or TSX does not allow us to characterize the hazard in its entirety. In fact, of the key variables for defining the flood hazard, only the flood extent and the duration of submersion are directly accessible from Earth observation data. What is missing are the important parameters of water height and current speed, which determine how dangerous an event will be, bearing in mind, for example, that a water height of 50 cm combined with a current of 0.5 m/s represents a great danger to people, or that a car starts to float at a depth of 30 cm. The recent, innovative SWOT mission could go some way towards filling this gap by deriving information about water levels. This is what we propose to evaluate, obviously bearing in mind the scientific nature of the mission, which is far removed from the expectations of an operational mission in terms of revisit, responsiveness and access to NRT data. As a reminder, the Surface Water and Ocean Topography (SWOT) satellite mission, a joint endeavor between NASA and the French Space Agency CNES, is set to revolutionize our understanding of Earth’s water cycle by providing unprecedented data over continental water bodies and oceans. Launched in December 2022, it was designed to monitor watercourses over 100 m wide, and water bodies with a surface area greater than 6.25 ha, on a global scale.
SWOT measures the elevation of water bodies with exceptional accuracy and resolution, offering a comprehensive view of surface water dynamics across the globe at scales never before achieved from space. Depending on the location within the swath and on crossing orbits, the revisit over a given area during a 21-day cycle can reach up to 4 observations at temperate latitudes. This offers the opportunity to capture some flood events. Thus, a few events of various extents have been identified and the corresponding SWOT data collected, as well as data acquired under normal hydrological conditions. For each site, when available, exogenous information has been gathered, such as water level time series from gauge stations, HR DEMs, quasi-synchronous SAR and/or optical imagery, and, for a few cases, EMS Rapid Mapping products. Results from two events of different typology are presented. The first corresponds to the cross-border Sarre River flood event in May 2024, for which Copernicus EMS Rapid Mapping was triggered over both France and Germany, i.e. EMSR722 and EMSR733. The second corresponds to the dramatic flood event affecting the Valencia province (Spain) in October 2024, EMSR773 and Charter activation 924, particularly over the coastal plain of Albufera. The analysis of the SWOT data was carried out on the PIXC product, particularly exploiting classes 3 and 4 of the pixel cloud, corresponding respectively to water-near-land and open water. In a first stage, the distribution and location of the PIXC classes were analyzed and compared with the quasi-synchronous images and the EMS delineation products. In a second stage, the altitude of the water bodies (water surface elevation, WSE) was compared with the in-situ gauge values. Finally, an analysis of fine topographic data, i.e. LiDAR DEMs, was carried out to analyze the highest water altitudes observed by SWOT and to assess the heights of submersion.
Although the SWOT system, as mentioned above, was developed for a global approach focusing on rivers over 100 m wide, as has already been shown on small rivers, the detection and recognition of water bodies means that overflows can be observed down to a total width of around fifty meters. Below this width, it becomes difficult to recognize a flooded area, or the flooded area appears discontinuous. The discontinuity of the flooded zone over small areas can be linked to several factors. Firstly, the riparian vegetation bordering the watercourse limits observation capabilities. There may also be backwater phenomena on very smooth water. The larger spreading of water over the surface is readily observable. In terms of WSE, the PIXC, from the L2_HR_PIXC product, provides very coherent altitude information, as well as the derived water depth. Soon, we will continue the comparison with results coming from simplified 2D hydrodynamic modeling, as well as those derived from the forecasting models used by flood warning services.
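The class-based filtering described above can be sketched minimally as follows, assuming arrays extracted from the L2_HR_PIXC NetCDF variables (classification codes 3 = water-near-land and 4 = open water, as stated in the abstract); taking the median as a robust WSE estimator is an illustrative choice, not necessarily the authors' exact method:

```python
import numpy as np

def water_pixels(classification, height):
    """Keep water-near-land (3) and open-water (4) PIXC pixels and
    return the mask plus a robust WSE as their median height."""
    mask = np.isin(classification, (3, 4))
    return mask, np.median(height[mask])

# Hypothetical pixel-cloud excerpt: class codes and heights (m)
cls = np.array([1, 2, 3, 4, 4, 3, 1])
hgt = np.array([55.0, 54.0, 20.1, 20.0, 19.9, 20.2, 60.0])
mask, wse = water_pixels(cls, hgt)   # 4 water pixels, WSE ≈ 20.05 m
```

Real PIXC files also carry quality and geolocation flags that would typically be combined with this class mask before aggregating heights.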
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Combining S6 FFSAR and SWOT Data to Achieve Near Ground-Accurate Water Extent and Level Measurements for Terrestrial Water Storage Targets From Spaceborne Measurements

Authors: Salvatore Savastano, Ollie Holmes, Adrià Gómez Olivé, Ferran Gibert, Maria José Escorihuela
Affiliations: isardSAT Ltd., isardSAT S.L.
Freshwater resources provide the backbone of civilisation, and their careful management partly led to the rise of some of the most prominent civilisations in history, making the management of this vital resource critical to continuing prosperity. However, the influence of climate change is increasing the complexity of this management; while the total amount of precipitation across a region has remained relatively constant, the stability of rainfall has changed with intense downpours and droughts in different areas [1]. Therefore, managing freshwater resources is becoming a regional problem rather than a historically local problem. While networks of in-situ gauges exist, their installation in every terrestrial water storage location is impractical, especially in remote areas and less-developed countries, hindering comprehensive regional analysis. However, the combination of Sentinel-6 (S6) data processed through the Fully Focused Synthetic Aperture Radar (FFSAR) processor and Surface Water and Ocean Topography (SWOT) data can offer near ground-accurate data for entire regions, regardless of cloud cover, and with sufficiently short revisit times for capturing the dynamic changes in water bodies over time. This not only enhances scientific data for quantifying the impact of climate change on terrestrial water storage but also provides a regional analysis of freshwater resources for governments to optimise their management, potentially allowing for the capability of allocating resources from areas of surplus to areas of scarcity. This study investigates multiple targets across the Ebro basin, comparing each mission's water level measurements to in-situ gauges provided by SAIH Ebro and water extent measurements to optical data. Due to the different measurement approaches, the results have differing degrees of accuracy and sources. On the one hand, S6 carries a nadir-looking altimeter, with an across-track coverage of approximately 0-10 km on either side of the nadir. 
Data from S6 was collected at the L1A stage and processed through the FFSAR ground processor and subsequent algorithms developed by isardSAT. The FFSAR processor provides a significant advantage over previous processors, as phase information can align multiple along-track measurements to the same along-track cell, increasing the resolution up to the theoretical limit of 0.5 m while simultaneously reducing noise power and enhancing the signal-to-clutter ratio. On the other hand, SWOT is a side-looking SAR mission with an across-track coverage of approximately 5-60 km on either side of the nadir. SWOT data was collected from the L2 HR PIXC products provided by JPL/CNES [3]. While the PIXC product provides water level measurements, water extent was extracted and validated against optical data and LakeSP results from the CNES team [4] for a customised analysis. The nadir look angle and direct time measurement approach of the S6 FFSAR data provide superior height measurements over SWOT's interferometric phase measurements. While SWOT does provide increased coverage, combining S6 data with Sentinel-3 and CryoSat-2 data, both capable of undergoing FFSAR processing, should provide sufficient coverage. However, SWOT's off-nadir look angle and dual-channel power measurements provide superior water extent measurements with significantly increased coverage. This is a result of the stringent requirements for S6 FFSAR extent measurements, which require targets to be sufficiently off-nadir to preserve across-track resolution and prevent iso-range ambiguities while also needing to be sufficiently close to nadir to preserve adequate signal-to-clutter ratios, severely limiting coverage. Furthermore, Sentinel-3 and CryoSat-2 produce distorted water extent results due to aliasing issues from their closed-burst transmission mode.
Utilising altimeter data with FFSAR processing for water level measurements improves accuracy by approximately 5-10 times over SWOT data with sufficient coverage when using multiple missions. Similarly, utilising SWOT data for water extent measurements improves accuracy by approximately 100 times over S6-FFSAR data with superior coverage. Therefore, the synergism of altimeter data with FFSAR processing and SWOT data can provide near ground-accurate water extent and level measurements for most terrestrial water targets. [1] Richard P Allan. Amplified seasonal range in precipitation minus evaporation. Environmental Research Letters, Volume 18, Number 9. August 2023, DOI: 10.1088/1748-9326/acea36 [2] Adrià Gómez Olivé, Ferran Gibert, Albert Garcia-Mondéjar, Charlie McKeown, Malcolm MacMillan, Michele Scagliola. “Inland Water Extent Measurements for the CRISTAL Mission.” 30 Years of Progress in Radar Altimetry Symposium. 2-7 September 2024 | Montpellier, France. [3] Brent Williams. JPL/CNES. PIXC Validation, SWOT Validation Meeting, Chapel Hill, NC. June, 2024. [4] Claire Pottier, Roger Fjortoft. CNES. Lake Product Validation, SWOT Validation Meeting, Chapel Hill, NC. June, 2024.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Long swells and extreme storms: SWOT level 3 wave spectra for the calibration of climate extremes

Authors: Fabrice Ardhuin, Taina Postec, Guillaume Dodet, Beatriz Molero, Adrien Nigou
Affiliations: LOPS, CLS
As ocean altimetry is pushed to ever higher resolution, the SWOT Low Rate ocean products, posted at 250 m resolution, resolve many processes that contribute to the surface elevation, including wind-generated waves with wavelengths longer than 500 m. A preliminary analysis (Ardhuin et al., Geophys. Res. Lett., 2024, https://doi.org/10.1029/2024GL109658) has revealed that SWOT is capable of measuring these long swells, even with significant wave heights as low as 3 cm, which is a unique capability for any open ocean measurement (the noise floor of in-situ drifting buoys is close to 10 cm). As a result, the long swells that radiate from the most extreme storms are clearly visible in SWOT data, providing a unique opportunity for validating our understanding of extreme storm evolution and calibrating models and other measurement records. For this purpose, CNES has supported the design and production of dedicated SWOT Level 3 wind-wave products that contain estimates of spectra for waves longer than 500 m. In our analysis we particularly focus on storm Bolaven (October 2023), the most severe storm of that year, with wave heights exceeding 20 m in the North Pacific according to numerical models and radiated swells with periods up to 26 s (a wavelength of 1200 m), and storm Rosemary (June 2023). We particularly analyze the swells that propagate directly from the storms to the SWOT measurement locations, and swells reflected off the coasts of Central and South America in the case of Bolaven. The spatial pattern of swell radiated from the storm is generally broader than predicted by models using exact non-linear interactions, suggesting some unknown scattering effects. The far-field propagation across ocean basins also reveals deficiencies in usual numerical propagation schemes and some possible biases in swell dissipation estimates.
At present, swells too close to the storm are apparently too steep to be properly imaged by KaRIn, and some correction of the swell amplitude will be needed. Finally, once a dissipation model is adjusted, we can link the far-field energy radiated from the storm to the storm intensity. This relationship is used to rank the intensity of the most severe ocean storms. A similar approach could exploit the wide global coverage of seismic stations using primary microseisms (Aster et al., Nature Communications, 2023, https://doi.org/10.1038/s41467-023-42673-w) once their amplitude has been calibrated to a swell amplitude using SWOT L3 spectral data.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: C.06.06 - POSTER - Global Digital Elevation Models and geometric reference data

One of the most important factors for the interoperability of any geospatial data is accurate co-registration. This session will be dedicated to recent developments and challenges around the availability, quality, and consistency of reference data required for accurate co-registration of remotely sensed imagery at global scale. It will provide a forum for the results of studies performed under CEOS-WGCV, EDAP, and other initiatives aiming at the quality assurance and harmonisation of DEMs, GCPs and reference orthoimagery used by different providers worldwide.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: AI-Driven Landslide Susceptibility and Hazard Mapping for the CopernicusLAC Hub

Authors: Caterina Peris, Davide Colombo, Paolo Farina, Michael Foumelis
Affiliations: Indra Espacio S.l.u., Geoapp, Aristotle University of Thessaloniki (AUTh)
The Copernicus Latin America and Caribbean (LAC) initiative leverages Earth Observation (EO) data to enhance Disaster Risk Management (DRM) and Disaster Risk Reduction (DRR) across one of the world’s most disaster-prone regions. Through the Copernicus LAC Hub in Panama, it provides scalable EO services, such as terrain motion analysis, using open-access Sentinel data and co-developed methodologies tailored to local needs. This initiative sets the stage for advanced applications such as AI-driven landslide susceptibility and hazard mapping. Landslide susceptibility refers to the likelihood of landslide occurrences in a given area, determined based on local terrain conditions, geological characteristics, and triggering factors such as rainfall or seismic activity. Assessing landslide susceptibility is a vital step in understanding and mitigating risks, especially in regions prone to geological hazards. Traditional methods for evaluating susceptibility can be time-consuming and subject to human bias, which limits their scalability and accuracy. In this work, we introduce an AI-driven approach leveraging the Random Forest algorithm to automate the calculation of landslide susceptibility. This machine learning method enables the analysis of complex relationships between various influencing factors, providing a robust and scalable solution for susceptibility assessment. Key input datasets for the model include high-resolution Digital Terrain Models (DTM), which capture topographic features; geological maps, to detail bedrock properties; and lithological data, describing soil and rock types. These datasets are processed to extract key attributes such as slope gradient, aspect, curvature, drainage density, and lithological composition. The Random Forest algorithm is trained on historical landslide data to classify terrain areas into susceptibility categories, offering predictions with high accuracy and reliability. 
To enhance the utility of the susceptibility model, we integrate it with Interferometric Synthetic Aperture Radar (InSAR) measurements. InSAR allows capturing subtle motion of the surface providing valuable information on the activity status of landslides. This integration bridges the gap between static susceptibility models and dynamic ground monitoring, creating a more comprehensive hazard mapping framework. The combined methodology presents a transformative approach to landslide hazard assessment, enabling proactive risk management. The resulting hazard maps delineate geotechnical domains into distinct zones with defined “attention levels”, representing the urgency and priority of monitoring or intervention in each area. The maps can guide land-use planning, emergency preparedness, and infrastructure development by highlighting critical zones requiring attention. Furthermore, the integration of AI and remote sensing enhances the efficiency and objectivity of the assessment process, ensuring scalability for large and diverse regions, by reducing reliance on domain-specific human expertise and minimizing potential biases inherent in traditional methods. This work demonstrates the potential of machine learning and geospatial technologies in advancing the accuracy, automation, and applicability of landslide hazard analyses, supporting the development of safer, more resilient communities in vulnerable areas.
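The Random Forest workflow described above can be sketched as follows. This is an illustrative example with synthetic terrain attributes; the feature set, the synthetic ground truth and the scikit-learn API choice are assumptions for demonstration, not the authors' pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 1000
slope = rng.uniform(0, 45, n)        # slope gradient (degrees)
curvature = rng.normal(0, 1, n)      # terrain curvature
drainage = rng.uniform(0, 5, n)      # drainage density (km/km^2)
lithology = rng.integers(0, 4, n)    # encoded rock-type classes
X = np.column_stack([slope, curvature, drainage, lithology])

# Synthetic "historical landslide inventory": steeper cells more prone
y = (slope + 5 * rng.normal(size=n) > 30).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
susceptibility = clf.predict_proba(X)[:, 1]   # per-cell probability in [0, 1]
```

The per-cell probabilities would then be binned into susceptibility categories (e.g. low/medium/high) for mapping; in a real workflow the model would be evaluated on held-out landslide records rather than the training cells.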
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: An introduction to Sen2VM: an Open-Source tool for geocoding the Sentinel-2 Level-1B products

Authors: Antoine Burie, Jonathan Guinet, Marine Bouchet, Guylaine Prat, Sara Teraitetia, Emmanuel Hillairet, Silvia Enache, Rosalinda Morrone, Valentina Boccia
Affiliations: Cs Group, Starion for ESA ESRIN, European Space Agency, ESRIN
The Copernicus Sentinel-2 is a European mission that acquires wide-swath (290 km), high-resolution (10 m max), multi-spectral (13 bands) images of the Earth. The wide swath is ensured by 12 staggered detectors overlapping each other. Due to their worldwide regular acquisitions and accurate geolocation (<5 m Circular Error 90), the Sentinel-2 satellites have for several years now offered a massive quantitative and qualitative resource for the Earth observation community. Each Sentinel-2 acquisition is generated at multiple levels, with higher levels indicating greater modification or enhancement through scientific algorithms (from Level-1B radiances in sensor geometry to Level-1C orthorectified Top-Of-Atmosphere reflectance, and then Level-2A surface reflectance corrected for atmospheric effects). Since the beginning of the mission, the only publicly available levels have been Level-1C and Level-2A. Level-1B products (in sensor geometry) require a high level of expertise, mainly to handle the georeferencing of the product. However, L1B products can be of great interest for users who want to: • Manage their own orthorectification, using their own Digital Elevation Model (DEM), • Reproject the data in their own projection, • Have access to the whole overlapping area between detectors (the overlapped information is not reachable in L1C/L2A products because a choice is made and only the radiance of one detector is provided). This presentation will introduce “Sen2VM”, an Open-Source tool that integrates the Sentinel-2 viewing models and enables the geocoding of Level-1B products. This tool was designed to simplify and broaden the possible uses of Level-1B Sentinel-2 products. The tool is composed of several parts: • A stand-alone tool for generating geolocation grids to be included in the L1B product. The main purpose is to generate direct-location grids over the whole Level-1B product, going from the product to the ground.
In addition, the tool will also allow generating inverse-location grids, i.e. going from a ground area to the L1B product, • A SNAP plugin, calling the stand-alone tool directly and allowing the same grid generation, • A GDAL driver, i.e. a contribution to GDAL (https://gdal.org/en/stable/) that handles L1B products carrying direct-location geolocation grids and offers all of GDAL's reprojection capabilities for L1B Sentinel-2 data. The presentation will address the following topics: • Overview of the Sen2VM tools and the required inputs, • Example of application: presentation of the resampling possibilities and performance using the generated geolocation grids, • Publication (GitHub, etc.), i.e. how users can access the tool.
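To illustrate what a direct-location grid provides, the sketch below densifies a coarse grid of ground coordinates (one lon/lat pair every `step` pixels, as a geolocation grid would supply) to per-pixel coordinates by bilinear interpolation. The grid layout and function names are assumptions for illustration, not the actual Sen2VM format:

```python
import numpy as np

def densify(grid, step, out_rows, out_cols):
    """Bilinearly interpolate a coarse geolocation grid (values sampled
    every `step` pixels) to a full per-pixel coordinate field."""
    rows = np.arange(out_rows) / step
    cols = np.arange(out_cols) / step
    r0 = np.clip(rows.astype(int), 0, grid.shape[0] - 2)
    c0 = np.clip(cols.astype(int), 0, grid.shape[1] - 2)
    fr = (rows - r0)[:, None]   # fractional row offsets
    fc = (cols - c0)[None, :]   # fractional column offsets
    g = grid
    return ((1 - fr) * (1 - fc) * g[np.ix_(r0, c0)]
            + (1 - fr) * fc * g[np.ix_(r0, c0 + 1)]
            + fr * (1 - fc) * g[np.ix_(r0 + 1, c0)]
            + fr * fc * g[np.ix_(r0 + 1, c0 + 1)])

# A 3x3 grid of longitudes sampled every 2 pixels -> 5x5 per-pixel field
lon_grid = np.array([[10.0, 10.2, 10.4],
                     [10.0, 10.2, 10.4],
                     [10.0, 10.2, 10.4]])
lon_full = densify(lon_grid, 2, 5, 5)
```

In practice the same densification would be applied to the latitude (and height) layers, and GDAL can consume such grids directly as geolocation arrays for warping.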
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: TanDEM-X DEM 2020: Product release and quality assessments

Authors: Birgit Wessel, Carolin Keller, Larissa Gorzawski, Martin Huber, Thomas Busche, Marie Lachaise
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center, Company for Remote Sensing and Environmental Research (SLU), German Aerospace Center (DLR), German Remote Sensing Data Center, German Aerospace Center (DLR), Remote Sensing Technology Institute
This contribution introduces the newly developed global TanDEM-X DEM 2020 data set. The German TanDEM-X mission, consisting of two satellites operating in close formation since 2010, serves as the foundation for generating bistatic interferometric SAR data. Following the conclusion of data acquisition for the initial global TanDEM-X Digital Elevation Model (DEM) between 2010 and 2014 [1], the TanDEM-X mission systematically acquired data mainly between September 2017 and mid-2021 to create a new global DEM, referred to as the “TanDEM-X DEM 2020”. The main focus of this presentation lies in the production process and preliminary results of global evaluation measures. The primary distinctions between the TanDEM-X DEM 2020 and its predecessor (2010-2014) are the new and independent time frame and an updated interferometric processing technique which minimizes phase unwrapping errors, allowing for a mainly single-coverage acquisition strategy except in challenging terrain. Each DEM scene is pre-calibrated against the global TanDEM-X DEM, optimizing the DEM calibration process while maintaining exceptional accuracy. Key features of digital elevation models are performance and accuracy; therefore, the TanDEM-X DEM 2020 will be quantitatively assessed against reference data including ICESat, ICESat-2, GPS tracks and, importantly, the first global TanDEM-X DEM. Additionally, a qualitative evaluation is conducted on selected example sites, highlighting the advantages of interferometric DEMs: global coverage, high accuracy and homogeneity. In this contribution we present the production process, first quality assessments and the release of a new global digital elevation dataset: the TanDEM-X DEM 2020. This DEM offers an up-to-date topographic dataset and facilitates global-scale monitoring of topographic changes through comparisons with the earlier TanDEM-X DEM.
The TanDEM-X DEM 2020 product is expected to become publicly available to the scientific community by mid-2025 via DLR’s EOWEB portal. References: [1] Rizzoli, P., Martone, M., Gonzalez, C., Wecklich, C., Borla Tridon, D., Bräutigam, B., Bachmann, M., Schulze, D., Fritz, T., Huber, M., Wessel, B., Krieger, G., Zink, M., and Moreira, A. (2017): Generation and performance assessment of the global TanDEM-X digital elevation model, ISPRS J. Photogram. Remote Sens., 132, 119–139.
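The accuracy measures typical of such assessments (RMSE and a percentile-based linear error against reference points such as ICESat/ICESat-2) can be sketched generically as follows; this is an illustration of the metrics, not DLR's validation code:

```python
import numpy as np

def vertical_accuracy(dem_heights, ref_heights):
    """Common DEM vertical accuracy measures against reference points:
    RMSE and LE90 (90th percentile of absolute height errors)."""
    err = np.asarray(dem_heights) - np.asarray(ref_heights)
    rmse = np.sqrt(np.mean(err ** 2))
    le90 = np.percentile(np.abs(err), 90)
    return rmse, le90

# Hypothetical DEM heights vs. reference points (m)
rmse, le90 = vertical_accuracy([10.5, 9.5, 10.3, 9.7],
                               [10.0, 10.0, 10.0, 10.0])
```

LE90 is the figure commonly quoted for global DEM products (e.g. "linear error at 90% confidence"), while RMSE is more sensitive to outliers such as unwrapping errors.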
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: WorldDEM Neo - The new reference in global elevation

Authors: Ernest Fahrland, Léa Kierbel
Affiliations: Airbus
Digital Elevation Models (DEM) represent a core dataset within the geospatial domain, and the ongoing German TanDEM-X mission provides the solid basis for a truly global and consistent DEM coverage. The WorldDEM and its derivative, the Copernicus DEM, successfully replaced SRTM as the global elevation reference model. Another 10 years of TanDEM-X operations laid the foundation for a next generation of WorldDEM: the WorldDEM Neo, providing 4 times better resolution and improved height accuracy. WorldDEM Neo will be the only global, consistent and analysis-ready DEM reference of European origin until the mid-2030s. Accelerating challenges such as global warming, urbanization and deforestation call for improved core datasets to model and mitigate climate change impacts on both global and local levels. The WorldDEM Neo has the potential to provide a significant evolution of the current Copernicus DEM, with benefits for the Copernicus services in terms of modelling and orthorectification of future satellite data acquisitions with ever-increasing ground resolution and a strong need for precise geolocation, enabling time-series/data-cube analysis. The new WorldDEM Neo has been produced based on continued and systematic TanDEM-X mission acquisitions on a global scale (2018-2020). The temporal footprint is 4-5 years more up-to-date than the previous global standard WorldDEM (acquired 12/2010 to 01/2015) and the resolution is 4 times better (now 5 x 5 m² instead of 10 x 10 m², as used for the current most-detailed version of the Copernicus DEM). The absolute vertical accuracy of the WorldDEM Neo DSM has been assessed on a global scale with a linear error better than 1.5 meters (90% confidence level for ICESat-2 ATL08 terrain reference points). A validation of the corresponding DTM layer, using globally distributed LiDAR references, is an ongoing effort.
The continuing TanDEM-X mission changed focus in 2020 from the global to the regional/continental level, targeting scientifically relevant ecosystems. The new acquisitions after 2020, e.g. of the tropical regions (biosphere), glaciers & permafrost regions (cryosphere) as well as urban areas (anthroposphere), support the various applications of DEM data such as environmental, deforestation & urban growth monitoring, hydrological modelling, disaster management, infrastructure planning, natural resource management or the orthorectification of satellite imagery. The unique data quality and worldwide availability make WorldDEM Neo the most robust elevation layer for risk assessment & management and for investigating global phenomena. Two main user needs are answered: firstly, the WorldDEM Neo Digital Surface Model (DSM; incl. objects on the ground such as buildings & vegetation) is now accompanied by a Digital Terrain Model (DTM; with objects on the ground removed); secondly, both datasets are based on fully automated, parameterizable and scalable production processes. This allows for better modelling of geo-biophysical parameters shortly after raw data acquisition, alongside other space-based data such as Sentinel imagery and the Copernicus Contributing Mission data. Spatio-temporal analysis with a consistent sensor source on a global scale is now possible by combining WorldDEM Neo with the previous WorldDEM. Previous radar-based DEMs such as the Shuttle Radar Topography Mission global layer (acquired 02/2000), but also the WorldDEM (primary input for the Copernicus DEM), required huge manual efforts to achieve an error-free status. The editing of both datasets consisted of semi-automated and manual working steps and lasted several years, i.e. direct use of the fresh acquisition data was delayed accordingly. The presentation will provide an insight into the fully automated production process of the global, up-to-date, consistent, radar-based DSM & DTM datasets called WorldDEM Neo.
Different use cases will accompany the presentation.
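The LE90 figure quoted above (vertical accuracy better than 1.5 m at 90% confidence against ICESat-2 ATL08 terrain points) can be illustrated with a minimal sketch. The function name and the synthetic data below are illustrative assumptions, not part of the WorldDEM Neo production or validation chain:

```python
import numpy as np

def linear_error_90(dem_heights, reference_heights):
    """Linear error at 90% confidence (LE90): the 90th percentile of the
    absolute height differences between DEM and reference heights."""
    diff = np.abs(np.asarray(dem_heights, float) - np.asarray(reference_heights, float))
    return float(np.percentile(diff, 90))

# Synthetic check: a DEM with ~0.8 m Gaussian vertical noise relative to
# hypothetical reference terrain points (e.g. ICESat-2-like samples).
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1000.0, 10_000)
dem = reference + rng.normal(0.0, 0.8, 10_000)
print(linear_error_90(dem, reference))
```

With Gaussian noise of standard deviation σ, LE90 lands near 1.645 σ, which is why a sub-metre height noise level is compatible with the stated 1.5 m LE90 requirement.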

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Improving global DEMs from interferometry with smart DEM data fusion: a case study in urban landscapes

Authors: Ernest Fahrland, Hanne Liebchen, Jachin Jonathan van Ek, Henning Schrader
Affiliations: Airbus Defence and Space, Airbus Defence and Space
By 2030, 60% of the global population will live in urban areas, with ongoing urbanization and growth of urban areas. Notably, the urban extent is less than 2% of the global landmass, underlining the importance of sufficient-quality spatial data (2D/3D) for the small urban footprint on Earth’s surface. Digital Elevation Models (DEMs) have long represented a core dataset within the geospatial domain and support government needs from urban planning and mapping through to modelling and simulation of future cities and their infrastructures. DEM data also lays the foundation for Digital Twins of cities. Common global DEM datasets such as the Shuttle Radar Topography Mission dataset (SRTM, acquired in 02/2000), but also the newer WorldDEM and its derivative Copernicus DEM (acquired from 12/2010 to 01/2015), contain height information for a fixed, well-defined acquisition timeframe. Here, interferometric techniques represent the best methodology for creating digital elevation information on a global scale. Unfortunately, dense urban environments are represented with insufficient vertical accuracy due to double-bounce effects in the radar measurement. In addition, ongoing topographic changes occurring after data acquisition are not captured and negatively affect any analysis requiring up-to-date height information. The German TanDEM-X mission ended its data acquisition for the current WorldDEM / Copernicus DEM in January 2015 but continues to operate with two X-band SAR satellites flying in a close, bistatic formation. Since then, the mission has produced more up-to-date height information of global extent, which led to the new WorldDEM Neo dataset (acquired until 2020). Following this global update, continental-to-regional acquisitions since 2020 (until today) provide a continuous flow of bistatic data which allows 4D change analysis based on interferometric processing algorithms with persistent sensor technology.
To compensate for local deficiencies in dense urban landscapes, different data acquisition methods, such as stereo analysis of optical satellite data (triangulation of e.g. Pléiades or Pléiades Neo imagery) or newly evolving techniques such as SAR image height reconstruction based on machine learning, are required to locally improve the global layers where they show their greatest weaknesses. The smart integration of this more accurate and more detailed height information into a global elevation database such as WorldDEM Neo will create a truly global, consistent, accurate and homogeneous elevation database. The integration process is script-based, performant and parametrizable, and represents a first step towards a database of “living topography”, with the strict rule of only ever raising local data accuracy above the WorldDEM Neo DSM values. The presentation will provide an insight into the local/regional integration of high-resolution DEM data from SAR image height reconstruction and from optical sensors into the global WorldDEM Neo database.
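The "only ever raise local accuracy" integration rule described above can be sketched as a simple cell-wise selection. The array layout, NaN masking and per-cell error layers below are assumptions for illustration, not the actual script-based integration process:

```python
import numpy as np

def fuse_local_dem(global_dem, local_dem, global_err, local_err):
    """Cell-wise fusion: take the local height only where a local value
    exists (not NaN) and its accuracy estimate beats the global one,
    so the fused layer is never worse than the global DEM."""
    take_local = ~np.isnan(local_dem) & (local_err < global_err)
    return np.where(take_local, local_dem, global_dem)
```

In practice such a rule would operate on co-registered raster tiles with accuracy layers derived from each source's validation; the point of the sketch is that fusion is gated on accuracy, never applied unconditionally.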

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Improving ECOSTRESS’ absolute and relative georeferencing for optimisation of crop and irrigation products

Authors: Agnieszka Soszynska, Jan Hieronymus, Darren Ghent
Affiliations: University of Leicester, National Centre for Earth Observation, constellr GmbH
ECOSTRESS is currently the only source of high-resolution thermal imagery apart from ASTER (which is at its end of life). ECOSTRESS will continue to play a crucial role in the time before the three major thermal missions (LSTM, TRISHNA, and SBG) appear, delivering Land Surface Temperature (LST) operationally, along with all the high-level derived products, such as evapotranspiration. Therefore, the scientific community as well as the emerging thermal remote sensing service industry rely heavily on ECOSTRESS imagery. However, ECOSTRESS imagery is affected by image-quality issues. One of the most crucial is inaccuracy in the absolute georeferencing of the images. The standard georeferencing of ECOSTRESS images is based on matching to a static reference database and successfully processes approximately 38% of all scenes (including cloudy scenes). In these cases, a small average error of 48 metres (< 1 pixel) is observed, and a spread of 20 to 100 metres remains in the majority of the analysed scenes, as reported by NASA-JPL. If the matching fails (62% of the scenes), large errors are observed; previous studies reported errors of 14 pixels (980 m) on average. In standard processing of thermal imagery, matching procedures are typically conducted using static basemaps created from VIS-NIR imagery. However, georeferencing of thermal imagery is challenging due to rapid changes in the heat distribution across the terrain throughout the day, and the fact that the land cover of a static basemap quickly becomes outdated, causing the matching to fail more often. We propose a solution by creating an up-to-date basemap for matching, consisting of a mosaic of Sentinel-2 imagery acquired temporally close to the ECOSTRESS image. Thus, an up-to-date reference is created separately for each to-be-processed ECOSTRESS scene.
Such an approach saves archiving a global reference dataset (such as the Landsat Orthobase that NASA-JPL uses for its standard processing of ECOSTRESS imagery), and accounts for land cover changes in the imaged area (which a static reference database cannot). The created mosaics are used to find the optimal matching products. We compare the most suitable candidates, e.g. high-pass filtered Sentinel-2 imagery, which allows detecting structural/geometric features and borders of objects that should be equally visible in the VIS-NIR ranges as in the TIR ranges imaged by ECOSTRESS. Another issue with ECOSTRESS’ geometry comes from the non-parallel alignment of the rotating mirror, which results in a slight offset between adjacent scans in each image. We statistically describe this offset in order to derive a set of parameters to remove it and make the relative georeferencing homogeneous. The conducted work allows improvement of the ECOSTRESS image products, which bridge the gap until the future high-resolution thermal missions become operational.
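Matching a scene against an up-to-date basemap, as proposed above, is commonly done with FFT-based phase correlation; the sketch below shows that generic technique for integer shifts and is not the study's actual matcher:

```python
import numpy as np

def estimate_shift(reference, image):
    """Estimate the integer (row, col) offset of `image` relative to
    `reference` via FFT-based phase correlation. For a circularly
    shifted copy, the correlation surface peaks exactly at the shift."""
    R = np.fft.fft2(reference)
    I = np.fft.fft2(image)
    cross = I * np.conj(R)
    cross /= np.abs(cross) + 1e-12          # normalize to pure phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap offsets beyond half the image size to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))
```

Real georeferencing pipelines additionally handle sub-pixel peaks, masking of clouds and invalid data, and radiometric differences between the thermal scene and the VIS-NIR basemap (e.g. by high-pass filtering both inputs first, as the abstract suggests).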

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: AI driven detection of local errors and local 3D features in global DEMs

Authors: Oliver Lang, Pawel Kluter, Schulze Michael
Affiliations: Airbus Defence and Space
Digital Elevation Models (DEMs) represent a core dataset within the geospatial domain. Global datasets, like the Copernicus DEM or the more recent WorldDEM Neo, provide seamless 3D information all over the globe. These global products are created from high-resolution SAR Earth observation data of the TanDEM-X mission, leveraging highly automated and scalable processing. While the absolute and relative accuracy of these models is high and well documented (see for example [1]), specific local artifacts are still present in the datasets. In particular, the applied automated correction processes lack the detection and editing of local errors related to vertical features, like power pylons. As a consequence, the impact of these objects in the automated interferometric DEM generation process may remain as artifacts in the final product, resulting in a spatial sequence of local depressions. The presented approach is based on a mature deep learning technology allowing for the automatic detection of those artifacts in high-resolution DEMs and for their subsequent correction. As a detector, we applied a proprietary deep learning network architecture optimized for complex scenarios with multiple classes at multiple scales. The model training was based on an extensive set of representative labels taken from the global WorldDEM Neo elevation model. It is shown that this approach provides an effective way for automatic quality assurance, detection and elimination of point artifacts in Digital Elevation Models. A secondary goal of the AI-driven detection approach is the automatic detection and classification of prominent features of interest in a global DEM. This comprises geological features like craters, domes and depressions, and man-made objects like dams and mining areas. It is shown that the methodology has the potential to generate additional insights about the location and type of 3D features and consequently adds value to large-scale DEMs.
Reference: [1] Copernicus Digital Elevation Model Validation Report, Tech. rep., AIRBUS Defence and Space GmbH, https://spacedata.copernicus.eu/documents/20123/121239/GEO1988-CopernicusDEM-RP-001_ValidationReport_I3.0.pdf (last access: 02 December 2024), 2020
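The abstract's detector is a proprietary deep-learning network that is not reproduced here. As a purely illustrative classical baseline, isolated depressions of the kind described (e.g. pylon-induced pits) can be flagged by comparing each cell against a local median surface; the function name, window size and threshold are assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_depressions(dem, window=5, threshold=2.0):
    """Flag cells lying more than `threshold` metres below the local
    median surface -- a simple classical proxy for the point artifacts
    a learned detector would be trained to find."""
    background = median_filter(dem, size=window)
    return (background - dem) > threshold
```

A deep-learning detector generalizes far beyond such a rule (multiple classes, multiple scales, contextual cues), but the sketch conveys the underlying signal: point artifacts are anomalies relative to the surrounding surface.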

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: C.06.01 - POSTER - Sentinel-1 mission performance and product evolution

The Sentinel-1 mission, a joint initiative of the European Commission (EC) and the European Space Agency (ESA), comprises a constellation of two polar-orbiting satellites operating day and night, performing C-band synthetic aperture radar imaging that enables them to acquire imagery regardless of the weather. The C-band SAR instrument can operate in four exclusive imaging modes with different resolutions (down to 5 m) and coverage (up to 400 km). It provides dual polarization capability, short revisit times and rapid product delivery. Since the launches of Sentinel-1A and Sentinel-1B, in 2014 and 2016 respectively, many improvements have been made to the mission performance, and the products have evolved on many points. Sentinel-1B experienced an anomaly in December 2021 which rendered it unable to deliver radar data, and the launch of Sentinel-1C is planned for 2023. This session will present the recent improvements related to a) the upgrade of the product characteristics, performance and accuracy, b) the better characterization of the instrument with the aim of detecting anomalies or degradation that may impact data performance, c) the anticipation of performance degradation by developing and implementing mitigation actions, and d) the exploratory activities aiming at improving the product characteristics or expanding the product family to stay on top of the evolving expectations of the Copernicus Services.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Integrating Remote Sensing and Geospatial Analysis to Assess Environmental and Climatic Vulnerability in Urban Mediterranean Contexts: A Case Study of Valencia

Authors: Carlos Rivero Moro, Mª Amparo Gilabert Navarro, Ana Pérez-Hoyos, Ernesto López Baeza
Affiliations: Dept of Environmental Remote Sensing, Faculty of Physics, University of Valencia, Burjassot 46100, Spain, Dept of Environmental Remote Sensing, Faculty of Physics, University of Valencia, Burjassot 46100, Spain Albavalor, Science Park University of Valencia, Paterna 46980, Spain, Albavalor, Science Park University of Valencia, Paterna 46980, Spain
Urban areas are increasingly vulnerable to the impacts of climate change, particularly in Mediterranean cities like Valencia, which face compounded pressures from overbuilding, traffic, air pollution, and intensifying heat waves and extreme precipitation events. This study introduces a comprehensive framework to assess Valencia's environmental and climatic vulnerability, integrating advanced remote sensing tools, geospatial analysis, and air pollution data to guide targeted environmental and health policies. Utilizing high-resolution Sentinel-2 and Landsat data, three key environmental dimensions were analyzed: vegetation coverage, carbon sequestration potential, and urban heat. The Leaf Area Index (LAI) was derived to quantify vegetation density and its regulatory role in microclimatic stability. The fraction of absorbed photosynthetically active radiation (fAPAR) was used to estimate the carbon sequestration potential of green areas, while urban heat island (UHI) intensity was mapped using Landsat thermal data. Additionally, spatial patterns of air pollution were assessed using concentrations of PM2.5, PM10, and NO2 as key indicators of traffic-related and industrial emissions. A composite spatial indicator of vulnerability was developed by integrating these variables through Geographically Weighted Principal Component Analysis (GWPCA). The natural breaks method was applied to define risk classes, enabling the identification of vulnerability hotspots. The analysis revealed that a substantial proportion of Valencia's population resides in areas of high or very high vulnerability, emphasizing disparities between the urban core and peri-urban areas and making the city more vulnerable to the consequences of climate change, such as extreme precipitation or temperature events. The study provides a novel and replicable approach to mapping climatic vulnerability at a city-wide scale by integrating biophysical, thermal, and air pollution data.
This framework identifies high-risk areas and populations, highlighting the interplay between vegetation, thermal stress, air pollution, and carbon sequestration. These insights offer a robust foundation for designing policies that address environmental and climatic inequalities, reduce exposure to air pollution, and enhance urban resilience in Valencia.
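The composite-indicator step can be sketched in a strongly simplified form: plain PCA and quantile classes stand in here for GWPCA and the natural-breaks method, so this is an illustration of the idea (standardize layers, project onto the first principal component, classify the scores), not the study's actual workflow:

```python
import numpy as np

def composite_indicator(variables):
    """First principal component of standardized indicator layers
    (e.g. LAI, fAPAR, UHI intensity, PM2.5) as a composite score.
    Plain, non-weighted PCA is used as a simplification of GWPCA."""
    X = np.column_stack([np.asarray(v, float).ravel() for v in variables])
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, np.argmax(eigvals)]     # axis of maximum variance
    return X @ pc1

def classify(scores, n_classes=5):
    """Quantile classes as a crude stand-in for natural breaks."""
    edges = np.quantile(scores, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(scores, edges)
```

GWPCA differs in that the principal components are re-estimated with spatial kernel weights around each location, so loadings vary across the city; the global PCA above is the limiting case of an infinitely wide kernel.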

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Sentinel-1C and Sentinel-2C Precise Orbit Determination Commissioning Results

Authors: Carlos Fernandez Martin, Sonia Lara Espinosa, Oleksandr Ivanchuk, Jaime Fernandez Sanchez, Heike Peter, Muriel Pinheiro
Affiliations: GMV Aerospace & Defence, PosiTim UG, ESA/ESRIN
The Copernicus Precise Orbit Determination (CPOD) Service delivers, as part of the Ground Segment of the Copernicus Sentinel-1, -2, -3, and -6 missions, orbit products and auxiliary data files for the operational generation of the science core products in the corresponding Production Services (PS) at ESA and EUMETSAT, and to external users through the newly available Copernicus Data Space Ecosystem (https://dataspace.copernicus.eu/). The recent launches of Sentinel-1C and Sentinel-2C at the end of 2024 mark significant milestones in the Copernicus program, necessitating rigorous calibration and validation (CalVal) activities to ensure the precision and reliability of their Precise Orbit Determination (POD). This contribution presents the commissioning results for these satellites, focusing on the comprehensive activities undertaken. At the time of writing this abstract, Sentinel-1C had not yet been launched, so its results will be preliminary and pending a successful commissioning. As part of the POD commissioning, initial orbit determination solutions are generated using the same configuration as for their Sentinel-1 and -2 predecessors, providing a baseline for further calibration and validation. The Level-0 decoding capabilities for the satellite signals are then thoroughly verified to ensure data integrity and accuracy. The launch of these satellites marks a key milestone, augmenting the number of PODRIX receivers in orbit routinely tracking both GPS and Galileo, together with Sentinel-6A. Moreover, GNSS antenna calibration is conducted to mitigate multipath effects by generating a preliminary Phase Center Variation map in the usual ANTEX file. To validate the accuracy of the preliminary orbit solutions, cross-comparisons are conducted with solutions provided by the CPOD Quality Working Group (QWG), which includes esteemed institutions such as AIUB, DLR, TU Delft, TU Munich, and GFZ, among others.
These comparisons ensure consistency and reliability across different processing centers. The commissioning results demonstrated that the preliminary orbit determination solutions for Sentinel-2C met the stringent accuracy requirements of the Copernicus program, and it is expected that the same conclusion will be reached for Sentinel-1C. The calibration of the antennae and the verification of the Level-0 decoding capabilities further enhanced the reliability of the POD products. Cross-comparisons with CPOD QWG solutions confirmed the robustness and precision of the orbit determination process. Building on the commissioning results, future work will focus on continuous monitoring and refinement of the POD solutions for Sentinel-1C and Sentinel-2C. Additionally, lessons learned from these activities will inform the commissioning of future Sentinel satellites, ensuring ongoing improvements in the precision and efficiency of the CPOD Service. This presentation will provide a detailed overview of the CalVal activities, highlight key findings from the commissioning results, and discuss the implications for future Copernicus missions.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: New Product Evolution Of ESA’s Extended Timing Annotation Dataset (ETAD) For Sentinel-1 Mission

Authors: Victor Navarro Sanchez, Christoph Gisinger, Helko Breit, Ulrich Balss, Steffen Suchandt, Lukas Krieger, Thomas Fritz, Antonio Valentino, Muriel Pinheiro, Guillaume Hajduch
Affiliations: German Aerospace Center (DLR), Rhea for ESA, European Space Agency (ESA), ESRIN, Collecte Localisation Satellites (CLS)
SAR remote sensing is a powerful tool for Earth observation, supporting a wide range of applications thanks to its night-and-day observation capabilities and its excellent geometric accuracy. These include interferometric applications (InSAR), where the differential phase obtained from images of the same area acquired with a different geometry and/or at a different time is exploited to reconstruct, for instance, the scene topography or deformation over time. SAR measurements are, however, affected by the spatial and temporal variability of atmospheric conditions, solid Earth dynamic effects, and approximations during image processing. If not corrected, these effects can produce geometric shifts of up to several metres. In order to facilitate Sentinel-1 (S-1) SAR data corrections, bringing their geometric accuracy from metres down to centimetres, the Extended Timing Annotation Dataset (ETAD) was developed in a joint effort by ESA and DLR [1][2]. The ETAD product provides easy-to-use gridded timing corrections for S-1 level-1 single-look complex (SLC) data, following the radar geometry of the associated SLC product (range time, azimuth time). At the time of writing, ETAD products from July 21st, 2023 onwards can be retrieved via the Copernicus Data Space Ecosystem. Following positive feedback from the expert users who participated in the S-1 ETAD pilot study activities, acknowledged below, an extension of the S1-ETAD baseline product to cover a wider range of applications has been investigated in the context of the ESA-funded activity ”Scientific Evolution of the S1-ETAD product” (ETAD-SE).
Of the experimental features prototyped and evaluated in the ETAD-SE activity, the following have been selected for inclusion in the next major release (3.0) of the operational ETAD processor:
• New correction layer: ocean tidal loading (OTL) corrections in range and azimuth
• New supportive layer: tropospheric delay gradient with respect to height changes
• Bit quantization of correction layers in the ETAD NetCDF to reduce product file sizes
Ocean tidal loading is a wide-area deformation effect caused by the tidal redistribution of ocean water mass, which loads and deforms the solid Earth in coastal regions by up to 10 cm. OTL corrections are expected to improve geometric accuracy in the affected coastal regions, also reducing the stochastic error in time series analysis [3]. The tropospheric delay derivative with respect to height is an auxiliary layer to support the interpolation of tropospheric delay corrections, which are highly dependent on surface height, to a new grid with a different sampling of the underlying topography. This is useful, for instance, for InSAR applications where secondary products must be aligned (coregistered) to the primary product and, consequently, corrections must be re-evaluated for the common InSAR grid height values [4]. Finally, the bit quantization feature allows removing non-significant digits from selected layers while ensuring that the relevant information is kept, which in combination with data compression algorithms will reduce the data size, thus compensating for the additional layers in the product. The current product size is on the order of 100 MB. The implementation and qualification of these new features is foreseen within Q1/2025, in the context of Mission Performance Cluster (MPC) service activities. The new version of the ETAD processor (3.00) is planned to become operational in the S-1 ground segment before May 2025, along with the introduction of the Sentinel-1C unit.
Our contribution at the LPS’25 conference will present the extended ETAD product, together with use-case scenarios and the status of operational production.
Acknowledgements: The authors thank all the research groups that participated in the ETAD pilot study in 2022 for their valuable feedback on the product when applying it in SAR applications such as offset tracking, InSAR processing, data geolocation and geocoding, and stack co-registration. The ETAD processor was hosted on the Geohazard Exploitation Platform to allow processing by the pilot participants; the hosting was supported by the ESA Network of Resources Initiative. List of participating institutions in alphabetical order: Caltech, DIAN srl, DLR, ENVEO, IREA-CNR, JPL, Joanneum Research, NORCE, PPO.labs, TRE ALTAMIRA, University of Jena, University of Leeds, University of Strasbourg. The S1-ETAD scientific evolution study, contract No. 4000126567/19/I-BG, was financed by the Copernicus Programme of the European Union implemented by ESA. The results presented here are an outcome of the ESA contract Sentinel-1 / SAR Mission Performance Cluster Service 4000135998/21/I BG. The Copernicus Sentinel-1 mission is funded by the EU and ESA. Views and opinions expressed are however those of the author(s) only, and the European Commission and/or ESA cannot be held responsible for any use which may be made of the information contained therein. [1] Gisinger, C., Libert, L., Marinkovic, P., Krieger, L., Larsen, Y., Valentino, A., Breit, H., Balss, U., Suchandt, S., Nagler, T., Eineder, M., Miranda, N.: The Extended Timing Annotation Dataset for Sentinel-1 - Product Description and First Evaluation Results. IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-22, 2022. doi: 10.1109/TGRS.2022.3194216 [2] ESA: Sentinel-1 Extended Timing Annotation Dataset (ETAD). Data product website on Sentinel-1 webpage: https://sentiwiki.copernicus.eu/web/s1-products [3] Yu, C., Penna, N. T., Li, Z., “Ocean tide loading effects on InSAR observations over wide regions,” Geophysical Research Letters, 47, 2020. doi: 10.1029/2020GL088184 [4] Navarro, V., Gisinger, C., Brcic, R., Suchandt, S., Krieger, L., Fritz, T., Valentino, A., Pinheiro, M., "Advancing Sentinel-1 Insar Applications Using Esa’s Extended Timing Annotation Dataset Product," 2023 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Pasadena, CA, USA, 2023, pp. 7878-7881, doi: 10.1109/IGARSS52108.2023.10282172.
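The bit-quantization feature announced for ETAD release 3.0 can be illustrated with a generic mantissa-truncation sketch for float32 layers; the function name and parameters are assumptions, not the ETAD processor's implementation. Zeroing trailing mantissa bits bounds the relative error while making the data far more compressible by the lossless codecs used with NetCDF:

```python
import numpy as np

def quantize_keep_bits(data, keep_bits):
    """Zero out trailing mantissa bits of float32 data, keeping
    `keep_bits` of the 23-bit mantissa. The relative error is bounded
    by 2**-keep_bits, and the zeroed bits compress extremely well."""
    drop = 23 - keep_bits                    # float32 mantissa width is 23 bits
    mask = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)
    bits = np.asarray(data, np.float32).view(np.uint32)
    return (bits & mask).view(np.float32)
```

For a correction layer with values of a few metres, keeping ~10 mantissa bits preserves sub-millimetre precision, which is why such quantization can offset the size of newly added layers.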

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: SAME-AT - SAR meets Atmosphere: An Austrian Initiative in coupling INSAR information and numerical weather models

Authors: Michael Avian, Karlheinz Gutjahr, Florian Meier, Clemens Wastl, Stefan Schlaffer, Matthias Schlögl, Christoph Wittmann
Affiliations: Geosphere Austria, Joanneum Research
Satellite-based radar systems (Synthetic Aperture Radar - SAR) are well known for their all-day and all-weather capabilities. However, as the signals have to travel through the atmosphere twice, multiple effects occur, such as range delays and interferometric phase delays. These effects have to be considered when interpreting results based on radar data. The project SAME-AT was designed to contribute to a better understanding of the interaction between radar signals and the atmosphere. A major goal of this Austrian initiative is the improvement of atmospheric correction modelling as well as the use of error budgets for the atmospheric input parameters. This particular information is derived from the forecast uncertainties of a convection-permitting numerical weather prediction (NWP) ensemble system. Numerical weather models provide valuable information for SAR/InSAR correction approaches. Vice versa, observed SAR/InSAR delays and their error statistics can serve as data sources for the determination of the initial state (data assimilation) of NWP ensemble systems. SAR/InSAR delays allow conclusions to be drawn about the tropospheric moisture content, which is extremely valuable information for weather models. An important part of SAME-AT is therefore the investigation of the possible benefit of SAR/InSAR delays for the quality of NWP systems. Six months of bias-corrected Sentinel-1 SAR/InSAR delays of ASC 146 were assimilated into the convection-permitting Numerical Weather Prediction model AROME over Austria, using a slant-delay GNSS operator and a temporal coherence threshold of 0.5 for quality checking. The bias showed very high temporal fluctuations; therefore, a spatially averaged bias correction was applied. The observation error was set to 1.4 cm, slightly inflated compared to typical GNSS delays in order to take observation error correlation into account. Furthermore, a 12-observation-point thinning was applied.
Results show that small-scale increments could be ingested. Mostly neutral forecast RMSE scores for 2 m temperature and humidity, mean sea level pressure, 10 m wind, global radiation and precipitation were detected. The 2 m relative humidity bias improved slightly in the first nine forecast hours, while other biases remained mostly unchanged. However, in single convective cases such as 30 June 2023, an improvement of the precipitation patterns compared to the reference assimilation could be shown, as evaluated with the fraction skill score. Especially for higher precipitation thresholds, the InSAR assimilation performed better than the reference. The Austrian NWP models AROME and C-LAEF are a big step forward in terms of spatial resolution compared to ECMWF models like ERA-5. The effective spatial resolution (i.e. two times the grid-point distance) of AROME/C-LAEF is ~5 km, whereas the ERA-5 resolution is about 62 km. Thus, the spatial resolution of AROME/C-LAEF NWPs is in the range of the spatial filter of classical atmospheric phase screen (APS) removal. However, this resolution is still a few orders of magnitude larger than the Sentinel-1 resolution, and small features in Sentinel-1 interferograms are not modelled by AROME or any member of C-LAEF. To highlight this fact, the delay corrections were calculated ignoring the actual topography. Although the AROME corrections contained much more detail than the ERA-5 corrections, small variations in the interferometric delay were not included in this correction. The meteorological parameters (e.g. temperature, pressure and humidity) are regularly provided on an hourly basis. However, Sentinel-1 acquisitions for Austria occur approximately 10 min before or after the full hour, and the meteorological parameters have to be interpolated in time. To investigate the effect of different temporal interpolation methods, for the date 2023-02-18 we produced a dataset with regular runs at 16:00 and 17:00 and intermediate model runs at 16:15, 16:30 and 16:45.
Subsequently, we compared three interpolation methods (nearest neighbour, linear, and based on wind components) for the 16:15 run. In our test, temporal linear interpolation outperformed the other interpolation methods and ensured an interpolation error below +/-5 mm in most areas. Of course, the absolute deviation from the true values strongly depends on the actual weather conditions and may deviate from this example; however, linear interpolation will still yield the best results with respect to the other interpolation methods. The main challenge in interferometric SAR analysis remains the problem of coherence loss, mainly due to temporal decorrelation. Still, all recently published work dealing with atmospheric SAR range/phase delay corrections requires a more or less fully coherent signal to perform best. However, for our Austrian test site and the investigated seasons, we found that this requirement is often not fulfilled. The possibilities of the new, highly agile SAR sensor systems such as ICEYE, or the upcoming EE10 mission Harmony, allow short or even zero temporal baselines, which strongly mitigate these coherence problems and will consequently yield better estimates of atmospheric effects.
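The temporal linear interpolation that performed best in the comparison above can be sketched generically; the field arguments and the hour-based time convention are illustrative, not the SAME-AT implementation:

```python
import numpy as np

def interpolate_in_time(field_t0, field_t1, t0, t1, t_acq):
    """Linearly interpolate two NWP fields (e.g. hourly AROME runs at
    16:00 and 17:00) to the SAR acquisition time t_acq. Times may be
    given in any consistent unit, e.g. decimal hours."""
    w = (t_acq - t0) / (t1 - t0)             # interpolation weight in [0, 1]
    return (1.0 - w) * np.asarray(field_t0, float) + w * np.asarray(field_t1, float)
```

Nearest-neighbour interpolation corresponds to rounding w to 0 or 1; the wind-based method advects the field before blending, which the abstract found did not beat the plain linear scheme.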

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Impact Of 25th Solar Cycle Ionospheric Activity On Sentinel-1 SAR Data – A Status Report By SAR-MPC

Authors: Christoph Gisinger, Giorgio Gomba, Mainul Hoque, Victor Navarro Sanchez, Antonio Valentino, Muriel Pinheiro, Guillaume Hajduch, Ruben Bleriot
Affiliations: German Aerospace Center (DLR), Rhea for ESA, European Space Agency (ESA), ESRIN, Collecte Localisation Satellites (CLS), Apside
The ionization of Earth’s upper atmosphere by solar radiation and particles of the solar wind is a known major source of data disturbance for Synthetic Aperture Radar (SAR) satellites, typically operating in the microwave regime between 1.2 GHz (L-band) and 9.5 GHz (X-band). Primarily driven by the approximately 11-year solar cycle, the impact of ionosphere dynamics on SAR satellites spans from minor degradation of precise orbit solutions to frequency-dependent path delays in the radar measurements, causing significant errors when geolocating SAR image data and in interferometric SAR processing [1][2]. The European Copernicus mission Sentinel-1 (S-1) operates a C-band SAR payload (5.4 GHz) that provides continuous mapping of Earth’s surface at a global scale with a free and open data policy. At the start of S-1 public data dissemination in late 2014, solar activity was already rapidly declining, and after achieving full data capacity with the addition of the second satellite S-1B in mid-2016, the mission operated mostly in low to moderate ionospheric conditions. However, with the onset of the latest solar cycle 25 in 2022, the situation started to change again. This was also registered by the S-1 SAR Mission Performance Cluster (SAR-MPC), an international consortium of experts that performs continuous monitoring of the mission’s instrument performance and SAR data quality. Specifically, our assessment of S-1 data geolocation with globally distributed test sites began to show systematic effects at the low cm level, which were attributed to limitations in the presently applied methods using Total Electron Content (TEC) maps from GNSS data to correct for the ionospheric path delays [3]. Moreover, S-1 data usage is supported by the Extended Timing Annotation Dataset (ETAD), which provides several layers for geometric data correction, including ionospheric path delay estimations based on the TEC maps [4].
Since July 2023, the ETAD has been operationally produced for every S-1 acquisition, with the exception of wave mode data. By monitoring the statistics of the ionospheric delay correction results, the SAR-MPC can keep track of the impact of solar activity on S-1 data. As of today, the largest ionospheric path delays recorded with S-1 correspond to more than 2 m and were detected in the ETAD results of September 2024, during a series of major solar eruptions. These results mark a strong contrast to the solar-quiet years, in which the delays reached a maximum of about 0.5 m. One important driver in computing ionospheric path delays for the S-1 mission is the bottom-side ratio, which accounts for the fact that the S-1 satellites operate within the ionosphere, requiring a vertical separation of the total integrated TEC contained in the TEC maps. Presently, we use a fixed ratio of 0.9 [4]. Modifications to this ratio and other modelling aspects, such as the slant-range mapping methods, were investigated in the S-1 ETAD scientific evolution study, employing the 3-D ionospheric model NEDM2020 [5]. Interestingly, our tests with a spatio-temporal modelling of the bottom-side ratio have shown only minor improvements with S-1 measurements at the calibration sites. The SAR-MPC is now investigating how to better align these findings on ionospheric delay correction methods with the S-1 measurements at the calibration sites and the statistics provided by the systematic ETAD production. In this contribution, we will present the status of our work, closely following the activity of solar cycle 25 in the S-1 data, which is expected to remain high until 2026. Acknowledgements: The S1-ETAD scientific evolution study, contract No. 4000126567/19/I-BG, was financed by the Copernicus Programme of the European Union implemented by ESA. Part of the results presented here are an outcome of the ESA contract Sentinel-1 / SAR Mission Performance Cluster Service 4000135998/21/I BG.
The Copernicus Sentinel-1 mission is funded by the EU and ESA. Views and opinions expressed are however those of the author(s) only, and the European Commission and/or ESA cannot be held responsible for any use which may be made of the information contained therein.
[1] Hackel, S., Montenbruck, O., Steigenberger, P., Balss, U., Gisinger, C., Eineder, M. (2016). Model improvements and validation of TerraSAR-X precise orbit determination. Journal of Geodesy, 91(5), pp. 547-562. Springer. doi: 10.1007/s00190-016-0982-x. ISSN 0949-7714.
[2] Gomba, G., De Zan, F., Rommen, B., Orus Perez, R. (2022). Study on Ionospheric Effects on SAR and their Statistics. Proceedings of the European Conference on Synthetic Aperture Radar, EUSAR, pp. 1-5. EUSAR 2022, 2022-07-26 - 2022-07-27, Leipzig.
[3] Hajduch et al. (2024). S-1 Annual Performance Report for 2023. Technical report prepared by the S-1 SAR MPC, SAR-MPC-0634, Issue 1.3, 19.04.2024. Online: https://sentiwiki.copernicus.eu/web/document-library
[4] Gisinger, C., Libert, L., Marinkovic, P., Krieger, L., Larsen, Y., Valentino, A., Breit, H., Balss, U., Suchandt, S., Nagler, T., Eineder, M., Miranda, N. (2022). The Extended Timing Annotation Dataset for Sentinel-1 - Product Description and First Evaluation Results. IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-22. doi: 10.1109/TGRS.2022.3194216
[5] Hoque, M., Jakowski, N., Prol, F. (2022). A new climatological electron density model for supporting space weather services. J. Space Weather Space Clim., volume 12, issue 1. https://doi.org/10.1051/swsc/2021044

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: On the validation and assimilation of Sentinel-1C wave data in operational wave model MFWAM

Authors: Lotfi Aouf, Dr Fabrice Collard, Romain Husson, Bertrand
Affiliations: Météo France, CNRM
The coming launch of Sentinel-1C is excellent news for operational wave forecasting and for the improvement of SAR directional wave spectra coverage over the global oceans. The revisit of certain ocean regions should be significantly improved, thus enhancing the use of directional wave observations in operational wave models. This work aims to perform assimilation experiments with wave spectra provided by S-1C and to assess the quality of these data in comparison with the existing use of the S1A and CFOSAT missions. This work is a preliminary analysis for the use of these directional wave observations in the MFWAM wave model, which provides integrated wave parameters for the Copernicus Marine Service (CMEMS). The development of data quality control procedures is crucial to remove corrupted observations from the assimilation and sea state forecasts. Assimilation experiments have been implemented for a global configuration of the MFWAM model with a resolution of 20 km. Model runs with assimilation of S1C SAR wave spectra only, as well as jointly with the directional wave spectra from the S1A and CFOSAT missions, will be analyzed to estimate the impact at several scales of dominant sea state in terms of swell and wind-sea wave regimes. The output from the experiments will be validated using significant wave height from altimeters and wave parameters from drifting buoys available over all oceans. Particular attention will be paid to the Southern Ocean, where seas are dominated by severe storms. Discussions and conclusions on the use of directional wave spectra in the MFWAM model will be reported to the mission performance center.
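The validation step described above typically boils down to a handful of standard metrics between model and altimeter significant wave heights. A minimal sketch (the function name and the toy collocated values are illustrative, not the operational MFWAM validation code):

```python
import math

def validation_stats(model, obs):
    """Bias, RMSE and scatter index of modelled vs. observed significant wave
    height -- standard metrics for validating wave-model runs against altimetry."""
    n = len(obs)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    si = rmse / (sum(obs) / n)  # scatter index: RMSE normalised by the mean observation
    return bias, rmse, si

# toy collocated Hs values (metres): model run vs. altimeter
hs_model = [2.1, 3.4, 1.8, 5.0, 2.7]
hs_alti  = [2.0, 3.6, 1.7, 4.8, 2.9]
bias, rmse, si = validation_stats(hs_model, hs_alti)
```

Comparing these metrics between the S1C-only run and the joint S1A/CFOSAT run is what quantifies the assimilation impact.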

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Refining Sentinel-1 Radiometric and Pointing Calibration by On-Board Temperature Compensation Emulation

Authors: Beatrice Mai, Andrea Recchia, Gilles Guitton, Harald Johnsen, Muriel Pinheiro, Antonio Valentino
Affiliations: Aresys Srl, OceanDataLab, NORCE, ESA, Starion
The Sentinel-1 (S-1) instrument is an active phased array antenna providing fast scanning in elevation and azimuth, allowing the implementation of the TopSAR acquisition mode, the main operational mode of the S-1 mission over land and ice. The SAR antenna Front End is made of 280 Transmit/Receive Modules (TRMs) organized in 14 tiles (along the azimuth direction) of 20 TRMs each (along the elevation direction). Independent TRMs for polarization (H and V) and for Tx and Rx are available. Each TRM can be commanded in gain and phase to achieve the required antenna pattern steering. On-board, due to the limited memory available, a fixed set of steering coefficients is loaded:
• The steering coefficients to implement 16 Elevation Antenna Patterns (EAP): 6 Stripmap, 3 TopSAR Interferometric Wide Swath (IWS), 5 TopSAR Extra Wide Swath (EWS) and 2 Wave beams.
• The steering coefficients to implement 1024 Azimuth Antenna Patterns (AAP): a different sub-set of the azimuth patterns is used for each TopSAR beam depending on the required steering capability, while the azimuth pattern pointing at the boresight is always used for Stripmap and Wave beams.
The status of the TRMs is continuously monitored by the SAR Mission Performance Cluster (MPC) by means of the dedicated Radio Frequency Characterization (RFC) acquisition mode. The RFC mode allows monitoring the status of each TRM in Tx and Rx and in H and V polarization. In particular, the gain and phase deviations w.r.t. the nominal TRM operating state (defined in orbit immediately after the launch) are measured with this mode. In the past, this monitoring has allowed the detection of failures of a few TRMs. During antenna operation, the TRM gain and phase vary around the commanded settings due to temperature variations [1]. To reduce these temperature-related variations and ensure better antenna pattern stability, an on-board temperature compensation strategy has been implemented.
The gain and phase variations within the TRMs are compensated by look-up tables based on the temperature of the TRM. The on-board temperature compensation for imaging operation can be enabled or disabled. The operational approach is continuous on-board temperature compensation of the TRMs, ensuring that the patterns remain of good quality even when there are high temperature gradients across the antenna. Considering this, the ground processing calculates the antenna patterns based on the nominal commanded settings. A first verification of the effects of the on-board temperature compensation was made by observing the evolution of the gain and phase of all TRMs (Tx/Rx and both V/H pol) derived from a 24-hour sequence of RFC acquisitions (sampled every 5 minutes) during the S-1B commissioning phase, while the instrument was cooling down. Two different types of jumps were identified:
• Isolated jumps of single TRMs due to the temperature compensation applied at TRM level.
• Simultaneous jumps of all 20 TRMs of the same tile due to the temperature compensation applied at Tile Amplifier (TA) level.
From this first investigation, some important conclusions are drawn:
• The implemented temperature compensation strategy works well, ensuring stable excitation coefficients in the presence of large temperature variations (the temperature decrease during the monitored 24 hours was about 30 degrees, much larger than the temperature variations observed during operation). Indeed, the excitation coefficients at the beginning and at the end of the cooling are aligned.
• In the short term, the temperature compensation strategy introduces quantized gain and phase jumps. This can result in small distortions of the antenna patterns when a TRM (or, worse, a TA) is working around a temperature for which a gain/phase adjustment is foreseen.
Moreover, the analysis of real S-1 data has in some cases shown:
• Small radiometric jumps at sub-swath boundaries, which can be a problem for radiometry-based applications (e.g., soil moisture retrieval or wind velocity estimation). Such small jumps could be introduced by changes in the excitation coefficients of the TRMs due to the temperature compensation that are not compensated on ground.
• Small Doppler Centroid (DC) jumps observed during long data takes, which introduce biases in the L2 Radial Velocity (RVL) products [2]. These products provide a measure of the ocean currents based on the DC estimated from the data (after removing the component related to the acquisition geometry). Again, these jumps could be introduced by changes in the excitation coefficients of the TRMs due to the temperature compensation slightly changing the azimuth pointing of the beam.
A procedure has been implemented to emulate the temperature compensation approach applied on-board and assess its effect on the antenna patterns. The aim is to confirm whether the above-mentioned effects could indeed be related to the on-board temperature compensation strategy. The following steps are repeated for a given Sentinel-1 data take:
• The time instants when the temperature compensation is applied on-board are derived from the stream of Instrument Source Packets (ISPs) of the acquisition. For this purpose, the L0 Annotation (L0A) products are used, since they do not include the User Data Fields and cover the full data take. The time instants to be considered depend on the acquisition mode and on the platform.
• The telemetry data containing the temperatures of all the TRMs (provided with a sampling of 16 seconds) are interpolated to get the TRM temperatures at the time instants when temperature compensation is applied.
• For each TRM, the gain and phase variations during the data take are emulated considering the obtained temperatures and the on-board tables containing the predefined gain and phase settings.
• The obtained relative gain and phase variations are made absolute by means of the first available RFC acquisition performed before the data take (according to the acquisition plan, the time interval can span from a few minutes to several hours).
• The absolute excitation coefficients are fed to the S-1 Antenna Model to predict the expected antenna pattern variations within the data take.
This presentation will provide an overview of the latest results obtained with the on-board temperature compensation emulation procedure described above. The results are aimed at identifying possible “fine” calibration strategies to compensate for the small data quality issues discussed above. This will be particularly relevant for the calibration of S-1C data. Indeed, during the S-1C IOC, a dedicated activity will be performed to test different temperature compensation strategies. This will provide more information, again aimed at solving calibration issues encountered by users of real S-1 data.
References:
[1] S1-PL-ASD-PL-0001, Sentinel-1 SAR Instrument Cal. and Char. Plan., issue 8.1, 25/02/2016
[2] MPC-0534, Sentinel-1 Doppler and Ocean Radial Velocity (RVL) ATBD, issue 1.6, 10/10/2022
Acknowledgements: The SAR Mission Performance Cluster (MPC) Service is financed by the European Union, through the Copernicus Programme implemented by ESA. Views and opinions expressed are however those of the author(s) only, and the European Commission and/or ESA cannot be held responsible for any use which may be made of the information contained therein. The authors wish to thank Francisco Ceba Vega (AIRBUS) and Ignacio Navas Traver (ESTEC) for the support provided in understanding the S-1 on-board compensation strategy approach.
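The core of the emulation procedure described in this abstract, i.e. interpolating the 16-s temperature telemetry at the compensation instants and looking up the quantized setting, can be sketched as follows (the temperature thresholds, gain offsets and telemetry values are hypothetical; the real on-board tables are instrument-internal):

```python
def interp_temp(times, temps, t):
    """Linearly interpolate the 16-s-sampled TRM temperature telemetry at time t."""
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            frac = (t - times[i]) / (times[i + 1] - times[i])
            return temps[i] + frac * (temps[i + 1] - temps[i])
    raise ValueError("t outside telemetry span")

def commanded_gain_db(temp_c, lut):
    """Quantized gain offset the compensation would command: the offset of the
    highest temperature threshold not exceeding temp_c (hypothetical table)."""
    gain = lut[0][1]
    for threshold, offset in lut:
        if temp_c >= threshold:
            gain = offset
    return gain

# telemetry sampled every 16 s; compensation instants would come from the L0A
# packet stream (all values here are made up for illustration)
times = [0.0, 16.0, 32.0, 48.0]
temps = [22.0, 24.0, 27.0, 31.0]
lut = [(0.0, 0.0), (25.0, -0.1), (30.0, -0.2)]  # (threshold deg C, gain dB)

g1 = commanded_gain_db(interp_temp(times, temps, 24.0), lut)  # 25.5 C -> -0.1 dB
g2 = commanded_gain_db(interp_temp(times, temps, 44.0), lut)  # 30.0 C -> -0.2 dB
```

The quantized steps visible in `g1` vs. `g2` are exactly the kind of gain jumps that, once fed to the antenna model, can explain the small radiometric and pointing discontinuities discussed above.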

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Observing ocean wave spectra from space: complementarity between CFOSAT-SWIM and Sentinel-1 SAR wave mode data

Authors: Charles Peureux, Annabelle OLLIVIER, Romain Husson, Lotfi Aouf, Cédric Tourain, Danièle Hauser
Affiliations: CLS, Météo-France, CNES, LATMOS
Wave spectra carry information for a detailed characterization of the ocean surface in their domain of definition: integrated parameters such as Hs and dominant waves, spectral and directional width, Stokes drift, etc. This work compares two types of databases of wave spectra measured from space and acquired by two different types of technology. SWIM is the first ocean wave scatterometer on board CFOSAT, launched in 2018. With its 3 rotating beams at near-nadir incidence, it allows for the measurement of ocean wave spectra in the approximately 30 m to 500 m wavelength domain at global scale. The SAR instruments on board the Sentinel-1 constellation (A and B) have been acquiring images of the global ocean since 2014. Thanks to their wave mode acquisition configuration, ocean wave spectra are measured with global coverage every 100 km over the open ocean. A set of SWIM spectra collocated with S1 and WAM is statistically compared. SWIM enables ocean wave characterization down to a few tens of meters in wavelength, with regular global coverage, whereas Sentinel-1 is limited to the longest wavelengths. Although noisy, SWIM data complement numerical wave prediction models such as WAM, and can be used to characterize quantities not accessible via altimetry: wave field directionality, peak parameters and more. Comparisons are shown between SWIM, Sentinel-1 and MFWAM (the French WAM version) for integrated wave parameters such as significant wave height, peak wavelength or peak direction of partitioned wind sea and swells. Sentinel-1 data are well complemented by the recent launch of Sentinel-1C. Analysis shows that SWIM can resolve the wind-sea part in approximately 25% of the sea states over the global ocean, which is higher than Sentinel-1. Sentinel-1 exhibits better capacities to image long swells (wavelengths longer than 500 m) than SWIM does.
Improvements to the SWIM processing are in the pipeline that could extend the validity of SWIM measurements to longer swells, up to 1200 m wavelength.
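The integrated parameters compared here follow directly from the spectrum. A minimal 1-D sketch, assuming a frequency spectrum E(f), Hs = 4*sqrt(m0), and the deep-water dispersion relation for the peak wavelength (the toy spectrum values are illustrative; operational SWIM and S-1 spectra are two-dimensional in frequency and direction):

```python
import math

def hs_and_peak_wavelength(freqs, efth):
    """Hs from the zeroth spectral moment (trapezoidal integral of E(f), m^2/Hz)
    and peak wavelength from deep-water dispersion, L = g / (2*pi*fp^2)."""
    m0 = sum(0.5 * (e0 + e1) * (f1 - f0)
             for f0, f1, e0, e1 in zip(freqs, freqs[1:], efth, efth[1:]))
    hs = 4.0 * math.sqrt(m0)
    fp = freqs[efth.index(max(efth))]           # peak frequency (Hz)
    wavelength_p = 9.81 / (2.0 * math.pi * fp ** 2)
    return hs, wavelength_p

# toy single-peaked spectrum: frequencies in Hz, energy density in m^2/Hz
freqs = [0.05, 0.08, 0.10, 0.12, 0.15]
efth  = [0.5, 4.0, 8.0, 3.0, 0.5]
hs, wavelength_p = hs_and_peak_wavelength(freqs, efth)
```

With these toy values the peak sits at 0.10 Hz, i.e. a ~156 m dominant wavelength, comfortably inside both the SWIM and Sentinel-1 wavelength domains quoted above.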

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: First Commissioning Phase Results of the Internal Calibration Concept adapted for Sentinel-1C

Authors: Jakob Giez, Dr Patrick Klenk, Dr Kersten Schmidt, Dr Marco Schwerdt
Affiliations: German Aerospace Center (DLR)
Building on the achievements of Sentinel-1A (S-1A) and Sentinel-1B (S-1B), Sentinel-1C (S-1C) represents the third Sentinel-1 satellite in ESA's Copernicus program. The commissioning of S-1C ensures the continued provision of high-resolution C-band synthetic aperture radar (SAR) data for the coming years, supporting security applications and environmental monitoring, including land subsidence, ice movements, and ocean conditions (e.g., [1]). As with its predecessors, S-1C is equipped with an active phased array C-band antenna comprising 280 transmit/receive modules (TRMs) for each polarization channel (H and V). These modules control the antenna beam steering in both azimuth and elevation directions. Like the instruments of the preceding two satellites, S-1C employs an internal calibration methodology based on the acquisition of different calibration signals, obviating the need for a dedicated calibration network. Furthermore, the pulse-coded calibration technique (also known as the PN gating method [2]) is employed, whereby special RF characterization (RFC) data will be acquired for the purpose of monitoring and ensuring the performance of the whole instrument down to individual TRMs. However, in order to mitigate the impact of spurious signals, which had been observed for S-1A and S-1B, modifications have been made to the antenna hardware in the form of newly developed tile amplifiers. This new architectural approach allows for a reduction from five to three different calibration signals, as well as the addition of new interleaved noise measurements. This results not only in alterations to the mode timelines but especially in a complete re-design of the internal calibration concept and the RFC mode. As was done for Sentinel-1A and Sentinel-1B ([3] and [4]), an independent SAR system calibration of Sentinel-1C is performed by DLR in parallel to the commissioning phase activities executed by ESA.
Ground testing of the new internal calibration concept has demonstrated its applicability using simulated data. This presentation will show the effectiveness of the adapted Sentinel-1C internal calibration concept by presenting first results based on real Sentinel-1C data acquired during the commissioning phase.
References:
[1] R. Torres, D. Geudtner, S. Lokas, D. Bibby, P. Snoeij, I. N. Traver, F. Ceba Vega, J. Poupaert, and S. Osborne, “Sentinel-1 Satellite Evolution,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, July 2018, pp. 1555–1558.
[2] D. Hounam, M. Schwerdt, M. Zink, “Active Antenna Module Characterisation by Pseudo-Noise Gating,” 25th ESA Antenna Workshop on Satellite Antenna Technology, Noordwijk, Netherlands, 2002.
[3] M. Schwerdt, K. Schmidt, N. Tous Ramon, G. Castellanos Alfonzo, B. J. Döring, M. Zink, and P. Prats-Iraola, “Independent Verification of the Sentinel-1A System Calibration,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 3, pp. 994–1007, 2016.
[4] M. Schwerdt, K. Schmidt, N. Tous Ramon, P. Klenk, N. Yague-Martinez, P. Prats-Iraola, M. Zink, and D. Geudtner, “Independent System Calibration of Sentinel-1B,” Remote Sensing, vol. 9, no. 6: 511, 2017.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Copernicus POD Service: Status of Copernicus Sentinel Satellite Orbit Determination

Authors: Carlos Fernandez Martin, Jaime Fernandez Sanchez, Heike Peter, Muriel Pinheiro, Carolina Nogueira-Loddo
Affiliations: GMV Aerospace & Defence, PosiTim UG, ESA/ESRIN, EUMETSAT
The Copernicus Precise Orbit Determination (CPOD) Service is an integral part of the Ground Segment of the Copernicus Sentinel missions, specifically Sentinel-1, -2, -3, and -6, providing essential orbit products and auxiliary data files. These resources are crucial for the operational production of scientific core products within ESA's and EUMETSAT's Production Services (PS) and are available to external users via the Copernicus Data Space Ecosystem (https://dataspace.copernicus.eu/). Since its establishment in April 2014, CPOD has consistently supported the Copernicus program alongside the launch of successive Sentinel satellites. Historically, CPOD relied on NAPEOS, a Flight Dynamics and POD software suite from ESOC; a significant evolution has been the transition to a GMV-owned software suite, FocusPOD. Developed from scratch in 2021 using modern C++ and Python, FocusPOD was adopted in CPOD and declared operational in 2023. This represents a leap forward in processing capabilities and integration, thanks to modern technologies and development paradigms tailored specifically to the unique requirements of the Copernicus missions. During this transition, state-of-the-art accuracy standards were maintained while runtime performance was notably improved. GMV, in collaboration with the CPOD Quality Working Group (QWG), oversees the ongoing evolution of the precise orbit determination systems. The CPOD QWG includes institutions such as AIUB, CNES, DLR, ESOC, JPL/NASA, TU Delft, TU Munich, TU Graz, and GFZ, among others, contributing to quality control, integration, and validation of new algorithms and standards. The CPOD Service achieves state-of-the-art accuracy, with 3D RMS consistency below 1 cm with respect to non-time-critical products from QWG centers, and excels in timeliness by generating products in under five minutes to support near-real-time processing, all while maintaining operational robustness.
Recent initiatives include analyzing the impact of seasonal geocenter motion modelling through the latest ITRF2020 standards. This involves assessing solutions from QWG centers in Centre of Mass (CoM) and Centre of Network (CoN) realizations, which impacts orbit comparisons and combination strategies. We are also enhancing the Sentinel-3 short-time-critical products via single-receiver ambiguity-fixing strategies and updating the Sentinel-6 macro-models. This presentation will showcase the current performance of the POD products and the impact of recent analyses. Additionally, we will outline future developments in CPOD aimed at continuously improving our products and maintaining critical support for the precision and efficiency of upcoming Copernicus missions.
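The sub-cm 3D RMS consistency quoted above is computed from position differences between two orbit solutions at common epochs. A minimal sketch (toy coordinates at the few-mm level, not CPOD code):

```python
import math

def rms_3d(orbit_a, orbit_b):
    """3-D RMS of position differences between two orbit solutions evaluated
    at common epochs; positions are (x, y, z) tuples in metres."""
    sq = [sum((a - b) ** 2 for a, b in zip(pa, pb))
          for pa, pb in zip(orbit_a, orbit_b)]
    return math.sqrt(sum(sq) / len(sq))

# toy epochs: two solutions differing by a few millimetres per axis
sol_a = [(7000e3, 0.0, 0.0), (0.0, 7000e3, 0.0), (0.0, 0.0, 7000e3)]
sol_b = [(7000e3 + 0.003, 0.004, 0.0),
         (0.002, 7000e3, -0.004),
         (0.001, 0.002, 7000e3 + 0.002)]
rms = rms_3d(sol_a, sol_b)  # a few mm, i.e. well below the 1 cm consistency level
```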

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: 3 Years of Observations of the Corner Reflector Network Graz

Authors: Karlheinz Gutjahr, Michael Avian
Affiliations: Joanneum Research, Geosphere Austria
Corner reflectors (CRs) are artificial passive reflectors of varying shape, size and material (e.g. Qin et al., 2013; Jauvin et al., 2019) which have been used in many (In)SAR-related studies. Typically, CRs serve as calibration and reference targets in such studies and thus allow investigations and optimizations of radiometric as well as geometric parameters of the SAR system. With respect to geometric calibration activities, the following work is mentioned for illustrative purposes: Mohr and Madsen, 2001 (ERS); Small et al., 2004 (ENVISAT); Schubert et al., 2008 (TerraSAR-X); Nitti et al., 2015 (COSMO-SkyMed); and Gisinger et al., 2020 (TerraSAR-X and Sentinel-1). Recently, CRs have been increasingly used for ground motion monitoring applications, especially in areas that suffer from a lack of coherent natural radar reflections (Strozzi et al., 2013; Jauvin et al., 2019; Qin et al., 2013; and the nationally funded project VIGILANS). Dedicated CR-based studies on atmospheric path delays can be found in e.g. Jehle et al., 2008 or Eineder et al., 2011. Both experiments used CR “valley - mountain-top” constellations to investigate topography-induced path delay effects. For this reason, Joanneum Research and Geosphere Austria have joined forces and established a CR network around the city of Graz, Austria, in order to shed more light on two key research questions: 1. modelling of atmospheric path delays, and 2. deformation monitoring using CRs. In total, four double-headed CRs were installed in the surrounding area of Graz (ordered south to north): (i) two CRs at the airport in Graz Thalerhof (THN and THS), (ii) one at Graz-Lustbühel (LBL), all three in the flat and hilly areas of the Grazer Feld, and (iii) one on the Graz-Schöckel (SKL) plateau at 1442 m a.s.l.
The special features of this network are that (i) LBL is close to the renowned Satellite Laser Ranging (SLR) station at the Lustbühel Observatory and (ii) THS was equipped with a fixed shifting device, enabling controlled east/west and up/down movements. For Sentinel-1 - depending on the imaging geometry - a CR can theoretically appear in up to three bursts. As the CRs THN, THS and SKL each appear in one orbit in two consecutive bursts, whereby only the valid range of bursts was counted, there are a total of 15 detections. Additionally, all four CRs could be monitored in one TerraSAR-X stripmap data stack. After 2.5 years (at the time of the abstract) of maintaining and monitoring the CR network, we can summarize as follows.
ALE: The absolute localisation errors (ALE) for Sentinel-1 are in an acceptable range of -0.32 to +0.70 m in azimuth and -0.18 to +0.14 m in range direction. All these numbers include the corrections as provided by the S1-ETAD product (Sanchez et al., 2023). The authors were part of the S1-ETAD pilot study set up by ESA between January and September 2022, which aimed to provide early access to ETAD products to expert users, promoting independent validation and supporting the definition of eventual improvements of the product. However, the measurements of THN in ascending orbit 146 show a higher ALE of about 0.30 m in range direction. To exclude multi-path effects, we conducted a terrestrial laser scanning campaign and measured the distances to other possible reflectors nearby. Although a paved road and a metallic fence are close to THN, given their distance, a multipath effect cannot fully explain the deviation. The ALEs for the TerraSAR-X data stack - although also acquired in ascending orbit - do not show any deviating behaviour for THN.
The ALE in azimuth direction is in the range from -0.01 to 0.02 m, and after replacing the standard atmospheric range correction with corrections based on ERA-5 or the AROME NWP model, the ALE in range direction is in the range from 0.04 to 0.10 m.
d-InSAR: To the best of our knowledge, the shifting device as developed for THS is unique and allows a simple yet very controlled shift of the whole CR in east/west and up/down direction. We simulated several “movements” of the CR, most of which were controlled independently by terrestrial measurements. To evaluate the accuracy of observable surface displacements using differential SAR interferometry (d-InSAR), we computed the differences in line-of-sight (LOS) displacements between the stable corner reflector THN and the “moving” corner reflector THS. The observed d-InSAR LOS displacement differences (ΔLOS THN-THS) were compared with terrestrial measurements of differences in East, North, and Height directions, projected onto the incidence angles of the respective Sentinel-1 orbits. The analysis of around 60 d-InSAR measurements revealed a mean difference of 0.75 mm ± 1.07 mm for ascending orbit 146, −0.46 mm ± 1.31 mm for descending orbit 22, and 0.06 mm ± 1.96 mm for descending orbit 124.
Summary: Corner reflectors are essential tools for SAR and InSAR applications, supporting calibration, atmospheric modeling, and ground motion monitoring, especially in areas with limited natural radar reflectivity. To address these challenges, Joanneum Research and Geosphere Austria established a CR network near Graz, Austria, featuring four dual-headed CRs, including one with a novel shifting device for controlled movements. Over 2.5 years, observations from Sentinel-1 and TerraSAR-X demonstrated acceptable absolute localization errors and robust displacement monitoring capabilities.
Differential SAR interferometry analysis validated line-of-sight displacement measurements with mean differences of 0.75 mm ± 1.07 mm (ascending orbit) and −0.46 mm ± 1.31 mm to 0.06 mm ± 1.96 mm (descending orbits), underscoring the network's value in geophysical research.
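Projecting terrestrial East/North/Up displacement measurements onto the radar line of sight, the step used in the THN-THS comparison above, can be sketched as follows (the sign convention, the right-looking 90-degree offset, and the heading value are illustrative assumptions, not the authors' exact processing):

```python
import math

def enu_to_los(de, dn, du, inc_deg, heading_deg):
    """Project an East/North/Up displacement (metres) onto the line of sight of
    a right-looking SAR; positive means motion towards the satellite here."""
    inc = math.radians(inc_deg)
    look_az = math.radians(heading_deg + 90.0)  # right-looking: 90 deg off heading
    # horizontal component along the look azimuth, combined with the vertical
    d_ground = de * math.sin(look_az) + dn * math.cos(look_az)
    return du * math.cos(inc) - d_ground * math.sin(inc)

# pure 10 mm uplift at 35 deg incidence projects to ~8.2 mm towards the satellite
los = enu_to_los(0.0, 0.0, 0.010, 35.0, -168.0)  # heading value is a placeholder
```

Differencing such projected values between a stable and a shifted reflector is what yields the sub-mm mean offsets reported for the three Sentinel-1 orbits.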

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Roadmap for the next generation of Sentinel-1 Level-2 Ocean Products

Authors: Romain Husson, Amine Benchaabane, Guillaume Hajduch, Pauline Vincent, Charles Peureux, Antoine Grouazel, Alexis Mouche, Frédéric Nouguier, Yngvar Larsen, Anna Fitch, Geir Engen, Fabrice Collard, Gilles Guitton
Affiliations: CLS, IFREMER, NORCE, OceanDataLab
Over the past years, the Level-2 experts from the Sentinel-1 Mission Performance Center (MPC-S1) have gained much experience in understanding the capabilities and limitations of providing users with the most accurate and well-qualified S1-derived sea state parameters: sea surface wind vectors, wave spectra and radial velocity. These variables are provided by the S-1 Instrument Processing Facility (IPF) in a single Level-2 product referred to as the OCN (OCeaN) product. In the IPF, some processing specificities can lead to inconsistencies in the Level-2 OCN products or prevent easy exploration of more advanced and synergetic sea state retrieval methodologies. For instance, wind products are produced from Ground-Detected (GRD) products, while wave spectra and radial velocity are produced from Single-Look Complex (SLC) processing. This prevents the use of wind-related variables only available in SLC products and the easy merging of wind and wave retrievals in a combined inversion. Besides, on top of the geophysical parameters already available in the OCN products, state-of-the-art techniques have shown the need to provide new information that can benefit both current users, by better qualifying the existing sea state products, and new users such as meteorologists and oceanographers, by providing new variables (e.g. classification of various atmospheric stability conditions [8]). This is typically the case for SAR texture-based information that can be derived from Deep Neural Networks (DNNs) to provide segmentation/classification of the sea ice, atmospheric or oceanic processes at play. Following the guidelines from the SEASAR 2023 workshop, we propose several major evolutions to prepare the ground for the retrieval of more exhaustive, more accurate and better qualified L2 OCN products.
Provide new SAR observables: Co-/Cross-Polarization Coherence (CCPC) [3], the IMACS parameter (Imaginary MeAn Cross-Spectra) [4], wind streak orientation [5] and geophysical Doppler shift [6] are typical examples of variables that can be extracted from the SLC products. They can be used in complementary approaches to better constrain the sea state retrieval and avoid using ancillary data such as wind vectors from Numerical Weather Prediction (NWP) models. AI methods applied to sea surface SAR observations are very useful for identifying phenomena that can degrade the quality of the existing OCN products but are not reported in these Level-2 products. Providing classification/segmentation information in OCN products for mature algorithms, or more user-friendly data formats enabling the derivation of new AI techniques, would greatly help foster the development of such promising methods. Investigate new Level-2 processing approaches that would start from Level-0 or Level-1 SLC products, instead of using Level-1 GRD products as inputs. Such approaches are expected to bring more freedom: (1) in signal processing, avoiding the filtering/windowing required by applications other than sea state retrieval, and (2) in accessing the extended burst overlap in the time/frequency domain to estimate burst Cross-Spectra over larger regions. Such approaches were successfully tested in initiatives from NORCE with their GDAR OCN libraries and have already proven the concept for swell and radial velocity retrievals. The current procedure used for Sigma0 calibration in co- and cross-polarizations relies on measurements over the rain forest to obtain a flat gamma profile and over transponders for absolute calibration at these point locations. Other approaches, referred to as “geophysical calibration”, are based on the overall agreement between NWP models and Sigma0 measurements using available GMFs.
Such empirical methods compensate for residual calibration errors such as imperfect Elevation Antenna Pattern (EAP) corrections, which also evolve over time and among S1 products. Validation against in situ measurements shows improved performance for retrieving sea surface winds [7]. Building a massive and exhaustive dataset of Sentinel-1 products with well-qualified reference datasets from numerical model outputs (e.g. CERRA, the future ERA6 for atmospheric models) and in situ measurements (e.g. spotter drifters, moored buoys) would also greatly benefit the understanding of the dependence of SAR observables on sensing and environmental conditions. This would ideally require the availability of the entire Sentinel-1 archive reprocessed with a homogeneous processing chain. Such massive datasets are key for deriving new sea state retrieval methodologies that combine them all, using either analytical or AI techniques. As a complement, conducting inter-comparisons and inter-calibration between various spaceborne measurements can dramatically help address issues requiring the largest possible number of SAR observations. This is typically the case for Tropical Cyclone (TC) monitoring: TCs can be monitored daily only by a constellation of SARs maintained by ESA (S1), CSA (RCMs, RS2) and JAXA (ALOS-2). This topic also recalls the need for investigating synergies between C-band and L-band missions, to prepare for the future ROSE-L mission with the current ALOS-2 and the NISAR mission to be launched in Q1-2025. Similarly, ensuring the consistency of sea surface winds derived from SAR and scatterometers is necessary so that they can be used together by downstream users. Direct comparisons between co-located acquisitions with short time lags, or indirect co-locations against common in situ references, are needed to provide inter-calibrated wind measurements despite their different resolutions and sensing technologies.
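A direct co-location of the kind mentioned here can be sketched as a simple pairing of observations by time lag and distance (the record layout, thresholds and values are illustrative; an operational matcher would use proper geodesic distances and gridded wind cells):

```python
import math

def collocate(sar_obs, scat_obs, max_lag_s=1800.0, max_dist_deg=0.25):
    """Pair SAR and scatterometer wind cells observed close in time and space.
    Each record is (time_s, lat, lon, wind_speed); thresholds are illustrative."""
    pairs = []
    for ts, la, lo, ws in sar_obs:
        for tt, lb, lob, wt in scat_obs:
            if (abs(ts - tt) <= max_lag_s
                    and math.hypot(la - lb, lo - lob) <= max_dist_deg):
                pairs.append((ws, wt))  # (SAR wind, scatterometer wind)
    return pairs

# toy observations: only the first scatterometer cell is close enough in time/space
sar  = [(0.0, 45.0, -5.0, 8.2), (0.0, 46.0, -5.0, 9.1)]
scat = [(900.0, 45.1, -5.1, 8.0), (7200.0, 46.0, -5.0, 9.5)]
pairs = collocate(sar, scat)
```

The statistics of such matched pairs are what an inter-calibration between SAR and scatterometer winds would be fitted on.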
Finally, dedicated efforts are also required to provide users with a user-friendly confidence level, distributed for each sea state variable. For that purpose, a Bayesian retrieval built from a wide set of SAR observables, together with the fit residual, should help quantify the uncertainty of each variable.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: DLR’s Independent Calibration of the Sentinel-1C System – First Results from S1C Commissioning Phase Activities

Authors: Dr Patrick Klenk, Dr Kersten Schmidt, Jakob Giez, Matteo Nannini, Andrea Pullela, Dr. Pau Prats-Iraola, Dr Marco Schwerdt
Affiliations: German Aerospace Center (DLR)
European Space Agency's (ESA) Sentinel-1C (S1C) is the third satellite of the Sentinel-1 mission. To be launched in December 2024, it will ensure seamless continuity of C-band SAR data for global monitoring of the Earth's surface in the framework of the Copernicus program (e.g., [1]). In parallel to the commissioning of S1C by ESA, an independent system calibration is performed by DLR on behalf of ESA. Based on an efficient calibration strategy, this paper details the different activities planned and executed by DLR and presents first calibration results. Owing to the stringent performance requirements of Sentinel-1, the DLR SAR Calibration Center already performed, on behalf of ESA, a similarly organized independent end-to-end system calibration of S1A in 2014 ([2], [3]) and of S1B in 2016 [4], relying on a separate dedicated suite of in-house analysis tools and the innovative, highly stable reference ground targets deployed along DLR's SAR calibration field (e.g., [5]) in Southern Germany. However, S1C is not simply an exact rebuild of its predecessors S1A/B but implements a series of hardware improvements based on lessons learnt from the previous missions. These novel aspects and their impact on the calibration strategy will first be briefly introduced in this presentation. Launch of S1C is currently foreseen for early December 2024, with the ensuing commissioning phase activities to be performed between early January and late April 2025. This will therefore allow us to present and discuss the results achieved by the DLR team during the S1C in-orbit commissioning phase at the symposium. After a general overview of all DLR activities, this presentation will focus on a detailed assessment of all L1-based performance results, such as pointing and antenna model verification, point target evaluations and InSAR verification activities. Last but not least, results of cross-calibration activities between Sentinel-1A and S1C acquisitions will be discussed.
References: [1] R. Torres, D. Geudtner, S. Lokas, D. Bibby, P. Snoeij, I. N. Traver, F. Ceba Vega, J. Poupaert, and S. Osborne, “Sentinel-1 Satellite Evolution,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, July 2018, pp. 1555–1558. [2] M. Schwerdt, K. Schmidt, N. Tous Ramon, G. Castellanos Alfonzo, B. J. Döring, M. Zink, and P. Prats-Iraola, “Independent Verification of the Sentinel-1A System Calibration,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 3, pp. 994–1007, 2016. [3] M. Schwerdt, K. Schmidt, N. Tous Ramon, G. Castellanos Alfonzo, B. Doering, M. Zink, and P. Prats, “Independent Verification of the Sentinel-1A System Calibration - First Results,” in EUSAR 2014; 10th European Conference on Synthetic Aperture Radar, June 2014, pp. 1259–1262. [4] M. Schwerdt, K. Schmidt, N. Tous Ramon, P. Klenk, N. Yague-Martinez, P. Prats-Iraola, M. Zink, and D. Geudtner, “Independent System Calibration of Sentinel-1B,” Remote Sensing, vol. 9, no. 6: 511, 2017. [5] M. Jirousek, B. Doering, D. Rudolf, S. Raab, and M. Schwerdt, “Development of the highly accurate DLR Kalibri Transponder,” in EUSAR 2014; 10th European Conference on Synthetic Aperture Radar, June 2014, pp. 1176–1179.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: A.01.05 - POSTER - Ozone and its precursors through the Atmosphere: Advances in understanding and methods

Ozone is a fundamentally important constituent of the atmosphere. In the troposphere it is a greenhouse gas and a pollutant that is detrimental to human health and to crop and ecosystem productivity. Tropospheric data are available from ozonesondes, aircraft, and satellites, but high levels of uncertainty and bias remain. In the stratosphere, ozone protects the biosphere from UV radiation; long-term observations from satellites and the ground have confirmed that the long-term decline of stratospheric ozone was successfully stopped as a result of the Montreal Protocol. Future stratospheric ozone levels depend on changes in many factors, which vary with latitude, as well as on interactions with the troposphere and potentially the mesosphere.

This session is dedicated to the presentation of methods and results that further the understanding of the distribution of ozone and its precursors through the atmosphere using remote sensing techniques, with particular emphasis on advanced methods for past and current missions such as OMI and Sentinel-5P, and on preparing for future missions such as ALTIUS and Sentinels 4 & 5 and their synergies with other missions.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: A Posteriori Fusion of IASI, MIPAS and GOME2 Ozone Profile Products

Authors: Liliana Guidetti, Nicola Zoppetti, Dr Simone Ceccherini, Piera Raspollini, Ugo Cortesi
Affiliations: IFAC-CNR
In this work, we introduce a new dataset of atmospheric ozone profiles derived from the synergy of three satellite instruments: IASI, GOME-2, and MIPAS. This dataset is global in scope, spanning the period from January 2008 to April 2012, and is mapped onto a regular time-latitude-longitude grid. While the grid is not fully covered due to the inherent characteristics of the contributing instruments, the dataset provides comprehensive spatial and temporal coverage of the atmospheric ozone distribution. The dataset is constructed using the Complete Data Fusion (CDF) method, an algebraic algorithm rooted in the Optimal Estimation (OE) technique. This method integrates individual OE retrievals from the three instruments, leveraging their complementary strengths to enhance the accuracy and completeness of the resulting profiles. By combining the high vertical resolution of MIPAS (Envisat, IFAC-CNR data), the high spatial coverage of IASI (Metop-A, ULB-LATMOS data) and the ultraviolet sensitivity of GOME-2 (Metop-A, ACSAF data), the CDF approach delivers a more robust and detailed representation of atmospheric ozone. We describe the genesis of this dataset, focusing on its unique characteristics from both the perspective of individual profiles and aggregated large-scale patterns. The dataset is evaluated against several reference sources, including the original retrievals, radiosonde measurements, and atmospheric models, depending on the context of the analysis. A key aspect of our work is a detailed exploration of the methodological contributions of each instrument to the fused product, emphasizing the added value brought by the CDF approach. Preliminary validation of the fused dataset involves comparisons with ozone radiosonde profiles, offering insights into its accuracy and reliability. Additionally, the gridded structure facilitates direct comparisons with global atmospheric models.
Examples of such comparisons are presented, showcasing the potential applications of this dataset in advancing our understanding of atmospheric dynamics. Finally, we discuss the road map for publishing this dataset within the framework of a digital infrastructure currently under development. This infrastructure aims to ensure the dataset's accessibility, usability, and integration with existing atmospheric and climate research tools, thereby supporting its future use in a wide range of scientific studies and applications.
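The Complete Data Fusion step described above can be sketched, at its algebraic core, as a precision-weighted combination of the individual retrievals. The following minimal Python sketch uses synthetic three-level profiles; the full CDF additionally propagates averaging kernels and a priori information, which is omitted here.

```python
import numpy as np

def fuse_profiles(profiles, covariances):
    """Precision-weighted combination of retrievals on a common grid.

    Minimal sketch of the algebra behind CDF-style merging: each
    retrieval x_i with error covariance S_i contributes with weight
    S_i^{-1}; the fused covariance is the inverse of the summed
    precisions.
    """
    precisions = [np.linalg.inv(S) for S in covariances]
    S_fused = np.linalg.inv(sum(precisions))
    x_fused = S_fused @ sum(P @ x for P, x in zip(precisions, profiles))
    return x_fused, S_fused

# Two synthetic 3-level ozone retrievals of the same air mass
x1 = np.array([30.0, 50.0, 40.0])   # instrument A (better at levels 1-2)
x2 = np.array([32.0, 48.0, 41.0])   # instrument B (better at level 0)
S1 = np.diag([4.0, 1.0, 1.0])       # error covariances (diagonal here)
S2 = np.diag([1.0, 4.0, 4.0])
xf, Sf = fuse_profiles([x1, x2], [S1, S2])
```

Each fused level is pulled toward the instrument with the smaller error there, and the fused variance is smaller than either input variance.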

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Development of a Merged CO Climate Data Record from IASI and MOPITT Observations

Authors: Maya GEORGE, Cathy Clerbaux, Juliette Hadji-Lazaro, Sarah Safieddine, Simon Whitburn, Selviga Sinnathamby, Daniel Hurtmans, Pierre Coheur, Helen Worden, Corinne Vigouroux, Bavo Langerock, Steven Compernolle
Affiliations: LATMOS/IPSL, Sorbonne Université, UVSQ, CNRS, Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing (SQUARES), Université libre de Bruxelles (ULB), Royal Meteorological Institute of Belgium (RMIB), Atmospheric Composition, Measurements and Modelling (ACM2), Atmospheric Chemistry Observations and Modeling, National Center for Atmospheric Research, Royal Belgian Institute for Space Aeronomy (BIRA)
Carbon monoxide (CO) is a key atmospheric compound that can be remotely sensed by satellite on a global scale. Continuous observations have been available since 2000 from the MOPITT/Terra instrument. Since 2007, the IASI/Metop instrument series has provided another homogeneous CO data record, thanks to the recent reprocessing of Metop-A and Metop-B data by EUMETSAT, resulting in the IASI CO Climate Data Record (IASI CO-CDR). Measuring the variability and trends of CO on a global scale is crucial as it serves as a precursor for ozone and carbon dioxide and regulates the troposphere's oxidizing capacity through its destruction cycle involving the hydroxyl radical (OH). As part of the ESA CCI+ Ozone Precursors project, we have been developing a merged CO Climate Data Record dataset combining IASI and MOPITT data to analyze long-term variability and trends. Monthly averaged gridded CO total columns (Level 3, 1°x1° resolution) are used as input. For IASI, we first apply an additional cloud mask to the Level 2 official data available on the Aeris French Database (https://iasi.aeris-data.fr/). We then compute monthly averages using IASI CO data from all Metop satellites, resulting in an intermediate (non-public) IASI CO monthly Level 3 product. For MOPITT, we use the official monthly Level 3 (version 9T) data available on the NASA Earth Data Portal (https://www.earthdata.nasa.gov/). We tested various methodologies for merging IASI and MOPITT CO Level 3 monthly grids. We performed averages with weighting schemes based on MOPITT priors and/or IASI/MOPITT uncertainties. In this poster, we will present the final version of the CO CCI merged product, which uses MOPITT CO total column/MOPITT prior ratios as weights for averaging. Among the different algorithm versions tested, this approach showed the best performance when validated against ground-based FTIR NDACC measurements, achieving a mean absolute bias below 5%, low standard deviation, and excellent correlation.
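The merging step can be illustrated with a toy weighted average of two gridded monthly fields. The weights and grid values below are purely illustrative (the final product described above weights MOPITT by its total-column-to-prior ratio); the sketch only shows how weighted averaging handles cells where one instrument has no data.

```python
import numpy as np

def merge_l3(col_a, col_b, w_a, w_b):
    """Weighted merge of two monthly-mean CO total-column grids.

    NaN marks cells where an instrument has no data; there the
    other instrument's value is used alone.  Weights are
    illustrative scalars here, but could be per-cell arrays.
    """
    wa = np.where(np.isnan(col_a), 0.0, w_a)
    wb = np.where(np.isnan(col_b), 0.0, w_b)
    num = wa * np.nan_to_num(col_a) + wb * np.nan_to_num(col_b)
    den = wa + wb
    out = np.full_like(num, np.nan)           # cells with no data stay NaN
    np.divide(num, den, out=out, where=den > 0)
    return out

# Toy 2x2 grids of CO total columns (molecules/cm^2-like magnitudes)
iasi   = np.array([[2.0e18, np.nan], [1.8e18, 2.2e18]])
mopitt = np.array([[2.2e18, 2.4e18], [np.nan, 2.0e18]])
merged = merge_l3(iasi, mopitt, w_a=1.0, w_b=1.0)
```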

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Initial investigations of altitude-resolved ozone variability for the past 2.5 decades using the novel GOME-type Ozone Profile Essential Climate Variable (GOP-ECV) data record

Authors: Dr Melanie Coldewey-Egbers, Dr. Diego Loyola, Richard Siddans, Barry Latter, Brian Kerridge, Michel Van Roozendael, Daan Hubert, Michael Eisinger
Affiliations: German Aerospace Center, Rutherford Appleton Laboratory, Royal Belgian Institute for Space Aeronomy, European Space Agency
In this paper, we present first applications of the novel GOME-type Ozone Profile Essential Climate Variable (GOP-ECV) data record for the 26-year period 1995 through 2021. GOP-ECV has been developed in the framework of the European Space Agency’s Climate Change Initiative+ ozone project (Ozone_cci+) and combines ozone profile measurements from a series of European nadir-viewing satellite sensors including GOME, SCIAMACHY, OMI, GOME-2A, and GOME-2B into a coherent long-term climate data record. The Rutherford Appleton Laboratory (RAL) scheme is used to retrieve ozone profiles on 20 fixed pressure levels ranging from the surface up to 80 km. Profiles from the individual instruments are first harmonized through careful elimination of inter-sensor deviations and drifts and then merged to generate a consistent monthly mean gridded product at a spatial resolution of 5°x5°. For the harmonization, OMI serves as a reference sensor. In a further step, the merged product is homogenized with the well-established GTO-ECV (GOME-type Total Ozone Essential Climate Variable). This data record is based on nearly the same satellite sensors and possesses excellent long-term stability, which enables us to further improve the coherence and reliability of the merged nadir profiles from the first step. An altitude-dependent scaling, based on the profile Jacobians derived from a Machine Learning approach, is applied to the profiles. With this adjustment, full consistency between the GTO-ECV and GOP-ECV data records in terms of the total ozone column is achieved. We use the GOP-ECV data record to investigate the temporal evolution and long-term variability of the partial columns and ozone anomalies from selected atmospheric layers during the past 2.5 decades. The anomalies will be compared with anomalies derived from the SBUV (Solar Backscatter Ultraviolet Radiometer) Merged Ozone Data Set (MOD) from the SBUV satellite instrument series covering the period 1970-2023.
On top of that, we show results of an initial comparison with ozonesonde measurements in the tropics gathered from the Southern Hemisphere Additional Ozonesonde (SHADOZ) network archive. We find a low bias in the lowermost layers, which turns into a positive bias above 150 hPa. Furthermore, we demonstrate the impact of the scaling on the temporal evolution of the difference between GOP-ECV and the ground-based data.
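The total-column-consistency adjustment can be pictured as redistributing the column residual over the profile layers. In this hypothetical sketch, simple per-layer weights stand in for the Jacobian-based, machine-learning-derived weights used in the actual GOP-ECV processing.

```python
import numpy as np

def scale_to_total(partial_columns, target_total, weights):
    """Adjust a merged ozone profile so its vertically integrated
    column equals a reference total column.

    Toy altitude-dependent scaling: the column residual is
    redistributed over the layers in proportion to per-layer
    weights (illustrative stand-ins for profile Jacobians).
    """
    residual = target_total - partial_columns.sum()
    return partial_columns + residual * weights / weights.sum()

layers = np.array([10.0, 40.0, 120.0, 90.0, 30.0])  # partial columns, DU
w      = np.array([0.5, 1.0, 2.0, 1.0, 0.5])        # hypothetical weights
scaled = scale_to_total(layers, target_total=300.0, weights=w)
```

After scaling, the profile integrates exactly to the reference total column (here 300 DU), with most of the adjustment placed in the layers with the largest weights.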

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: The Unique Contribution to Understanding Antarctic Ozone Hole Dynamics of Infrared Sounder Measurements

Authors: Guido Masiello, Tiziano Maestri, Carmine Serio, Giuliano Liuzzi, Michele Martinazzo, Federico Donat, Lorenzo Cassini, Pamela Pasquariello, Marco D'Emilio, Sara Venafra
Affiliations: Department of Engineering, University of Basilicata, Department of Physics and Astronomy, University of Bologna, Department of Civil, Building and Environmental Engineering, University of Rome, Italian Space Agency
The ozone hole over Antarctica is a yearly occurrence that forms and grows during the Southern Hemisphere's spring. It typically reaches its maximum size in October or November and then diminishes as temperatures in the Antarctic stratosphere rise in December. This warming prevents the formation of Polar Stratospheric Clouds (PSCs), which are crucial for ozone depletion. PSCs form when temperatures drop below 195 K, allowing nitric acid and water vapor to condense into ice and nitric acid trihydrate (NAT) particles. Various satellite instruments, such as the Ozone Monitoring Instrument (OMI) and the TROPOspheric Monitoring Instrument (TROPOMI), track the ozone hole. These instruments rely on reflected sunlight to measure ozone concentrations, limiting their ability to monitor the early stages of the ozone hole when the polar region is still dark. Additionally, they cannot directly detect nitric acid and water vapor in the gas phase. Microwave instruments like MLS/AURA can monitor nitric acid but have coarse spatial resolution and are insensitive to the thermodynamic conditions in the upper troposphere and lower stratosphere (UT/LS) region. Recent improvements in forward and inverse modeling techniques have enabled scientists to simultaneously retrieve thermodynamic conditions, ozone, and nitric acid concentrations from Infrared Atmospheric Sounding Interferometer (IASI) measurements. IASI, with its polar orbit, provides excellent spatial and temporal coverage of the ozone hole. By analyzing IASI data collected over Antarctica from 2021 to 2023, we discovered a significantly larger and deeper ozone hole than indicated by ECMWF analysis, which relies on TROPOMI and OMI data that are limited during winter, especially in the Antarctic interior. The study found a correlation between decreasing nitric acid concentrations and upper tropospheric temperatures below 195 K, supporting the role of NATs in ozone depletion. IASI spectra near the pole confirmed the presence of NATs.
Furthermore, a comparison of HNO3 spatial patterns from IASI and MLS/AURA showed strong agreement, indicating that the observed nitric acid decline primarily occurs in the upper troposphere under cold conditions favorable for NAT formation. The study demonstrates how infrared sounder measurements offer valuable insights for understanding Antarctic Ozone hole dynamics indicating a fundamental contribution of EE9-FORUM in this direction.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Characterization of the TROPOMI UV radiometric calibration for the operational Ozone Profile retrieval algorithm

Authors: Serena Di Pede, Dr Pepijn Veefkind, Dr Maarten Sneep, Dr Mark ter Linden, Dr Erwin Loots, Emiel van der Plas, Edward van Amelrooy, Mirna van Hoek, Antje Ludewig, Arno Keppens
Affiliations: Royal Netherlands Meteorological Institute (KNMI), Delft University of Technology, Royal Belgian Institute for Space Aeronomy (BIRA-IASB)
Daily global ozone profile measurements are essential to understand ozone-related physical and chemical processes in the atmosphere. Information on the ozone profile can be derived from backscattered UV radiation, as the ozone absorption cross section varies by more than three orders of magnitude. However, to retrieve accurate information on the trace gas, especially in the troposphere, the quality and calibration of the measured radiances are crucial. The operational TROPOMI ozone profile retrieval is obtained from the TROPOMI radiances in UV band 1 (270-300 nm) and band 2 (300-330 nm), with spectral resolutions of 1.0 nm and 0.5 nm, respectively, and a spectral sampling of 0.065 nm. To optimize the retrieval and improve the fitting precision, it is common practice to apply an additional calibration correction to the input radiances. This radiometric calibration, known as "soft-calibration", is applied to the input radiances at the L2 processing level, before performing the retrieval itself. The TROPOMI soft-calibration is a time-dependent correction, updated yearly. It is computed by comparing the measured radiances with forward model calculations, taking into account four orbits per year in order to capture the seasonal radiance variation. For each orbit, the correction parameters are computed as a function of wavelength, orbit ground pixel, and radiance level. The current radiometric soft-calibration correction can reach up to ~30% of the input radiance (in a relative sense), especially at wavelengths < 300 nm, and it shows a distinctive spectral shape. To reduce the size of the correction and to improve its spectral shape, the effect of detector straylight has been thoroughly investigated and will be presented.
In this contribution, we will first give an overview of the current TROPOMI operational radiometric correction, and then examine in depth the complex effect that detector straylight has on the size and temporal trend of the soft-calibration. In particular, we will discuss the importance of keeping the correction as stable and as small as possible over time, which is essential for the temporal consistency of the retrieval quality and for eliminating residual systematic biases in the radiance that can significantly affect the precision of the tropospheric ozone column estimate.
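The principle of a soft-calibration, deriving a multiplicative radiance correction from the ratio of forward-modelled to measured spectra, can be illustrated as follows. This is a deliberately reduced sketch: the operational TROPOMI correction is additionally resolved in ground pixel and radiance level and updated yearly, whereas here the correction is a single per-wavelength factor estimated from synthetic spectra.

```python
import numpy as np

def soft_calibration(measured, modelled):
    """Derive a multiplicative radiance correction per spectral pixel.

    Illustrative sketch: the per-wavelength correction is the
    forward-modelled / measured radiance ratio, median-averaged
    over a set of reference spectra for robustness.
    """
    ratio = modelled / measured          # shape (n_spectra, n_wavelengths)
    return np.median(ratio, axis=0)      # per-wavelength factor

rng = np.random.default_rng(0)
true = np.linspace(1.0, 2.0, 5)                  # "true" radiance spectrum
bias = np.array([1.3, 1.1, 1.0, 0.95, 0.9])      # instrument bias, up to 30%
meas = true * bias * rng.normal(1.0, 0.005, (20, 5))   # 20 noisy measurements
corr = soft_calibration(meas, np.tile(true, (20, 1)))
calibrated = meas * corr                         # corrected radiances
```

Applying the derived factors removes the imposed bias to within the measurement noise, which is the sense in which a soft-calibration compensates residual radiometric errors.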

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Observation of chlorine activation by means of TROPOMI measurements of OClO from 2017 – 2025

Authors: Janis Pukite, Steffen Ziegler, Thomas Wagner
Affiliations: Max Planck Institute for Chemistry
Chlorine dioxide (OClO) is a by-product of ozone-depleting halogen chemistry in the stratosphere. Although rapidly photolysed during daytime, it plays an important role as an indicator of chlorine activation in polar regions during polar winter and spring at twilight conditions, because of the nearly linear dependence of its formation on chlorine oxide (ClO). The TROPOspheric Monitoring Instrument (TROPOMI) is a UV-VIS-NIR-SWIR instrument on board the Sentinel-5P satellite developed for monitoring the composition of the Earth's atmosphere. Launched on 13 October 2017 into a near-polar orbit, it provides continuous monitoring possibilities for many constituents, including observations of OClO at an unprecedented spatial resolution. We analyze the time series (2017 - 2025) of slant column densities (SCDs) of chlorine dioxide (OClO) in the polar regions. In particular, we focus on the highly variable conditions in the NH polar regions by comparing the OClO time series with meteorological data and CALIPSO CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) polar stratospheric cloud (PSC) observations for both the Antarctic and Arctic regions. This allows us to investigate the conditions under which chlorine activation starts and ends.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Tropospheric Ozone Retrieval Using the RAL UV Algorithm: Applications to Geostationary and Polar-Orbiting Satellites with Early Insights from GEMS and TEMPO

Authors: Ka Lok Chan, Richard Siddans, Brian Kerridge, Barry Latter
Affiliations: RAL Space
The ozone profile retrieval algorithm developed for UV nadir sounders by RAL is a robust and versatile scheme for extracting height-resolved ozone distributions from spectral observations in the ultraviolet (UV) band. It is applicable to nadir-viewing sensors aboard both polar-orbiting satellites (e.g., GOME, GOME-2, OMI and Sentinel-5P) and newly available geostationary satellites (e.g., GEMS and TEMPO). Using the optimal estimation method, the scheme provides information on tropospheric ozone (surface to 450 hPa) in particular, as well as on higher layers, and its data have been exploited in a series of scientific studies concerning the role of ozone in atmospheric chemistry, climate, and air quality. Having been re-engineered for ESA Sentinels-4 and -5, the scheme has recently undergone significant enhancements, enabling harmonized application across multiple satellite platforms with differing orbits and observational characteristics. These advancements have improved the algorithm's precision, consistency, and computational efficiency, ensuring its adaptability to both polar and geostationary instruments. A key focus has been the optimization of the retrieval algorithm for geostationary satellite instruments, such as GEMS, TEMPO and Sentinel-4, which provide unprecedented temporal resolution and coverage for monitoring ozone variability over specific regions. This presentation will provide an overview of the current state of tropospheric ozone data retrieved from polar-orbiting instruments, highlighting its validation against ozonesonde measurements and illustrating its utility in various applications. Early results from the geostationary instruments GEMS and TEMPO will be illustrated, emphasizing their capabilities in capturing diurnal ozone variation. These results will be compared against data from polar-orbiting instruments and ozonesonde observations to assess consistency and reliability.
By harmonizing ozone profile retrievals across satellite platforms and leveraging the unique advantages of geostationary sensors, the RAL algorithm represents a significant step forward in atmospheric monitoring. These advancements pave the way for a more comprehensive understanding of tropospheric ozone at regional and global scales, offering complementary information for air quality management and climate research.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: The Antarctic stratospheric nitrogen hole: Southern Hemisphere and Antarctic springtime total nitrogen dioxide and total ozone variability as observed by Sentinel-5p TROPOMI and the stratospheric denitrification process.

Authors: Jos deLaat
Affiliations: KNMI
Daily Sentinel-5p nitrogen dioxide total column measurements, in conjunction with total ozone column data, are used to study daily, seasonal and interannual Southern Hemisphere mid-latitude and polar (Antarctic) spatio-temporal variability from 2018 to 2021 during Austral spring, with a particular focus on the Antarctic ozone hole and the stratospheric denitrification process. Correlating total nitrogen dioxide columns and total ozone columns using phase diagrams reveals intricate patterns. Although denitrification is a crucial process for the formation of the ozone hole, the relation between total ozone and total nitrogen dioxide is far from simple. Results reveal two main regimes: inner-vortex air depleted of ozone and nitrogen dioxide, and outer-vortex air enhanced in ozone and nitrogen dioxide. Within the vortex, total ozone and total stratospheric nitrogen dioxide are strongly correlated, which is much less evident outside the vortex. Denitrification inside the Antarctic ozone hole (stratospheric vortex) during Austral spring can clearly be observed. In phase diagrams, these two main regimes are linked via a third regime, so-called "mixing lines": coherent patterns in the total nitrogen dioxide column vs. total ozone column phase space connecting the two main regimes. These mixing lines exist because of spatial differences in the locations of minimum and maximum nitrogen dioxide and total ozone, and differences in their respective spatial gradients. This strongly suggests that total nitrogen dioxide columns and total ozone columns reflect coherent physico-chemical processes occurring at different altitudes, thereby providing information about vortex dynamics and cross-vortex-edge mixing. The characteristics of the relation between nitrogen dioxide and ozone vary significantly during Austral spring. Interannual variability between 2018 and 2021, on the other hand, is rather small, and for any time of the year the phase diagrams are very similar.
The sole exception is 2019, a year with a highly unstable Antarctic stratospheric vortex and significantly more mixing of inner-vortex and outer-vortex air. The distinction between the three regimes is nevertheless robust irrespective of date and time. The results show that daily stratospheric nitrogen dioxide column measurements from nadir-viewing satellites like TROPOMI (and thus many of its predecessors such as OMPS, OMI, GOME-2, SCIAMACHY and GOME) provide a new means for monitoring stratospheric nitrogen dioxide and denitrification in the springtime Antarctic stratosphere and, in conjunction with daily total ozone column data, also springtime Antarctic stratospheric vortex dynamics. Finally, these findings are not entirely new, having been reported in the early 2000s based on the GOME and SCIAMACHY satellite instruments; somewhat surprisingly, however, there has never been any effort to explore them with additional data or newer satellite instruments. This "rediscovery" is rather timely, as the Earth observation capacity to monitor the stratosphere is rapidly aging and key satellites like MLS will end by 2026.
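The phase diagrams underlying this analysis are, in essence, two-dimensional histograms of paired total-column values. The following toy sketch builds such a diagram from two synthetic regimes standing in for depleted inner-vortex and enriched outer-vortex air; real input would be daily gridded TROPOMI columns, and the bin ranges here are illustrative.

```python
import numpy as np

def phase_diagram(o3, no2, o3_bins, no2_bins):
    """Total-O3 vs total-NO2 phase diagram: a 2-D histogram counting
    how often each (O3, NO2) column pair occurs."""
    counts, _, _ = np.histogram2d(o3, no2, bins=[o3_bins, no2_bins])
    return counts

# Two synthetic regimes: depleted inner-vortex air and enriched
# outer-vortex air (O3 in DU, NO2 in 1e15 molec/cm^2, toy values).
rng = np.random.default_rng(1)
inner = rng.normal([150, 1.0], [15, 0.3], (500, 2))
outer = rng.normal([350, 4.0], [20, 0.5], (500, 2))
data = np.vstack([inner, outer])
h = phase_diagram(data[:, 0], data[:, 1],
                  o3_bins=np.linspace(100, 450, 8),
                  no2_bins=np.linspace(0, 6, 7))
```

In such a diagram the two regimes appear as two dense clusters; real data additionally shows the sparse "mixing lines" connecting them.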

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Inter-comparison of tropospheric ozone column data sets from combined nadir and limb satellite observations

Authors: Carlo Arosio, Viktoria Sofieva, Andrea Orfanoz-Cheuquelaf, Alexei Rozanov, Klaus-Peter Heue, Edward Malina, Roeland Van Malderen, Jerry Ziemke, Mark Weber
Affiliations: Institute of Environmental Physics, University of Bremen, Finnish Meteorological Institute, German Aerospace Center, DLR, ESA ESRIN, Royal Meteorological Institute of Belgium, NASA GSFC
Satellite observations provide a valuable monitoring tool for tropospheric ozone, particularly after the launch of ESA Sentinel missions. This study is part of the ESA project Ozone Recovery from Merged Observational Data and Model Analysis (OREGANO) and focuses on satellite data sets derived using limb-nadir combined observations. This approach exploits the total ozone column from nadir observations and stratospheric column information from limb measurements (or models) to obtain tropospheric ozone column (TrOC) as a residual. This study contributes to the Tropospheric Ozone Assessment Report (TOAR) II activity. Seven data sets are considered in our analysis: some combine two satellite-based observations, others satellite with model or reanalysis data. At IUP, TrOC data sets were derived using the limb-nadir matching technique from SCIAMACHY and OMPS observations and were merged to obtain a product covering the 2002-2023 time frame. Three more long-term satellite-based products are considered: OMI-LIMB and GTO-LIMB developed at the Finnish Meteorological Institute, and OMI-MLS developed at NASA. Other shorter TrOC products involving model data, such as OMPS-MERRA, EPIC-MERRA and S5P-BASCOE, are included in this study to perform an overall inter-comparison between the existing data sets. We compared the data sets in terms of climatology and seasonality, investigated the tropopause height used for the construction of each data set and related biases, and finally evaluated long-term TrOC trends and drift with respect to ozonesondes. The overall goal of the study is to assess the consistency between the data sets and explore possible strategies to reconcile the differences between them. Despite uncertainties associated with the limb-nadir residual methodology and large biases between the mean values of the considered data sets, we show an overall agreement of TrOC morphology. 
We demonstrate that the average drift with respect to ground-based observations is close to zero and that long-term trends in specific regions can be consistently detected, for instance, the positive trend of up to 1.5 DU per decade observed over Southeast Asia.
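The limb-nadir residual idea at the heart of these data sets reduces to a subtraction: the tropospheric ozone column is the nadir total column minus the limb-derived (or modelled) stratospheric column above the tropopause. A minimal sketch with hypothetical layer values:

```python
import numpy as np

def tropospheric_column(total_col, strat_profile, pressure, p_tropopause):
    """Limb-nadir residual: TrOC = nadir total column minus the
    stratospheric column integrated above the tropopause.

    strat_profile holds partial columns (DU) per pressure layer,
    pressure the layer-centre pressures (hPa); both are toy inputs.
    """
    strat_col = strat_profile[pressure < p_tropopause].sum()
    return total_col - strat_col

pressure = np.array([50.0, 100.0, 150.0, 300.0, 500.0, 850.0])  # hPa
partial  = np.array([120.0, 95.0, 55.0, 12.0, 10.0, 8.0])       # DU per layer
troc = tropospheric_column(total_col=300.0, strat_profile=partial,
                           pressure=pressure, p_tropopause=200.0)
```

The residual (here 300 - 270 = 30 DU) also shows why the method is sensitive to the tropopause definition: moving `p_tropopause` by one layer changes the result by that layer's full partial column.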

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Tropospheric Ozone from CCD and CSA: Data extension and harmonization from TROPOMI to SCIAMACHY

Authors: Kai-Uwe Eichmann, Swathi M. Satheesan, Dr. Mark Weber
Affiliations: Institute Of Environmental Physicss
The algorithms of the convective cloud differential method CCD/CHORA (Cloud Height adjusted Ozone Reference Algorithm) and of CSA/CHOVA (Cloud Height Ozone Variation Algorithm) are based on the method developed by Ziemke et al. (1998, 2001). They retrieve tropical tropospheric column ozone TCO [DU] and the ozone volume mixing ratio [ppbv], respectively. This work summarizes the extension of the algorithms from TROPOMI and the GOME-2s to SCIAMACHY, OMI, and GOME, and the harmonization of the datasets. The nadir-viewing TROPOMI spectrometer aboard the S5p satellite, launched in October 2017, offers both high spatial resolution and daily coverage of the Earth. More than six years of GODFIT total ozone and OCRA/ROCINN CRB (cloud reflecting boundary) cloud fraction and height operational level 2 data (versions ≥ 2.4) are available, which are combined to retrieve tropospheric ozone. The instruments GOME-2 A, B, and C also provide total ozone and cloud retrieval (version ≥ 4.8) data, but on a coarser grid, for the period 2007 to 2023. The CHORA algorithm has been optimized for the TROPOMI and GOME-2-type instruments. The ACCO (Above Cloud Column Ozone) is calculated in the Pacific sector. In a post-processing step, it is interpolated and smoothed in time/latitude space to reduce data gaps and scatter in the daily ACCO(latitude) 1D fields. The upper tropospheric ozone volume mixing ratio TTO [ppbv] is retrieved with the cloud slicing method CSA/CHOVA by regression analysis of ACCO and CP (Cloud Pressure) pairs. Monthly mean volume mixing ratios are calculated in the Pacific sector, using the above cloud column ozone (ACCO) at the 270 hPa pressure level. Daily total ozone is averaged in a small grid box with a latitude/longitude resolution of 0.5° x 0.5° (1° x 1° for GOME-2) to minimize errors from stratospheric ozone spatial variation. All datasets have been successfully validated using SHADOZ ozonesonde profiles.
Low CHORA biases (TROPOMI ~11%, GOME-2s < 6%) and a dispersion of ~6 DU are found. The CHOVA/TROPOMI bias is about -4% with 11 ppbv dispersion. The temporal sampling of the TROPOMI data is one day, owing to the large number of daily measurements, and three days for the GOME-2 instruments. The data retrieval for SCIAMACHY, OMI, and GOME is currently work in progress. Here we present results on the time evolution of tropospheric ozone for the 4+ sensors and on the harmonization of the datasets for both retrieval methods. Part of this work was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) via the TROPO3-MIDLAT project. The work on TROPOMI/S5P geophysical products is funded by ESA and national contributions from the Netherlands, Germany, Belgium, and Finland. We thank the NASA/GSFC SHADOZ team for providing ozone sonde data.
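The cloud-slicing regression at the heart of CSA/CHOVA can be illustrated in a few lines. This is a simplified sketch, not the operational code: the function name and the synthetic data are ours, and the DU/hPa-to-ppbv conversion constant (~1.27e3) follows the Ziemke et al. cloud-slicing formalism.

```python
import numpy as np

def cloud_slicing_vmr(acco_du, cloud_pressure_hpa):
    """Illustrative cloud slicing: regress above-cloud column ozone (ACCO, DU)
    on cloud pressure (hPa); the slope times ~1.27e3 gives the mean ozone
    volume mixing ratio (ppbv) of the layer sampled by the varying cloud tops."""
    slope, _intercept = np.polyfit(cloud_pressure_hpa, acco_du, 1)
    return 1.27e3 * slope  # DU/hPa -> ppbv

# Synthetic pairs consistent with a 60 ppbv layer between 200 and 600 hPa
p = np.linspace(200.0, 600.0, 50)
acco = 250.0 + (60.0 / 1.27e3) * (p - 200.0)
vmr = cloud_slicing_vmr(acco, p)
print(round(vmr, 1))  # ~60.0 ppbv
```

In practice the ACCO/CP pairs are noisy, so the retrieval quality depends on having enough cloud-height variability within each grid box and time window.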

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Harmonized Tropospheric Ozone Data Records From Satellites Produced for the Second Tropospheric Ozone Assessment Report: Methodology and Outcomes

Authors: Arno Keppens, Daan Hubert, Oindrila Nath, José Granville, Jean-Christopher Lambert
Affiliations: Royal Belgian Institute for Space Aeronomy
The first Tropospheric Ozone Assessment Report (TOAR) encountered several observational challenges that limited the confidence in estimates of the burden, short-term variability, and long-term changes of ozone in the free troposphere. One of these challenges is the difficulty of interpreting tropospheric observations from space, especially when combining data records from multiple satellites with differences in vertical sensitivity, prior information, resolution and spatial domain. Additional confounding factors are time-varying biases and the lack of harmonization of geophysical quantities, units, and definitions of the tropospheric top level. Altogether, these factors reduced the confidence in the observed distributions and trends of tropospheric ozone, impeding firm assessments relevant for policy and science. These challenges motivated the Committee on Earth Observation Satellites (CEOS) to foster a coordinated response on improving assessments of tropospheric ozone measured from space. Here, we report on work and resulting harmonized datasets that contribute to this CEOS activity, as well as to the ongoing second phase of the TOAR assessment. Our primary objective is to harmonize the vertical perspective of different ozone data records from satellites, using the Copernicus Atmosphere Monitoring Service Re-Analysis (CAMSRA) as a transfer standard. A first class of products is obtained through an inversion of spectral measurements by nadir-viewing sounders into a vertical ozone profile. We illustrate several approaches to harmonize the differing profile retrievals for the GOME-2, IASI, OMI and TROPOMI sensors, making use of prior information and vertical averaging kernels. A second class of tropospheric ozone products is obtained through subtraction of the stratospheric component from total column retrievals. We present how all products, from both classes, can be harmonized to a common tropospheric top level.
The effect of all harmonization approaches on tropospheric ozone assessments, both in terms of global distributions and long-term changes, is discussed. We additionally anchor the satellite records to monthly gridded ozonesonde data obtained from the TOAR HEGIFTOM (Harmonization and Evaluation of Ground-based Instruments for Free Tropospheric Ozone Measurements) working group, both before and after harmonization. This provides a view on whether the tropospheric ozone column harmonization yields a better agreement with reference data as well. The presented harmonization methodology is currently under review for the Tropospheric Ozone Assessment Report Phase II (TOAR-II) Community Special Issue (ACP/AMT/BG/GMD inter-journal SI, https://acp.copernicus.org/articles/special_issue1256.html). The harmonized datasets are planned to become available for download through the Belgian BRAIN-be 2.0 TAPIOWCA project (long-Term Assessment, Proxies and Indicators of Ozone and Water vapour changes affecting Climate and Air quality, https://tapiowca.aeronomie.be/).
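A common building block in this kind of vertical harmonization is smoothing a reference profile with a retrieval's averaging kernel and prior. The sketch below shows the generic Rodgers-style formalism only; it is not the specific code used in this work, and the function and variable names are ours.

```python
import numpy as np

def smooth_with_kernel(x_ref, x_prior, A):
    """Map a high-resolution reference profile x_ref onto the vertical
    perspective of a retrieval with prior x_prior and averaging-kernel
    matrix A:  x_hat = x_a + A (x_ref - x_a)."""
    x_ref = np.asarray(x_ref, dtype=float)
    x_prior = np.asarray(x_prior, dtype=float)
    return x_prior + np.asarray(A, dtype=float) @ (x_ref - x_prior)

# Limiting cases on a toy 3-level ozone profile (ppbv):
x_ref = np.array([40.0, 60.0, 120.0])
x_a = np.array([50.0, 50.0, 100.0])
perfect = smooth_with_kernel(x_ref, x_a, np.eye(3))       # identity kernel -> x_ref
no_info = smooth_with_kernel(x_ref, x_a, np.zeros((3, 3)))  # zero kernel -> prior
print(perfect, no_info)
```

Applying each sounder's kernel to a common transfer standard (here, CAMSRA profiles) is what makes records with different vertical sensitivities comparable.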

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: D.01.03 - POSTER - Synergies between ESA DTE Programme and DestinE Ecosystem

The content of this session shows the potential of dynamic collaborations between ESA DTE Programme activities and the opportunities provided by the DestinE Platform. The session includes presentations about the capabilities available on the DestinE Platform and the framework defined to grow the ecosystem of services through onboarding opportunities for ESA and non-ESA activities. It also includes presentations on the pre-operational innovative services and applications developed under ESA DTE activities (such as the Digital Twin Components) and their synergies with the DestinE Platform.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: DestinE Platform – Collaborative Endpoint for AI Tenancies

Authors: Sebastien Tetaud
Affiliations: ESA
This paper introduces a collaborative, cloud-based environment designed to support the efficient management and utilization of Earth Observation (EO) and DestinE Digital Twin data. The environment offers a workspace that deploys tailored virtual machines with the necessary computational resources (CPU/GPU), facilitating advanced data processing, modeling, and AI-driven analytics. It includes a private Model and Dataset Registry, enabling seamless access to and sharing of datasets and AI models, along with a DestinE Python library for easy integration with the platform’s services. The environment also offers AI-focused educational content and a community space to promote collaboration and best practices within the EO domain. By empowering users with state-of-the-art tools, the platform fosters innovation in Earth System Modeling and enhances the application of AI in EO research and operations. The presentation discusses the environment, its integration capabilities, and its role in enabling secure and efficient collaboration across various user groups.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: DestinE Sea Ice Decision Enhancement (DESIDE): A Destination Earth Use Case

Authors: David Arthurs
Affiliations: Polar View
Ships operating in the polar regions encounter hazards that present elevated levels of risk and more severe consequences when accidents occur. The DESIDE project is utilizing Destination Earth system capabilities and data to provide comprehensive sea ice and related information for policy and operational decision makers in the polar regions. Benefits to polar operations and society include:
1. More accurate information that supports strategic and tactical decision-making for enhanced safety of life and property.
2. More efficient route optimization that minimizes ship emissions for pollution reduction.
3. Better forecasts that help policymakers protect environmentally sensitive areas affected by changing polar conditions.
The DESIDE project is:
- Aggregating diverse information sources to provide common products across jurisdictional boundaries.
- Producing new forecast products to improve decision-making by users.
- Customizing delivery of products to different user communities based on their needs.
The drivers for the project are:
- Regulatory Compliance: Delivering short and medium-term forecasts of ice, meteorological, and ocean conditions, meeting the requirements of the IMO Polar Code.
- Climate Change Effects: Providing long-term forecasts on changing ice and other conditions, enabling planning and policy development for the fishing, tourism, research, and oil and gas industries.
DESIDE is demonstrating the added value of the DestinE system in supporting policy and decision making at three levels within the context of polar operations:
- Execution support: Supporting ships needing to avoid or navigate through sea ice.
- Planning support: Supporting ship operators in planning polar voyages, guided by the information requirements of the IMO Polar Code.
- Strategy and policy support: Supporting organizations and policy analysts wanting to assess the impact of climate change on future decisions regarding polar operations.
Workflow:
- Data Ingestion: Collect past, current, and forecasted information on sea ice, snow thickness, icebergs, ocean currents and waves, wind, temperature, visibility, and Sentinel-1 imagery from DESP/DestinE.
- Data Processing, Modeling, and Analysis: Use models, machine learning, and algorithms to process data for different user communities.
- Information Product Generation: Create short, medium, and long-term sea ice charts, risk profiles, and route optimization suggestions for better decision-making.
- Information Dissemination: Through decision support platforms.
Decision support is provided in three ways to meet the different needs and levels of sophistication of the user groups:
- IcySea: Tactical decision support for ships operating in polar regions.
- Polar TEP: Research collaboration platform for the private, academic, and public sectors.
- Polar Dashboard: Strategic decision support for policy analysts and residents.
The DESIDE team consists of Polar View, EOX, Drift+Noise Polar Services, the Norwegian Meteorological Institute, the Finnish Meteorological Institute, and the Danish Meteorological Institute.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Destination Renewable Energy: Renewable Energy Forecasting on DestinE platform using Digital Twin data

Authors: Rizos-Theodoros Chadoulis, Charalampos Kontoes, Theodora Papadopoulou, Stelios Kazadzis, George Koutalieris, Christos Stathopoulos, Platon Patlakas, Angelos Georgakis, Kyriakoula Papachristopoulou, Thanassis Drivas, Nikolaos S. Bartsotas, Symeon Symeonidis, Vasileios Perifanis, Athanasios Koumparos, David Casalieri, Vasileios Sinnis
Affiliations: National Observatory of Athens (NOA), Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS), BEYOND Center of Earth Observation Research and Satellite Remote Sensing, ENORA INNOVATION, Weather & Marine Engineering Technologies P.C., Quest Energy, PMOD-WRC
Renewable energy systems like solar and wind inherently depend on weather and climate conditions. As the world confronts climate change and the imperative to reduce greenhouse gas emissions, accurate forecasting, standardized forecasting models and protocols, and the transferability of these models become critical for efficiently operating and integrating renewable energy sources into electricity grids. The Destination Renewable Energy (DRE) project, a Use Case within the European Space Agency's Destination Earth (DestinE) platform, addresses these challenges by providing the Hybrid Renewable Energy Forecasting System (HYREF), a hybrid (solar and wind) application for renewable energy forecasting at different time scales. HYREF leverages the DestinE Platform's extensive, high-quality global data catalogue, which includes outputs from high-resolution numerical weather prediction models, Weather-induced Extremes Digital Twin forecasts, and Data Lake resources such as Copernicus and ERA5 reanalysis data. By incorporating end-user-provided historical and real-time energy production data, HYREF enables precise forecasting for specific locations and energy infrastructures. The system combines numerical models with satellite-based Earth observation data to provide detailed information on solar and wind availability, covering spatial scales from individual rooftops to regional and national levels. The HYREF system is designed to be flexible, scalable, and user-driven, evolving through continuous interactions and feedback from end users and market stakeholders. Emphasis has been placed on user interface and user experience design to ensure that the application is not only functional but also intuitive and accessible. The system incorporates a user authentication interface that integrates with the DestinE Platform's Identity and Access Management component, providing secure access control and differentiating roles such as Production Site Manager and Weather Modelling Scientist.
By providing precise and efficient forecasts for solar and wind power production, the HYREF system enables the combination of different renewable energy sources to ensure a steady energy supply. This assists policymakers, energy producers, and other stakeholders in optimizing resource allocation, improving energy efficiency, and formulating strategies aligned with global green and digital transformation objectives such as the Paris Agreement, the United Nations Sustainable Development Goals (SDGs), and the European Green Deal. Suitable for a wide range of users—from private rooftop owners to large-scale industrial facilities and national grid operators—HYREF maximizes the DestinE Platform's capabilities by synergistically using data and models. Leveraging DestinE's robust data infrastructure, which offers access to diverse, high-quality environmental data globally and high-performance computing capabilities, HYREF improves forecast accuracy, adapts to specific regional characteristics, enables what-if scenario testing to understand the impacts of different environmental conditions on renewable energy production, and significantly enhances its scalability. These advancements contribute directly to achieving international sustainability goals by facilitating the transition to clean energy sources and supporting measures to increase energy efficiency. In a nutshell, the DRE project represents a significant advancement in renewable energy forecasting. By leveraging cutting-edge technology and collaborative platforms, it addresses the pressing challenges of climate change and the global energy transition. HYREF provides actionable and meaningful information, serving as a vital tool for policymakers, energy producers, and other stakeholders. This not only supports the global shift toward sustainable energy solutions but also aligns with international efforts to achieve a greener and more resilient future.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Generating a Digital Twin with CARS, a scalable open-source Multiview Stereo framework

Authors: Yoann Steux, David Youssefi, Loïc Dumas, Mathis Roux, Marian Rassat, Cédric Traizet, Tommy Calendini
Affiliations: Cs-group, CNES
CARS is a CNES open-source 3D reconstruction software developed as part of the Constellation Optique 3D (CO3D) mission. CARS stands out from other multi-view stereo methods due to its highly parallelizable design, capable of handling large volumes of data on an HPC cluster or a personal machine. It uses high-resolution images such as Pleiades and Spot imagery. This innovative pipeline applies advanced image processing techniques to generate precise 3D models of the Earth's surface. Being extensible, CARS can also facilitate the creation of digital twins, offering the possibility to visualize and interact with 3D models in virtual environments. This extension supports a range of physical simulations, including flood modeling and heat island analysis, which are valuable for urban planning, disaster management, and environmental monitoring. The flexible nature of the CARS framework unlocks new opportunities to apply satellite data across various domains, providing enhanced decision-making tools through realistic, dynamic digital models of the physical world.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Development of a General-Purpose Multi-Scale 3D Synthetic Scene Generator for Simulation and Analysis

Authors: Yves Govaerts, Dr Vincent Leroy, Mr Nicolae Marton
Affiliations: Rayference
Combining ground and satellite observations is crucial for validating space-based quantitative data, as these observations offer complementary information. Remote sensing measurements, whether ground-based or satellite-derived, are inherently influenced by both atmospheric properties and surface reflectance. However, ground-based up-looking observations are highly sensitive to atmospheric aerosol properties and only marginally influenced by surface reflectance, whereas down-looking satellite observations often exhibit a stronger sensitivity to the surface. Additionally, these different types of data are often collected at varying spatial scales, making direct comparisons challenging and necessitating the use of potentially unreliable upscaling approaches. Radiative transfer models are essential for providing a theoretical basis to interpret both space- and ground-based data. Typically, these models operate under simplified assumptions of a homogeneous atmosphere and surface, which limits their ability to accurately account for radiative processes occurring across different spatial and temporal scales. This highlights the need for more sophisticated approaches that consider radiative processes occurring at different scales for improved data validation and understanding. To address these limitations and understand the impact of surface heterogeneities on CalVal activities or for the design of new missions, Rayference is developing a general-purpose multi-scale 3D synthetic scene generator. This tool is designed to create customizable, detailed 3D scenes that can be tailored to various spatial scales, from micro-scale surface details to macro-scale landscapes. 
The generator supports the representation of detailed vegetation structure, water bodies, artificial surfaces, clouds, … By supporting the assignment of optical properties and integration within Eradiate, our open-source 3D radiative transfer model, it enables the simulation of ground and satellite observations in a radiatively consistent framework. Such a generator is essential for advancing calibration and validation activities that require realistic simulations of complex environments. Its modular nature ensures that it can be adapted to diverse use cases, from future mission preparation to advanced scientific research. The outcome is a powerful, flexible platform that enhances the capacity to build detailed 3D scenes. Practical examples of applications of this synthetic scene generator will be shown, combining the simulation of ground and satellite observations. For that purpose, synthetic scenes corresponding to different land cover types are generated, and satellite images with different characteristics are simulated. As the characteristics of the scenes are completely defined, these synthetic images can be used to benchmark retrieval algorithms. In conclusion, this synthetic scene generator could contribute to a 3D Radiative Digital Twin Earth Component dedicated to physically-based, realistic modelling of the solar radiation reflected by the Earth as seen from space and from the ground. It will support our understanding of the integration of multi-scale information, allow the generation of big datasets of realistic satellite images for training AI-enabled algorithms such as machine learning techniques, and enable the verification of our understanding of the radiative Earth through direct comparison between simulated satellite images and actual observations at various temporal and spatial scales. This development is funded by the ESA 3DREAMS project.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: A.02.02 - POSTER - Terrestrial and Freshwater Biodiversity

Preserving the integrity and health of natural ecosystems and the biodiversity they host is crucial not only for the vital services they provide to sustain human well-being, but also because natural ecosystems with a high degree of integrity and diversity tend to exhibit elevated levels of productivity and resilience. The importance of safeguarding biodiversity is increasingly recognised in many Multilateral Environmental Agreements (MEAs), which all place great emphasis on the sustainable management, restoration and protection of natural ecosystems.

The pivotal role of ecosystems in maintaining ecological balance and supporting human well-being is a unifying theme in MEAs. Noting that, despite ongoing efforts, biodiversity is deteriorating worldwide and that this decline is projected to continue under business-as-usual scenarios, the Parties to the Convention on Biological Diversity (CBD) adopted the Kunming-Montreal Global Biodiversity Framework (GBF) at the 15th Conference of the Parties in December 2022. The GBF represents the most ambitious and transformative agenda to halt biodiversity loss by 2030 and allow for the recovery of natural ecosystems, ensuring that by 2050 all the world’s ecosystems are restored, resilient, and adequately protected. In Europe, the EU Biodiversity Strategy for 2030 aims to put Europe’s biodiversity on the path to recovery by 2030 by addressing the main drivers of biodiversity loss.

The emergence of government-funded satellite missions with open and free data policies and long-term continuity of observations, such as the Sentinel missions of the European Copernicus Programme and the US Landsat programme, offers an unprecedented ensemble of satellite observations which, together with very high resolution sensors from commercial vendors, in-situ monitoring systems and field campaigns, enables the development of satellite-based biodiversity monitoring systems. The combined use of different sensors opens pathways for a more effective and comprehensive use of Earth Observations in the functional and structural characterisation of ecosystems and their components (including species and genetic diversity).

In this series of biodiversity sessions, we will present and discuss the recent scientific advances in the development of EO applications for the monitoring of the status of and changes to terrestrial and freshwater ecosystems, and their relevance for biodiversity monitoring, and ecosystem restoration and conservation. The development of RS-enabled Essential Biodiversity Variables (EBVs) for standardised global and European biodiversity assessment will also be addressed.

A separate LPS25 session on "Marine Ecosystems" is also organised under the Theme “1. Earth Science Frontiers - 08 Ocean, Including Marine Biodiversity”.

Topics of interest mainly include (not limited to):
•Characterisation of the change patterns in terrestrial and freshwater biodiversity.
•Integration of field and/or modeled data with remote sensing to better characterize, detect changes to, and/or predict future biodiversity in dynamic and disturbed environments on land and in the water.
•Use of Earth Observation for the characterisation of ecosystem functional and structural diversity, including the retrieval of ecosystem functional traits, (e.g., physiological traits describing the biochemical properties of vegetation) and morphological traits related to structural diversity.
•Sensing ecosystem function at diel scale (e.g. using geostationary satellites and exploiting multiple individual overpasses in a day from low Earth orbiters and/or paired instruments, complemented by subdaily ground-based observations).
•Assessment of the impacts of the main drivers of changes (i.e., land use change, pollution, climate change, invasive alien species and exploitation of natural resources) on terrestrial and freshwater ecosystems and the biodiversity they host.
•Understanding of climate-biodiversity interactions, including the impact of climate change on biodiversity and the capacity of species to adapt.
•Understanding of the evolutionary changes of biodiversity and better predictive capabilities on biodiversity trajectories.
•Understanding of the ecological processes of ecosystem degradation and restoration.
•Multi-sensor approaches to biodiversity monitoring (e.g. multi-sensor retrievals of ecosystem structural and functional traits).
•Validation of biodiversity-relevant EO products (with uncertainty estimation).
•Algorithm development for RS-enabled Essential Biodiversity Variables (EBVs) on terrestrial and freshwater ecosystems.
•Linking EO with crowdsourcing information for biodiversity monitoring.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: The Green(ing) Backbone: Spatiotemporal Vegetation Productivity Trends in the Carpathian Mountains

Authors: Daria Svidzinska, Karin Mora, David Montero, Oleh Prylutskyi, Oleg Seliverstov, Dr Volker Radeloff, Miguel Mahecha
Affiliations: Leipzig University, Falz-Fein Biosphere Reserve “Askania Nova”, V.N. Karazin Kharkiv National University, University of Wisconsin-Madison
The Carpathian Mountains, often referred to as the green backbone of Eastern Europe, are a hotspot of biodiversity and ecosystem services. With mountains warming faster than lowland regions, vegetation productivity in this region is expected to increase. Moreover, land use changes, such as reduced grazing pressure, are contributing to the transformation of vegetation cover. However, empirical analyses of these shifts in the Carpathians are still missing. Remote sensing observations are essential in addressing this gap. This study aims to leverage remote sensing advances to analyse spatiotemporal vegetation productivity trends in the Carpathian Mountains across the past 41 years. Specifically, we seek to answer the following questions: (1) How widespread is the greening signal in the Carpathians? (2) Are the greening trends associated with land cover classes? (3) Do the greening trends vary with altitude? (4) Do the greening trends change over time? To this end, we use all Landsat (satellites 4 to 9) images available in the Google Earth Engine from June to September over the period 1984 to 2024 at a resolution of 30 m for areas above 1,300 m. We thus focus on subalpine and alpine vegetation belts. We apply statistical corrections to account for variations in bandwidths across different Landsat sensors and harmonise comparable bands. To assess greening, we employ the Mann-Kendall test for trend with a correction for temporal autocorrelation in time series. The Theil-Sen’s (TS) slope estimator quantifies the direction and magnitude of change over time. Additionally, the Kendall rank correlation coefficient and two-sided p-value assess the strength and significance of the association between variables. We define greening as the increase in the yearly Normalised Difference Vegetation Index (NDVI) values derived from Landsat imagery, which are associated with statistically significant TS slope values and confirmed by strong to moderate Kendall coefficients. 
This study provides a comprehensive spatiotemporal assessment of greening for one of the largest mountain ranges in Europe with the highest possible level of spatial detail and temporal extent. It thus offers a route to better understand the changes in mountain environments and to further investigate their drivers.
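The per-pixel trend test described above can be sketched in plain Python. This is a simplified illustration only: it omits the autocorrelation correction the authors apply, and the function names and synthetic series are ours.

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(t, y):
    """Theil-Sen estimator: median of all pairwise slopes.
    Robust measure of trend magnitude (e.g. NDVI units per year)."""
    return median((y[j] - y[i]) / (t[j] - t[i])
                  for i, j in combinations(range(len(t)), 2))

def mann_kendall_s(y):
    """Mann-Kendall S statistic: sum of signs of all pairwise differences.
    Strongly positive S indicates a monotonic increase (greening)."""
    sign = lambda d: (d > 0) - (d < 0)
    return sum(sign(y[j] - y[i]) for i, j in combinations(range(len(y)), 2))

# Synthetic yearly summer NDVI with a weak upward (greening) trend
years = list(range(1984, 1994))
ndvi = [0.50, 0.51, 0.50, 0.52, 0.53, 0.52, 0.54, 0.55, 0.54, 0.56]
print(theil_sen_slope(years, ndvi))  # positive slope (NDVI per year)
print(mann_kendall_s(ndvi))          # positive S
```

In the study design, such statistics are computed for every 30 m pixel above 1,300 m, and only slopes confirmed by significant Kendall correlations are counted as greening.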

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Estimating the Fraction of Green Vegetation Cover of Coastal Dunes Using Very High Resolution Imagery and Sentinel-2 in Southern Spain

Authors: Eva Romero-Chaves, Emilia Guisado-Pintado, Víctor F. Rodríguez-Galiano, Diego López-Nieta
Affiliations: University of Seville
Coastal dunes play a key role in coastal response, acting as an essential sediment reserve for beaches, especially during intense storms and spring tides. Vegetation is a key element of coastal dunes, since it promotes dune growth and stabilization while acting as the first line of defense against flooding and erosion, and it also facilitates beach recovery. Understanding the dynamics of dune vegetation is therefore crucial for effectively promoting sustainable management of coastal areas and reducing risk, particularly in highly populated environments. However, monitoring coastal dune vegetation requires not only high-resolution satellite images, given the spatial distribution of dunes, but also multitemporal vegetation products to analyse seasonal to annual changes in vegetation coverage. The Vegetation Cover Fraction (FCover) is defined as the proportion of the ground surface covered by green vegetation observed from a nadir perspective. This parameter is a crucial tool to differentiate vegetation from soil in energy balance processes. Derived from structural canopy variables such as the Leaf Area Index (LAI), FCover is largely independent of illumination geometry, making it a robust alternative to traditional vegetation indices for green vegetation monitoring. Furthermore, it is highly consistent across satellite image resolutions owing to its quasi-linear relationship with reflectances. FCover focuses exclusively on green vegetation, excluding other land cover types, which enhances its ability to monitor active vegetation. Although other FCover products exist in Copernicus Land, they are generated at moderate resolution (300 m) using Sentinel-3 data, which is not suitable for the detailed monitoring that dune vegetation dynamics require.
A multi-platform methodology for retrieving the fraction of green vegetation cover (FCover) is tested in three coastal dune systems in southern Spain: Cabopino on the Mediterranean coast, and El Rompido and Punta Malandar on the Atlantic coast of Andalusia. The sites were chosen as representative of the Atlantic and Mediterranean coasts, with their variable geographical and climatic conditions. The Atlantic coast is characterized by higher humidity levels and a more temperate climate, which promote greater vegetation growth and dynamic aeolian dune processes. In contrast, the Mediterranean coast features a drier and warmer climate, resulting in less developed dune systems where vegetation is adapted to semi-arid conditions. The data subset comprised Sentinel-2 and Very High Resolution (VHR) images such as Pleiades, Superview, Worldview and Spot (dating from 2017 to 2021), with spatial resolutions of 10 m and 2/4 m, respectively. Sentinel-2 Surface Reflectance (S2_SR) products (spatial resolution of 10 m) were selected, using Google Earth Engine, for the dates closest to the available VHR images. The VHR images were obtained from the National Geographic Institute of Spain (IGN). The percentage of a Sentinel-2 pixel covered by green vegetation (FCover) was computed by applying a threshold of 0.3 on the NDVI (Normalized Difference Vegetation Index) in collocated VHR imagery. The statistical distribution of FCover values was taken into account to avoid over-representation of very low and very high values in the models. Various machine learning algorithms were evaluated: Random Forest (RF), Neural Networks (NN), Support Vector Machines (SVM), Partial Least Squares Regression (PLSR), and Linear Regression (LR). A set of Sentinel-derived variables was used for training the models, including the NDVI, NDWI, NDSDI, NDESI and EVI indices and the raw bands (B2, B3, B4, B5, B6, B7, B8, B11 and B12).
Although the NDVI showed the strongest Kendall correlation with the calculated FCover, all variables were used for the prediction. Preliminary results of the linear regression between VHR-based FCover and Sentinel-2 derived variables showed a fair agreement, with an R² of 0.57 and an RMSE of 21.62%, but with notable dispersion, evidenced by the over-representation of extreme values: a single FCover value can be associated with a wide spectrum of NDVI values. Furthermore, density analysis of the FCover data indicates that, in general, high FCover values (100%) tend to correspond to high NDVI values (0.6-0.8), while low FCover values (0-20%) are mainly clustered in low NDVI ranges (0.1-0.3). The spatial distribution of vegetation cover (FCover) in the dune systems of Cabopino, Punta Malandar and El Rompido revealed differentiated patterns in the FCover-NDVI relationship according to local characteristics. In Cabopino and Punta Malandar, high FCover values (100%) correlate with high NDVI (mean 0.63-0.70), indicating an optimal fit of the model in areas of dense and homogeneous vegetation. In El Rompido, on the other hand, greater variability in the FCover-NDVI relationship is observed, with high NDVI values corresponding to low FCover percentages, suggesting more vigorous vegetation but with less coverage. In general, areas with low vegetation cover (FCover 4%) show lower NDVI values, though with variable patterns between areas. For instance, in Cabopino a stable and homogeneous NDVI (0.19-0.24) is associated with low FCover, while Punta Malandar shows slightly higher values (0.20-0.29) with more spatial variability. Finally, in El Rompido, the NDVI for low FCover ranges between 0.19 and 0.33, suggesting abrupt transitions between vegetated and non-vegetated areas. In this research, a new approach for estimating the fraction of green vegetation cover of coastal dunes is presented.
Although results are representative of some dense vegetation areas across the study cases, the rescaling of the VHR images and the representation of FCover as discrete values hinder an accurate representation of vegetation coverage in sparsely vegetated areas and for some ranges of NDVI values. Next steps could explore the use of Radiative Transfer Models to generate a Look-Up Table, which would allow training FCover models at 10 m resolution. In addition, a spectral mixture analysis could be incorporated as a strategy to extract new variables, and the incorporation of phenological trajectories derived from the HR-VPP products as model inputs could be explored.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: A Coupled In-Situ/Remote Sensing Dataset for Macrophyte Research in Small, Temperate Lakes

Authors: Frederike Kroth, Bastian Robran, Dr. Katja Kuhwald, Dr. Thomas Schneider, Natascha Oppelt
Affiliations: Kiel University, Technical University of Munich
As primary producers and structural habitat builders, freshwater macrophytes serve as foundational species within aquatic environments. The distribution of macrophytes in lake ecosystems is primarily influenced by a range of lake-specific abiotic driving factors, including water depth and light availability, water chemistry and temperature, or substrate characteristics and littoral slope. Different macrophyte species exhibit distinct preferences for these abiotic conditions, and at the same time respond uniquely to shifts within their habitats, making them long-term indicators for the ecological status of lakes. Globally, anthropogenic pressures have caused a notable decline in macrophyte diversity, as land use changes, climate shifts, and invasive species disrupt habitats and species composition. To monitor these changes, the European Water Framework Directive mandates macrophyte mapping every three years for lakes over 50 hectares, leaving smaller lakes understudied. While in-situ mapping is essential, it poses logistical challenges, is time-intensive and costly. Remote sensing techniques provide a promising supplement but rely on accurate in-situ data for calibration and validation. To address this, we present a comprehensive coupled in-situ/remote sensing dataset designed to bridge this gap and enable efficient, scalable monitoring of macrophytes in small, temperate lakes. Collected as part of the MARTINI project, the dataset integrates high-resolution multispectral aerial imagery, WorldView-2/3 data, and Sentinel-2 MSI time series with extensive in-situ measurements from 19 interconnected lakes in the Osterseen area (southern Germany). Over two growing seasons (2023–2024), we systematically captured macrophyte spectral signatures, biometric data, and habitat characteristics, alongside abiotic drivers such as water chemistry, temperature, and light conditions. Abiotic factors were systematically monitored over the vegetation periods. 
Water temperature was logged continuously at macrophyte growth sites, and dissolved oxygen, pH, conductivity, and Secchi depths were measured bi-weekly to monthly at multiple depths. Water samples were collected for analysis of chlorophyll-a, nutrients, and humic substances. Detailed macrophyte mapping, including biometric parameters and species composition, was conducted by divers and a hydroacoustic device along all lake shores. We will illustrate the potential of the dataset to advance remote sensing applications in aquatic ecology. This will include the development of algorithms for species-specific macrophyte mapping, habitat monitoring, and predictive modelling of macrophyte responses to environmental change.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Species Distribution Modeling with Graph Neural Networks

Authors: Emilia Arens, Dr. Damien Robert, Professor Jan Dirk Wegner
Affiliations: EcoVision Lab, Department of Mathematical Modeling and Machine Learning, University of Zurich
Reliably modeling the distribution of species at a large scale is critical for understanding the drivers of significant biodiversity loss in our rapidly changing climate. Specifically, the task involves predicting the probability that a species will occur in a particular location, given the prevailing environmental conditions. Because field data capturing the true presence and absence of species is costly and limited, opportunistic citizen science data has emerged as a valuable alternative. However, it comes at the cost of lacking information on species absences and introducing strong data biases, such as geographic bias toward urban areas and species bias towards conspicuous individuals. Therefore, species distribution modeling (SDM) has proven to be a challenging task, requiring the learning of highly complex interactions in a scarce data regime, where even the driver of sparsity is ambiguous, as it can be both the true rarity of a species and strong sampling biases. Traditional approaches from the field of ecology often tackle the task from a statistical perspective, fitting a per-species density function around the known occurrences. Thereby, individual species are modeled in isolation, which not only limits taxonomic scalability but also neglects the rich information found in species interactions. As a result, the growing body of literature attempting to solve the SDM task by leveraging neural networks is often motivated by the fact that these models are capable of fitting the individual distributions simultaneously. More precisely, deep learning allows meaningful representations to be learned from raw input data. In the case of SDMs, the currently proposed recipe is to learn a geospatial representation from diverse input variables, e.g. climatic rasters and elevation maps. The representation is learned jointly and is therefore shared by all species.
It then serves as the conditioning variable to predict the likelihood of the presence of a species. Thus, neural networks implicitly express species interactions through the shared feature space. However, most of the proposed deep learning approaches do not yet significantly outperform traditional methods, raising the question of whether the co-modeling potential of deep learning can be exploited in a more concrete and systematic way. Here, we suggest learning and propagating representations between species explicitly using a graph structure, where the nodes represent individual species and the edges express species interactions. This setting not only allows explicit reasoning about species interactions, but also opens the door for the integration of data sources that have received less attention in the SDM literature compared to traditional remote sensing predictors. Along these lines, nodes and edges can be equipped with per-species and inter-species attributes extracted either from the presence-only data or from relevant external sources. In particular, given the sparsity of the data regime, the enriched graph structure should lead to more robust interaction inference, while making the results more interpretable by exploring edge information propagation. We implement the entire framework as a Graph Neural Network (GNN) reasoning over the proposed species graph, which can be flexibly combined with any neural network learning the aforementioned geographic representation from environmental data. Thus, the proposed structure should be seen as an extension of established approaches, allowing the model both to receive rich species interaction data and to reason about this additional information. In doing so, we open the door to multimodal approaches ranging from remote sensing raster data to tabular observational data to graphical interaction data in an end-to-end trainable regime.
We expect that the integration of the GNN branch will lead to more robust performance, especially for rare or heavily undersampled species. By having constant access to large-scale species interactions, we assume that information on well-represented species can support decision making for less represented individuals. Thereby, we are targeting those species that are difficult to model but critical to drawing the right conclusions for conservation planning and biodiversity protection.
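The core mechanism, propagating information between species over an interaction graph so that well-sampled species inform poorly sampled ones, can be illustrated with a minimal sketch (plain NumPy, unweighted mean aggregation; a trained GNN layer would wrap this fixed averaging in learned transformations):

```python
import numpy as np

def propagate(features, adjacency, steps=1):
    """Mean-aggregation message passing over a species graph.

    features : (n_species, d) per-species embeddings
    adjacency: (n_species, n_species) binary interaction matrix
    Each species averages its own embedding with its neighbours',
    the basic operation a GNN layer builds on.
    """
    a = adjacency + np.eye(adjacency.shape[0])  # add self-loops
    a = a / a.sum(axis=1, keepdims=True)        # row-normalize
    out = features
    for _ in range(steps):
        out = a @ out
    return out
```

On a three-species chain graph, information from species 0 reaches species 1 after one step and species 2 only after two, mirroring how interaction structure controls information flow.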
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Employing Earth Observation in Habitat Modelling of Freshwater Macrophytes

Authors: Bastian Robran, Frederike Kroth, Dr. Katja Kuhwald, Dr. Thomas Schneider, Natascha Oppelt
Affiliations: Kiel University, Technical University of Munich
Freshwater macrophytes play a crucial role in freshwater ecosystems by enhancing habitat complexity, influencing trophic webs, cycling nutrients and providing multiple ecosystem services. However, macrophyte diversity is in global decline due to habitat degradation driven by anthropogenic activities and climate change. To address this threat, habitat suitability models (HSMs) have emerged as valuable tools for investigating the relationship between macrophytes and their changing environment. Nevertheless, few HSMs leverage remote sensing data to enhance predictive accuracy and scalability in freshwater settings. Our study addresses this critical gap by developing an HSM supported by Earth observation data specifically designed for small lakes. Small lakes comprise the majority of freshwater lakes but are often underrepresented in ecological studies due to their scale and monitoring challenges. We tailored our model for a series of small lakes in Southern Germany. Our analysis identified key environmental factors, including distance to groundwater inflow, lake depth, littoral slope, and availability of photosynthetically active radiation (PAR), as significant predictors of macrophyte occurrence. A distinctive feature of our HSM is the integration of Sentinel-2 MSI data to derive PAR availability at macrophyte growing depths. Unlike traditional, point-based methods for measuring light availability, Sentinel-2 derived PAR availability allows for spatially continuous data that can be scaled across numerous lakes. Additionally, by incorporating a time series of MSI-based PAR data, we have introduced a temporal dimension to the model, thereby facilitating the monitoring and prediction of changes in macrophyte habitats over time. This represents a significant advancement for understanding and managing dynamic freshwater environments. 
The modelled habitat suitability scores showed a robust correlation (R = 0.908) with actual macrophyte distributions, indicating that the approach effectively captures the conditions that influence macrophyte presence. Our approach allows for more nuanced, data-driven assessments of habitat conditions that can inform conservation efforts. Demonstrating the efficacy of GIS- and remote sensing-based HSMs, this study provides a foundation for potential applications in ecological conservation and resource management, particularly in smaller freshwater ecosystems where traditional monitoring is often limited.
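Depth-resolved PAR availability of the kind derived here rests on Beer-Lambert exponential light attenuation in the water column. A minimal, illustrative sketch (the abstract does not specify how the diffuse attenuation coefficient Kd is estimated from Sentinel-2 MSI data, so Kd is simply taken as an input):

```python
import numpy as np

def par_at_depth(par_surface, kd, depth):
    """Beer-Lambert attenuation of photosynthetically active radiation.

    par_surface : PAR just below the water surface
    kd          : diffuse attenuation coefficient for PAR (1/m)
    depth       : depth (m) at which PAR is evaluated
    """
    return par_surface * np.exp(-kd * depth)
```

With Kd = ln(2) per metre, available PAR halves with every metre of depth, which is why littoral slope and lake depth act as such strong co-predictors of macrophyte occurrence.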
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Assessment of GEDI vegetation structure metrics in African savannas: Towards multi-sensor integration with Copernicus Sentinel data

Authors: Marco Wolsza, Dr. Jussi Baade, Andrew B. Davies, Sandra MacFadyen, Jenia Singh, Tercia Strydom, Prof. Dr. Christiane Schmullius
Affiliations: Department for Earth Observation, Friedrich Schiller University Jena, Department of Geography, Friedrich Schiller University Jena, Department of Organismic and Evolutionary Biology, Harvard University, Mathematical Biosciences Lab, Stellenbosch University, National Institute for Theoretical and Computational Sciences (NITheCS), Scientific Services, South African National Parks (SANParks)
Savanna ecosystems are characterized by their unique coexistence of herbaceous (grasses) and woody (trees and shrubs) vegetation. They play a crucial role in the global carbon cycle, in maintaining biodiversity, and in supporting livelihoods, yet they are increasingly sensitive to global climate change impacts. Furthermore, their conservation has often received considerably less attention than that of forest ecosystems, although they cover approximately a fifth of Earth's land surface and store a substantial portion of the total aboveground carbon on the African continent. Accurately characterizing the structural diversity of savanna woody vegetation is important to better understand ecosystem functioning and resilience, as well as changes in biodiversity patterns. Consistent monitoring using Earth Observation data remains challenging due to the pronounced spatio-temporal heterogeneity inherent to these ecosystems. Advances in Earth Observation, particularly the combination of different active remote sensing technologies, offer new opportunities to consistently monitor vegetation structural metrics (VSMs) describing variations in horizontal and vertical dimensions. While most recent studies have focused on canopy top height, other metrics such as foliage height diversity (FHD) are important to quantify structural diversity, which in turn is linked to biodiversity. Spaceborne lidar data from the Global Ecosystem Dynamics Investigation (GEDI) offers footprint-level measurements of VSMs, initially optimized for retrieval in dense forests. This study focuses on the assessment of GEDI VSMs using high-resolution airborne lidar data acquired for several spatially distributed study areas in the savanna ecosystem of Kruger National Park, South Africa. As previous studies have highlighted, quality filtering of GEDI data is important, but approaches differ significantly. We take into account recent findings that are relevant for areas of short-stature, discontinuous vegetation.
These are further complemented with our workflow developed to address savanna-specific challenges, incorporating MODIS Burned Area data and Copernicus Sentinel-2 time series. We present an assessment of GEDI VSM accuracy across varying configurations and identify key factors affecting retrieval quality in savanna environments. This research contributes to the development of a reproducible framework that integrates Copernicus Sentinel-1 Synthetic Aperture Radar time series data for wall-to-wall mapping of savanna woody vegetation. These insights will advance our understanding of multi-sensor approaches for monitoring structural diversity in savanna ecosystems, supporting improved biodiversity assessment and conservation planning.
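A typical first pass at such footprint quality filtering might look like the following (an illustrative pandas sketch, not the study's workflow; the column names mirror the standard GEDI L2 quality fields, and the stricter beam-sensitivity threshold for short, discontinuous vegetation is an assumption drawn from the considerations above):

```python
import pandas as pd

def filter_gedi(df, min_sensitivity=0.98):
    """Keep only GEDI footprints passing basic quality screening.

    quality_flag == 1 : footprint meets the product's quality criteria
    degrade_flag == 0 : no degraded geolocation/pointing state
    sensitivity       : beam sensitivity; savanna work often needs a
                        stricter threshold than the forest-oriented 0.9
    """
    keep = (
        (df["quality_flag"] == 1)
        & (df["degrade_flag"] == 0)
        & (df["sensitivity"] >= min_sensitivity)
    )
    return df[keep]
```

Additional masks, such as the MODIS Burned Area and Sentinel-2 screening mentioned above, would then be applied on top of this baseline filter.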
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Large-scale monitoring of inland freshwater hydrologic parameters to study the functioning of aquatic environments that are being modified by climate change: Example of the Garonne River basin

Authors: Jean-Paul Gachelin, Eliot Lesnard-Evangelista, Mr Thibaut Ferrer, Mr Jean-Pierre Rebillard
Affiliations: vorteX-io, Agence de l'Eau Adour Garonne
Climate change is one of the most pressing environmental issues of our time, with significant implications across ecosystems, including inland freshwater systems. As global temperatures rise due to greenhouse gas emissions, inland water bodies such as rivers, lakes, and wetlands are experiencing noticeable warming, with an average temperature rise of 0.5 degrees per decade. This increase in water temperature is causing widespread changes in aquatic ecosystems, altering species distribution, biological processes, and ecosystem resilience: disruption of thermal stratification and mixing patterns; altered species distribution and biodiversity loss; enhanced eutrophication and algal blooms; and reduced oxygen levels and metabolic stress. At the same time, climate change is increasing the frequency of extreme events such as floods and droughts. The Adour Garonne Water Agency (France) has decided to launch a research and innovation project to study the functioning of aquatic environments that are being modified by climate change, in terms of both hydrology (flooding, low water) and quality (water temperature, turbidity, etc.), considering the two aspects to be intimately linked. To carry out this experiment, which aims to provide a better understanding of the impact of climate change on the basin, it is crucial to deploy a significant number of instruments to test the effectiveness of the system. To date, only the vorteX-io device allows simultaneous acquisition of real-time quantitative and qualitative measurements. For this reason, the Agency has commissioned vorteX-io to provide water temperature and hydrological metrics from 150 vorteX-io micro-stations on the Garonne River basin as part of this project.
The vorteX-io micro-station is an in-situ device derived from space technology: it has been designed as an Earth observation nanosatellite that does not fly, but is instead installed above rivers to acquire in-situ data with onboard remote sensing instruments, including lidar, multispectral and thermal infrared sensors, and GNSS. Water parameters are transferred in real time through GSM or SpaceIOT networks. Innovative and intelligent, lightweight, robust, and plug-and-play, the micro-stations are equipped with unprecedented features that allow them to measure water temperature remotely and in real time, and to provide contextual images and flood metrics (water levels, flow, rain rates). This instrument provides in-situ datasets for calibration, validation and accuracy assessment of EO projects in space hydrology, e.g. in the ESA st3art project dedicated to the calibration and validation of Sentinel-3. The long-term vision is to cover river basins in Europe with an in-situ network, to be used at large scale as an Earth observation in-situ component, either for monitoring water quality parameters or for monitoring extreme hazards such as floods and droughts.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Land Cover Mapping in Conservation Areas: Machine Learning or Deep Learning Image Classification?

Authors: Uvini Senanayake, Dr. Scott Mitchell, Koreen Millard
Affiliations: Carleton University
Habitat loss, fragmentation, and land use change severely threaten biodiversity and ecosystem services. Area-based conservation is a globally recognized conservation approach focused on protecting specific geographic areas to preserve biodiversity, ecosystems, and natural resources by reducing the loss of habitats, maintaining population levels of species and providing a functioning environment for humans (Ferraro & Hanauer, 2015; Watson et al., 2014). Area-based conservation also contributes towards mitigating the impacts of climate change. Therefore, a growing need arises for area-based conservation as a nature-based solution for enhancing biodiversity conservation and mitigating the impacts of climate change. The increasing impacts of climate change and anthropogenic pressures pose significant threats to conservation areas, which require regular monitoring. Activities occurring in the surrounding and adjacent lands also pose risks to the health of the landscapes within conservation areas. Therefore, land cover mapping of conservation areas is crucial as it provides information about the distribution of the different habitats and ecosystems. It is also essential in identifying changes occurring in the conservation areas, and it serves as baseline data for ecological models such as species distribution models. Remote sensing has become a powerful tool in conservation biology, offering innovative ways to monitor, assess and manage conservation areas. Land cover is used as a measure of structural diversity, one of the three main categories of remote sensing-based essential biodiversity variables (RS-EBVs) that have been introduced to support regular monitoring of biodiversity from space (Pettorelli et al. 2016; Reddy et al. 2021). Open-access data and databases (e.g. Google Earth Engine Data Catalogue) have expanded the accessibility of remotely sensed data for researchers – including in the conservation field. 
Conservation area managers have the potential to use such tools to develop land cover maps for their conservation areas based on their requirements. Advancements in artificial intelligence (AI) have significantly enhanced remote sensing image classification techniques with machine learning (ML) and deep learning (DL) methods. Choosing a method for land cover mapping in conservation areas can be overwhelming for conservation area managers due to the variety of ML and DL image classification techniques available. Additionally, managers often face various challenges and constraints, such as financial limitations and limited availability of field data. This research aims to improve conservation area managers' awareness of utilizing ML and DL image classification techniques for land cover mapping in conservation areas. By increasing awareness of these advanced technologies, the study aims to equip conservation professionals with the tools to monitor and manage conservation areas more effectively. A critical review was conducted using the Web of Science database to examine the application of ML and DL techniques for land cover mapping in conservation areas utilizing medium and high-resolution remote sensing imagery. Based on the identified ML and DL classification methods, an analysis was conducted to evaluate the strengths and weaknesses of these methods and determine the most effective approaches to land cover mapping in conservation areas. Among various classification algorithms, Random Forest, Support Vector Machine, Artificial Neural Networks, and Convolutional Neural Networks are frequently used for land cover mapping in conservation areas. U-Net and Vision transformers are two emerging DL image classification techniques for land cover mapping. It is evident that the more complex and advanced models, such as DL models, have the potential to produce more accurate and efficient land cover maps. 
However, selecting advanced algorithms for land cover mapping is not always practical, especially for conservation area mapping. Factors such as the availability of remote sensing data, ground-truthing data for training and testing algorithms, the area of the conservation lands and the associated costs should be considered before selecting a suitable image classification algorithm. Often, ML algorithms are more suited to medium-resolution remote sensing imagery. Although both ML and DL algorithms can be used with high-spatial-resolution data, DL algorithms might be more suitable. While both can be used with multidimensional remote sensing data, DL algorithms are better suited to handling multimodal data. The availability of training data is a significant determinant in selecting an ML or DL algorithm for land cover mapping. Field surveys are labour-intensive and time-consuming. Accessibility to some habitats is often limited, resulting in small, biased field samples. Manually generating training samples is also a time-intensive, expensive, and subjective task that requires expert knowledge. The performance of DL classification methods is closely related to the amount of training data, and the same is true for transformer-based methods, which is the major drawback of using DL classification methods for LC mapping in conservation areas. ML classification methods, by contrast, are easy to train and less sensitive to the quality of training data. DL classification methods require significant computational resources, such as GPUs, for training the models, making them less feasible for applications with limited access to high-performance hardware. Comparatively, ML methods are less computationally intensive and can be run on cloud computing platforms like Google Earth Engine. However, compared to ML models, DL models offer greater transferability.
To conclude, the choice of an ML or DL image classification technique should be based on the challenges and constraints that conservation area managers face. DL models can be used when the complexity of the input features is high, but they require large, labelled datasets and significant computational resources, which increases the associated costs. Thus, selecting an ML image classification algorithm is more suitable when the training data is small or when computational resources are limited.
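The cost argument for classical ML can be made concrete with a deliberately tiny classifier: a nearest-centroid model (a minimal NumPy sketch standing in for heavier methods such as Random Forest; the spectral values are synthetic and purely illustrative) trains in milliseconds on a laptop, with no GPU involved.

```python
import numpy as np

class NearestCentroid:
    """Each land-cover class is summarized by its mean spectrum;
    pixels are assigned to the closest centroid. A stand-in for
    heavier classical ML classifiers, to show how cheap training
    on modest hardware can be."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack(
            [X[y == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, X):
        # distance of every pixel to every class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

rng = np.random.default_rng(0)
# synthetic 4-band "pixels" for two well-separated classes
veg = rng.normal([0.05, 0.08, 0.06, 0.45], 0.02, size=(200, 4))
soil = rng.normal([0.20, 0.25, 0.30, 0.35], 0.02, size=(200, 4))
X = np.vstack([veg, soil])
y = np.array([0] * 200 + [1] * 200)
clf = NearestCentroid().fit(X, y)
acc = (clf.predict(X) == y).mean()
```

On well-separated classes like these, even this trivial model is near-perfect; DL earns its extra cost only when class boundaries are genuinely complex.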
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Detection and Biclass Differentiation of Landscape Elements using Sentinel-2

Authors: Manuel Reese, Prof. Dr. Björn Waske
Affiliations: Universität Osnabrück
Landscape elements such as tree rows, hedgerows, grass strips and flower strips play a crucial role in supporting biodiversity and providing habitats for wildlife within finely structured agricultural landscapes. These features often serve as last refuges for various species in a landscape dominated by high-intensity agricultural practices. Preserving these landscape components is essential not only for the conservation of local flora and fauna, but also for maintaining the ecosystem services they provide, including pollination, soil stabilization and pest regulation. This study aims to develop a robust method for identifying landscape elements and assigning them to one of two main classes using Sentinel-2 data: one dominated by grass and herbaceous plants, the other characterized by woody plants, mainly shrubs and trees. For this purpose, we leverage deep learning techniques and compare their efficiency. Specifically, we implement and compare two prominent classifiers: a U-Net architecture tailored for semantic segmentation and a transformer network renowned for its attention mechanisms. The U-Net architecture is inspired by Strnad et al. (2023), who pursue a similar goal with aerial imagery. The general workflow of the transformer-based classifier is inspired by the findings of Bazi et al. (2021). The implementation is realized in TensorFlow's Python API (Martín Abadi et al. 2015). Training and test data were manually annotated on very high-resolution aerial images (R-G-B-NIR) provided free of charge by the state of Lower Saxony. The study area is a randomly selected 21 square kilometer area southeast of the northern German town of Löningen in the Oldenburg Münsterland, a region characterized by intensive agriculture. Further "non-landscape element" classes (forest, impervious surfaces, permanent grassland, arable land and water bodies) are added from EU CAP data and CORINE data.
Sentinel-2 images from the vegetation period of 2023 were collected and cloud-masked using s2cloudless (Braaten 2023). By aggregating and compositing images across different temporal resolutions and composition methods, we explored the feature representation of woody and herbaceous elements, testing the different image composition methods in terms of their effect on model performance. We used the GEE Python API for image pre-processing (i.e., image collection, cloud masking, and image composition). In order to make the validation as independent as possible, we compiled an additional dataset that contains only landscape elements from CAP funding applications (Niedersachsen 2023). Our methodology involves assessing the impact of various temporal resolutions and image composition techniques. We experiment with approaches such as seasonal compositing: a median and a maximum-value composite, each over the entire time span, as well as a bimonthly stack. The idea is to generate input data with varying spectral characteristics and temporal coverage. Through our experiments, we aim to determine how these factors influence the accuracy of landscape element detection models, ultimately guiding the selection of optimal data processing workflows for this task. The performance of the U-Net and transformer models is evaluated using precision, recall, and F1-score metrics, as well as the time required for training and classification, which provide insights into their relative strengths and weaknesses in detecting the specified landscape elements. Preliminary results indicate that while the U-Net architecture demonstrates significant efficacy in pixel-level predictions, the transformer network excels in contextual understanding due to its ability to capture long-range dependencies within the image data. This research thus aims to contribute to the field of biodiversity monitoring within intensively used agricultural landscapes.
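The compositing variants reduce, per pixel, a cloud-masked time series to one value per method; a minimal, illustrative NumPy sketch (NaN marks cloud-masked observations; the GEE implementation works on image collections rather than arrays):

```python
import numpy as np

def composites(stack):
    """Whole-period composites from a cloud-masked image time series.

    stack : (t, h, w) array with np.nan for masked (cloudy) pixels.
    Returns the median and the maximum-value composite, the two
    whole-time-span variants compared in the study.
    """
    return np.nanmedian(stack, axis=0), np.nanmax(stack, axis=0)
```

A bimonthly stack would apply the same reduction within two-month windows instead of over the full period, trading spectral stability for temporal detail.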
By refining our detection methods, we evaluate the potential for near real-time monitoring of important habitats, which could significantly enhance agricultural decision-making and biodiversity initiatives. In conclusion, this study lays the groundwork for enhanced precision in detecting landscape elements within finely structured agricultural environments. By comparing the performance of U-Net and transformer architectures and evaluating the effects of image data characteristics, we provide a comprehensive framework that can be adapted for various applications in remote sensing and monitoring tasks. Future work will focus on including a qualitative assessment of the individual landscape elements in order to obtain a meaningful proxy metric for the state of biodiversity in agricultural landscapes.
References:
Bazi, Yakoub et al. (2021). "Vision Transformers for Remote Sensing Image Classification". In: Remote Sensing 13.3. ISSN: 2072-4292. DOI: 10.3390/rs13030516. URL: https://www.mdpi.com/2072-4292/13/3/516.
Braaten, Justin (2023). Sentinel-2 Cloud Masking with s2cloudless.
Martín Abadi et al. (2015). TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Software available from tensorflow.org.
Niedersachsen, ML SLA (2023). Landschaftselemente in Niedersachsen, Bremen und Hamburg.
Strnad, Damjan et al. (May 2023). "Detection and Monitoring of Woody Vegetation Landscape Features Using Periodic Aerial Photography". In: Remote Sensing 15, p. 2766. DOI: 10.3390/rs15112766.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Enhancing Biodiversity Assessment with Super-Resolution Techniques: A Sentinel-2-Based Approach for High-Resolution Habitat and Ecosystem Monitoring

Authors: Ramona Cennamo, Prof. Giovanni
Affiliations: University of Rome "Sapienza"
Introduction: Biodiversity monitoring is crucial for understanding ecosystem dynamics, tracking species distributions, and informing conservation efforts. However, the spatial resolution of freely available satellite imagery, such as Copernicus Sentinel-2, often limits the accuracy of fine-scale habitat mapping and species assessments. This research investigates the potential of super-resolution (SR) techniques to enhance Sentinel-2 imagery, thereby improving its spatial resolution and enabling more precise biodiversity assessments. By leveraging advanced machine learning algorithms, this study aims to refine land-cover classifications, identify biodiversity hotspots, and monitor habitat fragmentation. Ultimately, this research aims to bridge the gap between medium-resolution satellite data and the high spatial detail required for ecological analysis, providing a cost-effective tool for biodiversity conservation monitoring. Background: Measuring biodiversity has become a cornerstone of ecosystem health assessments, as evidenced by its integration into the frameworks of important global initiatives such as the Group on Earth Observations Biodiversity Observation Network (GEO BON), the International Geosphere-Biosphere Programme (IGBP), the World Climate Research Programme (WCRP), and the Committee on Earth Observation Satellites (CEOS) Biodiversity task. Traditional biodiversity monitoring methods often involve time-consuming and resource-intensive field surveys with in-situ data collection. While satellite remote sensing certainly offers a valuable alternative for large-scale monitoring, the spatial resolution of freely available imagery like Sentinel-2 (10-20 m) can be insufficient for capturing fine-scale habitat features crucial for many species. Super-resolution techniques, which reconstruct high-resolution images from low-resolution counterparts, offer a promising solution, aided by the wider availability of GPUs at reasonable cost.
Recent advances in machine learning, particularly deep learning, have led to significant improvements in SR algorithms, enabling the generation of sharper and more detailed images. Specific Objectives: This research aims to achieve the following specific objectives: - Evaluate the performance of different super-resolution techniques applied to Sentinel-2 imagery for biodiversity applications: This involves comparing the effectiveness of various SR algorithms, including both traditional methods (e.g., bicubic interpolation) and deep learning-based approaches (e.g., convolutional neural networks). The evaluation will consider factors such as image quality, processing time, and computational resources required. Additional effort is devoted to transitioning the algorithms to cloud environments. - Develop and test a robust workflow for integrating super-resolution techniques directly with traditional remote sensing methods: This objective focuses on creating a streamlined process for incorporating SR into existing remote sensing workflows and tested frameworks. This will involve methods for pre-processing Sentinel-2 data (cloud screening, geometric refinement, etc.), applying SR algorithms, and integrating the enhanced imagery with traditional analysis techniques such as image classification and object-based image analysis. - Assess the impact of enhanced spatial resolution on biodiversity metrics: This objective investigates how the improved spatial detail from SR-enhanced imagery affects the accuracy and precision of biodiversity assessments. This will involve quantifying changes in land-cover classification accuracy, as well as evaluating the impact on consolidated diversity metrics such as Shannon's diversity index, Rao's Q, the Berger-Parker index and Hill numbers. 
- Validate the obtained results with the help of very high-resolution (VHR) datasets: To ensure the accuracy and reliability of the SR-enhanced imagery and subsequent biodiversity assessments, the results will be validated using VHR datasets (e.g., aerial imagery, LiDAR data). This validation will involve comparing the spatial patterns and biodiversity metrics derived from the enhanced Sentinel-2 imagery with those obtained from the VHR data. Data and Method: Before applying super-resolution techniques, we classify the original Sentinel-2 imagery dataset over predefined AOIs using standard methods (e.g., Random Forest, Support Vector Machines). We then assess the accuracy using a confusion matrix, overall accuracy, producer's/user's accuracy, and the Kappa coefficient. This establishes our baseline performance. Secondly, we apply our chosen super-resolution technique(s) to the Sentinel-2 imagery, improving on the native resolution. Then, we classify the enhanced imagery using the same methods as the baseline. Finally, we compare the accuracy metrics from the enhanced-imagery classification to the baseline. This quantitatively demonstrates how super-resolution improves the accuracy of land-cover mapping. In doing so, we pay close attention to land-cover classes that are particularly important for biodiversity (e.g., specific forest types, wetlands, grasslands, etc.). First results show an increased heterogeneity provided by SR images, which should lead to the detection of more diverse habitats within what was previously classified as a single homogeneous area by the native Sentinel-2 data. In addition, in some cases super-resolution reveals previously undetected habitat fragmentation that could impact species. Conclusions and outlook: First results of this work demonstrate the potential of super-resolution techniques to significantly enhance the spatial resolution of Sentinel-2 imagery for improved biodiversity monitoring. 
By integrating advanced machine learning algorithms with traditional remote sensing workflows, this study has shown how this cost-effective method, utilizing freely available data and open-source tools, makes advanced biodiversity monitoring more accessible to a wider range of stakeholders, including researchers, conservation practitioners, and policymakers. References: Schmitt, M., Hughes, L. H., & Zhu, X. X. (2021). Super-resolution for multispectral remote sensing imagery: A review. Remote Sensing of Environment, 252, 112114. Skidmore, A. K., et al. (2021). Remote sensing of biodiversity: progress, challenges and opportunities. Ecological Informatics, 61, 101219. D., Nelson, T., ... & Hilker, T. (2018). Lidar sampling for large-area forest characterization: A review. Remote Sensing of Environment, 215, 19-46. Turner, W., Spector, S., Gardiner, N., Fladeland, M., Sterling, E., & Steininger, M. (2003). Remote sensing for biodiversity science and conservation. Trends in Ecology & Evolution, 18(6), 306-314. Rocchini, D., et al. (2016). Integrating Sentinel-2 and airborne laser scanning data for mapping plant species richness in a heterogeneous Mediterranean forest. Remote Sensing, 8(7), 572. Rocchini, D., Marcantonio, M., & Ricotta, C. (2017). Measuring Rao's Q diversity index from remote sensing: An open source solution. Ecological Indicators, 72, 778-784. Shannon, C. (1948). A mathematical theory of communication. Bell Syst. Tech. J., 27, 379-423.
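The diversity indices cited above can be computed directly from a classified map. The following numpy-only sketch (the function names, toy class maps, and between-class dissimilarities are ours, purely illustrative) shows Shannon's index and Rao's Q registering heterogeneity that appears only after super-resolution:

```python
import numpy as np

def shannon_index(class_map):
    """Shannon's diversity H' = -sum(p_i ln p_i) over land-cover class proportions."""
    _, counts = np.unique(class_map, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def raos_q(class_map, dist):
    """Rao's Q = sum_ij p_i p_j d_ij, with d_ij a between-class dissimilarity."""
    classes, counts = np.unique(class_map, return_counts=True)
    p = counts / counts.sum()
    d = np.array([[dist[a][b] for b in classes] for a in classes], float)
    return float(p @ d @ p)

# A window that is one homogeneous class at native resolution...
native = np.array([[1, 1], [1, 1]])
# ...but three classes after super-resolution and reclassification.
enhanced = np.array([[1, 1, 1, 2],
                     [1, 1, 2, 2],
                     [1, 1, 1, 1],
                     [3, 1, 1, 1]])
dist = {1: {1: 0, 2: 1, 3: 2}, 2: {1: 1, 2: 0, 3: 1}, 3: {1: 2, 2: 1, 3: 0}}
print(shannon_index(native), raos_q(native, dist))      # both 0: no heterogeneity
print(shannon_index(enhanced), raos_q(enhanced, dist))  # both > 0
```

Both indices are zero for the homogeneous window and positive once the SR map resolves additional classes, which is exactly the effect the baseline-versus-enhanced comparison above is designed to quantify.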
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Mapping Tree Invasions in an Afromontane Ecosystem With Multidecadal Landsat and Sentinel-2 Data

Authors: Heather Cox, Dr. Patrick Hostert, Dr. Volker Radeloff
Affiliations: University Of Wisconsin-Madison, Humboldt Universität zu Berlin
Exotic trees associated with commercial forestry can pose serious ecological problems to native flora as they alter soil nutrient dynamics, deplete ground water supplies, accelerate erosion and modify wildfire behaviour. Pines and wattles (Pinus and Acacia spp.) are particularly problematic, as the very traits that make them good candidates for timber and pulp production - i.e., rapid growth rates and tolerance of a wide variety of soil and climatic conditions - also increase their invasiveness. Remote sensing has great potential as a tool for detecting and monitoring tree invasions. In this study, we aimed to map, quantify and assess change in the distribution of pines and wattles between 1990 and 2024 in an Afromontane study area, the Nyanga mountains of Zimbabwe. First, we calculated seasonal spectral-temporal metrics for all high-quality Landsat and Sentinel-2 images available since 1990. Second, we applied a random forest classifier to imagery collected between January 2022 and December 2024 to map the current distribution of pines and wattles. Finally, we used temporal segmentation of the combined Landsat-Sentinel time series to estimate the timing of invasion for each invaded pixel. Our results show that pines and wattles have spread well beyond plantation boundaries, now occupying over 3000 km² (more than 5% of the study area). The species are particularly concentrated along footpaths and logging roads, but have also invaded ecologically sensitive areas such as Nyanga National Park. The wattles appear to be more aggressive invaders than the pines in this study area, and in some cases, wattles have even invaded pine plantations. While most current plantations were established by 1990, the extent of invasion outside plantation boundaries has expanded substantially since then. We demonstrate the effectiveness of a two-step approach for monitoring invasive plants, where initial detection of invaded areas is followed by estimation of invasion timing. 
This contrasts with other methods that either i) rely on a limited subset of available images to detect invasion at discrete time points, or ii) apply change detection algorithms to spectral time series for all pixels and therefore struggle to differentiate between invasive plant spread and other vegetation greening trends such as native shrub encroachment. By examining the spread of wattles and pines across a large, heterogeneous landscape over 34 years, our study provides unprecedentedly detailed information that could improve models of invasion risk and cast new light on underlying ecological processes.
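The seasonal spectral-temporal metrics used as classifier inputs can be sketched as follows; the NDVI values, acquisition dates, and season boundaries here are invented for illustration, not the study's actual configuration:

```python
import numpy as np

# Hypothetical per-pixel NDVI time series with acquisition day-of-year.
ndvi = np.array([0.31, 0.45, 0.62, 0.71, 0.68, 0.52, 0.38])
doy  = np.array([  15,   80,  140,  190,  230,  290,  350])

def season_of(d):
    # Simple meteorological seasons by day-of-year (an assumption for this sketch).
    if d < 60 or d >= 335:
        return "DJF"
    if d < 152:
        return "MAM"
    if d < 244:
        return "JJA"
    return "SON"

# Per-season spectral-temporal metrics from all clear observations of one pixel.
metrics = {}
for s in ("DJF", "MAM", "JJA", "SON"):
    vals = ndvi[[season_of(d) == s for d in doy]]
    if vals.size:
        metrics[s] = {"p25": np.percentile(vals, 25),
                      "median": np.median(vals),
                      "p75": np.percentile(vals, 75)}
print(metrics)
```

Pooling all high-quality observations into per-season statistics like these, rather than picking individual scenes, is what makes the approach robust to irregular Landsat/Sentinel-2 acquisition gaps.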
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: From Point Clouds to Habitat Use: Insights into Female Roe Deer Resource-Risk Trade-Off

Authors: Johanna Kauffert, Sophie Baur, Alexandra Baumann, Dr Wibke Peters, Prof Dr Annette Menzel
Affiliations: Technical University of Munich, Professorship of Ecoclimatology, Bavarian State Institute of Forestry, Research Unit Wildlife Biology and Management
Satellite remote sensing has been an invaluable tool in wildlife ecology for over four decades, enabling insights into species’ habitats and their corresponding behaviour with products such as land-cover maps and vegetation indices like the Normalized Difference Vegetation Index (NDVI). Recent advances in the availability of very high-resolution remote sensing data—particularly aerial photogrammetry and LiDAR—have made it possible to derive fine-scale habitat structure parameters, especially in forests. These developments offer unprecedented precision in characterizing habitat features critical for species’ ecology and management. As a use case, here we examined the trade-off in habitat use between resource acquisition and risk avoidance in the most abundant ungulate species in Europe, the roe deer (Capreolus capreolus), during its fawning season. We analysed the influence of fine-scale wooded habitat structures, derived from aerial photogrammetry and LiDAR remote sensing products, on the habitat use of female roe deer during the fawning period (April–June). Habitat use was tested using GPS-telemetry data of 32 females with confirmed parturition dates across three years and three study sites in southern Germany, resulting in 45 year-ID datasets. We found that pre-parturition habitat use was more affected by nutritional demands, reflected in increased use of mature stands, while during parturition, habitat use was shaped by high concealment and cover demands. After parturition, females instead displayed risk-avoiding behaviour by using young stands and stands with high canopy surface roughness. Our results not only provide valuable insights into the roe deer’s use of woody structures and possible hiding places of fawns but also demonstrate how fine-scale remote sensing products can enhance the analysis of habitat use at finer resolutions. 
With the emerging ubiquitous availability of aerial imagery and LiDAR, our study showcases the advantages these datasets offer for wildlife ecological research and evidence-based management strategies.
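Canopy surface roughness, one of the structural predictors mentioned above, is commonly derived from a canopy height model (CHM) as a local standard deviation. A minimal sketch, assuming a toy CHM and window size rather than the study's actual processing chain:

```python
import numpy as np

def surface_roughness(chm, win=5):
    """Canopy surface roughness as the local standard deviation of a canopy
    height model (CHM), computed in win x win moving windows (sketch only)."""
    h, w = chm.shape
    out = np.full((h, w), np.nan)  # edges left as NaN
    r = win // 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            out[i, j] = chm[i - r:i + r + 1, j - r:j + r + 1].std()
    return out

# A stand with alternating tall trees and gaps yields high roughness;
# a closed, even-height canopy yields roughness near zero.
chm = np.zeros((9, 9))
chm[::2, ::2] = 20.0
print(np.nanmax(surface_roughness(chm)))
```

A closed-canopy (constant-height) CHM gives zero roughness everywhere, so the metric cleanly separates the structurally heterogeneous stands that the analysis links to post-parturition risk avoidance.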
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Habitat suitability analysis of Asian elephants in Nepal-India transboundary region using machine learning and geospatial data

Authors: Binita Khanal, Dr Tiejun Wang, Dr Ashok Kumar Ram, Dr Olena Dubovyk
Affiliations: Institute of Geography, University of Hamburg, Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Department of National Parks and Wildlife Conservation, Babarmahal
Understanding the suitable habitat distribution of large and conflict-prone animals and how they use their habitat is crucial to preserving biodiversity and maintaining ecosystem integrity. The cross-border regions between Nepal and India are natural habitats for Asian elephants. However, this region has experienced dramatic land cover and land use changes over the last several decades due to human pressure and infrastructure development. Thus, this study mapped suitable habitats for Asian elephants in the Nepal-India transboundary region and analysed the prominent factors influencing their distribution. For this, we modelled the habitat suitability of Asian elephants using geospatial data and an ensemble stacking species distribution modelling approach to establish key factors determining habitat suitability. To identify suitable habitats, this study employed remote sensing-derived bioclimatic variables, five vegetation-related variables, two topographic variables, and proximity to water bodies calculated using GIS techniques as predictor variables for modelling. Three commonly applied machine learning algorithms, viz. boosted regression trees (BRT), random forest (RF), and maximum entropy (MaxEnt), were selected as base learners for ensemble stacking modelling, and their results were fused for final predictions. A total of 163 elephant presence points were collected from different sources and randomly split 70/30 into training and testing sets. Model performance evaluation using the area under the curve (AUC=0.90) and true skill statistic (TSS=0.65) indicates robust performance of the stacking ensemble model. The results showed that 26,679 km², approximately one-third of the total transboundary landscape area, is suitable habitat for Asian elephants in the study area. 
Elevation, precipitation of the driest and wettest months, and temperature of the warmest month were the key variables determining habitat suitability for elephants in this region. Suitable habitats were found to be distributed mainly in lower-elevation vegetated areas. The overall predicted suitable habitat was a mix of forest and non-forest in almost equal proportions, suggesting a high overlap in space and resource use between elephants and humans. The study recommends strengthening transboundary conservation efforts and paying special attention to densely populated human settlements around the protected areas when implementing measures to mitigate the risks of conflict between humans and elephants. The study emphasises the potential distribution of elephant habitat in the transboundary landscape and its implications for spatial planning for long-term biodiversity conservation.
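The reported evaluation metrics are straightforward to reproduce. The sketch below computes the true skill statistic and illustrates score-level fusion of the three base learners with a simple mean; the abstract does not specify the fusion rule, so averaging here is an assumption, and all values are invented:

```python
import numpy as np

def true_skill_statistic(y_true, y_pred):
    """TSS = sensitivity + specificity - 1, for binary presence/absence predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn) + tn / (tn + fp) - 1

# Fuse base-learner suitability scores (hypothetical BRT, RF, MaxEnt outputs
# for four test sites) before thresholding into presence/absence.
brt    = np.array([0.9, 0.2, 0.7, 0.4])
rf     = np.array([0.8, 0.3, 0.6, 0.5])
maxent = np.array([0.7, 0.1, 0.8, 0.3])
fused = np.mean([brt, rf, maxent], axis=0)
pred = (fused >= 0.5).astype(int)
truth = np.array([1, 0, 1, 0])
print(true_skill_statistic(truth, pred))  # 1.0: all four sites correct
```

Unlike overall accuracy, TSS is insensitive to prevalence, which is why it is widely preferred for evaluating species distribution models built from sparse presence data.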
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Time series of Sentinel-1 backscatter and coherence reveal shifts in inundation duration and timing in open and vegetated wetlands

Authors: Stefan Schlaffer, Peter Dorninger, Melina Frießenbichler
Affiliations: GeoSphere Austria, 4D-IT GmbH
In comparison to their extent at the global level, wetlands serve as habitat for a disproportionately large number of plant and animal species. They provide a multitude of other ecosystem services including water retention, which can help mitigate the impacts of floods and droughts, retention of pollutants, provision of food as well as important cultural services. Their ecosystem functions depend not only on their extent and number but also on their inter- and intra-annual dynamics, i.e., inundation duration and timing, as well as on their internal structure and vegetation types. Drainage, damming and other hydraulic constructions affect these inundation dynamics, while climate change exerts further pressure on these vulnerable ecosystems. Efforts to restore degraded wetlands, which are undertaken, e.g., in the framework of the Nature Restoration Law of the European Union, require monitoring to quantify their effect on ecosystem functions. Synthetic aperture radar (SAR) systems constitute an optimal means for monitoring changes in inundation characteristics due to their relatively high spatial and temporal resolution and their sensitivity to the occurrence of surface water, even beneath vegetation given the right combination of water level, vegetation density and radar wavelength. We aimed at characterising the inter- and intra-annual dynamics in surface water extent at the shallow, subsaline Lake Neusiedl, which is located in the Pannonian lowlands of Eastern Austria. More than half of the lake surface is covered by one of the largest continuous reed belts in Europe, dominated by Phragmites australis. The study area also includes a number of soda lakes, which fall dry intermittently and are the only water bodies of this type found in Central Europe. The area is a Ramsar site of international importance due to its ecological significance, especially for bird populations. 
We analysed time series of backscatter and coherence with temporal baselines between 6 and 24 days acquired by Sentinel-1 over Lake Neusiedl between 2015 and 2024. We interpreted the time series with the help of a comprehensive set of reference data, including in-situ water levels, meteorological data, a LiDAR-based digital surface model and high-resolution optical imagery. Water surfaces were delineated using a Bayesian approach. The retrieval of open water was based on the typical specular backscatter signatures of smooth, open water surfaces; however, it was found to be impacted by wind and ice cover. The latter could partly be mitigated using the cross-polarisation channel of Sentinel-1. The reed belt showed a clear double-bounce signature during spring, caused by the interaction of the radar wave with the water surface and the stems of emergent P. australis vegetation. During a prolonged drought period, which lasted from 2019 to 2022, water extent in Lake Neusiedl and the surrounding soda lakes decreased significantly with respect to pre-drought conditions. Since 2023, water bodies in the region have shown a significant recuperation in terms of their water extent. The results are significant both for monitoring the impacts of prolonged droughts on wetland ecosystems and for assessing the effects of restoration efforts. Future work will include characterising the impact on SAR backscatter and coherence of the heterogeneity of the reed belt in terms of open water and vegetated areas, on the one hand, and of reed structure and age, on the other.
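A Bayesian delineation of open water from SAR backscatter, of the kind described above, can be sketched as a two-class posterior with Gaussian likelihoods; the class means, standard deviations, and prior below are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def p_water(sigma0_db, prior_water=0.3,
            mu_water=-22.0, sd_water=2.0,   # smooth open water: specular, low return
            mu_land=-11.0, sd_land=3.0):    # land/vegetation: brighter return
    """Posterior probability of open water from one sigma0 (dB) observation,
    via Bayes' rule with Gaussian class likelihoods (illustrative parameters)."""
    lw = gaussian_pdf(sigma0_db, mu_water, sd_water) * prior_water
    ll = gaussian_pdf(sigma0_db, mu_land, sd_land) * (1 - prior_water)
    return lw / (lw + ll)

print(p_water(-23.0))  # near 1: very low backscatter, likely water
print(p_water(-10.0))  # near 0: bright return, likely land
```

Wind roughening and ice raise the backscatter of real water surfaces towards the land distribution, which is exactly the confusion the abstract reports and partly mitigates with the cross-polarisation channel.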
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Are hyperspectral vegetation indices based on multi-sensor data fusion better than pure multispectral indices in measuring trait-based functional diversity?

Authors: Nikolina Mileva
Affiliations: European Space Agency
Recent hyperspectral sensors such as EnMAP, DESIS, EMIT, and PRISMA have substantially increased the amount of hyperspectral data available, enabling the use of imaging spectroscopy techniques on a wider scale. However, these sensors cannot yet provide time series long enough to describe important biodiversity patterns related to climate change and other phenomena, which are visible on a decadal timescale. Thus, the synergistic use of multispectral sensors providing long time series and hyperspectral sensors offering better spectral resolution is necessary. In this study, we explore the fusion of hyperspectral data from EnMAP and DESIS with Sentinel-2 to derive a number of vegetation traits commonly used to evaluate functional biodiversity. We focus on chlorophyll, carotenoid and water content, which we then use as inputs for calculating functional richness, divergence and evenness. We perform the same analysis using only multispectral data and evaluate the differences. This analysis will help us assess the added value of hyperspectral data for measuring functional diversity and give us insights into possible limitations stemming from the individual sensor characteristics. For the fusion of multispectral and hyperspectral data, we employ a set of well-known fusion techniques requiring a minimal input of data, as the availability of hyperspectral images is still rather limited. Airborne hyperspectral data from AVIRIS-NG acquired over the Bavarian Forest National Park is used for validation. Preliminary results show that multispectral and hyperspectral sensors have better agreement for lower values of chlorophyll content, while for larger values they tend to diverge (multispectral data showing lower estimates). For the chlorophyll to carotenoid ratio, the hyperspectral estimates are consistently larger than the multispectral ones. 
This study will demonstrate the feasibility of creating simulated hyperspectral time series and will showcase their relevance for biodiversity research. While here we concentrate on specific physiological traits, the methods employed are not sensor- or band-specific and can have a broader field of application.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Environmental plague monitoring: desert locust prediction with artificial intelligence and a stochastic model

Authors: Maximilien Houël, Alessandro Grassi, Kimani Bellotto, Wassim Azami, Komi Mensah Agboka, Dr. Elfatih Abdel-Rahman, Dr. Bonoukpoè Mawuko Sokame, Dr. Tobias Landmann
Affiliations: SISTEMA GmbH, ICIPE
Desert locusts are known as the world’s most destructive migratory pest. A single swarm can travel up to 150 km per day, contain 80 million locusts, and eat the same amount of food per day as 35,000 people. The pest has mid-to-long-term impacts on the economy, quality of life and environmental protection. Climate change is amplifying the occurrence of such pests; in particular, the increase in extreme events such as cyclones is generating ideal conditions for locust breeding. In the context of the European project EO4EU and the European Space Agency (ESA) project IDEAS, a service has been developed in two parts: the first an early warning system to monitor ecosystems suitable for locust breeding, the second an impact assessment simulating the evolution of swarms. The first part aims at predicting favorable breeding grounds for desert locusts seven days in advance by checking the environmental conditions of the previous fifty days. The environmental variables used for the forecast are soil water content, precipitation, and temperature from ERA5-Land (Copernicus Climate). Additionally, NDVI (Normalized Difference Vegetation Index) from MODIS plays a role in the prediction. Locust information for model training came from a presence-only dataset provided by FAO’s Locust Watch. At the current stage, the most effective model is a customized version of Maxent. Maxent is a statistical model widely used by researchers for species distribution modeling (SDM), as it is designed to work with presence-only datasets, a common scenario in this field. Our model keeps Maxent's principles but modifies its internal structure by replacing the linear machine learning model with a GRU (gated recurrent unit). This enables the model to learn complex patterns and better understand the temporal evolution of features. 
Since no locust absence information is available, only two evaluation techniques have proven useful: recall, which reaches 76%, and positively predicted area (the amount of area predicted as locust breeding ground), which is at ~17%. Following the early-stage locust appearance prediction, the second step aims at evaluating the geographic footprint that adult locusts will have within a two-week time frame. In particular, the focus is on forecasting migration patterns, as locusts are able to travel long distances in short periods and explore new areas unpredictably. The maps generated by the first part serve as the primary input for the second part. As they represent the probability of early-stage appearances, they also provide an initial estimate of potential adult locations under specific environmental conditions. The strength of this model lies in its stochastic structure. Specifically, the model simulates an environmentally biased random movement on a 2D lattice, generating batches of diverse potential scenarios. This approach allows the incorporation of complex driving factors for migration and considers the various paths that swarms may take. Climate conditions primarily influence swarm behavior, along with the availability of resources such as vegetation. Specifically, the model uses temperature and wind data from ERA5, as well as the Leaf Area Index (LAI) from ERA5-Land. Collecting these variables is essential, as they not only trigger migration events but also determine the direction and speed of swarm movement. Consequently, the model performs a statistical analysis across all generated scenarios. This enables it to produce output maps that estimate the future locations of swarms and their potential sizes. Predicted results show promising correlation with FAO reports on desert locust activity. 
To arrive at a fully operational tool, validation activities are ongoing in cooperation with desert locust experts to provide ground verification of the prediction tool. Indeed, due to the lack of open-source datasets on desert locusts, ground truth information is needed for validation. The International Centre of Insect Physiology and Ecology (ICIPE) in Nairobi, Kenya, focuses its mission on the study of insect science for sustainable development. Additional independent data were collected in Sudan by the Sudanese Ministry of Agriculture, Department of Crop Protection. These data correspond to 698 validation points collected between 1 January and 21 March 2023. Their support for the tool could enable high-level testing of the models and strengthen its capability to monitor at a larger scale.
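The environmentally biased random movement on a 2D lattice described above can be sketched as follows; the suitability grid, step count, and scenario count are invented for illustration, and the real model additionally incorporates wind, temperature, and LAI:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_swarms(suitability, start, n_scenarios=500, n_steps=14):
    """Environment-biased random walk on a 2D lattice: at each step the swarm
    moves to one of the 4 neighbours with probability proportional to the
    suitability there. Returns the final-position frequency map across scenarios."""
    h, w = suitability.shape
    visits = np.zeros((h, w))
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(n_scenarios):
        r, c = start
        for _ in range(n_steps):
            cand = [(r + dr, c + dc) for dr, dc in moves
                    if 0 <= r + dr < h and 0 <= c + dc < w]
            weights = np.array([suitability[p] for p in cand])
            r, c = cand[rng.choice(len(cand), p=weights / weights.sum())]
        visits[r, c] += 1
    return visits / n_scenarios

suit = np.ones((20, 20))
suit[:, 15:] = 5.0  # a high-suitability band biases movement eastwards
freq = simulate_swarms(suit, start=(10, 2))
print(freq.sum())   # 1.0: one final position per scenario
```

Aggregating the batch of scenarios into a frequency map is the "statistical analysis across all generated scenarios" mentioned above: it turns individually unpredictable walks into a probabilistic footprint of where swarms may end up.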
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Integration of a multi-sensor analysis for the estimation of water quality in Italian lakes

Authors: Mariano Bresciani, Dr Nicola Ghirardi, Alice Fabbretto, Ludovica Panizza, Andrea Pellegrino, Monica Pinardi, Salvatore Mangano, Dr.ssa Claudia Giardino
Affiliations: CNR-IREA, Space It Up project, CNR-IBE, University of Tartu, University of Sapienza, NBFC
Phytoplankton and turbidity dynamics in lakes are complex and difficult to predict due to morphometric complexity, variable wind patterns, intensity of benthic-pelagic coupling, variable light availability and the inherent instability of ecosystems. To gain a better understanding of phytoplankton dynamics and to characterise water quality status in complex aquatic environments, data collected during the same day are needed. Remote sensing is a valuable tool for spatial/temporal analysis of inland water environments. However, the use of a single sensor can be limiting in highly dynamic environments, such as turbid/eutrophic shallow lakes, where wind and temperature significantly affect lake conditions. In this context, the aim of this study within the Space It Up project is to use a combination of hyperspectral and multispectral sensors to understand the intra- and inter-daily dynamics of three Italian lakes (Trasimeno, Varese and Garda) characterised by different optical properties. Across the three lakes under study, in situ instruments are available to provide continuous measurements of either reflectance or water quality throughout the day. Specifically, Lake Trasimeno hosts the WISPstation spectroradiometer, Lake Garda the HYPSTAR spectroradiometer, and Lake Varese a buoy featuring multiparameter probes and, for some periods of the year, the JB-ROX spectroradiometer. These in situ data reveal the higher diurnal variability of phytoplankton in Lakes Trasimeno and Varese, which is not evident in Lake Garda. The dataset includes more than 30 different dates between 2019 and 2024 and a total of 160 remotely sensed images from 14 different sensors. Specifically, six hyperspectral sensors (PRISMA, DESIS, EnMAP, EMIT, PACE, and AVIRIS) and eight multispectral sensors (Landsat-8/9, Sentinel-2A/B, Sentinel-3A/B, MODIS-Aqua/Terra, VIIRS-SNPP/JPSS) were used. 
Level-2 images were downloaded and used as inputs to the BOMBER bio-optical model (Bio-Optical Model Based tool for Estimating water quality and bottom properties from Remote sensing images) to generate maps of water quality parameters (total suspended organic and inorganic matter and chlorophyll-a). To produce these maps, the BOMBER model was parameterized using the inherent optical properties (IOPs) specific to the three lakes. Additionally, for Lake Trasimeno and Lake Varese, phycocyanin maps were also produced using a mixture density network (MDN) for sensors with a suitable spectral configuration. A comparison was then conducted between the remotely sensed images and the in situ data, evaluating both spectra and concentration levels. For Lake Trasimeno, the spectral analysis showed a strong overall agreement between the remotely sensed images and the WISPStation data (MAPE=28.3%, SA=12.2°); similar results were obtained for Lake Varese. However, for Lake Garda, the agreement was less robust, primarily due to atmospheric correction inaccuracies in the blue spectral region. Preliminary results on the concentrations of water quality parameters confirmed that the multi-sensor analysis was crucial to detect rapid changes in the turbid and productive lakes (Trasimeno and Varese), mainly due to variations in temperature and wind, which would have been impossible to detect with a single-sensor analysis. In particular, during the late summer period, strong daytime growth of phytoplankton (cyanobacteria) emerged, with maximum values recorded in the afternoon; turbidity values were also highly variable throughout the day, strongly influenced by the wind. This study was carried out within the Space It Up project funded by the Italian Space Agency, ASI, and the Ministry of University and Research, MUR, under contract n. 2024-5-E.0 - CUP n. I53D24000060005.
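The spectral agreement metrics quoted above (MAPE and spectral angle, SA) can be computed as follows; the four-band reflectance values are invented for illustration:

```python
import numpy as np

def mape(ref, est):
    """Mean absolute percentage error between two reflectance spectra (%)."""
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    return 100.0 * np.mean(np.abs(est - ref) / np.abs(ref))

def spectral_angle(ref, est):
    """Spectral angle (degrees) between two spectra treated as vectors;
    insensitive to a constant multiplicative offset between the spectra."""
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    cos = np.dot(ref, est) / (np.linalg.norm(ref) * np.linalg.norm(est))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical water-leaving reflectance in four bands (in situ vs. satellite).
insitu = np.array([0.012, 0.025, 0.018, 0.004])
sat    = np.array([0.014, 0.024, 0.020, 0.005])
print(mape(insitu, sat), spectral_angle(insitu, sat))
```

Using the two metrics together is informative: MAPE penalises magnitude offsets (e.g., residual atmospheric correction error) while the spectral angle captures shape mismatches, which matter most for bio-optical retrievals.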
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Impact of Sentinel-2 light extinction data on lake temperature profile simulations in the 1D hydrodynamic General Lake Model

Authors: Najwa Sharaf, Guillaume Morin, Jordi Prats, Pierre-Alain Danis, Gabriel Orabona, Nathalie Reynaud, Thierry Tormos, Jean-Philippe Jenny, Olivia Desgue, Rosalie Bruel
Affiliations: Pôle R&D Ecosystèmes Lacustres (ECLA), OFB-INRAE-USMB, INRAE, Aix Marseille Univ, RECOVER, Team FRESHCO, Magellium, SEGULA Technologies, OFB, DRAS, Service ECOAQUA, Université Savoie Mont-Blanc, INRAE, CARRTEL, OFB, DRAS, Service ECOAQUA
This study deals with the integration of satellite-derived light extinction values into the General Lake Model (GLM) for French lakes. Light extinction, or attenuation, a crucial parameter influencing lake hydrodynamics, is often overlooked in terms of its temporal variability. Commonly, it is assumed to be constant across long-term simulations in lake models such as the one-dimensional deterministic hydrodynamic GLM. However, this approach fails to capture the inherent variability of light extinction, which can fluctuate significantly on seasonal or even shorter timescales. Such oversimplification may lead to inaccuracies in simulating lake thermal dynamics. To address this, we derived light extinction data from Sentinel-2 satellite imagery using a semi-analytical water color algorithm. This dataset was validated against in situ measurements of Secchi disk depth from the French national water quality monitoring network. We compared GLM simulations made using both a constant light extinction value (0.5 m⁻¹) and Sentinel-2-derived values for the period 2015-2020. These dynamic inputs included annual averages, seasonal averages, linearly interpolated time series, and predictions generated using a Generalized Additive Model (GAM). Simulation outputs were evaluated against observed in situ temperature data at the surface, bottom, and along the water column, as well as for thermocline depth. Incorporating Sentinel-2-derived light extinction values generally enhanced model accuracy, yielding lower RMSEs (root mean squared errors) compared to simulations using a constant extinction coefficient. However, some exceptions were noted where performance differences were not statistically significant or did not improve. This study discusses the relative strengths of different approaches for integrating variable light extinction into the GLM, identifying optimal strategies. 
To our knowledge, this study represents the first assessment of integrating Sentinel-2-derived light extinction data into the GLM, demonstrating their value for improving lake simulations and advancing the accuracy of hydrodynamic modeling. These findings underline the potential of coupling satellite remote sensing and models to improve the knowledge of environmental trajectories of lake ecosystems.
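The evaluation described above reduces to comparing RMSEs between model runs with constant and satellite-derived extinction. A minimal sketch of that comparison; all temperature values here are illustrative, not the study's data:

```python
# Illustrative sketch: RMSE comparison of GLM runs with a constant vs a
# Sentinel-2-derived light extinction coefficient. Values are made up.
import numpy as np

def rmse(sim, obs):
    """Root Mean Squared Error between simulated and observed series."""
    sim, obs = np.asarray(sim, dtype=float), np.asarray(obs, dtype=float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

obs          = [18.2, 17.9, 16.5, 15.1]  # observed surface temperature (degC)
run_constant = [19.0, 18.8, 17.6, 16.3]  # run with extinction fixed at 0.5 m^-1
run_dynamic  = [18.4, 18.1, 16.8, 15.4]  # run with satellite-derived extinction

print(rmse(run_constant, obs))
print(rmse(run_dynamic, obs))   # lower RMSE -> better fit to observations
```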
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: A Deep Learning Framework for Large Scale Land Cover Mapping: A Case Study in Ontario, Canada

Authors: Fariba Mohammadimanesh
Affiliations: Natural Resources Canada
Large-scale land cover mapping, also known as semantic segmentation, is crucial for understanding the ecological characteristics of land surfaces and comprehending environmental changes. Frequent updates are necessary to capture dynamic shifts in land use, monitor the impacts of human activities, and ensure that decision-makers have the most current and accurate data for sustainable planning and management. As such, this study addresses the challenge of large-scale land cover mapping using advanced deep learning models. While state-of-the-art deep learning models have shown promising results for several remote sensing applications, their efficiency has yet to be explored for large-scale semantic segmentation tasks. One existing problem is that their successful application depends heavily on the availability of large amounts of training data. To overcome this, we propose a two-stage classification system, combining Random Forest (RF) for initial land cover mapping and MobileUNETR, a lightweight hybrid convolution-transformer model, for refined land cover classification. Using Sentinel-1 and Sentinel-2 data, we produce a land cover map of Ontario at a spatial resolution of 10 m, aligned with the North American Land Change Monitoring System (NALCMS) legend level I, comprising 11 classes (the snow and ice class was removed in our analysis). Our results demonstrate that MobileUNETR outperforms other models, such as UNet and PSPNet, in terms of both accuracy (approaching 85%) and efficiency, highlighting its suitability for large-scale land cover mapping applications. As MobileUNETR is the only evaluated model with both convolutional and transformer blocks, the results confirm the superiority of hybrid models for large-scale semantic segmentation, given their capability to capture both local and global features, which are essential for semantic segmentation of heterogeneous land cover classes with varying sizes and spectral signatures.
This study provides a scalable method for deep learning-based land cover mapping with high potential for national-scale applications.
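The two-stage idea can be illustrated schematically: a per-pixel classifier produces an initial label map, and a second, spatially aware stage refines it. In the sketch below a simple 3×3 majority filter stands in for the refinement stage; the study itself uses MobileUNETR, a hybrid convolution-transformer network, not a filter:

```python
# Schematic sketch of two-stage refinement: a majority (mode) filter stands
# in for the second-stage model, purely to illustrate spatial refinement.
import numpy as np
from collections import Counter

def refine(labels):
    """Smooth a per-pixel label map with a 3x3 majority filter."""
    h, w = labels.shape
    out = labels.copy()
    for i in range(h):
        for j in range(w):
            window = labels[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            out[i, j] = Counter(window.ravel().tolist()).most_common(1)[0][0]
    return out

# Stage-1 (pixel-wise) output with one isolated misclassified pixel
stage1 = np.array([[1, 1, 1],
                   [1, 2, 1],
                   [1, 1, 1]])
print(refine(stage1))  # the isolated class-2 pixel is smoothed away
```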
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Space4Nature: Empowering Nature Recovery With People and Earth Observation Satellite Data

Authors: Dr Ana Andries, Prof Stephen Morse, Prof Richard Murphy, Ms Victoria
Affiliations: University of Surrey
Surrey County faces a critical biodiversity challenge, with nearly 12% of its native wildlife already lost and a large portion of species under threat. In particular, the decline of semi-natural habitats such as heathlands, chalk grasslands, and neutral and acid grasslands has made the county a priority for conservation efforts. Space4Nature responds to the need for habitat restoration and biodiversity monitoring by integrating ecological surveys collected through citizen science with advanced remote sensing technologies and machine learning techniques to monitor and map Surrey’s key habitats. The Space4Nature project focuses on collecting ecological survey data from 1 m² quadrats across Surrey County, involving citizens in the collection of critical biodiversity information such as key species, species abundance, species composition, and environmental characteristics. Since 2022, these surveys have been designed and conducted in collaboration with Surrey Wildlife Trust and Buglife. These citizen-led surveys have provided invaluable ground-level insights that support the calibration and validation of our remote sensing approaches. We used high-resolution PlanetScope imagery, at 3 m spatial resolution with 8 spectral bands, and derived a suite of vegetation indices to characterise habitat health and vegetation structure. Along with topographic parameters, soil data, and other environmental variables, the project applies machine learning (ML) techniques, specifically Random Forest (RF), to predict and map key habitats such as chalk grasslands, heathlands, and other grasslands with high accuracy. The project has also explored supervised classification methods to enhance habitat mapping capabilities. Using Levels 3 and 4 of the UK Habitat Classification (version 2), Space4Nature has successfully classified habitats across Surrey County at a 3 m resolution.
These classifications have been cross-referenced with field data and existing habitat inventories, resulting in a comprehensive and accurate map of the region's biodiversity. The accuracy of our RF model and supervised classification for chalk grassland and heathland habitats has been particularly noteworthy: mean squared error (MSE) values ranged from 6.346 to 6.637, while F1-score, Matthews Correlation Coefficient (MCC), sensitivity, and overall accuracy were consistently in the 0.8-0.9 range. In addition, we conducted accuracy assessments for our ML models using independent ecological datasets provided by Surrey Wildlife Trust (SWT). These datasets, which include annually visited and restored chalk grassland and heathland sites, allowed us to verify our model's predictions, which matched 82% of the ecological sites visited by the SWT in the last 20 years. Moreover, Space4Nature is not only about scientific remote sensing exploration but also about practical applications and meaningful conservation outcomes. Our habitat mapping results have already guided real-world restoration efforts. Specifically, the project has provided key insights for Buglife, an organisation leading the B-Lines project, which is focused on creating and restoring pollinator habitats across the UK. Based on the Space4Nature habitat maps, Buglife has identified over 100 hectares of suitable sites for habitat creation and restoration. These areas are critical to the B-Lines project’s goal of establishing a network of wildflower-rich habitats to support declining pollinator species. Importantly, Space4Nature has earned international recognition for its innovative approach to biodiversity monitoring and habitat restoration. This year, the project won the prestigious Geovation International Geospatial Award from Ordnance Survey in the Nature theme.
Furthermore, Space4Nature has been accredited by the Space Climate Observatory (SCO) for its contribution to climate action through habitat monitoring and restoration initiatives. This global recognition highlights the project's broader relevance beyond Surrey, positioning it as a model for how EO and citizen science can be harnessed for sustainable conservation actions and outcomes. For the upcoming year, Space4Nature aims to build on its successes by expanding its data collection efforts in collaboration with local stakeholders and citizen scientists. Further ecological surveys in the spring and summer of 2025 will collect additional data on the remaining habitats, such as neutral and acid grasslands, which require more data collection. By engaging more volunteers in the data collection process and refining our ML models through reinforcement learning, we aim to produce the most accurate and up-to-date habitat maps possible. The practical impact of Space4Nature will continue to grow as we expand partnerships with conservation organizations, local governments, and community groups. The project's data-driven approach not only informs habitat restoration but also offers a scalable solution to biodiversity monitoring that can be replicated in other regions facing similar conservation challenges. In conclusion, Space4Nature is an exemplary project that combines citizen science, EO data, and ML to address one of the most pressing environmental issues of our time—biodiversity loss. With its high levels of accuracy, practical conservation impacts, and international recognition, the project serves as a powerful example of how science and community action can come together to address global challenges such as the loss of biodiversity and critical habitats. 
We look forward to presenting the latest results on chalk grassland and heathland mapping at the conference, and to sharing how our innovative approaches are making a real difference in habitat conservation across Surrey County and beyond.
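For reference, the classification metrics reported above (F1-score, MCC, sensitivity) follow from standard confusion-matrix formulas. A minimal sketch; the counts below are illustrative, not the project's data:

```python
# Standard binary-classification metrics from confusion-matrix counts
# (tp/fp/fn/tn). The counts used here are illustrative only.
import math

def f1_mcc(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also reported as sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return f1, mcc

f1, mcc = f1_mcc(tp=90, fp=10, fn=10, tn=90)
print(round(f1, 3), round(mcc, 3))  # 0.9 0.8 -- both in the 0.8-0.9 range
```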
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Project From Samples to Satellites – the deployment of hyperspectral satellites for optically complex northern inland waters

Authors: Pauliina Salmi, Pritish Naik, Jenni Attila, Daniel Atton Beckmann, Dalin Jiang, Konstantinos Karantzalos, Mirva Ketola, Ismo Malin, Linda May, Rebecca McKenzie, Kristian Meissner, Justyna Olszewska, Ilkka Pölönen, Jukka Seppälä, Michal Shimoni, Sami Taipale, Jussi Vesterinen, Peter Hunter
Affiliations: Faculty of Information Technology, University of Jyvaskyla, Finnish Environment Institute, Faculty of Natural Sciences, University of Stirling, National Technical University of Athens, Lake Vesijärvi Foundation, City of Lahti, UK Centre for Ecology & Hydrology, UKCEH, Kuva Space, Faculty of Mathematics and Science, University of Jyväskylä, The Association for Water and Environment of Western Uusimaa
Remote sensing of inland waters has long been a bottleneck of environmental observation. However, recent technological developments have made hyperspectral sensor technology commercially available, and satellites carrying imaging spectroscopy instruments capable of monitoring boreal latitudes are being launched. The natural topology of inland waters and changes in weather and catchment activities can cause significant variations in their water quality. Hyperspectral satellites are of interest not only because of their good spatial resolution and rapidly increasing temporal frequency, but also because of their high spectral resolution. Potentially, when paired with robust unmixing models, this could enable detailed remote observations of optically complex inland waters. However, introducing new technologies and data products into practice is a major effort that requires multidisciplinary cooperation. Here we describe a project which started in 2023 and will end in 2027, made possible by cooperation between different institutes and satellite operators. In summer-autumn 2024, EnMAP [1] and PRISMA [2] hyperspectral satellite acquisitions were collected over Scottish and Finnish inland waters. This campaign was carried out on five water bodies of anthropogenic importance, with in-situ ground-truthing undertaken by the University of Stirling Forth-ERA programme [3] and the UKCEH Loch Leven water quality monitoring programme in Scotland, and by the Lake Vesijärvi and Enäjärvi monitoring programmes in Finland. The satellite dataset obtained for the first year comprised 24 EnMAP and 13 PRISMA products with cloud-free pixels over the target lakes. When combined with Sentinel-2 data, these satellites yield a good frequency of observations, limited only by the number of cloudless days. Hyperspectral satellites have high potential to complement traditional satellites and in-situ water quality assessments due to their superior spatial coverage and detailed spectral information.
In the forthcoming years, new satellites and ground-truthing approaches will be added systematically. References: [1] EnMAP (The Environmental Mapping and Analysis Program), Earth Observation Center (EOC) of DLR. [2] PRISMA (Hyperspectral Precursor of the Application Mission), Agenzia Spaziale Italiana (ASI). [3] Forth-ERA (Forth Environmental Resilience Array), https://www.stir.ac.uk/about/scotlands-international-environment-centre/forth-environmental-resilience-array/about-forth-era/.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Remote Sensing-Based Detection of Giant Hogweed: Integrating Machine Learning and Satellite Data

Authors: Petr Lukeš, Michaela Podborská, Kateřina Tajovská
Affiliations: Global Change Research Institute, Masaryk University
The timely detection and management of invasive species such as Heracleum mantegazzianum (giant hogweed) are critical for preserving the ecological integrity and economic value of permanent grasslands. This study explores the application of remote sensing (RS) data combined with advanced machine learning (ML) techniques to monitor and map the spread of giant hogweed, with a focus on the Karlovy Vary and Plzeň regions in the Czech Republic, which are the most affected areas in the country. These regions are particularly vulnerable due to their extensive grasslands and favorable conditions for the species' proliferation. Utilizing multispectral satellite imagery from Sentinel-2 and Planet, we employed a range of ML algorithms, including Random Forest (RF) and Support Vector Machines (SVM), as well as target detection methods like Matched Filter (MF). Our results revealed that ML algorithms, particularly RF and SVM, outperformed traditional methods in accurately classifying giant hogweed infestations. These algorithms leveraged the plant's distinct spectral characteristics, especially during its flowering phase, achieving user accuracies of up to 97% with Planet's high-resolution data. Although target detection methods such as MF showed promise for detecting dense and homogeneous infestations, they were less effective in identifying fragmented and scattered occurrences, which are typical in early-stage invasions. Spatial resolution emerged as a pivotal factor in detection performance. Planet's finer resolution (3-meter) facilitated the detection of small and dispersed patches of giant hogweed, offering a distinct advantage for precision monitoring in fragmented landscapes like those in Karlovy Vary and Plzeň. In contrast, Sentinel-2's moderate resolution (10-meter) proved more suitable for tracking large, contiguous infestations across extensive areas.
This research demonstrates the feasibility of integrating RS data with ML approaches to improve the accuracy, scalability, and cost-effectiveness of invasive species monitoring. The findings are particularly relevant for managing giant hogweed in the Czech Republic's most affected regions and offer valuable insights for broader ecological management, precision agriculture, and policymaking.
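For readers unfamiliar with the Matched Filter mentioned above: it scores each pixel against a known target spectrum relative to background statistics, so a pixel identical to the target scores 1 and typical background pixels score near 0. A minimal numpy sketch with synthetic four-band spectra (illustrative, not the study's data):

```python
# Minimal Matched Filter (MF) target-detection sketch: each pixel is scored
# against a target spectrum relative to the scene mean and covariance.
import numpy as np

def matched_filter(pixels, target):
    """Return an MF score per pixel (~1 for the target, ~0 for background)."""
    mu = pixels.mean(axis=0)                              # scene mean spectrum
    cov_inv = np.linalg.pinv(np.cov(pixels, rowvar=False))
    d = target - mu
    return (pixels - mu) @ cov_inv @ d / (d @ cov_inv @ d)

rng = np.random.default_rng(0)
background = rng.normal(0.3, 0.02, size=(200, 4))  # homogeneous background
target = np.array([0.6, 0.5, 0.7, 0.4])            # synthetic target spectrum
scene = np.vstack([background, target])

scores = matched_filter(scene, target)
print(float(scores[-1]))   # the target pixel scores ~1
```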
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Bridging Remote Sensing and Ecosystem Dynamics for Nitrogen Deposition: A Digital Twin Perspective

Authors: Mahmoud Ahmed, Enrico Dammers, Joris Timmermans, Martijn Schaap, Roderik Lindenbergh
Affiliations: Department of Geoscience and Remote Sensing, Delft University of Technology, Air Quality and Emissions Research, Netherlands Organisation for Applied Scientific Research (TNO)
Reactive nitrogen deposition is a significant driver of biodiversity loss worldwide. It adversely impacts ecosystem dynamics, directly by altering individual species traits, or indirectly by changing ecosystem structure and functioning. As such, nitrogen deposition needs to be tracked carefully to prevent severe consequences in the future. However, current monitoring instruments and modelling approaches face significant challenges in capturing the intra-ecosystem dynamics of reactive nitrogen. Current ground-based nitrogen monitoring is limited to a few select locations, providing insufficient spatial coverage for ecosystem-scale analyses. While satellite products, such as TROPOMI, offer broader spatial coverage, they are constrained to vertically integrated column values with relatively coarse horizontal resolutions (e.g., ~7 km × 3.5 km). Moreover, essential nitrogen compounds, such as nitric acid, are not measured, further reducing their suitability for detailed ecosystem studies. Chemistry transport models (CTMs) offer the capability to simulate the complex chain of processes governing atmospheric nitrogen flows and deposition rates. However, in these models, the biosphere-atmosphere interactions of nitrogen are often too generic and rely on fixed land surface characterizations. Recent studies have attempted to incorporate land surface changes using satellite-derived products. Nevertheless, the ecological relevance of these products remains limited for studying the impact of nitrogen deposition on biodiversity, as they lack the vertical and horizontal detail necessary to account for the complexities of ecological processes. This limitation is particularly critical for surface-atmosphere exchanges, which remain a significant source of uncertainty in deposition modelling and, consequently, for effectively addressing nitrogen's impact on Essential Biodiversity Variables (EBVs).
In response to this challenge, our study aims to create high-fidelity EBV products that are integrated within an Environmental Digital Twin (EDT). EDTs, with their ability to incorporate interconnected components, link environmental drivers and anthropogenic pressures to ecologically relevant trends in community composition, allowing for the evaluation of intervention measures. Remote sensing plays a critical role in establishing EDTs by providing essential variables that characterize both environmental processes and surface dynamics and link them to their digital replicas. We focus on the Veluwe area in the Netherlands, a critical conservation and protection zone. In this study, we integrate ecosystem-specific state variables derived from satellite remote sensing to refine deposition rate estimates within a Chemistry Transport Model (CTM). By combining multispectral Sentinel-2 data with ultra-high-resolution (~30 cm) Pléiades Neo observations and leveraging the capabilities of Large Eddy Simulation (LES) models, we aim to deduce the parameters needed for upscaling nitrogen deposition using the LOTOS-EUROS CTM. We present the design and working principles of the Digital Twin framework, alongside initial results from the monitoring framework. Specifically, we demonstrate the impact of automated tree detection for species distribution modelling (from optical satellite remote sensing) and the characterization of vertical ecosystem structure (from airborne LiDAR) on nitrogen deposition estimates derived from LOTOS-EUROS. These simulations are conducted at a high spatial resolution of 100 m and finer, enabling detailed insights into nitrogen dynamics within the ecosystem.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Mapping 30+ Years of Mangrove Extent in Tanzania Using Historical Paper Maps and Remote Sensing

Authors: Helga Kuechly, Dr. Mwita M. Mangora, Sam Cooper, Simon Spengler, Dr. Makemie J. Mabula, Kelvin J. Kamnde, Dr. Carl C. Trettin
Affiliations: Institute of Marine Sciences, University of Dar es Salaam, World Wide Fund For Nature (WWF) Germany, Earth Observation Lab, Geography Department, Humboldt-Universität zu Berlin, East African Crude Oil Pipeline (EACOP), Western Indian Ocean Mangrove Network, Center for Forest Watershed Research, Southern Research Station, USDA Forest Service
Mangroves are vital ecosystems for biodiversity, coastal resilience, and climate action, yet long-term monitoring in Tanzania and Zanzibar has been hindered by inconsistent methodologies and limited historical data. This assessment combines historical paper maps from Tanzania’s 1989/1990 national mangrove inventory, satellite imagery, and extensive field data to map mangrove extent in 1990 and 2023, quantify changes, and inform sustainable management strategies. While Zanzibar’s mangroves were also mapped, historical paper maps were unavailable, necessitating exclusive reliance on remote sensing and field validation. The assessment combines digitized historical inventory maps; Landsat and Sentinel-1 and -2 imagery; and training and validation data obtained from field campaigns using a custom mobile application and from manual digitization in Google Earth, updated with Planet NICFI monthly composites. The analysis used the Google Earth Engine (GEE) Python API, applying supervised Random Forest modelling for mangrove classification together with local expert knowledge of mangrove areas, to map changes in the extent of mangrove forests and estimate the gain, loss, and stable mangrove area between 1990 and 2023, with overall accuracies of 90% for 1990 and 94% for 2023. Results reveal a mainland mangrove extent of 124,022 ha in 1990, declining to 106,054 ha in 2023. Stable mangrove areas totaled 93,761 ha, with 12,292 ha gained and 30,261 ha lost, representing a net reduction of 17,969 ha (14.5%) over 33 years, or 545 ha annually. Zanzibar’s mangroves were similarly assessed, with separate classification models tailored to ecological and geographical differences, enhancing accuracy. Validation highlighted challenges such as spectral confusion with coconut plantations and inland vegetation. Findings indicate significant mangrove loss driven by land-use change and governance challenges, including ineffective enforcement of harvesting bans.
However, net gains in specific districts reflect the impact of conservation programs from the 1990s-2000s. These data inform ongoing national mangrove management strategies, action plans, and Tanzania’s Nationally Determined Contribution (NDC) for climate action. This methodology establishes a robust, scalable framework for monitoring mangrove ecosystems, emphasizing public data, repeatability, and integration with future assessments. It supports informed policy decisions, strengthens conservation efforts, and enhances coastal ecosystem resilience for communities reliant on mangrove resources.
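The headline change figures follow directly from the reported gain/loss budget:

```python
# Change-budget arithmetic for mainland Tanzania, using the figures
# reported in the abstract (hectares).
extent_1990 = 124_022
stable      =  93_761
gained      =  12_292
lost        =  30_261

net_reduction = lost - gained                  # 17,969 ha
pct_of_1990   = 100 * net_reduction / extent_1990
annual_rate   = net_reduction / 33             # 33 years, 1990-2023

print(net_reduction, round(pct_of_1990, 1), round(annual_rate))  # 17969 14.5 545
```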
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Satellite Remote Sensing for Riparian Vegetation Health Assessment

Authors: Hamid Afzali, Milos Rusnak
Affiliations: Institute Of Geography, SAS
Floodplain forests are among the most critical components of riverine landscapes, providing key functions and benefits for biodiversity, stabilizing channel banks, and preserving aquatic ecosystem integrity. The proposed methodology combines remote sensing data with documented information about riparian forests to develop a robust framework for analysing riparian vegetation properties in large river systems. In this study, we used satellite-derived vegetation indices to investigate vegetation health, greenness, productivity, and functionality in response to climatic conditions and human intervention. More than ten vegetation indices were computed from Sentinel-2 and Landsat imagery. Textural information was extracted using the Gray Level Co-occurrence Matrix (GLCM) and geomorphological filters, which were applied to high-resolution, preprocessed, and normalized satellite data. Subsequently, spectral and spatial characteristics were classified using the Random Forest (RF) machine learning model. Furthermore, the Vegetation Condition Index (VCI) and Vegetation Health Index (VHI) were used to assess vegetation health, particularly with regard to environmental stressors such as drought, temperature extremes, and other climate-related variables. Through systematic spatiotemporal monitoring of riparian vegetation, we demonstrated that combining multiple remote sensing datasets and machine learning techniques provides a robust framework for assessing vegetation health and functional changes over time.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Treesure: new data for small woody features monitoring at landscape scale

Authors: Francesco Saverio Santaga, Sofia Maria Lilli, Marzia Franceschilli, Stefano Marra, Camilla Bizzarri, Sara Antognelli
Affiliations: Agricolus Srl, CGI
Introduction: Local Public Administrations (LPAs) benefit greatly from geographical information, as GIS systems are increasingly used for landscape management. Small woody features (SWFs) are crucial components of the European landscape. They provide many ecological benefits, such as acting as habitat corridors and enhancing biodiversity. They can also help regulate water cycles and prevent soil erosion, functions that are particularly important in agricultural areas. Their aesthetic and cultural value is also important, as elements of traditional agricultural practices and rural heritage (https://land.copernicus.eu/en/products/high-resolution-layer-small-woody-features). LPAs benefit greatly from updated and detailed georeferenced information on SWFs, which supports more informed land management that increases the land's ability to provide ecosystem services, preserves and increases biodiversity, protects the soil, and maintains typical landscape elements. State of the art: The Copernicus Land Monitoring Service provides an SWF layer covering Europe in its entirety at a spatial resolution of 5 m, classifying SWFs into patchy features and linear features; it was last updated in 2018. Additionally, some LPAs have access to local GIS layers describing SWFs with a relatively high level of detail. However, these data are produced using non-standardized methodologies and are rarely updated. Treesure solution: Treesure data enable LPAs to identify and monitor the presence of SWFs. This knowledge is delivered as geographical layers accessible through the Treesure service. Treesure distinguishes three SWF classes: woody patches; riparian strips; and rows of trees, hedges, and tree belts. The production of data relies on a standard procedure that includes three steps: 1. the production of super-resolution images from Sentinel-2.
This step is based on an AI algorithm trained with high-resolution images and produces 1 m resolution images. 2. the identification of wooded areas from the super-resolution images using a machine-learning-based algorithm that detects wooded vegetation. 3. the classification of wooded areas into “woods” and “SWFs”, and the subsequent classification of SWFs; landscape metrics and geometrical elaborations form the basis of the SWF classification. Additional land use-land cover data provided by LPAs have been used for data refinement when available. The Treesure product has a 3 m resolution and is currently being produced and updated yearly for different Italian LPAs. Data performance for 2023 was as follows: precision for the SWF class: 59%; false positives for the SWF class: 41%; false negatives for the SWF class: 54.4%. Performance was better than the 2023 baseline in the selected areas (including Autorità di bacino distrettuale dell’appennino settentrionale, Regione Toscana, and Comune di San Giuliano Terme). Products are disseminated through the CGI Insula platform, an advanced Earth Observation (EO) Platform-as-a-Service designed to harness cutting-edge cloud technologies for big data analytics on EO datasets. Insula's architecture enables the efficient processing and analysis of massive data volumes, making it an ideal solution for Treesure's challenges. The platform provides users with a seamless experience, offering an intuitive user interface for detailed analytics as well as accessibility through standardized Open Geospatial Consortium (OGC) interfaces, ensuring interoperability and ease of integration into diverse workflows. Conclusion: Treesure represents a valuable and accessible dataset for LPAs.
In fact, the semi-automatic production of outputs guarantees yearly product updates and the comparability of results across different geographical regions. Performance proved sufficient for the main LPA need, namely landscape monitoring and management. Compared with the Copernicus service, the data are particularly suitable for LPAs thanks to their higher update frequency, higher spatial resolution, and better performance at the local level.
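Step 3 of the procedure (separating woods from SWFs and typing the SWFs) can be sketched with simple geometry. The area threshold and elongation rule below are hypothetical stand-ins for the landscape metrics the service actually uses, and riparian strips would additionally require distance-to-watercourse data not modelled here:

```python
# Hypothetical geometric rules illustrating woods-vs-SWF separation and
# linear-vs-patchy SWF typing. Thresholds are illustrative assumptions,
# not the Treesure service's actual landscape metrics.
def classify_wooded(area_ha, length_m, width_m, woods_min_ha=0.5):
    if area_ha >= woods_min_ha:
        return "wood"
    # elongated features -> rows of trees / hedges; compact -> woody patch
    if length_m / max(width_m, 1e-9) >= 4:
        return "linear SWF"
    return "patchy SWF"

print(classify_wooded(2.0, 200, 100))  # wood
print(classify_wooded(0.1, 120, 6))    # linear SWF
print(classify_wooded(0.1, 40, 30))    # patchy SWF
```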
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Automated Habitat Mapping Using High-Resolution Satellite Data in the “Sv. Juraj - Sv. Kajo” and “Osoje” Mining Areas

Authors: Katarina Barnjak, Dragan Divjak, PhD in Ecological and Biological Engineering Andreja
Affiliations: LIST LABS LLC
The project focused on automating the annual mapping of habitat types in the mining areas of “Sv. Juraj - Sv. Kajo” (300 ha) and “Osoje” (20 ha) in Croatia, addressing the need for efficient and precise land-use monitoring in ecologically sensitive areas. Habitat mapping plays a crucial role in ensuring compliance with environmental regulations, preserving biodiversity, and guiding sustainable land-use practices. In this context, the project employed advanced geospatial methodologies and high-resolution satellite imagery from the PlanetScope platform to produce accurate, standardized datasets that meet both national (NKS1, NKS2) and European (EUNIS, Natura 2000) habitat classification standards. PlanetScope imagery was selected due to its high temporal frequency, spectral resolution, and spatial detail, enabling comprehensive year-round monitoring. The data's temporal richness allowed for capturing dynamic changes in vegetation and land use throughout the seasons. Key metrics such as the Normalized Difference Vegetation Index (NDVI) were calculated to assess vegetation health and classify habitats effectively. The combination of these advanced satellite datasets with machine learning techniques for supervised classification ensured a robust methodology for delineating and tracking habitat transitions over time. One of the main achievements of the project was the development of the habitat classification and map for 2024, based on the reference habitat map from 2023, which served as a starting point for the analysis. This 2024 map enabled the detection of spatial and temporal variations, providing stakeholders with a reliable tool for monitoring land-use dynamics and mitigating potential environmental risks. The results demonstrate significant progress in automating habitat mapping, reducing the time and effort required for traditional manual methods while ensuring high accuracy and consistency of outputs. 
The application of automated workflows, developed in R and Python programming environments, further enhances the scalability and replicability of this methodology. These workflows ensure efficient data processing and future updates with minimal human intervention, providing a cost-effective and adaptable solution for long-term monitoring programs. By utilizing PlanetScope imagery, the project established a replicable framework that can be applied to other regions facing similar challenges, promoting broader adoption of automated habitat mapping technologies. In addition to its technical advancements, the project underscores the importance of leveraging geospatial technologies for environmental stewardship. Automated habitat mapping equips policymakers, environmental managers, and local communities with actionable insights, empowering them to address ecological challenges more proactively. By aligning with international conservation efforts, this approach supports biodiversity preservation and sustainable development goals, ensuring a balance between industrial activities and environmental integrity.
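The NDVI metric mentioned above is computed from red and near-infrared surface reflectance; a minimal sketch with illustrative band values:

```python
# NDVI from near-infrared and red surface reflectance (illustrative values).
def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(round(ndvi(nir=0.45, red=0.08), 2))  # ~0.7: dense, healthy vegetation
print(round(ndvi(nir=0.20, red=0.18), 2))  # near 0: sparse cover / bare soil
```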
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Vegetation disturbance alert from HLS (DIST-ALERT) – applications for all land monitoring

Authors: Sarah Carter, Song Zhen, Fred Stolle, James MacCarthy, Jessica Richter, Annabel Burg, Anika Berger, Benjamin Wielgoetz, Elise Mazur, Elizabeth Goldman, Dr. Matthew Hansen, Amy Pickens
Affiliations: World Resources Institute, University of Maryland, Wageningen University
Meeting biodiversity, climate and sustainable development targets means preventing the loss of valuable natural ecosystems. There is a need for actionable (timely, accessible, easily analyzable, spatially explicit) vegetation change data to understand the full range of changes and stresses affecting natural ecosystems due to human, natural or climate drivers. The recently released disturbance monitoring system known as DIST-ALERT, developed through the NASA-funded OPERA project in collaboration with the University of Maryland (UMD) and Land & Carbon Lab (a World Resources Institute and Bezos Earth Fund initiative), is the first global operational system to detect disturbances in all vegetation cover, including forests, grasses, shrubs and crops. Based on Harmonized Landsat Sentinel-2 data, alerts are triggered when the observed vegetation coverage is more than 10% below the minimum value in the historical baseline (+/- 15 days in the previous 3 years). The system has been operating since 1 January 2023 (Pickens et al. 2024). A number of data layers are provided, such as the maximum vegetation anomaly, detected disturbance count and disturbance duration, which support a better understanding of the alerts. This information has also been used to categorize alerts into low and high confidence and, along with other information, to identify potential causes of alerts such as wildfire and conversion of land to agriculture. This classification provides users with policy-relevant information; for example, it can help determine whether potential violations of the EUDR have taken place. Crucially, DIST-ALERT enables continuous monitoring and early notification of possible deforestation, which is important for enforcing the regulation and for other tasks such as identifying and acting on illegal deforestation. Specifically, combining DIST-ALERT with the Natural Lands Map (Mazur et al. 
2024) provides a timely indication of where forest loss is occurring in natural (primary, naturally regenerating) forests and can identify potential degradation. Combining DIST-ALERT with VIIRS active fire alerts can identify potential wildfires, which are less likely to be associated with EUDR violations, for example. While the EUDR is currently limited to forests, a potential expansion to Other Wooded Land will be discussed in future. Since these alerts operate in all land covers, the data can support such an expansion of scope. Within tree cover, DIST-ALERT has been compared to existing forest disturbance alerts, and different thresholds (e.g. on the maximum vegetation anomaly) can be used to develop complementary and integrated products. Agreement with current alerts, however, varies across canopy densities and continents; results from these comparisons, and implications for use alongside other forest disturbance products, will be presented. Presenting open and free data in easy-to-access formats, with tools and workflows, is crucial to support uptake. DIST-ALERT will be integrated into the Global Forest Watch (GFW) platform and the new Land & Carbon Lab (LCL) platform, currently in development, to ensure the data are widely distributed. This will include GFW Pro, a dedicated platform for deforestation/conversion-free supply chain assessments. At present, GFW has over 7 million active users worldwide, including decision makers, forest rangers and many other stakeholders, who can use the DIST-ALERT product and its derived data for monitoring their landscapes or ecosystems of interest. The integration of DIST-ALERT into these platforms will facilitate timely information flow to decision makers and stakeholders on the location, timing, and extent of changes in their areas of interest. 
This will support action to prevent the conversion of natural ecosystems while continuing to meet the world’s growing need for food, timber and other goods. Future research opportunities, such as utilizing 10 m HLS data, integrating DIST-ALERT with other alert products to increase confidence and timeliness (e.g. Reiche et al. 2024), and classifying drivers using automated approaches in near-real-time (e.g. Slagter et al. 2023), will also be discussed.
References:
Mazur, E., Sims, M., Goldman, E., Schneider, M., Pirri, M.D., Beatty, C.R., Stolle, F., & Stevenson, M. (2024). SBTN Natural Lands Map v1: Technical Documentation. Science Based Targets for Land Version 1, Supplementary Material. Science Based Targets Network. https://sciencebasedtargetsnetwork.org/wp-content/uploads/2024/09/Technical-Guidance-2024-Step3-Land-v1-Natural-Lands-Map.pdf
Pickens, A., Hansen, M., & Zhen, S. (2024). Product Specification Document for Disturbance Alert from Harmonized Landsat and Sentinel-2. OPERA Level-3 Disturbance Alert from Harmonized Landsat-8 and Sentinel-2 A/B Product Specification, Version 1.0, JPL D-108277.
Reiche, J., Balling, J., Pickens, A. H., Masolele, R., Carter, S., Berger, A., Gou, Y., Donchyts, G., Slagter, B., Mannarino, D., & Weisse, M. J. (2024). Integrating satellite-based forest disturbance alerts improves detection timeliness and confidence. Environmental Research Letters.
Slagter, B., Reiche, J., Marcos, D., Mullissa, A., Lossou, E., Peña-Claros, M., & Herold, M. (2023). Monitoring direct drivers of small-scale tropical forest disturbance in near real-time with Sentinel-1 and -2 data. Remote Sensing of Environment.
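The alert rule described above (vegetation coverage more than 10% below the minimum of the baseline from the same +/-15-day window over the previous three years) can be sketched as follows; the function and its inputs are an illustrative simplification, not the OPERA implementation:

```python
import numpy as np

def dist_alert(current_vf, history_vf, drop_threshold=0.10):
    """Flag a disturbance when the current vegetation fraction falls more than
    `drop_threshold` below the minimum of the historical baseline, i.e. the
    values from the same +/-15-day seasonal window over the previous 3 years."""
    baseline_min = np.nanmin(history_vf)
    anomaly = baseline_min - current_vf   # positive = vegetation loss
    return bool(anomaly > drop_threshold), anomaly

# 0.55 is 0.20 below the baseline minimum of 0.75, so an alert fires
triggered, anomaly = dist_alert(0.55, [0.80, 0.75, 0.78])
```

The returned anomaly corresponds conceptually to the "maximum vegetation anomaly" layer: repeated observations would keep the largest anomaly seen, along with counts and duration.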
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Characterizing alpine vegetation communities using a multi-scale approach employing UAV and spaceborne Earth Observation

Authors: Basil Tufail, Baturalp Arisoy, Elio Rauth, Antonio José Castañeda Gómez, Dr Martin Wegmann, Dr Mirjana Bevanda, Dr. Doris Klein, Univ.-Prof. Dr. Stefan Dech, Prof. Dr. Tobias Ullmann
Affiliations: Earth Observation Research Cluster, Department of Remote Sensing, Institute of Geography and Geology, Julius-Maximilians-University Würzburg, German Remote Sensing Data Center (DFD), German Aerospace Center (DLR)
Climate change has caused significant alteration to various ecosystems and is considered a major driver of biodiversity loss over recent decades (1). Alpine regions are among the most vulnerable and adversely affected in this regard. Within the alpine ecosystem, there is a notable division between types of flora depending on factors such as altitude, slope, and thawing frequency. The literature has placed less emphasis on sub-nival to nival vegetation, including fellfields (2), even though these communities serve as important indicators for monitoring snowpack patterns and vegetation changes linked to climate change and the warming of this biome. Global warming, with its intensified impact on the alpine region, also exerts an influential role in the ecosystem carbon cycle. However, research gaps remain regarding the implications of the extensive greening and the shifting of the tree line to higher elevations identified in recent studies; focusing on dwarf shrubs and sub-nival vegetation communities with respect to the Net Ecosystem Exchange of CO2 can help clarify the role they play as carbon sources or sinks. This study focuses on the collection and analysis of in situ data from ground sensors installed along an altitudinal gradient in the Zugspitze area, Germany. The approach makes use of the long-term environmental records provided by the Environmental Research Station Schneefernerhaus (UFS), a unique research facility located just below the summit of the Zugspitze at 2,650 m in the German Alps. The aim is to understand the impacts of climate change on alpine tundra vegetation communities, using a multi-sensor fusion of satellite remote sensing data and high-resolution airborne data from Unmanned Aerial Vehicles (UAVs). 
For example, vegetation indices such as NDVI, acquired at different spatial resolutions, can serve as a basis for identifying spatial patterns and their dynamics over time. In addition, parameters such as Solar-Induced chlorophyll Fluorescence (SIF), the Aridity Index (AI), and soil moisture, along with climatic records of temperature, precipitation, and humidity, can help better understand the ecosystem response and its changing traits. This is vital for climate change adaptation and mitigation efforts in montane environments, raising the question of anthropogenic intervention.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: SAR-based solution for Ecosystem Functional Type identification in cloudy regions

Authors: Mr. Marek Ruciński, Ph.D. Edyta Woźniak, Ph.D. Anna Foks-Ryznar, Ms. Ewa Gromny, Ph.D. Lluís Pesquer Mayos, Ph.D. Cristina Domingo-Marimon, Ph.D. Małgorzata Jenerowicz-Sanikowska, Mr. Michał Krupiński
Affiliations: Space Research Centre of the Polish Academy of Sciences, GRUMETS research group. CREAF Bellaterra (Cerdanyola del Vallès)
Ecosystem Functional Types (EFTs) allow the identification of areas characterised by similar matter and energy exchange between the biotic and abiotic components of an ecosystem. The use of EFTs can be valuable for identifying early-stage changes in ecosystem functioning. In this regard, remote sensing methods enable multi-temporal EFT assessment based on vegetation indices derived from optical data, e.g. the Normalized Difference Vegetation Index (NDVI) or the Enhanced Vegetation Index (EVI) [1]. However, the operational use of EFTs can be constrained by the characteristics of cloud cover over the analysed region, which can significantly affect their effectiveness and reliability. Synthetic Aperture Radar (SAR) offers a weather-independent alternative to optical remote sensing, ensuring reliable data acquisition even in regions with persistent cloud cover. Despite its promise, the relationship between SAR-derived features and vegetation indices remains underexplored, often limited to specific land cover types [2]. This study investigates whether the correlation between radar (Sentinel-1) and optical (Sentinel-2) data is sufficient to develop a SAR-based proxy for EFTs that operates independently of atmospheric conditions. The region of interest is located in Central Africa, covering the northwestern part of Tanzania, Burundi, Rwanda and the western part of the Democratic Republic of the Congo. The analysed time range covers 2019-2021, corresponding to two growing seasons of satellite data. The region's typical climate is characterized by two rainy seasons, which leads to very high seasonal cloudiness, as confirmed by an analysis of the cloudy-pixel percentage in Sentinel-2 data. For the period from December 2018 to December 2022, the average percentage of cloudy pixels in the series of individual images acquired for the polygons was 49.8%, with a standard deviation of 35%. The median value for these data was 47.6%. 
First, we calculated coherence matrices [3] and the H/α decomposition for dual-polarization (VV + VH) [4] Sentinel-1 radar images. Then, we developed a model correlating polarimetric features with NDVI calculated from Sentinel-2 images. The model was tested using 1,000 random points distributed across the scene. Preliminary results indicate a Pearson correlation of 0.67 (p < 0.001) between the radar model and NDVI. The major discrepancies occur in urban areas, where the strong radar signal is not related to vegetation. The calculation of EFTs using SAR-based inputs revealed peculiarities of vegetation behaviour that were not clearly seen in optical images. The main differences were the visibility of the rainy seasons throughout the year, resulting in two cycles of vegetation productivity, and a specific seasonality of growth. Research performed under the ARICA project: NOR/POLNOR/ARICA/0022/2019-00, Norway Grants, POLNOR2019, co-financed from the State budget, Applied Research, and the European Union’s Horizon 2020 research and innovation programme under the EOTIST project, grant agreement No 952111.
References:
[1] Domingo-Marimon C., Jenerowicz-Sanikowska M., Pesquer L., Rucinski M., Krupinski M., Wozniak E., Foks-Ryznar A., Abdul Quader M. (2024) "Developing an early warning land degradation indicator based on geostatistical analysis of Ecosystem Functional Types dynamics", Ecological Indicators 169: 112815. DOI: 10.1016/j.ecolind.2024.112815.
[2] Ruciński M., Foks-Ryznar A., Pesquer L., Woźniak E., Domingo-Marimon C., Jenerowicz-Sanikowska M., Krupiński M., Gromny E., Aleksandrowicz S. (2023) "The Multi-Temporal Relationship Between Sentinel-1 SAR Features and Sentinel-2 NDVI for Different Land Use / Land Cover Classes in Central Africa", IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, pp. 325-328. DOI: 10.1109/IGARSS52108.2023.10281862.
[3] Cloude, S.R. and Pottier, E. (1997) "An entropy based classification scheme for land applications of polarimetric SAR", IEEE Transactions on Geoscience and Remote Sensing, 35(1), 68-78. DOI: 10.1109/36.551935.
[4] Cloude, S.R. (2007) "The Dual Polarisation Entropy/Alpha Decomposition: A PALSAR Case Study", POLinSAR 2007, the 3rd International Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry, ESA, Frascati, January 22-26.
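A correlation check of the kind reported above (Pearson r between a SAR-based model output and NDVI at 1,000 sample points) can be sketched with synthetic stand-in series; the data below are random placeholders, not the study's polarimetric features:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # the study evaluated 1,000 random points across the scene

# Hypothetical stand-ins: a SAR-derived model output and Sentinel-2 NDVI,
# constructed here so the two series are only partially correlated.
sar_model = rng.normal(0.0, 1.0, n)
ndvi = 0.7 * sar_model + rng.normal(0.0, 0.8, n)

r = np.corrcoef(sar_model, ndvi)[0, 1]   # Pearson correlation coefficient
```

With real data, points falling on urban areas would be masked out first, since strong built-up backscatter is unrelated to vegetation and drags the correlation down.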
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Canopy Reflectance as a Proxy for Soil Microbial Communities at a Regional Scale

Authors: Angela Harris, Prof Richard Bardgett
Affiliations: The University of Manchester
Soil microbial communities are integral to terrestrial ecosystem processes, yet their spatial and temporal dynamics remain poorly understood. While climate and soil properties are recognized as primary drivers of microbial composition at broad scales, emerging evidence highlights the significant influence of plant community composition and functional diversity. Specifically, plant traits - such as leaf chemistry and morphology - are linked to microbial composition through their effects on nutrient cycling and decomposition. However, the extent to which above- and below-ground communities co-vary predictably across landscapes and environmental gradients is unclear, presenting challenges for forecasting ecosystem responses to global change. Canopy reflectance captures a range of plant traits related to ecological processes. Plant traits, and therefore canopy spectra, may reflect soil microbial communities. However, the extent to which canopy reflectance can help elucidate soil microbial community composition across biomes remains unclear. Using datasets from 14 NEON (National Ecological Observatory Network) ecoregions (domains), we explore links between aboveground plant traits and belowground soil community composition and develop partial least squares regression models, using airborne imaging spectrometer data, to predict the abundance of soil microbial groups derived from phospholipid fatty acid (PLFA) analysis (including gram-positive bacteria (G+), gram-negative bacteria (G-), saprophytic fungi (SF), arbuscular mycorrhizal (AM) fungi, actinomycetes, total microbial biomass, and the ratios G+:G- and fungi:bacteria) and the relative abundance of commonly found bacterial phyla derived from 16S rRNA gene sequencing (including Acidobacteria, Actinobacteria, Proteobacteria and Verrucomicrobia). 
Our results provide evidence that plant traits are associated with the abundance of diverse soil microbial groups and bacterial phyla at a regional scale (hundreds of kilometres), particularly when the abundance of microbial groups is characterized by PLFA analysis. Foliar traits explained a unique proportion of the modelled variation in soil microbial community composition, as well as a joint proportion through their association with soil properties. Our ability to predict soil microbial abundance using partial least squares regression models, with spectral reflectance as the independent variables, was greatest for microbial abundances characterised by PLFA analysis (R² = 0.57 - 0.85, nRMSE = 10 - 15%), whereas the relative abundance of bacterial phyla proved more challenging to predict (R² = 0.01 - 0.41; nRMSE = 20 - 25%). Of the PLFA soil microbial groups, the abundances of G+ bacteria and AM fungi were best predicted, whereas Actinobacteria and Acidobacteria were the best-predicted bacterial phyla. Our results suggest that spectral reflectance data hold promise as a novel indirect indicator of soil microbial community composition at the regional scale, particularly for broad functional soil microbial community groups identified by PLFA analysis. Spatial maps of key soil microbial community groups obtained from remotely sensed data could bridge an important gap in field measurements of soil microbial community composition and improve our understanding of ecosystem function over space and time.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Strategic Framework for Biodiversity Conservation: AI and Open Source Data for Protected Area Prioritization

Authors: Katharina Horn, Ass. Prof. Dr. Christine Wallis, Prof. Dr. Birgit Kleinschmit, Jun. Prof. Dr. Annette Rudolph
Affiliations: Technische Universität Berlin, Department for Artificial Intelligence and Land Use Change, Technische Universität Berlin, Geoinformation in Environmental Planning Lab
Biodiversity is increasingly under threat from a range of human-driven processes, including climate change, deforestation, land use change, and habitat destruction. Additionally, biodiversity loss and climate change mutually intensify each other, which is why action needs to be taken to prevent irreversible damage to ecosystems. Biodiversity is not solely about species richness; it encompasses a broader range of dimensions, including taxonomic, phylogenetic, genetic, functional, spatial, temporal, and interactional aspects. Consequently, these dimensions need to be taken into account when developing strategies aimed at halting biodiversity loss. Ensuring the protection and recovery of threatened species and ecosystems is essential for safeguarding the future of life on Earth. In response to this biodiversity crisis, the European Union has introduced the "EU Biodiversity Strategy for 2030," which aims to protect 30% of both terrestrial and aquatic ecosystems within EU member states by 2030. However, in Germany there remains a lack of comprehensive strategies to identify and prioritize the most critical areas for conservation. Therefore, the challenge of developing effective approaches for selecting and managing these protected areas needs to be addressed adequately. To identify protected areas in Germany, a set of influencing factors such as topography, soil types, species abundance, distribution patterns, land use changes, and more needs to be included in the analysis. Given the increasing amount and complexity of the data necessary to assess these factors, there is a growing need for advanced tools to derive valuable information from the available data. Artificial intelligence (AI), particularly reinforcement learning, offers a promising solution to this challenge. Reinforcement learning models are capable of learning from large datasets and making decisions based on predefined goals, such as optimizing land protection strategies. 
These models can analyze vast amounts of data, allowing for the identification of areas that are most suitable for conservation based on a variety of ecological parameters. In this regard, citizen science data (e.g., iNaturalist) present a valuable resource, providing on-the-ground insights into species occurrences at different locations. These data, collected by citizens, offer significant potential for complementing remote sensing and other data sources. This study aims to apply reinforcement learning techniques to identify potential protected areas within German forests. The methodology follows a three-step approach. First, we will assess the quality of existing data, which include remote sensing products that provide valuable information on land use and land cover, climate, soil characteristics, and vegetation. The selected datasets will then be pre-processed. Second, we will collaborate with nature conservation agencies and ecological experts to identify key parameters that influence biodiversity protection. Finally, the validated data and expert input will be integrated into the AI model, allowing it to prioritize areas with the highest potential for conservation and recovery. This study seeks to advance the application of AI in biodiversity conservation by developing a robust, data-driven approach for identifying priority protected areas in Germany. By combining artificial intelligence approaches with citizen science and expert knowledge, we hope to contribute to the broader goal of halting biodiversity loss and fostering ecosystem resilience in the face of global environmental change.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Temporal Dynamics in Ecosystem Functional Attributes (EFAs) and Types (EFTs): Approaches and Lessons Learned

Authors: Dr. Cristina Domingo-Marimon, Dr. Lluís Pesquer, Dr. Małgorzata Jenerowicz-Sanikowska, Marek Ruciński, Dr. Edyta Woźniak, Dr. Joan Pino
Affiliations: CREAF, Space Research Centre of the Polish Academy of Sciences
Understanding and monitoring ecosystem dynamics is essential for addressing global challenges such as land degradation, biodiversity loss, and food security. Ecosystem Functional Attributes (EFAs) and Ecosystem Functional Types (EFTs), derived from remote sensing, have emerged as powerful tools for characterizing and monitoring ecosystem dynamics. EFAs are quantitative descriptors of ecosystem functioning, derived from satellite-based vegetation indices such as NDVI or EVI, which capture the exchanges of matter and energy between biotic communities and their environment. EFAs form the basis for defining EFTs, which group ecosystems based on shared functional attributes rather than structural or compositional characteristics, providing a novel approach to ecosystem categorization without requiring prior knowledge of vegetation types or canopy structure. The application of EFAs and EFTs in ecosystem monitoring and assessment has gained significant traction due to their ability to provide rapid, integrative insights into ecosystem responses to environmental changes. Unlike traditional land cover maps, which emphasize structural attributes with consolidated properties, EFAs and EFTs focus on functional aspects of ecosystems, offering a more dynamic and responsive measure of ecosystem health. Indeed, the ability of EFAs to respond more rapidly to change than structural or compositional attributes makes them particularly valuable in the context of global environmental changes and biodiversity monitoring. This functional approach has proven particularly valuable in species niche characterization, biodiversity modeling, and as an early warning system for ecosystem changes. Our analysis is based on lessons learnt from applications spanning diverse climatic regions, including Tanzania, Ethiopia, Bangladesh, Spain, Siberia and Amazonia. 
Additionally, the integration of EFA/EFT analysis with other geostatistical methods, such as variogram-based analysis, enables a deeper understanding of spatial patterns and ecosystem dynamics and has proven effective for early detection of land degradation, often surpassing traditional land cover change analysis in sensitivity and timeliness. The potential of EFAs and EFTs as early warning systems for ecosystem degradation is a research topic of growing interest. While traditional land cover change analysis may detect changes only after critical thresholds have been surpassed, EFA-based approaches offer the possibility of identifying ecosystem transitions approaching critical points before they occur. This early detection capability could significantly improve degradation mitigation actions and reduce associated economic costs. Despite these capabilities, the application of EFAs and EFTs poses notable challenges. On the one hand, the temporal resolution of EFA and EFT analysis is a critical factor in their effectiveness: high temporal resolution is needed to capture dynamic ecosystem processes, their trends and anomalies, while cloud cover and data availability often constrain the production of annual EFT maps, particularly in fragmented or heterogeneous landscapes. On the other hand, the choice of spatial resolution significantly impacts results, especially in complex ecosystems like the Mediterranean, where coarse resolutions may overlook critical heterogeneity. Most previous EFT studies have been based on low or medium spatial resolution data from sensors such as AVHRR, SPOT-VGT, or MODIS, with only a few studies utilizing higher-resolution data from Sentinel-2 or Landsat missions. These studies have primarily focused on static approaches (a single year or the mean of a period), highlighting the need for more research into the interannual variability of EFTs, which offers a promising avenue in functional ecology. 
The current review is based on work carried out with different satellite platforms/sensors: Envisat MERIS, Landsat TM, ETM+ and OLI, and Sentinel-2 MSI, with their corresponding different spatio-temporal resolutions (Domingo-Marimon et al. 2024; Pesquer et al. 2019). In conclusion, the use of EFAs and EFTs in remote sensing analysis represents a significant advancement in our ability to monitor and understand ecosystem dynamics. These functional approaches offer several advantages over traditional structural analyses, including increased sensitivity to change, the ability to capture seasonal and short-term variations, and potential application as early warning systems. However, challenges remain in terms of optimizing spatial and temporal resolutions and applying these methods across diverse ecosystem types. As remote sensing technologies continue to advance, the integration of EFA and EFT approaches with other analytical methods promises to enhance our capacity for ecosystem monitoring, conservation planning, and sustainable resource management in the face of global environmental changes.
References:
Domingo-Marimon C, Jenerowicz-Sanikowska M, Pesquer L, Rucinski M, Krupinski M, Wozniak E, Foks-Ryznar A, Abdul Quader M (2024) Developing an early warning land degradation indicator based on geostatistical analysis of Ecosystem Functional Types dynamics. Ecological Indicators 169: 112815. DOI: 10.1016/j.ecolind.2024.112815.
Pesquer L, Domingo-Marimon C, Cristóbal J, Ottlé C, Peylin P, Bovolo F, Bruzzone L (2019) Comparison of ecosystem functional type patterns at different spatial resolutions in relation with FLUXNET data. Proc. SPIE, Vol. 11149: 1114908. DOI: 10.1117/12.2533049.
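One common way to derive EFAs from an annual NDVI series, widely used in the literature this abstract builds on (the authors' exact attribute set may differ), is the productivity / seasonality / phenology triplet:

```python
import numpy as np

def efa_metrics(ndvi_monthly):
    """Ecosystem Functional Attributes from a 12-value annual NDVI series:
    productivity (annual mean), seasonality (intra-annual coefficient of
    variation), and phenology (month of the NDVI maximum)."""
    ndvi = np.asarray(ndvi_monthly, dtype=float)
    mean = ndvi.mean()
    cv = ndvi.std() / mean          # dimensionless seasonality measure
    month_of_max = int(np.argmax(ndvi)) + 1   # 1 = January
    return mean, cv, month_of_max

# EFTs then group pixels by binning each attribute (e.g. into quartiles).
mean, cv, peak = efa_metrics([0.2, 0.25, 0.35, 0.5, 0.65, 0.7,
                              0.6, 0.5, 0.4, 0.3, 0.25, 0.2])
```

Computing these attributes per pixel and per year, rather than for a single reference year, is exactly what enables the interannual-variability analysis the abstract calls for.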
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Assessing spectral-functional diversity relationships through scales in a monoculture experiment

Authors: Javier Pacheco-Labrador, M. Pilar Martín, Rosario Gonzalez-Cascon, Vicente Burchard-Levine, Lucía Casillas, Victor Rolo, David Riaño
Affiliations: Enviromental Remote Sensing and Spectroscopy Laboratory (SpecLab), Spanish National Research Council (CSIC), National Institute for Agriculture and Food Research and Technology (INIA), Spanish National Research Council (CSIC), Tec4AGRO. Institute of Agricultural Sciences (ICA). Spanish National Research Council (CSIC), Forest Research Group, INDEHESA, University of Extremadura
Grasslands and grassland-dominated ecosystems, such as tree-grass ecosystems, play a fundamental role in the global carbon balance and the subsistence of populations in vulnerable regions. Protecting the entire range of grasslands’ ecosystem services requires assessing the influence of management and environmental drivers on these services and the role of biodiversity in their provision. While remote sensing offers tools to help monitor and better understand plant properties, functions, and diversity in grasslands, and therefore the relationships between diversity and ecosystem function, the small size of grass plants limits this potential. Still, remote sensing could offer vegetation diversity proxies to reveal diversity’s role in ecosystem functions and services, even if individuals cannot be distinguished. In this study, a monoculture experiment was implemented with 7 herbaceous species, including C3 and C4 grasses, legumes and forbs typical of Mediterranean grasslands, to assess the capacity of hyperspectral data to detect intra- and inter-specific differences in the foliar functional traits of pasture species at different phenological stages, and their plastic responses to water shortage. The experiment included 42 plots (1.5 x 1.5 m), with six replicates of each species, organized in two blocks. Water regimes were manipulated to simulate typical versus water-stress conditions. We assess how the relationships between spectral and plant functional diversity vary at different spatial scales in a monoculture experiment. Leaf, canopy, and drone spectral measurements are combined with laboratory estimates of plant functional traits. We assess the capability of different spectral measurements to decipher the inter- and intraspecific variability of the different species’ functional diversity and which functional traits dominate the relationships at each scale. The study includes the temporal dimension, as the experimental plots are measured across their phenological development. 
Results revealed clear effects of phenology on spectral diversity and a significant role of leaf water content (LWC) in the spectral-functional relationships, suggesting that SWIR hyperspectral sensors could contribute to characterizing diversity in grasslands.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Mapping and monitoring of natural and artificial floating materials in aquatic environments using PRISMA data

Authors: Erika Piaser, Dr.ssa Federica Braga, Mariano Bresciani, Dr.ssa Claudia Giardino, Dr. Paolo Villa
Affiliations: National Research Council (CNR), Institute for Electromagnetic Sensing of the Environment (IREA), Politecnico di Milano, National Research Council (CNR), Institute of Marine Sciences (ISMAR), National Biodiversity Future Center (NBFC)
The potential of hyperspectral data for the detection and differentiation of various floating materials, such as macrophytes and oil spills, has traditionally relied on proximal and airborne data due to the lack of suitable spaceborne platforms. The launch of the PRISMA mission by the Italian Space Agency (ASI) has made it possible to apply hyperspectral techniques to satellite data in practical scenarios. Within the PANDA-WATER project, we have developed and tested applications using PRISMA data to monitor various natural and artificial floating materials, focusing on three key products: algae scum and floating macrophyte cover, macrophyte status and oil spills. The first product uses specific spectral features of surface reflectance bands in the VNIR and SWIR as input to machine learning classification models. These models, optimised for distinguishing algae scum from floating vegetation, have achieved accuracies higher than 99% across multiple geographic regions, including Europe, Africa, Asia and the Americas. The second product uses narrowband and broadband spectral indices as proxies to generate continuous maps of macrophyte functional traits at both canopy (e.g. fractional cover, leaf area index) and pseudo-leaf (e.g. pigment content, leaf mass per area) scales. This mapping of macrophyte status was carried out on plant communities in northern Italian lakes, covering different community types, species and phenological stages. The third product focuses on oil spill detection by extracting spectral features sensitive to the presence and abundance of oil on water surfaces. These features, derived from reflectance spectra in the VNIR and SWIR regions, allow oil and water to be distinguished through optimised thresholding techniques. 
The oil spill detection methodology was validated using PRISMA-like data resampled from AVIRIS flights over the 2010 Deepwater Horizon oil spill in the Gulf of Mexico and tested on PRISMA data from the 2023 Oriental Mindoro oil spill in the Philippines. These products demonstrate the potential of hyperspectral satellite data for monitoring aquatic environments, offering high accuracy and applicability across scales and environments.
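The thresholding idea behind the oil product can be sketched minimally in Python. This is an illustrative toy, not the authors' actual method: the band values, the SWIR-minus-VNIR difference feature, and the 0.05 threshold are all assumptions for demonstration; in practice the threshold would be optimised against reference data.

```python
import numpy as np

# Illustrative sketch (not the authors' method): separating oil-covered
# from clean-water pixels by thresholding a hypothetical spectral feature.
def oil_mask(reflectance_vnir, reflectance_swir, threshold=0.05):
    """Flag pixels whose SWIR-minus-VNIR difference exceeds a tuned threshold.

    Oil films tend to raise SWIR reflectance relative to clean water, so a
    simple difference feature can separate the two classes once the
    threshold has been optimised against reference observations.
    """
    feature = reflectance_swir - reflectance_vnir
    return feature > threshold

# Synthetic example: water pixels have low SWIR, oil pixels higher SWIR.
vnir = np.array([0.02, 0.02, 0.03, 0.02])
swir = np.array([0.01, 0.12, 0.02, 0.15])  # pixels 1 and 3 simulate oil
print(oil_mask(vnir, swir).tolist())  # [False, True, False, True]
```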
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Investigating the Impact of Atmospheric Correction on PLSR-Based Vegetation Trait Retrieval

Authors: Christiana Ade
Affiliations: NASA JPL
Mapping vegetation traits is critical for understanding ecosystem processes, informing conservation strategies, and assessing environmental changes on a global scale. As we strive to develop universally applicable trait retrieval algorithms, it is vital to investigate how atmospheric correction influences Partial Least Squares Regression (PLSR)-based trait maps, particularly given the diverse surface reflectance retrieval methods employed by different agencies. Using airborne flight data from NEON over Colorado, we conducted a cross-atmospheric-correction trait map comparison. Three PLSR models were trained on images processed with different atmospheric corrections: ACORN, ISOFIT, and ATCOR. Each PLSR model was then applied to three separate images, each processed with one of the atmospheric corrections. We focused on retrieving three key vegetation traits—foliar nitrogen (N), leaf mass per area (LMA), and leaf water content (LWC)—which are of high ecological interest and have been demonstrated to perform well across platforms. Results reveal significant variability in PLSR trait maps, indicating that a globally applicable PLSR model cannot accommodate imagery processed with differing atmospheric corrections without adjustment. This suggests that cross-agency PLSR models will be challenging to standardize. However, encouragingly, some traits exhibit high consistency when models are applied to images processed with the same atmospheric correction used during model training. This finding underscores the potential for cross-agency collaboration through Level 3 products or by recalibrating trait models to specific atmospheric corrections using shared image locations and field data. These insights highlight the importance of considering atmospheric correction in trait retrieval workflows and provide a pathway for supporting global vegetation trait mapping efforts.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Developing a Data Cube for Biodiversity and Carbon Dynamics Assessment in Estonia with Remote Sensing data

Authors: Oleksandr Borysenko, Jan Pisek, Mirjam Uusõue, Alexander Kmoch, Holger Virro, Wai Tik Chan, Eveli Sisas, Ats Remmelg, Marta Jemeljanova, Evelyn Uuemaa
Affiliations: University Of Tartu
We are constructing a comprehensive data cube at the national level for Estonia, leveraging remote sensing and geospatial data to advance biodiversity (BD) and carbon (C) dynamics research. The full potential of fusing active (LiDAR, radar) and passive remote sensing has not yet been fully exploited. Moreover, multi-temporal (seasonal) feature sets, consisting of numerous combinations of spectral bands, hold the potential to predict compositional vegetation classes. The combination of these approaches is promising for applications in biodiversity mapping and modelling. Remote sensing data, including Sentinel-1, Sentinel-2, Landsat, and high-resolution airborne LiDAR, will be sourced from open repositories (e.g., Copernicus Open Access Hub). Vegetation indices and other derivatives will be calculated and integrated into the data cube. To capture the multi-temporal aspect, we will use principal component analysis. Our framework organizes analysis-ready remote sensing data in a data cube at the national level, enabling efficient retrieval, storage, and extraction of spatial and temporal extents from input and project-generated datasets. The data cube includes tiled and hierarchical variables at multiple resolutions. Complementary vector data, such as experimental site and habitat information, will also be included. Sentinel-2 spectral diversity indices are explored as proxies for biodiversity, providing an initial evaluation of the spectral diversity hypothesis under Estonian conditions. Biodiversity assessment employs spectral species concepts and k-means clustering to analyze gridded remote sensing data, producing 2D α- and β-diversity heterogeneity maps. We employ monthly and seasonal composite Sentinel-2 images processed in Google Earth Engine using the Cloud Score+ S2_HARMONIZED dataset.
This dataset is produced from the harmonized Sentinel-2 L1C collection, enabling the identification of relatively clear pixels and the effective removal of clouds and cloud shadows. Diversity indices are calculated using the biodivMapR library, enabling robust and reproducible biodiversity assessments. Additionally, high-resolution (10 m) ecological descriptors of vegetation and terrain are generated using airborne laser scanning point data from the Estonian Land Board. We demonstrate and discuss the potential as well as the limitations of novel spectral species indices and multitemporal frameworks for the automatic mapping of vegetation types in complex landscapes. Explainable artificial intelligence (XAI) models (i.e. Random Forest interpreted with Shapley values) will be trained using Scikit-learn on these harmonized datasets to model biodiversity and carbon stocks/emissions at the national level. This scalable approach has the potential to enhance environmental monitoring and inform sustainable land management strategies across Estonia and similar regions.
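The spectral-species idea behind the diversity mapping can be sketched as follows. This is an illustrative toy, not the project's biodivMapR workflow: the random "pixel" spectra, the choice of 8 spectral species, and the tile layout are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch: assign pixels to "spectral species" with k-means,
# then compute Shannon (alpha) diversity of species counts per tile.
def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
pixels = rng.random((400, 6))  # 400 pixels x 6 spectral bands (synthetic)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pixels)

# Treat each block of 100 pixels as one spatial tile; each tile's Shannon
# index over spectral-species counts is its alpha-diversity estimate.
for tile in labels.reshape(4, 100):
    counts = np.bincount(tile, minlength=8)
    print(round(shannon(counts), 3))
```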
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Mobilizing Animal Movement Data to Make Better Maps of Functional Fragmentation in African Savannas

Authors: Lorena Benitez, Dr Jared Stabach, Dr Kate Parr, Dr Mahesh Sankaran, Dr Casey Ryan
Affiliations: University Of Edinburgh, Smithsonian Conservation Biology Institute, University of Liverpool, University of Pretoria, University of the Witwatersrand, National Centre for Biological Sciences, Tata Institute of Fundamental Research
Field data are currently underutilized by remote sensing scientists mapping fragmentation. Traditional assessments of fragmentation, such as the landscape division index, rely on distinguishing differences in vegetation structure. This works for forests, but in naturally patchy, disturbance-driven ecosystems like savannas where there is a mix of high and low biomass areas, fragmentation is difficult to detect in this way. Instead of relying on vegetation structure to delineate fragmentation in savannas, we suggest incorporating measures of landscape functionality, specifically connectivity, into fragmentation assessments. A wealth of potential field data exists which can be used to calibrate and validate functional fragmentation models and associated maps. In particular, large volumes of animal movement data exist that can be used as ‘labels’ to train models of functional fragmentation. Animal-borne tags produce thousands of points per individual and provide detailed spatiotemporal information regarding landscape connectivity. Data on animals provides many opportunities for adding functional context to fragmentation maps, since animals often strongly influence vegetation structure and ecosystem function. Additionally, data regarding animal dispersal may be a good way to test how specific landscape features influence landscape connectivity (e.g. impact of different types of roads) or when fragmentation is occurring without vegetation change (e.g. avoidance of humans). In this study, we tested how well fragmentation maps made with and without animal movement data compare within the Greater Maasai Mara Ecosystem, Kenya (34°40′ E, 1°00′ S to 35°50′ E, 1°80′ S). We used global land cover products (ESA CCI, ESA WorldCover, and GLAD Global Land Cover) to quantify fragmentation by anthropogenic land covers. We also created our own site-level land cover maps using unsupervised classification with Landsat-8 and ALOS Palsar-2 imagery. 
All land cover maps were simplified into binary maps (anthropogenic/natural) at 100 m resolution for comparative purposes. Finally, we used machine learning to make functional fragmentation maps using data from GPS-collared wildebeest (n=15, 50,000 points) and lions (n=8, 68,000 points). Our models were spatially validated by bisecting the study region, with training and testing occurring on the western half and validation on the eastern half. We found that the global land cover products varied widely regarding the degree of fragmentation of the Mara Ecosystem. Habitat area also varied widely with maps indicating as little as 20% or as great as 96% of the area is ‘natural’ habitat. The landscape division index (LDI) ranged from 0.06 for WorldCover to 0.55 for CCI reflecting a large difference in classification of natural habitat. Our site-level land cover classification had a higher landscape division index of 0.66. Conversely, the fragmentation maps based on animal movement data resulted in an LDI of 0.40, but with much less habitat area indicated compared to land cover methods. The high level of disagreement between maps highlights how little we know about land cover and fragmentation in savanna ecosystems. This is especially problematic as savannas cover the largest land areas in the tropics and are expected to undergo further pressure from land use and climate change. Developing better methods for mapping savannas and their connectivity is vital to determining how best to conserve these important ecosystems.
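The landscape division index compared above is conventionally defined from patch areas as D = 1 − Σ(aᵢ/A)², where aᵢ are patch areas and A the total landscape area (Jaeger's degree of landscape division). A hedged toy computation, with invented patch areas, looks like:

```python
import numpy as np

# Landscape division index: D = 1 - sum((a_i / A)^2).
# Patch areas below are invented purely for illustration.
def landscape_division_index(patch_areas):
    a = np.asarray(patch_areas, dtype=float)
    shares = a / a.sum()
    return float(1.0 - (shares ** 2).sum())

# One dominant habitat patch -> low division; many equal patches -> high.
print(landscape_division_index([95, 5]))           # ~0.095
print(landscape_division_index([25, 25, 25, 25]))  # 0.75
```

This makes the reported spread (0.06 for WorldCover versus 0.55 for CCI) concrete: the index is driven entirely by how each product carves habitat into patches.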
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Back to Black - Harnessing the Spatial Resolution of SDGSAT-1 for Biodiversity Monitoring

Authors: Dominique Weber, PD Dr. Janine Bolliger, Dr. Klaus Ecker, Dr. Claude Fischer, Christian Ginzler, Prof. Dr. Martin M. Gossner, Laurent Huber, Dr. Martin K. Obrist, Dr. Florian Zellweger, Prof. Dr. Noam Levin
Affiliations: Swiss Federal Institute for Forest Snow and Landscape Research WSL, Geneva School of Engineering, Architecture and Landscape – HEPIA, University of Applied Sciences and Arts of Western Switzerland, Department of Environmental Systems Science, Institute of Terrestrial Ecosystems, ETH Zurich, Department of Geography, The Hebrew University of Jerusalem, Earth Observation Research Center, School of Earth and Environmental Sciences, University of Queensland
The rapid increase of light pollution in recent decades has become a global threat to biodiversity at all levels. Artificial light at night (ALAN) affects organisms in terrestrial and aquatic ecosystems in various, often detrimental ways. By disrupting circadian rhythms, ALAN can lead to disturbed physiological processes, but also to altered behaviour of nocturnal species, reducing their foraging ability and increasing predation risk. Despite the mounting evidence of the negative ecological impacts of ALAN, we still lack suitable tools and capabilities for assessing and monitoring ALAN at ecologically relevant scales. Recently, data from a multispectral sensor on board the Chinese Sustainable Development Goals Science Satellite 1 (SDGSAT-1) have become available. These data provide a great improvement in spatial resolution and spectral detail and thus open up new perspectives for ecology and conservation. We review the current contribution of night-time satellites to ecological applications and discuss the potential value of the Glimmer sensor onboard SDGSAT-1 for quantifying ALAN. Due to their coarse spatial resolution and panchromatic nature, currently used data from the DMSP/OLS and VIIRS/DNB space-borne sensors are of limited use for assessing local light pollution and the ecological and conservation-relevant effects of ALAN. SDGSAT-1 now offers new opportunities to map the variability of light intensity and spectra at fine spatial resolutions, providing the means to identify and characterise different sources of ALAN, and to relate ALAN to local parameters and in situ measurements. We demonstrate some key ecological applications of SDGSAT-1, such as assessing habitat quality of protected areas, evaluating wildlife corridors and dark refuges in urban areas, and modelling the visibility of light sources to animals.
Monitoring ALAN at 10–40 m spatial resolution enables scientists to better understand the origins and impacts of light pollution on sensitive species and ecosystems, and it assists practitioners in implementing local conservation measures. Our study thus provides new perspectives for sound ecological impact assessment of ALAN and conservation management using space-borne remote sensing. We conclude that SDGSAT-1, and possibly similar future satellite missions, will significantly advance ecological light pollution research to better understand the environmental impacts of light pollution and to devise strategies to mitigate them. However, to boost the use of SDGSAT-1 Glimmer data for science and practice, further research, solving data quality and accessibility issues and ensuring the continuation of the mission are essential. The combination with other remote sensors and in situ measurements is essential to (1) understand and quantify ALAN data delivered by satellites and (2) advance conclusive ecological impact assessment and monitoring of ALAN, for example by upscaling from photometers to UAVs and satellites.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Phytoplankton Community Assessment Using Optical Data in the Shallow Eutrophic Lake Võrtsjärv

Authors: Kersti Kangro, Mr. Ian-Andreas Rahn, Mrs. Kai Piirsoo, Mr. Rene Freiberg, Mr. Joel Kuusk, Krista Alikas
Affiliations: Tartu Observatory, University of Tartu, Estonian University of Life Sciences
Lake Võrtsjärv, with a surface area of 270 km² and a maximum depth of 6 meters, is the second largest lake in Estonia. It is a turbid and eutrophic lake, where chlorophyll a (Chl a) concentration varies between 3.4–72.2 mg/m³ (median 40.4), total suspended matter between 1.6–52.7 g/m³ (median 16), and absorption by coloured dissolved organic matter at 442 nm between 1.9–8.9 m⁻¹ (median 2.9). The variability and changes in in-water parameters can be retrieved from both the Sentinel-3/Ocean and Land Colour Instrument (OLCI) and the Sentinel-2/MultiSpectral Instrument (MSI). Phytoplankton, which forms the basis of all aquatic food chains, is crucial for assessing water quality and eutrophication processes. Additionally, phytoplankton plays a significant role in carbon fixation and the carbon cycle in water bodies. The composition of algal pigments indicates the structure of the phytoplankton community and can be estimated through optical observations. Retrieving different types of pigments from phytoplankton absorption is essential for developing further applications. Besides algal pigment composition, microscopic estimation remains relevant as it provides insights into the phytoplankton community at the species level and helps identify potentially toxic genera. In L. Võrtsjärv, the phytoplankton community is primarily dominated by shade-tolerant cyanobacteria species (Limnothrix planktonica and L. redekei), with diatoms appearing in spring and autumn. Phytoplankton biomass and Chl a tend to increase linearly from spring to autumn. The in-water parameters in L. Võrtsjärv are heavily influenced by water levels, which can fluctuate by up to 1.5 meters. Võrtsjärv is a well-studied lake, with continuous research of the phytoplankton community dating back to the 1960s. Phytoplankton absorption measurements are more recent, being made once a month since 2014, but here we focus on the last two years.
Initially, the optical water type classification of Uudeberg (2020) was applied, which considers features in the reflectance spectra and classifies waters into five optical water types (Clear, Brown, Moderate, Turbid, and Very Turbid). The optical water type for a specific date can be retrieved from HYPSTAR® reflectance data. HYPSTAR is a Hyperspectral Pointable System for Terrestrial and Aquatic Radiometry, providing automated, in-situ multi-angular reflectance measurements of land and water targets, covering the 380–1020 nm spectral range at 3 nm spectral resolution (Kuusk et al., 2024). It has been measuring at the pier of L. Võrtsjärv for two vegetation periods, starting from 2023. Previously, models based on Chl a concentration, Gaussian decomposition, and an inversion model relying on PCA (similar to Zhang et al., 2021) were developed using in situ measured absorption spectra and high-performance liquid chromatography pigment data from 30 small Estonian lakes, with measurements taken three times during the vegetation period. These models will be applied to L. Võrtsjärv data to detect dominant algal groups. These data will serve later as input for models derived during the AQUATIME project. The AQUATIME project aims to enhance applications for Ecosystem and Biodiversity Monitoring, Inland Water Management, and Coastal Management, focusing on novel possibilities for phytoplankton monitoring. The hyperspectral capabilities of ESA's planned CHIME mission will provide more detailed information about phytoplankton parameters and allow for more specific products compared to Sentinel-3/OLCI.
References:
Kuusk J., Corizzi A., Doxaran D., Duong K., Flight K., Kivastik J., Laizans K., Leymarie E., Muru S., Penkerc'h C., Ruddick K. 2024. HYPSTAR: a hyperspectral pointable system for terrestrial and aquatic radiometry. Frontiers Remote Sens. 5. https://doi.org/10.3389/frsen.2024.1347507
Uudeberg, K. 2020. Optical Water Type Guided Approach to Estimate Water Quality in Inland and Coastal Waters. Dissertationes physicae Universitatis Tartuensis, 124. 67 pp. https://dspace.ut.ee/handle/10062/67338
Zhang, Y., Wang, G., Sathyendranath, S., Xu, W., Xiao, Y., Jiang, L. 2021. Retrieval of Phytoplankton Pigment Composition from Their In Vivo Absorption Spectra. Remote Sens. 13, 5112. https://doi.org/10.3390/rs13245112
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: BIOMONDO - Towards Earth Observation supported monitoring of freshwater biodiversity

Authors: Petra Philipson, Carsten Brockmann, Miguel Dionisio Pires, Marieke Eleveld, Niklas Hahn, Jelle Lever, Daniel Odermatt, Aafke Schipper, Jorrit Scholze, Kerstin Stelzer, Susanne Thulin, Tineke Troost
Affiliations: Brockmann Geomatics Sweden AB, Brockmann Consult GmbH, Deltares, PBL Netherlands Environmental Assessment Agency, Eawag, Swiss Federal Institute of Aquatic Science and Technology
The European Space Agency activity called Biodiversity+ Precursors is a contribution to the joint ESA and European Commission Flagship Action on Biodiversity and Vulnerable Ecosystems, launched in February 2020 to advance Earth System Science and its response to the global challenges that society is facing. The Precursor BIOMONDO focused on biodiversity in freshwater ecosystems. The project developments were based on an analysis of the major knowledge gaps and science questions on biodiversity and vulnerable ecosystems, an assessment of how recent and future Earth Observation systems can help address these scientific challenges in biodiversity knowledge, and a demonstration of Earth System Science approaches in a number of pilot studies called Earth System Science Pilots for Biodiversity. The project concluded with the development of a Science Agenda and a scientific roadmap, serving as a basis for the implementation phase of the EC-ESA actions to further increase global Earth Observation supported monitoring of biodiversity. Based on an in-depth analysis of the relevant sources for scientific and policy priorities, the main knowledge gaps and challenges in freshwater biodiversity monitoring were identified. The findings were used in BIOMONDO to develop three pilot studies that integrate Earth Observation data and biodiversity modelling using advanced data science and information and communications technology. Each pilot addressed objectives and knowledge gaps corresponding to one of the following drivers of global environmental change in freshwater ecosystems: pollution and nutrient enrichment (Pilot 1), climate change (Pilot 2), and habitat change (Pilot 3). More specifically, in Pilot 1 we have explored the opportunity to upgrade ecosystem modelling by integrating EO data into Delft3D, a 3D modelling suite used to investigate hydrodynamics, sediment transport and morphology, and water quality in fluvial, estuarine and coastal environments.
In Pilot 2 we explored the use of Earth Observation based water temperature to quantify the impacts of temperature increases and heat waves on freshwater fish diversity. In this pilot we used a novel phylogenetic heat tolerance model, created by PBL as part of the GLOBIO model suite, which estimates the thermal tolerance of freshwater fish species. In Pilot 3 we combined Earth Observation data with the modelled degree of geographic range fragmentation, expressed as a connectivity index, for monitoring and assessing the impact of dam construction and removal on biodiversity, including the effects on habitat fragmentation and water quality. The pilot studies were implemented and validated for selected sites to showcase their applicability and impact for science and policy. The generated products constitute the so-called BIOMONDO Experimental Dataset, and the results have been presented, assessed, and discussed with external stakeholders. The Experimental Datasets were gathered in the BIOMONDO Freshwater Laboratory. Central to this Lab is the federation of all data on common grids (a data cube). Analysis and processing functions for state-of-the-art methods, as well as visualisation interfaces and export functions, are available in the Lab. The external stakeholders were given access to the novel Earth Observation products and model results through the Lab and supported the validation and evaluation of the scientific impact and policy benefit of the developments.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Seasonal Patterns of Local and Regional Plant Biodiversity Observed from Hyperspectral Airborne Imagery

Authors: Elisa Van Cleemput, Kristen Lewers, Benjamim Poulter, Bryce Currey, Peter Adler, Katharine Suding, Laura Dee
Affiliations: Leiden University College, University of Colorado Boulder, NASA Goddard Space Flight Center, Montana State University, Utah State University
Mapping and monitoring biodiversity are essential activities that form the foundation of effective biological conservation efforts. Hyperspectral sensors on airborne and spaceborne platforms show great promise to contribute to this endeavor. Indeed, because of their high spectral resolution, hyperspectral data have been uniquely capable of characterizing various aspects of terrestrial biodiversity, including morphological, biochemical and phenological vegetation properties. The premise of hyperspectral sensors is that they can not only be used to visualize spatial biodiversity patterns, but also allow researchers and practitioners to study biodiversity changes over time, including tracking progress towards restoration goals and detecting responses to disturbance and stress. To effectively support these applications, it is essential to consider their required spatial and temporal resolutions, as different measurement platforms involve significant trade-offs among spectral, spatial, and temporal capabilities. Local diversity is not necessarily correlated with regional diversity, and the two may therefore play different roles in supporting ecosystem functions; phenology may impact both local and regional biodiversity. In this study, we employed airborne imagery of the Surface Biology and Geology High-Frequency Time Series (SHIFT) campaign to explore the scale- and time-dependency of (spectral) diversity. Using the AVIRIS-NG (Airborne Visible/Infrared Imaging Spectrometer-Next Generation) instrument, the SHIFT campaign collected hyperspectral imagery over the Jack and Laura Dangermond Preserve (JLDP) on an approximately weekly basis from late February to late May with a spatial resolution of ~5 m. The JLDP hosts various ecosystems; we focused on the Coast Live Oak Woodlands ecosystem, which consists of relatively open savannas and dense closed-canopy old-growth forests.
We hypothesized that this ecosystem had relatively lower local (alpha) spectral diversity (e.g., relatively homogeneous plain grasslands) compared to regional (beta) diversity (i.e., turnover from more grassy to more woody vegetation). Additionally, as grasses senesce over the season, we hypothesized that the contribution of spectral alpha-diversity to total diversity would decrease (e.g., it becomes harder to distinguish species in a senesced grassland), and that of spectral beta-diversity would increase (larger difference between senesced grassland and evergreen forested locations in the landscape). To partition spectral diversity into local (alpha) and regional (beta) diversity, we used the approach developed by Laliberté et al. (2020), where alpha-diversity was calculated as the spectral dissimilarity between all pixels in a 30 m x 30 m community, and beta-diversity was calculated as the spectral dissimilarity between those communities. The community size was chosen to correspond with the typical spatial resolution of hyperspectral satellite sensors. We applied this algorithm to all images across the season to obtain a time series of spectral diversity information. In line with our expectations, beta-diversity indeed contributed more to overall diversity than alpha-diversity: across the season, beta-diversity was responsible for 65-75% of the total diversity. As the season progressed this contribution decreased, as hypothesized. We observed a peak in spectral alpha-diversity in the beginning of March, which corresponds with peak flowering. The observation that beta-diversity contributed more to overall diversity than alpha-diversity in this ecosystem is promising for the use of hyperspectral satellite sensors that measure Earth's surface with a spatial resolution similar to the size of the communities in this study.
Important seasonal biodiversity patterns may however not be picked up by hyperspectral satellite sensors, as they typically have longer revisit times. Follow-up research needs to clarify whether similar patterns are present in other ecosystem types (e.g., grasslands and chaparral in the JLDP). In conclusion, this study sheds light on expectations and measurement objectives of current and future spectroscopy missions.
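The alpha/beta partition used in the study can be sketched via the classical sum-of-squares decomposition, a simplified reading of the Laliberté et al. (2020) approach; the synthetic communities below are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch: total spectral sum of squares splits exactly into a
# within-community (alpha) and a between-community (beta) component.
def partition_spectral_diversity(communities):
    """communities: list of (n_pixels x n_bands) arrays, one per community."""
    all_pixels = np.vstack(communities)
    grand_mean = all_pixels.mean(axis=0)
    ss_total = ((all_pixels - grand_mean) ** 2).sum()
    # Alpha: pixel dissimilarity around each community's own mean spectrum.
    ss_alpha = sum(((c - c.mean(axis=0)) ** 2).sum() for c in communities)
    # Beta: dissimilarity of community mean spectra around the grand mean.
    ss_beta = sum(len(c) * ((c.mean(axis=0) - grand_mean) ** 2).sum()
                  for c in communities)
    return ss_alpha, ss_beta, ss_total

rng = np.random.default_rng(2)
comms = [rng.normal(loc=i, size=(30, 5)) for i in range(3)]  # 3 synthetic communities
a, b, t = partition_spectral_diversity(comms)
print(round(b / t, 2))  # beta's fractional contribution to total diversity
```

By the law of total variance, ss_alpha + ss_beta equals ss_total, which is what lets the study report beta-diversity as a percentage (65-75%) of total diversity.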
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Semi-supervised object-based classification of coastal dune vegetation covers in the SW Spain using Sentinel-2 imagery

Authors: Diego López-Nieta, Víctor F. Rodríguez-Galiano, Emilia Guisado-Pintado, Eva Romero-Chaves
Affiliations: Department of Physical Geography and Regional Geographic Analysis, Faculty of Geography and History, University of Sevilla (Spain)
Coastal dune ecosystems play a crucial role in climate change adaptation, acting as a natural barrier against flooding and serving as reservoirs of biodiversity. Lately, human activities have threatened their integrity, jeopardising this important function. Therefore, monitoring changes in coastal dunes has become an essential task in managing coastal areas. In this context, the use of satellite imagery and machine learning algorithms provides a powerful tool to delineate, differentiate and accurately map these ecosystems. This contribution is focused on a preliminary study to assess the feasibility of a semi-supervised object-oriented classification methodology using high-resolution satellite imagery from the Sentinel-2 mission and machine learning algorithms to map four coastal dune systems in SW Spain (Zahara de los Atunes, Bolonia, Valdevaqueros and Los Lances). Sentinel-2 was chosen because of its easy accessibility, high revisit frequency (5 days) and adequate spatial resolution (10 m). All this makes this Copernicus mission a valuable source of information for large-scale monitoring studies. However, most previous studies have been carried out using very high-resolution imagery, making it difficult to adequately monitor these ecosystems over time. Additionally, this work considers objects or segments instead of pixels, incorporating the spatial context and providing new insights into the study of dune ecosystems as complex landscapes. The dataset included a 2017 Sentinel-2 annual composite of the VIS, NIR and SWIR bands, resampled to 10 m, seasonal NDVI composites from the same year, and texture variables derived from the Sentinel-2 annual composite. The methodology followed a semi-supervised object-oriented approach for mapping coastal dune environments, aiming to establish a basis for long-term monitoring of these ecosystems.
The Multiresolution Segmentation (MRS) algorithm was used to group pixels representing homogeneous territorial units, allowing a more accurate representation of the geospatial characteristics of these ecosystems. To achieve an optimal segmentation, the algorithm's parameters were fine-tuned using the ESP2 algorithm developed by Drăguţ et al. (2014). Sentinel-2 bands were used as input for the MRS process. Seventy-five percent of the resulting segments/objects served as training data for a supervised classification, while 25% were reserved as an independent test. The latter subset was labelled using photointerpretation techniques from digital aerial orthophotographs. Training data were labelled using the automatic K-means clustering method, grouping the segments into classes (environments), determining the optimal number of groups by applying the Elbow method. Seasonal NDVI composites and the SWIR band were used as inputs for the K-means algorithm. The spectral distance of each segment with respect to the centroid of the nearest class was calculated, grouping segments into different training subsets based on distance percentiles. The representativeness of these training subsets was evaluated using the Random Forest (RF) algorithm, considering the trade-off between accuracy and generalisability of the model for each class, since coastal dune environments are diverse, highly complex and dynamic systems. The best-performing combination of training subsets was then used as the final training set for the RF classification. Finally, the results were validated by applying the model to the photo-interpreted independent test set. The variables used for the RF-supervised models included the seasonal composites of NDVI, SWIR band and texture variables. 
MRS and K-means results allowed the identification of four environments: areas with sparse or no vegetation (ASV), areas with herbaceous vegetation (AHV), areas with arboreal vegetation (AWV), and areas with mixed vegetation (combination of the above; AMV). The optimal training data were set at spectral distance thresholds of the 30th percentile for ASV, the 70th for AWV, and the 100th for AHV and AMV. The final model achieved a strong overall accuracy of 0.86 and good agreement between predictions and reference data, with a Kappa coefficient of 0.8. These results show the effectiveness of a semi-supervised method, achieving a stratification of the territory that facilitates the subsequent classification of coastal dune environments. Furthermore, this approach employs an operational sensor with a high revisit frequency (every 5 days) and an adequate spatial resolution. Consequently, the next step will be to replicate this methodology across other coastal dune systems to ensure its applicability and effectiveness in future coastal monitoring studies.
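The semi-supervised chain (unsupervised labelling, spectral-distance percentile filtering, then a supervised Random Forest) can be sketched as follows. This is an illustrative toy, not the study's ESP2/MRS workflow: the segment features, the cluster count, and the single 70th-percentile cut are assumptions for demonstration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch: label segments with k-means, keep only segments
# spectrally close to their cluster centroid, train a Random Forest on them.
rng = np.random.default_rng(3)
segments = rng.random((300, 4))  # synthetic segment-level features
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(segments)
labels = km.labels_

# Spectral distance of each segment to its assigned cluster centroid.
dist = np.linalg.norm(segments - km.cluster_centers_[labels], axis=1)

# Keep the most representative segments (here: below the 70th percentile);
# the study tuned this cut per class rather than globally.
keep = dist <= np.percentile(dist, 70)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(segments[keep], labels[keep])
print(rf.score(segments, labels))  # agreement with the k-means labelling
```

Filtering by distance percentile trades training-set size for label purity: segments far from their centroid are the ones most likely to be mislabelled by the unsupervised step.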
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: From Space to Land: exploiting satellite-derived water quality variables for climate studies

Authors: Monica Pinardi, Rossana Caroni, Anna Joelle Greife, Mariano Bresciani, Claudia Giardino, Laura Carrea, Xiaohan Liu, Stefan Simis, Clement Albergel
Affiliations: Institute for Electromagnetic Sensing of the Environment, National Research Council, CNR - IREA, Department of Meteorology, University of Reading, Reading, Plymouth Marine Laboratory, European Space Agency Climate Office, NBFC, National Biodiversity Future Center, Department of Environmental Science and Policy, University of Milan
Despite making up less than 1% of the world's water area, lakes are an important resource that provide drinking water, biodiversity, and recreational opportunities, all of which are tied to sustainable development goals. Increasing urbanisation and population growth have led to eutrophication, hydrological changes and loss of ecosystem services. Invasive species, land use changes and climate change are recognized as the main drivers of species loss in freshwater environments, where losses may be five times faster than in terrestrial environments. In the coming decades, climate change and global warming, in particular the increase in extreme weather events, are expected to have more widespread and significant impacts on biodiversity, species composition, hydrology, land cover and nutrient cycling. Monitoring such water bodies with in-situ data alone, and comprehending their complex behavioural changes on a global scale, is not feasible. Open-access satellite-derived data represent a way forward in understanding ecological processes and in assessing the impact of the main drivers of change on freshwaters. The Lakes_cci (Climate Change Initiative) project provides global and consistent satellite observations of lake-specific essential climate variables (ECVs): Lake Water Level and Extent, Surface Water Temperature, Ice Cover and Water-Leaving Reflectance (LWLR), which capture both the physical state of lakes and their biogeochemical response to physico-chemical and climatic forcing. With the release of version 2.1, the products cover the period 1992-2022 and provide daily data at 1 km resolution for over 2000 relatively large lakes. The project has explored multiple use cases that examine long-term time series of biophysical water quality parameters to understand possible causes of their trends, including the unique response of shallow lakes globally and the effects of heatwaves on lakes. 
In the first use case, we selected a globally distributed subset of shallow lakes (mean depth < 3 m; n=347) to investigate long-term trends (2002-2020) in chlorophyll-a (Chl-a) and turbidity derived from LWLR. Shallow lakes and wetlands form a major component of inland waters and provide many ecological services, being particularly important for carbon storage and biodiversity. Due to their large surface-to-volume ratio, they are vulnerable to environmental changes driven by nutrient and pollutant loads and are sensitive to climate change. According to the trend analysis, turbidity increased significantly in 60% of the shallow lakes and decreased in 17%, while Chl-a increased significantly in 45% of the lakes and decreased in 22%. Further investigation revealed that in most lakes turbidity (50%) and Chl-a (48%) increased simultaneously with lake surface water temperature (LSWT), suggesting an impact of climate warming on lake water quality. According to a structural equation model-based analysis of the interactions between climatic, socio-economic and water conditions, Chl-a and turbidity in most lakes increased with population and gross regional product. This finding suggests that human population growth in a lake’s catchment represents an important pressure on lake water quality. In the second use case, exploiting the high frequency and coverage of observations, we were able to gain insight into the response of lakes to sequential extreme weather events, such as the heat waves and monsoon rainfall events that occurred in India in 2019. Indian lakes are an excellent test subset across Lakes_cci variables, as monsoon dynamics require that lake turbidity, chlorophyll-a, LWL and climatic variables are considered together at seasonal and annual scales. 
We examined the water quality response using time series and TAM (Time Alignment Measurement) analysis, which measures the degree of synchrony with the heatwave event, followed by cluster analysis of Chl-a and turbidity patterns. The TAM analysis showed that the rainfall time series was closer in phase with air temperature and turbidity, but less so with Chl-a, indicating a driving influence of rainfall on turbidity, probably due to the strong influence of the monsoon. The available LWL data showed high variability over a short period of time. Cluster analysis revealed two main groups of turbidity patterns: northern lakes showed peaks most likely driven by spring snowmelt, while southern lakes were dominated by peaks driven by the summer monsoon. In contrast, Chl-a patterns were less related to hydro-morphology, and likely more influenced by local nutrient dynamics and changes in LWL, helping to focus further studies centred on individual lakes.
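The per-lake trend classification described above can be sketched with an ordinary least-squares significance test on a synthetic turbidity series; this is an illustrative stand-in, as the abstract does not specify the exact trend test used:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
years = np.arange(2002, 2021)
# Synthetic annual-mean turbidity for one lake: upward trend plus noise.
turbidity = 5.0 + 0.15 * (years - years[0]) + rng.normal(0, 0.3, size=years.size)

# Classify the lake as significantly increasing/decreasing via slope and p-value.
res = linregress(years, turbidity)
if res.pvalue < 0.05:
    trend = "increasing" if res.slope > 0 else "decreasing"
else:
    trend = "no significant trend"
```

Applied per lake, counting the resulting labels yields summary percentages like the 60%/17% turbidity figures quoted above.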
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: BioBalance: A Comprehensive Indicator to Quantify Anthropogenic Impacts on Biodiversity

Authors: Théau Degroote, Adrien Gavazzi, Fabien Castel
Affiliations: Murmuration
1. Introduction and Context
Human activities are among the most significant drivers of biodiversity loss, necessitating tools to quantify and mitigate their impact. While traditional indicators often rely on species surveys, their effectiveness is limited by inconsistent spatial and temporal coverage. To overcome these challenges, BioBalance leverages the GLOBIO methodology, a well-established framework for modeling biodiversity loss driven by human pressures. By combining this methodology with Earth Observation (EO) data, BioBalance offers a scalable, reliable, and science-based approach to assessing biodiversity integrity.
2. Presentation of BioBalance
BioBalance quantifies the human impact on biodiversity using the Mean Species Abundance (MSA) index, which measures biodiversity health on a scale from 0 (complete loss) to 1 (intact biodiversity). The indicator accounts for six major anthropogenic pressures:
1. Land use
2. Climate change
3. Nitrogen deposition
4. Human encroachment
5. Habitat fragmentation
6. Road disturbance
Each pressure assessed by BioBalance generates an individual MSA score, reflecting the impact of that specific pressure on biodiversity within a given area. To calculate the overall MSA for a site, the individual MSA values are combined multiplicatively. To ensure accuracy and scalability, BioBalance combines high-resolution datasets such as:
- CORINE Land Cover and ESA WorldCover for land use,
- ECMWF COPERNICUS for climate data,
- OpenStreetMap (OSM) for infrastructure mapping,
- the World Database on Protected Areas (WDPA) for conservation zones,
- EMEP/MSC-W modelled air concentrations for nitrogen deposition.
By combining these datasets, BioBalance delivers spatially explicit and temporally consistent assessments of biodiversity pressures. Crucially, the results are made accessible through interactive dashboards that present clear, actionable insights. Policymakers, environmental planners, and conservation organizations can use these dashboards to visualize MSA.
3. Methodology: Assessing Key Anthropogenic Pressures
3.1. Land Use
Land use is the most impactful driver of biodiversity loss. BioBalance categorizes land use into 13 types based on the GLOBIO framework, grouped into natural areas (e.g., forests) and human-dominated zones (e.g., croplands, urban areas). Land use is so dominant that if a region is classified as urban or agricultural, other pressures are not considered. Notably, one-third of Earth's terrestrial area is currently used as cropland or pastureland.
3.2. Climate Change
The impact of climate change on biodiversity is calculated from the increase in temperature and its effects across various biomes. Drawing on scientific studies, BioBalance determines the relationship between warming and biodiversity within each biome, ensuring a global yet biome-specific approach to assessing this pressure.
3.3. Nitrogen Deposition
Nitrogen deposition measures the excess nitrogen that surpasses the critical load, i.e. the ecosystem's capacity to absorb nitrogen without adverse effects. Critical loads are determined by the vegetation type specific to each biome. Observational data on nitrogen impacts are used to calculate the MSA for different ecosystems, ensuring that only nitrogen levels beyond the ecosystem's resilience threshold enter the pressure calculation.
3.4. Human Encroachment
Human encroachment reflects the impact of human activities (e.g., hunting, food and fuel collection, tourism) on animal biodiversity in areas that would otherwise remain natural. Encroachment is assumed to occur within 10 km of urban areas or croplands. Research indicates that encroachment within this zone affects only one-third of the animal and plant population. Simulations suggest that even a small urban or cropland proportion (1.5% within a 50x50 km grid cell) is sufficient to influence biodiversity across the entire grid cell.
3.5. Habitat Fragmentation
Habitat fragmentation is primarily caused by major roads (highways, primary, and secondary roads); smaller infrastructure types are considered to have negligible effects. By merging road maps with land use maps, BioBalance identifies the largest intact habitat patch. The size of this patch determines the MSA value, based on scientific measurements correlating patch size with biodiversity health.
3.6. Road Disturbance
Road disturbance quantifies the impact of infrastructure proximity on animal biodiversity. BioBalance distinguishes five road types (highways, primary, secondary, tertiary, residential) and also includes other infrastructure such as railways, power lines, and mines. The impact zone is defined as a 1 km radius around the infrastructure, in line with studies on mammals and birds. In protected areas, the impact is mitigated and the MSA is adjusted accordingly.
4. Objectives and Usefulness
BioBalance aims to empower decision-makers with a practical and user-friendly tool for biodiversity management. Through its interactive dashboards, BioBalance enables users to identify priority areas for conservation efforts and to analyze the most significant anthropogenic pressures on biodiversity. The dashboards make biodiversity data accessible, intuitive, and actionable. BioBalance also complements other indicators developed by the company, such as tools for air quality and vegetation health, offering a comprehensive approach to ecosystem management.
5. Deployment and Practical Applications
BioBalance has already been deployed across various regions and ecosystems, demonstrating its versatility and practical value. In Occitanie (France), the tool was used to analyze regional biodiversity pressures. In regional natural parks such as the Haut-Jura (France) and Peneda-Gerês (Portugal), BioBalance has been instrumental in identifying priority areas for biodiversity conservation. The indicator has also been applied in urban contexts across multiple cities in Turkey and Thailand, giving a better understanding of the impact of urban expansion and infrastructure development on biodiversity. By highlighting priority zones for conservation, BioBalance supports stakeholders in making informed decisions to mitigate biodiversity loss while balancing development needs. Its adaptability across scales and contexts underscores its potential as a key tool for environmental planning.
6. Conclusion
BioBalance bridges the gap between scientific biodiversity assessment and actionable policy implementation. Its robust methodology, combined with interactive dashboards, enables stakeholders to tackle biodiversity loss effectively. By supporting evidence-based conservation strategies, BioBalance contributes to the development of sustainable, targeted approaches for preserving global biodiversity.
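The multiplicative combination of per-pressure MSA scores, including the land-use override for urban or agricultural areas, can be sketched as follows; the function name, pressure keys and example values are illustrative and not part of BioBalance's actual interface:

```python
def overall_msa(pressures: dict, human_dominated: bool = False) -> float:
    """Combine per-pressure MSA scores (each in [0, 1]) multiplicatively.

    If the area is classified as urban or agricultural, only the land-use
    score applies, mirroring the override described in the methodology.
    """
    for name, value in pressures.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"MSA for {name} must be in [0, 1]")
    if human_dominated:
        return pressures["land_use"]
    result = 1.0
    for value in pressures.values():
        result *= value
    return result

# Hypothetical per-pressure scores for one site.
site = {"land_use": 0.9, "climate_change": 0.95, "nitrogen": 0.85,
        "encroachment": 0.9, "fragmentation": 0.8, "road_disturbance": 0.9}
```

Because the combination is a product, any single severe pressure (a score near 0) drags the overall MSA toward complete biodiversity loss.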
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Satellite Images for High-Resolution Species Distribution Models

Authors: Johannes Dollinger, Dr. Philipp Brun, Dr. Vivien Sainte Fare Garnot, Dr. Damien Robert, Dr. Lukas Drees, Professor Jan Dirk Wegner
Affiliations: Department of Mathematical Modeling and Machine Learning (DM3L), University of Zurich, Swiss Federal Research Institute WSL
Species distribution modeling (SDM) is concerned with creating species distribution maps to analyze the shifting distributions of species under changing environmental conditions. These maps support decision-makers in selecting areas to put under protected status, planning land use and tracking invasive species. What makes this task challenging is the great variety of factors controlling species distribution, such as habitat conditions, human intervention, competition, disturbances, and evolutionary history. Experts either incorporate these factors into complex mechanistic models based on presence-absence (PA) data collected in field campaigns or train machine learning models to learn the relationship between environmental data and presence-only (PO) species occurrence. Due to a sharp increase in available occurrence data from crowd-sourcing efforts, it has become viable to model thousands of species jointly. Using large amounts of crowd-sourced data comes with a trade-off in terms of bias, both spatially around inhabited areas and sampling-wise in favor of charismatic species. This work uses plant data at the European scale. Currently, these presence-only occurrences are heavily biased towards cities in western Europe, with a strong class imbalance ranging from 1 to 4500 observations per species. Satellite imagery is a unique modality that, thanks to its high resolution, can help deal with the noisy PO data. Most environmental modalities such as climatic, soil and human footprint data are restricted to a 1 km grid, while the publicly available Sentinel-2 images have a resolution of 10 meters. Sentinel-2 is particularly well suited to the task, with bands chosen to provide a plethora of information on flora. A 10-meter resolution is not enough to identify individual plants, but it contains information on small-scale local structures, such as agriculture, forests, and proximity to bodies of water and streets, that are not covered by other modalities. 
Modeling distributions from PO data using satellite images is promising, but still underexplored. Work on Spatial Implicit Neural Representations (SINR) has shown that jointly learning many species from PO data provides a useful embedding space that allows generalization to species with few samples, but the authors constrain themselves to mapping species distributions primarily from location alone. We discuss an extension of SINR with Sentinel-2 images, dubbed Sat-SINR, jointly modeling the spatial distributions of 5.6k plant species across Europe. This model achieves an improvement of up to 3 percentage points in micro-F1 and ROC-AUC compared to logistic regression and SINR. Additionally, the resulting maps show qualitative differences, such as recognizing viable habitats in under-sampled areas. We furthermore dive deeper into understanding the information that the model retrieves from satellite images.
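As an illustration of the micro-F1 metric reported above, the following toy multi-label evaluation (synthetic presence indicators, scikit-learn assumed) pools true and false positives across species before computing F1:

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy multi-label example: 4 locations x 3 species presence indicators.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 0, 0],
                   [0, 0, 1]])

# Micro-averaging pools TP/FP/FN over all species before computing F1,
# so each observation counts equally regardless of species frequency.
micro_f1 = f1_score(y_true, y_pred, average="micro")
```

With strong class imbalance (1 to 4500 observations per species), micro-averaging keeps rare species from dominating the score the way a macro average would.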
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: European Biodiversity Partnership (Biodiversa+) harmonizing trans-national long-term biodiversity monitoring

Authors: Petteri Vihervaara, Toke Hoye, Michele Bresadola, Aino Lipsanen, Iiris Kallajoki, Cécile Mandon, Julia Seeber, Jamie Alison, Helene Blasbichler, Risto Heikkinen, Sara Wiman, Gaëlle Legras, Pierre Thiriet, Mathieu Basille, Mona Naeslund, Michelle Silva del Pozo, Alberto Basset, Senem Onen Tarantini, Martina Pulieri, Lluís Brotons, Gloria Casabella, Magdalena Henry, Constantinos Phanis, Rob Hendriks, Guillaume Body, Sophie Germann, Ron
Affiliations: Finnish Environment Institute
The European Biodiversity Partnership (Biodiversa+) has been supporting transnational long-term biodiversity monitoring across Europe. Biodiversa+ was launched in October 2021 and will run until October 2028. We will summarize the key achievements of the first years of cooperation (2021-2025) on biodiversity monitoring across the more than 20 countries participating in these activities. To support harmonization of biodiversity monitoring schemes, we have started six pilot projects that test novel monitoring methods, such as remote sensing, environmental DNA, and automated sound and image recognition, to improve biodiversity monitoring and to provide a deeper understanding of the possibilities and challenges of extending them to true long-term monitoring schemes. Currently, we have been piloting the monitoring of i) invasive alien species, ii) soil biodiversity, iii) moths, bats, and birds, iv) rocky-reef fish, and v) grassland and wetland habitats. In addition, in the sixth pilot, we have been assessing governance aspects of national biodiversity monitoring coordination in ten countries. The budget of these pilots has been 8.3M€ so far, with 19 organisations from 18 countries participating. Beyond the concrete collection of monitoring data, we have also developed transversal activities, such as data management and interoperability, and have demonstrated the integration of observational data into decision-making. As part of the implementation of novel monitoring methods, a roadmap for their utilization has been developed. We have also studied current funding spent on biodiversity monitoring as well as the expected costs of upscaling the piloted monitoring schemes into true long-term monitoring programmes. We will highlight the possibilities and challenges learned from these pilots. 
One of the main benefits of such transnational monitoring schemes is that they can provide calibration and validation data for Earth Observation biodiversity data products and remotely sensed data sets. Finally, possibilities to integrate in situ observations with remote sensing approaches will be discussed. We will also present a vision for the forthcoming biodiversity monitoring cooperation with Biodiversa+ and other key monitoring initiatives.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Upland Habitat Mapping Using High-resolution Satellite Imagery and Machine Learning

Authors: Dr Charmaine Cruz, Dr Philip Perrin, Dr Jerome O'Connell, Dr James Martin, Dr John Connolly, Marco Girardello
Affiliations: Trinity College Dublin, Botanical, Environmental & Conservation (BEC) Consultants Ltd, Proveye Ltd
Uplands comprise a range of extensive, mostly semi-natural habitats, including blanket bogs, heaths, fens, grasslands and those associated with exposed rocks and scree. These habitats are protected in the European Union under the Habitats Directive. They provide important services, such as carbon sequestration and storage, biodiversity support, flood mitigation and water quality regulation. However, they are also highly vulnerable to climate change and to increasing pressures and threats from anthropogenic stressors, mainly by land-use changes. Comprehensive mapping is fundamental for monitoring these vulnerable habitats as it can provide baseline data, such as the location and extent of habitats, and can be used to monitor and track their condition over time as well as to support restoration programmes. Remote sensing has emerged as a valuable tool for mapping the distribution of upland habitats over time and space. However, the resolution of most freely available satellite images, such as Sentinel-2’s 10-meter resolution, may be inadequate for identifying relatively small features, especially in the heterogeneous landscape—in terms of habitat composition—of uplands. Moreover, the use of traditional remote sensing methods, imposing discrete boundaries between habitats, may not accurately represent upland habitats as they often occur in mosaics and merge with each other. In this context, we used high-resolution (2 m) Pleiades satellite imagery and Random Forest machine learning to map habitats at two Irish upland sites. Specifically, we investigated the impact of varying spatial resolutions (i.e., by resampling from the original 2-m spatial resolution to 4-, 6-, 8- and 10-m resolutions) on classification accuracy and proposed a complementary approach to traditional methods for mapping complex upland habitats. 
Results showed that the accuracy generally improved with finer spatial resolution data, with the highest accuracy values (80.34% and 79.64%) achieved for both sites using the 2-m resolution datasets, followed by the 4-m resolution datasets with an accuracy of 77-79%. The maps produced from these datasets provide information on the spatial distribution of habitats in great detail. Coarser spatial resolution datasets, however, resulted in a reduction of the accuracy and a slight overestimation of area for narrow and small-sized habitats (e.g., eroding blanket bogs and bog pools). The total percentage area differences between the 2-m and 10-m resolution images are 8% and 11% for the two studied sites. Although these differences may appear small, they could have significant implications in monitoring small-sized habitats, particularly when tracking gradual and subtle changes in these habitats over time. Therefore, a higher spatial resolution dataset is preferred if mapping habitats in a more heterogeneous and diverse landscape. The study also demonstrated the use of crisp and fuzzy classification techniques in mapping upland habitats. Crisp classification results in a single habitat map, which is relatively easy to interpret. Fuzzy classification delivers probability maps for each habitat considered in the modelling. While these probability maps may be more difficult to interpret, they can represent the typical complex mosaics and gradual transitions of upland habitats as observed in the field. They can also be used to describe spatial confidence in the classification through computing the entropy. Using fuzzy classified maps has the potential to improve our understanding of nature’s fuzzy patterns.
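A minimal sketch of the entropy-based confidence measure mentioned above, assuming a normalised Shannon entropy over the per-habitat class probabilities (the study's exact formulation is not given, so this normalisation is an assumption):

```python
import numpy as np

def classification_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of fuzzy class memberships, normalised by log(n_classes)
    so that 0 = full confidence in one habitat and 1 = maximal confusion."""
    p = np.clip(probs, 1e-12, 1.0)          # avoid log(0)
    h = -(p * np.log(p)).sum(axis=-1)       # entropy per pixel
    return h / np.log(probs.shape[-1])      # normalise to [0, 1]

# Two example pixels with membership over 4 habitat classes:
confident = np.array([0.97, 0.01, 0.01, 0.01])  # crisp-like pixel
mosaic = np.array([0.25, 0.25, 0.25, 0.25])     # habitat-mosaic pixel
```

High-entropy pixels would flag the gradual transitions and mosaics where a crisp map is least trustworthy.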
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: From Pixels to Paths: Animal Path Mapping using UAVs and a Deep Convolutional Neural Network - Insights from the Kruger National Park

Authors: Konstantin Müller, Leonie Sonntag, Dr Mirjana Bevanda, Antonio Jose Castañeda‐Gomez, Dr Martin Wegmann, Dr Benjamin Wigley, Dr Corli Coetsee, Dr Doris Klein, Univ.-Prof. Dr. Stefan Dech, Jakob Schwalb-Willmann
Affiliations: Earth Observation Research Cluster (EORC), Department of Remote Sensing, Scientific Services, Kruger National Park - SANParks, School of Natural Resource Management, Nelson Mandela University, Plant Ecology, University of Bayreuth, German Remote Sensing Data Center (DFD) of the German Aerospace Center (DLR)
Animals play a vital role in and for ecosystems. Information about their behavior is crucial for understanding the health and state of the surrounding environment. Their small- and large-scale movements leave traces visible as individual paths or resting sites in the landscape. With our approach, we utilize mono-temporal UAV RGB imagery to automatically map animal paths. The resulting path network gives detailed insights into animal behavior in interaction with the environment across individuals, groups and different species for subsequent biodiversity analysis. Being able to map animal paths continuously, we complement existing, temporally discrete approaches of tracking individually tagged animals via GPS tags, VHF triangulation or ringing. In contrast to current UAV video-based techniques to follow and record animals, our focus on the animals’ paths throughout the landscape is non-invasive, scalable and temporally independent of animal presence. In this study, we (i) developed a semi-automatic path labeling approach for different path types and (ii) trained a Convolutional Neural Network (CNN) to segment animal paths in UAV RGB and photogrammetrically derived DSM imagery. For this, we mapped a research area in the Kruger National Park, South Africa, where the understanding of animal behavior plays a key role in biodiversity preservation and conservation management. In this regard, animal paths play an important role in understanding habitat use and the effects of changing environments. Thus, we investigate how animal path patterns relate to potential influences on animal behavior, such as food and freshwater availability, shelter, predator pressure, and human activities. Building upon other well-known line delineation tasks, e.g., road segmentation, this research explores the ability of CNNs, especially encoder-decoder-based architectures, to map animal paths from UAV data. 
The project is guided by three research questions regarding (1) the impact of ground truth data generation on segmentation accuracy, (2) the contributions of network enhancements to improve segmentation, and (3) the generalizability of the model to different natural environments. We found that CNNs can segment animal paths well under different conditions, ranging from clear paths to partly overgrown or strongly vegetated paths. Even in difficult scenarios, our network can detect the direction of paths correctly. Furthermore, we show that refining manually labeled paths using our semi-automatic dynamic path width estimation approach increases segmentation performance. In addition, our framework is transferable across different soil types and scales. We show that our architecture outperforms existing, off-the-shelf architectures by over 7% in prediction accuracy through enhancements such as attention modules and the inclusion of denser connections. With our approach to automatically map animal paths from UAV RGB imagery, we offer a new method to uncover animal path networks created by many individuals across different species while reducing disturbances of animals that could potentially alter their behavior. This opens new opportunities for biodiversity research, e.g. by using animal path networks as a predictor in modelling the structural biodiversity of ecosystems, in species distribution modelling or for classifying habitat types. Moreover, we see potential for combining conventional tracking techniques such as GPS with our mapping approach, e.g. to associate path characteristics with species data or capture types of movement behavior, enhancing the data foundation for ecology research and beyond.
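The abstract reports prediction accuracy without naming a segmentation metric; intersection-over-union (IoU) is a common score for binary masks such as predicted path pixels and is sketched here purely as an illustration:

```python
import numpy as np

def path_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union for binary path masks (1 = path pixel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)

# Toy 5x5 example: ground-truth path crossing row 2, a slightly offset prediction.
truth = np.zeros((5, 5), dtype=int)
truth[2, :] = 1
pred = np.zeros((5, 5), dtype=int)
pred[2, :4] = 1
pred[3, 3] = 1
```

Narrow, elongated targets like animal paths make overlap metrics such as IoU stricter than per-pixel accuracy, where the dominant background class inflates the score.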
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Modelling the Role of Multiple Global Change Drivers on Future Range Shifts in a Tropical Biodiversity Hotspot

Authors: Emma Underwood, Prof Nigel Walford, Prof Mark Mulligan, Dr Kerry Brown
Affiliations: Kingston University London, King's College London
Climate change is causing plants to alter their known ranges to track newly suitable habitat. Species extinction risk due to global and local change drivers is highest on geographically isolated islands with high endemism, such as Madagascar. Plants like Calophyllum paniculatum (C. paniculatum) face a double threat, after high mortality rates linked to a newly identified vascular-wilt-like pathogen were discovered (Wright et al., 2020). We modelled C. paniculatum under multiple future climate, land cover, dispersal, and pathogen-spread scenarios through a combination of correlative and mechanistic approaches to disentangle the driving forces of future range shift at national and regional scales. For the mechanistic models, we parameterised scenarios using locally collected lemur dispersal data and a population-specific mortality probability from five consecutive years of spatially explicit monitoring of C. paniculatum tree health in Ranomafana National Park. Dispersal distance was parameterised from multi-year behaviour and movement observations of the lemur species Eulemur rubriventer present in the Park (Tonos et al., 2022). Initial results suggest range shift becomes increasingly limited when utilising dispersal parameters from field-collected data, as they do not account for less common, long-distance dispersal events. With current rates of pathogen spread alone, local populations of C. paniculatum may not be able to sustain themselves within the measured time period without intervention or support. Localised change drivers such as fragmentation of forest edges, together with the increased mortality due to the pathogen, may have more direct impacts on the plants’ future status than climate alone. Further analysis is required to ascertain the risk posed by localised environmental factors such as pathogen spread, and to understand what this means for endemism in Madagascar. References: Patricia Chapple Wright et al. 
“The Progressive Spread of the Vascular Wilt Like Pathogen of Calophyllum Detected in Ranomafana National Park, Madagascar.” Frontiers in Forests and Global Change 3 (2020). DOI: https://doi.org/10.3389/ffgc.2020.00091. Tonos, J., et al. “Individual-based networks reveal the highly skewed interactions of a frugivore mutualist with individual plants in a diverse community.” Oikos (2022). DOI: https://doi.org/10.1111/oik.08539. Tonos, Jadelys, et al., unpublished (field surveys 2018-2022).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Predicting spatio-temporal patterns of Lantana camara in a savannah ecosystem

Authors: Lilly Schell, Konstantin Müller, Merzdorf Maximilian, Emma Else Maria Evers, Drew Arthur Bantlin, Dr. Sarah Schönbrodt-Stitt, Dr Insa Otte
Affiliations: Department of Remote Sensing, Institute of Geography and Geology, University of Würzburg, (2) Conservation and Research Department, Akagera National Park
Invasive alien species represent the second greatest threat to global biodiversity, disrupting ecosystems, outcompeting native species, and ultimately contributing to widespread ecosystem degradation. In this context, modelling species distribution is critical for managing invasive species, as reliable information on habitat suitability is essential for effective conservation and rehabilitation strategies. This study aims to model the suitable habitat and potential distribution of the notorious invader Lantana camara (Lantana) in the Akagera National Park (1 122 km2), Rwanda, a savannah ecosystem. Spatio-temporal patterns of Lantana from 2015 to 2023 were predicted at a 30-m spatial resolution using a presence-only species distribution model in Google Earth Engine, implementing a Random Forest classification algorithm. The model incorporated remote sensing-based predictor variables, including Sentinel-1 SAR and Sentinel-2 multispectral data. Furthermore, socio-ecological parameters and in-situ occurrence data of Lantana were employed. Around 33% of the study area was predicted to be suitable Lantana habitat in 2023. Habitat suitability maps indicated higher vulnerability to Lantana invasion in the central, northernmost, and southern parts of the Akagera National Park compared to the eastern and western regions for most years. Additional change detection analysis exhibited an increase in habitat suitability in the northeastern park sector and a decrease in the southwestern part of the park over the study period. The model's predictive performance was robust, demonstrated by high scores on threshold-independent metrics. AUC-ROC values, which assess the model's ability to distinguish presence from absence sites, ranged from 0.93 to 0.98, while AUC-PR values, focusing on accurate presence predictions, ranged from 0.79 to 0.94. Key factors influencing Lantana habitat suitability in the study area included the road network, elevation, and soil nitrogen levels. 
Additionally, the red-edge, shortwave-infrared, and near-infrared Sentinel-2 bands were identified as essential within the Random Forest classification, highlighting the efficacy of combining remote sensing and socio-ecological data with machine learning techniques to predict invasive species distributions. These results offer valuable guidance for developing successful conservation strategies to protect savannah ecosystems and mitigate the spread of Lantana in the future. Moreover, the methodological approach of this study provides a robust framework that is intended to be applied to comparable ecosystems facing similar challenges.
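As a concrete illustration of the threshold-independent metrics mentioned in the abstract, AUC-ROC can be computed from presence/absence labels and suitability scores via the Mann-Whitney U statistic. The sketch below uses purely synthetic data, not the study's results:

```python
import numpy as np

def auc_roc(y_true, y_score):
    """AUC-ROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen presence site scores above a randomly chosen absence site."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # count pairwise comparisons; ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
# Synthetic labels: 1 = Lantana presence, 0 = (pseudo-)absence
y_true = rng.integers(0, 2, size=200)
# Synthetic suitability scores, skewed toward the true label
y_score = y_true * 0.6 + rng.normal(0.2, 0.25, size=200)

print(f"AUC-ROC: {auc_roc(y_true, y_score):.2f}")
```

A score near 1.0 indicates near-perfect discrimination between presence and absence sites; 0.5 is no better than chance.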
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M-N)

Poster: Satellite Data-Driven Mapping of Tropical Forest-Savanna Transitions on a Global Scale

Authors: Matúš Seči, Carla Staver, Dr David Williams, Casey M. Ryan
Affiliations: School of GeoSciences, University of Edinburgh, Department of Ecology and Evolutionary Biology, Yale University, School of Earth and Environment, University of Leeds
Forest-savanna transitions are thought to be the most widespread ecotone in the tropics. These transition zones form unique ecosystems of mosaic habitats supporting substantial biodiversity and providing a variety of ecosystem services to local populations. However, ecosystem mosaics occurring within the transition zones are often misunderstood and mislabelled as degraded forest remnants rather than unique ecosystems which makes them increasingly endangered by forest-centric management practices. At the same time, forest-savanna transition zones have received relatively little focus from researchers compared to the core areas of these biomes. Existing work focuses on local-scale understanding of the ecological processes but has not provided a systematic assessment of the extent and distribution of transition zones or evaluated their intactness and conservation status on a continental or global scale. This limits our ability to understand change and to conserve these areas effectively as they become more threatened by the global environmental change and anthropogenic pressures. Here we conduct the first satellite data-driven mapping of natural forest-savanna transition zones on a global scale using vegetation structural variables. By calculating rate of change of tree cover through space across the tropics, we identified savanna-forest transition zones across all the major tropical regions. We evaluated the intactness of these zones using remotely sensed land cover maps of anthropogenic land uses such as agriculture and deforestation and quantified the overlap of these areas with maps used for conservation planning. Next, we quantified the degree of tree-cover patchiness in the transition zones, to assess how common natural ecosystem mosaics are, given their importance for biodiversity. 
Finally, we described the climatic space in which these transition zones occur and quantified environmental drivers which have been shown to influence forest-savanna coexistence such as topography, fire occurrence, hydrological dynamics and soil properties to understand the relative importance of these drivers across the different zones. This work represents the first step towards understanding the distribution and intactness of, and the processes within, forest-savanna transition zones on a global scale. The map of natural forest-savanna transition zone will serve as a basis for further investigation into the spatiotemporal dynamics of these unique ecosystems and help inform ecosystem conservation efforts and management practices in the tropics.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: C.06.03 - POSTER - Validation of GNSS-RO and GNSS-R observations from small sats

GNSS Radio Occultation (RO) for atmospheric sounding has become the subject of the first pilot project, led by NOAA and EUMETSAT, integrating institutional (e.g. from MetOp) and commercial RO measurements into operational Numerical Weather Prediction (NWP). The path to this achievement was paved by a number of studies on calibration, data quality and validation through impact assessments, including complementary observations from other sensors. Innovation continues in GNSS-RO, for example with Polarimetric RO, and further on-going studies can be presented in this session.

A number of commercial GNSS Reflectometry (GNSS-R) missions have been launched in the last 10 years, mostly driven by wind-speed applications, and more are planned for 2025, such as the ESA Scout HydroGNSS mission, with significant innovations and primary objectives related to land applications. As with GNSS-RO, a number of data quality and validation studies are on-going or being planned, and if successful, GNSS-R could also make it into operational systems.

This session is intended for the presentation of studies of this kind, related to the assessment of GNSS measurements, typically from miniaturised GNSS EO receivers in commercial initiatives.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: EDAP+ Atmospheric domain: SPIRE GNSS-R assessment

Authors: Leonardo De Laurentiis, Gabriele Mevi, Dr. Chloe Helene Martella, Sabrina Pinori, Dr. Clement
Affiliations: ESA
SPIRE Global is a data and analytics company that collects GNSS-R data from its constellation of Lemur-2 satellites. GNSS-R is an opportunistic measurement in which the signals emitted by GPS satellites and their reflections off the ground are collected by the Lemur-2 constellation and processed. Within the Earthnet Data Assessment Project (EDAP+), SPIRE GNSS-R products have been analyzed and compared with reference satellites and ground measurements. The products analyzed are the SPIRE Ocean Products, Surface Wind Speed and Mean Square Slope (MSS), and Soil Moisture. The analysis has been conducted following the EDAP+ guidelines and comprises a Maturity Matrix assessment of the product documentation and an intercomparison exercise on the measurements. In this work we present the procedures followed and the results of the assessment.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: The impact of assimilating GNSS Radio Occultation data on the sub-seasonal forecasts

Authors: Katrin Lonitz, Dr Sean Healy, Frederic Vitart
Affiliations: ECMWF
Sub-seasonal forecasting is a difficult time range for weather forecasting: it is often considered too long for the atmosphere to retain enough memory of its initial conditions, yet too short for the boundary conditions, such as ocean, sea-ice or land, to vary enough to provide predictability beyond persistence. It is often assumed that the main sources of sub-seasonal predictability come from ocean or land variability. So far, only a few studies have assessed the impact of atmospheric observing systems on sub-seasonal forecasts using data-denial experiments (Observing System Experiments, OSEs). This lack of atmospheric OSEs represents an important gap in our understanding of sub-seasonal forecasting performance. There is a clear need to assess the impact of the current atmospheric observing system on sub-seasonal forecasts. This would help to better understand which observing systems have the largest impacts on sub-seasonal prediction and, as a consequence, help provide guidance on the implementation of future observing systems. The value of assimilating GNSS Radio Occultation (RO) data in medium-range Numerical Weather Prediction (NWP) is now well established in many operational systems. The present study investigates whether GNSS-RO observations also have a measurable impact on the sub-seasonal forecast range. The impact was measured by running two large sets of 32-day ensemble re-forecasts over the extended winter periods from 2020 to 2023, initialised from analyses with and without GNSS-RO assimilated. Results indicate a statistically significant improvement in the reforecast skill scores up to week 4 in the stratosphere, particularly over the Tropics. The impact in the troposphere is generally negligible.
However, the amplitude of the Madden-Julian Oscillation (MJO) is significantly stronger during the first two weeks when the reforecasts are initialised from the analysis with GNSS-RO assimilated, suggesting a potential link between MJO prediction and the initialisation of the stratosphere. Given these encouraging results for GNSS-RO, new sub-seasonal impact experiments with other observing systems are suggested.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Exploring different microphysics assumptions with Polarimetric Radio Occultations

Authors: Antía Paz, Ramon Padullés, Estel Cardellach
Affiliations: Institute of Space Sciences (ICE-CSIC), Institute of Space Studies of Catalonia (IEEC)
The Polarimetric Radio Occultation (PRO) technique consists of tracking signals transmitted by GPS satellites and captured by Low Earth Orbit (LEO) satellites as they rise or set behind the Earth's limb. This approach extends the capabilities of the traditional Radio Occultation (RO) method by not only measuring vertical profiles of thermodynamic variables but also incorporating polarimetric effects. Unlike standard RO, PRO employs two orthogonal linear polarizations, horizontal (H) and vertical (V), for its receiving antennas, enabling relevant insights into atmospheric conditions. Since its deployment aboard the PAZ satellite in 2018, the GNSS-PRO concept has been successfully demonstrated. More recently, in 2023, it has been implemented aboard three of Spire Global's commercial CubeSats. The polarimetric capability of PRO allows the retrieval of vertical profiles of differential phase shift (ΔΦ), the difference in phase delay between the H and V polarizations. Heavy precipitation events, characterized by oblate spheroid-like hydrometeors, induce a positive differential phase shift as the PRO signals traverse them. Consequently, this technique provides unique insight into the microphysical properties of these precipitation events. The primary hypothesis that PRO onboard PAZ is sensitive to oblate raindrops has been conclusively validated. Furthermore, it has been unexpectedly demonstrated that PRO is also sensitive to frozen hydrometeors. The technique's performance has been corroborated through comparisons with two-dimensional data such as the IMERG-GPM products and three-dimensional data from the NEXRAD weather radars. Ongoing analyses are directed toward understanding the sensitivity of PRO to various microphysical parameterizations derived from the Weather Research and Forecasting (WRF) model and particle habits modeled using the Atmospheric Radiative Transfer Simulator (ARTS).
Varying the model's microphysics parameterizations allows the PRO technique's sensitivity to be studied under different assumptions about hydrometeors. Changes in these parameterizations impact total precipitation, the vertical structure of hydrometeors, cloud properties, the energy budget, and spatial structure, among other factors. The validation and sensitivity study of the PRO technique will contribute to an enhanced understanding of the obtained observable and will offer insights into the phenomena characterizing intense precipitation situations.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Grazing-Angle Ionospheric Delay on GNSS-R: Findings from the ESA PRETTY Mission Observations.

Authors: Mario Moreno, Maximilian Semmling, Florian Zus, Georges Stienne, Andreas Dielacher, Mainul Hoque, Jens Wickert, Hossein Nahavandchi, Milad Asgarimehr, Estel Cardellach, Weiqiang Li
Affiliations: German Aerospace Center (DLR), Deutsches GeoForschungsZentrum (GFZ), Université Littoral Côte d’Opale (ULCO), Technische Universität Berlin (TUB), Beyond Gravity Austria GmbH (BGA), Technische Universität Graz (TUG), Norwegian University of Science and Technology (NTNU), Institute of Space Sciences (ICE-CSIC), Institute of Space Studies of Catalonia (IEEC)
Space weather can affect the operation of both spaceborne and ground-based systems, impacting daily human activities. The ionosphere, an ionized layer of Earth's atmosphere extending from approximately 50 to over 1,000 kilometers in altitude, experiences perturbations in Total Electron Content (TEC) due to variations in space weather conditions. Consequently, TEC serves as a parameter for monitoring potential ionospheric effects of space weather. TEC represents the total number of electrons present along a path between a Global Navigation Satellite System (GNSS) transmitter and a receiver, inducing a delay in the transmitted signal. Although GNSS-based infrastructure for ionospheric monitoring is well developed, coverage gaps remain in remote areas and over oceans. GNSS Reflectometry (GNSS-R) has emerged as an important technique for atmospheric sounding, providing reliable information to complement data where conventional measurements are unavailable. This study aims to estimate the ionospheric delay using observations from the single-frequency ESA Passive REflecTomeTry and dosimetrY (PRETTY) mission. PRETTY is a pioneering GNSS-R satellite operating on the L5/E5 frequency, primarily aimed at altimetry and sea ice detection applications at very low elevations. Neutral atmospheric corrections on code delay observations are applied using a ray-tracing tool that utilizes data from the ERA5 reanalysis model, allowing isolation of the ionospheric delay component. The estimated relative ionospheric delay (of the reflected signal w.r.t. the direct signal) from six events in the North Pole region shows close alignment with the Neustrelitz Electron Density Model (NEDM2020), the NeQuick model, and the International Reference Ionosphere (IRI) model, with relative variances ranging from 0.5% to 18% during days with high solar activity (F10.7 = 224).
This indicates that the estimation accurately accounts for the first-order ionospheric delay from GNSS-R code data, which is proportional to the Total Electron Content. The relative ionospheric delay reaches its maximum (negative) value at approximately 3° elevation (at the specular point) due to the higher contribution of the delay from the direct signal. At elevations around 7.2°, the contribution from the reflected signal counteracts that of the direct signal, resulting in a cancellation point. This point can be associated with the peak electron density height, providing insights into the vertical structure of the ionosphere.
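The proportionality between first-order ionospheric group delay and TEC invoked above follows the standard single-frequency relation, delay = 40.3 · TEC / f². A minimal numerical sketch (the function name and example values are illustrative, not part of the PRETTY processing chain):

```python
# Standard first-order ionospheric group delay, proportional to TEC
# and inversely proportional to the carrier frequency squared.
F_L5 = 1176.45e6  # Hz, GPS L5 / Galileo E5a carrier frequency

def iono_delay_m(tec_tecu: float, freq_hz: float = F_L5) -> float:
    """Group delay in metres for a slant TEC given in TEC units
    (1 TECU = 1e16 electrons/m^2)."""
    return 40.3 * tec_tecu * 1e16 / freq_hz**2

print(f"{iono_delay_m(1.0):.3f} m per TECU at L5")   # ~0.29 m per TECU
print(f"{iono_delay_m(50.0):.2f} m for 50 TECU")
```

Because the delay scales with 1/f², it is larger on L5/E5 than on L1, which is one reason single-frequency receivers such as PRETTY's must rely on external models or relative measurements to account for it.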
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: A.09.04 - POSTER - Glaciers - the other pole

Glaciers are distributed around the world in mountainous areas from the Tropics, through the mid-latitudes, and up to the Polar regions, and number approximately 250,000. Glaciers are currently the largest contributors to sea level rise and have direct impacts on run-off and water availability for a large proportion of the global population.

This session is aimed at reporting on the latest research using EO and in situ observations for understanding and quantifying change in glacier presence, dynamics and behaviour, including responses to changes in climate, both long term (since the Little Ice Age) and in the recent satellite period. EO observations of glaciers come from a large variety of sources (SAR, altimetry, gravimetry, optical) and are used to derive estimates of ice velocity, surface mass balance, area, extent and dynamics of both accumulation and ablation, characteristics such as surging, glacier failure, and downwasting, as well as associated observations of snow pack development and duration, lake formation, glacial lake outburst floods (GLOFs) and slope stability.

Presentations will be sought covering all aspects of glacier observations but in particular efforts to derive consistent global databases e.g. GlaMBIE, ice velocity and area (Randolph Glacier Inventory) as well as variation in run-off and water availability and interfaces between these observations and glacier modelling to forecast possible future glacier changes and their impact on hydrology and sea-level rise.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Recent modification of Miage Glacier: using EO to monitor the evolution of the Alpine glaciers in the context of Climate Change

Authors: Francesco Parizia, Walter Alberto, Marco Giardino, Enrico Borgogno Mondino, Luigi Perotti
Affiliations: University of Rome Sapienza, Department of Civil, Construction and Environmental Engineering (DICEA); Arpa Piemonte; University of Turin, Department of Earth Sciences; University of Turin, Department of Agriculture, Forest and Food Sciences
Miage Glacier is the third-largest glacier in the Italian Alps in terms of areal extent. It is situated on the southern side of the Mont Blanc massif (Val Veny, Italy) and is one of the largest debris-covered glaciers in the Alps. The debris cover, a layer of rock that blankets its surface, plays a crucial role in its mass balance and overall behavior. While debris can insulate the glacier from solar radiation, reducing melt rates, it also alters the glacier's albedo, making it more susceptible to absorbing heat. Miage Glacier has experienced significant changes in recent decades, including accelerated retreat, thinning and local instability. These changes are driven by combinations of factors, including rising air temperatures, reduced snowfall, and altered precipitation patterns. In recent years, the Italian Glaciological Committee (CGI) has carried out continuous annual studies on the Miage Glacier using different techniques. The reconstruction of the glacier's evolution, first with historical aerial photographs and then with more modern technologies such as satellite images, terrestrial laser scanning and digital photogrammetry (from drone or helicopter), has allowed us to understand the glacier's evolution. The Miage Glacier is a perfect natural laboratory in which to observe all kinds of changes in morphology and the dynamic glacier response to Climate Change. In this framework we can observe consequences connected with the lowering of the glacial body, such as moraine instability and the creation of large numbers of supraglacial lakes, as well as risky events such as Glacial Lake Outburst Floods (GLOFs). Technologies derived from photogrammetry (3D measurements) have made it possible in recent years to quantify the glacier volume loss at about 100 billion liters of fresh water over the period 2008-2022, compared with a detectable volume loss of about 85 billion liters over the period 1958-2008.
In addition, proximity surveys have made it possible to monitor risky events such as progressive moraine instability and GLOF events, like the one that occurred on July 11, 2022, with a sudden emptying of about 400,000 m³ of water from Miage Lake. Satellite data also fit perfectly within the context of glacial monitoring: using data such as those provided by Sentinel-2, which are completely free, we can continuously monitor surface changes, and through the use of spectral indices we can track the changes occurring on the surface of the glacial body. An example is the formation or emptying of lakes, which emphasizes how the processes of glacier evolution have changed in recent years. Monitoring glacial dynamics in the Alpine environment is key, especially in the current context of Climate Change. In particular, understanding the glacier's response to the current climatic emergency allows a more effective relationship with the environment. Monitoring allows for greater awareness of the natural hazards that may result and better management of the water resource in the communities downstream of the glacier.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Remote Sensing Data Downscaling for High Mountain Glaciers

Authors: Mariia Usoltseva, Prof. Dr. Roland Pail, Dr. Christoph Mayer, Dr. Martin Rückamp
Affiliations: Technical University of Munich, Bavarian Academy of Sciences
Glaciers are crucial components of the Earth's climate system and serve as indicators of climate change. Their substantial mass loss due to global warming significantly contributes to sea-level rise and impacts regional hydrology, downstream ecosystems and settlements. Despite considerable advancements in observational and modelling techniques, accurately quantifying glacier responses to climate change and predicting their future behaviour remain complex challenges, particularly in regions characterized by rapidly changing glaciers and complex topography. One of the key limitations in this field remains the availability of high-resolution regional datasets. In this study, we investigate the application of remote sensing data downscaling techniques to improve the spatial and temporal resolution of glacial mass balance estimates. We focus on the integration of relatively high-resolution surface elevation changes derived from satellite altimetry with coarse-resolution mass changes inferred from satellite gravimetry data to localize mass changes. This study focuses mainly on the glaciers of the Patagonia region, which, characterized by rapid glacier retreat and complex climatic influences, serves as an ideal case study for integrating multiple satellite datasets and regional models. This approach aims to improve local assessments and provide a transferable framework for applying remote sensing downscaling in other regions where observational data are sparse. The findings contribute to advancing the use of satellite remote sensing for cryospheric studies and underscore the importance of high-resolution datasets in tracking and predicting glacial responses to climate change. Preliminary results highlight the potential of enhanced data integration techniques to resolve sub-regional mass changes, offering insights into glacier-climate interactions in Patagonia. The potential outcomes of this work aim to benefit the field of glacial modelling.
The development of a downscaled glacial mass balance dataset, tailored for regional glacial systems or even individual glaciers, holds significant promise for model forcing and data assimilation to improve the estimates of future glacial melt and hydrological processes.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Climatic and morphological factors controlling the development of glacial lakes in High Mountain Asia

Authors: Sheharyar Ahmad, Dr. Giacomo Traversa, Dr. Nicolas Guyennon, Dr. Franco Salerno, Mr. Luca
Affiliations: Ca' Foscari University of Venice
Glaciers in High Mountain Asia (HMA) play a crucial role in modulating the release of freshwater into rivers and supporting ecosystems. However, glacier changes not only impact the water supply for downstream areas, but also alter the frequency and intensity of glacier-related hazards, such as glacial lake outburst floods (GLOFs). The increasing frequency and risk of GLOFs threatens populations across Asia. In this context, glacial lake inventories benefit disaster risk assessment and contribute to predicting glacier–lake interactions under climate change. Satellite-based glacial lake inventory studies have been heavily concentrated on the Tibetan Plateau; however, a recent glacial lake mapping is still absent for the whole of HMA, despite the recent availability of the Sentinel-2 satellites with a resolution of 10 m. Here we present a glacial lake inventory for the entire HMA region based on more than 1300 Sentinel-2 images collected during 2022. A semi-automated lake mapping method has been developed and validated in order to assess and reduce the uncertainty. This study aims to present: (1) an up-to-date glacial lake inventory using Sentinel-2 images for the whole of HMA; (2) the rigorous validation methodology adopted to check and reduce the uncertainty; (3) the morphological factors, derived from the Randolph Glacier Inventory; and (4) the climatic parameters, considering reanalysis products. Overall, this work updates the current knowledge on the distribution of glacial lakes and on the factors responsible for their development in High Mountain Asia.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Combining Fully Focused and Swath Processing for Glacier Applications

Authors: Charlie McKeown, Albert Garcia-Mondéjar, Ferran Gibert, Noel Gourmelen, Tristan Goss, Sophie Dubber, Mal McMillan, Michele Scagliola, Paolo Cipollini
Affiliations: isardSAT UK, isardSAT, University of Edinburgh, Earthwave, University of Lancaster, European Space Agency
High-PRF altimeters transmit pulses at a high pulse repetition frequency, making the received echoes suitable for coherent processing on the ground. Conventional delay-Doppler processing (DDP, commonly called SAR or High Resolution) coherently integrates echoes on a burst-by-burst basis to provide single-look waveforms referred to a specific ground location which, after being correctly aligned (compensating for the slant-range migration, among other effects), can be incoherently averaged, increasing the performance in terms of speckle reduction and along-track resolution compared with the traditional Low Resolution Mode, and in turn in terms of geophysical retrieval. Fully Focused delay-Doppler processing (FF-DDP, also known as Fully Focused SAR) goes one step further and coherently integrates the echoes over a time longer than a burst to obtain an even higher along-track resolution with improved speckle reduction with respect to DDP. Swath mode processing has been used to monitor the elevation of areas with complex topography, such as ice sheet margins, ice caps and mountain glaciers, improving upon the resolution and coverage of conventional radar altimetry. Swath mode relies on an accurate angle of arrival of the measured echo; this is obtained from the SAR Interferometric mode of CryoSat-2 and CRISTAL, together with post-processing strategies resolving the ambiguous nature of the phase measurement. The Open Burst (or interleaved) transmission mode to be implemented in the Sentinel-6 and Copernicus polaR Ice and Snow Topography Altimeter (CRISTAL) missions makes them more suitable for FF-DDP processing thanks to the uniform along-track sampling of the scene. In the conventional Closed Burst mode (as in CryoSat-2), however, replicas induced by the non-uniform sampling of the Doppler spectrum are mixed with the main echo and, in most cases, cannot be filtered out.
The CRISTAL mission will include Open Burst and Interferometric capabilities. It will be the first altimeter able to combine both methodologies to increase both the along- and across-track resolutions, improving the current performance of CryoSat-2 over small glaciers that cannot be observed properly. In this presentation we show the results from the assessment and the impact of the combined Fully Focused and Swath solution within the CLEV2ER Land Ice & Inland Water project. We will show the improvement in performance over complex terrain when utilising FF-DDP processed data, compared to conventional DDP data, using Swath processing.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Mapping annual summer glacier fronts and a proxy metric of calving intensities with Sentinel-1 Extra Wide Swath mode.

Authors: Jörg Haarpaintner, Manon Tranchand-Besset, Dr Heidi Hindberg, Dr Valentin Pillet
Affiliations: NORCE Norwegian Research Centre, i-Sea
The retreat of melting glaciers and increased calving activity are dramatic evidence of climate change and ice mass loss, and the biggest cause of sea level rise. The glacier front lines of marine-terminating glaciers can be highly variable in time and reflect a balance between glacier flow, i.e. surging, and calving, i.e. the breaking off of ice from the glacier's terminus, which floats away as icebergs or growlers. This is especially the case during the sea-ice-free summer months, July to September. The icebergs and growlers then directly influence the marine environment, for example by providing habitat for the marine fauna, introducing fresh water into the water column, and transporting sediments. Optical satellite sensors such as Sentinel-2 provide accurate results for glacier front line delimitation, but the monitoring frequency is limited by the persistent cloud cover in the Arctic, allowing mainly snapshots of the glacier front under cloud-free conditions. Cloud-penetrating synthetic aperture radar (SAR) data from the two Sentinel-1 (S1) A & B satellites of the European Copernicus Programme, however, provide consistent time series of observations since 2015 for statistical analysis. Over the Svalbard archipelago, the S1 acquisition plan is dominated by the extra-wide swath (EW) mode, acquiring observations at a medium resolution of 25 m on a quasi-daily basis in HH/HV dual polarization. The EW mode is used intensively for operational sea-ice monitoring, but is otherwise often neglected for other applications. Only two of the S1 paths over Svalbard acquire in the interferometric wide-swath 10-m high-resolution (IWH) mode, in different dual-polarization set-ups (VV/VH and HH/HV, respectively), each providing only one acquisition every 12 days per satellite, i.e. a maximum of five acquisitions per month when both S1 A & B were operational from 2017 to 2021.
In this presentation, instead of providing a snapshot at a specific time, the whole time series of daily Sentinel-1 EW mode acquisitions over Kongsfjorden in the north-west of Svalbard is analyzed to provide statistically defined summer glacier fronts for Kronebreen and other glaciers for the years 2015 to 2024. The method is based on Haarpaintner and Davids (2021), which was developed to map the intertidal zone in Norway into classes of atmospheric exposure by calculating backscatter percentile mosaics of dense S1 IWH time series. The summer glacier fronts are extracted by thresholding the 95th backscatter percentile mosaic from the daily sea-ice-free summer month acquisitions, thereby defining the glacier front as the line where the glacier prevails more than 95% of the time during summer. These glacier front lines are then compared to glacier fronts extracted from Sentinel-2 as well as to those from a statistical analysis of the fewer, higher-resolution S1 IWH acquisitions. The comparison reveals the high variability of the glacier front position during summer, of several hundred meters. In addition to defining the glacier front, the S1 SAR also detects floating icebergs and growlers in the waters in front of the glaciers. Lower-percentile backscatter mosaics are then used to define regions in the Kongsfjorden waters where icebergs and growlers are present 10%-25%, 25%-50%, 50%-75%, and 75%-95% of the time during summer. Although the distribution of icebergs and growlers in the fjord depends strongly on surface winds and currents, this approach still provides a proxy metric of summer calving intensities. In the outlook, we will present ideas for future research on how to better assess, quantify and validate the distribution of icebergs and growlers with regard to their concentrations in the fjord.
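The core operation described in this abstract, building a per-pixel temporal percentile mosaic and thresholding it, can be sketched with NumPy. Everything here is synthetic (stack size, dB values and the threshold are invented for illustration; they are not the study's parameters):

```python
import numpy as np

# Synthetic stack of daily backscatter scenes, shape (time, y, x), in dB.
# Left half of the image: persistent glacier (strong backscatter every day);
# right half: open water (weak backscatter background).
rng = np.random.default_rng(1)
stack = rng.normal(-20.0, 2.0, size=(60, 4, 8))  # open-water background
stack[:, :, :4] += 15.0                          # persistent glacier signal

# Per-pixel 95th percentile over the time axis gives one mosaic per pixel;
# thresholding it keeps only pixels that backscatter strongly in (nearly)
# all acquisitions, i.e. where the glacier prevails during summer.
p95 = np.percentile(stack, 95, axis=0)
glacier_mask = p95 > -12.0  # illustrative dB threshold

print(glacier_mask.astype(int))
```

Lower percentiles of the same stack (e.g. the 25th or 50th) would instead highlight water pixels that are only intermittently bright, which is the basis of the iceberg/growler presence classes described above.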
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Unlocking the Potential of Airborne Hyperspectral Thermal Infrared Remote Sensing for Monitoring Debris-Covered Glacier Dynamics

Authors: Gabriele Bramati, William Johnson, Glynn Hulley, Bjorn Eng, Gerardo Rivera, Robert Freepartner, Simon Hook, Kathrin Naegeli
Affiliations: TIRLab - RSL - Department of Geography, University of Zurich, NASA Jet Propulsion Laboratory - California Institute of Technology
Debris-covered glaciers (DCGs) are present in every mountain range on Earth. Debris layers on glaciers affect melt, morphology, evolution and overall dynamics. While a thin debris layer enhances melt, thicker debris insulates the underlying ice. However, the 3-dimensional spatial and temporal distribution of debris layers is still poorly understood. Innovative datasets are needed in order to map debris layer characteristics and to disentangle the interplay between debris, climate, and glacier response over time. Among the available Earth Observation (EO) data for Alpine glaciers, thermal infrared (TIR) datasets have been underexploited due to the lack of suitable resolution and general availability, despite their great potential. TIR observations allow estimation of land surface temperature (LST), which can be used to estimate the glacier debris energy balance, extent, and thickness of debris. In addition, debris lithologies can be distinguished using multi- or hyperspectral TIR data. In this contribution, we present a unique remote sensing dataset with unprecedented spatial and spectral resolution over both a DCG and a clean-ice glacier. We surveyed two different alpine glaciers in the Swiss Alps, one debris-covered (Zmuttgletscher) and one clean-ice (Findelgletscher), with the Hyperspectral Thermal Emission Spectrometer (HyTES) developed at NASA-JPL, in addition to various in-situ measurements (thermal infrared point measurements, debris thickness excavations, meteorological observations, ablation measurements, etc.). HyTES is an airborne imaging spectrometer with 256 bands in the 7.5-12 µm wavelength range at a ground sampling distance of about 3 m. The acquired dataset allows for testing algorithms and processing schemes in view of future TIR satellite missions (such as TRISHNA, SBG, LSTM), which will open new frontiers for global glacier studies. We calibrated the DCG survey using the clean-ice glacier survey.
We then present validation results between HyTES data and in situ temperatures on the DCG, and discuss the distribution of LST with regard to different debris characteristics. In addition, a lithological map has been produced utilizing airborne and laboratory-derived emissivity spectra, in combination with chemical analysis for mineralogical composition and silica weight percent. Finally, we discuss applications such as glacier debris thickness estimation and distributed energy balance modelling for sub-debris melt estimation, and address scaling effects of multi-source remote sensing datasets.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Towards the Regional Snowline Estimates at Sub-Seasonal Scale in Central Asia

Authors: Dilara Kim, Mattia Callegari, Enrico Mattea, Ruslan Kenzhebaev, Erlan Azisov, Tomas Saks, Martina Barandun
Affiliations: Department of Geosciences, University Of Fribourg, Institute of Earth Observation, EURAC research, Central-Asian Institute for Applied Geosciences (CAIAG)
Central Asia's glaciers are critical to the region's freshwater supply during the dry summer months, supporting the region's agriculture and hydropower sectors. It is therefore imperative to better understand the glacier response to ongoing and future climate change and the potential impact on regional water resources. The Central Asian mountain ranges, Pamir and Tien Shan, encompass over 25,000 glaciers, yet glaciological measurements are scarce, especially after the collapse of the Soviet Union. Existing gaps and the sparse spatial coverage of the glaciological measurement time series restrict regional assessment of seasonal and annual glacier changes. A promising approach to infer annual glacier mass balance is based on the position of the end-of-summer snowline, which marks the transition between snow and bare-ice surfaces; at the end of the melting season, it approximates the equilibrium line altitude (in the absence of superimposed ice). Snow and glacier ice have distinct spectral characteristics and are thus well suited to remote mapping. We designed a novel method to retrieve snowlines from the MODIS surface reflectance product, which covers the period since the beginning of the 21st century. To bridge the coarse spatial resolution of MODIS, we used a statistical relationship between MODIS reflectance and snowlines derived from the high-resolution data of Sentinel-2 and Sentinel-1, available since 2015 and 2016 respectively. The resulting time series provides spatially and temporally highly resolved snowlines. The method was tested on selected glaciers in Central Asia; the sub-seasonal snowline evolution was compared to the modelled daily glacier melt contribution. We further demonstrate the potential of the retrieved snowlines to better constrain surface mass balance models. Our approach is suitable for larger-scale assessments, thanks to an implementation based on the Google Earth Engine cloud computing service and the use of well-established processing algorithms.
In our contribution we present regionally applied glacier snowline estimates for Central Asia and provide insight into the seasonal snowline dynamics as a proxy for glacier mass balance over the last 25 years. Our study reveals the potential of snowline monitoring for a better understanding of glacier mass balance changes and sub-seasonal glacier melt contribution to runoff in remote and inaccessible regions.
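The MODIS-to-Sentinel transfer step described above can be illustrated as a simple per-glacier regression; all reflectance and altitude values below are hypothetical, and the study's actual statistical model may differ.

```python
import numpy as np

# Hypothetical training pairs for one glacier: MODIS-scale reflectance of
# the glacier tongue vs. snowline altitude (m a.s.l.) mapped from
# coincident high-resolution Sentinel-2 scenes.
modis_reflectance = np.array([0.75, 0.68, 0.60, 0.52, 0.45, 0.40])
s2_snowline_alt = np.array([3900.0, 4050.0, 4200.0, 4350.0, 4500.0, 4600.0])

# Fit the per-glacier statistical relationship (here: ordinary least squares).
slope, intercept = np.polyfit(modis_reflectance, s2_snowline_alt, deg=1)

def snowline_from_modis(refl):
    """Transfer function: snowline altitude predicted from MODIS reflectance,
    used to extend the snowline record back to the pre-Sentinel MODIS era."""
    return slope * refl + intercept

estimate = snowline_from_modis(0.55)  # mid-range reflectance -> mid-range altitude
```

Lower reflectance (more exposed bare ice) maps to a higher snowline, so the fitted slope is negative.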
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Recent changes at Jostedalsbreen ice cap revealed by repeat UAV and satellite data

Authors: Benjamin Robson, Dr Harald Zandler, Dr Jakob Abermann, Professor Jonathan Carrivick, Ms Daria Ushakova, Dr Sven Le Moine Bauer, Dr Thomas Scheiber, Daniel Thomas, MSc Alexander Maschler, Dr Gernot Seier, Dr Liss Andreassen, Professor Jacob Yde
Affiliations: University Of Bergen, University of Graz, Western Norway University of Applied Sciences, Julius-Maximilians-Universität Würzburg, University of Leeds, Independent Researcher, The Norwegian Water Resources and Energy Directorate
Jostedalsbreen, the largest ice cap in mainland Europe, covered an area of 458 km² as of 2019, representing approximately 20% of the total glacier-covered area in mainland Norway. The ice cap plays a crucial role in regional hydrology and serves as an important indicator of climate change impacts in the Nordic region. Previous research has shown that the ice cap is experiencing a net mass loss, but these findings are mostly based on analyses over decadal timescales, leaving short-term dynamics less understood. This study aims to address the gap in understanding short-term, high-resolution changes by focusing on a four-year period from 2020 to 2024, utilising data from Unmanned Aerial Vehicles (UAVs), airborne LiDAR, and high-resolution satellite imagery. Our analyses enable us to study recent changes at eight outlet glaciers of the Jostedalsbreen ice cap at a decimetre scale. We examine volumetric surface changes, horizontal glacier flow rates, and surface metrics such as roughness and rugosity. By integrating these high-resolution datasets, we can detect subtle changes in glacier morphology and dynamics that are not apparent in longer-term studies. This approach allows us to compare recent glacier change rates with those observed over longer decadal scales since the mid-20th century, providing more detailed insights into the cryospheric response to recent climatic variations. We further extend our analysis by examining a time series of Sentinel-1 Synthetic Aperture Radar (SAR) images acquired every 12 days between 2020 and 2024. This dataset allows us to study the spatial and temporal distribution of wet and dry snow over the entire ice cap and assess the duration and altitudinal distribution of snowmelt throughout the ablation and accumulation seasons. 
The high temporal frequency of the SAR data enables the monitoring of seasonal transitions and extreme melt events, which are critical for understanding the ice cap's response to short-term climatic fluctuations. However, the resulting time series is complicated by the strong backscatter responses from icefalls on several outlet glaciers. We anticipate that integrating surface melt occurrence and frequency data with observed glacier changes will enhance our understanding of the ice cap's dynamics, contributing to more accurate predictions of its future evolution and informing regional water resource management and climate adaptation strategies.
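Wet-snow mapping from SAR time series of the kind described here is commonly done with a ratio criterion against a dry reference (in the style of Nagler and Rott); whether this study uses exactly that criterion, and the -3 dB threshold below, are assumptions for illustration.

```python
import numpy as np

def wet_snow_mask(sigma0_db, dry_reference_db, threshold_db=-3.0):
    """Flag pixels where backscatter drops well below the dry-snow reference.

    In dB, the backscatter ratio is a simple difference; a drop of ~3 dB
    relative to a dry (winter) reference is a common wet-snow criterion.
    """
    ratio_db = sigma0_db - dry_reference_db
    return ratio_db < threshold_db

# Toy example: uniform dry winter reference and one summer acquisition
# where the left two columns show a strong backscatter drop (wet snow).
dry_ref = np.full((4, 4), -8.0)
summer = np.array([[-14.0, -13.5, -7.9, -8.1],
                   [-12.8, -12.0, -8.0, -7.7],
                   [-13.1, -12.5, -8.2, -8.0],
                   [-12.9, -13.0, -7.8, -8.3]])

mask = wet_snow_mask(summer, dry_ref)
```

Applying this per acquisition across the 12-day Sentinel-1 stack yields the duration and altitudinal distribution of snowmelt discussed above.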
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Glacier Mapping Using Deep Neural Networks in the Tropical Andes

Authors: Diego Pacheco Ferrada, Dr Thorsten Seehaus
Affiliations: Friedrich-Alexander Universität Erlangen-Nürnberg
Glaciers in the Tropical Andes have experienced a significant and accelerated decrease over the last decades, mainly driven by climatic variables affected by climate change. Glaciers in the Andean regions not only provide important hydrological services as water reservoirs for the consumption of downstream communities and for economic activities, but also play a fundamental role in sustaining high-altitude environments and cultural beliefs. Despite their importance, only a few studies have addressed mapping and volume-change evaluation at a regional and multitemporal scale; most have focused on specific areas and/or glaciers in Peru or Bolivia. Furthermore, an increase in debris-covered glacier extent has been observed in similar regions, which imposes new challenges for mapping, especially with conventional thresholding methodologies. Therefore, in this study we aim to generate updated and temporally consistent outlines of Tropical Andes glaciers by implementing a fully automatic routine supported by machine-learning approaches, suitable for evaluating ice volume change over the last decade in the tropics. For the mapping process, we present binary and multiclass segmentation approaches using state-of-the-art deep learning architectures to map the glacier extent across the Tropical Andes. Here, the Glacier-VisionTransformer-U-Net (GlaViTU), a hybrid deep learning model combining a segmentation transformer with a convolutional branch, was trained for large-scale glacier delineation using the most recent Peruvian glacier inventory from INAIGEM (Instituto Nacional de Investigación en Glaciares y Ecosistemas de Montaña), which is based on data from 2020 and includes debris-free and debris-covered glacier segmentation.
For training, the model was fed with diverse remote sensing data: optical imagery (Sentinel-2), topographic features (elevation and slope from the Copernicus DEM) and synthetic aperture radar (SAR) data (Sentinel-1 backscatter and coherence in ascending and descending orbits). Once trained, the model successfully reproduced the overall glacier extent of the Peruvian Andes with acceptable uncertainty values. Our results show that the binary segmentation (glacier/no glacier) achieves the best performance (IoU) compared to the multiclass approaches (no glacier/debris-covered glacier/debris-free glacier). For multiclass segmentation, debris-cover detection improves when the multiclass classification is applied after masking the data with the binary segmentation. However, debris-covered areas remain challenging for both multiclass approaches and show higher uncertainty in the binary approach. Nonetheless, coherence maps from repeat-pass acquisitions and multiple orbits have been shown to improve the differentiation between debris-covered and debris-free glacier areas, as well as to mitigate the impact of shadow and layover areas typically found in such mountainous environments. Moreover, even clouds partially occluding the glacier surroundings did not affect the delineation. This improvement is particularly important in regions where cloud-free optical images are difficult to acquire. Our study underscores the importance of combining remote sensing data to improve automated glacier mapping, particularly in areas with steep topography and potentially growing debris-cover extent. These results highlight the potential for multitemporal glacier monitoring in the entire Tropical Andes: they allow us to periodically map the glacier extent over the complete region and to evaluate temporal evolution and volume changes over the last decade in combination with remotely sensed DEMs, such as TanDEM-X acquisitions.
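The IoU score used above to compare the segmentation approaches can be illustrated on toy masks (both masks below are hypothetical, not study data):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for a binary mask, the standard metric for
    comparing a predicted glacier extent against a reference outline."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy example: predicted vs. reference glacier extent on a 3x3 grid.
pred = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [0, 0, 0]])
ref = np.array([[1, 1, 0],
                [1, 0, 0],
                [0, 0, 0]])

score = iou(pred, ref)  # 3 overlapping pixels / 4 pixels in the union = 0.75
```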
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Estimating Frontal Ablation at High Temporal Resolution in Svalbard With Sentinel-1 SAR Imagery and a Deep Learning Model

Authors: Dakota Pyles, Nora Gourmelon, Vincent Christlein, Dr Thorsten Seehaus
Affiliations: FAU Erlangen-Nuremberg, Institute of Geography, FAU Erlangen-Nuremberg, Pattern Recognition Lab
Frontal ablation is a key component of tidewater glacier mass loss, yet high temporal resolution estimates remain elusive due to difficulty in reliably capturing terminus position changes with satellite imagery. Recent development in automated delineation of glacier calving fronts, using machine learning techniques, has opened an opportunity to calculate frontal ablation over fine timescales. By segmenting Sentinel-1 synthetic aperture radar (SAR) image sequences with a deep learning-based terminus segmentation algorithm, we aim to quantify a decade of seasonal and annual frontal ablation from 2015-2024 for ~150 tidewater glaciers in Svalbard – results are expected in spring 2025. To calculate frontal ablation, workflow pipelines consist of the pre- and post-processing of Sentinel-1 SAR images to extract glacier termini, the creation of regional training data to assist the segmentation algorithm, the application of climate mass balance model outputs, and the generation of monthly ice flux calculations; ice flux estimates primarily leverage Sentinel-1 SAR-derived velocity fields, with Sentinel-2 optically-derived velocities resolving glaciers that have poor Sentinel-1 coverage. The resultant frontal ablation information is valuable to glacier models, which may benefit from high-resolution reference data and lead to improved calibrations and parameterizations. Svalbard, an Arctic region characterized by variable glacier and fjord geometries, served as a methodological test site and we now intend to expand the project scope by applying this method to the Canadian Arctic, Russian Arctic, Greenland periphery, and Alaska, or ~1240 additional marine-terminating glaciers in the Northern Hemisphere. Future project efforts will focus on mass budgeting for all glaciers in the study by integrating frontal changes and climatic mass balance data with geodetic mass balance estimates derived from TanDEM-X. 
To identify and evaluate external drivers of glacier change, the frontal ablation and mass balance products will be correlated with modeled and observational atmospheric, oceanic, and sea ice data. Through multivariate statistical analyses between these Earth system datasets and mass budget components, we look to provide an improved understanding of dynamic tidewater glacier processes, their spatio-temporal variability, and the influence of glacier geometry on observed changes across the Arctic.
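Frontal ablation of the kind estimated above is commonly computed as the ice flux through a gate near the terminus minus the volume change from terminus advance or retreat; the sketch below uses this common formulation with hypothetical numbers, not the project's actual method or results.

```python
RHO_ICE = 917.0  # ice density, kg m^-3

def frontal_ablation_gt(flux_gate_m3, dvolume_terminus_m3):
    """Frontal ablation in gigatonnes over one period.

    flux_gate_m3: ice volume discharged through the flux gate [m^3].
    dvolume_terminus_m3: volume change between gate and terminus from
        advance (+) or retreat (-) over the same period [m^3].
    Retreat (negative dvolume) adds to the total frontal mass loss.
    """
    ablation_m3 = flux_gate_m3 - dvolume_terminus_m3
    return ablation_m3 * RHO_ICE / 1e12  # kg -> Gt

# Hypothetical monthly example: 0.5 km^3 gate flux, 0.1 km^3 lost to retreat.
monthly_ablation = frontal_ablation_gt(0.5e9, -0.1e9)  # ~0.55 Gt
```

The gate flux itself comes from velocity fields (Sentinel-1, or Sentinel-2 where SAR coverage is poor) multiplied by ice thickness along the gate.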
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Long-term glacier albedo variations in Pakistan: a focus on the Hushe Basin

Authors: Blanka Barbagallo, PhD Davide Fugazza, Dr Lorenzo Raimondi, Guglielmina Adele Diolaiuti
Affiliations: Università Degli Studi Di Milano
Glaciers are highly sensitive to climate change and serve as key indicators of its impacts. Among glaciological parameters, albedo plays a crucial role in understanding glacier health and surface energy balance. In this study, we analyze albedo variations of all glaciers in Pakistan from 2013 to 2023, identifying significant trends and subsequently focusing on the Hushe basin, which exhibited the greatest reduction in albedo during this period (-0.8). The analysis utilizes the Harmonized Landsat Sentinel-2 product (HLSL30 v002) to study albedo trends between 2013 and 2023. This product is the result of a set of pre-processing algorithms, including atmospheric correction, cloud and shadow masking, illumination and view-angle normalization, and spectral bandpass adjustment; therefore, after applying an additional cloud mask, we were able to compute broadband albedo values. For the second part of the study, focusing on the Hushe basin, we extend the analysis over a longer period (1984–2024) using Landsat 5 and Landsat 8 Tier 1 imagery. These products must be corrected before albedo values can be computed, so two different GEE scripts were developed for this study. The Hushe basin, located in the buffer zone of the Central Karakoram National Park, is characterized by complex topography and significant glacier coverage (477.37 km² across 315 glaciers). Despite this, the area has received limited attention in glaciological research. Its elevation profile, with two-thirds of the glacier area between 4700 and 5700 m a.s.l. and smaller fractions above 6000 m (5.79%) and 7000 m (0.14%), provides a unique opportunity to study elevation-dependent albedo variations. Preliminary results indicate that 90% of Pakistan's glacier basins exhibit a slight increase in average albedo values over the last decade. However, an elevation-band analysis reveals that higher-altitude glaciers (above 6000–6500 m a.s.l.)
show greater variability and instability, while lower-altitude glaciers display more consistent trends, likely due to their larger number and area. The Hushe basin, with its pronounced elevation variability and complex glacier dynamics, provides an ideal case to further investigate these preliminary findings. The outcomes of this study are expected to enhance our understanding of regional climate dynamics and support the development of strategies to mitigate climate change impacts and sustainably manage natural resources.
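One widely used narrowband-to-broadband conversion for Landsat surface reflectance is Liang's (2001) shortwave formula; whether the study's GEE scripts use exactly this formula is an assumption, and the reflectance values below are purely illustrative.

```python
def liang_albedo(blue, red, nir, swir1, swir2):
    """Shortwave broadband albedo from Landsat TM/ETM+ surface reflectance,
    after Liang (2001): a weighted sum of five narrowband reflectances."""
    return (0.356 * blue + 0.130 * red + 0.373 * nir
            + 0.085 * swir1 + 0.072 * swir2 - 0.0018)

# Illustrative reflectances for a bright glacier/snow surface.
bright_ice_albedo = liang_albedo(0.85, 0.80, 0.75, 0.10, 0.05)  # ~0.70
```

Applied per pixel to the cloud-masked HLS or corrected Landsat stacks, this yields the broadband albedo time series whose trends the study analyzes.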
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Comparing Glacier Surface Velocity Methods with Satellite and UAV Imagery - the Example of Austerdalsbreen

Authors: Harald Zandler, Jakob Abermann, Benjamin Aubrey Robson, Alexander Maschler, Thomas Scheiber, Jonathan L. Carrivick, Jacob Clement Yde
Affiliations: Department Of Geography And Regional Science, University of Graz, Department of Earth Science, University of Bergen, Department of Civil Engineering and Environmental Sciences, Western Norway University of Applied Sciences, School of Geography and water@leeds, University of Leeds
Global warming causes profound changes in glacier dynamics, with strong impacts on natural hazards, sea-level rise and river discharge. A key component of these dynamics is glacial surface velocity, and various remote sensing methods exist for its quantitative analysis. At the scale of mountain glaciers, relatively high spatial resolution is required to achieve sufficient accuracy for a detailed understanding of glacier flow dynamics and associated changes. Traditional methods, such as different implementations of cross-correlation techniques, are ideal for slow-moving glaciers (<30 m per year) or low surface deformation between image acquisitions, but are often limited in cases of strong surface changes and large ranges in flow velocities. Additionally, the suitability of remote sensing sensors varies according to their resolution and noise. Therefore, we compare and evaluate different sensors and methods to determine (sub-seasonal) surface velocities during the one-year period 2023-2024 at Austerdalsbreen, an outlet glacier of the Jostedalsbreen ice cap, Norway, with surface velocities from 5 m to more than 100 m per year. To include several resolutions, we select different high-resolution platforms (UAV surveys resampled to 0.15 m and 0.6 m, 3 m PlanetScope imagery) and a moderate-resolution product (10 m Sentinel-2 data) for our analysis. We combine the respective sensors with traditional cross-correlation techniques, feature tracking algorithms (e.g., ORB) and novel, deep-learning-based feature matching approaches. We evaluate the derived velocities against manually mapped displacements based on high-resolution orthoimagery (< 0.05 m). Our results indicate limitations of cross-correlation methods in cases of large surface velocity variations with high-resolution data. The medium-resolution Sentinel-2 sensor showed more robust results for some fast-moving regions, but lower performance in other parts of the glacier.
Novel deep-learning techniques illustrate promising results and, applied to UAV datasets, resulted in accurate surface velocities over most parts of the glacier. In summary, our study demonstrates strengths and limitations of traditional and innovative state-of-the-art methods and sensors, thereby contributing to the derivation of essential glacier metrics with remote sensing approaches in a changing climate.
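The core of the cross-correlation techniques compared in this abstract is phase correlation; below is a self-contained sketch that recovers a known synthetic displacement, with pixel size and repeat interval chosen purely for illustration.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (row, col) offset between two images via the
    normalized cross-power spectrum (the basis of image cross-correlation
    feature tracking)."""
    cross_power = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap offsets larger than half the image size to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Synthetic "image pair": the second acquisition is the first, displaced by
# a known offset, mimicking glacier flow between two scenes.
rng = np.random.default_rng(42)
scene_t0 = rng.random((128, 128))
scene_t1 = np.roll(scene_t0, shift=(5, 3), axis=(0, 1))

dy, dx = phase_correlation_shift(scene_t0, scene_t1)
# Displacement x pixel size / time separation gives a surface velocity,
# e.g. assuming 10 m pixels and a 12-day image pair:
velocity_m_per_day = np.hypot(dy, dx) * 10.0 / 12.0
```

Operational tools refine this with sub-pixel peak interpolation and sliding windows; the deep-learning matchers mentioned above replace the correlation step with learned feature matching.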
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Coupling the MODIS and LANDSAT products to investigate the land surface temperature trends in High Mountain Asia

Authors: Sheharyar Ahmad, Dr. Giacomo Traversa, Dr. Biagio Di Mauro, Dr. Nicolas Guyennon, Dr. Franco Salerno, Mr. Luca
Affiliations: Ca' Foscari University of Venice
Surface temperature is a key parameter of the surface energy budget and influences a range of physical processes within the critical zone; high mountain regions that host glaciers, snow cover, and permafrost are particularly sensitive to increasing temperatures. However, ground-based instrumental monitoring of surface temperature is difficult to implement in remote mountainous areas with steep hillslopes. Alternatively, satellites offer the possibility to measure land surface temperature (LST) at different spatial and temporal resolutions. Many LST studies rely on data from the MODIS sensor, but assessment of the reliability of this information in high mountainous regions is limited by the low availability of station-based surface temperature data. This is particularly true in High Mountain Asia, where such data are practically absent. From a methodological point of view, we propose here a coupled use of thermal bands from MODIS, LANDSAT, and ground station data in order to validate multi-decadal LST trends. In parallel, from a scientific perspective, a cooling trend has been observed at high elevations in the Himalaya, close to the main glacier masses (https://doi.org/10.1038/s41561-023-01331-y). These recent findings deserve to be further investigated through satellite thermal products, together with the possible implications of LST for permafrost and vegetation evolution under global warming.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Drivers of Proglacial Lake Colour in Iceland

Authors: Natasha Lee, Professor Andrew Shepherd, Dr Emily Hill
Affiliations: The Centre for Polar Observation and Modelling (CPOM), Northumbria University, Newcastle University
Proglacial lakes often form due to the availability of meltwater at a glacier margin. As glaciers retreat due to the effects of climate change, the area of proglacial lakes has increased, with the greatest increase in proglacial lake area and volume currently occurring in the Arctic. This research investigated the relationship between proglacial lake colour and suspended sediment concentration across Iceland. Spatial variation in the colour of proglacial lakes in Iceland was quantified from high-resolution PlanetScope satellite imagery. Suspended sediment concentration was calculated from water samples collected by both autonomous-vehicle and near-shore methods from proglacial lakes in September 2024. Investigating the differences between the sediment within these proglacial lakes is expected to provide a clearer understanding of the causes of the variation in colour of proglacial lakes.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: F.04.20 - POSTER - EO in support of the regulation on Deforestation-free products (EUDR, EU 2023/1115)

Faced with mounting global environmental concerns and the urgency of addressing climate change, the EU has introduced the ground-breaking regulation on Deforestation-free products (EUDR, EU 2023/1115) targeting global deforestation. The EUDR ensures that seven key commodities – cattle, cocoa, coffee, palm oil, soy, timber, and rubber – and their derived products like beef, furniture, and chocolate, entering the EU market from January 2026 onwards, are not linked to deforestation after a defined cut-off date (December 2020).
The regulation obliges operators to establish robust due diligence systems that guarantee deforestation-free and legal sourcing throughout their supply chains to achieve this goal. Verifying compliance with these standards is crucial. The EUDR mandates using the EGNOS/Galileo satellite systems and exploiting the Copernicus Earth Observation (EO) program for this purpose. This involves, among others, cross-referencing the geographic locations of origin for these commodities and products with data from satellite deforestation monitoring.
By providing precise and detailed information on deforestation linked to commodity expansion, Copernicus and other EO data/products will help to detect fraud and strengthen the implementation of the policy by diverse stakeholders.
This session will delve into the latest scientific advancements in using EO data to support due diligence efforts under the regulation, including global forest and commodities mapping.
Topics of interest mainly include (not limited to):

- Classification methods for commodities mapping using EO data;
- World forest cover and land use mapping with EO data;
- Deforestation and GHG/carbon impacts related to commodity expansion;
- Field data collection strategies for EUDR due diligence;
- Practical examples of EO integration in global case studies;
- Machine learning / AI for deforestation detection and change analysis;
- EUDR compliance strategies: Integrating EO data with other datasets;
- Traceability in the Supply Chain: EO Data for Transparency.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: From GEE to CODE-DE: Transforming Deforestation Monitoring for EUDR Compliance and Global Forest Protection

Authors: Fatemé Ghafarian, Dr Melvin Lippe, Dr Margret Köthke
Affiliations: Thünen Institute Of Forestry
The European Union Deforestation Regulation (EUDR) (EU 2023/1115) establishes critical requirements to ensure that products placed on the EU market are free from deforestation and forest degradation. This regulation mandates the verification of land-use practices to prevent deforestation-linked products from entering the Union market, aiming to safeguard global forests and align with international climate goals. Effective implementation of the EUDR requires robust monitoring systems capable of delivering reliable risk assessments at multiple levels, supporting national authorities in their inspection and enforcement tasks. The CODED (Continuous Degradation Detection) algorithm, developed to detect deforestation and forest degradation using Earth observation data, has proven highly effective in identifying land-use changes. Originally implemented on Google Earth Engine (GEE), CODED leverages Sentinel-1 and Sentinel-2 data to analyze deforestation patterns over time. However, GEE's server infrastructure and data processing architecture are not compliant with the German Federal Office for Information Security (BSI) and EU legal standards for secure and lawful data storage, making it unsuitable for official applications under the EUDR for the case of Germany. To address these compliance issues, the RiMoDi (Risk-based Monitoring Service for Deforestation) project is transferring the CODED algorithm from GEE to CODE-DE, a German cloud platform designed for secure Earth observation data processing. This migration ensures that the monitoring tools align with the stringent security and operational requirements of the German competent authority responsible for EUDR compliance checks, the Federal Office for Agriculture and Food (BLE). The transfer involves adapting the algorithm to CODE-DE’s infrastructure, configuring virtual machines for secure access, and integrating process chains into government intranets. 
The presented study focuses on the technical implementation challenges and solutions involved in migrating the CODED processing chains from Google Earth Engine to CODE-DE. We rely on geolocation data from Côte d’Ivoire to test the migration and implementation. By leveraging CODE-DE’s capabilities, the RiMoDi project establishes a secure and scalable monitoring framework, enabling Germany to enforce the EUDR effectively. This approach not only ensures compliance with EU regulations but also enhances the national capacity for long-term environmental monitoring, contributing to global efforts to combat deforestation and forest degradation. Keywords: EUDR, Deforestation, Google Earth Engine, CODE-DE
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Readiness of Ethiopia's Coffee and Ghana's Cocoa sector for EUDR compliance

Authors: Kalkidan Ayele Mulatu
Affiliations: Alliance Bioversity-ciat
The European Union Deforestation Regulation (EUDR) marks a significant milestone in global efforts to mitigate deforestation and forest degradation caused by high-demand agricultural commodities. Aligning with key EU policy frameworks such as the European Green Deal and the Farm to Fork Strategy, the EUDR seeks to ensure sustainable production and consumption patterns. However, its stringent requirements pose unique challenges to smallholder farmers (SHFs) in developing countries, particularly those reliant on forest-related commodities like coffee and cocoa. This study examines the implications of the EUDR for SHFs in Ghana and Ethiopia, major producers of cocoa and coffee respectively, and assesses their readiness to meet EUDR traceability and due diligence requirements. By analyzing the transparency and operational demands of the EUDR, the research identifies gaps in technical infrastructure, digital capacity, and national datasets that are critical for compliance. Additionally, it highlights the risk of disproportionately disadvantaging SHFs in least developed countries with limited resources, potentially favoring better-equipped competitors. To address these challenges, the study proposes context-sensitive solutions, including leveraging national platforms, open-source tools, and Earth observation technologies to streamline traceability and reduce costs. Emphasis is placed on building technical capacity and fostering equitable systems to ensure the regulation supports both environmental goals and the livelihoods of SHFs. Ultimately, the study underscores the need for collaborative, inclusive approaches to implement the EUDR effectively while balancing its environmental and socio-economic impacts.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Advancing Commercial EO Solutions for EUDR Compliance: AI-Driven Insights for Deforestation and Degradation Monitoring

Authors: Anna Brand, Anna Seiche, Stefan Kirmaier, Jonas
Affiliations: Remote Sensing Solutions GmbH
One-third of the world’s forests have already been cleared because of agricultural expansion, contributing significantly to environmental degradation, biodiversity loss, and the acceleration of climate change. In response, the European Union introduced the EU Deforestation Regulation (EUDR, EU 2023/1115) as a policy solution, which mandates that certain products and their derivatives entering the EU market, such as cocoa, coffee, palm oil, soy, beef, rubber, and wood, must not be associated with deforestation. The effective operational implementation of such initiatives, however, requires innovative tools: rigorous monitoring and verification systems that provide traceable and independent information on deforestation-free supply chains. Earth Observation (EO) data plays a critical role in this process, with satellite missions offering a constant stream of objective data on how forests are changing in near-real-time on a global scale. Our approach addresses the need for practice-oriented monitoring and streamlined data processing by integrating the full time series of Sentinel-2 imagery with advanced artificial intelligence (AI) methodologies and efficient cloud processing capabilities. Combining these strengths, we developed a scalable platform-based solution for companies and regulators to ensure supply chain transparency and compliance. Existing methods often rely on third-party land cover or deforestation datasets to assess compliance for each plot of land where the commodities were produced. This dependency on global layers introduces risks of inaccuracies at local scales and limits timeliness, as such datasets are only available at specific intervals. In contrast, our bitemporal approach systematically compares all image pairs within the complete time series of satellite data at the local scale, enabling the continuous provision of actionable updates.
The system employs convolutional neural networks (CNNs), which are especially suited to recognizing spatial patterns in satellite imagery. This allows for highly accurate detection of deforestation and degradation. The algorithms are trained on manually labeled datasets that are specifically designed to distinguish between natural forest loss and tree cover loss as seen in plantation clearings. This differentiation allows an analysis independent of third-party data and reduces the risk of misclassifications, particularly in plantation-heavy regions, while mitigating potential supply chain disruptions or regulatory fines. Building on this capability, the system also incorporates advanced detection of forest degradation, recognized as an early warning sign of deforestation. It allows stakeholders to identify risks within the supply chain, enabling proactive, data-driven interventions to safeguard the sustainability of sourcing practices. Validated through quantitative and qualitative assessments, including fieldwork in Indonesia and performance metrics analyzed across independent test scenes, the service ensures robust, transparent and reliable insights. By leveraging cloud infrastructure, our system has been integrated into a platform that enables scalable analysis of globally distributed sourcing areas. Featuring a user-friendly dashboard and API, the platform offers an effective solution to monitor compliance with regulatory requirements while optimizing operational efficiency. To demonstrate its functionality, the service will be showcased using a real-world example from the tropics, where challenges such as cloud cover and rapid land-use change complicate monitoring efforts. The system’s output includes compliance reports tailored to meet EUDR requirements, illustrating how businesses and regulators can use the platform to ensure transparency and traceability. This solution represents an important step in the commercialization of EO services.
It not only addresses regulatory compliance but also highlights the broader potential of EO and AI in sustainability monitoring and nature-based solutions. By delivering actionable insights in near-real-time, it empowers industries and regulators to advance environmental stewardship while ensuring resilient, sustainable supply chains for the future.
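The all-pairs bitemporal comparison described above can be sketched in a few lines. The following toy NumPy version (an illustrative simplification, not the authors' CNN pipeline) flags pixels whose vegetation signal drops sharply between any earlier/later image pair in a time series:

```python
import itertools
import numpy as np

def bitemporal_change_mask(series: np.ndarray, drop: float = 0.3) -> np.ndarray:
    """Flag pixels whose vegetation signal drops by more than `drop`
    between ANY earlier/later image pair in the time series.

    series: array of shape (T, H, W), e.g. an NDVI time series.
    Returns a boolean (H, W) mask of candidate clearing pixels.
    """
    mask = np.zeros(series.shape[1:], dtype=bool)
    for i, j in itertools.combinations(range(series.shape[0]), 2):
        mask |= (series[i] - series[j]) > drop   # j is later than i
    return mask

# Toy example: one pixel loses vegetation between date 1 and date 2.
ndvi = np.array([
    [[0.8, 0.8]],
    [[0.8, 0.8]],
    [[0.2, 0.8]],   # pixel (0, 0) drops sharply -> flagged
])
print(bitemporal_change_mask(ndvi))  # [[ True False]]
```

Comparing all pairs, rather than only consecutive dates, is what makes the approach robust to single cloudy or noisy acquisitions: a genuine clearing shows up against every earlier cloud-free date.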

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Harnessing AI for Field Boundary Detection in South America

Authors: Tristan Grupp
Affiliations: World Resources Institute
To support agricultural commodity traceability and the protection of critical landscapes, the World Resources Institute (WRI), with support from the Walmart Foundation, is developing an open-source toolkit for automated field boundary detection in South America. Field boundaries are a foundational unit of analysis for monitoring land use, assessing deforestation risk, and ensuring compliance with sustainability commitments and regulations, including the European Union Deforestation Regulation (EUDR). Yet field boundaries remain difficult to detect accurately at scale, and existing datasets are often incomplete or outdated. To address this challenge, WRI has partnered with Dr. Hannah Kerner’s lab at Arizona State University (ASU) to implement, test, and iteratively improve a machine learning-based field boundary detection model. The project focuses on soy and other key agricultural supply chains in Latin America, where deforestation pressures are high and transparency is urgently needed. The resulting tools and datasets aim to enable actors across the supply chain to better monitor field-level activity, assess compliance risks, and drive sustainable land management practices.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Implementing Commodity Mapping and Change Detection Services in the Control System for EU Regulation 2023/1115 (EUDR)

Authors: Marco Corsi, Laura De Vendictiis, Simone Tilia, Fabio Volpe, Colonel Giancarlo Papitto, Pasquale Pistillo
Affiliations: e-GEOS S.p.A., Via Tiburtina 965, Rome, 00156, Italy, https://www.e-geos.it, ARMA DEI CARABINIERI, CUFAA office projects, https://www.carabinieri.it/chi-siamo/oggi/organizzazione/tutela-forestale-ambientale-e-agroalimentare, STARION, https://www.stariongroup.eu/
EU Regulation 2023/1115 (EUDR) establishes strict requirements for preventing commodities linked to deforestation and forest degradation from entering the EU market. It mandates traceability, monitoring, and compliance mechanisms for commodity supply chains to mitigate environmental impacts and ensure sustainable practices. Effective implementation of these measures requires advanced monitoring systems capable of integrating multi-source data for comprehensive land-use analysis. This work presents a methodology for commodity mapping and change detection designed to support EUDR compliance. Commodity mapping utilizes a Vision Transformer (ViT)-based classifier [1],[2] applied to time series data from Sentinel-2 imagery. The approach leverages spectral and temporal features to classify land cover and monitor the presence of specific crops, such as coffee, within the framework of agricultural land use analysis. Forest monitoring is performed using a bi-temporal change detection approach based on Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 optical data. Changes are detected with SiROC (Spatial Context Awareness for Unsupervised Change Detection in Optical Satellite Images) [3], an unsupervised change detection method that requires only minimal bi-temporal imagery and leverages the spatial relationships between pixels to distinguish true land cover changes, such as deforestation and forest degradation, from noise or transient phenomena. Pre-processing steps include radiometric correction, cloud screening, atmospheric adjustment, and vegetation index computation to ensure consistent input data.
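As an illustration of the pre-processing and change-scoring steps described above, the sketch below computes NDVI and a toy spatial-context change score. The `neighbourhood_change` helper is a hypothetical simplification for exposition only, not the actual SiROC formulation:

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, one of the pre-processing
    outputs mentioned in the abstract (epsilon avoids division by zero)."""
    return (nir - red) / (nir + red + 1e-9)

def neighbourhood_change(before: np.ndarray, after: np.ndarray, win: int = 1) -> np.ndarray:
    """Toy spatial-context change score: the per-pixel difference minus the
    local mean difference, so scene-wide shifts (e.g. illumination) cancel
    while isolated changes stand out. Not the actual SiROC method."""
    diff = before - after
    pad = np.pad(diff, win, mode="edge")
    local = np.array([[pad[i:i + 2 * win + 1, j:j + 2 * win + 1].mean()
                       for j in range(diff.shape[1])]
                      for i in range(diff.shape[0])])
    return diff - local

before = np.full((3, 3), 0.8)          # NDVI before the event
after = before.copy()
after[1, 1] = 0.2                      # one cleared pixel
score = neighbourhood_change(before, after)
print(int(score.argmax()))             # 4 -> the centre pixel stands out
```

Subtracting the local mean is the simplest way to encode the intuition behind spatial context awareness: a pixel is only a change candidate if it differs from its own neighbourhood, not just between dates.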
The methodology incorporates an optional quality check step to validate outputs, reducing uncertainties and improving reliability. The proposed system integrates EO-based techniques with optional validation workflows, providing a scalable tool for tracking land-use changes and monitoring compliance with EUDR. The presentation will illustrate an example of a map of soybean and coffee crops generated using the Land Cover processor, specifically designed for remote monitoring of agricultural regions. The map, produced from Sentinel-2 data, highlights distinct crop areas: orange polygons represent soybean fields, while brown polygons indicate coffee plantations. An inset displays a typical Normalized Difference Vegetation Index (NDVI) cycle for soybean in Brazil, which is used to track crop growth phases, showing both the first and second harvest periods. This kind of remote sensing facilitates precision agriculture and crop management in geographically isolated areas.
References:
1. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv preprint arXiv:2010.11929 (2020)
2. Oguiza, I.: TSiT: PyTorch implementation based on ViT (Vision Transformer). Available at: https://timeseriesai.github.io/tsai/models.tsitplus.html
3. Kondmann, L., Toker, A., Saha, S., Schölkopf, B., Leal-Taixé, L., & Zhu, X. X. (2022). Spatial Context Awareness for Unsupervised Change Detection in Optical Satellite Images. IEEE Transactions on Geoscience and Remote Sensing, 60, 1–15. https://doi.org/10.1109/TGRS.2021.3130842

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Traceability in the Supply Chain: EO Data for Transparency

Authors: Yu Dong, Zahra Dabiri
Affiliations: EXDIGIT, University of Salzburg
As the climate crisis intensifies, Earth Observation (EO) technologies play a vital role in advancing sustainability, supporting global regulatory compliance, and achieving the United Nations Sustainable Development Goals (SDGs), particularly SDG 12 (Responsible Consumption and Production) and SDG 15 (Life on Land). The European Union Deforestation Regulation (EUDR, EU 2023/1115) exemplifies how EO can address these challenges, targeting deforestation-free supply chains for key commodities such as coffee, cocoa, palm oil, and timber. Effective from January 2026, the EUDR mandates robust due diligence systems to verify legally sourced and deforestation-free products after December 2020. Central to achieving this is the integration of EO data, particularly from the Copernicus program provided by the European Space Agency (ESA), for example Sentinel-2 optical and Sentinel-1 synthetic aperture radar (SAR) data and derived products, to provide comprehensive monitoring capabilities. This work explores how the synergy of optical and SAR EO data enhances supply chain transparency and traceability and supports the EUDR. Sentinel-1 SAR capabilities, unaffected by cloud cover, provide continuous monitoring, especially when optical data cannot be used due to atmospheric conditions, complementing the spectral richness of Sentinel-2 imagery to enhance spatial and temporal precision. However, the applicability of SAR data in complex environments is influenced by sensor characteristics, such as wavelength, and target characteristics, such as type and geometry. By integrating time series analysis of SAR and optical data, we demonstrate the strengths of these datasets to enable the detection of land-use changes and deforestation trends, even in challenging conditions such as tropical cloud cover or areas with spectral confusion, like shaded coffee plantations.
Machine learning and geospatial analysis further improve the accuracy of deforestation alerts and land cover classification, addressing the complexities of distinguishing between forest and commodity plantations. We demonstrate a practical case study and illustrate how EO technologies empower stakeholders—including regulators, industry operators, and auditors—to validate commodity origins, detect non-compliance or fraud, and ensure alignment with the EUDR requirements. The practical case study focuses on coffee plantation monitoring using time series SAR and optical EO data, covering the period of 2019 to 2024 and utilizing machine learning techniques, such as random forest. The results will demonstrate the strengths and challenges of SAR and optical EO data utilization, such as data accessibility, accuracy variability, and integration challenges. By identifying these constraints and exploring potential solutions, this work aims to uncover opportunities for enhancing EO’s effectiveness in monitoring deforestation and promoting sustainable supply chains. This highlights EO’s transformative potential to advance regulatory compliance, foster collaborative climate action, and support the achievement of global sustainability goals.
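A random forest classifier of the kind mentioned above can be sketched with scikit-learn. The per-pixel feature vectors below (mean VV/VH backscatter, mean NDVI, NDVI seasonal amplitude) and all numeric values are invented stand-ins for the SAR/optical time-series features, not the study's actual training data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy per-pixel feature vectors: [mean VV backscatter (dB), mean VH
# backscatter (dB), mean NDVI, NDVI seasonal amplitude]. Two well-separated
# synthetic classes stand in for forest vs. coffee plantation pixels.
rng = np.random.default_rng(0)
forest = rng.normal([-7.0, -13.0, 0.80, 0.10], 0.02, size=(50, 4))
coffee = rng.normal([-8.5, -15.0, 0.65, 0.30], 0.02, size=(50, 4))

X = np.vstack([forest, coffee])
y = np.array([0] * 50 + [1] * 50)          # 0 = forest, 1 = coffee

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[-7.0, -13.0, 0.80, 0.10],    # forest-like pixel
                   [-8.5, -15.0, 0.65, 0.30]]))  # coffee-like pixel
```

In practice each feature would be derived from the 2019–2024 time series per pixel; stacking SAR and optical features in one vector is what lets the model exploit their complementarity.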

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: How to support smallholders in proving EUDR compliance? A feasibility study

Authors: Florian Schlenz, Johannes Sommer, Jenny Franco, Michael Holzapfel, Rita Lunghamer, Stefan Scherer
Affiliations: Geocledian GmbH
As a reaction to global deforestation and climate change, the EU has adopted the potentially disruptive Deforestation-Free Regulation (EU 2023/1115). From January 2026, seven key commodities and their derivatives - cattle, cocoa, coffee, oil palm, rubber, soy and wood - that enter the EU market must be deforestation-free and from legal sources. Every importer needs to prove this by providing sourcing information down to the plot level. An essential and at the same time probably the most difficult requirement is to record the production areas and locations of small farmers in particular. EO data can be used to provide the transparency needed for the deforestation check for each plot of land and thus support supply chain traceability. Typically, these checks are integrated in supply chain traceability platforms. Violation of the EUDR results in the product being excluded from the market, which represents an enormous economic risk for companies. The obligations to provide evidence and their short-term implementation pose enormous challenges for companies, their suppliers and producers, including smallholders. Different challenges need to be solved at every point in the supply chain. However, there is a particular risk for small farmers in developing countries, who produce large quantities of these raw materials. In the case of coffee, for example, they are responsible for 70% of global production. If smallholders are unable to provide the required evidence, there is a very real risk that they will be excluded from the European market. The consequences are low revenues and further impoverishment of small farmers. It remains to be seen whether the EUDR’s traceability mandates truly support sustainable practices or inadvertently exclude smallholders, impacting their economic stability and market access. So, how can smallholders be supported in achieving compliance with the EUDR requirements?
In the frame of the “EUDR-Check” project (funded by BMWK Germany, grant number 16GM103702) we have developed and tested an app-based approach to support cocoa and coffee smallholders in demonstrating compliance with the EUDR. The app allows smallholders to record production areas and confirm their conformity with the EUDR criteria in the form of a certificate. Smallholders can do this with a free app and pass on the certificates digitally with a traceability solution using blockchain technology. The solution is free for smallholders, while the costs are borne by certificate users and traceability users. Buyers of cocoa/coffee can thus comply with EU regulations and maintain market access to the EU. At the same time, access to the EU market is guaranteed for local producers in compliance with EUDR standards, which promotes sustainability in production without disadvantaging smallholders. The app is linked to an EO-centered EUDR compliance check API built on top of a powerful and scalable IT system that can support large numbers of users. The solution addresses multiple issues:
1. Smallholders are enabled to capture the geolocation of their plot of land in a very simplified manner.
2. EUDR deforestation compliance is automatically checked based on this EO-driven solution. In cases of negative results, additional information can be collected on site to mitigate a negative result.
3. Buyers are provided with the mandatory geolocation, including the deforestation compliance, at no additional effort.
4. The integrity of the geolocation, the compliance check and the traded good is secured by the traceability solution applied and the blockchain backend.
5. The smallholders do not bear additional costs for the EUDR compliance.
With this integrated solution, cocoa and coffee buyers can fulfill the due diligence obligations of the EUDR economically and efficiently.
At the same time, access to the EU market is guaranteed for local cocoa and coffee producers, as the producers themselves meet the technical requirements of the EUDR by localizing the production site and embedding it in the entire product supply chain. In this way, the solution contributes to greater sustainability in cocoa and coffee production without adversely affecting small-scale producers. In the frame of the feasibility study we are evaluating:
- the technical solutions of an EO-based EUDR deforestation compliance check
- a test implementation of the data collection app for smallholder farmers
- the acceptance of the app
- the data flow through the supply chain
- the business model
- challenges along the way
We will report on the first findings of the project and introduce our API-based, EO-driven EUDR compliance check methodology used in this project.
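The core of any plot-level deforestation check is testing a recorded geolocation against alert geometries. The sketch below is a hypothetical, dependency-free stand-in for that step (the polygon coordinates and the ray-casting helper are illustrative, not the project's actual API):

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is the recorded plot location inside the polygon?
    `polygon` is a list of (lon, lat) vertices. Illustrative stand-in for
    the EO-driven compliance check described above."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):                      # edge crosses the ray
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# A hypothetical deforestation-alert polygon and two smallholder plots.
alert = [(-5.2, 6.0), (-5.0, 6.0), (-5.0, 6.2), (-5.2, 6.2)]
print(point_in_polygon(-5.1, 6.1, alert))  # True  -> flagged, needs follow-up
print(point_in_polygon(-4.5, 6.1, alert))  # False -> no alert at this plot
```

A production service would of course use a geospatial library with proper CRS handling and polygon plots rather than single points, but the pass/flag decision per plot reduces to this kind of containment test.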

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Enhancing Satellite-Based Forest Monitoring for Accurate and Cost-Efficient Compliance With the EU Deforestation Regulation Through Standardized Benchmarking, Ground-Truthing, and Integration of Advanced Technologies.

Authors: Anton Eitzinger, Koimé Kouacou
Affiliations: Veo Partners
The adoption of the EU Deforestation Regulation (EUDR), which requires certain imported products to be deforestation-free, has created a strong demand for reliable monitoring solutions. In response, the satellite-based forest monitoring market has experienced rapid growth, driven by the need for accurate and efficient methods to ensure compliance with the regulation. This trend has been particularly significant for businesses in industries such as agriculture and forestry, where advanced monitoring solutions are essential to meet the EUDR's stringent requirements. Satellite-based forest monitoring provides distinct technical advantages tailored to deforestation-free requirements. For instance, it enables the collection of high-resolution data across vast forested areas, allowing businesses to monitor entire provenances without relying on limited ground-based assessments. It also reduces the need for frequent physical site inspections, significantly cutting costs and logistical complexity. Moreover, satellite systems offer near real-time tracking of forest conditions, enabling companies to quickly identify and address deforestation risks or breaches. Compared to manual verification methods, satellite monitoring is not only more scalable but also provides a cost-efficient way to ensure transparency and compliance with regulatory demands. While satellite-based forest monitoring is a powerful tool for ensuring compliance with the EU Deforestation Regulation (EUDR), it has several notable shortcomings that can impact its effectiveness and lead to the misclassification of deforestation or forest degradation in certain areas. A significant limitation is the inability to differentiate tree species or forest types, such as distinguishing between primary forests and planted forests, unless there are obvious physical differences detectable in the imagery. 
Furthermore, while satellite data provides a broad overview, it often lacks the granularity needed for reliable assessments, making ground-truthing through field observations a necessary step to validate the data. Technical constraints also arise from canopy penetration limitations, as optical sensors can only capture light reflected from the top of the forest canopy, leaving understory conditions largely invisible. This limitation can obscure important details about forest health and biodiversity. A false positive—incorrectly identifying compliant land as deforested—could result in producers, particularly vulnerable smallholder farmers, being unjustly excluded from the European premium market. On the other hand, a false negative—failing to detect actual deforestation—could expose operators or traders to regulatory breaches, potentially resulting in fines of up to 4% of their annual turnover, a substantial penalty for non-compliance. To address these issues, we propose a benchmarking framework to evaluate and improve the reliability of satellite-based forest monitoring systems in addressing these limitations. The framework will establish standardized metrics for assessing system performance, including precision, recall, and cost-efficiency, to provide commercial users with a transparent basis for selecting suitable monitoring solutions. A key component of this approach involves integrating ground-truthing efforts with the participation of smallholder farmers, leveraging their local knowledge to validate satellite data and improve detection accuracy. By fostering collaboration between stakeholders and embedding smallholder contributions into the verification process, the framework ensures inclusivity and fairness while enhancing data reliability. 
By aligning these advancements with the specific requirements of the EUDR, this approach supports businesses in navigating the regulatory landscape, minimizes the risk of misclassification, and mitigates associated economic impacts.
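The standardized metrics proposed above can be made concrete with a small helper; the confusion counts below are hypothetical, for illustration only:

```python
def benchmark(tp, fp, fn):
    """Standardized benchmarking metrics: precision penalizes false
    positives (compliant land wrongly flagged as deforested), recall
    penalizes false negatives (missed deforestation)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts from a ground-truthed validation campaign.
p, r, f1 = benchmark(tp=90, fp=10, fn=5)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.9 0.947 0.923
```

Reporting precision and recall separately, rather than a single accuracy figure, matters here because the two error types carry asymmetric costs: false positives risk excluding smallholders from the market, while false negatives expose operators to fines.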

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Fine Scale Cocoa Mapping With Deep Learning Methods

Authors: Kasimir Orlowski, Filip Sabo, Dr. Astrid Verhegghen, Dr. Michele Meroni, Dr. Felix Rembold
Affiliations: FINCONS S.P.A., European Commission, Joint Research Centre, ARHS Developments Italia S.R.L., Seidor Consulting
Mapping and characterizing cocoa planted areas with Earth Observation data and accurately disentangling them from other land cover is not only paramount for effectively monitoring and reporting on sustainability goals related to cocoa production but also for the EU Deforestation Regulation. However, accurately representing the complexity of the cocoa planted area is a challenging task. Cocoa is grown mostly on smallholder plantations with various agricultural practices, ranging from mono-cultural plantations to agroforestry systems with cocoa shaded by other trees with varying densities and spatial distribution. Here we combine a curated dataset of cocoa plot locations and very high resolution (VHR; 0.5 m) multispectral satellite imagery covering ∼33% of the Ivory Coast area in a deep learning framework to map cocoa. The selected deep learning model is based on a U-Net architecture with an EfficientNet-B5 encoder. To train the model, batches of tiles of 512x512 pixels were used and two sample sizes were tested: i) 221,158 and ii) 2,069,855 (full dataset) tiles. Both samples were split into 70% training and 30% validation. An independent and randomly selected VHR image (66,244 ha) served as a test set. Despite the heterogeneity of cocoa plantations, our model was able to generalize well and to differentiate between cocoa and non-cocoa areas accurately at this unprecedented spatial resolution. Results show that the improvement related to the use of a larger sample was limited (F1: +2.3%) and not proportionate considering the increase in training time (22 h to 153 h). The best performance metrics on the test set with the first (smaller) sample size gave an F1 score of 0.92 with Precision and Recall of 0.93 and 0.91, respectively. Building on the results of this study, current work focuses on the characterization of the shading level of cocoa plantations, by applying the Meta canopy height prediction model (Tolan et al.
2024) to the same set of VHR imagery in order to separate larger non-cocoa trees from cocoa trees.
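The tiling scheme described above (non-overlapping 512x512 training tiles cut from a large scene) can be sketched as follows; the scene dimensions are invented for illustration:

```python
import numpy as np

def tile_image(img: np.ndarray, size: int = 512):
    """Split a (H, W, bands) scene into non-overlapping size x size
    training tiles; any partial tiles at the edges are dropped."""
    h, w = img.shape[:2]
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

# A dummy 1024 x 1536 pixel "VHR scene" with 4 bands yields 2 x 3 = 6 tiles.
scene = np.zeros((1024, 1536, 4), dtype=np.uint8)
tiles = tile_image(scene)
print(len(tiles), tiles[0].shape)  # 6 (512, 512, 4)
```

The resulting tile list is what would be shuffled and split 70/30 into training and validation batches, as in the experiment described above.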

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Continental-Scale Tree Crop Mapping in South America

Authors: Yuchang Jiang, Anton Raichuk, Stefan Istrate, Dan Morris, Katelyn Tarrio, Nicholas Clinton, Dr. Vivien Sainte Fare Garnot, Prof Konrad Schindler, Professor Jan Dirk Wegner, Maxim Neumann
Affiliations: Google DeepMind, University of Zurich, Google Research, Google Geo, ETH Zurich
Tree crop expansion in South America, a global production hotspot, contributes significantly to economic development but also drives deforestation and habitat loss within crucial ecosystems like the Amazonian rainforest. Accurate and high-resolution tree crop maps are crucial for sustainable land management, supply chain transparency, and the effective enforcement of regulations like the European Union Deforestation Regulation (EUDR) [1]. This study presents a novel deep learning approach for continent-wide, high-resolution (10-meter) tree crop mapping in South America. Leveraging a transformer-based architecture, our model effectively integrates multi-modal, multi-temporal Sentinel-1 and Sentinel-2 satellite data. To train this model, we have constructed a large-scale dataset of 100,000 samples evenly distributed across the continent, encompassing diverse forest, tree crops (including coffee and oil palm), and non-woodland classes. We use this extensive and diverse dataset to train our segmentation model and generate a continental-scale, 10-meter resolution map of tree crops for 2020. Our resulting tree crop map reaches high accuracy on two independent validation datasets for coffee in Brazil and oil palm in Peru, outperforming existing baseline methods. Comparative analysis reveals that our map consistently distinguishes tree crop areas within the generalized forest class in Brazil, Peru, and Colombia. The research once more highlights the power of deep learning for accurate, large-scale vegetation monitoring. Our high-resolution map provides valuable information for diverse stakeholders, supporting decision-making in service of conservation efforts, sustainable development planning, and compliance with regulations aimed at reducing deforestation through agricultural expansion. 
By enabling precise identification of areas converted from natural forest to tree crop plantations, our work directly contributes to the implementation of the EUDR and promotes responsible land management practices in South America.
Keywords: Tree Crop Mapping, Remote Sensing, South America, Deforestation, Sustainability, EUDR, Deep Learning, Sentinel-1, Sentinel-2, Transformer
References:
[1]: European Union, Regulation of the European Parliament and of the Council on the making available on the Union market and the export from the Union of certain commodities and products associated with deforestation and forest degradation and repealing Regulation (EU) No 995/2010. https://data.consilium.europa.eu/doc/document/PE-82-2022-INIT/en
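One common way to feed multi-modal, multi-temporal Sentinel-1/Sentinel-2 data to a transformer is to concatenate the modalities per time step into one token sequence. The band counts and fusion scheme below are assumptions for illustration, not necessarily the authors' architecture:

```python
import numpy as np

# Toy shapes for one pixel/patch time series: 12 monthly composites,
# 2 Sentinel-1 bands (VV, VH) and 4 Sentinel-2 bands (B2, B3, B4, B8).
T = 12
s1 = np.random.rand(T, 2)   # radar time series
s2 = np.random.rand(T, 4)   # optical time series

# Per-time-step concatenation: the transformer sees T tokens of
# dimension 2 + 4 = 6, so self-attention can relate any two dates.
tokens = np.concatenate([s1, s2], axis=1)
print(tokens.shape)  # (12, 6)
```

This per-date fusion keeps the temporal axis as the sequence dimension, which is what lets attention capture phenology differences between, say, coffee and natural forest.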

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Global Mapping of EUDR Commodities for Better Forest Baselines and Identifying Deforestation Drivers

Authors: Michel Wolters, Nikoletta Moraiti, PhD Luca Foresta, Niklas Pfeffer, PhD Niels Anders, Niels Wielaard, Rens Masselink
Affiliations: Satelligence
Mapping soft commodities such as oil palm, coffee, soy, and cocoa is critical for implementing the European Union Deforestation Regulation (EUDR), which aims to prevent deforestation linked to the production of goods imported into the EU. Accurate mapping ensures transparency in supply chains, enabling the identification of drivers of deforestation for commodity production. Furthermore, it is important to distinguish old-growth commodities from natural forests in maps that function as a forest baseline, so that deforestation, rather than other land cover changes, is tracked accurately. At Satelligence, we create commodity maps for given target years using a combination of in-house modelling and third-party openly available layers. This way, we keep our forest baseline up-to-date while attributing deforestation events to a specific commodity. We will present our methodology and results for a number of relevant commodities, such as oil palm and soy. As input data to our models, we use Sentinel-1, Sentinel-2, and Landsat imagery (along with derived metrics and indices) processed with an engine that uses FORCE for optical data preprocessing and ISCE for Sentinel-1 radar preprocessing to generate analysis-ready composites. Our approach employs a tile-based system for collecting training and testing samples, building classification models, evaluating results, and seamlessly merging tiles into a unified global map. Creating the training data involves using anonymized plot data provided by clients and partners as input for semi-supervised learning methods, which serve as a preliminary qualitative assessment, enabling the automated filtering of irrelevant land cover pixels and isolating those that correspond to the target land cover. By reducing the need for fully manual digitization and labelling, this approach ensures efficiency while maintaining accuracy.
The filtered and labeled samples, combined with the feature data, are then used to construct a sample database. This database serves as the foundation for the machine learning models, facilitating precise and scalable land cover mapping. Furthermore, we implement a decision-tree-based model that integrates commodity classifications across multiple years, minimizing the need for extensive postprocessing while enhancing accuracy. Finally, the accuracy of the maps is independently assessed and discussed, and we additionally show comparisons against openly available datasets.
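The multi-year integration step can be illustrated with a simpler stand-in: the abstract describes a decision-tree-based model, whereas the sketch below uses a plain per-pixel majority vote across yearly class maps to show the idea of suppressing one-off misclassifications:

```python
import numpy as np

def consolidate_years(yearly_maps: np.ndarray) -> np.ndarray:
    """Keep the per-pixel majority class across years (yearly_maps has
    shape (years, H, W) with integer class codes). A simplified stand-in
    for the decision-tree-based integration described above."""
    t, h, w = yearly_maps.shape
    out = np.empty((h, w), dtype=yearly_maps.dtype)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.bincount(yearly_maps[:, i, j]).argmax()
    return out

# Pixel (0, 0): class 1 (e.g. oil palm) in 2 of 3 years despite one
# noisy year; pixel (0, 1): consistently class 0.
maps = np.array([[[1, 0]], [[1, 0]], [[0, 0]]])
print(consolidate_years(maps))  # [[1 0]]
```

A real decision-tree integration can additionally encode one-way transitions (forest can become plantation, but not the reverse within a year), which a symmetric majority vote cannot express.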

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Approaching the EUDR by a combination of crowd sourcing and remote sensing

Authors: Manuela Hirschmugl, Nik Cepirlo, Koimé Kouacou, Caroline Kunesch
Affiliations: Joanneum Research, University of Graz, Beetle4Tech, Boku University
According to the United Nations Food and Agriculture Organization, the world has lost 420 million hectares of forest through deforestation over the past 30 years, significantly affecting the forests’ multiple and highly essential functions. Agricultural expansion is estimated to cause almost 90% of global deforestation. Seven forest risk commodities (FRCs), namely palm oil, soy, timber, cocoa, coffee, beef and natural rubber, account for almost 84% of EU-driven deforestation. EU consumption of these FRCs is responsible for about 10% of global deforestation. The European Commission has acknowledged these facts and the related responsibility and has therefore proposed a regulation to put an end to EU-driven deforestation. The regulation on deforestation-free products and commodities (EUDR) comprises a legal framework based on mandatory due diligence requirements for companies placing forest- and ecosystem-risk commodities and derived products on the EU market. One of the main tasks in preparing for future EUDR implementation is to use satellite positioning systems and the Copernicus Earth Observation (EO) program. A multitude of projects and studies in the past have shown possibilities and success stories of deforestation mapping by remote sensing (Hamunyela, 2017; Hansen, 2016; Kennedy et al., 2010), yet difficulties remain in areas with agro-forestry systems (Mananze et al., 2020) and for detecting forest degradation. These difficulties can be attributed to the similarity of spectral response for the classes to be separated and the class-inherent spectral heterogeneity. Several approaches use the full potential of the time series to overcome the issues of spectral similarity at one point in time (Verbesselt et al., 2010; Zhu et al., 2012), while at the same time providing data in a timelier, ‘near-real-time’ manner (Puhm et al., 2020; Zhu et al., 2016).
Nevertheless, uncertainties remain relatively high; thus, in-situ information is needed in many cases to improve the classifications and/or to verify the achieved data on the ground. Crowdsourcing and citizen science offer innovative opportunities to gather huge amounts of data. However, not all crowdsourced data is also in-situ data. An important example of information provided by the crowd through image interpretation is the WHISP (What is in that plot?) initiative. In-situ crowdsourcing can be supported by dedicated apps guiding participants and helping to generate useful data in a fast and efficient manner. This is especially important as more and more deep learning approaches, which are known to be extremely data-hungry, are being employed. Our work focuses on a combination of in-situ crowdsourced data collection with Sentinel-2 remote sensing classification. An additional, equally important component is the analysis of the drivers for people in the Global South to contribute to such crowdsourcing initiatives aiming to tackle deforestation. We tested our approaches in Côte d'Ivoire, West Africa, with different users along the supply chain: from smallholder farmers, through processors and exporters, to organizations in the broader ecosystem like the Conseil Café Cacao and certification bodies. Regarding the remote sensing aspects, the first results show that time series-based analysis led to higher accuracies for change mapping compared to bi-temporal change detection. Accuracy was assessed with stratified sampling according to Olofsson et al. (2014) by a person not involved in the mapping. The overall accuracy reached 68 vs. 72% for completely blind evaluation and 69 vs. 82% for plausibility evaluation for bi-temporal and time series classification, respectively.
The plausibility evaluation used a simple boundary rule: if a verification point was closer than 20 m to the boundary of a change, it was still considered correctly detected. The second evaluation considered the usability of crowdsourcing apps. Two apps were tested against a comprehensive set of parameters including geo-positioning, map integration, (multiple) photo upload, automatization options, and many more. Separating these parameters into necessary and nice-to-have, we found that ODK Collect was preferable over Epicollect5 due to three main advantages: the possibility to show example photos of different disturbance types for easier identification; the depiction of the user's own position and, optionally, a target area for improved navigation; and finally the possibility to provide a polygon in addition to the point information in the feedback, which helps tremendously for remote sensing applications. ODK Collect also has some disadvantages: the backend is more difficult to set up, server hosting entails costs (unless you host your own server), and offline data collection is more difficult to implement. Thirdly, we also investigated the accuracy of the positioning in different land cover types (dense and open forest, meadows, settlements), comparing it with professional GNSS antenna measurements. According to our (limited) sample, the type and age of the mobile device mattered more for accuracy than the apps used. Overall, the deviations found were below 5 m (Δx = 2.85 m, Δy = 3.36 m, Δxy = 4.40 m), which seems sufficient for most crowdsourcing applications if the crowd is trained to move at least 5 m, ideally more than 10 m, from any boundary before recording the point. Finally, regarding the motivation of crowdsourcing participants, interviews with a variety of stakeholders in Côte d'Ivoire already hint at the following main aspects. 
Most importantly, the size of rewards matters for both participants' motivation to contribute and the quality of the crowdsourcing results. Previous work by NGOs has shown that project-based remuneration, or pay on a weekly basis, led to better results than remuneration per record. Regarding motivation, internal aspects such as interest in mapping activities or in contributing to halting climate change matter. Another aspect affecting the likelihood of contributing high-quality crowdsourced data is education and training, since literacy and certain technological knowledge are prerequisites. Beyond motivational aspects, directly integrating smallholder farmers turned out to be challenging due to technology-access issues such as low smartphone density and poor network connections in remote areas. Further research, in the form of an experimental study, will investigate the relevance of intrinsic motivation and the effect of external rewards on crowdsourcing participants. This will reveal how to design such campaigns effectively, making it possible to tap into the collective intelligence of various crowds and gather in-situ data, which in turn contributes to mapping deforestation efficiently and accurately. Combining these findings from crowdsourcing with remote sensing insights, an innovative approach to deforestation mapping is developed and proposed to aid both in preparing for the implementation of the EUDR and, more importantly, in limiting deforestation and its adverse effects on the global climate and biodiversity. References: Hamunyela, E., 2017. Space-time monitoring of tropical forest changes using observations from multiple satellites (PhD Thesis). Wageningen University & Research, Laboratory of Geo-Information Science and Remote Sensing. https://doi.org/10.18174/420048 Hansen, M.C., Krylov, A., Tyukavina, A., Potapov, P., Turubanova, S., Zutta, B., Ifo, S., Margono, B., Stolle, F., Moore, R., 2016. 
Humid tropical forest disturbance alerts using Landsat data. Environmental Research Letters 11, 034008. Kennedy, R.E., Yang, Z., Cohen, W.B., 2010. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr - Temporal segmentation algorithms. Remote Sensing of Environment 114, 2897–2910. https://doi.org/10.1016/j.rse.2010.07.008 Mananze, S., Pôças, I., Cunha, M., 2020. Mapping and Assessing the Dynamics of Shifting Agricultural Landscapes Using Google Earth Engine Cloud Computing, a Case Study in Mozambique. Remote Sensing 12, 1279. https://doi.org/10.3390/rs12081279 Olofsson, P., Foody, G.M., Herold, M., Stehman, S.V., Woodcock, C.E., Wulder, M.A., 2014. Good practices for estimating area and assessing accuracy of land change. Remote Sensing of Environment 148, 42–57. https://doi.org/10.1016/j.rse.2014.02.015 Puhm, M., Deutscher, J., Hirschmugl, M., Wimmer, A., Schmitt, U., Schardt, M., 2020. A Near Real-Time Method for Forest Change Detection Based on a Structural Time Series Model and the Kalman Filter. Remote Sensing 12, 3135. https://doi.org/10.3390/rs12193135 Verbesselt, J., Hyndman, R., Zeileis, A., Culvenor, D., 2010. Phenological change detection while accounting for abrupt and gradual trends in satellite image time series. Remote Sensing of Environment 114, 2970–2980. https://doi.org/10.1016/j.rse.2010.08.003 Zhu, X., Helmer, E.H., Gao, F., Liu, D., Chen, J., Lefsky, M.A., 2016. A flexible spatiotemporal method for fusing satellite images with different resolutions. Remote Sensing of Environment 172, 165–177. Zhu, Z., Woodcock, C.E., Olofsson, P., 2012. Continuous monitoring of forest disturbance using all available Landsat imagery. Remote Sensing of Environment 122, 75–91. https://doi.org/10.1016/j.rse.2011.10.030
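The two quantitative checks in this abstract, the 20 m plausibility tolerance and the combined horizontal positioning deviation, can be sketched in a few lines (the function names are ours; only the 20 m tolerance and the Δx/Δy values come from the study):

```python
import math

def plausibility_correct(dist_to_boundary_m: float, tolerance_m: float = 20.0) -> bool:
    """Plausibility rule: a verification point closer than tolerance_m to the
    boundary of a mapped change still counts as correctly detected."""
    return dist_to_boundary_m <= tolerance_m

def horizontal_deviation(dx_m: float, dy_m: float) -> float:
    """Combined horizontal offset (Euclidean) from per-axis deviations."""
    return math.hypot(dx_m, dy_m)

print(plausibility_correct(12.0))                  # True: within the 20 m tolerance
print(round(horizontal_deviation(2.85, 3.36), 2))  # ~4.41 m, consistent with the reported 4.40 m
```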

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Employing high-resolution data to enhance the accuracy of land use and cover classification

Authors: Dr Flávia De Souza Mendes, Dr Vivian Ribeiro, MSc Tara O'Shea
Affiliations: Planet Labs GmbH, Meridia Land, Planet Labs
Remote sensing has revolutionized the monitoring of commodity supply chains, providing essential insights into how production activities impact the environment. Unlike traditional monitoring methods that rely on self-reported data, which often lacks transparency and accuracy, remote sensing offers an objective, data-driven means of verifying sourcing practices, enabling the detection of unsustainable activities like deforestation caused by agricultural expansion. By accurately mapping areas within specific supply chains, satellite imagery empowers stakeholders, including consumers, investors, and regulatory bodies, to hold companies accountable for environmental commitments, fostering greater responsibility and transparency. Our work aims to demonstrate how high-resolution (HR) imagery can enhance the accuracy of public mapping tools. While current public maps provide useful data, HR satellite imagery can refine this information, offering a more precise look at land-use changes and environmental impacts within supply chains. With advanced spatial resolution and sophisticated data analysis, HR imagery provides a granular view, identifying individual farms or plantations and assessing their environmental performance. This enhanced granularity is key for detecting deforestation patterns in small farm plots, especially important for smallholder production, minimizing false positives. Integrating this detailed imagery with existing public data can improve the accuracy of maps used to monitor sourcing patterns, enabling more confident decision-making by governments, smallholders, and companies. Moreover, the broad scope of satellite technology allows efficient monitoring over large, remote, or inaccessible areas, offering a comprehensive view of environmental risks such as deforestation hotspots. Publicly accessible remote sensing data extends its impact by empowering diverse stakeholders. 
Governments can enforce environmental regulations more effectively, while civil society organizations and researchers can independently monitor deforestation, thereby strengthening forest governance. Initiatives such as Brazil's PRODES, Global Forest Watch, and MapBiomas already employ advanced technologies to produce high-resolution deforestation data across the Amazon, contributing to transparency. In Brazil, the Rural Environmental Registry (CAR) links farm boundary data with remote sensing, enabling property-level deforestation monitoring and providing companies with timely alerts on deforestation within their supply chains. Preliminary results have already demonstrated the improved accuracy achieved by incorporating high-resolution data into land cover classification in Patrocínio, Minas Gerais, Brazil, a major coffee-producing municipality. Using PlanetScope imagery and height data, we identified that areas classified as forest by public maps are, in fact, established coffee plantations. In conclusion, leveraging a diverse set of data sources including high-resolution satellite imagery, public maps, and near-real-time alerts is the most effective way to monitor forests, non-forest areas, and commodities. Combining these data sources increases mapping accuracy and confidence, benefiting all stakeholders involved in sustainable land management, from government agencies to smallholders and private companies. This approach promotes a more accurate and accountable framework for tracking land use and forest conservation efforts.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: An Approach for an EUDR Forest Baseline Based on a Combination of Open Data, Commodity Maps and Forest Change Detection

Authors: Michel Wolters, PhD Niels Anders, PhD Luca Foresta, Vincent Schut, Niels Wielaard, Rens Masselink
Affiliations: Satelligence
Implementing the European Union Deforestation Regulation (EUDR) has posed significant challenges, primarily due to the complexities of monitoring and verifying supply chains for deforestation-linked commodities across diverse global regions. In order to determine whether commodities are sourced from deforested areas, it is important to develop an accurate and consistent land cover map, which serves as the baseline for a deforestation monitoring service. Developing a globally consistent, multi-year, high-resolution land cover map from scratch is a challenging, time-consuming and expensive undertaking; instead, we present a methodology for creating a yearly historic forest baseline that can be updated annually and be used for purposes such as the EUDR (cutoff date 2020-12-31), NDPE (No Deforestation, no expansion on Peat, no Exploitation; cutoff date 2015-12-31), CFI (Cocoa & Forests Initiative; 2017-12-31) and other frameworks and regulations. The methodology leverages dozens of open data sources such as the JRC Tropical Moist Forests dataset, Greenpeace intact forest landscapes, the University of Maryland primary forest map, national land cover maps, and the Descals et al. palm oil map. These open data layers are combined with commodity maps, mainly those relevant for the EUDR, such as oil palm, cocoa, coffee, soy, etc. For each layer, thorough qualitative and quantitative QA is performed, land cover classes are harmonised to align their definitions with the other datasets, and, where applicable, consistency is checked in overlapping areas and through time. Since not all input data are available for every year of interest, we perform backward and forward propagation of data layers through time. This is done using land cover change detection data, as well as a forest and vegetation change detection algorithm, in conjunction with the aforementioned data layers. 
Then a decision-tree-type model is applied to determine the final land cover class of each pixel per year, based on layer quality and priority, distinguishing natural land cover classes from commodity land cover classes, and taking into account overlapping datasets and their yearly availability. This forest baseline has been independently assessed for many countries, with accuracy scores averaging 85–95%. We will show a comparison with other forest baseline maps commonly used for, e.g., the EUDR.
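The decision-tree-type combination of layers can be illustrated with a minimal per-pixel sketch; the layer names, priority order and class labels below are invented for illustration and are not the actual production stack:

```python
# Hypothetical priority order: commodity maps first, then forest layers,
# then national land cover. Higher-priority layers win where layers overlap.
PRIORITY = [
    ("oil_palm_map", "oil_palm"),
    ("cocoa_map", "cocoa"),
    ("jrc_tmf", "forest"),
    ("intact_forest_landscapes", "forest"),
    ("national_landcover", "other_vegetation"),
]

def classify_pixel(layer_flags: dict) -> str:
    """Return the class of the highest-priority layer flagging this pixel."""
    for layer, label in PRIORITY:
        if layer_flags.get(layer):
            return label
    return "unclassified"

# A pixel flagged as both cocoa and tropical moist forest resolves to cocoa.
print(classify_pixel({"cocoa_map": True, "jrc_tmf": True}))  # cocoa
```

In practice this per-pixel rule would also be evaluated per year, using the propagated layers described above.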

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: A.01.08 - POSTER - Planetary Boundary Layer from Space

The planetary boundary layer (PBL) plays an essential role in weather and climate, which are critical to human activities. While much information about the temperature and water vapor structure of the atmosphere above the PBL is available from space observations, EO satellites have been less successful in accurately observing PBL temperature and water vapor profiles and in constraining PBL modelling and data assimilation. Improved PBL models and parameterizations would lead to significantly better weather and climate prediction, with large societal benefits.

In the latest US National Academies’ Earth Science Decadal Survey, the PBL was recommended as an incubation targeted observable. In 2021, the NASA PBL Incubation Study Team published a report highlighting the need for a global PBL observing system with a PBL space mission at its core. To solve several of the critical weather and climate PBL science challenges, there is an urgent need for high-resolution and more accurate global observations of PBL water vapor and temperature profiles, and PBL height. These observations are not yet available from space but are within our grasp in the next decade. This can be achieved by investing in optimal combinations of different approaches and technologies. This session welcomes presentations focused on the PBL, from the observational, modeling and data assimilation perspectives. In particular, this session welcomes presentations focused on future EO PBL remote sensing missions and concepts, diverse observational approaches (e.g., active sensing, constellation of passive sensors, hyperspectral measurements, high-altitude pseudo satellites) and potential combinations of techniques to optimally depict the 3D structure of PBL temperature and water vapor.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Planetary Boundary Layer Heights From GNSS Radio Occultations

Authors: Stig Syndergaard
Affiliations: Danish Meteorological Institute
Global Navigation Satellite System (GNSS) radio occultation (RO) measurements provide high-resolution vertical information about the atmosphere from near-surface altitudes up through the troposphere and stratosphere. Using the Radio Occultation Processing Package (ROPP), a product of the EUMETSAT Radio Occultation Meteorology Satellite Application Facility (ROM SAF), it is possible to derive a number of different estimates of the planetary boundary layer height (PBLH) from these measurements. Earlier studies have shown reasonable agreement between the PBLH estimated from GNSS radio occultation bending angle or refractivity profiles using the gradient method (identifying the largest negative gradient in the profiles) and the PBLH estimated from other measurements or models. Among the alternatives in ROPP, one can also estimate the PBLH from the so-called dry temperature. The dry temperature, which is derived from the refractivity, diverges from the physical temperature in the lower troposphere when the water vapour pressure contributes significantly to the refractivity. Profiles of dry temperature in the lower troposphere are typically characterised by a sharp inversion, below which water vapour is abundant. At the same time, the dry temperature also includes a possible inversion if such exists in the physical temperature. With a slight modification of the current ROPP algorithm, the PBLH can thus be estimated from the dry temperature without calculating the vertical gradients first, leading to a more robust estimate. In this study, global estimates of the PBLH from derived bending angle, refractivity, and dry temperature profiles are compared to each other and to model estimates, including the PBLH based on the bulk Richardson number, available from the ECMWF reanalysis version 5 (ERA5). 
It is shown that there is a very good agreement between average estimates of the PBLH from derived dry temperature and the average estimates based on ERA5 forward-modelled dry temperature, both showing a generally large PBLH in the tropics and a shallow PBLH over Greenland and Antarctica.
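The gradient method mentioned above can be sketched in a few lines on a synthetic refractivity profile with a sharp drop at an assumed boundary layer top; ROPP's actual implementation differs in detail:

```python
import numpy as np

def pblh_gradient(altitude_km: np.ndarray, refractivity: np.ndarray) -> float:
    """Gradient method: take the PBL height at the largest negative
    vertical gradient of the refractivity profile."""
    gradient = np.gradient(refractivity, altitude_km)
    return float(altitude_km[np.argmin(gradient)])

# Synthetic profile: smooth exponential decay plus a sharp drop near 1.5 km
z = np.linspace(0.1, 5.0, 200)
refrac = 300.0 * np.exp(-z / 7.0) - 40.0 / (1.0 + np.exp(-(z - 1.5) / 0.05))
print(round(pblh_gradient(z, refrac), 2))  # close to 1.5 km
```

The same function applied to bending angle profiles would implement the other gradient-based estimate discussed above.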

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Low Tropical Marine Clouds and Their Interactions With Boundary Layer Dynamics Observed From ALADIN/Aeolus and SCAT/HY-2

Authors: Zacharie Titus, Marine Bonazzola, Hélène Chepfer, Artem Feofilov, Marie-Laure Roussel, Alexis Mouche
Affiliations: Sorbonne Université / LMD, CNRS / LMD, University Brest / CNRS
Previous studies have shown interactions between low clouds over the oceans and the atmospheric circulation in which they are embedded. However, few observations corroborate them at the global scale. The intensity of the wind shear can trigger turbulence and entrainment, lowering cloud tops and reducing the horizontal extent of clouds by introducing dry air from the lower free troposphere into the atmospheric boundary layer (Schulz and Mellado, 2018). It can also tilt updrafts (Helfer et al., 2020) or even organize meso-scale convective systems (Abramian et al., 2022). The wind at 10 m above the ocean surface drives evaporation and introduces humidity into the atmospheric boundary layer, resulting in its deepening (Mieslinger et al., 2019; Nuijens and Stevens, 2012). Using co-located observations of clouds and projected wind profiles observed by ALADIN/Aeolus, as well as 10 m above-surface winds from SCAT/HY-2, we estimate how the circulation in the atmospheric boundary layer, at typically 10-100 km scale, modifies the horizontal and vertical extent of low tropical clouds over the oceans. We show that the most intense cloud-top wind shears are associated with lower cloud tops and horizontally smaller clouds. Meanwhile, situations with intense evaporation rates feed more moisture into the atmospheric boundary layer, leading to higher cloud tops, particularly in stratocumulus-dominated regions. References: Abramian, S., Muller, C. and Risi, C.: Shear-Convection Interactions and Orientation of Tropical Squall Lines, Geophysical Research Letters, 49, https://doi.org/10.1029/2021GL095184, 2022. Helfer, K. C., Nuijens, L., de Roode, S. R., and Siebesma, A. P.: How wind shear affects trade-wind cumulus convection, Journal of Advances in Modeling Earth Systems, 12, e2020MS002183, https://doi.org/10.1029/2020MS002183, 2020. Mieslinger, T., Horváth, Á., Buehler, S. A., and Sakradzija, M.: The dependence of shallow cumulus macrophysical properties on large-scale meteorology as observed in ASTER imagery, Journal of Geophysical Research: Atmospheres, 124, 11477–11505, https://doi.org/10.1029/2019JD030768, 2019. Nuijens, L. and Stevens, B.: The Influence of Wind Speed on Shallow Marine Cumulus Convection, Journal of the Atmospheric Sciences, 69, 168–184, https://doi.org/10.1175/JAS-D-11-02.1, 2012. Schulz, B., and Mellado, J. P.: Wind Shear Effects on Radiatively and Evaporatively Driven Stratocumulus Tops, Journal of the Atmospheric Sciences, 75, 3245–3263, https://doi.org/10.1175/JAS-D-18-0027.1, 2018.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Temperature and humidity profile retrievals from synergistic satellite (MTG-IRS) and ground-based (Microwave Radiometer, SYNOP) observations

Authors: Maria Toporov, Prof. Dr. Ulrich Löhnert
Affiliations: University of Cologne
Spatially and temporally resolved fields of temperature and humidity within the planetary boundary layer (PBL) are crucial variables for short-term forecasting with convection-resolving numerical weather prediction (NWP) models. Despite their potential positive impact on NWP analysis and forecasts, both variables are still not adequately measured (vertically, horizontally, and temporally) by current observing systems. The hyperspectral infrared sounder (IRS) will operate from a geostationary orbit onboard the Meteosat Third Generation (MTG) and provide an unprecedented temporally and spatially resolved view into the atmosphere. However, even hyperspectral infrared satellite observations still leave gaps in the observation of the PBL structure, mainly due to the limited vertical resolution of the satellite as well as the strong influence of surface properties or clouds (Teixeira et al., 2021). Moreover, atmospheric profiles retrieved from hyperspectral observations show increasing uncertainty in the lowest few kilometers of the atmosphere (Wagner et al., 2024). To fill the existing observational gap in the PBL, ground-based remote sensors for measuring temperature, humidity, and wind profiles have been developed that are nowadays suitable for network operation. In particular, passive microwave radiometers (MWRs) have been accurately characterized concerning their 24/7 reliability, accuracy, and information content. A network of ground-based MWRs has the potential to provide real-time, all-sky profile observations. On the European level, the first instrument networks are in the process of being established, e.g. within the European Research Infrastructure Consortium ACTRIS. 
With our study, carried out within the Hans-Ertel Center for Weather Research of DWD (HErZ), we investigate to what extent the synergy of ground-based MWR and standard 2 m temperature/humidity measurements (SYNOP) with hyperspectral infrared satellite observations (IRS) can improve temperature and humidity profiling over the ICON-D2 domain. We develop retrievals of temperature and humidity profiles using reanalysis as the truth and a neural network (NN) approach that allows an optimal blending of IRS radiances with surface-based remote sensing observations and standard 2 m meteorology over the ICON-D2 domain. We simulate satellite observations using the RTTOV model and use MWRpy for the ground-based MWRs. As a first step, the retrievals are developed for two stations: the Jülich Observatory for Cloud Evolution (JOYCE) and the DWD Observatory Lindenberg (RAO). Other suited ACTRIS sites will also be considered. After the launch of MTG-S, we plan to apply the developed retrievals to real MWR, SYNOP, and IRS observations and assess the impact of assimilating the obtained atmospheric profiles on short-term forecasts of crucial variables such as low-level winds, cloudiness, atmospheric stability, and severe weather. In this contribution, we present the first results of the study, including the simulation of satellite and ground-based observations from reanalysis, the neural network architecture, and the performance of the MWR, IRS, and synergistic MWR+IRS and SYNOP+IRS retrievals applied to simulated observations.
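The blending idea, stacking satellite radiances with ground-based observations into a single NN input, can be sketched as follows; the channel counts and values are placeholders, not the actual IRS or MWR configurations:

```python
import numpy as np

def build_predictor(irs_bt: np.ndarray, mwr_tb: np.ndarray,
                    t2m_k: float, q2m_kg_kg: float) -> np.ndarray:
    """Concatenate IRS brightness temperatures, MWR brightness temperatures,
    and 2 m SYNOP values into a single NN input vector."""
    return np.concatenate([irs_bt, mwr_tb, np.array([t2m_k, q2m_kg_kg])])

# Assumed sizes: 30 IRS channels, 14 MWR channels, 2 SYNOP values
x = build_predictor(np.full(30, 260.0), np.full(14, 20.0), 288.2, 0.008)
print(x.shape)  # (46,)
```

A trained network would map such vectors to temperature and humidity profiles, typically after per-channel normalisation.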

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: PBL Height Retrieval and Thermodynamic Characterization and Its Variability from NAST-I During the WH2yMSIE Field Campaign

Authors: Daniel Zhou, Hyun-Sung Jang, Allen Larar, Xu Liu, Anna Noe, Antonia Gambacorta, Rachael Kroodsma
Affiliations: NASA Langley Research Center, Analytical Mechanics Associates, NASA Goddard Space Flight Center
The National Airborne Sounder Testbed-Interferometer (NAST-I) suborbital system (<2.6 km IFOV; 0.25 cm⁻¹ spectral resolution within 645–2700 cm⁻¹) serves as a spaceborne instrument simulator and pathfinder for future satellite capabilities and airborne science experiments. NAST-I measurements are made to advance understanding of science critical for weather, climate, chemistry, and radiation applications. Here we present some capabilities of NAST-I measurements and the corresponding geophysical retrievals, and their potential benefits toward enhancing characterization and understanding of the Planetary Boundary Layer (PBL). Initial results of PBL height estimation and thermodynamic characterization, and their time evolution, from NAST-I measurements during the WH2yMSIE field campaign are presented.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: A.03.04 - POSTER - Model-data interfaces and the carbon cycle

The increasing provision of synergistic observations from diverse and complementary EO missions relevant to the carbon cycle underlines a critical challenge: generating consistency in multi-variate EO datasets while accounting for the differences in spatial scales, times of acquisition and coverage of the different missions. It also entails the requirement to improve models of the carbon cycle to ensure they can fully exploit the observation capabilities provided by both EO data and enhanced global ground networks. This implicitly means increasing the spatial resolution of the models themselves to exploit the spatial richness of the data sources, as well as improving the representation of processes, including introducing missing processes, especially those describing vegetation structure and vegetation dynamics on both long and short timescales, while ensuring consistency across spatial scales (national, regional, global).

Understanding and characterisation of processes in the terrestrial carbon cycle, especially with reference to the estimation of key fluxes, requires improved interfaces between models, in situ observations and EO. It also requires research to ensure an appropriate match between what is observed on the ground, what is measured from space, and their variability in space and time, and to examine how the processes that explain this dynamism are represented in models, hence allowing assessment of the impacts of scale, in particular how processes operating at fine scales impact global-scale carbon pools and fluxes. This implicitly involves close collaboration between the Earth observation community, land surface and carbon modellers, and experts in different disciplines such as ecosystems, hydrology and water cycle research.

This session is dedicated to progress in model-data interfaces and the appropriate coupling of EO observations of different types, processes and variables with in-situ observations and models to ensure the observations collectively and the models are consistent and compatible.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Advancing long-term ecosystem assessments by unifying multi-sensor Earth Observation Data with self-supervised Deep Learning

Authors: Zayd Mahmoud Hamdi, Dr. Sophia Walther, Dr. Martin Jung, Gregory Duveiller, Dr. Qi Yang, Vitus Benson, Ulrich Weber, Sebastian Hoffmann, Dr. Christian Reimers, Fabian Gans
Affiliations: Max Planck Institute for Biogeochemistry
Consistent and continuous long-term monitoring of Earth's processes, such as land surface dynamics and meteorological changes, is crucial to understanding their variability beyond seasonal scales as well as their implications for global models of ecosystems, the land surface and the climate. Spaceborne observations of surface reflectance, land surface temperature, soil moisture, and other variables are crucial for analyzing ecosystem behaviour and fluxes, as well as for related modeling activities globally. However, the finite lifespans of satellite missions pose significant challenges to obtaining temporally consistent observations over extensive periods. Furthermore, the evaluation of spaceborne observations in conjunction with concurrent in-situ observations, and their use in modeling activities, is limited by the decommissioning of platforms. We address these challenges with a deep learning approach based on a transformer encoder architecture. As a showcase, this architecture is trained jointly on datasets from the Moderate Resolution Imaging Spectroradiometer (MODIS) and its successor, the Visible Infrared Imaging Radiometer Suite (VIIRS). The goal is to learn a latent representation that captures the important features across both sensors. This offers the ability to analyze and predict across the entire temporal span (the MODIS-only period, the overlap period, and the post-MODIS VIIRS period). Preliminary testing on surface reflectance demonstrates that this approach captures the temporal and spatial dynamics of globally sampled pixels with high consistency. Moreover, the method is conceptually flexible, enabling adaptation to other variables such as land surface temperature and soil moisture. It is also not specific to this sensor pair, but can be adapted to other combinations, such as MODIS/VIIRS to Sentinel-3, without requiring extensive training. 
By enhancing the continuity and compatibility of Earth observation datasets, this approach aligns with the critical need to fuse data from satellites and ground-based networks for flux modeling. Potentially, the latent representations can themselves become predictors for ecosystem models. The framework allows matching observed data with spatial and temporal variability, potentially improving the long-term representation of dynamic processes in ecosystem and carbon cycle models. Ultimately, this work lowers the barrier to using multi-sensor Earth observation datasets for long-term environmental monitoring and predictive modeling across diverse applications.
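The core idea of sensor-specific inputs mapping into one shared latent space can be sketched with linear stand-ins for the trained encoder; the band counts, latent size and random weights are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)
LATENT_DIM = 16  # assumed size of the shared representation

# Linear "adapters" standing in for the trained transformer encoder;
# each sensor gets its own projection into the common latent space.
W_modis = rng.normal(size=(7, LATENT_DIM))  # e.g. 7 MODIS land bands
W_viirs = rng.normal(size=(5, LATENT_DIM))  # e.g. 5 comparable VIIRS bands

def encode(reflectance: np.ndarray, adapter: np.ndarray) -> np.ndarray:
    """Project a sensor-specific reflectance spectrum into the latent space."""
    return reflectance @ adapter

z_modis = encode(rng.random(7), W_modis)
z_viirs = encode(rng.random(5), W_viirs)
print(z_modis.shape, z_viirs.shape)  # both (16,): comparable across sensors
```

Because both sensors land in the same space, downstream analyses can run on the latent vectors regardless of which platform produced the observation.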

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: A New Operational Global Terrestrial Ecosystem Gross Primary Productivity (GPP) Product: The Quantum Yield (QY) GPP Product.

Authors: Booker Ogutu, Mr Finn James, Mr Sven Berendsen, Dr Stephen Plummer, Prof Jadunandan Dash
Affiliations: School of Geography and Environmental Science, University of Southampton, European Space Agency
Gross primary productivity (GPP) represents the amount of carbon dioxide (CO₂) fixed by plants through photosynthesis per unit area and time and is a key component of the global carbon cycle. Accurate quantification of GPP is critical to understanding the global carbon cycle and how terrestrial ecosystems might respond to global environmental change. Here we present a new global GPP product derived using the Quantum Yield (QY) model (the QY-GPP product). The QY model calculates GPP as the product of a photosynthetic-pathway-specific (i.e., C₃ and C₄) quantum yield term (α, mol/mol) and the time-averaged fraction of photosynthetically active radiation absorbed by photosynthetic pigments in the canopy (i.e., FAPARchl) derived from Sentinel-3 OLCI data. The evaluation of the QY-GPP product across various biomes using data from two flux tower networks (the Integrated Carbon Observation System, ICOS, and the AmeriFlux network; n = 2350) showed that the QY-GPP product was close to in-situ GPP measurements across various biomes (R² = 0.72; RMSE = 3.16 gC/m²/day, MAE = 2.5 gC/m²/day and bias = 1.39 gC/m²/day). Additionally, when compared with two operational satellite-based GPP products (the Copernicus Global Land Service Gross Dry Matter Productivity, CGLS-GDMP, and MOD17 GPP), the QY-GPP product explained a higher variability of the in-situ measurements at flux tower sites (QY-GPP: R² = 0.77; CGLS-GDMP: R² = 0.74; MOD17: R² = 0.60). The satisfactory performance of the QY-GPP product shows its potential for application in carbon cycle research (e.g., in monitoring the dynamics of the global carbon cycle) and in a broad range of applications (e.g. carbon accounting) at regional to global scales. The QY-GPP product relies on the operational Sentinel-3 OLCI land product and ECMWF reanalysis products, which makes it feasible to produce operationally at global scale.
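In light-use-efficiency terms, an estimate of this kind multiplies the quantum yield by the absorbed PAR; the sketch below follows that general logic, but the α value, PAR input and unit conversion are our assumptions, not the product's actual coefficients:

```python
GC_PER_MOL_CO2 = 12.011  # grams of carbon per mol of fixed CO2

def gpp_gc_m2_day(alpha_mol_mol: float, fapar_chl: float,
                  par_mol_m2_day: float) -> float:
    """GPP = quantum yield (mol CO2 / mol photons) * FAPARchl * incident PAR,
    converted from mol CO2 to grams of carbon."""
    return alpha_mol_mol * fapar_chl * par_mol_m2_day * GC_PER_MOL_CO2

# Assumed inputs: alpha = 0.05, FAPARchl = 0.6, PAR = 40 mol photons/m2/day
print(round(gpp_gc_m2_day(0.05, 0.6, 40.0), 2))  # 14.41 gC/m2/day
```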

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Complementing global-to-local scale terrestrial carbon-water models with Earth Observation

Authors: Sujan Koirala, Dr. Martin Jung, Hoontaek Lee, Tina Trautmann, Lazaro Alonso Silva, Bernhard Ahrens, Felix Cremer, Fabian Gans, Markus Reichstein, Nuno Carvalhais
Affiliations: Department of Biogeochemical Integration, Max Planck Institute for Biogeochemistry, Institute of Physical Geography, Goethe University
Biogeochemical processes influencing climate feedbacks across scales tightly link the terrestrial carbon and water cycles. Yet, biosphere-climate feedback remains one of the largest uncertainties in global terrestrial models. This suggests that our understanding of the coupled water-carbon processes is limited by incomplete observational data and under-constrained models, highlighting the need for methods that integrate observational data with terrestrial ecosystem models. This study introduces the SINDBAD model-data integration (MDI) framework, which seamlessly integrates diverse observational data to constrain terrestrial models of varying complexity and scale. We demonstrate the relevance of a modular MDI workflow for testing hypotheses and parameterizations across scales and, in particular, how Earth Observation (EO) data can help complement and constrain the models. At the global scale, using EO-based vegetation indices in simpler models enhances simulations of monthly runoff and terrestrial water storage in arid regions. Such a parsimonious model is also able to represent global CO2 exchange and its relationship with the water cycle. At the regional scale, employing vegetation fraction data from EO geostationary satellites in coupled water-carbon models significantly improves the simulation of the interannual variability of gross primary productivity. At the ecosystem scale, linking fluxes and states by prognostically coupling water-carbon controls on productivity and carbon allocation benefits from EO remote sensing of vegetation states, adding constraints on the carbon cycle beyond eddy covariance measurements. We conclude that, with appropriate data and EO constraints, an across-scale approach underpins hypothesis testing, enhancing our understanding and quantification of carbon-water interactions.
However, the model-observation discrepancies, at a given scale, are quantitatively comparable to differences among observations and observation-based products. To address this, we extend the SINDBAD framework to learn representations and assumptions on process parameterizations. To do so, we combine local-scale observations of fluxes and stocks, along with local information of ecosystem functional properties, to learn the spatial variability of physical model parameters describing carbon-water dynamics. Leveraging such hybrid modeling approaches may pave way to develop physically sound process parameterizations, ultimately improving the representation of coupled carbon-water dynamics from local to global scales.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Remote Quantification of Soil Organic Carbon: Role of Topography in the Intra-field Distribution

Authors: Ben Cutting, Professor Clement Atzberger, Professor Asa Gholizadeh, Professor David A. Robinson, Doctor Jorge Mendoza-Ulloa, Professor Belen Marti-Cardona
Affiliations: University Of Surrey
Quantitative measurements of soil organic carbon (SOC) are important for monitoring soil health and for the study of land-atmosphere carbon fluxes. Traditional methods for SOC determination involve the acquisition and laboratory analysis of soil samples across a given study area. Despite their accuracy, these methods are expensive, time consuming, and require access to the area in question. One way to circumvent some of these limitations is through remote sensing, which offers the possibility of quantifying SOC over large and potentially inaccessible areas in a periodic and cost-effective way. In recent years, numerous studies have sought to relate Earth observation spectral data to SOC content. Many of these studies consider large areas with relatively low sampling densities, with the aim of covering a varied range of soil types. While this is important, SOC has a highly varied distribution even at the scale of a single crop field. Indeed, SOC levels can vary by over 50% within a small area of cropland (Li et al., 2024). Understanding this intra-field variability has the potential to unveil important drivers of SOC change within the soil. A driver of particular importance, at both large and intra-field scales, is topography, being closely related to the movement and accumulation of water and material across the landscape and, consequently, contributing to the SOC distribution. Soil moisture levels can induce either aerobic or anaerobic conditions which, in turn, influence the carbon flux and hence the proportion of SOC stored within the soil (Linn and Doran, 1984). The study of topographical covariates as drivers of SOC change is therefore of significant importance, particularly for capturing fine-scale variations and highlighting the influence of micro-topographical features at high resolution. This study undertook a high-density sampling campaign of SOC at three crop fields in Southeast England, synchronous with a clear-sky Sentinel-2 observation of the area.
In addition, a hyperspectral UAV flight and a lidar survey were conducted in conjunction with the sampling campaign. These data facilitated the creation of a range of models developed to predict SOC from topographical features and from single-date and multi-date spectral data. The importance of the different predictors was analysed at a high-resolution, intra-field scale. Sentinel-2 spectral data of the study fields were acquired for the exact day of the sampling campaign, and for an interval of 18 months before and after this date. Random Forest (RF) and Support Vector Regression (SVR) models were trained and tested on the spectral and topographical data to predict the observed SOC values. Five different sets of model predictors were assessed, using single-date and multi-date spectral data and topographical features at the SOC sampling points, both independently and in combination. Both RF and SVR models performed best when trained on multitemporal Sentinel-2 data together with topographic features, achieving validation root-mean-square errors (RMSEs) of 0.293% and 0.229%, respectively (Cutting et al., 2024). These RMSEs are competitive with those reported in the literature for similar models. Of the input set, topographical features were found to be the most important, specifically the topographic wetness index (TWI), a parameter closely linked to the accumulation of soil water, which exhibited the highest permutation importance for virtually all models. However, contrary to the positive relationship observed by Minhoni et al. (2021) in drier climates at a similar scale, TWI was found to be negatively related to SOC levels in the study fields. This disagreement may suggest a different role of soil wetness in SOC storage across climatic regimes at the intra-field scale. Cutting, B. J., Atzberger, C., Gholizadeh, A., Robinson, D. A., Mendoza-Ulloa, J. & Marti-Cardona, B. 2024.
Remote Quantification of Soil Organic Carbon: Role of Topography in the Intra-Field Distribution. Remote Sensing, 16, 1510. Li, W., Yang, Z., Jiang, J. & Sun, G. 2024. Spatial Variation and Stock Estimation of Soil Organic Carbon in Cropland in the Black Soil Region of Northeast China. Agronomy, 14, 2744. Linn, D. M. & Doran, J. W. 1984. Aerobic and Anaerobic Microbial Populations in No-till and Plowed Soils. Soil Science Society of America Journal, 48, 794-799. Minhoni, R. T. D. A., Scudiero, E., Zaccaria, D. & Saad, J. C. C. 2021. Multitemporal satellite imagery analysis for soil organic carbon assessment in an agricultural farm in southeastern Brazil. Science of The Total Environment, 784, 147216.
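The workflow described above, training a Random Forest on combined multi-date spectral and topographic predictors and ranking them by permutation importance, can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data: the predictor names and the synthetic negative SOC-TWI relationship are assumptions for illustration, not the study's data or code.

```python
# Illustrative sketch (synthetic data, not the study's code): predict SOC
# from combined spectral and topographic predictors with a Random Forest,
# then rank predictors by permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
twi = rng.normal(8.0, 2.0, n)             # topographic wetness index (assumed range)
spectral = rng.normal(0.2, 0.05, (n, 6))  # stand-in for multi-date band reflectances
# Synthetic SOC (%) negatively related to TWI, mimicking the reported relationship
soc = 2.0 - 0.08 * twi + 0.5 * spectral[:, 0] + rng.normal(0.0, 0.05, n)

X = np.column_stack([twi, spectral])      # column 0 is TWI
X_tr, X_te, y_tr, y_te = train_test_split(X, soc, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((rf.predict(X_te) - y_te) ** 2))

# Permutation importance: how much shuffling each predictor degrades the score
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
print(f"validation RMSE: {rmse:.3f}%")
print(f"TWI permutation importance: {imp.importances_mean[0]:.3f}")
```

In this synthetic setup the TWI column carries most of the signal, so its permutation importance comes out positive, echoing the role the abstract reports for TWI.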
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Capturing Short-Term Dynamics in ASCAT Vegetation Parameters

Authors: Paco Frantzen, Susan Steele-Dunne, Tristan Quaife, Mariette Vreugdenhil, Sebastian Hahn, Wolfgang Wagner
Affiliations: Delft University of Technology, University of Reading, Vienna University of Technology
The relationship between microwave backscatter and incidence angle, as observed by the Advanced Scatterometer (ASCAT) onboard the Metop satellites, provides valuable insight into vegetation water content and structure. The so-called Dynamic Vegetation Parameters (DVP), representing the first (slope) and second (curvature) derivatives of this relationship, have been used in various studies to monitor changes in vegetation water content (Pfeil et al., 2020; Petchiappan et al., 2022) and structure (Steele-Dunne et al., 2019). Another potential use of DVP lies in constraining vegetation states in land surface models (Shan et al., 2024). While DVP are largely driven by changes in vegetation water content and structure, on short time scales they may also be affected by soil moisture and intercepted precipitation (Greimeister-Pfeil et al., 2022). Currently, DVP time series are derived using a kernel smoother that weights observations by their temporal proximity to each day, following the Epanechnikov kernel. While this approach is effective for capturing the seasonal variability of DVP, it struggles to accurately represent the timing of short-term changes in the input data because they are smoothed out. Furthermore, the smoothing of short-term changes also compromises the quality of the estimated DVP time series, because a short-term change can affect the estimate for multiple weeks and introduce artifacts, depending on the kernel half-width. Preserving the timing of short-term variations in the estimation process is crucial so that the effects of the various contributions to the ASCAT slope can be disentangled. This will allow more accurate comparisons with independent data on soil and vegetation states. In addition, it will allow us to isolate and remove high-frequency variability due to e.g.
intercepted precipitation or soil moisture in any analysis relating ASCAT slope to biomass or vegetation water content. In this study, an alternative method based on the temporally constrained least squares approach proposed by Quaife and Lewis (2010) is evaluated for estimating ASCAT DVP without a smoothing kernel. The results show that this method better preserves the timing of short-term changes, while matching the Epanechnikov kernel in terms of aggregated validation metrics such as the unbiased root-mean-squared error. Ongoing research focuses on the influence of this alternative approach on the estimated ASCAT slope, and on its potential to improve our ability to relate ASCAT slope to independent observations of soil and vegetation states. Greimeister-Pfeil, I., Wagner, W., Quast, R., Hahn, S., Steele-Dunne, S., and Vreugdenhil, M. (2022). Analysis of short-term soil moisture effects on the ASCAT backscatter-incidence angle dependence. Science of Remote Sensing, 5:100053. Petchiappan, A., Steele-Dunne, S. C., Vreugdenhil, M., Hahn, S., Wagner, W., and Oliveira, R. (2022). The influence of vegetation water dynamics on the ASCAT backscatter-incidence angle relationship in the Amazon. Hydrology and Earth System Sciences, 26(11):2997–3019. Pfeil, I., Wagner, W., Forkel, M., Dorigo, W., and Vreugdenhil, M. (2020). Does ASCAT observe the spring reactivation in temperate deciduous broadleaf forests? Remote Sensing of Environment, 250:112042. Quaife, T. and Lewis, P. (2010). Temporal constraints on linear BRDF model parameters. IEEE Transactions on Geoscience and Remote Sensing, 48. Shan, X., Steele-Dunne, S., Hahn, S., Wagner, W., Bonan, B., Albergel, C., Calvet, J.-C., and Ku, O. (2024). Assimilating ASCAT normalized backscatter and slope into the land surface model ISBA-A-gs using a deep neural network as the observation operator: case studies at ISMN stations in Western Europe. Remote Sensing of Environment, 308:114167. Steele-Dunne, S. C., Hahn, S., Wagner, W., and Vreugdenhil, M. (2019). Investigating vegetation water dynamics and drought using Metop ASCAT over the North American Grasslands. Remote Sensing of Environment, 224:219–235.
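The smearing effect described above can be illustrated with a minimal sketch of an Epanechnikov-kernel smoother, an assumed form of the operational scheme, not its actual implementation. The kernel is K(u) = 0.75(1 - u²) for |u| ≤ 1, with u the time offset divided by the half-width; a step change in the input is spread over roughly one half-width in the output.

```python
# Illustrative sketch (assumed form): daily Epanechnikov-kernel smoothing
# of a slope-like series containing an abrupt short-term change. Far from
# the step the smoother reproduces the signal; at the step, the change is
# smeared over the kernel half-width.
import numpy as np

def epanechnikov_smooth(t_obs, y_obs, t_out, halfwidth):
    """Weighted average of observations within +/- halfwidth of each output day."""
    out = np.empty_like(t_out, dtype=float)
    for i, t in enumerate(t_out):
        u = (t_obs - t) / halfwidth
        w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)
        out[i] = np.sum(w * y_obs) / np.sum(w)
    return out

# Synthetic "slope" series with a step change at day 50
t = np.arange(100.0)
y = np.where(t < 50, -0.10, -0.14)
smoothed = epanechnikov_smooth(t, y, t, halfwidth=21.0)

print(round(smoothed[10], 3))   # well before the step: flat signal recovered
print(round(smoothed[90], 3))   # well after the step: flat signal recovered
print(round(smoothed[50], 3))   # at the step: an intermediate, smeared value
```

Running this shows the smoothed series transitioning gradually across roughly 40 days around the step, which is exactly the timing loss the constrained least-squares alternative is meant to avoid.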
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Constraining vegetation turnover rates in Terrestrial Biosphere Model using L-band backscatter

Authors: Xu Shan, Sujan Koirala, Markus Zehner, Lazaro Alonso, Nuno Carvalhais
Affiliations: Max Planck Institute for Biogeochemistry
An improved representation of carbon and water cycle dynamics in terrestrial ecosystems underpins a large reduction of the uncertainty in modeling Earth system dynamics. The climate sensitivity of ecosystem processes controls land-atmosphere interactions and the overall dynamics of atmospheric carbon uptake and release across scales. Local and Earth observations of vegetation dynamics are key for evaluating our understanding and support the quantification of process representation in model development. Previous research has shown the importance of reducing equifinality using multivariate observational constraints, focusing on water and carbon fluxes and stocks. Long-wavelength radar backscatter provides unique insight into plant water and carbon dynamics compared to optical EO products and, as such, has the potential to constrain various parameters controlling local climate-vegetation responses. In this study, we present an approach for assimilating Earth observation backscatter data into a terrestrial ecosystem model to improve estimates of vegetation turnover rates. Among others, we focus on the information content of L-band ALOS PALSAR data for constraining vegetation dynamics at selected FLUXNET sites, where carbon and water fluxes and stocks are observed. Using a radar observation operator (a standard radiative transfer model), we design a model-data integration experiment to investigate the benefits of multiple backscatter observations versus a single above-ground biomass constraint. The experimental setup focuses on the trade-off between the information content of backscatter, with uncertainties inherited from the observation operator, and sparse above-ground biomass observations, for constraining parameters controlling leaf and wood pool dynamics in vegetation. Current results indicate that the assimilation improves the estimation of above-ground biomass and the constraints on turnover rates for both foliage and woody pools.
Data sparsity and availability, together with prior model uncertainty, exert control on model performance and on parameter constraints. Ultimately, this study highlights the potential of L-band backscatter to enhance vegetation carbon cycle modeling, emphasizes the added value of the upcoming ESA BIOMASS mission, and underscores the importance of integrating vegetation water dynamics into carbon models.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Improving the monitoring of vegetation and drought by land surface models through the assimilation of satellite data

Authors: Jean-Christophe Calvet, Bertrand Bonan, Yann Baehr, Timothée Corchia, Oscar Rojas-Munoz, Pierre Vanderbecken, Jasmin Vural
Affiliations: Meteo-france, CNRS, CNES
Severe droughts are becoming more intense and widespread. Vegetation fires are being observed in regions where such events had never occurred before. Clay shrinkage is causing increasing damage to houses. Current climate trends have reached a critical level where the tools of climate science and everyday disaster monitoring need to converge. While this study does not directly investigate tipping points, the tools and products being developed have the potential to better investigate and monitor them. Land data assimilation aims to monitor the evolution of soil and vegetation variables. These variables are driven by climatic conditions and by anthropogenic factors such as agricultural practices. Land surface monitoring covers a number of variables of the soil-vegetation system, such as land cover, snow depth, surface albedo, soil water content and leaf area index (LAI). These variables can be monitored by integrating satellite observations into models through data assimilation. Monitoring land variables is particularly important in a changing climate, as unprecedented environmental conditions and trends emerge. Unlike atmospheric variables, land variables are not chaotic per se, but rapid and complex processes affecting the land carbon budget, such as forest management (thinning, deforestation, ...), forest fires and agricultural practices, are not easily predictable with good temporal precision. They cannot be accurately monitored without integrating observations as they become available. Because data assimilation is able to balance information from contrasting sources and account for their uncertainties, it can produce an analysis of the variables that is the best possible estimate. Data assimilation can involve several techniques, such as model parameter tuning, variational assimilation or sequential Kalman filtering. The latter is used in meteorology and in some land modelling frameworks to improve initial conditions (e.g.
root zone soil moisture) at a given time. New research is being undertaken to assess the impact of improving vegetation initial conditions, as vegetation has a memory of past environmental conditions, as does soil moisture. Vegetation variables such as LAI control the amount of evapotranspiration and their initial conditions have a predictive capability. Examples are given of how data assimilation can be implemented on a global scale by regularly updating model state variables through a sequential assimilation approach. The focus is on LAI assimilation and the use of machine learning techniques to build observation operators that allow direct assimilation of new vegetation sensitive observations such as microwave backscatter and brightness temperature or solar induced fluorescence. We show that the analysis of LAI together with root zone soil moisture is necessary to monitor the effects of irrigation, drought and heat waves on vegetation, and that LAI can be predicted after proper initialisation. We also show that machine learning can be used to derive new variables (e.g. surface albedo, vegetation moisture) from those already calculated by the land surface model. This paves the way for new developments such as more interactive assimilation of land variables into numerical weather prediction and seasonal forecasting models, as well as atmospheric chemistry models. Examples of CO2MVS applications will be presented. These results can be extrapolated to the monitoring of vegetation fire danger at different spatial resolutions. The latter will be developed in the framework of the Green Deal governance action GreenEO (2025-2029).
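The core idea of the sequential assimilation described above, balancing a model forecast against a satellite observation according to their uncertainties, can be sketched as a scalar Kalman-filter analysis step. This is a minimal illustration with made-up numbers, not the operational Meteo-France land data assimilation system, and the LAI values are assumptions for the example.

```python
# Illustrative sketch: scalar Kalman-filter update for an LAI-like state.
# The analysis weights forecast and observation by their error variances.
def kalman_update(x_f, var_f, y_obs, var_obs):
    """Combine forecast x_f and observation y_obs, weighting by variance."""
    k = var_f / (var_f + var_obs)      # Kalman gain: trust in the observation
    x_a = x_f + k * (y_obs - x_f)      # analysis state
    var_a = (1.0 - k) * var_f          # analysis uncertainty (reduced)
    return x_a, var_a, k

# A forecast LAI of 3.0 (variance 0.4) meets a satellite LAI of 2.2 (variance 0.1):
x_a, var_a, k = kalman_update(3.0, 0.4, 2.2, 0.1)
print(f"gain={k:.2f}  analysis LAI={x_a:.2f}  analysis var={var_a:.2f}")
```

Because the observation here is more certain than the forecast, the gain is 0.8 and the analysis (2.36) is pulled most of the way toward the satellite value, with the analysis variance (0.08) smaller than either input; this is the "best possible estimate" property the abstract refers to.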
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Quantifying the spatio-temporal heterogeneity around eddy-covariance towers to improve upscaling with remote sensing

Authors: Daniel E. Pabon-Moreno, Arianna Lucarini, Dr. Jacob A. Nelson, Giacomo Nicolini, Luca Di Fiore, Dario Papale, Gregory Duveiller
Affiliations: Max Planck Institute For Biogeochemistry, Department of Agricultural Science, University of Sassari, Department of Climate Change and Sustainable Development, School for Advanced Studies IUSS, Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, University of Tuscia
The eddy covariance (EC) technique is commonly used to measure the exchange of energy and matter between the biosphere and the atmosphere. EC measurement towers are located around the globe, covering different ecosystem types, climates, and disturbance/management regimes. In recent decades, EC data have been used in combination with satellite imagery and climate products to train machine learning models that predict ecosystem fluxes at the global scale. One source of uncertainty that is often underestimated in such upscaling exercises is the degree of match between the observational footprint of the EC tower measurements and that of the satellite sensor. Depending on the spatial resolution of the satellite sensor and the characteristics of the EC tower, the mismatch between what the tower measures and what the sensor observes can potentially bias the upscaled estimates of carbon fluxes at regional and global scales. In the present study, we use images from the Sentinel-2 satellites at 20 m resolution to assess the spatio-temporal heterogeneity of the landscape around the EC towers of the FLUXNET network. To quantify the mismatch, we use the Jensen-Shannon (JS) distance, which measures the amount of information shared between two probability distributions. We compute the JS distance for different concentric areas around each EC tower, representing approximations of either the climatological footprint of the EC measurements or the footprints of satellite measurements at increasingly coarser spatial resolutions. We found that, when applying a 70% threshold of shared information between the EC tower and the satellite resolution, only half of the FLUXNET sites are suitable for upscaling exercises. We also found that most FLUXNET sites show 5%-10% temporal variability in the similarity between tower and satellite footprints, considering an EC footprint of 250 m radius around the tower and a satellite resolution of 500 m radius.
Finally, we discuss the potential bias introduced by this mismatch when using machine learning techniques to upscale Gross Primary Production (GPP), and how to correct it.
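The footprint comparison described above can be sketched as follows: compute the class distribution of a surface property inside a small "EC footprint" window and a larger "satellite footprint" window, then take the Jensen-Shannon distance between them. This is an illustrative toy on a synthetic land-cover map, not the study's code; the window sizes and class counts are assumptions.

```python
# Illustrative sketch: JS distance between the land-cover class distribution
# of a small EC-footprint window and a coarser satellite-footprint window.
# A value near 0 means the footprints share most information; near 1, little.
import numpy as np

def js_distance(p, q):
    """Base-2 Jensen-Shannon distance: sqrt of the JS divergence."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

rng = np.random.default_rng(1)
scene = rng.integers(0, 4, size=(100, 100))   # synthetic 4-class land-cover map
scene[40:60, 40:60] = 0                       # homogeneous patch around the tower

def class_histogram(window, n_classes=4):
    counts = np.bincount(window.ravel(), minlength=n_classes)
    return counts / counts.sum()

def footprint(radius):                        # square window centred on the tower
    return scene[50 - radius:50 + radius, 50 - radius:50 + radius]

p = class_histogram(footprint(10))            # approx. EC footprint
q = class_histogram(footprint(40))            # approx. coarse satellite footprint
print(f"JS distance: {js_distance(p, q):.3f}")
```

Here the EC window sits on a homogeneous patch while the satellite window mixes all classes, so the JS distance is clearly above zero; a shared-information threshold like the 70% mentioned above would be applied to comparisons of this kind at each site.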
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Optimizing Data for a Spatially Explicit Forest Carbon Model for the EU: A Case Study of Finland

Authors: Gonzalo Oton Azofra, Viorel Blujdea, Roberto Pilli, Mirco Migliavacca, Giacomo Grassi
Affiliations: European Commission, Joint Research Centre (JRC), Independent researcher providing service to the Joint Research Centre, European Commission
The European Union is committed to becoming the first continent to achieve carbon neutrality by 2050, with net-zero greenhouse gas emissions as outlined in the European Green Deal and the European Climate Law. This commitment is in line with global efforts to combat climate change under the Paris Agreement. Achieving neutrality will require robust technological and ecosystemic carbon sinks in the coming years. The IPCC has outlined methodologies for monitoring and reporting forest carbon stocks and changes, which are essential for offsetting anthropogenic emissions. To evaluate the pathways to the 2050 target, reliable models are needed. The models must capture forestry processes in the short term as well as for intermediate targets, such as 2030 and the forthcoming 2040 milestone. To accurately monitor carbon dynamics and report on the forest carbon balance, the Carbon Budget Model developed for the Canadian Forest Sector (CBM-CFS3) has been adapted to European forests, resulting in the EU-CBM-HAT. This model provides country- and regional-scale insights into forestry indicators and carbon dynamics under given forest management scenarios. Building upon the CBM, the Canadian team has developed a workflow for applying it in a spatially explicit manner through the Generic Carbon Budget Model (GCBM). This scalable model offers enhanced granularity at the pixel level and produces a time series of spatial forest carbon indicators. Currently, a prototype of the GCBM is being tested in Europe, with Finland serving as a case study, representing an initial step toward its pan-European application.
The development encompasses three major phases: a) selection of the optimal data source, obtained via remote sensing, for the initialization of standing volume/biomass/age and species distribution; b) the implementation of stand growth based on ground data generally derived from National Forest Inventories; and c) the incorporation of silvicultural practices and natural disturbances, such as wildfires and windstorms, utilizing remote sensing data. The model also incorporates climate data and administrative/ecological factors. The output comprises time-series maps at 100 m resolution, offering a robust framework for understanding and managing forest carbon dynamics and thereby enhancing the decision-making capabilities of stakeholders and forest managers.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Towards a multidecadal record of above ground biomass from active and passive microwave observations

Authors: Samuel Favrichon, Maurizio Santoro, Oliver Cartus, Catherine Prigent, Carlos Jimenez
Affiliations: Gamma Remote Sensing AG, LERMA, Observatoire de Paris, Estellus
Global vegetation plays a critical role in the global carbon budget, storing the largest proportion of terrestrial carbon. Human activities and changes to the global climate affect the state of global forests, through regional decreases in extent but also through anthropogenic or natural vegetation growth. Significant uncertainties remain in the relative contributions of regional carbon sinks. Enhancing estimates of carbon fluxes is essential for improving climate modeling and informing policymakers more effectively. While changes in above-ground biomass (AGB) stocks can be reliably obtained from inventory data, they can be complemented by consistent remote-sensing-based estimates of vegetation dynamics at the global scale over multiple decades. Satellite data records now span over 40 years, with microwave remote sensing providing unique capabilities for continuous observation of terrestrial surfaces. Microwave remote sensing below 36 GHz, both passive and active, is sensitive to vegetation, with canopy penetration and atmospheric transparency increasing with increasing wavelength (i.e., with decreasing frequency). In addition, passive microwave datasets can provide daily global coverage. However, multiple instruments are required to achieve multi-decadal record lengths, and these instrument changes and observation specificities must be addressed to create homogeneous long-term time series. This study examines the consistency of instrument data records, and strategies to correct discontinuities across sensors. Methods for combining multiple sources of measurements to achieve the best estimate of AGB at the global scale are also presented. For active sensors, intercalibration was performed between instrument types and frequencies (Tao et al. 2022). Here, we show how differences in the overpass times of the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) observations (Fennig et al. 2020) at 18 and 36 GHz can be corrected to generate a 30+ year data record of passive microwave observations. Using the CCI Biomass dataset of above-ground biomass (Santoro et al. 2023) as reference, we combine active and passive observations to estimate AGB at a coarse resolution (~12.5 km) using a method tested during the ESA BiomAP project (Integrating Active and Passive microwave data towards a novel global record of above ground biomass maps) (Prigent et al. 2022). The resulting biomass estimates have an R² > 0.85 compared to the reference dataset. The model can then be applied to all available sensor combinations and further analysed to evaluate limitations and uncertainties in biomass retrieval. The resulting time series offer insights into large-scale vegetation growth and biomass decline since 1992, providing a basis for comparison with other estimates of global AGB variations, such as those derived from models or plot data. [1] Karsten Fennig, Marc Schröder, Axel Andersson, and Rainer Hollmann. A fundamental climate data record of SMMR, SSM/I, and SSMIS brightness temperatures. Earth System Science Data, 12(1):647–681, 2020. [2] M. Santoro and O. Cartus. ESA Biomass Climate Change Initiative (Biomass_cci): global datasets of forest above-ground biomass for the years 2010, 2017, 2018, 2019 and 2020. NERC EDS Centre for Environmental Data Analysis, 2023. [3] Shengli Tao, Zurui Ao, Jean-Pierre Wigneron, Sassan Saatchi, Philippe Ciais, Jérôme Chave, Thuy Le Toan, Pierre-Louis Frison, Xiaomei Hu, Chi Chen, et al. C-band scatterometer (CScat): the first global long-term satellite radar backscatter data set with a C-band signal dynamic. Earth System Science Data Discussions, 2022:1–30, 2022. [4] Catherine Prigent and Carlos Jimenez. An evaluation of the synergy of satellite passive microwave observations between 1.4 and 36 GHz, for vegetation characterization over the Tropics. Remote Sensing of Environment, 257:112346, 2021.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Assessing the impacts of recent European droughts on terrestrial vegetation gross primary productivity (GPP) using the Quantum Yield (QY) GPP Product.

Authors: Finn James, Dr Booker Ogutu, Sven Berendsen, Claire Miller, Daria Andrievskaia, Mahmoud El Hajj, Dr Stephen Plummer, Professor Jadunandan Dash
Affiliations: University Of Southampton, NOVELTIS, European Space Agency
With climate change, the frequency and severity of droughts in Europe are expected to increase. Understanding the impacts of drought on vegetation, and thereby on its ability to offset carbon dioxide (CO2) emissions, is thus crucial for mitigating further warming. A key indicator of vegetation productivity, and of its capacity to sequester carbon, is gross primary productivity (GPP), describing the carbon fixed into (and subsequently stored within) vegetation biomass. Here, we utilised GPP data derived from the quantum yield (QY) model to analyse the effect of recent (2018-2020) European droughts on vegetation at a 500 m spatial resolution. Additionally, we investigated the impact of drought seasonality (specifically, whether drought occurred in spring, summer or both seasons) on GPP, as well as whether impacts varied among different landcover classes (Rainfed Croplands; Deciduous Broadleaf Forests; Evergreen Needleleaf Forests; Mixed Forests; Grasslands). Our results show that spring droughts led to the largest overall reduction in GPP, at -22.5%, versus only -3.3% under summer drought and -17.7% under consecutive spring and summer droughts. This pattern held especially in Northern Europe, with Southern Europe exhibiting a greater GPP reduction under summer drought than under spring drought. These trends were observed across all landcover types: Rainfed Croplands and Grasslands were especially affected by spring drought, showing reductions in GPP of around 27%. All landcover classes showed a decrease in GPP of at least 12% under spring droughts, whereas the largest reductions for summer and combined spring-summer droughts were only -3% (for Deciduous Broadleaf Forest and Evergreen Needleleaf Forest) and -8% (Evergreen Needleleaf Forest), respectively.
Consequently, whilst prior research has shown that warm springs may increase GPP, our results suggest that a combination of warm springs and spring drought may yield the largest negative impacts on GPP. Such insights could be beneficial both for mitigating the impact of drought during the growing season and for anticipating likely disturbances to carbon sequestration.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Verification of Terrestrial Carbon Sinks with the Terrestrial Carbon Community Assimilation System (TCCAS)

Authors: Thomas Kaminski, Wolfgang Knorr, Michael Voßbeck, Mathew Williams, Timothy Green, Dr Luke Smallman, Marko Scholze, Tristan Quaife, Tea Thum, Sönke Zaehle, Peter Rayner, Susan Steele-Dunne, Mariette Vreugdenhil, Mika Aurela, Alexandre Bouvet, Emanuel Bueechi, Wouter Dorigo, Tarek El-Madany, Mariko Honkanen, Yann Kerr, Anna Kontu, Dr. Juha Lemmetyinen, Hannakaisa Lindqvist, Dr Arnaud Mialon, Tuuli Miinalainen, Amanda Ojasalo, Shaun Quegan, Pablo Reyez Muñoz, Dr Nemesio Rodriguez-Fernandez, Mike Schwank, Jochem Verrelst, Songyan Zhu, Matthias Drusch, Dirk Schüttemeyer
Affiliations: The Inversion Lab, University of Edinburgh, University of Lund, University of Reading, Finnish Meteorological Institute, Max-Planck-Institute for Biogeochemistry, TU Delft, TU Wien, CESBIO, University of Sheffield, University of Valencia, Swiss Federal Institute for Forest, Snow and Landscape Research, University of Southampton, European Space Agency
The Paris Agreement allows the use of terrestrial carbon sinks as a climate mitigation mechanism. In this context, accurate quantification of such sinks is highly relevant. Ideally, this quantification combines process understanding incorporated in a terrestrial biosphere model with a range of observations that constrain the model simulation. To tackle this task we employ the Terrestrial Carbon Community Assimilation System (TCCAS, https://tccas.inversion-lab.com/), a development funded by the European Space Agency within its Carbon Science Cluster. TCCAS is constructed around the newly developed D&B terrestrial biosphere community model (https://doi.org/10.5194/egusphere-2024-1534). D&B builds on the strengths of its two component models, DALEC and BETHY, in that it combines the dynamic simulation of the carbon pools and canopy phenology of DALEC with the dynamic simulation of water pools and the canopy model of photosynthesis and energy balance of BETHY. Both component models have a long track record of successful data assimilation applications. TCCAS includes a suite of dedicated observation operators that allows the simulation of solar-induced fluorescence (SIF), fraction of absorbed photosynthetically active radiation (FAPAR), vegetation optical depth from passive microwave sensors, and surface-layer soil moisture. The model is embedded into a variational assimilation system that adjusts a control vector to match the observational data streams. For this purpose TCCAS is provided with efficient tangent and adjoint code. The control vector consists of a combination of initial pool sizes and process parameters in the core model and in the observation operators. One of the main target quantities in the context of the Paris Agreement is the simulated long-term carbon uptake. The accuracy of this quantity depends on several factors, including the combination of observational data streams assimilated by TCCAS and the model error, i.e. 
the model's capability to accurately simulate these data streams. We derive a specification of that model error. Based on this specification we quantify the capability of several combinations of remote sensing and in situ data streams (SIF, soil moisture, vegetation optical depth, FAPAR, and biomass) to constrain the simulated terrestrial carbon uptake. In this context we analyse the role of the observation operators and discuss observational strategies for sink quantification including upcoming space missions.
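The variational approach described above can be illustrated with a toy example. The sketch below is not TCCAS code: it uses a random linear stand-in for the observation operators and identity error covariances, purely to show how a control vector is adjusted by minimising a background-plus-observation cost function, with an analytic gradient playing the role of the adjoint.

```python
import numpy as np
from scipy.optimize import minimize

# Toy variational assimilation sketch (NOT TCCAS): the control vector x
# stands for initial pool sizes and process parameters; H is a random
# linear stand-in for the observation operators (SIF, FAPAR, VOD, ...).
rng = np.random.default_rng(0)
n_ctrl, n_obs = 5, 12
x_prior = np.ones(n_ctrl)                    # background (prior) control vector
B_inv = np.eye(n_ctrl)                       # inverse prior-error covariance
H = rng.normal(size=(n_obs, n_ctrl))         # illustrative observation operator
x_true = x_prior + rng.normal(scale=0.3, size=n_ctrl)
y = H @ x_true + rng.normal(scale=0.05, size=n_obs)   # synthetic data streams
R_inv = np.eye(n_obs) / 0.05**2              # inverse observation-error covariance

def cost(x):
    """Background term plus observation misfit, as in variational assimilation."""
    dxb, dy = x - x_prior, H @ x - y
    return 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy

def grad(x):
    """Analytic gradient -- the role played by adjoint code in such systems."""
    return B_inv @ (x - x_prior) + H.T @ R_inv @ (H @ x - y)

x_post = minimize(cost, x_prior, jac=grad, method="L-BFGS-B").x
print(f"cost reduced from {cost(x_prior):.1f} to {cost(x_post):.1f}")
```

In TCCAS itself the forward map is the nonlinear D&B model chained with its observation operators, and the gradient comes from the system's tangent and adjoint code rather than a hand-written formula.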

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: F.04.03 - POSTER - Desertification, land degradation and soil management

Desertification and land degradation pose a major threat to food security, ecosystem services and biodiversity conservation. Soil is not a renewable resource when viewed on a time scale of a couple of decades, and it is threatened worldwide by climate change, natural hazards and human activities. The consequences are increased soil loss due to wind and water erosion, landslides, and reduced soil quality due to organic matter loss, contamination and soil sealing. The EU Soil Monitoring Law on the protection and preservation of soils aims to address key soil threats through sustainable soil use and the preservation of soil quality and functions. Space-based Earth observation data, together with in-situ measurements and modelling, can be used in an operational manner by national and international organizations with the mandate to map, monitor and report on soils. With the advent of operational EO systems with a free and open data policy, as well as cloud-based access and processing capabilities, the need for systematic, large-area mapping of topsoil characteristics at high spatial resolution that goes beyond recording degradation processes can be addressed.

We encourage submissions related to the following topics and beyond:
- Advanced earth observation-based products to monitor desertification and land degradation at a large scale
- Specific earth observation-based methods for soil-related topics such as soil parameter mapping and soil erosion mapping, as well as other soil-related health indicators in different pedo-climatic regions and biomes.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Land Degradation Mapping and Change Assessment for SDG 15.3.1 in the Nigeria Guinea Savannah

Authors: Ademola Adenle, Nikhil Raghuvansh, Felicia O. Akinyemi, Olena Dubovyk
Affiliations: Department of Geography, University of Bergen, Norway, Department of Geography, Federal University of Technology, Department of Earth System Sciences, University of Hamburg Germany, Geomatics, Department of Environmental and Life Sciences, Karlstad University, Universitetsgatan 2, 651 88 Karlstad, Sweden
Land degradation is one of the leading global problems undermining progress towards achieving the Sustainable Development Goals (SDGs) in Sub-Saharan Africa. However, the absence of updated, comprehensive national assessment, monitoring and reporting of land degradation is a recurring problem among developing countries owing to scientific, technical and political challenges. Over the recent decade, there has been growing recognition of the severe impact of land degradation on millions of lives, livelihoods and landscapes in Nigeria, particularly in the Nigeria Guinea Savannah (NGS). This region is not only a critical biodiversity ecoregion but also the country’s most ethnically diverse area and a significant agricultural zone. Based on available national data, this study assesses the condition of three land degradation indicators, namely land cover (LC), land productivity (LP) and soil organic carbon stocks (SOC). The analysis compared the analytical effectiveness and practice of a default method (DM) and an adopted method (AM) for both a baseline period (BP, 2000-2013) and a monitoring period (MP, 2013-2022) to support the global land goals. The DM employs medium-resolution data and semi-automated open-source Trends.Earth techniques, while the AM uses improved methods involving Landsat data, following recent best-practice guidance (a new standard methodology). Our preliminary results from the LULC indicator show that 0.70% of the NGS was degraded during the BP and 0.45% in the MP for the DM, while 9.34% (BP) and 10.63% (MP) were degraded respectively in the AM, with grassland experiencing the most change. According to the DM, the area degraded due to the land productivity indicator increased from 27.44% (BP) to 38.42% (MP), and a similar pattern is anticipated with the AM. For the SOC indicator, 1.42% (BP) and 38.99% (MP) of the NGS are degraded based on the DM. It is important to note that this analysis is preliminary and ongoing. 
Therefore, the outcomes presented do not yet depict the results of the AM in detail. However, in support of the national land degradation (neutrality) inventory process, preliminary DM findings show that during the BP, 24.22% of the NGS improved, 46.48% remained stable, and 28.30% was degraded. In the MP, these values shifted to 10.54% improved, 49.47% stable, and 38.99% degraded. As demonstrated in several studies, the DM analysis is expected to serve as a reference for the AM results after the final analysis. While this comparative study highlights inadequacies in the DM, it also underscores the improved capacity of the AM to delineate, characterize, and contextualize land degradation effectively. These findings aim to enhance spatial planning, inform environmental policies, and enable sustainable land management initiatives in support of SDG 15.3.1 within the NGS.
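The DM workflow follows the standard SDG 15.3.1 logic of combining the three sub-indicators with a one-out-all-out rule: a pixel counts as degraded if any of LC, LP or SOC indicates degradation. A minimal sketch with toy 2x2 rasters (illustrative data, not Trends.Earth code):

```python
import numpy as np

# One-out-all-out combination for SDG 15.3.1: each sub-indicator raster is
# coded -1 = degraded, 0 = stable, 1 = improved. A pixel is degraded if ANY
# sub-indicator is degraded; improved if any is improved and none degraded;
# otherwise stable. The 2x2 arrays below are toy values for illustration.
lc  = np.array([[ 0, -1], [ 1, 0]])
lp  = np.array([[ 0,  0], [-1, 0]])
soc = np.array([[ 1,  0], [ 0, 0]])

stack = np.stack([lc, lp, soc])
sdg = np.where(
    (stack == -1).any(axis=0), -1,             # any degraded -> degraded
    np.where((stack == 1).any(axis=0), 1, 0)   # else any improved -> improved
)
total = sdg.size
print("degraded %:", 100 * (sdg == -1).sum() / total)  # 50.0
print("stable %:",   100 * (sdg == 0).sum() / total)   # 25.0
print("improved %:", 100 * (sdg == 1).sum() / total)   # 25.0
```

The degraded fraction of this combined raster is what the 15.3.1 "proportion of land that is degraded" statistic summarises.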

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Spatio-Temporal Monitoring of Vegetation Structure and Surface Moisture in Kruger National Park and the Overberg District in South Africa From Sentinel-1 and -2 Time-Series Since 2015

Authors: Christiane Schmullius, Marco Wolsza, Tercia Strydom, Jussi Baade
Affiliations: University Jena, South African National Parks
Land degradation can be defined as a persistent reduction or loss of biological and economic productivity resulting from climatic variations and human activities. Quantifying relevant surface changes with Earth observation sensors requires a rigorous definition of the observables and an understanding of their seasonal and inter-annual temporal dynamics, as well as of the respective spatial characteristics. This talk illustrates operational mapping possibilities with the European Sentinel satellite fleet, which guarantees high-resolution spatial, spectral and temporal monitoring from 2015 until 2040. Synergistic retrieval of innovative land surface indices is demonstrated with a focus on Kruger National Park and the Overberg district, both in South Africa. A joint EO and in situ strategy for management needs will be outlined. The need for analysis-ready data (ARD) in the optical and especially the radar domain has been recognized, and formerly complex information is becoming increasingly easy to use and apply (e.g. through companies such as Sinergise and Google Earth Engine). Various processing tools are accessible without cost for large datasets (e.g. pyroSAR, SNAP). In this work, data cubes have been established and Jupyter Notebooks generated, which contain a portfolio of Python scripts for the production of various vegetation indices, bare soil maps, vegetation height, woody cover and surface moisture. The data cubes allow the synergy of radar and optical remote sensing data to be exploited over what are by now seven years. The dense time series reveal intra- and inter-annual variations of unexpected land surface phenomena. To evaluate the ability of Sentinel-1 time series to detect surface changes, irregularities in the radar backscatter and coherence time series were analysed. The radar products exhibit more contrast between areas of high and low woody cover in flat terrain, but are still more affected by topography despite radiometric corrections. 
Sentinel-2 better detects differences in mountainous areas. Machine learning applications help to analyse the large amount of data, but training and accuracy assessments also require in-situ data and feedback from local experts. Therefore, we are using our own soil moisture measurements in each region, together with interaction with regional scientists and stakeholders, for interpretation and validation. This assembly of in-situ and Earth observation products represents a valuable basis for evidence-based climate-change studies, and a new spatio-temporal EO monitoring strategy for land degradation detection in Southern Africa is presented.
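As a sketch of the kind of index computation such a data-cube notebook might contain (the cube layout, band values and time span below are illustrative, not the authors' setup), a vegetation index and its intra-/inter-annual anomaly can be derived along the time axis:

```python
import numpy as np

# Illustrative (time, y, x) data cube of monthly reflectances over 3 years.
rng = np.random.default_rng(7)
red = rng.uniform(0.02, 0.3, size=(36, 5, 5))   # toy red-band reflectance
nir = rng.uniform(0.2, 0.6, size=(36, 5, 5))    # toy NIR-band reflectance

ndvi = (nir - red) / (nir + red)                # per-scene NDVI
monthly = ndvi.reshape(3, 12, 5, 5)             # split into years x months
climatology = monthly.mean(axis=0)              # mean seasonal cycle per pixel
anomaly = monthly - climatology                 # intra-/inter-annual variation
print("anomaly range:", anomaly.min(), anomaly.max())
```

The same pattern (stack, reduce along time, subtract a climatology) carries over to backscatter or coherence time series from Sentinel-1.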

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Mitigating the Global Crisis of Chromium Pollution

Authors: Chandra Prakash Jha
Affiliations: Fashion For Biodiversity S.r.l.
Problem statement: As a global community, we face a mounting environmental and public health crisis. Desertification and land degradation are intensifying environmental challenges exacerbated by industrial pollution. Chromium, particularly its hexavalent form [Cr(VI)], is a highly toxic industrial byproduct predominantly released by leather tanning, metal plating, and mining industries. With the global demand for leather goods and industrial metals surging, chromium pollution has reached alarming levels, threatening human health, biodiversity, and sustainable development. Statistical data underscores the scale of this crisis. According to the United Nations Industrial Development Organization (UNIDO), approximately 90% of the world’s leather is processed using chromium salts, generating over 6.5 million tons of chromium-laden waste annually. In India alone, where Kanpur’s leather industry is a hub, an estimated 2,000-3,000 tons of chromium are discharged annually into the Ganges River, contaminating water sources for millions. The European Union highlights Italy’s Veneto and Spain’s Igualada as chromium pollution hotspots, with soil and water Cr(VI) levels often exceeding permissible limits by thousands of times, breaching the WHO guideline for drinking water (0.05 mg/L). Contaminated soils lose fertility and biodiversity, accelerating desertification in semi-arid regions. Chromium leaches into groundwater, making it unsafe for drinking and agriculture. WHO estimates a 50% cancer risk increase for populations consuming Cr(VI)-contaminated water. Aquatic ecosystems suffer too, with chromium bioaccumulating in fish and disrupting food chains. Policy frameworks, such as the European Union’s REACH Regulation and India’s Central Pollution Control Board standards, have established guidelines to mitigate chromium discharge. However, enforcement remains inconsistent, and remediation efforts are often cost-prohibitive. 
In this article, we examine the scale and implications of chromium pollution in the EU and India, highlight relevant policies, and present the ChromEX program by Fashion For Biodiversity as an innovative solution. ChromEX employs a triangulated remediation strategy integrating hyperspectral spatial data analysis, drone surveillance, IoT technologies, and microbiome-based bioremediation, demonstrated successfully in its Kanpur pilot project.

EU and global policies addressing chromium pollution and supporting soil management, land restoration, and combating desertification:
1. European Union Regulations:
• REACH Regulation classifies hexavalent chromium as a "substance of very high concern"
• European Green Deal aims for zero pollution
• Soil Strategy for 2030 focuses on restoring polluted lands
• Water Framework Directive sets stringent contamination limits
2. International Agreements:
• UN Sustainable Development Goals (SDGs)
• UN Convention to Combat Desertification (UNCCD)
• Basel Convention
• Stockholm Convention
Despite these comprehensive policy frameworks, enforcement remains inconsistent, and remediation efforts are often prohibitively expensive.

The ChromEX Program: An Innovative Solution
The ChromEX programme by Fashion For Biodiversity leverages cutting-edge technology to combat chromium contamination in soil and water.
Step 1: Identification of Hotspots: Satellites map contaminated areas based on spectral data.
Step 2: Targeted Remediation: Results guide the placement of biochar-bacteria reactive zones, microbial treatments, and phytoremediation plants.
Step 3: Post-Remediation Monitoring: Satellite data tracks improvements in soil and water quality, vegetation recovery, and reductions in chromium concentrations.
Utilizing hyperspectral spatial data, hyperspectral drones, and IoT sensors, the programme identifies contamination hotspots with precision, analysing soil and water properties for chromium toxicity. 
Contamination detection and impact monitoring via hyperspectral satellites
In the ChromEX program, hyperspectral satellites are used as the first step in the remediation process. By providing advanced, detailed spectral data across hundreds of narrow wavelength bands, hyperspectral satellites can identify, map, and track pollution hotspots with exceptional precision, supporting the detection and remediation of toxic hexavalent chromium [Cr(VI)]. They also provide large-scale monitoring capabilities for chromium contamination in soil and water. Let us examine these capabilities in detail.
1. Detection of Chromium Contamination Hotspots
Hyperspectral satellites, such as Sentinel-2 (part of the EU’s Copernicus program) and Germany’s EnMAP (Environmental Mapping and Analysis Program), detect subtle spectral changes in the environment caused by chromium contamination.
1.1. Spectral Signatures of Chromium: Chromium, particularly in its hexavalent form, alters the reflectance of soil, water, and vegetation. These changes occur in specific wavelength ranges, especially in the visible (400–700 nm), near-infrared (700–1400 nm), and shortwave infrared (1400–2500 nm) regions.
1.2. Soil Contamination: Contaminated soils exhibit unique spectral patterns due to chromium's interaction with minerals and organic matter, enabling the identification of polluted areas.
1.3. Water Pollution: Chromium affects the turbidity and chemical composition of water, which is detectable through absorption features in hyperspectral data.
2. Monitoring Vegetation Stress
Chromium contamination disrupts plant physiology by affecting chlorophyll levels, water absorption, and nutrient uptake. Hyperspectral satellites can detect:
2.1. Chlorophyll Degradation: Reduced chlorophyll content shifts spectral reflectance in the visible and near-infrared regions.
2.2. Water Stress: Changes in water content within vegetation due to chromium toxicity are visible in the shortwave infrared bands.
2.3. Early Detection: These changes are detectable before visible symptoms appear, allowing for timely intervention in contaminated agricultural areas.
3. Spatial and Temporal Coverage
Hyperspectral satellites provide comprehensive data on contamination over vast areas, enabling macro-level analysis of chromium pollution.
3.1. Wide Area Coverage: Satellites can monitor entire industrial regions, such as Kanpur in India or Veneto in Italy, where chromium contamination is widespread.
3.2. Temporal Monitoring: Regular flyovers ensure continuous observation, allowing for the tracking of contamination trends and the effectiveness of remediation efforts.
4. Guiding and Optimizing Remediation
Hyperspectral data informs decision-making in the remediation process by:
4.1. Mapping Contamination Pathways: Identifying how chromium spreads through soil and water, helping prioritize critical areas for bioremediation.
4.2. Evaluating Remediation Success: Post-remediation, hyperspectral satellites track changes in soil and water properties, verifying reductions in chromium levels and improvements in vegetation health.
Advantages of Hyperspectral Satellites in Chromium Monitoring
Non-Invasive and Scalable: Satellites monitor chromium contamination without disturbing ecosystems, covering vast areas cost-effectively.
High Precision: Their ability to detect subtle spectral changes allows for the identification of chromium even in low concentrations.
Long-Term Observation: Satellites support temporal analysis, making it possible to assess both contamination progression and remediation effectiveness over time.
Global Accessibility: Data from hyperspectral satellites like Sentinel-2 is publicly available, fostering collaboration among researchers and policymakers. 
IoT sensors play a vital role in the ChromEX triangulation strategy through real-time monitoring of contamination, providing early warnings with localized precision. They are also integrated with the ChromEX network to monitor bioremediation effectiveness in real time. Similarly, hyperspectral drones are integral to the ChromEX program, providing high-resolution, localized data for detecting and monitoring chromium contamination in soil, water, and vegetation. These drones bridge the gap between satellite observations and ground-level measurements, offering unmatched precision and flexibility in targeting contamination hotspots.

Chromium mitigation process: At the core of this process is E-MicroBiome bioremediation, where engineered microbial consortia like Pseudomonas, Bacillus, and Cellulosimicrobium funkei drive the bio-reduction of Cr(VI). These microbes convert Cr(VI) into Cr(III) through mechanisms such as biosorption (binding chromium ions to microbial cell walls), enzymatic reduction, and efflux systems that detoxify chromium efficiently. Tailored to site-specific conditions like pH and temperature, these microbes maintain high efficacy even in challenging environments. In addition, the Biochar-Bacteria Reactive Zone enhances chromium mitigation by combining biochar and microbial activity. Biochar provides a high-surface-area substrate that immobilizes Cr(VI) while supporting microbial colonization. Installed as reactive barriers at contamination pathways, this zone ensures the adsorption of Cr(VI) and facilitates its microbial reduction, effectively containing chromium migration. Finally, phytoremediation complements microbial remediation. Plants like vetiver grass (Chrysopogon zizanioides) are used for phytoextraction, absorbing chromium from soil and water. This process is enhanced by myco-assisted remediation, where fungi mobilize chromium for plant uptake through organic acid production, significantly improving extraction efficiency. 
Together, these interconnected strategies enable ChromEX to sustainably detoxify chromium, restore ecosystem health, and prevent further contamination of soil and water.

Pilot Project in Kanpur, India
The European Union's strict environmental regulations in the 1990s compelled the leather tanning industry to relocate to Kanpur, India, leveraging lenient laws, lower costs, and abundant labor despite environmental concerns. In October 2023, ChromEX launched a pilot in Rania, Kanpur. Key actions included:
Hotspot Detection: Satellite and drone data identified severely contaminated areas.
Bioremediation Implementation: Chromium-absorbing plants (Brassica juncea) were integrated with biochar and bacterial consortia.
Outcome: By September 2024, chromium levels were reduced by 85%, improving water quality and agricultural productivity.

Conclusion
Chromium pollution threatens health, ecosystems, and land productivity. Programs like ChromEX show that EO spatial data and bioremediation can mitigate impacts. Global collaboration is vital to restore lands, combat desertification, and ensure sustainability.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Addressing land degradation and desertification: from LIFE NewLife4Drylands to HE MONALISA project

Authors: Nicola Riitano, Daniela Smiraglia, Cristina Tarantino, Giovanna Seddaiu, Paolo Mazzetti, Francesca Assennato
Affiliations: Italian Institute for Environmental Protection and Research (ISPRA), Institute of Atmospheric Pollution Research (IIA), National Research Council (CNR), University of Sassari
Addressing the complex issue of land degradation and desertification (LDD) in Mediterranean dryland areas, which face increasing climatic pressure and limited adaptive capacity, is a primary challenge for research and policy making. To find customized, scalable solutions that support sustainable land management and ecosystem restoration, it is necessary to navigate various socioeconomic, environmental, and cultural constraints, in close alignment with the EU's Sustainable Development Agenda. In particular, SDG 15 emphasizes the need to "protect, restore, and promote sustainable use of terrestrial ecosystems, combat desertification, and halt and reverse land degradation and biodiversity loss”. The preparatory LIFE project NewLife4Drylands (NL4DL) (https://www.newlife4drylands.eu/), which ended in 2024, developed a protocol based on satellite Remote Sensing (RS) data and techniques to identify a framework for combating LDD in protected areas by adopting Nature-Based Solutions (NBS) and for the mid- and long-term monitoring of LDD status to evaluate restoration effectiveness. Six pilot sites in southern European Mediterranean countries were examined using field data and satellite observations to develop replicable methodologies. The SDG 15.3.1 indicator, “Proportion of land that is degraded over total land area”, was computed for each site by considering additional sub-indicators related to the specific pressures and threats on the site, produced at local scale as far as possible to provide more useful support to the decision-making process. NL4DL results highlighted the key role of integrating multiple sources and combining top-down and bottom-up approaches for effective interventions against LDD in Mediterranean drylands, together with the need for harmonization and standardization of ecological indices/indicators derived from satellite data. 
Building on the legacy of NL4DL, the Innovation Action MONALISA Horizon Europe project (https://monalisa4land.eu/), funded under the EU Soil Mission, started in September 2024 and will end in August 2028. It aims to take this work further with a broader continental study on soil degradation and sensitivity to desertification. New indicators for LDD status monitoring, tested at continental scale, will then be applied at case-study scale, coupled with the integration of environmental and socio-economic data with local actions through stakeholder collaboration and cutting-edge digital tools. Six case studies in Italy, Spain, Greece, Tunisia and Palestine, strategically located across a gradient of aridity and socio-ecological conditions in the Mediterranean drylands, will host innovative agro-pastoral, water management and natural ecosystem restoration solutions to develop predictive models, scenario analysis and a decision support system for scaling out the solutions to reverse and prevent LDD. MONALISA addresses the “last mile” challenge of integrating scientific knowledge, local practices, advanced digital systems, artificial intelligence, and remote sensing technologies, fostering collaboration between researchers, policymakers, and land managers to ensure the adoption and scalability of solutions. A methodological framework for assessing and monitoring LDD risk across Europe and the Mediterranean region, together with a standardized data collection method that enhances soil productivity across diverse land uses, including agriculture and agroforestry, will allow findings from local applications to inform broader continental strategies that adapt successful practices to diverse pedo-climatic conditions. MONALISA promotes regional creativity and interdisciplinary research, echoing the 2030 Agenda's paragraph 33, which calls for partnerships and integrated approaches to sustainable land management. 
Through its case study locations, MONALISA supports efforts to reverse LDD and enhance resilience by endorsing the EU's "Soil Deal for Europe" initiative. This integrated strategy emphasizes large-scale monitoring combined with localized testing of methods, key steps in effectively addressing soil degradation. Such an approach not only deepens our understanding of environmental dynamics but also facilitates the implementation of practical solutions.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: High Resolution Spectral and Statistical Information About Soils In Europe – Products, Applicability and Free Data Access

Authors: Dr. Uta Heiden, Dr. Pablo d'Angelo, Mr. Paul Karlshoefer, Dr. André Twele, Dr. Fenny van Egmond, Dr. Laura Poggio
Affiliations: DLR Oberpfaffenhofen, ISRIC - World Soil Information
According to a recent study by the JRC and EEA, about 63% of European soils are degraded (Arias-Navarro et al., 2024). In response, the European Union’s Soil Strategy 2030 and Horizon Europe’s Mission “A Soil Deal for Europe” have been developed, and the European Commission is intensifying efforts to establish reliable and robust soil monitoring strategies. There is a high demand for information about the chemical and physical properties of European soils, properties that can reflect the consequences of our intensive use of soil as a natural resource. Earth Observation (EO) is a valuable data source for large-scale information about the Earth's surface. In the CUP4Soil project, EO-based, European-wide soil information has been developed with the objective of providing it to a large user community and preparing the ground for extending the Copernicus Land Monitoring Service with soil-related information. In this presentation, the SoilSuite, a collection of different image data products for Europe, is presented. It leverages the Sentinel-2 data archive, which is assimilated by DLR’s Soil Composite Mapping Processor (SCMaP), a specific processing chain for detecting and analyzing bare soil surfaces on a large (continental) scale. The methodology behind SCMaP is constantly enhanced using a technique that allows control of the variation of the different processing parameters and concepts (presented in an LPS submission by Karlshöfer et al.). We present a variety of pixel-based spectral information products of bare soils. The “Bare Surface Reflectance Composite – Mean” shows distinct spectral properties of soils due to varying soil organic carbon (SOC) content, soil moisture and soil mineralogy. Additionally, the product “Bare Surface Reflectance Composite – Standard Deviation” informs about the spectral dynamics of soils, a rarely used characteristic. 
An important milestone is the provision of a data product that informs users about the reliability of the spectral information. To this end, we developed the “Bare Surface Reflectance Composite – 95% Confidence” product, which contains the half-width of the 95% confidence interval (CI) of the “Bare Surface Reflectance Composite”. Finally, the “Bare Surface Frequency” product is scaled between 0 and 1 and quantifies the number of bare soil occurrences over the total number of valid observations. It highlights areas with limited soil coverage by, e.g., vegetation. It is a generic product that can be used to identify areas that have been prone to soil erosion. Furthermore, the “Bare Surface Frequency” can be used to better distinguish between areas under carbon farming and those under conventional agriculture. All products are used for the SOC map of the WorldSoils project as well as for the mapping of chemical/physical soil properties and their uncertainties developed during the CUP4SOIL project (presented in an LPS submission by Poggio et al.). We further explored the suitability of the “Bare Surface Frequency” product for the assessment of potential soil erosion source areas of river basins using the variety of products of HydroSHEDS (HydroSHEDS, access: 11/2024). Finally, it is important to provide the SoilSuite under free and open access conditions to allow users to test and experience the data on their own. The direct EO-based data (https://doi.org/10.15489/qkud8cudg596) will soon be accessible via DLR’s EOC Geoservice, and the EO-based information as well as the soil properties and their uncertainties are available via a dedicated website of ISRIC. Both platforms provide functionalities for data visualization (via OGC WMS) as well as data download (CC BY 4.0 license). References: Arias-Navarro, C., Baritz, R. and Jones, A. (eds.), 2024. The state of soils in Europe. Publications Office of the European Union. https://data.europa.eu/doi/10.2760/7007291, JRC137600. 
BGR [Bundesanstalt für Geowissenschaften und Rohstoffe], 2005. Soil Regions Map of the European Union and Adjacent Countries 1:5,000,000 (Version 2.0). Special Publication, Ispra. EU catalogue number S.P.I.05.134. HydroSHEDS, 2024: https://www.hydrosheds.org/hydroatlas, access 11/2024.
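The "Bare Surface Frequency" definition above (bare soil occurrences divided by valid observations, scaled between 0 and 1) can be sketched per pixel as follows. This is an illustrative toy computation with random masks, not the SCMaP implementation:

```python
import numpy as np

# Toy (time, y, x) stack of scene masks: 'valid' marks usable (cloud-free)
# observations, 'bare' marks observations flagged as bare soil. The frequency
# is the per-pixel ratio of bare observations to valid observations.
rng = np.random.default_rng(1)
t, h, w = 24, 4, 4                            # 24 scenes over a 4x4 window
valid = rng.random((t, h, w)) > 0.2           # True where observation is usable
bare = valid & (rng.random((t, h, w)) > 0.6)  # True where flagged as bare soil

n_valid = valid.sum(axis=0)
frequency = np.divide(bare.sum(axis=0), n_valid,
                      out=np.zeros((h, w)), where=n_valid > 0)  # 0..1 per pixel
print(frequency)
```

Pixels with frequency near 1 are almost always bare when observed (e.g. erosion-prone surfaces), while values near 0 indicate persistent cover such as vegetation.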

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Estimating soil properties and nutrient concentrations using machine learning and hyperspectral data: a case study in Italy

Authors: Micol Rossini, Dr. Luigi Vignali, Chiara Ferrè, Dr. Cinzia Panigada, Giulia Tagliabue, Dr. Gabriele Candiani, Dr. Francesco Nutini, Dr. Monica Pepe, Michael Marshall, Mariana Belgiu, Dr. Mirco Boschetti
Affiliations: Department of Earth and Environmental Sciences (DISAT), University of Milano Bicocca, Institute for Electromagnetic Sensing of the Environment (IREA), National Research Council of Italy (CNR), Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente
This study explores the use of machine learning algorithms and hyperspectral data to estimate key soil parameters, essential for monitoring soil fertility and promoting sustainable agricultural practices, as part of the European Space Agency (ESA) EO4NUTRI project. Spectral data were collected both in the laboratory and from the PRISMA satellite, focusing on the agricultural region of Jolanda di Savoia (FE), Italy. The primary goal is to develop accurate predictive models for soil chemical and physical properties, as well as the contents of macro, meso, and micronutrients (including total nitrogen, exchangeable bases, available sulfur, iron, phosphorus, and zinc), with spectral reflectance as the primary predictor. Field campaigns were conducted between March 2023 and May 2024, collecting soil samples from three crops—corn, rice, and wheat. A total of 200 soil samples were taken from the 0-20 cm depth (corresponding to the Ap horizon) and analysed for nutrient concentrations. Laboratory spectral measurements were conducted under controlled conditions with an SR-3500 spectroradiometer (Spectral Evolution, USA), covering the 350 nm to 2500 nm range (VNIR-SWIR). These measurements were standardized using the measurement protocol proposed by Ben Dor et al. (2015) to minimize instrumental variation. In addition to laboratory data, PRISMA satellite images were used, with the image date closest to each field sampling campaign selected to ensure temporal alignment between spectral data and soil parameters. Principal Component Analysis (PCA) was applied for dimensionality reduction, testing different numbers of principal components (5, 10, and 15) to optimize model performance. 
Various machine learning regression algorithms (MLRAs) were then trained and compared to estimate soil parameters, including Gaussian Processes Regression (GPR), Partial Least Squares Regression (PLSR), Least Squares Linear Regression (LSLR), Random Forest (RF), Kernel Ridge Regression (KRR), and Support Vector Regression (SVR). Model performance was evaluated using leave-one-out (LOO) and k-fold (k=5) cross-validation to assess generalization and prevent overfitting. Models based on laboratory spectral data showed higher accuracy than those based on PRISMA data, highlighting the challenges posed by environmental variability in satellite measurements. However, PRISMA-based models still provided valuable insights for estimating parameters such as total nitrogen, exchangeable magnesium and calcium, and available zinc. For laboratory data, total nitrogen, exchangeable calcium, and available iron and zinc were accurately estimated, with an R² between predicted and measured values of 0.94, 0.86, 0.94 and 0.84, respectively. For PRISMA data, the best results were observed for total nitrogen, exchangeable magnesium and calcium, and available zinc, with R² values of 0.79, 0.79, 0.70 and 0.83, respectively. These findings demonstrate the potential of machine learning and hyperspectral data for estimating soil parameters. Future research will focus on improving satellite-based model accuracy by refining pre-processing techniques to minimize noise and atmospheric interference in PRISMA data. Incorporating additional environmental variables, such as moisture conditions and soil texture, will further improve the predictive capabilities of the models. This study contributes to the growing field of digital soil science, offering new opportunities for sustainable soil monitoring and management. 
Satellite-derived soil parameter maps, such as those from PRISMA, can support site-specific crop management, optimize fertilizer usage, and provide valuable insights into soil health on a large scale.
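The k-fold validation protocol used to compare the regression models can be sketched generically. The snippet below is an illustrative outline only, not the study's pipeline: a plain one-feature OLS model stands in for the GPR/PLSR/RF and other regressors named above, and the data are synthetic.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def r_squared(y_true, y_pred):
    """Coefficient of determination between measured and predicted values."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def cross_validate(model_fit, X, y, k=5):
    """Out-of-fold R2 for a model given as fit(X, y) -> predict function."""
    truth, preds = [], []
    for fold in k_fold_indices(len(y), k):
        held_out = set(fold)
        train = [i for i in range(len(y)) if i not in held_out]
        predict = model_fit([X[i] for i in train], [y[i] for i in train])
        truth += [y[i] for i in fold]
        preds += [predict(X[i]) for i in fold]
    return r_squared(truth, preds)

# Stand-in "model": ordinary least squares on a single spectral feature.
def ols_fit(X, y):
    xs = [row[0] for row in X]
    mx, my = sum(xs) / len(xs), sum(y) / len(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(xs, y)) \
        / sum((a - mx) ** 2 for a in xs)
    return lambda row: my + slope * (row[0] - mx)

# Synthetic example: one reflectance feature linearly related to a nutrient.
X = [[0.1 * i] for i in range(30)]
y = [2.0 + 5.0 * row[0] + 0.01 * ((i * 7) % 5 - 2) for i, row in enumerate(X)]
score = cross_validate(ols_fit, X, y, k=5)
```

The same `cross_validate` skeleton works for any of the MLRAs by swapping the `model_fit` callable; in practice one would use a library such as scikit-learn rather than hand-rolled folds.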
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Optimising Satellite-based Soil Spectra Extraction for Predicting Agricultural Soil Carbon Content Across Europe

Authors: Denise Hick, Guy Ziv, Pippa Chapman
Affiliations: University of Leeds
There is a growing need for reliable monitoring, reporting and verification (MRV) systems to quantify and track soil organic carbon (SOC) content change in agricultural land, both at the local scale and over large scales. Accurate SOC content estimates can be obtained via direct soil sampling and SOC measurement in the laboratory, but traditional soil surveys are expensive and time-consuming. Predicting SOC content from remotely sensed bare soil reflectance is gaining popularity, but the accuracy of predictions remains challenging, especially over large scales characterised by variable climate and soil types. In part, this challenge could be due to choices in image selection, pixel-level filtering, and multi-temporal aggregation that account for residual vegetation, crop residue cover and soil moisture. Using 309 arable points across Europe surveyed as part of LUCAS 2018 to identify bare soil locations and ground-truth satellite imagery, this study investigated how different vegetation index thresholds and bare soil compositing techniques affect the spectral similarity of Sentinel-2 soil reflectance to laboratory soil spectra, and selected the best combination of methods to predict SOC content at the European scale. Upon investigation of the LUCAS 2015 soil spectral library, we found that the Normalized Burn Ratio 2 (NBR2) of European bare soil, often used to mask out crop-residue-covered and high-soil-moisture pixels, varies based on soil type. Therefore, a new soil-type-specific (dynamic) NBR2 threshold was proposed and tested in this study. Our results showed that the bare soil satellite reflectance filtered using the dynamic NBR2 threshold proposed here and aggregated by the 90th percentile resulted in a closer match to laboratory soil spectra. 
This methodology was therefore applied to Sentinel-2 imagery from 2017 to 2020 to obtain “optimal” bare soil reflectance, which was decomposed into principal components and, together with additional spectral, climatic, terrain and pedological covariates, used to predict SOC content over agricultural points at the European scale (R² = 0.42 ± 0.03, RMSE = 5.67 ± 0.29 g kg-1, MAPE = 30.39 ± 1.23 %). This returned results comparable to other European-scale studies, despite adopting a more restrictive approach and using open-source data only. Lastly, total annual precipitation, the slope between visible satellite bands, and latitude were found to be the most important covariates for SOC content prediction at the European scale, followed by satellite brightness indices and soil textural information.
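The dynamic-threshold filtering and quantile aggregation described above can be sketched per pixel. NBR2 is the standard normalized difference of the Sentinel-2 SWIR bands B11 and B12; the soil-type thresholds below are illustrative placeholders, not the values the study derived from the LUCAS 2015 spectral library.

```python
def nbr2(b11, b12):
    """Normalized Burn Ratio 2 from Sentinel-2 SWIR bands B11 and B12."""
    return (b11 - b12) / (b11 + b12)

def quantile(values, q):
    """Linear-interpolation quantile of a list (0 <= q <= 1)."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

# Hypothetical soil-type-specific ("dynamic") NBR2 thresholds -- the study
# derives its own values per soil type from laboratory spectra.
NBR2_MAX = {"clay": 0.12, "loam": 0.09, "sand": 0.05}

def bare_soil_composite(obs, soil_type, q=0.90):
    """Keep observations whose NBR2 falls below the soil-type threshold
    (low residue/moisture), then aggregate reflectance by the q-th quantile.
    `obs` is a list of (b11, b12, reflectance) tuples for one pixel."""
    kept = [r for b11, b12, r in obs if nbr2(b11, b12) < NBR2_MAX[soil_type]]
    return quantile(kept, q) if kept else None

# Example time series for one pixel: the second observation (high NBR2,
# i.e. likely residue cover) is screened out before aggregation.
obs = [(0.30, 0.28, 0.21), (0.30, 0.20, 0.35), (0.25, 0.24, 0.19), (0.28, 0.26, 0.23)]
composite = bare_soil_composite(obs, "loam")
```

Choosing a high quantile (here the 90th) rather than the mean is what the abstract reports as giving the closest match to laboratory spectra.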
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Evaluating different methods for the estimation of bare soil surface reflectance using multispectral satellite image time series and LUCAS 2015 Multispectral Reflectance Data

Authors: Vasileios Tsironis, Ms Eleni Sofikiti, Dr Konstantinos Karantzalos
Affiliations: National Technical University Of Athens
Soil degradation is an increasingly pressing issue that necessitates action, prompting the European Union and United Nations to implement policies for its continuous monitoring. In this context, to support climate change monitoring, policymaking, and the adoption of agricultural practices that comply with these policies, there is an increasing need for up-to-date, comprehensive soil data. Remote sensing can provide valuable insights for monitoring soil over long periods and large areas at a low cost. Key challenges for soil monitoring through remote sensing techniques include distinguishing soil from vegetation, especially crop residue, and minimizing the influence of soil moisture and surface roughness on the reflectance of bare soil. Several satellite spectral indices have been used to date to distinguish bare soil, but few studies are dedicated to evaluating and comparing the different approaches for adequately estimating bare soil reflectance from satellite imagery. The goal of this work is to thoroughly evaluate accurate bare soil reflectance mapping in Greece at medium spatial resolution by benchmarking the performance of various compositing approaches, providing a thorough assessment of the contribution of different techniques, i.e., simultaneous use of multiple spectral indices, different compositing, masking or thresholding techniques, and other parameters of the time series such as cloud cover and time range. Focusing on Greece, a Mediterranean country with diverse microclimates and soil types, the study leverages Landsat 8 images spanning from 2015 to 2020 and the LUCAS 2015 database to evaluate the results of creating accurate bare soil reflectance composites. Each image was masked with the Landsat 8 quality band, and all land cover classes except grassland, cropland and bare/sparse vegetation were excluded based on auxiliary land cover datasets available on GEE. 
A wide range of experiments was conducted to determine the best approach to creating a bare soil reflectance composite with the highest possible correlation per spectral band with the 2015 spectral reference data. This study evaluates the Normalized Difference Vegetation Index (NDVI), Normalized Burn Ratio 2 (NBR2), Bare Soil Index (BSI) and Soil Surface Moisture Index (S2WI), used individually or simultaneously and in different combinations, to test their ability to distinguish bare soil pixels from vegetation and crop residue and to minimize the effect of soil moisture. Different thresholds for the indices were tested, as well as different compositing methods, i.e., mean, median, min NDVI, min NBR2, min S2WI and max BSI. We also examined the most appropriate maximum cloud cover for the time series, and whether setting a minimum number of bare soil instances for the pixels included in the composite improves the results. Additionally, simple thresholding, i.e., a single threshold value for the entire area of interest, and dynamic thresholding, i.e., a different threshold for each pixel, were tested. Finally, all approaches were tested for a 1-year and a 6-year time series. Results indicated that increasing bare soil frequency, through low-frequency pixel elimination, yielded substantially better correlation with reference data. Increasing the maximum image cloud coverage, to include more input images in the composite, did not improve the composite’s coverage. Additionally, the simultaneous combination of all spectral indices proved necessary for achieving high correlation levels, while choosing a good combination of multiple indices for the selection of bare soil pixels tends to be more important than using longer time series. 
Dynamic thresholding achieved high levels of correlation only with the 6-year composite, possibly due to very low bare soil frequency in the 1-year composite, while BSI and S2WI proved to be the most impactful indices when used in a dynamic thresholding setting. Applying dynamic BSI masking improved the correlation in the infrared part of the spectrum. The best compositing methods in terms of correlation with the reference data were mean and median, with Pearson’s correlation coefficient ranging from 0.75 to 0.85 across the different spectral bands. These composites also showed minimal salt-and-pepper noise, a known issue. In terms of error, max BSI achieved the lowest RMSE, and most of the experiments achieved an ubRMSE of around 0.03 for the RGB bands and 0.05 for the infrared bands, indicating the residual effect of crop residue and soil moisture. Using NDVI, NBR2 and BSI simultaneously increases the quality of the composite rapidly, even for one year of data. Further increasing the span to 6 years improves the composite’s correlation in the VNIR part of the spectrum, but less improvement is observed in the SWIR part. This work benchmarked the performance of several compositing methodologies and provided a comprehensive evaluation of the contribution of various techniques for accurately mapping bare soil reflectance in Greece at medium spatial resolution. We demonstrated that careful selection and tuning of a wide range of parameters may greatly enhance the estimation of bare soil reflectance from multispectral satellite image time series. The findings provide a solid foundation for improving bare soil reflectance estimation techniques and give useful directions for upcoming monitoring initiatives aimed at supporting sustainable soil management and combating soil degradation globally.
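The core operation benchmarked here, masking with several spectral indices simultaneously and then compositing the surviving observations, can be sketched as below. The index formulas are the commonly used definitions; the thresholds and the `min_count` value are illustrative, since the study's whole point is to tune them experimentally.

```python
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def nbr2(swir1, swir2):
    return (swir1 - swir2) / (swir1 + swir2)

def bsi(blue, red, nir, swir1):
    """Bare Soil Index as commonly defined from blue, red, NIR and SWIR1."""
    return ((swir1 + red) - (nir + blue)) / ((swir1 + red) + (nir + blue))

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Illustrative thresholds -- the study tunes these per experiment.
def is_bare(px):
    return (ndvi(px["nir"], px["red"]) < 0.25          # little green vegetation
            and nbr2(px["swir1"], px["swir2"]) < 0.10  # little residue/moisture
            and bsi(px["blue"], px["red"], px["nir"], px["swir1"]) > 0.0)

def composite(time_series, band="red", min_count=2):
    """Median composite of one band over a pixel's bare-soil observations;
    pixels with fewer than `min_count` bare-soil instances are dropped,
    mirroring the low-frequency pixel elimination described above."""
    bare = [px[band] for px in time_series if is_bare(px)]
    return median(bare) if len(bare) >= min_count else None

# Example: two bare-soil observations and one vegetated one for a pixel.
bare1 = {"blue": 0.08, "red": 0.20, "nir": 0.24, "swir1": 0.30, "swir2": 0.28}
bare2 = {"blue": 0.08, "red": 0.22, "nir": 0.24, "swir1": 0.30, "swir2": 0.28}
veg = {"blue": 0.04, "red": 0.06, "nir": 0.40, "swir1": 0.20, "swir2": 0.15}
value = composite([bare1, veg, bare2], band="red")
```

Swapping `median` for `mean`, `min`-of-index or `max`-of-index selectors reproduces the other compositing variants compared in the abstract.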
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: The EDAFOS Project: A GIS Tool Solution To Combat Desertification

Authors: Christos Theocharidis, Maria Prodromou, Panagiota Venetsanou, Charalampos Panagiotou, Athos Agapiou, Diofantos Hadjimitsis
Affiliations: ERATOSTHENES Centre Of Excellence, ATLANTIS Environment and Innovation Ltd, Cyprus University of Technology
Desertification and land degradation are significant threats to environmental sustainability and human well-being worldwide, especially in the vulnerable Mediterranean region, including Cyprus. Numerous efforts have been made, such as ratifying the Convention to Combat Desertification. However, countries like Cyprus still need assistance to assess the risk of desertification and identify the natural and anthropogenic pressures that contribute to it. The EDAFOS project, funded by the European Space Agency (ESA), is a collaborative effort involving ATLANTIS Environment LTD, the Department of Environment, and the ERATOSTHENES Centre of Excellence. The area of interest for the project was Cyprus, with the Limassol district selected as the pilot study area. A variety of open satellite and geospatial data were sourced from ESA and various governmental services such as the Department of Meteorology (DOM), the Department of Lands and Surveys (DLS) and the Cyprus Agricultural Payments Organisation (CAPO). The project aimed to address the challenge of desertification by utilising advanced geospatial technologies to design a system that simplifies the mapping of desertification risk and of the individual parameters contributing to it. To this end, the EDAFOS toolbox has been developed within the ArcGIS Pro software environment, where users can automatically integrate various datasets to create the Environmentally Sensitive Area Index (ESAI). This index evaluates the susceptibility of areas to desertification by integrating geospatial analysis, remote sensing, in-situ data, and data processing techniques to deliver crucial information on land degradation dynamics. By combining parameters such as the Vegetation Quality Index (VQI), Soil Quality Index (SQI), Climate Quality Index (CQI) and Management Quality Index (MQI), the EDAFOS project allows stakeholders to monitor, report, and develop countermeasures for combating desertification. 
Furthermore, the project provides scenario analysis to minimise risk and mitigate associated socioeconomic and environmental impacts, assisting policymakers. Overall, it represents a significant advancement in environmental analysis, offering a powerful tool for identifying and mitigating land degradation risks at regional and global scales and supporting national competent authorities in implementing the desertification directive. Acknowledgements: The EDAFOS project is funded by the European Space Agency in the framework of ESA AO/1-10264/20/NL/SC, the Tender for the Fourth Call for Outline Proposals under the Plan for European Cooperating States (PECS) in Cyprus. The authors acknowledge the 'EXCELSIOR': ERATOSTHENES: Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project (www.excelsior2020.eu). The 'EXCELSIOR' project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 857510, from the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development, and from the Cyprus University of Technology.
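In the MEDALUS-style methodology that ESAI mapping is usually based on, the index is the geometric mean of the four quality layers named above. The sketch below assumes that formulation; the per-pixel scores and the sensitivity cut-off are illustrative, not values from the EDAFOS toolbox.

```python
def geometric_mean(scores):
    """Geometric mean of a list of positive quality scores."""
    p = 1.0
    for s in scores:
        p *= s
    return p ** (1.0 / len(scores))

def esai(sqi, cqi, vqi, mqi):
    """Environmentally Sensitive Area Index as the geometric mean of the
    soil, climate, vegetation and management quality indices
    (MEDALUS-style formulation; assumed here, scores run 1 = best, 2 = worst)."""
    return geometric_mean([sqi, cqi, vqi, mqi])

# Illustrative per-pixel quality scores.
pixel = {"sqi": 1.4, "cqi": 1.6, "vqi": 1.3, "mqi": 1.5}
score = esai(**pixel)
# Illustrative classification cut-off: higher ESAI = more sensitive area.
sensitive = score > 1.375
```

Because the geometric mean penalises any single degraded layer, a pixel with one very poor quality index cannot be fully compensated by good scores elsewhere, which is the design rationale for this aggregation.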
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Methods and applications for soil organic carbon mapping based on Sentinel-2 bare soil composites

Authors: Dries De Bièvre, Prof. Bas van Wesemael, Pierre Defourny
Affiliations: Earth and Life Institute - UCLouvain
In recent years, soil health has gained increasing attention due to its important role in agricultural sustainability. The EU has now proposed a soil monitoring law, emphasizing the need for tools to monitor soil organic carbon (SOC) for soil health and understand its spatial variability. Since the reflectance spectrum of a soil is influenced by its organic matter content, optical remote sensing can be a tool for mapping or monitoring SOC content. SOC maps at high resolution would allow comparison of groups of fields with different management practices but in comparable contexts. Alternatively, the derived maps could serve to establish regional baselines against which soil analyses in individual fields can be compared. Sentinel-2 observations provide reflectance measurements in 10 spectral bands from the visible to the short-wave infrared part of the spectrum. We processed Sentinel-2 images to obtain bare soil reflectance values for pixels with at least one bare soil observation. A soil database of farmers’ routine analyses was then used to train prediction models for SOC content based on bare soil composites derived from Sentinel-2 images over 3 years. Reflectance values in the composite were averaged at field level to match the support of the soil samples. Despite the low variability of SOC contents in the Walloon region, this approach allows for predictions at parcel level with an RMSE of 2.7 g C/kg. The performance is, however, variable. In the Loam Belt, characterized by a large surface of croplands but small variability in SOC content, the RMSE is 2.6 g C/kg, while in more heterogeneous areas with fewer croplands the RMSE is up to 5.5 g C/kg. With the use of quantile regression approaches, the uncertainty on the estimates is accurately quantified. The predictive power of the normalized difference of all pairwise combinations of Sentinel-2 bands was evaluated, allowing the selection of 4 spectral features associated with SOC content. 
Results indicated that spectral data alone cannot capture small-scale SOC variation. Incorporating three environmental covariates as predictors significantly improved model performance. The model can map SOC content with an accuracy comparable to existing soil maps, but at a higher spatial resolution. Interpretation of the model using Shapley values and surrogate models provided insights into the associations between predictors and SOC. Using this model, SOC content predictions are obtained at the level of individual fields. Since the variability between fields in the Walloon region is small, the obtained map is of limited use for parcel-level comparisons. The obtained accuracy is also too low for monitoring SOC changes in croplands over time, since SOC content may change by only 1 g C/kg over a timeframe of 10-20 years when cover crops are included. Therefore, Sentinel-2-derived SOC maps alone do not suffice for precise SOC monitoring at parcel level. While the model's accuracy is insufficient for monitoring small SOC changes at the parcel level, it demonstrates potential for regional assessments. We propose a methodology to estimate regional SOC content averages and variability, incorporating geostatistical simulations to quantify uncertainties. This approach supports the estimation of regional SOC baselines.
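The quantile regression approach used above for uncertainty quantification rests on the pinball (quantile) loss, which is minimised by the conditional quantile rather than the mean. A minimal sketch, with synthetic SOC values and a brute-force constant predictor standing in for a full regression model:

```python
def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss: asymmetric penalty minimised by the q-quantile."""
    diff = y_true - y_pred
    return q * diff if diff >= 0 else (q - 1) * diff

def mean_pinball(y, pred, q):
    return sum(pinball_loss(t, pred, q) for t in y) / len(y)

def best_constant(y, q, candidates):
    """Constant predictor minimising mean pinball loss over a candidate grid;
    this recovers (approximately) the empirical q-quantile of y."""
    return min(candidates, key=lambda c: mean_pinball(y, c, q))

# Synthetic, right-skewed SOC sample (g C/kg). Fitting the 0.05 and 0.95
# pinball minimisers gives the bounds of a 90% prediction interval.
y = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 6.0, 8.0, 12.0]
grid = [v / 10 for v in range(0, 150)]
lo = best_constant(y, 0.05, grid)
hi = best_constant(y, 0.95, grid)
```

In a real quantile regression the constant is replaced by a model of the covariates, trained once per quantile level, so the interval width varies per prediction.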
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: C-Band SAR Amplitude Time Series in Dryland Landscapes Reveal Grain Size Change Distribution after Flash Floods and Debris Flows

Authors: Albert Cabré, Dr Dominique Remy, Dr Odin Marc, Prof Dr Aaron Bufe, Dr Sebastien Carretier
Affiliations: Department of Earth and Environmental Sciences Ludwig Maximilian University of Munich, Géosciences Environnement Toulouse (GET), UMR 5563, CNRS/IRD/CNES/UPS, Observatoire Midi-Pyrénées
Understanding sediment fluxes and geomorphic changes in arid environments is essential for improving landscape evolution models. This research utilizes Synthetic Aperture Radar (SAR) imagery acquired since October 2014 in the Atacama Desert. Sentinel-1 SAR C-band images from the European Union’s Copernicus program have enabled us to investigate grain size variability and sediment transport processes across ephemeral channels and alluvial fans in the Atacama Desert—a region uniquely suited for such studies due to its extreme aridity and the lack of erosional interference between runoff events. Our analysis focuses on 21 alluvial fans and more than 50 km of valley floors along a latitudinal gradient (21–24°S), where sedimentation from debris flows and hyperconcentrated flows (defined by varying water-to-sediment ratios) remains undisturbed between rainstorm events, which in some of the studied regions can be separated by more than 30 years. By integrating SAR amplitude data with on-site grain size measurements derived from field photographic analysis, we obtain strong correlations (R² = 0.72 for the 50th percentile and R² = 0.93 for the 84th percentile). These findings highlight the capacity of SAR amplitude imagery to reconstruct historical grain size distributions, leveraging the Sentinel-1 archive to establish a temporal dataset spanning nearly a decade of surface change related to grain size variations. This study also classifies non-permanent (transient) SAR amplitude variations associated with moisture and shallow groundwater during hydrological events observed in ephemeral channels and valley floors. By exploring sediment dynamics at varying altitudinal gradients, we gained critical insights into regional sediment pathways and their influence on geomorphic processes. The Atacama Desert’s unique hydrological history, including events in March 2015, May 2017, January 2020, and March 2022, provided an ideal setting to test and refine our methodologies. 
SAR amplitude imaging, combined with digital elevation models (DEMs), facilitates the extraction of erosion and deposition patterns along ephemeral drainages that lack direct monitoring. This approach enhances predictions of hydrological impacts from extreme runoff events and supports sediment transport modeling in under-monitored arid landscapes. Our findings contribute to global efforts to understand sediment fluxes in desert regions, addressing critical gaps in observational data, and demonstrate the versatility of C-band SAR and its potential for integration with other radar wavelengths in landscapes that cover 40% of the continental land surface. Our research lays a promising foundation for advancing dryland erosion studies in other deserts of the world.
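A calibration of the kind reported above (SAR amplitude against field-measured grain-size percentiles, scored with R²) reduces to a simple regression. The sketch below uses synthetic numbers, not the study's data, and a plain OLS fit stands in for whatever calibration the authors used.

```python
def linear_fit(x, y):
    """Ordinary least squares y = a + b*x with coefficient of determination."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Synthetic calibration points: backscatter amplitude (dB) vs field-derived
# D84 grain size (mm); coarser surfaces scatter more strongly back.
amplitude_db = [-14.0, -12.5, -11.0, -9.5, -8.0, -6.5]
d84_mm = [35.0, 52.0, 61.0, 80.0, 88.0, 104.0]
a, b, r2 = linear_fit(amplitude_db, d84_mm)
```

Once calibrated on field sites, the fitted `a` and `b` can be applied to the full Sentinel-1 amplitude archive to hindcast grain-size change through time, which is the workflow the abstract describes.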
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: A.05.07 - POSTER - Sea level change from global to coastal scales and causes

Sea level changes at global and regional scales have been routinely measured by high-precision satellite altimetry for more than three decades, leading to a broad variety of climate-related applications. Recently, reprocessed altimetry data in the world's coastal zones have also provided novel information on decadal sea level variations close to the coast, complementing the existing tide gauge network. Since the early 2010s, the ESA Climate Change Initiative programme has played a major role in improving the altimetry-based sea level data sets at all spatial scales, while also supporting sea level-related cross-ECV (Essential Climate Variable) projects dedicated to assessing the closure of the sea level budget at global and regional scales. Despite major progress, several knowledge gaps remain, including for example:
• Why is the global sea level budget not closed since around 2017?
• Why is the regional sea level budget not closed in some oceanic regions?
• How can altimetry-based coastal sea level products be further improved?
• How can we enhance the spatial coverage of these products, which are currently limited to satellite tracks?
• To what extent do small-scale sea level processes impact sea level change in coastal areas?
• Can we provide realistic uncertainties on sea level products at all spatial scales?
• What is the exact timing of the emergence of anthropogenic forcing in observed sea level trends at regional and local scale?
In this session, we encourage submissions dedicated to improving multi-mission altimetry products and associated uncertainties, as well as assessing sea level budget closure at all spatio-temporal scales. Submissions providing new insights on processes acting on sea level at different spatial and temporal scales are also welcome. In addition to altimetry-based work, contributions using other space-based and in-situ data, as well as modelling studies, are highly encouraged.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Assessment of deep-ocean warming based on sea-level and energy budget

Authors: Hyeonsoo Cha, Jae-Hong Moon, Taekyun Kim, Y. Tony Song
Affiliations: Jeju National University, NASA Jet Propulsion Laboratory
Advances in satellite altimetry and in-situ observations allow the quantification of thermal expansion and ocean mass changes contributing to sea-level rise. Observation-based estimates have shown that the global mean sea-level (GMSL) budget is closed within the uncertainties of each component. However, the GMSL budget has not been closed since 2016. Recent studies have suggested instrumental problems, such as salinity drift in Argo floats and measurement and wet-troposphere correction issues in the Jason-3 satellite, which can contribute to this discrepancy. Our analysis shows that although correcting these problems has reduced the discrepancy, a non-closure of the sea-level budget still remains. This non-closure may be driven by deep-ocean warming. However, estimating thermosteric changes in the deep ocean is challenging due to a lack of observations. Therefore, we quantify the deep-ocean (below 2000 m) contribution using a residual approach based on thermosteric sea level and the Earth energy imbalance. Results of the budget analysis show that ocean warming below 2000 m has been accelerating since 2016 compared to the previous decades. We will discuss these results in more detail at the conference.
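The residual approach mentioned above amounts to simple budget arithmetic: whatever part of the observed GMSL trend is not explained by ocean mass gain and measured upper-ocean expansion is attributed to the unobserved deep ocean. A minimal sketch with illustrative (not observed) numbers:

```python
def deep_steric_residual(gmsl_trend, mass_trend, upper_steric_trend):
    """Deep-ocean (below 2000 m) thermosteric contribution inferred as the
    residual of the global mean sea-level budget (all terms in mm/yr):
    GMSL = mass + upper steric + deep steric  =>  deep = GMSL - mass - upper."""
    return gmsl_trend - mass_trend - upper_steric_trend

# Illustrative trend values in mm/yr, not the study's results.
residual = deep_steric_residual(gmsl_trend=4.5, mass_trend=2.6,
                                upper_steric_trend=1.5)
```

The same residual logic applies to the energy side: an inferred deep-ocean warming must also be consistent with the Earth energy imbalance, which is the cross-check the abstract describes.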
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Uncertainty quantification of sea level altimetry data in the coastal ocean

Authors: Fernando Niño, Léna Tolu, Fabien Léger, Florence Birol, Mathilde Cancet, Pierre Prandi
Affiliations: CNRS - LEGOS, Université de Toulouse, Collecte Localisation Satellites
Recent developments in coastal altimetry processing have made available new datasets of coastal sea level at a distance of only a few kilometers from the shoreline. Here we aim to make the most of the new geophysical information this represents by quantifying the associated uncertainties. Uncertainty quantification in altimetry is a very difficult matter; it must take into account systematic and random errors from different kinds of sources: theory, measurements and models. Prandi et al. [Local sea level trends, accelerations and uncertainties over 1993–2019. Scientific Data 8, 1 (2021)] provided a framework for sea level uncertainty for multi-mission gridded data (1°x1°) at the global level. We present the extension of this work to high-resolution along-track data (with a spatial resolution of ca. 300 m) near the coast, over the period January 2002 – December 2019, for the Jason-1/2/3 missions. We also present a new error budget analysis of the satellite altimetry system that takes into account the individual contributions of each altimetric correction (dry and wet tropospheric corrections, ionospheric correction, ocean tides, sea state bias, etc.), characterizing the errors in each one as either biases, drifts or noise. We account for the time correlation in errors and estimate at each location the temporal variance-covariance matrix of the uncertainty in local sea level using all these contributions. The resulting variance-covariance matrices are used to estimate the uncertainty metrics associated with local sea level changes (e.g. uncertainty in local sea level or in local sea level trends) using an extended least squares estimator. We thus estimated confidence intervals on sea level trends for 1149 portions of tracks distributed globally near the coast. To characterize the uncertainty in each altimetry correction we apply two different methods: (1) when we have several estimates of a given correction (e.g. 
from several tide models), we approximate the error of that correction as the standard deviation of the differences between these estimates, and (2) otherwise, we estimate the standard deviation of the difference with neighboring points. Because we use high-resolution altimetry data, not all points are always available and time series can present gaps. For adequate processing, we fill these missing values with several strategies and analyze their impact on the calculated uncertainties. Finally, we also present an error budget of the coastal sea level anomaly as a whole and the uncertainty contribution of each correction at the global scale, showing particular examples. Understanding the contribution of each source of error will ultimately help reduce uncertainties.
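The error model described above, each correction contributing a bias (fully time-correlated), a drift, and/or white noise, maps directly onto a temporal variance-covariance matrix. A minimal sketch of that construction, with illustrative sigma values (a bias contributes sigma_b² everywhere, a drift contributes sigma_d²·t_i·t_j, and noise only populates the diagonal):

```python
def error_covariance(times, sigma_bias=0.0, sigma_drift=0.0, sigma_noise=0.0):
    """Temporal variance-covariance matrix (nested lists) for one altimetric
    correction whose error is modelled as bias + linear drift + white noise.
    `times` in years; sigmas in consistent sea-level units."""
    n = len(times)
    cov = [[0.0] * n for _ in range(n)]
    for i, ti in enumerate(times):
        for j, tj in enumerate(times):
            cov[i][j] = sigma_bias ** 2 + (sigma_drift ** 2) * ti * tj
            if i == j:
                cov[i][j] += sigma_noise ** 2
    return cov

def add(c1, c2):
    """Total covariance: sum over independent error sources."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(c1, c2)]

# Illustrative sigma values only -- the study estimates these per correction.
times = [0.0, 1.0, 2.0, 3.0]
wet_tropo = error_covariance(times, sigma_bias=1.0, sigma_drift=0.3)
tides = error_covariance(times, sigma_noise=2.0)
total = add(wet_tropo, tides)
```

Summing such matrices over all corrections gives the per-location covariance that then feeds the extended least squares trend estimator mentioned in the abstract.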
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Sea level variations at the world coastlines over the past two decades from reprocessed satellite altimetry

Authors: Lancelot Leclercq, Anny Cazenave, Fabien Léger, Florence Birol, Fernando Nino, Jean-François Legeais, Dr Sarah Connors
Affiliations: Université de Toulouse, LEGOS (CNES/CNRS/IRD/UT3), CLS, ESA
In the context of the ESA Climate Change Initiative Sea Level project, we performed a complete reprocessing of high-resolution (20 Hz, i.e., 350 m) along-track altimetry data of the Jason-1, Jason-2 and Jason-3 missions from January 2002 to June 2021 in the world's coastal zones. This reprocessing provides along-track sea level time series and associated trends from the coast to 50 km offshore over the study period. We call the closest along-track point to the coast a ‘virtual coastal station’. This creates a new network of 1160 virtual sites well distributed along the world's coastlines. We performed Empirical Orthogonal Function (EOF) analyses of the sea level time series at the virtual stations, globally and regionally, in order to: (1) identify the main drivers of coastal sea level variability at interannual time scales, and (2) assess the along-coast coherence of the sea level response to the dominant drivers. The results highlight those coastlines where the first EOF mode reveals a dominant long-term coastal sea level rise. They also help identify other regions where coastal sea level is dominated by interannual variations, highly correlated with natural climate modes. This analysis allows us to clearly separate portions of the world's coastlines displaying different sea level behaviors. In regions where no tide gauge data are available (a large portion of the southern hemisphere), our results provide new information on present-day sea level changes at the coast, hopefully useful for coastal adaptation.
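An EOF analysis of station time series amounts to an eigendecomposition of the spatial covariance of the anomalies: the leading eigenvector is the dominant spatial pattern and its projection onto the data is the associated time series. A toy sketch (power iteration on a tiny synthetic network; real analyses use SVD on the full anomaly matrix):

```python
def anomalies(field):
    """Remove the time mean per station; field[t][s] = sea level at time t, station s."""
    n_t, n_s = len(field), len(field[0])
    means = [sum(field[t][s] for t in range(n_t)) / n_t for s in range(n_s)]
    return [[field[t][s] - means[s] for s in range(n_s)] for t in range(n_t)]

def first_eof(field, iters=200):
    """Leading EOF (spatial pattern, unit norm) and its principal-component
    time series, via power iteration on the spatial covariance matrix."""
    a = anomalies(field)
    n_t, n_s = len(a), len(a[0])
    cov = [[sum(a[t][i] * a[t][j] for t in range(n_t)) / n_t
            for j in range(n_s)] for i in range(n_s)]
    v = [1.0] * n_s
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n_s)) for i in range(n_s)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    pcs = [sum(a[t][s] * v[s] for s in range(n_s)) for t in range(n_t)]
    return v, pcs

# Two synthetic "virtual stations" sharing one signal with amplitudes 1 and 0.5:
# the leading EOF recovers the amplitude pattern, the PC recovers the signal.
signal = [1.0, -1.0, 2.0, -2.0, 0.0]
field = [[s * 1.0, s * 0.5] for s in signal]
pattern, pcs = first_eof(field)
```

On the real network, coastlines whose first-mode pattern is spatially uniform and whose PC is trend-like are the ones flagged above as dominated by long-term rise.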
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Improvements in Estimating Mean Sea Level Trends and Acceleration from Global to Regional Scales

Authors: Anna Mangilli, Pierre Prandi, Victor Quet, Sylvie Labroue, Gerald Dibarboure, Thomas Moreau, Sarah Connors
Affiliations: CLS, CNES, ESA ECSAT
The accurate measurement of the mean sea level (MSL) and the precise estimation of the MSL trend and acceleration, at global and regional scales, are key goals of high-precision satellite altimetry. These estimates are crucial for tackling important scientific questions, including the closure of the sea level budget and the assessment of the Earth Energy Imbalance (EEI) in the context of climate change. Great efforts have been made over the last decade to better characterise and understand the uncertainties associated with MSL measurements from radar altimetry, leading to the design of an error variance-covariance matrix describing the temporal correlations of the MSL uncertainty at global (Ablain et al. 2009, Ablain et al. 2019, Guerou et al. 2023) and local (Prandi et al. 2021) scales. Precisely quantifying the observational sea level uncertainties is important because uncertainties inform on the reliability of sea level observations and prevent misinterpretation of artifacts arising from the limitations of the observing system. Following these efforts, the 28-year MSL trend and acceleration uncertainties ([5%-95%] CL) are now down to ±0.3 mm/yr and ±0.05 mm/yr² at global scales (Guerou et al. 2023) and, on average, ±0.83 mm/yr and ±0.062 mm/yr², respectively, at local scales (Prandi et al. 2021). Yet further improvements, at both global and regional scales, are still required to address three main scientific questions: 1) the closure of the sea level budget, 2) the detection and attribution of the signal in sea level that is forced by greenhouse gas (GHG) emissions, and 3) the estimation of the current EEI (Meyssignac et al. 2023). Meeting such requirements needs further improvements in the accuracy and precision of satellite altimetry data, in the error description and in the data analysis. 
In this talk we will focus on the statistical analysis, showing that a significant improvement in the estimate of the MSL trend and acceleration uncertainties at global scales, of the order of ~15% and ~20% respectively, can be gained from an optimal Generalised Least Squares (GLS) estimator, or from a Bayesian analysis, both of which optimally include the covariance matrix in the likelihood function. In particular, we will present the updated optimal analysis (with the GLS and Bayesian approaches) of the MSL time series at global scales from the recently released L2P DT 24 products, discussing the impact on the estimation of the MSL trend and acceleration. The talk will then focus on how these methods can be applied to the MSL analysis at regional scales, showing an improvement of the estimation of the MSL trend and acceleration at local scales of the order of 5%, according to the current description of the error covariance at regional scales. Finally, we will discuss the perspectives, highlighting the benefits of the Bayesian approach for future MSL analyses and the improvements of the MSL error description at local scales when adding the spatial covariance to the error budget. The results presented are obtained within the ESA cci_SL project.
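For readers unfamiliar with the approach, the core of a GLS trend-and-acceleration fit that folds the error variance-covariance matrix into the estimation can be sketched as follows (a minimal illustration under standard assumptions, not the project's actual code; the quadratic design matrix and the 1.645 factor for a [5%-95%] interval are conventional choices):

```python
import numpy as np

def gls_trend_accel(t, y, cov):
    """Generalised least squares fit of y(t) = a + b*t + 0.5*c*t**2.

    t: times (years); y: MSL anomalies; cov: error variance-covariance
    matrix of y. Returns (a, b, c) and their ~[5%-95%] CL half-widths."""
    X = np.column_stack([np.ones_like(t), t, 0.5 * t**2])  # bias, trend, acceleration
    ci = np.linalg.inv(cov)
    A = X.T @ ci @ X                                       # covariance-weighted normal matrix
    beta = np.linalg.solve(A, X.T @ ci @ y)
    half_width = 1.645 * np.sqrt(np.diag(np.linalg.inv(A)))
    return beta, half_width
```

Passing the identity matrix as `cov` reduces this to ordinary least squares; the gain reported in the abstract comes from supplying the full, temporally correlated error covariance instead.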

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Open-Ocean Contribution to Sea-Level Variations over the Norwegian Continental Shelf

Authors: Fabio Mangini, Dr. Antonio Bonaduce, Dr. Léon Chafik, Dr. Roshin Pappukutty Raj
Affiliations: Nansen Environmental And Remote Sensing Center, Department of Meteorology, Stockholm University
At the Living Planet Symposium, we would like to present the results of an ongoing project which investigates the impact of density fluctuations in the North-East Atlantic Ocean on sea-level variations over the Norwegian continental shelf. The project has two main objectives: increasing our understanding of ocean dynamics, and offering insights into the reliability of climate models. The presentation will mostly focus on the first objective, as the analysis of climate models is still under development. The project aims to identify the patterns of density variations in the open ocean that most significantly correlate with sea-level variations over the Norwegian continental shelf. A preliminary result suggests a role for the upper ocean. Specifically, density variations over the upper 200 m of the North-East Atlantic show a statistically significant correlation with the sea-level variability over the Norwegian shelf on both intra-annual and inter-annual timescales (after the contribution of local winds has been removed). This result aligns with the existing literature, which links density variations over the eastern margin of the North Atlantic Ocean to northern European sea-level variations. However, compared to the existing literature, we provide additional information on the depth range over which density variations most affect Norwegian sea-level variability. Furthermore, when compared to previous works, our finding is based on more recent observational datasets. Indeed, together with the Norwegian tide gauges and hydrographic stations, we use the ALES-reprocessed coastal satellite altimetry dataset to estimate Norwegian sea-level variations, and the GRACE and GRACE-FO satellite gravimetry missions to estimate the mass component of Norwegian sea-level variations. The project also uses two different products of ocean temperature and salinity to determine whether different spatial resolutions can impact the results.
Specifically, the analysis is performed using EN4, which has a spatial resolution of 1°x1°, and ARMOR3D, which has a spatial resolution of 0.25°x0.25°. Both datasets return comparable results.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: How is the global and regional sea level budget closed from the latest observations?

Authors: Marie Bouih, Anne Barnoud, Robin Fraudeau, Ramiro Ferrari, Michaël Ablain, Julia Pfeffer, Dr Anny Cazenave, Benoît Meyssignac, Alejandro Blazquez, Sébastien Fourest, Hugo Lecomte, Lancelot Leclercq, Martin Horwath, Thorben Döhne, Jonathan Bamber, Anrijs Abele, Dr. Antonio Bonaduce, Raj Roshin, Stéphanie Leroux, Nicolas Kolodziejczyk, William Llovel, Giorgio Spada, Andrea Storto, Chunxue Yang
Affiliations: Magellium, 2LEGOS, Université de Toulouse, CNES, CNRS, UPS, IRD, TUD Dresden University of Technology, University of Bristol, NERSC, DATLAS, UBO-LOPS, CNRS/LOPS, UNIBO, CNR-ISMAR
The closure of the Sea Level Budget (SLB) is a key challenge for modern physical oceanography. First, it is essential to ensure, through this closure, the proper identification and quantification of each significant contributor to sea level change. Second, it provides an efficient means to closely monitor and cross-validate the performance of intricate global observation systems, such as the satellite altimetry constellation, the satellite gravimetry missions (GRACE/GRACE-FO), and the Argo in-situ network. Third, this closure proves to be a beneficial approach for assessing how well the observed climate variables, such as sea level, barystatic sea level, temperature and salinity, land ice melt, and changes in land water storage, comply with conservation laws, in particular those related to mass and energy. In this presentation, we will discuss the state of knowledge of the global mean and regional sea level budgets with up-to-date observations, encompassing 1) an up-to-date assessment of the budget components and residuals, along with their corresponding uncertainties, spanning from 1993 to 2023 for the global mean and throughout the GRACE and Argo era for spatial variations; 2) the identification of the periods and areas where the budget is not closed, i.e. where the residuals are significant; 3) advancements in the analysis and understanding of the spatial patterns of the budget residuals. A focus will be made on the North Atlantic Ocean, where the residuals are significantly high. We investigate the potential errors causing non-closure in each of the components (e.g., in situ data sampling for the thermosteric component, geocenter correction in the gravimetric data processing) as well as potential inconsistencies in their processing that may impact large-scale patterns (e.g., centre of reference and atmosphere corrections).
Errors linked to the system observability (due to different sampling and resolution of the various observations) will be quantified with synthetic data extracted from ocean simulations. This work is performed within the framework of the Sea Level Budget Closure Climate Change Initiative (SLBC_cci+) programme of the European Space Agency (https://climate.esa.int/en/projects/sea-level-budget-closure/). This project was initiated by the International Space Science Institute Workshop on Integrative Study of Sea Level Budget (https://www.issibern.ch/workshops/sealevelbudget/).

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Explaining the Global Sea Level Budget Since 1992 From Altimetry, GRACE and Independent Dataset and Models

Authors: Carsten Ludwigsen, Ole Andersen
Affiliations: DTU Space
Global sea level change is a clear indicator of climate change, and any misalignment in the observational system, such as sea level budget misclosure, raises concerns about either incomplete understanding or observational errors (e.g., salinity drift in ARGO floats or wet path delay in Jason-3 radiometers). Addressing these discrepancies is essential for accurately assessing sea level rise and its impacts. Building on Ludwigsen et al. (2024), which validated GRACE data using independent land surface mass change estimates, this study extends the analysis to the full 32-year satellite altimetry record. We investigate the transient (seasonal to decadal) and long-term drivers and acceleration of global sea level change. Our results show strong agreement between ocean mass reconstructions and GRACE/GRACE-Follow On (GRACE/FO) data until 2020. However, post-2020 discrepancies emerge, with reconstructions indicating a more pronounced increase in ocean mass than observed by GRACE/FO. This divergence is attributed to underestimation of precipitation over Western Africa in ERA5 reanalysis data, which impacts hydrological models and terrestrial water storage estimates. These findings affirm the globally observed ocean mass changes derived from GRACE-FO. Discrepancies between GRACE data and steric-corrected altimetry before 2017 are amplified by salinity drift in the ARGO floats and wet path delay errors in Jason-3. However, some residual misclosures remain unexplained and warrant further investigation. This study demonstrates the value of integrating GRACE land mass data, steric-corrected altimetry, and independent reconstructions to identify gaps in current monitoring systems. Addressing these gaps is critical for improving the accuracy of sea level budgets, enhancing our understanding of regional variability, and resolving the drivers of accelerating sea level trends observed over the past three decades.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: 20-Year-Long Sea Level Changes Along The World’s Coastlines From Satellite Altimetry: The New ESA CCI Dataset Of Coastal Virtual Stations

Authors: Jean-François Legeais, Dr Anny Cazenave, Lancelot Leclercq, Fabien Léger, Dr Florence Birol, Fernando Niño, Dr Marcello Passaro, PhD Sarah Connors
Affiliations: CLS, CNRS-LEGOS-CTOH, Technical University of Munich, ESA/ECSAT
In the context of the ESA Climate Change Initiative (CCI) Coastal Sea Level project, a complete reprocessing (including retracking of the radar waveforms) of high-resolution (20 Hz, i.e. ~350 m) along-track altimetry data of the Jason-1, Jason-2 and Jason-3 missions since January 2002 was performed along the world's coastal zones. The latest release (v2.4) of this SL_cci coastal altimeter sea level dataset covers the period January 2002 to June 2021 and is now available to users (https://doi.org/10.17882/74354). A new, improved processing of the waveform retracking and computation of the coastal sea level anomalies was developed, and a new editing procedure for the coastal sea level trend computation was implemented. We now obtain a dataset of more than 1100 coastal virtual stations (i.e., the location of the first valid point from the coast along the satellite track) at an average distance from the coast of about 3 km, including more than 200 stations at less than 2 km from the coast. These coastal sea level anomalies and trends of the altimetry-based virtual stations have been validated against tide gauge data where possible. The project also focuses on the estimation of improved sea level uncertainties at regional and local scales. This dataset provides valuable information where there are no other sea level measurements and makes it possible to fill gaps in the historical time series of some nearby tide gauges. It can be used to analyse coastal sea level variability (we show an example in the Mississippi river delta) and to determine the dominant forcing factors of this variability, both at local scale and along the world's coastlines. Future versions of these coastal virtual stations are planned, with extended temporal coverage (with Sentinel-6A MF), improved altimeter processing and characterization of the associated uncertainties. 
We are also preparing the production of a dataset of sea level time series relative to ground motion, to better address the needs of coastal adaptation policies.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: A Multiplatform Approach to Explore Sentinel-6 LRM and SAR Measurements at Different Temporal and Spatial Scales

Authors: Mathilde Cancet, Florence Birol, Claude Estournel, Fabien Léger, Rosemary Morrow
Affiliations: CNRS-LEGOS, Université de Toulouse
Monitoring and predicting sea level changes are critical issues for coastal populations and ecosystems, and the 30-year record of satellite radar altimetry observations is an outstanding tool for understanding ocean processes and changes. From 1992 (Topex/Poseidon) to 2020 (Jason-3), the reference satellite nadir altimetry missions used to estimate sea level changes were operated with conventional altimeters in Low Resolution Mode (LRM), i.e. measuring sea surface height estimates averaged within a circular footprint of about 10 to 15 km in diameter. This altimetry technique is known to encounter issues in coastal environments, due to land contamination in the radar signal. The SAR (Synthetic Aperture Radar) or delay-Doppler altimetry technique, first operated on CryoSat-2 since 2010 and then on Sentinel-3 since 2016, provides sea surface height measurements with much higher resolution along the track (about 300 m), and still about 15 km across the track. With the SAR technique, the along-track noise is almost halved compared to the LRM technique, and this approach enables new insights into meso-scale dynamics, coastal circulation and ocean changes. Sentinel-6, the current reference mission that took over from Jason-3 on the reference orbit in 2020, provides the unique opportunity to directly compare both modes, as it is operated in interleaved LRM and SAR modes. Building a long time series of reference missions poses the question of continuity between LRM and SAR modes, as the SAR technique may provide different information than LRM, depending on the temporal and spatial scales. Having sea surface height estimates measured in both modes almost simultaneously is also an opportunity to better understand the content of LRM observations in past missions and possibly better separate the noise and the signals in the 30-year archive. 
In this study, we explore Sentinel-6 LRM and SAR sea surface height measurements in the North-Western Mediterranean Sea, at different temporal (from one pass to seasonal) and spatial scales. We take advantage of the wealth of multiplatform data available in the region, such as in situ observations, model simulations and the new SWOT altimetry mission, which provides 2D sea surface height images, to analyse specific events and better understand the physical content and the limitations of the SAR and LRM nadir altimetry observations, in order to build a consistent long-term record in coastal regions.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Impact of Using FES2022b Tidal Model for Climate Scales

Authors: Loren Carrere, Perrine Abjean, Florent Lyard, Gerald Dibarboure
Affiliations: CLS, LEGOS/CNRS, CNES
The accuracy of altimeter measurements has improved greatly over the last 30 years thanks to improvements in the instrumental, environmental and geophysical parameters, leading to an unprecedented accuracy of the data. In particular, a new global tidal model, FES2022b, has been produced (https://www.aviso.altimetry.fr/en/data/products/auxiliary-products/global-tide-fes.html), taking advantage of longer and more accurate altimeter time series, new missions, an improved global bathymetry and a refined mesh. These accurate data and models allow investigating applications such as climate variability with more accuracy. The present analysis focuses on the impact of recent global tidal models at climate scales. Historically, some mean sea level (MSL) residual error has been visible at a 58.77-day period, which corresponds to the S2 semi-diurnal tide aliased by the Topex/Jason sampling and to instrument errors detected on the altimeter (beta-prime variability). Recent insights into tidal models have also detected the existence of long-term tendencies in the amplitude of some of the main tidal waves (Ray 2024). The impact of the tidal model on the global and regional trends of the mean sea level and on the long-term variability, such as annual and semi-annual signals, has been analysed here. Comparisons are made using three different models: FES2022b, FES2014 and GOT4.10. The study focuses on the reference missions generally used to estimate the global mean sea level trends and accelerations (cf. https://www.aviso.altimetry.fr/en/data/products/ocean-indicators-products/mean-sea-level.html): Topex, Jason-1, Jason-2, Jason-3 and Sentinel-6MF. Results indicate a weak impact of the model choice on the global MSL trend and a stronger impact on regional MSL trends. Moreover, the FES2022b and FES2014 tidal models consistently reduce the residual long-term variability of the ocean at annual and semi-annual periods. As already stated in other studies (Zawadzki et al. 2016), the tidal model has an impact on the residual signal at the 58.77-day period. FES2014 and FES2022b allow a low and consistent 58.77-day error between T/P, Jason-1, Jason-2, Jason-3 and Sentinel-6MF global MSL. FES2022b reduces the 58.77-day errors in the Topex/Poseidon global MSL but raises them slightly in the other missions studied. Locally, the 58.77-day errors are equivalent between FES2022b and FES2014.
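The ~58.8-day signal discussed above is the classical alias of the S2 tide under the Topex/Jason sampling, and the aliased period can be checked in a few lines (a generic aliasing computation, assuming the usual ~9.9156-day repeat period; small differences against the quoted 58.77 days trace back to the exact repeat period assumed):

```python
def alias_period(tide_period_days, repeat_days):
    """Alias period of a tidal constituent sampled once per orbit repeat."""
    f = 1.0 / tide_period_days        # tide frequency, cycles/day
    fs = 1.0 / repeat_days            # sampling frequency, cycles/day
    n = round(f / fs)                 # nearest integer number of cycles per sample
    return 1.0 / abs(f - n * fs)      # period of the residual (aliased) frequency

# S2 (12.00 h period) sampled at the ~9.9156-day Topex/Jason repeat
print(alias_period(0.5, 9.9156))      # ≈ 58.7 days
```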

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Understanding uncertainties in the satellite altimeter measurement of coastal sea level: insights from a round robin analysis

Authors: Florence Birol, François Bignalet-Cazalet, Mathilde Cancet, Jean-Alexis Daguze, Wassim Fkaier, Ergane Fouchet, Fabien Léger, Claire Maraldi, Fernando Niño, Marie-Isabelle Pujol, Ngan Tran
Affiliations: CNRS - LEGOS, Université de Toulouse, Collecte Localisation Satellites, Noveltis, Centre National d'Etudes Spatiales (CNES), Noveltis
The satellite radar altimetry record of sea level has now surpassed 30 years in length. These observations have greatly improved our knowledge of the open ocean and are now an essential component of many operational marine systems and climate studies. But the use of altimetry close to the coast remains a challenge from both a technical and a scientific point of view. Here, we take advantage of the recent availability of many new algorithms developed for altimetry sea level computation to quantify and analyse the uncertainties associated with the choice of algorithms when approaching the coast. To achieve this objective, we performed a round robin analysis of radar altimetry data, testing a total of 21 solutions for waveform retracking, correcting sea surface heights and finally deriving sea level variations. Uncertainties associated with each of the components used to calculate the altimeter sea surface heights are estimated by measuring the dispersion of sea level values obtained using the various algorithms considered in the round robin for that component. We intercompare these uncertainty estimates and analyse how they evolve as we go from the open ocean to the coast. At regional scale, complementary analyses are performed through comparisons with independent tide gauge observations. The results show that tidal corrections and the mean sea surface can be significant contributors to sea level data uncertainties in many coastal regions. However, improving the quality and robustness of the retracking algorithm used to derive both the range and the sea state bias correction is today the main factor in bringing accurate altimetry sea level data closer to the shore than ever before. Full details of this work can be found in the article "Understanding uncertainties in coastal sea level altimetry data: insights from a round robin analysis" (Birol et al., Ocean Science, 2024 - https://doi.org/10.5194/egusphere-2024-2449, 2024).

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: A.08.03 - POSTER - Ocean Salinity

Ocean salinity is a key variable within the Earth’s water cycle and a key driver of ocean dynamics. Sea surface salinity (SSS) has been identified as an Essential Climate Variable by the Global Climate Observing System (GCOS) and as an Essential Ocean Variable by the Global Ocean Observing System (GOOS). Through the advent of new observing technologies for salinity and the efforts to synthesize salinity measurements with other observations and numerical models, salinity science and applications have advanced significantly over recent years.
This session will foster scientific exchanges and collaborations in the broad community involved in ocean salinity science and applications, widely encompassing satellite salinity (e.g., SMOS and SMAP) data assessment and evolution, multi-mission merged product generation (e.g., CCI-Salinity), exploitation of in-situ assets for calibration and validation and related platforms (e.g., Salinity PI-MEP), and ultimately broad salinity-driven oceanographic/climatic applications and process studies.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Advancing the Understanding of Salinity Dynamics in the Baltic Sea Through Integrated Satellite, In Situ, and Numerical Modeling Approaches

Authors: Rafael Catany, Dr Andreas Lehmann, Dr Lara Schmittmann, Dr Hela Mehrtens, Professor Miroslaw Darecki, Dr Anna Bulczak, Dr Jaromir Jakacki, Dr Maciej Muzyka, Dr Daniel Rak, Dr Dawid Dybowski, Professor Lidia Dzierzbicka-Glowacka, A Nowicki, Dr Joanna Ston-Egiert, Dr M Ostrowska, PD Dr-Ing. habil Luciana Fenoglio, Dr Jiaming Chen, Dr Artu Ellman, Dr Nicole Delpeche-Ellmann, Quentin Jutard, Marine Bretagnon, Phillippe Bryère, Dr Laurent Bertino, Dr Raphaël Sauzède, Natascha Mohammadi, Giovanni Corato, Dr Roberto Sabia
Affiliations: Albavalor, GEOMAR, IOPAN, Bonn University, TalTech, ACRI-ST, NERSC, LOV, AdwaisEO, ESA ESRIN
The Baltic Sea is a semi-enclosed shelf sea with distinct geographical and oceanographic features. One of the Baltic's most notable characteristics is its horizontal surface salinity gradient, which decreases from the saline North Sea to the near-fresh Bothnian Sea in the north and the Gulf of Finland in the east. Additionally, a vertical gradient and strong stratification separate less saline surface water and deep saline water. These salinity features are mainly driven by river runoff, net precipitation, wind conditions, and geographic factors that lead to restricted and irregular saltwater inflow into the Baltic and limited mixing. The overall positive freshwater balance causes the Baltic to be much fresher compared to fully marine ocean waters, with a mean salinity of only about 7 g/kg. The Baltic Sea is particularly sensitive to climate change and global warming due to its small volume and limited exchange with the world oceans. Consequently, it is changing more rapidly than other regions. Recent changes in salinity are less clear due to high variability, but overall surface salinity decreases with a simultaneous increase in the deeper water layers. Furthermore, the overall salinity distribution is indirectly linked to the general circulation of the Baltic Sea, which consists of cyclonic circulation cells that comprise the main basins. Thus, improving the understanding of the salinity dynamics leads to a better understanding of the circulation in the Baltic Sea. The project 4DBALTDYN (May 2024 to May 2026) will build upon and enhance the previous Baltic+ Salinity SSS (Sea Surface Salinity, 2011-2019) dataset. By integrating highly spatially resolved SMOS satellite SSS data with in situ observational data and numerical modelling, this project aims to improve our understanding of the Baltic Sea's salinity dynamics. The SMOS SSS data provide continuous monitoring of the evolution of the surface salinity of the entire area of the Baltic Sea.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Mechanisms of tropical sea surface salinity variations at seasonal timescales

Authors: Antoine Hochet, Soumaïa Tajouri, Nicolas Kolodziejczyk, William Llovel
Affiliations: University of Brest, Cnrs, IRD, Ifremer, UBO LOPS
Coupled climate models typically overestimate the amplitude of the seasonal cycle of sea surface salinity (SSS) in the tropics. A better understanding of the mechanisms controlling the seasonal variance of SSS could provide directions for improving the representation of the SSS seasonal cycle amplitude in these models. In this work, we use a novel framework based on a seasonal Salinity Variance Budget (SVB), which we apply to the Estimating the Circulation and Climate of the Ocean (ECCO) state estimate, to study the mechanisms controlling the variance of seasonal SSS in the tropical oceans. Our findings reveal that oceanic advection, vertical diffusion, and freshwater fluxes from rivers and precipitation all play an important role in controlling the amplitude of the seasonal cycle, but their impact varies regionally. The SVB framework effectively distinguishes between "sources" (mechanisms that enhance variance) and "sinks" (mechanisms that dampen variance). We show that vertical diffusion acts as the primary sink across most regions, except for the eastern Arabian Sea, where precipitation dominates as the main sink. In other regions of the tropical oceans, precipitation and river runoff act as sources of variance. The effect of the advective term on the SSS variance is shown to be mainly the sum of two terms: first, a term associated with the spatial redistribution of the variability by the eddy-parametrized oceanic circulation; secondly, a term associated with a transfer of salinity variance between the time-mean and seasonal circulations.
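The source/sink diagnostic at the heart of an SVB framework can be illustrated schematically: for each process tendency in the salinity equation, its contribution to the variance budget d/dt⟨S'²/2⟩ is its covariance with the salinity anomaly S'. A toy sketch of that projection (our illustration of the general idea, not the authors' ECCO diagnostics):

```python
import numpy as np

def svb_terms(s_anom, tendencies):
    """Project each process tendency onto the salinity anomaly S'.

    s_anom: seasonal salinity anomaly time series; tendencies: dict of
    tendency time series. A positive covariance <S' T'> marks the process
    as a variance source, a negative one as a sink."""
    return {name: float(np.mean(s_anom * (T - T.mean())))
            for name, T in tendencies.items()}
```

In this toy form, a tendency that reinforces the anomaly (same sign as S') comes out positive (source), while one opposing it (e.g. diffusive damping) comes out negative (sink).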

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Towards Physically Consistent Copernicus Imaging Microwave Radiometer Level 2 Products for the Global Ocean and Atmosphere

Authors: Robin Ekelund, Christophe Accadia
Affiliations: EUMETSAT
The Copernicus Imaging Microwave Radiometer (CIMR) is an upcoming European satellite mission within the Copernicus Sentinel Expansion programme, specifically designed to support European integrated policy through augmented monitoring of global warming and Arctic amplification. To this end, CIMR will by design monitor several essential climate variables at the poles, over land and over the global ocean through high spatial resolution, low-frequency passive microwave satellite measurements. Developed by the European Space Agency, the mission will consist of at least two satellites, with the first launch planned in 2029 and the second seven years later. CIMR will fly in a sun-synchronous orbit and observe using a 360-degree conical scanning viewing geometry with a minimum 1900 km swath width, allowing sub-daily no-hole coverage of the poles and dual observations at each location (forward and aft views). It will measure the full polarisation at L, C, X, K and Ka band (central frequencies at 1.4135, 6.925, 10.65, 18.7 and 36.5 GHz, respectively) with stringent requirements upon noise equivalent difference temperatures (NEdT). Footprint sizes range from 60 km down to 5 km, from the lowest to the highest channel frequency. On-board hardware will be used to mitigate radio frequency interference, an ever-increasing issue for satellite-borne microwave sensors. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) is tasked with the development, generation and distribution of L2 surface and atmospheric variables covering the global ocean. These products are the sea surface salinity (SSS), sea surface temperature (SST), ocean wind vector (OWV), total column water vapour (TCWV), liquid water path (LWP) and liquid precipitation (PCP). They are essential for understanding the influence of global warming and Arctic amplification upon the hydrological cycle, boundary layer interaction and ocean dynamics.
The sensor concept is uniquely well suited to the retrieval of SSS, SST and the OWV. L-band is a requirement for salinity remote sensing, as sensitivity is limited to this frequency band. CIMR will in this respect provide continuity and improvement with respect to the current L-band missions, i.e. ESA SMOS and NASA SMAP. Detection of the full polarisation allows the retrieval of wind direction and correction of the Faraday rotation of radiation in the ionosphere without auxiliary total electron content data. While CIMR lacks a dedicated water vapour channel, it will have sensitivity to TCWV, LWP and PCP, albeit reduced compared to missions dedicated to the atmosphere such as the EUMETSAT Polar System - Second Generation (EPS-SG) Microwave Imager (MWI). Together, however, these variables form a comprehensive L2 product portfolio for boundary layer monitoring. The development of the global ocean and atmosphere L2 product portfolio is currently in its early phase at EUMETSAT. The selected retrieval algorithm is a physically based optimal estimation algorithm, which will ensure physical consistency between the retrieved product variables. The products will be distributed on high-, medium- and low-resolution grids depending upon the channels that the variable in question primarily relies upon. The SST product will comply with the development and distribution standards set up by the Group for High Resolution Sea Surface Temperature (GHRSST) international science group. Validation activities consider cross-validation with the upcoming Metop-SGB satellite, which will carry the Scatterometer (SCA) and MWI instruments for retrieval of winds, humidity and clouds. The CIMR orbit is synchronised with that of Metop-SGB to provide collocated observations within a 10-minute difference at the poles. Surface parameters will be validated using the networks of Argo profiling floats and Towards fiducial Reference Measurements of Sea-Surface Temperature by European Drifters (TRUSTED).
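In its linear form, a physically based optimal estimation retrieval of the kind mentioned above combines the observations with prior knowledge by minimising a two-term cost function. A generic sketch (standard optimal-estimation formulas; the state vector, Jacobian K and covariances below are illustrative placeholders, not the operational CIMR configuration):

```python
import numpy as np

def oe_linear(y, K, x_a, S_a, S_e):
    """Linear optimal estimation (maximum a posteriori) retrieval.

    Minimises (y - K x)' Se^-1 (y - K x) + (x - xa)' Sa^-1 (x - xa),
    where y are brightness temperatures, x the geophysical state,
    x_a/S_a the prior, S_e the observation error covariance."""
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))  # posterior covariance
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)            # retrieved state
    return x_hat, S_hat
```

Because all variables are retrieved jointly from one state vector, the posterior covariance also encodes the cross-variable consistency the abstract refers to.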

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: New regional SSS fields developed at CATDS CEC-OS

Authors: Jacqueline Boutin, Dr Jean-Luc Vergely, Dr Gilles Reverdin, Dr Léa Olivier, Dr Stéphane Tarot
Affiliations: Cnrs/locean, ACRI-st, IFREMER
The Ocean Salinity Center of Expertise for CATDS (CATDS CEC-OS) works at improving methodologies to be implemented in the future in the near real time CATDS processing chain (CATDS-CPDC). The primary goal is to generate global level 3 SMOS SSS fields, the latest being CATDS CEC-LOCEAN Debiased V9. However, the methodology used to derive these global fields does not always fit very well with specific regional needs. This lead us to derive two specific regional fields: -High temporal resolution fields in rapidly variable areas, such as river plumes -A specific processing in the Arctic Ocean Global SSS products currently distributed by the CATDS CEC-OS are smoothed over 9 days or 18 days and sampled every 4 days. In highly variable regions, like in river plumes, variations in surface salinity are expected on much smaller time scales. Knowledge of these regional variations is of interest for studying processes related to the fate of freshwater and associated biogeochemistry [e.g. Olivier et al., 2024]. Hence, we have developed a new temporal interpolation scheme that intends to keep as much as possible high temporal frequency SSS variability (over typically one day or two). Temporal resolution of satellite SSS products is limited by revisit times. The combination of ascending and descending SMOS passes allow revisit times of the order of 1.5 days. The addition of SMAP data makes it possible to consolidate this information. These fields are provided over 8 regions from 2010 to 2021. Over the Arctic Ocean, the methodology originally derived by Supply et al. [2020] and implemented in SMOS ARCTIC SSS V1 maps has been revisited. 
9-day and 18-day maps are provided over the June 2010 to August 2023 period.a temporal optimal interpolation with a bias removal depending on the SMOS observation geometry (see a general description in [Boutin et al., 2018]) has been added, Comparisons with independent in situ datasets, conducted in CEC LOCEAN and in PIMEP, indicate a clear improvement (reduction of std difference by ~a factor 2 and systematic increase of r2); with V2.0 r2 is greater than 0.8 with 40% of the data sets considered at PIMEP. As in version 1, SSS maps are provided on an Equal-Scalable Earth Grid 2 (EASE 2) with a Northern Hemisphere Azimuthal projection and a resolution of 25km. These results lead to the development of a new operational chain for deriving a SMOS SSS Arctic product operationally. Reference datasets : Boutin J. and Vergely J.-L. (2024). SMOS ARCTIC SSS L3 V2 maps produced by CATDS CEC LOCEAN. SEANOE. doi:10.17882/98769 Boutin J., Vergely J.L, Olivier L., Reverdin G., Perrot X., Thouvenin-Masson C. (2022). SMOS SMAP High Resolution SSS maps in regions of high variability, generated by CATDS CEC. SEANOE. https://doi.org/10.17882/90082. Boutin J., Vergely J.-L., Khvorostyanov D. (2024). SMOS SSS L3 maps generated by CATDS CEC LOCEAN. debias V9.0. SEANOE. https://doi.org/10.17882/52804#109630. Bibliography : Boutin, J., J. L. Vergely, S. Marchand, F. D'Amico, A. Hasson, N. Kolodziejczyk, N. Reul, G. Reverdin, and J. Vialard (2018), New SMOS Sea Surface Salinity with reduced systematic errors and improved variability, Remote Sensing of Environment, 214, 115-134, doi:https://doi.org/10.1016/j.rse.2018.05.022. Olivier, L. et al. (2024), Late summer northwestward Amazon plume pathway under the action of the North Brazil Current rings, Remote Sensing of Environment, 2024, vol. 307, doi: https://doi.org/10.1016/j.rse.2024.114165. Supply, A., J. Boutin, J.-L. Vergely, N. Kolodziejczyk, G. Reverdin, N. Reul, and A. 
Tarasenko (2020), New insights into SMOS sea surface salinity retrievals in the Arctic Ocean, Remote Sensing of Environment, 249, 112027, doi:https://doi.org/10.1016/j.rse.2020.112027.
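As a toy illustration of how a temporal interpolation scheme trades smoothing against high-frequency SSS variability, the sketch below applies a Gaussian-weighted temporal average to irregular satellite samples. It is a simplified stand-in for the optimal interpolation used in the CATDS products, not the actual algorithm; the function name and the time scale `tau` are hypothetical.

```python
import math

def gaussian_time_interp(obs, t, tau=1.5):
    """Estimate SSS at time t (days) from (t_obs, sss) samples.

    A simplified stand-in for temporal optimal interpolation: each sample is
    weighted by a Gaussian in time. tau is a hypothetical e-folding time
    scale (days); a small tau retains day-scale variability, a large tau
    smooths it out (cf. the 9-day / 18-day global products).
    """
    weights = [math.exp(-0.5 * ((t_obs - t) / tau) ** 2) for t_obs, _ in obs]
    total = sum(weights)
    # Weighted average of the salinity samples
    return sum(w * sss for w, (_, sss) in zip(weights, obs)) / total
```

With `tau` of the order of the ~1.5-day revisit time achieved by combining ascending and descending passes, the estimate follows short-lived salinity anomalies instead of averaging them away.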

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: D.01.01 - POSTER - Collaborative Innovation: building a Digital Twin of the Earth System through Global and Local Partnerships

The concept of a Digital Twin of the Earth System holds immense potential for revolutionizing our understanding and management of our planet. However, building such a complex and comprehensive system requires a global effort. This session explores the power of collaborative innovation in bringing together diverse stakeholders to create a robust and impactful Digital Twin Earth.

In this session, we invite contributions to discuss the following key topics:

- International Collaborations and Global Initiatives
We seek to highlight major international collaborations, such as ESA's Digital Twin Earth and the European Commission's Destination Earth, which exemplify the collective effort needed to develop these advanced systems. Contributions are welcome from successful international projects that demonstrate the potential for global partnerships to significantly advance the development and application of the Digital Twin Earth.

- Public-Private Partnerships (Industry and Academia Collaborations)
We invite discussions on innovative models for funding and resource allocation within public-private partnerships, which are crucial for sustainable development and effective environmental monitoring. Contributions from tech companies and startups that have been instrumental in developing key technologies for the Digital Twin Earth are especially welcome, showcasing the private sector's vital role in this global initiative.

- Local and Community Engagement
Engaging local communities and fostering grassroots initiatives are essential for the success of the Digital Twin Earth. We invite contributions that discuss the role of citizen scientists in data collection, monitoring, and validation efforts. Examples of training and capacity-building programs that empower local communities and organizations to actively participate in and benefit from these advanced technologies are also sought. Additionally, we welcome examples of successful local collaborations that highlight the positive impact of digital twin technologies on environmental monitoring and resilience.

- Multi-Disciplinary Approaches
Addressing the complex challenges of developing a Digital Twin Earth requires a multi-disciplinary approach. We seek contributions that integrate diverse expertise from climate science, data science, urban planning, and public policy to create comprehensive digital twin models. Discussions on developing standards and protocols for interoperability and effective data sharing among stakeholders are critical for holistic problem-solving and are highly encouraged.

- Policy and Governance Frameworks
We invite contributions that explore policy and governance frameworks supporting the development of policies for sustainable development and climate action. Effective governance structures that facilitate collaboration across different levels of government, industry, and academia are crucial. Additionally, we seek discussions on addressing ethical, privacy, and regulatory considerations to ensure the responsible use of digital twin technologies.

By fostering international collaborations, leveraging public-private partnerships, engaging local communities, integrating diverse expertise, and developing robust policy frameworks, this session aims to collectively advance the development of the Digital Twin Earth. This holistic approach ensures that the Digital Twin Earth is not only a technological marvel but also a collaborative, inclusive, and impactful tool for sustainable development and environmental resilience.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: SNOWCOP - Unlocking the Full Potential of Copernicus Data and Infrastructure to Improve Meltwater Monitoring in the Andes

Authors: Carlo Marin, Riccardo Barella, Valentina Premier, Claudia Notarnicola, Alexander Jacob, James McPhee, Jaime Ortega, María Ignacia Orell, Paloma Palma, Cristóbal Sardá, Jeroen Dries, Patrick Henkel, Markus Lamm, Mariano Masiokas, Lucas Ruiz, Ezequiel Toum, Leandro Cara, Carolina Adler, Pierre Pitte, James Thornton
Affiliations: Eurac Research, University of Chile, Vito Remote Sensing, ANAVS, IANIGLA, MRI
The meltwater contribution from snow and ice in mountainous regions plays a critical role in sustaining life downstream, supporting potable water security, agriculture, industry, hydropower generation and mining, especially in the current regime of climate change. To effectively address this challenge, the Horizon Europe project SNOWCOP, started in October 2024, leverages the full potential of European Union Copernicus data and infrastructure to provide novel snow water equivalent (SWE) and ice melting rate maps with high spatio-temporal resolution, suitable for monitoring meltwater dynamics in complex mountainous terrain. Cutting-edge methods will be employed to extract valuable information about snow and glaciers from satellite data, which will then be assimilated into a physically based model for snow and ice water equivalent estimation. The model will be run in hindcast mode, generating reanalysis data spanning the past 20+ years at daily resolution. Notably, the proposed approach yields SWE maps at a 50-m pixel size, achieving an unprecedented level of spatial detail for the vast area of the extra-tropical Andes Cordillera. The Copernicus Data Space Ecosystem (CDSE)* infrastructure, which houses both processing facilities and a comprehensive data repository, together with the openEO specifications, serves as the backbone for this project. CDSE will be populated with all the necessary code, in-situ data, and third-party space data, enabling the seamless extraction, processing, and analysis of mountain cryosphere-related information. SNOWCOP will leverage innovative snow stations powered by EGNSS technology to reinforce in-situ measurements of SWE and liquid water content (LWC). These stations will be positioned at locations that optimally represent the snowmelt dynamics within a specific catchment, based on the project-generated reanalysis data.
To further enhance accessibility and utilization, a user-friendly, standardized API and a robust dissemination strategy will be implemented in collaboration with key public authorities in Europe and South America. This strategic approach aims to attract new users from both the scientific and commercial sectors, ensuring that the project’s valuable data and insights reach a broad audience. The primary results of the snow modelling developments and the integration of the SWE estimation method within the CDSE, along with initial community and policymaker engagement activities, will be presented at the conference. Focusing on the critical Andes mountain range, where meltwater serves as a vital lifeline for millions of people but remains poorly monitored, this initiative leverages the expertise of the International Copernicus partners at the University of Chile. Through this collaboration, we tackle shared challenges faced by mountain regions globally, developing replicable solutions that unlock new opportunities for both local and European communities. *https://dataspace.copernicus.eu/ SNOWCOP has received funding from the European Union’s Horizon Europe Research and Innovation Actions programme under Grant Agreement 10180133
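For readers unfamiliar with SWE modelling, the sketch below shows a classic temperature-index (degree-day) snowpack update. It is a far simpler scheme than the physically based model SNOWCOP will assimilate observations into; the function name and the parameter values (`ddf`, `t_melt`) are illustrative only.

```python
def swe_series(precip_mm, temp_c, ddf=3.0, t_melt=0.0):
    """Daily snow water equivalent (mm) from precipitation and temperature.

    Simplified temperature-index (degree-day) scheme, a hedged stand-in for
    a physically based snow model. ddf is a hypothetical degree-day factor
    (mm per deg C per day); t_melt is the melt-threshold temperature (deg C).
    """
    swe, out = 0.0, []
    for p, t in zip(precip_mm, temp_c):
        if t <= t_melt:
            swe += p                                   # precipitation accumulates as snow
        else:
            swe = max(0.0, swe - ddf * (t - t_melt))   # degree-day melt
        out.append(swe)
    return out
```

A real reanalysis would replace this with an energy-balance model and assimilate the satellite-derived snow information described above, but the accumulate/melt bookkeeping is the same at its core.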

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Collaboration around standardized benchmarks: Finding the common ground between Ocean and Data scientists

Authors: Quentin Febvre, Alexis Mouche, Antoine Grouazel, Julien Le Sommer, Clément Ubelmann, Ronan Fablet
Affiliations: Ifremer, IMT Atlantique, CNRS, Datlas
The development of digital twins of the ocean system will require collaboration across different fields. Deep learning is an integral part of this landscape, bringing both opportunities to be seized and challenges to be addressed. Indeed, developing deep learning methods to analyze ocean observation data requires methodological (ML) expertise. It also necessitates instrumental and geophysical knowledge. Furthermore, in an evolving observing system with changing instruments, and a changing climate, correctly diagnosing and monitoring the outputs of neural networks is paramount. This is essential for maintaining the quality of downstream products and scientific outputs. We present here general principles for designing standardized benchmarks which can serve as the foundation for collaborative development of deep learning solutions to ocean observation problems. Ocean scientists define scientific or operational objectives using data (observations, reanalyses, simulations, …) and an evaluation framework. Data scientists develop data-driven solutions for the specified problem. Such collaboration relies on iteratively identifying failure cases and updating the evaluation cases, metrics and methods in order to guide further improvements. We present which concepts (data and code versioning, experiment tracking, workflow management) and Python libraries (hydra, mlflow, dvc) can be used to implement such iterative and collaborative spaces. We detail the feedback acquired from two projects with different scopes: - A public data challenge on SSH mapping from satellite altimeters. The project aims at providing easy data access, reproducible and extensible processing pipelines, as well as an automated evaluation workflow. This initiative demonstrates ways to facilitate the participation of outside research teams and individuals in a specific ocean observation challenge. - An internal evaluation bench for sea state parameter inversion from Synthetic Aperture Radar images.
The goal of this work is to install, track and manage ML models as part of a production pipeline with evolving processing steps and validation test cases. This project showcases how to develop, maintain and iterate on deep learning algorithms for observation analysis.
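The benchmark contract described above (ocean scientists fix the cases and metrics, data scientists iterate on models against them) can be sketched with a minimal metric registry. Everything here, the case name, the single RMSE metric, the identity model, is illustrative and not taken from either project.

```python
# Minimal sketch of a standardized evaluation bench: a registry of named
# metrics plus a fixed set of test cases, so both sides iterate on the
# same contract. All names are hypothetical.
METRICS = {}

def metric(name):
    """Decorator registering a scoring function under a stable name."""
    def register(fn):
        METRICS[name] = fn
        return fn
    return register

@metric("rmse")
def rmse(pred, truth):
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)) ** 0.5

def evaluate(model, cases):
    """Run a model over {case_name: (inputs, truth)} and score every metric."""
    return {name: {m: fn(model(x), truth) for m, fn in METRICS.items()}
            for name, (x, truth) in cases.items()}
```

In practice the registry, cases and model versions would live behind the versioning and tracking tools named in the abstract (dvc for data, mlflow for runs, hydra for configuration), so that failure cases found in one iteration become new evaluation cases in the next.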

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Flood Simulation and Forecasting based on Earth Observation and AI for Sustainable Planning of Climate Change Adaptation

Authors: Mariana Damova, Dr Stanko Stankov, Dr Emil Stoyanov, Hermand Pessek, Hristo Hristov
Affiliations: Mozaika
We will present one of the first use cases on the DestinE platform, a joint initiative of the European Commission, the European Space Agency and EUMETSAT providing access to global Earth observation, meteorological and statistical data, and emphasize the good practice of intergovernmental agencies acting in concert. Further, we will discuss the importance of space-bound disruptive solutions for improving the balance between the ever-increasing water-related disasters driven by climate change and minimizing their economic and societal impact. The use case focuses on forecasting floods and estimating the impact of flood events on the urban environment and the ecosystems in the affected areas, with the purpose of helping municipal decision-makers to analyze and plan resource needs and to forge human-environment relationships by providing farmers with insightful information for improving their agricultural productivity. For the forecast, we will adopt an EO4AI method of our platform ISME-HYDRO, in which we employ a pipeline of neural networks applied to in-situ measurements and satellite data of meteorological factors influencing the hydrological and hydrodynamic status of rivers and dams, such as precipitation, soil moisture, vegetation index and snow cover, to model flood events and their extent. The ISME-HYDRO platform is an e-infrastructure for water resources management based on linked data, extended with further intelligence that generates forecasts with the method described above, raises alerts, formulates queries, provides superior interactivity and drives communication with the users. It provides synchronized visualization of table views, graph views and interactive maps. It has been federated with the DestinE platform.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: DT-HEAT: A Digital Twin for Urban Heat Resilience

Authors: Iphigenia Keramitsoglou, Aphrodite Bouikidis, Aleš Urban, Jan Geletič, Klea Katsouyanni, Evangelia Samoli, Antonis Analitis, Alexandra Tragaki, Eleni Toli, Panagiota Koltsida, Nefta Votsi, Evangelos Gerasopoulos, Christos Zerefos, Stavros Solomos, Christos Spyrou, Sorin Cheval, Konstantinos Zervas, Evi Tzimpoula, Gaia Cipolletta, Chris Kiranoudis
Affiliations: National Observatory Of Athens, Czech University of Life Sciences in Prague, Institute of Computer Science of the Czech Academy of Sciences, Medical School, National Kapodistrian University of Athens, Harokopio University of Athens, Athena RC, Academy of Athens, National Meteorological Administration, New Metropolitan Attica, Serco
The intensifying impacts of heatwaves, particularly on public health and mortality, underscore the urgent need for innovative and collaborative solutions to enhance urban resilience. Climate change is amplifying the frequency and severity of these extreme events, posing unprecedented challenges to metropolitan areas and their most vulnerable populations. DT-HEAT, currently under development within the European Commission-funded CARMINE project and aligned with ESA's Destination Earth (DestinE) initiative, represents a cutting-edge response to these challenges. By leveraging digital twin technology, DT-HEAT provides a predictive and actionable platform to estimate heat-related mortality, support emergency response planning, and promote the integration of Nature-based Solutions (NbS) into urban landscapes. At its core, DT-HEAT combines high-resolution urban modeling, satellite data, real-time weather forecasts, mortality records, and socio-economic indicators to deliver comprehensive insights for both short-term and long-term planning. The tool is designed to empower stakeholders, from local/regional authorities to organizations serving and representing vulnerable populations, with the ability to anticipate heatwave impacts and implement targeted actions and interventions. Its dual focus—on immediate heatwave response and long-term urban resilience—ensures that cities are better equipped to protect their residents, minimize heat risk, and adapt to a changing climate. DT-HEAT exemplifies the power of European collaboration and the role of public-private partnerships in addressing complex environmental challenges. By bringing together technology providers, local governments, and academic institutions, the project has fostered a robust and user-centric ecosystem. Current developments in Athens Metropolitan Area and Prague showcase the adaptability of DT-HEAT across diverse urban contexts. 
In Athens, the tool is being tailored to address the city’s intense heatwaves, exacerbated by dense urban structures and limited green spaces. Prague’s deployment of DT-HEAT addresses both extreme heat and air quality in a historic, mixed-use urban environment. These case studies highlight the tool's flexibility and its potential for scaling to other cities, each with unique challenges and characteristics. *Technical Development and DestinE Integration* The technical foundation of DT-HEAT is now being transferred to ESA's DestinE platform, enabling it to leverage the platform’s advanced capabilities. A user-friendly dashboard interface will provide policymakers and stakeholders with real-time and predictive insights into heatwave characteristics and impacts. The tool is powered by data streams from weather forecasts and climate simulations (from DestinE and ECMWF/Climate Data Store, as well as from local partners providing downscaled data), which support short-term and long-term impact assessments, respectively. Short-term planning is based on a data-driven approach. Historical datasets of daily environmental parameters, such as average and maximum temperature, along with deaths attributable to heat, are used to train a deep learning model. This model predicts next-day mortality with high performance, allowing city officials to implement targeted emergency measures. This short-term mortality estimation is crucial for immediate heatwave management, enabling cities to allocate resources efficiently and save lives. The long-term planning component of DT-HEAT focuses on assessing the cumulative impact of heatwaves on urban populations and informing strategies to enhance resilience and reduce mortality. By simulating different urban planning scenarios, such as implementing different NbS, the tool provides insights to policymakers as to which solution will have the highest positive impact.
This dual capability of DT-HEAT—addressing both immediate and strategic needs—will support urban resilience. *Community Engagement and Localized Solutions* The CARMINE project places emphasis on stakeholder engagement by integrating Living Labs in all its case study areas, including the Athens Metropolitan Area and Prague. These collaborative spaces bring together local stakeholders, including municipal officials, community leaders, research/academia and social service organisations, to contribute to the design and validation of digital solutions and to co-create urban resilience strategies. The implementation of DT-HEAT on the DestinE platform directly targets users who may not have extensive experience or familiarity with the data but are interested in gaining insights from it. *Recognition and Vision* DT-HEAT’s recognition as the "Most Promising Proposal" at the 2nd DestinE Innovation Challenge organized by ESA underscores its transformative potential. The award highlights the project’s innovative approach to integrating digital twin technology with urban resilience planning. This recognition also provides a platform for further collaboration and expansion with new features, opening doors to new partnerships and opportunities for scaling the tool to additional cities. The project’s vision extends beyond addressing immediate challenges. By fostering international collaboration, engaging local communities, and aligning with global policy goals, DT-HEAT aims to contribute to the broader objective of building a Digital Twin Earth. This ambitious initiative seeks to revolutionize how we understand and manage our planet, providing a comprehensive and inclusive tool for sustainable development. DT-HEAT represents a significant step forward in addressing the escalating impacts of heatwaves. By integrating advanced technology, fostering collaboration, and engaging communities, the tool offers a scalable and adaptable solution for urban resilience.
Its deployment in Athens Metropolitan Area and Prague provides valuable insights into its potential, while its alignment with ESA’s DestinE platform ensures that it remains at the forefront of digital innovation. As cities worldwide face increasing heat-related challenges, DT-HEAT serves as a model for how collaborative, data-driven approaches can protect public health, enhance sustainability, and inspire global efforts to build resilient urban environments. Funded by the European Union (GA 101137851).
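As a rough sketch of the data-driven short-term approach (historical daily temperatures and heat-attributable deaths used to predict next-day mortality), the helper below builds lagged training pairs for such a model. The function name and the lag depth are hypothetical, not the CARMINE model configuration.

```python
def make_training_pairs(tmax, deaths, lags=3):
    """Build (features, target) pairs for next-day mortality prediction.

    Features: the previous `lags` days of maximum temperature and of deaths;
    target: deaths on the following day. The lag depth is an illustrative
    choice; a real model would also include other environmental and
    socio-economic covariates.
    """
    X, y = [], []
    for i in range(lags, len(deaths)):
        X.append(tmax[i - lags:i] + deaths[i - lags:i])
        y.append(deaths[i])
    return X, y
```

Any supervised learner (the abstract uses a deep network) can then be fit to `X`, `y`; the key point is the strictly causal windowing, so the model only ever sees data available before the day it predicts.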

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Dynamic Spin on a Digital Twin: Integrating Real-Time Weather, Land-Cover and Land-Use Changes in Landslide Hazard Assessment

Authors: Margit Kurka, Manuela Hirschmugl, Christian Bauer, Herwig Proske, Janik Deutscher
Affiliations: University of Graz - Department of Geography and Regional Science, Joanneum Research GmbH
Millions of people in Europe and worldwide live in highly landslide-prone areas¹, suffering many fatalities² and high economic losses. Global economic loss due to landslides amounts to several hundred billion Euros every year² ³. Europe faces the highest economic loss worldwide, and within Europe Austria ranks high among the countries most affected by the consequences of landslides¹. Vast areas of Austria consist of mountainous and hilly terrain prone to gravitative mass movement due to the existing topographic and geological conditions. Current landslide susceptibility and hazard models primarily focus on static, causal parameters such as geology and morphology as predisposing factors of slope instability, often neglecting the temporal variability introduced by dynamic parameters such as extreme weather events, land-cover changes (e.g. deforestation) and land-use adaptations. This highlights the necessity of integrating dynamic variables into the modelling process to improve real-time response to high-intensity or long-duration precipitation events as well as anthropogenic changes to land cover and morphology. Landslides are highly dynamic processes, and their monitoring and prevention therefore require a dynamic approach. The study presented here addresses this gap by developing a digital twin landslide susceptibility model, with a focus on rainfall-triggered sliding and flowing modes in unconsolidated hillside materials, to duplicate not only the static conditions responsible for such landslides but also the dynamic processes involved in, causing and triggering them. The model area for the dynamic digital twin lies in the south of the state of Styria, Austria, where landslides are a frequent and widespread phenomenon, demonstrating that not only mountainous areas but also the less-studied foothills and foreland are often affected.
Gravitative mass movements in this area are driven by unique geological conditions, consisting of unconsolidated interlayered sands, silts, clays, gravels and marls. In the project area, sufficient data is available from previous projects, including landslide maps and inventories, and geological, meteorological and land-cover data. Globally, many regions experience increased rainfall intensity and duration due to climate change, causing experts to increasingly voice concerns about a heightened probability of, and risks from, landslides³. Considering that rainfall-triggered landslides cause the majority of landslide-related fatalities and high monetary losses² ⁴, these are relevant points when evaluating landslide hazard, whether viewed globally, nationally or regionally. In 2009 and 2023, intense rainfall triggered thousands of individual landslides within the project area. In the case of the August 2023 event, which plays a key role in this study, the low-pressure system ‘Zacharias’ was responsible for severe flooding and widespread landslide occurrences. The districts of Southeast Styria and Leibnitz were declared disaster zones, with over 3,000 landslide-related damage reports and an estimated loss exceeding 30 million Euros⁵. This serves as one example of the increase in landslide occurrences due to more frequent long-lasting or high-intensity precipitation events, observed increasingly all over the country. It shows that Austrian stakeholders, be they governmental or private, are faced with the challenge of predicting, monitoring and preventing risk associated with landslide events in an ever-changing climatic and environmental setting. A major limitation in landslide modelling, and in providing well-founded landslide susceptibility predictions for regional planning and infrastructure protection, lies in the lack of detailed landslide inventories, particularly regarding the temporal occurrence of landslides⁶.
The implementation of dynamic parameters, such as precipitation and land-cover changes, depends on this knowledge. Herein lies the advantage of the chosen project area, where such data is available. The dynamic input parameters taken into account in the study are high-resolution meteorological data and land-cover data. Meteorological data is provided by the high-resolution simulation models of the ECMWF (European Centre for Medium-Range Weather Forecasts), developed in the framework of DestinE (Destination Earth), a flagship initiative of ESA and the European Commission to create a digital twin of the Earth to model, monitor and simulate natural phenomena, hazards and the related human activities. The Weather-Induced Extremes Digital Twin (Extremes DT), one of the first two digital twins implemented within DestinE, provides forecasts and simulations at 2-4 km resolution and, on demand, even at 500 m resolution⁷, and is therefore extremely valuable for our task. Additionally, with regard to land-use data, dynamic EO products such as the Copernicus high-resolution layers, national datasets from the Green Transformation Information Factory Austria (GTIF-AT), land use and land cover information, soil moisture, and forest structural parameters from airborne LiDAR will be included in developing the dynamic digital twin. The GTIF-AT, for example, provides forest disturbances at high spatial and temporal resolution to be included in the forecasting. Airborne LiDAR data provides insights into the vertical structure of forests, which in turn is relevant for the forests’ protective functionality. The utilization of synergies with the DestinE flagship initiative is an integral part of the project. The ultimate goal is the development of an automated prototype which combines DestinE and local data in such a way that daily hazard maps can be provided, based on different scenarios such as changes of land cover or land use, an increase of preventive measures, or sudden extreme weather events.
Calibration, modelling and validation will be performed on data available for the August 2023 event in the south of Styria, since it offers a unique opportunity: a large number of landslides were mapped in the field, and the event was one of the first on-demand high-resolution use cases of the Extremes DT, with data already available at ECMWF⁷. The project is carried out by a multi-disciplinary team with partners from research institutions (University of Graz, Joanneum Research GmbH) and a civil engineering SME based in the region (Lugitsch und Partner Ziviltechniker GmbH), thus bundling expertise in landslide susceptibility and hazard modelling, programming, geology, forestry, remote sensing, meteorology, engineering and construction. Stakeholders, such as the Austrian railway company (OEBB), are aware of the goals aimed for in the project and underline the necessity of moving from static landslide susceptibility and hazard maps towards dynamic products when it comes to prediction and advancing the functionality of early warning systems. As one example among many, the influence of landslides on railway infrastructure highlights the cross-regional and international relevance of the presented study. References ¹ Haque, U.; Blum, P.; Da Silva, P. F.; Andersen, P.; Pilz, J.; Chalov, S. R.; Malet, J.-P.; Auflič, M. J.; Andres, N.; Poyiadji, E.; Lamas, P. C.; Zhang, W.; Peshevski, I.; Pétursson, H. G.; Kurt, T.; Dobrev, N.; García-Davalillo, J. C.; Halkia, M.; Ferri, S.; Gaprindashvili, G.; Engström, J.; Keellings, D. 2016. Fatal landslides in Europe. Landslides. 13, pp. 1545–1554. ² Froude, M. J.; Petley, D. N. 2018. Global fatal landslide occurrence from 2004 to 2016. Nat. Hazards Earth Syst. Sci. 18, pp. 2161–2181. ³ Marín-Rodríguez, N. J.; Vega, J.; Zanabria, O. B.; González-Ruiz, J. D.; Botero, S. 2024. Towards an understanding of landslide risk assessment and its economic losses: a scientometric analysis. Landslides. 21, pp. 1865–1881. ⁴ Haque, U.; Da Silva, P.
F.; Devoli, G.; Pilz, J.; Zhao, B.; Khaloua, A.; Wilopo, W.; Andersen, P.; Lu, P.; Lee, J.; Yamamoto, T.; Keellings, D.; Wu, J.-H.; Glass, G. E. 2019. The human cost of global warming: Deadly landslides and their triggers (1995–2014). Science of The Total Environment. 682, pp. 673–684. ⁵ Wind, H.-P.; Urbanitsch, A. 2023. Verbal communication of the extent of approximate landslide damages and costs by members of Land Steiermark. ⁶ Brenning, A. 2005. Spatial prediction models for landslide hazards: review, comparison and evaluation. Nat. Hazards Earth Syst. Sci. 5, pp. 853–862. ⁷ Gascón, E.; Sandu, I.; Vannière, B.; Magnusson, L.; Forbes, R.; Polichtchouk, I.; van Niekerk, A.; Sützl, B.; Maier-Gerber, M.; Diamantakis, M.; Bechtold, P.; Balsamo, G. 2023. Advances towards a better prediction of weather extremes in the Destination Earth initiative. EMS Annual Meeting Abstracts. 20. EMS2023-659.
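One dynamic ingredient such a model can evaluate in near real time is a rainfall intensity-duration triggering threshold. The sketch below uses the classic global coefficients of Caine (1980), which a regional digital twin would recalibrate for Styria from the local landslide inventory rather than adopt directly.

```python
def exceeds_id_threshold(intensity_mm_h, duration_h, a=14.82, b=0.39):
    """Check a rainfall intensity-duration landslide-triggering threshold.

    Classic power-law form I = a * D**(-b) with Caine's (1980) global
    coefficients as defaults; these are illustrative and would be replaced
    by regionally calibrated values in an operational hazard chain.
    """
    return intensity_mm_h >= a * duration_h ** (-b)
```

Combined with the static susceptibility layers (geology, morphology), a daily exceedance map of such a threshold, fed by Extremes DT precipitation forecasts, is one simple way to turn a static susceptibility map into a dynamic hazard product.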

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: D.02.11 - POSTER - Super-resolution in Earth Observation: The AI change of paradigm

The design of the Sentinel-2 sensor, with spatial resolutions of 10 m, 20 m and 60 m for different spectral bands, was a key turning point for the field of super-resolution in the context of the resources now offered by deep learning methods. Spatial resolution is a characteristic of the imaging sensor, i.e. the bandwidth of its transfer function; super-resolution means enlarging the range of spatial frequencies and that bandwidth. Classical approaches treated this mainly in two ways: i) as an ill-posed inverse problem, with solutions constrained by strong hypotheses that are very seldom fulfilled in practical cases; ii) based on physical models, such as pansharpening, the design of optical sensors with a half-pixel shift in the detector array, or, in the case of SAR, wavenumber tessellation or the use of information from the side lobes of multistatic SAR. In reality, super-resolution is a much broader area: it may also refer to the wavelength bandwidth of multi- or hyperspectral sensors, the radiometric resolution, the characterization of single-pixel cameras based on compressive sensing, 3D estimation in SAR tomography, an enhanced "information" resolution (e.g., estimating tree density from a low-resolution observation instead of counting trees in very high resolution), or enhanced resolution of ocean wind estimation from SAR observations.

With the advent of deep learning, super-resolution entered a new era. Deep models with huge numbers of parameters, trained on big datasets, opened a new alternative for super-resolution: data prediction applied to a low-resolution sensor by training a model with high-resolution data. The new paradigm no longer requires strong hypotheses, but it suffers from the black-box syndrome of deep learning. Thus, new methods are required, such as hybrid methods using the sensor image formation models, the derivation of consistency criteria for the physical parameters, and the verification of cal/val criteria for the super-resolved products. The session invites submissions for any type of EO data and will address these new challenges for the Copernicus, Earth Explorer and related sensors.
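One of the consistency criteria mentioned above can be made concrete: degrade the super-resolved product with an assumed sensor model and compare it to the original low-resolution input. The sketch below uses a simple box-filter degradation, which is itself an assumption; a real cal/val check would use the sensor's point spread / transfer function.

```python
def block_mean_downsample(img, f):
    """Average f x f blocks of a 2-D list of floats, mimicking a coarse sensor."""
    h, w = len(img) // f, len(img[0]) // f
    return [[sum(img[f * i + di][f * j + dj] for di in range(f) for dj in range(f))
             / (f * f) for j in range(w)] for i in range(h)]

def consistency_rmse(sr_img, lr_img, f):
    """Degrade a super-resolved image by factor f and compare it to the
    original low-resolution input. A low RMSE means the super-resolved
    product is at least consistent with the observation; it does not prove
    the invented high frequencies are correct."""
    deg = block_mean_downsample(sr_img, f)
    n = len(lr_img) * len(lr_img[0])
    return (sum((deg[i][j] - lr_img[i][j]) ** 2
                for i in range(len(lr_img))
                for j in range(len(lr_img[0]))) / n) ** 0.5
```

This is the kind of physics-anchored diagnostic that hybrid methods pair with learned models to mitigate the black-box problem.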

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Hyperspectral Earth Observation for Sustainability: Enhancing EnMAP Data Spatial Resolution through Deep Neural Network Fusion with Sentinel-2 Imagery.

Authors: Pierre-Laurent Cristille, Jeronimo Bernard-Salas, Nick Cox, Emmanuel Bernhard, Antoine Mangin
Affiliations: ACRI-ST, CERGA, INCLASS Common Laboratory, Institut d’Astrophysique Spatiale (IAS)
The transition from Earth observation to actionable insights for climate and sustainability necessitates advancements in remote sensing technologies. Hyperspectral imaging has emerged as a cornerstone in this domain, offering unparalleled spectral richness across a wide range of wavelengths. Such data enable precise material characterization and monitoring of environmental changes. However, the relatively coarse spatial resolution of hyperspectral sensors, such as the Environmental Mapping and Analysis Program (EnMAP), limits their applicability in scenarios demanding fine spatial detail. This study proposes a novel image fusion framework, driven by a deep neural network trained on fully synthetic data, that enhances the spatial resolution of EnMAP hyperspectral data by integrating it with high-resolution multispectral images from Sentinel-2. EnMAP is a cutting-edge spaceborne hyperspectral mission capturing 218 spectral bands across the 420–2450 nm wavelength range. While its spectral fidelity supports diverse applications in environmental and land-use studies, its 30-meter spatial resolution restricts detailed spatial analyses. On the other hand, Sentinel-2, part of the European Union’s Copernicus Programme, offers 10-meter spatial resolution but with limited spectral coverage, including only four spectral bands at this resolution. The complementary nature of these datasets forms the basis for multi- and hyperspectral fusion, enabling the creation of products with both high spatial and spectral resolution. A critical challenge in supervised learning for image fusion is the lack of high-resolution hyperspectral ground truth data. To address this, we employed a Linear Mixing Model (LMM) to generate synthetic datasets. The LMM is a widely used approach in remote sensing, effectively blending the spectral properties of EnMAP with the spatial details of Sentinel-2.
Specifically, we synthesized 10-meter EnMAP-like images by spatially enhancing EnMAP spectral data with Sentinel-2 spatial features. Each synthetic image combines abundance maps extracted from Sentinel-2 with representative spectra obtained by unmixing an EnMAP image covering the same area. These images were then degraded back to 30 meters to mimic the original EnMAP resolution. The corresponding 10-meter Sentinel-2 images were simulated by integrating the ground truth with the sensor's spectral response functions (SRF); the 20- and 60-meter bands were simulated by degrading the 10-meter integrated product. This approach yielded a representative dataset, covering all continents in every season, comprising simulated 30-meter EnMAP images, their 10-meter Sentinel-2 counterparts, and the synthetic 10-meter EnMAP ground truth.

We designed a transformer-based autoencoder network to address the fusion challenge, leveraging its self-attention mechanism to model both local and global spectral-spatial dependencies. Transformers have revolutionized natural language processing and computer vision, and their application to hyperspectral image fusion represents a novel advancement. The network's encoder processes the input data (low-resolution hyperspectral and high-resolution multispectral images) into compact feature representations, which are then decoded to reconstruct a high-resolution hyperspectral image with enhanced spatial and spectral quality. The training process incorporated a custom loss function designed to balance spatial fidelity and spectral accuracy. Its key components are:

1. Spectral Reconstruction Loss: ensures alignment of spectral signatures between the fused image and the high-resolution ground truth.
2. Spatial Sharpness Loss: promotes spatial clarity by penalizing deviations from high-resolution details in the Sentinel-2 data.
3. Spectral Integrity Constraint: regularizes the output to maintain consistency with EnMAP's spectral profiles.

Preliminary results demonstrate the efficacy of the proposed framework in enhancing EnMAP's spatial resolution to 10 meters. The fused images exhibit significant improvement in spatial sharpness while maintaining spectral integrity. Quantitative metrics, including Root Mean Squared Error (RMSE), Peak Signal-to-Noise Ratio (PSNR) and Spectral Angle Mapper (SAM), confirm the superior performance and generalization capabilities of the model compared to baseline methods. Qualitative spectral analyses further reveal the model's ability to capture fine spatial details and preserve the spectral signatures critical for remote sensing applications.

This fusion framework has far-reaching implications for Earth observation and sustainability. By addressing the limitations of spatial resolution in hyperspectral imaging, the proposed method enables detailed environmental monitoring, improved land-use classification, and precise assessment of climate-related phenomena. Potential applications include:

• Agriculture: enhanced hyperspectral data can improve crop health monitoring, soil analysis, and precision farming practices.
• Forestry: fused images enable detailed assessments of forest density, species distribution, and deforestation patterns.
• Water Resources: the improved resolution facilitates monitoring of water quality and aquatic ecosystems.
• Urban Development: high-resolution hyperspectral data supports urban planning, infrastructure monitoring, and pollution analysis.

Furthermore, the fusion process illustrates the synergy between multispectral and hyperspectral remote sensing, paving the way for future missions integrating both technologies. The transformer-based architecture employed in this study also highlights the potential of deep learning to tackle complex challenges in Earth observation.
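The three loss components can be sketched numerically. The following is a minimal NumPy illustration, not the authors' implementation: the box-average degradation operator, the gradient-magnitude sharpness proxy, and the weights `w` are all assumptions made for the sake of the example.

```python
import numpy as np

def degrade(img, factor=3):
    """Box-average downsampling of a (bands, H, W) cube; a crude MTF stand-in."""
    b, h, w = img.shape
    return img.reshape(b, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

def sam(a, b, eps=1e-8):
    """Mean spectral angle (radians) between two (bands, H, W) cubes."""
    dot = (a * b).sum(axis=0)
    cos = dot / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0)).mean()

def grad_mag(x):
    """Per-pixel gradient magnitude averaged over bands; a simple sharpness proxy.
    Works even if the two inputs have different band counts."""
    gy = np.diff(x, axis=1)[:, :, :-1]
    gx = np.diff(x, axis=2)[:, :-1, :]
    return np.sqrt(gx**2 + gy**2).mean(axis=0)

def fusion_loss(pred, gt, s2_hr, enmap_lr, w=(1.0, 0.1, 0.1)):
    spectral = np.abs(pred - gt).mean()                         # 1. spectral reconstruction (l1)
    sharp = np.abs(grad_mag(pred) - grad_mag(s2_hr)).mean()     # 2. spatial sharpness
    integrity = sam(degrade(pred), enmap_lr)                    # 3. spectral integrity
    return w[0] * spectral + w[1] * sharp + w[2] * integrity
```

A perfect prediction drives all three terms towards zero, while noise inflates the loss, which is the balancing behaviour described above.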
The outcomes of this research contribute directly to climate action and sustainability goals by enhancing the usability of hyperspectral data. Improved spatial resolution enables detailed analysis of phenomena such as land-use changes, urban expansion, and ecosystem degradation. These insights can inform policy decisions, resource management strategies, and climate adaptation measures. By transforming EnMAP hyperspectral data into high-resolution products, this work supports actionable insights for monitoring environmental changes, assessing ecosystem health, and promoting sustainable development. The integration of advanced machine learning techniques with remote sensing underscores the role of interdisciplinary approaches in addressing global challenges.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Sentinel-2 Super-Resolution With Geolocation-Aware Generative Models

Authors: Ksenia Bittner
Affiliations: German Aerospace Center (DLR)
Recent advances in remote sensing technologies have provided a wealth of satellite imagery, enabling applications in areas such as urban planning, disaster management, environmental monitoring, and resource management. Among these applications, building segmentation is particularly critical, especially for rapidly urbanizing regions. High-resolution imagery is crucial for accurately mapping and analyzing structures; however, publicly available satellite imagery such as Sentinel-2 poses challenges due to its relatively low spatial resolution (10–20 m per pixel). To overcome these limitations, super-resolution techniques have emerged as essential tools to enhance image quality and enable finer detail extraction from low-resolution images.

Generative Adversarial Networks (GANs) have previously been employed for super-resolution of remote sensing images, but their application has largely been confined to limited datasets, for example NAIP imagery, which covers only regions within the United States. Models trained on these datasets often fail to generalize effectively to global regions, producing suboptimal results when applied elsewhere. Moreover, when attempting to upscale large areas using tiling techniques, noticeable patch artifacts often emerge, degrading the overall quality of the output.

In recent years, the use of prior information through embeddings has gained popularity, most notably with the success of text embeddings in various tasks. Building on this trend, there has been growing interest in leveraging geographic context via location embeddings. In this work, we introduce a novel approach that leverages location embeddings to enhance super-resolution models. Additionally, we improve the GAN's performance by incorporating techniques used in diffusion models, and we conduct experiments to address common patching issues caused by tiling, drawing inspiration from recent advancements in seamless image synthesis.
We can summarize our contributions as follows:

1. We develop the first location-guided super-resolution model for remote sensing, designed to enhance generalization across diverse geographic regions by integrating location embeddings directly into the model.
2. We improve a GAN-based super-resolution model's architecture by integrating attention mechanisms to improve the scalability and context understanding of the model.
3. We adapt a seamless image synthesis method to super-resolution, tackling common tiling artifact problems by incorporating neighboring image data and ensuring the generation of continuous, high-resolution satellite imagery.
4. We showcase the transformative potential of the generated 1 m resolution super-resolved Sentinel-2 imagery by successfully applying it to tasks such as building footprint extraction. Using the enhanced imagery, we directly infer binary building masks, achieving superior performance in downstream tasks compared to previously developed super-resolution methods. This demonstrates the significant advantages of our approach in delivering actionable, high-precision results for practical applications.

Our results demonstrate the super-resolution of low-resolution satellite imagery, marking a key step forward in the application of publicly available datasets for global-scale remote sensing tasks. By enhancing the spatial resolution of Sentinel-2 imagery by a factor of 10, we unlock new possibilities for precise and scalable applications across a variety of remote sensing fields. This breakthrough not only maximizes the utility of existing satellite data but also reduces the dependency on launching new, high-cost sensors, promoting sustainability in Earth observation. Furthermore, our methodology establishes a robust foundation for future research and operational integration in geospatial analytics, highlighting the transformative potential of AI-driven approaches in addressing global challenges.
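To make the location-conditioning idea concrete, the sketch below encodes a latitude/longitude pair sinusoidally and applies a feature-wise (FiLM-style) modulation to a convolutional feature map. The encoding design, dimensions, and function names are hypothetical illustrations, not the paper's actual architecture.

```python
import numpy as np

def loc_embedding(lat, lon, dim=16):
    """Sinusoidal encoding of a geographic coordinate (hypothetical design).
    Uses a geometric frequency ladder over each coordinate, then sin/cos."""
    freqs = 2.0 ** np.arange(dim // 4)
    ang = np.concatenate([np.radians(lat) * freqs, np.radians(lon) * freqs])
    return np.concatenate([np.sin(ang), np.cos(ang)])  # shape (dim,)

def film_condition(feat, emb, w_scale, w_shift):
    """FiLM-style conditioning: a per-channel scale and shift derived from the
    location embedding modulates a (channels, H, W) feature map."""
    gamma = w_scale @ emb   # (channels,)
    beta = w_shift @ emb    # (channels,)
    return feat * (1.0 + gamma[:, None, None]) + beta[:, None, None]
```

In a GAN generator, `w_scale` and `w_shift` would be learned projections, letting the same network adapt its features to the geographic context of each tile.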
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Sharper Insights: Enhancing Agricultural and Environmental Monitoring with Sentinel-2 Super-Resolution

Authors: Andreas Walli, Dr. Michael Riffler
Affiliations: Geoville
Sentinel-2, with its multispectral imaging capabilities and high revisit frequency, has become a cornerstone of Earth observation across diverse applications such as agriculture, urban planning, forestry, and environmental monitoring. However, its native spatial resolution of 10–60 meters can limit its effectiveness in scenarios requiring finer spatial detail. Super-resolution techniques, which enhance the resolution of Sentinel-2 data using advanced algorithms such as deep learning or data fusion, address this limitation by generating higher-detail imagery without the need for new satellite missions. This enhanced resolution unlocks new possibilities, enabling more accurate land cover and land-use classification, improved crop monitoring, detailed urban analysis, and better support for disaster response.

The central concept of our approach is to use high-quality ground truth data at a finer spatial resolution than Sentinel-2's native 10 m resolution, preferably high-quality vector data that can be rasterized to any desired resolution. This ensures that the model can access detailed, precise reference data, essential for effective super-resolution learning. For the modelling, we use a U-Net, a well-known convolutional neural network architecture designed for image segmentation, trained with Sentinel-2 time-series data as input. The model computes its loss by comparing predictions to the super-resolution ground truth layer, optimizing performance through iterative updates. The U-Net's architecture is uniquely suited to this task due to its symmetrical design, where the encoding path captures contextual information and the decoding path reconstructs finer details. By incorporating additional decoding layers relative to the encoding layers, the network is deliberately configured to upscale the resolution.
This modification allows the model to recover and enhance spatial detail, effectively producing the finer-resolution output within the model itself rather than through post-processing. Our results demonstrate that the model can capture and refine prominent geometric features, such as field boundaries, tree rows, and other structural elements of the landscape. The U-Net's ability to detect and preserve geometric patterns is leveraged to reconstruct high-resolution details that align closely with the underlying ground truth. This approach enhances spatial resolution while maintaining the integrity of critical landscape features, offering a robust solution for generating high-resolution imagery from lower-resolution satellite data.

We have implemented this approach to detect sub-field agricultural practices in the Common Agricultural Policy (CAP) Area Monitoring Services for Austria and Wallonia, and in the Copernicus Land Monitoring Services (CLMS) production of the Small Landscape Features (Small Woody Features) layers. The extracted super-resolution features are not just a technical achievement; they are crucial for these applications, demonstrating the real-world impact of our work. By bridging the gap between high-resolution satellites and very-high-resolution alternatives, super-resolution expands the utility of Sentinel-2 data, making it a cost-effective solution for precision-driven applications.
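The asymmetric U-Net idea, one more decoding stage than encoding stages so that the output grid is finer than the input, can be shown at the level of tensor shapes. The sketch below replaces learned convolutions with pooling and upsampling stand-ins; it is an assumption-laden illustration of the architecture, not the production model.

```python
import numpy as np

def pool2(x):
    """2x average pooling over a (C, H, W) tensor (encoder step stand-in)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def up2(x):
    """2x nearest-neighbour upsampling (decoder step stand-in)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_skeleton(x, depth=2):
    """Shape-level sketch: `depth` encoder steps are mirrored by `depth + 1`
    decoder steps, so the output grid is twice as fine as the input."""
    skips = []
    for _ in range(depth):        # encoding path: capture context, halve resolution
        skips.append(x)
        x = pool2(x)
    for s in reversed(skips):     # decoding path: restore resolution with skips
        x = up2(x) + s
    return up2(x)                 # extra decoding stage: the super-resolved output
```

With `depth=2` a (C, 16, 16) input yields a (C, 32, 32) output, which is the structural trick described above: the upscaling happens inside the network rather than as a post-processing step.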
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Super-resolution in Earth Observation: The AI change of paradigm

Authors: Prof. Mihai Datcu, Prof. Andrei Anghel, lecturer Mihai Coca, Prof. Daniela Coltuc, lecturer Cosmin Danisor, Dr. Ing. Omid Ghozatlou, Ing. Vlad
Affiliations: POLITEHNICA Bucharest
Earth Observation (EO) images are records of the electromagnetic signature of the observed scene, represented in a four-dimensional (4D) space: space (geographic), wavelength or polarization, and time. Each EO sensor and mission has a number of parameters defining the scales and recording intervals in this 4D space, i.e. the spatial separation of the finest image detail, the sensing spectral wavelength band and its bandwidth, and the time interval between successive image acquisitions and the duration of the record. Super-resolution, in its classical and most popular sense, refers to the spatial representation: the limit of distinguishability of spatial details in the observed scene.

In this presentation we discuss and define "resolution" as a property of the EO sensor/instrument and the overall mission, covering its three aspects: spatial, spectral, and temporal resolution. These aspects are exemplified with recent results for Sentinel-1 and Sentinel-2. The presentation also summarizes the main super-resolution methodologies, encompassing techniques from ill-posed inverse problems, exploitation of pixel shifts, SAR wavenumber tessellation, and the use of information from the side lobes of multistatic SAR, to PSInSAR and TomoSAR.

With the advent of deep learning, super-resolution entered a new era. Deep models with huge numbers of parameters, trained on big datasets, opened a new alternative to super-resolution: data prediction, in which a model trained with high-resolution data is applied to a low-resolution sensor. The new paradigm no longer requires strong hypotheses, but suffers from the black-box syndrome of deep learning. Thus, new hybrid methods are required that use the sensor image formation models, derive consistency criteria for the physical parameters, and verify cal/val criteria for the super-resolved products.
The spatial resolution of an optical sensor is a property of the imaging system, namely its spatial frequency bandwidth, the Modulation Transfer Function (MTF). The classic example of super-resolution is combining several slightly shifted low-resolution images into a single image that is larger and contains more detail. If the observed scene does not change, the super-resolved image contains actual trustworthy information. In the frequency domain, super-resolution means the restoration of the scene's high frequencies. Super-resolution is therefore an inverse problem: we need a forward model, the image formation model, which is inverted by computation [Farsiu2004]. The model inversion is often an ill-conditioned problem. To restrain the solution space, one adds regularizations, which are often independent of the measured image and bring some prior knowledge about it. The resulting super-resolved image is trustworthy only if all the underlying hypotheses hold.

Most recent methods address the super-resolution task through the lens of neural network prediction, directly modeling the output as a high-resolution image conditioned on its low-resolution counterpart. The method proposed in [Lanaras2018] can be regarded as a precursor to prediction-based super-resolution: two different CNNs predict up-sampled versions of the 60 m and 20 m bands, respectively, from Sentinel-2 images. By constructing input-output training pairs through synthetic degradation, a network is trained using reconstruction losses such as the ℓ1-norm. [Vasilescu2023a] proposes a multi-objective loss for controlling the consistency-synthesis characteristic of the final model, guiding the output high-frequency features towards the high-frequency spatial details of the available high-resolution bands. The prediction is evaluated with Wald's protocol [Wald1997], using the sensors' MTF to measure the reconstruction error on degraded inputs.
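Wald's protocol, degrading the reference with an MTF-like low-pass filter, super-resolving it back, and scoring against the original, can be sketched as follows. The Gaussian MTF surrogate and the parameter values are illustrative assumptions, not any mission's calibrated MTF.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def mtf_degrade(img, factor=2, sigma=1.0):
    """Gaussian low-pass in the frequency domain (a common MTF surrogate),
    followed by decimation by `factor`."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    mtf = np.exp(-2 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
    blurred = np.real(ifft2(fft2(img) * mtf))
    return blurred[::factor, ::factor]

def wald_rmse(sr_model, hr_reference, factor=2):
    """Wald's protocol: degrade the reference, super-resolve it back with the
    model under test, and measure the reconstruction error."""
    lr = mtf_degrade(hr_reference, factor)
    sr = sr_model(lr)
    return np.sqrt(np.mean((sr - hr_reference) ** 2))
```

Any candidate super-resolution model can be plugged in as `sr_model`; the degrade-then-reconstruct loop is what makes the evaluation possible without a true high-resolution reference at the target scale.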
[Nguyen2021] and [Vasilescu2023b] explicitly include an MTF-based operator as a final layer, with characteristics dependent on the spectral bands being up-sampled. These methods can also be used to super-resolve images acquired by another sensor. Spectral super-resolution is a technique used to enhance or recover missing or low-resolution multispectral or hyperspectral bands. By combining spectral and spatial super-resolution techniques, a more comprehensive and accurate hyperspectral image can be generated. This integrated approach allows the recovery of both detailed spectral bands and improved spatial detail, as in [Neagoe2023], where corrupted, missing or unobserved high-resolution pixels are predicted using a U-Net model.

Temporal super-resolution plays a crucial role in Earth observation by predicting and reconstructing missing or unobserved data signatures, such as cloud-covered areas. These techniques use both predictive and generative models to enhance the temporal resolution of satellite images. Predictive models are trained on existing datasets to forecast outcomes, such as identifying cloud-free regions based on historical patterns. Generative models, on the other hand, learn the underlying patterns or distributions of the data to generate new samples similar to the missing data. In the context of cloud removal, generative models such as generative adversarial networks (GANs) are employed to synthesize high-quality, cloud-free images by learning from past observations.
In the line of super-resolution obtained by increasing the system's bandwidth (true super-resolution), bistatic SAR imaging systems with a stationary receiver and a spaceborne transmitter of opportunity (e.g., TerraSAR-X, ERS-2/ENVISAT, or GNSS) open the possibility of imaging the same area using data bursts belonging to multiple subswaths, which correspond to different azimuth bandwidths [Anghel2019]. The available multiburst data can be used in various ways for target characterization by exploiting the enhanced azimuth diversity. An essential benefit of using multiple apertures in a spaceborne transmitter/stationary receiver bistatic SAR is an enhanced azimuth resolution, obtained with access only to publicly available information about the data. Moreover, Sentinel-1 does not operate in spotlight mode and cannot provide very good azimuth resolution. [Rosu2020] introduced a methodology for increasing azimuth resolution by exploiting multiaperture bistatic data acquired in a spaceborne transmitter-stationary receiver configuration. The procedure takes as input several continuous groups of range-compressed pulses (from one or more bursts) and consists of the following steps: compensation of the antenna pattern, resampling in the slow-time domain, and reconstruction of the missing azimuth samples between neighboring groups of pulses using an autoregressive model. The obtained multiaperture range image (with enhanced azimuth bandwidth) is focused on a 2D grid using a back-projection algorithm. The approach was evaluated with real bistatic data acquired over an area of Bucharest, Romania.

Persistent scatterer SAR interferometry measures scene deformation from multitemporal observations with subwavelength accuracy. A parametric phase model estimates persistent scatterers, i.e. points with stable electromagnetic properties. The method typically reaches accuracies on the order of a few mm/year for deformation rates.
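The autoregressive reconstruction of missing azimuth samples between pulse groups can be illustrated on a 1-D signal. The least-squares AR fit below is a generic textbook sketch under simplifying assumptions, not the exact estimator of [Rosu2020].

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares fit of AR coefficients a such that
    x[n] ~= a[0]*x[n-1] + a[1]*x[n-2] + ... + a[order-1]*x[n-order]."""
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]
    A, b = np.array(rows), x[order:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

def ar_extrapolate(x, coef, steps):
    """Predict `steps` samples beyond x using the fitted AR model,
    feeding each prediction back in (as when bridging a gap between bursts)."""
    buf = list(x)
    for _ in range(steps):
        buf.append(float(np.dot(coef, buf[-1:-len(coef) - 1:-1])))
    return np.array(buf[len(x):])
```

A pure sinusoid satisfies an exact AR(2) recursion, so the fitted model extrapolates it essentially without error; real azimuth spectra need higher orders and noise handling.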
The method is enhanced by separating pixels containing persistent scatterers from those severely affected by noise, based on a statistical decision test [Danisor2023]. Further, SAR tomography, starting from a multitemporal dataset, estimates the scene's reflectivity profile in dimensions additional to the 2D focusing plane of the SAR images, such as elevation and deformation velocity, enabling a more accurate study of the scene's parameters. Besides the challenge of reflectivity profile estimation, another aspect is the detection of stable targets; the main feature of SAR tomography is the detection of multiple scatterers within the same resolution cell, leading to a 3D representation.

References
[Farsiu2004] Farsiu, S., et al., "Advances and challenges in super-resolution," International Journal of Imaging Systems and Technology, 14(2), 2004
[Lanaras2018] Lanaras, C., et al., "Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network," ISPRS JPRS, 146:305–319, 2018
[Nguyen2021] Nguyen, H. V., et al., "Sentinel-2 sharpening using a single unsupervised convolutional neural network with MTF-based degradation model," IEEE JSTARS, vol. 14, pp. 6882–6896, 2021
[Vasilescu2023a] Vasilescu, V., Datcu, M., and Faur, D., "A CNN-based Sentinel-2 image super-resolution method using multiobjective training," IEEE TGRS, vol. 61, pp. 1–14, 2023
[Vasilescu2023b] Vasilescu, V., Datcu, M., and Faur, D., "Sentinel-2 60-m Band Super-Resolution Using Hybrid CNN-GPR Model," IEEE GRSL, vol. 20, pp. 1–5, 2023
[Anghel2019] Anghel, A., Cacoveanu, R., Moldovan, A.-S., Rommen, B., and Datcu, M., "COBIS: Opportunistic C-Band Bistatic SAR Differential Interferometry," IEEE JSTARS, vol. 12, no. 10, pp. 3980–3998, 2019
[Rosu2020] Rosu, F., Anghel, A., Cacoveanu, R., Rommen, B., and Datcu, M., "Multiaperture Focusing for Spaceborne Transmitter/Ground-Based Receiver Bistatic SAR," IEEE JSTARS, vol. 13, pp. 5823–5832, 2020
[Wald1997] Wald, L., Ranchin, T., and Mangolini, M., "Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images," PERS, 63(6), pp. 691–699, 1997
[Ghozatlou2021] Ghozatlou, O., and Datcu, M., "Hybrid GAN and Spectral Angular Distance for Cloud Removal," IEEE IGARSS 2021, Brussels, Belgium, pp. 2695–2698, 2021
[Dong2016] Dong, C., Loy, C. C., He, K., and Tang, X., "Image Super-Resolution Using Deep Convolutional Networks," IEEE PAMI, vol. 38, no. 2, pp. 295–307, 2016
[Hu2022] Hu, J.-F., et al., "Hyperspectral Image Super-Resolution via Deep Spatiospectral Attention Convolutional Neural Networks," IEEE TNNLS, vol. 33, no. 12, pp. 7251–7265, 2022
[Neagoe2023] Neagoe, I. C., Faur, D., Vaduva, C., and Datcu, M., "Band Reconstruction Using a Modified UNet for Sentinel-2 Images," IEEE JSTARS, vol. 16, pp. 6739–6757, 2023
[Danisor2023] Dănişor, C., Pauciullo, A., Reale, D., and Fornaro, G., "Detection of Distributed Scatterers in Multitemporal SAR Interferometry: A Comparison Between CAESAR and SqueeSAR Detectors," IEEE TGRS, vol. 61, pp. 1–15, 2023
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Magnifying Change: A Deep Learning Approach for Multi-Sensor, Multi-Resolution Satellite Imagery

Authors: Maria Sdraka, Dimitrios Michail, Prof. Ioannis Papoutsis
Affiliations: Orion Lab, National Technical University Of Athens & National Observatory Of Athens, Harokopio University of Athens
Change detection is a widely applied technique in the field of geospatial analysis, supporting critical applications in environmental monitoring, disaster response and urban planning [1], [2], [3]. Its aim is to identify and analyse particular changes in the surface of the Earth over time, accounting for irrelevant variation in data such as atmospheric phenomena (e.g. fog, clouds, dust), sunlight incidence angle, vegetation growth, and geometric distortions [4]. On the other hand, super-resolution techniques target the enhancement of the ground sampling distance of images without loss of information or the insertion of artifacts [5]. While traditional change detection methodologies have proven effective, they often depend on imagery of consistent spatial and spectral resolutions, which is rarely available in practice due to inherent differences in satellite sensors and their outputs. This study introduces a novel deep learning approach specifically designed for change detection using multi-resolution satellite imagery from different sensors, addressing challenges such as resolution and bandwidth disparities, and high magnification factors. We make use of the open-source FLOGA dataset [6] for burn scar mapping, thus the proposed framework takes as input a high-resolution pre-event image from the Sentinel-2 satellites and a low-resolution post-event image from the MODIS satellites, with a significant magnification factor of up to x8. The output is a binary change map indicating areas of change versus no change. The novelty of this approach lies in its tailored architecture, capable of reconciling significant cross-resolution discrepancies between images acquired from different satellite platforms. Unlike existing methods that struggle with the integration of multi-sensor data due to resolution and sensor-specific variations, our approach incorporates specialised feature extraction pathways for high and low resolution data, and a cross-resolution alignment technique. 
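The dual-pathway design can be sketched schematically: each resolution gets its own feature extractor, the low-resolution features are aligned onto the high-resolution grid, and a per-pixel decision yields the binary change map. The mean-centering "features", nearest-neighbour alignment, and fixed threshold below are toy stand-ins for the learned components, not the paper's network.

```python
import numpy as np

def up(x, f):
    """Nearest-neighbour upsampling of a (C, H, W) feature map by factor f."""
    return x.repeat(f, axis=1).repeat(f, axis=2)

def detect_change(pre_hr, post_lr, factor=8, thresh=0.5):
    """Sketch of the cross-resolution pipeline: per-branch feature extraction,
    alignment of the LR branch to the HR grid, per-pixel distance, threshold."""
    f_hr = pre_hr - pre_hr.mean(axis=(1, 2), keepdims=True)     # HR pathway (toy features)
    f_lr = post_lr - post_lr.mean(axis=(1, 2), keepdims=True)   # LR pathway (toy features)
    f_lr_aligned = up(f_lr, factor)                             # cross-resolution alignment
    dist = np.linalg.norm(f_hr - f_lr_aligned, axis=0)          # per-pixel feature distance
    return dist > thresh                                        # binary change map
```

In the actual model the two pathways are learned jointly so that the alignment preserves high-frequency detail despite the x8 magnification gap.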
The network is designed to learn spatial correspondences despite the high magnification factor, maintaining the integrity of high-frequency details during feature alignment. This capability is crucial for applications requiring precise change detection where post-event imagery may be lower in resolution due to budgetary, temporal, or logistical constraints. Extensive experiments on the FLOGA dataset demonstrate that our approach outperforms traditional change detection models and specialised multi-resolution change detection approaches. In particular, we evaluated several approaches, such as multi-resolution input models, knowledge distillation, two-step pipelines comprising distinct super-resolution and change detection modules, as well as self-supervised pretraining. Our proposed model achieves higher accuracy in detecting underlying changes, particularly in scenarios with substantial differences in resolution and sensor characteristics. This performance underscores the model's ability to generalise across data types, setting a new benchmark in change detection research. We hope that our model will facilitate faster and more reliable monitoring of critical alterations in land use, deforestation, urban sprawl, and post-disaster damage assessment, thus supporting timely decision-making and resource allocation.

[1] Jiang, Wandong, et al. "Change detection of multisource remote sensing images: a review." International Journal of Digital Earth 17.1 (2024): 2398051.
[2] Wang, Lukang, et al. "Advances and challenges in deep learning-based change detection for remote sensing images: A review through various learning paradigms." Remote Sensing 16.5 (2024): 804.
[3] Cheng, Guangliang, et al. "Change detection methods for remote sensing in the last decade: A comprehensive review." Remote Sensing 16.13 (2024): 2355.
[4] Khelifi, Lazhar, and Max Mignotte. "Deep learning for change detection in remote sensing images: Comprehensive review and meta-analysis." IEEE Access 8 (2020): 126385-126400.
[5] Sdraka, Maria, et al. "Deep learning for downscaling remote sensing images: Fusion and super-resolution." IEEE Geoscience and Remote Sensing Magazine 10.3 (2022): 202-255.
[6] Sdraka, Maria, et al. "FLOGA: A machine learning ready dataset, a benchmark and a novel deep learning model for burnt area mapping with Sentinel-2." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2024).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Leveraging low-resolution labels and noise-robust learning for very high resolution building mapping

Authors: Anis Amziane, Dr. Marco Chini, Dr. Yu Li, Dr. João Gabriel Vinholi, Dr. Patrick Matgen
Affiliations: Luxembourg Institute Of Science And Technology
Building mapping is a foundational task in remote sensing, underpinning a wide array of applications such as population estimation, urban planning, and disaster management. Accurate building footprint extraction provides critical insights into human activity, resource allocation, and risk assessment. For instance, precise maps of urban structures are vital for effective resource distribution, identifying vulnerabilities in infrastructure, and supporting sustainable development goals. The emergence of very-high-resolution (VHR) imagery, with resolutions of 1 meter or better, has significantly advanced the ability to capture and represent fine-grained details of Earth's surface, offering potential for building mapping tasks and enabling detailed spatial analysis. Despite the growing availability of high-quality VHR imagery, a persistent challenge limits its widespread application: the lack of large-scale pixel-wise annotated data. Annotating VHR images with accurate, pixel-level labels is a labor-intensive and resource-demanding process, requiring domain expertise, significant time, and computational resources. This challenge is further exacerbated when aiming for global-scale deployment, as producing labeled data for vast and diverse regions is often infeasible. Consequently, this bottleneck constrains the development of deep learning models that could fully leverage the potential of VHR data for building mapping. To address these limitations, this study explores the use of low-resolution (LR) labels, which are widely available globally, albeit at a coarser resolution of approximately 10 meters or lower. These LR labels, often derived from global land cover products or other satellite-based datasets, serve as a valuable yet underutilized resource. While their coarse granularity makes them insufficient for direct use in fine-grained mapping tasks, they provide a promising starting point for bridging the resolution gap between LR labels and VHR imagery. 
Harnessing these readily available labels for high-resolution tasks could unlock new possibilities in automated building mapping. In this paper, we propose a novel framework that leverages LR labels to infer accurate building footprints in VHR imagery without relying on pixel-wise annotations at high resolution. Our approach addresses the key challenges of this task through two main innovations:

1. Inferring Pseudo-High-Resolution Labels: using LR labels as a base, we estimate pseudo-high-resolution (pseudo-VHR) labels that align with the granularity of VHR images. This inference process is designed to bridge the resolution gap, effectively transforming coarse labels into fine-grained annotations suitable for VHR data. The pseudo-labels capture building footprints with improved detail, enabling their use in downstream learning tasks.
2. Noise-Robust Learning Strategies: recognizing that the estimated pseudo-VHR labels may contain inherent noise and inaccuracies, we introduce robust learning techniques to refine them. Specifically, our framework incorporates noise-resistant loss functions and model training schemes that mitigate the effects of label noise. These strategies ensure that the learned building footprints maintain high accuracy, even when the training data is noisy or imprecise.

The proposed pipeline employs a cascading learning structure. First, we train two deep convolutional neural networks (CNNs) sequentially to predict pseudo-VHR labels using a specific label super-resolution loss function. The cascading design transfers the weights learned by the first model to the second, enhancing training efficiency and speeding up convergence. Finally, we train a third model to refine the predicted labels using a noise-robust loss function, incorporating information from both low-resolution and high-resolution optical imagery to further enhance accuracy.
We evaluate the effectiveness of our framework on several publicly available VHR building mapping datasets, including the MIT Building dataset and the INRIA aerial image dataset. These datasets encompass a variety of urban landscapes, offering a diverse testing ground for assessing the generalizability of our approach. To benchmark our method, we compare its performance against several state-of-the-art (SOTA) techniques adapted to the building detection problem. The baselines include JoCoR, CoDis, SIGUA, and the Decoupling method, all of which are grounded in robust learning paradigms designed to handle noisy labels:
• JoCoR: This approach trains two neural networks jointly using a co-regularization loss to reduce prediction diversity. Both networks prioritize parameter updates for small-loss samples, which indicate higher label reliability.
• CoDis: In contrast to JoCoR, CoDis trains two networks in a divergence regime. It selects samples with high-discrepancy predictions between the networks, focusing on refining the most uncertain cases.
• SIGUA: This method employs selective gradient updating to enhance robustness against noisy labels.
• Decoupling: To prevent learning incorrect patterns from noisy data, the decoupling method separates the decision of when to update a model from how to update it, relying on prediction disagreement between classifiers.
We also evaluate label-matching loss functions, such as QR and RQ losses, originally proposed for label refinement tasks, alongside these noise-robust learning methods. These loss functions explicitly model the relationship between LR labels and VHR imagery, offering a complementary perspective to our proposed pipeline. Our experimental results demonstrate the efficacy of the proposed framework in generating high-quality building footprint maps using LR labels.
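The "small-loss trick" shared by JoCoR and related co-teaching-style baselines can be sketched in a few lines; the forget rate below is an illustrative value, not one taken from the abstract:

```python
import numpy as np

def small_loss_selection(losses, forget_rate=0.2):
    """Keep the (1 - forget_rate) fraction of samples with the
    smallest loss. Under label noise, small-loss samples are the
    ones whose labels are most likely correct, so only they are
    used to update the networks' parameters."""
    n_keep = int(round(len(losses) * (1.0 - forget_rate)))
    return np.argsort(losses)[:n_keep]

losses = np.array([0.1, 2.3, 0.4, 1.9, 0.2])
keep = small_loss_selection(losses, forget_rate=0.4)
# The three smallest losses sit at indices 0, 4 and 2.
assert sorted(keep.tolist()) == [0, 2, 4]
```

In practice the forget rate is usually ramped up over the first epochs, since early in training the loss ranking is not yet informative.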
In terms of key metrics, the cascading architecture and noise-robust learning strategies significantly improve mapping accuracy, outperforming SOTA methods. Moreover, the framework exhibits strong generalization across diverse datasets, highlighting its potential for real-world deployment in remote sensing applications. In conclusion, this work addresses a critical bottleneck in VHR building mapping by leveraging widely available LR labels to infer fine-grained building footprints. By combining pseudo-labeling with noise-robust learning, our framework bridges the resolution gap and mitigates the challenges associated with noisy annotations. The proposed approach not only advances the SOTA in building mapping (an average precision gain of 15%) but also opens new avenues for utilizing coarse datasets to tackle high-resolution remote sensing tasks at scale.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Guided Super-Resolution for Biomass Upsampling

Authors: Kaan Karaman, Yuchang Jiang, Dr. Damien Robert, Dr. Vivien Sainte Fare Garnot, Prof. Maria Joao Santos, Prof Jan Dirk Wegner
Affiliations: EcoVision Lab, Department of Mathematical Modeling and Machine Learning, University of Zurich, Geography, University of Zurich
Accurate Above-Ground Biomass (AGB) mapping at both large scale and high spatio-temporal resolution is essential for applications ranging from climate modeling to biodiversity assessment and sustainable supply chain monitoring. Traditional non-invasive fine-grained AGB mapping relies on costly airborne laser scanning acquisition campaigns, usually limited to regional scales. Meanwhile, projects such as the ESA Climate Change Initiative (CCI) leverage diverse spaceborne sensors to produce global biomass estimates at a relatively low 100-meter spatial resolution. This trade-off between resolution and coverage has significant implications for ecological monitoring and policy-making, since the performance of these and similar downstream tasks depends heavily on both of these properties of the data. To enable high-resolution (HR) mapping globally, a common approach is to estimate a model for AGB from HR satellite observations such as ESA Sentinel-1 & 2 10-meter resolution images. We instead propose a novel way to address HR AGB prediction by leveraging both HR satellite observations and existing low-resolution (LR) biomass products. We cast this problem as Guided Super-Resolution (GSR), aiming to upsample an LR biomass map (the source) using an auxiliary HR co-registered satellite image (the guide). We benchmark several existing GSR techniques against unguided upsampling methods and direct regression approaches on the BioMassters dataset. Our results demonstrate that Multi-Scale Guidance (MSG), a simple yet effective deep-learning-based GSR technique, consistently outperforms direct regression from satellite imagery. MSG achieves superior performance in both regression metrics (-7.8 t/px RMSE, -5.7 t/px MAE) and perceptual quality scores (+2.0 dB PSNR, +0.07 SSIM) without introducing significant computational overhead.
Additionally, GSR methods show higher accuracy in regions with higher biomass values, underscoring their potential for ecological applications in areas of critical importance. Another finding from our experiments is that, unlike in the RGB+Depth setting for which they were originally developed, our best-performing AGB GSR approaches are those that most preserve the guide image texture. We validate this observation through Fourier analysis, examining the frequency components in the predictions of the benchmark models. This difference in texture handling between tasks highlights the need for customized GSR models for biomass estimation. Our findings not only establish the utility of GSR for AGB mapping but also open new avenues for designing models that balance texture preservation and predictive accuracy. This study lays the foundation for scalable and precise HR biomass mapping, contributing to a better understanding of global biomass dynamics and their future implications.
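The Fourier analysis is not specified in detail in the abstract. One simple way to quantify how much high-frequency texture a prediction retains is the fraction of spectral energy above a radial frequency cutoff; the cutoff value here is an arbitrary illustrative choice:

```python
import numpy as np

def high_freq_energy_ratio(img, cutoff=0.25):
    """Fraction of the 2-D power spectrum lying above a normalised
    radial frequency cutoff. Higher values indicate that more fine
    texture (e.g. copied from the guide image) is present."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt(((yy - h // 2) / h) ** 2 + ((xx - w // 2) / w) ** 2)
    return power[r > cutoff].sum() / power.sum()

rng = np.random.default_rng(0)
smooth = np.ones((64, 64))                 # textureless biomass map
textured = rng.standard_normal((64, 64))   # white noise: flat spectrum
assert high_freq_energy_ratio(textured) > high_freq_energy_ratio(smooth)
```

Comparing this ratio between predictions, guide images, and reference maps gives a scalar summary of texture preservation per model.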
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Enhancing Landsat-8 Temperature Downscaling in Subarctic Regions Through Tree Shadow Integration

Authors: Jérôme Pigeon, Foutse Khomh, Pooneh Maghoul
Affiliations: Polytechnique Montréal
The Arctic is warming at a rate significantly faster than the global average, making detailed monitoring of land surface temperature (LST) in this region critically important. Despite advancements in satellite imaging, acquiring high-resolution thermal data remains a significant challenge. High-resolution LST data is not only crucial as a proxy for climate change but also serves as an indicator of local thermal dynamics, which directly impact permafrost stability and regional ecosystems. Currently, the MODIS and Landsat-8 satellites are the primary public sources of satellite thermal data, offering spatial resolutions of 1 km and 100 m, respectively. While adequate for global-scale studies, these resolutions fall short for engineering and localized applications, where critical thermal dynamics are aggregated into a single pixel value. This aggregation obscures fine-scale temperature variations within each pixel. Machine learning models can leverage external knowledge about a location to approximate true sub-pixel thermal dynamics from coarse thermal images. By incorporating key variables derived from higher-resolution reflectance data, such as the Normalized Difference Vegetation Index (NDVI), Urban Index, and Snow Cover, these models can enhance the resolution of thermal distributions in lower-resolution images. However, satellite reflectance data is inherently two-dimensional, omitting critical information about vertical features like tree height, which significantly influence thermal dynamics. This limitation is particularly significant in the subarctic, where vegetation cover is sparse and highly heterogeneous, creating localized thermal variability due to the cooling effect of tree shadows. With Landsat-8, however, this information is diluted into pixel values due to the large area they represent, effectively erasing the spatial context and influence of these thermal variations.
Recent advancements in deep learning and remote sensing have opened new avenues for addressing these challenges. Among these breakthroughs is the High Resolution Canopy Height Maps dataset (CHM), which offers tree height data with a 1-meter spatial and vertical resolution, covering tree heights ranging from 1 to 25 meters. This study aims to improve the accuracy and spatial resolution of existing LST downscaling methods by integrating tree shadow effects with reflectance data from Landsat-8 and Sentinel-2, as well as elevation data. Tree shadows will be derived using tree height information from the CHM dataset, elevation data, and the sun’s position during Landsat-8 thermal acquisitions. Additionally, the study will evaluate the significance of incorporating tree height data in downscaling thermal imagery in subarctic regions.
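As a rough illustration of the geometry involved: on flat ground, the shadow cast by a tree follows L = h / tan(elevation). The study's actual derivation additionally uses the DEM and the sun's azimuth, which this flat-ground sketch omits:

```python
import math

def shadow_length(tree_height_m, solar_elevation_deg):
    """Horizontal shadow length on flat ground: L = h / tan(elevation).
    On sloped terrain the CHM-derived height would be combined with
    the DEM and sun azimuth, omitted in this sketch."""
    return tree_height_m / math.tan(math.radians(solar_elevation_deg))

# A 10 m tree under a low subarctic sun (15 deg elevation) casts a
# shadow spanning several Sentinel-2 10 m pixels.
length = shadow_length(10.0, 15.0)
assert length > 30.0
```

The low solar elevation typical of subarctic latitudes is exactly why shadows occupy a non-negligible fraction of each coarse thermal pixel.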
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Benchmarking Deep Learning Super-resolution Techniques for Digital Elevation Models in Mountainous Regions

Authors: Nazanin Bagherinejad, Prof. Dr. Antara Dasgupta, Uni.-Prof. Dr. sc. habil. Julia Kowalski
Affiliations: Chair of Methods for Model-based Development in Computational Engineering, RWTH Aachen University, Institute of Hydraulic Engineering and Water Resources Management, RWTH Aachen University
Digital Elevation Models (DEMs) are 3-dimensional (3D) representations of the Earth's bare terrain, encapsulating critical information like elevation, slope, and aspect of the bare ground over a 2D grid. These models are the result of a computational pipeline that transforms raw satellite or aerial data into a standardized, user-friendly format. Unlike their raw satellite or aerial data sources, DEMs are easy to interpret and do not require any specialized knowledge. They serve as an indispensable input to numerous applications across a wide range of fields, from disaster management (e.g., geohazard simulations and predictions) to hydrology (e.g., stream flow and flood inundation forecasting). Decision support systems rely heavily on the accuracy and resolution of the underlying data, as incomplete or coarse data introduces uncertainties that propagate through the system and lead to unreliable results. Low-resolution DEMs, for instance, cannot capture micro-terrain features and therefore result in higher uncertainty in simulation outcomes. High-resolution DEMs, however, provide detailed topographic information that can be critical, especially in mountainous areas with abrupt and steep slopes. Thus, high-resolution DEMs play an essential role in advancing research and enhancing decision-support systems across many fields and applications. Recent advancements in deep learning have expanded the possibilities in the field of super-resolution. Deep architectures ranging from Convolutional Neural Networks (e.g., the Super-Resolution Convolutional Neural Network (SRCNN)) to Generative Adversarial Networks (e.g., the Generative Adversarial Network for Image Super-Resolution (SRGAN)) and Attention-based models have demonstrated success in enhancing the resolution of images in various domains. These models showcase unique strengths and weaknesses while delivering competitive results.
In addition, training configurations, particularly the choice of loss function, can significantly affect each model's performance and outcome. Nevertheless, it is essential to acknowledge the fundamental differences between DEMs and standard RGB (Red, Green, Blue) images. DEMs typically consist of a single channel that captures quantitative elevation values, making them analytical tools more than graphical representations. This scientific nature of DEMs highlights the importance of accuracy and precision over visual appeal, accounting for the emergence of specialized super-resolution models trained solely on DEMs. Consequently, the necessity arises for systematic evaluations to identify the most effective super-resolution methods for DEMs. The present study aims to address this gap by performing a comparative analysis of state-of-the-art methods and investigating their strengths and weaknesses. This research utilizes the DHM25 dataset from the Federal Office of Topography swisstopo. DHM25 is the digital height model of Switzerland, representing the topographic complexities of the Swiss Alps, which offer meaningful challenges for this analysis. Patches of 128 × 128 pixels were extracted from the matrix model with a 25-meter grid and subsampled by scale factors of 2 and 4 to create the low-resolution counterparts. Furthermore, as a second variation, slight Gaussian noise was added to the low-resolution patches in order to imitate real-world datasets. This allows us to compare the robustness of the different methods. Deep learning models from three categories (CNN-based, GAN-based, and Attention-based) were trained using four variations of the DHM25 dataset, differing in scale factor and noise condition: (1) scale factor 2 with noise, (2) scale factor 2 without noise, (3) scale factor 4 with noise, and (4) scale factor 4 without noise.
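The abstract does not state which subsampling operator was used; block averaging is one common choice for building the low-resolution training inputs, sketched here with the optional Gaussian noise variation:

```python
import numpy as np

def make_lr_patch(dem, scale=2, noise_std=0.0, seed=0):
    """Create a low-resolution training input from a HR DEM patch by
    block-averaging with the given scale factor, optionally adding
    Gaussian noise (in elevation units, metres)."""
    h, w = dem.shape
    lr = dem[:h - h % scale, :w - w % scale]
    lr = lr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    if noise_std > 0:
        rng = np.random.default_rng(seed)
        lr = lr + rng.normal(0.0, noise_std, lr.shape)
    return lr

hr = np.arange(16.0).reshape(4, 4)   # toy 4x4 elevation patch
lr = make_lr_patch(hr, scale=2)
assert lr.shape == (2, 2)
assert abs(lr[0, 0] - 2.5) < 1e-12   # mean of 0, 1, 4, 5
```

The same function with `scale=4` and `noise_std > 0` covers the other three dataset variations described above.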
The performance of the models was assessed and analyzed using multiple metrics, including Mean Squared Error, Peak Signal-to-Noise Ratio, and Structural Similarity Index Measure. This comparison offers insights into various DEM super-resolution approaches and their performance, reliability, and robustness towards different scale factors/noise conditions. Results of this work will assist researchers and practitioners in selecting the most suitable approach to their DEM Super-Resolution task. Our results will be made publicly available as curated datasets with corresponding model implementations, helping move the field forward by enabling further downstream applications.
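Of the reported metrics, MSE and PSNR are straightforward to compute for single-channel DEMs; note that the `data_range` used for PSNR is a convention choice (the elevation span of the reference, rather than the fixed 255 of 8-bit images):

```python
import numpy as np

def mse(pred, ref):
    """Mean squared error in squared elevation units (m^2)."""
    return np.mean((pred - ref) ** 2)

def psnr(pred, ref, data_range):
    """Peak Signal-to-Noise Ratio in dB; `data_range` is the
    elevation span of the reference patch."""
    return 10.0 * np.log10(data_range ** 2 / mse(pred, ref))

ref = np.array([[0.0, 100.0], [200.0, 300.0]])
pred = ref + 3.0                       # constant 3 m elevation error
assert abs(mse(pred, ref) - 9.0) < 1e-9
assert abs(psnr(pred, ref, data_range=300.0) - 40.0) < 1e-6
```

SSIM additionally compares local luminance, contrast, and structure and is best taken from an established implementation rather than re-derived here.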
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Trustworthy Resolution Enhancement: Non-generative Super-resolution of Sentinel-2

Authors: Christian Ayala, Rubén Sesma, Mikel Galar
Affiliations: Tracasa Instrumental S.L., Public University of Navarre
Earth observation data is becoming increasingly accessible and affordable, largely due to the Copernicus programme and its Sentinel missions. Sentinel-2, for instance, provides global multi-spectral imagery every five days at the equator, freely available for a wide range of applications. Its RGB and Near-Infrared (RGBN) bands offer a spatial resolution of 10 meters, which is sufficient for many tasks but proves inadequate for others. For this reason, enhancing the spatial resolution of these images without incurring additional costs would significantly benefit subsequent analyses. This study addresses the challenge of increasing the spatial resolution of Sentinel-2's 10-meter RGBN bands to 2.5 meters, a process known as single-image super-resolution. The proposed solution leverages a reference satellite with spectral bands highly similar to those of Sentinel-2 but with higher spatial resolution. This enables the creation of paired images at both the source and target resolutions, which are then used to train a state-of-the-art Convolutional Neural Network (CNN) capable of recovering details absent in the original bands. CNNs were chosen over Generative Adversarial Networks (GANs) to avoid introducing synthetic artifacts or hallucinations that could negatively impact downstream analyses. Building on our previous work, where we achieved a fourfold resolution enhancement using the Enhanced Deep Residual Network (EDSR) architecture, this study introduces significant advancements. In our earlier approach, we utilized PlanetScope imagery with a native resolution of 3.125 meters, resampled to 2.5 meters, as ground truth. In this study, we shift to Geosat imagery, which provides resolutions as fine as 0.75 meters. This allows us to generate ground truth directly at 2.5 meters without the need for resampling, thereby enhancing the accuracy and reliability of the training data.
The quality of the dataset is critical when developing super-resolution models, so we carefully curated the dataset generation process. As part of this, we introduced a novel harmonization phase comprising two key tasks: spatial collocation improvement and radiometric matching. For spatial alignment, we employed a deep learning-based optical flow estimation model to calculate the necessary pixel-level translations between the low- and high-resolution images. This ensures that the low- and high-resolution image pairs are perfectly co-registered. For radiometric harmonization, we applied histogram matching to each spectral band individually. Following this, we computed the average spectral angle distance as a quality metric. Patches with high radiometric discrepancies were filtered out to ensure consistency across the dataset. Finally, given advancements in the state of the art since our earlier work, we opted to use a Second-order Attention Network (SAN) to further improve feature representation and correlation learning. The SAN architecture offers superior capability in capturing intricate feature relationships, enabling more accurate detail recovery during the super-resolution process. An exhaustive experimental study was conducted to validate our proposal, including a comparison with our earlier approach. The results demonstrate that the proposed methodology outperforms existing alternatives, highlighting the feasibility of further enhancing the resolution of Sentinel-2 images by using another satellite as a reference for training a CNN. Additionally, we show that the spectral radiometry of the native Sentinel-2 bands is preserved during the super-resolution process. This ensures that the enhanced images can be seamlessly used for subsequent analyses as if they were originally acquired by Sentinel-2. 
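The harmonization steps can be illustrated with minimal NumPy versions of per-band histogram matching (quantile mapping, assuming equally sized source and reference patches) and the spectral angle used for filtering; the exact implementations in the study may differ:

```python
import numpy as np

def match_histogram(source, reference):
    """Map each source pixel to the reference value of the same rank
    (quantile mapping), matching the band histograms. Assumes source
    and reference patches have the same number of pixels."""
    s_idx = np.argsort(source.ravel())
    matched = np.empty_like(source.ravel())
    matched[s_idx] = np.sort(reference.ravel())
    return matched.reshape(source.shape)

def spectral_angle(a, b, eps=1e-12):
    """Angle (radians) between two per-pixel spectra; averaged over a
    patch it can serve as the radiometric-discrepancy metric used to
    filter inconsistent image pairs."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

src = np.array([[0.0, 1.0], [2.0, 3.0]])
ref = np.array([[10.0, 20.0], [30.0, 40.0]])
out = match_histogram(src, ref)
assert np.allclose(np.sort(out.ravel()), np.sort(ref.ravel()))
assert spectral_angle(np.array([1.0, 0.0]), np.array([1.0, 0.0])) < 1e-5
```

Because the spectral angle ignores overall brightness, it isolates shape differences between spectra, complementing the intensity alignment done by histogram matching.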
The findings of this study pave the way for a wide range of applications where the native spatial resolution of Sentinel-2 imagery falls short, offering an effective solution to extend its usability in more demanding scenarios.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Super-resolution of all Sentinel-2 bands to 10 meters using parameter-free attention and cross-correlation embeddings

Authors: Julio Cesar Contreras Huerta, Cesar Luis Aybar Camacho, Luis Gómez-Chova, Simon Donike, Freddie Kalaitzis
Affiliations: Image Processing Laboratory, University of Valencia, Oxford Applied and Theoretical ML Group, University of Oxford
The Copernicus Sentinel-2 mission provides multispectral imagery with high revisit frequency and global coverage, acquiring 13 spectral bands at spatial resolutions of 10, 20, and 60 meters. While the 20-meter and 60-meter bands are critical for applications like water stress assessment, vegetation monitoring, and atmospheric correction, their coarser resolution limits their utility for tasks requiring fine-scale details, such as mapping heterogeneous vegetation or narrow water bodies. To address this issue, reference super-resolution (refSR) methods can be utilized to achieve a uniform resolution of 10 meters across all spectral bands. These methods are based on the assumption that the correlation between spectral bands remains invariant across different spatial resolutions. In the case of Sentinel-2, the refSR process at 10 meters involves training models to upscale images from 40 meters to 20 meters (i.e., scale factor x2) and from 360 meters to 60 meters (i.e., scale factor x6). Once trained, these refSR models can effectively upsample the 20-meter and 60-meter bands to a 10-meter resolution. In this study, we introduce two innovations. First, we present a comprehensive global Sentinel-2 L2A and L1C dataset comprising 100,000 samples, stratified to capture a wide range of inter-band correlations across 10-meter, 20-meter, and 60-meter bands. This dataset includes scenarios with weak inter-band correlations, offering a more realistic basis for the training and evaluation of refSR models. Second, we propose a novel convolutional neural network (CNN) that incorporates a parameter-free attention mechanism, designed to emphasize critical land covers with fine textures and land cover boundaries. Additionally, this model learns an implicit representation by aligning high-frequency details with contrasting inter-band correlation vectors. 
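The abstract does not define its parameter-free attention mechanism; SimAM is one published parameter-free attention and serves as a plausible single-feature-map sketch (the regularizer `lam` below is SimAM's usual default, an assumption here):

```python
import numpy as np

def simam(x, lam=1e-4):
    """SimAM-style parameter-free attention for one (H, W) feature
    map: each activation is reweighted by a sigmoid of an energy term
    computed from the map's mean and variance alone, so the module
    adds no learnable parameters."""
    n = x.size - 1
    mu = x.mean()
    var = ((x - mu) ** 2).sum() / n
    energy = ((x - mu) ** 2) / (4.0 * (var + lam)) + 0.5
    return x * (1.0 / (1.0 + np.exp(-energy)))

feat = np.array([[0.0, 0.1], [0.1, 5.0]])  # one distinctive activation
out = simam(feat)
# The outlier activation receives the largest relative boost,
# which is how such a module emphasises fine textures and boundaries.
gain = out / np.maximum(feat, 1e-8)
assert gain[1, 1] == gain.max()
```

In a CNN this would be applied per channel across the spatial dimensions; the single-map version above keeps the sketch self-contained.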
Our preliminary results indicate that this approach consistently outperforms existing refSR architectures in quantitative assessments, delivering results that are both more accurate and visually plausible.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Investigating Generalized Strategy for Single-Image Satellite Super Resolution Using Deep Learning

Authors: Sandeep Kumar Jangir, Dr. Reza Bahmanyar
Affiliations: German Aerospace Center (DLR)
The effectiveness of remote sensing applications is critically influenced by image quality, which can be constrained by factors such as resolution, environmental conditions, and sensor-specific artifacts. Super-resolution (SR) is a fundamental computer vision task and an ill-posed inverse problem focused on enhancing the spatial resolution of images. It encompasses traditional approaches such as interpolation techniques, wavelet transformations, and sparse representation, as well as modern deep learning-based methods. SR methods typically rely on paired low- and high-resolution (LR and HR) images, where LR images are generated through bicubic downsampling of HR images. While effective on the data distribution they are trained on, these methods struggle to generalize to datasets outside the training distribution due to domain gaps between different sensors and ground sample distances (GSDs), necessitating retraining for new data [2]. In this paper, we extend our previous work [1] by demonstrating the effectiveness of super-resolution on Sentinel and similar low-resolution satellite images from RGB and multispectral sensors. The uniqueness of our approach lies in the fact that it was trained exclusively on high-resolution aerial and satellite images with GSD under 1.5 meters, yet still performs well on low-resolution images. This is made possible by the nature of the method itself, which was designed for generalized image enhancement across a variety of data sources. We use U2D2 [1], our prior framework for generalized enhancement, which was initially developed to enhance high-resolution aerial and satellite images with GSDs below 1.5 meters. The U2D2 framework utilizes a modular approach, where a deep learning-based upsampler (DLU) first performs SR and mitigates common degradations such as noise, blur, compression artifacts, and aliasing by simulating LR images during training. 
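A minimal sketch of the kind of LR simulation described, with assumed operators (Gaussian blur, subsampling, additive noise); the actual U2D2 degradation model is richer, also covering compression artifacts and aliasing:

```python
import numpy as np

def degrade(hr, scale=4, blur_sigma=1.0, noise_std=0.01, seed=0):
    """Simulate a LR training input from a HR image: separable
    Gaussian blur, subsampling by `scale`, and additive noise."""
    radius = int(3 * blur_sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * blur_sigma ** 2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, hr)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    lr = blurred[::scale, ::scale]
    rng = np.random.default_rng(seed)
    return lr + rng.normal(0.0, noise_std, lr.shape)

hr = np.zeros((16, 16))
hr[8, 8] = 1.0                               # point source
lr = degrade(hr, scale=4, noise_std=0.0)
assert lr.shape == (4, 4)
assert lr.sum() < 1.0                        # energy blurred, then subsampled
```

Training the upsampler on such synthetic (LR, HR) pairs is what lets a single model generalize across sensors without per-sensor retraining.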
The upsampled images are then processed by a diffusion-based refinement module, which sharpens the image and recovers details from the original LR input. This pipeline not only produces visually improved images, but also enhances downstream applications like object detection and building/road segmentation. Our results demonstrate that the framework works effectively across a range of sensors and GSDs (from 10 cm to 1.5 m), and notably, it can super-resolve Sentinel-2 and similar low-resolution satellite images by up to 4X, from 10 m GSD to 2.5 m GSD, without requiring retraining for new sensors or GSDs. In the presentation, we will showcase the SR results for various low-resolution satellite images and compare them to other state-of-the-art methods. Our results demonstrate that our approach produces high-quality, natural-looking super-resolved images, offering substantial improvements in visual quality and performance for remote sensing applications. Our approach demonstrates how a modular framework can provide a scalable and robust solution for the enhancement of low-resolution satellite imagery, extending its applicability to RGB and multispectral bands. Such an approach has significant implications for remote sensing applications, enabling more accurate environmental monitoring, urban planning, and agricultural analysis, especially in scenarios constrained by limited resolution. By addressing domain gaps and offering a generalized solution, this work paves the way for broader adoption of SR techniques in remote sensing workflows. [1] Sandeep Kumar Jangir, Reza Bahmanyar, "U2D2: A Blind Super Resolution and Enhancement Framework for Aerial and Satellite Images," ISPRS Journal of Photogrammetry and Remote Sensing, 2024 (submitted). [2] Peng Wang, Bahar Bayram, Elif Sertel, "A comprehensive review on deep learning based remote sensing image super-resolution methods," Earth-Science Reviews, vol. 232, p. 104110, 2022.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Evaluation of super-resolution results using a knowledge-based spectral categorisation system

Authors: Felix Kröber, Dirk Tiede, Martin Sudmanns, Hannah Augustin, Andrea Baraldi
Affiliations: University of Salzburg, Department of Geoinformatics, Forschungszentrum Jülich, Institute of Bio- and Geosciences, IBG-2: Plant Sciences, Spatial Services GmbH
Motivation
Super-resolution (SR) models are critical for enhancing the spatial resolution of satellite imagery, enabling the generation of very-high-resolution data in a cost-efficient manner. However, the value of SR data strongly depends on its reliability or trustworthiness. SR evaluation methods should assess the spectral fidelity of SR outputs and provide means to interpret possible systematic biases of SR outputs. Currently, evaluation of SR models [1,2] often relies on accuracy metrics such as the Root Mean Square Error or Structural Similarity Index, which primarily focus on intensity differences by measuring Euclidean distances between the spectral signatures of reference data and SR outputs. However, relevant spectral inconsistencies are not necessarily discoverable by employing aggregative distance metrics. Additionally, accuracy figures obtained this way offer no possibility of describing the encountered biases semantically. This limits the formulation of model applicability as well as an in-depth analysis and targeted mitigation of errors. The problems described also apply to the use of more recent perceptual metrics, such as the Learned Perceptual Image Patch Similarity [3]. To tackle this issue, we assess the incorporation of a physical, knowledge-based spectral categorization system to facilitate not only the detection but also the meaningful semantic characterisation of spectral SR errors. By validating SR outputs this way alongside traditional metrics, our study aims to provide a framework for a more nuanced understanding of SR outputs, reinforcing the trustworthiness of SR products for subsequent usage in downstream applications such as land use classification.
Data & Methods
The SR data has been produced within a research project focusing on the agricultural domain [4]. Specifically, a two-part model adapting the established ESRGAN+ [5] and EDSR [6] architectures was trained on pairs of Sentinel-2 (S-2) and PlanetScope imagery acquired over Austria.
The training and validation sets were sampled in a spatially disjoint manner from the test set. The latter comprises 1235 test set tiles, each covering an area of 1.28 × 1.28 km², for which both S-2 and PlanetScope imagery are available to be compared to the SR outputs. For the purposes of evaluation, all data is resampled to the SR resolution (2.5 m). As a basis for the knowledge-based evaluation of the SR results, the Satellite Image Automatic Mapper (SIAM) system [7] is used. SIAM is a fully automated, hyperparameter-free decision tree for categorizing multi-spectral data. The model-based expert system is capable of employing any multispectral image data that is radiometrically calibrated to at least Top of Atmosphere (TOA) reflectance. Operating as a pointwise operator, it maps reflectances into a discrete and finite vocabulary of semi-symbolic spectral categories. The extensive multidimensional continuous data space is reduced to essential information components that can be represented as an 8-bit discrete output raster. The categories of this raster are not immediate land use or land cover classes, as these high-level semantic concepts cannot be derived in an unambiguous way on a per-pixel basis. SIAM categories instead represent an intermediate level of semantic enrichment that can be derived more directly from the spectral information. Employing SIAM in the context of SR output evaluation is based on two considerations: 1. Given its physical and semantic nature, SIAM offers enhanced interpretability of spectral signatures. Beyond detecting the intensity of changes in the spectral signature between the original product and the SR result, SIAM allows the changes to be assessed in terms of their type and quality (e.g. changes from vegetation-like signatures to soil-like signatures). This makes it easier to identify systematic model biases. 2.
The SIAM-inherent discretization of continuous reflectances accounts for the fact that not every distance in the multivariate reflectance vector space is equally important. Starting from a given spectral signature, a multivariate displacement vector of a given metric distance x can have different implications depending on its direction. For the same x, the resulting spectral signature can either a) still characterise the same land use/cover type (e.g. variability of the vegetation signature in the mid-range infrared depending on water scarcity), b) reflect a different land use/cover type or c) transform the given signature towards a physically implausible signature. These three possible changes should be given different significance in the evaluation of SR results despite the same distance-metric change, as subsequent downstream models, e.g. for land use classification, also give different weight to these types of changes, either explicitly (knowledge-based models) or implicitly (data-based models). SIAM has several sensor modes, allowing it to be applied to a range of multispectral input images despite different available bands. For the current case, SIAM is run with 6-band inputs (R-G-B-NIR-SWIR1-SWIR2) for the S-2 and SR products, and additionally with 4-band inputs (R-G-B-NIR) for PlanetScope. The output granularities depend on the chosen sensor mode, except for the outputs with 33 categories, which can be calculated across all sensors. These harmonized 33 categories are thus used as the primary basis for all following evaluations.
Results & Discussion
The spectral categorization demonstrates that most SR tiles retained spectral consistency, with below 40% of pixels exhibiting any spectral changes. For a more detailed consideration of the comparisons of the SR categorization with the reference data, a breakdown of the frequencies by individual spectral categories is presented.
Owing to the selection of tile locations over Austria with a focus on agricultural areas, approximately 80% of all pixels are categorized as vegetation-like, both in the original data and in the SR outputs. It is evident that a large proportion of the category transitions representing changes occur within supersets of categories (e.g. within vegetation-like or within bare soil-like categories). Among the changes across supersets are primarily transitions from bare soil categories in the original data to weak vegetation in the SR outputs. The complementary change, i.e., pixels categorized as weak vegetation in the original data being categorized as bare soil in the SR outputs, also occurs, albeit with lower frequency. Other severe changes involve re-categorizations of original dark soil pixels as water or shadow-like pixels in the SR outputs. The complementary process is much less pronounced here. The proportion of spectral signatures categorized as unknown according to the knowledge-based SIAM framework averages 0.17% for PlanetScope, 0.37% for S-2, and 1.43% for the SR outputs. The SR outputs thus have an increased proportion of signatures that cannot be interpreted physically. Comparing the spectral categorization of SR outputs to S-2 and PlanetScope categorizations individually, a closer alignment with PlanetScope’s categorization is evident. Quantitatively, an average of almost 40% of pixels is categorized differently when comparing SR outputs to S-2 data. For the pair of SR outputs and PlanetScope data, the figure amounts to 34%. This observation aligns with the qualitative impression resulting from a visual inspection of the SR results, plotting them as RGB true-color composites and CIR false-color composites. Here, the SR data also seem to reconstruct the spectral patterns of PlanetScope more closely than those of S-2.
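A per-pixel category transition matrix like the one underlying this breakdown can be computed directly from two category rasters; the toy three-category scheme below is illustrative only, not SIAM's actual 33-category vocabulary:

```python
import numpy as np

def transition_matrix(ref_cat, sr_cat, n_cat):
    """Cross-tabulate per-pixel spectral categories of the reference
    product against the SR output. Off-diagonal mass reveals which
    semantic transitions (e.g. bare soil to weak vegetation) the SR
    model introduces."""
    idx = ref_cat.ravel() * n_cat + sr_cat.ravel()
    return np.bincount(idx, minlength=n_cat * n_cat).reshape(n_cat, n_cat)

# Toy example: 0 = vegetation, 1 = soil, 2 = water.
ref = np.array([[0, 0], [1, 2]])
sr = np.array([[0, 0], [0, 2]])      # one soil pixel flips to vegetation
m = transition_matrix(ref, sr, n_cat=3)
changed = 1.0 - np.trace(m) / m.sum()
assert m[1, 0] == 1                  # the soil -> vegetation flip
assert abs(changed - 0.25) < 1e-12   # 25 % of pixels changed category
```

Normalising the matrix row-wise gives, per original category, the distribution of categories the SR model maps it to, which is the kind of semantic bias description the traditional distance metrics cannot provide.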
Conclusion
This study underscored the potential of spectral categorisation assessment as a robust complement to traditional metrics, facilitating deeper insights into SR model performance. Quantitative insights into spectral fidelity were provided, and the semantic description of encountered changes allowed systematic biases of the SR model to be uncovered.

References
[1] D. C. Lepcha, B. Goyal, A. Dogra, and V. Goyal, 'Image super-resolution: A comprehensive review, recent trends, challenges and applications', Information Fusion, vol. 91, pp. 230–260, Mar. 2023, doi: 10.1016/j.inffus.2022.10.007.
[2] Z. Wang, J. Chen, and S. C. H. Hoi, 'Deep Learning for Image Super-Resolution: A Survey', IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 10, pp. 3365–3387, Oct. 2021, doi: 10.1109/TPAMI.2020.2982166.
[3] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, 'The Unreasonable Effectiveness of Deep Features as a Perceptual Metric', Apr. 10, 2018, arXiv:1801.03924. doi: 10.48550/arXiv.1801.03924.
[4] FFG, 'SMAIL – Super-resolution-based Monitoring through AI for small Land parcels'. Accessed: Nov. 24, 2024. [Online]. Available: https://projekte.ffg.at/projekt/4351017
[5] N. C. Rakotonirina and A. Rasoanaivo, 'ESRGAN+: Further Improving Enhanced Super-Resolution Generative Adversarial Network', in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2020, pp. 3637–3641. doi: 10.1109/ICASSP40776.2020.9054071.
[6] C. Lanaras, J. Bioucas-Dias, S. Galliani, E. Baltsavias, and K. Schindler, 'Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network', ISPRS Journal of Photogrammetry and Remote Sensing, vol. 146, pp. 305–319, Dec. 2018, doi: 10.1016/j.isprsjprs.2018.09.018.
[7] A. Baraldi, M. L. Humber, D. Tiede, and S. Lang, 'GEO-CEOS stage 4 validation of the Satellite Image Automatic Mapper lightweight computer program for ESA Earth observation level 2 product generation – Part 2: Validation', Cogent Geoscience, vol. 4, no. 1, p. 1467254, Jan. 2018, doi: 10.1080/23312041.2018.1467254.
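The category-change statistics reported above (the fraction of re-categorized pixels and the frequencies of individual category transitions) can be sketched as follows. This is a minimal illustration only: the category codes and toy rasters are invented for the example and are not SIAM outputs.

```python
import numpy as np
from collections import Counter

def category_transitions(ref_cat, sr_cat):
    """Compare two categorical rasters (e.g. 33-category maps) pixel by pixel.

    Returns the fraction of pixels whose category changed and a frequency
    table of (reference -> SR) transitions. Both inputs must be integer
    arrays of identical shape.
    """
    ref = np.asarray(ref_cat).ravel()
    sr = np.asarray(sr_cat).ravel()
    changed = ref != sr
    change_fraction = changed.mean()
    transitions = Counter(zip(ref[changed].tolist(), sr[changed].tolist()))
    return change_fraction, transitions

# Toy example with made-up category codes (1 = vegetation-like,
# 2 = bare soil-like, 3 = water/shadow-like):
ref = np.array([[1, 1, 2], [2, 3, 1]])
sr = np.array([[1, 2, 2], [1, 3, 1]])
frac, trans = category_transitions(ref, sr)
print(frac)   # fraction of re-categorized pixels
print(trans)  # which transitions occur, and how often
```

Grouping the transition counts by category supersets (vegetation-like, bare soil-like, ...) would then distinguish changes within a superset from the more severe changes across supersets.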

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Deep Learning Techniques to Enhance Spatial Resolution of Thermal Imagery for Fire and Cloud Detection

Authors: Valentina Kanaki, Stella Girtsou, Aggelos Georgakis, Vassilia Karathanassi, Dr. Charalampos Kontoes
Affiliations: National Observatory of Athens (NOA), Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS), BEYOND Center of Earth Observation Research and Satellite Remote Sensing, National Technical University of Athens (NTUA), School of Rural and Surveying Engineering, Remote Sensing Laboratory
In recent years, the field of Remote Sensing and satellite imagery has gained significant attention. Among the critical applications of Remote Sensing, the timely detection and management of wildfires have become increasingly important due to the growing environmental and social impacts. This effort is closely linked to the 2030 Agenda and the Sustainable Development Goals (SDGs), particularly Goal 13 for climate action and Goal 15 for life on land, as wildfires destroy biodiversity and exacerbate climate change. Geostationary satellites, such as Meteosat Second Generation (MSG), enable the detection and monitoring of thermal anomalies of wildfires with a refresh frequency ranging from 5 to 15 minutes. This frequency meets the needs of wildfire response agencies, offering details about the fire's timing, radiative power, and location. However, despite the high temporal resolution of thermal images, their limited spatial resolution can hinder the early detection of wildfires. This spatial resolution ranges from 3 km at the Equator to 4.5 km at Mediterranean latitudes. Furthermore, cloud detection plays a crucial role in satellite imagery analysis. While clouds often cover the satellite's observation target, thus hindering data collection and analysis, they also offer valuable insights into atmospheric conditions, precipitation patterns, and climate change. Moreover, in solar energy applications, clouds are a major factor, as they significantly affect the solar radiation reaching the Earth's surface. Therefore, accurately detecting cloud coverage in satellite images using techniques like spatial downscaling is essential for optimizing many EO applications, such as weather nowcasting and wildfire monitoring. Super-resolution using deep learning techniques aims to overcome these limitations by enhancing the spatial resolution of satellite images.
The objective of this study is to create a comprehensive dataset and conduct a qualitative and quantitative comparison of deep learning techniques, such as SRCNN and SRGAN, for improving the spatial resolution of thermal images and cloud masks generated by the SEVIRI sensor on the MSG geostationary satellite.

Dataset curation. This project exploits two data sources: 1. The MODIS level 1B calibrated observations (MOD021KM/MYD021KM), which are converted to spectral radiances for two of the 36 standard resolution channels, and the MODIS cloud mask (MOD35_L2/MYD35_L2). 2. The SEVIRI level 1.5 calibrated observations (Rapid Scan High Rate SEVIRI Level 1.5 - MSG) for the 12 standard resolution channels and the SEVIRI cloud mask (Rapid Scan Cloud Mask - MSG). The dataset was created based on MODIS active fire measurements. The study focuses on Greece, using data from 2018 to 2023. First, we selected the MODIS scenes with high fire intensity (a large number of active fire pixels) and matched each with the closest SEVIRI image in time. We performed the necessary geoprocessing of the AQUA/MODIS and MSG/SEVIRI data to convert them to NetCDF files. After geoprocessing, the SEVIRI and MODIS images can be aligned using their coordinate information. Special care is taken to perform temporal alignment: as the SEVIRI rapid scan measures every 5 minutes, the MODIS images are matched with the closest SEVIRI measurement in time. The cloud mask dataset was created using a similar methodology to the one used for the active fires dataset: cloud masks from MODIS were collected and matched with the SEVIRI cloud masks that shared corresponding capture times. We then created patches and selected them based on a) the number of active fires and b) the presence of clouds, for training models to apply SR for active fire detection and SR for refined cloud masks, respectively.

Dataloaders. We created flexible data loaders that can be configured for different bands and sensors, enabling large experimentation spaces.
For the experiments where the SR exploited the MODIS bands, the SEVIRI images were resampled by bicubic interpolation to match the MODIS spatial resolution.

Methods. In our initial study, we used two models for SR tasks: the SR Convolutional Neural Network (SRCNN) and the SR Generative Adversarial Network (SRGAN). SRCNN, our shallow baseline model, is a convolutional neural network specially configured for image SR tasks and trained using the Mean Square Error (MSE). SRGAN adopts a generative adversarial network (GAN) framework, utilizing a generator and a discriminator, and is optimized on a perceptual loss function balancing content loss (MSE) with adversarial loss. During training, we also monitored peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) scores. We tested two different data combinations: 1. SEVIRI - MODIS: In this scenario, the models learned to map lower-resolution SEVIRI data to the higher-resolution MODIS images. 2. SEVIRI - upsampled SEVIRI: Here, we used Wald's protocol. The low-resolution SEVIRI images were first upscaled using bicubic interpolation to match the target resolution. The original high-resolution SEVIRI images served as ground truth, while the bicubic-upsampled versions were used as input. Our experiments were based mainly on channels 21, 22, and 31 from MODIS and IR3.9 and IR10.8 from SEVIRI, chosen for their relevance to our application. Initial results indicate that the SEVIRI-MODIS mapping approach outperforms the self-supervised SEVIRI method, suggesting that leveraging external high-resolution data significantly improves super-resolution performance.

Future work. To build upon these findings, we plan to broaden the scope of our experiments. Our next steps include expanding the dataset to cover additional years and applying our methods on a larger European scale. This extended analysis will help us better generalize our models and enhance their robustness across diverse geographic and temporal conditions.
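The Wald's-protocol pairing and the PSNR score mentioned above can be sketched as follows. This is a simplified stand-in: block averaging and nearest-neighbour upsampling replace the bicubic interpolation used in the study, and the 12×12 input patch is synthetic rather than a SEVIRI image.

```python
import numpy as np

def wald_pair(hr_image, factor=3):
    """Wald's protocol: degrade the HR image to LR, resample it back to the
    original grid, and use the original image as ground truth. Block averaging
    and nearest-neighbour upsampling stand in for bicubic interpolation."""
    h, w = hr_image.shape
    lr = hr_image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    upsampled = np.kron(lr, np.ones((factor, factor)))  # back to HR grid
    return upsampled, hr_image

def psnr(pred, target):
    """Peak signal-to-noise ratio in dB, one of the monitored training scores."""
    mse = np.mean((pred - target) ** 2)
    data_range = float(target.max() - target.min())
    return 10.0 * np.log10(data_range ** 2 / mse)

# Synthetic 12x12 patch standing in for a thermal image.
hr = np.sin(np.linspace(0.0, 3.0, 144)).reshape(12, 12)
degraded, truth = wald_pair(hr, factor=3)
print(psnr(degraded, truth))  # quality of plain interpolation, the SR baseline
```

An SR model trained on such pairs should exceed the PSNR of the plain interpolation baseline printed here.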
Furthermore, we plan to explore additional downscaling techniques that have demonstrated effectiveness with remote sensing data to further improve the accuracy and applicability of our models.

Acknowledgements. The ESA-funded ASIMOV project addresses the challenge of the coarse spatial resolution of Essential Climate Variables (ECVs) by using AI super-resolution techniques. These techniques utilize both satellite Earth Observation (EO) data and other non-EO data sources. The super-resolution methods are applied in two key use cases: fire risk prediction and real-time wildfire detection. By leveraging these diverse data sources, ASIMOV generates highly informative, super-resolved versions of the ECVs. The project is led by the National Observatory of Athens, supported by NTUA ICCS and WIRELESSINFO.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: Resource-Efficient Super-Resolution for Sentinel-2 Imagery Using Modular Auto-Encoders and U-Net Architectures

Authors: Thomas Cusson, Thomas Corpetti, Antoine Lefebvre
Affiliations: KERMAP, CNRS UMR LETG
NIMBO (https://nimbo.earth/), developed by KERMAP, is an advanced Earth observation platform that delivers high-quality, GIS-ready basemaps derived from Sentinel-2 data. These monthly basemaps provide consistent and comprehensive coverage of the Earth's surface, supporting applications in urban planning, agriculture, and environmental monitoring. However, the 10 m spatial resolution of Sentinel-2 data can limit its utility for applications requiring finer spatial detail. To overcome this limitation, we have developed a modular and efficient super-resolution framework capable of enhancing Sentinel-2 images to a 2.5 m resolution, achieving a 4× improvement in spatial detail while maintaining low computational cost. Our approach integrates seamlessly with NIMBO's processing pipeline, ensuring scalability and resource efficiency. The framework leverages a modular architecture combining an auto-encoder and a U-Net, designed to address the computational and data challenges of super-resolution. The process begins by training an auto-encoder on high-resolution (HR) images to create a compact bottleneck representation. This bottleneck significantly reduces data complexity while retaining the essential features needed for reconstruction. Next, a U-Net, equipped with attention mechanisms and residual connections, is trained to map low-resolution (LR) Sentinel-2 inputs to this bottleneck representation. Finally, the decoder reconstructs HR images, capturing fine spatial details and preserving critical edge features. The proposed framework offers several key advantages:
- Enhanced basemap utility: By improving spatial resolution, the framework produces more detailed basemaps, better suited for precision-demanding applications.
- Efficiency and scalability: The modular design reduces computational requirements and aligns with NIMBO's goal of resource-efficient data processing for large-scale operations.
- Generalization with limited data: The auto-encoder ensures robust HR image reconstruction, even with limited training samples, by enforcing a structure that mimics true HR image distributions.

This work not only enhances the quality of NIMBO's basemaps but also demonstrates the potential of combining deep learning architectures for scalable and efficient satellite image super-resolution. By addressing both scientific and operational challenges, our method positions NIMBO as a leader in delivering high-resolution satellite-derived products, and provides a flexible framework adaptable to other remote sensing tasks.
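The modular data flow described above (frozen HR auto-encoder plus a mapper from LR inputs to the bottleneck) can be sketched as follows. This illustrates the wiring and shapes only: the random linear maps are placeholders for the trained encoder, decoder, and U-Net, and all dimensions are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only: 4x SR from 16x16 LR to 64x64 HR patches,
# with a 128-dimensional bottleneck.
LR_DIM, HR_DIM, BOTTLENECK = 16 * 16, 64 * 64, 128

# Stage 1: auto-encoder trained on HR patches (random placeholder weights,
# not trained parameters).
W_enc = rng.normal(size=(BOTTLENECK, HR_DIM)) * 0.01
W_dec = rng.normal(size=(HR_DIM, BOTTLENECK)) * 0.01

# Stage 2: a mapper (a U-Net in the abstract; a single linear layer here)
# trained to map LR inputs to the frozen bottleneck representation.
W_map = rng.normal(size=(BOTTLENECK, LR_DIM)) * 0.01

def super_resolve(lr_patch):
    """Modular inference: LR patch -> bottleneck -> decoded HR patch."""
    z = W_map @ lr_patch.ravel()  # U-Net stand-in: LR -> bottleneck
    hr = W_dec @ z                # frozen decoder: bottleneck -> HR
    return hr.reshape(64, 64)

lr = rng.normal(size=(16, 16))
print(super_resolve(lr).shape)  # (64, 64)
```

The point of the split is that the decoder, trained once on HR imagery, constrains the output distribution, so the mapper can be trained with comparatively few LR/HR pairs.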

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone T-U)

Poster: AI-Driven Super-Resolution in Earth Observation: Addressing Domain Shift and Uncertainty in Thermal Data Analysis

Authors: Pauline Hecker, Hannes Baeuerle, Shivali Dubey, Dr. Peter Kuhn
Affiliations: Fraunhofer Ernst-Mach-Institut, EMI
The proliferation of Artificial Intelligence (AI) in Earth System Science and Earth Observation (EO) is revolutionizing research methodologies and fostering innovation at an unprecedented rate. Yet, to harness the full potential of AI in Earth Action initiatives, it is imperative that AI solutions exhibit explainability, physics-awareness, and trustworthiness. This ensures that the outcomes are reliable and fit for intended purposes, especially in critical areas such as climate and environmental monitoring. Our motivation stems from the critical need for high-resolution (HR) satellite data within numerous scientific and commercial sectors. Acquiring such data, however, often involves prohibitive costs. AI-driven Super-Resolution (SR) techniques are a promising solution, enabling the generation of HR images from low-resolution (LR) inputs. However, the HR training data required for the development of SR models in critical sectors is frequently sparse or nonexistent. Here, it would be convenient to construct SR models for data-poor domains from models fitted on data-rich domains. Such an approach, however, requires validating that the domain shift does not introduce high prediction uncertainties. An emerging field of research is the application of SR techniques to thermal satellite imagery, both to raw thermal radiation data and to processed products such as Land Surface Temperature (LST). In this context, the objective of our research is to develop SR algorithms for the thermal satellite data product LST that generalize between data domains and incorporate uncertainty quantification. We systematically investigate the impact of domain shift on predictive uncertainty, as assessed by standard measures as well as by an innovative use of the cycle loss, known from image generation and style transfer with CycleGAN, which preserves important features during the image translation process.
Our research therefore focuses on developing a guided SR approach, where not only pairs of HR and LR images but also physical relationships between individual bands and band combinations and the land surface temperature are incorporated into the model. For example, the structural and feature-based similarities between LST and the Normalized Difference Built-up Index (NDBI) can be utilized to first learn the mapping between the two types of images at low resolution. This serves as the domain adaptation step, which can then facilitate a domain shift at low resolution, ultimately allowing us to generate high-resolution thermal images. Furthermore, the red, green, and blue color channels, along with combinations of these with the near- and shortwave-infrared bands of Landsat-8/9, are used for model training. Landsat-8/9 provide LST data at 100 m resolution and visible as well as near- and shortwave-infrared data at 30 m resolution; the latter represent the guidance data. The models are to learn the HR domain of the guidance data and to adapt the LR LST data to it. As a result, the LST's resolution domain shifts to the 30 m guidance-data domain, corresponding to a super-resolution factor of roughly 3. To ensure reliability, we develop two strands of uncertainty quantification for our model. First, we use established measures such as Monte Carlo dropout and model ensembling to measure the model's epistemic uncertainty. Second, we attempt to shed additional light on the uncertainty arising from the intended domain shift by training invertible or cyclic generative model architectures, such as CycleGANs and invertible neural networks.
These models can potentially improve domain generalizability through the cycle loss, which enforces the cyclic or invertible property on a model so that it not only performs image translations from the source domain to the target domain but also reconstructs an image in the source domain from the target domain without losing information about important features. We systematically validate this approach by testing it on LR data domains where the ground truth is available and compare it to the established methods of evaluating predictive uncertainty arising from domain shifts. We thus demonstrate that we can produce a reliable upper bound on the error expected from a domain shift. The study focuses on the 15 most populated cities of Europe. This geographic focus ensures that the model is tested in various urban environments, providing a robust assessment of its applicability across different contexts. In summary, we employ a guided SR model to predict HR images from LR images, together with other EO data, by leveraging the cycle loss mechanism from generative AI. Our paper systematically investigates how effectively the models generalize to HR thermal domains where data is sparse. The overall objective is to evaluate how well the models can learn a mapping from surface features to thermal data, in order to then employ these models to produce HR thermal images without access to training data.
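The cycle-consistency penalty described above can be sketched as follows. The two translators are toy invertible functions standing in for the neural networks of the study, and the data is synthetic; the real loss would be computed on image tensors.

```python
import numpy as np

def cycle_loss(x, forward, backward):
    """L1 cycle-consistency penalty, CycleGAN-style: translate x to the
    target domain and back, and penalize whatever is lost on the way."""
    x_reconstructed = backward(forward(x))
    return np.mean(np.abs(x_reconstructed - x))

# Toy stand-ins for the two translators (the real ones are neural networks),
# e.g. forward: LST-like domain -> guidance domain, backward: the inverse.
forward = lambda x: 2.0 * x + 1.0
backward = lambda y: (y - 1.0) / 2.0

x = np.array([0.1, 0.5, 0.9])
print(cycle_loss(x, forward, backward))  # near zero for an invertible pair
```

During training, this term is added to the usual translation losses; a low cycle loss on held-out data is then one indicator that the domain shift preserves the features of interest.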

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: C.05.03 - POSTER - ALTIUS: ESA's Ozone Mission

In the 1970s, scientists discovered that the ozone layer was being depleted, particularly above the South Pole resulting in what is known as the ozone hole. To address the destruction of the ozone layer the international community established the Montreal Protocol on ozone-depleting substances. Since then, the global consumption of ozone-depleting substances has reduced by about 98%, and the ozone layer is showing signs of recovering. However, it is not expected to fully recover before the second half of this century. It is imperative that concentrations of stratospheric ozone, and how they vary according to the season, are monitored continually, to not only assess the recovery process, but also for atmospheric modelling and for practical applications including weather forecasting.
The Atmospheric Limb Tracker for Investigation of the Upcoming Stratosphere (ALTIUS) mission fills a very important gap in the continuation of limb measurements for atmospheric sciences. The ALTIUS mission will provide 3-hour latency near-real time ozone profiles for assimilation in Numerical Weather Prediction systems, and consolidated ozone profiles for ozone scientific analysis. Other trace gases and aerosols extinction profiles will also be provided.
The focus of this session is the mission and its status, together with the implemented technical and algorithmic solutions to image the Earth limb and retrieve the target chemical concentrations, as well as the ongoing preparations for the calibration/validation of the mission products.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: ALTIUS Ozone Retrieval Algorithm in Bright Limb Mode Validated using OMPS LP Observations

Authors: Sotiris Sotiriadis
Affiliations: Belgian Institute of Space Aeronomy
ALTIUS (Atmospheric Limb Tracker for the Investigation of the Upcoming Stratosphere) is an atmospheric limb mission being implemented in ESA's Earth Watch program and planned for launch in 2026. The instrument consists of three imagers: UV (250-355 nm), VIS (440-675 nm) and NIR (600-1040 nm) channels. Each channel is able to take a snapshot of the scene independently of the other two channels, at a desired wavelength and with the requested acquisition time. The agility of ALTIUS allows for series of high vertical resolution observations at wavelengths carefully chosen to retrieve the vertical profiles of species of interest. ALTIUS will perform measurements in different geometries to maximize global coverage: observing limb-scattered solar light on the dayside, solar occultations at the terminator, and stellar, lunar, and planetary occultations on the nightside. The primary objective of the mission is to measure high-resolution stratospheric ozone concentration profiles. This work concerns the bright limb mode and the validation of the ALTIUS L2P algorithm using Ozone Mapping and Profiler Suite Limb Profiler (OMPS LP) L1 data. OMPS LP measures solar radiation scattered from the atmospheric limb in the ultraviolet and visible spectral ranges between the surface and 80 km, and these data were used for the retrieval of ozone profiles from cloud tops up to 55 km. We perform end-to-end simulations to examine the robustness of the L2P limb algorithm using L1 OMPS LP data, assuming no prior knowledge of the rest of the atmosphere in our tests. We compare our retrieved ozone profiles with those from the OMPS algorithm and discuss potential disagreements and biases in the results. In our study, we generate artificial stimuli from the OMPS L1 signals, where the ozone, temperature, and pressure profiles come from OMPS L2 products.
Then we feed these stimuli into our system performance simulator (SPS); an ALTIUS L1 product, based on the latest description of the instrument, is generated and passed to our L2 processor.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Feasibility of BrO and OClO Retrievals in ALTIUS' Solar Occultation Mode: Key Challenges and Solutions

Authors: Kristof Rose, Noel Baker, Dr. Antonin Berthelot, Didier Fussen, Nina Mateshvili, Didier Pieroux, Sotiris Sotiriadis, Emmanuel Dekemper
Affiliations: BIRA-IASB
The Atmospheric Limb Tracker for the Investigation of the Upcoming Stratosphere (ALTIUS) is an ozone monitoring mission under ESA's Earth Watch Programme. Scheduled for launch in 2026-2027 aboard Vega-C, ALTIUS addresses the observational gap following ENVISAT's decommissioning in 2012 and the looming discontinuation of very successful limb missions such as MLS/Aura, OSIRIS/Odin, and ACE-FTS/SciSat. ALTIUS is designed for versatile atmospheric measurements, utilizing limb scattering and solar occultation on the dayside of the orbit, and stellar, lunar, and planetary occultations on the nightside. The ALTIUS payload, mounted on a PROBA platform, consists of three imagers: UV (250–355 nm), VIS (440–675 nm), and NIR (600–1040 nm) channels. Each imager can independently capture images at desired wavelengths and acquisition times; this agility enhances vertical resolution, enabling the retrieval of vertical profiles of various chemical species, including but not limited to O₃, NO₂, and aerosols. While the mission's primary goal is to retrieve high-resolution ozone profiles, the versatile imagers also make it possible to measure secondary species such as BrO and OClO, which play critical roles in stratospheric ozone depletion. Given their low abundance in the stratosphere, only the solar and lunar occultation methods have a chance to detect their presence. This study evaluates the feasibility of retrieving BrO and OClO using ALTIUS' solar occultation chain. Specifically, our objectives are: 1) to identify the (ALTIUS-specific) optimal measurement vector for BrO and OClO retrievals, minimizing interference from more abundant species such as O₃ and NO₂; and 2) to assess whether the signal-to-noise ratio for these species is sufficient in single measurements or whether temporal averaging (e.g., daily, weekly, or monthly) is required for meaningful profiles.
By addressing these challenges, this study aims to enhance ALTIUS’ scientific contributions, broadening its scope beyond primary mission objectives and advancing our understanding of these trace species critical to ozone chemistry.
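The second objective above, judging whether temporal averaging can compensate for a weak single-measurement signal, rests on the standard result that averaging N independent measurements with uncorrelated noise improves the signal-to-noise ratio by a factor of sqrt(N). A minimal sketch (the numbers are illustrative, not ALTIUS performance figures):

```python
import numpy as np

def averaged_snr(snr_single, n_measurements):
    """For uncorrelated noise, averaging N independent measurements improves
    the signal-to-noise ratio by sqrt(N); used to judge whether daily,
    weekly, or monthly averaging yields a meaningful trace-gas profile."""
    return snr_single * np.sqrt(n_measurements)

# e.g. a marginal single-occultation SNR of 0.5 after ~30 occultations:
print(averaged_snr(0.5, 30))
```

In practice the gain is an upper bound, since atmospheric variability over the averaging window smears the retrieved profile.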

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: B.04.05 - POSTER - Remote sensing for disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters

Every year, millions of people worldwide are impacted by disasters. Floods, heat waves, droughts, wildfires, tropical cyclones and tornadoes cause increasingly severe damage. Civil wars and armed conflicts in various parts of the world, moreover, lead to a growing number of refugees and large changes in population dynamics. Rescue forces and aid organizations depend on up-to-date, area-wide and accurate information about hazard extent, exposed assets and damages in order to respond fast and effectively. In recent years, it has also been possible to prepare for specific events or to monitor vulnerable regions of the world on an ongoing basis thanks to the rapidly growing number of satellites launched and their freely available data. Providing information before, during or after a disaster in a rapid, scalable and reliable way, however, remains a major challenge for the remote sensing community.
Obtaining an area-wide mapping of disaster situations is time-consuming and requires a large number of experienced interpreters, as it often relies on manual interpretation. Nowadays, the amount of remote sensing data and related suitable sensors is steadily increasing, making it impossible in practice to assess all available data visually. Therefore, increased automation of (potential) impact assessment methods using multi-modal data opens up new possibilities for effective and fast disaster response and preparedness workflows. In this session, we want to provide a platform for research groups to present their latest research activities aimed at addressing the problem of automatic, rapid, large-scale, and accurate information retrieval from remotely sensed data to support disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters/conflicts.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: First Assessment of Electronic Corner Reflectors for Dam Monitoring in Germany – A Case Study

Authors: Jonas Ziemer, Jannik Jänichen, Carolin Wicker, Katja Last, Prof. Dr. Christiane Schmullius, Clémence
Affiliations:
Regular deformation monitoring is a key task for dam operators, given its fundamental socio-economic and environmental importance. In Germany, dam monitoring programs encompass various in situ geodetic methods, such as plumb measurements and trigonometric surveys, to ensure safe operation (DWA, 2011; DIN, 2004). While plumb data provide the highest accuracy (Bettzieche, 2020), these systems are not installed on all dams. Trigonometric data offer an alternative, but due to the high costs and substantial time investment, field campaigns are typically conducted only once or twice a year. This practice limits monitoring capabilities to the detection of long-term deformations. Technical advances in differential synthetic aperture radar interferometry (DInSAR) address these challenges, providing multiple observation points, known as persistent scatterers (PS), on the dam with higher temporal resolution. Interferometric methods, such as Persistent Scatterer Interferometry (PSI), can detect deformations with millimeter-level precision. The number of detected scatterers primarily depends on the material properties of the object. In this context, the type of dam is one of the most important factors for successful PS identification. Different dam types impound German reservoirs, leading to considerable variations in the number of detected scatterers. Gravity dams made of masonry or concrete typically provide suitable conditions for PS-based monitoring. In contrast, embankment dams, often covered by vegetation, present less favorable conditions due to decorrelation effects, resulting in fewer detected PS points. To facilitate monitoring of such dams, corner reflectors can be deployed, serving as stable, high-intensity reflection scatterers that enhance the reliability and accuracy of DInSAR measurements for deformation monitoring. Numerous studies have employed passive corner reflectors for infrastructure monitoring (Kelevitz et al., 2022; Sage et al., 2022). 
However, these devices often face limitations due to their large size and weight, conspicuous appearance, reliability issues stemming from geometric variations, and material degradation or maintenance challenges over extended periods of use (Mahapatra, 2014). Consequently, smaller, lighter, and less conspicuous radar transponders, known as electronic corner reflectors (ECRs), have been developed and are particularly well-suited for publicly accessible infrastructures. Their capability to cover ascending and descending tracks with a single unit, instead of requiring a dual, opposite-facing reflector setup (Fotiou and Danezis, 2020), makes them ideal for dam monitoring applications. This study provides initial insights into the assessment of Sentinel-1 C-band PS time series obtained using electronic corner reflectors for dam monitoring in Germany. The analysis is conducted on several dams in North Rhine-Westphalia, western Germany, and spans a period of up to two years beginning in January 2023. PS data in the sensors' line of sight are compared with in situ geodetic measurements to evaluate consistency. Preliminary results indicate promising benefits for dams where few natural scatterers are detected.

References
- Bettzieche, V. (2020). Satellitenüberwachung der Verformungen von Staumauern und Staudämmen. Wasserwirtschaft, 9, 48-51. https://doi.org/10.1007/s35147-020-0424-9
- Fotiou, K., & Danezis, C. (2020). An overview of electronic corner reflectors and their use in ground deformation monitoring applications. In Eighth International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2020) (Vol. 11524, pp. 216-224). SPIE. https://doi.org/10.1117/12.2571886
- German Institute for Standardization (DIN) (2004). 19700-10: 2004-07, Stauanlagen-Teil 11: Talsperren; Beuth Verlag GmbH: Berlin, Germany. https://dx.doi.org/10.31030/9560336
- German Association for Water, Wastewater and Waste (DWA) (2011). Bauwerksüberwachung an Talsperren. DWA-Merkblätter, Nr. M 514.
- Kelevitz, K., Wright, T. J., Hooper, A. J., & Selvakumaran, S. (2022). Novel corner-reflector array application in essential infrastructure monitoring. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-18. https://doi.org/10.1109/tgrs.2022.3196699
- Mahapatra, P. S., Samiei-Esfahany, S., van der Marel, H., & Hanssen, R. F. (2013). On the use of transponders as coherent radar targets for SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing, 52(3), 1869-1878. https://doi.org/10.1109/tgrs.2013.2255881
- Sage, E., Holley, R., Carvalho, L., Miller, M., Magnall, N., & Thomas, A. (2022). InSAR monitoring of a challenging closed mine site with corner reflectors. In Mine Closure 2022: Proceedings of the 15th International Conference on Mine Closure (pp. 779-788). Australian Centre for Geomechanics. https://doi.org/10.36487/acg_repo/2215_56
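Comparing PS measurements in the sensor's line of sight with in situ geodetic data, as described above, requires projecting the 3-D in situ displacement into the line of sight. A generic geometric sketch of that projection (not the processing chain of the study; angles and displacements are illustrative):

```python
import numpy as np

def los_displacement(d_east, d_north, d_up, incidence_deg, az_to_sat_deg):
    """Project a 3-D displacement (in metres) into the radar line of sight.

    `incidence_deg` is the local incidence angle; `az_to_sat_deg` is the
    azimuth (clockwise from north) of the horizontal direction from the
    target towards the satellite. Positive output = motion towards the sensor.
    """
    theta = np.radians(incidence_deg)
    az = np.radians(az_to_sat_deg)
    # Unit vector from the ground target towards the satellite:
    u = np.array([np.sin(theta) * np.sin(az),   # east component
                  np.sin(theta) * np.cos(az),   # north component
                  np.cos(theta)])               # up component
    return float(np.dot([d_east, d_north, d_up], u))

# Sanity check: pure uplift seen at zero incidence maps fully into the LOS.
print(los_displacement(0.0, 0.0, 0.01, incidence_deg=0.0, az_to_sat_deg=0.0))
```

Evaluating the projection for both ascending and descending geometries shows why a single ECR covering both tracks is attractive: the two LOS components together constrain the vertical and east-west motion.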

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Rapid identification of disaster hotspots by means of a geospatial information fusion from remote sensing and social media

Authors: Marc Wieland, Sebastian Schmidt, Bernd Resch, Dr. Sandro Martinis
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), University of Salzburg, Department of Geoinformatics (Z_GIS)
Effective management of complex disaster scenarios relies on achieving comprehensive situational awareness. Recent disasters, such as the 2024 floods in Southern Germany, have highlighted the critical need for timely geoinformation to protect communities. During the response phase, it is vital to rapidly identify the most affected areas to guide emergency actions and allocate limited resources effectively. This process is typically iterative, incorporating continuous updates as new or improved information becomes available. Initial estimates, often based on incomplete or imprecise data, play a crucial role in forming an early situational overview before detailed damage assessments are conducted. Early-stage proxies, such as population distribution and hazard zones, can support planning data collection efforts, enhancing situational understanding and focusing response efforts efficiently. This study introduces a method for rapidly identifying disaster hotspots, particularly in scenarios where detailed damage assessments or very high-resolution satellite imagery are not (yet) available. The approach leverages the H3 discrete global grid system and employs a log-linear probability pooling method with an unsupervised hyperparameter optimization routine. It integrates flood hazard data derived from systematically acquired high-resolution satellite imagery (Sentinel-1 and Sentinel-2), disaster-related information from X (formerly Twitter), and freely accessible geospatial data on exposed assets. The method's effectiveness is assessed by comparing its outputs to detailed damage assessments from five real-world flood events (USA August 2017, Mozambique 2019, Mexico November 2020, Germany July 2021, Pakistan September 2022). Results demonstrate that disaster hotspots can be identified using readily available proxy data.
An extensive hyperparameter analysis revealed that while equal-weight methods offer simplicity and effectiveness, optimized pooling weights generally yield superior results. Context-specific tuning was shown to be critical for optimal performance in log-linear pooling. Notably, an unsupervised method minimizing the Kullback-Leibler divergence between input distributions and predictions outperformed supervised approaches, overcoming the limitations of training data. This method’s transparency and adaptability allow it to incorporate geospatial layers with varying resolutions and semantic relevance, making it particularly suitable for application to other hazards (e.g., landslides, wildfires, earthquakes) or exposed assets (e.g., roads, railways, critical infrastructure).
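As an illustration of the pooling rule named above, the following minimal Python sketch combines several proxy probability layers via log-linear pooling (P ∝ Π pᵢ^wᵢ). The layer values and weights are invented, and the H3 indexing and KL-divergence-based weight optimisation of the study are omitted.

```python
import numpy as np

def log_linear_pool(layers, weights, eps=1e-9):
    """Log-linear pooling of probability layers: P(x) ∝ Π_i p_i(x)^w_i.

    layers: (n_layers, n_cells) proxy probabilities per grid cell
    weights: (n_layers,) pooling weights (normalised internally)
    """
    log_p = np.log(np.clip(layers, eps, 1.0))
    pooled = np.exp(np.average(log_p, axis=0, weights=weights))
    return pooled / pooled.sum()  # normalise over cells

# three invented proxy layers (flood hazard, social media, exposure) over four cells
layers = np.array([
    [0.90, 0.20, 0.10, 0.05],
    [0.70, 0.40, 0.20, 0.10],
    [0.80, 0.30, 0.30, 0.20],
])
equal = log_linear_pool(layers, np.ones(3))                 # equal-weight baseline
tuned = log_linear_pool(layers, np.array([2.0, 1.0, 1.0]))  # hazard layer upweighted
```

The equal-weight call corresponds to the simple baseline the analysis mentions; the second call shows how optimized weights would enter the same formula.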
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: A Satellite-Based Methodology for Assessing Wildfire Defensibility of Buildings in France

Authors: Luc Mercereau, Arnaud Broquard, Simon Lamy, Aurélien de Truchis, Julien Camus, Philippe Meresse, Marjorie Sampsoni
Affiliations: Kayrros SAS, Entente Valabre
The increasing frequency and severity of wildfires demand robust tools for assessing the defensibility of structures, a measure of their capacity to be defended during a wildfire event. Traditional defensibility assessments focus on three key components: accessibility (ease of approach for firefighting units), vegetation clearing (removal of flammable materials around structures), and proximity to defending infrastructure (such as water reservoirs or firefighting stations). These assessments, however, are typically conducted through ground inspections, which are labor-intensive and challenging to scale across large regions. Defensibility is a critical concept for firefighters, allowing the identification of the most vulnerable buildings and informing strategic and tactical decisions for wildfire prevention and crisis management. Analysis conducted by wildfire management experts reveals that approximately 80% of buildings with properly cleared surroundings within a 50-meter radius are spared when a wildfire occurs nearby. This is primarily due to the disruption of fuel continuity and, most importantly, the improved conditions for firefighting operations. Conversely, more than 90% of buildings with uncleared surroundings are destroyed in the event of a major wildfire in close proximity. These findings highlight the importance of vegetation management in enhancing building defensibility and mitigating the impact of wildfires. In this study, we propose a novel methodology that integrates satellite-based vegetation clearing analysis using Sentinel-2 imagery to enhance the defensibility scoring process. Sentinel-2's high-resolution, multispectral data enables accurate quantification of vegetation clearing around structures. This satellite-derived data is combined with field insights from firefighter inspections conducted in various regions of France. 
The study highlights the correlation between satellite-based assessments and on-the-ground evaluations, demonstrating strong agreement between the two approaches. Our methodology was applied to several test areas in wildfire-prone regions of France. Results show that satellite-derived defensibility scores reliably capture critical risk factors while significantly reducing the time and resources needed for large-scale assessments. This approach also supports continuous monitoring of vegetation regrowth, allowing for updated risk assessments over time. By improving scalability and consistency in defensibility evaluations, this methodology offers a powerful tool for wildfire crisis management. It provides actionable insights for prioritizing resources, optimizing firefighting strategies, and developing preventive measures. Ultimately, this study bridges operational expertise from firefighters with advanced satellite technology, contributing to more effective and efficient wildfire preparedness and response.
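To make the vegetation-clearing component concrete, here is a toy Python sketch that estimates the cleared (non-vegetated) fraction within a 50-metre radius of a building pixel from an NDVI raster. The NDVI threshold of 0.4 and the 10 m pixel size are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def cleared_fraction(ndvi, row, col, radius_m=50, pixel_m=10, veg_thresh=0.4):
    """Fraction of non-vegetated pixels within radius_m of a building pixel."""
    r = int(round(radius_m / pixel_m))
    yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
    disk = yy**2 + xx**2 <= r**2          # circular 50 m neighbourhood
    window = ndvi[row - r:row + r + 1, col - r:col + r + 1]
    veg = window[disk] > veg_thresh       # pixels still carrying vegetation
    return 1.0 - veg.mean()

# invented rasters: fully cleared vs fully vegetated surroundings
frac_cleared = cleared_fraction(np.full((20, 20), 0.1), 10, 10)
frac_vegetated = cleared_fraction(np.full((20, 20), 0.8), 10, 10)
```

A real workflow would additionally mask the building footprint itself and handle raster edges; the sketch only shows the core buffer-and-threshold logic.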
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Investigating the Risk of Damage to Traditional Timber Houses Caused By Tropical Cyclones in Madagascar, a Cyclone Enawo (2017) Case Study.

Authors: Holly Moore, Dr Thomas Reynolds, Professor Alexis Comber
Affiliations: University Of Edinburgh, University of Leeds
Traditionally built ‘non-engineered’ timber houses constructed in coastal regions of Madagascar are frequently damaged or destroyed by tropical cyclone hazards such as extreme wind speeds, storm surge and waves. The risk of damage to traditional timber houses is investigated using a multi-hazard structural fragility predictive model. The model calculates the probability of component-level failure for a representative building archetype from wind, storm surge and breaking wave loading of increasing intensities comparable to varying cyclone strengths. A Monte-Carlo simulation method is utilised where 10,000 random samples of component resistance measurements collected during an in-country experimental campaign [1] are taken to incorporate uncertainties of component strength. The building components under investigation are the embedded columns that form the foundation system, and mortise-tenon (MT) joints that connect the roof to the vertical structure, failure of which will cause severe damage to the house [1]. The model is validated through hindcasting of past cyclone case studies and compared to damage maps produced from object detection of very high resolution (VHR) Pleiades optical satellite imagery and post-event reports. This research focuses on the impacts of Intense Tropical Cyclone Enawo that made landfall on the 7th of March 2017 on the north-east coast of Madagascar as an equivalent category 4 cyclone on the Saffir-Simpson scale [2]. Wind, wave and storm surge intensities are modelled for two different study sites: Antalaha, located closest to cyclone landfall; and Fénérive-Est, located further south. Severe damage to the foundation system is categorised as the embedded columns rotating > 25° caused by wind and/or breaking wave loading [1]. The probability of failure due to wind loading was derived for both study sites using equations from Eurocode 1 [3] and wind speed data from a global reanalysis product from the E.U. 
Copernicus Marine Service (CMEMS) [4]. Antalaha experienced higher 10-min maximum sustained wind speeds compared to Fénérive-Est, producing a probability of failure of 99.8% for houses with column embedment depths of 50cm, while Fénérive-Est had a probability of failure of 50.7%. The probability of embedded columns rotating > 25° from breaking wave loading was computed utilising equations from FEMA (2011) [5], ground elevations above geoid from the GLO-90 DSM dataset [6], sea surface height data and significant wave height data from CMEMS reanalysis products [7,8]. Breaking wave heights for coastal areas < 5m above sea level were calculated to reach a maximum of 1.16m in Antalaha and 0.19m in Fénérive-Est, producing probabilities of foundation failure of 98.2% in Antalaha and 0.3% in Fénérive-Est, where column embedment depths were 50cm. Model results suggest traditional timber houses located in Antalaha had a higher probability of foundation failure from wind and breaking wave loading during Cyclone Enawo in comparison to Fénérive-Est. Results correspond well to damage statistics from post-event reports [2], which reported that in the Sava region (which includes the Antalaha district) 34,894 houses were destroyed, compared to the Analanjirofo region (which includes the Fénérive-Est district) where 1,845 houses were destroyed. The model can be utilised to assess cyclone risk to Malagasy building stock and can be further adapted to assess the efficacy of simple and cost-effective building strengthening strategies. Strengthening strategies currently advised by the Croix-Rouge Malagasy (CRM) include increasing column embedment depths to 75cm and wrapping connections with metal wire. The results can then be utilised to inform construction guidelines to be disseminated to local communities to improve cyclone resilience of traditional Malagasy houses. References: [1] Taleb, R., et al., 2023. 
Fragility assessment of traditional wooden houses in Madagascar subjected to extreme wind loads. Engineering Structures 289, 116220. https://doi.org/10.1016/j.engstruct.2023.116220 [2] Probst, P., et al., 2017. Tropical Cyclone Enawo: post event report: Madagascar, March 2017. Publications Office of the European Union, Ispra (Italy). [3] British Standard, 2005. Eurocode 1: Actions on Structures - Part 1-4: General actions - Wind actions. [4] Global Ocean Hourly Reprocessed Sea Surface Wind and Stress from Scatterometer and Model. E.U. Copernicus Marine Service Information (CMEMS). Marine Data Store (MDS). https://doi.org/10.48670/moi-00185 [5] FEMA, 2011. Coastal Construction Manual. Principles and Practices of Planning, Siting, Designing, Constructing, and Maintaining Residential Buildings in Coastal Areas (Fourth Edition). FEMA P-55, Volume 1. [6] Copernicus DEM - Global and European Digital Elevation Model. https://doi.org/10.5270/ESA-c5d3d65 [7] Global Oceans Physics Reanalysis Model. E.U. Copernicus Marine Service Information (CMEMS). Marine Data Store (MDS). https://doi.org/10.48670/moi-00021 [8] Global Ocean Waves Reanalysis Dataset. E.U. Copernicus Marine Service Information (CMEMS). Marine Data Store (MDS). https://doi.org/10.48670/moi-00022
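The Monte-Carlo fragility step described above can be sketched as follows; the lognormal resistance model and all numbers are illustrative placeholders, not the measured component resistances from [1].

```python
import numpy as np

rng = np.random.default_rng(42)

def failure_probability(load, resist_mean, resist_cov, n=10_000):
    """Monte-Carlo estimate of P(load > resistance) with sampled resistances.

    An illustrative lognormal resistance model parameterised by mean and
    coefficient of variation; the study instead resamples measured values.
    """
    sigma = np.sqrt(np.log(1.0 + resist_cov**2))
    mu = np.log(resist_mean) - 0.5 * sigma**2
    resistance = rng.lognormal(mu, sigma, n)
    return float(np.mean(load > resistance))

# toy wind-induced demand vs embedded-column capacity (arbitrary units)
p_weak_storm = failure_probability(load=2.0, resist_mean=5.0, resist_cov=0.4)
p_strong_storm = failure_probability(load=8.0, resist_mean=5.0, resist_cov=0.4)
```

Repeating this per study site with wind and breaking-wave demands of increasing intensity yields the component-level fragility curves the abstract describes.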
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Optimizing Dam Monitoring: Validation and Optimization of the CR-Index for PSInSAR and Electronic Corner Reflector (ECR) Integration

Authors: Jannik Jänichen, Jonas Ziemer, Daniel Klöpper, Carolin Wicker, Sebastian Weltmann, Marco Wolsza, Nora Fischer, Katja Last, Prof. Dr. Christiane Schmullius, Dr. -Ing Clémence Dubois
Affiliations: Friedrich Schiller University Jena, Ruhrverband, Department for Water Economy, Institute of Data Science, German Aerospace Center
Monitoring the structural integrity of dams is crucial to ensure safety and mitigate risks for surrounding areas. Traditional methods relying on geodetic measurements, such as plumb measurements, trigonometry, or GNSS technology, are often costly and limited by accessibility. Satellite-based remote sensing techniques, particularly Persistent Scatterer Interferometry (PSI), present a promising alternative, offering cost-effective and precise deformation measurements over large areas. However, its applicability can be constrained by geometric and topographic factors. To address these limitations, the CR-Index was developed, combining geometric data with land use information to assess the suitability of PSI for dam monitoring. This study applies the CR-Index to dams in North Rhine-Westphalia, western Germany, focusing on the Möhne Dam and Bigge Dam. These exhibit distinct structural properties and environmental conditions. The Möhne Dam, a gravity dam made of masonry, and the Bigge Dam, a dam with vegetation cover, served as ideal test cases to assess the practical application of the CR-Index for monitoring dams with varying surface characteristics and orientations. The analysis utilizes Sentinel-1 data, including local incidence angles and metadata, alongside high-resolution Digital Elevation Models (DEMs) and land use data provided by the Geodata Infrastructure of North Rhine-Westphalia. The CR-Index, an evolution of the earlier geometry-based R-Index, incorporates a land use component to provide a more comprehensive evaluation of dam observability via PSI. This combined approach allows for a more tailored assessment, considering both geometric properties and land cover types, which are critical in determining PS point density. Results demonstrate that the CR-Index accurately identifies the suitability of different dam sections for PSInSAR analysis. 
The wall of the Möhne Dam consistently showed high CR-Index values, ensuring excellent observability in both ascending and descending satellite tracks. Conversely, the Bigge Dam exhibited more variability, particularly between its water-facing and air-facing surfaces. Areas with dense vegetation showed lower CR-Index values, while asphalted or exposed surfaces were more conducive to PSInSAR observation. The dams' topography and orientation significantly influenced CR-Index outcomes, with descending satellite tracks generally providing better results due to favorable incidence angles. These findings highlight the potential to enhance observability in challenging areas through complementary devices, such as electronic corner reflectors (ECRs), to further improve the density and quality of PS points. Validation of the CR-Index was conducted using PS data provided by the German Ground Motion Service (Bodenbewegungsdienst Deutschland, BBD). By comparing BBD point densities to CR-Index values, a strong correlation was observed: areas with higher CR-Index values corresponded with higher PS density, thus validating the CR-Index's predictive accuracy. For both ascending and descending tracks, PS point density increased notably at CR-Index values between 60 and 90. In the descending direction, PS point density reached up to 25 points per hectare, demonstrating the effectiveness of the CR-Index in predicting PS-rich areas and supporting its utility in developing optimal monitoring strategies. This study underscores the importance of the CR-Index as a tool for preliminary site selection and observation strategy development in PSInSAR analyses. By integrating geometric and land use parameters, the CR-Index provides a robust basis for tailoring monitoring approaches to individual dams, allowing for more efficient and effective use of PSInSAR technology.
Future work will extend the validation to a broader range of dam types and environmental conditions, refining the CR-Index for more generalized application across various types of infrastructure. This research provides valuable insights into dam monitoring using remote sensing technologies and establishes a foundation for advancing the monitoring of critical infrastructure, thereby enhancing both the safety and sustainability of dam management practices. References BGR (2019): BodenBewegungsdienst Deutschland – BBD, https://www.bgr.bund.de/DE/Themen/GG_Fernerkundung/BodenBewegungsdienst_Deutschland/bodenbewegungsdienst_deutschland_node.thml. (Last access: 11/2024). BGR (2022): Nutzungshinweise BBD Sentinel-1 PSI, https://www.bgr.bund.de/DE/Themen/GG_Fernerkundung/Downloads/Nutzungshinweise-BBD_PSI-Daten.pdf?__blob=publicationFile&v=2. (Last access: 11/2024). Cigna, F.; Bateson, L.B.; Jordan, C.J.; Dashwood, C. (2014). Simulating SAR geometric distortions and predicting Persistent Scatterer densities for ERS-1/2 and ENVISAT C-band SAR and InSAR applications. Nationwide feasibility assessment to monitor the landmass of Great Britain with SAR imagery. Remote Sensing of Environment, 152, 441-446. Ferretti, A.; Prati, C.; Rocca, F. (2001): Permanent Scatterers in SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing 39, 1, 8-20. Notti, D.; Meisina, C.; Zucca, F.; Colombo, A. (2011). Models to Predict Persistent Scatterers Data Distribution and Their Capacity to Register Movement Along the Slope. Fringe 2011 Workshop, 19-23. Available online: https://earth.esa.int/eogateway/documents/20142/37627/Models_predict_persistent_scatterers_data_distribution.pdf
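The abstract does not reproduce the CR-Index formula, but the underlying idea of modulating a geometric visibility term with a land-use weight can be sketched as a toy score; the land-cover weights and the sine-based geometric term below are assumptions for illustration only, not the published CR-Index coefficients.

```python
import numpy as np

# illustrative land-cover weights (not the published coefficients):
# persistent scatterers are dense on masonry/asphalt, sparse on vegetation
LC_WEIGHT = {"masonry": 1.0, "asphalt": 0.9, "grass": 0.4, "forest": 0.1}

def cr_style_score(local_incidence_deg, land_cover):
    """Toy CR-style suitability score in [0, 100]: a geometric visibility
    term (penalising layover-prone geometry near 0°) times a land-use weight."""
    geom = np.sin(np.radians(local_incidence_deg))
    return 100.0 * geom * LC_WEIGHT[land_cover]

wall_score = cr_style_score(40.0, "masonry")   # masonry dam wall
slope_score = cr_style_score(40.0, "forest")   # vegetated embankment
```

Even this toy version reproduces the qualitative finding above: a masonry wall scores far higher than a vegetated slope at the same viewing geometry.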
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: High-Resolution Insights into Extreme Drought Impacts on Vegetation using Sentinel-2

Authors: Claire Robin, Vitus Benson, Dr. Marc Rußwurm, Dr Nuno Carvalhais, Prof Markus Reichstein
Affiliations: Max Planck Institute For Biogeochemistry, Wageningen University
Understanding the ecological impacts of drought is essential for safeguarding ecosystems and mitigating climate change. Leveraging Sentinel-2’s unprecedented 20-meter spatial resolution, we introduce an innovative approach to quantify the effects of climate extremes on vegetation with a new level of detail. High-resolution mapping of the impact of extreme events, such as droughts and heatwaves, can reveal how vegetation heterogeneity and local variability modulate their effects, providing crucial insights into their impact on vegetation health and the carbon cycle. However, identifying extremes requires decades of data, posing a challenge for current high-resolution satellite missions. Sentinel-2 offers only seven years of data, while Landsat's low temporal resolution limits its suitability for studying vegetation dynamics. Although these datasets deliver unprecedented spatial detail, their limited historical coverage presents significant challenges for analyzing extreme events. To address these data limitations, we adopt a sampling strategy tailored for Sentinel-2 data to extend the regional extremes method [1]. This method leverages ecosystem similarities to determine extreme thresholds across ecoregions, providing a robust alternative to traditional location-specific approaches. Doing so avoids the inherent biases of local thresholds that are computed independently for every location. Location-specific threshold approaches often yield a uniform distribution of extremes, especially problematic with short time series, where many locations may not have experienced abnormal climate impacts or vegetation responses. By utilizing larger sample sizes within eco-regions—delineated through principal component analysis (PCA) of the mean seasonal cycle—our method ensures more reliable threshold estimation and enables the mapping of extremes at Sentinel-2 spatial resolution. 
We demonstrate the computational efficiency of our method using the DeepExtremeCubes dataset [2], which includes samples from both within and outside climatic extremes. Validation against low-resolution MODIS data in areas with uniform landscapes (where MODIS and Sentinel-2 data are comparable) shows that our method provides more reliable quantile threshold estimates than traditional location-specific approaches, supporting its effectiveness for high-resolution assessments of vegetation impacts. By leveraging Sentinel-2’s 20-meter resolution, we reveal the spatial heterogeneity of vegetation responses to climate extremes, overcoming the limitations of spatial averaging inherent in MODIS data. This finer resolution uncovers localized variations in vegetation dynamics that were previously masked, offering unprecedented insights into ecological extremes. This approach significantly advances our ability to analyze ecosystem dynamics under climate extremes, unlocking new opportunities for fine-scale ecological monitoring. [1] Mahecha, M. D., Gans, F., Sippel, S., Donges, J. F., Kaminski, T., Metzger, S., ... & Zscheischler, J. (2017). Detecting impacts of extreme events with ecological in situ monitoring networks. Biogeosciences, 14(18), 4255-4277. [2] Ji, C., Fincke, T., Benson, V., Camps-Valls, G., Fernandez-Torres, M. A., Gans, F., ... & Mahecha, M. D. (2024). DeepExtremeCubes: Integrating Earth system spatio-temporal data for impact assessment of climate extremes. arXiv preprint arXiv:2406.18179.
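The regional-thresholding idea (pooling samples across an ecoregion before estimating the extreme quantile, instead of a separate quantile per pixel) can be sketched in a few lines; the anomaly data, region labels and the 5th-percentile choice are invented for illustration.

```python
import numpy as np

def regional_thresholds(values, region_ids, q=0.05):
    """Extreme-event thresholds per ecoregion from pooled samples, rather
    than a per-pixel quantile (which is unstable for short time series)."""
    return {r: float(np.quantile(values[region_ids == r], q))
            for r in np.unique(region_ids)}

# invented vegetation-index anomalies assigned to three ecoregions
rng = np.random.default_rng(0)
anoms = rng.normal(0.0, 1.0, 5000)
regions = rng.integers(0, 3, 5000)
thresholds = regional_thresholds(anoms, regions, q=0.05)
```

Because each threshold is estimated from all samples in its region, a seven-year Sentinel-2 record yields far more samples per threshold than any single pixel could provide.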
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Towards a Resilient Future: CENTAUR’s Integrated Approach to Climate-Security and Early-Warning Systems

Authors: Valerio Botteghelli, Marco Corsi, Simone Tilia, Adriano Benedetti Michelangeli
Affiliations: e-GEOS, https://centaur-horizon.eu/
Climate change is increasingly acknowledged as a threat multiplier, exacerbating existing vulnerabilities and amplifying risks to human security. Environmental hazards such as floods, droughts, and storms not only bring disastrous impacts on ecosystems but also contribute to political instability, economic disruptions, and social unrest. Addressing these compounded challenges is critical for global peace and security, and requires a comprehensive approach integrating climate science, geospatial intelligence, and early-warning systems. Launched in 2022, the CENTAUR project is an initiative funded by the European Commission to reduce the increasing risks posed by climate extremes by enhancing situational awareness and preparedness for climate-related security crises. Through the development of innovative indicators and data-driven tools, CENTAUR aims to improve anticipatory capabilities, support informed decision-making, and enhance resilience to climate-induced security threats. The project focuses on two key domains: urban flood risks and water/food insecurity, both of which are critical drivers of conflict, displacement, and humanitarian crises. CENTAUR employs a multidisciplinary approach, combining satellite-based Earth observation (EO) data with socio-economic indicators to assess the impact of climate extremes on vulnerable populations and critical infrastructure. Within the urban flood domain, the project has developed 11 innovative indicators that enhance flood risk assessment by integrating high-resolution meteorological, hydrological, and socio-economic data. These indicators provide early warnings of impending flood events, evaluate their potential impacts on urban areas, and support long-term recovery and resilience forecasting.
Moreover, the project integrates geo-referenced media data, social media analysis, and advanced flood modeling techniques such as InSAR and high-resolution DTMs to deliver accurate flood extent predictions and damage assessments. In the domain of water and food insecurity, CENTAUR has elaborated a set of 22 indicators that measure the interconnectedness between resource scarcity and political instability. These indicators model the potential for conflict, displacement, and instability, providing early warnings of emerging crises linked to water and food insecurity. By combining meteorological and agricultural data with socio-economic vulnerabilities, CENTAUR’s tools allow decision-makers to identify high-risk areas, assess the likelihood of climate-induced political unrest, and take preventive actions. The CENTAUR initiative follows a user-driven approach, with active participation from a wide array of stakeholders, including United Nations agencies, NGOs, and EU civil protection authorities. Feedback from the end-users, collected through targeted workshops, is being integrated into the project to ensure the relevance and effectiveness of the services being developed. CENTAUR is testing its indicators both on historical “cold cases” (well-documented past crisis events) and on real-time “hot cases”, enabling continuous monitoring of crisis situations and the fine-tuning of early warning systems. Eight use cases have been selected for testing the CENTAUR platform and its early-warning services, covering a diverse range of geographic regions and thematic focuses. The use cases address critical problems at the nexus of urban flood risk, water and food insecurity, political stability, and humanitarian crises. CENTAUR contributes to enhancing resilience and stability in regions vulnerable to crises induced by climate change through its integration of climate and security data.
The project provides a pre-operational platform for monitoring, forecasting, and responding to climate-security risks, with the intent of building a robust, scalable framework for future applications in crisis management and humanitarian intervention. By linking scientific research, operational tools, and policy-driven solutions, CENTAUR represents a significant milestone toward understanding the complex nexus of climate change and security. Its innovative approach provides an all-encompassing, multi-dimensional understanding of climate-related threats, thus offering timely and data-driven responses to protect vulnerable populations and critical infrastructures from the changing climate.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: New Developments in the Monitoring of Spruce Bark Beetle Infestations with Copernicus Sentinel Data

Authors: Simon König, Dr. Michael Förster, Prof. Dr. Paul Magdon, Dr. Frank Thonfeld, Prof. Dr. Marco Heurich
Affiliations: Faculty of Environment and Natural Resources, University of Freiburg, Bavarian Forest National Park, German Space Agency of the German Aerospace Center, Geoinformation in Environmental Planning Lab, Technical University of Berlin, Faculty of Resource Management, University of Applied Sciences and Arts (HAWK) Hildesheim/Holzminden/Göttingen, German Remote Sensing Data Center of the German Aerospace Center, Department of Forestry and Wildlife Management, Inland Norway University of Applied Sciences
Under the influence of climate change, significant changes in forest disturbance regimes have been observed globally, with disturbances becoming more frequent and severe. This includes abiotic disturbances like windthrow and forest fires, as well as biotic disturbances such as pathogens and insects. Bark beetle infestations, in particular, have caused large-scale forest die-offs worldwide. In Central Europe, major drought events have triggered mass outbreaks of the European Spruce Bark Beetle (Ips typographus), leading to unprecedented levels of Norway Spruce mortality, especially in Germany, Austria, Czechia and Slovakia. Early detection of infestations is crucial for management actions, as infested trees have to be removed within a few weeks, before a new generation of beetles emerges. Earth observation offers great potential for monitoring bark beetle infestations efficiently. Historically, Landsat has been most commonly used in this regard. Copernicus data, however, especially from Sentinel-2, has emerged as a very valuable asset, combining a suitable spatial, temporal, and spectral resolution to effectively detect and monitor infestations. Various studies have shown that Sentinel-2 data can capture infestation dynamics well and detect infested areas reliably. However, despite some progress, the early detection of infested trees (early enough to successfully remove them before the next generation of beetles has developed) remains a challenge. Multiple studies have also used Sentinel-1, which provides free and open SAR data at an unmatched spatial resolution, although detecting bark beetle infestations with Sentinel-1 is more challenging due to its C-band imagery. In this presentation, we discuss multiple innovations in the monitoring of bark beetle infestations via Copernicus data. All these innovations were tested in the Bavarian Forest National Park, a protected forest area in Southeastern Germany.
The park administration has been collecting spatially explicit data on bark beetle infestations since 1988, making the park Germany’s best-covered forest remote sensing site. We built a consistent data cube of all available Sentinel-1, Sentinel-2, and Landsat data for the national park using the Framework for Operational Radiometric Correction for Environmental monitoring (FORCE) software, on which we based our analyses. First, we tested whether combining the time series of all three sensors benefits the detection of infestations. Based on the reference data, we computed infestation probabilities from the time series using Bayesian conditional probabilities as well as random forests. Next, we tested the spatial and temporal accuracy of five different sensor configurations in the detection of infestations:
• Landsat only,
• Sentinel-2 only,
• Sentinel-1 only,
• Landsat/Sentinel-2 combined,
• all sensors combined.
Our results show that a combination of sensors yields no benefits for the detectability of infestations. Sentinel-2 only achieved the highest spatial accuracy (0.93) as well as the best detection timeliness. Landsat only and Landsat/Sentinel-2 combined achieved good results as well, but did not improve over Sentinel-2 only. Both configurations involving Sentinel-1 achieved inferior results. Second, since Sentinel-2 proved to be the most suitable sensor in this comparative study, we tested further improvements in the detection of infestations based on this sensor. Multiple studies have shown the particular suitability of two areas of the electromagnetic spectrum: the Red Edge (RE) and Shortwave Infrared (SWIR) range. Yet, a vegetation index that combines imagery from these two spectral ranges had not previously been proposed. We tested all possible combinations of Sentinel-2’s three RE and two SWIR bands via a normalized difference (NDVI-like) index.
The combination of its 2nd RE and 1st SWIR bands emerged as the most suitable index, which we called NDRESW. We used a non-parametric, self-calibrated detection procedure and compared the NDRESW to three vegetation indices commonly applied in the detection of infestations. It showed the highest sensitivity to infestations (despite relatively high commission errors) as well as the timeliest detections. While only few infestations could be detected in the first infestation stages, the NDRESW delivered reliable, early detections of infestations (> 50 % of detections occurred within the first three months after the infestation onset). Lastly, based on these results, we assessed which environmental factors affect the detectability of infestations. For this task, we extended the data cube described above with multiple additional datasets, including ALS-derived forest structure metrics, meteorological data, and spatially explicit metrics of the infestation intensity. Our results indicate that variables connected to the interplay between bark beetles and spruce trees in the surroundings of an area are the most important ones for detectability. This is because they relate to the size of the infested patch, and the larger a patch is, the lower the probability of mixed pixels. A reliable, consistent and ideally early detection of bark beetle infestations is important for forest managers, scientists, conservationists and public administration employees. In the context of the ever-increasing usage of Copernicus data in the monitoring of temperate forests, our results show that Sentinel-2 is very well able to capture the dynamics of bark beetle infestations. Yet, further research is key to extending our results to larger areas, and further improvements should be explored. Future sensors, e.g. ESA’s CHIME satellite, will further improve the monitoring of bark beetle infestations from space, especially their early detection, which remains a key challenge.
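As described, NDRESW is an NDVI-like normalized difference of Sentinel-2's 2nd Red Edge band (band 6) and 1st SWIR band (band 11). The sketch below assumes that band mapping and the sign convention (RE minus SWIR), which the abstract does not spell out; the reflectance values are invented.

```python
import numpy as np

def ndresw(re2, swir1, eps=1e-6):
    """NDVI-like normalized difference of Sentinel-2 band 6 (2nd Red Edge)
    and band 11 (1st SWIR); band order and sign here are assumptions."""
    re2 = np.asarray(re2, dtype=float)
    swir1 = np.asarray(swir1, dtype=float)
    return (re2 - swir1) / (re2 + swir1 + eps)

# invented reflectances: healthy spruce vs stressed (infested) spruce
healthy = ndresw(0.30, 0.10)
stressed = ndresw(0.25, 0.20)
```

In a detection time series, a drop of this index below a self-calibrated baseline would flag a candidate infestation.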
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Integrating EO and OSINT for Enhanced Conflict Analysis in Fragile Settings in Sub-Saharan Africa

Authors: Dr. Theophilos Valsamidis, Dr. George Benekos, Mr Alexandros Voukenas, Mr Konstantinos Pilaftsis, Alix Leboulanger
Affiliations: Planetek Hellas, Janes
The joint utilization of Earth Observation (EO) and Open-Source Intelligence (OSINT) offers an innovative approach to understanding fragility in conflict-prone regions. Under contracts awarded by ESA in the framework of its EOLAW and EO4SECURITY activities, innovative methodologies to assess the Security State of Fragility in Sub-Saharan Africa, focusing on Southern Somalia and Northern Mozambique, were developed and applied. This research explores the integration of EO-derived data, GIS analysis, and OSINT to provide actionable insights for international stakeholders. The OSINT methods applied encompass the identification of conflict actors, the classification of conflict events, and the retrieval of critical information such as photographic evidence and testimonials. Concurrently, EO/GIS techniques enable precise geolocation of conflict events using change detection, assessment of rural environments in terms of accessibility and actors’ movement patterns using multi-criteria analysis, and appropriate visualization techniques to reveal spatial concentrations of conflict activity. Key findings demonstrate the effectiveness of a workflow in which EO/GIS analysis is guided by the OSINT findings. This approach enhances the understanding of conflict landscapes, and more specifically: (a) the profiles and interactions of involved actors, (b) areas of intense activity, and (c) environmental determinants of conflicts. Depending on data availability, the analyses provided either high-level regional insights or detailed geospatial intelligence at localized scales. These results highlight the potential for innovative EO and OSINT integration to support intergovernmental organizations, such as Interpol, in addressing fragility. The outcomes underline the value of this approach in improving risk assessments at regional, national, and subnational levels, paving the way for broader adoption of EO in fragile settings analysis.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Change detection using SAR tomography

Authors: Yuki Yamaguchi
Affiliations: NEC Corporation
In this study, we propose a change detection method that generates background images suitable for each observation using Bayesian Synthetic Aperture Radar (SAR) tomography and compares them with observed images probabilistically. In change detection using SAR images, it is difficult to define an image representing the unchanged state, i.e., a “background image”, due to orbital variation between observations and the effect of random noise. The proposed method can generate background images adjusted to the orbital position of each observation using 3-dimensional information of the unchanged state reconstructed by Bayesian SAR tomography. The Bayesian approach allows us to evaluate the effect of random noise on each observation probabilistically. The method makes it possible to assess post-disaster damage rapidly and accurately. Specifically, by reconstructing 3-dimensional information of the unchanged state using time-series SAR images acquired before a disaster and then applying the method to an image observed immediately after the disaster, rapid and accurate change detection is possible. SAR is a sensing technology that is expected to be used for various applications. In SAR, microwaves are transmitted, and amplitude and phase signals are obtained at high resolution (e.g., 0.5–3 m). SAR can observe a large area at once, day and night, and in all weather conditions. Taking advantage of these strengths, SAR has been widely used for terrestrial monitoring, such as post-disaster damage assessment and urban development. SAR images are complex-valued, and sensitive change detection can be realized by comparing them using phase information as well as amplitude information. Coherence change detection (CCD) is a typical method for comparing complex images. CCD detects changes between a SAR image pair observing the same area at different times by evaluating the similarity of amplitude and phase in local regions.
CCD can detect subtler changes than amplitude-based change detection techniques because it considers the similarity of phase as well as amplitude. However, it is intrinsically difficult to generate a "background image" representing the unchanged state suitable for such analysis, including CCD. This is because the orbit of a SAR satellite varies slightly for each observation, and consequently, the obtained SAR images look different even if no change has occurred. While spectral filtering can be applied to mitigate spectral and Doppler decorrelation, this is not the case for volumetric scattering, which occurs in urban as well as vegetated areas. In addition, in the low signal-to-noise regions of a SAR image, the effect of random noise becomes significant, making SAR images differ between observations. Therefore, when comparing SAR images observed at different times, as in CCD, it is impossible to distinguish whether detected changes actually occurred or are false positives due to orbital variation or the effects of random noise. It is thus necessary to generate background images with orbital positions adjusted for each observation and to compare them with the observed images while considering the effect of random noise. This study proposes a change detection method robust to orbital variation using Bayesian SAR tomography. The proposed method generates a suitable background image for each observation using Bayesian SAR tomography and compares it with the observed SAR image probabilistically to detect changes. The proposed method can generate background images adjusted to the orbital position of each observation using 3-dimensional information of the unchanged state reconstructed by Bayesian SAR tomography. The Bayesian approach allows us to evaluate the effect of random noise on each observation probabilistically. In the proposed method, we first reconstruct 3-dimensional information of the unchanged state by Bayesian SAR tomography.
SAR tomography is a method for reconstructing the complex amplitude distribution of scatterers along the elevation direction using multi-baseline and multi-temporal SAR images. Here, we adopt SAR tomography using Sparse Bayesian Learning to reconstruct 3-dimensional information together with its posterior distribution. Next, the proposed method calculates a background image adjusted to the orbital position of the i-th image, for which change is to be detected, as a predictive distribution based on the results of Bayesian SAR tomography. This study assumes the predictive distribution to be a complex circular Gaussian and calculates the distribution's mean y(x,i) and standard deviation σ̂(x,i) at each azimuth-range position x: y(x,i) = α_bg(x)^T r(x,i), σ̂(x,i)^2 = σ(x)^2 + r(x,i)^H Σ(x) r(x,i), where α_bg(x) is the complex amplitude distribution in the elevation direction estimated by Bayesian SAR tomography, r(x,i) is the i-th row of the steering matrix, σ(x) is the noise accuracy parameter, and Σ(x) is the variance-covariance matrix of the posterior distribution of the 3-dimensional information. Because r(x,i) reflects the orbital position of the i-th image, y(x,i) represents the inferred signal adjusted to that orbital position. Therefore, the proposed method can suppress false positives originating from orbital variation and random noise by comparing the observed signal g(x,i) with the inferred signal y(x,i) while considering the standard deviation σ̂(x,i). The proposed method detects changes by evaluating the probability of the observed signal under the predictive distribution, i.e., the posterior probability p(g(x,i)|y(x,i),σ̂(x,i)). This study evaluates the exponential part of p(g(x,i)|y(x,i),σ̂(x,i)): the smaller its value, the more significant the discrepancy between the observed and inferred signals, indicating that a change has occurred.
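The predictive check described above can be illustrated with a toy numpy sketch for a single azimuth-range pixel. All array values and names (alpha_bg, Sigma, r_i, etc.) are illustrative placeholders, not the authors' implementation: the mean and variance follow the abstract's formulas y = α_bg^T r_i and σ̂² = σ² + r_i^H Σ r_i, and the change statistic is the (log-domain) exponential part of the complex circular Gaussian likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: posterior over an 8-bin elevation profile at one pixel (illustrative).
n_elev = 8
alpha_bg = rng.normal(size=n_elev) + 1j * rng.normal(size=n_elev)  # posterior mean profile
Sigma = 0.01 * np.eye(n_elev)        # posterior covariance of the profile
sigma_noise2 = 0.05                  # noise variance sigma(x)^2
r_i = np.exp(1j * rng.uniform(0, 2 * np.pi, n_elev))  # steering row for orbit i

# Predictive distribution of the unchanged signal at orbit i (complex circular Gaussian):
#   y = alpha_bg^T r_i,   sigma_hat^2 = sigma^2 + r_i^H Sigma r_i
y = alpha_bg @ r_i
sigma_hat2 = sigma_noise2 + np.real(np.conj(r_i) @ Sigma @ r_i)

def change_statistic(g):
    """Exponential part of p(g | y, sigma_hat^2); more negative => stronger change."""
    return -np.abs(g - y) ** 2 / sigma_hat2

# An "unchanged" observation (noise only) vs. a strongly discrepant one.
g_unchanged = y + np.sqrt(sigma_noise2 / 2) * (rng.normal() + 1j * rng.normal())
g_changed = y + 5.0 * np.sqrt(sigma_hat2)
```

The statistic for `g_changed` is far more negative than for `g_unchanged`, which is exactly the thresholding criterion the abstract describes.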
In this study, we apply the proposed method to real SAR data to evaluate its performance. We use 55 TerraSAR-X images acquired over Haneda Airport from June 8, 2011, to February 11, 2014, focusing on the maintenance area of the airport, which experiences heavy vehicle and aircraft traffic. First, we compare the proposed method with conventional CCD. In the conventional CCD, coherence values are calculated between adjacent time-series SAR image pairs using a 3×3 boxcar filter. The result clearly shows that the proposed method successfully detects the appearance of vehicles and aircraft even in areas where it is difficult to detect changes with conventional CCD due to the false decrease of coherence values caused by differences between images. Next, we compare the proposed method with a change detection method that uses only amplitude information, which detects changes by evaluating the distance between the observed amplitude and the average amplitude along the time series. The result shows that the proposed method suppresses false positives better than the amplitude-only method.
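For reference, the conventional CCD baseline used above (coherence between an image pair with a 3×3 boxcar filter) can be sketched as follows. This is a generic, minimal numpy implementation of sample coherence, not the authors' code; the brute-force loops are for clarity only.

```python
import numpy as np

def coherence(s1, s2, win=3):
    """Sample coherence magnitude of two co-registered complex SAR images,
    estimated with a win x win boxcar (moving-average) filter."""
    def boxcar(a):
        out = np.zeros(a.shape, dtype=complex)
        pad = np.pad(a, win // 2, mode="edge")
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = pad[i:i + win, j:j + win].sum()
        return out
    num = boxcar(s1 * np.conj(s2))
    den = np.sqrt(np.abs(boxcar(np.abs(s1) ** 2)) * np.abs(boxcar(np.abs(s2) ** 2)))
    return np.abs(num) / np.maximum(den, 1e-12)

rng = np.random.default_rng(1)
s = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
gamma_same = coherence(s, s)  # identical images: coherence = 1 everywhere
gamma_rand = coherence(s, rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16)))
```

For an identical pair the coherence is 1 everywhere, while for two independent images it drops well below 1, which is why a coherence decrease is read as change (or, as the abstract notes, as a false positive caused by orbital variation).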
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: An Operational Emergency Flood Mapping System in Scotland Using SAR Data

Authors: Dr Morgan Simpson, Dr Cristian Silva-Perez, Dr Armando Marino, Professor Peter Hunter
Affiliations: University Of Stirling, Keen AI
Increases in flood events have been attributed to land-use change, climate change and watershed management changes. Extreme rainfall events are expected to increase in both intensity and frequency with climate change. Flooding causes damage to infrastructure, such as roads, railways, buildings and agricultural land, as well as posing danger to ecosystems and human lives through the event itself or through the transfer of biological and industrial waste. One-third of annual natural disasters are flood events, and more than half of all natural-disaster victims are flood-related. A focus on flood response is therefore needed for the future management of these events. Scotland is a country prone to high rainfall and flood events, with annual flood damage of approximately £252 million between 2016 and 2021. The Scottish Environment Protection Agency (SEPA) has proposed a mixture of flood risk mitigation strategies, including awareness raising, flood forecasting, maintenance, planning policies and emergency plans/response (SEPA, 2015). Flood detection using Synthetic Aperture Radar (SAR) remote sensing has received substantial attention in the scientific literature, and more recently, machine learning and artificial intelligence have been applied to aid the classification and mapping of flood events. Here, we focus on SEPA's new Satellite Emergency Mapping System (SEMS), which uses state-of-the-art satellite imaging technology to deliver real-time, high-resolution data and insights that enhance decision-making capabilities and enable faster, more efficient response efforts when disaster strikes, offering a significant boost to Scotland's resilience against disasters. SEMS forms part of the International Charter Space and Major Disasters, a global network of over 270 satellites from 17 Charter members around the world, working to support disaster relief.
SEPA is the only organisation in Scotland able to activate the Charter and give emergency responders access to critical satellite imagery (here focussing on Sentinel-1, TerraSAR-X and RADARSAT-2 data). SEMS operates 365 days a year, with an on-call provision available 24 hours a day. Its use is primarily focussed on Central Scotland, with specific focus on the Forth Catchment. Approximately 25% of Scotland's population is situated within the catchment, which spans an area of 3,000 km². The catchment land use is dominated by rural usage, notably managed forests and farmland; however, substantial urbanisation is found within Stirling and the surrounding villages. While the catchment lies primarily within the central lowlands, hill ranges such as the Ochil Hills and Lomond Hills surround the Forth Valley. Although SEMS launched in September 2024, here we show flood maps created from two previous Charter activations, from November 2022 and October 2023. The flood maps are created using a deep-learning U-Net Convolutional Neural Network (CNN) for flood mapping on Sentinel-1 imagery, as well as rapid thresholding techniques. We focus on the system itself and the flood maps generated during this period.
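The "rapid thresholding" component mentioned above typically exploits the low backscatter of smooth open water in SAR imagery. The abstract does not specify the threshold selection method; the sketch below uses Otsu's histogram-based threshold as one common choice, implemented in plain numpy on synthetic dB values (all numbers are illustrative).

```python
import numpy as np

def otsu_threshold(db, nbins=256):
    """Otsu's threshold: pick the histogram cut that maximizes
    between-class variance, separating dark water from brighter land."""
    hist, edges = np.histogram(db, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    w0 = np.cumsum(p)                # cumulative class probability
    m = np.cumsum(p * centers)       # cumulative mean
    mg = m[-1]                       # global mean
    between = (mg * w0 - m) ** 2 / (w0 * (1 - w0) + 1e-12)
    return centers[np.argmax(between)]

rng = np.random.default_rng(2)
water = rng.normal(-20, 1.5, 5000)   # smooth open water: low sigma0 (dB), illustrative
land = rng.normal(-8, 2.0, 5000)     # land clutter: higher sigma0 (dB), illustrative
db = np.concatenate([water, land])

t = otsu_threshold(db)
flood_mask = db < t                  # pixels darker than the threshold -> water/flood
```

With well-separated water and land modes the threshold lands between them and the mask recovers almost all water pixels; in practice this would be applied per Sentinel-1 scene, followed by the deep-learning classification described in the abstract.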
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: The use of Multi-temporal Interferometry to monitor pre-failure ground displacement

Authors: Mariana Ormeche, Ana Paula Falcão, Rui Carrilho Gomes
Affiliations: Instituto Superior Técnico, Universidade de Lisboa
The term landslide refers to any kind of slope movement of rock, earth or debris masses. Between 2000 and 2024, landslides impacted approximately 6.7 million people, caused over 18,000 deaths and resulted in 6.9 billion dollars of damage. Current engineering practices for landslide risk assessment aim to determine a slope safety factor using the ratio between resisting and destabilizing stresses, or indirect stability indicators such as rainfall thresholds, which may be prone to false alarms. The stability conditions of a slope can, however, be directly linked to its kinematics (e.g. displacement and velocity). In fact, through its lifetime, a landslide goes through a sequence of three deformation stages: the initial deformation stage, the uniform deformation stage and the accelerating deformation stage. The accelerating deformation stage typically defines an active landslide, characterized by exponentially increasing displacement and velocity curves. The outcomes of this stage are either collapse or the reaching of a new equilibrium. With Multi-temporal Interferometry (MTI), millimeter-scale ground displacement time series over large areas can be retrieved. Thus, MTI using spaceborne SAR over landslide-prone areas can prove to be a cost-effective technique for landslide risk assessment. To test the advantages and limitations of MTI for monitoring pre-failure landslide displacement, a corner reflector (CR) was used to simulate the displacement of the accelerating deformation stage of a slope. Over the course of two months, exponentially increasing displacements were applied to the CR to test the ability of Sentinel-1 to capture this deformation pattern, thereby testing its suitability for landslide risk assessment.
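The accelerating deformation stage described above is often analysed via the inverse-velocity (Fukuzono-style) extrapolation: as failure approaches, 1/velocity trends toward zero, and extrapolating that trend gives a failure-time estimate. The sketch below generates a synthetic exponential displacement series like the one applied to the corner reflector (all constants are illustrative, not the experiment's values) and fits the tail of the inverse-velocity curve.

```python
import numpy as np

# Synthetic accelerating-deformation stage: exponentially growing displacement.
t = np.linspace(0, 50, 200)      # time (days), illustrative
d = 2.0 * np.exp(0.08 * t)       # displacement (mm), illustrative growth rate

v = np.gradient(d, t)            # velocity (mm/day)
inv_v = 1.0 / v                  # inverse velocity, decreasing as failure nears

# Inverse-velocity extrapolation: fit a line to the last quarter of 1/v
# and extrapolate to zero to estimate a failure time.
a, b = np.polyfit(t[-50:], inv_v[-50:], 1)
t_fail_est = -b / a              # time where the fitted line crosses zero
```

The fitted slope is negative and the extrapolated failure time lies beyond the last observation, which is the qualitative behaviour an early-warning system would look for in MTI time series.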
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Temporal disaggregation of high-resolution building footprint data using Sentinel 2

Authors: Manuel Huber, Prof. Dr. Christian Geiß, Prof. Dr. Hannes
Affiliations: German Aerospace Center (DLR)
Identifying building footprints is essential for understanding urbanization and its impact on the environment. These footprints are used, for example, to assess urban structures and surface sealing, and they serve as a proxy for population assessments. Furthermore, they are related to urban heat island effects, air quality, and green space distribution, helping shape urban planning strategies. Understanding the development as well as the environmental implications of urbanization, especially in fast-expanding regions, demands timely and accurate building data. This is also critical for evaluating, for example, risk from natural disasters such as floods and earthquakes. Traditional building footprint extraction methods rely heavily on high-resolution imagery and complex machine learning models. Google's recent multitemporal dataset covering the Global South is a substantial effort in this domain but has notable limitations. It depends on costly and proprietary high-resolution imagery and uses image stacks from Sentinel-2, which hinders fast-response applications and performs poorly in cloud-covered regions. Furthermore, problems related to domain adaptation, for example when aiming for global generalization, can lead to inaccuracies due to the diverse building characteristics across different regions. Our approach addresses these challenges by developing a fully open-source, end-to-end solution using Sentinel-2 imagery and open-source high-resolution building footprint data. The method involves training a locally adapted probabilistic MIMO U-Net model, as U-Nets are a reliable and extensively researched architecture for image segmentation. Additionally, we implement weighted masks in the training process to enhance performance across diverse urban areas, especially as we aim to vectorize the segmentation outputs.
To further optimize the training process and data acquisition, an urban density map was created and used to select a diverse training dataset around the region of interest. In this process we select representative training data that cover both densely built and sparsely populated regions. This localized training process improves the model's ability to adapt to regional variations in building styles, overcoming the shortcomings of generalized approaches. We also integrate an uncertainty estimation layer that captures both aleatoric (data-related) and epistemic (model-related) uncertainties. These uncertainties are computed as outputs of the MIMO U-Net model and can be used directly to set confidence thresholds for predictions, making the model outputs more transparent for high-stakes applications such as disaster risk management. In conclusion, our proposed method represents a scalable, adaptable and open-source solution for building footprint extraction. By utilizing open-source data and tools, our workflow is scalable, cost-effective, and accessible to a broad range of users, including researchers, urban planners, and disaster response teams. This aligns with the need for transparent, reproducible methods in geospatial analysis, particularly in developing regions where resources for data acquisition and high-resolution in-situ data are limited. The iterative update process further ensures that the building footprint data remain accurate and up to date, supporting dynamic urban monitoring and timely decision-making. Our presentation at the Living Planet Symposium will cover the full methodology, showcase case studies, and discuss the practical applications of our approach in urban and environmental risk assessments.
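The confidence-thresholding step described above can be sketched generically: given per-pixel probabilities plus aleatoric and epistemic uncertainty maps (here random placeholders standing in for MIMO U-Net outputs; the thresholds and the combination rule are illustrative assumptions, not the authors' settings), keep only confident predictions and flag the rest for review.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder model outputs for a 64x64 tile (would come from the network).
prob = rng.uniform(0, 1, (64, 64))        # per-pixel building probability
aleatoric = rng.uniform(0, 0.3, (64, 64)) # data-related uncertainty
epistemic = rng.uniform(0, 0.3, (64, 64)) # model-related uncertainty

# One simple way to combine the two uncertainty sources (assumption).
total_unc = np.sqrt(aleatoric**2 + epistemic**2)

# Keep only confident predictions: high probability AND low total uncertainty.
p_thresh, u_thresh = 0.5, 0.3
building_mask = (prob > p_thresh) & (total_unc < u_thresh)
reject_mask = total_unc >= u_thresh       # flagged for manual review

coverage = 1.0 - reject_mask.mean()       # fraction of pixels we commit to
```

Raising `u_thresh` trades coverage against reliability, which is the lever the abstract highlights for high-stakes uses such as disaster risk management.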
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Innovative multicriteria approach for flood risk assessment: A case study in Garyllis river basin, Cyprus.

Authors: Ms Josefina Kountouri, Dr CONSTANTINOS F. PANAGIOTOU, Mrs Alexia Tsouni, Mrs Stavroula Sigourou, Mrs Vasiliki Pagana, Dr Christodoulos Mettas, Dr Evagoras Evagorou, Dr Charalampos (Haris) Kontoes, Professor Diofantos Hadjimitsis
Affiliations: ERATOSTHENES Centre of Excellence, National Observatory of Athens (NOA), Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS), Operational Unit “BEYOND Centre of Earth Observation Research and Satellite Remote Sensing”
Floods are the most frequent and most costly natural hazards at the European scale. Therefore, policymakers and water planners urgently need reliable information to design and implement effective flood management plans that cover the four major components of disaster risk reduction, namely preparedness, response, recovery and mitigation. The proper adaptation of these components in the management plans is especially important in river networks that intersect urban units, since these networks are highly prone to flash floods. As part of the collaborative activities between the ERATOSTHENES Centre of Excellence (ECoE) and BEYOND/IAASARS/NOA, an innovative multicriteria approach is proposed to assess the spatiotemporal evolution of flood risk levels in the Garyllis River basin, which is located in the southern part of the island of Cyprus. Data have been collected from multiple sources, including satellite missions, governmental portals, in situ measurements, and historical records, at different resolutions. For example, a digital elevation model (DEM) with a 5 m resolution was provided by the Department of Land and Surveys of Cyprus, the land use/land cover map of the study area was extracted from Copernicus Land Monitoring Services, whereas daily precipitation data were obtained from nearby ground-based rainfall stations. The collected data have been calibrated via onsite visits and discussions with relevant actors, harmonized in terms of spatial and temporal resolution, and used as inputs to estimate the evolution of surface run-off (HEC-HMS), together with hydraulic simulations (HEC-RAS 2D) to estimate the flow depth for different return periods. The vulnerability levels of the study area are quantified via the weighted linear combination of relevant factors, particularly population age, population density and building properties, according to the latest official governmental reports. In addition, the exposure levels were quantified in terms of land value.
For each flood component, all factors are assigned equal weighting coefficients. Consequently, flood risk levels are evaluated at each location as the product of hazard, vulnerability and exposure levels. The validity of the proposed methodology is evaluated by comparing the critical points that were identified during the field visits with the estimated flood risk levels. On this basis, escape routes and refuge regions were recommended for the worst-case scenario. Overall, this study is expected to help water authorities further align with the EU Floods Directive 2007/60/EC, support social awareness regarding the actions that need to be taken, and recommend appropriate mitigation measures.
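The multicriteria scheme above (equal-weight linear combination of vulnerability factors, then risk as the product of hazard, vulnerability and exposure) can be sketched on a toy grid of four cells; all values are illustrative, normalised to [0, 1].

```python
import numpy as np

# Vulnerability factors per cell (illustrative, normalised to [0, 1]).
pop_age = np.array([0.2, 0.8, 0.5, 0.9])
pop_density = np.array([0.1, 0.9, 0.4, 0.7])
building_props = np.array([0.3, 0.7, 0.6, 0.8])

# Equal weighting coefficients, as stated in the abstract.
weights = np.full(3, 1 / 3)
vulnerability = weights @ np.vstack([pop_age, pop_density, building_props])

hazard = np.array([0.05, 0.9, 0.3, 0.6])   # e.g. from flow depth per return period
exposure = np.array([0.2, 0.8, 0.5, 0.9])  # e.g. land value, normalised

# Risk as the product of the three components, per location.
risk = hazard * vulnerability * exposure
```

The cell with jointly high hazard, vulnerability and exposure dominates the ranking, which is the behaviour used to prioritise escape routes and refuge regions.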
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: The FLOWS Project – Improving Flood Crisis Management Through Earth Observation Solutions

Authors: Benjamin Palmaerts, Andrés Camero Unzueta, Sébastien Dujardin, Rink W. Kruk, Dr Lisa Landuyt, Sam Leroux, Sandro Martinis, Pieter Simoens, Eric Hallot
Affiliations: Remote Sensing and Geodata Unit, Scientific Institute of Public Service (ISSeP), Earth Observation Center, German Aerospace Center (DLR), Department of Geography, University of Namur, Belgium National Geographic Institute (NGI-IGN), Remote Sensing, Flemish Institute for Technological Research (VITO), IDLab, Department of Information Technology, Ghent University
Europe is increasingly experiencing devastating floods, highlighting the severe impacts of climate change and exposing vulnerabilities in land use and population safety. Earth Observation (EO) technologies offer significant potential for flood crisis management, but the catastrophic floods in Belgium and Germany in July 2021 revealed gaps in the effective use of EO data, largely due to a lack of methods adapted to the needs of crisis managers and insufficient awareness of available tools. The FLOWS project addresses these challenges by determining how and when EO data and derived products can optimally support flood crisis management across three phases: crisis response, aftermath, and reconstruction. The project builds on the experiences of crisis managers during the 2021 floods and a comprehensive analysis of EO data acquired during the event. Using a problem-tree approach, the project identifies geospatial challenges faced by first responders and stakeholders, including crisis centers, emergency services, authorities, municipalities, and water managers. These stakeholders are actively involved in the process, providing input to guide development and validation through Agile prototyping. Key innovations include methodologies to enhance situational awareness, such as leveraging multi-sensor EO data through optimized algorithms for UAVs, commercial SAR, and Sentinel-1/2 data. Real-time computer vision pipelines enable adaptive UAV flight adjustments and efficient onboard processing of RGB imagery, ensuring rapid detection of flooded areas and victims, and providing first responders with essential data. Social media and mobile phone data are integrated into GIS-based solutions to map population dynamics at fine spatial and temporal scales. Additionally, deep learning methods are applied to assess flood-induced damage across diverse environments, including urban and rural areas. 
Using very high-resolution (VHR) and Sentinel data, the project develops automated tools to classify and map impacts on buildings, transportation networks, vegetation, and riverbanks, supporting both immediate response and long-term recovery planning. Finally, heterogeneous data sources are integrated into disaster hot spot maps using probabilistic fusion techniques. These maps provide an up-to-date situational overview of the most affected areas, enabling crisis managers to prioritize response efforts and allocate resources efficiently. By integrating EO methodologies and engaging key stakeholders, FLOWS aims to advance flood preparedness and crisis management, ultimately supporting flood-affected populations and fostering resilient communities.
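The abstract does not specify the probabilistic fusion rule used for the hot spot maps; one common choice, shown here as a hedged sketch, is naive Bayesian fusion of per-source probability layers by summing log-odds under an independence assumption. The three input layers below are illustrative placeholders for SAR-, optical- and report-derived flood probabilities.

```python
import numpy as np

def fuse_logodds(prob_layers, prior=0.5):
    """Fuse per-source probability layers into one map by summing log-odds
    relative to a common prior (naive independence assumption)."""
    prior_lo = np.log(prior / (1 - prior))
    lo = prior_lo + sum(
        np.log(np.clip(p, 1e-6, 1 - 1e-6) / (1 - np.clip(p, 1e-6, 1 - 1e-6))) - prior_lo
        for p in prob_layers
    )
    return 1 / (1 + np.exp(-lo))  # back to probability

# Illustrative 2x2 probability layers from heterogeneous sources.
sar = np.array([[0.9, 0.2], [0.6, 0.1]])      # e.g. Sentinel-1 flood probability
optical = np.array([[0.8, 0.3], [0.5, 0.2]])  # e.g. Sentinel-2 flood probability
social = np.array([[0.7, 0.5], [0.5, 0.4]])   # e.g. geolocated report density

hotspot = fuse_logodds([sar, optical, social])
```

Pixels where several sources agree are pushed toward 0 or 1, sharpening the situational overview that crisis managers use to prioritise resources.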
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Human-Caused Wildfire Ignition Risk Modelling - a Comparison of Different Regions in Europe, Using Remote Sensing and Geodata

Authors: Julian Schöne, Prof. Dr. Christian Geiß, Dr. Michael Nolde, Moritz Rösch, Dr. David
Affiliations: DLR e.V., University of Bonn, United Nations University, Institute for Environment and Human Security
Wildfires are of increasing global concern due to their devastating effects and their role in climate change through positive feedback loops from greenhouse gas emissions. While between 95 and 97% of wildfires in Europe are triggered by human-related factors, cross-regional fire prediction research has to date mostly focused on weather data and fuel characteristics. The presented work addresses this gap by proposing a predictive spatial modelling approach for human-caused fire ignition risk and by analysing the contributing factors. Because they use openly available geospatial and remote sensing data, the random forest models built here are transferable to any study area in Europe when trained with local data. In this study, the models were applied to and tested in four distinct study areas: northern Portugal, north-western Spain, the Athens metropolitan region and its surroundings (Greece), and Brandenburg, Germany. For each study area, eight models were trained and evaluated, incorporating different combinations of up to seven explanatory variables that aim to capture human-caused fire ignition. These variables are mainly related to human presence and activities in wildland areas, primarily measured by the distance to different kinds of infrastructure. Each region's best-performing model, which in every case was the model including all seven variables, included distance to forested areas and distance to agricultural areas. The results revealed substantial regional variations in performance, with exceptional performance in Brandenburg (F1-score: 0.97), high accuracy in Greece (F1-score: 0.86) and moderate performance in Spain and Portugal (F1-scores: 0.65 and 0.59). The predominant variables contributing to human-caused ignition risk in the Mediterranean regions are distance to railways and the wildland-urban interface. In Brandenburg, distance to footpaths was found to be the primary factor.
Interestingly, military training areas showed a strong spatial correlation with fire ignitions, although they were not included as a variable in the analysis. When used in conjunction with dynamic live fuel and weather maps, the results can provide policymakers and stakeholders with valuable tools for implementing targeted, localized fire risk reduction measures and optimizing resource allocation for fire management. The transferability of the methodology and the identification of region-specific risk factors can help to develop locally tailored fire prevention strategies across Europe.
Keywords: human-caused wildfire ignition – machine learning – random forest – ignition risk prediction – disaster prevention
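The distance-to-infrastructure explanatory variables described above are typically rasterised as distance maps. The sketch below computes such a map by brute force in plain numpy (a toy example; in practice a distance transform would be used, and the grid, cell size and feature layer are illustrative assumptions).

```python
import numpy as np

def distance_to_features(feature_mask, cell_size=100.0):
    """Euclidean distance (in metres, brute force) from every grid cell to the
    nearest cell of a binary feature layer (e.g. roads, railways, footpaths)."""
    ys, xs = np.nonzero(feature_mask)
    feats = np.stack([ys, xs], axis=1)               # (n_features, 2)
    h, w = feature_mask.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    # Pairwise distances from every cell to every feature cell, then take the min.
    d = np.sqrt(((grid[:, :, None, :] - feats[None, None, :, :]) ** 2).sum(-1))
    return d.min(-1) * cell_size

# Illustrative 20x20 grid with an east-west railway line along row 10.
railways = np.zeros((20, 20), dtype=bool)
railways[10, :] = True
dist = distance_to_features(railways)  # metres to nearest railway cell
```

Stacking such layers (one per infrastructure type) per cell yields exactly the kind of feature table a random forest classifier is trained on in this study.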
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Assessment of Different Synthetic Aperture Radar (SAR) Systems for Mapping Floating Pumice Rafts After Submarine Volcanic Eruptions

Authors: Dr Melanie Brandmeier, Dr Simon Plank, Marco Lutz
Affiliations: Technische Hochschule Würzburg-Schweinfurt, German Remote Sensing Data Center, German Aerospace Center (DLR)
Floating pumice rafts generated by submarine volcanic eruptions pose significant risks to maritime activities, fisheries, tourism, and coastal populations. Tracking these rafts is crucial to mitigate their potentially wide-ranging impacts. While previous approaches have primarily relied on optical satellite data, their effectiveness is often limited by cloudy conditions. In this study, we investigated the potential of cloud-penetrating Synthetic Aperture Radar (SAR), a method not previously evaluated for this purpose. We processed and analyzed data from three SAR systems: TerraSAR-X operating in the X-band, Sentinel-1 in the C-band, and ALOS-2 in the L-band. For Sentinel-1 data, both amplitude information and results from a polarimetric decomposition based on the complex SAR data were examined. In contrast, for TerraSAR-X and ALOS-2, only amplitude data were evaluated. The results demonstrate that the polarimetric properties of the pumice rafts do not provide any advantage over amplitude data for mapping purposes. Consequently, polarimetric decomposition does not offer a sufficient basis for developing an automated approach to pumice raft tracking. Within the amplitude data, the co-polarized (co-pol) channel proved more suitable for mapping than the cross-polarized (cross-pol) channel. This is due to the higher contrast between the pumice rafts and the surrounding water and the reduced influence of noise in the co-pol channel across all investigated SAR bands. However, a limitation of SAR emerged in scenarios where pumice rafts were partially or fully submerged after a longer period following the eruption. This indicates that SAR data are not well-suited for the long-term tracking of pumice rafts. Instead, SAR is particularly valuable for identifying and manually tracking floating pumice during eruption events and under cloudy conditions, where visibility for optical sensors is significantly reduced.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Hybrid Deep Learning for Oil Spill Mapping: Leveraging Sentinel-2 and Foundation Models

Authors: Christos GE Anagnostopoulos, Konstantinos Vlachos, Anastasia Moumtzidou, Dr. Ilias Gialampoukidis, Dr. Stefanos Vrochidis, Dr. Ioannis Kompatsiaris
Affiliations: Centre for Research & Technology Hellas
Oil spills pose significant environmental threats, necessitating efficient and accurate detection methodologies. Optical satellite imagery from missions like Sentinel-2 can potentially provide essential data for monitoring these occurrences, on top of traditional approaches such as SAR; yet challenges arise from variations in water characteristics, atmospheric interference, the complex spectral signatures of different oil types, and sun-glint effects, among others. Traditional detection approaches frequently depend on models trained on limited datasets, which may lack generalizability and robustness across diverse environmental conditions. This study explores the enhancement of oil spill mapping with Sentinel-2 data through the development of a hybrid deep learning model and acts as a proof of concept for the use of Foundation Models in water applications. The Marine Debris and Oil Spill (MADOS) dataset is employed for training and evaluation in this study. MADOS is a meticulously curated benchmark consisting of 174 Sentinel-2 scenes acquired from 2015 to 2022, encompassing various environmental conditions and covering approximately 1.5 million pixels across 15 thematic classes, including oil spills. Notably, it includes a total of 2,803 patches, of which 361 patches correspond to oil spill cases (234,568 pixels), serving as critical training samples. This reflects a significant class imbalance, as the oil spill class constitutes a small fraction of the dataset. The proposed hybrid deep learning framework combines a state-of-the-art marine debris and oil spill detection model (MariNeXt) with a Sentinel-2 Foundation Model specialized for water applications (HydroFoundation). MariNeXt utilizes the SegNeXt architecture, is pretrained on the MADOS dataset, and has been shown to be proficient in capturing contextual characteristics pertinent to oil spills.
The HydroFoundation model implements a Swin v2 Transformer encoder pretrained on a vast amount of Sentinel-2 data, adept at extracting comprehensive representations of water bodies and relevant features. In contrast to prior methodologies, the hybrid model leverages the strengths of both architectures by integrating the Swin v2 Transformer encoder with the MariNeXt decoder. This integration facilitates advanced feature extraction and effective segmentation specifically designed for oil spill detection. The model is adapted to accept the 11 spectral bands of the MADOS Sentinel-2 data by adjusting the patch embedding layer of the encoder. A progressive fine-tuning method is utilized in which the decoder is frozen to preserve its specialized segmentation capabilities while the encoder adapts to the new input configuration. Subsequently, decoder layers are gradually unfrozen, allowing for joint optimization and harmonious integration between the encoder and decoder. Training and evaluating the model on the MADOS dataset, which provides a diverse set of oil spill instances and environmental conditions, enhances detection accuracy. A comparative analysis is conducted between the original MariNeXt model and the proposed hybrid model on unseen data not used during training. Various metrics are employed to evaluate detection performance, including Overall Accuracy, Precision, Recall, F1-Score, and Intersection over Union. The results provide evidence that the hybrid model shows performance comparable to the MariNeXt model while exhibiting better generalization capabilities, leaving room for further improvement. The issue of label imbalance inherent in the dataset, due to the relatively rare occurrence of oil spill instances compared to non-oil pixels, is addressed through data augmentation techniques.
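The progressive fine-tuning schedule described above (decoder frozen first, then decoder blocks unfrozen step by step) can be sketched framework-agnostically. The layer names, starting epoch, step size and unfreezing order below are all illustrative assumptions, not the authors' actual configuration.

```python
# Generic sketch of a progressive unfreezing schedule (illustrative names/epochs).
decoder_layers = ["dec_block1", "dec_block2", "dec_block3", "seg_head"]

def unfreeze_schedule(layers, start_epoch=5, step=3):
    """Yield (epoch, trainable_decoder_layers): the decoder starts fully
    frozen, then is unfrozen one block at a time from the output end."""
    plan = {start_epoch + i * step: layer for i, layer in enumerate(reversed(layers))}
    trainable = []
    for epoch in range(start_epoch + len(layers) * step):
        if epoch in plan:
            trainable.append(plan[epoch])
        yield epoch, list(trainable)  # copy, so each epoch's list is frozen-in-time

schedule = dict(unfreeze_schedule(decoder_layers))
```

In a real training loop, each epoch's list would control which parameter groups receive gradient updates; before `start_epoch` only the (adapted) encoder trains, and by the final epochs encoder and decoder are jointly optimized.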
The results are promising and underscore the importance and future prospects of Foundation Models in water applications, particularly for complex problems like oil spill detection. Furthermore, the approach in this study acts as a proof of concept and points towards the more widespread adoption of Foundation Models to support other water-related applications, such as parameter retrieval for water quality indicators.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: From Satellite Data to Resilient Farming Systems: Enhancing Drought Monitoring in Mozambique

Authors: Samuel Massart, Mariette Vreugdenhil, Rogerio Borguete Alves Rafael, Martin Schobben, Pavan Muguda Sanjeevamurthy, Carina Villegas-Lituma, Wagner Wolfgang
Affiliations: Technische Universität Wien
With the majority of the rural population of Mozambique relying on rain-fed agriculture, the country is vulnerable to drought events and shifts in rainfall seasonality. Water shortages have strong negative impacts on the productivity of smallholder farms and, subsequently, on food security and household income. In Africa, drought monitoring systems are commonly based on precipitation and temperature indicators. These datasets are low resolution and predominantly based on in-situ information. Hence, accurate monitoring of soil water content is crucial to mitigate the effects of droughts and delayed rainy seasons on rural communities and vegetation ecosystems. In this context, microwave remote sensing provides accurate estimation of soil moisture in the first few centimetres of the soil, thus constituting an essential tool to support drought monitoring for early warning systems. Surface soil moisture and drought indicators support decision-makers, from politicians to small-scale farmers, in making data-driven decisions on agricultural planning and drought mitigation strategies, and ultimately increase the resilience of farming systems. First, a change detection model is applied to Sentinel-1 backscatter to model surface soil moisture over Mozambique between 2015 and 2023 at 500 m sampling. The modelled soil moisture is compared with state-of-the-art products, including a land surface model (ERA5-Land), Earth observation datasets (SMAP, ASCAT) and a hybrid product (WaPOR). Moreover, the Sentinel-1 dataset is validated with in-situ stations located in five regions of Southern Mozambique. The results show that Sentinel-1 backscatter is highly sensitive to soil moisture and is a valuable tool for developing drought indices at kilometre-scale resolution. The resulting Sentinel-1 SSM product is then used as a basis for developing agricultural drought indicators and a start-of-season product.
Two drought indicators are developed based on (1) combined Sentinel-1 and ASCAT climatology and (2) soil physics using auxiliary datasets from SoilGrids. The "start of rainy season" product is derived from a break-point detection approach applied to the stand-alone Sentinel-1 surface soil moisture product. The drought indicators are compared with precipitation (Z-score based on CHIRPS – Rainfall Estimates from Rain Gauge and Satellite Observations) and vegetation anomaly (Z-score from NDVI available on the Copernicus Land Monitoring Service). The comparison underlines the complementarity of climate, vegetation, and soil-based indicators for effectively monitoring agricultural drought development. These findings and methodologies will be detailed in a forthcoming publication (Massart et al., in preparation). Finally, we highlight the limitations and challenges associated with bridging the gap between the development of Earth observation products and the needs of small-scale farming systems in Mozambique. We present a case study focusing on the development of a custom data viewer designed to monitor, share, and promote the dissemination of satellite products. Observed limitations, including accessibility and technical capacity, are described, and alternative approaches are proposed to improve the adoption of new technology within traditional farming systems.
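The change-detection principle applied to Sentinel-1 backscatter above rests on scaling each observation between the historically driest and wettest reference backscatter levels for a pixel. A minimal sketch of that scaling (function and variable names are illustrative, not the authors' implementation):

```python
import numpy as np

def ssm_change_detection(sigma0_db, dry_ref_db, wet_ref_db):
    """Relative surface soil moisture (0-100 %) from radar backscatter.

    Each backscatter observation (dB) is positioned between the historical
    dry and wet reference levels of the same pixel; values outside the
    reference range are clipped to the valid interval.
    """
    ssm = 100.0 * (sigma0_db - dry_ref_db) / (wet_ref_db - dry_ref_db)
    return np.clip(ssm, 0.0, 100.0)
```

In practice the dry and wet references are derived from multi-year backscatter statistics per pixel; the clipping handles observations drier or wetter than anything in the reference period.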

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Assessment of the contribution of EO data to support national firefighters in activities of urgent technical rescue during disaster response, fire prevention and surveillance

Authors: Dr. Valentina Nocente, Dr. Stefano Frittelli, Dr. Rossella Simione, Dr. Deodato Tapete, Maria Virelli
Affiliations: Agenzia Spaziale Italiana (ASI), Corpo Nazionale dei Vigili del Fuoco (CNVVF)
At the national level, firefighters are the public body serving as the first responders trained in firefighting, primarily to control and extinguish fires that threaten life and property, as well as to rescue persons from confinement or dangerous situations. In Italy, the activities of “urgent technical rescue” and, more generally, of public rescue, together with those of fire prevention and fire surveillance, are guaranteed by the Ministry of the Interior – Department of Fire Brigade, Public Rescue and Civil Defense, through the operational structures of the National Fire Corps (Corpo Nazionale dei Vigili del Fuoco – CNVVF) located throughout the national territory and active 24 hours a day, 7 days a week. This organizational structure is an Italian specificity and represents a unique case in the international panorama of fire brigades. In other countries, fire brigades are mainly organized on a local basis (at the municipal or, sometimes, regional level). The Italian Fire Brigade, by contrast, constitutes a National Corps (CNVVF), which finds its institutional place within Italy’s State Administrations and, for this reason, has a unitary organization but, at the same time, also a widespread presence throughout the territory, through the Regional Directorates, which coordinate the peripheral operational network of the Provincial Commands and the related detachments. The CNVVF is called, first and foremost, to ensure the fundamental mission of “urgent technical rescue”. These interventions are characterized by the urgency and immediacy of the rescue service and, as such, require highly specialized technical professionalism and suitable instrumental resources. In the case of civil protection events, the CNVVF operates as a fundamental component of the National Civil Protection Service, ensuring, within the scope of its technical skills, the direction of first aid interventions.
Fire prevention is the other function of eminent public interest entrusted to the CNVVF and includes study, experimentation, standardization and control activities aimed at reducing the probability of a fire or limiting its consequences. Among the various data sources and assets that the CNVVF exploits to address activities of urgent technical rescue and fire prevention, Earth Observation (EO) is increasingly being used. Over the past years, specialist expertise has been developed in processing satellite imagery and generating products, thematic maps and elaborations that can be used to inform in situ activities during emergencies. Satellite data and derived products are nowadays among the information layers of the cloud-based cartographic portal (GeoportaleVVF) that the CNVVF cartographic office (the TAS Central Service) has developed to share geographic data relating to rescue, analyse intervention scenarios, define the operational strategy, and quantify and direct resources on a geographic basis. In this wider context, since 2018 the Italian Space Agency (ASI) and the CNVVF have cooperated under a bilateral agreement (n. 2018-10-Q.0) to promote, at the national level, the use of existing national and international EO satellite assets and related data in support of CNVVF activities – with specific regard to urgent technical rescue – and to identify possible new applications for fire hazard prevention. Under this agreement, in the event of medium- and large-scale emergencies, ASI makes radar-type satellite products available to the CNVVF. In particular, Synthetic Aperture Radar (SAR) data are provided by exploiting the X-band COSMO-SkyMed constellation.
The mission currently consists of three operational satellites from the First Generation and two from the Second Generation, and allows image collection at high to very high spatial resolution (up to 3 m and less than 1 m in the case of StripMap and Spotlight modes, respectively), both according to a regular observation scenario and through on-demand data take opportunities. COSMO-SkyMed data are provided by ASI free of charge, given that usage for CNVVF purposes falls within institutional support and cooperation, and are delivered within very short timeframes (a few hours) during emergencies. COSMO-SkyMed data are helpful for generating change detection products that can serve for delineation and rapid mapping, e.g. for identifying collapsed buildings due to earthquakes or other instability processes, the extent of flooded areas, and the zoning of areas affected by wildfires. Furthermore, during the cooperation with ASI, the CNVVF has gained experience with processing hyperspectral data from ASI’s PRecursore IperSpettrale della Missione Applicativa (PRISMA) mission. The satellite, launched in March 2019, is based on a single small-class spacecraft flying in a frozen Sun-Synchronous Low Earth Orbit at 615 km altitude and is equipped with electro-optical devices collecting imagery in 239 spectral bands (total VNIR-SWIR range: 400–2500 nm) at 30 m Ground Sampling Distance (GSD) over a standard image size of 30 km × 30 km, coupled with a 5-m resolution panchromatic image. These data are useful, for example, to generate thematic products allowing the classification of areas susceptible to fires and the assessment of affected areas.
With both SAR and hyperspectral EO data, the scope is to provide an evidence base over either local or wide areas that, during emergencies, could inform decision-making in a very timely way, suiting the extreme rapidity required by rescue operations, and, during ordinary times, could improve methods for fire hazard assessment and preparedness. To this end, it is worth highlighting that the CNVVF combines satellite-based products with other sources of information, for example in situ inspections and drone surveys. Furthermore, while data analysis and interpretation are performed by expert operators, increasing effort is being devoted to experimenting with more automated routines and algorithmic solutions. The present paper aims to showcase experiences and lessons learnt on the role that EO plays in supporting the CNVVF in its activities, both during crises/emergencies – i.e. urgent technical rescue, disaster response and fire confinement – and in non-crisis time – i.e. fire prevention and surveillance. With regard to crises/emergencies, different hazard types and temporal and spatial scales are considered. Among the many events during which ASI and the CNVVF cooperated, we discuss the support that COSMO-SkyMed products enabled during the following events: • Wide-area catastrophic events such as the floods that hit the Emilia-Romagna and Tuscany regions in 2023. Especially in the first case, floods spread across huge territories and persisted for weeks, leaving urban environments and countryside flooded; • Site-specific events where rapidity in the disaster response, especially in searching for people to rescue, is paramount. This is the case for earthquakes and hydro-meteorological hazards such as the mudflow and debris flow that occurred at Casamicciola, on Ischia Island, in November 2022, and the seismic event (Mw 6.5) that hit Norcia on 30 October 2016.
In particular, the latter event is also discussed to showcase the benefits of undertaking pre-operational tests of possible new satellite-based applications. The Norcia earthquake was indeed selected as a real scenario to demonstrate how COSMO-SkyMed images could serve the purpose of detecting and mapping damaged buildings, to complement and facilitate in situ surveys. Given the strong link between timely rescue operations and the percentage of people who survive natural disasters, the CNVVF needs access to accurate information in order to assess site accessibility and prioritize sectors that require inspections and people search. Taking advantage of the availability of pre-event 1-m spatial resolution COSMO-SkyMed Spotlight images covering the site of Norcia and regular observations at 3 m spatial resolution from the Map Italy project, an experiment was undertaken by comparing the results achievable from satellite data with the map that was produced by the CNVVF over Norcia and Castelluccio during the emergency. The main outcome is the clear demonstration that SAR-based single-building damage mapping after earthquakes is feasible and leads to accurate results if SAR data with properties such as those offered by COSMO-SkyMed products are accessed (whereas global medium-spatial-resolution missions such as Sentinel-1 would have failed to provide the needed temporal revisit and spatial detail). Finally, with regard to fire prevention, examples will be presented from: • the experimental use of PRISMA imagery on Italian case studies; • the operational exploitation of optical multispectral data during recent activations in which the CNVVF participated, contributing to the wildfire crises of 2024 in Portugal and Greece in the context of the European Civil Protection Mechanism.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: A comparative assessment of a meteorological drought indicator and soil moisture over Austria

Authors: Stefan Schlaffer, Matthias Schlögl, Klaus Haslinger, Stefan Schneider, Raphael Quast
Affiliations: GeoSphere Austria
Drought represents one of the most impactful types of hydrometeorological extreme events, with adverse consequences for human and natural systems. The frequency, intensity, and duration of droughts are expected to increase due to rising temperatures and altered precipitation patterns, thus posing significant risks to water resources, agriculture, ecosystems, and human livelihoods. The intensity, duration and impact of droughts can be monitored and characterised by deriving statistical indicators from meteorological and biophysical variables, such as precipitation, evapotranspiration, soil moisture and indicators of vegetation health. These variables, however, correspond to different characteristics and intensities of drought, such as meteorological, hydrological and agricultural drought. While soil moisture, a key indicator of water availability, provides a more direct measure of drought severity, it is typically more challenging to measure than meteorological variables. As a result, modelled time series are often used instead. Knowledge about the relationship between relatively simple indicators of meteorological drought, which can be derived from meteorological measurements, and soil water availability could help to better inform decisions, especially in complex terrain like the Austrian Alps. To this end, we compared the Standardised Precipitation-Evapotranspiration Index (SPEI) with two gridded soil moisture datasets, namely (1) the ERA5-Land reanalysis volumetric soil water content and (2) the EUMETSAT H SAF Metop ASCAT surface soil moisture product. Furthermore, in-situ soil moisture measurements from the International Soil Moisture Network (ISMN) were used. The WINFORE SPEI product is based on interpolated daily fields of precipitation and air temperature for the territory of Austria. Reference evapotranspiration is computed using the Hargreaves formula, which is based on daily air temperature time series.
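The Hargreaves reference evapotranspiration mentioned above has a standard textbook form that needs only daily temperature extremes and extraterrestrial radiation; the sketch below shows that general formula, not necessarily the exact WINFORE implementation:

```python
import numpy as np

def hargreaves_et0(tmin, tmax, ra):
    """Reference evapotranspiration (mm/day) via the Hargreaves equation.

    tmin, tmax: daily minimum/maximum air temperature (deg C).
    ra: extraterrestrial radiation expressed as equivalent evaporation (mm/day),
        which depends only on latitude and day of year.
    """
    tmean = 0.5 * (tmin + tmax)
    # Diurnal temperature range acts as a proxy for incoming solar radiation.
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(np.maximum(tmax - tmin, 0.0))
```

Because it needs no humidity, wind or radiation measurements, this formulation is well suited to gridded products built from interpolated temperature fields.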
Three different integration times (30, 90 and 365 days) were used for computing the SPEI. All three datasets were aggregated or resampled to the 0.1° grid of the ERA5-Land reanalysis dataset. In general, correlation coefficients showed a clear spatial pattern, with higher values in eastern and southern Austria, whereas correlation was low in the mountainous regions of central and western Austria. As expected, correlation coefficients were higher for soil moisture anomalies than for absolute values. Between the SPEI and ERA5-Land soil moisture, Pearson correlation coefficients of 0.7 were attained in the northeastern parts of Austria. Correlation decreased strongly when comparing short-period SPEI (30 and 90 days) with the soil water content of deeper ERA5-Land soil layers, whereas the SPEI computed over a longer time window (365 days) showed the highest correlation with water content in the deepest soil layer (100 to 289 cm). Between the SPEI and ASCAT surface soil moisture, correlation was consistently lower, especially over mountainous and densely forested regions. Masking out observations made under frozen conditions significantly improved the achieved correlations. The results demonstrate the potential of drought indicators, such as the SPEI, to serve as a proxy for soil moisture anomalies at the intermediate spatial scale of ca. 10 km.
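The comparison of SPEI integration times with soil moisture can be sketched as a trailing-window sum of the climatic water balance (P − ET0) followed by a Pearson correlation. Helper names below are illustrative, and the probability transformation that turns the integrated water balance into the standardised SPEI is omitted for brevity:

```python
import numpy as np

def rolling_sum(x, window):
    """Trailing-window sum; the first window-1 entries are NaN."""
    c = np.convolve(x, np.ones(window), mode="full")[: len(x)]
    c[: window - 1] = np.nan
    return c

def corr_with_sm(precip, et0, soil_moisture, window):
    """Pearson correlation between the climatic water balance integrated
    over `window` days and a co-located soil moisture time series."""
    d = rolling_sum(precip - et0, window)
    ok = ~np.isnan(d)
    return np.corrcoef(d[ok], soil_moisture[ok])[0, 1]
```

Evaluating `corr_with_sm` with windows of 30, 90 and 365 days against soil moisture from different depths reproduces the kind of integration-time analysis described in the abstract.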

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Mapping Wildfire Exposure for a Transboundary Region of Central Europe

Authors: Evripidis Avouris, Christopher Marrs, Kristina Beetz, Dr. Marketa Poděbradská, Dr. Emil Cienciala, Lucie Kudláčková, Prof. Dr. Miroslav Trnka, Matthias Forkel
Affiliations: TUD Dresden University of Technology, Junior Professorship in Environmental Remote Sensing, TUD Dresden University of Technology, Junior Professorship in Environmental Remote Sensing (currently employed at ICEYE, Finland), CzechGlobe – Global Change Research Institute of the Czech Academy of Sciences
Central Europe, an area that has historically been untouched by catastrophic wildfires, has recently experienced an increase in the number of major wildfire events. Equally alarming, some of these disasters are occurring in transboundary or wildland-urban interface (WUI) areas, where different administrative systems mix with natural vegetation, posing unique challenges to firefighting approaches. One such catastrophic event occurred in 2022, when a wildfire burned an unprecedented 1173 ha in the Saxon Switzerland and Bohemian Switzerland national parks on the border between the Czech Republic and Germany. This served as a warning to the scientific community and local stakeholders, demonstrating the need to adapt to this new reality. Such events underline the need to inform the public in Central Europe of the potential risk that they and their property could face from forest fires. In addition, stakeholders responsible for dealing with wildfires in Central Europe, such as firefighters, national park administrations and relevant government agencies, should be aware of the potential danger that areas under their jurisdiction are increasingly exposed to. Here we develop a methodology for creating a wildfire exposure map in a transboundary area of Central Europe to quantify and demonstrate the exposure of settlements. We created nine different wildfire scenarios based on three fire durations (1–3 days) and three levels of fire weather conditions. Fuel types for the study area were derived following the methodology proposed by Beetz et al. (2024), which used the European fuel type classification of Aragoneses et al. (2022) as its starting point. The study area has suffered from a severe bark-beetle infestation in recent years, which has resulted in a large amount of flammable deadwood and natural regrowth, making such areas more prone to wildfires.
Landsat 8 imagery was used to map those bark beetle-infested parts of the study area for which no data could be found. This was achieved through a supervised classification algorithm, with a map of bark beetle infestations from ground and airborne surveys by the forest administration used as training data. Fuel models were finally derived through a crosswalk of the fuel types to the Scott and Burgan Fire Behaviour Fuel Models (2005), and were further refined through in-situ fieldwork. We used the FlamMap model to calculate flame length and burn probability for each scenario. These two metrics were then combined into a bi-variate raster, one for each wildfire scenario. The final map used settlements in the area as the exposed assets in focus and was further enhanced with support capability indicators (transportation network and fire station locations). The final map was visualised as an interactive web application, which allows the user to alternate between the scenarios and permits the evaluation of exposure down to building level for the settlements in focus. We then performed three evaluation analyses. Firstly, we tested the overall ability of the FlamMap model to accurately model the first three days of the 2022 wildfire by considering the specific fire weather conditions during that event. The fire perimeters of these three modelling runs were compared to active fire observations from the VIIRS sensors. There was good overlap between the two, especially for the second and third days of the fire. Secondly, in order to test the ability of the nine fire modelling scenarios to represent typical wildfire behaviour, we compared the predicted fire perimeters derived from single-ignition fires to the fire perimeter of the 2022 wildfire. It was found that in all nine cases the predicted burned area lies almost entirely within the reference fire perimeter, albeit with low coverage.
Thirdly, 50 users were asked to complete a usability study using the interactive web map. According to the answers, the design of the interactive map is intuitive, and the resulting product, though it presents complex information, does so in an understandable way. Furthermore, the views expressed by relevant stakeholders in the questionnaire on the map’s usefulness revealed the need to include local stakeholders and experts early on, and throughout, such a research process. These results demonstrate that the modelling scenarios can indeed be used to predict wildfire behaviour in the area, albeit with limited confidence, as more validation data, such as historical fire perimeters, are needed. Scarce validation data create a degree of uncertainty when it comes to accurately evaluating the modelling outputs. This lack of data, though, is not an issue of the fire modelling software per se, but rather highlights the importance of state authorities maintaining information on the characteristics of historical wildfires. Moreover, a general recommendation for future wildfire research in this or any other study area is to establish good communication with the local expert and stakeholder community. Expert knowledge is invaluable in developing accurate fuel model maps, while stakeholders should be consulted in various parts of the wildfire research process, and not only towards the end. This research extensively exploits remote sensing data, such as Landsat 8 images, to identify particularly flammable areas. It should be noted that higher-resolution Sentinel-2 imagery from the period immediately before the 2022 wildfire would have been preferable; however, this was not possible because of cloud cover. Moreover, the use of VIIRS played an essential role in the evaluation process. Finally, making an exposure map for a transboundary region has highlighted the importance of data interoperability between different countries.
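The finding that predicted burned areas lie almost entirely within the reference fire perimeter, albeit with low coverage, corresponds to two simple raster overlap fractions, sketched here on boolean masks (names are illustrative, not the authors' evaluation code):

```python
import numpy as np

def perimeter_agreement(predicted: np.ndarray, reference: np.ndarray):
    """Compare two rasterised burned-area masks.

    Returns (containment, coverage): the fraction of the predicted burned
    area lying inside the reference perimeter, and the fraction of the
    reference burned area that the prediction covers.
    """
    inter = np.logical_and(predicted, reference).sum()
    containment = inter / predicted.sum()
    coverage = inter / reference.sum()
    return containment, coverage
```

A high containment with low coverage is exactly the pattern reported above: the modelled fires burn in the right place but underestimate the final extent.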
References:
Aragoneses, E., García, M., Salis, M., Ribeiro, L. M., & Chuvieco, E. (2022). Classification and mapping of European fuels using a hierarchical-multipurpose fuel classification system [Preprint]. ESSD – Land/Land Cover and Land Use. https://doi.org/10.5194/essd-2022-184
Beetz, K., Marrs, C., Busse, A., Poděbradská, M., Kinalczyk, D., Kranz, J., & Forkel, M. (2024). Effects of bark beetle disturbance and fuel types on fire radiative power and burn severity in the Bohemian-Saxon Switzerland. Forestry: An International Journal of Forest Research, cpae024. https://doi.org/10.1093/forestry/cpae024
Scott, J. H., & Burgan, R. E. (2005). Standard fire behavior fuel models: A comprehensive set for use with Rothermel’s surface fire spread model (RMRS-GTR-153). U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station. https://doi.org/10.2737/RMRS-GTR-153

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: SGAM - Smart Geotechnical Asset management

Authors: Emanuela Valerio, Alessandro Brunetti, Maria Elena Di Renzo, Michele Gaeta, Prof. Paolo Mazzanti
Affiliations: NHAZCA S.r.l., Sapienza University of Rome
Smart Geotechnical Asset Management (SGAM) is an innovative framework integrating external systems via a cloud-based Software as a Service (SaaS) platform or API. It leverages advanced data-fusion algorithms and satellite Earth Observation (EO) technologies, such as A-DInSAR and PhotoMonitoring™, to enable a semi-automatic decision-making process for asset management and predictive maintenance. This approach significantly enhances the financial resilience and operational efficiency of structures and infrastructures by optimizing maintenance investments through sophisticated, data-driven insights. SGAM focuses on identifying, analyzing, and mitigating risks to assets by examining their interactions with local geological and environmental settings. It systematically evaluates both direct and potential interferences with geohazards, including landslides, floods, subsidence, and earthquake-induced effects, which could compromise asset integrity. By integrating vast quantities of archived and newly acquired EO data, SGAM provides Decision Makers with detailed and actionable insights, enabling them to define, prioritize, and schedule maintenance operations more effectively based on comprehensive asset vulnerability and loss scenario analyses. The EO data is further enriched and validated through field surveys as well as Geotechnical/Geomorphological Monitoring technologies sourced from extensive regional and global geodatabases. A core feature of SGAM is its adaptability and forward-looking design, which allows seamless integration of satellite data from different space missions, ensuring its long-term relevance, scalability, and technological advancement. AI-driven Process Automation solutions enhance its capabilities by performing first-level risk assessments, facilitating cost-effective, optimized prioritization of maintenance activities, and enabling decision-making underpinned by redundancy and precision. 
By seamlessly combining advanced satellite EO technologies, AI algorithms, and ground-based monitoring data, SGAM empowers organizations to proactively address structural and geotechnical risks. It not only reduces the likelihood of asset failure but also ensures sustainable, informed, and timely decision-making. Through prioritization of maintenance operations founded on comprehensive risk evaluations, SGAM is instrumental in enhancing infrastructure resilience, safety, and long-term sustainability amidst both current and future geohazards.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Applying Copernicus Satellite Data for Geo-Hazard Monitoring and Warning Services in Norway

Authors: Solveig Havstad Winsvold, Stefan Blumentrath, Aron Widforss, Kjetil Melvold, Karsten Müller, Liss Marie Andreassen, Sjur Kolberg, Rune Engeset, Nils Kristian Orthe
Affiliations: Norwegian Water Resources and Energy Directorate
The European Union's Earth Observation Program, Copernicus, provides free and openly accessible satellite data and services. These have become essential for hydro-meteorological and geo-hazard monitoring conducted by the Norwegian Water Resources and Energy Directorate (NVE). Copernicus satellite data provide efficient and comprehensive observation of snow avalanches, snow cover, lake ice, glaciers, and more, across large regions. Their applications are becoming increasingly important for risk assessment, natural hazard management, emergency preparedness, and warning services. Products from the Copernicus project at NVE support decision-making for Varsom, NVE’s warning services for snow avalanches, landslides, lake ice, and floods (www.varsom.no). At NVE, satellite products are combined with other data sources, such as crowd-sourced in situ observations through the Varsom app and additional remote sensing data, forming a multi-modal approach. In addition to supporting decision-making, Copernicus satellite data and products enhance process understanding, improve NVE's basemaps, and facilitate analyses of climate change impacts. NVE's Copernicus Services are managed in-house at NVE and co-funded by the Norwegian Space Agency. The project serves as a pioneer for IT infrastructure development at NVE, establishing production lines that streamline the process from satellite image acquisition and algorithm application to the distribution of resulting products directly into users' familiar working environments and applications. This presentation demonstrates how NVE utilizes automated satellite products to support warning services for geo-hazards such as snow avalanches. Furthermore, it highlights how NVE validates flood forecasting models using snow cover products. Snow avalanches and floods pose risks to Norway’s environment and infrastructure, and through its warning services, NVE helps prevent accidents and mitigate potential impacts.
Additionally, NVE observes glacier lake outburst flood (GLOF) activity. The presentation also provides an outlook on new products planned for the coming years, such as landslide detection, which will enhance warning and preparedness for extreme weather events that are becoming increasingly frequent in Norway.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Detecting Changes in War-Damaged Urban Areas Using the IR-MAD Method and Sentinel-2 Satellite Data

Authors: Jáchym Černík
Affiliations: Charles University
This study presents a method for detecting urban changes resulting from the October 2023 military conflict in Gaza City and its surrounding areas. Using Sentinel-2 multispectral data on the Google Earth Engine (GEE) platform, Python scripts were employed to analyze spectral signatures over time. The Iteratively Reweighted Multivariate Alteration Detection (IR-MAD) technique was used to identify differences between pre- and post-conflict images. IR-MAD is a multivariate statistical method that enhances change detection by iteratively reweighting pixel observations within Canonical Correlation Analysis, aligning two images to maximize similarity before subtraction. This approach increases sensitivity to subtle changes while suppressing insignificant alterations by improving the correlation of unchanged pixels. Consequently, the method effectively identified changes such as debris, destroyed buildings, vegetation loss, and craters with high precision from Sentinel-2 image pairs. Change detection results were validated using high-resolution PlanetScope data, achieving an accuracy of 74%. Custom Python scripts further enhanced the IR-MAD analysis by incorporating functions for masking with the Dynamic World dataset and automating image export and thresholding. This streamlined processing enabled efficient handling of large datasets, making the approach scalable to similar conflict-affected regions such as Ukraine. The IR-MAD analysis revealed a 52% change between September 27 and November 26, 2023. Additionally, applying a chi-square distribution-based threshold together with an iterative threshold optimizer improved the consistency and accuracy of binary change maps, which could be valuable for damage assessment and resource allocation. While a web-based mapping application was developed to visualize the conflict's impact, the primary focus remains on the analytical framework.
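The chi-square distribution-based threshold mentioned above exploits the fact that, for unchanged pixels, the sum of squared standardised MAD variates follows a chi-squared distribution with as many degrees of freedom as there are bands. A sketch of that decision rule (the variates are assumed already standardised to unit variance under the no-change hypothesis, and the study's iterative threshold optimizer is omitted):

```python
import numpy as np
from scipy.stats import chi2

def chi2_change_mask(mad_variates: np.ndarray, significance: float = 0.99):
    """Binary change map from IR-MAD output.

    mad_variates: (bands, H, W) standardised MAD variates. Pixels whose
    chi-squared statistic exceeds the chosen quantile of the no-change
    distribution are flagged as changed.
    """
    k = mad_variates.shape[0]
    chi2_stat = np.sum(mad_variates ** 2, axis=0)
    threshold = chi2.ppf(significance, df=k)
    return chi2_stat > threshold
```

With `significance=0.99`, roughly 1% of genuinely unchanged pixels would be flagged by chance, which is why a subsequent threshold optimisation step, as used in the study, can improve map consistency.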
In conclusion, the study successfully applied and validated the IR-MAD algorithm with Sentinel-2 data to detect changes in war-affected urban areas and developed specialized Python scripts to enhance the analysis. This methodology provides a reliable and straightforward framework for monitoring urban changes in conflict zones. In an era of declining public trust in media, this study offers a methodological foundation for an independent approach to scientific reporting in war-torn areas using publicly sourced data.
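As a minimal illustration of the chi-square thresholding step described above (not the study's GEE/Python code; the function name is invented and scipy is assumed available), the decision rule on the MAD variates can be sketched as:

```python
import numpy as np
from scipy.stats import chi2

def chi2_change_mask(mads, alpha=0.01):
    """Binary change mask from MAD variates.

    mads: array of shape (bands, pixels) holding the MAD variates.
    Under the no-change hypothesis, the sum of squared standardized
    variates follows a chi-square distribution with `bands` degrees
    of freedom; pixels above the (1 - alpha) quantile are flagged
    as changed.
    """
    std = mads.std(axis=1, keepdims=True)
    z2 = ((mads / std) ** 2).sum(axis=0)
    return z2 > chi2.ppf(1.0 - alpha, df=mads.shape[0])
```

An iterative threshold optimizer, as used in the study, would then refine the cut-off against reference data rather than fixing `alpha` a priori.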
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Supporting Flood Disaster Response Using Multi-Sensor Earth Observation Data

Authors: Sandro Martinis, Marc Wieland, Sandro Groth, Hannes Taubenböck
Affiliations: DLR
Remote sensing data has become an essential part of today's crisis management activities. In recent years, the German Aerospace Center (DLR) has developed various components to support flood disaster response using multi-sensor Earth Observation (EO) data. On a global level, a multi-sensor system for automatic and large-scale surface water extraction was implemented. The system consists of several cloud-based modular processing chains based on convolutional neural networks (CNNs) to extract the surface water extent from systematically acquired high-resolution radar (Sentinel-1) and multi-spectral (Sentinel-2 and Landsat) satellite data. A globally applicable high-resolution seasonal reference water product at 10-20 m spatial resolution, based on fused Sentinel-1/2 time-series data over a reference period of two years, is computed and used to distinguish permanent water from temporarily flooded areas. The system can also provide information about the duration of flood coverage at the pixel level by combining single temporal flood masks over time. Further, a mechanism has been installed to identify whether the water extent outlined in a satellite scene is abnormally large or small in comparison to a reference period. The anomaly detection criterion is based on the interquartile range (IQR). When anomalies are observed, end users are alerted via email notifications. To enhance situational awareness, early-stage estimations of impacted regions derived from heterogeneous geospatial indicators can help prioritize crisis management activities and support data collection initiatives of very high-resolution (VHR) EO imagery (satellite, aerial, UAV).
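An IQR-based anomaly criterion of this kind can be illustrated with a short sketch (a toy example with invented names, not DLR's operational code):

```python
import statistics

def iqr_anomaly(history, current, k=1.5):
    """Classify the current water-extent observation against a
    reference period using interquartile-range fences: values
    outside [Q1 - k*IQR, Q3 + k*IQR] are flagged as anomalous."""
    q1, _, q3 = statistics.quantiles(history, n=4)
    iqr = q3 - q1
    if current > q3 + k * iqr:
        return "abnormally large"
    if current < q1 - k * iqr:
        return "abnormally small"
    return "normal"
```

In the operational system, an "abnormally large" result would be the trigger for the email alert to end users.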
In this context, a log-linear pooling method coupled with an unsupervised hyperparameter optimization routine is developed to fuse information on flood hazard extracted from high-resolution satellite imagery with disaster-related data from geo-social media and freely available supplementary geospatial data on exposed assets (e.g. building distribution, population density, hazard zones). The identification of disaster hot spots is carried out on the basis of the H3 global grid system. Very high-resolution EO data, tasked on demand in the frame of a crisis-mechanism activation and supported by rapidly generated disaster hot-spot maps, are analyzed using deep learning-based approaches within the multi-sensor EO system. The spatial resolution of these sensors enables the identification of relevant local crisis information, e.g. small-scale flood extent in heterogeneous landscapes as well as damaged buildings and infrastructure, to provide a detailed and reliable picture of flood-affected areas and optimize disaster management activities.
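Log-linear pooling of per-source flood likelihoods for a single H3 cell can be sketched as follows (a simplified illustration; in the system the weights would come from the unsupervised hyperparameter optimization, here they are placeholders):

```python
import math

def log_linear_pool(probabilities, weights):
    """Geometric (log-linear) pooling of source probabilities,
    normalised against the complementary event so the result is a
    valid probability in [0, 1]."""
    num = math.prod(p ** w for p, w in zip(probabilities, weights))
    den = math.prod((1 - p) ** w for p, w in zip(probabilities, weights))
    return num / (num + den)

# e.g. satellite flood mask, geo-social media signal, exposure layer
fused = log_linear_pool([0.9, 0.7, 0.6], [0.5, 0.3, 0.2])
```

Cells with a high fused value would then be ranked as disaster hot spots for VHR tasking.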
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Holistic approach to flood risk assessment: innovative multi-parameter methodology validated in urban river basin affected by fatal flash flood

Authors: Alexia Tsouni, Stavroula Sigourou, Vasiliki Pagana, Panayiotis Dimitriadis, Theano Iliopoulou, G.-Fivos Sargentis, Romanos Ioannidis, Efthymios Chardavellas, Dimitra Dimitrakopoulou, Marcos Julien Alexopoulos, Nikos Mamasis, Demetris Koutsoyiannis, Charalampos (Haris) Kontoes
Affiliations: National Observatory of Athens (NOA), Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS), Operational Unit “BEYOND Centre of Earth Observation Research and Satellite Remote Sensing”, National Technical University of Athens (NTUA), School of Civil Engineering, Department of Water Resources and Environmental Engineering, Research Group “ITIA”
Decision makers and civil protection authorities need reliable flood risk assessment for efficient flood risk management, covering all the phases of the disaster risk reduction framework: preparedness, response, recovery and mitigation. This is even more crucial in densely built urban river basins which are prone to flash floods. In the framework of a Programming Agreement with the Prefecture of Attica, Greece, BEYOND/IAASARS/NOA, in cooperation with ITIA/NTUA, developed a holistic multi-parameter methodology which was implemented in five flood-stricken river basins at high spatial resolution (2 m-50 m). The research teams first collected all available data, such as spatial data and data from technical studies held by the relevant authorities. They conducted detailed field visits and modified the terrain model accordingly. Spatial parameters obtained from processing of Earth Observation data, such as the DEM and land cover, were used as input for the HEC-HMS rainfall-runoff model, as well as for the hydraulic model. Flood hazard was assessed by hydraulic modelling using the open-source software HEC-RAS 2D for different scenarios. Vulnerability was estimated as a weighted combination of population density, population age, and building characteristics, taking into consideration the relevant findings of the latest available national Population-Housing Census. Exposure was based on the land value. Flood risk was eventually assessed based on the combination of flood hazard, vulnerability, and exposure. Moreover, critical points identified during the field visits were cross-checked with the flood inundation maps. Finally, refuge areas and escape routes were proposed for the worst-case flood scenario. This innovative methodology was applied, amongst others, in the Mandra river basin, and was validated against the fatal flash flood of November 2017.
This flash flood event affected the urban and suburban area of Mandra, causing 24 recorded fatalities and extensive million-euro damages to properties and infrastructure, rendering it the deadliest flood in Greece in the last 40 years. BEYOND developed a user-friendly web GIS platform, where all the collected and produced data, including the flood risk maps, the critical points, the refuge areas and the escape routes, are made available. This work supports the relevant authorities in improving disaster resilience in many aspects: raising awareness, designing civil protection exercises, implementing flood risk mitigation measures, prioritising short-term and long-term flood protection interventions, and making rapid response more effective during the flood event. This approach is in line with the requirements for the implementation of the EU Floods Directive 2007/60/EC, the Sendai Framework for Disaster Risk Reduction, the UN SDGs, as well as the UN Early Warnings for All initiative. Last but not least, this flood risk assessment methodology was applied, following the necessary adaptations, in the Garyllis river basin in Cyprus, in the framework of the EXCELSIOR project, by the ERATOSTHENES Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment.
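The hazard-vulnerability-exposure combination described in the abstract can be sketched per grid cell as follows (the weights and the multiplicative combination are illustrative assumptions, not the study's calibrated values):

```python
def flood_risk(hazard, pop_density, pop_age, building_quality,
               land_value, weights=(0.4, 0.3, 0.3)):
    """Illustrative per-cell flood risk (all inputs scaled to 0-1).

    Vulnerability is a weighted estimation of population density,
    population age and building characteristics; exposure is based
    on land value; risk combines hazard, vulnerability and exposure.
    """
    w_d, w_a, w_b = weights
    vulnerability = w_d * pop_density + w_a * pop_age + w_b * building_quality
    return hazard * vulnerability * land_value
```

Mapping this score over the basin at 2-50 m resolution would yield the kind of flood risk layer the platform publishes.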
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: An integrated system for multi-hazard response based on multi-source EO and non EO: the contribution of IRIDE Service Segment

Authors: Giorgo Pasquali, Annalaura Di Federico, Chiara Francalanci, Paolo Ravanelli, Lucia Luzietti
Affiliations: e-GEOS S.p.A., Cherrydata Srl
Catastrophic events such as natural disasters, including earthquakes, hurricanes, floods, and wildfires, pose significant challenges to societies worldwide. These phenomena not only result in devastating human and environmental losses but also place immense pressure on emergency response systems. In this context, satellite monitoring has emerged as a powerful tool for disaster management, offering comprehensive insights that enhance situational awareness. By providing high-resolution imagery, precise geolocation, and continuous updates, satellite technology enables more effective planning, rapid response, and resource allocation, ultimately mitigating the impact of such events. Here, the fundamental contribution of the European flagship programme Copernicus Emergency Management Service (CEMS) - Rapid Mapping is indisputable. Nevertheless, to further enhance the benefit of this type of service, the IRIDE Service Segment is developing a dedicated emergency service focused on the Italian territory. IRIDE Service Segment S7 Emergency will stand out for its high level of automation and its provision of a comprehensive system to Italian institutions for emergency response. This system will leverage not only the currently available commercial constellations but also the IRIDE constellations, and will build upon state-of-the-art algorithms and methods, including AI algorithms, to reach an unprecedented level of automation and accuracy. The system will be cloud-based, ensuring easy scalability and high resilience. It will include everything necessary to efficiently meet the needs of Italian institutions. This starts with the Service Manager, which allows users to input essential information to activate the service quickly and easily.
It also features a specific Service Value Chain (SVC), responsible for acquiring and processing satellite data tailored to the type of event (e.g., earthquake, flood, landslide). Finally, the Exploitation Tool provides a platform for visualizing and utilizing the generated products, with the option to download them as needed. This tool will also enable the direct integration of products into end-user systems, allowing seamless visualization and incorporation of the outputs into the user’s operational workflow. Specifically, IRIDE S7 Emergency services will provide capabilities that exceed the current state of the art, not only in terms of processing algorithms and automation, as previously mentioned, but also in terms of service performance and the introduction of innovative products. Among these, the extremely short response times stand out: delineation products for areas affected by an event will be delivered within just 4 hours of satellite data availability, and damage assessment products within 9 hours. These rapid response times have a significant impact on emergency management, where the ability to act quickly makes a critical difference. By reducing these times, relevant authorities can act more effectively. IRIDE S7 Emergency innovation spans several thematic domains; for instance, for flood detection, a new methodology has been designed by CIMA to perform continuous flood monitoring in near real-time using on-demand SAR data. The most groundbreaking advancement in response times, however, comes with the introduction of the FIP (First Information Product), developed in cooperation with Cherrydata. While traditional product generation times have improved, the main bottleneck remains the waiting period for satellite data availability, which can often average up to 24 hours.
To address this challenge, IRIDE S7 Emergency introduces the FIP product to deliver preliminary information about the impact of the event even before satellite data becomes available. The FIP provides an estimation of the affected area within just 3 hours of activation, significantly improving response times and enabling earlier decision-making. The FIP is generated by collecting information about the event from social media and online news sources. Using Natural Language Processing (NLP), the system geolocates the information, correlating it to the specific event. This approach enables the creation of a geolocated map of information about the event, offering initial insights into the areas reported as most affected and the overall extent of the event. The FIP includes at least two deliveries: the first is made 3 hours after activation, and the second is delivered 6 hours after activation. The second delivery integrates additional information gathered in the interim, still before satellite data becomes available, to keep the end user informed about the evolving situation using the latest available sources. The aim of the FIP is to ensure that users remain continuously updated on the situation until satellite data can be utilized, bridging the gap in information during the critical early hours of an emergency. In conclusion, IRIDE S7 Emergency will provide Italian institutions with a system capable of generating products for emergency response in an effective and automated manner. This system will be tailored to meet the specific needs of Italian authorities, integrated into their systems, and enhanced by advanced algorithms. It will deliver performance and products that go beyond the current state of the art, setting a new standard for emergency management solutions.
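The geolocation step behind a product like the FIP can be caricatured as gazetteer matching (the real service uses NLP; the place list and function below are entirely hypothetical):

```python
# Hypothetical gazetteer: lowercase place name -> (lat, lon).
GAZETTEER = {
    "firenze": (43.77, 11.26),
    "genova": (44.41, 8.93),
}

def geolocate_posts(posts):
    """Attach coordinates to each post that mentions a known place,
    yielding the kind of geolocated event map a first information
    product is built from."""
    hits = []
    for text in posts:
        lower = text.lower()
        for place, coords in GAZETTEER.items():
            if place in lower:
                hits.append((place, coords))
    return hits
```

Aggregating such hits over time would give the "areas reported as most affected" before any satellite pass.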
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: VALUESAFE project - Vulnerability of Assets and Losses in Multirisk Evaluations: Satellite Data for Financial Estimation. Combining Engineering Risk Analysis, Satellite Observations, and Artificial Intelligence

Authors: Alberto Ciavattone, Neri Banti, Emanuela Valerio, Adriano Nobile, Claudia Masciulli, Antonio Cosentino, Emanuele Del Monte, Paolo Mazzanti
Affiliations: S2R S.r.l., Viale Giovanni Amendola 24, 50121, NHAZCA S.r.l., Start-up of Sapienza University of Rome, Via Vittorio Bachelet 12, 00185, IntelligEarth S.r.l., Start-up of Sapienza University of Rome, Via Vittorio Bachelet 12, 00185
Vulnerability of Assets and Losses in mUltirisk Evaluations: SAtellite data for Financial Estimation (VALUESAFE) represents a groundbreaking advancement in real estate risk assessment, offering an integrated, multi-hazard evaluation platform powered by advanced satellite Earth Observation (SatEO) technology and cutting-edge image processing solutions. Designed to assess vulnerabilities and risks associated with seismic, geological, and flooding hazards, VALUESAFE addresses the growing demand for comprehensive, standardized, and actionable risk assessments across public and private sectors. The service is developed through the collaboration of three leading Italian companies, each contributing unique expertise, and it is supported by the ESA InCubed programme. VALUESAFE addresses a significant gap in the market, where existing risk assessment methods are time-intensive, resource-heavy, and often lack standardization. For instance, Italy's annual expenditure on hydrogeological damage mitigation exceeds €3.3 billion, while earthquake recovery costs have reached €120 billion in recent decades. These figures underscore the urgent need for efficient and reliable tools to safeguard vulnerable assets, particularly in historical urban areas. The VALUESAFE platform operates on a multi-layered framework, beginning with territorial hazard evaluations, advancing to building-specific vulnerability assessments, and culminating in detailed economic impact analyses. By integrating remote sensing data, engineering insights, and financial metrics, the platform delivers certified reports tailored to stakeholder needs. These reports, validated by qualified professionals, provide a credible and practical decision-support tool. This comprehensive methodology not only enhances assessment reliability but also significantly reduces time and resource requirements. Key features of VALUESAFE include its ability to cater to diverse operational scales.
Public administrators can use the platform for territorial risk management, while private stakeholders such as insurers and property managers can assess risks for specific assets. The platform's flexibility ensures consistent evaluations across different building types and urban contexts, including historically significant structures. Furthermore, the incorporation of economic depreciation forecasts linked to disaster scenarios offers invaluable insights for resource allocation and investment planning. VALUESAFE leverages advanced technological solutions to streamline its processes. InSAR data enhances ground motion and structural stability assessments, while AI-driven image processing delivers precise evaluations of building conditions. These innovations enable the system to address seismic, ground instability, and flooding risks with tailored methodologies. By standardizing vulnerability assessments and integrating multi-source data, VALUESAFE achieves consistent results while saving time and resources. A user-friendly online platform enhances accessibility, allowing stakeholders to customize analyses and download certified reports efficiently. VALUESAFE aligns with global sustainability goals by emphasizing proactive risk management and the preservation of cultural heritage. Its comprehensive approach reduces reliance on post-disaster recovery efforts, minimizing economic and environmental impacts. Moreover, the platform's certified reports meet the needs of urban managers and insurers, providing reliable, actionable insights for long-term planning. VALUESAFE aims to establish new standards in real estate risk assessment by delivering scientifically robust, economically relevant, and operationally efficient solutions. Through its innovative methodologies and user-centric design, VALUESAFE addresses critical market needs, offering a transformative tool for safeguarding assets and enhancing urban resilience in the face of natural hazards.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Population Displacement and Response During Flood Events: Towards A Global Perspective

Authors: Ekta Aggarwal, Zhifeng Cheng, Shengjie Lai, Laurence Hawker, Andrea Gasparotto, Andrew J Tatem, Steve Darby
Affiliations: School of Geography and Environmental Science, University of Southampton, WorldPop, School of Geography and Environmental Science, University of Southampton, School of Geographical Sciences, University of Bristol, UK
Flooding, already the world’s most significant natural hazard, is expected to increase in frequency and intensity because of social and environmental change. Flood events can induce human mobility, both as an immediate adaptation to individual flood events and in terms of permanent mobility away from at-risk areas. However, accurately quantifying both short- and long-term mobility patterns across large areas remains challenging. Traditional approaches, such as the use of census data and household and travel surveys, have provided critical insights into migration induced by environmental stress but are limited in terms of their spatial and temporal resolutions and geographic scope. One potential way to help address these gaps in measuring population displacement and response during flood events is through the use of high-resolution human mobility data, for example as derived from Meta’s Data for Good database, together with geospatial data. The gridded user count data from Facebook users, generated by the Data for Good programme at Meta, offers a rich source for tracking migration and displacement during crises such as disease outbreaks, flooding, and tropical cyclones across the globe, particularly in low- and middle-income countries where alternative mobility data are sparse. Leveraging anonymised mobile phone and internet location history data, this research investigates human mobility during extreme weather events, focusing on floods in socio-economically vulnerable regions. A high-resolution global flood database is employed alongside satellite-based nightlight data and Meta mobility data to provide near real-time insights into population movements and behaviours before, during, and after floods, providing greater insight into the impacts of flooding. Our findings may therefore be useful to civil defence and humanitarian agencies, enhancing their preparedness and response efforts in regions where flood infrastructure and resources are often limited.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: EO-enhanced Hydrology: How ESA EO R&D activities could enable an Early Warning System for smarter Drought Management – A case study of the 2022 French Droughts

Authors: Greg Sadlier, Luca Niccolai, Sara Cucaro, Alyssa Frayling
Affiliations: know.space
Note: Subject to ESA agreement to publicly share findings from our unpublished but non-confidential ‘Climate Crisis: Droughts - EO R&D activities for water resources management’ pilot case study for ESA (ref: Eleni Paliouras; Vanessa Keuck). The increasing frequency and severity of climate-related events, such as droughts, highlight the urgent need for innovative tools to support public sector decision-making. Earth Observation (EO) technologies offer transformative potential by providing high-resolution data, further enhanced by advanced analytical tools, including Artificial Intelligence (AI) and Machine Learning, to improve governance and enhance resilience. This presentation outlines a case study on the 2022 droughts in France, applying an analytical framework to evaluate the impact of five EO R&D activities on governance and their contributions to mitigating the social, economic, and environmental effects of droughts. The analytical framework is structured around the four pillars of climate change impacts: governance, social, economic, and environmental. It assesses the effects of extreme weather events by defining specific indicators, applying valuation methods (where relevant), and identifying appropriate data sources. Governance indicators capture improvements in decision-making and operational response efficiency, while economic indicators quantify cost savings or avoided losses. Social and environmental indicators measure reduced impacts on vulnerable communities and ecosystems. Designed to be adaptable, this framework provides a scalable tool for evaluating the impacts of other extreme weather events, offering actionable insights for policymakers and practitioners. The case study examines five EO R&D activities - Next Generation Gravity Mission (NGGM), DT-Hydrology, Soil Moisture, 4DMED-Hydrology, and AI4DROUGHT.
The latter, in particular, leverages AI to enhance drought monitoring and forecasting, providing advanced tools for analysing water cycle dynamics. Collectively, these activities, at varying stages of development and operation, improve data availability and decision-making tools, equipping practitioners with the means to anticipate and manage changes effectively. These advancements support more efficient operational responses, reducing the impacts of droughts on communities, industries, and ecosystems. Central to the analysis is the governance pillar of sustainable development, focusing on early warning systems and operational preparedness. The study quantifies socio-economic benefits, including potential cost savings achieved through enhanced early warning systems and response measures, while also exploring broader impacts across social, economic, and environmental dimensions. The findings demonstrate the critical role of sustained EO R&D investment in strengthening public sector governance, improving decision-making, and building resilience against extreme weather events.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: The Use of Satellite Technologies in Mapping Flood Extent and Analysis of Its Impact on the Availability of Ambulances in Flood Areas

Authors: Jakub Niedźwiedź, Adrian Bobowski, PhD Michał Lupa, Jakub Staszel
Affiliations: AGH University, Faculty of Geology, Geophysics and Environmental Protection, Space Technology Centre AGH
The use of satellite data has revolutionized crisis management. In the face of increasingly frequent and severe natural disasters, predicting the extent and locations of events such as floods has become crucial. Issues like unregulated riverbeds, urbanization through concreting, and excessive deforestation have exacerbated the problem of natural disasters. However, floods are not just about direct material losses or infrastructure damage. To address these challenges, we conducted a project analyzing the extent of a flood that struck southeastern Poland in September 2024 and its impact on ambulance routes and response times. Using satellite imagery, including both SAR (Synthetic Aperture Radar) and optical instruments, we delineated the flood extent in the most affected areas within the Lower Silesian and Opole Voivodeships. By integrating GPS data from ambulances, we superimposed a grid of points on road networks. Then we adjusted the road lengths in flooded areas to determine the fastest routes to emergency calls. After analyzing the changes in ambulance routes caused by inundated transport infrastructure, we created an ambulance access map highlighting areas cut off from emergency services during the flood. Our analysis revealed the necessity of considering indirect effects. Beyond impacting ambulance response times, the flood significantly reduced the availability of essential medical and logistical resources, complicating rescue coordination efforts. The findings from this project have broad potential applications in future crisis management. The identified challenges can help optimize planning for alternative routes and prioritize investments in disaster-resilient infrastructure. Future ambulance stations and algorithms for alternative route searches should account for flood-related infrastructure losses, ultimately improving the safety of residents who were previously at greater risk due to limited ambulance accessibility during natural disasters. 
The project's methodology can also be adapted to analyze larger areas. Furthermore, the flood extent map we developed can already be used to safeguard existing structures, enhancing the safety of people living in flood-prone areas. Potential stakeholders for such solutions include crisis management teams, governmental and local institutions, investors planning future developments in flood-risk areas, and residents seeking to assess the risk of ambulance inaccessibility to their homes.
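The rerouting analysis described above can be sketched with a tiny shortest-path example (a stdlib toy with a made-up road graph, not the project's GIS pipeline): flooded road segments are treated as impassable, and travel time to the emergency call is recomputed.

```python
import heapq

def fastest_route_time(graph, start, goal, flooded=frozenset()):
    """Dijkstra shortest travel time; edges in `flooded` are skipped.

    graph: {node: [(neighbour, minutes), ...]} with undirected edges
    listed both ways. Returns total minutes, or None if the goal is
    cut off from the ambulance station.
    """
    queue = [(0, start)]
    best = {start: 0}
    while queue:
        t, node = heapq.heappop(queue)
        if node == goal:
            return t
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, minutes in graph.get(node, []):
            if (node, nxt) in flooded or (nxt, node) in flooded:
                continue  # inundated segment
            nt = t + minutes
            if nt < best.get(nxt, float("inf")):
                best[nxt] = nt
                heapq.heappush(queue, (nt, nxt))
    return None
```

Cells for which this returns None for every station would appear on the access map as areas cut off from emergency services.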
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone Q-R)

Poster: Detection Of The Green Attack Stage Of Bark Beetle Infestation Using Sentinel-1 Time Series

Authors: M. Eng. Christine Hechtl, Andreas Schmitt, M. Sc. Sarah Hauser, Dr. Anna Wendleder, Dr. Marco Heurich
Affiliations: Hochschule München University Of Applied Sciences, Bavarian Forest National Park, Institute for Applications of Machine Learning and Intelligent Systems (IAMLIS), German Aerospace Center (DLR)
Innovative remote sensing approaches open up new dimensions in forest monitoring and, thanks to exhaustive and almost continuous surveying, in the protection and long-term strengthening of this complex ecosystem. Especially in times of climate change, the application of such methods is essential in order to actively meet the challenges. On the one hand, global warming, extreme precipitation events, long periods of drought and the simultaneous increase in biotic disturbance factors are threatening forest areas and changing their dynamics. On the other hand, the changing environmental conditions favour the spread of invasive species such as the bark beetle. Spruce trees, weakened by drought among other things, can no longer defend themselves sufficiently against the pests and consequently die. The sharp increase in forest mortality is a mammoth task for forest workers, making it more and more difficult to stop calamities such as bark beetle infestation. Reliable information on the current condition of the forest and its changes is therefore required as a basis for decision-making and the timely initiation of countermeasures. Due to the large spatial extent, terrestrial data collection is no longer feasible, which emphasizes the need for a remote sensing-based approach. Until now, bark beetle infestation has mainly been analysed using optical data. However, according to the current state of research, there is no practical method that can precisely and promptly detect bark beetle infestation in the “Green Attack Stage”, before the death of the trees. The major challenge in detecting an infestation is that the initial signs are very subtle, and by the time the tree crown discolours it is already too late to combat the infestation. In addition, optical data do not allow continuous bark beetle monitoring due to cloud cover, especially over forested areas in low mountain ranges.
In this context, the question arises whether a Sentinel-1 time series can be used to depict the vitality development of spruce trees and thus their vulnerability to bark beetle infestation. The radar system emits microwaves that penetrate clouds and thus ensure continuous data collection. As a result, up to four images per month are continuously available for the analyses. The study area is located in the south-east of Germany in the Bavarian Forest National Park on the border with the Czech Republic. Together with the Šumava National Park, it is the largest contiguous protected area in Central Europe. Over the years, a unique biodiversity has developed on an area of almost 25,000 hectares in the national park, as human intervention is only permitted under international guidelines, meaning that natural processes shape the ecosystem. One of the consequences of this is that the bark beetle infestation is not combated and the deadwood remains in the forest, allowing the development of the forest from healthy to infested to deadwood to be observed. Since 1988, the deadwood has been digitized by the Bavarian Forest National Park Administration and provided in a data pool [1]. This database was used to train and validate the implemented machine learning models. Additional data were also taken into account: in view of the strong impact of drought on the forest ecosystem described in the literature, hydrological data in the form of the Topographic Wetness Index (TWI) and the predominant soil type are also included in the analysis. The TWI represents relief-related soil moisture and is therefore a good indicator of soil hydrology, especially for the hilly terrain in the Bavarian Forest. Based on the digital terrain model and information on the water catchment area, the run-off behaviour can be determined, which is decisive for the available moisture in the soil [2].
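The TWI itself has a compact definition, ln(a / tan(β)), with a the specific catchment area and β the local slope; a per-cell toy computation (not the Julius Kühn-Institut product) looks like this:

```python
import math

def twi(upslope_area_m2, contour_width_m, slope_deg):
    """Topographic Wetness Index ln(a / tan(beta)): a is the specific
    catchment area (upslope contributing area per unit contour
    width), beta the local slope. Flat, convergent cells score high
    (wet); steep ridge cells score low (dry)."""
    a = upslope_area_m2 / contour_width_m
    return math.log(a / math.tan(math.radians(slope_deg)))
```

In practice a and β are derived per raster cell from the digital terrain model via a flow-accumulation algorithm.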
Due to the different water storage capacities of the various soil types, the soil map at a scale of 1:25,000 is also used as part of the data basis [3]. From this it is possible to deduce exactly which soil lies under the tree population and thus how large its water storage capacity is. In this study, Sentinel-1 data of the European Copernicus programme from April to October in the years 2020 to 2023 was used. This period also corresponds to the swarming flight of the bark beetles. In addition, not only the bark beetle infestation is analysed, but the vitality development of the spruce trees up to three years before the infestation is also included in the analysis. The radar data was pre-processed at the German Aerospace Center (DLR) by the Multi-SAR system. The most important steps are as follows: 1) decomposition of the Sentinel-1 data into Kennaugh elements; 2) multi-looking to reduce noise; 3) orthorectification using the Copernicus digital elevation model; 4) radiometric calibration to flattening gamma. Based on the method of an orthogonal transform on hyper-complex bases, the Sentinel-1 data is broken down into the individual Kennaugh elements K0, K1, K5 and K8 [4]. The total intensity K0, as the sum of VV and VH, is sensitive to the density and moisture of the vegetation. K1 represents the difference in intensity between VV and VH, which can be used to determine whether there is an increase in volume scatterers. These capabilities allow the upright forest structure to be captured very well. This type of processing differs from previously investigated and developed methods for bark beetle detection, as the polarizations VV and VH are considered together in fused and normalized Kennaugh elements and the data are sufficiently calibrated thanks to the flattening gamma approach [5].
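For dual-pol Sentinel-1, the two Kennaugh elements described above reduce to a sum and a difference of the channel intensities; a minimal per-pixel sketch (ignoring multi-looking, orthorectification and calibration, with an invented function name):

```python
def kennaugh_dual_pol(vv, vh):
    """K0 = VV + VH (total intensity), K1 = VV - VH (intensity
    difference between the polarizations). K1 is returned normalised
    by K0, bounding it to [-1, 1] as in the normalized Kennaugh
    representation."""
    k0 = vv + vh
    return k0, (vv - vh) / k0
```

A time series of these normalised elements per spruce stand is what feeds the random forest models described next.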
As bark beetle infestation is a complex process in the forest ecosystem and is strongly linked to the drought that occurs, additional environmental data is also used. These include the monthly precipitation sum, the topographic wetness index, the soil moisture and the prevailing soil classes. Based on this data, various random forest models are created, each of which predicts the vitality level of the conifers per epoch of the time series. Among the models trained with different features, the combination of the Kennaugh elements, the topographic wetness index and the soil classes leads to the best results. In addition to visual validation, the high quality of the results of the random forest regression is also confirmed by R² values of 83% and 89% and an RMSE of between 5 and 9 months. The latter indicates that, on average, the model forecasts deviate by about half a year. In contrast, the inclusion of the precipitation sum, soil moisture and water retention capacity does not lead to any improvement. This illustrates that a targeted selection of features is more important than the number of different features. If one also considers the influence of each feature on the decisions in the random forest model, complex processes in the ecosystem can be understood. For example, the root structure of the spruce can be traced. Spruce trees are shallow-rooted and therefore spread their roots close to the surface. In drier areas that are further away from groundwater, they also form so-called sinker roots that grow vertically downwards. This relationship is reflected in the impact of the features on the prediction. The results show for the first time that the vitality development of coniferous trees from a healthy or already stressed state to bark beetle-induced deadwood can be derived using a Sentinel-1 time series. 
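A random forest regression of this kind, with per-pixel features and a vitality target expressed in months, might be sketched as below. The data is a synthetic stand-in, and the feature set and target encoding are illustrative assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in feature table (e.g. Kennaugh elements, TWI, soil class).
X = rng.normal(size=(500, 4))
# Synthetic target: months until bark-beetle-induced dieback.
y = 24 + 6 * X[:, 0] - 4 * X[:, 1] + rng.normal(scale=2, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
pred = model.predict(X)

r2 = r2_score(y, pred)
rmse = mean_squared_error(y, pred) ** 0.5
# Per-feature contribution to the model's decisions (values sum to 1);
# this is the quantity used above to interpret ecosystem processes.
importances = model.feature_importances_
```

Inspecting `importances` is the mechanism by which relationships such as the sinker-root behaviour can be read off the trained model.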
By considering the intensities of VV and VH together as normalized Kennaugh elements in each image, the structure of the forest can be characterized in more detail and unique features regarding the water and chlorophyll content in the spruce needles can be derived. In this way, measures can be taken promptly even during the “Green Attack Stage” if necessary. This valuable insight should be incorporated and further developed in future research. In particular, the transferability of the models to other forest areas should also be included in future studies. By taking into account other environmental data, such as evapotranspiration, further complex interactions in the forest ecosystem could also be deciphered. [1] H. Latifi et al., “A laboratory for conceiving Essential Biodiversity Variables (EBVs)—The ‘Data pool initiative for the Bohemian Forest Ecosystem’”, Methods Ecol. Evol., vol. 12, no. 11, pp. 2073–2083, Nov. 2021, doi: 10.1111/2041-210X.13695. [2] Julius Kühn-Institut, “Topographischer Feuchteindex”, Julius Kühn-Institut. Accessed February 10, 2024. https://wms.flf.julius-kuehn.de/cgi-bin/twi/qgis_mapserv.fcgi [3] Bayerisches Landesamt für Umwelt, “Übersichtsbodenkarte 1:25.000”. Accessed October 19, 2024. https://www.lfu.bayern.de/boden/karten_daten/uebk25/index.htm [4] A. Schmitt, A. Wendleder, and S. Hinz, “The Kennaugh element framework for multi-scale, multi-polarized, multi-temporal and multi-frequency SAR image preparation”, ISPRS J. Photogramm. Remote Sens., vol. 102, pp. 122–139, Apr. 2015, doi: 10.1016/j.isprsjprs.2015.01.007. [5] D. Small, “Flattening Gamma: Radiometric Terrain Correction for SAR Imagery”, IEEE Trans. Geosci. Remote Sens., vol. 49, pp. 3081–3093, Sep. 2011, doi: 10.1109/TGRS.2011.2120616.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: C.05.04 - POSTER - Landsat Program and Science Applications

Landsat satellites have been providing continuous monitoring of the Earth’s surface since 1972. The free and open data policy of the Landsat program enables the global land imaging user community to explore the entire 52-year long-term data record to advance our scientific knowledge and explore innovative uses of remote sensing data to support a variety of science applications. This session will focus on Landsat mission collaboration and related data and science applications of Landsat data and products that provide societal benefits, and efforts by European and U.S. agencies to maximize their benefits alongside comparable European land imaging missions such as Copernicus Sentinel-2.

A diverse set of multi-modal science applications has been enabled with Landsat and Sentinel-2 harmonization and fusion with SAR, LiDAR, high-resolution commercial imagery, and hyperspectral imagery among others. Rapid progress has been achieved using the entire Landsat archive with access to high-end cloud computing resources. Landsat data and applications have revealed impacts from humans and climate change across the globe in land-cover, land-use, agriculture, forestry, aquatic and cryosphere systems.

Building on the 52+ year legacy and informed by broad user community needs, Landsat Next’s enhanced temporal (6-day revisit), spatial (10 – 60 m), and superspectral (21 visible to shortwave infrared and 5 thermal bands) resolution will provide new avenues for scientific discovery. This session will provide updates on Landsat missions and products, and collaboration activities with international partners on mission planning, data access, and science and applications development.

We invite presentations that demonstrate international collaboration and science advancements on the above topics. We also invite presentations on innovative uses of Landsat data alone or in combination with other Earth observation data modalities that meet societal needs today and in coming decades.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Global Evaluation of Temporal Consistency and Uncertainty in Vegetation Indices Derived from NASA's Harmonized Landsat and Sentinel-2 (HLS) Surface Reflectance Product

Authors: Qiang Zhou, Margaret Wooten, Christopher Neigh, Junchang Ju, Zhe Zhu, Petya Campbell, Madhu Sridhar, Brad Baker
Affiliations: Science Systems and Applications, Inc (SSAI), contractor to NASA GSFC, NASA Goddard Space Flight Center, University of Maryland, Department of Natural Resources and the Environment, University of Connecticut, Joint Center for Earth Systems Technology (JCET), University of Maryland, NASA Marshall Space Flight Center, University of Alabama in Huntsville
NASA's Harmonized Landsat and Sentinel-2 (HLS) project recently released a suite of Vegetation Index (VI) products derived from HLS Landsat 30 m (L30) and Sentinel-2 30 m (S30) surface reflectance data. HLS data provide observations every 1.6 days on a global average, regardless of cloud cover, and every 2.2 days in the most data-scarce tropical regions, when data from all four satellites are available. VIs are useful for monitoring vegetation dynamics, such as forest loss, crop growth, and fire disturbance severity and recovery, among many other applications. To ensure reliable data for scientific applications, the temporal consistency of HLS VIs is important. Previous evaluations of other VI products have often relied on field data or other existing products, which can be costly or can make it hard to disentangle discrepancies due to varying production algorithms. The HLS dataset provides a unique opportunity for consistency assessment, as same-day L30 and S30 images of the same geographic areas, acquired approximately 30 minutes apart, are available worldwide. In this study, we evaluated 21 VIs derived from 545 same-day L30 and S30 image pairs, encompassing diverse land cover types globally. We randomly selected over 136 million cloud-free pixels from these image pairs. We calculated the normalized Root Mean Square Deviation (RMSDIQR) and R² for each VI, and found high consistency (R² > 0.94) for most VIs, except for the Chlorophyll Vegetation Index (CVI; R² = 0.5). VIs with lower consistency were typically designed for specific applications and land covers (e.g., crop chlorophyll). Therefore, we stratified the pixel pairs by Moderate Resolution Imaging Spectroradiometer (MODIS) Land Cover Types (MCD12Q1 Version 6.1). The RMSDIQR and R² were reported for each combination of vegetation type and VI. We also investigated factors contributing to discrepancies. Large View Azimuth Angle Differences (VAD) (> 125°) and high Solar Zenith Angles (SZ) (> 60°) increased discrepancies in most of the VIs. 
Large VAD indicates forward/backward scattering of the pixel pairs, and high SZ occurs in the high- or mid-latitude regions during the winter season. Additionally, we analyzed discrepancies across different levels of aerosol optical thickness as indicated by the HLS quality assessment layer, where a cloud-free pixel can have a low, moderate, or high aerosol optical thickness level. We used the VIs derived from low aerosol level cloud-free pixels as the reference to evaluate the discrepancy associated with moderate or high aerosol levels. Low-low or low-moderate (moderate-low) aerosol level pixel pairs showed the best agreement, while low-high or high-low aerosol level pairs exhibited substantial discrepancies, indicating higher uncertainty in VIs derived from high aerosol level observations. Even in low-low aerosol level pairs, some VIs showed increased discrepancies for extreme VI values. This behavior may be attributed to the soil background influence or noteworthy noise in areas with very low surface reflectance. We report specific VI value ranges of lower uncertainties, providing valuable guidance for scientific applications. The observed discrepancies and uncertainties associated with VAD, SZ, and aerosol levels highlight limitations in current atmospheric and BRDF correction algorithms. Our analysis offers essential insights for HLS data users and future algorithm development.
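As a sketch of the pairwise evaluation, the snippet below computes NDVI from red/NIR reflectance and the two consistency metrics for a set of same-day pixel pairs. Normalising the RMSD by the interquartile range of the reference VI is an assumption about the abstract's RMSDIQR, not a confirmed definition:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index from NIR/red reflectance."""
    return (nir - red) / (nir + red + eps)

def consistency_metrics(vi_l30, vi_s30):
    """R² and IQR-normalised RMSD between same-day L30/S30 VI pairs.

    vi_l30 is treated as the reference; the IQR normalisation of the
    RMSD is an illustrative assumption.
    """
    vi_l30 = np.asarray(vi_l30, dtype=float)
    vi_s30 = np.asarray(vi_s30, dtype=float)
    d = vi_s30 - vi_l30
    rmsd = np.sqrt(np.mean(d ** 2))
    q75, q25 = np.percentile(vi_l30, [75, 25])
    rmsd_iqr = rmsd / (q75 - q25)
    r = np.corrcoef(vi_l30, vi_s30)[0, 1]
    return r ** 2, rmsd_iqr
```

The same two metrics can then be reported per stratum (e.g. per MODIS land cover class) by applying the function to the stratified pixel subsets.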
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Forest Disturbances and Vulnerability mapping, preliminary results

Authors: Dr Giovanni D'Amico, Saverio Francini, Ruben Valbuena, Dr Gherardo Chirici
Affiliations: Department of Agriculture, Food, Environment and Forest Science and Technology (DAGRI), University of Florence, Department of Science and Technology of Agriculture and Environment (DISTAL), University of Bologna, Department of Forest Resource Management, Swedish University of Agricultural Sciences (SLU), Fondazione per il Futuro delle Città
Climate change and environmental stressors negatively affect forest ecosystems and biodiversity. Climate-smart forestry and restoration are acknowledged as global solutions in the European Forestry Strategy, which prioritizes sustainable management for biodiversity and climate resilience, in addition to promoting the multifunctionality of forests. Consequently, understanding the effects of forest management and how forests adapt to climate change is crucial. However, a lack of data hinders these investigations. Therefore, for efficient planning and mitigation, standardized monitoring programs across Europe are essential. In this context, the European project FORWARDS aims to bridge the current separation between ground and satellite forest information, to develop the ForestWard Observatory - a European observatory for forest climate change impacts. Specifically, based on the Google Earth Engine cloud computing capabilities, we processed approximately two hundred thousand Landsat images to provide four decades (1984-2023) of Europe-wide disturbance mapping and characterization. To characterize the detected forest changes, several parameters were predicted, including the severity of the disturbance, its persistence, and the number of years the forest needed to recover. Next, this detailed disturbance information was used to estimate the per-pixel vulnerability of the forest to disturbance, obtaining comprehensive and exhaustive information on European forest disturbances. To facilitate this challenging procedure, we developed a Google Earth Engine application that enables visualization, filtering, and downloading of each detected forest disturbance parameter. 
Within this framework of harmonizing European forest data, we are developing a multi-temporal forest disturbance truth map by integrating historical multispectral Landsat data with the most recent and accurate Sentinel-2 data. This harmonized dataset will allow, on the one hand, the extension of our application so that users can visualize all the multi-temporal disturbance parameters for a deeper understanding of European disturbances. On the other hand, these data are crucial as multifunctionality-related variables and will constitute the basis for wall-to-wall mapping of European forests' vulnerability and resilience.
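A minimal sketch of how per-pixel disturbance parameters such as severity and years-to-recovery might be derived from an annual spectral-index series. Both definitions used here (severity as the pre/post index drop, recovery as regaining 90% of the pre-disturbance level) are illustrative assumptions, not the project's production algorithm:

```python
import numpy as np

def disturbance_summary(index_series, years, dist_year, recovery_frac=0.9):
    """Severity and years-to-recovery from an annual spectral-index series.

    Severity is the pre- vs post-disturbance index drop; recovery is the
    offset (in years) of the first observation at or above
    `recovery_frac` of the pre-disturbance mean, or None if never reached.
    """
    years = np.asarray(years)
    idx = np.asarray(index_series, dtype=float)
    i = int(np.where(years == dist_year)[0][0])
    pre = idx[:i].mean()                  # mean index before disturbance
    severity = pre - idx[i]               # drop at the disturbance year
    recovered = np.where(idx[i:] >= recovery_frac * pre)[0]
    years_to_recover = int(recovered[0]) if recovered.size else None
    return severity, years_to_recover
```

Applied per pixel over a 1984-2023 index stack, such parameters would populate the disturbance characterization layers described above.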
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Using Landsat Evapotranspiration and Climate Data for Estimating High-Resolution Gridded and Field-scale Irrigation Water Use and Groundwater Withdrawals in the Western U.S.

Authors: Rahel Pommerenke, Dr. Sayantan Majumdar, Mr. Thomas J. Ott, Dr. Justin L. Huntington, Dr. Ryan Smith, Mr. Peter ReVelle, Mr. Matt Bromley, Mr Md Fahim Hasan, Mr. Christopher Pearson, Mr. Blake Minor, Mr. Charles G.
Affiliations: Desert Research Institute, Colorado State University
In the Western United States (U.S.), the combination of ongoing and projected droughts, rising irrigation water demands, and population growth is expected to intensify groundwater consumption. Despite the pressing need to address these challenges, most irrigation systems in this region are not equipped with the flowmeters required to monitor groundwater withdrawals, which is crucial to implementing sustainable water management practices. However, metering is not a trivial solution, as meters can often be faulty or inadequately calibrated, resulting in discrepancies in the recorded readings. Therefore, developing reliable and efficient solutions for monitoring groundwater withdrawals is paramount in addressing the urgent water management concerns in the Western U.S. The existing methods for estimating withdrawals either entail significant costs and time (e.g., process-based models) or are not suited to support local-scale water management. Building on our prior research, here, we rely on Landsat actual evapotranspiration (ET) from OpenET, Landsat-derived irrigation masks (IrrMapper), irrigation data (field boundaries, water source type), and climate datasets (gridMET, CONUS404, Daymet) to estimate annual groundwater withdrawals, irrigation water use (i.e., consumptive use), and irrigation efficiencies in Nevada, Oregon, and Arizona. We use statistical (linear regression and bootstrapping) and machine learning (Random Forests, XGBoost, LightGBM) approaches and compare our groundwater withdrawal estimates with in-situ meter data at multiple spatial scales: field (30-100 m), local (2 km), and individual groundwater basin scales. We also evaluate these regression models based on temporal (leaving out multiple years from the model training) and spatial holdouts (leaving out multiple groundwater basins from the model training). 
Our models can explain 50%-80% variance in withdrawal depths and 90% variance in withdrawal volumes across these spatial scales and evaluation strategies. The estimated irrigation efficiencies (80%-90%) also align with known irrigation system efficiencies in the study areas (Nevada, Oregon, and Arizona). While these groundwater withdrawal estimates can be further improved, we consider our approach to be more accurate than simply relying on common water right duties, potential crop ET-based estimates, or assumed values. Ultimately, we aim to empower water resource communities by improving water budget information and facilitating the implementation of groundwater management plans throughout this region.
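As a back-of-the-envelope illustration of the relationship between consumptive use, irrigation efficiency and withdrawals, the helper below is an assumed water-balance simplification, not the statistical or machine learning models the abstract describes:

```python
def groundwater_withdrawal(et_actual_mm, effective_precip_mm, efficiency=0.85):
    """Annual withdrawal depth (mm) from a simple water-balance assumption.

    Consumptive use is taken as actual ET minus effective precipitation
    (floored at zero), and withdrawal as consumptive use divided by
    irrigation efficiency. The default efficiency of 0.85 sits in the
    80-90% range reported in the abstract; the formula itself is an
    illustrative simplification.
    """
    consumptive_use = max(et_actual_mm - effective_precip_mm, 0.0)
    return consumptive_use / efficiency
```

For example, a field with 900 mm of actual ET and 200 mm of effective precipitation at 87.5% efficiency would imply an 800 mm withdrawal depth under these assumptions.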
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: The ESA Landsat 1-5 MSS Analysis Ready Data Products, an initiative to extend multispectral surface reflectance time series back to the 1970s

Authors: SEBASTIEN SAUNIER, Fay Done, Samantha Lavander, Sabrina Pinori, Roberto Biasutti, Philippe Goryl
Affiliations: Telespazio France, Telespazio Vega UK, Serco, ESA/ESRIN
The land monitoring community expects consistent and harmonised datasets, spanning a significant period of time, in order to derive Essential Climate Variables (ECVs). Within this context, the ESA Landsat L1 data archive, which covers the entire duration of the NASA/USGS Landsat Programme (initiated with the launch of Landsat 1 in 1972), provides an outstanding source of data. The ESA Data Services Initiative (DSI) / Systematic Landsat Processor (SLAP) projects (2010-2020) provided a good opportunity to reach some major milestones with regard to Landsat Level 0 (L0) data consolidation, Level 1 (L1) data processing and, finally, Landsat product data quality (Saunier, 2017). Many IDEAS / QA4EO (ESA contracts) experiments showed that, after the DSI bulk reprocessing, besides being compatible with NASA Collection 1 products (USGS website), the ESA archive can be used to produce consistent and long time series (proceedings of the Multi Temporal conference in 2017, (Saunier, 2017)). In order to optimize dataset consistency and its interoperability (with other sources), the Committee on Earth Observation Satellites (CEOS) suggested the concept of Analysis Ready Data (ARD) (CEOS ARD website). Extending the multispectral records back to the 1970s in a CEOS ARD compatible way is challenging, but is definitely crucial for global change science and applications. In this presentation, we propose to introduce the ARD self-assessment framework and its translation in the context of the ESA MSS Level 1C products. Then, starting from product family specification items and associated threshold levels, different options for technical improvements captured from the results of algorithm / processing experiments are detailed. It is shown that, besides metadata and image quality improvements (missing data), improvements in domains related to cloud shadow masking, geometric correction and atmospheric correction would make ESA MSS data CEOS ARD compatible. 
The presentation demonstrates that technical solutions exist and are feasible, mostly thanks to major achievements of the last decades, notably in the fields of artificial intelligence, computer vision, processing performance and climatological data reanalysis. -- References -- S. Saunier, F. Done, S. Lavender, R. Biasutti, P. Goryl, “On the use of Radial Basis Functions to improve geometric accuracy of the ESA Landsat MSS historical archive”, VH-RODA 2024, ESRIN, December 2024 (poster). S. Saunier, “Bulk processing of the Landsat MSS/TM/ETM+ archive of the European Space Agency: an insight into the level 1 MSS processing”, in Image and Signal Processing for Remote Sensing XXIII, J. A. Benediktsson, Ed., Warsaw, Poland: SPIE, Oct. 2017, p. 1. doi: 10.1117/12.2278633. ESA Landsat MSS Catalog: https://landsatdiss.eo.esa.int/socat/LandsatMSS/ USGS Website Landsat Collection 1: https://www.usgs.gov/landsat-missions/landsat-collection-1 S. Saunier et al., “European Space Agency (ESA) Landsat MSS/TM/ETM+/OLI archive: 42 years of our history”, Brugge, Belgium, June 2017. http://ieeexplore.ieee.org/document/8035252/ CEOS ARD Website: https://ceos.org/ard/
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Aboveground biomass prediction in tropical forests with a multi-modal approach and temporal features from HLS data

Authors: Rodrigo Leite, Dr. Qiang Zhou, Margaret Wooten, William Wagner, Dr. Christopher Neigh
Affiliations: NASA Postdoctoral Program Fellow, Goddard Space Flight Center, Biospheric Sciences Laboratory, NASA Goddard Space Flight Center, Science Systems and Applications, Inc (SSAI)
Quantifying and monitoring aboveground biomass (AGB) in tropical forests is essential for supporting conservation and restoration initiatives in these ecosystems. NASA’s Global Ecosystem Dynamics Investigation (GEDI) lidar data integrated with multisource remote sensing imagery has been used to provide AGB predictions at large scales. Tropical forests, however, often present high growth rates and dense canopy cover that can limit the ability of this approach to fully capture the AGB variability. Understanding these limitations across forest age and AGB ranges is essential for enhancing AGB predictions for forest monitoring over time and informing remote sensing-based growth models. The high temporal coverage of products such as the Harmonized Landsat and Sentinel-2 (HLS) dataset offers a valuable opportunity that has not been fully explored. In this study, we explore a multi-modal data fusion approach leveraging HLS to predict AGB in tropical forests. The initial experiments focus on assessing forest patches located in Minas Gerais, Brazil, within the Atlantic Forest domain. The two main vegetation types in the region are Dense Rainforest and Seasonally Dry Semi-deciduous Forest. We calculated vegetation indices from HLS annual mosaics to use as predictors in a Random Forest (RF) model, with GEDI L4A AGB serving as the reference dataset. The upscaling approach consists of extracting values from the layer-stack of vegetation indices intersecting the GEDI footprints, training the model to predict AGB, and applying the model to the entire image stack. A subset of 11,452 footprints was used in this analysis, where 70% of the data was used for training and 30% to validate the model. The results show a model with an R² of 0.69 and an RMSE% of 33.5%, with an observed underestimation for AGB > 200 Mg/ha. This suggests the need to incorporate additional metrics to capture the full range of AGB, which can exceed 300-400 Mg/ha in the study domain. 
Ongoing analysis will include and evaluate phenology-specific temporal features derived from HLS time series and Sentinel-1 vegetation indices to enhance predictions and explore seasonal and annual phenological cycles. We also use a Landsat-based historical land cover classification dataset to explore the influence of vegetation age on AGB variability. These efforts aim to take advantage of HLS time series and multi-modal imagery data fusion with GEDI to enhance monitoring and management of tropical forest ecosystems.
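The footprint-level upscaling workflow (extract VI predictors at GEDI footprints, 70/30 train/validation split, random forest regression) can be sketched with synthetic stand-in data; variable names and values are illustrative, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic stand-ins: VI predictors sampled at GEDI footprints and
# GEDI L4A AGB (Mg/ha) as the reference target.
X = rng.uniform(0, 1, size=(2000, 5))
agb = 300 * X[:, 0] + 50 * X[:, 1] + rng.normal(scale=20, size=2000)

# 70/30 split, mirroring the abstract's training/validation design.
X_tr, X_te, y_tr, y_te = train_test_split(X, agb, test_size=0.3, random_state=1)
rf = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)
pred = rf.predict(X_te)

r2 = r2_score(y_te, pred)
rmse_pct = 100 * np.sqrt(np.mean((y_te - pred) ** 2)) / y_te.mean()
```

The trained model would then be applied to the full VI layer-stack to produce a wall-to-wall AGB map.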
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Leveraging the temporal benefits of Harmonized Landsat and Sentinel-2 (HLS) data for modeling fine-scale land cover and land use change in complex landscapes

Authors: Margaret Wooten, Jordan Caraballo-Vega, Molly Brown, Konrad Wessels, Mark Carroll, Minh Tri Li, Aziz Diouf, Modou Mbaye, Christopher Neigh
Affiliations: NASA GSFC, University of Maryland College Park, George Mason University, Centre de Suivi Ecologique, Senegalese Agricultural Research Institute
In sub-Saharan West Africa, accelerating population growth and worsening effects of climate change are further straining natural resources and threatening smallholder agricultural productivity. As such, understanding the spatial and temporal dynamics of land cover and land use (LCLU) changes is vital for the majority of people who rely heavily on rainfed subsistence agriculture to support their livelihoods. However, this region is characterized by a mosaic of small, irregularly defined agricultural fields and grasslands, interspersed with sparse pockets of savannah woodlands and individual tree stands, making it notoriously difficult to monitor with traditional remote sensing approaches and moderate- to coarse-resolution satellite data. Moreover, extreme variations in phenology, significant burnt area during the dry season, and a scarcity of cloud-free data during the rainy season present additional obstacles. These challenges complicate LCLU modeling in this region, especially for land use classes that are difficult to differentiate from one another without consistent cloud-free observations during the growing season (e.g. cultivated crop or fallow field). To address these challenges, we developed a near-autonomous spatiotemporal data fusion framework that combines objects derived in an unsupervised segmentation from commercial very-high-resolution (VHR) multispectral satellite data with temporal patterns obtained from coarser spatial resolution data and their derived vegetation indices (VIs). Our workflow offers flexibility in the specification of the underlying time series data, provided this data is represented at the necessary temporal interval (e.g. monthly) and at an adequate spatial resolution. VIs derived from optical satellite imagery have long been used for LCLU mapping, but the significant presence of clouds and the spatial and temporal resolution trade-offs inherent in existing global multispectral satellite constellations (e.g. MODIS, Landsat, Sentinel-2) have traditionally hindered our ability to obtain reliably cloud-free observations at the spatial and temporal scales necessary for skillfully predicting land cover and land use classes within our study domain. In response, we have typically relied on Sentinel-1’s cloud-penetrating Synthetic Aperture Radar (SAR) satellite data to provide the predictive temporal data for our model. But thanks to a joint initiative between the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS) to produce a seamless surface reflectance product from NASA’s Landsat and the European Space Agency’s Sentinel-2 satellites (Harmonized Landsat and Sentinel-2; HLS), we are now able to derive VIs at a sufficiently high temporal resolution (every 1.5 to 3 days on average) without sacrificing the spatial detail necessary for resolving these fine-scale LCLU classes and their changes. 
Here we present the results of this workflow applied in Senegal, where we identify LCLU classes such as agroforestry, cultivated and fallow agriculture, urban area, and dense or degraded forests using a multivariate One-Dimensional Convolutional Neural Network (1DCNN) model, fueled by a combination of single-date VHR multispectral imagery from Maxar’s WorldView constellation and time-series data from HLS and SAR. Independent validation of the initial model results, substantiated by in-situ observations collected on a recent field campaign to Senegal, reveals an overall classification accuracy greater than 75%. The science output from this model includes a spatiotemporal land use database that can be used for LCLU change detection and subsequent efforts to guide informed policy and land management decisions. Our approach highlights the usefulness of multi-modal data fusion strategies and the inter-mission data integration efforts of the HLS project for addressing important societal challenges.
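The core operation of the multivariate 1DCNN, a convolution over a channel-by-time feature stack (e.g. VI and SAR time series per segment), can be illustrated in plain NumPy. This is a sketch of a single convolutional layer with ReLU activation, not the authors' network architecture:

```python
import numpy as np

def conv1d_multichannel(x, kernels, bias):
    """Valid-mode multivariate 1-D convolution with ReLU.

    x: (channels, timesteps) feature stack for one spatial unit.
    kernels: (filters, channels, width) learned weights.
    bias: (filters,) learned offsets.
    Returns (filters, timesteps - width + 1) activations.
    """
    n_f, n_c, w = kernels.shape
    t_out = x.shape[1] - w + 1
    out = np.zeros((n_f, t_out))
    for f in range(n_f):
        for t in range(t_out):
            # Each filter slides along time across ALL channels at once,
            # which is how the model mixes multi-modal time series.
            out[f, t] = np.sum(kernels[f] * x[:, t:t + w]) + bias[f]
    return np.maximum(out, 0.0)  # ReLU activation
```

Stacking such layers, then pooling and a softmax classifier, yields per-unit LCLU class probabilities in a full 1DCNN.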
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Continuous Change Detection and Classification using NASA’s Harmonized Landsat and Sentinel-2 (HLS) Data in Google Earth Engine

Authors: Thuy Trang Vo, Junchang Ju, Qiang Zhou, Bradley Baker, Brian Freitag, Pontus Olofsson, Christopher Neigh, Madhu Sridhar
Affiliations: University of Alabama in Huntsville, Earth System Science Interdisciplinary Center, University of Maryland, Science Systems and Applications, Inc (SSAI), University of Alabama in Huntsville, NASA Marshall Space Flight Center, NASA Marshall Space Flight Center, Earth System Science Interdisciplinary Center, University of Maryland
NASA’s Harmonized Landsat and Sentinel-2 (HLS) global surface reflectance products are generated by combining input data from OLI and MSI sensors aboard NASA/USGS’s Landsat 8/9 and ESA’s Sentinel-2A/B satellites, respectively. The analysis-ready HLS dataset is produced at a medium spatial resolution of 30m with a near-global coverage enabling land observation every 2-3 days. The production of harmonized surface reflectance on a common MGRS grid involves several processing steps, including atmospheric correction of Top of Atmosphere (TOA) data, cloud masking, normalizing bi-directional view angle effects and bandpass adjustment to account for sensor level differences with OLI as the reference. The dataset has undergone rigorous validation and consistency evaluation. The data harmonization is found to be efficacious and therefore the data products are suitable for quantitative analyses. Compared to the revisit times of individual constituent satellites, the HLS virtual constellation dataset offers significantly higher observational frequency. The HLS data archive exceeds 4 PB (and ~30M products) and extends nearly a decade (HLS Landsat component L30: April 2013 onwards, HLS Sentinel-2 component S30: Nov. 2015 onwards). This rich dataset is useful for many applications such as disaster response and vegetation monitoring. In particular, availability of highly frequent surface reflectance greatly benefits time series based analysis in uncovering seasonality and long-term trends. For geospatial analysis with large datasets and at global scales, Google Earth Engine (GEE) has emerged as a powerful platform which removes barriers to users by offering convenient tools and computing resources. HLS L30 data products are available on GEE and as of December 2024, HLS S30 data is being actively ingested. 
The goal of this study is to demonstrate the benefits of the HLS data series, compared to Landsat-only or Sentinel-2-only data stacks, by using the Continuous Change Detection and Classification algorithm available on GEE. The study will focus on a few key applications and highlight the ease of use at different scales by providing examples of pixel-based time series and spatial visualizations. These analyses can be further extended to other land cover applications to derive useful insights by leveraging the benefits of the HLS dataset and GEE.
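At its core, CCDC fits a harmonic regression to each pixel's reflectance time series and declares a break when observations depart persistently from the model. The sketch below shows a simplified single-harmonic version with a one-observation test; real CCDC uses additional harmonic terms and requires several consecutive anomalous observations before confirming a break:

```python
import numpy as np

def fit_harmonic(t, y):
    """Least-squares fit of a CCDC-style harmonic model:
    y ~ a0 + a1*t + a2*cos(2*pi*t) + a3*sin(2*pi*t), with t in years."""
    A = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
    coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
    rmse = np.sqrt(np.mean((A @ coefs - y) ** 2))
    return coefs, rmse

def is_change(t_new, y_new, coefs, rmse, k=3.0):
    """Flag observations deviating from the model by more than k * RMSE
    (simplified single-observation test, for illustration only)."""
    A = np.column_stack([np.ones_like(t_new), t_new,
                         np.cos(2 * np.pi * t_new), np.sin(2 * np.pi * t_new)])
    return np.abs(A @ coefs - y_new) > k * rmse
```

Denser HLS observations tighten the harmonic fit and shorten the delay before a break can be confirmed, which is the advantage the study sets out to demonstrate.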
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: A.06.01 - POSTER - Geospace dynamics: modelling, coupling and Space Weather

This session aims to capture novel scientific research outcomes in the Geospace dynamics field, encompassing atmosphere, ionosphere, thermosphere, and magnetosphere - modelling and coupling. A significant contribution is expected from Space Weather science using data from ESA Earth Observation missions such as Swarm (in particular FAST data) and SMOS, among others. The objective of the session is to collect recent findings that improve the knowledge and understanding of the dynamics and coupling mechanisms of the middle and upper atmosphere and their link with the outer regions that are mainly driven by the Sun and the solar cycle, as well as a focus on data validation and on Space Weather events. We also solicit results from simulations, ground-based observatories or other heliophysics missions, in particular those demonstrating synergistic combinations of these elements.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Swarm – SMOS synergies for Space Weather events monitoring

Authors: Roberta Forte, Raffaele Crapolicchio, Enkelejda Qamili, Vincenzo Panebianco, Dr. Lorenzo Trenchi, Federica Guarnaccia, Veronica Gonzalez Gambau, Dr. Nuria Duffo
Affiliations: Serco For Esa, CSIC Institute of Marine Science, Universitat Politecnica de Catalunya
ESA Earth Explorer missions pioneer new space technology and observe our planet to help answer key science questions about Earth’s systems; in some cases, they can go beyond their original scientific purpose and be beneficial to other fields of science. Moreover, they enable synergies that open up further applications in different fields. An example of a fruitful synergy between two completely different missions, fostering new objectives beyond their original ones, is Swarm and SMOS. Both these Earth Explorer missions can be advantageous for Space Weather applications: their distinctive characteristic is the possibility to observe Space Weather phenomena from different points of view. The SMOS mission is dedicated to soil moisture and salinity measurements, but within these measurements, the on-board Microwave Imaging Radiometer with Aperture Synthesis (MIRAS) captures a signal from the Sun that makes it possible to derive the Solar Flux in L-band, with its polarization component. Swarm’s original purpose is to characterize Earth’s geomagnetic, ionospheric and electric fields and their temporal variation, through measurements of Earth’s magnetic field and plasma parameters with a distinctive constellation configuration of 3 satellites. With its new “Fast” processing chain, Swarm is able to provide data with a minimum delay with respect to acquisition time, making this mission eligible for Space Weather applications. Moreover, both Swarm and SMOS provide measurements of Vertical Total Electron Content (VTEC), very useful for evaluating the impact of Space Weather phenomena on the ionosphere. This poster aims at demonstrating how these two missions can enhance their contributions to Space Weather by combining their distinct observations, revealing new possible applications. 
Some examples of this collaboration will be presented as results of the analysis of the same events observed by SMOS and Swarm, with a focus on high-impact events of Solar Cycle 25, involving several parameters: the solar flux in L-band and its circular polarization component measured by SMOS; the detection of solar radio bursts from SMOS solar flux variations, compared with GNSS effects and radio blackouts on the ground; variations in geomagnetic field, plasma density, plasma temperature and field-aligned currents measured by Swarm; and VTEC measurements from both missions.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: AGATA (Antarctic Geospace and ATmosphere reseArch): the new SCAR Scientific Research Programme and its mentoring activities

Authors: Jaroslav Urbar, Lucilla Alfonsi, Wojciech Jacek Miloch, Nicolas Bergeot, Eduardo Perez Macho, Yamila Melendi, Trinidad Duran, Carlos Castillo-Rivera, Marayén Renata Canales Riquelme, Reetambhara Dutta, Satyajit Singh Saini, Simon Bouriat, Anoruo Chukwuma
Affiliations: Institute of Atmospheric Physics CAS, Istituto Nazionale di Geofisica e Vulcanologia, University of Oslo, Royal observatory of Belgium, Mackenzie Center for Radio Astronomy and Astrophysics, Departamento de Física - UNS, Universidad de Concepcion, Indian Institute of Technology, IPAG - Institut de Planétologie et d'Astrophysique de Grenoble, University of Nigeria
AGATA is a new Scientific Research Programme (SRP) endorsed by SCAR, starting its activities in January 2025. During its 8-year lifetime, AGATA aims to significantly advance the current knowledge of the Antarctic atmosphere and geospace in the bipolar, interhemispheric context. AGATA contributes to answering the outstanding scientific questions related to whole-atmosphere interactions, including coupling between atmospheric layers and between the neutral and ionized parts of the atmosphere, space weather and magnetospheric influences, and the whole atmosphere’s role in climate variations. These questions are addressed with a multi-disciplinary and multi-instrument approach, and by bringing together communities which study the polar atmosphere and geospace. Scientists who need atmospheric corrections for their measurements are also involved. The AGATA SRP takes advantage of existing and planned instrumentation in Antarctica, and aims for coordinated research efforts and data exchange. To understand the global context, the AGATA SRP is also set in an interhemispheric perspective. While the understanding of the physics of the neutral and ionized atmosphere has been significantly improved using both ground-based and space-based measurements, the questions that remain open need to be addressed with a synergistic approach. This requires active involvement of various research groups in the field. AGATA contributes to answering the outstanding scientific questions within atmospheric physics and aeronomy in the Antarctic, namely: 1. How are different atmospheric layers coupled in the Antarctic? 2. How does the Antarctic upper polar atmosphere respond to increased geomagnetic activity, including energy transfer from space? 3. How does the whole polar atmosphere impact short- and long-term climate variations?
Answering these open questions has implications not only for the understanding of processes in the Antarctic atmosphere, but also greatly improves our understanding of atmospheric dynamics in the polar regions and globally, thus contributing to the development of large-scale whole-atmosphere and climate models. AGATA is an inclusive and interdisciplinary programme, with strong participation of early career researchers (ECRs) and an emphasis on inclusiveness and gender balance. AGATA encourages an interdisciplinary approach and seeks to: ● foster collaboration among experts of different disciplines, such as astrophysics, planetary science, neutral atmosphere physics and chemistry, and heliophysics, to share the competencies necessary to understand the role of different drivers of atmospheric and ionospheric dynamics from above and below; ● strengthen the collaboration between atmospheric scientists and the space physics community to improve our knowledge of space weather forecasting and space weather impacts; ● facilitate sharing of data, algorithms and models to harmonize the exploitation of information (adoption of standards, agreement on metrics, use of shared communication tools, use of interoperable tools, etc.); ● develop and strengthen the collaboration between the research communities that manage and exploit ground-based and in-situ observations, to optimize and maximize their efforts given an increasing number of multi-instrument sites on the ground and multi-sensor payloads in space. AGATA is already engaged in contributing to the identification of priorities in polar atmospheric and space weather research to be achieved during the 5th IPY (2032-2033). AGATA is thus gathering contributions and expertise from a significant part of the scientific community dealing with physics from the lower to the upper atmosphere and geospace. In this framework, the next generation of scientists is engaged in the roadmap of the next IPY activities.
Already long before its official endorsement, AGATA started a mentoring programme for ECRs and also supported them in attending the SCAR Open Science Conference 2024. These teams have been working on ECR-led manuscripts dealing with the understanding of atmospheric couplings through pathfinding approaches in studies of long-term trends, as well as truly multi-instrumental studies of the Mother's Day 2024 superstorm over Antarctica.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: SPACE IT UP Project (Spoke 6): Aeronomic Parameters Retrieved at Middle Latitudes With the THERION Method for Space Weather Studies

Authors: Dario Sabbagh, Loredana Perrone, Dr. Alessandro Ippolito, Carlo Scotto, Luca Spogli
Affiliations: Istituto Nazionale Di Geofisica E Vulcanologia
SPACE IT UP is a programme aimed at enhancing Italian space technology for space exploration and exploitation, for the benefit of planet Earth and the entire humankind. Our study falls within Spoke 6, whose main objective is to protect critical infrastructures from Space Weather (SWE) events by fostering research and tools that can potentially be translated into future operational services. Specifically, in this task we study the thermosphere-ionosphere system in response to adverse SWE conditions at regional scale. For this purpose, an original method, THERION (THERmospheric parameters from IONosonde observations), has been used to retrieve a consistent set of aeronomic parameters under disturbed geomagnetic conditions. The method is based on the observed bottom-side Ne(h) profile in the F region and, when available, on satellite (Swarm, GRACE) neutral gas density observations, being applicable at noontime hours at middle latitudes under any level of solar and geomagnetic activity. The retrieved aeronomic parameters will be compared with empirical thermospheric models such as MSISE00, showing the increased ability of THERION to reproduce thermospheric variability in such conditions. This study is carried out within the Space It Up project funded by the Italian Space Agency, ASI, and the Ministry of University and Research, MUR, under contract n. 2024-5-E.0 - CUP n. I53D24000060005.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Short-term (1-24 hour) foF2 and MUF(3000) prediction and the state of the thermosphere over Europe during the great geomagnetic storm in May 2024

Authors: Loredana Perrone, Andrey Mikhailov, Paolo Bagiacchi, Dario
Affiliations: ISTITUTO NAZIONALE DI GEOFISICA E VULCANOLOGIA
MUF(3000) predicted 1-24 hours ahead is one of the operational space weather products included in PECASUS, one of the three global Space Weather Centers for aviation space weather user services designated by the International Civil Aviation Organization (ICAO), and in the SWESNET project (ESA Space Weather Awareness, https://swe.ssa.esa.int/). MUF(3000) depends on two ionospheric parameters, foF2 and M(3000): the forecasting model EUROMAP is used for foF2 and the IRI model for M(3000). The method has been applied to Europe, where there are ionospheric stations with long historical records (spanning several solar cycles) and current real-time foF2 observations. The method includes two types of prediction models: regression models based on analyses of historical observations, and training models based on current foF2 observations. A mapping procedure applied to the European stations provides MUF(3000) short-term prediction over the whole area. The application of these methods and the comparison with the IRI-storm model for the storm event of 10 May 2024, with Kp up to 9 and Dst down to -403 nT, are discussed. Thermospheric parameters retrieved from ground-based ionosonde and Swarm neutral density observations obtained for the storm period are compared to modern empirical thermospheric models. This study is carried out within the Space It Up project funded by the Italian Space Agency, ASI, and the Ministry of University and Research, MUR, under contract n. 2024-5-E.0 - CUP n. I53D24000060005.
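The dependence of MUF(3000) on foF2 and M(3000) mentioned above follows the standard ionospheric relation MUF(3000) = foF2 × M(3000)F2. A minimal sketch; the numeric values are illustrative assumptions, not results from this work:

```python
def muf_3000(fof2_mhz: float, m3000f2: float) -> float:
    """Maximum usable frequency for a 3000 km path (MHz), from the
    F2-layer critical frequency foF2 and the M(3000)F2 propagation factor."""
    return fof2_mhz * m3000f2

# Illustrative mid-latitude daytime values: foF2 = 6 MHz, M(3000)F2 = 3.2
print(muf_3000(6.0, 3.2))  # approximately 19.2 MHz
```

In the service described above, foF2 would come from the EUROMAP forecast and M(3000) from the IRI model rather than fixed numbers.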
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Unexpected Field-Aligned Structure in Equatorial Plasma Bubbles

Authors: David Knudsen, Bizuayehu Addisie Beyene
Affiliations: University Of Calgary
Equatorial plasma bubbles (EPBs) are deep density depletions that tend to be elongated in the meridional direction, i.e. along the geomagnetic field in the equatorial ionosphere. This study compares the distribution of bubble dimensions across B, as seen by the C/NOFS satellite, and approximately along B, as seen by Swarm. Whereas one would expect the Swarm distribution to reflect much longer bubble lengths than C/NOFS, the observations do not bear this out: Swarm observes more "short" bubbles than expected. This surprising finding suggests that EPBs, which can in fact be seen with ground-based cameras to be elongated along B, may be composed of smaller segments, indicating a previously unknown field-aligned density structuring mechanism.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Towards a physically constrained empirical model of climatological variations of ionospheric F-region magnetic field and electric currents

Authors: Martin Fillion, Gauthier Hulot, Patrick Alken
Affiliations: Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder, CO, USA, NOAA National Centers for Environmental Information, Boulder, CO, USA, Université Paris Cité, Institut de physique du globe de Paris, CNRS, F-75005 Paris, France
The Earth’s ionosphere hosts a complex electric current system that generates a magnetic field, referred to as the ionospheric field. The study of ionospheric electric currents and fields provides crucial insights into the ionosphere-thermosphere system and into ionospheric plasma distribution and dynamics. A particularly valuable dataset for studying these currents and fields comes from magnetic measurements acquired by magnetometers onboard low Earth orbit (LEO) satellites, such as those of the ESA Earth Explorer Swarm constellation. These satellites orbit within the ionospheric F region and provide highly valuable in situ measurements. These data are already widely used to recover and study the signals produced by the Earth’s outer core, the lithosphere, the oceans, the magnetosphere and the currents induced by the time-varying ionospheric and magnetospheric fields. This requires sophisticated empirical models. Building data-based models of the highly dynamic and spatially complex F-region ionospheric field and associated electric currents, however, is a challenge of its own. The complex spatio-temporal nature of the signals makes the parameterization of the problem difficult to handle, with the data not providing enough information to uniquely constrain the model. This issue is usually addressed by introducing simplifying assumptions on the space-time variations, and by restricting the model to describe the field and currents within the regions sampled by the satellites (Fillion et al., 2023). Recent research has nevertheless demonstrated that additional progress can be made by relying on spatial basis functions optimized using numerical simulations from realistic physics-based models, such as the Thermosphere-Ionosphere-Electrodynamics General Circulation Model (Alken et al., 2017; Egbert et al., 2021).
Such an approach has many advantages, not least the possibility of building a model describing the field and electrical currents beyond the regions directly sampled by the data. In this presentation, we will describe our ongoing efforts toward using such an approach to build a data-based model of climatological variations of ionospheric F-region magnetic fields and electric currents. Preliminary results will be presented and possible avenues for future improvements discussed.
References:
Alken, P., Maute, A., Richmond, A. D., Vanhamäki, H., & Egbert, G. D. (2017). An application of principal component analysis to the interpretation of ionospheric current systems: TIEGCM modeling, PCA, and data fitting. Journal of Geophysical Research: Space Physics, 122(5), 5687–5708. https://doi.org/10.1002/2017JA024051
Egbert, G. D., Alken, P., Maute, A., & Zhang, H. (2021). Modelling diurnal variation magnetic fields due to ionospheric currents. Geophysical Journal International, 225(2), 1086–1109. https://doi.org/10.1093/gji/ggaa533
Fillion, M., Hulot, G., Alken, P., & Chulliat, A. (2023). Modeling the climatology of low- and mid-latitude F-region ionospheric currents using the Swarm constellation. Journal of Geophysical Research: Space Physics, 128(5), e2023JA031344. https://doi.org/10.1029/2023JA031344
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Ionospheric Occurrence of Pc1/EMIC Waves relative to the Ionospheric Footprint of the Plasmapause

Authors: Tamás Bozóki, Balázs Heilig
Affiliations: HUN-REN Institute of Earth Physics and Space Science, ELTE, Institute of Geography and Earth Sciences, Department of Geophysics and Space Science, Space Research Group
Pc1 pulsations cover the 0.2–5 Hz frequency range, with electromagnetic ion cyclotron (EMIC) waves of magnetospheric origin generally accepted as their most important source. In the ionosphere, the initially transverse EMIC waves can couple to the compressional mode and propagate long distances in the ionospheric waveguide. By studying waves in the Pc1 frequency range in the topside ionosphere, we can obtain information on the spatial distribution of both the transverse (incident EMIC) and the compressional waves. We made use of our new Swarm L2 product developed for characterising Pc1 waves to explore the spatial distribution of these waves relative to the midlatitude ionospheric trough (MIT), which corresponds to the ionospheric footprint of the plasmapause (PP) at night. It is shown that the vast majority of Pc1 events are located inside the plasmasphere and that the spatial distributions clearly follow changes in the MIT/PP position at all levels of geomagnetic activity. The number of transverse Pc1 (incident EMIC) waves rapidly decreases outside the PP, while their occurrence peak is located considerably equatorward of the PP footprint, i.e. inside the plasmasphere. On the other hand, the compressional Pc1 waves can propagate in the ionosphere poleward of the PP, while in the equatorial direction there is a secondary maximum in their spatial distribution at low magnetic latitudes. Our results suggest that mode conversion taking place at the PP plays a crucial role in the formation of the presented spatial distributions.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Investigating Mid-Latitude Ionospheric Disturbances at the Ionospheric Observatory of Rome During Solar Minima

Authors: Dario Sabbagh, Loredana
Affiliations: Istituto Nazionale Di Geofisica E Vulcanologia
This study examines mid-latitude ionospheric disturbances over the Ionospheric Observatory of Rome (41.82° N, 12.51° E) during the last two solar minima. The goal is to improve our understanding of their relationship with different sources, including geomagnetic storms, as strong manifestations of Space Weather. Ionospheric F2-layer disturbances are analyzed by studying strong positive and negative deviations of the critical frequency foF2, which corresponds to the maximum electron density in the vertical profile. Short-lived anomalies (2-3 hours) and long-lasting ones (≥4 hours) are identified using hourly observations against a background defined by a 27-day running median for each hour, and binned according to the geomagnetic activity, hour and season of their occurrence. Hourly Total Electron Content (TEC) data from a GNSS receiver at the same location as the ionosonde are similarly processed after calibration and conversion to vertical TEC (vTEC). The interquartile range method is applied to detect anomalous values with the same running windows, enabling a direct comparison between simultaneous measurements at the ionospheric peak altitude given by the ionosonde and vertically integrated ones from the co-located GNSS receiver. The results reveal that significantly fewer anomalies are detected in vTEC compared to foF2, although the total number of each type is similar across the two solar minima. Positive anomalies dominate each year and are always most prevalent where the distributions according to geomagnetic activity are more pronounced. A particularly small number of negative anomalies is confirmed also for foF2 during daytime, while those occurring at night were more frequent in summer. Seasonal and hourly patterns show more pronounced differences for positive anomalies, particularly those with long persistence.
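The interquartile-range screening used above can be sketched in a simplified form; this version flags values outside the Tukey fences over a single window (the 27-day running-median background and per-hour binning of the actual study are omitted, and the data values are illustrative):

```python
import statistics

def iqr_anomalies(values, k=1.5):
    """Return the indices of values lying outside the interquartile-range
    fences [Q1 - k*IQR, Q3 + k*IQR] -- a simplified stand-in for the
    anomaly screening described in the abstract."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, v in enumerate(values) if v < lo or v > hi]

# Illustrative hourly series with one obvious spike at index 6
data = [10, 11, 10, 12, 11, 10, 30, 11, 10, 12]
print(iqr_anomalies(data))  # the outlier 30 at index 6 is flagged
```

In the study, a separate window would be evaluated for each hour of day, so that the fences track the diurnal background rather than a single global distribution.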
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Conjugate Processes in the Magnetosphere and the Subauroral Ionosphere

Authors: Máté Tomasik, Balázs Heilig
Affiliations: HUN-REN Institute of Earth Physics and Space Science, HUN-REN – ELTE Space Research Group, Eötvös Loránd University, Institute of Geography and Earth Sciences, Department of Geophysics and Space Science, Space Research Group
The Plasma Boundary Layer (PBL) is a rich repository of dynamic processes. The PBL is the boundary separating the relatively dense cold plasma of the plasmasphere, co-rotating with the Earth, from the tenuous plasma trough. While the ring current overlaps with the plasmasphere, energetic particle precipitation takes place outside the PBL. The PBL thus separates diverse plasma populations and different plasma wave modes, provides a reflection boundary for compressional ULF waves, and is dominated by electric fields of different origins. Various cold plasma structures are formed by the interplay of these electric fields. The sharp storm-time nightside plasmapause is shaped under the joint effect of the corotation electric field, the global convection and the sub-auroral polarisation electric field active during geomagnetic substorms. The corresponding structure in the ionosphere is the mid-latitude ionospheric trough. This paper investigates the relationship between these conjugate structures in detail, utilising RBSP/Arase and Swarm observations.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: On the synergies between ground-based VLF/LF measurements and SWARM data: application to the study of seismic precursors

Authors: Olimpia Masci, Mr Mohammed Y. Boudjada, Mr. Hans Ulrich Eichelberger, Ms. Aleksandra Nina, Mr. Pier Francesco Biagi, Mr. Patrick H.M. Galopeau, Mr. Mohammad Azem Khan, Ms. Maria Solovieva, Mr. Michael Contadakis, Mr. Helmut Lammer, Mr. Wolfgang Voller, Mr. Manfred Stachel, Mr. Bruno P. Besser, Ms. Iren-Adelina Moldovan, Mr. Konstantinos Katzis
Affiliations: Institute for Applied Mathematics (IAC), National Research Council of Italy (CNR), Space Research Institute, Austrian Academy of Sciences, Institute of Physics Belgrade, University of Belgrade, Department of Physics, University of Bari, Laboratoire Atmosphere, Milieux, Observations Spatiales–Centre National de la Recherche Scientifique, UVSQ Université Paris-Saclay, EOG GmbH, DIAN Srl, Institute of the Earth Physics, Russian Academy of Sciences, Department of Geodesy and Surveying, Aristotle University of Thessaloniki, National Institute for Earth Physics (NIEP), Faculty of the Computer Science and Engineering, European University
Large earthquakes trigger pre-seismic electromagnetic (EM) waves that propagate from the lithosphere to the ionosphere above the epicenter region. Those waves exhibit modulations generated by acoustic waves (AWs), atmospheric gravity waves (AGWs) and planetary waves (PWs), with periods ranging from one minute to days. Such waves disturb a huge ionospheric area, considered to be equal to the earthquake (EQ) preparation zone derived from Dobrovolsky's relationship, with a radius Rdb = 10^(0.43M), where Rdb is expressed in km and M is the magnitude of the EQ. In this analysis, we report on the study of electromagnetic precursors based on the use of LF (30-300 kHz) and VLF (3-30 kHz) radio transmitter signals detected by ground-based receivers. Those observations are combined with space measurements on board LEO satellites, like ESA's Swarm mission and the China Seismo-Electromagnetic Satellite (CSES). Ground-based VLF/LF observations are monitored daily by two different and complementary networks. The first one is the International Network for Frontier Research on Earthquake Precursors (INFREP), established in 2009 with eight sensors located in Austria, Cyprus, Greece, Italy, Romania and Serbia. The radio receivers measure the intensity (electric field strength) of radio signals radiated by existing VLF-LF broadcasting stations in the bands VLF (20-80 kHz) and LF (150-300 kHz), with a 1-minute sampling rate. The INFREP cooperation started in 2009, and a huge database of electric field amplitude measurements has been investigated and used to study perturbations in the ionosphere due to external activity (i.e., solar and geomagnetic activity), and to detect EM precursors of EQs with magnitude Mw>6.0 [1]. More recently, the deployment of a new VLF/LF network has been started, which currently consists of four reception stations deployed in Graz (Austria), Guyancourt (France), Réunion (France) and Moratuwa (Sri Lanka) [2].
Satellite missions such as CSES and ESA's Swarm can provide magnetic field measurements at low altitude, allowing the detection of seismic precursors (e.g. [3] and [4]). The aim of this work is to analyse EQ events with magnitude Mw > 6.0 which occurred in the southern part of Europe (Greece, Italy and Turkey). Hence Swarm satellite magnetic field and electron density measurements are combined with VLF/LF electric field ground observations. We focus on the analysis of the EQ that happened in Antakya (Turkey) on 6 February 2023, Mw = 7.8 [5]. Time series of the Dst, AE, Kp, and ap geomagnetic indices and GOES satellite observations are also considered to distinguish and separate lithospheric precursors from external effects, like solar and geomagnetic activity. The main issue is to make evident the lithospheric-induced disturbances in the ionosphere and to confirm, or not, a clear correlation between the ground electric field observations and the satellite magnetic field measurements.
References:
[1] P.F. Biagi, R. Colella, L. Schiavulli, A. Ermini, M. Boudjada, H. Eichelberger, K. Schwingenschuh, K. Katzis, M. Contadakis, C. Skeberis, I.A. Moldovan, M. Bezzeghoud, "The INFREP Network: Present Situation and Recent Results", Open Journal of Earthquake Research, 8, 101–115, 2019.
[2] P.H.M. Galopeau, A.S. Maxworth, M.Y. Boudjada, H.U. Eichelberger, M. Meftah, P.F. Biagi, K. Schwingenschuh, "A VLF/LF facility network for preseismic electromagnetic investigations", Geosci. Instrum. Method. Data Syst., 12, 231–237, 2023.
[3] A. De Santis, D. Marchetti, L. Spogli, G. Cianchini, F.J. Pavón-Carrasco, G. De Franceschi, R. Di Giovambattista, L. Perrone, E. Qamili, C. Cesaroni, A. De Santis, A. Ippolito, A. Piscini, S.A. Campuzano, D. Sabbagh, L. Amoruso, M. Carbone, F. Santoro, C. Abbattista, D. Drimaco, "Magnetic Field and Electron Density Data Analysis from Swarm Satellites Searching for Ionospheric Effects by Great Earthquakes: 12 Case Studies from 2014 to 2016", Atmosphere, 10, 371, 2019.
[4] M. Akhoondzadeh, A. De Santis, D. Marchetti, X. Shen, "Swarm-TEC Satellite Measurements as a Potential Earthquake Precursor Together with Other Swarm and CSES Data: The Case of Mw7.6 2019 Papua New Guinea Seismic Event", Frontiers in Earth Science, 10, 820189, 2022.
[5] M.Y. Boudjada, P.F. Biagi, H.U. Eichelberger, G. Nico, K. Schwingenschuh, P.H.M. Galopeau, M. Solovieva, M. Contadakis, V. Denisenko, H. Lammer, W. Voller, F. Giner, "Unusual Sunrise and Sunset Terminator Variations in the Behavior of Sub-Ionospheric VLF Phase and Amplitude Signals Prior to the Mw7.8 Turkey-Syria Earthquake of 6 February 2023", Remote Sensing, 16, 23, 4448, 2024.
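As a point of reference for Dobrovolsky's preparation-zone radius used in this abstract, a minimal sketch (the Antakya magnitude is taken from the abstract; the code itself is purely illustrative):

```python
def dobrovolsky_radius_km(magnitude: float) -> float:
    """Earthquake preparation-zone radius from Dobrovolsky's relationship,
    Rdb = 10**(0.43 * M), expressed in km."""
    return 10 ** (0.43 * magnitude)

# For the 6 February 2023 Antakya earthquake, Mw = 7.8,
# the preparation zone radius is roughly 2260 km.
print(dobrovolsky_radius_km(7.8))
```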
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Cosmic ray measurements and solar modulation with HEPD-01 on board CSES-01

Authors: Matteo Sorbara, Matteo Martucci
Affiliations: Università Degli Studi Di Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133, INFN Sezione Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133
The China Seismo-Electromagnetic Satellite (CSES-01) is a mission developed by the China National Space Administration (CNSA) together with the Italian Space Agency (ASI) to investigate the near-Earth electromagnetic, plasma and particle environment. One of the main payloads on board the CSES-01 satellite is the High-Energy Particle Detector (HEPD-01), a light and compact detector designed and built by the Italian Limadou collaboration. The instrument is designed to measure electron, proton and light-nuclei fluxes in the energy range from 3 to 100 MeV for electrons and from 30 to 200 MeV for protons and light nuclei. The detector is made of a plastic scintillator trigger, a tower of 16 plastic scintillator planes and a matrix of LYSO crystals arranged in a 3-by-3 pattern, read by photomultiplier tubes with custom DAQ electronics. The hardware provides good energy resolution and a wide angular acceptance (about 60 degrees), resulting in a high capability for particle identification and separation. Furthermore, its high stability in time makes HEPD-01 very well suited to detecting variations of particle fluxes (even over long periods of time) related to a plethora of phenomena taking place on the Sun and in the inner heliosphere. After six years of data-taking since the satellite launch in February 2018, HEPD-01 has shown an impressive ability to measure various particle populations all over its orbit, such as galactic cosmic rays. Moreover, a new CSES mission, carrying the new HEPD-02 detector, improved with respect to the one currently in orbit, will be launched in 2025. This instrument will serve as a very reliable and accurate tool to continue the study of particle fluxes in near-Earth space during the period of maximum activity of the solar cycle. In this work, an overview of cosmic proton and helium nuclei measurements with HEPD-01 will be given, focusing on their energy spectra and their time variations, i.e. solar modulation and other small-scale periodicities.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: New release of the forecasting service SODA

Authors: Sandro Krauss, Mag. Dr.rer.nat. Manuela Temmer, Dipl.-Ing. BSc Andreas Strasser, Dr.rer.nat. MSc Florian Koller, Dipl.-Ing. BSc Ing. Barbara Süsser-Rechberger, BSc. MSc. Daniel Milosic
Affiliations: Graz University Of Technology, Institute Of Geodesy, University Graz, Institute of Physics, Queen Mary University of London, Space and Astrophysical Plasma Physics
With the strong rise of the current solar cycle 25, the number of solar eruptions such as solar flares and coronal mass ejections (CMEs) is also increasing. The SODA (Satellite Orbit DecAy) forecasting tool is currently based on an interdisciplinary analysis of space-geodetic observations and in-situ solar wind measurements between 2002 and 2017. In this new release, we present an updated version of the service, which is part of ESA's Space Safety Programme (Ionospheric Weather I.161). We have analyzed an additional seven years of data up to 2024 and incorporated the results into the forecast. This means that major storms, such as the Gannon storm in May 2024, are now included in the forecast base. This geomagnetic storm, which occurred on 10 May 2024, was one of the most severe in decades. The storm was triggered by six CMEs hurled towards Earth by the giant sunspot region AR3664. Due to the complexity of the event and an insufficient database, the forecast with the previous release of SODA had its weaknesses. Other new features include the prediction of storm-induced orbital decay for two new altitude layers (400 km and 450 km) and the addition of new input parameters, so that the focus is no longer solely on the interplanetary magnetic field component Bz. Also included is a classification of the severity of the expected geomagnetic storm on the National Oceanic and Atmospheric Administration (NOAA) Space Weather G-scale. In addition, we will use our new thermospheric mass density processing chain, which is being applied to a wide range of satellites (e.g., CHAMP, GRACE, GRACE-FO, Swarm, TerraSAR-X) using accelerometer measurements or kinematic orbit information. Finally, a comparison between the old and new versions of the forecast service is presented for a selection of geomagnetic storms that have occurred over the last three solar cycles.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Ionospheric Slab-Thickness modelling for Space Weather monitoring

Authors: M Mainul Hoque, Kateryna Lubyk, Marjolijn Adolfs, Norbert Jakowski
Affiliations: German Aerospace Center DLR
Space Weather refers to phenomena that arise from the connection between the Sun and Earth and can have adverse effects on the operation of technical systems and human activities. The Sun is rapidly approaching solar maximum currently, the technical systems are particularly vulnerable to rapid changes in the electron density distribution in the topside ionosphere and plasmasphere that can arise from space weather. Since ionospheric slab thickness is a measure of the shape of the electron density distribution, accurate modelling and monitoring of slab thickness can help prediction of space weather impact. Indeed, the profile shape reflects the complexity of production, loss and transportation of plasma in the Earth’s ionosphere and plasmasphere. A proxy Slab-Thickness for the topside ionosphere/plasmasphere is computed by dividing the topside vertical total electron content (TEC) data derived from GNSS navigation measurements by the in-situ electron density data from the Langmuir Probes (LP). The idea is very similar to the computation of equivalent slab-thickness dividing the ground TEC by the peak electron density measurements from vertical sounding or radio occultation data (see Jakowski and Hoque 2021). Single satellite measurements can be used; however, since Swarm-A and -C satellites are flying close-by both satellites data can be combined for improved products. The derived quantity will accurately provide a proxy measure of the topside ionosphere/plasmasphere (profile) thickness. From the long-term database of proxy Slab-Thickness a slab-thickness model can be developed. Existing ionosphere models (e.g., IRI, NeQuick, NEDM2020) will be benefitted using topside slab-thickness information. The data as well the model can be used to verify the properties of equivalent slab-thickness and ionosphere/plasmasphere coupling found by Jakowski and Hoque (2021, 2018). 
A bulge-like increase of slab thickness around middle latitudes (~40°), especially during night-time, is found; this was first reported by Jakowski and Hoque (2021) in ground data. Many unanswered questions regarding ionosphere/plasmasphere coupling processes may be addressed by simultaneous analysis of ground slab-thickness and topside proxy slab-thickness data. During space weather events the TEC and LP data change in a nonlinear way, and therefore the proxy slab-thickness value can be used as a monitor for space weather events. References: Jakowski N, Hoque MM. 2018. A new electron density model of the plasmasphere for operational applications and services. J. Space Weather Space Clim. 8: A16. Jakowski N, Hoque MM. 2021. Global equivalent slab thickness model of the Earth’s ionosphere. J. Space Weather Space Clim. 11, 10. https://doi.org/10.1051/swsc/2020083
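The proxy computation described above reduces to a simple ratio; a minimal sketch follows (function name and unit conventions are illustrative, not taken from the authors' processing chain):

```python
import numpy as np

def proxy_slab_thickness(topside_tec_tecu, lp_ne_m3):
    """Proxy topside slab thickness (km): topside vertical TEC divided by
    the in-situ electron density from the Langmuir Probe.

    topside_tec_tecu : topside vertical TEC in TEC units (1 TECU = 1e16 el/m^2)
    lp_ne_m3         : in-situ electron density in el/m^3
    """
    tec_m2 = np.asarray(topside_tec_tecu) * 1e16   # TECU -> el/m^2
    ne = np.asarray(lp_ne_m3)
    return tec_m2 / ne / 1e3                       # metres -> kilometres

# Toy example: 10 TECU above the satellite, Ne = 1e11 el/m^3
print(proxy_slab_thickness(10.0, 1e11))  # → 1000.0
```

The same ratio applies elementwise to arrays of along-track measurements, which makes combining Swarm-A and -C time series straightforward.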
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: C.02.07 - POSTER - FORUM- ESA's 9th Earth Explorer

The FORUM mission will improve the understanding of our climate system by supplying, for the first time, most of the spectral features of the far-infrared contribution to the Earth’s outgoing longwave radiation, particularly focusing on water vapour, cirrus cloud properties, and ice/snow surface emissivity. FORUM’s main payload is a Fourier transform spectrometer designed to provide a benchmark top-of-atmosphere emission spectrum in the 100 to 1600 cm⁻¹ (i.e. 6.25 to 100 µm) spectral region, filling the observational gap in the far-infrared (100 to 667 cm⁻¹, i.e. from 15 to 100 µm), which has never been observed from space, spectrally resolved, and in its entirety. The focus of this session is on the scientific developments in the frame of this mission and the outlook into the future.
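The band edges quoted above follow from the standard wavenumber–wavelength relation λ(µm) = 10⁴ / ν(cm⁻¹); a one-line helper (hypothetical name) illustrates the conversion:

```python
def wavenumber_to_um(nu_cm1):
    """Convert wavenumber (cm^-1) to wavelength (micrometres)."""
    return 1e4 / nu_cm1

# FORUM band edges: 100 cm^-1 <-> 100 um, 1600 cm^-1 <-> 6.25 um
print(wavenumber_to_um(100), wavenumber_to_um(1600))  # → 100.0 6.25
```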

Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Investigating water vapour using far infrared observations and simulations

Authors: Sophie Mosselmans, Helen Brindley, Dr Edward Gryspeerdt, Dr Caroline Cox, Dr Andreas Foth, Dr Tim Carlsen, Dr Robert David, Sanjeevani Panditharatne
Affiliations: Imperial College London, National Centre for Earth Observation, RAL Space, Leipzig University, University of Oslo
Accurately measuring the atmospheric state is crucial for climate change analysis and weather forecasting. In clear sky conditions, water vapour is responsible for over half of the Earth’s greenhouse effect. To better quantify water vapour variability and its influence on radiative forcing, we need both satellite and ground-based measurements with improved accuracy and vertical resolution. The Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission will investigate variations in upper tropospheric water vapour and its radiative signature in the far infrared. To prepare for this mission and gain an understanding of what it will deliver, Imperial College has developed a ground-based instrument, the Far-Infrared Spectrometer for Surface Emissivity (FINESSE), capable of measuring across the infrared spectrum (400 to 1600 cm⁻¹) with high temporal resolution and accuracy. In early 2023, FINESSE measured clear sky downwelling radiation spectra during its first field campaign at the ALOMAR Observatory in Norway. Two aims of this campaign were to test the stability of FINESSE’s performance in the harsh operating conditions and to measure downwelling radiative spectra. In the cold and dry Arctic conditions, the far-infrared “dirty window” between 400 and 600 cm⁻¹ opens up, allowing measurement of radiation emitted from higher in the atmosphere, which is sensitive to water vapour concentrations at these altitudes. Measurements of downwelling radiance extending into the far infrared in the Arctic are relatively rare. In principle, the observations may allow improved characterisation of the lower-to-mid tropospheric water vapour profile. A first step is to analyse how well existing representations or measurements of the water vapour profile map to the radiance observations using radiative transfer modelling.
Here we perform this task using the Line By Line Radiative Transfer Model v12.13 (LBLRTM) in concert with temperature and water vapour profiles taken from three sources: 1. a local radiosonde launch; 2. a multichannel microwave radiometer (Humidity And Temperature Profiler - HATPRO); and 3. colocated data from the European Centre for Medium-Range Weather Forecasts Reanalysis v5 (ERA5). Our results show that none of the simulations using the different input sources match the observed radiances within measurement uncertainty, with a significant underestimate seen within the dirty window. The closest match is seen using the radiosonde profile as input. The radiosonde captures a humid layer which is not seen by either HATPRO or ERA5. To assess uncertainties in ERA5, the ensemble members are used. The members are generated by introducing perturbations to the model’s initial conditions and to how observations are incorporated and weighted. The standard deviation of the ensemble members' output radiance is used to approximate how the ERA5 reanalysis profile uncertainties propagate to its radiance. All of the measurement sources were in the same ERA5 grid box as FINESSE; however, the radiosonde drifted across several grid boxes. To characterise the impact of the radiosonde movement on the radiance, a composite profile from ERA5 grid boxes was constructed. Another source of uncertainty explored is possible variation in the strength of the water vapour continuum used in the simulations. However, for this spectral region realistic perturbations of the continuum are not large enough to fully reconcile the FINESSE measurements with the simulations. The differences in radiance between the different simulations and observations translate to radiative flux differences which are significant in the context of the Arctic surface energy budget.
Our results imply that observations from instruments like FINESSE could provide additional information on water vapour vertical structure above and beyond what is currently available from reanalysis or commonly used microwave profilers, both in terms of vertical information and temporal sampling. We are currently working to infer the vertical temperature and water vapour profiles from FINESSE in collaboration with others on the FORUM team.
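The ensemble-spread step described above (propagating ERA5 profile uncertainty to radiance via the standard deviation of the members' simulated radiances) can be sketched as follows; the array shapes and function name are assumptions for illustration, not the authors' code:

```python
import numpy as np

def ensemble_radiance_spread(member_radiances):
    """Per-wavenumber sample standard deviation across ensemble members.

    member_radiances : array-like of shape (n_members, n_wavenumbers),
        one simulated radiance spectrum per ERA5 ensemble member.
    Returns the spread used as a proxy for how profile uncertainty
    propagates to radiance.
    """
    r = np.asarray(member_radiances, dtype=float)
    return r.std(axis=0, ddof=1)  # sample std over the member axis

# Toy example: 3 members, 2 spectral points; the second channel is
# identical across members, so its spread is zero.
spread = ensemble_radiance_spread([[1.0, 2.0], [1.2, 2.0], [0.8, 2.0]])
print(spread)
```

Comparing this spread channel-by-channel against the observation-minus-simulation residuals indicates where the reanalysis uncertainty alone could explain the mismatch.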
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Modeling and Inversion of the Far-IR Spectral Radiances Measured by FIRMOS in Ground and Stratospheric Balloon Campaigns

Authors: Dr Gianluca Di Natale, Marco Ridolfi, Dr Marco Barucci, Dr Claudio Belotti, Dr. Giovanni Bianchini, Dr. Elisa Castelli, Dr Francesco D'Amato, Dr. Samuele Del Bianco, Dr. Bianca Maria Dinelli, Dr. Giuliano Liuzzi, Prof. Tiziano Maestri, Dr. Michele Martinazzo, Prof. Guido Masiello, Enzo Papandrea, Paolo Pettinari, Prof. Carmine Serio, Dr Silvia Viciani, Dr Luca Palchetti
Affiliations: CNR-INO, CNR-ISAC, CNR-IFAC, Dip di Ingegneria - Università della Basilicata, Dip. di Fisica e Astronomia "A. Righi" - Università di Bologna
FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) will be the 9th Earth Explorer mission of the European Space Agency (ESA). Starting from 2027, FORUM will measure, from a polar orbiting satellite, the spectrum of the Earth’s Outgoing Longwave Radiation (OLR) in the interval from 100 to 1600 cm⁻¹ (that is, from 100 to 6.25 μm in wavelength). Together with the Polar Radiant Energy in the Far-InfraRed Experiment (PREFIRE), the FORUM mission will supply the first global, spectrally resolved measurements covering the Far-InfraRed (FIR) range of the OLR spectrum. Measuring and monitoring the FIR region of the OLR is in fact crucial to understanding the climate forcing / feedback effects exerted by clouds and water vapor content in the Upper-Troposphere / Lower-Stratosphere. In preparation for the FORUM mission, both ESA and the Italian Space Agency (ASI) have started several projects to get the scientific community ready for the exploitation of the new measurements. In this context, at CNR-INO a Far-Infrared Radiation Mobile Observation System (FIRMOS) was designed and built with the support of ESA and ASI. FIRMOS is a Fourier Transform Spectrometer that can perform measurements both from the ground and from stratospheric balloons. The characteristics of FIRMOS are very similar to those required for FORUM in terms of spectral range, resolution and Noise Equivalent Spectral Radiance. For this reason, FIRMOS measurements represent a very good basis to test the accuracy of radiative transfer models and associated ancillary data in reproducing the FIR OLR spectrum. Several forward / inverse models have been developed or are in use by the Italian scientific community interested in atmospheric FIR spectral measurements.
Among these models, KLIMA (Kyoto protocoL Informed Management of Adaptation), SACR (Simultaneous Atmospheric and Cloud Retrieval), FARM (FAst Retrieval Model) and GBB-Nadir (Geofit Broad Band, Nadir version) are forward / retrieval algorithms, with different accuracy and speed characteristics, commonly used by our team to reproduce and analyze FIRMOS measurements. To date, with the support of ESA, FIRMOS has been deployed in several measurement campaigns. In 2019, a ground-based campaign was carried out at Zugspitze (2962 m asl). In August 2022, the instrument was operated from a stratospheric balloon launched during the Strato-Science 2022 campaign from Timmins (Canada). In June 2024 FIRMOS operated again from a stratospheric balloon launched from the SSC facility in Kiruna (Sweden) within the TRANSAT 2024 campaign. A further ground-based campaign is planned in Ottawa (CA) in early 2025. In this work, we present the results of the analysis of the measurements collected by FIRMOS in these campaigns. The main objective of this analysis is the characterization of the accuracy of our models and of the ancillary databases in reproducing the measured FIR spectra. One of our codes (FARM) can also handle the joint inversion of matching measurements. Thus, if a measurement (either from satellite or ground) matching the FIRMOS one exists, we also test the synergistic inversion approach. Indeed, this is one of the techniques that will be applied to the matching measurements expected from FORUM and from the Infrared Atmospheric Sounding Interferometer – New Generation (IASI-NG) on board the MetOp-SG-A satellite.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Determination of emissivity profiles using a Bayesian data-driven approach

Authors: Chiara Zugarini, Francesco Pio De Cosmo, Cristina Sgattoni, Luca Sgheri
Affiliations: University of Florence, Institute for Applied Mathematics (IAC) - National Research Council (CNR), Institute of BioEconomy (IBE) - National Research Council (CNR)
This study addresses the critical challenge of accurately identifying surface emissivity profiles that align with experimental observations for specific geolocations and times. Accurate emissivity estimation is fundamental during radiative transfer retrieval processes, where the inherent coupling between emissivity and surface temperature can introduce significant biases in the retrieval of both parameters. The work focuses on methods to derive emissivity profiles that are consistent with observational data, serving as reliable initial guesses or a priori inputs for retrieval algorithms. These efforts are particularly relevant for the Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission, which will pioneer the measurement of the Earth's far-infrared spectral emission. The study evaluates two methodologies for determining emissivity profiles. The first is an empirical method using Moderate Resolution Imaging Spectroradiometer (MODIS) and ancillary data. This approach integrates MODIS observations with ancillary datasets, including snow cover, surface temperature, and soil humidity, to infer plausible emissivity profiles without relying on predefined land cover classifications. The method generates a synthetic soil type map by associating Huang's emissivity profiles with the observed conditions. The performance is assessed by minimizing the root mean square error (RMSE) against MODIS emissivity data, showing that appropriately selected Huang profiles effectively reduce discrepancies compared with using a constant initial guess. The second is a Bayesian method, which leverages the Combined Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and MODIS Emissivity for Land (CAMEL) database and land cover data. This approach uses the CAMEL database and high-resolution land cover maps to derive emissivity profiles as convex combinations of Huang profiles.
This method employs a Bayesian framework to incorporate information from the CAMEL database and the MODIS/Terra+Aqua Yearly Land Cover Type dataset, ensuring a statistically accurate selection of emissivity profiles. Key findings indicate that the Bayesian approach delivers superior performance compared to linear spline interpolation of CAMEL data when tested against experimental emissivity spectra retrieved from the Infrared Atmospheric Sounding Interferometer (IASI). Moreover, the second method performs even better than the full database from Huang. The results underscore the potential of this method to enhance the accuracy of surface parameter retrievals by providing accurate and computationally efficient initial estimates of emissivity profiles, thereby mitigating biases and improving the reliability of radiative transfer models. This development holds significant promise for the upcoming FORUM mission and broader Earth observation and climate modeling applications.
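As a rough illustration of the two ingredients discussed above (RMSE-based selection of a Huang profile, and emissivity expressed as a convex combination of profiles), the following sketch uses hypothetical names and toy data rather than the actual Huang or CAMEL databases:

```python
import numpy as np

def best_profile_by_rmse(candidates, observed):
    """Index of the candidate emissivity profile with the lowest RMSE
    against an observed emissivity spectrum (cf. the MODIS-based method).

    candidates : (n_profiles, n_channels) array of emissivity profiles
    observed   : (n_channels,) observed emissivity
    """
    c = np.asarray(candidates, dtype=float)
    rmse = np.sqrt(((c - np.asarray(observed)) ** 2).mean(axis=1))
    return int(rmse.argmin())

def convex_combination(candidates, weights):
    """Emissivity as a convex combination of profiles (cf. the Bayesian
    method); weights are normalised so they are non-negative and sum to one."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(candidates, dtype=float)

profiles = [[0.95, 0.96], [0.98, 0.99]]   # two toy 2-channel profiles
print(best_profile_by_rmse(profiles, [0.97, 0.98]))  # → 1
```

In the Bayesian method the weights would come from the posterior over profiles given CAMEL and land-cover information, not from a simple normalisation.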
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Development of the MetOp-SG Module (MSGM) for the ESA FORUM End-to-End Simulator

Authors: Giuliano Liuzzi, Prof. Tiziano Maestri, Prof. Guido Masiello, Dr. Michele Martinazzo, Dr. Luca Sgheri, Prof. Carmine Serio, Dr. Hilke Oetjen, Dr. Dulce Lajas
Affiliations: Department Of Engineering, University Of Basilicata, Department of Physics and Astronomy "Augusto Righi", University of Bologna, CNR-IAC, National Council of Research, ESA-ESTEC, European Space Agency
In this work we present the fundamental elements of the MetOp-SG Module (MSGM) of the FORUM End-To-End Simulator (FEES) developed for the European Space Agency, in preparation for FORUM, ESA’s 9th Earth Explorer (launch 2027), which will fly in formation with MetOp-SG. This work also constitutes a basis for further, future applications to other sensors. The goal of the MetOp-SG Module (MSGM) is to simulate IASI-NG (Infrared Atmospheric Sounder Interferometer Next Generation) L1C data, following the format specified by EUMETSAT. This module slots into the existing Phase A/B1 FORUM end-to-end simulator (FEES A/B1), and for this reason its structure is coherent with the interfaces already defined in FEES A/B1. The MSGM provides a set of functionalities which aim at calculating IASI-NG radiances corresponding to observations taken in coincidence with FORUM. The synergy between the two instruments is in fact of fundamental importance to obtain full coverage of the Earth's outgoing longwave spectrum in the whole infrared range, including the Far Infrared, which will be observed by FORUM for the very first time from satellite remote sensing. To achieve this, the MSGM is composed of three submodules: 1) the MetOp-SG Matching Module (MSGM-MM), which is responsible for the collocation of the IASI-NG fields of view with the FORUM observation; 2) the MetOp-SG Scene Generator (MSGM-SG), which has the task of producing the high spectral resolution radiances that reach the IASI-NG sensor; 3) the MetOp-SG Observation System Simulator (MSGM-OSS), which ingests the high-resolution spectra and applies the simulation of the Level 1 processor to get the L1C synthetic products. The software is developed in Matlab, C and Fortran 2003.
The current version of the software, which relies on LBLRTM and LBLDIS radiative transfer models, includes several ancillary databases of optical properties of clouds and aerosols as well as a full emissivity database for the Mediterranean and Northern European region which is built upon the Huang emissivity global database. Special emphasis was placed on harmonising these databases to apply them across the full spectral range of FORUM+IASI-NG, in order to make them easily adaptable for further applications. In this work we present the full scheme of the software and its functionalities, showing how each submodule works and showcasing some sample results.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Evaluating the potential impact of future FORUM radiances through ensemble simulations

Authors: Alberto Ortolani, Cristina Sgattoni, Samantha Melani, Luca Rovai, Luca Fibbi, Marco Ridolfi, Ugo Cortesi, PhD Stefano della Fera
Affiliations: CNR-IBE, CNR-INO, CNR-IFAC, Consorzio LaMMA
In modern operational meteorology, satellite observations play a crucial role, providing consistent and comprehensive measurements of the Earth's atmosphere and surface on a global scale. More specifically, most of the data providing information on key meteorological variables, such as temperature, surface temperature, water vapor, and clouds, come from measurements of spectrally resolved Outgoing Longwave Radiation (OLR), which is the infrared radiation emitted by Earth at the Top Of the Atmosphere (TOA). These observations are integrated into models through Data Assimilation (DA) techniques to produce analysis products. This process combines irregularly distributed atmospheric observations with short-range model forecasts, effectively performing an optimal space-time interpolation onto a regular grid. The resulting gridded atmospheric states serve as inputs for numerical weather prediction, and also as resources for diagnostic studies, supporting the evaluation of the atmospheric and climate system's behaviour over time. Ensemble forecasting is a powerful approach in numerical weather prediction to give insights into the range of possible future states of the atmosphere. Instead of producing a single, deterministic (most likely) forecast, an ensemble of forecasts is generated to account for uncertainties in the prediction system. These uncertainties stem from two primary sources: errors in the initial conditions, which are amplified by the chaotic and non-linear nature of atmospheric dynamics, and errors in the model itself. The latter includes errors arising from approximations used in solving the governing equations, the use of parameterization schemes to represent unresolved sub-grid physical processes, as well as from the governing equations themselves, which are ultimately simplified representations of more complex processes.
Ideally, the actual atmospheric state should fall within the predicted ensemble spread, with the spread magnitude reflecting the level of forecast uncertainty. At the initial forecast time, the ensemble spread should represent the uncertainty in the knowledge of the real atmospheric state even after its optimal reconstruction using the best available observational and modeling instruments. This process merges the widest possible set of global, heterogeneous observations, including spectrally resolved radiances, with the physics of state-of-the-art models through advanced data assimilation procedures. Under this assumption (acknowledging that it is not always fulfilled), it is useful to generate an ensemble of synthetic atmospheric observations, corresponding to the ensemble model members, and to evaluate the spread of these synthetic observations. In fact, if these synthetic observations mimic measurements from an instrument expected to become operational in the near future and have a known associated error, the ensemble of synthetic observations can be used to estimate the potential impact of including such data with a proper assimilation procedure. This impact is indicated by the ratio between the observational error and the synthetic ensemble observation spread. When this ratio is below about one, lower ratios suggest a greater potential for reducing the information uncertainty in the initial ensemble, and consequently, for narrowing the forecast spread at later times. The target observations in this study are radiances from ESA’s forthcoming 9th Earth Explorer (EE9) mission, FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring). FORUM will deliver unprecedented spectrally resolved radiance measurements in the far- and mid-infrared spectral range (100–1600 cm⁻¹) with 0.5 cm⁻¹ un-apodised resolution.
This spectral range, which encompasses the bulk of the planet’s outgoing longwave radiation (OLR), is particularly sensitive to key climate variables, forcings and feedbacks, including temperature, water vapor (especially in the upper troposphere) and cirrus clouds. The present work thus analyses the potential impact of FORUM measurements over an initial set of important atmospheric scenarios, based on ECMWF IFS ensemble atmospheric products, using σ-IASI/FORUM as the radiative transfer model to generate the corresponding synthetic FORUM radiances. This work is partially funded by the MC-FORUM project (Meteo and Climate exploitation of FORUM), a two-year initiative funded by the Italian Space Agency (ASI) that began in late January 2024 and aims to develop new tools and expertise to exploit FORUM data in operational meteorology and climate studies. The study also benefits from the developments in EMM (Earth-Moon-Mars), a three-year project launched in January 2023 as part of Italy's National Recovery and Resilience Plan, which provided key competencies for this work.
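The impact diagnostic described above (observational error divided by the synthetic-ensemble spread, with values below about one flagging potentially informative channels) reduces to an elementwise ratio; a minimal sketch with hypothetical names:

```python
import numpy as np

def potential_impact_ratio(obs_error, ensemble_spread):
    """Per-channel ratio between the instrument observational error and
    the spread of the synthetic ensemble observations.

    Ratios below ~1 indicate channels where assimilating the new
    observations could reduce the initial-condition uncertainty; lower
    ratios suggest a greater potential impact.
    """
    e = np.asarray(obs_error, dtype=float)
    s = np.asarray(ensemble_spread, dtype=float)
    return e / s

# Toy example, two channels: the first is informative (ratio < 1),
# the second is dominated by observational error (ratio > 1).
r = potential_impact_ratio([0.5, 2.0], [1.0, 1.0])
print(r)
```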
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Improvement of PTB’s vacuum FIR calibration system in support of ESA’s Mission FORUM

Authors: Daniela Narezo Guzman, Julian Gieseler, Max Reiniger, Dirk Fehse, Robert Häfner, Jamy Schumacher, Albert Adibekyan, Christian Monte
Affiliations: PTB
ESA’s 9th Earth Explorer mission FORUM aims to perform, for the first time, spectrally resolved, traceable measurements over extended time periods of Earth’s outgoing FIR radiation for wavelengths spanning from 6.25 µm to 100 µm. To date, only spectral measurements up to 17 µm have been realized. However, about half of Earth’s outgoing total energy is found at wavelengths beyond 15 µm, making the FIR region crucial for determining Earth’s energy budget and hence climate development. FORUM aims to fill the data gap and with it provide valuable data for climate research, modeling, and prediction. PTB will support FORUM with a traceable pre-flight calibration of its on-board reference source under vacuum in the Reduced Background Calibration Facility 2 (RBCF2). The absolute radiometric uncertainty of 30 mK in radiation temperature required by FORUM demands an FIR laboratory reference source with an absolute uncertainty of 15 mK or less. This value is below that of currently available radiometric reference sources, not just for the FIR, but for the MIR and NIR as well. Based on a sensitivity analysis we have identified critical components and limiting specifications to meet this demanding uncertainty requirement. These are: the temperature sensing of the blackbody cavity, which must be realized with an uncertainty of less than 10 mK; and the effective emissivity of the cavity, which has to be above 0.999 and which in turn implies a temperature uniformity of the cavity of around 10 mK. Additionally, the background radiation must be considered and actively controlled. To meet these requirements, PTB developed a novel radiometric FIR calibration system consisting of an in-Vacuum Reference Blackbody (VRBB) in combination with a precisely temperature-controlled and uniform scenery or thermal shroud called Coldscreen (CS). The VRBB is a liquid-operated blackbody with a cylindrical cavity coated with Vantablack S-IR.
It utilizes a novel temperature sensing scheme using capsule SPRTs immersed in a non-conducting liquid: the so-called in-liquid mounting. The CS design is based on a Finite Element Method optimized structure to meet a uniformity requirement of better than 1 K in the temperature range from -60 °C to 60 °C. It is used to precisely realize different background radiation environments and thus help correct background signals or determine the effective emissivity of sources under test. With this calibration system, in-lab radiometric uncertainties below 15 mK in the FIR region can be reached, and reference sources and detectors from the NIR to the FIR can be calibrated with unprecedentedly small uncertainty. Our contribution will present the sensitivity analysis, the hardware design, and the characterization results of the radiometric FIR calibration system built in the RBCF2. These results will include: thermal uniformity of the CS and VRBB, VRBB effective emissivity derived from ray-tracing simulations, radiance temperature and spectral radiance measurements, and the uncertainty budget. Financial support of this work by the ESA project Novel Reference/Calibration System to Measure Spectral Radiance on the Range 4 μm to 100 μm is gratefully acknowledged.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: A Physics-Aware Data-Driven Surrogate Approach for Fast Atmospheric Radiative Transfer Inversion

Authors: Cristina Sgattoni, Luca Sgheri, Matthias Chung
Affiliations: Institute of BioEconomy, National Research Council - INdAM-GNCS Research group, Institute of Applied Mathematics, National Research Council, Department of Mathematics, Emory University
The Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission, selected in 2019 as the ninth Earth Explorer by the European Space Agency, aims to provide spectrally resolved measurements of the long-wave Earth's emitted radiance. FORUM will also cover the far-infrared region of the spectrum, which represents approximately 50% of the Earth's outgoing longwave radiation and has remained largely unobserved from space until now. FORUM is scheduled for launch in 2027 and, utilizing a Fourier transform spectrometer, it will provide valuable insights into atmospheric parameters such as surface emissivity, water vapor distribution, and ice cloud properties. Once operational, FORUM is expected to generate more than 10,000 spectra per day, resulting in a substantial data volume that will require efficient processing and analysis. To handle this, accelerated radiative transfer and inversion techniques will be essential. This is particularly important for near-real-time applications, such as weather and climate modeling, which lie at the core of the National Recovery and Resilience Plan - Earth Moon Mars (NRRP-EMM) project. The analysis of FORUM data involves solving an ill-posed inverse problem to retrieve atmospheric properties from observed spectra, requiring stabilization of the solution through regularization techniques. This study introduces a novel, data-driven approach to tackle the inverse problem in clear sky conditions, aiming to provide a computationally efficient and accurate solution. In the first phase, a preliminary approximation of the inverse mapping is generated using simulated FORUM data. In the second phase, climatological information is incorporated as prior knowledge, and neural networks are employed to dynamically estimate optimal regularization parameters during the retrieval process.
While this method may not match the precision of traditional full-physics retrieval techniques, its ability to deliver near-instantaneous results makes it ideal for real-time applications. Additionally, the proposed approach can serve as a preprocessor, supplying improved prior estimates to enhance the accuracy and efficiency of full-physics retrieval methods. Furthermore, an ongoing study is being conducted on the inverse problem under all-sky conditions using an innovative approach that combines two key components. The first is a data-driven solution, leveraging an autoencoder to manage the high dimensionality of the problem and serving as a prior to guide the solution. The second is the implementation of constraints on the solution space to prevent non-physical approximations.
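As a generic illustration of the regularised inversion discussed above (not the authors' retrieval code), a linear Tikhonov step towards a prior state x_a can be written as follows; in the proposed method the regularization parameter lam would be estimated dynamically by a neural network:

```python
import numpy as np

def tikhonov_retrieval(K, y, x_a, lam):
    """Linear retrieval with Tikhonov regularisation towards a prior x_a:

        min_x ||K x - y||^2 + lam * ||x - x_a||^2

    K   : (n_channels, n_state) Jacobian of radiance w.r.t. the state
    y   : (n_channels,) observed radiance vector
    x_a : (n_state,) prior / a priori state
    lam : scalar regularization parameter (> 0)
    """
    K = np.asarray(K, dtype=float)
    x_a = np.asarray(x_a, dtype=float)
    A = K.T @ K + lam * np.eye(K.shape[1])   # regularised normal matrix
    return x_a + np.linalg.solve(A, K.T @ (np.asarray(y) - K @ x_a))

# Toy 2x2 example: with lam -> 0 the solution approaches K^{-1} y
x = tikhonov_retrieval(np.eye(2), np.array([1.0, 2.0]), np.zeros(2), 1e-12)
print(x)  # close to [1, 2]
```

Larger lam pulls the solution towards the prior x_a, which is the stabilising behaviour the regularization parameter controls.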
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Foreseeing the benefit of FORUM observations to evaluate climate models

Authors: Félix Schmitt, Quentin Libois, Romain Roehrig
Affiliations: Centre National de Recherches Météorologiques (CNRM), Université de Toulouse, Météo-France, CNRS
The spectral details of the Earth’s top-of-atmosphere outgoing infrared (IR) radiation, as already observed by several satellite instruments, contain valuable information on essential climate variables, making them critical tools for the evaluation of climate models. For example, it has been shown that error compensations in the spectral domain can result in apparently correct broadband fluxes, hiding deficiencies of the models in reproducing the seasonal and spatial variabilities of temperature and relative humidity (Huang et al. 2007, Huang et al. 2008). More recently, Della Fera et al. (2023) have investigated the interannual variability of IASI spectra in clear-sky conditions to point out systematic biases in the EC-Earth model. With the FORUM satellite mission, selected by the European Space Agency as the 9th Earth Explorer and planned to be launched in 2027, the Earth's top-of-atmosphere full IR emission spectrum will be measured for the first time at high spectral resolution, filling an observational gap in the far-infrared (FIR) (100 to 667 cm⁻¹, i.e. from 15 to 100 μm). These measurements will provide a unique opportunity to further document and understand the Earth’s radiative budget, and will constitute a unique dataset to evaluate in more detail its representation in state-of-the-art climate models. Here we aim to estimate to what extent future FORUM observations could help discriminate between climate models in terms of their ability to correctly simulate the Earth’s IR emission. To this end, the fast radiative transfer solver RTTOV is used to emulate FORUM observations from the atmospheric profiles and surface properties simulated by a dozen climate models participating in the 6th phase of the Coupled Model Intercomparison Project (CMIP6). These simulations cover the full period corresponding to the historical amip simulation, namely 1979 – 2014.
We first focus on clear-sky scenes and compare the simulated spectra from each model, in terms of mean properties, but also in terms of spatial distribution, seasonal variability, and longer-term changes. The discrepancies between the selected climate models are highlighted and traced to differences in geophysical variables. While differences in the mid-IR can already be analyzed in the light of available hyperspectral IR observations, we point out that differences also appear in the FIR, suggesting that FORUM observations will place a strong constraint on climate model evaluation and will contribute to the improvement of climate models by highlighting processes that need to be refined. References: Huang, Y., V. Ramaswamy, X. Huang, Q. Fu, and C. Bardeen (2007), A strict test in climate modeling with spectrally resolved radiances: GCM simulation versus AIRS observations, Geophys. Res. Lett., 34, L24707. Huang, X., W. Yang, N. G. Loeb, and V. Ramaswamy (2008), Spectrally resolved fluxes derived from collocated AIRS and CERES measurements and their application in model evaluation: Clear sky over the tropical oceans, J. Geophys. Res., 113, D09110. Della Fera, S., F. Fabiano, P. Raspollini, M. Ridolfi, U. Cortesi, F. Barbara, and J. von Hardenberg (2023), On the use of infrared atmospheric sounding interferometer (IASI) spectrally resolved radiances to test the EC-Earth climate model (v3.3.3) in clear-sky conditions, Geosci. Model Dev., 16(4), 1379‑94.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: SPectroscopy In The Far InfraREd: Reducing Uncertainties in Carbon Dioxide Spectroscopic Line Parameters for ESA’s FORUM Mission

Authors: Daniel Coxon, Jeremy Harrison, Ritika Shukla, Chris Benner, Malathy Devi, Brant Billinghurst, Jianbao Zhao
Affiliations: University of Leicester, National Centre for Earth Observation, College of William and Mary, Canadian Light Source, University of Saskatchewan
The upcoming ESA FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) mission will be the first to measure, at high resolution, the Earth's spectrally resolved outgoing longwave radiation (OLR) in the far-infrared (FIR). The FIR spectral region is crucially important because it is responsible for over half of the Earth’s emission to space, accounting for a large contribution to the Earth’s greenhouse effect. The aim of the FORUM mission is to evaluate the role of the FIR in shaping the current climate, thereby reducing the uncertainty in predictions of future climate change and enabling us to mitigate against its effects. The Earth’s OLR in the FIR region largely consists of absorptions from two gases, one of which is carbon dioxide (CO₂). The radiative forcing of the climate system associated with increasing CO₂ concentrations occurs primarily within the wavenumber region 500–850 cm-1. Almost half of this region (below 645 cm-1) has never been measured at the top-of-the-atmosphere (TOA) at high resolution, but will be measured by FORUM. The interpretation of measurements from FORUM is highly reliant on the ability to perform accurate radiative transfer calculations in the FIR region. Recent high-resolution measurements below 600 cm-1 have shown that there are significant deficiencies in the current Voigt line parameters in the High resolution TRANsmission (HITRAN) database. We report here a suite of measurements of high resolution (up to 0.00096 cm-1) spectra of both pure and air-broadened CO₂ taken at the Canadian Light Source that covers the entire 500–850 cm-1 region at once. The high spectral resolution has allowed for lines in heavily congested regions (such as the Q branches) to be well resolved for the lower pressure measurements. 
Utilising a synchrotron light source facility provides a more intense source of electromagnetic radiation than is available in a conventional laboratory, and gives access to the highest spectral resolution achievable with a Fourier transform spectrometer. We analyse our spectra using the Labfit multispectrum fitting program, which has a long and extensive history in deriving non-Voigt line parameters for remote sensing. Through our analysis, we have begun to derive new CO₂ line parameters, including zero-pressure line position, line intensity, self- and air-broadened halfwidths, self- and air-pressure-induced line shifts, speed dependence, Dicke narrowing, and line mixing. For position and intensity, we apply quantum mechanical constraints to the global solution to reduce the correlations between parameters. We compare our results to those from the HITRAN database to demonstrate the improvements effected by our study.
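Labfit fits many spectra simultaneously with beyond-Voigt line shapes; the underlying principle can be sketched with a single-line Voigt fit via the Faddeeva function. This is an illustrative toy (synthetic data, made-up line parameters near 667 cm-1), not the authors' multispectrum analysis:

```python
import numpy as np
from scipy.special import wofz
from scipy.optimize import curve_fit

def voigt(nu, nu0, intensity, gamma_l, gamma_d):
    """Voigt profile: convolution of Lorentzian (pressure) and Gaussian
    (Doppler) broadening, evaluated via the Faddeeva function wofz."""
    sigma = gamma_d / np.sqrt(2.0 * np.log(2.0))  # Gaussian HWHM -> std dev
    z = ((nu - nu0) + 1j * gamma_l) / (sigma * np.sqrt(2.0))
    return intensity * wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

# Synthetic "measured" absorption line with noise (illustrative values)
nu = np.linspace(666.9, 667.1, 400)
rng = np.random.default_rng(0)
obs = voigt(nu, 667.0, 1.0, 0.005, 0.002) + rng.normal(0.0, 0.01, nu.size)

# Least-squares fit recovers position, intensity and both widths
popt, _ = curve_fit(voigt, nu, obs, p0=[666.99, 0.8, 0.004, 0.003])
```

In a real multispectrum fit, many pressures and temperatures are fitted at once so that parameters such as pressure shifts and speed dependence are constrained jointly rather than line by line.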
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: PREFIRE and IASI Radiances in All-Sky Conditions: Data Intercomparison and Analysis Using sigma-IASI/F2N

Authors: Tiziano Maestri, Michele Martinazzo, Fabrizio Masin, Guido Masiello, Giuliano Liuzzi, Carmine Serio, Brian Drouin, Brian Kahan, Nathaniel Miller, Aronne Merrelli, Kyle Mattingly, Tristan L'Ecuyer
Affiliations: University Of Bologna, Physics and Astronomy Department "Augusto Righi", University of Basilicata, Department of Engineering, NASA, Jet Propulsion Laboratory, Cooperative Institute for Meteorological Satellite Studies, Space Science and Engineering Center, Department of Climate and Space Sciences and Engineering, University of Michigan
The important role played by Far Infra-Red (FIR) radiation in shaping the Earth’s energy balance, and its sensitivity to essential climate variables such as temperature, water vapor, surface emissivity, and clouds, is now well recognized by the scientific community. In this regard, the European Space Agency (ESA) selected the Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission as its ninth Earth Explorer, scheduled to launch in 2027. FORUM will collect measurements of the outgoing longwave radiation in the spectral range from 100 to 1600 cm−1, with 0.5 cm−1 (un-apodized) spectral resolution. For its part, NASA launched in 2024 two CubeSats carrying the Polar Radiant Energy in the Far-InfraRed Experiment (PREFIRE), which are measuring the 0–54 𝜇m region at 0.84 𝜇m spectral resolution. The FORUM and PREFIRE missions have spurred the radiative transfer (rt) community to extend fast radiative routines to the whole IR region, which required assessing the ability of commonly used fast solutions to simulate radiance fields at FIR wavelengths. In this work, we focus on the performance of the main rt models based on physical solutions applied to all-sky conditions. Particular attention is given to algorithms operating in the FIR and in the presence of scattering layers (such as clouds and aerosols), which are adopted in inversion processes for the definition of satellite level 2 products, or simply for the analysis of remotely sensed spectral radiance fields. The work briefly discusses the limits and advantages of fast methodologies based on the Chou approximation [Chou et al., 1999; Martinazzo et al., 2021] and the Tang adjustment solution [Tang et al., 2018] when applied to radiance computations [Maestri et al., 2024], which are implemented in the new sigma-IASI/F2N rt model [Masiello et al., 2024].
To assess the validity of the approximate numerical calculations and the overall algorithm performance, results obtained using fast solutions are compared with those derived with a discrete-ordinate based rt model (DISORT) for a large range of physical and optical properties of ice and liquid water clouds, and for multiple atmospheric conditions derived from the 60-level EUMETSAT NWP model profile dataset (https://nwp-saf.eumetsat.int/site/software/atmospheric-profile-data/). Finally, a set of observations of the Infrared Atmospheric Sounding Interferometer (IASI) flying on MetOp-B and -C is compared with temporally and spatially collocated PREFIRE data. Multiple atmospheric conditions and geolocations are considered. The effectiveness of sigma-IASI/F2N is then demonstrated by comparing synthetic calculations to collocated IASI and PREFIRE observations. The code ingests an atmospheric state vector which includes surface temperature, temperature profile, H2O mixing ratio, O3 mixing ratio, and specific liquid and ice water content derived from the ECMWF analysis. The comparison aims at evaluating the sigma-IASI/F2N performance at both mid-infrared and FIR wavelengths in all-sky conditions. For a limited set of cases, where possible, the information content of PREFIRE observations is estimated by applying the scheme described in Serio [2024] (based on the sigma-IASI/F2N forward model) for cloud optical and microphysical retrievals. Acknowledgements This work is funded by the Italian Space Agency (ASI) in the framework of the project FIT-FORUM (Accordo attuativo: n. 2023-23-HH.0). References M.-D. Chou, K.-T. Lee, S.-C. Tsay, and Q. Fu. “Parameterization for Cloud Longwave Scattering for Use in Atmospheric Models”. Journal of Climate 12(1) (1999), pp. 159–169. doi:10.1175/1520-0442(1999) T. Maestri et al. “Innovative solution for fast radiative transfer in multiple scattering atmospheres at far and mid infrared wavelengths”.
Radiation Processes in the Atmosphere and Ocean, New York, AIP Publishing, «AIP CONFERENCE PROCEEDINGS», 2024, 2988, pp. 1–4 (International Radiation Symposium, Thessaloniki (Greece), 4–8 July 2022). doi:10.1063/5.0183019 M. Martinazzo et al. “Assessment of the accuracy of scaling methods for radiance simulations at far and mid infrared wavelengths”. Journal of Quantitative Spectroscopy and Radiative Transfer 271 (2021). doi:10.1016/j.jqsrt.2021.107739. G. Masiello et al. “The new 𝜎-IASI code for all sky radiative transfer calculations in the spectral range 10 to 2760 cm-1: 𝜎-IASI/F2N”. Journal of Quantitative Spectroscopy and Radiative Transfer 312 (2024), p. 108814. issn: 0022-4073. doi:10.1016/j.jqsrt.2023.108814. C. Serio et al. “Demonstration of a physical inversion scheme for all-sky, day-night IASI observations and application to the analysis of the onset of the Antarctica ozone hole: Assessment of retrievals and consistency of forward modeling”. Journal of Quantitative Spectroscopy and Radiative Transfer 329 (2024), 109211. issn: 0022-4073. doi:10.1016/j.jqsrt.2024.109211. G. Tang et al. “Improvement of the Simulation of Cloud Longwave Scattering in Broadband Radiative Transfer Models”. Journal of Atmospheric Sciences 75(7) (2018), pp. 2217–2233. doi:10.1175/JAS-D-18-0014.1
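The Chou et al. (1999) approach underlying these fast methods replaces a scattering cloud layer with a purely absorbing one via a similarity scaling of its optical depth. A minimal sketch of that scaling (the formula is the standard Chou parameterization; the numerical values are illustrative only):

```python
def chou_scaled_optical_depth(tau, omega, b):
    """Chou et al. (1999) similarity scaling: treat a scattering cloud
    layer as purely absorbing by rescaling its optical depth.
    tau   : extinction optical depth of the layer
    omega : single-scattering albedo
    b     : backscatter fraction of the scattered radiation
    Forward-scattered photons (fraction 1 - b) are treated as if they
    were never scattered, so only absorption and backscatter remain."""
    return (1.0 - omega * (1.0 - b)) * tau

# Example: an ice cloud layer with strong forward scattering
# (tau = 2.0, omega = 0.6, b = 0.1 are illustrative values)
tau_eff = chou_scaled_optical_depth(2.0, 0.6, 0.1)
```

The appeal of the scaling is that an absorption-only radiative transfer solver can then be used unchanged, which is what makes these methods fast compared with full multiple-scattering solutions such as DISORT.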
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Simulation of the Earth’s disk radiance seasonal variability observed from the Moon by the Lunar Earth Temperature Observatory

Authors: Dr Gianluca Di Natale, Dr Simone Menci, Dr Luca Palchetti, Marco Ridolfi, Dr Claudio Belotti, Dr Silvia Viciani, Dr Marco Barucci, Dr Francesco D'Amato
Affiliations: CNR-INO
Within the Earth-Moon-Mars (EMM) project, it is planned to develop a lunar infrastructure to monitor the global far- and mid-infrared (FIR/MIR) spectral radiance coming from the whole Earth’s disk. The Lunar Earth Temperature Observatory (LETO) will be part of this infrastructure and will consist of a Fourier transform spectro-radiometer (LETO-FTS) and an imager (LETO-IMG). To mimic LETO’s measurements, a comprehensive software package was developed at the CNR-National Institute of Optics. This is composed of an Earth-Moon orbital simulator, which can provide the Earth’s portion viewed from the Moon as a function of time and of the position of the lunar base, and a radiative transfer algorithm to simulate the spectral radiance that will be observed by LETO. Lunar orography can also be considered in the orbital simulator. The radiative transfer algorithm simulates the spectral radiances emitted from each individual pixel of the Earth’s disk, using the σ-FORUM fast radiative transfer model. Averaging these simulations subsequently provides the total mean radiance measured by LETO. As an example, hourly simulations of the whole spectral radiance of the visible portion of the Earth’s disk are obtained for a specific day, for a lunar site located on the prime meridian at a latitude of -70°. Annual simulations were also performed to define the measurement requirements. The development of the radiative transfer algorithms will benefit from the modelling activity conducted in preparation for the ESA FORUM mission, which will provide a similar spectral measurement from polar low Earth orbit some years ahead of the potential deployment of LETO on the lunar base.
In this presentation, the study of the time variability of the signal over specific spectral bands between 100 and 1600 cm-1, and its correlations with the variability of geophysical parameters such as the global outgoing longwave radiation, the average global temperature, and the water vapour amount, will be presented. This approach will make it possible to build a long-term dataset for monitoring climate variables.
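Averaging the per-pixel radiances into a disk-mean value can be sketched with a simple projected-area (cosine) weighting. This weighting is an assumption for illustration only; the actual simulator described above handles the full Earth-Moon viewing geometry and lunar orography:

```python
import numpy as np

def disk_mean_radiance(pixel_radiances, view_zenith_rad):
    """Disk-integrated mean radiance seen by a distant observer.

    Each visible pixel contributes its radiance weighted by its
    projected area, i.e. the cosine of the local view zenith angle.
    Pixels beyond the limb (cosine <= 0) do not contribute.
    (Illustrative weighting, not the CNR-INO orbital simulator.)"""
    mu = np.clip(np.cos(view_zenith_rad), 0.0, None)
    return float(np.sum(pixel_radiances * mu) / np.sum(mu))
```

For a uniformly emitting disk this weighted mean reduces to the uniform radiance, and a pixel facing away from the observer is correctly excluded.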
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Exploiting airborne far-infrared measurements to optimise an ice cloud retrieval.

Authors: Sanjeevani Panditharatne, Prof Helen Brindley, Dr Caroline Cox, Richard Siddans, Jonathan Murray, Dr Richard Bantges, Dr Stuart Fox, Dr Rui Song
Affiliations: Imperial College London, RAL Space, NERC National Centre for Earth Observation, Met Office, University of Oxford
Cirrus clouds, high-altitude ice clouds, regularly cover ∼30% of the Earth, and can reflect shortwave radiation back to space or trap outgoing longwave radiation. There is currently disagreement between climate model predictions of their net radiative effect and feedback processes, due to the variation in their micro- and macrophysical properties. Studies have indicated that the far-infrared region is highly sensitive to the microphysics of cirrus clouds, particularly their ice crystal habit. This sensitivity can be exploited in retrievals from FORUM observations, with studies showing that the inclusion of a few far-infrared channels alongside the mid-infrared has the potential to improve the cloud retrieval products and reduce their uncertainty. Here, for the first time, we test this on unique airborne observations of upwelling far-infrared radiation. We use the Infrared and Microwave Sounding (IMS) retrieval scheme, developed at RAL Space, to perform an optimal estimation retrieval on an airborne observation of coincident far- and mid-infrared upwelling radiances taken above a cirrus cloud. Recent work has extended this retrieval scheme for use on FORUM, including testing on clear-sky retrievals from airborne observations that have been modified to mimic the FORUM Sounding Instrument’s line shape. In this work, we simultaneously retrieve temperature, water vapour, cloud optical thickness, cloud effective radius, cloud top height, and ice crystal habit, with and without far-infrared channels. To model the radiative effects of the ice crystal habit, we use the Yang et al. 2013 and Baum et al. 2014 Solid Columns and General Habit Mix bulk optical property models, and evaluate whether known uncertainties within them significantly impact the retrieval quality. All the retrievals are evaluated against lidar, cloud probe, and MODIS measurements of the cloud, as well as dropsonde measurements of the temperature and water vapour profile.
This cloud retrieval capability is the first of its kind to be tested on airborne observations of upwelling far-infrared radiances and will be available for use on FORUM observations.
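The optimal estimation retrieval described above follows the standard Rodgers formulation, in which the measurement is combined with prior knowledge weighted by their respective error covariances. A minimal sketch of one Gauss-Newton update on a toy linear problem (all matrices and dimensions are illustrative; this is not the IMS scheme itself):

```python
import numpy as np

def oe_update(x_a, S_a, y, S_e, forward, jacobian, x_i):
    """One Gauss-Newton step of Rodgers-style optimal estimation:
    x_{i+1} = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1
              * [y - F(x_i) + K (x_i - x_a)]"""
    K = jacobian(x_i)
    S_e_inv = np.linalg.inv(S_e)
    S_a_inv = np.linalg.inv(S_a)
    gain = np.linalg.solve(K.T @ S_e_inv @ K + S_a_inv, K.T @ S_e_inv)
    return x_a + gain @ (y - forward(x_i) + K @ (x_i - x_a))

# Toy linear "retrieval": two state elements observed in three channels
K_true = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])
x_true = np.array([2.0, -1.0])
forward = lambda x: K_true @ x      # linear forward model
jacobian = lambda x: K_true         # constant Jacobian
x_a = np.zeros(2)                   # prior mean
S_a = np.eye(2) * 10.0              # loose prior covariance
S_e = np.eye(3) * 1e-4              # accurate measurements
x_hat = oe_update(x_a, S_a, forward(x_true), S_e, forward, jacobian, x_a)
```

Because the toy forward model is linear and the measurement accurate, a single step recovers the true state almost exactly; a real cloud retrieval iterates this update with a nonlinear forward model until convergence.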
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: SPectroscopy In The Far InfraREd: Reducing Uncertainties in Water Vapour Spectroscopic Line Parameters for ESA’s FORUM Mission

Authors: Daniel Coxon, Jeremy Harrison, Chris Benner, Malathy Devi, Dominique Appadoo, Corey Evans
Affiliations: University of Leicester, National Centre for Earth Observation, College of William and Mary, Australian Synchrotron, Swinburne University of Technology
The upcoming ESA FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) mission will, for the first time, measure the Earth’s spectrally resolved outgoing longwave radiation (OLR) in the far-infrared (FIR) at high resolution. The FIR spectral region is highly significant as it accounts for more than half of the Earth’s emission to space. It thus provides a substantial contribution to the Earth’s greenhouse effect. The main goal of the FORUM mission is to evaluate the role of the FIR in shaping the current climate, in order to reduce the uncertainty in future climate change predictions and enable us to mitigate against its effects. The vast majority of the Earth’s OLR in the FIR region consists of absorptions from two gases, one of which is water vapour (H₂O). The radiative forcing (RF) of climate associated with increases in H₂O concentrations has a substantial contribution in the FIR at wavenumbers below 600 cm-1. The interpretation of measurements from FORUM is highly reliant on our ability to perform accurate radiative transfer calculations in the FIR region. Recent high-resolution measurements below 600 cm-1 have shown that there are significant deficiencies in the Voigt line parameters within the High resolution TRANsmission (HITRAN) database. We report here a suite of measurements of high resolution (0.00096 cm-1 and below) FIR spectra of both pure and air-broadened H₂O taken at the Australian Synchrotron. Standard laboratory light sources used by Fourier transform infrared (FTIR) spectrometers do not provide sufficient signal-to-noise ratios in the FIR below 700 cm-1. However, by utilising synchrotron light source facilities, which provide both an intense source of electromagnetic radiation and wide band coverage across the FIR, it is possible to measure high resolution, high signal-to-noise spectra in the FIR region.
Our spectra are analysed using the Labfit multispectrum fitting program, which has been used for many years to derive non-Voigt line parameters for remote sensing. Through our analysis, we have begun to derive new H₂O line parameters including the zero-pressure line position, line intensity, self- and air-broadened halfwidth, self- and air-pressure induced line shifts, and speed dependence. We compare our results to those from the HITRAN database, to demonstrate the improvements effected by our study.
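The self- and air-broadened halfwidths being refitted here enter radiative transfer calculations through the standard HITRAN pressure and temperature dependence of the Lorentzian width. A minimal sketch of that convention (the formula follows the HITRAN definition; the numerical parameter values are illustrative only, roughly the right order of magnitude for H₂O):

```python
def pressure_broadened_halfwidth(gamma_air, gamma_self, n_air,
                                 p_atm, p_self_atm, temp_k, t_ref=296.0):
    """Lorentzian (pressure-broadened) halfwidth following the HITRAN
    convention:
        gamma(p, T) = (T_ref / T)^n_air
                      * (gamma_air * (p - p_self) + gamma_self * p_self)
    with pressures in atm and halfwidth coefficients in cm-1/atm."""
    return (t_ref / temp_k) ** n_air * (
        gamma_air * (p_atm - p_self_atm) + gamma_self * p_self_atm
    )

# Illustrative line at 1 atm total pressure with 1% water vapour, 296 K
gamma = pressure_broadened_halfwidth(0.07, 0.35, 0.7, 1.0, 0.01, 296.0)
```

Improved fitted values of `gamma_air`, `gamma_self` and the temperature exponent feed directly into this expression, which is one reason halfwidth errors propagate so strongly into FIR radiance calculations.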
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Selection of Informative Channels for Future FORUM Measurements Assimilation in Numerical Weather Prediction Models

Authors: Cristina Sgattoni, Stefano Della Fera, Ugo Cortesi, Samantha Melani, Alberto Ortolani, Marco Ridolfi
Affiliations: Institute of BioEconomy, National Research Council, Institute of Applied Physics, National Research Council, National Institute of Optics
FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) is the ninth Earth Explorer satellite mission, selected by the European Space Agency in 2019. FORUM, scheduled for launch in 2027, will host a Fourier transform spectrometer providing spectrally resolved measurements of the longwave Earth's emitted radiance in the range 6.25–100 μm (100 to 1600 cm-1). FORUM will also cover the far-infrared part of the spectrum, which accounts for about 50% of Earth's outgoing longwave radiation and, until recently, had never been systematically observed from space. These measurements are important because, at these wavelengths, outgoing radiation is affected by water vapour and ice clouds. Once operational, the FORUM spectrometer is anticipated to produce more than 10,000 spectra per day. A key goal of the NRRP-EMM (National Recovery and Resilience Plan - Earth Moon Mars) project is the assimilation of the FORUM dataset into weather prediction models. Our study focuses on selecting a subset of the most informative spectral channels within the FORUM spectral range (i.e., more than 5000 channels) to be used in weather prediction models through data assimilation techniques. Reducing information redundancy is crucial when managing large datasets for near-real-time applications. The methodology begins with calculating weighting functions, which quantify the sensitivity of transmittance to changes in altitude and are essential to analyze the vertical distribution of atmospheric properties. Each weighting function is associated with a specific spectral channel, highlighting the altitude where the channel has the highest sensitivity to the observed radiance. Using the McMillin and Goldberg channel selection algorithm, the most informative weighting functions, and hence spectral channels, are selected independently for each atmospheric layer. The final set of channels to be assimilated is determined by aggregating the channels chosen across all layers.
To assess the reliability and the effectiveness of the selected FORUM channels, we employ an optimal estimation framework to evaluate errors and quantify the information loss. Numerical experiments are conducted using 80 clear-sky scenarios derived from a diverse profile dataset designed to represent global and seasonal atmospheric variability. Simulated measurements are generated using the σ-FORUM fast radiative transfer model. Starting from pre-computed look-up tables of optical depths, σ-FORUM generates high-resolution radiances (0.01 cm-1), which are convolved with the FORUM spectral response function.
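The per-layer selection and aggregation step can be sketched in simplified form, in the spirit of (but much simpler than) the McMillin and Goldberg algorithm: for each layer, keep the channel whose weighting function is largest there, then take the union over layers. The weighting-function matrix below is a toy example, not FORUM data:

```python
import numpy as np

def select_channels(W):
    """Simplified per-layer channel selection.

    W[c, l] is the weighting function of channel c at layer l.
    For each layer, keep the channel with the largest sensitivity
    there; the final set is the union over all layers, so redundant
    channels that never dominate any layer are discarded."""
    selected = set()
    n_channels, n_layers = W.shape
    for layer in range(n_layers):
        selected.add(int(np.argmax(W[:, layer])))
    return sorted(selected)

# Toy weighting functions: 5 channels, 3 atmospheric layers.
# Channel 4 is broad but never the most sensitive anywhere.
W = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.1],
    [0.1, 0.7, 0.2],
    [0.0, 0.2, 0.9],
    [0.3, 0.3, 0.3],
])
```

Here layers 0, 1 and 2 pick channels 0, 1 and 3 respectively, so the aggregated set is {0, 1, 3}: three channels carry most of the vertical information of the original five.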
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Towards the Assimilation of Far Infrared Data: Case Studies With Low and Mid Complexity Models

Authors: Lorenzo Mele, Carlo Grancini, Paolo Ruggieri, Tiziano Maestri, Guido Masiello, Alberto Ortolani, Alberto Carrassi
Affiliations: Department of Physics and Astronomy, University Of Bologna, School of Engineering, University of Basilicata, National Research Council of Italy, Institute for the Bioeconomy (CNR-IBE), Consortium Laboratory of Environmental Monitoring and Modelling for the Sustainable Development (Consorzio LaMMA)
The FORUM (Far-Infrared Outgoing Radiation Understanding and Monitoring) mission is ESA's ninth Earth Explorer mission, with launch planned in 2027. FORUM will provide measurements in the FIR spectral region (100-667 cm-1) of the outgoing longwave radiation (OLR) of Earth, with a spectral resolution never achieved before. The FIR spectrum is uniquely sensitive to the upper troposphere and lower stratosphere (UTLS) water vapour (WV) distribution, to the surface emissivity at high latitudes, and to the optical properties of high-level clouds. Besides their tremendous impact on diagnosing atmospheric conditions in real time, the above properties are regarded with great interest by the data assimilation (DA) community. The potential benefit of the upcoming FIR data in a DA process is not obvious, nor is it exempt from challenges to its feasibility. In the context of the ASI (Agenzia Spaziale Italiana) project MC-FORUM (Meteo and Climate exploitation of FORUM), and in collaboration with FIT-FORUM (Forward and Inverse Tool for FORUM), we are studying how to assimilate FIR data and their impact within a DA cycle. The present work shows preliminary results along this line. We intentionally leverage a low-order model: a multilayer version of the Lorenz-96 model, already used in previous theoretical studies on DA of satellite data. The use of a low-complexity model allows one to easily control the experimental setup while reducing the computational burden. We adopt a state-of-the-art ensemble-based DA scheme. The main challenges of the study are (i) the construction of a simple yet adequate observational operator H to generate synthetic FIR observations, (ii) the vertical localization of such data, which are vertical integrals over an atmospheric column, and (iii) the capability to discriminate the impacts of FIR measurements within the overall assimilation process. We will illustrate the potential of FIR against regular infrared data.
This work is part of an incremental research strategy, whose next step consists of performing a similar analysis using an intermediate-complexity atmospheric model (SPEEDY, Simplified Parameterizations, primitivE-Equation DYnamics) and a fast radiative transfer model (σ-IASI/FORUM) to generate the FIR radiances to be assimilated. The experimental setup using SPEEDY will also be detailed.
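The multilayer testbed mentioned above builds on the standard single-layer Lorenz-96 model. A minimal sketch of that base model (standard formulation with forcing F = 8; this is not the authors' multilayer code, and a synthetic "FIR" observation operator would then be some weighted vertical sum over layers):

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Lorenz-96 tendencies with cyclic indexing:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F.
    A standard chaotic low-order testbed for DA experiments."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.01, forcing=8.0):
    """One fourth-order Runge-Kutta integration step."""
    k1 = lorenz96_tendency(x, forcing)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_tendency(x + dt * k3, forcing)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Spin up 40 variables from a slightly perturbed rest state;
# by t = 10 model time units the trajectory is well into the attractor
x = np.full(40, 8.0)
x[0] += 0.01
for _ in range(1000):
    x = rk4_step(x)
```

In an ensemble DA experiment, many such trajectories are run in parallel, observed through the operator H, and updated at each analysis step.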
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: A.10.01 - POSTER - EO for Mineralogy Geology and Geomorphology

Earth observation is an important source of information for new and improved geological, mineralogical, regolith, geomorphological and structural mapping and is essential for assessing the impact of environmental changes caused by climatic and anthropogenic threats. Given the increasing demand for mineral and energy resources and the need for sustainable management of natural resources, the development of effective methods for monitoring and cost-effective and environmentally friendly extraction is essential.
In the past, the use of multispectral satellite data from Landsat, ASTER, SPOT, ENVISAT, Sentinel-2 or higher-resolution commercial missions, also in combination with microwave data, has provided the community with a wide range of possibilities to complement conventional soil surveys and mineralogical/geological mapping and monitoring, e.g. for mineral extraction. In addition, discrimination capabilities have been enhanced by hyperspectral data (pioneered by Hyperion and PROBA), which are now available through several operational research satellites and will be extended by CHIME.
The session aims to collect contributions presenting different techniques to process and simplify large amounts of geological, mineralogical, and geophysical data, to merge different datasets, and to extract new information from satellite EO data, with a focus on mine-site lifecycles.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Geospatial Artificial Intelligence Analysis for Tailings Storage Facilities based on Satellite Earth Observation

Authors: PhD. Jan Růžička, Robin Bouvier, Lukáš Brodský, Ing. Tomáš Bouček, Associate Professor Mike Buxton, PhD. Feven Desta, PhD. Mwansa Chabala, PhD. Martin Landa, Francisco Luque, PhD. Glen Nwaila, Shruti Pancoli
Affiliations: Charles University, Department of Applied Geoinformatics and Cartography, Cybele Lawgical, Lda, Czech Technical University in Prague, Department of Geomatics, University of Delft, Resource Engineering, Copperbelt University, ISMC-IBERIAN SUSTAINABLE MINING CLUSTER, University of Witwatersrand
The Geospatial Artificial Intelligence Analysis for Tailings Storage Facilities (GAIA-TSF) project aims to advance tailings storage facility (TSF) monitoring and risk assessment by integrating satellite Earth observation (EO), machine learning (ML), and in-situ data. GAIA-TSF addresses limitations in current TSF monitoring, including disconnected data pipelines, stakeholder-relevant integration challenges, and time-consuming analysis of large data sets. The project objectives are:
1. Establish a set of key variables for monitoring TSF anomalies and the risk of failures, together with an associated prototype design supporting monitoring based on satellite data.
2. Identify synergies between suitable Satellite Earth Observation (SatEO) technologies, mining engineering data, and machine learning to foster the efficiency of TSF operational monitoring.
3. Design, develop, and optimise a prototype integrating satellite data, in-situ data, and ML models to support the identification, explanation, and prediction of TSF risk.
Innovative satellite-based methods, including multispectral and hyperspectral imaging, will be combined with geotechnical and environmental datasets to analyze key variables such as soil stability and hydrological parameters. These datasets are used to train an advanced ML/DL architecture to detect anomalies and assess risk over time. In addition, eXplainable AI (xAI) techniques increase transparency by explaining the relationships between key variables and risk factors, facilitating stakeholder engagement and informed decision-making. The first case study, the 2022 Jagersfontein dam collapse, demonstrates the potential of the GAIA-TSF approach. The 2022 Jagersfontein tailings dam collapse was a structural failure of a mine tailings dam near Jagersfontein, in the Free State province of South Africa, which resulted in a mudslide.
Sentinel-2 time series data and Random Forest algorithms are used to monitor anomalies and assess the environmental impact after the disaster. The approach transforms time series into features using moving averages, lagged values, and differences. The concept of lag refers to the use of previous values from time series as features to predict the current or future class. The appropriate choice of lag depends on the temporal dynamics of the data collected by Sentinel-2. This approach revealed widespread damage to infrastructure, ecosystems, and agricultural land, underscoring the need for robust TSF monitoring systems. Acknowledgments The authors gratefully acknowledge the support of the European Union through the Horizon Europe Framework Programme for Research and Innovation under the project Geospatial Artificial Intelligence Analysis for Tailings Storage Facilities (GAIA-TSF), grant agreement number 101180263. The European Union Agency for the Space Programme (EUSPA) manages this project.
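The feature construction described above (lagged values, differences, moving averages) can be sketched as follows. The array values, lags and window size are illustrative, not the project's actual Sentinel-2 pipeline:

```python
import numpy as np

def time_series_features(series, lags=(1, 2), window=3):
    """Turn a 1-D time series (e.g. a per-pixel Sentinel-2 index) into a
    feature table of current value, lagged values, first difference and
    a moving average, as inputs for a classifier such as Random Forest.
    The lag features let the classifier see the recent temporal context
    of each observation."""
    series = np.asarray(series, dtype=float)
    max_lag = max(max(lags), window - 1, 1)
    rows = []
    for t in range(max_lag, series.size):
        row = [series[t]]
        row += [series[t - lag] for lag in lags]           # lagged values
        row.append(series[t] - series[t - 1])              # first difference
        row.append(series[t - window + 1 : t + 1].mean())  # moving average
        rows.append(row)
    return np.array(rows)

# Illustrative 6-step series; columns of X are:
# current, lag-1, lag-2, difference, 3-step moving average
X = time_series_features([0.2, 0.25, 0.3, 0.1, 0.12, 0.5])
```

In a post-disaster monitoring setting, an abrupt change such as a mudflow shows up as a large difference feature and a current value far from the moving average, which is what makes these features informative for anomaly classification.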
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: The Role of Copernicus Data and Copernicus Contributing Missions to Raw Materials Mining Life Cycle: Outcomes From S34I

Authors: Ana Claudia Teodoro, Joana Cardoso-Fernandes, Maria Mavroudi, Rushaniia Gubaidullina, Michael Tost, Krištof Oštir, Tanja Grabrijan, Mihaela Gheorghe, Marta Alonso Fernández, Myriam Montes, Alicia Garcia, Maria Fernandez, Antonio Peppe, Francesco Falabella, Fabiana Calò, Andreas Knobloch, Nike Luodes, Fahimeh Farahnakian, Francisco Javier González Sanz, Wai L. Ng-Cutipa, Ana B. Lobato, Irene Zananiri, Vaughan Williams, Matthias Siefert, Hannes Blaha, Marko Savolainen, Bjorn Fossum, Enoc Sanz Ablanedo, Mercedes Suarez, Petri Nygren, Victoria Jadot, Patricia Santos
Affiliations: Instituto de Ciências da Terra, Departamento de Geociências, Ambiente e Ordenamento do Território, Faculdade de Ciências, Universidade do Porto, Montanuniversität Leoben, University of Ljubljana, Faculty of Civil and Geodetic Engineering, GMV Innovating Solutions SRL, International Center for Advanced Materials and raw materials of Castile and Leon (ICAMCYL), Iberian Sustainable Mining Cluster (ISMC), National Research Council of Italy, Institute for the Electromagnetic Sensing of the Environment (CNR), Beak Consultants GmbH, Geological Survey of Finland (GTK), Geological Survey of Spain (IGME-CSIC), Hellenic Survey of Geology & Mineral Exploration (HSGME), Aurum Exploration, Omya, VTT Technical Research Centre of Finland Ltd, Ecotone AS, University of León, Department of Mining Topography and Structure, University of Salamanca, Geology Department, SPECTRAL MAPPING SERVICES SMAPS OY, Eurosense Belfotop BV/SRL
The Horizon Europe S34I project aimed to explore new data-driven methods to analyse Earth Observation (EO) data for systematic mineral exploration and continuous monitoring of extraction, closure and post-closure activities, and to increase European autonomy regarding raw materials. This work presents the principal outcomes of the S34I project, focusing on the processing of Copernicus and Copernicus Contributing Mission (CCM) data. S34I results were validated and demonstrated at six different sites representing industrially relevant environments, covering all phases of the mining life cycle: (i) onshore exploration (Áramo mine, Spain); (ii) exploration in the coastal-marine transition (Ria de Vigo, Spain); (iii) active open-pit mine (Gummern mine, Austria); (iv) closed mines with acid mine drainage (AMD) problems (Lausitz in Germany and Outokumpu in Finland); and (v) a closed mine with subsidence issues (Aijala mine, Finland). In the onshore pilot, the potential of Sentinel-1 and Sentinel-2 data was assessed using classical image processing techniques such as RGB combinations, band ratios, selective Principal Component Analysis (PCA) and unsupervised learning (K-means). A new ensemble artificial intelligence (AI) method was developed by integrating Support Vector Machines (SVM), Random Forest (RF), and Artificial Neural Networks (ANNs) to exploit Sentinel-2, Landsat-9 and PRISMA data. Different types of data were assessed to improve EO-based structural mapping, namely satellite Synthetic Aperture Radar (SAR) data from Sentinel-1, ALOS PALSAR-2 and COSMO-SkyMed, and airborne Light Detection and Ranging (LiDAR) data. New airborne LiDAR and hyperspectral datasets were acquired. An AI algorithm was developed for automated preprocessing of airborne hyperspectral data requiring minimal ground truth data. Airborne hyperspectral data was processed through a combination of PCA, endmember extraction, K-means clustering, band ratios, minimum wavelength mapping and the Spectral Angle Mapper (SAM).
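Among the classical techniques listed above, the Spectral Angle Mapper has a particularly compact definition: the angle between a pixel spectrum and a reference spectrum, which makes it insensitive to brightness differences caused by illumination. A minimal sketch (standard SAM definition; the spectra are illustrative, not project data):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper (SAM): the angle in radians between a
    pixel spectrum and a reference spectrum. Small angles mean similar
    spectral shape, regardless of overall brightness."""
    p = np.asarray(pixel, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# A brighter (scaled) copy of the same material has angle ~0,
# which is SAM's key advantage over a Euclidean distance
ref = np.array([0.1, 0.4, 0.6, 0.3])
assert spectral_angle(2.5 * ref, ref) < 1e-6
```

In a mineral-mapping workflow, each pixel is assigned to the reference (library) spectrum with the smallest angle, typically subject to a maximum-angle threshold.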
Data fusion of LiDAR and PRISMA data was combined with geochemical analysis utilising the Self-Organizing Map (SOM) technique and the K-means clustering algorithm to improve knowledge of cobalt distribution. Mineral predictive maps were produced, integrating airborne hyperspectral, geological and structural data with ground spectral and geochemical data using ANNs. A spectral library was compiled for ground truthing/calibration of EO-related methods and to determine the spectral signatures of outcropping rocks and soils. For coastal-marine transition exploration, classical image processing techniques such as RGB combinations, band ratios, selective PCA and K-means were tested on Sentinel-2 and Landsat-9 data. Additionally, optical satellite data from the WorldView-2 and -3 platforms were processed to detect and map placer deposits using spectral unmixing and Object-Based Image Analysis (OBIA) methods. Spectral unmixing was also applied to EnMAP hyperspectral data. The potential of Sentinel-1 radar data for beach placer exploration was assessed through unsupervised K-means classification using textural analysis results as inputs. A Sentinel-2 satellite-derived bathymetry (SDB) processing chain was developed based on the ACOLITE processor and ensemble machine learning algorithms. In the scope of the S34I project, an innovative Remotely Operated Vehicle (ROV) carrying an Underwater Hyperspectral Imaging (UHI) system was deployed to acquire data from the seafloor. Complementary spectral libraries were created for seafloor, beach, coastal rock outcrop and heavy mineral concentrate samples. Thematic maps were produced at regional and local scales to show geological features integrated into the coastal environment, considering both background data and new data collected in the project (e.g., UAV surveys and studies of the samples collected).
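The abstract does not detail the internals of the SDB chain; as a hedged illustration of the general idea behind empirical satellite-derived bathymetry, the classic Stumpf log-ratio model (two water-penetrating bands calibrated against known depths) can be sketched as follows. The band names and the constant n are illustrative assumptions, not the project's configuration:

```python
import numpy as np

def stumpf_ratio(blue, green, n=1000.0):
    """Log-ratio of two water-penetrating bands (Stumpf et al., 2003)."""
    return np.log(n * blue) / np.log(n * green)

def fit_sdb(blue, green, depths):
    """Fit depth = m1 * ratio + m0 by least squares against calibration soundings."""
    r = stumpf_ratio(blue, green)
    A = np.column_stack([r, np.ones_like(r)])
    (m1, m0), *_ = np.linalg.lstsq(A, depths, rcond=None)
    return m1, m0

def predict_sdb(blue, green, m1, m0):
    """Apply the calibrated model to new reflectance values."""
    return m1 * stumpf_ratio(blue, green) + m0
```

An ensemble variant, as mentioned in the abstract, would replace the single linear fit with several regressors whose predictions are averaged.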
In addition, for the first time, the extent (in square metres) of surface placer occurrences on beaches is known. For the active mine, representing the extraction phase of the mining life cycle, Pléiades Neo tri-stereo and WorldView-2 satellite images were used to produce digital elevation models (DEMs), which were then compared to DEMs produced from high-resolution UAV surveys conducted in the scope of the project. Volume maps of mining waste deposits and stockpiles were created using Structure from Motion (SfM) UAV photogrammetry. Low-cost dual-frequency GNSS sensors were used to establish Low-cost GNSS Monitoring Stations (LGMS) to estimate displacements. Deep Learning (DL) techniques were used to enhance the resolution of Sentinel-1 SAR data by a factor of 4 using COSMO-SkyMed satellite data. Amplitude-based change detection techniques were applied to Sentinel-1 data. Three-dimensional (3D) ground displacement maps were produced using multitemporal Sentinel-1 and COSMO-SkyMed datasets processed through advanced Interferometric Synthetic Aperture Radar (InSAR) techniques. In the closed mines with AMD problems, Sentinel-2, PRISMA, WorldView-3 and UAV data were employed to map AMD constituents using both supervised (ANN, logistic regression, RF, K-nearest neighbours) and unsupervised (SOM, K-means) approaches. Lastly, the ground monitoring tools developed at the extraction site were tested in the Aijala closed mine with subsidence problems. In conclusion, S34I developed different techniques to process and extract new information from EO data, particularly Copernicus and CCM satellite data. EO data were integrated with geological, mineralogical and geochemical data wherever possible. In this way, S34I outcomes will support mining activities across the whole life cycle, contributing to the path towards responsible mining while creating social and economic impact through EO uptake. This study is funded by the European Union under grant agreement no.
101091616 (https://doi.org/10.3030/101091616), project S34I – SECURE AND SUSTAINABLE SUPPLY OF RAW MATERIALS FOR EU INDUSTRY.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Pattern-based Sinkhole Detection In Kazakhstan From Sentinel-1 And -2 Data

Authors: M. Eng. Simone Aigner, Andreas Schmitt, M. Sc. Sarah Hauser
Affiliations: Hochschule München University Of Applied Sciences, Institute for Applications of Machine Learning and Intelligent Systems (IAMLIS)
Sinkholes are prevalent in karst regions and pose significant geohazards due to their sudden appearance and potential to disrupt landscapes and infrastructure. These geological structures result from the collapse of subterranean cavities. Typically, sinkholes manifest as funnel or box-shaped depressions caused by the erosion of limestone, commonly occurring in areas where groundwater flows through porous rock layers and eventually causes these cavities to collapse [1]. Recognized as potential geohazards, sinkholes threaten structural stability and pose environmental risks by linking surface water with aquifers, thereby elevating the risk of drinking water contamination. The critical nature of these events necessitates robust monitoring and detection systems to identify vulnerable areas early and implement protective measures. This need is particularly acute in Kazakhstan, a region where sinkhole incidents are notable but understudied, posing risks to crucial development projects such as the Hyrasia One initiative. This ambitious project, aiming to harness Kazakhstan's green hydrogen potential, depends on the geological stability of the terrain. Comprehensive mapping and understanding of sinkhole formation within the project's area are imperative to mitigate risks and safeguard future infrastructural integrity. Current methodologies for detecting sinkholes predominantly rely on digital terrain models (DTMs) and manual interpretation using geographic information systems (GIS). These methods, effective under certain conditions, face several limitations that can impede their utility, especially in remote places like Kazakhstan:
• Resolution and Availability: High-resolution DTMs are crucial for accurate sinkhole detection. However, in many regions, especially those that are less economically developed or less geographically accessible, such data may not be readily available or may be prohibitively expensive to acquire.
• Manual Effort and Expertise: The traditional use of GIS for sinkhole detection typically requires substantial manual effort and considerable expertise, making the process time-consuming and dependent on the availability of skilled personnel.
• Environmental Limitations: Methods that rely on physical surveys or indirect indicators can be less effective in regions with extensive vegetation cover or in urban areas where built environments obscure the ground surface.
• Dynamic Conditions: Many existing methods do not adequately account for the dynamic nature of sinkholes that develop or change rapidly due to factors like extreme precipitation, making real-time or near-real-time monitoring more challenging.
Given these limitations, there is a clear need for an innovative approach that enhances the efficiency, accuracy, and applicability of detection technologies. We address these needs by harnessing the capabilities of the European Space Agency's Sentinel satellites in the Copernicus program. Utilizing the 10 m Red, Green, Blue, and Near-Infrared bands of the Multispectral Imager (MSI) on ESA's Sentinel-2 satellites, this novel detection process moves away from the conventional reliance on artificially illuminated shaded-relief images produced from DTMs. By employing natural sunlight as the light source in our analyses, we effectively capture distinctive shadow patterns as indications of sinkholes. These patterns are particularly pronounced in the sparsely vegetated terrains of Kazakhstan, where natural light provides a more authentic and detailed view of the terrain. Geological features are highlighted more accurately and extensively by this method of natural solar illumination than by conventional approaches using computer-aided design (CAD) rendering of DTMs and visual inspection.
As the principles of processing DTM and satellite data align, this automatic image processing methodology can also be applied directly to terrain data from airborne LiDAR flights in built-up and vegetated areas, ensuring broad applicability. The workflow for automated sinkhole detection using satellite data is organized into three central steps:
· data pre-processing and fusion,
· sinkhole detection by morphology, and
· geospatial analysis of the detected structures.
The process begins with intensive processing of optical satellite data to enhance the visibility of geological structures. We specifically utilize datasets from varying seasonal conditions, capturing the humid vegetated state in February and the arid bare landscape in August. This approach accommodates environmental variations that significantly impact visibility and detection capabilities. A temporal-spectral fusion technique [2] is applied over time, which is critical in enhancing our ability to discern subtle variations in the terrain. Additionally, advanced techniques such as Principal Component Analysis (PCA) and the application of Kennaugh elements are employed to maximize the definition and clarity of surface structures, crucial for accurately identifying potential sinkholes. In the main detection phase, we employ a multi-scale approach using a Laplacian of Gaussian (LoG)-like filter similar to the one used for Multiscale Multilooking [3]. This filter is exceptionally well suited to identifying the round, funnel-shaped depressions characteristic of sinkholes, which appear as the photonegative of the radar point scatterers for which it was originally designed [3]. It is strategically applied at multiple scales to accurately pinpoint sinkholes of various sizes, with the lowest points (argmin) serving as key indicators. To minimize background noise and enhance detection accuracy, thresholds are carefully applied to these low points.
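The multi-scale LoG detection step can be illustrated with a small scipy sketch. This is a generic depression detector under stated assumptions, not the authors' Multiscale Multilooking filter; the sigmas and threshold are placeholder values:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, minimum_filter

def detect_sinkholes(dem, sigmas=(2, 4, 8), threshold=0.1):
    """Flag funnel-shaped depressions: strong positive multi-scale LoG
    response combined with the argmin criterion (local surface minima)."""
    best = np.zeros_like(dem, dtype=float)
    for sigma in sigmas:
        # Scale-normalised LoG; pits (local depressions) give positive responses,
        # strongest at the scale matching the pit's radius.
        best = np.maximum(best, sigma ** 2 * gaussian_laplace(dem, sigma))
    # Lowest points of the surface within a small neighbourhood (argmin).
    is_min = dem == minimum_filter(dem, size=5)
    return is_min & (best > threshold)
```

The same routine applies equally to a shaded-reflectance composite or a LiDAR DTM, which is the transferability point made above.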
Challenges posed by low-growing vegetation, which can lead to false detections, are addressed through the development of the Combined Vegetation Doline Index (CDVI). This index effectively differentiates between low vegetation and actual dolines through the bitemporal fusion of Sentinel-1 and Sentinel-2 data via hypercomplex data fusion [2], providing a clear distinction that is crucial for accurate classification. Following the detection phase, a comprehensive spatial analysis is conducted. This analysis maps the distribution of detected sinkholes and examines their clustering, which reveals valuable new insights into potential underlying geological processes. The sinkholes identified by the LoG filter are combined with the results from the CDVI to produce a final classification. This classification sorts the sinkholes into three categories: confirmed sinkholes, sinkholes with vegetation overlay, and probable sinkholes requiring further investigation. This advanced, systematic workflow not only refines the sinkhole detection process but also integrates its findings into broader geological and infrastructural planning. Such integration is crucial for managing risks and supporting sustainable development in sinkhole-prone regions like Kazakhstan, where understanding and mitigating geohazards are essential for the safety and sustainability of development projects. The results of the present study on automated sinkhole detection show an accuracy of 92% for sinkholes, especially in the arid months, when compared with the merged reference dataset. The comparison of the individual datasets with the developed reference dataset shows that principal component analysis in the arid month performs best, with a precision of 88% for sinkholes. A spatial analysis confirms the hypothesis that sinkhole clusters often run parallel to surface watercourses.
This observation allows us to draw conclusions about the underlying watercourses and geological processes in the area. It further enables the designation of territories with few to no dolines, which can be considered geologically stable in the long term. A significant advantage of this methodology is its transferability to other regions through the use of freely available and up-to-date remote sensing data (Sentinel-1 and -2). Especially in areas with limited geological reference data, such as Kazakhstan, this technique proves to be a most useful tool for the comprehensive mapping of sinkholes and the derivation of correlations. The process is also automated to such an extent that large areas can be mapped rapidly. For the first time, this study enables the comprehensive annual mapping of sinkholes, which is especially relevant in the context of climate change. Changes in precipitation patterns, such as an increase in heavy rainfall events or longer periods of drought, can have a significant impact on the development and frequency of sinkholes. Monitoring also plays a role in connection with water protection, crisis management, and soil conservation. Automated sinkhole detection therefore also provides the basis for long-term monitoring of these geological structures in order to identify and minimize potential risks at an early stage.
[1] D. Ford and P. D. Williams, Karst Hydrogeology and Geomorphology, John Wiley & Sons, 2007.
[2] A. Schmitt, A. Wendleder, R. Kleynmans, M. Hell, A. Roth, and S. Hinz, "Multi-Source and Multi-Temporal Image Fusion on Hypercomplex Bases," Remote Sens., vol. 12, 943, 2020.
[3] A. Schmitt, A. Wendleder, and S. Hinz, "The Kennaugh element framework for multi-scale, multi-polarized, multi-temporal and multi-frequency SAR image preparation," ISPRS J. Photogramm. Remote Sens., vol. 102, pp. 122–139, Apr. 2015, doi: 10.1016/j.isprsjprs.2015.01.007.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Fusing EnMAP and Sentinel for resolution enhanced geological mapping

Authors: Sam Thiele, Dr Rupsa Chakraborty, Dr Parth Naik, Dr Richard Gloaguen
Affiliations: Helmholtz Institute Freiberg, Helmholtz Zentrum Dresden Rossendorf, Centre for Advanced Systems Understanding, Helmholtz Zentrum Dresden Rossendorf
The emerging generation of hyperspectral satellites provides a wealth of new data for geological mapping, allowing subtle mineralogical signatures and trends to be identified from space. In areas with suitable exposure this will facilitate improved geological mapping, allow exploration for much-needed mineral deposits, and help manage environmentally hazardous geomaterials (e.g., tailings, acid mine drainage). However, the inevitable compromise between coverage and resolution currently limits hyperspectral data to 30×30 m pixels. This reduces the resolvability of geological structures relative to higher spatial sampling (but lower spectral resolution) sensors such as Sentinel-2 or RapidEye. Various techniques have been proposed for super-resolving hyperspectral data, but these often favour cosmetics (e.g., injecting spatial features related to overall pixel brightness while adding little additional spectral information) over spectral fidelity. Furthermore, many existing tools can introduce unrealistic spectral distortions and so fail to preserve spectral integrity. In this contribution we present a novel data fusion and resolution enhancement approach that combines low spatial resolution (high ground sampling distance) hyperspectral data with high spatial resolution multispectral or RGB information to derive a resolution-enhanced hyperspectral image. Unlike established approaches, our method uses a small (9×9 pixel) sliding window to learn very local relationships, allowing correlations that would be invalid at the large scale to inform the resolution enhancement. Additionally, our approach is inherently conservative, adding spatial detail where informative correlations are found but defaulting to the original (lower spatial resolution) hyperspectral data where only poor relationships are found. This results in a balance of meaningfully enhanced spatial detail while rigorously preserving hyperspectral information and features.
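A toy version of this sliding-window idea, fitting a local linear relationship between the upsampled hyperspectral band and a high-resolution band and injecting detail only where the local fit is informative, might look like the sketch below. The window size, scale factor and r² cutoff are illustrative assumptions, not the published method:

```python
import numpy as np

def upsample_nearest(img, scale):
    """Nearest-neighbour upsampling via Kronecker product."""
    return np.kron(img, np.ones((scale, scale)))

def fuse_band(hsi_lr, pan_hr, scale=3, win=9, r2_min=0.5):
    """Sharpen one low-resolution hyperspectral band using one high-resolution
    band; fall back to the original values where the local fit is poor."""
    hsi_up = upsample_nearest(hsi_lr, scale)
    out = hsi_up.copy()
    h = win // 2
    rows, cols = hsi_up.shape
    for i in range(h, rows - h):
        for j in range(h, cols - h):
            x = pan_hr[i - h:i + h + 1, j - h:j + h + 1].ravel()
            y = hsi_up[i - h:i + h + 1, j - h:j + h + 1].ravel()
            # Local least-squares fit y ≈ a*x + b and its r².
            a, b = np.polyfit(x, y, 1)
            pred = a * x + b
            ss_res = ((y - pred) ** 2).sum()
            ss_tot = ((y - y.mean()) ** 2).sum() + 1e-12
            r2 = 1.0 - ss_res / ss_tot
            if r2 >= r2_min:  # inject detail only where the correlation is informative
                out[i, j] = a * pan_hr[i, j] + b
    return out
```

In a full implementation this loop would run per band (vectorised, and with proper co-registration and point-spread-function handling); the sketch only conveys the local-fit-with-fallback logic.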
We tested our approach using EnMAP and Sentinel-2 data covering the Lofdal carbonatite complex in Namibia. These data nicely capture the regional geological structures, including dykes, faults and large-scale folding. After resolution enhancement using our data fusion approach, we were able to resolve intricate and geologically sensible marker horizons (bedding) that were otherwise difficult to detect using the EnMAP data alone. Similarly, band ratio analyses performed to map carbonate minerals on the 30 m EnMAP and 10 m resolution-enhanced data gave highly correlated results. Both resolve carbonatite dykes, but in much more detail after the resolution enhancement. Spectral interpretation confirmed the hyperspectral integrity of the resolution-enhanced dataset, with meaningful changes in absorption depths for pixels containing significant sub-pixel variation (e.g., around the dykes), and little change in spatially homogeneous regions. In conclusion, we suggest that our novel resolution enhancement method is able to fuse the spatial resolution of RGB and/or multispectral sensors with the rich hyperspectral information available from EnMAP. Spectral analyses and geological interpretations of the resolution-enhanced data confirm that meaningful spectral and spatial information was added during the fusion process, allowing better discrimination of geologically relevant features (including bedding and dykes). We also speculate that in many (arid) regions our resolution enhancement approach could help separate bare-earth and vegetation spectra, enabling more accurate geological (and vegetation) mapping.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Unravelling the Evolution of Alluvial Fans in the Northern Sultanate of Oman: Applications of Remote Sensing and Deep Learning

Authors: Andrea Pezzotta, Lukas Brodsky, Mohammed Al Kindi, Michele Zucali, Andrea Zerboni
Affiliations: Dipartimento di Scienze della Terra “A. Desio”, Università degli Studi di Milano, Department of Applied Geoinformatics and Cartography, Charles University, Earth Sciences Consultancy Centre
Alluvial fans are distinctive fluvial landforms that develop at the abrupt widening of mountain valleys due to decreasing slope. They can occur in various environments and geomorphological settings (Ventra & Clarke, 2018). Their formation is influenced by the interplay between tectonics and climatic conditions. Alluvial fans are conical landforms formed along a drainage system where a topographic gradient exists and deposition prevails over erosion. In such a context, a fan results from aggrading sediments that spread out from a sedimentary source through multiple channels radiating from the apex, which may shift over time (Blair & McPherson, 2009). The activation/deactivation of channels ultimately depends on the local hydrological regime. As a consequence, alluvial fans preserve evidence of various generations of inactive channels (paleochannels) and of lateral migrations of active streams, and allow the establishment of a relative chronology based on their geometrical relationships. In the northern Sultanate of Oman, the southern margin of the Al-Hajar Mountains is flanked by extensive alluvial fans, forming vast and coalescing bajada-type landforms. The semi-arid to arid climate of the region, combined with the almost complete absence of plant cover, allows observation of the bare surface of the alluvial fans, which exhibits intricate and complex patterns of (paleo-)drainage systems. The latter consist of a series of exhumed gravel ridges, representing alluvial fan systems with ages ranging from the Miocene to the Pleistocene (Blechschmidt et al., 2009). Notably, previous works (Maizels, 1987, 1990; Maizels & McBean, 1990) have identified up to 14 generations of paleochannels along the alluvial fan at Barzaman. The current availability of multispectral high-resolution SPOT 6 and 7 satellite imagery offers an unprecedented opportunity to investigate these paleochannel systems across an area of approximately 1780 km².
Understanding these landforms is critical for reconstructing the past hydrological conditions of the Barzaman alluvial fan, whose occurrence at the foothills of the mountain belt reflects the climatic and tectonic influences on its evolution. However, manual mapping is time-consuming, and the subjective interpretation of fluvial features and paleochannels can lead to inconsistencies, characterised by different levels of generalisation, in the assessment of hydrological history and landscape evolution. To address these challenges, we propose implementing Machine Learning/Deep Learning techniques to automate the detection of fluvial features and the tracing of paleochannel paths, thereby enhancing mapping accuracy and consistency and ultimately facilitating a better understanding of alluvial fan dynamics. Preliminary results involve the manual mapping of paleochannels and fluvial features in a portion of the area using 4-band SPOT satellite imagery with a resolution of 6 m. This approach enables the recognition of several generations of paleochannels based on geometric relationships, including intersections and overlaps. The mapped area serves as a dataset for developing algorithms that leverage Deep Learning techniques, specifically Convolutional Neural Networks (CNNs). This step allows for the production of a probabilistic map identifying paleochannel systems, differentiating between paleochannels and the underlying substrate. CNN models are particularly effective for the segmentation of alluvial fans from multispectral imagery due to their capacity to capture both spatial and spectral features, which are essential for accurate identification and mapping. CNNs are capable of learning the spatial context, through sequences of convolution operators, that distinguishes paleochannel systems from other landforms. Furthermore, they have the capacity to generalize effectively across diverse geographic regions and environmental conditions.
These characteristics enable the model to automate the segmentation process over extensive datasets. In particular, a U-Net architecture (Ronneberger et al., 2015) was selected for the purpose of alluvial fan segmentation. The U-Net model's encoder-decoder architecture is particularly well suited to image segmentation tasks, where the accurate delineation of boundaries is of paramount importance. The U-Net model is capable of capturing both low-level and high-level features due to the presence of contracting (encoder) and expansive (decoder) paths, which progressively reduce and then recover spatial resolution. A comprehensive learning pipeline (PyTorch) was developed based on the U-Net architecture, which entails the mapping of fluvial landforms in analogous environmental contexts. Specifically, the preliminary dataset comprises 387 multispectral images, of which 270 were augmented and allocated for training over 50 epochs and 117 were used for testing. The model's performance was evaluated using accuracy metrics, achieving 93% accuracy on the testing set. This research is expected to yield a high-resolution geomorphological map of the Barzaman alluvial fan and to develop Deep Learning algorithms for the automated tracing and identification of fluvial and geomorphological features. The development of this approach will enhance the potential for remote mapping of alluvial fans and other fluvial landforms in semi-arid and arid environments, significantly advancing the understanding of alluvial fan dynamics and providing valuable insights for future research in fluvial geomorphology. We kindly acknowledge the support of ESA in providing access to the SPOT imagery through project PP0100418 (A. Pezzotta), as well as the Erasmus+ Traineeship 2024/25 grant for giving A. Pezzotta the opportunity to conduct this research at Charles University.
References
• Blechschmidt, I., Matter, A., Preusser, F., & Rieke-Zapp, D. (2009) - Monsoon triggered formation of Quaternary alluvial megafans in the interior of Oman. Geomorphology, 110, 128–139. https://doi.org/10.1016/j.geomorph.2009.04.002.
• Maizels, J.K. (1987) - Plio-Pleistocene raised channel systems of the western Sharqiya (Wahiba), Oman. In: Frostick, L., & Reid, I. (eds), Desert Sediments: Ancient and Modern. Geological Society Special Publication, 35, 31-50.
• Maizels, J.K. (1990) - Raised channel systems as indicators of palaeohydrologic change: a case study from Oman. Palaeogeography, Palaeoclimatology, Palaeoecology, 76, 241-277.
• Maizels, J.K., & McBean, C. (1990) - Cenozoic alluvial fan systems of interior Oman: palaeoenvironmental reconstruction based on discrimination of palaeochannels using remotely sensed data. In: Robertson, A.H.F., Searle, M.P., & Ries, A.C. (eds), The Geology and Tectonics of the Oman Region. Geological Society Special Publication, 49, 565-582.
• Ronneberger, O., Fischer, P., & Brox, T. (2015) - U-Net: Convolutional Networks for Biomedical Image Segmentation. ArXiv:1505.04597.
• Ventra, D., & Clarke, L.E. (2018) - Geology and Geomorphology of Alluvial and Fluvial Fans: Terrestrial and Planetary Perspectives. Geological Society of London, 440. https://doi.org/10.1144/SP440.
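The 93% figure reported above is a pixel-wise accuracy; segmentation results of this kind are typically evaluated with pixel accuracy alongside intersection-over-union (IoU). A minimal numpy sketch of both metrics, not tied to the authors' PyTorch pipeline, is:

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the target mask."""
    return float((pred == target).mean())

def iou(pred, target):
    """Intersection-over-union for binary masks (1 = paleochannel, 0 = substrate)."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0
```

IoU is the stricter of the two: with a rare foreground class (thin channel traces on a wide fan), pixel accuracy can be high even when the channels themselves are poorly delineated.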

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Advancing Mineral Identification Through Image Super Resolution (SIR) Methods: A Case Study in Kosovo

Authors: Martyna Durda, Bartosz Skóra, Zuzanna Słobodzian, Jakub Sumara, M.Sc Katarzyna Adamek
Affiliations: AGH University of Krakow, Department of Geoinformatics and Applied Computer Science
The integration of remote sensing and image enhancement technologies has the potential to revolutionize mineral identification and geological mapping. This study investigates the application of image super resolution (SIR) methods to improve the accuracy of satellite imagery for mineral identification in the Žegovac Mountains of Kosovo. The project aims to address the challenges posed by the low resolution of satellite data, which can limit the precision of geological and other data interpretations, by utilizing advanced SIR models to enhance image quality. The methodology encompasses three key steps. First, a selection of SIR models will be applied to satellite imagery to enhance spatial resolution while preserving spectral integrity. Second, spectral curves obtained from the analysis of in situ rock samples from Kosovo with a hyperspectral camera will be compared to those derived from the SIR-enhanced satellite images. This comparison will validate the effectiveness of SIR methods in accurately reproducing mineral-specific spectral signatures. Finally, areas of occurrence of specific rock types will be depicted on a map of Kosovo, showcasing the potential of SIR-enhanced satellite imagery in supporting mineral exploration efforts. The primary objective of developing the SIR model is to generate synthetic high-resolution imagery from Sentinel-2 data. Acquiring high-resolution satellite images is often prohibitively expensive and limited in availability, posing challenges for widespread application in geoscientific research. This study seeks to address this limitation by training the SIR model on a limited dataset of high-resolution satellite imagery, enabling its application to freely available low-resolution Sentinel-2 images. This approach aims to enhance spatial resolution and detail while maintaining the accessibility and cost-effectiveness of open-source satellite data.
Preliminary results suggest that SIR methods significantly improve the spatial clarity of satellite images without compromising spectral data, enabling more precise mineral identification. The integration of SIR with hyperspectral analysis could lead to the identification of new mineral deposits, optimize mineral extraction processes, and support environmental protection initiatives by minimizing exploration-related disturbances. This study contributes to the session themes of innovation in Earth observation and its application to geoscience, mining, and environmental management. The results highlight the value of advanced image processing techniques for operationalizing decision-making in resource management and sustainability.
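One standard way to compare measured and SIR-derived spectral curves, as proposed in the second methodological step above, is the spectral angle, which is insensitive to overall brightness scaling. The sketch below is an assumption about a plausible comparison metric, not a method stated by the authors:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller means more similar shape.
    Invariant to brightness scaling, so it tests whether super-resolution
    preserved mineral-specific spectral signatures rather than absolute levels."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip to guard against floating-point values marginally outside [-1, 1].
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Applied per pixel, a near-zero angle between the laboratory curve and the SIR-enhanced pixel spectrum would support the claim that spectral data are not compromised.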

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Unlocking Hidden Treasures from Above by Hyperspectral Imaging across Scales – Impact of Increased Spatial Resolution on Mineral Mapping Accuracy –

Authors: Thomas Bahr, Dr. Friederike Koerting, Dennis Adamek, Dr. Daniel Schläpfer
Affiliations: NV5 Geospatial Solutions GmbH, Norsk Elektro Optikk AS, ReSe Applications LLC
After the launch of multiple spaceborne hyperspectral instruments, such as the European EnMAP and PRISMA missions, providing free spaceborne data for a range of applications, the quality and usability of the acquired spaceborne hypercubes can be compared against existing airborne or drone-based hyperspectral imaging data. This study shows the impact of scale, arising mainly from different spatial resolutions, on mineral mapping results. Three datasets from hyperspectral sensors, namely the spaceborne EnMAP sensor (30 m spatial resolution), the airborne AVIRIS-NG sensor (2.9 m spatial resolution), and the drone-based HySpex Mjølnir VS-1240 and S-620 VHR sensors (0.1 m spatial resolution), are analysed for mineral surface cover. All datasets were collected over the Cuprite Hills, Nevada, USA. Cuprite is known for its distinct geologic features, showing hydrothermal alteration minerals at the surface, and has been the subject of extensive spectral imaging campaigns. The site has been, and continues to be, used as a validation site for different instruments, and data acquired over the site are frequently used to test and validate processing routines and mapping algorithms. The Cuprite Hills are characterized by extensive hydrothermal alteration and exposed terrain, making them a prime location for studying argillic and advanced argillic alteration zones and dominating minerals such as Kaolinite and Alunite. Several hydrothermal alteration minerals display distinct spectral features in the 0.4-2.5 μm wavelength region that allow detection and mapping using airborne, spaceborne, and drone-based hyperspectral imagery. In this presentation, new evaluation results for the above hyperspectral data will be presented, based on the scientifically proven ENVI technology.
All datasets represent surface reflectance in cartographic geometry (Level-2A products) and have been processed comparably, allowing the assumption that spatial resolution is the main contributor to changes in the mineral classification products of the surface. While AVIRIS-NG and EnMAP cover approximately the same area over the western Cuprite Hills, the HySpex drone-based dataset covers a smaller subset of that area. Ground truthing, detailed spectral analysis, and the endmember spectra of G. Swayze et al. (2014) are used as a reference to validate mapping results. Prior to endmember extraction and classification, the Minimum Noise Fraction (MNF) technique was applied to map differences in the data variance as a function of the variable surface mineralogy. Increasing differentiation, from mineral alteration types down to the mineral species level, is achieved with increasing spatial resolution. For the extraction and classification of the mineral endmembers the ENVI spectral hourglass procedure was used. This processing scheme consists of noise suppression and dimensionality reduction using the MNF transformation, determination of endmembers with the Pixel Purity Index method, extraction of the endmember spectra by n-dimensional scatter plotting, and their identification using spectral library comparisons. The image-derived endmember spectra are then used as input to various whole-pixel classification algorithms, such as Spectral Feature Fitting (SFF), and to spectral mixture analysis in the subpixel domain, such as Mixture Tuned Matched Filtering (MTMF). Successful mineral mapping with hyperspectral data is often dependent on the ability to differentiate endmembers from the data. While several mineral endmembers can be identified and are validated by ground truthing, we focus on the detection of the Alunite endmembers as a proxy for advanced argillic alteration zones.
Comparing the classification results of various classifiers (whole-pixel and spectral unmixing) for a suite of in-scene Alunite endmembers shows an increase in detail from EnMAP to AVIRIS-NG and HySpex imagery as a function of their spatial resolution. Nevertheless, at Cuprite, EnMAP data can be used successfully to detect abundances of Alunite minerals with good correspondence to the ground truth-derived mapping. Overall, the large-scale alteration patterns mapped across the three datasets are in accordance with each other, though smaller patterns are lost with decreasing spatial resolution. Accurate detection of alteration minerals such as Alunite from spaceborne hyperspectral imagery such as EnMAP shows the potential of this technology, especially considering that data is available free of charge over many areas of global interest for mineral exploration. The results show the value of mineral mapping by spaceborne hyperspectral imagery to identify areas of interest, but also that higher-resolution airborne or drone-based data is necessary for a precise local investigation of mineral patterns.
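As a rough illustration of the whole-pixel matching step described above, the following Python sketch classifies pixels by spectral angle against library endmembers. This is a simplified stand-in for the ENVI SFF/MTMF workflow, not its actual implementation, and the spectra and threshold here are synthetic:

```python
import numpy as np

def spectral_angle(pixel, endmember):
    """Angle (radians) between a pixel spectrum and an endmember spectrum.
    Smaller angles indicate a closer spectral match."""
    cos = np.dot(pixel, endmember) / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(cube, endmembers, max_angle=0.1):
    """Assign each pixel the index of the best-matching endmember,
    or -1 if no endmember lies within max_angle."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    angles = np.stack([[spectral_angle(p, e) for e in endmembers] for p in flat])
    best = angles.argmin(axis=1)
    best[angles.min(axis=1) > max_angle] = -1
    return best.reshape(h, w)

# Synthetic 2x2 "image" with 4 bands and two illustrative endmembers
alunite = np.array([0.4, 0.5, 0.2, 0.6])
kaolinite = np.array([0.3, 0.6, 0.5, 0.4])
cube = np.stack([alunite * 0.9, kaolinite * 1.1, alunite, np.ones(4)]).reshape(2, 2, 4)
labels = classify(cube, [alunite, kaolinite])
```

Because the spectral angle is invariant to illumination scaling, the brightness-scaled pixels still match their endmembers, while the flat spectrum is rejected as unclassified.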

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: B.01.02 - POSTER - Earth Observation accelerating Impact in International Development Assistance and Finance

In this session, attendees will delve into an impact-oriented approach to accelerating the use of Earth Observation (EO) in support of international development assistance, including integration into financing schemes. Presenters will provide in-depth insights into real-world application use cases across multiple thematic domains, implemented in developing countries in coordination with development and climate finance partner institutions. The session will prioritise examples showcasing the tangible impact on end-users in developing countries and the successful uptake of EO products and services by their counterparts. Counterparts here can be national governments or International Financial Institutions (IFIs), such as multilateral development banks (World Bank, ADB, IDB, EBRD) and specialised finance institutions (e.g. IFAD), as well as Financial Intermediary Funds (FIFs), most specifically the large global climate and environment funds (GCF, GEF, CIF, Adaptation Fund). Attendees can expect to gain valuable insights into how the process of streamlining EO in development efforts is (1) opening new market and operational roll-out opportunities for the EO industry, and (2) translating into impactful change on the ground and driving sustainable development outcomes worldwide.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Harvesting Earth Observation for Belize: Transforming Financial Strategies for Climate-Resilient Agriculture.

Authors: Koen De Vos, Kasper Bonte, Melissa Brown, Koen Van Rossum, Laurent Tits
Affiliations: VITO, World Bank
Belize’s agricultural sector is paramount to the national economy, with exports of sugarcane, citrus, and banana products contributing almost 5% of the country’s GDP. Meanwhile, this industry has increasingly been impacted by droughts in key production regions, and is projected to be hit even more frequently and harshly as climate change progresses. To address these challenges, the Government of Belize and the Ministry of Agriculture, Food Security and Enterprise (MAFSE) launched the Climate Resilient and Sustainable Agriculture Project (CRESAP), focusing on enhancing productivity and climate resilience. A key component of CRESAP is the expansion of the Belize Agriculture Information Management System (BAIMS) into a platform that supports evidence-based decision-making for farmers, policymakers, and financing institutions. ESA’s Global Development Assistance (GDA) programme focuses on targeted Agile EO Information Development applied to thematic priority sectors, such as agriculture. It has been instrumental in supporting CRESAP by delivering Earth Observation (EO) services tailored to Belize’s needs, which are well suited for integration into BAIMS. Through collaborations with the World Bank, local experts, and industry representatives, we have developed a suite of EO-based products that support climate risk assessments, sustainability monitoring, and strategic financial planning in a data-driven manner. The integration of these products into BAIMS particularly simplifies the workflows for financial institutions supporting climate-smart agriculture by bundling all relevant information in one central, accessible place, thereby allowing these institutions to integrate the use of EO into their workflows. Tailored to the needs of the export-heavy industry, a crop mapper was developed for sugarcane, citrus, and banana production areas at 10 m resolution in leading cash-crop production regions. 
For this, we used a combination of high-resolution Sentinel-1 SAR and Sentinel-2 optical imagery and fine-tuned existing crop type classification algorithms (e.g., WorldCereal) to the Belizean context. By analyzing multiannual composites from 2020 to 2022, we achieved high classification accuracies, and thereby highly precise maps, of major production zones. Such information can be fed directly into the monitoring, reporting, and verification processes essential for institutions providing crop-specific loans and micro-grants. Precipitation variability analysis revealed a sharp divide between northern and southern regions from 2015 onward in terms of meteorological drought occurrence. Southern regions (e.g., Stann Creek, Toledo), which are dominated by irrigated citrus and banana groves, have experienced alternations of wetter and drier years over the last decade. Northern regions (e.g., Orange Walk), which are dominated by rainfed sugarcane fields, have experienced multiple consecutive drier-than-average years, heavily impacting the stability of sugarcane production. By combining our sugarcane mapping product with the existing Vegetation Condition Index (VCI) from FAO’s Agricultural Stress Index System (ASIS), we identified sugarcane production hotspots that have experienced substantial agricultural drought impacts, and where investments in irrigation infrastructure or other drought-resilient adaptations are most urgently needed. To comply with the sustainability component of CRESAP, we produced a deforestation map that reveals substantial land cover changes (e.g., the Orange Walk district has lost 20% of its forest cover since 2001). This deforestation can largely be attributed to the expansion of grasslands for livestock. These insights are critical for institutions that assist farmers in allocating resources effectively, while also ensuring that efforts to increase productivity in the livestock sector align with environmental conservation goals. 
This work demonstrates how the integration of an EO-portfolio into an existing platform can potentially transform climate finance strategies, enable evidence-based investment decisions, and allow for more transparent monitoring, reporting, and verification practices. This collaboration sets a precedent for using advanced EO technologies to secure a climate-resilient future for Belize’s agricultural sector, aligning with international climate finance goals and environmental conservation policies.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: How Consistent Are Existing Earth Observation-Based Poverty Prediction Models in Sub-Saharan Africa?

Authors: Reason Mlambo, Dr Sohan Seth, Dr Ian McCallum, Dr Vikki Houlden, Dr Gary Watmough
Affiliations: School of Geosciences, University of Edinburgh, School of Informatics, University of Edinburgh, International Institute for Applied Systems Analysis, School of Geography, University of Leeds
In the last few years, there has been a significant proliferation of machine learning (ML) models that leverage Earth Observation (EO) data for predicting poverty and socioeconomic wellbeing, largely due to the United Nations' call for a ‘data revolution’ to aid the implementation and monitoring of the Sustainable Development Goals (SDGs). This trend is also driven by the limitations of traditional data sources like censuses and household surveys, which are costly, infrequent, and lack the necessary detail for effective policy making and targeted aid. In the past five years, significant progress has been made in developing EO-based ML models for predicting poverty, with ongoing refinements involving variations in statistical methodologies, algorithm complexity, and a broader array of EO covariates. However, despite these models utilising the same Demographic and Health Survey (DHS) data for training – albeit in different forms (either harmonised or survey-specific) – they exhibit varied performances and accuracies across different regions. With the 2030 SDG timeline nearing its end and an increased interest in EO data for poverty mapping, it is critical to evaluate the consistency of these models, particularly across sub-Saharan Africa (SSA), which not only suffers from a substantial lack of data but also shoulders the heaviest burden of poverty and socioeconomic inequality. This study provides a quantitative evaluation and comparison of four recent EO-based poverty prediction ML models by Yeh et al. (2020), Chi et al. (2021), Lee and Braithwaite (2022), and McCallum et al. (2022) across 22 sub-Saharan African countries to assess their consistency in estimating poverty trends at the second administrative unit level. We found that the four models achieved unanimous agreement in less than 10% of combined administrative units across all countries. In approximately 50% of the units there was agreement between two or three models. 
The remaining units saw either absolute disagreement or agreement between different pairs of models on varying quintile values. In the pairwise comparisons we observed a consistently positive correlation in the quintile ranks determined by the four models across both rural and urban administrative units in all countries combined, with no significant differences in the Spearman's correlation coefficients among the different model pairs. However, a visual analysis of the poverty maps at the second administrative unit level by the four models highlighted significant variation in spatial patterns of wealth quintiles across the 22 countries. This discrepancy was further confirmed by the overall spatial agreement scores which were relatively lower than the correlation coefficients across all model pairs. While the models of McCallum and Lee showed the highest proportion of spatial agreement, and those of Yeh and Lee the lowest, no pair of models consistently achieved high agreement scores throughout the 22 countries. Moreover, comparisons with the latest DHS wealth quintiles in each country showed that no single model consistently ranked in the top or bottom five for spatial agreement across all countries. Additionally, while no country consistently showed high or low spatial agreement scores across all model comparisons, Mozambique, Uganda, and Rwanda frequently ranked in the top five for spatial agreement in at least half of the pairwise assessments for both unstratified and rural units. Conversely, Sierra Leone, Benin, and Mali often appeared in the bottom five in these assessments. Although the four models show varying degrees of alignment in their poverty assessments across different countries, it is apparent that no single model or pair of models consistently achieves spatial agreement. This variance underscores the complexities of using 'global' models to predict poverty in diverse geographic landscapes without taking local contexts into account. 
These models typically assume a uniform relationship between assets and wealth across varied social contexts, but this assumption often conflicts with empirical evidence. Additionally, the black-box nature of most of these models prohibits a deep understanding of the factors driving poverty, as they rely heavily on unexplained features with obscure links to the underlying drivers of poverty. The observed variations make it clear that these models need further refinement. We hope our assessment will prove useful in guiding future enhancements in this field.
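The quintile-agreement and rank-correlation comparison described above can be sketched in a few lines of Python. The model scores below are synthetic stand-ins for the four models' predictions, and the helpers are deliberately simplified (no tie handling):

```python
import numpy as np

def quintiles(values):
    """Assign each unit a wealth quintile (1 = poorest, 5 = richest) by rank."""
    ranks = values.argsort().argsort()
    return ranks * 5 // len(values) + 1

def spearman(a, b):
    """Spearman rank correlation (assumes no ties, as in this sketch)."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

def agreement(qa, qb):
    """Fraction of administrative units assigned the same quintile by both models."""
    return float(np.mean(qa == qb))

# Synthetic wealth scores for two hypothetical models over 100 units
rng = np.random.default_rng(0)
model_a = rng.random(100)
model_b = model_a + 0.2 * rng.random(100)  # correlated but not identical
qa, qb = quintiles(model_a), quintiles(model_b)
```

As the abstract notes, a high rank correlation between two models can coexist with a much lower quintile-level spatial agreement, since small rank shifts near quintile boundaries flip the assigned class.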

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Earth Observation-Driven Parametric Flood Insurance for Enhancing Climate Resilience

Authors: Paolo Tamagnone, Chloe Campo, Guy Schumann, Alice Castro, Maria Mateo Iborra, Konrad Jarocki, Dipankar Munshi
Affiliations: Research and Education Department, RSS-Hydro, School of Science (Geospatial Science), Royal Melbourne Institute of Technology (RMIT) University, Ibisa Network
Climate change-exacerbated extreme weather events, particularly floods, and unrestrained urbanisation pose significant risks to communities, economies, and human lives worldwide. Fluvial floods caused by river overflows, coastal floods caused by storms or high tides, and pluvial floods exacerbated by overburdened drainage systems are some of the ways in which floods can cause significant material damage to infrastructure, supply chains, and operations. Traditional insurance models often struggle to adequately address these increasing risks, particularly in regions with limited historical data and complex hydrological systems. This research explores the potential of Earth Observation (EO) data to revolutionize flood insurance by enabling the development of innovative parametric insurance products. The proposed solution aims to leverage EO data throughout the entire insurance product lifecycle, from design and development to operational execution, and to exploit advanced EO techniques to improve flood detection and quantification. By integrating satellite imagery and geospatial datasets, the proposed methodology aims to overcome the limitations of traditional stream gauge and inundation model-based methods and provide more accurate and timely information on flood impact. Stream gauges are limited by their inability to measure extreme discharges, by funding constraints, and by international cross-border data-sharing restrictions, while accurate flood models require high-resolution topography and are difficult to implement and maintain on a large scale due to costs. Unlike traditional methods, the proposed EO-based solution is designed to be highly scalable and transferable to other regions. This feature is particularly valuable for expanding insurance coverage globally. Moreover, EO data can supplement or reduce reliance on ground-based measurements and modelled scenarios and enhance spatial coverage, reducing costs and improving data quality. 
Specifically, the project will focus on advanced flood mapping, spatio-temporal analysis, frequency quantification, and data-driven trigger definition. Utilising state-of-the-art flood mapping algorithms, the project will accurately map flood features, such as extent and depth, from historical and near real-time EO data. These algorithms aim to effectively combine multiple data sources, such as multi-sensor imagery, land cover, topography, and population/asset censuses, to improve the accuracy of flood delineation and impact assessment. Time-series analysis techniques will be employed to identify long-term trends and short-term variations in rainfall and flood patterns. The enhanced understanding of flood dynamics will enable the development of a more accurate trigger definition and a more robust flood insurance product. This framework will lead to the development of a data-driven parametric insurance product, triggering payouts based on predefined EO-derived flood parameters. By leveraging large amounts of reliable EO data, insurers can make more informed decisions about underwriting, pricing, and risk management. Creating a repository of standardised and reliable EO data enables (re)insurers to price the risk, paving the way for insurance product offerings. Additionally, the short latency of EO data will enable insurers to make swift payouts after floods. The combination of the selected datasets will enable the development of a versatile product that can be tailored to various flooding scenarios, from localized pluvial floods to extensive riverine floods. This versatility is crucial for comprehensive risk assessment and for adapting insurance coverage to diverse geographical contexts. 
By advancing the frontiers of EO-based flood parametric insurance, this research contributes to the development of innovative evidence-based climate finance strategies by providing a robust and transparent tool that aims to accelerate disaster recovery, promote sustainable development, create more resilient communities, and foster long-term financial stability in the face of increasing flood risks. To further enhance the effectiveness of the proposed approach, future research directions include incorporating additional high-quality data sources, developing advanced data-fusion algorithms, addressing data and meteorological challenges, and collaborating with policymakers, insurers, and communities to co-design and implement effective flood risk management strategies.
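A parametric trigger of the kind described above can be illustrated with a minimal payout rule. The trigger threshold, exhaustion point, and limit below are hypothetical placeholders, not values from the project:

```python
def parametric_payout(flooded_fraction, trigger=0.15, exhaustion=0.60, limit=100_000.0):
    """Linear payout between a trigger and an exhaustion point.

    flooded_fraction: share of the insured area detected as flooded (0-1),
    e.g. derived from an EO-based flood extent map.
    """
    if flooded_fraction <= trigger:
        return 0.0          # below trigger: no payout
    if flooded_fraction >= exhaustion:
        return limit        # at or above exhaustion: full limit paid
    # linear interpolation between trigger and exhaustion
    return limit * (flooded_fraction - trigger) / (exhaustion - trigger)
```

Because the payout depends only on an observable EO-derived parameter rather than on loss adjustment in the field, payouts can be computed as soon as the flood map is available, which is the latency advantage the abstract highlights.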

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Urban Sustainability Index: Leveraging Earth Observation to Benchmark Environmental Performance at the City-Level Worldwide

Authors: Alexander Stepanov, Raghavan Narayanan, Zélie Marçais
Affiliations: European Bank For Reconstruction And Development
Abstract: The presentation will introduce the Urban Sustainability Index (USI), a new city-level index of environmental and economic performance developed by the European Bank for Reconstruction and Development (EBRD) Impact team. Promoting a low-carbon transition stands at the core of the EBRD's strategic priorities, with a focus on sustainable infrastructure investments spanning energy efficiency, transportation, climate resilience, and more. Central to these efforts is the Bank’s commitment to empowering cities within its countries of operation to tackle local environmental challenges and catalyze green investments. In light of the scope and scale of these investments, the EBRD’s Impact team faces the widely shared challenge of capacity constraints, inconsistent methodologies, and subjectivity of local data collection on socio-economic and environmental conditions to guide and monitor impactful investments. Thus, supporting rigorous, cost-effective data collection on local conditions can empower impact investors to better target their interventions, reduce the burden on local stakeholders, and enable comprehensive, comparable performance analyses over time. Similarly to other Multilateral Development Banks (MDBs) active in the urban sustainable infrastructure sector, benchmarking the environmental performance of localities to increase the effectiveness of interventions is particularly relevant in the EBRD’s countries of operations, where cities continue to grapple with challenges such as deteriorating air quality, unchecked urban sprawl, inefficient legacy buildings, inadequate water and waste management, and limited institutional capacities to address these issues effectively. As such, the USI was developed by the EBRD’s Impact team as a composite multi-layer index covering the dimensions of urban environmental asset quality, efficient resource use, climate risks and socioeconomic benefits. Each dimension consists of one or multiple indicators. 
In total, the USI uses 15 primary indicators, of which 11 are city-level and 4 are country-level. These include yearly average concentrations of air pollutants (e.g. PM2.5, NO2, SO2), CO2 emissions, maximum vegetation, frequency of heatwaves, flood risks, average night-time light intensity, total public transport infrastructure availability, and so on. To achieve global coverage and ensure verifiability and transparency of the scores, the Impact team collected publicly available Earth observations (CAMS, Sentinel-5P, MODIS, etc.) and other geospatial data sources (e.g. OpenStreetMap). The index has been calculated annually from 2015 to 2024 for more than 13 thousand localities in 164 countries and will be revised every year. To ensure a consistent treatment of cities despite national variations in defining a city's borders, the index utilises the concept of an Urban Centre as proposed by a consortium of international organisations and adopted by the UN Statistical Commission. Here, urban centre boundaries are determined through the observation of built-up spaces, total population size, and population density rather than official administrative borders. The construction of the USI followed the steps described in the OECD (2008) guide on constructing composite indicators: (1) theoretical framework, (2) selection of variables and data sources, (3) imputation of missing data, (4) multivariate analysis, (5) normalisation, (6) weighting and aggregation, (7) robustness and sensitivity checks. Each step involved choices between different possible methods and parameters, which were made in accordance with best practice in the economic literature and considering data availability, purpose, and scope of the intended analyses. Imputation of missing data was performed solely within each city-specific time series, and all variables were log-normalised and winsorised to correct for outliers. Finally, all indicators were aggregated uniformly across all sub-indicators. 
The robustness of the USI was explored via a sensitivity analysis, ensuring the index scores were (a) robust to alternative weighting schemes that place either more (double) or less (half) weight on any given indicator, and (b) not driven by a single sub-indicator. For this purpose, various alternative specifications of the USI were computed. To assess the similarity of the resulting outcomes with the baseline specifications, both Pearson and Spearman correlation coefficients were considered. Through this novel index, the Impact team of the EBRD offers a new tool for the Bank’s operating teams, impact investors, policymakers, civil society organisations, and academia to (i) identify challenges faced by cities worldwide in a comparative fashion, (ii) target and expand new interventions in sustainable urban development, and (iii) monitor changes in cities' environmental performance over time. The EBRD currently applies the USI to assess the long-term impact of its investments in municipal infrastructure across multiple countries. The USI can be expanded in the future to capture other critical environmental dimensions, e.g. water assets, waste, and wastewater management. The index can have wide-ranging uses for urban planning, biodiversity designs, and providing snapshots of systemic change or market effects. Key words: Urban Sustainability, Infrastructure, Earth Observation, Air Quality, Transportation, Biodiversity, Index Benchmarking, Urban Environment, Multilateral Development Bank, Impact.
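The normalisation and aggregation steps described above (winsorising, log-normalising, uniform weighting) can be sketched as follows. The indicator data are synthetic and the function names are illustrative, not the EBRD's actual code:

```python
import numpy as np

def winsorise(x, lower=5, upper=95):
    """Clip values to the given percentiles to dampen outliers."""
    lo, hi = np.percentile(x, [lower, upper])
    return np.clip(x, lo, hi)

def log_minmax(x):
    """Log-transform (values assumed positive), then scale to [0, 1]."""
    z = np.log(x)
    return (z - z.min()) / (z.max() - z.min())

def composite_index(indicators, weights=None):
    """Weighted mean of normalised indicators; rows are cities.
    With weights=None, all indicators are weighted uniformly."""
    cols = np.column_stack([log_minmax(winsorise(c)) for c in indicators])
    if weights is None:
        weights = np.full(cols.shape[1], 1.0 / cols.shape[1])
    return cols @ weights

# Synthetic city-level indicators for 50 hypothetical cities
rng = np.random.default_rng(1)
pm25 = rng.lognormal(2.5, 0.5, 50)  # placeholder PM2.5 concentrations
co2 = rng.lognormal(4.0, 0.8, 50)   # placeholder CO2 emissions
usi = composite_index([pm25, co2])
```

A sensitivity check of the kind the abstract describes would rerun `composite_index` with doubled or halved weights for each indicator and compare the resulting rankings via Pearson and Spearman correlations.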

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: The MAPME Initiative - A Cross-Institutional Community for Reproducible Geospatial Data Analysis

Authors: Darius Andreas Görgen, Dr. Johannes Schielein
Affiliations: MAPME Initiative, University of Münster, KfW
Earth Observation (EO) and geospatial technologies can help us make informed decisions to allocate public funds responsibly and maximize impact and benefits for our societies and the natural environment. A sustainable uptake of analytical geospatial solutions requires us to base our decisions on accessible and reproducible evidence. To address this, we worked on prototyping open-data projects that illustrate the power and usefulness of EO technologies within development aid projects in the context of the MAps for Planning, Monitoring and Evaluation (MAPME) initiative (www.mapme-initiative.org). It is an open, cross-institutional community of practice focusing on knowledge exchange between organizations in both developing and donor countries. We focus on open standards and software to harness EO data for all phases of the project cycle. A major outcome of this initiative is the R package "mapme.biodiversity" (https://CRAN.R-project.org/package=mapme.biodiversity). This tool streamlines reproducible data analysis within our organizations and beyond by supplying efficient routines to handle a diverse set of geospatial data sources. Because it is free and open source software (FOSS), it serves as a common platform for sharing knowledge between our members, thus significantly reducing duplication of effort. We will show that the software framework can be easily deployed on local machines, on-prem servers, or even in cloud-computing environments. The tool is thus a good fit for conducting reproducible analyses for projects with diverse budgets and skillsets. We will showcase how the framework was successfully used by KfW to produce a comprehensive database containing more than 1,000 protected areas currently supported by the German Development Cooperation (both KfW and GIZ). The database allows decision makers easy access to align conservation efforts with conservation policy targets. 
We also show how the framework can be used to estimate potential impacts of project activities on local ecosystem integrity as required in disclosure reports. Finally, we will share success stories of transferring the methodological approach to researchers from the Global South to harness the framework within their respective institutions. Our presentation will highlight the importance of developing a cross-institutional community approach to building up EO capacities in development cooperation as well as the importance of focusing on closing the (still considerable) gap between EO data providers and decision makers in their daily work.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Transforming Forest Monitoring for Climate Finance and Carbon Conservation in Coffee Landscapes

Authors: Michelle Kalamandeen, Katja Weyhermüller, Johannes Pirker, Merga Diyessa, Girma Ayele, Heiru Sebrala Ahmed
Affiliations: Unique Land Use Gmbh, Farm Africa Ethiopia, Environment, Forest and Climate Change Commission
Effective climate finance strategies demand high-quality data to tackle the interlinked challenges of climate change mitigation and sustainable development. In Ethiopia, coffee-growing regions face increasing pressures from population growth and agricultural expansion, resulting in forest degradation, biodiversity loss, and elevated carbon emissions from deforestation. Ethiopia has successfully unlocked climate finance opportunities to reduce emissions in other subsectors, but has been lacking a scalable methodology to quantify the impact of forest degradation. This study investigates the integration of artificial intelligence (AI) and Earth observation technologies as a scalable and cost-efficient approach to forest monitoring and carbon management for carbon market schemes such as Architecture for REDD+ Transactions and The REDD+ Environmental Excellence Standard (ART-TREES). Leveraging Sentinel-2 satellite imagery and advanced neural network models, we evaluated forest health through various vegetation indices to monitor forest degradation trends from 2020 to 2023, quantify biomass dynamics, and estimate CO2 emissions and sequestration. Results demonstrate that rejuvenated coffee plots showed stabilization or growth in biomass, reflecting the effectiveness of conservation efforts, while unmanaged plots displayed variable outcomes. Post-2021 recovery of coffee agroforestry plots which followed improved management systems significantly boosted carbon sequestration, reaffirming the pivotal role of agroforestry in climate change mitigation. This research underscores the potential of integrating AI with Earth observation to improve the precision and scalability of forest and agriculture monitoring systems. 
Such advancements are essential for generating actionable insights to inform climate finance strategies, catalyzing targeted interventions and enhancing the resilience of the agroforestry sector, thereby contributing to sustainable development and low-carbon pathways in vulnerable landscapes.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Leveraging EO uptake through GDA FFF

Authors: Andreas Walli, Alexander Kreisel
Affiliations: Geoville
The Fast EO Co-financing Facility (FFF) is a cross-cutting activity in the GDA programme that aims to provide rapid support to International Financial Institutions (IFIs) and Official Development Assistance (ODA) entities, with a focus on leveraging co-financing for additional EO-related capacity building and skill transfer or geospatial analysis. The FFF does not have a thematic focus but rather provides support on any topic within a pre-defined time and budget framework, on the condition that the recipient of support can demonstrate capacity for alignment. At this stage, multiple IFIs and ODA entities have been engaged, most prominently the World Bank and the Asian Development Bank, but also the European Bank for Reconstruction and Development, the Inter-American Development Bank, Kreditanstalt für Wiederaufbau (KfW, Germany), the French Development Agency (AFD), and the European Investment Bank. In this session, we want to showcase successful IFI support through the FFF that has led to EO uptake and co-financing, benefitting European industries entering a highly competitive market. These success stories will be accompanied by valuable lessons learned from less successful engagements and support actions, highlighting strategies and markers in engagements for identifying early both the capacity and the willingness to use and acknowledge the benefit of EO. Well over 20 requests for support from various IFIs have been evaluated so far, with many more to come. EBRD is a very recent addition to ESA’s IFI cooperation and partnership, and the FFF lays the pathway to further future collaboration, particularly by supporting their Green Cities network. The World Bank has already proven its willingness to co-finance by providing complementary funds to enlarge the mapping area for seagrass in the Red Sea. With ESA aiming to significantly enlarge the network of cooperating IFIs, the FFF plays a key role in facilitating the first steps through fairly easy entry points. 
Continuous communication and clear management of expectations are key to a successful start of the collaboration, ensuring uptake and willingness to contribute.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Evaluating the Environmental Impact of Sand Dams in Semi-Arid Regions Using Multi-Scale Earth Observation Data

Authors: Dr. Andreas Braun, Dr. Martin Sudmanns, Kibet Nimrod Mandela, Bernhard Ebersbach, Christian Khouri, Niklas Lehr
Affiliations: Eberhard Karls University Tübingen, Paris Lodron University of Salzburg
Increasingly severe water scarcity, driven by climate change and exacerbated in semi-arid regions, threatens ecosystems and communities, particularly those reliant on seasonal water sources. Sand dams—a low-cost, scalable solution for water retention—hold promise as an effective intervention to improve water availability, increase vegetation, and support local resilience (Ryan & Elsner, 2016). However, scientific evaluations of sand dams’ long-term environmental impacts are limited, leaving their applicability uncertain and limiting their visibility and adoption. This study aims to address this knowledge gap by applying an integrated, multi-scale Earth observation (EO) approach to evaluate the impacts of sand dams on local and regional ecosystems across selected African semi-arid areas. By leveraging various EO methodologies, this research assesses the ecological and hydrological impacts of sand dams through a series of analyses designed to capture the nuanced ways in which sand dams interact with their environments over time. Four main analyses form the backbone of this study, each targeting distinct aspects of environmental change across spatial and temporal scales. To isolate the effects of sand dams, we employ a comparative study design in Makueni County, Kenya, where two comparable river catchments were selected: one with several sand dams constructed along a river between 2010 and 2015, and one without. Both catchments share similar river characteristics, hydrological regimes, and climate zones, making them suitable controls for assessing environmental changes specifically attributable to sand dam interventions. This design allows us to distinguish between landscape changes influenced by broader climatic or hydrological trends and those directly resulting from sand dam presence and operation. 
By systematically comparing test and control catchments, we aim to identify the unique contributions of sand dams to ecosystem resilience, water availability, and landscape dynamics. Radar-based detection of ground deformation: We use Sentinel-1 radar data spanning from 2014 to 2025 to detect patterns of ground deformation in areas with and without sand dams. Through an SBAS interferometry approach (Casu et al., 2014), we aim to identify subsidence or uplift patterns, examining their relationship to the hydrological changes associated with sand dams. This analysis evaluates whether sand dams induce measurable land deformation patterns, either as gradual subsidence or cyclic deformation corresponding to seasonal water retention. Additionally, this assessment helps to determine the spatial and temporal characteristics of any deformation, offering insights into the broader impacts of sand dams on soil stability and groundwater retention. Land surface temperature (LST) analysis: Utilizing high-resolution (10m) LST data from the ConstellR mission (Spengler et al., 2024), we examine temperature variations and anomalies across sand dam regions for the year 2020. ConstellR’s thermal infrared imagery provides seasonal snapshots, allowing us to assess temperature changes over 11 acquisitions throughout the year. This analysis focuses on identifying temperature anomalies that may indicate the cooling effect of sand dams and investigating whether these effects are spatially aligned with specific land cover types, such as vegetated areas that benefit from increased moisture retention. We aim to determine whether the presence of sand dams correlates with reduced surface temperatures, particularly during dry seasons. The investigation of spatial patterns in relation to land cover offers insights into how sand dams might moderate local temperature extremes and indirectly support vegetation health by retaining soil moisture. 
NDVI time-series analysis for gradual vegetation change: To monitor the gradual impact of sand dams on vegetation dynamics, we use Normalized Difference Vegetation Index (NDVI) data derived from Landsat imagery. Analyzing these data in Google Earth Engine enables the detection of long-term greening or browning trends (Walper et al., 2022) in dam regions, focusing on distinguishing between pre- and post-construction phases of the dams. This time-series analysis assesses whether sand dams contribute to vegetation recovery or expansion, using vegetation health and coverage as indicators of ecological resilience. We aim to attribute observed vegetation trends directly to the presence and maturity of sand dams, especially in areas that demonstrate increased greening over time. This study also examines short-term vegetation responses following significant rainfall events, allowing us to determine how sand dams influence both immediate and enduring vegetation dynamics. Categorical land cover change analysis via Semantic Data Cubes: We apply semantic EO Data Cubes containing semantically enriched Sentinel-2 and Landsat data to categorize time series of land cover changes over extended periods, capturing shifts in land use and land cover linked to sand dam construction. By classifying and tracking changes in categories such as vegetation, water bodies, and bare soil components, this approach allows us to quantify the broader landscape-scale impacts of sand dams, such as increases in agricultural activity and the establishment of new infrastructure like roads. The semantic EO Data Cube framework facilitates the integration of complementary data, such as digital elevation models, and transferability across regions, enhancing the robustness of the land cover analysis (Sudmanns et al., 2021). This framework allows for automated, scalable assessment and the potential to extend findings across other semi-arid regions. 
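The NDVI trend detection described above reduces to computing the index per acquisition and fitting a slope over time. A minimal sketch of that idea (illustrative only, not the authors' Google Earth Engine workflow; all values are synthetic):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def ndvi_trend(years, series):
    """Least-squares slope of an annual NDVI series:
    positive -> greening, negative -> browning."""
    slope, _intercept = np.polyfit(years, series, 1)
    return slope

# Synthetic post-construction series with gradually rising NIR reflectance
years = np.arange(2010, 2021)
nir = 0.40 + 0.01 * (years - 2010)
red = np.full_like(nir, 0.10)
trend = ndvi_trend(years, ndvi(nir, red))
print(trend > 0)  # True: a greening trend
```

A per-pixel version applies the same fit along the time axis of an image stack rather than to a single averaged series.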
Moreover, this categorical analysis will enable us to investigate land cover changes in relation to sand dam construction dates as well as broader climatic trends, providing a nuanced understanding of sand dams' role in landscape transformation. These four analytical streams work in concert to create a comprehensive picture of sand dams’ environmental impacts, providing valuable insights for ecosystem management and climate adaptation strategies. In addition, our use of diverse EO methodologies underscores the potential for remote sensing technologies to monitor and evaluate the effectiveness of small-scale water retention systems, which are particularly valuable in regions where ground-based observation is challenging or incomplete. This research contributes to the broader understanding of how low-cost interventions can enhance ecological resilience in semi-arid regions. Specifically, it offers a scientific basis for scaling sand dams as a sustainable water management strategy, supporting both humanitarian efforts and local policy initiatives aimed at reducing vulnerability to climate change. By delivering evidence of sand dams’ ecological benefits, this study provides a foundation for further exploration into similar water retention systems and the potential for these interventions to be adapted across diverse environmental and climatic contexts. Our findings will be particularly relevant for policymakers, NGOs, and environmental managers interested in adopting cost-effective and locally impactful solutions to address water scarcity and bolster climate resilience.
Literature:
Casu, F., Elefante, S., Imperatore, P., Zinno, I., Manunta, M., De Luca, C., & Lanari, R. (2014). SBAS-DInSAR Parallel Processing for Deformation Time-Series Computation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(8), 3285–3296. https://doi.org/10.1109/JSTARS.2014.2322671
Ryan, C., & Elsner, P. (2016). The potential for sand dams to increase the adaptive capacity of East African drylands to climate change. Regional Environmental Change, 16(7), 2087–2096. https://doi.org/10.1007/s10113-016-0938-y
Spengler, D., Ibrahim, E., Chamberland, N., Pregel Hoderlein, A., Berhin, J., Zhang, T., & Taymans, M. (2024, March 11). Monitoring land surface temperature from space – constellr HiVE – new perspectives for environmental monitoring. https://doi.org/10.5194/egusphere-egu24-21514
Sudmanns, M., Augustin, H., van der Meer, L., Baraldi, A., & Tiede, D. (2021). The Austrian Semantic EO Data Cube Infrastructure. Remote Sensing, 13(23), 4807. https://doi.org/10.3390/rs13234807
Walper, C., Braun, A., & Hochschild, V. (2022). A Satellite-Based Framework to Investigate the Impact of Sand Dams on Landscapes in Semi-arid Regions. In V. Naddeo, K.-H. Choo, & M. Ksibi (Eds.), Water-Energy-Nexus in the Ecological Transition: Natural-Based Solutions, Advanced Technologies and Best Practices for Environmental Sustainability (pp. 287–290). Springer. https://doi.org/10.1007/978-3-031-00808-5_66

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Building a Worldwide Coastal Monitoring Capability: EO-derived shoreline data for international collaboration against coastal erosion.

Authors: Anne-Laure Beck, Martin Jones, Professor Ivan Haigh, Dr Salvatore
Affiliations: Argans Ltd, University of Southampton, isardSAT, ACRI-ST
The increasing impacts of coastal erosion driven by climate change and human activities present a critical challenge for sustainable coastal management. The global nature of coastal processes requires a collaborative, trans-national effort to understand and mitigate erosion effects. To encourage joined-up coastal management, decision and policy makers require a reliable global capacity for coastline monitoring that is easy to use and provides equitable access to facilitate cross-border collaboration and monitoring. After 5 years of research and development, the ARGANS coastal processing chain helps deliver a series of instantaneous land/sea boundaries derived from Earth observation data, corrected and projected to a reference tidal level such as Mean Sea Level. Most importantly, these data are produced at temporal sampling rates which allow the effective analysis of specific erosion or accretion phenomena related to events (lunar tidal frequencies, and before and after storms or dam releases), not just at yearly or decadal scales. It is a world first. As demonstrated within ESA’s GDA disaster resilience programme in Ghana, high-frequency EO-derived information is essential to uncover the multiple causes of coastal change. Thanks to the numerous images now available per year, it became evident that waves alone have not always been the main cause of erosion; planned protection measures therefore need to be mindful of other factors, and this can only be understood through a repetitive and regular view of the coast. The production of a 30-year MSL shoreline for the UK has been integrated into the British Geological Survey (BGS) Coastal Modelling Environment (CoastalME) to produce numerical simulations supporting multi-hazard analyses under present and future climate change scenarios. However, spatially accurate MSL shorelines rely on additional high-resolution geospatial data that are not always available.
The UK Space Agency, through its Enabling Technology Programme, has supported the continuation and improvement of the ARGANS coastal processing chain, bringing together coastal and oceanography experts to enable shoreline correction without any support from in-situ data. Combining Synthetic Aperture Radar data with modelled tidal tables, the Global Shoreline project investigates the automatic production of EO-derived slopes and modelled tidal levels to replace in-situ measurements in the shoreline processing, which will allow worldwide processing without the need for in-situ coastal information. The Coastal Sea Level Integrator (CSLI) has been developed to automatically compute accurate sea level heights at any selected location around the world’s coastline and for any given time, to feed into the GSL processor. Sea level height at any location or time along the coast arises as a combination of: (1) astronomical tides; (2) storm surges; and (3) waves, especially setup and runup, superimposed on relative mean sea level. The slope from a SAR waterline generator integrates tidal information from the CSLI to match a SAR-derived land/sea boundary with an elevation, producing a coastal digital elevation model. There is now the opportunity to accurately map complete regions such as West Africa to the same degree of temporal and spatial accuracy as enjoyed in the UK and Europe. Such high-resolution (both temporal and spatial) data provide multi-scale shoreline change information for predictive models and digital twin infrastructures, allowing policymakers, planners, and local communities to develop and implement evidence-based strategies for mitigating coastal risks and enhancing resilience.
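The component decomposition used by the CSLI, and the idea of assigning a tidal elevation to each detected waterline, can be written down directly. The sketch below is a hypothetical simplification of that bookkeeping (function names and values are illustrative, not the CSLI implementation; all quantities in metres):

```python
def coastal_sea_level(msl, astronomical_tide, storm_surge, wave_setup, wave_runup):
    """Instantaneous sea level height at a coastal point: the three
    components named in the text, superimposed on relative mean sea level."""
    return msl + astronomical_tide + storm_surge + wave_setup + wave_runup

def beach_slope(height_a, height_b, horizontal_offset):
    """Intertidal slope from two waterlines observed at different tidal
    heights, separated horizontally by horizontal_offset metres (a
    hypothetical simplification of matching SAR waterlines to elevations)."""
    return abs(height_b - height_a) / horizontal_offset

# A storm arriving at high tide
level = coastal_sea_level(msl=0.0, astronomical_tide=2.1, storm_surge=0.6,
                          wave_setup=0.3, wave_runup=0.4)
print(round(level, 2))  # 3.4

# Two waterlines 40 m apart, at 0.5 m and 2.5 m tidal height
print(beach_slope(0.5, 2.5, 40.0))  # 0.05
```

Stacking many such waterline-elevation pairs over time is what yields a coastal digital elevation model.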

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Ol’ Man River - development and growth by decreasing negative impacts

Authors: Kristin Fleischer, Mareyam Belcaid, Elke Kraetzschmar
Affiliations: IABG
The Amazon basin harbours the world’s most biodiverse forest and accounts for about 50% of the remaining rainforest worldwide. Covering about 35% of the South American continent across nine countries, the region hosts further superlatives, such as the world’s longest rivers and the river system with the largest water volume. Beyond that, its importance as a stabilising factor for the world’s climate is undeniable. But this complex and fragile ecosystem is under stark pressure. Although sparsely populated, with a few larger cities and scattered settlements, the living and working environment is expanding. One of the biggest challenges is deforestation in favour of bioeconomic products and services such as mining, logging, agriculture, and livestock, one result of the ever-increasing global demand and hunger for goods. On top of that, the increasing pressure induced by climate change is taking its toll: rainy seasons are shortened, and periods of severe drought accompanied by wildfires are more frequent, to name only a few effects. Despite the ecological richness of the region, local communities often live in poverty and basic needs are unmet. The Global Development Goals should serve as a guiding principle for the development of the region. Connectivity, as well as access to markets and social services, is key for both bioeconomic products and services and the local population. The World Bank is investing to answer one of the main questions: how can access be gained and connecting networks be developed while simultaneously decreasing the negative impact? The WB team is conducting an analysis of infrastructure gaps, assessing their vulnerability to climate risk in order to support bioeconomy value chains and greater opportunities for higher incomes, increased productivity, and economic participation for the people of the Amazon region.
The focus areas are:
• Identification of key infrastructure bottlenecks to productivity growth;
• Identification of connectivity and access challenges and potential solutions to improve access to basic services and improved welfare;
• Design of a sustainable infrastructure development roadmap for transport connectivity, energy access and digital connectivity.
Waterways and related assets such as harbours are a declared alternative to common road networks in the Amazon region. The GDA Transport and Infrastructure team supports the World Bank in its data-driven approach to infrastructure planning by enhancing waterway insights through state-of-the-art analysis of satellite imagery. Only detailed knowledge of the status quo of the hydrological network and its exposure to seasonal and climate change influences makes it possible to tackle the identified connectivity challenges appropriately. In close cooperation with the WB team, the developments focus on the following four topics. (1) Hydrographic inventory: understanding the accessibility of areas by river and the navigability of rivers is essential. Open or governmental data lack the required thematic level of detail and geometric accuracy; information such as network density, river width, sandbanks, or obstacles is widely missing. The inventory is based on Sentinel-1 and Sentinel-2 satellite image data from the dry season. It follows a two-stage approach: in a first step, a remote sensing analysis of the data is conducted, where the classification considers various indices and spectral bands of the sensors. To densify the results, improve the detectability of narrow rivers, and close gaps in the network, advanced technologies such as super-resolution and AI are incorporated in a second step. (2) Water course change: the analysis compares water extents, showing changes between consecutive years from 2019 to 2023.
The analysis of seasonal or drought-induced changes as well as geomorphological variation (e.g. sedimentation) helps to assess their influence on navigability. (3) River flood modelling: helps in assessing flood risks, planning flood management strategies for assets such as harbours, and understanding the hydraulic behaviour of the river system under different flood conditions and its effects on navigability. (4) Boat detection: determines the number of boats traversing a specific stretch of river, providing valuable insights into the connectivity of access routes. By quantifying boat traffic, key waterways and harbours that serve as critical transportation corridors are identified. Additionally, the size of boats provides information on river navigability. As the Amazon region is vast, it is clear that manual approaches, the use of VHR image data, or even field missions can only be conducted locally and are impossible to apply at larger scales; otherwise, costs and manpower exceed every benefit. EO data from ESA’s Sentinel fleet combined with automatic approaches for information extraction unfold their full potential here. The approach provides an opportunity for the WB team to cover large areas and close the current information gaps on the way to sustainable and targeted infrastructure planning along the hydrological network. The aim is to build on the developments showcased in the areas of Colombian Leticia and Brazilian Tabatinga, and to transfer them to other development hot spots that demand sustainable interactions throughout the whole Amazon region.
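The first-stage water classification mentioned under the hydrographic inventory typically rests on spectral indices; a common choice is the NDWI computed from Sentinel-2's green and NIR bands. A minimal sketch (the index itself is standard, but the zero threshold here is an illustrative default, not the project's tuned value):

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI from green (Sentinel-2 B3) and NIR (B8) reflectance."""
    return (green - nir) / (green + nir)

def water_mask(green, nir, threshold=0.0):
    """First-stage water classification: pixels with NDWI above the
    threshold are labelled water."""
    return ndwi(green, nir) > threshold

# Three synthetic pixels: open water, vegetation, bare soil
green = np.array([0.08, 0.12, 0.06])
nir = np.array([0.02, 0.30, 0.25])
print(water_mask(green, nir))  # [ True False False]
```

The second-stage densification described in the abstract (super-resolution, AI-based gap closing) would then operate on masks of this kind to recover narrow river reaches.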

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: EO data facilitates the global solar energy transition through scaling up solutions and collaboration

Authors: Fang Fang
Affiliations: NEO BV
The global energy transition has reached a pivotal juncture, with some nations advancing rapidly in adopting renewable energy sources such as solar and wind, while others continue to face significant challenges. For many countries, the lack of reliable, actionable data and insights remains a primary barrier to accelerating the transition. Earth Observation (EO) data presents a transformative solution, enabling stakeholders to overcome these barriers through informed decision-making and data-driven strategies. In collaboration with the World Bank, NEO has developed scalable EO-based solutions that provide comprehensive insights on rooftop solar generation potential for energy transition efforts in developing countries. These solutions have been successfully implemented in 46 cities across 26 countries, delivering tangible impacts for local governments, private sectors, and communities, and facilitating the energy transition.
Securing Resources for Renewable Energy Projects
One of the most impactful applications of EO data is in supporting the financing of renewable energy projects. In Lagos, Nigeria, NEO conducted a pilot study assessing the rooftop solar energy potential of the city center. The results revealed significant capacity for solar energy generation, providing the local government with the confidence and data necessary to secure loans from the World Bank. These funds are now being used to further integrate solar energy into the city’s energy infrastructure, propelling Lagos toward its renewable energy goals. Similarly, in Sint Maarten, local teams used NEO’s assessment results to identify suitable locations for initiating solar panel installations. The insights enabled stakeholders to prioritize and launch a pilot project on a public building, paving the way for a nationwide solar energy rollout.
Empowering Local Stakeholders
A key element of NEO’s success has been its focus on stakeholder engagement and capacity building.
Collaborations with local governments and institutions ensure that EO solutions are not only easily replicable across different areas but also aligned with the specific needs and priorities of the regions they serve. Feedback and ground validation from local stakeholders further refine the outputs, enhancing their practical application and supporting the operations of local bank teams and stakeholders. Moreover, all data generated from NEO’s work is made publicly accessible through a user-friendly platform that allows users to view, filter, interact with, and download datasets easily. To date, it has garnered over 40,000 views and 3,000 likes, reflecting widespread appreciation from diverse sectors. Government organizations have used the platform for planning and prioritizing renewable energy projects, while private companies, such as solar panel installers, have leveraged it for market insights. Educational institutions have also benefited, with university students and researchers utilizing the data for further in-depth studies and analyses.
Scalable and Adaptive EO Solutions
NEO’s approach to scaling EO solutions lies in its use of a deep learning (DL) model, which is iteratively refined to improve its predictive capabilities. This master model is trained on diverse satellite imagery and continuously updated using new data from completed projects. Each project feeds back into the model, enhancing its ability to generate high-quality output for any region. For example, in Dominica, where data availability has historically been limited, NEO collaborated with local planning departments and the World Bank to integrate high-quality Digital Surface Models and Digital Terrain Models from local departments into its workflow. By combining EO data with localized datasets, the model was able to deliver precise and context-specific outputs, ensuring greater relevance and utility for stakeholders.
Capacity Building and Training
To further scale its impact and speed up adoption by local stakeholders, NEO has partnered with the World Bank to deliver training programs aimed at building local capacity in using EO data for solar energy applications. For example, in South Africa, 77 participants enrolled in a training program that introduced them to the fundamentals of remote sensing, satellite imagery processing and tools, and the applications of EO data in energy transition efforts. The program received overwhelmingly positive feedback, with participants expressing enthusiasm for the subject and a strong desire for more in-depth, onsite training. By empowering local stakeholders with the knowledge and skills to work with EO data, these training initiatives are fostering greater adoption of EO-driven solutions and accelerating progress toward renewable energy goals.
Continuous Monitoring for Sustainable Development
As energy transition efforts progress, the demand for continuous monitoring services is growing. EO data offers an effective solution for tracking development activities such as residential construction, solar panel installations, and roof quality assessments. For instance, NEO has often been approached by bank teams and local stakeholders asking about the use of time-series imagery to provide insightful updates on the status and progress of renewable energy projects. This capability ensures that stakeholders remain informed about ongoing developments, enabling timely interventions and informed decision-making. By bridging information gaps, EO-driven monitoring services support the efficient and sustainable rollout of renewable energy initiatives.
Driving Impactful Change
The use of EO data in energy transition projects has demonstrated clear benefits, not only in accelerating renewable energy adoption but also in creating opportunities for the EO industry.
By streamlining EO applications for international development, NEO, joining forces with its partners, is driving sustainable development outcomes worldwide. Through a combination of scalable solutions, stakeholder engagement, capacity building, and continuous monitoring, NEO’s work exemplifies how EO data can be harnessed to address real-world challenges, especially in developing areas. By providing actionable insights and fostering collaboration, these efforts are paving the way for impactful change on the ground, ensuring that no country is left behind in the global energy transition.
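Rooftop solar assessments of the kind described above generally combine detected roof area with local irradiance in a simple yield formula. The sketch below is a generic back-of-envelope estimate under assumed parameter defaults, not NEO's calibrated deep-learning model:

```python
def annual_pv_yield_kwh(roof_area_m2, irradiance_kwh_per_m2_year,
                        usable_fraction=0.6, panel_efficiency=0.20,
                        performance_ratio=0.75):
    """Annual rooftop PV yield estimate: usable roof area times incident
    solar energy, derated by panel efficiency and system losses.
    All parameter defaults are illustrative assumptions."""
    return (roof_area_m2 * usable_fraction * irradiance_kwh_per_m2_year
            * panel_efficiency * performance_ratio)

# A 100 m2 roof in a high-irradiance city (~1800 kWh/m2/year)
print(round(annual_pv_yield_kwh(100, 1800)))  # 16200
```

City-scale assessments apply the same arithmetic per building, with roof areas extracted from imagery and irradiance from solar resource datasets.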

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Supporting fragility analyses with generative AI: the GEN4GEO approach to geospatial data exploration in natural language

Authors: Marcello Cinque, Chiara Francalanci, Paolo Giacomazzi, Uli Hab, Paolo Ravanelli, Stefano Rosiello, Michela Corvino
Affiliations: Univ. Of Naples Federico II, Uli Hab, Politecnico di Milano, Cherrydata srl, Critiware, ESA
The goal of the GEN4GEO project is to design a system that uses generative artificial intelligence (gen AI) to enable the exploration and visualisation of geospatial data through natural language interaction (either text or speech) between users and systems. The system will be designed with a focus on the fragility of countries or other geographical areas of interest, including environmental, societal, economic, and political implications. We believe that understanding the implications of these phenomena could be greatly helped by enabling domain experts to perform an easier, direct, and interactive exploration of large multi-source geospatial datasets and to self-define their own high-level indicators (e.g. fragility) based on this direct uptake of the practical implications of natural phenomena in their domain of interest. Data exploration is an approach to data analysis aimed at extracting insights incrementally, starting from no or very limited knowledge of the available data. In principle, data exploration should be an agile, incremental, and creative process, but, with current dashboards, end users need substantial technical knowledge to explore data without technical support. As a result, data exploration is a team effort and cannot be done by end users alone. Overall, it is slow, difficult, and often unimaginative, and dashboards are never used to their full potential. Overcoming these limitations would significantly reduce the barriers to the exploitation of geospatial data in a variety of domains, generating market opportunities for many EO applications. From this perspective, our use case has practical implications and potential applications in a broad cross-section of industries. The objective of GEN4GEO is to address the limitations of current technology by leveraging the ability of generative AI to enable data exploration with an interaction based on natural language.
This conversational interaction is supported by so-called “foundation models,” that is, generative neural networks trained on very large datasets to answer a broad range of user questions. This generality makes foundation models suitable for data exploration, as their broad knowledge makes them flexible and rather context-independent. However, they have a very limited ability to handle data (especially quantitative data) and run analytics, in terms of bounded data size as well as low relevance and accuracy of results. The main innovation of GEN4GEO is to design a data exploration engine that exploits generative AI for natural language interaction but does not rely on foundation models to run the analytics. Intuitively, our idea is to ask generative AI to provide the software code that answers the user’s question and then run it with the GEN4GEO engine to obtain results. For example, if the user asks for the average level of rainfall in the Shan region in Myanmar, we ask generative AI to provide the code of the corresponding SQL query and then use the GEN4GEO engine to run the query on the weather database to obtain the actual result. The user question will also be used to ask generative AI to select the best visualisation of the result in the system dashboard (e.g. a density map rather than a plot). This removes the limitations on data size, improves the accuracy of results, and broadens the range of analytics that can be executed and visualised by the system. In turn, this reduces the knowledge barriers of data exploration, enabling non-technical users to understand new data with limited or no help from technical users. It can make data exploration a faster and more creative process.
To verify this assumption, we have performed a preliminary set of cross-industry interviews (2 system integrators, 1 business intelligence expert, 1 company operating in the maritime transportation industry, 1 pharma company, 1 agency specialised in geo-marketing, 1 data provider). Most interviews involved high-level decision makers and managers. We have noted a general positive interest in the idea. However, we have received a recurring comment: running queries on a dataset in natural language is not enough; for GEN4GEO to represent a truly innovative application, queries should be accompanied by the ability to self-define high-level concepts (such as “fragility”) as a function of available data and then use these high-level concepts in subsequent conversational interactions with the dashboard. This observation reinforces the idea that language models are not the solution per se, but a tool that should be embedded in a system with data aggregation and analytical capabilities, consistent with GEN4GEO’s idea of data exploration. The exploration of EO and non-EO fragility-related data represents the ideal testbed for GEN4GEO. Exploring this type of data from a geographical standpoint can highlight areas and time patterns where the tangible effects of fragility are or will be most impactful. The literature on fragility explains how impactful fragility-related phenomena are related to the concentration of critical events in certain areas and their different change patterns over time. This geographical understanding of data has practical implications from an economic and political point of view, with cross-industry applications. An easy and interactive exploration of data can favour broader adoption and usage. The project kicked off in October 2024, with a first release of the dashboard due in April 2025.
Two design thinking sessions will be held in December 2024 and January 2025 to design the dashboard, leading to a design of the main dashboard functionalities tailored to the needs of a broad cross-section of potential users. The dashboard will be demonstrated with a fragility dataset comprising relevant multi-dimensional indicators of fragility based on EO and complementary non-EO data for 4 developing countries over the 2018-2024 period. The team will analyse the role played by the dashboard in highlighting important fragility patterns and their environmental, economic, social, and security impact. Acknowledgements – This research activity is carried out under the programme Open Call for Proposals for EO Innovation, Contract n. 4000145918/24/I-DT-bgh, and funded by the European Space Agency.
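The core GEN4GEO pattern (let the model write the query, but execute it outside the model) can be sketched in a few lines. Here the foundation-model call is replaced by a hard-coded stub (`fake_llm` and the table layout are hypothetical), and an in-memory SQLite database stands in for the weather database from the rainfall example:

```python
import sqlite3

def answer_question(question, generate_sql, conn):
    """Ask a generative model for SQL, then run the query with a local
    engine, so result size and accuracy do not depend on the model."""
    sql = generate_sql(question)
    return conn.execute(sql).fetchall()

def fake_llm(question):
    """Stub standing in for a foundation-model call (hypothetical)."""
    return "SELECT AVG(rainfall_mm) FROM weather WHERE region = 'Shan'"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weather (region TEXT, rainfall_mm REAL)")
conn.executemany("INSERT INTO weather VALUES (?, ?)",
                 [("Shan", 120.0), ("Shan", 80.0), ("Kachin", 200.0)])
result = answer_question("average rainfall in the Shan region of Myanmar",
                         fake_llm, conn)
print(result)  # [(100.0,)]
```

In the real system a second model call would additionally pick the visualisation (e.g. density map vs. plot) for the returned result.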

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Graph-Based Machine Learning Models and Earth Observation Data for Social Good

Authors: Seán Ó Héir, Beāte Desmitniece, Dr Gary Watmough, Dr Sohan Seth
Affiliations: University Of Edinburgh
Introduction: Effective socioeconomic measurement is crucial for informed decision-making across areas such as health programming [1], poverty mitigation [2], and urban planning [3]. In low- and middle-income countries (LMICs), measuring socio-economic and demographic changes can be challenging due to outdated or insufficiently detailed data. Traditional survey methods, such as censuses, are costly, requiring significant time, money, and personnel, and therefore have low temporal resolution. This cost can be a barrier for LMICs, leading to less frequent surveys [4], sometimes with gaps of more than 15 years [5], which fail to reflect current realities and obscure socio-economic disparities [6]. Household surveys, such as the Demographic and Health Survey (DHS), the Multiple Indicator Cluster Survey (MICS), and those conducted by country offices for a variety of purposes (income, agricultural) provide measurements during the intercensal period that can be invaluable in assessing changes in socio-economic indicators. The use of household survey data alongside Earth Observation (EO) offers opportunities for providing higher-frequency estimates at a fine spatial resolution for a variety of socioeconomic outcomes such as poverty [7], population [8], and health risks [6]. Nonetheless, these surveys are usually sparse both spatially and temporally, requiring appropriate spatio-temporal computational tools to process them effectively. Graph-based ML: Machine learning (ML) has been increasingly used alongside EO to monitor socio-economic changes, e.g., in population estimation [9][10][8], poverty mapping [11], supporting COVID-19 responses in slum communities [12], and monitoring urban transformation processes [6]. One of the challenges of existing ML-based methods is the incorporation of spatial context, i.e., information from the surrounding areas that can help us understand or predict characteristics within the area of interest.
Several studies have illustrated how factors such as growth in manufacturing and service amenities [13], growth in the number of well-being facilities and housing density [14], and changes in land development [15] cause positive or negative population density changes in the areas surrounding the examined region. Graphs offer a natural representation of geographical data: by representing geographic units as nodes and spatial relationships as edges that connect them, graph-based ML models, e.g., Graph Neural Networks (GNNs), can capture complex spatial dependencies, both short- and long-range, and aggregate information from neighbouring regions, enabling more accurate predictions of socioeconomic dynamics. GNNs allow capturing non-linear relationships between covariates and outcomes [16], using explainability tools (such as GNNExplainer [17]) to make decisions more transparent, integrating multiple data sources effectively, accumulating information at multiple resolutions [18] (e.g., admin levels), combining spatial and temporal information seamlessly [19], and using multiple types of relationships to define connections [20] (e.g., neighbouring geographical units or units connected by a certain road). These properties make GNNs well-suited to geography-focused datasets, and they have been applied in various studies, such as predicting the local culture of neighbourhoods [21], spatio-temporal land cover mapping [22] and road surface extraction from satellite imagery [23]; they also excel at relational learning, allowing intricate interconnections between different regions to be modelled [24]. Example: We explore the use of graph-based models in the context of social good, e.g., estimating population density for monitoring sustainable development goals in several sub-Saharan African countries using EO data. Graphs are constructed with nodes representing administrative units, and edges based on geographical adjacency and transportation linkage. 
Node attributes include geospatial features from Sentinel-2 land use data, Landsat data, nighttime light levels, building footprints, and road density from OpenStreetMap data. We assess the performance of our graph-based approach by comparing against baseline models; evaluating its ability to generalise to geographically distant areas by training/testing on province splits; determining geospatial feature importance by employing permutation feature importance; and quantifying prediction uncertainty. Conclusion: The use of graph-based machine learning models, particularly GNNs, offers significant advancements in understanding and predicting socioeconomic dynamics in LMICs. By leveraging high-resolution EO data and exploiting spatial relationships between admin levels, these models can potentially enhance the accuracy of socioeconomic measurements. Our exploration of population density estimation in sub-Saharan Africa, using diverse geospatial datasets, demonstrates the potential of GNNs to include spatial context in socioeconomic monitoring from space. Additionally, this approach provides valuable insights through explainability tools, paving the way for more informed decision-making in areas such as health, urban planning, and poverty mitigation. Although we have focused on the case of population estimation, the adaptability of a GNN approach suggests its applicability across various socioeconomic indicators, offering a flexible, data-driven tool for improved policy and planning in data-scarce environments. References: [1] Saman Khalatbari-Soltani et al. “Importance of collecting data on socioeconomic determinants from the early stage of the COVID-19 outbreak onwards”. In: J Epidemiol Community Health 74.8 (2020), pp. 620–623. [2] Imran Sharif Chaudhry, Shahnawaz Malik, et al. “The Impact of Socioeconomic and Demographic Variables on Poverty: A Village Study.” In: Lahore Journal of Economics 14.1 (2009). [3] Devis Tuia et al. 
“Socio-economic data analysis with scan statistics and self-organizing maps”. In: Computational Science and Its Applications–ICCSA 2008: International Conference, Perugia, Italy, June 30–July 3, 2008, Proceedings, Part I 8. Springer. 2008, pp. 52–64. [4] Deborah L Balk et al. “Determining global population distribution: methods, applications and data”. In: Advances in parasitology 62 (2006), pp. 119–156. [5] NA Wardrop et al. “Spatially disaggregated population estimates in the absence of national population and housing census data”. In: Proceedings of the National Academy of Sciences 115.14 (2018), pp. 3529–3537. [6] Paloma Merodio Gómez et al. “Earth observations and statistics: Unlocking sociodemographic knowledge through the power of satellite images”. In: Sustainability 13.22 (2021), p. 12640. [7] Gary Watmough and Charlotte LJ Marcinko. “EO for Poverty: Developing Metrics to Support Decision Making Using Earth Observation”. In: Comprehensive Remote Sensing: Volume 9 Remote Sensing Applications. Elsevier, 2024, pp. 1–22. [8] Isaac Neal et al. “Census-independent population estimation using representation learning”. In: Scientific Reports 12.1 (2022), p. 5185. [9] Caleb Robinson, Fred Hohman, and Bistra Dilkina. “A deep learning approach for population estimation from satellite imagery”. In: Proceedings of the 1st ACM SIGSPATIAL Workshop on Geospatial Humanities. 2017, pp. 47–54. [10] Wenjie Hu et al. “Mapping Missing Population in Rural India”. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. ACM. Jan. 2019. [11] Jessica E Steele et al. “Mapping poverty using mobile phone and satellite data”. In: Journal of The Royal Society Interface 14.127 (2017), p. 20160690. [12] Patricia Lustosa Brito et al. “The spatial dimension of COVID-19: The potential of earth observation data in support of slum communities with evidence from Brazil”. In: ISPRS International Journal of Geo-Information 9.9 (2020), p. 557. 
[13] Diego Firmino Costa da Silva, J Paul Elhorst, and Raul da Mota Silveira Neto. “Urban and rural population growth in a spatial panel of municipalities”. In: Regional Studies 51.6 (2017), pp. 894–908. [14] Yisheng Peng et al. “The relationship between urban population density distribution and land use in Guangzhou, China: A spatial spillover perspective”. In: International Journal of Environmental Research and Public Health 18.22 (2021), p. 12160. [15] Qingmeng Tong and Feng Qiu. “Population growth and land development: Investigating the bi-directional interactions”. In: Ecological Economics 169 (2020), p. 106505. [16] Rongzhe Wei et al. “Understanding non-linearity in graph neural networks from the bayesian-inference perspective”. In: Advances in Neural Information Processing Systems 35 (2022), pp. 34024–34038. [17] Zhitao Ying et al. “Gnnexplainer: Generating explanations for graph neural networks”. In: Advances in neural information processing systems 32 (2019). [18] Luca Pasa, Nicolò Navarin, and Alessandro Sperduti. “Multiresolution reservoir graph neural network”. In: IEEE Transactions on Neural Networks and Learning Systems 33.6 (2021), pp. 2642–2653. [19] Truong Son Hy et al. “Temporal multiresolution graph neural networks for epidemic prediction”. In: Workshop on Healthcare AI and COVID-19. PMLR. 2022, pp. 21–32. [20] Guohao Li et al. “Deepergcn: All you need to train deeper gcns”. In: arXiv preprint arXiv:2006.07739 (2020). [21] Thiago H Silva and Daniel Silver. “Using graph neural networks to predict local culture”. In: Environment and Planning B: Urban Analytics and City Science (2024), p. 23998083241262053. [22] Domen Kavran et al. “Graph neural network-based method of spatiotemporal land cover mapping using satellite imagery”. In: Sensors 23.14 (2023), p. 6648. [23] Jingjing Yan, Shunping Ji, and Yao Wei. “A combination of convolutional and graph neural networks for regularized road surface extraction”. 
In: IEEE transactions on geoscience and remote sensing 60 (2022), pp. 1–13. [24] Luana Ruiz, Fernando Gama, and Alejandro Ribeiro. “Graph neural networks: Architectures, stability, and transferability”. In: Proceedings of the IEEE 109.5 (2021), pp. 660–682.
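As a minimal sketch of the graph construction and neighbourhood aggregation the abstract describes (administrative units as nodes, geographical adjacency as edges, geospatial covariates as node features), a single GCN-style propagation step can be written as follows. All sizes, edges, features and weights below are illustrative placeholders, not the study's actual model.

```python
import numpy as np

# Toy graph: 4 admin units; edges encode geographical adjacency.
# Real node features would come from Sentinel-2, nighttime lights,
# building footprints, OpenStreetMap road density, etc.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n = 4

# Adjacency with self-loops, symmetrically normalised:
# A_hat = D^{-1/2} (A + I) D^{-1/2}
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A += np.eye(n)
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))

X = np.random.default_rng(0).normal(size=(n, 5))  # 5 covariates per unit
W = np.random.default_rng(1).normal(size=(5, 8))  # weights (untrained here)

# One GCN layer: each unit mixes its neighbours' features before the transform
H = np.maximum(A_hat @ X @ W, 0.0)
print(H.shape)  # (4, 8)
```

In a trained model, stacking several such layers lets information propagate beyond immediate neighbours, which is how longer-range spatial context enters the prediction.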
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: EO-Driven Solutions for Energy Access in International Development: Bridging Gaps with ESA’s GDA Clean Energy Activity

Authors: Malin Sophie Fischer, Filipe Girbal Brandão
Affiliations: Vida.place Gmbh, GMV Innovating Solutions S.L.
Energy access is a catalyst for sustainable development, powering progress in economic growth, healthcare, and education. Despite its central role and the global commitment to Sustainable Development Goal 7 (SDG-7) - ensuring affordable, reliable, sustainable, and modern energy for all by 2030 - significant gaps remain. As of 2022, approximately 685 million people lack electricity, with four in five of them residing in Sub-Saharan Africa, predominantly in rural areas where electrification deficits have even grown over the past decade (IEA 2024). Extending national grids to these remote communities is often financially unsustainable, highlighting the need for decentralised, renewable-powered solutions like mini-grids and solar home systems. Yet, many of these communities are missing from existing public or even governmental maps, presenting a critical challenge: How can we efficiently and affordably identify and characterise underserved populations on a large scale as a foundation for electrification planning? Earth Observation (EO), with satellites monitoring our planet at an unprecedented rate and level of detail, combined with geospatial technologies, offers a transformative solution for bridging data gaps in energy access planning. By leveraging satellite data and advanced analytics, we can efficiently locate and characterise underserved communities, providing critical insights to guide sustainable electrification efforts from site selection to financing to implementation. This session will delve into how EO and geospatial tools are being applied in practice, including use cases developed under the European Space Agency’s (ESA) Global Development Assistance (GDA) Clean Energy activity. 
These examples, created in collaboration with international financing institutions across Sub-Saharan Africa and Asia, demonstrate how innovative geospatial solutions can support data-driven energy planning, enabling targeted and impactful interventions to accelerate progress toward universal energy access. The main use case to be presented showcases how advanced Earth Observation (EO) and geospatial analytics can generate detailed cropland and irrigation maps to support energy planning for rural communities. This approach drastically reduces the need for costly on-ground surveys by leveraging ESA’s free and open Sentinel satellite data. A pilot was conducted in collaboration with the World Bank’s Energy Sector Management Assistance Program (ESMAP), covering three diverse areas in Madagascar, where energy access remains critically low: two-thirds of the population, or approximately 18 million people, lack access to electricity. Using data from ESA’s Sentinel-2 satellites, high-resolution cropland maps were developed with sufficient detail to detect even smallholder farms. Sentinel-2’s 10-meter spatial resolution and frequent revisit times enable precise detection of fields, accounting for seasonal variations. Specifically, image segmentation was combined with a machine learning algorithm (Random Forest) trained on remotely collected data and a multitude of metrics derived from a Sentinel-2 time series, ensuring an accurate classification without the need for physical field visits (overall accuracy: 91%). The analysis extended to identifying irrigated fields, which are particularly relevant for energy planning as these areas can benefit from electricity-powered technologies like irrigation pumps. 
By combining a vegetation index derived from Sentinel-2 imagery with backscatter data from Sentinel-1, which can penetrate clouds and measure soil moisture, detailed irrigation maps at 10-meter resolution were produced with pixel- as well as object-based classifications. Due to a lack of on-ground data, the accuracy could only be assessed visually - a common constraint when working in data-scarce regions. The resulting spatial data products provide an up-to-date view of agricultural activities around rural settlements, which are often overlooked in traditional mapping efforts and electrification planning. They complement global products such as ESA’s WorldCereal and WorldCover maps as well as IFPRI's crop-specific yet low-resolution MapSPAM data, offering localised, actionable insights into farming practices. Ultimately, making sense of complex data products is crucial for decision-makers and practitioners, especially those without a technical background. Accordingly, the created map products were connected to settlements, which serve as the fundamental units of electrification planning. To achieve this, a reliable base map was developed, automatically identifying settlements of all sizes using a machine learning clustering algorithm (DBSCAN) to group buildings and define settlement boundaries in the areas of interest. The cropland and irrigated land around each settlement were then quantified to assess surrounding agricultural activity. By combining this with the number of buildings and regional crop type data from MapSPAM, decision-makers gain actionable insights into household energy demand and the potential for agriculture-related productive energy uses. To ensure accessibility and usability, these outputs have been integrated into the GDA Clean Energy Platform. This intuitive, map-based online tool presents all relevant data layers and provides detailed settlement profiles. 
Users can filter settlements by key criteria, such as size and surrounding cropland share, to identify areas of interest quickly. This enables stakeholders to evaluate productive energy uses and prioritise electrification efforts effectively, whether at the regional or settlement level. The flexibility and global applicability of this approach, supported by ESA’s Sentinel satellite imagery, make it a scalable solution for energy planning worldwide. Regular updates to crop maps and potential thematic extensions, such as crop type or seasonality analysis, further enhance the platform's utility, paving the way for precise, large-scale electrification strategies. In addition to the Madagascar use case presented above, selected examples of applying EO and geospatial analytics in electrification planning can briefly be presented. These include other currently ongoing use cases from ESA’s GDA Clean Energy activity involving the main authors’ organisations, with relevant results expected by the LPS symposium. Examples include off-grid least-cost electrification planning on islands (use case in Micronesia) as well as grid extension planning in Papua New Guinea in collaboration with the Asian Development Bank. In both cases, buildings are to be detected from very-high-resolution satellite imagery obtained from ESA’s Third Party Missions, as a foundation for more advanced analyses including least-cost electrification and climate risk assessments. In summary, this session highlights the transformative role of Earth Observation (EO) and geospatial technologies in addressing energy access challenges through practical use cases from ESA’s GDA Clean Energy activity. The featured example from Madagascar, in collaboration with the World Bank, demonstrates how satellite data and machine learning generate detailed cropland and irrigation maps linked to rural settlements, providing actionable insights for energy planning and productive uses of energy. 
Potential additional examples, including off-grid electrification in Micronesia and grid extension planning in Papua New Guinea, can showcase the scalability and adaptability of these solutions. By leveraging ESA’s openly available satellite data, this session demonstrates not only the critical role these resources play in addressing energy access challenges but also their versatility in enabling diverse, impactful applications across varying contexts, showcasing the far-reaching potential of EO technologies for sustainable development.
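The settlement-delineation step mentioned above (grouping building locations with DBSCAN to define settlement boundaries) can be sketched with a minimal, self-contained implementation; the building centroids and parameter values below are synthetic stand-ins for real footprint data.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns -1 for noise, 0..k-1 for cluster ids."""
    n = len(points)
    labels = np.full(n, -1)
    # brute-force pairwise distances; real pipelines would use a spatial index
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbours = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbours[i]) < min_pts:
            continue
        # i is an unvisited core point: grow a new cluster from it
        visited[i] = True
        labels[i] = cluster
        stack = [i]
        while stack:
            j = stack.pop()
            for k in neighbours[j]:
                if labels[k] == -1:          # border or unclaimed point
                    labels[k] = cluster
                if not visited[k]:
                    visited[k] = True
                    if len(neighbours[k]) >= min_pts:
                        stack.append(k)      # k is also core: keep expanding
        cluster += 1
    return labels

# Two toy "settlements" of building centroids, plus one isolated building
rng = np.random.default_rng(42)
a = rng.normal([0.0, 0.0], 0.05, size=(20, 2))
b = rng.normal([1.0, 1.0], 0.05, size=(15, 2))
pts = np.vstack([a, b, [[5.0, 5.0]]])

labels = dbscan(pts, eps=0.3, min_pts=4)
print(len(set(labels.tolist()) - {-1}))  # 2 settlements; the outlier stays noise
```

Settlement boundaries would then be derived from each cluster, e.g. as convex or concave hulls around the member buildings.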
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Wastewater Treatment Plant Impact Assessment Based on Earth Observation Data in the Panama Bay

Authors: Ioan Daniel Serban, Sorin Constantin, Georgiana Anghelin, Marius Budileanu, Zoltan Bartalis
Affiliations: Terrasigna, European Space Agency
The Juan Díaz Wastewater Treatment Plant (WWTP), located near Panama City at the mouth of the Juan Díaz River, is the main facility dedicated to treating residual waters before they are released into the bay. The first module of the WWTP was completed in 2015, while the second started operations in August 2022. The main objective of the analysis was to use EO data to assess and detect any changes in water quality parameters that might have been influenced by these investments. A comprehensive analysis of the impact on the physical, chemical and biological status of water bodies in the neighboring area within Panama Bay was performed. Given the objective, the following indicators were considered of prime interest: chlorophyll-a concentration (Chla) and derived products (e.g. number of algal blooms), Sea Surface Temperature (SST), dissolved oxygen (DO), nutrient concentrations (nitrate and phosphate) and the fraction of organic and mineral particles. Multiple sources of data were used, from products available through the Copernicus Marine Service to satellite images collected by the Sentinel-3 mission. For several indicators, a long time period was considered, from 1993 to the present, to highlight the overall changes in the region. The main conclusions were drawn from the analysis of anomalies from the climatological mean at monthly and daily time scales, for each parameter of interest. The results suggest a tendency of improvement in water quality in the most recent years. However, this overlaps with a long-term degradation trend that may now be starting to reverse thanks to the clean-up actions in the region. This work was performed within the framework of the GDA FFF (Global Development Assistance – Fast EO Co-Financing Facility), in partnership with the European Space Agency (ESA) and the European Investment Bank (EIB).
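The anomaly analysis described above can be sketched in miniature: for each calendar month, a climatological mean is computed over the full record and subtracted from every observation of that month. The series below is synthetic; the study itself used Copernicus Marine Service products and Sentinel-3 data.

```python
import numpy as np

# Illustrative monthly series (years x 12 months) with a seasonal cycle,
# standing in for e.g. chlorophyll-a concentration
rng = np.random.default_rng(0)
years, months = 30, 12
seasonal = 2.0 + np.sin(2 * np.pi * np.arange(months) / months)
series = seasonal[None, :] + 0.1 * rng.normal(size=(years, months))

# Climatological mean for each calendar month, then the anomaly
climatology = series.mean(axis=0)           # shape (12,)
anomalies = series - climatology[None, :]   # departures from the monthly mean

# By construction, anomalies average to zero for every calendar month,
# so any remaining structure reflects interannual change, not seasonality
print(bool(np.allclose(anomalies.mean(axis=0), 0.0)))  # True
```

Trends fitted to such anomaly series (rather than the raw values) are what support statements like "recent improvement overlapping a long-term degradation trend".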
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Upscaling the water use efficiency analyses - GDA Agriculture pilot case Indonesia

#cloud-native

Authors: Alen Berta, Viktor Porvaznik, Juan Suarez Beltran, Stefano Marra, Alessandro Marin
Affiliations: CGI Deutschland, CGI Italy, GMV
Agriculture, as the largest consumer of water worldwide, faces a critical challenge in improving irrigation efficiency to ensure food security and sustainable farming practices. Currently, more than 50% of ground and potable water is wasted due to inefficient irrigation systems, an issue exacerbated by the growing impacts of climate change, including more frequent and severe droughts. This inefficiency threatens food production and livelihoods for millions of people, necessitating robust solutions to optimize water usage and enhance irrigation management. The GDA Agriculture project aims to tackle this issue by deploying the ESA Sen-ET algorithm, enriched with global EO-based biomass products and fully automated and integrated into the CGI Insula platform. This cloud-native platform integrates EO data, Geographic Information Systems (GIS), and advanced analytics to provide a cutting-edge solution for analyzing water use efficiency and daily evapotranspiration. Leveraging Sentinel-2 and Sentinel-3 data, along with other EO datasets, the project identifies problematic areas, evaluates irrigation system performance, and provides actionable insights to optimize water use. As such, the project supports the Asian Development Bank in a related project on enhancing dryland farming systems in Indonesia, but it can be used globally as it does not rely on local data. Local data (crop areas/crop types) can be uploaded into the Insula platform for post-processing, depending on the user's need for granularity. The CGI Insula platform delivers significant benefits to end-users, including farmers, policymakers, and funding organizations. Firstly, it provides near-real-time monitoring and analysis of water usage efficiency, enabling farmers to make timely adjustments to their irrigation practices and mitigate the risk of water scarcity. 
The platform also supports the identification of areas that require additional irrigation or where existing systems are underperforming, allowing for targeted interventions and resource allocation. This targeted approach maximizes the effective use of water resources, improving agricultural productivity and fostering sustainability. By integrating EO data, GIS, and advanced analytics, the project provides a robust solution for optimizing water usage and improving agricultural productivity. The benefits for end-users are manifold, including near-real-time monitoring and targeted interventions. This operational implementation not only enhances food security and water sustainability but also supports the overall resilience and prosperity of agricultural communities.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Zoom In – A Cascading Solar Potential Approach

Authors: Elke Kraetzschmar, Kristin Fleischer
Affiliations: IABG mbH
Access to energy is identified as one of the basic needs for a decent life with better opportunities (SDG 7). Whereas in Europe the main initiatives concentrate on implementing the European Green Deal and related national policies, providing access to energy is still ongoing in other parts of the world and is often handled in a more pragmatic way. Access to continuous and reliable energy as a guarantee for economic growth remains a challenge in many fast-growing and rapidly densifying urban agglomerations (Africa, SE Asia). With regard to well-designed energy infrastructure, cities are often under-managed, and understanding the distribution grid is crucial. Private initiatives and investors provide access to electricity, so the urban fringe is often intermingled with off-grid energy units such as mini-grids, frequently run on diesel generators. Few houses have photovoltaic (PV) systems installed on their rooftops. Air quality is critical, space is limited, and open suburban regions convert to densely populated places within a few years. In this common setting, International Financial Institutions (IFIs) engage in supporting the transition towards sustainable solutions to fulfil SDG 7, serving the varying needs within urban agglomerations and in rural areas. EO data can act as the overarching element by providing a better understanding of regional patterns and urban dynamics, and thus help tailor the financial support accordingly. The focus is on finding the best-fitting, affordable and sustainable solutions, be it solar rooftop installations, hydropower, wind energy, or biogas. Within the Global Development Assistance project on Clean Energy (GDA-CE), the team sketched multi-scale approaches and linked these to sites of varying extent as demanded by the WB projects. Key questions to define the analysis approach are: • What area size needs to be covered and what is the best-fitting scale of analysis? 
• What is the purpose of the analysis and the respective user group, and how high is the willingness to pay? • Which input and analysis data need to be considered? Analysis of these cardinal questions often reveals that the well-known and very detailed rooftop-based solar potential analysis can be over the top or less purposeful, and even a waste of money. To avoid a misalignment between effort and suitability of results, the team introduced five scales of analysis serving different decision-making levels and local user groups. 1) Global scale: low- to medium-resolution global information layers for high-level decision makers. Usually used to understand the pan-continental situation or to conduct a country comparison based on long-term average data. A classic example is the Global Solar Atlas. 2) National scale: Whereas national solutions in European countries follow bottom-up approaches thanks to a rich local data environment, developing countries mainly lack detailed information layers, forcing the set-up of alternative top-down approaches for a first country-wide solar potential analysis. The advantage lies in the significantly lower effort required, both in terms of budget and manpower. The GDA-CE team developed and conducted a solar potential analysis at the national level for Armenia. The high-resolution analysis benefited from ESA’s Sentinel-2 imagery, providing a sufficient retrospective timeline, and from the most recent terrain data. The level of detail of the results goes beyond that of the commonly used Global Solar Atlas. 3) Regional scale: While still relying on open EO and geospatial data of high resolution, the number of input information layers for the analysis increases, and so does the thematic level of detail. Additionally, regional specificities as well as user data are considered. The team is currently starting to implement this approach, discussing the most urgently needed regions in West Africa with the WB partner. 
4) Local scale: At the local scale at the latest, the solar potential analysis enters the VHR world, as the level of detail switches to the building footprint level. The GDA team conducted a solar rooftop analysis based on VHR stereo data for Yerevan, the capital of Armenia, highlighting common challenges when working with spaceborne data. The results prove sufficient for the first stage of dimensioning potential investments. When characterising urban structures regarding their suitability for rooftop installations, a generic understanding of building orientation, types and sizes, distribution, and specific rooftop characteristics (obstacles, age, sub-rooftop level) is of interest. 5) Implementation scale: Detailed aerial flight planning is considered far too costly and is rather replaced by local drone flights once investment planning reaches the engineering level (statics). Sub-city or even building-level analysis is linked to on-site visits. Here, the team has already conducted detailed planning, including engineering- and construction-specific skills, in Germany. The presentation aims to provide a wrap-up of the reasonability of multi-scale analysis within different project planning phases and user perspectives. The different approaches were showcased in multiple locations (cities and rural areas). This supports the engagement of the IFIs: being aware of the limitations of the regional scale versus the benefits of a most recent situational picture, linked to the timeline, and understanding the contextual options builds the base for a detailed trade-off analysis. Identifying the needed and most suitable scale of analysis is the foundation for the subsequent selection of data and analysis methodology. The choice directly implies a certain financial scheme necessary on the IFI side (driven by data costs and a high ratio of manual work). 
Depending on the size of the location of interest, costs can explode rapidly when choosing the supposedly best solution while lacking knowledge of cost-efficient alternatives. The developed decision tree aims to guide IFIs and users in tackling their needs in the best-fitting, and where appropriate pragmatic, manner.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Geospatial AI integrated with Space-based measurements to Model Future Wind Energy Potential

Authors: Gopal Erinjippurath, Michael Sierks, Tristan
Affiliations: Sust Global
Renewable energy investors worldwide are increasingly focused on quantifying the impacts of climate change on their wind energy generation assets. Accurate projections of future wind speed patterns are critical for improving the modeling of wind energy production during the prospecting of wind farm sites, financial planning for new project developments, and estimating future capital yields from operational wind energy generation sites. In this work, we present a method for training and applying Geospatial AI models designed to provide high-resolution, reduced-bias projections of future wind characteristics, including wind speeds and wind project energy generation, at any location on the globe. Our approach integrates data from ESA missions such as Aeolus-1 and Sentinel-2, along with ground-based wind speed characterization datasets, including the Copernicus Regional Reanalysis for Europe (CERRA), ECMWF ReAnalysis v5 (ERA5), and the NREL Wind Toolkit. Additionally, we leverage NASA's NEX-GDDP dataset, which consists of bias-corrected climate scenario projections derived from General Circulation Model (GCM) runs conducted under the Coupled Model Intercomparison Project Phase 6 (CMIP6). We detail our methodology for collecting and qualifying ground truth datasets, combining space-derived and environmental reanalysis datasets. Furthermore, we explore various Geospatial AI model architectures that enable flexible learning representations of land surface influences on wind speed. We benchmark performance for regional generalizability across inland, coastal and offshore locations, and characterize performance against in situ measurements of wind characteristics at wind energy generation sites. We evaluate scenarios such as tropical and extratropical cyclones, which limit the statistical performance of such models, and present a novel approach to quantifying uncertainty in predictive performance under such acute physical hazard genesis scenarios. 
Finally, we demonstrate example workflows where these Geospatial AI models are deployed in commercial contracting and institutional investor settings. These workflows allow renewable energy investors and project operators to assess the impacts of climate change on wind characteristics and wind energy capacity planning. We showcase how renewable energy finance teams can better align current and new generation capacity with future energy demand through representative examples from our commercial engagements in the UK, Europe and the US.
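The abstract refers to bias-corrected projections (NEX-GDDP) and reduced-bias wind forecasts without specifying the correction step. One standard technique for this class of problem is empirical quantile mapping, sketched below on synthetic Weibull-distributed wind speeds; it is an illustrative stand-in, not the authors' actual method.

```python
import numpy as np

def quantile_map(model_hist, obs, model_future):
    """Empirical quantile mapping: adjust model values so that the model's
    historical distribution matches the observed distribution."""
    q = np.linspace(0.0, 1.0, 101)
    hist_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs, q)
    # locate each future value's quantile in the historical model
    # distribution, then map that quantile onto the observed distribution
    ranks = np.interp(model_future, hist_q, q)
    return np.interp(ranks, q, obs_q)

# Synthetic wind speeds (m/s): the "model" systematically underestimates
rng = np.random.default_rng(1)
obs = rng.weibull(2.0, 5000) * 8.0           # pseudo-observations
model_hist = rng.weibull(2.0, 5000) * 6.0    # historical model run (biased low)
model_future = rng.weibull(2.0, 2000) * 6.5  # future scenario, same bias

corrected = quantile_map(model_hist, obs, model_future)
print(corrected.mean() > model_future.mean())  # True: bias shifted toward obs
```

Changes the model projects for the future (here, the shift from scale 6.0 to 6.5) are preserved as quantile shifts, while the systematic offset against observations is removed.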
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Impact evaluation of irrigation schemes in Africa using Earth observation data

Authors: Oliver Mundy, Dr. Athur Mabiso, Dr. Emanuele Zucchini, Dr. Cristina Chiarella, Yu Dong, Rakhat
Affiliations: IFAD
The provision of irrigation is a crucial adaptation strategy against climate change, particularly in light of the variability in seasonal patterns and rainfall. In many parts of Africa, it is a vital component in ensuring food security. This session will examine a range of irrigation schemes funded by the International Fund for Agricultural Development (IFAD), including small drip irrigation schemes in Cabo Verde and medium-sized and large schemes in Ethiopia and Madagascar. The utilisation of Earth observation (EO) data for the monitoring of irrigation schemes, which are frequently situated in remote rural regions with inadequate road networks, can yield profound insights for international finance institutions such as IFAD. However, this approach is not yet a standard component of the impact evaluation of rural development programmes. This session proposes the implementation of a comprehensive approach to collecting geo-referenced field data in regular project monitoring and evaluation, focused in particular on mapping water sources and pipe and canal pathways, as well as delineating command areas and irrigated areas. The session will present a range of approaches and metrics for the monitoring and evaluation of different types of irrigation schemes, with a view to detecting the full extent of change and estimating the level of change using a range of EO-based output variables, including land cover/land use maps, crop maps, and vegetation and water indices (e.g. EVI, NDVI and NDWI). Given the variability in seasons, crops, and irrigation systems, a range of approaches and EO indicators is required. This session presents a change matrix for different types of irrigation schemes. Based on anticipated behavioural changes among farmers, the potential changes that could be detected from space are described, and the most suitable EO variables for measuring these changes are selected. 
Furthermore, the approaches delineate methodologies for defining time series analysis (before-after analysis on the same plot) and for making comparisons with a control area.
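The vegetation and water indices mentioned in this abstract reduce to simple band arithmetic. A minimal NumPy sketch (the reflectance values are made up for illustration, not taken from the session):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """McFeeters NDWI for open water: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / np.clip(green + nir, 1e-6, None)

# Toy reflectance values for two pixels: vegetated vs. open water.
nir = np.array([0.40, 0.05])
red = np.array([0.10, 0.04])
green = np.array([0.12, 0.10])

print(ndvi(nir, red))    # vegetated pixel shows high NDVI
print(ndwi(green, nir))  # water pixel shows positive NDWI
```

Dense vegetation drives NDVI toward +1 (high NIR, low red), while open water drives NDWI positive (low NIR); thresholding these over time supports the before-after and control-area comparisons described above.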
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Revolutionizing Country Performance Assessment: Integrating EO/OSINT Data in a Machine Learning Model for Fragility Assessment

Authors: Valerio Botteghelli, Annalaura Di Federico, Adriano Benedetti Michelangeli, Chiara Francalanci, Annekatrin Metz-Marconcini, Alix Leboulanger, Anne-Lynn Dudenhoefer, Koen Van Rossum, Koen De Vos
Affiliations: e-GEOS, Cherrydata, DLR, Janes, Hensoldt Analytics, Vito
One of the outcomes of the GDA Fragility, Conflict and Security initiative was the design and implementation of a methodology introducing Earth Observation and open-source derived information to complement statistics-based methodologies for assessing fragility contexts, implemented in cooperation with International Financing Institutions. The GDA Fragility initiative designed, implemented and tested a proof of concept to enhance the understanding of a country’s fragility through the development of innovative indicators, contributing to a better understanding of the cohesion and convergence of drivers of fragility and resilience and the identification of their roles. The completed activity included collection of data and analysis in over 12 developing member countries (DMCs) that the ADB categorized as Group A and B, covering an observation period from 2017 to 2022. To assess country maturity, 108 indicators were categorized into economic, social, and political dimensions. These indicators were then pre-processed, ingested, cleaned, normalized, and rescaled to ensure homogeneity and comparability. Using the k-means clustering method, countries were grouped into two clusters based on similar indicator values and trends, both separately for each dimension and collectively. An aggregate country performance indicator was developed, akin to the traditional composite country performance rating, by assigning weights that maximize correlation with the traditional index. The resulting correlation was high, suggesting that the newly introduced geospatial indicators can provide early signals for decision-making in international financial institutions and on official development assistance. Over the coming months, the team will work on the completeness of the set of indicators and will define a more general process for selecting and weighting indicators.
In-depth case study analyses will be conducted on selected countries, with the goal of taking full advantage of the greater geographical and temporal granularity of the quantitative indicators (both EO and non-EO).
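The pipeline described above (rescale indicators, cluster countries with k-means, combine indicators into a weighted aggregate score) can be sketched with NumPy. The indicator matrix below is synthetic, standing in for the 108 real indicators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic matrix: 12 countries x 5 indicators (economic/social/political mix).
X = rng.normal(size=(12, 5))
X[:6] += 2.0  # first six countries score systematically higher

# Min-max rescale each indicator to [0, 1] for comparability.
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def kmeans(data, k=2, iters=50, seed=0):
    """Plain k-means: assign each row to its nearest centroid, then
    recompute centroids as cluster means, and repeat."""
    r = np.random.default_rng(seed)
    centroids = data[r.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = data[labels == j].mean(axis=0)
    return labels

labels = kmeans(Xn)

# Aggregate performance score: weighted sum of normalized indicators
# (equal weights here; the initiative tunes weights to maximize
# correlation with the traditional composite rating).
weights = np.full(Xn.shape[1], 1 / Xn.shape[1])
score = Xn @ weights
print(labels, np.round(score, 2))
```

The real methodology fits the weights against the traditional composite country performance rating rather than using equal weights; this sketch only shows the mechanics of normalization, clustering, and aggregation.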
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: EO supporting strategic planning of industrial-scale biogas and bio-methane production

Authors: Jonas Franke, Levente Papp, Kristin Fleischer, Elke Krätzschmar
Affiliations: Remote Sensing Solutions (RSS), Industrieanlagen-Betriebsgesellschaft mbH (IABG)
This abstract explores the partnership between the European Space Agency’s (ESA) Global Development Assistance (GDA) programme and the World Bank, which leverages Earth Observation (EO) technologies to unlock the potential for biogas and biomethane production in Bangladesh. The country is advancing its clean energy ambitions under the Paris Agreement, with a strong focus on reducing greenhouse gas emissions and scaling up biogas production. Despite substantial feedstock resources, its biogas sector remains underdeveloped. Through satellite data integration, the collaboration optimizes feedstock sourcing for rural biogas production and identifies methane emission hotspots at landfills to assess the potential of gas recovery systems. For assessing biomethane potential from landfills, a five-year time series of Sentinel-5P satellite data has been instrumental in identifying areas with recurring methane emissions. Within the identified hotspot areas, high-resolution GHGSat data were used to confirm and quantify methane emissions at a local level, supporting the development of methane-recovery projects at landfills. Another key innovation is the use of EO data to guide the scaling of biogas production from agri-waste. By combining satellite imagery with land use, climate, and socio-economic data, this approach enables precise spatial modelling of feedstock sourcing. Mapping areas where feedstock can be sourced sustainably minimises land-use competition and negative ecological impacts. The spatial modelling also optimises biogas production by prioritising feedstock sourcing in proximity to energy demand, transport networks, and existing gas infrastructure. This ensures that biogas production is not only economically viable but also aligned with local energy needs, reducing the costs and emissions associated with feedstock transport.
This approach guided strategic planning for industrial-scale biogas and biomethane production at country scale, not only addressing Bangladesh’s energy needs but also contributing to global methane mitigation efforts in line with the Global Methane Pledge. By utilizing EO data for precise spatial modelling and assessing the economic viability of biogas-to-biomethane production, the initiative helps reduce Bangladesh’s reliance on imported natural gas, supporting a sustainable, renewable energy market. This approach provides a model for other developing nations striving for both economic growth and environmental sustainability.
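The spatial modelling described above amounts to a multi-criteria weighted overlay of raster layers. A toy NumPy sketch — the layer names, values, and weights are hypothetical, not taken from the project:

```python
import numpy as np

# Toy raster layers on a common grid, values normalized to [0, 1].
feedstock_density = np.array([[0.9, 0.2], [0.6, 0.8]])
distance_to_grid  = np.array([[0.1, 0.9], [0.4, 0.2]])  # 1 = far from infrastructure
land_use_conflict = np.array([[0.0, 0.7], [0.3, 0.1]])  # 1 = high land-use competition

weights = {"feedstock": 0.5, "proximity": 0.3, "conflict": 0.2}

# Weighted overlay: reward feedstock availability, penalize distance
# to infrastructure and land-use competition.
suitability = (
    weights["feedstock"] * feedstock_density
    + weights["proximity"] * (1 - distance_to_grid)
    + weights["conflict"] * (1 - land_use_conflict)
)
best = np.unravel_index(suitability.argmax(), suitability.shape)
print(suitability.round(2), best)
```

In practice each layer would be derived from EO products (land cover, crop residue estimates) and ancillary data (grid maps, road networks), but the prioritization logic — a weighted sum over co-registered rasters — is the same.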
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Democratizing High-Resolution Earth Observation: Multi-Image Super-Resolution for Development Applications in Urban Asia

Authors: Sebastian Mueller, Prof Konrad Schindler, Dr Yohan Iddawela, Mr. Aditya Retnanto, Mr. Son Le
Affiliations: Asian Development Bank, ETH Zurich
High-resolution satellite imagery is essential for environmental monitoring, urban planning, and disaster management. However, the high cost of acquiring such data limits its accessibility, especially for applications requiring extensive time-series analysis. Governments and researchers often need months or years of data to track trends, compounding costs to unsustainable levels. To address these challenges, the Asian Development Bank (ADB) proposes to develop an open-source deep learning model to upscale freely available Sentinel-2 imagery to high-resolution equivalents, offering an affordable and scalable solution for Earth Observation (EO) applications. This project introduces an innovative multi-image super-resolution framework to enhance Sentinel-2 imagery from its native 10 m resolution to approximately 5 m and 2.5 m for the red, green, blue, and near-infrared (NIR) spectral bands. Leveraging deep-learning models, this approach lowers the barriers to accessing high-resolution EO data while enabling critical time-series analysis across large areas. The project delivers key innovations in advancing multi-image super-resolution, addressing urban challenges in Asia, and evaluating impacts on downstream applications:
1. Deep learning for multi-image super-resolution: While deep learning has significantly advanced single-image super-resolution, multi-image super-resolution has traditionally relied on conventional image fusion techniques. This project advances the emerging field of deep learning for multi-image super-resolution. By leveraging the additional information from multiple observations of the same scene and the enhanced capacity of deep-learning models, it aims to achieve improved super-resolution performance.
2. Application in urban Asia: Rapid urbanisation in Asia has heightened the need for high-resolution EO data to monitor land use, urban sprawl, and environmental degradation. Many countries cannot afford very high-resolution (VHR) data, and existing super-resolution models have not been designed or validated for the needs of Asian cities. This project addresses both gaps.
3. Impact on downstream applications: The project evaluates the utility of super-resolved imagery for downstream tasks, focusing on land-use and land-cover (LULC) classification. Automated LULC classification results using super-resolved images will be compared against Sentinel-2 and VHR imagery (~1–2 m GSD).
4. Benchmarking and reproducibility: The project benchmarks multiple super-resolution AI models on a standardised test set, ensuring robust, reproducible results and meaningful comparisons of different approaches for urban applications.
The methodology involves collecting and preprocessing satellite imagery, training and validating super-resolution models, and applying the outputs to land-use classification in Hanoi, Vietnam:
1. Data collection and preprocessing: VHR imagery from multiple sensors (WorldView, SPOT, Pléiades Neo) is co-registered with multiple Sentinel-2 revisits. The data are preprocessed to ensure radiometric and geometric consistency.
2. Models: AI models for multi-image super-resolution, including ResNets, GAN-based approaches (e.g. SRGANs, ESRGANs), and Transformer-based architectures, are evaluated for their ability to generate high-resolution outputs.
3. Validation framework: Super-resolved outputs are validated against VHR reference data using metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and mean squared error (MSE).
4. LULC classification: Human experts provide LULC labels for Hanoi. A classification model trained on super-resolved, Sentinel-2, and VHR images will assess the impact of super-resolution on classification accuracy.
AI-powered super-resolution bridges the gap between the growing demand for precise EO data and the high cost of VHR imagery. By enhancing freely available Sentinel-2 imagery, this project provides a cost-effective solution that democratises access to high-resolution EO data. It enables applications such as creating granular land-use maps and detecting changes in building outlines, particularly in regions with limited access to high-resolution imagery. This work also addresses a critical gap in the literature by evaluating the performance of deep-learning-based super-resolution in urban settings and its impact on downstream tasks. By openly sharing datasets and methodologies, the project fosters international collaboration and enables the global research community to advance EO applications.
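Two of the validation metrics named in the abstract, PSNR and MSE, fit in a few lines of NumPy (SSIM needs a windowed implementation, e.g. scikit-image's `structural_similarity`). A random tile stands in for real VHR data here:

```python
import numpy as np

def mse(ref: np.ndarray, est: np.ndarray) -> float:
    """Mean squared error between reference and estimate."""
    return float(np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2))

def psnr(ref: np.ndarray, est: np.ndarray, data_range: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    err = mse(ref, est)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / err)

rng = np.random.default_rng(42)
vhr = rng.random((64, 64))                                 # stand-in VHR reference tile
sr = np.clip(vhr + rng.normal(0, 0.05, vhr.shape), 0, 1)   # simulated SR output

print(f"MSE:  {mse(vhr, sr):.5f}")
print(f"PSNR: {psnr(vhr, sr):.2f} dB")
```

For an image scaled to [0, 1], a constant error of 0.1 gives an MSE of 0.01 and hence a PSNR of 20 dB, which is how the two metrics relate.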
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: GPI – Grassland Production Index

Authors: Montaine Foch, Patryk Jaskula
Affiliations: Airbus
The Grassland Production Index (GPI), developed by Airbus, is an innovative service designed specifically for the European agricultural insurance sector. This satellite-based solution enables insurers to create precise insurance products that protect cattle breeders against economic losses caused by drought. Each month, the index is compared with the historical average and with the record-maximum and record-minimum years of grassland production, at a local (though not individual) scale. When the index drops below an agreed threshold, insurers compensate all insured breeders in the affected local province (within the conditions defined in their respective contracts), without any paperwork or on-site expert inspection. Using imagery from the MODIS and Sentinel-3 satellites (the latter part of the European Copernicus programme), the GPI allows insurance companies to accurately assess the impact of climatic conditions on vegetation and to calculate compensation based on scientific data. Unlike traditional agricultural damage assessment methods, this index offers a transparent and data-driven approach, enabling insurers to provide fairer and faster compensation, relying on precise satellite measurements rather than subjective estimations. The primary goal is to secure European farmers' income against climate-related risks while allowing insurers to manage their own risks more efficiently and scientifically.
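The threshold-triggered payout described above follows the standard parametric (index) insurance pattern. A minimal sketch — the trigger and exit values are illustrative, not Airbus's actual contract terms:

```python
def payout_fraction(index: float, trigger: float, exit_level: float) -> float:
    """Linear parametric payout: 0 at or above the trigger, 100% at or
    below the exit level, linearly interpolated in between.

    `index` is the seasonal grassland production index for a local area,
    expressed relative to the historical average (1.0 == average year).
    """
    if index >= trigger:
        return 0.0
    if index <= exit_level:
        return 1.0
    return (trigger - index) / (trigger - exit_level)

# Illustrative contract: payouts start below 90% of the historical
# average and reach the full insured sum at 50%.
for idx in (1.05, 0.90, 0.70, 0.50):
    print(idx, round(payout_fraction(idx, trigger=0.90, exit_level=0.50), 2))
```

Because the payout depends only on the satellite-derived index, every insured breeder in the same local province receives the same fraction of their insured sum, which is what removes the need for on-site inspection.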
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Analyzing Gender Dynamics for Monitoring of Artisanal Mining Activities Using Remote Sensing in Ghana’s Ashanti Region

Authors: Diana West, Edwina Anderson, Lindsey Bonsu, Bashara Abubakari, Foster Mensah, Edith Kwablah, Jacob Abramowitz
Affiliations: NASA SERVIR and University of Alabama in Huntsville, CERSGIS, University of Twente
Artisanal and small-scale gold mining (ASGM) accounts for approximately 35% of Ghana’s total gold output, employing over one million people directly. Despite this significant contribution, ASGM perpetuates stark gender inequalities. Women, who represent over 40% of the ASGM workforce in the Ashanti Region, are predominantly confined to low-paying, labor-intensive roles such as ore hauling and gold panning, earning 50–70% less than men, who dominate higher-value mining and processing activities. The SERVIR program is a partnership between NASA and the US Agency for International Development, with geospatial services for climate adaptation implemented through local hub partners. SERVIR West Africa has an active Monitoring of Artisanal Mining (Galamsey) service based at the Centre for Remote Sensing and Geographic Information Services (CERSGIS) in Ghana, which offers a geospatial platform designed to track ASGM activities. This study captures the efforts of the service team to integrate a gender perspective into the geospatial service through an extensive analysis of the gender dynamics surrounding ASGM. The gender analysis conducted for the Monitoring of Artisanal Gold Mining (Galamsey) service combined insights from geospatial analyses, interviews, and surveys conducted with 300 respondents across 10 ASGM communities to uncover the socio-economic and environmental dimensions of gender disparities in the sector. It reveals significant structural barriers to resource control: only 18% of women miners have access to land ownership or legal mining licenses, compared to 67% of men. Geospatial monitoring highlights severe environmental degradation in these areas, including deforestation and mercury contamination, which disproportionately affect women tasked with securing water and food for households.
Health impacts are stark, with over 60% of women reporting issues like respiratory conditions and reproductive challenges, exacerbated by inadequate access to occupational health services. To address these inequities, we propose a transformative, gender-responsive framework that leverages geospatial tools for monitoring and resource allocation. Coupled with community-led and capacity-building initiatives, this approach aims to enhance women’s representation in decision-making and promote equitable, sustainable ASGM practices. By foregrounding the intersection of gender, geospatial technology, and environmental sustainability, this research offers actionable insights for policymakers, practitioners, and researchers committed to driving inclusive development in resource-dependent economies.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Monitoring Carbon Stocks Using Satellite Data: Global and Local Approaches

Authors: Francesco Amato, Valerio Pisacane, Renato Aurigemma, Jose Antonio Lopez Ferreiro, Fabiana Ravellino, Giovanni Giacco, Mauro Manente, Marco Focone
Affiliations: University Of Naples "Federico II", Euro.soft srl, Earth Sensing srl, Latitudo 40 srl
Carbon markets, which are becoming increasingly important in the global fight against climate change, require the development of reliable and transparent monitoring mechanisms. However, despite the growing significance of this market, several challenges remain, such as the difficulty in accurately measuring the amount of carbon sequestered, particularly in remote or hard-to-reach areas. Local surveys are costly and logistically complex, and there is no fully automated system for calculating carbon credits, limiting the ability to scale solutions globally. Furthermore, the lack of transparency between credit buyers and sequestration sources, along with the absence of secure mechanisms to prevent double counting of credits, exacerbates the problem, especially given the unregulated nature of these intangible assets. This work highlights the potential of employing Earth Observation data for the Monitoring, Reporting & Verification of carbon projects within a carbon marketplace. Two effective methodologies have been proposed to estimate carbon stocks, crucial indicators of a vegetation ecosystem's carbon sequestration capacity, initially estimated as above-ground biomass (AGB) and then converted into carbon stocks using empirical rules. The first method, "ReUse: REgressive Unet for Carbon Storage Estimation," employs deep learning to estimate global carbon sequestered by vegetation. By utilizing biomass (AGB) data from the European Space Agency's Climate Change Initiative Biomass project, along with a time series of Sentinel-2 images, the model predicts carbon sequestration for each pixel through a regressive U-Net network. Incorporating Sentinel-1 satellite radar images and Digital Elevation Models enhances the model, enabling a more precise estimation of global carbon stocks. This tool offers quick estimates even in challenging conditions, such as after fires or in hard-to-reach areas.
The second method, "Forest Carbon Stock Estimation Using Machine Learning Ensembles: Active Sampling Strategies for Model Transfer," focuses on localized regions rather than providing global estimates. This approach employs active sampling and satellite imagery to identify the most relevant data points for these specific cases. Using Shannon’s entropy for sample selection, it innovatively transfers a calibrated regression model across different areas through an active-learning approach, starting with calibration in a reference region. Various sampling methods and regression strategies have been tested to reduce fieldwork while ensuring the accuracy of the estimates. This leads to a smaller set of data points for collecting new ground truth information, thereby minimizing the need for physical measurements. Experimental results demonstrate that combining regression ensembles with active learning significantly reduces field sampling, while still producing carbon stock estimates comparable to conventional methods. Together, the two approaches offer complementary solutions for carbon stock estimation: a global method for remote or rapidly changing areas, and a more focused, localized method that minimizes field sampling. Lastly, the concept of a carbon marketplace, AICarbonHub, is introduced. This marketplace addresses the compensation needs of businesses and individuals, while also supporting property owners in securing funds for the upkeep and enhancement of green spaces. By integrating the methodologies described above, the marketplace would enable continuous monitoring and verification of carbon storage, ensuring the credibility and accuracy of the carbon credits being traded.
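The entropy-driven sample selection described for the second method can be illustrated with a small sketch: field plots where an ensemble of regression models disagrees most (highest Shannon entropy over binned predictions) are the most informative ones to ground-truth. The plot names, ensemble values, and binning scheme below are invented for illustration:

```python
import math
from collections import Counter

def shannon_entropy(values, bins=5, lo=0.0, hi=1.0):
    """Entropy of ensemble predictions after binning; high entropy means
    high disagreement among ensemble members, i.e. an informative sample."""
    width = (hi - lo) / bins
    labels = [min(int((v - lo) / width), bins - 1) for v in values]
    counts = Counter(labels)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical ensemble predictions (normalized AGB) for three candidate
# plots from five regression models.
candidates = {
    "plot_a": [0.52, 0.53, 0.51, 0.52, 0.54],  # members agree -> low entropy
    "plot_b": [0.10, 0.45, 0.80, 0.30, 0.65],  # members disagree -> high entropy
    "plot_c": [0.20, 0.22, 0.41, 0.19, 0.23],
}

# Active sampling: field-visit the plot the ensemble is least sure about.
ranked = sorted(candidates, key=lambda k: shannon_entropy(candidates[k]), reverse=True)
print(ranked[0])  # -> plot_b
```

Iterating this loop (measure the selected plot, retrain the ensemble, re-rank the remaining candidates) is what lets the calibrated model transfer to a new region with far fewer physical measurements.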
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: B.03.06 - POSTER - Climate, Environment, and Human Health

It is well known that many communicable and non-communicable diseases have a seasonal component. For example, flu and the common cold tend to increase in autumn and winter, whilst vector-borne diseases like dengue and West Nile virus tend to peak in late summer, when the vectors are at their most abundant. Under monsoon regimes, many diseases peak during the rainy season. Hay fever, spring-time allergies, and other respiratory disorders also show seasonality related to the abundance of pollens and other allergens in the air. Environmental conditions in water, air, and land play a role in regulating the presence, absence, and abundance of pathogenic organisms or material in the environment, as well as of the agents of disease transmission such as mosquitoes or birds. For example, air temperature and relative humidity are linked to flu outbreaks, and water quality in coastal and inland water bodies impacts outbreaks of many water-borne diseases, such as cholera and other diarrhoeal diseases associated with pathogenic bacteria that occur in water. Superimposed on this seasonality are inter-annual variabilities that are difficult to predict. Furthermore, in the event of natural disasters such as floods or droughts, there are often dramatic increases in environmentally linked diseases, related to the breakdown of infrastructure and sanitation conditions.

Climate change has exacerbated issues related to human health through shifting patterns in environmental conditions, changes in the frequency and magnitude of extreme events such as marine heat waves and flooding, and impacts on water quality. Such changes have also led to geographic shifts of vector-borne diseases, as vectors move into areas that warm enough to become suitable for them, or retreat from areas that become too hot in summer. The length of the seasons during which diseases may occur can also change as winters become shorter. There are growing reports of tropical diseases at higher latitudes as environmental conditions become favourable for the survival and growth of pathogenic organisms.

Climate science has long recognised the need for monitoring Essential Climate Variables (ECVs) in a consistent and sustained manner at the global scale and with high spatial and temporal resolution. Earth observation via satellites has an important role to play in creating long-term time series of satellite-based ECVs over land, ocean, atmosphere and the cryosphere, as demonstrated, for example, through the Climate Change Initiative of the European Space Agency. However, the applications of satellite data for investigating shifting patterns in environmentally-related diseases remain under-exploited. This session is open to contributions on all aspects of investigation into the links between climate and human health, including but not limited to, trends in changing patterns of disease outbreaks associated with climate change; use of artificial intelligence and big data to understand disease outbreaks and spreading; integration of satellite data with epidemiological data to understand disease patterns and outbreaks; and models for predicting and mapping health risks.

This session will also address critical research gaps in the use of Earth Observation (EO) data to study health impacts, recognizing the importance of integrating diverse data sources, ensuring equitable representation of various populations, expanding geographic scope, improving air pollution monitoring, and understanding gaps in healthcare delivery. By addressing these gaps, we aim to enhance the utility of EO data in promoting health equity and improving health outcomes globally.

The United Nations (UN) defines climate change as the long-term shift in average temperatures and weather patterns caused by natural and anthropogenic processes. Since the 1800s, human emissions and activities have been the main causes of climate change, mainly due to the release of carbon dioxide and other greenhouse gases into the atmosphere. The United Nations Framework Convention on Climate Change (UNFCCC) is leading international efforts to combat climate change and limit global warming to well below 2 degrees Celsius above pre-industrial levels (1850–1900), as set out in the Paris Agreement. To achieve this objective and to make decisions on climate change mitigation and adaptation, the UNFCCC requires systematic observations of the climate system.

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to provide an objective source of scientific information about climate change. The Synthesis Report, the final document of the IPCC's Sixth Assessment Report (AR6), released in early 2023, stated that human activities have unequivocally caused global warming, with global surface temperature reaching 1.1°C above pre-industrial levels in 2011–2020. AR6 also described EO satellite measurement techniques as relevant Earth-system observation sources for climate assessments, since they now provide long time series of climate records. Monitoring climate from space is a key role of EO satellites, which collect global, time-series information on important climate components. Essential Climate Variables (ECVs) are key parameters that characterise the state of the Earth's climate. Measurements of ECVs provide empirical evidence of the evolution of the climate; they can therefore be used to guide mitigation and adaptation measures, to assess risks, and to enable attribution of climate events to underlying causes.

An example of an immediate and direct impact of climate change is human exposure to high outdoor temperatures, which is associated with morbidity and an increased risk of premature death. The World Health Organization (WHO) reports that between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from malnutrition, malaria, diarrhoea, and heat stress alone. WHO data also show that almost all of the global population (99%) breathes air that exceeds WHO guideline limits. Air quality is closely linked to the Earth's climate and ecosystems globally; without adaptation, climate change and air pollution combined will exacerbate the health burden at an accelerating pace in the coming decades.
This LPS25 session will therefore include presentations that demonstrate how EO satellite insights can support current climate action and guide the design of climate adaptation and mitigation policies to protect the health of people, animals, and ecosystems on Earth (e.g., WHO's One Health approach).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Investigating Vectors of Water-Associated Diseases Linked to Water Hyacinth in Vembanad Lake

Authors: Jasmin Chekidhenkuzhiyik, P Fathimathul Henna, P J Neelima, Rithin Raj, Emma Sullivan, Dr Anas Abdulaziz, Dr Nandini Menon, Dr Shubha Sathyendranath
Affiliations: Nansen Environmental Research Centre India, Trevor Platt Science Foundation, Earth Observation Science and Applications, Plymouth Marine Laboratory, CSIR-National Institute of Oceanography, National Centre for Earth Observation, Plymouth Marine Laboratory
Water hyacinth (Eichhornia crassipes), usually found in freshwater bodies, remains an unresolved challenge in many countries around the world, affecting human activities as well as health. The invasive hydrophyte is widespread throughout Vembanad Lake, a backwater body on the south-west coast of India, and its connected canal systems. The proliferation of this weed fluctuates dynamically in response to variations in salinity levels. During the monsoon season, when the entire lake is freshwater-dominated, these hydrophytes envelop the lake’s surface, whereas during the dry season, the hydrophytes in saline water-dominated areas decay and sink to the bottom. In both cases, the water quality of the lake is affected, with consequences for ecosystem health as well as human health. The thick floating weed mats obstruct water flow, hamper fishing activity, and stagnate the water. Reduced water flow promotes sedimentation, deoxygenation, and water quality deterioration, and reduces sunlight penetration. This creates a favourable habitat for the proliferation of disease vectors such as mosquitoes and snails, promoting diseases such as schistosomiasis, dengue, chikungunya, and malaria. Our investigation in Vembanad Lake showed the presence of larval forms of various vectors in the roots of Eichhornia species collected from different canal stations connected to the lake. Mosquito larvae were found at all stations, with varying abundances. Molecular sequencing identified these larvae as Mansonia indiana, a zoophilic mosquito that serves as a vector for the filarial nematode Brugia malayi. Other organisms found within the root network included juveniles of freshwater snails, water bugs, diving beetles, midges, and water spiders. Among the snail species, Indoplanorbis exustus and Gyraulus sp. are known to serve as intermediate hosts for trematode parasites such as Echinostoma and Schistosoma, which cause diseases such as schistosomiasis in humans and animals.
There have been reports on outbreaks of Schistosomiasis in the districts that border the freshwater regime of Vembanad Lake, heavily infested with water hyacinth. Modern remote sensing technologies can greatly enhance our capacity to understand, monitor, and estimate water hyacinth infestation within inland as well as coastal freshwater bodies. This study should be continued to investigate potential connections to human health.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Does Industrial Pollution Drive Antimicrobial Resistance? Results from a Metagenomic Study in Asia’s Largest Pharmaceutical Hub

Authors: Inderjit Singh, Rashmi Sharma, Amit Arora, Balvinder Mohan, Neelam Taneja
Affiliations: Postgraduate Institute of Medical Education and Research, Chandigarh
Background: Sewage constitutes a diverse group of bacteria, including human gut pathogens shed in faeces. In industrial cities it also receives antibiotic outflows from pharmaceutical producers and communities, acting as a huge reservoir of antibiotic-resistance genes (ARGs). This intricate environment gives pathogens ample opportunity to acquire new genes or exchange genes for their benefit. Understanding the emergence, evolution, and transmission of individual ARGs is essential to developing sustainable strategies to combat AMR. We carried out resistome and microbiome analysis of sewage and soil samples from Asia's largest pharmaceutical hub, Baddi, Himachal Pradesh, India.
Methodology: We carried out intensive mapping of Baddi, marking important points around the river Sirsa, community sewage, pharmaceutical/industrial effluents, and hospitals. Sewage and soil samples were collected and processed for microbiological isolation of ESBLs and carbapenem-resistant organisms (CROs). DNA was isolated and shotgun metagenomic sequencing was performed. Raw reads were quality-checked and assembled using IDBA-UD. Reads were screened for AMR genes using the MEGARes database and for microbial diversity using KrakenUniq. R packages were used for plotting and statistics.
Results: We observed higher numbers of CROs in industrial and hospital sewage than in community sewage, and a higher number of ARG signals in industrial wastewater, signifying selection pressure. Ironically, ARGs were higher in community sewage than in hospital sewage, indicating contamination with industrial wastewater. Hits against metallic compounds were exclusively high in industrial effluents. The number of hits for antimicrobial drugs increased significantly in soil, industrial sludge, and river sediment samples taken from locations with high anthropogenic activity. High levels of aminoglycoside, beta-lactam, and tetracycline resistance were observed. The higher prevalence of ARGs in water samples indicated the dissemination of antibiotic-resistance genes into water bodies via untreated wastewater. Industrial sludge was becoming part of agricultural soil.
Conclusions: A multipronged approach is required to mitigate the effects of industrial pollution. Better sewage-treatment practices are needed to reduce the microbial load and thereby AMR transmission.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Shoreline Dynamics and Trends along the Kerala Coast, India: Observations from Multi-temporal Satellite Data

Authors: Dr Ranith Rajamohanan Pillai, Ms. Swathy Krishna. M. C, Dr Bjorn Nyberg, Dr Nandini Menon. M, Dr Roshin P. Raj
Affiliations: Nansen Environmental Research Centre (India), 7 analytics, Nansen Environmental and Remote Sensing Center, Bjerknes Center for Climate Research
Climate change and associated extreme events have affected the coastline of India, resulting in the loss of coastal habitats. Detailed information on the rate of change of the coastline is important for identifying the intensity of loss and planning mitigation measures. This study assessed variability in the shoreline along the Kerala coast, on the south-west coast of India, for the period 2015 to 2023. The shoreline datasets for the study were obtained from Landsat 5, 7 and 8 and Sentinel-2. A normalized difference water index served to delineate land and water bodies, as a first step in digitizing the shorelines for the Kerala coast. The analysis employed the Digital Shoreline Analysis System (DSAS) integrated with ArcGIS 10.2 to calculate shoreline dynamics using five distinct metrics: Shoreline Change Envelope (SCE), Net Shoreline Movement (NSM), End Point Rate (EPR), Linear Regression Rate (LRR), and Weighted Linear Regression (WLR). A total of 5,819 transects were analyzed to quantify the spatial and temporal variability in shoreline dynamics. Results showed that about 61% of the transects exhibited negative movement, indicating erosion. The maximum erosion measured was -827.83 meters, while the highest accretion was 437.18 meters. Based on SCE, the average shoreline change along the Kerala coast was 26.04 meters. The total distance of shoreline movement represented by NSM indicated significant erosion across the region. The average annual rate of shoreline movement between the oldest and most recent shorelines, according to EPR, showed that erosion is widespread, at an average of -1.06 meters/year. About 61% of the transects exhibited erosional trends, with a maximum erosion rate of 101.4 meters/year and an accretion rate of 53.55 meters/year. LRR, derived from linear regression applied to all shoreline positions over time, showed similar patterns of erosion.
The highest erosional rate recorded was -95.83 meters/year, while the highest accretion rate was 49.05 meters/year. Similarly, the WLR method, which applies weighted regression to account for uncertainty in shoreline positions, showed that about 61% of transects were eroding at an average rate of -1.06 meters/year. The peak erosion rate was -95.23 meters/year, and the maximum accretion rate was 48.56 meters/year. It is hence evident from this study that the Kerala shoreline is predominantly erosional, with significant shoreline retreat observed along the Kannur, Ernakulam and Trivandrum districts, owing to frequent heavy rainfall, hydrodynamics and coastal anthropogenic activities. However, accretion was also identified at localized levels. The uncertainty in the rate calculations, evaluated at a 90% confidence interval, ranged between 0.47 and 2.51 meters/year, indicating the need for further refined analyses. This study serves as an interim assessment of shoreline variability along the Kerala coast and provides critical insights into the spatial extent of coastal changes. Incorporating higher-resolution data and extending the temporal scale of the analysis could enhance the precision of these findings and better inform coastal management strategies to support sustainable development along the Kerala coastline.
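The EPR and LRR metrics reported above have simple definitions: EPR divides the net movement between the oldest and newest shorelines by the elapsed time, while LRR is the least-squares slope of shoreline position against time across all survey dates. A minimal stdlib sketch (the transect positions below are hypothetical, not the study's data):

```python
from datetime import date

def end_point_rate(d_old, d_new, t_old, t_new):
    """EPR (m/yr): net shoreline movement between the oldest and newest
    shorelines divided by the elapsed time. Negative = erosion."""
    years = (t_new - t_old).days / 365.25
    return (d_new - d_old) / years

def linear_regression_rate(years, positions):
    """LRR (m/yr): least-squares slope of shoreline position vs time,
    using every surveyed shoreline rather than just the end points."""
    n = len(years)
    mean_t = sum(years) / n
    mean_d = sum(positions) / n
    num = sum((t - mean_t) * (d - mean_d) for t, d in zip(years, positions))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den

# Hypothetical transect: distance (m) from baseline at each survey year
yrs = [2015, 2017, 2019, 2021, 2023]
pos = [120.0, 118.1, 115.9, 113.8, 111.5]
epr = end_point_rate(pos[0], pos[-1], date(2015, 1, 1), date(2023, 1, 1))
lrr = linear_regression_rate(yrs, pos)  # negative slope -> eroding transect
```

WLR follows the same scheme with per-observation weights (e.g. inverse positional uncertainty) in the sums.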
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: The Zanzemap Project: Artificial Intelligence Models and Satellite Data to Forecast Vector Dynamics in Northern Italy

Authors: Giovanni Marini, Daniele Da Re, Francesca Dagostin, Marharyta Blaha, Annapaola Rizzoli
Affiliations: Fondazione Edmund Mach, University of Trento
The project "ZanZeMap" aims to enhance public health in the Autonomous Province of Trento (Northern Italy) by developing user-friendly maps that indicate the risk of tick and mosquito presence and activity, addressing significant public health challenges posed by vector-borne diseases. Utilizing advanced artificial intelligence (AI) and machine learning techniques, this initiative analyzes detailed climatic and environmental data to predict where and when these arthropods are most active. Key to this project is the integration of high-resolution climate data, including satellite observations, providing insights into temperature, humidity, and vegetation cover—critical factors for understanding vector habitats and behaviors. The project can forecast changes in mosquito and tick populations up to two weeks in advance under various climate scenarios, allowing for proactive vector management. Additionally, field-based vector monitoring will be incorporated to validate the model's forecasts, enhancing the accuracy of vector activity assessments and enabling timely interventions. The resulting online maps will empower the local population and stakeholders by providing real-time information on vector phenology and activity, facilitating personal protective measures against bites, such as using repellents, and fostering a collaborative environment in public health initiatives. Ultimately, this project not only aims to improve local vector surveillance but also has the potential for application in diverse geographical contexts facing similar public health challenges exacerbated by climate change. By establishing a robust framework for ongoing data analysis and community involvement, the initiative seeks to enhance public health outcomes and quality of life in the Autonomous Province of Trento and the Alpine area in the future.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Vibrio-phytoplankton relationships in Vembanad Lake and their potential use in Earth observation

Authors: Kiran Krishna, Dr Anas Abdulaziz, S Sangeetha, Shard Chander, Ashwin Gujrati, Dr Nandini Menon, Grinson George, Dr Shubha Sathyendranath
Affiliations: CSIR-National Institute of Oceanography, Academy of Scientific and Industrial Research (AcSIR), Space Applications Centre (ISRO), Nansen Environmental Research Centre India, ICAR-Central Marine Fisheries Research Institute, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre for Earth Observation, Plymouth Marine Laboratory
Diseases associated with Vibrio species are a growing global concern, particularly in coastal regions where the incidence of such diseases is escalating due to various factors related to climate change and other anthropogenic influences. There are ongoing efforts to employ Earth observation tools to develop risk maps of microbially-contaminated coastal areas, which necessitate the identification of reliable proxies for the presence and abundance of Vibrio species in the water. Vibrio species are found in association with diverse organisms, among which phytoplankton warrant further investigation due to their potential role as proxies detectable via satellites. This study seeks to assess statistically the relationship between the distribution of autotrophic picoplankton and Vibrio species in Vembanad Lake, situated along the southwest coast of India. The environmental variability in this region is primarily influenced by rainfall during the monsoon season (June to September), which often results in flash floods. Water samples were collected from 13 stations along the lake during three seasons: pre-monsoon (March), monsoon (June), and post-monsoon (December). These samples were analysed for total chlorophyll concentration using spectrophotometric methods. Autotrophic picoplankton were sorted into picoeukaryotes and cyanobacteria (Synechococcus) using flow cytometry; and Synechococcus were further partitioned into two types based on their pigment complement: those containing phycocyanin and those containing phycoerythrin. The total of all Vibrio species, including V. cholerae, was quantified employing quantitative real-time PCR techniques, as was the abundance of Escherichia coli, a bacterium that is often taken as indicative of faecal contamination. Other environmental variables such as temperature and pH were also monitored at the same time. 
Total chlorophyll concentrations in the lake varied between 3 µg/L and 71 µg/L, with the peak concentration recorded at one station during the pre-monsoon. The abundance of Synechococcus cells containing phycocyanin ranged from 10¹ to 10⁵ cells/ml, whereas those containing phycoerythrin ranged from 10¹ to 10⁴ cells/ml, and picoeukaryotes from 10² to 10⁴ cells/ml. The total Vibrio counts within the lake varied temporally, with concentrations of 7 x 10⁵ ± 3 x 10⁵ copies/ml during the pre-monsoon period, 7.5 x 10² ± 8.8 x 10² copies/ml during the monsoon season, and 1.4 x 10⁴ ± 2.2 x 10⁴ copies/ml during the post-monsoon. We found that the log-transformed abundance of total Vibrio (copies/ml) had a positive linear relationship (r²=0.46) with the relative abundance of picoeukaryotes and a negative linear relationship (r²=0.57) with the relative abundance of phycocyanin-containing Synechococcus. Multiple linear regression indicated that the variability in the distribution of total Vibrio species in Vembanad Lake exhibited a significant relationship (p<0.05) with temperature, picoeukaryotes, and pH. Similarly, multiple linear regression showed that the distribution of V. cholerae in the lake was significantly related (p<0.05) to E. coli, picoeukaryotes, pH, turbidity, and silicate. The high covariance between V. cholerae and E. coli could potentially indicate a common source for the two types of bacteria. In conclusion, this study highlights the potential of using picoplankton and temperature as proxies for mapping the distribution of Vibrio species in Vembanad Lake using satellite data. Ongoing research will focus on integrating these findings with Earth observation from space, as well as with data from drones equipped with hyperspectral sensors.
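The r² values above come from simple linear fits of log-transformed Vibrio abundance against the relative abundance of each picoplankton group. For reference, the coefficient of determination of such a fit can be computed directly; the data in the sketch are illustrative, not the study's measurements:

```python
def r_squared(x, y):
    """r-squared of a simple least-squares linear fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical example: log10 Vibrio copies/ml vs relative picoeukaryote abundance
pico_frac = [0.10, 0.25, 0.40, 0.55, 0.70]
log_vibrio = [2.1, 2.9, 3.4, 4.2, 4.8]
fit_quality = r_squared(pico_frac, log_vibrio)  # between 0 and 1
```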
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: High-Resolution Spatio-Temporal Mapping of Air Temperature and Humidity in Padua (Italy) Using Satellite Data and Geographically-Temporally Weighted Regression

Authors: . Naila, Dr. Jacopo Vivian, Prof. Michele De Carli
Affiliations: Department of Industrial Engineering, Università degli Studi di Padova, Padova, Italy
Due to rapid urbanization, cities worldwide are experiencing a climatic phenomenon known as the urban heat island (UHI) effect, where urban temperatures become significantly higher than those of surrounding rural areas. Consequently, air temperature (Ta) and relative humidity (RH) levels vary considerably across space due to the uneven distribution of factors such as vegetation cover, population density, urban morphology, and other local characteristics. UHIs can intensify heat-related health risks during heat waves; therefore, understanding the spatial and temporal distribution of Ta and RH is crucial to accurately quantify (and hence prevent) heat-associated mortality. This study aims to determine the spatio-temporal patterns of air temperature and relative humidity and their respective explanatory factors, such as land surface temperature (LST), normalized difference vegetation index (NDVI), digital elevation model (DEM), and solar zenith angle (SZA). To model the relationships among these variables, the study used two data-driven models: a geographically weighted regression (GWR) model and a geographically and temporally weighted regression (GTWR) model. GTWR is an extended version of the standard GWR model, in which the weighting matrices embody both spatial and temporal information about the independent (explanatory) variables. In most cities, only a few meteorological stations are available to measure Ta and RH, limiting the spatial coverage of ground observations. To overcome this problem, satellite-derived thermal (IR) images have been employed to estimate Ta and RH at high spatial and temporal resolutions. Both GWR and GTWR models were trained using LANDSAT-8/9 and MODIS satellite thermal imagery for the city of Padua, Italy. The models predict Ta and RH over 10 years (2015 - 2024) for winter and summer months.
The predicted weather data were compared with observations from 110 meteorological stations recently installed by the Municipality in various parts of the city. Results demonstrated that GTWR effectively predicts the spatial distribution of air temperature during hot summer days. However, the comparison revealed that certain weather stations, particularly those influenced by local anthropogenic heat sources, exhibit discrepancies that cannot be captured by the models. In conclusion, the model investigated in this study (GTWR) provides a reliable approach to identify heat-vulnerable areas, where mortality rates may increase due to heat, contributing to targeted interventions to protect at-risk populations and mitigate heat-related health impacts.
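In GTWR, each observation's influence on the local regression decays with its spatio-temporal distance from the prediction point. A common choice is a Gaussian kernel; the sketch below uses hypothetical bandwidths (h_s in km, h_t in days), not the calibrated values from this study:

```python
import math

def gtwr_weight(ds_km, dt_days, h_s=5.0, h_t=30.0):
    """Gaussian spatio-temporal kernel: an observation close to the
    prediction point in both space and time gets a weight near 1;
    distant observations contribute almost nothing to the local fit."""
    return math.exp(-((ds_km / h_s) ** 2 + (dt_days / h_t) ** 2))

# The same station reading weighted for a nearby vs a distant target point
w_near = gtwr_weight(1.0, 2.0)   # close in space and time -> near 1
w_far = gtwr_weight(20.0, 90.0)  # far in both -> near 0
```

Standard GWR corresponds to the spatial term alone; GTWR's temporal term lets older observations fade in the same way distant ones do.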
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Spatial Modelling of Mosquito Breeding Sites to Improve Larval Source Management

Authors: Fedra Trujillano, Zaida Quiroz, Najat Kahamba, Fredros Okumu, Emma Laurie, Brian Barrett, Kimberly Fornace
Affiliations: University Of Glasgow, Pontificia Universidad Catolica del Peru, Ifakara Health Institute, National University of Singapore
Rising temperatures, as a consequence of climate change, can expand the areas suitable for mosquito breeding habitats and increase the population at risk of the infectious diseases this vector transmits. This expansion threatens current progress in the elimination and control of mosquito-borne diseases such as malaria. Given the mosquito's high dependency on the environment, Earth Observation (EO) data are crucial to inform vector surveillance. Despite current efforts towards malaria elimination, additional strategies are needed. Among them, Larval Source Management (LSM) is recommended as a complementary intervention by the World Health Organisation (WHO). In recent decades, data-driven models using entomological and EO data have been developed to produce high-risk area maps at the local scale. However, further insights into the utility of EO and spatial modelling are needed for an integrated operational framework for vector control. The availability of EO data and advances in data-driven models could potentially improve breeding-site identification and the development of cost-effective methods. The aim of this study is to explore the integration of EO data and ground-based survey information to build a predictive model for identifying potential breeding habitats. The study focuses on a case in south-east Tanzania, a malaria-endemic region where water bodies were surveyed for larval presence during both the dry and rainy seasons from 2021 to 2023. The proposed methodology investigates the spatial correlation between larva-positive water bodies and environmental characteristics. This environmental information includes weather variables (temperature and rainfall extracted from ESA products) and high-resolution land cover characteristics. The land cover classes (water, buildings and forest) were extracted from high spatial resolution (4 m) PlanetScope imagery using the Segment Anything Model.
These datasets, combined with larval breeding site observations, are integrated into a comprehensive dataset of environmental covariates. The model is developed as a Bayesian spatial model, implemented using Integrated Nested Laplace Approximation (INLA). The results provide insights into combining recent image-processing foundation models with Bayesian spatial modelling to predict fine-scale maps for larval source management purposes. The improvement of traditional vector surveillance methods will help monitor the expansion of mosquito habitats in changing environments.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Rising Sea Surface Temperatures and Marine Heatwaves in the Adriatic Sea: Implications for Mussel Aquaculture along the Abruzzo Coast, Central Italy

Authors: Romolo Salini, Susanna Tora, Federico Filipponi, Annamaria Conte, Carla Ippoliti
Affiliations: Istituto Zooprofilattico Sperimentale "G. Caporale" - Teramo, National Research Council (CNR)
Sea temperature is a critical parameter in aquaculture, directly influencing the growth and survival of molluscs. In the central Adriatic Sea, along the Italian coast, mussel farms are located nearshore, approximately 1 to 3 km from the coastline, at a median depth of about 10 meters. Given the significant role of the aquaculture sector in the Italian economy and its increasing relevance as a source of high-quality food essential for a healthy population, we characterised the evolution of Sea Surface Temperature (SST) in the mollusc production areas. Although SST refers to surface temperature, which is influenced by atmospheric temperature and its fluctuations, solar radiation, wind, and short-term weather conditions, in shallow coastal waters such as our study sites the variation of temperature along the water column is minimal. Therefore, SST can be considered, with sufficient accuracy, representative of mollusc environmental conditions. This study analysed SST trends in the Adriatic coastal waters of Abruzzo from 2008 to 2024, focusing also on marine heatwaves (MHWs). A MHW is defined as an SST anomaly that exceeds the 90th percentile of the climatological baseline for at least five consecutive days. SST data were derived from high-resolution satellite products provided by the Copernicus Marine Service, namely the Level-4 “Mediterranean Sea High Resolution and Ultra High Resolution Sea Surface Temperature Analysis”, with a spatial resolution of 0.01° (approximately 1 km), daily temporal frequency, and coverage of the entire Mediterranean Sea since 2008. Data were extracted at locations representative of mussel farms and pre-processed using QGIS and R software. Using time-series analysis tools, daily SST data were decomposed into trend, seasonal, and random components.
The Mann-Kendall test identified statistically significant warming trends, while linear regression on detrended data revealed an average annual SST increase of 0.027°C from 2008 to 2024. MHWs, increasingly frequent and intense, characterized the summers of 2023 and 2024, with maximum SSTs exceeding 30°C and persisting, in some cases, for over 60 consecutive days. The heatwaves of 2024 were notably more prolonged than previous events, such as the 34-day MHW in 2022 along the Abruzzo coastline. Autumn 2023 was characterised by persistent SST anomalies, with elevated temperatures extending through October and November, indicating significant thermal inertia. These prolonged warm conditions likely compounded stress on marine life. Although covering a relatively short period, this study highlights a significant upward trend in SST in the central Adriatic Sea, aligning with similar findings reported across the Mediterranean region. In conclusion, prolonged increases in water temperature have become more evident in both frequency and duration, with the summers of 2023 and 2024 experiencing the most intense events recorded over the investigated period. This study highlights the importance of satellite-based observations, in particular their ability to provide long-term daily SST mapping products over wide geographic areas with sufficient spatial detail for coastal water monitoring. Such data are essential for monitoring environmental conditions and supporting adaptive aquaculture management under climate change.
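The MHW definition used here (SST above the day-specific 90th-percentile climatology for at least five consecutive days) reduces to a run-length scan over the daily series. A stdlib sketch with synthetic values, not the Copernicus data themselves:

```python
def detect_mhw(sst, threshold, min_days=5):
    """Return (start_index, duration_days) for each run where daily SST
    exceeds the 90th-percentile climatological threshold for at least
    min_days consecutive days (the MHW definition used in the study)."""
    events, run_start = [], None
    for i, (t, thr) in enumerate(zip(sst, threshold)):
        if t > thr:
            if run_start is None:
                run_start = i  # a warm spell begins
        else:
            if run_start is not None and i - run_start >= min_days:
                events.append((run_start, i - run_start))
            run_start = None
    if run_start is not None and len(sst) - run_start >= min_days:
        events.append((run_start, len(sst) - run_start))  # spell reaches end of series
    return events

# Synthetic week-long warm spell against a flat 27.5 degC threshold
sst = [26.0] * 3 + [29.0] * 7 + [26.0] * 2
thr = [27.5] * len(sst)
events = detect_mhw(sst, thr)  # one event starting on day 3, lasting 7 days
```

In practice the threshold varies by calendar day, computed from the 2008-2024 climatology; the scan itself is unchanged.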
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Understanding Leptospirosis in Rio Grande do Sul, Brazil: Climatic and Sociodemographic Insights

Authors: Dr. Andrea de Lima Oliveira, Dr. Ricardo Guimarães, Shubha Sathyendranath, Milton Kampel
Affiliations: Instituto Nacional De Pesquisas Espaciais, Instituto Evandro Chagas, Plymouth Marine Laboratory
Several infectious diseases have a seasonal pattern that may be related to climatic conditions, such as rainy seasons for water-associated diseases. For this reason, satellite Earth observations are useful for studying the main drivers of these outbreaks, especially when coupled with additional information on sociodemographic factors that are potentially useful for modeling and predicting outbreaks. Here, we study leptospirosis, a bacterial disease that can be transmitted from animals to humans, usually through the urine of infected rodents or other hosts, including livestock. The bacteria can survive in soil for many days. Infection occurs when the bacteria come into contact with the mucous membranes or exposed wounds of susceptible individuals. It is endemic in Brazil and its incidence is closely linked to humid climates that favor the survival of the pathogenic bacteria in the environment. Among the Brazilian regions, the South has the highest average annual incidence (3.89 per 100,000 inhabitants) and the highest number of confirmed cases over the last 17.5 years (2007 – June 2024), totaling 20,260 cases. This corresponds to 65.1 cases per 100,000 inhabitants in 2024. The state of Rio Grande do Sul alone accounts for 38% of cases in the southern region. This state has also experienced extreme weather events, such as heavy rainfall in September 2023 and May 2024, with devastating consequences for the population. This study analyzes monthly leptospirosis cases in municipalities in Rio Grande do Sul, using data from the Brazilian government (DATASUS). A time series analysis was performed to calculate the Shannon entropy index, which indicates the complexity of the case patterns. Correlations between leptospirosis incidence, cumulative rainfall (CHIRPS, Climate Hazards Group InfraRed Precipitation with Station), and mean land surface temperature (ERA5) were evaluated. 
Municipalities were classified according to the predictability of leptospirosis outbreaks, considering time series complexity and climatic correlations. In addition, sociodemographic factors, such as gender, age, education level, occupation, and exposure to specific risk factors (e.g., flooding, rodent contact, agricultural activities), were analyzed in relation to the predictability index. Spatial analysis showed that leptospirosis cases were clustered in certain regions. The Shannon entropy index of most municipalities was between 0.75 and 1, indicating complex, non-linear patterns in the time series. Rainfall correlations with incidence were consistently positive, while temperature correlations varied, being negative in some locations but predominantly positive. Of the 497 municipalities in the state, 267 (53.7%) were classified as having low incidence (≤ 5 cases per 10,000 inhabitants), while 43 municipalities (8.7%) had highly unpredictable patterns, classified as “difficult to predict”. The remaining 187 municipalities (37.6%) showed varying degrees of predictability (feasible prediction, FP), based on combinations of lower complexity (S), significant rainfall correlation (R), and significant temperature correlation (T). Sociodemographic analysis showed that leptospirosis cases occurred predominantly among men (about 80% in all classes) and adults aged 30-59 years (49%-61%), followed by young adults aged 19-29 years (10%-23%). Data on educational level were incomplete, but among the reported cases, those with a secondary school education represented 22% to 31% of the feasible prediction classes. Occupational data, although often not reported, suggested that agricultural workers were disproportionately affected in the feasible-prediction municipalities with lower Shannon index and significant temperature correlation; these occupations were also frequent in the difficult-to-predict group.
This highlights the potential role of occupational exposure in shaping seasonal incidence patterns in these municipalities. Other occupations, by contrast, were more frequent in the classes correlated with rainfall, indicating that in municipalities where accumulated rainfall correlated with incidence, cases were not concentrated among agricultural workers but spread across other occupations. The risk situations were categorized into four groups: flood-related (i.e., contact with flood water or mud, or proximity to water), rodent-related (i.e., direct contact or signs of rodents nearby), work-related (i.e., farming or grain storage) and other (i.e., proximity to waste disposal, water tank or septic tank maintenance, vacant lots, etc.). Exposure to rodents was the most commonly reported risk across all classes (> 50% of cases). Work-related risks were particularly high in municipalities where seasonality and temperature correlations were significant, while flood-related risks dominated in municipalities with strong rainfall correlations. This study highlights the spatial and temporal variability of leptospirosis incidence in Rio Grande do Sul, revealing the multiple drivers of outbreaks, including climatic factors, occupational risks, and flood-related exposures. By classifying municipalities based on outbreak predictability, we provide valuable insights for tailoring public health interventions. Municipalities with clear patterns of seasonality and climatic correlations offer opportunities for predictive modeling, enabling proactive interventions such as early warnings, targeted education, and vaccination campaigns. Conversely, regions with unpredictable patterns may require more extensive surveillance efforts and resource allocation. These findings highlight the importance of integrating environmental, occupational, and sociodemographic data into prevention strategies to effectively reduce the public health burden of leptospirosis.
This study is a contribution to the Waterborne Infectious Diseases and Global Earth Observation in the Nearshore (WIDGEON) project.
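The Shannon entropy index used above to grade time-series complexity can be computed from the distribution of monthly cases; values near 1 (after normalisation) correspond to the complex, hard-to-predict patterns described. A sketch assuming one common normalisation, dividing by the log of the number of occupied bins (the study's exact variant may differ):

```python
import math

def shannon_entropy(monthly_cases, normalise=True):
    """Shannon entropy of a monthly case-count distribution.
    Normalised to [0, 1]: values near 1 mean cases are spread evenly
    over time (complex, weakly seasonal); values near 0 mean cases
    concentrate in a few months (strong, predictable seasonality)."""
    total = sum(monthly_cases)
    probs = [c / total for c in monthly_cases if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    if normalise and len(probs) > 1:
        h /= math.log(len(probs))
    return h

even = shannon_entropy([5] * 12)  # uniform over the year -> 1.0
seasonal = shannon_entropy([30, 25, 2, 1, 0, 0, 0, 0, 0, 0, 1, 3])  # concentrated -> lower
```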
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Role of invasive macrophytes in enhancing the antimicrobial resistant pathogenic load in Vembanad Lake, Kerala, India

Authors: Jasmin Chekidhenkuzhiyik, S Sangeetha, Nada Mahamood, Dr Anas Abdulaziz, Emma Sullivan, Dr Nandini Menon, Dr Shubha Sathyendranath
Affiliations: Nansen Environmental Research Centre India, Trevor Platt Science Foundation, CSIR-National Institute of Oceanography, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre for Earth Observation, Plymouth Marine Laboratory
Eichhornia crassipes, a native of South America, has invaded almost all the freshwater bodies of the world and is generally regarded as the most troublesome aquatic plant. Wherever it has encountered suitable environmental conditions it has spread with phenomenal rapidity to form extensive monotypic blanket cover in lakes, rivers and rice paddy fields. The proliferation of the weed stagnates the water and degrades its quality. The plant is variable in size, reaching up to 1 m in height under good nutrient supply. Roots develop at the base of each leaf and form a dense mass, varying in length from 20 cm to 300 cm. These dense root mats harbour pathogens and vectors. Diseases caused by waterborne microbes are a major public health challenge, especially in low- and middle-income countries with drinking water shortages and poor sanitation. Faecal contamination from non-point sources into water bodies leads to an increase in microbial pathogens, especially the faecal indicator bacterium Escherichia coli. Pathogenic strains of E. coli can cause several diseases in human populations. We found that E. coli in the water of Vembanad Lake forms complex associations with the roots of the invasive weed Eichhornia crassipes, often existing as biofilms embedded within a matrix of extracellular polysaccharide on the roots of the weed, thereby protecting themselves from toxic pollutants. E. coli isolated from the roots of water hyacinth were characterized and tested for resistance against antibiotics from different generations. The Multiple Antibiotic Resistance (MAR) index of each isolate was calculated. The results showed that most of the isolates exhibited multi-drug resistance, a globally challenging threat. Antimicrobial resistance among opportunistic pathogens, combined with the wide geographical distribution of aquatic weeds, poses serious health concerns.
Integrated water quality monitoring, combined with mapping of floating weeds on water bodies, is required to mitigate the problem.
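The MAR index referenced above is conventionally computed as a/b, where a is the number of antibiotics an isolate resists and b the number of antibiotics tested; indices above 0.2 are commonly read as pointing to a high-risk, antibiotic-exposed source. A minimal sketch (the isolate names and counts are hypothetical):

```python
def mar_index(n_resistant, n_tested):
    """MAR index = a / b: antibiotics the isolate resists over the
    number of antibiotics it was tested against."""
    if n_tested <= 0:
        raise ValueError("at least one antibiotic must be tested")
    return n_resistant / n_tested

# Hypothetical isolates from water hyacinth roots: (resistant, tested)
isolates = {"WH-01": (6, 12), "WH-02": (2, 12), "WH-03": (9, 12)}
# Isolates conventionally flagged as likely high-risk origin (MAR > 0.2)
high_risk = {name: r / t for name, (r, t) in isolates.items() if r / t > 0.2}
```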
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Earth observation measurements and spatio-temporal deep learning modelling to predict infectious disease outbreaks in South Asia: a case study from 2000 to 2017

Authors: Dr Usman Nazir, Talha Quddoos, Dr Momin Uppal, Dr Sara Khalid, Dr Rochelle Schneider
Affiliations: Lahore University of Management Sciences, Center for Statistics in Medicine, University of Oxford, ESA, London School of Hygiene & Tropical Medicine
Background: Malaria remains one of the leading communicable causes of death. Approximately half of the world's population is considered at risk, predominantly in African and South Asian countries. Although malaria is preventable, heterogeneity in climatological, socio-demographic, and environmental risk factors over time and across geographical regions makes outbreak prediction challenging. Data-driven approaches accounting for spatio-temporal variability may offer potential for region-specific early warning tools for malaria. Methods: We developed and validated a data fusion approach to predict malaria incidence in the South Asian belt spanning Pakistan, India, and Bangladesh using geo-referenced environmental factors. For 2000-2017, district-level malaria incidence rates for each country were obtained from the US Agency for International Development's Demographic and Health Survey (DHS) datasets. Environmental factors included temperature (Celsius), rainfall (millimeters), and average Normalized Difference Vegetation Index, obtained from the Advancing Research on Nutrition and Agriculture (AReNA) project conducted by the International Food Policy Research Institute (IFPRI) in 2020. Data on nighttime light pollution were derived from two satellites: NOAA DMSP OLS Nighttime Lights Time Series Version 4, and VIIRS Nighttime Day/Night Band Composites Version 1. A multi-dimensional spatio-temporal LSTM model was developed using data from 2000-2016 and internally validated for the year 2017. Model performance was measured using accuracy and root mean squared error. Country-specific models were produced for Bangladesh, India, and Pakistan. Results: Malaria incidence in districts across Pakistan, India, and Bangladesh was predicted with 80.6%, 76.7%, and 99.1% accuracy, respectively. In general, higher accuracy and reduced error rates were attained with increased model complexity. Interpretation: Malaria outbreaks may be forecasted using remotely-measured environmental factors.
Modelling techniques that enable forecasting ahead in time as well as across large geographical areas may empower regional decision-makers to manage outbreaks earlier and more accurately. Funding: NIHR Oxford Biomedical Research Centre Programme. We also acknowledge funding support provided by the Higher Education Commission of Pakistan through grant GCF-521. Contributions: The study was conceived and designed by SK, UN, and MU. Data curation and analysis were performed by UN and MTQ. Results were interpreted by all co-authors. The abstract was written by UN and SK and revised by all co-authors. SK is responsible for the overall study. Declaration of Interests: SK is supported by the Innovative Medicines Initiative, Bill & Melinda Gates Foundation, Health Data Research UK, British Heart Foundation, and Medical Research Council and Natural Environment Research Council outside of this work.
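Model skill above is reported as accuracy and root mean squared error; for district-level incidence the RMSE is simply the square root of the mean squared prediction error over the hold-out year. A stdlib sketch (the incidence values are illustrative, not the study's):

```python
import math

def rmse(observed, predicted):
    """Root mean squared error between observed and predicted
    district-level incidence rates."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

# Illustrative 2017 hold-out comparison for a handful of districts
obs = [12.0, 8.5, 3.2, 15.1]
pred = [11.4, 9.0, 2.9, 14.2]
error = rmse(obs, pred)
```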
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Advancing Health Impact Assessment with Air Quality Data from IoT/Low-Cost Sensors

Authors: Dr Christian Borger, Julie Letertre, Thomas Hodson, Ulrike Falk, Dr Rochelle Schneider, Dr Vincent-Henri Peuch
Affiliations: European Centre for Medium-Range Weather Forecasts (ECMWF), European Space Agency (ESA)
Understanding and mitigating the health impacts of air pollution requires accurate and very high resolution data on air quality, particularly in urban environments where pollutant levels vary significantly over scales of 100m or less. Traditional monitoring networks, while reliable, are often limited in spatial coverage due to high operational costs and maintenance requirements. As a result, critical gaps remain in our ability to assess local air quality and its effects on public health. Measurements from low-cost sensors, for instance from citizen science projects, represent a promising solution to these challenges. These Internet of Things (IoT) based observations can provide deeper insights into local-scale air pollution, though they come with certain limitations. In this study, we demonstrate the capabilities and benefits of these novel sensor measurements for health impact assessments. In particular, we focus on their ability to provide hyper-local information on health indicators in comparison to traditional approaches, using selected use cases to highlight their value in advancing public health assessments.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: AIR4health: Leveraging Earth Observation for Compound Climate and Air Quality Extremes Early Warning

Authors: Dr. Ana Oliveira, Paulo Nogueira, Vital Teresa, Luís Figueiredo, Élio Pereira, Rita Cunha, Fabíola Silva, Inês Girão, Amaya Atencia Yepez, Ana Margarida Alho, Maria Miguel
Affiliations: CoLAB +Atlantic, ISAMB, Faculty of Medicine, University of Lisbon, GMV
Climate resilience is a key challenge of the 21st century, as the impacts of climate change and weather extremes become ever more pressing. Nonetheless, public health stakeholders still struggle to take advantage of state-of-the-art geospatial data science, since many dose-response uncertainties remain regarding (i) best practices in mapping exposure to environmental and climate-induced hazards, (ii) the attribution and measurement of correlated impacts, and (iii) our ability to produce meaningful future impact assessment scenarios. This is the case for Compound Climate and Air Quality Extremes. Extreme temperatures have become significantly more frequent and severe in Portugal over recent decades (and in Europe overall), and their occurrence has translated into significant excess human mortality and morbidity, often in tandem with simultaneous air quality degradation, with corresponding human and societal impacts. However, such impacts have only been documented in a case-specific manner, i.e., describing the consequences of Cold and Heat Waves (CW and HW, respectively) and low Air Quality (AQ) events separately, or focusing on very specific events and locations, limiting the public health sector's ability to derive meaningful and generalisable policy guidelines for early warning and response actions. To tackle these challenges, the ESA-funded AIR4health project, part of the Early Digital Twin Components initiative, aims to develop innovative Earth Observation (EO) data-driven algorithms. These AIR4health Risk Algorithms will predict human mortality and morbidity using Machine Learning (ML) and Artificial Intelligence (AI) models. The project will also create a prototype Digital Twin Component (DTC) for early warning of compound extreme climate and air quality events.
The main goal of AIR4health is to create two algorithms, the AIR4health Risk Algorithms, that can predict the risk of increased mortality and illness due to extreme climate and air quality events. These algorithms will use data from mainland Portugal and will support the development of a European system to monitor heatwaves, cold waves, and air quality. The AIR4health Risk Algorithms will address the following use cases: ● Use Case 1: Heat and Ozone. During a heatwave, high temperatures increase the rate of photochemical reactions in the atmosphere. These reactions involve pollutants emitted from vehicles, industrial processes, and other sources reacting with sunlight. One significant consequence is the formation of ground-level ozone (O3), a major component of smog. Ozone formation is enhanced during heatwaves due to increased emissions of precursor pollutants such as nitrogen oxides (NOx) and volatile organic compounds (VOCs). These precursors undergo reactions facilitated by sunlight and heat, leading to the production of O3. Health effects of concurrent heatwaves and excessive O3 concentrations include respiratory (e.g., asthma, bronchitis, inflammation/irritation of airways, shortness of breath), cardiovascular (e.g., strokes, oxidative stress) and heatstroke issues. ● Use Case 2: Cold and Nitrogen Dioxide. During a coldwave, combustion processes, such as those in vehicles and heating systems, increase to meet the greater demand for warmth. This leads to higher emissions of nitrogen oxides (NOx), primarily nitrogen dioxide (NO2). Cold temperatures can enhance atmospheric stability, trapping pollutants close to the ground and prolonging their presence. Additionally, calm wind conditions during coldwaves can further exacerbate air pollution by limiting the dispersion of pollutants.
Health effects of concurrent coldwaves and excess NO2 concentrations include respiratory (e.g., asthma, bronchitis, respiratory infections), cardiovascular (e.g., heart attacks, arrhythmias) and hypothermia issues. To develop the two AIR4health Risk Algorithms, the AIR4health consortium will use a highly detailed, two-decades-long healthcare database for mainland Portugal, which is already available to them. They will combine these data with Earth Observation (EO), modelled, and in-situ data to create two AIR4health Use Cases of dose-response indicators for Compound Climate and Air Quality Extremes. Building on the currently operational country-level Ícaro warning system, the project will focus on creating a detailed daily time series of Compound Climate and Air Quality Extremes (heat- and cold-related, separately). This will be achieved by downscaling satellite data products (e.g., from the Sentinel-5P mission), using ancillary in-situ measurements from the European Environment Agency (EEA) as well as modelled data from the Copernicus Atmosphere Monitoring and Climate Change Services (CAMS, C3S). Machine Learning (ML) models, similar to those used for air temperature in Lisbon, will be employed for this downscaling. The 'Dynamic' and 'Continuous' aspects distinguish the AIR4health approach from the state of the art. AIR4health will introduce two novel algorithms to the Portuguese public health and weather sectors, demonstrating the effectiveness of EO data in enhancing spatial detail and predicting health outcomes from heatwaves, cold waves, and poor air quality events. It will also transition from a non-spatial, single time-series approach to a spatiotemporal one, down to the municipal level. Furthermore, results will be benchmarked against European-level data to pave the way towards broader adoption.
This highlights the consortium’s commitment to use these cases in Portugal as a model for the international community on how to address Planetary Health and Climate Change Preparedness. On a larger scale, the European Union (EU) “Destination Earth” (DestinE) and European Green Deal initiatives will benefit from the AIR4health outcomes, as these stressors directly impact our communities, aiding the evolution towards the federation of local-specific services integrated into the European Digital Twin ecosystem.
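The downscaling step described in the abstract calibrates a statistical relation between coarse model fields and fine-scale predictors at station locations, then applies it on a fine grid. A simplified linear sketch of that idea follows; the project uses ML models (e.g., those applied to Lisbon air temperature), and every variable, unit, and coefficient here is a synthetic assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training data at 200 monitoring stations:
# predictors = [coarse-model NO2, land surface temperature, population density, intercept]
N = 200
X = np.column_stack([
    rng.uniform(10, 40, N),  # coarse-model NO2 (ug/m3)
    rng.uniform(5, 30, N),   # LST (degC)
    rng.uniform(0, 1, N),    # normalised population density
    np.ones(N),              # intercept
])
# Synthetic "observed" station NO2: coarse field plus fine-scale modulation + noise
y = 0.9 * X[:, 0] + 0.2 * X[:, 1] + 5.0 * X[:, 2] + 1.0 + rng.normal(0, 0.5, N)

# Least-squares calibration (stand-in for the project's ML downscaler)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Apply to a fine grid where only the predictors are known
grid = np.column_stack([
    np.full(100, 25.0),        # constant coarse NO2 across one coarse cell
    rng.uniform(5, 30, 100),   # fine-scale LST
    rng.uniform(0, 1, 100),    # fine-scale population density
    np.ones(100),
])
no2_fine = grid @ coef  # downscaled NO2 field varying within the coarse cell
```

The point of the sketch is structural: within a single coarse cell, the downscaled field inherits spatial variability from the high-resolution predictors rather than from the coarse model value.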
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Remote Sensing of Mental Health: the Effects of Heat Stress on Mental Health in Switzerland

Authors: Ella Schubiger, Jennifer Susan Adams, Susanne Fischer, Maria J. Santos, Kathrin Naegeli
Affiliations: Department of Geography, University of Zurich, Department of Psychology, University of Zurich
Recent summers have been marked by record-breaking heat across many regions of the world. The Intergovernmental Panel on Climate Change (IPCC) reports that the increase in the number of hot days and nights, as well as the length, frequency, and intensity of warm spells or heatwaves over most land areas, is virtually certain. Furthermore, the frequency and severity of extreme heat events are very likely to rise nonlinearly with increasing global warming. While the physiological risks of heat stress on humans are well-known, its impacts on mental health are not yet widely recognised. Establishing connections between temperature observations and mental health datasets is therefore essential to improve our understanding and to develop strategies to address the challenges posed by climate change, rising global temperatures, and more frequent extreme events. Meteorological station data and gridded products provide valuable resources for examining heat stress occurrences at local and regional scales. Additionally, Earth Observation (EO) data on Land Surface Temperature (LST) enable spatial and temporal analyses of heat stress events across varying scales. Surveys providing assessments of mental health problems and stress alongside demographic and socioeconomic data provide insight into the state and evolution of individuals’ mental health. Integrating these distinct datasets, however, presents significant challenges, yet it holds the potential to shed light on the societal impacts of heat stress on mental health. In this contribution, we focus on available LST products based on EO data from e.g. MODIS, Landsat or ECOSTRESS. Additionally, spatial climate analysis datasets from MeteoSwiss provide daily temperature, precipitation and other climate variables at a 1x1 km spatial resolution, spanning multiple decades. 
These datasets integrate observations from nearly 160 automatic weather stations, as well as radar and satellite data, to deliver a robust climate monitoring system. Data on mental health are obtained from the Swiss Health Surveys (SHS), conducted every five years and comprising seven survey rounds over a 30-year period. Each survey includes approximately 18,000 individuals aged 15 and older and increasingly adheres to the European Health Interview Survey (EHIS) framework to ensure international comparability. This wealth of data serves as the foundation for our analysis. The contribution will present preliminary results on the two main objectives of this project. First, we investigate the link between mental health in Switzerland and extreme heat events (derived from EO and weather station data, and human heat stress indices such as the Heat Vulnerability Index) over the past 30 years. Second, we perform spatial analyses on the datasets and show that links between mental health and extreme events/heat stress also depend on urban-rural differences (attributed to urban heat island effects) and urban planning (e.g., access to green spaces). To our knowledge, such a combined temporal and spatial analysis has not been conducted in Switzerland before. This work aims to provide new insights into the potential of EO data for assessing spatial and temporal patterns of mental health by integrating diverse data sources. An improved understanding of the impacts of environmental heat extremes on human mental health is urgently needed to develop adaptation and mitigation strategies for societies facing a warming and increasingly extreme climate.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: From data to action: a machine learning model to support tick-borne encephalitis surveillance and prevention in Europe

Authors: Giovanni Marini, Francesca Dagostin, Diana Erazo, Daniele Da Re, Valentina Tagliapietra, Maria Avdicova, Tatjana Avšič – Županc, Timothée Dub, Nahuel Fiorito, Nataša Knap, Céline M. Gossner, Jana Kerlik, Henna Mäkelä, Mateusz Markowicz, Roya Olyazadeh, Lukas Richter, William Wint, Maria Grazia Zuccali, Milda Žygutienė, Simon Dellicour, Annapaol Rizzoli
Affiliations: Research and Innovation Centre, Fondazione Edmund Mach, Spatial Epidemiology Lab, Université Libre de Bruxelles, Center for Agriculture Food Environment, University of Trento, Regional Authority of Public Health in Banská Bystrica, Institute of Microbiology and Immunology, Faculty of Medicine, University of Ljubljana, Department of Health Security, Finnish Institute for Health and Welfare, Unità Locale Socio Sanitaria Dolomiti, European Centre for Disease Prevention and Control (ECDC), Austrian Agency for Health and Food Safety, Environmental Research Group Oxford Ltd, c/o Dept Biology, Azienda Provinciale Servizi Sanitari, Dipartimento di prevenzione, National Public Health Center under the Ministry of Health
Background: Tick-borne encephalitis (TBE) is a severe zoonotic neurological infection caused by the TBE virus (a member of the Flaviviridae family) and is one of the most important tick-borne viral diseases in Europe and Asia. The infection is mostly acquired through a tick bite, but alimentary infection is also possible. Despite the availability of a vaccine, TBE incidence is increasing, with new foci of virus circulation appearing in newly endemic areas. The increase in TBE cases across Europe - from 2,412 in 2012 to 3,514 in 2022 - has highlighted the need for predictive tools capable of identifying areas where human TBE infections are likely to occur. In response, this study presents a novel spatio-temporal modelling framework that provides annual predictions of the occurrence of human TBE infections across Europe, at both regional and municipal levels. Methods: We used data on confirmed and probable TBE cases provided by The European Surveillance System (TESSy, ECDC) to infer the distribution of human TBE cases at the regional (NUTS-3) level during the period 2017-2022. We trained the model on data from countries with sufficient reporting, i.e., those that provided the location of infection at the NUTS-3 level for at least 75% of cases notified during the selected period. To account for the natural hazard of viral circulation, we included variables related to temperature (derived from satellite images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) and supplied by NASA at a resolution of 5.6 km), precipitation (derived from the ECMWF ERA5-Land dataset at 30 arc-second resolution), land cover (extracted from the 2018 Corine Land Cover (CLC) inventory (class "3.1") at a resolution of 0.25x0.25 km) and the presence of tick hosts (originally produced using random forest and boosted regression tree approaches).
We also used indices based on recorded intensities of human outdoor activity in forests (based on the OpenStreetMap database) and population density (obtained from WorldPop) as proxies for human exposure to tick bites. We estimated the yearly probability of TBE occurrence using a spatio-temporal boosted regression tree modelling framework. Results: Our results highlight a statistically significant rising trend in the probability of human TBE infections not only in north-western but also in south-western European countries. Areas with the highest probability of human TBE infections are primarily located in central-eastern Europe, the Baltic states, and along the coastline of Nordic countries up to the Bothnian Bay. Such areas are characterised by the presence of key tick host species, forested areas, intense human recreational activity in forests, steep drops in late summer temperatures and high precipitation amounts during the driest months. The model showed good predictive performance, with a mean AUC of 0.85, sensitivity of 0.82, and specificity of 0.80 at the regional level, and a mean AUC of 0.82, sensitivity of 0.80, and specificity of 0.69 at the municipal level. Discussion: With ongoing climate and land use changes, the burden of human TBE infections on European public health is likely to increase, as trends are already indicating. This underscores the need for predictive models that can help prioritise intervention efforts. Hence, the development of a modelling framework that predicts the probability of human TBE infections at the finest administrative scale, based on easily accessible covariates, represents a step forward towards comprehensive TBE risk estimation in Europe.
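The reported performance metrics (AUC, sensitivity, specificity) follow from standard definitions applied to predicted occurrence probabilities. A minimal NumPy sketch on a toy label/score vector (not the study's data) shows how they are computed:

```python
import numpy as np

def auc(y_true, scores):
    """Rank-based AUC: probability a random positive outranks a random negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def sens_spec(y_true, scores, thr=0.5):
    """Sensitivity and specificity at a probability threshold."""
    pred = scores >= thr
    tp = np.sum(pred & (y_true == 1))
    fn = np.sum(~pred & (y_true == 1))
    tn = np.sum(~pred & (y_true == 0))
    fp = np.sum(pred & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 3 regions with observed TBE occurrence (1) and 4 without (0)
y = np.array([1, 1, 1, 0, 0, 0, 0])
p = np.array([0.9, 0.8, 0.4, 0.6, 0.3, 0.2, 0.1])
a = auc(y, p)             # 11/12 ≈ 0.917 on this toy data
se, sp = sens_spec(y, p)  # 2/3 and 3/4
```

The rank-sum form of the AUC is equivalent to counting, over all positive/negative pairs, how often the positive scores higher; sensitivity and specificity depend on the chosen probability threshold, which the study would set per administrative level.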
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Synergy of extreme weather and socio-economic factors in improved understanding and prediction of water associated diseases in India: A machine learning and Bayesian statistics approach

Authors: Ranith Rajamohananpillai, Farzana Harris, Dr Nandini Menon, Dr Anas Abdulaziz, Grinson George, Gemma Kulk, Dr Shubha Sathyendranath
Affiliations: Nansen Environmental Research Centre India, CSIR-National Institute of Oceanography, ICAR-Central Marine Fisheries Research Institute, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre of Earth Observation, Plymouth Marine Laboratory
The increasing frequency and intensity of extreme weather events such as flooding, heavy precipitation, and coastal inundation significantly affect public health, as they worsen the spread of water-associated diseases (cholera, leptospirosis, and acute diarrhoeal diseases). Extreme weather events, when combined with socioeconomic factors, can create complex and spatially heterogeneous patterns of disease incidence that demand advanced analytical frameworks for effective prediction and mitigation. This study combines machine learning, Bayesian spatial statistics, and advanced computational models to predict the relationships between extreme weather events, socioeconomic factors, and associated waterborne disease occurrences in India. Data used in this study were obtained from models, government open data portals, and Earth observation. Weekly records of water-associated diseases over a 15-year period (2009-2023) were taken from the Integrated Disease Surveillance Programme (IDSP). Daily precipitation was obtained from the Climate Hazards Center InfraRed Precipitation with Station (CHIRPS) dataset, version 2.0. The duration and frequency of flooding, as well as the area inundated, were estimated from the Global Flood Database and from Sentinel-1 SAR products. Socioeconomic data on population density, sanitation conditions and income levels were collected from NASA's Socioeconomic Data and Applications Center (SEDAC). The vulnerability of a particular district or area was assessed using a multi-layered approach coupling Bayesian hierarchical models for spatial risk mapping with machine learning methods such as neural networks and random forests. Machine learning algorithms were also used to evaluate feature importance, optimise predictive accuracy, and identify emerging hotspots at risk of increased water-associated diseases. The integrated model identified several disease hotspot clusters across India.
In addition, the study helped link climate-change-induced extreme weather events in one place with outbreaks of water-associated diseases in another, where the former is connected to the latter through water circulation. A noteworthy finding from this study is the inferred link between the high prevalence of cholera in the coastal districts of West Bengal following heavy rainfall in the upstream districts and flooding in the watershed. Similarly, in Punjab, a surge in cholera cases during 2016 appeared to be linked to flooding in Lahore and the dynamics of the Lahore-Punjab watershed. In both states, the affected regions were characterised by high rural population density, low income levels, poor sanitation and poor primary healthcare facilities. Preliminary results showed that this integrated framework effectively predicted disease vulnerability and hotspots with high accuracy (~80%), which could support targeted public health interventions and resource allocation. Moreover, the Bayesian framework provided quantification of uncertainty within predictions, yielding robust, interpretable risk assessments important for policy making and water-associated disease management. By integrating advanced computational techniques with regional weather and socioeconomic datasets, this study highlights the importance of interdisciplinary approaches to climate-sensitive public health challenges.
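The Bayesian uncertainty quantification mentioned above can be illustrated with the simplest possible case: a Beta-Binomial posterior over a single district's weekly outbreak probability. The counts below are hypothetical, and the study itself uses hierarchical spatial models rather than this one-district sketch:

```python
import numpy as np

# Hypothetical district: 780 surveillance weeks (15 years), 47 weeks with outbreaks
weeks, outbreak_weeks = 780, 47

# Beta(1, 1) uniform prior -> Beta posterior over the outbreak probability
a_post = 1 + outbreak_weeks
b_post = 1 + (weeks - outbreak_weeks)
posterior_mean = a_post / (a_post + b_post)

# 95% credible interval via Monte Carlo draws from the posterior
rng = np.random.default_rng(7)
draws = rng.beta(a_post, b_post, size=100_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
```

The credible interval, not just the point estimate, is what makes the risk map interpretable for policy: two districts with the same posterior mean but different interval widths warrant different levels of confidence in intervention targeting.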
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Analyzing Cholera Outbreaks: Dynamics, Risks, and Response Measures

Authors: Aswin Sachidanandan, Dr Neelam Taneja, Shubha Sathyendranath, Dhritiraj Sengupta
Affiliations: Post Graduate Institute of Medical Education and Research, Plymouth Marine Laboratory (PML), National Centre for Earth Observation, Plymouth Marine Laboratory
Cholera is a contagious illness caused by Vibrio cholerae and spread through the faecal-oral route. Contaminated water and food are the most frequent vehicles of V. cholerae outbreak infections in humans, and cholera toxin has been identified as a factor in its virulence. The spread of the disease is facilitated by overcrowding, insufficient sanitation, poor hygiene, and the absence of safe drinking water. As a result, it remains a significant health concern in developing nations, where it is endemic and can also cause outbreaks. In this study we investigated cholera outbreaks in Kumbra village, Sector 46, Mohali, from 2nd July to 3rd August 2024, and in Morinda, Rupnagar, Punjab, India, from 19th to 23rd August 2024. The total number of persons hospitalised with acute watery diarrhoea was 89 for Kumbra village and 26 for Morinda. We obtained data from local hospitals and the relevant health authorities, and by implementing a citizen science programme. Water samples were collected from suspected contaminated water sources. We observed that the areas were overcrowded and lacked a proper water supply. Water samples were filtered and tested for coliforms and V. cholerae. Strains were identified by MALDI-TOF, RT-PCR and serotyping. A total nutrient analysis was performed for both locations. Cultures were positive in all 12 stool samples from suspected patients. Affected locations were visited to collect information on water sources, sanitation, and leakage or breaks in latrines/sewage pipelines; photographs and results were recorded separately. Of the 14 water samples collected from these houses, 60% had a brown colour and bad odour.
From the citizen science data, we observed that 70% of the houses used municipal pipeline water for drinking and basic household activities, while the remaining 30% depended on other sources such as their own wells, handpumps and tanker water supply. According to citizens, the group most affected by diarrhoea was women between the ages of 19 and 60. In the total nutrient analysis for nitrite, silicate, phosphate and sulphate, all 14 samples collected from Kumbra showed high silicate concentrations (246.56 µM), followed by sulphate (221.62 µM), nitrite (18.89 µM) and phosphate (1.96 µM), whereas water samples from Morinda had lower concentrations (below 20 µM) of all the above nutrients. RT-PCR was positive for V. cholerae in water samples collected from both regions. Cholera remains a significant problem in communities which lack adequate sanitation and proper access to drinking water, or where municipal pipelines or private wells exist but are not properly installed. The citizen science survey also supported the finding that these houses lack proper sanitation. We plan to study the effect of nutrients on the growth of V. cholerae and the effect of various climatic factors on the seasonality of cholera by remote sensing. Keywords: Cholera, North India, Environmental Vibrio, Acute Diarrhoeal Disease, Epidemics
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Spatial and temporal detection of gold panning sites by remote sensing

Authors: Poullo Baidy Ba, Paul Passy, Emilie Lavie, Laurent Bruckmann
Affiliations: Phd Student, Senior Lecturer, Senior Lecturer with habilitation, Research Scientist
Gold mining, particularly artisanal gold mining, has been practised for several decades in West Africa. Senegal and Mali are among the countries that have experienced significant gold rushes, particularly during the droughts of the 1970s and 1980s. In Mali, gold panning dates back to the 13th century and developed under the Manding Empire (Boukaré, n.d.). Today, Mali is Africa's third-largest gold producer, after South Africa and Ghana, with annual production estimated at 60 tonnes in 2018 (Boukaré, n.d.) and 65 tonnes in 2023. Mining activities are particularly concentrated in the gold-rich regions of Kayes, Koulikoro, and Sikasso. In Senegal, gold panning is mainly carried out in the southern regions, notably in Kédougou, and extends towards Bakel along the main tributary of the Senegal River. This activity involves local, foreign, and multinational gold miners. Both Mali and Senegal face numerous challenges associated with mining, particularly artisanal, semi-mechanised, and industrial operations. Gold-panning sites are often located along the banks of the Faleme River, the main tributary of the Senegal River. The Faleme is a highly coveted resource with diverse uses: agriculture, fishing, livestock, domestic needs, energy production, and, of course, gold mining. An estimated 387,895 people, or 55,414 households, depend directly or indirectly on the income generated by its waters and sub-tributaries (« Appel-de-Keniéba.pdf », n.d.). In response to these challenges, remote sensing offers a promising solution for mapping and monitoring the evolution of both legal and illegal gold-panning sites within this watershed. High-resolution satellite imagery, such as Sentinel-2, makes it possible to monitor the spatial and temporal evolution of these sites. Remote sensing provides a comprehensive view of site conditions using vegetation, water, and bare soil indices, combined with machine learning techniques.
This approach reduces the cost and time required for exhaustive mapping while enabling dynamic analysis of the affected areas. The aim of our study is to demonstrate the feasibility of using remote sensing images to accurately map gold panning areas, and to illustrate the evolution of these areas over time and space within the transboundary watershed of the Faleme River. A gold-panning site is often characterized by the presence of turbid water bodies and bare ground. Water and bare soil indices are particularly useful for detecting such sites. By analyzing Sentinel-2 images, we can track the temporal and spatial distribution of these areas along the Faleme River, observing changes in these indices over time. This method provides a detailed overview of the extent of gold panning, its evolution, and its environmental impacts.
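The water and bare-soil indices mentioned above can be computed directly from Sentinel-2 reflectance bands. Below is a minimal NumPy sketch using McFeeters' NDWI and a commonly used Bare Soil Index; the band assignments follow Sentinel-2 conventions, but the threshold values are illustrative assumptions, not the study's calibrated choices:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: positive over open water (Sentinel-2 B03, B08)."""
    return (green - nir) / (green + nir)

def bsi(blue, red, nir, swir):
    """Bare Soil Index from B02, B04, B08, B11: high over bare/disturbed ground."""
    return ((swir + red) - (nir + blue)) / ((swir + red) + (nir + blue))

# Toy 2x2 reflectance scene: [water pixel, bare-soil pixel; vegetation, vegetation]
b02 = np.array([[0.05, 0.10], [0.03, 0.03]])  # blue
b03 = np.array([[0.08, 0.12], [0.05, 0.05]])  # green
b04 = np.array([[0.06, 0.20], [0.04, 0.04]])  # red
b08 = np.array([[0.02, 0.30], [0.40, 0.40]])  # NIR
b11 = np.array([[0.01, 0.35], [0.15, 0.15]])  # SWIR

water_mask = ndwi(b03, b08) > 0             # illustrative threshold
bare_mask = bsi(b02, b04, b08, b11) > 0.1   # illustrative threshold
candidate_sites = water_mask | bare_mask    # turbid ponds or stripped ground
```

Tracking these masks through a Sentinel-2 time series is what allows the spatial and temporal evolution of gold-panning areas to be mapped; in practice a machine learning classifier would refine these simple threshold rules.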
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: From Contamination to Clarity: An Assessment of Water Quality and Public Health Risks in Lake Vembanad, India

Authors: Ms Ancy C Stoy, Dr. Grinson George, Dr Nandini Menon N, Dr Ranith R, Dr Jasmin C, Dr Anas Abdulaziz, Dr Shubha
Affiliations:
The United Nations General Assembly has identified availability of clean water and sanitation for all as one of the Sustainable Development Goals (SDG 6) to be achieved by 2030. Clean and clear water reflects the health of aquatic ecosystems and their interconnected impact on human well-being. Climate change has redefined perspectives on water transparency. Secchi depth (ZSD), previously regarded solely as an optical property of water, is now regarded as a critical ecological indicator, as it serves as a simple and reliable proxy for assessing water transparency and the overall water quality of aquatic systems. This study investigates the relationship of Secchi depth and the Forel-Ule (FU) colour index with the prevalence of pathogens causing waterborne diseases in Vembanad Lake, Kerala, India. Since 2019, in-situ measurements of Secchi depth have been collected across multiple sites in the lake using a 3D-printed Mini Secchi disk fitted with an FU scale to assess variations in water colour and clarity. This ongoing data collection, continuing through 2024, is part of a citizen science programme in which local people and university students are involved in water quality monitoring. Preliminary analyses revealed significant spatial and temporal variations in ZSD, suggesting potential links to pollution, algal blooms, sewage discharge, resuspension of sediments and churning of the water column. ZSD measurements as low as 0.01 m and as high as 4.4 m have been recorded. Likewise, FU values ranged from 12 to 18 across Vembanad Lake, with occasional high values of 21. The extreme values of ZSD and the FU index indicate fluctuations beyond normal seasonal variations in water clarity and colour, highlighting potential shifts in water quality over time. The mixing of septic sewage with natural waters is a common occurrence in Vembanad Lake.
Our microbiological studies have estimated the abundance of pathogenic bacteria such as Vibrio cholerae, Leptospira and Escherichia coli in Vembanad Lake year-round. Reduced water clarity, indicated by lower Secchi depth, together with high FU values, could be considered an indication of water contamination during times of septic sewage mixing and extreme climate conditions, pointing to the potential health risks posed by pathogens such as Escherichia coli. The abundance of Vibrio spp. was also significantly higher during the southwest monsoon of 2018, coinciding with a once-in-a-century flood event in the lake. The study underscores the importance of regular monitoring of water quality. Secchi depths and FU values are easier to understand than sophisticated water quality measurements for non-scientists and managers who routinely monitor and make decisions to promote the aesthetic value of an aquatic area, and its public health and economic importance, by limiting public exposure to contaminated waters and reducing the risk of disease outbreaks.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Operational surveillance of environmental factors associated with Dengue transmission at country level in Argentina: Can a few parameters alert us to dengue outbreaks?

Authors: PhD Elisabet Benitez, MsD. Pablo Zadder, Lic. Julieta Motter, Ximena Porcasi
Affiliations: Conae_ Instituo Gulich
In Argentina, a Dengue environmental risk model has provided information at country level for 10 years via the CONAE geoserver website. It uses daily land surface temperature (LST) data from MODIS to estimate the possible cycles of Aedes aegypti development and subsequent mosquito infection with the Dengue virus. The model is based on simple arguments of immature-stage development, hatching and possible female infection, considering the incubation cycles of the virus (Porcasi et al., 2012). The output can be considered the environmental threat component of Dengue transmission risk. It is updated annually with the previous year's temperature observations for localities with more than 5,000 inhabitants. Here we analysed the changes over the last three years in relation to the worst registers of Dengue cases in the country. For 2022, the model recorded a lower number of complete cycles, with a maximum of 32 and an average of 13, while in 2023 it recorded values higher than 33 complete cycles in some localities and an average of 14.7. These increases are not homogeneous across the country's localities: 67.4% of localities increased their environmental threat by one or more cycles with respect to the previous year, and only 11.6% decreased it. Depending on the temperature, each cycle can be considered an exposure of approximately 13 days (13 more days per year than the previous one), reaching up to 3 cycles (45 more days per year) in some places. Consequently, since 2009, Argentina has shown intermittent and increasing outbreaks of the disease, covering more and more localities/cities and more cases: the incidence went from 2 cases per 100,000 inhabitants in 2022 to 123 cases per 100,000 inhabitants in 2023, while up to October 2024 the accumulated incidence already reached 1,262 cases per 100,000 inhabitants. Spatially, during 2022 cases were reported in 35 departments, while in 2023 there were 246 administrative units reporting cases.
The south-western geographic expansion of notifications coincides with the increase in the number of possible cycles in the same direction in the environmental threat maps. Here we show the association of the latest Dengue outbreaks at the country level with a simple model based on the influence of temperature (LST) on mosquito and virus cycling. This demonstrates the usefulness of EO-derived products at national and regional levels for surveillance of Dengue and other arboviral diseases. In addition, improved, alternative EO products from this system are proposed.
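The cycle-counting logic described above can be sketched as follows. This is an illustrative toy, not the Porcasi et al. (2012) model itself: the temperature threshold (`t_min`) and fixed cycle length (`cycle_days`) are invented parameters.

```python
# Hypothetical sketch of a temperature-driven cycle counter, loosely
# mirroring the idea of the model described in the abstract. Parameter
# values are illustrative assumptions, not the model's actual thresholds.

def count_cycles(daily_lst_c, t_min=18.0, cycle_days=13):
    """Count completed development cycles from a daily LST series (deg C).

    A day contributes to a cycle only if it is warm enough for
    immature-stage development; every `cycle_days` suitable days
    close one complete cycle.
    """
    suitable = 0
    cycles = 0
    for t in daily_lst_c:
        if t >= t_min:
            suitable += 1
            if suitable % cycle_days == 0:
                cycles += 1
    return cycles

# A warm year closes more cycles than a year with a cold half.
warm_year = [25.0] * 365
cool_year = [25.0] * 180 + [10.0] * 185
print(count_cycles(warm_year))  # 28
print(count_cycles(cool_year))  # 13
```

In this toy, one extra completed cycle corresponds to roughly 13 more days of exposure per year, matching the interpretation given in the abstract.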
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: ENgaging Researchers and coastal population In Communicating ocean’s role on human Health (ENRICH)

Authors: Dr Roshin P. Raj, Dr Ranith Rajamohanan Pillai, Dr Pratheesh C Mammen, Mr Ramesh Krishnan, Dr Nandini Menon, Mr Lasse Pettersson
Affiliations: Nansen Environmental and Remote Sensing Center, Bjerknes Center for Climate Research, Nansen Environmental Research Centre (India), Foundation for Development Action
The coastal areas of developing and under-developed countries with high population density are the most vulnerable to climate change, even though climate-related changes in the ocean and their impacts are global. Climate-related changes in the ocean, such as sea level rise and tidal flooding, drive these coastal communities to live in unhygienic and unsanitary conditions, making them vulnerable to water-associated diseases. Another indirect influence of climate-related changes in the ocean on human health is the increased occurrence of Harmful Algal Blooms (HABs). Cyanobacterial HABs, in addition to their adverse impact on fisheries and thus on the economic situation of coastal communities, also have an established affinity with Vibrio cholerae, thereby increasing the vulnerability of the region to water-borne diseases. Making sustainable changes to the socio-economic status and health conditions of vulnerable coastal populations requires bringing together environmental and social science as well as public health organisations to scan the horizon for emerging climate-associated disease threats and their impacts on the local population. In addition to research-based initiatives, methods to engage, communicate and disseminate research-based knowledge among vulnerable communities also need to be improved, which in turn is expected to raise awareness among the population, enhance preparedness and reduce health issues (water-associated diseases) due to climate-related changes in the ocean and coastal regions. ENRICH is an Indo-Norwegian transdisciplinary project funded by the Research Council of Norway that aims to engage researchers and the coastal population to effectively communicate and disseminate research-based knowledge on climate-related changes in the ocean and their implications for human health vulnerability. 
ENRICH focusses on the state of Kerala, the most densely populated coastal zone in South India (total length: 593 km; population: 36 million). ENRICH will use methodologies and digital tools such as surveys, data collection and analysis, awareness-raising and capacity building of the targeted population towards the development of a sustainable citizen science programme, and dissemination of knowledge and scientific information to local communities and stakeholders through web/mobile applications, to improve community engagement. ENRICH will invest in the next generation by providing educational activities for school children, specific courses for school teachers, and training courses for college lecturers, undergraduates and graduate students. By addressing the vulnerable coastal population and investing in the next generation (school children, school teachers and college lecturers, undergraduate and postgraduate students), ENRICH directly addresses the stated goal of the UN Decade of Ocean Science for Sustainable Development (2021-2030) to provide a greater understanding of the importance of the ocean for all segments of the population. ENRICH activities will indirectly address 5 of the 17 UN Sustainable Development Goals.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: A.08.12 - POSTER - Advances and applications of sea surface temperature and the Group for High Resolution Sea Surface Temperature

Sea surface temperature (SST) is a fundamental physical variable for understanding, quantifying and predicting complex interactions between the ocean and the atmosphere. SST measurements have been performed operationally from satellites since the early 1980s and benefit a wide spectrum of applications, including ocean, weather, climate and seasonal monitoring/forecasting, military defense operations, validation of atmospheric models, sea turtle tracking, evaluation of coral bleaching, tourism, and commercial fisheries management. The international science and operational activities are coordinated within the Group for High Resolution Sea Surface Temperature (GHRSST) and the CEOS SST Virtual Constellation (CEOS SST-VC) to provide daily global SST maps for operational systems, climate modeling, and scientific research. GHRSST promotes the development of new products and the application of satellites for monitoring SST by enabling SST data producers, users and scientists to collaborate within an agreed framework of best practices.

New satellites with a surface temperature observing capacity are currently being planned for launch and operations with ESA and EUMETSAT, such as CIMR, Sentinel-3C/D, and Sentinel-3 Next Generation Optical. In addition, new ultra-high-resolution missions such as TRISHNA and LSTM are in planning. These satellite missions will continue the provision of high-quality SST observations and open up opportunities for further applications. However, this will also require new developments and innovations in retrievals, validation, etc. It is therefore important that developments in high-resolution SST products are presented and coordinated with the ongoing international SST activities. Research and development continue to tackle problems such as instrument calibration, algorithm development, diurnal variability, derivation of high-quality skin and depth temperature, the relation with sea ice surface temperature (IST) in the marginal ice zone, and areas of specific interest such as the high latitudes and coastal areas.

This session is dedicated to the presentation of applications and advances within SST and IST observations from satellites, including the calibration and validation of existing L2, L3 and L4 SST products in GHRSST Data Specification (GDS) and preparation activities for future missions. We also invite submissions for investigations that look into the harmonization and combination of products from multi-mission satellites.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: SST and Combined SST/IST Products Overview: The Danish Meteorological Institute's Contribution to Copernicus Marine and Climate Change Services

Authors: Ioanna Karagali, Pia Englyst, Ida Lundtorp Olsen, Guisella Gacitúa, Alexander Hayward, Wiebke Kolbe, Jacob Høyer
Affiliations: DMI
The Copernicus Marine Service (CMS) and Copernicus Climate Change Service (C3S) are responsible for complementary reprocessing activities using satellite ocean observations. CMS encompasses reprocessing at global and regional scales, including all observations available at a given time (reprocessing of Essential Ocean Variables, EOVs). C3S fosters climate reprocessing, typically at global scale, with special focus on the most accurate observations and homogeneous time series (reprocessing of Essential Climate Variables, ECVs). The Danish Meteorological Institute (DMI) serves as a Production Unit (PU) for the Sea Surface Temperature (SST) and Sea Ice (SI) Thematic Assembly Centers (TAC) of CMS and the SST ECV of C3S. Within both frameworks, a suite of GHRSST-compliant L3S and L4 SST and combined SST/IST products is produced for the Baltic and North Sea (CMS), the Pan-Arctic (CMS) and the Global Ocean (C3S). At the end of 2024, the new C3S SST/IST global L4 Climate Data Record (1982-2024) was released, which provides a unique opportunity to assess temperature changes over the global ocean, including regions with sea-ice cover. In early 2025, a reprocessed version of the Baltic Sea and North Sea Reanalysis product (1982-2024) will be released using the latest version of the ESA SST_cci L2P data as input. The aim of this presentation is to provide an overview of the existing and new products and their quality, a summary of the improvements implemented during the period 2022-2024, and those foreseen for the period 2025-2028.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Evaluation of NOAA ACSPO SST Products against Independent Saildrone Data

Authors: Dr. Irina Gladkova, Dr. Olafur Jonasson, Dr. Veronica Lance, Dr. Yuri Kihai
Affiliations: City College Of New York, National Oceanic and Atmospheric Administration, Global Science and Technology Inc.
Information about satellite-derived sea surface temperature (SST), including its diurnal cycle and derived ocean thermal fronts, is important for a wide variety of users, studies and applications. To address these users' needs, NOAA has developed an enterprise SST system, the Advanced Clear Sky Processor for Ocean (ACSPO). Currently, ACSPO processes data from multiple high-resolution (~1 km) infrared satellite sensors flown in low Earth (LEO: JPSS VIIRS, EOS MODIS, Metop AVHRR FRAC) and geostationary (GEO: GOES-R ABI, Japan Himawari, and European Meteosat) orbits. ACSPO produces a wide range of L2P (swath) and 0.02° (~2 km) gridded Level 3 products, including uncollated (L3U), collated (L3C), and super-collated (L3S). All ACSPO data follow Group for High Resolution SST (GHRSST) guidance and standards and are available to users in GHRSST Data Specification version 2 (GDS) NetCDF format via various services (NOAA STAR CoastWatch https://coastwatch.noaa.gov/cwn/index.html, OSPO Product and Data Access, EUMETSAT EumetCast) and archives (NOAA NCEI https://www.ncei.noaa.gov/, NASA PO.DAAC https://podaac.jpl.nasa.gov/). Two features of the ACSPO products are relevant to this study: 1) Resolving the SST diurnal cycle. All ACSPO GEO products are reported hourly, 24 Full Disks per day for each processed GEO satellite. The LEO satellites report SST at least twice a day, during the daytime and nighttime overpasses. There are two types of LEO platforms. Following the NOAA-EUMETSAT interagency agreement, NOAA launches its POES/JPSS LEO satellites in the afternoon "PM" orbit around ~1:30am/pm, whereas EUMETSAT operates its Metop satellites in a mid-morning "AM" orbit at ~9:30am/pm. In principle, a combination of the day and night (D/N) data in the AM/PM orbits contains some information about the diurnal cycle. 
This consideration motivated the ACSPO L3S-LEO design, which first combines the LEO data into four separate L3S-LEO-AM/PM-D/N files, and only then aggregates them into one daily L3S-LEO-DY file. However, unlike the GEO processing, where one regression equation, trained against in situ data, is applied to all data of one satellite, the LEO regressions for day and night are different (the daytime regression uses only longwave IR bands, while the nighttime regression additionally employs the 3.7 µm band) and are trained against in situ SSTs independently. 2) Reporting thermal fronts. Along with SST, two additional variables are included in all ACSPO files (LEO/GEO, L2P/L3U/L3C/L3S): a binary mask indicating the presence of a thermal front, and its intensity in K/km. ACSPO products undergo extensive validation against in situ SST measurements. The conventional in situ data cover the ocean near-globally and span various time scales, from over a century for ships to several decades for drifting and moored buoys (~1980 – on) and Argo floats (late 1990s – on). These observations have been incorporated in the NOAA in situ SST Quality Monitor system (iQuam; https://www.star.nesdis.noaa.gov/socd/sst/iquam/) for the satellite era from 1981 onward. iQuam ingests data from multiple NOAA, national and international data centers and archives, performs consistent quality control, and serves the data to NOAA, national and international users online, in near-real time. The ACSPO Team uses iQuam for a variety of purposes, including validation of ACSPO SST products, which is reported in another NOAA online system, the SST Quality Monitor (SQUAM; https://www.star.nesdis.noaa.gov/socd/sst/squam/). Note that no specific validation is provisioned in SQUAM for the diurnal cycle or fronts. As a result, their quantitative assessment remains limited to off-line ad-hoc analyses and initial semi-qualitative estimates. 
Frontal locations are visualized in the NOAA ACSPO Regional Monitor for SST (ARMS; https://www.star.nesdis.noaa.gov/socd/sst/arms/) online system to facilitate routine, qualitative evaluation of their performance. More recently, in the mid-2010s, a new type of in situ data from automated unmanned vehicles, Saildrones, was introduced (Gentemann et al., 2020). Saildrones sample the ocean surface every minute and provide multiple measurements, including the skin SST (from an IR instrument) and the bulk SST (from the CTD sensor at 0.6 m below the sea surface). Initial analysis of the data suggests that Saildrone SSTs are of sufficient accuracy to allow validation of satellite SSTs and gradients, and although the coverage remains limited to selected regions of the ocean, independent evaluation of satellite SSTs is possible (e.g., Koutantou et al., 2023). At the time of writing, we have compared two ACSPO gridded SST products (L3C GOES-18 and L3S-LEO) with Saildrone data from the Tropical Pacific Observing System (TPOS) 2023 mission, using data from https://www.pmel.noaa.gov/ocs/saildrone/data-access, with a focus on the diurnal cycle and thermal fronts. The gridded (0.02° ~2 km) Level 3 products were selected for the initial analysis because they provide a convenient equilateral latitude/longitude grid for matching the Saildrone geo-locations with satellite observations. The selected gridded GEO collated (GOES-18 L3C) and LEO super-collated (L3S-LEO) products have larger clear-sky coverage and less noise than individual satellite observations, and have the potential to resolve the diurnal cycle and the movement of ocean features. The GEO L3C has an hourly temporal resolution, and the four L3S-LEO files per day capture the evolution of SSTs retrieved from LEO platforms at approximately 9:30am/pm and 1:30am/pm. 
The 1-minute temporal resolution of the Saildrones allows nearly instantaneous matching with the satellite time record. Our current analyses suggest that the diurnal signals in GOES-18 and Saildrone SSTs are largely consistent in shape and amplitude. The four diurnal points in the L3S-LEO-AM/PM-D/N show less agreement with Saildrone SST. This is expected, as the D/N regression equations are different and trained against in situ data independently. More analyses are needed to better quantify and understand the ACSPO/Saildrone consistency. Thermal fronts occur in only a small fraction of the data, and finding a statistically representative match-up dataset for comparison has proven challenging to date. The traditional drifting and moored buoys, which are routinely ingested by iQuam from multiple NOAA, national and international data centers and archives, will also be included in the analyses. Although traditional buoys and drifters measure SST at a depth below the surface, rather than skin or sub-skin temperature, they have significantly wider spatial coverage. In combination with the Saildrone's capability to measure SST at different levels, conventional in situ SSTs provide a good foundation for statistical analyses and additional consistency checks. More details and further results will be presented at the Symposium.
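The matchup step described above can be illustrated with a minimal sketch: snapping a 1-minute Saildrone observation to the nearest cell of a 0.02° latitude/longitude grid and to the nearest hourly GEO L3C time slot. This is not the actual ACSPO matchup code; the grid origin and rounding conventions here are assumptions.

```python
from datetime import datetime, timedelta

# Illustrative grid/time matching (assumed conventions, not ACSPO's).
GRID_STEP = 0.02  # degrees, as in the ACSPO gridded L3 products

def nearest_grid_cell(lat, lon, step=GRID_STEP):
    """Return (row, col) indices of the nearest grid cell, counting
    rows from -90 deg latitude and columns from -180 deg longitude."""
    row = round((lat + 90.0) / step)
    col = round((lon + 180.0) / step)
    return row, col

def nearest_hour(t):
    """Round a timestamp to the nearest hourly L3C slot."""
    base = t.replace(minute=0, second=0, microsecond=0)
    return base + timedelta(hours=1) if t.minute >= 30 else base

obs_time = datetime(2023, 7, 1, 14, 42)
print(nearest_grid_cell(-2.013, -140.009))  # a cell in the TPOS region
print(nearest_hour(obs_time))  # 2023-07-01 15:00:00
```

A real matchup would additionally screen for cloud flags and quality levels in the satellite product before forming the pair.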
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Preliminary Assessment of the Copernicus Imaging Microwave Radiometer (CIMR) Impact on Mediterranean Sea Surface Temperature L4 Analyses

Authors: Mr. Mattia Sabatini, Andrea Pisano, Claudia Fanelli, Bruno Buongiorno Nardelli, Dr. Gian Luigi Liberti, Dr. Rosalia Santoleri, Daniele Ciani
Affiliations: CNR-ISMAR, CNR-ISMAR, University of Naples "Parthenope"
Regular and continuous satellite-based mapping of sea surface temperature (SST) is essential for developing global and regional SST datasets, which support both near real-time operational applications and the creation of long-term climate time series. SST is a crucial variable for studying ocean dynamics, ocean-atmosphere interactions, and climate variability, serving as a key indicator for tracking global warming and assessing the health of marine ecosystems. Currently, spaceborne infrared radiometers provide highly accurate, high-resolution SST measurements, from 1 km to 100 m. However, they are limited by their inability to penetrate cloud cover. In contrast, microwave radiometers offer near all-weather observation capabilities but typically operate at lower spatial resolutions due to instrumental constraints. The upcoming Copernicus Imaging Microwave Radiometer (CIMR) mission marks a significant advancement in microwave SST remote sensing, promising a spatial resolution of up to 15 km - representing a substantial improvement over existing passive microwave systems. Our preliminary study explores the potential impact of the CIMR mission on Mediterranean Sea SST products, which are currently produced and distributed by the European Copernicus Marine Service using only infrared SST data. Through an Observing System Simulation Experiment (OSSE), we evaluate the effect of integrating synthetic CIMR observations into the existing Copernicus Mediterranean SST analysis system, comparing results with and without CIMR data. The findings reveal that incorporating CIMR observations reduces the uncertainty of the Mediterranean SST product, as measured by the root mean square difference (RMSD), by 26%. Additionally, CIMR improves the reconstruction of level-4 fields and demonstrates that its enhanced spatial resolution enables effective microwave SST retrieval even in semi-enclosed basins like the Mediterranean Sea. 
Ongoing studies leveraging oceanographic satellite data focus on integrating AMSR-2 passive microwave (PMW) observations into the Mediterranean Sea L4 processing, serving as an additional preparatory study for incorporating CIMR PMW SSTs in the future. Although the AMSR-2 footprint and the resolution of the operational observations are not ideal for applications in semi-enclosed basins and coastal areas, this approach can be valuable offshore, especially under cloud cover conditions typically associated with large-scale low-pressure disturbances.
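The RMSD metric behind the 26% figure quoted above can be sketched in a few lines; the arrays here are toy values, not the OSSE fields.

```python
import numpy as np

def rmsd(field, truth):
    """Root mean square difference over valid (non-NaN) pixels."""
    diff = field - truth
    return float(np.sqrt(np.nanmean(diff ** 2)))

# Toy 2x2 "truth" field and two reconstructions, one of which is closer.
truth = np.array([[20.0, 20.5], [21.0, 21.5]])
without_extra_obs = np.array([[20.4, 20.0], [21.5, 21.0]])
with_extra_obs = np.array([[20.2, 20.3], [21.2, 21.4]])

r0 = rmsd(without_extra_obs, truth)
r1 = rmsd(with_extra_obs, truth)
print(f"RMSD reduction: {100 * (r0 - r1) / r0:.0f}%")  # RMSD reduction: 62%
```

In an OSSE the "truth" is the nature run, and the two reconstructions correspond to the analyses with and without the synthetic CIMR observations.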
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Exploring new cloud detection algorithms for remote sensing SST observations using a data-driven approach and the multifractal theory of turbulence.

Authors: Aina Garcia-Espriu, Dr. Cristina González-Haro, Dr. Jordi Isern-Fontanet, Dr. Antonio Turiel
Affiliations: Institute Of Marine Sciences (ICM, CSIC)
Cloud detection is crucial for Sea Surface Temperature (SST) measurements from satellite data. If clouds are present but not properly detected and masked, the measurements can reflect the much colder cloud-top temperature rather than the actual sea surface temperature, affecting statistics and larger-scale analyses. The presence of clouds can thus cause a cold bias in SST measurements, since cloud tops are generally much colder than the ocean surface. Conversely, if the cloud-detection algorithm masks strong cold gradients based on local SST differences, SST measurements in upwelling regions may be masked out. This can lead to systematic underestimation of SST values or a decrease in coverage in affected regions. Current cloud mask algorithms still have limitations in detecting pixels with only partial cloud coverage, whose temperature is an incorrect mixture of cloud-top and sea surface temperatures. This creates unreliable data points that accurately represent neither the cloud nor the sea temperature. Inaccurate SST measurements can mask or artificially create temperature trends, potentially leading to incorrect conclusions about climate change patterns or ocean warming rates. They also affect studies of ocean dynamics, as incorrect cloud masking can lead to misinterpretation of circulation patterns. Finally, they have a strong impact on weather forecasting, as cloud-contaminated values can degrade forecast accuracy for coastal and marine weather predictions. Here, we present a new machine-learning approach to improve cloud detection algorithms for SST remote sensing observations. The approach is based on the Microcanonical Multifractal Formalism of turbulence (Turiel et al. 2008). The characterization of remotely sensed ocean variables by means of the Multifractal Formalism has been known for some 20 years (Lovejoy et al. 2001; Turiel et al. 2005). 
Owing to ocean turbulence, singularity analysis allows information about marine hydrography to be retrieved from sea surface temperature or other remotely sensed variables (Turiel et al. 2005). This has mainly been used to develop data fusion techniques, and also for the characterization of ocean currents (Umbert et al. 2014; Olmedo et al. 2016). In recent years, we have gained a deeper understanding of the connection between intermittency and dissipation in ocean turbulence (Isern-Fontanet and Turiel 2021). The characteristics of the energy cascade determine the functional dependency of the singularity spectrum and, thus, the geometrical properties of the flow (fractal dimensions). These geometrical properties should hold whether we are analysing brightness temperature (TB) or SST observations. However, we observe a different dynamic response between different kinds of clouds, ocean, and land. By combining the TB and the SST along with their associated singularity exponents, we can train machine learning algorithms to obtain a more accurate cloud mask. Preliminary results using a clustering approach with Sentinel-3 L2P data show a more accurate segmentation between cloud and ocean pixels. 
References: Isern-Fontanet, J., & Turiel, A. (2021). On the connection between intermittency and dissipation in ocean turbulence: A multifractal approach. Journal of Physical Oceanography, 51(8), 2639–2653. https://doi.org/10.1175/JPO-D-20-0256.1. Lovejoy, S., Currie, W., Tessier, Y., Claereboudt, M., Bourget, E., Roff, J., & Schertzer, E. (2001). Universal multifractals and ocean patchiness: phytoplankton, physical fields and coastal heterogeneity. J. Plankton Res., 23(2), 117–141. Olmedo, E., et al. (2016). Improving time and space resolution of SMOS salinity maps using multifractal fusion. Remote Sensing of Environment, 180, 246–263. Turiel, A., Isern-Fontanet, J., García-Ladona, E., & Font, J. (2005). A multifractal method for the instantaneous evaluation of the stream-function in geophysical flows. Phys. Rev. Lett., 95, 104502. https://doi.org/10.1103/PhysRevLett.95.104502. Turiel, A., Yahia, H., & Pérez-Vicente, C. J. (2008). Microcanonical multifractal formalism—A geometrical approach to multifractal systems: Part I. Singularity analysis. Journal of Physics A: Mathematical and Theoretical, 41(1), 015501. Umbert, M., et al. (2014). Multifractal synergy among ocean scalars: applications to the blending of remote sensing data. Remote Sensing of Environment, 146, 188–200.
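As a very rough, self-contained illustration of what a singularity exponent is (not the Microcanonical Multifractal Formalism implementation the authors use): for a 1-D signal behaving like |x - x0|^h near x0, the gradient measure accumulated over a window of growing radius scales roughly as r^h, so a log-log fit recovers an estimate of h.

```python
import numpy as np

def singularity_exponent(signal, i, scales):
    """Rough local singularity exponent at index i: log-log slope of the
    absolute-gradient measure summed over windows of growing radius.
    This is a didactic proxy, not a production singularity analysis."""
    grad = np.abs(np.gradient(signal))
    measures = [grad[max(i - r, 0): i + r + 1].sum() for r in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(measures), 1)
    return float(slope)

# Signal with a known singularity |x|^0.5 at the centre index.
x = np.linspace(-1.0, 1.0, 2001)
s = np.abs(x) ** 0.5
scales = np.array([8, 16, 32, 64, 128])
est = singularity_exponent(s, 1000, scales)
print(round(est, 2))  # close to 0.5, up to discretization bias
```

In the abstract's setting, such exponents computed from TB and SST fields (with a proper 2-D formalism) become features for the clustering-based cloud/ocean segmentation.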
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: A 45-Year Sea Surface Temperature Climate Data Record From the ESA Climate Change Initiative

Authors: Owen Embury, Christopher Merchant, Simon Good, Jacob Høyer, Nick Rayner, Thomas Block, PhD Sarah Connors
Affiliations: University Of Reading, National Centre for Earth Observation, Met Office, Danish Meteorological institute, Brockmann consult GmbH, European Space Agency
Understanding the state of the climate requires long-term, stable observational records of Essential Climate Variables (ECVs) such as sea surface temperature (SST). ESA's Climate Change Initiative (CCI) was set up to exploit the potential of satellite data to produce climate data records (CDRs). The initiative now includes projects for 27 different ECVs – including SST, which released the third major version of the SST CCI CDR last year. Complementary to the CDR is an Interim CDR (ICDR), providing an ongoing extension in time of the SST CCI CDR at short delay (approx. 2-3 weeks behind present). The ICDR was funded by the Copernicus Climate Change Service (C3S) for 2022 and is funded by the UK Earth Observation Climate Information Service (EOCIS) and UK Marine and Climate Advisory Service (UKMCAS) from 2023 onwards. Version 3 of the SST CCI CDR now covers 45 years, from 1980 to present, using data from twenty infrared and two microwave radiometers. These include reference observations from the dual-view Along Track Scanning Radiometer (ATSR) and Sea and Land Surface Temperature Radiometer (SLSTR) instruments, meteorological observations from the Advanced Very High Resolution Radiometer (AVHRR), and observations from the Advanced Microwave Scanning Radiometer (AMSR)-E and AMSR2, which are less affected by clouds. The dataset includes both single-sensor products (native-resolution Level 2, and averaged onto a global 0.05° grid at Level 3) plus a merged, gap-free Level 4 SST analysis generated using the Met Office Operational Sea Surface Temperature and Ice Analysis (OSTIA) system. All products follow the GHRSST Data Specification (GDS) and CCI Data Standards. The SSTs are harmonised at the sensor level to ensure that multiple satellites can be used together as a single CDR. 
Changes in the satellite overpass time (due to different and drifting orbits) are accounted for by providing both the direct satellite observation and an estimate adjusted to a standardised time-and-depth equivalent to the daily average SST at 20cm. This avoids aliasing the diurnal cycle into the long-term record and allows comparison with the historical in situ record.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Comparing Super-Resolution Techniques for High-Resolution SST Reconstruction in the Tropical Oceans

Authors: Yueli Chen, Oleg Emelianov, Melanie Maria Maier, Simon Sing Hee Tong, Dr. Yawei Wang, Prof. Dr. Xiao Xiang Zhu
Affiliations: Technical University of Munich, Guangzhou Institute of Geography, Guangdong Academy of Sciences
Abnormal variations in sea surface temperature (SST) pose significant threats to coastal ecosystems, fisheries, and weather stability, particularly in tropical regions. These changes exacerbate rapid erosion, increase the frequency of extreme weather events, and accelerate ecosystem degradation. Accurate monitoring of SST is essential for understanding and mitigating phenomena like coral bleaching and marine heatwaves and their cascading impacts on marine biodiversity and human livelihoods. However, existing SST datasets often suffer from a trade-off between spatial resolution and temporal coverage, limiting their ability to capture fine-scale dynamics essential for both scientific understanding and practical applications. To address these challenges, we explore the application of deep learning-based super-resolution (SR) techniques to reconstruct high-resolution, full-coverage SST data from multi-source remote sensing inputs. This study compares several state-of-the-art SR methods, including SRCNN, U-Net, SRGAN, and diffusion models, with the aim of identifying the most effective approach for generating high-spatiotemporal-resolution SST datasets. Using Himawari-8 data as the high-resolution (HR) reference with gaps and OSTIA data as the low-resolution (LR) full-coverage input, the models were trained on data from the tropical oceans around Australia. The transferability of the trained models is further evaluated by applying them to datasets from high-latitude cold regions. Quantitative evaluations of model performance are conducted using standard metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), to assess each method's ability to reconstruct fine-scale SST patterns and preserve critical features. Additionally, we collected in situ SST measurements in the training region and one of the transfer target areas, enabling real-world validation of the reconstructed datasets. 
This ensures the physical validity and reliability of the super-resolved SST products for practical applications. Furthermore, we investigate the potential benefits of incorporating auxiliary conditional data into simpler architectures, such as SRCNN, to overcome limitations inherent to simpler model designs. Comparisons between methods allow us to understand the trade-offs in complexity, accuracy, and transferability, providing actionable insights into the relative merits of each approach. This work contributes to the development of advanced SST reconstruction techniques, offering a pathway to generate accurate, high-resolution SST datasets for continuous monitoring. By enabling precise SST observation, these advancements promote sustainable management of marine resources and ecosystems while enhancing our understanding of SST dynamics in both tropical and high-latitude regions.
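The two evaluation metrics named above can be sketched as follows. PSNR is standard; the SSIM shown here is the simplified global form (one window covering the whole image) rather than the windowed version usually reported, and the `data_range` value is an assumption the caller must supply.

```python
import numpy as np

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")
    return float(10 * np.log10(data_range ** 2 / mse))

def global_ssim(x, y, data_range):
    """Simplified single-window SSIM with the usual K1=0.01, K2=0.03."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Toy SST field and a slightly noisy "reconstruction".
rng = np.random.default_rng(0)
sst = 25 + rng.standard_normal((32, 32))
noisy = sst + 0.1 * rng.standard_normal((32, 32))
print(psnr(sst, noisy, data_range=10.0))
print(global_ssim(sst, noisy, data_range=10.0))
```

For published comparisons, a windowed SSIM (e.g. 11x11 Gaussian windows) is normally used; the global form above only conveys the structure of the formula.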
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: TRUSTED: In situ FRM Data for SST & IST

Authors: Marc Lucas, Dr Marc Lemenn, Gorm Dybkjær, Anne OCarroll
Affiliations: CLS, SHOM, DMI, EUMETSAT
The improvement in the precision of sea surface temperature (SST) and ice surface temperature (IST) retrievals by satellite-borne instruments over the last decades has brought about the need for improved in situ reference data for calibration and validation purposes. These in situ data need to be properly characterized, which means working out the uncertainty budget precisely and ensuring that any data collected are fully traceable. Over the past 6 years the TRUSTED project, funded by Copernicus, has been doing just that: deploying over 350 drifters with high-resolution sea surface temperature sensors and setting up calibration and metadata processes to ensure full traceability, including a full uncertainty diagram, in order to achieve Fiducial Reference Measurement status. More recently, the TRUSTED consortium has started working on a new instrument to retrieve high-quality, fully traceable IST data. In this paper, we will present the achievements of the TRUSTED project as well as the latest developments in IST Fiducial Reference data.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (ESA Agora)

Session: E.01.04 ESA GTIF initiative: new solution and business models for the Green Transition

The ESA GTIF (Green Transition Information Factories) initiative addresses information needs in the context of the Green Transition and develops analytical capabilities and decision support tools to meet the operational needs of users and stakeholders and support their processes.

This Agora session will dive into the GTIF co-creation approach, in which Green Transition users and stakeholders are engaged to bring forward information needs and requirements from their operational working context. These requirements are then analysed by the contributing GTIF industry teams and ESA experts to develop initial versions of dedicated capabilities which combine value-adding algorithms, user interface embeddings with cloud computational scaling and quality assurance aspects. Subsequently, these capabilities are further evolved to correspond to specific stakeholder requirements. Operationalisation of such capabilities and uptake in user and stakeholder operational processes is the ultimate goal of this co-creation process.

This Agora will reflect on experiences, lessons learned and success stories of stakeholder engagement and co-creation in the different GTIF projects. It will feature speakers from across the different GTIF projects and currently covered countries (i.e., Baltics, UK, Ireland, France, North Atlantic, Danube region).

Speakers:


  • Christian Toettrup - DHI Group
  • Rui Song - University of Oxford
  • Erikas Berontas - JSC Coetus
  • Konstanze Fila - Austrian Research Promotion Agency-FFG, Aeronautics and Space Agency
  • Gerhard Triebnig - EOX IT Services GmbH

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: D.03.04 - POSTER - Innovative technologies, tools and strategies for scientific visualisation and outreach

In recent years there have been advances in communicating science-based information and facts to citizens through open-access and open-source web-based solutions and mobile applications. In Earth observation, these solutions use innovative ways of presenting EO-based indicators and data, often coupled with storytelling elements to increase accessibility and outreach. Additionally, such innovations, coupled with data access and computation on cloud-based EO platforms, are very effective tools for scientific data dissemination as well as for education in EO and Earth science. In this session we welcome contributions from innovative web-based solutions, dashboards, advanced visualisation tools and other new and innovative technologies and use cases for scientific communication, dissemination and education. In particular, we seek to explore how such solutions help to increase the impact of science, create and grow communities and stimulate adoption of EO. We also look towards the future, exploring trends and opportunities to connect with non-EO communities and adopt new technologies (e.g. immersive tools, AR, VR, gaming engines). The session will be an opportunity to exchange experiences and lessons, and to explore opportunities for further collaboration.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: A GAMIFIED MOBILE APPLICATION FOR IMPROVING PUBLIC ENGAGEMENT WITH RECREATIONAL WATER QUALITY THROUGH AR/VR SIMULATIONS

Authors: Rūta Tiškuvienė, Dr. Diana Vaičiūtė, Dr. Marija Kataržytė, Dr. Martynas Bučas
Affiliations: Klaipeda University
Water quality in recreational water bodies, such as lakes, rivers, and coastal ecosystems, is a growing concern due to pollution, climate change, and urbanization. Despite the advancements in water quality monitoring technologies, public awareness and engagement still need to improve, particularly among non-professional users who may find ecological data complex and difficult to interpret. To address this, our research focuses on creating a mobile application that integrates gamification and augmented/virtual reality (AR/VR) simulations to visualize water quality dynamics. The tool aims to connect environmental science and public awareness by making scientific data accessible and more engaging. The primary aim of this research is to develop an interactive digital tool that uses remote sensing and in situ water quality data with AR/VR simulations and evaluate the tool for its effectiveness in improving user engagement, understanding of water quality dynamics, and behavioral changes regarding water resource protection. This tool will offer user-friendly engaging situational simulations where users can explore how their actions influence water quality, learning through interactive scenarios based on scientific principles of ecology. At the core of the application are situational simulations designed to immerse users in real-world environmental challenges. These simulations allow users to interact with virtual environments that mirror actual water bodies, such as lakes, rivers, or coastal ecosystems. In each scenario, users are presented with environmental dilemmas, such as pollution from agricultural runoff, overfishing, or climate-driven algae blooms. Through a series of interactive decisions, users must manage these challenges while maintaining water quality. 
The simulation provides real-time feedback on the ecological consequences of the user's decisions, leveraging both remote sensing data (e.g., satellite data on chlorophyll-a, temperature, and water quality metrics like E. coli) and in situ data to create dynamic, realistic outcomes. For example, when users choose to implement certain pollution prevention measures, they will observe changes in water quality parameters, such as a reduction in harmful algal blooms or improved oxygen levels, demonstrating the cause-and-effect relationship between human actions and aquatic health. Gamification elements, such as challenges, points, and rewards, further enhance the simulation experience. Users are rewarded for making sustainable decisions, such as reducing nutrient runoff or implementing eco-friendly farming practices, reinforcing positive behavior. The more informed the user becomes about water quality, the higher they score in the game, providing a tangible incentive for continued engagement. Additionally, the situational simulations are adaptable to different geographic locations, allowing users from various regions to experience local water quality challenges and solutions. The integration of AR/VR technology adds a novel layer of immersion, allowing users to visually experience the impact of their actions in a 3D space. For instance, users can "dive" into a virtual water body to observe the effects of their decisions on aquatic flora and fauna or walk through a watershed to see how land-use practices affect water flow and quality. The ability to switch between different perspectives—both above and below the waterline—gives users a comprehensive understanding of water ecosystems. Through AR/VR, users can interact directly with water quality data, making it less abstract and more relatable. For instance, they can observe how excess nutrients from fertilizers contribute to algal blooms, or how urban runoff affects microbial pollution levels. 
The tool not only engages users in decision-making processes but also teaches the underlying scientific concepts driving these water quality issues, turning data into a visually engaging narrative. This research hypothesizes that gamification, combined with AR/VR simulations, will significantly enhance public citizen engagement and understanding of water quality issues compared to semi-professional and professional users. Public citizens often lack the scientific background to process complex environmental data, making them more likely to benefit from gamified, interactive learning environments. Although the research is in its early stages, we anticipate that the app will lead to significant improvements in user engagement, particularly among public citizens. The interactive, visually rich format is expected to improve the retention of water quality information and promote a better understanding of the actions necessary to protect water resources. Public citizens are expected to show a greater positive change in perception and knowledge compared to semi-professional and professional users. This gamified mobile application has the potential to become a scalable solution for raising public awareness about water quality. By employing AR/VR technologies and gamification, the tool will bridge the gap between complex ecological data and public understanding, promoting greater community involvement in environmental sustainability. Future development could extend its use to other environmental challenges, such as air quality or climate change, offering a versatile platform for engaging the public in sustainability efforts.
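The cause-and-effect feedback loop described above could be driven by a simple update rule of the kind sketched below; the function, coefficients and values are invented for illustration and are not the application's actual model.

```python
# Toy cause-and-effect rule of the kind the simulation feedback loop
# could implement: nutrients drive algal growth, mitigation reduces the
# load. Coefficients and structure are invented for illustration only.
def update_chlorophyll(chl, nutrient_load, mitigation=0.0):
    """One simulation step returning the new chlorophyll-a level."""
    effective_load = nutrient_load * (1.0 - mitigation)  # runoff after measures
    growth = 0.1 * effective_load   # bloom response to nutrient input
    decay = 0.05 * chl              # natural losses (grazing, settling)
    return max(0.0, chl + growth - decay)

chl = 5.0
no_action = update_chlorophyll(chl, nutrient_load=10.0)
with_action = update_chlorophyll(chl, nutrient_load=10.0, mitigation=0.8)
# Users would see the bloom grow without measures and recede with them.
print(no_action, with_action)
```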

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Water Health Indicator System (WHIS): A Global Water Quality Monitoring Web App through Advanced Earth Observation Technologies

#stac #cog

Authors: Daniel Wiesmann, Jonas Sølvsteen, Olaf Veerman, Emmanuel Mathot, Daniel Da Silva, Ricardo Mestre, Pr Vanda Brotas, PhD Ana Brito, Giulia Sent, João Pádua, Gabriel Silva
Affiliations: Development Seed, MARE Centre, Labelec
The Water Health Indicator System (WHIS) serves as a robust platform for monitoring water quality, showcasing the capabilities of earth observation technologies and environmental data analysis that are accessible to everyone. Developed through a collaboration between Development Seed, MARE (Marine and Environmental Sciences Centre), and LABELEC, WHIS addresses common challenges in existing water monitoring by offering a scalable solution designed for assessing aquatic ecosystem health. At the heart of WHIS is a powerful integration of geospatial cloud technologies, built on the eoAPI (Earth Observation API). This allows users to leverage tools such as the Spatio-Temporal Asset Catalog (STAC) and Cloud-Optimized GeoTIFF (COG) for dynamic data access. A platform like this enables seamless integration of remote sensing datasets, particularly from the Sentinel-2 mission, ensuring precision and adaptability in water quality assessment. The application utilizes specialized atmospheric processing algorithms, such as Acolite, to analyze water quality, tackling issues related to atmospheric interference and spectral interpretation. By focusing on key indicators like chlorophyll content and turbidity, WHIS allows for localized calibration and insights into ecosystem health, demonstrating that these advancements in monitoring are achievable with the right tools. WHIS is tailored for inland and coastal water bodies. Its cloud-optimized infrastructure provides an interactive interface where users can select specific water bodies, explore geographical data, conduct statistical analyses, and inspect pixel-level information—all of which can be replicated by other users with eoAPI. Furthermore, the innovative product-services business model links technological capabilities with environmental monitoring needs, showing how any organization can leverage these advancements. 
As global challenges related to water availability and quality persist, the Water Health Indicator System stands as a testament to what can be achieved with eoAPI technology. If we can harness its potential, so can you, making it an essential tool for environmental monitoring and ecosystem management.
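The STAC-based data access that WHIS builds on can be illustrated with a minimal search request of the kind a client might POST to an eoAPI `/search` endpoint; the collection id, bounding box and cloud-cover threshold below are illustrative assumptions, not the actual WHIS configuration.

```python
import json

# Sketch of a STAC API search request of the kind a WHIS-style client
# could POST to an eoAPI `/search` endpoint. The collection id, bounding
# box and cloud-cover threshold are illustrative assumptions.
search_body = {
    "collections": ["sentinel-2-l2a"],
    "bbox": [-9.5, 38.6, -9.0, 38.8],  # lon/lat box, example coastal area
    "datetime": "2024-06-01T00:00:00Z/2024-06-30T23:59:59Z",
    "query": {"eo:cloud_cover": {"lt": 20}},  # keep mostly cloud-free scenes
    "limit": 10,
}

# The JSON response is an ItemCollection whose assets point at
# Cloud-Optimized GeoTIFFs that can be range-read without full downloads.
payload = json.dumps(search_body)
print(payload[:60])
```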

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Satellite data for the UN Ocean Decade: Innovative Approaches to Story-telling for Diverse Marine Stakeholders

Authors: Dr Hayley Evers-King, Dr Benjamin Loveday, Danaële Puechmaille, Michael Schick, Miruna Stoicescu, Sally Wannop
Affiliations: EUMETSAT, Innoflair
EUMETSAT, along with partners from the European marine data provider ecosystem, produces regular case studies showing how our data can be used to support marine operations, science and applications. As an endorsed activity, a series of these case studies has been developed to show how marine Earth observation data can be used to help address the 10 challenges facing the ocean as defined by the United Nations Decade of Ocean Science for Sustainable Development (UNOD) programme. Multiple case studies are being produced for each challenge, showcasing specific topics, appropriate selection of relevant products, and varying approaches to data synergy, analysis, and visualisation techniques. A number of formats have been developed to facilitate the promotion and use of these case studies by different stakeholder communities. The traditional approach uses web-based articles, adopting a format that is recognisable to the EUMETSAT user base, but introducing new, and perhaps unfamiliar, ocean data streams, as well as the UNOD challenges themselves. These articles are accompanied by Jupyter Notebooks, which allow and encourage the reader to recreate, and expand upon, some of the analyses presented. These notebooks offer flexible deployment and are designed to run locally and on remote cloud services, including Binder, DestinE and the Copernicus WEkEO DIAS, where they are featured in the notebook catalogue and hosted on the JupyterLab. The notebooks, which are open source and freely shareable, are made available as part of EUMETSAT's regular training courses, and form a key part of the current EUMETSAT special webinar series dedicated to the UNOD. New workflows are in development to exploit DestinE and, in particular, the DEA Interactive Storytelling service, bringing together cloud-based data provision, data visualisation and contextual narratives.
These tools help to broaden the scope and appeal of the stories, opening discussion of UNOD challenges to new audiences in code-free contexts. Topics covered in the case studies so far include deoxygenation events associated with risks to fisheries and aquaculture, the role of altimetry in storm monitoring and global mean sea level quantification, and the assessment of marine heatwaves from sea surface temperature. Example case studies will be presented, as well as lessons learned from the use of different approaches to storytelling.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: The Timeline Viewer: a web application for intuitive interactive visualisation of time-based data

Authors: Eelco Doornbos, Mark Ter Linden, Eelco Verduijn, Kasper Van Dam, Jos De Kloe, Gerd-Jan Van Zadelhoff
Affiliations: KNMI, Royal Netherlands Meteorological Institute
The Timeline Viewer is an interactive web application for visualisation of time-dependent data, which adopts an easy-to-use user interface for zooming and panning using the mouse, trackpad or touch interfaces (phones and tablets). Users will be familiar with this mode of interaction from its use in popular web applications like Google Maps applied to geospatial dimensions. However, in the Timeline Viewer, these interactions are applied to the time dimension, allowing users to quickly zoom in to details at time scales of hours, minutes or seconds, then zoom out for context over days, months or years. By applying a combination of panning and zooming, users can move the view between different dates, times and events of interest, with similar ease as moving between different countries, cities and streets in Google Maps. The Timeline Viewer was originally designed for real-time monitoring and interactive visual exploration of space weather time series data, in which processes on the Sun, in the heliosphere, the Earth's magnetosphere and upper atmosphere are closely connected in time. Spatial scales in this domain are often so large that they become of secondary importance. Users find great value in being able to simultaneously display information from multiple related data sources on a single time-axis, and to have a replication of common plot types from scientific publications on space weather case studies available in a flexible interactive interface, both for all available historical data as well as for current events. The tool was extended as part of the activities of the ESA Swarm Data Innovation and Science Cluster to improve the utilisation of Swarm observations of space-weather-related variability in Earth's thermosphere-ionosphere and magnetic field. As part of this project, a public version of the application has been deployed at https://spaceweather.knmi.nl/viewer/. 
Besides the ability to display time series data as bar, line, and ridgeline charts, the tool gained the ability to browse through sequences of quick-look images to create time lapses as the user moves around the interface by panning, as well as to use heat-map-type images that resize as the user zooms in and out on the timeline. This allows for comparison of Swarm observations with remote sensing images of ionospheric emissions from NASA's GOLD satellite, and with remote sensing of aurora in the polar regions from JPSS VIIRS-DNB and DMSP SSUSI instruments. A capability to view 3D satellite orbit geometry was also added for the Swarm project. These facilities allow the geospatial dimensions in the data to be reintroduced into the visualisations. Because of its flexibility, the application has also quickly proved its worth for continuous model validation, education and training, and data quality monitoring. For this latter purpose, it was adopted and separately deployed during the commissioning phase of the EarthCARE mission, demonstrating a first use case outside of the space weather domain. For EarthCARE it was especially used for the ATLID lidar instrument, to display both instrument and calibration data, and this has proven very useful for detecting features and anomalies in the early phase of the mission. For example, strong noise spikes due to energetic particles and hot pixels on the detector could be identified early in the mission; this allowed the L1 processing software to be adapted to handle them, which should significantly improve data quality. The tool has been built around a web server and database back-end created in Python and a front-end that uses the Svelte framework for reactive web applications. The Heliophysics API (HAPI) is used for the delivery of data between back-end and front-end.
The adoption of this standard enables the tool to be used with other back-ends, such as the INTERMAGNET global network of magnetic observatories, NASA's Space Physics Data Facility (SPDF) and ESA's Cluster mission data archive. The HAPI standard also proved to be easy to use and beneficial outside of the domain of heliophysics, as demonstrated by its adoption in the EarthCARE project.
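As a minimal sketch of the HAPI data format that connects the back-end and front-end, the snippet below parses a mock CSV response of the kind returned by a HAPI data endpoint; the parameter values and request shown in the comment are invented for illustration.

```python
import csv
import io
from datetime import datetime

# Mock of a HAPI CSV data response (ISO 8601 time, then parameter values),
# of the kind returned by GET {server}/hapi/data?id=...&time.min=...&time.max=...
# The values are invented; a real response streams from the back-end.
hapi_csv = (
    "2024-05-10T00:00:00.000Z,120.5\n"
    "2024-05-10T00:01:00.000Z,121.3\n"
    "2024-05-10T00:02:00.000Z,119.8\n"
)

records = []
for row in csv.reader(io.StringIO(hapi_csv)):
    # HAPI times are ISO 8601 in UTC; convert the trailing Z for parsing.
    t = datetime.fromisoformat(row[0].replace("Z", "+00:00"))
    records.append((t, float(row[1])))

print(len(records), records[0][1])
```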

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Alplakes: Monitoring and forecasting European alpine lakes

Authors: James Runnalls
Affiliations: Eawag
Alplakes (www.alplakes.eawag.ch) is an ESA-funded research project that provides predictions of the main physical and biological parameters of more than 80 lakes throughout the European Alpine region. We integrate models and remote sensing products developed by the research community to provide up-to-date and accurate information. These products are made available to the public in a visualisation-focused, user-friendly web application, meaning hydrodynamic modelling and remote sensing data are no longer confined to domain experts. This accelerates and empowers evidence-based water management across a broad range of stakeholders. Alplakes utilizes the open-source Python toolbox Sencast (https://github.com/eawag-surface-waters-research/sencast) to access Sentinel-2 and Sentinel-3 data from DIAS providers, and to perform the computation of essential water quality parameters such as chlorophyll concentration, turbidity, and water clarity. From these detailed water quality maps, lake-specific statistics are generated, providing users with a comprehensive view of changes in lake conditions over time. This approach not only enhances accessibility to up-to-date water quality data but also supports long-term monitoring and analysis, empowering lake managers and researchers to make informed decisions for sustainable lake management. Earth observation data is visualised in tandem with other data sources, such as hydrodynamic models, to provide dynamic context for the static snapshots available from satellite imagery. The platform's built-in particle tracking functionality enables users to predict the development of lake events, such as algal blooms identified through satellite imagery, allowing lake managers to take proactive measures to protect water quality at drinking water intakes. All models, data processing pipelines, and products that power the Alplakes platform are fully open source, with results made accessible as open data for ease of use.
This transparency allows other scientists not only to access and validate our methodologies but also to directly integrate Alplakes models and products into their own research. By providing unrestricted access to both the tools and data, Alplakes fosters collaborative opportunities across disciplines, supporting reproducibility, enabling new research insights, and expanding the impact of Earth observation science on freshwater ecosystem studies. This open framework encourages a community-driven approach to advancing environmental research, developing innovative monitoring applications, and managing lake health. To enhance Alplakes as a comprehensive digital twin of alpine lake ecosystems, we are working to expand the platform geographically and broaden the range of available products. By incorporating additional lakes from diverse regions and integrating data from various satellite missions, we aim to provide a more complete picture of alpine lakes. This expansion will not only increase the platform's utility for scientists and lake managers but also support Alpine-based collaboration and improve the predictive capabilities of the platform, allowing for more effective, data-driven decision-making in lake conservation and management.
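The lake-specific statistics mentioned above can be sketched as a reduction over valid water pixels; the toy chlorophyll-a grid and the choice of statistics below are illustrative only, not Alplakes' actual Sencast processing.

```python
import statistics

# Toy chlorophyll-a map (mg/m^3) for one lake; None marks pixels flagged
# as cloud or land. Real Alplakes maps come from Sencast-processed
# Sentinel-2/Sentinel-3 scenes; these values are invented.
chl_map = [
    [2.1, 2.4, None],
    [3.0, 2.8, 2.6],
    [None, 2.2, 2.3],
]

# Lake-wide statistics over valid water pixels only, the kind of
# per-scene summary a platform could derive from each water quality map.
valid = [v for row in chl_map for v in row if v is not None]
stats = {
    "mean": statistics.mean(valid),
    "median": statistics.median(valid),
    "valid_fraction": len(valid) / sum(len(row) for row in chl_map),
}
print(stats)
```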

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Towards cloud-based EO platform in support of indicator development for society and environment

Authors: Alexandra Bojor, Tyna Dolezalova, Fabien Castel, Stefano Natali, Maximilien Houël, Leon Stärker, Lubomír Doležal, Daniel Santillan, Camille Lainé, Adrien Gavazzi
Affiliations: Sistema Gmbh, EOX IT Services GmbH, Murmuration SAS
Earth Observation (EO) data provide a complete description of the Earth system every few days, enabling near-real-time applications from local to global scale. There is a need for dedicated, largely application-domain-agnostic tools able to manage both the large variety and the large volume of these data. Machine/Deep Learning (ML/DL) and Artificial Intelligence (AI) approaches, together with cloud-based platform technologies, are gaining ground in the EO domain. This shifts the paradigm of data exploitation from physically based to geo-statistically based applications, facilitating efficient access, computation and handling of various data sources and helping to solve complex societal challenges. The main goal of the “Indicator Development For Economy And Society (IDEAS)” project is to explore the value of cross-cutting technologies (such as the Overpass API, Citizen Contributed Data and gamification) and to develop innovative, interdisciplinary indicators from EO and geospatial data. The new indicators shall provide new perspectives and relevant information on pressing societal challenges by taking advantage of cloud-based EO platform capabilities, accessible data, computational resources and analytical capabilities. Societal challenges that require innovative solutions, and where EO and geospatial technologies can play a role, include the global climate crisis and ambitions related to the green transformation. More recently, the COVID19 pandemic has posed numerous challenges to societies globally, while energy shortages and the geopolitical repercussions of the Russian invasion of Ukraine provide numerous additional challenges for which novel perspectives and solutions are required. Within this work five indicators were developed at European level and integrated in different ESA-supported environments, such as RACE, GTIF and the trilateral dashboard.
The five developed indicators each correspond to one of these societal challenges, and for each of them cross-cutting technologies were implemented.
1. Indicator #1: Pollution and urban heat islands allows the creation of population health vulnerability maps. The indicator couples satellite remote sensing observations of air quality and land surface temperature with population characteristics (age, gender, location) and the locations of medical infrastructure to provide a first level of analysis to decision makers. It makes use of gamification technology and corresponds to the information needs related to “Green Transition and the European Green Deal” and “COVID19 pandemic and economic recovery”.
2. Indicator #2: Wildlife and biodiversity aims at developing a powerful, impactful, visual indicator to help build the general public’s knowledge and raise awareness of the current status of biodiversity and the importance of conservation efforts. It makes use of Citizen Contributed Data and gamification (Minesweeper) technology, combining the crowdsourced fauna and flora observation data available from GBIF (the Global Biodiversity Information Facility) with EO data on land use and vegetation health. It corresponds to the information needs related to “Green Transition and the European Green Deal” and “Climate Crisis & adaptation”.
3. Indicator #3: Food security monitors desert locust pests and can be integrated in two types of crisis response: early warning and situational. It makes use of Citizen Contributed Data technology, using FAO data, and corresponds to the information needs related to “Climate Crisis & adaptation”.
4. Indicator #4: Flood risk assesses the risk of inundation in coastal areas due to sea level rise. It makes use of OSM Overpass API technology and corresponds to the information needs related to “Climate Crisis & adaptation”.
5. Indicator #5: Real estate builds on the urban heat data generated for Indicator #1 to create a new value-added product. It combines urban heat measurements during winter periods with socio-economic information (population density, age, real estate prices…) and with information on buildings’ construction materials, history of changes and civil works from the French National Building Database (BDNB), enabling us to map the areas where people are in high energy vulnerability. It makes use of Citizen Contributed Data technology and corresponds to the information needs related to “Emerging energy crisis” and “Green Transition and the European Green Deal”.
In conclusion, the integration of EO, geospatial and citizen-contributed data, innovative cross-cutting technologies and cloud platform technologies to address information needs in the context of the presented societal challenges has shown potential to help authorities and decision makers in many thematic areas, given the versatility of the presented results.
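As a sketch of the Overpass API technology used by the flood risk indicator, the snippet below builds an Overpass QL query for exposed assets within a bounding box; the area, feature tags and query structure are illustrative assumptions, not the project's actual query.

```python
# Sketch of an Overpass QL query of the kind a flood-risk indicator
# could use to fetch exposed assets from OpenStreetMap. The bounding box
# and feature tags are illustrative assumptions.
south, west, north, east = 43.29, 5.36, 43.31, 5.40  # example coastal area

query = f"""
[out:json][timeout:60];
(
  way["building"]({south},{west},{north},{east});
  node["amenity"="hospital"]({south},{west},{north},{east});
);
out center;
"""

# POSTing `query` to an Overpass API endpoint returns the matching
# features as JSON, ready to intersect with an inundation mask.
print(query)
```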

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: StacLine: new QGIS Plugin for diving into STAC Catalogs

#stac

Authors: Fanny Vignolles, Florian Gaychet, Vincent Gaudissart, Mélanie Prugniaux
Affiliations: CS Group
Geographic Information Systems (GIS) have become fundamental for analyzing and visualizing geospatial and temporal data across diverse domains, including environmental monitoring, disaster response, hydrology, urban planning, and agriculture. The availability of Earth Observation (EO) data has significantly increased in recent years, thanks to open-access data initiatives and advancements in satellite missions such as Sentinel, Landsat, and SWOT. However, while the datasets have become more accessible, the tools required to process and integrate them efficiently remain a challenge. The introduction of SpatioTemporal Asset Catalogs (STAC) as a data standard has revolutionized how datasets are organized and distributed. STAC provides a unified framework for describing, managing, and sharing spatiotemporal data through catalogs linked to geospatial servers. When combined with Open Geospatial Consortium (OGC) standards like Web Map Service (WMS), STAC enables seamless geospatial data management and interoperability. This project focuses on bridging the gap between STAC-based data catalogs and GIS workflows by developing a QGIS plugin that integrates STAC with the open-source GIS environment. The plugin simplifies data search, filtering, and visualization while adhering to both STAC and OGC standards, providing professionals and researchers with an efficient tool for managing EO data. Despite the increasing adoption of STAC-based data catalogs, their integration with GIS platforms remains a significant challenge. Existing plugins in QGIS for handling STAC data are limited, offering only basic functionalities and lacking the advanced capabilities required for sophisticated workflows. Current solutions often restrict users to viewing dataset footprints without allowing interactive visualization or the ability to style data layers dynamically. 
Additionally, these tools frequently require manual downloads and subsequent imports into QGIS, making the process inefficient and prone to user errors. Beyond these technical limitations, ensuring that a tool remains accessible and intuitive for a diverse audience is equally critical. Furthermore, achieving seamless interoperability between STAC and OGC protocols, particularly in the context of integrating WMS for real-time visualisation, adds another layer of complexity. To address these challenges, we have developed a QGIS plugin that brings significant innovations to enhance filtering capabilities, simplify data import, and ensure interoperability. Designed with an intuitive interface, it strikes a careful balance between user-friendly simplicity for non-experts and the advanced functionality required by researchers and field practitioners. By incorporating ontological approaches, the plugin enables more precise and efficient dataset discovery. The integration of WMS protocols facilitates automatic data import, allowing users to preview datasets and dynamically apply visualization styles directly within QGIS. These styles, derived from metadata and cartographic servers adhering to OGC standards, provide tailored renderings suited to specific analytical needs. The plugin’s strict adherence to STAC standards aims to ensure compatibility with any STAC-compliant catalogue, enhancing its ability to integrate seamlessly into diverse geospatial platforms and workflows. The user interface has been designed to accommodate both novice and expert users, offering advanced configuration options for customized workflows without sacrificing simplicity. This combination of advanced functionality and ease of use positions the plugin as an essential tool for professionals relying on Earth Observation data, reducing the barriers to integrating STAC data into GIS projects.
The current version has been implemented for the HYSOPE II project (CNES), the dissemination platform dedicated to SWOT products and, more generally, to all kinds of hydrological datasets, and is intended to be extended to other initiatives. As the STAC ecosystem evolves, the plugin is designed to adapt and grow, incorporating new features and responding to user needs. One planned enhancement is the addition of a dynamic timeline feature, allowing users to explore temporal patterns in datasets interactively. This timeline will enable quick identification of dense data availability periods and improve usability for time-series analysis by rendering layers adaptively based on the selected temporal range. We also envision the development of an adaptive form system that dynamically configures itself based on search parameters, which may be specific to each dataset. This automatic configuration will leverage the filtering extension and the queryables of the STAC API. This plugin, named QGIS StacLine, represents a significant advancement in democratizing access to STAC-based geospatial data. By addressing the limitations of existing tools and focusing on usability, interoperability, and scalability, it bridges the gap between complex EO data catalogs and practical GIS applications. Looking ahead, the development of the plugin involves a key decision: whether to focus on niche, closed use cases for tailored solutions or to expand its scope for broader application across diverse projects. While an open approach offers versatility, it risks diluting the specificity and focus of the tool. Regardless of its future direction, the plugin stands as a vital resource for the geospatial community, enabling seamless access and utilization of the growing wealth of spatiotemporal data.
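The planned adaptive form, driven by a STAC API's queryables, could translate user input into a CQL2-JSON filter of the kind sketched below; the property names, collection id and time interval are illustrative assumptions, not HYSOPE II's actual queryables.

```python
import json

# Sketch of a CQL2-JSON filter of the kind an adaptive search form could
# build from a STAC API's /queryables document. The collection id and
# time interval are illustrative assumptions.
cql2_filter = {
    "op": "and",
    "args": [
        {"op": "=", "args": [{"property": "collection"}, "swot-hr-raster"]},
        {
            "op": "t_intersects",
            "args": [
                {"property": "datetime"},
                {"interval": ["2024-01-01T00:00:00Z", "2024-03-31T23:59:59Z"]},
            ],
        },
    ],
}

# Sent to POST /search as part of the request body:
body = {"filter-lang": "cql2-json", "filter": cql2_filter, "limit": 5}
print(json.dumps(body)[:60])
```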

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: A.08.07 - POSTER - Ocean Health including marine and coastal biodiversity

Ocean Health, defined as the Ocean's condition that allows it to continuously provide services to humans in a sustainable way while preserving its intrinsic well-being and its biodiversity, is under considerable threat. Decades of pollution, overexploitation of resources and damaging use of the coastal environment have severely degraded the condition of both coastal and offshore marine ecosystems, compromising the Ocean's capacity to provide these services. This degradation is being further exacerbated by Climate Change, whose effects on the Oceans are numerous. The many sensors on board currently operating satellites (altimeters, radiometers, scatterometers, synthetic aperture radars, spectrometers) are highly relevant for Ocean Health and biodiversity studies, providing continuous, global and repeated measurements of many key parameters of the physical (temperature, salinity, sea level, currents, wind, waves) and biogeochemical (Ocean Colour related variables) marine environment, including high-resolution mapping of key marine habitats (coral reefs, kelp forests, seagrass, …). In this context, this session welcomes contributions demonstrating how satellite data can be used to better monitor Ocean Health, including the retrieval of Essential Biodiversity Variables and the estimation of the many different stressors, including marine litter, impacting Ocean Health and marine and coastal biodiversity. The capability of a single sensor is amplified when it is used in synergy with other space and in-situ measurements, or together with numerical modelling of the physical, biogeochemical and ecological ocean state, so the session encourages multi-sensor and multi-disciplinary studies. The session is also open to contributions demonstrating how EO-derived products can be used to support management actions to restore and preserve Ocean Health and marine and coastal biodiversity.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: High Sensitivity Fluorescence Sensor For The Detection Of Dissolved Organic Matter In Coastal Environments

Authors: Giancarlo Bachi, Valter Evangelista, Bruno Tiribilli, Paolo Facci, Marco Carloni, Mirco Guerrazzi, Simone Marini, Paolo Povero, Francesco Massa, Michela Castellano, Federico Falcini, Gian Marco Scarpa, Emmanuel Boss, Patrick Gray, Vittorio Ernesto Brando, Chiara Santinelli
Affiliations: Biophysics Institute, National Research Council (CNR-IBF), Institute for Complex Systems (CNR-ISC), Institute of Marine Sciences, National Research Council (CNR-ISMAR), Earth, Environment and Life Sciences Department (DISTAV), University of Genova, School of Marine Sciences, University of Maine
Dissolved Organic Matter (DOM) in the oceans is a crucial component of the Earth's biogeochemical cycles and a key ocean water quality parameter. DOM can be qualitatively studied through the optical properties (absorption and fluorescence) of its chromophoric (CDOM) and fluorescent (FDOM) fractions. FDOM has been used to gain information on DOM composition and origin as well as to trace riverine and pollutant inputs. Laboratory FDOM measurements on discrete samples provide insight into DOM characteristics but lack spatial and temporal resolution. Recent developments in portable fluorescence sensors have enabled cost-effective, high-frequency in-situ measurements, crucial for monitoring dynamic environments such as estuaries and coastal areas. However, sensors currently on the market exhibit low sensitivity, low versatility, and low signal-to-noise ratios; furthermore, most of them only detect fluorescence at one pair of excitation/emission wavelengths. Here we present the first data from the prototype of a fluorescence sensor implemented within the framework of RAISE "Robotics and AI for Socio-economic Empowerment", Spoke 3. The high sensitivity and flexibility of the sensor make it ideal for manual or continuous use on small and large boats, in laboratories with seawater intake, and in field activities. Preliminary tests showed good signal linearity, baseline stability, and good correlation with fluorescence from benchtop and portable fluorimeters. The sensor has been tested on coastal marine samples collected at high spatial resolution onboard the schooner Tara (Tara Ocean Foundation) during the TREC (Traversing European Coastlines) expedition and the R/V Gaia Blu (CNR) during the BioTREC cruise, as well as on samples collected seasonally from selected contrasting environments such as large ports, estuaries, and marine protected areas (e.g., the Portofino LTER site).
The comprehensive dataset obtained was combined with physical and biogeochemical parameters from discrete samples and satellite data to retrieve information on DOM-rich coastal filaments, chlorophyll distribution, anthropogenic inputs and DOM dynamics. The strong correlation between the sensor signal and satellite chlorophyll retrievals illustrates the challenge of differentiating chlorophyll and CDOM from ocean colour alone; it also shows that our sensor can help characterize DOM dynamics within coastal filaments and phytoplankton blooms and could contribute to improving existing satellite chlorophyll and CDOM algorithms. Future uses of the sensor range from monitoring the impacts of coastal pollution in real time to supporting long-term studies of DOM variability.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Blending PlanetScope and Sentinel-2 satellites to assess subtidal seagrass meadows threatened by water quality

Authors: Mar Roca Mora, Carlos Eduardo Peixoto-Dias, Manuel Vivanco-Bercovich, Chengfa Benjamin Lee, Sergio Heredia, Isabel Caballero, Gabriel Navarro, Paulo Horta, Alessandra Fonseca
Affiliations: Instituto de Ciencias Marinas de Andalucía (ICMAN-CSIC), Universidade Federal de Santa Catarina (UFSC), Universidad Autónoma de Baja California (UABC), German Aerospace Center (DLR)
Seagrass meadows provide important ecosystem services, acting as nutrient and sediment traps, burying carbon in the soil, improving water quality and attracting biodiversity: essential indicators of Ocean Health. These ecosystems play a crucial role in buffering impacts, especially in coastal lagoons, which are increasingly threatened by eutrophication and anoxia worldwide. Their high sensitivity to environmental changes enables their use as bioindicators of environmental status, both in the water column and in sediments. This case study focuses on a subtropical coastal lagoon in Brazil, which entered a dystrophic state after a massive wastewater plant explosion in 2021, impacting the seagrass species Ruppia maritima and Halodule wrightii. Earth Observation techniques and computational capacity have evolved towards better monitoring of marine macrophytes and water quality variables. However, mapping subtidal seagrass in turbid waters and low-density meadows through Earth Observation remains a challenge. This study blends the advantages of Sentinel-2 L1C imagery, with its 13-band spectral resolution, and PlanetScope 3B imagery, both Classic (4-band) and Super Dove (8-band), at 3-m spatial resolution: a multi-sensor approach with high spectral and spatial resolution to better detect small seagrass patches and their changes in shallow turbid coastal waters. In parallel, to synoptically assess the water quality impact in the coastal lagoon, we processed the Sentinel-2 time series (2016-2024) through ACOLITE to obtain ocean colour related variables such as chlorophyll-a, Suspended Particulate Matter (SPM) and the diffuse attenuation coefficient (Kd490), identifying the wastewater disruption that moved the lagoon from eutrophic to dystrophic, as well as its spatial patterns.
The HydroLight radiative transfer numerical model was run for Case 2 waters using the biogeochemical conditions of the lagoon to mask optically deep waters, with high backscattering limiting light penetration to 1-meter depth (Kd = 0.4). To obtain the benthic habitat map, we first processed Sentinel-2 and PlanetScope imagery using ACOLITE, obtaining corrected water-leaving reflectance for both sensors. To assess the changes and the related uncertainty in seagrass extent, we used three Sentinel-2 images for each year (2018 and 2024), which were co-registered to the 3-meter PlanetScope grid, combining both sensors' advantages into six multi-band rasters. In the field, we performed two in situ campaigns in the summers of 2018 and 2024, whose seagrass presence and absence GPS locations were used to train a machine learning Random Forest classifier and to validate the results on each multi-sensor raster. The three classifications for each year were combined to obtain the seagrass extent map. From the optical perspective, the Red-Edge band of both sensors showed the highest feature importance for the model, as did the Depth Invariant Index (DII) derived from them. The resulting seagrass change map showed a 65% decline in total seagrass extent three years after the disaster; the spectral similarity between the seagrass species meant they could not be differentiated, owing to the complex inherent optical properties of the water column. This cost-efficient, synoptic multi-sensor approach makes it possible to assess the severity of the impact and provides a tool to monitor the recovery capacity of this marine environment, preserving its intrinsic well-being and its benefits for the coastal community.
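The Depth Invariant Index mentioned above (Lyzenga's formulation) can be sketched in a few lines. The band pair, reflectance values and attenuation-coefficient ratio below are illustrative placeholders: in practice the ratio k_i/k_j is estimated from pixels over a uniform bottom at varying depths, not fixed a priori.

```python
import numpy as np

def depth_invariant_index(band_i, band_j, k_ratio):
    """Lyzenga's DII: ln(R_i) - (k_i / k_j) * ln(R_j), computed per pixel.

    band_i, band_j: water-leaving reflectances for two visible bands.
    k_ratio: ratio of the diffuse attenuation coefficients of the bands.
    """
    return np.log(band_i) - k_ratio * np.log(band_j)

# Illustrative reflectances for three pixels (not values from the study)
blue = np.array([0.05, 0.04, 0.03])
green = np.array([0.06, 0.05, 0.04])
dii = depth_invariant_index(blue, green, k_ratio=0.8)
print(dii)
```

Because the depth-dependent attenuation term cancels in this combination, the index varies mainly with bottom type, which is why it can serve as an input feature for a benthic-habitat classifier.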

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Estimating uncertainty while detecting marine litter from Sentinel-2 imagery

Authors: Samuel Darmon, Emanuele Dalsasso, Devis Tuia
Affiliations: ECEO, EPFL
Marine plastic pollution is a major global crisis with profound environmental and ecological implications for society. As floating material aggregates under the effect of oceanic processes, so-called windrows reach sizes visible from space and can be used as proxies of marine litter. Within this context, large-scale mapping of marine litter can be enabled by leveraging medium-resolution remote sensing satellite data such as Sentinel-2. Thanks to the recent creation of labeled datasets of optical images containing marine litter, deep learning-based detection of floating debris has emerged as a promising tool to monitor and mitigate marine litter. However, the performance of existing models is hampered by the visual ambiguity of objects (often related to spatial and spectral resolution) and the presence of clouds and other artifacts. This limits the use of current machine learning approaches in critical scenarios, such as the planning of cleanup operations. To address this gap, we explore the use of uncertainty estimation techniques to provide insights into the model's decisions and potential failures by measuring the epistemic uncertainty of the model. In particular, we compare two uncertainty estimation methods: deep ensembles and ZigZag. Deep ensembles consist of training several independent networks with different initializations: variation in the models' predictions on the same input sample indicates low confidence. While this approach requires several training and inference steps to produce an uncertainty measure, ZigZag is a general framework that reduces the computational load by performing a single training with minor modifications to the model architecture. It builds on the following strategy: a deep learning model is trained to produce the exact same prediction in two cases, whether or not the true label is provided as additional input to the network.
At inference, a model that is confident in its prediction will produce an output close to the true label which, once provided as additional input to the same model, will lead to a similar prediction. The distance between the two predictions can then be used as an uncertainty measure. We apply deep ensembles and ZigZag to a U-Net semantic segmentation model trained for marine debris detection. We train our models on several annotated Sentinel-2 satellite imagery datasets, including the FloatingObjects, Marine Debris Archive (MARIDA), and S2Ships datasets, and evaluate the performance of the models on a subset of MARIDA. The evaluation framework includes both a visual comparison of the predicted uncertainty maps and a quantitative assessment, mainly to study how the uncertainty correlates with the classification error. Our results indicate that uncertainty maps exhibit salient, characteristic patterns. Not only does the model struggle to precisely delineate windrow borders, as expected, but it also associates high uncertainty with cloud patches characterized by thin linear shapes, as well as with wakes, the patterns of waves created by boats moving through the water. Interestingly, the boats themselves are associated with low uncertainty: we argue that the correct classification of boats is due to the use of the S2Ships dataset during training, where static ships are used as hard negatives for the model. This study investigates the insights brought by uncertainty estimation methods applied to the problem of deep learning-based segmentation of floating debris in multispectral Sentinel-2 data. Our results indicate that estimating the model's uncertainty helps to assess the reliability of predictions and leads to a better understanding of the model's limitations.
Uncertainty estimation methods enhance the interpretability of the outputs of marine debris detection models without loss of performance, aiding in the identification of error-prone predictions and providing a measure of trust in the outputs of the detector. Future work will study how the uncertainty correlates with the different spectral bands, especially when only visible and near-infrared bands are available, as is the case for some optical sensors such as PlanetScope. This will provide insights into the expected uncertainty linked to the integration of PlanetScope data into the detection model.
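The deep-ensemble idea described above can be sketched numerically: several independently trained members each predict a per-pixel debris probability, and the spread across members serves as the epistemic-uncertainty map. The "members" below are hand-written arrays standing in for trained U-Nets; the values are illustrative only.

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """Mean prediction and per-pixel standard deviation across ensemble members."""
    stack = np.stack(member_probs)      # shape (n_members, H, W)
    return stack.mean(axis=0), stack.std(axis=0)

# Three stand-in members: they agree on the first pixel (high confidence)
# and disagree on the second (low confidence)
members = [
    np.array([[0.90, 0.2]]),
    np.array([[0.88, 0.7]]),
    np.array([[0.91, 0.4]]),
]
mean_map, unc_map = ensemble_uncertainty(members)
print(mean_map)
print(unc_map)  # the second pixel shows the larger standard deviation
```

Thresholding such an uncertainty map is one way to flag the error-prone predictions, cloud edges and wakes that the abstract reports as high-uncertainty regions.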

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Phytoplankton biodiversity from spaceborne radiometry in coastal regions

Authors: Héloïse Lavigne, Lumi Haraguchi, Dimitry van der Zande, Joppe Massant, Véronique Creach, Jukka Seppala, Sanjina Upadhyay, Hans Jakobsen, Therese Harvey, Felipe Artigas, Maialen Palazot, Yolanda Sagarminaga, Ioannis Tsakalakis, Natalia Stamataki, Laura Boicenco, Oana Vlas
Affiliations: Royal Belgian Institute Of Natural Sciences, SYKE, CEFAS, Aarhus University, NIVA, CNRS, AZTI, HCMR, NIMRD
Coastal ecosystems are often impacted by human activities, and it is fundamental to assess the rapid shifts in their water quality and biodiversity status. Remote-sensing observations provide rapid and synoptic data that have proved extremely useful for assessing parameters such as chlorophyll-a concentration or turbidity. Regarding biodiversity, remote sensing has also been used, especially in open ocean waters, to retrieve some phytoplankton groups or pigments. Indeed, the main phytoplankton groups are retrieved from space in open ocean waters (optical Case I waters) thanks to multispectral or hyperspectral ocean colour sensors. The main types of groups explored are size groups and functional groups. Phytoplankton size groups provide information on the trophic status of the full ecosystem, and functional groups are particularly relevant for modellers. Most algorithms that retrieve phytoplankton types from ocean colour data investigate anomalies in water reflectance which can be explained by a particular pigment signature. Although still challenging, this exercise is easier in Case I waters, as the water reflectance spectrum results only from the phytoplankton community (the phytoplankton itself and the related organic matter). In optically complex waters (Case II), which include most coastal waters, retrieving water constituents from ocean colour observations is much more challenging, as the water reflectance signal is also affected by external inputs such as river runoff, dissolved substances and resuspended sediments. There, even retrieving chlorophyll-a concentration can be extremely complex. To explore the capabilities of remote sensing in coastal waters, eight study areas in European waters are investigated. They include the Baltic Sea, the Mediterranean Sea, the North Sea, the English Channel, the Atlantic coast and the Norwegian coast.
Different algorithms derived from machine learning methods are being tested to retrieve phytoplankton size classes (pico-, nano- and micro-phytoplankton) and four phytoplankton colour groups (red, brown, green and blue-green phytoplankton). These colour groups have been chosen because they are expected to be more easily detectable from space; indeed, two different phytoplankton species with close pigment signatures can be very difficult to differentiate. The objective is to determine whether the phytoplankton groups defined above can be retrieved in coastal waters and, if so, whether a single algorithm could be used across these different regions. This work is supported by the OBAMA-NEXT project (H2020 project 101081642).

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Unveiling Suspended Particulate Matter Dynamics and Environmental Drivers in European Coastal Waters Using Machine Learning and Satellite Data

Authors: Corentin Subirade, Cédric Jamet, Roy El Hourany
Affiliations: Université Littoral Côte d’Opale
Remote sensing of Suspended Particulate Matter (SPM) is essential for water-quality monitoring as it influences turbidity, light availability, and nutrient transport. Coastal ecosystems act as important interfaces between land and ocean, exhibiting high spatial and temporal SPM variability. Their ecological, societal, and economic value makes them very sensitive to natural and human-induced environmental changes. This study provides a comprehensive assessment of the mechanisms driving SPM spatio-temporal variability in European coastal waters, for the period 2016-2025, utilizing the Ocean and Land Color Instrument (OLCI) Copernicus Level-3 Remote Sensing Reflectance (Rrs) product. The semi-analytical algorithm of Han et al. (2016) was applied to the OLCI Rrs data to estimate SPM concentrations. The generated product was validated through a matchup exercise in the diverse French coastal waters (n = 71, Bias = -27%, Error = 63%, Slope = 0.85). Across European coastal waters, SPM concentrations are influenced by dynamic ocean circulation patterns and interactions between the atmosphere, ocean, and land. To investigate the drivers of SPM in this vast marine domain, we implemented a machine-learning-based two-step procedure. First, European coastal waters were classified into regions based on SPM seasonal cycles using a Self-Organizing Map combined with a Hierarchical Ascending Clustering method. This classification resulted in 10 distinct regions, ranging from clear offshore waters with relatively low SPM values throughout the year (< 0.5 g.m-3), to turbid estuarine areas with higher SPM concentrations peaking on average in winter (> 5 g.m-3). SPM seasonal cycles per class presented substantial differences, both in magnitude and shape. Second, SPM concentrations within each class were modeled using a random forest approach with reanalysis environmental variables including wind, waves, currents, and sea surface density (SSD). 
The contributions of these variables to SPM variability were evaluated using a feature-permutation method, enabling an analysis of the spatial and temporal variability of their influence. Contributions were scaled by the percentage of variance explained by the random forests in each class. Wind and waves emerged as dominant drivers in shallow-bathymetry regions, accounting for 24% and 19% of SPM variability, respectively, at the European scale. In contrast, SSD significantly influenced areas impacted by river plumes, explaining 23% of SPM variability within these regions. Current speed showed a relatively minor contribution, not exceeding 4% at the continental scale. This clustering-based approach offers a valuable framework for assessing future changes in water quality and SPM dynamics, providing an objective foundation for the management of marine ecosystems across Europe.
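The feature-permutation idea described above can be sketched with synthetic data: fit a model, shuffle one driver at a time, and measure the drop in explained variance (R²). A simple linear model stands in for the study's random forests, and the coefficients (strong wind effect, moderate wave effect, no current effect) are illustrative, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
X = rng.random((n, 3))  # columns stand in for wind, waves, current
# Synthetic SPM: driven by wind and waves, independent of current
spm = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.normal(size=n)

# Fit a linear model (least squares with an intercept column)
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(n)], spm, rcond=None)
predict = lambda M: np.c_[M, np.ones(len(M))] @ coef
r2 = lambda y, p: 1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)

base = r2(spm, predict(X))
importances = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])    # break the feature-target link
    importances.append(base - r2(spm, predict(Xp)))
print([round(i, 3) for i in importances])   # wind >> waves > current
```

The permuted feature with the largest R² drop is the one the model relied on most, which is the ranking logic behind the driver contributions reported above.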

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Advancing Oceanic Primary Production Estimates: Integrating Satellite Data, Vertical Dynamics, and BGC-Argo Observation

Authors: Marine Bretagnon, Quentin Jutard, Philippe Bryère, Julien Demaria, Antoine Mangin
Affiliations: ACRI-ST, ACRI-ST, Site de Brest, quai de la douane
Oceanic primary production (PP) converts sunlight, carbon dioxide, and nutrients into organic matter through photosynthesis. This process forms the foundation of the marine food web, supporting fish stocks and other marine life. It also plays a critical role in regulating the Earth's climate by absorbing large amounts of atmospheric carbon dioxide, a key greenhouse gas. Healthy primary production is essential for maintaining ecosystem balance, sustaining biodiversity, and providing resources for fisheries, on which millions of people globally depend for food and livelihood. However, despite primary production being a critical parameter, in situ measurements remain limited. This is primarily due to the need for incubation during measurements and the lack of a standardized protocol for their execution. Satellites equipped with ocean colour sensors measure the light reflected by the ocean surface to estimate the concentration of chlorophyll-a, a pigment in phytoplankton. By combining chlorophyll-a data with environmental factors like light availability and sea surface temperature, models can estimate primary production at a global scale. These satellite-derived estimates provide a comprehensive and continuous view of primary production patterns, offering valuable insights into ecosystem dynamics, climate interactions, and the sustainability of marine resources. In this study, we will investigate the vertical component of primary production, which is key to understanding the full depth-integrated dynamics of oceanic productivity. We will compare the vertically integrated PP results obtained from a range of data sources, including in situ bottle measurements, data from Biogeochemical-Argo (BGC-Argo) floats, ocean colour data (without direct access to vertical information), and outputs of the SOCA machine learning model (https://data.marine.copernicus.eu/product/MULTIOBS_GLO_BIO_BGC_3D_REP_015_010/description).
This analysis will be conducted using established algorithms, mixing observational data and model outputs. This cross-comparison of approaches will unlock new opportunities for improving the accuracy of primary production estimates from remote sensing ocean colour. These advancements will deepen our understanding of primary production, its role in global carbon cycling, its support of marine ecosystems, and its influence on climate change predictions.
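As a toy illustration of the depth-integration step discussed above: production at depth z can be modelled as a function of a chlorophyll profile and exponentially attenuated light, then integrated over the water column. All coefficients and profile shapes below are illustrative placeholders, not parameters from any of the algorithms or data sources the abstract cites.

```python
import numpy as np

z = np.linspace(0, 100, 201)     # depth grid (m)
kd = 0.08                        # diffuse attenuation coefficient (m^-1), placeholder
par0 = 40.0                      # surface PAR (mol photons m^-2 d^-1), placeholder

# Chlorophyll profile with a deep maximum around 40 m (illustrative shape)
chl = 0.3 + 0.5 * np.exp(-0.5 * ((z - 40.0) / 10.0) ** 2)

par = par0 * np.exp(-kd * z)             # light decays exponentially with depth
pp_z = 2.0 * chl * par / (par + 10.0)    # saturating light response (mg C m^-3 d^-1)

# Depth-integrated PP (mg C m^-2 d^-1) via the trapezoidal rule
pp_int = float(np.sum(0.5 * (pp_z[1:] + pp_z[:-1]) * np.diff(z)))
print(round(pp_int, 1))
```

Replacing the idealized chlorophyll profile with BGC-Argo float profiles, versus assuming a surface-only value from ocean colour, is exactly the kind of comparison the vertical-component analysis above would make explicit.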

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Impact of Marine and Atmospheric Heatwaves on Intertidal Seagrass: Experimental Spectroradiometry and Satellite-Based Insights

Authors: Simon Oiry, Bede Davies, Philippe Rosa, Laura Zoffoli, Anne-Laure Barillé, Nicolas Harin, Pierre Gernez, Laurent Barillé
Affiliations: Institut des Substances et Organismes de la Mer, ISOMer, Nantes Université, UR 2160, F-44000, Consiglio Nazionale delle Ricerche, Istituto di Scienze Marine (CNR-ISMAR), 00133, Bio-littoral, Immeuble Le Nevada, 2 Rue du Château de l’Eraudière
Seagrass meadows are important coastal ecosystems, serving as crucial habitats for marine biodiversity, stabilizing sediments to mitigate erosion, and acting as significant carbon sinks in global climate regulation. However, these vital ecosystems face mounting threats from climate change, with the intensification and increased frequency of marine and atmospheric heatwaves posing profound risks to their health and functionality. This study investigates the impact of these extreme thermal events on intertidal seagrass, employing a combination of laboratory-controlled experiments and satellite-based remote sensing to capture changes in spectral reflectance and assess the broader ecological implications. In the laboratory experiments, seagrass of the species Zostera noltei was exposed to controlled simulated heatwave conditions to assess the physiological and structural impacts of extreme thermal stress. The experimental design involved placing seagrass samples in intertidal chambers that simulated natural tidal cycles, allowing temperature conditions to be closely regulated during both high and low tides. One chamber served as a control, maintaining typical seasonal temperatures, while the other was used to simulate heatwave conditions, with progressively increasing air and water temperatures mimicking an actual heatwave event observed in the field. Hyperspectral reflectance measurements were recorded at regular intervals during low tide to monitor changes in the seagrass over time. Heatwave exposure resulted in a significant reduction in reflectance, particularly in the green (around 560 nm) and near-infrared (NIR) regions of the spectrum. This decline in reflectance was closely linked to visible leaf browning, suggesting alterations in pigment content, in the internal structure of the seagrass leaves and in their overall vitality.
The green reflectance decline indicates a reduction in the plant's health, while changes in NIR reflectance often relate to the internal arrangement of cells and air spaces, which are sensitive to heat-induced damage. Vegetation indices like the Normalized Difference Vegetation Index (NDVI) and the Green Leaf Index (GLI), which are indicative of the overall health and structural integrity of vegetation, showed marked decreases under heatwave conditions, with NDVI dropping by up to 34% and GLI by 57%. This significant reduction highlights the adverse effects on the seagrass's ability to maintain its normal physiological processes under heat stress. To quantify these changes more effectively, a novel Seagrass Heat Shock Index (SHSI) was developed. The SHSI, applicable to emerged seagrass, was particularly effective in detecting the transition from green leaves to darkened, stressed leaves, providing a sensitive and reliable tool for assessing thermal stress effects on seagrass. By focusing on specific changes in reflectance across certain spectral bands, the SHSI allowed for a clear differentiation between unimpacted and impacted vegetation, capturing the onset of heat-induced stress with high accuracy. This sensitivity makes the SHSI valuable for early intervention, enabling managers and researchers to identify vulnerable seagrass meadows before substantial damage occurs, thereby facilitating more timely conservation measures. Complementing the experimental data, Sentinel-2 observations provided clear evidence of the effects of a documented heatwave event in South Brittany, France, on natural seagrass meadows. These intertidal zones were exposed to extreme air temperatures of up to 32°C for more than 13.5 hours per day, leading to significant leaf darkening, which affected up to 24% of the meadow's area.
The satellite-derived SHSI indicated a strong spatial correlation between prolonged heat exposure and areas experiencing spectral darkening, highlighting the susceptibility of intertidal seagrasses to extended thermal stress. Spatial analysis showed that darkening was especially pronounced in the higher intertidal regions, where seagrasses were exposed to air for longer durations during low tide, underlining the interaction between tidal exposure and thermal stress. However, Sentinel-2 data acquired one month after the heatwave showed partial recovery in some areas, suggesting a certain level of resilience in Zostera noltei. Despite this recovery, the seagrasses that had experienced the most severe darkening did not fully return to their original state, indicating that while seagrass meadows have some capacity for resilience, prolonged or repeated thermal stress can have lasting impacts, particularly in the more exposed intertidal regions. This highlights the need for focused conservation efforts to support their recovery and enhance their resilience in the face of increasing climate-driven thermal extremes. The combination of laboratory-controlled experiments and satellite-based remote sensing provided a comprehensive understanding of the impacts of heatwaves on seagrass meadows at multiple scales, from individual leaf-level responses to entire meadow-wide effects. This integrated approach highlights the potential of leveraging both detailed local observations and large-scale satellite data to effectively monitor ecosystem changes, offering valuable insights for the management and conservation of these vulnerable habitats. The study underscores the critical role of spectral reflectance as an early warning indicator of heatwave-induced stress, laying the foundation for remote sensing-based monitoring and conservation efforts.
By employing innovative indices like the Seagrass Heat Shock Index (SHSI), we capture subtle yet ecologically significant changes, advancing the precision of habitat assessments in intertidal zones. As climate scenarios predict more frequent and intense heatwaves, the need for continuous monitoring of intertidal seagrass meadows becomes increasingly urgent. This research demonstrates the efficacy of remote sensing in capturing rapid environmental changes, providing a framework for mitigating the impacts of climate-driven stressors and calling for adaptive conservation strategies. These strategies should integrate advancements in remote sensing technologies with targeted field-based interventions to preserve the resilience of intertidal seagrass meadows, thereby addressing the escalating challenges posed by climate change and ensuring the continued health of these indispensable coastal habitats.
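The two standard band-ratio indices reported above (NDVI and the Green Leaf Index of Louhaichi et al., 2001) can be written out directly. The reflectance values in this sketch are illustrative examples of a healthy and a browned leaf, not measurements from the study, and the SHSI itself is not reproduced here since its formulation is specific to the paper.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gli(green, red, blue):
    """Green Leaf Index (Louhaichi et al., 2001)."""
    return (2 * green - red - blue) / (2 * green + red + blue)

# Illustrative per-pixel reflectances (assumed values, not study data)
healthy = dict(blue=0.03, green=0.08, red=0.04, nir=0.45)
stressed = dict(blue=0.04, green=0.06, red=0.06, nir=0.25)  # browned leaf

for name, px in [("healthy", healthy), ("stressed", stressed)]:
    print(name,
          round(ndvi(px["nir"], px["red"]), 2),
          round(gli(px["green"], px["red"], px["blue"]), 2))
```

Both indices fall for the browned-leaf spectrum, mirroring the NDVI and GLI decreases reported under heatwave conditions.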

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Remote Sensing of the German North Sea Coast: A Review

Authors: Karina Alvarez, Dr. Felix Bachofer, Dr. Claudia Kuenzer
Affiliations: University of Wuerzburg, German Aerospace Center
The German North Sea coast is of immense economic, cultural, and environmental importance. It includes a portion of the Wadden Sea World Heritage Site, which extends into the Netherlands and Denmark and represents the largest system of tidal flats in the world. Monitoring of such important and sensitive habitats is critical for their informed management, and even more so in the face of a changing climate. Remote sensing (RS) provides an opportunity for consistent, low-cost monitoring, especially for difficult-to-access portions of the Wadden and North Sea. To date, however, no comprehensive review of RS applications for this area has been conducted, limiting their use for effective monitoring. This study summarizes RS efforts and findings in this region and identifies gaps and opportunities. We conducted a literature review for the years 2000 to August 2024 that yielded 102 papers. These papers ranged from measuring single physical and biogeochemical metrics to habitat classifications and ecosystem integrity assessments. They could be grouped into four main research categories: coastal morphology (32%), water quality (31%), ecology (28%), and sediment (8%). Studies on intertidal topography were the most numerous, making up nearly 20% of the papers. In the water quality, ecology, and sediment categories, the main focuses were SST and chlorophyll, bivalves, and sediment transport, respectively. Over half of the papers (64%) use satellite remote sensing, whereas about a third use airborne remote sensing. Multispectral data was by far the most commonly used data type in these studies, followed by SAR. The studies considered in this review span a wide range of spatial scales and resolutions, revealing that the two are generally inversely correlated.
Further, coastal morphology and ecology studies clearly cluster in high spatial resolution and small extents compared to water quality and sediment studies, which generally use lower spatial resolution but over larger study areas. Gaps identified in this review include coastal morphology and ecology studies at larger spatial scales, especially at scales that align with management areas such as the German Wadden Sea National Parks. Additionally, higher spatial resolution water quality studies, especially important in highly variable areas such as coastal zones, would help better characterize the highly dynamic nature of water quality in this area. Studies beyond this study area suggest that, with novel machine learning methods and advances in processing power, satellite RS has high potential to fill these gaps. This review finds that RS, and especially satellite-based RS, already plays a notable role in monitoring of the German North Sea coast and will likely continue to play a role in providing critical information for coastal managers.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Analyzing Satellite Scaling Bias Using Drone Data: Application to Microphytobenthos Studies

Authors: Augustin Debly, Bede Ffinian Rowe Davies, Dr Simon Oiry, Julien Deloffre, Romain Levaillant, Jéremy Mahieu, Ernesto Tonatiuh Mendoza, Hajar Saad El Imanni, Philippe Rosa, Laurent Barillé, Vona Méléder
Affiliations: Nantes Université, Institut des Substances et Organismes de la Mer, ISOMer, UR 2160, F-44000 Nantes, France, Univ Rouen Normandie, Univ Caen Normandie, CNRS, M2C, UMR 6143, F-76000 Rouen, France
Microphytobenthos (MPB) are microalgae that form biofilms on sediment surfaces, playing a crucial role in coastal ecosystems. They contribute significantly to food web support, carbon (CO₂) fluxes, and the stabilization of mudflats. Traditionally, MPB assessments have been conducted through in situ measurements. However, in recent years, remote sensors have increasingly been used for monitoring MPB, including the use of satellite imagery. While satellites offer a broad spatial and temporal coverage, they also present challenges, particularly regarding the "scaling bias." This bias arises from differences in observations due to the spatial resolution of the data, which can lead to discrepancies in ecological metrics derived from satellite data. One key area affected by scaling bias is the estimation of carbon fluxes, which can be derived from MPB biomass. These estimates often rely on Gross Primary Production (GPP) models, which use the Normalized Difference Vegetation Index (NDVI) as a proxy for biomass. The scaling bias arises from non-linearities in converting NDVI to biomass, combined with the spatial variability of MPB. This study aims to quantify the scaling bias in MPB assessments by leveraging high-resolution drone data, which provide a more detailed view of MPB distribution and variability than satellites. Drone surveys were conducted across four coastal sites during different seasons to capture the spatial heterogeneity of MPB. These high-resolution datasets were then used to simulate what satellite sensors would detect at coarser resolutions, assuming a linear averaging between the two scales for NDVI, though this assumption is being further examined and discussed. The conversion from NDVI to biomass was performed using an exponential model. This method addresses the saturation effect of NDVI at higher biomass levels. 
Biomass estimates were derived at both fine and coarse resolutions, and the scaling bias was determined by comparing the values obtained at these two scales. The results present maps indicating a scaling bias of a few per cent, with coarse-resolution biomass consistently lower than the estimates calculated at finer resolutions. To model the bias, the spatial structure of MPB-induced NDVI was represented using a statistical beta distribution, defined by two shape parameters. This choice is appropriate as the beta distribution is continuous and bounded. It has been demonstrated that the bias is influenced by the statistical moments of these distributions.
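The aggregation order described above (convert NDVI then average, versus average NDVI then convert) can be sketched numerically; the beta shape parameters and exponential coefficients below are illustrative placeholders, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fine-scale NDVI within one coarse pixel, modelled with a beta
# distribution (shape parameters are illustrative, not fitted values).
ndvi_fine = rng.beta(a=2.0, b=5.0, size=10_000)

# Hypothetical exponential NDVI-to-biomass model B = c * exp(k * NDVI);
# the coefficients below are placeholders, not the study's calibration.
def biomass(ndvi, c=1.0, k=3.0):
    return c * np.exp(k * ndvi)

# Fine-scale estimate: convert first, then aggregate.
b_fine = biomass(ndvi_fine).mean()

# Coarse-scale estimate: aggregate NDVI first (linear averaging),
# then convert the pixel-mean NDVI.
b_coarse = biomass(ndvi_fine.mean())

# Scaling bias: by Jensen's inequality (exp is convex), the coarse
# estimate is systematically lower than the fine one.
bias_pct = 100.0 * (b_coarse - b_fine) / b_fine
print(f"bias = {bias_pct:.2f}%")  # negative: coarse < fine
```

The negative sign of the bias is guaranteed by the convexity of the exponential model, consistent with the coarse-resolution underestimation reported in the abstract.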
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: New ocean color algorithms for estimating the surface concentrations of particulate organic nitrogen and phosphorus from satellite observations

Authors: Fumenia Alain, Loisel Hubert, Jorge Daniel, Bretagnon Marine, Mangin Antoine, Bryère Philippe
Affiliations: Laboratoire D'océanologie Et De Géosciences, ACRI-ST
In a context of anthropogenic perturbation of the nitrogen and phosphorus cycles, determining long-term trends and budgets of all nitrogen and phosphorus chemical species within the global ocean represents a significant challenge. This study highlights the potential of using inherent optical properties (IOPs) derived from semi-analytical algorithms applied to satellite ocean color observations as proxies for estimating surface mass concentrations of particulate organic nitrogen (PON) and phosphorus (POP) at the global scale. Specifically, the IOPs considered are the absorption coefficients of total particulate matter, ap(λ), and phytoplankton, aph(λ). These IOPs were derived from satellite ocean remote-sensing reflectance, Rrs(λ), using different available inverse methods. Our results reveal that reasonably strong relationships between PON or POP and satellite-derived IOPs hold across a range of diverse oceanic and coastal environments. Both coefficients, ap(λ) and aph(λ), show the ability to serve as proxies for PON and POP across a broad range of environments, from open-ocean oligotrophic waters to coastal waters. The validation of the algorithms is based on matchups between an extensive dataset of concurrent in situ particulate organic matter measurements and satellite-derived particulate IOPs. Additionally, comparison with an in situ time series spanning twenty years also shows the good performance of the algorithm in reproducing the temporal evolution of PON and POP. Applying these algorithms to merged product observations provides global PON and POP distribution patterns that agree with the expected geographical distribution from in situ measurements. High PON and POP concentrations are observed in turbid shelf and coastal regions as well as in upwelling areas, while low PON and POP concentrations are observed in oligotrophic regions.
The presented relationships offer a promising means to assess long-term trends and/or budgets of PON and POP at the global oceanic scale, or in specific oceanic areas that could be affected by anthropogenic perturbations impacting the production of organic nitrogen and phosphorus. These relationships could also be helpful for gaining insight into nitrogen cycling in environments where nitrogen budgets are needed, such as hot spots of intense biological N2 fixation.
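As a hedged illustration of how such an IOP-based proxy can be calibrated from matchups, the sketch below fits a power law between a synthetic ap(443) and PON in log space; the functional form, wavelength, coefficients and noise level are assumptions for illustration, not the algorithm actually developed in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic matchups: particulate absorption ap(443) [m^-1] versus in situ
# PON [mg m^-3]; the power-law form and coefficients are assumptions, not
# the study's fitted relationship.
ap443 = 10 ** rng.uniform(-3, -0.5, 200)
pon_true = 80.0 * ap443 ** 0.85
pon_obs = pon_true * 10 ** rng.normal(0, 0.05, 200)  # lognormal noise

# Fit log10(PON) = slope * log10(ap443) + intercept by least squares.
slope, intercept = np.polyfit(np.log10(ap443), np.log10(pon_obs), 1)

def pon_from_ap443(ap):
    """Apply the fitted power-law proxy to a satellite-derived ap(443)."""
    return 10 ** (slope * np.log10(ap) + intercept)

print(slope, 10 ** intercept)  # recovers roughly 0.85 and 80
```

Fitting in log space keeps the relative (rather than absolute) errors homoscedastic, which is the usual choice for concentration-versus-IOP regressions spanning several orders of magnitude.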
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Modeling and Numerical Simulation of Ocean Circulation and Its Impact on Fisheries Resources: A Case Study of Northern Morocco

Authors: Hasna BOUAZZATI, Asma DAMGHI, Abdelmounim El M’RINI, Song WEIYU
Affiliations: Research Laboratory in Applied and marine Geosciences, Geotechnics and Geohazards (LR3G), Faculty of Sciences, Abdelmalek Essaadi University, Qingdao Institution of Marine Geology,
Climate change is increasingly altering ocean circulation patterns and upwelling processes, with significant impacts on marine ecosystems and fisheries. These shifts affect the distribution, abundance, and phenology of fish and shellfish, challenging the sustainability of fisheries, particularly in regions like Morocco. This study investigates the mechanisms by which ocean circulation influences key oceanographic factors, such as temperature, oxygen levels, and acidification, and how these changes affect marine species' vital rates—growth, reproduction, and survival. Using high-resolution numerical models, GIS tools, and satellite data, the research simulates the current and future impacts of ocean circulation on fish populations and fisheries resources. The results reveal that climate-induced shifts in ocean circulation are already pushing several key fish species in Morocco to new areas, reducing their accessibility to traditional fishing methods and potentially threatening fish stocks. Certain regions, particularly those heavily reliant on upwelling, are identified as vulnerable hotspots. Future projections suggest continued disruptions to species composition and fishery yields due to warming, acidification, and reduced oxygen levels in the marine environment. This research underscores the need for adaptive fisheries management and resilience-building strategies, integrating climate and oceanographic data into policy-making. The findings highlight the importance of proactive measures to sustain fishery resources and mitigate socio-economic impacts on coastal communities, ensuring the long-term sustainability of Morocco's fisheries in the face of ongoing climate change.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Identifying Phytoplankton Groups From Absorption Spectra – A Regional Approach Based on Data From the Baltic Sea and Estonian Lakes

Authors: Ian-Andreas Rahn, Dr Kersti Kangro, Krista Alikas, Rüdiger Röttgers, Martin Hieronymi, Mr. Rene Freiberg
Affiliations: University Of Tartu, Helmholtz-Zentrum Hereon, Estonian University of Life Sciences
Determining Chl-a from optical measurements and using it as a proxy for total phytoplankton biomass has a long history. However, different phytoplankton groups occupy unique positions in the ecosystem, and thus there is a need to distinguish them. Some groups, such as the cyanobacteria, cryptophytes, chrysophytes, chlorophytes, dinoflagellates and diatoms, can be discriminated via unique photoactive pigments (markers). These are typically measured using High-Performance Liquid Chromatography (HPLC), an arduous, expensive, and time-consuming process. Obtaining information about the presence and concentration of ’marker’ pigments from absorption spectra would allow for quicker and cheaper analysis of phytoplankton dynamics. It would also assist in developing further algorithms for future hyperspectral satellite missions, such as CHIME (Copernicus Hyperspectral Imaging Mission), and ongoing ones, such as PACE (Plankton, Aerosol, Cloud, ocean Ecosystem). Here, different approaches to deriving pigment concentrations have been undertaken – a chl-based model, a Gaussian decomposition model, and a model based on principal component analysis (PCA). The pigment concentrations were also linked with the measured biomass. The analysis relies on data gathered during a research cruise on the Baltic Sea and dedicated optical monitoring campaigns of Estonian lakes. Developing a combined model for both types of water bodies, which could be used within Estonia and around the various coastal areas of the Baltic Sea, has been explored. Preliminary results indicate that different models are better at distinguishing different pigments. The Gaussian decomposition model has shown overall better performance than the other models, especially for photoprotective carotenoids (PPC) derived from absorption at 498 nm (r2=0.86). Meanwhile, the chl-a model showed promising results for discerning zeaxanthin (r2=0.72), a key pigment in cyanobacteria.
The strengths and limitations of each model have been discussed, contributing to the advancement of phytoplankton identification techniques. Implications for phytoplankton group biomass have also been highlighted. These results can be used as inputs for future algorithms for hyperspectral satellite missions (e.g. CHIME) in distinguishing phytoplankton in coastal areas and inland waters.
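A minimal sketch of the Gaussian decomposition idea, assuming fixed band centres and widths so that the amplitudes can be solved by ordinary least squares; the component positions (including a 498 nm PPC-like band) and the synthetic spectrum are illustrative placeholders, not the fitted model of this study:

```python
import numpy as np

wl = np.arange(400, 701, 2.0)  # wavelength grid [nm]

def gauss(wl, centre, width):
    return np.exp(-0.5 * ((wl - centre) / width) ** 2)

# Illustrative Gaussian components; centres/widths are placeholders loosely
# inspired by pigment absorption bands (e.g. 498 nm for PPC), not the
# study's fitted decomposition.
centres = [440.0, 470.0, 498.0, 675.0]
widths = [20.0, 15.0, 14.0, 11.0]
G = np.column_stack([gauss(wl, c, w) for c, w in zip(centres, widths)])

# Synthetic phytoplankton absorption spectrum with known amplitudes.
true_amp = np.array([0.05, 0.02, 0.015, 0.03])
aph = G @ true_amp + 1e-4 * np.random.default_rng(1).normal(size=wl.size)

# With centres and widths fixed, the decomposition is linear in the
# amplitudes, so they can be recovered with a single least-squares solve.
amp, *_ = np.linalg.lstsq(G, aph, rcond=None)
print(amp)  # close to true_amp; amp[2] is the 498 nm (PPC-like) band
```

In practice the centres and widths would themselves be fitted (a nonlinear problem), but fixing them shows why the amplitude step reduces to linear algebra.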
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Trialing Real-Time Global Marine Litter Monitoring With Edge-SpAIce Project

Authors: Dr. Andis Dembovskis, Dr François De Vieilleville, Dr Pauline Audenino, Mr. Sioni Summers, Dr. Kikaki Katerina, Mr. Boyan-Nikola Zafirov
Affiliations: AGENIUM Space, CERN, NTUA, ENDUROSAT
Human health relies on the health of the oceans surrounding the land we all live on. Beyond serving as a global oxygen supplier and carbon-dioxide absorber [1], the ocean is also a critical part of the food supply chain. With tons [2] of marine plastic being dumped into the oceans annually, nanoplastics make their way into the food humans eat [3]. And it gets worse each year [2]. To care about human health means to care about ocean health. The Edge-SpAIce project was created to provide real-time insight into surface plastic litter in oceans, seas and rivers, with the aim of building a global EO capability that pinpoints such pollution sources for environmental policing agencies. Building such a service relies on the following main pillars: 1) EO image-gathering capability with the spectral resolution to enable plastic signature detection; 2) onboard edge-AI computing capacity to perform the image analysis; 3) an onboard-capable DNN trained to do the detection at commercial quality; and finally 4) the operational capacity of a satellite operator to execute it. While individual pieces have existed before, e.g. EO imagery with multi- and hyperspectral data and DNNs for plastic detection on the ground [4], a complete space-based service has never been trialed. Edge-SpAIce is funded by the Horizon Europe programme and aims to demonstrate a trial of such a service. The consortium comprises four partners: Endurosat (EDS) as space platform provider and operator, NTUA as the domain expert in plastic detection, CERN as the expert for optimized AI logic deployment on European FPGAs, and AGENIUM Space (AGS) as edge-AI technology solution provider and project coordinator. The mission was launched in January 2025 on a SpaceX Transporter-12 launcher into SSO [5]. The EO instrument onboard is a Simera Sense HyperScape 200. The initial months were used for LEOP, and since April the platform has been open for edge-AI application testing.
NTUA has developed a preliminary labelled dataset and a ground DNN for reference, and AGS has built a distilled, quantized and architecture-optimized DNN version for onboard detection of marine plastic in raw images. This paper evaluates the first results of onboard marine plastic litter detection and provides insight into preliminary onboard AI execution timing and precision. Execution times are reported for the DNN run on a space-equivalent engineering model on the ground, with logic deployed on a Zynq UltraScale+ ZU15EG board using the VITIS AI versus HLS4ML frameworks. It reviews the datasets used to train the DNN and the techniques applied to enable the use of fused multi-camera sources and to optimize the network for SoC-FPGA execution. Furthermore, the paper elaborates on the detection benefits of additional onboard pre-processing using the AI-based PRNU/DSNU calibration algorithm AGS has developed through ESA’s FutureEO programme, technical details of which were already presented at ESA VH-RODA [6]. Additionally, this paper outlines a roadmap of further technologies that could improve the service in future, including a better-suited EO camera and improved onboard processing hardware options for further missions, e.g. hyperspectral sensor requirements for micro-plastic detection. Finally, it reviews operational costs and suggests business models for future environmental policing applications.
References:
[1] United Nations, “The ocean – the world’s greatest ally against climate change”, 2024, https://www.un.org/en/climatechange/science/climate-issues/ocean
[2] Hannah Ritchie, “Where does the plastic in our oceans come from?”, 2021, OurWorldinData.org, https://ourworldindata.org/ocean-plastics
[3] Elise M. Tuuri, Sophie Catherine Leterme, “How plastic debris and associated chemicals impact the marine food web: A review”, Environmental Pollution, Volume 321, 2023, 121156, ISSN 0269-7491
[4] Kikaki, K., Kakogeorgiou, I., Mikeli, P., Raitsos, D. E., & Karantzalos, K. (2022). “MARIDA: A benchmark for Marine Debris detection from Sentinel-2 remote sensing data.” PloS one, 17(1), e0262247.
[5] Transporter-12 reference: https://rocketlaunch.org/mission-falcon-9-block-5-transporter-12-dedicated-sso-ride
[6] Dr. de Vieilleville François, “Towards DSNU estimation on routine images”, ESA VH-RODA poster session, 2024.
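One ingredient of preparing a DNN for such onboard deployment is weight quantization. The sketch below shows generic symmetric per-tensor int8 post-training quantization; it is an illustrative simplification, not the AGS distillation flow or the Vitis AI/hls4ml toolchains themselves:

```python
import numpy as np

# Minimal sketch: map float32 weights onto int8 with a single scale factor,
# then dequantize to check the reconstruction error. Real FPGA flows
# (e.g. Vitis AI or hls4ml) involve calibration, per-channel scales and
# activation quantization on top of this.
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)

scale = np.abs(weights).max() / 127.0          # map max |w| to int8 range
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale         # reconstruction for checking

max_err = np.abs(weights - dequant).max()
assert max_err <= scale / 2 + 1e-7             # rounding error bound
print(f"scale={scale:.5f}, max abs error={max_err:.5f}")
```

The bound in the assertion follows directly from rounding: each weight moves by at most half a quantization step.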
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Evaluation of PRISMA Water Reflectance for the Validation of Biogeochemical Models

Authors: Giuliana Profeti, Paolo Lazzari, Giorgia Manfè, Eva Álvarez, Gian Marco Scarpa, Vittorio Ernesto Brando, Luis Gonzalez Vilas, Stefano Ciavatta, Dr.ssa Federica Braga
Affiliations: CNR-ISMAR, IEO-CSIC, MOI, OGS
The HE-NECCTON (New Copernicus capability for trophic ocean networks) project aims to build a fully integrated modelling system of the marine ecosystem for describing its functioning, predicting the impact of climate change and human pressure, and supporting policymakers in protecting biodiversity and managing resources in a sustainable manner. The modelling system is to be integrated into the Copernicus Marine Service to obtain reliable and timely ocean products. Novel research biogeochemical models have been upgraded by adding a spectral radiative transfer module describing the distribution of in-water irradiance along the water column and the interaction of optically active substances with the spectral light field. One of the objectives of NECCTON is to integrate spaceborne hyperspectral data into these modelling systems by means of augmented skill-performance metrics and novel assimilation techniques. A prerequisite for successful data assimilation is an accurate estimation of uncertainties in the satellite observations. We present the assessment of water reflectance derived from the PRISMA hyperspectral mission at selected aquatic sites, which will be used for biogeochemical model validation and data assimilation. In situ reflectance from autonomous hyper- and multispectral radiometer systems, such as AERONET-OC and WATERHYPERNET, is used to evaluate standard PRISMA Level 2 (L2C) products distributed by the Italian Space Agency and data derived from two atmospheric correction processors, ACOLITE and POLYMER, adapted for processing PRISMA Level 1 products. The qualitative and quantitative analyses of Remote Sensing Reflectance (Rrs) derived from PRISMA versus in situ radiometric data show consistent results at the longer wavelengths (i.e. from 500 nm onwards), while a relevant overestimation of PRISMA Rrs is observed at the shorter wavelengths for all the applied processors, probably due to the lower SNR of PRISMA L1 at these wavelengths.
In general, PRISMA L2C products perform worse than the other methods. PRISMA data processed with POLYMER and ACOLITE with glint correction show an overall good agreement, with the lowest errors between satellite and in situ measurements in the 490–620 nm spectral interval. The overall bias of POLYMER is close to 0, while ACOLITE shows an overall overestimation of the reflectance spectrum, with improved results when the glint correction is applied. The availability of in situ Rrs data from autonomous systems is fundamental for providing validation data and for thoroughly assessing the radiometric performance of PRISMA Rrs in any spectral band between 400 and 900 nm. The results over the four water bodies analysed in this study are encouraging, confirming the consistency of PRISMA Rrs and its capability to provide adequate radiometric products for the retrieval of water quality parameters and for the validation of biogeochemical models. For the AAOT site in the Adriatic Sea, we performed a preliminary comparative analysis of Rrs spectra derived from the GOTM-FABM-BFM bio-optical biogeochemical model and observations from multispectral satellite sensors and hyperspectral PRISMA data processed with POLYMER and ACOLITE. Across all methods, the general spectral shape of Rrs is consistent, with peaks in the green region and Rrs values generally higher in winter, which could be attributed to seasonal variations in water composition. Although the performance of PRISMA products varies by spectral range and correction method, in general they align well with the biogeochemical model and established multispectral sensors in the 500–900 nm range, making PRISMA a strong candidate for data assimilation into the HE-NECCTON system.
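The matchup statistics underlying such an assessment (per-band mean bias and median absolute percentage difference) can be sketched as follows; the reflectance values are synthetic placeholders standing in for PRISMA-derived and in situ (e.g. AERONET-OC) spectra, not real data:

```python
import numpy as np

# Matchup arrays of shape (n_matchups, n_bands) holding remote sensing
# reflectance [sr^-1]; values are synthetic, with a small positive offset
# injected so the satellite series overestimates the in situ one.
rng = np.random.default_rng(7)
bands = np.array([412, 443, 490, 560, 620, 665])
insitu = rng.uniform(0.002, 0.012, size=(50, bands.size))
sat = insitu * (1 + rng.normal(0, 0.1, size=insitu.shape)) + 0.0005

def matchup_stats(sat, insitu):
    """Per-band mean bias and median absolute percentage difference."""
    diff = sat - insitu
    bias = diff.mean(axis=0)
    mapd = 100.0 * np.median(np.abs(diff) / insitu, axis=0)
    return bias, mapd

bias, mapd = matchup_stats(sat, insitu)
for b, bi, m in zip(bands, bias, mapd):
    print(f"{b} nm: bias={bi:+.5f} sr^-1, MAPD={m:.1f}%")
```

With the injected offset, every band shows a positive bias, mimicking the kind of systematic overestimation the abstract reports for the shorter wavelengths.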
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: From pigment prediction to phytoplankton functional type trends with explainable machine-learning

Authors: Angus Laurenson, Dr Shubha Sathyendranath, Dr Victor Martinez-Vicente
Affiliations: Plymouth Marine Laboratory
Life in the ocean is dependent on phytoplankton. These photosynthetic micro-organisms are the foundation of the marine food web: if their distribution shifts, so must the rest of the marine food web shift with it. We seek the answer to the question: “How are phytoplankton responding to climatic changes in their environment?” Phytoplankton adapted to light below the surface by incorporating secondary pigments that shift the absorption spectrum of chlorophyll. These pigments indicate the functional type, and thus the ecological role, of phytoplankton [1]. We trained a machine-learning model to predict these pigments from remote sensing reflectance using a global dataset of 34,600 High Performance Liquid Chromatography (HPLC) measurements [2] matched to OC-CCI v6.0 4 km daily remote sensing reflectance [3] and GEBCO 2023 bathymetry [4]. The model's prediction of chlorophyll-a compared favourably to the OC-CCI. However, careful cross-validation revealed that of seven secondary pigments, only fucoxanthin and peridinin could be discriminated, corroborating earlier works [5]. We applied this model to the global OC-CCI time series to generate pigment predictions from 1993 to 2023 and, following the diagnostic pigment formula updated by Sun et al. in 2023 [2], we converted the predicted pigments into a predicted diatom fraction. Careful regression of monthly anomalies of diatoms and chlorophyll predicted by the model revealed significant trends, particularly around Antarctica. To link these changes to environmental drivers, we trained a second model to predict these variables directly from a time series of environmental drivers and tested its performance in a time-series-split cross-validation exercise. For the regions where it performed well, we use SHapley Additive exPlanations (SHAP) [6] to explain how the forecast model makes predictions and to reveal which environmental drivers dominate in different regions of the ocean on a per-pixel basis.
References:
1. Vidussi, Francesca, et al. "Phytoplankton pigment distribution in relation to upper thermocline circulation in the eastern Mediterranean Sea during winter." Journal of Geophysical Research: Oceans 106.C9 (2001): 19939-1995.
2. Sun, Xuerong, et al. "Coupling ecological concepts with an ocean-colour model: Phytoplankton size structure." Remote Sensing of Environment 285 (2023): 113415.
3. Sathyendranath, S.; Jackson, T.; Brockmann, C.; Brotas, V.; Calton, B.; Chuprin, A.; Clements, O.; Cipollini, P.; Danne, O.; Dingle, J.; Donlon, C.; Grant, M.; Groom, S.; Krasemann, H.; Lavender, S.; Mazeran, C.; Mélin, F.; Müller, D.; Steinmetz, F.; Valente, A.; Zühlke, M.; Feldman, G.; Franz, B.; Frouin, R.; Werdell, J.; Platt, T. (2021): ESA Ocean Colour Climate Change Initiative (Ocean_Colour_cci): Version 5.0 Data. NERC EDS Centre for Environmental Data Analysis, 19 May 2021.
4. GEBCO Compilation Group (2023) GEBCO 2023 Grid (doi:10.5285/f98b053b-0cbc-6c23-e053-6c86abc0af7b)
5. Stock, Andy, and Ajit Subramaniam. "Accuracy of empirical satellite algorithms for mapping phytoplankton diagnostic pigments in the open ocean: a supervised learning perspective." Frontiers in Marine Science 7 (2020): 599.
6. Lundberg, Scott. "A unified approach to interpreting model predictions." arXiv preprint arXiv:1705.07874 (2017).
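For illustration, a diatom fraction can be computed from pigment concentrations with a diagnostic pigment analysis of the kind referenced in the abstract; the sketch below uses the classical Uitz et al. (2006) weights rather than the updated Sun et al. (2023) coefficients applied in this work, and the sample concentrations are invented:

```python
# Classical diagnostic-pigment weights (Uitz et al., 2006); the study
# uses the updated Sun et al. (2023) coefficients, which differ, so these
# serve only to illustrate the shape of the calculation.
WEIGHTS = {
    "fuco": 1.41,   # fucoxanthin      -> diatoms
    "peri": 1.41,   # peridinin        -> dinoflagellates
    "hex":  1.27,   # 19'-hex-fuco     -> haptophytes
    "but":  0.35,   # 19'-but-fuco     -> pelagophytes
    "allo": 0.60,   # alloxanthin      -> cryptophytes
    "chlb": 1.01,   # chlorophyll-b    -> green algae
    "zea":  0.86,   # zeaxanthin       -> cyanobacteria
}

def diatom_fraction(pigments):
    """Weighted fucoxanthin share of the diagnostic pigment sum."""
    total = sum(WEIGHTS[k] * pigments[k] for k in WEIGHTS)
    return WEIGHTS["fuco"] * pigments["fuco"] / total

# Invented pigment concentrations [mg m^-3] for a fucoxanthin-rich sample.
sample = {"fuco": 0.8, "peri": 0.1, "hex": 0.3, "but": 0.1,
          "allo": 0.05, "chlb": 0.2, "zea": 0.1}
print(round(diatom_fraction(sample), 3))
```

This also shows why fucoxanthin was the critical pigment for the trend analysis: the diatom fraction is driven entirely by its weighted share of the pigment sum.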
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Evaluating topographic characteristics and population density in an Antarctic penguin colony using UAV-driven deep learning models

Authors: Oleg Belyaev Korolev, Dr. Alejandro Román, Dr. Josabel Belliure, Dr. Gabriel Navarro, Dr. Luis Barbero, Dr. Antonio Tovar-Sánchez
Affiliations: Institute Of Marine Sciences Of Andalusia, Alcala University, Cadiz University
This study examines the ecological role of chinstrap penguins (Pygoscelis antarcticus) in Antarctica, focusing on their population dynamics, behaviour, and the environmental impacts produced by climate change. Penguins play a key role in nutrient cycling and trace metal dynamics, highlighting the relevance of research efforts in characterizing their local biochemical contributions to the environment. Using UAVs equipped with RGB and multispectral sensors, the study mapped the Vapour Col penguin colony on Deception Island, Antarctica, identifying several runoff-discharge points where guano and other materials enter the marine environment, enriching coastal waters with nutrients and trace metals like iron. These findings provide data for establishing environmental sampling stations to better understand nutrient transfer in the Southern Ocean. Additionally, deep-learning models, specifically YOLOv8, were used to estimate population size, yielding a range of 13,250 to 22,000 breeding pairs during the 2021/2022 season. Adjustments were made for late-season data collection by simulating clutch initiation dates, improving accuracy. The study also tested using chick counts as a proxy for adult numbers, offering an alternative for future assessments. Results show a stable population compared to past decades, despite previous declines, suggesting some resilience in this colony. The integration of UAVs and deep learning provides a precise, non-invasive, and efficient way to monitor wildlife. Unlike traditional ground-based methods, which are labour-intensive and disruptive, UAVs captured high-resolution data across remote areas, while deep-learning models processed it to identify individual penguins and map their distribution. The study also linked guano-stained areas to chick presence, showing how spatial analyses can explain habitat use during critical life stages. Beyond population estimates, the research highlights the broader ecological importance of penguin colonies.
By enriching local marine ecosystems, they play a role in supporting primary productivity and food web dynamics. Identifying key discharge points where nutrients enter the ocean offers new opportunities for targeted sampling and biochemical studies. These areas are likely hotspots for marine productivity, driven by the inputs from penguin colonies.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Biogeography of Arctic phytoplankton groups revealed from 20+ years of pigment data

Authors: Alexander Hayward
Affiliations: Danish Meteorological Institute
The Arctic is undergoing rapid environmental changes, with warming occurring at a rate faster than in any other region on Earth. This warming, driven primarily by atmospheric greenhouse gas increases, has resulted in dramatic reductions in sea ice coverage and a shift from multi-year to predominantly first-year ice. These changes have affected Arctic marine ecosystems, including phytoplankton dynamics. Over recent decades, phytoplankton abundances have increased significantly due to longer growing seasons and greater light and nutrient availability. Phytoplankton are foundational to Arctic ecosystem processes and carbon export to ocean depths; however, their ecological contributions are not uniform across taxa. Among Arctic phytoplankton, diatoms are particularly significant due to their high lipid content, which makes them a vital food source for Calanus copepods, a key link to higher trophic levels. Diatoms also play a crucial role in the biological carbon pump, sequestering atmospheric CO2 more effectively than other groups. Despite their importance, there remains a limited understanding of the biogeographical patterns of phytoplankton communities at a circumpolar scale. Addressing this gap, we conducted a collaborative, community-driven analysis, assembling the largest dataset of Arctic phytoplankton pigments to date, derived from high-performance liquid chromatography (HPLC). This dataset comprises over 8,000 samples collected from the mid-1990s to the present and represents diverse Arctic environments, including coastal waters, open-ocean, and ice-covered regions. Using the pigment inversion method phytoclass, we quantified chlorophyll a concentrations for major phytoplankton groups: diatoms, haptophytes, green algae, pelagophytes, dinoflagellates, and cryptophytes. Cluster analysis enabled us to identify distinct phytoplankton community types and map their spatial distributions across the Arctic.
This work has critical implications for ocean colour remote sensing, particularly in the context of hyperspectral capabilities from upcoming satellite missions such as NASA’s Plankton, Aerosol, Cloud, and Ecosystem (PACE) mission. Moreover, these findings provide a foundation for modeling phytoplankton group dynamics over time, offering insights into their responses to environmental changes and informing predictions about future Arctic ecosystem conditions.
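The cluster-analysis step can be sketched with a plain k-means over per-sample group-composition vectors; the two synthetic community types and all parameters below are illustrative stand-ins for phytoclass output, not the study's data or method details:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-sample fractions of group-specific chl-a (rows sum to 1),
# standing in for phytoclass output; two artificial community types.
diatom_led = rng.dirichlet([8, 2, 1, 1, 1, 1], size=100)
flagellate_led = rng.dirichlet([1, 6, 3, 1, 1, 1], size=100)
X = np.vstack([diatom_led, flagellate_led])

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means over community-composition vectors."""
    init_rng = np.random.default_rng(seed)
    centres = X[init_rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres) ** 2).sum(-1), axis=1)
        # Keep a centre unchanged if its cluster happens to empty out.
        centres = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return labels, centres

labels, centres = kmeans(X, k=2)
```

Mapping the resulting labels back to sample locations is what turns such a clustering into the biogeographical community maps described in the abstract.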
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Advancing Cloud Masking for Marine Pollution Detection

Authors: Paraskevi Mikeli, Dr Katerina Kikaki, Dr Ioannis Kakogeorgiou, Mr Simon Vellas, Konstantinos Karantzalos
Affiliations: National Technical University of Athens, Hellenic Centre for Marine Research, Archimedes/Athena RC
Protecting aquatic ecosystems is fundamental to global sustainability, as emphasized by the United Nations Sustainable Development Goal 14 (SDG 14). Marine pollution, including debris and oil spills, remains a critical environmental issue. While satellite-based technologies hold promise for detecting and monitoring marine pollution, operational remote-sensing solutions face significant challenges, particularly in cloud masking over marine environments. Well-established cloud masking algorithms often struggle in marine regions, either underestimating cloud presence (e.g., S2Cloudless) or misclassifying bright sea features as clouds (e.g., FMASK). These inaccuracies can compromise preprocessing steps in marine pollution detection systems, leading to false positives. In particular, for oil spill monitoring systems, we observed that a major source of false positives is existing algorithms mistakenly identifying clouds as oil spills. This study investigates how integrating cloud data into model training can improve the ability to discriminate clouds from marine pollution and other sea surface features using multispectral high-resolution satellite imagery. We rely on the benchmark Sentinel-2 dataset for marine pollution, MADOS (https://marine-pollution.github.io/), in combination with the state-of-the-art deep learning framework MariNeXt for classification. MADOS contains annotations for marine debris and oil spills, as well as water-related classes such as floating macroalgae, ships, and natural materials. To address cloud masking issues, we expand the MADOS dataset by introducing a new “Cloud” class, capturing diverse cloud characteristics such as size, thickness, background, and lighting variations. The augmented dataset includes 10,000 patches and 99 million pixels annotated for clouds. Retraining the MariNeXt model with the enhanced MADOS dataset, we evaluate its performance qualitatively and quantitatively, comparing results to previous studies. 
In conclusion, incorporating cloud data into model training significantly improves model accuracy and enhances overall sea surface feature classification using multispectral satellite imagery. By expanding the MADOS dataset with cloud annotations, we enhance the accuracy and reliability of marine pollution detection systems. The retrained MariNeXt model demonstrates effective cloud classification capabilities, making it well-suited for operational use. Our findings highlight the necessity of a holistic approach to satellite-based marine pollution monitoring, significantly contributing to global sustainability efforts in line with SDG 14.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Using Satellite Data to Assess Sensitive Habitats and the Pressures They Face

Authors: Eeva Bruun, Mr Lauri Niskanen, Dr Aleksi Nummelin, Jenni Attila, Dr Olli Malve, Dr Elina Miettunen, Mr Janne Mäyrä, Mr Mikko Kervinen, Mr Eero Alkio, Mr Tomi Heilala, Mr Markus Kankainen, Mr Mika Laakkonen, Dr Antti Westerlund
Affiliations: Finnish Environment Institute, Natural Resources Institute Finland, Finnish Meteorological Institute
Satellite observations can reliably assess the state of the environment, its temporal and spatial variations, and the characteristics of marine areas. Typically, satellite observations are used to describe variables related to eutrophication, such as the water's chlorophyll-a content, Secchi depth, and turbidity. In this study we assess the usefulness of satellite observations for evaluating the impacts of fish farming and marine heat waves on marine habitats in the coastal waters of Finland (Baltic Sea). Additionally, the Finnish Environment Institute (Syke) is developing methods to identify small boats from satellite observations, which will be used to assess the pressure small boats exert on marine areas annually. We do this by comparing satellite data with regionally comprehensive flow-through data measured from moving vessels and point-wise laboratory samples from three marine areas along the Finnish coast. We utilize open-source Copernicus and NASA data and evaluate the benefits of commercial Very High Resolution (VHR) images in the areas of the field measurement sites. VHR images provide more detailed satellite data closer to the shore, where open-source data cannot be utilized. The cost-benefits and reliability of the information obtained through satellite observations are evaluated. We also assess the usefulness of satellite sea surface temperature (SST) data in monitoring marine heat waves in sensitive habitats. For this we use observations from Sentinel-3 SLSTR and Landsat-8/9 TIRS and TIRS-2. The TIRS instruments provide data at sub-kilometer resolution, which is particularly important in shallow and complex coastal areas. Additionally, the Finnish Coastal Nutrient Load Model (FICOS) is used to assess how nutrient concentrations change in sensitive areas under different hydrodynamic and environmental conditions. 
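Marine heat waves in an SST series are commonly flagged with a climatological-percentile rule (e.g., the widely used Hobday et al., 2016 definition: at least five consecutive days above the climatological 90th percentile). The sketch below is illustrative only; the series and percentile value are made up, not from the study:

```python
import numpy as np

def heatwave_days(sst, clim_p90, min_len=5):
    """Boolean mask of days belonging to marine-heat-wave events: runs of at
    least `min_len` consecutive days with SST above the climatological 90th
    percentile (a simplified, fixed-threshold version of the definition)."""
    above = sst > clim_p90
    mask = np.zeros_like(above)
    run_start = None
    for i, a in enumerate(above):
        if a and run_start is None:
            run_start = i                      # a warm run begins
        if (not a or i == len(above) - 1) and run_start is not None:
            end = i + 1 if a else i            # inclusive end of the run
            if end - run_start >= min_len:
                mask[run_start:end] = True     # long enough: mark as heat wave
            run_start = None
    return mask

sst = np.array([14.0, 15.2, 15.4, 15.5, 15.3, 15.6, 14.1, 15.9, 14.0])
mask = heatwave_days(sst, clim_p90=15.0)  # the 5-day run qualifies, the 1-day spike does not
```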
The project is being carried out in collaboration with the Finnish Environment Institute (Syke), the Natural Resources Institute Finland (Luke), and the Finnish Meteorological Institute (FMI). The project started in the summer of 2024 and the study will be completed by the end of 2025. Co-funded by the European Union.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Leveraging Earth Observation for Phytoplankton Biodiversity Monitoring: The Role of Sentinel-3 OLCI in Supporting MSFD PH1 Indicator and Regional Reporting

Authors: Antoine Mangin, Marine Bretagnon, Philippe Bryère, Anne Goffart
Affiliations: ACRI-ST, ACRI-ST, Site de Brest, quai de la douane, Oceanology, University of Liège
The PH1 indicator of the Marine Strategy Framework Directive (MSFD) is a key descriptor aimed at assessing changes in phytoplankton and zooplankton communities, which are fundamental components of marine ecosystems. This indicator measures relative changes in abundances or biomasses of lifeform pairs based on functional traits to indicate ecological change (Tett et al., 2008) in response to anthropogenic pressures such as eutrophication, pollution, or climate change. In the Mediterranean, the PH1-Phytoplankton indicator is particularly relevant for monitoring the responses of phytoplankton communities in an oligotrophic (nutrient-poor) environment subject to high seasonal and interannual variability. It relies on parameters such as phytoplankton functional groups and types. These observations can be collected through in situ surveys or satellite-based estimates, the latter providing broader spatiotemporal coverage. Phytoplankton Functional Types (PFT) can be inferred from ocean colour reflectances by analysing the light spectrum reflected by the ocean surface. Different phytoplankton groups have distinct bio-optical properties due to variations in their pigment composition, size, and structure, which influence how they absorb and scatter light. By using advanced algorithms and models that link specific spectral signatures to phytoplankton groups, satellite sensors can estimate the relative abundance or dominance of PFT. This method provides a valuable, large-scale approach to understanding phytoplankton community composition and its role in marine ecosystems. In this study, we will present the methodology to infer the PFT community composition using a machine learning approach based on Sentinel-3/OLCI reflectances. This algorithm will then be applied to different study sites off Corsica, in the Mediterranean Sea, where historical in situ data are available and allow accuracy assessment of the satellite estimates. 
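The abstract does not specify the machine-learning model used. Purely as an illustrative stand-in for how spectral signatures can separate phytoplankton groups, a toy nearest-centroid classifier assigning a reflectance spectrum to the group with the smallest spectral angle might look as follows (the band values and group centroids are invented; real applications learn them from in situ matchups):

```python
import numpy as np

# Hypothetical mean reflectance spectra (arbitrary units) for three PFT groups
# at four OLCI-like bands; these numbers are made up for illustration.
centroids = {
    "diatoms":           np.array([0.020, 0.018, 0.015, 0.010]),
    "nanoflagellates":   np.array([0.025, 0.024, 0.020, 0.012]),
    "picophytoplankton": np.array([0.030, 0.028, 0.022, 0.011]),
}

def classify_spectrum(rrs):
    """Assign a spectrum to the PFT centroid with the smallest spectral angle."""
    def angle(a, b):
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))
    return min(centroids, key=lambda k: angle(rrs, centroids[k]))

label = classify_spectrum(np.array([0.021, 0.019, 0.015, 0.010]))
```

The spectral angle compares spectral shape rather than magnitude, which is why this spectrum matches the "diatoms" centroid despite small offsets in each band.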
The algorithm will be applied to the whole time series to discuss the spatial and temporal evolution of the main phytoplankton groups. We will then demonstrate how these satellite-derived estimates can contribute to assessing the Environmental Status of Pelagic Habitats.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Mapping the Areal Extent of Perennial Brown Macroalgae Dominated Habitats in Low Transparency Baltic Sea Waters With Sentinel-2 Satellite

Authors: Ele Vahtmäe, Laura Argus, Kaire Toming, Antonia Nyström Sandman, Tiit Kutser
Affiliations: University Of Tartu, AquaBiota
Coastal ecosystems provide numerous critical ecosystem functions and services, such as habitat and food for marine organisms, protection from storms and erosion, nutrient recycling, sediment trapping, and carbon storage. Despite these benefits, coastal ecosystems are under severe threat from human activities such as land use change, anthropogenic disturbance, pollution, eutrophication, and climate change effects driven by the burning of fossil fuels. Loss or degradation of such highly valuable ecosystems results in losses of biodiversity and critical ecosystem services. Perennial brown macroalgae Fucus spp. belts play a vital role in providing a wide range of ecosystem services in the Baltic Sea. Fucus vesiculosus is also considered one of the key species for indicating the effect of eutrophication in the Baltic Sea. Monitoring these coastal ecosystems makes it possible to estimate the state of benthic communities, provide evidence of environmental change, and establish the required management measures. Integrating satellite imagery with in situ observations holds great potential for enhancing the scope of benthic ecosystem monitoring. Remote sensing allows the spatial distribution of benthic macroalgae to be assessed at a much larger spatial scale than point-based sampling alone. As such, remote sensing-based methods hold promise for developing new spatial extent indicators for benthic biodiversity assessment. Remote sensing has already shown its effectiveness in estimating the spatial distribution and areal extent of various benthic habitats. However, to use satellite imagery effectively for regular macroalgae monitoring in the low transparency waters of the Baltic Sea, it is essential to understand the achievable level of classification accuracy and to ensure consistency in mapping results. 
In the current study we use Sentinel-2 satellite data to map the areal extent of perennial brown macroalgae dominated habitats in Estonian and Swedish test sites in the Baltic Sea. Ground truth data from the University of Stockholm and the University of Tartu are used for the calibration and validation of the classification algorithms. High quality (cloud free, low turbidity) Sentinel-2 images from the years 2016-2023 are used to determine the occurrence frequency of brown macroalgae in multitemporal images, providing confidence in brown macroalgae presence. The use of multitemporal images also allows the uncertainty in brown macroalgae areal extent retrievals to be assessed, which is not achievable if only a single image is used for mapping.
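The occurrence-frequency idea above reduces to a per-pixel average over a stack of binary classification maps. A minimal sketch with a hypothetical stack of presence/absence maps (not the study's data):

```python
import numpy as np

# Stack of binary presence maps, one 3x3 classification per Sentinel-2 image:
# 1 = brown macroalgae detected, 0 = not detected (values are illustrative).
stack = np.array([
    [[1, 0, 0], [1, 1, 0], [0, 0, 0]],
    [[1, 0, 0], [1, 0, 0], [0, 0, 0]],
    [[1, 1, 0], [1, 1, 0], [0, 0, 0]],
    [[1, 0, 0], [0, 1, 0], [0, 0, 0]],
])

# Per-pixel occurrence frequency: fraction of images in which the pixel
# was classified as macroalgae.
freq = stack.mean(axis=0)

# High-confidence presence: detected in at least 75% of the images
# (the 75% cut-off is an assumed example, not the study's threshold).
confident = freq >= 0.75
```

The spread of `freq` values per pixel is also a simple proxy for retrieval uncertainty: pixels detected in every image are far more reliable than those detected only once.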

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Combining open-access SAR and multispectral images with contextual environmental information to improve oil-spill detection in the Persian/Arabian Gulf

Authors: Alexis Culot, Dr. Qiang Wang, Pr. Emmanuel Hanert
Affiliations: Earth and Life Institute (ELI), UCLouvain, Royal Belgian Institute of Natural Sciences (RBINS), Institute of Mechanics, Materials and Civil Engineering (IMMC), UCLouvain
In a context where oil exploration, transport, and processing have significantly increased to meet an ever-evolving energy demand, the risk of marine pollution by oil has intensified. Beyond environmental disasters, these oil spills also threaten coastal infrastructure, disrupting maritime traffic, clogging desalination plant intakes, and harming fishing, aquaculture and key maritime ecosystems. This is particularly true for the Persian/Arabian Gulf, one of the most oil-polluted seas in the world. It is crossed by approximately 25,000 oil tanker movements each year and hosts 34 large oil fields operated by 800 wells, as well as 25 major oil terminals. On average, 260,000 tons of oil are spilled—accidentally or intentionally—into the Gulf each year. To mitigate the consequences of this pollution, it is necessary to better understand the risks and establish an early detection system for oil spills. While orbital synthetic aperture radar (SAR) images are effective for monitoring vast areas, their revisit frequency is not always sufficient for a rapid response system. It is therefore essential to complement them with other types of sensors to improve spatial and temporal resolution. Moreover, oil spill detection in the Gulf is exceptionally challenging due to environmental and technical complexities, including widespread look-alike phenomena such as algal blooms, low wind zones and ocean currents, as well as significant radio frequency interference (RFI) in radar acquisitions. A tailored oil spill monitoring system is therefore needed to address these limitations while providing reliable and timely information. Here, we seek to detect oil spills using an approach that combines a hierarchical split based algorithm (HSBA) with feature extraction to analyse SAR and multispectral remote sensing data while incorporating additional geospatial and environmental data. 
To that end, we use a variety of sensors to combine their respective advantages and develop a comprehensive and versatile oil spill detection methodology. SAR images are particularly effective for detecting oil spills due to their ability to differentiate between the roughness of the sea surface and that of pollutants, regardless of light or weather conditions. We hence consider Sentinel-1 and Radarsat data. However, the exclusive use of SAR images in an open-source system presents limitations, particularly in terms of revisit frequency. We therefore supplement SAR data with multispectral images from Sentinel-2 and Landsat using the oil spill index (OSI). SAR and multispectral data are first analysed with the HSBA method to highlight dark areas, and hence possible oil spills, while minimizing the detection of look-alikes. HSBA leverages the statistical distribution of pixel intensity values. This algorithm identifies regions in an image where two distinct normal populations coexist. These regions are then used to parameterize a region-growing method, specifically a flood algorithm, which is subsequently applied to the entire image, enabling it to highlight anomalies that stand out from the water surface, such as oil spills. This step isolates dark objects across the image, ensuring accurate segmentation of potential oil spills, which typically appear as dark patches in SAR imagery, while minimizing look-alike detection. An additional feature extraction and classification step is used to discard any remaining look-alikes not filtered out by the HSBA. In this step, radiometric, geometric, and texture features are extracted from the detected dark objects and used in conjunction with a rule-based classification approach. Some of the features include the number of dark objects (NDO), the standard deviation of dark object intensities (StdDO), the object power-to-mean ratio (OPMR), and the ratio of NDO to the number of pixels in the chip (NDO/NPC). 
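The full HSBA tiles the image hierarchically and tests each tile for a bimodal intensity distribution before seeding region growing. As a simplified sketch of the two core ingredients only (a bimodal threshold, with Otsu's method standing in for the tile-wise split test, and a flood-fill region grower), run on synthetic data rather than real SAR imagery:

```python
import numpy as np
from collections import deque

def otsu_threshold(values, bins=64):
    """Otsu's method: pick the threshold maximizing between-class variance,
    i.e. the best split when two intensity populations coexist."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def grow_dark_region(img, seed, thresh):
    """4-connected flood fill from `seed`, accepting pixels below `thresh`."""
    mask = np.zeros(img.shape, dtype=bool)
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if 0 <= r < img.shape[0] and 0 <= c < img.shape[1] \
                and not mask[r, c] and img[r, c] < thresh:
            mask[r, c] = True
            q.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

# Toy SAR-like intensity chip: a dark slick (low backscatter) on a brighter sea.
rng = np.random.default_rng(0)
sea = rng.normal(0.8, 0.05, (20, 20))
sea[5:10, 5:15] = rng.normal(0.2, 0.05, (5, 10))   # dark "slick", 50 pixels
t = otsu_threshold(sea.ravel())
slick = grow_dark_region(sea, seed=(7, 7), thresh=t)
```

In the real pipeline the threshold is estimated only on tiles where two normal populations are detected, and features such as NDO and OPMR are then computed on the grown dark objects.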
The rule-based classification is trained on data from confirmed oil spills in the Gulf as well as in other regions worldwide, such as the Singapore Strait, the Mediterranean Sea, and the Gulf of Mexico. This broader dataset is essential due to the limited availability of comprehensive oil spill datasets from the Gulf. We have also enhanced the oil spill detection algorithm by incorporating contextual information about environmental conditions at the time of detection. This additional layer helps to further filter out look-alikes by integrating data on surface winds, chlorophyll content, presence of underwater structures such as coral reefs and sandbanks, and a Radio Frequency Interference (RFI) probability map. Furthermore, a map of oil platform locations has been included to provide additional context, helping the algorithm make more informed decisions. By leveraging this contextual information, the algorithm achieves a higher level of accuracy in distinguishing true oil spills from false positives. We validated our method using more than 50 historical oil spills, including events in the Gulf as well as other regions worldwide, such as the Singapore Strait and Java Sea, the Mediterranean Sea, the Gulf of Mexico, Mauritius, Trinidad and Tobago, Venezuela, and the Philippines. Additionally, the algorithm will be applied to time series of Sentinel-1, Sentinel-2, Radarsat and Landsat images from 2024 over the Gulf to assess the frequency of oil spills and illegal discharges occurring in the region. The implications of this work will extend beyond oil spill detection, as the results will be integrated with high-resolution hydrodynamic and oil spill dispersal simulations in the Gulf to forecast dispersion of detected oil spills. By incorporating regional currents, winds and waves, these simulations illustrate how such a system can serve as the backbone of an operational oil spill early warning system in the Gulf. 
Furthermore, the detection algorithm relies exclusively on open-access datasets, ensuring that the approach can be quickly and effectively implemented in other regions worldwide, providing a scalable, open-source solution for global oil spill monitoring and response systems.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Offshore Environmental Light Pollution in the UK Exclusive Economic Zone

Authors: Scotia Kaczor, Sara McGourty, Austin Capsey
Affiliations: UK Hydrographic Office
Artificial Light at Night (ALAN) is a growing but underappreciated environmental stressor in marine ecosystems within the UK Exclusive Economic Zone (EEZ). While ALAN's impacts on terrestrial and urban environments are well-documented (Mu et al 2021; Jiang et al 2018; Jiang et al 2017), its consequences for marine systems have gained significant attention only recently (Elvidge et al 2024; Zeng et al 2023; Smyth et al 2022; Zhao et al 2018). ALAN disrupts natural light regimes that govern essential ecological and biological processes, including reproduction, foraging, migration, and predator-prey interactions. These disruptions threaten individual species and ecosystems, particularly in habitats finely tuned to natural light cycles, such as those governed by lunar phases and daily alterations in light spectra (Marangoni et al 2022). This study integrates emerging research on ALAN's effects in marine environments with a specific focus on temporal variations and evolving trends within the UK EEZ. Artificial light from offshore infrastructure (petrochemical platforms and windfarms), vessels, coastal urbanisation, and shipping routes contributes significantly to light pollution in marine areas (Elvidge et al 2024; Polinov et al 2022). ALAN impacts various marine species, including seabirds, cetaceans, turtles, fish, and zooplankton, altering critical behaviours like migration, navigation, and Diel Vertical Migration (DVM) (Marangoni et al 2022). These changes disrupt food webs and nutrient cycles and interact with other anthropogenic factors, ultimately affecting marine biodiversity and ecosystem services (Tidau et al 2021; Gaston et al 2021). Radiance data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) sensor aboard the joint NASA/NOAA Suomi National Polar-orbiting Partnership (Suomi NPP) satellite were utilised for this study (Nurbandi et al 2016; Baugh et al 2013). 
Monthly and annual composite images of radiance were analysed across the UK EEZ, with a focus on identifying temporal patterns and trends within three separate regions with different levels of offshore activity. A threshold of 0.8 nanoWatts/sr/cm^2 was set to mitigate the effects of background noise and sensor degradation; this threshold was based on results from a calibration area distant from major offshore infrastructure. Annual radiance composites from 2014 to 2023 revealed a 40% decrease in mean radiance within the UK EEZ, with overall mean radiance declining from 5.0 to 3.0 nanoWatts/sr/cm^2, and reduced variability over time. In the NW Continental Shelf and South-Western Approaches regions, radiance levels are below the calibration threshold of 0.8 nanoWatts/sr/cm^2, while in the Southern North Sea, mean radiance declined by 30%, from 2.89 to 2.00 nanoWatts/sr/cm^2. Cross-referencing the VIIRS DNB data with datasets on persistent offshore infrastructure, ship anchorages and Marine Protected Areas (MPAs) revealed a strong correlation between bright radiance zones and offshore infrastructure. Highly illuminated regions often overlapped with MPAs, raising concerns about the impacts of ALAN on sensitive marine ecosystems. Fluctuations in offshore activities and the resulting radiance levels may stem from economic, regulatory, and technological factors. Economic shifts, such as changes in oil prices, directly impact drilling and production activities. Regulatory changes, particularly in environmental policies, can restrict certain offshore operations, potentially reducing radiance levels. Meanwhile, technological advancements influence activity levels: for instance, innovations in offshore wind technology may increase installation and maintenance operations, thereby affecting radiance data. Together, these factors may be contributing to the observed variability in offshore radiance and activity in the UK EEZ. 
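The noise-floor masking and percentage-change calculation described above can be sketched in a few lines; the two composites below are hypothetical pixel arrays chosen so the numbers echo the reported 40% decline, not the study's data:

```python
import numpy as np

NOISE_FLOOR = 0.8  # nanoWatts/sr/cm^2, the calibration threshold from the study

def masked_mean_radiance(composite, floor=NOISE_FLOOR):
    """Mean radiance over pixels above the background-noise floor."""
    lit = composite[composite > floor]
    return lit.mean() if lit.size else 0.0

# Hypothetical annual composites (flattened pixel radiances) for two years.
r2014 = np.array([0.2, 0.5, 4.0, 6.0, 9.0, 0.7, 6.0])
r2023 = np.array([0.3, 0.4, 2.5, 3.0, 6.5, 0.6, 3.0])

m14 = masked_mean_radiance(r2014)       # mean of [4, 6, 9, 6]
m23 = masked_mean_radiance(r2023)       # mean of [2.5, 3, 6.5, 3]
pct_change = 100 * (m23 - m14) / m14    # negative value = decline
```

Masking below the floor first matters: including near-zero background pixels would drag both means down and distort the trend.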
The study's findings underscore the potential risks posed by ALAN to MPAs in the UK EEZ: artificial lighting could disrupt species behaviour and ecosystem functions, compromising conservation goals. The evidence suggests that artificial light is encroaching into dark marine spaces, including MPAs, possibly threatening the integrity of these critical conservation areas. While the observed downward trend in radiance from 2014 to 2023 suggests progress, particularly for MPA preservation, the study emphasises the need for long-term evaluation. Developing guidelines to mitigate light pollution in ecologically sensitive marine habitats should be incorporated into future regulatory frameworks. Mitigating the effects of ALAN on marine ecosystems is likely to be complex but necessary. Recommendations from previous research include reducing light intensity, adopting specific wavelengths less harmful to marine species, and designating "Marine Dark Sky Parks" within MPAs to restrict light pollution (Davies et al 2016). The establishment of a foundation record of artificial light levels, as demonstrated in this study, is crucial for informing future policy aimed at safeguarding marine biodiversity. As the Suomi NPP satellite approaches the end of its operational life, a new constellation of satellites will take over, ensuring the continuity of this critical monitoring. This transition is expected to maintain current observational standards but also creates opportunities for other satellite missions to develop sensors with enhanced spatial and spectral resolution, advancing the effectiveness of ALAN monitoring. Stricter regulatory frameworks based on robust foundation data are essential to manage the encroachment of artificial lighting, particularly in sensitive and protected areas. In conclusion, ALAN can pose a significant threat to marine ecosystems within the UK EEZ, especially in areas of high ecological value such as MPAs. 
By integrating remote sensing technologies with ecological and nautical chart data, this study provides critical insights into the spatial and temporal dynamics of marine light pollution. This evidence could be essential for shaping future marine conservation policies and ensuring the long-term sustainability of the UK's offshore environments.
References:
Baugh, K., Hsu, F.C., Elvidge, C.D. and Zhizhin, M., 2013. Nighttime lights compositing using the VIIRS day-night band: Preliminary results. Proceedings of the Asia-Pacific Advanced Network, 35(0), pp.70-86.
Davies, T.W., Duffy, J.P., Bennie, J. and Gaston, K.J., 2014. The nature, extent, and ecological implications of marine light pollution. Frontiers in Ecology and the Environment, 12(6), pp.347-355.
Davies, T.W., Duffy, J.P., Bennie, J. and Gaston, K.J., 2016. Stemming the tide of light pollution encroaching into marine protected areas. Conservation Letters, 9(3), pp.164-171.
Depledge, M.H., Godard-Codding, C.A. and Bowen, R.E., 2010. Light pollution in the sea. Marine Pollution Bulletin, 60(9), pp.1383-1385.
Elvidge, C.D., Ghosh, T., Chatterjee, N., Zhizhin, M., Sutton, P.C. and Bazilian, M., 2024. A Comprehensive Global Mapping of Offshore Lighting. Earth System Science Data Discussions, 2024, pp.1-34.
Gaston, K.J., Ackermann, S., Bennie, J., Cox, D.T., Phillips, B.B., Sánchez de Miguel, A. and Sanders, D., 2021. Pervasiveness of biological impacts of artificial light at night. Integrative and Comparative Biology, 61(3), pp.1098-1110.
Jiang, W., He, G., Long, T., Guo, H., Yin, R., Leng, W., Liu, H. and Wang, G., 2018. Potentiality of using Luojia 1-01 nighttime light imagery to investigate artificial light pollution. Sensors, 18(9), p.2900.
Jiang, W., He, G., Long, T., Wang, C., Ni, Y. and Ma, R., 2017. Assessing light pollution in China based on nighttime light imagery. Remote Sensing, 9(2), p.135.
Marangoni, L.F., Davies, T., Smyth, T., Rodríguez, A., Hamann, M., Duarte, C., Pendoley, K., Berge, J., Maggi, E. and Levy, O., 2022. Impacts of artificial light at night in marine ecosystems—A review. Global Change Biology, 28(18), pp.5346-5367.
Mu, H., Li, X., Du, X., Huang, J., Su, W., Hu, T., Wen, Y., Yin, P., Han, Y. and Xue, F., 2021. Evaluation of light pollution in global protected areas from 1992 to 2018. Remote Sensing, 13(9), p.1849.
Nurbandi, W., Yusuf, F.R., Prasetya, R. and Afrizal, M.D., 2016. Using Visible Infrared Imaging Radiometer Suite (VIIRS) imagery to identify and analyze light pollution. In IOP Conference Series: Earth and Environmental Science (Vol. 47, No. 1, p. 012040). IOP Publishing.
Polinov, S., Bookman, R. and Levin, N., 2022. A global assessment of night lights as an indicator for shipping activity in anchorage areas. Remote Sensing, 14(5), p.1079.
Smyth, T.J., Wright, A.E., Edwards-Jones, A., Mckee, D., Queirós, A., Rendon, O., Tidau, S. and Davies, T.W., 2022. Disruption of marine habitats by artificial light at night from global coastal megacities. Elem Sci Anth, 10(1), p.00042.
Tidau, S., Smyth, T., McKee, D., Wiedenmann, J., D'Angelo, C., Wilcockson, D., Ellison, A., Grimmer, A.J., Jenkins, S.R., Widdicombe, S. and Queirós, A.M., 2021. Marine artificial light at night: An empirical and technical guide. Methods in Ecology and Evolution, 12(9), pp.1588-1601.
Zeng, H., Jia, M., Zhang, R., Wang, Z., Mao, D., Ren, C. and Zhao, C., 2023. Monitoring the light pollution changes of China's mangrove forests from 1992-2020 using nighttime light data. Frontiers in Marine Science, 10, p.1187702.
Zhao, X., Li, D., Li, X., Zhao, L. and Wu, C., 2018. Spatial and seasonal patterns of night-time lights in global ocean derived from VIIRS DNB images. International Journal of Remote Sensing, 39(22), pp.8151-8181.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Validation of Marine Debris Modelling Using Monitoring of Surfactants in the Black Sea Using Radar Remote Sensing

Authors: Dr Morgan Simpson, Dr Armando Marino, Evangelos Spyrakos, Ms Violeta Slabakova, Andrew Tyler
Affiliations: University Of Stirling, Institute of Oceanology - Bulgarian Academy of Sciences (IOBAS)
Plastic pollution is a pervasive threat to the environment and to marine and human health, and there are growing concerns about the impacts of marine plastic pollution on the health of these systems. Plastics account for approximately 60-95% of global ocean marine litter, making them the most common type of marine debris. However, even with an abundance of plastic litter within our marine environments, the movement and accumulation of plastic debris is not well mapped. Remote sensing techniques have been explored to aid the monitoring of marine plastic pollution. Remote sensing has already been successfully employed for observing many marine phenomena and processes, such as coastal currents, sea surface temperature, chlorophyll-a, and oil spills. Both passive and active systems have been investigated for their capabilities in monitoring marine litter, as their observations provide global coverage, continuous temporal sampling, and the ability to coincide with in-situ ground measurements. Recently, radar has seen growing use for monitoring marine plastic pollution, whether through detection of the marine litter itself or through the use of surfactant proxies. While detection from space is possible, it requires large accumulations or targets to be present when using freely available, coarser resolution imagery. Therefore, with currently available technologies, utilising surfactants may prove to be the best approach for monitoring marine plastic pollution and validating models of marine litter. A number of models have been produced in attempts to understand plastic litter movements and volumes within the global ocean, including both observational studies and numerical models. Access to observational data has increased over recent years; however, some regions have only recently received sufficient attention. 
One of these regions is the Black Sea, where marine litter observations and modelling were lacking until recently. Particle density per km2 within the Black Sea has been shown to be three times higher than the average values observed within the Mediterranean Sea. This study aims to validate models of the Black Sea by monitoring surfactant slicks within the sea via Sentinel-1 Synthetic Aperture Radar imagery. The dataset created in this study provides validation for two models which hypothesise where accumulation zones occur throughout the Black Sea. Study zone: The Black Sea's drainage basin comprises six riparian countries (Romania, Ukraine, Russia, Turkey, Georgia and Bulgaria), with almost one third of the entire land area of continental Europe draining into it. Despite being a vital tourism attraction, fishery area and maritime route, the Black Sea lacks the necessary attention regarding marine litter pollution. The models: This study utilised predictions of where accumulation zones of marine litter would appear within the Black Sea, as well as where void zones would appear. Castro-Rosero et al. (2023) and Stanev & Ricker (2019) both found that the southwest coast of the Black Sea exhibits high densities of floating marine litter in modelling scenarios. This has been attributed to a number of factors, including Ekman and Stokes drift patterns and the potentially significant input of floating marine litter from the Danube River. Both studies also show that in the eastern and north-eastern areas of the Black Sea, accumulation of floating marine litter is less common than on the western side of the basin. Castro-Rosero et al. (2023) identified a high accumulation point between Georgia and Turkey, in agreement with findings from Stanev and Ricker (2019) and Miladinova et al. (2020); all three studies found high concentration zones along the southeast coast. 
Based on the findings of the above-mentioned studies, accumulation points within the Black Sea were determined. The European Space Agency's Sentinel-1 Synthetic Aperture Radar satellite was exploited for this study. The mode of acquisition was Interferometric Wide Swath (IW) Ground Range Detected (GRD), with a spatial resolution of 20 m and a temporal resolution of up to 6 days. Across all 13 combined maximum and minimum accumulation zone sites, every Sentinel-1 image from 2017 was utilised. All images were visually assessed for the dark stripe features that are apparent when surfactants are present within a SAR scene. These stripes are visible because floating surfactants dampen the short gravity-capillary waves responsible for radar Bragg scattering. Bragg scattering has a minimum wind speed threshold of occurrence, which lies between 2-3 m/s; this wind speed, or higher, is required for Bragg waves to be generated. In addition, oil films become undetectable at wind speeds between 10-14 m/s due to their mixing into the underlying waters by breaking waves. Copernicus / European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis v5 (ERA5) data were used to obtain wind speed values for the accumulation zone areas of interest. The wind speed was averaged across the full 50 x 50 km segments of SAR imagery, so that the weather conditions of the whole cell could be accounted for. The wind speed value taken was that of the nearest hour to the acquisition window; for example, an image acquired at 3:48am would have the wind speed values at 4:00am associated with it. Chlorophyll-a data were taken from the Copernicus Marine Environment Monitoring Service (CMEMS), where the Black Sea Bio-Geo-Chemical L4 climatology satellite observations were used to determine chl-a concentrations. 
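The nearest-hour ERA5 matching and the wind-speed validity window for slick detection can be sketched as follows; the single cut-off values chosen inside the ranges (2-3 m/s and 10-14 m/s) are assumptions for illustration:

```python
from datetime import datetime, timedelta

def nearest_hour(t):
    """Round an acquisition time to the nearest whole hour, as in the
    ERA5 wind-speed matching described above."""
    base = t.replace(minute=0, second=0, microsecond=0)
    return base + timedelta(hours=1) if t.minute >= 30 else base

def bragg_detectable(wind_ms, low=2.5, high=12.0):
    """Rough validity window for slick detection in SAR: Bragg waves need
    roughly 2-3 m/s of wind, and slicks vanish above roughly 10-14 m/s.
    Single cut-offs inside those ranges are used here for simplicity."""
    return low <= wind_ms <= high

t = datetime(2017, 6, 1, 3, 48)
match = nearest_hour(t)        # rounds up to 04:00, as in the example above
usable = bragg_detectable(7.7)
```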
As with the wind speeds, the chl-a values were averaged across the full 50 x 50 km segments of SAR imagery so that the values of the full cell could be taken into account. A Floating Marine Litter (FML) monitoring survey was carried out in the period 2-18 June 2024 onboard the R/V Mare Nigrum during a H2020 DOORS project cruise in the Black Sea. Visual observations of FML (> 2.5 cm) were performed following the protocol proposed by the MSFD TG10 Guidance on Monitoring of Marine Litter in European Seas. The surface litter was assessed using the fixed-width strip transect method. Observations were carried out at a speed of around 7.7 (± 0.8) knots, approximately 6 m above sea level. FML was recorded from the bow of the vessel by two observers, each covering one side of the vessel within a 7.5 m observation strip. The length of each transect was measured from start-end geographic coordinates recorded by a portable GPS. In order to standardize the survey effort, the duration of observations was fixed at 30 min, corresponding to a mean transect length of 7.4 ± 1.72 km. All transects were observed under low wind speed conditions (≤5 m/s), recorded with a portable anemometer, and good visibility. Over the entire cruise period, a total of 33 transects were performed; a distance of 244 km was covered, corresponding to 16 h of observations. Across all Sentinel-1 images in 2017, there were 324 instances of surfactants visible in maximum accumulation zones and 147 instances within the minimum accumulation zones, with surfactants visible in roughly twice as many images in maximum zones as in minimum zones. The FML concentrations coincide with the findings of Castro-Rosero et al. (2023) and Stanev & Ricker (2019): the observations show that the eastern region near the Georgian coast is a high-density FML area. 
Another agreement between the observational data and the model data is that floating marine litter is less commonly found in the eastern area of the Black Sea than on the western side of the basin. Percentage shares of the major FML categories per transect are also presented: the most abundant category is plastics, representing 86% of the overall litter quantities detected in the region, followed by paper/cardboard at 7.6% and metal at 2.4%. Statistical analysis of the wind speed and chlorophyll-a data is also undertaken to determine the viewing conditions and the nature of the surfactants present within the SAR imagery. Implications for the health of the Black Sea regarding marine litter pollution and surfactant presence within its waters are also discussed.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Spatio-Temporal dynamics of phytoplankton in the Ross Sea (Antarctica)

Authors: Graça Sofia Nunes, Ana C. Brito, Afonso Ferreira
Affiliations: MARE - Marine and Environmental Sciences Centre/ARNET—Aquatic Research Network, Faculdade de Ciências, Universidade de Lisboa, Departamento de Biologia Vegetal, Faculdade de Ciências, Universidade de Lisboa
The Southern Ocean, while one of the most remote oceans on the planet, plays a fundamental role in the global climate system. The Ross Sea, located in the Pacific sector of the Southern Ocean, is a Marine Protected Area incorporating multiple marine subsystems, including polynyas (areas of open water surrounded by sea ice) and marginal ice zones in offshore and coastal areas. The region is highly influenced by katabatic winds, i.e. dense, descending winds along the southern and western sides of the Ross Ice Shelf and the coast of Victoria Land. These features, coupled with the region's high variability, have an important influence on sea ice coverage, currents, and nutrient availability, affecting the region's phytoplankton communities. Phytoplankton blooms are pivotal in the carbon cycle of the Ross Sea and support a rich biodiversity, including krill, fish, penguins, seals, and whales. However, studies in the polar regions face logistical and weather constraints on in-situ data collection. Remote sensing has emerged as a complement to in-situ data, providing large-scale temporal and spatial coverage and enabling a better understanding of phytoplankton dynamics and their ecological and biogeochemical importance for protecting ocean health. Satellite remote sensing and ocean modelling data were compiled to obtain a long-term dataset spanning 25 years (1998–2022) for the Ross Sea region. This study intends to fill a large knowledge gap by improving understanding of the environmental drivers of phytoplankton biomass and bloom phenology. Unlike the majority of past research, which often focused on specific years or areas, the extensive spatial and temporal coverage of this study provides key insights into this ecologically important region.
The primary goal of this study was to further understand how phytoplankton biomass in the Ross Sea has changed in the past decades, using remote sensing data from 1998 to 2022 (nearly 25 years of data), making it the first long-term contribution of its kind. To this end, several specific objectives were established: (i) investigate the spatio-temporal variation of chlorophyll-a concentrations; (ii) analyse changes in phytoplankton bloom phenology over the 25 years; (iii) assess how abiotic parameters influenced chlorophyll-a variability. The variables used in this study were chlorophyll-a (chl-a, as a proxy for phytoplankton biomass), sea ice coverage, currents, wind, mixed layer depth, and salinity. Given the large and dynamic nature of the Ross Sea, the area was divided into three phenoregions (zones with coherent phenological patterns) using a hierarchical clustering analysis based on chl-a, the yearly number of ice-free days, and an index of the reproducibility of the annual seasonal cycle of chl-a (SCR). Random forest models were then run for each phenoregion to identify the abiotic factors most strongly influencing each phenological metric. Phenological indicators, including Bloom Start (week of the year of the start of the main bloom in the cycle), Bloom End (week of the year of the end of the main bloom), Bloom Duration (duration of the main bloom) and Bloom Area (biomass accumulated over the main bloom), were computed to assess phytoplankton bloom phenology between 1998 and 2022, from September to April. In addition, the SCR was determined to evaluate variability; this index measures the similarity between the different growing cycles and the average growing cycle. To evaluate trends in chl-a concentration and bloom timing, a pixel-by-pixel trend analysis was performed for chl-a and multiple phenological metrics. We observed that the most oceanic phenoregion (from the offshore area to around 65ºS) is the least productive.
In this phenoregion, phytoplankton blooms start earlier, around October–November, and last for longer periods (twelve weeks). It is mainly influenced by wind and ocean currents, which relates to its proximity to the Ross Sea gyre. In the southernmost, coastal region (from around 70ºS to the coastline), blooms start later, around November–December, and have a shorter duration (eight weeks), yet exhibit the highest biomass. This phenoregion is greatly influenced by sea ice cover and wind, since the extent of the Ross Sea Polynya depends on wind direction and intensity, affecting ice coverage. Finally, the intermediate region (located between the previous two) shows less similarity between annual cycles and its average seasonal cycle, i.e. it is characterised by a lower SCR. This was the most dynamic phenoregion, where phytoplankton blooms have an average duration of nine weeks, starting in December–January. Blooms in this region seem to be mainly influenced by sea ice cover and ocean currents. As in the coastal region, sea ice coverage is an important element shaping bloom phenology, since in years with less ice cover blooms tend to start earlier. Currents are also important because they affect the distribution of nutrients and sea ice, which can affect bloom duration. Our findings highlight how the oceanographic complexity of the Ross Sea shapes phytoplankton dynamics, emphasizing the importance of accounting for spatial heterogeneity when studying primary productivity in this region. We observed distinct regional patterns in chl-a variability, bloom phenology, and abiotic drivers throughout the Ross Sea, as well as increasing long-term trends in biomass in open waters. These results underline the importance of long-term, high-resolution studies and multidisciplinary approaches in predicting the impacts of climate change on Antarctic marine ecosystems.
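The SCR used above is an index of how reproducible the annual chl-a cycle is. One common way to express such an index, sketched here with numpy (an illustrative implementation, not necessarily the exact formulation used in the study), is the mean correlation of each year's cycle with the climatological mean cycle:

```python
import numpy as np

def seasonal_cycle_reproducibility(cycles: np.ndarray) -> float:
    """Mean Pearson correlation between each annual chl-a cycle
    (rows: years, columns: weeks) and the multi-year mean cycle.
    Values near 1 indicate highly reproducible seasonal cycles."""
    mean_cycle = cycles.mean(axis=0)
    corrs = [np.corrcoef(year, mean_cycle)[0, 1] for year in cycles]
    return float(np.mean(corrs))
```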
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Relationships Between Shelf-sea Fronts and Biodiversity Revealed Using Earth Observation Data Improve Planning of Offshore Renewable Developments

Authors: Peter Ian Miller, Emma Sullivan, Beth Scott, James Waggitt, Will Schneider, Deon Roos, Andrey Kurekin, Georgina Hunt, Graham Quartly, Juliane Wihsgott, Morgane Declerck, Elin Meek
Affiliations: Plymouth Marine Laboratory, University of Aberdeen, University of Bangor
Fronts – the interface between water masses – are hotspots for rich and diverse marine life, influencing the foraging distribution of many megafauna. We have analysed a long time-series of Earth observation (EO) data using novel algorithms to characterise the distribution and dynamics of ocean fronts, and used these to investigate links to biodiversity hotspots and to explore key drivers of changes in fronts and in these relationships. This multi-sensor study comprises front detection using both sea-surface temperature and ocean colour products, and internal wave detection from synthetic aperture radar (SAR). The synergy of several EO modalities allows us to address the complex relationships between stressors, oceanography and biodiversity. For example, higher-resolution (300 m) fronts detected using the Sentinel-3 OLCI ocean colour sensor enable estimation of coastal biodiversity, complementing the thermal fronts (1 km) more suited to shelf-sea regions. FRONTWARD (Fronts for Marine Wildlife Assessment for Renewable Developments) aims to provide evidence to justify the inclusion of frontal locations in marine spatial planning for the UK, most pressingly for offshore wind farm zones. In this multi-disciplinary research, biodiversity hotspots are identified using a biodiversity index, created using an unprecedented collation of at-sea observations of seabirds, fish and cetaceans spanning several decades (1980s-2020s). Generalised additive models (GAMs) reveal the spatial influence of fronts and internal waves on biodiversity, and provide predictions of taxonomic diversity and distribution based on EO-detected front maps. The outcomes from this project will feed into the evidence base for marine conservation, and into decisions on the siting and consenting of future offshore renewable energy projects that minimise disturbance of ecosystems while expediting the transition to net zero.
To achieve these outcomes we are working with multiple implementation teams responsible for marine spatial planning at The Crown Estate, the organisation that leases UK marine zones to offshore developers, and also the ongoing and pending UK research programmes studying ecosystem effects of fixed and floating wind farms (ECOWind and ECOFLOW).
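As a minimal illustration of the EO front-detection idea underlying this work (a single-image gradient sketch, not the project's composite front-mapping algorithms), front strength can be taken as the horizontal SST gradient magnitude:

```python
import numpy as np

def front_strength(sst: np.ndarray, pixel_km: float = 1.0) -> np.ndarray:
    """Horizontal SST gradient magnitude (degC per km) on a regular grid;
    thresholding this field yields a simple front mask."""
    gy, gx = np.gradient(sst, pixel_km)
    return np.hypot(gx, gy)

# e.g. mask = front_strength(sst) > 0.05  # fronts stronger than 0.05 degC/km
```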
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: REWRITE project - Rewilding and Restoration of Intertidal Sediment Ecosystems for Carbon Sequestration, Climate Adaptation and Biodiversity Support

Authors:
Affiliations: Nantes Université
The climate and biodiversity crises are major global challenges of the 21st century. The driving force is the well-documented escalation of atmospheric greenhouse gas (GHG) concentrations, especially carbon dioxide (CO2), due to the disruption of biogeochemical and energy cycles caused by human activity. Consequently, the decade 2011–2020 was around 1.1°C warmer than the 1850–1900 baseline. We are already observing profound changes, including biodiversity loss, more frequent and extreme weather events, and sea level rise. Within European coastal zones, intertidal areas consisting of soft sediment and emerging at each low tide form seascapes covering more than 10 000 km2 along the 35 000 km of tidal coastline. Their three key habitats, seagrass meadows, salt marshes and mudflats inhabited by photosynthetic biofilms (i.e. microphytobenthos), provide multiple ecosystem services with great potential to help cope with the biodiversity-climate crisis and thus to contribute to a number of UN and EU priorities regarding carbon neutrality, climate resilience, biodiversity support and social equity. Nevertheless, an alarming situation has emerged in recent years: these seascapes continue to disappear, to become fragmented and to be polluted, reducing their provision of goods and ecosystem services. REWRITE's ambition is to expand innovative approaches and nature-based solutions for rewilding seascapes of intertidal soft sediment, bridging biodiversity conservation, climate adaptation, and social expectations and uses. To reach this main goal, three key challenges will be addressed: i) Reducing the uncertainty of the future trajectories of intertidal soft sediment seascapes. Because knowledge of their ecological and social functioning is fragmented, the scientific community is currently unable to project their trajectories to 2050 accurately.
A deep understanding of the different restoration (active), rewilding (passive) and "do nothing" options, compared with a "business-as-usual" option in a context of erratic and constant change, is urgently needed. ii) Assessing the cascading effect. Understanding how the effects of increasing CO2, temperature, sea level rise, extreme events and biodiversity loss propagate from the local to the global scale is key to enhancing local natural capital for a resilient European shoreline. iii) Assessing how society engages to agree upon and/or overcome the trade-offs of rewilding, considering environmental benefits and societal pressures. Identifying the social and cultural drivers and barriers is crucial to ensure local and national engagement and support, and place-based decisions responsive to local needs, particularly where space requirements for rewilding are a source of conflict. We will develop and use innovative tools and techniques to rapidly quantify and map ecosystem service supply by coupling remote sensing images with modelling approaches, field campaigns and stakeholder engagement. To address the upscaling challenges arising from the complexity of these seascapes (e.g. mixing and patchiness), we will develop an integrated pathway for "step-by-step" upscaling, where each step validates the next. We will use images from the existing Copernicus Sentinel archives (since 2015), as well as images acquired during dedicated synchronous and co-located field campaigns at various scales (laboratory, ground, drone, airborne and satellite), with the objective of mapping biodiversity and its state of conservation, carbon sequestration, protection from coastal flooding, seascape connectivity and fragmentation, and cultural ecosystem services.
This "step-by-step" upscaling approach will be led in parallel with social innovation to recognize the plural values of nature, using a multi-method approach that integrates assessment of cultural values through a combination of a) participatory processes with stakeholders at varied scales (local to European), b) focus groups and interviews with stakeholders of varying power-interest relations, c) a social media approach to capture large-scale perceptions, and d) historical information from peer-reviewed and grey literature. This assessment has a dual goal: i) to quantify and map the plural cultural benefits supplied by these seascape systems, and ii) to raise awareness of these values among the different stakeholder categories involved in the cultural values assessment, ensuring that the project outputs have societal relevance and promote societal engagement. This integration of bottom-up and top-down approaches will allow for inclusion and transformation and bring research closer to society, setting the basis for the desired transformative change. REWRITE's ambition is served by an impressively interdisciplinary consortium (25 partners from the academic and private sectors, representing 8 European tidal coastal states, as well as the UK, Canada and the USA) with recognized expertise on the climate-biodiversity nexus, fostering synergies among disciplines such as the social sciences and humanities, the natural sciences and resources, and ecosystem management. To reach this ambition, the strength of REWRITE is its "space for time" approach based on 10 demonstrators from northern to southern Europe, and from North America to Europe, illustrating a wide panel of environmental constraints, societal uses, coastal management approaches and stakeholder engagement.
Coupling remote sensing, modelling and ground-truthing approaches, REWRITE will perform a joint analysis from the natural and social sciences to understand the historical and current trajectories of intertidal soft sediment (ISS) functioning and to project its future. This approach offers a strong basis for co-developing robust scenarios using multivariable constraints, including plural and integrated (i.e. environmental, economic and societal) cost valuations, in order to select the best low-cost options to rewild a resilient European coastline.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Insights of the variability of optically active constituents and phytoplankton dynamic in the Northwestern Iberian Peninsula using ocean colour inversion model

Authors: Amalia Maria Sacilotto Detoni, Xosé Antonio Padín, Gabriel Navarro, Natalia Rudorff Oliveira, Maria Laura Zoffoli, Antón Velo, Isabel Caballero
Affiliations: Instituto de Investigaciones Marinas, Consejo Superior de Investigaciones Científicas (IIM-CSIC), Instituto de Ciencias Marinas de Andalucía, Consejo Superior de Investigaciones Científicas (ICMAN-CSIC), Instituto Nacional de Pesquisas Espaciais (INPE), Consiglio Nazionale dele Ricerche, Istituto di Scienze Marine (CNR-ISMAR)
Spain leads global mussel aquaculture, with the Galician Rías Baixas estuaries at its core, emphasizing the region's socio-economic importance. Monitoring water quality, particularly phytoplankton dynamics, is critical as harmful algal blooms (HABs) increasingly threaten aquaculture productivity, causing significant economic losses. These challenges underscore the need for a comprehensive understanding of optically active constituents (OACs) and their spatiotemporal dynamics to assess estuarine trophic states and ecosystem responses to climate variability. However, OACs in Northwestern Spain remain poorly characterized, creating a gap in the calibration and validation of bio-optical algorithms and thus hindering efficient monitoring strategies and remote sensing efforts. This study addresses these gaps by analyzing OAC distributions, the contributions of four key phytoplankton groups, and their variability in two rías of the northwestern Iberian Peninsula belonging to the Rías Baixas. Nearly monthly sampling campaigns were conducted from September 2023 to October 2024 in two estuaries with distinct hydrodynamic characteristics: Ría de Arousa, a more sheltered estuary, and Ría de Vigo, characterized by dynamic circulation patterns. Surface water samples (~5 m depth) were collected to measure chlorophyll-a (Chl-a) concentrations. Above-water Remote Sensing Reflectance (Rrs) was extrapolated from irradiance (Ed) and radiance (Lu) data obtained using the PRR-800 profiling radiometer (Biospherical Inc.). Sampling was performed at nine fixed stations distributed across the two estuaries, which have distinct circulation patterns and interactions with oceanic waters.
The Water Colour Simulator (WASI) was applied to estimate inherent optical properties (IOPs) and subsequently derive Chl-a concentrations, as well as the relative contributions of four major phytoplankton groups potentially associated with harmful algal blooms: diatoms, dinoflagellates, cryptophytes, and cyanobacteria. Our results indicated that the WASI simulator performs notably well in deriving IOPs and phytoplankton group contributions from in-situ above-water Rrs measurements in the waters of the Rías de Vigo and Arousa. The findings highlight distinct spatial variability in OAC concentrations between the two estuaries, influenced by coastal morphology and internal water exchange dynamics. Ría de Arousa exhibited elevated concentrations of colored dissolved organic matter (CDOM) and detrital particles, suggesting that reduced circulation limits the occurrence of bloom events. In contrast, Ría de Vigo, characterized by higher water renewal rates, displayed lower detrital concentrations. Phytoplankton group dominance also varied significantly: Ría de Arousa showed a biomass gradient, with diatoms and dinoflagellates prevalent in the outer zones, while cryptophytes dominated the inner estuary. Conversely, Ría de Vigo exhibited a more uniform biomass distribution, with less pronounced variability among phytoplankton groups. These results highlight the importance of continued in-situ campaigns to enhance the calibration and validation of bio-optical models, improving the retrieval accuracy of IOPs from remote sensing data. By deepening our understanding of the interactions between OAC dynamics and environmental variables, this research contributes to advancing remote sensing capabilities and informing sustainable resource management strategies in aquaculture-dependent ecosystems.
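The spectral-fitting idea behind a WASI-style inversion can be illustrated with a toy two-component model (the spectra and the linear model below are invented for illustration only and are not WASI's actual parameterisation):

```python
import numpy as np

# Invented basis spectra at five visible wavelengths (sr^-1): a background
# water term and a chl-dependent term.
base = np.array([0.0080, 0.0070, 0.0060, 0.0050, 0.0010])
chl_spec = np.array([-0.0004, -0.0002, -0.0001, 0.0001, 0.0002])

def invert_chl(rrs_obs: np.ndarray) -> float:
    """Fit Rrs ~= a*base + b*chl_spec by least squares; since the toy
    forward model is Rrs = scale*(base + chl*chl_spec), chl = b / a."""
    A = np.column_stack([base, chl_spec])
    a, b = np.linalg.lstsq(A, rrs_obs, rcond=None)[0]
    return float(b / a)

# Forward-simulate a spectrum with chl = 5 (arbitrary units) and recover it.
rrs = 1.2 * (base + 5.0 * chl_spec)
```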
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: A first national seagrass map for Venezuela

Authors: Chengfa Benjamin Lee, Dr. Ana Carolina Peralta Brichtova, Mar Roca, Dr. Tylar Murray, Oswaldo David Bolivar Rodriguez, Dr. Daniele Cerra, Dr. Frank E. Muller-Karger
Affiliations: German Aerospace Centre (DLR), Institute for Marine Remote Sensing, College of Marine Science, University of South Florida, Institute of Marine Sciences of Andalusia (ICMAN), Spanish National Research Council (CSIC), Institute of Advanced Studies IDEA, Energy and Environment Unit, German Aerospace Centre (DLR)
Coastal wetland habitats, including seagrass meadows, are important for their ecosystem services. Yet there are still many knowledge gaps in the global map of seagrasses, and some countries do not even have a first baseline. One such country is Venezuela, which has extensive seagrass meadows along its entire Caribbean Sea coast but no national seagrass map or systematic in situ monitoring of seagrass ecosystems. This limited understanding of spatial and temporal trends hinders the development of informed national conservation and restoration strategies and of national blue carbon accounting. Here, we describe results from a new study producing the first national seagrass map along the whole Venezuelan Caribbean coast, derived from remote sensing techniques. One strategy for producing an initial regional seagrass map is a multitemporal composite approach, as has been done in Greece, the Mediterranean Sea, East Africa, the Bahamas and the Seychelles. Often, issues of image quality, cloud cover, sun glint, and atmospheric and water turbidity reduce the pool of viable images for compositing. Even when cloud cover metadata are used to filter out excessively cloudy images, some cloudy pixels remain in the pool of filtered images, causing cloud artefacts in the composite image and compromising its quality. To improve the quality of the composite image and its derived seagrass map, we tested a new Google Earth Engine product, Cloud Score+, which provides a per-pixel quality assessment (QA) band based on an atmospheric similarity model and a space-time context network model. The Cloud Score+ product has two bands: the cloud score probability (CS) and the cumulative distribution function of this cloud score QA band (CDF).
We compare the performance of Cloud Score+ derived products against previously established multi-temporal image composites acquired over different time ranges, and against the more conservative ACOLITE-processed single-image composite, using Sentinel-2 (S2) Level-1C (L1C) imagery across the whole Venezuelan coastline. The S2 L1C imagery was processed following three different approaches: 1) a multi-temporal composition of the full S2 L1C archive available, processed in GEE; 2) the previous approach with the Cloud Score+ dataset integrated; and 3) a single-image offline approach applying the ACOLITE atmospheric correction, which has been widely used for water applications. All images were further processed from L1C to L2A remote sensing reflectance (Rrs) for comparability. Additional image features such as the Gray Level Co-occurrence Matrix (GLCM) and Principal Component Analysis (PCA) were generated. The training data were randomly split into roughly 70% and 30% for training and test, respectively. This was bootstrapped 20 times to produce 20 sets of training and test data for classification and validation. Per bootstrap, a first classification was trained on the 70% training dataset with Random Forest in GEE. Variable selection was performed in GEE using the native ee.Classifier.explain function, and only the top ten features were retained. A second classification was then trained using these top 10 features on the 70% training dataset. We defined five classes for the classification, namely sand, seagrass, turbid, deep waters, and coral. For the training and test design, the point data were obtained along the coast and intertidal areas of the whole nation through existing literature, data banks and visual interpretation. We found that the performances across the different thresholds within the CS or CDF composites were largely similar, with small differences in their confidence intervals.
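The bootstrapped F1 confidence intervals used in this comparison can be sketched in miniature (hypothetical helper names; this simplification resamples a fixed set of predictions rather than retraining the Random Forest on each bootstrap, as the full workflow does):

```python
import numpy as np

rng = np.random.default_rng(42)

def f1_binary(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """F1 score for a binary seagrass-vs-rest labelling."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = 2 * tp + fp + fn
    # Convention: F1 = 1 when there are no positives in truth or prediction.
    return 1.0 if denom == 0 else float(2 * tp / denom)

def bootstrap_f1(y_true: np.ndarray, y_pred: np.ndarray, n_boot: int = 20):
    """Mean F1 and a 95% percentile interval over bootstrap resamples."""
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample test points with replacement
        scores.append(f1_binary(y_true[idx], y_pred[idx]))
    return float(np.mean(scores)), np.percentile(scores, [2.5, 97.5])
```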
In terms of temporal range for the multi-temporal processing, the full-archive seven-year composite had the most consistent quantitative performance over the two optical water types, achieving seagrass-class F1 Scores of 0.664, with a 95% Confidence Interval (CI) of [0.634, 0.695], and 0.631 [0.588, 0.675] for coastal and open waters, respectively. For coastal waters, the ACOLITE composite had a very competitive F1 Score and the best Overall Accuracy (OA), at 0.668 [0.649, 0.688] and 0.781 [0.649, 0.688], respectively. For open waters, the full-archive seven-year composite performed best, with the CS and CDF products having comparable performance. The ACOLITE composite had the weakest quantitative performance in open waters, although its confidence intervals overlap with those of all three other products. Qualitatively, however, the ACOLITE composite was deemed to perform better than its competitors. In optically clear waters such as open reef waters, where the main concerns were clouds and cloud shadows, the simpler Cloud Score+ products provided a pragmatic alternative to both the full-archive and ACOLITE products. For optically complex waters, it was better to rely on either a larger temporal interval or the ACOLITE atmospheric processor. Based on this comparison, the full-archive seven-year composite forms a good initial baseline for the national seagrass map of Venezuela.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: iMERMAID Project: Integrating Satellite and In-Situ Data for Water Pollution Identification in the Mediterranean Basin

Authors: Sofiia Drozd, Bogdan Yailymov, Pavlo Henitsoi, Prof. Andrii Shelestov, J. Donate, M. Milián, J. Rostan, R. Sedano
Affiliations: National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Space Research Institute NAS Ukraine and SSA Ukraine, ITCL Technology Center of Spain “Instituto Tecnológico de Castilla y León”
Monitoring and improving water quality in the Mediterranean Sea is critical for preserving its unique biodiversity and addressing environmental challenges caused by anthropogenic activities. The Mediterranean Sea serves as a hotspot of ecological and economic importance, yet it faces significant threats from chemical pollution and overexploitation. As part of the Horizon Europe iMERMAID project, “Innovative Solutions for Mediterranean Ecosystem Remediation via Monitoring and Decontamination from Chemical Pollution”, our research focuses on advancing satellite-based methodologies to monitor key water quality indicators, specifically chlorophyll-a concentration and water turbidity. These indicators are vital for assessing biological productivity, phytoplankton dynamics, and water clarity, providing insights into the health of marine ecosystems. Traditional methods of measuring chlorophyll-a and water turbidity rely on costly and time-intensive laboratory analyses, which are limited in spatial and temporal scope. In contrast, satellite remote sensing offers an efficient and scalable solution for monitoring large and diverse marine areas. Leveraging satellite data from Sentinel-2, Sentinel-3, MODIS, and GCOM-C missions, the iMERMAID project develops integrated methodologies that combine spectral band analysis, in-situ measurements, and advanced machine learning models. Our research prioritizes improving the spatial and temporal resolution of chlorophyll-a and turbidity data to facilitate effective environmental management and pollution remediation strategies. A key innovation in our approach is the use of machine learning models, including Random Forest (RF) and multilayer perceptron (MLP), to analyze the non-linear relationships between spectral satellite data and in-situ chlorophyll-a measurements [1]. 
For example, regression models applied to GCOM-C and Aqua MODIS data achieved significant accuracy improvements, with RF models yielding an R² of 0.603 (RMSE = 0.008) for GCOM-C and R² of 0.74 (RMSE = 0.006) for Aqua MODIS. By downscaling coarse-resolution data (e.g., MODIS and GCOM-C) and upscaling Sentinel-3 data, we enhanced spatial resolution from 4 km to 300 m, making these models particularly effective for coastal regions where traditional methods often fail due to complex environmental conditions [2-4]. The integration of in-situ measurements allows us to validate and refine model predictions, ensuring consistency and accuracy in highly dynamic environments like the Mediterranean Sea. In addition to chlorophyll-a monitoring, the project addresses water turbidity by quantifying suspended particulate matter using satellite-derived spectral data. This parameter is critical for identifying sediment transport, pollution hotspots, and other ecological disturbances. By combining data-driven insights with high-resolution mapping capabilities, our methodologies enable timely detection of pollution and provide actionable information for marine ecosystem remediation. A crucial component of the project is the integration of maritime traffic density data to establish potential correlations between anthropogenic activity and water pollution. Using data from the EMODnet Map Viewer, historical navigation patterns in the Mediterranean Sea were analyzed, focusing on regions of high, medium, and low traffic densities. Areas of interest include regions with significant maritime activity, such as the southern Italian coast, the Balearic Islands, and northern Libya, alongside relatively lower-traffic zones like eastern Crete. This approach identifies pollution risks linked to shipping routes, oil spills, and port activities, complementing water quality assessments. 
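Schematically, the regression-based downscaling step fits the coarse-resolution relationship between predictors and chl-a and applies it at fine resolution; in this sketch a plain least-squares fit stands in for the Random Forest regressor, and all names are hypothetical:

```python
import numpy as np

def downscale(pred_coarse: np.ndarray, chla_coarse: np.ndarray,
              pred_fine: np.ndarray) -> np.ndarray:
    """Fit chl-a ~ predictor at coarse resolution (e.g. 4 km pixels),
    then apply the fitted model to the fine-resolution predictor field
    (e.g. 300 m pixels) to produce a downscaled chl-a estimate."""
    A = np.column_stack([pred_coarse, np.ones_like(pred_coarse)])
    coef, *_ = np.linalg.lstsq(A, chla_coarse, rcond=None)
    Af = np.column_stack([pred_fine, np.ones_like(pred_fine)])
    return Af @ coef
```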
Findings reveal a significant correlation between high maritime traffic areas, such as near Malta, and increased occurrences of oil spills, underscoring the role of vessel density in environmental contamination. Additionally, PRISMA images were used to explore links between satellite imagery and potential pollutants, such as water turbidity, and to evaluate the utility of hyperspectral data for monitoring water quality indicators in the Mediterranean basin [5]. The results of the iMERMAID project demonstrate the potential of advanced remote sensing and data analytics to transform water quality monitoring in marine ecosystems. The integration of multiple data sources and machine learning techniques not only enhances monitoring accuracy but also supports sustainable management strategies. These methodologies are applicable to a wide range of use cases, including early warning systems for pollution, biodiversity conservation, and sustainable fisheries management.

Acknowledgment: This research was carried out within the Horizon Europe iMERMAID project "Innovative Solutions for Mediterranean Ecosystem Remediation via Monitoring and Decontamination from Chemical Pollution" (Grant agreement 101112824).

References
1. P. Henitsoi, A. Shelestov, Transfer Learning Model for Chlorophyll-a Estimation Using Satellite Imagery, International Symposium on Applied Geoinformatics 2024 (ISAG2024), Wroclaw, Poland, 2024, p. 54. https://www.kongresistemi.com/panel/UserUploads/Files/a3fe58047d50fbc.pdf
2. B. Yailymov, N. Kussul, P. Henitsoi, A. Shelestov, Improving spatial resolution of chlorophyll-a in the Mediterranean Sea based on machine learning, Radioelectronic and Computer Systems 2024 (2024) 52–65. https://doi.org/10.32620/reks.2024.2.05
3. H. Wu, W. Li, Downscaling land surface temperatures using a random forest regression model with multitype predictor variables, IEEE Access 7 (2019) 21904–21916. https://doi.org/10.1109/ACCESS.2019.2896241
4. J. Peng, A. Loew, O. Merlin, N.E. Verhoest, A review of spatial downscaling of satellite remotely sensed soil moisture, Reviews of Geophysics 55 (2017) 341–366. https://doi.org/10.1002/2016RG000543
5. J.F. Amieva, D. Oxoli, M.A. Brovelli, Machine and Deep Learning Regression of Chlorophyll-a Concentrations in Lakes Using PRISMA Satellite Hyperspectral Imagery, Remote Sensing 15.22 (2023) 5385. https://www.mdpi.com/2072-4292/15/22/5385
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Advancing Harmful Algal Bloom Monitoring for Sustainable Aquaculture Using Earth Observation

Authors: Nikola Geršak, Dragan Divjak, Dr. Mirko Barada
Affiliations: LIST LABS LLC
Harmful Algal Blooms (HABs) pose a significant threat to aquaculture, causing substantial economic losses and environmental challenges. AlgaeDataB is an ongoing project that aims to address this issue by developing an automated, user-friendly Earth Observation (EO) based web service for monitoring and mapping HABs around fish farms. The AlgaeDataB system monitors several harmful algae species that pose risks to aquaculture, including Karenia mikimotoi, Pseudochattonella, Chaetoceros, and Prymnesium. These species are known to cause significant damage to fish stocks and require continuous monitoring to prevent further damage to aquaculture operations. The system leverages two complementary Copernicus missions for comprehensive HAB detection. Sentinel-3 OLCI operates with 21 spectral bands optimized for ocean color monitoring, enabling differentiation of water constituents including chlorophyll-a, colored dissolved organic matter (CDOM), and suspended sediments at mesoscale. Sentinel-2 MSI complements this with its higher spatial resolution, making it particularly valuable for detailed mapping of coastal areas and fjords where aquaculture operations are concentrated. This multi-sensor approach enables both broad-scale operational surveillance and local monitoring, ensuring effective coverage during the critical summer and early autumn periods when major HAB events typically occur. A core innovation of AlgaeDataB is its three-tiered alert system, designed to empower fish farms with timely insights:
- Acute stage: immediate response required within 3 days.
- Pre-acute stage: early warning with a 3-7 day lead time.
- Preventive stage: continuous monitoring for long-term planning.
These alerts are informed by satellite-based monitoring, validated with ground-truth data from water sampling and laboratory analysis. This dual-layer validation ensures reliable detection and characterization of HAB events, offering high reliability for end-users.
The project represents a unique collaboration between technical experts and aquaculture stakeholders, including major Norwegian salmon farming companies. AlgaeDataB is being designed to cater to both large-scale and mid-sized fish farms, with an emphasis on early detection, operational efficiency, and economic sustainability. A dedicated customer support system and iterative service optimization based on user feedback ensure its practicality and adoption. Also, it bridges the gap between research and application, progressing towards a proof-of-concept demonstration. The scalable framework positions itself as a commercially viable solution for the aquaculture industry, aligning with the goals of the European Green Deal and global sustainability initiatives. The AlgaeDataB service will demonstrate the transformative potential of EO technology in addressing climate-driven challenges to aquatic ecosystems. By providing timely and accurate HAB risk assessments, the project will enable aquaculture operators to mitigate losses, enhance resilience, and contribute to sustainable marine resource management.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: SAMSelect: An Automated Spectral Index Search for Marine Applications for Multi-Spectral Satellite Images

Authors: Joost van Dalen, Dr. Marc Rußwurm
Affiliations: Wageningen University
In this abstract, we present SAMSelect, an algorithm designed to generate three-channel visualisations of multispectral Sentinel-2 images for visual analysis of marine scenes and marine litter. It supports marine scientists in monitoring key marine and coastal biodiversity indicators with openly available Sentinel-2 imagery. Visual inspection remains a cornerstone of marine data analysis, especially for complex targets like marine litter, algal blooms, and oil spills, which are difficult to identify in medium-resolution imagery due to their spectral diversity and environmental complexity. Direct visual inspection enables experts to apply their domain knowledge to detect subtle patterns and contextualise significant events, such as oil spill spread or harmful algal blooms, which require a deep understanding of marine conditions. However, selecting the best spectral bands and indices for these tasks is often time-consuming, relying on best practices and trial and error, and it is not always clear which visualisation is optimal for a particular type of object visible in the marine scene. SAMSelect is an algorithm that systematically tests all possible band combinations, normalised band indices, and spectral-shape indices, and evaluates each visualisation's effectiveness by measuring how accurately the AI Segment Anything Model (SAM) can detect a few objects pre-specified by the user. The underlying assumption is that a visualisation suitable for SAM to identify objects is also helpful to the expert's eyes. Crucially, SAMSelect can systematically and automatically explore a large number of visualisations. While SAMSelect can be applied to any multi-spectral image, we explicitly evaluated it on the visualisation of marine debris, which is naturally heterogeneous in composition. Our results show that the visualisations found also outline other floating objects well, such as algal blooms and oil spills.
Concretely, we tested SAMSelect on three study sites: marine litter hotspots near the Bay Islands, Honduras; red tides along Oléron Island, France; and oil spill areas in the Caribbean Sea. Each phenomenon is associated with specific indices; for example, marine litter can be detected using the Floating Debris Index or Plastic Index, while algal blooms and oil spills are often tracked with indices such as the Floating Algae Index and the Oil Spill Index. We would like to present SAMSelect to marine researchers in this Symposium to explore its effectiveness in enabling domain experts to produce more accurate and interpretable visualizations. Incorporating expert annotations to narrow the search space has accelerated the algorithm while still allowing it to adapt to each phenomenon's unique spectral and contextual attributes, increasing segmentation accuracy and visual clarity. This submission focuses on SAMSelect’s expanded applications, underscoring its potential to advance satellite-based monitoring of critical biodiversity indicators impacting Ocean Health. By providing an open-source code repository, we aim to support its broader use across environmental research and marine management, promoting better-informed conservation and response efforts in marine and coastal ecosystems.
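The exhaustive search at SAMSelect's core can be sketched as scoring every three-band composite; in this illustrative sketch the Sentinel-2 band list and `score_fn` are stand-ins for the real SAM-based evaluation against user-annotated objects:

```python
from itertools import combinations

# Sentinel-2 MSI band names (illustrative subset used for composites)
S2_BANDS = ["B1", "B2", "B3", "B4", "B5", "B6",
            "B7", "B8", "B8A", "B9", "B11", "B12"]

def best_visualisation(score_fn, bands=S2_BANDS):
    """Exhaustively score every three-band composite and return the best.

    `score_fn` stands in for SAMSelect's actual criterion: how accurately
    SAM recovers the user-annotated objects when shown that composite.
    """
    return max(combinations(bands, 3), key=score_fn)
```

In the real algorithm, the score would come from rendering the composite, running SAM on it, and comparing the resulting masks with the user's annotations; normalised and spectral-shape indices enlarge the same search space.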
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: The CNES Ocean program: New sensors and future missions to monitor the ocean Health

Authors: Yannice Faugere, Dr Jacqueline Boutin, Aurelien Carbonniere
Affiliations: CNES, CNRS, Sorbonne Universite, LOCEAN
The French Space Agency (CNES), through its Earth Observation Programme, has strongly contributed to ocean observation for decades. The launch of the French/US mission Topex/Poseidon (T/P) (CNES/NASA) in August 1992 was the start of a revolution in oceanography: for the first time, a very precise altimeter system optimized for large-scale sea level and ocean circulation observations was flying. With its unique capability to observe the global ocean in near-real-time at high resolution, satellite altimetry is today an essential input for global operational oceanography and crucial for ocean health monitoring. In parallel with this altimetry success story, CNES has contributed to the observation of other ocean parameters: sea surface salinity, through its contribution to SMOS in partnership with ESA, and waves, with the development of the first scatterometer for wave spectra measurement, SWIM, onboard the French/Chinese satellite CFOSAT. The launch, again in historical partnership with NASA, of the Surface Water and Ocean Topography (SWOT) satellite on December 16th, 2022 allows us for the first time to measure 2D images of the ocean topography with unprecedented resolution and opens a new era for oceanography. However, despite all the flying ocean missions, many scientific questions remain about our understanding of how the ocean works. Answering them requires knowledge of the ocean's global evolution, its coupling with the atmosphere and polar zones, the role of fine oceanic scales, its interfaces with the earth's surface, and its physical, biogeochemical and ecological properties. Understanding, monitoring and forecasting the state of the ocean relies on the complementarity of spaceborne measurements, in-situ measurements and numerical modeling. In addition to the need for continuity of space-based observations, new observables, increased spatio-temporal resolutions and new tools for combining these large sets of information (e.g.
digital twins) are needed to meet these challenges, better assess the ocean's role in climate and marine biodiversity, and better guide environmental policies and mitigation and adaptation measures. Faced with these challenges, the CNES/TOSCA scientific group has identified major issues around which it has structured its scientific outlook, such as wind-current-wave couplings, fine-scale salinity linked to freshwater inputs and feedbacks, the land-ocean continuum, the evolution of the biological carbon pump and marine biodiversity, and climate system variability, trends and tipping points. This presentation will give an overview of the current and future CNES space oceanography program for the period 2025-2029, with the priority of better understanding and, ultimately, protecting ocean health.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Integrated Methodology for Forecasting Sargassum Strandings

Authors: Audrey Minghelli, Sarah Barbier, Dr. Léo Berline, Dr. Léa Schamberger, Malik Chami, Dr. Cristèle Chevalier, Dr. Alex Costa Da Silva, Dr. Luc Courtrai, Boubaker Elkilani, Pierre Daniel, Warren Daniel, Dr. Marianne Debue, Dr. Jacques Descloitres, Prof Jean-Raphael Gros-Desormeaux, Dr. Thibault Guinaldo, Marine Laval, Jeremy Lepesqueur, Dr. Christophe Lett, Prof. Anne Molcard, Philippe Palany, Dr. Witold Podlejski, Dr. Adan Salazar, Dr. Stéphane Saux-Picart, Rose Villiers
Affiliations: University Of Toulon-LIS laboratory, MeteoFrance, Aix Marseille Université-MIO, Sorbonne Université-Observatoire de la Côte d’Azur, Universidade Federal de Pernambuco, Université de Bretagne Sud, Université de Lille, Université des Antilles, IRD-Marbec, Mexican Space Agency
The synergy of satellite data, ocean transport modeling, and in-situ measurements plays a crucial role in enhancing forecasts of invasive Sargassum algal strandings in the tropical Atlantic Ocean, the Caribbean Sea, and along the Brazilian coast. A methodology using remote sensing techniques for detecting and monitoring Sargassum algae on temporal scales ranging from hourly to daily has been developed through multi-sensor satellite data analysis, incorporating both low Earth orbit (Sentinel-2, Sentinel-3 and VIIRS) and geostationary orbit observations (GOES, MTG). Different detection methods were developed based on algal indices, inversion of a radiative transfer model, and artificial intelligence. The aggregation velocity is obtained using geostationary sensors. The spatial distribution of Sargassum aggregations has been analyzed using satellite sensors with resolutions between 20 meters and 5 kilometers. In-situ data were collected from the Caribbean Sea in order to validate the delivered products. To address societal concerns, alert bulletins tailored for end-users such as local authorities, the tourism industry, and fishermen have been designed. This study presents an integrative approach to tackle Sargassum stranding issues by combining satellite data, knowledge of spatio-temporal distribution, and transport forecasting models. Enhancements to each system component will enable authorities to mitigate more effectively the risks associated with the increasing frequency and intensity of Sargassum blooms in the Atlantic Ocean.
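One widely used algal index for this kind of floating-vegetation detection is the Floating Algae Index (Hu, 2009), which measures NIR reflectance above a red-SWIR baseline; a minimal sketch using Sentinel-2 band-centre wavelengths (the project's exact index formulations may differ):

```python
def floating_algae_index(r_red, r_nir, r_swir,
                         wl_red=665.0, wl_nir=865.0, wl_swir=1610.0):
    """Floating Algae Index: NIR reflectance above the red-SWIR baseline.

    Default wavelengths are Sentinel-2 band centres (B4, B8A, B11) in nm.
    Positive FAI suggests floating vegetation such as Sargassum; open
    water sits at or below the baseline.
    """
    # Linear interpolation of the red-SWIR baseline at the NIR wavelength
    baseline = r_red + (r_swir - r_red) * (wl_nir - wl_red) / (wl_swir - wl_red)
    return r_nir - baseline
```

A Sargassum-like pixel (strong NIR peak) yields a positive value, while clear water yields a negative one; in practice a scene-dependent threshold separates the two.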
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: F.04.06 - POSTER - Wetlands: from Inventory to Conservation

Wetlands are an essential part of our natural environment. They are scattered across the world in all bio-geographic regions, providing a range of critically important ecosystem services and supporting the livelihoods and well-being of many people. For much of the 20th century, wetlands have been drained and degraded.

The Ramsar Convention on wetlands is an intergovernmental treaty that provides the framework for national actions and international cooperation for the conservation and wise use of wetlands, as a means to achieving sustainable development. The 172 countries signatory to the convention commit, through their national governments, to ensure the conservation and restoration of their designated wetlands and to include the wise use of all their wetlands in national environmental planning.

Wetland inventory, assessment and monitoring constitute essential instruments for countries to ensure the conservation and wise use of their wetlands. Earth Observation has revolutionized wetland inventory, assessment and monitoring. In recent years, the advent of continuous streams of high-quality, free-of-charge satellite observations, in combination with the emergence of digital technologies and falling computing costs, has offered unprecedented opportunities to improve our collective capacity to efficiently monitor changes and trends in wetlands globally.

The importance of EO for wetland monitoring has been stressed by Ramsar in a recently published report on the use of Earth Observation for wetland inventory, assessment and monitoring.

The SDG monitoring guidelines on water related ecosystems (SDG target 6.6) also largely emphasize the role of EO, while the EO community is getting organised around the GEO Wetlands initiative to provide support to wetlands practitioners on the use of EO technology.

The Wetland session will review the latest scientific advancements in using Earth observations for wetland inventory, assessment, and monitoring to support effective wetland conservation. It will also discuss strategies for integrating Earth observations into the sustainable management of wetland ecosystems.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Preliminary Analysis on long-term human activities around wetlands using VIIRS DNB data

Authors: Shi Qiu, Dr. Peter Dorninger, Dr. Wei Zhao, Dr. Jianzhong Zhang, Stefan Schlaffer, Ms. Yu Zhang
Affiliations: Aerospace Information Research Institute, Chinese Academy of Sciences, 4D-IT GmbH, Institute of Mountain Hazards and Environment, Chinese Academy of Sciences, Beijing Esky Technology Ltd., GeoSphere Austria
In recent years, an increasing number of scientists have been adopting emerging hardware and software capabilities to monitor the environment and derive trends in environmental change for targeted environmental protection. Among ecosystems, wetlands are one of the most important on Earth, playing a multifaceted role in the environment through carbon storage, climate regulation, and soil conservation. Protecting wetlands is crucial for maintaining ecological balance and human well-being, and globally their protection and restoration are receiving more and more attention. Remote sensing, as an important means of Earth observation, has the advantages of wide coverage, high temporal resolution, long time series, and high efficiency, making it indispensable in fields such as environmental monitoring and resource management. In particular, the Suomi National Polar-orbiting Partnership (SNPP) satellite, launched by the United States at the end of 2011 and equipped with the Visible Infrared Imaging Radiometer Suite (VIIRS) payload, can image visible light at night, reflecting human activities after dark and filling the gap left by traditional remote sensing for nighttime monitoring. This study uses VIIRS Day/Night Band (DNB) nighttime light data for a long time-series analysis (2013 to 2024) of wetland parks in Austria and wetlands in China, analyzing the changes in human activities around wetlands over more than a decade and providing data and technical evidence for changes in human activities and wetlands.
So far, the study has analyzed nighttime light data from the summer of 2013 to the summer of 2024 and found that nighttime light in the Vienna, Austria area decreased by 20% from 2013 to 2019 and has been stable from 2019 to 2024; this might be caused by the deployment of a new type of street light. The data around the wetlands decreased by about 16% from 2013 to 2019, rebounded by about 10% in 2020, and then decreased by about 20% from 2020 to 2024. The study suggests that these changes may be due to the impact of COVID and shifts in vacation patterns, with more people spending their weekends at home during the pandemic.
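The percentage changes reported above amount to comparing epoch means of DNB radiance; a minimal sketch (function names are ours, with NaN standing in for cloudy or missing passes):

```python
import numpy as np

def epoch_mean(radiances):
    """Mean DNB radiance over an epoch's passes (NaN = cloudy/missing)."""
    return float(np.nanmean(np.asarray(radiances, dtype=float)))

def pct_change(before, after):
    """Relative change (%) between two epoch means,
    as in the 2013 vs. 2019 comparison described above."""
    return 100.0 * (after - before) / before
```

For example, an epoch mean dropping from 2.0 to 1.6 radiance units corresponds to a 20% decrease.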
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Integrating Low-Cost Uncrewed Aerial Systems (UAS) and Satellite Data for Mangrove Monitoring and Conservation: A Case Study From Seychelles

Authors: Marlene Bauer, Anna Bischof, Antonio Castañeda-Gómez, Rafaela Gameiro, Corinne Julie, Dr. Nirmal Jivan Shah, Dr. Doris Klein, Stefan Dech, Dr. Martin Wegmann, Dr. Mirjana Bevanda
Affiliations: Earth Observation Research Cluster, Institute of Geography and Geology, Julius-Maximilians-Universität Würzburg, Nature Seychelles, The Centre for Environment and Education, Earth Observation Center, German Aerospace Center
Mangrove forests are crucial ecosystems that provide ecological, economic, and social benefits, including coastal protection, carbon sequestration, water quality improvement, and biodiversity support. Despite their significance, mangroves are increasingly threatened by climate change and human activities such as urbanization and deforestation, highlighting the urgent need for effective monitoring methods. Traditional field surveys are time-intensive and remain spatially limited. In recent years, remote sensing has enabled large-scale mangrove mapping, health assessment, and change detection, greatly supporting conservation and management efforts. However, these datasets are often limited on the temporal scale, and their spatial resolution restricts their ability to capture fine-scale changes. Uncrewed Aerial Systems (UASs) represent an advancement in remote sensing, complementing satellite-based monitoring by providing high-resolution data suitable for detailed spatial and temporal analysis. Previous studies have shown that UASs can be used to monitor mangroves by estimating biophysical properties such as canopy height and coverage and above-ground biomass, as well as supporting habitat assessments, including individual tree species identification and invasive species detection. However, the potential of UASs to provide detailed spatial and temporal data for mangrove health monitoring has yet to be fully investigated. The Seychelles' mangrove ecosystems cover 2,195 ha, with Mahe hosting the greatest diversity, including seven of the archipelago's true mangrove species. Approximately 69% of Mahe's mangrove forests are protected, including the Port Launay Coastal Wetlands, a Ramsar site since 2004. The island contains the second-largest mangrove extent in Seychelles (181 ha), contributing 12% of the nation's mangrove carbon stock and providing critical climate change mitigation and adaptation services.
This study aims to assess the potential of low-cost UASs for mangrove species mapping and health assessment to guide local conservation efforts. In collaboration with Nature Seychelles, field data were collected between 25 November and 13 December 2024, using a DJI Mavic 3 Pro to capture high-resolution RGB imagery. While it is important to acknowledge that even what is considered a low-cost UAS remains expensive, especially for small NGOs in the Global South, the DJI Mavic 3 Pro sits at the lower end of the price spectrum and offers a cost-effective option for conservation applications. To the authors' knowledge, this is the first study to integrate UAS and satellite data for mangrove analysis in Seychelles. The focus on consumer-grade technology and open-source processing tools aims to create accessible datasets to assist local monitoring efforts. Openly available optical and SAR satellite data will be combined with UAS imagery to address critical knowledge gaps in habitat monitoring, health assessment, and tracking dynamic changes. Finally, this work develops a scalable methodology for integrating UAS and satellite data, contributing to global mangrove conservation initiatives.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Large-Scale Wetland Mapping Using Self-Supervised Learning and Vision Transformer

Authors: Mohammad Marjani, Dr. Fariba Mohammadimanesh, Dr. Masoud Mahdianpari, Dr. Eric W. Gill
Affiliations: Memorial University Of Newfoundland, Natural Resources Canada, C-Core
Wetlands are unique ecosystems where freshwater and saltwater environments intersect, offering numerous environmental benefits. However, since the early 20th century, nearly two-thirds of the world's wetlands have been lost or severely degraded. In Canada, wetland loss has historically been attributed to land-use changes brought about by European settlement, urban development, climate change, agricultural activities, and impacts like runoff diversion and pollution. These changes affect the ability of wetlands to deliver essential ecosystem services, including carbon storage, water filtration, and economic support. Keeping these ecosystems safe and healthy is therefore crucial. Accurate wetland distribution maps are essential for protecting wetlands and understanding their spatial distribution, ecosystem functions, and temporal changes. Traditional methods, such as vegetation surveys and water and soil sampling, are time-intensive, require significant effort, and are impractical in remote areas. Remote sensing has been used as an efficient and cost-effective alternative to address these challenges using optical and radar satellite systems. However, accurately classifying wetlands using remote sensing data remains challenging due to their complex, heterogeneous nature, fragmented landscapes, and overlapping spectral characteristics among wetland types. Temporal and spatial variability further complicates classification, requiring advanced methods for precise wetland mapping. Wetland mapping studies often focus on small regions constrained by satellite data and computational resources. However, algorithm performance typically declines when applied to larger areas, highlighting the need for robust approaches to maintain high accuracy in large-scale wetland mapping. Recent advancements in machine learning (ML) and deep learning (DL) have significantly enhanced wetland mapping using remote sensing data.
Studies have employed convolutional neural networks (CNNs) and hybrid models combining CNNs and vision transformers (ViT), achieving high accuracy. However, these methods are typically limited to small-scale regions due to their reliance on extensive labeled training datasets, which becomes impractical for large-scale mapping. Self-supervised learning (SSL) offers an alternative with comparable accuracy by learning patterns from unlabeled data, reducing dependence on labeled datasets. Techniques like SimCLR (simple framework for contrastive learning of visual representations) have succeeded in remote sensing applications. However, their application to large-scale wetland mapping remains underexplored, presenting further research opportunities. Therefore, this study aims to harness the potential of SimCLR for large-scale wetland mapping using Sentinel-1 (S1) and Sentinel-2 (S2) data. This study focuses on Newfoundland, a large Canadian island spanning 108,860 km², known for its diverse wetland ecosystems, including bogs, fens, and marshes. Wetlands cover approximately 18% of the island, with peatlands dominating the landscape. Ground truth data were collected in Newfoundland during field surveys conducted in 2015, 2016, and 2017. The surveys covered over 1,200 sites, including wetlands and non-wetlands, across areas like Avalon, Grand Falls-Windsor, Deer Lake, and Gros Morne. The Canadian Wetland Classification System (CWCS) was used to categorize wetlands into bogs, marshes, fens, and swamps, with peatlands being the most common. GPS data were recorded at each site, along with metadata such as location names, dates, photos, and notes on vegetation and hydrology. Early surveys in 2015 included wetlands of all sizes, but from 2016 onward, the focus shifted to areas larger than 1 hectare. Experts reviewed and labeled the data using high-resolution Google Earth imagery, and the dataset was divided into 70% for training and 30% for validation to support wetland classification.
S1 and S2 satellite data were collected over the study area. S1 provides 10 m resolution radar data with four polarization bands (HH, VV, HV, VH), which is ideal for detecting water bodies under dense vegetation. S2 offers optical data with a multispectral sensor, including 10 m and 20 m resolution bands, enhancing mapping accuracy. The missions’ frequent revisit times (every 5–6 days) enable timely monitoring over large areas. This study used S1 and S2 data from the summers of 2022, 2023, and 2024. To manage Newfoundland’s large scale, the island was divided into 28 equally sized regions, ensuring efficient and consistent data collection across the study area. In addition to the satellite bands, S2 bands were used to calculate three indices: the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Built-up Index (NDBI), and the Normalized Difference Water Index (NDWI). SimCLR extracts visual features from unlabeled data using random augmentations to create two unique perspectives of the same image. This process helps the model identify variations of the same image while distinguishing it from others. This study utilized a range of augmentation methods, such as channel shuffling, spectral jitter, random band cropping, rescaling, CutMix, random cutout, rotation, and flipping. These augmentations introduced diverse modifications to the input data. For each image, two distinct variations were generated by applying these augmentations randomly, enabling the creation of paired views for representation learning. SimCLR employs a contrastive loss function to bring the embeddings of two augmented views of the same image closer together in the feature space while pushing embeddings of different images further apart. This process treats each augmented pair as positive and all other pairs in the batch as negatives.
The loss function used, NT-Xent (Normalized Temperature-scaled Cross Entropy), ensures that positive pairs are closely aligned and negative pairs are well-separated in the embedding space. In the SimCLR framework applied to wetland mapping, the ViT served as the encoder, using its self-attention mechanisms to extract spatial and contextual relationships within satellite imagery. The ViT architecture processes input data by dividing images into fixed-size, non-overlapping patches, which are then flattened and projected into a high-dimensional space using a linear transformation. This approach enables the model to capture spatial patterns and contextual interactions across the image, which is crucial for identifying complex wetland features. Following the training phase, the SimCLR model was fine-tuned using the available training images and evaluated on the validation dataset. To assess the model's performance, precision, recall, and F1-score (F1) were calculated for both datasets. The results of the fine-tuning experiments, which involved varying proportions of the training dataset, as well as the direct ViT training without SimCLR, are investigated. As the amount of training data increased, the model's performance significantly improved across all wetland types. When fine-tuning with only 25% of the training images (Scenario 1), the model achieved the lowest performance, with precision, recall, and F1 scores ranging from 0.55 to 0.79 for different wetland types. In contrast, fine-tuning with 50% (Scenario 2) and 75% (Scenario 3) of the training data resulted in notable improvements, particularly for bogs and marshes, with the F1 score reaching up to 0.89 for bogs and 0.88 for marshes. The highest performance was observed when 100% of the training images were used (Scenario 4), with precision, recall, and F1 scores peaking at 0.93, 0.92, and 0.92 for bogs, and 0.90, 0.89, and 0.89 for marshes, respectively.
In comparison, training the ViT model directly on all images without the SimCLR framework (Scenario 5) resulted in slightly lower performance across most wetland types, particularly for fens, which saw a reduction in precision (0.86) and recall (0.88) compared to the best fine-tuned models. This suggests that the SimCLR-based approach, particularly when fine-tuned with larger training datasets, significantly improves wetland classification performance.
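For reference, the NT-Xent loss described above can be sketched in a few lines of numpy (an unstabilised toy version for illustration, not the training code used in the study):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1[i] and z2[i] are the embeddings of the two augmented views of
    image i; every other embedding in the 2N-sample batch acts as a
    negative. No log-sum-exp stabilisation (fine for small similarities).
    """
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / tau                               # temperature-scaled
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    # Index of each sample's positive partner (its other augmented view)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.sum(np.exp(sim), axis=1))
    return float(-np.mean(log_prob))
```

Minimising this loss pulls each positive pair together and pushes all other batch members apart, which is exactly the behaviour the abstract describes.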
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: The Tropical Wetland mapping system (TropWet) reveals profound changes in wetland extent across the Sahel region of Africa

Authors: Gregory Oakes, Dr Andy Hardy, Dr Pete Bunting, Lammert Hilarides, Mori Diallo, Edmond Kuto, Charlie Avis
Affiliations: Aberystwyth University, Wetlands International, Wetlands International Sahel Office, Wetlands International East Africa
Sahelian wetland systems represent important ecosystems, particularly for migratory bird species and endangered mammals, as well as providing vital services for millions of people. Yet wetlands across this region are relatively under-studied, with a scarcity of data and, in some instances, an absence of reliable wetland inventories. There is a pressing need to better understand the extent and dynamics of Sahelian wetlands, where water resource and land use pressures have driven the degradation of wetland ecosystems across the Sahel. Equally, climatic events such as El Niño and positive phases of the Indian Ocean Dipole have led to dramatic changes in rainfall patterns, resulting in devastating flooding in Nigeria and Mali, and globally significant increases in natural methane emissions reported over wetlands in South Sudan. TropWet, hosted on Google Earth Engine, is a tropical wetland mapping system based on the analysis of Landsat imagery alongside hydrological terrain metrics. Specifically, linear spectral unmixing is applied using pre-determined endmembers, providing pixel-level fractions of water, bare soil, vegetation and burn scar. Information on fractional cover is combined with spectral indices and terrain information within a fuzzy-optimised rule base to classify open water, inundated vegetation and other non-inundated land cover classes. The system was applied to bi-monthly Landsat and Sentinel-2 composites from 2014-23 over the Sahel region. Resulting maps were used to reconstruct the inundation history across the region, as well as to generate inventories using the International Union for Conservation of Nature’s Global Ecosystem Typology by cross-walking with existing data, including the Global Lakes and Wetlands Dataset. Resulting maps indicate that inundated vegetation accounts for a mean contribution of 40% (std. dev.: 15%) of the total wetted area.
This represents a significant improvement on existing data, such as the Global Open Surface Water layer, which only accounts for open water and thereby underestimates the true extent of Sahelian wetlands. Capturing this information is important, as it has been reported that inundated vegetation, particularly papyrus and phragmites, represents a major source of natural methane emissions. We examine changes in inundation extent between the periods 2014-18 and 2019-23. The former 5-year block is indicative of long-term average rainfall conditions, whereas the latter represents a significant increase in rainfall, particularly across central and eastern parts of the Sahel, attributed to a strong Indian Ocean Dipole event. TropWet demonstrates a 191% increase in overall inundation extent between these two periods, with profound impacts on the livelihoods of the people living within these wetlands. Notable change hotspots are indicative of widespread devastation in the Inner Niger Delta and further downstream in northeast Nigeria due to extensive flooding, and significant losses of agricultural land and livestock in the Sudd, leading to a complex humanitarian crisis for millions of people. Furthermore, we identify a shift in seasonality, with large-scale areas previously characterised as seasonal wetlands becoming permanent. This has important implications for methane emissions and the composition of ecosystems. TropWet represents a tractable solution, not only for generating wetland inventories in a timely manner but also for charting inter- and intra-annual inundation dynamics. Illustrated by the contrasting situations between 2014-18 and 2019-23, we emphasise the importance of providing up-to-date information on wetland extent and characteristics, as opposed to using static information that can quickly become outdated and inaccurate.
In doing so, we can improve the way in which these regions are managed, helping to safeguard livelihoods and to protect and enhance these important ecosystems. By demonstrating transferability across the Sahel region, we provide confidence that TropWet can be deployed across the African continent, providing wetland inventories in data-scarce regions as well as information on how wetlands are reacting to increasing pressures and a changing climate.
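For illustration, the linear spectral unmixing step underlying TropWet can be sketched as a least-squares inversion against fixed endmember spectra. The endmember values, band count, and the clip-and-renormalise constraint handling below are hypothetical simplifications, not the actual TropWet endmembers or solver:

```python
import numpy as np

# Hypothetical endmember spectra for four illustrative bands
# (columns: water, bare soil, vegetation, burn scar) -- NOT the
# pre-determined TropWet endmembers.
E = np.array([
    [0.02, 0.18, 0.04, 0.10],
    [0.03, 0.22, 0.08, 0.05],
    [0.02, 0.30, 0.35, 0.15],
    [0.01, 0.35, 0.30, 0.08],
])  # shape (bands, endmembers)

def unmix(pixel, endmembers):
    """Least-squares unmixing; non-negativity and sum-to-one are
    approximated by clipping and renormalising the solution."""
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    f = np.clip(f, 0.0, None)
    total = f.sum()
    return f / total if total > 0 else f

# A pixel that is an equal mix of water and vegetation
pixel = 0.5 * E[:, 0] + 0.5 * E[:, 2]
fractions = unmix(pixel, E)  # ~[0.5, 0.0, 0.5, 0.0]
```

In the full system these per-pixel fractions would then feed the fuzzy-optimised rule base together with spectral indices and terrain metrics.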
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Prototyping a Policy-Driven Earth Observation Service for Monitoring Critical Wetland Habitats in Natura 2000 Sites

Authors: Christelle Vancutsem, Bruno Combal, Meriam Lahsaini, Pavel Milenov, Frank Vassen
Affiliations: Joint Research Center - European Commission, European Commission (DG ENV), European Environment Agency (EEA), Arcadia SIT for the Joint Research Centre (European Commission)
The EU Habitats Directive mandates the protection and monitoring of wetland habitats within Natura 2000 sites. However, comprehensive and timely assessment of wetland conservation status remains challenging. Reporting under Article 17 of the Habitats Directive lacks the detailed, spatially explicit information required for accurate assessment of the conservation status of wetland habitats, in particular indicators of degradation. This initiative, developed in collaboration with the European Commission's DG Environment (DG ENV) and the European Environment Agency (EEA), aims to design an operational geospatial information system to monitor critical wetlands, detect degradation, and assess conservation status within Natura 2000 sites. Leveraging the Knowledge Centre on Earth Observation's (KCEO) policy-focused value chain and Deep Dive assessment methodology, we translate specific policy needs into technical requirements for Earth Observation (EO) products. We analyze the fitness-for-purpose of existing products and services, evaluate gaps, and provide recommendations to support the EU's commitment to biodiversity protection. Our approach extends beyond assessment to prototype a policy-driven service for monitoring wetlands in selected areas. Ongoing and planned key activities include:
- Characterizing various European wetland habitats, their ecological functioning, and the main pressures leading to degradation.
- Determining appropriate indicators for selected habitats and the relevant EO products, prioritizing wetland types based on current degradation levels (per Article 17 of the Habitats Directive), relevance beyond the Directive, and biodiversity value.
- Designing advanced spatial and temporal analysis tools for policymakers and conservation managers, integrating cutting-edge EO technologies with ground-truth data and modelling.
This project will enhance our understanding of wetland dynamics and support more effective implementation of EU environmental policies, including the Biodiversity Strategy 2030 and the Nature Restoration Law. The insights and methodologies developed through this project will serve as the foundation for implementing a comprehensive web-based platform for monitoring all wetlands across the EU.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: An Efficient Hybrid CNN-Transformer Framework for Wetland Classification Using Multi-Source Satellite Data

Authors: Ali Radman, Masoud Mahdianpari, Fariba Mohammadimanesh
Affiliations: Memorial University of Newfoundland, C-CORE, Canada Centre for Remote Sensing, Natural Resources Canada
Wetlands are ecologically critical environments providing essential services such as climate regulation, water purification, and habitat preservation. However, their global decline necessitates accurate and efficient mapping methods for sustainable management. Field-based methods are resource-intensive and impractical for large-scale applications, whereas satellite remote sensing, leveraging multispectral and SAR datasets, offers an efficient alternative for large-scale wetland classification. Despite recent advances, existing methodologies face challenges in computational efficiency and class differentiation accuracy, particularly for spectrally similar wetland categories. Recent advances in artificial intelligence, particularly hybrid deep learning architectures, offer promising solutions to these challenges. Convolutional neural networks (CNNs) excel at capturing local spatial features, while transformers are effective at modeling long-range dependencies. However, CNNs struggle with contextual information, and transformers are computationally expensive. Hybrid models that combine the strengths of both approaches provide an efficient framework for accurate wetland classification. This study introduces a hybrid convolutional-transformer model that fuses Sentinel-2 multispectral and Sentinel-1 SAR data for precise and efficient wetland classification. The proposed model leverages convolutional layers to capture local spatial features and transformer blocks to model long-range dependencies. A key innovation is the multi-head convolutional attention (MHCA) module, which improves on the efficiency of traditional transformer attention by integrating convolutional operations. The architecture also incorporates a local feed-forward network (LFFN) to preserve locality in spatial data, further enhancing the model's ability to handle the complexities of wetland classification.
The model was evaluated using a robust dataset on wetlands in St. John’s, located on Newfoundland Island, Canada, consisting of 11 land cover classes, including diverse wetland types. Time-averaged Sentinel-1 SAR data with dual-polarized backscatter (VV, VH, HH, HV) and Sentinel-2 multispectral data with 10 optical and infrared bands were used to create a comprehensive dataset. The proposed model achieved state-of-the-art performance, with an overall accuracy (OA) of 95.36% and substantial improvements in challenging categories such as bog (94.79%), fen (90.57%), swamp (89.04%), and marsh (89.91%). Comparative analyses with CNN, transformer, and hybrid models demonstrated the proposed hybrid model's superiority in both accuracy and computational efficiency. Compared to CoAtNet (the second-best model), the proposed model demonstrated a 2% improvement, while surpassing ResNet and Swin by 4.97% and 6.61%, respectively. It also highlighted the advantages of multi-source data, with the fusion of Sentinel-1 and Sentinel-2 improving OA significantly over single-source configurations (93.73% for Sentinel-2 alone, 82.56% for Sentinel-1). These results underscore the model’s effectiveness in handling spectrally similar classes and leveraging complementary data sources. In addition to accuracy, the model achieves computational efficiency with reduced memory usage and training time by approximately 50% compared to leading alternatives. The study underscores the potential of hybrid architectures for overcoming the inherent challenges of wetland classification. By integrating the strengths of CNNs and transformers, the proposed model delivers a scalable solution for real-world applications, combining high accuracy with reduced computational demands. This model represents a significant step forward in remote sensing for ecological monitoring and sustainable management of wetlands.
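The transformer component of such a hybrid model rests on scaled dot-product self-attention over fused SAR and multispectral features. The sketch below is a generic single-head attention over channel-stacked Sentinel-1/Sentinel-2 pixel tokens, not the paper's MHCA or LFFN modules; the patch size, channel counts, and random weights are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over pixel tokens."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

# Fuse Sentinel-1 (here 4 SAR channels) and Sentinel-2 (10 bands) by channel
# stacking, then treat each pixel of an 8x8 patch as one token of 14 features.
s1 = rng.random((8, 8, 4))
s2 = rng.random((8, 8, 10))
tokens = np.concatenate([s1, s2], axis=-1).reshape(64, 14)

d_model = 16
Wq, Wk, Wv = (rng.random((14, d_model)) for _ in range(3))
out = self_attention(tokens, Wq, Wk, Wv)  # shape (64, 16)
```

In the actual architecture, convolutional layers would extract local features before attention, and the MHCA module replaces part of this dense attention with convolutional operations for efficiency.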
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Evaluating Sustainable Development Goal 15 Across Various Scenarios Using an Integrated Multi-objective Programming and Patch-generating Land Use Simulation Framework in the Internationally Significant Wetland of Momoge

Authors: Ms. Jiaqi Han, Prof. Dongyan Wang, Prof. Andreas Rienow
Affiliations: Jilin University, Ruhr University
Wetlands are highly productive and valuable ecosystems that play a critical role in ensuring water security and supporting biodiversity. Internationally important wetlands accredited by the Ramsar Convention are vital to the global wetland conservation framework. Momoge Nature Reserve, a Ramsar site of international importance in China, serves as a vital resting site for 90% of the global crane population and waterfowl species migrating between Siberia and Oceania. However, because of the expansion of human production and living spaces, the wetlands in the study area face considerable issues of land degradation, environmental pollution, and resource wastage. Human activity and climate change have reduced the global wetland area while posing significant challenges to achieving the Sustainable Development Goals. Therefore, it is necessary to predict the spatial-temporal evolution of wetlands under various planning strategies and to conduct sustainable development assessments at the small scale. To address these challenges, an integrated multi-objective programming and patch-generating land use simulation framework was developed to predict the future distribution of three wetland and seven non-wetland land types at 10 m resolution. Building upon this framework, this study presents three main innovations to support SDG 15 reporting: fine-scale simulation of wetland distribution, integration of vision goals and SDGs with scenario design, and evaluation of wetland ecological sustainability. The ecological sustainability of wetlands was evaluated based on key SDG 15 indicators, including wetland coverage rate (SDG 15.1.2), land degradation rate (SDG 15.3.1), proportion of important bird habitats (SDG 15.5.1), and ecosystem service value (SDG 15.9). The study assessed the level of wetland ecological sustainability from 2020 to 2035 under four scenarios: natural increase, agricultural development, wetland protection, and harmonious development.
The results indicated that (1) the model has high simulation accuracy, as evidenced by its overall accuracy of 0.86, kappa of 0.84, and figure of merit of 0.61. (2) Under the wetland protection scenario, SDGs 15.1.2 and 15.5.1 achieved their highest values of 73.03% and 22.19%, respectively. Conversely, under all four scenarios, SDG 15.3.1 values declined, with the lowest value (5.64%) achieved under the harmonious development scenario; moreover, the land degradation neutrality target was not met. (3) The ecosystem service value increased under agricultural development, wetland protection, and harmonious development scenarios, with the largest increase (amounting to 6.51 billion CNY) achieved under the wetland protection scenario. This underscores the substantial ecological and economic benefits of conservation policies. (4) The overall ecological sustainability levels of the Momoge Nature Reserve failed to meet the expected standards under the four scenarios in 2035. Although the wetland protection scenario faced land degradation challenges, it was the optimal strategy for developing internationally important wetlands. The study reveals the critical role of scenario-based simulations in wetlands policy development. The findings presented herein highlight the necessity of policy support in achieving wetland conservation goals and advancing sustainable development. Such predictions bridge the gap between global objectives and practical local management actions, enabling regional managers to implement effective strategies aligned with international goals.
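The accuracy figures quoted above (overall accuracy, kappa, figure of merit) can all be derived from a confusion matrix and change-agreement counts. A minimal sketch with toy numbers, not the study's data:

```python
import numpy as np

def overall_accuracy(cm):
    # Fraction of correctly classified samples
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    # Agreement corrected for chance agreement
    n = cm.sum()
    po = np.trace(cm) / n
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (po - pe) / (1 - pe)

def figure_of_merit(hits, misses, false_alarms, wrong_category=0):
    """FoM for land-change simulation: correctly simulated change as a
    fraction of the union of observed and simulated change."""
    return hits / (hits + misses + false_alarms + wrong_category)

# Toy 3-class confusion matrix (rows: reference, cols: simulated)
cm = np.array([[50, 2, 3],
               [4, 40, 1],
               [2, 3, 45]])
oa = overall_accuracy(cm)        # 0.9
kappa = cohens_kappa(cm)         # ~0.85
fom = figure_of_merit(hits=60, misses=20, false_alarms=18,
                      wrong_category=2)  # 0.6
```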
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Unveiling four decades: Eco-Hydrology, land-use landcover classification & water quality estimation of Haiderpur wetlands through the lens of satellite imagery and AI

Authors: Abhinav Galodha, Dr Maria-Valasia Peppa, Dr Sanya Anees, Professor Brejesh Lall, Professor Shaikh Ziauddin Ahammad
Affiliations: School of Interdisciplinary Research (SIRe), Indian Institute of Technology, IIT Delhi, School of Engineering, Cassie Building, Newcastle University, Department of Electronics and Communication Engineering, Netaji Subhas University of Technology (NSUT), Department of Electrical Engineering, Indian Institute of Technology, IIT Delhi, Department of Biochemical Engineering and Biotechnology, Indian Institute of Technology, IIT Delhi
Wetlands are indispensable ecosystems that provide critical ecological services such as water purification, flood control, carbon storage, and habitat provision for diverse species. The Ramsar Convention, an international treaty signed in 1971, underscores the commitment of its contracting parties to the conservation and sustainable use of all wetlands, aiming to halt their degradation and loss. Haiderpur Wetland (Muzaffarnagar district and Bijnor district, Uttar Pradesh, India), a recognized Ramsar site in India, exemplifies a crucial ecological area within its region. This wetland has undergone significant changes over more than three decades due to various anthropogenic pressures, including urban expansion and agricultural intensification. This study aims to analyze land use and land cover (LULC) changes in the Haiderpur Wetland and its surrounding areas from 1990 to 2024. By utilizing advanced machine learning models such as Random Forest (RF), Convolutional Neural Networks (CNNs), XGBoost, and VGG16, the research provides a high-resolution, comprehensive understanding of these transformations. The methodological framework combines satellite imagery analysis with these cutting-edge models, enhancing precision in classification tasks crucial for temporal change detection. In 1990, the landscape of Haiderpur was predominantly vegetative, with forests and grasslands covering about 45% of the area. This rich natural habitat was integral not only for maintaining biodiversity but also for the sustenance of local communities that depended on these resources for their livelihoods. However, as the study reveals, by 2024 the vegetative cover has dramatically reduced to 25%. This decline signifies a substantial loss of biodiversity-rich areas, which are critical for ecological integrity and resilience. One of the primary factors driving this change is urban expansion. In 1990, urban areas constituted merely 15% of the region.
Due to population growth and economic development, urban areas have surged to 50% by 2024. This rapid urbanization has reshaped the environment, influencing local climate patterns and contributing to the urban heat island effect, thereby exacerbating local climatic conditions. The application of CNNs has been instrumental in capturing spatial patterns from the satellite data, ensuring an overall classification accuracy of 95%. This model excels in identifying intricate land cover types and their transitions over time, which is vital for reliable LULC assessments. Moreover, XGBoost provided a robust framework for predictive analysis, highlighting critical environmental variables such as proximity to urban centres and changes in agricultural practices as significant contributors to land cover change. VGG16, a deep learning model fine-tuned for this study, further validated these classifications through superior accuracy in distinguishing urban from vegetative and aquatic areas. Significant changes in agricultural land and water bodies are observed. Agricultural land, which previously encompassed 22% of the study area, has decreased to 18%. This shift reflects changing land use priorities, where the economic allure of urban development often outweighs the traditional agricultural returns. Meanwhile, water bodies have seen a reduction from 18% to 12%, a change that could severely influence local biodiversity and water availability, further challenging the wetland's ecological functions. Seasonally, the study found marked fluctuations in vegetation health, impacting biodiversity and ecosystem services. For the classification itself, we employed XGBoost supported by Sentinel-2 imagery, analyzing 300 samples with high classification precision across several classes.
The results of the XGBoost model, notably with a maximum depth of 10, 500 estimators, and a learning rate of 0.01, reveal classification accuracies for forest at 91% and built-up areas at 89%. Though overall accuracy was 75.56%, the findings pinpoint the need for improvements in urban classification, where precision and recall were lower at 63.64% and 58.33%, respectively, due to spectral overlaps. The analysis indicates significant land cover distribution, where forests dominate 37.4% of the area, highlighting their ecological importance. Water bodies account for 25.6%, showing their vital role in local hydrology and biodiversity support. Barren land comprises 15.4%, offering potential for development or restoration. Agricultural and built-up areas represent modest portions at 5.8% each, reflecting minimal urban and farming activities. Swamp vegetation holds 10%, emphasizing crucial biodiversity zones. Furthermore, for thermal dynamics, the study utilized MODIS Terra and Aqua datasets, measuring land surface temperature (LST) and the Urban Heat Island (UHI) effect. The data indicated notable temperature changes influencing the microclimate, especially during extreme weather conditions. Landsat-8 imagery's high resolution facilitated detailed mapping of thermal variations, paired with NDVI to assess vegetative health, which remained consistently positive, signaling continued resilience against seasonal environmental stresses. The study also incorporated precipitation data from CHIRPS, indicating declining annual rainfall, which could contribute to increasing thermal pressure within the wetland region. This precipitation shortage might elevate the UHI effect and induce further stress on local ecosystems. Our feature importance analysis highlights the spectral band B12 as crucial for distinguishing built-up and agricultural areas, aligning with existing research.
This insight could direct future enhancements in land classification models, particularly for areas with spectral similarity issues. The CNN-based models also played a significant role, with detailed maps showing forest coverage of 550 km², water bodies at 350 km², and barren lands at 200 km², further validating the XGBoost findings. For instance, vegetation indices during the dry season showed a significant decrease, affecting both carbon sequestration rates and habitat availability for species during critical breeding periods. These findings resonate with the Sustainable Development Goals (SDGs), particularly SDG 6 (Clean Water and Sanitation), highlighting the interplay between water management and ecosystem integrity; SDG 13 (Climate Action), emphasizing the need for resilience-building measures to adapt to climate variability; and SDG 15 (Life on Land), which advocates for the conservation and sustainable use of terrestrial ecosystems. The study's outcomes stress the urgent need for integrative management approaches that balance ecological preservation with developmental pressures. Emphasizing the implementation of sustainable urban planning and the restoration of degraded habitats can help mitigate the negative impacts of rapid urbanization. Community engagement and education on the ecological and economic benefits of wetlands can also foster stewardship and conservation efforts. Finally, this research not only maps and analyses the transformation of Haiderpur Wetland over 34 years but also highlights the role of advanced analytics in ecological studies. By integrating CNNs, XGBoost, and VGG16, this study sets a precedent for future research in landscape dynamics, providing a model for assessing other critical ecosystems worldwide. The findings underscore the need for policy interventions that prioritize wetland preservation, aligning with international conservation goals, and protecting natural capital for future generations. 
Overall, it serves as a call to action for stakeholders at all levels to recognize and reinforce the essential value of wetlands in our global ecological continuum. In conclusion, the data underscores the importance of conservation for areas like Haiderpur Wetland, where forests and water bodies serve critical ecological functions. The study’s models and methodologies set the stage for better management practices, offering insights that guide conservation, urban planning, and sustainable resource management efforts. Collaborative approaches involving policymakers and stakeholders are essential to address environmental challenges and ensure a balanced coexistence between ecological preservation and developmental activities. Keywords: Google Earth Engine (GEE), Land Use and Land Cover (LULC), Landsat Imagery, Land Surface Temperature (LST), Urban Heat Island (UHI), Haiderpur, Sustainable Urban Planning, Modified Normalized Difference Water Index (MNDWI), Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), Random Forest, Sustainable Development Goals (SDG: 2, 6, 13, 15).
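The spectral indices named in the keywords above (NDVI, MNDWI, SAVI) are simple band combinations. A sketch with illustrative reflectance values, assuming the standard formulations of each index:

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red)

def mndwi(green, swir):
    # Modified Normalized Difference Water Index (Xu, 2006)
    return (green - swir) / (green + swir)

def savi(nir, red, L=0.5):
    # Soil-Adjusted Vegetation Index (Huete, 1988); L dampens soil background
    return (nir - red) * (1 + L) / (nir + red + L)

# Illustrative surface-reflectance values for a vegetated pixel
nir, red, green, swir = 0.40, 0.08, 0.10, 0.15
v_ndvi = ndvi(nir, red)       # ~0.67, strongly vegetated
v_mndwi = mndwi(green, swir)  # negative, i.e. not open water
v_savi = savi(nir, red)
```

The same functions apply unchanged to whole image arrays (e.g. numpy arrays of Sentinel-2 bands), which is how such indices are typically computed at scale.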
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Monitoring Peatland Dynamics over Agricultural Areas in Estonia using Sentinel-1 SAR data

Authors: Stavroula Kanakaki, Juan M.
Affiliations: European Commission, University of Alicante
The scope of the current study is to monitor and analyze peatland regions in Europe. Peatlands play a significant role in environmental sustainability, and restoring these ecosystems is crucial given their unique characteristics. It is well documented that peat can retain up to 20 times its weight in water, moderating the flow of water through the landscape and enhancing resilience to extreme weather conditions. Peatlands also contribute to improved water quality and reduced flood risk. They are critical habitats for wildlife, act as global carbon stores, and provide services such as drinking water filtration, flood prevention, and historical archives; they are also utilized for grazing and recreational activities. Notably, peatlands are among the most carbon-rich ecosystems on Earth, storing twice as much carbon as the world's forests, underscoring their importance not only as ecological treasures but also as crucial elements in global carbon management. In the frame of the EU’s Biodiversity Strategy for 2030, among the planned actions is the launch of an EU nature restoration plan. This plan will engage EU countries in making concrete commitments and undertaking actions to effectively restore degraded ecosystems. These efforts will focus particularly on ecosystems with the greatest potential for carbon capture and storage, and on those that can help prevent and mitigate the impact of natural disasters. Monitoring agricultural and environmental policies helps assess the impact of the Common Agricultural Policy (CAP). Typically, farmers submit their CAP payment applications online through the Integrated Administration and Control System (IACS), as noted by the European Parliament. National authorities then verify farmers' compliance with the conditions for receiving these payments.
Additionally, EU countries utilize IACS to ensure farmers respect CAP conditionality, which includes statutory management requirements (SMRs) and good agricultural and environmental conditions (GAECs). Consequently, the IACS is a significant and valuable system for monitoring CAP performance. As mentioned above, to receive direct payments, farmers have to implement standards for good agricultural and ecological conditions (GAECs) under “conditionality”, including one for the protection of wetlands and peatlands (GAEC 2). While GAEC 2 aims to protect carbon-rich soils, the actual requirements remain weak: there is no obligation to halt or reverse degradation, and Member States can ask to delay the implementation of GAEC 2 until 2025. Overall, countries lack strong action to safeguard peatlands through GAEC 2, and insufficient data and mapping of peatlands are often named as barriers to its early implementation. Although often overlooked, peatlands deliver important ecosystem services for humans, nature and the planet, and time is pressing to ensure that peatlands are adequately protected, restored, and sustainably managed, as noted in the report by the European Environmental Bureau (2022). Within the framework of IACS Data Sharing, which aims to gather as much information as possible on agriculture in the European Union, this study demonstrates the significance of IACS data in supporting CAP policy, specifically focusing on peatland mapping and monitoring. At European Union level, data availability varies by country, leading to distinct processing approaches in each Member State. In Estonia, multiple datasets have been employed to assess whether IACS spatial data, alongside other complementary data, can aid in peatland mapping and monitoring. This includes agricultural parcel data declared annually by farmers to receive subsidies, which spatially describes each parcel (polygon) and its crop type.
This study focuses on northern Europe, which is richer in peatlands than southern regions. As mentioned before, data pertaining to agricultural parcels and crop type information were utilized in conjunction with peatland maps. This integration facilitated a comprehensive analysis of areas rich in peatland, aiming to accurately map these regions. Challenges in data availability significantly impact the methodology employed in peatland research. The main challenge is that while there are numerous time-series datasets related to agricultural activity (for many Member States), information on peatlands typically lacks periodic updates. Often, there is only a single map available, which may or may not be recent, failing to meet the needs of multi-temporal studies. Additionally, the study incorporates the soil map of Kmoch et al. (2021), explicitly isolating the peatland layer of Estonia. The agricultural parcels describing the type of crop were analyzed as well. Finally, satellite data, specifically Sentinel-1A Single Look Complex (SLC) images acquired in Interferometric Wide Swath (IW) mode with dual polarization (VV and VH), sensitive to vegetation and soil moisture levels, were exploited to enhance the understanding of peatland dynamics. The agricultural parcels were overlaid with the peatland areas to create three different classes: A) agricultural parcels within peatland zones, B) agricultural parcels outside of peatland zones, and C) pure peatland zones without any agricultural land. These three classes were studied using satellite data to extract the signal characteristics of each zone separately.
For each polygon, the mean value and standard deviation of various SAR features were calculated. These data were used to generate a single curve for each type of crop within these categories, e.g. by visualizing the backscatter values for both VV and VH polarizations. In addition, to aid in the interpretation of the graphs, daily meteorological data on precipitation for the study area were included as inputs. The graphs were analyzed in order to identify specific patterns, differences and similarities among various types of land use. Additionally, the study examines whether fluctuations in values, such as drops or peaks, correlate with specific weather events or seasonal transitions (e.g. spring to summer), which may affect vegetation moisture and structure.
REFERENCES
European Parliament, “Direct payments,” accessed November 28, 2024, https://www.europarl.europa.eu/factsheets/en/sheet/109/first-pillar-of-the-common-agricultural-policy-cap-ii-direct-payments-to-farmers.
European Commission, “Biodiversity strategy for 2030,” accessed November 28, 2024, https://environment.ec.europa.eu/strategy/biodiversity-strategy-2030_en?wt-search=yes.
European Environmental Bureau, “Peatlands and wetlands in the new CAP: too little action to protect and restore,” April 2022, https://eeb.org/wp-content/uploads/2022/04/Briefing-Peatlands-and-Wetlands-No-Branding.pdf.
Kmoch A., Kanal A., Astover A., Kull A., Virro H., Helm A., Pärtel M., Ostonen I., Uuemaa E. “EstSoil-EH: a high-resolution eco-hydrological modelling parameters dataset for Estonia.” Earth System Science Data 13 (2021): 83-97.
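The per-class backscatter curves described above amount to a grouped mean and standard deviation over time. A sketch with synthetic values standing in for the real per-parcel Sentinel-1 statistics; parcel counts, dates, and the dB distribution are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stack: mean VV backscatter (dB) per parcel per acquisition
# date, with each parcel assigned to one of the three overlay classes.
n_parcels, n_dates = 30, 12
backscatter = rng.normal(-12.0, 1.5, size=(n_parcels, n_dates))
classes = rng.choice(
    ["A: parcel in peatland", "B: parcel outside peatland", "C: pure peatland"],
    size=n_parcels,
)

def class_curves(values, labels):
    """Mean and standard deviation of backscatter per class and date,
    yielding one temporal curve (with an uncertainty band) per class."""
    curves = {}
    for c in np.unique(labels):
        sel = values[labels == c]
        curves[c] = (sel.mean(axis=0), sel.std(axis=0))
    return curves

curves = class_curves(backscatter, classes)
```

Each resulting curve could then be plotted against acquisition date alongside daily precipitation to look for weather-driven drops or peaks, as the abstract describes.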
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Mapping invasive Prosopis spp. and native wetland vegetation communities in Point Calimere Ramsar Site using Sentinel-2 multiseasonal spectral temporal metrics

Authors: Dr Arasumani Muthusamy, Mr Kumaresan M, Prof Balasubramanian Esakki
Affiliations: Sathyabama Institute of Science and Technology, National Institute of Technical Teachers' Training and Research
Native species in coastal wetland ecosystems face an increasing threat from invasive plants. In India, the Point Calimere Ramsar Site's coastal tropical dry evergreen forests, grasslands, and mangroves are being adversely affected by Prosopis species. This invasion poses a significant risk to numerous avian, mammalian, and amphibian species that depend on these habitats. To restore wetland ecosystems and mitigate further invasions, it is imperative to monitor and track invasive species. This investigation examined the utilization of multi-season Sentinel-2 Spectral Temporal Metrics (STM) for mapping coastal native and non-native vegetation communities. The study employed summer, monsoon, and post-monsoon season datasets with Support Vector Machine (SVM) classification on the Google Earth Engine (GEE) platform. Results indicated that the combination of summer and post-monsoon Sentinel-2 spectral-temporal metrics yielded the highest accuracy (94% overall) for mapping Prosopis, tropical dry evergreen forests, and coastal grasslands. The monsoon dataset proved most effective for mapping mangroves. However, utilizing spectral-temporal metrics from all seasons produced the most favorable average results across all land cover classes. The study also analyzed Prosopis distribution and fragmentation within various landscapes of the Ramsar site using Fragstats. Findings revealed that Prosopis is extensively distributed throughout the Point Calimere Wildlife Sanctuary, presenting a substantial threat to local wildlife. We anticipate that this map will be utilized for ongoing Prosopis removal efforts at the study site. This comprehensive approach demonstrates the potential for monitoring Prosopis and native vegetation in coastal tropical wetland habitats using Sentinel-2 STM.
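Spectral-temporal metrics of the kind used here reduce a seasonal image stack to per-pixel statistics that feed the classifier. A sketch with synthetic NDVI values; the metric set (median, quartiles, standard deviation) is an illustrative assumption, not necessarily the study's exact feature set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Sentinel-2 time series: (dates, height, width) NDVI values,
# with each acquisition date tagged by season.
ndvi = rng.uniform(0.1, 0.8, size=(24, 5, 5))
seasons = np.array(["summer"] * 8 + ["monsoon"] * 8 + ["post-monsoon"] * 8)

def spectral_temporal_metrics(stack, labels, season):
    """Per-pixel temporal metrics (median, 25th/75th percentile, std)
    for one season -- the kind of STM features fed to an SVM."""
    sel = stack[labels == season]
    return np.stack([
        np.median(sel, axis=0),
        np.percentile(sel, 25, axis=0),
        np.percentile(sel, 75, axis=0),
        sel.std(axis=0),
    ])

summer_stm = spectral_temporal_metrics(ndvi, seasons, "summer")  # (4, 5, 5)
```

Stacking such per-season metric bands (summer + post-monsoon, say) reproduces the kind of multi-season feature combinations compared in the abstract.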

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: B.04.01 - POSTER - Satellite based terrain motion mapping for better understanding geohazards

Better understanding geohazards (such as landslides, earthquakes, volcanic unrest and eruptions, coastal lowland hazards and hazards from inactive mines) requires measuring terrain motion in space and time, including at high resolution, with multi-year historical analysis and continuous monitoring. Several EO techniques can contribute, depending on the context and the type of deformation phenomena considered, and some can provide wide-area mapping (e.g. thanks to Sentinel-1). Advanced InSAR or pixel offset tracking using radar imagery, including newly available missions with different sensing frequencies (e.g. L-band), can provide relevant geoinformation. The same is true of optical stereo-viewing and optical correlation techniques, including for wide-area mapping. There is a need to assess new EO techniques for retrieving such geoinformation both locally and over wide areas, and to characterise their limitations. New processing environments able to access and process large data stacks have increased user awareness, acceptance and adoption of EO, and have created opportunities for collaboration, including co-development and increased combination of data sources and processing chains. With this in mind, it is necessary to understand the agendas of geohazard user communities and the barriers to reaching their goals.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Identifying Deformation Onset Timing at Socompa Volcano, Chile, Using Breakpoints in InSAR Time Series

Authors: Benjamin Kettleborough, Dr John Elliot, Susanna Ebmeier
Affiliations: COMET, School of Earth and Environment, University of Leeds
Sudden onsets or changes in deformation at an active volcano are important targets for volcano monitoring. Pressure changes from increased magmatic activity or gas build-up and release can be an early indicator of volcanic eruptions or give insight into the subsurface magmatic system. For inaccessible or poorly monitored volcanoes, satellite-based Interferometric Synthetic Aperture Radar (InSAR) provides an ideal tool to monitor this deformation remotely. The availability of ESA’s systematically acquired Sentinel-1 imagery has allowed near-global automatic processing and analysis of onshore deformation since its launch in 2014. This has facilitated the detection of deformation at volcanoes with historical records of eruption or unrest, enabling early warnings of eruptions or greater understanding of the deformation mechanism and, hence, the hazard a volcano poses. Here we explore breakpoint analysis as a tool to automatically identify changes from baseline deformation behaviour. A breakpoint is the point at which a time series breaks from the status quo, whether through a gradient change or a discontinuity. Furthermore, breakpoint analysis can allow for the retrospective identification of the timing of deformation onset, helping to understand the potential cause of deformation and its possible triggers. For example, breakpoint analysis has recently been used to investigate volcano-volcano interactions in Iceland and, alongside analysis of cross-correlation between thermal and deformation data, has been used to identify causes of inflation at Domuyo volcano, Argentina. Here, we apply breakpoint analysis to the first geodetically recorded deformation at Socompa Volcano, Chile. Socompa is a stratovolcano at the eastern edge of the Atacama Desert, on the border with Argentina. Socompa’s last recorded eruption was 7,200 years ago and it had been assumed to be quiescent, with no evident deformation over the prior 28 years.
There is, however, evidence of magmatic activity, with hotspots near the summit, fumaroles, and hot springs at Socompa Lagoon to the south and on the Quebrada del Agua fault to the east. In 2020 there was a relatively deep (112 km) intraslab magnitude 6.8 earthquake 126 km to the north. Around the same time, Socompa started uplifting at a rate of 17.5 mm/yr, having shown no deformation in InSAR studies covering Socompa since 1992. Previously published breakpoint analysis, using Markov Chain Monte Carlo methods (MCMCs) to fit a piecewise linear model to Global Navigation Satellite System (GNSS) data, found that the uplift at Socompa originated in November 2019, 197 ± 12 days before the magnitude 6.8 earthquake. There has since been a magnitude 7.4 earthquake in 2024, 151 km away, which we will use to investigate any change in Socompa’s deformation. We are establishing alternative methods, including Bayesian Changepoint Detection and the Autoregression-based Change Finder method, which we will test against existing approaches in terms of accuracy and sensitivity. These are promising because they preserve a distribution of possible breakpoints, as with MCMCs, allowing uncertainty to be quantified, while being computationally less intensive. Additionally, they do not require an exact parametrised prior model, allowing application to individual pixels without being constrained by assumptions about the deformation history. This permits the full exploitation of InSAR observations to gain greater insight into the spatial-temporal variations in deformation. These methods will be used to investigate and confirm the onset time of deformation for the 2019/20 onset, as well as to detect any possible deformation change occurring after the 2024 earthquake. Further, we plan to use independent remote sensing datasets to investigate any changes in hotspot temperatures and edifice-wide median thermal anomalies.
This is to pinpoint the timing of the activation of any magmatic systems and to use the correlation and lag between deformation and thermal data to investigate the mechanism of deformation. We plan to apply these methods to the deformation of volcanoes in the automatically processed COMET LiCSAR portal (https://comet.nerc.ac.uk/comet-volcano-portal/) and to investigate spatial and temporal links to nearby volcano-tectonic events for the region.
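As a toy illustration of the breakpoint idea — not the authors' MCMC or Bayesian implementation — the sketch below scans candidate break dates in a synthetic LOS time series and keeps the one minimising the residuals of a two-segment linear fit. The rates, dates, and noise level are invented, loosely echoing the 17.5 mm/yr uplift discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 600, 6.0)              # days, roughly Sentinel-1 revisit
true_break = 360.0                      # synthetic onset of uplift
rate = 17.5 / 365.0                     # mm/day, cf. the 17.5 mm/yr above
los = np.where(t > true_break, (t - true_break) * rate, 0.0)
los += rng.normal(0.0, 0.8, t.size)     # mm of measurement noise

def sse_piecewise(tb):
    """Residual sum of squares of a two-segment linear fit broken at tb."""
    sse = 0.0
    for m in (t <= tb, t > tb):
        if m.sum() > 2:
            coef = np.polyfit(t[m], los[m], 1)
            sse += np.sum((los[m] - np.polyval(coef, t[m])) ** 2)
    return sse

candidates = t[5:-5]                    # keep a few points in each segment
best = candidates[np.argmin([sse_piecewise(tb) for tb in candidates])]
print(f"estimated breakpoint: day {best:.0f} (true: day {true_break:.0f})")
```

The Bayesian and change-finder methods in the abstract return a distribution over break dates rather than this single least-squares estimate, but the fitting target is the same.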

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: The Use of InSAR Data to Identify Areas at Risk of Continuous Deformations Throughout the Country of Poland

Authors: Maria Przyłucka, Michalina Cisło, Zbigniew Perski
Affiliations: Polish Geological Institute - National Research Institute
Poland is a country with relatively stable geology, yet it is still exposed to geohazards such as landslides, ground subsidence and uplift associated with mining activities and changes in the hydrogeological conditions of the subsoil, as well as floods and induced seismic events. As the country's land area is large (312,696 km²), remote sensing methods are a useful tool for detecting and monitoring changes in the land surface. The advancement of techniques and the increased availability of satellite data over the past decade have opened up new opportunities for the analysis and identification of geological hazards. In our work, we will demonstrate the application of the InSAR satellite interferometry technique to large-scale and long-term geohazard analyses. The collected SAR satellite data from the Sentinel-1 mission enabled the identification of all locations with active continuous deformations occurring across the country. The study involved a geostatistical analysis of over 4 million PSI points and the identification of more than 300 areas affected by ground motions. Based on these analyses, regions where deformations are most significant on a national scale were identified, with the majority linked to the mining of mineral deposits. For selected critical areas (the Upper Silesian Coal Basin, the Lublin Coal Basin and the Legnica-Głogów Copper Belt), additional analyses and InSAR processing using SAOCOM satellite data were carried out to extend the basic information and deepen the research results. Deformations occurring there reach values of decimetres per year, often exceeding a metre in total; consequently, ground movements in the central parts of mining basins are not captured by the PSI technique. The complementary use of PSI and DInSAR techniques overcame this limitation.
By complementing the information with DInSAR processing and analyses of large-scale subsidence caused by years of underground mining, more reliable deformation maps were generated, enabling a better assessment of the actual impact of mining activities on the surface. Such analysis performed for the most endangered areas of the country, against the background of a large-area analysis, allowed for a comprehensive characterisation of ground movements occurring in Poland. The work demonstrates the usefulness of SAR satellite data to support geohazard monitoring over wide areas, including the scale of an entire country such as Poland. The extensive spatial coverage of remote sensing observations provides access to high-risk areas as well as to regions that are otherwise difficult to monitor. The undeniable potential of satellite data has made it possible to uniquely identify all sites of continuous deformation, assess the effects of these movements, identify areas of key importance and initiate predictive analyses to identify areas potentially at risk in the future.
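A minimal sketch of the kind of screening described above: flagging actively deforming locations in a large PSI point cloud with a robust velocity threshold. The point cloud, noise level, and 3-sigma criterion are all illustrative assumptions, not the study's actual geostatistical method.

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative PSI point cloud: LOS velocities (mm/yr) over a stable
# background plus one synthetic mining-induced subsidence cluster.
n = 10_000
vel = rng.normal(0.0, 1.5, n)            # stable background scatter
vel[:300] -= rng.uniform(10, 60, 300)    # subsiding cluster

# Robust sigma from the median absolute deviation (MAD), so the
# outliers themselves do not inflate the threshold.
sigma = np.median(np.abs(vel - np.median(vel))) * 1.4826
active = np.abs(vel - np.median(vel)) > 3 * sigma
print(f"{active.sum()} of {n} points flagged as actively deforming")
```

In a national-scale screening these flagged points would then be clustered spatially into deformation areas; here only the per-point test is shown.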

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Enhanced Atmospheric Correction of InSAR Data Using Variable Tropospheric Layer Heights and Multi-Source Global Ionospheric Maps

Authors: Reza Bordbari, Andy Hooper, Professor Tim Wright, Yasser Maghsoudi
Affiliations: University Of Leeds, University of Exeter
Atmospheric effects significantly impact the accuracy of Interferometric Synthetic Aperture Radar (InSAR) measurements, necessitating precise corrections for both ionospheric and tropospheric delays. This study builds upon existing methods for calculating tropospheric delays directly in the line-of-sight (LOS) direction, introducing the novel concept of variable maximum tropospheric heights to enhance accuracy. By dynamically defining the upper limit of the tropospheric layer, this approach improves tropospheric delay estimation. Additionally, corrections for ionospheric delays are refined using global ionospheric maps (GIMs) from multiple analysis centers. Tropospheric delays in the slant-range direction were estimated using the ERA5 global reanalysis dataset from the European Centre for Medium-Range Weather Forecasts (ECMWF), which provides a spatial resolution of 0.25° and hourly temporal sampling. While previous methods calculate zenith path delays first and then map them to the LOS direction geometrically (e.g., GACOS), such approaches can introduce biases under spatially anisotropic atmospheric conditions, particularly at large incidence angles. In this study, we processed ERA5 data to estimate slant delays directly along the LOS direction. Atmospheric parameters from ERA5 were interpolated temporally and spatially, with cubic splines applied vertically across 37 pressure levels to ellipsoidal heights. Slant-range delays were calculated by numerical integration along the LOS at 50-meter intervals, considering up to 70 different tropospheric layer heights ranging from 4 km to 40 km. The ionosphere extends roughly from an altitude of 60 to 1500 km, with a maximum electron concentration at around 450 km. The total electron content (TEC) of the ionosphere varies with altitude, geographic location, time of day, season, and geomagnetic and solar activity.
To address these variations, global networks of permanent IGS GNSS stations are utilized to generate maps and provide TEC estimates. For this study, we employed eight vTEC products: IGS (International GNSS Service), CAS (Chinese Academy of Sciences), CODE (Center for Orbit Determination in Europe), ESA/ESOC (European Space Agency/European Space Operations Centre), UPC (Universitat Politècnica de Catalunya), NRCan (Natural Resources Canada), and JPL (Jet Propulsion Laboratory) low- and high-resolution products. These products differ in estimation techniques, spatial resolutions, and temporal sampling rates. Additionally, different rescaling factors, ranging from 0.75 to 1, were analyzed to evaluate the proportion of vTEC to consider in ionospheric delay calculations. The methodology is applied to full-resolution and multi-looked Sentinel-1 SAR data over the Antarctic Peninsula and West Turkey test sites, demonstrating its effectiveness in mitigating atmospheric artifacts. Results indicate that incorporating variable tropospheric heights for slant-range delay estimation and leveraging multi-source GIMs provides a robust framework for improving the precision of InSAR measurements in diverse geophysical applications. By applying this approach, the averaged standard deviation of unwrapped interferograms decreased by 28% and 30% for tropospheric and ionospheric effects corrected interferograms, respectively.
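The slant-delay calculation described above — numerical integration along the LOS at 50 m intervals up to a variable tropospheric top — can be sketched as follows. A simple exponential refractivity profile stands in for the interpolated ERA5 fields, and the constants (surface refractivity, scale height, incidence angle) are illustrative assumptions.

```python
import numpy as np

def slant_delay(inc_deg, top_km, step_m=50.0, n0=315.0, h_scale_m=7000.0):
    """One-way tropospheric delay (m): numerically integrate an assumed
    exponential refractivity profile N(h) = n0 * exp(-h / H) along the
    line of sight, sampled every step_m metres of path length."""
    inc = np.radians(inc_deg)
    # Each LOS step of step_m metres gains step_m * cos(inc) in height,
    # so sample heights accordingly up to the chosen tropospheric top.
    heights = np.arange(0.0, top_km * 1e3, step_m * np.cos(inc))
    n_refr = n0 * np.exp(-heights / h_scale_m)   # refractivity, N-units
    return 1e-6 * n_refr.sum() * step_m          # delay = 1e-6 * int N ds

# Effect of the variable tropospheric top discussed above (4 km vs 40 km)
d4 = slant_delay(35.0, top_km=4.0)
d40 = slant_delay(35.0, top_km=40.0)
print(f"35 deg incidence: {d4:.2f} m (4 km top) vs {d40:.2f} m (40 km top)")
```

The real processing replaces the analytic profile with ERA5 pressure-level data splined to ellipsoidal heights; this sketch only shows why the chosen upper integration limit changes the estimated delay.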

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Validation of ICEYE PS-InSAR Using Induced Nonlinear Deformation of Corner Reflectors

Authors: Anurag Kulshrestha, Dr. Valentyn Tolpekin, Mr. Michael Wollersheim, Dr. Qiaoping Zhang
Affiliations: ICEYE Oy
Persistent Scatterer Interferometry (PSI) for surface subsidence monitoring has been well established for spaceborne SAR missions, like ERS, ENVISAT, and Sentinel-1. New space SAR missions have yet to demonstrate precise deformation estimations using PSI techniques. In this endeavour, a campaign was set up to assess the accuracy of estimating surface deformations induced on corner reflectors using ICEYE Ground Track Repeat (GTR) SAR satellites. To facilitate this experiment, we produced a set of specialized, custom-designed corner reflectors that can be manually adjusted to induce vertical movement with a precision of one-eighth of a millimeter. In this campaign, four of those corner reflectors were set up in Calgary, Canada within an area of ~750 square meters. Non-linear deformation trends were induced on three of them while the fourth one was kept as a control. On the first reflector, we induced a periodic function modulated over a linear deformation velocity. This periodic pattern simulates deformations that occur mainly due to periodic thermal expansion or groundwater variation. On the second reflector, we induced a breakpoint model with three piecewise linear velocities. This simulates sudden changes in deformation velocities that have been observed to occur before a mining shaft collapses. On the third reflector, we induced a Heaviside model with three discontinuities occurring over the ground surface. Such discontinuities indicate pre-hazard precursory deformation patterns that can be used to flag impending hazards. These discontinuities were also used to test the phase unwrapping error limits during the PSI processing. The daily reflector adjustment campaign was carried out from November 22, 2023 until March 29, 2024, lasting for a total duration of 129 days. During the adjustment campaign, modelled deformation values were induced on the corner reflectors in the vertical direction. 
In addition, the local temperature was noted to account for any thermal expansion effects. ICEYE’s Spotlight Extended Dwell (SLED) mode images were taken over the reflector area with an almost daily revisit rate. The perpendicular baselines varied within an orbital tube of radius ~4.5 km. The image stack was coregistered and PS-InSAR processing was performed over a patch of ~330 m in azimuth and ~400 m in range direction. A total of 139 PS points were selected, including the pixels over the four corner reflectors. The point over the control corner reflector was chosen as the reference point. A periodogram model was then used to estimate the height and non-linear deformation estimates over the points. The deformation time series over the PS points was then compared with the induced deformation values. To assess the accuracy, the correlation coefficient and the root mean squared error (RMSE) between the induced and PSI-estimated deformation time series were calculated. For the periodic model, the correlation coefficient was 0.99 and the RMSE was 1.57 mm. For the breakpoint model, the correlation coefficient was 0.84 and the RMSE was 1.23 mm. For the Heaviside model, we observed that discontinuities under the unwrapping error limit of a quarter of the X-band radar wavelength were unwrapped correctly. However, it was challenging to unwrap the discontinuities beyond that limit. The correlation coefficient was 0.69 for this case. In conclusion, the experiment showed a high degree of accuracy even for non-linear deformations; however, accuracy was impacted when discontinuities exceeded the unwrapping error limit, which is expected and was the purpose of that part of the experiment. To the best of our knowledge, this was the first attempt to validate PS-InSAR results by inducing non-linear deformation trends on corner reflectors. This experiment establishes that ICEYE data are validated for precise deformation estimation using PS-InSAR techniques.
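The accuracy assessment used above — correlation coefficient and RMSE between induced and PSI-estimated time series — reduces to a few lines of NumPy. The series below are synthetic stand-ins (a seasonal term on a linear trend plus noise), not the campaign data.

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(0, 129)   # cf. the 129-day adjustment campaign
# Synthetic stand-in for the periodic-model reflector: a seasonal
# term modulated over a linear trend (values in mm, invented).
induced = 0.05 * days + 3.0 * np.sin(2 * np.pi * days / 60.0)
estimated = induced + rng.normal(0.0, 1.5, days.size)   # add PSI noise

corr = np.corrcoef(induced, estimated)[0, 1]
rmse = np.sqrt(np.mean((estimated - induced) ** 2))
print(f"correlation: {corr:.2f}, RMSE: {rmse:.2f} mm")
```

The same two statistics, computed against the known induced motion, are what the abstract reports per deformation model (e.g. 0.99 / 1.57 mm for the periodic case).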

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Decade-Long Ground Deformation Analysis from Urban Expansion to Geological Influences Using Sentinel-1 PSI in Cluj-Napoca, Romania

Authors: Péter Farkas, Gyula
Affiliations: Geo-sentinel Ltd
Continuous analysis of ground deformation is crucial for assessing natural hazards and monitoring human-induced activities. This study presents the results of a Persistent Scatterer Interferometry (PSI) analysis of ground deformations in the Cluj-Napoca region, Romania. Cluj-Napoca, the second most populous city in Romania, is situated in a hilly environment on the banks of the Someșul Mic River, making it ideal for such an assessment. Over the past few decades, the city's urbanization has progressed rapidly, more than doubling its area in 30 years. The city's expansion has reached neighboring hills with slopes of up to 26%, which are prone to landslides. The PSI analysis was conducted using over 10 years of Sentinel-1 descending data via the Interferometric Point Target Analysis module of the Gamma software. For interpretation, we integrated local geological information and included a geotechnical perspective. A thorough analysis is necessary due to the presence of various types of deformations, often superimposed, related to mass movements, groundwater pumping, sediment compaction, industrial operations, mining, and earthworks related to road construction. The results are expected to show significant movements in recently built areas at the city's edges, often caused by the combined effects of anthropogenic activities and geological conditions. This study underscores the necessity of local studies, as country- and continent-wide maps, while useful for large-area mapping, may not provide the same level of detail and specificity. By using locally selected references and adjusting parameters to the research goals, our analysis is more up-to-date and tailored to the region and user needs.
Furthermore, our detailed analysis, involving local knowledge, experts, and auxiliary data, provides valuable information regarding the risks, interpretation, origin, and characterization of detected movements. This demonstrates the importance of collaboration between remote sensing and local geotechnical experts to maximize the potential and effectiveness of InSAR data. Accurately mapped and quantified ground deformations can enhance the understanding of geological processes and assess the risks associated with urban development in the area. Detected slope instabilities, subsidence, or uplift can significantly impact the built environment and should be considered in the planning and design of new buildings and infrastructure.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Ground Deformation Detection and Risk Information Service for Slovenia

Authors: Mateja Jemec Auflič, Karin Kure, Ela Šegina, Krištof Oštir, Tanja Grabrijan, Matjaž
Affiliations: Geological Survey Of Slovenia, University of Ljubljana, Faculty of Civil and Geodetic Engineering, GeoCodis Ltd
Effective approaches to reducing landslide risk include the development of methods to identify landslide-prone areas and of risk reduction concepts to mitigate the effects of landslides in these areas. Among the available techniques, landslide monitoring is an essential step in collecting data on landslide conditions (e.g., areal extent, landslide kinematics, surface topography, hydrogeometeorological parameters, and failure surfaces) from different time periods and at different scales, from site-specific to local, regional, and national, to assess landslide activity. In 2023, severe rainfall events triggered more than 8000 landslides in Slovenia. Most of these were shallow landslides and soil slips that primarily caused damage to buildings, infrastructure and agricultural land. Among the numerous registered landslides there are also some with a volume of more than one million m3, which, in addition to damaging buildings, endangered the lives of hundreds of people and even claimed human lives. The EO4MASRISK project aims to fully utilise Sentinel-1 data, evolving from periodically updated ground deformation maps to early mapping and monitoring of landslide activity to increase urban resilience. Optical-based techniques will enable a better understanding of the extent and hazard of landslides. The priority is to support the landslide inventory using the mCube service DisMapper and the GEP-based ALADIM service. The main reason is to use high-resolution optical data (Sentinel-2, Landsat, and Planet data) to map the numerous landslides triggered by the 2023 floods and to monitor significant changes in landslides in release areas. The EO4MASRISK service will help stakeholders and end-users to easily identify moving landslide areas and related potential impacts on built-up areas.
The EO4MASRISK service functionality will provide the following information: (1) Ground deformation time series; (2) Ground deformation yearly velocity map; (3) Landslide activity map (three levels, e.g., low, medium, high); (4) Map of vulnerable elements at risk, e.g. buildings and infrastructure (three levels e.g. low, medium and high).

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Satellite and terrestrial L-band radar interferometry in Alpine environment: insights from slope instabilities in Val Canaria (Switzerland)

Authors: Alessandro De Pedrini, Christian Ambrosi, Andrea Manconi, Dr. Prof. Federico Agliardi, Rafael Caduff, Othmar Frey, Philipp Bernhard, Tazio Strozzi
Affiliations: University of Applied Sciences and Arts of Southern Switzerland SUPSI, Federal Institute of Technology Zurich ETH Z, WSL Institute for Snow and Avalanche Research SLF, University of Milano Bicocca, Department of Earth and Environmental Sciences, Gamma Remote Sensing AG
C-band satellite radar interferometry is commonly employed for regional assessments due to its effectiveness in detecting surface deformation over extensive areas at relatively low cost. However, it faces limitations, such as difficulty in capturing large or rapid displacements and reduced resolution in forested regions, primarily due to the satellite's revisit time and the sensor's wavelength. Recent studies using L-band satellite radar data, including ALOS-2 PALSAR-2 and SAOCOM-1, showed how some of these challenges can be overcome. Moreover, alternative platforms like airborne or terrestrial radar systems offer flexible survey planning tailored to specific site conditions. As part of the MODULATE project (Monitoring Landslides with Multiplatform L-Band Radar Techniques), under ESA’s Earth Observation Science for Society programme, we used the GAMMA L-band SAR system mounted on a car to detect and measure surface displacements in Val Canaria (Canton of Ticino, Switzerland) by means of repeat-pass SAR interferometry. This region is highly prone to rock slope failures on both valley sides, with an overall estimated volume of 80 million m³. The valley has experienced significant collapses, such as on 27 October 2009, when a 380,000 m³ collapse partially dammed the Canaria River. Another large-scale failure on the right side, near the locality of Rütan dei Sass, threatens to dam the river, potentially causing a flood wave that could impact the A2 highway, one of the main north-south transport routes through the Alps. The car-borne interferometric measurements at L-band revealed surface displacements of up to 10 cm between July and September 2024, clearly highlighting the most active areas of the slope. Persistent Scatterer Interferometry (PSI) from various satellite constellations (including Sentinel-1, Radarsat-2, TerraSAR-X, ALOS-2 PALSAR-2, and SAOCOM-1) was unable to detect these movements due to their rapid and irregular behavior.
However, Small Baseline Subset (SBAS) processing of SAOCOM-1 images provided displacements compatible with the car-borne results. Our findings highlight the performance, versatility, and high quality of L-band SAR data obtained from different platforms and show that their use is a valid alternative for monitoring fast-evolving landslides on forested slopes. Our findings can contribute to the effective planning and use of upcoming L-band SAR missions (e.g. NISAR and ROSE-L) for landslide monitoring services.
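The repeat-pass measurements rest on the standard relation between unwrapped phase and LOS displacement, d = -λφ/(4π). The sketch below uses approximate wavelengths (assumed values, not the instruments' exact specifications) to show why one fringe at L-band spans about four times the motion of one at C-band, which is why fast-moving slopes remain trackable.

```python
import numpy as np

def phase_to_los(phase_rad, wavelength_m):
    """Convert unwrapped repeat-pass interferometric phase to LOS
    displacement in metres (two-way path, hence the factor 4*pi)."""
    return -wavelength_m * phase_rad / (4.0 * np.pi)

# One full fringe (2*pi of phase) corresponds to lambda/2 of LOS motion
for band, lam in [("C-band (~5.6 cm)", 0.056), ("L-band (~23.6 cm)", 0.236)]:
    fringe_mm = abs(phase_to_los(2.0 * np.pi, lam)) * 1000.0
    print(f"{band}: one fringe = {fringe_mm:.1f} mm of LOS displacement")
```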

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Application of L-band SAOCOM-1 satellite data for sinkhole formation research

Authors: PhD Wojciech Witkowski, Artur Guzy, Magdalena Łucka, Xingyu Zhang
Affiliations: AGH University of Krakow
Sinkholes are a type of discontinuous ground surface deformation that occurs on a macroscopic scale all over the world. They are estimated to affect roughly 20% of the global population, and the phenomenon can occur naturally or as a result of human intervention in the rock mass's initial state of equilibrium. In the latter case, the voids in the rock mass are directly related to human activity, such as catacombs or mining. Simultaneously, the energy transition processes of many countries result in the closure and flooding of underground coal mines, leading to an increasing number of observed discontinuous deformations such as sinkholes. In these areas, the land surface is frequently highly urbanised. Therefore, sinkholes pose a direct threat to human life and negatively impact surface infrastructure. In this study, we investigated the potential of using relatively new SAR data from the SAOCOM mission to monitor land surface movements in areas affected by sinkholes. The usefulness of L-band data was validated in the region with the highest risk of discontinuous deformations. Twelve acquisitions were available from May 2023 to November 2023. For comparison purposes, data from the Sentinel-1 mission were also analyzed for the same time period. Our study was conducted in the region of the underground mine ZGH “Bolesław”, located close to the city of Olkusz in Poland. The mine ceased extraction at the end of 2020. One year later, in 2021, the pumps dewatering the rock mass were turned off, which began the process of rebuilding the water table. On the one hand, this started the process of uplift of the ground surface, but at the same time the formation of discontinuous deformations intensified. Our research concerned specifically small-scale deformations related to the sinkhole formation process. The study focused on the analysis of two aspects related to satellite radar interferometry (InSAR).
The first was the analysis of signal coherence for the L-band and C-band, and the second was the detectable scale of deformation. Coherence analysis for the L-band was performed for multi-looking with factors of 1x2 and 3x6. For 16-day and 32-day temporal baselines, the average coherence was observed to be higher than for the Sentinel-1 data. All the average values for the L-band were above 0.5, while for the C-band this was the case only for single periods. The second aspect of the research was to analyse the ground deformation field using the small baseline subset (SBAS) approach. The obtained displacement velocities ranged from -65 mm/year to +45 mm/year. The results for the L-band data showed groups of points with a significant deformation signal that strongly correlates with the location of zones of possible occurrence of discontinuous deformations. In general, the SAOCOM-1 dataset can be effectively used for monitoring land surface movements related to sinkhole formation. At the same time, the results obtained from the L-band with higher spatial resolution confirm the movement patterns obtained from the C-band, which has a lower spatial resolution. Our study analysed the new SAR L-band sensor to determine potential ground surface movements resulting from local discontinuous deformations. The obtained results could be used for future research and may find application in engineering practice and risk management.
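The multi-look coherence compared above is the standard sample estimator evaluated over an averaging window. The sketch below applies it to synthetic co-registered SLC patches with a known true coherence; the data, window sizes, and coherence level are illustrative assumptions standing in for the 1x2 and 3x6 multi-looking factors.

```python
import numpy as np

def coherence(s1, s2, looks_az, looks_rg):
    """Sample coherence magnitude of two co-registered complex SLC
    patches, estimated over non-overlapping looks_az x looks_rg blocks."""
    a = (s1.shape[0] // looks_az) * looks_az
    r = (s1.shape[1] // looks_rg) * looks_rg
    def boxsum(x):
        return x[:a, :r].reshape(
            a // looks_az, looks_az, r // looks_rg, looks_rg).sum((1, 3))
    num = np.abs(boxsum(s1 * np.conj(s2)))
    den = np.sqrt(boxsum(np.abs(s1) ** 2) * boxsum(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(4)
shape = (120, 120)
master = rng.normal(size=shape) + 1j * rng.normal(size=shape)
noise = rng.normal(size=shape) + 1j * rng.normal(size=shape)
slave = 0.9 * master + np.sqrt(1 - 0.9 ** 2) * noise  # true coherence ~0.9

print(f"1x2 looks: mean coherence {coherence(master, slave, 1, 2).mean():.2f}")
print(f"3x6 looks: mean coherence {coherence(master, slave, 3, 6).mean():.2f}")
```

Note the well-known estimator bias: with very few looks (1x2) the estimate is systematically higher than the true coherence, one reason larger multi-looking factors give more reliable averages.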

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Monitoring of flood protection systems with InSAR in Austria

Authors: Vazul Boros, Maciej Kwapisz, Petr Dohnalík, Philip Leopold, Alois Vorwagner, Antje Thiele, Madeline Evers
Affiliations: Austrian Institute Of Technology, Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB
Sustainable natural hazard management is essential to promote the energy and mobility transition as well as the circular economy for urban areas. Urban centers, which are often located near rivers, are particularly at risk of flooding. The occurrence and intensity of this natural hazard is constantly gaining importance due to climate change and increasingly frequent heavy rainfall events. Since the catastrophic floods of 2002 in Austria, a shift towards integrated flood risk management has been observed in the way floods are dealt with, in which the issues of technical flood protection are supplemented by the observation and monitoring of existing protection measures. The importance of flood protection for the region was once again highlighted by the central European floods in September 2024, which were caused by the heavy rainfall generated by storm Boris. The aim of the HoSMoS (HochwasserSchutz Monitoring via Satelliten) research project, which is sponsored by the Austrian Space Applications Programme (ASAP) of the Austrian Research Promotion Agency (FFG), is to investigate the potential offered by satellite-based monitoring of flood protection systems [1]. The project is implemented by the Austrian Institute of Technology, with the assistance of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB from Germany. Our key partner is viadonau, which, as the managing body of the Danube Flood Control Agency, is responsible for flood protection along the Danube in the area of Krems and from Stockerau-Zeiselmauer to the Austrian state border. Based on multitemporal synthetic aperture radar interferometry (InSAR), long-term deformations of the earth's surface can already be monitored under certain conditions. The special feature of this remote sensing method is not only that no sensors need to be attached to the structure, but also that it offers the unique possibility of analyzing data retrospectively, e.g.
for Sentinel data back to 2015. The accuracies currently achieved with InSAR are sufficient for monitoring trends of mass movements or glacier retreat, for example. There are promising results for the use of this technology in the monitoring of bridges, where the accuracy could be increased significantly by compensating for environmental conditions [2]. Currently, the condition of flood protection structures is monitored by means of "close-up inspection", i.e. geodetic surveys conducted with theodolites and personnel along the dams. Only in rare exceptional cases are fully automated total stations with installed prisms or locally referenced GNSS sensors used for permanent surveying. Initial investigations into the use of drones for surveying dams have also revealed various limitations, rendering these methods uneconomical. The innovation of the HoSMoS project consists in investigating the fundamental applicability of InSAR technology to flood protection. The aim is to investigate whether satellite-based monitoring is possible in principle under the special circumstances that typically prevail at such structures. For example, the influence of natural vegetation, construction materials, the presence of roads and paths, and the orientation of linear structures on satellite monitoring is to be investigated. The accuracy achievable with InSAR is to be compared with the requirements for monitoring. Seasonal effects and relevant environmental conditions that require compensation are to be identified. In the long term, satellite-based monitoring holds great potential for flood protection. It would enable the simultaneous monitoring of deformations for many different structures across a large territory, with a higher temporal and spatial resolution than is currently possible. Long-term trends may be recognized through the retrospective evaluation of deformations. 
The definition of warning thresholds would allow the rapid, systematic identification of potentially critical areas and sections that require closer monitoring or inspection.
References:
[1] https://projekte.ffg.at/projekt/5123067
[2] Vorwagner, A., Kwapisz, M., Leopold, P., Ralbovsky, M., Gutjahr, K.H. and Moser, T. (2024), Verformungsmonitoring von Brücken mittels berührungsloser Satellitenradarmessungen. Beton- und Stahlbetonbau, 119: 636-647. https://doi.org/10.1002/best.202400017
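As an illustration of the warning-threshold idea, the sketch below (not part of the HoSMoS project; the time series, rates and the 5 mm/yr threshold are hypothetical) fits a linear line-of-sight velocity to each measurement point and flags points exceeding the threshold:

```python
import numpy as np

def flag_critical(dates_yr, series_mm, threshold_mm_yr=5.0):
    """Fit a linear LOS velocity to each point's time series and flag
    points whose absolute rate exceeds the warning threshold."""
    velocities = np.array([np.polyfit(dates_yr, s, 1)[0] for s in series_mm])
    return velocities, np.abs(velocities) > threshold_mm_yr

# Two hypothetical measurement points over 4 years of acquisitions:
t = np.linspace(0.0, 4.0, 50)
stable = 0.5 * t        # 0.5 mm/yr: well below the threshold
subsiding = -8.0 * t    # -8 mm/yr: should be flagged
v, critical = flag_critical(t, np.vstack([stable, subsiding]))
print(v.round(1), critical)
```

In practice the threshold would be set per structure type and compared against the measurement uncertainty, but the screening logic stays the same.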
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: On the importance of large-scale, continually updated InSAR datasets for geohazard monitoring and mitigation

Authors: Karsten Spaans, Andrew Watson, Sarah Douglas, Tom Ingleby
Affiliations: SatSense Ltd
The improved quality and availability of ground movement data in the last decade has highlighted the exposure of infrastructure to deformation-linked geohazards. Founded in 2018, SatSense has created large-scale ground movement products, covering e.g. the United Kingdom and New Zealand at country scale with Sentinel-1 data, and region-wide products using higher resolution TerraSAR-X and COSMO-SkyMed data. This data is pre-processed and continually updated, allowing unprecedented levels of network-wide monitoring of deformation-linked geohazards. Here, we provide an overview of the SatSense datasets, and explore how this data is used to analyze deformation hazard for a variety of infrastructure and asset monitoring applications. Our datasets across the UK and New Zealand capture deformation from a multitude of geohazards (e.g. volcanic, tectonic, mining-related, shrink-swell, and aquifer discharge) that can affect structures and infrastructure. Landslides are of particular importance for the rail and road networks, with direct damage due to movements of the track or road, and deposition of material due to slides on neighbouring slopes and embankments. When monitoring infrastructure networks, the sheer vastness of the data tends to overwhelm clients and partners alike, making analysis and interpretation difficult. To make the data more accessible, we have developed methods to condense millions of ground movement datapoints into easily interpretable products, to identify key areas of concern, and to aid non-specialist customers in working with our data. For property applications, both residential and commercial, we have developed risk metrics targeted at movements related to the most critical geohazards that may affect them, allowing for rapid and up-to-date evaluation of property risk and exposure. All our data is viewable through our custom-made web portal, allowing non-specialist users to perform geospatial analysis of our data over their area of interest. 
In this contribution, we present examples of our data showing movements affecting infrastructure and assets, in particular landslides affecting railroads, movement along fault lines affecting highways, seasonal shrink-swell affecting buildings, and mining-related movements affecting a variety of assets. We also demonstrate how our data is used to provide actionable metrics to our clients and partners. The continued uptake of large-scale InSAR ground movement datasets, in light of growing and changing geohazards related to climate change, relies on lowering the barrier to use for non-specialists. By pre-processing the data and distilling the most critical information into actionable maps and metrics, SatSense aims to do just that. Our data product provides an up-to-date overview of ground movements due to geohazards at the asset of interest, alongside providing crucial context by monitoring the surrounding area. With a time history going back at least a decade, cyclical patterns can be identified, leading to an improved understanding of long-term risks caused by various ground-deforming geohazards. With the arrival of additional scientific, operational and commercial SAR satellites covering a wide range of resolutions and radar frequencies, we expect the roll-out of large-scale ground movement geohazard products to increase in scope and coverage.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: InSAR for Geotechnical Analysis, Applications and Geohazards

Authors: Regula Frauenfelder, Malte Vöge, Georgia Karadimou, Dyre Dammann
Affiliations: Norwegian Geotechnical Institute, Kongsberg Satellite Services
Interferometric Synthetic Aperture Radar (InSAR) has become an indispensable tool for assessing ground stability and geohazards, offering unparalleled spatial coverage. It is a technique that enables observations of ground motion from space with millimeter-scale precision, supporting assessments of ground stability and risk. Inio is a service that highlights the advantages of InSAR for such applications; here we present recent research by the Norwegian Geotechnical Institute (NGI) and its collaborative partnership with Kongsberg Satellite Services (KSAT), aiming to enhance the value of InSAR for geotechnical applications and geohazards. An increasing number of SAR satellites, both public and commercial, provide a constantly growing archive of data for InSAR analyses, which makes it possible to carry out detailed deformation analyses almost anywhere on Earth. With data reaching many years into the past, in some places back to the beginning of the 1990s, this provides a unique opportunity to map the structural integrity of important infrastructure over long periods of time. The large footprint of SAR images enables the tracking of spatial variations in ground movement over many kilometers. NGI has been at the forefront of applying advanced InSAR techniques, such as Small Baseline Subset (SBAS) and Persistent Scatterer (PS) interferometry, to study geohazards and geotechnical monitoring. KSAT, the world's leading provider of ground network services with a uniquely positioned global ground station network, provides rapid access to all SAR and optical data required for this kind of analysis. The combined research of NGI and KSAT focuses on integrating InSAR data with other geospatial information to improve accuracy and reliability. This multidisciplinary approach not only aids in understanding ground dynamics but also supports the development of effective mitigation measures. 
Transportation, construction, energy, mining and natural hazard management are some of the diverse sectors in which InSAR and this service are applicable. The case examples we have identified as needing further study include the monitoring of infrastructure such as bridges, roads, railroads, tunnels, hydro dams, mine tailings slopes and landslides. All of these can be monitored to detect geohazards such as creep and subsidence, to track ground movements, to evaluate project impacts, and to help monitor risk to operations as well as surrounding areas. Subsiding cities, or geological and natural hazards affecting the population, can be continuously monitored to assess risks to the built environment. Inio uses InSAR to monitor geotechnical conditions and unstable ground associated with construction and infrastructure development, as well as for monitoring throughout the energy sector in support of EOR and CCUS. By detecting early signs of ground movement, InSAR can enable timely interventions, ensuring the stability and safety of construction projects. This proactive monitoring helps mitigate risks and prevents potential failures, thereby safeguarding investments and enhancing the longevity of operations.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: PSI and SBAS Based InSAR Processing of Sentinel-1 Time Series for Assessing Surface Velocity Patterns and Precursor Land Subsidence due to Collapse of Underground Cavities in the State of Qatar

Authors: Dr. Charalampos Kontoes, Stavroula Alatza, Martha Kokkalidou, PhD(c) Nikolaos Stasinos, Katerina-Argyri Paroni, Prof Constantinos Loupasakis, Dr Katerina Kavoura, Dimitris Vallianatos, Dorothea Aifantopoulou, Ismat Sabri, Yassir Elhassan, Ali Feraish Al-Salem, Ali Anmashhadi, Elalim Abdelbaqi Ahmed, Umi Salmah Abdul Samad
Affiliations: National Observatory of Athens, Institute for Astronomy and Astrophysics, Space Applications and Remote Sensing, Center BEYOND for EO Research and Satellite Remote Sensing, National Technical University of Athens, School of Mining and Metallurgical Engineering, Laboratory of Engineering Geology and Hydrogeology, EDGE in Earth Observation Sciences, STS Survey Technologies, Ministry of Municipality
Sinkholes constitute the main geohazard affecting geotechnical and infrastructure projects in Qatar. These phenomena are strongly linked to long-term ground deformation observed at the surface of Qatar due to land subsidence. Most of them are related to the differential dissolution of gypsum interbedded within the lower subsurface geological layers. The mild relief and the land use coverage of the country enable the accurate detection of deforming areas through multi-temporal SAR interferometry methods such as the PSI and SBAS techniques, which are robust methods with mm/year-level accuracy. Deformation phenomena in Qatar, and specifically in the urban periphery of Doha and its surroundings, are detected and monitored through an integrated research approach combining continuous multi-year InSAR processing with field investigations. Using Sentinel-1 data from 2016 to 2024, Line of Sight (LOS) displacements are estimated for the entire State of Qatar at selected scatterers that exhibit point-like scattering behavior in time. The deformation histories at the locations of these scatterers are also produced, providing evidence for potential non-linear deformations occurring in the country. Sentinel-1 images of both descending and ascending tracks are employed (> 850 images in total). Observations from both satellite passes enable the decomposition of LOS displacements into Up-Down and East-West motion components. The creation of the InSAR stack was performed with the open-source platform ISCE, and the PSI analysis with StaMPS. Both software packages are customized to increase processing capacity through parallelization techniques developed at the Center for Earth Observation Research and Satellite Remote Sensing BEYOND of the National Observatory of Athens. All SAR displacement layers will be securely hosted within NOA/BEYOND's ArcGIS Enterprise installation on our premises, a platform designed for enterprise-level geospatial data management. 
ArcGIS Enterprise ensures scalable, secure, and efficient data storage, processing, and management. By utilizing ArcGIS Enterprise as the hosting platform, users benefit from a secure, robust infrastructure for managing all geospatial data. The ArcGIS REST API provides a versatile and powerful means to access and leverage this data for a wide range of applications, from environmental monitoring to urban planning and beyond. The use of NOA's fully automated, parallelized processing chain for InSAR processing enabled the processing of large volumes of EO data covering the State of Qatar and provided valuable insights into the surface deformation phenomena occurring in the country. Negative LOS displacements were identified in a broader area around Doha. Deforming areas identified by the PSI and SBAS (on selected zones) InSAR analysis of Sentinel-1 data were validated by field investigations. During the field visits, several deforming sites were identified both between and at the perimeter of the Dahl Al Hamam and Dahl Duhail sinkholes, as well as at several other sites in the wider Doha region. Thus, a detailed analysis of the deforming sites and of the LOS displacements identified by both ascending and descending Sentinel-1 satellite passes is presented. The implementation of InSAR techniques on Sentinel-1 data for monitoring surface displacements in Qatar enabled national-scale deformation mapping with millimeter accuracy over an eight-year period from 2016 to 2024. Finally, the field investigations performed in the identified deforming areas provided validation of the observed InSAR deformation phenomena in Qatar and additional information about the deformation driving mechanisms. To mitigate risk and enhance preparedness, continuous monitoring of ground deformation phenomena in Qatar is proposed, using SAR data validated by ground-truth investigations, for more accurate deformation mapping.
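The decomposition of ascending and descending LOS velocities into Up-Down and East-West components can be sketched as a small linear inversion (a minimal illustration, not the study's processing chain; the incidence angles and LOS velocities are hypothetical, and the poorly constrained north-south component is neglected, as is common for near-polar orbits):

```python
import numpy as np

# Illustrative incidence angles (radians); real values come from SAR metadata.
theta_asc, theta_dsc = np.radians(39.0), np.radians(34.0)

# Ignoring the north component, each LOS velocity is approximately
#   v_los = v_up * cos(theta) -/+ v_east * sin(theta),
# with opposite east signs for ascending vs descending right-looking geometry.
A = np.array([
    [np.cos(theta_asc), -np.sin(theta_asc)],  # ascending row
    [np.cos(theta_dsc),  np.sin(theta_dsc)],  # descending row
])

v_los = np.array([-3.2, -1.1])  # mm/yr, hypothetical LOS velocities
v_up, v_east = np.linalg.solve(A, v_los)
print(f"up: {v_up:.2f} mm/yr, east: {v_east:.2f} mm/yr")
```

With both geometries available at a scatterer, the 2x2 system is solved per point; a negative v_up corresponds to subsidence.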
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Open-Access Global Ground Deformation Dataset for Tectonic High-Strain Zones Based on Sentinel-1 Interferometry

Authors: Giorgio Gomba
Affiliations: German Aerospace Center (DLR)
The SAR4Tectonics project will deliver openly accessible, global measurements of ground deformation in tectonically active high-strain zones, providing geoscientists with ready-to-use velocity maps and time series. This reduces the need for researchers to process SAR data themselves, allowing them to focus on the deformation analysis. Ground deformation in high-strain areas is important for understanding geological processes in tectonically active regions. These regions, situated near tectonic plate boundaries, are characterized by significant ground deformation and elevated seismic activity, making them critical for geoscientific research and seismic risk assessment. Traditional ground-based methods like GNSS provide limited spatial coverage, often leaving data gaps. InSAR techniques, especially PS/DS analysis, overcome this with millimeter-scale displacement measurements across large areas and high temporal resolution. In the project, we processed 6.5 years of Sentinel-1 SAR data, focusing on areas where the second invariant of the strain rate exceeds 3 nanostrain per year. Using the terrabyte high-performance data analytics platform (a collaboration between the German Aerospace Center DLR and the Leibniz Supercomputing Centre LRZ), we applied the PS/DS technique with the IWAP processor to produce high-accuracy results. Error corrections included ionospheric mitigation via CODE total electron content maps, tropospheric delay correction using ECMWF reanalysis, and solid Earth tide modeling. Vegetation and soil moisture impacts are minimized through a full covariance matrix approach, and GNSS data ensured precise calibration. The SAR data processing is complete, and we are finalizing the publication of the results as an open-access dataset, aiming to make comprehensive ground deformation data readily accessible for scientific discovery and practical applications. 
By providing globally consistent, high-quality deformation products as open-access resources, this initiative aims to reduce the burden of SAR data processing for geoscientists, enabling them to focus on analyzing Earth's dynamic processes. Additionally, it provides a baseline reference for future studies.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Detecting Sinkholes and Land Surface Movements in Post-Mining Regions Utilizing Multi-Source Remote Sensing Data

Authors: Sebastian Walczak, PhD Wojciech Witkowski, Dr. Eng. Tomasz Stoch, Artur Guzy
Affiliations: AGH University Of Krakow
Post-mining regions are particularly prone to unpredictable geological hazards such as land surface movements and sinkhole formation associated with groundwater rebound, even long after mining activities have ceased. Given the rising number of underground mine closures across many European countries, this issue is highly significant, as both land surface movements and sinkholes pose a direct threat to the safety of infrastructure and residents in these vulnerable areas. To address this concern, remote sensing techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Airborne Laser Scanning (ALS) have become increasingly important tools for monitoring these areas, particularly with the growing availability of open-source datasets. In this study, we investigated the potential of integrating open-source InSAR data from the European Ground Motion Service (EGMS) with ALS data obtained from the Polish National Geoportal to monitor land surface movements and detect sinkholes. We also validated the reliability of these datasets by comparing the results with precise geodetic levelling measurements and in-situ observations of sinkhole occurrences. Our study was conducted at the underground hard coal mine “Siersza” located in the city of Trzebinia, Poland. The mine was closed in 2001 after several decades of mining exploitation that resulted in cumulative land subsidence of several meters. Following the closure of the mine, groundwater rebound, land uplift, and sinkholes have been observed. Due to data availability, this study covered the period from 2019 to 2022. The research highlights the complementarity of open-source EGMS InSAR and ALS datasets in monitoring land surface movements and sinkhole occurrences. In the study area both subsidence and uplift were observed, with values ranging from –7 mm to +15 mm per year. 
The EGMS InSAR data demonstrated a strong correlation with precise levelling, indicating high reliability for monitoring land surface movements. In contrast, the low vertical accuracy of the ALS data, approximately ±18 cm, resulted in discrepancies when compared with both precise levelling and EGMS InSAR. For sinkhole detection, we applied several data processing algorithms to the ALS data, including M3C2, low-pass filtering, and raster differencing, with the last approach yielding the most reliable results. ALS enabled precise determination of the centre, diameter, and depth of each sinkhole, which was not possible with in-situ observation alone. However, ALS allowed the detection of only about 59% of the sinkholes identified through field surveys. Due to the uncertainty of the timing data from field surveys and the possibility of sinkholes being buried during the study period, a 100% detection rate could not be achieved. Our analysis revealed that EGMS InSAR-retrieved land surface movements close to sinkholes display a higher standard deviation, suggesting greater variability in land surface movement within approximately 400 meters of the sinkholes. Interestingly, land surface movement was less pronounced near sinkholes and increased with distance from them. Specifically, given that the study area is currently influenced by a dominant land uplift trend, areas closer to sinkholes might experience smaller uplift due to the overlapping effect of local subsidence associated with the forming sinkholes. In general, EGMS InSAR can be effectively used for monitoring large-scale land surface movements of relatively small magnitudes in post-mining environments. However, it is not efficient for inventorying the spatial extent of sinkholes. On the other hand, the findings emphasise the need for higher vertical accuracy of ALS data when monitoring small-scale land surface movements. 
Despite this limitation, ALS proves effective in capturing larger land surface changes related to sinkhole formation. Our study utilized multiple remote sensing datasets to improve understanding of ongoing land surface movements in post-mining areas prone to geological hazards. These insights have valuable implications for future research and practical applications in engineering and risk management.
Keywords: land surface movement, InSAR, ALS, sinkhole, post-mining regions
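The raster-differencing approach that proved most reliable can be sketched as below (a toy illustration, not the authors' processing chain; the 0.5 m drop threshold is a hypothetical value chosen to exceed the ±18 cm ALS vertical accuracy):

```python
import numpy as np

def detect_sinkholes(dem_before, dem_after, min_drop=0.5):
    """Flag cells where the surface dropped by more than `min_drop` metres.

    `min_drop` should exceed the vertical accuracy of the ALS data
    (about ±18 cm in the study) so noise is not flagged as a sinkhole.
    """
    diff = dem_after - dem_before   # raster differencing of two DEM epochs
    mask = diff < -min_drop         # subsidence beyond the noise floor
    return diff, mask

# Hypothetical 5x5 DEMs (metres): a 1 m-deep depression forms in the centre.
before = np.zeros((5, 5))
after = before.copy()
after[2, 2] = -1.0
diff, mask = detect_sinkholes(before, after)
print(mask.sum(), "candidate sinkhole cells, max depth", -diff.min(), "m")
```

On real rasters the flagged cells would then be grouped into connected regions to estimate each sinkhole's centre, diameter and depth, as described above.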
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Understanding the Complexity of Large Alpine Slope Instabilities at Mt. Mater (Valle Spluga, Italy) Using Multiplatform and Multifrequency InSAR

Authors: Federico Agliardi, Tazio Strozzi, Cristina Reyes-Carmona, Katy Burrows, Rafael Caduff, Othmar Frey, Philipp Bernhard, Urs Wegmüller, Andrea Manconi, Christian Ambrosi, Alessandro De Pedrini
Affiliations: University of Milano-Bicocca, Department of Earth and Environmental Sciences, Gamma Remote Sensing, Swiss Federal Institute for Forest, Snow and Landscape Research, University of Applied Sciences and Arts of Southern Switzerland
Large rock slope instabilities are widespread in alpine environments. They influence the long-term topographic and hydrological characteristics of alpine slopes and threaten lives, settlements and infrastructure. These phenomena are characterised by different mechanisms associated in space and time, resulting in heterogeneous displacement patterns, with nested sectors characterized by differential movements and shallow fast instabilities superimposed on deep slow movements. These landslides usually creep over hundreds or thousands of years and can eventually undergo a “slow to fast” transition towards catastrophic collapse. Recognizing this transition is essential for developing the capacity to deal with the related risks. Satellite SAR interferometry has become a major tool for mapping and monitoring surface deformations associated with large landslides, thanks to the extensive and temporally continuous coverage provided by the Sentinel-1 missions. However, the application of C-band InSAR is limited in areas with vegetation, by the significant atmospheric disturbances typical of alpine environments, and by relatively high displacement rates. The interpretation of C-band products is further complicated by the heterogeneity of the displacement patterns and rates of large landslides. A sound characterization of their kinematics and activity thus requires integrating multi-platform and multi-frequency InSAR data, ground-based monitoring data, and strong field geomorphological constraints. In this perspective, a significant contribution is expected from the upcoming L-band SAR missions, and a systematic assessment of the potential of L-band data in large landslide studies is required to prepare future end-users for real-world applications. In the framework of the ESA MODULATE project (MOnitoring lanDslides with mUltiplatform L-Band rAdar Techniques) we studied the Mt. Mater rock slope instability in Valle Spluga (Lombardia, Italy). 
It affects a 1300 m high slope over an area of 3 km2, looming over the Madesimo village and ski resort. The slope instability was first recognised in early PS-InSAR datasets, and has been monitored since 2011 by ARPA Lombardia through periodic Ku-band GB-InSAR and GNSS measurements. Field geomorphological investigations and C-band InSAR products at different temporal baselines (24 days to 1 year; Crippa et al., 2020) allowed identifying the processes underlying the measured movements and providing a conceptual model for their interpretation. An active deep-seated gravitational slope deformation (DSGSD) affects the entire slope, with translational global kinematics and displacement rates <3 cm/yr. The DSGSD hosts two nested large landslides with compound movements and seasonally variable rates of 3-6 cm/yr. In the upper part, scree and periglacial deposits move at faster rates exceeding 10 cm/yr. The spatial heterogeneity and wide range of displacement rates of Mt. Mater limit the capability of Sentinel-1 data to: a) identify differential displacements in vegetated or poorly coherent areas; b) reconstruct temporal trends associated with a potential “slow to fast” evolution. Thus, we processed SAR images provided by spaceborne L-band sensors (ALOS-2 PALSAR-2, 2015-2023; and SAOCOM, 2021-2024) and the carborne GAMMA L-band SAR instrument (October 2024) to obtain ad hoc DInSAR and PSI (SBAS) products. We systematically compared the L-band products (both phase and displacements) to the datasets derived from Sentinel-1 (DInSAR, 2016-2024; PSI, 2015-2020) and Ku-band GB-InSAR (2011-2024), constrained by GNSS and field data. Our L-band products resulted in highly coherent interferograms over temporal baselines of up to three years. This allowed obtaining extremely dense PSI datasets, providing an unprecedented picture of the boundaries and displacement rates of the faster nested sectors, even in vegetated areas at the slope toe. 
While confirming the interpretation of the landslide kinematics and activity provided by previous studies, new L-band data, systematically processed over the last ten years, could outline non-linear displacement trends relevant to hazard assessment. Finally, the availability of multi-LOS (spaceborne and terrestrial) L-band data supports the decomposition of displacement vectors toward a more effective assessment of 3D kinematics.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Advancing Ground Motion Monitoring with the IRIDE NIMBUS Constellation: Development of the Ground Motion Service Segment Domain

Authors: Enrico Ciraci, Francesco Valente, Vincenzo Massimi, Emanuela Valerio, Emanuele Passera
Affiliations: e-Geos S.P.A., TRE ALTAMIRA S.R.L., Planetek Italia S.R.L., NHAZCA S.R.L.
The IRIDE Program, a collaboration between the Italian Government, the European Space Agency, and the Italian Space Agency, represents a transformative effort to advance Earth Observation through innovative upstream, downstream, and service segment development. This presentation focuses on the Ground Motion domain within the IRIDE Service Segment, designed to leverage radar observations from current missions (e.g., Sentinel-1, COSMO-SkyMed, SAOCOM) and the forthcoming IRIDE NIMBUS constellation. This integration will enable national-scale ground motion monitoring with unprecedented spatial and temporal resolution. The Ground Motion Service Segment addresses critical challenges in ground deformation analysis, including monitoring infrastructure stability, landslides, and subsidence. High-precision, high-frequency data products are being developed to enhance geospatial intelligence for public safety, urban planning, and environmental management. In this context, we present the innovative products that will be delivered within the program to support the analysis of ground motion phenomena. These include, for example, a novel algorithm for mapping areas of active deformation using multi-temporal interferometric synthetic aperture radar (InSAR) data, leveraging a density-based clustering approach to automatically identify regions exhibiting significant displacement trends and consistent temporal variations. We will present the progress in developing this service, highlighting critical innovations in radar data processing, integration, and scalability. These include workflows optimized for IRIDE NIMBUS SAR payloads and advanced analytical tools designed to deliver actionable insights to end-users, including automated, big-data-oriented techniques to handle large-scale InSAR datasets, demonstrating a critical step toward developing nationwide monitoring systems for detecting and analyzing ongoing surface deformation. 
The IRIDE Program underscores a commitment to harnessing cutting-edge technology for societal benefit. By advancing the Ground Motion domain, IRIDE demonstrates its potential to revolutionize Earth Observation, fostering resilient and sustainable communities worldwide.
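A density-based clustering step of the kind described, grouping "active" measurement points into candidate deformation areas, might look like this minimal DBSCAN sketch (purely illustrative; the service's actual algorithm, data and parameters are not published in this abstract):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: assign density-based cluster labels (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neigh = np.flatnonzero(np.linalg.norm(points - points[i], axis=1) <= eps)
        if len(neigh) < min_pts:
            continue  # not a core point (may still be claimed by a cluster)
        labels[i] = cluster
        queue = list(neigh)
        while queue:  # expand the cluster through density-reachable points
            j = queue.pop()
            if not visited[j]:
                visited[j] = True
                nj = np.flatnonzero(np.linalg.norm(points - points[j], axis=1) <= eps)
                if len(nj) >= min_pts:
                    queue.extend(nj)
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1
    return labels

# Hypothetical coordinates of InSAR points pre-filtered for significant
# displacement trends: one tight deforming cluster plus scattered noise.
rng = np.random.default_rng(0)
coords = np.vstack([rng.normal(0, 0.1, (30, 2)),   # a deforming area
                    rng.uniform(-5, 5, (10, 2))])  # isolated noisy points
labels = dbscan(coords, eps=0.5, min_pts=5)
print("clusters found:", labels.max() + 1)
```

Clustered points would then be polygonised into "areas of active deformation", while label -1 points are treated as isolated measurements.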
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: RAINFALL, ANTHROPOGENIC ACTIVITY OR THE CHAMOLI FLOOD? WHAT TRIGGERED THE REACTIVATION OF THE JOSHIMATH SLOPE (UTTARAKHAND, INDIA): INSIGHTS FROM MULTI-SENSOR SATELLITE OBSERVATIONS

Authors: Floriane Provost, Bryan Raimbault, Pascal Lacroix, Simon Gascoin, Bastien Wirtz, Kristen Cook, Michael Foumelis, Jean-Philippe Malet
Affiliations: EOST - École et Observatoire des Sciences de la Terre, CNRS / Université de Strasbourg, ITES - Institut Terre et Environnement de Strasbourg, CNRS / Université de Strasbourg, Institut des Sciences de la Terre, Université Grenoble Alpes, Université Savoie Mont Blanc, CNRS, IRD, Université Gustave Eiffel, France, Centre d’Etudes Spatiales de la Biosphère, Université Toulouse 3, CNES, CNRS, IRD, INRAE, France, Department of Physical and Environmental Geography, School of Geology, Aristotle University of Thessaloniki
It has long been recognized that river erosion at the toe of hillslopes can cause slope failures, triggering new or reactivating existing landslides. The impact of extreme flooding on slope stability has been studied only for some specific case studies during the events [1,2], while the impact of floods on longer-term lateral erosion, and its consequences for slope stability, is rarely considered [3,4]. This question is, however, important for our understanding of the impacts of flood events, as flood-driven erosion and slope failures are generally not considered in analyses of flood hazards, leaving potentially vulnerable populations unaware of their risk. Recently, in the Joshimath area, a landslide connected to the Rishiganga River in northern India has attracted a lot of attention [5,6]. Indeed, the slope showed major signs of instability in early 2022, about a year after an extreme flood event occurred in the region following the Chamoli rock and ice avalanche in February 2021 [7]. The triggering mechanisms invoked to explain this reactivation are precipitation rates [3] and/or anthropogenic activities [4]. However, the role of the Chamoli rock and ice avalanche in this reactivation has not been investigated. In this study, we performed a regional analysis of the slope movements along, and in the vicinity of, the Rishiganga River in order to investigate whether or not the Chamoli flood had an impact on the landslides located on the banks of the river. We processed multi-sensor satellite image time series (Sentinel-1, Sentinel-2) using SAR interferometry (InSAR) and offset tracking techniques to measure both slow and fast displacement rates. An archive of Pléiades images was also used to construct time series of Digital Surface Models (DSMs) and estimate the erosion and deposition rates before and after the 2021 Chamoli avalanche. In total, we detected about 20 active landslides along, and in the vicinity of, the Rishiganga River. 
Some of them are located on the banks of the river, others in neighboring catchments that were not affected by the Chamoli flooding. For each active landslide, we analyzed the displacement time series and detected the occurrence and date of an acceleration onset using Principal Component Analysis (PCA). First, we show that the majority of the landslides are characterised by a constant velocity from 2016 to 2024 with no significant acceleration. On the Joshimath slope and the neighbouring eastern slopes, we detect transient accelerations in fall 2021 and in winter 2022-2023. The Pléiades time series confirm the onset of these accelerations and show that the magnitudes are even stronger (> 1 m/year) at the toe of these slopes. These results likely indicate that the landslides directly connected to the Rishiganga River were reactivated after 2021, while the other landslides in the region did not undergo significant reactivation. However, we also detect several reactivations of fast-moving landslides (> 1 m/year) in the Semkora Nala catchment, south of the Joshimath town, in 2021. This catchment is not connected to the Rishiganga River and was not impacted by the Chamoli flood. We discuss the possible mechanisms (e.g. water seepage at the toe, debuttressing due to river-bank erosion) that led to the Joshimath slope reactivation. We show that regional studies with multi-sensor and multi-processing approaches are key to capturing the complete pattern of ground motion in mountainous areas.
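Acceleration-onset detection in a displacement series can be sketched with a simpler alternative to the authors' PCA approach: a two-segment least-squares search for the breakpoint that best splits the series into a slow and a fast regime (illustrative only; the synthetic series and its break at t = 60 are made up):

```python
import numpy as np

def acceleration_onset(t, d):
    """Return the index where a two-segment linear fit best explains the
    series, i.e. a candidate onset of accelerated motion."""
    best = (np.inf, None)
    for k in range(3, len(t) - 3):  # require a few samples per segment
        sse = 0.0
        for ts, ds in ((t[:k], d[:k]), (t[k:], d[k:])):
            p = np.polyfit(ts, ds, 1)
            sse += np.sum((np.polyval(p, ts) - ds) ** 2)
        if sse < best[0]:
            best = (sse, k)
    return best[1]

# Hypothetical displacement series: steady creep, then acceleration at t = 60.
t = np.arange(100, dtype=float)
d = np.where(t < 60, 0.1 * t, 0.1 * 60 + 0.5 * (t - 60))
k = acceleration_onset(t, d)
print("detected onset index:", k, "-> t =", t[k])
```

Real InSAR series are noisy and irregularly sampled, so robust variants (or the PCA-based method the authors use) are preferable in practice.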

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: InSAR.Hungary: the Hungarian InSAR Ground Motion Service and Application

Authors: Bálint Magyar, Ambrus Kenyeres, István Hajdu
Affiliations: Lechner Nonprofit Ltd. - Satellite Geodetic Observatory, Budapest University of Technology, Faculty of Civil Engineering, Department of Geodesy and Surveying
Numerous nationwide and even continental-scale InSAR ground-motion monitoring services have become operational and been published online thanks to advances in wide-area InSAR processing (WAP) techniques. In line with these developments, we present InSAR.Hungary, the Hungarian InSAR Ground Motion Service and Application developed at the LTK Satellite Geodetic Observatory (SGO). InSAR.Hungary provides a nationwide ground-motion solution and is published as an interactive web-based tool serving its clients. The product levels of the service are harmonized with the European Ground Motion Service (EGMS), complementing it with a country-specific and focused solution. We highlight in depth the mitigation of the atmospheric phase screen (APS) and the applied phase unwrapping strategy, demonstrate the characteristics of the production workflows, present the client-oriented features and tools of the service, and discuss the related applications.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: DInSAR Time Series Uncertainty Quantification

Authors: Alessandro Parizzi, Alfio Fumagalli, Alessio Rucci
Affiliations: TRE-Altamira
DInSAR time series have become a reliable tool for monitoring ground deformation over large areas without the need for in-situ instrumentation. While operational service providers leverage extensive SAR constellations to derive deformation time series, the underlying processing is complex and often lacks rigorous uncertainty quantification. Traditional quality indicators, such as coherence and RMSE, provide limited insight into the accuracy of individual time series samples. Two main drawbacks are evident:
• no information is available on the accuracy of a single sample;
• the indicators rely on the local noisiness of the sample to assess quality.
Exploiting local noisiness as a proxy is correct but in general incomplete, since it does not account for systematic errors introduced into the data, for example by the filtering of the atmospheric components. This work addresses these limitations by developing an error propagation model that considers the various noise sources, including clutter, atmospheric effects, and topographic phase compensation, with the goals of:
• improving control over the product's final performance;
• providing add-on information that facilitates the interpretation of the measurements by the final user.
For clutter noise, both point targets (PS) and distributed targets (DS) are considered and treated accordingly: the relation between amplitude and phase variances is used for the former, and bootstrapping techniques for the latter. Since the final time series must be corrected for the topographic contribution to the interferometric phase, the accuracy of this correction is also computed and accounted for. A critical aspect of the model is the effect of filtering the additive atmospheric delay out of the interferometric phase. The impact of this filtering on the uncertainties has been analytically derived and used in the computation of the final error.
This component is important since it introduces a highly covariant error component that cannot be observed in the time series noisiness. Its prominence depends on the size of the observed AOI, since the effect of the atmospheric delay increases when moving away from the spatial reference. This means that for small AOIs clutter/decorrelation effects are dominant, but on large deformation sites the atmosphere becomes the main performance driver in the error budget. This aspect is of particular importance given the increasing demand for nation-wide DInSAR analyses. The framework has finally been extended to 2D decomposed time series retrieved from both ascending and descending data. To achieve this, the error sources are tracked through the processing steps that resample the data onto the same spatial and temporal grid, ultimately projecting them onto the horizontal and vertical directions relative to the Earth's surface. The approach has been implemented and tested on a large set of test sites, both for single line-of-sight (LoS) time series and for 2D ones. The results look promising, highlighting the presence of noisy images as well as the drift generated by the atmospheric filtering uncertainty. Future validation against independent measurements (e.g., levelling, GNSS) will further solidify the approach.
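The key point of the abstract, that atmospheric filtering introduces a covariant error invisible in the per-sample scatter, follows from standard linear error propagation. A minimal numpy sketch, where the filter operator and the exponential atmospheric covariance model are illustrative assumptions (not the authors' model):

```python
import numpy as np

def filtered_covariance(F, C):
    """Linear error propagation: if the filtered series is y_f = F @ y
    and y has covariance C, then Cov(y_f) = F C F^T."""
    return F @ C @ F.T

# Illustrative setup: an exponential temporal covariance standing in for
# the atmospheric delay, and a high-pass filter (identity minus a boxcar
# smoother) standing in for APS removal. Both are assumptions.
n, sigma, tau = 60, 3.0, 5.0          # epochs, mm, correlation length
t = np.arange(n)
C_atm = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
S = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - 3), min(n, i + 4)
    S[i, lo:hi] = 1.0 / (hi - lo)     # 7-epoch moving average
F = np.eye(n) - S                      # residual after APS estimate
C_out = filtered_covariance(F, C_atm)
```

The diagonal of `C_out` alone underestimates the error budget: the off-diagonal terms carry exactly the correlated residual the abstract warns cannot be read from the time series noisiness.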

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Landslide detection through remote sensing and ground truth investigations in Cyprus

Authors: Dr Stavroula Alatza, Constantinos Loupasakis, Kyriaki Fotiou, Alexis Apostolakis, Marios Tzouvaras, Kyriacos Themistocleous, Charalampos Kontoes, Chris Danezis, Diofantos G. Hadjimitsis
Affiliations: National Observatory of Athens, Operational Unit BEYOND Centre for Earth Observation Research and Satellite Remote Sensing IAASARS/NOA, Laboratory of Engineering Geology and Hydrogeology, School of Mining and Metallurgical Engineering, National Technical University of Athens, ERATOSTHENES Centre of Excellence, Department of Civil Engineering and Geomatics, University of Technology
Cyprus is located at the boundary zone of the African, Eurasian, and Arabian tectonic plates and is subject to significant geological activity. This unique geotectonic setting, combined with its mountainous terrain and climatic conditions, makes the island susceptible to various geohazards, including landslides. These events pose serious threats to human lives and infrastructure, especially in regions with steep slopes and unstable geological formations. Troodos Mountain, in the center of Cyprus, is the island's most prominent geological feature, and rockfalls and slides commonly occur there. To investigate landslide phenomena in Cyprus, and specifically in the broader region around the Troodos Mountains, InSAR time-series analysis was performed using Sentinel-1 images from 2016 to 2021. Persistent Scatterer Interferometry (PSI) was implemented on Sentinel-1 images of the ascending satellite pass. InSAR processing was performed with NOA's fully automated, parallelized processing chain, P-PSI, which enabled the processing of large volumes of EO data. Negative Line-of-Sight displacements due to landslide activity are detected in Pedoulas village, with a maximum value of -10 mm/yr. To further analyze surface displacements in the area, vertical displacements were estimated. The observed SAR deformation was validated by ground truth investigations, which verified the remote-sensing-derived deformation phenomena in Pedoulas village and identified extreme precipitation events as the driving mechanism. By analyzing ERA-5 precipitation data along with a time series of vertical displacements, a correlation between extreme precipitation events and shifts in deformation trends is identified. Landslide movements were observed to accelerate during spring and summer, while the phenomena continue at a regular rate during winter.
The present study demonstrated the efficiency of the applied multidisciplinary methodology for the investigation of landslide phenomena in Cyprus. The use of remote sensing techniques on Sentinel-1 data enables the identification of landslides in affected areas, while ground-truth inspections provide valuable insights into the driving mechanisms of landslide phenomena. The proposed strategy establishes a strong foundation for risk mitigation in geologically active regions such as Cyprus.
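The precipitation/deformation comparison described above amounts to correlating a rainfall series with displacement rates over a range of time lags. A minimal illustrative sketch (the function name and lag search are assumptions, and ERA-5 data handling is omitted):

```python
import numpy as np

def best_precip_lag(precip, disp, max_lag=6):
    """Correlate a precipitation series with displacement *rates*
    (first differences of the displacement series, same epochs) at
    lags 0..max_lag; return the lag with the strongest correlation
    and the full correlation curve. Illustrative sketch."""
    rate = np.diff(disp)
    p = np.asarray(precip, dtype=float)[1:]  # align with rate epochs
    corrs = []
    for lag in range(max_lag + 1):
        if lag == 0:
            corrs.append(np.corrcoef(p, rate)[0, 1])
        else:
            corrs.append(np.corrcoef(p[:-lag], rate[lag:])[0, 1])
    corrs = np.array(corrs)
    return int(np.nanargmax(np.abs(corrs))), corrs
```

Correlating against rates rather than cumulative displacement avoids the spurious correlation that any two trending series exhibit.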

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Earth Observation for Subsurface Risk Mitigation: InSAR Diagnostics of Wellbore Failures in the Permian Basin.

Authors: Mark Bemelmans, Tobiasz Bator, Charlie Waltman, Pieter Bas Leezenberg
Affiliations: SkyGeo
Salt Water Disposal (SWD) is known to cause human-induced pore pressure increases in the subsurface. Improper management of the subsurface pressure carries an intrinsic risk that injection fluids, oil, and saltwater leak out of the injection zone and, in some cases, onto the surface. This poses immediate problems for operating sustainability. We use Interferometric Synthetic Aperture Radar (InSAR) to detect, with millimeter precision, the surface displacement footprint associated with these human-induced pressure changes. The Permian Basin has experienced several water-to-surface events as a result of these dynamic pressure changes. Lake Boehmer, for example, was formed by the uncontrolled spilling of water from an abandoned well. In recent years, there have been two well blowouts in the Permian Basin, the Crane County geyser and the Toyah geyser, as well as a well leak near Barstow. The Crane County geyser occurred in Crane County, Texas, where saltwater emerged from an old well in December 2021. This blowout was preceded by surface uplift to the north of the well, which travelled south for several months before finding a weak spot (i.e. the borehole) through which to burst out at the surface. We attribute this blowout to the increase in subsurface pressure caused by SWD several kilometers to the north. Following the blowout, the well was shut in on January 29, 2022. However, immediately after the leaking Crane County wellbore was shut in, the feeder channel started building up pressure, leading to further uplift in the area surrounding the borehole. In December 2023, due to the pressure build-up, a crevice formed in the vicinity of the Crane County well and started leaking salt water. This leakage caused a pressure drop and subsidence. Since January 2024 the subsidence has levelled off, and continued monitoring should reveal whether this development indicates an equilibrium or another pressure build-up.
The Toyah geyser occurred on October 2, 2024, at a dry production well drilled in 1961 close to Toyah in Reeves County. This old well extends to 11,000 feet but is not cased beyond 3,974 feet. The Delaware Mountain Group Formation, used for shallow SWD in the Permian Basin, extends from 3,800 to 6,500 feet deep at this location. This blowout is not associated with a precursory surface displacement signal observable with InSAR. Therefore, we suggest that the Toyah geyser was not caused by a build-up of pressure due to SWD in the area, but was instead the result of a failure of the metal casing between 3,800 and 6,500 feet, the depth range of the formation used for SWD. So, unlike for the Crane County geyser, shutting in and cementing the Toyah well will be an effective long-term solution to stop the leakage in this area. The leak in Barstow occurred on September 2, 2024, and, like the Toyah geyser, was not preceded by InSAR-observable surface uplift. Instead, the area surrounding the leak experienced subsidence at 2.5 cm/yr. Following the leak, the subsidence rate increased to 20 cm/yr. This subsidence is likely caused by the drop in pressure during the leak. We suggest that the Barstow leak was caused by a material failure at depth, resulting in salt water from the Delaware Mountain Group reaching the surface. We are monitoring the area for a reduction in the subsidence rate and a return to equilibrium. Our targeted InSAR analysis of these three events proved effective in gaining insight into the underlying mechanisms responsible for the salt-water geysers in the Permian Basin. This is essential information for formulating mitigation strategies and promoting sustainable operating practices in the Permian Basin. Through careful monitoring of all wells in this area, we help manage operational risks by issuing warnings and determining whether a build-up of subsurface pressure is responsible for potential future leaks and blowouts.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Identifying Triggered/Accelerated Deformation Areas from Early 2023 Extreme Weather Events in Auckland (NZ) using InSAR Advanced Analytics

Authors: Sebastián Amherdt, Miquel Camafort, Núria Devanthéry, David Albiol, Blanca Payas, Eric Audigé, Ross Roberts
Affiliations: Sixense Satellite, Sixense Oceania, Auckland Council
In early 2023, Auckland (New Zealand) was struck by two extraordinary weather events that caused billions of dollars in damage and claimed multiple lives. The first event, on January 27, brought widespread flooding and numerous landslides, marking it as New Zealand's costliest non-earthquake disaster. Just two weeks later, Cyclone Gabrielle struck on February 14, breaking that record with even greater devastation. To support Auckland Council in identifying slope movements potentially triggered or accelerated by these catastrophic events, an InSAR analysis combined with advanced data analytics was conducted across the greater Auckland region. The InSAR processing covered the period from May 2022 to June 2023, utilizing mid-resolution SAR images acquired by the Sentinel-1 satellite (C-band). A total of 36 ascending and 35 descending images were analyzed, enabling a decomposition to extract the true vertical (up-down) and horizontal (east-west) motion components. This analysis produced over 1 million measurement points in both Line-of-Sight (LOS) datasets and more than 800,000 points in the decomposed results. Advanced analytics were then applied to the decomposed datasets to detect clusters of deformation acceleration associated with the extreme rainfall events. The methodology consisted of two steps. First, a time-series segmentation and linear regression analysis were performed to identify points exhibiting acceleration after the rainfall events. Then, an active deformation areas (ADA) algorithm was applied to group points with similar deformation patterns and spatial proximity. This analysis identified slope deformations accelerated by the heavy rainfalls, some of which had previously gone undetected. This work will show the methodology used to identify areas of accelerated or triggered deformation following Auckland's extreme weather events of early 2023.
Examples of the results obtained will be discussed in detail, emphasizing their implications for hazard assessment and risk mitigation in similar contexts. Additionally, insights from this study contribute to understanding the impact of extreme weather events on slope stability and offer a framework for future monitoring and analysis efforts.
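The two-step methodology (per-point segmented regression, then spatial grouping of accelerated points) could be sketched as below. All thresholds, the union-find clustering and the parameter values are illustrative assumptions, not the ADA algorithm actually used:

```python
import numpy as np

def flag_accelerated(t, ts, event, factor=2.0, vmin=0.1):
    """Step 1: fit a line before and after `event` for each time series
    (rows of ts); flag points whose post-event velocity is both above
    `vmin` and more than `factor` times the pre-event velocity."""
    pre, post = t < event, t >= event
    v_pre = np.array([np.polyfit(t[pre], y[pre], 1)[0] for y in ts])
    v_post = np.array([np.polyfit(t[post], y[post], 1)[0] for y in ts])
    return (np.abs(v_post) > vmin) & (np.abs(v_post) > factor * np.abs(v_pre))

def group_ada(xy, flags, radius=50.0, min_pts=3):
    """Step 2: single-linkage grouping (union-find) of flagged points
    within `radius`; groups with at least `min_pts` points form an ADA."""
    idx = list(np.flatnonzero(flags))
    parent = {i: i for i in idx}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a in idx:
        for b in idx:
            if a < b and np.hypot(*(xy[a] - xy[b])) <= radius:
                parent[find(a)] = find(b)
    groups = {}
    for i in idx:
        groups.setdefault(find(i), set()).add(i)
    return [g for g in groups.values() if len(g) >= min_pts]
```

Requiring both an absolute and a relative velocity increase suppresses false positives from stable points whose pre-event velocity is near zero.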

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Austrian ground motion service - just a copy of EGMS?


Authors: Karlheinz Gutjahr
Affiliations: Joanneum Research
Since 2014, the European Copernicus programme has launched a wide range of Earth Observation (EO) satellites, named Sentinels, designed to monitor and forecast the state of the environment on land, at sea and in the atmosphere. The ever-increasing amount of acquired data makes Copernicus the largest EO data provider and the third-biggest data provider in the world. Experts have already shown the data's potential in several new or improved applications and products. Still, challenges remain in reaching end users with these applications and products, i.e. challenges associated with distributing, managing, and using them in the users' respective operational contexts. In order to mainstream the use of Copernicus data and information services for public administration, the nationally funded project INTERFACE was set up in autumn 2022. Since then, the project consortium has been focussing on user-centric interfaces and data standards, with special attention to integrating different data sets and setting up a prototype system that allows the systematic generation of higher-level information products. One information layer within INTERFACE is the so-called Austrian ground motion service, an interface to the SuLaMoSA prototype workflow and to the data provided by the European Ground Motion Service (EGMS). In this paper I will focus on the second aspect: I explain the enhancements with respect to a pure copy of the EGMS data, discuss some findings for Austria and give recommendations to further improve the usability of the EGMS data. The process of enhancing the EGMS data for inclusion in the INTERFACE STAC catalogue involves both spatial and temporal preprocessing.
This includes the merging of the EGMS tiles and spatial slicing of the data to Austria, the temporal alignment and refinement of the EGMS updates into a continuous time series with additional attributes per temporal overlap, as well as the computation of supplementary statistical parameters to enrich the time series dataset. As of October 29, 2024, three updated versions of the EGMS products are available. Version 1 (v1) covers the period from February 2016 to December 2021, version 2 (v2) spans January 2018 to December 2022, and version 3 (v3) covers January 2019 to December 2023. The EGMS update strategy employs a five-year moving window to maximize point density. Analysis of the EGMS ortho product shows that the number of valid points increases from 1.099 million in v1 to 1.266 million in v2 and 1.230 million in v3. Reducing the observation period from six to five years thus raises the point count to approximately 115% and 112% of the v1 value, respectively. Conversely, the temporal combination of v1 and v2 reduces the number of valid points to 1.036 million, while the combination of all three versions decreases the point count further to 0.998 million, reductions of 6% and 9%, respectively, compared to v1, due to the loss of coherent scattering over time. However, this behaviour is not the same for all 18 tiles used to cover the national territory of Austria. There is a clear west-to-east trend: the maximum decrease in point density is found in tile L3_E44N26, covering the area of Tirol, while the minimum decrease is found in tile L3_E47N28, roughly covering the area southwest of Vienna. This effect can likely be explained by the topography and land cover, which change from high-alpine, sparsely populated terrain to moderate rolling topography with a highly urbanised environment.
The extended temporal overlap of four years facilitates a robust merging of the time series under the valid assumption that the mapped points predominantly exhibit the same deformation regime across all time series. Consequently, only a relative shift of the subsequent time series with respect to the preceding one needs to be determined, resulting in a high degree of redundancy. The standard deviation of the residuals between the shifted time series i+1 and time series i was 1.6 mm ± 1.87 mm for the merge of version 1 and version 2, and 1.3 mm ± 1.63 mm for the merge with version 3. Furthermore, the number of outliers per overlap amounted to 8.5 ± 7.1 for the merge of version 1 and version 2, and 10.0 ± 7.5 for the merge with version 3. Finally, to distinguish the predominant deformation regime (seasonal, accelerating, linear, or none), I propose calculating the root mean square error (RMSE) for each of these deformation models; the regime with the minimum RMSE is identified as the best fit. Subsequently, the reliability of this selection can be assessed based on the significance level of the model parameters. This straightforward decision tree would enable potential users to focus on the deformation pattern of interest and exclude the majority of points that do not conform to this pattern. In summary, geographic trends reveal varying point density reductions, influenced by terrain and land cover. A four-year temporal overlap allowed robust time series merging with low residuals and outlier counts. To identify deformation regimes, calculating the RMSE for seasonal, accelerating, linear, or no-deformation models is proposed, enabling user-focused selection of relevant patterns.
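The proposed RMSE decision tree could be sketched as follows. Because the four models are nested, the raw RMSE would always favour the richest model, so this sketch adjusts the RMSE for the number of parameters, standing in for the parameter-significance check the abstract mentions. The model forms and the adjustment are assumptions:

```python
import numpy as np

def classify_regime(t, y):
    """Fit four candidate deformation models (none / linear /
    accelerating / seasonal) and return the best fit by RMSE adjusted
    for model size; `t` is in years so the seasonal period is 1."""
    one = np.ones_like(t)
    designs = {
        "none": np.column_stack([one]),
        "linear": np.column_stack([one, t]),
        "accelerating": np.column_stack([one, t, t**2]),
        "seasonal": np.column_stack(
            [one, t, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)]),
    }
    rmse = {}
    for name, A in designs.items():
        x, *_ = np.linalg.lstsq(A, y, rcond=None)
        ssr = np.sum((A @ x - y) ** 2)
        rmse[name] = np.sqrt(ssr / (len(t) - A.shape[1]))  # dof-adjusted
    return min(rmse, key=rmse.get), rmse
```

Applied per point, such a classification lets users filter the merged Austrian dataset down to, say, only accelerating points.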

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Enhancing DESFA Pipeline Infrastructure Monitoring Through Advanced EO-based Geodetic Imaging

Authors: Michael Foumelis, Dimitra Angelopoulou, Ioannis Gerardis, Elena Papageorgiou, Jose Manuel Delgado Blasco, Paraskevas Frantzeskakis
Affiliations: Aristotle University Of Thessaloniki (AUTh), DESFA O&M Center of Southern Greece
The advancements in geodetic imaging, particularly the evolution and refinement of the Interferometric SAR (InSAR) technique along with its thorough validation, have facilitated the acceptance of the technique for operational applications. These advancements are particularly relevant and valuable for the monitoring of large-scale critical engineering infrastructures. In the context of gas pipeline surveying, spaceborne InSAR has become an essential tool for mapping and monitoring surface motion. Pipelines, as massive constructions extending over large areas, pose difficulties for conventional ground-based monitoring methods. Satellite-based InSAR provides an effective solution, enabling the early identification of pipeline segments that require closer inspection and detecting surface motion indicators that could lead to hazardous conditions, allowing for timely information and preventive actions. A prominent example of such an application is the monitoring of the DESFA pipeline network, the principal natural gas transportation system in Greece, and its associated above-ground installations. Spanning a total length of approximately 1530 km, the DESFA network passes mainly over flat terrain but also includes segments in more rugged and challenging areas. The pipeline, buried at a shallow depth of approximately 1.20 m, is systematically monitored using ground-based methods, optical satellite imagery, and aerial surveys. These traditional approaches focus primarily on detecting signs of unauthorized human intervention or other ground deformation issues. However, ground-based geodetic techniques like GNSS and leveling, or other surface displacement monitoring techniques, face limitations in covering the entire network, making them more suitable for localized areas of known activity or with pronounced deformation signals.
A monitoring system based on Copernicus Sentinel-1 mission data and the Persistent Scatterers Interferometry (PSI) technique has been established to address these challenges. The first phase of this activity focused on examining historical surface motion data starting in April 2015, utilizing the entire Sentinel-1 archive, including both ascending and descending orbital tracks. Prior to the actual interferometric processing, a feasibility analysis was conducted to investigate and anticipate areas with potential limitations, enabling the development of a strategically tailored and targeted plan. Based on an automated processing chain, a dedicated workflow was then designed to generate surface motion rates and corresponding time series at sensor resolution. Measurements extend several kilometers on either side of the pipeline, enabling the detection of deformation signals that could potentially propagate and impact the pipeline. To ensure robustness and proper compensation of the different error sources, the area of interest was divided into several overlapping tiles processed independently. Post-processing included the geometric decomposition of Line-of-Sight (LoS) motion into vertical and East-West components and the separation of temporal trends from seasonal motion, facilitating easier interpretation by expert domain engineers. Key findings, including statistical properties of the measurements, quality indicators and obtained uncertainties, geographic locations showing significant motion or demonstrating proper performance, and their corresponding time series visualizations, are summarized in a structured document format by an automated reporting mechanism. Finally, a layer of human interpretation and validation is incorporated to ensure the reliability and consistency of the measurements and deliverables.
The monitoring system is built to offer continuous updates, incorporating new satellite acquisitions at defined time intervals that are suited to the unique surface motion properties of each pipeline segment. This approach supplements existing monitoring activities, offering a solid and scalable solution for enhancing the safety of a critical pipeline infrastructure, while also being applicable across all aspects of the pipeline network lifecycle. It serves as a complementary tool for proactive monitoring or wide-scale emergency inspections, particularly for assessing the impact of natural or weather-related phenomena that are expected to intensify.
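The geometric decomposition of LoS motion mentioned above reduces to a 2x2 linear solve per point. A sketch under an assumed right-looking geometry convention (heading measured as flight azimuth, clockwise from north), neglecting the north component as is standard for two-track Sentinel-1 decomposition:

```python
import numpy as np

def los_coeffs(inc, head):
    """Projection of (up, east) motion onto the LoS (ground -> satellite)
    for a right-looking SAR; angles in radians. Sign convention assumed:
    up coefficient cos(inc), east coefficient -sin(inc)*cos(heading)."""
    return np.array([np.cos(inc), -np.sin(inc) * np.cos(head)])

def decompose_2d(v_asc, v_desc, inc_asc, inc_desc, head_asc, head_desc):
    """Solve for (vertical, east-west) motion from ascending and
    descending LoS velocities; north motion is neglected because SAR
    has poor sensitivity along the near-polar flight direction."""
    G = np.vstack([los_coeffs(inc_asc, head_asc),
                   los_coeffs(inc_desc, head_desc)])
    v_up, v_east = np.linalg.solve(G, np.array([v_asc, v_desc]))
    return v_up, v_east
```

The decomposition works because the east coefficients of the two geometries have opposite signs, while the vertical coefficients are nearly equal.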

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: VHR SAR Particle Image Velocimetry Analysis for Lava Effusion Rate Estimates at Kadovar Volcano, Papua New Guinea

Authors: Inga Lammers, Simon Plank, Valerie Graw
Affiliations: Ruhr-University Bochum (RUB), German Aerospace Center (DLR), German Remote Sensing Data Center
The monitoring of volcanic eruptions is of high importance for the safety of the general public and the advancement of scientific knowledge. This study investigates the eruption evolution of Kadovar Volcano, located in Papua New Guinea, between October 2019 and June 2022. The main objective is to analyze lava velocities using the pixel-offset technique, employing the Particle Image Velocimetry (PIV) method over the eruptive period. Additionally, the study aims to characterize the eruption dynamics over the observation period. To achieve this, the research focuses on: (i) applying the PIV method to TerraSAR-X (TSX) Synthetic Aperture Radar (SAR) data to measure lava flow velocities while addressing underestimations associated with SAR imaging geometries; (ii) cross-checking the PIV results through comparisons with optical and thermal datasets; and (iii) developing a theoretical model of the eruption's evolution based on the findings. Kadovar Volcano is a Holocene stratovolcano located within the Bismarck Archipelago, north of Papua New Guinea. The island has a width of 1.5 km and a height of 365 m and is characterized by steep slopes. The latest eruption began on January 5, 2018, resulting in the evacuation of residents and the activation of the International Charter "Space and Major Disasters" to provide assistance. The need for high-resolution TSX SAR imagery arises from the small scale of the lava flow and the methodology employed, which requires the recognition of small structures to detect displacements. The remote location of the volcano and the high probability of frequent cloud cover necessitate the use of remotely sensed, weather-independent data. The TSX satellite acquires images in the X-band in different modes with varying resolutions. Here, data in the High Resolution SpotLight (HS) and Staring SpotLight (ST) imaging modes were employed.
A number of preprocessing steps are required for optimal PIV results when using SAR data, including, for example, co-registration of image pairs, image alignment and image enhancement. In a PIV analysis, the algorithm identifies specific particles, in this case the structural features of the blocky lava flow, which are then tracked across images. The application of this methodology to volcanic structures therefore depends on the solidification of the crust and the subsequent formation of surface structures. The movement of these structures is identified by subtracting the mean pixel value of all frames from each individual frame. The tracking of randomly selected particles is conducted using open-source kernelized cross-correlation software, enabling the measurement of displacement between consecutive frames. Additionally, TSX amplitude images were visually analyzed to determine changes in the morphology of the volcano and the size of the lava flow field. The study faces challenges and limitations due to environmental and methodological factors, including limited revisit times, spatial resolution constraints, and SAR-specific issues such as shadowing. To verify the PIV results and enhance understanding of the eruptive dynamics, data from thermal (MODIS and VIIRS) and optical (Sentinel-2 and Landsat-8) satellite sensors were integrated as supporting data. Based on the results of the PIV analysis and the analysis of thermal and optical satellite data, a theoretical model of the eruption's evolution was derived. The observation period can be divided into three distinct phases of activity. The first phase, from October 2019 to October 2020, marks the return of volcanic activity following the onset of the eruption in 2018, with the highest velocity values of up to 5.5 m per day observed between March and June 2020.
The second phase, from November 2020 to July 2021, was characterized by minimal activity, with velocity values approaching zero and a notable reduction in thermal anomalies. The third phase, from August 2021 to June 2022, showed a renewed surge in activity concentrated in the summit region. The visual interpretation of the TSX amplitude data additionally indicated the formation of a new lava dome at the summit. The study demonstrates the effectiveness of VHR SAR in capturing detailed temporal changes in volcanic activity, providing crucial insights into eruption dynamics in otherwise inaccessible regions. These findings illustrate the potential of the PIV method using SAR imagery for effective remote monitoring of active high-viscosity lava flows.
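The particle-tracking core of a PIV analysis is a cross-correlation between successive image patches. A minimal FFT-based sketch for the integer offset (real PIV toolchains add windowing, sub-pixel peak interpolation and outlier rejection, and the study used a kernelized correlation tracker rather than this plain version):

```python
import numpy as np

def patch_offset(a, b):
    """Integer (row, col) displacement of features from frame `a` to
    frame `b`, estimated as the peak of their FFT cross-correlation."""
    a = a - a.mean()
    b = b - b.mean()
    xc = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # map the cyclic peak position to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, a.shape))
```

Applied per interrogation window across the flow, the resulting offset field yields velocities once scaled by the pixel spacing and the acquisition time interval.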

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Hypothesis Testing on a Continental Scale: GPU Based Time Series Classification

Authors: Adriaan van Natijne, Lars Keuris
Affiliations: Keuris Earth Observation
Satellite radar remote sensing has enabled monitoring of natural hazards at a global scale. In recent years, pre-processed deformation datasets have become available to the general audience, in particular with the arrival of the European Ground Motion Service (EGMS) (Crosetto et al., 2021). Thanks to this development, access to InSAR measurements has significantly improved. The EGMS provides both science and society with a quantification of ground motion at the millimeter scale and promotes further analysis of the underlying deformation regimes throughout the European Union, the United Kingdom, Norway and Iceland. To separate (potentially) hazardous regions from their safe surroundings in a uniform and robust manner, one or several models are commonly imposed on time series to extract behavioral statistics such as deformation trends, accelerations and breaks thereof. Unfortunately, the deformation model imposed on time series in the EGMS data is insufficient to describe all possible deformation patterns. The single EGMS model expects a smooth, continuous deformation signal consisting of an offset, velocity, acceleration and periodicity. In contrast, destructive deformation signals can be highly discontinuous and non-smooth, so the EGMS model cannot fully capture them. However, it is difficult to set up an appropriate model without accurate prior knowledge of the nature of the behavior. To accommodate as many potential geohazard-related deformation signals as possible, a more flexible model definition is required. Jumps in the deformation time series are of particular interest, because they point to either erratic processing (e.g. phase unwrapping errors) or potentially hazardous physical behavior (e.g. a sinkhole precursor). Such a jump is commonly modeled by a step function. In addition, abrupt changes in the linear or seasonal deformation rate are equally relevant.
The common approach is to iteratively compose a custom, individual model for each time series. However, due to the large number of time series (several billions) and the large region (Europe) over which they are distributed, this comes at great computational cost. Consequently, promising next steps, such as a unified Europe-wide deformation analysis, cannot realistically take place, and investigative studies based on the EGMS have instead been primarily local. Parallelized GPU processing, however, allows us to impose many models on multiple time series simultaneously rather than iteratively, and can therefore extract model parameters at unprecedented scales. For each time series in the EGMS, a few thousand model variations, including step functions, were fitted. Subsequently, the best model for each time series was selected using traditional hypothesis testing. The proposed, parallelized methodology is applied to all time series within the most recent EGMS dataset. Remarkably, this project was completed on an ordinary consumer laptop at ~20 million models per second. It was found that the standard EGMS model fit can often be improved with more representative models. Moreover, the prescribed standard deviation of 4 mm for the individual EGMS deformation measurements has been underestimated. The systematic, automated model selection improves the quantification of the overall deformation and serves as a classification of the deformation type. This improves the interpretability of the deformation signal and supports the explanatory power of the EGMS altogether. All this will support a wide range of users in pinpointing overlooked anomalies on a dynamic continent. References: Crosetto, M., Solari, L., Balasis-Levinsen, J., Bateson, L., Casagli, N., Frei, M., Oyen, A., Moldestad, D. A., and Mróz, M.: Deformation monitoring at European scale: the Copernicus Ground Motion Service, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B3-2021, 141–146, doi:10.5194/isprs-archives-XLIII-B3-2021-141-2021, 2021.
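The batched fitting described above can be sketched in NumPy: each candidate design matrix (here a base model with offset, velocity and annual periodicity, plus variants adding a step at candidate epochs) is applied to all time series at once through a single pseudo-inverse, and the best model per series is then selected. This is an illustrative sketch, not the authors' GPU implementation: the model set is reduced for brevity, and BIC stands in for the classical hypothesis testing they use.

```python
import numpy as np

def candidate_designs(t, step_epochs):
    """Build design matrices: a base model (offset + velocity + annual
    periodicity) plus one variant with a Heaviside step at each candidate epoch."""
    w = 2 * np.pi / 365.25
    base = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    designs = [base]
    for te in step_epochs:
        step = (t >= te).astype(float)
        designs.append(np.column_stack([base, step]))
    return designs

def fit_and_select(Y, t, step_epochs):
    """Fit every candidate model to every time series at once.

    Y: (n_series, n_epochs) displacement time series.
    Returns the index of the preferred model per series, chosen here with BIC
    (the authors use classical hypothesis testing; BIC is a simple stand-in).
    """
    n_series, n = Y.shape
    scores = []
    for X in candidate_designs(t, step_epochs):
        # One pseudo-inverse per model, applied to all series simultaneously:
        # this is the "many models on many series at once" parallelisation.
        beta = np.linalg.pinv(X) @ Y.T          # (n_params, n_series)
        resid = Y.T - X @ beta                  # (n_epochs, n_series)
        rss = np.sum(resid ** 2, axis=0)
        k = X.shape[1]
        scores.append(n * np.log(rss / n) + k * np.log(n))
    return np.argmin(np.vstack(scores), axis=0)  # best model per series
```

On a GPU the per-model loop would also be batched, but even this NumPy form evaluates each candidate against every series in one matrix product.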
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Processing SAR Images by PHASE: Persistent Scatterers Highly Automated Suite for Environmental Monitoring

Authors: Roberto Monti, Mirko Reguzzoni, Lorenzo Rossi
Affiliations: Politecnico Di Milano
Planet Earth has experienced significant transformations over time, with profound impacts on both natural ecosystems and human-built environments. Geohazards, arising from natural or anthropogenic processes, can pose significant threats to both human safety and the territory. These phenomena, such as landslides, earthquakes, volcanic eruptions, and subsidence, can directly impact critical infrastructure and communities, as well as increase their risk through exposure to cascading effects. Therefore, monitoring geohazards is crucial for mitigating their potentially dramatic consequences, particularly when this can be done reliably and in near real-time. Synthetic Aperture Radar (SAR) technology offers a powerful solution for observing Earth’s surface under all-weather and all-light conditions, overcoming some limitations of optical instruments. While factors such as revisit time and spatial resolution may pose challenges, SAR’s ability to provide extensive spatial coverage makes it invaluable for monitoring the dynamics of geohazards that unfold over broad areas. SAR-based geospatial processed products are already available, with the European Ground Motion Service (EGMS) being a well-known example. It provides Persistent Scatterer (PS) deformation time series, computed from Sentinel-1 data, across the continent. These products have proved reliable for many entry-level applications and analyses where spatial extent dominates, but they lack small-scale applicability due to the coarse spatial resolution of the Level 3 gridded PS product. Moreover, addressing the behavior of deformation-connected phenomena often requires proper modelling of the observed signals through statistical methodologies that are not generally implemented. Therefore, several limitations are inherent to existing products and solutions, making both the processing workflow and the interpretation of results accessible only to SAR experts.
To address these challenges, we developed PHASE (Persistent scatterers Highly Automated Suite for Environmental monitoring), a MATLAB-based software suite designed to automatically perform geospatial analyses on data processed using the Persistent Scatterer Interferometry (PSI) technique. The first module of PHASE automates the entire PSI analysis, exploiting both the SNAP and StaMPS software. The workflow of the second module - dedicated to geospatial processing - begins with deterministic modeling of the deformation time series for each PS using cubic splines. The number of splines is iteratively selected based on the Minimum Description Length (MDL) index, while outliers are removed through a Student's t-test applied to the residuals. After that, the remaining signal undergoes Fourier analysis to identify the principal signal components, with the corresponding harmonics incorporated into the modeled signal. Then, the empirical covariance function of the residual signal is estimated. A significance analysis is performed on this covariance function; if it is significant, residuals are stochastically modelled by collocation, to extract further information from the given time series. Otherwise, the deterministic model is deemed sufficient. This workflow is applied to both 1D and 2D geometries, reflecting the nature of most deformation phenomena. For instance, linear features such as roads and railways fall into the 1D category, while broader and more complex structures like dams, landslides, and volcanic regions are categorized as 2D. Spatial modelling of the displacement through time is also performed. For 1D geometries, each PS is assigned a position along the centerline of the monitored element, and displacements at each epoch are spatially interpolated using cubic splines. For 2D geometries, displacements at each time step are spatially interpolated using bicubic splines.
In both cases, the spline interpolation is evaluated on a uniformly spaced 1D or 2D grid. By accommodating both 1D and 2D geometries, PHASE offers a versatile and robust framework for analyzing a wide range of deformation processes. Its ability to process SAR PSI data in a robust and statistically driven way enables researchers and decision-makers to derive critical insights. This methodology can further allow end users to enhance early warning systems, strengthen infrastructure resilience, and implement measures to protect lives and property. As a comprehensive tool for geospatial analysis, PHASE represents a significant step forward in the effective monitoring and management of geohazards. Limitations of previously available tools have been overcome through extensive automation of state-of-the-art statistical methods, yielding a deformation model that is easily interpretable even by non-SAR experts.
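The spline-selection step of the second module can be illustrated as follows. PHASE itself is MATLAB-based, so this Python sketch is only indicative: the knot placement, the exact MDL formulation, and the subsequent Student's t-test outlier screening used by the authors may differ.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def mdl_spline_fit(t, y, max_interior_knots=8):
    """Fit cubic splines with an increasing number of uniformly spaced interior
    knots and keep the fit minimising a simple MDL-style score
    (n/2 * log(RSS/n) + k/2 * log(n), with k the number of coefficients).
    Illustrative only; the MDL index used in PHASE may take another form."""
    n = len(t)
    best_score, best_fit = np.inf, None
    for m in range(1, max_interior_knots + 1):
        # Interior knots only: drop the two endpoints of the uniform grid.
        knots = np.linspace(t[0], t[-1], m + 2)[1:-1]
        spl = LSQUnivariateSpline(t, y, knots, k=3)
        rss = float(np.sum((y - spl(t)) ** 2))
        k_par = m + 4                       # cubic spline coefficient count
        score = 0.5 * n * np.log(rss / n) + 0.5 * k_par * np.log(n)
        if score < best_score:
            best_score, best_fit = score, spl
    return best_fit
```

In the full workflow the residuals of the selected spline would then be screened for outliers and passed on to the Fourier and covariance analysis described above.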
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Monitoring Linear Infrastructure in Sweden Using InSAR Techniques

Authors: Saeid Aminjafari, Prof Leif Eriksson
Affiliations: Chalmers University of Technology
Sweden’s railway and road networks face increasing risks from ground instability, with severe financial and operational consequences. For example, the Malmbanan railway, a critical route for transporting iron ore, highlights these vulnerabilities: recent derailments resulted in 15 kilometers of damaged track, halting operations for 76 days and causing daily losses of €10 million for the mining company. These risks extend to other railways and highways, e.g. north of Gothenburg, where a large fraction of the transport infrastructure is built on ground with high clay content, and Härnösand, where a derailment occurred after heavy rain. The stability of these linear structures is essential for Sweden’s transport system and economic resilience. Despite these issues, Sweden lacks sufficient studies applying advanced InSAR techniques to monitor railways and roads. The European Ground Motion Service (EGMS), while valuable, primarily uses Persistent Scatterer Interferometry (PSI), which is limited in vegetated and non-urban areas. These limitations hinder its ability to detect deformation over natural terrains and slopes, where distributed scatterers (DS) dominate. EGMS’s standardized spatial resolution further restricts its utility for the localized, high-precision monitoring critical to infrastructure stability. To address these gaps, we adopt a dual approach. Small Baseline Subset (SBAS) InSAR is used to map deformation over non-urban and vegetated areas by leveraging DS, while Persistent Scatterer (PS) data is integrated with SBAS results to create DS+PS maps for enhanced accuracy and spatial density. The project utilizes radar data from the Sentinel-1 (medium resolution) and TerraSAR-X (high resolution) satellites. Ground-based measurements, including GNSS and leveling data, will validate the InSAR results.
For Malmbanan, we have processed 214 Sentinel-1 images from both ascending and descending orbits, generating 330 interferograms to build the 2D deformation network. We employed small single-look (azimuth direction) interferogram processing for high-resolution maps. To mitigate decorrelation, we excluded winter and snowmelt season interferograms, retaining only short temporal baselines. The resulting maps reveal not only deformation along railways and roads but also in their vicinities, offering valuable insights into terrain stability beyond permanent scatterers. This work represents an application of SBAS and DS+PS techniques for Swedish infrastructure monitoring. By addressing limitations in existing systems, our approach supports improved maintenance strategies, reduces risks, and ensures the long-term resilience of critical transport systems.
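Selecting only short-temporal-baseline, snow-free pairs, as described above, reduces to a simple filter over acquisition dates. A minimal sketch, assuming a 48-day maximum baseline and a fixed November-April exclusion window (both values are illustrative; the study used local temperature and snow records rather than a fixed calendar window):

```python
from datetime import date
from itertools import combinations

def select_sbas_pairs(dates, max_baseline_days=48,
                      excluded_months=(11, 12, 1, 2, 3, 4)):
    """Form a short-temporal-baseline interferogram network, skipping pairs
    involving winter/snowmelt acquisitions. The excluded months are an
    assumption standing in for snow-record-based screening."""
    usable = [d for d in dates if d.month not in excluded_months]
    pairs = []
    for d1, d2 in combinations(sorted(usable), 2):
        if (d2 - d1).days <= max_baseline_days:
            pairs.append((d1, d2))
    return pairs
```

The same filter would be run separately for ascending and descending stacks before combining them into the 2D deformation network.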
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: A.02.05 - POSTER - Peatland

Peatlands cover only 3% of the world’s land, mainly in the boreal and tropical zones, but they store nearly 30% of terrestrial carbon – twice the carbon stored in forests. When drained and damaged they exacerbate climate change, emitting two gigatonnes of CO2 every year, which accounts for almost 6% of all global greenhouse gas emissions. The unprecedented observations collected by the Copernicus Sentinel family and other sensors allow new ways to monitor and manage peatlands. Emphasis will be put on advances in improved mapping and monitoring of intact, degraded and cultivated peatlands for conservation, management and restoration in a global and a specific climate zone (e.g. boreal, temperate, tropical) context. This session will showcase some of the most recent key achievements, including methods/algorithms, science and applications.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Assessment of Surface Dynamics of Peatlands Using Sentinel-1 and Meteorological Data

Authors: Dita Lumban Gaol, Philip Conroy, Simon van Diepen, Dr Freek van Leijen, Prof Ramon Hanssen
Affiliations: Delft University Of Technology
Large parts of the Netherlands consist of low-lying coastal and fluvial wetlands, with peat dominating the western and northern regions. Anthropogenic activities, especially water table lowering for agriculture and urban development, have induced land subsidence through peat consolidation, shrinkage, and oxidation, releasing significant CO₂ emissions. Managing and mitigating these effects in peatlands requires a comprehensive understanding of the driving mechanisms and their spatio-temporal variations. Advances in space geodetic techniques, particularly interferometric synthetic aperture radar (InSAR), facilitate surface displacement monitoring by analyzing the InSAR phase over time. While time series InSAR analysis effectively estimates displacement, its precision, accuracy, and representativity are compromised by temporal decorrelation, noise, and dynamic soil movement, especially over grasslands on peat soils. Moreover, loss-of-lock events caused by an irrecoverable loss of coherence disrupt the time series and introduce arbitrary, unintelligible phase offsets (Conroy et al., 2023a). These events should be identified to prevent misinterpreting phase offsets as displacement. Strategies such as multilooking and using contextual information, e.g. by integrating meteorological data, have improved the reliability of InSAR displacement estimates (Conroy et al., 2024). However, more experience with the efficacy of InSAR-based surface dynamics assessments is required. Here we estimate and analyze surface motion in a regional peat area south of Delft, the Netherlands, with high spatial variability in soil types, using Sentinel-1 data from 2016 to 2022 and the SPAMS (Simple Parameterization for the Motion of Soils) model (Conroy et al., 2023b). SPAMS estimates surface motion parameters based on physical processes and distinguishes between reversible and irreversible subsidence.
The model uses precipitation and evapotranspiration data from nearby meteorological stations, assuming that these factors primarily drive soil movement. The analysis focuses on permanent grassland parcels to exclude non-Lagrangian processes related to crop cycles and plowing. Displacement time series were estimated for contextually homogeneous parcel groups, categorized by soil type and groundwater management zone, to address loss-of-lock events under the assumption that parcels in the same group exhibit similar behavior. The results reveal clear sub-seasonal patterns aligned with precipitation and evapotranspiration cycles. A water surplus from increased precipitation and reduced evapotranspiration causes uplift, while subsidence follows water deficits driven by elevated evapotranspiration and reduced precipitation. The SPAMS model highlights a direct correlation between irreversible subsidence and climatic conditions. Notably, prolonged dry conditions in 2018 led to the highest estimated levels of subsidence, corresponding to a rainfall deficit and high evapotranspiration compared to other years. Subsidence rates also vary across parcel groups with different soil classes. Analysis of parcel groups with at least 20 members reveals significant subsidence in peat-dominated areas, whereas clay soils generally exhibit lower rates. For parcels with a thin clay cover, the SPAMS parameters indicate a lower evapotranspiration factor, increasing sensitivity to precipitation. In addition, these parcels have a smaller irreversible subsidence factor. The water-retaining properties of heavy clay presumably explain these differences, as clay can retain water, keeping the underlying peat saturated and thereby reducing peat consolidation. Mitigating peatland subsidence requires maintaining soil water content, especially during dry periods, to prevent irreversible subsidence while preserving dairy farming operations.
Achieving this balance involves using and updating the SPAMS parameters to monitor potential subsidence events, implement water management strategies, and contribute to mitigating peatland degradation. References: Conroy, P., Van Diepen, S.A., Van Leijen, F.J., Hanssen, R.F., 2023a. Bridging loss-of-lock in InSAR time series of distributed scatterers. IEEE Transactions on Geoscience and Remote Sensing 61, doi:10.1109/TGRS.2023.3329967. Conroy, P., van Diepen, S.A., Hanssen, R.F., 2023b. SPAMS: A new empirical model for soft soil surface displacement based on meteorological input data. Geoderma 440, 116699, doi:10.1016/j.geoderma.2023.116699. Conroy, P., Lumban-Gaol, Y., Van Diepen, S., Van Leijen, F., Hanssen, R.F., 2024. First wide-area Dutch peatland subsidence estimates based on InSAR. In: IGARSS 2024 IEEE International Geoscience and Remote Sensing Symposium, IEEE, pp. 10732–10735, doi:10.1109/IGARSS53475.2024.10642504.
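Conceptually, a SPAMS-like separation of reversible and irreversible motion driven by the meteorological water balance can be caricatured in a few lines. This is a toy illustration only: the published SPAMS model (Conroy et al., 2023b) has its own parameterization and calibrated coefficients, and the functional form and constants below are invented for the sketch.

```python
import numpy as np

def toy_soil_motion(precip, evap, c_rev=0.1, c_irr=0.02):
    """Toy surface-motion model in the spirit of SPAMS: a reversible term
    proportional to the cumulative water balance (P - E) and an irreversible
    term that accumulates only during water deficit. Coefficients and form
    are illustrative, not the published SPAMS equations."""
    balance = np.asarray(precip) - np.asarray(evap)   # daily water balance
    reversible = c_rev * np.cumsum(balance)           # swelling / shrinkage
    deficit = np.where(balance < 0, balance, 0.0)
    irreversible = c_irr * np.cumsum(deficit)         # permanent subsidence
    return reversible + irreversible
```

The key behaviour it reproduces is the one described above: surpluses cause (recoverable) uplift, while deficits leave a permanent subsidence component behind even after the water balance recovers.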
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Integrating InSAR and machine learning to estimate subsidence in deforested and drained tropical peatlands in Central Kalimantan, Indonesia

Authors: Deha Agus Umarhadi, Prof. Florian Siegert
Affiliations: Ludwig Maximilian University of Munich, Remote Sensing Solutions GmbH
Tropical peatlands play a crucial role in the global carbon cycle, storing a large amount of soil carbon. Peatlands are naturally waterlogged, with swamp forests maintaining the anoxic state. However, the majority have faced major degradation and drainage due to land conversion for agriculture and logging, including those located in Indonesia. Once peatlands are drained, carbon dioxide is released in huge quantities due to bacterial decomposition of the plant biomass. Subsidence of the peat layer occurs as a result of peat consolidation, decomposition, and shrinkage due to desiccation. Interferometric Synthetic Aperture Radar (InSAR) has been widely used to monitor land subsidence from space effectively. However, SBAS-InSAR is limited in spatial coverage continuity due to decorrelation, especially when applied to vegetated areas over peatlands. In this study we used SBAS-InSAR and machine learning to capture peat subsidence in a large degraded peatland area in Central Kalimantan, Indonesia. We applied a time-series small baseline subset (SBAS) InSAR analysis using a stack of 45 Sentinel-1 C-band acquisitions (2021-2022). The study area covered Blocks B and C of the ex-Mega Rice Project area. Several regression-based machine learning algorithms were examined, i.e., Support Vector Regression (SVR), Random Forest Regression (RFR), eXtreme Gradient Boosting (XGB), and Light Gradient-Boosting Machine (LightGBM). Predictor maps included land use/land cover (1990, 2000, 2009, 2015, 2020, and 2022), peat depth, distance to peat edge, canal density, distance to canal, year of disturbance, Normalized Burn Ratio (1990, 1995, 2000, 2005, 2010, 2015, 2020, 2022), latest fires, and frequency of fires. Forests and plantation areas were excluded from our analysis as they may contain false estimates from the InSAR analysis.
Our results showed that SBAS InSAR could identify peat vertical change over 79.64% of the study area based on a temporal coherence threshold of 0.25, while the rest was estimated by the machine learning model. Based on the model training and testing, RFR outperformed the other methods with an RMSE of 1.170 cm/year and an R2 of 0.740. Overall, the study area subsided at an average rate of –1.586 cm/year, while uplift was also observed in the southern part. We collected ground dGPS data and subsidence pole measurements to validate the remote sensing-based subsidence rates.
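The gap-filling step, predicting subsidence rates from environmental covariates where InSAR decorrelates, can be sketched with scikit-learn's RandomForestRegressor (the best-performing algorithm above). Feature layout and names are illustrative; in the study the predictors are the stacked raster maps listed in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fill_subsidence_gaps(predictors, insar_rate, coherent_mask, seed=0):
    """Train a random forest on pixels where SBAS-InSAR gave a coherent
    subsidence rate, then predict rates for the decorrelated pixels.

    predictors: (n_pixels, n_features) covariates (peat depth, canal
        density, fire history, ...); insar_rate: (n_pixels,) rates with
    valid values only where coherent_mask is True."""
    rf = RandomForestRegressor(n_estimators=100, random_state=seed)
    rf.fit(predictors[coherent_mask], insar_rate[coherent_mask])
    filled = insar_rate.copy()
    filled[~coherent_mask] = rf.predict(predictors[~coherent_mask])
    return filled
```

Coherent pixels keep their InSAR-derived rates; only the gaps receive model predictions, mirroring the 79.64%/20.36% split reported above.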
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Global Shocks and Disruptions to Scottish Peatlands – Modelling Carbon-Water Interactions and Feedbacks

Authors: Luisa Orci Fernandez, Mathew Williams, Professor Roxane Andersen, Dr Luke Smallman
Affiliations: University Of Edinburgh, University of the Highlands and Islands
Peatlands occupy 25% of Scotland, and they store more than 50% of the soil carbon in the country. Furthermore, the Flow Country in the north of Scotland is the largest expanse of blanket mire in Europe and the largest single terrestrial carbon store in the UK. Scotland has set ambitious net-zero goals, including tree planting and peatland restoration targets. Understanding the interactions between carbon and water cycles in peatland ecosystems is crucial for achieving Scotland's climate mitigation goals. Recent geopolitical disruption to energy and trade has shifted Scotland's political attention to food security and agricultural policy, highlighting the multiple demands on land use. Recent extreme weather has highlighted climate change risks to Scotland’s hydrological system. A crucial knowledge gap is how Scotland's hydrological status and peatland C stocks will adjust under climate and land use change. Hydrological risk is rarely assessed in land use policies, particularly the potential impacts of changes in soil moisture on plant growth, food production, and peatland restoration. This information is vital, as hydrological feedback on terrestrial ecosystems may determine the success of land use policies. In this study we seek to address these knowledge gaps in peatland hydrology and carbon dynamics by applying the CARDAMOM data assimilation framework to calibrate and validate the DALEC terrestrial ecosystem model. We used CARDAMOM to calibrate and validate the DALEC model at a monthly time step using downscaled satellite-based Earth observations of Leaf Area Index (LAI) and Above Ground Biomass, and database values of Soil Organic Matter. To evaluate DALEC performance over organic soils, we then validated our analysis using independent estimates of Net Ecosystem Exchange of CO2 (NEE) from the Auchencorth Moss ICOS eddy covariance tower, located in a lowland blanket bog in Scotland.
Our CARDAMOM-calibrated DALEC captures the overall trend of LAI (R2 = 0.67, RMSE = 0.45 m2/m2), with an uncertainty overlap (0-1) between model and assimilated LAI of 0.60. Our analysis was able to reproduce independent NEE observations with moderate deviation in predicted values (R2 = 0.73 and RMSE = 0.48 gC/m2/day). Our analysis performed less well against the water balance component of soil water content at 30 cm depth (bias = 0.53 m3/m3). In this poster we present our efforts to enhance our water balance analysis, including implementing alternative organic soil hydrology equations.
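The skill scores quoted above (R2, RMSE, bias) follow the usual definitions when comparing modelled series against observations (e.g. DALEC NEE versus the ICOS tower record); a small sketch for reference:

```python
import numpy as np

def validation_stats(pred, obs):
    """R^2, RMSE and bias between a modelled series and observations."""
    pred = np.asarray(pred, float)
    obs = np.asarray(obs, float)
    resid = pred - obs
    rmse = np.sqrt(np.mean(resid ** 2))      # root-mean-square error
    bias = np.mean(resid)                    # mean signed error
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot               # coefficient of determination
    return r2, rmse, bias
```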
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Assessing the Wetness of Peatlands in Sweden Using ALOS-2 L-band Data

Authors: Georgina Page, Dr Armando Marino, Professor Peter Hunter, Jens-Arne Subke, Brian Barrett
Affiliations: University Of Stirling, University of Glasgow
1. Introduction
Peatlands are an important ecosystem that stores large amounts of carbon due to low organic matter decomposition rates caused by their high water table. However, up to 25% of peatlands in Europe are degraded [1] and in need of restoration. The health of bogs can be observed by looking at the water table depth (WTD) and the overall wetness of the bogs to check that the peatland is in a waterlogged condition. Synthetic Aperture Radar (SAR) satellite data can be used to observe the WTD and wetness of bogs remotely, to assess their condition, guide assessment of restoration requirements and validate outcomes following restoration. WTD has been observed using C-band SAR, with correlations found using soil moisture [2] or the backscatter intensity of dual-pol data [3,4]. L-band data, however, has a longer wavelength than C-band and therefore a greater penetration depth, which should allow better monitoring of the WTD. Our work assesses the ability of L-band data from ALOS-2 to monitor changes in the wetness of bogs in Sweden.

2. Methods
2.1 Study area
Three Swedish bogs (Rösjö Mosse, Blängsmossen and sections of the Sydbillingens Platå, located near Skövde) were observed for this study. These three bogs were chosen due to their similarity (raised bogs) to Flanders Moss (near Stirling, Scotland), where previous work has been completed [5]. The wetness of the Swedish bogs was calculated using data from local weather stations that recorded daily precipitation, temperature and snow depth over the observed time period (2020-2022) [6,7,8]. Using the daily temperature, the potential evapotranspiration was calculated for each day at each bog. The wetness was then calculated as a ratio of precipitation and potential evapotranspiration [9]. To get an accurate representation of wetness, the precipitation and potential evapotranspiration were accumulated over several days before the acquisition, to test the best number of preceding days (1, 3, 7, 10 and 30 days).

2.2 ALOS-2
Between 8th August 2020 and 16th April 2022, 20 quad-pol ALOS-2 acquisitions of the Swedish bogs were acquired. The images were first filtered to remove dates where the average daily temperature was below 0.5°C, or where snow depth was recorded at any of the nearby weather stations. The 9 resulting images were calibrated, co-registered and a boxcar filter (9 x 9) was applied. Different variables were used to identify correlations with the wetness of the bog, looking at the absolute wetness ratio for individual dates and at the changes over subsequent dates. The parameters calculated were from the Pauli, Cloude-Pottier and Touzi decompositions and the intensities of HH, HV, VH, and VV. Additionally, the change matrix of the coherency matrix (T₂ - T₁) was computed for quad-pol data, and the change matrix of the covariance matrix for dual-pol data (VV/VH and HH/HV) [10]. From the change matrix, the eigenvalues and eigenvectors were calculated, which represent the greatest changes and the type of scattering being added to or removed from the system between the two observed dates.

3. Results
The strongest correlations relate to changes in the wetness over time and not absolute values for individual dates. For the strongest relationship, the wetness ratio must be accumulated over the previous 30 days. Assessing all the different variables identifies a strong relationship between changes in the surface scattering and the wetness of the bogs, as seen in the RGB Pauli images of the change matrix. However, the strongest relationship is with the highest eigenvalue of the change matrix, either for dual-pol (VV/VH) or quad-pol data. For both the dual- and quad-pol values, the results show that an increase in the eigenvalue correlates with an increase in the change of wetness. For the quad-pol data the R² value was 0.87, while for the dual-pol data R² = 0.88. To further test the results, land map data from Lantmäteriet [11] was used to map forested areas within the bogs, and these sections were masked to ensure the cover area was just the peatland bogs. This improved the R² values slightly, to 0.89 for quad-pol and 0.90 for dual-pol. Overall, the small difference between R² values shows that this methodology could be exploited with dual-pol sensors if the interest is exclusively on wetness indicators. These results show that L-band has strong potential for monitoring the wetness of bogs, yet the results could be improved with in-situ data of the WTD. This would improve understanding of the scattering within the bog itself instead of just looking at changes in the climate. The limited data available for ALOS-2 restricts full research capabilities, and further work using future NISAR data would allow more peatlands to be examined and the health of these bogs assessed.

4. Acknowledgements
Contains data derived from JAXA ALOS-2 products, all rights reserved, provided by EO-RA3 PI No. ER3A2N039; Contains modified Copernicus Climate Change Service information between 2020-2022 (neither the European Commission nor ECMWF is responsible for any use that may be made of the Copernicus information or data it contains); This work was supported by the Natural Environment Research Council via an IAPETUS2 PhD studentship held by Georgina Page (grant reference NE/S007431/1).

5. References
[1] F. Tanneberger, A. Moen, A. Barthelmes, E. Lewis, L. Miles, A. Sirin, C. Tegetmeyer, and H. Joosten. Mires in Europe—regional diversity, condition and protection. Diversity, 13, 8 2021. ISSN 14242818. doi: 10.3390/D13080381.
[2] K. Lees, R. Artz, D. Chandler, T. Aspinall, C. Boulton, J. Buxton, N. Cowie, and T. Lenton. Using remote sensing to assess peatland resilience by estimating soil surface moisture and drought recovery. Science of The Total Environment, 761:143312, 3 2021. ISSN 00489697. doi: 10.1016/j.scitotenv.2020.143312.
[3] M. Bechtold, S. Schlaffer, B. Tiemeyer, and G. D. Lannoy. Inferring water table depth dynamics from ENVISAT-ASAR C-band backscatter over a range of peatlands from deeply-drained to natural conditions. Remote Sensing, 10, 4 2018. ISSN 20724292. doi: 10.3390/rs10040536.
[4] T. Asmuß, M. Bechtold, and B. Tiemeyer. On the potential of Sentinel-1 for high resolution monitoring of water table dynamics in grasslands on organic soils. Remote Sensing, 11, 2019. ISSN 20724292. doi: 10.3390/rs11141659.
[5] B. Sterratt, A. Marino, C. Silva-Perez, G. Page, P. Hunter and J.-A. Subke, "Peatland Water Table Depth Monitoring Using Quad-Pol L-Band SAR," IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 2023, pp. 1469-1472, doi: 10.1109/IGARSS52108.2023.10281800.
[6] Tveito, O.E., E.J. Førland, R. Heino, I. Hanssen-Bauer, H. Alexandersson, B. Dahlström, A. Drebs, C. Kern-Hansen, T. Jónsson, E. Vaarby-Laursen and Y. Westman, 2000, Nordic Temperature Maps, DNMI Klima 9/00 KLIMA, Norwegian Meteorological Institute.
[7] Tveito, O.E., Bjørdal, I., Skjelvåg, A.O., Aune, B. A GIS-based agroecological decision system based on gridded climatology, 2005, Meteorol. Appl., 12, 57-68, doi: 10.1017/S1350482705001490.
[8] Copernicus. Copernicus Climate Change Service, Climate Data Store, (2021): Nordic gridded temperature and precipitation data from 1971 to present derived from in-situ observations. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). https://doi.org/10.24381/cds.e8f4a10c, 2021. [Online, Accessed: 23/04/24].
[9] Bourgault, M-A., Larocque, M., and Garneau, M., "How do hydrological setting and meteorological conditions influence water table depth and fluctuations in ombrotrophic peatlands?" Journal of Hydrology, 2019.
[10] A. Marino and I. Hajnsek. A change detector based on an optimization with polarimetric SAR imagery. IEEE Transactions on Geoscience and Remote Sensing, 52:4781–4798, 2014. ISSN 01962892. doi: 10.1109/TGRS.2013.2284510.
[11] Lantmäteriet, Map 1:50,000 Download, raster, https://www.lantmateriet.se/sv/geodata/vara-produkter/produktlista/karta-150-000-nedladdning-raster/ [Online, Accessed 15/10/24].
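The wetness index used above (precipitation over potential evapotranspiration, accumulated over a window of preceding days) is straightforward to compute. A sketch with a 30-day default window, the accumulation length found to give the strongest correlations:

```python
import numpy as np

def wetness_ratio(precip, pet, window=30):
    """Wetness index per day as the ratio of precipitation to potential
    evapotranspiration, each accumulated over the preceding `window` days.
    Inputs are daily series; the output is aligned to days that have a
    full accumulation window behind them."""
    p = np.convolve(precip, np.ones(window), mode="valid")
    e = np.convolve(pet, np.ones(window), mode="valid")
    return p / np.maximum(e, 1e-6)   # guard against zero PET
```

The same routine re-run with window = 1, 3, 7 or 10 reproduces the sensitivity test of accumulation lengths described in the methods; the eigenvalue analysis itself would use np.linalg.eigh on the change matrix T₂ - T₁.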
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Automated Identification of Potential Peatland Areas in Closed Forest Canopies Through the Detection of Drainage Ditches: A Case Study in Austria

Authors: Oliver Rehberger, Isabella Greimeister-Pfeil, Gerhard Egger, Helmut Kudrnovsky, Gebhard Banko
Affiliations: Environment Agency Austria
Peatlands are vital for climate regulation due to their exceptional carbon storage capacity. By preventing the decomposition of plant matter, peatlands act as natural carbon sinks, mitigating climate change. Moreover, healthy peatlands help regulate water cycles, reducing the risk of floods and play an important role regarding biodiversity. However, when drained for agriculture or other purposes, peatlands release significant amounts of carbon dioxide and methane, contributing to global warming. For Austria, a large number of peatland areas and suspected peatland areas have been known for decades. However, there are still large data gaps especially in dense forests. The manual mapping of unknown or suspected peatland areas is very time-consuming. Satellite-based remote sensing methods could provide support, but they have very limited penetration depths especially over dense canopies. Here we present an alternative approach for the detection of suspected peatland areas in forests, which relies on the use of digital terrain models (DTM). A 1m x 1m DTM is used to detect drainage ditches by applying first a high pass median filter (HPMF) and combining this method with an approach to find local depressions from which trench structures are detected by finding opposite-facing slope. Even if the combination of the two methods improves the trench detection, there are still limitations resulting from the spatial resolution of the DTM, the setting of thresholds for the detection of certain trench widths and the additional detection of terrace structures or trenches along roads. The study is carried out for five regions all over Austria, where the presence of peatlands is highly likely. The resulting maps of approximate ditch length within a given area give a clear indication of where peatlands might be found – even if these are, to a large extent, heavily disturbed by the drainage ditches. 
Although this method cannot directly delineate unknown and undisturbed peatland areas, it does give an initial indication of where they might be located and can significantly accelerate mapping. In this way, it provides a way forward to close gaps in Austria’s greenhouse gas balance, offers opportunities for restoration and serves as valuable information for calculating surface runoff.
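As a rough illustration of the HPMF step described in this abstract, the residual between a DTM and its local median surface flags cells lying below the surrounding terrain. The sketch below is a minimal Python version; the window size and depth threshold are illustrative placeholders, not the study's calibrated values:

```python
import numpy as np
from scipy.ndimage import median_filter

def ditch_candidates(dtm, window=11, depth_threshold=0.15):
    """Flag cells lying below the local median surface (HPMF residual).

    dtm: 2-D array of elevations in metres, e.g. from a 1 m x 1 m grid.
    window: side length of the median window in cells (illustrative).
    depth_threshold: residual depth (m) below which a cell is a candidate.
    """
    local_median = median_filter(dtm, size=window)
    residual = dtm - local_median          # high-pass component
    return residual < -depth_threshold     # boolean ditch-candidate mask

# Toy example: a flat 1 m-resolution surface with a 0.5 m-deep ditch.
dtm = np.zeros((20, 20))
dtm[:, 10] = -0.5
mask = ditch_candidates(dtm)
```

Narrow linear depressions survive the high-pass because the median window is wider than the ditch, so the local median stays at the surrounding terrain level.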
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Integrating Radar and Hyperspectral Data to Assess Ecological, Hydrological and Mechanical Dynamics of a Temperate Peatland.

Authors: Rachel Walker, Professor David Large, Professor Doreen Boyd
Affiliations: University of Nottingham
Combining different methodologies and datasets can improve understanding of peatland dynamics regarding the interrelationships between ecology, hydrology and mechanics. This is challenging due to the different temporal, spectral and spatial resolutions of the data resources, so typically these are researched either in isolation or in combination with one other measure. Here we present an assessment of peat condition using a combination of InSAR and hyperspectral datasets. Radar data from Sentinel-1 was collected at a high temporal resolution; however, there is limited ground data to analyse in conjunction with it. Hyperspectral data was collected at a range of spatial resolutions and high spectral resolution (EnMAP satellite, piloted airborne, unmanned airborne and ground), enabling the effect of decreasing spatial resolution to be analysed. All data was from the Flow Country, the world’s largest contiguous peatland, which is typically cloudy or wet, limiting hyperspectral data collection and quality, whereas the radar data was not limited by weather or costs. Hydrological changes were monitored using InSAR coherence data from Sentinel-1, with relationships between the satellite data and ground data (soil moisture and groundwater level) assessed using cross-correlation and Pearson’s coefficient. Ground measurements were collected at an eroded and a near-natural site, and comparisons were made between areas at different stages of restoration. Ecology was mapped using machine learning on hyperspectral data, trained using field data in four areas in different conditions (near-natural, restored in 2006, restored in 2015 and eroded). These data were studied in relation to the mechanics of the peatland (bog breathing), which was modelled using InSAR phase data. We found that soil moisture and InSAR coherence demonstrate strong relationships, especially during warmer, drier periods.
Spectral data at the satellite level show how peatlands respond to restoration, but lack species/plant functional type details. Initial analysis suggests that the timing of the surface motion peaks, amplitude and rate of swell/shrink are related to the ecology when split into binary classes (containing Sphagnum/pools or not), and to hydrology in relation to seasonal changes in water loading. Overall, our findings further our understanding of the interrelationships between peatland characteristics.
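A lagged cross-correlation of the kind used in this abstract (relating InSAR coherence to soil moisture via Pearson's coefficient) can be sketched in Python with NumPy. The series below are synthetic stand-ins, not the study's coherence or soil-moisture data:

```python
import numpy as np

def lagged_pearson(x, y, max_lag=5):
    """Pearson correlation between two co-sampled series at integer lags.

    Returns {lag: r}; a positive lag means y trails x by `lag` samples.
    """
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

# Toy series: "coherence" trails "soil moisture" by two samples.
rng = np.random.default_rng(0)
sm = rng.normal(size=100)                           # stand-in soil moisture
coh = np.roll(sm, 2) + 0.1 * rng.normal(size=100)   # stand-in coherence
r = lagged_pearson(sm, coh, max_lag=4)
best_lag = max(r, key=r.get)                        # lag with strongest r
```

Scanning lags rather than computing a single zero-lag Pearson value makes a delayed hydrological response visible as a peak at a non-zero lag.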
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Improved Cerrado wetland mapping – seasonal moisture metrics, terrain information and semantic segmentation

Authors: Felix Beer, Leila Maria Garcia Fonseca, Mateus de Souza Miranda, Hugo do Nascimento Bendini, Dr Vu-Dong Pham, Dr Sebastian van der Linden
Affiliations: Institute of Geography and Geology, University of Greifswald, Partner in the Greifswald Mire Centre, Earth Observation and Geoinformatics Division (DIOTG), General Coordination of Earth Sciences (CG-CT), National Institute for Space Research (INPE)
Peatlands are the carbon-densest terrestrial ecosystem and play an important role in climate change mitigation. In the Brazilian Cerrado, the largest neotropical savanna region spanning over 2 million km², peatlands are part of an extensive biome-wide headwater network. They may store up to 13 % of the total Cerrado carbon stock (above-ground and below-ground carbon, soil carbon) on less than 1 % of the total area, and they play a crucial role in supplying water to Brazil’s main river systems. Palm swamp savanna (Veredas), wet grasslands (Campos limpos úmidos) and gallery forests (Matas de Galeria) are typical vegetation types on organic soils. Land degradation following agricultural land use and climate change leads to direct and indirect negative impacts on and degradation of peatlands and other wetlands in the Cerrado. Degradation includes drying, soil degradation and carbon loss due to fire, vegetation change and erosion, amongst others. A good understanding of peatland and permanent wetland distribution is essential for all further assessments that are urgently needed in that context, e.g. regarding carbon stocks, carbon emissions and degradation. However, existing uncertainties in wetland distribution and area stem from the challenging delineation of wetlands. The classification of the wetland types palm swamp savanna, wet grassland and/or gallery forest consistently yields the lowest accuracies in a range of recent Cerrado-wide land cover and change mapping approaches that used machine learning (ML), e.g. Mapbiomas or FIP Cerrado, especially because of the small and thin patches of the wetland classes and the gradual transitions in between them. Statistical metrics that depict seasonal variations in the reflectance patterns of vegetation from satellite time series have proven very efficient for land cover mapping and monitoring.
Deep learning (DL) algorithms have proven very effective in wetland remote sensing, with even higher accuracies than regular ML approaches, by considering spatial patterns. Bendini et al. (2021) confirmed the potential for Cerrado wetland mapping in a first study by using freely available Sentinel-2 (S2) inputs to a DL network and showing that added terrain information improves results. Building on that, we combined spectral metrics that are moisture-sensitive and reflect seasonal variability with terrain information in a U-Net, a well-established convolutional network for segmentation tasks. The model was trained and tested in the Jalapão region, eastern Tocantins state, with the protected ecological station Serra Geral do Tocantins (A1) at the centre. To assess the model’s improvement and transferability, we further applied the trained model to two other regions in the Cerrado. These regions cover parts of southwestern Bahia, northeastern Goiás and northwestern Minas Gerais with the National Park Grande Sertão Veredas at its centre (A2), and parts of northern Goiás state with National Park Chapada dos Veadeiros at its centre (A3). All available S2 scenes for the year 2021 were downloaded and processed to Analysis-Ready Data with the Framework for Operational Radiometric Correction for Environmental monitoring (FORCE), which included cloud detection, co-registration, radiometric correction, resolution merge and data cubing. Medians were then calculated for all bands and a set of indices. Two combinations of bands were tested: 1) NIR, NDWI, MNDWI and slope (MNMs) and 2) NIR, NDWI, MNDWI, NDMI and slope (MSMNs). For both combinations, a stack of wet/dry season medians (2nd quarter/August–September) and yearly quartiles was created, resulting in four different datasets with stacked bands (MNMs_yearly/MNMs_seasonal; MSMNs_yearly/MSMNs_seasonal).
We mapped two permanent wetland types that occur in valleys of the Cerrado depending on the geomorphologic development stage of the valley and water availability. Class 1 includes wet grass-/shrubland or swamp savanna, also called Vereda in Portuguese. This class is characterized by grasses and sedges, herbaceous plants and/or shrubs, all adapted to temporary to permanent water saturation of the soil. The palm M. flexuosa is often characteristic of these swamp savannas, but not necessarily present. Class 2 refers to gallery or riparian forests. These often occur in the central parts of the valleys along the running waters and can be temporarily to permanently flooded and swampy. For model training, all other land cover types were aggregated into a class “background”. The U-Net model was implemented in Python using the Keras/TensorFlow package. The original band stacks covering A1 were subsetted into a total of 2400 images of 256×256 pixels each. The dataset was randomly split into training and testing subsets that contained 80 % and 20 % of the samples, respectively. Data augmentation was applied to the training set. Categorical cross-entropy was used as the loss function with the Stochastic Gradient Descent (SGD) optimizer and learning rate adaptation during training. The classification of the two transfer areas A2 and A3 was validated with an independent data point collection derived from literature, field work, expert judgement of high-resolution satellite imagery and randomly created points over the PRODES land use map. All trained models produced high overall F1 scores of 0.97 on the testing dataset in A1, with very small differences between band combinations and time periods. MNMs combinations show higher recall and MSMNs combinations have higher precision for the wetland classes.
This result aligns with the visual impression of less dryland area being misclassified as wetland with MSMNs classifications, while MNMs classifications cover more of the actual wetland area. Based on the testing dataset, all models performed better in classifying wetland classes 1 and 2 compared to the Mapbiomas (F1 score: 0.95) and FIP Cerrado maps (F1 score: 0.91). Visual inspection shows a higher spatial homogeneity and consistency of outlined wetland areas in this study compared to the Mapbiomas and FIP Cerrado land cover products. The transfer of the trained models and validation of classifications results in consistent delineations of wetland areas in A2 across models. F1 scores are slightly lower, at 0.9–0.91. Certain pivot irrigation systems are misclassified as gallery forest by the seasonal models. Yearly variation metrics show this misclassification pattern to a significantly lower extent. Forest plantations are misclassified by all models as gallery forest, but misclassification is lower in the seasonal models. Visual inspection confirms the results from the training region: seasonal models better delineate the wetland areas themselves, while the yearly metrics models tend to classify less of the actual wetland area, appearing to be more conservative. The outlined wetland area in A3 varies significantly between classes and models, and the actual extent of class 1 seems to be consistently overestimated. It appears that classification accuracy decreased in regions with different environmental parameters (e.g. soil types/vegetation). The Cerrado is subdivided into 19 ecoregions. Fine-tuning of the model with regional training data would improve segmentation results and allow more accurate wetland mapping across the Cerrado. We were able to map wetlands using a U-Net model with high accuracies that reduced only slightly when applied to other regions.
Both wetland types were delineated more consistently and more accurately than existing land cover products do.
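The moisture-sensitive index stack fed into the U-Net in this abstract can be sketched in Python. Band names and the toy reflectance values below are illustrative; the indices are the standard normalized-difference forms (NDWI: green vs. NIR; MNDWI: green vs. SWIR1; NDMI: NIR vs. SWIR1):

```python
import numpy as np

def nd(a, b):
    """Normalized difference (a - b) / (a + b) with a divide-by-zero guard."""
    denom = a + b
    return np.where(denom == 0, 0.0, (a - b) / np.where(denom == 0, 1, denom))

def moisture_stack(green, nir, swir1, slope):
    """Stack NIR, NDWI, MNDWI, NDMI and slope into model-input channels.

    Bands are 2-D reflectance arrays; this mirrors the MSMNs band
    combination described above (variable naming here is illustrative).
    """
    ndwi = nd(green, nir)      # McFeeters NDWI (green vs. NIR)
    mndwi = nd(green, swir1)   # modified NDWI (green vs. SWIR1)
    ndmi = nd(nir, swir1)      # moisture index (NIR vs. SWIR1)
    return np.stack([nir, ndwi, mndwi, ndmi, slope], axis=-1)

# Toy 2x2 scene.
g = np.array([[0.1, 0.2], [0.1, 0.2]])
n = np.array([[0.3, 0.3], [0.3, 0.3]])
s = np.array([[0.2, 0.1], [0.2, 0.1]])
sl = np.zeros((2, 2))
x = moisture_stack(g, n, s, sl)
```

The channels-last layout (`axis=-1`) matches the input convention of typical Keras/TensorFlow U-Net implementations.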
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Monitoring Peatland Water Table Depth In Scotland Using Sentinel-1 SAR Data and Machine Learning

Authors: Dr Morgan Simpson, Dr Armando Marino, Dr Cristian Silva-Perez, Professor Peter Hunter, Professor Jens-Arne Subke
Affiliations: University of Stirling, Keen AI
Peatlands are an important ecosystem for regulating global carbon emissions due to their ability to both sequester and store carbon. One fifth of Scotland is comprised of peatland ecosystems. However, approximately 80% of peatlands in Scotland are degraded, which in turn causes CO2 emissions. Typically, measurements of water table depth (WTD) and soil moisture (SM) are used for understanding the health of, and restoration impacts on, peatlands. Traditionally, these measurements of WTD and SM are undertaken in the field. However, these methods are time-consuming and costly, and the spatially complex and variable nature of peatlands and their hydrological regimes presents a challenge for acquiring representative data. Peatlands often cover large expanses, but water table depths can vary over relatively small spatial scales, making it difficult to obtain representative measurements of change at the landscape or ecosystem level. Synthetic Aperture Radar (SAR) is a coherent microwave imaging method, capable of monitoring in near all-weather conditions, regardless of light conditions and cloud cover. SAR is particularly sensitive to surface roughness, target geometry and the dielectric properties of the target, which can all be utilised to derive water content. This study utilises Sentinel-1 SAR data, in combination with a gradient boosting machine learning model, to estimate water table depth across multiple peatland sites in Scotland. Gradient boosting works by sequentially adding predictors to an ensemble, each one correcting its predecessor. The full Sentinel-1 archive of Single Look Complex (SLC) imagery from 2015–2024 was used for this study. An ancillary dataset was also used to obtain other metrics for machine learning input and validation, including Copernicus European Centre for Medium-Range Weather Forecasts Reanalysis v5 (ERA5) data and in-situ readings from data loggers with measurements up to 2022.
The in-situ loggers were located across multiple peatland sites with varying water table depths and vegetation characteristics. The splitting of training/test data for the model was investigated via multiple methods. Randomly splitting the datasets for training and validation resulted in overoptimistic results. Splitting the data geographically caused a discrepancy between the number of loggers based on site locations (i.e. sites with 30 loggers in-situ vs sites with 1 logger). The best method for splitting the dataset was temporally, at fixed time intervals. Data from before 1st June 2020 was used for training and data after for testing. This method reduced the bias of randomly splitting data and provided the model with as much data diversity as possible. After utilising various metrics from ERA5 data (including total evaporation, volumetric soil water layer, and leaf area index), results show that our machine learning method can provide accuracies of ~80% for water table depth estimation, dependent on the study site, with the highest errors observed at either very low (low surface connectivity) or high (surface inundation) water table depths. Importantly, the model performed robustly when applied to the large number of peatland sites.
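The temporal train/test split described above can be sketched with scikit-learn's gradient boosting. The feature names and synthetic data here are illustrative stand-ins for the study's SAR and ERA5 predictors, not its actual inputs:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Toy stand-in for a logger/SAR/ERA5 feature table; column names are
# illustrative, not the study's actual predictors.
rng = np.random.default_rng(1)
dates = pd.date_range("2016-01-01", "2022-12-31", freq="W")
X = pd.DataFrame({
    "vv_backscatter": rng.normal(-12, 2, len(dates)),
    "total_evaporation": rng.normal(0.002, 0.001, len(dates)),
    "leaf_area_index": rng.uniform(0.5, 3.0, len(dates)),
}, index=dates)
y = 0.5 * X["vv_backscatter"] + rng.normal(0, 0.5, len(dates))  # synthetic WTD

# Temporal split: train before 1 June 2020, test after (as in the abstract).
train = X.index < "2020-06-01"
model = GradientBoostingRegressor(random_state=0)
model.fit(X[train], y[train])
score = model.score(X[~train], y[~train])   # R^2 on the held-out period
```

Splitting on a fixed date rather than at random keeps temporally autocorrelated samples out of both sets at once, which is what makes the random split overoptimistic.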
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: From the Arctic tundra to temperate peatlands: Improving net ecosystem CO₂ exchange modelling for Irish peatland ecosystems

Authors: Dr. Wahaj Habib, Dr. Marco Girardello, Dr. Matthew Saunders, Dr. John Connolly
Affiliations: School of Natural Sciences, Geography Discipline, Trinity College Dublin, School of Natural Sciences, Botany Discipline, Trinity College Dublin
Peat soils cover about 23% of Ireland's terrestrial landscape and account for almost three-quarters of Ireland’s soil organic carbon (SOC) stock. Given their significant role in carbon storage, sequestration and, in turn, climate regulation, understanding the carbon dynamics of these ecosystems is crucial. This highlights the need for accurate, scalable models to estimate carbon dioxide (CO₂) Net Ecosystem Exchange (NEE) across these ecosystems. These models are also crucial for understanding correlations among climatological, environmental, and biophysical factors. To address this, this study builds on prior research that modelled NEE in the Arctic tundra using air temperature, Leaf Area Index (LAI), and Photosynthetically Active Radiation (PAR) as key drivers. Our work refines these methods and extends the model's application to Irish peat soils. The primary objective is to enhance the accuracy of the original Arctic NEE model (PANEEx) and upscale it using high-resolution satellite data. The refined model incorporates Sentinel-1 and Sentinel-2 satellite data alongside Moderate Resolution Imaging Spectroradiometer (MODIS) PAR estimates to assess NEE dynamics across Ireland’s peatlands. The model parameterisation is informed by Light Response Curve (LRC) metrics derived from in situ Eddy Covariance Flux Tower (ECFT) measurements. Preliminary results suggest that high-resolution remote sensing offers a more accurate representation of Ireland’s spatial variability in CO₂ exchange within peat soil ecosystems. These findings underscore the potential of remote sensing tools to monitor and report on Ireland’s peat soil carbon fluxes, aligning with national and international climate targets.
The findings from this study will offer valuable insights for ecosystem monitoring, reporting, and policy, supporting climate targets by informing sustainable management of various ecosystems and verifying carbon budgets in line with national, European Union, and international climate commitments.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Close range hyperspectral estimation of northern peatland moisture content across climate zones and trophic levels

Authors: Susanna Karlqvist, Jussi Juola, Aarne Hovi, Sini-Selina Salko, Iuliia Burdun, Miina Rautiainen
Affiliations: Aalto University
Peatlands play an important role in the global carbon cycle despite their limited geographic extent, with northern peatlands alone storing nearly twice as much carbon as all global living forests combined. These crucial ecosystems depend heavily on waterlogged conditions and moisture levels to maintain their carbon-storing capacity. However, peatland moisture conditions face increasing threats from anthropogenic activities, such as drainage for agriculture and forestry, and climate change-induced alterations in temperature and precipitation patterns. These threats risk transforming peatlands from carbon sinks into carbon sources, highlighting the critical importance of moisture monitoring for identifying vulnerable areas and evaluating restoration efforts. While satellites and airborne sensors can provide extensive coverage of remote peatland regions, they require detailed ground-level validation to achieve their full potential. This validation, achieved through precise close-range reference measurements, has become particularly important with the advent of new hyperspectral satellite missions such as CHIME. Reference data can be acquired through both laboratory measurements of key peatland species, such as Sphagnum mosses, and field measurements, thereby enabling enhanced monitoring of peatland moisture dynamics. Previous research on estimating peatland moisture content or water table levels has often been limited in scope, typically focusing on either a few isolated sites or a narrow range of peatland species. Few studies have evaluated optimal remote sensing methods for accurate moisture assessment across diverse northern peatlands and species varieties. In our research, we tested methods to estimate peatland moisture content using close-range hyperspectral field measurements collected from 13 northern peatlands spanning from Hemiboreal (57.644°N) to Arctic regions (68.884°N).
We complemented these measurements with a comparative analysis of moisture estimation methods for laboratory-measured pure Sphagnum species. The laboratory study was conducted as a drying experiment, enabling measurements from a variety of moisture conditions. Our laboratory findings revealed that classifying Sphagnum species by habitat enabled more accurate moisture estimation, leading us to test estimation methods across trophic levels in our field data analysis. We examined multiple analytical techniques for moisture estimation, including spectral moisture indices, continuum removal, the optical trapezoid model (OPTRAM), smoothed reflectance spectra, and continuous wavelet transformed spectra. Our results demonstrate that full reflectance and continuous wavelet transformed spectra show particular promise for moisture content estimation, while spectral moisture indices prove less reliable for detecting moisture levels across different peatlands and trophic levels.
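A continuous wavelet transform of a reflectance spectrum, of the kind mentioned above, can be sketched with NumPy alone using a Ricker (Mexican-hat) wavelet. The toy spectrum, scales and window length below are illustrative, not the study's configuration:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet sampled at `points` positions, width a."""
    t = np.arange(points) - (points - 1) / 2.0
    A = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def cwt(spectrum, scales, wavelet=ricker):
    """Continuous wavelet transform of a 1-D reflectance spectrum.

    Returns an array of shape (len(scales), len(spectrum)).
    """
    out = np.empty((len(scales), len(spectrum)))
    for i, a in enumerate(scales):
        w = wavelet(min(10 * a, len(spectrum)), a)
        out[i] = np.convolve(spectrum, w, mode="same")
    return out

# Toy spectrum: a sloped baseline plus a narrow absorption dip at index 60.
x = np.linspace(0, 1, 200)
spectrum = 0.4 + 0.1 * x
spectrum[58:63] -= 0.05
coefs = cwt(spectrum, scales=[2, 4, 8])
```

Because the Ricker wavelet is approximately zero-mean, smooth baseline and slope contribute little, while the narrow absorption feature produces a strong negative coefficient at the matching scale, which is what makes wavelet features attractive relative to simple band-ratio indices.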
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: SAR and InSAR applied to temperate peatlands: new insights on links between remote sensing estimates and ecohydrological parameters

Authors: Alexis Hrysiewicz, Jennifer Williamson, Chris D Evans, Shane Donohue, A. Jonay Jovani-Sancho, Sam Dixon, Nathan Callaghan, Jake White, Justin Lyons, Joanna Kowalska, Hugh Cushnan, Eoghan P. Holohan
Affiliations: SFI Research Centre in Applied Geosciences (iCRAG), UCD School of Earth Sciences, UK Centre for Ecology & Hydrology, UCD School of Civil Engineering, School of Biosciences, University of Nottingham, Natural England, Natural Resources Wales, RPS Group
Peat soils are known to sequester vast quantities of carbon, with 644 gigatonnes (Gt), or 20–30 % of global soil carbon, stored in peat, despite covering only 3–5 % of the land area. In Europe, peat soils cover about 530,000 km² (5 %) and hold around 42 Gt of carbon. Links proposed recently between tropical peatland Greenhouse Gas (GHG) emissions and peat-surface displacements, as estimated remotely by Interferometry of Synthetic Aperture Radar (InSAR), could provide a basis for estimation of peatland GHG emissions on a global scale via low-cost remote sensing techniques. In addition, recent studies propose that maps and time series of apparent peatland surface motions derived from satellite-based SAR/InSAR are a proxy for ecohydrological peat parameters (i.e., groundwater level and soil moisture). However, links between SAR and InSAR estimates and peat ecohydrological parameters remain uncertain for temperate bogs, and until recently, there has been a lack of ground validation of these apparent surface motions at peatlands. The ESA Living Planet Fellowship project – named RaiPeat_InSAR – aimed to fill this knowledge gap via a systematic analysis of SAR/InSAR products from Sentinel-1 C-band data (intensity maps, interferograms, coherence maps and temporal evolutions of displacements) for well-studied Irish and British bogs. From various in-situ measurements (peat surface movement, groundwater levels, soil moisture, weather conditions, etc.), we analysed the linkages between SAR/InSAR estimates and ecohydrological peat parameters. In our first study, we demonstrated that the InSAR-derived VV-polarisation coherence and displacements are not affected by vegetation changes caused by the wildfire in June 2019. In contrast, the VV-polarisation SAR intensity shows an increase, which can be linked to vegetation removal.
In-situ data show that the InSAR coherence is directly related to soil moisture changes, from which it can be interpreted that the satellite-derived C-band radar waves penetrate through the 10–20 cm thick mossy vegetation layer and into the upper few cm of the underlying peat. In our second study, we show that InSAR-derived surface motions are very similar to peat surface displacements measured in-situ. A modified InSAR processing approach applied to ascending and descending acquisitions spanning May 2015 to September 2021 indicates that the peat surface of Cors Fochno (a raised bog in Wales, UK) is subsiding at the centre and rising at the edges (-5 mm/yr to +5 mm/yr), while the peat surface of Cors Caron (a raised bog in Wales, UK) is mostly subsiding (max. -8 mm/yr). Both bogs are also affected by annual surface level oscillations of 10–30 mm amplitude (known as “bog breathing”). The InSAR data capture well the amplitude and frequency of the peat surface oscillations measured in-situ by a novel camera-based method, with Pearson correlation coefficients >0.8 and differences of <5–7 mm, respectively. Furthermore, the InSAR-derived ground motions follow the in-situ measured groundwater table levels in a ratio of roughly 1:10. InSAR-derived displacements therefore appear to be an efficient proxy for groundwater level changes. In our third study, we undertook a critical analysis of the capacity for upscaling our results as supported by the recently released European Ground Motion Service (EGMS) of the Copernicus Land Monitoring Programme. Although the displacement rates appear to be consistent with the in-situ data, we show that the EGMS results suffer from an underestimation of larger annual displacement oscillations (> ±20 mm). On blanket bogs, such displacement oscillations cannot be captured by the EGMS and site-scale InSAR datasets due to the very low amplitudes (< 5–10 mm) of the oscillations.
On fens and associated agricultural peatland, EGMS and site-scale computations do not provide accurate displacement measurements, due to very low InSAR coherence and the high annual oscillation of displacement (> ±50 mm). However, EGMS-derived measurements, combined with site-scale computations, can enable monitoring of peat surface displacements on raised and blanket bogs at continental scale. Finally, the last study in the project proposed first experiments to estimate carbon emissions by reconstructing groundwater levels from InSAR-derived displacements. For example, the ratio between peat surface displacement and groundwater level is 1:10 for the raised bogs studied (i.e. 1 mm of peat surface displacement corresponds to 1 cm of change in groundwater level). Using empirical laws and/or machine learning techniques, our preliminary results show a higher Net Ecosystem Production (NEP) rate – for 5 raised bogs – for 2018, 2020, 2021 and 2022 than for 2019, while the NEP rate was lower for 2016 and 2017. Overall, our studies confirm that SAR/InSAR products contain the keys for accurate continental-scale monitoring of hydrologically driven surface motions of peat soils, and are thus a step towards estimating peat carbon emissions from large-scale remote sensing from space.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Assessing mire breathing patterns across Mecklenburg-Vorpommern, Germany using a Sentinel-1 SBAS approach

Authors: Luc Pienkoß, Philip Marzahn
Affiliations: University of Rostock
The characteristics and conditions of peatlands are crucial for assessing their state, especially for monitoring their potential status of degradation. Mire breathing is a key process, serving as a proxy for a peatland's degradation status. While drained, i.e. degraded, peatlands show a weak oscillation pattern with a general (nearly linear) subsidence trend, more natural peatlands show more prominent mire breathing, resulting in seasonal cycles of uplift and subsidence. The quantification of peatland subsidence may provide critical insights into carbon storage dynamics and greenhouse gas emissions, making it a highly relevant topic for climate and environmental research. While the retrieval of subsidence rates is well established for artificial surfaces such as urban areas, for natural surfaces such as peatlands the retrieval is hindered. This study examines the use of interferometric time-series analysis through the MintPy SBAS approach with Sentinel-1 SAR data for the purpose of monitoring large-scale peatland subsidence. The methodology was applied to peatlands covering the whole area of the federal state of Mecklenburg-Vorpommern (north-east Germany) between 2017 and 2024. The findings illustrate the presence of spatiotemporal subsidence trends across the entire federal state over the period. The subsidence rates observed at three examination sites ranged from -4.32 to -9.61 cm per year in the line of sight (LOS). Moreover, site-specific mire breathing patterns were identified, with amplitudes ranging from 5 cm to 15 cm in LOS. Seasonal variations in subsidence, characterized by increased subsidence rates during the summer months and partial recovery in wetter months, demonstrate the impact of hydrological changes on the dynamics of the subsidence patterns.
The outcome of the study demonstrates the efficacy of time-series analyses in capturing both long-term subsidence trends and short-term oscillatory responses, thereby contributing to the development of sustainable land management and carbon sequestration strategies. Nevertheless, further research is required to improve the reliability of the SBAS method and to validate its findings using robust and reliable in-situ subsidence data.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Multi-Source Earth Observation Data for Assessing Hydrological Dynamics in Peatlands

Authors: M.Sc. Lena Krupp, Prof. Claas Nendel, B.A. Simon Seyfried, Gohar Ghazaryan
Affiliations: Leibniz Centre for Agricultural Landscape Research (ZALF), Institute of Biochemistry and Biology, University of Potsdam, Earth Observation Lab, Geography Department, Humboldt University of Berlin, Integrative Research Institute on Transformations of Human-Environment Systems (IRI THESys), Humboldt University of Berlin, Global Change Research Institute of the Czech Academy of Sciences
Peatlands are important carbon sinks and are estimated to store 30% of the world's soil organic carbon. This is due to their characteristically high water table, which prevents microorganisms from breaking down plant material and releasing CO2 into the atmosphere in the process. However, many peatlands have been drained over centuries to convert them into agricultural land, but also to extract peat for horticultural purposes or as fuel, making them net CO2 emitters. The rewetting of peatlands is a crucial component in the fight against climate change and is currently being heavily promoted in some countries. In Germany, for example, rewetting projects are being carried out as part of the national peatland conservation strategy, which is included in the Federal Action Plan on Nature-based Solutions for Climate and Biodiversity. Effective rewetting efforts necessitate enhanced monitoring of peatland hydrological conditions, particularly soil moisture and water table depth (WTD), which are fundamental to understanding peatland health and their climate-regulating functions. This study explores a data-driven approach to assess hydrological dynamics in peatlands using Sentinel-1, Sentinel-2, Landsat and in-situ WTD measurements from several degraded peatlands across the north-east of Germany. These areas have undergone diverse management practices, ranging from historical drainage for agriculture to recent efforts focused on ecological restoration through rewetting initiatives. Sentinel-1 SAR data provided backscatter information sensitive to surface moisture and vegetation structure, while the Sentinel-2 and Landsat multispectral sensors offer valuable spectral indices, such as the Normalized Difference Vegetation Index (NDVI) or the Normalized Difference Water Index (NDWI), to monitor the vegetation response to moisture changes. Land surface temperature (LST) derived from Landsat thermal data was integrated to capture surface energy balance dynamics.
In addition, methods like the Optical Trapezoid Model (OPTRAM) were applied to estimate soil moisture and track the indirect relationships between soil moisture (SM) and WTD. Correlations between measured WTD and remote sensing parameters – VV, VH and their ratio, and spectral indices – were used to establish relationships reflecting peatland hydrology. These relationships were further utilized in a Random Forest machine learning model to predict WTD dynamics, combining SAR backscatter, spectral indices, thermal data, and OPTRAM-derived soil moisture. This approach allows for capturing non-linear interactions between variables and provides a robust framework for monitoring seasonal and inter-annual changes in peatland hydrology. The results reveal significant correlations between WTD and key remote sensing parameters, such as VV and OPTRAM, in several peat sites, highlighting the potential of integrating SAR, optical, and thermal datasets with data-driven statistical and machine learning models for peatland monitoring. By offering a scalable methodology, this work supports rewetting initiatives, informs conservation strategies, and advances climate change mitigation efforts. Future directions include refining model accuracy and expanding the approach to other peatland regions for broader applicability.
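The OPTRAM step can be sketched in Python using the standard optical-trapezoid formulation; the dry/wet edge parameters below are illustrative placeholders, not values fitted from the study sites:

```python
import numpy as np

def optram_moisture(swir, ndvi, i_d, s_d, i_w, s_w):
    """Normalized soil moisture W from the optical trapezoid (OPTRAM).

    STR = (1 - SWIR)^2 / (2 * SWIR) is the SWIR transformed reflectance;
    the dry edge is STR_d = i_d + s_d * NDVI and the wet edge is
    STR_w = i_w + s_w * NDVI, both normally fitted from the STR-NDVI
    scatter of a scene (edge parameters here are placeholders).
    """
    str_ = (1.0 - swir) ** 2 / (2.0 * swir)
    str_d = i_d + s_d * ndvi
    str_w = i_w + s_w * ndvi
    w = (str_ - str_d) / (str_w - str_d)
    return np.clip(w, 0.0, 1.0)   # keep W in the physical [0, 1] range

# Toy pixels: wetter soil lowers SWIR reflectance, which raises STR.
swir = np.array([0.30, 0.15])   # drier vs wetter pixel
ndvi = np.array([0.40, 0.40])
w = optram_moisture(swir, ndvi, i_d=0.0, s_d=1.0, i_w=2.5, s_w=2.0)
```

A soil-moisture proxy of this kind can then enter the Random Forest alongside backscatter, spectral and thermal features as one more predictor column.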
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: EO data for peatland monitoring: challenges and opportunities from multi-temporal SAR interferometry

Authors: Christian Bignami, Cristiano Tolomei, Lisa Beccaro, Stefano Salvi, Gerardo Lopez Saldana, Yara Al Sarrouh, Michel Bechtold, Kevin Tansey, Harika Ankathi, Susan Page, Fred Worrall, Arndt Piayda
Affiliations: Istituto Nazionale di Geofisica e Vulcanologia, Assimila Ltd, KU Leuven, University of Leicester, Durham University, Thuenen Institute of Climate-Smart Agriculture
Peatlands cover only 3–4% of the world’s land area. Despite this minimal presence, peatlands are significant ecosystems, able to provide several ecosystem services, making their conservation and restoration critical for current and future generations. Indeed, peatlands play a crucial role in global environmental change processes, representing the most effective ecosystems for carbon storage. Pollution, urban development, and global warming severely affect these areas, causing high ecological stress. An increased recognition of the importance of these special habitats has encouraged the spread of monitoring and restoration studies through different approaches, from field surveys to laboratory analyses, and, ultimately, by remote sensing techniques. The present work illustrates the outcomes obtained during the ESA-funded WorldPeatland project, where various peatlands located in different climatic regions of the Earth (temperate and boreal) have been studied using remote sensing imagery combined with data directly acquired in the field. In particular, we show the results concerning the exploitation of multi-temporal Interferometric Synthetic Aperture Radar (InSAR) methods, i.e., through Enhanced Persistent Scatterers and Small Baseline Subset techniques, applied to stacks of Sentinel-1 SAR data from 2021 to 2024. We produced ground displacement time series and mean velocity maps, allowing us to study the behaviour of the peatlands over time and to find possible correlations with available ground data such as water level, rainfall measurements, soil moisture, and other vegetation indices obtained from optical satellite images. Moreover, an in-depth investigation has been carried out testing different processing settings to understand the scattering mechanisms responsible for the SAR signal response and the measured deformation in such challenging areas. 
Four peatlands have been studied in our work: Hatfield and Moor-House in England, Gnarrenburg in Germany, and Degero in Sweden. The first three belong to temperate climate regions; the fourth represents the boreal climate environment. Two tropical peatlands have also been considered in the WorldPeatland project. Unfortunately, the C-band data from the Sentinel-1 mission were unsuitable for obtaining ground motion data because of the dense forest canopy overlying the peatlands. Our findings confirm that natural peatlands in temperate regions are characterised by higher interferometric coherence, since the vegetation is low and the water table level is not constantly above the surface. Therefore, the interferometric products present acceptable spatial coverage and low-noise time series. In contrast, ground motion mapping in the Degero boreal peatland is more problematic, mainly because of snow cover during the winter season, which causes phase loss and discontinuities in the InSAR observations. Our results have been compared with the data provided by the European Ground Motion Service (EGMS) and validated against corner reflector data, where available. We confirmed the high quality of the measured mean ground velocities and deformation time series. Despite the good ground motion measurement accuracies, interpreting the ground motion signals is still challenging, given the incomplete spatial coverage and the complex physical and biological processes acting in peatland environments.
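The interferometric coherence that determines where such time series are reliable is conventionally estimated over a small moving window of two co-registered SLC images. A minimal numpy/scipy sketch of that boxcar estimator follows (illustration only: real Sentinel-1 processing also removes topographic and orbital phase before coherence estimation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _smooth(a, win):
    # uniform_filter does not take complex input; filter parts separately
    if np.iscomplexobj(a):
        return uniform_filter(a.real, win) + 1j * uniform_filter(a.imag, win)
    return uniform_filter(a, win)

def coherence(s1, s2, win=5):
    """Sample coherence magnitude between two co-registered complex SLC
    images, estimated with a (win x win) boxcar window."""
    num = _smooth(s1 * np.conj(s2), win)
    den = np.sqrt(_smooth(np.abs(s1) ** 2, win) * _smooth(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)
```

Identical images give coherence 1 everywhere, while decorrelated scenes (dense canopy, snow cover) drop towards the estimator's noise floor, which is the behaviour the abstract reports for the tropical and boreal sites.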
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Developing Spectral Indicators for the Monitoring of Re-wetted Peatlands

Authors: Ariane Tepaß, Dr. Marcel Schwieder, Christina Hellmann, Sebastian van der Linden, Dr. Stefan Erasmi
Affiliations: Thünen-Institut, Universität Greifswald
Peatlands play a pivotal role in climate regulation. However, over 95% of German peatlands have been drained, mainly for agricultural use, contributing significantly to greenhouse gas (GHG) emissions. Drained peatlands release substantial amounts of CO₂, nitrous oxide, and methane, accounting for approximately 7.5% of Germany’s total GHG emissions and 44% of all emissions from agriculture and agricultural land use. Rewetting drained peatlands is a necessary mitigation measure, with the goal of stopping GHG emissions and the potential to transform them from substantial carbon sources into sustainable sinks. Nevertheless, improper rewetting may result in too wet or too dry conditions, which hamper the subsequent sustainable use of the areas or their carbon sink potential. To ensure the success of rewetting measures, it is crucial to implement systematic observation that captures changes in peatland vegetation and hydrological conditions. Vegetation, such as the plants Typha spp. (cattail) and Phragmites australis (common reed), can act as ecological indicators for peatland monitoring, as both species reflect the impact of hydrological dynamics and restoration conditions. Thus, monitoring the success of rewetting measures requires consistent and accurate observation of plant communities. We focus on mapping and monitoring the aforementioned key peatland plants based on Sentinel-1 and Sentinel-2 satellite time series and wetland vegetation cover fractions derived from hyperspectral satellite data. Using spectral indicators, SAR backscatter and temporal trends, we aim to characterize the spatial and temporal dynamics of these species and their phenology under different rewetting regimes. Our study area includes the Peene and Trebel river basins in the federal state of Mecklenburg-Vorpommern, Germany, with varying rewetting durations and intensities. The synergy of hyperspectral and Sentinel satellite data offers opportunities for monitoring and analyzing peatland vegetation dynamics. 
Hyperspectral sensors provide highly detailed spectral information, capturing fine-grained variations across numerous narrow bands and enabling the differentiation of vegetation types with similar spectral features. However, hyperspectral data are often more challenging and time-consuming to acquire and typically do not cover large areas. Conversely, the Sentinel-1 and Sentinel-2 sensors, with their high temporal and spatial resolution, enable frequent and large-scale observations, capturing phenological changes and dynamic processes over time. Sentinel-1, with its synthetic aperture radar (SAR), provides data regardless of weather conditions, and Sentinel-2 delivers optical imagery across 13 spectral bands, ideal for capturing vegetation characteristics and phenological trends. Together, these datasets can bridge the gap between spectral precision and temporal-spatial coverage. In this study, we highlight the advantages of integrating data from both hyperspectral and Sentinel satellites to monitor the abundance and spatiotemporal dynamics of key peatland vegetation. We analyzed fractional cover maps of typical peatland vegetation types, which were derived by unmixing hyperspectral PRISMA datasets. The mixed hyperspectral signals were decomposed into constituent components, such as vegetation types or land use, utilizing machine learning models trained on a set of synthetically mixed training data. The fractions provide quantitative estimates of dominant vegetation types like Typha and Phragmites for each 30 × 30 m pixel, offering insights into their distribution and density in the study area. Based on the resulting fractional cover map and the integration of pixels with a high fractional vegetation cover of Typha and Phragmites, we derived spectral-temporal metrics from time series of Sentinel-1 and Sentinel-2 data. Initial results revealed distinct patterns of land surface phenology in regions dominated by Typha and Phragmites. 
This demonstrates the capability of Sentinel data to differentiate between the two plants' phenologies. These phenological shifts highlight differences in growth and senescence, which are related to hydrological and microclimatic conditions. In turn, these conditions can be influenced by rewetting intensity, duration and success. Such insights are essential for evaluating the effectiveness of peatland restoration and for refining strategies aimed at optimizing carbon sequestration. This work underscores the potential of combining multi- and hyperspectral data as well as SAR backscatter and temporal indicators from satellite data to monitor vegetation dynamics in peatlands. Future research will focus on refining these indicators and exploring their scalability to other peatland restoration sites.
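The unmixing step relies on regression models trained on synthetically mixed spectra. The sketch below shows that idea with random placeholder endmembers standing in for a real PRISMA spectral library; fraction vectors are drawn from a Dirichlet distribution so they sum to one, mixed linearly, and a multi-output regressor learns to invert the mixing:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical endmember spectra (e.g. Typha, Phragmites, background),
# 20 bands each -- placeholders, not real PRISMA library spectra
endmembers = rng.uniform(0.05, 0.6, size=(3, 20))

def synth_mixtures(n):
    """Dirichlet-weighted linear mixtures plus noise, as training data."""
    f = rng.dirichlet(np.ones(3), size=n)               # fractions sum to 1
    spectra = f @ endmembers + rng.normal(0, 0.01, (n, 20))
    return spectra, f

X, y = synth_mixtures(1500)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Recover the fractions of a noiseless 50/30/20 mixture
test_pixel = np.array([0.5, 0.3, 0.2]) @ endmembers
pred = model.predict(test_pixel.reshape(1, -1))[0]
```

Because each tree predicts a mean of training fraction vectors, the predicted fractions again sum to one; mapping `pred` per pixel yields the fractional cover maps the abstract describes.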
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Temporal Analysis and Multi-Dimensional Fusion for Advanced Monitoring of Peatland Degradation

Authors: Harsha Vardhan Kaparthi, Dr. Alfonso Vitti
Affiliations: Sapienza Università di Roma, Università degli studi di Trento
The study presents a cutting-edge approach to monitoring peatland degradation through temporal analysis and multi-dimensional data fusion, providing a holistic framework for informed conservation efforts. By integrating spectral, Synthetic Aperture Radar (SAR), and LiDAR datasets, we employ advanced deep learning models to capture and analyze the complex dynamics of peatland ecosystems over time and across dimensions. Temporal trends in vegetation health, soil moisture, and degradation patterns are analyzed using recurrent neural networks (RNNs) and Temporal Convolutional Networks (Temporal CNNs). These models reveal long-term changes and seasonal variations, highlighting critical indicators of progressive degradation or recovery. Deep fusion models further enhance this analysis by integrating spectral, SAR, and LiDAR data, creating a comprehensive 3D view of peatland conditions. This fusion effectively combines spatial, spectral, and elevation information, providing unparalleled insights into the interactions among ecosystem components. This multi-dimensional framework is validated across diverse climate zones, demonstrating its adaptability to boreal, tropical, and temperate peatlands. The results showcase the effectiveness of combining temporal analysis with multi-source data fusion to support targeted interventions and sustainable management strategies. Our approach offers a robust toolkit for ecological monitoring, enabling high-resolution, spatio-temporal insights essential for the preservation and restoration of these critical carbon-storing ecosystems.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Integrated indicators for monitoring peatland condition using multitemporal trends.

Authors: Mr Gerardo Lopez Saldana, Yara Al Sarrouh, Sam Doolin, Michel Bechtold, Stefano Salvi, Christian Bignami, Cristiano Tolomei, Lisa Beccaro, Susan Page, Fred Worrall, Kevin Tansey, Harika Ankathi, Ian Jory
Affiliations: Assimila, KU Leuven - Dept of Earth and Environmental Sciences, Istituto Nazionale di Geofisica e Vulcanologia, University of Leicester, Durham University
The ESA WorldPeatland project, a collaborative effort to enhance peatland mapping and monitoring, focuses on developing Earth observation (EO) tools to address the needs of various stakeholders. The project recognizes the significance of integrated indicators derived from multitemporal trends of hydrology, surface motions, and vegetation biophysical parameters to assess peatland condition. Integrated indicators are crucial for understanding the complex interplay of factors influencing peatland health. These indicators provide insights into the effectiveness of restoration efforts, track the impact of disturbances such as wildfires, and offer simplified assessments for a wide range of stakeholders. WorldPeatland aims to develop indicators that are sensitive to change, representative of diverse biomes where peatland is present, and offer leading insights to support proactive management decisions. The project leverages multitemporal EO data from various sources to derive these integrated indicators. Hydrological variables, such as water table depth, are essential for evaluating the efficacy of rewetting measures, providing early warnings of degradation, and assessing fire danger. WorldPeatland utilizes Sentinel-1 synthetic aperture radar (SAR), Sentinel-2 optical imagery, and the SMAP (Soil Moisture Active Passive) Level-4 soil moisture product to derive the peatland hydrology monitoring component. Monitoring peatland surface motion is critical for estimating carbon accumulation or loss, supporting GHG emission reporting as per IPCC guidelines, and assessing peatland health using water-level-dependent ground surface fluctuations. WorldPeatland employs Multi-Temporal InSAR techniques, specifically the Enhanced Persistent Scatterers (E-PS) and Intermittent Small Baseline Subset (ISBAS) algorithms, to measure ground motion, ensuring a balance between accuracy and spatial coverage over these challenging surfaces. 
Vegetation biophysical parameters, such as land surface temperature (LST), albedo, and Leaf Area Index (LAI), play a crucial role in assessing peatland function and monitoring vegetation changes over time. WorldPeatland utilizes long-term data records from MODIS and VIIRS, complemented by higher-resolution Sentinel-1 and Sentinel-2 data, to track changes in these variables, supporting the assessment of peatland function and restoration progress. The open-source version of the Carbon Durham Model will be used to provide insights into the carbon budget over peatland areas. The integration of multitemporal datasets enables the development of comprehensive indicators of peatland condition. WorldPeatland aims to create indicators based on a combination of multi-variable temporal trends and standardised anomalies that reflect peatland dynamics from a holistic perspective, rather than focusing only on individual components. The time series will be detrended to remove the influence of seasonal variations, highlighting interannual variations that are not related to typical seasonal dynamics. On the detrended time series, statistical trend metrics will be computed to determine whether a trend exists in each monitoring variable. The trends of all variables can then be combined to obtain an overall understanding of the dynamics of the study area. Climatological averages are also calculated to capture the average behaviour of a variable over time. Standardised anomalies are then generated, which help to determine how far a specific variable is from its average behaviour at a particular point in time. By combining the standardised anomalies of all variables, it is possible to develop indicators that are sensitive to change and representative of different peatland types. 
These indicators will be accessible through user-focused online portals and tools, ensuring their applicability for a broad range of stakeholders, including scientists, policymakers, and restoration practitioners. By integrating multitemporal trends of hydrology, surface motions, and vegetation biophysical parameters, WorldPeatland strives to provide robust tools and indicators that support informed decision-making for peatland conservation, restoration, and sustainable management.
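The standardised-anomaly step described above can be illustrated with a minimal monthly-climatology version; the grouping by calendar month and the simple averaging across variables are assumptions for illustration, not necessarily the project's exact procedure:

```python
import numpy as np

def standardised_anomalies(values, months):
    """z-score each observation against its multi-year monthly
    climatology: (x - monthly mean) / monthly standard deviation."""
    values = np.asarray(values, dtype=float)
    months = np.asarray(months)
    z = np.empty_like(values)
    for m in np.unique(months):
        sel = months == m
        mu, sd = values[sel].mean(), values[sel].std()
        z[sel] = (values[sel] - mu) / (sd if sd > 0 else 1.0)
    return z

def combined_indicator(*anomaly_series):
    """Naive multi-variable indicator: the mean standardised anomaly
    across variables (water level, ground motion, LAI, ...)."""
    return np.mean(np.vstack(anomaly_series), axis=0)
```

By construction, each calendar month of the output has zero mean and unit standard deviation over the years, so anomalies of different variables become directly comparable before they are combined.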
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Mapping Global Organic Soils Drainage and Emissions: Leveraging Earth Observation-based Geospatial Data with an Intergovernmental Panel on Climate Change Framework

Authors: Erin Glen, David Gibbs, Melissa Rose, Angela Scafidi, Nancy Harris, Benjamin Wielgosz
Affiliations: World Resources Institute
Peatland drainage contributes approximately 6% of global greenhouse gas (GHG) emissions, yet it remains underrepresented in many national and regional GHG inventories. While peatland drainage and degradation have occurred in the Northern Hemisphere for centuries, drainage in tropical regions has accelerated rapidly in the 21st century. Over the past three decades, the majority of Southeast Asia’s 25 million hectares of tropical peatlands have been deforested and drained, leading to significant GHG emissions (Hoyt et al., 2021). The degradation and conversion of peatlands not only release substantial carbon emissions but also disrupt critical ecosystem services and increase the likelihood of catastrophic peat fires. Despite their importance, existing datasets delineating peatland drainage are sparse, coarse, or localized, and no comprehensive global dataset currently maps peatland extent, drainage, and associated emissions. We address this gap by leveraging a recently developed 30-meter global organic soils map with the best available regional and global contextual data for estimating drainage and emissions. Using a geospatial data integration framework, we estimate emissions from peatland drainage from 2000–2020, following Intergovernmental Panel on Climate Change Wetlands Supplement (2013) guidelines. Our first iteration employs IPCC Tier 1 emission factors combined with spatial data on climate zones, soil nutrient status, land cover change, plantation types, drainage infrastructure, road networks and peat extraction areas to delineate global peatland drainage and quantify associated emissions. The resulting product is a global 30-meter resolution map characterizing drainage and conversion types and their emissions from 2000–2020. Because peatland definitions vary across geographies and jurisdictions, we provide results for all organic soils and empower users to subset this data to local peatland definitions. 
This flexible modeling framework is designed for iterative updates, incorporating improved datasets and refined methodologies for estimating GHG emissions from peatland drainage. Potential advancements for future iterations include assessment of drainage infrastructure intensity, incorporation of IPCC Tier 2 methods and emission factors, temporal refinement within the 2000-2020 period, and expansion of coverage beyond 2020. By providing a 30-meter resolution, globally consistent dataset, this work supports efforts to monitor, manage, and restore peatlands, offering critical insights for addressing climate change and preserving ecosystem services.
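Per stratum, the Tier 1 accounting described above reduces to drained area multiplied by an emission factor for that climate zone and land use. A minimal sketch follows; the emission factor values are illustrative placeholders, not figures from the IPCC Wetlands Supplement:

```python
# Tier 1 drained-organic-soil accounting: emissions = sum over strata of
# activity area x emission factor, stratified by climate zone x land use.
# EF values below are illustrative placeholders, NOT IPCC figures.
EF_T_CO2C_PER_HA_YR = {
    ("tropical", "plantation"): 11.0,
    ("tropical", "cropland"): 14.0,
    ("temperate", "cropland"): 7.9,
}

def annual_emissions(strata):
    """strata: iterable of (climate_zone, land_use, drained_area_ha);
    returns annual emissions in t CO2-C per year."""
    return sum(area * EF_T_CO2C_PER_HA_YR[(cz, lu)]
               for cz, lu, area in strata)

total = annual_emissions([
    ("tropical", "plantation", 1000.0),
    ("temperate", "cropland", 500.0),
])
# 1000 * 11.0 + 500 * 7.9 = 14950.0
```

In the geospatial framework, each 30-meter pixel effectively contributes its area to one such stratum, so the map is a spatially explicit version of this sum.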
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Northern Wetland Classifications and Carbon Cycle Applications: Translating Concepts Into Spatial Data

Authors: Marianne Böhm, Prof. Gustaf Hugelius, Prof. Stefano Manzoni
Affiliations: Stockholm University, Bolin Centre for Climate Research
Despite progress in research on Arctic and Boreal carbon fluxes, there are still large uncertainties in carbon budget estimates. Improved land cover mapping is needed to decrease these uncertainties. Issues with applying current maps to carbon budgets include class differentiation, scale issues and double counting. This introduces errors in the upscaling of measured greenhouse gas emissions and, as a result, in the development, parameterisation and evaluation of models. In particular, global land cover maps have poor mapping accuracy for northern wetland ecosystems. Wetlands are prevalent at high latitudes and are key players in the carbon cycle given their high vulnerability to climate change. Despite their central role, commonly used wetland maps are often spatially coarse, lack thematic detail, or do not draw class borders along differences in the carbon cycle. This contribution presents results from a review of national wetland classification and inventory systems across the Arctic-Boreal region. Specifically, we explore which distinctions are made in existing systems and how they could be applied at different scales for carbon-cycle applications across the Arctic-Boreal domain. Furthermore, we identify which data inputs would enable the success of this effort, point to gaps in existing datasets, and open the discussion on how to fill them.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Integrating Sentinel-1, Sentinel-2, and SMAP Level-4 Soil Moisture Data for Peatland Hydrology Monitoring

Authors: Michel Bechtold, Kevin Tansey, Harika Ankathi, Gerardo Lopez Saldana, Yara Al Sarrouh, Iuliia Burdun, Lucas Boeykens, Ullrich Dettmann, Fred Worrall, Gabriëlle De Lannoy
Affiliations: KU Leuven, University of Leicester, Assimila Ltd, Aalto University, Thuenen Institute, Durham University
Peatlands play a critical role in global carbon and water cycles as well as regional ecosystem services. However, monitoring peatland hydrology remains challenging due to the complex surface properties and hydrodynamics in these areas. This study presents an integrated approach combining Sentinel-1 synthetic aperture radar (SAR), Sentinel-2 optical imagery, and the SMAP (Soil Moisture Active Passive) Level-4 soil moisture product to enhance peatland hydrology monitoring. The approach leverages the peatland-specific hydrological output of the SMAP Level-4 soil moisture product (SMAP L4_SM, Reichle et al. 2023), which includes a specialized model parameterized for peatland processes (PEATCLSM, Bechtold et al. 2019). Sentinel-1 and Sentinel-2 are employed to downscale the 9 km resolution SMAP L4_SM global hydrological estimates to a finer spatial resolution of 100 m, improving their applicability for monitoring spatial variability within and across specific peatlands. To address the complexity of backscatter-to-water-level relationships in Sentinel-1 data, the SMAP L4_SM product is used to resolve ambiguities. In particular, backscatter increases as the water level drops further below ground, while backscatter was mostly found to decrease with increasing inundation fraction due to specular reflection. A change detection approach using SMAP L4_SM identifies the water level regime, enabling the assessment of inundation durations and of the periods when backscatter can track subsurface water level variations. The optical trapezoid model (OPTRAM) is applied to Sentinel-2 data at 20 m resolution. At this resolution, the SMAP L4_SM product is used to identify the pixels with the highest soil moisture sensitivity. These pixels are then used to aggregate the soil moisture index to the same resolution as the Sentinel-1 data. Both soil moisture indices are rescaled to the peatland-specific variables of the SMAP L4_SM product. 
In the last step, a bias correction is performed to ensure that the total time of inundation indicated by the product matches that derived from the Sentinel-1 data. The product will be provided with uncertainty information for each pixel. The downscaled datasets are validated across boreal, temperate, and tropical peatlands using time series of in situ water level data and surface water maps from high-resolution optical imagery. Preliminary validation results highlight considerable spatial variability in the skill of the new product. We discuss how this variability correlates with site characteristics and the uncertainty estimates of the product. Our approach targets a scalable and transferable method for monitoring peatland hydrology, addressing critical needs in management and conservation. Understanding hydrological state variables is essential due to their primary role in regulating ecosystem services. While SMAP L4_SM may not be directly useful for stakeholders at the management scale, the downscaled product holds significant potential for management applications. This method could become an operational tool for researchers and practitioners across diverse peatland research and application fields. This work is part of the ESA WorldPeatland project. References: Bechtold, M., De Lannoy, G. J. M., Koster, R. D., Reichle, R. H., et al.: PEAT-CLSM: A specific treatment of peatland hydrology in the NASA Catchment Land Surface Model, Journal of Advances in Modeling Earth Systems, 11, 2130–2162, 2019. Reichle, R. H., Liu, Q., Ardizzone, J. V., Bechtold, M., Crow, W. T., De Lannoy, G. J. M., Kimball, J. S., and Koster, R. D.: Soil Moisture Active Passive (SMAP) Project Assessment Report for Version 7 of the L4_SM Data Product, NASA Technical Report Series on Global Modeling and Data Assimilation, 64, 87 pp, 2023.
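One common way to rescale a satellite index to the distribution of a model variable, as the abstract describes for the Sentinel indices and the SMAP L4_SM output, is CDF matching: map each index value to the reference quantile at the index value's empirical non-exceedance probability. The sketch below is one plausible implementation of that idea, not necessarily the project's exact method:

```python
import numpy as np

def cdf_match(index, reference):
    """Rescale `index` so its empirical distribution matches `reference`
    (e.g. a modelled water table depth series), preserving rank order."""
    index = np.asarray(index, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Non-exceedance probability of each index value within its own series
    ranks = np.searchsorted(np.sort(index), index, side="right") / index.size
    # Look up the corresponding quantiles of the reference distribution
    return np.quantile(reference, ranks)
```

The rescaled series inherits the units and range of the reference variable while keeping the temporal dynamics of the satellite index, which is what makes the fine-scale product comparable to the coarse model output.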
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Four decades of peatland monitoring (1985-2022) in the Baltic Sea region based on extended annual land cover products from a Landsat and Sentinel-2 data cube

Authors: Sebastian van der Linden, Viet Nguyen, Vu Dong Pham, Cosima Tegetmeyer, Fabian Thiel, Farina de Waard, Alexandra Barthelmes
Affiliations: University of Greifswald, Institute of Geography and Geology, Partner in the Greifswald Mire Centre, University of Greifswald, Institute of Botany and Landscape Ecology, Partner in the Greifswald Mire Centre
Peatlands store more carbon than any other ecosystem, but their drainage and the extraction of peat have caused severe degradation, e.g. in northern Europe’s temperate and boreal climate zones. Peatland degradation causes huge greenhouse gas (GHG) emissions, land surface subsidence, water eutrophication and biodiversity loss. Though 500,000 km² of degraded peatlands cover only 0.3% of the Earth's total land area, they contribute a disproportionate 5% of global GHG emissions. Both the extraction of peat from drained peatlands and the change of land use towards agriculture or forestry on the formerly wet land cause such GHG emissions. Nowadays, mitigating GHG emissions from peatlands by ecological restoration or sustainable agricultural use under wet conditions receives increasing attention. However, to do this successfully, the current and past use of the peatlands and the land use change trajectories need to be understood. Earth observation (EO) can substantially support this. Mapping current and past peatland degradation and monitoring the effects of peatland restoration require land cover (LC) products with high spatial and temporal resolution and a very high level of thematic detail. The Baltic Sea Region Land Cover plus (BSRLC+) product (Pham et al., Sci. Data, 2024, DOI: 10.1038/s41597-024-04062-w) provides such information beyond most other available Earth observation products. It covers the Baltic Sea region (BSR), i.e., Denmark, Estonia, Latvia, Lithuania, the north of Poland and Germany, the south of Sweden and Finland, plus coastal regions of Russia. BSRLC+ has 30 m spatial resolution, with annual temporal resolution between 2000 and 2022 and tri-annual resolution from 1985 to 1997. It extends the class schemes of regular large-area LC products such as World Cover by 8 crop types and two peatland classes: exploited bog and unexploited bog. 
Based on this unique data set, we performed a comparative study of land cover changes for peatlands in countries within the BSR. We used the polygons from the Global Peatland Map 2.0 and analysed and quantified (i) the LC change trajectories from and to peat extraction in bogs, as well as the duration of extraction periods, and (ii) the LC trajectories for drained peatland areas under agricultural land use. With our work, we showcase how EO data can help monitor the impact of land use changes in peatlands and thereby support restoration efforts, e.g., under the EU’s new Nature Restoration Law.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: A.03.06 - POSTER - Exploring ground-based, airborne and satellite observations and concepts for the carbon cycle

The remote sensing community is actively developing innovative observation concepts for the carbon cycle, to collect the crucial data at different spatial and temporal scales required to study and improve understanding of the underlying geophysical processes. Observations from new airborne and ground-based instruments play a vital role in developing new applications that benefit from integrated sensing.

These new concepts need to go hand in hand with the mathematical understanding of the theoretical frameworks including uncertainty estimates. This session invites presentations on:
- innovative observations of geophysical products focussing on the carbon cycle
- innovative applications based on integrated sensing
- feedback and lessons learned from ongoing or planned developments, as well as from first ground-based or airborne campaigns
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Impact of cluster configuration of forest inventory plots on representing AGB density within map units

Authors: Dr. Natalia Málaga, Dr. Sytze de Bruin, Dr. Andrew J. Lister, Dr. Daniela Requena Suarez, Dr. Arnan Araza, Martin Herold
Affiliations: Helmholtz Center Potsdam German Research Centre for Geosciences, Section 1.4 Remote Sensing and Geoinformatics, Laboratory of Geo-Information Science and Remote Sensing, Wageningen University and Research, USDA Forest Service, Environmental Systems Analysis, Wageningen University and Research
While National Forest Inventories (NFIs) serve as the primary data source for country-level forest aboveground biomass (AGB) estimates, many tropical countries still face challenges in completing and updating their inventories. Meanwhile, advancements in remote sensing-based biomass products, combined with future satellite missions, provide new avenues for overcoming these challenges. These innovations enable the integration of space-based biomass maps with ground-based information to support forest AGB estimation, a critical component of greenhouse gas (GHG) mitigation and adaptation strategies. However, integrating biomass maps with NFI information poses several challenges, which include handling differences between the spatial support of the field-based sampling units and the map units. This study assesses the degree to which six spatial plot configurations commonly used in tropical NFIs (two single plots and four common NFI cluster plot designs) characterize mean AGB density within fixed-size rectangles representing biomass map units in a tropical and a temperate forest site. Employing a discrete bottom-up modelling approach by means of a hierarchical marked point process (HMPP) framework, we simulate forest AGB densities by accounting for tree-tree and other ecological interactions that influence the spatial distribution of trees within forest stands. These include asymmetric competition between larger and smaller trees, and clustering of trees due to environmental conditions and natural disturbances. Our results show that the spatial configuration of cluster plots impacts the accuracy and precision of AGB density estimates within map units. Notably, cluster plot configurations consistently outperformed single plots of equal size (0.5 ha), offering enhanced precision by capturing a wider range of AGB variability within map units. For both sites, AGB was found to be spatially structured as opposed to completely random. 
We also found that the L-shaped cluster configuration can lead to selection bias in the case of monotonic spatial AGB trends. Our study contributes to understanding the impact of plot spatial configuration on map-to-plot intercomparison analyses, which are essential to any application integrating ground-based information with remote sensing-derived products. Insights derived from the study could inform the design of future ground-based campaigns.
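The core effect, that dispersed cluster subplots estimate a map unit's mean AGB more precisely than a compact plot of the same total area when AGB is spatially structured, can be reproduced with a toy simulation. The smoothed random field below is a simple stand-in for the HMPP-simulated stands, and the plot geometries are illustrative, not the study's actual NFI designs:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Spatially autocorrelated toy AGB surface (Mg/ha) over one map unit
field = gaussian_filter(rng.normal(size=(100, 100)), sigma=8)
agb = 150 + 40 * field / field.std()
unit_mean = agb.mean()

def design_errors(offsets, half, reps=2000):
    """Error in estimating the map-unit mean for one plot design:
    square subplots of side 2*half at the given offsets around a
    randomly placed cluster centre."""
    err = np.empty(reps)
    for r in range(reps):
        cx, cy = rng.integers(20, 80, 2)
        vals = [agb[cy + dy - half:cy + dy + half,
                    cx + dx - half:cx + dx + half].mean()
                for dx, dy in offsets]
        err[r] = np.mean(vals) - unit_mean
    return err

single = design_errors([(0, 0)], half=10)                # one 20x20 plot
cluster = design_errors([(-15, -15), (15, -15),
                         (-15, 15), (15, 15)], half=5)   # same total area
```

Because the four dispersed subplots sample quasi-independent patches of the autocorrelated surface, the spread of `cluster` errors comes out smaller than that of `single`, mirroring the precision gain the abstract reports for cluster designs.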
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Tree level biomass through self-supervised reconstruction of ALS point clouds: Application to monospecific French forests.

Authors: Alvin Opler
Affiliations: Laboratoire des Sciences du Climat et de l'Environnement, LSCE/IPSL
Individual tree resource monitoring and forest ecosystem assessment have conventionally relied on detailed plot-scale data [1]. Current state-of-the-art methods [2,3] have alleviated this dependency using deep learning on air-/spaceborne imagery. These models learn to reproduce human-labelled segmentations while using field data only for validation purposes. Nevertheless, the manual labels used vary in quality and are often not precise enough in dense forests [4]. While the literature extensively uses imagery for individual tree detection (ITD) tasks, the use of lidar data is less conventional. Indeed, existing works often project concurrent point cloud data to 2D inputs [2, 5] to be used alongside aerial images for further predictions. Where the full 3D structure is used, the models in the literature require high point density and manually segmented tree datasets [6,7], which is not suitable for large-scale studies. In this work, we present a fully autonomous framework that learns to segment individual trees using solely airborne lidar. We leverage the full potential of 3D data using a state-of-the-art transformer architecture [8] and reconstruction methods [9]. The model first learns a wide range of deformable tree prototypes from the FOR-species20K [10] dataset in order to fit them to existing point clouds. This deep-learning procedure enables segmentation across a wide range of regions while requiring little or no manual labelling. The framework further allows key features of individual trees to be extracted, such as volume, crown area, species and carbon stock. As a case study, we apply our model to the PureForest dataset [11], a French large-scale dataset of monospecific plots initially created for species classification.
As a first downstream task, we create high-quality segmentation labels and a precise description of carbon stock distribution in dense forests across the whole dataset (339 km2). Furthermore, we show several applications of such a dataset, including biomass and height estimation, competition indices, fire-spread modelling and others. Our results highlight the importance of very-high-resolution lidar data for accessing local above-ground biomass features.
References:
[1] Pellissier-Tanon, A. et al. (2024). Combining satellite images with national forest inventory measurements for monitoring post-disturbance forest height growth. Front. Remote Sens. 5:1432577. doi: 10.3389/frsen.2024.1432577
[2] Li, S. et al. (2023). Deep learning enables image-based tree counting, crown segmentation, and height prediction at national scale. PNAS Nexus. https://doi.org/10.1093/pnasnexus/pgad076
[3] Brandt, M., Chave, J., Li, S. et al. (2024). High-resolution sensors and deep learning models for tree resource monitoring. Nat Rev Electr Eng. https://doi.org/10.1038/s44287-024-00116-8
[4] Veitch, J. et al. (2024). OAM-TCD: A globally diverse dataset of high-resolution tree cover maps. Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. https://openreview.net/forum?id=I2Q3XwO2cz
[5] Roussel, J.R. et al. (2021). lidR: An R package for analysis of Airborne Laser Scanning (ALS) data. Remote Sensing of Environment. doi: 10.1016/j.rse.2020.112061
[6] Wielgosz, M. et al. (2024). SegmentAnyTree: A sensor and platform agnostic deep learning model for tree segmentation using laser scanning data. Remote Sensing of Environment. https://doi.org/10.1016/j.rse.2024.114367
[7] Xiang, B. et al. Automated forest inventory: Analysis of high-density airborne LiDAR point clouds with 3D deep learning. Remote Sensing of Environment. https://doi.org/10.1016/j.rse.2024.114078
[8] Robert, D. et al. (2024). Scalable 3D Panoptic Segmentation as Superpoint Graph Clustering. Proceedings of the IEEE International Conference on 3D Vision. https://drprojects.github.io/supercluster
[9] Loiseau, R. et al. (2024). Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans. CVPR. https://arxiv.org/abs/2304.09704
[10] Puliti, S. et al. Benchmarking tree species classification from proximally-sensed laser scanning data: introducing the FOR-species20K dataset. doi: 10.48550/arXiv.2408.06507
[11] Gaydon, C. and Roche, F. (2024). PureForest: A Large-Scale Aerial Lidar and Aerial Imagery Dataset for Tree Species Classification in Monospecific Forests. https://arxiv.org/abs/2404.12064

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: High-Resolution Gross Primary Productivity Estimation from the Synergy of Sentinel-2 and ERA5

Authors: Emma De Clerck, Dávid D. Kovács, Pablo Reyes-Muños, Dr. Jochem
Affiliations: Universitat de València
Background: Accurate, high-resolution Gross Primary Productivity (GPP) estimation is essential for understanding the carbon cycle, as GPP represents the total carbon dioxide uptake by vegetation through photosynthesis. This metric provides key insights into ecosystem carbon sequestration, a critical factor in assessing terrestrial carbon sinks and sources. Monitoring GPP at fine spatial scales allows for detailed understanding of carbon flux variations, which can influence regional and global carbon budgets. Leveraging remote sensing data such as Sentinel-2 and integrating climate reanalysis data from ERA5, accessible on platforms like Google Earth Engine and OpenEO, opens up new possibilities for scalable, landscape-level GPP monitoring. However, traditional models lack broad applicability across diverse vegetation types and geographic regions due to limited resolution and site-specific constraints, making it challenging to derive GPP estimates that are both detailed and widely applicable. Objective: This study aims to create a high-resolution GPP estimation model by combining Sentinel-2 multispectral satellite data with ERA5 Land climate reanalysis data, focusing on delivering an adaptable and reliable model for different plant functional types (PFTs). The model is validated using in situ observations from ICOS flux tower sites across Europe, ensuring relevance and robustness across diverse landscapes. This approach seeks to improve the accuracy of GPP predictions for applications in carbon cycle research and policy development. Methods: Several methodologies are under consideration for achieving the study objectives. These include using Radiative Transfer Models (RTMs), data-driven approaches, or a hybrid method combining simulation data and real-world observations. 
The use of tools such as SCOPE (Soil Canopy Observation of Photosynthesis and Energy fluxes), machine learning techniques such as Gaussian Process Regression, and ICOS flux tower data is being explored to enhance the accuracy and adaptability of the models. Sentinel-2’s high spatial resolution and ERA5’s meteorological insights provide a robust basis for location-specific GPP predictions. The final methodology will aim to balance precision, scalability, and ease of implementation on platforms like Google Earth Engine and OpenEO. Results: The resulting framework is anticipated to produce high-resolution GPP maps tailored to different PFTs, including spatial uncertainty estimates to ensure model transparency. Validation using ICOS flux tower measurements will assess the framework's reliability across diverse European ecosystems. By integrating Sentinel-2 and ERA5 data, this approach is expected to deliver actionable insights into regional and ecosystem-specific carbon flux dynamics. Conclusion: This research seeks to demonstrate the feasibility of a high-resolution, adaptable GPP estimation framework, leveraging remote sensing and climate reanalysis data. The model’s integration with accessible platforms aims to support ecological monitoring, carbon cycle research, and informed climate policy decision-making, enabling broad adoption by researchers and stakeholders.
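The Gaussian Process Regression option mentioned above can be sketched in plain NumPy: a GP posterior fitted to synthetic predictors standing in for Sentinel-2 reflectances plus one ERA5 meteorological variable, where the predictive variance is what would supply the per-pixel uncertainty estimates the study targets. All data, kernel settings and noise levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, length_scale=0.5, variance=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

# Invented predictors: three "Sentinel-2 band" features plus one "ERA5
# meteorology" feature, with a made-up GPP-like response.
X = rng.uniform(0, 1, (120, 4))
y = 10 * X[:, 2] + 5 * X[:, 3] + rng.normal(0, 0.3, 120)

Xtr, ytr, Xte = X[:100], y[:100], X[100:]
noise = 0.1  # assumed observation noise variance

# Standard GP posterior mean and variance
K = rbf_kernel(Xtr, Xtr) + noise * np.eye(100)
Ks = rbf_kernel(Xte, Xtr)
alpha = np.linalg.solve(K, ytr - ytr.mean())
mean = Ks @ alpha + ytr.mean()
var = np.diag(rbf_kernel(Xte, Xte) - Ks @ np.linalg.solve(K, Ks.T))
print(mean.shape, var.shape)
```

In a real pipeline the kernel hyperparameters would be optimized against ICOS flux tower observations rather than fixed by hand.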

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Wetland and anthropogenic emissions of methane and carbon dioxide: Results and lessons learned from the MAGIC international campaigns and plans for future deployment in Brazil

Authors: Cyril Crevoisier, Caroline Bès, Jérôme Pernin, Axel Guedj, Thomas Ponthieu, Félix Langot, Lilian Joly, Nicolas Dumélié, Bruno Grouiez, Thomas Lauvaux, Charbel Abdallah, Yao Té, Pascal Jeseck, Michel Ramonet, Julien Moyé, Morgan Lopez, Hervé Herbin, Valéry Catoire, Nicolas Cézard, Julien Lahyani, Andreas Fix, Matthieu Quatrevalet, Anke Roiger, Klaus-Dirk Gottschaldt, Alina Fiehn, Rigel Kivi, Stéphane Louvel, Frédéric Thoumieux, Aurélien Bourdon
Affiliations: CNRS/LMD, CNES, GSMA/URCA, MONARIS/SU, LSCE/IPSL, LOA, LPC2E, ONERA, DLR, FMI, SAFIRE
In August 2021, a large-scale international campaign called MAGIC2021 took place in Scandinavia, with two objectives: 1) to improve knowledge of wetland emissions of methane; 2) to validate satellite missions measuring greenhouse gases (TROPOMI/Sentinel-5P, OCO-2, IASI) in the circumpolar region. Led by CNRS and CNES, the campaign gathered 70 scientists from 14 research teams. Over two weeks, more than twenty instruments were deployed on 3 research aircraft, several small and large stratospheric balloons, as well as on the ground. They covered a large region extending from Abisko Lake (Sweden) to Sodankylä (Finland) and combined in-situ (air samplers, CRDS analyzers) and remote sensing (lidars, spectrometers) observations of atmospheric methane concentration. It was followed by 2 consecutive campaigns, MAGIC2022 and MAGIC2023, in the mid-size city of Reims, France, with the aim of evaluating anthropogenic emissions of CO2 and CH4 from the city and surrounding industries, including sugar factories. Both campaigns gathered about 50 scientists and involved the deployment of twenty instruments onboard 3 research aircraft, on small balloons and on the ground. Specific measurements were performed above the city to map the 3D atmospheric concentration of both gases at 1 km horizontal and 500 m vertical resolution in order to prepare for the use of satellite city-mode observations such as those planned for MicroCarb or CO2M. In this talk, we will present the main results derived from these unique datasets of gas concentration vertical profiles (from the ground to the mid-stratosphere), weighted columns (from the ground or from aircraft) and 2D coverage at several altitudes of emission hotspots gathered during the MAGIC2021-2023 campaigns. We will show evaluations of several wetland and anthropogenic emission inventories.
We will also present evidence of the crucial need for a better understanding of the vertical distribution of methane concentration, and of the benefit of combining observations from the short-wave (TROPOMI/Sentinel-5P) and thermal infrared (IASI/Metop) to account for large-scale horizontal transport (e.g. fire emissions). Finally, we will present some lessons learned from the campaigns and introduce plans for a future large-scale campaign focusing on tropical wetlands (Brazil) in summer 2026. Main funding for the MAGIC campaigns came from CNES, CNRS, ESA, EUMETSAT and DLR (https://magic.aeris-data.fr).

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: An optimized Land Parameter Retrieval Model parametrization for improved vegetation optical depth estimates

Authors: Univ.Ass.in Dipl.-Ing.in Ruxandra-Maria Zotta, Richard de Jeu, Nicolas Francois Bader, Wolfgang Preimesberger, Dr. Thomas Frederikse, Wouter Dorigo
Affiliations: TU Wien, Transmissivity B.V., Planet Labs
Monitoring long-term vegetation dynamics is crucial for many environmental studies, including carbon cycle modelling. Therefore, there is a need to continuously enhance the quality of spaceborne vegetation estimates. Vegetation optical depth (VOD) is a radiative transfer model (RTM) parameter retrieved from brightness temperature (TB) measurements and is closely related to vegetation water content and biomass. VOD observations have been used extensively in applications related to the carbon cycle, including estimating gross primary productivity and monitoring above-ground biomass. The Land Parameter Retrieval Model (LPRM) is a well-known RTM which simultaneously retrieves soil moisture and VOD using the microwave polarization difference index in a radiative transfer equation. LPRM is a forward model that runs the RTM iteratively over a wide range of soil moisture and VOD scenarios, modelling TB. Soil moisture derived through LPRM is used in the European Space Agency (ESA) Climate Change Initiative (CCI) Soil Moisture project framework, which produces globally consistent, long-term, multi-sensor time series of satellite soil moisture data. Several calibrations of the LPRM have been carried out throughout the project to improve the quality of soil moisture estimates and to facilitate better sensor harmonization. Nonetheless, the implications of different parametrizations on VOD retrievals have yet to be fully explored, an oversight this study strives to address. Here, we use TB from the Advanced Microwave Scanning Radiometer 2 (AMSR2) to investigate whether calibrating the model parameters can improve the VOD retrieved through LPRM. We focus on three critical model parameters: single scattering albedo, surface roughness and effective temperature. We simultaneously optimize these parameters by minimizing the errors between the time series of simulated and observed TB at each location.
Additionally, we perform a sensitivity analysis using the Sobol method to disentangle the impact of these parameters at each location. The VOD estimates resulting from the optimization scenarios are assessed against independent vegetation datasets, such as MODIS fAPAR and against alternative AMSR2 VOD datasets, such as the Land Parameter Data Record (LPDR) and AMSR2-IB. Our findings indicate that the optimization procedures significantly improve the VOD, particularly in deciduous, needle-leaf, and mixed forest areas. These advancements render the new retrievals more applicable for carbon cycle research.
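The per-location calibration described above, minimizing the mismatch between simulated and observed TB, can be sketched with a simplified zeroth-order tau-omega forward model and a plain grid search over single scattering albedo and roughness. The forward model, the soil-reflectivity parametrization and all numbers here are illustrative stand-ins, not LPRM itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def tau_omega_tb(ts, sm, vod, omega, h, theta=np.deg2rad(55)):
    """Zeroth-order tau-omega forward model (simplified, illustrative).
    Smooth-soil reflectivity is an invented linear function of soil
    moisture, damped by the roughness parameter h."""
    r = (0.1 + 0.5 * sm) * np.exp(-h)     # rough-surface reflectivity
    gamma = np.exp(-vod / np.cos(theta))  # canopy transmissivity
    return ts * ((1 - omega) * (1 - gamma) * (1 + r * gamma) + (1 - r) * gamma)

# Synthetic "observed" TB time series generated with known parameters
ts = rng.uniform(280, 300, 50)    # effective temperature (K)
sm = rng.uniform(0.1, 0.4, 50)    # soil moisture (m3/m3)
vod = rng.uniform(0.2, 0.8, 50)   # vegetation optical depth
tb_obs = tau_omega_tb(ts, sm, vod, omega=0.06, h=0.3) + rng.normal(0, 0.5, 50)

# Joint calibration of (omega, h) by minimizing TB RMSE over a grid;
# a real setup would use a proper optimizer at each location.
omegas = np.linspace(0.0, 0.2, 41)
hs = np.linspace(0.0, 1.0, 51)
rmse = np.array([[np.sqrt(np.mean((tau_omega_tb(ts, sm, vod, w, hh) - tb_obs) ** 2))
                  for hh in hs] for w in omegas])
i, j = np.unravel_index(rmse.argmin(), rmse.shape)
print(omegas[i], hs[j], rmse.min())
```

With enough variation in soil moisture and VOD across the time series, the two parameters are separable, which is the premise of the joint optimization.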

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: 3D-Biomass: Biomass Estimation at Different Height Intervals Using Terrestrial LiDAR Scanning Data

Authors: Qian Song, Zhilin Tian, Dr. Benjamin Brede, Martin Herold, Dr. Mike Sips
Affiliations: Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences
Beyond tree canopy height and total above-ground biomass estimation, there is increasing interest in exploring the 3D structure of forest plots given the availability of high-detail, high-point-density Terrestrial LiDAR Scanning (TLS) data. In this study, we propose a new concept of 3D Biomass that measures the biomass of a forest at different height layers. It consists of four main steps: individual tree delineation, leaf point exclusion, 3D reconstruction of the tree, and 3D Biomass calculation. The TLS data were acquired over the Netherlands, Ghana and Germany in 2017, 2023 and 2019, respectively. Previous studies processed and segmented these point clouds into individual trees. The dataset consists of tree point clouds of multiple species, including both coniferous and deciduous trees. Each individual tree point cloud was then segmented into leaf points and non-leaf points using the GBS (Graph-Based Leaf-Wood Separation) algorithm. This algorithm first builds a network graph on the tree point cloud, then gradually extracts woody points via shortest-path analysis. In this way, lidar points belonging to leaves are separated from those belonging to branches and trunk. The leaf points are discarded from further analysis because: 1) the 3D reconstruction errors of the leaf parts are significantly higher; 2) they affect branch modelling (twice the error compared with modelling branch-only points); 3) the biomass of tree leaves is usually assumed to be negligible. Thereafter we use the TreeQSM (Quantitative Structure Models for Trees) tool to reconstruct the 3D structure of the delineated branch and trunk points. TreeQSM models the stem and branches as one or multiple cylinders by fitting cylinders to the neighbouring points. We used the mean relative point-to-model distance (mR-PMD) to evaluate the 3D modelling quality. Results suggest that the model error increases with branch order (from 22% to 220%), which might be due to the sparse points of thin branches.
Based on the parameters of the cylinders, we calculated the tree's volume within different height intervals, from which 3D Biomass is derived by multiplying by an average wood density. In the future, we will use the proposed pipeline to map above-ground 3D Biomass of plots over the Netherlands and Ghana using the acquired TLS data. The vertical carbon stock patterns of different tree species will be analyzed.
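The final step, converting QSM cylinders into per-layer volume and biomass, can be sketched as follows. The cylinder list and the wood density are invented, and cylinders are reduced to their vertical spans for simplicity (real TreeQSM cylinders carry full 3D axes).

```python
import numpy as np

# Toy QSM output: each cylinder as (z_base, z_top, radius) in metres.
cylinders = [
    (0.0, 4.0, 0.15),   # trunk section
    (4.0, 8.0, 0.10),   # trunk section
    (5.0, 7.0, 0.04),   # first-order branch
    (7.0, 9.0, 0.02),   # higher-order branch
]
wood_density = 500.0  # kg/m3, illustrative average

def volume_by_layer(cyls, layer_edges):
    """Distribute each cylinder's volume over height layers in proportion
    to the fraction of its vertical span falling inside each layer."""
    vols = np.zeros(len(layer_edges) - 1)
    for z0, z1, r in cyls:
        v = np.pi * r**2 * (z1 - z0)      # cylinder volume
        for k in range(len(layer_edges) - 1):
            lo, hi = layer_edges[k], layer_edges[k + 1]
            overlap = max(0.0, min(z1, hi) - max(z0, lo))
            vols[k] += v * overlap / (z1 - z0)
    return vols

edges = np.arange(0.0, 10.1, 2.0)        # 2 m height intervals
vols = volume_by_layer(cylinders, edges)
biomass = vols * wood_density            # "3D Biomass" per layer, kg
print(np.round(biomass, 2))
```

Because each cylinder's volume is split proportionally across layers, the per-layer volumes sum exactly to the whole-tree QSM volume.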

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Enhancing Agroforestry Biomass Estimation Using Multitask Learning and Structural Diversity from GEDI, ALOS PALSAR and Sentinel Data

Authors: Xi Zhu, Head of remote sensing Mila Luleva, Yaqing
Affiliations: Rabobank
Accurate biomass estimation is critical for assessing carbon sequestration and supporting sustainable agroforestry systems. In this study, we present a two-step approach for biomass estimation that integrates advanced remote sensing techniques and machine learning. First, we estimate three structural variables—GEDI-derived 95th percentile tree height (H95), canopy cover (CC), and foliage height diversity (FHD)—from Sentinel and ALOS data using a UNet-based deep learning model. The performance of this model is validated against high-resolution airborne lidar data. Second, we use these retrieved variables to model aboveground biomass (AGB) in agroforestry systems, leveraging a small number of ground truth samples. We evaluate the contribution of vertical structural diversity to biomass estimation using a simple parametric model with leave-one-out cross-validation. In the first step, our results demonstrate that multitask learning within the UNet framework outperforms single-task learning, with an accuracy improvement of 5–10% for the retrieval of the three structural variables. This highlights the benefit of leveraging shared features across tasks to improve model performance. Validation with airborne lidar data confirms the reliability of the retrieved GEDI variables, emphasising the potential of Sentinel data as a cost-effective alternative for large-scale structural mapping. In the second step, incorporating foliage height diversity into the biomass estimation model led to a 4% increase in accuracy (R2: 0.71, RMSE: 6.69 ton/ha) compared to using height and canopy cover alone (R2: 0.67, RMSE: 7.13 ton/ha). This improvement suggests that diversity metrics, likely linked to species diversity and wood density variations, play a significant role in biomass prediction. 
The results highlight the importance of accounting for structural complexity, particularly in heterogeneous systems like agroforestry, where a mix of species and planting densities challenges traditional estimation approaches. The implications of this study are significant for both research and practice. By demonstrating the utility of GEDI structural variables retrieved from Sentinel and ALOS data, we provide a scalable framework for biomass estimation in agroforestry systems, which are underrepresented in traditional forest inventory methods. The use of multitask learning not only enhances accuracy but also streamlines the estimation of key structural variables, reducing the need for extensive field data collection. Furthermore, the integration of structural diversity into biomass models aligns with ecological principles, emphasising the role of biodiversity in ecosystem functions such as carbon storage. This study underscores the potential of combining advanced machine learning techniques with satellite data to improve biomass estimation in complex landscapes. The approach is particularly relevant for agroforestry systems, where accurate biomass assessment is crucial for designing effective carbon projects and promoting sustainable land management. Future work could explore the integration of additional GEDI metrics and extend the approach to other land-use systems to further validate its robustness and scalability.
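The leave-one-out evaluation of a simple parametric biomass model with and without the diversity term can be sketched with invented ground-truth samples; the accuracy figures reported above come from the study's real data, not from this synthetic example.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented ground-truth samples: H95 (m), canopy cover (0-1), FHD (unitless)
n = 30
H95 = rng.uniform(5, 25, n)
CC = rng.uniform(0.2, 0.9, n)
FHD = rng.uniform(1.0, 3.0, n)
agb = 2.0 * H95 + 15.0 * CC + 8.0 * FHD + rng.normal(0, 3.0, n)  # ton/ha, toy

def loo_rmse(X, y):
    """Leave-one-out cross-validated RMSE of an ordinary least-squares model."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs.append(y[i] - X[i] @ coef)
    return float(np.sqrt(np.mean(np.square(errs))))

ones = np.ones(n)
rmse_hc = loo_rmse(np.column_stack([ones, H95, CC]), agb)         # height + cover
rmse_full = loo_rmse(np.column_stack([ones, H95, CC, FHD]), agb)  # + diversity
print(round(rmse_hc, 2), round(rmse_full, 2))
```

In this synthetic setup FHD genuinely contributes to AGB, so the full model achieves a lower cross-validated RMSE, mirroring the qualitative pattern the study reports.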

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Retrieving long-term colored dissolved organic matter absorption coefficient and dissolved organic carbon concentrations in the Mackenzie River–Beaufort Sea using CMEMS GlobColour merged product

Authors: Maria Sanchez-Urrea, Dr. Martí Galí, Dr. Marta Umbert, Dr. Carolina Gabarró, Dr. Eva De Andres, Dr. Rafael Gonçalves-Araujo
Affiliations: Institute of Marine Sciences - Spanish National Research Council (ICM-CSIC), Universitat Politècnica De Catalunya · Barcelona Tech (UPC), Universidad Politécnica de Madrid (UPM), Technical University of Denmark (DTU)
In the rapidly changing Arctic, increasing organic carbon export from river systems is anticipated due to shifts in hydrology and permafrost thawing. These changes have profound implications for the biogeochemical cycles of coastal and shelf environments, emphasizing the need for robust monitoring of major Arctic rivers. Ocean color remote sensing has emerged as a valuable tool during the ice-free season, particularly for remote and undersampled regions like the Beaufort Sea. By providing extensive spatial and temporal coverage, it bridges gaps left by sparse in situ data, enhancing our understanding of land-ocean carbon fluxes and nearshore processes. Remote sensing of Chromophoric Dissolved Organic Matter (CDOM) and Dissolved Organic Carbon (DOC) has proven effective in capturing the variability of terrestrial carbon exports. However, seasonal variations and diverse ecological characteristics across Arctic river basins present significant challenges for developing universal retrieval algorithms. As a result, region-specific approaches have been prioritized, though long-term datasets remain limited. This study introduces a 26-year satellite-derived dataset (1998–2023) quantifying CDOM absorption at 443 nm (aCDOM(443)) and DOC concentrations in the Mackenzie River–Beaufort Sea system (122–142°W, 68–73°N). Data were generated using the multi-sensor CMEMS GlobColour merged product and a regionally adapted GIOP (Generalized Inherent Optical Properties) algorithm. The relationship between aCDOM(443) and DOC was calibrated specifically for the study area. The approach performs well when validated against in situ observations, as evidenced by the MdAPD (r2) for aCDOM and DOC of 54.5% (0.68) and 32.4% (0.65), respectively. This dataset enables detailed investigations of plume dynamics, interannual variability, and long-term trends. Comparisons with independent in situ DOC records (1999–2017) revealed consistent variability patterns.
Interestingly, contrary to initial assumptions, both aCDOM(443) and DOC exhibited a significant decline at the Mackenzie River mouth (−0.017 m⁻¹ yr⁻¹ and −3.40 M yr⁻¹, respectively) over the 26-year period. These trends align with observed decreases in river discharge, suggesting a potential link between hydrological changes and the declining export of terrestrial organic carbon in the region. This finding highlights the need for further examination of interannual variability and the uncertainties in satellite-derived carbon metrics.
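The trend estimation behind figures such as the reported decline of −0.017 m⁻¹ yr⁻¹ can be sketched as a simple least-squares fit on an annual series. Here the decline is imposed on invented data, so the recovered slope illustrates the calculation only, not a result of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Annual aCDOM(443) series for 1998-2023 with an imposed decline of the
# reported magnitude (-0.017 per metre per year); values are invented.
years = np.arange(1998, 2024)
acdom = 1.5 - 0.017 * (years - 1998) + rng.normal(0, 0.05, years.size)

slope, intercept = np.polyfit(years, acdom, 1)
print(round(slope, 4))  # close to the imposed -0.017
```

For noisy satellite-derived series, a robust estimator such as Theil–Sen is often preferred over ordinary least squares, but the idea is the same.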

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Mapping and Measuring Methane Release From Boreal Peatlands and Swamps: Testing the Capability of a Ground-Based and Airborne Long-Wave Infrared Hyperspectral Imager

Authors: Luke Richardson-Foulger, Professor Martin Wooster, Callum Middleton, Dr Mark Grosvenor, Dr. José Gómez-Dans, Dr. William Maslanka
Affiliations: National Centre for Earth Observation - King's College London, Leverhulme Wildfires Centre - King's College London
Boreal peatlands store more than 30% of Earth's terrestrial carbon, despite occupying only 3% of its surface. Changing rainfall patterns, increasing global temperatures and poor human management have resulted in drier ecosystems. Peatlands are the largest natural sources of methane, a potent greenhouse gas, and these emissions could be further exacerbated by drying conditions. This threatens to turn peatlands from a net carbon sink into a carbon source, which could accelerate climate change in the coming century. In 2021, the EU and US launched the Global Methane Pledge to reduce methane emissions by 30% between 2020 and 2030. Understanding the nature and severity of methane release from peatlands is thus a critical goal for greenhouse gas monitoring, aligning with international climate commitments. In this context, an ESA-funded campaign to assess the feasibility of detecting and measuring methane and nitrous oxide release from emission targets was conducted in Alberta, Canada in 2024. A long-wave FTIR-based hyperspectral imager was deployed on-site at various wetland and industrial sites across the province. The objective was to refine the use of FTIR imaging to detect low-level methane emissions with maximum sensitivity. Preliminary findings indicate that methane release from peatlands can be detected with reasonable spectral fidelity under specific meteorological conditions and with careful scene composition. This approach enables the mapping of methane emissions across peatlands and swamps from the ground, which will be compared to point-source measurements from flux chamber deployments. In addition to the ground measurements, a series of survey flights was conducted with an aircraft equipped with a long-wave FTIR imager, a VNIR-SWIR hyperspectral imager, and atmospheric sampling instrumentation. The flights covered wetlands and industrial sites, with the objective of similarly testing the capability of such instrumentation to detect and measure methane signals.
Early results indicate the difficulty in translating local measurements of emissions from the ground to wide-area surveys from an aircraft. The presentation will summarise the campaign, methodological approaches for the ground measurements and airborne surveys, and the core results. The efficacy of each approach and sensor will be discussed, including the nuances of measurement conditions and sensor adjustment. It will conclude with reflections on whether this approach could help improve ecosystem monitoring and support global methane emission reduction efforts.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: The Sentinel-3 OLCI and SLSTR Surface Reflectance Product of the Copernicus Land Monitoring Service

Authors: Carolien Toté, Dominique De Munck, Dominique Jolivet, Jorge Sánchez-Zapero, Fernando Camacho, Enrique Martínez-Sánchez, Sarah Gebruers, Else Swinnen, Davy Wolfs, Bart Ooms, Roselyne Lacaze, Michal Moroz
Affiliations: VITO, HYGEOS, EOLAB
The Copernicus Land Monitoring Service (CLMS) produces a series of qualified bio-geophysical products on the status and evolution of the land surface. The products are used to monitor vegetation, crops, the water cycle, the energy budget and the terrestrial cryosphere. Production and delivery take place in a timely manner and are complemented by the constitution of long-term time series. The CLMS portfolio contains several near-real-time global “Vegetation” biophysical products based upon the data acquired by the Ocean and Land Colour Instrument (OLCI) and the Sea and Land Surface Temperature Radiometer (SLSTR) onboard the Sentinel-3 (S3) platforms. All processing lines generating biophysical products from S3 OLCI and SLSTR data rely on a common, harmonized pre-processing chain which ingests Top-Of-Atmosphere (TOA) Level-1B (L1B) radiance data and delivers synergistic Top-Of-Canopy (TOC) reflectances. This synergy pre-processing chain consists of various modules which perform the pixel classification, co-registration between OLCI and nadir SLSTR acquisitions, reprojection and resampling on a regular grid, and atmospheric correction. With the objective of ingesting the reprocessed OLCI L1B Collection 4, a reprocessing of the entire CLMS OLCI and SLSTR TOC reflectance product series has been performed. The resulting files contain TOC reflectance estimates and associated error for 15 OLCI and 5 SLSTR spectral bands, observation and illumination angles for both sensors, and 4 annotation flag layers. The CLMS Sentinel-3 TOC reflectance v2.3 products will cover the period from June 2018 up to near-real time. They will be available to users via the CLMS Dataset catalogue and the Copernicus Data Space Ecosystem from January 2025. Validation of the CLMS TOC reflectance v2.3 product is ongoing. Preliminary validation results, based on product completeness and spatial consistency analysis, product intercomparison and direct validation with in-situ data, indicate the high quality of the product.
Intercomparison with the ESA S3 SYN surface directional reflectance product shows good spatial and statistical consistency. Remarkably good accuracy is found with 4 RadCalNet sites, with bias less than 1% for most channels. Also, particularly good consistency is obtained between S3A and S3B as well as between OLCI and SLSTR equivalent channels. The presentation will encompass the details of the CLMS Sentinel-3 TOC reflectance v2.3 processing scheme and algorithms and final validation results.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Upscaling Photosynthetic Function from Leaf to Canopy Level and Across the Seasons

Authors: Professor Petya Campbell, Professor Fred Huemmrich, Dr. Shawn Serbin, Dr. Christopher Neigh, Christiaan van der Tol
Affiliations: University of Maryland, Baltimore County (UMBC), 1000 Hilltop Circle, NASA Goddard Space Flight Center (GSFC), Biospheric Sciences Laboratory, University of Twente, Langezijds 1102
The study of vegetation function is increasingly used to understand the influence of environmental conditions on the capacity of species to adapt and to compare their resilience to climate change. Photosynthesis is of key importance for vegetation function and while canopy chlorophyll content (Chl) informs on the potential for photosynthetic function, solar-induced Chl fluorescence (SIF) can offer a direct probe to assess actual photosynthetic activity at leaf, canopy, and regional scales. High spectral resolution data offers an efficient tool for evaluation of the ability of vegetation to sequester carbon due to changes in vegetation chemical and structural composition. While the traits used as primary indicators of vegetation function vary, all ecosystems are considered as 'dynamic entities that interact continuously with their environment', which therefore require continuous monitoring. Current technology enables the assembly of high frequency time series of diurnal and seasonal measurements of leaf photosynthetic efficiency and canopy reflectance, SIF, leaf area index and photosynthetic pigments. To address the need for continuous remote sensing of vegetation photosynthesis, at select flux tower sites we collected high frequency leaf and canopy data for crops, prairies, tundra and boreal forests. This study presents findings from the analysis of leaf-level active fluorescence metrics of photosynthetic efficiency (e.g., Electron Transport Rate, ETR; Yield to Photosystem II, Moni-PAM; Non-photochemical Quenching, NPQ), canopy reflectance and SIF collected with field Fluorescence Box (FLoX, JB Hyperspectral) and reflectance time series of images collected with the Airborne Visible / Infrared Imaging Spectrometer (AVIRIS), the space-borne Earth Surface Mineral Dust Source Investigation (EMIT), the DLR Earth Sensing Imaging Spectrometer (DESIS) and PRISMA (ASI). The data were collected at different temporal, spectral and spatial resolutions. 
Field and space-borne reflectance matched by acquisition date and time were assembled, and the variations in reflectance properties for the flux tower footprints were evaluated. The leaf and canopy time series, in conjunction with eddy covariance measurements of gross primary productivity (GPP), airborne and spaceborne reflectance images, were used to spatially upscale photosynthetic performance. The Soil Canopy Observation of Photochemistry and Energy fluxes (SCOPE) model was used to integrate these measurements and link reflectance to plant photosynthesis and SIF. Using proximal SIF time series we derived estimates of leaf and canopy photosynthetic pigments and efficiency (e.g., leaf electron transport rate, ETR), upscaling them across the seasons. Proximal SIF B measurements gave superior results for upscaling leaf ETR to canopy level, as compared to the use of SIF A, SIF A+B and SIF A/B. The seasonal air- and space-borne and dense proximal reflectance data sets correspond reasonably well, and the combined dataset captured the dynamics in canopy photosynthetic traits associated with phenology. Our preliminary results show the importance of dense hyperspectral time series for monitoring the seasonal dynamics in vegetation function. Combining the proximal FLoX and space-borne reflectance data with estimates of vegetation traits and GPP demonstrates the feasibility of a multi-sensor approach for upscaling canopy reflectance and traits from field to satellite level. Using the biophysical model SCOPE we obtained estimates of canopy chlorophyll (Cab), water content (Cw), leaf area index (LAI), GPP and other variables. We compared SIF-based, Vegetation Index (VI) and Machine Learning (ML) semi-empirical models with the SCOPE biophysical model for estimating canopy traits and GPP. 
The estimates of photosynthetic pigments, LAI and GPP were more accurate, with lower RMSE and higher R², when using VSWIR reflectance versus VNIR data, owing to the higher sensitivity of the VSWIR data. The variation in GPP and the associated canopy traits increased with the advancement of senescence during the fall season. The study characterized the dynamics in canopy photosynthetic function, as measured at leaf, proximal canopy, and satellite levels, and developed innovative algorithms for estimation of GPP. We simulated photosynthetic efficiency and canopy traits as anticipated from the forthcoming European Space Agency's Fluorescence Explorer (ESA/FLEX) and the National Aeronautics and Space Administration’s Surface Biology and Geology (NASA/SBG) missions. The constellation of forthcoming spectroscopy missions, such as FLEX, SBG and CHIME, holds great potential for developing multi-sensor time series that capture vegetation dynamics numerous times per season and enable trait comparisons across multiple seasons and years.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Using GNSS VOD to Advance the Development of a Sub-Daily SAR Mission for Vegetation Water, Carbon, and Health

Authors: Nathan Van der Borght, Anna Neyer, Prof.dr.ir. Susan Steele-Dunne, Rob Mackenzie, Dr. Hans van der Marel, Paco Frantzen
Affiliations: Joint first authorship, TU Delft, TU Delft
The resilience of terrestrial ecosystems to droughts and heat stress is key for the future of the terrestrial carbon balance. Satellite observations of sub-daily variations in vegetation water content (VWC) could provide information on the health, stress and resilience of key ecosystems across the globe. Water dynamics in vegetation are central in these ecosystems, as they are closely coupled to carbon assimilation at the plant stomata. Understanding diurnal variations in VWC provides insight into the water status, stress and health of plants. However, sub-daily water dynamics in ecosystems are still poorly understood and weakly represented in terrestrial biosphere models. Furthermore, there are no existing or planned satellite missions capable of resolving fluctuations in VWC on sub-daily scales. To address this critical knowledge and observation gap, SLAINTE was developed as one of ESA’s New Earth Observation Mission Ideas, with a first mission concept submitted in response to ESA’s 12th Call for Earth Explorers (Steele-Dunne et al., 2024; Matar et al., 2024). It comprises a constellation of identical, decametric, monostatic SARs to capture sub-daily variations in vegetation water storage (e.g. via vegetation optical depth (VOD), vegetation water content (VWC) and/or plant water potential (PWP)) and surface soil moisture (SSM). One of the challenges we face during the development of SLAINTE is a lack of sub-daily radar or VOD data. We urgently need to quantify the expected range and dynamics of radar backscatter and VOD across a range of vegetation types to support the development of the mission concept. These data are also essential to address the challenge of disentangling pertinent signals and isolating them from the influence of confounding factors that can become increasingly relevant and interconnected at sub-daily scales. 
Recent studies have shown that relatively low-cost GNSS (Global Navigation Satellite System) receivers can be used to estimate L-band VOD in situ (e.g. Ghosh et al., 2024; Guerriero et al., 2020; Humphrey & Frankenberg, 2023; Zribi et al., 2017). This method compares the signal-to-noise ratio (SNR) at two receivers, one above and one below the vegetation canopy. The difference in SNR between the two receivers can be related to the opacity of the vegetation layer in between. Thanks to the consistent data coverage provided by the large number of GNSS satellites in orbit, this setup enables us to capture sub-daily VOD variations. To support the development of SLAINTE, we will install these GNSS VOD sensors at a network of sites spanning a range of vegetation types and climate classes. Continuous observations from GNSS VOD will be used to characterize sub-daily VOD dynamics and to reconcile them with co-located observations of biogeophysical variables. In addition, they will be used to support radiative transfer modeling studies to demonstrate prototype forward models and retrieval approaches. In this presentation, we will use the first year of data to demonstrate the value of this GNSS VOD network to further strengthen the science case and consolidate the observation and measurement requirements for the SLAINTE mission idea. References: Ghosh, A., Farhad, M. M., Boyd, D., & Kurum, M. (2024). A UGV-based forest vegetation optical depth mapping using GNSS signals. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 17, 5093–5105. https://doi.org/10.1109/JSTARS.2024.3365798 Guerriero, L., Martin, F., Mollfulleda, A., Paloscia, S., Pierdicca, N., Santi, E., & Floury, N. (2020). Ground-based remote sensing of forests exploiting GNSS signals. IEEE Transactions on Geoscience and Remote Sensing, 1(1), 1–17. https://doi.org/10.1109/TGRS.2020.2976899 Humphrey, V., & Frankenberg, C. (2023). 
Continuous ground monitoring of vegetation optical depth and water content with GPS signals. Biogeosciences, 20(1), 1789–1811. https://doi.org/10.5194/bg-20-1789-2023 Matar, J., Sanjuan-Ferrer, M. J., Rodriguez-Cassola M., Steele-Dunne, S. & De Zan, F. (2024). A Concept for an Interferometric SAR Mission with Sub-daily Revisit. EUSAR 2024; 15th European Conference on Synthetic Aperture Radar, pp. 18-22. IEEE, 2024. Steele-Dunne, S., Basto, A., De Zan, F., Dorigo, W., Lhermitte, S., Massari, C., Matar J. et al. (2024) SLAINTE: A SAR mission concept for sub-daily microwave remote sensing of vegetation. EUSAR 2024; 15th European Conference on Synthetic Aperture Radar, pp. 870-872. VDE, 2024. Zribi, M., Motte, E., Fanise, P., & Zouaoui, W. (2017). Low-cost GPS receivers for the monitoring of sunflower cover dynamics. Journal of Sensors, 2017(1), Article 6941739. https://doi.org/10.1155/2017/6941739
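The SNR-differencing retrieval described in the abstract can be sketched as follows. This is a minimal illustration of the principle only, not the authors' processing chain: the function name, calibration, and slant-path convention are assumptions.

```python
import math

def gnss_vod(snr_below_db, snr_above_db, elevation_deg):
    """Estimate one-way vegetation optical depth (VOD) from the SNR
    difference between a receiver below and one above the canopy.

    Canopy transmissivity follows from the linear power ratio; the
    satellite elevation angle corrects the slant path to zenith.
    """
    # Convert the SNR difference (dB) to a linear power ratio (transmissivity).
    gamma = 10.0 ** ((snr_below_db - snr_above_db) / 10.0)
    # One-way attenuation along the slant path: gamma = exp(-VOD / sin(elev)),
    # so the zenith-equivalent VOD is:
    return -math.sin(math.radians(elevation_deg)) * math.log(gamma)

# Example: 3 dB of canopy attenuation observed at 40 degrees elevation
vod = gnss_vod(snr_below_db=42.0, snr_above_db=45.0, elevation_deg=40.0)
```

Averaging such estimates over many satellites and elevation angles is what gives the dense sub-daily sampling mentioned above.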
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Scale influences on plant primary productivity as estimated with satellite-driven light-use efficiency models

Authors: Egor Prikaziuk, Dr. Rebecca Varney, Dr Nina Raolut, Stephen Sitch, Linda Gotsmy, Dr. Sarah Matej, Dr. Karl-Heinz Erb, Dr. Tim Ng, Vincent Verelst, Jeroen Dries, Roel van Hoelst, Marie Polanska, Pavel Vlach, dr. Michael Schlund, Michael Marshall
Affiliations: ITC, University of Twente, University of Exeter, BOKU, University of Natural Resources and Life Sciences, VITO, Flemish Institute for Technological Research, Gisat
In the race for ever-increasing spatial, temporal and spectral resolution, there is a growing need to align satellite image data with information requirements. Naturally, this alignment depends on the objective of the study or application. In relation to the carbon cycle, light use efficiency (LUE) models that transform the fraction of absorbed photosynthetically active radiation (fAPAR) into grams of assimilated carbon have long required only two spectral bands, red and near-infrared, acquired over multiple days. Being optimized for quick global gross primary productivity (GPP) mapping, LUE models may be biased when applied to images of higher (<= 20 m) spatial resolution. This study aimed to quantify the uncertainty in GPP introduced by the spatial scale of the underlying fAPAR data. The study was performed in the scope of the European Space Agency's (ESA's) Land Use Intensity’s Potential, Vulnerability and Resilience for Sustainable Agriculture in Africa (LUISA) project. LUISA aims to quantify human pressure on ecosystems as the difference between potential and actual net primary productivity (NPP), a key component of the so-called human appropriation of NPP (HANPP) framework. The NPP is computed with the JULES (Joint UK Land Environment Simulator) Dynamic Global Vegetation Model (DGVM), parametrized with ESA CCI Land Cover and leaf area index (LAI) variables. However, the JULES resolution is 0.5 deg (~50 km), which inevitably results in mixed pixels consisting of several land cover types and different seasonal LAI trends. To assess the representativeness of JULES simulations, NPP for four case study regions in Ethiopia, Mozambique, Senegal and Uganda and 14 eddy-covariance sites across Africa was computed with the PEMOC model, a big-leaf LUE model, at 20 m resolution using Sentinel-2 image data. Through successive steps of degrading the resolution from 10 m to 50 km, the profiles of area homogeneity and NPP uncertainty were characterized. 
Preliminary results computed on a single image show that, in an idealized case of scale-invariant PEMOC parameterization, the mean bias error (MBE) reaches 2% and the mean absolute error (MAE) 3% (both relative to the range of NPP values) at 500 m. The temporal evolution of the error and its influence on accumulated NPP (biomass) will be discussed. The results of the NPP product comparison may suggest novel strategies for integrating high-resolution EO data into DGVMs.
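The scale effect quantified above arises because a nonlinear productivity model applied to spatially averaged inputs differs from the average of the model applied at fine resolution. The sketch below illustrates the aggregation experiment with a toy power-law model; the model form, coefficients, and grid sizes are illustrative assumptions, not PEMOC.

```python
import numpy as np

rng = np.random.default_rng(0)

def npp(fapar):
    """Toy nonlinear LUE-style productivity model (illustrative only)."""
    return 30.0 * fapar ** 1.5  # arbitrary units

def block_mean(a, k):
    """Degrade resolution by averaging non-overlapping k x k blocks."""
    h, w = a.shape
    return a[:h // k * k, :w // k * k].reshape(h // k, k, -1, k).mean(axis=(1, 3))

fine_fapar = rng.uniform(0.1, 0.9, size=(200, 200))   # fine-resolution scene
fine_npp = npp(fine_fapar)

k = 25                                                # aggregation factor
npp_of_mean = npp(block_mean(fine_fapar, k))          # model run on coarse input
mean_of_npp = block_mean(fine_npp, k)                 # fine-scale truth, aggregated

err = npp_of_mean - mean_of_npp
mbe = err.mean()          # mean bias error of the coarse-scale run
mae = np.abs(err).mean()  # mean absolute error
```

Because the toy model is convex in fAPAR, Jensen's inequality makes the coarse-scale run systematically underestimate aggregated NPP (negative MBE), mirroring the kind of scale-dependent bias the study quantifies.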
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: A Novel Observation Operator for Assimilating Microwave Vegetation Optical Depth into Vegetation / Carbon Cycle Models

Authors: Wolfgang Knorr, Mathew Williams, Tea Thum, Thomas Kaminski, Michael Voßbeck, Marko Scholze, Tristan Quaife, T. Luke Smallmann, Susan Steele-Dunne, Mariette Vreugdenhil, Tim Green, Sönke Zaehle, Dr. Mika Aurela, Alexandre Bouvet, Emanuel Bueechi, Wouter Dorigo, Tarek S. El-Madany, Mirco Magliavacca, Marika Honkanen, Yann H. Kerr, Anna Kontu, Juha Lemmetyinen, Hannakaisa Lindqvist, Arnaud Mialon, Tuuli Miinalainen, Gaétan Pique, Amanda Ojasalo, Nemesio J. Rodríguez-Fernández, Mike Schwank, Peter J. Rayner, Pablo Reyez-Muñoz, Dr. Jochem Verrelst, Songyan Zhu, Shaun Quegan, Dirk Schüttemeyer, Matthias
Affiliations: The Inversion Lab
Recent reports about a possible weakening of the terrestrial biosphere's carbon sink have highlighted the importance of land vegetation for storing large quantities of carbon originating in CO2 emitted by human activities, and thus mitigating some of the worst impacts of the enhanced greenhouse effect. However, we still have only very limited knowledge of the spatial and temporal dynamics of terrestrial-biospheric carbon pools. In this situation, both passive and active microwave missions offer the unique opportunity to monitor above-ground land carbon and biomass repeatedly and undisturbed by cloud cover. For such sensors, various algorithms are available to separate the contribution of terrestrial vegetation to the microwave signal from that of the underlying soil. This contribution is usually expressed as Vegetation Optical Depth (VOD) for the specific wavelength used by the sensor. So far, the most common approach has been to use empirically derived relationships between VOD and above-ground biomass (AGB) to monitor carbon stores in land vegetation. This approach, however, ignores the influence of the plants' hydraulic status on VOD and does not take temperature effects into account. Therefore, if we employ a terrestrial biosphere model to simulate AGB and from that predict measured VOD using this approach, we fail to capture the VOD signal's often pronounced temporal fluctuations on time scales of days and weeks. Here, we present a semi-empirical model for VOD at varying wavelengths that can be easily implemented as an observation operator with regional to global terrestrial biosphere models. It predicts VOD using stem and leaf biomass, soil moisture and transpiration rates as input. We present results using this new VOD observation operator together with the D&B terrestrial vegetation model. D&B simulates carbon fluxes at the land surface, embedded in the full energy, water and radiation balance. 
Carbon is allocated to various live vegetation and soil organic matter pools. We show simulation results compared to locally measured VOD at different wavelengths from Sodankylä, a boreal study site in northern Finland. The model captures the main features of the temporal variations of measured VOD, despite there being little change in biomass density during the measurement campaign. However, viewing conditions were changed twice, so that different stem densities fell within the instrument's field-of-view. This had a profound impact on measured VOD, which the model can also reproduce if we account for changes in the stem biomass input to the VOD model. As the next step, we compare simulated L-band VOD (L-VOD) for a regional simulation to measurements from ESA's SMOS mission. The results are similar to those obtained from local measurements. We finally show how assimilation of SMOS L-band VOD, surface soil moisture, and regional AGB data into D&B can be used to constrain the parameters of an L-VOD observation operator. This opens up the possibility of assimilating SMOS L-band VOD and surface soil moisture to derive estimates of AGB and other properties of the land surface for further regions. Our results highlight the potential of VOD for monitoring various land surface properties related to the carbon cycle within the framework of terrestrial-biosphere data assimilation. The simple, semi-empirical VOD model is ideally suited to being coupled to regional to global-scale vegetation models, as it does not depend on detailed structural or hydraulic properties of vegetation, for which information is rarely available at the spatial scales of interest for such models. It thus offers a valuable alternative to detailed models of microwave backscatter, and at the same time constitutes an important advancement beyond empirical AGB-VOD relationships.
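An observation operator of the kind described takes model state variables (stem and leaf biomass, soil moisture, transpiration) and maps them to a predicted VOD. The sketch below shows the general shape of such an operator; the functional form, parameter names, and coefficient values are all placeholders invented for illustration, not the published model.

```python
def vod_operator(stem_biomass, leaf_biomass, soil_moisture, transpiration,
                 b_stem=0.04, b_leaf=0.30, b_water=0.5):
    """Illustrative semi-empirical VOD observation operator.

    Combines a dry-matter term (stem + leaf biomass) with a plant-water
    term modulated by soil moisture supply and transpiration demand.
    The coefficients would be constrained by data assimilation.
    """
    dry_matter = b_stem * stem_biomass + b_leaf * leaf_biomass
    # Relative plant water status: wetter soil relative to transpiration
    # losses raises vegetation water content and hence VOD.
    water_status = soil_moisture / (soil_moisture + transpiration + 1e-9)
    return dry_matter * (1.0 + b_water * water_status)

# Example: moderate biomass, moist soil, moderate transpiration
vod = vod_operator(stem_biomass=10.0, leaf_biomass=1.0,
                   soil_moisture=0.3, transpiration=1.0)
```

The key property, which this toy form shares with the abstract's description, is that VOD fluctuates with plant water status at daily-to-weekly time scales even when biomass is nearly constant.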
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Using remotely sensed ecological and climate variables to assess ecosystem productivity for land carbon sequestration studies

Authors: Yanli You, Distinguished Professor Alfredo Huete
Affiliations: University Of Technology Sydney
Satellite-derived ecological indicators such as vegetation indices (VI), leaf area index (LAI), and solar-induced chlorophyll fluorescence (SIF) are widely used to monitor ecosystem dynamics at different temporal and spatial scales. These indicators may be relevant for assessing changes in vegetation carbon sequestration capacity, identifying carbon source periods, and evaluating carbon storage across different ecosystems. Although much remote sensing carbon research focuses on carbon fixed through photosynthesis, less attention has been paid to the actual carbon stocks remaining after ecosystem respiration. In this study, we evaluated and compared the relationships of remote sensing-derived ecological variables, including the enhanced vegetation index (EVI), LAI and SIF, with eddy covariance-measured gross primary productivity (GPP) and net ecosystem productivity (NEP) across a range of flux tower sites along the North Australian Tropical Transect (NATT). We also assessed three satellite-derived climate indicators, daytime and nighttime land surface temperature (LST), soil moisture (SM) and rainfall, across our study sites. Six hydrologic years were analysed at both monthly and annual scales from Sep. 2015 to Aug. 2021. We aimed to assess how well the various satellite products can represent and detect changes in NEP, relative to their more common use in quantifying GPP. The results show site-dependent, strong NEP-GPP relationships (r² ≈ 0.7 to 0.8, monthly scale) with slope sensitivities of NEP to GPP ranging from 0.35 to 0.46 (monthly scales) and 0.32 to 0.67 (annual scales). The higher values were inversely related to latitude, indicating greater carbon sequestration rates in the wetter northern sites of the NATT. In most cases, ecological remote sensing variables were more strongly related to GPP than NEP and could not readily be used to identify carbon source periods. 
However, NEP relationships with LST, SM and rainfall helped explain ecosystem carbon sink-to-source switches and provided useful information for carbon sequestration studies at ecosystem scales.
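The slope sensitivities reported above come from fitting NEP against GPP at each site. The sketch below shows the computation on synthetic monthly data; the numbers are illustrative and chosen only to sit near the reported monthly range, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly GPP (gC m-2 month-1) and NEP over six hydrologic years,
# with a true slope near the reported monthly range (0.35-0.46) plus noise.
gpp = rng.uniform(50.0, 300.0, size=72)
nep = 0.40 * gpp - 20.0 + rng.normal(0.0, 10.0, size=72)

# Ordinary least squares fit: NEP = slope * GPP + intercept.
slope, intercept = np.polyfit(gpp, nep, 1)

# Coefficient of determination for the simple linear relationship.
r2 = np.corrcoef(gpp, nep)[0, 1] ** 2
```

The slope is the carbon sequestration sensitivity discussed in the abstract: the fraction of each additional gram of fixed carbon that remains after ecosystem respiration.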
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: CAMAP and MAMAP-2D – Methane and CO2 airborne imaging spectrometers for validation of current and future GHG satellite missions

Authors: Konstantin Gerilowski, Jakob Borchardt, Sven Krautwurst, Oke Huhs, Wilke Thomssen, John Philip Burrows, Heinrich Bovensmann, Hartmut Bösch, Marvin Richrath, Jan Franke, Jan-Hendrik Ohlendorf, Roman Windpassinger, Yasjka Meijer, Thorsten Fehr
Affiliations: Institute of Environmental Physics (IUP), University of Bremen, Institute for Integrated Product Development (BIK), University of Bremen, European Space Agency (ESA)
To measure and monitor greenhouse gas emissions from space on different spatial scales, several satellite missions are currently under development or have been launched in recent years. These missions comprise US missions like MethaneSAT and Tanager-1&2 and European missions like Sentinel-5p, MicroCarb, TANGO and the upcoming Copernicus Sentinel-5 and CO2M missions, aiming to provide high spatial resolution/medium spectral resolution or medium spatial resolution/high spectral resolution data with sufficient spatial coverage, and using different NIR and SWIR spectral bands and windows of the methane and CO2 absorption spectrum (between 1.59 µm and 2.3 µm) for data retrieval. Accompanying and supplementing the space missions, high-quality atmospheric imaging spectra of CO2 and CH4 acquired from aircraft are needed to support retrieval algorithm development and contribute to the validation and interpretation of Level 1, Level 2 and emission data products. In response to this need, the MAMAP-2D (Methane Airborne MAPper 2D) and CAMAP (CO2 And Methane Airborne maPper) high-performance airborne imaging spectrometers are being developed by IUP Bremen under national and ESA contracts, replicating as closely as possible the spectral bands and spectral resolution defined for CO2M. CAMAP and MAMAP-2D are designed to acquire spectral images from aircraft in push-broom mode, allowing atmospheric CO2 and CH4 concentration maps to be retrieved. At a flight altitude of 8 km, the instruments can cover a nadir swath of ~3.5 km with a ground sampling distance of ~100 m x 100 m. The instruments use NIR (760 nm), SWIR-1 (1.6 µm), and SWIR-2 (2 µm, CAMAP only) spectral bands, with each band implemented as an individual grating spectrometer. Design and implementation follow a modular approach, sharing identical or similar components wherever possible with respect to opto-mechanical design geometry and accommodation. 
The two-channel MAMAP-2D instrument (NIR and SWIR-1) is currently being assembled and tested, and is planned to fly in 2025. For CAMAP, the already developed NIR and SWIR-1 bands will be complemented by a new SWIR-2 band. CAMAP is scheduled to be ready for calibration in the second half of 2026. This presentation describes the specifications, design concept, and expected performance of the instruments, and provides an overview of the development status.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Measuring biomass in agroforestry systems coupling ground measurements, drone measurements and very high resolution stereo satellite images

Authors: Gernot Ruecker, Dirk Kloss, Oreste Santoni Akossi, Philippe Koumassi, Antoine Servais, Mathurin Koffi, Stefanie Korswagen, Kathrin Damian, Florent Camera
Affiliations: ZEBRIS Geo-IT GmbH, HEAT GmbH, Ministry of Environment and for the Ecological Transformation, GFA Group GmbH, GIZ GmbH
Agroforestry has large potential to increase the resilience of smallholder farmers to climate change in Sub-Saharan Africa while at the same time enhancing terrestrial carbon stocks. Within the framework of the Paris Agreement on Climate Change and with the development of carbon markets, opportunities arise to generate additional income from increased carbon storage on agricultural land through agroforestry. This can be achieved through the generation of Internationally Transferable Mitigation Outcomes (ITMOs) in the case of the Paris Agreement or through voluntary carbon market projects. Such mitigation outcomes need to be transparently verifiable. This is achieved through the establishment of Monitoring, Reporting and Verification (MRV) systems that document the mitigation outcomes, the measurement methods used and the underlying data. The necessary data to drive such a system can be obtained from ground measurements and remotely sensed observations. The rapidly emerging use of drones in Sub-Saharan Africa makes high-resolution airborne data attractive as a middle layer between very detailed and costly ground measurements and less accurate satellite measurements. Here we present an MRV system developed for and applied to two agroforestry systems in Côte d’Ivoire. The system is based on three levels of data collection: on the ground, through drones and using very high resolution stereo satellite images. An option for using openly available radar (ALOS) and optical (Sentinel-2) data is also discussed. The system was developed in collaboration between Ivorian institutions at local and national level and international cooperation partners. It was initially developed to assess carbon storage in agroforestry parcels in Northern Côte d’Ivoire. In these agroforestry systems, short-rotation Acacia trees were planted. These leguminous trees are able to fix atmospheric nitrogen and help improve soil quality. 
A resulting shorter fallow cycle can reduce pressure on neighboring forests, and the leguminous trees help to avoid soil degradation. The MRV system is currently being adapted for a different land use in Southern Côte d’Ivoire, where carbon stock enhancements through shaded cacao agroforestry systems and restoration of community forests and degraded riverbanks are measured. Carbon stored in soil, litter, dead wood, trees planted during the agroforestry activity and other (pre-existing) trees is assessed using standard inventory techniques. Allometric equations are used to estimate tree biomass. In the case of the young Acacia trees, specific local allometric equations are developed. Commercial drones and structure-from-motion techniques are used to obtain a three-dimensional model of the plots. These three-dimensional models are used to obtain a surface and a terrain model, from which a canopy height model (CHM) is derived. Individual tree heights are obtained by automatically identifying trees in the CHM using local maximum filtering. Tree heights and recorded planting densities are then used to derive above-ground biomass (AGB) through allometric equations. High-resolution stereo satellite data from the SkySat constellation (operated by Planet, Inc.) are used to obtain a digital surface model at 80 cm resolution. Thin plate spline interpolation is used to derive a digital terrain model by automatically identifying ground points in the surface model and interpolating the terrain between the obtained ground points. The CHM and recorded planting densities are then used to estimate the biomass of the plantations. Correlation between drone-obtained data and ground-sampled data is satisfactory (r² = 0.75). There is potential for improvement through better filtering of vegetation, which is sometimes misclassified as soil, leading to biases in AGB estimates. Correlation between satellite-derived data and ground-sampled data is weaker (r² = 0.53). 
Here too, improving the algorithm for identifying ground points in the satellite images could further improve the results. Another limitation is that most plantation trees are very young, so tree height is sometimes close to the measurement accuracy, and some young trees are mistaken for ground. Adaptations and enhancements of the system for shaded cacao plantations include the use of multispectral drone data for automated discrimination between plantation trees and other trees, machine learning and object recognition techniques to improve the mapping of individual trees and small tree clusters, and better non-tree (ground) point classification for the derivation of digital terrain models. For data storage and analysis, a web-based system was set up that supports local ground sampling using a mobile app, with data captured directly in the field using tablets. Analyzed carbon stock data are then stored in a centralized database that can be used for reporting and verification purposes through a Web-GIS interface. The system is based on various Open Source components and can be easily adapted and configured for different agroforestry types.
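The CHM workflow described in the abstract (surface model minus terrain model, then local maximum filtering to find tree tops) can be sketched as follows. This is a pure-NumPy stand-in for a local-maximum filter; the function name, window size, and height threshold are illustrative assumptions.

```python
import numpy as np

def tree_tops(chm, window=3, min_height=2.0):
    """Find local maxima in a canopy height model (CHM).

    A pixel is flagged as a tree top if it equals the maximum of its
    window x window neighbourhood and exceeds min_height (metres).
    """
    pad = window // 2
    padded = np.pad(chm, pad, mode="constant", constant_values=-np.inf)
    # Stack every neighbourhood shift and take the per-pixel maximum.
    shifts = [padded[i:i + chm.shape[0], j:j + chm.shape[1]]
              for i in range(window) for j in range(window)]
    neighbourhood_max = np.max(shifts, axis=0)
    return (chm == neighbourhood_max) & (chm > min_height)

# CHM = digital surface model minus digital terrain model
dsm = np.array([[5.0, 5.0, 5.0],
                [5.0, 9.0, 5.0],
                [5.0, 5.0, 5.0]])
dtm = np.full((3, 3), 2.0)
chm = dsm - dtm
tops = tree_tops(chm)  # only the central 7 m tree is detected
```

The detected tree-top heights would then feed the allometric equations together with the recorded planting densities, as the abstract describes.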
Add to Google Calendar

Tuesday 24 June 17:45 - 18:30 (Frontiers Agora)

Session: C.05.10 EO National Missions Implemented by ESA - Future Evolution

The session will be used to provide examples of capabilities developed by the national missions under implementation at ESA. Furthermore, it will give the opportunity to explore potential cooperation, challenges and further developments ahead.

Speakers:


  • S Lokas – ESA
  • Konstantinos Karantzalos – Secretary General, Greek Ministry of Digital Governance and Greek Delegate to the ESA Council
  • Dimitris Bliziotis – Hellenic Space Centre and Greek delegate to PBEO
  • G. Costa – ESA
  • F. Longo – ASI
  • D Serlenga – ESA
  • Head of Delegation to ESA – MRiT
  • R. Gurdak – POLSA
  • L. Montrone – ESA
  • N. Martin Martin / J.M. Perez Perez – (Affiliation not specified)
  • Pedro Costa – CTI
  • Betty Charalampopoulou – Geosystems Hellas CEO and BoD Hellenic Association of Space Industry
  • Dr. hab. inż. Agata Hościło – Institute of Environmental Protection – National Research Institute
  • A. Taramelli – ISPRA
  • V. Faccin – ESA
  • R. Lanari – CNR/IREA
  • M. Manunta – CNR/IREA
  • L. Sapia – ESA
  • E. Cadau – ESA
  • Rosario Quirino Iannone – ESA
  • Mario Toso – ESA
  • Enrique Garcia – ESA
  • Ana Sofia Oliveira – ESA
  • Ariane Muting – ESA
  • V. Marchese – ESA
  • Jolanta Orlińska – POLSA
  • G. Grassi – ESA
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: A.09.01 - POSTER - The mountain cryosphere in peril – improved monitoring of snow and ice in complex terrain to address societal challenges in the face of climate change

The impact of climate change on the cryosphere in mountain areas is increasing, affecting billions of people living in these regions and in downstream communities. The latest Intergovernmental Panel on Climate Change Assessment Report highlights the importance of monitoring these changes and assessing trends for water security, as well as the risks of geo-hazards such as glacial lake outburst floods (GLOFs), landslides, and rockfalls.

This session will explore advanced methods and tools for monitoring physical parameters of snow, glaciers, and permafrost in mountainous regions using data from current satellites. We will also discuss the potential of upcoming satellites to be launched in the near future to enhance these observations and fill any gaps. By improving our understanding of water availability in mountainous areas and identifying key risks, we can develop strategies to adapt to the changing conditions and better protect these vulnerable regions.

We welcome contributions on advanced geophysical observations of snow, glaciers and permafrost variables in mountainous regions around the world using different satellite data and their impact on water resources and the increasing risks posed by geo-hazards under changing climate conditions.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Trends in the annual snow melt-out day over the French Alps and the Pyrenees from 38 years of high resolution satellite data (1986–2023).

Authors: Zacharie Barrou Dumont, Simon Gascoin, Jordi Inglada, Andreas Dietz, Jonas Köhler, Matthieu Lafaysse, Diego Monteiro, Carlo Carmagnola, Arthur Bayle, Jean-Pierre Dedieu, Philippe Choler
Affiliations: Magellium, Center for the Study of the Biosphere from Space (CESBIO), CNES/CNRS/IRD/UT3 Paul Sabatier, German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), Grenoble Alpes university, Météo-France, CNRS, CNRM, Center for the study of snow, Grenoble Alpes university, Savoie Mont Blanc university, CNRS, Alpine Ecology Laboratory (LECA), Institute of Environmental Geosciences (IGE), Grenoble Alpes university/CNRS/Grenoble INP/INRAE/IRD
Information on the spatio-temporal variability of seasonal snow cover duration over long time periods is critical to study the response of mountain ecosystems to climate change. In particular, the annual snow melt-out day (SMOD, i.e. the last day of snow cover) modulates the onset of the growing season and therefore has a profound impact on alpine vegetation dynamics and productivity. However, little is known about SMOD trends at larger scales in the European mountains due to the sparse distribution of in situ observations and the lack of adequate remote sensing products, as multi-decade time series of snow cover area are typically derived from low-resolution sensors such as MODIS (20 years, 500 m) or AVHRR (35 years, 1 km), which fail to capture the high spatial variability of the mountain snowpack. Accounting for cloud cover, the effective revisit of the Landsat program (53 years, 30-60 m) is approximately one observation per month or less, which hinders applications in mountain ecosystems. The release into the public domain by the French Space Agency (CNES) of the full collection of SPOT 1-5 images through the SPOT World Heritage (SWH) program provided a unique opportunity to densify the Landsat time series from 1986 to 2015 with thousands of 20 m resolution multi-spectral images. For this study we therefore combined snow cover data from ten different optical platforms, including SPOT 1-5, Landsat 5-8 and Sentinel-2A&B, to build an unprecedented multidecadal time series of the annual SMOD at 20 m resolution across the French Alps and the Pyrenees from 1986 to 2023. The snow cover information was extracted from SWH images using deep learning and an innovative image emulation method [1]. We evaluated the pixel-wise accuracy of the computed SMOD using in situ snow measurements at 344 stations. 
We found that the residuals are unbiased (median error of 1 day) despite some dispersion (RMSE of 28 days), allowing us to study SMOD trends after spatial aggregation stratified by region and topographic class. The selected regions, called "massifs", are relatively homogeneous with respect to their principal climatological characteristics at a given elevation, slope, and aspect. We found a general reduction in the SMOD, revealing a widespread trend toward earlier disappearance of the snow cover, with an average reduction of 20.4 days (5.51 days per decade) over the French Alps and of 14.9 days (4.04 days per decade) over the Pyrenees over the period 1986–2023. The SMOD reduction is robust and significant in most parts of the French Alps and can reach one month above 3000 m. The trends are less consistent and more spatially variable in the Pyrenees [2]. The historical SMOD dataset is freely available for future studies of mountain ecosystem changes, and it is extended by the Copernicus Land Monitoring Service (CLMS), which operationally produces and disseminates the Snow Phenology (SP S2) yearly product based on Sentinel-2 observations at the European scale. This work was supported by the TOP project under grant agreement ANR-20-CE32-0002. [1] Barrou Dumont, Z., Gascoin, S., and Inglada, J., 2024. Snow and Cloud Classification in Historical SPOT Images: An Image Emulation Approach for Training a Deep Learning Model Without Reference Data, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, pp. 1–13, https://doi.org/10.1109/JSTARS.2024.3361838. [2] Barrou Dumont, Z., Gascoin, S., Inglada, J., Dietz, A., Köhler, J., Lafaysse, M., Monteiro, D., Carmagnola, C., Bayle, A., Dedieu, J.-P., Hagolle, O., and Choler, P., 2024. Trends in the annual snow melt-out day over the French Alps and the Pyrenees from 38 years of high resolution satellite data (1986–2023), EGUsphere [preprint], https://doi.org/10.5194/egusphere-2024-3505.
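The per-decade trend figures reported above amount to fitting a linear trend to each aggregated annual SMOD series. A minimal sketch of that computation, using synthetic data rather than the study's retrievals, might look like:

```python
import numpy as np

def smod_trend(years, smod):
    """Least-squares trend of the annual snow melt-out day (SMOD).

    Returns the slope in days per decade. Years with no valid
    retrieval (NaN) are skipped.
    """
    years = np.asarray(years, dtype=float)
    smod = np.asarray(smod, dtype=float)
    ok = ~np.isnan(smod)
    slope, _ = np.polyfit(years[ok], smod[ok], 1)  # days per year
    return slope * 10.0  # days per decade

# Synthetic example: SMOD advancing by 0.5 day/year over 1986-2023
years = np.arange(1986, 2024)
smod = 160.0 - 0.5 * (years - 1986)
print(round(smod_trend(years, smod), 1))  # -5.0 (days per decade)
```

A robust estimator such as Theil-Sen could be substituted for the least-squares fit without changing the interface.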

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Data assimilation of sparse snow depth observation with optimized spatial transfer of information

Authors: Marco Mazzolini, Marianne Cowherd, Kristoffer Aalstad, Manuela Girotto, Esteban Alonso-González, Désirée Treichler
Affiliations: University Of Oslo, UC Berkeley, Pyrenean Institute of Ecology
The satellite laser altimeter ICESat-2 provides accurate surface elevation observations across our living planet. With a high-resolution digital elevation model (DEM), we can use such measurements to retrieve snow depth profiles. Such observations are of great societal relevance because water managers could potentially use them to infer snow amounts even in remote montane areas, where the role of snow as a water tower is not actively monitored. However, these retrievals are not currently used operationally because they are very sparse in space and time: ICESat-2 measures along profiles with a three-month repeat interval. The high spatio-temporal variability of the seasonal snowpack limits the observations' value. Data Assimilation (DA) methods allow us to use information from snow observations to constrain snow models and provide gap-free distributed simulations. The assimilation of observations such as snow cover is considered the state of the art for generating retrospective reanalyses, but the use of sparse snow depth observations in DA is an active research area, as these could be used in an operational manner. Covariance localization has been adopted to spatially transfer information from observed locations to similar unobserved locations. Traditionally, geographical distance has been used to define the similarity between locations. In previous studies, we showed that topographical indices and the climatology of the melt-out date are also relevant parameters for determining this similarity. However, this measure was treated as a fixed hyperparameter. In this work, we exploit airborne lidar snow depth maps acquired by the Airborne Snow Observatory (ASO) to optimize the similarity measure between simulated cells. Gaussian Processes (GP) offer a probabilistic approach to infer the relative relevance of geographic, topographic and snow-climatology variables through a method called Automatic Relevance Determination (ARD).
Preliminary results from the East River basin in Colorado, USA, indicate that ARD can successfully learn repeated snow depth patterns from one water year and improve the spatial transfer of information in successive water years when only a profile is measured. The learned relative relevance is used in a set of full spatio-temporal DA experiments designed to quantify the potential contribution of snow depth observations from the satellite altimeter ICESat-2 to operational seasonal snow forecasts.
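To illustrate how ARD weighs predictors when transferring information between cells, here is a minimal sketch of an anisotropic (per-feature lengthscale) RBF similarity; the feature set and all numeric values are hypothetical and not the study's configuration:

```python
import numpy as np

def ard_similarity(x1, x2, lengthscales):
    """Squared-exponential (RBF) similarity with Automatic Relevance
    Determination: each feature has its own lengthscale, so a short
    lengthscale marks a highly relevant feature.

    x1, x2 : 1-D feature vectors of two grid cells (e.g. easting,
             northing, elevation, melt-out-date climatology), in the
             same units as the corresponding lengthscales.
    """
    d = (np.asarray(x1, float) - np.asarray(x2, float)) / np.asarray(lengthscales, float)
    return float(np.exp(-0.5 * np.dot(d, d)))

# Two cells identical except in elevation; elevation is deemed highly
# relevant (short lengthscale), so the similarity drops sharply.
cell_a = [1000.0, 2000.0, 2500.0]   # x, y, elevation (hypothetical)
cell_b = [1000.0, 2000.0, 2700.0]
print(ard_similarity(cell_a, cell_b, [5000.0, 5000.0, 100.0]))  # exp(-2) ≈ 0.135
```

In a localization scheme, a similarity of this kind would weight how strongly a snow depth observation at one cell updates the ensemble at another; the lengthscales themselves are what GP-based ARD learns from the lidar maps.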

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Quantifying Uncertainty in Supraglacial Lake Depth Modeling from Optical Remote Sensing Data: Insights from Greenland

Authors: Samo Rusnák, Lukáš Brodský
Affiliations: Department of Applied Geoinformatics and Cartography, Faculty of Science, Charles University
Glacier dynamics driven by climate change are a closely monitored global issue. Monitoring the presence of supraglacial lakes (SGL) and their metrics, such as area, depth, and volume, can provide insights into meltwater dynamics and glacier stability. However, due to their remote locations, field monitoring is limited, which makes remote sensing techniques essential for large-scale and frequent SGL monitoring. Accurately estimating SGL volume and depth requires refining the physical model, which in current research neglects the effects of cryoconite on the glacier surface and of suspended particulate matter in the water. Additionally, model calibration relies on only a single publicly available dataset from 2010 (Tedesco et al. 2015). Addressing this limitation is essential for reducing uncertainties in depth and volume estimates from remote sensing data. The Greenland Ice Sheet was used as a case study to demonstrate the current physical model due to the significant presence of SGLs as well as an available dataset. Supervised classification for SGL detection and regression analysis of the light attenuation coefficient (the physical model's g parameter) were applied to Landsat 7 scenes corresponding to the spatial and temporal coverage of the dataset by Tedesco et al. (2015). This study is the first to analyze and quantify the variability of the lake bottom albedo (the physical model's Ad parameter) and its impact on SGL depth and volume estimation. The analysis revealed significant variability in the parameter Ad, resulting in SGL volume uncertainty of up to 66% under various physical model parameterizations. For a single Landsat 7 image used in this study, the estimated SGL volume ranges from 124 million m³ to 207 million m³. This indicates the limitations of global parametrization and highlights the need for improved calibration to enhance the model's accuracy.
The proposed physical model improvement is significant in providing more accurate glacier monitoring through optical remote sensing data, which is relevant for understanding climate change impacts on glacier dynamics. Reference: TEDESCO, M., STEINER, N., POPE, A. (2015): In situ spectral reflectance and depth of a supraglacial lake in Greenland, Arctic Data Center, https://doi.org/10.5065/D6FQ9TN2.
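For context, optical depth retrieval of this kind is commonly based on a single-band attenuation model in which depth depends logarithmically on the pixel reflectance, the bottom albedo Ad and the attenuation coefficient g. A sketch of the sensitivity to Ad follows; the functional form is the widely used Sneed & Hamilton-style expression, and all numeric values are invented for illustration rather than taken from this study:

```python
import numpy as np

def lake_depth(r_pix, a_d, g, r_inf=0.0):
    """Single-band physically based lake depth:
    z = (ln(a_d - r_inf) - ln(r_pix - r_inf)) / g

    r_pix : water-leaving reflectance of the lake pixel
    a_d   : lake-bottom albedo (the model's Ad parameter)
    g     : effective light attenuation coefficient (1/m)
    r_inf : reflectance of optically deep water
    """
    return (np.log(a_d - r_inf) - np.log(r_pix - r_inf)) / g

# Same pixel reflectance under two plausible bottom albedos: the
# retrieved depth (hence volume) changes substantially with Ad alone.
for a_d in (0.4, 0.6):
    print(a_d, float(lake_depth(0.2, a_d, g=0.8)))
```

Propagating a spread in Ad through this equation over every lake pixel is one way the kind of volume uncertainty reported above (up to 66%) can arise.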

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Towards the development of a hybrid satellite product for snowline and meltline estimation at the scale of mountain massifs

Authors: Karlotta Kilias, Guillaume James, Fatima Karbou, Cléménce Turbé, Adrien Mauss
Affiliations: CNRM/CEN - Météo-France - CNRS, Université Grenoble Alpes / Grenoble INP - Inria - Laboratoire Jean Kuntzmann, DirOP / Centre de Météorologie Spatiale - Météo-France
The evolution of the snowline is an essential variable for short- and long-term snow cover monitoring in mountain massifs. While remote sensing technologies such as Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 optical imagery provide great value in this field, their exploitation remains complex. This study works towards the development of a comprehensive product for snowline detection at the scale of massifs, taking into account the satellite sources available for any given date. Such a product would benefit many end-users, including forecasters, who need prompt information on the elevation of snow cover across an entire catchment or mountain range. In the frame of this work, we focus on the inclusion of Sentinel-1 and Sentinel-2 data. For each mountain range, we project the native satellite images (an image ratio constructed with a multi-annual snow-free reference for Sentinel-1, and the NDSI index for Sentinel-2) into an altitude-orientation reference frame using the SRTM DEM. The orientation-altitude diagrams are then partitioned into snow-covered and snow-free zones by means of a segmentation method, which allows us to directly infer the snowline at massif scale. As this constitutes the most crucial step of the process, the choice of the segmentation method merits particular attention. Beyond the classical threshold-based method introduced by T. Nagler and H. Rott, we explore an extended version of the Chan-Vese method, a mathematical image segmentation approach based on the minimization of an energy term through curve evolution. The Chan-Vese method notably allows for an easy incorporation of several images from different sources into the energy term, depending on their availability on a given date. As we work in the orientation-altitude system, the input images can have different resolutions.
Weights are assigned to the input information to account for resolution differences and weather conditions (for instance, to favor optical images under clear-sky conditions but rely more heavily on SAR information when cloud cover is high). Among the satellite data sources, Sentinel-1 SAR plays a particular role, since it reacts to the liquid water content of the snow cover. This implies that when using SAR as the only source, we can additionally obtain information on the meltline, i.e. the limit between wet and dry snow, which provides valuable insights into the development of the snow cover during the melting period. In the future, the inclusion of other data sources, such as Sentinel-3 or VIIRS, will also be of interest, as it would allow for tighter temporal monitoring (ideally daily).
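As a toy illustration of the segmentation step, a piecewise-constant (Chan-Vese-type) energy can be minimized over candidate snowline elevations in a synthetic altitude-orientation diagram. This sketch deliberately omits the curve evolution and multi-source weighting of the actual method, and all values are invented:

```python
import numpy as np

def snowline_energy(diagram, elevations, z_line):
    """Piecewise-constant (Chan-Vese style) energy of a candidate
    snowline: rows at or above z_line are 'snow', rows below are
    'snow-free'; the energy is the sum of squared deviations from
    each region's mean. Minimizing over z_line yields the snowline.

    diagram    : 2-D array, rows = elevation bands, cols = orientation
    elevations : elevation of each row (m)
    """
    e = 0.0
    for region in (diagram[elevations >= z_line], diagram[elevations < z_line]):
        if region.size:
            e += float(((region - region.mean()) ** 2).sum())
    return e

# Synthetic NDSI-like diagram: snowy (0.8) above 2500 m, bare (0.1) below
elev = np.arange(1000, 4000, 100)
diag = np.where(elev[:, None] >= 2500, 0.8, 0.1) + np.zeros((len(elev), 8))
best = min(np.arange(1100, 3900, 100), key=lambda z: snowline_energy(diag, elev, z))
print(best)  # 2500
```

The real Chan-Vese energy adds a boundary-length regularization term and, as described above, would sum weighted data terms from each available sensor.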

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Testing the Retrieval Capabilities of Hyperspectral and Multispectral Sensors for Snow Cover Fraction (SCF)

Authors: Riccardo Barella, Carlo Marin, Claudia Notarnicola, Claudia Ravasio, Biagio Di Mauro, Erica Matta, Dr.ssa Claudia Giardino, Umberto Morra di Cella, Roberto Garzonio, Dr. Monica Pepe, Roberto Colombo, Katayoun Fakherifard
Affiliations: Eurac Research, Università Milano Bicocca, CNR ISP, ARPA VDA, CNR IREA, Agenzia Spaziale Italiana (ASI)
The Snow Cover Fraction (SCF)—defined as the percentage of a pixel's surface covered by snow—is a key metric for characterizing snow distribution, particularly in areas with patchy or discontinuous snow cover. Its relevance is heightened in complex mountainous terrains or when analyzing satellite imagery with coarse spatial resolutions. Unlike binary Snow Cover Area (SCA) classifications, SCF offers finer granularity, enabling detailed snow characterization critical for hydrological and climate studies. SCF retrieval is predominantly based on optical remote sensing in the visible and shortwave infrared regions. While traditional methods, such as regression on the Normalized Difference Snow Index (NDSI) and multispectral unmixing, are widely used for their simplicity and reasonable performance, they face significant limitations in regions with complex topography or under atmospheric disturbances. Furthermore, these approaches fail to leverage the full potential of hyperspectral data, which offers richer spectral information. This study investigates the performance of linear and non-linear spectral unmixing methods for SCF retrieval using hyperspectral PRISMA and multispectral Sentinel-2 imagery. Data were acquired over Cervinia, Italy, on 4 July 2024, during a dedicated in situ campaign. Field spectroscopy data and very high-resolution (VHR, 25 cm) RGB images from a drone were collected to generate reference SCF maps at 20 m and 30 m resolutions for validating PRISMA and Sentinel-2 estimations, respectively. SCF retrieval algorithms evaluated in this work include linear regression on NDSI, linear spectral unmixing, and non-linear spectral unmixing. Various combinations of end-member spectra, sourced both from in situ measurements and direct image extraction, were tested. Additionally, the hyperspectral capabilities of PRISMA enabled experimentation with diverse spectral band combinations and bandwidths. 
Preliminary results highlight that SCF below 20% is challenging to detect. Nonetheless, the rich spectral information provided by PRISMA demonstrates improved performance compared to multispectral sensors. However, geolocation accuracy significantly impacts the retrieval results, underscoring the need for precise alignment in image processing. This work represents one of the first real-world validations of SCF retrieval methods using both hyperspectral and multispectral sensors. The findings enhance our understanding of SCF detection limits and the sensitivity of retrieval algorithms, contributing to advancements in snow monitoring techniques. The insights gained may inform the design of future multispectral sensors optimized for SCF retrieval. Acknowledgements: Research work carried out using ORIGINAL PRISMA Products - © Italian Space Agency (ASI); the Products have been delivered under an ASI License to Use. This work is carried out within Contract "SCIA" n. 2022-5-E.0 (CUP F53C22000400005), funded by ASI in the "PRISMA SCIENZA" program.
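As a minimal illustration of the linear unmixing family of methods evaluated here, a two-endmember (snow/rock) case with a sum-to-one constraint reduces to a projection. The endmember spectra below are hypothetical placeholders, not the in situ spectra measured in the campaign:

```python
import numpy as np

def scf_two_endmember(pixel, snow, rock):
    """Linear two-endmember unmixing with a sum-to-one constraint:
    pixel ≈ f*snow + (1-f)*rock. The least-squares snow fraction f is
    the projection of (pixel - rock) onto (snow - rock), clipped to [0, 1].
    """
    d = np.asarray(snow, float) - np.asarray(rock, float)
    f = np.dot(np.asarray(pixel, float) - np.asarray(rock, float), d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

# Hypothetical 4-band reflectances (e.g. green, red, NIR, SWIR)
snow = np.array([0.95, 0.92, 0.85, 0.10])
rock = np.array([0.15, 0.18, 0.22, 0.25])
mixed = 0.3 * snow + 0.7 * rock   # synthetic pixel with 30% snow
print(round(scf_two_endmember(mixed, snow, rock), 2))  # 0.3
```

With more endmembers (or non-linear mixing), this closed form is replaced by a constrained least-squares solve, but the principle of recovering the per-pixel snow fraction from endmember spectra is the same.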

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Designing a permafrost & climate change response system in Longyearbyen, Svalbard

Authors: Maaike F. M. Weerdesteijn, Hanne Hvidtfeldt Christiansen, Marius O. Jonassen
Affiliations: University Centre In Svalbard
The Arctic town of Longyearbyen on Svalbard is located at 78°N and is built on permafrost. Longyearbyen is situated in a narrow valley surrounded by steep mountain slopes. Svalbard is undergoing some of the strongest recent climatic warming. With rising air temperatures, the permafrost active layer thickness increases, and the ground stays unfrozen longer into the autumn. Thawing permafrost in Longyearbyen has two major consequences: (1) damage to the town's modern infrastructure and cultural heritage and (2) an increased landslide risk due to mountain slope instability. Here, we give an example of each consequence. (1) Inadequate building foundation design has forced abrupt evacuations of buildings in recent years, displacing people from their homes. There is minimal space for Longyearbyen to expand due to the limited unused and unprotected land between the mountain slopes and the managed river flowing through the entire valley. Homes depreciated by permafrost thaw therefore add complexity to an already existing housing issue. (2) In October 2016, several landslides on the town's valley sides were triggered as the entire thawed active layer detached, due to a rainstorm that delivered 20 mm of precipitation over 24 hours at an intensity of 2 mm/hour. A 75 mm rainstorm in November of the same year led to fewer and smaller landslides, because the active layer had started to freeze and rainwater could not penetrate the ground. Summer 2024 also saw many landslides, exposing the top of the permafrost, and rockslides on mountains next to town, where residents and tourists hike for leisure. These events highlight the need to observe, monitor, and predict the increasingly dynamic landscape. We do so through ground- and satellite-based observations, feeding these into the response system. The ground-based instruments are equipped with telemetric devices that send data in real time over the mobile network.
Permafrost temperature and ground water content are measured by thermistor strings in boreholes and by soil moisture sensors recording through the entire active layer in profiles on both sides of town near the boreholes. For the geotechnical mountain slope stability modelling, we require high-resolution weather simulations that resolve the steep topography around Longyearbyen. Weather stations in and around town, some of which are co-located with the boreholes, will be used to evaluate and further develop this weather model. Other inputs for the mountain slope stability modelling are a digital elevation model (DEM) and ground characteristics, such as ground ice content, thermal properties, and sediment grain size: information obtained from borehole cores. Deformation maps retrieved from interferometric synthetic aperture radar (InSAR) data processing over 2016-2023 are correlated with landforms in the valley to identify areas of concern in the wetter summer and autumn periods. The next step is to apply pattern recognition between local observations of ground and meteorological conditions and the InSAR deformation maps for more robust landslide prediction, alongside the geotechnical landslide modelling. We focus on developing resilience in Arctic communities by providing a geoscientifically developed coupled permafrost & climate change response system based on ground- and satellite-based observations. This system will assist decision-making by providing real-time key geoscientific observations and access to short-term landslide prediction output. The aim is to achieve a better information basis for decisions about infrastructure design and maintenance, and for use in preparedness situations connected to potential landslides.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Detection of Fresh Supraglacial Deposits Through Change Detection Analyses on Sentinel-2 Multispectral Data and Sentinel-1 Polarimetric Information

Authors: Chiara Crippa, Dr. Mattia Callegari, Carlo Marin, Giovanni Cuozzo, Dr. Claudia Notarnicola
Affiliations: Eurac Research
Climate warming is a global phenomenon with particularly pronounced impacts in high-altitude mountainous regions. These areas are experiencing rapid transformations, with glacial and permafrost retreat producing increasingly evident direct effects on the landscape. One significant outcome of these changes is the growing frequency of landslides in glacial environments, which reflects a paraglacial response to ice loss and permafrost degradation. The accumulation of debris on glaciers alters their thermal regime, impacts debris distribution, and affects alpinist routes, also endangering high-altitude infrastructure such as huts and bivouacs. Identifying the location and frequency of these events is thus a critical step for assessing areas vulnerable to destabilization and understanding their connection to external triggers. Here, we propose a tool implemented in Google Earth Engine and Python that combines spaceborne radar and multispectral data to detect and classify glacial changes, minimizing the limitations inherent in each sensor type. Our workflow, tested on the glacial surfaces of Vedretta della Miniera (Val Zebrù, Italy), Tscherva Glacier (Val Roseg, Switzerland), and Mount Cook (New Zealand) under different snow conditions, analyzes changes in the Normalized Difference Snow Index (NDSI) derived from Sentinel-2 (S2) within specific time ranges to extract preliminary debris maps. It then integrates Sentinel-1 (S1) backscatter information to fill information gaps caused by cloud coverage and provides refined information on surface changes. The methodology consists of two main analytical blocks. First, we filter Sentinel-2 optical images in a user-defined time range and over the glacial surface (Randolph Glacier Inventory extent; RGI Inventory, 2023) inside the selected area of interest.
We then exclude cloudy pixels and calculate the normalized difference snow index (NDSI) for all the others, applying a threshold to differentiate between areas likely covered by snow or ice (NDSI>0.3) and those corresponding to bare rock (NDSI<0.3). To improve rock pixel classification accuracy and avoid misidentifying temporarily covered rocks such as nunataks and bedrock outcrops, we compare each pixel's NDSI value with its value during the closest maximum ablation period. This comparison excludes pixels that already show rock signatures when snow cover is at its minimum. The union of pixels thus identified generates an initial debris extension map, with uncertainties stemming from the individual steps of cloud detection and snow and rock recognition. Depending on the cloud extent in the considered time span, especially during the winter season, many pixels can remain unclassified, preventing a correct detection of surface changes. We therefore consider Sentinel-1 GRD intensity images (Mullissa et al., 2021) selected within the same user-defined timespan to compute the VH backscattering backward difference (∆dB) over all areas that have not been classified as debris in the S2 analysis. Using a size-dependent filter, tailored to the minimum landslide size we aim to detect, we outline discrete pixel clusters and compute the mean ∆VH within each of them. Clusters whose mean ∆VH falls outside the interquartile range (IQR3) of the cluster values in the image indicate the most prominent changes on the glacier. By comparing this value with the VH value from the previous year's accumulation period and applying a classification method based on a Support Vector Machine (SVM) model, we can distinguish between ice and snow cover. This allows us to isolate pixels that are more likely associated with debris accumulation, rather than changes in snow or ice.
Integrating S1 and S2 data enables the creation of a comprehensive map of new debris accumulation, minimizing uncertainties and reducing false positives. Our results demonstrate a strong correlation between the identified clusters and manually mapped landslide deposits, which were used as the reference extent. For instance, we observed a 90% overlap with the landslide clusters at Vedretta della Miniera and Mt. Cook, where the debris deposits were manually mapped shortly after the event. In contrast, the overlap at Mt. Scerscen was 50%, as manual mapping from satellite images was only possible two months post-event due to persistent snow and cloud coverage. This delay led to the remobilization and rearrangement of the initial debris deposit, affecting the accuracy of the overlap. The tool leverages open-source libraries and datasets, making it readily adaptable to other glacial environments.
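The interquartile-range screening of cluster-mean ∆VH values can be sketched as follows. Note this sketch uses the common 1.5×IQR fence as a stand-in, which may differ from the exact "IQR3" criterion applied in the workflow:

```python
import numpy as np

def flag_change_clusters(mean_dvh):
    """Flag pixel clusters whose mean VH backscatter difference falls
    outside the interquartile-range fence (Q1 - 1.5*IQR, Q3 + 1.5*IQR),
    marking the most prominent surface changes on the glacier.

    mean_dvh : 1-D array, one mean ∆VH value (dB) per cluster
    """
    q1, q3 = np.percentile(mean_dvh, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return (mean_dvh < lo) | (mean_dvh > hi)

# Most clusters change little; one strong backscatter drop stands out
dvh = np.array([-0.4, -0.1, 0.0, 0.2, 0.3, -6.5])
print(flag_change_clusters(dvh))  # only the last cluster is flagged
```

The flagged clusters would then pass to the SVM step described above, which separates genuine debris deposition from snow/ice changes.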

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: InSAR-based movement rate estimation and classification of rock glaciers in the Austrian Alps

Authors: Elena Nafieva, Daniel Hölbling, Emma Hauglin, Zahra Dabiri, Benjamin Aubrey Robson, Vanessa Streifeneder, Lorena Abad
Affiliations: Department of Geoinformatics – Z_GIS, University of Salzburg, Department of Earth Science, University of Bergen
Rock glaciers, which serve as critical indicators of permafrost dynamics and hydrological processes in alpine environments, are increasingly studied to understand their kinematics and response to climate change. Time series analysis of rock glacier velocities derived from Earth observation (EO) data enables the study of rock glacier behaviour, for example, seasonal and multi-year velocities, deformation rates and transitions from fast, chaotic glacial flow to more periglacial landforms with slow, spatially coherent velocities. In this study, we present a regional-scale analysis of rock glaciers for selected mountainous areas in Austria, using Sentinel-1 data and Interferometric Synthetic Aperture Radar (InSAR) techniques to derive movement rates. We propose a classification scheme based on kinematic behaviour and evaluate our results by comparing them to existing classifications. We will attach InSAR-derived movement rates to rock glacier delineations from an existing inventory created through manual interpretation (Wagner et al., 2020) as well as an inventory produced by deep learning techniques by the authors of this contribution within the project “ROGER” (EO-based rock glacier mapping and characterization). Thereby, we (1) assess the suitability of our results for confirming or disconfirming existing rock glacier classifications, (2) identify previously undocumented active rock glaciers, and (3) evaluate how the spatial patterns of movement rates align with rock glacier delineations automatically generated through deep learning. A key outcome of this research is the development of a classification scheme for rock glaciers based on their movement rates. By defining thresholds for distinct kinematic categories - ranging from inactive to active rock glaciers - we establish a framework that supports comparisons across regions. 
The classification also provides insights into the relationship between movement rates, topographic conditions, and geomorphological characteristics. In addition to advancing the understanding of rock glacier dynamics, this study underscores the potential of InSAR technology for monitoring alpine permafrost regions. The methodology and proposed classification scheme can be applied to other mountainous regions, supporting global efforts to assess permafrost stability in the context of climate change. We will present results that include spatial maps of rock glacier movement, examples of inventory validation and information enrichment (i.e. movement rates attached to rock glacier delineations), and the proposed classification framework. These insights can contribute to advancing our understanding of rock glacier behaviour, thereby supporting water resource management and hazard mitigation efforts in alpine environments. Wagner, T., Ribis, M., Kellerer-Pirklbauer, A., Krainer, K., Winkler, G., 2020. The Austrian rock glacier inventory RGI_1 and the related rock glacier catchment inventory RGCI_1 in ArcGis (shapefile) format [dataset]. PANGAEA. https://doi.org/10.1594/PANGAEA.921629
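A threshold-based kinematic classification of the kind proposed could take the following shape; the velocity boundaries below are illustrative placeholders only, not the thresholds defined in this study:

```python
def classify_rock_glacier(velocity_cm_per_yr):
    """Assign a kinematic class from a mean InSAR-derived movement
    rate. The class boundaries (1 and 10 cm/yr) are hypothetical
    placeholders chosen for illustration.
    """
    if velocity_cm_per_yr < 1.0:
        return "relict/inactive"
    if velocity_cm_per_yr < 10.0:
        return "transitional"
    return "active"

for v in (0.3, 4.0, 35.0):
    print(v, classify_rock_glacier(v))
```

Applied per delineated landform, such a function attaches a kinematic attribute to each inventory polygon, which is the enrichment step described above.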

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Seven decades of change in the debris-covered Belvedere Glacier (Western Italian Alps)

Authors: Lukáš Brodský, PhD. Roberto Azzoni, Associate Professor Irene Bollati, Associate Professor Jan Kropáček, Prof. Marcus Nüsser, PhD. Susanne Schmidt, Prof. Vít Vilímek
Affiliations: Charles University, Department of Applied Geoinformatics and Cartography, University La Statale of Milan, Earth Science Department "A. Desio", Charles University, Department of Physical Geography and Geoecology, Heidelberg University, Department of Geography, South Asia Institute (SAI)
The Belvedere Glacier in the Western Italian Alps has undergone notable alterations as a consequence of climate change, cryosphere dynamics, and related geomorphological processes. This study integrates findings from investigations by the research team to present a synthesis of glacier changes over seven decades (1951–2023), with a particular emphasis on surface evolution, elevation dynamics, supraglacial lake fluctuations, lateral moraine instability, and interactions with debris flows from tributary basins. The analysis of historical orthophotos, UAV imagery, and digital surface models has revealed three distinct phases of retreat: the disconnection of the Nordend Glacier between 1951 and 1991, the partial separation of the central accumulation basin between 2006 and 2015, and the subsequent detachment of the Locce Nord Glacier between 2018 and 2021. These changes, in conjunction with a surge-type event (1999–2002), have accelerated the glacial retreat and downwasting process, with rates comparable to those observed in the post-2000 period. The elevation data indicate a significant increase in downwasting rates, from 0.24 meters per year between 1951 and 2009 to 1.8 meters per year between 2009 and 2023. This increase is associated with spatial heterogeneity, influenced by factors such as debris cover, meltwater flow, and supraglacial lake dynamics. Furthermore, the equilibrium line altitude (ELA) of four glaciers in the Monte Rosa Massif, including Belvedere Glacier, was mapped using Sentinel-2 data for the period 2016-2023. The variation of the resulting mean ELA was in the range of 340 m, which likely reflects differences in slope orientation and the amount of snow accumulation. The mean ELA for Belvedere Glacier was 3230 m a.s.l. The temporal pattern of the ELA for Belvedere Glacier did not differ from that of the other glaciers, despite its steep slope and frequent avalanching.
Moreover, in August 2023, a debris flow from the Castelfranco tributary basin entered the Belvedere Glacier at its lowest left lobe, contributing to erosion and the opening of ice cliffs. These findings highlight the necessity for sustained and continuous observation, monitoring, and assessment of the impacts of debris cover and slope movements on glacier surface and stability. Concurrently, the lateral moraine sliding in the vicinity of touristic infrastructure, with rates of 1.87–1.98 meters per year (2018–2023), underlines the significance of integrating remote sensing analyses with field surveys and dendrogeomorphological analysis to identify potential precursors of ground failure. Supraglacial lakes, most notably Lake Effimero, exhibit fluctuating areas (428 m² to 99,700 m²) influenced by snowmelt and glacier dynamics. New lakes were observed to form consistently, reflecting evolving hydrological conditions and the potential for outburst floods. The research highlights the potential of high and very high spatial resolution images to facilitate the detailed detection of glacier processes.

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: A Snow Reanalysis for the Central and Southern European Mountains Based on ESA-CCI Products

Authors: Esteban Alonso-González, Ignacio López-Moreno, Kristoffer Aalstad, Laura Sourp, Simon
Affiliations: CSIC
Snowpack plays a crucial role in numerous hydrological and ecological processes. Despite their mild conditions, the Mediterranean regions of southern and central Europe host a vast array of mountain massifs of sufficient elevation to support deep and long-lasting snowpacks. These snowpacks act as natural water reservoirs, storing fresh water during the colder months and releasing it as the snowpack melts from late spring to early summer. The strong seasonality of precipitation in climates under Mediterranean influence makes snowmelt a critical resource, since it synchronizes water availability with water demand during the drier months. Southern European countries are not only densely populated regions but also primary European agricultural hubs, and the requirement for fresh water during early summer thus makes the snowpack pivotal for sustaining European food production. Moreover, given the complex topography of southern European countries, they exhibit great potential for hydropower generation, provided sufficient water is available. Thus, the fresh water stored in the snowpacks of these regions is a relevant component of the southern European hydropower industry. The Mediterranean has been identified as a climate change hotspot, with projections highlighting its vulnerability to ongoing warming. The snowpack is therefore significantly threatened by climate change, which could drastically reduce its extent and magnitude. Given the close-to-isothermal conditions of the snowpack during most of the year, and the often mild climatic conditions, with snowfall occurring close to 0ºC over extensive areas of these Mediterranean mountain ranges, even small increments of temperature may drive drastic changes in snowpack dynamics.
Despite this, the snowpack in the temperate regions of Southern Europe remains under-studied, often overshadowed by more extensively studied or iconic regions such as the Alps or the polar areas. This underscores the need for greater scientific attention to the hydrological dynamics of southern European mountain ranges such as Sierra Nevada (Spain), the Corsican mountains, or the Balkan ranges, to name a few. However, the lack of snowpack data in most of these regions poses a significant challenge for researchers and water managers interested in long-term snowpack dynamics. Here we present a new snow reanalysis covering all the mountain massifs of Southern Europe, as well as the Central European ranges of the Carpathians and the Alps. The product was developed using the Multiple Snow data Assimilation system (MuSA), ingesting observations, together with their uncertainties, generated in the frame of the snow and land surface temperature European Space Agency Climate Change Initiative (ESA-CCI) projects, into an ensemble of simulations generated by an intermediate-complexity numerical snowpack model, the Flexible Snow Model (FSM2). The reanalysis covers the period 2000-2020 at a resolution of approximately 1 km. We present the first validation results of the dataset using in situ and remotely sensed snowpack observations in the Pyrenees, demonstrating its inter- and intra-annual consistency with the available observations.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Remote sensing based early detection approaches for Glacial Lake Outburst Floods susceptibility: A case study of the 2024 Thyanbo Glacial Lake Outburst Flood near Thame (Nepal) using Persistent Scatterer Interferometry with Sentinel-1 imagery

Authors: Niels Dedring, Jun.-Prof. Dr. Andreas Rienow, Jun.-Prof. Dr. Valerie Graw
Affiliations: Ruhr University Bochum, Institute of Geography
There is a clear link between global warming and the increase in glacier melting, leading to the expansion of glacial lakes dammed by fragile moraines, which are unstable, often partially frozen glacial deposits. Triggers such as heavy rainfall, earthquakes, landslides, avalanches, glacier break-offs, or thawing permafrost can cause glacial lake outburst floods (GLOFs). These events result in moraine breaches, releasing flood waves of mud and debris that can cause significant damage and endanger populations far downstream. On 16 August 2024, a GLOF from the Thyanbo glacial lake primarily affected the village of Thame in the Namche region of Solukhumbu district, Nepal. The flood caused extensive destruction of local infrastructure, buildings, and agricultural land, and displaced over 135 inhabitants. Initial investigations suggest that the trigger was a flood originating from the Ngole Cho glacial lake, which overtopped its terminal moraine. This flood wave then ran into the Thyanbo glacial lake, overtopping its terminal moraine and causing it to breach. This cascade of incidents triggered the GLOF that ran downstream. Although no casualties were reported, the International Charter Space and Major Disasters was activated three days after the GLOF (19.08.2024), which underlines the urgent need for investigation and research supported by Earth observation data. The study presented here analyses recent and past dynamics of all glacial lakes in the Thame Khola valley, with an emphasis on the GLOF event of August 2024. Integrating Sentinel-1 and -2 as well as high-resolution PlanetScope satellite data, the lakes' areas, volume estimates, and frozen-state periods are determined. To obtain a clearer picture of the exact cause and course of the GLOF, a change detection and runoff estimation will also be performed.
With Persistent Scatterer Interferometry (PSI) using Sentinel-1 SLC data, ground movements can be tracked over time with millimetre accuracy along the sensor's line of sight. The PSI technique enables the measurement of displacements at identified persistent scatterers in SAR datasets, which mostly correspond to consistent features in the landscape such as man-made structures, natural rock outcrops, and exposed geological formations. This study aims to determine whether PSI could have predicted the collapse of the moraine and the resulting GLOF, as well as the time scales involved. The PSI processing is carried out using the Stanford Method for Persistent Scatterers (StaMPS) applied to Sentinel-1 data. If this remote sensing approach proves suitable for the early detection of unstable moraines, PSI could help improve the identification and classification of potentially dangerous glacial lakes across the whole Hindu Kush Himalaya region and could be integrated into early warning systems for outburst susceptibility in future studies.
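The displacement measurement underlying PSI can be illustrated with the standard phase-to-displacement relation along the radar line of sight; the sketch below assumes Sentinel-1's C-band wavelength (~5.55 cm) and is only an illustration of the measurement principle, not part of the StaMPS processing chain:

```python
import numpy as np

# C-band wavelength used by Sentinel-1 (approx. 5.55 cm).
WAVELENGTH_M = 0.0555

def phase_to_los_displacement(unwrapped_phase_rad):
    """Convert unwrapped interferometric phase (radians) to line-of-sight
    displacement (metres). With the usual sign convention, increasing phase
    corresponds to motion away from the sensor."""
    return -WAVELENGTH_M / (4.0 * np.pi) * np.asarray(unwrapped_phase_rad)

# One full phase cycle (2*pi rad) corresponds to half a wavelength of
# line-of-sight motion, i.e. about 2.8 cm for Sentinel-1.
disp = phase_to_los_displacement(2.0 * np.pi)
```

Millimetre-level sensitivity follows directly from this relation: a phase change of a few tenths of a radian already corresponds to a few millimetres of line-of-sight motion.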
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Assimilation of Satellite Retrieved Snow Depth (SD) and Snow Water Equivalent (SWE) Into a Snow Model

Authors: Ezra Beernaert, Dr. Kari Luojus, Dr. Hans Lievens
Affiliations: Hydro-Climate Extremes Lab (H-CEL), Ghent University, Finnish Meteorological Institute
Satellite observations of the snow water equivalent (SWE) in the world’s mountain ranges are still lacking. This observation gap hinders the accurate estimation of total seasonal water storage in snow. To address it, a physical snow model can be implemented to obtain daily snow depth (SD) and SWE estimates for large regions. Generating a high-resolution SWE dataset with a snow model requires high-resolution meteorological forcings; here, the Multi-Source Weather (MSWX) and Multi-Source Weighted-Ensemble Precipitation (MSWEP) datasets are used. These 3-hourly forcing data, at 0.1° resolution, were downscaled to 500 m resolution to account for terrain influences. Different downscaling procedures (focusing on precipitation, temperature, and solar radiation) were explored and evaluated to select the combination of methods that yielded the best possible SD and SWE simulations. A physically based snow model (SnowClim, developed by Lute et al., 2022) was calibrated to further optimize the simulations. Through data assimilation, further improvement of the modelled SD and SWE is possible. For mountainous regions, Sentinel-1 SD retrievals (Lievens et al., 2019, 2022) can be utilized; for non-mountainous regions in the northern hemisphere, the GlobSnow SWE dataset is available (Luojus et al., 2021). Here, we first investigated the assimilation of Sentinel-1 SD over the European Alps. The assimilation was found to improve the SD and SWE estimates compared to those based on the model or the satellite observations alone. In a second case study, the model and data assimilation framework is extended to simultaneously assimilate Sentinel-1 SD over mountain regions and GlobSnow SWE over non-mountainous regions in Scandinavia (Norway, Sweden, and Finland).
The results of our study demonstrate the advantage of combining satellite information with the physically based snow model for daily, high-resolution and area-wide SWE estimation.
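The assimilation step can be illustrated with a generic stochastic ensemble Kalman update for a directly observed scalar state such as snow depth at a single pixel; the ensemble size, prior spread, and observation error below are illustrative assumptions, not values from this study:

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, obs, obs_var):
    """One stochastic EnKF analysis step for a scalar state (e.g. snow depth
    at a pixel) observed directly (observation operator H = 1). `ensemble`
    is a 1-D array of prior model states; `obs` is the satellite retrieval;
    `obs_var` is its error variance."""
    prior_var = np.var(ensemble, ddof=1)
    gain = prior_var / (prior_var + obs_var)          # scalar Kalman gain
    # Perturb the observation once per member (stochastic EnKF variant).
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.size)
    return ensemble + gain * (perturbed_obs - ensemble)

# Prior ensemble of snow depths (m) and a Sentinel-1-like SD retrieval:
prior = rng.normal(1.2, 0.3, size=100)
posterior = enkf_update(prior, obs=1.5, obs_var=0.05**2)
```

The update pulls the ensemble mean toward the observation and shrinks the ensemble spread, which is the mechanism by which the assimilation improves on the model or the observations alone.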
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: A.06.02 - POSTER - Enhancing Space Weather Understanding: Insights from LEO Satellite-Based Operational and Pre-Operational Products

Space weather and space climate refer to the interactions between the Sun and Earth over timescales ranging from minutes to decades. Predicting extreme space weather and developing mitigation strategies is crucial, as space assets and critical infrastructures, including satellites, communication systems, power grids, aviation, etc., are vulnerable to the space environment.

This session focuses on assessing the current status of the space weather forecast and nowcast products obtained from LEO satellite measurements, alongside other missions and ground-based technologies, and pushing forward with innovative concepts. We strongly encourage contributions that promote a cross-disciplinary and collaborative approach to advancing our understanding of space weather and space climate. Moreover, we welcome presentations that investigate the effects of space weather on diverse applications in Earth's environment, such as space exploration, aviation, power grids, auroral tourism, etc.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Dynamical Complexity in Swarm-derived Storm and Substorm Indices Using Information Theory: Implications for Interhemispheric Asymmetry

Authors: Constantinos Papadimitriou, Dr. Georgios Balasis, Dr. Adamantia Zoe Boutsi, Dr. Omiros Giannakis
Affiliations: National Observatory Of Athens - IAASARS, National and Kapodistrian University of Athens
In November 2023, the ESA Swarm constellation mission celebrated 10 years in orbit, offering one of the best-ever surveys of the topside ionosphere. Among its achievements, it has recently been demonstrated that Swarm data can be used to derive space-based geomagnetic activity indices, analogous to the standard ground-based geomagnetic indices, monitoring magnetic storm and magnetospheric substorm activity. Given that the official ground-based index for substorm activity (the Auroral Electrojet, or AE, index) is constructed from data of 12 ground stations located solely in the northern hemisphere, this index is predominantly northern, whereas the Swarm-derived AE index may be more representative of a global state, since it is based on measurements from both hemispheres. Recently, many novel concepts originating in time series analysis based on information theory have been developed, partly motivated by specific research questions linked to various domains of the geosciences, including space physics. Here, we apply information theory approaches (the Hurst exponent and a variety of entropy measures) to analyze the Swarm-derived magnetic indices around intense magnetic storms. We show the applicability of information theory to studying the dynamical complexity of the upper atmosphere, by highlighting the temporal transition from the quiet-time to the storm-time magnetosphere around the May 2024 superstorm, which may prove significant for space weather studies. Our results suggest that the spaceborne indices capture the same dynamics and behaviors, with regard to their informational content, as the traditionally used ground-based ones. A few studies have addressed the question of whether the auroras are symmetric between the northern and southern hemispheres.
Separate Swarm-derived AE indices for the northern and southern hemispheres may therefore, when analyzed with appropriate information-theoretic time series techniques, provide an opportunity to further confirm these recent findings on interhemispheric asymmetry. Here, we also provide evidence for interhemispheric energy asymmetry based on analyses of the Swarm-derived auroral indices AE North and AE South.
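As a minimal illustration of one of the measures mentioned, the sketch below estimates a Hurst exponent from the scaling of lagged differences; this is a generic estimator applied to synthetic data, not the exact method or data of the study:

```python
import numpy as np

def hurst_exponent(series, max_lag=20):
    """Estimate the Hurst exponent from the scaling of the standard
    deviation of lagged differences: std(x[t+lag] - x[t]) ~ lag**H.
    H > 0.5 indicates persistence, H < 0.5 anti-persistence."""
    x = np.asarray(series, dtype=float)
    lags = np.arange(2, max_lag)
    tau = [np.std(x[lag:] - x[:-lag]) for lag in lags]
    # Slope of the log-log relation gives the exponent H.
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return slope

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=10_000))   # Brownian-motion-like series
H = hurst_exponent(walk)
```

For a Brownian-motion-like series the estimate is close to 0.5; applied to a geomagnetic index, persistent (H > 0.5) or anti-persistent (H < 0.5) behavior would indicate long-range correlation or rapid alternation, respectively.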
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: The 10-11 May 2024 Geomagnetic Storm in the light of Swarm Observations

Authors: Balázs Heilig, Veronika Barta, Kitti Berényi, Máté Tomasik, Tamás Bozóki
Affiliations: HUN-REN Institute of Earth Physics and Space Science, Eötvös Loránd University, Institute of Geography and Earth Sciences, Department of Geophysics and Space Science, Space Research Group, HUN-REN – ELTE Space Research Group
The Swarm mission provides a wide range of observations and data products supporting the investigation of magnetosphere-ionosphere coupling processes. A special area of these coupling processes is the subauroral ionosphere, conjugate to the plasma boundary layer. In this paper, we demonstrate how Swarm observations can provide insight into storm-time dynamic processes, using the latest geomagnetic superstorm, the 10-11 May 2024 event, as an example. During this event, the Swarm A/C pair orbited in the 07/19 MLT sector, while Swarm B explored the pre-noon/pre-midnight MLT sector. Magnetic and electric field observations, observations of plasma structures, and field-aligned, ionospheric and magnetospheric currents provide a rich and complex context for the interpretation of the evolving processes. The 10-11 May 2024 event was extreme in many respects, some of which will be presented in this contribution.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Comparative analysis of socioeconomic impacts of space weather: High vs. Mid-latitude vulnerabilities and mitigation strategies

Authors: Giulia Abbati, Sara Mainella, Pietro Vermicelli
Affiliations: Istituto Nazionale Di Geofisica E Vulcanologia, SpacEarth Technology SRL
Extreme space weather phenomena are increasingly recognized as a significant global economic risk [1]. Events such as solar flares and coronal mass ejections can directly disrupt critical spaceborne and ground-based infrastructures, including satellites, radio communications, and power grids, leading to service interruptions that may result in billions of euros in damages. Although these potential damages undermine societal stability, the economic sustainability of infrastructures and their owners, and the global economy, the scientific literature on this topic remains scarce. As Baker (2009) emphasizes [2], this scarcity stems from the complexity of the research challenge, which requires an interdisciplinary approach integrating scientific, engineering, economic, and social perspectives, as well as from a general lack of data that prevents researchers from building precise predictive models. However, an accurate understanding of the socioeconomic impact of space weather is fundamental to developing appropriate resilience and mitigation strategies. In our previous study [3], we focused on high-latitude regions, as the interactions between the Earth's magnetic field and charged particles from the solar wind make these areas particularly susceptible to upper atmosphere phenomena (UAP), significantly affecting local technological infrastructures and energy systems. The socioeconomic impact of these events depends on both the latitude at which they occur and the severity of the event itself. Generally, high latitudes are more affected, but extreme events can also cause significant damage at mid-latitudes. To illustrate this, we present a case study comparing two notable geomagnetic storms, focusing on their effects on GNSS receivers and the precision agriculture sector. This analysis relates the findings of our previous study on the geomagnetic storm of Solar Cycle 24, known as the St.
Patrick's Day storm, with those of a more recent geomagnetic storm that took place in May 2024, during Solar Cycle 25. The St. Patrick's Day storm of March 17, 2015, was the most severe geomagnetic storm of the 24th solar cycle, with a Dstmin of approximately -226 nT, classifying it as an event with an annual occurrence probability [4]. During the main phase of the storm, high-intensity Medium Scale Traveling Ionospheric Disturbances (MSTIDs) led to a clear decrease in positional accuracy, exceeding 1.5 m for all components, with accuracy degraded to the point that positioning was impossible for over three hours. Considering that most GNSS applications in precision agriculture require accuracy below 1 m, and that an hour of GNSS outage costs this sector approximately €200,000, the estimated cost of the St. Patrick's Day storm for precision agriculture (PA) is around €600,000 [5]. By contrast, the “Mother's Day Storm” of May 2024, triggered by an X8.7-class solar flare from sunspot AR3664, peaked at a Dstmin of -412 nT and can therefore be classified as an event with a once-per-ten-years occurrence probability [4]. As one of the most powerful storms of the current solar cycle, it serves as a significant benchmark event for assessing the vulnerability of modern technologies to space weather threats [6]. During this recent geomagnetic storm, a significant decrease in TEC (Total Electron Content) was observed, indicating a strong ionospheric disturbance, along with an increase in ROTI (Rate of TEC change Index), signaling ionospheric irregularities [6]. These irregularities, combined with scintillation effects detected by GNSS receivers, impacted satellite signal propagation, compromising navigation and communication accuracy. Although the underlying UAP were different, the May 2024 geomagnetic storm also severely affected the precision of GNSS-dependent systems used in agriculture.
Users reported deviations of up to 25 cm in tractor guidance lines, despite PDOP values suggesting high precision, leading some farmers to suspend planting activities [7]. A comparative analysis of these two storms provides the opportunity to evaluate the mid-latitude effects of space weather events causing GNSS disturbances in the PA sector, which is gaining importance in Central Europe [9]. The fact that the GNSS disturbances originate in UAP of different natures calls for an in-depth analysis of the underlying physical processes to assess the expected duration and region of occurrence of service outages, promoting better mitigation and resilience strategies. Given the increasing risk posed by space weather, and the expectation that the probability of such events will continue to rise over time [8], it is crucial to assess the potential impacts of these phenomena, both in terms of economic and social consequences, and to evaluate the availability of current technological solutions capable of mitigating the associated damage. Through a comparative analysis of these storms, this study seeks to deepen our understanding of the consequences of space weather at different latitudes, identifying specific vulnerabilities in technological infrastructures and quantifying, where possible, the associated economic impacts in the PA sector. The findings provide new insights into how the intensity and geographical location of UAP influence GNSS technological systems and may contribute to enhancing the resilience of the PA sector at mid-latitudes. [1] Eastwood, J. P., Biffis, E., Hapgood, M. A., Green, L., Bisi, M. M., Bentley, R. D., Wicks, R., McKinnell, L. A., Gibbs, M., & Burnett, C. (2017). The Economic Impact of Space Weather: Where Do We Stand? Risk Analysis, 37(2), 206–218. https://doi.org/10.1111/risa.12765 [2] Baker, D. N.
(2009), What Does Space Weather Cost Modern Societies?, Space Weather, 7, S02003, doi:10.1029/2009SW000465. [3] P. Vermicelli, S. Mainella, L. Alfonsi, A. Belehaki, D. Buresova, R. Hynonen, V. Romano, B, Witvliet “The Socioeconomic Impacts of the Upper Atmosphere Effects on LEO Satellites, Communication and Navigation Systems,” doi:10.5281/zenodo.66714242. [4] M. Ishii et al., “Space weather benchmarks on Japanese society,” Earth, Planets Sp., 73, 1, 2021, doi: 10.1186/s40623-021-01420-5. [5] Mainella, S., Vermicelli, P., & Urbar, J. (2024, May 19-24). Quantifying the socioeconomic impacts of Space Weather in Europe: How costly is the effect of Medium Scale Traveling Ionospheric Disturbances on GNSS positioning? Paper presented at the 4th URSI AT-RASC, Gran Canaria, Spain. [6] Spogli, L., Alberti, T., Bagiacchi, P., Cafarella, L., Cesaroni, C., Cianchini, G., Coco, I., Di Mauro, D., Ghidoni, R., Giannattasio, F., Ippolito, A., Marcocci, C., Pezzopane, M., Pica, E., Pignalberi, A., Perrone, L., Romano, V., Sabbagh, D., Scotto, C., Spadoni, S., Tozzi, R. and Viola, M. (2024) “The effects of the May 2024 Mother’s Day superstorm over the Mediterranean sector: from data to public communication”, Annals of Geophysics, 67(2), p. PA218. doi: 10.4401/ag-9117. [7] LandMark Implement. (2024, May 11). Geomagnetic storm affecting GPS signals - May 2024. https://landmarkimp.com/news/news/blog/geomagnetic-storm-affecting-gps-signals--may-2024/ [8] Consilium. (2023, November 21). Solar storms: A new challenge on the horizon. Council of the European Union. https://www.consilium.europa.eu/media/68182/solar-storms_a-new-challenge-on-the-horizon-21-nov-2023_web.pdf [9] Bojana Petrović, Roman Bumbálek, Tomáš Zoubek, Radim Kuneš, Luboš Smutný, Petr Bartoš, Application of precision agriculture technologies in Central Europe-review, Journal of Agriculture and Food Research, Volume 15, 2024,101048, ISSN 2666-1543, https://doi.org/10.1016/j.jafr.2024.101048.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Swarm as Space Weather mission: L1 and L2 Fast data processing

Authors: Roberta Forte, Enkelejda Qamili, Vincenzo Panebianco, Lars Tøffner-Clausen, Stephan Buchert, Christian Siemes, Jonas Bregnhøj Lauridsen, Guram Kervalishvili, Jan Rauberg, Alessandro Maltese, Anna Mizerska, Florian Partous, Maria Jose Brazal Aragón, Maria Eugenia Mazzocato, Giuseppe Albini, Antonio De la Fuente, Anja Stromme
Affiliations: Serco For Esa, DTU Space, Swedish Institute of Space Physics, TU Delft, GFZ, GMV Poland, ESA - ESOC, ESA - ESRIN
After more than a decade in space, ESA’s Earth Explorer Swarm mission is still in excellent shape and continues to contribute to a wide range of scientific studies, from the core of our planet, through the mantle and lithosphere, to the ionosphere and its interactions with the solar wind. Its highly accurate observations of electromagnetic and atmospheric parameters of the near-Earth space environment, together with the mission’s distinctive constellation design, make Swarm well suited for developing novel space weather products and applications. In 2023, a “Fast” processing chain was transferred to operations, providing Swarm Level 1B products (orbit, attitude, magnetic field, and plasma measurements) with minimal delay after acquisition. In 2024, the generation of Swarm Level 2 products (field-aligned currents, total electron content) was also implemented in the “Fast” chain; these products are available on the Swarm dissemination server. The “Fast” data products add significant value in monitoring ongoing space weather phenomena and help model and nowcast the evolution of several geomagnetic and ionospheric events. This work presents the set-up of the Swarm “Fast” data processing chain, its current status, and plans for future improvements and applications.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: C.02.06 - POSTER - Swarm - ESA's extremely versatile magnetic field and geospace explorer

This session invites contributions dealing specifically with the Swarm mission: mission products and services, calibration, validation and instrument-related discussions. It is also the session in which the future and evolution of the mission, and the future beyond Swarm will be discussed. Particularly welcome are contributions highlighting observational synergies with other ESA and non-ESA missions (past, current and upcoming), in addition to ground-based observations and modelling.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Multi-Scale Irregularities Product (m-SIP): a data product utilizing the high-resolution Swarm plasma density data for space weather applications

Authors: Yaqi Jin, Dr. Wojciech Miloch, Dr. Daria Kotova, Dr. Luca Spogli, Dr. Rayan Iman, Lucilla Alfonsi
Affiliations: University of Oslo, Istituto Nazionale di Geofisica e Vulcanologia
Nowadays it is crucial to monitor and forecast space weather conditions, in particular variations in the near-Earth space environment induced by solar wind-magnetosphere-ionosphere-thermosphere interactions. These can affect critical infrastructures and services, including communication and navigation systems such as Global Navigation Satellite Systems (GNSS), as well as satellite operations. However, at present there is no space weather product that can monitor and predict the global space weather impact on GNSS users. Here, the Swarm mission can contribute through its high-resolution faceplate (FP) plasma density measurements. We present the Swarm multi-scale irregularities product (m-SIP), a Swarm-based data product that characterizes small-scale plasma density irregularities (< 10 km) across scales down to near the Fresnel scale (~400 m), which is particularly relevant for GNSS users. The new data product consists of two parts: 1) derived plasma density parameters at small spatial scales, e.g., the rate of change of density index in a 1-second window (RODI1s), density gradients at 5 km and at 10 km, and the spectral slope of the power spectral density; and 2) a modelled S4 index based on the phase screen model. The new data product will be useful for monitoring the space weather impact on GNSS users. Thanks to its long-term, global availability at all latitudes, it would ease the development of specific models to characterize small-scale irregularities in the ionospheric plasma density and their impact on GNSS services. In addition, the data product is useful for improving the fundamental understanding of ionospheric processes and the formation of plasma irregularities. It contains the parameters necessary for characterizing plasma irregularities at multiple scales and the energy cascade across spatial scales (including the spectral slope) down to near the Fresnel scale of GNSS signals.
It can be used to study the turbulent ionosphere at both high and low latitudes. It can also provide additional parameters that will allow a quick assessment of the space weather conditions in the ionosphere.
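As an illustration of the kind of parameter the product contains, the sketch below computes a simplified rate-of-change-of-density index from a density time series; the 16 Hz cadence mirrors the faceplate sampling rate, but the windowing and the synthetic data are illustrative assumptions rather than the operational m-SIP algorithm:

```python
import numpy as np

def rodi(ne, dt=1.0 / 16.0, window_s=1.0):
    """Illustrative RODI: standard deviation of the rate of change of
    electron density (ROD) over consecutive windows. `ne` is sampled at a
    faceplate-like 16 Hz cadence; the window length is 1 s."""
    rod = np.diff(ne) / dt                  # rate of change of density
    n = int(round(window_s / dt))           # samples per window
    n_win = rod.size // n                   # number of complete windows
    return rod[: n_win * n].reshape(n_win, n).std(axis=1, ddof=1)

rng = np.random.default_rng(1)
# 60 s of synthetic density (m^-3): a quiet and a strongly irregular case.
quiet = 1e11 + rng.normal(0, 1e8, size=16 * 60)
disturbed = 1e11 + rng.normal(0, 1e10, size=16 * 60)
rodi_quiet = rodi(quiet)
rodi_disturbed = rodi(disturbed)
```

In this toy example the disturbed series yields a far larger RODI than the quiet one, which is exactly the contrast such an index is designed to capture.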
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: A World without Low Earth Orbit High-Precision Magnetometry

Authors: Dr. Guram Kervalishvili, Ingo Michaelis, Dr. Martin Rother, Dr. Maximilian A. Schanner, Prof. Christopher C. Finlay, Dr. Clemens Kloss, Dr. Monika Korte, Enkelejda Qamili, Jan Rauberg
Affiliations: GFZ German Research Centre For Geosciences, DTU Space, Technical University of Denmark, SERCO for European Space Agency (ESA-ESRIN)
High-precision magnetometry is essential for monitoring Earth's magnetic field, enabling breakthroughs in understanding the dynamics of the core, lithosphere, and magnetosphere. Missions like Ørsted, CHAMP (CHAllenging Minisatellite Payload), and ESA's Swarm constellation have demonstrated the critical value of high-precision vector field and scalar magnetometer measurements carried out with absolute accuracy in Low Earth Orbit (LEO). Now, imagine a world where the satellites or instruments of dedicated geomagnetic field missions in LEO reach the end of their operational lifetimes, whether expected or unexpected, lacking new missions to replace them. Without the unique insights provided by missions like Ørsted, CHAMP, and Swarm, we would lose a critical, high-resolution perspective of Earth's magnetic environment, which reveals fluctuations and shifts that would otherwise remain unresolved. Moreover, data from dedicated magnetic scientific missions play a crucial role in calibrating platform magnetometers on satellites not dedicated to magnetic measurements. While these platform magnetometers are functional, they lack the precision needed to detect fine-scale variations. Without the rigorous calibration provided by high-precision magnetic missions providing measurements with absolute accuracy, the data that platform magnetometers produce is less reliable, introducing inconsistencies and inaccuracies across datasets. Here, we explore the consequences of losing high-precision, absolute accuracy magnetometry capabilities in LEO for calibrating platform magnetometers on satellites not dedicated to magnetic measurements. While it would still be possible to generate reference geomagnetic data using less accurate sources, e.g., ground-based observatory networks, these alternatives lack spatial and temporal resolution provided by LEO-based measurements. 
As a result, the derived geomagnetic models would suffer from diminished resolution and accuracy, reducing their overall reliability and scope. Such degraded models would, in turn, propagate inaccuracies into the calibration of platform magnetometers, undermining their precision. This cascading effect would significantly hinder our ability to monitor, understand, and model the dynamic geomagnetic field, particularly its core, lithospheric, and magnetospheric contributions. Maintaining accurate, high-precision magnetometry in LEO is therefore essential for preserving the integrity of geomagnetic science and supporting its diverse scientific and practical applications.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: 11 years of Swarm PDGS Operations: Lessons Learned

Authors: Alessandro Maltese, Livia D'Alba, Antonio De La Fuente, Danilo Parente
Affiliations: Serco, ESA, Starion
Contrary to the common perception that operational ground segments are static and conservative by nature, the architecture of the Swarm PDGS has been constantly evolving. This evolution aims to respond to new operational and scientific requirements, such as the need for faster data delivery and the integration of innovative data processing algorithms. It also involves incorporating new science products that were not initially foreseen, demonstrating the system's flexibility and adaptability. Efforts have been made to improve the robustness, maintainability, and efficiency of the current system using the latest available techniques, including adopting modern software development practices and leveraging cutting-edge technologies. These ongoing enhancements ensure that the PDGS remains at the forefront of technological advancements, providing high-quality data services to the global scientific community. This poster summarizes the efforts of the Swarm PDGS operations support team over the last 11 years in several areas. Key initiatives include the evolution and streamlining of the system architecture to ensure long-term maintainability, which involved reengineering components to simplify future upgrades and reduce technical debt. The migration from a physical to a virtual infrastructure was another significant milestone, enhancing scalability, reducing operational costs, and increasing system resilience. The team also focused on the flexible provision of required storage and processing power for full mission reprocessing campaigns, enabling the system to handle increased data volumes efficiently. Improvements in monitoring and reporting subsystems have provided better insights into system performance, facilitating proactive maintenance and quicker issue resolution. 
The integration of additional new data products from the Swarm DISC Processing Centres and other missions has expanded the data portfolio available to researchers, fostering interdisciplinary studies and collaboration. Strengthening and enhancing system security has been a continuous priority, addressing emerging cyber threats and ensuring the integrity and confidentiality of the data. The implementation of the FAST platform marked a significant advancement, providing near real-time data access and opening new possibilities for time-sensitive applications like space weather monitoring. Changes in the contractual approach have introduced greater flexibility, allowing for more agile responses to evolving mission needs and technological developments. Finally, this contribution provides a synthesis of the main lessons learned during this period. It highlights how adaptability, continuous improvement, and close collaboration among all stakeholders are essential for the success of such a complex and long-term mission. The experiences gained offer valuable insights for future missions and ground segment developments, demonstrating that with the right approach, operational ground segments can be dynamic, innovative, and responsive to the ever-changing demands of the scientific community.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: VirES: Data and model access for the Swarm mission and beyond

Authors: Martin Pačes, Ashley Smith
Affiliations: EOX IT Services, GmbH, University of Edinburgh
The VirES service [1] has been developed to make Swarm products accessible to programmers and non-programmers alike. The overall project combines web services to robustly access and process data and models on demand, a graphical interface that enables easy exploration and visualisation of products, and Python tooling to allow more flexible operation and foster community-developed tools. The web client GUI provides both 3D visualisation and customisable 2D plotting, allowing data exploration without any programming required. On the other hand, the Jupyter-based Virtual Research Environment (VRE) [2] and ready-to-run Jupyter notebooks [3] provide the more intrepid explorer the opportunity to generate more bespoke analysis and visualisation. The notebooks are backed by a JupyterHub furnished with domain-relevant Python packages, which together lower the barrier to entry to programming. Both the web client and notebooks are interlinked with the Swarm handbook [4] which provides more detailed documentation of products. The VirES server can be accessed through Open Geospatial Consortium (OGC) APIs using the viresclient Python package [5], as well as through the Heliophysics API (HAPI) [6]. The availability of both APIs offers both flexibility and interoperability, enabling a variety of usage patterns both for researchers and for integration with external data systems. While the service was originally developed to serve the Swarm satellite data, we also provide access to ground magnetic observatory data derived from INTERMAGNET, as well as Swarm "multimission" products derived from other spacecraft as part of Swarm projects. VirES is developed for ESA by EOX IT Services [7], in close collaboration with researchers across the Swarm Data, Innovation, and Science Cluster (DISC). We aim to produce a sustainable ecosystem of tools and services, which together support accessibility, interoperability, open science, and cloud-based processing. 
All services are available freely to all, and the software is developed openly on GitHub [8,9].
[1] https://vires.services
[2] https://vre.vires.services
[3] https://notebooks.vires.services
[4] https://swarmhandbook.earth.esa.int/
[5] https://viresclient.readthedocs.io/
[6] https://vires.services/hapi
[7] https://eox.at
[8] https://github.com/ESA-VirES
[9] https://github.com/Swarm-DISC
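The HAPI access route mentioned above can be sketched with the Python standard library alone. The request and response shapes below follow the HAPI 3 convention (time-ordered CSV with no header row); the dataset identifier and parameter name are assumptions for illustration only, since the server's `/catalog` and `/info` endpoints list the actual datasets and parameters, and the viresclient package [5] is the supported client for real work. No network call is made here.

```python
import csv
import io
from urllib.parse import urlencode

# Base URL from the abstract; query parameter names follow HAPI 3
# (older HAPI 2 servers use id/time.min/time.max instead).
HAPI_BASE = "https://vires.services/hapi"

def hapi_data_url(dataset, parameters, start, stop):
    """Build a HAPI /data request URL (CSV is the default response format)."""
    query = urlencode({
        "dataset": dataset,
        "parameters": ",".join(parameters),
        "start": start,
        "stop": stop,
    })
    return f"{HAPI_BASE}/data?{query}"

def parse_hapi_csv(text, parameters):
    """Parse a header-less HAPI CSV payload: ISO time, then numeric columns."""
    rows = []
    for record in csv.reader(io.StringIO(text)):
        row = {"Timestamp": record[0]}
        row.update(zip(parameters, map(float, record[1:])))
        rows.append(row)
    return rows

# `sample` stands in for a server response; the dataset id is illustrative.
url = hapi_data_url("SW_OPER_MAGA_LR_1B", ["F"],
                    "2020-01-01T00:00:00Z", "2020-01-01T00:00:02Z")
sample = "2020-01-01T00:00:00Z,50123.4\n2020-01-01T00:00:01Z,50124.1\n"
rows = parse_hapi_csv(sample, ["F"])
```

The same URL pattern works with any HAPI-aware tool, which is what makes the endpoint useful for integration with external data systems.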
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Swarm Magnetic Data Evaluated Through Comprehensive Inversion of Earth's Magnetic Field

Authors: Lars Tøffner-Clausen
Affiliations: DTU Space
For more than 11 years the Swarm mission has demonstrated leading-class quality in its measurements of the magnetic field surrounding Earth. However, the Sun-induced magnetic disturbance, denoted dB_Sun, is known to have been imperfectly characterised so far. Even though we do not envisage a perfect characterisation of the dB_Sun disturbance, recent analyses have shown promising progress towards a better understanding and characterisation of it. This progress has been supported by careful analysis of the magnetic data residuals with respect to models of the magnetic fields surrounding Earth. Here, we present the latest achievements in understanding and characterising dB_Sun through the analysis of data residuals against the Comprehensive Inversion of Earth's magnetic field. The Comprehensive Inversion (CI) approach constitutes a simultaneous modelling of the magnetic fields from Earth's fluid core, lithosphere, ionosphere, and magnetosphere, as well as the magnetic fields induced by the tidal motion of the oceans. For the ionosphere and magnetosphere, both the direct magnetic fields and their counterparts induced in Earth's mantle are included.
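The co-estimation idea behind the CI approach can be illustrated with a toy least-squares problem: several field sources are solved for simultaneously by concatenating their design matrices, so that residuals of one source are not aliased into the other. The two "sources" below are arbitrary basis functions standing in for real spherical-harmonic expansions; all numbers are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

# Toy design matrices: a slow "internal" trend and a periodic "external" signal.
A_core = np.column_stack([np.ones_like(t), t])
A_ext = np.column_stack([np.cos(2*np.pi*t), np.sin(2*np.pi*t)])

truth = np.array([30.0, 5.0, 2.0, -1.0])        # [core coeffs..., external coeffs...]
A = np.hstack([A_core, A_ext])                  # one joint design matrix
data = A @ truth + 0.01 * rng.standard_normal(t.size)

# Single joint least-squares solve co-estimates both sources at once.
coeffs, *_ = np.linalg.lstsq(A, data, rcond=None)
```

In the real CI the parameter vector runs over core, lithospheric, ionospheric, magnetospheric, and ocean-tidal coefficients, but the algebraic structure of the joint solve is the same.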
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Enhanced Swarm-Based Climatological Models of the Non-Polar Geomagnetic Daily Variations

Authors: Arnaud Chulliat, Louis Chauvet, Gauthier Hulot, Robin Duchene, Martin
Affiliations: CIRES, University Of Colorado Boulder
Climatological models of the non-polar geomagnetic daily variations have a variety of uses, from studying ionospheric electrical current systems to correcting magnetic field survey data. Several such models were produced as part of the Dedicated Ionospheric Field Inversion (DIFI) project throughout the Swarm satellite mission. Here we present the latest version of the DIFI model, DIFI-8, inferred from ten years of Swarm Alpha and Bravo magnetic field measurements. We also present a new version of the Extended DIFI model, xDIFI-2, inferred from Swarm, CHAMP and observatory data and covering 2001-2023. Like their predecessors, these new models provide both the primary and induced magnetic fields generated by mid-latitude Sq currents and the Equatorial Electrojet (EEJ) within +/- 55 degrees quasi-dipole latitudes, at both ground and Low-Earth Orbit satellite altitudes. In addition, they include new features, such as data preprocessing that incorporates corrections for toroidal magnetic fields based on a recently published climatological model (Fillion et al., 2023). Finally, they have been extensively validated against independent, ground-based observatory data.
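The climatological modelling idea, representing the daily variation as a truncated harmonic series in local time fitted to many observations, can be sketched as follows. A real DIFI-type model additionally expands in quasi-dipole latitude, season, and solar activity, and separates primary from induced parts; the data and amplitudes here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.linspace(0.0, 24.0, 288, endpoint=False)   # local time [h]
omega = 2.0 * np.pi / 24.0                            # diurnal angular frequency

# Synthetic "observations": 24 h and 12 h harmonics plus noise [nT].
obs = (20.0 * np.sin(omega * hours)
       + 8.0 * np.cos(2.0 * omega * hours)
       + rng.normal(0.0, 0.5, hours.size))

# Least-squares fit of a truncated Fourier series in local time.
m_max = 4
cols = [np.ones_like(hours)]
for m in range(1, m_max + 1):
    cols += [np.cos(m * omega * hours), np.sin(m * omega * hours)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
fitted = A @ coef
```

With evenly spaced samples the harmonic columns are nearly orthogonal, so the fitted coefficients recover the injected 24 h and 12 h amplitudes directly.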
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Characterization of the ionospheric perturbation degree at mid-scales with Swarm's NeGIX and TEGIX

Authors: J. Andrés Cahuasquí, Mainul Hoque, Norbert Jakowski, Dmytro Vasylyev, Stephan Buchert, Martin Kriegel, Paul David, Grzegorz Nykiel, Youssef Tagargouste, Lars Tøffner-Clausen, Jens Berdermann
Affiliations: German Aerospace Center (DLR), Swedish Institute of Space Physics (IRF), Technical University of Denmark (DTU)
Since its launch in November 2013, the European Space Agency's (ESA) Swarm mission has delivered unprecedented data products and services that have significantly enhanced our understanding of solar, magnetospheric, thermospheric, ionospheric, and atmospheric processes, as well as their coupling and impact on human-made technological systems. Currently, the Swarm Product Data Handbook includes 68 Level 1 and Level 2 data products derived from Swarm measurements, along with over 20 additional products obtained from other spacecraft. All of this activity is curated by the Swarm Data, Innovation, and Science Cluster (DISC). Recently, two novel data products have been added to the Swarm data family: the electron density gradient ionospheric index (NeGIX) and the total electron content gradient ionospheric index (TEGIX). These products implement a temporal and spatial combination of measurements from Swarm A and Swarm C along their near-polar, parallel orbits. NeGIX and TEGIX enable the investigation of ionospheric plasma irregularities and perturbations at mid-scales, on the order of 100 km, not only along the meridional transit direction of the Swarm satellites but also along the longitudinal (zonal) direction. Consequently, the space-based observations from Swarm, combined with the methodologies of NeGIX and TEGIX, provide new insights into several important topics in space weather research. Indeed, initial studies using these products have demonstrated their effectiveness in applications such as scintillation modeling, characterizing ionospheric plasma bubbles, and monitoring ionospheric indices in combination with ground-based observations. In this work, we provide a comprehensive assessment of the capabilities of NeGIX and TEGIX to characterize the ionospheric state under both quiet and stormy geomagnetic conditions. We examine several of the most intense geomagnetic events from solar cycles 24 and 25.
Furthermore, with over ten years of Swarm data available, a climatological analysis of the ionosphere has been conducted using these newly-developed indices. Such analysis forms a basis for future modeling and combined studies, while also supporting the development of improved proxies for characterizing ionospheric behavior and enabling their practical use in navigation, communication, and remote sensing systems.
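How a gradient index can be formed from the Swarm A/C pair is sketched schematically below: a zonal gradient from the cross-track pair difference, a meridional gradient from along-track differences, and an RMS aggregation over roughly 100 km windows. The spacings, window length, and definitions are illustrative assumptions, not the published NeGIX/TEGIX algorithms, and the density series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
ds_km = 7.5        # assumed along-track spacing between samples [km]
sep_km = 150.0     # assumed zonal separation of the Swarm A/C tracks [km]

# Synthetic electron density series for the two satellites [el/cm^3].
s = np.linspace(0.0, 20.0, n)
ne_a = 1e5 + 2e4 * np.sin(s) + rng.normal(0.0, 500.0, n)
ne_c = 1e5 + 2e4 * np.sin(s + 0.3) + rng.normal(0.0, 500.0, n)

# Zonal gradient from the cross-track pair, meridional gradient along-track.
zonal_grad = (ne_a - ne_c) / sep_km
merid_grad = np.gradient(ne_a, ds_km)

# Aggregate as an RMS over ~100 km along-track windows (illustrative choice).
win = int(100.0 / ds_km)
m = (n // win) * win
index = np.sqrt((merid_grad[:m].reshape(-1, win) ** 2).mean(axis=1))
```

The pair difference is what gives access to the zonal direction, which a single satellite's along-track sampling cannot resolve; this is the structural point the abstract makes.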
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: State of the art of Swarm mission: Instrument performances, Data Quality and Algorithm evolution

Authors: Vincenzo Panebianco, Roberta Forte, Enkelejda Qamili, Lars Tøffner-Clausen, Stephan Buchert, Johnathan Burchill, Dr.ir. Christian Siemes, Anna Mizerska, Jonas Bregnhøj Lauridsen, Thomas Nilsson, Alessandro Maltese, Maria Eugenia Mazzocato, Florian Partous, María José Brazal Aragón, Lorenzo Trenchi, Elisabetta Iorfida, Irene Cerro, Berta Hoyos Ortega, Antonio De la Fuente, Anja Stromme
Affiliations: Serco for ESA, DTU Space, Swedish Institute of Space Physics, University of Calgary, TU Delft, GMV Poland, European Space Agency, ESTEC, European Space Agency, ESRIN
The Swarm mission has marked more than a decade in orbit, representing a transformative achievement in our exploration and understanding of Earth's geomagnetic field, the ionosphere, and electric currents. Launched in 2013 by the European Space Agency (ESA) as a three-satellite constellation, Swarm was initially designed to provide unprecedented insights into Earth's magnetic field and its interactions with the surrounding space environment. Over the years, the mission has consistently exceeded its original goals, delivering groundbreaking scientific results and enabling a host of innovative applications that extend far beyond its initial scope. A defining feature of the Swarm mission is its commitment to continuous improvement. Since launch, advancements in data processing algorithms have played a vital role in ensuring the mission remains at the cutting edge of scientific discovery. These updates have not only maintained the exceptional quality of Swarm's measurements but have also allowed the mission to evolve in response to the changing needs of the scientific community. An overview of the Swarm mission status is presented, highlighting the remarkable performance of its instruments and the ongoing enhancements to the data processing algorithms. These refinements have not only strengthened Swarm's contributions to our understanding of fundamental geomagnetic Earth processes but have also supported the development of novel Swarm-based data products and services, further broadening the mission's impact.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Swarm Accelerometer as a Component in Derivation of the Non-Gravitational Forces Acting on the Spacecraft

Authors: Sergiy Svitlov, Dr. Christian Siemes, Dr. Elisabetta Iorfida, M.Sc. Daniel Rotter
Affiliations: Micro-enterprise 'Sergiy M. Svitlov', Delft University of Technology, European Space Agency (ESA), ESTEC
Swarm is an ESA Earth Explorer mission, in orbit since November 2013, consisting of three identical satellites (Swarm A, B, and C) in near-polar Low Earth Orbits. While its primary objective is to study Earth's magnetic field and its temporal evolution, the Swarm satellites also carry GPS receivers and accelerometers as part of their scientific payload. In addition to providing the precise position and time for the magnetic field measurements, the GPS receivers are used to determine the non-gravitational forces acting on the spacecraft, from which thermospheric neutral densities can be derived. The accelerometers are intended to measure those forces directly and with a much higher resolution. However, the Level 1B accelerometer data are not released to the public due to heavy distortions in the raw measurements, which render them useless in their unprocessed form. Instead, Level 2 calibrated accelerometer data are prepared and released, having undergone a series of corrections to compensate for the distortions. To exploit the advantages of both techniques, hybridised non-gravitational accelerations (Level 2 accelerometer data) are constructed as a combination of the low-pass filtered POD-derived accelerations and the high-pass filtered, pre-corrected raw accelerometer data. This hybrid approach ensures that the final accelerometer data products are of high scientific value and reliability. In this presentation, we report details of the sophisticated Level 2 processing algorithm and calibration procedures. These procedures have resulted in the production of scientifically valuable along-track accelerometer data for Swarm C covering almost the entire mission timeline, for Swarm A covering almost two years, and for Swarm B covering a few months, particularly during geomagnetically active periods at the request of ad-hoc users. Special attention is given to monitoring and maintaining the quality and validity of the accelerometers' Level 2 data.
The benefits of accelerometers for deriving non-gravitational accelerations and studying the near-Earth space environment are highlighted with examples of several strong geomagnetic storms, showcasing the instrumental role of these data in advancing our understanding of space weather phenomena.
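The hybridisation described above, low-pass filtered POD-derived accelerations plus high-pass filtered accelerometer data, is a complementary filter. The sketch below demonstrates the principle on synthetic signals with a simple FFT-based filter; the cutoff, noise levels, and signals are invented, and the operational Level 2 processing is considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt = 4096, 10.0                          # samples, sampling interval [s]
t = np.arange(n) * dt
truth = 1e-7*np.sin(2*np.pi*t/20000.0) + 2e-8*np.sin(2*np.pi*t/400.0)

# POD-derived accelerations: accurate at low frequencies, noisy at high ones.
pod = truth + rng.normal(0.0, 2e-8, n)
# Accelerometer: resolves high frequencies but carries a slow spurious drift.
acc = truth + 1e-7 * np.sin(2*np.pi*t/(n*dt))

def fft_lowpass(x, dt, fc):
    """Zero all Fourier components above cutoff frequency fc [Hz]."""
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(len(x), dt) > fc] = 0.0
    return np.fft.irfft(X, len(x))

fc = 1.0 / 2000.0                           # complementary crossover [Hz]
hybrid = fft_lowpass(pod, dt, fc) + (acc - fft_lowpass(acc, dt, fc))
```

Because the low-pass and high-pass responses sum to unity, the true signal passes through unchanged while each instrument contributes only the band where it is trustworthy.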
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Implementation of the Swarm FAST Processing Pipeline

Authors: Alessandro Maltese, Livia D'Alba, Antonio De La Fuente, Danilo Parente
</gr-replace>
Affiliations: Serco
Due to technical and budgetary constraints, the implementation of a near real-time (NRT) processing chain was initially discarded during the development of the Swarm Payload Data Ground Segment (PDGS) prior to launch. At that time, the focus was on meeting the core mission objectives within the limited resources, and incorporating an NRT processing capability was considered too ambitious and costly. However, only a few years into routine operations, it was recognized that a low-latency processing pipeline, along with optimization of the downlink strategy, could significantly extend the exploitation of Swarm data into new scientific and engineering application domains such as space weather. The growing importance of timely geomagnetic data for monitoring and forecasting space weather events highlighted the potential benefits of revisiting the initial decision. In 2021, the feasibility of a low-latency Level 1b processor and the implementation of a parallel Swarm FAST processing pipeline began to be evaluated. Given limited resources, a phased approach was adopted to manage risks and ensure efficient use of available assets. This approach included a processor feasibility analysis to assess technical requirements and potential challenges, followed by a six-month processing pilot to test the concepts in a controlled environment. The pilot phase provided valuable insights into system performance and user feedback, which were crucial for refining the processing pipeline. Following the successful pilot and strong endorsement from the scientific community at the 12th Swarm Data Quality Workshop (DQW) in October 2022, the implementation of a new robust FAST processing pipeline was initiated. The community's support underscored the demand for low-latency data and validated the project's direction. The new standalone FAST processing pipeline was implemented using an Agile and DevOps approach, facilitating iterative development and continuous improvement. 
This methodology allowed for rapid responses to emerging requirements and streamlined collaboration between development and operations teams. The pipeline is based on the Werum Olib framework, which provides a flexible and scalable platform and is also being used in more recent Earth Explorer missions such as EarthCARE and Biomass. Leveraging this framework ensured compatibility with existing systems. The FAST pipeline was deployed on the existing EOP-GE cloud infrastructure, utilizing cloud resources for scalability and reliability. Systematic production started at the end of April 2023, marking a significant milestone in enhancing Swarm's data capabilities. After thorough scientific validation and endorsement by the scientific community, the FAST data were made available to all users in December 2023, opening new opportunities for research and operational applications that require timely data access.
Maltese A. (1), de la Fuente A. (2), Shanmugam P. (3), D'Alba L. (4), Parente D. (1)
(1) SERCO c/o ESA, Frascati, Italy; (2) European Space Agency, Frascati, Italy; (3) Werum, Lueneburg, Germany; (4) Starion c/o ESA, Frascati, Italy
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: The Swarm Constellation - Ten Years in orbit, and beyond

Authors: Giuseppe Albini, David Patterson, Angel Fernandez Lois, Alessandro Latino, Emanuele Lovera, Giuseppe Romanelli, Filippo Inno, Anne Hartmann, Thomas Demeillers, Aybike Kolmas, Marco Bruno
Affiliations: Esa, Starion, Telespazio Germany GmbH, Serco GmbH, Solenix GmbH
Swarm is the magnetic field mission of the ESA Earth Observation programme, composed of three satellites flying in a semi-controlled constellation: Swarm-A and Swarm-C flying as a pair and Swarm-B at a higher altitude. Its history in orbit began in the afternoon of 22 November 2013, when the three identical spacecraft separated perfectly from the upper stage of the Rockot launcher at an altitude of about 499 km. Control of the trio was immediately taken over by ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. Following the successful completion of the Launch and Early Orbit Phase (LEOP), commissioning was concluded in spring 2014, and precious scientific data have been provided ever since. In order to deliver the extremely accurate data needed to advance our understanding of Earth's magnetic field and its implications, each Swarm satellite carries a magnetic package, composed of an Absolute Scalar Magnetometer (ASM) and a Vector Field Magnetometer (VFM), an Electric Field Instrument (EFI) and an Accelerometer (ACC). Unfortunately, due to a failure during LEOP and commissioning, Swarm-C does not deliver ASM data. Two daily ground station contacts per spacecraft are needed to support operations and downlink the scientific data stored in the on-board Mass Memory Unit. As of late 2023, operations are run with the highest level of automation implemented at ESOC, without a real-time operator and with the team merged with the CryoSat-2 Flight Control Team. Many activities and campaigns have been performed over the years to mitigate instrument anomalies, such as changing the EFI operations concept to a limited number of daily science orbits and performing scrubbing operations to counteract image degradation. Similarly, in recent years the ASM instrument has undertaken more and more sessions in Burst Mode, producing data at 250 Hz at the request of the instrument team.
This activity has also recently been integrated into the automated operations concept, offering the flexibility to target this mode based on the short-term evolution of the space environment. On the platform side, a few anomalies happened and were reacted upon very quickly, e.g. the Swarm-A science data downlink anomaly in 2020, which was solved by routing all science data to the housekeeping storage and re-designing part of the ground segment's processing to handle this change of concept. A recent undertaking, as of mid-2023, has been the support of an additional mass memory downlink concept to acquire the data sensed during the passes, in order to support the FAST processing chain and exploit some NRT capabilities of the mission. On the orbit side, several manoeuvring campaigns were undertaken in 2019 and then in 2022 and 2023: first to change the relative local time of Swarm-A and Swarm-C, so as to meet Swarm-B when the orbital planes were at their closest angular location, from summer to winter 2021 (the so-called counter-rotating orbits); then to raise the orbits of the lower pair, and subsequently of Swarm-B, in an attempt to overcome the altitude drop caused by Solar Cycle 25, whose strength and effect on the orbit have been increasing with respect to the predictions of the first years of the cycle. Another challenge, on the rise for a few years now, is the impact of Collision Avoidance activities on operations, with dozens of events analysed every year in an increasing trend, culminating this year in more than 60 events screened, most of them connected to encounters with active Starlink satellites, but only a few resulting in a Collision Avoidance Manoeuvre. The presentation will describe the Swarm-specific ground segment elements of the FOS and explain some of the challenging operations performed so far during this 10+ year journey, from payload operations to the resolution of anomalies and the latest orbital manoeuvre campaigns.
Add to Google Calendar

Tuesday 24 June 18:00 - 19:00 (Nexus Agora)

Session: F.04.31 UNEP ESA Strategic Partnership

The Agora is dedicated to the UNEP-ESA Partnership, based on the Memorandum of Understanding and continued collaborative efforts.

UNEP is addressing the so-called three planetary crises: climate change, nature and biodiversity loss, and pollution and waste. UNEP has the mandate of setting the global environmental agenda and promoting the coherent implementation of the environmental dimension of sustainable development.

It is a unique opportunity for UNEP to present its latest updates, future plans and cooperation opportunities.

The UNEP-ESA partnership aims to align the efforts of the two organizations, creating synergies and also supporting:

a) the sharing of field data sets and surveys by UNEP. This field information is fundamental and complementary to the EO data.

b) the co-development of innovative Earth Observation algorithms, products and applications relevant for the mandate of UNEP, making use of cutting-edge information technology capabilities to facilitate operational solutions.

c) the exchange of expertise to increase the sharing of knowledge between UNEP and ESA.

The Agora will have a panel discussion format, with lightning talks followed by an interactive dialogue with the audience.

Speakers:


  • Melissa De Kock - Deputy Director of the UN Environment Programme World Conservation Monitoring Centre (UNEP-WCMC)
  • Matthias Jurek - Programme Management Officer - UNEP
  • Itziar Irakulis Loitxate - Remote Sensing Lead of the UN Environment Programme's International Methane Emissions Observatory (UNEP's IMEO)
  • Magda Biesiada - UNEP, Global Sub-Programme Coordinator for Digital Transformations
  • Musonda Mumba - Secretary General of the Convention on Wetlands
Add to Google Calendar

Tuesday 24 June 18:00 - 18:20 (EO Arena)

Demo: B.03.17 DEMO - Sustainimaps: Monitoring Agricultural Supply Chains with Earth Observation

How can Earth Observation help trace the origin of your morning coffee—or ensure that your chocolate is deforestation-free?
In this session, Trade in Space introduces Sustainimaps: a geospatial platform that combines open EO data with supply chain insights to monitor deforestation and traceability in global agriculture.
From satellite imagery to dashboard analytics, see how tools like Sustainimaps are enabling compliance with regulations like the EU Deforestation Regulation (EUDR), while supporting smallholder farmers and scaling transparency from farm to export.
Add to Google Calendar

Wednesday 25 June

1233 events

Wednesday 25 June 08:30 - 10:00 (Hall N1/N2)

Session: C.03.03 Advancing global-scale high resolution imaging spectroscopy in preparation for CHIME - PART 1

The growing availability of high resolution imaging spectroscopy products from missions such as EnMAP, EMIT, HISUI, PRISMA and DESIS is enabling a wide spectrum of novel scientific products and applications. The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) will provide routine observations over the land and coastal zone through the Copernicus Programme in support of EU and related policies for the management of natural resources, assets and benefits. This visible-to-shortwave infrared spectroscopy-based observatory will provide unique and major contributions towards fulfilling user requirements in the domains of environmental monitoring and management, with a focus on soil productivity, sustainable raw materials exploitation, sustainable use of nutrients and water in agriculture, and food security. A number of secondary applications will benefit from the routine provision of CHIME products, e.g. biodiversity, coastal and inland water quality monitoring, and methane and carbon dioxide detection from point sources. In this session we welcome contributions from the scientific and user communities encompassing CHIME preparatory activities, including L2 product development, calibration/validation, and downstream product and application prototyping. We also welcome contributions building on current spaceborne imaging spectroscopy missions, and on anticipated missions such as SBG-VSWIR.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall N1/N2)

Presentation: EnMAP as a Precursor Mission for Exploring the Potential of Soil Monitoring from Spaceborne Imaging Spectroscopy

Authors: Sabine Chabrillat, Robert Milewski, Kathrin Ward, Asmaa Abdelbaki, Theodora Angelopoulou, Thomas Schmid, Nikolaos Tsakiridis, Kostas Koriatis, Jose Dematte, Vera Krieger
Affiliations: GFZ German research center for Geosciences and LUH Leibniz University Hannover, GFZ German research center for Geosciences, CIEMAT Research Centre for Energy, Environment and Technology, Department of Environment, TAU Tel-Aviv University, AUTH Aristotle University of Thessaloniki, i-BEC interBalkan Environment Center, USP University of Sao Paulo, German Space Agency DLR
Soils are recognized as an essential provider of food and services, and for their role as carbon storage, with 30% of global terrestrial carbon stored in soils. The mapping and monitoring of soil properties as indicators of soil quality and soil health is an important prerequisite, and is required to implement recent legislation such as the European Union Soil Monitoring Law. Preserving our soil and its health will further support several actions and policies, e.g. the European Green Deal, international initiatives such as the 4 per 1000 Initiative, and policies for reducing greenhouse gas emissions. Soil spectroscopy and imaging spectroscopy in the VIS-NIR-SWIR spectral range (400-2500 nm) have demonstrated a large potential for the accurate determination of soil properties in the laboratory and with airborne data at a resolution of a few meters. High expectations rest on current and upcoming VNIR-SWIR imaging spectroscopy spaceborne missions, with the ASI PRISMA and DLR EnMAP missions serving as precursors to upcoming Copernicus missions such as CHIME. The potential of imaging spectroscopy for providing accurate key surface properties over dry exposed soil areas has been largely demonstrated. However, a full demonstration and test of current capabilities and limitations for operational soil monitoring from space at the 30 m scale is still to be provided, and the research needed for this must be clearly and precisely defined in the coming years. The Environmental Mapping and Analysis Program (EnMAP) is a German hyperspectral satellite mission launched in April 2022 and operational since November 2022, providing high-quality hyperspectral data for scientific research and applications in Earth observation, and enabling advanced analyses across diverse fields of research including soil science.
The EnMAP science program led by the GFZ and its partners is focused on demonstrating scientific achievements for various cutting-edge environmental applications of benefit to society today and, more specifically, on demonstrating the significant potential for scientific exploitation to support implementation of the EU Soil Monitoring Law. In this presentation, we aim to show current work on the development of improved soil products based on multi-mission spaceborne imaging spectroscopy. This includes new modeling approaches for improved spectral modeling performance, modeling of surface conditions (such as non-photosynthetic vegetation cover, photosynthetic vegetation cover, and soil moisture) for the accurate quantification of soil properties in different environmental areas, and the development of new workflows for multi-temporal soil mapping based on coincident time-series observations from imaging spectroscopy missions, serving as pioneers to demonstrate what can be achieved with the upcoming global CHIME mission for soil monitoring programs.
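As a toy illustration of the spectral-modelling step described above, the sketch below estimates a soil property from simulated VNIR-SWIR reflectance spectra with closed-form ridge regression. The band set, absorption feature, and all coefficients are invented; operational workflows typically use PLSR or machine-learning regressors trained on laboratory soil spectral libraries rather than this minimal linear model.

```python
import numpy as np

rng = np.random.default_rng(4)
wavelengths = np.linspace(400.0, 2500.0, 211)   # nm, ~10 nm sampling
n_samples = 150

# Synthetic spectra: smooth baseline minus an absorption feature near 2200 nm
# whose depth scales with the (synthetic) soil organic carbon content [%].
soc = rng.uniform(0.5, 5.0, n_samples)
baseline = 0.4 + 1e-4 * (wavelengths - 400.0)
feature = np.exp(-0.5 * ((wavelengths - 2200.0) / 60.0) ** 2)
spectra = (baseline
           - 0.02 * np.outer(soc, feature)
           + rng.normal(0.0, 0.002, (n_samples, wavelengths.size)))

# Closed-form ridge regression on centred data: train on 100, predict 50.
tr, te = slice(0, 100), slice(100, None)
Xm, ym = spectra[tr].mean(axis=0), soc[tr].mean()
X, y = spectra[tr] - Xm, soc[tr] - ym
lam = 1e-3                                      # ridge regularisation strength
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = (spectra[te] - Xm) @ w + ym
```

The ridge term keeps the 211-band normal matrix invertible even with fewer training samples than bands, which is the usual situation in soil spectroscopy.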
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall N1/N2)

Presentation: The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME)

Authors: Dr. Marco Celesti, Kevin Alonso, Valentina Boccia, Laurent Despoisse, Ignacio Fernandez, Antonio Gabriele, Adrian Garcia, Ferran Gascon, Nafiseh Ghasemi, Claudia Isola, Giuseppe Ottavianelli, Anke Schickling, Helene Strese, Heidrun Weber, Jens Nieke
Affiliations: European Space Agency, European Space Research and Technology Centre (ESA-ESTEC), Starion for ESA - European Space Agency, European Space Research Institute (ESA-ESRIN), European Space Agency, European Space Research Institute (ESA-ESRIN), Aurora Technology for ESA - European Space Agency, European Space Research and Technology Centre (ESA-ESTEC)
As part of the European Copernicus Programme, the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) will provide routine hyperspectral observations globally over land and coastal zones in support of European Union policies for the management of natural resources, ecosystem services and societal benefits. This visible-to-shortwave infrared spectroscopy-based observatory will provide key contributions to fulfilling user requirements in many domains, including for example sustainable raw materials exploitation, resilient agriculture with a focus on soil protection and soil-health restoration, correct use of nutrients, effective water management, food security and the protection of biodiversity. By sampling many specific absorption features of the surface and the atmosphere, and by exploiting the information content of the full continuous VSWIR spectral signature, CHIME measurements and derived Level-2 products will be diagnostic for geophysical surface properties such as plant functional traits (e.g., nitrogen content, leaf water content), soil properties (e.g., soil organic carbon content) and mineral abundances (e.g., kaolinite). Moreover, several secondary applications will benefit from the routine provision of CHIME Level-1 and Level-2 products, e.g., coastal and inland water quality monitoring (e.g., phytoplankton pigments), methane and CO2 emission detection from point sources, and snow and ice light-absorbing impurities. A thorough analysis of the user requirements as expressed by the Copernicus users has been performed since the early development stage of the mission. Building on that, a set of mission requirements for the space and ground segment traceable to these user needs has been defined by ESA and the CHIME Mission Advisory Group in the Mission Requirements Document (MRD).
For the Space Segment development contract (Phase B2/C/D/E1), Thales Alenia Space (France) was selected as Satellite Prime and OHB (Germany) as Instrument Prime. Two satellites are currently foreseen, and each will embark a HyperSpectral Instrument (HSI): a pushbroom-type grating imaging spectrometer with high Signal-to-Noise Ratio (SNR), high radiometric accuracy and data uniformity. The HSI is characterised by a single telescope and three single-channel spectrometers, each covering one-third of the total swath of ~130 km. Each spectrometer has a single detector covering the entire spectral range from 400 to 2500 nm. With two satellites in orbit, the mission will provide global coverage of land and coastal areas every 11 days at a spatial resolution of 30 m. This baseline is compliant with the mission requirements defined in the MRD and traceable to the requirements expressed by the Copernicus users. CHIME data will be pre-processed onboard the satellite within a dedicated Data Processing Unit, allowing cloud detection and compression using machine learning techniques. Once transmitted via the Ka-band antenna to the ground, the CHIME data will be processed by the Copernicus Ground Segment and disseminated through the Copernicus Data Space Ecosystem (Level-1 and Level-2 core products; https://dataspace.copernicus.eu/). Additional demonstration products and higher-level prototype products related to key vegetation, soil and raw material properties are also being prototyped. A full-fledged End-to-End (E2E) simulator is being developed to support the CHIME mission development. It embeds the Observation Performance Simulator (OPSI) and includes full forward and inverse modelling of CHIME data products and preliminary retrieval algorithms, with the main aim of assessing the CHIME performance with respect to the requirements set in the MRD and flowed down into the System Requirements Document.
In parallel to the CHIME E2E, a specific activity was kicked off in 2023 for the development of the future operational processors of the CHIME Level-2 products. These activities will also exploit data collected during the CHIME campaigns in 2018, 2020 and 2021, with a combination of airborne and (in 2020 and 2021) spaceborne spectrometers deployed by ESA, NASA, DLR and ASI, supported by ground teams from numerous European institutes. In this contribution, the main outcomes of the ongoing CHIME activities, as well as the planned future activities, will be presented, covering the scientific support studies, the technical developments, and the user community preparatory activities. The status of the ongoing cooperation between ESA, ASI, DLR and NASA towards increasing synergies between current and future spaceborne imaging spectroscopy missions will also be reported.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall N1/N2)

Presentation: The CHIME Ground Prototype Processor (GPP) and calibration approach

Authors: Isabell Krisch, Clarissa Hamann, Richard Wachter, Johannes Schmidt, Marco Spagnolli, Johanna Dall'Amico, Dimitri Lebedeff, Vincent Soulignac, Hugo Monchatre, Antonio Gabriele, Adrian Garcia, Ignacio Fernández, Claudia Isola, Nicolas Lamquin, Benjamin Finociety, Romain Sumérot, Sinh Khoa Nguyen, Wouter Dierckx, Stefan Adriaensen
Affiliations: OHB System AG, Thales Alenia Space, ESA/ESTEC, ACRI-ST, VITO NV
The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) is one of the Copernicus Sentinel Expansion Missions developed to support EU and related policies on the management of natural resources. This unique visible-to-shortwave infrared spectroscopy-based capability will be dedicated to raw material management and sustainable agriculture, with a focus on soil properties, mineral resources and agricultural services, including food security and biodiversity. To enable accurate quantitative estimates of specific vegetation bio-/geo-physical/chemical variables, as well as top-soil variables and mineral compositions, CHIME carries an imaging spectrometer as payload. This imaging spectrometer allows monitoring of land and coastal/inland water bodies with many contiguous spectral channels covering the spectral range from the VIS to the SWIR (400-2500 nm) at a 30 m spatial resolution, with a revisit of 22 (11) days with one (two) satellite(s). The CHIME space segment development is led by Thales Alenia Space in France (mission prime) and OHB System AG in Germany (instrument prime). The CHIME space segment is currently in development Phase C/D and about to pass the critical design review (CDR). Going hand-in-hand with the space segment development is the development of the CHIME Observation Performance Simulator (OPSI). The OPSI is a software tool composed of an Instrument Performance Simulator (IPS), a Ground Processor Prototype (GPP) and a Performance Assessment Module (PAM). It supports the performance assessment and verification of the space segment, the development of the ground segment, the development and testing of calibration tools for commissioning, and the Level 0 and Level 1 processing during the commissioning phase. This presentation will focus on the role of the Ground Processor Prototype (GPP) in the end-to-end architecture and on the calibration approach behind the on-board and vicarious calibration processors.
We will briefly introduce the CHIME Level 1 nominal products, TOA radiance (at L1b) and orthorectified TOA reflectance (at L1c), and the different calibration products. We will also present the role of the OPSI in the development and verification of the CHIME instrument and system.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall N1/N2)

Presentation: From PRISMA to the future Hyperspectral Missions of ASI

Authors: Luigi Ansalone
Affiliations:
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall N1/N2)

Presentation: New results from the EMIT Imaging Spectroscopy Mission onboard the International Space Station

Authors: David R. Thompson, Robert O. Green, Niklas Bohn, Philip G. Brodrick, K. Dana Chadwick, Regina Eckert, Evan Greenberg, Sarah Lundeen, Natalie Mahowald, Nayma Nur, Andrew K. Thorpe, Winston Olson-Duvall
Affiliations: Jet Propulsion Laboratory, California Institute of Technology, Cornell University
NASA’s Earth Surface Mineral Dust Source Investigation (EMIT) is entering its third year of operations, providing actionable data to help a growing user community address vital natural resource and science questions. Launched to the International Space Station in 2022, EMIT is a visible-to-shortwave infrared (VSWIR) imaging spectrometer measuring the 380-2500 nm range with 7.4 nm spectral sampling and 60 m spatial sampling over the land masses and coastal areas under the ISS orbit. EMIT’s primary task was to map the mineral composition of Earth’s deserts, informing Earth system models that predict mineral dust composition and its influence on atmospheric radiative forcing. In 2023, it produced the first continental-scale direct measurements of Earth’s surface mineralogy. Following the successful completion of this primary goal, NASA approved EMIT for an extended mission in 2024 and funded 16 new research projects spanning themes from terrestrial and aquatic ecosystems to hydrology to atmospheric studies. This presentation provides updates on EMIT instrument performance and processing, including improvements to instrument calibration and surface reflectance retrievals. We also preview early results from extended mission science.
Studies in progress and in development aim to:
• Assess agricultural practices for agricultural management;
• Predict the melt rate of mountain snowpack, a transformative capability for water management and drought mitigation in areas that rely on snowmelt for fresh water;
• Study phytoplankton in inland waters for managing water quality and aquaculture;
• Study forest ecosystems, with benefits for conservation and forest resource management;
• Measure non-photosynthetic vegetation, which indicates fuel load and wildfire risk;
• Map acid mine drainage, informing remediation efforts to improve water safety;
• Detect natural gas leaks in facilities and pipelines, providing actionable data to oil and gas producers to improve the safety and efficiency of their infrastructure;
• Map turbidity, suspended sediment and other measures of water quality;
• Map plastic across continents at sub-hectare scale to understand pollution sources.
EMIT data are available to the public through NASA archives. The science data system code is open source and available for the science community to adapt, reuse, or apply to reproduce mission results.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall N1/N2)

Presentation: NASA SBG VSWIR Global Imaging Spectroscopy: Overview of the Planned Measurements, Products, and Interoperability with other Missions

Authors: Robert Green
Affiliations: NASA JPL Caltech
Robert O. Green, David R. Thompson, Phil Brodrick, Regina Eckert, Niklas Bohn, Dana Chadwick, Kelly Luis, Christine Lee, Christiana Ade, and colleagues (Jet Propulsion Laboratory, California Institute of Technology, United States). Keywords: imaging spectroscopy, earth system, science, applications, products, interoperability. The NASA Surface Biology and Geology (SBG) Decadal Survey mission is in development and includes global Visible to Short Wavelength Infrared (VSWIR) imaging spectroscopy observations with 16-day revisit and 30 m spatial sampling. The core VSWIR science addresses a broad set of the important Decadal Survey objectives spanning the domains of terrestrial ecology, hydrology, coastal and inland waters, and geology. A companion set of important new applications for societal benefit has been identified and is being addressed. In support of these science and applications objectives, the VSWIR Project Science and Applications Team, supported by many colleagues, has developed a comprehensive set of measurement requirements that have led to the current mature VSWIR Project design. These requirements have in turn been used to establish the baseline measurement processing architecture: from instrument-recorded signal, to top-of-atmosphere radiance, to surface reflectance, fractional cover, and suites of products addressing the core science and applications VSWIR objectives. At all product levels, uncertainty estimates are determined and reported. Special attention has been given to assuring that a comprehensive set of product characterization information is available at every level. This characterization information is intended to enable a broad set of additional “instrument-agnostic” algorithms that build upon the core products delivered by the Project. All Project algorithms are open.
Work is ongoing to coordinate the product characterization information with other imaging spectroscopy projects and missions, to enable broad interoperability and harmonization of VSWIR-derived products. We present the current status of the SBG VSWIR observation requirements and product processing plans, with a focus on international cooperation in space imaging spectroscopy.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 0.96/0.97)

Session: F.04.13 Urban Resilience - PART 1

Since the early 2000s, more than half of the world's population has lived in cities, and the trend of urbanization is continuously growing: by 2050, 7 of 10 people are expected to be living in cities. The speed and scale of urbanization lead to important development challenges and set clear needs for effective urban planning and development measures that enhance city resilience. The use of Earth Observations, and their integration with other sources of information into effective urban data analytics tools, can produce a quantum leap in tracking progress towards and achieving international development goals such as SDG 11, the Sendai Framework for Disaster Risk Reduction or the New Urban Agenda (Habitat III).
The advent of continuous streams of high-quality, free-of-charge satellite observations, such as the Sentinels of the European Copernicus programme, in combination with the emergence of automated methods for large-scale data processing and image analysis, together with the falling cost of computing, offers unprecedented opportunities to efficiently monitor changes and trends in urban development globally. In addition, the synergistic use of EO data from different satellite sensors (radar/optical, HR/VHR, SAR/InSAR, TIR, hyperspectral, lidar) and the combination with ancillary datasets such as ground-based and airborne data, drones, and citizen science data open new pathways to extract an unprecedented range of urban information. Urban remote sensing is therefore progressively evolving from traditional urban-extent and land cover mapping into advanced urban applications, connecting to the monitoring of urban-related environmental parameters (impervious surfaces, green and blue infrastructures, urban welfare, air pollutants). Moreover, municipalities and city practitioners are showing growing interest in using these applications as decision support tools, leading to stronger demand for interactive tools that deliver EO-integrated solutions as actionable information.
The series of LPS 2025 urban sessions will present the recent scientific advances in the application of remote sensing in urban applications, discuss opportunities and challenges which lie ahead for mainstreaming EO solutions into urban development practices and policies, and highlight future paths of research.
Topics of interest for the urban sessions include (not limited to):
• multi-sensor, multi-scale and multi-temporal approaches to urban mapping;
• Remote sensing methods for characterising urban areas (multispectral, hyperspectral, SAR/InSAR, TIR, LiDAR)
• Detailed LULC classification and change detection
• Cost-effective use of commercial data
• Downscaling (e.g., super-resolution)
• AI for urban
• 3D/4D mapping
• Night-lights applications
• UAVs/drones, aerial platforms
• Capacity building, education, citizen science, crowdsource data and tools for urban applications
• EO integration in urban social science and policy
• Urban planning and modelling of urban growth
• Health, well-being and liveability
• Urban ecology
• Nature-based solutions
• Urban energy infrastructure and renewables (demand, access, smart grids)
• Urban climate (Urban Heat Islands, pollution/air quality)
• Urban green and blue infrastructures
• Transport, Infrastructure and Sustainable Mobility
• Natural hazards, risk reduction and urban resilience
• Informal settlements
• Population distribution
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: A Novel GAN-based approach for Super-Resolution of Nighttime Light imagery

Authors: Dr Gianluca Murdaca, Mr Francesco Asaro, Mr Filippo Cazzarolli, Dr Emanuele Strano, Dr Mattia Marconcini, Dr Alessandra Feliciotti
Affiliations: MindEarth
Urbanization has rapidly intensified over the last few decades, driving unprecedented challenges for energy access, infrastructure planning, and sustainability. To address these issues and meet the targets set by SDG 7 (Affordable and Clean Energy) and SDG 11 (Sustainable Cities and Communities), innovative approaches are essential for monitoring urban dynamics with high precision and scalability. In this framework, nighttime light (NTL) imagery has proven to be a key Earth Observation (EO) data source for capturing urban activity and development, with applications spanning energy access monitoring, infrastructure planning, disaster resilience, economic activity assessment, and poverty mapping. The utility of NTL imagery stems from decades of advancements in remote sensing. The evolution began with the Defense Meteorological Satellite Program’s Operational Linescan System (DMSP-OLS) in the 1970s, providing imagery at 2.8 km resolution. Advances in technology brought NOAA/NASA’s VIIRS-DNB sensors, which since 2012 have delivered systematic NTL averages at 500 m resolution, enhancing frequency and detail. However, challenges such as light saturation and blooming effects hinder analysis in urban and rural areas. Emerging sources, such as China’s Luojia-1 (LJ1) and SDGSAT-1 satellites, provide higher-resolution alternatives, offering spatial details of 130 m and 40 m, respectively. LJ1, a pilot mission from Wuhan University, offers openly accessible data from its 2018–2019 mission; in contrast, SDGSAT-1 provides limited free-tier access, with broader data availability granted to selected research projects. Commercial high-resolution alternatives like JL1-3B and BlackSky's Gen-3 are also constrained by high costs and limited availability, emphasizing the need for innovative, scalable solutions to harness the full potential of NTL imagery for sustainable urban planning and equitable energy access.
To overcome these limitations, MindEarth, as part of the ESA SupR-NTL project, has pioneered an innovative methodology that leverages Super-Resolution (SR) techniques powered by Generative Adversarial Networks (GANs) to enhance VIIRS-DNB Night-Time Light (NTL) imagery from a coarse 500 m to a fine 130 m resolution. This marks a significant advancement in NTL analytics, enabling detailed insights into urban and rural systems with applications ranging from electricity access mapping to urban planning and resilience analysis. Specifically, the implemented solution integrates multi-sensor data from VIIRS-DNB, LJ1, and Sentinel-2 (S2). VIIRS-DNB ensures consistent global coverage but is limited by its coarse resolution. LJ1 offers fine-scale imagery at 130 m but is constrained by a short operational period and limited spatial coverage. S2 complements these datasets with its spectral richness and spatial detail, acting as a crucial conditioning input for the GAN model. A comprehensive data pre-processing pipeline was first implemented, targeting cloud masking and georeferencing. In particular, the latter step has been crucial to overcome LJ1’s poor georeferencing, which in the past restricted its use to primarily local analyses, often limited to one or a few scenes. By employing the open-source AROSICS tool and jointly leveraging DLR’s World Settlement Footprint (WSF) and the OpenStreetMap (OSM) road network, it has been possible to effectively orthorectify the whole LJ1 dataset, consisting of 8675 scenes (each covering an area of ~250 x 250 km). The developed GAN-based framework builds upon an architecture inspired by state-of-the-art Super-Resolution GANs (SRGANs), optimized for enhancing VIIRS-DNB NTL imagery. A conditioning mechanism replaces the standard upsampling block, enabling the effective integration of S2 data as auxiliary input.
The Generator, a convolutional neural network with 6 residual blocks, processes lower-resolution VIIRS-DNB data in conjunction with key S2-derived inputs, namely the yearly temporal medians of the RGB bands and the Road Index (RI), which highlights artificial surfaces and urban structures. These conditioning inputs play a critical role in enhancing the model’s ability to reconstruct fine-grained spatial details. The Discriminator, built with progressively increasing filter layers and strided convolutions, assesses the authenticity of the reconstructed outputs against higher-resolution LJ1 imagery, using Binary Cross-Entropy (BCE) loss. This adversarial dynamic between the Generator and Discriminator is further refined through a multi-loss strategy, combining L1 loss for pixel-level accuracy, perceptual loss for structural similarity, and adversarial loss to ensure visual realism. Histogram matching between VIIRS-DNB and LJ1 data ensures consistency during training. The approach has been rigorously validated using a spatially-aware test set designed to avoid autocorrelation effects, leveraging key performance metrics. Specifically, the model achieved a Peak Signal-to-Noise Ratio (PSNR) of 27.9, indicating the ability to reconstruct outputs with minimal distortion and high fidelity to real scenes. Additionally, the Structural Similarity Index Measure (SSIM) of 0.911 underscores the model's effectiveness in preserving structural details such as road networks and infrastructure outlines. Moreover, the workflow’s robustness and scalability across diverse geographies have been further demonstrated through its application in Kenya, Malawi, Mozambique, Cambodia, and Myanmar, where the generated SR-NTL imagery served as the foundational dataset for deriving detailed energy access and energy consumption maps.
These analyses have been specifically designed to support the project stakeholder Sustainable Energy for All (SE4All), aligning with their mission to promote equitable energy access and inform targeted energy strategies. The contribution of this work to urban resilience extends far beyond its immediate outputs, providing significant advancements in EO science and its practical applications. By combining cutting-edge machine learning techniques with multi-sensor EO data, the SupR-NTL methodology demonstrates the transformative potential of AI-driven approaches for monitoring and analyzing urban and rural dynamics. Beyond generating SR-NTL imagery, this research lays the groundwork for future developments, including the addition of temporal dynamics to monitor changes over time and the integration of socio-economic datasets to deepen understanding of human development, inequality, and energy access disparities. Furthermore, the implemented GAN-based solution offers a scalable and adaptable framework for enhancing other EO datasets, establishing its role as a foundational tool for advancing urban sustainability research and addressing diverse challenges in sustainable development. This work highlights the power of EO and AI to provide actionable insights for equitable and data-driven decision-making at local, national, and global scales.
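The histogram matching step mentioned in the abstract above (aligning VIIRS-DNB and LJ1 value distributions before training) can be sketched with plain quantile mapping. This is a minimal numpy illustration of the general technique, not the SupR-NTL pipeline itself; the function name and arguments are hypothetical.

```python
import numpy as np

def match_histogram(source, reference):
    """Quantile mapping: remap source pixel values so their empirical
    distribution matches that of the reference image."""
    s = np.asarray(source, dtype=float)
    r = np.asarray(reference, dtype=float).ravel()
    flat = s.ravel()
    order = np.argsort(flat)                        # rank of each source pixel
    s_quantiles = np.linspace(0.0, 1.0, flat.size)
    r_quantiles = np.linspace(0.0, 1.0, r.size)
    matched = np.empty_like(flat)
    # Each source pixel takes the reference value at its own quantile.
    matched[order] = np.interp(s_quantiles, r_quantiles, np.sort(r))
    return matched.reshape(s.shape)
```

Because the mapping is monotone, the relative ordering of pixel brightness is preserved while the value range is brought into agreement with the reference sensor.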
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: The first global built-up product delivered by the Copernicus Exposure Mapping Component

Authors: Thomas Kemper, Michele Melchiorri, Armin Leitner, Michael Riffler, Christian Schleicher, Eva Poglitsch, Loic Faucqueur, Alexandre Pennec, Mattia Marconcini, Andreea Maria Julea
Affiliations: European Commission, Joint Research Centre, Geoville GmbH, CLS, DLR, GAF AG
For almost 10 years, the Global Human Settlement Layer (GHSL) has been providing innovative, global data on built-up areas, population and human settlements to monitor the human presence on planet Earth. With the integration of the GHSL as the exposure mapping component (EMC) of the Copernicus Emergency Management Service (CEMS), GHSL data production is entering a new era of operational production to assure regular updates of the information. This is crucial if we want to boost urban and, more generally, societal resilience. The capacity to resist and rebound from disasters and crises such as natural hazards or environmental challenges relies on up-to-date information about the urban environment. This presentation will provide an overview of the first operational creation of the Copernicus EMC-BUILT product for the reference year 2022, leveraging high-resolution satellite imagery from the Copernicus programme's Sentinel-2 satellites. The current Copernicus GHSL product portfolio consists of three main components. The first is the built-up surface fraction product, which quantifies the percentage of land covered by buildings within a given pixel of Sentinel-2 imagery. The second product classifies the built-up areas into residential and non-residential areas (large industrial and commercial areas), and the third product is a greenness layer that reports vegetation presence and density in the vicinity of the built-up areas. The new operational production makes use of a specifically designed U-Net model to process the Sentinel-2 imagery for the built-up surface and residential/non-residential classification products. To maximise the information content, the workflow exploits the full Sentinel-2 archive for the production year. It makes use of the 10 m and 20 m bands as well as the scene classification layer for cloud weighting, and creates a cloud-free composite per band as well as a 90th-percentile NDVI layer.
In addition, intra-annual fluctuations of the underlying land cover are captured through time-series analysis and provided as 3rd-order Fourier series coefficients to the U-Net model. The U-Net architecture is trained on a curated dataset derived from a variety of high-quality reference sources, including OpenStreetMap, Microsoft Building Footprints, and Google Open Buildings. In the training phase, building polygons from these sources are extracted and rasterized to match the spatial resolution of Sentinel-2 imagery. The CNN is then taught to recognize these rasterized patterns as built-up areas. Special attention is given to distinguishing between residential and non-residential buildings, with the latter being identified using specific tags from the OpenStreetMap dataset and complemented by the non-residential layer of the GHSL R2023A. The classification is performed at a per-pixel level, and minimal post-processing is applied to maintain the model's inherent accuracy, with a focus on applying an NDWI (Normalized Difference Water Index) threshold to reduce water-related misclassifications. The dedicated U-Net model incorporates spatial context into the classification process, which is a significant improvement over previous models. This capability allows the network to detect more structures and details, especially in rural areas, where individual buildings are more isolated. The new methodology, in combination with the recently published building footprints, offers a marked improvement over previous iterations of GHSL built-up products. It reduces the overestimation of built-up areas and provides a more accurate delineation of buildings. The new EMC-BUILT products represent a significant advancement in mapping human settlements. The next production, for the reference year 2024, will rely on the same methodology but will also include improvements based on the experience from the first production cycle.
In the larger GHSL context, the data will serve as a covariate for the production of the population grids and will be integrated into the GHSL time series.
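The 3rd-order Fourier series encoding of intra-annual variation described in the abstract above can be approximated with a simple least-squares harmonic fit. A minimal numpy sketch, assuming one annual period and day-of-year timestamps; the function name and defaults are illustrative, not the operational EMC code:

```python
import numpy as np

def fourier_coeffs(doy, values, order=3, period=365.0):
    """Least-squares fit of a truncated Fourier series to an annual
    time series sampled on the given days of year (doy).
    Returns [a0, a1, b1, ..., a_order, b_order]."""
    t = np.asarray(doy, dtype=float)
    cols = [np.ones_like(t)]                      # constant term a0
    for k in range(1, order + 1):
        w = 2.0 * np.pi * k * t / period
        cols.append(np.cos(w))                    # a_k
        cols.append(np.sin(w))                    # b_k
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(values, float), rcond=None)
    return coeffs
```

A 3rd-order fit yields 7 numbers per pixel (mean plus three harmonic pairs), a compact summary of seasonal behaviour that is robust to irregular, cloud-gapped sampling.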
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: FATSat-NeRF: Finally A True Satellite NeRF for scaling remote sensing images

Authors: Camille Billouard, Dr Dawa Derksen, Dr Alexandre Constantin, Pr Bruno Vallet
Affiliations: CNES, IGN
The advent of satellite imagery has revolutionised the way we perceive and understand our planet. This evolution has opened up new frontiers for 3D reconstruction, offering unprecedented applications in a range of fields, including urban heat islands, flood risk mitigation, vegetation monitoring, urban planning and more. Many of these topics require highly detailed images of the Earth, produced by constellations such as WorldView-3 and SPOT 6-7/Pléiades. However, extracting an accurate 3D representation from 2D images is often challenging, especially when using multi-date images that may vary in lighting conditions, shadows, occlusion and transient objects (objects that appear/disappear between two images, such as cars). Traditional stereo vision pipelines, designed primarily for single-date pairs of images, struggle when presented with a pair of images acquired at different dates, resulting in inaccuracies and holes in the reconstructed digital surface models (DSMs). To address these issues, neural radiance fields (NeRFs) have recently been applied to multi-date satellite imagery and have shown a remarkable ability to represent and encode information from complex scenes. However, NeRFs are not applicable to satellite image areas ranging over dozens of square kilometres, and their ability to scale efficiently to reconstruct large areas remains limited compared to standard stereoscopic vision pipelines (e.g. CARS). Our research addresses the scalability of NeRFs applied to satellite imagery, to enable efficient 3D reconstructions from multi-date imagery. The ability of a NeRF to represent a 3D scene is defined by the size of the neural network. Unfortunately, the scale of the network required for encoding a city in 3D is beyond current memory capacity. For this reason, previous research on applying NeRFs to satellite imagery has been limited to small areas of interest (AOI) of less than half a square kilometre.
These limitations are due to the size of the scene and the number of pixels needed to learn it, ranging from millions of pixels (1,000 x 1,000) per image in previous work to billions of pixels (50,000 x 50,000) per image in our study. To overcome this limitation, our approach is to subdivide the surface into smaller, more manageable sections and then train an individual NeRF for each subdivision. First, the appropriate image slices for each ground AOI section are determined using RPC projection functions from the minimum and maximum scene elevations. This ensures that the AOI is visible to all pixels in the image slices despite the different viewing angles. Next, each subdivided part is assigned to a single NeRF. Inspired by NVIDIA's recent work, NeRF-XL, we reinterpret the volumetric rendering equation in terms of ray segments. Each segment is learned independently by the NeRF sub-network in question. This segmentation not only improves scalability, but also enables parallel processing, significantly reducing the time required for large-scale 3D reconstructions. These changes allow us to process large satellite tiles containing billions of pixels. To validate our approach, we conducted experiments using multi-temporal satellite imagery from WorldView-3 and Pléiades with high-resolution LiDAR ground truth. The results suggest a significant improvement in the quality and scalability of 3D reconstructions. Our method achieved an accuracy comparable to existing state-of-the-art techniques, while demonstrating a significant reduction in processing time and computational resources. In conclusion, our research presents a groundbreaking solution to the challenge of scalability in 3D reconstruction with NeRF methods from high-resolution satellite imagery. Our solution is general and can be used to scale a variety of different NeRF solutions that have been proposed for satellite imagery (e.g. Sat-NeRF, EO-NeRF, Season-NeRF, SUNDIAL, etc.).
This work not only contributes to the academic field, but also has practical implications for real-world applications, enabling more cost-effective and detailed monitoring of our ever-changing planet. By introducing a more flexible and efficient method, our approach opens up new possibilities for the analysis and interpretation of satellite data, enhancing our ability to respond to modern environmental and urban challenges.
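The ray-segment reinterpretation of the volumetric rendering equation described in the abstract above rests on the fact that transmittance factorizes across consecutive segments, so each segment's colour contribution can be computed independently and composited with the transmittance left over from the previous segment. A small numpy sketch of this general principle (illustrative only, not the authors' implementation):

```python
import numpy as np

def composite_segment(sigmas, deltas, colors, T_in=1.0):
    """Alpha-composite one segment of a ray (standard NeRF quadrature).
    Returns the segment's colour contribution and the transmittance
    remaining after the segment."""
    alphas = 1.0 - np.exp(-sigmas * deltas)            # per-sample opacity
    # Transmittance in front of each sample, scaled by what entered the segment.
    T = T_in * np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    colour = np.sum((T * alphas)[:, None] * colors, axis=0)
    return colour, T_in * np.prod(1.0 - alphas)
```

Compositing two half-rays sequentially reproduces the full-ray result exactly, which is what allows each segment to be assigned to its own independently trained sub-network.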
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Advancing Urban Resilience through Livability Mapping and Flood Exposure Analysis Using Remote Sensing Techniques

Authors: Lorraine Trento Oliveira, Enzo Campomanes, Monika Kuffer, Dr. Mariana Belgiu, Dr. Angela Abascal, Dr. Anne Dijkstra
Affiliations: University of Twente, ITC, University of Twente, BMS
The rapid urbanization of many African cities has led to increasing challenges in managing urban growth. Around 60% of the urban population in Africa lives in informal settlements, areas that are deprived in the main aspects of the IDEAMAPS domains-of-deprivation framework. Such areas are characterized by large socioeconomic disparities and increasing risk (e.g., floods) caused by climate change. The "Space4All" project combines cutting-edge geospatial technologies with locally tailored methodologies to promote sustainable and inclusive urban development. This research focuses on integrating livability mapping and flood exposure analysis for a sample of African urban areas, using pilot cities in Ghana, Kenya, and Mozambique. The data processing workflow uses open data, such as Sentinel-2 imagery, together with state-of-the-art analytical frameworks. Space4All uses Sentinel-2 satellite imagery to assess urban livability, utilizing a recently developed method that integrates environmental, infrastructural, and social indicators to identify the most deprived areas. This method quantifies livability by evaluating critical dimensions such as green space availability, housing quality, infrastructure access, and socio-economic conditions, combining citizen-generated data (collected via a purpose-built app for generating massive labelled datasets) with deep learning methods. By processing and analyzing Sentinel-2 data, we generate high-resolution livability indices that highlight spatial disparities across urban landscapes. Building on this analysis, we incorporate flood exposure information derived from local knowledge created in workshops in informal settlements, together with satellite-based hydrological models and historical flood event data. The integration of these two datasets (livability indices and flood exposure) provides a comprehensive framework to evaluate the compounded risks faced by urban populations, particularly in informal areas.
1. Ghana: In Accra and surrounding secondary cities, rapid urbanization combined with inadequate drainage systems is leaving densely populated low-income areas particularly susceptible to floods. Livability mapping indicates hotspots of deprivation, while flood models show the most exposed locations. Combining these two analyses, we highlight areas where targeted interventions can significantly enhance resilience. Such areas are often neglected in urban plans.
2. Kenya: Nairobi's informal settlements, housing around 60% of the population in around 4% of the built-up area, often experience the dual challenges of poor livability and high flood exposure. Similar results were found for Kisumu, a secondary city. Sentinel-2 data reveal spatial patterns of deprivation, such as lack of infrastructure and services and poor housing conditions. Informal settlements are frequently exposed to flash floods. Working together with city planners, disaster management agencies, and NGOs, this integrated analysis supports local stakeholders in prioritizing flood mitigation projects and community-based initiatives.
3. Mozambique: From Beira to Chimoio, recurring floods disproportionately affect the cities' informal communities. The livability mapping identifies regions with inadequate infrastructure, compounded by high flood exposure. The insights provide critical inputs for designing urban development policies that address both socioeconomic disparities and disaster preparedness.
The integration of livability mapping and flood exposure analysis offers a novel lens to assess urban resilience. Initial results indicate that the most deprived areas, as identified by the livability index, are disproportionately exposed to flood risks. For example, across cities, areas with low livability scores are heavily impacted by frequent floods. This stresses the urgent need for integrated urban planning approaches that consider both socio-economic vulnerabilities and environmental risks.
The findings of this study provide actionable insights and tools for urban planners, policymakers, and humanitarian organizations. As part of the project, we dedicate a substantial team to local workshops and capacity building for local governments and NGOs. By identifying and prioritizing the most vulnerable areas, this research aims to motivate targeted investments in infrastructure and flood mitigation projects using participatory and community-based approaches. The combination of Sentinel-2 data and advanced AI methods shows the potential of remote sensing to deliver scalable, data-driven solutions for urban resilience challenges.

Wednesday 25 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Leveraging Machine Learning and Earth Observation for Mapping Deprived Areas in the Global South

Authors: Michael Hathorn, Dr Sophie Naue, Prof. Dr Monika Kuffer, Dr Kenneth Harttgen, Dr Nicolas Büttner, Dr Raian Vargas Maretto
Affiliations: UNITAC, HafenCity University Hamburg, University of Twente, ETH Zurich
Informal settlements are the primary residential areas in much of the world. Despite their significance, a lack of accurate data and outdated records pose significant challenges for public entities in meeting residents' needs. Consequently, our understanding of the spatial distribution, evolving patterns, and social and demographic profiles within these settlements remains limited. However, citizen science and emerging remote sensing technologies offer new opportunities for collecting high-quality data. This study focuses on using urban data and AI/Machine Learning (ML) mapping technologies to gain deeper insights into the complex realities of informal settlements. With representation from ETH Zürich, the University of Twente, and HafenCity University Hamburg, a dynamic team of data scientists and researchers is joining forces and sharing knowledge to address one of the most pressing urban challenges: the growth of informal settlements. By bringing together data scientists, developers, and urbanists, this study highlights, on the one hand, the role of different technological solutions and mapping tools such as ML and remote sensing. On the other hand, by reflecting on the experience of partnering with local governments that have used these tools, it emphasizes a collaborative approach. This aspect of the study delves into the practical implications of our work and its impact at the local level. We examine how effective the methods and technologies we employ are for cities, with the aim of improving living conditions in informal settlements. This study compares and combines several novel approaches to provide large-scale data for understanding the characteristics and dynamics of informal settlements.
Specifically, we conduct a comparative analysis of three distinct machine learning approaches for informal settlement mapping using Earth observation data. DeepLNAfrica (ETH/Swiss Data Science Center) is a machine learning tool developed specifically for mapping patterns across sub-Saharan African cities, enabling spatiotemporal mapping. IDEAtlas (University of Twente) develops transferable AI-based methods for urban area mapping, with implementation across multiple pilot cities globally. The Building and Establishment Automated Mapper (BEAM) (HafenCity University Hamburg/UNITAC, the United Nations Innovation Technology Accelerator for Cities) is an ML-based tool for rooftop mapping using high-resolution satellite imagery or aerial photography, with demonstrated applications in South Africa and Central America. This collaborative research advances the field of Earth observation for sustainable development by developing new methodological approaches for informal settlement mapping while also grappling with the practical applications of these methods. The research emphasizes understanding diverse data needs and validation requirements across different contexts, ensuring that the resulting datasets are both locally relevant and actionable. Through critical examination of AI/ML-based mapping technologies, the work illuminates both the potential and limitations of these emerging tools.

Wednesday 25 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: The WSF Tracker - A Global Lens on Settlement Dynamics

Authors: Dr.-Ing. Mattia Marconcini, Dr. Annekatrin Metz-Marconcini
Affiliations: German Aerospace Center - DLR
Urbanization is a complex phenomenon characterized by rapid, often unplanned, changes, especially in developing countries. Despite the availability of global settlement layers and large-scale building footprint databases (e.g., from Google and Microsoft), these resources suffer from infrequent updates, which quickly renders them obsolete in fast-growing regions, particularly in Africa and Asia. This limitation is especially critical where timely data is essential for responsive urban planning, disaster management, and sustainable development initiatives. To address this gap, DLR’s novel World Settlement Footprint (WSF) tracker provides the first consistent monitoring of global settlement extent at 10m resolution, updated every six months from July 2016 to the present. This advancement marks a significant step forward in urban geography, enabling a finer, near-real-time understanding of settlement dynamics that is critical for addressing the needs of rapidly urbanizing regions. The WSF tracker is based on a robust methodology, enhanced from that used to generate WSF2019, allowing for highly accurate settlement delineation by combining multitemporal Sentinel-2 (S2) optical and Sentinel-1 (S1) radar data. The underlying rationale leverages the distinct temporal dynamics of human settlements, which differ from those of non-settlement classes. Initially, images acquired over the region of interest within a defined period during which minimal change is expected (e.g., one year) are collected. Next, key temporal statistics (i.e., the temporal mean, minimum, maximum, median and standard deviation) are extracted from: i) the original backscattering values, as well as polarimetric decomposition indices, in the case of S1 data; and ii) different spectral indices (e.g., vegetation index, built-up index, etc.) derived after cloud/cloud-shadow masking in the case of S2 imagery.
Training samples for the settlement and non-settlement classes are then generated by thresholding a specific subset of the resulting features. In particular, threshold values vary depending on the Köppen-Geiger climate type of the considered pixel and are determined using as reference the merger of the WSF2019 with a 10m-resolution rasterized version of the Microsoft Global ML Building Footprints, Google Open Buildings and OpenStreetMap (OSM) building footprints, wherever available. Finally, Random Forest (RF) binary classification is applied, and information from suitable ancillary datasets is used to further reduce omission and commission errors. For example, in regions with complex topography: (i) radar data often show high backscattering comparable to that of settlements; and (ii) bare rocks are present, which often exhibit behavior similar to that of built-up areas in the optical-based temporal statistics. Accordingly, all pixels whose slope (i.e., the angle corresponding to the maximum elevation difference between the given pixel and its 8 neighbors) is higher than 10 degrees are masked out of the analysis. For this purpose, the SRTM DEM is used for latitudes between −60° and +60° and the ASTER DEM elsewhere. Roads are also masked out by jointly exploiting the corresponding OSM layer and the novel dataset, recently published by Facebook, predicting roads missing from OSM. The approach described above has been employed on the Google Earth Engine (GEE) platform to derive bi-annual settlement extent updates from 2016 onwards. In particular, starting from 1st July 2016, temporal statistics are generated every 6 months from the analysis of S1 and S2 imagery collected in the previous year. These form the foundation for generating a series of classification maps, which are further refined using a dedicated temporal consistency ruleset to ensure reliability and reduce temporal noise in the results.
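The slope-masking rule can be made concrete with a short sketch. This is an illustrative reimplementation of the stated definition (the angle corresponding to the maximum elevation difference between a pixel and its 8 neighbors), not DLR's production code; the 30 m pixel size is an assumption based on the SRTM resolution.

```python
import math

def slope_degrees(window, pixel_size=30.0):
    """Slope at the centre of a 3x3 elevation window (metres):
    the angle of the maximum elevation difference between the
    centre pixel and its 8 neighbours."""
    centre = window[1][1]
    best = 0.0
    for r in range(3):
        for c in range(3):
            if (r, c) == (1, 1):
                continue
            # horizontal distance to this neighbour (diagonals are longer)
            dist = pixel_size * math.hypot(r - 1, c - 1)
            angle = math.degrees(math.atan2(abs(window[r][c] - centre), dist))
            best = max(best, angle)
    return best

def is_masked(window, threshold_deg=10.0):
    """True if the pixel is excluded from the settlement analysis."""
    return slope_degrees(window) > threshold_deg
```

In the production workflow this test would run per pixel over the full DEM; the sketch shows only the per-window rule.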
As of July 2024, the WSF tracker has produced 17 updates, each offering a comprehensive and consistent representation of settlement changes across the globe. These systematic updates are planned to continue into the future, establishing the WSF tracker as a reliable and enduring tool for monitoring urbanization trends. The robustness and high accuracy of the dataset are being validated through extensive qualitative and quantitative assessments. Notably, the WSF tracker has already been effectively employed in various activities under multiple ESA Global Development Assistance (GDA) projects, as well as in supporting diverse teams at the World Bank, demonstrating its practical applicability and value in addressing complex urbanization challenges. For instance, the layer has been employed to assess the presence of new constructions not recorded for taxation purposes, to analyze urbanization trends in flood-prone areas for understanding vulnerability and guiding disaster risk reduction strategies, or to identify developing regions lacking adequate road infrastructure, facilitating targeted investments in connectivity and accessibility. Additionally, the WSF tracker is expected to serve as a foundation for related datasets, namely the WSF Imperviousness (estimating the percent of impervious surface area), and the WSF Population (estimating resident population density). Both of these will similarly benefit from updates every six months, thus further enhancing their relevance and utility in several applications, including urban planning, environmental management, and socio-economic analysis.

Wednesday 25 June 08:30 - 10:00 (Hall G2)

Session: D.02.12 Big-data Driven AI Techniques in Ocean Applications: Enhancing Marine Monitoring and Analysis

The vast and dynamic nature of the ocean presents significant challenges for environmental monitoring and marine ecosystem management. The emergence of big-data-driven AI techniques as powerful tools for modelling and monitoring the oceans has opened new frontiers in ocean applications, providing new ways to interpret and analyze large volumes of ocean data and allowing us to better analyze, predict, and monitor marine systems from physical, ecological and biogeochemical perspectives. AI-powered tools, such as machine learning and deep learning techniques, enable efficient processing of large and complex datasets generated from various sources, including satellite products, underwater sensors, and autonomous vehicles. These techniques facilitate long-term monitoring of oceanographic parameters, near-real-time pattern detection and predictive modeling of ocean systems, and therefore have the potential to revolutionize our understanding of the complex processes and phenomena that occur in the marine environment. This session invites contributions that address a range of topics related to the use of AI for ocean applications, including but not limited to:
•Development and application of AI algorithms for ocean data analysis and monitoring in marine biogeochemistry and ecosystems, including plankton biodiversity, fisheries and aquaculture
•Integration of multiple data sources (in situ, remote sensing, etc.) for improved ocean parameter estimations
•Use of AI for prediction and forecasting of ocean dynamics, including extreme events such as hurricanes and tsunamis
•Data processing and pattern recognition: ML and DL algorithms process satellite imagery and sensor data automatically, speeding up analysis and thereby saving time and resources traditionally spent on data interpretation (e.g., detection of eddies and fronts, phytoplankton blooms, HABs, ...)
•Innovative uses of AI for oceanographic research and exploration, including autonomous ocean vehicles and robotics
•Challenges and solutions: the computational, data integration and interdisciplinary challenges with using AI in ocean applications (Ex: homogenization and harmonization of data, data fusion, gap filling, ...)

Wednesday 25 June 08:30 - 10:00 (Hall G2)

Presentation: Advances in Deep Learning for Detecting Karenia brevis Harmful Algal Blooms with Sentinel-3 OLCI Data

Authors: Dr. Salem Ibrahim Salem, Dr. Hiroto Higa, Dr. Eko Siswanto, Dr. SeungHyun Son
Affiliations: Faculty of Engineering, Kyoto University of Advanced Science, Faculty of Engineering, Alexandria University, Institute of Urban Innovation, Yokohama National University, Japan Agency for Marine-Earth Science and Technology (JAMSTEC), NOAA/NESDIS Center for Satellite Applications and Research, ESSIC/CISESS, University of Maryland
Karenia brevis (K. brevis) causes annual harmful algal blooms (HABs) in the Gulf of Mexico, leading to severe ecological and economic impacts due to brevetoxins that harm marine life and human health. Traditional threshold-based detection algorithms, while functional, suffer from inaccuracies caused by indistinct boundaries between bloom and non-bloom states. To address these limitations, we introduce a novel one-dimensional convolutional neural network (1D-CNN) model designed to classify K. brevis blooms using Ocean and Land Colour Instrument (OLCI) data from Sentinel-3A and Sentinel-3B satellites. The model integrates OLCI data with in situ measurements from the Florida Fish and Wildlife Conservation Commission (FWC), spanning April 2016 to December 2023. The dataset includes 78,430 in situ cell count measurements and 3,740 satellite images, yielding 6,561 matchups. To mitigate class imbalance, we employed oversampling and class weighting strategies. Model optimization was achieved using the Optuna framework, ensuring systematic hyperparameter tuning for optimal performance. The 1D-CNN demonstrated exceptional accuracy (95.6%) and precision (98.1%), excelling in distinguishing bloom conditions with a specificity of 99.8%, thereby minimizing false positives. The model reliably tracked bloom dynamics from 2017 to 2023, identifying onset, peak, and termination phases with month-by-month consistency. Analysis of Bloom Occurrence Frequency (BOF) exceeding 50% revealed that the largest bloom occurred in October 2021, covering approximately 17,000 km². Additionally, heightened bloom activity was observed during August-November 2018, with a median bloom area of 4,782 km² over these months. Comparative assessments demonstrated the superior reliability of our model against traditional methods. The Red-Green Chlorophyll Index (RGCI) exhibited higher false-positive rates compared to the Red Band Difference (RBD) approach. 
Manual delineation in RBD-based methods often exaggerated bloom intensity. A comparison of bloom footprints derived from OLCI and the Moderate Resolution Imaging Spectroradiometer (MODIS) on Aqua showed significant overlap, though MODIS consistently identified broader bloom areas due to its reliance on manual in situ delineation. This study underscores the transformative potential of deep learning in oceanographic applications. The integration of Sentinel-3 OLCI data with advanced 1D-CNN modeling offers a scalable, automated, and highly accurate solution for HAB monitoring. The approach improves bloom detection accuracy, reduces false positives, and provides a robust tool for ecological management and mitigation, advancing the sustainable monitoring of K. brevis blooms and their impacts.
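The class-weighting strategy used to mitigate imbalance between rare bloom samples and abundant non-bloom samples is commonly implemented as inverse-frequency "balanced" weights. The sketch below shows that standard scheme; the exact weighting used by the authors is not specified in the abstract.

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency ("balanced") class weights: the weight of
    class c is n_samples / (n_classes * n_c), so rare classes
    (e.g., bloom pixels) contribute as much to the weighted loss
    as abundant ones (e.g., non-bloom pixels)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}
```

With 90 non-bloom and 10 bloom samples, the bloom class receives a weight nine times larger than the non-bloom class, equalising their total contribution.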

Wednesday 25 June 08:30 - 10:00 (Hall G2)

Presentation: Strengths and weaknesses of super-resolved satellite-derived SST data via Generative Adversarial Networks (GANs)

Authors: Claudia Fanelli, Tiany Li, Bruno Buongiorno Nardelli, Luca Biferale, Daniele Ciani, Andrea Pisano, Michele Buzzicotti
Affiliations: CNR-ISMAR, University of Rome Tor Vergata and INFN, CNR-ISMAR
Sea Surface Temperature (SST) is a crucial parameter in understanding oceanic processes, climate change, and weather forecasting, making accurate and high-resolution SST data essential for a wide variety of applications. However, conventional satellite-derived SST products often suffer from coarse effective spatial resolution, which limits their utility in resolving small-scale ocean features such as eddies and fronts. Exploiting the impressive potential of deep learning methods to obtain high quality results in the field of computer vision, Generative Adversarial Networks (GANs) have been used here to super-resolve satellite SST data, with the aim to enhance their spatial resolution while preserving critical features. GANs are a class of deep learning frameworks comprising two neural networks, a generator and a discriminator, that compete against each other during training. The generator attempts to produce high-resolution SST data from low-resolution inputs, while the discriminator evaluates the realism of the generated data by distinguishing it from ground-truth high-resolution SST observations. This adversarial process enables GANs to learn complex spatial and temporal patterns in SST data, making them particularly suited for super-resolution tasks. The strengths of GAN-based super-resolution for SST include their ability to generate fine-scale details, enhance feature resolution without requiring significantly larger datasets, and adapt to varying scales and noise levels inherent in satellite measurements. However, despite these advantages, this kind of network has notable weaknesses. In fact, GAN-generated data may introduce artifacts or incorrect features, especially in regions where training data are sparse or underrepresented. This can lead to physical inconsistencies in SST fields, resulting in a larger root mean squared error with respect to other convolutional neural networks used to solve the same task. 
Here, we compare the performance obtained by a GAN for super-resolving SST data with two other architectures: an autoencoder and a dilated convolutional multi-scale learning network. All the networks have been trained to improve the effective resolution and the gradients’ accuracy of SST fields derived from the Mediterranean Sea High Resolution and Ultra High Resolution Sea Surface Temperature Analysis provided in near real time by the Copernicus Marine Service, using as a target the high-resolution SST data obtained from the Sea and Land Surface Temperature Radiometer (SLSTR) instruments on board Sentinel-3A and Sentinel-3B.
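The adversarial generator/discriminator objective described above can be written down compactly. The sketch below shows the standard per-sample GAN losses (with the common non-saturating generator form), as an illustration of the general framework rather than the specific SST super-resolution network.

```python
import math

def discriminator_loss(d_real, d_fake, eps=1e-12):
    """Binary cross-entropy loss for the discriminator on one real and
    one generated sample: it wants d_real -> 1 and d_fake -> 0.
    d_real / d_fake are the discriminator's probabilities that a real /
    generated SST field is real."""
    return -(math.log(d_real + eps) + math.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-12):
    """Non-saturating generator loss: the generator wants the
    discriminator to assign high probability to its output."""
    return -math.log(d_fake + eps)
```

Training alternates gradient steps on these two losses; the equilibrium where the discriminator outputs 0.5 everywhere corresponds to generated fields indistinguishable from the high-resolution targets.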

Wednesday 25 June 08:30 - 10:00 (Hall G2)

Presentation: Hybrid Modeling Approach for Enhancing Sea Surface Turbidity Mapping Using Neural Data Assimilation Networks: Application in the Wadden Sea

Authors: Thi-Thuy-Nga NGUYEN, Johannes Pein, Fabio Mistrangelo, Wei Chen, Felix Dols, Ronan Fablet, Frederic Jourdin, Lorinc Meszaros, Joanna Staneva
Affiliations: Imt Atlantique, Helmholtz-Zentrum Hereon, Deltares, Service Hydrographique et Océanographique de la Marine (SHOM)
This study presents a novel hybrid modeling approach that combines real satellite observations, also known as Observing System Experiments (OSE), with high-quality simulation data, referred to as Observing System Simulation Experiments (OSSE), for enhanced sea surface turbidity mapping in the Wadden Sea. The focus is on Suspended Particulate Matter (SPM), a critical environmental variable that influences marine and coastal ecosystem dynamics. Existing methods, such as DInEOF, or neural network architectures like U-Net, often fail to produce accurate mappings under conditions of significant data gaps, which are common in satellite-derived observations due to factors like cloud cover, especially near coastal areas. To address these challenges, we introduce an integration of sophisticated simulation datasets (Delft3D-FM for the Dutch Wadden Sea and SCHISM for the German Wadden Sea) with real, but gappy, satellite observations. Simulation data provide valuable knowledge about physical dynamics through forcings that have been studied for decades, but they often lack the flexibility to capture the full complexity of real-world systems. Real observations, on the other hand, offer a snapshot of the actual world but can be very gappy, especially near the coast, and noisy due to calibration methods. In this study, the proposed integration is facilitated by the 4DVarNet scheme, an end-to-end data assimilation neural network designed to combine the physical dynamics underlying the simulation data with the gappy real observations. The power of 4DVarNet lies in its design to harness the dynamics learned in a data-driven manner from the OSSE, providing a deeper understanding and more reliable reconstruction of SPM dynamics. Our approach forms a hybrid modeling technique that uses both simulation and real data, capturing the 'best of both worlds', where innovative neural networks meet deep physical understanding.
By merging these two types of knowledge, our method significantly improves the quality and accuracy of turbidity maps, marking a major improvement over existing methods. This is especially crucial in areas like the Wadden Sea, where turbidity plays a fundamental role in ecological and chemical processes. Extensive experiments demonstrate that our hybrid model substantially improves sea surface turbidity mapping compared to state-of-the-art techniques such as DInEOF and U-Net. Our results are not only statistically superior, with lower root mean square errors and relative errors, but also provide a more coherent and smooth representation of the dynamic turbidity processes. This research highlights the critical need for integrating machine learning with traditional environmental science approaches, particularly in satellite data processing, where high rates of data gaps pose a significant challenge.
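Variational data assimilation of the kind 4DVarNet builds on minimises a cost that balances fidelity to a dynamical prior against fidelity to the (gappy) observations. The toy scalar version below is an illustrative assumption, not the 4DVarNet implementation: 4DVarNet learns both the prior term and the minimisation itself within a neural framework.

```python
def variational_cost(state, prior, obs, obs_mask, lam=1.0):
    """Toy variational assimilation cost for a 1-D state vector:
    a dynamics/prior term over all pixels plus an observation term
    evaluated only where observations exist (gap pixels are skipped
    via obs_mask). lam weights observations against the prior."""
    dyn = sum((x - p) ** 2 for x, p in zip(state, prior))
    data = sum((x - y) ** 2
               for x, y, m in zip(state, obs, obs_mask) if m)
    return dyn + lam * data
```

Minimising such a cost fills gaps with prior-consistent values while pulling observed pixels toward the data, which is the behaviour the hybrid OSE/OSSE scheme exploits.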

Wednesday 25 June 08:30 - 10:00 (Hall G2)

Presentation: 4DMED-SEA project: data-driven reconstruction of Mediterranean seascape for the study of upper ocean biophysical interactions and their impact assessment

Authors: Bruno Buongiorno Nardelli
Affiliations: Consiglio Nazionale Delle Ricerche -istituto Di Scienze Marine
The Mediterranean Sea ecosystem provides fundamental resources and services that support large and complex human socio-economic systems, and it can be severely impacted by both natural and anthropogenic stressors. The management, restoration and preservation of the Mediterranean Sea's health require an advanced understanding of marine ecosystem functioning, fostering innovative approaches that provide science-based solutions for sustainable development, such as those enabled through future Digital Twins of the Ocean. Investigating the complex interactions between the physical, biological and biogeochemical processes that shape the Mediterranean ecosystem represents a key interdisciplinary research objective of the 4DMED-Sea project (2023-2025), funded by the European Space Agency. The project's specific objectives are: 1) to improve the European EO capacity over the Mediterranean through the generation of consistent high-resolution data-driven 4D reconstructions of essential ocean variables; 2) to describe the Mediterranean physical and biogeochemical state and analyse its space-time variability through synergistic tools that combine satellite observations, in situ data, and innovative algorithms using emerging technologies and advanced models based on Artificial Intelligence/Machine Learning. The project includes an algorithm development phase dedicated to the generation of gap-free level-4 surface products at increased resolution, specifically surface dynamic topography and sea surface salinity, as well as novel 4D reconstructions of both the physical and the physical-biological ocean state covering the entire Mediterranean. The subsequent phase addresses three Scientific Case Studies. The first study focuses on bio-physical interactions at ocean fronts. The second investigates seascape connectivity and its variability, intercomparing different definitions of ecoregions, starting from either hydrodynamical provinces or trophic regime identification.
The third study is dedicated to the analysis of the role of Lagrangian dispersion on higher trophic levels, focusing on the ecology of selected target species in specific sub-regional contexts. The project also aims to assess the potential impact of the new products and the achievements of the scientific studies as advanced monitoring solutions within future DTO, targeting marine spatial planning and protection policies, and fishery management.

Wednesday 25 June 08:30 - 10:00 (Hall G2)

Presentation: Oil Spill Detection in the North Sea Using Landsat-8/9 Images and Deep Learning

Authors: Olga Schmidt, Egbert Schwarz, Detmar Krause
Affiliations: DLR
Oil pollution of seas and oceans poses a danger to human health and has a major impact on the marine and coastal environment. Oil enters the water from various sources of natural (47 %) and anthropogenic (53 %) origin. The release from natural seeps is steady but slow over time, which allows ecosystems to adapt. The most common anthropogenic cases are accidents in maritime transportation, on oil platforms, or the deliberate discharge of oil from ships, where large amounts of oil can be released into the water within a short time. Data from the German Central Command for Maritime Emergencies show that pollution incidents were identified markedly more often in 2023 relative to aerial surveillance time: pollution was detected on average every 7.7 flight hours, a significant deviation from the 12.5 to 20 flight hours observed since 2009. The global risks to maritime safety and security have also increased significantly due to Russia's use of a very outdated tanker fleet. With the advantage of wide coverage, remote sensing can be used for timely and accurate oil spill detection, helping to prevent pollution spread, supporting clean-up operations to minimize negative environmental impacts, and identifying the polluter. This study presents two different deep learning methods for the detection of oil spills on multispectral optical satellite images acquired by the Landsat-8 and Landsat-9 satellites. The North Sea was chosen as the study area because the satellite data are transmitted in direct downlink mode to the DLR ground station, which makes operational, near-real-time use of the developed methods a longer-term objective. The two deep learning methods used in this study are a (fully connected) deep neural network (DNN) and a convolutional neural network (CNN) with a U-Net architecture.
Both networks utilize different techniques for handling training data: a DNN is trained pixel by pixel, ignoring the spatial component, while the U-Net is able to use context information and localization at the same time. In the initial phase of the study, the Landsat dataset was analyzed, and the images containing oil spills were indexed in a data cube (Open Data Cube, ODC). Subsequently, this data underwent pre-processing to facilitate its labeling and incorporation into the model training process. The pre-processing includes the generation of the training images and the preparation of the corresponding oil masks, which together form the initial training dataset. The training images were generated by applying an atmospheric correction and calculating three independent indices based on specific spectral bands: the Normalised Difference Oil Index (NDOI), the Green-Shortwave Infrared Index (G-SWIR) and the CaBGS index. The NDOI is defined as the ratio of the difference and the sum of the surface reflectance values of the green band and the near-infrared band. The G-SWIR index is based on the ratio of the difference and the sum of the surface reflectance values of the green band and the shortwave infrared band. These two indices have been demonstrated to enhance the visual differentiation between oil slicks and the surrounding water. CaBGS is a combination of the surface reflectance values of the coastal aerosol, blue, green and SWIR2 spectral bands; it has been shown to be more effective at distinguishing oil slicks from look-alike features in shallow water areas. The oil masks were labelled manually using a segmentation method based on the indices mentioned above and the coastal aerosol band, which was used to remove very dark and very bright pixel values and to enhance the contrast between oil slicks and water.
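The two normalized-difference indices defined above translate directly into code. The sketch below follows the stated definitions (difference over sum of surface reflectances); the CaBGS combination is omitted because its exact formula is not given.

```python
def ndoi(green, nir):
    """Normalised Difference Oil Index:
    (green - NIR) / (green + NIR), from surface reflectances."""
    return (green - nir) / (green + nir)

def g_swir(green, swir):
    """Green-Shortwave Infrared Index:
    (green - SWIR) / (green + SWIR), from surface reflectances."""
    return (green - swir) / (green + swir)
```

Both indices lie in [-1, 1]; applied per pixel to atmospherically corrected bands, they form the layers of the training images described above.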
The efficacy of this methodology depends significantly on the circumstances under which the image was captured and on the contrast between oil slicks and the surrounding water. In particular, varying illumination around the oil spills has a significant influence on the pixel values. This made it necessary to define individual thresholds for each Landsat image, or even for each separate oil slick. A precise oil mask is crucial for training a robust model, as any pixel not marked as an oil pixel (pixel value 1) is automatically considered a non-oil pixel (pixel value 0), including pixels of the land surface, clouds, cloud shadows and no-data values (the black border outside the Landsat image). These pixel values show a high variability and would negatively influence model training. Different freely available masks were used to remove them from the training. For now, the masks for clouds and cloud shadows lack sufficient precision, necessitating manual post-processing of these pixels. Due to these challenges, the generation of the oil masks was very time-consuming. In the case of the U-Net, the training images and the corresponding binary oil masks are divided into patches of 128 by 128 pixels. The Keras ImageDataGenerator class was used to create new variations of the images at each training epoch by applying horizontal and vertical flips, with the objective of enhancing the robustness of the U-Net model. In order to exclude patches with a low amount of oil pixels from the training dataset, only those patches containing more than 5 % oil pixels were used. Using this training dataset, the models were trained to recognise and classify patterns of oil spills against the complex background of the marine and coastal environment. The performance and effectiveness of the models was then evaluated using precision, recall and F1-score, and their efficiency was demonstrated on different datasets.
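The patch-selection rule (keep only patches whose binary oil mask contains more than 5 % oil pixels) is simple to state precisely. The sketch below applies it to nested-list masks as an illustration of the rule, not the authors' pipeline code.

```python
def keep_patch(oil_mask, min_fraction=0.05):
    """True if strictly more than min_fraction of the pixels in a
    binary oil mask (1 = oil, 0 = non-oil) are oil pixels."""
    total = sum(len(row) for row in oil_mask)
    oil = sum(sum(row) for row in oil_mask)
    return oil / total > min_fraction

def select_patches(mask_patches, min_fraction=0.05):
    """Filter a list of mask patches down to those worth training on."""
    return [p for p in mask_patches if keep_patch(p, min_fraction)]
```

Note that this rule discards patches containing only narrow or small-scale slicks, which is the training-set bias discussed in the results.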
The results of the study indicate that the proposed methodology is, in principle, an effective approach. The performance of the DNN is quite good on the test data used, while the U-Net shows a less favourable performance. With regard to the performance of the models on selected entire Landsat image frames, the DNN has demonstrated a tendency to overestimate regions that are supposed to be oil, particularly in areas along the coastline. In contrast, the U-Net has demonstrated superior accuracy but has tended to underestimate the extent of the spilled oil. The U-Net performed better when the oil slicks exhibited a broad shape; a likely reason is the exclusion of patches with less than 5 % oil pixels from the training process, which removes oil spills with narrow and small-scale structures. The current performance is constrained by the restricted scope of the training data, particularly the lack of diversity in oil types and the inconsistency in weather conditions under which the data were acquired.

Wednesday 25 June 08:30 - 10:00 (Hall G2)

Presentation: Deep learning algorithm to uncover links between satellite-derived physical drivers and biological fields.

Authors: Luther Ollier, Associate Professor Roy ElHourany, Researcher Marina Levy
Affiliations: Log/locean, LOG, LOCEAN, CNRS
Understanding the dynamics of phytoplankton communities in response to physical environmental changes is essential for evaluating the impact of climate change on marine ecosystems. Satellite observations provide a rich dataset spanning over two decades, capturing physical sea surface parameters such as temperature, salinity, and sea surface height, alongside biological insights such as ocean color. Ocean color data, in particular, is processed to estimate sea surface chlorophyll-a concentrations — a widely recognized proxy for phytoplankton biomass. Recent advancements in ocean color observation have further enabled the characterization of phytoplankton community structure in terms of functional groups or size classes. However, linking satellite-derived physical parameters to biological indicators remains challenging due to spatial and temporal variability. Can physical data reliably predict patterns in ocean color, such as chlorophyll-a concentrations and phytoplankton community structures, and potentially assess their variations? This study addresses this question through a deep-learning approach, utilizing an attention-based autoencoder model to learn relationships between physical variables and ocean color data, including chlorophyll-a concentrations and phytoplankton size classes at weekly and 1° spatial resolution. Our trained deep-learning model effectively captures patterns and correlations between physical parameters, chlorophyll concentrations, and phytoplankton size classes. It enables detailed exploration of how physical factors influence biological variability across different temporal scales. Utilizing a phytoplankton database spanning 1997–2023, this approach demonstrates promising results in replicating chlorophyll concentrations, inferring phytoplankton size classes, and shedding light on the potential links between physical and biological data. 
This study highlights the potential of machine learning for ecological research, contributing to more accurate trend analyses. Understanding phytoplankton variability is critical for marine ecosystem management, given their role in global carbon cycling. This methodology underscores the value of deep-learning to anticipate phytoplankton dynamics under changing environmental conditions.
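At the core of the attention-based autoencoder mentioned above sits the generic scaled dot-product attention operation, which can be sketched as follows (a textbook illustration in NumPy, not the authors' architecture; variable names are assumptions):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic scaled dot-product attention: each output row is a convex
    mixture of the value rows V, weighted by softmax-normalized
    query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

In a model of this kind, queries derived from physical variables (e.g. temperature, salinity, sea surface height) would attend over learned representations linking them to ocean-colour targets.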

Wednesday 25 June 08:30 - 10:00 (Hall G1)

Session: D.02.09 Enhancement of EO products using advanced multi-instrument and multi-platform synergies

In the past five decades, a very large number of satellite instruments have been launched, providing a vast volume of datasets for exploitation. It is now evident that efforts should focus on efficient approaches for interpreting this large amount of diverse satellite data, towards advancing EO capacities and enhancing products, to support and advance new applications. Indeed, no single sensor provides comprehensive information about a targeted object in a complex environment, and there is always a need to inject the missing information from complementary observations, modelling or other sources of knowledge. In addition, once the instruments have been deployed in space, the quality of the measurements cannot be radically improved, whereas the processing algorithms remain under constant improvement and can benefit notably from fusion of information towards optimizing the joint sensitivity of multi-instrument datasets.

This session focuses on algorithms and approaches exploring the synergies of complementary observations, such as: the synergy of passive imagery with active vertical profiling of the atmosphere and hyperspectral spectrometry; combining observations of different sensitivities obtained in different spectral ranges, or at different temporal or spatial scales; as well as combining satellite observations with sub-orbital observations and chemical transport model simulations. The presentations are expected to demonstrate the advantages of synergy methods by using observations from the Copernicus Sentinels, EarthCARE, MTG, EPS-SG, PACE and other European and international recent and forthcoming advanced satellite missions.

Wednesday 25 June 08:30 - 10:00 (Hall G1)

Presentation: Multi-instrument synergy as a tool to maximize the positive impact on the advancement of the aerosol global characterization from satellite observations

Authors: Oleg Dubovik, Pavel Litvinov, Anton Lopatin, Otto Hasekamp, Bastiaan van Diedenhoven, Piyushkumar Patel, Dr. Ulla Wandinger, Holger Baars, Dr. Julian Hofer, Vassilis Amiridis, Alexandra Tsekeri, Gerd-Jan van Zadelhoff, Thanos Tsikerdekis, David Donovan, Lucja Janicka, Iwona Stachlewska, Dr Juan Cuesta, Alexandru Dandocsi, Doina Nicolae, Daniele Gasbarra, Thorsten Fehr
Affiliations: Laboratoire D'optique Atmosphérique, Cnrs / University Of Lille, GRASP SAS, SRON, Netherlands Institute for Space Research, TROPOS, Leibniz Institute for Tropospheric Research, National Observatory of Athens, KNMI, Royal Netherlands Meteorological Institute, University of Warsaw, Faculty of Physics, Laboratoire Interuniversitaire de Systèmes Atmosphériques (LISA), UPEC, National Institute of R&D for Optoelectronics, ESRIN, European Space Agency, ESTEC, European Space Agency
The presentation discusses multi-instrument synergy as a powerful approach to maximize the positive impact of advanced satellite capabilities on the characterization of atmospheric aerosol at the global scale. We discuss the realization of this synergy in the framework of the ESA AIRSENSE and EU PANORAMA projects using the versatile GRASP algorithm (Dubovik et al., 2021). The algorithm is designed to interpret diverse remote sensing observations and is therefore well suited to synergetic processing of observations from multiple sensors. Moreover, it allows for synergy processing of observations that are neither fully coincident nor fully collocated. This is realized by the multi-pixel approach (Dubovik et al., 2011), in which large groups of satellite observations (pixels) are inverted simultaneously. By processing observations from multiple pixels together, the retrieval efficiently incorporates prior knowledge about the temporal and spatial variability of the retrieved parameters. For instance, land surface reflectance tends to be stable over weeks, while aerosol changes on timescales of hours to a day. Similarly, aerosol properties vary minimally across several kilometres, whereas the land surface can exhibit high spatial heterogeneity. First, we discuss the synergy processing of complementary simultaneous observations, such as those from Multi-Angle Polarimeters (MAPs) and space lidars. MAPs are known to infer information about detailed optical properties of columnar aerosol such as the Angstrom Exponent, Single Scattering Albedo, Absorbing Aerosol Optical Depth, Fine/Coarse mode Aerosol Optical Depth, and Cloud Condensation Nuclei (CCN). At present, such polarimeters are operating as part of the PACE mission (SPEX and HARP-2). They are also planned to operate as part of several future missions such as 3MI/EPS-SG, MAP/CO2M, MAIA, AOS, etc.
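The multi-pixel idea of constraining inter-pixel variability can be illustrated with a toy linear retrieval: several pixels are inverted at once with a first-difference smoothness penalty coupling neighbouring pixels (a schematic NumPy sketch under strong simplifying assumptions, not the GRASP algorithm itself):

```python
import numpy as np

def multi_pixel_retrieval(A, y, lam):
    """Toy multi-pixel inversion for one parameter per pixel:
    minimize ||A x - y||^2 + lam * ||D x||^2, where D is the
    first-difference operator penalizing variability between
    adjacent pixels (an a priori smoothness constraint)."""
    n = A.shape[1]
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)  # adjacent differences
    lhs = A.T @ A + lam * D.T @ D                 # normal equations
    rhs = A.T @ y
    return np.linalg.solve(lhs, rhs)
```

As the constraint weight `lam` grows, noisy per-pixel estimates are pulled toward a smooth solution, which is how information can propagate from better-constrained pixels to neighbouring, less-constrained ones.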
Space lidar observations have recently been provided by the Aeolus mission and are currently provided by EarthCARE. Lidar observations are very sensitive to the vertical variability of atmospheric aerosol. At the same time, the interpretation of lidar observations usually relies on additional information about aerosol type, while MAP measurements are highly sensitive to aerosol type, which has rather limited vertical variability. Thus, MAP and lidar observations are highly complementary and well suited for synergy retrieval. Second, we discuss the synergy of not fully coincident or collocated observations. Although this type of synergy is less intuitive, it appears even more promising than the synergy of simultaneous observations. In fact, whereas the synergy of coincident MAP and lidar observations is very efficient, in practice the coincidence of such observations can be very restricted. For example, the trajectories of the currently operating EarthCARE and PACE have very limited overlaps, so a synergy product of these two satellites could only be very sparse. In contrast, the synergy of not fully coincident or collocated observations can always be applied to combine observations from operating satellites with different trajectories. Based on our current experience, such synergy, realized using the multi-pixel approach, allows for substantial improvement of aerosol characterization due to two phenomena: (i) propagation of superior information about aerosol details from more sensitive observations to less sensitive ones, and (ii) an overall increase in the volume of observations of the same aerosol event at different times and locations.
The benefits of such synergy of non-coincident observations have been demonstrated in the framework of the ESA SYREMIS project (https://www.grasp-earth.com/portfolio/syremis/, last access 01/12/2024), where a synergetic multi-instrument retrieval approach was developed for characterizing aerosol and surface properties using different combinations of S-3A, S-3B, S-5P, and HIMAWARI observations. The application of multi-pixel retrieval to the LEO instruments OLCI/S-3A + OLCI/S-3B + TROPOMI/S-5P and the GEO instrument AHI/HIMAWARI, under refined a priori constraints on the temporal and spatial variability of the different retrieved parameters, results in retrievals of outstanding accuracy. For example, the AOD derived from the joint retrieval was more accurate than the retrieval using observations of any instrument alone. Moreover, multi-sensor retrieval using deep synergy allows information to propagate from TROPOMI, making possible the retrieval of AE and SSA with reasonable accuracy for pixels with only OLCI and AHI observations, while processing these observations separately could not provide these parameters (Litvinov et al., 2023). In the frame of AIRSENSE and PANORAMA, depending on data availability and maturity, it is planned to gradually generate and enrich the novel products using the following complementary synergy approaches: (i) synergy of currently operating single-view imagers (S-3, S-5P, S-2 “PRISMA”, etc.); (ii) inclusion of products from Aeolus and EarthCARE; (iii) synergy of MAP observations (HARP-2, SPEX, 3MI, etc.) with single-viewing polar (S-3, S-5P) and geostationary (FCI/MTG) instruments. The results of the deep synergy retrievals will be demonstrated and discussed. REFERENCES: Dubovik, O., M. Herman, A. Holdak, T. Lapyonok, D. Tanré, J. L. Deuzé, F. Ducos, A. Sinyuk, and A. Lopatin, “Statistically optimized inversion algorithm for enhanced retrieval of aerosol properties from spectral multi-angle polarimetric satellite observations”, Atmos. Meas. Tech., 4, 975-1018, 2011.
Dubovik, O., D. Fuertes, P. Litvinov, et al., “A Comprehensive Description of Multi-Term LSM for Applying Multiple a Priori Constraints in Problems of Atmospheric Remote Sensing: GRASP Algorithm, Concept, and Applications”, Front. Remote Sens. 2:706851, doi: 10.3389/frsen.2021.706851, 2021. Litvinov, P., O. Dubovik, C. Chen, S. Zhai, C. Matar, D. Fuertes, A. Lopatin, B. Torres, T. Lapyonok, V. Lanzinger, L. Bindreiter, M. Dornacher, A. Lehn, D. Gasbarra, and C. Retscher, “Synergetic retrieval from multi-instrument measurements for advanced aerosol and surface characterization”, APOLO-2024, 18-21 November, 2024, Kyoto, Japan.

Wednesday 25 June 08:30 - 10:00 (Hall G1)

Presentation: Multi-mission satellite retrieval of Vegetation Parameters

Authors: Else Swinnen, Manuela Balzarolo, Dr Simon Blessing, Fernando Camacho, Sarah Gebruers, Dr Ralf Giering, Dominique Jolivet, Enrique Martínez-Sánchez, Dr Catherine Morfopoulos, Didier Ramon, Jorge Sánchez-Zapero, Carolien Toté, Christiaan van der Tol, Bram Van Dyck, Kris Vanhoof
Affiliations: VITO, EOLAB, University of Twente, FastOpt, HYGEOS, Imperial College London, University of Antwerp
Terrestrial vegetation plays a major role in the climate system through photosynthesis and respiration, evaporation, and soil formation. Two essential climate variables that describe the state of the vegetation are the leaf area index (LAI) and the fraction of incident surface solar radiation in the photosynthetically active region (0.4-0.7 μm) absorbed by vegetation (FAPAR). The CCI+ Vegetation Parameters project within the Climate Change Initiative (CCI) programme of ESA aims at addressing the need of the scientific community for multi-decadal global time series of LAI and FAPAR, to advance the study of the Earth system, including climatology, hydrology, carbon sequestration, ecosystem functioning and phenology. The project comprises three cycles, in which gradually more input datasets are added, the scientific and operational maturity are increased, and user feedback is taken into account. During the first cycle, the retrieval algorithm was selected on a single-sensor time series. During the current cycle 2, the algorithm is further improved and several other datasets are included, resulting in the first multi-satellite retrieval of vegetation parameters spanning the years 2000–2020 at 1 km resolution. Surface reflectance data from Sentinel-3 OLCI, SPOT4/5-VEGETATION1/2, PROBA-V, SNPP- and NOAA20-VIIRS, and MetOpA/B/C-AVHRR are used as input for the retrieval algorithm. The surface reflectance data are processed from L1B data using the same atmospheric correction approach and input data. Uncertainties related to the input data and atmospheric correction are added to the surface reflectance datasets. The OptiSAIL retrieval system is a complex, physics-based retrieval system. It is based on a model comprising SAIL (canopy reflectance), PROSPECT-D (leaf properties), TARTES (snow properties), a soil model (soil reflectance anisotropy, moisture effect), and a cloud contamination model.
The system is independent of the input data and allows the simultaneous retrieval from multiple sensors of LAI and FAPAR, but also of other canopy and leaf parameters (e.g., chlorophyll a&b, surface albedo or green FAPAR). The validation methodology is compliant with the CEOS LPV good practices for LAI products. The dataset is generated over a latitudinal North-South transect and over a selection of sites for both direct validation (DIRECT V2.1, GBOV/NEON, AMMA) and product intercomparison (LANDVAL sampling) purposes with Copernicus Land Monitoring Service, MODIS C6.1 and VIIRS C1 equivalent biophysical products. Several criteria of performance are evaluated, including completeness, spatial consistency, temporal consistency, error evaluation (accuracy, precision and uncertainty) as well as the conformity testing with requirements. We will present the latest status of the retrieval algorithm, and the validation and intercomparison of the LAI and FAPAR datasets generated in cycle 2.

Wednesday 25 June 08:30 - 10:00 (Hall G1)

Presentation: Novel Observing Strategies: Pioneering the Next-Generation of Intelligent Systems for Earth Science

Authors: Laura Rogers, Jacqueline Le Moigne, Louis Nyugen, Ben Smith, Jon Ranson, Robert Morris, Nikunj Oza
Affiliations: NASA HQs, NASA Langley, Jet Propulsion Lab, NASA GSFC, NASA Ames
NASA’s Earth Science Division (ESD) has long contributed to efforts to measure, monitor, understand, and predict the Earth environment, including its climate trends and global changes. The global constellation of space-based observing platforms has increased in capability and number over the past decade thanks to advancing technology across NASA, global space agencies, and commercial providers. NASA’s recent Earth Science to Action Strategy highlights this progress and identifies gaps in our ability to understand interconnected processes, integrate data and observations across numerous platforms, and coalesce knowledge into end-to-end systems. To address these gaps, the Intelligent Systems Technology (IST) group within NASA’s Earth Science Technology Office (ESTO) is developing cutting-edge technologies to enable intelligent observing systems and innovative autonomous capabilities. Novel Observing Strategies (NOS) exploit onboard intelligence, agile targeting, and coordination among ground- and space-based sensors to dynamically respond to Earth science events and combine observations from multiple vantage points. NOS aims to optimize measurement acquisitions by using diverse observing and modeling capabilities, representing various resolutions, that are dynamically coordinated to provide complete representations of Earth science phenomena. The observing assets can be in space, in the air, or in situ, and the observed phenomena may exist on a variety of spatial or temporal scales (e.g., real-time tracking of hazards and disasters or long-term asset coordination for continuous ecosystem monitoring). The two main NOS goals are to: 1. Design and develop future observation concepts at the request of a new measurement, for example as identified in the latest Decadal Survey or as the result of a model or other science data analysis; and 2.
Dynamically respond to science and applied-science events of interest, not only focusing on rapid disaster-like events but also considering mid- and long-term events and various area coverages, from global to regional to local-impact events (e.g., distressed vegetation, potential landslides due to runoff, etc.). NASA’s IST group leads the development of these core innovations to optimize Earth science return. This presentation will share recent advances in observing strategies funded across domains, as well as enabling software technologies, system concepts, and in-space demonstrations. Three specific advances will be highlighted: 1. Decentralized, Distributed, Dynamic and Context-aware HEterogeneous Sensor Systems (3D-CHESS): a proof of concept for a context-aware Earth observing sensor web consisting of a set of nodes with a knowledge base, heterogeneous sensors, edge computing, and autonomous decision-making capabilities. 2. Sensor-in-the-Loop Testbed to Enable Versatile/Intelligent/Dynamic Earth Observation (VIDEO): an approach for real-time sensor reconfiguration that significantly improves the resolution of the retrieved atmospheric fields in the regions where that improvement is most beneficial, while conserving resources elsewhere. 3. Dynamic Targeting: an in-flight demonstration proving the ability of deep-learning-based detection algorithms to trigger rapid alerts. Detection and rapid alerts are the first step toward dynamic targeting, where the detection drives retargeting (i.e., take a lookahead image, analyze it, and retarget the trailing instrument). Intelligent systems will revolutionize Earth science missions, knowledge, and understanding. Novel observing strategies have the potential to improve science by optimizing the observation of diverse science events, minimizing model uncertainties, and enabling autonomous decisions.
The continuous technology advances developed through IST are critical to our ability to understand interconnected processes, integrate data and observations across numerous platforms, coalesce knowledge into end-to-end systems and drive action.

Wednesday 25 June 08:30 - 10:00 (Hall G1)

Presentation: SAR-to-Optical Image Translation Using Deep Learning Attention Mechanisms: Enhancing Structural Preservation and Remote Sensing Applications

Authors: Andrea Cavallini, Angie Catalina Carrillo Chappe, Marco Uccelli, Leticia Pérez Sienes
Affiliations: Starion Group
Synthetic Aperture Radar (SAR) imaging is a powerful remote sensing technology that provides surface information in all weather conditions and offers ground-penetration capabilities; it is widely used for its ability to penetrate clouds and vegetation and to detect concealed objects, obtaining information about the lower vegetation layer and soil. However, SAR images are challenging to interpret visually due to signal noise, image distortion, and the absence of colour information. Optical imaging, on the other hand, provides information in the visible and near-infrared spectrum, offering details about colours and textures that make it essential for applications such as crop classification, vegetation monitoring, environmental analysis, and urban planning. However, optical waves cannot penetrate clouds, resulting in missing information and limitations when cloud coverage is high. There are many differences between SAR and optical images. For instance, SAR images often have only one channel when derived from a single polarization, although they can have multiple channels in multi-polarization formats; additionally, they are characterized by speckle noise. Nevertheless, they are unaffected by cloud coverage and can acquire data both during day and nighttime, allowing the collection of information at different wavelengths in challenging conditions. In contrast, optical images have multiple bands, ranging from a few (e.g., Red, Green, Blue and/or Near-Infrared and SWIR bands) in multispectral images up to hundreds of channels in hyperspectral images, providing detailed spectral information and facilitating applications such as classification and detection within the images, and even more complex tasks when working with hyperspectral images, as they can capture specific spectral signatures.
However, as noted above, optical imagery is heavily affected by cloud coverage, limiting its usability in many regions, such as Northern Europe, where heavy cloud cover can be present up to 80% of the time and even simple analyses such as NDVI or other vegetation indices become extremely challenging. The task of fusing SAR and optical images to obtain cloud-free optical-like images is called SAR-to-Optical (S2O) image translation, which has recently gained significant attention since this technique can overcome the limitations of optical images by using the information provided by SAR images. In this context, we are working on a novel deep learning approach for S2O image translation that exploits Artificial Intelligence capabilities such as Generative Adversarial Networks with Attention Mechanisms, outperforming state-of-the-art results across different evaluation metrics. Being able to combine the capabilities and advantages of both types of imagery through S2O image translation therefore adds value and significance, integrating SAR, which is unaffected by weather conditions, with optical imagery, thus enhancing the reliability and reach of remote sensing applications. Traditionally, the issue of missing information in optical images is tackled by fusing multi-source, multi-temporal, or multi-spectral optical images, using cloud-free images or bands that could retrieve the missing information. The limitation of this method lies in the impossibility of retrieving the missing information when challenging cloud conditions persist across the dataset. With recent advances in Artificial Intelligence, Deep Learning techniques have been explored to overcome these traditional limitations by fusing SAR and optical imagery, learning complex mappings between the two domains to perform S2O image translation.
While previous methods have achieved commendable results, challenges remain, particularly in preserving geometric structures, enhancing fine details, and reducing noise, since SAR and optical remote sensing differ fundamentally in their imaging principles, making it difficult to map a relationship between the two. In this work, we introduce a novel Deep Learning approach that leverages attention mechanisms within a GAN-based (Generative Adversarial Network) architecture for image translation between two different domains, in this case SAR-to-optical image translation. The base GAN architecture is divided into two integrated modules: a generator, which incorporates spatial and channel attention modules into an encoder-decoder structure that produces the optical-like images, and a discriminator, which uses attention layers to detect inconsistencies in critical regions of the generated optical-like images. Additionally, the architecture features multiple attention mechanisms: spatial attention to capture patterns such as edges and contours of the SAR images, focusing on areas with crucial structural information; channel attention, which learns the interdependencies between channels in order to correctly translate from selected polarizations in SAR images to the multi-channel optical images, ensuring realistic colour and intensity distributions; and multi-scale fusion to capture features at different levels of granularity, effectively balancing global context and local details. To ensure compatibility between SAR and optical images, a data preprocessing module was designed to correct alignment issues. SAR data, such as those from ESA's Sentinel-1, are typically captured at off-nadir angles (20°-47°), while optical imagery, such as that from Sentinel-2, is generally acquired near nadir (0°). These differences necessitate corrections to align both data types into a common spatial framework.
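The channel-attention component described above is commonly realized as a squeeze-and-excitation style gating of feature channels, which can be sketched as follows (a generic NumPy illustration of the mechanism, not the authors' implementation; weight shapes and names are assumptions):

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel attention: global-average-pool
    each channel, pass the per-channel statistics through a small two-layer
    bottleneck, and rescale the channels by the resulting sigmoid gates.
    feature_map: (H, W, C); w1: (C, C//r); w2: (C//r, C)."""
    squeeze = feature_map.mean(axis=(0, 1))          # (C,) channel stats
    hidden = np.maximum(squeeze @ w1, 0.0)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid weights, (C,)
    return feature_map * gates                       # broadcast over H, W
```

In a trained network the weights `w1` and `w2` are learned, so the gates come to emphasize the channels most informative for mapping SAR polarizations to optical bands.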
Preprocessing steps include radiometric calibration, geometric terrain correction, and orthorectification, among others. This standardization minimizes geometric distortions that could otherwise compromise the effectiveness of the SAR-to-Optical (S2O) image translation approach. The framework is implemented using TensorFlow and trained on NVIDIA GPUs, using a loss function composed of an adversarial loss, which evaluates the optical-like images produced by the generator; a perceptual loss, which measures the semantic consistency between the generated and the expected images; a structural loss, which aims to preserve geometric structures, avoiding edge deviations and incorrect texture representations; and a pixel loss, which minimizes the Mean Squared Error (MSE) between the generated optical-like images and the ground truth, ensuring pixel-wise accuracy. The performance of the model is evaluated using the MSE, the Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM), showing a significant improvement over baseline models such as unsupervised GANs, cGANs, and traditional CNNs. In particular, the model outperforms existing state-of-the-art techniques, achieving improvements of 5–9 dB in PSNR and 0.04–0.1 in SSIM compared to previous benchmarks. Additionally, metrics such as the Fréchet Inception Distance (FID) and the Natural Image Quality Evaluator (NIQE) are computed to demonstrate the quality, diversity and naturalness of the generated optical-like images. These results portray our solution as a promising approach for S2O image translation, outperforming current state-of-the-art methods.
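For reference, the MSE and PSNR used in the evaluation follow their standard definitions, sketched below in NumPy (a peak signal value of 1.0 is assumed for images normalized to [0, 1]):

```python
import numpy as np

def mse(generated, reference):
    """Mean squared error between a generated and a reference image."""
    return float(np.mean((generated - reference) ** 2))

def psnr(generated, reference, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference. Returns infinity for an exact match."""
    err = mse(generated, reference)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / err)
```

A 5–9 dB PSNR gain thus corresponds to roughly a 3x–8x reduction in mean squared error relative to the baselines.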
This work offers a novel deep learning approach for S2O image translation, in which the integration of spatial and channel attention mechanisms within a GAN architecture allows the model to address the challenges of structural preservation. The approach outperforms the current state of the art, obtaining high-quality generated optical-like images and highlighting the potential of attention mechanisms in the S2O image translation field.

Wednesday 25 June 08:30 - 10:00 (Hall G1)

Presentation: Improving Planet Fusion Gap-filling and Uncertainty Estimation Using Sentinel-1 Data Fusion

Authors: Julie Nutini, Rasmus Houborg
Affiliations: Planet Labs
Planet Fusion is a daily, 3 m, cloud-free surface reflectance product in 4 spectral bands (RGB-NIR). It uses a rigorous methodology based on the CESTEM [1] algorithm to radiometrically harmonize PlanetScope data from Planet’s constellation of ~200 cubesats. The CESTEM methodology uses data from MODIS and VIIRS for a coarse radiometric correction, and then leverages FORCE [2] (harmonized Landsat-8/9 and Sentinel-2) surface reflectance data as the gold-standard radiometric reference for fine-scale tuning. Improvements to the generation of PlanetScope UDM cloud masks are enabled by exploiting both the radiometrically corrected PlanetScope data and the high temporal cadence of the PlanetScope constellation. A sophisticated gap-filling approach then leverages both spatial and temporal information from surrounding clear pixels, as well as any near-coincident clear-sky FORCE data, to generate a spatially complete, cloud-free, daily surface reflectance product. The gap-filling technique used by Planet Fusion works well when changes on the ground are typical: we can exploit spatially similar pixels and their temporal trajectories to estimate the values of masked pixels. However, when changes on the ground are atypical (e.g., outlier events such as heavy rain, flooding, fire, or deforestation) and occur during unrelenting cloud cover, the uncertainty in the Planet Fusion gap-filling approach increases. To generate a spatially and temporally continuous product with improved uncertainty during these types of events, we turn to Sentinel-1 data. The Copernicus Sentinel-1 satellites carry synthetic aperture radar (SAR) sensors. Unlike optical sensors, which are passive sensors operating in the optical/infrared wavelengths, SAR is an active instrument that transmits its own energy in the microwave portion of the electromagnetic spectrum and then receives the energy backscattered from the Earth’s surface.
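The core gap-filling idea of borrowing values from temporally adjacent clear observations can be sketched, in its simplest form, as per-pixel linear interpolation along the time axis (a heavily simplified NumPy illustration of the concept, not Planet Fusion's actual algorithm, which also uses spatially similar pixels and FORCE data):

```python
import numpy as np

def temporal_gap_fill(series, valid):
    """Fill masked samples of a per-pixel reflectance time series by
    linear interpolation between the nearest valid (clear-sky) samples.
    series: 1-D reflectance values; valid: boolean mask of clear samples."""
    t = np.arange(series.size)
    return np.interp(t, t[valid], series[valid])
```

This simple scheme fails exactly in the atypical-change cases the abstract describes (e.g., a flood during a long cloudy gap), which is the motivation for injecting Sentinel-1-derived reflectance into the gap.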
Due to the longer microwave wavelengths, it is possible to image through cloud cover, making SAR ideal for the task of gap-filling in Planet Fusion. However, optical and SAR data are visually quite different and provide distinct information about the Earth’s surface: optical data measure spectral reflection of sunlight, while SAR backscatter is influenced by surface geometry and dielectric properties. These fundamental yet complementary differences make combining the two data modalities more complex. In this work, we first present our approach to predicting RGB-NIR surface reflectance using Sentinel-1 data. We utilize machine learning to identify complementary patterns in the Sentinel-1 and PlanetScope data. We leverage the temporal cadence of the PlanetScope constellation and the CESTEM radiometric harmonization approach used by Planet Fusion to find near-coincident pairs of Sentinel-1 and radiometrically consistent PlanetScope data. We also utilize our passive-microwave (SMAP/AMSR-2) derived Planetary Variables (soil water content, land surface temperature and vegetation optical depth) to help our machine learning model partition the various scattering influences in the Sentinel-1 signal, leaving behind the patterns with the strongest correlation to the optical surface reflectance values. We then integrate our Sentinel-1 derived surface reflectance (S1-SR) into the Planet Fusion gap-filling and uncertainty estimation methods, and share our results. We demonstrate that we can improve gap-filling during extended periods of cloud cover and decrease the uncertainty of gap-filled pixels on dates near-coincident with any available Sentinel-1 acquisition. This work pushes the forefront of remote sensing data fusion and truly demonstrates multi-instrument, multi-modality, and multi-platform synergies. [1] “A Cubesat enabled Spatio-Temporal Enhancement Method (CESTEM) utilizing Planet, Landsat and MODIS data”, Houborg and McCabe (2018).
[2] “FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond”, Frantz (2019).

Wednesday 25 June 08:30 - 10:00 (Hall G1)

Presentation: Sustainable Mining Valorization: EO and Geophysical Synergies for Critical Raw Material Recovery

Authors: Dr. Sandra Lorenz, Dr. Richard Gloaguen, Moritz Kirsch, Dr. Margret Fuchs, Dr. Andréa de Lima Ribeiro, Dr. René Booysen, Dr. Christian Haberland
Affiliations: Helmholtz-Zentrum Dresden-Rossendorf (HZDR), GeoForschungsZentrum Potsdam (GFZ)
Mining waste, including tailings and waste rock, often contains valuable materials that can be recovered through innovative valorization approaches. The EU-Horizon-funded "Multiscale observation services for mining-related deposits" (MOSMIN) project leverages integrated Earth Observation (EO)-based services to map domains of secondary prospectivity, supporting the sustainable recovery of critical raw materials. By combining satellite, uncrewed aerial vehicles (UAV), and ground-based data with advanced geophysical methods, MOSMIN develops scalable workflows for mapping and characterizing mineralized zones across different spatial resolutions, contributing to circular economy initiatives in the raw materials industry. At the satellite scale, the project uses multispectral and hyperspectral imaging to delineate surface geology and identify domains of interest for secondary prospectivity. This includes the direct mapping of materials (such as rare earth elements - REEs) or the identification of alteration facies associated with grade zones (e.g. for sulfides in copper porphyries). These large-scale observations provide critical insights into the spatial distribution of mineralized zones, forming a basis for further detailed investigations. At the outcrop level, tripod- and UAV-based hyperspectral imaging systems offer higher resolution mapping of the same features, allowing for more precise delineation of mineralized domains. These observations are further enhanced by repeated acquisitions allowing us to monitor the potential resource through time. UAV-based magnetic surveys can help delineate targets in areas where mineralization is known to be associated with magnetite. Additionally, ambient noise seismic data is used to characterize the 3D structure of waste rock deposits, enabling the identification of geological domains that may correlate with grade. 
This combination of geophysical and EO data provides a multi-dimensional perspective on mineral distribution, facilitating targeted exploration and recovery efforts. We showcase the potential of such an approach with the example of Siilinjärvi, a phosphate mine that produces waste with substantial amounts of REEs. At Siilinjärvi, the integration of satellite and ground-based hyperspectral imaging has demonstrated the ability to detect and characterize mineralized zones. Through synergies with a parallel research project, emission and absorption spectroscopic measurements of sample material from both fresh rock and tailings at the same location validate these EO observations. This validation corroborates the reliability of the EO-based workflows and highlights the potential of these methods for broader application in mining waste valorization. MOSMIN’s approach illustrates the potential of EO data, combined with UAV, ground-based methods, and advanced geophysical techniques, to support the recovery of valuable materials from mining waste. By enabling the identification and characterization of mineralized zones across scales, the project contributes to sustainable resource recovery and the reduction of environmental footprints associated with legacy and active mining sites.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F2)

Session: C.02.12 ESA's Biomass mission - PART 1

BIOMASS is the 7th ESA Earth Explorer mission. Earth Explorers are the backbone of the science and research element of ESA’s Living Planet Programme, providing an important contribution to the global endeavor of understanding the Earth system.
The overall objective of the Biomass mission is to reduce the uncertainty in the worldwide spatial distribution and dynamics of forest biomass in order to improve current assessments and future projections of the global carbon cycle. To enable this, the Biomass mission data products will provide consistent global estimates of forest biomass, forest height, forest disturbance and re-growth parameters.
The Biomass Satellite industrial Prime contractor is Airbus Defence and Space, Stevenage (UK). The radar payload is built by Airbus Defence and Space, Friedrichshafen (Germany).
The Biomass payload consists of a fully polarimetric, left-looking P-band SAR, the first of its kind in this frequency band for Earth observation purposes. The BIOMASS mission is designed to last 5 years and consists of two phases, i.e. a tomographic and an interferometric phase.
The Biomass Qualification Acceptance Review is scheduled to be concluded within 2024 with the satellite due for launch in 2025.

Biomass will provide global maps of the amount of carbon stored in the world's forests and how it changes over time. Biomass will also provide essential support to UN treaties on the reduction of emissions from deforestation and forest degradation. Forest type and forest cover worldwide can be detected by today's satellites, but the mission’s unique capabilities will provide access to global forest structural parameters with homogeneous quality and sampling, enabling a multitude of global maps of these main forest parameters over the mission lifetime.
Apart from the above, the session also intends to cover the wider context of how carbon science has moved on and how Biomass, GEDI and many other elements provide the bigger picture.

The session will highlight the latest developments covering overall mission status, mission science and foreseen exploitation, higher-level products (implementation and algorithmic content) and ground segment.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F2)

Presentation: Improved biomass estimation algorithm for ESA’s BIOMASS mission

Authors: Maciej Soja, Francesco Banda, Dr. Klaus Scipal
Affiliations: Wageningen Environmental Research, Aresys, ESA
ESA’s 7th Earth Explorer mission BIOMASS is planned for launch in March or April 2025 and will be the first P-band (435 MHz) synthetic aperture radar (SAR) in space. Its eponymous goal will be mapping the distribution and dynamics of above-ground biomass density (AGBD) of forests on four continents: South America, Africa, Asia, and Oceania, where current AGBD knowledge is most limited. The 200-metre resolution maps generated by BIOMASS every eight months for about five years will provide unprecedented information on carbon stocks and fluxes and will also aid international deforestation regulations [1]. The current AGBD estimation algorithm for the BIOMASS mission relies on a power-law model that is fitted to P-band backscatter data corrected for ground-level scattering using the ground cancellation technique [2]. The fitting is done in the logarithmic domain separately for the three polarisations [3], [4]:

s_pi = l_p + a_p*w_i + n_p*c_i   (1)

where s_pi is the logarithm of the ground-cancelled backscatter at polarisation p for sample i, c_i is the logarithm of the cosine of the local incidence angle, and w_i is the logarithm of AGBD. In contrast to earlier implementations of the algorithm [3], the current algorithm uses ordinary least squares to estimate the three model parameters: one intercept l_p and two scaling (slope) constants, a_p and n_p. To account for varying meteorological conditions, moisture, phenology, and forest structure, the model parameters can attain different values for different forest types and acquisitions. Model parameters are estimated using local training data within partly overlapping 5°×5° tiles in latitude and longitude. This approach allows the model to adapt to local conditions and enforces some correlation between adjacent tiles, but requires significant amounts of training data. 
Currently, AGBD estimates from NASA’s spaceborne lidar missions GEDI and ICESat-2 are considered the main training data source, but other options are being studied in ongoing projects. Another limitation of the current approach is that it assumes that reference AGBD estimates are free of error, whereas even the most accurate non-destructive in situ inventories of AGBD can have plot-level inaccuracies of more than 20%. First, we introduce the baseline AGBD mapping algorithm for ESA’s BIOMASS mission. Then, we propose, illustrate, and discuss two significant improvements to AGBD estimation from BIOMASS data: i) using effective variance weighting to account for errors in both the SAR observables and the reference AGBD data, and ii) a two-step training approach in which adjacent SAR images are first calibrated for meteorological and moisture variability, after which absolute AGBD estimation is performed. The effective variance weighting method is described in detail in [5], where it is compared with several other parameter estimation approaches and recommended as the go-to method for problems with uncertainties in both the independent and dependent variables. The first improvement replaces the ordinary least squares fitting of (1) with the following minimisation task:

min over (l_p, a_p, n_p) of:  sum_p sum_i (l_p + a_p*w_i + n_p*c_i - s_pi)^2 / (a_p^2*σ_wi^2 + n_p^2*σ_ci^2 + σ_spi^2)   (2)

where σ_wi, σ_ci, and σ_spi are the estimated standard deviations of the three measurements w_i, c_i, and s_pi (w_i is the log-AGBD for the training data). This approach can be implemented through non-linear least squares minimisation. We also investigate data for four tropical sites in Gabon and French Guiana (Nouragues, Paracou, Mabounie, and Lope) and two boreal sites in Sweden (Remningstorp and Krycklan). 
w_i, c_i, and s_pi are estimated as the mean values of the logarithms of AGBD, the cosine of the local incidence angle, and the ground-cancelled backscatter within 200 m × 200 m pixels, while σ_wi, σ_ci, and σ_spi are estimated as the corresponding standard deviations. SAR data were acquired within the ESA-funded airborne campaigns TropiSAR 2009 (Nouragues and Paracou) [6], AfriSAR 2015 (Mabounie and Lope) [7], BioSAR 2007 (Remningstorp) [8], and BioSAR 2008 (Krycklan) [9]. Reference AGBD maps were generated from airborne laser scanning and in situ data in connection with these campaigns. Initial results show that the effective variance weighting in (2) provides better correlation and less bias in AGBD estimation, because areas with large local variability in AGBD, local incidence angle, or ground-cancelled backscatter play a lesser role in parameter estimation. For the second improvement, we introduce a two-step training process, in which the power-law model is first fitted to another remote sensing metric that is locally correlated with AGBD, and the final map is subsequently re-calibrated using a few reference AGBD estimates. This allows for the removal of swath-to-swath discontinuities in the BIOMASS data caused by, e.g., moisture variability. Two candidate remote sensing products that can be used here are the difference between an X-band DEM and the BIOMASS DTM [10] and the GEDI Level 2A forest height products [11]. Collectively, these two improvements are expected to substantially decrease the AGBD estimation bias and reduce reliance on third-party AGBD estimates, which may be affected by systematic errors [12], [13]. [1] S. Quegan et al., “The European Space Agency BIOMASS mission: Measuring forest above-ground biomass from space,” Remote Sens. Environ., vol. 227, pp. 44–60, Jun. 2019, doi: 10.1016/j.rse.2019.03.032. [2] M. Mariotti d’Alessandro, S. Tebaldini, S. Quegan, M. J. Soja, L. M. H. Ulander, and K. 
Scipal, “Interferometric Ground Cancellation for Above Ground Biomass Estimation,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 9, Art. no. 9, Sep. 2020, doi: 10.1109/TGRS.2020.2976854. [3] M. J. Soja et al., “Mapping above-ground biomass in tropical forests with ground-cancelled P-band SAR and limited reference data,” Remote Sens. Environ., vol. 253, p. 112153, Feb. 2021, doi: 10.1016/j.rse.2020.112153. [4] F. Banda et al., “The BIOMASS Level 2 Prototype Processor: Design and Experimental Results of Above-Ground Biomass Estimation,” Remote Sens., vol. 12, no. 6, Art. no. 6, Mar. 2020, doi: 10.3390/rs12060985. [5] J. Tellinghuisen, “Least Squares Methods for Treating Problems with Uncertainty in x and y,” Anal. Chem., vol. 92, no. 16, Art. no. 16, Aug. 2020, doi: 10.1021/acs.analchem.0c02178. [6] P. Dubois-Fernandez et al., “TropiSAR 2009: Technical Assistance for the Development of Airborne SAR and Geophysical Measurements during the TropiSAR 2009 Experiment: Final Report,” ESA contract no. 22446/09, CNES contract no. 9292903/08/09, 2011. [7] I. Hajnsek et al., “Technical Assistance for the Development of Airborne SAR and Geophysical Measurements during the AfriSAR Experiment,” European Space Agency, ESA contract no. 4000114293/15/NL/CT, 2017. [8] I. Hajnsek et al., “BioSAR 2007 technical assistance for the development of airborne SAR and geophysical measurements during the BioSAR 2007 experiment: Final report without synthesis,” ESA contract no. 20755/07/NL/CB, 2008. [Online]. Available: http://earth.esa.int/campaigns/DOC/biosar_finalreports_nosynthesis.pdf [9] I. Hajnsek et al., “BioSAR 2008 Technical Assistance for the Development of Airborne SAR and Geophysical Measurements during the BioSAR 2008 Experiment: Final Report – BioSAR Campaign,” ESA contract no. 22052/08/NL/CT, 2009. [Online]. Available: http://earth.esa.int/campaigns/DOC/BIOSAR2_final_report.pdf [10] M. J. 
Soja et al., “Sub-Hectare Resolution Mapping of Forest Biomass with Global Dem Data and a Coarse Digital Terrain Model,” 2024. doi: https://dx.doi.org/10.2139/ssrn.4762399. [11] R. Dubayah, M. Hofton, J. Blair, J. Armston, H. Tang, and S. Luthcke, “GEDI L2A Elevation and Height Metrics Data Global Footprint Level V002.” NASA EOSDIS Land Processes Distributed Active Archive Center, 2021. doi: 10.5067/GEDI/GEDI02_A.002. [12] N. Hunka et al., “On the NASA GEDI and ESA CCI biomass maps: aligning for uptake in the UNFCCC global stocktake,” Environ. Res. Lett., vol. 18, no. 12, Art. no. 12, Dec. 2023, doi: 10.1088/1748-9326/ad0b60. [13] A. Araza et al., “A comprehensive framework for assessing the accuracy and uncertainty of global above-ground biomass maps,” Remote Sens. Environ., vol. 272, p. 112917, Apr. 2022, doi: 10.1016/j.rse.2022.112917.
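To make the effective-variance weighting of (2) concrete, here is a minimal single-polarisation sketch on synthetic data. The iteratively re-weighted least-squares scheme below (freeze the effective variance at the current parameters, solve a weighted least-squares step, repeat) is one simple way to approximate the minimisation; it is an illustrative choice, not necessarily how the mission processor implements it, and all sigmas and values are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-polarisation training data for the power-law model (1):
# s = l + a*w + n*c, with per-pixel uncertainties on s, w and c.
N = 400
l_true, a_true, n_true = -1.5, 0.6, 0.8
w = rng.uniform(3.0, 6.0, N)            # true log-AGBD per 200 m pixel
c = rng.uniform(-0.4, 0.0, N)           # true log-cosine of local incidence angle
sig_w = rng.uniform(0.05, 0.5, N)       # pixel-level standard deviations (illustrative)
sig_c = rng.uniform(0.01, 0.1, N)
sig_s = rng.uniform(0.05, 0.3, N)
w_obs = w + sig_w * rng.normal(size=N)  # noisy observations of each quantity
c_obs = c + sig_c * rng.normal(size=N)
s_obs = (l_true + a_true * w + n_true * c) + sig_s * rng.normal(size=N)

def effective_variance_fit(s, w, c, sw, sc, ss, iters=30):
    """Approximate minimisation of (2): each residual is divided by the
    effective variance a^2*sw^2 + n^2*sc^2 + ss^2, which is frozen at the
    current (a, n) before every weighted least-squares step."""
    l, a, n = 0.0, 1.0, 1.0                       # crude initial guess
    A = np.column_stack([np.ones_like(w), w, c])  # design matrix [1, w, c]
    for _ in range(iters):
        var = a**2 * sw**2 + n**2 * sc**2 + ss**2
        wgt = 1.0 / np.sqrt(var)
        (l, a, n), *_ = np.linalg.lstsq(A * wgt[:, None], s * wgt, rcond=None)
    return l, a, n

l_est, a_est, n_est = effective_variance_fit(s_obs, w_obs, c_obs,
                                             sig_w, sig_c, sig_s)
print(l_est, a_est, n_est)
```

As the abstract notes, pixels with large local variability get small weights 1/var and therefore play a lesser role in the parameter estimation.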
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F2)

Presentation: Heritage and post-launch plans for ESA BIOMASS Campaigns

Authors: Dr. Tânia GD Casal, Dr. Bjorn
Affiliations: European Space Agency (ESA)-ESTEC
In the framework of its Earth Observation Programmes, the European Space Agency (ESA) has carried out ground-based and airborne campaigns since 1981 to support geophysical algorithm development, calibration/validation, simulation of future spaceborne Earth observation missions, and applications development related to land, oceans and the atmosphere. ESA campaigns have been an essential part of the preparation of new Earth Observation missions, as well as of the independent validation of their measurements and the quantification of error sources. The EE7 BIOMASS satellite mission, with a planned launch in April 2025, is ESA’s Forest Mission. BIOMASS will carry a novel P-band synthetic aperture radar, designed to deliver crucial information about the state of our forests and how they are changing, and to further our knowledge of the role forests play in the carbon cycle. The presentation will focus on examples of the crucial role of campaigns in supporting the development of BIOMASS retrieval algorithms, as well as on the planned post-launch validation activities in Gabon and French Guiana. All ESA campaign datasets are freely available to the community at https://earth.esa.int/eogateway/search?category=Campaigns
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F2)

Presentation: Mapping Forest Structure Changes at P-Band: Results from the GABONX / AfriSAR-2 Campaign and BIOMASS Perspectives

Authors: Matteo Pardini, Roman Guliaev, Noelia Romero-Puig, Konstantinos Papathanassiou, Irena Hajnsek
Affiliations: German Aerospace Center (DLR)
The BIOMASS mission, selected as the ESA 7th Earth Explorer mission in 2013, will implement fully polarimetric synthetic aperture radar (SAR) interferometric and tomographic measurements at P-band to map forest height, biomass, disturbances, and the associated carbon fluxes globally over tropical forests. In the frame of the BIOMASS mission studies, the AfriSAR campaign was successfully carried out in 2015 and 2016 as a joint effort among agencies (the European ESA, ONERA, DLR, the US NASA, and the Gabonese AGEOS) over tropical forest sites in Gabon [1]. The acquired polarimetric interferometric and tomographic data sets, lidar full waveforms and ground measurements supported improvements to, and in some cases full development of, innovative height, structure and biomass estimation algorithms (see e.g. [2]-[6]). In May 2023, the GABONX / AfriSAR-2 campaign was deployed as a successor to the AfriSAR campaign over the same sites, to assess the imprint of seven years of tropical forest change on SAR measurements. Supported by ESA, the campaign was led by DLR and carried out in cooperation with AGEOS and NASA. SAR data sets at L- and P-band were acquired in the same flight configurations as in 2016. At the same time, the NASA Goddard Space Flight Center conducted flights over the same test areas with their 'Land, Vegetation, and Ice Sensor' (LVIS) laser altimeter system, providing very precise information about forest structure and underlying topography. Together, these data sets form a unique basis for investigating the characterization of forest structural changes with SAR measurements at L- and P-band, and they are critical for characterizing BIOMASS capabilities. 
In this work, we report experimental results relating the changes from the AfriSAR to the GABONX / AfriSAR-2 data sets to changes in forest structure on the ground, in terms of P-band polarimetric interferometric coherences and tomographic SAR vertical reflectivity profile reconstructions across sites. The ability to detect and characterize changes will also be evaluated for forest structural parameters, including heights and heterogeneity indices. Potentials and challenges in terms of algorithms and related observation spaces will be addressed, and implications for the BIOMASS mission discussed. References: [1] I. Hajnsek, M. Pardini, M. Jäger, R. Horn, J. S. Kim, H. Jörg, and K. Papathanassiou, Technical assistance for the development of airborne SAR and geophysical measurements during the AfriSAR campaign, Final technical report, ESA contract no. 4000114293/15/NL/CT. Available at: https://earth.esa.int/documents/10174/134665/AfriSAR-Final-Report. [2] M. Pardini, M. Tello, V. Cazcarra-Bes, K. P. Papathanassiou and I. Hajnsek, "L- and P-Band 3-D SAR Reflectivity Profiles Versus Lidar Waveforms: The AfriSAR Case," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 10, pp. 3386-3401, Oct. 2018. [3] V. Wasik, P. C. Dubois-Fernandez, C. Taillandier and S. S. Saatchi, "The AfriSAR Campaign: Tomographic Analysis With Phase-Screen Correction for P-Band Acquisitions," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 10, pp. 3492-3504, Oct. 2018. [4] N. Labrière et al., "In Situ Reference Datasets From the TropiSAR and AfriSAR Campaigns in Support of Upcoming Spaceborne Biomass Missions," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 10, pp. 3617-3627, Oct. 2018. [5] M. Mariotti D’Alessandro and S. 
Tebaldini, "Digital Terrain Model Retrieval in Tropical Forests Through P-Band SAR Tomography," in IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 9, pp. 6774-6781, Sept. 2019. [6] F. Banda, D. Giudici, T. Le Toan, M. Mariotti d’Alessandro, K. Papathanassiou, S. Quegan, G. Riembauer, K. Scipal, M Soja, S. Tebaldini, et al. “The BIOMASS Level 2 Prototype Processor: Design and Experimental Results of Above-Ground Biomass Estimation”, Remote Sens. 2020, 12, 985.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F2)

Presentation: The Biomass Mission Algorithm & Analysis Platform (MAAP) for Enabling Open Science

Authors: Dr. Clement Albinet, Cristiano Lopes, Muriel Pinheiro, Tamara Queune, Björn Rommen, Klaus Scipal, Roberto Alacevich
Affiliations: ESA, ESA
ABSTRACT

To help scientists in the above-ground biomass community and to support the science behind the upcoming BIOMASS mission, ESA is developing the Mission Algorithm and Analysis Platform (MAAP). The MAAP is a cloud computing platform that will include not only data (satellite, airborne and in situ data and products), but also computing capabilities and sets of tools and algorithms developed to support this specific field of research. To ensure that users can collaborate across the platform and access needed resources, the MAAP requires all data, algorithms, and software to conform to open-access and open-source policies. In addition to aiding researchers, the MAAP will pave the way for future platforms at ESA that will focus on sharing data, science algorithms and compute resources to foster and accelerate scientific research.

Index Terms— Synthetic Aperture Radar, Biomass retrieval, Open-Science, Open-Source, Cloud Computing

1. INTRODUCTION

Selected as the European Space Agency’s seventh Earth Explorer in May 2013, the BIOMASS mission will provide crucial information about the state of our forests and how they are changing [1]. This mission is being designed to provide, for the first time from space, P-band Synthetic Aperture Radar measurements to determine the amount of biomass and carbon stored in forests [2]. The data will be used to further our knowledge of the role forests play in the carbon cycle.

2. CLOUD COMPUTING SCIENTIFIC PLATFORMS

Earth Observation platforms play an essential role in the new European Space Agency Earth Observation strategy, implementing a significant evolution of the ground segment. Earth Observation platforms are gradually appearing in Europe, at national level or at European scale, and outside Europe. For example, the Thematic Exploitation Platforms developed at the European Space Agency are a category of exploitation platforms dedicated to geophysical themes. 
The Thematic Exploitation Platforms are instrumental as pilots to formulate future technical and economic approaches for networking exploitation platforms (a European ecosystem of Earth Observation platforms), which is a central idea of the Earth Observation ground segment evolution.

3. BIOMASS MISSION ALGORITHM AND ANALYSIS PLATFORM (MAAP)

In this context of an innovative sensor and a changing ground segment, the concept of a Mission Algorithm and Analysis Platform dedicated to the BIOMASS mission, the NASA-ISRO SAR (NISAR) mission [3] and the NASA Global Ecosystem Dynamics Investigation (GEDI) mission [4] is proposed. Developed collaboratively between ESA and NASA, this Mission Algorithm and Analysis Platform will implement, as part of the payload data ground segment, a virtual, open and collaborative environment. The goal is to bring together data centres (Earth Observation and non-Earth Observation data), computing resources and hosted processing, collaborative tools (processing tools, data mining tools, user tools, …), concurrent design and test bench functions, accounting tools to manage resource utilisation, communication tools (social network) and documentation. This platform will give the opportunity, for the first time, to manage the community of users of the BIOMASS mission through this innovative concept.

4. OPEN-SOURCE ALGORITHMS WITH BIOPAL

To ensure that users can collaborate across the platform and access needed resources, the MAAP requires all data, algorithms, and software to conform to open-access and open-source policies. As an example of best collaborative and open-source practices, most of the BIOMASS Processing Suite (BPS) will be made openly available within the MAAP. This Processing Suite contains all elements needed to generate the BIOMASS upper-level data products and is currently in development under the umbrella of the open-source project BioPAL [5]. 
BioPAL is developed in a coherent manner, with a modular architecture and reproducible software design in place. BioPAL aims to factorize the development and testing of common elements across different BIOMASS processors. The architecture of this scientific software makes lower-level bricks and functionalities available through a well-documented Application Programming Interface (API) to foster the reuse and continuous development of processing algorithms by the BIOMASS user community. This API will greatly simplify the use of the BIOMASS Processing Suite (BPS) on the MAAP.

5. OPEN REFERENCE CAL/VAL DATA WITH GEO-TREES

In addition to open satellite data and open-source algorithms, open reference data are needed for Calibration and Validation. GEO-TREES is composed of Biomass Reference Measurement sites: in situ forest measurement sites with a common standard for high-quality data acquisition, transparent measurement protocols, long-term monitoring, and measurements traceable to SI units. GEO-TREES will be established through collaboration with existing international networks of high-quality forest plots that use standard forest monitoring protocols. GEO-TREES will be centered on 100 core sites representing forests around the world, with strong priority placed on the tropics. Core sites will host long-term intensive measurements of forest biomass. An additional 200 supplementary sites will be used to ensure full representation of the main environmental and anthropogenic dimensions over which forests occur globally. Supplementary sites will involve less intensive biomass sampling, but will provide a strategy for gap-filling under-represented areas. The raw biomass data collected by the GEO-TREES project will be made publicly accessible through the GEO-TREES web portal [6] and through the MAAP. Delivering these high-quality data over a sustained period in dozens of countries requires skilled teams hosted by institutional partners. 
The labor and skill demanded for the groundwork are high, and conditions of work are often insecure and difficult. It follows that, for in situ data to be shared openly, GEO-TREES partners must be fairly and systematically funded, with adequate provision of training and career development. The intellectual property of the primary stem and species data remains with the principal investigators of each site. GEO-TREES implements the recommendations of the CEOS Aboveground Biomass Land Product Validation protocol [7].

6. CONCLUSION

The MAAP will make connections between data, algorithms, software and results. In addition, the MAAP brings together data from various spaceborne missions and organizations to support the development of global biomass maps. Moreover, GEO-TREES, BioPAL and the concept of the Product Algorithm Laboratory make it easier to reproduce results and build on existing work. Finally, the MAAP concept will become the baseline for future ESA missions.

7. REFERENCES

[1] T. Le Toan, S. Quegan, M. Davidson, H. Balzter, P. Paillou, K. Papathanassiou, S. Plummer, F. Rocca, S. Saatchi, H. Shugart and L. Ulander, “The BIOMASS Mission: Mapping global forest biomass to better understand the terrestrial carbon cycle”, Remote Sensing of Environment, Vol. 115, No. 11, pp. 2850-2860, June 2011. [2] T. Le Toan, A. Beaudoin, et al., “Relating forest biomass to SAR data”, IEEE Transactions on Geoscience and Remote Sensing, Vol. 30, No. 2, pp. 403-411, March 1992. [3] P.A. Rosen, S. Hensley, S. Shaffer, L. Veilleux, M. Chakraborty, T. Misra, R. Bhan, V. Raju Sagi and R. Satish, "The NASA-ISRO SAR mission - An international space partnership for science and societal benefit", IEEE Radar Conference (RadarCon), pp. 1610-1613, 10-15 May 2015. [4] https://science.nasa.gov/missions/gedi [5] BioPAL project site: http://www.biopal.org [6] GEO-TREES web portal: https://geo-trees.org [7] Duncanson, L. et al. (2021). 
Aboveground Woody Biomass Product Validation Good Practices Protocol. Version 1.0. In L. Duncanson et al., Good Practices for Satellite-Derived and Product Validation, (p. 236): Land Product Validation Subgroup (WGCV/CEOS), doi:10.5067/doc/ceoswgcv/lpv/agb.001
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F2)

Presentation: The role of BIOMASS in carbon cycle and climate science and policy

Authors: Shaun Quegan, Thuy Le Toan, Stefano Tebaldini, Lars Ulander, Jérôme Chave, Mathew Williams, Kostas Papathanassiou, Jørgen Dall, Philippe Paillou, Prof Markus Reichstein, Dr Nuno Carvalhais, Sassan Saatchi, Hank Shugart, Klaus Scipal, Björn Rommen
Affiliations: University of Sheffield, CESBIO, Politecnico di Milano, Chalmers University of Technology, CNRS, University of Edinburgh, DLR, DTU, University of Bordeaux, Max Planck Institute for Biogeochemistry, NASA JPL, University of Virginia, ESA/ESRIN, ESA/ESTEC
The original 2005 proposal stated that the BIOMASS P-band SAR mission would provide crucial data needed to reduce the uncertainty in the worldwide spatial distribution and dynamics of forests in order to improve our present assessments and future projections of the carbon cycle, with important ramifications for the Earth system, including the water cycle and human use of the biosphere. In the intervening 20 years, it might have been expected that this uncertainty would have decreased, but the annual global carbon budgets produced by the Global Carbon Project show that this has not happened. This is due to three inter-linked underlying factors: uncertainties about the reliability of existing estimates of carbon in vegetation, especially dense forests; little use of satellite-based estimates of forest carbon in carbon models; and very limited exploitation of space-based estimates of biomass in reporting of national carbon budgets to the IPCC. The impact of BIOMASS for science and policy will come through how it alters these three factors. First and foremost, BIOMASS aims to provide reliable repeated maps of the biomass of dense forests at a scale of 4 ha. Where this has most impact is in providing significantly improved estimates of the Land Use Change carbon flux, which has the largest absolute uncertainty of any of the global carbon balance terms and by far the largest relative uncertainty (in excess of 50%). This has knock-on effects, because an error in the Land Use Change term must be balanced by errors in other terms. Hence it creates uncertainty in, for example, the magnitude of the land take-up of carbon, and the processes driving this take-up. The impact on models can be through testing, benchmarking and as a driver. Few of the large-scale climate models use biomass data to test performance, despite it being a key indicator of whether the internal carbon dynamics are correct. 
In particular, in combination with photosynthesis data, it can be used to assess whether forest mortality is correctly modelled. This is a key climate process since it controls the release of forest carbon to the atmosphere. BIOMASS data can also be more directly exploited in carbon models through data assimilation. This is a much more powerful way to exploit the BIOMASS time series since it involves the whole set of model parameters being adjusted to achieve consistency with the biomass data and other input data streams. Accuracy of the biomass estimates is clearly essential if nations are to use BIOMASS data, but this is far from the whole story. Countries have existing structures for reporting to the IPCC, and space data on biomass or forest activity will only be used if they are relevant to these structures. Issues that affect this include calibration of maps for national circumstances, spatial and temporal resolution, temporal coverage, sensitivity to biomass density, accuracy (including measures of uncertainty and reliable validation), consistency over time and the longer-term perspective, completeness of GHG inventories, capacity building, ownership and sovereignty, and integration of data from multiple sources. A significant task once BIOMASS is operational is addressing these potential barriers to its exploitation. This will form part of a more general drive by the Committee on Earth Observation Satellites (CEOS) to undertake activities in support of the Paris Agreement.

Wednesday 25 June 08:30 - 10:00 (Hall F2)

Presentation: FRM4Biomass: Providing Fiducial Reference Measurements of above ground forest biomass and height.

Authors: Jerome Chave, Araza Arnan, Arthur Bailly, Martina Dürauer, Fabian Fischer, Andres Gonzalez-Moreno, Martin Herold, Tommaso Jucker, Nicolas Labriere, Ayala Loisel, Dominique Lamonica, Georgia Pickavance, Maxime Rejou-Mechain, Dmitry Schepaschenko, Klaus Scipal, Anto Subash
Affiliations: Université Toulouse 3 – Paul Sabatier, Wageningen University, UMR AMAP, International Institute of Applied System Analysis, University of Bristol, GFZ, University of Leeds, European Space Agency
Space agencies, including ESA, have made significant investments in Earth Observation missions to map forest biomass across continents in support of climate science and carbon markets. The BIOMASS mission is planned for launch in Q1 2025 and will represent a key advance in our ability to provide quality biomass and carbon estimates. Besides ESA, several other space agencies have started comparable initiatives: NASA launched the GEDI mission in 2018 and, together with ISRO, is preparing for the launch of the NISAR mission in 2025, while JAXA operates the ALOS SAR series. But satellites alone cannot produce accurate biomass maps – all satellite-based monitoring systems fundamentally rely on field data collected on the ground to train models and validate estimates, to ensure global forest biomass estimates can be used with confidence. To address the need for a rigorous validation framework, ESA established the concept of Fiducial Reference Measurements (FRM), defined as "the suite of independent ground measurements that provide the maximum Return on Investment for a satellite mission by delivering, to users, the required confidence in data products, in the form of independent validation results and satellite measurement uncertainty estimation, over the entire end-to-end duration of a satellite mission." The characteristics of FRMs are: (1) documented unit traceability using metrology standards; (2) independence from the satellite geophysical retrieval process; (3) provision and maintenance of an uncertainty budget for all FRM instruments and derived measurements; (4) definition, publication and compliance to FRM protocols and community-wide management practices (measurement, processing, archive, documents etc.); (5) data open and freely available for independent verification; (6) capacity to determine via independent validation activities the in-orbit uncertainty characteristics of satellite geophysical measurements.
In this framework, the FRM4Biomass project establishes the protocols and tools to ensure that ground-based measurements of above ground biomass and forest height are made available and their use in EO product validation adhere to the above principles. In this contribution we will report on the FRM4Biomass objectives and their status.

Wednesday 25 June 08:30 - 10:00 (Room 0.11/0.12)

Session: A.08.10 Coastal Ocean and Land-sea interaction - PART 1

Around 10% of the world's population lives in coastal regions. These areas host a wide range of socio-economic activities – from fisheries and aquaculture to tourism and ecosystem preservation. They are, however, highly vulnerable to climate change, being particularly affected by sea level rise and erosion, and subject to extreme events such as storm surges and floods.
They also play a crucial role in the Earth system as the interface between land and ocean, and are of fundamental importance for fluxes of carbon, nutrients, pollutants and freshwater along the land-sea continuum.

This Session welcomes contributions on comprehensive data-driven reconstructions of processes in coastal regions, based on the latest EO capabilities and the exploitation of the growing set of sensors offering high-resolution data over coastal domains and interfaces. Although satellite EO has a prominent role, a complete representation of the overall 4D processes can only be achieved by a consistent, synergistic and multi-modal use of complementary assets such as in-situ measurements, modelling experiments and AI.

Cross-disciplinary thematic topics that are of interest for this Session are:
• Coastal ocean dynamics and sea level variability
• Extremes and Geohazards – e.g., flash floods, storm surges and coastal erosion.
• Multi-stressors – such as marine heatwaves (MHW) and deoxygenation.
• Water Quality – including pollution and harmful algae proliferation.
• Ocean carbon and gas fluxes – with special emphasis on high-resolution carbon processes and blue carbon.
• Air-sea-land-ice fluxes/exchanges – with a characterization of exchanges of heat, momentum and freshwater, including atmospheric nutrient deposition and its influence on ocean biogeochemistry and biology.

Wednesday 25 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: A Bias Correction For ACOLITE/DSF Processing Of Sentinel-2 Over Clear Waters

Authors: Quinten Vanhellemont, Kevin Ruddick
Affiliations: RBINS
The Dark Spectrum Fitting (DSF) algorithm as implemented in ACOLITE (Vanhellemont and Ruddick, 2018; Vanhellemont, 2019) has been shown to perform well for turbid coastal water applications, but tends to show a positive bias in lower-turbidity waters. The positive bias on the water reflectance results from underestimation of the aerosol optical thickness, and is of the order of the water signal in low-turbidity waters. This effect is especially present for sensors designed for land applications, such as Landsat and Sentinel-2, which have lower signal-to-noise requirements. For processing of Sentinel-2 data, removing the shortwave infrared (SWIR) bands from the DSF generally improved the results considerably (Vanhellemont, 2020). Without SWIR bands, however, the DSF shows degraded performance over very turbid waters, i.e. where all visible to near-infrared bands have a non-negligible water-leaving signal. For global automated processing, covering a wide range of water reflectance targets, the band set used in the DSF must be as general as possible, and hence this bias must be corrected. In this presentation, we show that this bias arises from (1) the construction of the DSF, and (2) the noise level in the satellite imagery. We propose a simple but physics-based bias correction for the Sentinel-2 imagers, established through on-orbit noise characterisation. The sensor noise is characterised by sampling top-of-atmosphere reflectance over targets that are stable in terms of both surface and atmosphere. A per-band noise estimate is presented, and the bias correction is integrated in the DSF. Validation using WATERHYPERNET in situ measurements shows improved performance after application of the bias correction, and we show that the correction allows the SWIR bands to be included in the ACOLITE/DSF processing of Sentinel-2 imagery. We propose the general application of the correction for water applications relying on ACOLITE/DSF processing.
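The noise-driven bias described in this abstract can be illustrated with a minimal numerical sketch. This is not the ACOLITE implementation: the stable-target sampling strategy and the proportionality constant `k` are simplifying assumptions made here for illustration.

```python
import numpy as np

def per_band_noise(toa_stack):
    """Estimate per-band sensor noise as the temporal standard deviation
    of top-of-atmosphere reflectance sampled over a stable (invariant)
    target, i.e. one that is stable in both surface and atmosphere.

    toa_stack: array of shape (n_dates, n_bands) over the target.
    """
    return np.std(toa_stack, axis=0)

def bias_corrected_reflectance(rho_w, noise, k=1.0):
    """Apply a first-order, noise-proportional bias correction.

    The dark-spectrum fit selects the darkest pixels, so sensor noise
    biases the aerosol optical thickness low and the retrieved water
    reflectance high; subtracting k * noise removes that offset
    (k stands in for the physics-derived scaling, assumed here).
    """
    return rho_w - k * noise
```

In the actual processor the correction is derived per band from the on-orbit noise characterisation rather than a single tunable `k`.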

Wednesday 25 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Mapping the Arctic coastal zone – subsurface topography and its dynamics

Authors: Mikkel H. Bojesen (PhD, Senior Business Development Manager), Lisbeth T. Nielsen (PhD, Data Science Engineer), Niels H. Broge (PhD, Senior Business Development Manager), Professor Ole B. Andersen
Affiliations: DHI A/S, Technical University of Denmark - Department of Space Research and Technology Geodesy and Earth Observation
Human life across the Arctic is closely associated with activities throughout the coastal zone, and maritime activities are an embedded part of life in Arctic communities. They play a vital role in people's livelihoods, providing income, connectivity, and societal and cultural belonging. Consequently, navigating and understanding the dynamics of the largely unmapped archipelagos and other coastal waters across the Arctic region remains a key skill for local communities. Hence, an immense environmental knowledge base rests with local navigators, e.g. fishermen, hunters, sea pilots and local government officials. This knowledge base has been built up over decades and passed on within the local communities; much of it is therefore internal, undocumented and unavailable to a wider range of stakeholders. As climate change unfolds across the Arctic, at four times the global average rate, these regions are increasingly vulnerable to its consequences, including sea level rise, erosion, and extreme events such as storm surges, floods and rock landslides. The inherited local knowledge of the coastal waters is consequently becoming increasingly diluted. At the same time, human activity is increasing across the Arctic region in terms of tourism, fisheries, mineral exploration, and shipping. While this increased global attention and activity takes place in the maritime domain, official nautical charts have not been able to map the shallow and highly dynamic coastal zone using traditional surveying methods, and large areas remain uncharted or covered by charts that are deficient and misplaced. Consequently, the maritime activities taking place carry an increased level of risk, threatening human lives, ecosystems and local communities.
To overcome this growing information need and knowledge gap, while building on state-of-the-art research and innovation, satellite-derived bathymetry (SDB) offers a cost-efficient data source for coastal Arctic bathymetry. These data are especially relevant in shallow waters (0-10 meters) and remote areas where traditional survey methods are particularly challenged. SDB refers to a set of methods and techniques for deriving water depths from satellite data, e.g. optical Sentinel-2 data, using either radiative transfer modelling or empirical models. Recent research has found that deep learning models can outperform traditional methods for SDB, since they may be better at handling the complex structures found in the coastal zone and at generalizing across different environments and conditions; traditional machine learning models also do not consider the contextuality of observations to the same degree. Tides play an important part in understanding the dynamic nature of Arctic coastal waters. Estimating ocean tides in the coastal region has challenged tide modelers for decades. The SWOT satellite, launched in 2022, provides the opportunity to derive estimates of coastal tides at unprecedented spatial scales thanks to its innovative wide-swath measurement principle, particularly in complex coastal regions. In this presentation we present the most recent advances in applying deep learning models and traditional machine learning models to optical Sentinel-2 data to obtain scalable mapping of coastlines and satellite-derived bathymetry under Arctic conditions, with examples from Greenlandic fjord systems. We also present novel applications of SWOT satellite data to estimate tidal conditions in the same fjord systems, thereby forming a comprehensive model setup for mapping the Arctic coastal zone.
During this presentation we will report on the performance of the models and discuss possibilities for further upscaling, with the goal of paving the way for pan-Arctic coastal mapping.
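As background to the empirical SDB models mentioned in this abstract, the classic Stumpf log-ratio approach can be sketched as follows. This is an illustrative baseline only, not the deep learning models presented in the talk; the function names and the synthetic calibration are assumptions made here.

```python
import numpy as np

def stumpf_ratio(blue, green, n=1000.0):
    """Log-ratio predictor of Stumpf et al. (2003): ln(n*R_blue)/ln(n*R_green).
    Blue light attenuates more slowly with depth than green, so the ratio
    increases with water depth over a uniform bottom."""
    return np.log(n * blue) / np.log(n * green)

def fit_sdb(blue, green, depths):
    """Calibrate the two empirical coefficients (slope m1, offset m0)
    against known depths, e.g. from sonar or lidar soundings."""
    x = stumpf_ratio(blue, green)
    m1, m0 = np.polyfit(x, depths, 1)
    return m1, m0

def predict_depth(blue, green, m1, m0):
    """Apply the calibrated linear model to new reflectance pixels."""
    return m1 * stumpf_ratio(blue, green) + m0
```

Deep learning SDB models replace this single hand-crafted predictor with learned, spatially contextual features, which is the advance the abstract describes.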

Wednesday 25 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Mapping and Monitoring the Valencia Flood Event During 2024 Using Sentinel-2 Data and Machine-learning Based Turbidity Models

Authors: Ms. Masuma Chowdhury, Dr. Ignacio de la Calle, Dr. Irene Laiz, Dr. Ana B. Ruescas
Affiliations: Departamento de Física Aplicada, INMAR, Universidad de Cádiz, (CEI·MAR), Quasar Science Resources, S.L., Image Processing Laboratory, University of Valencia
Flood events pose significant challenges to both public health and the environment, as they degrade water quality, disrupt ecosystems, and facilitate the spread of waterborne diseases due to increased levels of suspended sediments, nutrients, and pollutants. Monitoring turbidity during such events is vital for assessing the extent of water contamination, understanding sediment transport dynamics, and guiding emergency response efforts to mitigate ecological damage and health risks. In this study, we aim to evaluate the application of a global turbidity model, developed using Sentinel-2 satellite data and machine-learning techniques, to map and monitor turbidity during the October 2024 flood event in Valencia, Spain. The model has been developed based on a comprehensive dataset including hyperspectral in-situ measurements (GLORIA), turbidity data, and Sentinel-2 observations, enabling robust estimation across a wide range of turbidity levels (0-2500 FNU). Feature selection methods evaluated single bands, band ratios, and normalized difference indices, while machine-learning algorithms – including elastic net, random forest, gradient boosting, and extreme gradient boosting – were employed for model development. Gradient boosting outperformed other techniques, achieving high accuracy in turbidity prediction. The model has been validated across diverse optical water types (i.e., blue, green, and brown waters) and water bodies (rivers, lakes, estuaries, and coastal oceans), demonstrating its versatility and scalability (Chowdhury et al., in preparation). The proposed application of this global model to the Valencia flood is expected to provide valuable insights into the spatial extent and severity of turbidity variations, sediment transport, and water quality degradation during the event. 
This study highlights the potential of combining satellite observations with advanced machine-learning models for real-time environmental monitoring, offering critical tools for disaster response and water resource management.
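Gradient boosting, the best-performing technique in this study, can be sketched from scratch with stump learners on synthetic reflectance features. This is illustrative only: the authors' model, feature selection, and GLORIA training data are not reproduced here.

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-feature threshold split (depth-1 tree) in squared error."""
    best = None
    for j in range(x.shape[1]):
        for t in np.quantile(x[:, j], [0.25, 0.5, 0.75]):
            left = x[:, j] <= t
            if left.all() or not left.any():
                continue
            lv, rv = residual[left].mean(), residual[~left].mean()
            err = np.sum((np.where(left, lv, rv) - residual) ** 2)
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def predict_stump(stump, x):
    j, t, lv, rv = stump
    return np.where(x[:, j] <= t, lv, rv)

def gradient_boost(x, y, n_rounds=100, lr=0.1):
    """Least-squares gradient boosting: each stump is fitted to the
    residuals of the current ensemble prediction."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)
        pred = pred + lr * predict_stump(stump, x)
        stumps.append(stump)
    return y.mean(), stumps, lr

def predict(model, x):
    base, stumps, lr = model
    return base + lr * sum(predict_stump(s, x) for s in stumps)
```

In practice one would use a tuned library implementation (e.g. scikit-learn or XGBoost, as the abstract's "extreme gradient boosting" suggests) with band ratios and normalized difference indices as features.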

Wednesday 25 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Mediterranean Coastal Water Boundaries Effectively Set by Remote Sensing Data

Authors: Eva Flo, Carlos Rodero, Cristina González-Haro, Kaori Otsu, Joaquim Ballabrera, Paula Catasús, Manel Grifoll, Jordi Isern-Fontanet, Xavier Garcia, Lluís Pesquer, Jaume Piera
Affiliations: Barcelona Expert Center on Remote Sensing (BEC), Marine Sciences Institute (ICM), Spanish National Research Council (CSIC), EnvironMental and sustainaBility participatory InforMatiOn Systems (EMBIMOS), Marine Sciences Institute (ICM), Spanish National Research Council (CSIC), GRUMETS research group, Centre de Recerca Ecològica i Aplicacions Forestals (CREAF), Barcelona Innovative Transportation (BIT), Polytechnic University of Catalonia (UPC), Institut Català de Recerca per a la Governança del Mar (ICATMAR), Marine Geosciences, Marine Sciences Institute (ICM), Spanish National Research Council (CSIC), Physical and Technological Oceanography, Marine Sciences Institute (ICM), Spanish National Research Council (CSIC)
Although coastal zones comprise just 20% of the Earth's surface, this transitional area between land and sea holds significant environmental, economic, and social importance. On the environmental level, coastal zones are among the most biodiverse and biologically productive ecosystems: their waters contain 90% of all marine species, and their phytoplankton contribute an annual net primary production of 50 Pg of C. On the socioeconomic level, coastal zones are considered "golden areas," supporting key activities such as agriculture, fisheries, industry, urbanization, transportation, and tourism. Additionally, coastal zones are home to 45% of the global population. While in the year 2000 the population living at low elevations (<10 m), primarily in coastal areas, was 625.2 million, this figure is projected to increase to 938.9 million by 2030 and 1.318 billion by 2060. Intense human activities and high population densities generate substantial nutrient loads that reach coastal waters (CW) via freshwater pathways such as rivers, surface runoff, and groundwater. These inputs alter the physical, chemical, and biological characteristics of CW, creating nutrient gradients that extend from land to ocean. In the outermost CW, located at the seaward boundary, nutrients are diluted and these waters generally resemble the open ocean, which exhibits relatively homogeneous variability. Closer to the shoreline, however, nutrient inputs from land are less diluted. As a result, coastal inshore waters (CIW) exhibit higher nutrient concentrations, greater spatial and temporal heterogeneity, more dynamic processes, and steeper nutrient gradients compared to the open ocean. The uniqueness of the CIW ecosystem in the Mediterranean Sea can be attributed to several factors. First, the Mediterranean Sea is microtidal, with a low tidal range (20–40 cm) that limits the dilution of continental inflows. Second, its oligotrophic nature amplifies the contrast between open waters and CIW.
Third, the high population density along much of the Mediterranean coastline (20–1,000 inhabitants/km2) significantly affects the CIW through substantial continental inputs. As a result, the characteristics of the Mediterranean CIW are more pronounced compared to those of other seas, leading to steeper gradients between CIW and open waters. One of the most visible effects of nutrient-rich inflows from land on CW is eutrophication. According to Ferreira et al. (2011), eutrophication is "a process driven by the enrichment of water by nutrients, especially compounds of nitrogen and/or phosphorus, leading to increased growth, primary production and biomass of algae; changes in the balance of organisms; and water quality degradation. The consequences of eutrophication are undesirable if they appreciably degrade ecosystem health and/or the sustainable provision of goods and services". Despite comprising less than 1% of CW, CIW are among the most threatened ecosystems globally due to eutrophication. Several policies have been enacted in Europe with the aim of restoring and protecting waters, such as the Water Framework Directive (WFD; 2000/60/EC) and the Marine Strategy Framework Directive (MSFD; 2008/56/EC). Both directives are based on the driver-pressure-state-impact-response (DPSIR) framework, which allows the identification of environmental problems and their causes. This link in turn enables the proposal of management plans, which generally follow the Ecosystem Approach and aim for environmental sustainability, social well-being, and economic prosperity. Eutrophication assessments rely on chlorophyll-a concentration (Chl-a) data from water bodies. Traditionally, these assessments are conducted using in-situ Chl-a measurements, typically expressed as mean or 90th percentile values.
However, ocean colour satellites, such as Sentinel-3 from the European Space Agency’s (ESA) Copernicus Programme, have enhanced these assessments by providing Chl-a maps with higher spatial and temporal resolutions (300 m) compared to in-situ data. These maps have improved assessment capabilities, particularly for open waters. CW, however, present greater variability and require even higher spatial resolution. Recent advancements, such as the Sentinel-2 satellite equipped with the MultiSpectral Instrument (MSI) sensor, now enable ocean colour mapping with very high spatial resolution (10–60 m). This advancement allows for more precise eutrophication assessments in Mediterranean CW, particularly within their CIW. One of the key challenges in assessing water bodies is defining and setting their boundaries. At the seaward boundary of CW, characteristics often closely resemble those of ocean waters. However, this boundary is inherently variable, influenced by factors such as distance from the shore (e.g., exclusive economic zones; EEZ), bathymetric features (e.g., continental shelf or slope), biotic distributions, hydrographic structures, and other processes. In contrast, the landward boundary of CW is more straightforward, aligning with the coastline. The most critical boundary for eutrophication assessments, however, is the division between CIW and outermost CW. Previous studies have defined this boundary as 200 m from the shoreline, based on in-situ data. These studies established a mean Chl-a concentration of 2.42 μg/L for CIW, which is an order of magnitude higher than the 0.37 μg/L observed in the outermost CW. The aim of this study is to evaluate the suitability of remote sensing data and methods for defining CW boundaries in the Mediterranean Sea, specifically focusing on the boundary between CIW and outermost CW. Sentinel-2 imagery was selected as the primary data source for this initial assessment. 
The ACOLITE processor was employed for atmospheric correction, using the Dark Spectrum Fitting approach, along with sun glint correction. For Chl-a retrieval, the OC3 algorithm was applied, utilizing the 443, 490, and 555 nm blue and green spectral bands. The Catalan Coast was chosen as the case study, as it represents a typical Mediterranean coastal zone. The selected Sentinel-2 tiles include 31TEG, 31TDG, 31TDF, 31TCF, and 31TBF. Various methodologies were tested, aiming to define boundaries using both in-situ and remote sensing data. One approach sought to establish a general boundary for the study area using in-situ data. For instance, a mean perpendicular transect was derived from 76 transects distributed along the coast, based on imagery collected from 2015 to 2018, resulting in a fixed boundary set at 126 m from the shoreline. However, given the extensive spatial coverage provided by remote sensing data, procedures targeting variable boundaries were preferred. These analyses utilized Sentinel-2 images collected from 2019 to 2024, aligning with the six-year assessment cycles stipulated by relevant directives for evaluating eutrophication. Accordingly, the boundary between CIW and outermost CW along the Catalan Coast varies depending on spatial coordinates. This variability is illustrated on a map, which highlights broader CIW areas where continental inflows, rich in freshwater and nutrients, are typically larger. Notable examples include the regions near the northern bays, the Barcelona metropolitan area, and the Ebre River mouth. Monitoring strategies under water-related policies must operate at local scales and should focus their eutrophication assessments on CIW, where a detailed understanding of Chl-a variability is essential. Therefore, even though CW boundaries are conventions, finer and more localized boundaries are more effective for assessing eutrophication.
The proposed variable boundary for CIW will directly contribute to the next implementation cycles of the WFD and the MSFD. Specifically, it will support the WFD’s Biological Quality Element for Phytoplankton and the MSFD’s Descriptor 5 on Eutrophication. Defining CW boundaries is particularly challenging compared to other surface waters, such as rivers or lakes, due to the absence of distinct hydromorphological features. Setting precise CIW boundaries is critical for indicators like Phytoplankton and Eutrophication because they are assessed throughout the entire water body, unlike other marine indicators (e.g., macroalgae or phanerogams), which are sampled only along the coast. The new CIW boundary, derived from satellite data, will refine the geographical delineation of CW bodies and aid in defining CIW for the next River Basin Management Plans (4th cycle; 2028–2033). This includes the intracommunity basins of Catalonia and the intercommunity Ebre River basin, managed by its Hydrographic Confederation. As the MSFD integrates WFD CW data, the new boundary will also inform the next Marine Strategy for the Levantine-Balearic Marine Demarcation (3rd cycle; 2024–2030). Beyond its regulatory applications, the boundary will be valuable to various stakeholders. These include the Catalan Water Agency (ACA), the Spanish Ministry for the Ecological Transition and the Demographic Challenge (MITERD), the European Commission (EC), the scientific community, and organizations related to the Mediterranean coastal zone. It will enhance understanding of Chl-a variability in CW and provide benefits to enterprises, environmental organizations, and the general public. Finally, the authors aim to ensure that the remote sensing procedures and results of this study are reproducible in accordance with the Findable, Accessible, Interoperable, and Reusable (FAIR) principles, thereby contributing to Open Science. 
Using the Mediterranean as a case study, we demonstrate the reusability of our workflow, so that other scientists can apply it to different study areas and compare results. To facilitate this, the procedures implemented in this study will be integrated into a virtual research environment (VRE) within an interactive platform. Our results will also be incorporated into a data management system, enabling their integration into the European Open Science Cloud (EOSC) and facilitating their use by the European network of environmental research infrastructures (ENVRI) community, especially by those involved in the ENVRI-FAIR project. This study is financed by the PITACORA project (TED2021-129776B-C21) and is a contribution to the MARS, DEMON, AquaINFRA, ENHANCE, and GOYAS projects, as well as to the TELEDETECT and ICATMAR organizations.
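For reference, the OC3 retrieval used in this study is a maximum band-ratio polynomial. A minimal sketch follows, using the MODIS-heritage OC3M coefficient set for illustration; operational coefficients are sensor-specific, so these exact values are an assumption for Sentinel-2 MSI bands.

```python
import numpy as np

# OC3M-style polynomial coefficients (MODIS heritage, illustrative here).
A = (0.2424, -2.7423, 1.8017, 0.0015, -1.2280)

def oc3_chla(rrs443, rrs490, rrs555):
    """Chl-a (mg m^-3) from the OC3 maximum blue/green band-ratio polynomial.

    r is the log10 of the larger of the two blue Rrs bands over the green
    band; Chl-a is 10 raised to a 4th-order polynomial in r.
    """
    r = np.log10(np.maximum(rrs443, rrs490) / rrs555)
    return 10 ** sum(a * r**i for i, a in enumerate(A))
```

Low blue/green ratios (green-dominated, productive water) yield high Chl-a, and high ratios (blue, oligotrophic water) yield low Chl-a, which is the contrast the CIW boundary mapping exploits.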

Wednesday 25 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: SAR-Based Assessment of Water Lines, Water Levels, and Damages During Hurricane Landfalls

Authors: Roland Romeiser, Michael Caruso, Hans Graber
Affiliations: University of Miami, Rosenstiel School, University of Miami, CSTARS
In the NOPP (National Oceanographic Partnership Program) project "Hurricane Coastal Impacts", more than 10 teams of atmospheric and oceanic modelers, observers, and remote sensing specialists from different institutions have worked together for four years to advance the state of the art in our understanding of hurricanes, with special emphasis on improved predictions of damages in coastal regions during landfall. Our team at the University of Miami has been responsible for the implementation and evaluation of improved spaceborne SAR-based capabilities to map water lines, flooded areas, and topographic changes as quickly as possible, using the SAR satellites available today. In this talk we will present results from three hurricane seasons (2022, 2023, 2024) and discuss our final conclusions regarding the suitability of current and future spaceborne SAR systems for hurricane damage assessments. The satellite systems utilized for the project include Sentinel-1, the RADARSAT Constellation, TerraSAR-X / Paz, COSMO-SkyMed, COSMO-SkyMed SG, Capella, ICEYE, and Umbra. The C-band systems were mostly used for wind retrievals over water from wide-swath imagery (a secondary objective of our contributions to the project), while the X-band image acquisitions focused on the coastal areas, usually in stripmap and occasionally in spotlight mode. The Capella images were acquired and used by another team at the University of Massachusetts at Amherst and will not be discussed in this presentation. Waterline retrievals can be done based on contrasts, preferably using images acquired in HH polarization and at high incidence angles, where the water is much darker than the land. However, we also have an unusual example of an ICEYE image taken at VV polarization and an incidence angle of only 12°, where the contrast is reversed. At less extreme incidence angles, the contrasts are less pronounced, and features such as vegetation lines may be misidentified as water lines.
To deal with this problem, our algorithm makes use of known terrain heights from high-resolution lidar data sets for coastal areas of the U.S., which are publicly available. The final waterline product is forced to be close to a local terrain height isoline, but with some freedom to account for possible changes in the topography. This makes the algorithm efficient and reliable and produces estimated water levels as an additional product, which can be very valuable for estimates of the extent of flooding on land and of the amount of water damages. A direct detection of flooding in SAR images of densely populated areas (narrow streets with many buildings) can be very difficult. We will show and discuss examples. For a rapid SAR-based mapping of land topography, we utilize the technique of stereo radargrammetry, which requires pairs of images taken at significantly different incidence angles and / or look directions. Compared to the well-established techniques of repeat-pass interferometry, which requires two coherent images, usually from the same satellite in consecutive repeat orbit cycles, which can be many days apart, and single-pass cross-track interferometry, which has been done with TanDEM-X, radargrammetry is a less demanding technique. The required image pairs can be acquired by different satellites of the same type, usually within 1-2 days. This makes the technique more suitable for time-critical applications. All radargrammetry pairs for our part of the project were acquired by the TerraSAR-X / Paz satellites and processed by Airbus Defence and Space, a formal industry partner of the project. We will show that with some tweaking and filtering, the radargrammetry results are in good general agreement with the well-validated lidar data. The effective spatial resolution is sufficient to show the general street patterns of cities. By combining the initial digital elevation models with building and street layouts from high-resolution optical imagery (e.g. 
Google Earth), we are able to improve the horizontal resolution and estimate the heights of buildings. If two radargrammetry image pairs are available from the days before and after a hurricane landfall, this technique can be sufficient for an identification of collapsed or washed-away buildings. Again, we will show and discuss examples.
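The water-level estimate described in this abstract, i.e. reading lidar terrain heights along the detected waterline, can be sketched as follows. This is a simplified stand-in for the authors' isoline-constrained algorithm; the helper name and the 4-neighbour boundary rule are assumptions made here, and edge wrap-around from `np.roll` is ignored for brevity.

```python
import numpy as np

def water_level_from_waterline(dem, water_mask):
    """Estimate the water level as the median terrain height along the
    detected land/water boundary (waterline) pixels.

    dem:        2D array of lidar terrain heights (m).
    water_mask: 2D boolean array, True where the SAR image is classified
                as water.
    """
    # Boundary pixels: water pixels with at least one 4-neighbour on land.
    land = ~water_mask
    boundary = water_mask & (
        np.roll(land, 1, 0) | np.roll(land, -1, 0) |
        np.roll(land, 1, 1) | np.roll(land, -1, 1)
    )
    # Median is robust to misclassified pixels (e.g. vegetation lines).
    return np.median(dem[boundary])
```

The actual product then forces the final waterline to stay close to the terrain-height isoline at this level, with some freedom to absorb real topographic change.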

Wednesday 25 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Advancing Monitoring of Complex Coasts: Harnessing Sentinel-2 and Landsat Data for Complementary Open-Source Approaches at Continental Scale


Authors: Stephen Sagar, Robbi Bishop-Taylor, Claire Phillips, Vanessa Newey, Rachel Nanson
Affiliations: Geoscience Australia
Since 2015, Digital Earth Australia (DEA) has been producing continental-scale Earth observation (EO) products for the historical characterisation and ongoing monitoring of the Australian coastal region. Our products and workflows focus on leveraging petabytes of analysis-ready EO data (ARD), innovative methods for dealing with noise and environmental variability, and a commitment to open-source code, methods, and data access. These products allow coastal managers and scientists to evaluate the socio-economic and environmental impacts from issues such as coastal erosion in this dynamic interface between land and sea, integrating these historical and ongoing data insights into future planning. Australia has one of the most varied coastal and tidal environments in the world, ranging from complex macro-tidal mudflats in the north to micro-tidal rocky shores and beaches in the south of the country. This variability presents an issue common to the application of EO products to coastal monitoring in many locations worldwide: a single methodology and product type may not be sufficient for comprehensive characterisation and monitoring, and a complementary approach must often be considered. This talk will introduce a new method for mapping intertidal topography at unprecedented spatial and temporal resolution. Our approach combines analysis-ready Landsat and Sentinel-2 satellite imagery with state-of-the-art global tide modelling to analyse patterns of tidal inundation across Australia’s entire intertidal zone. This approach is applied at the pixel level, allowing us to extract fine-scale morphological details that could not be resolved by previous waterline-based intertidal mapping methods. This pixel-based method greatly reduces the volume of satellite imagery required to generate accurate intertidal elevation models, enabling us to produce multi-temporal snapshots of Australia’s dynamic intertidal zone from 2016 to the present. 
Importantly, this method represents the first Open Data Cube (ODC) product to fully integrate ESA Sentinel-2 data with USGS Landsat data into a single derived product for coastal applications. We show the clear benefits of incorporating 10m resolution Sentinel-2 data into the product workflow, enabled by the common ARD workflow used in DEA and the consistency of cross-sensor surface reflectance data this provides. We also demonstrate the power of increasing the temporal density of satellite observations for coastal regions and change analysis, setting the scene for future missions such as Landsat Next and the next generation of Sentinels. We show that when paired with satellite-derived shoreline approaches, such as the DEA Coastlines product, our new DEA Intertidal product allows this critical transition zone between land and sea to be fully integrated into multi-temporal coastal change analysis at a continental scale. These products work in a highly complementary way, with each filling the gap for coastal regions and environments where the other may struggle to accurately capture the nature and magnitude of coastal change. For example, in macro-tidal regions with extensive muddy tidal flats, where DEA Coastlines may produce shorelines with high uncertainty, DEA Intertidal can better model and represent the dynamic nature of the shifting mudflats. Validation of both products is completed using a full suite of LiDAR, ground survey, drone, and photogrammetry data. Along with quantified uncertainty metrics, this provides confidence for coastal managers, scientists, and modellers looking to incorporate this data into decision-making processes and scientific workflows. Underpinning this novel approach, we will introduce work we have undertaken to optimise the use of multiple global tide models, based on the findings that no single tidal model performs best across these complex environments. 
In keeping with our open-source ethos, this work is published as a suite of tools in the ‘eo-tides’ Python package, and we show how this package can be used freely to improve coastal EO analysis. In an international context, one of the most significant developments in this product stream is the addition of tools designed to enable the application of these workflows to locations outside of Australia. Our approach is based on open-source data and code, allowing it to be applied to any freely available source of satellite data (e.g., cloud-hosted Microsoft Planetary Computer data) loaded using STAC metadata and the Open Data Cube. This provides new opportunities for deeper engagement with initiatives like the CEOS Coastal Observations Applications Services and Tools (COAST) VC, and stronger collaboration with our international partners like ESA and the USGS.
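The pixel-based inundation logic described above can be illustrated with a short sketch (a conceptual NumPy illustration only, not the DEA Intertidal algorithm or the eo-tides API; the function name and the quantile heuristic are assumptions):

```python
import numpy as np

def intertidal_elevation(tide_heights, is_wet):
    """Illustrative pixel-level elevation estimate for the intertidal zone.

    A pixel observed wet in a fraction f of satellite passes is, to first
    order, inundated whenever the tide exceeds its elevation, so its
    elevation corresponds to the (1 - f) quantile of the tide heights
    sampled at the observation times.
    """
    tide_heights = np.asarray(tide_heights, dtype=float)
    is_wet = np.asarray(is_wet, dtype=bool)
    wet_fraction = is_wet.mean()
    return float(np.quantile(tide_heights, 1.0 - wet_fraction))
```

Applied per pixel over a dense time series of tide-attributed observations, this kind of rule yields an elevation surface without requiring discrete waterline extraction.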
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall E1)

Session: A.10.02 Geodetic satellite missions and their applications - PART 1

Space-based geodetic techniques provide highly accurate observations of a vast range of Earth system parameters, such as the gravitational field, the shape of the Earth and the reference frame, as well as navigation and positioning. The past 20 years have seen rapid development in the field, including highly successful endeavours such as ESA's GOCE mission and the GRACE and GRACE-FO Third-Party Missions. As a direct consequence, the research and application community has developed a vast range of ideas building on the achievements of this golden age of space geodesy. This session deals with state-of-the-art results from the current set of geodetic satellite missions (e.g. GOCE, Swarm, GRACE, GRACE-FO), as well as plans and ideas for new initiatives in the field. The session invites contributions related to applications of geodetic satellite observations including, among others, gravimetry, altimetry, GNSS/VLBI, radio occultation, InSAR and GNSS reflectometry techniques. Applications of these missions go beyond traditional geodetic science, including climate, hydrology, solid Earth, cryosphere, oceanography and thermosphere science. Contributions relevant to the above-mentioned scientific disciplines, as well as operational applications from geodetic satellite missions, are also invited.

Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall E1)

Presentation: Revisiting drought cascades with daily satellite observations of soil moisture and terrestrial water storage

Authors: Daniel Blank, Annette Eicker, John T. Reager, Andreas
Affiliations: Hafencity University, NASA's Jet Propulsion Laboratory, Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences
Changes in soil water storage can be studied on a global scale using a variety of satellite observations. With active or passive microwave remote sensing, we can study the upper few centimeters of the soil, while satellite gravimetry allows us to detect changes in the entire column of terrestrial water storage (TWS). The combination of both types of data can provide valuable insight into hydrological dynamics at different soil depths, towards a better understanding of changes in subsurface water storage. We use daily Gravity Recovery and Climate Experiment (GRACE) data and satellite soil moisture data to identify extreme hydroclimatic events, focusing on prolonged droughts. To enhance our comprehension of the subsurface, we utilize not just surface soil moisture data but also integrate information on root zone soil moisture. Original level-3 surface soil moisture data sets from SMAP and SMOS are compared to post-processed level-4 data products (both surface and root zone soil moisture) and a multi-satellite product provided by the ESA CCI. The main goal of this study is to use remote sensing to investigate how drought affects water storage in the soil across different layers, from the top layer to the root zone and finally the whole water column of TWS. To identify different dynamics, we compute the rate of change of anomalies to assess how quickly the system accumulates storage deficits during drought conditions and recovers from them at different soil depths. Our investigation focuses on the temporal dynamics of near-surface soil moisture and TWS, highlighting the cascading effects that propagate from the surface into the subsurface. The results indicate characteristic patterns in the temporal dynamics of drought recovery at varying soil depths. Specifically, our analysis shows that surface soil moisture recovers faster than TWS, and that this recovery process slows down as soil integration depth increases.
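The recovery comparison between storage layers can be sketched with a simple metric (an illustrative construction, not the study's actual rate-of-change analysis; the function name, the 10% threshold and the synthetic anomalies are assumptions):

```python
import numpy as np

def recovery_time(anomaly, frac=0.1):
    """Days from the deepest storage deficit until the anomaly first
    returns to within `frac` of zero (illustrative drought-recovery
    metric for a daily anomaly time series)."""
    anomaly = np.asarray(anomaly, dtype=float)
    peak = int(np.argmin(anomaly))            # deepest deficit
    threshold = frac * anomaly[peak]          # e.g. 10 % of the deficit left
    after = anomaly[peak:]
    recovered = np.nonzero(after >= threshold)[0]
    return int(recovered[0]) if recovered.size else None
```

Comparing this metric for a surface soil moisture anomaly against a TWS anomaly over the same drought would reproduce the cascading behaviour described above: the shallower layer recovers first.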
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall E1)

Presentation: The combined satellite gravity field model GOCO2025s

Authors: Torsten Mayer-Guerr, Patrick Dumitraschkewitz, Sandro Krauss, Felix Oehlinger, Andreas Strasser, Barbara Suesser-Rechberger, Cornelia Tieber-Hubmann
Affiliations: Graz University of Technology
The GOCO (Gravity Observation Combination) series provides high-resolution static global gravity field models based on data from the dedicated satellite gravity missions CHAMP, GRACE, and GOCE, Satellite Laser Ranging (SLR) data, and kinematic orbits from different Low Earth Orbiters (LEOs). In this contribution we present the latest release, GOCO2025s. For the first time, the model contains data from the GRACE-FO mission, including data from the Laser Ranging Interferometer (LRI). Various additional improvements have been made compared to the previous version: among other things, the background models have been updated and the SLR data are now processed consistently in-house using the same algorithms and standards. The focus of the GOCO combination process is on the proper handling of the stochastic behavior of the data. As a result, the accuracy information, provided as a full variance-covariance matrix, is realistic and is published with the solution. GOCO2025s comprises not only a static gravity field; temporal variations are modelled as well, represented as a regularized trend plus annual and semiannual signals.
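The principle behind combining solutions with their full variance-covariance matrices can be shown with the textbook least-squares combination (a minimal sketch of the general principle only, not the GOCO processing chain; the function name is an assumption):

```python
import numpy as np

def combine_solutions(estimates, covariances):
    """Least-squares combination of independent parameter estimates with
    full variance-covariance matrices: stack the normal equations
    N_i = C_i^-1 and right-hand sides n_i = C_i^-1 x_i, then solve."""
    N = sum(np.linalg.inv(C) for C in covariances)
    n = sum(np.linalg.inv(C) @ np.asarray(x, dtype=float)
            for x, C in zip(estimates, covariances))
    C_combined = np.linalg.inv(N)
    return C_combined @ n, C_combined
```

Realistic input covariances are exactly what makes such a combination meaningful: each contributing solution is weighted according to its actual stochastic behaviour rather than an assumed one.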
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall E1)

Presentation: Integrating InSAR and GRACE observations to Assess Aquifer Overexploitation in Spain

Authors: Guadalupe Bru, Carolina Guardiola-Albert, Víctor Expósito, Pablo Ezquerro, Juan López-Vinielles, Marta Béjar-Pizarro, Anna Barra
Affiliations: Geological and Mining Institute of Spain (IGME-CSIC), Centre Tecnològic de Telecomunicacions de Catalunya (CTTC)
Monitoring land deformation over large areas is possible thanks to advanced satellite radar interferometry (InSAR) techniques. In recent years, the extensive availability of Copernicus Sentinel-1 satellite imagery and the implementation of the European Ground Motion Service (EGMS) have provided millions of ground deformation measurement points over the whole of Europe, allowing national- and regional-scale analysis. The EGMS provides, for each measurement point, the annual deformation velocity and a time series of deformations. The EGMS first update (2015–2021) includes the baseline dataset (2015–2020), now unavailable as a standalone release. From the second update onward, EGMS adopts a five-year rolling window approach, with the current update covering 2019–2023 and the next update set for 2020–2024. Users can view and download the latest release via the EGMS explorer, with the previous version available for download only. Additionally, the Gravity Recovery and Climate Experiment (GRACE) satellite initiative has significantly contributed to monitoring terrestrial water storage (TWS) by measuring fluctuations in Earth's gravity field. Since 2002, GRACE data have provided valuable insights into TWS by detecting changes in gravity that reflect mass redistribution, including the possibility of isolating the groundwater contribution (groundwater storage, GWS) to the signal. These data are especially crucial in regions with limited hydrogeological information, where GRACE-derived GWS has shown strong correlations with in-situ well measurements. One valuable source of daily GWS time series is NASA's GLDAS portal. Aquifer overexploitation is a global issue. In Europe, where more than 65% of drinking water comes from aquifers, groundwater is under significant pressure from excessive irrigation and industrial overexploitation, among other factors. In Spain, 27% of groundwater bodies are in poor quantitative status, which means that they are being overexploited.
Extensive groundwater withdrawal often leads to land subsidence, the downward movement of the land surface driven by the compaction of unconsolidated geological layers. Land subsidence can permanently reduce aquifer-system storage capacity and trigger other impacts, such as increased flooding (especially in coastal areas) and damage to infrastructure. This study uses InSAR data from EGMS, covering the 2018–2022 period across the Spanish territory (including islands), to investigate land deformation linked to aquifer overexploitation. Using EGMS ORTHO product data, we automatically detected Active Deformation Areas (ADAs) of vertical movement using the ADAfinder tool (Barra et al. 2017). A comprehensive ADA map was generated, focusing on regions with defined groundwater bodies. Using ancillary data, we filtered the ADAs to retain those most likely caused by land subsidence. To complement the analysis, GRACE-derived GWS data were integrated to assess temporal trends in groundwater storage across these regions. By correlating the GWS with ADA maps and the groundwater bodies identified as having a poor quantitative status, we obtained a robust understanding of areas with high spatial correlation between significant land deformation and aquifer overexploitation. GRACE data corroborate these results by showing whether there is a negative trend in the GWS temporal variation in these areas. This study highlights the utility of national-scale InSAR data and GRACE observations as complementary tools for groundwater resource management. By combining geodetic satellite techniques, we underscore the potential to address critical water resource challenges and inform sustainable management practices. Barra, A., Solari, L., Béjar-Pizarro, M., Monserrat, O., Bianchini, S., Herrera, G., Crosetto, M., Sarro, R., González-Alonso, E., Mateos, R. M., Ligüerzana, S., López, C., & Moretti, S. (2017). A methodology to detect and update active deformation areas based on Sentinel-1 SAR images.
Remote Sensing, 9(10), 1002. https://doi.org/10.3390/rs9101002
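The idea of flagging significantly moving points from an EGMS-style velocity field can be sketched as follows (a conceptual threshold rule only, not the ADAfinder algorithm; the function name and the k-sigma criterion are assumptions):

```python
import numpy as np

def flag_active_points(velocity_mm_yr, k=2.0):
    """Flag measurement points whose vertical velocity departs from the
    regional background by more than k standard deviations, keeping only
    subsiding (negative) motion. Illustrative criterion in the spirit of
    active deformation area detection."""
    v = np.asarray(velocity_mm_yr, dtype=float)
    background = np.median(v)      # robust regional reference
    sigma = np.std(v)
    return (v - background) < -k * sigma
```

In practice such flags would then be clustered spatially into ADAs and filtered with ancillary data, as described above.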
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall E1)

Presentation: Mass variations of Tyrrhenian Seamounts and their detectability through NGGM/MAGIC

Authors: Carla Braitenberg, Prof. Angelo De Min, Prof. Teresa Trua, Dr. Michael Marani, Gerardo Maurizio, Dr. Federico Bernardini
Affiliations: University of Trieste, Department of Mathematics, Informatics and Geosciences, University of Parma, Department of Chemistry, Life Sciences and Environmental Sustainability, Institute of Marine Sciences - CNR, Ca' Foscari University of Venice, Department of Humanities, Multidisciplinary Laboratory, The Abdus Salam International Centre for Theoretical Physics
This study investigates the plausible mass changes of the Marsili volcano, a seamount of the Mediterranean Sea, with the aim of estimating their detectability through the upcoming gravity mission NGGM/MAGIC. The satellite constellation of GRACE-C and NGGM is expected to have improved noise levels and time resolution compared to GRACE/GRACE-FO. Braitenberg and Pastorutti (2024) estimated that eruptions at a selection of submarine volcanoes worldwide lead to mass changes at the volcano summit of up to 11 Gt. These eruptions can be non-explosive if the basalt outflow occurs at great water depths, for instance greater than 1500 m, and can remain undocumented in remote areas. This poses a hazard for ships and submarines and leaves the location of an active submarine volcano unknown. The innovation of the present work lies in the fact that we constrain the internal density structure of the volcano with the mineral composition of dredged basalt samples. Moreover, we deduce porosity profiles at the flanks and below the volcano summit from seismic velocities, available as one 2D profile crossing the studied volcano. The profile allows us to constrain the over-pressurization of pore fluids in the crust below the volcano, which leads to a reduced effective pressure acting on the porosity. The mineral composition of the basalt samples, together with the compressional velocity (Vp) and density calculated for given pressure and temperature values, delivers clear evidence that up to certain depths porosity is an important parameter in explaining the observed Vp values. Finally, the density is calculated as a function of inferred porosity, pressure, temperature and rock mineral composition, extrapolating the knowledge from the profile over the entire extent of the volcano. The Marsili volcano of Italy is used as an example. References Braitenberg, C., Pastorutti, A., 2024.
Detectability of Seamount Eruptions Through a Quantum Technology Gravity Mission MOCAST+: Hunga Tonga, Fani Maoré and Other Smaller Eruptions. Surv. Geophys. 45, 1331–1361.
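The dependence of bulk density on porosity that underlies the density model can be expressed with the standard two-phase mixing relation (a generic formulation with illustrative values, not the study's calibrated profile; the default densities are assumptions):

```python
def bulk_density(porosity, rho_rock=2900.0, rho_fluid=1030.0):
    """Two-phase mixing model for the bulk density (kg/m^3) of
    fluid-saturated porous basalt: rho = (1 - phi)*rho_rock + phi*rho_fluid.
    Default grain and seawater densities are illustrative only."""
    return (1.0 - porosity) * rho_rock + porosity * rho_fluid
```

Evaluating this along a depth-dependent porosity profile gives the density column whose changes, integrated over the edifice, yield the mass signal a gravity mission would need to detect.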
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall E1)

Presentation: Combining In-Situ Measurements and Space Observations to Investigate the Earth’s Response to Climate Change in Svalbard

Authors: Joelle Nicolas, Alicia Tafflet, Dr. Jean-Paul Boy, Lissa Gourillon, Manon Tranchand-Besset, Dr. Valentin Pillet, Jean-Michel Lemoine, Pr. Agnès Baltzer, Pr. Jérôme Verdun
Affiliations: GeF/Cnam, ITES/CNRS, LETG/CNRS, I-Sea, Cnes
The Arctic region is recognized as the largest short-term contributor to sea-level rise and one of the fastest-warming areas on Earth [1]. With glaciers covering about 60% of its total land area, the Svalbard Archipelago (Norway) in the High Arctic is highly exposed to the effects of climate change and has undergone rapid changes over the past decades. As its glaciers are highly sensitive to fluctuations in temperature and precipitation, Svalbard is a natural laboratory for constraining the climatic sensitivity of glaciers. Over the last 50 years, Svalbard has experienced a warming of 3 to 5°C and annual precipitation has intensified by 30-35%, with a growing fraction falling as rain [2]. These changes are leading to an increase in glacier melt, sea ice cover loss, runoff, erosion, and sediment transport, with consequences for natural hazards and ecosystem balance. Though the glacier mass loss is most visible in the retreat of the ice fronts, there are still large uncertainties regarding the glacier mass balance in Svalbard. As glacier retreat has an impact on the transport of sediments to the shoreline, which in turn has an impact on coastal evolution, we ask how in-situ glacier measurements, submarine sediment data, and satellite observations can be linked to better understand the geophysical processes leading to the transition from a glacial to a periglacial regime in Svalbard. In this study, we use datasets from satellite and ground-based observations with complementary sensitivities and spatio-temporal resolutions. Firstly, to quantify the crustal deformation, we compute daily time series of 3D GNSS coordinates for all permanent GNSS stations in Svalbard since 2000. These time series show that the Earth's crust movement at Svalbard is composed of a very weak seasonal component and a strong uplift component, with mean vertical velocities ranging from 8 to 13 mm/yr.
Secondly, since 2002 the GRACE and GRACE Follow-On space missions have delivered time-varying gravity field measurements that provide a unique record of global mass changes and mass transport on the Earth. This means that ice mass loss can be measured on a regional scale, which solves the problem of extrapolating point measurements to larger areas. Expressed in water equivalent height, these measurements show a strong declining trend over the whole of Svalbard. Thirdly, we estimate variations in ice levels using altimetry from CryoSat-2 and ICESat-2, and map changes in the regional coastline using Sentinel-2 and Landsat-5 satellite images. Finally, in-situ snow depth measurements carried out regularly around Ny-Ålesund since 2007 have been used to establish a local mass balance describing short-term changes. Since 2009, we also have DGPS coastline measurements and sediment datasets showing the delta and pro-delta dynamics linked to the flows of sediments and liquids associated with glaciers. Our analysis consists of interpreting the coherences and discrepancies between the different datasets and the predictive models. For instance, Glacial Isostatic Adjustment (GIA) models explain less than 30% of Svalbard's surface uplift observed by GNSS. The question that then arises is how much of the observed value is due to the current melting of ice sheets and glaciers. To answer this question, we have developed a regional mechanical model of the lithosphere and mantle beneath Svalbard which accounts for the flexural rigidity of the lithosphere and the viscosity of the mantle. This model was then used to determine the response of the Earth's surface to the removal of ice loads following the last ice ages (Great and Little Ice Ages) and to the current melting, to be compared with those observed in the GNSS and GRACE time series.
We present the melting rates estimated from space geodesy time series, consistent with in-situ observations, using realistic values for the elastic thickness of the Svalbard lithosphere and the ice thicknesses of the present and the different ice ages. We demonstrate that such a multi-disciplinary and multi-scale approach offers crucial insights for accurately characterizing the Earth’s response to past and present ice melting. References [1] Rantanen, M., Karpechko, A. Y., Lipponen, A., Nordling, K., Hyvärinen, O., Ruosteenoja, K., Vihma, T., & Laaksonen, A. (2022). The Arctic has warmed nearly four times faster than the globe since 1979. Communications Earth & Environment, 3(1), 168. https://doi.org/10.1038/s43247-022-00498-3 [2] NCCS Report. (2019). Climate in Svalbard 2100 – a knowledge base for climate adaptation. https://www.miljodirektoratet.no/globalassets/publikasjoner/m1242/m1242.pdf
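The decomposition of a daily GNSS coordinate series into trend, annual and semiannual components follows the standard least-squares model (a generic sketch of that textbook model, not the authors' exact processing; the function name is an assumption):

```python
import numpy as np

def fit_trend_seasonal(t_years, h_mm):
    """Least-squares fit of offset + linear trend + annual and semiannual
    harmonics to a vertical GNSS coordinate time series.
    Returns the coefficient vector; coeffs[1] is the rate in mm/yr."""
    t = np.asarray(t_years, dtype=float)
    A = np.column_stack([
        np.ones_like(t), t,
        np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),   # annual
        np.cos(4 * np.pi * t), np.sin(4 * np.pi * t),   # semiannual
    ])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(h_mm, dtype=float), rcond=None)
    return coeffs
```

Fitting this model to each station separates the weak seasonal signal from the strong uplift component quoted above.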
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall E1)

Presentation: Processing strategies for the GFZ GRACE/GRACE-FO Level-2 data release 07

Authors: Markus Hauk, Christoph Dahle, Michael Murböck, Natalia Panafidina, Josefine Wilms, Karl-Hans Neumayer, Frank Flechtner
Affiliations: GFZ Potsdam, Technische Universität Berlin, Physical Geodesy
Being part of the GRACE/GRACE-FO Science Data System, the GFZ German Research Centre for Geosciences is one of the official Level-2 processing centers routinely providing monthly gravity field models. These models are used by a wide range of geoscientists to infer mass changes at the Earth's surface in order to study climate-related phenomena. Currently, GFZ's operationally processed monthly gravity fields are still based on release 6 (RL06) standards. The distribution of a reprocessed and improved RL07 time series is planned for fall 2025. Most of these improvements have been developed within the Research Unit "New Refined Observations of Climate Change from Spaceborne Gravity Missions" (NEROGRAV), funded by the German Research Foundation (DFG). The main focus of the new release lies on optimized stochastic modeling for GRACE and GRACE-FO gravity field determination. This includes the extension of the stochastic instrument error models, the optimization of the combination of the different observations, and the inclusion of tidal and temporally changing non-tidal background model error variance-covariance matrices in the adjustment process. This presentation provides an overview of the expected performance of the upcoming gravity field release relative to the current time series, including a discussion of the effectiveness of including background model error variance-covariance information to reduce temporal aliasing.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 0.94/0.95)

Session: C.03.16 Sentinel-2 Mission Status and Outlook

The Sentinel-2 mission has been a cornerstone of the Copernicus programme, providing high-resolution, multispectral imagery that underpins a diverse range of applications, including agriculture, forestry, water resource management, disaster response, and methane emissions monitoring. Its global reach and open data policy have made it an indispensable resource for addressing environmental and societal challenges.

This session will offer a comprehensive overview of the mission, covering key aspects of both the space and ground segments. Topics will include flight operations, spacecraft maintenance, ground segment operations, data access, and data quality. Special emphasis will be placed on how these components collectively ensure the mission’s success and reliability.

Bringing together leading experts from each domain, the session will provide a detailed status update on the mission and explore its future outlook. Attendees will gain valuable insights into the operational strategies and innovations that sustain Sentinel-2 as a world-class Earth observation platform.

Presentations and speakers:


Sentinel-2 Mission Status


  • Ferran Gascon - ESA

Sentinel-2 Ground Segment Operations Status


  • Franck Desbouillons - ESA

Sentinel-2 Space Segment Operations Status


  • Franco Marchese, Jean-Baptiste Gratadour - ESA

Sentinel-2C Commissioning Phase and Sentinel-2D Development Status


  • Davide Oddenino, Patricia Lopez - ESA

Sentinel-2 Data Quality Status


  • Valentina Boccia - ESA

Sentinel-2 Next Generation


  • Janice Patterson, Francisco Reina - ESA
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L1/L2)

Session: A.02.06 Advances in land surface phenology monitoring and applications - PART 1

Land surface phenology (LSP) plays a key role in monitoring and understanding the seasonal dynamics of terrestrial ecosystems, providing critical insights into how these ecosystems respond to environmental changes, including those driven by climate change. This makes LSP a vital tool in predicting and managing ecosystem responses to climate variability and environmental stressors. This field has seen significant advances over the last decade, particularly with the advent of more sophisticated remote sensing technologies and data processing techniques and the development of new phenological products, such as the pan-European High-Resolution Vegetation Phenology and Productivity (HR-VPP) dataset included in the Copernicus Land Monitoring Service. We invite contributions on recent advancements in LSP monitoring, emphasizing the use of innovative techniques to detect phenological changes, the application of sub-daily resolution data, new tools for processing and analysing satellite data, and applications in various fields such as agriculture, forestry, ecology, public health, or climate science.

This session also welcomes any contribution concerning the intercomparison of LSP and complementary phenological observations, including in-situ human observations, phenocams (digital cameras capturing vegetation changes), and flux towers (measuring exchanges of carbon, water, and energy). The synergy between these observation methods can address inherent discrepancies and limitations, leading to a more accurate and holistic view of terrestrial ecosystems and their responses to climate change. It is expected that these contributions will provide further insight into the CEOS LPV phenology validation protocol.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L1/L2)

Presentation: Validation of the global Sentinel-3 / OLCI Land Surface Phenology products of the Copernicus Land Monitoring Service

Authors: Fernando Camacho, Jorge Sánchez-Zapero, Enrique Martínez-Sánchez, Lars Eklundh, Hongxiao Jin, Zhanzhang Cai, Roselyne Lacaze
Affiliations: EOLAB, Lund University, HYGEOS
The Copernicus Land Monitoring Service (CLMS) provides a series of qualified bio-geophysical products on the status and evolution of the land surface. The products are used to monitor vegetation, crops, the water cycle, the energy budget and the terrestrial cryosphere. Production and delivery take place in a timely manner and are complemented by the constitution of long-term time series. The CLMS portfolio has recently included a global Land Surface Phenology (LSP) product, which is produced annually at 300 m resolution from Sentinel-3 Ocean and Land Colour Instrument (OLCI) acquisitions. In the coming months, the temporal coverage could be extended back to 2014 using data from the PROBA-V satellite. The LSP product provides a total of 13 vegetation phenology and productivity (VPP) parameters, derived for up to two growing seasons. The parameters are derived from smooth seasonal trajectories of the plant phenology index (PPI), which is computed from Bidirectional Reflectance Distribution Function (BRDF)-adjusted surface reflectance data at 5-day frequency. This talk will focus on the validation of the first global CLMS OLCI LSP products, generated for the year 2023. Results for the 2024 LSP will also be discussed if available. The validation methodology is based on two main approaches: comparison with in situ observations and product intercomparisons with similar datasets. Several performance criteria are evaluated, including product completeness, spatial consistency, accuracy, precision and uncertainty. The validation of the start of season date (SOSD) and the end of season date (EOSD) is performed using the Pan-European Phenology Project (PEP725), the French TEMPO and the US National Ecological Observatory Network (NEON) datasets, which provide ground observations of phenological events. In addition, the near-ground Ecosystem Phenology Camera (PhenoCam) V3 dataset, which infers phenometrics from digital cameras, is used over areas that are homogeneous at 300 m.
The product intercomparison with global satellite-based products includes comparisons of SOSD/EOSD with the equivalent MODIS C6.1 and VIIRS V2 phenometrics, while the Total Productivity (TPROD) and the Seasonal Productivity (SPROD) are compared with the CLMS OLCI and MODIS C6.1 Gross Primary Productivity (GPP) products over a global network of validation (LANDVAL) sites. Over Europe, the global LSP products are intercompared with the Medium Resolution and High Resolution VPP (MR-VPP and HR-VPP) products over a European network of sites (EVAL). CLMS OLCI LSP products are reliable at the global scale and consistent with ground references and with equivalent estimates from other satellite phenology and productivity products. The bias of CLMS OLCI LSP against PEP725, TEMPO and NEON phenophases related to the first stages of the growing season is close to zero or indicates a slight delay (up to 5 days). For phenophases related to more developed leaf growth, CLMS OLCI LSP displays a slightly anticipated SOSD, as expected. In the case of EOSD, a bias typically between -33 days (i.e., anticipated EOSD) and 4 days is found compared with in-situ observations, depending on the phenophase.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L1/L2)

Presentation: Cross-Scalar Analysis of Multisensor Land Surface Phenology

Authors: Mark Friedl, Xiaojie Gao, Tristan Green, Minkyu
Affiliations: Boston University, Harvard University, Kangwon National University
Land surface phenology (LSP) metrics derived from remote sensing are widely used to monitor vegetation phenology over large areas and to characterize how the growing seasons of terrestrial ecosystems are responding to climate change. Until recently, however, most LSP studies relied on coarse spatial resolution sensors, which makes assigning direct linkages between LSP metrics and ecological processes and properties challenging due to scale mismatches and because substantial variation in phenology and ecological properties are often present at sub-pixel scale in coarse resolution LSP metrics. In this study, we leverage publicly available LSP data products with three orders of magnitude difference in spatial resolution derived from Moderate Resolution Imaging Spectroradiometer (MODIS, 500 m), Landsat and Sentinel-2 (HLS, 30 m), and PlanetScope (3 m) imagery to examine and characterize the nature, magnitude, and sources of the agreement and disagreement in LSP metrics across spatial scales. Our results provide three key conclusions: (1) LSP metrics from three sensors showed consistently high cross-scalar agreement across sites (r2 = 0.70 – 0.97), suggesting that they all effectively capture geographic variation in LSP; (2) within-site cross-scalar agreement between LSP metrics was systematically lower relative to agreement across sites, but mean absolute differences were consistent across and within sites (generally <14 days for day of year-based metrics, with a few exceptions); and (3) local-scale composition and heterogeneity in land cover is a key factor that controls cross-scalar agreement in LSP metrics. In particular, we found that site-level heterogeneity in land cover (measured via entropy) and the proportion of evergreen versus deciduous land cover types explain up to half of site-to-site variance in local-scale cross-scalar agreement in LSP metrics. 
Results from this study support the internal consistency and quality of the three LSP data products examined, and more generally, provide guidance regarding the choice of spatial resolution for different applications and land cover conditions, and yield new insights related to how LSP observations scale across different sensors and spatial resolutions.
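The land-cover heterogeneity metric referenced above can be computed as Shannon entropy over the class composition of a site footprint (one generic formulation; the function name is an assumption and this may differ in detail from the study's implementation):

```python
import numpy as np
from collections import Counter

def landcover_entropy(labels):
    """Shannon entropy (nats) of the land-cover composition within a site
    footprint: H = -sum(p_c * ln(p_c)) over class proportions p_c.
    Higher values indicate a more heterogeneous pixel neighbourhood."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))
```

A homogeneous site scores zero, while an even mix of classes scores ln(number of classes), which is the sense in which entropy explains reduced cross-scalar agreement in heterogeneous landscapes.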
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L1/L2)

Presentation: Validating the Land Surface Phenology of the Copernicus High Resolution Phenology and Productivity (HR-VPP) Product in the Mediterranean Ecosystems

Authors: Guilhem N. Jeannet-Chaves, Mr. Jose A. Caparros-Santiago, Mr. Miguel A. Garcia-Perez, Prof. Victor F. Rodriguez-Galiano
Affiliations: Universidad De Sevilla
Vegetation phenology, defined as the study of the seasonal cycles of plants, is a pivotal proxy for understanding ecosystem dynamics in the context of global change. It is mainly analyzed through two complementary approaches: satellites (Land Surface Phenology; LSP) and ground-based human observations (Ground Phenology; GP). A number of LSP products have been developed in recent years, such as the MODIS Medium Resolution Vegetation Phenology and Productivity (MR-VPP), the MODIS Land Cover Dynamics product (MCD12Q2), the VIIRS Global Land Surface Phenology (GLSP), the AVHRR Vegetation Index and Phenology (VIP) product, and the Sentinel-2 High-Resolution Vegetation Phenology and Productivity product (HR-VPP). LSP products offer different phenometrics (quantitative measures of phenological events), such as start of season (SOS), middle of season (MOS), and end of season (EOS), modeled from vegetation indices through specific smoothing and extraction algorithms. HR-VPP is a Copernicus-based product released in late 2021 that extracts phenometrics and productivity variables from the Plant Phenology Index (PPI). Calibrations and validations have mostly been conducted in Atlantic and continental environments, whereas Mediterranean ecosystems are strongly underrepresented. Moreover, Mediterranean vegetation is highly heterogeneous, diverse, and particularly vulnerable to climate change. This diversity is embodied in its species, densities, structures, landscape variety, and divergent phenological patterns. LSP provides an aggregated, pixel-wise perspective of seasonal dynamics. Thus, metrics might be influenced both by plant biological cycles and by other land covers (e.g., water, paved surfaces). Additionally, diverse ecosystems cause the blending of different vegetation signals and phenological dynamics. Hence, a high spatial resolution is essential for such an ecologically intricate region.
It is, therefore, urgent to assess the representativity of HR-VPP for describing the Mediterranean phenological complexity. This research aims to validate HR-VPP using diverse observation networks with stations in the Mediterranean region (the pan-European PEP725, the French TEMPO, and the Spanish AEMET). LSP phenometrics were compared to the phenophases of the observation networks. Prior to the comparative analysis, ground observations were aggregated by functional type, site, stage, and year, while phenometrics were stratified by CORINE land cover type, year, and site. The findings of our study show good agreement of HR-VPP in the Mediterranean landscapes, suggesting good representativity of the product, albeit with significant differences between evergreen broadleaf and deciduous broadleaf taxa. The maximum correlation coefficients for each functional type are 0.88 and 0.73, respectively. Broadly, slightly stronger correlations were observed for evergreen broadleaf and deciduous needleleaf trees than for broadleaved deciduous trees, as well as for EOS compared to SOS. The AEMET network presents the highest correlation values in the entire validation, with HR-VPP achieving r=0.88 for the evergreen broadleaved second spring phenophase. For deciduous broadleaved trees, results are more robust in autumn. The TEMPO network reveals that much better results are obtained for deciduous broadleaved datasets (averaging r=0.67) than for coniferous datasets; nonetheless, the deciduous coniferous autumn phenophase exhibits the highest correlation in the whole database, with r=0.81. Finally, HR-VPP correlates the least with PEP725 among all networks (r=0.285 on average for all models). Autumn correlations are also significantly higher (r=0.462) than spring correlations, which are virtually zero.
These findings suggest that the product may be suitable for upscaling and estimating phenology over large areas and sensitive ecosystems, and pave the way for the use of HR-VPP in upcoming monitoring in Southern Europe. Future validation efforts could benefit from an enhanced and more accurate land cover map, as well as from more taxonomically diverse and spatially precise phenological networks.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L1/L2)

Presentation: Development of Next Generation High Spatial Resolution Phenology and Productivity Data for the European Continent

Authors: Lars Eklundh, Else Swinnen, Zhanzhang Cai, Hongxiao Jin, Jonas Ardö, Janik Deutscher, Stefan Kleeschulte, Ludvig Forslund
Affiliations: Lund University, Flemish Institute for Technological Research (VITO), Joanneum Research, space4environment, European Environment Agency (EEA)
The Copernicus Land Monitoring Service (CLMS) provides a freely available product suite for monitoring dynamic land surface conditions across Europe: the High Resolution Vegetation Phenology and Productivity (HR-VPP) suite. It is derived from Sentinel-2 data and provides 10-meter spatial resolution for monitoring vegetation variations from 2017 onwards. Land Surface Phenology (LSP) describes vegetation seasonality, summarised in parameters such as the start, peak, and end of the growing season. Furthermore, the HR-VPP product suite includes descriptions of plant productivity (e.g., total sum, amplitude, rates of increase and decrease), covering a total of 13 parameters per growing season. These dynamic properties are fundamental to understanding ecosystem functioning and are related to the provision of ecosystem services by vegetated land areas. The high spatial resolution enables detailed land monitoring, and the first HR-VPP data have already been applied in a number of studies, including urban green areas, biodiversity, crop yield forecasting, ecosystem health, drought impacts, and carbon flux assessment. Building upon the initial years of operation, a new generation of the HR-VPP database is now under development. This update aims to improve the modelling of seasonal vegetation growth trajectories by incorporating land cover-specific information and applying a more flexible time series fitting methodology, implemented in the latest version of the TIMESAT software system. Improved accuracy will be achieved through correction for the bi-directional reflectance distribution function (BRDF), enhanced cloud-flagging methodology, and better adaptation to land cover-specific conditions across Europe.
Furthermore, in addition to the previously used plant phenology index (PPI), GPP (gross primary productivity) will be investigated as a potential basis for extraction of phenological parameters, which will consequently be strongly linked to the capacity of land vegetation to assimilate carbon dioxide. Based on combined information on carbon use efficiency from ecosystem models and other satellite products, vegetation NPP (net primary productivity) will also be computed. An addition to the product suite is the development of a new European tree cover disturbance monitoring prototype. This prototype will be based on multi-year time series of Sentinel-2 data, and will include timings, magnitudes, and trends of detected anomalies related to forest vitality changes, disturbances and forest management activities. To ensure the accuracy of the HR-VPP products, in-situ data from flux towers, phenological cameras, and ground observations are used in the product calibration and validation. The project is commissioned by the European Environment Agency, and executed by a consortium of European research institutions and consultancy companies. Altogether, the product suite will contain highly useful data for assessing vegetation conditions and their changes across the European continent. The developed variables have a large potential to strengthen operational land management and planning, and to support a range of commercial, public, and research uses. The presentation will summarise experiences from the first generation of HR-VPP as well as preliminary results of the new calibrations and product developments.
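The planned NPP layer rests on the standard relation NPP = CUE × GPP, where CUE is the carbon use efficiency. A minimal per-pixel sketch follows; the array values are invented, and the operational chain combining ecosystem-model and satellite CUE information is naturally more involved:

```python
import numpy as np

# Annual GPP (gC m-2 yr-1) and a carbon-use-efficiency layer for four pixels.
gpp = np.array([[1200.0, 950.0], [800.0, 0.0]])
cue = np.array([[0.45, 0.50], [0.48, 0.45]])

# NPP = CUE * GPP, applied element-wise per pixel.
npp = cue * gpp  # approx. [[540, 475], [384, 0]] gC m-2 yr-1
```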
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L1/L2)

Presentation: A generalized method to estimate vegetation phenology from time-series of optical data: analysis of short-term impact anomalies due to 2022 drought in north of Italy

Authors: Federico Filipponi, Francesco Nutini, Mirco Boschetti
Affiliations: National Research Council (CNR) - Institute of Environmental Geology and Geoengineering (IGAG), National Research Council (CNR) - Institute for Electromagnetic Sensing of the Environment (IREA)
Advances in Earth observation data analytics make it possible to systematically analyze and extract diverse information from a variety of large datasets, including those observing and measuring the response of natural and agro-ecosystem processes to environmental and climate conditions and drivers. The Earth observation capacity to monitor Land Surface Phenology (LSP) has improved over the past years, with the development of many algorithms that can exploit dense time series at high spatial resolution, allowing analyses to move from coarse LSP to the crop-field/forest-plot level. Such a data flow is nowadays available thanks to the high revisit frequency of current constellations and is guaranteed in the future by operational satellite missions. Spatially explicit LSP information already exists operationally, such as the pan-European high-resolution vegetation phenology and productivity product generated by the Copernicus Land Monitoring Service (CLMS) from Sentinel-2 satellite acquisitions. Such a product is distributed to users with annual timeliness and can be exploited for multi-annual analysis. In-season analysis of ecosystem and agricultural dynamics for territorial and agricultural applications needs LSP products with different characteristics and additional thematic content: i) in-season/early end-of-season estimates (i.e. lower product distribution latency); ii) identification of multiple crop seasons; iii) seasonal analysis of crop-trait trajectories; and iv) extraction of additional phenological metrics (e.g. seedling emergence and start of leaf senescence) to be exploited in crop monitoring and yield forecasting systems. In this framework, we developed an automatic procedure to analyze time series of EO vegetation products (VIs or biophysical variables) to derive the above-mentioned phenometrics and seasonal products.
To achieve this aim, a generalized multitemporal smoothing and gap-filling procedure was implemented to process vegetation indices (VIs) or biophysical variables, such as the LAI from the SNAP biophysical processor, generating vegetation productivity proxy time series with evenly spaced timesteps. The method consists of a pixel-based approach with four steps: i) removal of invalid pixels (e.g. clouds, cloud shadows, topographic shadows) for each satellite acquisition; ii) identification and removal of small drops; iii) weighted Savitzky-Golay polynomial fitting, preserving the upper envelope; and iv) daily interpolation through a Whittaker smoother. The processing chain produces: i) LAI/VI time series at daily timesteps that describe seasonal vegetation dynamics; ii) phenological metrics, using local curve fitting and derivative analysis to identify phenophases; and iii) annual syntheses from vegetation trajectories and seasonal temporal statistics. The workflow has been implemented and is producing operational products. A first validation exercise using PhenoCam ground observations shows satisfactory results, with differences in phenophase occurrence of around 15 days (MAE), consistent with the accuracy reported in other studies. Further evaluation of the goodness of the estimated seasonal trajectories and phenological metrics is ongoing, exploiting LAI field measurements and crop phenology observations available for different seasons in northern Italy. An experiment fully exploiting the processing chain described above was conducted to analyze the impact of the 2022 drought conditions on the rice district of northern Italy. The method was applied to S2 LAI products generated from imagery acquired in the period 2016-2024 over the Lombardy and Piedmont regions (northern Italy), covering the main European rice districts. A land use map provided by the regional administrations was used to restrict the analysis to rice cropping systems.
LAI was selected to derive phenological metrics since it is a biophysical parameter relevant for crop growth, biomass accumulation, and yield, and it is not affected by signal saturation in areas with high vegetation cover, as happens with vegetation indices. The phenological metrics estimated in different years were finally compared to detect anomalies over the extensive cropland area. The results allowed the identification of hotspots characterized by reduced productivity or no photosynthetic activity, a consequence of the reduced water availability caused by the drought. According to farmer interviews, some fields were abandoned after crop emergence owing to the absence of irrigation water. The map generated highlights differences at the district level in relation to water availability and crop management, and pinpoints which areas faced yield loss during the severe 2022 drought event, hence indicating locations that are more likely to face the same issue in the near future. Such information is useful for planning activities, prioritizing analysis of the most vulnerable areas in case of a new drought event. The proposed procedure is generalized and can be used to estimate phenological metrics from various EO products, such as spectral indices and other plant traits generated from high-revisit optical systems (e.g. Planet). It also supports the estimation of LSP from virtual satellite constellations, allowing the generation of denser vegetation index time series and contributing to improved seasonal trajectory analysis and phenology estimation accuracy, and consequently to an enhanced agro-ecosystem monitoring capacity. The results show that LSP enables the study of spatio-temporal patterns associated with environmental variability and climate drivers, turning plant phenology into a fingerprint of ecosystem and human responses to climate change.
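The daily-interpolation step of the chain described above, a Whittaker smoother, can be sketched with the weighted formulation of Eilers (2003), in which zero-weight (e.g. cloud-contaminated) observations are filled in by the smoothness penalty. This is a dense-matrix toy version under those assumptions, not the authors' operational code:

```python
import numpy as np

def whittaker_smooth(y, weights, lam=5.0):
    """Weighted Whittaker smoother: minimizes sum(w*(z-y)^2) + lam*sum((D2 z)^2).

    Solves (W + lam * D'D) z = W y, where D is the second-order difference
    matrix. Observations with weight 0 (e.g. cloudy dates) are interpolated
    by the smoothness penalty, which is how the gap-filling works here.
    """
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    W = np.diag(weights)
    return np.linalg.solve(W + lam * D.T @ D, W @ y)

t = np.arange(0, 365, 5, dtype=float)            # a 5-day composite grid
lai = 3.0 * np.exp(-((t - 180.0) / 60.0) ** 2)   # synthetic seasonal LAI curve
w = np.ones_like(lai)
w[20:25] = 0.0                                   # pretend these dates are cloudy
z = whittaker_smooth(lai, w)                     # smoothed, gap-filled series
```

In the operational chain the smoothed series would then be resampled to daily timesteps and the phenological metrics extracted from the fitted curve and its derivatives.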
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L1/L2)

Presentation: Physics-guided Deep Learning for Crop Phenology Retrieval from Sentinel-1 and Sentinel-2 Time Series

Authors: Keke Duan, dr.ir. Anton Vrieling, dr. Michael Schlund, Felix Lobert, dr. Claudia Paris, dr. Andrew
Affiliations: University of Twente, Faculty of Geo-Information Science and Earth Observation (ITC), Thünen Earth Observation (ThEO), Thünen Institute of Farm Economics, Earth Observation Lab, Geography Department, Humboldt-Universität zu Berlin
Timely and accurate monitoring of crop phenology is crucial for optimising agricultural management and understanding climate change impacts on crop development. Process-based models and remote sensing (RS) based curve-fitting techniques are often used for phenology monitoring over large areas, though their accuracy can be limited under highly variable crop growth conditions. Deep learning (DL) can handle large datasets and complex patterns but is underused in phenology retrieval due to limited ground observations across diverse environments. Integrating process-based and data-driven approaches offers the potential for improved accuracy and interpretability by embedding physical and biological insights. This study presents a physics-guided hybrid phenology model that integrates DL with the physical mechanisms of process-based crop models and RS observations to enhance crop phenology estimation. Weather-response variables derived from crop models are aligned with Sentinel-1 and Sentinel-2 time series to create physics-guided features, a new temporal framework that reduces environmental variability between samples. These features feed into a hybrid DL architecture that combines a one-dimensional U-Net, for extracting multi-scale temporal features, with a long short-term memory (LSTM) network, for capturing sequential dependencies, enabling robust identification of key phenological transitions. Trained and validated on ~16,500 phenology observations of winter wheat from Germany (2017-2021), the model significantly outperformed baseline DL models without crop model integration. On average, across various phenological stages, the hybrid model increased Prediction Accuracy (PA) by 54% (from 0.41 to 0.63), reduced Root Mean Square Error (RMSE) by 31% (10.5 to 7.2 days), and enhanced the coefficient of determination (R²) by 292% (0.12 to 0.47).
Combining multispectral and Synthetic Aperture Radar (SAR) Sentinel data yielded the best results, with photo-vernal-thermal time (PVTT) as the most effective weather-response variable for winter wheat phenology retrieval. The model also demonstrated strong generalisability, maintaining stable performance for untrained regions and years while addressing the cultivar-specific parameterization challenges that are common in process-based models. To conclude, this hybrid approach integrates physiological insights with DL techniques, offering a scalable, interpretable, and transferable solution for crop phenology monitoring.
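The reported evaluation metrics can be reproduced for day-of-year predictions as follows. RMSE and R² are standard; the abstract does not define its Prediction Accuracy metric, so the tolerance-based `pa` below is only an illustrative stand-in:

```python
import numpy as np

def phenology_scores(obs, pred, tol=7):
    """RMSE, coefficient of determination, and a tolerance-based accuracy
    for predicted phenological dates (day-of-year)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = float(np.sqrt(np.mean(err ** 2)))
    r2 = 1.0 - float(np.sum(err ** 2)) / float(np.sum((obs - obs.mean()) ** 2))
    pa = float(np.mean(np.abs(err) <= tol))   # fraction within +/- tol days
    return rmse, r2, pa

# Toy example: four observed vs. predicted stage dates (DOY).
rmse, r2, pa = phenology_scores([100, 120, 140, 160], [103, 118, 150, 159])
```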
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.31/1.32)

Session: F.02.03 Using satellite EO in global capabilities for disaster response

Satellite EO can rapidly provide damage mapping and situation mapping. This is illustrated by two large, operational EO capabilities for disaster response: the International Charter Space & Major Disasters and the Copernicus Emergency Management Service (CEMS) of the European Union. These EO-based services draw on a broad range of radar and optical EO missions, with relevant products for several types of hazards, both hydrometeorological and geological. Beyond awareness issues, the main barriers to user uptake are the quality of the geoinformation provided and its timeliness. Furthermore, with climate change, more and more extreme events are occurring in many regions of the world, increasing the need for robust services, including wide-area damage mapping. Novel EO techniques and advanced IT can help improve the accuracy and latency of disaster response services, including multi-sensor, multi-temporal techniques combined with Artificial Intelligence to increase coverage and timeliness, and cloud-optimized methods able to access EO data seamlessly from large virtual constellations. Moreover, because of climate change there is an increasing need for EO-based geoinformation that can be used in the long term for the analysis and comparison of historical events, for a better understanding of risks, and for multi-hazard risk management.

Convenors: Philippe Bally (ESA); Roberto Biasutti (ESA); Casper Fibaek (ESA); Anne Schneibel (DLR)
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: Remote Imaging Support for Emergencies (RISE): Empowering Humanitarians with Faster, Cost-Effective Geospatial Intelligence

Authors: Cristiano Nattero, Paolo Campanella, Lucas Falk, Laura Giustarini, Valentina Leone, Marco Menapace, Nikhil Mohan, Betty Spurgeon, Alberto Tasso, Jihed Ben Zarb, Marco Chini, Yu Li, Anis Amziane, Joao Vinholi, Tian Hu, Aolin Jia, Kanishka Mallick, Patrick Matgen, Ana Carolina Oliveira Helena, Daniel Ledesma Nicrosi, Michelle Joseph, Bethany Plant
Affiliations: WASDI sarl, Luxembourg Institute of Science and Technology (LIST), International Aid of the Luxembourg Red Cross - Shelter Research Uni (AICRL-SRU), World Food Programme (WFP) - Innovation Accelerator
Geospatial intelligence is essential for informed decision-making in humanitarian operations, particularly during crises when the need for timely and actionable information is critical. Through firsthand experience, we have witnessed the transformative impact of satellite-based Earth Observation (EO) data in key humanitarian efforts, including emergency response, damage assessment, disaster risk reduction, and enabling financial mechanisms such as parametric insurance for disaster risk transfer. Satellite-based EO consistently outperforms traditional field surveys, even those supported by drones, offering a more scalable and efficient solution. This advantage is especially pronounced in addressing large-scale natural hazards such as floods, droughts, wildfires, earthquakes, and landslides. However, despite its potential, the adoption of EO data in humanitarian emergency response faces significant barriers. These include technical challenges, limited access to expertise, commercial constraints, and a steep learning curve for non-specialist users. The reliance on geospatial experts as intermediaries remains a critical bottleneck, adding costs and delaying the conversion of raw EO data into actionable insights. While advances in automation, algorithms, and cloud technologies have mitigated many challenges, the human-in-the-loop paradigm still hampers the rapid dissemination of critical geospatial intelligence. To address these challenges, we introduce RISE (Remote Imaging Support for Emergencies), a user-friendly web application designed to provide actionable insights from remote sensing data to emergency managers. Developed by WASDI in collaboration with the Luxembourg Institute of Science and Technology (LIST) and the Shelter Research Unit of the International Aid of the Luxembourg Red Cross (AICRL-SRU), RISE is funded by the World Food Programme’s Humanitarian Innovation Accelerator (HIA).
RISE transforms EO data into accessible, decision-ready intelligence. Its modular, plugin-based architecture initially focuses on flood mapping, urban and settlement analysis, and drought monitoring, with a roadmap that includes tools for earthquakes, landslides, wildfires, and impact assessments. Advanced algorithms, provided by LIST and operationalized on WASDI's cloud platform, power RISE’s analytical capabilities:

• Flood Mapping: RISE integrates advanced floodwater mapping algorithms with near-real-time public datasets to manage a variety of land covers and urban settings. The SAR-based floodwater mapping algorithm, HASARD, is fully automatic and can scale up globally by leveraging AI models and multitemporal interferometric SAR coherence, enabling remote flood monitoring even in urbanized areas.
• Urban and Settlement Analysis: The system regularly updates building maps globally via the CityWatch service, using satellite data from the Copernicus Sentinel-1 and Sentinel-2 missions at a 10-meter resolution. For higher detail, it incorporates commercial very high-resolution (VHR) satellite imagery, reaching up to 50 cm. Advanced AI models process both optical and radar data for the 10-meter maps, while the VHR solution uses AI to classify images based on lower-resolution labels, ensuring accurate results.
• Drought Monitoring: RISE leverages cutting-edge models and datasets, including ECOSTRESS and VIIRS, to deliver actionable insights. These tools enable precise tracking of soil moisture, vegetation health, and evapotranspiration, offering critical data for early warning systems and effective water resource management.

While designed for global application, RISE is currently undergoing field validation in Niger, led by AICRL-SRU in collaboration with the Niger Red Cross. Field expertise from AICRL-SRU ensures that RISE aligns with the highest humanitarian requirements.
Its potential has already been recognised at prestigious forums, including the 2024 Multi-stakeholder Forum on Science, Technology, and Innovation for the SDGs (STI Forum), where it was showcased as a breakthrough solution for leveraging technology in humanitarian emergencies. RISE represents a significant leap forward in making EO data accessible and actionable, empowering emergency managers to make timely and informed decisions that save lives and resources.
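The physical principle behind SAR floodwater mapping is that calm open water reflects the radar signal away from the sensor, so flooded pixels appear dark. A deliberately simple change-detection sketch of that principle follows; HASARD itself uses statistical modelling, AI, and multitemporal coherence, and the thresholds and arrays below are invented:

```python
import numpy as np

def simple_flood_mask(pre_db, post_db, water_thresh=-18.0, drop_thresh=6.0):
    """Flag pixels that were bright before the event but dark after it.

    Requiring a large pre-to-post backscatter drop avoids flagging
    permanent water bodies, which are dark in both images.
    """
    return (post_db < water_thresh) & ((pre_db - post_db) > drop_thresh)

# Sigma-0 backscatter in dB for a 2x2 toy scene, before and after a flood.
pre = np.array([[-8.0, -9.0], [-20.0, -7.0]])
post = np.array([[-19.0, -10.0], [-21.0, -20.0]])
mask = simple_flood_mask(pre, post)
```

In this toy scene the lower-left pixel is permanent water (dark in both images) and is correctly not flagged as flooded.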
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: CEOS WGDisasters: the Italian Space Agency’s contribution to disaster response and lessons learnt

Authors: Dr. Deodato Tapete, Dr. Antonio Montuori, Dr Laura Frulla, Lorant Czaran, Andrew Eddy, Maria Virelli, Gianluca Pari, Simona Zoffoli
Affiliations: Agenzia Spaziale Italiana (ASI), Comisión Nacional de Actividades Espaciales (CONAE), United Nations Office for Outer Space Affairs (UNOOSA), Athena Global
The Working Group on Disasters (WGDisasters) was established in 2013 by the Committee on Earth Observation Satellites (CEOS, https://ceos.org) to ensure the sustained coordination of disaster-related activities undertaken by the CEOS Agencies, and to act as an interface between CEOS and the community of stakeholders and users involved in risk management and disaster reduction. The main pillars of WGDisasters are:

• Support Disaster Risk Management (DRM) authorities by means of satellite-based Earth Observation (EO) and science-based analyses.
• Support the United Nations Office for Disaster Risk Reduction (UNDRR) Sendai Framework, mainly with reference to Priority 1 “Understanding Risk” and Priority 4 “Build Back Better” activities.
• Support international initiatives such as the Group on Earth Observations (GEO), the International Charter Space & Major Disasters, and the Copernicus Emergency Management Service (EMS).
• Raise the awareness of governments, policymakers, decision-makers, and major stakeholders about the benefits of using satellite EO in all phases of DRM.
• Foster increased use of EO in support of DRM and DRR, and express related EO capabilities and needs.

In this framework, CEOS WGDisasters has initiated, promoted, and supported a series of concrete actions for DRM and DRR oriented to disaster monitoring, preparedness, and prevention. These actions have been translated into single-hazard Pilot and Demonstrator projects (currently focusing on fire, flood, landslide, volcanic, and seismic hazards) as well as multi-hazard projects such as the Recovery Observatory (RO) and the Geohazard Supersites and Natural Laboratories (GSNL).
Since 2012, ASI has participated in and contributed to the above-mentioned initiatives through:

• project selection and evaluation (as part of the Data Coordination Team);
• data provision of COSMO-SkyMed, SAOCOM (only within the ASI Zone of Exclusivity defined in agreement with CONAE within the SIASGE program) and PRISMA images for earthquakes, volcanoes, and RO monitoring;
• on a case-by-case basis, depending on the type of demonstration activities required by the end users, the development of experimental (thus not operational) scientific products that are jointly analysed with the end users in order to promote innovation, as well as exportability and replication beyond the single activation or international cooperation experience.

To support both the Supersites and the DRM projects, ASI, in coordination with WG members and CEOS Agencies, has delivered more than 20,000 EO products to date and is actively involved in demonstrating novel scientific products, generated through tailored exploitation of the Italian state-of-the-art COSMO-SkyMed radar imaging technology, to address specific challenges faced by the affected communities in terms of DRM and DRR. For example, COSMO-SkyMed data were instrumental for the U.S. Geological Survey’s Hawaiian Volcano Observatory in tracking Kīlauea’s reawakening in 2018. Moreover, the interferometric use of COSMO-SkyMed data proved valuable for monitoring the volcanic processes that occurred after the eruption of the St. Vincent volcano, allowing the decrease in surface deformation and the major changes recorded in the area (e.g. the volcanic dome collapse and crater-filling phenomena) to be monitored.
More recently, COSMO-SkyMed data helped reveal the deformation related to the dike intrusion recorded in 2021 on the Reykjanes Peninsula, and proved beneficial for monitoring the seismic and deformation processes related to the 2023 Turkey-Syria earthquake (for which a dedicated acquisition plan was required and planned in coordination with the team of the Italian National Institute of Geophysics and Volcanology, INGV). In the framework of the Recovery Observatory (RO) initiative, ad hoc collections of COSMO-SkyMed radar images were acquired over Haiti to monitor the recovery and rehabilitation of the south-western department, which was severely affected by Hurricane Matthew in October 2016 and, more recently, by the Mw 7.2 earthquake and Hurricane Grace in August 2021. In this regard, some showcases will be presented at the conference to highlight the ASI contribution to WGDisasters activities for DRM and DRR purposes. The examples will be discussed in light of the experience gathered by ASI during the various activations and projects and the constant interaction with the other space agencies and end users.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: Enhancing Disaster Response Through Cloud-Based Multi-Mission EO Data Processing

#stac #cog

Authors: Fabrizio Pacini, Mauro Arcorace, Marco Chini, Zachary Foltz, Roberto Biasutti
Affiliations: Terradue Srl, Environmental Research and Innovation, LIST, ACRI-ST, ESRIN, European Space Agency
The effective use of Earth Observation (EO) data during disasters is critical for timely response and recovery efforts. Since 2000, the International Charter Space and Major Disasters has provided satellite imagery at no cost to the end user, empowering global EO experts to deliver information products essential for disaster relief operations. Supporting this mission in recent years, the ESA Charter Mapper platform has been developed as an innovative cloud-based processing environment, showcasing the transformative potential of cutting-edge technologies in EO data exploitation. For the analysis of remote sensing data, the Charter Mapper provides a dedicated web application where Project Managers (PMs) and Value Adders (VAs) can find pre-processed multi-sensor EO data, including extracted metadata, perform visual analysis, and submit on-demand processing to extract geo-information from satellite imagery. After searching for a desired calibrated dataset via the GUI, PMs/VAs can visualize either an overview or a single-band asset in the map at full resolution and apply on-the-fly changes to the image rendering by stretching the histogram. Furthermore, users can combine single-band assets of calibrated datasets to create custom intra-sensor RGB band composites on the fly, or employ expressions to derive a binary mask from a single-band asset. Users can also visually compare pre- and post-event images directly in the map using a slider bar to depict the evolution of catastrophic events. Beyond visualization, the ESA Charter Mapper offers a robust suite of EO processing services designed to meet diverse analytical needs during disaster response and recovery. Accessible through an intuitive interface, the platform’s portfolio includes 26 processing services that support both systematic and on-demand workflows.
These services empower users to perform advanced data operations such as pan-sharpening, band combination, image co-location and co-registration, change detection, cloud masking, and hotspot and burned area mapping. For SAR data, the platform enables InSAR processing for surface displacement monitoring and coherence analysis. Furthermore, users can perform unsupervised image classification, raster filtering, vectorization, and map composition. Outputs include geo-information products such as spectral indices, flood masks, and burned area maps in various formats, from TOA reflectance to false-color RGBA visualizations. Recent developments introduced advanced functionalities to facilitate the generation of Value Added Products (VAPs), such as the GIS functions panel, where users can work with vector files, and the Map Composition functionality, which generates a professional cartographic product as a PDF or PNG file. The Charter Mapper integrates technologies such as Kubernetes, the SpatioTemporal Asset Catalog (STAC), and cloud-optimized GeoTIFF (COG), enabling streamlined access, visualization, and processing of data from a constellation of 41 EO missions managed by 24 international space agencies and data distributors. By employing Common Band Names (CBN) for harmonized spectral mapping and automating ingestion workflows, the platform ensures rapid, systematic pre-processing of diverse datasets, including optical and SAR imagery. This automation enables consistent, high-quality, analysis-ready datasets to be available within short timeframes, a critical advantage for timely decision-making in disaster response. Additionally, the Charter now hosts a growing archive of multi-mission data, providing a valuable resource for analysis and reanalysis across different activations and scenarios. In September 2021, the Charter Mapper was officially released into operations.
So far, it has facilitated PM and VA users in accessing and using a large amount of EO data acquired over 195 Charter activations. The architecture of the Charter Mapper has been designed with scalability and adaptability in mind, benefiting from the Open Geospatial Consortium (OGC) Application Package best practice. This approach, developed in collaboration with the EO Exploitation Platform Common Architecture (EOEPCA) under ESA, enables EO applications to be portable and reproducible across different infrastructures and cloud environments. By leveraging the Common Workflow Language (CWL) and containerized workflows, EO algorithms can be seamlessly deployed, whether for local testing, distributed Kubernetes clusters, or OGC API Processes. The Charter Mapper adopts this framework to support rapid deployment of EO algorithms, ensuring that its processing services are both robust and scalable for disaster response applications. The Charter Mapper architecture is designed to address complex challenges in harmonizing, visualizing, and processing large EO datasets, and it can provide a replicable framework for multi-mission EO data management. Its blueprint is a scalable and adaptable solution for other EO initiatives and is well-suited for applications beyond disaster response, such as environmental monitoring, urban planning, and climate change analysis. This oral presentation will showcase the platform's innovative technical framework and its role in supporting disaster relief efforts. We will present recent case studies from Charter activations, demonstrating how the automated workflows and scalable architecture facilitate faster and more reliable disaster response. Furthermore, we will illustrate how the Charter Mapper's design and operational principles can serve as a replicable model for other EO platforms seeking to address the growing demand for efficient data processing and analysis across multiple sectors.
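The expression-based masking described above (deriving a binary mask from a single-band asset) can be sketched generically in numpy. This is not the Charter Mapper's own expression syntax; the array and threshold below are invented for illustration only:

```python
import numpy as np

def binary_mask(band: np.ndarray, expression) -> np.ndarray:
    """Apply a per-pixel expression to a single-band asset and return
    a binary (0/1) mask, mimicking an on-the-fly band-math step."""
    return expression(band).astype(np.uint8)

# Hypothetical single-band asset (e.g. a normalized index) and threshold.
band = np.array([[0.10, 0.50],
                 [0.35, -0.20]])
mask = binary_mask(band, lambda b: b > 0.3)
# mask now flags the two pixels whose value exceeds 0.3
```

In a real workflow the array would come from reading a COG asset rather than being typed in by hand.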
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: Diffusion Model-Driven Heterogeneous Change Detection for Very High-Resolution Rapid Natural Disaster Response

Authors: João Gabriel Vinholi, Anis Amziane, Marco Chini, Patrick
Affiliations: Luxembourg Institute Of Science And Technology
Our work focuses on developing a novel approach to heterogeneous change detection by integrating high-resolution (<1 m²/pixel) and lower resolution (>3 m²/pixel) optical imagery. The proposed method leverages latent space information derived from an advanced diffusion-based image-to-image translation pipeline, which maps low-resolution images into the domain of very high-resolution (VHR) imagery. The translated images are further processed using the same diffusion-based model to extract deep latent features for effective change detection. The motivation for this work stems from the practical challenges associated with accessing consistent VHR imagery for both pre- and post-event analysis. VHR images are often costly and not always available for the required temporal windows, particularly in scenarios involving rapid changes such as natural disasters or urban development. For instance, pre-event VHR imagery for natural disaster events is frequently inaccessible. By enabling comparisons across heterogeneous imagery, our approach reduces reliance on VHR imagery, making it feasible to perform accurate change detection using widely available medium- and low-resolution imagery. This capability is crucial for improving access to timely and cost-effective remote sensing analysis in critical applications such as disaster management, environmental monitoring, and infrastructure development. Our approach introduces a unified latent space representation, enabling direct comparison between heterogeneous images while preserving spatial and radiometric consistency. The framework extracts features exclusively in the forward diffusion process, ensuring that domain-specific nuances are accurately captured in the latent space. Extensive evaluations demonstrate that this methodology enhances the ability to detect subtle yet critical changes across resolutions, outperforming traditional pixel- and feature-based methods. 
These experiments span critical events such as the Beirut port explosion (2020), the Turkey-Syria earthquake (2023), and others, analyzing imagery from Planet SkySat, Planet Dove, and Sentinel-2 sensors.
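The final comparison step, detecting change between two co-registered latent feature stacks, can be illustrated with a per-pixel cosine distance, a common choice for feature-based change detection. The authors' diffusion-derived features and decision rule are not reproduced here; the toy arrays are invented:

```python
import numpy as np

def change_map(feat_pre: np.ndarray, feat_post: np.ndarray, thresh: float) -> np.ndarray:
    """Per-pixel change detection from two feature stacks of shape (C, H, W):
    cosine distance between the C-dimensional feature vectors, thresholded."""
    num = (feat_pre * feat_post).sum(axis=0)
    denom = np.linalg.norm(feat_pre, axis=0) * np.linalg.norm(feat_post, axis=0) + 1e-8
    distance = 1.0 - num / denom          # 0 = identical features, 2 = opposite
    return (distance > thresh).astype(np.uint8)

# Toy 2-channel features over a 1x2 image: first pixel unchanged, second flipped.
pre = np.array([[[1.0, 1.0]], [[0.0, 1.0]]])     # shape (2, 1, 2)
post = np.array([[[1.0, -1.0]], [[0.0, -1.0]]])
cm = change_map(pre, post, thresh=0.5)
```

The same pattern applies regardless of whether the features come from a diffusion model or any other encoder.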
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: Development of Innovative 3D Based Damage Assessment Products for Insurance Market Sector Applications in Case of Extreme Wind Events

Authors: Henri Giraud, Zoé Papirer, Danielle Alves Teixeira da Silva, Hervé Tong, Emilie Bronner, Dimitri Lallement, Dawa Derksen
Affiliations: ICube-SERTIT, Descartes Underwriting, CNES
Extreme wind events such as thunderstorms, tornadoes, hurricanes, and typhoons can be responsible for disasters in terms of human lives, but also in terms of assets. Indeed, the losses caused by such events can be substantial, particularly in a context of climate change where the intensity of natural disasters and their impact follow an increasing trend. Assets impacted by extreme wind events can be of different natures: building structures such as warehouses or homes; energy infrastructures (solar panels or wind turbines, for example); and forests and plantation areas. Considering that these events represent a worldwide threat, this is a particularly interesting topic for the insurance sector. Indeed, being able to accurately and quickly identify the impact of such climatic phenomena on buildings, energy or forestry assets is currently a real challenge. Nowadays, customers who suffer losses due to extreme wind events must often wait before the damage assessment can be carried out, as experts still need to perform manual on-field evaluations. The development of insurance coverage tailored to losses related to an event, more data-driven and less dependent on human intervention, would greatly benefit customers and bring real added value. In this context, Earth Observation satellite data represent an undeniable asset for developing such innovative solutions. However, satellite remote sensing currently faces several technical obstacles in mapping the impact of extreme wind events, the main one probably being the low temporal and spatial resolution of satellite imagery. Indeed, since the extent of damage is often very limited spatially, imagery with a spatial resolution around or below one meter is required. Furthermore, sun illumination, shadow areas, relief or even clouds can affect imagery quality, with a negative impact on damage detection. 
Last but not least, in the specific case of forestry, it can be difficult to distinguish between areas damaged by an extreme wind event and clearcuts possibly initiated by the client after the disaster in order to limit the losses. To overcome the current technical obstacles and meet the needs expressed by customers, Descartes Underwriting, a French tech-driven insurer providing parametric insurance against climate and emerging risks, and ICube-SERTIT, a public research lab specialized in remote sensing and image processing, harness very high resolution optical satellite imagery and 3D data to conduct comprehensive assessments of damage in the aftermath of such extreme wind events. With assistance from CNES through the Ambition Aval initiative, Descartes and ICube-SERTIT aim to benefit from the current (Pléiades-HR, Pléiades Neo) and future (CO3D) satellite constellations to promote the development of sustainable services that address user needs, leveraging spatial data in innovative ways that could yield valuable insights for the broader application of satellite-derived information in future space missions. The objective is to develop a new service with innovative damage mapping products, along with estimates of destruction intensity, based on 3D data over the impacted areas. This requires the generation of Digital Surface Models (DSM) from stereoscopic satellite images acquired before and after the catastrophic event, combining the resulting DSM layers to improve damage detection. The methodological developments, combining radiometry and altimetry information derived from satellite data, should improve on current traditional methods where only radiometric values are considered. These cutting-edge satellite sensor capabilities, offering sub-meter spatial resolution and 3D capacities, will bring enhanced accuracy to damage estimations, to quantify economic losses and evaluate changes to the landscape. 
The aftermath of Hurricane Irma in September 2017 over the French Caribbean island of Saint Martin is selected as a use case for the product development stages. To this end, several Pléiades-HR stereoscopic datasets acquired before and after the event are used. This will be complemented with validation databases, capitalizing on the results of the Copernicus Emergency Management Service EMSR232 activation.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: Copernicus Emergency Management Service On-Demand Mapping. What Does it Bring to the Disaster Management Community?

Authors: Pietro Ceccato, Simone Dalmasso, Peter Spruyt, Ines Joubert-Boitat, Cristina Rosales-Sanchez, Emanuele Sapino, Guido Di Carlo, Paolo Pasquali, Alan Steel, Oliva Martin-Sanchez, Javier Belo-Garcia
Affiliations: EC Joint Research Centre
Rescue forces and aid organizations rely on up-to-date, comprehensive, and accurate information about hazard extent, affected assets, and damages to respond swiftly and effectively. To address these challenges, the Copernicus Emergency Management Service (CEMS) On-Demand Mapping provides maps and products utilized by various actors in crisis management, such as national civil protection authorities within EU Member States, EU agencies and services, and humanitarian aid organizations. CEMS On-Demand Mapping generates maps by leveraging Copernicus satellites, including Copernicus Contributing Missions and aerial components, to monitor natural disaster impacts in vulnerable regions worldwide. The CEMS On-Demand Mapping component offers detailed information on disasters through its activation service, supporting all phases of the disaster management cycle, from preparedness to emergency response and recovery. - Preparedness (pre-event) activations help reduce the impact of potential disasters by providing risk assessment analysis, including hazard, exposure, vulnerability, and risk analysis for both natural and human-related disasters. - Emergency response activations support immediate post-disaster management activities by delivering an event extent and impact assessment within hours of a service request. - Recovery (post-event) activations provide disaster impact assessment and support planning and monitoring to guide reconstruction and recovery efforts. All activation products are freely accessible in the activation viewers https://rapidmapping.emergency.copernicus.eu/ and https://riskandrecovery.emergency.copernicus.eu/search/ . Additionally, situational reports detailing each activation can be found on the respective activation pages. The primary objective of our presentation is to showcase the activities of CEMS On-Demand Mapping, highlighting the challenges and solutions it offers to map natural disasters and their impacts in a rapid and global manner.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.15/1.16)

Session: A.07.05 Monitoring and predicting surface water and flood dynamics - PART 1

The socio-economic consequences of floods are rising rapidly, as floods are the most frequent and impactful weather-related disasters, affecting nearly 800 million people in the past decade and causing economic losses exceeding $300 billion. In this context, remote sensing has emerged as a critical tool for data collection and observation, especially in regions where field surveys and gauging stations are limited, such as remote areas and developing nations. The integration of remotely-sensed variables—like digital elevation models, river width, flood extent, water level, flow velocities, and land cover—into hydraulic models offers the potential to significantly enhance our understanding of flood processes and improve predictive capabilities.

Over recent decades, research has focused on optimising the use of satellite observations, supported by both government and commercial initiatives, and numerous datasets from airborne sensors, including aircraft and drones. Recent advancements in Earth observation (EO) have further enhanced the monitoring of floods and inland water dynamics, utilising optical imagers, Synthetic Aperture Radars (SARs), and Global Navigation Satellite System Reflectometry (GNSS-R) to detect surface water, even in densely vegetated regions. Radar altimeters now measure water levels over smaller lakes and rivers. However, despite these advancements, the update frequency and timeliness of most remote sensing data products are still limited for capturing dynamic hydrological processes, which hinders their use in forecasting and data assimilation. Additionally, spatial and temporal inconsistencies across different sensors pose challenges in creating integrated multi-sensor products, such as fused surface water and flood extent products, water volume estimates, and wetland maps.

The scientific community has increasingly recognized the potential of remotely-sensed data for calibrating and validating hydraulic models, and to revolutionise real-time flood monitoring. With the expansion of open data from sources such as the European Space Agency (ESA), and the availability of more Earth observation data than ever before, this progress is expected to continue.

This session invites cutting-edge presentations on flood monitoring and mapping through remotely-sensed data, focusing on:

- Remote sensing data for flood hazard and risk mapping, including commercial satellite missions and airborne sensors (aircraft and drones);
- Remote sensing techniques for monitoring flood dynamics;
- The use of remotely-sensed data for calibrating or validating hydrological or hydraulic models;
- Data assimilation of remotely-sensed data into hydrological and hydraulic models;
- Enhancements in river discretization and monitoring through Earth observations;
- River flow estimation using remote sensing;
- Machine learning and deep learning-based flood mapping or predictions;
- Ideas for developing multi-satellite data products and services to improve the monitoring of flood and surface water dynamics.

Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Parameterizing and Validating Hydraulic River Models from Satellite Altimetry and a Global Hydrological Model

Authors: Aske Folkmann Musaeus, Dr. Cécile Marie Margaretha Kittel, Dr. Karina Nielsen, Mr. Roar Askær Jensen, Dr. Monica Coppo Frias, Mr. Connor Chewning, Prof. Peter Bauer-Gottwein
Affiliations: DHI A/S, Department of Space Research and Technology, Technical University of Denmark, Department of Geosciences and Natural Resource Management, University of Copenhagen
Climate change is increasing the frequency and severity of flood events – but at the same time, the number of in-situ stations observing hydrometric parameters is decreasing. Hydraulic models for flood prediction and river management rely on observations of water surface elevation (WSE) and discharge, combined with terrain elevation surveys for river and floodplain geometry. Efforts to calibrate and validate hydraulic river models with satellite Earth observation (EO) datasets aim to reduce the reliance on scarce and inconsistent in-situ observations. This provides the opportunity to model rivers in data-scarce areas, where in-situ surveying is limited by physical access, political boundaries, safety, or cost. Recent satellite missions like ICESat-2 and SWOT provide inland water altimetry datasets with significantly higher spatial resolution and, for the first time, provide the opportunity to estimate the water surface slope (WSS). The interconnection between river WSE, WSS, cross-sectional geometry, discharge, and hydraulic roughness enables the estimation of some of these parameters, provided that others are observed. The 70 cm resolution of the ICESat-2 ATL03 altimetry product is sufficient to map the exposed part of river cross-sections, and both ICESat-2 and SWOT provide observations of WSE and WSS. The SWOT mission detects surface water within the 120 km swath of the KaRIn instrument, where each waterbody is observed 2-6 times per 21-day orbit cycle. If discharge estimates are available, the effective hydraulic properties of the submerged part of the cross-section can be estimated using the diffusive wave approximation of the 1D Saint-Venant equations. Multi-mission WSE and slope observations not used in cross-section definition can be used to validate the model by identifying sections of the river with deviations between modelled and observed WSE. 
Hydraulic anomalies like rapids and supercritical sections can be identified with the dense spatial coverage and resolution of the ICESat-2 and SWOT datasets. Thus, hydraulic river models can be parameterized and validated by combining the satellite altimetry products with a global hydrological model (GHM), without any in-situ input. In this study, we use the DHI-GHM developed at DHI A/S, which provides historical and forecasted simulations of river hydrology globally down to hourly timesteps. The simulated runoff from the DHI-GHM is used to estimate the hydraulic properties of the submerged cross-section at the time of the ICESat-2 crossing, and the simulated runoff time series is used to force the hydraulic river model for validation. With the 20-year hindcast period of the DHI-GHM, we can evaluate the predictive capability of the model against the altimetry record of earlier missions. We demonstrate the feasibility of this workflow for a large river case study to show scalability and global applicability. Using these methods, we can create large scale river models of basins without access to extensive terrain surveys or in-situ stations observing hydrometric parameters. Additionally, the operational DHI-GHM system already allows the extraction of hydrological simulations for any catchment in the world, ensuring scalability of the test case. With this work, we strive to create a river model that is consistent with satellite altimetry, with the outlook of assimilating satellite altimetry data into large-scale hydraulic models operationally.
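The roughness-estimation idea can be sketched in miniature: with the friction slope approximated by the observed water surface slope, Manning's steady uniform-flow relation (a simplification of the diffusive wave formulation the authors use) can be inverted for an effective roughness n, given a modelled discharge and observed geometry. All numbers below are hypothetical:

```python
import math

def manning_n(Q: float, area: float, hyd_radius: float, slope: float) -> float:
    """Invert Manning's equation Q = (1/n) * A * R^(2/3) * S^(1/2)
    for the effective roughness n, given discharge, flow area A,
    hydraulic radius R, and (water surface) slope S."""
    return area * hyd_radius ** (2.0 / 3.0) * math.sqrt(slope) / Q

# Hypothetical reach: GHM discharge 500 m^3/s, cross-section area 400 m^2,
# hydraulic radius 4 m, altimetry-derived slope 1e-4 m/m.
n = manning_n(Q=500.0, area=400.0, hyd_radius=4.0, slope=1e-4)
# n comes out near 0.02, a plausible value for a natural channel
```

In the workflow described above, the exposed part of the cross-section would come from ICESat-2 ATL03 points and the slope from ICESat-2/SWOT WSE observations.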
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: A Multi-Source Approach for Enhanced Flood Delineation Using Harmonized Satellite Data

Authors: Edoardo Arnaudo, Gaetano Chiriaco, Claudio Rossi
Affiliations: Fondazione LINKS
Flooding events increasingly threaten communities worldwide, with their frequency and severity amplified by climate change. While Earth Observation (EO) data provides crucial capabilities for flood monitoring and response, current operational solutions often face limitations due to their reliance on specific sensor types or the inherent delays in data processing and delivery. In this work, we present a more flexible approach to flood delineation that attempts to overcome some of these constraints by harmonizing multi-source satellite data into a unified analytical framework. Central to our approach is the use of multi-modality, which integrates complementary data from optical and synthetic aperture radar (SAR) sensors. Optical imagery provides high-resolution visual information, which is valuable for identifying surface water features under clear-sky conditions. However, its utility is frequently hindered by cloud cover or poor lighting, particularly during flood events. SAR, on the other hand, is less affected by atmospheric conditions and can penetrate clouds, making it a reliable alternative for detecting water surfaces in challenging environments. By fusing these two modalities, we leverage their individual strengths to create a more robust and comprehensive flood mapping solution. Our framework is further designed to handle partial modalities, recognizing that optical data may be unavailable or degraded due to noise or obstructions. This capability ensures that the model remains operational and effective even when one data source is partially or entirely missing. Fusing data from multiple sources not only enhances the resilience of the system, but also improves the accuracy of flood delineation by reducing uncertainties inherent in single-modality approaches. Our methodology introduces three key innovations in flood mapping and modality-fusion. 
First, we develop a comprehensive multi-source dataset by harmonizing existing flood mapping repositories: starting from KuroSiwo and integrating information from other sources, such as WorldFloods and MMFlood, we create a unified data structure that preserves the unique characteristics of each source while enabling consistent analysis. The KuroSiwo dataset covers 43 major flood events from six different continents, with images ranging from 2015 to 2022. Each event is composed of a pair of pre-event and post-event Sentinel-1 GRD images and a label delineating the permanent water and flooded areas. WorldFloods covers 119 flood events that occurred between 2016 and 2019. Each pair of images is composed of a Sentinel-2 optical image and a multiclass segmentation mask. During the unification process, the missing modality for each dataset was retrieved and processed. When a Sentinel-1 GRD image is missing, we directly download the corresponding Sentinel-1 Radiometrically Terrain Corrected (RTC) image for the area of interest. Similarly, when the GRD is available, we substitute it with the RTC version to ensure consistency and improved quality. The result of the unification process is a harmonized multi-modal dataset that can be used to train multi-modal models for flood delineation. Additionally, each flood event includes at least one pre-event image and one post-event image, ensuring robust temporal coverage for event analysis. This harmonization process addresses challenges in temporal alignment, spatial resolution differences, and annotation inconsistencies across sources. Second, we implement a modality-agnostic deep learning architecture capable of processing both Synthetic Aperture Radar (SAR) and optical imagery, allowing for flood delineation regardless of weather conditions or specific sensor availability. 
The model achieves this flexibility through a novel encoder-decoder structure that maintains separate processing streams for each modality while sharing high-level semantic features, enabling robust performance even when only single-source data is available. The model's performance is compared with other state-of-the-art methods that leverage multimodality, including cross-modal feature alignment techniques applied to remote sensing and multi-modal foundation models. Third, we introduce a temporal fusion mechanism that leverages pre-event imagery from multiple sources to improve flood extent estimation accuracy. This approach enables the system to better differentiate between permanent water bodies and flood-affected areas, while also capturing the dynamic nature of flood events through time series analysis. Validation results demonstrate significant improvements over single-source approaches, achieving an F1 score of 0.78 for flood extent mapping across diverse geographic regions and environmental conditions. The system maintains robust performance even in challenging scenarios where cloud cover limits optical imagery availability or where SAR data exhibits significant noise due to urban structures or vegetation. The model is made operational through the OVERWATCH (HEU GA n.101082320) and UNICORN (HEU GA n. 101180172) projects to provide practical value for emergency response, with processing times in the order of minutes from data acquisition to flood map delivery. This rapid turnaround, combined with the system's ability to utilize multiple data sources, can provide emergency responders with timely and accurate flood extent information, which is crucial for disaster response. Also, thanks to the model's capabilities of working with multi-modality and handling partial modalities, rapid mapping remains effective even when either optical or SAR imagery is unavailable. 
This ensures reliable flood delineation, even in challenging conditions where data availability may be limited. Our work contributes to advancing the state-of-the-art in flood mapping by demonstrating how harmonized multi-source datasets and modality-agnostic deep learning architectures can overcome traditional limitations in operational flood monitoring. The approach has been validated across diverse geographic regions and flood types, offering a robust solution for emergency management agencies and highlighting the potential for future integration of additional EO data sources. Future developments will focus on expanding the temporal analysis capabilities and incorporating new data sources, possibly including high-resolution commercial satellites and other sources.
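The F1 score reported above can be computed for any pair of binary flood masks. A minimal implementation (not the authors' evaluation code; the toy masks are invented) looks like this:

```python
import numpy as np

def f1_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """F1 for binary flood masks: harmonic mean of precision and recall,
    where 1 = flooded and 0 = not flooded."""
    tp = float(np.logical_and(pred == 1, truth == 1).sum())
    fp = float(np.logical_and(pred == 1, truth == 0).sum())
    fn = float(np.logical_and(pred == 0, truth == 1).sum())
    if tp == 0.0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy 2x3 masks: two true positives, one false positive, one false negative.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
score = f1_score(pred, truth)
```

Because F1 ignores the (usually dominant) true-negative background pixels, it is a common choice for evaluating flood extent maps.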
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Advancing Flood and Drought Prediction with SEED-FD: Leveraging Remote Sensing for Hydrological Forecasting

Authors: Vanessa Pedinotti, Luca Brocca, Paolo Filippucci, Jean-Christophe Poisson, Alexis Exposito, Peter Burek, Malak Sadki, Eric Marchand, Gilles Larnicol, Gwyneth Matthews, Andrew Benett, Nicola Martin, Christel Prudhomme, Cinzia Mazzetti, Calum Baugh, Michel Wortmann, Carmelo Cammalleri, Vanesa Garcia, Alessandro Ceppi, Andrea Maier-Bode, Sebastian Marcu, Bjorg Brockmann, Mohammed Hassan, Andrea Toreti, Peter Salamon, Jesus Casado-Rodriguez, Stefania Grimaldi, Carlo Russo
Affiliations: Magellium-artal, CNR-IRPI, VorteX-io, IIASA, ECMWF, PoliMi, Design&Data, ICPAC, JRC
Floods and droughts, among the most devastating hydrological extremes, continue to pose significant socio-economic challenges globally. Despite advancements in forecasting, many regions, particularly in the Global South, suffer from limited accuracy due to scarce observational data and constrained model capabilities. The Strengthening Extreme Events Detection for Floods and Droughts (SEED-FD) project, supported by the European Commission under Horizon Europe, addresses these challenges by leveraging Earth observation (EO) and non-EO data to enhance flood and drought forecasting capabilities. SEED-FD's overarching goal is to improve the reliability and global applicability of the Copernicus Emergency Management Service (CEMS) Early Warning Systems (EWS) for floods and droughts. This is achieved by enhancing every critical component of the CEMS Hydrological Forecasting Modelling chain, including refining hydrological processes and calibration strategies in the LISFLOOD model, integrating innovative data assimilation and machine learning techniques to reduce forecasting errors, and deriving new global forecast products. One important aspect is exploring the benefit of non-conventional observations in the modelling and forecasting chain, including precipitation, soil moisture and streamflow products from Earth Observation, and river discharge data from microstations. The project employs a two-phased approach. First, new algorithms and methods are developed and validated in data-rich regions (Danube and Bhima basins) to establish proof of concept. Next, these advancements are scaled and applied to three vulnerable and diverse regions: the Paraná River Basin (Brazil), the Niger River Basin (West Africa), and the Juba-Shebelle Basin (Horn of Africa). 
This presentation will showcase the mid-term results of SEED-FD, highlighting advancements in hydrological model calibration, enhanced process representation, machine learning-based post-processing, data assimilation, and improved flood and drought detection indicators. These advancements, applied initially in the Danube and Bhima basins, demonstrate significant progress in enhancing prediction reliability and offer insights into the scalability of solutions for diverse and vulnerable regions.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Integrating Remote Sensing and geomorphological analysis to assess the impacts of the Derna dams collapse

Authors: Roberto Sergio Azzoni, Dr Luca Forti, Dr Andrea Pezzotta, Andrea Zerboni
Affiliations: Università Degli Studi Di Milano - Dipartimento di Scienze Della Terra "A. Desio"
Between September 10th and 11th, 2023, the catastrophic failure of two dams released approximately 30 million m³ of water and debris, devastating large portions of the city of Derna along the northeastern coastline of Libya. This disaster resulted in over 10,000 fatalities and displaced thousands, underscoring the increasing socio-economic impacts of flood-related hazards. This study examines the geomorphological and urban context of the event, with a particular focus on the role of urban sprawl and its interaction with the geomorphic characteristics of the Wadi Derna. Its watershed covers 575 km² and is crossed by a drainage network with short runoff times, making the region particularly susceptible to flash floods. Over the past century, the city's expansion has predominantly occurred in lower, flood-prone areas of the alluvial fan, significantly increasing the exposure and vulnerability of the population to extreme events. High-resolution satellite imagery from ESA's Pleiades and SPOT 6 and 7 missions was pivotal in reconstructing the settlement dynamics of Derna and assessing the geomorphic impacts of the 2023 flood event. Digital Elevation Models (DEMs) derived from these data supported a DEM of Difference (DoD) analysis, allowing precise quantification of topographic changes and sediment displacement. Moreover, the integration of remote sensing data enabled the detailed mapping of flood extent and the assessment of damages, particularly in areas where field surveys were not feasible. Funded by the Italian Ministry of University and Research under the PRIN project "GEOTRes – Geoheritage Threatening and Resilience", this work underscores how remote sensing can support flood hazard mapping and the analysis of the impacts of extreme flood events in vulnerable settings.
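The DEM of Difference (DoD) analysis mentioned above amounts to differencing co-registered pre- and post-event elevation grids and summing erosion and deposition volumes above a level of detection. A minimal numpy sketch with invented toy grids (not the study's actual data or processing chain):

```python
import numpy as np

def dem_of_difference(dem_post: np.ndarray, dem_pre: np.ndarray,
                      cell_area: float, lod: float = 0.0):
    """DEM of Difference: per-cell elevation change (post minus pre) plus
    eroded and deposited volumes, ignoring changes below the level of
    detection (lod), which screens out DEM noise."""
    dod = dem_post - dem_pre
    significant = np.abs(dod) > lod
    erosion = -dod[significant & (dod < 0)].sum() * cell_area      # m^3 removed
    deposition = dod[significant & (dod > 0)].sum() * cell_area    # m^3 added
    return dod, erosion, deposition

# Toy 2x2 DEMs with 1 m^2 cells: one cell lowered by 2 m, one raised by 1 m.
pre = np.array([[10.0, 10.0], [10.0, 10.0]])
post = np.array([[8.0, 10.0], [11.0, 10.0]])
dod, ero, dep = dem_of_difference(post, pre, cell_area=1.0, lod=0.5)
```

In practice the level of detection would be derived from the vertical accuracy of the Pleiades/SPOT-derived DEMs.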
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Surface Water Inventory and Monitoring (SWIM): Hands-on Examples for Improved Flood Mapping and Water Resource Monitoring

#stac

Authors: Sandro Groth, Marc Wieland, Dr. Sandro Martinis
Affiliations: German Aerospace Center (DLR)
Over the last years, DLR has established automated workflows to derive the extent of open surface water bodies from various Earth Observation (EO) datasets. These workflows have consistently demonstrated their value in supporting flood mapping and water monitoring activities. In order to maximize the potential of these methods, we are developing an open-access surface water product at 10-20 m spatial resolution based on Sentinel-1/2 data. The primary objective is to provide easy access to global high-resolution surface water information and offer interfaces for seamless integration into automated rapid mapping and monitoring workflows to support the disaster management and water monitoring community. The proposed SWIM product contains two publicly available data collections: 1) Water Extent (SWIM-WE) provides a dynamically updated set of binary masks identifying open surface water bodies extracted from both Sentinel-1 and Sentinel-2 scenes. This data collection enables users to rapidly identify the water extent on a specific date or to analyze surface water dynamics over time covering large areas. 2) Reference Water (SWIM-RW) contains fused information on permanent and seasonal water bodies based on the SWIM-WE collection over an observation period of two years. This collection was designed to support disaster response workflows by providing pre-computed information on the "normal" hydrologic conditions to accelerate the identification of flooded areas. By considering seasonal water dynamics, potential overestimation of the inundation extent can be limited. Users are also able to access additional assets containing the relative water frequency as well as quality layers. The described product will be available via OGC web services on the DLR GeoService. All published data assets can be accessed from SpatioTemporal Asset Catalogs (STAC). We chose this technology as it allows for quick data search by filtering user-specific areas and time frames. 
Matching water masks can also be projected and mosaicked efficiently using open-source tools such as odc-stac. In combination with the publication of the SWIM product, a set of Jupyter Notebooks that demonstrate common use cases will be made available. For the visualization of SWIM data in GIS software or web mapping applications, Web Map Services (WMS) will be hosted. In this conference contribution, we aim to introduce the SWIM product to a broader audience and provide hands-on examples of how the data can be used for effective flood disaster response and long-term water resource monitoring. To showcase a typical rapid mapping workflow using SWIM, the visualization of observed water extent at a specific date and location using the SWIM-WE WMS service is demonstrated using the example of the 2024 flood event in Southern Germany. Additionally, the automated identification of flooded areas using the SWIM-RW reference water mask is shown. The importance of considering seasonality in reference water layers is demonstrated by comparing inundation masks from real flood events in Germany and India. To highlight potential use cases of the SWIM product apart from flood monitoring, time-series analysis of SWIM-WE items for water reservoirs in Germany is conducted to showcase the analysis of hydrologic drought conditions.
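Since SWIM assets are exposed through STAC, a typical query can be assembled programmatically. A minimal sketch follows; the collection id, endpoint and bounding box are illustrative assumptions, not the actual DLR GeoService identifiers:

```python
# Minimal sketch of assembling a STAC API /search request for SWIM-WE
# water masks. Collection id and coordinates are invented for illustration.

def build_stac_search(bbox, start, end, collection="SWIM-WE", limit=100):
    """Assemble a STAC API /search request body filtered by area and time."""
    return {
        "collections": [collection],
        "bbox": list(bbox),            # [min_lon, min_lat, max_lon, max_lat]
        "datetime": f"{start}/{end}",  # RFC 3339 interval
        "limit": limit,
    }

# Example: water masks over southern Germany around the June 2024 flood.
body = build_stac_search((9.5, 48.0, 11.5, 49.0), "2024-05-28", "2024-06-10")
```

The resulting body would be POSTed to the catalog's /search endpoint (for instance via pystac-client), and the matching items could then be loaded, projected and mosaicked with odc-stac as described above.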
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Near Real-Time Tracking of Lake Drainage in Arctic Permafrost Regions

Authors: Ingmar Nitze, Kayla Hardie, Chen Wang, Todd Nicholson, Luigi Marini, Benjamin M. Jones, Melissa Ward Jones, Matthew B. Jones, Wenwen Li, Anna Liljedahl, Guido Grosse
Affiliations: Alfred Wegener Institute Helmholtz Centre For Polar And Marine Research, Google, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Institute of Northern Engineering, University of Alaska Fairbanks, Woodwell Climate Research Center, Institute of Geosciences, University of Potsdam, Arctic Data Center, National Center for Ecological Analysis and Synthesis, University of California Santa Barabara, School of Geographical Sciences and Urban Planning, Arizona State University
Lakes are an abundant landscape feature in the Arctic and subarctic permafrost regions. While some Arctic lake types such as glacial lakes are fairly stable, other lakes such as thermokarst lakes are not. They are strongly interconnected with ice-rich permafrost, forming through thaw and consequently leading to further permafrost degradation under and around the lakes. Compared to lakes in temperate climates, thermokarst lakes are highly dynamic and can be polycyclic. They typically expand, resulting in carbon emissions as CO2 and CH4 due to the thaw of permafrost soils. Once they reach a drainage gradient or their thaw bulb penetrates through the permafrost layer, they often drain within hours to a few years. However, following drainage, permafrost may reform in the exposed lake sediments, allowing subsequent lake generations to form in the same location and thus leading to repetitive growth and drainage cycles over millennial timescales given favourable environmental conditions. Drained lakes often become carbon sinks due to permafrost aggradation and peat accumulation; however, carbon emissions from lake expansion outweigh the carbon uptake of drained lakes. Additionally, lake drainage can significantly impact local hydrology, wildlife, and water security for communities. Accelerated thermokarst lake cycling, including widespread drainage, indicates ongoing permafrost degradation and hydroclimatic changes at high latitudes. In this study, we monitor the dynamics of approximately 5 million circum-Arctic lakes and determine lake drainage events in near real-time (NRT). We extracted a baseline layer of polygons for lakes larger than 1 ha that existed between 2000 and 2020, based on previous publications, for the Arctic and subarctic permafrost regions.
We then created monthly mosaics from the recent period (2017-2024) using Dynamic World data based on Sentinel-2 to extract areas of surface water, snow/ice, bare ground, and vegetation—all of which inform lake status. For longer time scales, we also explore annual surface water information from the JRC Global Surface Water dataset based on Landsat data, which provides insights into permanent or seasonal water occurrence. We extract time-series information from these datasets for each of the ~5 million individual tracked lakes across the Arctic permafrost region and automatically identify sudden and sustained water surface loss as lake drainage. We leveraged Google Earth Engine along with the geemap and eemont Python packages for data retrieval and water area extraction, containerized the pipeline using Docker for portability, and executed it on Google Cloud Computing resources. This approach delivers a lightweight, reproducible solution that provides end users with an easy-to-use and transferable workflow. Preliminary results reveal distinct spatial and temporal patterns of lake drainage events consistent with previous regional analyses, such as notable drainage events in western Alaska from 2018-2020. In such event-driven years, drained lake area can be up to 50 times higher than in “quiet years,” though the number of drained lakes is less variable, with a factor of 2-3 between active and inactive years. Total drained area is thus also highly influenced by the size of drained lakes, which can vary considerably. Lake drainage also follows distinct seasonal patterns peaking in early summer (June and July) depending on ice-off timing. These results indicate both spatial and temporal clustering of drainage events. The temporal frequency offers a more precise understanding of significant short-term events that have been anticipated in regional analyses but not at a pan-Arctic level. 
As permafrost is not directly measurable from space, our findings provide a valuable proxy for assessing permafrost degradation. However, interactions between surface hydrology, permafrost, local geomorphology, weather and climate are extremely complex. This work lays the foundation for NRT analysis and an early warning system for lake drainage events in the northern permafrost zone. Within the framework of the Permafrost Discovery Gateway (PDG), we will further 1) utilize this novel dataset to explore spatio-temporal influencing factors on lake drainage and permafrost hydrology using geoAI methods and 2) enable NRT monitoring of permafrost thaw with preliminary products made publicly available through the automated workflows. In a broader scope, our data retrieval pipeline is easily adaptable to other regions and could be used for surface water monitoring, for example within defined areas such as water reservoirs, national parks, or individual nations.
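The drainage criterion described above (a sudden and sustained loss of water surface) can be sketched as a simple time-series test. This is an illustrative toy with invented thresholds, not the authors' pipeline:

```python
# Toy drainage detector: flag a lake as drained when its water area drops
# sharply below its earlier baseline and stays low. Thresholds are invented.

def detect_drainage(areas, drop_frac=0.75, sustain=3):
    """Return the index of the first sudden, sustained area loss, else None.

    areas: time-ordered water areas (e.g. monthly, in ha) for one lake.
    drop_frac: fraction of the pre-drop baseline that must be lost
               (0.75 means the area must fall to <= 25% of the baseline).
    sustain: number of consecutive observations that must remain low.
    """
    for t in range(1, len(areas) - sustain + 1):
        baseline = max(areas[:t])
        if baseline == 0:
            continue
        low = baseline * (1.0 - drop_frac)
        if all(a <= low for a in areas[t:t + sustain]):
            return t
    return None

monthly = [102, 98, 100, 97, 12, 10, 9, 11]  # abrupt, sustained loss at index 4
drain_index = detect_drainage(monthly)
```

Applied per lake to the ~5 million extracted time series, a check of this kind separates true drainage from short-lived dips caused by cloud, ice or classification noise via the `sustain` requirement.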
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K2)

Session: C.02.14 The EarthCARE Mission’s First Year in Orbit: Opening new Horizons for Cloud, Aerosol and Radiation Science - PART 1.

The Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) satellite mission aims to improve our understanding of cloud-aerosol-radiation interactions and the Earth radiation budget, such that they can be modelled with better reliability in climate and numerical weather prediction models. To achieve this objective, EarthCARE measures the three-dimensional structure of clouds, precipitation and aerosols, along with collocated observations of solar and terrestrial radiation.

This ESA-JAXA mission was successfully launched in May 2024 and, following the satellite and instrument commissioning phase, now provides unique co-registered observations from a suite of four instruments located on a common platform: (1) ATmospheric LIDar (ATLID), (2) Cloud Profiling Radar (CPR), (3) Multi-Spectral Imager (MSI) and (4) BroadBand Radiometer (BBR). EarthCARE global observations include vertical profiles of natural and anthropogenic aerosols, the vertical distribution of ice and liquid water content, the cloud mesoscale distribution, precipitation microphysics, estimates of particle size, convective vertical air motions, as well as atmospheric radiative heating and cooling profiles. In addition to enabling new insights into climate science and providing unique data for NWP improvements, EarthCARE continues the heritage measurements of CloudSat, CALIPSO and Aeolus, and bridges to future missions such as NASA's Atmosphere Observing System (AOS) and Aeolus-2.

The session invites contributions from the science community on EarthCARE and related science themes, including Passive and Active Observational Techniques; Cloud and Precipitation Microphysics, Aerosols and Radiation Process Studies; Radiation and Earth Radiation Budget; Scientific and User Applications as well as Long-Term Data Records. In addition, scientific synergies with heritage, operational and future satellite missions as well as with ground-based, air- or ship-borne campaign activities are welcome.

Contributions on Modelling, Assimilation and Parameterisation at Global, Regional and Cloud Level enhancing high-resolution atmospheric numerical model activities through evaluation and improvement using novel satellite observations on EarthCARE and related satellite missions are in particular invited. A focus is placed on the use of cutting-edge atmospheric climate and weather models, including "global km-scale" or “global storm-resolving models” and commensurate Earth observations of clouds, aerosols and convection.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K2)

Presentation: A first look at the contributions of the first global Doppler velocity measurements to EarthCARE’s synergistic cloud and precipitation retrievals

Authors: Shannon Mason, Professor Robin Hogan
Affiliations: ECMWF
EarthCARE’s flagship synergistic retrieval product “ACM-CAP” combines atmospheric lidar (ATLID), cloud profiling radar (CPR), and multispectral imager (MSI) measurements at nadir to make a “best estimate” of the quantities and properties of cloud, aerosol and precipitation. The retrieval framework underlying ACM-CAP is the Clouds, Aerosols, and Precipitation Retrieval using a Variational Technique (CAPTIVATE) algorithm. CAPTIVATE has been applied to many configurations of ground-based, airborne and spaceborne radars, lidars and radiometers in preparation for EarthCARE. Special attention has been paid to the novel capabilities of EarthCARE compared to the A-Train satellites, such as preparing for the use of radar Doppler velocity measurements, dominated in most cases by the fallspeeds of hydrometeors, to inform improved retrievals of the microphysical properties of rain and rimed snow. This additional information may justify the retrieval of an additional parameter of the drop size distribution in rain, and of the density (or degree of riming) of snowflakes in ice and snow. With the EarthCARE commissioning phase complete and the upcoming public release of EarthCARE’s synergistic data products, we can now start to investigate the statistical connections between the first global spaceborne Doppler radar measurements from EarthCARE and the retrieved properties of rain and snow in ACM-CAP across different precipitation regimes. In this study we show selected case studies of rain and snow observed by EarthCARE, and begin to characterise the microphysical properties of selected precipitation regimes. We demonstrate the distinct properties of drizzling stratocumulus, “warm rain” from cumulus congestus, and cold rain from melting snow, and investigate the spatial distribution and frequency of occurrence of stratiform riming of snow.
This information will provide critical global observational constraints that inform the development of improved parameterizations of cloud and precipitation processes in numerical weather models.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K2)

Presentation: ATLID L2 A-PRO Processor (Extinction, Backscatter, Depolarization and Classification Profile Products) status and results.

Authors: David Donovan, Gerd-Jan van Zadelhoff, Ping Wang
Affiliations: KNMI
ATLID[1] (ATmospheric LIDar) is the lidar flown onboard the multi-instrument Earth Cloud Aerosol and Radiation Explorer (EarthCARE[1]). EarthCARE is an ESA–JAXA mission that was launched in May 2024 and finished its commissioning phase in December 2024. ATLID is a three-channel, linearly polarized, high-spectral-resolution lidar (HSRL) system operating at 355 nm, optimized for cloud and aerosol sensing. Cloud and aerosol optical properties and target classification are key EarthCARE products. This paper provides an overview of the ATLID Level 2a (L2a; i.e., single-instrument) retrieval algorithms for deriving profiles of cloud/aerosol optical properties and the associated target classification. The L2 lidar optical property and classification algorithms ingest calibrated profiles of cross-talk-corrected molecular, co-polarized particulate, and cross-polarized attenuated backscatters. The accuracy of the L1 products has a direct impact on the quality of the L2 products; thus, an overview of the L1 ATLID data will also be given. The L2a lidar algorithms that retrieve the aerosol and cloud optical property profiles and classify the detected targets are grouped together in the so-called A-PRO (ATLID-profile) processor. The A-PRO processor produces the ATLID L2a aerosol product (A-AER); the extinction, backscatter, and depolarization product (A-EBD); the ATLID L2a target classification product (A-TC); and the ATLID L2a ice microphysical properties estimation product (A-ICE). This presentation provides an overview of the processor and its component algorithms. Examples based on ATLID observations will be presented and discussed, as well as validation examples. References [1] Wehr, T., Kubota, T., Tzeremes, G., Wallace, K., Nakatsuka, H., Ohno, Y., Koopman, R., Rusli, S., Kikuchi, M., Eisinger, M., Tanaka, T., Taga, M., Deghaye, P., Tomita, E., and Bernaerts, D.: The EarthCARE mission – science and system overview, Atmos. Meas.
Tech., 16, 3581–3608, https://doi.org/10.5194/amt-16-3581-2023, 2023. [2] Donovan, D. P., van Zadelhoff, G.-J., and Wang, P.: The EarthCARE lidar cloud and aerosol profile processor (A-PRO): the A-AER, A-EBD, A-TC, and A-ICE products, Atmos. Meas. Tech., 17, 5301–5340, https://doi.org/10.5194/amt-17-5301-2024, 2024.
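The HSRL principle underlying the optical-property retrieval can be illustrated with a toy calculation: because the molecular backscatter profile is known independently from meteorological data, the extinction follows from the log-slope of the attenuated molecular channel. A deliberately simplified sketch (constant profiles, molecular extinction and multiple scattering neglected; invented numbers, not the A-PRO algorithm itself):

```python
import math

# Toy HSRL extinction estimate: with the molecular backscatter beta_mol
# assumed known, total extinction follows from the log-slope of the
# attenuated molecular-channel signal. All values are invented.

dz = 100.0            # range-bin size [m]
alpha_true = 1.0e-4   # particulate extinction [1/m]
beta_mol = 1.5e-6     # assumed-known molecular backscatter [1/m/sr]

# Forward model: attenuated molecular backscatter over 50 bins.
signal = [beta_mol * math.exp(-2.0 * alpha_true * i * dz) for i in range(50)]

# Retrieval: alpha = -0.5 * d/dz ln(signal / beta_mol)
alpha_ret = [
    -0.5 * (math.log(signal[i + 1] / beta_mol)
            - math.log(signal[i] / beta_mol)) / dz
    for i in range(len(signal) - 1)
]
```

In the noise-free toy case the retrieved profile reproduces the input extinction exactly; the real A-EBD algorithm must additionally handle detector noise, cross-talk and molecular attenuation, which is why regularization and classification are part of the processor.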
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K2)

Presentation: First EarthCARE Level 2 Outputs and Overview of the EarthCARE L2 Processing Chain

Authors: Gerd-Jan van Zadelhoff, Donovan David, Ping Wang, Jos de Kloe, Anja Huenerbein, Ulla Wandinger, Moritz Haarig, Dr Sebastian Bley, Athena Floutsi, Nicole Docter, Rene Preusker, Nils Madenach, Nicolas Clerbaux, Edward Baudrez, Almudena Velazquez, Jason Cole, Howard Barker, Mark Shephard, Zhipeng Qu, Meriem Kacimi, Irene Hidalgo Revilla, Carla Salas Molar, Carlos Domenech, Shannon Mason, Professor Robin Hogan, Bernat Puigdomenech, Pavlos Kollias
Affiliations: KNMI, TROPOS, Freie Universität Berlin (FUB), RMIB, Environment and Climate Change Canada (ECCC), GMV, ECMWF, McGill
The interactions between clouds, aerosols, and solar and terrestrial radiation are central to understanding Earth's climate system. Despite a long history of satellite observations, further high-quality novel observations are needed for atmospheric model evaluation and process studies. It has been recognized that true height-resolved global observations of cloud and aerosol properties are essential for making progress. The ESA/JAXA EarthCARE mission, launched in May 2024, aims to fill these gaps with a sophisticated payload that includes the Atmospheric Lidar (ATLID), the Cloud Profiling Doppler Radar (CPR), the Multi-Spectral Imager (MSI), and the Broad-Band Radiometer (BBR). Operating in a sun-synchronous orbit, EarthCARE delivers unique global datasets, including radiative Top of Atmosphere (TOA) flux measurements, which are essential for studying the Earth’s climate processes. The ESA scientific retrieval processors fully exploit the synergy of the calibrated Level 1 (L1b) data from the four instruments, comprising geolocated measurements such as lidar attenuated backscatter, radar reflectivity, Doppler velocities and multi-spectral radiances. The L1 data has undergone extensive calibration and validation during the commissioning phase to ensure the accuracy and reliability needed for downstream Level 2 processing. The Level 2 processing chain integrates L1 inputs to produce 25 science products developed within the CARDINAL project and continued under the EarthCARE-DISC project for at least the coming three years. These L2 products include detailed vertical profiles of clouds, aerosols, and precipitation, as well as reconstructed 3D atmospheric domains. Additionally, they can be further processed to estimate key radiative properties, such as heating rate profiles, essential for understanding atmospheric energy exchange processes.
A distinct feature of the L2 chain is the forward modeling of the reconstructed 3D domains to calculate the TOA radiances and fluxes, which are subsequently compared to the BBR measurements. This innovative approach enhances the understanding of aerosol-cloud-radiation interactions and their role in the Earth’s energy budget. Early results from L2 processing highlight the mission’s ability to capture complex atmospheric structures, such as thin cirrus clouds, aerosol layers, and deep convective clouds, with unprecedented vertical resolution. Comparisons with coincident observations from ground-based, airborne and satellite platforms validate the accuracy of these products, providing confidence in the retrievals. This presentation provides an overview of EarthCARE’s L1 data, early insights from the single instrument L2 products, and the continued development of the L2 processing chain. These efforts highlight the mission’s critical role in advancing the understanding of climate processes and refining atmospheric models. References Hogan, R. J., Illingworth, A. J., Kollias, P., Okamoto, H., and Wandinger, U.: Preface to the special issue “EarthCARE Level 2 algorithms and data products”: Editorial in memory of Tobias Wehr, Atmos. Meas. Tech., 17, 3081–3083, https://doi.org/10.5194/amt-17-3081-2024, 2024. Eisinger, M., Marnas, F., Wallace, K., Kubota, T., Tomiyama, N., Ohno, Y., Tanaka, T., Tomita, E., Wehr, T., and Bernaerts, D.: The EarthCARE mission: science data processing chain overview, Atmos. Meas. Tech., 17, 839–862, https://doi.org/10.5194/amt-17-839-2024, 2024.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K2)

Presentation: Current status and early results of EarthCARE/CPR

Authors: Takuji Kubota, Eiichi Tomita, Tomomi Nio
Affiliations: Japan Aerospace Exploration Agency
The Earth Cloud Aerosol and Radiation Explorer (EarthCARE) satellite was launched at 7:20 a.m. (JST) on 29 May 2024. EarthCARE is equipped with four sensors with different observation methods: radar, lidar, imager, and radiometer. The Cloud Profiling Radar (CPR) onboard the EarthCARE satellite is the world’s first spaceborne Doppler radar in the W-band (94 GHz) and was jointly developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT). The CPR conducted its first observations on 12 and 13 June 2024, observing a cloud area in a stationary front, called the Baiu front, over the ocean east of Japan; it measured the internal structure of the clouds and succeeded in the world's first measurement of vertical cloud motion from space. The CPR has continued observations since June 2024, and on 4 October 2024 JAXA released the first synergistic cloud image, observed for Typhoon Shanshan (2024) as it approached the Japanese archipelago. The EarthCARE products have been developed and will be distributed by both JAXA and ESA through their data provision systems. Level 1 products have been developed by the sensor provider agencies: JAXA has developed the CPR Level 1 product, and ESA has developed the other Level 1 products. JAXA and ESA have developed Level 2 geophysical products individually, with continuous exchange of information between Japan and Europe. CPR, Atmospheric Lidar (ATLID) and Multispectral Imager (MSI) Level-2 products provide cloud mask, cloud phase and cloud microphysics (such as cloud effective radius, liquid water content, optical depth, etc.) for the respective sensors, together with synergy products using combinations of the sensors. In addition, the CPR provides the Doppler velocity products.
ATLID Level-2 includes aerosol flagging, aerosol component type (such as dust, black carbon, sea salt and water-soluble), as well as aerosol optical properties including aerosol extinction. The cloud and aerosol products will be used to derive shortwave and longwave radiative fluxes, whose consistency with the Broadband Radiometer (BBR) will be checked to produce the final four-sensor radiation product. Furthermore, a wide range of application research activities are planned to achieve the EarthCARE mission objectives. EarthCARE observation data will contribute to understanding cloud, aerosol, and radiation processes, evaluations and improvements of climate models and numerical weather prediction (NWP) models, and atmospheric quality monitoring. This presentation will introduce the current status and early results of the CPR and the Japanese activities for the EarthCARE mission.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K2)

Presentation: First cloud physical and optical properties from EarthCARE’s Multi-Spectral imager

Authors: Anja Hünerbein, Sebastian Bley, Nicole Docter, Nils Madenach, René Preusker
Affiliations: Leibniz Institute For Tropospheric Research, Institute of Meteorology, Freie Universität Berlin
The Multi-Spectral Imager (MSI) is one of four instruments aboard the EarthCARE platform launched in May 2024. The ESA cloud, aerosol and radiation mission EarthCARE provides active profiling and passive imaging measurements from a single satellite platform. This unique set-up makes it possible to extend the products obtained from the combined active/passive observations along the ground track into the swath by means of active/passive sensor synergy, to estimate the 3D fields of clouds and to assess radiative closure. The backscatter lidar (ATLID) and cloud profiling radar (CPR) provide vertical profiles of cloud and aerosol parameters with high spatial resolution. Complementing the active measurements from EarthCARE, the passive MSI delivers visible and infrared images for a swath width of 150 km and a pixel size of 500 m. MSI observations are used to extend the spatially limited along-track coverage of products obtained from the active sensors into the across-track direction. After reaching its final orbit, MSI began measurements in June 2024 and has been collecting data since that time. A comprehensive set of remote sensing algorithms for cloud detection and the retrieval of cloud physical and optical properties has been developed. The cloud product algorithms combine infrared and visible techniques to determine both physical and radiative properties. These products include cloud detection and masking, cloud-top properties (pressure, temperature), thermodynamic phase, and optical and microphysical properties (optical thickness, particle size, water path). The challenge for these algorithms is to provide retrievals on a global operational basis. Since launch, many of the algorithms have been significantly refined and improved. We present the progress achieved so far on the core MSI cloud algorithms and provide examples. Various EarthCARE frames and a comparison with the active measurements are provided.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K2)

Presentation: EarthCARE’s First year in Orbit

Authors: Thorsten Fehr, Bjoern Frommknecht, Timon Hummel, Emilio Alvarez, Christophe Caspar, Patrick Deghaye, Olivier Defauchy, Michael Elsinger, Matthias Gollor, Robert Koopman, Takuji Kubota, Fabien Marnas, Tomomi Nio, Stephanie Rusli, Toshiyuki Tanaka, Eiichi Tomita, Vasileios Tzallas, Georgios Tzeremes, Jonas Von Bismarck, Kotska Wallace
Affiliations: ESA, JAXA
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall E2)

Session: D.01.08 4th DestinE User eXchange - Opening Session and Visionary Insights

Opening Session and Visionary Insights

This session will set the scene for this year’s Destination Earth (DestinE) User eXchange, which is composed of six dedicated sessions and posters. The event places a strong focus on end users and practical examples of how they can explore DestinE. The European Commission will outline the vision and direction of the initiative, highlighting its evolving role in supporting user-driven applications.

This will be followed by a keynote and updates from ESA, ECMWF, and EUMETSAT, covering key achievements and the system status of the DestinE Platform, the Digital Twins, and the Data Lake.

Welcome to the 4th DestinE User eXchange


  • Nicolaus Hanowski - ESA

Policy Meets Practice: Steering DestinE Toward Accessible, End-User Solutions


  • Gustav Kalbe - European Commission

Keynote “From Data to Action: Using DestinE for resilient and healthier cities”


  • Ana Patrícia Oliveira - CTO for Space at +ATLANTIC

Progress Presentations 3Es



The three Entrusted Entities – ESA, ECMWF, EUMETSAT – will give an update on key achievements, status of the system and plans for Phase II.

Growing the DestinE Platform


  • Kathrin Hintze - ESA

Advancing DestinE’s Digital Twins and the Digital Twin Engine


  • Irina Sandu - ECMWF

The Evolving DestinE Data Lake


  • Lothar Wolf - EUMETSAT
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.61/1.62)

Session: F.04.02 Supporting Global Food Security through Earth Observation - PART 1

Remote sensing technologies play a crucial role in shaping agriculture policies and food systems worldwide. By leveraging satellite imagery and data processing technologies, we can monitor agricultural activities from planting to harvest. Earth observation data also provides valuable insights into crop health, soil moisture levels, and land use patterns, allowing policymakers and humanitarian organizations to make informed decisions about resource allocation, disaster response, and food distribution. By harnessing the power of Earth observation, we can enhance resilience in food systems, improve agricultural productivity, and ensure food security for communities around the globe.


Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Operational crop yield forecasting using earth observation data and machine learning pipelines at sub-national levels

Authors: Filip Sabo, Dr Michele Meroni, Dr Hervé Kerdiles, Francesco Collivignarelli, Dr Felix Rembold
Affiliations: Joint Research Centre
Multitemporal remote sensing and weather data are essential for crop yield forecasting over large regions, playing a critical role in ensuring food security by identifying areas where crop production may be at risk. Machine learning and deep learning techniques are increasingly applied to process Earth Observation (EO) data, offering the potential to automate and standardize the yield forecasting process, thereby enhancing accuracy and timeliness. In this contribution we describe an operational yield forecasting workflow based on sub-national crop yield statistics provided by countries and international organizations. The operational system leverages machine learning (Random Forest, XGBoost, Gaussian process, SVR, Lasso) and deep learning (1DCNN and LSTM) regression models to relate sub-national crop yield data to ASAP EO (FPAR) and gridded weather predictors (rainfall, temperature and soil moisture). To verify the added value of ML and DL, the models are benchmarked against two null models: (1) the average of observed yields per administrative unit, and (2) the yield forecast by a linear regression between maximum FPAR and yield at administrative-unit level (the peak FPAR model). Due to the limited availability of crop yield statistics, extensive feature engineering/selection, hyperparameter tuning and nested cross-validation are performed during the training procedures (hindcasting) for the machine learning and deep learning models. The model with the lowest relative root mean square error in validation is then used for operational crop yield forecasts at sub-national level. The forecasts are issued at 75% of the crop growth cycle and at the end of the growing season. Feature importance metrics (plots) of the top-performing models are published alongside the forecasts.
We provide feature importance plots for each region and the most recent forecasts, as well as global feature explanation covering all regions and training years. Additionally, model-agnostic confidence intervals are included for the estimates. The system is deployed in Algeria for winter cereals and in South Africa for summer crops and it is currently being tested in Benin, Mozambique and Zambia with plans to extend to additional countries as sub-national crop yield data is becoming available. Regional and national authorities can benefit from this system, while international partnerships can also leverage this information to strengthen food security efforts and enhance the alignment of forecasts. This is particularly valuable as various organizations rely on different indicators and crop statistics.
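The benchmark against null model (1) and the relative root mean square error criterion mentioned above can be sketched as follows (invented numbers; a minimal illustration, not the operational JRC code):

```python
import math
from statistics import mean

# Sketch of the null-model benchmark: a candidate yield forecast is only
# useful if it beats the per-unit average of historical yields. All
# yield values below are invented for illustration.

def rrmse(pred, obs):
    """Relative RMSE, in percent of the mean observed yield."""
    mse = mean((p - o) ** 2 for p, o in zip(pred, obs))
    return 100.0 * math.sqrt(mse) / mean(obs)

history = [2.1, 2.4, 1.9, 2.6]              # past yields, one admin unit [t/ha]
observed = [2.3, 2.0, 2.7]                  # held-out test years
null_pred = [mean(history)] * len(observed) # null model (1): unit average
model_pred = [2.25, 2.05, 2.55]             # hypothetical ML forecast

beats_null = rrmse(model_pred, observed) < rrmse(null_pred, observed)
```

In the operational setting this comparison is made within the nested cross-validation (hindcasting) loop, so that a model is only deployed when it outperforms both null models on years it has never seen.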
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Building a Sustainable Earth Observation-Based Agricultural System in Zimbabwe to Support Food Security

Authors: Lorenzo De Simone, Sophie Bontemps, Boris Noogard, Phibion Chiwara, Rogerio Bonifacio
Affiliations: Agrifood Economics and Policy Division (ESA). Food And Agriculture Organization of the United Nations (FAO), Université catholique de Louvain (UCL), World Food Programme (WFP)
Global food security faces growing challenges from climate change, resource constraints, and economic uncertainties. These pressures demand innovative, data-driven solutions to monitor agriculture and inform decision-making. Zimbabwe, with its rich agricultural diversity and heightened vulnerability to climate shocks, stands as a critical focus of the EOSTAT project. Funded through the Zimbabwe Emergency Food Production Project (ZEFPP) program of the African Development Bank (AfDB) and led by the Food and Agriculture Organization (FAO), this initiative aims to establish a sustainable Earth Observation (EO)-based system that strengthens agricultural resilience, disaster risk management, and food security while fostering local capacity and ownership. EOSTAT Zimbabwe integrates three key technical components. First, the project provides timely and accurate crop mapping and acreage estimates using the open-source Sentinel for Agricultural Statistics (Sen4Stat) toolbox developed under European Space Agency (ESA) funding. Sentinel satellite data and machine learning algorithms are applied to classify crops and estimate acreage, supporting better-informed agricultural decisions. A prototype map of irrigated wheat for the 2023 winter season was generated at national scale as a proof of concept. More importantly, the main crop types during the 2024 summer season were also mapped at national scale, based on an ad-hoc field campaign combining statistical and windshield surveys. F-Score values for maize, millet and sorghum were higher than 75%, while sugar cane was also very well mapped (F-Score of 98%). These maps were combined with the agriculture statistical survey to estimate the main crop acreage, with a reduced confidence interval compared to a situation without EO data. Second, the project focuses on disaster risk management by combining crop mapping with flood monitoring and drought assessments.
Flood risks are tracked using World Food Program (WFP)’s Automated Disaster Analysis and Mapping (ADAM) system, while droughts are monitored using an enhanced version of FAO’s Agricultural Stress Index System (ASIS), tailored to Zimbabwe’s context using open and free data. Third, the project develops a dynamic parcel boundary mapping and farmer registry, enabling equitable access to subsidies, insurance, and financial tools through detailed geospatial data linking farmers to their parcels. These components work in tandem to create a comprehensive framework for agricultural monitoring and resilience building. The data outputs generated by EOSTAT Zimbabwe have broad applications that extend beyond immediate agricultural monitoring. Timely and accurate crop and disaster data contribute to early-season planning, disaster preparedness, and post-disaster impact assessments, allowing stakeholders to make informed decisions and allocate resources effectively. The integration of parcel data with the farmer registry opens new opportunities for targeted support, making the system a cornerstone of Zimbabwe’s evolving digital agriculture framework. At the heart of EOSTAT Zimbabwe is the deployment of the Sen4Stat system on the Zimbabwe Center for High Performance Computing (ZCHPC) infrastructure, which empowers national stakeholders to process and analyze satellite data locally and produce seasonal crop type maps. This innovation not only enhances Zimbabwe’s technical capacity but also establishes a sustainable foundation for integrating EO technologies into national agricultural monitoring frameworks. This infrastructure supports the project’s goal of producing accurate, timely, and actionable data for decision-making in agriculture and disaster risk management. The success of EOSTAT Zimbabwe stems from a balanced partnership that integrates national leadership, technical expertise, and global best practices. 
Key national institutions, including the Ministry of Agriculture, Zimbabwe National Statistics Agency (ZIMSTAT), ZCHPC, the Zimbabwe National Geospatial and Space Agency (ZINGSA), Meteorological Services Department (MSD) and the National University of Science and Technology (NUST), provide essential guidance, local insights, and capacity-building support to ensure the system's alignment with national priorities and long-term sustainability. Collaboration with international partners such as UCLouvain and the WFP contributes advanced methodologies and operational expertise, enabling the project to adhere to global standards and address humanitarian objectives. Initiatives like the mapathon and technical training programs strengthen local capacity, empowering stakeholders to maintain and expand the system. Supported by the AfDB through the ZEFPP program, EOSTAT Zimbabwe is firmly linked to regional development goals, enhancing resilience and promoting sustainability in the country's agricultural sector. As the project progresses, it is laying the groundwork for a digital agriculture platform, planned for full development in 2025. This platform will make EOSTAT outputs accessible to a diverse range of stakeholders. In the mid-term, farmers should be able to request disaster compensation, apply for subsidies, and track their productivity, while policymakers should gain insights into crop performance, disaster risks, and the impacts of agricultural policies. This digital infrastructure promises to transform Zimbabwe’s agricultural landscape, promoting more efficient, transparent, and responsive governance. Although still in its early stages, EOSTAT Zimbabwe exemplifies the transformative potential of EO-based systems in addressing food security challenges. By integrating national priorities with innovative technologies, the project is establishing a scalable model that can be replicated across Africa. 
Its balanced approach to partnership ensures that the system is not only impactful but also sustainable, fostering local ownership and resilience. In an era of mounting climate uncertainties, EOSTAT Zimbabwe stands as a testament to the power of collaboration in driving sustainable agricultural transformation and building a more food-secure future.
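The F-Score figures quoted for the crop type maps are the standard harmonic mean of per-class precision and recall. As a minimal illustration of how such a per-class score is computed from a validation sample (the reference and predicted labels below are invented, not the project's validation data):

```python
# Per-class F-score for a crop map validated against field reference labels.
# The crop classes and label values are illustrative, not the project's actual legend.

def f_score(reference, predicted, crop):
    """Harmonic mean of precision and recall for one crop class."""
    tp = sum(1 for r, p in zip(reference, predicted) if r == crop and p == crop)
    fp = sum(1 for r, p in zip(reference, predicted) if r != crop and p == crop)
    fn = sum(1 for r, p in zip(reference, predicted) if r == crop and p != crop)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

reference = ["maize", "maize", "sorghum", "millet", "maize", "sorghum"]
predicted = ["maize", "sorghum", "sorghum", "millet", "maize", "sorghum"]
print(round(f_score(reference, predicted, "maize"), 2))  # → 0.8
```

In practice such scores are computed over the full set of validation pixels or segments, typically with a library routine rather than by hand.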
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Improving Agricultural Productivity Monitoring in Data-Sparse and Conflict Regions

Authors: Yuval Sadeh, Sheila Baber, Ritvik Sahajpal, Shabarinath Nair, Nikhil Sasi Rajan, Louise Leroux, Mark Wronkiewicz, Tesfaye Shiferaw, Gerald Blasch, Oleksandra Oliinyk, Christina Justice, Inbal Becker-Reshef
Affiliations: School of Earth, Atmosphere and Environment, Monash University, Department of Geographical Sciences, University of Maryland, ICube, University of Strasbourg, CIRAD UPR AIDA, Jet Propulsion Laboratory, California Institute of Technology, International Maize and Wheat Improvement Center – CIMMYT Ethiopia
Timely and accurate crop production data is essential for decision-makers at both regional and national scales. These insights are critical for stakeholders, including institutions responding to droughts and food crises, as well as commodity market actors, who depend on early information about agricultural productivity to anticipate challenges and allocate resources effectively. However, in data-scarce regions such as parts of Africa and conflict zones like Ukraine, the lack of reliable yield datasets for model calibration poses a significant challenge to the applicability of conventional AI-based yield prediction methods, making accurate crop yield estimation in these environments difficult. This study focuses on maize and wheat yield estimations at regional scales in Ethiopia, Malawi, and Ukraine, using two distinct methodologies. The first is a calibration-free approach that integrates satellite imagery with crop model simulations to generate high-resolution (pixel-scale) yield maps. By monitoring crop growth through remote sensing data and aligning it with simulated growth trajectories, this method avoids the need for yield training data, making it suitable for regions with minimal data availability. The second approach employs machine learning, using official regional-scale yield data to train models for yield mapping where such data is accessible. The accuracy of both methods was evaluated against available official yield datasets, providing a comprehensive understanding of their performance. Results demonstrate that the calibration-free approach is highly effective in data-scarce regions, producing reliable yield estimates despite the lack of local calibration data. Meanwhile, the machine learning approach proved effective in areas where regional-scale yield datasets were available for training. 
This research, conducted as part of NASA Harvest projects in collaboration with CIMMYT/CGIAR, Malawi’s Ministry of Agriculture, and Ukraine’s Ministry of Agriculture, contributes actionable yield estimates that inform food security policies and resource allocation strategies. By addressing the unique challenges of data-sparse environments, this study underscores the transformative role of Earth observation technologies in enhancing agricultural monitoring and yield prediction. It highlights the critical potential of satellite data in mitigating global food security challenges in the context of intensifying climate impacts, resource limitations, and geopolitical instability.
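The calibration-free approach described above matches the remotely sensed growth trajectory of each pixel against crop-model simulations and adopts the yield of the closest-matching simulation. A toy sketch of that matching step, with invented LAI trajectories and yields rather than any values from the study:

```python
# Sketch of a calibration-free yield estimate: match an observed canopy trajectory
# (e.g., a satellite-derived LAI time series) against crop-model simulations run
# under varying conditions, and adopt the yield of the best-matching simulation.
# All trajectories and yields below are made-up illustrative numbers.

def rmse(a, b):
    """Root-mean-square error between two equal-length trajectories."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# Simulated LAI trajectories and the yield (t/ha) each simulation produced.
simulations = [
    ([0.5, 1.8, 3.2, 2.4, 0.9], 5.1),
    ([0.4, 1.2, 2.1, 1.6, 0.6], 3.2),
    ([0.6, 2.2, 4.0, 3.1, 1.2], 6.4),
]

observed = [0.5, 1.9, 3.3, 2.5, 1.0]  # remote-sensing retrieval for one pixel

best_traj, best_yield = min(simulations, key=lambda s: rmse(observed, s[0]))
print(best_yield)  # → 5.1
```

A real implementation would run an ensemble of crop-model simulations per location and account for retrieval noise, but the core idea, trajectory matching instead of yield-label training, is as above.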
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Agricultural Production Information through Enhanced Sampling Frames for Improved Food Security Monitoring: A Case Study in Mozambique

Authors: Dr. Claudia Paris, Prof. Rolf de By, Robert Ohuru, Andrew Nelson, Furkan Celik, Florian Ellsäßer, Lourenco Manuel, Sosdito Estevão Mananze
Affiliations: University Of Twente, ITC, Centro de Estudos de Políticas e Programas Agroalimentares (CEPPAG), Universidade Eduardo Mondlane (UEM), Escola Superior de Desenvolvimento Rural, Universidade Eduardo Mondlane (UEM)
1. Introduction and motivation
Agricultural production data play a crucial role in achieving the Sustainable Development Goals (SDGs) [1], enhancing productivity, and ensuring food security, especially in low-income countries [2]. Generating reliable agricultural information from surveys and reference data requires effective sampling strategies. Traditional sampling frames for national accounting are typically derived from population census data and reflect the spatial and demographic distribution of the population [3]. While capturing spatial variations in the population, they fail to represent the spatial patterns of agricultural land use and production, often leading to agricultural statistics that do not accurately reflect the true state of agriculture. Additionally, population-based sampling frames tend to be static, lacking adaptability to different farming systems and to changes in farming practices over time. Another significant challenge in collecting agricultural information is managing the logistics of annual surveys, including the methods used to collect, process, store, and later analyze the survey data. Significant advancements can be achieved by leveraging modern technologies [4]. Digital survey tools can streamline data collection, while databases capable of integrating digitally collected data allow for near-real-time quality checks, enhancing the accuracy and timeliness of the field data. Data analytics and visualisation platforms further support the process by enabling efficient data analysis, aggregation, and the generation of reliable statistical outputs. 
In the framework of the “Statistics from Space: Next-Generation Agricultural Production Information for Enhanced Monitoring of Food Security in Mozambique” project [5] — funded by the Government of the Republic of Korea (Ministry of Agriculture, Food and Rural Affairs, MAFRA) with contributions from the CGIAR Initiative on Digital Innovation and coordinated by the International Food Policy Research Institute (IFPRI) — the use of remote sensing data was investigated to produce and disseminate accurate crop production statistics crucial for making timely food policy decisions in Mozambique. Indeed, the lack of reliable and timely agricultural production data in the country has historically both contributed to and resulted from low levels of agricultural investment and development over recent decades [6]. This contribution aims to present the methodologies and the findings developed by the Faculty of ITC, University of Twente, in collaboration with local partners from Mozambique, namely, Centro de Estudos de Políticas e Programas Agroalimentares (CEPPAG) and Escola Superior de Desenvolvimento Rural, both of the Universidade Eduardo Mondlane (UEM). Specifically, it focuses on the technical solutions designed to address the following challenges: (1) developing and testing a novel remote sensing-based method for sampling frame design, (2) developing and testing a smartphone app for field data collection, (3) building an aggregation system and data visualisation platform, and (4) generating crop statistics and cropping system maps at high spatial resolution combining field data, satellite data, and machine learning.
2. Methodology
Sampling frame design: the enhanced dynamic area sampling frame is designed to optimise the representativeness of field data across space and time, given limited resources for field operation. 
This innovative approach leverages: (1) 20 years of 10-day composite Normalised Difference Vegetation Index (NDVI) data from SPOT-VGT and Proba-V satellites to automatically identify regions of consistent long-term agricultural productivity at 1 km spatial resolution, (2) soil property data grouped into landforms (alluvial plains, pediplains, and floodplains), and (3) 2023 farming intensity maps (categorised as scarce farming, regular farming, and irrigated farming) generated through machine learning applied to Sentinel-1 and Sentinel-2 satellite data. These datasets are combined to divide agricultural areas into homogeneous land-use strata with similar vegetation dynamics, soil properties, and farming intensities. Within each stratum, representative clusters were randomly selected for field-level data collection, referred to as Sampling Areas (SAs). For each identified SA, high-resolution Google Maps RGB images were used to perform digital segmentation to delineate the largest contiguous areas having uniform agricultural practices.
Digital field data collection: to streamline the field data collection process, a mobile app was developed using Open Data Kit (ODK). This open-source platform enables the creation of customisable data collection forms compatible with Android devices, allowing enumerators to upload data directly to a central server. Enumeration teams visit the designated SAs to gather agricultural data based on the predefined segments via the mobile app. For each segment, enumerators answer a series of questions (e.g., crop cover percentages, farming practices). 
Using a hierarchical classification scheme, they label each segment into one of three primary categories: “cultivated,” “uncultivated,” or “object.” Within the first category, “cultivated,” further distinctions are made at a second classification level, such as “agricultural field,” “grassland,” or “kitchen garden.” The “uncultivated” category encompasses more natural features not actively used for agriculture, while the “object” category includes human-made structures or isolated large trees. To further support data collection, some SAs are mapped using UAVs (Uncrewed Aerial Vehicles) to produce high-resolution maps and quantitative data. The collected information is then used to generate accurate crop statistics for the target provinces and create detailed high-spatial-resolution crop-type maps.
Crop statistics and maps: the crop type mapping process relied on time-series optical multispectral images from Sentinel-2 and Synthetic Aperture Radar (SAR) data from Sentinel-1. The integration of SAR and optical data, due to their complementary characteristics, enhanced the accuracy and timeliness of crop mapping. Various machine learning algorithms, including Random Forest, Support Vector Machine, Decision Tree, and XGBoost, were tested to identify the most effective and computationally efficient approach. The final output consisted of pixel-based 10 m spatial resolution crop type maps for selected major crop types in Mozambique: bean, rice, cassava, and maize.
Data visualisation platform: the platform provides a geo-information angle on agricultural production and statistics. It brings together various geospatial data sources that identify features with which statistical values can be associated or that may help explain such statistics, and that may provide insight into production parameters. 
Technically, the data can be characterised as either (1) stemming directly from satellite data and their outputs, or (2) derived geometries that represent natural or man-made features in the landscape. Specifically, the platform is populated by: (1) the Copernicus 30 m Digital Elevation Model (DEM), (2) 20-year NDVI maps based on SPOT and Proba-V data, (3) OpenStreetMap data providing ancillary geospatial information about river networks, roads, etc., (4) administrative boundaries, (5) human population data to allow mapping of population densities, (6) the spatial locations of the selected SAs, (7) satellite-based crop type maps, and (8) agricultural statistics per stratum and per administrative unit.
3. Product generation and conclusion
After a pilot phase in one province (Gaza), we scaled up to cover three important and diverse crop-producing provinces: Gaza, Manica, and Zambezia. The direct outputs from this project are: (1) timely and reliable crop statistics for major crops; (2) a geo-referenced dataset of SAs identified using the proposed dynamic area sampling frame to collect field data on crop area, production, and yield; (3) improved tracking of progress towards SDG 2, especially targets 2.1, 2.3, and 2.4 through indicators 2.1.1, 2.3.1, 2.3.2, and 2.4.1; and (4) a suite of high spatial resolution crop area maps for the main growing season derived from machine learning and remote sensing data. 
As the outcome of this project, the improved agricultural production data aim to: (1) inform better policymaking on agricultural investments, subsidies, and initiatives at local, national, and regional levels, (2) enable relief and humanitarian organisations to plan interventions, provide more effective and cost-efficient services, and get food to those who need it, (3) drive agricultural markets, with prices often contingent on government statistics, and (4) mitigate seasonal price volatility, with significant welfare implications for small farmers, many of whom sell their crops at low prices post-harvest and are often obliged to purchase back food at high prices pre-harvest, consistent with SDG 2.
References
[1] Challinor, A., Watson, J., Lobell, D., Howden, S., Smith, D. and Chhetri, N. (2014). A Meta-Analysis of Crop Yield under Climate Change and Adaptation. Nature Climate Change, 4, pp. 287-291.
[2] Arndt, C., McKay, A., and Tarp, F. (Eds.) (2016). Growth and Poverty in Sub-Saharan Africa. Oxford University Press, Oxford.
[3] FAO (2019). Updating the sampling design for the IAI 2019. Report prepared by Dramane Bako, FAO Statistics Division.
[4] Rosegrant, Mark W. et al. (2014). Food security in a world of natural resource scarcity: The role of agricultural technologies. Washington, D.C.: International Food Policy Research Institute (IFPRI). http://dx.doi.org/10.2499/9780896298477
[5] https://www.ifpri.org/project/statistics-space-next-generation-agricultural-production-information-enhanced-monitoring/
[6] Ministry of Agriculture and Food Security (2017). Mozambique National Agricultural Investment Plan (PNISA): Assessment. Maputo, November 2017. https://www.masa.gov.mz/wp-content/uploads/2018/05/PNISA_Assessment_Final-Version_Nov-28.pdf
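The stratification step of the sampling frame, combining long-term NDVI dynamics, landform, and farming intensity into homogeneous strata and then drawing sampling areas within each, can be sketched as follows; the cell attributes, class names, and sample counts are illustrative placeholders, not project data:

```python
# Sketch of the stratification step: each 1 km cell gets a stratum key from its
# long-term NDVI class, landform, and farming-intensity class, then a fixed
# number of Sampling Areas (SAs) is drawn at random within each stratum.
import random
from collections import defaultdict

cells = [
    {"id": 1, "ndvi": "high", "landform": "alluvial plain", "intensity": "regular"},
    {"id": 2, "ndvi": "high", "landform": "alluvial plain", "intensity": "regular"},
    {"id": 3, "ndvi": "low", "landform": "pediplain", "intensity": "scarce"},
    {"id": 4, "ndvi": "high", "landform": "floodplain", "intensity": "irrigated"},
]

# Group cells into strata by the combination of the three layers.
strata = defaultdict(list)
for cell in cells:
    strata[(cell["ndvi"], cell["landform"], cell["intensity"])].append(cell)

# Draw one representative SA per stratum (the real design draws clusters,
# with counts proportional to stratum size and field-operation budget).
random.seed(42)
sampling_areas = [random.choice(group) for group in strata.values()]
print(len(sampling_areas))  # → 3 (one per stratum in this toy example)
```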
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Spatiotemporal Variations of Cropping Intensity in China (2001–2020): Implications for Sustainable Agriculture

Authors: Miao Zhang, Bingfang Wu, Dr. Hongwei Zeng
Affiliations: Aerospace Information Research Institute, Chinese Academy Of Sciences, College of Resources and Environment, University of Chinese Academy of Sciences
Achieving sustainable agriculture and food security is central to the United Nations' Sustainable Development Goals (SDGs). To meet the demands of a growing population and changing diets, global agricultural production must increase by 70–110% by 2050. However, this challenge is further compounded by the increasing frequency of extreme climate events and regional conflicts. To address critical human needs for food security and environmental sustainability, it is of major scientific significance to understand how agricultural land resources are utilized, both locally and globally. Cropping intensity (CI), defined as the number of cropping cycles per year, is a principal indicator of land use efficiency and food production. However, challenges remain due to the lack of generalizable algorithms for accurately and efficiently mapping global CI at fine spatial resolution, which has constrained the capacity to accurately detect farming practices across diverse landscapes. In this study, we developed a 30-m resolution CI mapping framework based on reconstructed time series of the Normalized Difference Vegetation Index (NDVI) from multiple satellite images. Using a binary crop phenophase profile indicating growing and non-growing periods, we estimated pixel-by-pixel CI by enumerating the total number of valid cropping cycles during the study years. Based on the Google Earth Engine (GEE) cloud computing platform, we implemented the framework to estimate CI during 2000–2020 in China by integrating available Sentinel-2, Landsat-series and MODIS imagery. The approach incorporated ground-based CI samples for validation, resulting in an overall accuracy of 92.9%. The spatial and temporal dynamics of cropping intensity in China and its different regions are analyzed with the support of cropping intensity datasets from the past 20 years. 
The results demonstrate that China's national average cropping intensity has declined by 6% over the past two decades. This decline can be attributed to the impact of agricultural policies that have been implemented with the objective of promoting cropland sustainability. These policies include measures such as cropland preservation, incentives for fallow periods, and crop rotation. Prior to 2010, cropping intensity demonstrated a period of growth, reaching 166%. However, by 2020, it had declined to 152%, exhibiting a notable decline in 2016. Significant regional disparities were identified, with an 8% increase observed in the Huanghuaihai region and a rise in CI noted in Inner Mongolia, while the Lower Yangtze River Basin experienced a notable decline of 6%. The decline in the Lower Yangtze can be attributed to a shift from double-cropping rice to single-cropping rice, which was driven by labour migration and changing agricultural policies. In contrast, the South and Southwest regions exhibited fluctuations, while the Northeast and Loess Plateau regions demonstrated stability. Despite the decline in national cropping intensity, remote sensing-based estimates indicate that China's total grain production increased by approximately 29% over the past two decades, with year-on-year increases in the majority of years. This growth is closely linked to the sustained agricultural land management as well as improvements in crop yields, driven by advancements in breeding technology, agricultural management (e.g. fertilization, irrigation), and infrastructure development. The findings highlight the influence of agricultural policy on CI, particularly regarding cropland preservation and crop rotation, and suggest that sustainable land use practices have been critical in maintaining food security.
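The cycle-counting step described above, in which a binary phenophase profile is reduced to a per-pixel cropping intensity, can be sketched as follows; the profile and the minimum-run threshold are illustrative, not the study's actual parameters:

```python
# Sketch of the cycle-counting idea: given a binary phenophase profile
# (1 = growing, 0 = non-growing) for one pixel-year, cropping intensity is the
# number of distinct growing periods.

def cropping_cycles(profile, min_length=3):
    """Count runs of 1s at least `min_length` long (filters spurious blips)."""
    cycles, run = 0, 0
    for flag in profile + [0]:  # sentinel 0 closes a trailing run
        if flag:
            run += 1
        else:
            if run >= min_length:
                cycles += 1
            run = 0
    return cycles

# Double-cropping example: two growing seasons in one year of 10-day composites.
profile = [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0]
print(cropping_cycles(profile))  # → 2
```

Averaging such per-pixel counts over cropland gives region-level percentages like the 152–166% figures above (e.g., an average of 1.52 cycles per year corresponds to a CI of 152%).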
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Rapid Agricultural Assessments in Support of Policy and Food Security

Authors: Inbal Becker-Reshef, Sven Gilliams, Mary Mitkish, Chris Justice, Patty Rhee, Dr. Sergii Skakun, Hannah Kerner, Michael Humber, Josef Wagner, Shabarinath Nair, Catherine Nakalembe, Ritvik Sahajpal, Christina Justice, Yuval Sadeh, Sheila Baber, Blake Munshell, Natacha Kalecinski, Gabriel Tseng, Mehdi Hosseini, Eric Vermote, Kristof Van Tricht, Juan Lavista Ferres, Andrew Zolli, James Verdin, Kiersten Johnson, Brad Doorn
Affiliations: University of Maryland, University of Strasbourg, GEOGLAM, Arizona State University, Monash University, VITO, Planet, Microsoft, NASA, USAID
The increasing severity and frequency of extreme weather events, armed conflicts, and declining agricultural market transparency underscore the fragility of global food systems in the face of external shocks. These disruptions pose significant risks to agricultural trade, food access, and the food security of millions worldwide. Addressing these challenges requires timely, transparent, and scalable information to enable effective responses. Enhanced capabilities for monitoring, forecasting, and assessing agricultural disruptions are essential to equip markets, humanitarian organizations, and policymakers with the information needed to safeguard lives and livelihoods. Satellite data analytics and associated technologies provide powerful input to address significant gaps in agricultural information during food system shocks or when ground access is disrupted. However, based on the experiences of the GEOGLAM community, including NASA Harvest, there remains a critical deficit in institutional capacity to deliver rapid, satellite-driven assessments despite a growing demand for such analysis. While many national and international organizations leverage remote sensing for agricultural monitoring, there is no dedicated, on-demand facility focused exclusively on agricultural rapid response. The need for such standing capacity has been recognized by multiple governments, UN agencies, humanitarian organizations, and policy frameworks. In response, we are developing an international agricultural rapid response center designed to be activated during events that threaten agricultural production or information transparency. This center focuses on three primary types of food system shocks: armed conflicts and wars; extreme weather events; and regions with high agricultural uncertainty or low data transparency. 
This presentation will discuss the need for such an international center, the international framework for building it, and will provide examples of our rapid response work and its impact.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 0.49/0.50)

Session: F.04.23 Expert insights on the Nature Restoration Regulation: enabling the policy implementation

The European Commission's Nature Restoration Regulation, adopted in August 2024, is a landmark law designed to combat biodiversity loss and restore degraded ecosystems across the EU. The law implements the EU Biodiversity Strategy for 2030 and is closely connected with the restoration targets of the Kunming-Montreal Global Biodiversity Framework adopted at the 15th Conference of the Parties to the Convention on Biological Diversity, ensuring alignment with climate action and sustainable land-use practices. The regulation aims to restore at least 20% of terrestrial and marine ecosystems by 2030, with all ecosystems in need of restoration targeted for recovery by 2050. It also features ecosystem-specific articles, including legally binding targets on forest ecosystems, agricultural ecosystems, and urban ecosystems, among others. Collaboration with stakeholders, including national environmental protection agencies, will be crucial for its successful implementation. This session aims to bring together the policy and the implementation perspectives, discussing how Earth Observation data can catalyze monitoring and reporting, existing data gaps, the consistency of existing data, and future support in environmental policy design, building on lessons learnt from key invited speakers.

Speakers:


  • Melissa de Kock - UNEP-WCMC
  • Ludvig Forslund - European Environment Agency (EEA)
  • Gebhard Banko - Environment Agency Austria
  • Emmanuel Pajot - EARSC
  • Andy Dean - Hatfield
  • Amanda Fronzi - WWF Italy
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K1)

Session: B.02.05 Restoring Biosphere Resilience: Transforming Agriculture, Forestry and Other Land Use (AFOLU) from Carbon Source to Sink - PART 1

To address climate change effectively, emissions must be cut by 50% every decade to achieve net zero emissions by 2050. A core component of this is the transformation of agriculture into a major carbon sink, as projected in current IPCC climate models. These models assume a shift from source to sink in Agriculture, Forestry and Other Land Use (AFOLU)—primarily agriculture—within the next two decades. In fact, even if fossil fuel use were eliminated today, emissions from the food system alone are enough to put the 1.5°C and 2°C targets out of reach. This pivotal role of the food system in climate dynamics underscores its broader impact on the planet. The global food system places immense pressure on Earth systems, primarily due to land-clearing (slash-and-burn practices and biomass burning), animal farming (enteric fermentation and manure management), and crop production (fertilisers and emissions from land and paddies). This makes it the single largest driver of planetary boundary transgressions and Earth system instability.

Agricultural ecosystems cover more than 40% of the global land surface, making agricultural land the largest terrestrial biome on the planet, with animal agriculture taking up 83% of it. In the past 300 years, a staggering 55% of all ice-free land has been converted into croplands, pastures and rangelands, leaving only 45% for natural or semi-natural ecosystems. Earth’s terrestrial ecosystems store a vast amount of carbon, about 60 times the yearly human-induced greenhouse gas emissions, with soil containing roughly 70% of this (1500–2400 GtC). To harness this potential, we must begin to reclaim agricultural land, a process facilitated by a global shift towards sustainable plant-based food sources, which could ultimately free up 75% of the agricultural land for rewilding and restoration.

The use of Earth Observation data in land applications is well-explored and maturing quickly, providing in-depth insights and monitoring services for terrestrial ecosystems, which in turn supports the transformation of food systems. We therefore invite Earth Observation data application researchers and engineers to submit abstracts for this session that:

  • Showcase best practices, case studies and research utilizing Earth Observation for agroecology, nature restoration, Agriculture, Forestry and Other Land Use (AFOLU) statistics monitoring, carbon sink tracking (e.g., via Net Primary Production (NPP) or Above-Ground Biomass (AGB)), monitoring nutrient loading in terrestrial and aquatic ecosystems, detecting resilience within agricultural landscapes for early warning systems, and more.

This session aims to raise awareness, nurture, support, and expand a community committed to transforming the current food system into one that regenerates and strengthens the biosphere’s innate resilience—one that preserves and restores land and aquatic ecosystems, allocates cropland to the most productive regions, adopts land management systems that work with nature rather than against it, transitions to plant-based food sources, and serves as a future carbon sink.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K1)

Presentation: Sub-National Carbon Dynamics in European Forests: A Novel Database to Resolve Data Gaps and Facilitate EO Integration for Improved Forest Use Observation

Authors: Florian Weidinger, Sarah Matej, Claudine Egger, Karl-Heinz Erb
Affiliations: Institute of Social Ecology (SEC), BOKU University
Forests play a crucial role in global carbon cycles, functioning as significant carbon sinks or sources, depending on their management and natural dynamics. However, forests in Europe are under increasing pressure from anthropogenic and natural impacts. Monitoring carbon dynamics—e.g. input flows like biomass increment, output flows like harvest, and changes in carbon pools like standing biomass—is essential for informed decision-making on sustainable forest use. Earth Observation (EO) using remote sensing offers the ability to monitor proxies such as vegetation indices (e.g., NDVI) for biomass and disturbance detection. However, these observations require reliable, harmonised, and accessible ground-based data at sufficiently high resolution for validation and extrapolation. Such data on European forest biomass are available from National Forest Inventories (NFIs) and censuses and have been compiled into various datasets. However, before they can easily be integrated with remote sensing data, some key limitations must be addressed:
1. Spatial Resolution: European-wide consistent time series data on biomass carbon in forests are currently only available at national level (such as the "State of European Forests" (SoEF)). Data at the national level are usually too coarse to be effectively linked to high-resolution remote sensing data.
2. Temporal Resolution: Existing sub-national datasets typically cover one year or 10-year averages, rather than annual time series. This is partly due to the periodicity of National Forest Inventory (NFI) data, which is often five years but can be longer than ten years. This limitation in temporal resolution makes it difficult to track short-term dynamics and extreme events.
3. Accessibility and Definition Consistency Issues: Data from NFIs and censuses, while normally publicly available, are often not easily accessible because of non-machine-readable formats or language barriers. 
Also, they typically have country-specific definitions of forest area and harvest. 4. Limited Scope: Many datasets emphasize on forest area and biomass stock, excluding critical metrics like harvest data that represent significant carbon flows and are vital for understanding socioeconomic drivers. To address these challenges, we present a novel database covering forest area, biomass stock, increment and harvest for 39 European countries across 218 regions (from NUTS 3 to national level) for the period 2000–2022 in consistent representation with annually resolution. This database has been constructed by collecting data from over 40 NFIs and census statistics and has been harmonised using existing forest assessments. To fill temporal biomass carbon stock gaps, the CaRbon Accumulation in ForesT (CRAFT), a bookkeeping model, was employed. The analysis derived from our database highlights that, between 2000 and 2022, forest area within the EU27 expanded by 6%, harvest levels increased by 21%, and biomass stock rose by 19%, although carbon stock growth has slowed in recent years. Nonetheless, regional trends display significant variability. For example, the Královéhradecký region in the south of the Czech Republic experienced a much higher decline in biomass stock than in some other parts of the country, where it remained more stable over the period. The database's annual resolution and consistent coverage of forest area, biomass stock, increment and harvest provide a comprehensive perspective on forest use intensity that goes beyond the often narrow focus on individual variables. By enabling the calculation of multiple indicators, including common ones like harvest per area, the database can reveal diverse spatial patterns in forest use to better reflect the complexity of forest systems. 
The harmonised sub-national database aims to fill critical data gaps and will be made publicly available, facilitating the integration of remote sensing observations with ground-based measurements for more accurate estimates of carbon dynamics. Among other uses, it is intended for studying the impact and dynamics of extreme events such as forest fires and storms on carbon stocks and fluxes. Such integration advances the fundamental understanding of key interactions in forest systems and can also contribute to near real-time assessments of forest biomass.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K1)

Presentation: Operational validation of two evapotranspiration detection approaches, ESA SEN-ET and ECOSTRESS PT-JPL: case study of the Jratunseluna basin, Java

Authors: Alen Berta, Viktor Porvaznik, Juan Suarez Beltran, Alessandro Marin
Affiliations: CGI Deutschland, CGI Italy, GMV
Evapotranspiration (ET), the combined process of water evaporation from the soil and transpiration from plants, is a vital factor in agriculture, directly affecting soil moisture, plant health, and agricultural productivity. Accurate detection and measurement of ET allow farmers to optimize irrigation practices, ensuring crops receive the right amount of water at the right time. Moreover, tracking water stress levels allows for better crop management, improving yields while enhancing water use efficiency, reducing wastage and conserving water resources. Modern technologies such as remote sensing and advanced analytics now provide farmers with near real-time ET monitoring, facilitating informed decision-making that supports sustainable and efficient agricultural practices. Within the ESA Global Development Assistance-Agriculture project, significant efforts have been made to enhance ET and water use efficiency (WUE) monitoring. The project utilizes the ESA Sen-ET algorithm, integrated into the CGI Insula platform, to analyze daily ET and WUE using Sentinel-2 and Sentinel-3 data. The project supports the Asian Development Bank in enhancing dryland farming systems in Indonesia, offering near-real-time monitoring and analysis of water usage efficiency. This enables farmers to make timely adjustments to their irrigation practices, improving water use efficiency, enhancing crop yields, and reducing the risk of water scarcity. Within this project, emphasis is also placed on validation of the results, which is carried out in two ways: 1) Two-Source Energy Balance (TSEB) approach: this method relies solely on Sentinel-2 and meteorological data, excluding Sentinel-3's sharpened Land Surface Temperature (LST) measurements (which are part of the SEN-ET TSEB approach). 2) ECOSTRESS PT-JPL approach: this uses data from NASA's ECOSTRESS sensor, incorporating the PT-JPL algorithm for ET detection.
The validation process includes a comprehensive analysis of results, comparing values derived from these methods and exploring key differences in their implementation. Advantages and disadvantages will be discussed based on resolution, revisit time, and other variables influencing the operational application of these satellite-based ET detection methodologies. By critically evaluating these approaches, the project highlights the strengths and limitations of each method, contributing to the refinement of ET monitoring strategies. This effort ensures that EO-based solutions like Sen-ET can meet the demands of operational use, providing a robust framework for optimizing agricultural water management and supporting global sustainability goals.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K1)

Presentation: Deep learning based field level classification of tillage intensity to facilitate remote monitoring, reporting and verification of regenerative agricultural practices

Authors: Nicholas Synes, François Lemarchand, Vincent Cornwell
Affiliations: Agreena
Reducing emissions from the global food system will be crucial in our efforts to limit global warming. The adoption of regenerative agricultural practices will be a key component in mitigating climate change because it leads to the sequestration of carbon, whilst also providing numerous other benefits such as enhancing soil health. To enable the widespread adoption of regenerative agriculture practices, farmers must be rewarded for their efforts. This creates a growing need for Monitoring, Reporting and Verification (MRV) platforms that can reliably track field-level activities. MRV platforms will enable food companies, governments, and carbon programs to reward real practice change. Tillage is one practice with adverse environmental impacts that also offers opportunities for reductions in frequency and intensity as part of a transition towards regenerative agriculture. The environmental impacts of conventional tillage include the release of carbon, erosion of soils, and nutrient runoff. Reducing the intensity of tillage practices enhances soil organic carbon whilst also having a positive impact on soil health and productivity. We have developed an MRV platform that provides reliable and scalable field-level verification of regenerative practices including crop rotation, cover crops and tillage intensity. We will present our tillage verification system: a deep learning approach based on Sentinel-2 imagery. This system was trained using over one hundred thousand ground truth data points, collected across Europe and North America, with each point passing through a detailed quality assessment system. The model classifies field-level tillage activities into the three commonly used categories (conventional, reduced or no tillage) and has been validated with independent out-of-sample ground truth data.
We will discuss our modelling approach, regional differences in farming practices and their adoption across Europe and North America, and the next steps in our research.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K1)

Presentation: Mapping Soil Organic Carbon Using Multispectral and Hyperspectral Data

Authors: Fabiana Ravellino, Valerio Pisacane, Renato Aurigemma, Fabio Castaldi, Maria Daniela Graziano, Alfredo Renga
Affiliations: University Of Naples "Federico II", Euro.soft srl, National Research Council (CNR), Institute of BioEconomy
Soil organic carbon (SOC) is the main component of soil organic matter and a crucial component of the global carbon cycle, accounting for 50–80% of total terrestrial carbon (https://www.ipcc.ch/). Its role in mitigating climate change underscores the urgent need for improved SOC monitoring to prevent soil degradation and develop adaptive management strategies. Numerous studies also focus on agricultural soils used for carbon storage, where SOC is measured to determine the resulting carbon credits. Traditional SOC mapping methods are often labor-intensive, but advancements in remote sensing technology offer promising alternatives for rapid and accurate SOC estimation. This study presents the initial findings of a novel approach that combines Sentinel-2 multispectral data, recognized for its high revisit frequency, with PRISMA hyperspectral data, noted for its superior spectral resolution, to generate high-resolution SOC maps. Both datasets underwent extensive preprocessing tailored to their unique attributes. For PRISMA data, band selection and cloud and shadow masking were performed to reduce dimensionality and minimize unnecessary data propagation in subsequent phases. Spatial resolution was enhanced from 30 m to 5 m using pansharpening techniques, leveraging the panchromatic band. Similarly, Sentinel-2 data were processed with super-resolution techniques to achieve a 5 m resolution, and the MAJA cloud masking algorithm was applied. Imagery alignment was ensured using the AROSICS coregistration tool. Bare soil areas, central to this analysis, were identified through spectral indices such as the Normalized Difference Vegetation Index (NDVI), the Normalized Cellulose Absorption Index (NCAI), and the Normalized Burn Ratio 2 (NBR2) to differentiate them from vegetated regions. Urban and water areas were excluded using publicly available Land Cover and Land Use (LCLU) maps, such as CORINE Land Cover.
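The bare-soil screening via NDVI and NBR2 can be sketched as below; the index formulas are standard, but the thresholds are illustrative assumptions, not the study's calibrated values, and the NCAI test is omitted:

```python
import numpy as np

def bare_soil_mask(red, nir, swir1, swir2, ndvi_max=0.25, nbr2_max=0.075):
    """Flag pixels as bare soil using NDVI and NBR2.

    Thresholds are illustrative placeholders; the study's exact
    cut-offs are not specified here.
    """
    ndvi = (nir - red) / (nir + red)           # low over bare soil
    nbr2 = (swir1 - swir2) / (swir1 + swir2)   # low over dry, bare surfaces
    return (ndvi < ndvi_max) & (nbr2 < nbr2_max)

# Toy reflectances: first pixel bare, second vegetated
red   = np.array([0.20, 0.05])
nir   = np.array([0.25, 0.45])
swir1 = np.array([0.30, 0.25])
swir2 = np.array([0.29, 0.15])
mask = bare_soil_mask(red, nir, swir1, swir2)   # [True, False]
```

In practice the per-index thresholds would be tuned per region, and the resulting mask combined with the LCLU-based exclusion of urban and water areas.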
For locations with multiple acquisitions, a temporal composite was created. Data were aggregated using the 90th percentile of pixel values, a metric selected to represent the driest soil conditions in the time series and thus optimize bare soil characterization. To account for regional variability in SOC dynamics, different eco-pedological regions were detected for the Italian territory based on the Eco-pedological map of Italy (http://www.pcn.minambiente.it/PCN/). To delineate these eco-pedological regions, geomorphological environment, climate region, and the main geological substrate were taken into account. For each eco-pedoregion, a machine learning (ML) model for SOC estimation was implemented and trained using ground soil data collected within the region in combination with multispectral and hyperspectral satellite data. The optimization of the parameters of each model was assessed by using k-fold cross-validation. An area of interest (AOI) was selected within each eco-pedoregion for the validation of the prediction models, so that it was representative of the existing SOC variability. The trained models were tested on the independent validation dataset of ground soil data extracted from the validation AOIs. The results confirm that integrating Sentinel-2 and PRISMA data enables accurate SOC estimation, validating the effectiveness of the proposed methodology. This study underscores the potential of combining multispectral and hyperspectral remote sensing data for SOC mapping and highlights the need for further research to enhance soil carbon monitoring and management strategies. It becomes even more interesting considering the new hyperspectral missions that will be launched soon, including ESA's CHIME scheduled for 2028, which will allow us to benefit from an even larger hyperspectral image database.
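The per-pixel 90th-percentile temporal aggregation can be sketched with numpy (a minimal illustration under assumed toy data, not the authors' processing chain):

```python
import numpy as np

# Stack of co-registered acquisitions: (time, rows, cols).
# NaN marks cloud/shadow-masked observations.
stack = np.array([
    [[0.10, 0.30], [np.nan, 0.20]],
    [[0.14, 0.34], [0.25,   0.22]],
    [[0.18, 0.38], [0.35,   0.24]],
])

# Per-pixel 90th percentile over the time axis: favours the
# brightest (typically driest) bare-soil observations.
composite = np.nanpercentile(stack, 90, axis=0)
```

Ignoring masked values per pixel (rather than dropping whole scenes) is what makes the composite "pixelwise".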
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K1)

Presentation: Recovering forests dominate current and future carbon uptake

Authors: Sassan Saatchi, Zhihua Liu, Yan Yang, Susan Cook-Patton
Affiliations: JPL/CALTECH, CTrees.org, The Nature Conservancy
Forests store a significant portion of global carbon and play a crucial role in removing CO2 from the atmosphere, thereby mitigating climate change. However, there are substantial uncertainties regarding the rate and capacity of forests to absorb carbon at spatial scales relevant to restoration efforts, as well as at time-scales relevant to policy. In this study, we combined remote sensing and field observations to produce age-dependent carbon removal rates for global forests at 1-km spatial resolution. Using the forest carbon removal rate map, we estimated that the net live biomass (aboveground + belowground) change on existing forest land was 1.83 PgC yr-1 from 2000 to 2020. However, this is largely offset by tropical deforestation emissions (1.2 PgC yr-1), leaving temperate forests as the strongest carbon sink. Including harvested wood products (0.2 PgC yr-1), the global net forest biomass carbon sink is 0.83 PgC yr-1, consistent with recent estimates from inventory, and remote sensing observations. Our results also align well with National Greenhouse Gas Inventories (NGHGI) submitted to the UNFCCC. The existing forests store 340 PgC in biomass carbon, with 35% of this carbon stored in forests older than 150 years. If existing forests were allowed to grow under current climate and disturbance conditions, they could gain an additional 53 PgC by 2050. The unrealized maximum carbon stock in current forests is approximately 124 PgC, with over 80% of this potential in recovering temperate and tropical forests. Our analysis provides the first bottom-up, earth observational, fine-resolution assessment of forest carbon removal rates. This information can guide priority conservation and restoration efforts.
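The headline fluxes combine as a simple carbon budget; the check below merely restates the abstract's figures (it is not the authors' code):

```python
# All values in PgC per year, from the abstract (2000-2020 averages)
net_live_biomass_change = 1.83   # gain on existing forest land
deforestation_emissions = 1.20   # tropical deforestation losses
harvested_wood_products = 0.20   # carbon retained in wood products

net_sink = (net_live_biomass_change
            - deforestation_emissions
            + harvested_wood_products)   # ~0.83 PgC/yr
```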
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall K1)

Presentation: Sentinel-2-Based Monitoring System for Estimating 5-Year SOC Changes in Regenerative Farms

Authors: Fabio Castaldi, Piero Toscano, Flavio Bertinaria
Affiliations: National Research Council Of Italy - CNR, Barilla G. e R. Fratelli S.p.A.
According to a recent study by the Joint Research Centre (JRC) based on LUCAS surveys (De Rosa et al. 2023), the average yearly Δ of soil organic carbon (SOC) in European croplands between 2009 and 2018 was -0.04 g C kg−1. Regenerative agriculture (RA) aims to reverse this trend, preserving and possibly increasing SOC content. In this regard, monitoring the effects of RA practices on SOC content over the short to medium term becomes crucial to understand whether the solutions applied were effective or whether changes or improvements need to be made. Sentinel-2 data have demonstrated their capability for SOC estimation on croplands (Gholizadeh et al. 2018; Vaudour et al., 2022) and, although the time window in which soil is exposed during the year is very narrow in croplands, the short revisit time of the Sentinel-2 mission makes it possible to acquire a suitable number of bare soil images for implementing SOC prediction models. This short revisit time can therefore be used to monitor soil conditions and to quantify SOC changes over time. We collected 923 topsoil samples in France and Northern Italy from 2020 to 2022 in agricultural areas devoted to wheat cultivation, in order to represent the main soil types, climatic zones and agricultural management systems of these two areas of interest. For each soil sample we measured SOC in the laboratory. A pixelwise temporal mosaicking approach was applied for the selection of bare soil images using Sentinel-2 imagery collections from 2020 to 2022, obtaining composite bare soil layers for the sampling regions. The mosaicking approach was implemented using the rgee R package to wrap the Earth Engine Python API, allowing work in the Google Earth Engine environment within R. The approach consisted of two steps. In the first step, filters were applied for each pixel and acquisition date to identify and select bare soil pixels, excluding data affected by clouds, green and dry vegetation and high soil moisture content.
In the second step, if at least 3 bare soil dates remained after applying the filters, the available data were combined in a mosaicking approach selecting the median (50th percentile) and the 90th percentile (R90; Castaldi et al. 2023) of the reflectance values for each band and pixel throughout the time series. Spectral data from the resulting 10 m resolution composite bare soil layers were extracted at each sampling location, and machine learning (ML) models for predicting SOC were trained for each soil type present in the two areas of interest. All the ML models were successfully validated on independent datasets, and the level of accuracy obtained (0.18 ≤ NRMSE ≤ 0.22) enables the implementation of a satellite-based system for SOC monitoring to replace time-consuming and expensive traditional approaches. Therefore, a Sentinel-2-based SOC monitoring system was applied to 53 fields in Italy and France where RA practices have been applied, both for the year before the beginning of the adoption of RA practices (t0) and after 5 years (t5). The monitoring system entails the application of the Sentinel-2 pixelwise temporal mosaicking approach at t0 and t5 and the application of the ML models for predicting SOC content in the two years. The monitoring system was also applied to a further 53 fields selected in the same regions among those managed according to conventional agriculture (CA). For each field, both those managed according to RA and those managed according to CA, the average SOC content was computed for both t0 and t5 and the difference between the two (Δ SOC) was calculated. It was then checked whether the two mean SOC values were significantly different at the 95% confidence level. 60% of the RA fields showed a significant increase in SOC content over five years, 32% had a negative Δ SOC and 8% showed no significant change. The percentage of positive Δ SOC in CA was 47%, while 41% had a negative difference and 12% did not change.
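The per-field significance test (mean SOC at t0 vs t5 at the 95% confidence level) can be illustrated with a two-sample Welch t statistic; the data below are hypothetical and the authors' exact test procedure is not specified:

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the difference of two sample means."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

# Hypothetical per-pixel SOC predictions (g C kg-1) for one field
soc_t0 = [14.1, 13.8, 14.5, 14.0, 13.9, 14.2]
soc_t5 = [14.8, 14.6, 15.1, 14.7, 14.9, 15.0]

t = welch_t(soc_t5, soc_t0)
# |t| well above ~2 for samples of this size suggests a
# significant change at roughly the 95% level.
```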
The average Δ SOC in RA was +0.5 g C kg−1 in both Italy and France, corresponding approximately to +0.1 g C kg−1 year−1. For the fields managed according to CA, the computation of the average Δ SOC highlighted differences between Italy and France: while in Italy the Δ SOC was -0.2 g C kg−1 (about -0.04 g C kg−1 year−1, in line with the average European value), in France we measured an average positive Δ SOC (+0.3 g C kg−1). The Sentinel-2-based SOC monitoring system put in place for Northern Italy and France made it possible to verify the efficacy of RA practices in terms of SOC increase and the differences with conventional agriculture in different soil and climate regions.
References
Castaldi, F., Halil Koparan, M., Wetterlind, J., Žydelis, R., Vinci, I., Özge Savaş, A., Kıvrak, C., Tunçay, T., Volungevičius, J., Obber, S., Ragazzi, F., Malo, D., & Vaudour, E. (2023). Assessing the capability of Sentinel-2 time-series to estimate soil organic carbon and clay content at local scale in croplands. ISPRS Journal of Photogrammetry and Remote Sensing, 199, 40–60. https://doi.org/10.1016/J.ISPRSJPRS.2023.03.016
Gholizadeh, A., Žižala, D., Saberioon, M., & Borůvka, L. (2018). Soil organic carbon and texture retrieving and mapping using proximal, airborne and Sentinel-2 spectral imaging. Remote Sensing of Environment, 218, 89–103. https://doi.org/10.1016/J.RSE.2018.09.015
Vaudour, E., Gholizadeh, A., Castaldi, F., Saberioon, M., Borůvka, L., Urbina-Salazar, D., Fouad, Y., Arrouays, D., Richer-De-forges, A. C., Biney, J., Wetterlind, J., & van Wesemael, B. (2022). Satellite imagery to map topsoil organic carbon content over cultivated areas: An overview. Remote Sensing, 14(12), 2917. https://doi.org/10.3390/RS14122917
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L3)

Session: C.01.10 Microwave Instrument Concepts and Technologies to enable new EO Science Missions - PART 1

New microwave instrument concepts and technologies are enabling new Earth observation science missions.
This session aims to discuss new microwave remote sensing instrument concepts and present advances in microwave instrument technologies and related instrument pre-development activities.

Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L3)

Presentation: Performance assessment of next-generation scatterometers with Doppler capability in order to retrieve wind and current simultaneously

Authors: Adrien Martin, Eva Le Merle, Tobbias Otto, Carsten Jonas, Mahmoud El Hajj, Petronilo Martin-Iglesias, Marc Zimmermanns
Affiliations: NOVELTIS, OHB, ESA-ESTEC
For over three decades, Europe has benefited from continuous C-band wind scatterometer services, facilitated by successive missions such as ERS-1 and -2, and MetOp-A, -B, and -C ASCAT. The upcoming MetOp-SG satellites, particularly MetOp-SG B1 scheduled for launch in Summer 2026, will ensure the continuity of these vital observations. Wind scatterometry measures the surface backscattering coefficient (a.k.a. normalized radar cross-section, NRCS), which is directly linked to ocean surface roughness and, consequently, ocean surface winds. Measurements from multiple viewing angles enable the retrieval of the ocean surface vector wind (OSVW) field. Beyond wind vectors, ocean surface currents are crucial for weather and climate forecasting, as they significantly influence energy, momentum, and heat transfer. The scientific community recognizes the need for comprehensive ocean current observations. Given that operational European spaceborne scatterometers provide regular global measurements, future systems that can simultaneously measure ocean surface winds and currents would be highly beneficial. This requires fully coherent radar systems capable of measuring both the NRCS and the radar signal phase to determine ocean surface current vectors. This study aims to explore the feasibility of next-generation scatterometers with Doppler capability. Advanced beamforming techniques make it possible to revisit previous concepts for wind retrieval and to assess the performance for current retrieval. This paper presents a comprehensive numerical analysis of the expected wind and current performance for the different concepts. The method uses a classical Bayesian framework to simultaneously retrieve total surface current vectors (TSCV) and OSVW. It provides the performance as a function of across-track distance. The beamforming techniques offer greater design flexibility and improved signal-to-noise ratio.
The study also evaluates the impact of different viewing geometries and incidence angles on the retrieval accuracy of both wind and current vectors. It will further highlight potential trade-offs that arise once all observables and their associated non-linearities are combined. This comprehensive approach aims to ensure that future scatterometers can provide simultaneous, high-quality measurements of both wind and ocean current vectors, addressing critical needs in weather and climate research.
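The Bayesian simultaneous retrieval can be illustrated as minimising a misfit between observed and modelled radar quantities over a joint wind/current grid. The forward models below are toy stand-ins (the real geophysical model functions, noise levels and priors differ):

```python
import numpy as np

# Toy forward models (hypothetical, for illustration only):
# NRCS grows with wind speed; Doppler mixes a wind-drift term
# with the surface current.
def nrcs_model(wind):
    return 0.05 * wind ** 1.5

def doppler_model(wind, current):
    return 2.0 * current + 0.3 * wind

# "Observations" generated from a known truth via the same models
true_wind, true_current = 8.0, 0.4
obs_nrcs = nrcs_model(true_wind)
obs_dop = doppler_model(true_wind, true_current)
sig_nrcs, sig_dop = 0.05, 0.1   # assumed observation noise std devs

# Maximum a posteriori retrieval (flat prior) by grid search
winds = np.linspace(0, 20, 201)
currents = np.linspace(-1, 1, 201)
W, C = np.meshgrid(winds, currents)
cost = ((nrcs_model(W) - obs_nrcs) / sig_nrcs) ** 2 \
     + ((doppler_model(W, C) - obs_dop) / sig_dop) ** 2
i, j = np.unravel_index(np.argmin(cost), cost.shape)
wind_hat, current_hat = W[i, j], C[i, j]
```

The actual study's cost function would include informative priors and per-view noise terms, with multiple azimuth looks providing the angular diversity needed to separate wind and current contributions.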
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L3)

Presentation: Companion satellites for ROSE-L and NISAR to address 3D deformation and 3D vegetation measurements needs

Authors: Marco Lavalle, Ilgin Seker, Eric Loria, Shashank Srinivas, Shadi Oveisgharan, Ekaterina Tymofyeyeva, Ala Khazendar, Paul Rosen, Gerald Bawden, Ben Kim
Affiliations: NASA JPL, NASA HQ
NASA and ESA are advancing L-band synthetic aperture radar (SAR) missions to address critical science and application needs in solid Earth, ecosystems, cryosphere, and hydrology. The NASA-ISRO SAR (NISAR) mission, a dual-frequency L-/S-band SAR satellite set for launch in 2025, will provide global dual-polarimetric L-band data with a 20 MHz bandwidth every 12 days, enabling new insights into dynamic Earth processes. In parallel, ROSE-L, an ESA High-Priority Candidate Mission, is scheduled for launch in 2028-2030 to complement the Copernicus constellation and enhance Earth observation capabilities, addressing information gaps and augmenting existing services. ROSE-L will operate as a two-spacecraft L-band system in the Sentinel-1 orbit with a 6-day repeat interval, enabling new applications from co-located L- and C-band imagery. Neither NISAR nor ROSE-L alone can meet all the observational goals outlined in the 2017-2027 Decadal Survey for Earth Science and Applications report. Key priorities such as 3D surface deformation vectors, correction for tropospheric phase delay, bare-Earth topography, and vertical vegetation structure require simultaneous observations from multiple spacecraft. Meeting the needs of the Surface Topography and Vegetation (STV) community requires long cross-track baselines, preferably in a single-pass interferometric configuration and with sufficient diversity to cover a broad range of topography and vegetation heights. Similarly, 3D surface deformation mapping with tropospheric error mitigation necessitates squinted contemporaneous observations with long along-track baselines. To address these needs, NASA and ESA are jointly exploring innovative satellite formation concepts. A particularly promising approach involves deploying receive-only companion satellites that utilize existing L-band transmitters in space, such as those provided by ROSE-L or NISAR. 
These companion satellites, with simplified hardware designs, offer potentially cost-effective solutions for enhancing observational capabilities. By coherently combining SAR images from these satellites with data from primary transmitters, multi-baseline and multi-squint measurements can be achieved without the need for tight operational coordination. This presentation at the Living Planet Symposium discusses the technical challenges, expected performance, and potential impacts of the ROSE-L (or NISAR) augmentation strategy, emphasizing its role in advancing international cooperation and addressing the critical needs of the Surface Deformation and Change (SDC) and Surface Topography and Vegetation (STV) communities.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L3)

Presentation: 3D-SAR: Constellation of Passive Receiver SAR Satellites in Formation with Sentinel-1 for Operational Applications

Authors: Martin Jüssi, Sergei Kuzmin, Dr. Kaupo Voormansik, Erik Kulu
Affiliations: KappaZeta Ltd
INTRODUCTION
Sentinel-1 is a powerful data factory. No other current SAR mission produces data with systematic global coverage in such a large quantity. However, its information content is relatively limited: dual-polarisation backscatter and repeat-pass interferometry data. Across-track interferometry is not feasible with Sentinel-1 due to temporal decorrelation (6 or 12 days) and short interferometric baselines (<100 m). The limited information content of Sentinel-1 sets an inherent limit on applications built on it. KappaZeta are developing a passive receiver constellation mission to increase the value of the Sentinel-1 mission significantly, with an additional investment constituting just a fraction of what a new active SAR constellation would cost. Currently, Sentinel-1 produces two-dimensional imagery of the Earth's surface. By adding relatively small and simple passive receiver satellites to the mission, it is possible to add the height dimension to the Sentinel-1 data and enable direct height measurements from space. Providing global surface and vegetation height information at up to 12-day intervals would be unprecedented, all while cleverly augmenting the existing Copernicus program infrastructure and data.
BASELINE CONCEPT AND APPROACH
The baseline design is three receive-only companion SAR satellites flying 50-100 km behind Sentinel-1. Considering the timelines and the ESA Harmony mission (aiming for Sentinel-1D), Sentinel-1C is the most likely illuminator spacecraft, so that is currently considered the baseline approach. Three satellites enable both along-track and multi-baseline across-track measurements throughout the orbit, without having to move the receivers too close to the illuminator satellite. The spacecraft microsatellite platform is expected to be a commercial off-the-shelf flight-proven product.
Potential suppliers for the satellite bus have been mapped and will be evaluated, which will be one of the key decision points during Phase A in 2025. The antenna will have a fixed beam, but it is currently under study whether it will be a microstrip patch array or a reflectarray type, and whether it will be mechanically fixed or have deployable wings. The antenna size is planned to be approximately 200 x 80 cm, which would correspond to a footprint on Earth of approximately 70 x 30 km, to capture one Sentinel-1 subswath at a time with some margin for attitude control and TOPS mode behaviour. The antenna will likely be designed and built by KappaZeta, but potential suppliers are being mapped and evaluated as an alternative. It would be lower cost to launch the spacecraft on a commercial rideshare mission, but the delta-V required for the orbit raise and the change in right ascension of the ascending node to a dusk-dawn orbit to match Sentinel-1 needs further analysis. Rideshare opportunities and small dedicated launchers were mapped as part of Phase 0. In the case of dedicated launchers, we will be tracking the progress of upcoming small launch vehicles from Europe.
DATA PRODUCTS
The focus of the mission will be on providing across-track interferometric products for forest monitoring applications. A forest height measurement error of <10% or <1 m, whichever is smaller, was set as a baseline measurement requirement. This will enable a forest stem volume measurement error of <20% for parcels larger than 0.5 ha. Based on TanDEM-X research, these accuracy figures are achievable at least in boreal and temperate forests.
STATUS AND TIMELINE
Phase 0 of the mission was conducted under the ESA Estonian Industry Incentive Scheme (IIS) programme and was concluded in November 2024. The presentation at the LPS will discuss the Phase 0 findings: the technical concept formulated, along with critical technologies and key trade-offs to be undertaken in later phases.
Also, any Phase A findings and outcomes achieved in Q1-Q2 2025 will be reported.
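As a rough consistency check on the quoted antenna and footprint figures (our back-of-envelope, not the mission's own design analysis), the λ/D beamwidth rule at the Sentinel-1 C-band frequency, with an assumed slant range of ~850 km, gives footprints of the same order as the quoted ~70 x 30 km:

```python
c = 3.0e8
f = 5.405e9          # Sentinel-1 C-band centre frequency (Hz)
lam = c / f          # wavelength, ~5.55 cm

d_long, d_short = 2.0, 0.8   # antenna dimensions (m), per the abstract
slant_range = 850e3          # assumed slant range (m); illustrative

# Beamwidth ~ lambda / D (rad); footprint ~ slant range x beamwidth.
# The larger antenna dimension gives the narrower footprint axis.
fp_from_long = slant_range * lam / d_long     # ~24 km
fp_from_short = slant_range * lam / d_short   # ~59 km
```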
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L3)

Presentation: Snow Characterization by SAR Tomography (SnowCAT): Mission and Instrument Design

Authors: Leonardo Carrer, Davide Giudici, Francesco Banda, Fabio Gerace, Antonio Giordano, Paolo Falcone, Stefano Tebaldini
Affiliations: Aresys, Politecnico di Milano
The Snow Characterization by SAR Tomography (SnowCAT) mission is a pioneering Synthetic Aperture Radar (SAR) initiative designed to characterize snow properties with high spatial resolution from space. Utilizing a three-satellite formation, SnowCAT provides detailed measurements of snow depth, snow mass, and the vertical structure of snow cover in mountainous regions and on floating sea ice [1,2]. The mission aims to generate key geophysical parameters, including snow depth, Snow Water Equivalent (SWE), internal layering of the snowpack, snow surface topography, and underlying terrain or ice topography. Additionally, for snow-covered drifting sea ice, SnowCAT will estimate freeboard height. By capturing these parameters and their temporal evolution, the mission offers critical insights into cryospheric processes and contributes to the understanding of snow dynamics in a changing climate. The SnowCAT mission's three-satellite formation operates in the X-band (9.6 GHz) with a Multiple Input Multiple Output (MIMO) configuration using a Frequency Division Multiplexing (FDM) access scheme [3,4]. In this setup, all satellites transmit simultaneously on distinct frequency bands and receive echoes scattered from the Earth's surface across all transmitted bands. The SAR system operates in stripmap mode, targeting a swath approximately 10 km by 10 km, with incidence angles ranging from 20 to 40 degrees. The formation is designed for single-polarization transmission, with reception capabilities in either single or dual polarization. The across-track positions of the three SnowCAT satellites are optimized based on the principle of Minimum Redundancy Wavenumber Illumination (MRWI) [3]. This approach maximizes the formation's sensitivity to the vertical structure of the snowpack. During a single overpass, the mission collects nine SAR images, which can be combined to generate up to 36 interferometric pairs. 
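The figure of 36 pairs follows from simple combinatorics: nine images taken two at a time.

```python
from math import comb

n_images = 9                 # 3 satellites x 3 transmitted FDM bands
n_pairs = comb(n_images, 2)  # unordered interferometric pairs: 36
```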
This configuration enables three-dimensional imaging of the snowpack with high vertical resolution, allowing for direct measurement of the apparent depth of snow layers and interfaces with distinct electromagnetic (EM) properties. In this work, we address key aspects of the mission, including the selection of the optimal flight formation for the three-satellite MIMO system and the proposed instrument design. Specifically, we analyze the trade-offs in satellite positioning to maximize coverage over icy target areas while maintaining a MIMO configuration, achieving the target baselines and radar performance, and ensuring an optimal safety distance between the spacecraft. Additionally, we provide a detailed elaboration on the technical specifications of the SAR payload, including performance assessment and synchronization. We also explain the rationale behind the selection of radar parameters and the practical implementation of the FDM scheme. By leveraging the presented innovative design choices, the SnowCAT mission will achieve precision in snow and ice phenomena characterization, advancing the understanding of cryospheric processes and their implications for global climate dynamics. [1] Rekioua, Badreddine, et al. "Snowpack permittivity profile retrieval from tomographic SAR data." Comptes Rendus. Physique 18.1 (2017): 57-65. [2] Tebaldini, Stefano, et al. "Imaging the internal structure of an alpine glacier via L-band airborne SAR tomography." IEEE Transactions on Geoscience and Remote Sensing 54.12 (2016): 7197-7209. [3] Tebaldini, S., Manzoni, M., Ferro-Famil, L., Banda, F., & Giudici, D. (2024). FDM MIMO spaceborne SAR tomography by minimum redundancy wavenumber illumination. IEEE Transactions on Geoscience and Remote Sensing. [4] Banda, F., Marini, J., Giudici, D., & Tebaldini, S. (2024, April). On MIMO TomoSAR System for Snow Mapping. In EUSAR 2024; 15th European Conference on Synthetic Aperture Radar (pp. 84-87). VDE.
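The image and pair counts quoted above follow directly from the MIMO/FDM geometry: with three satellites each transmitting on its own band and all three receiving every band, one overpass yields 3 × 3 = 9 images, and any two images form a candidate interferometric pair. A minimal back-of-envelope sketch (illustrative only, not mission software):

```python
from itertools import combinations

n_satellites = 3

# MIMO/FDM: every satellite receives the band transmitted by every satellite,
# so each (transmitter, receiver) combination yields one SAR image.
images = [(tx, rx) for tx in range(n_satellites) for rx in range(n_satellites)]
print(len(images))  # 9 images per overpass

# Every unordered pair of images is a candidate interferometric pair.
pairs = list(combinations(images, 2))
print(len(pairs))  # 36 interferometric pairs
```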
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L3)

Presentation: Next Generation Ocean Scatterometer Concepts

Authors: Adrian Hauser
Affiliations: Airbus Defence and Space GmbH Immenstaad
The European Space Agency (ESA) study, "Feasibility Study into the Next-Generation Ocean Scatterometers Based on Advanced Beamforming," explores innovative concepts for future scatterometers tailored to oceanographic applications. This research focuses on advancing the capabilities of scatterometers to enable the simultaneous and precise derivation of ocean surface winds and currents at a high spatial resolution of 5 km in both directions and a wide swath. Building upon the heritage of European C-band scatterometers, such as SCA and ASCAT, the next generation of instruments will maintain operation in the C-band while incorporating new technologies to meet stringent performance requirements. Key objectives include achieving highly accurate ocean current vector measurements with an accuracy target of approximately 0.1 m/s. These ambitious goals impose significant constraints on instrument design and necessitate comprehensive simulations. Level-2 data products, such as ocean winds and surface currents, are generated using an advanced geophysical model function (GMF) and are evaluated in conjunction with system noise parameters derived from the proposed instrument concepts. This study evaluates various design approaches, analyzing their potential and challenges in meeting the demanding requirements for data accuracy and resolution. Preliminary results from the simulations are presented, providing insights into the feasibility of the proposed concepts and highlighting critical issues that need to be addressed. The findings contribute to the foundation for developing next-generation scatterometers capable of delivering enhanced oceanographic data for scientific and operational applications. The final goal of this study is to outline an instrument concept and to provide a corresponding Level-1 set of instrument requirements.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall L3)

Presentation: A companion mission for distributed SAR in a long-baseline bistatic scenario

Authors: Alfredo Renga, Antonio Gigantino, Gianluca Coppa, Maria Salvato, Francesca Pelliccia, Maria Daniela Graziano, Antonio
Affiliations: University of Naples Federico II
This work presents RODiO, an Italian companion SAR mission using the concepts of bistatic and distributed SAR. Rhodium (rodio in Italian) is an ultra-rare and precious metal found in platinum ores; alloyed with platinum, it is used to produce high-temperature, corrosion-resistant coatings. Similarly, RODiO is a cluster of 4 CubeSats conceived as a companion SAR mission complementing PLATiNO-1 (PLT-1), a preexisting SAR mission currently under development by the Italian Space Agency (ASI). PLT-1 is a small satellite carrying an X-band SAR payload able to work in a standard monostatic mode. In the last part of its mission, PLT-1 will operate from a low altitude of approximately 430 km. During this phase, PLT-1 is also used as an illuminator for the CubeSats of the RODiO cluster. The cluster is expected to fly at a safe distance from PLT-1, i.e. from 50 to 90 km. Each CubeSat carries a receive-only X-band SAR payload, and the cluster realizes a Distributed SAR (DSAR), i.e. it performs single-pass multi-baseline bistatic acquisitions. The cluster is thus limited in size to 600 m in along-track and less than 400 m in the radial/cross-track direction to keep coherence among the raw bistatic data collected by each CubeSat. In addition, one CubeSat also includes a hybrid rocket propulsion system (HPS) as an additional payload. RODiO is funded by ASI, and it is realized by a consortium led by the Department of Industrial Engineering of the University of Naples Federico II, Italy. The consortium includes Thales Alenia Space Italy as system and payload responsible, Apogeo Space as platform responsible, Telespazio for the ground segment, and other small companies such as T4i for propulsion. Phase A has recently been completed, while Phase B has already been approved and is under contractualization.
The main objectives of the mission are: - in-orbit demonstration of the DSAR concept using the companion configuration with PLT-1; - in-orbit demonstration of the HPS for long-baseline formation reconfiguration; - demonstration of novel bistatic SAR products using the companion configuration with PLT-1. Concerning the first goal, each CubeSat independently collects bistatic echoes, but those echoes are then coherently processed on the ground by multi-platform image synthesis techniques. The output is thus a single super-image generated by the cluster as a whole. The goal is to demonstrate that this super-image is better than the ones that each CubeSat could generate in isolation. Specifically, the mission shall demonstrate that multi-platform image synthesis can be applied to improve the Noise Equivalent Sigma Zero (NESZ) by about 4 dB, and the Ambiguity to Signal Ratio (ASR) by up to 10 dB. With reference to the second goal, the HPS shall be used to demonstrate that high thrust, up to 10 N, can be achieved by a very compact unit (1.5 U volume) using a polymeric fuel and working with a liquid oxidizer. The idea is to use the HPS to perform orbit formation reconfiguration, allowing one satellite to separate from the cluster so as to realize triplets of acquisitions over the same area: one with PLT-1, one from the escaping CubeSat, and one from the remaining three CubeSats in the cluster. As for the third goal, RODiO’s aim is also to demonstrate new SAR products taking advantage of the strong cooperation with the pre-existing PLT-1 mission. The idea is that the high-quality bistatic SAR images generated by RODiO as a cluster can be processed together with the corresponding monostatic PLT-1 images collected over the same area, so that the resulting combined products can benefit from the increase in qualitative and quantitative measurements of the observed scene.
These products are expected to widen PLT-1 application fields, so they are expected to be complementary and synergic to those planned in the framework of the PLT-1 mission. Indeed, due to the nature of the mission, the added value will be in the novelty of the products rather than in their amount. The identified products derive from the long-baseline bistatic geometry of the PLT-1/RODiO system. Depending on the involved baseline, the cluster observes the same scene as PLT-1 but from a different perspective (bistatic angle) and at a different time instant (owing to the different bistatic range history). Hence, by comparing at the amplitude level the monostatic image with the bistatic one resulting from multi-platform image synthesis, original applications can be realized which are impossible or limited for monostatic systems, such as ground motion, radargrammetry, ship detection and ship velocity estimation. As an output of Phase A and for assessing mission feasibility, a preliminary design of the platform has been carried out. Major constraints were to have CubeSats no larger than 16 U which could be deployed by a standard CubeSat deployer with no adaptation. The final result was 4 identical satellites with about 35 kg marginated masses and protrusions within the margins allowed by the deployer. In 3 out of 4 satellites the HPS is replaced by dummy masses to keep unchanged the mass and inertia properties, which are crucial for formation flying. Subsystems and equipment are selected among COTS, with the only exception of the SAR and HPS payloads, which shall be the result of a dedicated design. With specific reference to the SAR antenna, a reflectarray solution has been selected, including PCB reflective panels and a slotted-waveguide feed. A dedicated deployment mechanism has also been preliminarily designed, allowing a 1.4 m long antenna to be stowed within the available volume. Feasibility was also confirmed by a dedicated mission analysis.
The baseline scenario is a shared launch and the use of a carrier to bring the satellites to the final orbit, where they are deployed and perform formation acquisition. The nominal mission then starts, in which maintenance maneuvers are necessary to keep the distance from PLT-1 in the desired range and to maintain the CubeSats at the required baselines. Although the most natural option for formation design is to align the CubeSats into a train, rigidly controlling the along-track separations, a different approach has been selected for RODiO, allowing the CubeSats to drift by exploiting cross-track and vertical separations. This reduces the collision risk while enabling the mission with a feasible delta-V, i.e. less than 70 m/s in one year. Concluding this quick overview, it is also worth mentioning that important breadboarding activities are foreseen during Phase B for the HPS, SAR antenna, antenna deployment mechanism and SAR electronics, in order to increase the TRL. At the same time, experimental and simulation activities are planned concerning the verification of DSAR imaging, taking into account specific problems such as the compensation of clock errors and multi-platform focusing.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall M1/M2)

Session: D.03.05 Harnessing Open and Community-Driven Innovation in Times of Emergencies

In an era marked by frequent climate-related disasters and global health crises, the need for rapid and effective innovation in Earth Observation (EO) is paramount. This session explores how open and community-driven innovation can provide timely and impactful solutions by leveraging the collective intelligence and collaborative efforts of diverse stakeholders.
Two key themes are envisaged to be the focus of this session:
1. Open Innovation in Crisis Response:
- The necessity of accelerated innovation during emergencies, such as the COVID-19 pandemic.
- Benefits of open innovation in EO, including reduced time-to-market, shared costs and risks, and enhanced creativity.
- EO case studies of rapid development in various sectors (e.g. healthcare: personal protective equipment manufacturing, medical devices, and vaccine technologies) driven by open innovation practices.
- The role of cross-boundary collaboration in addressing complex challenges swiftly and effectively.
2. Community-Driven Approaches:
- The transformative potential of collaborative development models, crowdsourcing, and citizen science in satellite EO.
- Examples of successful open-source projects and co-creation initiatives that have advanced EO tools and technologies.
- Strategies for harnessing the collective intelligence of distributed communities to analyse large datasets, identify patterns, and develop innovative EO solutions.
- Empowering citizen scientists in data collection, validation, and raising public awareness to democratise access to EO data and its applications.
The objectives of this session are:
- To highlight how integrating external knowledge and community engagement can enhance innovation performance during crises.
- To showcase real-world examples of rapid innovation and effective community-driven approaches.
- To foster dynamic discussions on best practices and future directions for open and collaborative EO initiatives.
In terms of target audience, this session is designed for climate scientists, disaster experts, NGOs, city managers, emergency community members, first responders, insurers, government bodies, researchers, developers, policymakers, and citizen science enthusiasts. It provides a unique opportunity for these groups to exchange ideas and collaborate on innovative solutions for climate and health crises.

Presentations and speakers:


EO Open Innovation in Times of Emergencies


  • Loretta Latronico - ESA

Forking as a worldview: a software engineering concept transforming government agencies IRL (in real life) with Openscapes


  • Julia Lowndes - Openscapes

CloudCatcher: Validating cloud identification within Earth observation satellite products using citizen science


  • Caroline Cox - RAL Space, STFC

Harnessing Open-Source Innovation for Emergency Response in Tonga: Lessons from the Hunga Tonga-Hunga Haʻapai Eruption


  • Berit Mohr - OPENGIS.ch

Participatory Wall-to-Wall Mapping of Burned Areas in Russia With Sentinel-2 Imagery


  • Ilona Juravlyova - Greenpeace

Breaking the Silos: A Holistic Approach for Smart Digital Marketplace


  • Alen Berta - CGI
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.14)

Session: C.02.15 - M1 session: EO research missions: From sketch to reality (Development Phase) - Designing, Prototyping, and Overcoming Technical Hurdles - Harmony, TRUTHS, NGGM

The status of development of ESA missions will be outlined in four sessions of 1h30 each (together spanning a full day).
Participants will have the unique opportunity to gain valuable insights in technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and the status of mission development activities will be presented together with industrial and science partners.


M1 session: EO research missions: From sketch to reality (Development Phase) - Designing, Prototyping, and Overcoming Technical Hurdles - Harmony, TRUTHS, NGGM


Research Mission Programme Introduction


  • Dirk Bernaerts – ESA

Harmony Challenges and Solutions to Make It Feasible!


  • Florence Hélière – ESA

Harmony Industrial Development Status


  • Katarina Jesswein – OHB

TRUTHS Development Status


  • Andrea Marini – ESA

TRUTHS Industry


  • H. Wood, M. Del Junco Rodriguez

NGGM Development Status


  • Michael Francois – ESA

NGGM Industry


  • Roberto Bogiatto – TAS-I / Airbus
  • Thomas Ott – TAS-I / Airbus

Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 0.14)

Session: B.03.14 Space Solutions for the Green Transition: Breakfast Networking session

For decades, space capabilities have proven to be useful to mitigate the impact of climate change on our ecosystems, economies, and societies. Despite this, space capabilities remain underutilized in driving the urgent and tangible climate action needed as the world faces escalating climate-related risks and challenges. This networking session, open to all, provides a platform to identify immediate collaboration opportunities between and among ESA and non-space actors in aid of the green transition. Green transition topics span impact sectors such as insurtech, agriculture, forestry and many others. Through pitches and moderated thematic discussions between the user communities and the service providers, the session aims at identifying needs and collaboration opportunities to co-develop and scale space-integrated solutions to accelerate the green transition. 

Chairs:


  • Cristina Bramanti - ESA
  • Beatrice Barresi - ESA

Organizers:


  • Sveinung Loekken (ESA), Zaynab Guerraou (ESA), Catrin Lewis (ESA)

Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.85/1.86)

Session: A.09.08 Understanding the state of the cryosphere with satellite altimetry - PART 1

Altimetry missions have provided unique observations of the polar regions for more than three decades, transforming our understanding of the cryosphere in the process. Currently operating satellite altimeters such as NASA’s ICESat-2 and ESA’s CryoSat-2 are now the backbone of the long-running altimetry record, and together with SWOT, Sentinel-3, and SARAL they provide year-round observations of changes in Earth’s sea ice, ice sheets, ice shelves, icebergs, the polar oceans, mountain glaciers, and terrestrial snow. In a complementary way, airborne missions are providing calibration and validation data for these satellite altimeters as well as unique high-spatial-resolution data for studies of small-scale features such as ice ridging, rifts, and melt ponds. This session will demonstrate the vital role of both continuous and high-resolution altimetry in advancing our understanding of the processes involved in a changing cryosphere. We encourage submissions which use altimetry observations as a primary data set as well as those which integrate altimetry measurements into model results to provide new insights into cryospheric processes.

Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Partitioned ice sheet surface mass balance and ice-dynamical imbalance from satellite radar altimetry

Authors: Tom Slater, Inès Otosaka, Andrew Shepherd, Alan Muir, Lin Gilbert
Affiliations: Centre for Polar Observation and Modelling, Department of Geography and Environmental Sciences, Centre for Polar Observation and Modelling, Department of Earth Sciences, University College London, Mullard Space Science Laboratory, Department of Space and Climate Physics, University College London
Fluctuations in the elevation and mass of Antarctica and Greenland are driven by processes occurring over varying time scales, reflecting changes in snowfall, surface melt, and ice flow unique to each ice sheet. Here, we combine over three decades of observations from four satellite radar altimeter missions with a regional climate model to separate these signals and quantify volume changes due to snow and ice variability. In Antarctica, these measurements reveal the extent of dynamic changes masked by snowfall anomalies across the coast of Wilkes Land; between 2021 and 2024, thinning at Totten Glacier due to its continuing ice-dynamical imbalance was masked by thickening due to high snowfall across its wider catchment area. In Greenland, distinct patterns of change strongly associated with accelerating ice flow emerge at its key marine-terminating ice streams. Isolating the surface lowering arising through ice-dynamical imbalance in this way allows us to examine the onset and pace at which it has propagated inland, as well as the movement of the grounding line, which are key indicators of ice-sheet instability.
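The separation described above amounts to removing the modelled surface-process signal from the altimeter-observed elevation change, leaving the ice-dynamical residual. A minimal, hypothetical sketch of that partitioning step (array values are illustrative, not results from the study):

```python
import numpy as np

# Hypothetical elevation-change rates (m/yr) on a common grid/epoch:
# dh_obs from radar altimetry, dh_smb from a regional climate/firn model.
dh_obs = np.array([0.10, -0.25, 0.05, -0.40])
dh_smb = np.array([0.12, 0.10, -0.02, -0.05])

# Dynamic (ice-flow) component is the residual after removing the
# surface-process (snowfall/melt) signal from the observed change.
dh_dyn = dh_obs - dh_smb
print(dh_dyn)
```

Note how the second element shows dynamic thinning even where high snowfall makes the observed change look less negative, which is the masking effect the abstract describes.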
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Using CryoSat-2, ICESat-2 and airborne data to prepare for ice-sheet monitoring with CRISTAL

Authors: Karla Boxall, Mal McMillan, Qi Huang, Jennifer Maddalena, Kate Briggs, Joe Phillips, Alan Muir, Sophie Dubber, Noel Gourmelen, Michele Scagliola, Paolo Cipollini, Jerome Bouffard
Affiliations: UK Centre for Polar Observation and Modelling, Lancaster University, Mullard Space Science Laboratory, University College London, Earthwave Ltd., University of Edinburgh, ESA ESRIN
Three decades of routine satellite altimetry have provided a near-continuous observational record of polar topography, offering unparalleled insights into ice sheet elevation change. The ability of CryoSat-2, Sentinel-3 and ICESat-2 to simultaneously and continually monitor Earth’s ice surfaces is critical to understanding the ongoing and future imbalance of the ice sheets in a changing climate. The launch of the Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL), currently scheduled for the end of 2027, will mark a new era of operational polar-orbiting radar altimeters. CRISTAL’s novel and innovative dual-frequency Ku-Ka radar altimeter, large-scale Ku interferometric SAR acquisitions over ice surfaces, increased bandwidth, and Open Loop tracking will drive significant improvements in the measurement and monitoring of ice sheet elevation. Scientific and technical questions remain, however, with regard to how best to exploit CRISTAL data in domain-specific Level-2 processing. Here, we provide an overview of several Land Ice Research & Development activities conducted as part of the CRISTAL LEVel-2 procEssor prototype and R&D (CLEV2ER) project. These activities utilise in-orbit CryoSat-2 and ICESat-2 data, and are designed to contribute to the development of the Level-2 Ground Processor Prototype for Land Ice and Inland Water surfaces. We will present highlights from several of the R&D studies, including analysis and improvements of the methodology used for uncertainty estimation, the retrieval of penetration depth from dual-band altimetry, and the role of snowpack properties in penetration depth estimates. Ultimately, these findings will enhance the scientific readiness of the CRISTAL mission and help to ensure the timely exploitation of its data for the improved measurement and monitoring of ice sheet elevation.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Daily drift-aware Antarctic sea ice freeboard and thickness maps from satellite altimetry

Authors: Marion Bocquet, Robert Ricker, Stefan Hendricks, Thomas Lavergne, Emily Down
Affiliations: NORCE Norwegian Research Centre, Alfred Wegener Institute for Polar and Marine Research, Norwegian Meteorological Institute
The polar regions are key indicators of climate change, experiencing rapid and significant changes that have profound implications for the global climate system. Antarctic sea ice plays a critical role in regulating the Southern Ocean's circulation, thereby influencing broader global climate processes. In contrast to the Arctic, Antarctic sea ice extent has not shown a clear long-term trend in recent decades, but rather more complex regional changes. Sea ice modulates ocean heat exchanges and freshwater fluxes, affecting the global ocean circulation. To compute freshwater fluxes, we need accurate estimates of sea ice thickness and snow depth. We use satellite altimetry to derive sea ice freeboard and thickness. Snow depth can be derived with a dual-altimetry approach, using NASA’s ICESat-2 mission to retrieve the total freeboard, and radar altimetry satellites such as CryoSat-2 and Sentinel-3 to estimate the sea ice freeboard. The difference between the total and sea ice freeboards gives the snow depth. On the one hand, this depends on appropriate methods to derive surface heights, considering the complex snow stratigraphy on Antarctic sea ice, which alters radar altimetry measurements. On the other hand, particularly when using measurements from more than one satellite mission, co-registration of measurements is important. Monthly maps of ice freeboard or thickness are typically generated by averaging along-track data onto polar grids corresponding to the satellite overflights. However, sea ice drifts significantly over the course of a month, particularly in regions such as the Ross Sea and the Weddell Sea. Neglecting sea ice drift when generating monthly sea ice freeboard and thickness maps from satellite altimetry blurs the spatial distribution and introduces additional uncertainties. This is even more important when combining freeboard estimates from radar and laser altimetry from different orbits.
To address this, we synergize altimetry data with EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI SAF) sea ice drift estimates derived from passive microwave sensors. In the framework of the ESA-funded project “Sea Ice Mass Balance Assessment: Southern Ocean” we will present daily updated freeboard and thickness maps based on satellite altimetry data from ICESat-2, CryoSat-2 and Sentinel-3 missions. Based on a method previously developed for the Arctic, in the ESA Climate Change Initiative (CCI) Sea Ice project our approach involves advecting along-track altimeter measurements daily over the course of a month to produce drift-corrected sea ice maps. These along-track measurements are projected onto a target day with drift accounted for both 15 days before and after the target day. The drift correction also allows us to identify areas of sea ice observed multiple times by the satellite, enabling the estimation of growth rates and changes in freeboard and thickness due to deformation and thermodynamic processes as well as snow accumulation. By incorporating these growth estimates, we can adjust the freeboard and thickness estimates for the time offset between the acquisition day and the target day. By computing drift-aware freeboard from radar altimetry (CryoSat-2 and Sentinel-3) and laser altimetry (ICESat-2), we ensure co-location of satellite measurements and we will investigate the potential of deriving snow depth.
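The dual-altimetry relation described above (snow depth as the difference between laser-derived total freeboard and radar-derived sea ice freeboard), together with the standard hydrostatic-equilibrium conversion from freeboard to thickness, can be sketched as follows. This is a simplified illustration, not the project's processing chain; the density values are common textbook assumptions, not mission constants:

```python
# Illustrative densities (kg/m^3) - assumptions, not mission constants.
RHO_WATER = 1024.0
RHO_ICE = 917.0
RHO_SNOW = 300.0

def snow_depth(total_freeboard, radar_freeboard):
    """Laser (total) freeboard minus radar (ice) freeboard gives snow depth."""
    return total_freeboard - radar_freeboard

def ice_thickness(radar_freeboard, h_snow):
    """Hydrostatic equilibrium: ice freeboard plus snow load -> ice thickness."""
    return (RHO_WATER * radar_freeboard + RHO_SNOW * h_snow) / (RHO_WATER - RHO_ICE)

# Example: 0.45 m total (laser) freeboard, 0.30 m radar freeboard
hs = snow_depth(0.45, 0.30)        # 0.15 m of snow
print(ice_thickness(0.30, hs))     # sea ice thickness in metres
```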
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Using Swath Altimetry to Better Understand Nadir Altimetry: a Comparison of SWOT, Sentinel-3 and ICESat-2 over Sea-Ice

Authors: Fanny Piras, Marta Alves, Sara Fleury, Matthias Raynal, François Boy, Filomena Catapano
Affiliations: CLS, LEGOS, CNES, ESA/ESRIN
Satellite altimetry, although initially dedicated to ocean observation, is particularly well suited to observing the polar regions. Since the ERS missions, the polar ice caps have been observable from space, but exploitation remained rather limited until 2010 because of the operational mode used, namely LRM (Low Resolution Mode), which is adapted to the ocean but less so to ice-covered surfaces. SAR (Synthetic Aperture Radar) altimetry, which significantly improves the along-track resolution, has been a true revolution, with CryoSat-2 and later Sentinel-3 allowing the first consistent estimation of freeboard and sea-ice thickness. The ICESat-2 mission, launched in 2018, is also innovative thanks to its laser, which not only provides excellent ground resolution but also does not penetrate the snow layer, making it complementary to Ku-band instruments such as those carried onboard CryoSat-2 and Sentinel-3. Despite the variety of available missions, processing data over sea ice remains extremely challenging. The diversity of surface types (open water, sea ice, leads, melt ponds, …) and the different surface properties (presence of snow, ice type, etc.) significantly affect the radar signal and are only partially accounted for in current retracker models. This has a substantial impact on the estimated geophysical parameters, and several studies are currently ongoing to better account for these effects (for example [1], [2]). Finally, given the particularly hostile nature of the polar regions, there are relatively few in-situ datasets available to validate altimetric data. Notable examples include the annual BGEP mooring campaigns, which provide regular estimates of ice draft (the submerged part of the sea ice), but always in the same limited area. The SWOT satellite is a NASA/CNES mission developed with participation from CSA/UKSA and launched in December 2022.
The KaRIn interferometric radar is a significant innovation as it provides 2D topography measurements, covering 50 km-wide swaths on each side of the nadir measurement provided by Poseidon-3C. The first topography images over sea ice showed promising initial results exceeding expectations, even though sea ice was not a primary objective of the mission. The combined 2D information on topography and roughness offers an unprecedented context, allowing not only the nature of the sampled surface to be discriminated, but also signals from nadir altimetry to be validated and better understood. The first part of this talk will focus on KaRIn/Sentinel-3 crossover case studies, where KaRIn data is exploited to better understand the nadir altimetric signal over sea ice. More specifically, KaRIn data is used in this analysis to assess a critical step in the freeboard computation, namely the leads/floes discrimination step, which is performed differently by the different agencies/research teams. The waveform shape over different surface types is also analyzed, as well as Level-2 geophysical variables such as range, sigma0 and SWH. A similar analysis will then be presented using KaRIn/ICESat-2 crossover case studies, which is particularly interesting because laser data does not penetrate the snow layer and neither (supposedly) does the Ka-band. This direct comparison between the two instruments is thus fully complementary to better understand the interactions between the surface and the radar signal. Finally, depending on the progress of the study at the time of the conference, we may present an innovative way of simulating Sentinel-3 SAR echoes based on a facet-based numerical model, using KaRIn topography and roughness information. [1] Landy, J. C., Petty, A. A., Tsamados, M., & Stroeve, J. C. (2020). Sea ice roughness overlooked as a key source of uncertainty in CryoSat‐2 ice freeboard retrievals.
Journal of Geophysical Research: Oceans, 125(5), e2019JC015820. [2] Landy, J. C., Tsamados, M., & Scharien, R. K. (2019). A facet-based numerical model for simulating SAR altimeter echoes from heterogeneous sea ice surfaces. IEEE Transactions on Geoscience and Remote Sensing, 57(7), 4164-4180.
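The leads/floes discrimination step assessed in this talk feeds directly into the freeboard computation: lead samples define the local sea surface height, and floe elevations are then referenced to it. A highly simplified sketch of that logic (hypothetical values; operational chains interpolate the sea surface along track rather than taking a simple mean):

```python
import numpy as np

# Hypothetical along-track surface elevations (m) and a lead/floe
# classification, mimicking the discrimination step discussed above.
elev = np.array([0.02, 0.35, 0.40, 0.01, 0.38, 0.03])
is_lead = np.array([True, False, False, True, False, True])

# Local sea surface height estimated from the lead samples.
ssh = elev[is_lead].mean()

# Freeboard of floe samples relative to the lead-derived sea surface.
freeboard = elev[~is_lead] - ssh
print(freeboard.round(3))
```

Because the whole retrieval hinges on which samples are labelled leads, differences in the discrimination step propagate directly into freeboard, which is why cross-checking it against KaRIn's 2D context is valuable.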
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: From Observation to Insight: Antarctic Coastal Polynyas Monitored by SWOT and Their Role in Deep Water Formation

Authors: Dr. Lu Zhou, Julienne Stroeve, Michiel van den Broeke, Prof Shiming Xu
Affiliations: Utrecht University, AWI, Tsinghua University
Antarctic coastal polynyas are critical regions for ocean-atmosphere interactions, sea ice production, and the formation of high-salinity shelf water, a precursor to Antarctic Bottom Water, which drives global ocean circulation. Despite their importance, monitoring these dynamic regions at high spatial and temporal resolutions remains challenging due to their remote location and complex physical processes. The Surface Water and Ocean Topography (SWOT) satellite mission offers a transformative opportunity to enhance our understanding of polynya dynamics by providing unprecedented 2-D high-resolution measurements of sea surface height, ocean circulation, and associated hydrodynamic features. This study uses SWOT data to monitor Antarctic coastal polynyas, with a focus on their spatial variability and seasonal cycles. By applying SWOT elevation and backscatter from KaRIn, we quantify key processes, including sea ice production and heat exchange. Preliminary results highlight the utility of SWOT in capturing high-resolution polynya texture compared to traditional passive radiometers, as well as in providing the first direct measurements of sea ice produced within a polynya. The insights gained from this study provide a more comprehensive understanding of the mechanisms driving polynya formation and evolution and their downstream impacts on the global thermohaline circulation. This research underscores the potential of SWOT in advancing understanding of the Antarctic cryosphere and its implications for climate variability and change.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: POLAR OCEAN TIDES REVISITED WITH SWOT AND CRYOSAT

Authors: Ole Andersen, Michael Hart-Davis
Affiliations: DTU Space, TU Munich
Polar ocean tides have been improved significantly with CryoSat-2, the first non-sun-synchronous satellite to operate in the polar regions. This satellite enables the determination of tides up to the 88th parallel. However, the satellite poses a number of challenges to tidal analysis because of its long ground-track repeat period (368 days). Within the ESA CP40 project, the SAMOSA+ physical retracker was developed to process CryoSat-2 data across SAR and SARIn measurement modes in the polar regions, which enabled significant improvements of polar ocean tide solutions. SWOT was launched with an inclination of 77 degrees and consequently misses a substantial part of the Arctic Ocean. However, SWOT provides interferometric satellite altimetry in the polar regions with a resolution and accuracy not seen before, in particular because it can discriminate better between sea ice and ocean, delivering considerably more accurate sea surface height observations. Finally, SWOT is also designed to determine all tidal constituents with well-defined alias periods, so it is timely to revisit the determination of ocean tides in the polar regions. In this presentation, 12 years of CryoSat-2 data and 1.5 years of SWOT altimetry have been analyzed for residual ocean tides relative to the FES2014/FES2022 ocean tide models in the Arctic and Antarctic Oceans, using both the harmonic method and the response formalism. We use these satellites to derive tidal corrections to the major astronomical constituents M2, S2, K2, N2, K1, O1, P1, and Q1. In addition, several smaller third-, fourth-, and sixth-diurnal tides have been determined. Some of these small compound/overtide constituents show small but consistent signals across regions like the Weddell Sea (South Atlantic) and Baffin Bay between Greenland and Canada.
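As a minimal illustration of the harmonic method mentioned above, the sketch below fits two tidal constituents to synthetic sea-surface-height residuals by least squares. The constituent periods are standard, but the data and processing are illustrative only, not the authors' analysis chain:

```python
import numpy as np

def harmonic_fit(t, ssh, omegas):
    """Least-squares fit of tidal constituents (angular frequencies
    `omegas`, rad/hour) to sea-surface-height residuals `ssh` (m)
    sampled at times `t` (hours). Returns amplitude and phase per
    constituent."""
    # Design matrix: mean term plus [cos(w t), sin(w t)] per constituent
    cols = [np.ones_like(t)]
    for w in omegas:
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, ssh, rcond=None)
    amps, phases = [], []
    for k in range(len(omegas)):
        a, b = coef[1 + 2 * k], coef[2 + 2 * k]
        amps.append(np.hypot(a, b))       # amplitude of A*cos(wt - phi)
        phases.append(np.arctan2(b, a))   # phase lag phi
    return np.array(amps), np.array(phases)

# Synthetic residuals: M2 (period 12.4206 h) and O1 (25.8193 h),
# hourly samples over one 368-day CryoSat-2 repeat cycle
t = np.arange(0, 24 * 368, 1.0)
w_m2, w_o1 = 2 * np.pi / 12.4206, 2 * np.pi / 25.8193
ssh = 0.30 * np.cos(w_m2 * t - 0.5) + 0.10 * np.cos(w_o1 * t - 1.2)
amps, phases = harmonic_fit(t, ssh, [w_m2, w_o1])
print(np.round(amps, 3))   # ≈ [0.3, 0.1]
```

With a long record the constituent frequencies are well separated, so the fit recovers the amplitudes and phases essentially exactly; with sparse, long-repeat sampling the same system becomes ill-conditioned, which is the aliasing challenge the abstract refers to.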
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.34)

Session: A.02.08 Impacts of fire in the Earth system - PART 1

Fire is one of the critical factors affecting global ecosystems and societies, impacting atmospheric and vegetation composition, soil erosion and runoff, as well as human health, resources, and assets. The occurrence of catastrophic seasons with many human casualties and/or large burned areas in the last decades is often associated with heat waves and droughts, underscoring the decisive role of climate and weather. Important lessons can be learned from studying the impacts of those wildfires and analysing the implications for future fire seasons so we can continuously improve our understanding and management of fire.
We encourage all abstracts that explore fire occurrence in the Earth system at any temporal and spatial scale using remote sensing data and/or modelling, and its impacts on (1) ecosystems, vegetation composition and structure, resilience, and fuel management; (2) atmospheric chemistry, air quality, and human health; (3) biogeochemical cycles, carbon budget, water and nutrients; (4) soil erosion; (5) burn severity; and (6) fire occurrence in the past, present and future.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.34)

Presentation: An Eco-Evolutionary Optimality Approach to Modelling Wildfires

Authors: Dr Olivia Haas, Dr David Sandoval, Ms Yicheng Shen, Dr Dharma Sapkota, Professor Sandy P. Harrison, Professor I. Colin Prentice
Affiliations: Geography and Environmental Science, University of Reading, Leverhulme Centre for Wildfires, Environment and Society, Georgina Mace Centre for the Living Planet, Department of Life Sciences, Imperial College
Empirical and process-based models are used to model wildfire and its impacts on the Earth System, but struggle to reproduce many fire properties, including fire intensity. They also capture poorly the temporal dynamics of the fire regime, including the length of the fire season, the timing of maximum fire occurrence, and the climatically induced interannual variability in fires. Furthermore, the complexity of these models makes it difficult to diagnose the cause of these failures or to investigate the impacts of multiple scenarios of environmental change and management strategies on fire regimes. Eco-evolutionary optimality (EEO) theory, which is predicated on the assumption that plants are adapted to the environment in which they occur, has been shown to provide an alternative and much simpler modelling approach for many vegetation properties. Here, we develop an EEO-based model for wildfires, on the assumption that the environmental niche of different plants is co-determined by climate and the fire regime characteristic of that climate. Drawing on the fact that gross primary production and atmospheric dryness, as measured by vapour pressure deficit, have consistently been shown to be primary drivers of fire intensity, size and burnt area, we show that the phase difference between these properties and their seasonal magnitude can be used to distinguish between different types of wildfire regime. Based on this, we develop a model to simulate these regimes under modern conditions. We show that this EEO-based model reproduces the observed seasonal cycle, interannual variability and burnt area of wildfire both in specific vegetation types and globally. We then use the model to explore how wildfire regimes are likely to be affected under future climate and land-use change scenarios. Finally, we explore how such a modelling strategy could be used to examine the possible impact of different fire management strategies on future fire type and occurrence.
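The phase difference between two seasonal cycles (here, stand-ins for GPP and VPD) can be estimated from their first annual Fourier harmonic. The sketch below is an illustration with toy monthly climatologies, not the authors' model:

```python
import numpy as np

def annual_phase(monthly):
    """Phase (radians) of the first annual harmonic of a 12-month
    climatology, from the fundamental Fourier coefficient."""
    m = np.arange(12)
    return np.angle(np.sum(monthly * np.exp(-2j * np.pi * m / 12)))

def peak_lag_months(a, b):
    """Months (wrapped to (-6, 6]) by which series b's seasonal peak
    lags series a's."""
    d = (annual_phase(a) - annual_phase(b)) * 12 / (2 * np.pi)
    return (d + 6) % 12 - 6

# Toy climatologies: "GPP" peaks in month 6, "VPD" three months later
m = np.arange(12)
gpp = 1 + np.cos(2 * np.pi * (m - 6) / 12)
vpd = 1 + np.cos(2 * np.pi * (m - 9) / 12)
print(round(peak_lag_months(gpp, vpd), 2))   # prints 3.0
```

A lag of this kind (how far the dry season trails peak productivity) is one simple, reproducible way to summarise the phase relationship the abstract uses to separate wildfire regimes.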
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.34)

Presentation: Characterization of Extreme Fires From Satellite Earth Observations

Authors: Erika Solano-Romero, Carlota Segura-Garcia, Dr Lucrecia Pettinari, Prof Emilio Chuvieco
Affiliations: Universidad De Alcala
Extreme events, including fires, represent a considerable risk due to their adverse effects on climate, society and the environment, including human health and properties, ecosystem services, water and air quality, soil erosion and atmospheric conditions. Over the past two decades, there has been a string of record-breaking fires in terms of size, intensity and carbon emissions, some with devastating social and ecological consequences. Further, climate change is expected to increase the occurrence of extraordinarily large and intense fires. However, despite their relevance to the Earth System, extreme fires are still identified as a knowledge gap and more research is needed to better understand these extreme events, their causes and consequences. Several studies have reported increases in the frequency of fire events with extreme fire behaviour in different regions across the world. Despite these efforts, however, the definition of ‘extreme fire event’ remains imprecise. Further, definitions of extreme fires often focus solely on single fire behaviour metrics, like size or intensity, and pre-established thresholds. However, fire is a complex process that encompasses diverse aspects that influence fire occurrence and behaviour, which demands a holistic approach to studying this phenomenon. Indeed, fire regimes vary with the climatic, geographic and anthropic characteristics of the landscape, and, consequently, what constitutes an ‘extreme fire event’ will also depend on the context. This study, developed in the framework of the XFires project of the European Space Agency (ESA), aims to characterise extreme fire events at a global scale, providing a comprehensive definition and quantification of this phenomenon, and to analyse the spatio-temporal patterns of these extreme fire events.
We use global data from ESA's Climate Change Initiative (CCI) like the FRY 2.0 database, which has data for a 20-year period (2001-2020), allowing us to analyse several fire behaviour variables such as fire intensity measured through fire radiative power (FRP), event severity, duration, size and rate of spread (ROS). First, we explore different methodologies to determine extreme fire events based on the frequency distribution of these different fire behaviour variables, separately and together (including statistical methods of extremes analysis such as upper percentiles of distributions and time series anomaly analysis). Then, we analyse the spatial distribution of extreme fires and their characteristics, and how the magnitude of extreme fire behaviour has changed over time in different areas. To allow context-dependency, we perform the characterisation of extreme fire events and the spatio-temporal analysis for different biomes, as well as for each 0.25º grid cell. Since biomes present different types of vegetation with differing adaptations to fire, as well as different climatic characteristics, we first categorise what constitutes an ‘extreme fire event’ for each biome and compare how the fire characteristics of these events differ between biomes. Further, because features like topography, landscape fragmentation, or population density vary within biomes, we also categorise ‘extreme fire events’ at the grid-level. Finally, we quantify how the characteristics of these events change across the world, and how the frequency and the magnitude of these events have evolved over time. This study will provide a global-scale assessment of fires with extraordinary behaviour, as well as a definition and identification of ‘extreme fire events’. As a result, we will identify the combination of fire behaviour variables that make a fire extreme in different biomes and regions of the world.
The categorisation of ‘extreme fire events’ can enable the identification of the causes and drivers that lead to such fires. Finally, findings will enable a better analysis and modelling of the impacts of ‘extreme fire events’ on different elements of the Earth System - like carbon emissions, the climate, water bodies or the cryosphere. This work has been performed within the XFires project, which is part of the European Space Agency Climate Space Programme.
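A minimal sketch of the upper-percentile approach mentioned above, combining several fire-behaviour variables into one criterion. The data, thresholds and variable names are illustrative, not the FRY 2.0 database or the project's methodology:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-event fire-behaviour metrics for 1000 events, driven
# by a shared "severity" factor so that large fires also tend to be
# intense and fast-spreading (units illustrative)
severity = rng.normal(size=1000)
size = np.exp(1.0 + 1.0 * severity + 0.5 * rng.normal(size=1000))  # km2
frp  = np.exp(3.0 + 0.8 * severity + 0.4 * rng.normal(size=1000))  # MW
ros  = np.exp(0.5 + 0.6 * severity + 0.3 * rng.normal(size=1000))  # km/day

def extreme_mask(metrics, q=95):
    """Flag events above the q-th percentile in EVERY metric: one
    simple way of combining several fire-behaviour variables into a
    single 'extreme event' criterion."""
    mask = np.ones(len(metrics[0]), dtype=bool)
    for m in metrics:
        mask &= m > np.percentile(m, q)
    return mask

extremes = extreme_mask([size, frp, ros])
print(extremes.sum(), "of", extremes.size, "events flagged as extreme")
```

Because the percentile thresholds are computed from the sample itself, the same criterion can be re-derived per biome or per grid cell, which is the context-dependence the abstract argues for.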
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.34)

Presentation: Mapping fire-induced permafrost thaw and carbon emissions from space

Authors: Sander Veraverbeke, Lucas Diaz, Max van Gerrevink, Sonam Wangchuk
Affiliations: Vrije Universiteit Amsterdam, International Centre for Integrated Mountain Development
Arctic-boreal fire regimes are intensifying, leading to a growing number of boreal forest fires and Arctic tundra fires occurring on permafrost terrain. After fires, the seasonally thawed active layer of permafrost soils usually thickens, and this can lead to long-term gradual or abrupt permafrost degradation. Permafrost soils store large amounts of carbon, and hence fire-induced thaw may lead to additional carbon emissions from permafrost soils for many years after the fire. This presentation explores several spaceborne measurements for mapping fire-induced permafrost thaw and its associated carbon emissions and will cover two case studies and one continental application. First, we investigated a boreal forest fire in Eastern Siberia using several measurements from sensors on Landsat 8. We found that land surface temperature (LST) in particular related strongly to field-measured thaw depth, and we developed a statistical model to map fire-induced thaw depth over the entire fire scar. Second, we mapped post-fire permafrost soil subsidence after several tundra fires in Northeastern Siberia using Sentinel-1 interferometric synthetic aperture radar (InSAR) data. We found that burned areas experienced about three times higher soil subsidence than adjacent unburned areas in the growing season after the fire (4.88 cm/year vs. 1.54 cm/year), and this difference was primarily driven by fire-induced surface albedo darkening. Lastly, we used the ESA Climate Change Initiative permafrost product to estimate post-fire active layer thickening and associated carbon emissions for all fires in boreal North America between 2001 and 2019. We estimate that post-fire carbon emissions from permafrost thaw amount to up to 30% of the direct carbon emissions during fires, demonstrating the importance of including permafrost thaw when estimating climate feedbacks from boreal forest fires.
Taken together, these studies highlight the use of multi-source remote sensing products for estimating post-fire surface deformation and active layer thickening in permafrost ecosystems, and provide a first continental assessment of the climate warming feedback from carbon emissions from fire-induced permafrost thaw.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.34)

Presentation: Global fire radiative power (FRP) trends over the MODIS record

Authors: Jose Gomez-dans, Prof Martin Wooster, Dr Zixia Liu, Dr Weidong Xu, Dr Jiangping He
Affiliations: Dept of Geography, King's College London, Leverhulme Centre for Wildfire Research, National Centre for Earth Observation (NCEO)
Landscape fires emit thermal radiation that can be detected by overflying spaceborne sensors and quantified as fire radiative power (FRP). FRP measures the rate at which fires emit radiant energy and is directly related to the instantaneous amount of biomass combusted. Consequently, it acts as a proxy for understanding not only the presence and extent of fires but also their behaviour, associated emissions, and their impact on vegetation. This study compiles and analyses the complete FRP dataset captured by the MODIS (Moderate Resolution Imaging Spectroradiometer) sensors from 2001 to the present. The dataset undergoes extensive pre-processing to remove non-fire sources of radiation, such as persistent gas flares, eliminate highly oblique observations, and separate data into daytime and night-time observations. The FRP values are then gridded to facilitate statistical analysis. Due to the episodic and sparse nature of satellite observations, FRP data present significant challenges for statistical modelling. To address these, we test a range of probability density functions (PDFs), including the standard log-normal distribution and modified variants with tailored upper and lower tails. These adjustments better account for extreme fire behaviour and the underrepresentation of smaller fires, which are often below MODIS detection thresholds. The fitted parameters from these PDFs offer insights into both central and extreme FRP behaviour, providing a more complete characterisation of fire dynamics. Using these parameters alongside additional FRP-derived metrics, we investigate global trends in fire activity over the past two decades. Comparisons with complementary datasets (for example, with the VIIRS sensor for the overlapping period) are used to confirm the robustness of the findings, highlight the evolving nature of fire activity, and investigate its underlying drivers. This work will present the methods, key findings, and implications of this analysis.
We will also outline ongoing work into attributing these changes to different climatic, human and land surface drivers. We will report on comparisons with other fire traits (burned area, fire size, ...) when relevant.
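One way to see why the choice of PDF and the detection threshold matter is a quick synthetic experiment. The numbers below are illustrative, not MODIS data or the authors' fitting procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-detection FRP sample (MW): a log-normal is a common
# first-order model; a crude cutoff mimics a detection limit that
# removes the weakest fires, as for MODIS.
frp_all = rng.lognormal(mean=3.0, sigma=1.0, size=20000)
frp = frp_all[frp_all > 5.0]

# Naive log-normal fit that ignores the truncation: the fitted median
# (scale) comes out above the true median exp(3) ~ 20.1 MW, and the
# fitted sigma below the true value of 1.0 -- illustrating why
# modified lower tails matter for detection-limited FRP samples.
shape, loc, scale = stats.lognorm.fit(frp, floc=0)
print(f"sigma = {shape:.2f}, median = {scale:.1f} MW")
```

The bias of the naive fit grows as the detection threshold rises relative to the distribution's bulk, which is one motivation for the tailored-tail variants described in the abstract.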
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.34)

Presentation: GFAS4HTAP - A blended fire emission dataset for studying the atmospheric impact of vegetation fires in HTAP3

Authors: Johannes Kaiser, Daniel G. Holmedal, Martin A. Ytre-Eide, Mark de Jong, Bo Zheng, Martin J. Wooster, Cynthia H. Whaley
Affiliations: Klima- og miljøinstittut NILU, King's College London, Environment and Climate Change Canada, Tsinghua University, Natural Resources Canada - Canadian Forest Service
We present updated and re-calibrated daily global vegetation fire emissions of various smoke constituents based on satellite observations of fire radiative power (FRP) covering 2003-present. The new dataset will be used as the base pyrogenic emission dataset in the upcoming HTAP3 multi-model, multi-pollutant study of fire impacts (Whaley et al. 2024), and is thus called GFAS4HTAP. ESA CCI land cover data for 2018 and a global peat map (Xu et al. 2018) are used to generate new global maps of thirty fire biomes and six fire types. The gap-filled representation of daily FRP in CAMS GFASv1.2 (Kaiser et al. 2012) is used as the underlying activity and thus determines the temporal variability on all timescales from days to decades and the spatial distribution in each biome. The conversion of FRP to dry matter combustion in each biome is re-calibrated to GFED5beta during 2017-2019. It anchors the absolute levels of vegetation fire activity in each biome to this (small) part of the time series. Emissions of various smoke constituents are subsequently determined with the emission factors from NEIVA (Shahid et al. 2024) for each of the fire types. Comparisons with dry matter combustion rates and emissions from GFED4s, GFED5beta, FINN2.5 and GFASv1.2 reveal that the inventories can be roughly classified into one group of "traditional" inventories with lower fire activity and emissions, and another of "more recent" inventories with higher fire activity. The pyrogenic carbon monoxide emission estimates from an inversion of satellite observations of atmospheric composition (Zheng et al. 2019) lie between these two groups in terms of global annual values. However, they are consistent with the "more recent" inventories during the late boreal summer peak of the global fire activity and with the "traditional" inventories during periods of lower fire activity. We interpret this as indicative support for the "more recent" inventories.
GFAS4HTAP falls into this category due to being anchored to GFED5beta.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Room 1.34)

Presentation: FireCCI Burned Area Algorithms and Products for Climate Modelling

Authors: M. Lucrecia Pettinari, Amin Khairoun, Erika Solano, Dr. Daniela Stroppiana, Thomas Storm, Dr. Martin Boettcher, Prof Emilio Chuvieco
Affiliations: Universidad de Alcala, National Research Council of Italy, Institute for Electromagnetic Sensing of the Environment, Brockmann Consult GmbH
Earth observation data is a widely used and very advantageous source of information for burned area (BA) detection, as it provides global information in a systematic way. Several polar-orbiting satellites offer information in spectral regions suitable for BA detection, particularly infrared data at different wavelengths: near (NIR), short-wave (SWIR) and thermal. Within the European Space Agency (ESA) Climate Change Initiative (CCI), the FireCCI project aims to develop and validate burned area algorithms to meet, as far as possible, GCOS (Global Climate Observing System) Essential Climate Variable (ECV) requirements for (consistent, stable, error-characterized) global satellite data products from multi-sensor data archives. Since the start of the CCI Programme, more than ten years ago, FireCCI has developed different BA data records based on surface reflectance and active fire information from a variety of ESA and NASA sensors, specifically targeted for climate modelling applications, but also useful for other purposes such as fire behaviour or fire ecology research. Through the life of the FireCCI project, the different algorithms for BA detection have been refined, based on validation results of previous versions and new sensors’ data becoming available. The current suite of global medium-resolution datasets obtained from the FireCCI algorithms spans from 2001 to 2023, with plans to expand it further into the future. Two main data records are available. The first is FireCCI51, based on MODIS surface reflectance and active fires, and covering the period 2001-2022 at a spatial resolution of 250 m. With the foreseen end of the MODIS programme and the decrease in quality of its observations, the Sentinel-3 (S3) satellites provide valuable information to continue the global BA detection at a similar spatial resolution.
This is the main source of data of the FireCCIS311 product, derived from the FireCCI51 algorithm, but improved and adapted to input S3 SWIR surface reflectance and VIIRS active fires. This dataset is currently available for the period 2019-2023, and will be further processed into the future. It uses the SYNERGY (SYN) product of S3, which provides SWIR data at 300 m spatial resolution, the same resolution as the BA product. Additionally, both data records are also provided at a gridded resolution of 0.25 deg, more appropriate for ingestion in global climate and dynamic vegetation models. These products detect an average BA of 4.47 Mkm2 (FireCCI51) and 4.77 Mkm2 (FireCCIS311) in their respective time series. In the years of overlap between FireCCI51 and FireCCIS311, the latter detects a mean of 25.5% more BA than FireCCI51. Although the spatial resolution of FireCCIS311 is slightly coarser than that of FireCCI51 (300 m vs. 250 m), an improved algorithm, the availability of SWIR information in the SYN bands of S3, and the use of VIIRS active fires improved the results of this product. These improvements are also confirmed by the validation results. At the same time, algorithms have been developed to create BA products based on high-resolution Earth observation data, in this case Sentinel-2 (S2). Two continental data records are currently available: FireCCISFD11 (SFD meaning Small Fire Dataset) and FireCCISFD20. These products are provided at 20 m spatial resolution, and were processed for Sub-Saharan Africa. FireCCISFD11 used S2-A plus MODIS active fires and was processed for the year 2016. FireCCISFD20, processed for the year 2019, took advantage of the availability of both S2 A&B, and used VIIRS active fires. These two products detect between 58% and 83% more BA than FireCCI51 for the same area, showing the importance of small fires, usually not detected with medium-resolution sensors, in their contribution to the total global burned area.
The different BA datasets of FireCCI showcase the increase of global (and continental) BA detected as new algorithms and sensors have become available. These results have direct implications for global emissions estimations from fires, and more generally for climate modelling, and stress the need to revise and update the previous global estimations and their contribution to climate change. Within the current phase of FireCCI, the project will expand its suite of products: we plan to develop a harmonized FireCCI51-FireCCIS311 dataset, to provide users with a consistent time series of BA starting from 2001 and extending into the future. Furthermore, the SFD algorithm will be processed at global scale for the first time, to obtain a wall-to-wall BA product based on S2 for the year 2023. In this way, FireCCI strives to continue to improve the Fire Disturbance ECV, and provide the best possible data records to the scientific community.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F1)

Session: A.04.03 Monitoring Greenhouse Gases from Space - Methods and Validation - PART 1

Since the launch of SCIAMACHY on ENVISAT, the uncertainties surrounding the global distribution of potent Greenhouse Gases (GHGs) such as Methane (CH4) and Carbon dioxide (CO2) have been dramatically reduced. Yet, despite advances in technology in the decades since SCIAMACHY, with missions such as GOSAT and Sentinel-5P, comparisons of satellite observations with bottom-up inventories show that significant uncertainties still remain. Key to reducing these uncertainties are the validation networks that provide confidence in the satellite retrievals, and the advancement of retrieval methods, including sophisticated use of machine learning and advanced uncertainty quantification methods.

This session is dedicated to presenting the current state of the art in methods and validation for the remote sensing of GHGs, such as but not limited to CH4 and CO2, including results from current missions and ground-based networks such as Sentinel-5P, GOSAT/2, PRISMA, EnMAP and the TCCON and COCCON networks. The presentation of advanced remote sensing techniques and methods leveraging open science and machine learning is strongly encouraged in this session.

Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F1)

Presentation: Bringing Full Physics to TANGO

Authors: Peter Sterk, Jochen Landgraf, Sha Lu, Otto Hasekamp, Raul Laasner, Tobias Borsdorff
Affiliations: SRON
The Twin Anthropogenic Greenhouse gas Observers (TANGO) is an ESA mission within the ESA SCOUT program, to be launched in 2028. It comprises two CubeSats flying in close formation, TANGO-Carbon and TANGO-Nitro, with TANGO-Carbon measuring anthropogenic greenhouse gas emissions of carbon dioxide and methane. Observations of nitrogen dioxide by TANGO-Nitro are meant to support the detection of the emission plume. Using highly agile attitude control and a spatial resolution of 300x300 m², the satellites are capable of targeting individual emission sources. The TANGO mission will provide the column-averaged dry-air mole fractions of carbon dioxide (XCO₂) and methane (XCH₄) using the so-called proxy method (Frankenberg, 2005). Here, a known concentration of one gas species (e.g. CH₄) is used to estimate the total light-path modification, which is then used in a non-scattering retrieval of the mixing ratio of the target gas (e.g. CO₂). The focus of the TANGO mission lies on the accurate estimation of emissions in the case that the proxy gas is not co-emitted with the target gas, which holds for most anthropogenic emissions of the energy sector. In this study, we investigate the explorative option of applying a full-physics retrieval by combining TANGO with collocated aerosol observations. Here, we investigate sequential data processing, in which a multi-angle polarimeter measurement is used to retrieve aerosol properties, which are then injected into a TANGO full-physics retrieval. The TANGO satellites will share spatial overpasses with 3MI on MetOp-SG and CO2M, with 3MI crossing at 9:30, TANGO at 11:00 and CO2M at 12:00, thus providing aerosol measurements at roughly a one-hour time delay to TANGO. This opens an opportunity for synergy between the three missions. Although CO2M and 3MI operate at a much coarser resolution of approximately 4 km, the sequential data use can provide adequate aerosol information to reduce the aerosol-induced error in the TANGO data product.
Furthermore, the synergy allows lowering the detection limit of all three missions by enhancing the precision of the XCH₄ and XCO₂ products, and may open observation opportunities for complex target areas with mixed sources, such as industrial areas. We develop a proof-of-concept framework by coupling the TANGO end-to-end simulator with RemoTAP, one of three candidate operational retrieval algorithms for CO2M.
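The proxy method described above reduces to a one-line computation. The sketch below uses illustrative numbers (not TANGO specifications) to show why a light-path error common to both gases cancels in the ratio:

```python
def proxy_xco2(co2_retrieved, ch4_retrieved, xch4_model):
    """Proxy XCO2: the ratio of the two non-scattering retrievals
    cancels the common light-path error, and a model estimate of XCH4
    restores the absolute scale. Units: ppm throughout in this toy
    example."""
    return co2_retrieved / ch4_retrieved * xch4_model

# Suppose aerosol scattering shortens the effective light path by 2%:
# both non-scattering retrievals are then biased low by the same factor.
path_factor = 0.98
xco2_retrieved = 420.0 * path_factor   # true XCO2 = 420 ppm
xch4_retrieved = 1.9 * path_factor     # true XCH4 = 1.9 ppm
print(round(proxy_xco2(xco2_retrieved, xch4_retrieved, 1.9), 6))  # prints 420.0
```

The cancellation only works when the proxy gas itself is unperturbed by the source, which is why the abstract stresses targets where CH₄ is not co-emitted with CO₂.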
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F1)

Presentation: Methane retrievals from Sentinel-5 Precursor, PRISMA and EnMAP and corresponding emission estimates of localized sources

Authors: Michael Buchwitz, Oliver Schneising, Stefan Noel, Maximilian Reuter, Michael Hilker, Jonas Hachmeister, Heinrich Bovensmann, John P. Burrows, Hartmut Bösch
Affiliations: University of Bremen, Institute of Environmental Physics (IUP)
Direct and indirect anthropogenic releases of methane (CH4) are, after those of carbon dioxide (CO2), the second most important anthropogenic source of greenhouse gases and an important driver of climate change. Satellite retrievals of the atmospheric column-averaged CH4 dry mole fraction (XCH4), in combination with appropriate data analysis, are used to assess the magnitudes of anthropogenic and natural methane sources. Significant progress in this area has been made since the launch of the Sentinel-5 Precursor (S5P) satellite with its TROPOMI instrument, thanks to its unique combination of daily coverage (swath width approximately 2600 km), dense spatial sampling, and moderate spatial resolution (about 5.5x7 km2). Thanks in part to ESA projects such as GHG-CCI, MEDUSA and SMART-CH4, we have been continuously improving our scientific WFMD retrieval algorithm to generate a high-quality scientific XCH4 data product from S5P. Here we present the latest results obtained with WFMD version 2.0. In addition, we now also retrieve XCH4 at high spatial resolution from PRISMA and EnMAP radiances. These sensors provide much higher spatial resolution (approximately 30 m) than TROPOMI/S5P but poorer spatial coverage. To obtain emission estimates of localized emission sources from these sensors, we have developed a Cross-Sectional-Flux (CSF) algorithm and will present first results from its application to S5P, PRISMA and EnMAP.
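The core of a cross-sectional-flux estimate can be written in a few lines: integrate the column enhancement along a transect perpendicular to the plume axis and multiply by the wind speed. The sketch below uses a synthetic plume and illustrative numbers, not the authors' CSF implementation:

```python
import numpy as np

def cross_sectional_flux(delta_omega, dx, wind_speed):
    """Cross-sectional-flux (CSF) emission estimate: integrate the CH4
    column enhancement delta_omega (kg m^-2) along a transect
    perpendicular to the plume axis (pixel size dx, m) and multiply by
    the wind speed (m s^-1). Returns the source rate in kg s^-1."""
    return wind_speed * np.sum(delta_omega) * dx

# Synthetic Gaussian plume cross-section sampled at 30 m pixels
# (EnMAP/PRISMA-like resolution)
x = np.arange(-300, 301, 30.0)                  # metres across the plume
delta_omega = 2e-5 * np.exp(-(x / 100.0) ** 2)  # kg m^-2 enhancement
q = cross_sectional_flux(delta_omega, dx=30.0, wind_speed=5.0)
print(round(q * 3600, 1), "kg/h")               # prints 63.8 kg/h
```

In practice the dominant error sources are the wind-speed estimate, which scales the result linearly, and the background subtraction used to define the enhancement.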
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F1)

Presentation: Advanced Retrievals of Methane Emissions using Optimal Estimation applied to EMIT

Authors: Jelle Schrijver, Dr. Pepijn Veefkind, Prof. dr. Pieternel Levelt
Affiliations: Department of Geosciences and Remote Sensing, Delft University of Technology (TU Delft), Royal Netherlands Meteorological Institute (KNMI), National Center for Atmospheric Research (NCAR)
Emissions from oil and gas exploration contribute significantly to the increasing concentration of methane in the atmosphere. Because the atmospheric lifetime of methane is much shorter than that of other greenhouse gases like CO2 and N2O, mitigation of methane emissions is considered an effective strategy to limit climate change on the decadal timescale. Detecting individual point sources is possible with high spatial resolution hyperspectral images from satellites like EMIT, EnMAP and PRISMA. These instruments have a spatial resolution of up to 30 x 30 m2 but a limited spectral resolution of 7-10 nm. The retrieval of methane enhancements from these hyperspectral images has widely been carried out by applying the Matched Filter algorithm. This algorithm utilizes the methane absorption in the spectra of the 2.3 μm band to statistically derive methane enhancements against the background spectrum. In this work, we employ advanced retrievals where enhancements are retrieved with an optimal estimation algorithm. A forward model is used to calculate, among other quantities, methane concentrations from the measured radiances in the wavelength ranges of either the 1.6 μm or the 2.3 μm band. The final methane concentration enhancements result from fitting the surface albedo, CH4 absorption, H2O absorption and, depending on the choice of spectral window, the CO2 absorption. A-priori and a-posteriori diagnostic information, such as the error covariance matrices, is included through the optimal estimation algorithm. This enables an analysis of the per-pixel fitting diagnostics and the correlation between fitted parameters. We compare oil and gas related emission events retrieved with the physics-based retrieval algorithm to the statistical Matched Filter results for the same events.
The focus is on understanding the accuracy of the measurements and quantifying the precision of the enhancements. In addition, the enhancements can be closely examined for surface albedo artifacts. By comparing the optimal estimation retrieval results of the various satellites, a better understanding of the performance of current hyperspectral instruments will be gained, setting a strong baseline for measurements by upcoming methane-detecting satellite missions like CarbonMapper. These results highlight the potential of optimal estimation algorithms to provide comprehensive and accurate methane enhancements to aid the detection of emissions from the oil and gas industry.
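For reference, the classical statistical matched filter that the abstract compares against can be sketched as follows, with a synthetic scene and hypothetical numbers (not EMIT data):

```python
import numpy as np

def matched_filter(radiances, target):
    """Classical matched filter: alpha_i = (x_i - mu)^T S^-1 t /
    (t^T S^-1 t), where mu and S are the scene mean spectrum and
    covariance. radiances: (n_pixels, n_bands); target: unit CH4
    absorption signature (n_bands,). Returns per-pixel enhancements."""
    mu = radiances.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(radiances, rowvar=False))
    return (radiances - mu) @ S_inv @ target / (target @ S_inv @ target)

# Synthetic scene: 500 pixels x 8 bands of background noise, with one
# plume pixel injected at a known enhancement of 0.1
rng = np.random.default_rng(1)
target = -np.linspace(0.5, 1.0, 8)      # absorption = radiance deficit
scene = rng.normal(1.0, 0.02, size=(500, 8))
scene[0] += 0.1 * target
alpha = matched_filter(scene, target)
print(round(alpha[0], 2))               # ≈ 0.1
```

Unlike the optimal estimation approach described above, the matched filter returns no per-pixel error covariance or fit diagnostics; its uncertainty is implicit in the scene statistics, which is one motivation for the physics-based comparison.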
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F1)

Presentation: The MicroCarb CO2 mission: imminent launch!

Authors: Denis Jouglet, Philippe Landiech, François-Marie Breon, Didier Pradines, Carole Deniel, Aurélie Bornot, Elodie Cansot, Christel Guy, Christelle Pittet, Pierre Lafrique, Charlotte Revel, Laurie Pistre, Pascal Prieur, Bruno Vidal
Affiliations: CNES, LSCE
MicroCarb will be the first European mission dedicated to the monitoring of CO2 surface fluxes from space. MicroCarb is developed as a partnership led by CNES with major contributions from UKSA, EUMETSAT and the EU through the H2020 IOD-IOV program operated by ESA. The main objective of MicroCarb is the monitoring of CO2 fluxes for a better understanding of the carbon cycle mechanisms. For this objective, the platform will acquire relevant spectra in nadir or glint observation modes. An imagery mode (footprint size 2x2 km, swath 40 km) is also implemented as a demonstrator for anthropogenic emission estimates. MicroCarb measurements will allow estimates of atmospheric column-integrated CO2 concentrations at high accuracy (requirement for random error <1 ppm, regional bias <0.2 ppm) from an affordable microsatellite (Myriade series, ~200 kg). The satellite will fly on a sun-synchronous 10:30 am (descending node) orbit. The MicroCarb instrument is a compact grating spectrometer based on a unique telescope, spectrometer and detector instrument concept. The instrument will measure high-resolution radiance spectra (resolving power 25,000) of the Earth during daylight in four short-wave infrared spectral bands: CO2 centered at 1.61 and 2.04 µm, O2 at 0.76 and 1.27 µm. This latter band is specific to the MicroCarb mission to mitigate aerosol-related biases, and the retrieval algorithms shall account for the airglow emission. The XCO2 retrievals will be performed by the 4ARTIC full-physics optimal estimation code, developed specifically for MicroCarb and currently tested on OCO-2 and EM27/SUN spectra. An imager is also embedded for geolocation and cloud detection. The MicroCarb launch is imminent, planned for June 2025 on Vega-C. Both space and ground segments are ready. TVAC performance tests have been mostly successful, with an ongoing analysis to finalize the instrument model necessary for measurement calibration.
CNES and its partners are now ready for Cal/Val activities, which are planned to last one year. Cal/Val for L1 will be performed with on-board sources (Shutter, Lamp) as well as natural sources (Cold Space, Sun, Moon and terrestrial scenes). Cal/Val for L2 will be based mostly on comparison with ground-based TCCON, EM27/SUN, AirCore and CO2 atmospheric modeling results. We will present:
  • A quick reminder of the mission
  • The current status of the program
  • The algorithms, from raw measurements to XCO2
  • An estimate of the L1 and L2 product performances before launch
  • The Cal/Val plan
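As a quick check on the figures quoted above, a resolving power of 25,000 implies sub-0.1 nm spectral resolution in all four MicroCarb bands; this is simple arithmetic (delta_lambda = lambda / R), shown here for illustration only.

```python
# Spectral resolution implied by resolving power R = lambda / delta_lambda,
# evaluated at the four MicroCarb band centres quoted in the abstract.
R = 25_000
for lam_um in (0.76, 1.27, 1.61, 2.04):
    delta_nm = lam_um / R * 1e3  # micrometres -> nanometres
    print(f"{lam_um:.2f} um band: delta_lambda ~ {delta_nm:.4f} nm")
```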
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F1)

Presentation: First results from the global analysis of methane point sources with the MethaneSAT mission

Authors: Luis Guanter, Jack Warren, Ritesh Gautam, Mark Omara, Katlyn MacKay, Anthony Himmelberger, James P Williams, Maryann Sargent, Joshua S. Benmergui, Christopher C. Miller, Sebastien Roche, Jonathan E. Franklin, Steven C Wofsy, Steven P Hamburg
Affiliations: Environmental Defense Fund, Universitat Politècnica de València, Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University
Mitigating anthropogenic methane emissions is vital for reducing the rate of global warming. Under the scientific lead of the Environmental Defense Fund and Harvard University, the MethaneSAT satellite mission is designed to detect and quantify anthropogenic methane emissions around the world, in order to help mitigate emissions and develop global policy-relevant emission database efforts such as the International Methane Emissions Observatory. MethaneSAT was launched on 4 March 2024 and is currently about to enter the nominal phase and release the first data to users. The instrument consists of two spectrometers, one covering the 1249-1305 nm window sampling oxygen absorption, and one covering 1598-1683 nm for methane and CO2 retrievals, with <0.1 nm spectral sampling and <0.3 nm spectral resolution, which will enable methane concentration maps with high accuracy and precision. During the operational phase, the mission will sample up to 25 sites per day, with a swath width of about 220 km and a spatial sampling of about 110 m x 400 m. These capabilities uniquely allow MethaneSAT to quantify total regional methane emissions and at the same time detect high-emitting point sources and characterize diffuse area sources. MethaneSAT will provide quantitative data products on methane emission rates covering the majority of worldwide oil and gas production, with additional capacity to measure emissions from other sectors including agriculture. Operational data products related to methane emissions will be made freely available in the public domain via MethaneSAT’s data platform and Google Cloud. In this contribution, we will report on the status of the MethaneSAT project, and will discuss results from the analysis of the data acquired during the first phase of the mission, with a focus on the characterization of super-emitters around the world.
In particular, we will discuss the potential and limitations of MethaneSAT for the detection and quantification of methane point sources in different environments, and will detail results from sites with an outstanding presence of high-emitting point sources, including oil and gas basins in Turkmenistan, the USA, Iran and Venezuela.
Add to Google Calendar

Wednesday 25 June 08:30 - 10:00 (Hall F1)

Presentation: Current Status of GOSAT Series

Authors: Tsuneo Matsunaga, Dr Hiroshi Tanimoto, Dr. Hiroshi Suto, Dr. Kei Shiomi
Affiliations: National Institute for Environmental Studies, Japan Aerospace Exploration Agency
The GOSAT Series is a series of Japanese Earth observation satellites for greenhouse gas measurement, promoted by the Ministry of the Environment, JAXA (Japan Aerospace Exploration Agency) and NIES (National Institute for Environmental Studies). The first satellite, GOSAT (Greenhouse gases Observing SATellite), was launched in 2009 with an FTS (Fourier transform spectrometer) for CO2 and CH4 measurement, and the second satellite, GOSAT-2, was launched in 2018 with an improved FTS for CO2, CH4, and CO measurement. Both satellites and their instruments remain healthy although their design lifetimes have already passed. The Level 2 standard products of these instruments contain column-averaged concentrations of the three gases derived by full-physics and proxy retrievals and validated mostly with TCCON (Total Carbon Column Observing Network) data. More than 15 years of data obtained by GOSAT and GOSAT-2 have been used in global carbon budget studies as well as in evaluations of national greenhouse gas emission inventories. The third satellite, GOSAT-GW (Global Observing SATellite for Greenhouse gases and Water cycle), to be launched in the 2024/2025 time frame, will carry a newly developed grating imaging spectrometer, TANSO-3 (Total Anthropogenic and Natural emissions mapping SpectrOmeter-3), for mapping CO2, CH4, and NO2 with two spatial observation modes: Wide Mode (911 km swath and 10 km resolution) and Focus Mode (90 km swath and 3 km or finer spatial resolution). TANSO-3 Wide Mode data will be used to continue the global measurements of CO2 and CH4 made by GOSAT and GOSAT-2, but with 100 times more data. TANSO-3 Focus Mode data will be used for megacity-scale observation, including plume tracking using NO2 data. The validation of TANSO-3 Level 2 products will utilize existing networks for CO2 and CH4, such as TCCON and COCCON (COllaborative Carbon Column Observing Network), and for NO2, such as PGN (Pandora Global Network).
TANSO-3 Level 1B and Level 2 products will be freely available from G3PA (GOSAT-GW TANSO-3 Product Archive). In this presentation, the status of the GOSAT Series will be reported with special emphasis on GOSAT-GW preparations.
Add to Google Calendar

Wednesday 25 June 09:00 - 09:20 (EO Arena)

Demo: D.02.24 DEMO - EarthDataInsight: AI-Driven Geospatial Intelligence for Climate and ESG Applications

The intended audience includes: urban planners and policymakers seeking AI-powered geospatial insights for climate resilience and smart city planning; ESG professionals, financial institutions, and real estate developers interested in leveraging satellite data for risk assessment and sustainability strategies; geospatial analysts and EO data users looking for automated AI-driven tools to extract insights from multi-temporal EO imagery; and tech and innovation leaders exploring scalable SaaS solutions for EO data integration.

Prerequisites: No prior expertise in remote sensing or geospatial analytics is required, as EarthDataInsight is designed for non-space users with an intuitive, AI-powered interface.

Technical requirements: theatre-style space at the ESA booth for the demonstration; a large screen or projector to showcase the platform’s interactive interface; a stable internet connection for live access to EarthDataInsight’s cloud-based system; a laptop with remote access capability to run the demonstration; and a wireless microphone (if available) for a clear presentation. This setup will ensure an engaging and immersive experience, highlighting how EarthDataInsight transforms EO data into real-world decision-making tools.

Speakers:


  • Paolo De Piano - Geospatial Data Scientist at Latitudo40
Add to Google Calendar

Wednesday 25 June 09:22 - 09:42 (EO Arena)

Demo: B.03.16 DEMO - Copernicus-based Evapotranspiration and Root-Zone Soil Moisture Products

Actual evapotranspiration (ET) and root-zone soil moisture (SM) are critical parameters for sustainable water management in agriculture. This is especially relevant considering the growing demand for food coupled with changing climate and weather patterns and the fact that irrigated agriculture is already responsible for 70 % of global fresh water withdrawals.

Modelling of actual evapotranspiration requires thermal-infrared and shortwave satellite observations together with meteorological forcing. The Copernicus programme provides access to all the required data through the Sentinel-2 and Sentinel-3 satellites and Copernicus services such as the Copernicus Climate Change Service and the Copernicus Atmosphere Monitoring Service. By exploiting synergies between these data it is possible to model ET at various spatial and temporal resolutions. Once ET is quantified, it can be used in a fairly simple soil water-balance model to estimate root-zone soil moisture.
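The soil water-balance step can be illustrated with a minimal single-bucket model. This is a sketch under simple assumptions for illustration only, not the model used in the presented products; the `capacity` parameter stands in for a notional root-zone storage capacity.

```python
# Single-bucket root-zone water balance (illustrative sketch):
# SM_t = SM_{t-1} + P_t - ET_t, clipped to [0, capacity] so that
# excess water is treated as runoff/drainage when the bucket overflows.
def water_balance(sm0, precip, et, capacity):
    """sm0 and capacity in mm; precip and et are daily sequences in mm/day."""
    sm = sm0
    series = []
    for p, e in zip(precip, et):
        sm = sm + p - e
        sm = max(0.0, min(sm, capacity))  # clip to physical storage limits
        series.append(sm)
    return series
```

For example, starting from 50 mm of storage with one 10 mm rain day followed by dry days of 2-3 mm/day ET, the bucket rises once and then drains day by day.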

In this presentation we will demonstrate various Copernicus-based evapotranspiration and soil moisture products, ranging from a global ET product being developed for the Copernicus Land Monitoring Service, through daily field-scale maps at 20 m spatial resolution, to the fusion of satellite data and in-situ measurements for increased product accuracy. We will also present various open-source tools and Python packages which can be used for ET modelling. Finally, we will briefly discuss the new and upcoming thermal satellite missions which have the potential to greatly enhance ET and SM products.

Speakers:


  • Radoslaw Marcin Guzinski - Senior Remote Sensing Specialist and Software Engineer, DHI
Add to Google Calendar

Wednesday 25 June 09:45 - 10:05 (EO Arena)

Demo: A.08.19 DEMO - SEAScope: interactive EO data visualisation and analysis tool

Discovering, collocating and analysing a heterogeneous dataset can be tedious and may be a barrier for many potential users who want to exploit EO data, since a wide variety of skills is required to access low-level data and prepare it before analysis is even possible. Online data visualisation websites, such as the OVL-NG portals, have enabled users to explore different satellite, in-situ and model data very easily. A complementary tool named SEAScope is being developed to provide additional features that enable users to analyse various pre-processed data as well as their own.

SEAScope is a free standalone application available on Windows, Linux and macOS for visualising and analysing EO data (download link: https://seascope.oceandatalab.com). It collocates in time and space all data available in your directory and renders them on a 3D globe. You can also tweak rendering settings on the fly, extract data over an area or a transect, and communicate with external applications such as Jupyter notebooks. This enables you to extract data on a shared grid, analyse them, and import the results back into SEAScope to visualise them alongside the input data. It can even be controlled remotely!

Come to this demo to discover how to use SEAScope together with Jupyter notebooks to study the synergy between satellite, model and in-situ data. We will showcase data analyses such as the study of wave propagation from the cross spectra between Sentinel-2 optical channels. We will then demonstrate how to easily create animations by controlling SEAScope with a Python script.

Discussions and feedback are more than welcome and will drive the future evolution of SEAScope, so don't hesitate to come to the ESA booth and talk with us!

Speaker:


  • Lucile Gaultier - oceandatalab
Add to Google Calendar

Wednesday 25 June 10:00 - 10:45 (Nexus Agora)

Session: F.05.07 Women Trailblazers Round Tables - Session 2 - The Climate Challenge

“Women trailblazers: the present and the future of ...” is a series of round tables focused on the specific disciplines of Earth Observation Remote Sensing, Science, Engineering, Policy making, Entrepreneurship etc. that recognizes women’s significant contributions including senior and young promising professionals, whilst promoting female role models in our community.
The session will bring together prominent figures from diverse organisations, academia, industries, and associations to engage in a focused dialogue on collaborative strategies to address climate change and promote sustainable development. The main objective is to inspire and to discuss the current status and future development in Earth observation data and technologies to address climate change and promote sustainable development.

Speakers:


  • Catharina Bamps - Policy Officer, European Commission
  • Minoo Rathnasabapathy - Research Engineer/Program Lead at the Massachusetts Institute of Technology Media Lab
  • Anny Cazenave - Scientist at LEGOS-CNES
  • Sarah Connors - Climate Application Scientist at European Space Agency (ESA)

Add to Google Calendar

Wednesday 25 June 10:00 - 11:30 (Plenary - Hall D)

Session: The Global Crisis and Earth Action

The world faces a Triple Planetary Crisis - climate change, rising pollution and accelerating biodiversity loss. Satellites provide a unique vantage point to observe the planet. Space-based data records contribute multiple lines of scientific evidence to mitigate and adapt to the negative consequences of environmental change - and anticipate the challenges of tomorrow.
This panel session brings together international partners to discuss ESA's evolving role beyond monitoring the planet's health. Moving forward, how can the Agency support smarter, more rapid decision-making and financial flows to deliver effective long-term action?

This session will be accessible via live captioning at the following link: HERE

Due to limitations in the app, this is clickable only from the web version of the programme

Panel 1 : Understanding our Planet



  • Martin Herold (TOPC Chair, GCOS)
  • Eleanor Blyth (JSC member, WCRP)
  • Helene Hewitt (Co-Chair, CMIP)
  • Greet Maenhout (Head of Bioeconomy Unit, JRC)


Panel 2 : From science to policy and beyond



  • Dyfed Aubrey (Regional Director for Europe, UN-Habitat)
  • Dusan Chrenek (Principal Adviser, DG-CLIMA)
  • Renaud Seligmann (Director for Strategy & Operations, Sustainable Development Practice Group, World Bank)
  • Hermann Ludwig Moeller (Director, ESPI)
  • Fani Kallianou de Jong (Principal Manager, EBRD)

Add to Google Calendar

Wednesday 25 June 10:00 - 10:45 (ESA Agora)

Session: C.03.17 Exploring New Radar Altimetry Products from Sentinel-3 for Inland Water Monitoring

Join us for an insightful Agora session as we explore the cutting-edge radar altimetry products from the Sentinel-3 Hydro-Cryo Mission, part of the EU Copernicus programme. This session will focus on the critical role of radar altimetry in enhancing our understanding of the hydrological cycle by providing essential data on surface heights across various inland water bodies.

We will examine the capabilities of the onboard radar altimeter, which operates in Synthetic Aperture Radar (SAR) mode to improve resolution and monitor an increasing number of lakes, reservoirs, and rivers, even in complex terrains. The session will highlight advancements in radar processing techniques, including new waveform inversion methods (retrackers) that estimate geophysical parameters with greater precision.

A key highlight will be the introduction of a "demonstration" Sentinel-3 Hydro product available through the ESA Copernicus Data Space Ecosystem (CDSE). This new product uses an innovative retracker, powered by numerical simulations, which enables accurate processing of small water bodies by accounting for their contours.

We will also showcase St3TART, the Fiducial Reference Measurement (FRM) network, which plays a crucial role in validating Sentinel-3 radar altimeter data over a variety of land surfaces. In addition, we'll discuss synergies and comparisons with other space missions, such as Sentinel-6 and SWOT, to further enhance the understanding of inland water dynamics.

This informal and interactive Agora session fosters an open environment for dialogue, networking, and collaboration. Participants will have the opportunity to engage directly with speakers and peers, exchanging ideas and insights to advance the field of satellite altimetry for inland water analysis.

Speakers:


  • Francois BOY - CNES
  • Carlos YANNEZ - CNES
  • Alessandro DI BELLA - ESA
  • Angelica TARPANELLI - CNR
  • Nicolas TAUBURET - CLS/CLMS
  • Elena ZAKHAROVA - EOLA
  • Sylvie LABROUE - CLS
  • Claire Duffau - CLS
  • Usue DONEZAR - EEA
  • Alejandro EGIDIO - ESA
  • Peter SALAMON - JRC/CEMS
Add to Google Calendar

Wednesday 25 June 10:00 - 10:45 (Frontiers Agora)

Session: A.08.21 ESA Advanced Ocean Training Course: Harnessing satellite data to unlock insights into our blue planet

During an extraordinary six-week voyage from northern Norway to Nice, France, aboard the iconic Norwegian tall ship Statsraad Lehmkuhl, students learned to master techniques for collecting ocean measurements and harnessed satellite data to unlock insights into our blue planet.
During ESA's 2025 Advanced Ocean Training Course the crew and students braved everything from wild storms to calm near-freezing seas, while discovering the beauty of our ocean and the scientific use of Earth Observation data, sparking curiosity and a deeper commitment to understanding and protecting our oceans.
The course was part of the year-long One Ocean Expedition, a scientific and educational voyage across the Northern Hemisphere dedicated to raising awareness of the ocean's vital role in building a sustainable future.
In total, 29 oceanographic stations were sampled, collecting water at depths down to 1200 m together with phytoplankton and zooplankton net samples. These were complemented by an array of measurements of ocean currents, biology and meteorology, amongst others. Exceptionally good satellite data, particularly from the Copernicus Sentinel-1, Sentinel-2 and Sentinel-3 missions and from ESA's SMOS mission, were used not only to guide the sampling plan in real time, but also by the students, who leveraged them fully to explore the physics, biology and health of our oceans.
The exceptional Ocean Training Course reflects ESA's dedication to sharing knowledge and empowering the next generation of scientists and satellite data users. ESA's aim was to help equip students with new skills and confidence as they embark on their careers in ocean science.
Lecturers and students will present in the Agora the challenges (personal, technical and emotional) they had to overcome, and the learning experience gained while fully embracing modern ocean science aboard a historic tall ship.

Speakers:


  • Dr. Craig Donlon - ESA
  • Dr. Fabrice Collard, aka Dr. Fab - Ocean Data Laboratory
  • The students from the ESA Advanced Training Course on Ocean Synergy Remote Sensing 2025
Add to Google Calendar

Wednesday 25 June 10:07 - 10:27 (EO Arena)

Demo: F.04.34 DEMO - Applications Showcase: Ecosystems and Biodiversity

This demonstration will present an overview of the range of different EO-based tools and data sets available to users in the theme of environment and biodiversity. These could be from ESA projects, from other public services (such as CLMS) or from commercial services. This includes projects focused on ecosystem accounting, ecosystem restoration, ecosystem mapping, forest mapping, and others. It also looks at interactions with other themes, such as applying EO to monitor eco-schemes under the EU Common Agricultural Policy.

The demonstration will be performed using the APEX environment (https://apex.esa.int) to show interactive examples of the types of data available. The aim is not to give a detailed overview of each tool, but rather to showcase the range of solutions available, and to show how different tools can be used to address key concerns in the field of ecosystems and biodiversity, in particular to address key policy considerations such as the need to assess progress in nature restoration.

This session is organised by the ESA Stakeholder Engagement Facility. The SEF is a service funded by ESA to provide innovative ways of interacting with a diverse range of users and stakeholders to promote the uptake of Earth Observation powered applications. It works across a range of different themes, aiming to engage users by looking at their overall needs and what EO solutions are available to meet these, rather than being limited to a single project or service.

Instructor:


  • Natalia Kobliuk - Serco

Add to Google Calendar

Wednesday 25 June 10:30 - 10:50 (EO Arena)

Demo: E.01.11 DEMO - Advanced Web Platform for Infrastructure Monitoring and Natural Risk Management through Data Fusion and Artificial Intelligence

The implementation of an advanced Web-GIS-based platform for the processing, management and visualization of infrastructure monitoring data represents an innovation in risk mitigation management. Combining automated Earth Observation data analysis, data fusion processes and AI yields a powerful tool for integrating SAR/optical satellite data with geological data: this allows high-resolution geospatial analysis and the assessment of the impact of natural events on critical infrastructure such as bridges, roads, railways, and dams.
A dedicated platform has been implemented and tested, offering an interactive interface that integrates SAR, optical, multispectral, and in-situ data (e.g., IoT sensors, LiDAR, GNSS, accelerometers, inclinometers, piezometers), Digital Elevation Models (DEMs), and geological and hydrogeological databases.
Using the information provided by the user, the platform generates a pre-assessment that automatically identifies the most suitable satellite sensors and techniques for the selected infrastructure or environmental analysis, optimizing past assessment and/or continuous monitoring. Furthermore, a "real-time" notification and alert system is implemented to detect the occurrence of critical events.
In a post-processing phase, advanced clustering algorithms are applied to InSAR data to correlate identified deformations with landslide-prone areas and hydraulic risks. Moreover, spatial interference analysis supports robust mapping of vulnerable areas, improving the prediction of risks associated with natural events.
This approach enhances prediction reliability, optimizes predictive maintenance strategies, and strengthens the resilience of infrastructure networks.
The integration of satellite data with geohazard information in an advanced Web-GIS environment represents an innovative solution for infrastructure monitoring and natural risk management. With its multi-temporal analysis capabilities, predictive modelling and predictive warning systems, such a platform helps strengthen infrastructure resilience and improve territorial safety in a context of increasing exposure to geological and climatic risks.


Speakers:


  • Paolo Caporossi - TITAN4 S.r.l
  • Giovanni Quacquarelli - TITAN4 S.r.l
Add to Google Calendar

Wednesday 25 June 10:45 - 11:30 (ESA Agora)

Session: F.01.06 Earth Observation for Everyone: Creating an Inclusive Sector Together

In this session, we bring together leaders from around the world to transparently and openly address new and existing challenges that women and underrepresented communities face in the Earth observation (EO) sector. The aim is to raise awareness and open a conversation on developing pathways and action plans that cultivate and ensure an inclusive future in the field.

Inclusivity, representation, accessibility and equity in science have severely worsened in the years following the COVID-19 pandemic and recent political instability worldwide. With attacks on science and on diversity, equity, and inclusion (DEI), we are repeatedly seeing that the most vulnerable scientists in society are being targeted, systematically harmed and isolated. Even when they happen regionally, these attacks can be felt in all regions of the world, where other advocates must pick up the torch and ensure the tenets of DEI and science continue to advance. In the EO sector, a field that has historically lacked diversity and representation, the challenges of the last few years have further emphasised known, yet ignored, disparities surrounding accessibility, equity and inclusivity. On top of these challenges, a continued lack of representation in academic programmes and among university staff, persistent bias and lack of accommodations in research organisations and institutions, and a lack of funding opportunities directly contribute to persistent issues impacting the field of EO.

To change this reality, it is vital to promote and support underrepresented scientists in the field of Earth observation from different backgrounds, career paths and career stages. By sharing stories (including accomplishments and hardships) and promoting advocacy initiatives that prioritise underrepresented scientists, we can learn from each other about how to continue to make advancements toward inclusivity in the EO sector.


Moderators:


  • Dr. Flávia de Souza Mendes
  • Sabrina H. Szeto

Speakers:


  • Dr. Gopika Suresh
  • Dr. Karen Joyce
  • Miriam Gonzalez
  • Omowonuola Akintola
Add to Google Calendar

Wednesday 25 June 10:45 - 11:30 (Frontiers Agora)

Session: F.01.10 Grand Marathon Finalists pitching and award

The Grand Marathon, organized by the ESA Φ-lab Division, is structured as a progressive competition that began on 28 November 2024 (Km 0). The challenge awards top solutions that are scalable and market-ready, addressing climate events and infectious diseases and their impacts, with a focus on younger populations. The challenge was open to teams leveraging advanced technologies like AI, machine learning, and blockchain to drive the future of environmental sustainability and health resilience.

In collaboration with Save the Children, and powered by Hello Tomorrow, ESA selected the top three runners and awarded them 15K euros each at the Global Summit in Paris in March 2025. The LPS in Vienna will host the pitches of the top two finalists (GEOMATYS and Plastic-i), who will be awarded 50K euros each and will compete for the first prize (150K).

Speakers:


  • Geomatys: predictive epidemic-disease modelling
  • Plastic-i: plastic pollution alert and detection platform
Add to Google Calendar

Wednesday 25 June 10:52 - 11:12 (EO Arena)

Demo: D.03.31 DEMO - SNAP in Action - Various Application Examples throughout the week demonstrating the power of SNAP for EO data visualisation, analysis and processing - session 3

SNAP is the ESA toolbox for visualising, analysing and processing optical and microwave EO data. SNAP supports a large number of current and past satellite sensors as well as generic data formats. SNAP addresses all kinds of users, from students at an early stage, through experienced researchers, up to production managers responsible for public and commercial EO processing services.

In a series of demonstrations we showcase this breadth of possibilities across various real-life land and water applications. Demonstrations will be repeated multiple times to allow as many participants as possible to join a specific demonstration. We will tailor the daily programme from a set of prepared demonstrations according to the themes of the day, and to user needs if expressed during the conference.

The following list gives a glimpse of the demonstrations from which we can select:
1. Sentinel-1 ETAD processing with SNAP
2. Change Detection Monitoring
3. Supporting new SAR missions with SNAP
4. “Live” fire evolution in Los Angeles using Sentinel-2 images
5. Burned Areas Detection – Mehedinti, Romania
6. Monitoring Drought Evolution – Dobrogea, Romania
7. Water Quality in urban areas at the example of the city of Hamburg
8. Interpreting Hyperspectral Data for coastal habitat mapping
Add to Google Calendar

Wednesday 25 June 11:15 - 11:35 (EO Arena)

Demo: E.05.04 DEMO - Applications Showcase: Urban Issues

This demonstration will present an overview of a range of different EO based tools and data sets available to address the needs of cities and municipal authorities. These could be from ESA projects, from other public services (such as CLMS) or from commercial services. This includes projects and tools focused on city mapping, urban heat, air quality, climate resilience and others.

The demonstration will be performed using the APEX environment (https://apex.esa.int) to show interactive examples of the types of data available. The aim is not to give a detailed overview of each tool, but rather to showcase the range of tools available and to show how different tools can provide data suitable for cities and municipal authorities, with case studies of how this is done in practice.

This session is organised by the ESA Stakeholder Engagement Facility. The SEF is a service funded by ESA to provide innovative ways of interacting with a diverse range of users and stakeholders to promote the uptake of Earth Observation powered applications. It works across a range of different themes, aiming to engage users by looking at their overall needs and what EO solutions are available to meet these, rather than being limited to a single project or service.

Instructor:


  • Natalia Kobliuk - Serco
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G2)

Session: A.08.08 Upper Ocean Dynamics - PART 1

The upper ocean, by exchanging properties such as heat, momentum, gas, mass and freshwater with the atmosphere, plays a key role in shaping the Earth’s climate and influencing various environmental processes. It is characterized by numerous processes which superimpose and interact with each other, and which cover a wide range of spatial (from sub-mesoscale to basin-wide) and temporal (from high frequency to climatic) scales.
Different parameters are needed to properly describe upper ocean dynamics (e.g. temperature, salinity, sea level, currents, wind, waves, mixed layer depth), and a large variety of active and passive instruments have been put into orbit over the last few decades, providing more or less direct information about upper-ocean dynamics (e.g. altimeters, including the recently launched SWOT mission, gradiometers, scatterometers, synthetic aperture radars, imaging radiometers operating at different wavelengths (microwave, infrared), and spectrometers). In this context, this session welcomes contributions exploring how multi-variable satellite observations, together with in-situ data and/or numerical modelling, can be consistently and systematically used in synergy to better observe and understand upper ocean dynamics across different dynamical regimes and spatial and temporal scales.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G2)

Presentation: Submesoscale Sea Surface Height Mapping Along the East Greenland Coast from SWOT

Authors: Sara Nielsen Jensen, Ole Baltazar Andersen, Carsten Bjerre Ludwigsen
Affiliations: Technical University Of Denmark
The recently launched Surface Water and Ocean Topography (SWOT) mission provides Sea Surface Height (SSH) observations over a 120 km wide swath at 2 km resolution. This presents a unique opportunity to map the mesoscale and submesoscale ocean dynamics that are not currently resolved by conventional altimetry. The aim of this project is to produce high resolution maps of the SSH field that are consistent in time and space along the East Greenland coast. From the SSH field, geostrophic currents and relative vorticity can be derived and used to study the mesoscale (20km-200km) and submesoscale (200m-20km) dynamics developing in the East Greenland Current (EGC) and along the Polar Front. The submesoscale is associated with vertical exchanges of nutrients, heat, and carbon, and is thus particularly important for understanding the effects of the warming and freshening the region is experiencing due to climate change. Because of its location at high latitudes, the study area benefits from a small Rossby number and a short satellite revisit time. However, in the period between November and June a large part of the East Greenland shelf is covered by Arctic sea ice that has been transported south by the EGC. To overcome the problem of limited altimetry observations in these months, SWOT data in the marginal ice zone will be exploited by masking individual ice floes. The linear Optimal Interpolation (OI) method traditionally used on conventional altimetry is not able to fully exploit the high-resolution content of the SWOT observations because the decorrelation time of the small-scale dynamics is much shorter than the satellite revisit time. Thus, a Dynamical Optimal Interpolation (DOI) method based upon conservation of potential vorticity in a quasi-geostrophic framework is applied to assimilate SWOT and conventional altimetry data. The spatial resolution is significantly increased compared to the DUACS products, which are based on OI of conventional altimetry data.
Validation with independent along-track data shows a threefold improvement over DUACS in root-mean-square errors. Moreover, the coverage is significantly increased in coastal and sea-ice covered areas. The capability of the DOI method of capturing mesoscale and submesoscale features will be validated with Sea Surface Temperature (SST) data. This project illustrates how combining high resolution SWOT data with a dynamic model can increase both the accuracy and the spatial and temporal resolution of SSH mapping.
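To make the OI step referenced above concrete, a minimal 1-D linear optimal interpolation of along-track SSH can be sketched as below. This is an illustrative sketch only, not the authors' DOI implementation: the Gaussian covariance, length scale, noise level, and synthetic observations are all assumptions.

```python
import numpy as np

def optimal_interpolation(obs_pos, obs_val, grid_pos, L=100.0, noise=0.01):
    """Linear OI of scattered SSH observations onto a grid.

    Assumes a Gaussian covariance C(r) = exp(-r^2 / (2 L^2)) with
    decorrelation length L (km) and observation-error variance `noise`.
    """
    # Covariance between observations, plus observation-error variance
    d_oo = np.abs(obs_pos[:, None] - obs_pos[None, :])
    C_oo = np.exp(-d_oo**2 / (2 * L**2)) + noise * np.eye(len(obs_pos))
    # Covariance between analysis grid points and observations
    d_go = np.abs(grid_pos[:, None] - obs_pos[None, :])
    C_go = np.exp(-d_go**2 / (2 * L**2))
    # Analysis: covariance-weighted combination of the observations
    return C_go @ np.linalg.solve(C_oo, obs_val)

# Synthetic 1-D example: a 300 km-wavelength SSH signal sampled at 8 points
obs_pos = np.linspace(0, 500, 8)                   # along-track position (km)
obs_val = 0.1 * np.sin(2 * np.pi * obs_pos / 300)  # SSH (m)
grid = np.linspace(0, 500, 101)
ssh = optimal_interpolation(obs_pos, obs_val, grid, L=100.0)
```

Because the analysis is a static covariance-weighted average, structures with decorrelation times shorter than the revisit interval are smeared out, which is exactly the limitation the dynamical (DOI) approach addresses by propagating information in time.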
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G2)

Presentation: Seasonal analysis of the Dense Water Formation in the Atlantic Ocean from satellite observations

Authors: Aqeel Piracha, Estrella Olmedo, Antonio Turiel, Dr. Marcos Portabella
Affiliations: Institute of Marine Science (CSIC)
Recent advancements in satellite oceanography have enabled detailed analyses of seasonal and sub-seasonal dense-water formation variability. A kinematic dense-water formation framework is used to estimate buoyancy-driven ocean circulation from satellite datasets. In particular, in this work we have estimated dense-water formation from monthly satellite-derived Sea Surface Salinity, Temperature, and Currents at 1/4-degree resolution for the period 2011–2020. These datasets are combined with a blended satellite and in-situ Mixed Layer Depth dataset to derive dense-water formation estimates across the Atlantic Ocean. In order to assess the seasonality, we have applied harmonic analysis to sea surface salinity, temperature and density, along with dense-water formation estimates. From the results of this analysis we can disentangle the contributions of thermal and haline processes to the dense-water formation variability. Our results reveal that the annual cycle in the subtropics and mid-latitudes explains around 70–80% of the variability in the net dense-water formation, while the annual cycle explains more than 90% of the variance of the sea surface density. This indicates that the dense-water formation is influenced by additional atmospheric and oceanic processes operating at higher temporal frequencies. Including a semi-annual harmonic cycle only marginally increases the explained variance (to 80–85%), further highlighting the role of variability at other timescales. Haline processes are found to dominate the dense-water formation variability in critical regions for the Atlantic Meridional Overturning Circulation (AMOC), such as the Denmark Strait, the Labrador Sea, and the Norwegian Sea. Seasonal anomalies of the dense water formation relative to 2011–2020 averages reveal notable autumn freshening events in the eastern Sub-Polar Gyre between 2013 and 2015.
These extreme anomalies suggest significant impacts on deep-water formation, with implications for the future stability and strength of the AMOC. In this talk we will present the data sets, methods as well as the main results of this work.
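The harmonic analysis used to quantify explained variance can be sketched with an ordinary least-squares fit of annual (and optionally semi-annual) harmonics. The monthly series below is a synthetic stand-in, not the study's data; the periods, amplitude, and noise level are illustrative assumptions.

```python
import numpy as np

def harmonic_fit(t, y, periods):
    """Least-squares fit of a mean plus sin/cos harmonics at the given periods.

    t is in months; returns the fitted series and the fraction of
    variance it explains.
    """
    cols = [np.ones_like(t, dtype=float)]
    for P in periods:
        w = 2 * np.pi / P
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    fit = A @ coef
    explained = 1 - np.var(y - fit) / np.var(y)
    return fit, explained

# Synthetic monthly surface-density anomaly: annual cycle plus noise
rng = np.random.default_rng(0)
t = np.arange(120)  # 10 years of monthly values
y = 1.5 * np.cos(2 * np.pi * t / 12 - 0.4) + 0.3 * rng.standard_normal(120)
_, var_annual = harmonic_fit(t, y, periods=[12])
_, var_semi = harmonic_fit(t, y, periods=[12, 6])
```

Comparing `var_annual` with `var_semi` mirrors the abstract's reasoning: if adding the semi-annual harmonic barely increases the explained variance, the residual variability must live at other timescales.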
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G2)

Presentation: Data fusion: altimetry and surface currents data to access the high-frequency current signal

Authors: Solene Jousset, Michel Tchilibou, Clément Le Goff, Clément Ubelmann, Gérald Dibarboure, Marie-Hélène Rio
Affiliations: Cls, eOdyn, Datlas, CNES, ESA
Satellite altimetry provides observations of sea surface height and derived geostrophic currents through the operational DUACS/Copernicus Marine Service Sea Level Thematic Assembly Centre (SL-TAC). These geostrophic currents provide only partial information on the total surface current, which also contains other ocean signals that form the ageostrophic component. Hence, complementary observation-based approaches have been developed to take the best out of existing ocean observing systems (satellite and in-situ) to correct the altimeter-derived geostrophic currents and to obtain more realistic upper ocean surface circulation fields in terms of physical content and temporal resolution. In this work, we use the Multiscale Inversion for Ocean Surface Topography (MIOST, Ubelmann et al., 2020) variational tool to retrieve both geostrophic and ageostrophic currents. This tool achieves the merging through the decomposition of the signal into representers accounting for different time and space scales (mesoscale to large-scale) and different physical signals (geostrophy, wind-driven current, Near-Inertial Oscillations, ...). We present the work of two studies using the same MIOST method with different input data. First, as part of the S4SS CCN project (funded by ESA), altimetry data and ocean velocities estimated by processing the messages sent by ships through the Automatic Identification System (AIS, Le Goff et al., 2021) are combined in the Agulhas Current (off the coast of South Africa) to represent part of the wind-driven currents. Secondly, as part of the DUACS project, altimetry data are combined with the hourly AOML drifter database (Elipot et al., 2016) to represent the wind-driven current and faster ageostrophic signals, the Near-Inertial Oscillations (NIO), which are a key challenge for satellite observation of surface currents (SKIM mission concept (Ubelmann et al., 2021), ODYSEA mission).
The reconstructed currents are intercompared with currents obtained from various products: the Copernicus-Globcurrent total surface current (MULTIOBS_GLO_PHY_MYNRT_015_003) and the Total Surface Current from the ESA WOC project.

References:
Elipot, S., Lumpkin, R., Perez, R. C., Lilly, J. M., Early, J. J., and Sykulski, A. M.: A global surface drifter dataset at hourly resolution, J. Geophys. Res. Oceans, 121, doi:10.1002/2016JC011716, 2016.
Le Goff, C., et al. (2021). Monitoring the greater Agulhas Current with AIS data information. Journal of Geophysical Research: Oceans, 126, e2021JC017228. https://doi.org/10.1029/2021JC017228
Ubelmann, C., Dibarboure, G., Gaultier, L., Ponte, A., Ardhuin, F., Ballarotta, M., & Faugere, Y. (2021). Reconstructing ocean surface current combining altimetry and future spaceborne Doppler data. Journal of Geophysical Research: Oceans, 126, e2020JC016560. https://doi.org/10.1029/2020JC016560
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G2)

Presentation: Enhanced detection and reconstruction of a small-scale intrathermocline eddy using SWOT and high-resolution in-situ data in the Western Mediterranean

Authors: Elisabet Verger-Miralles, Baptiste Mourre, Laura Gómez-Navarro, Bàrbara Barceló-Llull, Eugenio Cutolo, Daniel R. Tarry, Nikolaos D. Zarokanellos, Ananda Pascual
Affiliations: Mediterranean Institute for Advanced Studies (IMEDEA), CSIC-UIB, IMT Atlantique, Lab-STICC, UMR CNRS, Balearic Islands Coastal Observing and Forecasting System (SOCIB), Applied Physics Laboratory, University of Washington
Small-scale ocean currents (1–100 km) play a crucial role in regulating Earth’s climate, with a significant impact on the distribution of heat, salt, nutrients, and biogeochemical properties within the oceans. During the Calibration/Validation phase of the Surface Water and Ocean Topography (SWOT) satellite mission, we conducted a high-resolution, multi-platform experiment (April-May, 2023) targeting a ~20–25 km-radius anticyclonic intrathermocline eddy (ITE) in the Western Mediterranean Sea. Underwater glider measurements revealed biconvex isopycnals, Western Mediterranean Intermediate Water (WIW) trapped between 70 and 450 m, and a ~3 cm sea level anomaly, while Acoustic Doppler Current Profiler (ADCP) data recorded peak velocities of ~40 cm/s. SWOT successfully resolved the sea level signature and geostrophic currents of the eddy, demonstrating significant advancements over conventional altimetry (DUACS). Compared to glider-derived Dynamic Height, SWOT improved sea level representation by 33%. In terms of geostrophic horizontal velocity, SWOT achieved a 61% improvement along an ADCP section crossing the eddy. Additional improvements of 44.3% and 9.7% were observed for velocity magnitude and direction, respectively, when compared with SVP-B drifter observations. These results confirm that SWOT provides more accurate velocity estimates than DUACS, particularly in magnitude. This study represents the first detection and analysis of a small ITE in the Mediterranean using SWOT observations combined with in-situ multi-platform data. SWOT’s enhanced spatial resolution advances the detection and characterization of small-scale features, significantly improving our ability to resolve small-scale upper ocean dynamics. These advancements in detecting small-scale sea level signals and horizontal velocities are critical for improving our understanding of vertical transport processes.
Enhanced analysis of vertical velocities in small structures is essential for characterizing heat, freshwater, and biogeochemical exchanges. By integrating SWOT observations with in-situ data collected via the Rosette Conductivity-Temperature-Depth (CTD) system, Moving Vessel Profiler (MVP), ADCP, surface drifters, and gliders through advanced methodologies such as DIVAnd, we achieve detailed 3D reconstructions of the sampled eddy. We present initial results from this approach and preliminary findings from integrating SWOT data into the WMOP high-resolution numerical model through data assimilation, demonstrating improved representation of small-scale dynamics.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G2)

Presentation: DUACS DT-2024: 30 years of reprocessed sea level altimetry products

Authors: Dr Maxime Ballarotta, Quentin Dagneaux, Antoine Delepoulle, Gerald Dibarboure, Stephanie Dupuy, Yannice Faugere, Marie Jenn-Alet, Cécile Kocha, Isabelle Pujol, Guillaume
Affiliations: Cls, ALTEN, CNES, CELAD
The ocean surface circulation arises from a complex interplay of processes operating across a wide range of spatial and temporal scales. These include large-scale, slow-moving geostrophic flows, mesoscale turbulent eddies, and small-scale mixing driven by the internal wave field. Mesoscale eddies, which have been extensively studied over the past three decades using multi-mission gridded altimetry products, are recognized as key contributors to the horizontal transport of heat, nutrients, and carbon. Their role is critical for understanding ocean surface dynamics and addressing pressing challenges in marine forecasting and maritime safety, including navigation, search and rescue operations, and resource management. The Sea-Level Thematic Assembly Centre (SL-TAC), a component of the Copernicus Marine Service, provides near-real-time and delayed-time gridded sea level and surface current products at global and regional scales. These datasets, processed using the Data Unification and Altimeter Combination System (DUACS), are important for the ocean science community, enabling the study and monitoring of oceanic system evolution. Recently, DUACS has reprocessed 30 years of altimeter data, releasing the DT2024 products through the Copernicus Marine Service (CMEMS) and Copernicus Climate Change Service (C3S). These new products integrate updated geophysical correction standards, advanced mapping methods, and refined processing techniques, delivering significant accuracy improvements compared to the previous DT2021 release. This study provides a comprehensive overview of the CMEMS and C3S DT2024 products and assesses their quality against independent datasets. The analysis demonstrates that updated altimetry standards enhance accuracy, especially in coastal regions, reducing errors by approximately 10% due to improved ocean tide model corrections.
Furthermore, the application of the Multi-Scale Inversion of Ocean Surface Topography mapping method has reduced mapping errors by 5%–7% in areas of high ocean variability. These enhancements position the DT2024 products as valuable resources for advancing our understanding of ocean dynamics and improving the accuracy of climate and oceanographic research.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G2)

Presentation: Advancing Mesoscale Eddy Characterization in the Lofoten Basin Using Satellite Sensor Synergy and Machine Learning Approaches

Authors: Mohamed Babiker, Dr. Antonio Bonaduce, Dr. Roshin Raj, Dr. Tsuyoshi Wakamstsu, Dr. Artem Moiseev, Adrien Perrin, Prof. Johnny Johannessen
Affiliations: NERSC
The Lofoten Basin, located in the Norwegian Sea, serves as a critical region for studying mesoscale ocean dynamics and their broader ecological impacts. This research leverages satellite sensor synergies and machine learning (ML) techniques to enhance the understanding and characterization of eddy properties in this dynamic environment. By adopting a novel 2D-to-3D approach, the study integrates physical and biological variables from multiple Earth Observation (EO) platforms, including the newly launched Surface Water and Ocean Topography (SWOT) mission. SWOT’s capability to deliver high-resolution two-dimensional mapping of the ocean surface, combined with its fast-sampling phase (1-day repeat orbit), enables unprecedented tracking of the temporal and spatial evolution of mesoscale eddies. A key milestone in 2023 was the SWOT Early Adopter “Adopt-A-Crossover” (AdaC) Initiative, which concentrated on the Lofoten Basin during a 90-day campaign. This effort took advantage of the region's SWOT crossover tracks, sampled twice daily, to produce a rich dataset. Initial findings reveal the potential of combining SWOT KaRIn retrievals with complementary data from Sentinel-3 (OLCI) and Sentinel-1 (Doppler Shift) missions, effectively distinguishing geostrophic and ageostrophic processes. The integration of satellite observations with high-frequency Argo profiling floats further advances the 2D-to-3D characterization of sub-mesoscale eddies. Machine learning-powered 4D reconstructions, blending observational data with model outputs, provide additional layers of insight into mesoscale dynamics. To enhance accessibility and foster broader engagement, an interactive data layer is being developed within the framework of the NERSC ARctic Virtual Laboratory (NARVAL), supporting visualization and integration with a digital twin of the ocean. The outcomes of this study extend beyond advancing mesoscale eddy science. 
They deliver crucial insights for the Copernicus Programme and Marine Services while laying the groundwork for the ESA Sentinel-3 Next Generation Altimeter (S3NG-Altimeter) mission. These developments provide actionable tools for national environmental agencies and stakeholders, equipping them with innovative capabilities for improved ocean monitoring, ecosystem management, and the advancement of digital twin technologies.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F1)

Session: A.04.03 Monitoring Greenhouse Gases from Space - Methods and Validation - PART 2

Since the launch of SCIAMACHY on ENVISAT, the uncertainties surrounding the global distribution of potent Greenhouse Gases (GHGs) such as Methane (CH4) and Carbon dioxide (CO2) have been dramatically reduced. Yet, despite advances in technology in the decades since SCIAMACHY, with missions such as GOSAT and Sentinel-5P, comparisons of satellite observations with bottom-up inventories show that significant uncertainties still remain. Key to reducing these uncertainties are the validation networks providing confidence in the satellite retrievals, and the advancement of retrieval methods, including sophisticated use of machine learning and advanced uncertainty quantification methods.

This session is dedicated to presenting the current state of the art for methods and validation of the remote sensing of GHGs, such as but not limited to CH4 and CO2, including results from current missions and ground-based networks such as Sentinel-5P, GOSAT/2, PRISMA, EnMAP and the TCCON and COCCON networks. The presentation of advanced remote sensing techniques and methods, leveraging open-science and machine learning techniques, is strongly encouraged in this session.

Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F1)

Presentation: Combined retrieval of XCO2, XCH4 and aerosol properties from SWIR spectrometric and multiangle polarimetric measurements with the GRASP algorithm

Authors: Marcos Herreras Giralda, Oleg Dubovik, David Fuertes, PhD Masahiro Momoi, Pavel Litvinov, Tatyana Lapyonok, Fernando Rejano, Wushao Lin, Juan Carlos Antuña-Sánchez, Alejandro García-Gómez, Christian Matar, Anton Lopatin, Andrew Barr, Jochen Landgraf, Tobias Borsdorff, Otto Hasekamp, Bastiaan van Diedenhoven
Affiliations: GRASP SAS, Univ. Lille, CNRS, UMR 8518 - LOA - Laboratoire d’Optique Atmosphérique, SRON Netherlands Institute for Space Research
One of the primary challenges in achieving the necessary precision in satellite retrievals of XCO2 and XCH4 is accurately characterizing atmospheric aerosols. Multiangular-Polarimetric measurements (MAP) represent the most advanced approach for understanding aerosol properties from space-borne platforms. Conversely, the optimal sensitivity to CO2 and CH4 concentrations, while minimizing scattering effects, is typically found within the SWIR-1 and SWIR-2 spectral bands. As a result, upcoming Copernicus missions such as Sentinel-7 CO2M or MetOp-SG Sentinel-5 with 3MI onboard are equipped with both MAP and SWIR spectrometric measurements. The Generalized Retrieval of Atmosphere and Surface Properties (GRASP) is a recently developed versatile algorithm designed for various remote sensing observations (Dubovik et al., 2021). GRASP relies on comprehensive and rigorous modeling of atmospheric radiation, applicable to simulating a wide range of observations. Its numerical inversion is executed through highly statistically optimized fitting, following the Multi-Term Least Square minimization concept. Originally applied to MAP-like measurements in various applications (POLDER, 3MI, CO2M/MAP), the GRASP approach has now been extended to combine MAP and SWIR spectrometric measurements, offering a synergistic combined product of aerosol properties alongside additional information on columnar XCO2 and XCH4 (Li et al., 2019; Chen et al., 2020). The GRASP algorithm is set to be employed with two different combinations of MAP and SWIR spectrometer instruments: CO2M/MAP+CO2M/CO2I (Lu et al., 2022) and 3MI+S5/UVNS. The inherent generality and high flexibility of the GRASP code will demonstrate the advantages of incorporating MAP measurements for improved XCO2 and XCH4 accuracy compared to standalone spectrometers. 
Additionally, it will highlight the benefits of combining additional spectrometric bands around the O2 A-band to enhance aerosol layer height characterization (Herreras-Giralda et al., 2022).

References:
Chen, C., Dubovik, O., Fuertes, D., Litvinov, P., Lapyonok, T., Lopatin, A., ... & Federspiel, C. (2020). Validation of GRASP algorithm product from POLDER/PARASOL data and assessment of multi-angular polarimetry potential for aerosol monitoring. Earth System Science Data Discussions, 2020, 1-108.
Dubovik, O., Fuertes, D., Litvinov, P., et al., "A Comprehensive Description of Multi-Term LSM for Applying Multiple a Priori Constraints in Problems of Atmospheric Remote Sensing: GRASP Algorithm, Concept, and Applications", Front. Remote Sens. 2:706851, doi: 10.3389/frsen.2021.706851, 2021.
Herreras-Giralda, M., Litvinov, P., Dubovik, O., Preusker, R., Lopatin, A., Matar, C., Antuna-Sanchez, J.-C., Lapyonok, T., Fischer, J., and Fuertes, D., Aerosol layer height retrieval from OLCI Oxygen A-band using GRASP algorithm, AGU Fall Meeting 2022, Chicago, USA and online everywhere, 12-16 December 2022.
Li, L., Derimian, Y., Chen, C., Zhang, X., Che, H., Schuster, G. L., Fuertes, D., Litvinov, P., Lapyonok, T., Lopatin, A., Matar, C., Ducos, F., Karol, Y., Torres, B., Gui, K., Zheng, Y., Liang, Y., Lei, Y., Zhu, J., Zhang, L., Zhong, J., Zhang, X., and Dubovik, O.: Climatology of aerosol component concentrations derived from multi-angular polarimetric POLDER-3 observations using GRASP algorithm. Earth Syst. Sci. Data, vol. 14, no. 7, pp. 3439–3469, 2022, ISSN: 1866-3516.
Lu, S., Landgraf, J., Fu, G., van Diedenhoven, B., Wu, L., Rusli, S. P., & Hasekamp, O. P. (2022). Simultaneous Retrieval of Trace Gases, Aerosols, and Cirrus Using RemoTAP—The Global Orbit Ensemble Study for the CO2M Mission. Frontiers in Remote Sensing, 3, 914378.
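The Multi-Term Least Square minimization concept that GRASP follows combines a measurement-fit term with one or more a priori constraint terms in a single cost function. The toy sketch below illustrates that structure on a synthetic ill-posed retrieval using a first-difference smoothness constraint; it shows the general concept only, not GRASP's actual forward model or inversion, and every name and value in it is invented.

```python
import numpy as np

def multi_term_lsm(K, y, S_obs_inv, smooth_weight):
    """Toy multi-term least squares: minimise
    (y - K x)^T S^-1 (y - K x) + g * ||D x||^2,
    i.e. a measurement-fit term plus an a priori smoothness term
    built from the first-difference operator D.
    """
    n = K.shape[1]
    D = np.diff(np.eye(n), axis=0)      # first-difference smoothness operator
    lhs = K.T @ S_obs_inv @ K + smooth_weight * (D.T @ D)
    rhs = K.T @ S_obs_inv @ y
    return np.linalg.solve(lhs, rhs)    # normal equations of the joint cost

# Ill-posed synthetic retrieval: smoothing kernel, noisy measurements
rng = np.random.default_rng(1)
n = 30
x_true = np.exp(-((np.arange(n) - 12) / 5.0) ** 2)   # smooth "profile"
K = np.exp(-np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) / 4.0)
y = K @ x_true + 0.05 * rng.standard_normal(n)
x_hat = multi_term_lsm(K, y, np.eye(n) / 0.05**2, smooth_weight=10.0)
```

The key design point is that each term enters the same quadratic cost with its own weight, so additional a priori constraints (or additional instruments, such as MAP alongside the SWIR spectrometer) simply contribute further terms to `lhs` and `rhs`.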
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F1)

Presentation: Full-Scale Controlled Release Experiments for Investigating Methane Measurement Performance at Landfills

Authors: David Risk, Rafee Iftakhar Hossian, Pylyp Buntov, Yurii Dudak, Chelsie Hall, Tarek Abichou
Affiliations: St. Francis Xavier University (FluxLab), Florida State University
Landfill methane measurement technologies require more validation as governments and landfill operators incorporate measurement into methane management plans and regulation. A 12-hectare controlled release site called the Simulation Facility for Landfill Emission Experiments (SimFLEX) was established on a closed landfill near Sarnia, Ontario, to test measurement methodologies at realistic scales and under various blind release rates and configurations. The landfill used for this study had an effective collection system and baseline emission rates were low (24 kg/hr). In the first phase of this controlled release project, blind releases up to 300 kg/hr were performed from 10 point- and area-source locations (>250 m²). For aircraft and satellite imagers, area-source locations are much harder to detect due to reduced column concentration density. Sixteen participant technologies were involved in initial release experiments during November 2023, using an adapted experimental protocol from the Methane Emissions Technology Evaluation Center (METEC). Numerous technologies performed well during quantification trials. For truck, aircraft, and drone approaches, almost all values were within 3x of known release rates, and more often within 0.5x, with high correlation (generally R²>0.75) between estimates and known release rates. No satellite detections were made despite high (near 300 kg/hr) release rates during clear-sky opportunities, albeit under windy conditions. These non-detects highlighted the challenge associated with measurement of area-source emissions. New installations of the SimFLEX controlled release array, completed in November 2024, extend the release capacity to 840 kg/hr. Multiple satellite-based teams are expected to participate in releases during the spring of 2025 and preliminary results will be shared during the presentation.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F1)

Presentation: Leveraging the AI potential for the monitoring of large methane plumes within Copernicus Atmosphere Monitoring Service

Authors: Solomiia Kurchaba, Joannes D. Maasakkers, Berend J. Schuit, Ana Isabel Lopez Norena, Matthieu Dogniaux, Shubham Sharma, Ilse Aben
Affiliations: SRON Netherlands Institute for Space Research, GHGSat Inc., Department of Earth Sciences, Vrije Universiteit Amsterdam
Anthropogenic methane emissions are responsible for more than 30% of human-caused global warming (Ocko et al., 2018; IPCC, 2021). Moreover, a large fraction of total methane emissions originates from a small number of very large “super-emitters” including coal mines, the oil and gas industry, and landfills. Thus, to be able to mitigate global warming optimally, continuous monitoring of these super-emitters should be prioritized. This is possible with the TROPOMI instrument on board the ESA Sentinel-5 Precursor satellite. Launched in 2017, it is the first instrument that can be used to detect emissions from methane super-emitters globally, on a day-to-day basis. Since TROPOMI produces terabytes of data, finding these methane plumes can only be done using process automation. We therefore developed a two-step AI-based procedure. In the first step, a machine learning (ML) model detects all plume-like objects in the methane data. Following this, another ML model is responsible for filtering out enhancements that are retrieval artifacts and hence not real methane emissions. Detections that pass this two-step verification are further inspected by human labelers who prepare a final set of plumes. In May 2024, this routine detection of plumes from methane super-emitters became a part of the Copernicus Atmosphere Monitoring Service (CAMS). Within the project, we perform a weekly delivery of detected plumes categorized by emission rate and the dominant emission source in the area. For example, in the period between August and October 2024, we detected 715 methane plumes around the globe, slightly more than in 2023. The most common dominant emission source in the area for the detected plumes was the oil and gas industry, followed by landfills and coal mines. The highest density of detections was observed in the region of Central Asia, followed by North-Eastern China and the Permian Basin in the USA.
As more data becomes available, we are continuously working on improving our methodology to detect methane plumes. In this presentation, we will discuss the newest developments with regard to the improvements in the detection of plume-like features and the distinguishability between real methane plumes and retrieval artifacts. In addition, we will show the results from our detection pipeline from the last couple of years up to the day of the presentation, focusing on changes in persistent emitters and noteworthy emission events.

References:
Ocko, I. B., Naik, V., and Paynter, D.: Rapid and reliable assessment of methane impacts on climate, Atmos. Chem. Phys., 18, 15555–15568, https://doi.org/10.5194/acp-18-15555-2018, 2018.
IPCC: Climate Change 2021: The Physical Science Basis, Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK and New York, NY, USA, https://doi.org/10.1017/9781009157896, 2021.
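The two-step structure of the pipeline (detect plume-like enhancements, then filter out retrieval artifacts) can be shown schematically. In the sketch below both ML models are replaced by trivial hand-written rules, a concentration threshold and an albedo cutoff, purely to illustrate the cascade; all thresholds, field names, and values are invented.

```python
import numpy as np

def detect_candidates(xch4, background, threshold=30.0):
    """Step 1 stand-in: flag plume-like enhancements above the local
    background (column enhancement in ppb)."""
    enhancement = xch4 - background
    return enhancement > threshold

def filter_artifacts(candidate_mask, albedo, albedo_cutoff=0.4):
    """Step 2 stand-in: discard enhancements coinciding with bright
    surfaces, mimicking the artifact-filtering classifier."""
    return candidate_mask & (albedo < albedo_cutoff)

# Synthetic scene: one genuine plume, one bright-surface artifact
xch4 = np.full((4, 4), 1850.0)   # background XCH4 column (ppb)
xch4[1, 1] = 1900.0              # genuine plume enhancement
xch4[2, 3] = 1900.0              # retrieval artifact
albedo = np.full((4, 4), 0.2)
albedo[2, 3] = 0.6               # bright surface causing the artifact
candidates = detect_candidates(xch4, 1850.0)
plumes = filter_artifacts(candidates, albedo)
```

Only detections surviving both stages would go on to human labelers, matching the verification cascade described in the abstract.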
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F1)

Presentation: Semi-Supervised Methane Emission Detection Using Sentinel-2 Data

Authors: Dr. Masoud Mahdianpari, Mr Ali Radman, Dr Fariba
Affiliations: C-core
Methane is a significant contributor to global warming, making its accurate detection and monitoring critical for climate change mitigation. Satellite-based remote sensing, particularly using multispectral Sentinel-2 data, has shown great promise in methane monitoring, but challenges such as the need for large, high-quality labeled datasets and varying environmental conditions hinder fully supervised approaches. Manual labeling of datasets for methane plume detection requires significant expertise and effort, especially when simulating emissions under diverse conditions. Furthermore, training data limited to specific regions or scenarios may not generalize well to unseen environments. Semi-supervised learning (SSL) offers a promising alternative by leveraging unlabeled data for representation learning, reducing reliance on extensive labeled datasets. Recent advancements in contrastive learning-based approaches, such as SimCLR, have demonstrated significant success in remote sensing applications. However, SSL’s potential remains largely unexplored in methane emission monitoring. In this study, we adopted the SimCLRv2 framework, which uses a ResNet-50 backbone for feature extraction, combined with a U-Net decoder fine-tuned using a small labeled dataset. This approach integrates a self-supervised stage, where features are learned from a large unlabeled dataset, and a supervised stage, where the pre-trained encoder is frozen, and the decoder is optimized with minimal labeled data. The dataset for this study comprises 64,432 samples, evenly divided between positive (with methane plumes) and negative (plume-free) cases. Positive samples were generated by integrating realistic methane plume simulations into real Sentinel-2 background imagery, while negative samples consisted of pure backgrounds.
Methane plumes were simulated using the Weather Research and Forecasting (WRF) model with Large Eddy Simulation (LES) capabilities, covering emission rates from 600 to 30,000 kg/h and wind speeds from 1 to 10 m/s. Each sample included three layers: methane concentration retrieved via the multi-band multi-pass (MBMP) technique, Albedo, and Albedo difference (variation between the main and reference images in MBMP). These layers provided critical contextual information to distinguish methane plumes from background noise and artifacts. During the self-supervised training stage, the SimCLRv2 model was trained on all samples for 100 epochs. The pre-trained ResNet-50 encoder was then incorporated into a U-Net structure, with its decoder fine-tuned using just 1% of the labeled dataset. Despite this small labeled subset, the proposed SSL approach achieved strong results. For pixel-level segmentation, it recorded an F1-score of 74.78% and an IoU of 59.71%, outperforming a fully supervised U-Net model trained with the same small dataset (73.79% F1 and 58.46% IoU). At the scene level, the proposed model demonstrated its ability to accurately classify methane-emitting patches, achieving an F1-score of 83.78%, again surpassing the supervised U-Net’s performance (83.13% F1). Although the improvement might appear modest, it validates the potential of SSL in scenarios where labeled data is scarce, achieving better accuracy with significantly reduced data annotation requirements. The fully-supervised methods’ reliance on extensive labeled data restricts their scalability. In comparison, the proposed SSL approach provides a practical solution for scenarios where labeled data is scarce, enabling methane monitoring with significantly reduced annotation effort. Moreover, its competitive performance highlights its potential for addressing key challenges in methane emission detection. 
This study demonstrates the capability of semi-supervised learning to enhance methane plume detection in satellite imagery, leveraging the strengths of contrastive learning for feature extraction. The proposed approach provides a scalable and efficient tool for methane emission monitoring, supporting global efforts to mitigate greenhouse gas emissions and combat climate change. By reducing the dependency on labeled data, this method facilitates monitoring across diverse environmental conditions, representing a step forward in the development of robust and adaptable greenhouse gas monitoring systems.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F1)

Presentation: Expansion to versatile multi-satellite Cal/Val activities at the FMI Arctic Space Centre

Authors: Hannakaisa Lindqvist, Anna Kontu, Juha Lemmetyinen, Tomi Karppinen, Dr. Rigel Kivi, Marika Honkanen, Johanna Tamminen, Jouni Pulliainen
Affiliations: Finnish Meteorological Institute
Finnish Meteorological Institute’s Arctic Space Centre (FMI-ARC) in Sodankylä, Finland has undergone systematic development and expansion for the past 15 years. Two goals have guided the development of measurement infrastructure thus far: (1) research on the local carbon and water cycles and (2) ground-based Cal/Val support for relevant satellite missions. From a satellite mission Cal/Val perspective, the four key instrument sectors are microwave observations, multispectral/hyperspectral observations, greenhouse gas observations and ozone/UV observations. The site currently maintains 139 measurement entities comprising 768 sensors, covering the key biophysical parameters targeted by satellite sensors. Additionally, the site hosts and expands ground segment services for satellite data reception, processing, and storage. In this presentation, we will introduce the current suite of FMI-ARC ground-based instruments that enable versatile Cal/Val activities, both for quasi-continuous monitoring and for ad hoc campaigns targeting, e.g., specific ecosystems (wetlands, boreal forest). During the past 15 years, different instruments have been designed, purchased, and deployed over periods spanning up to several years of observations to serve the needs of diverse scientific projects (e.g. ESA NoSREx, ESA SMOS ESL, ESA FRM4GHG, ESA SNOWITE, ESA WIFVOS). We will review the challenges and development needs for ground-based Cal/Val measurements to best support upcoming European Earth Observation missions, e.g. Sentinel-5, CO2M, FLEX, CHIME, CIMR, CryoRad, and HydroTerra+. Furthermore, we will highlight the specific advantages of our measurement setup, where cryospheric and atmospheric monitoring are intertwined.
Beyond routine satellite Cal/Val activities, the ground-based measurements have high scientific value for identifying algorithm improvements to space mission data products, developing new data products, and advancing process understanding through multisource data analysis or model evaluation. The measurements at the site also have the potential to support the testing of innovative mission operation concepts. Finally, the FMI-ARC vision is to become a central satellite Cal/Val supersite as well as an innovation testbed for the European space industry.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F1)

Presentation: Validating the XCO2 product from the Orbiting Carbon Observatory-2 against ground-based and airborne measurements

Authors: Dr. Saswati Das, Mahesh Kumar Sha, Dr. Susan Kulawik, Dr. Matthäus Kiel, Dr. Joshua Laughner, Dr. Gregory
Affiliations:
Carbon dioxide (CO2) is the primary greenhouse gas emitted into the atmosphere from anthropogenic activities. Although it is naturally present as a part of Earth’s carbon cycle, the ability of natural sinks to remove CO2 from the atmosphere is impacted by human activities, thus altering the carbon cycle. It therefore becomes important to make precise, accurate, and continuous measurements and to focus on the long-term monitoring of atmospheric CO2. The Orbiting Carbon Observatory-2 (OCO-2), launched in 2014, is NASA’s first Earth-orbiting satellite dedicated to making observations of CO2 in the atmosphere. One important goal of the OCO-2 mission is to provide XCO2 measurements with sufficient precision and accuracy alongside quantifying its seasonal and interannual variability. Although OCO-2 provides global coverage and consistently measures at high latitudes, the remote sensing measurement of CO2 from space can be challenging since the goal is to resolve inter-annual CO2 deviations at subcontinental scales and capture the known seasonal cycle and trends. Moreover, the XCO2 data is susceptible to location- and surface-property-dependent biases that must be corrected. Thus, validation of the XCO2 data from OCO-2 becomes necessary to ensure a high degree of retrieval accuracy on a global scale. With more than a decade of XCO2 observations, OCO-2 can aid the long-term monitoring of atmospheric CO2. We use the new and improved OCO-2 V11.2 dataset in this study and compare coincident XCO2 measurements against three independent datasets. The Total Carbon Column Observing Network (TCCON) is a network of solar-viewing ground-based Fourier Transform Spectrometers. TCCON measurements are unaffected by surface properties and are minimally sensitive to aerosols, and the network is the primary validation source for XCO2 from OCO-2. The COllaborative Carbon Column Observing Network (COCCON) is a network of portable ground-based Fourier Transform Infrared spectrometers.
For selected TCCON and COCCON sites, the absolute average bias values are less than 0.2 ppm for TCCON across all observation modes and less than 0.3 ppm for COCCON in the Land Nadir/Glint mode compared to coincident OCO-2 measurements. Lastly, we compare coincident OCO-2 measurements to the airborne Atmospheric Tomography Mission (ATom) when ATom conducted around-the-world flights in each of the four seasons between 2016 and 2018. This study aids the improvement of the OCO-2 XCO2 data product by bridging the gap between satellite, ground-based, and airborne XCO2 measurements. Further, it provides the latest validation analysis for OCO-2 and the most up-to-date information on biases and uncertainty in the OCO-2 data.
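The bias figures quoted above reduce to a mean difference over coincident soundings. A minimal sketch, with hypothetical variable names and illustrative XCO2 values in ppm (not actual OCO-2 or TCCON data):

```python
# Mean bias between coincident satellite and ground-based XCO2
# retrievals; all values are illustrative ppm numbers, not real
# soundings from any mission or network.

def mean_bias(satellite_xco2, reference_xco2):
    diffs = [s - r for s, r in zip(satellite_xco2, reference_xco2)]
    return sum(diffs) / len(diffs)

oco2  = [417.1, 418.3, 416.8, 417.9]  # hypothetical coincident OCO-2
tccon = [417.0, 418.1, 416.9, 417.6]  # hypothetical coincident TCCON
bias = mean_bias(oco2, tccon)         # mean difference in ppm
```

Real validation pipelines add coincidence criteria (time, distance, airmass) and averaging-kernel corrections before computing such statistics.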
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.49/0.50)

Session: F.04.08 Earth Observation for Nature Finance and Ecosystem Accounting

The integration of Earth Observation (EO) into nature finance mechanisms and ecosystem accounting is essential for achieving a nature-positive scenario and for advancing sustainable economic policies, ensuring transparent reporting of environmental dependencies and impacts.
The contribution of companies and financial institutions towards nature-positive goals plays a pivotal role in addressing the global biodiversity crisis by redirecting financial flows towards environmentally sustainable and Nature-based solutions (NbS). The Kunming-Montreal Global Biodiversity Framework (GBF) and the European Corporate Sustainability Reporting Directive (CSRD) highlight the need for standardized biodiversity metrics and risk assessments to align financial decisions with sustainability goals. EO provides spatially explicit, high-quality, and reliable data to support the development of nature-positive financial mechanisms and transparent biodiversity-debit/credit schemes.
Ecosystem accounting, as formalized by the UN SEEA Ecosystem Accounting framework, provides a structured approach to integrating ecosystems into national economic planning. Ecosystem Extent Accounts, which monitor spatial changes in ecosystems, rely on robust EO-based methodologies to improve classification, detect ecosystem change, and ensure temporal consistency. EO also plays a key role in Ecosystem Condition and Service Accounts, enabling efficient monitoring of ecosystem health and the services they provide.
This session will explore EO-based solutions for nature finance and ecosystem accounting, addressing challenges such as data standardization, EO-based metrics, and integration of EO in national reporting systems.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Object-Based Assessment of Ecosystem Restoration Level for Biodiversity Certificates in Europe

Authors: Lisa Delvaux, Julien Radoux, Pr Pierre Defourny
Affiliations: Université catholique de Louvain - Earth and Life Institute
To meet the objectives of the Kunming-Montreal Global Biodiversity Framework and halt the biodiversity crisis, investments in conservation and restoration are essential. The BioCapital project aims to develop innovative financial solutions to trigger ecosystem restoration projects. Biodiversity certificates are investigated as a transparent and reliable mechanism to support biodiversity financing. They provide a standardized framework for monitoring and incentivizing restoration efforts. By co-creating these certificates with stakeholders, the project ensures trust, reliability, and relevance in their implementation. Certificates are tailored to reflect the scope of actions, from ecosystem preservation to active restoration. They are designed to align with a market-based framework that incentivizes biodiversity conservation. A key aspect of the biodiversity certificates is the quantitative assessment of biodiversity gain. Different types of metrics have been developed for such an assessment, all with specific strengths and weaknesses. Species diversity is a direct metric of the biodiversity in the studied taxa, but it requires extensive biomonitoring (usually from ground sensors or field surveys) and may not be appropriate in terms of temporality. Biophysical variables can be derived by remote sensing in a repeatable way over large areas, but these quantitative variables are rarely linearly proportional to changes in biodiversity. The gain in ecosystem services is a framework to quantify monetary benefits of ecosystems, but these do not necessarily include biodiversity benefits. Finally, the amount of biodiversity-enhancing practices based on expert knowledge would reward restoration effort, but not necessarily the outcomes. In any case, we believe that it is of paramount importance to avoid a large shift towards a single ecosystem type when restoring biodiversity, because each ecosystem type has its own importance.
Our approach therefore relies on homogeneous patches of the landscape delineating specific ecological conditions (similar abiotic and land cover conditions) in the frame of Lifewatch-ERIC. A consolidated 10 m land cover map was created by random forest using Sentinel-2 data and ancillary data. The class likelihood derived from the random forest was used as input to multiresolution segmentation to delineate patches of homogeneous ecological properties. For this study, these polygons are then grouped together in clusters of similar properties based on abiotic factors and land cover information, at a level of detail intermediate between the habitat type and the ecosystem type. The relevance of this clustering is assessed based on their specificity with regard to abiotic variables and their ability to depict EUNIS habitat types. This validation is performed based on locally available habitat maps, and its overall consistency is assessed wall-to-wall based on the habitat models of GEO BON. These clusters then serve as proxies for the biodiversity potential under specific environmental and climatic conditions. The second step of our study is to compare in more detail the land cover status and condition within polygons of the same group to evaluate their biodiversity status. Two types of metrics are tested on agricultural use cases with remote sensing and GIS analysis: very high resolution image analysis with deep learning provides information about biodiversity-positive practices such as hedgerows, flower strips or small ponds. At the same time, generic biodiversity variables such as phenology metrics or peak biophysical variables derived from Sentinel-2 are used to evaluate the same polygons. The collected data is cross-referenced with reported activities to determine the net impact on biodiversity. This approach helps identify priority areas where restoration efforts would be most effective, and could be used to monitor biodiversity changes if repeated in time.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Assessment of Ecosystem Assets of a World Natural Heritage Site Using Remote Sensing, Field Surveys and Statistical Data

Authors: Shang Chen, Mr Shuai
Affiliations: First Institute Of Oceanography, Ministry For Natural Resources Of China
Ecosystems and their ecological resources with clear ownership and use rights are regarded as assets in economics and accounting sciences. Healthy ecosystem assets provide high-quality ecosystem services to people. The standing stock of ecosystem assets is generally valued on the basis of the net present value of the expected flow of ecosystem services. Ecosystem service value includes provisioning, regulating, cultural and supporting service values. The Yellow Sea and Bohai Sea Migratory Bird Habitat (YBBH), located in the western part of the Yellow Sea, was inscribed as a World Natural Heritage Site in 2019. Its forest, grassland, salt marsh and coastal water ecosystems were identified and analysed using three years of remote sensing, field survey and statistical data. The total area of the YBBH ecosystem is 2686.99 square km. From 2019 to 2021, the total service value of the YBBH ecosystem was 131.01, 128.10 and 138.84 billion CNY/year respectively. The regulating and supporting services account for 46%-50% and 41%-43% of the total service value respectively, and both play dominant roles. Over the same period, the standing stock value of the YBBH ecosystem asset was 2.67, 2.61 and 2.83 trillion CNY respectively, with an average annual growth rate of 3.00%. The spatial density of the YBBH ecosystem asset was 10.54 million CNY/ha. Our results were adopted by the YBBH Bureau to revise the management plan and the annual high-resolution remote sensing and field survey monitoring programme.
Using up-to-date high-resolution data from remote sensing and field surveys, the rate of change of ecosystem asset value is an appropriate indicator for evaluating both the sustainability of the ecosystem itself and the performance of the local government's ecological protection efforts.
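The reported average annual growth rate can be checked as a compound annual growth rate over the two annual steps from 2019 to 2021. A minimal arithmetic sketch using the standing stock values quoted above (trillion CNY):

```python
# Compound annual growth rate (CAGR) check for the standing stock
# value: 2.67 trillion CNY (2019) to 2.83 trillion CNY (2021),
# i.e. two annual steps, giving roughly 3% per year.

def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1.0 / years) - 1.0

growth = cagr(2.67, 2.83, 2)  # about 0.03, consistent with 3.00%
```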
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: A bottom-up approach to compiling wood provision ecosystem service accounts with EO data

Authors: Alexandra Evans, Marcel Buchhorn, Bruno Smets, Ján Černecký, Andrej Halabuk
Affiliations: VITO, ILESAS
Ecosystem accounting allows countries to monitor and communicate how their ecosystems contribute to natural capital as assets, expressed in physical or monetary terms. Within the UN System of Environmental-Economic Accounting (UN-SEEA), ecosystem service accounts record the supply of ecosystem services by ecosystem assets and the use of those services by economic units, including households. However, there are significant challenges associated with estimating the values of ecosystem services in a realistic, accurate and accessible way. Top-down methods exist for measuring the wood provisioning service and presenting supply and use in a natural capital account, but these tend to use Earth Observation (EO) data only as a proxy to spatially redistribute the forest net annual increment at country level, and the geolocation of the statistics is at low resolution. Given the need for more accurate accounts and the potential of accessible, high-resolution EO data products, we describe a bottom-up approach to calculating the Net Annual Increment (NAI) of forest available for wood supply using EO. This method accounts for forest not available for wood supply and non-natural tree mortality by combining existing high resolution EO products to create living tree maps, and using productivity data as proxies for timber production. We demonstrated its use by generating an example wood provisioning ecosystem service account for the Slovak Republic for the year 2021. We validated the NAI output with an empirical method using national forest inventory data as a reference for timber distribution of Forest Available for Wood Supply (FAWS), and we compared the accounting tables and geospatial maps to those generated by the INCA tool. Our EO approach generated a mean NAI over all FAWS areas of 7.05 m³/ha overbark, which was a reasonably good match (R² = 0.24) with the reference inventory NAI of 6.22 m³/ha, compared to the INCA estimate of 10.86 m³/ha.
The total wood volume calculated by the EO method (12.726 × 1000 m³) was highly comparable to that produced by INCA (12.731 × 1000 m³), but the EO method was capable of capturing the spatial distribution patterns of forest growth across different stands that are more realistic than the top-down spatial disaggregation approach of INCA. While the EO method tended to overestimate NAI compared to the reference inventory dataset, it has the potential to capture finer spatial variations within stands that may be overlooked by traditional field-based methods, which typically rely on larger, uniform management units. These results demonstrate the potential for using EO data products in a bottom-up method to produce wood provisioning ecosystem service accounts with NAI estimates similar to those of established methods, but with more realistic and detailed spatial maps necessary for efficient land-use management.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Using public remote sensing data to facilitate corporate sustainability reporting

Authors: Alexandra Evans, Bruno Smets, Daniel Whitaker, Catarina Braga, Dr Jacob Bedford
Affiliations: VITO
With the recent adoption of the EU Corporate Sustainability Reporting Directive (CSRD), companies will be required to report on their environmental and social impacts according to the European Sustainability Reporting Standards (ESRS). The significant risk that biodiversity loss poses to business and societal functioning is reflected in the ESRS as an increased focus on biodiversity, in alignment with the Kunming-Montreal Global Biodiversity Framework. The need to meet the requirements of reporting directives has resulted in the development of a plethora of databases and tools, but a holistic method to manage biodiversity data, calculate indices and report impacts to comply with the CSRD remains lacking. The complexity of reporting on biodiversity and site impacts in a spatially explicit way has created concerns over the reporting load for businesses, particularly those that are newly obligated to comply with these reporting standards, but it also highlights the potential for using open remote sensing data. As a co-lead in the Horizon Europe project A-TRACK, VITO is developing a common data structure to facilitate and harmonise biodiversity impact reporting for businesses. We investigate the potential of using remote sensing data to calculate biodiversity metrics, structuring them following the principles of SEEA Ecosystem Accounting, and aligning them with ESRS requirements and with the State-of-Nature metrics of the Nature Positive Initiative as well as the Taskforce on Nature-related Financial Disclosures (TNFD) framework. We demonstrate an example case from the mining sector and delve into the challenges and opportunities associated with generating accounts and calculating spatially explicit indices for small private sites from public Earth Observation (EO) data.
From this demonstration we will identify the main needs that underlie a common data structure for using public data in private sustainability reporting, highlight areas that cannot currently be estimated with EO, and assess the opportunities this offers for developing novel methods of quantifying biodiversity with open data.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: The role of Earth Observation in supporting and enabling nature conservation finance: current practices and future prospects

Authors: Andy Shaw, Dr Joseph Bull, Dr Nicola Ranger
Affiliations: Assimila Ltd, Environmental Change Institute, Durrell Institute of Conservation and Ecology, Institute for New Economic Thinking, Oxford Martin School
Global efforts to achieve nature recovery and conservation goals (e.g. as outlined in the Kunming-Montreal Global Biodiversity Framework) are hampered in part by a known substantial shortfall in necessary financing. ‘Nature finance’ mechanisms (e.g. biodiversity credit markets, bonds, debt-for-nature swaps, green investment portfolios) are emerging as possible routes to fill that shortfall. However, these mechanisms will need more reliable, comprehensive, affordable, and accessible remote monitoring capabilities if they are to go to the necessary scale. In this talk, we review the prospects for using Earth Observation to support nature finance. We start by presenting an outline of (a) the state of nature finance, and (b) identified needs for scaling and monitoring/reporting on nature finance mechanisms. Then, we report on preliminary results of a literature review into policy standards and frameworks underlying nature finance objectives; including their various strengths and weaknesses, gaps in knowledge/capacity, and challenges around existing data/methodologies. We will go on to explore the most pertinent and innovative scientific research that has tried to advance EO solutions beyond current state-of-the-art practices, and address the shortfalls of existing approaches. The results arise from the newly launching LEON project (‘Leveraging Earth Observation for Nature finance’; commencing January 2025), which is a collaboration between ESA, the University of Oxford, Assimila Ltd, and a range of partners and collaborators across Europe and beyond.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: The World Ecosystem Extent Dynamics Solution

Authors: Bruno Smets, Marcel Buchhorn, Mathilde De Vroey, Carsten Meyer, Ruben Remelgado, Polina Tregubova, Stefano Balbi, Ferdinando Villa, Alessio Bulckaen, Santosh Karanam, Myroslava Lesiv, Steffen Fritz, Ian McCallum
Affiliations: VITO, iDiv, Uni Bonn, BC3, IIASA
Effective monitoring of ecosystem extents is necessary to ensure the sustainability of goods (e.g. food, fiber, etc.) and services (e.g. pollination, flood control, carbon sequestration, etc.) that sustain human well-being. Yet, declines in ecosystem extents are increasing due to land transformations caused by climate change (e.g. increased frequency of fires and flood events, pollution) and unsustainable land uses (e.g. deforestation due to agricultural expansion and loss of natural landscapes as urban areas expand). Monitoring ecosystem extents provides vital information to detect unsustainable land use practices and to halt and reverse biodiversity loss. There is an urgent need to map ecosystem extents and their dynamics in all dimensions, going beyond relying on crude ecosystem proxies (e.g. land-cover classes) or mapping one single ecosystem type (e.g. mangroves). A solution needs to be developed that maps ecosystem extents and their dynamics at large scale in a consistent and reproducible manner across all terrestrial, freshwater and coastal ecosystems to support reporting on various environmental policies. The World Ecosystem Extent Dynamics (WEED) project is developing a globally applicable open-source solution, leveraging existing datasets and tools while applying expert-revised, creative and novel machine learning methods together with Earth Observation (EO), to enable users to generate comprehensive maps of the extents of terrestrial, freshwater and coastal ecosystem types and their temporal variations according to different ecosystem typologies. Here we describe the WEED solution, connecting the ARIES semantics platform with the openEO earth observation platform, and expanded with the LACO-Wiki platform to train and validate ecosystem extent maps and their dynamics. Moreover, the architecture is context-aware and can integrate any other data or model made available online (complying with standard protocols according to the FAIR principles).
The robustness and transferability of the solution will be demonstrated by scaling to larger areas (both national and continental Europe), showcasing the use of its results (maps and tables) for policy applications such as Ecosystem Accounting, the Global Biodiversity Framework, the Ramsar Convention and Corporate Sustainability Reporting.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.94/0.95)

Session: D.03.03 Impact through Reproducibility in Earth Observation Science and Applications

The science and R&D community in EO and Earth Science are delivering a wealth of innovative EO research data (new products, new methods, new algorithms). FAIR and Open Science principles are increasingly being adopted as fundamental parts of the scientific research cycle. What are the most relevant approaches today that enable reproducibility in EO? What lessons can we learn from the most adopted solutions and how can we leverage technologies to gain more impact and use of the research data? What are the champions of reproducibility among the scientific community? Which domains are most advanced and how can we transfer knowledge and tools across the various EO domains?

Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: TACO: Transparent Access to Cloud-Optimized Spatio-Temporal Datasets

#stac #parquet

Authors: Cesar Aybar, Luis Gómez-Chova, Julio Contreras, Oscar Pellicer, Chen Ma, Gustau Camps-Valls, David Montero, Miguel D. Mahecha, Martin Sudmanns, Dirk Tiede
Affiliations: Image Processing Laboratory (IPL), Institute for Earth System Science and Remote Sensing, Leipzig University, Department of Geoinformatics-Z_GIS, University of Salzburg
Over the past decade, Earth system sciences (ESS) have increasingly relied on machine learning (ML) to address the challenges posed by large, diverse, and complex datasets. The performance of ML models in ESS is directly influenced by the volume and quality of data used for training. Paradoxically, creating ready-to-use, large, high-quality datasets remains one of the most undervalued and overlooked challenges in ML development. Identifying suitable training datasets for specific tasks is often challenging, and few existing ESS datasets fully adhere to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. To address this challenge, we present TACO, a new specification designed to simplify and streamline the creation of FAIR-compliant datasets. In TACO, a dataset is represented as a data frame that lists file paths for each data point alongside its metadata. TACO stores all data point components as binary large objects (BLOBs) accessible through GDAL Virtual File Systems (VFS). It leverages the /vsicurl/ method for accessing online resources and /vsisubfile/ for reading specific file segments. These features enable efficient partial and parallel data reads (i.e. cloud-optimized) at the data point level, making TACO ideal for integration with ML frameworks like Torch and TensorFlow. Users can perform tasks such as exploratory data analysis or create online dataloaders without needing full downloads. The metadata framework builds on the STAC GeoParquet standard [1], enhanced with naming conventions from the Open Geospatial Consortium (OGC) training data markup language [2]. Additionally, TACO recommends optional fields derived from the Croissant Responsible AI (RAI) framework [3] to capture the economic, social, and environmental context of the regions surrounding each data point. 
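The chained virtual-file-system access described above can be sketched as plain path construction: GDAL's /vsisubfile/ takes an `offset_size,filename` prefix, and the inner filename can itself be a /vsicurl/ URL. The helper function, URL and byte offsets below are ours, for illustration only, and are not part of the TACO specification:

```python
# Build a GDAL VFS path that reads `size` bytes starting at `offset`
# inside a remote file, without downloading the rest of it.
# Syntax: /vsisubfile/<offset>_<size>,<filename>, where <filename>
# here is a /vsicurl/-wrapped URL (both are documented GDAL handlers).

def vsi_subfile_path(url, offset, size):
    return f"/vsisubfile/{offset}_{size},/vsicurl/{url}"

path = vsi_subfile_path("https://example.org/taco/sample.tif",
                        offset=4096, size=65536)
# A GDAL-backed library (rasterio, terra, Rasters.jl) could open
# `path` directly, fetching only that byte range over HTTP.
```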
Designed for broad compatibility, TACO relies solely on GDAL, a core dependency for major raster data libraries in Python (e.g., rasterio), R (e.g., terra and stars), and Julia (e.g., Rasters.jl). This ensures seamless integration with the geospatial ecosystem with minimal installation effort across various programming environments. To demonstrate the effectiveness of the TACO specification, we applied it to methane plume detection datasets. Each published methane plume dataset was transformed to adhere to TACO principles. Given that each TACO dataset is organized as a data frame at a high level, combining them involved a straightforward concatenation operation. This process created MethaneSet, the largest and most geographically diverse multisensor collection of methane emissions to date, covering 37,316 methane leaks across 9,931 distinct emission sites. MethaneSet is 25 times larger than the methane dataset presented by the United Nations Environment Programme at COP28. Looking ahead, we plan to extend TACO’s capabilities to other areas of ESS, further expanding its scope and impact.
References:
[1] https://github.com/stac-utils/stac-geoparquet
[2] https://docs.ogc.org/is/23-008r3/23-008r3.html
[3] https://docs.mlcommons.org/croissant/docs/croissant-rai-spec.html
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Whose data are "open" and which science is "free"? Data governance in the era of Earth observation, big data, and AI

Authors: Dr. Alyssa Whitcraft, Dr. Michael Humber, Todd Janzen
Affiliations: University of Maryland, NASA Acres, Janzen Schroeder Ag Law
Access to high quality field data for model calibration and validation has been a persistent bottleneck in the use of satellite data by decision makers across land applications. Nowhere is this more true than in agriculture, where farmers increasingly rely upon data for decisions and yet are often reluctant to engage with satellite data, cutting edge government researchers, and private agritech companies due to privacy concerns. At the same time, state actors and markets for ecosystem services increasingly rely upon the same agricultural data and satellite-based models to measure and "truth out" environmental claims about agricultural management and sustainability. These multiple, often competing forces spurred NASA Acres - NASA's agriculture and food security consortium focused on working with agriculture stakeholders to adopt satellite Earth observations (EO) for decisions across the value chain and into policy chambers in the United States of America - to undertake a comprehensive effort on data governance that takes into account the multiple claims and claimants around who owns (or doesn't own) which data and science in a field that is advancing more quickly than have accompanying ethical frameworks. Our work addresses what is collected on the ground as well as the maps, models, methods, and intellectual property that result from the union of ground and satellite data, and aligns with calls to create a new ethical framework for Earth observations. We aim to share with the broader Earth observations research and applications communities the governance principles that we have co-developed with farmer representatives and agritech companies. These governance principles provide guidance for designing research that confronts ethical challenges up front and builds successful collaborations from space to farm, empowering agriculture to mitigate climate change while continuing to feed the world. 
The lessons we have learned can help society and land-based applications reap the greatest benefit from satellite data, while reducing the harm that can accompany even the best-intentioned efforts. This work will help ESA, NASA, and other space agencies and private space companies achieve their core missions: positively impacting life on Earth.
Affiliation(s): University of Maryland, NASA Acres, Janzen Schroeder Ag Law
LPS Website link:
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Leveraging FAIR Digital Objects for Climate Change Adaptation: Building on Observation Frameworks

Authors: Barbara Magagna, Anne Fouilloux, Tina Odaka, Florence Cayocca, Gao Chen, Morgan Silverman, Kathi Schleidt, Markus Fiebig, Ivette Serral, Joan Maso
Affiliations: University of Twente, GO FAIR Foundation, Simula, IFREMER, NASA Langley Research Center, Analytical Mechanics Associates, DataCove e.U., NILU, CREAF
The Earth Observation (EO) and Earth Science research communities are generating a wealth of data, including new products, methods, and algorithms, which are critical for addressing grand challenges like Climate Change Adaptation (CCA) or the fulfillment of the EU Biodiversity Strategy for 2030. Ensuring that this data is Findable, Accessible, Interoperable, and Reusable (FAIR) is key to enhancing its reproducibility and reusability, thereby maximising its impact and supporting the translation of data into knowledge and decision-making. Interest in FAIR and Open Science principles is growing, as many recognise their potential to enhance cross-domain collaboration, improve data integration, and increase the reusability of research outputs. However, the detailed implementation of these principles often remains unclear, and full adoption is still a challenge for many research communities, especially when it comes to semantics. This contribution focuses on concrete examples and lessons learned from initiatives that have made progress in applying FAIR and Open Science principles in EO and Earth Science. One key approach for enhancing interoperability is the Interoperable Descriptions of Observable Property Terminology (I-ADOPT) framework that was developed by a Working Group of the Research Data Alliance. This framework facilitates the standardisation of the description of observable properties (or variables) across scientific domains, addressing challenges posed by fragmented terminologies. By breaking down variables into standardized, machine-interpretable components, I-ADOPT enables datasets from different sources to align and integrate more easily, thus facilitating data sharing across different research contexts. The framework promotes the reuse and mapping of existing vocabularies and supports the creation of new, aligned ones.
It strengthens the FAIR principles of data by making variables easier to find, access, and reuse, ultimately facilitating cross-domain collaboration and more efficient data workflows. During the recent update of the OGC Observations, Measurements and Samples standard (OMS/ISO 19156), care was taken to enable integration with I-ADOPT for the provision of richer observable properties. Within OGC, the I-ADOPT model is currently being discussed as an extension for the observable property concept to both the OMS standard as well as to the OMS based OGC SensorThings API data model, which is currently being updated to version 2.0 to align with the recent OMS updates. In parallel, ACTRIS (Aerosol, Clouds, and Trace Gases Research Infrastructure) has successfully applied the I-ADOPT framework to enhance the findability, interoperability and reusability of their observational data, ensuring it is both human-understandable and machine-interpretable. A similar effort has started at NASA to use I-ADOPT to describe more complex atmospheric composition variables frequently observed in airborne field campaigns. Moreover, in the context of the EU Green Deal policy, the EU-funded AD4GD project is applying and extending the I-ADOPT model to describe Essential Variables (EVs), and in particular, Essential Biodiversity Variables (EBVs). These conceptualized EVs will then be included in the OGC Rainbow Definition Server for proper access and reuse. Drawing from the experiences of ACTRIS, NASA’s efforts, AD4GD, and others, we highlight how these initiatives have overcome challenges related to data fragmentation, interoperability, and standards convergence. Building on these lessons, the FAIR2Adapt project, an INFRA-EOSC EU project set to launch in early 2025, aims to further advance the implementation of FAIR and Open Science principles in EO by leveraging FAIR Digital Objects (FDOs) as a long-term solution for enhancing climate change adaptation (CCA) efforts.
A key success factor is the adoption of Research Objects (ROs) and the associated RO-Crate model, Data Type Registry (DTR), and the nanopublication network for implementing FDOs. The RO-Crate model, developed through collaboration between the life science and digital library communities, represents a successful approach to implementing FDOs. RO-Crate allows for the description of collections, offering flexibility by supporting schema.org semantics as well as domain-specific and custom vocabularies. This flexibility makes RO-Crate widely applicable in various research domains. Importantly, RO-Crate facilitates the wrapping of existing resources with enhanced FAIR metadata, making them more accessible and interoperable without disrupting existing systems. DTRs are also essential for the effective implementation of FDOs, providing a standardised, machine-actionable framework for managing structured research data. By registering diverse data types—such as metadata schema elements, measurement units, and taxonomy nodes—the DTR ensures unambiguous identification and interoperability of FDO components. These registries support the integration of these types into FDOs, enabling consistent, reusable, and discoverable data across scientific domains. With features like persistent identifiers, JSON schema validation, and federated search, the DTRs facilitate the creation, validation, and exploration of FDOs, strengthening their compliance with FAIR principles and enhancing cross-domain collaboration. To operationalise these concepts, FAIR2Adapt applies the Three-Point FAIRification Framework (3PFF), developed by the GO FAIR Foundation, which provides practical "how-to" guidance to stakeholders seeking to "go FAIR".
In FAIR2Adapt, these stakeholders include scientists, policymakers, local authorities, data providers, software developers, private sector actors, NGOs, and educational institutions, all working collaboratively to enhance climate adaptation through FAIR data practices. The 3PFF framework prioritises reuse, optimises interoperability, and accelerates the development and adoption of standards and technologies to support FAIR data and services through FAIR Implementation Profiles (FIPs) that are declared by a community of practice. These FIPs are expressed as FDOs through nanopublications, which foster convergence in the use of FAIR standards and technologies, ultimately ensuring that research outputs are not only accessible but also (re)usable in diverse contexts. We will present the key outcomes from a recent workshop focused on modelling three different use cases with I-ADOPT and OMS as an integrated model, covering essential biodiversity variables, atmospheric composition measurements related to air quality and climate change, and climate adaptation scenarios. We will show real observational time series data representing these use cases, annotated with FAIR vocabularies and made available via the OGC SensorThings API, a widely used implementation of OMS that is capable of integrating the I-ADOPT model. Additionally, we will show how we encapsulated these time series into RO-Crates, by packaging data with their metadata and linking them via the I-ADOPT variable, and enhanced findability in a way that is both machine-interpretable and human-readable, thereby increasing their potential for reuse. This modelling event provided valuable insights into practical strategies for implementing FAIR Digital Objects in EO research and data infrastructures, as well as best practices for leveraging the I-ADOPT framework to harmonize and enhance data interoperability.
To communicate about the technology choices made to implement the FAIR principles we will present the FIPs for each scenario. We will also highlight lessons learned from integrating cross-domain vocabularies to improve the FAIRness of existing infrastructures. These outcomes validate the methodologies that will inform the FAIR2Adapt project, offering practical recommendations for advancing FAIR practices in the EO community. This presentation seeks to inspire continued collaboration within the Earth Science community by showcasing lessons learned from existing initiatives and providing a roadmap to further the impact of FAIR and Open Science in EO research and applications.
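To make the RO-Crate packaging described above concrete, the following sketch builds a minimal ro-crate-metadata.json for a single observational time series in Python. All identifiers, file names, and the variable description are hypothetical, and the I-ADOPT decomposition is collapsed into one schema.org PropertyValue entity for brevity.

```python
import json

# Minimal ro-crate-metadata.json sketch linking a time series file to an
# I-ADOPT-style variable description. All identifiers are hypothetical.
crate = {
    "@context": "https://w3id.org/ro/crate/1.1/context",
    "@graph": [
        {
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
            "about": {"@id": "./"},
        },
        {
            "@id": "./",
            "@type": "Dataset",
            "name": "Air-quality time series (illustrative)",
            "hasPart": [{"@id": "no2_timeseries.csv"}],  # hypothetical file
            "variableMeasured": {"@id": "#var-no2"},
        },
        {
            "@id": "no2_timeseries.csv",
            "@type": "File",
            "encodingFormat": "text/csv",
        },
        {
            # Stand-in for an I-ADOPT decomposition (property, object of
            # interest, matrix) collapsed into a single named variable
            "@id": "#var-no2",
            "@type": "PropertyValue",
            "name": "mass concentration of NO2 in air",
        },
    ],
}

metadata = json.dumps(crate, indent=2)  # serialised crate metadata
print(len(crate["@graph"]))
```

A real crate would carry the I-ADOPT component terms as separate linked entities with persistent identifiers, rather than the single collapsed variable shown here.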
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: The FAIRagro Data Steward Service Center – A Helpdesk for Research Data Management

Authors: Florian Beyer, Lucia Vedder, Wahib Sahwan, Lea-Sophie Singson, Sophie Bosse, Elena Rey-Mazon, Marcus Schmidt, Nikolai Svoboda
Affiliations: Julius Kuehn Institute, University of Bonn, Leibniz Centre for Agricultural Landscape Research (ZALF), FIZ Karlsruhe – Leibniz Institute for Information Infrastructure, ZB MED Informations Centre for Life Sciences, Leibniz Institute of Plant Genetics and Crop Plant Research (IPK)
Scientific research is the driving force behind innovation and development. It is essential that research results and their digital data products are not lost, especially after the end of a research project, and that they can be further developed as sustainably as possible afterwards. Wilkinson et al. (2016) [1] have formulated the FAIR principles for this purpose. According to these principles, research data should be findable, accessible, interoperable and reusable, preferably at all times and throughout the entire data lifecycle. FAIRagro (FAIRagro.net) is the consortium of the National Research Data Infrastructure (NFDI) for the agricultural system sciences in Germany. The focus of this joint research project is to create a FAIR research data management (RDM) and associated infrastructure in the context of plant, soil and environmental research within the agricultural domain. As a central unit within FAIRagro, the Data Steward Service Center (DSSC) aims to provide sustainable agricultural RDM and is therefore one of the most important points of contact between FAIRagro and the scientific agricultural community and beyond. As the FAIRagro helpdesk, the DSSC answers all questions about agricultural scientific RDM free of charge and organizes the continuous exchange of knowledge and experience in this field. It consists of the FAIRagro Data Stewards, who as experts combine expertise on data from different areas of agricultural systems science, such as soil data, general field data, genetic data (“omics data”), phenotyping data, field robotics data, sensor data and geo(mass) data, as well as information technology topics. Two data managers also have extensive knowledge about earth observation data and analysis, as well as establishing cloud-based geospatial data infrastructures to analyze large amounts of remote sensing data.
These competencies are embedded in the context of FAIRagro’s research disciplines and range across scales from genes, phenomics, crop trials and management to landscapes and regions. In addition, the DSSC offers legal support on issues of anonymization/pseudonymization, copyright, licensing and more. Knowledge and experience are pooled to make the full range of expertise available to the community. The stewards process support requests from the helpdesk (2nd level support) and, if necessary, establish contact with FAIRagro developers (3rd level support). In addition to personal support for the community, easy-to-understand materials such as a list of research data repository recommendations [2] or a collection of Open Educational Resources (OER) [3] are collected in the FAIRagro Toolbox (fairagro.net/inform/toolbox/) and offer low-threshold access to relevant information (1st level support). Requests from specialist subject areas can be forwarded to the FAIRagro network – other NFDI helpdesks, local German RDM offices or international partners – and answered by these offices. The knowledge about products and services is collected and translated by the developers in FAIRagro and passed on to the community as part of professional, user-specific training. This includes cooperation with other RDM service providers such as other NFDI consortia, universities and non-university research institutions. These trainings are prepared and organized by the training and education unit associated with FAIRagro and conducted in close cooperation with the data stewards. In addition to general training and specific RDM courses, reusable OERs will also be created. As a central document, a training manual will be jointly written by the training coordinators and stewards to enable trainers to design their own courses on RDM in agrosystems research. 
FAIRagro is to become a central point of contact for agricultural research in Germany through these activities in the various specialist disciplines and the integrative work of the stewards. With our contribution, we explain how the DSSC is integrated into the agrosystems community, what expertise our data stewards have, especially for the remote sensing community, what services they support and how to contact them for RDM questions. [1] Mark D. Wilkinson et al.: “The FAIR Guiding Principles for scientific data management and stewardship”. In: Scientific Data 3.1 (March 2016), p. 160018. issn: 2052-4463. doi: 10.1038/sdata.2016.18. url: https://www.nature.com/articles/sdata201618. [2] E. Rey Mazon et al.: Recommended repositories by the FAIRagro Helpdesk for agrosystem research (1.0). Tech. rep. 2024. doi: 10.5281/zenodo.11144471. [3] S. Boße et al.: How to Learn and Teach Research Data Management in Agrosystem Research - a FAIRagro Collection (1.0). Tech. rep. 2024. doi: 10.5281/zenodo.11148701.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Nextflow: Reproducible and scalable data analysis pipelines

Authors: Philip Ewels, Paolo Di Tommaso, Felix Kummer, Friederike Hanssen, Maxime Garcia
Affiliations: Seqera, Humboldt-Universität zu Berlin
Reproducibility is a cornerstone of robust science, and in the field of Earth Observation (EO), it is more critical than ever. The increasing complexity and scale of EO data necessitate tools and practices that ensure research findings can be replicated, validated, and applied across diverse domains. This presentation highlights the utility of Nextflow, an open-source workflow manager, as a transformative tool for reproducible and scalable EO research. Traditionally rooted in the life sciences, Nextflow (https://nextflow.io) has established itself as a robust and generalist platform, with over 10 years of development and a thriving user base. Central to this success is the nf-core community (https://nf-co.re), a global network of more than 10,000 contributors dedicated to open-source workflows and collaborative development. The nf-core community organises regular hackathons and events, fostering innovation, knowledge-sharing, and collective problem-solving across disciplines. Its ethos of transparency, openness, and accessibility has helped create a rich ecosystem of high-quality, reusable workflows that serve as a model for Open Science practices. One such example is nf-core/rangeland (https://nf-co.re/rangeland), a pipeline developed to process remotely sensed satellite imagery alongside auxiliary data in multiple steps, to arrive at a set of trend files related to land-cover changes. This pipeline demonstrates Nextflow’s potential to facilitate EO applications, from the desktop to High-Performance Computing (HPC) clusters and cloud platforms. The portability of Nextflow workflows ensures they can run seamlessly across various infrastructures, bridging the gap between researchers in different institutions and geographical regions. Furthermore, its scalability allows for the processing of massive datasets, a common requirement in EO, encompassing millions of files and terabytes of data. This presentation will address the following key aspects:

1. Reproducibility in EO Science: How Nextflow ensures that workflows produce consistent results across different computing environments and collaborators.
2. Portability Across Infrastructures: Demonstrating the seamless execution of workflows on local systems, HPCs, and major cloud providers.
3. Scalability to Meet EO Challenges: Case studies showcasing Nextflow’s ability to handle the increasing data volume and complexity in EO applications.
4. Community-Driven Innovation: Lessons from the nf-core community and the life sciences that can be applied to EO, leveraging Nextflow’s mature ecosystem and existing resources.

The FAIR principles (Findable, Accessible, Interoperable, and Reusable) are embedded within Nextflow workflows, aligning closely with the growing emphasis on Open Science in EO. We will explore how tools like nf-core/rangeland and the broader Nextflow ecosystem can be catalysts for reproducible science, enabling researchers to extend their impact and share knowledge effectively across domains. By bridging the gap between life sciences and EO, this work champions a cross-disciplinary approach to reproducibility. Nextflow’s proven reliability and adaptability, coupled with the thriving nf-core community, offer a model for other scientific domains to emulate. Together, they empower the EO community to tackle pressing global challenges with confidence and collaboration at their core.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Best Practices for Reproducible FAIR Workflows

Authors: Pedro Goncalves, Ingo Simonis, Fabrice Brito, Micah Brachman
Affiliations: Terradue Srl, Open Geospatial Consortium (OGC)
Earth Observation (EO) applications generate unprecedented volumes of data, offering innovative solutions to environmental challenges. However, the reproducibility of these applications and their alignment with the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) are critical to ensure their long-term scientific impact. Addressing these challenges requires the development and adoption of standardized frameworks that promote open science and reproducibility across platforms, applications, and workflows. One such effort was initiated in the OGC Testbed-18, which developed best practices to ensure EO applications meet FAIR principles, focusing on creating reproducible workflows through algorithm portability, metadata standardization, and workflow provenance. Integrating tools such as CodeMeta, CWLProv, and persistent identifiers (e.g., DOIs), the Testbed established a framework for ensuring that EO applications and their results remain transparent, traceable, and reusable. The OGC API Processes specification played a key role in enabling deployment, execution, and interoperability of applications across diverse platforms, further ensuring the accessibility and usability of EO services. Building on these achievements, the Open Science Persistent Demonstrator (OSPD) is now scaling these concepts into a broader initiative for advancing reproducible Earth Science research. Led by the OGC, the OSPD promotes reproducible research through interoperable cloud technologies and standardized workflows. By fostering collaboration among organizations such as ESA, NASA, and other partners, the OSPD enhances data sharing, streamlines scientific workflows, and integrates FAIR principles into the research lifecycle.
The primary objective of the OSPD is to create a collaborative, agile prototyping and engineering environment, with an emphasis on hands-on practices that illustrate how platforms operated by various organizations can harmoniously collaborate, fostering a culture of cooperative research and comprehensive data representation to improve data sharing and the reproducibility of scientific workflows. During the oral presentation, we will provide an overview of the technical mechanisms and standards of the proposed approach, focusing on the reproducibility cycle for EO applications. This includes the steps of algorithm packaging, from source code in repositories, to containerized applications, to deployment on EO Exploitation Platforms, and finally to reproducible execution and result sharing. Specific attention will be given to the use of the Common Workflow Language (CWL) and CWLProv for modeling workflows and capturing retrospective provenance. We will present practical examples demonstrating the application of FAIR principles, showcasing how metadata enrichment and standardized identifiers (e.g., DOIs) foster reproducibility and transparency. One example will illustrate the end-to-end reproducibility of a water body detection workflow, integrating data from Sentinel-2 with analytical models, and how its outputs are preserved and made reusable across platforms. Finally, we will explore the broader implications for the EO community, highlighting how the transition from Testbed-18 to the OSPD provides a framework for addressing the challenges of reproducibility and FAIR-driven collaboration.
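As a rough illustration of the execution step, the snippet below assembles an execute request following the OGC API - Processes (Part 1: Core) pattern of POSTing a JSON body to /processes/{processId}/execution. The base URL, process identifier, and input names are hypothetical assumptions, not taken from the abstract.

```python
import json

# Sketch of an OGC API - Processes execute request for a hypothetical
# water-body detection application deployed on an EO Exploitation Platform.
base_url = "https://example.org/ogcapi"   # hypothetical platform endpoint
process_id = "water-body-detection"       # hypothetical deployed process

# Part 1: Core executes a process by POSTing a JSON "inputs" document to
# {base}/processes/{processId}/execution.
execute_url = f"{base_url}/processes/{process_id}/execution"

payload = {
    "inputs": {
        "stac_item": "https://example.org/stac/S2_example_item",  # hypothetical
        "ndwi_threshold": 0.2,
    },
    "response": "document",
}

# A client would POST this body (Content-Type: application/json) and poll the
# returned job status URL until results are available.
print(execute_url)
print(json.dumps(payload))
```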
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L1/L2)

Session: A.02.06 Advances in land surface phenology monitoring and applications - PART 2

Land surface phenology (LSP) plays a key role in monitoring and understanding the seasonal dynamics of terrestrial ecosystems, providing critical insights into how these ecosystems respond to environmental changes, including those driven by climate change. This makes LSP a vital tool in predicting and managing ecosystem responses to climate variability and environmental stressors. This field has seen significant advances over the last decade, particularly with the advent of more sophisticated remote sensing technologies and data processing techniques and the development of new phenological products, such as the pan-European High-Resolution Vegetation Phenology and Productivity (HR-VPP) dataset included in the Copernicus Land Monitoring Service. We invite contributions on recent advancements in LSP monitoring, emphasizing the use of innovative techniques to detect phenological changes, the application of sub-daily resolution data, new tools for processing and analysing satellite data, and applications in various fields such as agriculture, forestry, ecology, public health, or climate science.

This session also welcomes any contribution concerning the intercomparison of LSP and complementary phenological observations, including in-situ human observations, phenocams (digital cameras capturing vegetation changes), and flux towers (measuring exchanges of carbon, water, and energy). The synergy between these observation methods can address inherent discrepancies and limitations, leading to a more accurate and holistic view of terrestrial ecosystems and their responses to climate change. It is expected that these contributions will provide further insight into the CEOS LPV phenology validation protocol.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Phenology Monitoring in Regenerative Agriculture Experiments Using Daily Repeat Field Camera Imagery Across Different Agroecological Zones in Kenya

Authors: Ashfak Mahmud
Affiliations: Department of Geosciences and Geography, University of Helsinki, Finland
Ashfak Mahmud¹, Atikul Hoque², Juuso Tuure³, Mwadime Mjomba⁴, Cecilia Moraa Onyango⁶, Petri Pellikka¹,⁷, Janne Heiskanen¹,⁵
¹Department of Geosciences and Geography, University of Helsinki, Finland; ²New Mexico State University, Department of Geography and Environmental Studies, USA; ³Department of Agricultural Sciences, University of Helsinki, Finland; ⁴Taita Environmental Research and Resource Arc (TERRA), Wundanyi, Kenya; ⁵Finnish Meteorological Institute, Helsinki, Finland; ⁶Department of Plant Science and Crop Protection, University of Nairobi, Kenya; ⁷Wangari Maathai Institute of Environmental and Peace Studies, University of Nairobi, Kenya.
Keywords: phenology, regenerative agriculture, field-camera, time series, earth observation
Crop phenology describes the sequence of growth stages a plant undergoes, from planting to harvesting, driven by its adaptation to seasonal and long-term climatic shifts. Tracking phenology in regenerative agriculture (RA) through Earth observation methods offers a promising approach to monitor crop growth stages and analyze their interactions with agroclimatic and biophysical factors. Traditional phenology monitoring involves regular sensory observations by experts to identify and record the timing of growth stages, an approach that is labor-intensive, time-consuming, and prone to subjective bias. On the other hand, satellite-based monitoring of smallholder farms is limited by coarse spatial resolution, cloud cover, and the dependence on field-based data. This study utilizes fixed-position daily field cameras with high temporal field-level image-capturing capabilities at experimental sites across diverse agroecological zones (AEZs) in Kenya to retrieve RA phenological stages in maize experimental plots. Field experiments conducted by the Taita Research Station of the University of Helsinki at Maktau (3°25'32.73” S, 38° 8'21.96” E, 1060 m a.s.l.)
and Kishenyi (3°21’43.8” S, 38°19’47.4” E, 1550 m a.s.l.), and by University of Nairobi at Kabete (1°14’57.7” S, 36°44’32.0” E, 1900 m a.s.l.), provide a comprehensive dataset of camera imagery and ground observations across two growing seasons (long rains and short rains). Additionally, agroclimatic variables, including rainfall, temperature, and soil moisture, along with biophysical attributes such as chlorophyll content, leaf stomatal conductance and temperature, Leaf Area Index (LAI), and canopy height, are collected on a sub-daily and biweekly basis. We employ the green chromatic coordinate (GCC) as an index for vegetation monitoring, creating a time series processed through Savitzky-Golay filtering to capture phenological transitions accurately. We detected six key phenological stages (emergence, stem elongation, tasseling, fruit development, ripening, and senescence) based on a 10% relative threshold, validated through inter-site and within-site leave-one-out cross-validation (LOOCV). Comparative analyses between camera-derived phenology and ground observations reveal an accuracy range of 3-12 days for most stages, with slightly better accuracy observed in within-site validation. Our results highlight RA’s potential to foster robust crop phenology insights, valuable for optimizing management interventions such as fertilization scheduling and pest control. This study expands the understanding of RA phenology and provides evidence of field cameras as effective tools for low-cost, high-frequency crop monitoring across AEZs. In the future, combining these findings with satellite-based observations will contribute to advancing climate-resilient agricultural monitoring.
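A minimal pure-Python sketch of the processing chain described above: compute the green chromatic coordinate, smooth it with a Savitzky-Golay filter, and flag the first crossing of a 10% relative threshold. The daily values are synthetic, and the filter is fixed to a 5-point quadratic window rather than adapted as in the study.

```python
# Pure-Python sketch of the camera pipeline: green chromatic coordinate (GCC),
# 5-point quadratic Savitzky-Golay smoothing, and a 10% relative threshold
# for detecting a phenological transition. Synthetic data, illustrative only.

def gcc(r, g, b):
    """Green chromatic coordinate: G / (R + G + B)."""
    return g / (r + g + b)

# Classic 5-point quadratic Savitzky-Golay weights: (-3, 12, 17, 12, -3) / 35
SG5 = [-3, 12, 17, 12, -3]

def savgol5(series):
    smoothed = list(series)  # edge samples are left unfiltered
    for i in range(2, len(series) - 2):
        smoothed[i] = sum(w * series[i + k - 2] for k, w in enumerate(SG5)) / 35.0
    return smoothed

def transition_day(series, rel=0.10):
    """First day the series exceeds baseline + rel * seasonal amplitude."""
    lo, hi = min(series), max(series)
    thresh = lo + rel * (hi - lo)
    for day, value in enumerate(series):
        if value > thresh:
            return day
    return None

# Synthetic daily camera values: green digital number rising over 30 days
raw = [gcc(100, 80 + 2 * d, 90) for d in range(30)]
smooth = savgol5(raw)
print(transition_day(smooth))
```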
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Complex Phenological Response of Broad-leaved and Mixed Forests to Drought Legacy in Europe

Authors: Mingzheng Chen, Dr. Nicole Estrella, Prof. Dr. Cornelius Senf, Prof. Dr. Annette Menzel
Affiliations: Technical University Of Munich, School of Life Sciences, Professorship of Ecoclimatology, Technical University Of Munich, School of Life Sciences, Professorship of Earth Observation for Ecosystem Management
Understanding the long-term impact of drought on forest phenology is important to predict ecological feedback to climate extremes. However, forest phenological responses to drought-induced legacy effects remain unclear at large spatial scales. Here, we quantified the impact of meteorological parameters, including the Standardized Precipitation-Evapotranspiration Index (SPEI), temperature, and Growing Degree Days (GDD), on MODIS phenology data with statistical models to show how forest phenology changed after a drought event subsided. We investigated the effects of various parameters and their interactions on the shift of Start of Season (SOS) and End of Season (EOS) after severe drought in broad-leaved and mixed forest regions across Europe. Our results show that spring and summer drought have stronger legacy effects on SOS, with spring drought leading to an advancement in SOS, while summer drought results in a delay in the following year. High GDD in the subsequent year intensifies drought legacy effects, probably related to the temperature sensitivity of SOS, whereas cooler conditions may weaken the previous year's drought impacts. Furthermore, earlier spring phenology in a drought year advances EOS, particularly under warmer summer conditions, suggesting a compounding effect where both drought and temperature influence autumnal senescence. Overall, these findings statistically link forest phenology to drought legacy, highlight the non-stationarity of drought legacy effects in shaping forest phenology, and provide new insights for predicting ecosystem-climate feedbacks across large scales.
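Growing Degree Days, one of the predictors used above, are conventionally accumulated as daily mean temperature above a base value. The sketch below uses the standard averaging method; the 5 °C base and the sample temperatures are assumptions for illustration, not values from the study.

```python
# Illustrative Growing Degree Day (GDD) accumulation with the averaging method.
# Base temperature (5 deg C) and daily values are assumptions for the sketch.

def daily_gdd(t_min, t_max, t_base=5.0):
    """max(0, mean daily temperature - base): no negative contributions."""
    return max(0.0, (t_min + t_max) / 2.0 - t_base)

def accumulate_gdd(days, t_base=5.0):
    """Running GDD total over a sequence of (t_min, t_max) days."""
    total, series = 0.0, []
    for t_min, t_max in days:
        total += daily_gdd(t_min, t_max, t_base)
        series.append(total)
    return series

spring = [(2, 8), (4, 12), (6, 14), (3, 9), (8, 16)]  # (t_min, t_max) in deg C
print(accumulate_gdd(spring)[-1])
```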
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Monitoring Phenological Shifts in Rice Cultivation Using Earth Observation Data in Vietnam's Mekong Delta

Authors: Thi Thao Nguyen
Affiliations: CESBIO - Centre d’Etudes Spatiales de la Biosphère
The Mekong Delta, Vietnam's largest rice-producing region and one of the world's leading rice exporters, confronts significant challenges due to climate change, including rising sea levels, salt intrusion, and severe weather events linked to ENSO cycles. Accurate monitoring of rice phenology and cultivation patterns is crucial for food security, climate adaptation, and environmental management. This study showcases the application of multi-temporal Sentinel-1 C-band SAR data to monitor rice cultivation practices and their adaptation to environmental conditions from 2016 to 2024. Our methodology overcomes the inherent constraints of traditional fixed-threshold methods by creating an adaptive algorithm for diverse incidence angles in SAR imagery (31° to 46°) throughout the Mekong Delta. The research analyzes the temporal signatures of VH and VV polarizations in Sentinel-1 data to detect rice growth stages and estimate sowing dates. We implement a customized threshold function that takes into account incidence angle factors, which improves rice field detection accuracy over typical fixed-threshold techniques. Leveraging the distinctive backscatter patterns of rice across its growth stages, particularly the VH/VV ratio evolution from planting through maturity, an adaptive Savitzky-Golay smoothing algorithm was applied to time series data, optimizing the window length based on dominant frequencies in the temporal profiles. The method effectively detects local extrema in backscatter time series to determine growing seasons and estimate sowing dates, the first and most important step toward developing rice phenological stage maps. This revised approach is implemented on a pixel basis across the entire Mekong Delta (40,500 km²), showing improvements in accuracy and flexibility for rice detection.
The obtained results indicate notable interannual spatiotemporal fluctuations in rice cultivation intensity (single, double, and triple cropping) strongly correlated with El Niño-Southern Oscillation (ENSO) phases. Coastal areas are the most likely to be affected by such hazards. Hence, in order to minimize the effects of drought and salinity, farmers moved their sowing calendars earlier and reduced their cropping intensity during the strong El Niño of 2015-2016. On the contrary, the "triple-dip" La Niña period (2020-2022) enabled the expansion of triple cropping practices thanks to increased rainfall, but it also led to delays in planting for rainy season crops and transitions to shorter-duration crops in certain regions impacted by flooding threats. Validation against government statistics showed good agreement for most provinces, with deviations ranging from +3% to +9.5% for the whole delta, highlighting the method's potential for monitoring agricultural practices under changing climate conditions. The estimated rice-planted area deviated by +8.2% and +9.5% from the statistics in 2016 and 2019 respectively, indicating the effects of the El Niño phenomena that occurred during those years. In the "triple-dip" La Niña period (2020–2022), deviations ranged from +3.9% to +4.8%, effectively reflecting the rise in triple-cropping practices. The methodology achieved an overall accuracy of approximately 93.54% for the whole Mekong Delta, compared to 81% for the fixed-threshold approach. However, higher errors were observed in An Giang (+18.6%), Long An (+20.1%) and Ben Tre (-15.2%), indicating difficulties in these regions with complex farming systems or unique agricultural practices. Sowing dates exhibited an average error range of 3–7 days, while crop intensity maps indicated distinct patterns influenced by ENSO. This research offers crucial insights into the dynamic interaction between rice practices and environmental changes in the Mekong Delta.
Our approach aids evidence-based decision-making for sustainable agriculture and biodiversity conservation by precisely mapping rice production patterns and how they respond to climate variability. Policymakers and farmers in this at-risk area can use the up-to-date and spatially explicit data to develop adaptive policies that meet the demands of food security while protecting the ecosystem. Future work can incorporate other environmental variables in addition to the rice phenological stages maps and extend the analysis to estimate greenhouse gas emissions from rice cultivation, serving Vietnam's climate goals under the Paris Agreement and its national strategy for green growth.

Keywords: Rice phenology, Mekong Delta, SAR remote sensing, climate adaptation, time series analysis, crop monitoring
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Synergy of Remote Sensing data and Machine Learning method to Enhance Global Crop Calendars in WorldCereal

Authors: Italo Moletto-Lobos, Belen Franch, Andreu Guillem-Valls, Katarzyna Cyran, Natacha Kalecinski, Kristof Van Tricht, Eric Vermote, Inbal Becker-Reshef, Shabarinath Nair, Jeroen Degerickx, Christina Butsko, Zoltan Szantoi
Affiliations: Global Change Unit, University of Valencia, Department of Geographical Sciences, University of Maryland, VITO, NASA Goddard Space Flight Center, University of Strasbourg, Earth Observation Programmes, European Space Agency, Department of Geography & Environmental Studies, Stellenbosch University
Accurate crop calendars are vital for global agricultural monitoring and food security, particularly in the face of climate variability. These calendars provide critical timing information on planting, growth, and harvesting, helping stakeholders optimize agricultural practices and mitigate risks. The WorldCereal initiative, funded by the European Space Agency (ESA), has significantly advanced high-resolution global crop and irrigation mapping. In its initial phase, the initiative developed crop calendars for maize and winter cereals using ERA5-Land reanalysis data and Sentinel-2 control points, achieving notable progress but revealing limitations, such as inaccuracies in dormancy estimation and regional error margins. To address these challenges, we propose a model integrating Land Surface Phenology (LSP) metrics from satellite data with machine learning (ML) algorithms. LSP metrics, derived from vegetation indices, capture precise crop growth stages like start (SOS) and end (EOS) of the season. Despite their potential, LSP metrics alone are limited by spatial resolution and data purity challenges. Machine learning methods, such as Random Forest and XGBoost, complement LSP by analyzing complex climate-phenology interactions, providing scalable solutions for global coverage. Building on the WorldCereal framework, our method combines refined LSP metrics and ERA5-Land parameters through a two-phase approach: a climate-driven phase for initial SOS and EOS estimation, followed by LSP-enhanced refinements. MODIS Aqua NDVI data is processed with the Harmonic Analysis of Time Series (HANTS) algorithm to handle gaps in cloudy regions. The model also incorporates agro-ecological zones (AEZs) to address dormancy periods in winter cereals. Validation demonstrates substantial improvements. 
For summer crops, SOS predictions achieve an R² of 0.94 and an RMSE of 18.58 days, while winter crops show an R² of 0.90 and an RMSE of 29.58 days, highlighting robust alignment with observed phenology. This LSP-ML integration marks a pivotal step forward in agricultural monitoring, providing accurate, spatially continuous datasets to support adaptive management, early warning systems, and climate impact forecasting. Future work will focus on additional crop types, broader spatial coverage, and higher-resolution satellite data to further refine capabilities in a changing global climate.
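The harmonic gap-filling idea behind HANTS can be sketched as a plain least-squares fit of sine/cosine terms to the valid samples of a gappy series. This is a minimal illustration, not the full HANTS algorithm (which additionally iterates and down-weights cloud-contaminated low values); all data and parameter choices below are synthetic.

```python
import numpy as np

def harmonic_fit(doy, values, n_harmonics=2, period=365.0):
    """Least-squares harmonic reconstruction of a gappy time series.

    Minimal sketch of the idea behind HANTS; the real algorithm also
    iteratively rejects outliers biased low by clouds.
    """
    doy = np.asarray(doy, float)
    values = np.asarray(values, float)
    valid = np.isfinite(values)
    t = 2.0 * np.pi * doy / period
    # Design matrix: constant term plus sine/cosine pairs
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A[valid], values[valid], rcond=None)
    return A @ coeffs  # gap-free reconstructed series

# Synthetic seasonal NDVI with a block of cloudy (missing) observations
doy = np.arange(1, 366, 8.0)
clean = 0.4 + 0.3 * np.sin(2.0 * np.pi * (doy - 120.0) / 365.0)
ndvi = clean.copy()
ndvi[5:9] = np.nan          # simulated cloud gaps
filled = harmonic_fit(doy, ndvi)
```

Because the fit uses only valid samples, the reconstruction spans the cloudy gaps; the number of harmonics controls how much seasonal detail is retained.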
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Long-Term Shifts in Crop Growing Seasons Across Czech Climatic Regions Using High-Resolution Optical Data

Authors: Jiří Tomíček, Jan Mišurec, Markéta Potůčková, Petr
Affiliations: Gisat / Faculty of Science, Charles University
Land Surface Phenology (LSP) provides valuable insight into the seasonal patterns and dynamics of crops across large spatial scales. Landsat and Sentinel-2 images are highly suitable data sources for LSP analysis and have complementary potential. The high revisit frequency of Sentinel-2 data supports detailed phenological tracking at the scale of individual agricultural parcels, whereas Landsat enables the examination of about 40 years of phenological changes aggregated at a regional scale. Analyzing long-term LSP trends is essential for understanding climate change impacts on agriculture by detecting shifts in key growing season parameters, such as the timing of the start, end, or peak of season or total productivity. In this study, we analyze long-term phenological changes using harmonized LAI seasonal profiles derived from Sentinel-2 and Landsat data for the dominant crop species cultivated in the Czech Republic: winter wheat (Triticum aestivum), spring barley (Hordeum vulgare), winter rape (Brassica napus subsp. napus), alfalfa (Medicago sativa), sugar beet (Beta vulgaris) and maize (Zea mays subsp. mays). An artificial neural network (ANN) was trained to calculate LAI using simulations generated by the radiative transfer model ProSAIL. Seasonal profiles were aggregated to the level of agroclimatic regions of the Czech Republic. Four phenological parameters (start-of-season: SOS, end-of-season: EOS, length-of-season: LOS and day-of-season-climax: MAX_DOY) and two productivity-correlated parameters (gross primary production: GPP, net primary production: NPP) were then extracted from the seasonal profile curves using a 'threshold approach'. In most climatic regions of the Czech Republic, a significant shift was recorded for all the spring and winter crops, with the peak of the growing season (MAX_DOY) and the harvest period (EOS) moving earlier in the season. 
For productivity indicators, only a few significant trends were observed, and the slope values showed considerable variation across crops and climatic regions. An exception was spring barley, which exhibited a substantial increase in net primary production (NPP) in 8 out of 10 climatic regions. Knowing the response of the agricultural landscape to climate change, it is then possible to consider mitigation strategies such as optimizing planting schedules, adopting resilient crop varieties, and improving soil and water management.
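A 'threshold approach' of the kind described above can be sketched as follows: phenometrics are read off where a smoothed seasonal profile crosses a fraction of its seasonal amplitude. The threshold fraction and the synthetic LAI curve below are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def threshold_phenometrics(doy, profile, frac=0.5):
    """Extract SOS, EOS, LOS and MAX_DOY from a smoothed seasonal
    profile by thresholding at a fraction of the seasonal amplitude."""
    profile = np.asarray(profile, float)
    base, peak = profile.min(), profile.max()
    thresh = base + frac * (peak - base)
    i_max = int(np.argmax(profile))
    above = profile >= thresh
    sos = doy[int(np.argmax(above[: i_max + 1]))]           # first crossing
    eos = doy[i_max + int(np.where(above[i_max:])[0][-1])]  # last crossing
    return {"SOS": sos, "EOS": eos, "LOS": eos - sos, "MAX_DOY": doy[i_max]}

# Synthetic single-season LAI profile peaking at DOY 180
doy = np.arange(1, 366)
lai = 0.5 + 3.0 * np.exp(-(((doy - 180) / 40.0) ** 2))
metrics = threshold_phenometrics(doy, lai)
```

The same extraction applied per year and per agroclimatic region yields the time series from which long-term shifts in SOS, EOS and MAX_DOY can be tested for trends.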
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Crop calendars from land surface phenology: Challenges in distinguishing crops from space in one of Europe's most complex agricultural regions

Authors: Miguel A. Garcia-Perez, Mr. Guilhem N. Jeannet-Chaves, Sr. Jose M. Ollega-Caro, Víctor F. Rodríguez-Galiano
Affiliations: University of Seville
Food security faces significant challenges driven by climate change. Agricultural policies need access to accurate, near real-time information for effective decision-making. Remote sensing enables such studies; however, these unprecedented time series of high-resolution satellite imagery require methods to extract relevant agronomic information and reduce spectral and temporal dimensionality. This challenge can be addressed through land surface phenology (LSP), which studies the cyclical and seasonal patterns of vegetation and their interactions with biotic and abiotic environmental factors. A crop calendar is a chronological representation of the sequential occurrence of different phenological stages in the crop growth cycle, from the fallow period and land preparation, through crop establishment and maintenance, to harvest and storage. Crop calendars are useful for crop management and for assessing crop separability, a key factor in cropland mapping. Obtaining crop calendars is challenging for complex regions with diverse climates, altitudes, agricultural practices, and irrigation systems. The main objective of this study is to evaluate LSP using Sentinel-2 data to develop calendars for the main crops in a large and diverse region such as Andalusia (Southern Spain). The secondary objective is to assess the existence of large agro-phenological regions and evaluate the separability of crops, as well as how they can be grouped into different families to advance the creation of crop maps. Andalusia is a NUTS-2 region located in southern Spain, the second largest NUTS-2 region in Spain and the fourth largest in the European Union; its agricultural plots cover a total area of 34,649 km², or 39.55% of the region's area. This study proposes a crop calendar based strictly on phenological criteria. It used Sentinel-2 EVI2 time series (2018-2020) to monitor crop phenology. 
Unlike NDVI, a more common vegetation index which saturates in high-biomass conditions, EVI2 remains sensitive to variations in dense vegetation. Using the Double Logistic smoothing method to handle gaps in data due to clouds or sensor issues, we analyzed fields in Andalusia as identified in the CAP database (GISCAP-CAP). Key phenometrics such as start of the season (SOS), middle of the season (MOS), and end of season (EOS) were extracted for all agricultural plots, and the results were later aggregated at the crop level. For the clustering analysis, phenometrics were used in DOY (Day of Year) format, but since DOY is a circular variable, it was decomposed into its sine and cosine components, generating a total of six variables. The elbow method was used to determine the optimal number of clusters, as it identifies the point at which adding another cluster no longer provides a significant reduction in within-group heterogeneity. To avoid over-representation of more extensive crops, 48 random plots were selected for each crop. However, categories with fewer than 24 observations were removed. The phenological variability in Andalusia showed that over 65% of the plots reached SOS in the last quarter of the year, with 35% in October. Nearly 70% of the plots reached MOS in the first quarter of the year, with February being the most common month at 26%. Almost 50% reached EOS in the second quarter, while 40% did so in the third quarter, with June being the dominant month at nearly 34%. This showed a certain phenological homogeneity driven by the region's climate, with the onset of phenological activity in autumn, coinciding with the return of precipitation and milder temperatures after the summer months. However, the situation was not entirely homogeneous, with secondary peaks corresponding to irrigated summer crops. 
The elbow method suggested the use of six clusters, and upon analyzing the crops included in each cluster, along with their characteristics and those of the clusters themselves, this study proposed the following categories: 1) Winter cereals and grain legumes; 2) Summer crops; 3) Stone fruits and vineyards; 4) Olive groves, mixed woody crops, and grasslands; 5) Citrus and other woody crops; 6) Spring and early-summer crops. A large proportion of cereals was grouped in cluster 1, with 84.9% of durum wheat, 75% of common wheat, 67.2% of triticale and 62.6% of barley included in this cluster. However, some cereals were also notable in cluster 4, including 27.1% of barley, 21.4% of common wheat, and 13.9% of durum wheat. Examples of crops in cluster 2 included cotton and rice, with 97% of cotton and 99.8% of rice classified in this group. Cluster 3 included a significant portion of vineyards, with 70.33% of wine grapes and 64.94% of table grapes grouped in this cluster as well as other stone fruits. Woody crops showed greater dispersion across clusters. For example, almond groves were primarily distributed between cluster 4 (37.6%) and cluster 5 (28.5%), while orange trees exhibited a similar pattern, with 37.9% in cluster 4 and 33.7% in cluster 5. In contrast, olive groves, the crop with the greatest presence in the region, were predominantly classified in cluster 4, with 64.3% of the total, but also had a notable presence in cluster 5 (26.2%). Cluster 6 included a significant portion of specific crops, such as 92.1% of sunflower and 85% of chickpea. The inclusion of the same crop in multiple categories may be attributed to the large size of the region and the resulting variability in climate, agricultural practices or irrigation system availability.
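The circular encoding of DOY phenometrics described in the abstract can be sketched as below; the plot values are hypothetical, purely to show how SOS/MOS/EOS become six clustering features.

```python
import numpy as np

def doy_to_circular(doy, period=365.0):
    """Encode day-of-year on the unit circle so that, e.g., DOY 360
    and DOY 5 end up close together in clustering feature space."""
    ang = 2.0 * np.pi * np.asarray(doy, float) / period
    return np.sin(ang), np.cos(ang)

# SOS/MOS/EOS (in DOY) for three hypothetical plots -> six features
sos = np.array([290, 300, 120])
mos = np.array([45, 50, 200])
eos = np.array([170, 160, 300])
features = np.column_stack(
    [comp for d in (sos, mos, eos) for comp in doy_to_circular(d)]
)
# 'features' has shape (n_plots, 6) and can be fed to k-means
```

Without this encoding, Euclidean distance would treat late-December and early-January onsets as nearly a year apart, splitting what is phenologically one group.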
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall E2)

Session: D.01.08 4th DestinE User eXchange - Panel Discussion "From Vision to Impact: Unlocking Destination Earth's Potential for Users"

Panel Discussion
"From Vision to Impact: Unlocking Destination Earth's Potential for Users"

Destination Earth (DestinE) has made remarkable progress in just three and a half years. What started as an ambitious idea is slowly turning into a new Earth system information tool, providing new opportunities for scientists, policymakers, and industry. But the journey doesn’t stop here. As we move forward, one question remains at the core of our efforts: how can DestinE best serve its users? As DestinE continues to evolve, the focus is now on establishing it as a trusted source of actionable information, making it even more accessible, user-friendly, and impactful.

This session will bring together representatives from selected user institutions as well as ESA, ECMWF, EUMETSAT, and the European Commission. Through lightning talks and a panel discussion, the session will highlight the progress made so far, how DestinE is evolving to better meet user needs, and the next steps in unlocking its full potential.

A User-Centric Perspective:


Facilitating climate risk assessments for European policies


  • Eva Ivits - European Environment Agency

Improved flood risk management using DestinE


  • Bram Schnitzler - Hydrologic

What DestinE could do for energy grids


  • Sascha-Phillipp Salm - TenneT TSO GmbH

A national perspective: Digital Twin Germany & DestinE


  • Sven Boehme & Andreas von Dömming - Federal Agency for Cartography and Geodesy (BKG)

Panel discussion


  • Kathrin Hintze - ESA
  • Irina Sandu - ECMWF
  • Lothar Wolf - EUMETSAT
  • Charalampos Tsitlakidis - EC COM

Discussion with the audience


Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.34)

Session: A.02.08 Impacts of fire in the Earth system - PART 2

Fire is one of the critical factors affecting global ecosystems and societies, impacting atmospheric and vegetation composition, soil erosion and runoff, as well as human health, resources, and assets. The occurrence of catastrophic seasons with many human casualties and/or large burned areas in the last decades is often associated with heat waves and droughts, underscoring the decisive role of climate and weather. Important lessons can be learned from studying the impacts of those wildfires and analysing the implications for future fire seasons so we can continuously improve our understanding and management of fire.
We encourage all abstracts that explore fire occurrence in the Earth system at any temporal and spatial scale using remote sensing data and/or modelling and its impacts on (1) ecosystems, vegetation composition and structure, resilience, and fuel management; (2) atmospheric chemistry, air quality, and human health; (3) biochemical cycles, carbon budget, water and nutrients; (4) soil erosion; (5) burn severity; (6) fire occurrence in the past, present and future.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.34)

Presentation: Examining wildfire impacts on Arctic permafrost through InSAR analysis

Authors: Barbara Widhalm, Annett Bartsch
Affiliations: b.geos
Wildfires have a substantial impact on permafrost environments and are anticipated to become more frequent and severe as climate change progresses, with recent years witnessing increasingly intense fire seasons across the Pan-Arctic. This trend is expected to accelerate further as climate change amplifies the likelihood of Arctic fires through a combination of factors, including more extreme fire weather, increased lightning frequency, and drier vegetation. By stripping away vegetation and the insulating organic layer, wildfires not only release carbon through combustion but also accelerate permafrost thaw, leading to the additional release of soil organic carbon. This degradation is frequently marked by the thickening of the active layer - the seasonally thawed soil above the permafrost - and the formation of thermokarst features. Wildfires cause both immediate and long-term effects on permafrost systems, particularly by altering ground thermal dynamics and hydrological conditions. To enhance our understanding of wildfire effects on permafrost, this study focuses on multiple Arctic regions through a space-for-time methodology, analyzing fire events up to 60 years after they occurred. Over 100 fire scars from five study regions in Alaska, Canada and Siberia were studied. These regions represent a variety of Arctic permafrost environments, allowing for a comprehensive analysis of wildfire impacts. To assess both annual and seasonal ground displacements, we utilized Interferometric Synthetic Aperture Radar (InSAR) data, which provides a detailed view of permafrost dynamics over time. Sentinel-1 (C-band) was used to examine seasonal subsidence, while PALSAR-2 (L-band) was employed for annual subsidence analysis. 
InSAR data proved highly valuable in detecting subtle ground movements and changes, offering insights into the processes of permafrost degradation and the long-term consequences of wildfire activity. Our observations revealed prolonged subsidence lasting up to 30 years, followed by a heaving signal in comparison to the surrounding area. Additionally, differences were observed between the responses of seasonal and annual subsidence.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.34)

Presentation: Evaluating fire emissions and atmospheric composition impacts of North and South America wildfires in 2024

Authors: Mark Parrington, Enza Di Tomaso, Johannes Kaiser, Johannes Flemming, Antje Inness, Melanie Ades, Anna Agusti-Panareda, Ernest Koffi, Nicolas Bousserez, Auke Visser, Samuel Remy, Vincent Huijnen, Thanos Tsikerdekis, Yasmine Bennouna, Richard Engelen, Laurence Rouil, Julien Nicolas
Affiliations: European Centre For Medium-range Weather Forecasts, European Centre For Medium-range Weather Forecasts, Klima- og miljøinstituttet NILU, HYGEOS, Royal Netherlands Meteorological Institute, Laboratoire d’Aérologie, CNRS and Université Toulouse III – Paul Sabatier
In 2024, regions of North and South America experienced historic wildfires, with persistent, large-scale fires producing vast smoke plumes that affected atmospheric composition and air quality. In North America, wildfire emissions for Canada were some of the most extreme in the past two decades, superseded only by 2023. In South America, the Pantanal wetlands and Bolivia experienced rapid growth in the total estimated emissions from the wildfires that far exceeded the annual totals of the previous two decades. We present an analysis of the North and South American wildfires during 2024 and their impact on regional air quality and longer-range atmospheric composition transport. We will review the underlying climatological conditions, the estimated emissions, air quality impacts, long-range smoke transport over the Atlantic Ocean and surface deposition rates. The European Centre for Medium-Range Weather Forecasts (ECMWF), through its operation of, and contribution to, different Copernicus Services, is in a unique position to provide detailed information to monitor wildfire emissions around the world, including their evolution and potential atmospheric impacts. Analyses based on observations of fire radiative power, along with analyses and forecasts of associated atmospheric pollutants, from the Copernicus Atmosphere Monitoring Service (CAMS) aid in quantifying the scale and intensity in near-real-time and the subsequent atmospheric impacts, including local air quality and long-range smoke transport. Surface climate anomalies from the Copernicus Climate Change Service (C3S), and hydro-meteorological information based on ECMWF forecasts, provide context to the environmental conditions required for these wildfires to persist. 
We will show how CAMS operational products provide a wealth of information required for monitoring wildfires, atmospheric composition and air quality and highlight how these products are evolving to improve our current knowledge and understanding of wildfires.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.34)

Presentation: Past, present and future: an analysis of the fire regime in a West African savanna area under a changing climate and environment

Authors: Gernot Ruecker, Jean-Luc Kouassi, Amara Ouattara, Pascal Kouame, Roger Kouadio
Affiliations: ZEBRIS Geo-IT GmbH, Office Ivorien des Parcs et Réserves (OIPR), AMBERO GmbH, INP-HB
Fires and herbivory have shaped West African mesic savannas for millennia. Recent decades have brought a steep decline in herbivore biomass through poaching and habitat destruction, and with it altered fire regimes. Ongoing climate change is altering fire weather and thus fire seasonality and fire behavior. Managers of protected areas in West African national parks therefore need to understand climate- and human-induced changes in fire regimes and the associated implications for vegetation and fauna. This understanding needs to drive management responses geared towards ecosystem restoration and resilience to climate change. Remotely sensed and model data on fires, wildlife surveys, climate reanalysis and climate models can inform management decisions in this complex situation. In this study, we investigate fire dynamics in one of the largest West African protected areas, the Comoé National Park, a UNESCO World Heritage Site, and its adjacent areas in northern Côte d’Ivoire. We use a wide range of datasets for this task. We use historic satellite data (Landsat and MODIS) to analyze past fire and vegetation patterns and dynamics. Starting in 2017, we have generated Sentinel-2-derived high-resolution burned area maps at five-day temporal resolution, together with fuel consumption/emissions datasets based on burned area and active fire detections. We analyze fire intensity, a key parameter of the fire regime, in a multi-sensor approach: we use active fire detections by Sentinel-2 and VIIRS to estimate the rate of spread of a fire front observed by both sensors from the time between their overpasses. Meteosat SEVIRI-derived fire radiative power (FRP), together with Sentinel-2-derived burned area, is used to estimate fuel consumption per square meter. Having established fuel consumption and rate of spread, fire intensity can be estimated. To compare observations to model outputs, we use fire spread models driven by climate reanalysis data. 
We can thus develop a database on fire behavior and fire seasonality to detect possible trends. This database is further supported through a limited number of field fire experiments. Active fire data from Sentinel-2, VIIRS and other sensors are furthermore used to analyze ignition patterns. To assess the interaction between fire and herbivory, wildlife inventory data are analyzed. Spatially explicit wildlife inventory data are available from airborne surveys since 2016. Before that time, we rely on published, non-spatial data. To assess the potential impact of climate change, we analyze regionalized climate models for the coming decades up to 2100 and calculate key fire weather indices and metrics of potential fire behavior to better understand future fire climatology. Our results show that general vegetation patterns within the park have been remarkably stable back to the 1980s, when the first satellite images became available: the general distribution and shape of forest islands and gallery forest versus tree and shrub savannas did not change much over the decades. General burning patterns within the park also seem to be relatively stable, although only very few satellite images are available before 2000. In the areas surrounding the park, however, conversion of savanna to agriculture has largely excluded fire and isolated the park from its surroundings in terms of habitat continuity. The heavy poaching during the last decades of the 20th century, which continued into the early years of the 21st century, drastically reduced herbivory in the park. Only since 2016 have wildlife numbers partially rebounded. This effect may have shifted the system to a state where fire is the main consumer of herbaceous biomass, thus altering savanna floral composition and the fire regime itself, leading to more intense fires. However, it is difficult to support this conclusion from available remote sensing data. 
Fire rate of spread is usually not very fast (4–20 m/min), and observed fire intensities are moderate (200–2000 kW/m). Ongoing climate change appears to be advancing the start of the fire season; this effect is repeatedly observed through large areas burned under very dry conditions at the beginning of the fire season, which also enable rather intense fires in the early dry season. Climate change projection data indicate that these conditions may become more frequent in the future, with an earlier start and a longer dry season. Due to increasing temperature, decreasing relative humidity, and possibly increasing wind speed, more intense fires are also to be expected. To cope with these changes, park management could focus on further re-establishing (large) herbivore densities by controlling poaching, on conserving and possibly expanding savanna areas in the vicinity of the park, and on reducing fuel load through targeted prescribed burns to control fire intensity.
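Combining fuel consumption and rate of spread into fireline intensity is conventionally done with Byram's equation, I = H·w·r. The sketch below uses the abstract's mid-range spread rate, but the heat yield and fuel load values are illustrative assumptions for savanna grass fuels, not the study's measurements.

```python
def byram_intensity(heat_yield_kj_kg, fuel_consumption_kg_m2, ros_m_min):
    """Byram's fireline intensity I = H * w * r, returned in kW/m.

    heat_yield_kj_kg:       heat yield of the fuel (kJ/kg)
    fuel_consumption_kg_m2: fuel consumed per unit area (kg/m^2)
    ros_m_min:              fire rate of spread (m/min)
    """
    ros_m_s = ros_m_min / 60.0  # convert to m/s so I comes out in kW/m
    return heat_yield_kj_kg * fuel_consumption_kg_m2 * ros_m_s

# Mid-range spread rate from the abstract (10 m/min) with assumed
# savanna values: heat yield ~18,000 kJ/kg, fuel load ~0.3 kg/m^2
intensity = byram_intensity(18000.0, 0.3, 10.0)  # -> 900 kW/m
```

The result falls within the 200–2000 kW/m range of observed intensities quoted above, showing how the satellite-derived fuel consumption (w) and rate of spread (r) combine.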
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.34)

Presentation: The Burned Area Simulator for Europe under Extreme Fire Weather (BASE-X)

Authors: Dr. Jessica Hetzer, Dr. Matthew Forrest, Prof. Dr. Thomas Hickler
Affiliations: Senckenberg Biodiversity And Climate Research Centre
Fire danger driven by meteorological conditions has been increasing in severity across Europe. Studies indicate a multifold rise in the frequency of extreme weather days, with the intensity of these events expected to increase further as climate change progresses. This trend is particularly concerning because ignitions under extreme weather conditions are more likely to grow into large, uncontrollable fires, posing critical challenges for fire management and mitigation. These conditions highlight the urgent need for predictive models that can anticipate fire risks under unprecedented climatic conditions. We present the Burned Area Simulator for Europe under Extreme Weather (BASE-X), an advanced model developed to predict burned area under extreme weather conditions across Europe. BASE-X builds upon the established Burned Area Simulator for Europe (BASE) framework, which combines Earth observation data with Generalized Linear Models to simulate burned area at ~9×9 km spatial resolution and monthly temporal resolution. The model differentiates between land cover types, capturing their unique drivers of fire occurrence, including meteorological conditions, vegetation characteristics, and socioeconomic factors. At the core of the model are the latest bias-corrected Fire Weather Indices from ERA5. Additionally, it integrates Gross Primary Productivity data to approximate fuel availability, alongside the Monthly Ecosystem Productivity Index (MEPI) to capture the current state of vegetation, a critical element in determining fuel characteristics. A key innovation of BASE-X is its capacity to differentiate between fires under extreme weather conditions and those occurring under less severe meteorological conditions. By explicitly modeling fire dynamics under high-impact climate events, BASE-X enhances our understanding of fire risks associated with these increasingly frequent and severe phenomena. 
BASE-X is designed for coupling with dynamic vegetation models like LPJ-GUESS, a process-based model that simulates vegetation dynamics and incorporates the feedback effects of fires on ecosystems and carbon cycles. This integration allows for a more comprehensive analysis of how extreme weather events influence fire behavior, ecosystem resilience, and carbon budgets, making the coupled system the first dynamic global vegetation model (DGVM) to explicitly simulate fires arising under extreme weather. With its robust predictive capabilities, BASE-X serves as a powerful tool for future fire danger projections and adaptation strategies, aligning with the growing need for enhanced fire management in a changing climate.
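The Generalized Linear Model component at the heart of the BASE framework can be illustrated with a minimal Poisson regression (log link) fitted by iteratively reweighted least squares. The predictor, data and coefficients below are entirely synthetic; BASE's actual predictors, resolution and model specification are as described above.

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted
    least squares (IRLS) - a minimal sketch of the GLM family used
    to relate fire weather predictors to burned area."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)              # expected burned count
        z = X @ beta + (y - mu) / mu       # working response
        W = mu                             # Poisson working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
fwi = rng.uniform(0.0, 50.0, 500)          # synthetic fire weather index
X = np.column_stack([np.ones_like(fwi), fwi])
y = rng.poisson(np.exp(-1.0 + 0.05 * fwi)) # synthetic burned counts
beta = poisson_glm_irls(X, y)              # recovers roughly (-1.0, 0.05)
```

The log link guarantees non-negative predicted burned area and makes the fitted coefficients multiplicative effects, a common choice for count-like fire responses.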
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.34)

Presentation: Modelling multidimensional causes and impacts of extreme fires in the climate system through X-ECV analysis (XFires)

Authors: Stephen Sitch, Clement Albergel, Simon Bowring, Emilio Chuvieco Salinero, Philippe Ciais, Pierre Defourny, Wouter Dorigo, Darren Ghent, James Haywood, Daan Hubert, Ben Johnson, Amin Khairoun, Stine Kildegaard Rose, Céline Lamarche, Patricia Oliva Pavón, María Lucrecia Pettinari, Monica Pinardi, Lizzie Quaye, Louise Sandberg Sørensen, Erika Cristina Solano Romero, Dr Daniela Stroppiana, Roland Vernooij, Guido van der Werf, Yidi
Affiliations: University Of Exeter
Fire plays an important role in the Earth system, affecting atmospheric composition and climate, vegetation, soil and societal resources. Extreme fires are particularly important, as they entail the most severe damages, both in terms of social and ecological values. According to the European Commission, within Europe “most damage caused by fires is due to extreme fire events, which only account for about 2% of the total number of fires”. Their occurrence and impacts are closely linked to climate change, and are related to a wide range of climatic and environmental state variables, such as soil and vegetation moisture content, biomass, temperature, etc. Fires have a powerful impact on the atmosphere and thus on aerosol, greenhouse gas, and ozone concentrations, while the indirect effects of fire-related particles also affect water bodies and ice sheets. Here we summarize progress on the ESA-funded XFires project, which aims to research and quantify all of the above interactions to gain a holistic understanding of extreme fires and their impact in the Earth system. A particular focus of this project lies in gaining an improved theoretical and quantitative understanding of what the medium-term net effects of fire are on global carbon and radiative forcing budgets. This is important because at a global scale, little is known regarding how extreme fires impact vegetation and soil recovery timescales with respect to the time until the same system next experiences fire. Extreme fires are of particular interest because of how different biomes might hypothetically respond to, and recover from, different extreme fire characteristics, which have significant potential bearing on the global carbon cycle. To address these questions, we first use a cross-Essential Climate Variable (ECV) approach to define and characterise extreme fires. 
We will then develop and apply machine-learning approaches to model extreme fires and generate new emissions datasets to be used as input into an Earth System Model, to quantify impacts on atmospheric composition and climate. Finally, we will explore the wider impact of extreme fires on human health, lakes, and, via black carbon, melt rates on the Greenland ice sheet. These advances will help inform the Intergovernmental Panel on Climate Change (IPCC), as well as the Global Carbon Budget.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.34)

Presentation: Projected Ignition Behaviour of Lightning-Ignited Wildfire Under Climate Change Scenarios

Authors: Brittany Engle, Prof. Dr. Cornelius
Affiliations: Technical University of Munich, School of Life Sciences, Earth Observation for Ecosystem Management
Boreal forests are critical components of the global carbon cycle, housing approximately 32% of the world’s terrestrial carbon stocks. Historically, boreal forests have functioned as essential carbon sinks, but rising temperatures will likely increase wildfire activity, raising concerns that they could transition into net carbon emitters. This shift from a carbon sink to an emitter could have significant implications for the global carbon cycle, as increased emissions from boreal wildfires could further accelerate the impacts of climate change. Climate change has intensified the frequency and severity of fire weather in high-latitude boreal forests. As a result, lightning - a major source of ignition in the boreal - is projected to occur more frequently, potentially leading to a doubling in burned area. Lightning-ignited wildfires (LIW) present unique challenges due to their ability to smoulder undetected for long periods of time, occur in concentrated clusters, and resist suppression. The interplay between these dynamics highlights the importance of predicting ignition dynamics under future climate change scenarios. Due to data availability, lightning-ignited wildfire research is heavily concentrated in specific regions, like North America, which restricts our understanding of lightning ignition efficiency across the broader biome. Efforts to address this, such as the recently developed BoLtFire dataset (Engle et al. 2024), have made progress by creating a dataset that incorporates both fire occurrences and non-occurrences coupled with ignition driver parameters for modelling. In this study, we build upon this previous work to model lightning ignition efficiency under different Representative Concentration Pathways (RCPs), also incorporating global Earth observation data such as ESA CCI Biomass (Santoro and Cartus, 2024) and ERA5-Land Surface Temperature, Snow, and Soil Moisture (Sabater, 2019; Sabater et al., 2024).
This will allow us to better understand the impact of critical ignition drivers across global boreal forests, better predict future wildfire ignition dynamics under climate change, and contribute to a more comprehensive framework for LIW mitigation to better support wildfire management strategies.
References:
Engle, B., Bratoev, I., Crowley, M. A., Zhu, Y., and Senf, C.: Distribution and Characteristics of Lightning-Ignited Wildfires in Boreal Forests – the BoLtFire database, Earth Syst. Sci. Data Discuss. [preprint], https://doi.org/10.5194/essd-2024-465, in review, 2024.
Muñoz Sabater, J. (2019): ERA5-Land hourly data from 1950 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). DOI: 10.24381/cds.e2161bac (Accessed on 02-12-2024)
Muñoz Sabater, J., Comyn-Platt, E., Hersbach, H., Bell, B., Berrisford, P., Biavati, G., Horányi, A., Nicolas, J., Peubey, C., Radu, R., Rozum, I., Schepers, D., Simmons, A., Soci, C., Dee, D., Thépaut, J-N., Cagnazo, C., Cucchi, M. (2024): ERA5-Land post-processed daily statistics from 1950 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). DOI: 10.24381/cds.e9c9c792 (Accessed on 02-12-2024)
Santoro, M.; Cartus, O. (2024): ESA Biomass Climate Change Initiative (Biomass_cci): Global datasets of forest above-ground biomass for the years 2010, 2015, 2016, 2017, 2018, 2019, 2020 and 2021, v5.01. NERC EDS Centre for Environmental Data Analysis, 22 August 2024. doi:10.5285/bf535053562141c6bb7ad831f5998d77
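As a schematic illustration of the kind of ignition-efficiency modelling described above, the sketch below fits a logistic model relating driver variables to ignition/non-ignition outcomes. All of it is invented for illustration: the synthetic data, the driver choices and the coefficients are not taken from BoLtFire or the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ignition drivers (columns): soil moisture, temperature
# anomaly, biomass. Synthetic stand-in for occurrence/non-occurrence records.
n = 400
X = rng.normal(size=(n, 3))
logit = -1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2]   # dry + hot -> ignition
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression: P(ignition | drivers)."""
    Xb = np.c_[np.ones(len(X)), X]                # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)         # mean log-loss gradient
    return w

def prob(x, w):
    """Predicted ignition probability for one driver vector."""
    return 1 / (1 + np.exp(-(w[0] + x @ w[1:])))

w = fit_logistic(X, y)
```

A fitted model of this shape can then be re-evaluated on driver fields from climate projections to map how ignition probability shifts under different scenarios.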
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.14)

Session: E.05.03 Connecting Energy stakeholders and the Earth Observation Sector

This networking session aims to connect energy stakeholders (demand side) with EO solution providers (supply side) to explore opportunities to address the energy sector's needs and challenges using EO capabilities. Open to all participants interested in energy-related applications, this session creates a collaborative space to discuss end-user needs and challenges, linking them to space-enabled innovation opportunities. Through structured, moderated discussions, the session also aims to explore new collaboration pathways between the space and energy sectors to enhance the adoption of space capabilities across the energy value chain.

Chairs:


  • Zaynab Guerraou - ESA
  • Asimina Syriou - ESA
  • Stefano Ferretti - ESA
  • Ola Grabak - ESA
  • Richard Eyers - Richard Eyers Geoscience & Photography
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.85/1.86)

Session: A.09.08 Understanding the state of the cryosphere with satellite altimetry - PART 2

Altimetry missions have provided unique observations of the polar regions for more than three decades, transforming our understanding of the cryosphere in the process. Currently operating satellite altimeters such as NASA’s ICESat-2 and ESA’s CryoSat-2 are now the backbone of the long-running altimetry record and, together with SWOT, Sentinel-3, and SARAL, they provide year-round observations of changes in Earth’s sea ice, ice sheets, ice shelves, icebergs, the polar oceans, mountain glaciers, and terrestrial snow. In a complementary way, airborne missions are providing calibration and validation data for these satellite altimeters, as well as unique high-spatial-resolution data for studies of small-scale features such as ice ridging, rifts, and melt ponds. This session will demonstrate the vital role of both continuous and high-resolution altimetry in advancing our understanding of the processes involved in a changing cryosphere. We encourage submissions which use altimetry observations as a primary data set, as well as those which integrate altimetry measurements into model results to provide new insights into cryospheric processes.

Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Improved neural network classification for Arctic summer sea ice

Authors: Anne Braakmann-Folgmann, Dr Jack Landy, Robert Ricker, Geoffrey Dawson
Affiliations: UiT The Arctic University of Norway, NORCE Norwegian Research Centre, University of Bristol
Arctic sea ice thickness impacts various physical and biogeochemical processes at the air-ice-ocean interface. For example, it determines how much sunlight reaches the base of the ice – a key parameter for primary production. It is also an essential variable for sea ice forecasts, shipping and other marine activities. During the summer months, melt ponds complicate the retrieval of sea ice thickness compared to winter. On the other hand, observations of summer sea ice thickness are particularly important, as this is when most shipping and biological production happen. Summer sea ice thickness estimates are also crucial to improve predictions (Zhang et al. 2023). Recently, the first estimate of year-round Arctic sea ice thickness was presented (Landy et al. 2022). Based on this, a novel Cryo-TEMPO product of radar freeboard during the summer months (May-September) was launched in spring 2024. Validation against various airborne and mooring data reveals that large-scale temporal and spatial patterns are well captured, while radar freeboard is underestimated in the roughest multi-year ice zone north of Greenland (Dawson et al. 2022). This is where melt ponds have the biggest impact, as they dominate the return signal but sit significantly lower than the surrounding ridged sea ice, and hence introduce a negative EM bias. Improvements made to the retracking scheme have the potential to mitigate this bias. Summer sea ice generally has different scattering characteristics than winter sea ice, and the optimal retracking point is unknown. For the threshold first-maximum retracker algorithm (TFMRA), adjusting the threshold offers an easy and intuitive way to correct the EM bias. This is the first study to analyse the impact of different retracking thresholds on the freeboard distribution during summer and to compare the agreement between the resulting freeboards and in situ validation data for each threshold.
We show that changing the retracking threshold leads to an Arctic-wide adjustment of the sea ice freeboard estimates. So, when the thresholds are tuned to minimize the EM bias over the roughest multi-year ice areas, this in turn reduces the agreement with in situ data over the smoother first-year ice areas, which are well captured with the current approach and do not exhibit any significant bias. Ideally, the retracker should therefore be adjusted to the specific and varying conditions of summer sea ice, taking into account both temporal differences (onset of the melt season, ponding, draining and refreezing) and spatial differences mainly related to the roughness of the sea ice. To achieve this, we also experiment with adaptive retracking thresholds, which vary in space and time according to the specific sea ice conditions. We use pulse peakiness, day of year or sea ice age as proxies to represent the varying conditions.
References:
Dawson, G., Landy, J., Tsamados, M., Komarov, A., Howell, S., Heorton, H., Krumpen, T. (2022): A 10-year record of Arctic summer sea ice freeboard from CryoSat-2, Remote Sensing of Environment, https://doi.org/10.1016/j.rse.2021.112744
Landy, J., Dawson, G., Tsamados, M., Bushuk, M., Stroeve, J., Howell, S., Krumpen, T., Babb, D., Komarov, A., Heorton, H., Belter, J., Aksenov, Y. (2022): A year-round satellite sea-ice thickness record from CryoSat-2, Nature, https://doi.org/10.1038/s41586-022-05058-5
Zhang, Y.-F., Bushuk, M., Winton, M., Hurlin, B., Gregory, W., Landy, J., and Jia, L. (2023): Improvements in September Arctic Sea Ice Predictions Via Assimilation of Summer CryoSat-2 Sea Ice Thickness Observations, Geophysical Research Letters, https://doi.org/10.1029/2023GL105672
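For readers unfamiliar with threshold retracking, the sketch below illustrates the basic TFMRA idea the abstract builds on: place the retracking point where the leading edge of the first waveform maximum crosses a chosen fraction of that maximum's power. This is a minimal illustrative implementation, not the authors' processing chain; the noise-floor estimate and peak-detection rule are simplifying assumptions.

```python
import numpy as np

def tfmra_retrack(waveform, threshold=0.5, noise_bins=6):
    """Return the fractional range bin where the leading edge of the first
    local maximum crosses `threshold` x its peak power (TFMRA-style)."""
    wf = np.asarray(waveform, dtype=float)
    noise = wf[:noise_bins].mean()                 # crude thermal-noise floor
    wf = np.clip(wf - noise, 0.0, None)            # noise-subtracted power
    # first local maximum exceeding half the global peak (assumed peak rule)
    peaks = [i for i in range(1, len(wf) - 1)
             if wf[i] >= wf[i - 1] and wf[i] > wf[i + 1]
             and wf[i] > 0.5 * wf.max()]
    p = peaks[0]
    level = threshold * wf[p]
    # first bin on the leading edge at/above the threshold level
    k = next(i for i in range(p + 1) if wf[i] >= level)
    if k == 0 or wf[k] == wf[k - 1]:
        return float(k)
    # linear interpolation between bins k-1 and k
    return (k - 1) + (level - wf[k - 1]) / (wf[k] - wf[k - 1])
```

Raising or lowering `threshold` moves the retracking point up or down the leading edge, which is exactly the degree of freedom the study tunes against in situ freeboard data.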
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Dynamics of Radar Backscatter Changes During Early Melt Period in Greenland: Insights from Sentinel-3 Altimetry

Authors: Stine Kildegaard Rose, Sebastian B. Simonsen, Kirk Scanlan, Louise S. Sørensen
Affiliations: DTU Space
Satellite radar altimeters operating at Ku-band have provided crucial data for monitoring ice sheet surface elevations over the past 30 years. This information is vital for assessing climate change impacts, including ice loss, surface thinning, and grounding line migration. Surface height is derived from radar waveforms through retracking algorithms, while waveform parameters can detect anomalous surface characteristics, such as melt events. Variations in waveform parameters also influence elevation estimates by correlating with retracked heights. This study examines temporal variations in radar backscatter coefficients using Sentinel-3 radar altimetry data over two regions in Greenland: the North (drier conditions) and the Southwest (annually recurring wet conditions). The Southwest region exhibits a distinct annual backscatter signal linked to surface melt and ice exposure during summer. Conversely, the North region shows relatively stable backscatter values, except for a sudden decline observed in the summer of 2023. The cause of this variability in the North remains unclear, but we suggest that the different snow conditions in the North allow rapid percolation of meltwater into the snowpack. Our analysis highlights the challenges and benefits of integrating backscatter information into surface elevation algorithms. Incorporating a backscatter-height correlation improves alignment with ICESat-2 laser altimetry in the Southwest region. However, applying the same method in the North region introduces errors. These findings underscore the importance of region-specific approaches when using waveform parameters to refine surface elevation measurements. Further research into the underlying processes driving backscatter variability is crucial for enhancing the accuracy of ice sheet monitoring using radar altimetry.
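The backscatter-height correlation correction mentioned above can be illustrated schematically: regress the height anomaly on the backscatter anomaly and subtract the correlated component. The sketch below is a minimal stand-in for that step, with synthetic numbers, and is not the DTU Space processing chain.

```python
import numpy as np

def backscatter_correct(height, sigma0):
    """Remove the component of the height anomaly that is linearly
    correlated with backscatter: regress dh on d(sigma0) via least
    squares and subtract the fit. Returns (corrected heights, slope)."""
    dh = height - np.mean(height)
    ds = sigma0 - np.mean(sigma0)
    slope = np.dot(ds, dh) / np.dot(ds, ds)   # sensitivity, m per dB
    return height - slope * ds, slope
```

In the Southwest region such a correction absorbs the melt-driven backscatter signal; as the abstract notes, applying the same global slope where the backscatter-height relationship differs (the North) can instead introduce errors.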
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Greenland Ice Sheet Elevation Change from CryoSat-2 and ICESat-2

Authors: Nitin Ravinder, Andrew Shepherd, Inès Otosaka, Thomas Slater, Alan Muir, Lin Gilbert
Affiliations: School of Earth and Environment, University Of Leeds, Centre for Polar Observation and Modelling, Northumbria University, Centre for Polar Observation and Modelling, University College London, Mullard Space Science Laboratory, University College London
The polar ice sheets are reacting to climate warming. Changes in their height can be used to study changes in their snowfall, surface melting, glacier flow, and sea level contribution. Although satellite altimeters are able to detect changes in ice sheet height, it is not clear whether these changes are sensed differently by laser and radar systems. Using four years of coincident measurements recorded by ESA’s CryoSat-2 and NASA’s ICESat-2, we show that differences between radar and laser at the ice sheet scale are, in fact, a small proportion (< 10 %) of the changes in height that are taking place across Greenland. Interannual trends in the interior and the ablation zone of the ice sheet agree to within -0.2 ± 1.5 cm/yr and 3.3 ± 6.0 cm/yr, respectively, and the amplitudes of the seasonal cycle within the ablation zone agree to within 3.5 ± 38.0 cm. This means that either system can be used with confidence to study the effects of climate change on the polar ice sheets. Using both missions, we estimate that the Greenland Ice Sheet lost volume at an average rate of 226 ± 37 km3/yr between 2010 and 2022, with an interannual variability of 129 km3/yr. At smaller spatial scales, the remaining differences are still important and should be investigated further so that we can understand their causes.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Lake Ice Thickness from radar altimetry: new data, improvements and perspectives

Authors: Anna Mangilli, Claude Duguay, Kassem Asfour, Justin Murfitt, Jaya Sree Mugunthan, Samira Amaraoui, Thomas Moreau, Pierre Thibaut, Clement Albergel, Creig Donlon, Jerome Bouffard
Affiliations: CLS, H2O geomatics, University of Waterloo, Bioceanor, ESA ECSAT, ESA ESTEC, ESA ESRIN
Lake Ice Thickness (LIT) is an important climate indicator recognized as a thematic product of the 'Lakes' Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS). Yet, the monitoring of LIT is very limited in both time and space from conventional field measurements. In this respect, radar altimetry offers a viable means of filling this critical gap since it provides increasingly longer historical records for many northern lakes required for climate monitoring. Although the potential of radar altimetry is now recognized, it has not yet been fully explored. Building on recent advances in the development of novel analytically based retracking approaches to accurately estimate LIT from Ku-band radar altimetry waveform data, both low-resolution mode (LRM) and high-resolution Synthetic Aperture Radar (SAR) data (Mangilli et al. 2022, Mangilli et al. 2024), in this talk we will present long LIT time series obtained from radar altimetry data for a selection of northern lakes. More specifically, we will provide an overview of the statistical analysis and methods used to estimate the LIT time series generated and their validation for lakes of different bathymetry, snow and ice conditions, highlighting preliminary results of LIT climatologies. We will then report on recent improvements in LIT estimates obtained from current radar altimetry SAR missions, presenting LIT time series obtained with unfocused and fully focused (UFSAR and FFSAR) SAR data. Finally, we will provide perspectives for future missions such as the European Space Agency (ESA) CRISTAL mission. The LIT time series produced from the analysis of LRM data have been generated and validated within the ESA Climate Change Initiative (CCI) Lakes project and are publicly available according to the project schedule. LIT estimates with UFSAR and FFSAR data have been generated within the ESA S6JTEX project. 
Finally, the development and assessment of LIT estimates for the CRISTAL mission is being carried out as part of the ESA CLEV2ER project.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: From global AI satellite products to local Inuit-led travel safety maps: lessons learned from the Sikuttiaq (“the good ice”) project

Authors: Michel TSAMADOS, Dr Randall Scharien, Dr Thomas Newman, Dr Thomas Johnson, Professor Saeid Homayouni, Dr Rob Briggs, Mr Andrew Arreak, Dr Grant Macdonald, Dr Becky Segal, Professor Trevor Bell
Affiliations: Earth Sciences, CPOM, UCL, University of Victoria, INRS, NRC, SmartICE, Durham University, SIKU
This presentation highlights recent advancements in applying artificial intelligence (AI) methods to Earth Observation (EO) of polar regions, with a particular focus on sea ice monitoring (AI4EO). The synergy between the exponential growth of satellite data and AI techniques has enhanced our capability to observe and analyze the dynamic polar environment, including the challenging Arctic and Antarctic regions. Sea ice and snow surface roughness, a variable of significant climatic interest, modulates critical exchanges between the atmosphere and ocean, with implications for both global climate dynamics and local realities. For Inuit communities, sea ice roughness carries a distinct meaning, representing a physical impediment to safe travel and hunting. Acknowledging these cultural sensitivities, this work explores the necessity for rigorous satellite-derived metrics of sea ice and snow roughness. High-resolution, year-round ground-based, airborne, and drone validation datasets are essential to ensure the fidelity and utility of satellite measurements for diverse end-user needs. The Sikuttiaq project exemplifies this dual focus, demonstrating how advanced satellite monitoring and AI can intersect with Inuit knowledge and priorities to co-develop tools that are both scientifically robust and socially relevant. This presentation underscores the importance of integrating cutting-edge satellite EO with local context and validation efforts to deliver actionable insights for global science and community resilience.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Evolution of the Greenland and Antarctic Ice Sheets over the past 30 years from satellite radar altimetry

Authors: Malcolm Mcmillan, Jennifer Maddalena, Karla Boxall, Qi Huang, Robert Wassink, Kate Briggs, Joe Phillips, Alan Muir, Inès Otosaka, Andrew Shepherd, Tom Slater
Affiliations: CPOM, Lancaster University, CPOM, Northumbria University, CPOM UCL
Melting of the Greenland and Antarctic ice sheets currently contributes more than one third of global sea-level rise. As Earth’s climate continues to warm throughout the 21st Century, ice loss is expected to increase further, with the potential to cause social and economic disruption. To track the changes that are currently underway in the Polar regions requires detailed and systematic monitoring programmes. Given the vast and inaccessible nature of the ice sheets, this is only feasible from space. One technique that has proved particularly valuable in recent decades is that of satellite radar altimetry. From the launch of ERS-1 thirty years ago, through to the current high resolution measurements provided by CryoSat-2 and Sentinel-3, ESA satellite radar altimeters have provided a continuous record of ice sheet surface elevation change, allowing mass imbalance and the associated contribution to sea level rise to be quantified, and the physical processes that drive ice mass imbalance to be better understood. Uniquely, satellite radar altimeters have been able to provide both the continuous and continental-scale coverage required for comprehensive mass balance assessments, together with estimates at sufficiently fine resolution to be able to resolve changes at the scale of individual glacier catchments. In this presentation, we will bring together data and results from multiple ESA-funded studies, in combination with the ongoing work of the UK Centre for Polar Observation and Modelling, to synthesise recent advances in our capacity to measure and understand the evolution of the polar Ice Sheets, based upon radar altimeter measurements acquired over the past 30 years. 
First, we will describe the recent progress made towards improving the quality and usability of these radar altimetry measurements, from advances made to the Level-2 processing of ERS-1, ERS-2, Envisat and CryoSat-2 within the FDR4ALT and Cryo-TEMPO projects, through to new estimates of ice sheet elevation, volume and mass change made by the UK Centre for Polar Observation and Modelling and the UK Earth Observation Climate Information Service. Second, we will assess what these measurements tell us about the evolving mass balance of different sectors of the Greenland and Antarctic Ice Sheets over the past 30 years, the current state of ice sheet imbalance in 2025, the associated contribution to global sea level rise, and the implications in terms of forcing mechanisms. Finally, we will conclude by looking at some of the future opportunities for ice sheet radar altimetry processing and downstream applications, including recent work to develop Kalman Filter frameworks oriented towards the operational data assimilation of multiple parallel streams of altimetry data.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.11/0.12)

Session: A.08.10 Coastal Ocean and Land-sea interaction - PART 2

Around 10% of the world's population lives in coastal regions. These areas host a wide range of socio-economic activities, ranging from fisheries and aquaculture to tourism and ecosystem preservation. They are, however, highly vulnerable to climate change, being particularly affected by sea level rise and erosion, and subject to extreme events such as storm surges and floods.
They also play a crucial role in the Earth system as the interface between land and ocean, and are of fundamental importance for fluxes of carbon, nutrients, pollutants and freshwater along the land-sea continuum.

This Session welcomes contributions on comprehensive data-driven reconstructions of coastal processes, based on the latest EO capabilities and the exploitation of the growing set of sensors offering high-resolution data over coastal domains and interfaces. Although satellite EO has a prominent role, a complete representation of the overall 4D processes can only be achieved by a consistent, synergistic and multi-modal use of complementary assets such as in-situ measurements, modelling experiments and AI.

Cross-disciplinary thematic topics that are of interest for this Session are:
• Coastal ocean dynamics and sea level variability
• Extremes and Geohazards – e.g., flash floods, storm surges and coastal erosion.
• Multi-stressors – such as marine heatwaves (MHWs) and deoxygenation.
• Water Quality – including pollution and harmful algae proliferation.
• Ocean carbon and gas fluxes – with special emphasis on high-resolution carbon processes and blue carbon.
• Air-sea-land-ice fluxes/exchanges – with a characterization of exchanges of heat, momentum and freshwater, including atmospheric nutrient deposition and its influence on ocean biogeochemistry and biology.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: Dredge plume monitoring in coastal waters

Authors: Els Knaeps, Robrecht Moelans, Liesbeth De Keukelaere, David Doxaran
Affiliations: Vito, Laboratoire d'Océanographie de Villefranche
Dredging operations are essential for protecting coastlines, ensuring safe navigation and maintaining access to harbours. With the threats of sea level rise, flooding and more extreme storms, the need for dredging operations will only increase, and their sustainable implementation will be key. Clients are increasingly aware of the need for sustainable dredging operations and ask contractors to work in a sustainable way. One of the key aspects to control is the release of sediments into the water and the resulting increase of turbidity during dredging, which can degrade water quality and harm sensitive ecosystems. Imposing turbidity limits with which dredging contractors have to comply is common in most parts of the world. However, there are various sources of turbidity near the dredging area, from resuspension to dredged bottom sediments and river outflows. Hence, a turbidity increase is not necessarily linked to a (nearby) dredging activity. It is therefore key to understand the extent of the dredge plume and the natural background turbidity, in order to avoid unacceptable ecological impacts, adhere to the directives in place, and avoid unnecessary shutdowns. We have developed a method for dedicated dredge plume monitoring based on EO data. This includes the development of specific products such as plume extent, turbidity level and impact. First, a Sentinel-2 database of dredge plumes was compiled, covering several large dredging sites around the world. This database includes water reflectance and turbidity products derived from the L1C data. The images are atmospherically corrected with iCOR and the turbidity is derived from a multiconditional algorithm based on Dogliotti et al., 2015. Auxiliary information is recorded for each site, such as the type of dredging activity and site characteristics. The S2 database is complemented with an automated AI-based plume delineation, which automatically recognizes the plumes in the images and derives the near-field dredge plume.
In a second step, the optical properties of the dredge plumes are evaluated. This is done by developing a dredge plume spectral library from the database and by collecting field samples during dredging activities. These field samples are analysed and interpreted to understand their optical properties (light attenuation, absorption, scattering and backscattering, turbidity and fluorescence per unit of suspended particulate matter and chlorophyll-a concentration). Finally, we will present the results of our new Optical Water Type (OWT) classification, adapted to the specific conditions of a dredging site. With the OWT classification, we can derive the turbidity, extent and impact for the dredging site.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: Primary productivity in Upwelling Systems (PRIMUS)

Authors: Steve Groom, Gemma Kulk, Bror Jonsson, Shubha Sathyendanath, Gavin Tilstone, Dan Clewley, Peter Land, Kim Hockley, Elin Meek, Pepe Salgado, Javier Aristegui, Nauzet Hernandez, Vanda Brotas, Catarina Guerreira, Afonso Ferreira, Paula Oliveira, Susanna Garrido, Christo Whittle, Marie Smith, Marco Restano, Roberto Sabia, Jérôme Benveniste
Affiliations: PML, CSIC-IIM, ULPGC, FCUL, IPMA, CSIR, ESA, COSPAR
The ESA-supported Primary productivity in Upwelling Systems (PRIMUS) project investigated net primary productivity (NPP), an important component of the coastal carbon budget, and its relationship to upwelling in Atlantic Eastern Boundary Upwelling Systems (EBUS). The project first constructed an extensive database of in situ PP measurements, gathered with a variety of techniques, primarily focussed on the Galician upwelling off NW Spain, and also covering the Spanish, Canary Current and Benguela upwelling systems. Next, PRIMUS created a 25-year, daily time series of 1-km satellite-derived NPP over the Atlantic, using two models: Platt and Sathyendranath (1988) and Smyth et al. (2005). The models were forced with ESA Ocean Colour Climate Change Initiative (CCI) data, together with sea-surface temperature (SST) and satellite-derived photosynthetically available radiation (PAR) data. Finally, an experimental daily NPP product was produced at higher resolution (300 m), using the unique capabilities of the Sentinel-3 A and B OLCI sensors, covering the Galician sea and the rias of NW Spain. The EO products are available via an online data visualisation and analysis portal, while the in situ data have been submitted as a data paper and added to the global in situ carbon database with the ESA SCOPE project. PRIMUS utilised these data to advance analyses of Atlantic EBUS, including temporal and spatial variability in NPP and its statistical relationship to upwelling, along with eight further science cases in specific science areas / regional settings including, inter alia: mussel aquaculture in the Galician rias; the sardine fishery and larval recruitment in the Portuguese upwelling region; the potential EBUS impacts on coastal carbon pools; and Lagrangian estimates of NPP in relation to vertical exports in the Canary Current upwelling system.
PRIMUS also conducted demonstrations of transferring the science cases into “solutions for society”, working together with scientific, agency, policy and commercial “early adopters”, focussing on EBUS and aquaculture; fisheries; and eutrophication monitoring. Finally, the project investigated approaches for transitioning data production to operational initiatives such as Copernicus and GMES and Africa, and the potential for exploitation of the data by the European and international ecosystem modelling community. This communication will present highlight results from the in situ database, the 25-year NPP time series and the high-resolution NPP computations, as well as selected science cases, focusing on the importance of upwelling systems in the Atlantic and their impact on the coastal carbon cycle.
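Satellite NPP models of the kind cited above share a common structure: chlorophyll-specific photosynthesis saturating with irradiance (a P-I curve), integrated over an exponentially attenuated light field. The sketch below illustrates that structure only, with invented parameter values and a uniform chlorophyll profile; it is not the PRIMUS implementation of either cited model.

```python
import numpy as np

def water_column_npp(chl, par0, pbm=4.0, alpha=0.08, kd=0.12,
                     zmax=60.0, dz=0.5):
    """Depth-integrated production (mg C m^-2 h^-1) from a saturating
    P-I curve, assuming uniform chlorophyll `chl` (mg m^-3), surface
    irradiance `par0` (W m^-2) and exponential light decay exp(-kd*z).
    All parameter values are illustrative placeholders."""
    z = np.arange(0.0, zmax, dz)
    irr = par0 * np.exp(-kd * z)                    # E(z), attenuated light
    pb = pbm * (1.0 - np.exp(-alpha * irr / pbm))   # mg C (mg chl)^-1 h^-1
    return float(np.sum(chl * pb) * dz)             # Riemann-sum integral
```

With a uniform profile the result scales linearly in chlorophyll, while the light dependence saturates with depth-integrated irradiance, which is why NPP models need SST/PAR forcing alongside the ocean-colour chlorophyll fields.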
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: Reconstruction of dynamic processes in the Baltic Sea using the synergy of satellite images and in situ data supported by numerical modeling and AI methods

Authors: Miroslaw Darecki, Dr Anna Bulczak, Rafael Catany, Dr Jaromir Jakacki, Maciej Janecki, Professor Lidia Dzierzbicka-Glowacka, Artur Nowicki, Dominik Lis, Dr Joanna Ston-Egiert, Dr Andreas Lehmann, PD Dr-Ing. habil Luciana Fenoglio, Artu Ellmann, Jamming Chen, Dr Laurent Bertino, Quentin Jutard, Philippe Bryère, Dr Nicole Delpeche-Ellmann, Hela Merthens, Natascha Mohammadi, Raphaëlle Sauzède
Affiliations: IOPAN, GEOMAR, University of Bonn, NERSC, TallTech, ACRI-ST, Albavalor, ARGANS Ltd, AdwäisEO, LOV
Modern satellite data provide powerful and unprecedented tools for monitoring the marine environment on both local and global scales. However, these data are typically limited to observations of the sea surface or the nearest subsurface layers, making it difficult to obtain a comprehensive view of the environment, accurately assess its state, and track changes over time and space. Achieving a holistic understanding requires data that encompass the entire water column. Mathematical modeling of the marine environment can provide such data, but even the most advanced models struggle to accurately reproduce all environmental parameters. Alternatively, modern statistical techniques, including artificial intelligence (AI) and machine learning (ML), can offer valuable insights into the water column. Yet, neither mathematical modeling nor AI/ML methods alone can yield fully reliable information without the integration of real-world environmental data. The synergy of these approaches with satellite observations is crucial to delivering a comprehensive and reliable picture of the marine environment, enabling the monitoring of changes and forecasting their impact on ecosystems and human activities. This integrated approach is being realized through the Baltic Sea Dynamics 4D Modeling and Integrated Earth Observation (4DBALTDYN) project, launched in 2024. The project seeks to enhance our understanding of the Baltic Sea's physical and biogeochemical states by leveraging modern statistical methods, advanced modeling, in situ data, and satellite imagery. Depending on the available parameters, its primary output will be a 4D reconstruction of the Baltic Sea's physical and biogeochemical conditions over recent decades. The project's potential will be demonstrated through four case studies illustrating how this 4D reconstruction can advance our understanding of the complex interactions among physical, biological, and biogeochemical processes in the Baltic Sea.
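As a toy illustration of the statistical step described above (inferring water-column properties from surface-only satellite fields), the sketch below fits a ridge regression on synthetic data. The predictor choices, target depth and coefficients are all invented and unrelated to the 4DBALTDYN products.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: surface predictors (e.g. SST, SSH, surface chl) and
# a "true" temperature at 50 m depth they partially determine.
n = 300
surf = rng.normal(size=(n, 3))
t50 = (8.0 + 0.7 * surf[:, 0] + 2.0 * surf[:, 1] - 0.3 * surf[:, 2]
       + 0.1 * rng.normal(size=n))

def ridge_fit(X, y, lam=1e-3):
    """Ridge regression: a minimal stand-in for the AI/ML step mapping
    surface satellite fields to subsurface properties."""
    Xb = np.c_[np.ones(len(X)), X]                 # intercept column
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])      # regularized normal eqns
    return np.linalg.solve(A, Xb.T @ y)

w = ridge_fit(surf, t50)
pred = np.c_[np.ones(n), surf] @ w                 # reconstructed 50 m temp
```

Real systems replace this linear map with neural networks trained on in situ profiles (Argo-style data), but the structure (surface observables in, depth-resolved state out) is the same.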
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: Geophysically-driven or artificial grounding? Considering closed-boundary conditions in satellite-derived flow fields to improve off-line Lagrangian studies in the coastal ocean

Authors: Dr. Vincent Rossi, Dr Ismael Hernandez-Carrasco, Dr Maxime Ballarotta, Dr Léo Berline, Prof Annalisa Bracco, Dr Diego Cortes-Morales, Dr Michael Denes, Lorenzo Dellacioppa, Dr Federico Falcini, Dr Patrick Marchesiello, Prof Paolo Oddo, Dr Maria Josefina Olascoaga, Dr Siren Ruhs, Chaimaa Rwawi, Prof Erik Van Sebille, Bruno Buongiorno
Affiliations: CNRS - MIO
The coastal ocean houses a large part of the global human population, hosts a great majority of marine life (including harvested, threatened and harmful macro-/micro-organisms) and is characterized by particularly complex dynamics. While it supports biodiversity hotspots and key ecosystem services and regulates the climate system, non-linear physical processes related to coastal transport, mixing and dispersion are still not well understood. Indeed, shelf and coastal multiscale currents constantly redistribute in space and time any particulate and dissolved entities present or released in the water column. Offline Lagrangian models, which rely on time-dependent gridded flow fields (among which satellite-derived products are the only source of currents derived exclusively from observations), have proven their efficiency for fundamental and applied research on oceanic transport and dispersion. However, their results are highly dependent on the quality of the input velocity fields. In the 4DMED-SEA project, such fields have been generated for the first time at high resolution (including smaller-scale processes) and in the interior ocean, opening new opportunities. Yet, factors contributing to inaccuracies near the coast and the bottom topography (closed-boundary conditions) are not taken into account in such products, meaning they do not fulfil conservation laws of energy and mass. In this talk, we first recall the main steps of the production chains of these products, highlighting potential error sources. Then, we conceptualize the grounding conundrum and review the main issues that arise in offline Lagrangian modelling. These include spurious trajectories, gain/loss of particles preventing conservative studies, biased analyses (e.g. structure detection, mixing quantification, dispersal pathways) and the inability to distinguish artificial from true beaching. We then propose a common framework to harmonize current practices (e.g. 
employed equations of motion, numerical integrators, interpolation schemes, boundary definitions, etc.) and discuss their impacts on grounding. To anticipate future developments, we recall the peculiarities of shelf and coastal dynamics linked with intricate coastlines, convoluted bathymetry, fine-scale wind patterns, and interactions between tides, wind-induced currents, nearshore waves, river plumes, etc. Finally, we review the existing strategies employed to circumvent the above-mentioned issues, including observation-driven grounding parameterizations, trajectory corrections and the implementation of closed-boundary conditions on the flow field itself. We conclude by summarizing good practices. Advancing our ability to model transport, mixing and dispersion processes in shelf and coastal seas from satellite-derived currents will improve both fundamental and applied research (e.g. tracking plastics, oil spills and other anthropogenic contamination events, assessing the connectivity of Marine Protected Area networks and better managing fisheries) as well as various societal applications (e.g. Search & Rescue operations, water-quality assessment after dredging operations near seaports, underwater cables, offshore wind farms, etc.).
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: Monitoring coastal and land-sea interactions of water topography from merged SAR nadir- and SWOT-altimeters

Authors: Luciana Fenoglio, Dr. Volker Morholz, Dr.Ing. Christopher Buchhaupt, Jiaming Chen, Poom Yoosiri, Dr. Jaromir Jakacki, Dr. Artu Ellmann, Dr. Nicole Depeche-Ellmann, Dr. Ulf Graewe, Jens Faber, H Renzelmann, Dr. Anne Bulczak, Dr. M. Darecki, Dr. D. Rack, Dr. Andrea Lehmann, Maciej Janecki, David Dybowski, Marine Bretagnon, Laurent Bertino, Rafael Catany, Aurelien Prat, Quentin Jutard, Philip Bryere, Roberto Sabia
Affiliations: University of Bonn, Leibniz-Institute for Baltic Sea Research (IOW), University of Maryland, IOPAN, Tallinn Technical University, GEOMAR, ACRI-ST, Albavalor, NERSC, ESA
Since 2010, a significant advancement in water level height measurements has been provided by the smaller along-track footprint and higher pulse repetition frequency of Synthetic Aperture Radar (SAR) altimetry compared to conventional altimetry. Unprecedented accuracy and resolution in monitoring water heights and water dynamics have been possible since 2016 and 2020, respectively, with the 27-day repeat Sentinel-3 and 10-day repeat Sentinel-6 missions. This capability is especially crucial in coastal regions, estuaries and inland waters due to the traditionally problematic contamination of land in the altimeter footprint. Since 2023, SWOT high-resolution 2D observations are expected to further improve the detection of fine-scale topography patterns and of their variations. In this work we estimate spatial and temporal changes at fine scales obtained by merging the SAR nadir- and SWOT-altimeter missions. The regions of analysis are the coasts and estuaries of the Baltic and North Seas in the time interval from April 2023 to December 2024. First, nadir-altimeter mission data have been prepared with unfocused and focused techniques to increase the data quality and optimise the processing selection. Measurements and crucial corrections, such as sea state bias, have been derived, analysed and selected. Water heights are merged with SWOT data in daily gridded maps of high spatial resolution (HR). The geostrophic ocean currents are derived from the maps of absolute dynamic topography (ADT) and validated against the CMEMS/DUACS L4 and Globcurrent currents. First results indicate a high variability of the water surface at daily time scales and suggest that SWOT data agree better with in-situ data and models than nadir altimetry does. The HR daily gridded maps are assimilated in the regional ocean models. The water level accuracy improves at less than 2 kilometres from the coast and in land-sea regions and estuaries, the environments most affected by human impact and climate change.
The land-ocean interface is subject to multiple risks in a hydrodynamic and bio-morphological context, such as coastline retreat, flooding during storm surges, river floods and pollution. Changes relative to land and coastal impacts are studied in four ROIs, which represent different key areas, namely open ocean, coastal ocean circulation, inland water and ice coverage. In order: [1] the Danish Straits, Belt Sea, Skagerrak and Kattegat, a key area for currents and velocities; [2] the Gotland Basin, a key area for the connection between deep water and the surface layer; [3] the Gdansk Basin with the Vistula River, a key area for freshwater river input into coastal waters with marine ecosystem modification; [4] the Gulf of Bothnia with its sea ice dynamic components. This work is part of the 4DBALTDYN project funded by ESA. By integrating high-resolution satellite altimetry observations, in-situ data and ocean modelling, the project aims at improving the understanding of Baltic Sea water height dynamics. The analysis is at seasonal and annual scales and will be extended to trends.
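The derivation of geostrophic currents from gridded ADT maps mentioned in the abstract follows the standard geostrophic balance, f·u = -g ∂η/∂y and f·v = g ∂η/∂x. A minimal sketch of that step (the grid spacing and the toy ADT field below are illustrative assumptions, not the project's actual products):

```python
import numpy as np

g = 9.81          # gravitational acceleration [m/s^2]
omega = 7.292e-5  # Earth rotation rate [rad/s]

def geostrophic_currents(adt, lat_deg, dx, dy):
    """Zonal (u) and meridional (v) geostrophic velocities [m/s] from a
    gridded absolute dynamic topography field adt [m].
    dx, dy: grid spacing in metres; lat_deg: latitude of each grid row."""
    f = 2 * omega * np.sin(np.deg2rad(lat_deg))[:, None]  # Coriolis parameter
    deta_dy, deta_dx = np.gradient(adt, dy, dx)           # surface slopes
    u = -(g / f) * deta_dy  # geostrophic balance: f u = -g d(eta)/dy
    v = (g / f) * deta_dx   #                      f v =  g d(eta)/dx
    return u, v

# Toy example: a surface rising 4 cm eastward over 100 km at ~57 N (Baltic)
lat = np.linspace(56.0, 58.0, 5)
adt = np.tile(np.linspace(0.0, 0.04, 5), (5, 1))  # eta increases eastward
u, v = geostrophic_currents(adt, lat, dx=25e3, dy=25e3)
```

For this eastward-rising surface the balance yields a purely northward flow of a few cm/s, the order of magnitude typical of basin-scale Baltic circulation.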
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: On the assimilation of 5 Hz wave data in the IBI coastal wave model

Authors: Lotfi Aouf, Adrien Nigou, Annabelle OLLIVIER, Breogan
Affiliations: Météo France, CNRM
Coastal wave models have greatly improved their spatial resolution in order to meet the requirements of wave-submersion warnings in European seas. It is therefore crucial for wave data from altimetry to provide a data sampling that can be efficiently exploited in the assimilation system. The availability of wave heights from altimetry at 5 Hz resolution is an interesting development for coastal models. This work analyzes the impact of assimilating altimetry data at 5 Hz on coastal sea state forecasting. Four satellite missions, Jason-3, SARAL, CFOSAT and HY-2B, are assimilated into the MFWAM wave model for the Iberian Biscay Ireland (IBI) configuration at a high spatial resolution of 2.5 km, which is dedicated to the monitoring and forecasting centre of the regional Copernicus Marine Service IBI. The analysis covers the year 2021. We have also implemented combined assimilation experiments with wave heights at 5 Hz and wave spectra from CFOSAT's SWIM instrument. Validation of these MFWAM-IBI model simulations was carried out with independent wave height data from altimeters and integrated wave parameters from drifting and fixed buoy observations. The results show a significant impact on near-shore wave heights and a strong reduction in the bias and dispersion index of wave heights. We examined the performance of the assimilation in storms near the French coast and in the Mediterranean. Spectral analysis of wave heights shows that assimilation of wave heights at 5 Hz provides a better description of small-scale ocean surface variability, often dominated by wave/current interactions.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L3)

Session: C.01.10 Microwave Instrument Concepts and Technologies to enable new EO Science Missions - PART 2

New microwave instrument concepts and technologies are enabling new Earth observation science missions.
This session aims to discuss new microwave remote sensing instrument concepts and present advances in microwave instrument technologies and related instrument pre-development activities.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L3)

Presentation: Wideband L-band Integrated LNA for RF Receiver Frontends in Earth Observation Systems

Authors: Valerie Dutto, Alessandro Barigelli, Ferdinando Costanzo, Francesco Vitulli, Sergio Colangeli, Ernesto Limiti, Patrick Longhi, David Cuadrado Calle
Affiliations: ESA, Thales Alenia Space Italia, University of Rome “Tor Vergata”
Low noise microwave amplifiers (LNAs) are a vital component for realising high-performance RF receiver front ends in Earth Observation systems. However, in conventional L-band passive radiometer instruments, unwanted signals from Radio-Frequency Interference (RFI) can degrade or destroy the LNA MMIC (Monolithic Microwave Integrated Circuit). Radio-frequency protection circuits are used to protect the low noise amplifier: limiters are typically integrated in the front end to increase the system robustness against excessive power. These solutions introduce losses, degrade the noise figure performance and increase the overall dimensions of the front-end unit. With the advent of low-noise gallium nitride (GaN) technology, robust LNA MMICs can be designed which may not require a limiter at the input, or may require a less demanding limiter, hence improving the noise figure of the receive front end. Recent studies have demonstrated that GaN combines high input signal power handling capability with state-of-the-art noise performance, which makes this technology an excellent choice for robust LNA development. Current L-band LNAs have been optimised for narrow-band applications. However, interesting radiometer concepts for sounding the atmosphere have emerged that require wideband L-band receivers. This activity, under development in the frame of an ARTES AT (Advanced Technology) activity with the European Space Agency (ESA) and supported by the Italian Space Agency (ASI), aims to develop and test a breadboard of an L-band packaged LNA with improved input signal power handling capability, while maintaining state-of-the-art noise figure performance over the 0.4 GHz - 2 GHz band. As of today, a first iteration of the LNA MMICs has been designed, manufactured and tested. The adopted technology is the GaN process GH15 offered by the French foundry UMS. Two narrow-band chips have been realized, namely an LB-LNA and a UP-LNA, covering the 0.4-0.9 GHz and 0.9-2 GHz ranges, respectively. 
In addition, a WB-LNA covering the full 0.4-2 GHz range has been realized as well. The LB-LNA and WB-LNA are provided with an on-chip integrated limiter to improve their resilience to high RFI, while the UP-LNA is not. The employed design topology is a two-stage cascode configuration with parallel-parallel feedback, chosen for its inherently broad bandwidth. The achieved gain is around 19 dB, the noise figure averages 1.1 dB in band with a power consumption of 1.4 W (VD = 20 V, ID = 70 mA), and matching is 15 dB over this large band. Most interestingly, the measured robustness to high RFI has been found quite satisfactory: the amplifiers provided with an input limiter have withstood RFI levels up to 20 W (pulsed RFI: duration 2.5 ms, repetition time 10 ms; total test duration 15 s), while the ones without a limiter were tested up to 1 W in the same conditions. Just after the RFI application, a gain drop of a few tens of dB was observed, but measurements of gain and noise figure over the full band after about 24 hours showed performance identical to that before the RFI was applied. The mentioned gain drop will be further analysed on the LNA chips of the final foundry run. The LNAs have also been tested for Short Term Gain Stability (STGS), which is a fundamental performance parameter where amplifiers are used in radiometer applications. The measured figure is 5×10⁻⁴, which is in line with the figures measured by TASI on several radiometers produced for the METOP program, where GaAs technology was used for the design of the LNAs. This achievement is a first demonstration that GaN technology is well suited for use in radiometer design. A second iteration of the LNA design has been sent to manufacturing and the chips will be available in a few months. The final objective of the study is the manufacturing and test of an LNA module featuring a gain of 35 dB in the wide 0.4-2.0 GHz band, a noise figure better than 1.5 dB (target: 1.3 dB) and a 1 dB compression point of 23 dBm. 
The RF chain will be composed of 2 or 3 packaged LNAs, an amplitude flatness equalizer and a thermo-pad for gain compensation over the specified temperature range from -20 to 50°C. All the above functional blocks are to be assembled on an RF board. Test results of the LNA module are planned to be ready by April 2025.
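As a plausibility check on the module figures above (a 35 dB, sub-1.5 dB module built from stages of roughly 19 dB gain and 1.1 dB noise figure), the noise figure of a cascade follows the Friis formula. A minimal sketch, ignoring inter-stage losses and the equalizer (an optimistic simplification):

```python
import math

def cascade(stages):
    """Total gain [dB] and noise figure [dB] of a cascade of stages,
    via the Friis formula: F = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ..."""
    f_total, g_prod = 1.0, 1.0
    for g_db, nf_db in stages:
        f = 10 ** (nf_db / 10.0)       # stage noise factor (linear)
        f_total += (f - 1.0) / g_prod  # later stages divided by gain so far
        g_prod *= 10 ** (g_db / 10.0)  # accumulate linear gain
    return 10 * math.log10(g_prod), 10 * math.log10(f_total)

# Two first-iteration LNA stages (~19 dB gain, 1.1 dB noise figure each)
gain_db, nf_db = cascade([(19.0, 1.1), (19.0, 1.1)])
```

With these numbers the cascade reaches 38 dB gain while the noise figure stays close to 1.1 dB, illustrating why the first stage dominates the chain's noise performance.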
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L3)

Presentation: Spaceborne Demonstration of Remote Sensing using P-band Signals of Opportunity

Authors: James Garrison, Manuel A. Vega, Rashmi Shah, Benjamin Nold, Justin R. Mansell, Roger Banting, Eric P. Smith
Affiliations: Purdue University, NASA Jet Propulsion Laboratory, NASA Goddard Space Flight Center
Microwave observations below 1 GHz enable sensing through dense vegetation with a penetration depth of several decimeters into soil. Retrieval of sub-surface soil moisture profiles from these observations is useful for estimating the Root-Zone Soil Moisture (RZSM), a priority target variable in the NASA ESAS 2017 decadal survey and critical for understanding plant–soil–atmosphere interactions, with applications in monitoring and forecasting agricultural drought, for example. At wavelengths on the scale of 1 m, reflections from any surface are expected to remain coherent up to a higher roughness than for the higher-frequency bands commonly used in spaceborne remote sensing. Ocean scattering in P-band (360 MHz, for example) is predicted to remain coherent up to wind speeds of 7 m/s, approximately the global average. P-band remote sensing from space presents substantial technical challenges: (1) antenna size scales with wavelength to meet diffraction-limited resolution requirements in radiometry; (2) there are limited allocations and protected bands for science at these frequencies, resulting in (3) large amounts of radio frequency interference (RFI), largely from terrestrial communications. Signals of opportunity (SoOp), in which existing spaceborne transmitters (usually for communications) are non-cooperatively used as sources in bistatic radar, have been studied as an alternative approach to P-band remote sensing. SoOp essentially uses potential sources of interference beneficially. The forward scatter configuration results in an order of magnitude higher signal-to-noise ratio as compared to monostatic backscatter, and the surface resolution is determined by the first Fresnel zone (for coherent scatter) or the signal bandwidth (for incoherent scatter), not the antenna beamwidth. Early airborne demonstrations showed the feasibility of P-band (260 MHz) SoOp observations over land. 
This was followed by the first capture of P-band (367.5 MHz) reflected signals from low Earth orbit (LEO) through reprogramming a software-defined radio on a commercial satellite. Although limited data on the orbit and attitude of this satellite were available, this experiment confirmed a coherent reflection with a reflectivity within the range of physically realistic values. SigNals Of Opportunity: P-band Investigation (SNOOPI) was a technology validation mission supported by the NASA In-space Validation of Earth Science Technologies (InVEST) program. Deployed from the International Space Station (ISS) on 18-Apr-2024, SNOOPI performed six (6) data captures before de-orbiting on 26-Sep-2024. The data from these captures are presently being analyzed. The first detection of ocean-reflected P-band (240-270 MHz) signals was confirmed on 6-Jun-2024 off the coast of Brazil, with the observation of 5 channels within this band. Supporting the SNOOPI mission, theory for the single-antenna autocorrelation and a sample-level numerical model have been developed and cross-validated. A ground-based instrument, the Wideband Year-round AmbiguiTy-function Tracking Effective Isotropic Radiated Power (WYATT-EIRP) system, was designed to sample transmissions from the source satellite in both frequency bands (240-270 MHz and 360-380 MHz) and produce radiometrically calibrated estimates of the emitted power from the source. WYATT-EIRP will also be used to map background radiation and potential interference sources. We expect to present new findings from processing SNOOPI data and comparison of these results to model predictions, including coherence time and signal strength. We will also attempt to identify and classify any anomalies (e.g. radio frequency interference (RFI)).
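The first-Fresnel-zone footprint that sets the coherent-scatter resolution mentioned above can be estimated from the reflection geometry. A minimal sketch, assuming a near-specular reflection with the transmitter much farther from the specular point than the receiver; the 370 MHz frequency and 400 km receiver altitude are illustrative values, not SNOOPI's exact parameters:

```python
import math

c = 299_792_458.0  # speed of light [m/s]

def fresnel_zone(freq_hz, rx_height_m, incidence_deg=0.0):
    """Approximate semi-axes [m] of the first Fresnel zone for a
    near-specular surface reflection, with the transmitter assumed much
    farther from the specular point than the receiver."""
    lam = c / freq_hz
    b = math.sqrt(lam * rx_height_m)               # semi-minor axis
    a = b / math.cos(math.radians(incidence_deg))  # stretched along-look
    return a, b

# Illustrative: a 370 MHz P-band signal received at ~400 km altitude
a, b = fresnel_zone(370e6, 400e3, incidence_deg=0.0)
```

At these values the footprint is a few hundred metres across, far finer than the many-kilometre beam-limited footprint a real-aperture P-band radiometer antenna could achieve from orbit.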
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L3)

Presentation: European GaN Technology for High Power VHF Space Radar Systems

Authors: Francisco De Arriba, Lorena Cabria, Reinel Marante, Francesco Manni, Rocco Giofrè, Paolo Colantonio, Antonio Raffo, Gianni Bosi, Giorgio Vannini
Affiliations: TTI Norte, University of Roma Tor Vergata, University of Ferrara
The use of VHF frequencies for spaceborne radars will improve sounding capabilities for Earth subsurface sensing. A VHF radar will enable mapping of the basal topography, ice thickness and internal structure of the ice sheet, detection of the subglacial hydrological system and determination of the melting/freezing regime at the base of ice shelves, but also mapping of aquifers in the Middle East and sub-Saharan Africa. The ability to sound the Earth subsurface on a global scale is well beyond the current technical capabilities based on airborne or ground surveys. The future radar needs a high-power transmitter to allow a good measurement of the echo. A VHF radar would be the first Earth Observation radar working in the 40-50 MHz range and the first radar designed for subsurface sounding. This work describes the design of a high-power solid state power amplifier (SSPA) based on European GaN (Gallium Nitride) technology, aimed at being used in a future spaceborne VHF radar. The technical design challenge of the proposed system lies in delivering an output power of 800 W with an associated efficiency above 70% over the wide operating frequency band, i.e., 40 to 50 MHz. The adopted solution for achieving these outstanding characteristics is based on optimizing the combination scheme, where several high-efficiency HPMs (high-power modules) are coherently combined in a highly compact structure. The design rationale of the proposed solution has been the maximization of the RF-to-DC efficiency, as this is a critical parameter in high-power space applications due to its influence on the electrical design, mass and volume of the spacecraft. 
To achieve this goal, a three-pronged strategy has been followed in this work to increase both peak and average efficiency: the realization of a low-loss combining scheme, the development and realization of highly efficient harmonic-tuned power amplifiers (PAs) based on EU GaN devices, and a specifically designed and optimized Electronic Power Conditioner (EPC) architecture. Peak efficiency is increased by selecting a broadband high-efficiency harmonic-tuned power amplifier topology for the HPMs (Class E, F, and F-1 have been investigated) and by using a wideband low-loss combiner. The average efficiency, on the other hand, is raised by properly adapting the power amplifiers' DC power consumption to the envelope of the radar signal, in such a way that the average efficiency converges to the peak value. This is achieved by means of dedicated PA biasing circuitry specially designed and integrated in the EPC. The selection of European space-qualified 0.5-µm GaN power bars in hermetic packages for the HPMs is also a key aspect in the achievement of good high-power and efficiency figures within the wide frequency band. Successful design of the harmonic-tuned PA has been achieved through an extensive measurement campaign under linear and nonlinear power-bar operation, proving the accurate prediction of the transistor nonlinear model under the operating conditions determined to be most suitable. Considering the low operating frequency and large fractional bandwidth, a great deal of design effort has been devoted to reducing the volume and weight of the final SSPA as much as possible. To this end, a 'smart' compact concept that makes optimal use of the available 3D space has been adopted in the complete SSPA. In this scenario, the novel custom design of the employed division and combination stages has been crucial, where high combination efficiency is achieved with a flat response across the entire frequency band in a constrained occupied space. 
In the search for mass and volume savings, the mechanical design has also optimized the internal module sizes and the unit housing. The VHF SSPA volume developed in the framework of this activity is 390 × 390 × 160 mm³. The SSPA design meets the ECSS standards for operation in a space environment. It is important to highlight the design robustness against multipactor phenomena: all the critical areas analyzed have shown a large margin with respect to the values defined by the ECSS multipactor standard. After determining the thermal loads of the SSPA, mainly those due to the GaN transistors, the thermal analysis of the proposed structure has guaranteed reliable operation of all SSPA components with respect to their maximum derating values.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L3)

Presentation: BARODAR: Surface Air-Pressure Measurements Using Differential Absorption Radar on the Upper Wing of the 60 GHz Oxygen Band

Authors: Dr Emal Rumi, Alessandro Battaglia, Mr Richard Reeves, Mr Brett Candy, Dr Ishuwa Sikaneta
Affiliations: UKRI, STFC, RAL Space, Politecnico di Torino, European Space Agency, UK Met Office, Exeter Devon, United Kingdom
Climate change presents the most critical and pressing challenge of the century. In a warming climate, it is expected that the frequency and intensity of extreme weather events capable of widespread devastation will increase; there is therefore an urgent need for improved weather prediction capabilities. Key to achieving this goal is the timely forecasting of related weather and climate-driving parameters, of which surface pressure is a critically important one. Regular, accurate and global-scale measurement of surface air-pressure presents a substantial technical challenge, which can only be achieved by remote sensing from space. Current Numerical Weather Prediction (NWP) models still feature uncertainties in predicting the track and the centre of tropical storms and hurricanes, calling for more frequent atmospheric observations at higher temporal and spatial resolution on a global scale. Retrieval of atmospheric pressure from satellite observations requires very accurate knowledge of the atmospheric mixing ratio of the spectral feature to be targeted. In practice, molecular oxygen offers the only viable candidate, since its mixing ratio is effectively constant over the full range of atmospheric conditions and altitudes. The BAROmetric Absorption Radar (BARODAR) mission concept relies on the frequency-dependent absorption of mm-wave radar pulses in the upper wing of the 60 GHz oxygen band. The ratio of the transmittance of two channels, one in-band and another off-band, i.e. the differential absorption, is a measure of the total oxygen in the path. Since oxygen is well mixed in the atmosphere, this ratio is related to the total surface pressure. Adding a third channel produces another ratio which further reduces uncertainty. The novel design of the BARODAR instrument is based on three measurements at separate frequencies, characterising the atmosphere both inside and outside of the oxygen absorption band. 
The BARODAR pulsed radar operates in two modes: the first mode measures signal returns from the surface at three different frequencies for surface pressure retrieval; the second mode transmits longer pulses in the out-of-band channel to measure cloud returns. This enables cloud detection, necessary to improve the accuracy of the pressure measurements. The BARODAR instrument will fly with a multi-frequency microwave radiometer to sense water vapour, liquid water, and temperature profiles. This atmospheric correction information is required to derive surface pressure accurately, and provides great value due to its co-location in time and space with the surface pressure measurements. The presentation will introduce the motivation and need for global, regular measurements to increase the accuracy of NWP. A second part of the presentation will detail the principle of the measurement technique and the instrument design.
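The differential-absorption principle described above can be illustrated with a toy retrieval in which the two-way differential oxygen optical depth is taken to be linear in surface pressure. The sensitivity constant k below is hypothetical and not a BARODAR instrument parameter:

```python
import math

def pressure_from_ratio(ratio, k_per_hpa):
    """Invert a toy differential-absorption model: the in-band/off-band
    power ratio is exp(-2 * k * p_surf), where the two-way differential
    oxygen optical depth is assumed linear in surface pressure p_surf [hPa]."""
    return -math.log(ratio) / (2.0 * k_per_hpa)

# Hypothetical sensitivity at the in-band frequency
k = 5e-4                                 # differential optical depth per hPa
p_true = 1013.25                         # standard surface pressure [hPa]
ratio = math.exp(-2.0 * k * p_true)      # simulated in-band/off-band ratio
p_retrieved = pressure_from_ratio(ratio, k)
```

In a real retrieval the measured ratio must additionally be corrected for water vapour, liquid water and temperature effects, which is precisely why the instrument flies together with a multi-frequency microwave radiometer.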
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L3)

Presentation: An Assessment of the Potential of WIVERN Pulse-Pair Observations for Monitoring Ocean Surface Currents

Authors: Louis Marie, Bertrand Chapron, Frédéric Nouguier, Mr Jean-Christophe Angevain, Dr Marcel Kleinherenbrink, Dr Maryam Pourshamsi, Dr Ishuwa Sikaneta, Alessandro Battaglia
Affiliations: Laboratoire d'Océanographie Physique et Spatiale (LOPS), IFREMER, ESA - ESTEC, Politecnico di Torino / DIATI
WIVERN, the WInd VElocity Radar Nephoscope - a mission dedicated to the observation of in-cloud winds and the micro- and macro-physical properties of clouds and precipitation - has been recommended to proceed to Phase A as a candidate for ESA’s 11th Earth Explorer (EE11). The WIVERN mission will operate a conically scanning W-band coherent radar implementing the "Polarization Diversity Pulse-Pair" (PDPP) technique (Pazmany et al., 1999) to provide Doppler measurements of in-cloud wind. This design is reminiscent of concepts proposed within the framework of two other missions dedicated to the study of the ocean "Total Surface Current Vector" (TSCV): SKIM, which was a candidate for ESA’s EE9, and ODYSEA, a NASA/JPL concept currently under evaluation as part of the NASA Earth System Explorers program (Rodriguez, 2019; Torres et al., 2023). In this work, we propose an assessment of the usability of WIVERN W-band PDPP observations for monitoring the ocean TSCV. Our presentation will include: an overall error budget for WIVERN PDPP current retrieval, outlining the impacts of geometric mispointing and of the more specific radar PDPP phase estimation error; a derivation of the statistical dispersion of the PDPP phase estimate based on instrument characteristics; a description of the dependence of the error statistics on the interpulse time lag, acquisition geometry and sea state effects; a discussion of the implications for swath width and the error statistics anticipated for WIVERN ocean TSCV measurements; and, finally, a discussion of the opportunities offered by the WIVERN design to enable direct Doppler TSCV measurements from space (i.e., without relying on geostrophic assumptions) at a global scale.
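The pulse-pair Doppler estimate underlying the PDPP technique derives radial velocity from the phase of the lag-one autocorrelation of the received complex samples. A minimal single-polarization sketch: the frequency and interpulse time below are illustrative values, and WIVERN's polarization-diversity scheme, which shortens the effective pulse separation, is not modelled here:

```python
import cmath
import math

c = 299_792_458.0  # speed of light [m/s]

def pulse_pair_velocity(samples, prt_s, freq_hz):
    """Pulse-pair Doppler estimate: the argument of the lag-1
    autocorrelation of complex radar samples gives the mean phase shift
    per pulse-repetition time (PRT), hence the radial velocity."""
    r1 = sum(s2 * s1.conjugate() for s1, s2 in zip(samples, samples[1:]))
    lam = c / freq_hz
    # v = lam * phase / (4 * pi * PRT); sign convention: positive = receding
    return lam * cmath.phase(r1) / (4.0 * math.pi * prt_s)

# Toy check: synthesize echoes from a 5 m/s target at 94 GHz (W-band)
# with a 20 microsecond interpulse time (illustrative, not WIVERN's values)
f0, prt, v_true = 94e9, 20e-6, 5.0
lam = c / f0
samples = [cmath.exp(1j * 4 * math.pi * v_true * n * prt / lam) for n in range(16)]
v_est = pulse_pair_velocity(samples, prt, f0)
```

The estimate aliases once the per-PRT phase exceeds ±π, which is why shortening the effective pulse separation (as polarization diversity allows) widens the unambiguous velocity interval at W-band.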
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall L3)

Presentation: L-band Internal Calibration System for Wide-Band Radiometers

Authors: Joan Capdevila, Adrián Castro, Roger Vilaseca
Affiliations: Sener Tafs S.a.u.
On-board calibration of Earth Observation radiometers is essential in order to provide accurate and stable brightness temperatures of the scene under observation. Hot and cold reference loads are used to perform such calibration. Hot loads are typically implemented using a matched load at the instrument temperature. Cold sky references are used in several remote sensing radiometers, although additional maneuvers might be required to change the antenna pointing, with drawbacks such as additional calibration time, thermal stability issues and extra power consumption. Active Cold Loads (ACLs) have been proposed for different radiometers in order to allow more flexibility and less complexity in on-board calibration procedures. Several missions have been proposed by NASA (e.g. UWBRAD) and ESA (e.g. CryoRad) to operate a radiometer from 400 MHz up to 2000 MHz. Such a broadband radiometer allows the retrieval of many more geophysical properties of polar ice sheets, using special techniques for RFI mitigation. This paper presents the design of an Internal Calibration System (ICS) for an L-band radiometer operating from 400 MHz up to 2000 MHz. The main elements of the ICS are an Active Cold Load (noise temperature ranging from 90 to 100 K) and a calibration switch which allows selecting the antenna (scene under observation), the cold target (ACL) or the hot target (matched load at ambient temperature). A detailed survey of existing solutions and technologies has been performed and a summary is included in this paper. The characterization of the noise temperature generated by an ACL requires a special test bed, given that there is no commercial off-the-shelf hardware to perform such measurements. 
This paper also presents a Noise Temperature Measurement System (NTMS) based on two calibration targets and a Total Power Radiometer, consisting of an analog front-end plus a digital back-end, which allows measuring the ACL noise temperature over the whole bandwidth with a frequency resolution better than 25 MHz. Temperature characterization and Long-Term Stability Tests (LTST) of the ACL using the SENER NTMS are presented. This paper has been written in the frame of ESA contract ITT AO/1-11552/23/NL/VA.
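The hot/cold two-point calibration that the ICS supports can be sketched as follows, assuming a linear (total-power) detector response; the detector voltages and load temperatures below are illustrative values only:

```python
def two_point_calibration(v_hot, v_cold, t_hot, t_cold):
    """Return a function mapping detector output v to scene brightness
    temperature [K], from hot/cold reference measurements, assuming a
    linear radiometer transfer function v = gain * T + offset."""
    gain = (v_hot - v_cold) / (t_hot - t_cold)  # detector counts per kelvin
    offset = v_hot - gain * t_hot
    return lambda v: (v - offset) / gain

# Illustrative numbers: ambient matched load at 293 K, ACL at 95 K
calibrate = two_point_calibration(v_hot=2.930, v_cold=0.950,
                                  t_hot=293.0, t_cold=95.0)
t_scene = calibrate(1.500)  # scene brightness temperature [K]
```

A wide spread between the two reference temperatures improves the conditioning of the gain estimate, which is one motivation for an ACL in the 90-100 K range rather than a second ambient load.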
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall E1)

Session: A.10.02 Geodetic satellite missions and their applications - PART 2

Space-based geodetic techniques provide extreme-accuracy observations of a vast range of Earth system parameters, such as the gravitational field, the shape of the Earth, the reference frame, as well as navigation and positioning. The past 20 years have seen a rapid development in the field, including highly successful endeavours such as ESA's GOCE mission, and the GRACE and GRACE-FO Third-Party Missions. As a direct consequence, the research and application community has developed a vast range of ideas building on the achievements of this golden age of space geodesy. This session deals with state-of-the-art results from the current set of geodetic satellite missions (e.g. GOCE, Swarm, GRACE, GRACE-FO), as well as plans and ideas for new initiatives in the field. This session invites contributions related to applications of geodetic satellite observations including, among others, gravimetry, altimetry, GNSS/VLBI, radio occultation, InSAR and GNSS reflectometry techniques. Applications of these missions go beyond the traditional geodetic science, including climate, hydrology, solid Earth, cryosphere, oceanography and thermosphere science. Contributions relevant to the above-mentioned scientific disciplines, as well as operational applications from geodetic satellite missions are also invited.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall E1)

Presentation: Potential and Limitations of DORIS-based Ionospheric Corrections for Satellite Altimetry

Authors: Ke Zhang, Denise Dettmering, Michael Schmidt
Affiliations: Deutsches Geodätisches Forschungsinstitut der Technischen Universität München (DGFI-TUM)
Satellite altimetry measurements need to be corrected for ionospheric delays in order to ensure precise water level determination. For single-frequency systems and for inland and coastal applications, this correction has to be derived from external information. In case no dual-frequency altimeter observations are available, the most common correction models are the GNSS-based global ionospheric maps (GIM) from the International GNSS Service (IGS). These are computed from terrestrial GNSS dual-frequency measurements and reach their best performance over the continents. Over the oceans, they are less accurate due to the sparse distribution of permanent stations. However, all of the currently active altimeter missions are also equipped with DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite), mainly for precise orbit determination. DORIS is a geodetic space technique using two microwave frequencies that can also be combined to estimate the ionospheric delay. Even though it is operated on the same platforms, DORIS is currently not used to correct single-frequency altimeter measurements, mainly because early studies indicated lower accuracies than other correction models. However, in recent years, significant progress has been made on the DORIS system and new generations of beacons and receivers have been developed. In view of this, in this contribution we investigate the current potential of DORIS-based ionospheric information for application as a correction to single-frequency altimetry observations. In the study, data from the Jason-3 mission will be investigated, as its dual-frequency altimeter observations can be used for validation purposes. Moreover, a long time series is available covering different periods of solar activity. It will be shown that, for quiet ionospheric conditions, DORIS corrections outperform GIM-based corrections over the oceans, whereas in active conditions GIM is superior to DORIS in terms of accuracy and coverage.
Thus, a combination of both measurement technologies seems to be the most promising solution for a precise and accurate ionospheric correction with full global coverage, if dual-frequency altimeter corrections are not available.
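Whatever its source (GIM, DORIS or dual-frequency altimetry), the correction rests on the standard first-order dispersive relation, range delay ≈ 40.3·TEC/f². A minimal sketch (the 10 TECU value is illustrative; 13.575 GHz is the Ku-band frequency of Jason-class altimeters):

```python
def iono_range_delay(tec_tecu, freq_hz):
    """First-order ionospheric group delay in metres for a signal of
    frequency freq_hz through a slant TEC of tec_tecu TEC units
    (1 TECU = 1e16 electrons/m^2): delay = 40.3 * TEC / f^2."""
    return 40.3 * tec_tecu * 1e16 / freq_hz**2

# Ku-band altimeter (13.575 GHz) through 10 TECU: a ~2 cm range delay,
# which is why the correction matters for cm-level water levels.
delay_m = iono_range_delay(10.0, 13.575e9)
```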
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall E1)

Presentation: SWOT-Derived Geometric Marine Geoid Surface Reveals Detailed Gravimetric Geoid Modelling Deficiencies

Authors: MSc Aleksei Kupavoh, Dr Artu Ellmann, Dr Nicole Delpeche-Ellmann, Dr Sander Varbla, Luciana Fenoglio
Affiliations: Department of Civil Engineering and Architecture, Tallinn University of Technology, Department of Cybernetics, Tallinn University of Technology, Institute of Geodesy and Geoinformation, University of Bonn
This study explores the potential of utilizing Surface Water and Ocean Topography (SWOT) altimetry mission sea surface height (SSH) measurements in synergy with dynamic topography (DT) obtained from a hydrodynamic model (HDM) and tide gauges (TG) to derive the geodetic geoid (i.e. geoid = SSH − DT). Such a method not only improves on existing geoid models but can also potentially derive new geoid models in areas where existing models are of low accuracy and quality. The method is examined in the eastern section of the Baltic Sea, where accurate geoid models are established, but also in areas bounding the Russian sea border, where a lack of in-situ gravity measurements has been suggested to cause deficiencies in the models. The methodology first involves obtaining the SWOT-derived geoidal heights, which are then compared with marine geoid models of the Baltic Sea (e.g. NKG2015, BSCD2000) as well as with geoidal heights obtained from precise aerial laser scanning and shipborne GNSS measurements. Utilising KaRIn data (L2_LR_SSH, Basic, 2x2 km grid), we were able to identify and quantify errors in a gravity data void area, revealing discrepancies of a few decimetres in existing geoid models. Additional validation data, such as geometric geoidal heights derived from precise aerial laser scanning and shipborne GNSS measurements, provided further confirmation of the accuracy of the SWOT-derived surface. In terms of standard deviation, the SWOT-derived geoid may compete with the geoid models, at around 3 cm. Results show that the KaRIn altimeter allows the marine geoid surface to be determined with sufficient (5 cm or better) accuracy. Further exploration and refinement of the method, along with new SSH versions and additional KaRIn data, may lead to overall improvement of accuracy and elimination of biases.
Such an exploratory use of KaRIn observations creates new opportunities for geometric geoid determination, which is especially appealing for countries where it can benefit marine-related activities such as climate research, the offshore industry or unmanned navigation.
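The core relation of the method, geoid = SSH − DT, can be sketched numerically. All numbers below are synthetic and only illustrate how the standard deviation against a gravimetric geoid model would be computed:

```python
import numpy as np

def geometric_geoid(ssh, dyn_topo):
    """Geometric geoid heights from geoid = SSH - DT (SSH from
    SWOT/KaRIn, DT from a hydrodynamic model and tide gauges).
    Inputs are arrays on a common grid, in metres."""
    return np.asarray(ssh) - np.asarray(dyn_topo)

ssh   = np.array([20.40, 20.55, 20.31, 20.48])   # m, ellipsoidal SSH
dt    = np.array([ 0.35,  0.48,  0.28,  0.42])   # m, dynamic topography
model = np.array([20.05, 20.06, 20.02, 20.07])   # m, gravimetric geoid
diff = geometric_geoid(ssh, dt) - model
std_cm = 100.0 * diff.std(ddof=1)                # spread in centimetres
```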
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall E1)

Presentation: The DTU25 Marine Gravity field enhanced from SWOT

Authors: Ole Andersen, Bjarke Nilsson, Per Knudsen
Affiliations: DTU Space
High-resolution global free-air gravity fields have been consistently developed with the increasing number of geodetic missions since the launch of ERS-1 and GEOSAT more than 30 years ago. With the development of nadir altimeters, particularly CryoSat-2 and SARAL/AltiKa, significant improvements have been gained over the last 10 years. However, an inherent limitation of conventional nadir-oriented satellite altimeters is a preference for north-south along-track observations. Secondly, conventional altimetry is limited to resolving wavelengths longer than 20 km due to the altimeter's beam width. With SWOT swath altimetry, small-scale two-dimensional features in the ocean surface are observable from just a single pass. Including SWOT interferometric altimeter data in the next-generation global marine gravity field is expected to improve the spatial resolution to better than 10 km. Today, around 1.5 years of SWOT data are available. This is much shorter than the 30 years of conventional satellite altimetry used to produce high-resolution marine gravity fields. However, due to the great coverage and precision of SWOT data, marine gravity resolved by SWOT is already surpassing the marine gravity field from nadir altimetry. A challenge faced when utilizing SWOT data is distinguishing between small-scale oceanographic features and altimetric noise. Stacking the data is essential for reducing the impact of dynamic oceanographic features and random altimetric noise. Understanding small-scale effects in the open ocean, but especially in coastal and Arctic regions, enables better utilization of the high-resolution SWOT data and supports the development of reliable mean sea surfaces in these more challenging areas. Despite huge improvements in global marine gravity field modeling using SWOT, we already see differences in SWOT coverage and degradation of the marine gravity field between the two swaths and in the diamond gaps at low latitudes.
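The stacking step described above can be illustrated with a toy example: a static geoid-like signal is observed by repeated noisy passes, and averaging suppresses time-variable ocean signal and random noise roughly as 1/√N (all numbers synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 2 * np.pi, 200))        # static "gravity" profile
# Repeated SWOT passes over the same profile: the geoid-like signal is
# fixed, while ocean variability and instrument noise change per pass.
passes = truth + 0.5 * rng.standard_normal((30, 200))
stacked = passes.mean(axis=0)                         # stack = average passes
rms_single = np.sqrt(np.mean((passes[0] - truth) ** 2))
rms_stack = np.sqrt(np.mean((stacked - truth) ** 2))
```

With 30 passes the noise RMS drops by roughly √30 ≈ 5.5, which is one reason ~1.5 years of SWOT data can already compete with decades of nadir altimetry.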
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall E1)

Presentation: From Volcano Monitoring to Volcanic Alerts: Using Complex Dynamical Models to Train Deep Learning Networks for Forecasting Deformation Scenarios Using InSAR

Authors: Camila Novoa Lizama, Professor Andrew Hooper, Matthew Gaddes, Gopal Phartiyal, Shailza Sharma, Milan Lazecky
Affiliations: University of Leeds
Over the past three decades, InSAR has revolutionized volcano monitoring, offering unprecedented insights into the diverse behaviors of volcanoes worldwide. Spatial deformation patterns observed in interferograms reveal variations in magmatic reservoir geometries, while time series data trace magma dynamics, illuminating complex subsurface processes. Despite these advances, the variability in volcanic behavior—both between different volcanoes and within the same volcano under different eruption triggers—presents challenges for global predictive models. This study addresses these challenges through a global-scale investigation using Sentinel-1 data to compute a decade-long InSAR time series for over 1,000 volcanoes, complemented by numerical simulations to train deep learning networks for forecasting deformation. Focusing on three volcanic complexes in Chile, we explore the physical models that provide the basis for realistic simulations of volcanic deformation used for deep learning training. At Laguna del Maule, we analyze 15 years of significant inflation (exceeding 3.5 meters) detected by ALOS-1, ALOS-2, and Sentinel-1, with no eruption to date. At Puyehue-Cordón Caulle, we document pre-eruptive inflation (~10 cm observed by ALOS-1), extensive subsidence (>2 meters during the eruption tracked by ENVISAT), and ongoing post-eruptive deformation monitored by Sentinel-1. At Nevados de Chillán, we examine a six-year eruption characterized by an explosive stage with no clear deformation, followed by inflation during the effusive phase. These observations highlight the complexity of magmatic systems, which cannot be explained using simple models. We describe advanced dynamic models that incorporate viscoelastic and poroelastic rheologies to account for the behavior of rocks at high temperatures and the presence of fluids, as well as the effects of these properties on the arrival of new magma in the upper crust and reservoir interactions during inflation episodes. 
These models enable the deep learning network to learn from realistic deformation scenarios, offering robust forecasts of eruptive behavior for any volcano under study. This work highlights the transformative potential of combining InSAR observations with dynamical modeling and deep learning to address the challenges of forecasting volcanic deformation. By bridging monitoring and alert systems, it opens the way to a global framework for forecasting volcanic hazards, helping to make communities around the world safer and more informed.
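The dynamic viscoelastic and poroelastic models used here go well beyond simple analytic sources, but a minimal elastic forward model often used to generate synthetic surface deformation is the Mogi point source, uz(r) = (1−ν)/π · ΔV · d / (d² + r²)^(3/2). A sketch with illustrative parameters (not the values inferred for the Chilean volcanoes):

```python
import numpy as np

def mogi_uplift(r, depth, dvol, nu=0.25):
    """Vertical surface displacement (m) of a Mogi point pressure
    source at depth `depth` (m) with volume change `dvol` (m^3),
    evaluated at radial distances r (m) from the source axis."""
    return (1.0 - nu) / np.pi * dvol * depth / (depth**2 + r**2) ** 1.5

# One synthetic training sample: a reservoir 5 km deep gaining 1e7 m^3
# produces a bell-shaped uplift pattern of roughly 10 cm at the centre.
r = np.linspace(0.0, 20e3, 100)
uz = mogi_uplift(r, depth=5e3, dvol=1e7)
```

Sampling such forward models over ranges of depth, volume change and rheology is one way to build the "realistic deformation scenarios" a network can be trained on.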
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall E1)

Presentation: High-Temporal Resolution Imaging of Geohazards with Hydroterra+

Authors: Tim J Wright, Dr Cristian Rossi
Affiliations: COMET, School of Earth and Environment, University Of Leeds, ESTEC, European Space Agency
The ~weekly revisit time available from Low Earth Orbiting SAR missions remains a fundamental barrier to using the data to respond rapidly to events such as earthquakes and to monitor evolving hazardous processes such as volcanic eruptions (Biggs and Wright, 2020). The Earth Explorer 12 Candidate Mission, Hydroterra+, will be in a geosynchronous orbit and is capable of imaging ground movement with sub-daily sampling. In this presentation, I will review the proposed capabilities of Hydroterra+ for studying tectonic, volcanic, and landslide processes. In volcanology, Hydroterra+ will be able to constrain critical processes that occur as magma rises towards an eruptive event and has the potential to be a key dataset that enables reliable predictions of the onset time of eruptions or the time of transitions in behavior from effusive to explosive activity. Change mapping data from amplitude and coherence could be assimilated in near real-time to help forecast the route taken by lava flows. In tectonics, Hydroterra+ will help identify if there are any detectable precursory ground movements in the hours, days, or weeks before major earthquakes. Following an earthquake, the data could be used to reliably map earthquake damage within a few hours of the event, providing vital data to emergency response teams. The data would also fill a crucial gap, constraining the very earliest phases of postseismic deformation, which are critical for understanding the frictional properties of faults. For landslides, short-revisit imaging would help maintain coherence for rapidly deforming or accelerating slope instabilities and could help in forecasting slope failures. There are inevitably trade-offs when designing a mission such as Hydroterra+. Notably, the coverage is not global, and the temporal resolution (integration time) trades off with the spatial resolution, such that short integration time images have relatively low spatial resolution. 
In addition, if the ground is moving during the acquisition time, this will lead to distortions in the image. However, even with these limitations, the high temporal sampling offered by Hydroterra+ will be an exciting and unique offering that reveals new processes hidden from current observational systems. In the future, a suite of geosynchronous SARs could monitor our hazardous planet, complementing the data available from low Earth orbit.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall E1)

Presentation: Multiple geodetic satellite ranging systems detected extensive equatorial plasma depletions after the 2022 Tongan volcanic eruption and revealed broad-scale uplift of ionosphere by dynamo electric field

Authors: Dr. Shin-chan Han, Hyosub Kil, Dr Richard Ray, Frank Lemoine, Colin Waters
Affiliations: University of Newcastle, Applied Physics Laboratory, The Johns Hopkins University, NASA Goddard Space Flight Center
We investigated the ionospheric disturbances in the equatorial region caused by the Hunga-Tonga Hunga-Ha'apai (HTHH) volcanic eruption on 15 January 2022 by examining the data from four different geodetic ranging systems: between GPS and CubeSat satellites (L1/L2 bands), between two GRACE Follow-On satellites (K/Ka bands), between altimeter satellites and the ocean surface (C/Ku bands) and between the ground beacon stations and DORIS-equipped satellites (S/VHF bands). These dual-frequency satellite systems were used to measure electron density changes integrated along ray paths in line of sight from transmitters to receivers. Five types of total electron content (TEC) measurements were taken (1) at high altitude above CubeSat satellite altitude of ~550 km, (2) at low altitude below ~550 km, (3) at GRACE Follow-On satellite altitude of ~480 km, (4) for the entire vertical column of ionosphere below the altimeter satellite altitude of 800 km, and (5) in line-of-sight between the ground and satellites above top of the ionosphere. While the CubeSats, altimeter and DORIS satellites sample TEC variations over a wide range of altitude, the GRACE Follow-On satellites sample horizontal TEC variation along the orbit at a fixed altitude. Their observations revealed the development of anomalously large trough-like plasma depletions, along with plasma bubbles, in the equatorial regions of the Pacific and East Asian sectors. Trough-like plasma depletions appeared to be confined within approximately ±20° magnetic latitude, accompanied by density enhancements just outside this latitude range. These plasma depletions and enhancements were aligned with the magnetic equator and occurred across broad longitudes. On the basis of our observations, the conditions for the development of the depletions are (1) inside the boundary of the Lamb waves produced by the volcanic eruption and (2) close to the sunset terminator when the Lamb waves pass the region. 
These conditions facilitate the generation of dynamo electric fields, by which the ionosphere can be lifted to high altitudes, resulting in plasma depletion in the equatorial region. However, the dynamical coupling of the ionosphere with other waves such as secondary gravity waves cannot be ruled out. The size and magnitude of the depletion caused by the HTHH volcanic eruption were anomalously large; for example, at the satellite altitude of 500 km, a reduction of electrons by more than 1,000 times was recorded by the GRACE Follow-On K/Ka band ranging system over extensive areas of ~3,000 km around the geomagnetic equator. Such magnitudes of depletion are unusual phenomena, indicating that Earth's hazardous surface events can be a source of severe ionospheric disturbances. We interpret these phenomena in terms of the E dynamo electric fields driven by atmospheric waves from the eruption. The uplift of the ionosphere beyond satellite altitudes, followed by subsequent plasma diffusion to higher latitudes along magnetic field lines, results in the formation of trough-like plasma depletions around the magnetic equator and density enhancement at higher latitudes. The detection of plasma bubbles in the Asian sector during the non-bubble season (January) is likely associated with the uplift of the ionosphere at the sunset terminator. This study demonstrated the use of geodetic (non-space-weather-dedicated) satellite constellations for capturing ionospheric perturbations induced by ground events. These geodetic satellite systems can be cohesively operated and analyzed to support space weather monitoring and prediction and, possibly, activity concerning navigation, communications, and surveillance systems that may be affected by extreme event irregularities in the ionosphere.
All four geodetic techniques (GNSS high-low tracking, radio occultation, low-low intersatellite tracking, and Doppler ground beacon) and an increasing number of suitable satellites will produce new opportunities to monitor the ionosphere continuously and to study atmosphere-surface coupled processes, including atmospheric and gravity waves caused by earthquakes, tsunamis, extreme weather, and volcanic eruptions.
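The dual-frequency principle shared by all of these ranging systems can be sketched by inverting the first-order dispersive delay (40.3·TEC/f²) from two simultaneous ranges. A round-trip illustration with GPS L1/L2 frequencies (the geometric range and the 30 TECU value are invented):

```python
def tec_from_dual_frequency(r1, r2, f1, f2):
    """Slant TEC (electrons/m^2) from dual-frequency group ranges.
    r1, r2 are measured ranges (m) at frequencies f1 > f2 (Hz); the
    dispersive ionosphere delays the lower frequency more, and the
    range difference isolates the TEC along the ray path."""
    return (r2 - r1) * f1**2 * f2**2 / (40.3 * (f1**2 - f2**2))

# Round-trip check: inject 30 TECU into synthetic ranges, recover it.
f1, f2 = 1575.42e6, 1227.60e6     # GPS L1 / L2
tec_true = 30 * 1e16              # 30 TECU in electrons/m^2
rho = 2.2e7                       # geometric range, m (illustrative)
r1 = rho + 40.3 * tec_true / f1**2
r2 = rho + 40.3 * tec_true / f2**2
tec_est = tec_from_dual_frequency(r1, r2, f1, f2)
```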
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G1)

Session: D.02.04 Machine Learning for Earth System Observation and Prediction - PART 1

Earth, its weather and climate constitute a complex system whose monitoring and modelling has undergone remarkable progress in recent years. On one hand, enhanced spaceborne observations and the development and availability of novel in-situ sensor data have provided unprecedented information about our planet's systems. On the other hand, the integration of AI and big data analytics has opened new frontiers in how we approach the challenges of weather and climate modelling. Altogether, these elements are key drivers of innovation in Earth System Observation and Prediction (ESOP) for Weather and Climate.

Machine/Deep Learning (ML/DL) techniques have revolutionized numerous fields and have proven to be particularly advantageous in various applications such as image recognition, traffic prediction, self-driving vehicles, and medical diagnosis. These techniques have garnered significant attention and adoption within the Earth System Observation and Prediction (ESOP) community due to their ability to enhance our understanding and prediction capabilities of the Earth's complex dynamics. One prominent area where ML/DL techniques have proven invaluable is in the development of high fidelity digital models of the Earth on a global scale. These models serve as comprehensive monitoring, simulation, and prediction systems that enable us to analyse and forecast the intricate interactions between natural phenomena and human activities. By providing a holistic understanding of the Earth's dynamics, these models contribute to the achievement of the European Commission's Green Deal and Digital Strategy goals towards a green & digital transition.

ML/DL solutions also showcased promising advancements in data assimilation, weather forecasting and climate prediction. Algorithms can be trained to identify instances where physical models may exhibit inaccuracies and subsequently learn to correct their predictions accordingly. Moreover, AI-based models have the potential to create hybrid assimilation and forecasting models that combine the strengths of traditional, physics-based methodologies with the capabilities of ML/DL, ultimately enhancing the accuracy and reliability of predictions.

The aim of this session is to invite new ML4ESOP explorers to present their latest innovations in ESOP. A specific focus is placed on the exploration of new data sources and benchmarks for weather and climate modelling, the adaptation of large-scale data-driven Earth system models, as well as novel demonstrations of their applicability to weather and climate observation and prediction. This session invites experts from diverse fields to discuss how recent advances innovate on established ESOP approaches, to address current challenges, and to identify opportunities for future work.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G1)

Presentation: Hydrological Surface Classification Using Multiple Sensors and Exogeneous Data

Authors: Dr François De Vieilleville, Mr Pierre-Jean COQUARD, Mr Guillaume EYNARD-BONTEMPS, Mr Stéphane MAY, Dr Dawa Derksen
Affiliations: AGENIUM Space, CNES
We addressed the challenge of classifying hydrological surfaces - such as water, turbid water, salt pan, snow, and ice - using multiple remote sensing datasets. Traditional approaches relying only on optical data (e.g., Sentinel-2) face limitations in cloud-covered areas and often confuse similar surface types (snow & turbid water, ice & salt pan, water & cloud or mountain shadows). To overcome this, the study explores the use of deep learning methods, particularly Convolutional Neural Networks that can integrate spatial context, trained with multiple data sources such as SAR (e.g., Sentinel-1), optical imagery, and exogenous inputs (weather, elevation, …). By developing a flexible classification chain capable of leveraging these diverse inputs, the study improves classification robustness and reliability, providing valuable outputs for glaciology, climatology, and hydrology. Deep Neural Networks are well suited for texture extraction in remote sensing imagery and can efficiently handle inputs with multiple spectral bands. However, processing data from various sensor modalities introduces the challenge of aligning these inputs within a shared feature space where correlations can be effectively captured. To address this challenge, we developed a classical encoder-decoder architecture and explored the use of multiple encoders feeding into a single shared decoder. Two encoder families – EfficientNet and Swin Transformer – and two decoders – UNET and FPN – alongside various fusion methods were evaluated and showed similar performance. For this study, a global multimodal database was gathered using open-source data from the Copernicus programme. Initial experiments showed that a large volume of data was needed to achieve meaningful results. More precisely, initial trials with 17 labelled scenes (50 GB) showed poor generalisation capabilities, leading to the extension of the dataset to 57 different scenes worldwide.
Additional products were integrated into the initial labelled Sentinel-2 database, including Sentinel-1 (GRD VV+VH) data, 30 m digital elevation models (ASTER GDEM), and meteorological data (from ECMWF), to build the final 350 GB database. Segmentation masks were generated semi-automatically (using a first version of our DL network) and then refined through visual inspection of Sentinel-2 images. Results showed improved classification performance for all target classes when elevation data was included, and a dedicated dual-encoder-decoder model architecture proved particularly effective. On the other hand, the integration of Sentinel-1 SAR data did not improve performance, likely due to the low temporal correlation between Sentinel-1 and Sentinel-2 acquisitions (3-day average separation). Similarly, adding meteorological information did not enhance results, as our experiments showed that the model consistently disregarded scalar inputs regardless of the integration approach. Our model demonstrated notable robustness on the global database and was compared to existing CNES classification chains, including SurfWater (surface water detection) and Let-It-Snow (snow and cloud segmentation in European mountains). Classification performance was comparable to SurfWater, though snow classification showed limitations in comparison to Let-It-Snow, particularly in the French Pyrenees. The findings from this study underscore the potential of a multimodal approach for improving hydrological surface classification, particularly by incorporating data such as elevation. This study opens up the field to further research, paving the way toward the development of a single, globally applicable model capable of classifying diverse hydrological surfaces with consistency and accuracy. Future work could focus on increasing the volume of labelled data used to train the network to further enhance the model's global applicability and precision across varied geographic and climatic conditions.
Additionally, to fully leverage SAR imagery, reworking the database with more precise, directly annotated products would be essential. Finally, other approaches should be explored to take meteorological data into account, for example using seasonality or more complex inputs. Other exogenous data, such as terrain shadows, could also be added.
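The multi-encoder/shared-decoder idea can be sketched at the fusion step: feature maps from separate encoders (e.g. optical and elevation branches) are merged channel-wise before a single decoder. A schematic NumPy illustration with hypothetical shapes and names (the actual chains use EfficientNet/Swin encoders with UNET/FPN decoders):

```python
import numpy as np

def fuse_features(f_optical, f_dem, mode="concat"):
    """Fuse per-pixel feature maps of shape (C, H, W) from two
    encoders before a shared decoder: 'concat' stacks channels,
    'sum' adds them (and requires equal channel counts)."""
    if mode == "concat":
        return np.concatenate([f_optical, f_dem], axis=0)
    if mode == "sum":
        return f_optical + f_dem
    raise ValueError(f"unknown fusion mode: {mode}")

f_opt = np.ones((16, 8, 8))   # 16-channel optical feature map
f_dem = np.ones((4, 8, 8))    # 4-channel elevation feature map
fused = fuse_features(f_opt, f_dem)
```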
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G1)

Presentation: Correction of NWP ocean forcing biases with machine learning and scatterometer data

Authors: Ms. Evgeniia Makarova, Dr. Marcos Portabella, Ad Stoffelen
Affiliations: Barcelona Expert Centre (BEC), Institute of Marine Sciences (ICM-CSIC), Royal Netherlands Meteorological Institute (KNMI)
Global Numerical Weather Prediction (NWP) model sea-surface wind output is commonly used to force ocean models due to its temporal and spatial continuity. However, these outputs exhibit local biases, with one of the most persistent and systematic biases occurring in sea surface wind direction. Scatterometer-derived sea surface winds have been assimilated into the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS) for more than two decades. Scatterometers are radar instruments that measure sea surface roughness, corresponding to microwave energy scattered by ocean surface gravity-capillary waves, which are induced by the local wind. Despite the assimilation of scatterometer data into global models like the fifth ECMWF reanalysis (ERA5), validation of NWP forecasts against scatterometer observations reveals persistent biases. A previous approach to reduce these biases corrects the forecasts using the averaged accumulated differences between scatterometer observations and NWP outputs over a defined time window (ranging from a few days to weeks) centered at each forecast time, i.e., the so-called scatterometer corrections. Applied to ERA5, this method demonstrates a 3–9% reduction in error variance globally when validated against independent scatterometers (i.e., those not used in generating the corrections). However, the corrected reanalysis dataset (ERA*) exhibits quality degradation during periods of reduced scatterometer availability (data gaps). In the operational framework, the performance is even lower, since the time window, centered for reanalysis, must be shifted to rely only on past observations. To address these limitations, we propose a Machine Learning (ML)-based approach to predict these biases using other atmospheric and oceanic NWP variables as inputs, thereby requiring observational data only during training.
The model parameters used as inputs for the ML model include ERA5 10-m stress-equivalent zonal and meridional wind components (U10S), as well as wind speed and direction, mean sea level pressure, 2-m surface temperature, specific humidity and sea surface temperature. Other inputs include zonal and meridional surface current components taken from the GLOBCURRENT reanalysis dataset retrieved from the Copernicus Marine Service (CMEMS). The ML models are trained to replicate the differences between the scatterometer observational data and ERA5 stress-equivalent wind forecasts. The chosen reference scatterometer for training is the C-band Advanced Scatterometer (ASCAT) on board the MetOp satellite series, which delivers high-quality wind measurements with reduced sensitivity to precipitation compared to Ku-band scatterometers, while also offering higher spatial resolution. Various ML and deep learning (DL) architectures are evaluated, including eXtreme Gradient Boosting (XGBoost), fully connected neural networks, and convolutional neural networks. The predicted corrections, applied to the ERA5 reanalysis outputs, show a reduction in error variance of up to 10-15% on a global test subset when compared to the ASCAT scatterometer, and up to 6-7% against independent Ku-band scatterometers, such as those onboard the Chinese HY-2B and HY-2C, highlighting the effectiveness of ML-predicted corrections in mitigating local biases. The best performance is observed in extra-tropical regions, with error variance reductions of up to 18%. Notably, the ML approach enables corrections over extended data series, including periods without scatterometer observations. Moreover, further fine-tuning of the ML models with operational ECMWF IFS data allows their application to reduce biases in an operational framework.
This capability demonstrates the flexibility of the ML/DL approach in both reanalysis and operational contexts, providing a scalable and adaptive solution for bias correction in NWP sea surface wind forecasts.
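The error-variance figures quoted above correspond to the reduction metric 1 − var(obs − corrected)/var(obs − raw). A synthetic NumPy illustration (the state-dependent bias and noise levels are invented, and the "predictor" here is an idealized stand-in for the ML model):

```python
import numpy as np

def error_variance_reduction(obs, raw, corrected):
    """Fractional reduction in error variance against observations:
    1 - var(obs - corrected) / var(obs - raw)."""
    return 1.0 - np.var(obs - corrected) / np.var(obs - raw)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 1000)
obs = rng.standard_normal(1000)              # "scatterometer" winds
bias = 0.5 * np.sin(x)                       # state-dependent NWP bias
raw = obs + bias + 0.2 * rng.standard_normal(1000)   # biased forecast
corrected = raw - bias                       # an ideal bias predictor
evr = error_variance_reduction(obs, raw, corrected)
```

Note that removing only a constant offset would leave the variance unchanged; the metric rewards corrections that capture state-dependent structure, which is what the ML models are trained to predict.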
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall G1)

Presentation: Advancing ecosystem modeling: integrating remote sensing and generative AI for predicting climate response

Authors: O. Enrique Apolo Apolo, Dr Ben Somers, Stef Lhermitte
Affiliations: Division Forest, Nature and Landscape, University of Leuven (KU Leuven), Celestijnenlaan 200E-2411, BE-3001, Leuven, Belgium
Predicting how ecosystems respond to climate extremes requires robust models that capture the spatiotemporal dynamics of ecosystems within vegetation models, along with their relationships to environmental factors and macro- or microclimatic conditions. However, the ability to simulate reliable scenarios and perform sensitivity analyses of ecosystem responses to future climate extremes remains constrained. These limitations stem from the simplistic parameterization of current vegetation models and the insufficient availability of accurate data to properly refine and constrain them. We present a study proposing an innovative approach to developing data-driven, spatially explicit ecosystem response models based on remote sensing data and generative artificial intelligence models. To construct these models, the EarthNet dataset was utilized, comprising 32,000 Sentinel-2 satellite imagery samples supplemented with weather variables. Among the various models reviewed in the literature, a Conditional Generative Adversarial Network (cGAN) and a diffusion model were trained to generate four wavebands (red, green, blue, and NIR) from Sentinel-2 data. The inputs for these predictions included spatial features from the landscapes, as provided in the dataset. These features comprised digital elevation models, the ESA WorldCover Land Cover map, and Geomorpho90m terrain classification. Preliminary results indicated that it is possible to predict landscapes based on geospatial features with reasonable accuracy, particularly when using land cover maps. However, there remains significant room for improvement. To enhance the model's performance, we plan to include additional features, such as weather conditions, to incorporate the temporal dimension into the forecasts. Furthermore, certain regions, such as Central Europe, areas with moderate vegetation changes, and locations with varying land use types, presented unique challenges and valuable insights. 
As a result, we aim to develop our own dataset encompassing a broader range of locations to increase the model's robustness. Thus, this study presents an advancement in cGANs and diffusion models, utilizing innovative methods to generate synthetic yet realistic satellite imagery time series that capture ecosystem responses, leveraging globally accessible remote sensing data along with environmental and biophysical variables.

Wednesday 25 June 11:30 - 13:00 (Hall G1)

Presentation: Local Land Use and Land Cover Models Deliver Higher Quality Maps

Authors: Girmaw Abebe Tadesse, Caleb Robinson, Charles Mwangi, Eshter Maina, Joshua Nyakundi, Luana Marotti, Gilles Quentin Hacheme, Hamed Alemohammad, Rahul Dodhia, Juan M. Lavista Ferres
Affiliations: Microsoft AI for Good Research Lab, Kenya Space Agency, Clark University
Purpose: As of 2022, an estimated 20% of the population in Africa suffered from undernourishment, and 868 million people experienced moderate to severe food insecurity. The increasing availability of Earth observation data and advancements in geospatial machine learning facilitate the development of global Land-Use and Land-Cover (LULC) maps, e.g., Google’s Dynamic World (GDW), the European Space Agency’s (ESA) WorldCover map, and the Environmental Systems Research Institute’s (ESRI) LULC map. By characterizing land cover types, LULC maps provide valuable insights for key policy decisions in resource management, environmental monitoring and food security. LULC types include Bare Ground, Built-up, Crop, Grass, Shrub & Scrub, Trees and Water. However, prior research has shown that these global LULC maps exhibit lower accuracy and inconsistencies in Africa, partly due to a lack of representative training data. To address this, we propose a data-centric framework to build a local LULC model that uses both high- and lower-resolution satellite images in a knowledge distillation setup. We test our approach in Murang’a County in Kenya – a leading producer of several agricultural products – using data from 2022. The county spans a total area of approximately 2,558.8 km² and is situated between latitudes 0°34′ S and 1°07′ S and longitudes 36°00′ E and 37°27′ E. Methods: Our framework uses raw satellite imagery at different resolutions, together with label examples (annotations) collected from different sources, to train the models. The specific contributions of this work are as follows. First, we adopt a teacher-student model setup to use diverse data sources in the form of knowledge distillation. We use high-resolution Maxar imagery (≈ 0.3 m/pixel), with limited coverage of the study area (51.55%), to train the teacher model.
Next, we use lower-resolution Sentinel-2 imagery (10 m/pixel) to train the student model, which produces high-quality LULC maps for the entirety of the study area. We use additional data to augment the set of label examples, including road vector data from OpenStreetMap and predictions from the teacher model, used as weak labels to train the student model. Both the teacher and student models are implemented with deep learning-based semantic segmentation architectures. We validate our maps in collaboration with domain experts from Murang’a County, and we compare them with the existing global maps. Results: Our local model produces a higher-quality LULC map, achieving an improvement of 0.14 in F1 score and 0.21 in Intersection-over-Union (IoU) compared to the best global map (ESRI) on the detection of the LULC types. For the top-priority classes, Built-up and Cropland, we achieved F1 scores of 0.88 and 0.90, compared with ESRI’s 0.68 and 0.63, respectively. Furthermore, our evaluation reveals inconsistencies across the global maps, with the highest agreement, 30%, found between the GDW and ESA maps. Our map achieves its highest agreement, 34%, with the ESRI map, followed by 24% with the GDW map. We observed notable inconsistencies when computing the total coverage area for each LULC type within the county. Specifically, the GDW and ESA maps overestimated the coverage of Trees, reporting it as 57.85% and 39.35% of the total area, respectively, whereas our map estimated only 20.22%. Additionally, the ESA map overestimated Shrubland coverage at 33.42%, compared to our estimate of 6.27%. Conclusions: Food security remains a significant challenge, particularly in sub-Saharan Africa, partly due to adverse climate impacts. LULC maps are key to enhancing food security in addition to resource management.
However, existing maps from global models were found to perform poorly in African contexts. In this work, we proposed a data-centric framework to build a local LULC mapping model with a teacher-student setup, using satellite images at different resolutions. Our framework facilitates the transfer of knowledge from the teacher model to the student model, resulting in a more accurate and scalable LULC map than the existing global maps. Our study highlights the limitations of global LULC maps, which are inconsistent and less accurate when validated in a local context such as Murang’a County in Kenya. Our research emphasizes the importance of developing local models for more accurate maps, using diverse and growing data sources. It also highlights the need for cross-collaboration among diverse organizations to enhance the trustworthiness of the developed product and increase the likelihood of its deployment for practical impact. Insights obtained from these models can improve data-driven decision-making to mitigate food insecurity. Future work will involve scaling the framework to the national level and incorporating temporal information to further enhance performance.
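The F1 and IoU figures reported above can be computed per class from a pair of classified rasters. The following is a minimal sketch (not the authors' evaluation code; the function name is an assumption) of how such per-class scores are derived from true/false positives and negatives:

```python
import numpy as np

def per_class_scores(pred, ref, class_id):
    """F1 and IoU for one LULC class, comparing a predicted map
    against a reference map (both integer class rasters)."""
    p = (pred == class_id)
    r = (ref == class_id)
    tp = np.logical_and(p, r).sum()   # pixels correctly labelled as the class
    fp = np.logical_and(p, ~r).sum()  # predicted as the class, but not in reference
    fn = np.logical_and(~p, r).sum()  # in reference, but missed by the prediction
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return f1, iou

# Toy 2x3 maps with classes {0, 1}.
pred = np.array([[1, 1, 0], [0, 1, 0]])
ref  = np.array([[1, 0, 0], [0, 1, 1]])
f1, iou = per_class_scores(pred, ref, class_id=1)
print(round(f1, 3), round(iou, 3))  # 0.667 0.5
```

Note that F1 = 2·IoU / (1 + IoU), so the two metrics rank maps consistently for a single class.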

Wednesday 25 June 11:30 - 13:00 (Hall G1)

Presentation: Detecting wind speeds from geostationary data using deep learning

Authors: Yongxing Loo, Dr. Jing Sun, Dr. Geet George
Affiliations: Delft University of Technology
Doldrums are regions over the ocean near the equator characterized by calm winds. The doldrums are in close proximity to, and overlap with, convergence bands in the Intertropical Convergence Zone (ITCZ), classically understood as the region where the northeast and southeast trade winds converge, causing heavy precipitation and frequent thunderstorms. Studying the evolution of doldrums (their formation, development and dissolution) is crucial for understanding the organization of the ITCZ and, subsequently, global climate patterns. The research target of this project is to develop a deep learning model to detect doldrums from geostationary data. More specifically, the aim is to predict the near-surface (10 m altitude) wind speed over the ocean in order to detect the low wind speeds corresponding to doldrums. This is achieved by training a multi-modal neural network on geostationary and scatterometer data. No geostationary satellite can measure wind speed directly. The geostationary satellite used in this research, GOES (imager), passively measures the radiation emitted or reflected from the Earth’s surface. The GOES imager provides a “photo” in different channels (wavelengths ranging from 0.47 to 13.3 μm) of the side of the Earth it faces. These images are not designed to measure wind speeds. However, a glance at the visible images makes it obvious that the signatures of doldrums can be found in them: the doldrums show up distinctly as the darker parts of the ocean. This makes physical sense too, because calm winds make for fewer waves over the ocean surface, thus allowing sunlight to be reflected in all directions – a signature distinct from a wavy surface, where the reflected sunlight has a preferred direction. The idea of this project is to see whether deep learning can be applied to study such patterns and derive areas of doldrums from these images.
The wind speed measurements from scatterometers can be used to train the network. Scatterometers (radars) are specifically designed to measure the speed and direction of wind at the ocean surface. They do this by sending microwave signals to the Earth’s surface and measuring the intensity of the signal’s backscatter, which varies according to the roughness of the ocean surface caused by winds. However, because they are polar orbiting, we get measurements at most twice a day for a given location. In order to train a model to detect wind speeds from GOES images, spatial and temporal differences between the satellite datasets have to be taken into account. The GOES satellite measures the same geographical domain every 10 minutes, whereas an ASCAT scatterometer provides two swaths of 400 km in longitudinal width. Collocating the data in time and space to create a training dataset amounts to extensive data preparation. Multiple sources of scatterometer data are used in training: the Metop-A, Metop-B, Metop-C and HY-2 satellites. This gives the model a variety of data sources from different instruments to train on. Training is done pixel by pixel. Each value from the scatterometer corresponds to around 500 pixels from GOES (depending on the latitude and longitude). The model thus uses multiple GOES pixels to predict a single scatterometer pixel. Two channels of GOES are used to train the model (CH01 and CH03). A single scatterometer satellite provides more than 500,000 valid training scatterometer pixels per month of measurements. Before training, dataset debiasing is also applied so that the model learns to reliably predict very high (>13 m/s) and very low (<3 m/s) wind speeds, which are not common in the raw dataset. On top of the images provided by the satellites, the model is also fed two numerical inputs: solar zenith angle (SZA) and solar azimuth angle (SAA).
Since the model is trying to learn the signatures of wind speeds from reflected sunlight, it must be fed the location of each pixel relative to the sun’s position, as this influences the GOES image. In order for the model to accept the different types of inputs, a multi-input architecture is used, with a convolutional neural network on one side and an MLP on the other. Both parts are then concatenated and fed through a series of dense layers that reduce to a single output value (the wind speed prediction). Multiple models with varying depths and numbers of features have been tested to find the best result. Our current results show that it is possible to detect doldrums from geostationary data with 84% accuracy. We can also infer wind speeds with an RMSE of 2.6 m/s. This analysis was done on a model trained on a single month’s worth of data. Our results also show that the accuracy of the model improves as it is fed larger training sets. We are currently adding multiple months of data from other satellites to the training. Buoy data are used as a final validation for the model. These instruments float on the ocean surface at known locations and measure meteorological variables. Comparing the model output at those locations with the buoy data will provide a final real-world test for the model. If scaling our model continues to improve the accuracy of the wind speed predictions, this new method could improve data availability across the whole globe and be useful not only for research but also for operational forecasting. One limitation of this methodology is that the visible-spectrum channels of GOES used here have no night-time values. Therefore, wind speeds can only be retrieved during daylight.
This research not only demonstrates an application of deep learning to detect doldrums and predict wind speed, but also presents a training process that could be extended to other meteorological variables, providing high temporal resolution and full-globe coverage from geostationary satellites.
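The dataset debiasing step mentioned in the abstract – re-balancing rare very low and very high wind speeds – can be sketched as follows. This is an illustrative sketch only, assuming a simple bin-balanced resampling scheme; the function name and bin edges are assumptions, not the authors' actual procedure:

```python
import numpy as np

def debias_by_wind_bins(speeds, bins, samples_per_bin, seed=0):
    """Resample training indices so every wind-speed bin contributes
    equally, up-weighting rare very low / very high speeds.
    Returns indices into `speeds`, sampling with replacement so that
    under-populated bins can still reach the target count."""
    rng = np.random.default_rng(seed)
    which = np.digitize(speeds, bins)  # bin index per sample
    chosen = []
    for b in np.unique(which):
        idx = np.flatnonzero(which == b)
        chosen.append(rng.choice(idx, size=samples_per_bin, replace=True))
    return np.concatenate(chosen)

# Toy dataset dominated by moderate speeds; bins split at 3 and 13 m/s.
speeds = np.array([1.0, 5.0, 5.5, 6.0, 6.5, 7.0, 14.0])
idx = debias_by_wind_bins(speeds, bins=[3.0, 13.0], samples_per_bin=4)
balanced = speeds[idx]
print(len(balanced))  # 12: each of the three bins now has 4 samples
```

After resampling, the <3 m/s and >13 m/s tails contribute as many training pixels as the dominant moderate-speed bin.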

Wednesday 25 June 11:30 - 13:00 (Hall G1)

Presentation: A Multi-Sensor Approach for Early Wildfire Detection Through Automated Hotspot Monitoring in Europe

Authors: Mr. Luca Barco, Ms. Angelica Urbanelli, PhD. Edoardo Arnaudo, Claudio Rossi
Affiliations: Links Foundation
The increasing frequency and intensity of wildfires in Europe demand rapid and automated detection systems for early warning and response. In the context of the OVERWATCH and UNICORN projects, we present a comprehensive approach to wildfire detection through multi-sensor hotspot monitoring, combining both traditional and novel machine learning techniques. Our research integrates multiple satellite data sources, including MODIS, VIIRS, Sentinel-3 and MSG-HRSEVIRI, to create a robust detection system that leverages both spatial and temporal characteristics of thermal anomalies for improved accuracy and reduced detection times. We have developed two complementary methodologies that address different aspects of the wildfire hotspot detection challenge. The first [4] is a multimodal supervised learning approach, which cross-references thermal anomalies from the MODIS and VIIRS satellites with the European Forest Fire Information System (EFFIS) database. This method incorporates land cover data from ESRI's annual Land Use Land Cover (LULC) maps and Sentinel-3 observations from both the SLSTR and OLCI instruments. These data sources provide data at different spatial resolutions, varying from 300 meters up to 1 kilometer per pixel. In this work, two approaches have been implemented and compared on different configurations of input features (i.e., satellite data, temporal features, number of previous hotspots, land use and land cover): standard ML techniques (i.e., XGBoost), which process single pixels, and an end-to-end deep learning architecture based on ResNet-18, which processes satellite images. The best results were achieved by XGBoost using all features (MODIS, VIIRS, time, land cover, Sentinel-3, number of previous hotspots), obtaining an F1 score of 80.53% on the validation set. For the same feature set, the deep learning approach achieves 76.47%.
Nevertheless, the deep learning approach shows the best performance with a reduced set of features (Sentinel-3 and land cover only), achieving a higher F1 score (76.16%) on the validation set than XGBoost (71.40%), showing that it likely leverages the data from the entire area more effectively, unlike XGBoost, which only considers individual pixels. The architecture processes data from a 90 km² area centered on each hotspot point, effectively capturing the spatial context necessary for accurate classification. Since the revisit times of MODIS and VIIRS are not suitable for the near-real-time analysis that is crucial for hotspot detection, the second methodology [5] utilizes the MSG-HRSEVIRI satellite, processing geostationary data with a 15-minute temporal resolution and a spatial resolution of 4 kilometers per pixel. The approach employs a self-supervised learning framework that exploits temporal patterns in geostationary satellite data, implementing a modified version of Presto [1], a self-supervised architecture specifically designed to handle pixel time series of remote sensing data. Presto was originally designed for Sentinel-2 data; here, it has been adapted to handle the 11 spectral bands of MSG-HRSEVIRI data, grouped by their central wavelengths into infrared, visible, and water vapor categories. The model processes 24-hour sequences of observations and achieves an F1 score of 63.58% on our test set built upon the fire-related events of the THRAWS [2] dataset. This result demonstrates the viability of continuous monitoring approaches. Our research makes several contributions to the field of satellite-based wildfire detection. We developed and openly released two comprehensive datasets.
The first was obtained by cross-referencing MODIS and VIIRS thermal anomalies, extracted with the algorithm by Giglio et al. [3], over the Mediterranean region with the EFFIS wildfire database, resulting in a collection of more than 9.5M hotspots from January 2012 to August 2022 with corresponding satellite observations from MODIS, VIIRS and Sentinel-3 at a spatial resolution of 1 kilometer per pixel. The second is a time series dataset covering 164 fire events across Europe from the EFFIS wildfire database from 2012 to 2022, with corresponding satellite observations from MSG-HRSEVIRI at a spatial resolution of 4 kilometers per pixel. These datasets enable the training and validation of machine learning models for wildfire detection and serve as benchmarks for future research. Alongside them, we provide a thorough evaluation of different machine learning architectures, including traditional approaches like XGBoost and deep learning models, establishing performance baselines for both static and temporal analysis approaches. Finally, we present a framework for integrating multiple satellite sources with different temporal and spatial resolutions, demonstrating how complementary data sources can be combined to enhance detection capabilities. The results of our study highlight important trade-offs in satellite-based wildfire detection. While high-resolution sensors provide better classification accuracy, the integration of geostationary satellite data enables more frequent monitoring, which is critical for early detection. Our analysis shows that the inclusion of additional, higher-resolution data sources, such as land cover information and multiple spectral bands, significantly improves classification performance, with an increase of 33.65 percentage points in F1 score compared to baseline approaches.
The temporal analysis component demonstrates the feasibility of near-real-time monitoring, though with lower accuracy than static analysis, suggesting the potential value of a combined approach. Future work will focus on integrating these complementary methodologies into a single operational system, leveraging the strengths of both approaches to create a more robust detection capability. We plan to expand the temporal analysis to include smaller fires and to explore the use of advanced time series analysis techniques on geostationary satellite data. This research represents a significant step toward automated, near-real-time wildfire detection systems that can help reduce response times and mitigate the impact of wildfires across Europe, directly supporting the emergency management objectives of the OVERWATCH and UNICORN projects. This work was funded by the Horizon projects OVERWATCH (GA n.101082320) and UNICORN (GA n.101180172).
REFERENCES
[1] Gabriel Tseng, Ruben Cartuyvels, Ivan Zvonkov, Mirali Purohit, David Rolnick, and Hannah Kerner (2024). Lightweight, Pre-trained Transformers for Remote Sensing Timeseries.
[2] Meoni, G., Prete, R., Serva, F., De Beusscher, A., Colin, O., and Longépé, N. (2024). Unlocking the Use of Raw Multispectral Earth Observation Imagery for Onboard Artificial Intelligence. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 17, 12521–12537.
[3] Louis Giglio, Jacques Descloitres, Christopher O. Justice, and Yoram J. Kaufman (2003). An enhanced contextual fire detection algorithm for MODIS. Remote Sensing of Environment, 87, 273–282.
[4] Angelica Urbanelli, Luca Barco, Edoardo Arnaudo, and Claudio Rossi (2023). A Multimodal Supervised Machine Learning Approach for Satellite-based Wildfire Identification in Europe.
[5] Luca Barco, Angelica Urbanelli, and Claudio Rossi (2024). Rapid Wildfire Hotspot Detection Using Self-Supervised Learning on Temporal Remote Sensing Data.
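The adaptation of Presto to MSG-HRSEVIRI relies on grouping the 11 spectral bands by central wavelength into visible, water-vapour and infrared categories. The following sketch illustrates one plausible grouping; the SEVIRI channel names and central wavelengths are the instrument's standard narrow-band channels, but the exact group assignment (e.g. putting the solar-reflective NIR/SWIR channels in the "visible" group) is an assumption for illustration, not the authors' published configuration:

```python
import numpy as np

# Central wavelengths (um) of MSG-HRSEVIRI's 11 narrow-band channels.
BAND_WAVELENGTHS = {
    "VIS006": 0.635, "VIS008": 0.81, "IR_016": 1.64, "IR_039": 3.90,
    "WV_062": 6.25, "WV_073": 7.35, "IR_087": 8.70, "IR_097": 9.66,
    "IR_108": 10.80, "IR_120": 12.00, "IR_134": 13.40,
}

def group_bands(wavelengths):
    """Assign each channel to a category by central wavelength
    (thresholds here are illustrative assumptions)."""
    groups = {"visible": [], "water_vapour": [], "infrared": []}
    for name, wl in wavelengths.items():
        if wl < 3.0:
            groups["visible"].append(name)       # solar-reflective bands
        elif 6.0 <= wl <= 7.5:
            groups["water_vapour"].append(name)  # WV absorption bands
        else:
            groups["infrared"].append(name)      # thermal IR bands
    return groups

g = group_bands(BAND_WAVELENGTHS)
print({k: len(v) for k, v in g.items()})
```

A Presto-style model would then embed each group separately before processing the 24-hour pixel time series.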

Wednesday 25 June 11:30 - 13:00 (Hall K2)

Session: C.02.14 The EarthCARE Mission’s First Year in Orbit: Opening new Horizons for Cloud, Aerosol and Radiation Science - PART 2.

The Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) satellite mission aims to improve our understanding of cloud-aerosol-radiation interactions and Earth radiation budget, such that they can be modelled with better reliability in climate and numerical weather prediction models. To achieve this objective, EarthCARE will measure the three-dimensional structure of clouds, precipitation and aerosols, along with collocated observations of solar and terrestrial radiation.

This ESA-JAXA mission was successfully launched in May 2024 and, following the satellite and instrument commissioning phase, now provides unique co-registered observations from a suite of four instruments on a common platform: (1) the ATmospheric LIDar (ATLID), (2) the Cloud Profiling Radar (CPR), (3) the Multi-Spectral Imager (MSI) and (4) the BroadBand Radiometer (BBR). EarthCARE's global observations include vertical profiles of natural and anthropogenic aerosols, the vertical distribution of ice and liquid water content, the cloud mesoscale distribution, precipitation microphysics, estimates of particle size, convective vertical air motions, and atmospheric radiative heating and cooling profiles. In addition to enabling new insights into climate science and providing unique data for NWP improvements, EarthCARE continues the heritage measurements of CloudSat, CALIPSO and Aeolus, and bridges to future missions such as NASA's Atmosphere Observing System (AOS) and Aeolus-2.

The session invites contributions from the science community on EarthCARE and related science themes, including Passive and Active Observational Techniques; Cloud and Precipitation Microphysics, Aerosols and Radiation Process Studies; Radiation and Earth Radiation Budget; Scientific and User Applications; and Long-Term Data Records. In addition, scientific synergies with heritage, operational and future satellite missions, as well as with ground-based, air- or ship-borne campaign activities, are welcome.

Contributions on Modelling, Assimilation and Parameterisation at Global, Regional and Cloud Level – enhancing high-resolution atmospheric numerical model activities through evaluation and improvement using novel satellite observations from EarthCARE and related satellite missions – are particularly invited. A focus is placed on the use of cutting-edge atmospheric climate and weather models, including "global km-scale" or "global storm-resolving" models, and commensurate Earth observations of clouds, aerosols and convection.

Wednesday 25 June 11:30 - 13:00 (Hall K2)

Presentation: Validation of CPR radar reflectivity and Doppler measurements using airborne observations collected during the PERCUSION campaign

Authors: Florian Ewald, Silke Groß, Takuji Kubota
Affiliations: German Aerospace Center (DLR), Japan Aerospace Exploration Agency (JAXA)
Clouds play a crucial role in Earth’s climate system and in the global water cycle. Their impact on both is intrinsically linked with their macro- and microphysical properties as well as their internal dynamics. With the recently launched ESA/JAXA satellite mission EarthCARE and its Cloud Profiling Radar (CPR), it is possible for the first time to simultaneously observe the internal dynamics of clouds and retrieve their microphysical properties via radar-lidar synergy on a global scale. Among other requirements, this expectation can only be fulfilled with a well-calibrated and well-characterized CPR instrument. With direct comparisons along collocated measurement curtains, airborne underflights provide a unique opportunity to validate the L1 and L2 products provided by CPR. During the Persistent EarthCARE underflight studies of the ITCZ and organized convection (PERCUSION) campaign, the German research aircraft HALO was equipped with an EarthCARE-like payload consisting of a high-power (30 kW peak power) cloud radar at 35 GHz, a high spectral resolution lidar (HSRL) system, and further passive radiation measurements in the visible, thermal and microwave regions. Over a period of nine weeks between August and November 2024, over 33 underpasses were performed over the eastern and western tropical Atlantic, Europe and the Arctic. In this presentation, we will give an overview of our L1 and L2 product comparisons along the collected underpasses. Besides selected case studies, we will provide a comprehensive validation of the absolute calibration and sensitivity of CPR. In addition, we will check the correction of Doppler velocities and characterize the instrument's Doppler performance. This will allow us to anticipate the impact on downstream EarthCARE products and the ability of CPR to resolve internal cloud dynamics.
By sharing our knowledge, we want to communicate the performance of CPR to the wider scientific community, discuss remaining issues and foster helpful discussions for further analysis.
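An absolute calibration check of the kind described above can be reduced, in its simplest form, to a robust offset estimate between collocated airborne and spaceborne reflectivity profiles. The sketch below is purely illustrative (the function name, noise floor and the neglect of attenuation and beam-matching effects are simplifying assumptions, not the authors' method):

```python
import numpy as np

def calibration_offset(z_air_dbz, z_cpr_dbz, min_dbz=-25.0):
    """Estimate a radar calibration offset (dB) as the median
    difference between collocated airborne and spaceborne
    reflectivity values, ignoring pixels below a noise floor.
    Attenuation and beam-matching effects are deliberately ignored
    in this simplified sketch."""
    a = np.asarray(z_air_dbz)
    c = np.asarray(z_cpr_dbz)
    valid = (a > min_dbz) & (c > min_dbz)
    return float(np.median(c[valid] - a[valid]))

# Toy collocated profile: first gate is below the noise floor.
z_air = [-30.0, 5.0, 10.0, 12.0, 8.0]
z_cpr = [-30.0, 6.1, 11.0, 13.2, 9.0]
print(round(calibration_offset(z_air, z_cpr), 2))  # 1.05
```

The median is preferred over the mean here because individual collocation mismatches (e.g. cloud edges) produce heavy-tailed difference distributions.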

Wednesday 25 June 11:30 - 13:00 (Hall K2)

Presentation: Contrasting EarthCARE’s Multi-Spectral Imager (MSI) with MSG SEVIRI and MTG FCI: Evaluating MSI's Radiometric Accuracy and Spectral Performance.

Authors: Sebastian Bley, Anja Hünerbein, Gregor Walter, Nicole Docter, Nils Madenach, Rene Preusker
Affiliations: Leibniz Institute For Tropospheric Research (TROPOS), FU Berlin
The joint ESA-JAXA EarthCARE satellite mission was successfully launched on 28 May 2024 to advance our understanding of cloud-aerosol-radiation interactions and their role in the Earth system. Within just two months of launch, the mission had delivered impressive data from its four state-of-the-art instruments: the Atmospheric Lidar (ATLID), the Cloud Profiling Radar (CPR), the Multi-Spectral Imager (MSI), and the BroadBand Radiometer (BBR). Results from the commissioning phase confirm that this synergistic dataset is truly exceptional, showcasing EarthCARE's potential to transform atmospheric research. MSI features seven spectral channels spanning the visible, near-infrared, short-wave infrared and thermal infrared ranges, with a 150 km wide measurement swath. This design provides essential context for the vertical profile measurements obtained by the two active instruments, ATLID and CPR, enabling a detailed horizontal representation of cloud and aerosol properties. MSI enables retrievals of key cloud and aerosol properties, including the cloud mask (M-CM), cloud optical properties (M-COP) and aerosol optical thickness (M-AOT). Furthermore, MSI data serve as an essential input for the BBR, enhancing its capacity to capture Earth’s radiative balance. This study focuses on comparing Level 1 observations from EarthCARE’s MSI with data from two geostationary sensors: the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) on Meteosat Second Generation (MSG) and the Flexible Combined Imager (FCI) on Meteosat Third Generation (MTG). These instruments are used to validate MSI’s reflectances and brightness temperatures, providing insights into its radiometric accuracy, a key prerequisite for the retrieval of cloud and aerosol properties.
Special attention is given to challenges arising from differences in viewing geometry (MSI’s nadir perspective versus SEVIRI/FCI’s geostationary perspective) and spatial resolution, where MSI’s finer detail complements the broader coverage of its geostationary counterparts. The comparative analysis highlights the spectral channel characteristics influencing cross-platform validation and underscores the significance of harmonized approaches to inter-calibration. We also show how long time series derived from across-track MSI pixel statistics can contribute to refining the Level 1 calibration. The MSI Level 1 validation efforts are further supported by the MSI tool, a radiative transfer simulator designed to simulate the top-of-atmosphere (TOA) spectral bands of MSI using ground-based synergistic profile measurements from ACTRIS sites worldwide. These simulations also serve as an independent reference for SEVIRI and FCI, addressing the spatial representativeness of a ground-based observation. This work not only supports the verification of MSI data quality but also highlights the synergistic potential of combining MSI with SEVIRI/FCI for atmospheric and climate studies.
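Once MSI and SEVIRI/FCI pixels have been collocated, the radiometric comparison typically reduces to a few summary statistics. The following is a minimal sketch (function name and inputs are illustrative assumptions) of bias, RMSE and correlation between matched brightness temperatures:

```python
import numpy as np

def collocation_stats(bt_msi, bt_geo):
    """Bias, RMSE and correlation between collocated brightness
    temperatures from two imagers (1-D arrays of matched pixels)."""
    bt_msi = np.asarray(bt_msi)
    bt_geo = np.asarray(bt_geo)
    d = bt_msi - bt_geo
    bias = d.mean()                      # mean difference (K)
    rmse = np.sqrt((d ** 2).mean())      # root-mean-square difference (K)
    corr = np.corrcoef(bt_msi, bt_geo)[0, 1]
    return bias, rmse, corr

# Toy sample of four collocated pixels (K).
msi = np.array([285.0, 290.0, 270.0, 260.0])
sev = np.array([284.5, 289.0, 271.0, 259.0])
bias, rmse, corr = collocation_stats(msi, sev)
print(round(bias, 3), round(rmse, 3))  # 0.375 0.901
```

In practice the comparison would be stratified by channel, scene type and viewing angle before drawing conclusions about radiometric accuracy.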

Wednesday 25 June 11:30 - 13:00 (Hall K2)

Presentation: Validation of ATLID-HSRL products with airborne HSRL WALES measurements based on EarthCARE underflights obtained during the PERCUSION campaign

Authors: Konstantin Krüger, Martin Wirth, Silke Groß
Affiliations: Deutsches Zentrum für Luft- und Raumfahrt (DLR)
The Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) is a joint ESA/JAXA satellite mission whose main objectives are to support weather and climate predictions and to enhance knowledge of the radiative impact of clouds and aerosols. With respect to these mission goals, a key instrument of EarthCARE’s scientific payload is the Atmospheric Lidar (ATLID). It is a high spectral resolution lidar (HSRL) operating in the UV range (355 nm) with three channels (cross-polar, Rayleigh and Mie), allowing for the profiling of aerosols and optically thin clouds with high vertical resolution. The use of this valuable ATLID dataset for scientific purposes requires careful and thorough validation to ensure that the intended accuracy requirements are met. A direct validation approach is to compare collocated ATLID and airborne HSRL measurements that are spatially and temporally coordinated along EarthCARE's flight track. This approach is pursued in this study. As airborne measurements we use the HSRL channels of the Water Vapor Lidar Experiment in Space (WALES) demonstrator, which was developed at the German Aerospace Center (DLR) and has been operated over the last two decades onboard the DLR aircraft Falcon and the German research aircraft HALO for numerous field campaigns. WALES includes an HSRL capability operating at 532 nm, providing profiles of particle backscatter, linear depolarization ratio and particle extinction with high vertical and horizontal resolution along the flight track, extending from the surface to flight altitude (up to 15 km), making WALES HSRL data highly suitable for ATLID validation. Between August and November 2024, WALES was deployed on the HALO field campaign PERCUSION (Persistent EarthCARE underflight studies of the ITCZ and organized convection), which was conducted from three sites, i.e., Cape Verde and Barbados (tropics) and Germany (midlatitudes).
A specific element of each individual research flight was a segment along the predicted EarthCARE track, with a coordinated underpass at the center of this track, to create a comprehensive dataset for validation activities. In total, 33 EarthCARE underpasses were carried out during PERCUSION, yielding a large number of collocated ATLID and WALES profiles along flight segments of approximately 400 km / 30 min each. This large dataset covers a variety of aerosol and cloud settings expected in the tropics, midlatitudes and the Arctic, such as the thick Saharan dust layer, marine aerosol in the boundary layer, and optically thin ice clouds and contrails at higher altitudes. The WALES dataset is used to statistically compare ATLID L1 data with regard to the different settings indicated. In addition to the coincident WALES and ATLID data at the overpass time, we also separately study their comparability and representativeness along the 30-min underpass flight leg, accounting for the spatial and temporal difference (i.e., the mismatch) between the two measurements. Furthermore, first results of a comparison of ATLID L2 data products, such as aerosol layer properties (aerosol layer top/base heights, aerosol classification, etc.), are shown. The results of this validation study will not only provide comprehensive insight into the performance of ATLID under different atmospheric conditions in the tropics and extratropics, but will also contribute to reducing potential errors in future retrievals.
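Comparing a spaceborne lidar profile with an airborne one first requires putting both on a common vertical grid. The sketch below illustrates this step with linear interpolation and a mean relative difference; it is a simplified illustration (function name and units are assumptions), and it deliberately ignores the 532 nm vs 355 nm wavelength conversion that a real ATLID-WALES comparison must handle:

```python
import numpy as np

def compare_profiles(z_air, beta_air, z_sat, beta_sat):
    """Interpolate an airborne backscatter profile onto the satellite
    vertical grid (z_air must be increasing for np.interp) and return
    the mean relative difference in percent."""
    beta_interp = np.interp(z_sat, z_air, beta_air)
    return 100.0 * np.mean((beta_sat - beta_interp) / beta_interp)

# Toy profiles: airborne grid at 1 km spacing, satellite grid offset by 0.5 km.
z_air = np.array([0.0, 1.0, 2.0, 3.0])      # altitude (km)
beta_air = np.array([4.0, 2.0, 1.0, 0.5])   # backscatter (arbitrary units)
z_sat = np.array([0.5, 1.5, 2.5])
beta_sat = np.array([3.3, 1.65, 0.825])     # 10% high at every level
print(round(compare_profiles(z_air, beta_air, z_sat, beta_sat), 1))  # 10.0
```

Statistics like this, computed per scene type (dust layer, marine boundary layer, thin cirrus), are what a collocated validation ultimately reports.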

Wednesday 25 June 11:30 - 13:00 (Hall K2)

Presentation: Calibration and Validation of ESA-JAXA EarthCARE’s ATLID Level 2 Products With Underflights of NASA Aircraft Campaigns PACE-PAX and ARCSIX

Authors: Diko Hemminga, Ping Wang, Gerd-Jan van Zadelhoff, Dr. Dave Donovan, Johnathan Hair, Amin Nehrir, Senior Researcher Richard Ferrare, Chris Hostetler, Taylor Shingler
Affiliations: Royal Netherlands Meteorological Institute (KNMI), NASA Langley Research Center
With the successful launch of the ESA-JAXA Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) satellite mission, a new lidar instrument for atmospheric research is now spaceborne. The ATmospheric LIDar (ATLID) instrument is one of the four instruments on the common EarthCARE satellite platform. ATLID is a linearly polarized three-channel lidar operating at 355 nm. This lidar instrument provides vertically resolved measurements of aerosol layers and clouds in the atmosphere as well as ground backscatter. The ATLID level 2 single-instrument (L2a) product A-EBD delivers extinction, backscatter and depolarization profiles in the atmosphere. Such data products are derived from attenuated backscatter profiles split by the high-spectral-resolution lidar (HSRL) technique into co-polar Rayleigh, co-polar Mie and cross-polar channels. In addition to the results of an optimal-estimation-based retrieval algorithm (A-EBD), L2a products generated by the ATLID profile processor (A-PRO) include a large-scale aerosol extinction and backscatter product (A-AER), target classification (A-TC) and ice microphysical property estimation (A-ICE). Moreover, a feature mask (A-FM) is determined and the ATLID layer products processor (A-LAY) generates products for cloud top height (A-CTH) and aerosol layer description (A-ALD). Calibration and validation of the instrument and data processing has been performed since launch. This includes comparisons with ground-based measurements, airborne measurements, such as presented in this work, as well as comparisons with numerical models. Three months after launch, during EarthCARE’s satellite and instrument commissioning phase, specifically organized underflights as part of two NASA aircraft campaigns, PACE-PAX and ARCSIX, provided collocated lidar measurements for calibration and validation of ATLID products.
In the Plankton, Aerosol, Cloud, ocean Ecosystem Postlaunch Airborne eXperiment (PACE-PAX) campaign, five successful underflights were performed with the ER-2 aircraft carrying the HSRL-2 instrument. In addition, the Gulfstream III aircraft performed an underflight during the Arctic Radiation-Cloud-aerosol-Surface Interaction eXperiment (ARCSIX) campaign, utilizing the HALO lidar instrument. Both multi-wavelength HSRL instruments measure the vertical profile at nadir below the aircraft using measurement wavelengths of 532 nm and 1064 nm. The HSRL-2 instrument adds 355 nm as well, matching the laser wavelength of ATLID. In this work we utilize the aircraft campaign data for calibration and validation of the ATLID single-instrument level 1 and 2 data products. This includes airborne measurements of extinction, backscatter and depolarization, as well as scattering ratio, lidar ratio and cloud top height. The extended collocation due to the underflights, with time differences within 30 minutes, enables intensive comparison. First comparisons with ATLID level 1 data already show great promise. Furthermore, a first look at mean profiles from the A-PRO products A-AER and A-EBD indicates good agreement with the HSRL-2 lidar measurements at 355 nm from the PACE-PAX campaign. A favorable comparison is also found between the A-AER product and the ARCSIX underflight, displaying a scene rich in aerosols over the Atlantic Ocean off the East Coast of the USA. The latest results and conclusions will be shared in this presentation.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall K2)

Presentation: EarthCARE Validation Campaigns & Activities Overview

Authors: Montserrat Pinol Sole, Jonas Von Bismarck, Rob Koopman, Stephanie Ruslie, Thorsten Fehr, Kotska Wallace, Timon Hummel, Bjoern Frommknecht, Vasileios Tzallas, Michael Eisinger
Affiliations: ESA
The Earth Cloud Aerosol and Radiation Explorer (EarthCARE) is a satellite mission developed by the European Space Agency (ESA) in collaboration with the Japan Aerospace Exploration Agency (JAXA) to measure global profiles of aerosol, cloud and precipitation properties along with radiative fluxes and derived warming rates, with the goal of advancing our understanding of cloud-aerosol and radiation interactions and the Earth's radiative budget. With all four of EarthCARE’s instruments working together well and generating data that are already exceeding expectations after the successful launch in May 2024, assuring the data quality of the ESA EarthCARE science products has been an essential and collaborative effort. It is being realised through contributions from the independent EarthCARE validation team (ECVT) under coordination by ESA, as well as monitoring, calibration and airborne campaign activities performed under ESA (co-)management or coordinated with ESA. An early focus for stabilizing the data quality has been on airborne activities underflying the satellite with EarthCARE-like remote sensing payloads as well as complementary in-situ payloads. The number of validation underflights achieved during the EarthCARE commissioning phase significantly exceeds that of any other Earth Observation mission, and this unprecedented airborne activity is complemented by comparisons with a multitude of ground-based and shipborne instruments over the globe, intercomparisons with other satellites, and evaluation using numerical weather models. The presentation will focus on an overview of the major campaign activities in support of EarthCARE cal/val during its first year in orbit, selected results and complementing validation activities. Furthermore, an outlook on planned activities, campaigns and validation workshops will be given.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall K2)

Presentation: Evaluation of EarthCARE ATLID aerosol products using EARLINET measurements

Authors: Ping Wang, Dave Donovan, Gerd-Jan van Zadelhoff, Diego Gouveia, Jos de Kloe, Arnoud Apituley
Affiliations: Royal Netherlands Meteorological Institute (KNMI)
The Atmospheric Lidar (ATLID) onboard EarthCARE is a high spectral resolution lidar system operating at 355 nm. ATLID emits linearly polarized light and separates the returned backscatter into the parallel and perpendicular polarized components with respect to the plane defined by the emitted beam. The three ATLID physical channels are the parallel (or co-polar) Mie channel, the parallel (or co-polar) Rayleigh channel, and the perpendicular (or cross-polar) channel. The EarthCARE local equator crossing time on the descending node is 14:00 local time (LT), with a repeat cycle of 25 days. The ATLID L2a processor chain starts with the A-FM (ATLID Feature Mask) processor, which detects features to help facilitate the appropriate averaging of the data. The A-PRO (ATLID-profile) processor retrieves the aerosol and cloud optical property profiles and classifies the detected targets. The ATLID level 2a products include the feature mask, particle extinction coefficient, backscatter coefficient, lidar ratio, particle linear depolarization ratio profiles, target classification and some aerosol/cloud layer properties derived from the profiles. We have been evaluating the ATLID products with the ground-based EARLINET (European Aerosol Research LIdar NETwork) measurements since August 2024, after the ATLID calibration had been fine-tuned during the commissioning phase. We utilize the hourly EARLINET products available in the ESA Validation Data Center (EVDC), selected and clipped to the EarthCARE overpass times at the EARLINET sites. The ATLID profile that has the shortest distance (<100 km) to the EARLINET site and lies within one hour of the EARLINET measurements is used for the direct comparison. The EarthCARE orbit is separated into 8 frames (from A to H). The EARLINET sites are mostly collocated with frames B (nighttime) and D (daytime). There are about 30 EARLINET sites, but only a few sites have collocated data each day.
The comparison of the ATLID and EARLINET profiles is monitored automatically every day. With several months of monitoring, we can see that the ATLID daytime frame is noisier than the nighttime frame. The nighttime ATLID products often agree better with the EARLINET products; this is consistent with a known problem with the background subtraction in the earlier version of the ATLID L1b product and is expected to be improved. In the quantitative evaluations, we separate aerosols and clouds using the classification in the A-PRO product. The A-PRO high-resolution profiles often agree better with the EARLINET measurements but have a large error bar, while the low-resolution profiles sometimes have a negative bias but a smaller error bar. This evaluation is a task in the EarthCARE DISC project, so the work is ongoing. In the presentation, we will show the daily monitoring of the ATLID A-PRO products and some highlights of the comparisons with EARLINET products.
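The collocation criterion described above (nearest ATLID profile within 100 km of the site and within one hour of the measurement) can be sketched as follows. All geolocations, times, and array names are synthetic stand-ins, not the actual DISC monitoring code:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in km (inputs in degrees)
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# Hypothetical along-track ATLID profile geolocations and times (UNIX seconds)
prof_lat = np.linspace(35.0, 45.0, 500)
prof_lon = np.linspace(20.0, 22.0, 500)
prof_t = np.linspace(0, 300, 500) + 1_750_000_000

# Hypothetical EARLINET station location and measurement time
site_lat, site_lon, site_t = 40.5, 21.1, 1_750_000_100

d = haversine_km(prof_lat, prof_lon, site_lat, site_lon)
ok = (d < 100.0) & (np.abs(prof_t - site_t) < 3600.0)   # <100 km and <1 h
match = np.argmin(np.where(ok, d, np.inf))              # nearest collocated profile
```

The boolean mask applies both thresholds at once; `argmin` over the masked distances then picks the single closest profile used for the direct comparison.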
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.14)

Session: C.02.15 - M2 session: EO research missions: The Stress Test (Testing Phase) - Validating, pushing to the limit, and Perfecting for Mission Readiness - Flex, Forum, Altius

The status of development of ESA missions will be outlined in four sessions of 1h30 each (together a full day).
Participants will have the unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and the status of mission development activities will be presented together with industrial and science partners.


M2 session: EO research missions: The Stress Test (Testing Phase) - Validating, pushing to the limit, and Perfecting for Mission Readiness - Flex, Forum, Altius


FLEX Development Status


  • Ralf Bock – ESA

FLORIS – An Innovative Spectrometer for Fluorescence Measurement


  • Emanuela de Luca – Leonardo

FORUM Development Status


  • Paolo Laberinti – ESA

FORUM Industry


  • Chris Burgess and Ernesto Cabrera – Airbus Stevenage

ALTIUS


  • Michael Francois – ESA

ALTIUS Industry


  • Ingmar Lafaille – Redwire

Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (ESA Agora)

Session: F.01.17 ArtEO – methods and benefits of building bridges between art and Earth sciences

ArtEO (www.arteo.earth) is an arts initiative established by Imperative Space, with the simple aim of bringing Earth observation (EO) and other environmental data closer to artists and creatives. Its mission is to facilitate easier access to EO data and imagery for artists, musicians, storytellers and creative professionals who wish to engage with climate change and wider environmental subjects. This session will include talks and examples of artwork from a diverse range of multidisciplinary artists who have worked with ArtEO in its first phase across many genres, ranging from large scale sculpture to immersive projections, movement-based performance art to video work, software-based sonification to powerful musical composition. These works have been showcased in a number of major art-science events, climate communications and data innovation contexts.

In the first part of this Agora session, we will hear from some of the ArtEO ‘Pioneer’ artists about the inspirations, challenges, discoveries and creative process behind their respective work, and we’ll learn about the common goals and opportunities for cross-learning between artistic use of EO data and other forms of science communication and data visualization. In the second part, we will hear from new artists commencing their journey with EO, and the session will culminate with a panel discussion on the opportunities and challenges for engaging the wider public through artistic uses of EO data and imagery.

Moderator:



  • Ravi Kapur

Speakers:



  • Tom DeMajo
  • Eva Petric
  • John Palmesino
  • Christian Clauwers
  • Rosalinda Morrone
  • Marcus Neustetter
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall N1/N2)

Session: C.03.03 Advancing global-scale high resolution imaging spectroscopy in preparation for CHIME - PART 2

The growing availability of high resolution imaging spectroscopy products from missions such as EnMAP, EMIT, HiSUI, PRISMA and DESIS is enabling a wide spectrum of novel scientific products and applications. The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) will provide routine observations over the land and coastal zone through the Copernicus Programme in support of EU and related policies for the management of natural resources, assets and benefits. This visible-to-shortwave infrared spectroscopy-based observatory will provide unique and major contributions in fulfilling user requirements in the domains of environmental monitoring and management, with a focus on soil productivity, sustainable raw materials exploitation, sustainable use of nutrients and water in agriculture, and food security. A number of secondary applications will benefit from the routine provision of CHIME products, e.g. biodiversity, coastal and inland water quality monitoring, and methane and carbon dioxide detection from point sources. In this session we will welcome contributions from the scientific and user communities encompassing CHIME preparatory activities, including L2 products development, calibration/validation, and downstream products and applications prototyping. We will also welcome contributions building on current spaceborne imaging spectroscopy missions and anticipated missions like SBG-VSWIR.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall N1/N2)

Presentation: The CHIME E2E L2B Vegetation Processor: Towards Sensor-Agnostic Processing of Hyperspectral L2A Images into Vegetation Traits

Authors: José Luis García Soria, Miguel Morata, Viktor Ixion Mészáros, Dávid Kovacs, Jochem Verrelst
Affiliations: Universitat de València
The upcoming Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) will produce operational products up to L2A, i.e., atmospherically corrected and orthorectified reflectances, as well as L2B biophysical products, such as vegetation trait estimates. In this context, ESA has initiated the Copernicus Hyperspectral End-to-End Simulator (CHEES) to provide realistic but synthetic data sets to support the development of the L2A and L2B algorithms. Regarding the L2B vegetation products, the Mission Advisory Group (MAG) has considered the following priority traits: leaf nitrogen content (LNC), leaf mass area (LMA), leaf water content (LWC), canopy nitrogen content (CNC), and canopy water content (CWC). Apart from those, the processor is also prepared to process leaf and canopy chlorophyll content (LCC/CCC), leaf area index (LAI), fractional vegetation cover (FVC), and fraction of absorbed photosynthetically active radiation (FAPAR). A hybrid workflow was implemented to operationally retrieve these traits. Hybrid models leverage the physical accuracy of radiative transfer models (RTM) and the flexibility of machine learning regression algorithms (MLRA) to establish non-linear relationships between spectra and vegetation traits. The RTM SCOPE (v2.1) generated the training pool of data. Active learning then optimized the training samples by selecting the most relevant data from the training pool. Principal component analysis reduced hyperspectral data collinearity to 20 components, capturing 99% of the original spectral information. Finally, Gaussian process regression (GPR), kernel ridge regression (KRR), random forest (RF) and neural networks (NN) were trained to retrieve the variables, with GPR being the benchmark as it provides probabilistic uncertainty estimates. The other MLRAs use either bootstrapping (KRR and NN) or quantile regression (RF) to estimate the epistemic uncertainty of per-pixel predictions.
While canopy models are accurate for simulated scenes, LNC and LMA models remain challenging. Challenges include adapting hybrid models for atmospheric correction and potential anomalies. This presentation explores the latest advancements and future developments of the L2B vegetation (L2BV) processor. It features a resampling tool that uses splines to adapt input data for compatibility with other imaging spectrometers (e.g., PRISMA, EnMAP). To address noisy data and improve spectral quality, spectral smoothing and outlier and peak detection techniques are being incorporated. These options can be activated in the local configuration file, enabling automatic processing of any hyperspectral L2A image that covers the 400-2400 nm range. Examples with EnMAP, PRISMA and airborne HyPlant data will be shown. Beyond the L2BV processor, initiatives are underway to make the GPR models run in Python-based environments. The first Python version of the L2BV, named PYL2BV, is presented here. This package runs faster and uses less memory. In addition, the Python processor opens up the possibility of processing hyperspectral data within cloud computing platforms, e.g., the Copernicus Data Ecosystem. A Python package has just been released within the openEO framework: PyEOGPR (https://pypi.org/project/pyeogpr/). While already operational for tailored Sentinel-2/3 models, it is foreseen that PyEOGPR will be able to process hyperspectral data when such data becomes available in cloud computing platforms. These packages will become publicly available to the community to facilitate the processing of hyperspectral surface reflectance into vegetation traits.
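The hybrid workflow described in this abstract (RTM-simulated training pool, PCA compression of collinear bands, GPR with per-pixel uncertainty) can be illustrated with a minimal numpy-only sketch. All data, dimensions, and the two-term spectral model are synthetic stand-ins for the SCOPE simulations; this is not the operational L2BV code:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. "RTM" training pool: 200 synthetic spectra (150 bands) and a trait (e.g. LAI)
n_train, n_bands = 200, 150
trait = rng.uniform(0.5, 6.0, n_train)
basis = rng.normal(size=(2, n_bands))
spectra = (np.outer(trait, basis[0]) + np.outer(np.sqrt(trait), basis[1])
           + 0.05 * rng.normal(size=(n_train, n_bands)))

# 2. PCA via SVD: compress the collinear bands to a few components
mu_s = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mu_s, full_matrices=False)
n_comp = 5
X = (spectra - mu_s) @ Vt[:n_comp].T
scale = X.std(axis=0)
X /= scale

# 3. GPR with an RBF kernel: predictive mean and standard deviation
def rbf(a, b, ls):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

ls = np.sqrt(np.median(((X[:, None] - X[None]) ** 2).sum(-1)))  # median heuristic
noise = 1e-2                                                    # jitter / noise term
K = rbf(X, X, ls) + noise * np.eye(n_train)
alpha = np.linalg.solve(K, trait - trait.mean())

# Predict the trait (with uncertainty) for new "pixels" drawn from the same model
trait_new = rng.uniform(0.5, 6.0, 20)
spec_new = (np.outer(trait_new, basis[0]) + np.outer(np.sqrt(trait_new), basis[1])
            + 0.05 * rng.normal(size=(20, n_bands)))
Xn = ((spec_new - mu_s) @ Vt[:n_comp].T) / scale
Ks = rbf(Xn, X, ls)
pred = trait.mean() + Ks @ alpha
var = np.maximum(1.0 + noise - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T)), 0)
sigma = np.sqrt(var)   # per-pixel predictive standard deviation
```

The predictive variance is what makes GPR the benchmark in the processor: it accompanies every retrieved pixel with an uncertainty, which the other MLRAs approximate via bootstrapping or quantile regression.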
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall N1/N2)

Presentation: Status of the L2B soil and mineral processors in the context of the CHIME mission preparation

Authors: Robert Milewski, Stéphane Guillaso, Karl Segl, Saeid Asadzadeh, Fernando Alemán
Affiliations: GFZ Helmholtz Centre for Geosciences, GMV Aerospace and Defence
As part of the preparation of the CHIME mission (Copernicus Hyperspectral Imaging Mission for the Environment), an international consortium is developing the CHIME End-to-End Simulator (CHEES) to provide engineers with a tool for predicting system performance and generating test data for the preparation of processing and retrieval algorithms. The Mission Advisory Group has identified soil organic carbon (SOC) and kaolinite abundance as high-priority spectral products, essential for addressing environmental challenges such as climate change and resource monitoring. Accordingly, the GFZ is developing suitable L2B soil and mineral retrieval modules with which both properties can be precisely quantified for demonstration purposes, considering uncertainties and overcoming considerable technical challenges.

Soil Organic Carbon (SOC) Retrieval

SOC is a critical soil property directly linked to climate change, but its retrieval within the CHIME framework faces several challenges. These include assessing pure bare soil at a 30 m spatial resolution, the effects of variable viewing angles within a 110 km swath, and the localized nature of existing SOC databases for training retrieval models. To address these issues, the Level 2 Bare Soil (L2BS) module employs a two-step approach:

1. Pixel preselection: CHIME L2A data undergo spectral analysis to filter out non-bare-soil pixels. Indices like NDRBI, NDBI, and NSMI help exclude pixels containing water, green/dry vegetation, or moisture, leaving pure bare soil pixels for further analysis.
2. SOC retrieval: A state-of-the-art machine learning workflow is employed, trained on CHIME L2A spectral information and soil campaign datasets for local calibration. The workflow tests a wide range of spectral transformations (e.g., absorbance, spectral derivatives) and different ML algorithms (RF, PLS, SVM, GP) for the SOC regression.

Results show promising SOC estimations, with the GPR approach outperforming the precedent mathematical models tested in this activity, achieving an R^2 of 0.89 and an RMSE of 2.31. Future work focuses on expanding test areas, refining pixel selection, addressing observation angle impacts, and extending the training database from regional campaign datasets to continental soil spectral libraries (e.g., EU LUCAS) to include a wide range of soil types and conditions. These developments are tested with the current generation of hyperspectral EO tasking missions (e.g., EnMAP).

Kaolinite Abundance Retrieval

Kaolinite represents a group of minerals with diagnostic absorption features around 2165 and 2207 nm. Its remote sensing detection poses unique challenges due to its coexistence with spectrally very similar minerals and the ambiguous definition of "abundance" in the geologic remote sensing community. The Level 2 Bare Mineral (L2BM) module tackles these issues through a two-step process:

1. Pixel preselection: An artificial neural network (multilayer perceptron) trained on extensive synthetic mixtures of kaolinites and other surface materials identifies kaolinite-bearing pixels. The enriched training and validation data include kaolinite fractions ranging from 5-100%, achieving an overall accuracy of more than 98%.
2. Kaolinite content analysis: Two retrieval methods were employed:
  • A linear regression-based model using the Triangle Area Feature (TRF) approach, which is quite robust against spectral mixing, especially with other clay minerals.
  • GPR, offering superior performance and direct uncertainty quantification.

Validation with synthetic validation data yielded high accuracy (R^2 = 0.874 and RMSE = 0.086 for TRF; R^2 = 0.976 and RMSE = 0.037 for GPR). In addition to these spectral (areal) abundances of the very top surface layer, the weight percentage (volumetric abundance) can also be calculated using a local (site-specific) model for test purposes; the coating property is considered here. Initial tests with EnMAP data produced credible abundance maps. Future research includes expanding test sites, refining volumetric abundance models, and conducting laboratory mixing experiments using mineral samples.

Conclusion

Both SOC and kaolinite retrieval modules demonstrate significant progress in addressing the challenges of the planned CHIME hyperspectral satellite mission. The adoption of advanced machine learning models, extensive validation with synthetic and real datasets, and future expansions to incorporate greater variability and field measurements will serve the CHIME mission to deliver high-accuracy, scalable solutions for soil and mineral analysis. These advancements contribute to understanding climate change impacts and managing natural resources.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall N1/N2)

Presentation: CHIME L2A: Correction of the atmosphere’s effects on the CHIME hyperspectral data to retrieve surface and water leaving reflectance for global land and water applications

Authors: Raquel De Los Reyes, Martin Bachmann, Pieter De Vis, Belen Franch, Peter Gege, Andreas Hueni, Maximilian Langheinrich, Jerome Louis, Peter Schwind, Astrid Zimmermann, Kevin Alonso, Valentina Boccia, Tobias Storch
Affiliations: German Aerospace Center (DLR), National Physical Laboratory (NPL), Universidad de Valencia, Universitaet Zuerich, Telespazio France, Starion Group c/o European Space Agency, ESRIN, European Space Agency (ESA)
In 2028, the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) is planned to join the family of Earth Observation missions covering land and coastal areas in the optical domain, including SBG, EMIT, Sentinel-2, -3, PRISMA, Landsat, DESIS, EnMAP and LSTM, among many more. The imaging spectrometer covers the VNIR-SWIR wavelength range from 400 to 2500 nm at intervals of around 10 nm. CHIME is designed to map a swath of around 130 km with a ground sampling distance of 30 meters. The CHIME L2 project is in charge of developing a collection of processors intended to provide different levels of processed CHIME products to the scientific community. After the correction of the atmospheric effects (CHIME L2A) in the CHIME L1 products, CHIME L2 will implement a L2B processor to apply well-established approaches for the retrieval of Canopy and Leaf Nitrogen and Water Content and Leaf Mass per Area for vegetated pixels, and Soil Organic Carbon (SOC) content and Kaolinite abundance for non-vegetated pixels. To account for the influence of the cover fractions of green photosynthetic vegetation (PV), dry non-photosynthetic vegetation (NPV) and bare soil per pixel, these will be estimated by an improved and tailored pixel classification. Aiming for the densest and most consistent time series, CHIME L2 includes a L2H/F (harmonized / fused) processor which extends the Level 2A processing to the SBG-VSWIR NASA JPL hyperspectral mission by considering co-registration to a reference image, inter-calibration, and spectral band adjustment between the two missions. Therefore, the CHIME atmospheric correction must deliver more accurate and better characterized atmospherically corrected L2A products. The CHIME L2A processor is built on the current EnMAP L2A land processor (PACO, Python Atmospheric COrrection), incorporating algorithms to characterize water types and retrieve water leaving reflectance products.
One of the main challenges at this processing step is the interdependency of the signals received from both types of surfaces, especially in inland and coastal areas, and their interaction with the atmosphere. The unification of a land and water processor into a single, more complete atmospheric correction package will help to achieve this. Once the atmosphere in the remote sensing scene has been characterized, the WASI (Water Colour Simulator) algorithms characterize and correct the pixels classified as water surface and produce the water leaving reflectance products as part of the CHIME L2A products. The results will be CHIME L2A products with a unique pixel classification and atmosphere characterization for two ground reflectance products that will be compliant with the current CEOS ARD (Analysis Ready Data) product specifications. The CHIME L2A products will include a quantification of the surface anisotropy using a hyperspectral Bidirectional Reflectance Distribution Function (BRDF) model, giving users the possibility to apply it. In addition, the retrieval of the uncertainties of the final ground reflectance products is expected to minimize the current discrepancies in further applications, like trend analysis based on time series. All the CHIME L2 project processors will have an open source license and will therefore be available to the full Earth Observation community.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall N1/N2)

Presentation: A novel tool for adapting land atmospheric correction to water

Authors: Dr Peter Gege
Affiliations: DLR
All currently available multi- and hyperspectral satellite sensors with a spatial resolution better than 50 m have been developed for land applications, while dedicated ocean colour sensors for monitoring open water have a resolution of 250 m or coarser. As this is not sufficient for most inland waters and shallow water zones, or for resolving small-scale features in coastal areas, land sensors such as Sentinel-2, Landsat-8/9, EnMAP or PRISMA are often used here. The Level-2 products of land sensors, i.e. images at bottom of atmosphere in units of reflectance, are derived using an atmospheric correction which is optimized for land (except EnMAP, for which water-specific Level-2 products also exist). As the applied methods for determining the atmospheric properties are not well suited over water, significant errors can be introduced into the derived reflectance. Furthermore, reflectance is not corrected for the specular reflections of the sun and the sky at the water surface. These add a wavelength-dependent offset which can be larger than the water reflectance itself. For these reasons the Level-2 products of land sensors are commonly considered unsuitable for water, and water-specific atmospheric correction software such as Acolite, C2RCC or Polymer is applied to top-of-atmosphere Level-1 data to derive water leaving reflectance, also called remote sensing reflectance (Rrs). A novel tool has been developed and implemented in the latest version 7 of the open-source software WASI (available at https://ioccg.org/resources/software/) which can be applied directly to Level-2 data to convert atmospherically corrected image data to Rrs. It is based on well-established physical models of water reflectance and specular reflections at the water surface. Two methods, inverse modelling and image-specific neural networks, are combined to process all water pixels. Errors caused by spectral ambiguities are quantified by comparing the results of the two methods.
The application of the Rrs tool is illustrated using examples from different satellites (Sentinel-2, Landsat-9, EnMAP, DESIS) for different water types and varying sun glint conditions. Validation results are presented using multispectral in-situ measurements from AERONET-OC stations and hyperspectral in-situ measurements from field spectrometers. The algorithms used in the Rrs tool will be implemented in ESA’s atmospheric correction software for CHIME, which is currently under development at DLR.
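The actual Rrs tool combines inverse modelling with image-specific neural networks; as a deliberately simplified illustration of the core problem it solves (removing the specular surface-reflection offset from Level-2 reflectance), a black-pixel-style correction can be sketched. It assumes a negligible NIR water-leaving signal and a spectrally flat offset, and all spectra are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
wl = np.linspace(400, 900, 120)                     # band centers in nm

# Hypothetical L2A bottom-of-atmosphere reflectance of a water pixel:
# a water-leaving signal that vanishes in the NIR, plus a flat glint offset
water = 0.02 * np.exp(-((wl - 560) / 80.0) ** 2)    # peaks in the green
glint = 0.015                                       # sun/sky specular offset
rho = water + glint + rng.normal(0, 5e-4, wl.size)

# Black-pixel assumption: water-leaving reflectance ~ 0 beyond ~850 nm,
# so the mean NIR signal estimates the (here spectrally flat) glint offset
offset = rho[wl > 850].mean()
rrs = (rho - offset) / np.pi    # remote sensing reflectance in sr^-1
```

In reality the surface-reflection offset is wavelength dependent, which is exactly why WASI fits physical models per pixel instead of subtracting a constant; the sketch only shows why an uncorrected offset can exceed the water signal itself.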
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall N1/N2)

Presentation: Uncertainty Assessment and Propagation for the CHIME L2A Processor

Authors: Astrid M. Zimmermann, Pieter de Vis, Andreas Hueni, Carmen Meiller, Mike Werfeli
Affiliations: National Physical Laboratory, Remote Sensing Laboratories, University of Zurich
Satellite observation data downloaded by users is usually pre-processed. The processing is done at different levels (e.g. L1B, L1C, L2A), where for example radiometric, geometric, and atmospheric corrections are applied. Each of these correction steps will inevitably introduce uncertainties to the resulting product, due to the external data sources used (e.g. DEM or atmospheric data) or through the processing algorithm itself. Uncertainties are defined as the quantification of doubt about the measurement result and hence need to be accounted for to enable a faithful interpretation of the data and to ensure comparability through time as well as between different missions. Therefore, these uncertainties need to be identified, assessed, validated, and propagated through the processing chain. CHIME is a two-satellite hyperspectral mission to be launched at the end of this decade, focusing on multi-purpose land and water imagery globally. The spectrometer is designed to have over 200 bands ranging from 400 nm (VIS) to 2500 nm (SWIR), a swath width of 130 km and a ground resolution of 30 m, enabling a repeat cycle of ~12 days (with both satellites). Hyperspectral data, due to its large data volumes, introduces an additional challenge compared to multispectral data through increased storage demands and processing times. When adding uncertainties to this data, one needs to consider that each uncertainty component typically takes up as much space as the data variable itself, and adding a complete error correlation (i.e. between all spectral/spatial pixels) becomes impossible. Therefore, evaluating and propagating uncertainties becomes very time and space consuming for recent (PRISMA, EnMAP) and upcoming (CHIME, FLEX) hyperspectral missions. We will explore some potential solutions for managing the data volume, such as storing the error-correlation information separately per dimension (spectral/spatial), and with fewer bits.
We will present the current status of the uncertainty analysis of the L2A/S processor developed for the upcoming CHIME mission. We performed an uncertainty analysis in the form of an uncertainty tree diagram, based on the FIDUCEO project and QA4EO guidelines, highlighting the measurement function for each processing step, its sources of uncertainty, and their propagation path. By propagating those uncertainties, we get a first high-level estimation of uncertainties per algorithm module and per L2A CHIME product. Based on the contribution of each uncertainty component, a methodology will be developed to improve uncertainty storage and processing time, e.g. combining different sources of uncertainty into a few components with different error-correlation structures. We show how these approaches are used in practice in the implementation of uncertainty propagation through the CHIME L2A processing chain. We will present our preliminary results, as well as ways to improve storage and time costs.
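A minimal Monte Carlo sketch of the kind of propagation described above: a toy measurement function with one random (uncorrelated across bands) and one systematic (fully correlated) input uncertainty, and the spectral error-correlation matrix stored separately in fewer bits. The measurement function, uncertainty magnitudes, and all numbers are invented for illustration and are not the CHIME L2A values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy measurement function: surface reflectance from TOA radiance (illustrative)
def surf_refl(L_toa, E_sun, t_atm):
    return np.pi * L_toa / (E_sun * t_atm)

n_bands, n_mc = 50, 2000
L = rng.uniform(20, 80, n_bands)        # per-band TOA radiance
E = np.full(n_bands, 1500.0)            # solar irradiance (held fixed here)
t = np.full(n_bands, 0.8)               # atmospheric transmittance

# Input uncertainties: 1% random noise on L, 2% systematic error on t
u_L, u_t = 0.01 * L, 0.02 * t
samples = np.empty((n_mc, n_bands))
for i in range(n_mc):
    Li = L + rng.normal(0, u_L)          # independent draw per band
    ti = t + rng.normal(0, 1) * u_t      # one draw shared by all bands
    samples[i] = surf_refl(Li, E, ti)

u_total = samples.std(axis=0)           # combined per-band uncertainty

# Storage idea from the abstract: keep the spectral error-correlation matrix
# separately, in fewer bits (float16), instead of a full per-pixel covariance
corr = np.corrcoef(samples.T).astype(np.float16)
```

Because the transmittance error is drawn once per iteration, its contribution is fully correlated across bands, and the stored correlation matrix separates that structure from the per-band magnitudes in `u_total`.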

Wednesday 25 June 11:30 - 13:00 (Hall N1/N2)

Presentation: Towards an operational BRDF correction for CHIME: evaluating the transferability of HABA BRDF algorithm to hyperspectral domain

Authors: César José Guerrero Benavent, Belen Franch Gras, Italo Moletto Lobos, Sébastien Saunier, Raquel De Los Reyes, Peter Schwind, Tobias Storch
Affiliations: Global Change Unit (UCG), Image Processing Laboratory (IPL), University of Valencia (UV); Dept. of Geographical Sciences, University of Maryland; Telespazio France, Satellite System and Operation; German Aerospace Center (DLR), Earth Observation Center (EOC)
BRDF (Bidirectional Reflectance Distribution Function) describes the anisotropy of a given surface by representing its reflectance for different viewing-illumination geometries. This function allows directional reflectance to be normalized to a common geometry, reducing noise in time series. In the multispectral domain, BRDF methods rely on inverting the BRDF model coefficients using several observations of the same area over a limited period from various sun-view geometries. For medium- and high-resolution sensors with narrow angular sampling, BRDF retrieval becomes more difficult. HABA (High resolution Adjusted BRDF Algorithm), proposed by Franch et al. (2019), is used to normalize the Sen2like product and is based on disaggregating coarse-resolution MODIS BRDF parameters inferred from the VJB method to the Sen2like level. There is limited research on the applicability of kernel-based BRDF correction models to hyperspectral imaging, and it is mostly focused on airborne data mosaics and on anisotropy datasets premeasured in a laboratory environment. However, these methods are far from being operational. Within the framework of upcoming hyperspectral missions such as CHIME, there is a need for a well-established operational hyperspectral BRDF correction. This work explores the adaptability of the HABA algorithm to the retrieval of BRDF parameters in the hyperspectral domain. The first approach consists of a linear interpolation of the multispectral BRDF parameters retrieved by HABA to generate the hyperspectral ones, based on the hypothesis of highly correlated BRDF properties over nearby wavebands, while accounting for spectral consistency and integrity, which are critical in the hyperspectral domain. We evaluate this approach over two sites, in Gobabeb (Namibia) and Panzerwiese (Munich, Germany).
After implementing HABA, methods for transferring the model to the hyperspectral domain will be evaluated to correct EnMAP data, by studying the noise reduction and comparing the results with in-situ nadir measurements. In addition, we provide the results of a field campaign in which different sets of angular measurements with an ASD spectroradiometer were carried out over an orange tree field in Valencia (Spain). Four main blocks of measurements were planned: first, angular measurements taken from a crane to characterize the canopy BRDF along the solar principal plane, varying only the observation angle and repeating each set at a different Solar Zenith Angle (SZA); second, measurements from the crane but pointing the spectroradiometer at nadir (fixed View Zenith Angle, VZA), varying the SZA; third, angular measurements analogous to the first ones but using a goniometer to characterize the BRDF at tree level; and fourth, angular multispectral drone measurements using the MAIA sensor. Additionally, different EnMAP observations were planned over the area. With the multiangular sampling performed at different scales, different sets of BRDF coefficients were retrieved and subsequently intercompared. The different EnMAP observations were normalized to a common geometry with each set of BRDF coefficients, showing at first instance the best correction capacity for the canopy-level coefficients retrieved with the crane at nadir. Furthermore, interpolation proved to be a good approach for retrieving hyperspectral BRDF coefficients, and HABA BRDF normalization showed promising results, with corrections as favorable as those of the other methods, highlighting the potential of the HABA algorithm for application to hyperspectral data.
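The spectral interpolation idea described above can be sketched as follows. The band centres, coefficient values and kernel values are purely illustrative (not HABA or CHIME outputs); the model form (isotropic, volumetric and geometric kernel terms) follows the standard kernel-driven Ross-Li style formulation:

```python
import numpy as np

# Band centres (Sentinel-2-like) and coefficient values are purely illustrative.
ms_wl = np.array([490.0, 560.0, 665.0, 842.0, 1610.0, 2190.0])  # nm
f_iso = np.array([0.05, 0.08, 0.10, 0.30, 0.28, 0.20])
f_vol = np.array([0.02, 0.03, 0.04, 0.12, 0.10, 0.08])
f_geo = np.array([0.01, 0.01, 0.02, 0.05, 0.04, 0.03])

hs_wl = np.arange(400.0, 2501.0, 10.0)  # dense hyperspectral wavelength grid

# Linear interpolation of each kernel coefficient to the hyperspectral grid;
# np.interp clamps outside the sampled range, keeping coefficients bounded.
f_iso_hs = np.interp(hs_wl, ms_wl, f_iso)
f_vol_hs = np.interp(hs_wl, ms_wl, f_vol)
f_geo_hs = np.interp(hs_wl, ms_wl, f_geo)

# Reflectance for one sun-view geometry in a kernel-driven (Ross-Li style)
# model; the kernel values k_vol, k_geo would come from the actual angles.
k_vol, k_geo = 0.4, -0.1
r_hs = f_iso_hs + f_vol_hs * k_vol + f_geo_hs * k_geo
```

In practice the interpolation would need the spectral-consistency safeguards mentioned in the abstract, since a purely piecewise-linear coefficient curve can misrepresent sharp absorption features between the multispectral band centres.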

Wednesday 25 June 11:30 - 13:00 (Room 1.15/1.16)

Session: A.07.05 Monitoring and predicting surface water and flood dynamics - PART 2

The socio-economic consequences of floods are rising rapidly, as floods are the most frequent and impactful weather-related disasters, affecting nearly 800 million people in the past decade and causing economic losses exceeding $300 billion. In this context, remote sensing has emerged as a critical tool for data collection and observation, especially in regions where field surveys and gauging stations are limited, such as remote areas and developing nations. The integration of remotely-sensed variables—like digital elevation models, river width, flood extent, water level, flow velocities, and land cover—into hydraulic models offers the potential to significantly enhance our understanding of flood processes and improve predictive capabilities.

Over recent decades, research has focused on optimising the use of satellite observations, supported by both government and commercial initiatives, and numerous datasets from airborne sensors, including aircraft and drones. Recent advancements in Earth observation (EO) have further enhanced the monitoring of floods and inland water dynamics, utilising optical imagers, Synthetic Aperture Radars (SARs), and Global Navigation Satellite System Reflectometry (GNSS-R) to detect surface water, even in densely vegetated regions. Radar altimeters now measure water levels over smaller lakes and rivers. However, despite these advancements, the update frequency and timeliness of most remote sensing data products are still limited for capturing dynamic hydrological processes, which hinders their use in forecasting and data assimilation. Additionally, spatial and temporal inconsistencies across different sensors pose challenges in creating integrated multi-sensor products, such as fused surface water and flood extent products, water volume estimates, and wetland maps.

The scientific community has increasingly recognized the potential of remotely-sensed data for calibrating and validating hydraulic models, and to revolutionise real-time flood monitoring. With the expansion of open data from sources such as the European Space Agency (ESA), and the availability of more Earth observation data than ever before, this progress is expected to continue.

This session invites cutting-edge presentations on flood monitoring and mapping through remotely-sensed data, focusing on:

- Remote sensing data for flood hazard and risk mapping, including commercial satellite missions and airborne sensors (aircraft and drones);
- Remote sensing techniques for monitoring flood dynamics;
- The use of remotely-sensed data for calibrating or validating hydrological or hydraulic models;
- Data assimilation of remotely-sensed data into hydrological and hydraulic models;
- Enhancements in river discretization and monitoring through Earth observations;
- River flow estimation using remote sensing;
- Machine learning and deep learning-based flood mapping or predictions;
- Ideas for developing multi-satellite data products and services to improve the monitoring of flood and surface water dynamics.

Wednesday 25 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: A Novel Approach to Mapping Muddy Floodplains: Insights from the Valencia Flood

Authors: Emanuele Alcaras
Affiliations: Parthenope University Of Naples
Flooding is a catastrophic natural disaster that continues to have profound societal and economic impacts on affected communities worldwide. Effective and timely mapping of flooded areas is essential for guiding disaster response and recovery efforts, but this remains a complex and challenging task, particularly in the immediate aftermath of heavy rainfall events. One of the primary challenges is the frequent presence of persistent cloud cover, which often limits the utility of satellite-based observations. This issue is compounded by the fact that many widely used flood mapping techniques depend on spectral bands, such as near-infrared (NIR) or shortwave infrared (SWIR), that are unavailable on the RGB sensors typically found on low-cost drones and other readily deployable imaging platforms. These limitations highlight the urgent need for innovative tools that can overcome these barriers, providing accurate and rapid flood mapping under constrained conditions. In this context, the present study introduces the Flood Mud Index (FMI), a novel spectral index specifically designed to address these challenges. Unlike existing indices, the FMI relies exclusively on the red and blue bands of the electromagnetic spectrum, making it fully compatible with standard RGB sensors. This compatibility is particularly advantageous in scenarios requiring immediate deployment, such as post-flood assessments, where drones equipped with RGB cameras can be used to capture imagery even under adverse conditions like heavy cloud cover. The FMI was developed to address a critical gap in current flood mapping methodologies: the ability to detect muddy floodwaters, a common feature of flood events that involve significant sediment transport. These muddy conditions often complicate the application of traditional indices, which are generally optimized for detecting clear or shallow water. 
The need for such a specialized tool was underscored by the recent catastrophic flood event that struck Valencia on October 29, 2024. This disaster followed an unprecedented period of heavy rainfall, which resulted in widespread inundation across both urban and rural areas. One of the most challenging aspects of the Valencia flood was the significant sediment transport that accompanied the rising waters, leading to the formation of extensive muddy floodplains. These high-turbidity conditions rendered many traditional flood mapping indices inadequate, as their algorithms are not designed to differentiate between muddy waters and other surfaces with similar spectral characteristics. The disaster caused immense damage, claiming over 200 lives and leaving a trail of destruction that affected homes, infrastructure, and livelihoods. The scale and complexity of the event highlighted the critical need for accurate flood mapping tools that could effectively capture the unique characteristics of such scenarios, enabling responders to allocate resources efficiently and support recovery efforts. Existing flood mapping indices have long been regarded as valuable tools for delineating water-covered areas, but they each come with limitations. Among the most widely used indices are the Normalized Difference Water Index (NDWI), the Modified Normalized Difference Water Index (MNDWI) using SWIR1 and SWIR2 bands, the Automated Water Extraction Index (AWEI) in its shadow and non-shadow variants, and the Normalized Difference Flood Index (NDFI). While these indices have demonstrated success in mapping clear or shallow waters, their reliance on spectral bands such as SWIR and NIR limits their applicability when only RGB-equipped sensors are available. Furthermore, these indices often struggle to accurately delineate high-turbidity conditions like those observed during the Valencia flood. 
For instance, while the NDWI is highly effective at detecting water bodies in clear environments, it fails to distinguish between water and muddy surfaces due to its dependence on the spectral contrast between vegetation and water. Similarly, the MNDWI variants, although optimized for urban environments, rely on SWIR bands, which are not accessible in standard RGB imagery. AWEI and NDFI, while offering improvements in certain scenarios, still fall short in accurately identifying turbid floodwaters, particularly when sediment levels are high. In response to these limitations, the Flood Mud Index (FMI) was developed as a simple yet robust tool tailored to the detection of muddy floodwaters. The FMI leverages the distinct spectral characteristics of muddy water, which exhibits higher reflectance in the red band compared to the blue band. By focusing on this specific spectral behavior, the FMI achieves high accuracy in delineating turbid flood zones, even under conditions that are challenging for traditional indices. The simplicity of the FMI’s formulation is another key advantage, as it enables rapid computation and easy implementation across a wide range of imaging platforms. This accessibility is particularly important in the context of disaster response, where time is of the essence and resources are often limited. To validate the FMI, the study utilized Landsat 8 data, a widely recognized benchmark for spectral analysis due to its high radiometric resolution and well-calibrated bands. Validation was conducted across multiple test sites within the Valencia region, encompassing a diverse range of flood-affected landscapes. The FMI’s performance was evaluated against existing indices, including the NDWI, MNDWI (SWIR1 and SWIR2), the non-shadow and shadow AWEI variants (AWEI-NS, AWEI-S), and NDFI, using confusion matrices to assess key accuracy metrics.
These metrics included User Accuracy (UA), Producer Accuracy (PA), and Overall Accuracy (OA), which collectively provide a comprehensive measure of an index’s ability to correctly classify flooded and non-flooded areas. The results of this comparative analysis demonstrated the FMI’s clear superiority over existing indices. The FMI achieved an overall accuracy of 97.64%, significantly surpassing the performance of all other indices evaluated. In addition to its high OA, the FMI exhibited near-perfect UA and PA for both the “mud” and “no-mud” classes, highlighting its robustness in detecting muddy floodwaters while minimizing false positives. This performance is particularly noteworthy given the challenging conditions posed by the Valencia flood, where traditional indices struggled to accurately capture the extent of inundation. One of the most compelling advantages of the FMI is its compatibility with RGB sensors, which are widely available and cost-effective. This feature enables the FMI to be deployed using a variety of platforms, including consumer-grade drones, smartphones, and other accessible imaging devices. Unlike indices that rely on more complex and expensive sensors equipped with SWIR or NIR bands, the FMI can be applied in a broad range of scenarios, making it an ideal tool for emergency response and post-disaster assessments. Its ability to operate effectively under heavy cloud cover further enhances its utility, as it ensures that critical information can be gathered even in adverse weather conditions. The implications of the FMI extend beyond flood mapping to a variety of applications in disaster management, environmental monitoring, and urban planning. For instance, the FMI can be used to assess sediment transport and deposition patterns in riverine and coastal environments, providing valuable insights into the dynamics of flood events. 
In urban planning, the FMI can support the development of mitigation strategies by identifying areas most vulnerable to flooding, particularly in regions prone to high-turbidity conditions. Moreover, the FMI’s simplicity and accessibility make it an invaluable tool for community-based disaster preparedness initiatives, enabling local stakeholders to contribute to monitoring efforts and improve resilience to future floods. The case study of the Valencia flood serves as a compelling demonstration of the FMI’s potential to revolutionize flood mapping. By providing accurate and timely maps of affected areas, the FMI enabled responders to identify priority zones for intervention, allocate resources more effectively, and support recovery efforts with greater precision. The ability to rapidly delineate muddy floodwaters was particularly critical in this context, as it allowed authorities to address the unique challenges posed by high-turbidity conditions, ensuring that response efforts were tailored to the specific characteristics of the disaster. In conclusion, the Flood Mud Index (FMI) represents a significant advancement in the field of flood mapping. By addressing the limitations of traditional indices and offering a simple, accurate, and widely applicable solution, the FMI has the potential to greatly improve disaster response and recovery efforts worldwide. Its performance during the Valencia flood underscores its value as a robust tool for detecting muddy floodwaters, providing a critical foundation for more effective and resilient disaster management strategies in the future.
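Since the abstract states only that the FMI uses the red and blue bands, and that muddy water reflects more strongly in red than in blue, a hedged sketch assuming a normalised-difference form (the published FMI formulation may differ) could look like:

```python
import numpy as np

def mud_index(red: np.ndarray, blue: np.ndarray) -> np.ndarray:
    """Hypothetical red/blue normalised difference; positive where red > blue."""
    red = red.astype(np.float64)
    blue = blue.astype(np.float64)
    total = red + blue
    # Guard against division by zero for dark pixels (numerator is also zero).
    return (red - blue) / np.where(total == 0, 1.0, total)

# Toy pixels: muddy water (high red), clear water (high blue), bright soil.
red = np.array([0.30, 0.05, 0.06])
blue = np.array([0.10, 0.12, 0.04])
index = mud_index(red, blue)
mud_mask = index > 0.2  # threshold is illustrative; tune against reference data
```

Because the index uses only the red and blue channels, it can be computed directly from any RGB camera, which is the accessibility argument the abstract makes for drone-based deployment.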

Wednesday 25 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Global Validation of an Automatic Sentinel-1-based Flood Mapping Method

Authors: Florian Roth, Mark Edwin Tupas, Claudio Navacchi, Jie Zhao, Wolfgang Wagner, Bernhard Bauer-Marschallinger
Affiliations: Department of Geodesy and Geoinformation, Technische Universität Wien, Department of Geodetic Engineering, University of the Philippines Diliman, Chair of Data Science in Earth Observation, Technical University of Munich
The increasing frequency of flood events and number of affected people, as reported by the Intergovernmental Panel on Climate Change (IPCC), highlights the importance of reliable and efficient satellite-based flood information. Synthetic Aperture Radar (SAR) data has proven highly effective for mapping flood extents, thanks to its strong contrast between land and water and its ability to operate in all weather conditions. As a result, numerous advanced SAR-based flood mapping methods have been developed, with some actively employed in operational services, such as those run by the European Commission and UNOSAT. These products play a vital role in supporting emergency management, enhancing flood prediction, and contributing to flood risk assessment. The critical nature of these applications requires a thorough understanding of the performance of the flood mapping methods utilized. However, validation studies that could ensure such understanding are underrepresented in the literature, and often, publications show insufficient validation methods (as stated by Amitrano et al., 2024). Our Bayesian flood mapping algorithm functions as one of the three scientific algorithms in the Copernicus Emergency Management Service (CEMS) Global Flood Monitoring (GFM) component. In the GFM service, flood maps are automatically generated from every incoming Sentinel-1 scene, regardless of whether flooding is present or not. This strategy requires the algorithm to be robust and capable of handling diverse land cover types and changes, including those that may mimic flood-like behaviour, such as vegetation growth or snow cover. The GFM products are systematically assessed, but the performance of the individual algorithms can only be determined indirectly. To bridge this validation gap and consider the challenging requirements of GFM, we designed a dedicated validation study (Roth et al., in preparation) to learn about the strengths and weaknesses of our approach.
For this, an unbiased selection of 18 flood events distributed over 5 continents was made from the CEMS on-demand mapping (ODM) archive. Well-selected and robust accuracy metrics were computed for all events by comparing our results to the ODM products. To complement the quantitative validation, we analysed 8 representative events qualitatively to investigate the underlying differences. The findings of this study not only demonstrate the overall performance of our algorithm, but also highlight potential areas for improvement, both for the individual algorithm and for GFM in general. In this contribution, we address the challenges encountered in conducting a global validation of an automatic flood mapping algorithm and outline the strategies we employed to overcome them. Key aspects include the careful selection of appropriate accuracy metrics and the identification of reliable reference data. The validation study described above (Roth et al., in preparation) has yielded valuable insights into the effects of using different polarizations for flood mapping and the limitations of our approach in capturing small-scale flood events. Building on these findings, we have refined our method and now aim to re-conduct and present the validation study using the enhanced approach to further assess its performance. Additionally, we extend the study by incorporating an assessment of no-flood situations, thereby providing a more complete evaluation of the algorithm's performance. References: Amitrano, D., Di Martino, G., Di Simone, A., & Imperatore, P. (2024). Flood detection with SAR: A review of techniques and datasets. Remote Sensing, 16(4), 656. Roth, F., Tupas, M. E., Navacchi, C., Zhao, J., Wagner, W. & Bauer-Marschallinger, B. (in preparation). Evaluating the robustness of Bayesian flood mapping with Sentinel-1 data: a multi-event validation study. Science of Remote Sensing.
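Validation studies of this kind typically score a candidate flood map against a reference map with confusion-matrix metrics. The sketch below uses the common User's/Producer's/Overall Accuracy definitions from EO accuracy assessment; the metrics actually chosen in the study may differ:

```python
import numpy as np

def accuracy_metrics(pred: np.ndarray, ref: np.ndarray) -> dict:
    """pred, ref: boolean maps (True = flooded)."""
    tp = np.sum(pred & ref)    # correctly mapped flood
    fp = np.sum(pred & ~ref)   # false alarms
    fn = np.sum(~pred & ref)   # missed flood
    tn = np.sum(~pred & ~ref)  # correctly mapped dry land
    return {
        "user_accuracy": tp / (tp + fp),      # reliability of the flood class
        "producer_accuracy": tp / (tp + fn),  # completeness of the flood class
        "overall_accuracy": (tp + tn) / pred.size,
    }

# Tiny example map (flattened pixels).
pred = np.array([1, 1, 0, 0, 1, 0], dtype=bool)
ref = np.array([1, 0, 0, 1, 1, 0], dtype=bool)
m = accuracy_metrics(pred, ref)
```

Note that overall accuracy alone is misleading for rare-event classes such as flood pixels, which is one reason validation studies report the per-class user's and producer's accuracies as well.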

Wednesday 25 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Monitoring surface water variations from optical imagery, laser altimetry, time-variable gravity and permanent GNSS stations

Authors: Jean-Paul Boy, Claudia Carabajal
Affiliations: EOST/ITES, SSAI Inc @ NASA GSFC
Although surface water is one of the key components of the continental hydrological cycle, it is not taken into account in most global hydrological models. In this study, we estimate surface water mass changes using various remote sensing techniques. First, we derive surface water extent from MODIS optical and infrared imagery. Water elevation is estimated using laser altimetry from the ICESat-2 mission, launched in September 2018. The water masks computed from MODIS data are also used to filter out returns from the ICESat-2 ATL13 Inland Water products. We validate our elevation series against state-of-the-art radar altimetry products and available in-situ water level data. We also show that ICESat-2 altimetry products can be used to improve geoid models over large water bodies. Furthermore, we derive surface water (lake and reservoir, but also floodplain) mass variations from high-resolution time-variable gravity solutions (mascons) from the GRACE Follow-On mission, launched in May 2018. We show good agreement between the water volume changes computed from ICESat-2 altimetry and MODIS imagery and the mass changes derived from GRACE-FO. Variations in continental water storage also induce deformation, which can be measured with high accuracy by permanent GNSS stations. We show that adding the loading effects of surface water changes, in addition to the other hydrological components, significantly improves the agreement with the observed vertical displacements.
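The consistency check between imagery/altimetry-derived volumes and GRACE-FO mass changes rests on simple arithmetic: extent times level change gives a volume, and 1 km³ of water corresponds to about 1 Gt. A back-of-envelope sketch with purely illustrative numbers:

```python
# Illustrative numbers only.
area_km2 = 1500.0  # surface water extent, e.g. from a MODIS water mask
dh_m = 0.35        # water-level change, e.g. from ICESat-2 altimetry

dv_km3 = area_km2 * dh_m / 1000.0  # volume change (km^2 * m -> km^3)
mass_gt = dv_km3 * 1.0             # 1 km^3 of water ~ 1 Gt

print(f"dV = {dv_km3:.3f} km^3  ~  {mass_gt:.3f} Gt")
```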

Wednesday 25 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Compound Flood Mapping with Multi-Sensor Fusion and Foundation Models

Authors: Mirela G. Tulbure, Dr. Mark Broich, Om Jain, Sanay Shah, Dr. Júlio Caineta, Dr. Mollie Gaines, Dr. Hamed Alemohammad
Affiliations: Center For Geospatial Analytics, NC State, Labcorp, Clark University
Floods are the most significant weather-related hazard, causing loss of life and substantial economic damage. In 2024, the warmest year on record, we have witnessed numerous extreme climate-driven flood events across the globe. From catastrophic flooding in Vietnam, Nigeria, and Brazil to severe storms such as hurricanes Helene and Milton in the southeastern USA and flooding in Valencia, Spain, in October, these events have had devastating impacts across four continents. Extreme rainfall has resulted in the deaths of thousands and has submerged entire towns. Earth observation (EO) provides a valuable data source for flood mapping due to its extensive coverage and frequent revisit times, with over 800 petabytes of data currently available. Recent advances in EO Foundation Models (FMs), such as NASA-IBM's Prithvi, enable efficient utilization of this vast amount of data, showing promise for better generalization than traditional supervised deep learning models. A significant challenge for expanding the applications of supervised AI models lies in their reliance on large labeled datasets. In this context, we introduce FloodsNet, a new harmonized training dataset that combines co-located Sentinel-1 (S1) and Sentinel-2 (S2) data for 85 global flood events. This dataset integrates four existing freely available training datasets. We present results of mapping global flood events using traditional machine learning and deep learning approaches, using both S1 and S2 data and digital elevation models. Additionally, we present the results of fine-tuning the Prithvi model for mapping major flood events using FloodsNet. Our models had accuracies of > 93%. Features derived from S1 data were less important than S2 features but were critical when S2 was unavailable because of cloud cover during the peak of flood events. Integrating complementary data of 10m resolutions and various AI algorithms is essential for evaluating the near-real-time mapping of flood events.

Wednesday 25 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Daily Inundation Extent Forecasting for the Hindu Kush Himalaya by Combining Sentinel-1 SAR with River Discharge Information

Authors: Franz Meyer, Kas Knicely, Victor Devaux-Chupin, Dr. Hyongki Lee, Dr Jim Nelson
Affiliations: University Of Alaska Fairbanks, University of Houston, Brigham Young University
The Hindu Kush Himalaya (HKH) region has experienced rapid change during the recent global warming period, altering meteorological and hydrological extreme events and giving rise to an increased frequency of flood hazards. Today, large regions of the HKH are inundated during every rainy season, and induced floods have accounted for nearly half of all disaster events in the region between 1980 and 2015. Despite these growing hazards from flooding, existing disaster risk reduction systems in the region lack access to daily, low-latency information on flood extent that could assist in disaster response, hazard prediction, and long-term disaster mitigation. To address this information gap, this paper presents HydroSAR-NG, a NASA-funded effort that is developing daily flood extent mapping and forecasting capabilities for the HKH. To this end, the project uses machine learning technology to integrate Sentinel-1 SAR-derived water extent maps with daily river discharge information available across the HKH. Our implemented technology provides daily flood extent forecasting capabilities even if no satellite acquisitions are available. The HydroSAR-NG flood area forecasting service became operational in fall 2024. Data generated by this service are made available to end-users via a dedicated web interface maintained by the International Centre for Integrated Mountain Development (ICIMOD), an in-region partner of the project. The main inputs for our machine learning approach stem from two existing disaster monitoring services maintained by ICIMOD: (1) ICIMOD’s flood inundation service has been providing surface water extent maps to the community since 2018. These data are derived from Sentinel-1 SAR observations and cover a potential flood area once every 10 days; (2) ICIMOD’s Streamflow Prediction tool is based on ECMWF weather modeling data and provides both a deep historic record and a short-term (15 day) forecast of river discharge for all river reaches in the region. 
The HydroSAR-NG forecasting technology accesses these data through public APIs and integrates them using a rotated empirical orthogonal function (REOF) analysis framework composed of three essential steps. Step 1 applies a REOF orthogonal projection to deep historic time-series stacks of SAR-derived water extent maps to extract independent spatiotemporal patterns contained in the data. Step 2 associates these identified patterns with historic river discharge information using polynomial regressions. Step 3 uses these regression models together with 15-day river discharge forecasts from ICIMOD’s Streamflow Prediction tool to derive surface water extent maps for each of the forecasted river discharge dates. This paper introduces the machine learning technology we developed in this effort, including its scientific basis and practical implementation. We describe the main input data sets used in our service and demonstrate its main capabilities and limitations for several hydrobasins in the HKH. We also show results of an extensive performance analysis, in which forecasted inundation extent maps are compared to reference data extracted from satellite observations. Finally, we present example use cases related to the early warning of emerging monsoon floods and the retrospective analysis of previous flood seasons.
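The three steps can be sketched with synthetic data. For simplicity, this sketch uses a plain (unrotated) EOF via SVD in place of the rotated EOF used by HydroSAR-NG, and a single quadratic regression per mode; all variable names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_px = 120, 500  # historic time steps, map pixels

# Synthetic hydrograph and one dominant spatial inundation pattern.
discharge = 50 + 40 * np.sin(np.linspace(0, 12 * np.pi, n_t))
pattern = rng.random(n_px)
water = np.outer(discharge, pattern) + rng.normal(0, 1, (n_t, n_px))

# Step 1: decompose the historic water-extent stack into spatial modes.
mean = water.mean(axis=0)
u, s, vt = np.linalg.svd(water - mean, full_matrices=False)
pcs = u[:, :3] * s[:3]   # temporal coefficients of the leading 3 modes
eofs = vt[:3]            # corresponding spatial patterns

# Step 2: relate each temporal coefficient to discharge (quadratic regression).
coeffs = [np.polyfit(discharge, pcs[:, k], deg=2) for k in range(3)]

# Step 3: map a forecast discharge value to a forecast water-extent map.
q_forecast = 75.0
pcs_fc = np.array([np.polyval(c, q_forecast) for c in coeffs])
water_fc = mean + pcs_fc @ eofs
```

The appeal of this construction is exactly what the abstract describes: once the regressions are trained on the historic archive, a forecast map can be produced from a discharge forecast alone, with no satellite acquisition required on the forecast date.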

Wednesday 25 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Determining total flood extent from SWOT under vegetation: a winey case study

Authors: Christophe Fatras, Quentin Bonassies, Ludovic Cassan, Santiago Peña-Luque, Sophie Ricci, Andrea Piacentini
Affiliations: Collecte Localisation Satellites (CLS), CECI, CERFACS/CNRS UMR 5318, Centre National d’Etudes Spatiales (CNES)
In the frame of the Copernicus emergency rapid mapping service, flood maps must be produced within a few hours of remote sensing data becoming available. These data can be of different types, from either optical or SAR sensors, which present different properties (wavelengths, band availability, resolution, etc.). Production is for the moment semi-automatic, to avoid misclassification and to provide flood maps that are as accurate as possible. For very large areas, and for speed, an accurate automatically produced first guess is needed, to be modified manually where necessary. This is the main reason why machine learning approaches to learning and detecting flooded areas are being developed for both optical and SAR data, as an aid to map producers. Nevertheless, the areas detected are deeply constrained by their direct observability from the sensors: forests and urban areas are badly mapped as flooded or not from current sensors. While multi-bounce effects can be observed in SAR data over flooded forest areas, their impact on the backscattering coefficient can vary strongly from one place to another depending on, among other parameters, tree height and density. This prevents a straightforward machine learning approach in these areas. Launched in December 2022, the SWOT (Surface Water and Ocean Topography) satellite, with its KaRIn instrument, now provides a high-quality, high-resolution radar pixel cloud at least once per 21-day repeat cycle over 97% of the globe. Along with the 2D estimation of water levels, SWOT also provides backscattering and coherent power coefficients. Although Ka-band data from SWOT were not initially expected to observe water under vegetation, the flood event that occurred in Chinon (France) on March 31st, 2024 tends to show otherwise. Indeed, SAR data from Sentinel-1 and SWOT were acquired over this flooded vineyard area of central France within a 3 h timeframe.
While Sentinel-1 properly maps the directly observable flooded areas, SWOT also shows areas that might correspond to flooded forest, compatible with other approaches based on high-resolution DEMs, for instance through relative elevation models. However, concerns about areas with high soil moisture content being detected as flooded still need to be addressed. These comparisons are explored in this work, as they could underline major advances in flood mapping with SWOT-like satellites in the near future.

Wednesday 25 June 11:30 - 13:00 (Room 1.31/1.32)

Session: F.02.17 International Collaborations concerning Satellite EO and Disaster Response

For global disaster risk reduction efforts—like those outlined in the Sendai Framework—to be truly effective, countries must develop and implement strong national and local strategies. A key part of this is understanding the core elements of risk: hazards, exposure, and vulnerability. Satellite EO plays a crucial role in providing accurate, up-to-date information to support disaster risk management and emergency response. Today, several global EO-based initiatives help humanitarian efforts, including the International Charter Space & Major Disasters, UNITAR/UNOSAT satellite mapping, Sentinel Asia, and the Copernicus Emergency Management Service. These tools offer vital real-time insights, but there is still a need for faster data access and better integration across platforms. This session will highlight the critical role EO practitioners play in turning satellite data into actionable insights during emergencies. Experts will discuss how space technology is currently being used in disaster response and explore the future of EO-driven emergency management. Join us for a discussion on how international collaboration and technological advancements are shaping the next generation of disaster response.

Presentations and Speakers:


Rapid Mapping using satellite data with the CopernicusEMS


  • Simone Dalmasso - EC/JRC

The International Charter: Delivering EO Support for Global Disaster Response


  • Charter Board

United Nations capacity-building activities for disaster risk management


  • Lorent Czaran - OOSA

Satellite data to support the international humanitarian community


  • Evariste Karambizi - UNITAR

New Space Earth Observation missions to support disaster risk management


  • Martin Langer - Ororatech

Advanced space technologies for disaster response


  • Linda Tomassini - CNES

The contribution of the International Charter Space and Major Disasters


  • Harshbir Sangha - UKSA
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.61/1.62)

Session: F.04.02 Supporting Global Food Security through Earth Observation - PART 2

Remote sensing technologies play a crucial role in shaping agricultural policies and food systems worldwide. By leveraging satellite imagery and data processing technologies, we can monitor agricultural activities across the globe, from planting to harvest. Earth observation data also provides valuable insights into crop health, soil moisture levels, and land use patterns, allowing policymakers and humanitarian organizations to make informed decisions about resource allocation, disaster response, and food distribution. By harnessing the power of Earth observation, we can enhance resilience in food systems, improve agricultural productivity, and ensure food security for communities around the world.


Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Forecasting Food Security in African Countries at Sub-National Scale: A Deep Learning Approach Using Heterogeneous Data

Authors: Roberto Interdonato, Mehtab Alam Syed, Asha Seif, Simon Madec, Jérémy Lavarenne
Affiliations: CIRAD - INRIA - UMR TETIS, CIRAD - UMR TETIS, UMR TETIS - CentraleSupélec
Food security (FS) is a complex phenomenon that can be described by means of several key components, such as the availability of food, access to food, food utilization (nutrition quality and food safety), and the stability of these aspects over time. Since 2020, the global FS situation has been deteriorating, intensified by the COVID-19 pandemic, with an estimated 820 million people suffering from chronic hunger and malnutrition. Among the many factors that impact FS, we can cite global population growth (which increases malnutrition and makes financial aid challenging) and Land Use and Land Cover (LULC) changes (e.g., due to agricultural land conversion, deforestation, and climate change) that may force farmers to migrate, disrupting local food systems. Estimating food insecurity is therefore crucial for governments and humanitarian organizations to make informed and timely decisions regarding relevant policies and programs. In this study, we leverage Artificial Intelligence (i.e., Machine and Deep Learning methodologies) to predict FS classes at sub-national scale in three African countries: Rwanda, Tanzania, and Burkina Faso. We focus on widely used indicators such as the Food Consumption Score (FCS) and the Household Dietary Diversity Score (HDDS), predicting discrete classes (poor, borderline, and acceptable) from a large pool of heterogeneous data sources. FCS and HDDS are widely used indicators that provide a comprehensive assessment of FS through extensive household surveys. However, these surveys are resource-intensive, time-consuming, unfeasible in disputed zones, and may be biased. Training machine learning models on historical survey data can therefore enable near real-time prediction of FS, as compared to periodic FS assessment through household surveys.
To this end, we compute FCS and HDDS from household surveys conducted over several years in the three target countries. These data serve as ground truth for the proposed model. Our methodology exploits a multi-branch model architecture that includes both 2-D Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach allows us to assess the generalizability and effectiveness of the proposed methodology across diverse climatic and socio-economic contexts. The architecture leverages diverse data sources: Temporal Covariates and Geospatial Raster Datasets. The Temporal Covariates are time-series explanatory variables that capture dynamic aspects of FS over time. These variables include data aggregated monthly over multiple years at the district or commune level, depending on the lowest granularity of the ground truth dataset. We selected the following variables for our study: rainfall, smooth mean temperature (SMT), maximum and minimum temperatures, and maize prices. The Geospatial Raster Datasets provide spatially explicit information in the form of high-resolution maps. In this study, we used two types of raster data: population time-series data and high-resolution, up-to-date LULC raster data. Firstly, the population time-series data are collected from the GHSL, providing population density in a 100 m pixel raster format every five years since 2000. These variables are crucial for FS as they provide insights into the availability, accessibility, and utilization of food resources. Secondly, the up-to-date LULC raster data were obtained from the Esri Sentinel-2 dataset, which provides a high spatial resolution of 10 m. The LULC dataset comprises various classes, but our analysis concentrates on three of them: crops, forests, and built areas, which are extracted and processed as binary rasters.
These classes affect agricultural productivity, biodiversity conservation, and urban development. Our methodology leverages two Deep Learning architectures, LSTM and CNN. LSTMs excel at capturing temporal dependencies in time-series data, such as rainfall patterns or crop yield trends over time. CNNs, on the other hand, are well suited to processing spatial data, such as satellite imagery and land use patterns, which are crucial for understanding FS at a regional level. These two models were therefore selected for their ability to handle heterogeneous data and to effectively analyze both the spatial and temporal aspects of food security prediction. Note that the high heterogeneity of the input data (together with the challenging prediction task) is also the reason why we chose to rely on well-known neural network models such as standard CNNs and LSTMs, which have been widely shown to perform solidly on data and tasks from different domains, rather than on more recent approaches such as Vision Transformers. The classification resulting from our experimental evaluation helps identify vulnerable regions down to third-level administrative divisions. Our comparative analysis across multiple African countries provides valuable insights into the suitability of the heterogeneous data approach for FS assessment. More specifically, the proposed approach achieved significant accuracy scores for Burkina Faso, Rwanda, and Tanzania, demonstrating its effectiveness across these regions. Overall, the CNN model consistently outperforms the LSTM model across all countries for both the FCS and HDDS indicators. The analysis of the results suggests that the spatial features captured by CNNs are crucial for accurate FS forecasting. In contrast, the LSTM model, designed to capture temporal dynamics through time-series data, shows lower accuracy scores. This could indicate that short-term temporal variations have less immediate impact on food security classifications in our context.
However, the LSTM model's incorporation of temporal covariates adds a valuable dimension of explainability to our analysis, potentially revealing longer-term trends and seasonal patterns that influence FS. The disparity in model performance underscores the complexity of FS forecasting and highlights the need for careful consideration in model selection. It emphasizes that while temporal factors are important for understanding the dynamics of FS, spatial features are more predictive for classification tasks due to their ability to capture fine-grained, localized information at higher resolutions, providing a more detailed representation of the factors influencing FS in specific areas. The analysis of the CNN model demonstrates its superior performance compared to the LSTM model, particularly in terms of spatial prediction accuracy. The spatial analysis using the CNN model provides valuable insights into the geographical distribution of FS predictions, highlighting areas where the model performs well and where it struggles. Specifically, the analysis reveals prediction errors in relation to the geographical locations of districts or communes and their FS scale. Summing up, the study demonstrated the use of a multi-branch model architecture, incorporating CNNs and LSTMs, to predict FS status using the FCS and HDDS indicators. We introduced a new training dataset from Rwanda, Tanzania, and Burkina Faso for effectively forecasting FS classes such as poor, borderline, and acceptable. The methodology exploited diverse data drivers, including population density, LULC, and temporal covariates, enhancing its applicability across different climatic and socio-economic contexts. The approach achieves significant accuracy scores across the studied regions, underscoring its potential for identifying vulnerable areas. In future work, we plan to expand our research in several directions.
Firstly, we aim to incorporate livestock raster datasets as additional drivers to enhance the model's predictive capabilities. Secondly, we intend to extend our study beyond African countries to evaluate the model's effectiveness in diverse global contexts. Additionally, we will explore the application of other advanced neural network models, such as Vision Transformers (ViT) for Satellite Image Time Series, Temporal CNNs, and Swin Transformers, with the aim of fully exploiting the informative power of time-series data. These models have shown promising results in various domains and could potentially improve our food security predictions. Furthermore, we plan to investigate transfer learning techniques to apply our model to countries with similar contexts. This approach could significantly enhance the model's generalizability and reduce the need for extensive data collection in new regions. These future directions will not only expand the scope of our research but also potentially increase the accuracy and applicability of our food security prediction models across diverse global settings.
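The multi-branch architecture described above (a 2-D CNN branch over raster layers fused with an LSTM branch over monthly covariates, feeding a three-class head) can be sketched in PyTorch. Layer sizes, channel counts, and concatenation-based fusion are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MultiBranchFS(nn.Module):
    """Illustrative two-branch network: a 2-D CNN over raster patches
    (e.g., population density and LULC binary layers) and an LSTM over
    monthly covariates (rainfall, temperatures, maize prices), fused into
    a 3-class head (poor / borderline / acceptable)."""
    def __init__(self, n_raster_ch=3, n_covars=5, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(n_raster_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),           # -> (B, 32)
        )
        self.lstm = nn.LSTM(n_covars, 32, batch_first=True)   # -> (B, 32)
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, rasters, covars):
        spatial = self.cnn(rasters)             # (B, 32) spatial features
        _, (h, _) = self.lstm(covars)           # h: (1, B, 32)
        fused = torch.cat([spatial, h[-1]], dim=1)
        return self.head(fused)                 # class logits

# Batch of 4 districts: 3-channel 16x16 raster patches, 12 months x 5 covariates.
model = MultiBranchFS()
logits = model(torch.randn(4, 3, 16, 16), torch.randn(4, 12, 5))
```

Training such a model against survey-derived FCS/HDDS classes would use a standard cross-entropy loss over the logits.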
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Contrasting the Ability of MODIS and VIIRS Surface Reflectance in the ARYA Crop Yield Model Across Different Spatial Scales: Applications in the USA and Europe

Authors: Natacha Kalecinski, Sergii Skakun, Eric Vermote, Brian Barker, Alyssa K Whitcraft, Belen Franch, Jean-Claude Roger
Affiliations: Department of Geographical Sciences, University of Maryland, NASA Goddard Space Flight Center Code 619, Global Change Unit, Image Processing Laboratory, Universitat de Valencia
The Visible Infrared Imager/Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite was launched in 2011, partly to ensure continuity with the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on NASA's Terra and Aqua Earth observation satellites. As the MODIS sensors are soon to be decommissioned and the VIIRS sensor matures, it is crucial to continuously evaluate and compare the quality of reflectance data from both instruments. As an experiment, we used the yield forecasting model ARYA (Agriculture Remotely sensed Yield Algorithm) from Franch et al. (2021) to analyze de-trended yield forecasts for winter wheat and summer crops. We quantify the uncertainties arising from differences in spatial resolution and spectral bands at national, state, and county levels across the USA and Europe, as well as at the field level in Iowa, using daily surface reflectance data from MODIS Terra (MOD09GA), Aqua (MYD09GA), a combination of Terra and Aqua, VIIRS Version 1 (VNP09GA), and VIIRS Version 2, released on June 17, 2024. The model is based on a Gaussian fit between the Difference Vegetation Index (DVI) and temperature accumulation in growing degree days (GDD) estimated from ERA-5. Additionally, the model includes a correction for crop stress conditions captured by the difference between daily accumulated Land Surface Temperature (LST) and air temperature from ERA-5. The model provides weekly predictions for crops such as wheat, corn, soybeans, and sunflowers, achieving high accuracy with R² > 0.7. At the national level, the root mean square error is approximately 5%, increasing to 14% at the sub-national level when using MODIS surface reflectance. These errors increase further when using VIIRS data. Most studies use various implementations of satellite quality flags (L2 flags), and some exclude data due to spatial heterogeneity or significant temporal gaps between satellite overpasses.
A critical assessment of the quality flags, including cloud cover, water/land masks, and high-aerosol masks, from the different daily surface reflectance products reveals variable performance in yield estimation. Results show that the difference in cloud-free vegetation index occurrences ranges from -50% to 50%, highlighting a significant gap in the DVI distribution and in the effectiveness of the satellites as predictors of crop yield.
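The Gaussian relationship between DVI and accumulated GDD at the heart of the ARYA model can be fitted without an iterative optimiser, because a Gaussian is a parabola in log space. The season below is synthetic and purely illustrative, not real MODIS or VIIRS data.

```python
import numpy as np

def gaussian_dvi(gdd, amplitude, peak_gdd, width):
    """Gaussian model of a vegetation index (DVI) versus accumulated
    growing degree days, the functional form used in ARYA-style models."""
    return amplitude * np.exp(-((gdd - peak_gdd) ** 2) / (2 * width ** 2))

# Synthetic season: 40 DVI observations against accumulated GDD.
gdd = np.linspace(100, 2000, 40)
dvi = gaussian_dvi(gdd, 0.35, 1100.0, 300.0)

# log(DVI) is quadratic in GDD, so a least-squares parabola recovers the
# three Gaussian parameters directly.
c2, c1, c0 = np.polyfit(gdd, np.log(dvi), 2)
width = np.sqrt(-1.0 / (2.0 * c2))
peak_gdd = c1 * width ** 2
amplitude = np.exp(c0 + peak_gdd ** 2 / (2 * width ** 2))
```

With noisy observations one would typically weight the fit or use a nonlinear least-squares routine instead, since the log transform amplifies noise at low DVI values.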
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: GEOGLAM Crop Monitor, community driven early warning for food security

Authors: Christina Justice, Brian Barker, Kara Mobley, Antonio Sanchez, Michael Humber, Ritvik Sahajpal, Inbal Becker-Reshef
Affiliations: University Of Maryland, NASA Harvest
Agricultural monitoring plays a crucial role in ensuring a region's food supply, supporting food security and livelihoods, and contributing to the achievement of SDG 2, Zero Hunger. Accurate and timely information on both in-season agricultural conditions and forecasted conditions in advance of a season can help provide critical information to decision-makers to support and protect the food security and livelihoods of the most vulnerable populations. For over a decade, the GEOGLAM Crop Monitor has provided open, timely, and science-driven information on global crop conditions through in-season crop assessments and regional to global climate forecasts, enhancing market transparency and early warning of production shortfalls at scale. Consensus-based crop condition reports are released monthly and developed in partnership with the main agricultural and food security monitoring agencies globally, providing robust and vetted information that in turn supports improved information services and coordination across the UN and humanitarian partner agencies. This activity is a strong example of robust and lasting coordination across the agricultural monitoring community in support of global food security. Assessments are based primarily on remote sensing information systems combined with ground reports, weather forecasts, and region-specific sources to provide relevant, timely, and actionable information on agricultural conditions in advance of potential food security impacts. The remote sensing based information products developed under this work are now widely used across the agricultural sector to inform food security and policy decisions. With new advances in extended outlook climate forecasting and yield forecast modeling, early warnings of potential production shortfalls are being provided by the Crop Monitor community earlier and are supporting programmatic planning across the agriculture and humanitarian sector. 
In the face of the increasing intensity and frequency of extreme events under a changing climate, consensus-based, timely, and coordinated efforts such as those provided by the Crop Monitor are critical to ensuring that decision-makers have the vetted information they need to support efficient and effective anticipatory actions, that duplication of effort is minimized, and that food security impacts are reduced.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Hyperspectral Indicators of Crop Canopy and Grain Nutrient Status

Authors: Michael Marshall, Dr. Mariana Belgiu, Dr. Gabriele Candiani, Dr, Francesco Nutini, Dr. Monica Pepe, Dr. Mirco Boschetti, Dr. Micol Rossini, Dr. Luigi Vignali, Dr. Chiara Ferrè, Dr. Cinzia Panigada, Dr. Tobias Hank, Stefanie Steinhauser, Dr. Stephan Haefele, Dr. Murray Lark, Alice Milne, Grace Kangara, Kwasi Ofori-Karikari, Dr. Alfred Stein, Dr. Raian Maretto, Dr. Chris Hecker, Dr. Andy Nelson
Affiliations: University Of Twente, CNR-IREA, University of Milano-Bicocca, Ludwig-Maximilians-University of Munich, Rothamsted Research, University of Nottingham
“Hidden hunger” is a type of malnutrition that affects more than two billion people across the globe. It refers to micronutrient deficiency (MND) in food, which, when that food is consumed, can inhibit proper growth and development. Chronic MND can lead to an exacerbation of disease and other negative health outcomes. MND is commonly reported at the national level via biomarker analysis or with food composition tables. Both approaches are time-consuming and costly, which prohibits the assessment of MND over space and time. Such information is increasingly important, as variations in micronutrient concentrations in staple foods are growing because of the intensification of cereal production (i.e., the “green revolution”), CO2 fertilization, and climate change. In the absence of this information, we cannot effectively guide research activities dedicated to alleviating MND through genetic biofortification (i.e., crop breeding) or agronomic biofortification (i.e., fertilizers). Satellite image data are widely used for agricultural monitoring because they capture spectral information from the canopy related to the biophysical properties and biochemical processes of crop growth and development. The information is acquired over large areas through time at relatively high (≤30 m) spatial resolution. Hyperspectral sensors record spectral information with narrow (<10 nm) bands across the full optical range (400–2500 nm). While a few remote sensing studies have investigated the feasibility of using hyperspectral imagery to estimate crop foliar nutrients, none of them took the step of predicting nutrient levels in the grains until the European Space Agency (ESA) project HyNutri. The concentration of nutrients in the grain is complex, as it involves translocation from the soil via plant roots and remobilization from crop leaves, which in turn are influenced by crop species/cultivar, environmental conditions, and farm management practices.
Therefore, efficiently and reliably mapping nutrients in the grain, as in HyNutri, needs to be complemented by an analysis of spectra and nutrients in the canopy across the growing season. Not only do these additional steps improve the estimation of nutrients in the grain; they also help us to better understand nutrient dynamics. We present the results of a field campaign conducted over the 2023 growing season on a large commercial farm near Jolanda di Savoia, Italy. The campaign established ten elementary sampling units (ESUs), or frames of 60×60 m, across maize, rice, and wheat fields. Each ESU consisted of four evenly spaced plots. From each plot, ground spectra; biophysical parameters (e.g., leaf area index); and leaf, stem, and grain samples were collected, processed, and analysed. The data were acquired during the vegetative, reproductive, and harvest stages of crop development. Each stage resulted in 4×10, or 40, samples per crop. We estimated the concentration of nine macro- and micro-nutrients (N, Ca, Fe, K, Mg, P, S, Se, Zn) in the leaf, canopy (leaf + stem), and grain from the spectral information and biophysical parameters acquired during the vegetative and reproductive stages. The estimates were made with Partial Least Squares Regression (PLSR). The optimal number of components of each PLSR model was selected using 100 iterations of a 5-fold cross-validation. Ninety-eight model runs were made for each crop. Leaf spectra tended to estimate canopy and grain nutrients better than canopy spectra. Nutrients were overall best predicted during the vegetative phase and worst predicted during the reproductive phase, when nutrients were generally in low concentration. N and P were well predicted in the vegetative phase of wheat (R2=0.54, RMSE=3.00 kg m-2, RRMSE=0.36 and R2=0.59, RMSE=0.24 kg m-2, RRMSE=0.31), but the reproductive phase of wheat was better at predicting N and P in the grain (R2=0.72, RMSE=1.87 kg m-2, RRMSE=0.12 and R2=0.59, RMSE=0.32 kg m-2, RRMSE=0.15).
Ca, Fe, K, Se, and Zn were well predicted during the vegetative phase (R2=0.69, RMSE=0.38 kg m-2, RRMSE=0.27; R2=0.51, RMSE=0.01 kg m-2, RRMSE=0.39; R2=0.52, RMSE=2.35 kg m-2, RRMSE=0.25; R2=0.62, RMSE=0.01 kg m-2, RRMSE=0.42; and R2=0.74, RMSE=0.01 kg m-2, RRMSE=0.33) but underperformed for the other stages. Mg was well predicted during the reproductive phase (R2=0.65, RMSE=0.12 kg m-2, RRMSE=0.31), which corresponded well to the grain (R2=0.81, RMSE=0.12 kg m-2, RRMSE=0.16). S was well predicted during the vegetative phase (R2=0.76, RMSE=0.23 kg m-2, RRMSE=0.28), which was well correlated with the grain (R2=0.60, RMSE=0.13 kg m-2, RRMSE=0.14). The results are particularly promising for four of the nutrients: N, P, Mg, and S. Future work should scale the results with hyperspectral satellite platforms, such as the Environmental Mapping and Analysis Program (EnMAP) and PRecursore IperSpettrale della Missione Applicativa (PRISMA). EnMAP and PRISMA are precursors to Sentinel-2 Next Generation (S2NG) and the Copernicus Hyperspectral Imaging Mission (CHIME). CHIME and S2NG will acquire hyperspectral information on an operational basis globally. Estimates of nutrient concentrations in grain from CHIME and S2NG, together with data on nutrient thresholds, could facilitate MND mapping through time.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Advancements in Remote Sensing for Water Management and Crop Disease Detection: Insights from Regional Agricultural Projects

Authors: PhD Pietro Sciusco, PhD Walter De Simone, Dr Costanza La Parola
Affiliations: Planetek Italia s.r.l.
Remote sensing technologies have become indispensable tools for shaping agricultural policies and strengthening global food systems. By utilizing satellite imagery and advanced data processing, these technologies enable real-time monitoring of agricultural activities. Insights into crop health, soil moisture, and land use patterns empower policymakers and humanitarian organizations to make informed decisions on resource management, disaster response, and food distribution. Taking advantage of EO data is key to enhancing food system resilience, boosting productivity, and ensuring food security for communities worldwide. This abstract provides an overview of Planetek Italia's involvement in various projects concerning water management in agriculture and crop disease detection, both critical for ensuring food security and sustainable farming practices. Efficient water management is crucial in agriculture, especially in the context of growing water scarcity and climate variability due to climate change. Remote sensing offers valuable tools for monitoring key components of the water cycle, i.e., soil moisture and evapotranspiration, and connecting them to crop water stress and irrigation efficiency. Satellite-based sensors of existing international missions (e.g., Landsat, Sentinel-2, and MODIS) provide multispectral imagery that can be used to assess vegetation health, water use, and crop water demand. In this context, Planetek Italia is involved in various R&D activities through the following projects. WAter DIgital Twin (WADIT): funded by the Ministry of Enterprises and Made in Italy, WADIT aims to build a digital twin for the estimation of crop water use at watershed level in the Apulia region. The main concept is based on the use of both in-situ and EO/EO-derived data to feed a water balance model that returns the amount of water consumed by crops.
Among the main activities are: (i) the production of a detailed land use and land cover map layer (for agricultural classes only) over the entire Apulian territory. This layer is designed to represent the water needs of the main regional crop types and will be used to set up different scenarios showing how the cultivation of different crops would affect overall water consumption in the agricultural sector at watershed level. (ii) The integration of the water balance model (originally a stand-alone model) into an ICT platform made of various modules (i.e., data collection, data processing, data visualization), with the ultimate goal of workflow automation. (iii) The replacement of non-EO data with EO/EO-derived data. UNIVERSWATER: funded under the EU HORIZON ZEROPOLLUTION programme, UNIVERSWATER has the overarching goal of integrating local DSSs into a common suite of DSS tools that can be reused and adapted in further areas, developing a modular, extensible, and holistic Universal DSS. Among the various activities, Planetek Italia is involved in integrating EO imagery with the in-situ sensors that will be used to develop local, robust, and explainable AI-based DSS modules tailored to the different needs of the three demo sites (i.e., Irish, Italian, and Greek). Specifically, EO data (both HR and VHR multispectral data) are needed to carry out spatiotemporal analysis of vegetation indices and biophysical parameters as proxies of crop water-salinity-nutrient stress. Crop diseases can significantly affect agricultural productivity, demanding early detection and management strategies. Remote sensing technology facilitates the identification of diseased plants through spectral analysis, as diseases often alter the reflectance characteristics of crops. Multispectral and hyperspectral imaging can detect subtle changes in leaf pigment, structure, and moisture content, which are indicative of both biotic and abiotic stress.
Remote sensing is particularly valuable in mapping the spatial distribution of diseases (especially when coupled with environmental parameters), aiding in the containment and management of outbreaks. It also enables large-scale monitoring across diverse agro-ecosystems, reducing the need for time-consuming and labor-intensive field surveys. Within this context, Planetek Italia is involved in various activities through the following two projects. FIght XYLeLla fastidiosa (FIXYLL): funded by the Italian Space Agency, FIXYLL has the goal of developing four downstream satellite-based services to fight Xylella infections in the Apulia region. The four services deal with (i) identification of vigorous olive plants in areas highly affected by the infection; (ii) graphic visualization of the dynamics of vegetation index trends at municipality level; (iii) monitoring of the application of Regional prescriptions (i.e., tillage); and (iv) validation of the explant activities of infected plants. The four services are based on a combination of different types of data, such as satellite (both open source and commercial), drone, and in-situ data, and they target a variety of end users, such as Regional authorities, research agencies, universities, and farmers' associations. Remote Early Detection of Xylella (REDoX): funded by the Ministry of Education, University and Research, REDoX aims at the development of technologies and procedures based on very high-resolution remote sensing techniques (i.e., airborne and drone hyperspectral imagery) for (i) the identification of olive plants affected by Xylella, (ii) the early detection of new Xylella outbreaks, and (iii) the reduction of monitoring time by public authorities.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: A scalable and Transferable Wheat Yield Forecasting Empirical Model Using Multi-Sensor Satellite Data and Novel Vegetation Indices: Applications in Spain, Egypt, and Ukraine

Authors: Belen Franch, Italo Moletto-Lobos, Javier Tarin-Mestre, Lucio Mascolo, Eric Vermote, Natacha Kalecinski, Inbal Becker-Reshef, Alberto San Bautista, Constanza Rubio, Marcos Caballero, Sara San Francisco, Miguel Angel Naranjo
Affiliations: University Of Valencia, University of Maryland, NASA Goddard Space Flight Center, Universidad Politecnica de Valencia, Fertinagro Biotech
In this study, we present a novel model for forecasting wheat yield at the field level using Sentinel-1 (S1) and Sentinel-2 (S2) satellite data. The model was calibrated using precise field-level yield measurements collected between 2018 and 2023 in an agricultural region dominated by cereals in Valladolid, Spain, and validated with data from various provinces across Spain and the Nile Delta in Egypt. To account for temporal variability across seasons and regions, we normalized the satellite signals using accumulated Growing Degree Days (GDDacum), computed from ERA5 air temperature data, and the Start of Season from the WorldCereal crop calendars. The yield model is based on a two-parameter linear regression across GDDacum ranges, exploring three configurations: (1) optical parameters from S2 alone, (2) microwave parameters from S1 alone, and (3) a combination of S2 and S1 parameters. Additionally, we introduce novel vegetation indices, sensitive to wheat yield, derived solely from S2 data or by fusing S1 and S2. Validation on the calibration site using a Leave-One-Year-Out approach revealed that while S2 or S1 data alone estimated yield with an error of approximately 1,100 kg/ha, combining S1 and S2 reduced the error to 700 kg/ha. We further evaluated the model's transferability by applying it to major wheat-producing regions in Spain and the Nile Delta in Egypt. In Spain, S2 alone demonstrated better spatial transferability, achieving a lower error (1,200 kg/ha) compared to the S1-S2 combination (1,400 kg/ha). However, in Egypt, the fusion model performed better, with an error of 500 kg/ha compared to 800 kg/ha for the optical model alone. Finally, we assessed the impact of integrating the proposed optical vegetation index into the Agriculture Remotely-sensed Yield Algorithm (ARYA), a national and subnational yield forecasting model currently based on the Difference Vegetation Index (DVI) from MODIS at 1 km resolution.
For subnational and national level yield forecasting in Ukraine, preliminary results show that replacing the DVI with the proposed optical vegetation index improves ARYA's performance, reducing errors from 14% to 10% at the national level and from 21% to 15% at the subnational level. These findings highlight the potential of combining multi-sensor satellite data with novel indices to enhance yield forecasting accuracy.
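The two-parameter linear regression and the leave-one-year-out validation described above can be sketched as follows. This is a minimal illustration, not the authors' actual model: the feature values, years and coefficients are hypothetical, and the real model fits separate regressions per GDDacum range.

```python
import numpy as np

def fit_yield_model(feature, yield_obs):
    """Ordinary least squares fit of the two-parameter model
    yield = a * feature + b (one such fit per GDDacum range)."""
    A = np.column_stack([feature, np.ones_like(feature)])
    coef, *_ = np.linalg.lstsq(A, yield_obs, rcond=None)
    return coef  # array([a, b])

def predict_yield(coef, feature):
    a, b = coef
    return a * feature + b

def leave_one_year_out_rmse(years, feature, yield_obs):
    """Leave-one-year-out validation: hold out each season in turn,
    fit on the remaining seasons, and score on the held-out year."""
    errors = []
    for held_out in np.unique(years):
        train = years != held_out
        coef = fit_yield_model(feature[train], yield_obs[train])
        resid = predict_yield(coef, feature[~train]) - yield_obs[~train]
        errors.append(np.sqrt(np.mean(resid ** 2)))
    return float(np.mean(errors))
```

Holding out whole years, rather than random samples, is what makes the reported errors meaningful for forecasting: it tests transfer to an unseen season.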
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F2)

Session: C.02.12 ESA's Biomass mission - PART 2

BIOMASS is the 7th ESA Earth Explorer mission. Earth Explorers are the backbone of the science and research element of ESA’s Living Planet Programme, providing an important contribution to the global endeavor of understanding the Earth system.
The overall objective of the Biomass mission is to reduce the uncertainty in the worldwide spatial distribution and dynamics of forest biomass in order to improve current assessments and future projections of the global carbon cycle. To enable this, the Biomass mission data products will provide consistent global estimates of forest biomass, forest height, and forest disturbance and re-growth parameters.
The Biomass Satellite industrial Prime contractor is Airbus Defence and Space, Stevenage (UK). The radar payload is built by Airbus Defence and Space, Friedrichshafen (Germany).
The Biomass payload consists of a fully polarimetric, left-looking P-band SAR, the first of its kind in this frequency band for Earth observation purposes. The BIOMASS mission is designed to last five years and consists of two phases: a tomographic phase and an interferometric phase.
The Biomass Qualification Acceptance Review is scheduled to conclude by the end of 2024, with the satellite due for launch in 2025.

Biomass will provide global maps of the amount of carbon stored in the world's forests and of how these stocks change over time. Biomass will also provide essential support to UN treaties on the reduction of emissions from deforestation and forest degradation. While forest type and forest cover worldwide can be detected by today's satellites, the mission's unique capabilities will provide access to a global structural parametrisation of forests with homogeneous quality and sampling, enabling a multitude of global maps of these main forest parameters over the mission lifetime.
Beyond the mission itself, the session also intends to cover the wider context of how carbon science has moved on and how Biomass, GEDI and many other elements contribute to the bigger picture.

The session will highlight the latest developments covering overall mission status, mission science and foreseen exploitation, higher-level products (implementation and algorithmic content) and ground segment.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F2)

Presentation: Biomass Commissioning and First Results

Authors: Philip Willemsen, Maktar Malik, Antonio Leanza, Lucia Hernando Aguero, Adriano Carbone, Michael Fehringer, Tristan Simon, Muriel Pinheiro, Björn Rommen, Klaus Scipal, Elia Maestroni
Affiliations: ESA/ESTEC, ESA/ESRIN, ESA/ESOC
The Biomass mission is an Earth Explorer Mission in the ESA Earth Observation Programme. The primary objective of Biomass is to determine the worldwide distribution of forest above-ground biomass in order to reduce the major uncertainties in calculations of carbon stocks and fluxes associated with the terrestrial biosphere. The Biomass Satellite industrial Prime contractor is Airbus Defence and Space Ltd. The radar instrument is built by Airbus Defence and Space, Friedrichshafen. The Biomass satellite carries a fully-polarimetric (HH, VV, HV, VH) P-band SAR which operates in strip-map imaging mode with a carrier frequency of 435 MHz. The Biomass satellite employs a unique design: the coverage and performance requirements at P-band imply the use of a large aperture antenna. This is realized by an offset-fed reflector antenna system consisting of the instrument feed array and a deployable reflector with a 12 m projected aperture. The Biomass mission relies on a calibration transponder which is designed and developed specifically for Biomass mission needs. The Biomass Calibration Transponder (BCT) is a fully polarimetric active transponder working in P-band. It is used primarily during the commissioning phase for three applications: 1/ the Biomass satellite end-to-end antenna pattern characterisation, 2/ radiometric and polarimetric calibration and 3/ performance verification. The BCT is provided by C-Core, Canada. It will be located at ESA’s New Norcia antenna site in Australia. The Biomass In-Orbit Commissioning Phase will last six months and it is divided into five sub-phases, named COM1 to COM5. Each sub-phase is characterised by a specific orbit and associated ground track drift which is optimised to achieve specific Cal/Val objectives. During the first sub-phase COM1, the Biomass satellite will be deployed on a zero drift orbit to enable exact revisit of the targets every 3 days. 
This sub-phase will be dedicated to Transponder Commissioning, collection of data stacks for Radiometric Stability verification by the Persistent Scatterer technique, and the assessment of the suitability of targets of opportunity for performance verification. Knowledge of the antenna pattern is a necessary input for the on-ground data processing. To measure the antenna pattern in flight, the satellite is placed in a dedicated commissioning orbit (COM2) with a defined orbit drift and a repeat cycle of 3 days. The orbit drift allows an antenna pattern characterisation of azimuth cuts at different elevation angles. In total, two months are dedicated to the antenna pattern characterisation, which allows sufficient transponder overpasses providing doublet pattern measurements at different elevations. Once the antenna pattern has been measured, the commissioning phase will continue with the radiometric, geometric and polarimetric calibration and performance verification activities (COM3 to COM5). As for the antenna characterisation, the calibration and verification activities will rely on the BCT. In addition, the potential of natural targets to support the cal/val activities is currently being investigated. The following system key performance parameters are driving the in-flight performance verification activities: • Channel Imbalance and Cross Talk: for the accurate estimation of the channel imbalance and cross-talk, all four polarimetric scattering responses are required from a location with the same Faraday rotation. • Absolute Calibration Factor: this is computed from multiple acquisitions over the transponder, whose RCS is known. • IRF-based performance: PSLR, ISLR and geolocation accuracy are verified through multiple acquisitions over the transponder and suitable point targets of opportunity. • Noise Equivalent Sigma Nought: this is verified for the swaths through multiple acquisitions over very low backscatter homogeneous areas.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F2)

Presentation: The BIOMASS Operational Processors and Products

Authors: Muriel Pinheiro, Riccardo Piantanida, Federico Boniardi, Paolo Mazzucchelli, Antonio Novelli, Michele Caccia, Antonio Valentino, Dr. Klaus Scipal, Bjoern Rommen, Clément Albinet
Affiliations: ESA/ESRIN, Aresys, ESA/ESTEC, Starion
The BIOMASS Processing Suite (BPS) is in charge of processing BIOMASS Level-0 data up to Level-2, generating a wide set of products. The processing foresees the following steps: - L1 processing: this is the BIOMASS image formation step, going from raw data to focused SLC (Single Look Complex) images. This includes, e.g., compensation of the antenna pattern, correction of RFI artifacts and a first compensation of ionosphere effects. - Stack Processing: this is the interferometric processing step, which forms the 7-image Tomographic stacks or the 3-image Interferometric stacks. It includes the co-registration of the images, filtering of the common Doppler spectrum, and phase calibration steps to compensate for residual ionosphere and baseline errors. It also includes the computation of the ground steering phase required for the next processing steps (e.g., L2 or DTM generation). - L2a Processing: intermediate L2 processing performed for each stack. For one global cycle (GC), the number of stacks varies mainly with latitude, approximately from a minimum of two at the Equator (ascending/descending) to about six at higher latitudes (three ascending/descending pairs). - L2b Processing: processing that aggregates, for each global cycle, all the L2a products covering the same location on the ground, generating a single BIOMASS L2b tile product. A different number of stacks may be available for each geographic grid tile, based on swath coverage with respect to grid tiling. Accordingly, the BIOMASS BPS product family includes: - L1a or Single-look Complex Slant-range (SCS): L1a products are images in the slant range versus azimuth plane, i.e., in the image plane of satellite data acquisition. Each pixel is represented by a complex value containing calibrated radar brightness (amplitude and phase). The processing for all L1a products results in a single look in each dimension using the full available signal bandwidth. L1a images are produced in a Zero Doppler geometry.
This convention is common with the standard slant range products available from other SAR sensors. The L1a products contain one image per polarization channel. The L1a image is sampled at the natural pixel spacing in range (i.e., the radar range sampling frequency) and at a configurable pixel spacing in azimuth. - L1b or Detected Ground-range Multi-looked (DGM): L1b products lie in the ground range versus azimuth plane, with image coordinates oriented along ground range and flight direction. The standard L1b products are detected, multi-look products, with approximately square resolution cells and square pixel spacing. Each image pixel is represented by a real (amplitude) value containing calibrated radar brightness. Multi-looking is a processing operation that results in images with reduced speckle but also reduced resolution: the more looks, the less speckle noise and the lower the resolution. Thermal noise compensation is not applied to the data by default, but the information to perform the procedure is included in the metadata. - L1c or Co-registered Stack (STA): BIOMASS L1c products are obtained by co-registering a given L1a image onto another reference L1a image. The co-registration operation consists of a two-dimensional interpolation of the data over a reference sampling grid. From a product definition perspective, L1c products are very similar to L1a ones. The only exception is the number of polarizations, which can be either 4 or 3, since cross-polarization channels can be combined, e.g., to improve SNR. Each L1c product contains just one image. The term ‘stack’ is used to indicate a set of L1c products (3 for the INT phase, 7 for the TOM phase) generated using the same reference image and just logically linked. - L2a (Stack-Based): Level 2a products are related to forest properties. There are no dependencies between different L2a products, and they can be considered a biophysical extension of the stack products (L1c).
The products are geocoded and are obtained from a single stack. There are three types of L2a products: -- Forest Disturbance (FD): the Forest Disturbance Level 2a product, giving an indication of forest change between two global cycles; -- Forest Height (FH): the Forest Height L2a product, representing top canopy height. The same algorithms are envisaged to generate the same product in the INT and TOM phases. -- Ground Notch (GN): the Above Ground Biomass L2a processing product. This product corresponds to ground-cancelled data, enhancing the volume contribution, to be input to L2b processing for AGB generation. The same algorithms are envisaged to generate the same product in the INT and TOM phases. - L2b: L2b products are geophysical, geocoded and tile-based. They are derived by merging all L2a products of a given global cycle on the tile. The products at this level can take advantage of FD availability (for the same geographic tile) or of a forest mask generated outside of the BPS, to mask out non-forest pixels. The products are: -- FD: the Forest Disturbance Level 2b product; -- FH: the Forest Height Level 2b product; -- AGB: the Above Ground Biomass Level 2b product. This presentation will provide more details of the processing chains and products, including examples of BIOMASS data.
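The speckle-versus-resolution trade-off described for the L1b (DGM) products can be illustrated with a minimal block-averaging multi-look sketch. This is not the operational BPS implementation; look factors and array sizes are illustrative.

```python
import numpy as np

def multilook(slc, looks_az, looks_rg):
    """Detected multi-look image: average |SLC|^2 over blocks of
    looks_az x looks_rg pixels, then take the square root (amplitude).
    With N = looks_az * looks_rg looks, the speckle variance of the
    intensity drops roughly as 1/N, while the pixel spacing (and hence
    the effective resolution cell) grows by the look factors."""
    rows = (slc.shape[0] // looks_az) * looks_az
    cols = (slc.shape[1] // looks_rg) * looks_rg
    intensity = np.abs(slc[:rows, :cols]) ** 2
    blocks = intensity.reshape(rows // looks_az, looks_az,
                               cols // looks_rg, looks_rg)
    return np.sqrt(blocks.mean(axis=(1, 3)))  # detected amplitude image
```

For a complex Gaussian (fully developed speckle) scene, the multi-looked intensity has markedly lower variance than the single-look intensity, at the cost of a coarser grid.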
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F2)

Presentation: BIOMASS Interferometric Processing

Authors: Francesco Banda, Francesco Salvaterra, Naomi Petrushevsky, Stefano Tebaldini, Muriel Pinheiro, Björn Rommen
Affiliations: ESA/ESTEC, ESA/ESRIN, ARESYS, POLIMI
Due to launch in early 2025, ESA's BIOMASS is a repeat-pass Synthetic Aperture Radar (SAR) mission supporting the scientific community in gaining a better understanding of global forest status [1]. BIOMASS will be the first spaceborne fully polarimetric P-band SAR, with interferometric and tomographic capabilities. P-band allows sensitivity to the woody elements of trees, whereas polarimetry, interferometry and tomography allow retrieving forest structure and changes over time. The orbit design provides almost-global coverage (except for North America and Europe), with a few-month Commissioning Phase followed by about 17 months of a seven-pass Tomographic (TOM) phase, with the remainder of the five-year operation in a dual-baseline Interferometric (INT) phase. P-band also opens up new science opportunities, such as mapping topography under vegetation and desert and ice sub-surfaces. Prior to forest product generation, fine phase calibration of the data stacks must be performed on top of the polarimetric, radiometric and ionospheric calibration carried out by the Level-1 (L1) processor [2]. In this work we present an enhanced design of the BIOMASS interferometric calibration processor presented in [3]. The processor takes as input finely coregistered data, accounting both for known geometry and for residual shifts adaptively estimated from the data stack itself. The interferometric calibration strategy pursued mainly consists of three steps: slow-varying ionosphere correction through split-spectrum; fast-varying ionosphere and orbit correction based on multi-squint processing; and estimation of residual low-pass phase disturbance and ground phase by SKP decomposition, to finally generate stacks ready for Level-2 processing (i.e., retrieval of forest products) and SAR tomography [4]. The revised split-spectrum algorithm accounts both for preliminary adaptive coregistration as in [5] and for the interplay with L1 ionospheric corrections, recovering the differential linear ionospheric phase components.
Compared to the previous implementation, orbit and fast-varying ionospheric corrections are now merged into a single multi-squint calibration step. The new module relies on an interferometric estimation technique [6] that leverages multi-squint phase variations due to orbit and ionosphere to recover both components, accounting for adaptive coregistration. Partial refocusing is applied within the compensation to improve coherence. The correction is applied at high latitudes, where fast ionosphere variations are more likely to occur. The last module performs additional phase calibration based on SKP decomposition [7], generating phase screens that contain both terrain topography and residual low-pass disturbances. In this case the innovation with respect to the previous implementation is that a short-baseline block decomposition is computed to retrieve ground phases, instead of employing all possible baselines. The choice is motivated by the quality degradation caused by spectral shift effects related to the large BIOMASS baselines (up to 90% of the critical baseline for TOM), where the coherence drop significantly impacts estimation results. For the same reason, the multi-baseline strategies for split-spectrum and multi-squint also pursue a short-baseline selection/limitation. The proposed enhanced design is being developed and validated by testing each module with both simulations and campaign acquisitions. Airborne acquisitions are spectrally reprocessed to mimic BIOMASS acquisitions, whereas a rigorous geometric simulator has been specifically developed to reproduce BIOMASS acquisitions over a realistic scenario (in terms of physics, scene reflectivity and coverage), accounting for the different sources of disturbance (i.e., DEM and orbit errors, ionospheric phase screens). End-to-end simulations address as much as possible the interplay between the different modules to produce a meaningful assessment in view of the mission.
A focus on specific modules is given in companion presentations [8] and [9]. References: [1] Francesco Banda, Davide Giudici, Thuy Le Toan, Mauro Mariotti d’Alessandro, Kostas Papathanassiou, Shaun Quegan, Guido Riembauer, Klaus Scipal, Maciej Soja, Stefano Tebaldini, Lars Ulander, and Ludovic Villard, “The BIOMASS Level 2 Prototype Processor: Design and Experimental Results of Above-Ground Biomass Estimation,” Remote Sensing, vol. 12, no. 6, 2020. [2] P. Prats-Iraola, K. Papathanassiou, J. S. Kim, M. Rodriguez-Cassola, D. D’Aria, R. Piantanida, A. Valentino, D. Giudici, M. Jaeger, R. Scheiber, M. Pinheiro, N. Yague-Martinez, M. Nannini, V. Gracheva, T. Guardabrazo, and A. Moreira, “The BIOMASS Ground Processor Prototype: An Overview,” in EUSAR 2018; 12th European Conference on Synthetic Aperture Radar, 2018. [3] Banda, F., Mancon, S., d’Alessandro, M. M., Tebaldini, S., Giudici, D., Pinheiro, M., & Scipal, K. (2023, July). Biomass Interferometric Calibration Processor Design. In IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium (pp. 7785-7788). IEEE. [4] Banda, F., Petrushevsky, N., Tebaldini, S., & Monti-Guarnieri, A. (2024, July). Spaceborne L-Band Forest Tomosar: A First Case Study. In IGARSS 2024-2024 IEEE International Geoscience and Remote Sensing Symposium (pp. 10885-10888). IEEE. [5] Petrushevsky, N., Banda, F., Monti-Guarnieri, A., Thibeault, M., Gonzalez, J. P. C., & Giudici, D. (2024, April). Operational Ionospheric Correction for SAOCOM Interferometry. In EUSAR 2024; 15th European Conference on Synthetic Aperture Radar (pp. 1221-1226). VDE. [6] Stefano Tebaldini, Mauro Mariotti d’Alessandro, Jun Su Kim, and Kostas Papathanassiou, “Ionosphere vertical profiling from biomass multi-squint InSAR,” in 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2017, pp. 5866–5869. 
[7] Francesco Banda, Mauro Mariotti d’Alessandro, and Stefano Tebaldini, “Ground and Volume Decomposition as a Proxy for AGB from P-Band SAR Data,” Remote Sensing, vol. 12, no. 2, 2020. [8] Francesco Salvaterra, Francesco Banda, Naomi Petrushevsky, Stefano Tebaldini, "BIOMASS Fast Varying Ionosphere and Orbit Correction Through Multi-squint Interferometry," ESA Living Planet Symposium 2025. [9] Francesco Banda, Stefano Tebaldini, "SKP decomposition for BIOMASS ground phase retrieval," ESA Living Planet Symposium 2025.
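The split-spectrum step exploits the fact that the ionospheric phase is dispersive (it scales as 1/f) while the geometric phase is not (it scales as f). A minimal sketch of the standard sub-band separation follows; this is the textbook relation, not the enhanced BIOMASS algorithm, and the sub-band centre frequencies are illustrative. Unwrapped sub-band phases are assumed.

```python
import numpy as np

F0 = 435e6  # BIOMASS P-band carrier frequency (Hz)

def split_spectrum_iono(phi_low, phi_high, f_low, f_high, f0=F0):
    """Separate the dispersive (ionospheric) and non-dispersive phase at f0
    from two sub-band interferometric phases, under the standard model
        phi(f) = phi_nd * f / f0 + phi_iono * f0 / f.
    Solving the 2x2 system for the two sub-bands gives closed forms."""
    denom = f_high ** 2 - f_low ** 2
    phi_iono = (f_low * f_high * (phi_low * f_high - phi_high * f_low)
                / (f0 * denom))
    phi_nd = f0 * (phi_high * f_high - phi_low * f_low) / denom
    return phi_iono, phi_nd
```

Because the sub-band separation is small relative to the carrier, the estimate amplifies phase noise; in practice heavy multi-looking (and, per the abstract, careful interplay with the L1 corrections) is needed before this separation is reliable.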
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F2)

Presentation: BIOMASS Scientific data exploitation and validation

Authors: Stefano Tebaldini, Kostas Papathanassiou, Matteo Pardini, Laurent Ferro-famil, Thuy Le Toan, Lars Ulander, Maciej Soja, Francesco Banda, Jérôme Chave, Martin Herold, Jørgen Dall, Shaun Quegan, Björn Rommen, Clement Albinet, Klaus Scipal
Affiliations: Politecnico di Milano, DLR, ISAE Supaero, CESBIO, Chalmers University of Technology, Wageningen University & Research, ARESYS, CNRS, GFZ Potsdam, DTU, University of Sheffield, ESA/ESTEC, ESA/ESRIN
As the first P-band SAR mission in orbit, BIOMASS, planned for launch in April 2025, will provide unprecedented data for monitoring the world’s forests. This is due primarily to the characteristics of P-band (435 MHz) waves, whose penetration capability provides sensitivity to the whole vertical structure of the vegetation, even in dense tropical forests. The BIOMASS orbit is designed to illuminate the same scenes 3 or 7 times with 3-day separation, resulting in the ability to deliver accurate information about forest Above Ground Biomass Density (AGBD), forest height and vertical structure by leveraging interferometric and tomographic processing techniques. In addition, forest disturbances and forest changes will be tracked over time by repeating the same set of acquisitions every 9 months. BIOMASS data will also be used to deliver other products, such as the first Digital Terrain Model (DTM) of terrain topography below vegetation cover, ice velocity over Antarctica, and an exploratory characterization of desert areas and of the interaction of the ionosphere with P-band waves. As an Earth Explorer Mission, the ground segment of BIOMASS will include all processors required for data exploitation and generation of geophysical products. For this, scientific activities are planned to revolve around four cornerstones: 1) Evolution of processors for Core products. This relates to the generation of the four main products: forest AGBD, forest height, Forest Disturbance/Forest Change, and interferometrically calibrated L1 images. AGBD retrieval will consider an improved two-step training approach to jointly exploit widely available but possibly biased calibration data and high-quality unbiased data at specific sites, with the aim of producing unbiased estimates of AGBD across the whole biomass range. Forest height retrieval will consider improved models for compensation of temporal decorrelation and ingestion of forest structure profiles from Tomography.
Detection of forest disturbance will be conducted under a sequential Bayesian approach to discriminate deforestation events. Forest change analysis will consider new methods to characterize the type of change that has occurred in a time series, aiming at discriminating between weather effects and actual changes. L1 processing activities will include detection and removal of RFI, compensation of mean ionosphere effects at the level of single images based on Faraday Rotation, and stack-based interferometric calibration to remove phase residuals due to uncompensated ionosphere perturbations within the synthetic aperture (necessary for the generation of all subsequent L2 products). L3 products will consist of a regularized version of L2 products based on the enforcement of geophysical knowledge and constraints. 2) Validation of Core products. Validation of BIOMASS Core products is planned to be carried out by direct comparison against high-quality maps of AGBD and forest height derived from airborne laser scanning (ALS) data at GEO-TREES sites. This will include a network of 100 core biomass reference measurement (BRM) sites and 200 supplementary BRM sites prioritizing the tropics. Each site consists of a collection of at least 10 1-ha forest inventories, together with ALS coverage of the forest over at least 1000 ha. This process will be complemented by verification & testing activities, intended to verify the consistency of BIOMASS maps with other global maps and to provide a qualitative assessment of their accuracy in areas where no high-quality reference data are available. Forest disturbance products will be compared against very high-resolution data in disturbed regions in South America, Africa and Southeast Asia. 3) Definition and consolidation of processors for Secondary products. 
This relates to the generation of terrain topography below the vegetation cover (DTM), Tomographic cubes representing vertically resolved Radar intensity in forested areas, ice velocity over Antarctica, and experimental products such as retrieval of the internal structure of ice sheets by tomographic and PolInSAR processing, characterization of sub-surface penetration in desert areas and retrieval of spatially resolved ionospheric Total Electron Content information. 4) Open science. Scientific activities within BIOMASS are intended to be carried out in the spirit of transparency and Open Science. All processing methods will be made publicly available. Interested users will be able to participate in the developments through dedicated tools and intercomparison exercises.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F2)

Presentation: Biomass - System status after launch

Authors: Michael Fehringer, Janice Patterson, Philip Willemsen, Jan Rautakoski, Lucia Hernando, Stefan Kiryenko, Gian Luigi Fava
Affiliations: ESA
Biomass is expected, with very high probability, to launch in April 2025. By the time of the Symposium, the Large Deployable Reflector will have been unfolded and the satellite and instrument commissioning will be complete. The presentation will briefly recap the Biomass mission and then focus on the platform and instrument commissioning activities. An overview of the planned five-year mission lifetime will then be given.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall F2)

Presentation: A Level 3 product processor for the BIOMASS mission

Authors: Laurent Ferro-famil, Clément Albinet, Francesco Banda, Paolo Mazzucchelli, Antonio Novelli, Nathan Paillou, Muriel Pinheiro, Bjorn Rommen, Dr. Klaus Scipal
Affiliations: Isae-supaero & Cesbio, ESA-ESRIN, ARESYS, STARION, CAP GEMINI, ESA-ESTEC
This contribution describes the principles and implementation of a Level 3 (L3) product processor for ESA’s BIOMASS mission. This mission aims at reducing the uncertainty in the worldwide spatial distribution and dynamics of forest biomass, and will achieve this objective using a P-band SAR, providing global maps of forest biomass stocks, forest disturbance and growth [1]. In its dual-baseline interferometric operating phase, the BIOMASS mission will provide at each Global Cycle (GC), i.e. approximately every 7 months, a set of worldwide Level 2b (L2b) products consisting of maps of Above Ground Biomass (AGB), Forest Height (FH) and Forest Disturbance (FD). Because the L2b AGB and FH estimation processes are run independently, and at rather local spatial and temporal scales, the output maps are very likely to show some variability, related to the intrinsic uncertainty of the L2b estimators but also to potentially exceptional unfavorable factors, such as severe meteorological conditions (rain, wind), problematic propagation effects, or a non-optimal baseline configuration. The objective of L3 processing is to improve the consistency of L2b maps by enforcing geophysical constraints through an iterative statistical regularization process. Three kinds of constraints are considered: - spatial consistency is derived by comparing the spatial statistics of each product with autocorrelation function features computed over a wider neighborhood. Significantly different behaviors are penalized in order to guarantee spatially homogeneous estimates over undisturbed areas. - temporal consistency is evaluated by observing estimates performed for different GCs, and by limiting the positive change rate (gain velocity) of the AGB and FH parameters. Note that maximal gain rates are fixed according to the considered geographical location and to the observed type of forest. Unlike gains, AGB and FH losses are not constrained, as they can happen very abruptly.
- allometric consistency allows AGB and FH fields to be mutually regularized using local, forest-class-specific relationships [2]. Allometric equations, as well as their associated dispersion, are directly estimated from L2b maps, at each GC, for each forest class provided by a land-cover map, and at the scale of an L2b tile, i.e. over regions of about 100 km x 100 km at the equator. The regularization process is implemented in the form of a Maximum Likelihood optimization, aiming to determine AGB and FH space-time maps which maximize a compound likelihood function composed of loss terms related to L2b, spatial, temporal and allometric statistics [3]. A log-normal framework is adopted, which allows this optimization to be represented as a very large but sparse system of linear equations. At each new GC, the optimization process is run considering the freshly estimated L2b parameters in addition to the previously regularized fields. As a consequence, L3 estimate maps are expected to change significantly during the lifetime of the mission, with a quality level that increases with time. The L3 processor delivers, at each GC and for each processed tile, L3 AGB and FH maps together with their confidence levels [4] and descriptors of their temporal evolution, auxiliary information related to the considered land-cover maps, and the parameters of the BIOMASS allometric relationships at all dates and for all forest types. An example of regularization, built from realistic AGB and FH maps estimated over Gabon, is provided and illustrates the capability of the processor to actually improve the consistency of L2b product maps. [1] Quegan, S. et al. “The European Space Agency BIOMASS mission: Measuring forest above-ground biomass from space”, Remote Sensing of Environment, Volume 227, 2019, Pages 44-60, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2019.03.032. [2] Chave, J. et al.
“Improved allometric models to estimate the aboveground biomass of tropical trees”, Global change biology, 2014, 20(10), pp.3177-3190. [3] Tarantola, A. (2005) Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM: Society for Industrial and Applied Mathematics, 342 p. https://doi.org/10.1137/1.9780898717921 [4] Keener, R.W., 2010. Theoretical statistics: Topics for a core course. Springer Science & Business Media.
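The "very large but sparse system of linear equations" behind the regularization can be illustrated with a toy version that keeps only the L2b data term and a quadratic spatial-smoothness term on a log-parameter map (e.g. log-AGB). The λ and σ² values are hypothetical, and the operational processor additionally carries temporal and allometric terms; this sketch only shows why the normal equations stay sparse.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def regularize_log_field(y_log, sigma2, lam, shape):
    """Quadratic (Gaussian-in-log) regularization of a noisy log-parameter map:
        minimize  ||x - y||^2 / sigma2  +  lam * ||D x||^2,
    where D stacks horizontal and vertical first differences on the pixel
    grid. The normal equations (I/sigma2 + lam * D^T D) x = y/sigma2 are
    sparse, so they scale to very large tiles."""
    rows, cols = shape
    n = y_log.size
    # first-difference operators for a row-major (C-order) raveled image
    dx = sp.kron(sp.eye(rows), sp.diags([-1, 1], [0, 1], shape=(cols - 1, cols)))
    dy = sp.kron(sp.diags([-1, 1], [0, 1], shape=(rows - 1, rows)), sp.eye(cols))
    D = sp.vstack([dx, dy])
    A = sp.eye(n) / sigma2 + lam * (D.T @ D)
    return spsolve(A.tocsc(), y_log / sigma2).reshape(shape)
```

Because the difference operator annihilates constants, the solution smooths local noise while exactly preserving the map mean, a desirable property for a biomass-stock product.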
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall K1)

Session: B.02.05 Restoring Biosphere Resilience: Transforming Agriculture, Forestry and Other Land Use (AFOLU) from Carbon Source to Sink - PART 2

To address climate change effectively, emissions must fall by 50% every decade to achieve net-zero emissions by 2050. A core component of this is the transformation of agriculture into a major carbon sink, as projected in current IPCC climate models. These models have assumed a shift from source to sink in Agriculture, Forestry and Other Land Use (AFOLU)—primarily agriculture—in the next two decades. In fact, even if fossil fuel use were eliminated today, emissions from the food system alone are enough to put the 1.5°C and 2°C targets out of reach. This pivotal role of the food system in climate dynamics underscores its broader impact on the planet. The global food system places immense pressure on Earth systems, primarily due to land-clearing (slash-and-burn practices and biomass burning), animal farming (enteric fermentation and manure management), and crop production (fertilisers and emissions from land and paddies). This makes it the single largest driver of planetary boundary transgressions and Earth system instability.

Agricultural ecosystems cover more than 40% of the global land surface, making agricultural land the largest terrestrial biome on the planet, with animal agriculture taking up 83% of it. In the past 300 years, a staggering 55% of all ice-free land has been converted into croplands, pastures and rangelands, leaving only 45% for natural or semi-natural ecosystems. Earth’s terrestrial ecosystems store a vast amount of carbon, about 60 times the yearly human-induced greenhouse gas emissions, with soil containing roughly 70% of this (1500–2400 GtC). To harness this potential, we must begin to reclaim agricultural land, a process facilitated by a global shift towards sustainable plant-based food sources, which could ultimately free up 75% of the agricultural land for rewilding and restoration.

The use of Earth Observation data in land applications is well-explored and maturing quickly, providing in-depth insights and monitoring services for terrestrial ecosystems, which in turn supports the transformation of food systems. We therefore invite Earth Observation data application researchers and engineers to submit abstracts for this session that:

  • Showcase best practices, case studies and research utilizing Earth Observation for agroecology, nature restoration, Agriculture, Forestry and Other Land Use (AFOLU) statistics monitoring, carbon sink tracking (e.g., via Net Primary Production (NPP) or Above Ground Biomass (AGB)), monitoring nutrient loading in terrestrial and aquatic ecosystems, detecting resilience within agricultural landscapes for early warning systems, and more.

This session aims to raise awareness, nurture, support, and expand a community committed to transforming the current food system into one that regenerates and strengthens the biosphere’s innate resilience—one that preserves and restores land and aquatic ecosystems, allocates cropland to the most productive regions, adopts land management systems that work with nature rather than against it, transitions to plant-based food sources, and serves as a future carbon sink.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall K1)

Presentation: Nationwide Hedgerow Monitoring With VHR Satellites and Deep Learning to Support Climate and Biodiversity Action.

Authors: Javier Muro, Lukas Blickensdörfer, Tom Brög, Anna Köber, Dr. Marcel Schwieder, Prof. Dr. Axel Don, Dr. Stefan Erasmi
Affiliations: Thünen Institute of Farm Economics, Thünen Institute of Climate-Smart Agriculture
Efforts to mitigate climate change in the agricultural sector increasingly aim to shift this sector from a source of carbon emissions to a net carbon sink. Hedgerows play a valuable role in this process, capturing carbon in their branches and roots, increasing soil organic carbon up to 3 meters from their edges, and offering vital protection against wind erosion when fields are bare. Additionally, they provide year-round habitat and food sources for diverse animal species. However, European countries, including Germany, may have lost about 50% of their hedgerows since the 1950s. National estimates of hedgerow coverage are often outdated, inconsistent, and lack standardization across federal states, hampering conservation efforts and accurate carbon accounting. To address this, we developed an efficient national-scale hedgerow mapping workflow that leverages very high-resolution satellite imagery, deep learning, and imperfect reference data. Predictors consist of RGB plus near-infrared PlanetScope mosaics for April, June, August and October of 2018 and 2022 at 3 m resolution. The reference data consist of an incomplete inventory of polylines over the federal state of Schleswig-Holstein in northern Germany. We study the impact of different loss functions and predictors on model performance using a UNet architecture. We propose the Average Symmetric Surface Distance and the Hausdorff distance as more appropriate metrics for evaluating performance, alongside the traditional F1 score. We further validate our predictions at national scale against the Copernicus Small Woody Features (SWF) dataset and other available area-wide datasets. Despite flaws in the reference data, the models achieve good accuracy (F1 score 0.64), efficiently separating the vast majority of hedgerows from the rest of the landscape.
We found that flexible loss functions, such as binary cross-entropy, help mitigate errors produced by sensor artifacts in the time series. Models based on multitemporal imagery from the four available months that included the near-infrared band outperformed models using single months or RGB data only. We show that using datasets not specifically designed for hedgerows, such as the SWF, as surrogates for hedgerow inventories can lead to overestimations of up to 130% at national scale, as well as significant local over- and underestimations, underscoring the need for target-specific mapping models such as ours. Based on our predictions, we provide an overview of Germany's current hedgerow landscape and changes since 2018. We estimate approximately 4,400 km² of hedgerows—2.6% of the country's usable agricultural land—with uneven distribution across pedo-climatic regions and agricultural zones. By combining our predictions with ancillary data, we create a priority map for hedgerow planting and estimate the amount of carbon that could be sequestered if hedgerow cover were restored to 1950s levels over the next two decades.
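The boundary-distance metrics mentioned above can be made concrete. Below is a minimal sketch, not the authors' implementation, of how the Average Symmetric Surface Distance (ASSD) and Hausdorff distance between a predicted and a reference binary mask might be computed with NumPy and SciPy; the toy masks are invented:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    # Boundary pixels: the mask minus its erosion
    return mask & ~binary_erosion(mask)

def surface_distances(a, b):
    # Distance from each boundary pixel of `a` to the nearest boundary pixel of `b`
    dist_to_b = distance_transform_edt(~surface(b))
    return dist_to_b[surface(a)]

def assd_and_hausdorff(pred, ref):
    d_pr = surface_distances(pred, ref)
    d_rp = surface_distances(ref, pred)
    assd = (d_pr.sum() + d_rp.sum()) / (len(d_pr) + len(d_rp))
    return assd, max(d_pr.max(), d_rp.max())

# Toy case: a predicted hedgerow line shifted one pixel from the reference
ref = np.zeros((10, 10), dtype=bool); ref[4, 2:8] = True
pred = np.zeros((10, 10), dtype=bool); pred[5, 2:8] = True
assd, hd = assd_and_hausdorff(pred, ref)  # both equal 1.0 here
```

Unlike the F1 score, these metrics reward predictions that are spatially close to the reference even when they do not overlap it pixel-for-pixel, which suits thin linear features such as hedgerows.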
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall K1)

Presentation: A Global, Earth Observation-Based Monitoring System for Land-Based Greenhouse Gas Fluxes in the 21st Century

Authors: Sarah Carter, David Gibbs, Nancy Harris, Melissa Rose, Erin Glen, Angela
Affiliations: World Resources Institute
Greenhouse gas (GHG) emissions from Agriculture, Forestry, and Other Land Uses (AFOLU) account for approximately one-quarter of global emissions. At the same time, land absorbs one-third of annual fossil fuel emissions. Understanding where and when these emissions and removals occur can help governments, companies, landowners, and other actors develop land-related climate change mitigation policies. However, high spatiotemporal variability and the mixture of anthropogenic and natural sources complicate the monitoring of GHG fluxes from AFOLU. Existing approaches for monitoring GHG fluxes from land and agriculture are generally not fully transparent, consistent, or spatial. This limits their application for climate policy and progress tracking across spatial scales and among different actors. Here, we introduce a global, geospatial, Earth observation-based framework for monitoring GHG fluxes from AFOLU. It expands upon a similar framework for monitoring GHG fluxes from forests. The geospatial data integration framework maps gross emissions and removals due to AFOLU in the 21st century by combining Earth observation-based data following Intergovernmental Panel on Climate Change guidelines for national GHG inventories. It uses global data rather than combining national or regional data, producing a model that is flexible, globally consistent, and fully spatial. Emissions and removals from land-use change are based on sequential maps of land cover and land use, vegetation height, fires, and more, and comprise loss and gain of woody biomass (forests and trees outside forests) at 30-m resolution. Changes in soil carbon in mineral soils and emissions due to drainage of peat and other organic soils are also mapped at 30-m resolution. Biomass burning and agricultural activities (e.g., fertilizer application, livestock emissions, rice paddy emissions) are mapped at a range of resolutions.
These maps can inform climate action and monitor progress towards climate change goals across a wide range of spatial scales. This presentation will share preliminary GHG flux maps and global and regional estimates of GHG fluxes from the AFOLU sector and its sub-sectors.
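As an illustration of the gross-flux bookkeeping idea described above (tracking emissions and removals separately per pixel and combining them only at reporting time), the toy sketch below uses small NumPy arrays in place of 30-m rasters; all numbers are invented:

```python
import numpy as np

# Toy gross-flux bookkeeping: emissions and removals kept as separate
# per-pixel layers (2x2 arrays standing in for 30-m rasters, t CO2e)
emissions = np.array([[120.0, 0.0],
                      [35.0, 10.0]])   # gross emissions per pixel
removals = np.array([[5.0, 60.0],
                     [40.0, 10.0]])    # gross removals per pixel

net = emissions - removals             # positive = net source, negative = net sink
total_net = float(net.sum())           # aggregate over a reporting unit
```

Keeping gross emissions and gross removals as separate layers is what lets the same data answer questions at any spatial scale, since aggregation to a country, concession or pixel block is just a sum over the relevant cells.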
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall K1)

Presentation: IMPROVING ESTIMATES OF CARBON LOSS BY DEFORESTATION AND FOREST DEGRADATION BY THE BIOMASS MISSION

Authors: Thuy Le Toan, Dr Stephane Mermoz, PhD Juan Doblas, Alexandre Bouvet, Professor Laurent Ferro-Famil, Ludovic Villard, Dinh Ho Tong Minh, PhD Thierry Koleck
Affiliations: Cesbio, GlobEO, TETIS, INRAE, Centre National d'Etudes Spatiales
To improve our understanding of the carbon cycle, accurate estimates of forest biomass are needed. High biomass values of dense tropical forests are particularly important, as they determine uncertainties in carbon stock assessment and in carbon loss due to deforestation and forest degradation. This information is also essential for national GHG communications to the UNFCCC. However, using existing remote sensing systems, it remains difficult to estimate the above-ground biomass (AGB) of dense tropical forests, due to the lack of sensitivity of remote sensing data in the high AGB range, typically 200–400 Mg ha-1. With a wavelength of about 70 cm, P-band SAR has proven more suitable for measuring the high biomass range, and the wave penetration depth is compatible with the height of tropical forests. The measured signal, which carries information about the biomass and 3D structure of the forest, can be used to infer parameters such as AGB and forest height. This paper presents the expected contribution of the BIOMASS mission to estimating carbon loss due to deforestation and forest degradation, using P-band SAR to provide AGB with reduced uncertainties in the 200–400 Mg ha-1 range. According to IPCC guidelines, the approach to estimating emissions from deforestation or forest degradation can be based on activity data (AD) and emission factors (EF). AD correspond to the change in forest area due to deforestation (in ha) since the last report. Emission factors represent the carbon (and therefore biomass) stored in the forest before deforestation. The key variables involved in the calculation are aboveground biomass and change in forest area, accessible from forest inventories and remote sensing (IPCC, 2019). With the increasing availability of data from current space sensors and advances in the ability to process big data, much work has been conducted on both forest disturbance detection and aboveground biomass (AGB) map production.
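The AD × EF approach described above can be made concrete with a small worked example. The sketch below is illustrative only: the area, the AGB value and the 20% uncertainty are hypothetical numbers chosen to match the ranges discussed in the abstract, and the 0.47 carbon fraction is the IPCC default for converting dry biomass to carbon:

```python
# Illustrative IPCC gain-loss calculation: emissions from deforestation
# = activity data (area deforested) x emission factor (carbon stock lost).

CARBON_FRACTION = 0.47   # IPCC default carbon fraction of dry biomass
CO2_PER_C = 44.0 / 12.0  # molecular-weight ratio converting C to CO2

def deforestation_emissions_tco2(area_ha, agb_mg_per_ha, agb_uncertainty=0.0):
    """Gross emissions (t CO2) for a deforested area; below-ground pools ignored."""
    carbon_loss = area_ha * agb_mg_per_ha * CARBON_FRACTION  # t C
    emissions = carbon_loss * CO2_PER_C                      # t CO2
    return emissions * (1 - agb_uncertainty), emissions * (1 + agb_uncertainty)

# Hypothetical: 1,000 ha of high-biomass forest (300 Mg ha-1 AGB) with the
# ~20% AGB accuracy anticipated for BIOMASS at 4 ha resolution
low, high = deforestation_emissions_tco2(1_000, 300, agb_uncertainty=0.20)
```

The example makes visible how uncertainty in the emission factor propagates directly into the emissions estimate, which is why reducing AGB uncertainty in the 200–400 Mg ha-1 range matters so much for carbon-loss accounting.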
Significant advances have been observed in remote sensing-based systems aimed at detecting forest disturbances in tropical forests in near real time (NRT), using SAR data as the main input. At the same time, several regional and global AGB maps of tropical forests have been produced, combining multiple satellite observations, in situ inventory measurements, and airborne lidar data. Comparison of these global AGB maps shows that the spatial patterns and magnitude of AGB are well captured, but gaps in AGB estimates are significant, especially in high carbon stock forests with AGB > 200 Mg ha-1.

Carbon loss by deforestation: Since deforestation mainly affects high-AGB forests, large uncertainties in carbon loss through deforestation (referred to as LULCC emissions) are reported when using existing remote sensing-based AGB products. As an illustration, we focus on Amazonian forests, where the average AGB of forests that have undergone deforestation has increased over the last decade, indicating the progression of deforestation fronts within forests (Ometto et al., 2014). Comparison of AGB loss due to deforestation calculated using AGB estimates from available pantropical remote sensing-based maps shows that the range of AGB loss values in the Brazilian Amazon differs significantly (up to a factor of 2) between estimates. Specifically, remote sensing-based AGB was found to be largely underestimated when AGB > 200 Mg ha-1. Using advanced SAR techniques (ground cancellation and SAR tomography), it has been demonstrated that P-band SAR data can provide AGB estimates up to 500 Mg ha-1.
Considering the limitations of a space-based P-band SAR system, including a) the limited pulse bandwidth of 6 MHz, b) ionospheric perturbations, and c) temporal decorrelation due in particular to environmental effects such as rain, wind, and diurnal and seasonal variations, the overall simulation results show that at a resolution of 4 ha, BIOMASS can provide AGB in the order of 200 to 400 Mg ha-1 with an accuracy of 20%. This indicates that the data acquired by BIOMASS could significantly contribute to improving estimates of carbon loss through deforestation.

Carbon loss by forest degradation: Recent studies show that forest degradation, neglected in remote sensing-based approaches, could also explain the decline in the forest carbon sink observed by top-down approaches. However, the impacts of forest degradation are understudied, largely because international emission reduction programs have focused on deforestation, which is easier to detect and therefore easier to monitor. Yet tropical forest degradation is estimated to be responsible for 25% of forest carbon emissions (Pearson et al., 2017), and about 20% of tropical forests are disturbed by logging activities that can then lead to forest degradation (Hubau et al., 2020). Unlike deforestation, selective logging represents a more diffuse disturbance in which only a subset of trees (the most economically valuable) are harvested. Typically, the AGB of harvested wood ranges from about 50 Mg ha-1 to 100 Mg ha-1. A forest with an AGB of, for example, 400 Mg ha-1 could see its AGB reduced to 350 or 300 Mg ha-1 after selective logging. Such a reduction, despite its importance for the carbon balance, cannot be detected with existing remote sensing systems. To illustrate the potential of BIOMASS to measure AGB reduction by selective logging, we examine the range of AGB at the Paracou experimental site in French Guiana. From 1986 to 1987, forest plots there were logged at different logging intensities.
Before logging, AGB was in the range of 420–440 Mg ha-1, and after logging 220–320 Mg ha-1, including additional tree mortality (Sist et al., 2012). At the time of the TropiSAR campaign in 2009, 22 years after selective logging, AGB ranged between 280 and 380 Mg ha-1, i.e. about 63% and 90% of pre-logging values, respectively. Using SAR tomography, it was found that the different degrees of degradation could be detected (Le Toan et al., 2024). A comparison with available remote sensing products indicated that BIOMASS has the unique capacity to estimate the AGB of these degraded forests. In summary, the results indicate that the data acquired by BIOMASS will significantly contribute to improving estimates of carbon sources, not only through deforestation but also through forest degradation. However, the performance in terms of spatial configuration remains to be evaluated. Resolution effects due to the 6 MHz bandwidth, ionospheric effects, and temporal decorrelation have been evaluated in simulation studies, but the overall performance of the BIOMASS mission will be assessed once the system is in orbit.

References:
Hubau, W., Lewis, S. L., Phillips, O. L., Affum-Baffoe, K., Beeckman, H., Cuní-Sanchez, A., ... & Zemagho, L. (2020). Asynchronous carbon sink saturation in African and Amazonian tropical forests. Nature, 579(7797), 80-87.
Le Toan, T., Villard, L., Ho Tong Minh, D., Doblas, J., Mermoz, S., Ferro-Famil, L., ... & Polidori, L. (2024). Tackling high biomass in tropical forests through the BIOMASS mission. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 48, 287-293.
Ometto, J. P., Aguiar, A. P., Assis, T., Soler, L., Valle, P., Tejada, G., Lapola, D. M., & Meir, P. (2014). Amazon forest biomass density maps: tackling the uncertainty in carbon emission estimates. Climatic Change, 124, 545-560.
Pearson, T. R., Brown, S., Murray, L., & Sidman, G. (2017). Greenhouse gas emissions from tropical forest degradation: an underestimated source. Carbon Balance and Management, 12, 1-11.
Sist, P., Blanc, L., Mazzei, L., Baraloto, C., & Aussenac, R. (2012). Nouvelles connaissances sur la dynamique globale de la biomasse après exploitation en forêt nord amazonienne. Bois et Forêts des Tropiques, 314, 41-49.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall K1)

Presentation: Advancing Land-Use Research: Exploring the Potential and Limitations of Earth Observation for Monitoring Human Pressures on Ecosystems

Authors: Sarah Matej, Florian Weidinger, Linda Gotsmy, Harald Grabher, Egor Prikaziuk, Michael Marshall, Michael Schlund, Rebecca Varney, Nina Raoult, Stephen Sitch, Tim Ng, Karl-Heinz Erb
Affiliations: Universität für Bodenkultur Wien (BOKU University), University of Twente (UT-ITC), University of Exeter (EXON), The Flemish Institute for Technological Research (VITO)
Land use is closely connected to essential components of the Earth System, such as climate, biodiversity, and biogeochemical cycles. Gaining a deeper understanding of land-use patterns and dynamics is crucial for evaluating their effects on these components and for developing strategies to promote sustainability. However, data of sufficient thematic detail to analyze the spatiotemporal dynamics and drivers of land use, and in particular of land-use intensity and its changes over time, are currently lacking. The ongoing progress in Earth Observation (EO) imagery, both in thematic detail and in spatial resolution, allows for significant advances in monitoring changing patterns of land cover and land use. Various EO products tracking terrestrial surface characteristics are available as consistent time series, allowing for improved analysis of spatiotemporal changes. However, EO only detects certain aspects of land use, such as vegetation types and land cover. Other aspects, such as land-use or management intensities, remain difficult to capture. To address these knowledge and data gaps, we present the LUIcube, a detailed global database on land use. It builds upon the ESA CCI land cover product and integrates statistical data on agricultural and forestry production to trace global land-use area and intensity developments annually from 1992 to 2020 for 32 land-use classes at 30 arcsecond resolution. The LUIcube focuses on Net Primary Production (NPP) as a key Earth System parameter vital for ecosystem and societal functioning, and applies the framework of Human Appropriation of NPP (HANPP) to assess, in high detail, changes in NPP induced by land conversion and land management. In this presentation, we use our experience in constructing the LUIcube to reflect on and explore the potential and limitations of EO for monitoring and quantifying human pressures on ecosystems, from global to local scales.
Based on in-depth HANPP analyses for four case study regions in Africa, constructed within the Project “Land Use Intensity’s Potential, Vulnerability and Resilience for Sustainable Agriculture in Africa (LUISA)”, we illustrate the potential of EO to capture certain patterns and dynamics of NPP and its use by society, but also discuss the difficulties encountered when attempting to attribute EO signals to specific land-use practices. Although EO has a high potential to contribute to environmental monitoring and reporting, especially in regions such as the explored African case studies where statistical data might be incomplete, we identify domains where other data sources, such as census information or insights from stakeholder engagement, will remain essential. Our case studies show that appropriate strategies for data triangulation vary between land-use types and require a clear understanding of the various uncertainties and conceptual differences associated with specific types of data.
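As a hedged illustration of the HANPP bookkeeping the LUIcube applies, the sketch below uses hypothetical values and the standard decomposition of HANPP into a land-conversion term and a harvest term; it is not taken from the LUIcube itself:

```python
# Standard HANPP accounting with hypothetical values (gC per m2 per year):
# HANPP = (potential NPP - actual NPP) + harvested NPP
npp_pot = 800.0      # potential NPP of the natural vegetation
npp_act = 600.0      # actual NPP under current land use
npp_harvest = 250.0  # NPP harvested or destroyed during harvest

hanpp_luc = npp_pot - npp_act     # appropriation via land conversion
hanpp_harv = npp_harvest          # appropriation via harvest
hanpp = hanpp_luc + hanpp_harv    # total human appropriation of NPP
npp_eco = npp_pot - hanpp         # NPP remaining in the ecosystem
```

The identity npp_eco = npp_act - npp_harvest holds by construction, which is what makes HANPP a consistent per-pixel budget: EO can constrain npp_act, while npp_pot and npp_harvest typically require modelling and census data, respectively.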
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall K1)

Presentation: Bringing Clarity on Post-disturbance Aboveground Carbon Emission and Removal Factors in Tropical Moist Forests using Earth Observation

Authors: Viola Heinrich, Amelia Holcomb, Simon Besnard, Charlotte Wheeler, Daniela Requena Suarez, Clement Bougoin, Yidi Xu, Na Chen, Celso Silva Junior, Susan Cook-Patton, Nathaniel Robinson, David Gibbs, Luiz Aragão, Martin Herold
Affiliations: Section 1.4 Remote Sensing and Geoinformatics, GFZ, Department of Computer Science, University of Cambridge, Joint Research Centre, European Commission, Laboratoire des Sciences du Climat et de l'Environnement, LSCE/IPSL, CEA-CNRS-UVSQ, Université Paris-Saclay, 91191 Gif-sur-Yvette, Dept of Civil and Environmental Engineering, Massachusetts Institute of Technology, Amazon Environmental Research Institute - IPAM, The Nature Conservancy, World Agroforestry (CIFOR-ICRAF), World Resources Institute, National Institute for Space Research (INPE)
Tropical forests are temporally and spatially dynamic ecosystems shaped by deforestation, degradation, and recovery processes, with consequences for the carbon cycle. While emissions from deforestation are relatively well understood and quantified, emissions from degradation such as fire, logging, windthrow and drought remain poorly quantified, reflecting the complexity of these processes in space and time. Similarly, the carbon recovery potential of degraded forests is understudied compared to that of secondary forests regrowing after deforestation. Closing these knowledge gaps is crucial for accurate estimates of the tropical carbon budget and for addressing the priorities of international climate policies, which increasingly emphasize the value of protecting and restoring forests, without which global warming cannot be kept within critical temperature limits. In recent years, research on carbon emissions and removals in tropical forests has surged, driven by advances in Earth Observation. Here we synthesize these approaches with the aim of bringing clarity and advancing our understanding of aboveground carbon (AGC) emission and removal factors applicable to tropical moist forests across the pantropics. Our synthesis of 35 studies shows that emission estimates vary widely across disturbance types: average AGC losses, relative to nearby and previously undisturbed forest, are 3% (range 1–4%) for extreme drought, 30% (range 13–17%) for selective logging, and 54% (range 22–83%) for fire. Our analysis underscores the need to account for disturbance severity, frequency and the cumulative effects of interacting disturbances to improve emission estimates. For AGC recovery, our synthesis of 60 studies showed that degraded forests regain 41–117% of AGC within 20 years relative to undisturbed forests, significantly more than forests regrowing after deforestation, which regained between 1% and 74%.
Across data sources (field data, satellite-derived, airborne and data-compilation studies), there is agreement that young recovering forests (<20 years) have a higher absolute rate of regrowth than older ones (>20 years). Focusing on two regions within the Amazon, where we have a good sample of field data and region-specific remote sensing data, we see good agreement between field-derived and satellite-derived regrowth rates. Remote sensing data therefore have the potential to fill gaps in our spatial knowledge where field data are lacking. The results of our synthesis can be integrated to build a detailed carbon flux bookkeeping model that considers different forest disturbance types and their associated carbon emissions and removals, and to assess the sensitivity of using one emission/removal factor over another. Our results also highlight some of the major gaps that still exist in providing long-lasting, relevant information to the policy and wider carbon budget science domains. To this end we highlight key research needs and gaps: (i) to reduce the variability of emission factors within disturbance types, we need to further stratify by disturbance severity and frequency of occurrence, which will likely have implications for any potential regrowth following the disturbance; (ii) there remains a large bias towards research focused on the Americas, and the Amazon more specifically, with large potential to expand the focus domain. We call for a more connected approach between research on deforestation, degradation, regrowth and recovery, treating these processes as a single ongoing dynamic in space and time. This abstract and presentation result from a workshop on “Quantifying Regrowth and Recovery from Deforestation and Degradation at Large Scale (R2D2)” held in March 2024. We thank and acknowledge all participants who took part and contributed to discussions.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall K1)

Presentation: EO4CarbonFarming – Large-Scale Analyses on Carbon Sequestration as an Effect of Sustainable Farming Methods

Authors: Jakob Wachter, Isabella Kausch, Silke Migdall, Dr. Heike Bach
Affiliations: Vista Geo Gmbh
Agriculture can make a significant contribution to reducing atmospheric CO2 through adapted cultivation methods, actively binding CO2 through the targeted build-up of humus in the soil. In addition, more resilient farming methods such as crop rotation and the planting of catch crops contribute significantly to more sustainable food production. To exploit this potential of carbon farming, a solution for monitoring, reporting and verification (MRV) of these measures is required. Valuable information on vegetation and soil can be derived from high-resolution, globally available Earth Observation data from the Copernicus program. These data are therefore ideally suited to enable such a solution, both for past and future periods. Within the ESA Business Applications project "EO4CarbonFarming" (EO4CF), such an MRV tool has been developed based on Copernicus data. It can monitor the growth of catch crops and verify farming measures taken to ensure CO2 uptake and humus build-up in the soil. To facilitate this, high-resolution optical and radar data (Sentinel-1, Sentinel-2) are used in combination with high-quality pre-processing methods, a radiative transfer model and newly developed algorithms specifically for deriving soil humus content. Sound determination of carbon stocks in fields is an essential component of assessing the effectiveness of carbon farming for the climate and of developing business models on this basis. After the successful development and testing of the EO4CF services, the project is now in the pilot phase, in which the service results are generated and analysed for 5 pilot farms in 2 countries. Within the pilot, soil organic carbon (SOC) content has been calculated for two timesteps, one in 2017 and one in 2023. The changes in SOC content are then correlated against the amount of biomass on the monitored fields over the period between the two SOC measurements.
The amount of biomass is obtained using various vegetation parameters derived from Sentinel-2 data in the winter months, such as the Leaf Area Index (LAI). Results are expected to show a positive correlation between winter biomass, representing the application of sustainable farming measures, and carbon sequestration in the soil. This effect has already been shown with a very high correlation in one pilot region and is expected to be similarly strong in the other pilot regions. This project is funded under ESA Contract Number 4000134139/21/NL/MM/mr.
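The correlation analysis described above can be sketched as follows. All field names and values are hypothetical; the example only illustrates correlating SOC change between two timesteps against a winter biomass proxy such as LAI:

```python
import numpy as np

# Hypothetical per-field records: SOC (e.g., % by mass) measured at two
# timesteps and a winter biomass proxy (mean winter LAI); numbers invented
fields = {
    "field_a": {"soc_2017": 1.62, "soc_2023": 1.64, "winter_lai": 0.2},
    "field_b": {"soc_2017": 1.85, "soc_2023": 1.93, "winter_lai": 0.6},
    "field_c": {"soc_2017": 2.10, "soc_2023": 2.32, "winter_lai": 1.0},
    "field_d": {"soc_2017": 1.95, "soc_2023": 2.23, "winter_lai": 1.4},
    "field_e": {"soc_2017": 2.40, "soc_2023": 2.81, "winter_lai": 1.8},
}

soc_change = np.array([f["soc_2023"] - f["soc_2017"] for f in fields.values()])
winter_lai = np.array([f["winter_lai"] for f in fields.values()])

# Pearson correlation between the winter biomass proxy and SOC change
r = np.corrcoef(winter_lai, soc_change)[0, 1]
```

In practice, a per-field analysis like this would of course use many more fields and control for confounders such as soil type and climate before attributing SOC change to farming measures.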
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.96/0.97)

Session: F.04.13 Urban Resilience - PART 2

Since the early 2000s, more than half of the world's population has lived in cities, and the dynamic trend of urbanization continues to grow: by 2050, 7 out of 10 people are expected to be living in cities. The speed and scale of urbanization lead to important development challenges and set clear needs for effective urban planning and development measures that enhance city resilience. The use of Earth Observations and their integration with other sources of information into effective urban data analytics tools can produce a quantum leap in tracking progress towards and achieving international development goals such as SDG 11, the Sendai Framework for Disaster Risk Reduction or the New Urban Agenda (Habitat III).
The advent of continuous streams of high-quality, free-of-charge satellite observations, such as the Sentinels of the European Copernicus program, combined with the emergence of automated methods for large-scale data processing and image analysis and the falling cost of computing, offers unprecedented opportunities to efficiently monitor changes and trends in urban development globally. In addition, the synergistic use of EO data from different satellite sensors (radar/optical, HR/VHR, SAR/InSAR, TIR, hyperspectral, lidar), combined with ancillary datasets such as ground-based and airborne data, drones, and citizen science data, opens new pathways to extract an unprecedented range of urban information. Urban remote sensing is therefore progressively evolving from traditional urban extent and land cover mapping into advanced urban applications, connecting to the monitoring of urban-related environmental parameters (impervious surfaces, green and blue infrastructures, urban welfare, air pollutants). Moreover, municipalities and city practitioners are showing growing interest in using these applications as decision support tools, leading to stronger demand for interactive tools that deliver EO-integrated solutions as actionable information.
The series of LPS 2025 urban sessions will present the recent scientific advances in the application of remote sensing in urban applications, discuss opportunities and challenges which lie ahead for mainstreaming EO solutions into urban development practices and policies, and highlight future paths of research.
Topics of interest for the urban sessions include (not limited to):
• Multi-sensor, multi-scale and multi-temporal approaches to urban mapping
• Remote sensing methods for characterising urban areas (multispectral, hyperspectral, SAR/InSAR, TIR, LiDAR)
• Detailed LULC classification and change detection
• Cost-effective use of commercial data
• Downscaling (e.g., super-resolution)
• AI for urban
• 3D/4D mapping
• Night-lights applications
• UAVs/drones, aerial platforms
• Capacity building, education, citizen science, crowdsource data and tools for urban applications
• EO integration in urban social science and policy
• Urban planning and modelling of urban growth
• Health, well-being and liveability
• Urban ecology
• Nature-based solutions
• Urban energy infrastructure and renewables (demand, access, smart grids)
• Urban climate (Urban Heat Islands, pollution/air quality)
• Urban green and blue infrastructures
• Transport, Infrastructure and Sustainable Mobility
• Natural hazards, risk reduction and urban resilience
• Informal settlements
• Population distribution

Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: GDA Urban Sustainability – Facilitating Analyses of Urban Heat Islands at Scale and at Detail.

Authors: Dr. Christoff Fourie, Christophe Sannier, Fabian Enßle
Affiliations: GAF AG
Urban heat and the urban heat island (UHI) effect are well-studied phenomena with health and economic impacts on populations in dense urban areas throughout the world. The Global Development Assistance (GDA) Urban Sustainability activity aims to mainstream Earth Observation (EO) into the urban development programmes of International Financing Institutions (IFIs). The project, led by GAF AG and a consortium of European partners, identified with the World Bank, the Asian Development Bank and the European Bank for Reconstruction and Development (EBRD) the increasing challenges posed by UHIs in developing countries. There is thus a need for UHI monitoring systems and for the possibility of undertaking analysis at city level, but existing thermal imagery from the Landsat satellites is often considered too coarse for such detailed analysis. In the context of this project, new thermal sensor data available from Constellr will be pioneered to map and monitor urban heat islands. The study will be presented holistically, touching on its inception, technical developments, outreach and the novel platform being developed. The main aspects elaborated upon will include: • Methodological developments in mapping the urban heat island phenomenon, focusing on a two-phase approach. • Building an intuitive web platform that allows self-service analytics on urban heat. • Outreach to a selection of cities to demonstrate and help further develop the service via IFIs. Methodologically, a two-phase approach is followed. Phase one allows for cost-effective analysis of entire cities using purely remote sensing data to derive the Surface Urban Heat Island Intensity (SUHII). SUHII is an indicator of the increased local temperature due to the urbanization/imperviousness effect and allows the absolute temperature to be decoupled into two components. This helps in modelling urban heat and in informing policy in the spheres of healthcare and city planning.
One of the methods employed (Li et al., 2018) estimates the linear relationship between land surface temperature, measured here via Landsat 8 and Constellr, and impervious surfaces. Additional analyses/features implemented include time-series views, a simple model for predicting the increased SUHII beyond the recorded temperatures, and a basic greening simulation showing the effect of transforming selected areas into parks. Results are presented in vector (city block) format, with additional layers such as population density assisting the analysis. While this first phase, done purely with remote sensing data, is useful and cost-effective for analysis at scale, it has two major limitations. Firstly, the capture times of Landsat 8 and Constellr data (10:42 and 13:30 local time, respectively) do not coincide with peak daytime temperatures. Secondly, the derived LST and SUHII products are not validated for accuracy. A calibration regime is envisaged on the regression approach mentioned above, or a new model similar to InVEST (Hamel et al., 2023) created for a given city, with both LST and in-situ data used for UHI calibration. This second-tier service also intrinsically allows for accuracy assessment of derived block-level urban heat intensity. The second phase entails a crowd-sourced ground data collection campaign, piloted in this work. Street-level temperature and other telemetry data, together with timestamps, are captured. This information is used to calibrate the remotely sensed SUHII for specific cities and serves as an additional validation dataset. Results are presented at city-block level, using OSM vectors as containers or refined with in-house segmentation tools (Probeck et al., 2021) where OSM detail is lacking. A major component of the work is to ensure that the SUHII products can be visualized and analysed by non-EO experts for identifying UHI-prone areas and planning mitigation actions.
For that purpose, an additional development was conducted: an intuitive web application for stakeholders to analyse and explore urban heat information for cities. This application, titled the Urban Microclimate Analysis Platform (UMAP), allows users to conduct on-the-fly, city-wide analyses and visualisations of SUHII. Additionally, it offers temporal views, estimated population vulnerability and basic greening simulations. The application is designed to be extendable, meaning new data sources or analysis methods can easily be integrated as additional panels in the main dashboard. It also provides academic resources related to urban heat, city planning and methods, and links/contacts to organizations addressing this topic. The aim is to update the temporal data for cities on a yearly basis. A modern tech stack (next.js, deck.gl, etc.) used in the development also ensures a compelling user experience. Experiences from the previous iteration of GDA Urban highlighted the need for easy access to analytics, prompting this development. In conclusion, the paper showcases multiple case studies and outreach activities involving International Financing Institutions (IFIs), consultants with expertise in urban planning, and local stakeholders, specifically for mapping and analysing urban heat in Abidjan, Novi Sad and Nairobi. The first-phase analytics were done for all three cities, while the second, ground-calibration phase is planned for Abidjan and Nairobi specifically. The aim is for this project and its initial technical developments to spur future interest in such a cross-cutting service.
References:
Li, H., Zhou, Y., Li, X., Meng, L., Wang, X., Wu, S., & Sodoudi, S. (2018). A new method to quantify surface urban heat island intensity. Science of the Total Environment, 624, 262-272.
Probeck, M., Ruiz, I., Ramminger, G., Fourie, C., Maier, P., Ickerott, M., ... & Dufourmont, H. (2021, July). CLC+ Backbone: Set the Scene in Copernicus for the Coming Decade. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 2076-2079). IEEE.
Hamel, P., Bosch, M., Tardieu, L., Lemonsu, A., de Munck, C., Nootenboom, C., ... & Sharp, R. P. (2023). Calibrating and validating the InVEST urban cooling model: Case studies in France and the United States. EGUsphere, 2023, 1-25.
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Healthy Urban Habitat Index

Authors: Alen Berta, Viktor Porvaznik
Affiliations: CGI Deutschland
Climate change and extreme weather conditions, such as floods, heat waves, and droughts, significantly impact urban areas, affecting natural ecosystems, human lives, and public and private properties. City administrations and planners must use smart solutions to understand the interactions between human activities and ecosystems, assess past changes, monitor the present, and simulate future scenarios. Accelerated urbanization and climate change increase threats to urban citizens, exacerbating the heat island effect, decreasing air quality, and negatively impacting health and productivity. Effective city climate adaptation strategies must consider socio-economic contexts and risk assessments to create resilient and sustainable environments. To support city planners, but also to provide actionable insights to decision makers and easy-to-understand information for citizens, CGI has developed the Healthy Urban Habitat Index (HUH Index). This demonstrator has already been tested on a few pilot areas, whose results will be showcased. For the first time, this innovative index combines urban heat stress and air pollution (the main negative influences on urban living) with analyses of blue and green infrastructure (the main positive influences). Additionally, the ability to use inputs from almost any EO satellite (Sentinel-1, 2, 3, 5; Landsat, etc.), and to digest information from VHR imagery, IoT sensors or LiDAR where available, in combination with the analysed parameters, makes the HUH Index unique. Using various ML techniques (e.g. sharpening, classification) supported by advanced spatial post-processing and geographically-weighted multi-criteria decision analysis (MCDA), the index allows for objective and automatic distillation of vast amounts of satellite-based data into a single index (with values 1-5). The HUH Index also provides information at different levels of complexity/detail, i.e. raw information on LST, land cover and air pollution, or intermediate analyses of heat stress and UHI, air pollution, or green/blue infrastructure (e.g. through compliance with the 3-30-300 rule). With this multi-levelled approach, the HUH Index allows end-users not only to easily detect possible issues and track positive or negative changes through ongoing monitoring or after the implementation of mitigation measures at a certain location, but also to easily communicate the results to the public and decision makers.
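As a rough illustration of how an MCDA-style weighted combination can distil several indicator layers into a single 1-5 score, here is a minimal, non-geographic sketch. The indicator names, weights and binning are hypothetical, not CGI's actual HUH methodology:

```python
def huh_index(criteria, weights, benefit):
    """Collapse normalized indicator layers into a single 1-5 index via a
    weighted sum (a plain MCDA sketch, without geographic weighting).

    criteria : values in [0, 1], one per indicator (e.g. heat stress,
               air pollution, green/blue infrastructure share)
    weights  : matching weights summing to 1
    benefit  : matching booleans; True if higher is better (green space),
               False if higher is worse (heat, pollution)
    """
    score = sum(w * (c if b else 1.0 - c)
                for c, w, b in zip(criteria, weights, benefit))
    # Map the [0, 1] score onto five classes (1 = worst, 5 = best).
    return min(5, int(score * 5) + 1)

# Hypothetical city block: high heat stress, moderate pollution, little green.
print(huh_index([0.9, 0.5, 0.1], [0.4, 0.3, 0.3], [False, False, True]))
```

The geographically-weighted variant would additionally let the weights vary per location; the binning into 1-5 stays the same.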
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Copernicus Services and Satellite Data Products in the Service of Urban Heat Mapping and Monitoring

Authors: Zina Mitraka, Giannis Lantzanakis, Emmanouil Panagiotakis, Nektarios Chrysoulakis, Giorgos Somarakis, Mattia Marconcini, Christian Feigenwinter, Dirk Lauwaet
Affiliations: FORTH - RSLab.gr, DLR - Deutsches Zentrum für Luft- und Raumfahrt, Universitaet Basel, Flemish Institute for Technological Research (VITO)
The increasing vulnerability of urban areas to overheating due to climate change and urbanization has heightened the importance of addressing the urban heat island (UHI) phenomenon. The UHI effect, characterized by elevated temperatures in urban areas compared to their rural surroundings, influences energy consumption, biodiversity, air quality, and human thermal comfort. As global temperatures continue to rise, the frequency and severity of extreme heat events are expected to increase, necessitating the integration of mitigation and adaptation strategies into urban planning. This study highlights the development and application of urban heat-related services using Copernicus Earth Observation (EO) data to provide high-resolution, city-specific heat load characterizations, thus informing urban resilience strategies. Copernicus EO data from services like the Land Monitoring Service (CLMS) and the Climate Change Service (C3S), coupled with Sentinel-2 and Sentinel-3 satellite data, are used in a cross-cutting way to address urban heat. The study combines land use, building height, vegetation, and imperviousness data with meteorological information to derive dynamic surface properties. Advanced techniques, including physically-based downscaling, modelling and statistics, are employed for five services: (1) dynamics of land surface temperature (LST), (2) surface UHI index (SUHII), (3) urban heat emissions, (4) heat storage monitoring, and (5) thermal comfort assessment. The methodological framework for deriving high-resolution LST involves combining low-resolution thermal infrared satellite data with dynamic surface cover maps, atmospheric parameters, and spectral emissivity libraries. This approach enables the generation of localized LST data at a resolution of 100 m, significantly enhancing the spatial and temporal granularity of thermal measurements.
SUHII is calculated by regressing LST against degree of imperviousness, providing insights into the spatial and temporal variations of urban-rural temperature contrasts. For urban heat emissions, turbulent sensible heat flux (QH) is derived using the Aerodynamic Resistance Method (ARM), which incorporates satellite-derived LST, atmospheric parameters, and detailed urban surface roughness. Heat storage flux (ΔQS) is estimated using the Objective Hysteresis Model (OHM), which employs radiation estimates, dynamic surface cover fractions, and coefficients for energy response. Thermal comfort is evaluated using the Wet Bulb Globe Temperature (WBGT) index, a composite measure integrating wet bulb, black globe, and air temperatures. High-resolution meteorological inputs from the UrbClim model, coupled with 3D building and vegetation data, facilitate precise thermal comfort assessments. This comprehensive methodology ensures robust evaluation of the thermal environment, enabling the identification of critical areas for intervention. LST products are validated against in-situ radiometer measurements, showing strong agreement for both daytime and nighttime scenarios. SUHII estimates, derived using Copernicus and World Settlement Footprint (WSF) datasets, exhibit consistent profiles, confirming the robustness of the regression-based approach. Heat emissions and storage estimates align with previous studies, while thermal comfort assessments offer new insights into urban heat stress under varying climatic conditions. Results reveal the spatial heterogeneity of urban heat dynamics, highlighting the utility of the developed services in addressing urban thermal challenges. For example, LST maps of Berlin demonstrate significant intra-urban variations, with the high-resolution products capturing detailed thermal patterns overlooked by coarser datasets. 
In Heraklion, heat flux and storage maps reveal contrasting diurnal behaviors, with urban surfaces exhibiting distinct energy partitioning during day and night. The SUHII analysis across Sofia, Copenhagen, Berlin, and Heraklion underscores the seasonal and diurnal variability of UHI effects, with notable differences in urban-rural temperature contrasts. Thermal comfort maps for adaptation scenarios, such as green roof installations in San Sebastian, showcase the potential of targeted interventions in mitigating heat stress. This study underscores the pivotal role of Copernicus EO data in advancing urban climate resilience. By integrating high-resolution thermal and energy flux data into urban planning, the services developed herein provide actionable insights for climate adaptation. As cities face escalating heat risks, these tools empower decision-makers to implement evidence-based strategies, fostering sustainable urban environments. Future work will focus on refining methodologies, expanding the spatial coverage of services, and exploring additional adaptation scenarios to further enhance urban resilience in the face of climate change.
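The WBGT index used above for thermal comfort has a standard outdoor formulation: a fixed weighting of the natural wet-bulb, black-globe and dry-bulb (air) temperatures. A textbook sketch, not the study's full UrbClim-driven workflow:

```python
def wbgt_outdoor(t_wet_bulb, t_globe, t_air):
    """Outdoor Wet Bulb Globe Temperature (degC): the standard weighted
    combination of natural wet-bulb, black-globe and air temperatures."""
    return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_air

# A hot, humid, sunny afternoon: wet bulb 25, globe 45, air 33 degC.
print(round(wbgt_outdoor(25.0, 45.0, 33.0), 1))  # 29.8
```

The dominant 0.7 weight on the wet-bulb term is what makes WBGT so sensitive to humidity, which limits evaporative cooling of the human body.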
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Predictive Urban Climate Modeling: Orchestrating Multi-Sensor Earth Observation, Meteorological Data, LULC Dynamics and Emission Paths

Authors: Andreas Walli, Matthias Sammer
Affiliations: Geoville
Urban areas face significant challenges from climate change, including exacerbated heatwaves and the Urban Heat Island effect, driven by land use and land cover (LULC) changes. Addressing these challenges requires detailed, actionable insights into the interactions between LULC and land surface temperature (LST) at high spatial and temporal resolutions. To this end, a unified approach leveraging Earth Observation (EO) technologies, advanced AI models, and multi-sensor data integration has been developed to provide transformative tools for urban planning and climate adaptation. This initiative combines data from satellite platforms, including Sentinel-2 and ECOSTRESS, with meteorological and climate models to create a multi-resolution framework for LST prediction. A novel super-pixel Convolutional Neural Network (CNN) architecture underpins this approach, enabling downscaling of LST data to a resolution of 10 meters while maintaining temporal and spatial consistency (R² ≳ 0.9). Detailed LULC classifications and change detection methodologies enhance the accuracy of urban microclimate analysis, offering new insights into the interplay between built environments and green and blue infrastructures. The integration of predictive modeling capabilities allows users to assess how future changes in LULC, guided by interactive scenario adjustments, affect LST under various IPCC climate scenarios. The integration of these technologies into an interactive tool supports spatial planning departments and policymakers by visualizing the effects of LULC changes on LST. This predictive capability supports evidence-based decision-making, fostering urban environments that are more resilient to heat stress and aligned with climate adaptation goals. The inclusion of nighttime LST data enhances the analysis of Urban Heat Island effects and heat reservoirs, further emphasizing the critical role of green and blue infrastructures in climate resilience.
Applications extend to public health, energy demand management, and communication strategies, helping municipalities engage stakeholders with spatially explicit data on heat risks and mitigation strategies. By addressing topics such as multi-sensor and multi-temporal urban mapping, AI applications, and the integration of EO data into urban policy, this work advances the scientific understanding of urban climate dynamics. Predictive capabilities based on IPCC scenarios directly support health and well-being by mitigating heat-related risks and fostering sustainable, liveable urban environments. Aligned with national and European climate adaptation strategies, this effort highlights the transformative potential of predictive, data-driven tools in reducing soil sealing, preserving green spaces, and enabling climate-neutral urban development. By linking high-resolution climate data with spatial planning information, it empowers municipalities to address climate challenges effectively, fostering informed decision-making and public engagement. Ultimately, this initiative represents a significant step toward climate-resilient cities and sustainable urban futures.
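The R² figure quoted above (≳ 0.9 for the downscaled LST) is the ordinary coefficient of determination between predicted and reference temperatures. A minimal sketch with made-up values:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)   # total variance
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))  # residuals
    return 1.0 - ss_res / ss_tot

# Invented reference LSTs (K) and close model predictions.
obs = [300.0, 302.5, 305.0, 298.0, 310.0]
pred = [300.4, 302.0, 305.6, 298.5, 309.2]
print(round(r_squared(obs, pred), 3))
```

An R² near 1 means the downscaled field reproduces almost all of the spatial variance in the reference LST; R² = 0 would mean no better than predicting the mean everywhere.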
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: The HEATWISE Advanced Products and Algorithms for Urban Resilience Exploiting Hyperspectral and Thermal Data in Preparation to ESA’s Sentinel Expansion Missions

Authors: Paolo Gamba, Antonietta Sorriso, Iphigenia Keramitsoglou, P Sismanidis, C Kiranoudis, M König, D Müller, K Stelzer, C Brockmann, J Fisher, R. Preusker, H-P Grossart, I Ogashawara, M Törmä
Affiliations: University of Pavia, Brockmann Consult GmbH, Spectral Earth GmbH, National Observatory Athens, Ruhr University Bochum, National Technical University of Athens, Leibniz Institute of Freshwater Ecology and Inland Fisheries, Finnish Environment Institute
The HEATWISE project (High-resolution Enhanced Analysis Tool for Urban Hyperspectral and Infrared Satellite Evaluation) seeks to develop and test innovative methodologies leveraging CHIME and LSTM, two of the Copernicus Expansion missions, to advance urban heat characterization and mitigation, an essential aspect of urban resilience. By utilizing data that are appropriate to simulate the CHIME and LSTM missions, in synergy with current Sentinel-2 (S-2) and Sentinel-3 (S-3) observations, HEATWISE aims to design, deliver, and validate novel products for urban stakeholders, particularly in the areas of extreme heat event management and energy transition. With a strong emphasis on urban built-up areas, the project focuses on developing three primary products through the combined use of hyperspectral and thermal data, addressing high-priority applications in climate resilience and sustainable urban planning. Product 1 - Maps of the thermal properties of urban materials: ESA's Sentinel Expansion missions, with CHIME providing hyperspectral monitoring in the VSWIR and LSTM monitoring night and day in the TIR part of the spectrum, offer a game-changing opportunity for city authorities. These advanced imaging techniques can provide detailed insights into the specific materials contributing to the so-called Urban Heat Island (UHI) effect, mapping the hottest areas in unparalleled detail. The designed product aims at creating comprehensive maps of the various materials present in the hot spots of the city, focusing on their thermal properties such as emissivity and thermal inertia, and at facilitating the modelling of heat fluxes, hence providing a detailed understanding of how different materials contribute to the city's overall thermal profile. To map materials in urban areas and characterize their thermal behavior, a proper mix of supervised and unsupervised techniques is considered.
The overall procedure aimed at mapping materials in urban areas starts from the CHIME data and refines it using LSTM measurements. Specifically, it is described by the following processing steps:
1. Extraction of the material spectra in a hyperspectral scene is performed by looking for pure pixels according to the availability of prior information.
2. Spectral unmixing is subsequently applied to obtain the spectral abundances for the recognized materials by means of non-linear unmixing algorithms, possibly complemented by super-resolution algorithms that allow mapping at a finer spatial resolution than the original data.
3. The LSTM night and day observations are then used to obtain an unsupervised characterization of the materials in pure or "close to pure" pixels, and to extract information about their thermal properties.
Product 2 - Advanced Local Climate Zone (LCZ) maps: The concept of LCZ describes a classification system that integrates urban micrometeorology to standardize urban temperature observation, e.g., for UHI studies. LCZ enables comparisons on large temporal and spatial scales, provides a common standard for knowledge exchange and supports model applications [1]. Existing methodologies for higher spatial resolution LCZ mapping based on Earth observation combine synthetic-aperture radar (SAR), Digital Surface Models and multispectral VSWIR data (e.g., Sentinel-1 (S-1) and S-2) and occasionally also consider single-band TIR data (e.g., from Landsat). Integrating hyperspectral VSWIR and multispectral TIR observations from CHIME and LSTM will add further layers of information which are so far neglected for LCZ mapping. By improving material characterization and localization in urban spaces, it will be possible not only to enable LCZ mapping at an unprecedented level of detail, but also to better recognize its relevance for the UHI and similar effects.
The novel approach proposed in this project will therefore consist of:
1. A thermal-aware Deep Learning (DL) classification technique, where the thermal property layer acts as a means to reduce misclassifications by the model architecture.
2. A DL technique to encode the thermal behaviors and the material density in an area into the features relevant for LCZ recognition, enabling a better explanation of the results and making explicit the rationale for the selection of one specific LCZ type instead of another.
Product 3 - Urban water characterization: Urban water bodies are susceptible to warming and eutrophication. Subsequent pollution with cyanobacteria and other potentially harmful algae is often the result of factors like stormwater runoff rich in nutrients from fertilizers and pet waste, wastewater discharges, and the UHI effect, which can elevate water temperatures to levels favorable for algal blooms. The hyperspectral data provided by CHIME and temperature data from LSTM can be used for the characterization of the biogeochemical and physical status of urban water bodies. Bio-optical models applied to atmospherically corrected water-leaving reflectance data are able to determine optical algal groups and to flag potentially harmful algal blooms. Different phytoplankton species have different heat tolerances, and therefore the combination with water temperature derived from LSTM will provide additional insight. By combining information about the temporal evolution of chlorophyll-a concentration, optical algal groups and water temperature, models can be trained to improve the detection and prediction of cyanobacteria and other phytoplankton blooms. To achieve this, the following steps are considered:
1. The total chlorophyll-a concentration is estimated from S-2 and CHIME based on well-established bio-optical algorithms.
2. CHIME is used to identify algal groups based on their specific pigment composition. This information is then paired with time series of water temperature from LSTM. Based on in situ observations of total chlorophyll-a, relative abundance of algal groups, and water temperature in different optical water types, we will develop a model to predict phytoplankton blooms, with a focus on cyanobacteria, based on time series of the aforementioned parameters, which will later be derived from CHIME and LSTM.
Overall, the HEATWISE products, providing a refined spatial composition and characterization of these components of the urban spaces, in combination with spatiotemporal information about surface temperature trends, will empower city authorities and other stakeholders to transform overheated urban areas into cooler, greener, and more liveable spaces, ultimately improving urban resilience.
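The spectral unmixing step in Product 1 can be illustrated with the simplest possible case: linear unmixing of one pixel into two endmembers under sum-to-one and non-negativity constraints. HEATWISE itself foresees non-linear algorithms and many endmembers; the spectra below are invented:

```python
def unmix_two_endmembers(pixel, em_a, em_b):
    """Linear unmixing of a pixel spectrum into abundances of two endmember
    spectra: the pixel is modelled as a*em_a + (1-a)*em_b (sum-to-one),
    a is found in closed form by least squares, then clipped to [0, 1]
    (non-negativity)."""
    diff = [x - y for x, y in zip(em_a, em_b)]
    resid = [p - y for p, y in zip(pixel, em_b)]
    a = sum(d * r for d, r in zip(diff, resid)) / sum(d * d for d in diff)
    a = max(0.0, min(1.0, a))
    return a, 1.0 - a

# Hypothetical 4-band spectra: a "roof" and a "vegetation" endmember,
# and a pixel that is an exact 70/30 mixture of the two.
roof = [0.30, 0.35, 0.40, 0.45]
veg = [0.05, 0.08, 0.30, 0.60]
pixel = [0.7 * r + 0.3 * v for r, v in zip(roof, veg)]
abund = unmix_two_endmembers(pixel, roof, veg)
print(round(abund[0], 2), round(abund[1], 2))
```

Real hyperspectral unmixing generalizes this to dozens of bands and endmembers (e.g. via non-negative least squares), and non-linear variants additionally model multiple scattering between materials.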
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Understanding the Thermal Environment of UK Cities With Satellite Remote Sensing

Authors: Charlotte Paton, Dr Darren Ghent, Dr Mike Perry, Prof John Remedios
Affiliations: University of Leicester, National Centre for Earth Observation
With over 50% of the world's population already living in urban areas, and that percentage projected to rise, it is increasingly important that we understand the thermal environments of our cities, and how increasing heat not only affects our infrastructure but also significantly impacts our health. Remote sensing of Land Surface Temperature (LST) provides a way of studying these environments with spatially averaged data at high resolution, across wide areas. Many LST retrievals currently rely on auxiliary datasets to provide prior knowledge of surface and atmosphere parameters, specifically the Land Surface Emissivity (LSE). Over cities this is overwhelmingly overlooked, with current schemes frequently partitioning land only into vegetation and soils. Where urban landscapes are considered, they are often described by a single land-use classification, which is insufficient to fully document their heterogeneous nature. To address this, a novel thermal-based classification algorithm is presented. Data are acquired from the Landsat 8 & 9 satellites at 100 m spatial resolution, as well as from the ECOSTRESS and SLUM spectral libraries. From these, a series of indices are calculated using visible and short-wave infrared bands. The scheme then applies a series of statistically derived thresholds on the relationships between indices to partition land into 8 classifications: three densities (low, medium and high) of urban and of vegetation expanse, plus additional classifications for bare soil and coastal regions. The output classification shows good agreement with other land-use-based schemes; and where differences occur, it is shown to better represent the materials' emissivity. Further, the thermal classification is found to have seasonal variations, with vegetation classifications being more variable than urban ones.
When variation is present, these changes primarily occur between neighbouring classifications, and can likely be attributed to changes in vegetation quality in areas such as parkland or gardens. For the first time, this classification allows the LSEs of urban extents to be considered. This is done by attributing samples from the ECOSTRESS and SLUM spectral libraries to each of the classifications and then forming a unique relationship between their emissivity and NDVI. The resulting LSEs follow the spatial patterns depicted by the thermal classification, with lower values found in high-density urban areas. The LSEs are also shown to vary seasonally, decreasing in winter months as expected from phenological vegetation cycles. The newly created prior LSEs are used to generate LSTs through a generalised split-window retrieval. These LSTs are validated by analysing their differences with respect to thirteen in-situ measurement sites. Generally, differences fall around the 1:1 line; where larger differences occur, they can be explained by the sites' unique land cover properties. Further, the newly developed thermal classification method yielded a reduction in RMSE of 0.60 compared to NDVI-based methods, and of 0.49 compared to methods utilising the CAMEL dataset. Additionally, the created LSEs are used within LST downscaling methods to transform data at native 1 km resolution to match the 100 m of Landsat. This allows for greater temporal understanding of how LST varies across cities, with specific focus on heatwave events. The thermal classification further allows a robust and consistent methodology for defining rural background areas without in-depth knowledge of each city. As a result, the Urban Heat Islands (UHIs) of cities across the UK become comparable.
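For context, a widely used NDVI-based emissivity scheme of the kind the new thermal classification is compared against is the NDVI-thresholds method. The threshold and emissivity values below are common illustrative defaults, not those used in this study:

```python
def emissivity_from_ndvi(ndvi, ndvi_soil=0.2, ndvi_veg=0.5,
                         eps_soil=0.96, eps_veg=0.99):
    """NDVI-thresholds emissivity: pure soil below the lower threshold,
    pure vegetation above the upper one, and in between a mix weighted by
    the fractional vegetation cover
    Pv = ((NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil))**2."""
    if ndvi < ndvi_soil:
        return eps_soil
    if ndvi > ndvi_veg:
        return eps_veg
    pv = ((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)) ** 2
    return eps_veg * pv + eps_soil * (1.0 - pv)

for ndvi in (0.1, 0.35, 0.7):
    print(round(emissivity_from_ndvi(ndvi), 4))
```

Because such schemes see only soil and vegetation, dense urban fabric with low NDVI is assigned a soil-like emissivity, which is exactly the limitation the per-class urban emissivities in the abstract aim to remove.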
Add to Google Calendar

Wednesday 25 June 11:30 - 13:00 (Hall M1/M2)

Session: B.03.11 Beyond monitoring - unlocking climate action through Earth Observation

This session is conceived as a moderated panel discussion among representatives of various organisations involved in making Earth Observation a key enabler of climate action in Europe and beyond. Among them, representatives of the space sector, science, and the climate community, spanning political, executive and other leadership functions, will be featured in the programme.
The session shall be opened by a keynote speech focused on the developments of the high-level climate agenda in preparation for COP30 and the perceived value of EO in this context. This should build the link between space technology and the policy level to support the achievement of climate objectives.
Starting from this policy push, panelists will reflect on how to increase the contribution of EO to climate action at both the international and local level. This includes considerations on the need to also focus on climate adaptation in parallel to mitigation, as well as reflections on how to foster the commercial sector while maintaining an open data policy. For shorter-term actions, panelists will discuss the contribution of space towards the next Conference of the Parties in Belém, Brazil, alongside expectations and main challenges in affirming EO as an enabler of climate policy. Moreover, a key question will revolve around the need for closer collaboration among policy and space actors, with the aim of developing a common language and joint objectives.

The session is jointly organized by the European Space Policy Institute (ESPI) and the International Society for Photogrammetry and Remote Sensing (ISPRS).

Chairs:


  • Gunter Schreier - ISPRS
  • Gabriele Redigonda - ESPI

Speakers:


  • Inge Jonckheere - Head of Green Solutions Division, ESA
  • Laurent Polidori - President of ISPRS Commission III, UFPA, Belém, Brazil
  • Dusan Chrenek - Principal Advisor, Digital for the Green Transition, DG CLIMA, European Commission
  • Mark Dowell - Co-chair of GEO Climate Change Working Group & Senior Scientific Officer and Project Leader for Scientific and Technical Support to the Copernicus Programme at the European Commission’s Joint Research Centre
  • Andreas Schaffhauser - Director of GeoSphere Austria
Add to Google Calendar

Wednesday 25 June 11:37 - 11:57 (EO Arena)

Demo: D.03.22 DEMO - Enabling Open Science with EarthCODE and the Copernicus Dataspace Ecosystem openEO Federation

Join us for a demonstration showcasing how the Copernicus Dataspace Ecosystem openEO federation integrates with ESA’s EarthCODE environment. EarthCODE is a dynamic Collaborative Open Development Environment designed to transform Earth System Science by embedding FAIR and Open Science principles into the scientific development process. It provides tools, expertise, and opportunities for science teams to seamlessly adopt open science practices in their workflows. By integrating the EarthCODE Open Science Catalogue, it simplifies the discovery of research outcomes from ESA Science Clusters and beyond. With direct access to commercial platform services and a rich ecosystem of community-driven tools, EarthCODE empowers scientists to collaborate, innovate, and drive the next wave of discovery in Earth Science.

In this session, you will learn how to leverage the CDSE openEO federation to create reusable processing workflows that can access a large collection of satellite datasets and cloud computing resources. We will guide you through the entire process—from building a reproducible workflow using openEO’s Python libraries, to executing an experiment, and finally publishing the results to the EarthCODE Open Science Catalogue. This streamlined approach ensures that your research remains openly accessible, reusable, and reproducible by the broader scientific community.

Whether you seek to enhance your research transparency, implement workflows using openEO, or contribute to an evolving open science ecosystem, this demonstration will show how EarthCODE and the Copernicus Dataspace Ecosystem openEO federation can support your scientific activities.

Speakers:


  • Bram Janssen - VITO
Add to Google Calendar

Wednesday 25 June 12:00 - 12:20 (EO Arena)

Demo: D.03.24 DEMO - AlTiS-NG software for generating Time-Series of Water Levels from Radar Altimetry Data

ALTIS is a desktop application designed to process altimetry data and create time series of water levels in rivers, lakes, and other water bodies. The tool is capable of handling a wide range of official GDR products from nadir altimetry missions in both LRM and SAR modes, provided by the Center for Topography of the Oceans and Hydrosphere (CTOH), part of the French Observation Service at the LEGOS laboratory. It handles data from historical (Topex, Jason-1/2/3, Envisat, ...) and current missions (Sentinel-6, SWOT nadir).

ALTIS enables the easy processing of time series over rivers and lakes using more than three decades of satellite observations. It is also an important tool for validating altimetry data against in situ measurements and for evaluating the consistency between historical mission data and new SWOT observations. It can also be customized for various applications, including coastal altimetry and monitoring backscattering on ice-covered lakes.

In this demonstration, we will showcase the new version, ALTIS-NG, and show how to use this powerful graphical tool to generate time series, using a practical example over the Congo River.

Speaker:


  • Fabien Blarel - CNRS engineer, LEGOS Laboratory

Wednesday 25 June 12:22 - 12:42 (EO Arena)

Demo: D.01.16 DEMO - Experience Destination Earth: The Art of Immersive Climate Data

Interactive and immersive visualisations are essential tools in modern data communication. These visualisations enable users to engage with data more intuitively and meaningfully, transforming abstract simulations into experiences that can be explored and understood on a deeper level. Immersive environments, created using advanced game engines, place users inside the data, allowing them to experience weather patterns or climate phenomena as if they were occurring around them. This approach not only enhances understanding but also increases emotional engagement, making the data more relatable and impactful. In an era where public awareness and understanding of climate issues are crucial, these advanced visualisation techniques play a pivotal role in bridging the gap between complex scientific data and society.

This demonstration presents immersive visualisations specifically designed for the high-resolution weather and climate data of Destination Earth. Leveraging the digital twins developed by ECMWF and its European partners, these visualisations transform complex datasets into compelling, interactive experiences that reveal the hidden beauty of the inner workings of the atmosphere. By making intricate atmospheric processes visible in an intuitive way, they provide a fresh perspective on weather and climate dynamics.

Through real-time rendering, high-quality graphics, and interactive exploration, these visualisations make complex scientific concepts more accessible and visually engaging. They allow users to witness the formation of extreme weather events, and long-term climate trends in a way that is both entertaining and educational. This approach opens up new possibilities for conveying the complexity and elegance of weather and climate systems to a wide range of audiences.

Speaker:


  • Andreas Mueller - ECMWF

Wednesday 25 June 12:45 - 13:05 (EO Arena)

Demo: A.01.17 DEMO - Demonstration of an Integrated approach for the Validation and exploitation of Atmospheric missions

DIVA (Demonstration of an Integrated approach for the Validation and exploitation of Atmospheric missions) is a Python Jupyter-notebook-based platform that aims to collect, handle, and exploit atmospheric data from ground-based remote sensing instruments, and to build new synergies beyond the operational products.
Throughout its design and development stages, DIVA has demonstrated the capability and versatility of such a system to integrate ground-based (lidar, ceilometer, sun/lunar photometer, and spectrometer), satellite, and model data, together with stand-alone and synergetic algorithms for advanced data products that combine algorithms from different platforms and sensors, as well as innovative data mining and data visualization tools.
The DIVA platform is also used as a centralised hub for EarthCARE Calibration and Validation activities, focusing on aerosol and cloud products. The platform gives the dedicated cal/val teams adequate access to EarthCARE L1 and L2 products, together with user management.
Moreover, DIVA users can further explore and exploit ground-based and satellite products by taking advantage of the GRASP (Generalized Retrieval of Atmosphere and Surface Properties) and NATALI (Neural network Aerosol Typing Algorithm based on LIdar data) algorithms applied to aerosol-related products from ground-based datasets. All these tools have been integrated into the DIVA platform for its users.
In this session, we will demonstrate the main functionalities and applications of DIVA. The session will be divided into three stages: the first part will show the audience how to register for GRASP Cloud and access the DIVA platform; the second part emphasises the synergy of ground-based measurements (lidar, photometer, ceilometer, spectrometer); and the third part will showcase DIVA as the centralised hub for EarthCARE validation activities, focusing on aerosol and cloud products.
To register to the platform, please follow instructions here: https://access-request.grasp-sas.com/service/grasp-cloud.

Speaker:


  • Alexandru Dandocsi - INOE

Wednesday 25 June 13:00 - 13:45 (Nexus Agora)

Session: B.01.05 Integrated global climate observations: in-situ global terrestrial networks and their sustainability

It is widely recognized that in-situ observations provide critical inputs to reanalysis and to calibration/validation for satellite data and modelling. Furthermore, in-situ data also have an intrinsic value, providing additional information that complements satellite data. The GCOS 2022 Implementation Plan, recognizing this value, defined actions to improve the sustainability of in-situ measurement programs. Nevertheless, many of the global in-situ terrestrial networks were established in the frame of short-term research efforts and lack the long-term support required to ensure the continuity and development of long time series of several Essential Climate Variables (ECVs).
The intention of this session is to initiate an interactive dialogue between different communities (e.g. in-situ networks, remote sensing, finance) to discuss the transition from scientific research to long-term monitoring networks, as well as a co-designed approach to ground- and space-based monitoring of ECVs. In-situ networks, and the relevant financial implications, should be taken into account from the very beginning of satellite mission planning. Coordinated planning will provide a more comprehensive set of data, resulting in higher value for decisions at different scales.
The session will start with a short introduction to set the scene from a GCOS perspective, followed by a few short presentations on the different cases and needs of selected terrestrial networks and how they complement satellite observations. At least half of the session will be interactive, with a moderated panel discussion involving the audience. The Agora setting will promote an exchange of ideas with the attendees, widening the potential of the received feedback. A few key questions and the use of interactive tools will guide the discussion. A short concept document could be produced as an output of the session.

Moderators:


  • Claudia Giardino - CNR

Speakers:


Setting the scene: GCOS, TOPC and the Global Terrestrial Networks


  • Antonio Bombelli (GCOS) and Martin Herold (GFZ Potsdam)

The Global Terrestrial Network for Rivers (GTN-R) and the remote sensing-based extension of GRDC river discharge time series


  • Simon Mischel (BfG) and Omid Elmi (University of Stuttgart)

The International Soil Moisture Network (ISMN): the transition from research to sustained operation


  • Wolfgang Korres (BfG)

Interventions from other networks


  • Various speakers

Wednesday 25 June 13:00 - 14:30 (ESA Agora)

Session: A.05.12 Joint EC-ESA Earth System Science Initiative: Joining Forces for Earth System Science

Addressing today’s complex global challenges and transitioning towards a sustainable future demands transformative science capable of advancing our understanding of the Earth system and translating knowledge into societal solutions.
Europe is uniquely positioned to lead this scientific endeavour, thanks to an unparalleled Earth observation infrastructure—from Copernicus Sentinels and Earth Explorers to meteorological and national missions—and a rapidly digitalising research landscape. This leadership is grounded in Europe's commitment to scientific excellence and its capacity to generate independent, high-quality knowledge for strategic policy action.
Recognising this opportunity, the European Commission and ESA launched the joint Earth System Science Initiative in 2020. The initiative represents the world’s largest collaborative effort in Earth system science, uniting leading scientists, institutions, and space agencies across domains and borders. By aligning EC and ESA research activities across a diverse portfolio—ranging from polar science and ocean health to clouds, aerosols, carbon, methane and agriculture—the initiative fosters deep scientific synergy and ensures that European research remains at the forefront of global Earth system understanding.
Importantly, the Initiative accelerates the flow of insights from cutting-edge science into transformative policies, innovation ecosystems, and societal services. It strengthens the foundations for Europe’s scientific sovereignty, while ensuring that Earth system science contributes directly to climate action, biodiversity conservation, and the implementation of the Sustainable Development Goals.
This Agora brings together programme leaders and leading researchers to highlight achievements, explore future directions, and demonstrate how joint European efforts in Earth system science are turning knowledge into societal impact at scale.

Agenda:


Welcome & Opening Remarks


  • Joanna Drake – European Commission

The EC-ESA Earth System Science Initiative: Scientific Alliances


  • Diego Fernandez – ESA

Spotlight on Flagship Projects


  • Introduced by Diego Fernandez – ESA
  • Victor Martinez-Vicente – Plymouth Marine Laboratory (PML), UK
  • Ulla Wandinger – Leibniz Institute for Tropospheric Research (TROPOS), Germany
  • Christine Schøtt Hvidberg – University of Copenhagen, Denmark
  • Gregoire Broquet – Laboratoire des Sciences du Climat et de l’Environnement (LSCE), France

Q&A with the Audience


  • All Speakers

Round Table Discussion: A Cornerstone of European Excellence in Earth System Science for Society


  • Franz Immler – European Commission
  • Diego Fernandez – ESA
  • Victor Martinez-Vicente – PML
  • Ulla Wandinger – TROPOS
  • Christine Schøtt Hvidberg – University of Copenhagen
  • Gregoire Broquet – LSCE

EC-ESA Scientific Alliances – A Way Forward


  • Simonetta Cheli – ESA

Wednesday 25 June 13:00 - 13:45 (Frontiers Agora)

Session: F.01.09 EO award ceremony

This session will celebrate the winners of the EO Excellence Award, featuring a short introduction to the award, brief presentations by the recipients, the handover of certificates and trophies, and the announcement of the next award round.

Speakers:


  • Simonetta Cheli - ESA
  • Maryam Pourshamsi - ESA
  • Lina Eklund - Lund University
  • Marcello Passaro - Technical University of Munich

Wednesday 25 June 13:07 - 13:27 (EO Arena)

Demo: D.02.28 DEMO - Platform Extension with AI capabilities: Kubeflow

This demonstration showcases how Kubeflow integration into Insula enhances AI model training, reproducibility, and deployment for Earth Observation applications. Attendees will explore how scalable AI workflows, automated model tracking, and seamless transfer to production improve AI exploitation in EO.

Speakers:


  • Stefano Marra - CGI
  • Simona Gargiulo - CGI

Wednesday 25 June 13:30 - 13:50 (EO Arena)

Demo: E.05.05 DEMO - Applications Showcase: Food systems

This demonstration will present an overview of a range of EO-based tools and data sets available to National Statistical Offices to support the compilation of official statistics. It mainly focuses on outputs from ESA projects, but also considers the use of other services. Some of these were explicitly designed to support the production of official statistics; others can easily be repurposed for it. This includes projects that have focused on agricultural statistics, ecosystem accounting, quality-of-life indices, and many more.

The demonstration will be performed using the APEX environment (https://apex.esa.int) to show interactive examples of the types of data available. The aim is not to give a detailed overview of each tool, but rather to showcase the range of tools available and to show how different tools can provide data suitable for official statistics, with case studies of how this is done in practice.

This session is organised by the ESA Stakeholder Engagement Facility. The SEF is a service funded by ESA to provide innovative ways of interacting with a diverse range of users and stakeholders to promote the uptake of Earth Observation powered applications. It works across a range of different themes, aiming to engage users by looking at their overall needs and what EO solutions are available to meet these, rather than being limited to a single project or service.

Instructor:


  • Natalia Kobliuk - Serco


Wednesday 25 June 13:52 - 14:12 (EO Arena)

Demo: E.01.10 DEMO - SatGrass app for near-real time yield and quality estimation of grassland

SatGrass, launching in 2026, is a free mobile and browser application that estimates grassland yield and quality down to the parcel level. The app uses daily data from SatGrass models that integrate regional, weather (INCA, ECMWF), and remote sensing data (MODIS NDVI, Sentinel-2, Sentinel-1). These models were validated in a three-year research project that collected thousands of grass samples across more than 200 sampling sites in Austria.

The initial focus is on the development of a mobile application decision aid for farmers to determine the optimal date to cut their grassland. To achieve this, user personas and stories were created as part of a user interface design process. These user stories were then prioritized and aligned with farmer needs through an in-person workshop, providing valuable insights for the application's design.

The application first shows a map of all the parcels managed by a farmer. The user can select a parcel to view trend charts showing grassland yield and quality since the last cut, along with a 10-day forecast. Detailed insights and historical statistics are available on a separate details page. Current and historical data can be accessed at both group and farm levels.

The demonstration will present the scientific background of the project, explain the user interface design process and showcase the current status of the implementation of the prototype application.

Link: https://satgrass.at/

Speakers:


  • Mag. Dr. Andreas Schaumberger - HBLFA Raumberg-Gumpenstein
  • Stefan Brand - EOX IT Services GmbH

Wednesday 25 June 14:00 - 15:30 (Hall K1)

Session: C.02.17 Celebrating 15 Years of CryoSat for climate science: shaping the future of polar altimetry - PART 1

This conference session is dedicated to celebrating the 15th anniversary of the launch of the CryoSat satellite, a cornerstone in Earth observation and climate science. Since its deployment, CryoSat has been instrumental in advancing our understanding of the cryosphere, offering unprecedented insights into the dynamics of ice sheets, sea ice, and glaciers. The session will review the mission's contributions to cryosphere science, oceanography, hydrology and climate change studies, and discuss the satellite's ongoing impact and future directions in Earth science research.

The session will feature 5-6 distinguished scientists who will deliver keynote speeches, each focusing on different aspects of the results and developments stemming from the CryoSat mission. These experts represent the leading edge of research in cryospheric sciences, oceanography, hydrology and climate change, and will provide comprehensive analyses of how CryoSat's data has transformed our understanding in these fields.

At 17:45, the keynotes will be followed by light refreshments, a 15th-anniversary birthday cake, and a photo opportunity.

Session Schedule:


Introduction


  • Tommaso Parrinello

Welcome from D/EOP


  • Simonetta Cheli

Mission Status and future outlook


  • Tommaso Parrinello

80,000 orbits and counting: the scientific contribution of CryoSat


  • Prof. Andrew Shepherd

Polar Altimetry and the Changing Sea Ice Systems: Current Insights and Future Prospects


  • Dr. Sarah Kacimi

Wednesday 25 June 14:00 - 15:30 (Room 1.31/1.32)

Session: D.02.05 EO-based solutions to address civil security-related scenarios

The present geopolitical landscape, coupled with heightened awareness of global implications, is giving rise to security scenarios of greater complexity than those encountered in the past. Together with traditional security scenarios (e.g. monitoring of critical infrastructure, humanitarian aid, maritime awareness), new ones are emerging (e.g. energy security, health security). These new challenges require multidisciplinary efforts to fully understand their causes and implications as well as to develop new policies that work towards an adequate response.

Earth Observation (EO) data are widely recognized as a valuable tool in support of decision and policy-making processes which, together with other data sources (e.g. statistics, in situ, geolocation), can significantly contribute to the analysis of the above scenarios. The rapid expansion of remote sensing satellite constellations dedicated to EO has revolutionized the continuous monitoring of crisis areas, providing unprecedented temporal and spatial insights. In the future, the role of space-based remote sensing will only increase, driven by the growing involvement of governments, public institutions, and private companies. The current data flow coming from traditional EO players and new space actors opens the door to a wealth of data to be acquired, catalogued, processed, analyzed and visualized.

On the other hand, new technological trends to process and manage vast amounts of Big geospatial data are also developing at a fast pace and their exploitation is becoming fundamental to fully unlock the potential of the available data for a better understanding of security-related scenarios.

This session aims to stimulate the interest of the EO community in civil security-related activities through technical contributions based on Big Data analytics, Machine Learning, Artificial Intelligence, Data Fusion, Semantic Knowledge Graphs, Advanced Image Processing and other relevant technologies along the different steps of the data value chain. In particular, the session will demonstrate how these innovative methods and technologies can improve current capabilities to address the whole spectrum of civil security applications.

Wednesday 25 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Spaceborne SAR compression with AI for data-efficient vessel detection

Authors: Miguel Angel Belenguer-Plomer, Dr. Michele Lazzarini, Conrad Albrecht, Rikard Vinge, Michael Marszalek, Sergio Albani
Affiliations: European Union Satellite Centre, DLR
Maritime domain awareness (MDA) is the effective understanding of activity associated with the maritime domain that could impact the security, safety, economy, and/or environment of the EU and its Member States. Space assets are fundamental to MDA, supporting situational awareness and improving early-warning and strategic-foresight capabilities (EU Maritime Security Strategy). Ship detection represents a key piece of information for general maritime awareness, with particular emphasis on security-related analysis due to the potential risks associated with a diverse range of scenarios, such as the detection of illegal activity at sea, the monitoring and protection of valuable cargo, or the dangers posed by the transport of hazardous resources. Indeed, these risks represent a threat to economies, infrastructure management and, in some cases, citizens. On the high seas, where terrestrial communications or monitoring systems are not available, space assets are the main source of information. Earth Observation data is an essential source for extracting information about vessels and, owing to the intrinsic characteristics of the maritime environment as well as those of ships, Synthetic Aperture Radar (SAR) images are particularly valuable.

The annual volume of data generated by the Sentinel satellites (approx. 6 petabytes, as reported in the Copernicus 2023 Annual Report) demands new approaches to analyzing such geo-information for MDA. While maintaining information relevant for various downstream applications, self-supervised deep learning has demonstrated the ability to generate compressed feature vectors from raw imagery at ratios of up to 1:1000. In this work, conducted as part of the Embed2Scale project funded by the Horizon Europe programme (Grant Agreement No. 101131841), the EU Satellite Centre (SatCen) and the German Aerospace Center (DLR) have collaboratively developed an innovative approach to enhance automatic ship detection with the aid of deep learning (DL) in the framework of neural compressors. DL-based ship detectors have been trained to detect ships using labels from the Automatic Identification System (AIS) (i.e. the actual location of the ship), while considering different input-data options:

  • Ground Range Detected (GRD) dual-polarized (vertical-vertical VV and vertical-horizontal VH polarizations) SAR images acquired by the Sentinel-1 satellites in interferometric wide (IW) swath mode;
  • neural compression of the SAR images to feature vectors;
  • SAR imagery decompressed from the feature vectors.

These three detectors will be presented, and the insights extracted will serve to improve current operational capacities for detecting ships automatically and, especially, for compressing the input data (i.e. satellite imagery) without losing critical information that could put MDA at risk.
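The compression-then-detect idea can be illustrated with a self-contained toy example (PCA stands in for the neural compressor and a nearest-centroid rule for the DL detector; all data are synthetic, and nothing here is the project's actual code):

```python
import numpy as np

# Toy stand-in for the compression-then-detect idea: compress "SAR patches"
# to short feature vectors, then classify on the vectors. PCA replaces the
# learned neural compressor purely for illustration.
rng = np.random.default_rng(0)

# Synthetic patches: Rayleigh sea clutter vs. patches with bright ship-like pixels
n, patch = 200, 16 * 16
sea = rng.rayleigh(1.0, size=(n, patch))
ships = rng.rayleigh(1.0, size=(n, patch))
ships[:, :32] += 6.0  # bright scatterer pixels
X = np.vstack([sea, ships])
y = np.array([0] * n + [1] * n)

# "Compression": project each 256-pixel patch onto the top 8 principal
# components, i.e. a 32:1 reduction to a feature vector
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:8].T  # compressed feature vectors, shape (400, 8)

# Minimal "detector" on the compressed features: nearest class centroid
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

The point of the sketch is that a strong target signature survives aggressive compression, which is the property the abstract's detectors probe at much larger scale.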

Wednesday 25 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: AI in Bridge Monitoring from Space for Resilient and Sustainable Infrastructure

Authors: Stefan Scheiblauer, Dr. Francescopaolo Sica, Prof. Dr. Michael Schmitt
Affiliations: University Of The Bundeswehr Munich
Bridges are critical to modern infrastructure, yet monitoring their structural integrity remains a complex and resource-intensive task. The "AutoSatBridge" project addresses the pressing need for scalable and efficient monitoring of critical infrastructure by integrating advanced Earth observation (EO) techniques with cutting-edge artificial intelligence (AI). The A3 motorway bridge in Sinzing, Germany, serves as the demonstration site for this innovative approach.

The project relies on high-resolution TerraSAR-X radar data, acquired during descending overpasses in the morning hours. Stable scatterers are identified using Persistent Scatterer Interferometry (PSI) to detect and analyze structural deformations with exceptional precision at the component level. The work addresses key challenges inherent to synthetic aperture radar (SAR) data, such as double-bounce and triple-bounce reflections that occur between the bridge structure and the underlying water surface. To tackle these issues, machine learning (ML) techniques are applied to distinguish between single-path and multipath scatterers. Specifically, we leverage the strong classification and segmentation capabilities of deep learning to segment the point cloud of persistent scatterers obtained through PSI processing, enabling the attribution of each point scatterer to a specific bridge component.

The innovation lies in the unique combination of training and validating the ML methods using Building Information Modeling (BIM) data and high-precision in-situ measurement systems. In this research, we make use of ground control points, such as a water-level gauge system positioned across the bridge. This system provides a vertical accuracy on the order of 0.05 mm at a frequency of 2 to 4 measurements per hour, enabling precise monitoring of displacement. GNSS sensors complement this system with a positional accuracy ranging from 1 to 3 mm, ensuring robust positioning of the displacement data. These in-situ measurements are vital for calibrating the learned models and for validating the EO-based monitoring framework, enhancing its reliability and operational readiness.

Satellite-derived estimates and in-situ measurements will be integrated seamlessly into a digital twin of the monitored infrastructure. This digital twin will serve as a dynamic, data-driven model for ongoing analysis and maintenance planning. By significantly reducing the need for resource-intensive on-site inspections and facilitating predictive maintenance, this approach directly contributes to the European Green Deal. It aligns with United Nations Sustainable Development Goal (SDG) 9 by fostering resilient infrastructure and sustainable industrialization, and with SDG 3 by promoting health and well-being through safer, more reliable transportation networks.
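As a toy sketch of the scatterer-separation step (synthetic attribute values; plain k-means stands in for the deep-learning segmenter, and the choice of attributes is an assumption made for illustration only):

```python
import numpy as np

# Toy illustration of separating persistent scatterers: multipath points over
# water often appear mirrored below the structure, so clustering on a simple
# attribute pair (apparent height, LOS motion amplitude) already splits them.
rng = np.random.default_rng(1)

direct = np.column_stack([rng.normal(12.0, 1.0, 150),    # height near deck level [m]
                          rng.normal(0.5, 0.2, 150)])    # LOS motion amplitude [mm]
multipath = np.column_stack([rng.normal(-8.0, 2.0, 50),  # mirrored below the water
                             rng.normal(1.5, 0.5, 50)])
pts = np.vstack([direct, multipath])

# Plain 2-means: assign each point to the nearest centroid, update, repeat
centroids = pts[[0, -1]].astype(float)
for _ in range(20):
    d = np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centroids = np.array([pts[labels == k].mean(axis=0) for k in range(2)])

# The cluster whose mean height is negative collects the multipath candidates
multipath_cluster = int(centroids[:, 0].argmin())
n_flagged = int((labels == multipath_cluster).sum())
```

In the project itself this role is played by a learned segmenter validated against BIM geometry, not by k-means; the sketch only shows why the attributes are separable at all.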

Wednesday 25 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Surveillance of Maritime Critical Infrastructure: Data Fusion of SAR, AIS and DAS

Authors: Kristian Aalling Soerensen, Mr. Hasse Pedersen, Mr. Peder Heiselberg, Mr. Henning Heiselberg
Affiliations: Technical University of Denmark, National Space Institute
Critical maritime infrastructure, such as subsea communication cables, plays an important role in global connectivity (Anglmayer, 2021). Despite their importance, these assets have historically been underprotected (Gallagher, 2022). Recent incidents, however, have highlighted their vulnerabilities: for example, the Nord Stream pipeline explosion (Sanderson et al., 2023), or the damage to the subsea communication cables connecting Svalbard to mainland Norway in January 2022 (Space Norway, 2022). More recently, the November 2024 anchor damage caused by the Yi Peng 3 triggered an international crisis involving China, Denmark, and Sweden (Associated Press, 2024). As such, safeguarding these infrastructures is critical. Earth Observation technologies, including Synthetic Aperture Radar (SAR), multispectral imaging (MSI), and radio-frequency (RF) sensing, have been extensively researched. For instance, Sørensen et al. (2023) implemented a fast and efficient one-stage object detection model for Sentinel-1 images. In Sørensen et al. (2024), the same authors fused AIS, SAR and RF detections, found the same dark ships with both RF and SAR data, and discussed the advantages and disadvantages of each source. Similarly, Heiselberg et al. (2024) identified individual ships using methods similar to those used in fingerprinting. Conversely, little research has explored the use of the fiber optic cables themselves as sensors for ship detection through Distributed Acoustic Sensing (DAS). Theim et al. (2023) and Landrø et al. (2022) show the possibility of tracking a single ship, whereas our focus is finding multiple ships, clustering them, and using that knowledge to detect ships. This study introduces a new methodology for using DAS as a maritime surveillance and ship detection tool and, unlike other research, fuses the DAS detections with both Automatic Identification System (AIS) and SAR data.
This fusion enables the identification of vessels by correlating detections from several independent sensing modes. To our knowledge, this is the first study to integrate DAS, AIS, and SAR detections for identifying ships, advancing multi-modal surveillance capabilities to address the growing complexities of maritime security, especially near critical infrastructure.

2. Methodology
The methodology first describes the data sources, then briefly introduces how ships were detected in the DAS data.

2.1. Data
2.1.1. Automatic Identification System (AIS) data
AIS is a cooperative maritime surveillance system in which vessels transmit their identity, location, and activity data through AIS transceivers. This system, alongside similar technologies such as VMS and LRIT, is crucial for tracking ship movements and identifying dark ships; i.e., a ship can only be classified as dark if it does not transmit, e.g., AIS data. AIS operates via terrestrial and satellite-based receivers, providing wide-area coverage, but has several limitations regarding the monitoring of illegal activity, such as deliberate signal deactivation, spoofing, or power reduction.

2.1.2. Synthetic Aperture Radar (SAR) data
SAR is an active remote sensing technology that transmits microwave signals to generate high-resolution images, allowing reliable data collection under all weather conditions and during darkness. It is particularly valuable for maritime surveillance in remote regions like the Arctic, where optical sensors are ineffective. SAR identifies vessels through radar cross-sections and wake patterns, distinguishing them from features like skerries. Despite its benefits, including global coverage and large footprints, SAR has limitations such as latency and restricted revisit times.

2.1.3. Distributed Acoustic Sensing (DAS) data
DAS uses optical fiber cables as arrays of acoustic sensors to monitor, e.g., maritime activity above the cable. Variations in strain along the fiber caused by external disturbances, such as vibrations or pressure changes induced by ships, lead to measurable changes in the intensity and phase of the backscattered light. These changes are detected and processed to determine the location and characteristics of the disturbances with high spatial resolution. DAS systems provide continuous monitoring over distances of up to several tens of kilometers with a spatial resolution typically ranging from one to ten meters, depending on the system configuration and application. This enables the detection of localized events along the fiber, such as vibrations from passing vessels, anchor dragging, or structural interactions, passively, in real time, 24/7.

2.2. Ship detection in DAS
To detect and identify signals of interest in the DAS data, a normal picture of the data is generated through statistical analysis of the data channels. This method is designed to identify outliers, such as those corresponding to ship signals, by detecting deviations from expected patterns. The normal picture thus automates the process of finding anomalies, streamlining the identification of relevant signals for further analysis. Once a signal of interest is found, a frequency analysis is used to reveal the signature of the signal. Decomposing this signal into principal components, and using only those with the highest variance, makes it possible to cluster signals. Having found enough signals and clustered them, it is then possible to distinguish between ships, earthquakes, marine life, etc. It also makes it possible to categorize signals whose origin is unknown. This is shown in our results, where we found a dark ship (a ship that does not transmit AIS data), categorized it using our clustering method, and further verified it using a Sentinel-1 SAR image.

3. Results
The objective of this study was to detect ships using AIS, SAR, and DAS data and to fuse these data sources to monitor maritime activity, specifically near fibre-optic communication cables. A signal of interest was identified in the DAS strain measurements, appearing as a pronounced anomaly approximately 22 km along the cable. The strain data indicated a distinct and continuous pattern consistent with ship-induced vibrations, rather than the intermittent signals typically associated with marine life or other non-vessel disturbances. The signal's hyperbolic wavefronts further supported the identification of a moving ship as the source. A frequency analysis of the signal revealed characteristic features of a ship's signature. The engine frequency was centred around 50 Hz, with additional components corresponding to the propeller's characteristics, such as its pitch and size, observed at lower frequencies. These spectral features matched known profiles for maritime vessels, providing a reliable basis for classification. SAR imaging confirmed the presence of the detected ship as well as its location and trajectory. AIS data further validated the finding, showing that the ship crossed the fibre-optic cable at approximately 21.9 km and was the only vessel operating in the vicinity at the corresponding time. This cross-validation between DAS, SAR, and AIS ensured the accuracy of the detection and classification.

4. Discussion and Conclusion
The objective of this study was to illustrate the effectiveness of fusing SAR, AIS, and DAS data for monitoring maritime critical infrastructure, and particularly how DAS and SAR can be fused to better monitor and verify dark-ship activity. The results show that integrating DAS into existing frameworks improves the detection of ships. DAS data are valuable for continuous, passive monitoring of vessel activity, complementing SAR's high-resolution imaging and AIS's real-time tracking of cooperative ships.
DAS in particular offers a powerful future tool for monitoring all activity near fibre cables, using the fibre cables themselves. This work demonstrates, for the first time, the use of DAS and SAR together for detecting ships, advancing multi-sensor fusion for maritime surveillance. However, while our methodology should be usable for continuous detection, we illustrated only two use cases. Future work should refine the fusion methods, improve DAS signal classification, and test this approach in different regions and environments, with more data, to broaden its applicability.
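The signal-characterization step described in Section 2.2 (projecting frequency-domain signatures onto their highest-variance principal components, then clustering) might be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the choice of k-means, and all parameters are assumptions.

```python
import numpy as np

def pca_features(spectra, n_components=3):
    """Project per-signal band-power spectra onto their top principal
    components (the directions of highest variance)."""
    X = spectra - spectra.mean(axis=0)           # centre the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T               # reduced feature vectors

def kmeans(X, k, n_iter=50):
    """Minimal k-means to group the reduced signatures into clusters
    (ships, earthquakes, marine life, unknown sources, ...)."""
    # deterministic init: evenly spaced samples as starting centres
    centres = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centres) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels
```

A new signal would then be assigned to the nearest cluster centre; signals far from all centres could remain categorized as "unknown origin".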

Wednesday 25 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: A Deep Learning Framework for Change Detection in Open-Pit Mines Using SAR Imagery

Authors: Enrico Paglia, Alessio Rucci, Martin Bernardi
Affiliations: TRE Altamira
One of the applications of Synthetic Aperture Radar is change detection, where pixel-wise differences between pairs of coregistered images are computed in order to identify changes in the scene. In particular, the analysis of changes in open-pit mine operations is important for ensuring sustainable resource utilization, mitigating environmental impact, and protecting human safety via the monitoring of ground instability and structural collapses. Applying change detection to open-pit monitoring presents challenges due to the diverse range of land cover types within a single scene. These diverse land cover types complicate the detection and distinction of changes in areas such as excavated zones, waste piles, infrastructure, vegetation, and water bodies. An additional challenge is the need to discern actual changes of interest, such as blasting, collapsing, or waste pile operations, while ignoring reflectivity changes that are not relevant, including equipment movements and fluctuations of the radar returns. This work presents a deep learning model trained to detect changes of interest in open-pit monitoring using TerraSAR-X images. After evaluating different architectures, a classical U-Net was selected as the most suitable for the task. In order to enhance model performance, we adopted a novel approach that incorporates a coherence layer, obtained as described in [1], alongside the SAR amplitude images. Coherence plays a crucial role in improving the reliability of change detection, as it quantifies the degree of similarity between two radar images captured over the same area at different times. Specifically, changes are only possible in areas of low coherence, not in areas of high coherence. The model was trained in a supervised manner on a diverse dataset encompassing 8 different open-pit mines with multiple temporal samples from each site, totaling 33 image pairs, along with corresponding coherence layers.
The ground truth data was labeled manually by experts in the field. Due to the relatively small size of the available dataset, pseudo-labelling and data augmentation techniques were used to train over a larger, semi-supervised dataset. Moreover, we performed dataset subsampling to compensate for class imbalances, specifically the relatively low presence of areas with changes. Finally, to efficiently process large input images during inference, we adopted a patch-based approach: the images were divided into smaller, manageable patches and then seamlessly recomposed to avoid border artifacts.
[1] Murdaca G, Rucci A, Prati C. Deep Learning for InSAR Phase Filtering: An Optimized Framework for Phase Unwrapping. Remote Sensing. 2022; 14(19):4956. https://doi.org/10.3390/rs14194956
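The patch-based inference described above (tiling a large scene, then recomposing predictions so no tile border is visible) can be sketched generically, together with the coherence constraint that changes are only possible where coherence is low. This is a hedged illustration, not TRE Altamira's implementation: the overlap-averaging scheme, patch size, and thresholds are all assumptions.

```python
import numpy as np

def predict_in_patches(image, model, patch=256, overlap=32):
    """Apply `model` (any callable mapping a 2-D patch to a same-shaped
    score map) over a large image in overlapping tiles, averaging the
    overlaps so the recomposed map has no border artifacts."""
    H, W = image.shape
    scores = np.zeros((H, W))
    weight = np.zeros((H, W))
    step = patch - overlap
    for i in range(0, max(H - overlap, 1), step):
        for j in range(0, max(W - overlap, 1), step):
            i0 = max(min(i, H - patch), 0)   # clamp last tile to the edge
            j0 = max(min(j, W - patch), 0)
            tile = image[i0:i0 + patch, j0:j0 + patch]
            scores[i0:i0 + tile.shape[0], j0:j0 + tile.shape[1]] += model(tile)
            weight[i0:i0 + tile.shape[0], j0:j0 + tile.shape[1]] += 1.0
    return scores / np.maximum(weight, 1.0)

def gate_by_coherence(scores, coherence, score_thr=0.5, coh_thr=0.6):
    """Keep only detections where coherence is low, reflecting the
    constraint that changes are only possible in low-coherence areas."""
    return (scores > score_thr) & (coherence < coh_thr)
```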

Wednesday 25 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Towards a combination of web-text and earth observation data for crisis and disaster risk reanalysis – An explorative study on a flooding event

Authors: Vanessa Rittlinger, Dr. Stefan Voigt, Dr. Christian Geiß, Dr. Hannes Taubenböck
Affiliations: German Aerospace Center (DLR), Earth Observation Center
During and after natural disasters, all relevant information about a given situation, both in real time and in the post-analysis phase, is crucial for search and rescue operations, relief efforts, recovery, and future mitigation and prevention measures. Systematically analyzing web text data from various sources such as news articles, social media and general websites can provide valuable insights for disaster monitoring and analysis when available in near real-time. However, it remains a challenge to capture large amounts of web text data in a timely manner and extract meaningful insights from this unstructured data to support disaster relief efforts. This study focuses on extracting information relevant to crisis and disaster monitoring from large amounts of web text data and synergistically analyzing it with other sources of Earth Observation data of the same event, especially satellite imagery. Satellite imagery has proven to be a valuable source of information for disaster monitoring and mitigation, e.g. in rapid mapping for disaster relief. However, Earth Observation data can have limitations, such as temporal availability or the restriction of flight campaigns due to poor weather conditions. The aim of this study is to investigate the potential of fusing web text data and satellite imagery to create better situational awareness of large-scale flooding. To achieve this goal, we conduct an exploratory study focusing on large-scale flooding events for which Earth Observation data and web text data such as online media and news reports are available. Specifically, it concerns the reanalysis of the flood event that occurred in July 2021 in the Ahr valley in Germany and led to significant damage and human and economic losses.
In this use case, information on a specific disaster situation is extracted from web text data such as the Open Web Index of the OpenWebSearch.eu project, carefully georeferenced and combined with satellite-based Earth Observation data for joint visualization and analysis. Using a combination of different natural language processing methods, information related to disaster risk management is extracted, analyzed and filtered from the web-text data. In combination with fine-grained geoparsing approaches, the text-based information is further processed to link it with Earth Observation data in space and time. Earth Observation data used in this study include flood maps, inundation maps and information on building damage. Once the systematic derivation of web-text information is operationalized in a routine and timely manner, e.g. through a publicly available European Open Web Index, the extracted information can be used to identify hotspots, support emergency response, and, for example, plan satellite or airborne emergency mapping campaigns. Beyond the acute phase of disaster response, crisis and disaster-related information derived from web texts can be used to enrich and contextualize satellite-based disaster management and mitigation information in other phases of the disaster risk management cycle. With continued research and development, this can lead to a more robust and efficient disaster management system that effectively utilizes both sources.
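The linking step (georeferencing text mentions, then joining them with Earth Observation products in space and time) might look roughly like the toy sketch below. Everything here is illustrative: the gazetteer-lookup geoparser is a deliberately simple stand-in for the fine-grained geoparsing approaches the study uses, and all names, coordinates, and thresholds are assumptions.

```python
from datetime import datetime, timedelta

# Toy gazetteer: place name -> (lat, lon); entries are illustrative only.
GAZETTEER = {"Ahrweiler": (50.54, 7.09), "Sinzig": (50.55, 7.25)}

def geoparse(text):
    """Return (name, lat, lon) for every gazetteer place mentioned in a text."""
    return [(name, *coords) for name, coords in GAZETTEER.items() if name in text]

def link_to_eo_product(mentions, text_time, bbox, acq_time, window_h=48):
    """Spatio-temporal join: keep mentions inside the EO product's
    bounding box (lat_min, lat_max, lon_min, lon_max) and within
    `window_h` hours of the satellite acquisition time."""
    lat0, lat1, lon0, lon1 = bbox
    if abs(text_time - acq_time) > timedelta(hours=window_h):
        return []
    return [m for m in mentions if lat0 <= m[1] <= lat1 and lon0 <= m[2] <= lon1]
```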

Wednesday 25 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Utilizing Geo-Knowledge Graph Techniques to Integrate EO and Multi-Source Data to Predict the Onset of Crises in Fragile Communities

Authors: Arioluwa Aribisala, Prof Johannes Scholz, Associate Professor Stefan Lang, Prof Thomas Blaschke, Mr Andreas Papp
Affiliations: Paris Lodron University Of Salzburg, PeaceEye
The World Bank estimates that by 2030, up to two-thirds of the world’s extreme poor could reside in fragility, conflict, and violence (FCV) communities. Already, more than 300 million people currently require humanitarian aid due to conflicts (which account for 80% of humanitarian needs), climate crises, and other factors. PeaceEye is dedicated to advancing global development by leveraging several available datasets to address the critical needs of the most vulnerable, fragile communities globally. Using Geographic Artificial Intelligence (GeoAI) techniques, including Geo-Knowledge Graphs (GeoKGs), PeaceEye integrates Earth Observation (EO) data with diverse ‘soft’ data sources, including Social Media Intelligence (SOCMINT), Open-Source Intelligence (OSINT), and Human Intelligence (HUMINT), to predict the onset of crises in high-risk regions. GeoKGs are a specialized form of knowledge graph which combines spatial data with the structured framework of traditional knowledge graphs, enabling the representation of intricate geographic relationships and phenomena. Like traditional Knowledge Graphs (KGs), GeoKGs can model an ontology (an abstract representation of a geographical domain of interest) while also encapsulating data instances in a graph-based representation. GeoKGs are designed to manage spatial and temporal data dimensions and evolving knowledge, making them ideal for applications in environmental monitoring, urban planning, and disaster management. Using Semantic Web standards like RDF, OWL, and GeoSPARQL, they provide a comprehensive view of geographic data that can be queried and visualized for various use cases. Unlike traditional databases, KGs capture the semantics of data, allowing for smarter queries and insights. They allow the integration of diverse data sources, uncover hidden patterns and relationships, and adapt as new information emerges to maintain an up-to-date dataset.
In addition, GeoKGs are a natural fit for GeoAI algorithms – such as Graph Convolutional Neural Networks – and facilitate the integration of data into a multitude of AI algorithms and tools. Leveraging these advantages, PeaceEye incorporates GeoKGs into its conflict prediction approach to seamlessly integrate diverse data sources within its system architecture. The availability of EO data offers a valuable opportunity to monitor key fragility dimensions, including economic, environmental, political, security, societal, and human factors. Local stressors, such as resource scarcity, disasters, environmental degradation and migration, which may worsen the vulnerability of regions, can be monitored over time. This marks a shift from relying solely on aggregated socio-political and economic data to obtaining detailed insights via remote sensing. To enrich the GeoKG data pool with additional context, PeaceEye incorporates crowdsourced HUMINT data from community-based stakeholders with firsthand experience, alongside openly available sources such as OECD data, media, and social media channels. This approach creates a robust and comprehensive information repository, enabling thorough and in-depth predictive analysis of the situation. As an initial step, PeaceEye utilizes geo-semantics and intelligent data-crawling algorithms to identify, categorize, and validate current online data, ensuring accurate geo-location and visualization of contextual conflict information. The diverse nature of these multi-source datasets presents significant challenges in maintaining both data quality and robust security. Similarly, HUMINT information shared by direct stakeholders demands strict data protection and security measures. To foster trust and uphold high standards in data transfer and storage, advanced cybersecurity protocols are implemented at every stage of storage and processing.
Once all data security standards are met, the data is securely integrated into the overall knowledge graph environment. By integrating EO data with ground-level intelligence using GeoKGs, PeaceEye delivers a comprehensive spatial understanding of fragile zones, enabling real-time conflict monitoring, assessment, and prediction for stakeholders. This combined bottom-up and top-down approach improves the accuracy of conflict analysis and ensures secure transmission of field intelligence through advanced telecommunications and navigation technologies, essential in compromised areas. PeaceEye enhances emergency preparedness and contingency planning, aiming to support safer and more effective humanitarian and commercial operations in volatile regions.
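To make the GeoKG idea concrete, the sketch below models a tiny triple store with a spatial query: a plain-Python stand-in for an RDF/OWL store queried via GeoSPARQL. All entity names, predicates, and coordinates are invented for illustration and bear no relation to PeaceEye's actual data or schema.

```python
from math import hypot

# A toy geo-knowledge graph as (subject, predicate, object) triples.
TRIPLES = [
    ("region:A", "hasEvent", "event:drought"),
    ("region:A", "hasLocation", (12.0, 4.5)),   # (lon, lat), illustrative
    ("region:B", "hasEvent", "event:displacement"),
    ("region:B", "hasLocation", (12.3, 4.6)),
]

def objects(subject, predicate):
    """Basic graph-pattern lookup, analogous to a single SPARQL triple pattern."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

def regions_near(point, radius_deg):
    """Spatial filter: regions whose location lies within `radius_deg` of
    `point` -- a stand-in for a GeoSPARQL distance function."""
    return [s for s, p, o in TRIPLES
            if p == "hasLocation"
            and hypot(o[0] - point[0], o[1] - point[1]) <= radius_deg]
```

In a production GeoKG these queries would be expressed in GeoSPARQL against an RDF store; the point of the sketch is only the pattern, graph lookups combined with spatial predicates over the same data.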

Wednesday 25 June 14:00 - 15:30 (Hall K2)

Session: C.02.14 The EarthCARE Mission’s First Year in Orbit: Opening new Horizons for Cloud, Aerosol and Radiation Science - PART 3.

The Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) satellite mission aims to improve our understanding of cloud-aerosol-radiation interactions and Earth radiation budget, such that they can be modelled with better reliability in climate and numerical weather prediction models. To achieve this objective, EarthCARE will measure the three-dimensional structure of clouds, precipitation and aerosols, along with collocated observations of solar and terrestrial radiation.

This ESA-JAXA mission was successfully launched in May 2024 and, following the satellite and instrument commissioning phase, now provides unique co-registered observations from a suite of four instruments on a common platform: (1) ATmospheric LIDar (ATLID), (2) Cloud Profiling Radar (CPR), (3) Multi-Spectral Imager (MSI) and (4) BroadBand Radiometer (BBR). EarthCARE's global observations include vertical profiles of natural and anthropogenic aerosols, the vertical distribution of ice and liquid water content, the cloud mesoscale distribution, precipitation microphysics, estimates of particle size, convective vertical air motions, as well as atmospheric radiative heating and cooling profiles. In addition to enabling new insights into climate science and providing unique data for NWP improvements, EarthCARE continues the heritage measurements of CloudSat, CALIPSO and Aeolus, and bridges to future missions such as NASA's Atmosphere Observing System (AOS) and Aeolus-2.

The session invites contributions from the science community on EarthCARE and related science themes, including Passive and Active Observational Techniques; Cloud and Precipitation Microphysics, Aerosols and Radiation Process Studies; Radiation and Earth Radiation Budget; Scientific and User Applications as well as Long-Term Data Records. In addition, scientific synergies with heritage, operational and future satellite missions as well as with ground-based, air- or ship-borne campaign activities are welcome.

Contributions on Modelling, Assimilation and Parameterisation at Global, Regional and Cloud Level that enhance high-resolution atmospheric numerical model activities through evaluation and improvement using novel satellite observations from EarthCARE and related satellite missions are particularly invited. A focus is placed on the use of cutting-edge atmospheric climate and weather models, including "global km-scale" or "global storm-resolving" models, and commensurate Earth observations of clouds, aerosols and convection.

Wednesday 25 June 14:00 - 15:30 (Hall K2)

Presentation: Preliminary validation results from EarthCARE Commissioning Cal/Val Campaign in Ottawa (ECALOT)

Authors: Dr Zhipeng Qu, Zen Mariani, Alexei Korolev, Howard Barker, Meriem Kacimi, Stephen Holden, Robert Reed, Jason Milbrandt, Jason Cole, Daniel Michelson, Ivan Heckman, Igor Grishin, Robert Crawford, Melissa Cholette, Michael Wheeler, Cuong Nguyen, Paloma Borque, Keyvan Ranjbar, Natalia Bliankinshtein, Leonid Nichman, Kenny Bala, Eric Roux, Antony Brown, Reagh Sherwood, Bryan Carrothers, Daniel MacDonald, John Gyakum, Eve Bigras, Yi Huang, Lei Liu, Benjamin Riot-Bretêcher, Julie M. Thériault, Kaley Walker, Rob Koopman, Jonas von Bismarck, Stephanie Rusli, Montserrat Pinol Sole, Ann Mari Fjæraa
Affiliations: Environment And Climate Change Canada, National Research Council Canada, McGill University, Université du Québec à Montréal, University of Toronto, European Space Agency, Norwegian Institute for Air Research
Following the launch of ESA and JAXA’s Earth Cloud, Aerosol, and Radiation Explorer (EarthCARE) satellite in May 2024, the EarthCARE Commissioning Calibration/Validation (Cal/Val) Campaign in Ottawa (ECALOT) represents Canada’s first contribution to EarthCARE’s global cal/val initiative. ECALOT aims to collect crucial airborne and surface observations to calibrate and validate key EarthCARE products, including Level 1 Atmospheric Lidar (ATLID) and Cloud Profiling Radar (CPR) products, as well as Level 2 single-instrument, multi-instrument, synergetic retrievals, and radiation products. Flights were conducted using the National Research Council Canada’s (NRC) Convair-580 aircraft, equipped with advanced remote sensors such as W-band radars, 355 nm lidars—mirroring EarthCARE’s active payloads—as well as X and Ka-band radars. The Convair-580 also carried state-of-the-art in situ cloud and aerosol sampling instruments. To support these validation efforts, Environment and Climate Change Canada (ECCC) and McGill University deployed a specialized surface observation site near Ottawa International Airport. Additionally, two Climate Sentinels network stations operated by McGill University and Université du Québec à Montréal in the Montréal region collected critical surface data to complement the airborne measurements. The ECALOT campaign, conducted from October 2024 to January 2025, focused on observing fall and winter atmospheric conditions within a 350 km radius of Ottawa. Flights were carefully synchronized with EarthCARE satellite overpasses, and surrounding areas were also observed to provide comprehensive atmospheric context for validation. The campaign captured diverse cloud and aerosol scenes, including cumulus, stratocumulus, cirrus, nimbostratus, large-scale rain and snow systems, complex multi-layer clouds, and aerosol layers above clouds. 
This presentation provides an overview of the ECALOT campaign, shares preliminary airborne calibration and validation results for EarthCARE Level 1 and Level 2a products, and explores potential applications for validating ECCC’s numerical weather prediction models. Additionally, it highlights ECALOT's role in preparing for the upcoming Arctic Polar Night Experiment (PONEX) campaign, scheduled for January 2026.

Wednesday 25 June 14:00 - 15:30 (Hall K2)

Presentation: Intercomparison of cloud products between EarthCARE/MSI and Himawari-9/AHI

Authors: Masataka Muto, Takuji Kubota, Toshiyuki Tanaka, Takashi Y. Nakajima, Minrui Wang
Affiliations: Japan Aerospace Exploration Agency, Tokai University
The Earth Cloud Aerosol and Radiation Explorer (EarthCARE) is a Japanese-European collaborative Earth observation mission, successfully launched in May 2024. The mission's goal is to elucidate the effects of clouds and aerosols on the Earth's climate system and radiation budget. The assessment of radiative forcing of clouds and aerosols is a major uncertainty in climate change projections with climate models. The EarthCARE satellite uses four instruments: the Cloud Profiling Radar (CPR), the Atmospheric Lidar (ATLID), the Multi-Spectral Imager (MSI), and the Broadband Radiometer (BBR), to globally observe cloud and aerosol distribution, vertical structure, and radiation flux at the top of the atmosphere. The accumulation of these data will be used in particular to improve the accuracy of cloud and aerosol representation in climate and weather forecast models. MSI is one of the instruments developed by the European Space Agency (ESA); it measures horizontal characteristics of clouds and aerosols with one visible band (0.67 µm), one near-infrared (NIR) band (0.865 µm), two shortwave infrared (SWIR) bands (1.65 µm and 2.21 µm), and three thermal infrared (TIR) bands (8.80 µm, 10.8 µm, and 12.0 µm). The spatial resolution of each band is 500 m and the swath width is 150 km. The swath comprises 384 pixels, including some dummy pixels on each side, and nadir is not located at the center of the swath in order to avoid sun glint on the sea surface. JAXA MSI level 2 cloud products, including cloud flag and cloud phase, are provided by the Cloud and Aerosol Unbiased Decision Intellectual Algorithm (CLAUDIA), while cloud optical thickness, cloud particle effective radius, cloud top temperature, and cloud top height are provided by the Comprehensive Analysis Program for Cloud Optical Measurements (CAPCOM). An intercomparison for quantitative validation of the MSI products was conducted with cloud products from the imager aboard the Himawari-9 satellite.
Himawari-9 is a geostationary meteorological satellite launched in November 2016 and operated by the Japan Meteorological Agency. The Advanced Himawari Imager (AHI) aboard Himawari-9 has several wavelength bands similar to those of MSI, with horizontal resolutions of 500 m for visible (0.64 µm), 1 km for NIR (0.86 µm), and 2 km for SWIR (1.6 µm and 2.3 µm) and TIR (8.6 µm, 10.4 µm, and 12.4 µm). This presentation shows the intercomparison results of cloud products between EarthCARE/MSI and Himawari-9/AHI.

Wednesday 25 June 14:00 - 15:30 (Hall K2)

Presentation: EarthCARE ATLID and passive instruments synergy for advanced retrieval of aerosol vertical profiles

Authors: Anton Lopatin, Dr Alexandra Tsekeri, Chong Li, Dr Eleni Marinou, Paul Tytgat, Kalliopi Artemis Voudouri, Dr Vassilis Amiridis, Oleg Dubovik, Edward Malina
Affiliations: GRASP SAS, IAASARS, National Observatory of Athens, Université de Lille, CNRS, UMR 8518 - LOA - Laboratoire d’Optique Atmosphérique, ESRIN, European Space Agency
Here we present the ESA-supported "EarthCARE ATLID and MSI instruments Synergy for advanced retrieval of aerosol vertical profiles" (ECAMS) project, which is focused on advancing the synergy of passive and active observations by combining L1 products of EarthCARE and various imagers. More specifically, it proposes to realise aerosol retrievals using the fusion of L1 observations from the lidar (ATLID) and imagers deployed on EarthCARE (MSI) and PACE (SPEXone, HARP-2), as well as on other advanced missions. Such deep synergy retrievals based on the fusion of L1 observations are methodologically the most promising and fruitful approach, while involving substantial technical challenges and a significant engagement of resources. Atmospheric aerosol is one of the main drivers of the climate crisis and significantly impacts human health. Global information on the distribution of aerosol properties is obtained mainly from space-borne measurements. Passive remote sensing includes spectral observations of the top-of-atmosphere reflectance at one or several angles, providing information on the atmospheric aerosol amount, with some sensitivity to particle size, shape, and morphology, but limited sensitivity to their vertical variability. On the other hand, active lidar observations are highly sensitive to the vertical distribution of aerosols. The interpretation of lidar data, though, requires assumptions about aerosol sizes and morphology. Therefore, information from collocated radiometric measurements is always desirable for the interpretation of lidar observations, given the complementarity of passive and active measurements. At present, there are several approaches to perform synergetic retrievals of aerosol properties using a combination of passive and active observations. However, these approaches mainly focus on ground-based observations and do not fully explore recent advancements in lidar developments, such as observations by high spectral resolution lidars.
Moreover, these available approaches do not provide sufficient flexibility in using different lidar configurations and assuring their adequate and consistent fusion with passive measurements for space-borne remote sensing observations. In this respect, the ECAMS developments are expected to provide new opportunities for the generation of global aerosol vertical distribution products with enhanced accuracy that can be of essential value for the validation of climate models. The synergetic passive and active satellite retrieval is implemented using highly optimized and flexible forward models (aerosol, gases, and surface reflectance) following the inversion concept of statistical estimation realized in the open-source GRASP (Generalized Retrieval of Atmosphere and Surface Properties) software (Dubovik et al., 2021). The developed synergetic concept is designed to be easily ported and adapted for processing various types of satellite observations with different spatial, vertical, and spectral resolutions as well as different spectral ranges, and serves as a community platform for advancing the remote sensing of the atmosphere's vertical structure and surface, thus providing a virtual laboratory for different kinds of remote sensing developments and applications. One of the objectives of ECAMS is to support and complement the synergy developments ongoing in the frame of the ESA AIRSENSE project, and its studies on aerosol-cloud interactions realised in collaboration with the EC CleanCloud and CERTAINTY projects. The project's ongoing developments and results will be presented and discussed.
References: Dubovik, O., D. Fuertes, P. Litvinov, et al., "A Comprehensive Description of Multi-Term LSM for Applying Multiple a Priori Constraints in Problems of Atmospheric Remote Sensing: GRASP Algorithm, Concept, and Applications", Front. Remote Sens. 2:706851, doi: 10.3389/frsen.2021.706851, 2021.

Wednesday 25 June 14:00 - 15:30 (Hall K2)

Presentation: A First Evaluation of the EarthCARE CPR Doppler Velocity Measurements for the Estimation of Hydrometeor Sedimentation Velocities and Convective Vertical Air Motions

Authors: Pavlos Kollias, Mr. Bernat Treserras, Alessandro Battaglia, Ms. Aida Galfione
Affiliations: McGill University, Stony Brook University, Politecnico di Torino
The joint European Space Agency (ESA) and Japan Aerospace Exploration Agency (JAXA) Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) satellite mission was successfully launched on May 28, 2024. The EarthCARE mission includes a 94 GHz Cloud Profiling Radar (CPR) with high sensitivity (-35 dBZ), a sub-kilometer footprint, and is the first to provide Doppler velocity measurements from space. The availability of Doppler measurements from space offers a unique opportunity for the collection of a global dataset of vertical motions in clouds and precipitation. Such a global dataset is expected to improve our understanding of convective motions in clouds and to help evaluate current parameterizations of convective mass flux in cloud-resolving models. Furthermore, global climate models (GCMs) require an accurate representation of ice particle sedimentation rates, which the CPR Doppler measurements can potentially provide. Such measurements will also help to constrain the retrieval of particles' characteristic size in drizzling and large-scale precipitation conditions. The Doppler capability of EarthCARE's CPR represents a significant innovation, yet considerable work is required before its potential can be fully exploited to provide new insights into cloud dynamics and the sedimentation terminal velocity of hydrometeors. The EarthCARE commissioning phase was completed in December 2024, and the CPR L2a data products and algorithms have successfully tackled several challenges with the CPR measurements. More is now known about the CPR performance, especially when it comes to the Doppler velocity measurements. The focus of this paper is on enhancing the scientific value of the CPR Doppler velocity measurements. The first global climatology of ice hydrometeor fall velocities as a function of temperature and radar reflectivity for different cloud and precipitation systems will be presented.
In addition, examples of retrieved convective vertical air motions and a preliminary assessment of the ability of the EarthCARE CPR to characterize convective vertical air motions will be discussed.
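A temperature-reflectivity climatology of fall velocities of the kind described might be built by simple binned averaging of the Doppler measurements. The sketch below is a generic illustration; the bin edges and variable conventions are assumptions, not the authors' processing chain.

```python
import numpy as np

def fall_velocity_climatology(temp_c, dbz, vel_ms,
                              t_edges=np.arange(-60.0, 5.0, 5.0),
                              z_edges=np.arange(-30.0, 25.0, 5.0)):
    """Mean Doppler (fall) velocity in each (temperature, reflectivity)
    bin; NaN where a bin holds no samples."""
    t_idx = np.digitize(temp_c, t_edges) - 1
    z_idx = np.digitize(dbz, z_edges) - 1
    shape = (len(t_edges) - 1, len(z_edges) - 1)
    total = np.zeros(shape)
    count = np.zeros(shape)
    # keep only samples that fall inside the 2-D bin grid
    ok = (t_idx >= 0) & (t_idx < shape[0]) & (z_idx >= 0) & (z_idx < shape[1])
    np.add.at(total, (t_idx[ok], z_idx[ok]), vel_ms[ok])
    np.add.at(count, (t_idx[ok], z_idx[ok]), 1.0)
    return np.where(count > 0, total / np.maximum(count, 1.0), np.nan)
```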

Wednesday 25 June 14:00 - 15:30 (Hall K2)

Presentation: Validation of EarthCARE Cloud and Precipitation Products Using FAAM Aircraft Observations: VERIFY campaign

Authors: Dr. Kamil Mroz, Dr Rui Song, Prof Roy G. Grainger, Prof Christopher Westbrook, Prof Thorwald Stein
Affiliations: NCEO, University of Leicester, NCEO, University of Oxford, University of Reading
The EarthCARE satellite mission, a collaboration between ESA and JAXA, aims to improve understanding of clouds, aerosols, and their interactions with radiation to enhance climate modelling and prediction. As part of this mission, the Validation of EarthCARE Retrievals using FAAM In-situ FlYing (VERIFY) campaign was launched, utilizing the FAAM BAe-146 research aircraft equipped with a comprehensive suite of instruments. The campaign, based in the UK and conducted over selected satellite overpasses, focuses on validating Level 2 cloud and precipitation products, with particular emphasis on the CPR Cloud and Precipitation (C-CLD) product developed by the campaign’s Principal Investigator and the synergistic ACM-CAP product developed at ECMWF. The VERIFY campaign involves coordinated flights under EarthCARE satellite tracks to collect in-situ measurements of clouds, precipitation, and aerosols. Key instruments on the FAAM aircraft, such as the Nevzorov probe and Cloud Droplet Probe, measure cloud water content and droplet distributions, while Broadband Radiometers and optical array probes capture radiative fluxes and particle properties. These in-situ measurements are compared against satellite-derived products to assess their accuracy and reliability. To date, five flights have been conducted during the first intensive observation period in November 2024. These include two flights sampling marine stratocumulus clouds over the North Sea, one targeting cirrus clouds in the northern Atlantic, a flight over Northern Ireland focused on a precipitating cloud system, and a final sortie sampling developed cumulus clouds characterized by strong updrafts and downdrafts. Initial results from these flights demonstrate promising collocations between in-situ and EarthCARE observations, with notable examples including the detection of FAAM aircraft radar returns in satellite reflectivity data and improved alignment after advection corrections. 
These flights underscore the importance of strategic flight planning and highlight the unique capabilities of the FAAM platform for advancing the validation of EarthCARE products. Three more flights are planned in January 2025.

Wednesday 25 June 14:00 - 15:30 (Hall K2)

Presentation: Analysis of Doppler accuracy across different observation modes of EarthCARE/CPR

Authors: Yuki Imura, Takuji Kubota, Shunsuke Aoki, Hirotaka Nakatsuka
Affiliations: JAXA
Cloud-aerosol-radiation interactions are one of the largest sources of uncertainty in accurate climate change predictions. EarthCARE is a satellite developed with the primary goal of reducing these uncertainties. It is equipped with four key sensors: CPR (Cloud Profiling Radar), ATLID (Atmospheric LiDAR), MSI (Multi-Spectral Imager), and BBR (Broad-Band Radiometer). The simultaneous observation by these instruments will provide valuable insights into the roles of clouds and aerosols in Earth’s climate system. The CPR, developed by JAXA and NICT, is the world's first spaceborne sensor with Doppler velocity measurement capabilities, which helps constrain the uncertainties related to the vertical velocity of clouds. The CPR has three Sub Operation modes, each with a different observation altitude limit: 16 km, 18 km, and 20 km. The observational differences between these sub-modes have a significant impact on the measured cloud top heights and Doppler velocities. In the 16 km mode, observations are conducted with a high Pulse Repetition Frequency (PRF), which provides the most accurate Doppler velocity measurements of the three modes. However, this mode cannot capture clouds above 16 km (such as cirrus or tropical convective clouds that extend above that altitude). On the other hand, the 20 km mode can observe higher clouds, but the Doppler velocity accuracy decreases due to the lower PRF. In this study, we used observation data from the global fixed Sub Operation mode, which was specially conducted from 5 to 12 November 2024, to examine the differences in Doppler velocity accuracy among the three sub-modes. The results demonstrated that the 18 km mode offers Doppler velocity accuracy almost equal to that of the 16 km mode. We also compared cloud top heights observed in each sub-mode with those observed by ATLID, the LiDAR onboard the EarthCARE satellite.
Very few clouds were found at altitudes higher than 18 km, suggesting that the 18 km mode could cover nearly all clouds. However, it should be noted that the 18 km mode is more susceptible to false cloud echoes, known as "mirror images," than the 20 km mode. This issue can be addressed and mitigated through higher-level algorithms, such as those used in Level 2 processing. The findings of this study support the use of the 18 km mode for CPR observations, and it will be incorporated into actual operations and observations.
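The PRF trade-off described above follows from two standard radar relations: the Nyquist (maximum unambiguous) Doppler velocity grows with PRF, while the unambiguous range window shrinks. A minimal sketch, using the CPR's 94 GHz carrier but assumed, illustrative PRF values rather than the actual mode settings:

```python
# Illustrative sketch of the PRF trade-off behind the CPR sub-modes.
# The PRF values below are assumptions for illustration only, not the
# actual EarthCARE CPR operating parameters.
C = 299_792_458.0        # speed of light (m/s)
FREQ = 94e9              # CPR is a 94 GHz (W-band) radar
WAVELENGTH = C / FREQ    # ~3.2 mm

def nyquist_velocity(prf_hz: float) -> float:
    """Maximum unambiguous Doppler velocity (m/s) for pulse-pair estimation."""
    return WAVELENGTH * prf_hz / 4.0

def unambiguous_range(prf_hz: float) -> float:
    """Maximum unambiguous range window (m) before echoes fold over."""
    return C / (2.0 * prf_hz)

# Higher PRF -> more accurate Doppler, but a shorter altitude window:
for prf in (7500.0, 7000.0, 6500.0):
    print(f"PRF {prf:6.0f} Hz: v_nyq = {nyquist_velocity(prf):4.2f} m/s, "
          f"range window = {unambiguous_range(prf) / 1e3:5.2f} km")
```

At these assumed PRFs the range window spans roughly 20 to 23 km, which is why raising the PRF to sharpen Doppler accuracy necessarily lowers the observable altitude limit.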

Wednesday 25 June 14:00 - 15:30 (Room 1.15/1.16)

Session: A.07.05 Monitoring and predicting surface water and flood dynamics - PART 3

The socio-economic consequences of floods are rising rapidly, as floods are the most frequent and impactful weather-related disasters, affecting nearly 800 million people in the past decade and causing economic losses exceeding $300 billion. In this context, remote sensing has emerged as a critical tool for data collection and observation, especially in regions where field surveys and gauging stations are limited, such as remote areas and developing nations. The integration of remotely-sensed variables—like digital elevation models, river width, flood extent, water level, flow velocities, and land cover—into hydraulic models offers the potential to significantly enhance our understanding of flood processes and improve predictive capabilities.

Over recent decades, research has focused on optimising the use of satellite observations, supported by both government and commercial initiatives, and numerous datasets from airborne sensors, including aircraft and drones. Recent advancements in Earth observation (EO) have further enhanced the monitoring of floods and inland water dynamics, utilising optical imagers, Synthetic Aperture Radars (SARs), and Global Navigation Satellite System Reflectometry (GNSS-R) to detect surface water, even in densely vegetated regions. Radar altimeters now measure water levels over smaller lakes and rivers. However, despite these advancements, the update frequency and timeliness of most remote sensing data products are still limited for capturing dynamic hydrological processes, which hinders their use in forecasting and data assimilation. Additionally, spatial and temporal inconsistencies across different sensors pose challenges in creating integrated multi-sensor products, such as fused surface water and flood extent products, water volume estimates, and wetland maps.

The scientific community has increasingly recognized the potential of remotely-sensed data for calibrating and validating hydraulic models, and for revolutionising real-time flood monitoring. With the expansion of open data from sources such as the European Space Agency (ESA), and the availability of more Earth observation data than ever before, this progress is expected to continue.

This session invites cutting-edge presentations on flood monitoring and mapping through remotely-sensed data, focusing on:

- Remote sensing data for flood hazard and risk mapping, including commercial satellite missions and airborne sensors (aircraft and drones);
- Remote sensing techniques for monitoring flood dynamics;
- The use of remotely-sensed data for calibrating or validating hydrological or hydraulic models;
- Data assimilation of remotely-sensed data into hydrological and hydraulic models;
- Enhancements in river discretization and monitoring through Earth observations;
- River flow estimation using remote sensing;
- Machine learning and deep learning-based flood mapping or predictions;
- Ideas for developing multi-satellite data products and services to improve the monitoring of flood and surface water dynamics.

Wednesday 25 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Improving Flood Detection in Arid Regions Using Sentinel-1 Interferometric Coherence and Machine Learning

Authors: Shagun Garg, Dr Antara Dasgupta, Prof. Dr. Mahdi Motagh, Dr. Sandro Martinis, Dr. Edoardo Borgomeo, Dr. Sivasakthy Selvakumaran
Affiliations: University Of Cambridge, RWTH Aachen University, Institute of Photogrammetry and GeoInformation, Leibniz University Hannover, GFZ German Research Centre for Geosciences, Department of Geodesy, Section of Remote Sensing and Geoinformatics, German Aerospace Center (DLR), German Remote Sensing Data Center (DFD)
Floods are among the most devastating natural disasters, affecting about 1 in 4 people globally. The increasing frequency and intensity of extreme weather events have led to unprecedented flooding impacts, particularly in arid regions. The low soil permeability in arid regions means that short periods of heavy rain can cause rapid surface runoff, erosion, and infrastructure damage. Moreover, arid regions often lack the infrastructure and resources to cope with such disasters. Satellite-based remote sensing has become crucial for near real-time flood mapping and monitoring, and for rapid response and rescue operations. While optical satellites are limited by cloud cover, Synthetic Aperture Radar (SAR) satellites are increasingly utilized due to their relatively longer wavelengths, which penetrate clouds, and their ability to collect information in different polarization modes. However, current SAR-based flood detection methods struggle to differentiate between water and dry sandy surfaces, as both exhibit similar low-amplitude backscatter characteristics. This creates a critical gap in our ability to monitor and respond to floods in arid regions. We present a methodology that combines SAR amplitude and interferometric coherence data for flood detection in arid regions. We leverage ESA Copernicus Sentinel-1 data and employ a Random Forest classifier to integrate multiple SAR features, including temporal coherence and backscatter information. The predicted flood map is validated against reference flood maps derived using cloud-free Sentinel-2 optical imagery. The methodology is tested on three real-world flood events in Iran, Turkmenistan, and Pakistan. Our analysis reveals that combining coherence information with amplitude-based methods improves flood detection accuracy by 12% to 25% across the three test cases, with strong performance in areas where traditional methods typically fail.
Using permutation feature importance analysis, we identified three key features: coherence and pre-/post-flood amplitude changes, all in the vertical-transmit, vertical-receive (VV) polarization. By focusing on these features, our model maintains the same accuracy while reducing processing time by 33%, making it more suitable for emergency response. The model also demonstrates robust performance across different geographical regions, successfully detecting floods in previously unseen locations without retraining. This geographical transferability suggests the potential for a standardized flood detection system that includes arid regions. The increasing availability of open-access SAR data and advances in cloud computing have made large-scale handling and processing of SAR data more feasible. With multiple space agencies launching new SAR missions, there are opportunities to test and adapt this methodology across different sensors and integrate it into operational flood mapping systems.
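As a rough illustration of the feature-fusion idea (not the authors' pipeline or data), a Random Forest can be trained on coherence plus pre-/post-event backscatter; the synthetic distributions below mimic the water/dry-sand ambiguity that only coherence resolves:

```python
# A minimal sketch (not the authors' pipeline) of fusing Sentinel-1
# coherence and amplitude features in a Random Forest flood classifier.
# All feature values are synthetic; in practice they come from
# co-registered pre-/co-event SAR stacks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000

# Flooded pixels: coherence collapses and VV backscatter drops.
flooded = np.column_stack([
    rng.normal(0.15, 0.05, n),   # co-event coherence
    rng.normal(-18.0, 2.0, n),   # post-event VV backscatter (dB)
    rng.normal(-10.0, 2.0, n),   # pre-event VV backscatter (dB)
])
# Dry sand: backscatter is also low, but coherence stays high --
# the feature that resolves the water/sand ambiguity in arid regions.
dry = np.column_stack([
    rng.normal(0.60, 0.10, n),
    rng.normal(-17.0, 2.0, n),
    rng.normal(-16.5, 2.0, n),
])

X = np.vstack([flooded, dry])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = flooded

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("feature importances:", clf.feature_importances_)
```

In this toy setup the classes separate almost perfectly because coherence stays high over dry sand while it collapses over flood water, mirroring the ambiguity-resolving role described in the abstract.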

Wednesday 25 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: An integrated workflow for post-flood analysis

Authors: Gundula Winter, Martin Schobben, Björn Backeberg, Willem Tromp, Florian Roth, Clay Taylor Harrison, Matthias Schramm, Wolfgang Wagner
Affiliations: Deltares, TU Wien
Floods are the most costly disasters after storms globally. To adequately respond to, recover from and ultimately prepare for flooding, an integrated workflow of flood modelling and remote sensing can help decision-makers better understand impacts and risks and ultimately take action for disaster risk reduction (DRR) and climate change adaptation (CCA). We will present an automated workflow that combines remote sensing with a physics-based flood model, enabling users to perform post-flood analysis. Remotely sensed flood maps are generated using Sentinel-1 Synthetic Aperture Radar (SAR) imaging, which is highly sensitive to water occurrences and can operate irrespective of daylight and cloud coverage, thereby providing an advantage over optical sensors. The flood detection procedure* is based on a Bayesian decision that harnesses knowledge of historical backscatter behaviour over land and water surfaces. This yields a computationally efficient operation that is well suited for integration into an automated workflow with physics-based flood models. Satellite-based flood mapping can only provide an output when a Sentinel-1 observation is available, and is therefore dependent on the temporal resolution of the satellite. Hence, remotely detected floods will be used to trigger a workflow, which builds and runs a flood model using the hydrodynamic model SFINCS**. SFINCS is an open-source reduced-complexity model, which retains essential physics to accurately predict compound (pluvial, fluvial and coastal) flooding, with a significant gain in computational speed. The hydrodynamic model will be forced with meteorological data of wind, pressure and rainfall to model flood extent and depth. While both products, modelled and remotely sensed flood maps, have inherent uncertainties, their integration can help fine-tune workflow parameters and model choices to improve the accuracy and credibility of both.
For disaster response and recovery, a post-event analysis would increase the accuracy of, and ultimately trust in, observed and simulated flood maps. Divergences between the two flood maps can also indicate potential damage to flood protection infrastructure, such as a breached dike. This can help first responders to quickly assess flood impacts, prioritize recovery efforts and optimize resource allocation. For long-term planning, the combined flood map product represents a status quo that stakeholders can relate to and have a living memory of. Building upon this, the numerical model helps to explore "what-if" scenarios. This allows decision-makers to ask what would have happened if this levee had failed, if this rainfall event had coincided with a spring tide, or what this flood would have looked like in a world with a more than 1.5 degree increase in global temperature and higher sea levels. This will help decision-makers to identify appropriate DRR and CCA strategies and to communicate choices, and the consequences of inaction, effectively with citizens. The overall goal is to provide a scalable, reproducible framework that integrates satellite-based flood maps with hydrodynamic modelling and that can ultimately be used to build digital twins for DRR and CCA. * Bauer-Marschallinger, B., Cao, S., Tupas, M. E., Roth, F., Navacchi, C., Melzer, T., ... & Wagner, W. (2022). Satellite-Based Flood Mapping through Bayesian Inference from a Sentinel-1 SAR Datacube. Remote Sensing, 14(15), 3673. ** Leijnse, T., van Ormondt, M., Nederhoff, K., & van Dongeren, A. (2021). Modeling compound flooding in coastal systems using a computationally efficient reduced-physics solver: Including fluvial, pluvial, tidal, wind- and wave-driven processes. Coastal Engineering, 163, 103796.
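The Bayesian decision at the heart of the flood detection procedure can be sketched as a per-pixel posterior computed from historical backscatter statistics. This is a deliberately simplified Gaussian version of the cited Bauer-Marschallinger et al. (2022) approach (which models land backscatter with a harmonic seasonal model per pixel); the water parameters and prior below are assumed illustrative values:

```python
# Simplified sketch of a Bayesian per-pixel flood decision from SAR
# backscatter. Gaussian likelihoods stand in for the per-pixel
# historical statistics; water_mu, water_sigma and the prior are
# assumed illustrative values, not calibrated parameters.
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_flood(sigma0_db, land_mu, land_sigma,
            water_mu=-20.0, water_sigma=2.0, prior_flood=0.5):
    """Posterior probability that a pixel is flooded, given its observed
    backscatter (dB) and that pixel's historical land statistics."""
    l_water = gaussian_pdf(sigma0_db, water_mu, water_sigma) * prior_flood
    l_land = gaussian_pdf(sigma0_db, land_mu, land_sigma) * (1.0 - prior_flood)
    return l_water / (l_water + l_land)

# A pixel that historically scatters at -8 dB but now returns -19 dB
# is almost certainly open water:
print(p_flood(-19.0, land_mu=-8.0, land_sigma=1.5))
```

The decision is computationally trivial per pixel, which is what makes the approach suitable for the automated, datacube-scale workflow described above.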

Wednesday 25 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Integrating Site-Specific Context and Uncertainty Mitigation in Flood Map Validation

Authors: Juan Ardila, Henri Riihimäki, Archana Joshi, Ambika Khadka, Annett
Affiliations: Iceye
ICEYE operates the world’s largest constellation of X-band Synthetic Aperture Radar (SAR) satellites, revolutionizing flood mapping through improved availability, spatial resolution, and temporal revisit capabilities. With more than 300 flood events mapped globally, ICEYE produces detailed flood extent and depth maps that are instrumental in disaster response and flood insurance for urban and agricultural areas. These maps leverage daily SAR observations and advanced geoprocessing algorithms to provide actionable data at unprecedented scales and timeliness. However, the improved spatial and temporal resolution of ICEYE’s maps introduces challenges for validation, as conventional datasets and metrics often fail to capture the necessary precision. Verification of detailed flood maps requires diverse data sources, including SAR imagery from Sentinel satellites, high water marks (HWM), social media posts, and insurance claims. Yet each source has limitations. Sentinel data, while widely used, lacks the resolution and revisit frequency to validate high-detail maps. HWMs offer more localized accuracy but are subject to geolocation uncertainties and selection biases. Social media provides valuable real-time insights but suffers from poor geotagging and temporal reliability. Insurance claims, though technically reliable, are influenced by business logic or self-reporting deviations, creating systematic biases for verification. This study evaluates the uncertainties inherent to these verification methods, focusing on three critical areas: geolocation accuracy, temporal alignment, and depth estimation. By identifying the strengths and limitations of existing datasets as well as site-specific characteristics, we propose a more transparent framework to benchmark flood maps. This framework emphasizes filtering and weighting of reference data, spatial autocorrelation analysis, and the development of metrics that account for terrain characteristics and other contextual factors.

1. Introduction. ICEYE’s constellation of X-band SAR satellites enables subdaily observations of flood events, creating a new paradigm in flood monitoring. The system integrates advanced algorithms and cloud-based geoprocessing to produce detailed flood extent and depth maps, which are updated multiple times per day. These maps are essential for the insurance industry to assess losses and for governments to allocate resources and guide rescue operations. Despite these advancements, verifying the accuracy of these maps remains a challenge due to the lack of datasets with comparable precision in space and time, as well as the specificity of each event and site.

2. Geolocation Uncertainty in Flood Verification. Flood verification data often lack the spatial accuracy needed to benchmark high-resolution ICEYE maps. SAR imagery from Sentinel-1, for instance, offers a resolution of 10 meters and observations every 5–12 days, which is insufficient for capturing dynamic flood edges in urban and semi-urban areas. High water marks (HWMs), while offering precise depth measurements, are subject to significant geolocation uncertainties and require careful filtering to identify useful observations. Social media, another emerging data source, provides valuable qualitative insights but often lacks reliable geotagging and timestamping, further complicating its integration into verification workflows. Likewise, insurance claims are influenced by compensation thresholds and by clustering resulting from localized customer subscription or limited availability of field personnel.

3. Temporal Uncertainty in Flood Verification. Time is a critical factor in flood map validation. ICEYE’s subdaily observation capabilities capture rapid changes in flood dynamics, but many traditional datasets are not temporally aligned. Sentinel-2, for example, revisits flood-affected areas every 5–12 days, often missing critical phases of the flood timeline. Similarly, HWMs and insurance claims are typically collected days or weeks after an event, when flood evidence has often eroded. Social media posts may offer explicit insights but are affected by the lack of accurate capture times, limiting their utility for precise temporal validation.

4. Uncertainty in Flood Depth Estimation. Flood depth verification presents unique challenges, as satellite imagery does not directly measure depth. Depth estimation requires modeling and algorithmic approaches that themselves require validation. While HWMs may provide reliable depth measurements, they are limited in coverage and often clustered spatially. Social media offers occasional depth insights, but these depend on reference objects and photogrammetric techniques, which introduce additional uncertainties. Insurance claims data are influenced by compensation thresholds, leading to systematic biases in reported depths. These limitations highlight the need for more robust methods to assess and validate depth estimates.

5. Site-Specific Characteristics and Event Type. Flood maps should be benchmarked with careful consideration of the event type and site-specific context. For example, the success rate should be evaluated based on whether the flood event is driven by fluvial, pluvial, or tidal conditions, as each presents unique challenges for mapping. Additionally, benchmarks become more meaningful when accuracy is analyzed in relation to site-specific factors, such as hydrological complexity, terrain variation, and land cover distribution. These nuanced assessments provide the community with more actionable insights and improve the reliability of flood mapping methodologies across diverse scenarios.

6. Accuracy Assessment Framework Proposal. To address the challenges posed by geolocation, temporal, and depth uncertainties in flood map validation, we propose a comprehensive, multi-criteria approach tailored to consider both field data limitations and site-specific characteristics. The strategy involves refining field measurements through filtering and weighting to prioritize accuracy and reliability, mitigating biases such as spatial clustering and selection effects using spatial autocorrelation analysis, and incorporating contextual metrics that account for landscape features like vegetation, impervious surfaces, and terrain variations. Additionally, uncertainty quantification is central, with statistical analyses explicitly addressing inaccuracies before accuracy assessments. Beyond conventional metrics such as precision and recall, the framework introduces enhanced measures that integrate the spatial and temporal dynamics critical for benchmarking high-resolution flood maps.

Case Studies. To demonstrate the framework’s effectiveness, we applied it to two events mapped in 2024: a large-scale flood event that impacted Japan and Hurricane Helene in the USA. More than 300 field measurements were available for each event. These cases illustrate the application of the proposed method to address geolocation and temporal uncertainties, as well as the value of contextual metrics in validating flood maps.
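One way to realize the "filtering and weighting of reference data" step is to attach a reliability weight to each point observation before scoring the flood map against it. The sketch below is an assumed, minimal illustration, not ICEYE's implementation:

```python
# Minimal sketch (assumed, not ICEYE's implementation) of weighting point
# reference data -- e.g. high water marks -- by a reliability score before
# computing a hit rate against a binary flood-extent raster.
import numpy as np

def weighted_hit_rate(flood_mask, points, weights):
    """flood_mask: 2-D boolean array (True = mapped as flooded).
    points: (row, col) indices of reference observations reporting flooding.
    weights: reliability in [0, 1], down-weighted for geolocation or
    timing uncertainty."""
    rows, cols = zip(*points)
    hits = flood_mask[np.array(rows), np.array(cols)].astype(float)
    w = np.asarray(weights, dtype=float)
    return float((hits * w).sum() / w.sum())

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True                      # mapped flood extent
pts = [(2, 2), (0, 0), (3, 3)]             # HWM locations (illustrative)
wts = [1.0, 0.3, 0.8]                      # lower weight = less reliable
print(weighted_hit_rate(mask, pts, wts))   # about 0.857
```

The unreliable miss at (0, 0) is discounted by its low weight, so the score degrades gracefully instead of treating all reference points as equally trustworthy.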

Wednesday 25 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Impact of Long-Lasting Flood Water on Agricultural Productivity: A Case Study of the May 2023 Emilia-Romagna Floods

Authors: Margherita Sarcinella, Jeremy Pal, Antonio Trabucco, Valentina Mereu, Jaroslav Mysiak
Affiliations: CMCC - Risk Assessment and Adaptation Strategies Division, Ca' Foscari University, CMCC - Impacts on Agriculture, Forests and Ecosystem Services Division
Heavy rainfall in the Emilia-Romagna region of Northern Italy from two major storms on 2 and 17 May 2023 led to the overflow of 22 rivers and triggered over 250 landslides. The event claimed 15 lives, forced 10,000 people to evacuate and caused over 400 road closures. Due to a prior long-lasting winter drought and poor land use management that hampered effective water drainage, floodwaters stagnated for over a month in some areas, exacerbating the crisis. Over 40% of the region's agricultural land was flooded, leading to irreversible crop damage and, in some instances, entire harvest loss. This study aims to build a consistent and replicable methodology to quantify the agricultural damage and economic loss resulting from stagnated floodwater over cropland, using the Emilia-Romagna floods as a case study. The study emphasises the use of remote sensing data as a tool to achieve accurate impact estimates. Sentinel-1 SAR imagery is used to derive 10-meter resolution flood extent and duration maps at a revisit time of 3 to 6 days. Flood depth is derived from the satellite extent maps with the FLEXTH model. An infiltration model is developed to fill the gaps in the 1.5-month-long flood extent and depth time series, allowing continuous monitoring of the event dynamics. The maps are matched with crop data available for the region from the AGREA database, and damages are computed as a function of ponded water duration, depth and crop type, as well as resistance to oxygen deprivation. The data, comprising crop type, growing season and sowing date, allow for the characterization of the growth state of each crop at the time of flooding, implicitly providing insights on the probability of plant survival. The study also highlights the use of satellite-derived vegetation indices from the Copernicus High-Resolution Vegetation Phenology and Productivity (HR-VPP) product as markers for post-disaster crop recovery, with a focus on identifying crop-specific recovery rates and patterns.
This study highlights the need for collaborative efforts with key regional entities and can provide factual, hazard-based agricultural loss estimates to local institutions. These findings can guide targeted adaptation strategies, improve the spatial accuracy of loss assessment, and deepen our comprehension of the aftermath of prolonged floods on agricultural output.
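A damage function of the kind described, depending on ponding duration, depth and crop-specific resistance to oxygen deprivation, might be sketched as follows; the thresholds here are hypothetical placeholders, not the study's calibrated values:

```python
# Illustrative sketch (hypothetical thresholds, not the study's calibrated
# values) of a per-pixel crop damage fraction as a function of ponding
# duration, water depth and crop tolerance to oxygen deprivation.
def damage_fraction(depth_m, duration_days, tolerance_days):
    """Return a damage fraction in [0, 1]. tolerance_days is the number of
    days the crop survives waterlogging (crop-specific)."""
    if depth_m <= 0.0 or duration_days <= 0.0:
        return 0.0
    # Damage ramps up once ponding outlasts the crop's tolerance,
    # scaled by depth (fully submerged plants fail faster).
    excess = max(0.0, duration_days - tolerance_days)
    depth_factor = min(1.0, depth_m / 0.5)  # assumed 0.5 m saturation depth
    return min(1.0, (excess / tolerance_days) * depth_factor)

# A low-tolerance crop (~4 days) vs. a tolerant one (~10 days),
# both ponded 0.4 m deep for two weeks:
print(damage_fraction(0.4, 14, 4))    # total loss
print(damage_fraction(0.4, 14, 10))   # partial damage
```

Combined with the AGREA crop type and sowing date, such a function can be evaluated per pixel of the SAR-derived duration and depth maps to produce a spatial loss estimate.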

Wednesday 25 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Using Remote Sensing and Advanced Modeling to Analyze Urban Flood Risks in Grabels, Southern France 

Authors: Emmanuel Brempong, Vanessya Laborie, Teodolina Lopez, Aurélien Mure, Frédéric Pons, Minh Tan Vu
Affiliations: Cerema, Satellite applications Team, Cerema, Risks, Water and Sea Technical directorate, RHITME Research Team, Cerema, Natural Risks Departement
Urban flooding is a critical issue exacerbated by increasingly intense precipitation patterns associated with global warming. Among other forcings and parameters, accurate hydrodynamic modeling of flood events may depend on parameters such as the curve number (CN), which quantifies surface runoff potential, and the Strickler coefficient, which represents surface roughness and its impact on water flow. However, traditional methods for deriving these parameters, such as ground surveys, are expensive and often impractical for large-scale applications. This study investigates the use of monoscopic very high spatial resolution (VHR) satellite images from Pléiades (50 cm/pixel) and Pléiades Neo (30 cm/pixel) to derive these parameters in a scalable and cost-effective manner. The research focuses on Grabels, a peri-urban area northwest of Montpellier, France, that has experienced rapid urbanization since the 1960s. Grabels is highly vulnerable to flash floods triggered by intense Mediterranean rains. Notable flood events, including the catastrophic 2014 flash flood, highlight the need for advanced flood risk assessment and management tools. Using five Pléiades and three Pléiades Neo images over a period from 2012 to 2024, the study derives critical urban topographic and physical characteristics relevant to flood risk. Pléiades imagery classification is achieved through the analysis of shadows and textures to determine land cover (OCS) and produce eight land cover maps. The curve number and Strickler coefficient are derived from each land cover map. Afterwards, these parameters are integrated into the 2D numerical model Telemac 2D, which solves the shallow-water equations by computing water levels and flow velocities under various flood scenarios at each node of a finite element mesh.
Additionally, the study examines the temporal sequence of imagery to assess changes in flood behavior and evolving flood risks in response to urbanization and meteorological events. The results include an accurate classification of land use and urban fabrics, providing a reliable basis for hydrodynamic modeling. Flood risk maps and simulations will be provided to assess the ability of monoscopic images from Pléiades and Pléiades Neo to support local flood management strategies, offering actionable insights into high-risk areas. This new methodology provides scalable, data-driven tools applicable to other urban areas facing similar challenges. Indeed, when refined topography and OCS databases are not available, analyzing flood risk using archived satellite data can provide a deeper understanding of the interactions between urbanization and flood dynamics. By leveraging high-resolution satellite imagery and integrating advanced modeling techniques, this study delivers an innovative and efficient approach for urban flood risk assessment. Keywords: Urban flooding, rainfall, Telemac2D, Grabels city, satellite imagery, Pléiades, Pléiades Neo, surface impermeability, curve number, Strickler coefficient.
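The curve number mentioned above enters runoff estimation through the standard SCS-CN relation (USDA NRCS), with retention S in mm and the conventional initial abstraction Ia = 0.2 S; the CN values below are illustrative, not those derived from the Pléiades land cover maps:

```python
# Standard SCS-CN direct runoff relation (USDA NRCS). The CN values used
# in the example are illustrative assumptions, not values derived from
# the study's land cover maps.
def scs_runoff_mm(rainfall_mm, cn):
    """Direct runoff depth Q (mm) for storm rainfall P (mm) and curve
    number CN, with initial abstraction Ia = 0.2 S."""
    s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
    ia = 0.2 * s                    # initial abstraction (mm)
    if rainfall_mm <= ia:
        return 0.0                  # all rainfall retained, no runoff
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Same 100 mm Mediterranean storm over semi-natural vs. urbanized cover:
print(scs_runoff_mm(100.0, 65))   # pervious surfaces retain most rain
print(scs_runoff_mm(100.0, 92))   # dense urban fabric -> far more runoff
```

This is why urbanization-driven increases in CN, which the land cover classification tracks over time, translate directly into larger simulated flood volumes.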

Wednesday 25 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: A method for continuous satellite-based flood mapping using on-demand SAR data

Authors: Luca Pulvirenti, Dr Giuseppe Squicciarino, Dr Luca Cenci, Dr Francesca Trevisiol, Dr Giorgio Pasquali, Dr Annalaura Di Federico, Dr Silvia Puca
Affiliations: CIMA Research Foundation, e-GEOS, Italian Civil Protection Department
When large floods occur, emergency managers usually require not only the near real-time (NRT) availability of a first overview of the inundation extent, but also persistent monitoring of the spatiotemporal evolution of the flood. This request may be fulfilled using on-demand SAR data, which represent the only source of information that allows operators to produce daily high-resolution maps derived from satellite acquisitions performed day and night over different areas, even if clouds are present. However, continuous flood mapping from on-demand SAR data generally requires combining images captured with different acquisition parameters. In turn, this makes data interpretation and processing quite challenging and might require a time-consuming visual analysis activity, which conflicts with the requirement of fast daily delivery of flood maps to end users. This study presents a new methodology designed to perform continuous flood monitoring in near real-time using on-demand SAR data. The core of the proposed methodology is a new flood mapping algorithm based on change detection that can work with data acquired with different imaging geometries. The algorithm has been developed in the framework of the IRIDE program, which will be implemented in Italy under the management of the European Space Agency (ESA) with the support of the Italian Space Agency (ASI). IRIDE will consist of a set of different remote sensing instruments, including SAR. Together with other national (e.g., COSMO-SkyMed) and European space systems, IRIDE will support the Italian Civil Protection Department in the management of natural hazards such as floods. The algorithm is designed to discriminate changes in the observed scene from apparent changes caused by differences in the acquisition parameters of the images.
It maps open water by applying different image processing techniques, such as clustering, histogram equalization, fuzzy logic, and region growing, and implements two electromagnetic models. It performs three main steps in a fully automated manner: 1) identification of candidate objects in a scene to be classified as flooded through a clustering algorithm, whose input data are preliminarily manipulated by applying a cross-normalization that makes pre-flood and flood images comparable; 2) use of fuzzy logic to select a subset of the candidate objects based on a) the predictions of two electromagnetic (EM) models, b) the values of the backscatter from permanent water bodies, and c) the size of the candidate objects; 3) application of a region-growing (RG) algorithm for more accurate mapping of flooded areas. The algorithm is complemented by a post-processing step whose objective is to make the daily flood maps consistent with each other. The methodology also implements a set of guidelines for the tasking of the satellites, the way of proceeding if more than one image per day is needed to cover the AOI(s), the collection of a set of pre-flood images, and the selection, for each flood image, of the most suitable pre-flood image(s) among those included in the set. The methodology was tested on two large floods that hit the Emilia-Romagna region in Northern Italy in 2023 and 2024, using COSMO-SkyMed data of the first and second generations. Benchmark maps were derived from cloud-free optical data, as well as from maps generated by the Rapid Mapping Component of the Copernicus Emergency Management Service through an approach that includes supervised image interpretation carried out by a skilled operator. The results demonstrated that the methodology produces reliable results even when using data acquired with geometries that are non-optimal for flood mapping.
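The region-growing step can be illustrated generically (this is a textbook RG sketch, not the IRIDE implementation): seeds are high-confidence water pixels, and the region expands into 4-neighbours whose backscatter stays below a tolerance threshold:

```python
# Generic region-growing sketch for flood mapping (not the IRIDE
# implementation). Seeds are high-confidence water pixels; the region
# expands into 4-neighbours darker than an assumed backscatter threshold.
from collections import deque
import numpy as np

def region_grow(sigma0_db, seeds, threshold_db=-17.0):
    flood = np.zeros(sigma0_db.shape, dtype=bool)
    q = deque(seeds)
    for r, c in seeds:
        flood[r, c] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < sigma0_db.shape[0] and 0 <= cc < sigma0_db.shape[1]
                    and not flood[rr, cc] and sigma0_db[rr, cc] < threshold_db):
                flood[rr, cc] = True
                q.append((rr, cc))
    return flood

img = np.full((5, 5), -8.0)    # bright land background (dB)
img[1:4, 1:4] = -20.0          # dark, water-like patch
print(region_grow(img, [(2, 2)]).sum())   # -> 9 flooded pixels
```

Growing from a confident seed rather than thresholding the whole image keeps isolated dark pixels (e.g. radar shadow) out of the flood mask.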

Wednesday 25 June 14:00 - 15:30 (Room 0.94/0.95)

Session: C.02.05 MAGIC – Preparing for the ESA-NASA satellite gravity constellation - PART 1

ESA and NASA have discussed collaboration on satellite gravity missions for more than a decade, both by engaging the community (e.g. GOCE Workshop, Munich 2011) and in dedicated discussions under the JPPG (ESWG, 2015). The two agencies recently co-signed a Joint Statement of Intent for cooperation on the MAss Change and Geosciences International Constellation (MAGIC). MAGIC is the jointly developed concept for collaboration on future satellite gravity observations that addresses the needs of the international user community and stakeholders, consisting of the staggered deployment of two satellite pairs: GRACE-C (NASA and DLR) and the Next-Generation Gravity Mission (NGGM, ESA). MAGIC addresses the two agencies' interest in joining efforts to build a constellation of two satellite pairs, polar and inclined, to drastically increase the temporal and spatial resolution of gravity observations and to provide, among other things, information essential for water management, for monitoring and forecasting of hydrological extremes (i.e. droughts and floods), and for monitoring global tipping points. The constellation is founded on a joint reference document co-signed in October 2020, the MAGIC Mission Requirements Document (MRD), which consolidated the needs of the international user community as described in the IUGG 2015 publication (Pail et al., 2015) and the NASA 2017 Decadal Survey (NASEM). Furthermore, the first ESA-NASA MAGIC Science and Applications Workshop in Assisi, Italy, in November 2023, refined the needs of the international user community and stressed the importance of demonstrating operational capabilities of mass change monitoring from space. The two agencies have implemented a joint science and applications plan on MAGIC to coordinate relevant activities. This session invites contributions related to MAGIC Level-2 and Level-3 product development, as well as Level-1 product development of the individual GRACE-C and NGGM missions.
Constellation data processing, algorithm development and mission performance aspects shall be addressed. Contributions related to the generation of higher-level (e.g. Level-4) products involving data assimilation, data fusion and AI techniques are also encouraged. This session shall also address the novel science and operational services and applications enabled by MAGIC and user perspectives related to synergies between the Science Data Systems of ESA and NASA.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: The NGGM/MAGIC End-to-End Mission Performance Evaluation Study

Authors: Prof. Dr. Roland Pail, Ilias Daras
Affiliations: Technical University of Munich, European project consortium, European Space Agency
The Next-Generation Gravity Mission (NGGM) is planned to constitute the inclined pair of the double-pair Bender-type constellation MAGIC. In a Phase A science support study, which complemented two parallel Phase A industry system studies, the key mission parameters were studied in detail by means of numerous numerical closed-loop simulations, in order to identify the optimum set-up regarding science return, technical feasibility, and cost. The Phase A activities, which also included a thorough analysis of the science impact of the mission for the main application fields, were successfully concluded in November 2023. In the current Phase B1, on the science side, an NGGM/MAGIC end-to-end mission simulator from product Level 0 to Level 3 will be established and implemented in the frame of the Mission Performance Evaluation Study, representing a first prototype of an operational NGGM and MAGIC ground processor, built by a European consortium of nine partner institutions. Since one of the main goals of MAGIC is to demonstrate pre-operational capabilities regarding significant contributions to operational service applications, a fast-track processing line generating gravity and mass change products with short latencies of only a few days will be considered in addition to the standard processing line. In this presentation, we will give an overview of the current status of development of the MPEF, focusing on the architectural design and product flow, validation and test procedures, as well as selected specific aspects and features of the simulator. Further, the next steps and future plans towards an operational ground processing system will be outlined.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Enhancement of Gravity Field Retrieval Performance Through Incorporation of Background Model Uncertainty Information

Authors: Petro Abrykosov, Prof. Dr. Roland Pail
Affiliations: Technical University Of Munich
De-aliasing based on geophysical background models (BMs) allows for a reduction of high-frequency, high-amplitude signal components in gravity field retrieval. These components are primarily related to the ocean tides (OT) and non-tidal variations within the atmosphere and the oceans (AO), and would otherwise superimpose the target signals stemming e.g. from the hydrosphere or the cryosphere. Recent studies, covering both simulations and real data processing, demonstrated that incorporating prior knowledge of the uncertainty distribution within BMs has the potential to greatly improve the quality of the derived gravity product. In this contribution, we aim to transfer the knowledge gained in GRACE/GRACE-FO processing scenarios and assess the potential for de-aliasing error reduction in the context of the NGGM and MAGIC constellations. Here, we derive a priori uncertainties of OT and AO background models as error variance-covariance matrices (VCMs) and consistently propagate them through the processing chain. While the error behaviour of an OT model is stationary by nature, substantial temporal variations exist in the case of AO. Consequently, for the latter we evaluate the performance impact of utilizing a stationary error VCM versus tailored time-variable ones. The performance evaluation is carried out for various retrieval periods, including standard monthly as well as sub-weekly ones. Finally, since the error VCMs may quickly become overly demanding with regard to computational resources, we also discuss the impact of various spatial resolutions and de-correlation periods on the solution quality.
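The covariance propagation idea in this abstract can be sketched in miniature: if a background-model error VCM is added to the instrument noise covariance, the retrieval becomes a generalized least-squares problem. The following Python sketch is an illustrative assumption, not the authors' processing chain; the matrix names and the simple GLS form are chosen for clarity only.

```python
# Minimal sketch (NOT the authors' processor): fold a background-model (BM)
# error variance-covariance matrix into a least-squares retrieval by adding
# it to the observation noise covariance, then solving the weighted normal
# equations.
import numpy as np

def gls_estimate(A, y, sigma_noise, sigma_bm):
    """Generalized least squares with combined error covariance.

    A           : design matrix (observations x parameters)
    y           : observation vector
    sigma_noise : instrument noise covariance
    sigma_bm    : a priori background-model error covariance (VCM)
    """
    sigma = sigma_noise + sigma_bm      # combined error covariance
    w = np.linalg.inv(sigma)            # weight matrix
    n = A.T @ w @ A                     # normal matrix
    return np.linalg.solve(n, A.T @ w @ y)
```

Observations with large assumed BM uncertainty are automatically down-weighted, which is the qualitative effect the abstract describes for stationary versus time-variable VCMs.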
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Studying the Impact of the NGGM and MAGIC future satellite gravity missions for scientific applications and operational services

Authors: Julia Pfeffer, Benoît Meyssignac, Michaël Ablain, Rory Bingham, Alejandro Blazquez, Marie Bouih, Carla Braitenberg, Luca Brocca, Sebastian Cruz Bacca, Henryk Dobslaw, Ehsan Forootan, Helena Gerdener, Volker Klemann, Jürgen Kusche, Gilles Larnicol, Muhammad Usman Liaqat, Francesco Leopardi, Roland Pail, Isabelle Panet, Ingo Sasgen, Marius Schlaak, Maike Schumacher, Linus Shihora, Anne Springer, Dimitrios Tsoulis, Georgios Vergos, Bert Wouters, Fan Yang, Ilias Daras
Affiliations: Magellium, CNES, University of Bristol, University of Trieste, Research Institute for Geo-Hydrological Protection, National Research Council, Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, GFZ German Research Centre for Geosciences, University of Aalborg, University of Bonn, Technical University of Munich, Institut de Physique du Globe de Paris, Aristotle University of Thessaloniki, Delft University of Technology, European Space Agency, Earth & Mission Science Division, ESTEC
The SING project aims to evaluate the added value of the NGGM and MAGIC missions for scientific applications and operational services in hydrology, ocean sciences, glaciology, climate sciences, solid earth sciences, and geodesy. Using a closed-loop simulator with a comprehensive description of instrumental and dealiasing errors, synthetic observations of the gravity field will be generated to assess the observability of mass changes occurring in the atmosphere, ocean, hydrosphere, cryosphere, and solid earth for different mission configurations, including GRACE-C (single polar pair), NGGM (single inclined pair), and MAGIC (double pair). Significant benefits are expected from the higher spatial and temporal resolution of future satellite gravity missions, particularly in flood prediction, drought monitoring, and water resource management. These aspects will be explored through dedicated hydrological data assimilation and data fusion studies. Additionally, the improved spatial resolution will be critical for detecting changes in the Atlantic Meridional Overturning Circulation (AMOC) in the context of climate change. The SING project will also assess the impact of the NGGM and MAGIC missions on monitoring ice mass changes in mountain glaciers and ice sheets. The greater spatial and temporal resolution of future satellite gravimetry missions may also improve the recovery of the Ocean Heat Content and Earth Energy Imbalance due to better consistency with other ocean monitoring systems such as satellite altimetry. The added value of a second satellite pair will be estimated for monitoring co-seismic and post-seismic processes, as well as improving predictive modeling for geohazards, with a focus on providing timely forecasts that are socially relevant. Finally, the project will investigate the impact of the NGGM and MAGIC missions on geodesy by enhancing gravity and geoid models in support of the IHRF realization, its time evolution, and precise orbit determination. 
The results of the SING project will provide further validation for the NGGM and MAGIC mission concepts and will contribute to preparing the integration of NGGM and MAGIC products into operational services.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: NGGM Project Activities Within MAGIC

Authors: Luca Massotti, Michael Francois, Philip Willemsen, Ilias Daras, Gunther
Affiliations: ESA
NGGM objectives and context: The Next Generation Gravity Mission (NGGM) is ESA's next Mission of Opportunity and part of the ESA-NASA Mass Change and Geosciences International Constellation (MAGIC). MAGIC is the jointly developed concept for collaboration on a future satellite gravity constellation that addresses the needs of the international user community and stakeholders, and consists of the staggered deployment of two satellite pairs: GRACE-Continuity (GRACE-C, a NASA and DLR mission) and the Next-Generation Gravity Mission (NGGM, an ESA mission). GRACE-C will extend the successful US-DE cooperation of GRACE and GRACE-FO, with contributions from DLR, GFZ, and NASA/JPL, leveraging key European technologies. NGGM will advance scientific innovation and enable the demonstration of operational capabilities for both NGGM and MAGIC across a panoply of applications critical for climate change monitoring and Earth sciences, e.g. hydrology, climate change, cryosphere, oceanography, solid Earth, and geodesy. NGGM will fly in a low-altitude (~400 km), controlled, inclined orbit at 70 degrees, with the objective to deliver consistent, quality-assured data products with high spatial (~100 km) and temporal (sub-weekly) resolution, as well as reduced latency compared to the present state of the art. A Joint Statement of Intent was signed in June 2024 by ESA and NASA, formally confirming the intent to collaborate on MAGIC.
NGGM mission concept: The principal NGGM observable is the variation of the range and range rate between two satellites, measured by a laser interferometer; additionally, ultra-precise accelerometers measure the non-gravitational accelerations, which are used to correct the gravity signal in the data processing. The two satellites will fly in a loose formation at an average separation of 220 km.
The NGGM design requires an ultra-stable satellite platform with low thermo-elastic expansion and low magnetic noise to enable the required accurate accelerometer measurements. The design is further based on three critical key technologies:

  • very accurate, ultra-sensitive accelerometers of MicroSTAR class, capitalizing on the GOCE GRADIO accelerometers, provided by ONERA (FR);
  • ultra-precise inter-satellite distance measurement with laser interferometry (Laser Tracking Instrument, capitalizing on the GRACE-FO LRI), provided by STI (DE);
  • a very-low-thrust electric propulsion system to compensate the variable atmospheric drag in LEO and thereby lower the noise level sensed by the accelerometers, achieved via Gridded Ion Thrusters capitalizing on the T5 thrusters embarked on GOCE, provided by Mars Space Ltd (UK).

Additionally, linear cold gas thrusters from Leonardo (IT) are used for fine pointing control and potential cross-track/nadir drag compensation.
NGGM project status: The presentation focuses on the status and outlook of the ongoing Phase B1 system activities. An overview of the ongoing pre-development activities is provided together with the mission objectives and scientific thematic fields.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Global Gravity and Mass Change Observations: from GRACE Follow-On to the Upcoming GRACE-Continuity Mission

Authors: Felix Landerer, Mike Gross, Sebastian Fischer, Frank Flechtner, Christoph Dahle, David Wiese, Himanshu Save, Krzysztof Snopek, Frank Webb
Affiliations: Jet Propulsion Laboratory, California Institute of Technology, German Space Agency at DLR, Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, University of Texas at Austin, Center for Space Research
GRACE-Continuity (GRACE-C) is a joint NASA (US) & DLR (Germany) mission, scheduled for launch in Dec-2028. It will continue the essential climate data record of Earth system Mass Change initiated by the original GRACE mission (2002-2017) and currently enabled by the GRACE-FO mission (2018 – present). The GRACE-series provides foundational observations of monthly to decadal global mass changes and transports in the Earth system derived from temporal variations in the Earth’s gravity field, now spanning over 23 years. These data are a core component of GCOS’s Essential Climate Variables and have become indispensable for climate-related studies that enable process understanding of the evolving global water cycle, including ocean dynamics, polar ice mass changes, and near-surface and global ground water changes. Mass Change data enable the monitoring of flood potentials as well as droughts by tracking freshwater availability, groundwater and aquifer volume changes. These observations and integrated tools built around them have become indispensable for 'water intelligence' to inform data-driven water management in a changing climate. Like its predecessor missions, GRACE-C will fly in a polar orbit at an initial altitude of 500 km. With the primary science goal of maintaining gap-free continuity in the essential record of mass change data, the mission design, implementation and international partnerships considerably leverage heritage elements to ensure mission success with a fast-paced schedule and minimal cost impact. One departure from heritage is that the primary ranging instrument on GRACE-C will be a higher precision laser ranging interferometer, capitalizing on the successful demonstration of this technology on GRACE-FO. In this presentation, we will present a brief update on GRACE-FO in the context of satellite operations, data processing, science/applications highlights and prospects for achieving gap-free continuity between GRACE-FO and GRACE-C. 
We will further provide a detailed update on the status of the GRACE-C project design, analyses, hardware build and measurement system performance assessments during its current Final Design and Fabrication phase. We will conclude with an outlook on expected science performance, as well as plans for low- and high-level data products and algorithms to enable optimal science and application mission return.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: MAGIC Status Overview and Scientific Outlook

Authors: Dr. Ilias Daras, Dr. Lucia Tsaoussi
Affiliations: ESA - ESTEC, NASA HQ
ESA and NASA have discussed collaboration on satellite gravity missions for more than a decade (e.g. GOCE Workshop, Munich 2011). The two Agencies' common interests are captured in the ESA-NASA Earth Sciences MoU signed in June 2022 between ESA's Director General and NASA's Administrator, where gravity / mass change was specifically highlighted as an area for cooperation. On June 27, 2024, NASA and ESA signed a Joint Statement of Intent for cooperation on the MAss Change and Geosciences International Constellation (MAGIC). MAGIC is the jointly developed concept for collaboration on a future satellite gravity constellation that addresses the needs of the international user community and stakeholders. The constellation consists of the staggered deployment of two satellite pairs: GRACE-Continuity (a NASA and DLR mission) and the Next-Generation Gravity Mission (NGGM, an ESA mission). This presentation outlines the framework for scientific collaboration, focusing on the utilization of the expected MAGIC data sets, and describes the jointly established science and applications plan between the two agencies. An overview of the activities included in the MAGIC scientific roadmap will be provided. The scientific objectives of MAGIC will be introduced, emphasizing the key research questions it addresses across various Earth science thematic fields and applications.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall M1/M2)

Session: C.03.09 The Copernicus services - PART 1

The Copernicus programme is headed by the EC, which is also responsible for setting requirements and managing the environmental services, while ESA coordinates the space infrastructure.
The wealth of Copernicus satellite observations, coming from the Sentinel and Contributing missions, and of in situ data, coming from ground-based, sea-borne or air-borne monitoring systems, feed a set of thematic services in different domains: marine, land, atmosphere, emergency, climate change and security.
The services convert all the space and non-space data into actionable information used by public authorities, international organisations and users for environment protection, management of urban areas, regional and local planning, agriculture, forestry, fisheries, health, transport, climate change, sustainable development, civil protection and tourism, among others.

Presentations and Speakers:


The management of the Copernicus services by the EC:


  • Hugo Zunker – EC, DG DEFIS

The Copernicus Land Monitoring Service:


  • Andreas Brink – JRC
  • Usue Donezar – EEA

The Copernicus Climate Change Service:


  • Carlo Buontempo – ECMWF

The Copernicus Atmosphere Monitoring Service:


  • Laurence Rouil – ECMWF

EU Research in support of the Copernicus services:


  • Iulia Simion – HaDEA
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall F2)

Session: C.02.18 BIOMASS Mission Insight: Open cloud computing platform, processors, tools and data

In this session we will provide insight into the BIOMASS mission and present the BIOMASS open cloud computing platform MAAP, the open-source approach to the BIOMASS processors, BIOMASS products, and supporting Cal/Val datasets from the GEO-TREES network. To help scientists in the above-ground biomass community and to support the science behind the upcoming BIOMASS mission, ESA has developed the Mission Algorithm and Analysis Platform (MAAP). The BIOMASS MAAP is a cloud computing platform that will include not only data (satellite, airborne and in situ data and products), but also computing capabilities and sets of tools and algorithms developed to support this specific field of research. To best ensure that users can collaborate across the platform and access needed resources, the MAAP requires all data, algorithms, and software to conform to open access and open-source policies. As an example of best collaborative and open-source practices, the Level-2 and Level-3 processors and the data quality tools that are part of the BIOMASS Processing Suite (BPS) will be made openly available within the MAAP. In addition, open reference data needed for Calibration and Validation will be provided as part of the GEO-TREES database. This aims at arriving faster at stable algorithms through a user-cooperative approach, allowing people outside the core science team to contribute to the product improvement cycle. This is very well adapted to innovative missions like BIOMASS and will accelerate scientific research. This concept is the baseline for all future ESA missions.
This session will present how the BIOMASS MAAP makes connections between data, algorithms, the BIOMASS Processing Suite (BPS) and the GEO-TREES database. Different use cases will be presented (visualizing data, processing data, improving the algorithm, validating data, etc.) and discussions will take place for experts and participants to exchange ideas and perspectives.

Speakers:


  • Clement Albinet - ESA
  • Saskia Brose – ESA
  • Tamara Queune - ESA
  • Benedetta Antonielli - SERCO
  • Neha Hunka - ESA
  • Marina Longoni – University of Toulouse
  • Stefano Marra - CGI
  • Alessandro Marin - CGI
  • Roberto Alacevich - ESA
  • Cristiano Lopes - ESA
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.49/0.50)

Session: B.02.07 Social-ecological land systems: practical approaches for improved mapping

This session invites presentations on integrating remote sensing products with data from other diverse sources to address the complex spatial patterns and dynamics of social-ecological land systems in the context of land-use and climate change. The focus will be on exploring practical applications, analyzing the complementary nature of various data sources, identifying gaps, and discussing ways to enhance transdisciplinary data synthesis efforts.
The study of socio-ecological systems is crucial for sustainability, adaptation and territorial planning. Capturing the dynamics of these systems requires a comprehensive understanding of both biophysical and socioeconomic dimensions, generally through detailed conceptual models. However, obtaining the appropriate input data is often a major challenge. The increasing availability of open access satellite imagery and a wide array of specialized products, coupled with improved accessibility to analysis-ready datasets and the development of powerful computational analytical tools, presents new opportunities to develop a better understanding of social-ecological systems and their dynamics for improved decision-making. Transdisciplinary approaches greatly enhance the process, from model design and data source identification to finding creative solutions for integrating and adapting data created for different purposes.
One practical application of this approach is social-ecological mapping based on identifying land-use typologies. Land classifications that consider human-nature interactions provide a valuable framework for studying land systems, understanding processes and contextualizing sustainable development initiatives. These methods analyze spatial patterns of characteristics along a multidimensional continuum to identify areas with similar profiles, often combining data-driven and expert-based approaches.
This session will highlight example initiatives and studies that pioneer data integration and transdisciplinary approaches for long-term social and ecological monitoring and to examine how different drivers of change impact on diverse regions, from local to global scales.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Drivers and Future Risks of Plant Invasions in Coastal Reclamation Ecosystems Amid Climate Change Pressures in Global Hotspots

Authors: Binyue Kang, Jingya Zhang, Jiren Xu, Brian Barrett, Hongyuan Li
Affiliations: School of Geographical & Earth Sciences, University Of Glasgow, College of Environmental Science and Engineering, Nankai University, School of Social & Environmental Sustainability, University Of Glasgow
Coastal land reclamation and rapid urbanization have profoundly reshaped global coastal ecosystems, creating highly dynamic social-ecological systems increasingly strained by land-use change and climate variability. These transformations pose significant challenges to biodiversity conservation and ecological resilience, with invasive alien plants (IAPs) emerging as a critical threat in ecologically sensitive reclamation zones. Such regions, shaped by dynamic interactions between natural and human-induced factors such as climate variability, sea-level rise, and extensive land reclamation, serve as hotspots for biological invasions. IAPs not only threaten native biodiversity but also disrupt ecosystem functions, amplify ecological instability, and hinder sustainable development. Aligned with Sustainable Development Goal (SDG) 15.8, which advocates for measures to prevent the introduction and mitigate the impacts of invasive alien species, this study addresses the pressing need to understand and forecast invasive plant dynamics. SDG 15.8 also emphasizes prioritizing control or eradication of invasive species, particularly in coastal ecosystems already under significant environmental and anthropogenic pressure. Despite these priorities, limited information on the drivers and impacts of biological invasions hampers effective management and conservation strategies. To address the complexities of social-ecological systems in coastal reclamation cities, this study leverages global biodiversity datasets, including GloNAF (Global Naturalized Alien Flora Database) and GBIF (Global Biodiversity Information Facility), in conjunction with environmental datasets to identify the driving factors behind invasive species richness and forecast their future distribution under changing climatic conditions. Recognizing the integrated nature of social-ecological systems, the study incorporates 26 environmental variables spanning both ecological and socioeconomic dimensions.
These include 19 bioclimatic factors derived from remote-sensing products such as CHELSA (https://chelsa-climate.org), topographic elevation sourced from ETOPO global relief models (https://www.ncei.noaa.gov/products/etopo-global-relief-model), and coastal variables such as sea surface temperature and salinity obtained from bio-ORACLE (https://www.bio-oracle.org/). Socioeconomic indicators like population density and GDP, human-induced factors such as reclamation area, and traffic-related variables including distances to the nearest railway, national highway and expressway, as well as the human footprint, were also integrated. These Earth Observation (EO)-derived datasets provide a comprehensive framework for assessing social-ecological dynamics, enabling a nuanced understanding of how invasive species interact with both biophysical and socioeconomic drivers. Future climate projections, derived from five ensemble models under CMIP6 and three Shared Socio-economic Pathways (SSPs: SSP1-2.6, SSP3-7.0, SSP5-8.5), indicate significant increases in air temperature (up to 4.55°C) and sea water temperature (up to 3.72°C), alongside declines in sea salinity (up to 8.31 PSS) by 2100. These findings emphasize the intricate interplay between climate change, human activities, and ecological processes, highlighting the need for adaptive management strategies tailored to the vulnerabilities of social-ecological systems in coastal reclamation zones. For modeling invasive species distributions, five advanced algorithms were applied: Classification Tree Analysis (CTA), Generalized Linear Models (GLM), Generalized Boosted Regression Models (GBM), Random Forest (RF), and MaxEnt. These models were executed via the "biomod2" package in R, with predictive accuracy evaluated using the Area Under the Curve (AUC) and the True Skill Statistic (TSS).
Models with AUC values greater than 0.8 (indicating good to excellent predictive power) and TSS values above 0.8 were selected for ensemble modelling, ensuring robust and reliable predictions. Data cleaning and validation were conducted using the CoordinateCleaner package to ensure the accuracy and reliability of geospatial records. Our study showed that: (1) by generating a dataset (30,769,412 occurrence points) of 839 IAPs across 126 families, we reveal that Fabaceae (15%), Asteraceae (14.3%), and Poaceae (12.8%) dominate coastal reclamation invasive species composition. Herbs represent 58.7% of IAPs, with shrubs (16.6%) and trees (14.1%) also contributing significantly. Capsule (28.9%) and achene (24.8%) are the most common fruit types, with transient seed banks prevalent in 80.7% of IAPs. Dispersal mechanisms are predominantly animal-mediated (48.8%) and wind-driven (46.7%). The C3 photosynthetic pathway is predominant (93.1%), while C4 (6.3%) and CAM (1.4%) pathways are less frequent. Geographically, IAPs are primarily associated with Eurasia (26.2%), the Americas (26%), and Asia (22.1%). Sexual reproduction (89.6%) is the dominant reproductive strategy, with mixed or asexual modes accounting for 10.4%; (2) through examining the correlation between invasive plant species richness and reclamation project areas across 47 major global reclamation cities, Chinese coastal cities dominate in reclamation scale, with Shanghai (34,978 km², 437 IAPs), Tianjin (32,800 km², 183 IAPs), and Ningbo (32,278 km², 432 IAPs) leading globally. Despite their vast reclamation areas, the IAPs numbers vary significantly, suggesting the influence of urbanization intensity and environmental management practices. Reclamation cities in developed nations, such as Los Angeles (29 km², 2,116 IAPs) and Buenos Aires (29 km², 466 IAPs), exhibit significant IAPs numbers despite limited reclamation extents. 
This pattern likely results from historical biodiversity baselines, conducive climatic conditions, global trade connectivity, and robust monitoring frameworks, which enhance detection and reporting accuracy. (3) using occurrence records of 839 IAPs from 30,769,412 points, we identified hotspots of invasion risk predominantly in southeastern China, including reclaimed coastal areas and major urban centres. Key drivers of IAP richness included population density, GDP, reclamation area, and specific climatic factors such as precipitation in the driest month (BIO14) and annual mean temperature (BIO1). For instance, GDP significantly influenced the top 5% of invasion hotspots, while population density, reclamation area, and precipitation variability were critical for broader invasion patterns. (4) Human-driven factors, including land reclamation, urbanization, and human footprint, account for an additional 50% of the spatial variance in invasive species distribution. Land reclamation projects, often characterized by large-scale sediment deposition and habitat modification, create nutrient-rich, disturbed landscapes ideal for opportunistic species. Projections under future climate scenarios (SSP1-2.6, SSP3-7.0, and SSP5-8.5) revealed substantial shifts, with invasive species richness expanding in 40.55–66.97% of regions, particularly under the high-forcing pathway (SSP5-8.5). Climate change is expected to significantly drive the expansion of potential IAP distribution areas under both optimistic and pessimistic scenarios. Our study highlights the global-scale implications of invasive alien plant species (IAP) proliferation within coastal reclamation ecosystems, emphasizing their dynamic interactions within social-ecological systems under escalating climate change pressures. 
By integrating global biodiversity datasets from social intelligence, high-resolution Earth observation (EO) environmental variables, and advanced machine learning modeling approaches, we identified key drivers of invasive alien plant species (IAP) richness. These drivers include land reclamation activities, urbanization intensity, and critical climatic factors such as precipitation variability, alongside socioeconomic influences like population density and GDP. These insights were derived from a comprehensive analysis spanning 47 global coastal reclamation cities, encompassing diverse climatic zones and socioeconomic contexts. Coastal reclamation projects, especially in southeastern China, were identified as hotspots for invasion risk, with future projections under SSP1-2.6, SSP3-7.0, and SSP5-8.5 climate scenarios revealing substantial expansions in potential IAP distributions globally. For instance, under high-forcing scenarios, invasion risk is expected to increase by up to 66.97% in vulnerable regions, driven by warming temperatures, changing precipitation patterns, and anthropogenic pressures. The findings underscore the vulnerability of coastal ecosystems worldwide to dual pressures from human-driven reclamation and climate change. These results highlight the need for transdisciplinary, global-scale approaches that integrate climate projections into spatial planning, advance real-time EO-based monitoring systems, and foster stakeholder engagement across regions. Such adaptive management practices are critical for mitigating ecological and socio-economic risks, ensuring resilience, and supporting the sustainable development of coastal environments under changing climatic conditions. By forecasting the potential distribution and abundance of invasive plant species under various climate change scenarios, our study provides actionable insights into the drivers of biological invasions and their future dynamics.
These actionable insights can support sustainable development and long-term conservation of coastal environments, in alignment with global sustainability goals.
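The model-screening rule described in the abstract, keeping a candidate only if both AUC and TSS exceed 0.8 before ensembling, can be sketched as follows. This is a hypothetical Python illustration, not the study's own workflow (which used the R package biomod2); the function names, the pure-Python metric implementations, and the toy model outputs are all assumptions for self-containment.

```python
# Sketch of ensemble-member screening by AUC and TSS (hypothetical; the
# study used biomod2 in R). Keep a model only if AUC > 0.8 and TSS > 0.8.

def auc_score(y_true, scores):
    """Area under the ROC curve via pairwise comparison (ties count 0.5)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def true_skill_statistic(y_true, y_pred):
    """TSS = sensitivity + specificity - 1, from binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn) + tn / (tn + fp) - 1.0

def select_models(models, y_true, auc_min=0.8, tss_min=0.8):
    """Return the names of models passing both thresholds.

    `models` maps a model name to (probabilities, binary predictions)
    evaluated on a common validation set.
    """
    return [name for name, (probs, preds) in models.items()
            if auc_score(y_true, probs) > auc_min
            and true_skill_statistic(y_true, preds) > tss_min]
```

Only the survivors of this screen would feed the ensemble prediction, which is the robustness rationale the abstract gives for the double 0.8 threshold.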
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Bridging Satellites and Society: Mapping Land Use and Ecosystem Service Dynamics in Ethiopia’s Changing Drylands

Authors: Henry Thompson
Affiliations: King's College London
Semi-arid regions across Africa are experiencing rapid environmental and social transformations. A key challenge is maintaining the adequate and reliable flow of essential ecosystem services (ES) to support human needs, as emphasized in the UN Sustainable Development Goals (SDGs). While satellite remote sensing provides valuable large-scale insights into ES, integrating stakeholder input is crucial to ensure the relevance and accuracy of indicators in specific local contexts. This study explores the intersection of land-use and land-cover (LULC) change and ecosystem service provision over two decades in South Omo Valley, a rapidly transforming, semi-arid landscape in Ethiopia. By combining modern remote sensing techniques with bottom-up, participatory approaches, it highlights interactions between environmental and socioeconomic dimensions of environmental change, offering insights relevant to sustainable land management and conservation. Landsat-derived LULC changes were mapped using object-based image analysis (OBIA) and random forest classification and validated in the field. The results revealed significant declines in grassland cover and increases in cropland, driven by the expansion of both subsistence and commercial agriculture. Grassland loss, often transitioning to shrub-dominated landscapes, was widespread, while cropland expansion was concentrated in areas associated with large-scale infrastructure development. To assess the impact of these changes on ES provision, participatory mapping and ranking exercises were conducted, incorporating diverse community perspectives. These methods revealed differences in how various LULC types contribute to ES provision, with natural vegetation types—such as shrublands, grasslands, and riverine forests—consistently valued for their wide range of services. In contrast, landscapes dominated by intensive agriculture or urban development were perceived as providing limited ES. 
The analysis further identified how social factors, including cultural and livelihood differences, shape perceptions of ES value. Intra-ethnic and gender-specific priorities emerged, and spatial variability also played a role, as proximity to key resources and emerging pressures influenced community-level ES valuations. Mapping changes in ES provision over time revealed uneven impacts. Some land transitions, such as shrub encroachment, were perceived as beneficial in certain areas while representing losses in others. These findings underscore the importance of integrating local perspectives into assessments of environmental change, as top-down approaches alone may miss critical nuances. By combining remote sensing with bottom-up methods, this study demonstrates a scalable approach to understanding the social-ecological implications of LULC change. It offers valuable insights for designing more inclusive, community-informed conservation and development strategies.
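The classification step described above (random forest on object-based Landsat features) can be sketched as follows; the feature values, class labels, and accuracy figure are synthetic placeholders, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-object features (e.g., mean Landsat band values per segment)
X = rng.random((500, 6))
# Hypothetical labels: 0=grassland, 1=cropland, 2=shrubland, 3=urban
y = rng.integers(0, 4, 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# The study's field validation corresponds to held-out accuracy here
acc = accuracy_score(y_test, clf.predict(X_test))
print("overall accuracy:", round(acc, 3))
```

In practice the features would come from an OBIA segmentation of multi-date Landsat imagery, and the labels from field validation plots.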
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Agricultural surveying through street view imagery and deep learning

Authors: Sherrie Wang
Affiliations: Massachusetts Institute Of Technology
Creating accurate crop type maps remains challenging in many parts of the world. In lower-income countries, there has historically been a lack of large-scale ground surveys, while in recent years, declining farmer response rates in high-income regions have also jeopardized the reliability of official surveys. Meanwhile, alternative sources of agricultural data, such as street-level imagery and crowdsourced data, have become more prevalent. These data sources offer valuable insights into crops at scale but are characterized by relatively high noise levels. Extracting agricultural information efficiently from such data is challenging due to the global diversity of agricultural landscapes and noise in the data collection process. We present an automated system that leverages deep learning, crowdsourcing, street-level imagery, and remote sensing data to create crop type ground references and national-scale maps, with an initial focus on India. We map major crops including rice, wheat, cotton, sugarcane, and maize at 10-meter resolution for the kharif and rabi growing seasons, and we compare area estimates from our maps with estimates from government surveys.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Monitoring Land-Use Dynamics in the Tropical Andes of Venezuela: Between Prosperity and Crisis (2000-2024)

Authors: Julia Kate Smith, Dr. Luis Daniel Llambí, Dr. Lina Sarmiento, Dr. María Andreina Salas-Bourgoin
Affiliations: Instituto de Ciencias Ambientales y Ecológicas (ICAE), Universidad de Los Andes, Consorcio para el Desarrollo Sostenible de la Ecoregión Andina, Instituto de Geografía y Conservación de Recursos Naturales, Universidad de Los Andes
Socio-ecological systems need to be seen in their historical perspective in order to clarify the trajectories and causal mechanisms that have shaped their current structure and functioning. Our study area, situated in the tropical Andes of Venezuela, has experienced an extended history of land use and anthropic transformation that to this day leaves an imprint on the landscape and its dynamics. The area is the high valley of the Chama River (2700-4640 m asl), situated in the páramo ecosystem, bordering the upper agricultural frontier, and includes parts of two large national parks within its borders. It constitutes a nodal observatory of the Andean Network of Socio-ecological Observatories (ROSA), which includes several long-term monitoring initiatives for understanding the ecological and social impacts of land-use and climate change. Here, our aim is to monitor land-use dynamics in this area between 2000 and 2024, considering the driving factors that explain these changes and taking into account the profound socio-economic crisis that Venezuela has experienced over the last decade. Land use was mapped for the years 2000, 2010 and 2024 across our study area (30,267 ha), using panoramic photographs in combination with aerial photographs for 2000, a SPOT image for 2010 and open-source imagery for 2024. Additionally, questionnaires and expert interviews were conducted to understand the functioning of this agricultural system and the driving forces of land-use dynamics within the agricultural belt. For the historical context, land use was analysed using bibliographic sources complemented by interviews with elderly residents. In current times, the area is occupied by small- and medium-scale farms dedicated to the intensive production of potatoes and other cool-temperature vegetables for the national market.
As most of the study area lies in a semi-arid basin, agriculture depends on irrigation systems, which are organized into local self-administered associations. Another characteristic is the use of large amounts of organic fertilisers, such as poultry manure and rice hulls, in addition to chemical fertilisers and pesticides. In pre-colonial times potatoes were one of the main crops in this high Andean valley and, in contrast to other areas of the Andes, large herbivorous animals such as camelids were absent. Agriculture was probably based on long fallow systems to restore soil fertility, as was common in most tropical regions. With the arrival of the Spanish, cattle, horses and sheep were introduced along with wheat cultivation, which soon became the region's main export commodity. After centuries of cultivating this crop on steep slopes combined with cattle grazing, the soil eventually became depleted and degraded through erosion, leading to a prolonged economic depression and poverty. From the 1960s, thanks to the wealth derived from the petroleum industry, governmental programs were implemented to prevent further degradation of the abandoned areas and to improve agriculture through modernization. This brought greater prosperity, lifting the area from one of the poorest municipalities to one of the richest in the country. In the year 2000, 25.2% of the agricultural area was under intensive agriculture, 24.5% under long fallow systems (5 to 10 years) and 34% was abandoned, degraded slopes from wheat cultivation. In addition, more heterogeneous areas were found where fallow and intensive sectors were mixed (9.6%). By 2010 the intensive agriculture area and the mixed sectors had increased to 33.7% and 12% respectively, while fallow areas declined to 18.3% and abandoned areas to 29.3%.
This trend continued to 2024, when intensive agriculture occupied 41.1% of the agricultural area and mixed areas accounted for 14%, while fallow agriculture declined to 13.7% and abandoned areas to 24.3%. In absolute terms, the intensive and mixed agriculture areas rose from 2827 ha in 2000 to 4653 ha in 2024. This steady increase over the last 24 years is only marginally due to advances of the agricultural frontier into the natural páramo ecosystem: between 2000 and 2024, merely 215 ha of páramo were converted into agricultural land. The rise in intensive agriculture is mostly due to the reoccupation of the abandoned slopes and the intensification of fallow agriculture. Both processes have been possible thanks to the large amounts of organic fertilisation applied; these organic fertilisers also help to avoid soil degradation in the current intensive system. The improvement and expansion of the irrigation systems has also contributed to this increase. To understand the described land-use dynamics, several driving forces have to be considered. The Venezuelan socio-economic situation changed substantially between the two intervals of the study. In overall terms, the years between 2000 and 2010 brought benefits for agriculture in Venezuela and for our study area, mainly linked to a significant rise in international oil prices. Agricultural credit was available, and agricultural supplies were both available and more affordable. Gasoline was subsidised, which facilitated the use of poultry manure, brought from chicken farms in the lowlands hundreds of kilometres away, and helped the sale of agricultural products in distant cities. Potato imports were low, which improved sales prices. Migration from Colombia helped maintain the agricultural labour force. The shift to low-volume irrigation made water use more efficient.
All these factors contributed to the increase in cultivated area and the intensification of agriculture. In the following decade, Venezuela experienced a series of social and economic crises leading to hyperinflation, the disappearance of essential products (including agricultural inputs), mass migration and, in the last few years, severe shortages of gasoline, especially in our area. Additionally, the costs of labour, agricultural inputs and gasoline have risen notably. Given these developments, it could be expected that agriculture in the area would collapse, or at least shrink markedly. Nevertheless, the data from the land-use study did not show this tendency. The farmers seem to have been able to mitigate the crisis, finding various strategies to face the difficult situation and to increase efficiency in the use of limiting factors and resources. Among these strategies, we found that agrochemicals are now used in exact doses and only when absolutely necessary. There was also an important increase in labour-force efficiency: while workers were once employed full-time by the farms, they now work in independent groups and are paid for specific tasks, such as sowing fields or harvesting. This task-based payment makes people work more efficiently. Farmers have also exchanged their 4x4 vehicles for motorcycles to save gasoline. These strategies have given this agricultural system a surprising resilience in the face of Venezuela's catastrophic crisis and prevented the area from entering a new cycle of decline similar to that of the wheat era.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Integrating Earth Observation Data and Participatory Mapping to Characterize Social-Ecological Land Systems in the Andean Range

Authors: Ms. Lucía Zarbá, Lic. Maria Virginia Gonzalez, PhD Francisco Cuesta, PhD Wanderley Ferreira, PhD Karin Petra Wallem, PhD Carolina Tovar, PhD Armando Rodriguez-Montellano, PhD Julieta Carilla, PhD Ricardo Grau, PhD María Piquer-Rodríguez
Affiliations: Instituto de Investigaciones Territoriales y Tecnológicas para la Producción del Hábitat (INTEPH) Universidad Nacional de Tucumán (UNT) - Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Instituto de Ecología Regional (IER), UNT - CONICET, Global Solutions & Research, Universidad San Francisco de Quito, Centro de Investigación en Ciencias Exactas e Ingeniería, Carrera de Ingeniería Ambiental, Universidad Católica Boliviana, Royal Botanic Gardens, Centro de Investigación en Tecnologías para la Sociedad (C+) de la Universidad del Desarrollo, Fundación Amigos de la Naturaleza & Centro de Investigación en Agua, Energía y Sostenibilidad (CINAES) de la Universidad Católica Boliviana "San Pablo", Institute of Geographical Sciences, Freie Universität Berlin
As humans consolidate their role as the main driver of land change and stability on the planet, ecoregions and biomes, based on biophysical characteristics alone, are falling short in typifying the functioning of land systems in the Anthropocene. Land classifications that consider human-nature interactions are far more advantageous for framing land system studies, understanding processes and contextualizing sustainable development initiatives. One strategy to generate such classifications is Social-Ecological Land Systems (SELS) mapping. The SELS approach aims to identify spatial units with similar social and biophysical patterns (SELS typologies) through a hybrid methodology that combines data-driven machine learning techniques with bottom-up, participatory, knowledge-driven evaluations by an interdisciplinary group of regional experts. The core of the method is building a sound and comprehensive database to feed unsupervised classification algorithms. However, the whole process relies on the experts' collective knowledge and criteria to guide decisions on the analysis (including input variables and data sources) as well as the accuracy evaluation and interpretation of the output maps. We present the ongoing SELS mapping of the Andes region in South America, an initiative requested by the Red de Observatorios Socio-ecológicos de los Andes (ROSA) to generate a spatial framework to better understand the geographical variability of complex social-ecological systems across the area. ROSA is a bottom-up monitoring initiative created in 2023 that aims to articulate efforts from independent long-term monitoring initiatives, mainly on hydroclimate, ecology, and land use. Currently, the network encompasses more than 50 monitoring initiatives distributed across all Andean countries. The integration and systematization enabled by the creation of the network strengthens the potential for a more comprehensive understanding of broader dynamics and drivers of change at a larger scale.
The development of a SELS map would help ROSA organize a better-integrated network of observatories, provide a spatial framework to contextualize observations, allow associating data with a larger area, and help identify under-represented areas. To develop the Andean SELS map, the area was divided into a grid of hexagonal cells (n=5201, 25 km diagonal) and characterized through a diverse set of variables. The input variables were decided together with the group of experts (members of ROSA) through four rounds of interactions (two online workshops and one in-person workshop, plus an online survey), in addition to specific thematic queries. We built a dataset of 32 variables representing complementary system dimensions: 6 physical, 4 biological and management, 7 land cover, 6 economic, 6 demographic, and 3 cultural. We identified groups of hexagons with similar profiles by employing clustering algorithms, and obtained a series of maps depicting their spatial distribution depending on the number of clusters considered. We are currently in the process of evaluating those alternative output maps with the group of specialists; the maps presented here are therefore preliminary and may differ from the final version. We highlight the power of using a wide range of thematic variables, including some of mainly local relevance, to identify typologies that are meaningful for the local reality rather than generic categories. However, selecting key relevant variables and building such a comprehensive database under strict data requirements is a major challenge for the approach, which ultimately influences the quality of the results. The main data gaps we found concerned socio-environmental conflicts, natural ecosystem degradation, governance, local commodity exports, culture, land tenure, level of capitalization, and agribusiness infrastructure.
This work contributes not only to advancing understanding of the Andes, but also offers a methodological blueprint for the broader fields of land system and sustainability science. By integrating remote sensing data with participatory approaches, this project exemplifies a novel transdisciplinary framework that bridges diverse knowledge systems. We aim to inspire the scientific community to explore mechanisms that enhance the integration of top-down and bottom-up data sources, fostering collaborations to address critical data gaps in social-ecological mapping.
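The unsupervised classification at the core of the SELS method can be illustrated with a minimal sketch, using randomly generated stand-in data for the 5201 hexagonal cells and 32 variables; the real analysis may use different algorithms and preprocessing:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Stand-in for the cells-by-variables table (physical, biological,
# land-cover, economic, demographic, and cultural dimensions)
cells = rng.random((5201, 32))

# Variables sit on very different scales, so standardize first
X = StandardScaler().fit_transform(cells)

# The study compares alternative output maps for several cluster counts
for k in (6, 8, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    print(k, "clusters -> sizes:", np.bincount(labels))
```

Each cluster label would then be mapped back to its hexagon and evaluated with the group of regional experts.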
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Unveiling Historical Dynamics of Social-Ecological Deforestation Frontiers in the Peruvian Amazon (1990-2023)

Authors: Karla Vergara Rodriguez, Maria Piquer Rodriguez, Matthias Baumann
Affiliations: Department of Earth Science, Freie Universität Berlin, Geography Department, Humboldt-Universität zu Berlin
Deforestation is one of the most significant threats to Tropical Moist Forests (TMF), especially in the Amazon region, profoundly impacting the livelihoods and well-being of local Amazonian people. These changes uniquely affect the Peruvian Amazon, the second largest area of TMF in the Amazon and a hotspot of bio-cultural diversity. Despite its ecological and cultural significance, research on social-ecological systems in deforestation frontiers in the Peruvian Amazon remains limited. This gap reveals a critical lack of understanding of deforestation's historical patterns and trends and the complex social-ecological interactions that emerge in deforestation frontiers, underscoring the need for comprehensive research to address these challenges. In this context, archetypes serve as valuable analytical tools, allowing for the identification and comprehension of complex deforestation patterns in a multidimensional approach. Our study introduces an innovative approach to investigating the dynamics of recurrent and emerging temporal deforestation patterns in the Peruvian Amazon from 1990 to 2023. We employ archetype analysis (AA) as a methodological framework, constructing Deforestation Frontier Archetypes (DFA) and Social-Ecological Frontier Archetypes (SEFA) to effectively detect forest loss patterns and evaluate the critical roles played by various drivers over time. We unravel the intricate interplay among these factors by applying nesting typologies. Our analysis identified five distinct deforestation frontier archetypes, each defined by unique configurations of constructed metrics (i.e., baseline condition, speed, fragmentation). We pinpointed the initial emergence of deforestation frontiers, determining the year each archetype appeared and its prevalence among all archetypes within specific periods. We named the archetypes based on the prominent combination of metrics and theories of forest loss. 
Additionally, we created maps to illustrate these archetypes' temporal and regional distribution. The most prevalent archetype, suspended frontiers, accounted for 17.8% of the deforestation frontier and was distributed throughout the Amazon's northern fluvial networks (primary and secondary rivers). These frontiers were associated with drivers like oil activities (pipelines), forestry concessions, and permanent production forests. Slowed frontiers (9.8%) and gradual frontiers (16.2%) were found along the roads and fluvial networks and are mostly linked to smallholder farming near expanding roads. However, despite their similar drivers and locations, these frontier archetypes exhibit different characteristics regarding frontier expansion. Fragmented frontiers (4.9%) were primarily located in the Andes-Amazon transition zone, with a significant presence of smallholder farming and road expansion. Rampant frontiers (4.0%) were concentrated in particular regions of the Peruvian Amazon, such as the north of San Martin, the continuous border area between Huánuco and Ucayali, and Madre de Dios along the "Interoceanica" Highway. This archetype is strongly associated with large-scale agriculture (including palm oil, cacao, and rice), cattle ranching, and mining operations. Lastly, emergent frontiers (13.9%) are dispersed across frontier areas, with notable concentrations in Ucayali, Madre de Dios, and Loreto. These frontiers were driven by multiple factors and actors, such as smallholder farming, mining operations, cattle ranching, commercial logging, and road expansion, depending on the timing of the frontier's emergence. Our findings also revealed that multiple actors and factors contribute to deforestation patterns over time, showing that these patterns are not static but evolve with new actors or factors within the same geographic area. Furthermore, interactions between specific and different actors over time add complexity to the overall dynamics of deforestation. 
Overall, our research emphasizes the significance of understanding spatiotemporal relationships between actors and drivers in analyzing deforestation patterns. This highlights the need to integrate social-ecological interactions and recognize the diverse roles that different actors and activities play in contributing to forest loss patterns. Additionally, we provide a replicable methodological approach and detailed code for exploring the dynamics of deforestation frontiers through DFA and SEFA in other Amazon or TMF regions, demonstrating broader applicability. By improving the understanding of the historical dynamics of deforestation frontiers from a social-ecological perspective, we aim to inform policy decisions that promote sustainable development by supporting the conservation of vital Amazon ecosystems while simultaneously improving the livelihoods of the communities that depend on them.
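The archetype-construction idea can be sketched roughly: compute simple frontier metrics (baseline condition, speed, and a variability proxy standing in for fragmentation) from annual forest-cover series, then cluster them into five groups. The metrics and data below are illustrative, not the study's actual definitions:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic annual forest-cover fractions for 1000 frontier cells, 1990-2023
years = np.arange(1990, 2024)
base = rng.random((1000, 1))
rate = 0.01 * rng.random((1000, 1))
cover = np.clip(base - rate * (years - 1990), 0, 1)

# Toy stand-ins for the paper's constructed metrics
baseline = cover[:, 0]                              # initial forest cover
speed = -np.diff(cover, axis=1).mean(axis=1)        # mean annual loss rate
variability = cover.std(axis=1)                     # rough fragmentation proxy

metrics = np.column_stack([baseline, speed, variability])
archetype = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(metrics)
print("cells per archetype:", np.bincount(archetype))
```

The five clusters echo the five frontier archetypes the study names (suspended, slowed, gradual, fragmented, rampant), though the real analysis derives its metrics from observed forest-loss maps and driver layers.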
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.96/0.97)

Session: B.03.02 Monitoring and measuring climate adaptation using earth observation - PART 1

The Intergovernmental Panel on Climate Change (IPCC) describes adaptation as the process of adjustment to actual or expected climate and its effects, in order to moderate harm or exploit beneficial opportunities. With over 1 degree Celsius of global warming already experienced, adapting to current and future climate change is now a far greater focus for all levels of policymaking than ever before. Yet the IPCC Sixth Assessment Report (AR6) concluded that, despite progress, adaptation gaps exist and will continue to grow at current rates of implementation. This has also been confirmed by the UNEP-WASP Adaptation Gap Report 2023.

The decision text from the UNFCCC Conference of the Parties (COP28) held in December 2023 in Dubai, UAE, affirms a framework for a Global Goal for Adaptation (GGA), of which monitoring, evaluation and learning is one of the four core targets. The text states that by 2030, all Parties have designed, established and operationalized a system for monitoring, evaluation and learning for their national adaptation efforts and have built the required institutional capacity to fully implement the system.

In the COP28 Decision on the GGA, Parties are urged to enhance adaptation action in the following areas: water supply and safe potable water, food and agricultural production, health, ecosystems and biodiversity, infrastructure and settlements, poverty and livelihoods, and protection of cultural heritage. With adaptation actions to be guided by science, including indicators, metrics and targets, among others.
Satellite Earth Observation (EO) has revolutionized systematic observation and played a pivotal role in understanding climate change to date, yet its potential to support adaptation implementation, monitoring and evaluation is only beginning to be explored. This session will highlight current adaptation actions using EO across the focus areas of the GGA listed above.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Using satellites to monitor and measure the impacts of agricultural adaptations around the world

Authors: David Lobell, Stefania Di Tommaso, Daniel Kluger, Richard Lee, Yuchi
Affiliations: Stanford University
Evidence remains thin about how farmers around the world are changing practices in response to climate change, and which practices have proven effective at dealing with climate extremes such as drought or heavy rains. Satellites offer the ability to track several practices as well as crop outcomes across many fields, providing an opportunity to learn about which adaptations are most effective. Here we focus on three practices that are often promoted as improving the resilience or productivity of a system in the face of climate change and that are amenable to measurement by satellites: reduced tillage with residue retention, enhanced crop rotation, and switching to double-crop systems. First, we map reduced tillage adoption throughout the United States and compare yields on fields with and without tillage using causal forest analysis. We find that no-till fields have significantly higher yields, especially in water-limited conditions. To assess whether farmers are adopting no-till partly as an adaptation strategy, we also evaluate whether the rate of adoption of no-till over the past two decades has corresponded to the pattern of climate drying or wetting throughout the country. Second, we map the rotation patterns of the main crops in four different countries (U.S., Canada, France, and China) using remote sensing datasets on crop types in recent years. We then use causal forests to assess the effect on yields of moving from continuous cropping to a 2-crop rotation, and from a 2-crop rotation to more diverse rotations. We find strong yield benefits of moving to a 2-crop rotation, and a general increase in these benefits for wetter conditions. Rotation benefits do not generally increase for warmer temperatures, except for soybean yields. Furthermore, we find little evidence that increasing rotation complexity beyond a two-crop sequence confers benefits, except in some wheat systems. Third, we map the adoption of double cropping in the U.S.
using historical crop type maps, finding that double cropping has indeed increased in recent years in response to warming. However, we find that the production gains from this switch have been small relative to other impacts of climate change, including yield impacts as well as expansion of corn and soybean area in northern latitudes. Overall, these studies demonstrate that satellite data can provide valuable insights into the adaptation process in agriculture, especially the long-term records of the Landsat and Sentinel sensors. Further opportunities may exist as the satellite record continues to lengthen, as the range of practices and cultivar traits that can be measured by satellite continue to grow, and as causal inference methods continue to improve.
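The abstract's yield comparisons rely on causal forests; as a rough illustration of heterogeneous-effect estimation, here is a simplified T-learner stand-in built from two random forests on fully synthetic field data (this is not the authors' method or data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

n = 2000
# Hypothetical field covariates: [rainfall_anomaly, soil_quality]
X = rng.normal(size=(n, 2))
till = rng.integers(0, 2, n)  # 1 = no-till, 0 = conventional tillage

# Synthetic yields: no-till helps most when the rainfall anomaly is negative (dry)
yields = 5 + X[:, 1] + till * (0.3 - 0.4 * X[:, 0]) + rng.normal(0, 0.1, n)

# T-learner: one model per treatment arm; prediction difference = effect estimate
m1 = RandomForestRegressor(n_estimators=200, random_state=7).fit(
    X[till == 1], yields[till == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=7).fit(
    X[till == 0], yields[till == 0])
effect = m1.predict(X) - m0.predict(X)

dry = X[:, 0] < 0
print("mean no-till effect (dry fields):", effect[dry].mean())
print("mean no-till effect (wet fields):", effect[~dry].mean())
```

A proper causal forest (e.g., generalized random forests) adds honest splitting and valid confidence intervals, but the pattern recovered here, larger no-till benefits under water-limited conditions, mirrors the abstract's finding by construction.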
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Iterative pest model predictions: A framework for integrating earth observations and monitoring to track changing drivers of variation

Authors: Sara Emery
Affiliations: Cornell University
Despite centuries of innovations in pest management, the discovery of a diverse array of potent synthetic insecticides in the 20th century, and a continued upward trajectory in insecticide use over the past several decades, herbivorous arthropod pests still destroy 14-18% of potential crop yield annually, constraining global food production. Farmers have also been noticing changes in the emergence and population patterns of pests in their fields. These shifts can be due to climate change stressors and cascading effects from agricultural intensification on the invasion, population patterns and emergence timing of pest and beneficial species. Predicting when and where pest outbreaks will occur is crucial, both for farmers to maintain high yields and to minimize prophylactic insecticide use. Understanding what resources pests depend on, and how they are provided by different habitats at different times of the year, is key to predicting how pest populations are likely to change across landscapes and under climate change. To date, however, many studies characterize landscapes, and thus define habitat availability, using broad land-use categories (e.g., natural versus agricultural). More nuanced landscape information, characterizing the functional attributes of different ecosystems, is needed to better understand the drivers of pest and predator dynamics in agricultural systems. Through long-term, frequent, and spatially continuous measurements of climate, vegetation, and other biophysical variables, satellite-based Earth Observations offer great opportunities to improve the spatial, temporal, and functional characterization of landscapes, beyond land use and land cover. Satellite observation imagery can directly detect vegetation productivity, which has been shown to correlate positively with arthropod densities. Ecoinformatics is the application of big-data methods to preexisting data.
Pest emergence timing, infestation rates, outbreaks and pesticide applications are highly stochastic across time and space, and underutilized large monitoring data sets capture effects of climate change and agricultural intensification that smaller studies miss. Datasets like these can be powerful tools to understand variation in pest abundance and shifts in insect emergence timing. Capitalizing on data gathered by professional pest consultants, pest control advisors and growers allows us to build larger datasets that compensate for any reduction in measurement precision. I integrate novel data streams of satellite-based remote sensing, local and landscape intensification metrics in commercial fields, and insect monitoring data. Quantifying how the location, productivity, diversity and extent of vegetation supports pest populations and what functional attributes of agricultural management contribute to outbreaks could provide a major leap forward for integrated pest management. I am working with extension agents to build an API to reimagine pest prediction models. Due to the microevolutionary and plastic shifts in pest emergence, population and invasion patterns, it is imperative to build dynamic, iterative models that update their predictions as new data arrive. As a species-specific example for this reimagined framework of pest prediction modeling, I am using the invasive spotted lanternfly, estimating present and future species distribution models for it and one of its primary hosts, the tree of heaven. Satellite-based remote sensing imagery, including Sentinel-2 RGB and MODIS net primary productivity among other products, has been combined with 10,000 observations of ground-truthed host tree location data to build statewide density maps of tree of heaven, as well as of spotted lanternfly using 10,000 insect observations.
These data have been combined with weather data to create a statewide spotted lanternfly risk map for NY vineyards. Over 50 climatic and geographic factors contributed to high Random Forest model accuracy (AUC >99%) and strong predictive performance (TSS >97%) for both species. Temperature and vapor pressure deficit positively influenced habitat suitability for tree of heaven and spotted lanternfly, supporting establishment in warmer, drier areas. Conversely, higher relative humidity and net primary productivity were negatively correlated, reducing suitability for these species in the Finger Lakes and Lake Erie regions. Vineyard risk assessments considered both the proximity of vineyard locations and habitat suitability for spotted lanternfly and tree of heaven. Habitat suitability and vineyard risk were also modeled under two future climate change scenarios representing either near-future conditions reflective of our current trajectory (RCP 7.0) or a worsening emissions scenario (RCP 8.5). The overall risk of spotted lanternfly to New York vineyards is anticipated to decrease under both scenarios. However, the grape production region in Western New York is expected to experience increased vulnerability overall, though at lower intensity. These findings challenge the common assumption that climate change universally supports the spread of invasive species. Our results emphasize the need for sub-regional, rather than uniform, mitigation strategies. These species distribution models are integrated to estimate risk at site-specific vineyard locations across New York State. Since not all vineyard locations were available for incorporation into model development, the API will allow users to input new vineyard locations and reassess model risk. Additionally, the model is being developed to iterate yearly using the most recent year of satellite imagery and weather data.
This initial effort establishes an iterative and dynamic framework for predicting farmer risk from an invasive species, and my lab is using it as a foundation to develop new phenology models that are also iterative, incorporating new monitoring data to update model predictions in a spatially explicit manner and over time. My lab is working with extension agents and software developers to create easy-to-use web-based products (similar to iNaturalist) for growers, extension agents and pest control advisors to input future pest observation data that will feed an iterative model and create site-specific pest predictions. Integral to the viability of this effort to develop dynamic, iterative pest prediction models is a broader investment in systems that integrate multiple data streams from NASA and ESA with agricultural pest monitoring data, so that landscapes can be characterized at a finer scale. Multi-government efforts are needed to invest in the FAIR (findable, accessible, interoperable, reusable) principles of data repository management and sharing. If this is done well, these data streams can be integrated in novel ways to promote the resilience and sustainability of working agricultural landscapes.
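The AUC and TSS figures quoted above summarize the skill of the Random Forest species distribution models. As a minimal illustration of how the True Skill Statistic is computed from a binary confusion matrix (the counts below are hypothetical, not the study's data):

```python
# True Skill Statistic (TSS) for a binary species distribution model:
# TSS = sensitivity + specificity - 1, ranging from -1 to +1.

def tss(tp: int, fp: int, fn: int, tn: int) -> float:
    """TSS from binary confusion-matrix counts (presence = positive class)."""
    sensitivity = tp / (tp + fn)  # proportion of presences correctly predicted
    specificity = tn / (tn + fp)  # proportion of absences correctly predicted
    return sensitivity + specificity - 1.0

# Hypothetical counts for illustration only:
print(round(tss(tp=95, fp=2, fn=5, tn=98), 2))  # 0.93
```

Unlike overall accuracy, TSS is insensitive to the prevalence of presences versus absences, which is why it is often reported alongside AUC for distribution models.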
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: From pixels to trends - how earth observation data are turned into climate change indicators

Authors: Kerstin Stelzer, Jorrit Scholze, Dr Katja Kuhwald, Prof Dr Natascha Oppelt
Affiliations: Brockmann Consult GmbH, Kiel University - Department of Geography
Climate change indicators are intended to support climate policy decisions and public discussions. They are important for monitoring and evaluating targets and for communicating changes in the phenomenon under investigation to society. Impact indicators show how climate change affects certain environmental phenomena. Response indicators show how society is adapting to climate change. In Germany, the Federal Environment Agency coordinates the German Strategy for Adaptation to Climate Change (Deutsche Anpassungsstrategie an den Klimawandel - DAS). This framework comprises around 100 impact and response indicators in the six areas of health, water, soil, infrastructure, economy and spatial planning/disaster prevention. However, the availability of representative environmental data is often a challenge. Lakes, for example, are considered early indicators of climate change, but nationwide data for consistent and long time series are rare. Remote sensing data can support the monitoring of climate indicators, as they provide continuous and reproducible information about the environment. The DASIF project demonstrates that remote sensing-based algorithms are suitable for calculating climate change indicators nationwide. Three of the indicators developed in the DASIF project were included in the DAS Monitoring Report 2023 (https://www.umweltbundesamt.de/en/monitoring-on-das/introduction#process-of-adaptation-to-climate-change-in-germany): the two impact indicators "cyanobacteria in bathing waters" and "spring algal blooms in lakes", and the response indicator "green roofs in large cities". Two additional indicators were developed but not published in the DAS Monitoring Report: lake water temperature and lake ice cover. For the presence of cyanobacteria, the Maximum Peak Height algorithm was applied using data from ENVISAT MERIS and Sentinel-3 OLCI.
The approach identifies the number of days with cyanobacteria presence during both the entire season (March to October) and the summer months (June to September). This information is aggregated at the lake level and normalized against the number of valid satellite image acquisitions, ensuring consistency in the data. The beginning of the spring phytoplankton bloom was determined by analyzing chlorophyll-a concentrations calculated from multiple sensors, including Sentinel-3 OLCI, ENVISAT MERIS, and Sentinel-2 MSI. The date on which the chlorophyll-a concentration first exceeded the 70th percentile during spring was extracted from the time series, providing a clear indicator of bloom onset. Despite their importance, there is no comprehensive inventory of green roofs at the national level in Germany, with data often limited to specific cities like Dresden. Using satellite imagery from Sentinel-2 and PlanetScope, DASIF demonstrated the feasibility of identifying green roofs with high accuracy, offering a cost-effective and replicable method to support resilient urban planning. The project also established a framework for evaluating data quality, ensuring the reliability of the indicators for long-term trend analysis. This framework incorporates spatial and temporal measures of data coverage, enabling assessments of how representative the values are for inclusion in broader trend evaluations. These quality measures were critical for integrating the indicators into an operational service for the German Federal Environment Agency, using the national CODE-DE cloud-processing infrastructure. This system supports the retrospective calculation of the indicators and facilitates further analysis of national and regional trends. In addition, an intuitive dashboard (https://dasif-dashboard.brockmann-consult.de/) was developed to make the indicator results accessible to the public. The ESA CCI Lakes project is developing essential climate variables as a global lake product. 
In phase 2 of the CCI Lakes project, building on the DASIF project, a use case entitled “Aggregated climate indicators for the global lakes” is being developed, which applies the DASIF approaches on a global scale. This demonstrates the usefulness of climate indicators for assessing climate impacts on groups of lakes worldwide. The use case builds on the indicators already addressed in the DASIF project, which are visualized in a public dashboard (https://lakescci.climate-indicators.brockmann-consult.de/). The global use case illustrates how climate change indicators can reveal patterns across diverse lake systems, further emphasizing their importance in understanding and mitigating climate impacts on aquatic ecosystems.
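The spring-bloom indicator described above extracts the first date on which chlorophyll-a exceeds the 70th percentile of the spring time series. A minimal sketch of that thresholding step, using a simple nearest-rank percentile and hypothetical values (illustrative only, not DASIF data or code):

```python
import math
from datetime import date

def bloom_onset(series, quantile=0.7):
    """Return (onset_date, threshold): the first date whose chlorophyll-a
    value exceeds the nearest-rank `quantile` of the whole spring series.
    `series` is a list of (date, concentration) pairs sorted by date."""
    values = sorted(v for _, v in series)
    threshold = values[math.ceil(quantile * len(values)) - 1]
    for day, value in series:
        if value > threshold:
            return day, threshold
    return None, threshold  # no exceedance observed

# Hypothetical spring chlorophyll-a time series (mg/m3):
spring = [(date(2023, 3, 5), 2.1), (date(2023, 3, 15), 2.4),
          (date(2023, 3, 25), 3.0), (date(2023, 4, 4), 3.2),
          (date(2023, 4, 14), 5.8), (date(2023, 4, 24), 9.5),
          (date(2023, 5, 4), 14.0), (date(2023, 5, 14), 12.2)]
print(bloom_onset(spring))  # (datetime.date(2023, 5, 4), 9.5)
```

In an operational setting the series would of course come from multi-sensor chlorophyll-a retrievals with gaps masked out, and the returned date would be normalized against the number of valid acquisitions, as the abstract describes.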
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Regional Intercrop Mapping in Malawi With High Resolution Data Using Sparse Labels: A First Step in Enabling Reliable Impact Studies of Sustainable Practice Adoption on Productivity

Authors: Shabarinath Nair, Urs Schulthess, Manav Gupta, Thomas Lampert, Blake Munshell, Christina Justice, Yuval Sadeh, Inbal Becker-Reshef
Affiliations: University Of Strasbourg, Department of Geographical Sciences, University of Maryland, MD, NASA Harvest, Department of Geography, University of Monash, CGIAR
Intercropping is an important practice among those recommended for climate-smart agriculture. It is expected to improve soil fertility and reduce the environmental impacts of excessive fertilization while improving productivity in the long run. However, these outcomes have not been conclusively validated and need to be studied at regional scales under varied growing conditions over multiple years. The first step in this study is to produce regional crop type maps that include intercrops. Currently, no such map exists for Malawi, one of the countries with the highest adoption of intercropping; studies indicate that up to 84% of the maize crop grown in Malawi is intercropped. Previous studies analysing the impact of intercropping on productivity can be grouped into two types: field-level and regional-level. The results from both types of studies have been inconclusive and contradictory on several occasions. Field-level trials are usually performed in a controlled environment with a standard set of practices that may not fully represent effects at scale. Regional-level studies, on the other hand, have relied on questionable approaches, including scaling impact studies based only on monoculture maps. As a first step, we propose a method to map intercrops in Malawi (and, more generally, crops with sparse label data) at a regional level using a few-shot deep learning approach. The majority of supervised machine learning techniques require extensive training data, while unsupervised techniques are unlikely to work in countries like Malawi given the high signal similarity between different crops and the smaller field sizes. Few-shot learning based on distance metrics learns an embedding in which similar instances are close together and dissimilar instances are far apart, rather than predicting labels directly. We can then start identifying new crops using only a few labelled examples.
In our work we identify maize, maize-pigeonpea, cowpea, cowpea-pigeonpea, groundnut and soybean. The crop classes were highly imbalanced, and the minority classes are highly similar to the majority classes. Our model is trained using a triplet loss function, which we compared to the more standard contrastive loss function. Using triplet loss improved performance: it was challenging for the contrastive loss to maintain the intra-class distances across all batches of training data, while the additional anchor point in the triplet loss estimation helped overcome this challenge. We used bi-weekly Planet data for the 2023-2024 season for the analysis and achieved an F1-score for maize of 80% and for maize-pigeonpea of 95%, on 67 and 105 samples respectively, with an overall model F1-score of 83%. Through this work we demonstrate that, using high-resolution imagery and algorithms that can handle imbalanced data and sparse labels, a reliable map containing intercrops can be developed. We are currently generating such maps for multiple years, which will be followed by unbiased area estimates. The maps will then be used to estimate the productivity of both monoculture and intercrop species. We believe this work can help us begin to understand the true impacts of intercropping at the regional scale.
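The triplet loss mentioned above encourages an embedding in which samples of the same crop class sit closer to an anchor than samples of any other class, by at least a margin. A minimal pure-Python sketch of the loss on toy 2-D embeddings (illustrative only, not the study's model):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Zero when the negative (different class) is at least `margin`
    farther from the anchor than the positive (same class);
    otherwise penalizes the margin violation."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

# Toy embeddings: positive = same crop class, negative = different class.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0]))  # 0.0 (margin satisfied)
print(triplet_loss([0.0, 0.0], [1.0, 0.0], [1.5, 0.0]))  # 0.5 (violation)
```

The extra anchor reference point is what distinguishes this from a contrastive loss, which compares only pairs; as the abstract notes, that relative comparison makes it easier to keep intra-class distances consistent across training batches.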
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Earth Observations for Climate Adaptation: Tracking Progress Towards the Global Goal on Adaptation Through Satellite-Derived Indicators

Authors: Dr Sarah Connors, Rochelle Schneider, Johanna Nalau, Michelle Hawkins, Sofia Ferdini, Ying Wang, Michael Rast, Kristin Aunan, Jean-Philippe Aurambout, Mark Dowell, Claire Dafau, Caroline Gevaert, Matti Goldberg, Aaron Golden, Andrew Kruczkiewicz, Thelma Krug, Timo Leiter, Tatiana Loboda, Cromwel Lukorito, Antonio Moreno Rodenas, Naledzani Mudau, Brian O’Connor, Ana Oliveira, Louis Reymondin, Cynthia Rosenzweig, Apolline Saucy, Chris Trisos, Ambrosio Yobánolo del Real
Affiliations: European Space Agency, Griffith University, NASA, WASP: World Adaptation Science Programme, ISSI: International Space Science Institute, CICERO, EEA: European Environment Agency, European Commission's Joint Research Centre (JRC), CLS: Collecte Localisation Satellites, The World Bank, University of Twente, Woodwell Climate, University of Galway, Columbia University, GCOS: Global Climate Observing System, LSCE, Grantham Research Institute on Climate Change and the Environment, University of Maryland, University of Nairobi, IPCC: Intergovernmental Panel on Climate Change, Deltares, SANSA: South African National Space Agency, UNCCD: United Nations Convention to Combat Desertification, +ATLANTIC, Alliance of Bioversity International and CIAT, University of Bern, African Climate & Development Initiative, Technology Executive Committee UNFCCC
As climate change impacts are unfolding at greater speed, frequency and intensity, climate adaptation has become a necessity worldwide. Identifying effective pathways for climate adaptation requires not only generating viable solutions but also the ability to monitor and evaluate climate adaptation progress across different scales and dimensions. A robust assessment strategy in turn requires a suite of meaningful indicators, metrics, and corresponding datasets. In this presentation, which results from the June 2024 forum 'Using Earth Observations Systems to Improve Climate Adaptation Policy and Action' in Bern, Switzerland, we explore the potential of Earth Observation (EO) data in supporting the tracking of the targets under the Global Goal on Adaptation (GGA) and the development of associated indicators. This work focuses on four broad themes that connect closely with GGA targets: agriculture (which also integrated aspects of water), biodiversity, extremes (which integrated aspects of water, infrastructure, and poverty) and health. We examine how EO can contribute to all stages of the adaptation process, from hazard mapping and risk assessment to monitoring and evaluation of adaptation measures. We highlight EO's strengths in providing objective, repeatable, and globally consistent data, while also acknowledging challenges such as data disaggregation, integration with socio-economic factors, and the need for long-term, robust baselines. Drawing parallels with the Sustainable Development Goals (SDGs), we caution against missing opportunities to leverage EO data effectively, given its potential to provide near-global accessibility and coverage for the indicators derived from it.
This presentation will include some key recommendations on how to incorporate EO data and expertise in the GGA indicator selection process, suggest how to develop standardised and operationalized EO-based adaptation indicators, and list priority research gaps to improve EO's efficacy in supporting adaptation actions.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Space-borne green roof monitoring for supporting climate resilient settlement planning

Authors: Patric Brandt, Saskia Foerster, Petra van Rueth
Affiliations: Umweltbundesamt
Green roofs in urban areas provide a range of ecosystem services such as water storage, climate regulation, carbon sequestration, improved air quality, recreational space and habitats for animal and plant species. The greening of roof tops is considered an adaptation measure against climate change impacts such as heat waves, mitigating heat island effects, and contributes positively to biodiversity in urban areas. Green roofs are vital to the sponge city, an integrated urban planning concept that enables cities to adapt to increasing extreme weather events such as heatwaves and heavy rainfall by supporting the regulation of water cycles through blue-green infrastructures. As part of the blue-green infrastructure in settlements, green roofs may help retain rainwater and improve runoff control, enabling water to be stored more effectively and reducing the risk of flooding caused by heavy rainfall events. Hence, monitoring the extent and spatial distribution of greened roof area over time is key to informing decision-making on related adaptation policies, including financing mechanisms, and helps ensure their effectiveness. The existing monitoring framework of climate change impacts and adaptation at national level in Germany (DAS Monitoring System), coordinated by the German Environment Agency (UBA), does include an indicator on aggregated green roof area based on Sentinel-2 imagery. However, the indicator, as part of the most recent DAS monitoring report, is limited to larger roof areas (>400 m²) in 15 selected cities, estimating related green roof coverage for the year 2020 only. Therefore, this study serves the need to further develop the DAS green roof indicator in order i) to include smaller roof surfaces, ii) to increase spatial coverage, potentially comprising all settlements in Germany, and iii) to estimate the share of green roofs over multiple years.
These improvements would enable the monitoring of green roofs in Germany based on unbiased satellite-based time-series data. Moreover, from a methodological perspective, this study aims at testing whether high-resolution imagery from earth observation satellites combined with machine learning (ML) may result in robust green roof estimates that can serve related monitoring at municipal level. Accurate multi-annual green roof estimates at municipal level are a crucial input for evaluating and tailoring incentives provided through funding programmes as part of climate-resilient urban planning. An ML classification approach is applied leveraging gradient boosted trees (XGBoost) for detecting green roof tops at pixel level. The ML classifier is trained and validated capitalizing on available in-situ reference datasets obtained from municipal green roof surveys, enabling model comparisons using common evaluation metrics for classification such as area under curve (AUC) and F1-score. Multispectral satellite imagery from Planet Labs is fed into the models, featuring a spatial resolution of about 5 m and hence a higher spatial resolution than Sentinel-2 imagery. More specifically, Planet Basemaps are used, representing analysis-ready data, that is, cloud-free, bottom-of-atmosphere, wall-to-wall image mosaics. The final model is employed to detect green roof surfaces based on Level-of-Detail 2 (LoD2) building data for the whole of Germany over multiple years. The modelling outputs are spatially and temporally explicit geodata, indicating the share of green roof area per administrative unit (e.g. districts). The study aims at providing obtained results as open data through standardized Open Geospatial Consortium (OGC) services such as Web Map Service (WMS) and Web Feature Service (WFS).
This poster introduces the conceptual modelling workflow, validation procedure, and resulting data products and discusses wider implications for monitoring climate change adaptation. Given the remote sensing-driven modelling approach followed, insights gained in this study are potentially transferable to climate change monitoring frameworks in other countries, thus, informing international climate adaptation strategies and monitoring frameworks.
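Model comparison in this workflow rests on standard classification metrics such as AUC and F1-score. As a minimal sketch of the pixel-level F1 computation on hypothetical green-roof labels (illustrative only; the study itself trains XGBoost against municipal survey data):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 for a binary green-roof / non-green-roof pixel classification."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)  # how many predicted green roofs are real
    recall = tp / (tp + fn)     # how many real green roofs were found
    return 2 * precision * recall / (precision + recall)

# Hypothetical pixel labels (1 = green roof, 0 = other surface):
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(round(f1_score(y_true, y_pred), 2))  # 0.75
```

F1 is a sensible headline metric here because green roofs are a small minority of all roof pixels, so plain accuracy would look high even for a model that misses most of them.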
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.34)

Session: A.05.14 What makes a climate variable essential?

The need to understand how climate is changing has never been greater, and fundamentally we cannot understand what we do not observe. This session will describe the origin and evolution of the set of Essential Climate Variables (ECVs) managed by the Global Climate Observing System (GCOS) programme. The ECVs constitute the minimum set of observations required to systematically observe the Earth’s changing climate across three domains: the ocean, the land and the atmosphere.

ECVs have facilitated the implementation of the observing system through a user-driven process, guiding investment decisions and mobilizing climate observing communities. The first set of Essential Climate Variables was developed by GCOS in the late 1990s, and since then the list has grown to the current set of 55 ECVs.

After 25 years, GCOS has started a process aimed at rationalizing the ECV list. This contribution will present the main outcomes of this rationalization process: (1) establishment of a formal governance process to adopt new ECVs; (2) revised definitions for ECVs and ECV quantities; (3) a proposal for an updated set of ECVs, including changes to enhance consistency and to consider ECVs that span multiple domains.

The session will provide an opportunity to interact with ECV users, discuss how the rationalization process will deliver a simpler, fairer, more consistent and more transparent set of ECVs, and encourage attendees to take part in the ongoing Public Review running from May to September 2025.

Presentations and speakers:


Welcome and setting the scene: Why a GCOS ECV Rationalization process was due and progress so far


  • Carlo Buontempo - ECMWF

New definitions and governance of the GCOS ECV process: what makes a climate variable essential and who decides that


  • Stefan Kern - University of Hamburg

The revised list of GCOS ECVs and ECV quantities


  • Stephan Bojinski - EUMETSAT
  • Belén Martín Míguez - WMO/GCOS Secretariat
  • Martin Herold - GFZ Potsdam
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.14)

Session: C.02.15 - M3 session: EO Research missions: The Moment of Truth (Execution Phase) - Final Checks, Countdown, Liftoff and then ...! - Earthcare, Biomass, HydroGNSS

The status of development of ESA missions will be outlined in four sessions of 1.5 hours each (together equivalent to a full day).
Participants will have the unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and the status of mission development activities will be presented together with industrial and science partners.


M3 session: EO Research missions: The Moment of Truth (Execution Phase) - Final Checks, Countdown, Liftoff and then ...! - Earthcare, Biomass, HydroGNSS


EarthCARE Launch and IOV Highlights


  • Kotska Wallace – ESA

EarthCARE Industry/Partner


  • Cornelius Haas – Airbus Friedrichshafen

Biomass After the Launch


  • Michael Fehringer – ESA

Biomass – How Industry Made It Happen


  • Chris Lloyd – Airbus

HydroGNSS Launch and IOV Readiness, Second SCOUT Cycle


  • Jean-Pascal Lejault – ESA

Research Mission Programme Wrap-up and What’s Next


  • Dirk Bernaerts – ESA


Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall E1)

Session: F.02.07 Essential Agricultural Variables: Building Blocks for Global Agriculture Monitoring and Policy Support

The growing global challenges of food security, climate change, and disaster risk reduction require more than just data—they demand coordinated, actionable insights. GEOGLAM (Group on Earth Observation Global Agricultural Monitoring), a G20-endorsed international community of practice, plays a pivotal role in this mission by harnessing Earth Observation (EO) technologies to enhance decision-making processes in agriculture and food security.
In early 2019, GEOGLAM's international partners developed the “Community Research and Development Agenda for GEOGLAM: Operational Research and Development,” a foundational document that identified key variables crucial for agricultural production forecasting. This effort catalyzed the creation of the Essential Agricultural Variables (EAVs).
EAVs help integrate EO data with global policy frameworks such as the UN Sustainable Development Goals, the Paris Agreement on Climate Change, and the Sendai Framework for Disaster Risk Reduction. EAVs empower stakeholders to translate policy into data needs, understand data gaps, and make informed decisions that address critical global challenges.
Significant progress has been made since the initial release of the EAVs in May 2022. These efforts have focused on refining and implementing EAVs across diverse agricultural contexts. The CEOS LSI-VC working group has played a pivotal role in advancing the EAV concept and getting support from different space agencies across the world. This session will explore the evolution of EAVs, highlighting examples such as cropland and crop type mapping, evapotranspiration (ET), and surface water monitoring, and will discuss the future potential of EAVs in enhancing global agricultural monitoring.
For more detailed information on the EAV framework, visit https://agvariables.org/
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall E1)

Presentation: ESA WorldCereal: Advancing Open-Source, On-Demand Crop Mapping at Any Scale, Anytime

Authors: Kristof Van Tricht, Jeroen Degerickx, Christina Butsko, Hendrik Boogaard, Juan-Carlos Laso Bayas, Santosh Karanam, Shabarinath Nair, Jeroen Dries, Vincent Verelst, Belen Franch, Italo Moletto-Lobos, Andreu Guillem-Valls, Katarzyna Cyran, Arun Pratihast, Dr. Luboš Kučera, Martin Babic, Inbal Becker-Reshef, Steffen Fritz, Sven Gilliams, Benjamin Koetz, Zoltan Szantoi
Affiliations: VITO, Wageningen Environmental Research (WENR), Wageningen University & Research, International Institute for Applied Systems Analysis (IIASA), University of Strasbourg, Department of Geographical Sciences, University of Maryland, Global Change Unit, Image Processing Laboratory, Universitat de Valencia, GISAT, Group on Earth Observations Global Agricultural Monitoring Initiative (GEOGLAM), European Space Agency, Department of Geography & Environmental Studies, Stellenbosch University
The ESA WorldCereal project is making significant strides in the field of crop mapping by delivering a new open-source, on-demand mapping system aimed at producing regional to global cropland and crop type maps. With the release of the first version of this renewed system, ESA WorldCereal is positioning itself as a transformative tool for the remote sensing and agricultural monitoring communities. Built to provide accurate, timely, and scalable crop information, the platform integrates advanced Earth observation (EO) data and machine learning technologies, inviting the community to collaboratively explore its potential. The initial release of the new ESA WorldCereal mapping system marks a major milestone, establishing an accessible platform that allows users to access both private and public harmonized reference data and generate cropland and crop type maps anywhere in the world, either by using our pretrained classification models or by training their own models. The system operates with minimal to no installation requirements, thanks to a scalable cloud-based pipeline driven by openEO and running on the Copernicus Data Space Ecosystem. A significant update to the backbone classification algorithm was made in collaboration with NASA Harvest, which involved adopting and adapting the Presto model as a feature extractor within the system. Presto is a lightweight transformer-based foundation model pretrained on millions of unlabeled EO time series. Integrating Presto in the WorldCereal system has refined the accuracy and robustness of the mapping component across both spatial and temporal scales, enabling it to better handle regional agricultural nuances and diverse cropping systems. In our presentation, we will provide an overview of the current status of the ESA WorldCereal project, detailing the different components of the system and how they work together to deliver accurate crop mapping results. 
We will highlight the availability of the newly released on-demand mapping tool, which allows anyone to generate cropland and crop type maps. By demonstrating how the platform's components integrate seamlessly—from data ingestion and processing to model deployment and visualization—we aim to showcase the flexibility and accessibility of ESA WorldCereal for a wide range of users. WorldCereal is not just a product; it is a community-driven initiative. We invite researchers, developers, and stakeholders from across the agricultural and remote sensing sectors to join us on this journey. Throughout the project, several regional demonstrations and capacity building activities are taking place to inform a broad range of potential end users about the possibilities and benefits of the system. By participating in this open-source effort, contributors can help shape the future of global crop mapping—enhancing methodologies, expanding training datasets, and developing innovative solutions to meet evolving agricultural challenges. ESA WorldCereal's open-source, on-demand crop mapping platform represents a significant step forward in democratizing access to agricultural information and enhancing global food security. With the recent release and the planned future developments, ESA WorldCereal aims to become a leading tool for crop monitoring and decision-making. We encourage the community to engage, contribute, and make use of the platform, collectively pushing the boundaries of what is possible in large-scale crop mapping.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall E1)

Presentation: Improving ECOSTRESS Evapotranspiration Estimates and Drought Indicators in Drylands with EnMAP and PRISMA

Authors: Michael Marshall, Dr. Monica Pepe, Giulia Tagliabue, Kwasi Ofori-Karikari, Dr. Wim Timmermans, Agnieszka Soszynska, Dr. Vincent Odongo, Dr. Volkan Yilmaz, Dr. Cinzia Panigada, Dr. Micol Rossini, Dr. Francesco Fava, Dr. Sonja Leitner, Dr. Francesco Vuolo, Dr. Chris Hecker, Dr. Mirco Boschetti
Affiliations: University of Twente, CNR-IREA, University of Milano-Bicocca, University of Leicester, International Livestock Research Institute, Karadeniz Technical University, University of Milano, BOKU
Droughts in drylands have devastating impacts on the global food supply and demand equilibrium. The 2020–2023 Horn of Africa drought, for example, led to severe disruptions in food supply chains and widespread food insecurity: more than 30 million people were affected across Ethiopia, Kenya, and Somalia. Droughts are particularly devastating in Africa because the primary livelihoods there consist of smallholder farming and pastoralism. These farmers and pastoralists depend exclusively on rainfall to grow crops and forage, as well as to water livestock. State-of-the-art drought monitoring platforms, such as the European ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) Hub, use satellite imagery and gridded meteorological data to map the evaporative stress index (ESI), which is derived from evapotranspiration (ET). ESI is increasingly used for monitoring in drylands because it effectively tracks “flash” droughts, which involve the rapid evaporation of water from the surface due to high temperature and atmospheric moisture demand. The original implementation of the ECOSTRESS technical workflow by the National Aeronautics and Space Administration estimated ET and ESI with the Priestley-Taylor Jet Propulsion Laboratory model (PT-JPL). PT-JPL is simple and reliable in many ecosystems across the globe but tends to overestimate ET and underestimate drought triggers in dry areas. Hyperspectral sensors consist of hundreds of narrow (≤10 nm) bands that distinguish many biophysical and biochemical foliage properties related to ET. Substantial gains in ET model performance were obtained with hyperspectral vegetation indices (HVIs) derived from field spectroscopy collected over one growing season. The Environmental Mapping and Analysis Program (EnMAP) and PRecursore IperSpettrale della Missione Applicativa (PRISMA) satellite platforms usher in a new era in global monitoring missions with hyperspectral capability.
They afford an opportunity to assess the performance of PT-JPL and other ET models with and without HVIs over large areas through time. We present the results of the European Space Agency (ESA) HyRelief project, which aimed to (i) quantify the performance of PT-JPL in Kenyan drylands as implemented in ECOSTRESS with and without HVIs and (ii) develop a prototype that could be integrated into current drought monitoring support tools, namely the National Drought Management Authority (NDMA) early warning bulletins. Objective one was achieved with a unique experimental set-up in the Kapiti Research Station and Wildlife Conservancy in Machakos County, Kenya, consisting of an eddy covariance (EC) flux tower and fluorescence box (FloX). The former recorded ET and other micrometeorological measurements every 30 minutes, while the latter captured hyperspectral information (400 – 950 nm) every two minutes. The experiment spanned two growing seasons (2019 – 2020 and 2023 – 2024). Four EnMAP and PRISMA cloud-free images were obtained on the same date over the second season. The performance of the original PT-JPL with EC flux and FloX data was high (R² = 0.82, RMSE = 1.74, RRMSE = 0.36) but led to a clear positive bias during the growing season. The integration of HVIs into the canopy (588.83, 896.17 nm) and soil (688.09, 896.17 nm) components of PT-JPL led to a modest improvement in performance (R² = 0.88, RMSE = 1.44, RRMSE = 0.30). More importantly, the HVIs corrected the positive bias during the growing season. Both versions of PT-JPL outperformed the Two-Source Energy Balance (TSEB) model driven with and without HVIs. TSEB replaced PT-JPL on the European ECOSTRESS Hub to estimate ET and ESI. The results of HyRelief are important to the European ECOSTRESS Hub and other drought monitoring system developers who are interested in refining or generating new technical workflows. 
We hypothesize that performance would have been higher if soil evaporation had been driven by narrow bands in the shortwave infrared (SWIR). EnMAP and PRISMA collect information across the full optical range (400 – 2500 nm) but are experimental and acquired on demand, so the availability of cloud-free imagery was limited. The imagery only supported a qualitative assessment at the satellite scale. The upcoming Sentinel-2 Next Generation (S2NG) and Copernicus Hyperspectral Imaging Mission (CHIME) will collect hyperspectral information in the SWIR across the globe at frequent intervals and could be used to verify our hypothesis. In October 2024, thirteen knowledge users from NDMA and other relevant agencies met at the Regional Centre for Mapping of Resources for Development in Nairobi, Kenya, and used the Agile software development process to transform the data collected by HyRelief into an ECOSTRESS-EnMAP-PRISMA drought monitoring prototype. The prototype will be made publicly available on GitHub at the beginning of 2025. It could serve as a testbed for S2NG and CHIME.
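For readers unfamiliar with the index, ESI is commonly computed as a standardized anomaly of the ratio of actual to potential ET. The sketch below illustrates the idea; the function name and the way the climatology is supplied are illustrative, not the ECOSTRESS implementation:

```python
import numpy as np

def evaporative_stress_index(et, pet, clim_mean, clim_std):
    """Standardized anomaly of the actual-to-potential ET ratio.

    et, pet             : arrays of actual and potential ET (mm/day)
    clim_mean, clim_std : climatological mean and std of et/pet for
                          the same period of the year
    """
    # Guard against division by zero where PET is not positive
    f = np.divide(et, pet, out=np.zeros_like(et), where=pet > 0)
    return (f - clim_mean) / clim_std

# Toy example: ET well below its climatological fraction of PET
# yields strongly negative ESI values (drought stress)
et = np.array([1.0, 2.0])    # mm/day
pet = np.array([8.0, 8.0])   # mm/day
esi = evaporative_stress_index(et, pet, clim_mean=0.5, clim_std=0.1)
```

Negative ESI flags regions evaporating much less than their climatological norm, which is how flash droughts show up before precipitation deficits accumulate.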

Wednesday 25 June 14:00 - 15:30 (Hall E1)

Presentation: Essential Agriculture Variables for GEOGLAM: building a bridge between satellite remote sensing and policy communities to support global agriculture monitoring for food security, productivity, and sustainability

Authors: Alyssa Whitcraft, Sven Gilliams
Affiliations: GEOGLAM
GEOGLAM was launched in 2011 by the G20 agricultural ministers as part of an action plan to address food commodity price volatility. The ministers postulated that global, timely, independent information on crop production outlooks from Earth observations (EO) would stabilize markets and promote food security. Since 2011, GEOGLAM - through its international, bottom-up community of practice - has made substantial progress in promoting the adoption of EO and EO-derived information by actors ranging from national statistics services to global policy halls confronting sustainability, climate change, and productivity. GEOGLAM has evolved and expanded since its inception, and so too have the policy environments in which it operates, now intersecting with the United Nations Sustainable Development Goals, the United Nations Framework Convention on Climate Change, the Sendai Framework for Disaster Risk Reduction, and others. Accordingly, in 2018 GEOGLAM launched an effort to define Essential Agriculture Variables (EAVs) as core, EO-based building blocks for understanding state, change, and forecast in agriculture that could be applied across policy drivers, across borders, and across scales. Now, in 2024, GEOGLAM is working with the Committee on Earth Observation Satellites (CEOS) and other organizations around the world to turn these variables into standardized, national-to-global scale products that foster the community's capacity to provide clear and consistent measurements and assessments of agriculture. This presentation will detail the core efforts of 2023-2026 related to operationalizing the EAVs, including a call to the community to align on core product quality requirements and on priorities for methodological improvement.

Wednesday 25 June 14:00 - 15:30 (Hall E1)

Presentation: Copernicus4GEOGLAM, the CLMS agricultural mapping service: crop type maps and their use in African countries

Authors: Ilaria Palumbo
Affiliations:
Food insecurity is a growing challenge worldwide, and agricultural systems play a key role in its prevention and mitigation. Countries with vulnerable food security often lack baseline agricultural statistics or have access only to outdated ones; better information is therefore key to supporting their decision-making and enabling better planning or compensation actions. As part of the Copernicus Land Monitoring Service (CLMS), and in close collaboration with GEOGLAM (Group on Earth Observations Global Agricultural Monitoring), the Copernicus4GEOGLAM Service has been operating since 2020 to improve agriculture monitoring and crop area statistics in countries where food security is vulnerable or even in a critical state. Relying on observations from Sentinel-1 and 2 and extensive field data collections, the service provides countries with relevant information on crop type distribution and area coverage. In particular, the service provides crop masks, crop-type maps and crop area statistics, during and at the end of the growing season. The derived products are also accompanied by accuracy assessments to provide users with quality statistics. Since 2020, several countries have benefited from Copernicus4GEOGLAM products, especially in Africa (Uganda, Kenya, Tanzania, Ivory Coast), but also in other regions, for example Yemen. The maps and statistics provided have helped improve the reporting and monitoring capabilities of national governments, and since the information is received soon after the end of the growing season, it can also be used to identify regions with unfavourable crop conditions at an early stage so that prompt action can be taken. These products are used to update national maps of agro-ecological regions and can also improve national planning of fertilizer distribution and crop insurance implementation. 
Copernicus products are free and open access and can also be used by other institutions working on food security and agriculture monitoring. Besides the maps and statistics, the in-situ data collected during the field campaigns are made available to the country governments and the international community, who can use them for other mapping or validation purposes. The private sector is also showing growing interest in these datasets for developing farm-level services. Finally, the Copernicus4GEOGLAM team provides knowledge transfer to countries receiving the service and works in collaboration with other institutions (GEOGLAM, FAO, World Bank) to increase countries’ capacity in EO and maximise the benefit of the service. In 2024, Copernicus4GEOGLAM supported the Kenya Space Agency, the Kenya Ministry of Agriculture and local universities in using the methodology and processing chains developed by the Service in a local crop mapping project. This first autonomous application of methods and tools developed by the service increases its sustainability and impact. Since late 2024, Copernicus4GEOGLAM has entered a new phase to ensure continuity of the service and improve support to the countries. In the new phase, the classification methods used for crop type mapping have been improved by incorporating the methodologies developed by the WorldCereal initiative. Furthermore, a Validation Service has been added to support countries in the implementation of the EU Regulation on Deforestation-free products (EUDR): this service can be applied to existing maps produced by a country to provide independent validation.

Wednesday 25 June 14:00 - 15:30 (Hall E1)

Presentation: Assessing the Added Value of Hyperspectral Spaceborne Data in Crop Residue Cover Estimation with Machine Learning: Experimental Results from PRISMA and EnMAP Imagery in the Perspective of Next-Generation Multispectral Missions

Authors: Dr Monica Pepe, Katayoun Fakherifard, Luca Miazzo, Rodolfo Ceriani, Dr. Francesco Nutini, Dr Gabriele Candiani, Dr Andrea Ferrarini, Lorenzo Parigi, Dr Mirco Boschetti
Affiliations: CNR - IREA, Department DISAA, Università degli Studi di Milano, Department of Environmental Science and Policy, Università degli Studi di Milano, Department of Sustainable Crop Production, Università Cattolica del Sacro Cuore di Piacenza
The role of remote sensing in crop monitoring has recently been extended to the study of crop residue coverage in space and time. Crop residues, together with cover crops, are key practices for conservation agriculture, for their role in the hydrological, bio-physical and chemical balance and maintenance of soils. In addition, the European Climate Law and the EU's goal of becoming carbon-neutral by 2050 have raised attention to carbon removals in agriculture, calling for new Monitoring, Verification and Reporting (MVR) tools, as demonstrated by the inclusion of remote sensing techniques in the upcoming Carbon Removals and Carbon Farming Certification (CRCF) Regulation. Hyperspectral data have proved to be more suitable than multispectral data to account for Non-Photosynthetic Vegetation (NPV) (Verrelst et al., 2023), while also resolving subtle absorption features and covering the diagnostic spectral bands for carbon-based constituents (CBC), especially in the SWIR, a spectral region where both past and current satellite multispectral missions offer limited data. At the same time, there is a growing trend to develop agnostic algorithms that are effectively sensor-independent, aiming to improve the frequency and consistency of delivering added-value information from multiple satellite data sources. In this framework, agnostic algorithms for NPV can be trained either on RTM simulations (Berger et al. 2021) or on spectral libraries (Pepe et al. 2022). The overall objective of the study is to assess the value-added contribution of novel hyperspectral missions, such as PRISMA (PRecursore IperSpettrale della Missione Applicativa) and EnMAP (Environmental Mapping and Analysis Program), in detecting and quantifying crop residue (CR) presence in farmland. 
The method is rooted in a set of absorption bands diagnostic of different covers (crop residues, green vegetation, bare soil); since the relationship between them and crop residue quantity is not linear, the regression analysis is performed with Machine Learning Regression Algorithms (MLRA). The spectral intervals of interest, based on previous studies that have been experimentally verified on both ground and satellite imaging spectra (Pepe et al., 2022), cover chlorophyll (540 – 790 nm) and water (1070 – 1260 nm) absorption of green vegetation, lignin and cellulose absorption for NPV (2010 – 2210 nm) and clay absorption for soil presence (2100 – 2240 nm). To model these absorption peaks the method relies on three-band spectral indices, namely the Continuum Interpolated Band Ratio (CIBR, Dennison et al. 2023). In this way the spectral hyperspace is reduced to a variable space of four diagnostic indices on which a Random Forest is trained for detecting NPV and retrieving CRC (Crop Residue Cover). State-of-the-art NPV spectral indices (SIs), CAI and CINDI, have also been computed to simulate the contribution of next-generation multispectral missions specifically proposed to address crop residue detection and quantification (Hively et al. 2021, Verrelst et al. 2023). MLRAs based on the hyperspectral CIBRs, as well as CAI and CINDI, have been trained exploiting an NPV spectral library - “Reflectance spectra of agricultural field conditions supporting remote sensing evaluation of non-photosynthetic vegetation cover” - openly shared by USGS (Hively et al., 2021). This dataset is composed of around 900 surface reflectance spectra (400 to 2400 nm) annotated with crop residue, vegetation and soil fraction percentages, and it is resampled to 10 nm, the spectral resolution of most current and near-future satellite sensors, allowing the creation of a sensor-independent model. 
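As a concrete illustration of the index construction described above, a CIBR is the band-centre reflectance divided by the continuum linearly interpolated between two shoulder bands. A minimal sketch; the reflectance values are invented for illustration, and the shoulder wavelengths simply bracket the lignin-cellulose interval quoted in the text:

```python
def cibr(r_left, r_center, r_right, wl_left, wl_center, wl_right):
    """Continuum Interpolated Band Ratio: reflectance at the absorption-band
    centre over the continuum linearly interpolated between two shoulders."""
    # Fractional position of the centre band between the shoulders
    w = (wl_center - wl_left) / (wl_right - wl_left)
    continuum = (1.0 - w) * r_left + w * r_right
    return r_center / continuum

# Cellulose-lignin absorption near 2100 nm with shoulders at 2010 and 2210 nm
# (illustrative reflectances): a ratio < 1 indicates an absorption feature.
ratio = cibr(0.30, 0.24, 0.32, 2010.0, 2100.0, 2210.0)
```

Four such ratios, one per diagnostic interval, form the reduced feature space on which the Random Forest described above is trained.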
The validation set consisted of independent ground spectral measurements acquired for different crops in a thinning experiment. Measured plots were manipulated to reproduce a range of CRC classes, varying from approximately 0% to 100% in increments of 10%. This process yielded a test library of 100 samples. Both spectral libraries have been resampled to the hyperspectral (i.e. PRISMA and EnMAP) and new-generation (e.g. S2-NG and Landsat X) spectral configurations for the computation of the four CIBRs as well as CAI and CINDI. The different models have been tested under real farm conditions. The study area comprises the Jolanda di Savoia farm of Bonifiche Ferraresi S.p.A., about 3800 ha in extent, located in the Po Valley (Emilia-Romagna region; 44°53′N; 11°59′E). Since it is reclaimed land, it is characterized by a great variability of soil types, creating within-field spatial patterns of crop biomass. It is also a site for the scientific cal/val of the ASI PRISMA mission (Genesio et al., 2022), and was also selected within the ESA CHIME & SBG 2021 Mission. It has hosted various project activities related to the use of hyperspectral data for agricultural studies and was recently included in the EnMAP foreground mission. This allowed the acquisition, in the period 2022 to 2024, of a time series of PRISMA (22 images) and EnMAP (5 images) data. The model was applied to these images in order to obtain a seasonal quantification of crop residues for different crops, as well as an indication of cropping patterns in the agricultural landscape. Cross-validation analysis shows very good results for the hyperspectral CIBRs, computed from the PRISMA-like and EnMAP-like configurations (R² > 0.8, RMSE < 0.12), and the multispectral SIs (e.g. CAI, R² > 0.58, RMSE 0.16). Indeed, the ML models based on the hyperspectral configurations outperform the NPV-specific spectral indices in CRC estimation. 
Independent validation of the CIBR-based model shows very satisfying results for the PRISMA-like (R² > 0.75, RMSE 0.14) and EnMAP-like (R² > 0.78, RMSE 0.13) configurations. Because the clay feature is not always detectable in real spaceborne imagery, we further tested an MLRA with only three CIBRs. Results are also very encouraging in this configuration, which increases the transferability of the method. Finally, the derived models were applied at district scale and CRC values were analysed in relation to ancillary information. The information content of the CRC maps agrees with the parcel-level information on cultivated crops and plausible harvest dates. The analysis of temporal change from one PRISMA/EnMAP image to the next shows that it is possible to provide information on field trajectories and crop residue management, such as the persistence of CRC and its changes. Such information is fundamental for providing quantitative evidence and MVR within the carbon farming framework.

Wednesday 25 June 14:00 - 15:30 (Hall E2)

Session: D.01.08 4th DestinE User eXchange - Addressing Application Needs for Extreme Events Preparedness

Addressing Application Needs for Extreme Events Preparedness

Weather-related hazards cause some of the highest economic damage and most loss of life among all natural disasters. These include floods, droughts, heatwaves, storms and heavy rain. The Weather-Induced Extremes Digital Twin (Extremes DT) provides detailed information on extreme weather events two to four days in advance. This can help decision-makers respond and adapt quickly to these events.

In this session, speakers involved in Destination Earth and other partners working on the Extremes DT will update the community on its status. They will give examples of how users can benefit from this digital twin and show how these capabilities can support different applications.

Presentations and speakers:


The Global Extremes Digital Twin: development status and future directions


  • Benoît Vannière - European Centre for Medium-Range Weather Forecasts

The On-Demand Extremes Digital Twin: development status and future directions


  • Christoph Wittmann - Geosphere Austria

Improving the simulation of extreme weather in Europe with the Extremes DT


  • Natalie Theeuwes - Royal Netherlands Meteorological Institute, KNMI
  • Estíbaliz Gascón - European Centre for Medium-Range Weather Forecasts

UrbanAIR: a new digital twin to support climate adaptation in cities


  • Femke Vossepoel - TU Delft

Towards the detection of energy-critical situations in DestinE’s Extreme Digital Twin


  • Matthias Zech - German Aerospace Center, DLR

A global to local compound flood forecasting pilot service for Destination Earth


  • Kun Yan - Deltares

ALaDyn - towards dynamic landslide risk modelling


  • Christoph König - Lugitsch & Partner
  • Janik Deutscher - Joanneum Research

Discussion: Perspectives on opportunities and next steps


  • Moderated by Benoît Vannière

Wednesday 25 June 14:00 - 15:30 (Hall G2)

Session: A.08.08 Upper Ocean Dynamics - PART 2

The upper ocean, by exchanging properties such as heat, momentum, gas, mass and freshwater with the atmosphere, plays a key role in shaping the Earth’s climate and influencing various environmental processes. It is characterized by numerous processes which superimpose and interact with each other, and which cover a wide range of spatial (from sub-mesoscale to basin-wide) and temporal (from high frequency to climatic) scales.
Different parameters are needed to properly describe the upper ocean dynamics (e.g. temperature, salinity, sea level, currents, wind, waves, mixed layer depth), and a large variety of active and passive instruments have been put into orbit over the last few decades, providing more or less direct information about the upper-ocean dynamics (e.g. altimeters, including the recently launched SWOT mission, gradiometers, scatterometers, synthetic aperture radars, imaging radiometers operating at different wavelengths (microwave, infrared), and spectrometers). In this context, this session welcomes contributions exploring how multi-variable satellite observations, together with in-situ data and/or numerical modelling, can be consistently and systematically used in synergy to better observe and understand upper ocean dynamics, across different dynamical regimes and spatial and temporal scales.


Wednesday 25 June 14:00 - 15:30 (Hall G2)

Presentation: Upper Ocean Heat Content From Earth Observation: Challenges and Opportunities for Climate Monitoring

Authors: Manuel Arias, Dr Estrella Olmedo, Dr Verónica González-Gambau, Dr Michael Mayer, Dr Antonio Turiel
Affiliations: Zenithal Blue Technologies S.L.U., Institut de Ciències del Mar, Consejo Superior de Investigaciones Científicas, University of Vienna, Research Department, European Centre for Medium-Range Weather Forecasting
According to the Global Climate Observing System (GCOS), surface heat fluxes between the atmosphere and the ocean are among the major contributors to the energy and moisture budgets, and are largely responsible for thermodynamic coupling of the ocean and atmosphere at global and regional scales. In consequence, the upper ocean heat content (OHC) plays an important role in regulating the global climate, as it is through the surface of the ocean that such exchanges occur. Upper OHC refers to the heat stored in the Mixed Layer (ML) of the ocean. This layer acts as a thermal buffer, absorbing more than 90% of the excess heat from greenhouse gas-induced warming. Its variability influences atmospheric circulation, storm formation, and patterns of heat distribution across the globe. For example, changes in upper OHC are strongly linked to phenomena like El Niño and La Niña, which have profound impact on global weather patterns. Furthermore, the ability of the ocean to sequester heat is critical in buffering the climate system against the effects of anthropogenic warming. However, this buffering capacity may be reaching a tipping point, which could result in more pronounced climatic feedback loops. Thus, monitoring upper OHC from space can be a highly valuable tool for climate sciences and improve our understanding of the energy imbalance on Earth. Satellite-based EO techniques, in combination with modelled and in-situ data, provide unique opportunities to measure and analyse upper OHC at a spatial and temporal resolution that traditional in-situ methods alone cannot achieve, with a significant potential to contribute to a better understanding of the Earth System dynamics. In this study, we estimate upper OHC derived from the Thermodynamic Equation of Seawater (TEOS-10), by exploiting remotely-sensed Sea Surface Temperature (SST) and Sea Surface Salinity (SSS), from the corresponding ESA Climate Change Initiative (CCI) Climate Data Records (CDR). 
Additional/alternative products for SST and SSS will also be explored. These variables are thermodynamically combined to determine the surface ocean heat density. By assuming homogeneity in the ML, the surface ocean heat density multiplied by the Mixed Layer Depth (MLD) provides an estimate of the vertical integral of heat density that yields the upper OHC. The main advantage of our approach is that it significantly improves both the spatial and temporal resolution of OHC measurements in the upper ocean with respect to other EO-derived OHC products. With a 0.25-degree spatial resolution and daily products, the novel dataset better supports the data needs for heat flux estimations identified by GCOS than the products obtained from a geodetic approach, which uses altimetry and gravity anomalies to convert thermal expansion to heat content (currently with a 1-degree spatial resolution and monthly products). The main limitation lies in the dependency of the computation on the MLD values, for which a large uncertainty exists as of today, both from ocean models and in-situ observations. The assumption that surface EO-based measurements yield a representative value of temperature and salinity for the ML is also an important aspect to evaluate. In this work, we have generated a 12-year time series (2011-2022) of an upper OHC family of products, using MLD information coming from the Copernicus Marine Environment Monitoring Service (CMEMS), namely the ARMOR3D L4 analysis and the Global Ocean Ensemble Physics Reanalysis. The former product consists of weekly estimates of MLD derived from 3D reconstructions of vertical profiles that combine satellite and in-situ datasets. The latter includes 0.25-degree daily MLD estimates from three reanalyses, namely GLORYS2V4 from Mercator Ocean, ORAS5 from ECMWF and C-GLORSv7 from CMCC. 
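Under the mixed-layer homogeneity assumption described above, the upper OHC per unit area reduces to a simple product of heat density and MLD. A minimal sketch using nominal constant density and heat capacity; a rigorous implementation would derive both from SSS and SST via the TEOS-10 equations, and the reference temperature is an arbitrary choice:

```python
RHO_SW = 1025.0  # kg m-3, nominal seawater density (TEOS-10 would derive it from SSS/SST)
CP_SW = 3985.0   # J kg-1 K-1, nominal seawater specific heat capacity

def upper_ohc(sst_celsius, mld_m, t_ref=0.0):
    """Upper-ocean heat content per unit area (J m-2), assuming the mixed
    layer is vertically homogeneous at the surface temperature."""
    return RHO_SW * CP_SW * (sst_celsius - t_ref) * mld_m

# A 20 degC mixed layer, 50 m deep, holds ~4.1e9 J m-2 relative to 0 degC
q = upper_ohc(20.0, 50.0)
```

The sensitivity of the result to `mld_m` is exactly the MLD-uncertainty limitation discussed in the abstract: a 10% error in MLD propagates linearly into a 10% error in upper OHC.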
Using Argo vertical profiles, we have addressed the degree of representativity of the EO-based SST and SSS measurements with respect to the MLD, as well as the uncertainties derived from a lack of knowledge of the MLD estimates. Using the CMEMS Coriolis Reanalysis (CORA) of in-situ datasets, we have also assessed the quality of the EO-based surface heat density parameter. Moreover, we assess the temporal variability of the upper OHC at both long and short temporal scales, after subtracting the seasonal component, discussing the implications of the results at global scale and for specific use cases. We are particularly interested in how the long-term variability of the upper OHC can provide a better understanding of processes like the Atlantic Meridional Overturning Circulation (AMOC) and El Niño. Our results indicate that monitoring surface heat density from Earth Observation sources allows for building powerful indicators of atmosphere-ocean interactions with climate implications, creating an opportunity to better constrain key processes that are not always well represented in climate models. This work also shows the need for improvements in the vertical reconstruction of the upper ocean, which can be instrumental in leveraging EO datasets for better constraining climate processes at the interface between the ocean and the atmosphere. Efforts made in this direction will significantly enhance the development of synergistic products of significant relevance for key activities such as the Destination Earth initiative, the Climate Change Initiative, and the ESA Ocean Science Cluster.

Wednesday 25 June 14:00 - 15:30 (Hall G2)

Presentation: Internal Wave Observations from the Surface Water and Ocean Topography Mission: On the use of combined high-resolution sea surface height and roughness measurements

Authors: Bertrand Chapron, Vladimir Kudryavtsev, Vahid Cheshm Siyahi, Fabrice Collard
Affiliations: ifremer, SOLab, Russian State Hydrometeorological University, OceanDataLab
This study aims to demonstrate how newly available high-resolution data from the Surface Water and Ocean Topography (SWOT) mission provide new quantitative means to more precisely investigate internal waves (IWs). IWs arise within the stratified upper ocean, where less dense water overlies denser water. Perturbations from external forces, such as interactions of surface tides with underwater topographical features, can then trigger the generation of IWs at isopycnals. In the past, many studies have intensively used satellite-borne synthetic aperture radar (SAR) images and moderate-resolution imaging radiometer (e.g. MODIS) measurements to detect IW signals through distinctive surface roughness patterns. Unlike the afore-mentioned SAR and imaging radiometer observations, the SWOT mission now uniquely provides combined sea surface height (SSH) and normalized radar cross section (NRCS) measurements. SWOT's combined observations promise a significant leap in studying and monitoring IWs compared to previous satellite observations. Indeed, SWOT's Ka-band radar interferometer (KaRIn) measurements make it possible to capture the detailed sea surface height anomalies (SSHAs) associated with IW dynamical properties related to the interior ocean pressure signal. Unlike past satellite observations that solely relied on surface roughness analysis from SAR or sunglint data from optical observations, SWOT provides a unique combination of SAR images and elevation maps over IWs. Demonstrated off the Amazon shelf region, SWOT SSHA data often distinctively map IW signatures, with wavelengths ranging from 3 to 50 km and surface height anomalies as small as 0.15 m. A simplified three-layer approximation of ocean stratification can robustly be employed to reconstruct IW-induced vertical displacements of the pycnocline, reproducing pycnocline displacements up to 100 m, in good agreement with SWOT SSHAs. 
More importantly, SWOT data is helping to more directly explore how the radar-detected NRCS roughness anomalies can be quantitatively related to contemporaneous SSHAs under varying wind conditions. Cross-spectral analyses between SWOT NRCS and SSHA data under varying wind conditions are demonstrated to lead to the development of a modulation transfer function (MTF), relating NRCS contrasts to IW amplitude, wavenumber, and wind speed. Simulations using the Radar Imaging Model (RIM) provide additional validation of NRCS modulation trends, highlighting the role of wind field variability in enhancing IW detection. These findings clearly underscore SWOT’s capability to resolve fine-scale IW dynamics and surface modulations, offering new insights into ocean mixing and stratification processes. Importantly, the SWOT-refined MTF is then further shown to help interpret off-nadir SAR NRCS measurements and sun-glitter optical ones. Using SWOT and this transfer learning to other high-resolution satellite observations (including past and future missions), a robust methodology can thus be established for combining these data with in-situ measurements and numerical models, paving the way for deeper exploration of IW impacts on upper ocean dynamics and climate systems.
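The consistency of the quoted SSHAs and pycnocline displacements can be checked with a back-of-the-envelope two-layer (reduced-gravity) scaling, a further simplification of the three-layer model used in the study; the relative density jump below is an assumed, typical value, not taken from the abstract:

```python
def pycnocline_displacement(ssha_m, delta_rho_over_rho):
    """Two-layer reduced-gravity estimate: a surface height anomaly zeta
    over a relative density jump delta_rho/rho corresponds to an interface
    displacement of order eta ~ zeta / (delta_rho/rho).
    Signs and phase relationships are ignored in this sketch."""
    return ssha_m / delta_rho_over_rho

# The 0.15 m SSHAs and ~100 m pycnocline displacements quoted above are
# mutually consistent for delta_rho/rho ~ 1.5e-3, plausible for a sharp
# tropical pycnocline -> eta ~ 100 m
eta = pycnocline_displacement(0.15, 1.5e-3)
```

This order-of-magnitude check illustrates why centimetre-level SSH precision (as delivered by KaRIn) is needed to resolve interior displacements of tens of metres.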

Wednesday 25 June 14:00 - 15:30 (Hall G2)

Presentation: Reconstructing upper-mid ocean volume transport of the Atlantic Meridional Overturning Circulation from along-track altimetry

Authors: Anneke Sperling, Dr. Alejandra Sanchez-Franks, Dr. Encarni Medina-Lopez, Dr. Chris Banks, Dr. Ben Moat, Dr. Joel Hirschi
Affiliations: National Oceanography Centre, University of Edinburgh, National Oceanography Centre
The Atlantic Meridional Overturning Circulation (AMOC) plays a crucial role in mediating the climate in north-west Europe as well as globally. It is commonly referred to as the heat ‘conveyor belt’, suggesting a coherence across all latitudes. However, especially the upper limb of the AMOC is not one current but a combination of several currents emerging and transforming into each other at different latitudes on its journey northward. So how latitudinally coherent is the AMOC actually? Understanding the AMOC’s meridional coherence is paramount to understanding the impacts of a changing AMOC on regional climates and weather extremes, such as hurricane evolution along the eastern seaboard of the U.S. Thus, being able to monitor upper ocean volume transport continuously across latitudes is desirable. Volume transport of the AMOC is generally computed using dynamic height measurements at selected latitudes (e.g. 26.5˚N). Current observational efforts for the AMOC include mooring arrays (e.g. RAPID; https://rapid.ac.uk/), ship-based hydrographic surveys and a variety of floats. These methods are time-consuming and susceptible to weather. More crucially though, the longest comprehensive transport record only extends back 20 years, latitudinal coverage is limited to two arrays (in the subtropical and subpolar North Atlantic respectively), and it is not sustainable long-term. Since the launch of TOPEX/Poseidon in 1992, however, the oceanographic community has had continuous sea surface height measurements from satellite altimetry, which can be used to substitute for dynamic height measurements in the calculation of upper ocean volume transport. This altimetry record exceeds the length of most observational efforts in place, enabling a more robust analysis of lower-frequency temporal variability. Additionally, the redundancy of altimetry missions at any point in time offers near-global coverage at varying temporal frequencies. 
Thus, altimetry offers a continuous and sustainable alternative for monitoring upper ocean volume transport. Most studies using satellite altimetry to estimate volume transport of the AMOC have used gridded satellite altimetry in combination with in-situ hydrographic data. However, gridded satellite products are heavily smoothed due to the interpolation of selected along-track missions in space and time, and may not capture smaller-scale variability, potentially limiting the capability of satellite altimetry to reconstruct volume transport. We therefore explore the use of along-track satellite altimetry, based on the hypothesis that the higher spatial resolution of this data will provide a more accurate reconstruction of volume transport variability. Here, in-situ hydrographic data is used in combination with single-mission L2 and L3 along-track satellite altimetry to reconstruct upper-mid ocean volume transport of the AMOC at 26.5˚N. At the selected latitude, a submarine telephone cable between Florida and the Bahamas measures the Florida Current, a significant component of the northward surface flow of the AMOC. The offshore component at this latitude is measured by the RAPID array. These efforts have been in place since 1982 and 2004 respectively, and represent the longest observational records of the AMOC, providing a unique opportunity to establish a robust method of estimating upper ocean volume transport using satellite altimetry. We present initial results exploring interannual time scales both near the coast and offshore, and compare them with transport estimates derived from gridded altimetry. For the open ocean, the best agreement is found when using the L3 along-track product. In the Florida Strait, gridded and L2 along-track altimetry perform similarly, whereas the L3 along-track product has a lower correlation. 
Ultimately, we aim to apply this method on shorter timescales and at latitudes further north and south, providing a new framework for the assessment of meridional coherence in the AMOC.
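The link between endpoint SSH (or dynamic height) and upper-ocean transport that underpins this kind of reconstruction can be sketched in a few lines. This is an illustrative back-of-envelope calculation, not the study's method: the SSH difference and upper-layer depth below are assumed values chosen only to give a realistic order of magnitude.

```python
import math

# Geostrophic balance gives a surface velocity v = (g / f) * d(eta)/dx.
# Integrated across a section and assumed uniform over an upper layer of
# depth H, the zonal integral telescopes, so the transport depends only on
# the SSH difference between the section endpoints:
#   T = (g / f) * (eta_east - eta_west) * H
g = 9.81                                       # gravity [m/s^2]
omega = 7.2921e-5                              # Earth's rotation rate [rad/s]
f = 2 * omega * math.sin(math.radians(26.5))   # Coriolis parameter at 26.5 N

eta_diff = 0.10   # assumed east-minus-west SSH difference across the section [m]
H = 1000.0        # assumed depth of the upper limb [m]

T_sv = g / f * eta_diff * H / 1e6              # transport in Sverdrups (1 Sv = 1e6 m^3/s)
print(f"{T_sv:.1f} Sv")
```

A 10 cm endpoint SSH contrast over a 1000 m layer yields a transport of order 15 Sv, comparable to the observed upper limb, which is why endpoint altimetric SSH can stand in for in-situ dynamic height.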
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall G2)

Presentation: Long-term wave-coupled processes impact on upper ocean circulation: thanks to CFOSAT data

Authors: PhD fellow Emma Bedossa, Lotfi Aouf, Hervé Giordani, Stéphane
Affiliations: Météo France, CNRM
Ocean/atmosphere coupling remains a complex subject, with major uncertainties regarding the physical processes at the surface that drive the exchange of momentum and heat fluxes. The aim of this work is to analyze the impact of improving the sea state with CFOSAT data on wave-ocean coupling. Coupled experiments with the MFWAM and NEMO ocean models were carried out over a 4-year period. The innovation of this work concerns the assimilation of SWIM wave spectra and the improvement of coupling processes concerning surface stress, Stokes drift and wave-breaking-induced turbulence in the ocean mixed layer. As a reminder, the SWIM instrument can detect wave wavelengths from 50 to 700 m. Several coupling experiments are tested with and without assimilation of SWIM wave spectra and wave height, and a control experiment without coupling is implemented to assess the impact. Validation of key ocean parameters (temperature, salinity and currents) is carried out with in situ and satellite observations. The results show an improvement in surface currents and temperature in the western boundary current regions and in the tropics. We will also present the significant impact of waves on the 300 m heat content in the Southern Ocean, particularly dominated by extreme storms. The results also demonstrate that improved wave forcing acts as a nudging on ocean current trajectories, keeping them consistent with observations. A focus of the analysis concerns the impact of waves on ocean temperature and salinity in critical ocean regions such as the marginal ice zone. Further comments and conclusions will be presented in the final paper.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall G2)

Presentation: VarDyn: Dynamical joint-reconstructions of Sea Surface Height and Temperature from multi-sensor satellite observations

Authors: Florian Le Guillou, Bertrand Chapron, Marie-Helene Rio
Affiliations: Datlas, Ifremer, INRIA, ODYSSEY, European Space Research Institute (ESA-ESRIN)
For many years, satellite observations of sea surface height (SSH) and sea surface temperature (SST) have provided invaluable information on upper ocean dynamics at many scales. SSH and SST variables are dynamically linked, and are very often used together in scientific studies (e.g. estimating heat transport in the upper layer by SSH-derived geostrophic currents). As satellite observations are unevenly distributed in space and time (SSH is measured along one-dimensional trajectories and SST measurements are affected by clouds), many scientific and operational applications rely on level-4 gridded SSH and SST products. However, these products suffer from two main limitations. Firstly, conventional mapping techniques rely on static optimal interpolation schemes, which limits the estimation of nonlinear dynamics at scales poorly sampled by altimetry or, for SST, in regions densely affected by clouds (e.g. near western boundary currents). Secondly, SSH and SST reconstructions are performed separately, without exploiting the synergies between the two variables, which affects the consistency of the two reconstructed fields. Here, a hybrid methodology, VarDyn, combining minimal physically-based constraints with a variational scheme, is demonstrated to improve mapping capabilities for SSH and SST. Synthesizing multi-modal satellite observations, VarDyn improves the accuracy of SSH and SST maps compared to operational products, both in terms of RMSE and effective spatial resolution. As expected, most improvements occur in highly energetic ocean regions. Still, the accuracy of SSH maps also slightly improves in low-energy regions, which is a clear improvement over other methods. VarDyn SSH fields and associated geostrophic velocities further reveal strong agreement with newly available high-resolution instantaneous SWOT estimates. Remarkably, the assimilation of SST especially benefits SSH reconstruction when only 2 altimeters are available.
The VarDyn method thus offers a robust means of refining climate SSH records, jointly assimilating SSH from 2 altimeters and SST from microwave sensors.
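The SSH-derived geostrophic velocities referred to above follow directly from geostrophic balance. A minimal sketch on an idealized SSH field (the Gaussian eddy, grid spacing and constant Coriolis parameter below are illustrative assumptions, not VarDyn inputs):

```python
import numpy as np

g, f = 9.81, 1e-4          # gravity [m/s^2]; mid-latitude Coriolis parameter [1/s]
dx = dy = 25e3             # assumed 25 km grid spacing [m]
y, x = np.mgrid[0:20, 0:20] * dx

# Idealized anticyclonic eddy: 30 cm SSH bump with an 80 km e-folding scale
eta = 0.3 * np.exp(-((x - 250e3)**2 + (y - 250e3)**2) / (2 * 80e3**2))

# Geostrophic balance: u = -(g/f) d(eta)/dy,  v = (g/f) d(eta)/dx
deta_dy, deta_dx = np.gradient(eta, dy, dx)
u = -(g / f) * deta_dy     # zonal velocity [m/s]
v = (g / f) * deta_dx      # meridional velocity [m/s]
```

For this eddy the peak speed comes out around 0.2 m/s, a typical mesoscale value; on real grids f varies with latitude and the balance breaks down near the equator.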
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall G2)

Presentation: New ocean dynamics assessment strategy through SST frontal detection

Authors: Lucile Gaultier, Fabrice Collard, Clément Pouplin, Sammy Metref
Affiliations: OceanDataLab, France Energies Marines, Datlas
Over the past decade, advancements in satellite sensor technology have significantly enhanced our ability to observe a wide range of oceanic physical variables at various spatial and temporal scales. For ocean temperature retrieval, we now have access to complementary sensors, including high-resolution instruments (e.g., S3-SLSTR, VIIRS, MODIS), medium-resolution geostationary infrared sensors (e.g., SEVIRI), and low-resolution microwave sensors (e.g., AMSR2/GMI). Together, these datasets offer a wealth of information about the upper ocean’s dynamics across different scales. To fully leverage this data, there is a growing need to treat it as structured information, rather than isolated, pointwise observations. As part of the ESA World Ocean Circulation (WOC) project, we developed an automatic algorithm to detect ocean fronts in sea surface temperature (SST) data. This method identifies fronts by detecting two distinct populations in the data, offering a robust alternative to traditional gradient-based approaches that are often sensitive to noise or cloud contamination. The algorithm has been successfully applied to various types of SST datasets, including Level 2 (L2), Level 3 (L3), and Level 4 (L4) products, creating a rich 10-year database of SST fronts now available on the WOC website (https://www.worldoceancirculation.org/Products). These SST fronts provide valuable insights into the ocean’s surface dynamics, as they often correspond to Lagrangian Coherent Structures that shape the transport and mixing of ocean properties. Building on this, we defined a metric that uses the direction of the frontal structures to automatically assess the direction of the current in a given product. In this talk, we will present the methodology used for detecting these fronts and demonstrate how they can be applied to assess surface ocean currents.
Additionally, we will discuss the intercomparison of various velocity products, including those from the WOC project, CMEMS/GlobCurrent, and other providers, highlighting the utility of SST fronts as a diagnostic metric for evaluating ocean circulation models and observational datasets. Finally, we will show how this metric has been successfully implemented operationally to help maritime transport companies find the best estimate of the upper ocean current in near real time, enabling them to optimize their routes efficiently. The metric has been implemented in an open-source Python package, making it accessible for researchers and developers to use and build upon. Additionally, a Data Challenge has been launched to engage the community, providing an opportunity to explore and apply these tools in innovative ways and to compare your results with WOC, CMEMS or other available upper ocean velocity products. Details about the challenge, including participation guidelines and resources, can be found at https://2024-dc-woc-esa.readthedocs.io/en/latest/. We encourage you to join and contribute your expertise! If you’re interested, don’t hesitate to reach out. We will also be available at the ESA booth during the event to answer questions and provide more details about participation in the Data Challenge.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.14)

Session: A.03.08 Terrestrial Carbon Cycle Assimilation System - PART 1

The Terrestrial Carbon Community Assimilation System (TCCAS) is designed around the newly developed D&B terrestrial biosphere model (https://doi.org/10.5194/egusphere-2024-1534). D&B builds on the strengths of its two component models, DALEC and BETHY, in that it combines the dynamic simulation of the carbon pools and canopy phenology of DALEC with the dynamic simulation of water pools, and the canopy model of photosynthesis and energy balance of BETHY. Both component models have a long track-record of successful data assimilation applications. TCCAS includes a suite of dedicated observation operators that allows the simulation of solar-induced fluorescence (SIF), fraction of absorbed photosynthetically active radiation (FAPAR), vegetation optical depth from passive microwave sensors, and surface layer soil moisture. The model is embedded into a variational assimilation system that adjusts a control vector to match the observational data streams. For this purpose, TCCAS is provided with efficient tangent-linear and adjoint code. The control vector consists of a combination of initial pool sizes and process parameters in the core model and in the observation operators.
TCCAS and D&B are open source developments. The session will provide a combination of presentations and hands-on exercises to participants with an interest in applying the systems for their research or their teaching. Topics covered include: The terrestrial carbon cycle, D&B, observations and observation operators, fundamentals of data assimilation and their implementation in TCCAS.
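The variational machinery described above can be illustrated with a toy problem. The sketch below uses a linear observation operator standing in for the D&B observation operators and identity error covariances; all sizes and matrices are illustrative assumptions, not TCCAS components.

```python
import numpy as np

# Variational cost function over the control vector x:
#   J(x) = 0.5*(x - xb)^T B^-1 (x - xb) + 0.5*(H x - y)^T R^-1 (H x - y)
# For a linear H the minimizer solves (B^-1 + H^T R^-1 H) x = B^-1 xb + H^T R^-1 y.
rng = np.random.default_rng(0)
n_ctrl, n_obs = 3, 8                  # toy control-vector and observation sizes
xb = np.zeros(n_ctrl)                 # background control vector
Binv = np.eye(n_ctrl)                 # inverse background-error covariance
Rinv = np.eye(n_obs)                  # inverse observation-error covariance
H = rng.normal(size=(n_obs, n_ctrl))  # toy linear observation operator
y = H @ np.ones(n_ctrl)               # synthetic obs generated from a "true" state of ones

xa = np.linalg.solve(Binv + H.T @ Rinv @ H, Binv @ xb + H.T @ Rinv @ y)

# The gradient of J, computed via the adjoint H.T (the role played by the
# adjoint code in TCCAS), vanishes at the analysis:
grad = Binv @ (xa - xb) + H.T @ Rinv @ (H @ xa - y)
```

In TCCAS the operator is the nonlinear D&B model plus observation operators, so the gradient comes from tangent-linear and adjoint code and the minimization is iterative, but the structure of the cost function is the same.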

Speakers:


  • M. Drusch
  • T. Kaminski
  • W. Knorr
  • T. Quaife
  • P. Rayner
  • M. Scholze
  • L. Smallmann
  • M. Voßbeck
  • M. Williams
  • Sönke Zaehle
  • Songyan Zhu
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall N1/N2)

Session: C.01.08 Optical instrument concepts and technologies to enable new EO science missions

This theme aims to discuss new optical remote sensing techniques and instrument concepts, highlighting how they will expand our understanding of the Earth, complement existing assets and improve the capability and performance of existing Earth observation missions.
The session aims also at fostering discussions of potential future Earth Observation missions enabled by innovative optical payloads and technologies.

Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Radiometric sensitivity and instrument requirements for management-relevant satellite-based coral reef habitat mapping

Authors: Elizabeth Botha, Tim Malthus, Chris Roelfsema, Mitchell Lyons, Courtney Bright, David Thompson, Arnold Dekker, Joshua Pease, Dylan Cowley, David Ardila, Rob Green, Alex Held
Affiliations: CSIRO Environment, CSIRO Environment, University of Queensland, CSIRO Space and Astronomy, CSIRO Manufacturing, University of New South Wales, NASA Jet Propulsion Laboratory
Coral reef ecosystems are among the most productive and biologically diverse environments on Earth, serving as critical habitat and supporting the livelihoods of nearly a billion people. They provide essential goods and services that are vital to the health of broader connected ecosystems and the welfare of local communities. However, coral reef ecosystems are increasingly threatened by global climate change and anthropogenic disturbances. Effective management and protection of these ecosystems, particularly at regional and national scales, rely on up-to-date spatial information systems that inform economic and environmental valuations, including for the United Nations Sustainable Development Goal 14 (Life Below Water) and several targets of the Kunming-Montreal Global Biodiversity Framework. Live coral cover—a key metric for ecosystem health—is typically assessed through field-based measurements, which are costly and limited in spatial coverage. Due to the size and remoteness of coral reef systems like the Great Barrier Reef, state-of-the-reef reporting is necessarily extrapolated across large regions from only a small selection of in-situ surveys, despite the inherent local-scale variability and patchiness of reef ecosystems and their condition. Satellite-based remote sensing has the potential to transform monitoring by sampling at spatial scales not otherwise possible, but current systems lack the spectral resolution and radiometric sensitivity required to distinguish between live coral and other benthic cover types at sufficient spatial resolution. This study aimed to determine the radiometric sensitivity requirements for an imaging spectrometer to detect changes in live coral cover across different water conditions and depths. The work occurs in the context of the AquaSat-1 mission concept, which is designed to advance the capabilities of aquatic remote sensing with a state-of-the-art imaging spectrometer and targeted imaging strategy.
Using field-based underwater spectral reflectance measurements, field-based bio-optical characterisation, radiative transfer modelling, and instrument modelling, we simulated remote sensing reflectance to assess sensitivity thresholds for detecting fractional cover changes in coral reef habitat benthic types. Our analysis focussed on two management-relevant benthic type combinations that are known to co-occur within pixel sizes less than 30 m: live coral, turf algae, macroalgae, and crustose coralline algae; and seagrass and sand. Our findings indicate that detecting live coral cover changes from space at 10 metres depth in relatively clear water requires an instrument with high spectral resolution (≤ 10 nm) and high signal-to-noise ratio. With the AquaSat-1 mission concept, we propose to achieve this using a ground motion compensation (GMC) imaging strategy, where the satellite’s pointing remains fixed over a target site to enhance measurement sensitivity. GMC allows for a ~3.5-fold increase in achievable SNR across the visible spectrum. However, as depth increases (beyond 5–10 m) the ability to discriminate changes in coral rapidly deteriorates due to the relative similarity in the spectra of the different coral targets. For seagrass cover, the higher spectral contrast between seagrass and sand allows for detection of cover changes at greater depths, extending beyond 20 metres with GMC under clear water conditions. These results inform satellite requirements for mapping coral reef habitat benthic types with acceptable uncertainties. A targeted sensor such as AquaSat-1 would provide a significant performance improvement over current assessments of global live coral cover status based on contemporary multispectral sensors, which lack the spectral and radiometric capabilities required at relatively high (<30 m GSD) spatial resolution.
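The ~3.5-fold SNR gain quoted for GMC is consistent with co-adding statistically independent looks of the same scene, where SNR grows with the square root of the number of looks. A back-of-envelope check (the frame count below is our assumption inferred from the quoted gain, not a stated mission parameter):

```python
import math

def coadd_snr_gain(n_looks: int) -> float:
    # Co-adding N independent, noise-limited looks of the same ground target
    # improves SNR by sqrt(N).
    return math.sqrt(n_looks)

n_looks = 12                      # assumed number of frames dwelling on the target under GMC
gain = coadd_snr_gain(n_looks)    # ~3.46, i.e. roughly the quoted ~3.5-fold improvement
```

Equivalently, a 3.5-fold gain implies roughly 3.5² ≈ 12 effective looks, which is what holding the pointing fixed over the site buys relative to a single push-broom pass.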
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Overview of the Cloud and Aerosol Lidar for Global Scale Observations of the Ocean-Land-Atmosphere System

Authors: Paolo Di Girolamo, Davide Dionisi, Marco Di Paolantonio, Donato Summa, Noemi Franco, Simone Lolli, Giuseppe D’Amico, Lucia Mona, Rosalia Santoleri, Francesco Tataranni, Sara Venafra, Arianna Rinaldi, Francesco Longo, Raffaele Votta, Valentina Sacchieri, Francesco Coppola, Pasquale Ferrara, Peter Coppo, Alberto Cosentino, Yongxiang Hu, Chris A. Hostetler, Tyler Thorsen, John Smith, Mark Vaughan, Jason Tackett, Brian Getzewich, Michael J. Behrenfeld, Scott Braun, Bob Holz, Gerald “Jay” Mace, Hal Maring, Laura Lorenzoni, Stephen R. Hall, Charles R. Trepte(f)
Affiliations: Università degli Studi della Basilicata, Institute of Marine Sciences, Italian National Research Council, Institute of Methodologies for Environmental Analysis, Italian National Research Council, Italian Space Agency, Leonardo S.p.A, NASA Langley Research Center, Oregon State University, University of Wisconsin-Madison, University of Utah
The Cloud and Aerosol Lidar for Global Scale Observations of the Ocean-Land-Atmosphere System (CALIGOLA) is an advanced multi-purpose space lidar mission, with a focus on atmospheric and oceanic observation, aimed at characterizing the Ocean-Earth-Atmosphere system and the mutual interactions within it. The mission has been conceived by the Italian Space Agency (ASI) with the aim of providing the international scientific community with an unprecedented dataset of geophysical parameters capable of increasing scientific knowledge in the areas of atmospheric, aquatic, terrestrial, cryospheric and hydrological sciences. The Italian Space Agency is partnering with NASA on this exciting new space lidar mission. The mission is planned to be launched in the 2031-2032 time frame, with an expected lifetime of 3-5 years. Exploiting the three Nd:YAG laser emissions at 354.7, 532 and 1064 nm and the elastic (Rayleigh-Mie), depolarized, Raman and fluorescent lidar echoes from atmospheric and ocean constituents, CALIGOLA will carry out multi-wavelength profile measurements of the particle backscatter, extinction and fluorescence coefficients and the depolarization ratio. Atmospheric measurements of these parameters allow for the determination of the microphysical and dimensional properties of aerosols and clouds and their typing. By measuring the elastic backscatter echoes from the sea surface and the underlying layers, and their degree of depolarization, CALIGOLA will be able to carry out measurements of ocean optical properties and of suspended particulate matter, which are needed to study phytoplankton seasonal and inter-annual dynamics and to improve understanding of the role of phytoplankton in marine biogeochemistry, the global carbon cycle and the responses of marine ecosystems to climate variability.
One specific measurement channel at 685 nm will be dedicated to fluorescence measurements from atmospheric aerosols and marine chlorophyll, for the purpose of aerosol typing and for characterizing ocean primary production. CALIGOLA will also carry out accurate measurements of the small-scale variability of the Earth's surface elevation, primarily associated with variations in ice and snow, terrain, vegetation and forest canopy height. Phase A studies, commissioned by the Italian Space Agency to Leonardo S.p.A. and focusing on the technological feasibility of the laser source and the receiver, were carried out in the period October 2022-October 2024, while Phase B1 activities will start in December 2024. Scientific studies in support of the mission are ongoing, commissioned by the Italian Space Agency to the University of Basilicata (KO: November 2021) and ISMAR-CNR (KO: September 2023). In September 2023, NASA-LARC initiated a pre-formulation study to assess the feasibility of a possible contribution to the CALIGOLA mission, based on the development of the detection system and sampling chain and the implementation of data downlink capabilities. The pre-formulation study ended recently, and a review process of its outcome is presently underway.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Hyperspectral Sensor Initiatives Under the Japanese SBIR Program

Authors: Takahiro Kawashima, Akira Iwasaki, Tomonori Deguchi, Ryo Ui, Takayoshi Fukuyo
Affiliations: The University of Tokyo, ArkEdge Space Inc.
The Small Hyperspectral Sensor Satellite Project is being developed under the Japanese Small Business Innovation Research (SBIR) program, led by Japan's Ministry of Economy, Trade, and Industry (METI). It is scheduled for launch in 2027 and is designed to observe vegetation, soil, minerals, coastal regions, and greenhouse gases, among other targets. While its observation specifications are similar to those of the ISS-mounted Japanese hyperspectral sensor HISUI, the project aims to reduce both mass and cost to approximately one-tenth of HISUI's. This significant reduction makes future constellation-based observations feasible. Furthermore, from the initial stages, the design considers how uncertainties induced by factors such as noise, unknown spectral response functions (SRFs), and other influences propagate to the final physical quantities being measured. In this way, we aim to develop the instrument within a short timeframe and at low cost while ensuring the provision of Analysis Ready Data (ARD) that is traceable from the sensor hardware to the data analysis algorithm.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Design of an Optical Ranging Payload for Satellite Gravimetry with Small Satellites

Authors: Elisabeth Paul, Matthew Darrow, Nikolas Pfaffenzeller, Bastian Eder, Thomas Gruber, Roland Pail
Affiliations: MUnique Technology GmbH, Institute of Astronomical and Physical Geodesy, Technical University of Munich
Satellite gravimetry is an observation technique for determining temporal and spatial changes in the Earth's gravitational field. Two satellites orbit the Earth at a distance of around 200 km from each other. The Earth's gravitational field can be calculated from the change in their distance. This enables the determination of the mass distribution on the Earth and thus the observation of climate-relevant processes such as ice melt and ocean currents. Satellite gravimetry offers a unique opportunity to monitor groundwater resources globally, thus making a significant contribution to water resource management and the prediction of droughts and floods. The measurement method has already been used very successfully in the GRACE and GRACE-FO missions. A joint ESA/NASA follow-up mission, called “Mass Change And Geosciences International Constellation” (MAGIC), is already in the development phase. All of these missions are carried out using conventional satellites several metres in size. A cost-effective solution using small satellites (e.g. CubeSats) has already been investigated in several studies. With a constellation of several satellite pairs, higher-frequency mass variations can be observed and, consequently, the temporal aliasing error, which is currently the limiting factor, can be reduced significantly. A small-satellite gravimetry mission requires a miniaturized ranging instrument with the necessary performance, which has not been available so far. The novel DORT (Dynamic Optical Ranging & Timing) measurement method enables sub-micrometre resolution at distances of up to several hundred kilometres. It is an optical ranging device that relies on ultrashort-pulse lasers. The innovation lies in the mathematical algorithm, so that no complex hardware is required. The ranging device is an all-in-fibre design that fits into a small satellite or even a CubeSat.
With DORT, a satellite gravimetry mission can be realised at a fraction of the cost of conventional large satellites. This article presents the design of the ranging payload for a small-satellite gravimetry mission. In addition to the ranging device DORT, a pointing, acquisition and tracking unit is needed to establish and maintain an optical link between the two satellites. The restrictions of a satellite gravimetry mission are taken into account, such as minimal displacement of the centre of mass or tilt-to-length coupling. In addition to the consideration of diffraction limitations, a concept for compensating transmission losses that is compatible with the DORT measurement method must be integrated. The complete DORT ranging payload fits within a volume of <4U of a CubeSat, where 1U corresponds to 1000 cm³. The DORT system is a big step towards the realization of future satellite gravimetry missions.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Spaceborne Water Vapour DIAL: Benefits, Technology and Synergies

Authors: Martin Wirth, Silke Groß, Andreas Fix
Affiliations: DLR
Water vapour is the key trace-gas component of the air and is involved in virtually all relevant atmospheric processes. Knowing its vertical profile with decent resolution is crucial for a large number of atmospheric science questions, and especially for numerical weather prediction. Specifically, the ability to measure water vapour throughout the atmosphere, and particularly in the lower troposphere over both land and sea, would greatly enhance our understanding of: (i) the coupling of deep convection to its environment and the factors influencing the genesis of tropical storms; (ii) the controls on the distribution of shallow convection, the intensity of cloud feedbacks and thus climate sensitivity; (iii) the predictability of weather on a wider range of time-scales; (iv) the origin of model biases in the upper troposphere and lower stratosphere; (v) mechanisms of long-range moisture transport associated with weather extremes in the mid-latitudes; and (vi) estimates of boundary layer moisture and hence a better quantification of the surface energy budget, as well as land-atmosphere and ocean-atmosphere interactions. Such a data product could be provided by a Differential Absorption Lidar (DIAL) on a satellite in a low Earth orbit. The key features of such an instrument are high accuracy, low bias, and high vertical resolution for profiles extending from the ground to the upper troposphere and into the lowermost stratosphere. It has the capability to obtain profiles above cloud tops as well as through and below clouds of low optical depth, and it has the potential to sound through gaps in scattered or broken cloud fields. It could provide simultaneous information on water vapour, cloud top height, aerosol optical depth, and boundary layer height. The H2O retrieval by DIAL is free of a priori assumptions, with a very low sensitivity to other parameters (e.g.
surface emissivity, temperature profiles, other trace gases, aerosols) needed by other remote sensing techniques. After aerosol/cloud and wind lidars have been applied very successfully within space missions, the natural next step would be the profiling of water vapour by a Differential Absorption Lidar (DIAL) from a satellite in a low Earth orbit. About 20 years ago the ESA Earth Explorer proposal for such an instrument, called WALES, went through Phase A, but was not selected due to the identified technological risks and the corresponding financial efforts. Thanks to the European spaceborne lidar missions Aeolus/2, EarthCARE, and MERLIN, the major building blocks for such a water vapour DIAL have now reached the necessary technological readiness to realize such a program within the financial limits of a typical Earth Explorer mission. We will review the benefits of water vapour profiling by lidar as compared to passive sensors for different applications and present an updated system design based on the current European space lidar component pool. The technological developments still needed will be highlighted, with a special focus on parts and subsystems where the highest impact on instrument performance is expected. The optical parametric oscillator (OPO) based technology proposed for the lidar transmitter simultaneously generates light pulses at the wavelengths needed to retrieve water vapour (around 935 nm), and also at 1064 nm and 532 nm. Thus, it would offer the possibility to build a combined H2O/aerosol lidar at much lower cost than two separate instruments. Besides the financial benefit, this opens up new science applications in aerosol/ice-cloud interaction, where current systems like EarthCARE are missing a collocated measurement of relative humidity, which is absolutely essential to understand ice cloud formation.
Since there is also great interest in such a system within science groups at non-European space agencies, this enhanced version could be realized through an international collaboration.
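Why the DIAL retrieval is free of a priori assumptions can be seen from the standard two-wavelength, two-range ratio, in which calibration, overlap and geometry factors cancel. The sketch below uses illustrative cross sections and density, not WALES specifications:

```python
import math

# DIAL ratio: the absorber number density between ranges r1 and r2 follows from
#   n = ln[(P_off(r2) * P_on(r1)) / (P_off(r1) * P_on(r2))] / (2 * dsigma * (r2 - r1))
# where dsigma is the on-line minus off-line absorption cross section.
sigma_on, sigma_off = 3e-27, 1e-28   # illustrative absorption cross sections [m^2]
n_true = 4e23                        # illustrative H2O number density [m^-3]
r1, r2 = 3000.0, 3150.0              # two range gates [m]

def power(r, sigma):
    # 1/r^2 geometry times two-way absorption; all range-independent factors
    # (laser energy, calibration, overlap) cancel in the four-signal ratio.
    return (1.0 / r**2) * math.exp(-2.0 * sigma * n_true * r)

dsigma = sigma_on - sigma_off
n_ret = math.log((power(r2, sigma_off) * power(r1, sigma_on))
                 / (power(r1, sigma_off) * power(r2, sigma_on))) / (2.0 * dsigma * (r2 - r1))
```

The retrieved density equals the true one without any assumed temperature, emissivity or aerosol profile; in practice only the spectroscopy (the cross sections) and the signal noise limit the accuracy.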
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Scientific applications of the VULCAIN space mission project

Authors: Vito Romaniello, Fabrizia Maria Buongiorno, Michèle Roberta Lavagna, Niccolò Camarlinghi, Malvina Silvestri, Simona Scollo
Affiliations: Istituto Nazionale di Geofisica e Vulcanologia, Politecnico di Milano, Aerospace Science & Technology Dept., Flysight srl
Monitoring changes in volcanic activity is crucial to understanding signs of impending eruptions. Satellite data provide a very effective tool for studying active volcanoes remotely and understanding their evolution by analysing different characteristic parameters. During the last twenty years, remarkable progress has been made in volcanic remote sensing to measure temperatures and gas emissions; the newly planned space missions equipped with thermal sensors, such as ESA-LSTM and NASA/ASI-SBG, will contribute greatly to this goal. In this framework, the VULCAIN mission study aims to design and characterize new CubeSat satellites for Earth Observation (EO) dedicated to volcanoes. The project, funded by the Italian Space Agency (ASI) and supervised by the European Space Agency (ESA), involves the construction of two 12U nanosatellites flying in formation in order to obtain stereoscopic images. Each satellite carries two Commercial Off-The-Shelf (COTS) instruments on board: a visible camera and a multispectral thermal camera with four channels in the 8-12 µm spectral range, with Ground Sampling Distances (GSD) of about 30 m and 90 m, respectively. The main scientific objectives are to measure the Land Surface Temperature (LST) and the SO2 content in degassing plumes, and to detect ash emissions from active volcanoes. Furthermore, combining visible and thermal data improves observation of volcanic areas. Using two nanosatellites in formation makes it possible to obtain 3D images of volcanoes: while visible stereoscopic images are useful for the morphological analysis of volcanic sites, an innovative aspect of the space mission is the possibility to reproduce stereoscopic views of thermal acquisitions. This can provide precious information about the thermal structure of volcanic edifices and enable the 3D analysis of SO2 and ash plumes.
Finally, a total of 34 volcanic sites, located in Europe, Central and South America and Indonesia, have been selected as primary targets for the VULCAIN space mission. In this work, we present the scientific applications that can be achieved using VULCAIN acquisitions and several examples of applications from current EO space missions (i.e. Landsat, ASTER, Sentinel-2), including results from data fusion (visible-thermal), super-resolution and stereoscopic techniques.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.85/1.86)

Session: A.09.02 Dynamic Antarctica: From the coastal margins to the deep interior - PART 1

Since 1992 mass loss from the Antarctic Ice Sheet has contributed approximately 8 mm to global mean sea level rise (IMBIE; Otosaka et al., 2023). Mass loss from the Antarctic Ice Sheet not only increases global mean sea level, but also impacts ocean circulation, marine biology and Antarctic sea ice. Current changes in Antarctica have been driven by both ocean and atmospheric melting of ice shelves and grounded margins. Intrusions of warm Circumpolar Deep Water onto the continental shelf around Antarctica are the primary driver of ice-shelf basal melting, while atmospheric warming and extreme events have increased supraglacial melting on ice shelves and in coastal regions, leading to firn densification and increased potential for supraglacial ponding and ice-shelf collapse.

Changes at the coastal margins, such as ice-shelf thinning, weakening and collapse, reduce the ability of ice shelves to provide buttressing to inland grounded ice. Therefore, the impact of these changes can propagate inland, with the potential to destabilize large areas of the ice sheet. Meanwhile, the dynamics and stability of the grounded ice sheet are controlled by multiple factors, including bed topography, basal friction, subglacial hydrology, geothermal heat flux and englacial temperature.

Advanced satellite Earth observations, together with improvements in data processing, modelling and AI/ML, make it increasingly possible to monitor change and its impacts and to improve understanding of these processes. This session will highlight recent advances in our understanding of the dynamics of the Antarctic Ice Sheet, including:
- Interactions between the atmosphere and ice-sheet surface: surface mass balance, firn evolution, supraglacial hydrology, and the impact of extreme events.
- Quantifying ice-shelf basal melting, its spatial distribution, and drivers.
- The dynamics and stability of inland ice: bed topography, basal friction, subglacial hydrology, geothermal heat flux.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Change in Pinning Point Area across all Antarctic Ice Shelves from 2015 to 2024

Authors: Yikai Zhu, Anna E. Hogg, Professor Andrew Hooper
Affiliations: University of Leeds, Wuhan University
The Antarctic Ice Sheet has been losing mass at an accelerating rate, contributing significantly to global sea-level rise and increasing risks to coastal communities (Andrew et al., 2018; Greene et al., 2022; Kaitheri et al., 2024). Warm ocean currents weaken the buttressing effects of ice shelves, leading to enhanced basal melting, thinning, and increased ice discharge into the ocean (Jenkins et al., 2010; Pritchard et al., 2012). These processes are most pronounced in West Antarctica and along the Wilkes Land coastline. Pinning points, which are locally grounded features on ice shelves, play a critical role in stabilizing ice shelves by anchoring the ice to bedrock, slowing flow, and enhancing structural integrity (Matsuoka et al., 2015; Roberts et al., 2018). They also modify basal melting patterns by influencing ocean circulation beneath the ice. Monitoring changes in pinning point areas provides a reliable proxy for ice-shelf thickness changes, helping to assess stability and reduce uncertainties in models predicting Antarctica’s future sea-level contributions (Miles and Bingham, 2024; Ritz et al., 2015). While satellite altimetry has shown significant thinning in parts of Antarctica over the past 30 years, these records are too short compared to the multidecadal response times of many ice shelves (Edwards et al., 2019). Extending observational records through pinning point monitoring is crucial for improving future projections of ice-sheet behaviour. In this study we measure change in pinning points across Antarctica’s ice shelves using the Differential Radar Offset Tracking (DROT) technique (Joughin et al., 2016; Wallis et al., 2024) applied to Sentinel-1 SAR data collected between 2015 and 2024. From this dataset we map the location of pinning points across Antarctic ice shelves, and measure change in pinning point area. 
These DROT results effectively delineate the boundaries between floating ice and grounded pinning points, identifying 501 pinning points across 43 ice shelves from the 162 analysed. Notably, the Abbot Ice Shelf exhibits the highest concentration, with 72 distinct pinning points detected. A detailed temporal analysis of pinning point areas indicates significant trends. Over the study period, 58% of the pinning points exhibit a decline in area, suggesting potential instability, while 25% show area growth, and the remaining 17% exhibit no notable changes. Regionally, pinning points in West Antarctica demonstrate a more pronounced reduction in area compared to those in East Antarctica, likely reflecting the heightened sensitivity of West Antarctic ice shelves to ocean-driven melting. To investigate the potential drivers of these changes, we conducted a comparative analysis of pinning point area change percentages and basal melting rates derived from independent datasets. Our results reveal a strong correlation between high basal melt rates and pinning point area loss. Pinning points experiencing the most significant shrinkage were predominantly located in regions with elevated basal melt rates, particularly in West Antarctica. This suggests that basal melting plays a primary role in destabilizing pinning points, which in turn weakens ice shelf stability and accelerates ice sheet mass loss. These findings have important implications for understanding ice shelf dynamics and their role in Antarctic ice sheet stability. Pinning points act as key indicators of changes in ice shelf thickness and stability over time. Their vulnerability to basal melting highlights the critical influence of ocean-ice interactions on ice shelf dynamics. Furthermore, the observed spatial and temporal trends in pinning point behaviour provide valuable insights for improving models of ice shelf responses to climate forcing. 
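The decline/growth/stable classification of pinning-point areas described above can be sketched as a simple trend test on each area time series. This is a minimal illustration with invented numbers and an assumed stability tolerance, not the study's actual method.

```python
import numpy as np

def classify_trend(years, areas, rel_tol=0.05):
    """Classify a pinning point's area time series as 'decline',
    'growth', or 'stable' from the sign of a least-squares linear
    trend, treating relative changes smaller than rel_tol over the
    record as stable (the tolerance is illustrative)."""
    slope = np.polyfit(years, areas, 1)[0]
    total_change = slope * (years[-1] - years[0]) / np.mean(areas)
    if total_change < -rel_tol:
        return "decline"
    if total_change > rel_tol:
        return "growth"
    return "stable"

# Hypothetical pinning-point areas (km^2) sampled over the study period
years = np.array([2015, 2018, 2021, 2024])
print(classify_trend(years, np.array([10.0, 9.2, 8.5, 7.9])))  # decline
```

Applied to all 501 detected pinning points, counts of each class would yield percentages analogous to the 58% / 25% / 17% split reported above.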
Continued monitoring of pinning points is essential for reducing uncertainties in future sea-level projections and understanding the impacts of a warming climate on Antarctic ice shelves.

References:
- Andrew, S., Erik, I., Eric, R., and Ben, S.: Mass balance of the Antarctic Ice Sheet from 1992 to 2017, Nature, 558, 219–222, https://doi.org/10.1038/s41586-018-0179-y, 2018.
- Edwards, T. L., Brandon, M. A., Durand, G., Edwards, N. R., Golledge, N. R., Holden, P. B., Nias, I. J., Payne, A. J., Ritz, C., and Wernecke, A.: Revisiting Antarctic ice loss due to marine ice-cliff instability, Nature, 566, 58–64, https://doi.org/10.1038/s41586-019-0901-4, 2019.
- Greene, C. A., Gardner, A. S., Schlegel, N.-J., and Fraser, A. D.: Antarctic calving loss rivals ice-shelf thinning, Nature, 609, 948–953, https://doi.org/10.1038/s41586-022-05037-w, 2022.
- Jenkins, A., Dutrieux, P., Jacobs, S. S., McPhail, S. D., Perrett, J. R., Webb, A. T., and White, D.: Observations beneath Pine Island Glacier in West Antarctica and implications for its retreat, Nature Geoscience, 3, 468–472, https://doi.org/10.1038/ngeo890, 2010.
- Joughin, I., Shean, D. E., Smith, B. E., and Dutrieux, P.: Grounding line variability and subglacial lake drainage on Pine Island Glacier, Antarctica, Geophysical Research Letters, 43, 9093–9102, https://doi.org/10.1002/2016GL070259, 2016.
- Kaitheri, A., Otosaka, I., and Shepherd, A.: Mass Balance of Greenland and Antarctic Ice Sheets since the 1970s, Copernicus Meetings, 2024.
- Matsuoka, K., Hindmarsh, R. C., Moholdt, G., Bentley, M. J., Pritchard, H. D., Brown, J., Conway, H., Drews, R., Durand, G., Goldberg, D., and others: Antarctic ice rises and rumples: Their properties and significance for ice-sheet dynamics and evolution, Earth-Science Reviews, 150, 724–745, https://doi.org/10.1016/j.earscirev.2015.09.004, 2015.
- Miles, B. W. and Bingham, R. G.: Progressive unanchoring of Antarctic ice shelves since 1973, Nature, 626, 785–791, https://doi.org/10.1038/s41586-024-07049-0, 2024.
- Pritchard, H. D., Ligtenberg, S. R., Fricker, H. A., Vaughan, D. G., van den Broeke, M. R., and Padman, L.: Antarctic ice-sheet loss driven by basal melting of ice shelves, Nature, 484, 502–505, 2012.
- Ritz, C., Edwards, T. L., Durand, G., Payne, A. J., Peyaud, V., and Hindmarsh, R. C.: Potential sea-level rise from Antarctic ice-sheet instability constrained by observations, Nature, 528, 115–118, https://doi.org/10.1038/nature16147, 2015.
- Roberts, J., Galton-Fenzi, B. K., Paolo, F. S., Donnelly, C., Gwyther, D. E., Padman, L., Young, D., Warner, R., Greenbaum, J., Fricker, H. A., and others: Ocean forced variability of Totten Glacier mass loss, Geological Society, London, 2018.
- Wallis, B. J., Hogg, A. E., Zhu, Y., and Hooper, A.: Change in grounding line location on the Antarctic Peninsula measured using a tidal motion offset correlation method, The Cryosphere, 18, 4723–4742, https://doi.org/10.5194/tc-18-4723-2024, 2024.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Influence of subglacial lake discharge on ice shelf melting and grounding line retreat of Thwaites Glacier, Antarctica

Authors: Noel Gourmelen, Livia Jakob, Paul Holland, Pierre Dutrieux, Daniel Goldberg, Suzanne Bevan, Adrian Luckman, George Malczyk
Affiliations: University of Edinburgh, Earthwave, British Antarctic Survey, Swansea University
The retreat of the Antarctic Ice Sheet is conventionally attributed to increased ocean melting of fringing ice shelves, potentially enhanced by internal instability due to the proximity of its grounding lines to retrograde bed slopes. Ocean melting is enhanced by increased intrusion of modified Circumpolar Deep Water (mCDW) within ice shelf cavities. Upwelling from the release of subglacial meltwater at the grounding line can enhance the ability of mCDW to melt ice shelves, but the efficacy of this process is still poorly understood and constrained, and it is currently not accounted for in projections of ice sheet loss. Here we quantify the impact of this process during an exceptional subglacial lake drainage event under Thwaites Glacier. We show that the buoyant plume created by subglacial discharge generated a pulse of ocean melting under Thwaites, during which melt rates doubled, causing the ice shelf to thin. We argue that these events likely contributed to rapid thinning and grounding line retreat along prograde slopes at Thwaites during that period. However, simulations show that a steady release of subglacial water is more efficient at enhancing the ice shelf basal melt rate at Thwaites overall, with melt rate increasing with the one-half power of the subglacial discharge. As a result, it remains to be determined whether increased subglacial flooding events provide a stabilising influence on West Antarctic ice loss, by decreasing the overall impact of subglacial water on ocean melting, or a destabilising influence, by triggering rapid changes at the grounding zone.
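The reported one-half-power dependence of basal melt on subglacial discharge can be written as a tiny scaling sketch. The reference values below are hypothetical; only the exponent comes from the abstract.

```python
def melt_enhancement(q_subglacial, q_ref, m_ref):
    """Scale the basal melt enhancement with the one-half power of
    subglacial discharge, m = m_ref * (q / q_ref)**0.5, as reported
    for Thwaites (reference discharge and melt are hypothetical)."""
    return m_ref * (q_subglacial / q_ref) ** 0.5

# Hypothetical numbers: quadrupling the discharge only doubles the
# melt enhancement. This sub-linear scaling is why a steady release
# of subglacial water yields more time-integrated melt than the same
# volume released in a short drainage pulse.
print(melt_enhancement(4.0, 1.0, 10.0))  # 20.0
```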
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Seasonal evolution of Antarctic supraglacial lakes in the Sentinel satellite era

Authors: Celia Baumhoer, Jonas Koehler, Stef Lhermitte, Bert Wouters, Andreas Dietz
Affiliations: German Aerospace Center (DLR), Earth Observation Center (EOC), German Remote Sensing Data Center (DFD), Delft University of Technology, Department of Geoscience & Remote Sensing, KU Leuven, Department of Earth & Environmental Sciences
Supraglacial lakes in Antarctica play a crucial role in ice sheet dynamics, as they influence ice stability by accelerating meltwater-driven processes such as meltwater injection and basal lubrication. The accumulation of meltwater on ice surfaces reduces albedo, causing more solar energy absorption, which in turn accelerates melting in a self-reinforcing cycle. It is therefore essential to monitor the seasonal evolution of surface meltwater features and to better understand their dynamics. Earth observation data provide continuous and long-term observations of the vast and remote Antarctic ice sheet, but extracting reliable information on supraglacial lakes requires more than one satellite sensor. During the short period of lake occurrence, optical satellite imagery might miss short-lived changes in supraglacial lake extent due to heavy cloud cover. In contrast, the extent of supraglacial lakes is very difficult to interpret in cloud- and illumination-independent SAR data, as slightly frozen-over lakes easily become invisible due to backscatter similarities with the surrounding ice. The Sentinel missions provide the best means of combining the advantages of both worlds and monitoring the seasonal evolution of supraglacial lakes at the best possible accuracy. We developed a deep learning-based classification algorithm to infer the extent of supraglacial lakes based on their spectral and backscattering properties, as well as their spatial appearance. The algorithm is based on a U-Net architecture with an Atrous Spatial Pyramid Pooling module to widen the receptive field and improve the detection of differently sized lakes. We trained the networks separately for each sensor and reached classification accuracies with an F1-score of 93% for Sentinel-1 and 91% for Sentinel-2. A final post-processing step combines both classification results into a bi-weekly recurrence map of maximum supraglacial lake extents during the austral summer months (November to March). 
This presentation addresses the advantages and challenges of fully automated supraglacial lake mapping with Sentinel-1 and Sentinel-2 data and presents bi-weekly recurrence maps for 19 Antarctic ice shelves and regions with repeated supraglacial lake formation. Statistics on anomalies in maximum lake extents provide a circum-Antarctic picture of the seasonal evolution of supraglacial lakes during the Sentinel era, 2014 to 2024. The circum-Antarctic data availability allows comparison of supraglacial lake dynamics between years with high (e.g. 2019/2020) and low (e.g. 2023/2024) circum-Antarctic surface melt occurrence, as well as regionally intense melt years (e.g. Dronning Maud Land 2017/2018). Accurate information on lake dynamics can inform hydrological and surface mass balance models and improve their ability to model surface melt processes.
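The post-processing step that merges the two sensor classifications can be sketched as a per-period logical OR of the Sentinel-1 and Sentinel-2 lake masks, followed by a recurrence count per pixel. This is a minimal sketch with synthetic masks, not the operational pipeline.

```python
import numpy as np

def recurrence_map(s1_masks, s2_masks):
    """Combine per-period Sentinel-1 and Sentinel-2 lake masks
    (lists of boolean arrays, one per bi-weekly period) with a
    logical OR, then count in how many periods each pixel was
    classified as a lake."""
    combined = [np.logical_or(a, b) for a, b in zip(s1_masks, s2_masks)]
    return np.sum(combined, axis=0)

# Two bi-weekly periods on a 2x2 tile (purely illustrative data)
s1 = [np.array([[True, False], [False, False]]),
      np.array([[True, False], [False, True]])]
s2 = [np.array([[False, False], [False, True]]),
      np.array([[False, False], [False, False]])]
print(recurrence_map(s1, s2))
# [[2 0]
#  [0 2]]
```

The OR combination is what lets SAR fill optical cloud gaps and optical data catch lakes that have frozen over at the surface.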
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Insights on surface-to-bed glacier hydrology in the Antarctic Peninsula from Sentinel-2 imagery, meltwater plumes and deep-learning.

Authors: Benjamin Wallis, Anna Hogg, David Hogg
Affiliations: University of Leeds
Subglacial hydrology is an important factor in the dynamic behaviour of ice sheets and tidewater glaciers. One area of the Antarctic Ice Sheet where understanding the subglacial hydrology is of particular interest is the Antarctic Peninsula, where recent detections of seasonal speed variations on tidewater glaciers suggest that surface meltwater reaching the glacier bed may be a key factor influencing ice dynamic behaviour on the Antarctic Peninsula Ice Sheet (APIS). However, surface-bed hydrological connections have not been directly observed through fieldwork on the mainland APIS, owing to the difficulty of accessing the subglacial environment. One process relevant to subglacial hydrology on the APIS which can be investigated through remote sensing is the appearance of turbid buoyant meltwater plumes at the terminus of tidewater glaciers. Plumes visible in remote sensing imagery are formed when buoyant meltwater laden with sediments from the glacial bed is discharged at a glacier's grounding line and rises to the ocean surface. Where meltwater plumes occur, they are significant for the dynamics of tidewater glaciers: they increase submarine melting of glacier termini and may increase the calving flux of tidewater glaciers by undercutting the ice front. Meltwater plumes are also important to ocean biogeochemistry and ecosystems, through the mixing of water masses and the supply of nutrients from subglacial sediments. Studying the timing, distribution and appearance of plumes can provide crucial insights into meltwater pathways in glacial hydrological systems. Such meltwater plumes are observed extensively in Greenland, Svalbard and other regions; in Antarctica and on the Antarctic Peninsula, however, observations have been limited. 
Here we develop a remote-sensing approach to map the locations and frequency of sediment plumes on the Antarctic Peninsula coastline using high-resolution multi-spectral imaging from Sentinel-2 satellites and a U-Net based convolutional neural network. This methodology allows us to detect small sediment plumes in images with high cloud and sea-ice densities. We apply our approach to the Antarctic Peninsula north of 65°S, including the South Shetland Islands, to produce a time-series of sediment plumes from 2015 to 2023 covering 150,000 km2. We use these results combined with outputs from regional climate models and reanalysis to assess the link between surface-visible sediment plumes and surface melt and runoff from the Antarctic Peninsula’s glaciers. We find that the timings and locations of sediment plumes correspond to modelled runoff and that plume area and runoff volume are correlated temporally. From this evidence, we conclude that surface to bed hydrological connections are likely to be widespread in the summer on the Antarctic Peninsula, providing a mechanism by which glaciers in this region may respond to atmospheric forcing on seasonal timescales. In the absence of offsetting feedback through efficient subglacial drainage observed elsewhere, our results highlight the potential sensitivity of Antarctic Peninsula ice dynamics to projected increases in meltwater availability.
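The temporal correlation reported above between plume area and modelled runoff can be quantified with a plain Pearson coefficient; the time series below are invented for illustration.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two time series, e.g. per-period
    sediment-plume area and modelled runoff volume."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd**2).sum() * (yd**2).sum()))

# Illustrative melt season: plume area tracking modelled runoff
runoff = [0.0, 1.0, 3.0, 2.0, 0.5]        # hypothetical runoff units
plume_area = [0.1, 0.9, 2.8, 2.1, 0.4]    # hypothetical km^2
print(round(pearson_r(plume_area, runoff), 3))  # 0.995
```

A high coefficient alone does not prove a surface-to-bed connection, which is why the study also checks that plume timings and locations match modelled runoff spatially.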
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Shades of Blue: Quantifying Antarctic Peninsula Supraglacial Hydrology Using Multi-Sensor Satellite Data and Machine Learning

Authors: Luke Trusel, Mahsa Bahrami, Rajashree Tri Datta
Affiliations: Pennsylvania State University, TU Delft
The stability of Antarctic ice shelves is central to controlling ice sheet contributions to sea-level rise, with supraglacial lakes playing a potentially destabilizing role. However, current estimates of these lakes are limited in spatial and temporal coverage, with existing techniques often relying on subjective thresholds and prone to misclassification. To address these challenges, we developed a U-Net deep learning model to detect and quantify supraglacial lakes on the Antarctic Peninsula using Landsat 8/9 and Sentinel-2 imagery spanning 2013–2025. Our approach significantly reduces water misclassification errors commonly associated with previous spectral-based methods, particularly those caused by shadows from rocks and clouds. Moreover, our deep learning-based method identifies a greater extent of water coverage and, by combining multiple sensors within a unified framework, substantially improves the temporal frequency of lake observations. Using coincident ICESat-2 altimetry data, we estimate lake depths, which are then used to train machine learning models for large-scale depth estimation from multispectral reflectance. These methods enable enhanced mapping of water presence and volumes, advancing our understanding of surface hydrology dynamics. Initial findings reveal large interannual variability in supraglacial water coverage on the Antarctic Peninsula, with ice shelf water extent peaking in 2023, surpassing that of the previous high-melt years 2020 and 2022. This variability in surface water coverage generally tracks changes in surface meltwater production, as simulated by regional climate models, through 2020. For western Antarctic Peninsula ice shelves, however, years after 2020 exhibited high supraglacial water extents despite relatively reduced surface meltwater production. 
In contrast, the Larsen C ice shelf of the northeastern Antarctic Peninsula showed a substantial increase in supraglacial water extent in 2023, only after three successive high-melt seasons spanning 2020–2022. These results may therefore provide critical insights into firn layer evolution and the processes governing supraglacial hydrology in this rapidly changing region. This work demonstrates significant advances in satellite-based water mapping and volume estimation, providing a framework for continent-wide application and enhancing our understanding of surface hydrology’s role in shaping the ice sheet’s response to climate change.
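The ICESat-2-trained depth-estimation step can be illustrated with a minimal empirical model in the spirit of radiative-transfer-based lake-depth retrievals, where depth varies with the logarithm of water-leaving reflectance. The functional form, coefficients and data below are assumptions for illustration, not the authors' trained machine-learning model.

```python
import numpy as np

def fit_depth_model(reflectance, icesat2_depth):
    """Fit an empirical depth model z = a + b*ln(R) by least squares
    to coincident ICESat-2 depths (illustrative stand-in for the
    multispectral ML depth model described in the abstract)."""
    b, a = np.polyfit(np.log(reflectance), icesat2_depth, 1)
    return a, b

def predict_depth(reflectance, a, b):
    """Apply the fitted model to new reflectance values."""
    return a + b * np.log(reflectance)

# Synthetic training pairs: darker water (lower R) means deeper lake
refl = np.array([0.30, 0.20, 0.10, 0.05])
depth = np.array([1.0, 1.8, 3.2, 4.6])       # m, hypothetical
a, b = fit_depth_model(refl, depth)
print(predict_depth(np.array([0.15]), a, b).round(2))  # [2.39]
```

Summing predicted depths over classified lake pixels, times pixel area, would then give the water volumes discussed above.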
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Intermittent grounding of the central Pine Island ice shelf and its dynamic impact

Authors: Trystan Surawy-Stepney, Benjamin Wallis, Anna
Affiliations: University of Leeds, Leeds, UK
Over the last half-century, the ice shelf of Pine Island Glacier, in the Amundsen Sea sector of West Antarctica, has undergone dramatic changes. Grounding line retreat occurring in the mid-20th century likely led to its broad current configuration, and persistent thinning due to decadal influxes of warm ocean water has lessened its importance as a stabilising influence on the upstream Pine Island ice stream. Over the last two decades, a number of large calving events, compounded by the disintegration of its southern shear margin, have left an ice shelf much diminished in stature and structural integrity. Due to thickness anomalies at the grounding line, ice keels have regularly bumped along the bedrock underneath the ice shelf throughout much of this period, causing small regions of the central shelf to appear intermittently grounded. These periods of ephemeral grounding remain largely unstudied, and their impact on stresses within the ice shelf remains poorly understood. In this study, we show the movement of a prominent ice keel over an important bathymetric ridge during the period 2014-2021 and analyse the effects this had on the dynamics of the ice shelf. We use offset tracking methods, applied to synthetic-aperture radar data from the European Space Agency and European Commission Copernicus Sentinel-1 satellites, as the base dataset for this study. By looking at the variation in range-offsets (those associated with changes in the distance of the surface of the ice shelf from the sensor) through time - so-called Differential Range Offset Tracking - we follow the motion of the ice keel over the last decade and show that, at times, up to 18 km² of the central Pine Island Ice Shelf has been grounded. 
Using the BISICLES ice sheet model as a diagnostic tool, relying principally on its inverse solver, we analyse the effects that this grounding had on the dynamics of the ice shelf, and discuss the results in the context of re-grounding as a mechanism for stabilising the retreat of marine ice sheets.
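The principle behind using differential range offsets to flag grounding can be sketched simply: freely floating ice should track the tidal displacement difference between two acquisitions, while grounded ice should not. The ratio threshold and data below are assumptions for illustration, not the study's calibrated method.

```python
import numpy as np

def grounded_mask(dz_observed, dz_tide, ratio_threshold=0.5):
    """Flag pixels as (ephemerally) grounded where the observed
    vertical displacement difference between two SAR acquisitions
    is much smaller than the tidal displacement difference expected
    for freely floating ice (threshold is illustrative)."""
    response = np.abs(dz_observed) / np.abs(dz_tide)
    return response < ratio_threshold

# Hypothetical example: a tide model predicts a 1.0 m difference
# between acquisitions; two pixels move with the tide (floating),
# one barely moves (grounded on the bathymetric ridge).
dz_obs = np.array([0.95, 0.90, 0.10])  # m, vertical-projected offsets
print(grounded_mask(dz_obs, 1.0))  # [False False  True]
```

Summing the area of flagged pixels over time yields the kind of grounded-area estimate (up to 18 km²) quoted above.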
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall F1)

Session: A.04.01 Estimating and observing local-scale GHG emissions - PART 1

Recent advances in Earth Observation capabilities have revolutionized our ability to detect CO2 and methane emissions at facility-to-city scale and from individual emission sources.

These localised sources often present good potential for rapid mitigation and are thus high priority targets for rapid action in response to the Paris Agreement. International policy initiatives such as the European Green Deal and the Global Methane Pledge set targets for emissions reductions that are increasingly backed by legislation, with a requirement for monitoring and reporting systems in relevant industries.

There are various space-based observations suitable for estimating methane emissions from, e.g., landfills and the oil & gas and coal mining industries, and increasingly also CO2 emissions from, e.g., power plants and cities. However, the observing system is characterised by a large and increasing diversity of satellite instruments and retrieval algorithms, including substantial involvement of New Space. Efforts to integrate and harmonise facility-scale emission estimates, to develop estimates of uncertainty, including for increasingly prevalent AI techniques, and to develop good practice both within and across satellite platforms are rapidly evolving.

This session aims to present an overview of topics related to estimating emissions of CO2 and CH4 from sources ranging from point sources up to megacities in spatial extent. It will showcase recent scientific advances, including new instruments, new detection and retrieval methods, their uncertainties, validation, data uptake, and efforts towards integration into global initiatives.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall F1)

Presentation: Assessing anthropogenic CH4 emissions from aircraft – constraining landfill and open-pit coal mine emissions

Authors: Sven Krautwurst, Jakob Borchardt, Christian Fruck, Stephen Harris, Jorg Hacker, Sebastian Wolff, Mark Lunt, Oke Huhs, Konstantin Gerilowski, Michał Gałkowski, Andrew McGrath, Bryce F.J. Kelley, Robert A. Field, Christoph Kiemle, Shakti Chakravarty, Mathieu Quatrevalet, Wolfgang Junkermann, Martin Wirth, Martin Kumm, Josua Schindewolf, Mei Bai, Christian Mallaun, Wolfgang Lieff, Adrian Murphy, Jakob Thoböll, John P. Burrows, Christoph Gerbig, Andreas Fix, Hartmut Bösch, Heinrich Bovensmann
Affiliations: University of Bremen, Institute of Environmental Physics (IUP), Deutsches Zentrum für Luft- und Raumfahrt, Institut für Physik der Atmosphäre, UNEP's International Methane Emissions Observatory (IMEO), School of Biological, Earth and Environmental Sciences, The University of New South Wales, ARA - Airborne Research Australia, Flinders University, College of Science and Engineering, Environmental Defense Fund, Department Biogeochemical Signals, Max Planck Institute for Biogeochemistry, Faculty of Physics and Applied Computer Science, AGH University of Kraków, Jade University of Applied Sciences, Alfred Wegener Institute - Helmholtz Centre for Polar and Marine Research, The University of Melbourne, School of Agriculture, Food and Ecosystem Sciences, German Aerospace Center, Flight Experiments
Along with carbon dioxide (CO2), methane (CH4) is a key driver of anthropogenic climate change, and reducing CH4 emissions is critical for short-term climate change mitigation. Two main drivers of anthropogenic CH4 emissions are waste- and fossil fuel-related activities, which together account for about half of globally anthropogenic CH4 emissions. Atmospheric measurements of CH4 concentration gradients are a powerful tool to test inventory estimates, which are usually based on bottom-up methods. In this paper we present two recent examples of elevated CH4 emissions, from landfills near Madrid, Spain, and from an open-pit coal mine in Queensland, Australia. Both targets are also clearly detectable in TROPOMI satellite XCH4 data and in satellite-derived emission rates, which were questioned by facility operators in both areas. We use measurements from airborne passive imaging and airborne active greenhouse gas lidar instruments on board the same aircraft to identify emissions from the waste facilities. These are combined with validated modelled wind field data in a simple but fast cross-sectional flux method to quantify their strength. Emission rates of up to 13 t/h at the time of the measurement flight, derived from both techniques for the study area, are in excellent agreement with each other and support the conclusion of under-reporting of emissions from the landfills, in line with previous findings. A large proportion of the emissions is attributed to active areas of the landfill sites. For the open-cast coal mine in Australia, passive airborne imaging data from one aircraft were combined with synchronously acquired in situ CH4 concentrations and wind field observations from a second aircraft. Emission rate estimates using the cross-sectional flux method, acquired on multiple days over two years, are a factor of at least 3-5 higher than bottom-up estimates. These emission estimates agree with previous satellite-based estimates. 
Reasons for the large discrepancy between top-down and bottom-up estimates could be related to regional emission factors being used to estimate the emissions based on run-of-mine coal production data. However, regional emission factors may not be valid for specific mines. These two examples illustrate how irregular but high precision measurements acquired during airborne campaigns can support emission estimates from satellite instruments and pinpoint CH4 sources. Furthermore, within certain limits, they can also support the verification of bottom-up methods.
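The cross-sectional flux method referred to above amounts to integrating the column enhancement along a transect perpendicular to the wind and multiplying by the wind speed, Q = u ∫ ΔX dx. A minimal sketch with invented numbers (real applications add background subtraction and transect averaging):

```python
import numpy as np

def cross_sectional_flux(x_m, delta_xch4_kg_m2, wind_ms):
    """Cross-sectional flux: Q = u * integral of the CH4 column
    enhancement across a transect perpendicular to the wind.
    Trapezoidal integration; returns t/h. Inputs are illustrative."""
    segments = 0.5 * (delta_xch4_kg_m2[1:] + delta_xch4_kg_m2[:-1]) * np.diff(x_m)
    q_kg_s = wind_ms * segments.sum()
    return q_kg_s * 3.6  # kg/s -> t/h

# Hypothetical 2 km transect with a triangular enhancement peak
x = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])  # m along transect
dX = np.array([0.0, 1e-3, 2e-3, 1e-3, 0.0])         # kg CH4 / m^2
print(cross_sectional_flux(x, dX, 5.0))  # 36.0 t/h
```

The same formula applies whether the enhancement comes from passive imaging, lidar, or in situ concentrations integrated over a vertical curtain, which is why the two airborne techniques can be compared directly.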
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall F1)

Presentation: Assessment of European localised CH4 Emission Sources with hyperspectral sensors

Authors: Hartmut Boesch, Michael Hilker, Michael Buchwitz, Jonas Hachmeister, Stefan Noel, Max Reuter
Affiliations: University of Bremen
Methane is a strong greenhouse gas with a global warming potential much higher than that of CO2 on a 20-year timescale. Reduction of methane emissions has therefore recently gained strong attention as a potent way of mitigating climate change on short time scales. Consequently, at COP29, 159 countries and the European Commission agreed to curb emissions by 30% below the 2020 level by 2030. However, such efforts for methane reduction can only be successful if we can identify and quantify all relevant emission sources and monitor their emission changes over time. A specific challenge are localised emission sources from the oil & gas, coal mining or waste sectors, where emissions for individual sites are often poorly known. Hyperspectral and multispectral satellites have shown great promise in detecting methane emission plumes from strong local emitters, which then allows estimating instantaneous emission fluxes. With many new satellite missions appearing and novel retrieval techniques being applied, this has become a dynamic and rapidly developing research field. At IUP Bremen, we have developed the HIFI (high resolution fitting) methane retrieval package for hyperspectral satellite sensors, which includes different retrieval methods and allows seamless switching between them. This is coupled to a flux estimation tool based on mass balance methods. We have applied this methane retrieval and flux estimation method to the sensors EnMAP, EMIT and PRISMA to evaluate emission sources over Europe (including parts of Russia). Overall, we have included several hundred detections, which we classified according to trustworthiness criteria. A key challenge here are surface artifacts that interfere with plume detection, in particular for the weak plumes often found over Europe. For all trustworthy plumes, emission fluxes have been derived and compared to inventory-based information. This work is supported by the EU Horizon Europe Project and the ESA Smart-CH4 project.
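One common mass-balance formulation for plume flux estimation is the Integrated Mass Enhancement (IME) method, Q = U_eff · IME / L, with L a plume length scale often taken as the square root of the plume area. The sketch below uses that formulation with assumed values; it is not necessarily the exact formulation in the HIFI-coupled flux tool.

```python
import numpy as np

def ime_flux(plume_mask, xch4_enh_kg_m2, pixel_area_m2, u_eff_ms):
    """Integrated Mass Enhancement flux estimate, Q = U_eff * IME / L,
    with IME the summed column enhancement over the plume mask and
    L = sqrt(plume area). Returns t/h. Values are illustrative."""
    ime_kg = np.sum(xch4_enh_kg_m2[plume_mask]) * pixel_area_m2
    area_m2 = np.count_nonzero(plume_mask) * pixel_area_m2
    length_scale_m = np.sqrt(area_m2)
    return u_eff_ms * ime_kg / length_scale_m * 3.6  # kg/s -> t/h

# Hypothetical 3-pixel plume at 30 m ground sampling distance
mask = np.array([True, True, True, False])
enh = np.array([2e-3, 1e-3, 1e-3, 0.0])   # kg CH4 / m^2 enhancement
print(round(ime_flux(mask, enh, 30.0**2, 2.0), 3))  # 0.499 t/h
```

Because Q depends directly on the effective wind speed U_eff, wind uncertainty propagates one-to-one into the flux estimate, which is why surface artifacts (false plume pixels inflating the IME) and wind errors dominate the error budget for weak plumes.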
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall F1)

Presentation: Uncertainty framework for satellite-based estimates of methane emissions from point sources

Authors: Javier Gorroño, Luis Guanter, Javier Roger, Shanyu Zhou, Itziar Irakulis-Loitxate, Manuel Montesino-San Martín, Gonzalo Mateo-Garcia, Matthieu Dogniaux, Joannes Maasakkers, Ilse Aben
Affiliations: Universitat Politècnica de València, Environmental Defense Fund, UNEP's International Methane Emissions Observatory, SRON Netherlands Institute for Space Research
In recent years, a growing number of satellite instruments have been used to detect and quantify emissions from methane hot spots and point sources. These satellites are typically divided into area flux mappers (e.g., TROPOMI), with a spatial resolution on the order of kilometres, and point-source imagers (20-500 m) that can detect and quantify emissions at 'facility level'. In turn, these point-source imagers can be specifically designed to measure methane (e.g., the GHGSat constellation), while other satellites were not designed for that purpose but their spectral design entirely or partly covers the methane-absorption region in the short-wave infrared (SWIR). The latter group consists of both band imagers (e.g., Landsat, Sentinel-2, Sentinel-3/SLSTR, GOES/ABI) with regular global coverage, and hyperspectral imagers (e.g., PRISMA, EnMAP, EMIT) with longer revisit times. The rapid development and growing number of instruments require a data quality framework that supports the translation of this satellite data into information for policy users, mitigation actions, or reporting of emissions. The MEDUSA (Methane Emission Detection Using Satellites Assessment) project intends to tackle this challenge by developing a set of techniques and a pre-operational system to harmonize and integrate global information using diverse satellite instruments and algorithms. Among these activities, the development of a framework for methane flux uncertainty for various types of instruments is planned. The proposed uncertainty framework for methane fluxes from point sources is applied to a set of proposed missions, including Sentinel-2, EnMAP, and TROPOMI, but is intended to provide general guidelines applicable to a large suite of satellite missions. In particular, the uncertainty analysis reviews the entire processing chain encompassing the methane enhancement retrieval, detection methodology, and quantification of methane flux rates. 
This analysis is transformed into an uncertainty tree diagram in which the different uncertainty sources can be identified. Among all the contributions, those arising from the wind speed datasets, flux rate calibration using transport models, and flux rate error correlation are studied in detail. For example, the spatial representativeness of wind datasets (e.g. 9x9 km for ERA5-Land) differs from that of the plume area (typically below 1 km). This introduces an uncertainty that has been assessed using wind climatology maps with much higher spatial resolution (e.g. 250 m) as a proxy. The impact of this contribution can be below 1% over flat plateaus but can increase to over 20% in mountainous areas. The provision of uncertainty estimates associated with methane flux rates is fundamental for a range of activities, including better linking emission data with socioeconomic parameters, improving the communication of emissions to stakeholders, providing prior data for the inversion of satellite observations, and the inter-comparison and harmonisation of different methane flux rate estimates. We will exemplify the importance of developing such uncertainty metrics in different test cases. For example, we will showcase the importance of propagating flux rate uncertainty estimates to temporally integrated emissions using a diverse range of sensors and methods, in particular for a mega-leak at an exploration well in the Mangistau region of southwestern Kazakhstan that was monitored by several satellites for over six months. Additionally, we will discuss the benefits of uncertainty estimates within the comparison results of the MEDUSA project, which enable an improved analysis with the possibility to identify sources of error and to define qualitative acceptance/rejection criteria. The work presented here is performed in close collaboration with the Methane Alert and Response System (MARS) of the UNEP’s International Methane Emissions Observatory (IMEO).
MARS is the first global detection and notification system that relies on satellite data to identify and monitor very large methane emissions around the world. This partnership allows the transfer of results into an operational context that connects with final users such as governments and facility operators. For example, recent work has developed an initial automated validation process and an estimation of the probability of detection at facility level. This information, together with uncertainty estimates, contributes to identifying potential systematic errors at facility level and an approximate range of detectable/non-detectable emissions.
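As an illustration of how individual branches of such an uncertainty tree propagate to the flux rate, the sketch below applies the commonly used integrated mass enhancement (IME) relation Q = (U_eff / L) x IME with first-order propagation of uncorrelated relative uncertainties. The relation, the parameter values and the wind representativeness figures are illustrative assumptions, not the MEDUSA formulation.

```python
import math

def flux_rate(u_eff, ime, length):
    """IME-based flux rate: Q = (U_eff / L) * IME.
    With u_eff in m/h, ime in kg and length in m, Q is in kg/h."""
    return u_eff / length * ime

def flux_uncertainty(q, rel_wind, rel_ime, rel_length):
    """First-order propagation, assuming uncorrelated relative uncertainties."""
    return q * math.sqrt(rel_wind**2 + rel_ime**2 + rel_length**2)

# Illustrative plume: 3 m/s wind, 500 kg integrated mass, 1 km plume length.
q = flux_rate(3.0 * 3600, 500.0, 1000.0)
# Wind representativeness error: ~1% over flat terrain vs >20% in mountains
# (the other relative uncertainties are held fixed for comparison).
sig_flat = flux_uncertainty(q, 0.01, 0.15, 0.10)
sig_mtn = flux_uncertainty(q, 0.20, 0.15, 0.10)
```

Comparing sig_flat and sig_mtn shows how the terrain-dependent wind term can move from a negligible to a dominant contribution in the combined budget.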
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall F1)

Presentation: Multi-instrument assessment of methane emissions from landfills: How can satellites, aircraft and ground measurements be used to inform and target mitigation strategies?

Authors: Daniel Potts, Dr Harjinder Sembhi, Matthieu Dogniaux, Joannes Maasakkers, Luis Guanter, Marianne Girard, Hanford Deglint, Carles Debart, Claus Zehner, Dirk Schuettemeyer, Ilse Aben
Affiliations: University Of Leicester, SRON, Universitat Politècnica de València, GHGSat, European Space Agency
Routine, passive monitoring of landfills using satellite sensors could be a vital way to gain new perspectives on the extent, frequency and origin of methane (CH₄) plumes originating from individual waste management sites, to support both management and regulation. Whilst area flux mappers such as the TROPOspheric Monitoring Instrument (TROPOMI, spatial resolution of 7 x 5.5 km²) can capture regional fluxes and CH₄ hotspots, plume observations from hyperspectral point source imagers such as GHGSat (25 x 25 m²) and aircraft (1 x 1 m²) can be attributed directly to individual landfill sites, allowing for the evaluation of plume persistence, identification of source origins, quantification of emission rates and investigations into the relationship between site operations and emissions. Within Europe, one of the largest persistent CH₄ hotspots observed by TROPOMI originates near the urban area of Madrid, where enhanced CH₄ concentrations have been continuously detected across 2018-2024. A 2021 ESA web story attributed a significant component of the observed regional-scale CH₄ enhancement seen by TROPOMI to two landfills, Las Dehesas and Pinto. Emissions from these landfills were further investigated with TROPOMI, GHGSat and a 2-day aircraft campaign during 2022, conducted as part of ESA EDAP+. This study builds upon this work, combining various satellite data products from 2019-2024 to attribute plume origins to operational activity, identify trends and change-points in emissions across the whole time series, and to investigate the role meteorology plays in the variability of emissions. This will be complemented by a coordinated 3-week multi-instrument satellite-aircraft-ground campaign in May 2025 designed to guide on-site mitigation and evaluate its effectiveness, with near-real-time data processing and communication direct to the operators to guide and evaluate interventions.
Methane legislation is rapidly evolving, with stronger, more ambitious regulations entering into force. New approaches are needed to identify, quantify and mitigate methane emissions, all of which need to be effectively communicated to those capable of acting on this information. This study showcases how this type of relationship between research and industry could lead to greater insight into industrial emissions and support their mitigation.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall F1)

Presentation: Analysis of Methane Emissions from the Darvaza Gas Crater

Authors: Adriana Valverde, Itziar Irakulis-Loitxate, Javier Gorroño, Luis Guanter
Affiliations: Universitat Politècnica De València, International Methane Emissions Observatory (IMEO), United Nations Environment Programme, Environmental Defense Fund
Methane (CH4), a greenhouse gas 86 times more potent than carbon dioxide (CO2) over a 20-year period, has become one of the main drivers of climate change, with atmospheric levels doubling since pre-industrial times. Among its various natural and anthropogenic sources, such as oil and gas systems, coal mines, or landfills, the Darvaza gas crater in Turkmenistan stands out as a unique and persistent contributor. This crater, commonly known as the "Door to Hell", is located in the Amu-Darya basin, a geological formation holding large quantities of oil and natural gas, in which methane is predominant. In 1971, a Soviet natural gas drilling operation caused the ground to collapse, leaving a crater approximately 70 meters in diameter and 20 meters deep. To mitigate the release of hazardous gases, authorities ignited the escaping gas, which has been burning without interruption ever since. However, since last year the fire in the crater has been reduced by the Turkmenistan government, as we can monitor using the VIIRS Fires and Thermal Anomalies product. Our work focuses on detecting and quantifying the Darvaza methane emissions, trying to confirm whether there is a correlation between fire reduction and emissions. So far, we have detected more than 20 methane emissions using the EnMAP, PRISMA, and EMIT spaceborne hyperspectral imaging spectrometers. The emissions range between 1,000 and 3,000 kg/h, amounting to thousands of tonnes of CH4 annually. In addition to quantifying emissions, we examined the chronology of the crater flames. By analyzing radiance and thermal bands from Landsat 4-5, we determined the onset of the crater fire in late 1987 or early 1988, a detail previously shrouded in uncertainty. This finding contributes to the temporal analysis of this geological feature and provides key information for estimating the total amount of methane released by the Darvaza crater to date.
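As a back-of-envelope check on the "thousands of tonnes annually" figure, assuming (purely for illustration) a constant emission rate sustained over a full year:

```python
def annual_tonnes(rate_kg_per_h, hours=24 * 365):
    """Integrate a constant hourly emission rate over one year, in tonnes."""
    return rate_kg_per_h * hours / 1000.0

# The 1,000-3,000 kg/h range corresponds to roughly 8,760-26,280 t CH4 per
# year if sustained, consistent with "thousands of tonnes annually".
low, high = annual_tonnes(1000.0), annual_tonnes(3000.0)
```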
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall F1)

Presentation: Greenhouse Gas Emission Monitoring with the GHGSat Constellation: Progress and Performance

Authors: Antoine Ramier, Hanford Deglint, Marianne Girard, Dylan Jervis, Jean-Philippe MacLean, David J Marshall, Jason McKeever, Mathias Strupler, Ewan Tarrant, David Young
Affiliations: GHGSat
GHGSat offers services of detection, monitoring, and analysis of greenhouse gas emissions using its proprietary satellite constellation and fleet of airborne sensors, as well as third-party satellite data. Its flagship service, DATA.SAT, uses a growing constellation of small satellites designed to detect and quantify methane emissions from individual industrial sites at ~25 m resolution and ~100 kg/hr source rates. As of 2024, the constellation comprises 11 satellites, including a first CO2-specific sensor based on the same technology. For end users, detection limit and quantification accuracy are key figures of merit of an observing system. They can be assessed through controlled releases, where plumes are generated intentionally with a steady, metered flow rate. GHGSat has built up a world-leading sample size of 49 controlled release events, including self-organized and third-party single-blind studies. From these data, we infer a detection threshold of 102 kg/hr at 50% probability of detection and a wind speed of 3 m/s. One drawback of controlled releases is that the same site is often used repeatedly to build up a sufficient sample size of detection (and non-detection) events. This means that the inferred behaviour may not be representative of the full range of conditions seen by the instruments across all site types and terrain classes. Two of the most important such variables are the local wind speed u and the magnitude of noise σ in the underlying Level 2 retrieval field. Here we present an improved nonlinear PoD model that (a) enforces PoD=0 for q=0 and (b) accounts for changes in local Level 2 noise conditions. This enables us to estimate an adjusted detection limit - not only using wind speed, but also terrain-dependent σ. Additionally, we present recent results from the newly launched CO2 satellite GHGSat-C10 “Vanguard”, including example retrievals and detected emissions.
This unit brings a new dimension to our knowledge of global greenhouse gas emissions.
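A minimal sketch of a PoD curve with the two stated properties, i.e. PoD = 0 at q = 0 and a detection limit that scales with wind speed u and Level 2 noise σ. The complementary-Weibull functional form and all parameter values are assumptions chosen only to reproduce the quoted 102 kg/hr at 50% PoD and 3 m/s; this is not GHGSat's actual model.

```python
import math

def pod(q, u, sigma, k=2.0, alpha=34.0):
    """Probability of detecting a source of rate q (kg/hr) at wind speed u (m/s)
    and relative Level 2 noise sigma (sigma = 1 at reference conditions).
    PoD(0) = 0 by construction, and PoD(q50) = 0.5 where q50 = alpha * u * sigma."""
    if q <= 0.0:
        return 0.0
    q50 = alpha * u * sigma  # 50%-PoD rate grows with wind and retrieval noise
    return 1.0 - math.exp(-math.log(2.0) * (q / q50) ** k)
```

With these illustrative numbers, pod(102, 3.0, 1.0) equals 0.5, and increasing sigma (e.g. over difficult terrain) shifts the whole curve toward higher source rates.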
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.11/0.12)

Session: A.08.09 Marine and Coastal Carbon - PART 1

By absorbing around 25% of anthropogenic carbon dioxide emissions, the ocean is a major carbon sink and a key component of the Global Carbon Cycle. The spatial distributions and trends of ocean carbon uptake and release are driven by, and in turn affect, a vast set of biological, physical and chemical processes.
This session welcomes contributions on the different carbon pools and processes in the marine and coastal ocean including:
- both the inorganic (including Ocean Acidification) and organic carbon domains, demonstrating how remote sensing, together with in-situ data and numerical modelling, can improve the characterization and understanding of the different pools of carbon in the ocean (Dissolved Inorganic Carbon - DIC, Dissolved Organic Carbon - DOC, Particulate Inorganic Carbon - PIC, Particulate Organic Carbon - POC).
- the key processes that determine the fluxes of carbon among these pools, such as Primary Production and Export, or between interfaces, such as Air-Sea or Land-Sea exchanges.
- Coastal blue carbon ecosystems (e.g. mangroves, seagrass, salt marshes) and the role of remote sensing to (1) monitor those ecosystems in terms of e.g. ecosystem extent, carbon stock and carbon sequestration potential, (2) monitor/predict the impact of external drivers on blue carbon ecosystems and their carbon stocks/sequestration potentials, and (3) quantify the added value of climate change mitigation and adaptation strategies (e.g. conservation, restoration, creation).
This session is also open to studies and procedures addressing how EO-derived marine and coastal carbon products can support Global Carbon Budget modelling efforts, and also contribute to informing evidence-based policies (IPCC and Paris Agreement).
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Phytoplankton carbon from ocean colour: validation and intercomparison of retrieval algorithms

Authors: Shovonlal Roy, Lekshmi Krishnakumary, Gemma Kulk, Tihomir Kostadinov, Marco Bellacicco, Shubha Sathyendranath
Affiliations: University of Reading, Plymouth Marine Laboratory, California State University San Marcos, National Research Council (CNR) Institute of Marine Sciences (ISMAR)
Phytoplankton carbon, a critical component of the oceanic carbon cycle and an essential indicator of marine ecosystem health, needs to be monitored operationally. Although considerable progress has been made in developing algorithms to estimate phytoplankton carbon from satellite remote sensing, the reliability of these algorithms across different oceanic conditions is yet to be fully established. To this end, we conducted a comprehensive validation and intercomparison of representative algorithms developed previously for computing phytoplankton carbon from satellite ocean colour. This research is part of the ESA-funded project ‘Satellite-based observations of Carbon in the Ocean: Pools, fluxes and Exchanges’ (SCOPE). Based on the algorithm structures and applicability, we selected four categories of phytoplankton-carbon retrieval algorithms: (1) particle backscattering-based empirical relationship, (2) particle backscattering-based allometric semi-analytical algorithm, (3) absorption-based allometric semi-analytical algorithm, and (4) photoacclimation-based algorithm. All algorithms are capable of producing total phytoplankton carbon on a global scale. Additionally, algorithms 2-4 can further split satellite-derived phytoplankton carbon according to size classes such as pico-, nano-, and microplankton. To test the performance of the algorithms, we compiled a large in situ database of phytoplankton carbon consisting of both flow cytometry-based measurements and directly measured phytoplankton carbon from across the global ocean. All algorithms independently generated outputs using in situ matched-up satellite data extracted from ESA’s Ocean Colour CCI v6 data archive. The outcomes of our extensive validation and intercomparison will be presented to assess the performance and consistency of the selected algorithms against in situ datasets. 
The commonalities and major differences in algorithm performance will be highlighted, and the possibilities for further improvement will be discussed. The suitability of routinely applying these algorithms, either individually or in combination, for estimating phytoplankton carbon in the global ocean will also be discussed. By harmonizing the outputs of different algorithms for improving global phytoplankton carbon estimates, our research will contribute to enhanced understanding of the oceanic carbon cycle and the estimation of the ocean carbon budget using satellite remote sensing.
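Matchup validation of this kind is commonly scored in log10 space, since phytoplankton carbon and related quantities tend to be log-normally distributed. The sketch below computes typical metrics (bias, RMSD, Pearson r) under that assumption; the SCOPE project's actual metric set may differ.

```python
import math

def log_metrics(satellite, in_situ):
    """Bias, RMSD and Pearson r between matched satellite and in situ values,
    computed in log10 space (all values must be positive)."""
    xs = [math.log10(v) for v in satellite]
    ys = [math.log10(v) for v in in_situ]
    n = len(xs)
    diffs = [x - y for x, y in zip(xs, ys)]
    bias = sum(diffs) / n
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    r = cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                        * sum((y - my) ** 2 for y in ys))
    return bias, rmsd, r

# Hypothetical matchups (phytoplankton carbon, mg C m^-3):
bias, rmsd, r = log_metrics([12.0, 45.0, 180.0], [10.0, 50.0, 200.0])
```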
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Ocean Color through space-borne Lidar measurements: Proteo project in the Caligola mission

Authors: Lorenza Masi, Paolo Di Girolamo, Marco Di Paolantonio, Giovanni Giuliano, Gian Luigi Liberti, Gianluca Volpe, Andrea Andrisani, Sara Venafra, Simona Zoffoli, Michael Behrenfeld, Davide Dionisi
Affiliations: University of Naples "Parthenope", CNR - ISMAR, School of Engineering,University of Basilicata, Department of Botany and Plant Pathology
Accurately determining the total biomass of phytoplankton, along with understanding its distribution and seasonal variations, is essential for gaining deeper insights into the Earth's carbon cycle and the dynamics of the food web. This knowledge plays a critical role in elucidating how phytoplankton contribute to primary production and nutrient cycling in aquatic ecosystems, which underpin carbon dioxide capture and carbon fixation/export to the deep ocean. Satellite Ocean Color (OC) measurements have been a breakthrough in the assessment of phytoplankton biomass and concentration of Chlorophyll-a (Chl-a) due to their unprecedented spatial coverage and resolution. However, these measurements are limited to clear-sky, daylight, high-Sun-elevation and ice-free ocean conditions and are exponentially weighted toward the ocean surface. Furthermore, OC passive sensors are unable to resolve phytoplankton vertical structure, which can be a primary source of error in global phytoplankton biomass and net primary production estimates. As an example, recent studies highlighted that omission of Subsurface Chlorophyll Maximum Layers (SCML) causes underestimations of Net Primary Production (NPP) during the post-bloom period, especially in Arctic regions. To address limitations of the OC approach it is necessary to employ complementary remote sensing techniques that provide a 3-D view of ocean ecosystems. The light detection and ranging (lidar) technique can overcome limitations of passive OC remote sensing as it enables observations during the day and night and provides vertical profiles of bio-optical and biogeochemical parameters through the water column to depths well below the detection depth of OC measurements. Until now, space-borne lidars have only been designed for atmospheric science observations.
Nonetheless, measurements from the CALIPSO and ICESAT-2 atmospheric lidar missions have been used for ocean retrievals to provide new perspectives on aquatic ecology, albeit for the same upper ocean layer detected through OC (<20 m). An ocean-dedicated lidar mission providing deep penetration depths and vertical resolution is however still missing. To this end, the Agenzia Spaziale Italiana (ASI) and NASA are now partnering on the new spaceborne lidar mission known as CALIGOLA (i.e., the Cloud and Aerosol Lidar for Global Scale observations of the Ocean-Land-Atmosphere). By flying an innovative multi-wavelength, polarization-sensitive Raman lidar that will simultaneously acquire elastic backscatter and fluorescence measurements, CALIGOLA seeks to significantly advance our understanding of Earth’s coupled ocean-land-atmosphere systems. Targeted objectives of the mission also include quantifying the three-dimensional distribution of global phytoplankton biomass and evaluating zooplankton dynamics. In the frame of this ASI-NASA collaboration, CNR-ISMAR is involved in the PeRfOrmance simulaTor for ocEan Observations (PROTEO) project, which aims to strengthen the CALIGOLA mission by improving its scientific and applicative aspects and quantitatively assessing its ocean observation capabilities. Specifically, this involves the development of an end-to-end (E2E) simulator that simulates the lidar signal given a geophysical scene, observation geometry and the instrumental characteristics, and that also inverts the signal to retrieve the geophysical scene. Early development of this tool is crucial for the definition phase and also to evaluate the L2 product retrievals that shape the overall mission. The oceanic variables of interest (L2 products) are particulate backscatter, attenuation, depolarization ratio at two wavelengths (355 nm and 532 nm) and Chl-a fluorescence at 685 nm.
Drafted mission requirements include 3 m vertical resolution for oceanic observables, with signals collected within the upper 250 m of the ocean surface layer, and at 150 m horizontal resolution. Our first goal is to compare these requirements with SCML typical scales. Then, an adequate signal-to-noise ratio (SNR) will be set as a reference to evaluate signal significance. Based on these constraints, the simulations will run scenarios involving various SCML depths, thicknesses and horizontal uniformity, trying to determine conditions allowing valid ocean retrievals. Combining these findings with in-situ data on SCML abundance and distribution will provide a first view of the mission’s new contributions to understanding marine ecology and biogeochemistry.
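To give a feel for the depth versus signal trade-off behind these requirements, the sketch below uses two-way Beer-Lambert attenuation and a shot-noise-limited SNR to find the deepest 3 m bin still above an SNR threshold. The optical properties and instrument constants are illustrative assumptions, not PROTEO simulator values.

```python
import math

def max_retrieval_depth(kd, s0=1e6, snr_min=10.0, dz=3.0, zmax=250.0):
    """Deepest range bin (m) whose return stays above the SNR threshold.
    Return signal: S(z) = s0 * exp(-2 * kd * z) photoelectrons (two-way
    attenuation with diffuse coefficient kd, 1/m); shot-noise SNR = sqrt(S)."""
    deepest = None
    z = 0.0
    while z <= zmax:
        if math.sqrt(s0 * math.exp(-2.0 * kd * z)) >= snr_min:
            deepest = z
        z += dz
    return deepest

# Clear open-ocean water (kd ~ 0.04 1/m) vs more turbid water (kd ~ 0.2 1/m):
deep_clear = max_retrieval_depth(0.04)
deep_turbid = max_retrieval_depth(0.2)
```

The contrast between the two cases illustrates why SCML detectability at depth depends so strongly on water clarity.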
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Satellite-based observations of carbon in the ocean: Pools, fluxes and exchanges

Authors: Gemma Kulk, Dr Shubha Sathyendranath, Meike Becker, Marco Bellacicco, Dr. Antonio Bonaduce, Heather Bouman, Javier Concha, Daniel Ford, Prof. Johnny Johannessen, Bror Jönsson, Lekshmi Krishnakumary, Pr Tiit Kutser, Marko Laine, Joan Llort, Elin Meek, David Moffat, Are Olsen, Emanuele Organelli, Ralf Quast, Dr. Roshin Raj, Mayra Rodriguez, Shovonlal Roy, Dr Roberto Sabia, Roman Shevchuk, Jamie Shutler, Valentina Sicardi, Kaire Toming, Tsuyoshi Wakamatsu
Affiliations: Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre for Earth Observation, Plymouth Marine Laboratory, University of Bergen, Consiglio Nazionale delle Ricerche, Nansen Environmental and Remote Sensing Center, University of Oxford, European Space Research Institute, European Space Agency, University of Exeter, University of New Hampshire, University of Tartu, Finnish Meteorological Institute, Barcelona Supercomputing Center, Brockmann Consult, University of Reading
Quantifying the ocean carbon budget and understanding how it is responding to anthropogenic forcing is a major goal in climate research. It is widely accepted that the ocean has absorbed around a quarter of CO₂ emissions released anthropogenically, and that the ocean uptake of carbon has increased in proportion to increasing CO₂ emissions. Yet, our understanding of the pools of carbon in the ocean, the processes that modulate them, and how they interact with the land and atmosphere, is not yet sufficient to make confident predictions of how the ocean carbon budget is changing. Improving our understanding requires a holistic and integrated approach to ocean carbon cycle research, with monitoring systems capable of filling the gaps in our understanding. Satellite observations can play a major role in this. The ESA-funded ‘Satellite-based observations of Carbon in the Ocean: Pools, fluxes and Exchanges’ (SCOPE) project aims to provide the best possible characterisation of the ocean carbon budget from satellite observations and further the understanding of its variability in space and time. Here, we present the development of an internally consistent dataset of the carbon pools, fluxes and exchanges that are observable from space, including dissolved inorganic and organic carbon, particulate inorganic and organic carbon, phytoplankton carbon, primary and export production, and air-sea CO₂ and land-sea exchange. This satellite-based ocean carbon dataset is harmonised in space and time, based on climate-quality input data from ESA's Climate Change Initiative, and fully error-characterised. This allows us, for the first time, to address both the physico-chemical and biological processes that drive the ocean carbon cycle in a consistent manner. We use the newly developed satellite-based ocean carbon dataset to analyse trends in each component of the ocean carbon cycle since the start of the ocean-colour data record in 1997.
This will provide insight into how satellite observations can aid in the assessment of the ocean carbon budget in a climate context and provide useful information to evaluate and improve climate models.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: The Strengths And Limits Of Satellite-driven pCO2 Products And Model Estimates For Assessing The Ocean Carbon Sink

Authors: Jamie Shutler, Alizee Roobaert, Daniel Ford, Peter Landschutzer, Judith Hauck, Sreeush Mohanan, Galen McKinley, Amanda Fay, Thea Heimdel, Caroline Shaum, Marion Gehlen, Frederic Chevallier, Gemma Kulk, Dr Shubha Sathyendranath, Peter Land, Christian Rodenbeck, Nicolas Gruber, Luke Gregor, Jacqueline Behncke, Andrew Watson, Laique Djeutchouang, Jiye Zeng, Annika Jersild, Yosuke Iida
Affiliations: University Of Exeter, Flanders Marine Institute (VLIZ), Alfred Wegener Institute Helmholtz-Center for Polar and Marine Research, Columbia University and Lamont-Doherty Earth Observatory, Laboratoire des Sciences du Climat et de l’Environnement,, Plymouth Marine Laboratory, Max Planck Institute for Biogeochemistry, Environmental Physics, Institute of Biogeochemistry and Pollutant Dynamics, ETH Zürich, Max Planck Institute for Meteorology, Stellenbosch University and Southern Ocean Carbon – Climate Observatory, National Institute for Environmental Studies, NASA Goddard Space Flight Center, Atmosphere and Ocean Department
The ocean acts as a significant sink for anthropogenic CO2, but recent results from the Global Carbon Budget (GCB, used to guide policy) point to a discrepancy in estimates of ocean carbon uptake. Since around the year 2000, estimates of ocean carbon uptake based on reconstructions of measurements of the surface-water partial pressure of CO2 (pCO2) have increasingly differed from uptake estimates produced by global hindcast model simulations. The resulting unexplained gap of 0.6 - 0.7 Pg C yr-1 is comparable to the CO2 emissions of the EU27. This discrepancy may be due to a combination of errors in simulation models and of data sparsity and uncertainties in the upscaled measurement estimates. Initial studies suggest that biases in the measurement-based methods from the under-sampled Southern Hemisphere may contribute significantly to this gap. To address these concerns, the Surface Ocean CO2 Mapping intercomparison project has initiated its second phase (SOCOMv2). SOCOMv2 involves four experiments aimed at understanding biases and uncertainties linked to data sparsity, changing observing networks and data availability, as well as uncertainties from input data, mapping methods and air-sea CO2 exchange calculations. These experiments include 1) a comprehensive geospatial uncertainty analysis, as well as three subsampling studies using 2) GCB hindcast simulations to represent true climate variability, 3) large ensemble simulations to represent multiple climate states, and 4) idealized scenarios of carbon uptake without climate variation. Preliminary results point to additional uncertainties within the pCO2 measurement-based ocean carbon uptake estimates, notably due to errors in extrapolating sparse measurements and in kinetic air-sea CO2 exchange calculations, which could be critical contributors. Moreover, the GCB hindcast simulation experiments show that measurement-based products tend to underestimate the pCO2 trend from the models.
SOCOMv2 aims to improve the accuracy and precision of ocean carbon flux estimates, supporting enhanced observational strategies and informing policy decisions on climate mitigation. This paper will present early results and conclusions from these experiments and explain how SOCOMv2 is funded within a larger European Space Agency (ESA) ocean carbon study, called Ocean Carbon for Climate.
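The logic of the subsampling experiments can be sketched as: treat a model pCO2 field as "truth", sample it only where the observing network measures, reconstruct a mean from those samples and measure the resulting bias. The toy one-dimensional field below, with an under-sampled high-pCO2 "southern" half, is purely illustrative.

```python
import random

def subsampling_bias(field, sampled_idx):
    """Bias of a mean reconstructed from sparse samples vs the full-field mean."""
    true_mean = sum(field) / len(field)
    sampled_mean = sum(field[i] for i in sampled_idx) / len(sampled_idx)
    return sampled_mean - true_mean

random.seed(0)
# Toy 'truth': pCO2 (uatm) is 20 uatm higher in the unobserved southern half.
field = [380.0 + (20.0 if i >= 500 else 0.0) + random.gauss(0.0, 2.0)
         for i in range(1000)]
north_only = list(range(500))  # the observing network covers the north only
bias = subsampling_bias(field, north_only)  # ~ -10 uatm purely from the sampling gap
```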
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: The Role Of Earth Observations in Developing a Pilot Demonstration of the Marine Organic Carbon Atlas for the Arctic Ocean

Authors: Artur Palacz, Aleksandra Cherkasheva, Mirosław Darecki, Katarzyna Dragańska-Deja, Joanna Stoń-Egiert, Marlena Szeligowska, Marta Szczepanek, Maciej Telszewski
Affiliations: Institute Of Oceanology Polish Academy Of Sciences (iO PAN), UiT The Arctic University of Norway, International Ocean Carbon Coordination Project (IOCCP)
Significant knowledge gaps remain in our understanding of marine carbon cycling, hindering models' ability to forecast future scenarios. In particular, the changing role of biological processes in atmospheric carbon uptake and sequestration is not sufficiently accounted for, as highlighted for example by the IPCC or the IOC-UNESCO Working Group on Integrated Ocean Carbon Research. A critical factor contributing to these gaps is the mismatch between the data requirements of Earth Observation (EO) and Earth System Models and the availability of relevant in situ data and data products. This challenge is especially acute in polar regions, which experience the most pronounced impacts of climate change but suffer from a scarcity of openly and freely available in situ data, in particular biological and ecosystem observations. Moreover, inadequate communication between the observation and modelling communities, as well as among researchers studying inorganic and organic carbon processes, further limits our ability to map and predict changes in marine carbon cycling. Here we describe the vision and preliminary roadmap towards developing a regional demonstration of a Marine Organic Carbon Atlas (MOCA) for the Arctic Ocean. We explain how the Atlas builds on the outcomes of an international initiative, which defined priority biogenic data products to advance ocean carbon sequestration modelling in the Arctic Ocean, formulated through consensus between observing and modelling experts. Furthermore, we highlight the ongoing work on the subsequent development of several new Arctic-specific data products, for example those related to the changing role of phytoplankton in the ocean carbon cycle: net primary production (NPP), net community production (NCP) and phytoplankton functional types, which integrate EO-derived and in situ observations spanning a number of biogeochemistry and biology & ecosystems Essential Ocean Variables.
Specifically, the results of new NPP and NCP satellite-based parameterizations in the Greenland Sea region are presented, and the relationship between the two is explored. A large dataset of bio-optical in situ data spanning over a decade is used for multiple data product developments. Moreover, we discuss the challenges and opportunities behind developing the Arctic MOCA, as inspired by the success of the Surface Ocean CO₂ Atlas and the Global Ocean Data Analysis Project, and their impact on regional and global ocean carbon modelling and assessments. The value of coordinated ocean observation and data mining efforts is presented in the context of fulfilling MOCA requirements. Finally, we highlight the critical role of the federated and interoperable ocean data management system which has been significantly advanced through the IODE Ocean Data Information System (ODIS) and associated resources. We present first steps towards utilising ODIS to foster data product co-design and long-term implementation by the observing and modelling communities rallied behind MOCA or other similar initiatives which aim to address key knowledge gaps related to the ocean carbon cycle.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Constraining the Lateral Flux of Carbon From Shelf Seas to the Open Ocean Using Deep Ocean Remote Sensing

Authors: Jennifer Watts, Jamie Shutler, Daniel J. Ford
Affiliations: University Of Exeter
Shelf seas, which are shallow waters extending from the shoreline to the shelf break (generally less than 200 meters deep), cover only 7-9% of the total ocean area globally. However, these seas play a disproportionately significant ecological and economic role, contributing for example ~20-25% of global marine primary production and ~90% of fish catches. They also serve as the critical link between the terrestrial, oceanic, and atmospheric carbon cycles. Despite their importance, the lateral flux of dissolved inorganic carbon (DIC) between shelf seas and the open ocean remains poorly constrained due to limited observations at the spatial and temporal scales relevant to these dynamic environments. This study presents an observation-driven assessment of the lateral DIC flux between the shelf and open ocean in two contrasting Atlantic shelf systems: the northwest European shelf (including the Celtic Sea and North Sea) and the northeast U.S. continental shelf (encompassing the Mid-Atlantic Bight and Gulf of Maine). Using a Deep Ocean Remote Sensing (DORS) approach, we integrate global monthly satellite observation-based and depth-resolved surface currents (1993–2016) and a physical understanding of their vertical structure with seasonal vertical DIC gradient data derived from in situ observations. This approach enables the deep-water annual net lateral DIC flux to be estimated for each shelf system. This approach characterises the shelf contribution to the ocean interior and the results are comparable to those reported in existing in situ and modelling studies. The findings indicate an annual DIC export of 59.1 ± 41 Tg C y⁻¹ (mean ± 1 sigma uncertainty) from the northwest European shelf and 9.2 ± 24.6 Tg C y⁻¹ from the northeast U.S. continental shelf. Notably, the temporal changes in export differ: the northwest European shelf exhibits an increasing export rate (+0.43 Tg C y⁻¹), while the northeast U.S. shelf shows a decreasing rate (−0.18 Tg C y⁻¹).
These differences reflect contrasting physical and biogeochemical regimes. For the northwest European shelf, the results align closely with those reported in the literature, reflecting the strong continental shelf pump mechanism, which is effectively captured by our methodology. However, the literature for the northeast U.S. continental shelf suggests a net import of DIC, which is not well characterised by our mean results, likely because of the complexity of this region and the lack of a significant inorganic carbon continental shelf pump there. Enhancing the complexity of our approach to account for these regional oceanographic features, along with improved handling and integration of the available in situ data, is likely needed to improve the accuracy of these results: for example, accounting for additional horizontal DIC gradients across the shelf break and for the influence of Gulf Stream intrusions and eddies, which drive exchanges between distinct water masses such as Gulf Stream-derived waters, slope water, and shelf water. These findings represent the first entirely observation-driven, synoptic-scale approach to estimating the annual net lateral deep-water flux of DIC between the shelf and open ocean across two distinct continental shelves. By offering a synoptic-scale assessment of lateral DIC fluxes and their associated uncertainties, this study should better constrain regional shelf-sea carbon budgets, and it offers a framework for quantifying the contribution of individual shelf seas to the global oceanic carbon budget. The results demonstrate how combining remote sensing with knowledge of internal ocean physics and in situ data allows satellite observations of the surface ocean to be used to understand internal ocean water flows and concentrations.
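To illustrate the scale of such an estimate, the core quantity reduces to a velocity × concentration-excess × area calculation; the sketch below uses hypothetical placeholder numbers, not the study's data or method details.

```python
# Back-of-envelope sketch (hypothetical numbers, not the study's values):
# net lateral DIC flux ~ mean off-shelf velocity x vertical-mean DIC
# excess x cross-sectional area of the shelf break, in Tg C per year.
MOLAR_MASS_C = 12.011       # g mol^-1
SECONDS_PER_YEAR = 3.156e7  # s yr^-1

def lateral_dic_flux_tg_per_yr(velocity_m_s, dic_excess_mol_m3, area_m2):
    """Net lateral DIC flux (Tg C yr^-1) through a shelf-break section."""
    mol_per_s = velocity_m_s * dic_excess_mol_m3 * area_m2  # mol s^-1
    return mol_per_s * MOLAR_MASS_C * SECONDS_PER_YEAR * 1e-12  # g -> Tg

# e.g. 1 cm/s mean off-shelf flow, 0.01 mol m^-3 DIC excess, and a
# 1500 km long, 100 m deep shelf-break section:
flux = lateral_dic_flux_tg_per_yr(0.01, 0.01, 1500e3 * 100.0)
```

With these illustrative inputs the flux comes out at a few Tg C y⁻¹, the same order of magnitude as the shelf exports reported in the abstract.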
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall L1/L2)

Session: A.05.04 Advances at the observation-modelling interface - PART 1

As observations become more sophisticated, and models more complex, so too does the link between them. Rapidly expanding processing and data storage requirements bring new challenges for model evaluation and validation, and the need for tools, toolboxes and cross community collaboration. Novel assimilation techniques, developments in inverse modelling, and improved treatment of combined model-observation uncertainties, require close collaboration between communities.
This session welcomes submissions at the interface of earth observation and modelling. Relevant topics include but are not limited to:
• The role of observations in climate forcings
• Observational requirements to enable the next generation of CMIP model benchmarking
• The role of emerging technologies, including machine learning and artificial intelligence, in advancing the assessment of ESMs
• Development of toolboxes and metrics for model assessment
• Novel observations relevant to high resolution model processes
• Model-observation integration approaches for process understanding


Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Combining Satellite Observations and an Earth System Model to Reassess the Pre-Industrial to Present-Day Land Cover Change Forcing

Authors: Emma Sands, Professor Ruth Doherty, Dr Fiona O'Connor, Dr Richard Pope
Affiliations: University Of Edinburgh, Met Office Hadley Centre, University of Exeter, University of Leeds, National Centre for Earth Observation
Biogenic volatile organic compounds (BVOCs) affect climate through multiple pathways. They are precursors to ozone and secondary organic aerosol (SOA) formation and, owing to their rapid oxidation, they can alter the atmospheric oxidising capacity, influencing methane lifetimes. The impact of BVOCs on climate through radiative forcing is now widely recognised, but its magnitude remains uncertain. The first satellite retrievals of isoprene, the globally dominant BVOC, have recently become available. These data, in combination with satellite retrievals of formaldehyde and aerosol optical depth (AOD) observations, are used here to quantify biases in the UK Earth system model (UKESM1.1). Formaldehyde is an oxidation product of BVOCs; biases in total column formaldehyde may therefore reflect the limitations of current BVOC oxidation mechanisms, while biases in AOD may reflect challenges in simulating SOA formation. We assess the relative importance of various processes that influence the impacts of biosphere-atmosphere interactions on atmospheric chemistry and aerosols, and evaluate them using satellite observations of the trace gases and AOD. Simulations using UKESM1.1 overestimate isoprene total column amounts over key tropical BVOC emission hotspots in the present day, in some cases by over a factor of five. Additionally, the simulated BVOC seasonality in South America is inconsistent with the observations. In contrast to the isoprene results, formaldehyde columns are biased low by up to 80% over some of the same tropical regions. AOD values are also biased low by up to 100% in the northern hemisphere, particularly from June to August, and in South America from September to November. Positive AOD biases occur in the Congo (>60%) and western Australia (>180%) throughout the year.
Replacing the standard chemistry mechanism (StratTrop vn1.0) with a more complex one (CRI-Strat 2) affects both trace gases and AOD, and the effects of the more detailed mechanism tend to dominate over the other processes included in the study. The isoprene lifetime is reduced by 50%, in part as a result of the inclusion of HOx recycling and, consequently, a greater atmospheric oxidising capacity. Formaldehyde column values increase, in part driven by a new secondary source from the oxidation of monoterpenes. The changes in oxidants influence the location and pathway of sulphate aerosol formation, resulting in higher AOD values over some regions of high anthropogenic emissions such as China. Isoprene, formaldehyde and AOD are also sensitive to the land cover and the relevant emission factors. New BVOC emission factors combined with observation-derived land cover reduce erroneously high isoprene column amounts over African savannas and in northwest Australia. Additionally, the change in land cover affects dust emissions, leading to significant impacts on AOD in northern Australia and south of the Sahel. The northern hemisphere AOD is reduced when either organically-mediated boundary layer nucleation or the contribution of isoprene to aerosol mass is included in the model, as the aerosol size distribution is shifted to smaller modes. The combination of processes can reduce some of the model biases, particularly the overestimated tropical isoprene. When all the processes included in the study are combined, significant changes also occur in the aerosol direct effect, consistent with the variations in AOD. The global mean difference in the top-of-atmosphere aerosol direct effect between the standard set-up of UKESM1.1 and the set-up inclusive of all the studied processes is +0.21 W m⁻². Consequently, we expect the calculation of radiative forcing from land use change to be affected by the new process representation.
Following the evaluation of model biases, an optimised version of UKESM1.1 is used to quantify the land cover change pre-industrial to present-day forcing. We further isolate the radiative forcing from changes in ozone and carbon dioxide concentrations, the methane lifetime, as well as the aerosol-radiation interactions and aerosol-cloud effects. Overall, the study provides a detailed model assessment focused on biosphere-atmosphere processes, updated estimates of pre-industrial to present-day land cover forcing, as well as a case study for using satellite observations for model improvement and process understanding.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall L1/L2)

Presentation: The use of climate forcings in reanalyses and operational forecasting systems

Authors: Timothy Stockdale, Professor Robin Hogan, Anca Brookshaw, Chris Goddard
Affiliations: ECMWF
Operational forecasting centres such as ECMWF have an increasing requirement to accurately model the earth system and its changes across multiple decades. Reanalyses such as the forthcoming ERA6 stretch from the mid-twentieth century to the present; seasonal forecasts for the months ahead are often dominated by the strength of the climate change signal; even medium-range forecasts are provided with calibration runs covering the last twenty years. ECMWF has recently revised the suite of climate forcing data available in its model, for use in forthcoming reanalyses (ERA6) and next-generation seasonal forecasts (SEAS6). We illustrate the range of forcing data needed, such as tropospheric aerosols, volcanic aerosols, ozone and solar forcing, and the differing roles that forcing plays on different forecast timescales. An ongoing challenge is the requirement for accurate datasets that are available both historically and in continually updated form; the value of short-term projections of forcings for operational systems will also be discussed.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Improved quantification of stratospheric radiative forcing

Authors: Yi Huang, Yuwei Wang, Qiurun Yu, Ruogu
Affiliations: McGill University
Accurate knowledge of radiative forcing is a prerequisite for predicting climate change. However, forcing quantification is subject to substantial uncertainties. One outstanding uncertainty originates from radiative perturbations taking place in the stratosphere. For example, stratospheric constituents such as water vapour and aerosols can be drastically enriched by volcanoes and tropopause-penetrating extreme storms, which may be invigorated by the warming climate and widespread wildfires. These constituents are potent radiative forcing agents and, once lifted to the stratosphere, can have lasting impacts on the global climate. Generally speaking, the climate forcing of a radiative agent is not only determined by its direct radiative effects (i.e., the instantaneous forcing) but is strongly influenced by the atmospheric adjustment process. It is increasingly recognized that the forcing magnitude of stratospheric radiative agents, such as water vapour, absorbing aerosols and ozone, is especially influenced by the adjustment process. We will present an analysis of the Hunga eruption in 2022, which was exceptional in that a remarkable amount of water vapour was injected into the stratosphere by this submarine volcano. Can the Hunga stratospheric water vapour injection exacerbate global warming or account for the global warming spike observed in the summer of 2023? To answer this question, we use a hierarchy of atmospheric models to dissect the adjustment process and quantify the adjusted forcing resulting from radiative and other physical processes. We find the overall forcing magnitude is suppressed due to compensating atmospheric adjustments. Coupled atmosphere-ocean model simulations affirm that only a minimal influence on global mean surface temperature should be expected, despite the enormous water vapour perturbation (Wang & Huang, 2024).
From the studies of various stratospheric forcings, including the Hunga case, it is increasingly recognized that predicting the climate impacts of stratospheric perturbations requires quantification of the associated atmospheric adjustment. Motivated by this need, we develop an analytical method based on a set of radiative kernels which, unlike conventional kernels, account for the radiative adjustment process and can be used to diagnose atmospheric heating rate variations. We demonstrate the application of this method to quantifying the forcing of stratospheric water vapour and aerosols enhanced by global warming, volcanoes and wildfires.
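The adjusted-forcing decomposition described here can be written schematically in the standard radiative-kernel form (a generic sketch, not necessarily the authors' exact formulation):

```latex
% Adjusted forcing = instantaneous forcing + kernel-weighted adjustments,
% where x_i are adjusting fields (e.g. stratospheric temperature, water
% vapour) and K_i are the corresponding radiative kernels.
F_{\mathrm{adj}} = F_{\mathrm{inst}} + \sum_i K_i \,\Delta x_i ,
\qquad K_i = \frac{\partial R}{\partial x_i}
```

In the Hunga case described above, compensating terms in the sum act to suppress the overall forcing magnitude relative to the instantaneous forcing alone.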
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Seeking Consistency Amongst Satellite Datasets for Enhanced Earth System Science Modeling

Authors: Dr Gregory Duveiller, Ulisse Gomarasca, Moritz Link, Gustau Camps-Valls, Dr. Maria Piles
Affiliations: Max Planck Institute for Biogeochemistry, Image Processing Laboratory (IPL), Universitat de València
Satellite remote sensing is an invaluable tool for providing robust estimates of biophysical variables describing the surface of the Earth across space and time. Remote sensing already offers valuable data to calibrate, validate, and benchmark the process-based Earth system models (ESMs) used to project the future of our living planet. However, our capacity to deal with satellite data is constantly beset by trade-offs in resolution (spatial, temporal, spectral, radiometric, and even angular) combined with variable obstacles related to observational and atmospheric conditions. As such, our datasets of biophysical variables are not always at the ideal spatial and temporal resolutions to represent the land surface processes of interest for ESMs at their sub-grid scale, and satellite biophysical variables do not necessarily share the same spatio-temporal support. Additionally, because individual biophysical variables are typically retrieved from independent satellite data streams, they often differ in the underlying assumptions made in their respective processing chains, potentially compromising consistency across variables. Here, we propose a dedicated approach to tackle these issues by enhancing the quality of the information provided by satellite remote sensing data to ESMs. This work is done within the AI4PEX Horizon Europe project. We first explore whether individual variables vary more in space or in time by comparing the variability over comparable temporal (time series) and spatial (moving windows) supports. To do this effectively, we rely on Gaussianization to generate continuous probability density distributions for both spatial and temporal supports [1].
We demonstrate this approach over various biophysical variables, including soil moisture, vegetation optical depth (VOD), green leaf area index (LAI), daytime land surface temperature (LST), and sun-induced chlorophyll fluorescence (SIF), all with a spatial resolution ranging from 0.05 to 0.25 decimal degrees. In a second step, we exploit the temporal consistency of higher spatial resolution time series of greenness to extract information on spatial heterogeneity [2]. Based on the first two steps, we can identify the ideal time series of all variables to characterize how coherently they represent phenological changes and how consistently they compare across variables. As a result, we hope to provide enhanced satellite-based data streams that modelers can use to calibrate, validate, or benchmark their models within and beyond the AI4PEX project. References: [1] J. E. Johnson, V. Laparra, M. Piles and G. Camps-Valls, "Gaussianizing the Earth: Multidimensional information measures for Earth data analysis," in IEEE Geoscience and Remote Sensing Magazine, vol. 9, no. 4, pp. 191-208, 2021, doi: 10.1109/MGRS.2021.3066260. [2] G. Duveiller, G. Camps-Valls, G. Ceccherini, A. Cescatti, "Spatial homogeneity from temporal stability: Exploiting the combined hyper-frequent revisit of Terra and Aqua to guide Earth System Science," Remote Sensing of Environment, vol. 261, 112496, 2021, doi: 10.1016/j.rse.2021.112496.
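As a simplified illustration of the idea behind Gaussianization (the cited method [1] is more general, using iterative multivariate transforms), a rank-based transform maps a 1-D sample onto a standard normal distribution:

```python
# Minimal sketch of rank-based Gaussianization: map each sample to the
# standard-normal quantile of its empirical rank. This is a simplified
# stand-in for the multivariate method of Johnson et al. [1].
from statistics import NormalDist

def gaussianize(values):
    """Rank-transform a 1-D sample to be approximately N(0, 1)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        # (rank + 0.5) / n keeps quantiles strictly inside (0, 1)
        out[i] = NormalDist().inv_cdf((rank + 0.5) / n)
    return out

z = gaussianize([3.1, 0.2, 9.7, 4.4])
```

Applying the same transform to spatial (moving-window) and temporal (time-series) supports puts both on a common Gaussian scale, so their variability can be compared directly.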
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Greenhouse gas concentration forcing for CMIP7 and beyond

Authors: Zebedee Nicholls, Dr. Mika Pflüger, Prof. Malte Meinshausen
Affiliations: Climate Resource, IIASA, University of Melbourne
The world's premier project for comparing coupled climate models (AOGCMs) and earth system models (ESMs) is entering its seventh phase: the Coupled Model Intercomparison Project phase 7 (CMIP7). Climate forcings are an integral part of CMIP because of their use as the input drivers to the AOGCMs and ESMs. Here we present the greenhouse gas concentration forcing (GHG forcing) used in CMIP7 for both the historical period and multiple future scenarios. Changes in atmospheric greenhouse gas concentrations are a key driver of climate change, making these forcings one of the key inputs for accurate reproduction of historical climate change and projection of future climate change (conditional on the underlying emissions scenario). We analyse the changes in the GHG forcing since CMIP6 and show that, while the concentrations of some greenhouse gases have decreased, the concentrations of the two major drivers, carbon dioxide and methane, continue to rise. Beyond changes in global- and annual-mean GHG concentrations, we also present monthly, latitudinally-resolved GHG forcing. These spatiotemporal variations are particularly important for GHGs with a shorter lifetime that can have strong latitudinal gradients, such as methane. Alongside the GHG forcing to be used in CMIP7, we also present early results from an ESA research project that considers how to incorporate satellite-based observations into the GHG forcing production algorithm (an observation source which is currently not used). These early results provide a direct comparison between the ground-based observations that are currently used and the satellite-based observations that will be incorporated. In addition, we discuss some of the uncertainties in the various data sources, how these could be incorporated into uncertainty in the final GHG forcings, and how these uncertainties could be better explored through climate modelling.
Beyond their use in CMIP, the greenhouse gas forcings are used in a number of other applications, such as seasonal weather forecasting and the reduced complexity model intercomparison project (RCMIP). As a result, we also present an initial suggestion for how these forcings could be updated approximately annually, a significant increase from the current frequency of roughly once every seven years.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall L1/L2)

Presentation: The Climate Modelling User Group, Current and Future Work

Authors: Amy Doherty, Dr Richard Jones
Affiliations: Met Office
The Climate Modelling User Group (CMUG) brings climate modelling and research perspectives to ESA’s Climate Change Initiative (CCI) programme. A consortium of 12 modelling centres from across Europe, CMUG is now in its fourth phase and continues to work closely with the CCI+ projects on the interface between observations and models. Feedback from each community drives improvements to both climate models and climate observational datasets. CCI Essential Climate Variable (ECV) datasets are evaluated, and their consistency, usability, quality and applicability demonstrated, by CMUG through their use in climate modelling, in reanalyses and in other applications. The models are in turn evaluated against the high-quality climate data records supplied, improving output and process understanding. CMUG promotes the use of the ECV datasets by contributing to the development of data analysis tools, which allow easy access to, and manipulation of, the data by climate scientists, modellers and researchers. Wider use of CCI ECVs is also encouraged by facilitating communication between data producers and data users through forums such as the Climate Science Working Group (CSWG). CMUG’s work helps to ensure that the CCI ECV datasets are used by, and meet the needs of, the climate research, climate monitoring and climate modelling communities. This presentation summarises the current and future work being undertaken by CMUG, including results of the latest CMUG science studies and how the links between the observation and modelling communities might be continued in future phases of ESA’s climate programme. [Please note that exactly which research will be highlighted in this talk will depend on the extent to which these topics are included in other LPS talks]
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall L3)

Session: C.01.16 NASA’s Earth Science Technology Validation on CubeSats/SmallSats and their path towards building future missions.

With the advent of increased CubeSat and SmallSat constellation deployments by both government and commercial entities, there is a need to assess technology maturity and its impact on scientific research. Since 2012 the NASA Earth Science Technology Office has been running research programs focused on technology validation in space. These programs encourage flying new technologies and new methods on CubeSat and SmallSat platforms. The basic premise of the programs is to validate new technologies before they are implemented on future NASA missions. The technologies validated under these programs were instrumental in the architecture of CubeSat/SmallSat constellations like TROPICS and INCUS. Among other successful instruments, IceCube produced the first-ever global atmospheric ice map in the 883-GHz band. Similarly, RainCube and TEMPEST-D observations of typhoons and hurricanes led to the future INCUS mission.

This session plans to have invited presentations on the TROPICS and INCUS missions and on upcoming technology demonstration programs such as GRATTIS and ODIN.

Presentations and speakers:


Technology Validation on CubeSats/SmallSats and their path towards building future missions


  • Sachidananda Babu - NASA/ESTO

Earth Observing Systems of the Future: Proliferated Constellations of Small Satellites, Large-Format Arrays, and Cognitive Sensing


  • William Blackwell - MIT/LL

The Gravitational Reference Advanced Technology Test In Space (GRATTIS)


  • John Conklin - Univ of Florida

ODIN – An Optomechanical-Distributed Instrument for Inertial Sensing and Navigation


  • Felipe Guzman - Univ of Arizona
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.61/1.62)

Session: E.05.01 Traceable, Transparent Supply Chain Data for Monitoring: Examples from the Forest Data Partnership and A Call to Accelerate Industry Alignment.

Applying remote sensing monitoring data and models to supply chains requires traceability systems linking products to accurate and complete geographic ground (vector) data. The Forest Data Partnership (Google, UN FAO, Unilever, NASA SERVIR, USAID, and the World Resources Institute (WRI)) has been exploring innovative tools and systems to make supply chain data more accessible (open), well documented and interoperable, so that public and private sector data can be better integrated with remote sensing analysis.

This session will explore how open-source tools, open data exchange, anonymization, creative commons licensing and other soft infrastructure can unlock geospatial use cases around transparency, regulation (EUDR and CSDDD), ESG risk, double materiality, and other trends in disclosure and monitoring of supply chains. Examples include applying AI to quality control, ground-truth verification, open data exchange standards, and the land cover validation engine Whisp (“What is in that plot?”), a solution to implement convergence of evidence.

https://www.forestdatapartnership.org/news-events/navigating-data-challenges-and-compliance-for-deforestation-free-supply-chains

Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: EO-Based Biomass Data for Estimation and Reporting: Needs, Approaches and Recommendations

Authors: Dr. Daniela Requena Suarez, Dr. Natalia Málaga, Dr. Neha Hunka, Dr. Chad Babcock, Dr. Carly Green, Dr. Naikoa Aguilar Amuchastegui, Dr. Javier G.P. Gamarra, Larlyn Faith Aggabao, Alexs Arana Olivos, Jorge Carranza Castañeda, Ricardo de la Cruz Paiva, Dr. Jingjing Liang, Prof. Erik Næsset, Dr. Anssi Pekkarinen, Maurizio Santoro, Dr. Maria José Sanz, Dr. Frank Martin Seifert, Dr Martin Herold
Affiliations: Helmholtz GFZ German Research Centre for Geosciences, Remote Sensing and Geoinformatics Section, University of Maryland, Department of Geographical Sciences, University of Minnesota, Forest Resources Department, Environmental Accounting Services, Climate Change Fund Management Unit, World Bank, Forestry Division, Food and Agriculture Organization of the United Nations, Forest Management Bureau, Department of Environment of Natural Resources, Servicio Nacional Forestal y de Fauna Silvestre, Ministerio de Desarrollo Agrario y Riego, Purdue University, Department of Forestry and Natural Resources, Norwegian University of Life Sciences, GAMMA Remote Sensing AG, Basque Centre of Climate Change, European Space Research Institute, European Space Agency
The increasing availability of EO-based forest biomass estimates has sparked critical questions for the land-based climate change mitigation community: Can these data contribute to improving National Greenhouse Gas Inventories (NGHGI) through their Measurement, Reporting and Verification (MRV) procedures to the UNFCCC? What are the key considerations behind their correct and effective use in this context? NGHGI specialists, national forest monitoring teams, policymakers, EO-biomass data providers, validation-verification bodies and researchers have raised these questions in diverse contexts. Simultaneously, a range of targeted efforts has emerged globally, including (1) direct efforts of national forest monitoring teams to produce national biomass maps, often collaboratively with research-oriented bodies or international agencies, (2) collaborative efforts of EO experts and researchers to work with national forest monitoring teams to take up open-access EO-based biomass products, and (3) capacity-building workshops to equip national teams with the latest methods and tools to integrate biomass maps in their monitoring efforts. Yet, there are currently no agreed-upon practices for their appropriate use within national MRV procedures. To address the questions raised above in a consistent and coordinated manner, the Global Forest Observations Initiative (GFOI) is leading a collaborative effort to develop much-needed guidance on the appropriate use of EO-based biomass estimates for MRV purposes and define the next steps needed to address existing research and capacity gaps limiting the operational use of biomass maps in national estimation and reporting processes. This ongoing effort is being carried out jointly by EO-based biomass data providers and researchers, NGHGI specialists and national forest monitoring teams. Our presentation provides an overview of the GFOI R&D coordination and synthesis activities related to developing guidance on this topic, which include: 1.
Insights on the main needs and challenges behind the use of EO-based biomass data, identified through an online survey of over 70 country biomass experts and FAO’s Global Forest Resources Assessment (FRA) National Correspondents. 2. A guiding framework for the selection of methodological approaches for using EO-based data, based on the intended final use and ground data availability. 3. An assessment of the operationalisation level of methodologies that use EO-based biomass data in the context of NGHGIs. 4. An overview of country case studies showcasing the integration of NFI data with EO-based biomass data, highlighting the overarching lessons learned within NGHGI contexts, as well as examples of additional internal uses, such as reporting on Sustainable Development Goals (SDG) indicators and implementing country-specific forest sector strategies. 5. Recommendations for EO-based biomass data providers and researchers aimed at improving the communication of their estimates and methods. 6. Ongoing efforts towards enhancing research and guidance on using EO-based biomass data to estimate forest removal factors. These efforts aim to support end users, such as national forest monitoring teams, as they consider incorporating EO-based data into their estimation and reporting processes to enhance their ability to meet IPCC-compliant reporting standards. Furthermore, these efforts will equip reviewers and verifiers with tools to assess reports that use these datasets. By fostering a shared understanding and establishing best practices, we aim to bridge the gap between technological advancements and practical applications, contributing towards global climate change mitigation goals.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: A Pan-tropical Tree Crop Map

Authors: Katelyn Tarrio, Nicholas Clinton, Julia Guo, Bryce Cronkite-Ratcliff, Estefania Lahera, Daniel Coelho, Devaja Shah, Alicia Sullivan, Oliver Guinan, Tanya
Affiliations: Google
Planted tree crops make up a significant part of landscape composition in the tropics; accurately differentiating them from natural forests has major implications for conservation, carbon emissions estimation and zero-deforestation supply chain regulations such as the European Union Deforestation Regulation (EUDR). However, the distribution and expansion of key tree commodities linked to deforestation are largely unknown. Existing map products are limited in geographic extent and spatial resolution and lack broader contextual information on land use. Here we present a pan-tropical tree crop probability map at 10-meter resolution for 2020 and 2023, focused on four major commodities: coffee, cocoa, rubber and oil palm. We also provide information on the presence of natural forests. Map outputs include 1) tree type (coffee, cocoa, rubber, palm, natural forest) in a given pixel, representing the class with the highest likelihood, and 2) per-pixel probabilities of each of the five tree types. Maps were produced through a machine learning framework that integrates diverse, “community”-sourced training datasets with annual composites derived from Sentinel-1, Sentinel-2, ALOS-PALSAR 2 and ALOS DSM data. Preliminary results show high model performance both qualitatively, with accurate mapping of smallholder plantations, and quantitatively, with area under the receiver operating characteristic (ROC) curve (AUC) values exceeding 90%. Using maps from the two time periods, we demonstrate how this dataset can be used to track changes in commodity presence, including regrowth and expansion into forested areas. These maps also support environmental applications such as early-warning systems for deforestation risk and land-use change. Regions with high probabilities across multiple tree types, which are challenging to classify, can serve as priority areas for field data collection and for future research to address gaps in information.
This dataset helps enhance supply chain transparency for the EUDR and similar efforts, offering critical insights into the risks of commodity-driven deforestation, rates of tree plantation expansion, and the conservation status of natural forests.
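For context on the reported metric, ROC AUC equals the probability that a randomly chosen positive pixel receives a higher model score than a randomly chosen negative one; here is a minimal sketch with made-up labels and scores, not the study's data.

```python
# Hedged sketch: ROC AUC computed as the Mann-Whitney rank statistic,
# i.e. the fraction of positive/negative pairs ranked correctly
# (ties count half). Labels and scores below are illustrative only.
def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# e.g. hypothetical per-pixel "oil palm" probabilities vs. reference labels
auc = roc_auc([1, 0, 1, 0, 1], [0.9, 0.2, 0.8, 0.4, 0.3])
```

An AUC above 0.9, as reported in the abstract, means over 90% of such pairs are ranked correctly.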
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Towards a Digital Public Infrastructure for Zero Deforestation Value Chains and Regulatory Compliance

Authors: Remi Dannunzio, Jonas Spekker, Pascal Ripplinger
Affiliations: UNFAO, GIZ
There are numerous earth observation maps available on tree cover, forest disturbances and land use & land cover. However, relying exclusively on any one dataset for identifying deforestation risk can be problematic: maps often differ from one another in accuracy, resolution, or the classification systems and definitions applied. To address these problems, the Convergence of Evidence approach combines the information of multiple datasets to gain a detailed understanding of a particular area and to reduce the impact of potential errors or biases in individual data sources. Whisp is a simple and innovative web application designed to easily and quickly assess plots for deforestation risk in the face of emerging international regulations on deforestation-free commodities such as the EUDR. Whisp translates this approach into an integrated open-source and free-to-use software solution, bringing together multiple freely available global and regional datasets on tree cover, forest disturbances, and land use and land cover (including commodities such as coffee and cocoa that often constitute both tree cover and agricultural land use). With Whisp’s risk assessment function, users can assess their own geodata for deforestation risk against a logically stacked hierarchy of these datasets. It comes with a simple and intuitive web application to screen single plots against actual map visualizations, as well as an API for large-scale processing of hundreds of plot geometries at once. The API can similarly be accessed through a simple web interface or built into users’ own software solutions, and the output deforestation risk statistics can easily be visualized in pre-programmed dashboards. Whisp has been developed with simple, interoperable and lightweight data formats in mind, utilizing mainly GeoJSON and CSV for inputs and outputs, and fully supporting the AgStack asset registry with its privacy-friendly, easy-to-share GeoIDs.
Through its open-source nature and free availability of the code on GitHub, Whisp can be forked and fully customized to the users’ needs, including the possibility to integrate own proprietary maps. INATrace is a digital traceability solution for agricultural raw materials—from production to the final product. To maximize its developmental impact, the software’s source code is openly and freely available, allowing supply chain actors to adapt it according to their needs. The "Principles for Digital Development" form the foundation of INATrace, particularly the principle of "Design with the User." INATrace enables farmer organizations to digitally record all production steps and manage their own data. Additionally, INATrace includes an app feature that records farmers' fields and directly checks them for deforestation in accordance with the EUDR definition. This is made possible through an interface with WHISP and the use of unique anonymous geo-IDs provided by the "Asset Registry" of the AgStack project by the Linux Foundation. The main objective of INATrace is to provide an inclusive traceability solution for small-scale farmers that ensures data sovereignty and enables their active participation in global supply chains.
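The "logically stacked hierarchy of datasets" described above can be illustrated with a minimal sketch. The decision rules, input flags, and risk categories below are hypothetical simplifications for illustration, not Whisp's actual logic or API:

```python
# Illustrative convergence-of-evidence decision stack for one plot.
# The layer flags and rules are hypothetical, not Whisp's implementation.

def deforestation_risk(tree_cover_2020: bool,
                       disturbance_after_2020: bool,
                       commodity_land_use: bool) -> str:
    """Combine independent map layers into a single risk category."""
    if not tree_cover_2020:
        return "low"       # no forest at the cutoff date -> no EUDR risk
    if disturbance_after_2020 and not commodity_land_use:
        return "high"      # forest lost, no established agricultural use
    if disturbance_after_2020:
        return "medium"    # layers disagree; needs manual review
    return "low"           # forest still standing

# Screening a batch of plots, as an API would for a GeoJSON feature collection:
plots = [(True, True, False), (True, False, False), (False, True, True)]
print([deforestation_risk(*p) for p in plots])  # ['high', 'low', 'low']
```

The point of stacking the layers in a fixed order is that each plot receives one transparent, reproducible verdict even when the underlying maps disagree.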
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Comparing Three Data-Driven Approaches to Assessing Deforestation Risk for Regulatory Compliance With a Convergence of Evidence Framework

Authors: Katelyn Tarrio, Dr. Karis Tenneson, Andrea McMahon, Karen Dyson, Ate Poortinga, Remi D'Annunzio, Nick Clinton, Jeffrey A. Cardille, Robert Kennedy, Sime McKenzie, Elijah Dalton, Dr. David Saah
Affiliations: Google, Spatial Informatics Group, FAO, McGill University, Oregon State University
The global push for sustainable and deforestation-free supply chains requires operators to demonstrate compliance with strict environmental standards and national laws. These regulations emphasize the need for clear, traceable, and integrated geospatial data to support effective risk assessment and decision-making. This shift compels us to rethink how geospatial data is utilized to ensure transparency and accountability. In this study, we test a new convergence of evidence framework, a methodology that combines multiple geospatial datasets to provide a more accurate and reliable understanding of land-use changes. We evaluate three approaches to convergence of evidence, focusing on their ability to integrate satellite-derived forest maps, deforestation alerts, and ancillary datasets into a unified analysis for assessing deforestation risks. These methods aim to deliver actionable insights into the risk profiles of commodity plots, enabling more informed decision-making by stakeholders. We examine the application of the convergence of evidence framework in monitoring supply chains for several key commodities newly regulated under the EUDR and compare three approaches: (1) Single-Source Methodology: This approach uses a single dataset, such as the Global Map of Forest Cover 2020 by the European Union Joint Research Centre, to determine whether land was covered by forest in 2020. While simple and easy to implement, its reliance on a single dataset can result in inaccuracies in complex landscapes where data gaps and biases are more pronounced. (2) Decision-Tree Frameworks (WHISP): The open-source tool WHISP (What’s in That Plot), developed by the Forest Data Partnership and part of FAO’s Open Foris platform, enhances analysis by integrating multiple datasets to assign risk levels (e.g., high, medium, low). 
WHISP uses the convergence of evidence approach to combine data on forest cover, agricultural use, and deforestation events into a transparent and replicable workflow. (3) Bayesian Updating of Land Cover Classifications (BULC): The BULC methodology, developed by Cardille et al., dynamically updates land cover classifications by incorporating new data into a probabilistic framework. This iterative process synthesizes multiple data sources, continuously refining deforestation risk assessments at the pixel level. BULC overcomes the limitations of single datasets by reducing biases and providing a more accurate depiction of land-use changes over time. The analysis compares each approach in terms of the accuracy of the deforestation information provided versus the ease of use and operational implementation. The findings show that integrating multiple datasets under a convergence of evidence framework significantly improves the accuracy and reliability of deforestation risk assessments. However, this comes at the cost of increased complexity and resource requirements. These trade-offs highlight the importance of balancing precision with simplicity and cost-effectiveness to meet the diverse needs of stakeholders. Beyond the technical analysis, the study examines broader implications for industry alignment and stakeholder engagement. The convergence of evidence framework can help align producers, regulators, and data providers, creating a scalable system for managing sustainable supply chains. This research highlights how geospatial data integration can support large-scale sustainability efforts while addressing practical challenges such as computational demands, data accessibility, and capacity building.
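The Bayesian updating idea behind BULC can be sketched for a single pixel. This is a generic Bayes-rule fusion of map "votes" with an assumed per-map accuracy, in the spirit of the method, not the authors' implementation:

```python
import numpy as np

# Minimal per-pixel Bayesian update: each new map assigns a class with an
# assumed accuracy, and Bayes' rule fuses the evidence. Accuracies and
# class setup are illustrative, not BULC's actual configuration.

def bayes_update(prior, observed_class, accuracy, n_classes=2):
    """prior: class probabilities; observed_class: index assigned by the
    new map; accuracy: assumed probability that the map is correct."""
    likelihood = np.full(n_classes, (1.0 - accuracy) / (n_classes - 1))
    likelihood[observed_class] = accuracy
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Start undecided between forest (0) and non-forest (1); three independent
# maps flag the pixel as deforested (class 1) with 80% assumed accuracy.
p = np.array([0.5, 0.5])
for obs in [1, 1, 1]:
    p = bayes_update(p, obs, accuracy=0.8)
print(p)  # confidence in "non-forest" grows with each concordant map
```

Three concordant 80%-accurate maps push the posterior to 64/65 ≈ 0.985, which is how converging evidence reduces the impact of any single dataset's errors.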
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Forest Land Use Assessment Using Remote Sensing and Artificial Intelligence in Support of FAO FRA and EUDR

Authors: Dr. Timothée Stassin, Dr. Diego Marcos, Adolfo Kindgard, Simon Besnard, Dr. Alba Viana Soto, Cornelius Senf, Frederic Achard, Dr. Anssi Pekkarinen, Martin Herold
Affiliations: GFZ German Research Centre for Geosciences, Inria, Food and Agriculture Organization of the United Nations, Technical University of Munich, European Commission Joint Research Centre
Forest as a land use is a key variable to monitor at large scale to support countries reporting on ‘Life on Land’ indicators of the Sustainable Development Goals, and to the Global Forest Resources Assessment of the Food and Agriculture Organization of the United Nations (FAO FRA). The recent European Union Regulation on Deforestation-free products (EUDR), expected to tackle climate change and biodiversity loss, also relies on a land use definition of forests. Land use monitoring and reporting is challenging. Distinguishing between different land uses is inherently complex: areas may have a similar land cover (e.g., trees) but differ from a land use perspective (e.g., fruit orchard vs planted forest). Forest land use classification has thus far extensively relied on expert interpretation, for example through the FAO FRA and its participatory Remote Sensing Survey (RSS). In the FAO FRA 2020 RSS, more than 800 local experts were trained in the visual interpretation of remote sensing imagery and consistently analyzed 400 000 sample locations. These surveys provide high-quality interpretations at sample locations for regional and global analysis, but no spatially explicit information on forest land use distribution. Such maps would be particularly valuable in informing global and regional forest conservation and restoration actions, and critical to the implementation of the EUDR. In this respect, the European Commission Joint Research Centre recently released the Global map of Forest Cover (GFC 2020). The GFC 2020 is a Forest/Non-Forest map that was developed by combining existing land cover maps with a series of overlays and decision rules to align it with the land use definition of forest. Here, we focus on how Artificial Intelligence in combination with Earth Observation and FAO FRA RSS reference data can be leveraged to develop tools for forest land use assessment.
Central to our approach is taking inspiration from the way human experts perform this task: for example, translating historical knowledge into suitable time series data, and translating landscape dynamics knowledge into the integration of spatial context (e.g., buffer areas) and proxies of human activity (e.g., infrastructure and population density). Our work targets 16 land use classes as defined by the FAO FRA: stocked vs temporarily unstocked forests; naturally regenerating vs planted forests; grassland, cropland, settlements, with or without trees; other wooded land; bare soil; water; oil palm; mangroves. We discuss the performance of our classification models, currently achieving overall accuracies of 60% [16 classes], 78% [F/OWL/OL/W] and 84% [F/NF], and how they compare to land cover baselines and to the GFC 2020. We also discuss how these results will be validated using a high-quality dataset of areas analyzed by multiple human experts, and the roadmap towards the generation of a forest land use map. Finally, we present how our models can help increase the productivity and accuracy of forest land use reporting and monitoring, including the Quality Assurance/Quality Control (QA/QC) procedures of the ongoing FAO FRA 2025 RSS reporting cycle. To ensure uptake by countries and facilitate end-user capacity building, we explain how our models will be made available on FAO Sepal, the open-access cloud-based platform for forest and land monitoring.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Mapping Plantation Forests in Vietnam's Central Highlands Using LSTM and Sentinel-2 Time Series Data

Authors: Juliane Huth, Cornelia Zygar, Dr. Thorsten Hoeser
Affiliations: German Aerospace Center (DLR)
While forest cover continues to decline globally, Vietnam stands out as an exception. Between 2010 and 2020, Vietnam ranked among the top five countries with the largest annual increase in forest cover, gaining 126,000 hectares per year. This represents an annual growth rate of 0.9%. In Vietnam, however, it is mainly the area of plantation forests that has increased, making it important to distinguish between 'naturally regenerating forest' and 'plantation forest'. As plantation forests provide fewer ecosystem services than natural forests, detailed knowledge of plantation locations and species is a valuable asset for monitoring these ecosystems. For Vietnam, this valuable up-to-date information on plantation forest areas and species is not publicly available. To distinguish industrially important tree plantations from natural forests in the Central Highlands of Vietnam, the focus region of this study, we applied a Long Short-Term Memory (LSTM) model. The target classes were acacia plantation, rubber plantation, evergreen forest, deciduous forest and background. Rubber is a deciduous plant with a very characteristic phenology, whereas acacia is an evergreen plant. The input data for this time series-based classification were 12 Sentinel-2 (S2) monthly median composites for 2020, consisting of the 10 S2 bands with resolutions of 10 and 20 m. LSTM models with three different kernel sizes (1, 3 and 5) were investigated. The temporal transferability of the model was evaluated by testing on a 2021 test set. The F1 scores for rubber - 0.96 for the test year 2020 and 0.97 for the temporal transfer to 2021 - were more robust than those for the acacia class - 0.96 for 2020 but only 0.63 for 2021. This can be explained by the fact that rubber, due to its characteristic heterogeneous phenology, is better discriminated by the LSTM model than the homogeneous time series of the evergreen acacia plantations.
Compared to traditional approaches such as random forest, the LSTM results were achieved with a minimal amount of data pre-processing of the Sentinel-2 Level-2A data, which contributed to a much more efficient end-to-end classification pipeline.
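The data flow of such a per-pixel time-series classifier can be sketched with a from-scratch LSTM forward pass. The weights here are random and the sizes merely mirror the setup described above (12 monthly composites, 10 bands, 5 classes); this shows the architecture's shape, not the trained DLR model:

```python
import numpy as np

# Shape-level sketch: an LSTM consumes 12 monthly Sentinel-2 composites
# (10 bands each) for one pixel and outputs scores for 5 classes.
rng = np.random.default_rng(0)
n_bands, n_steps, n_hidden, n_classes = 10, 12, 16, 5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One weight matrix per gate (input, forget, output, candidate), acting on
# the concatenated [hidden state, current input] vector.
W = {g: rng.normal(0, 0.1, (n_hidden, n_hidden + n_bands)) for g in "ifoc"}
b = {g: np.zeros(n_hidden) for g in "ifoc"}
W_out = rng.normal(0, 0.1, (n_classes, n_hidden))

def lstm_classify(x_seq):
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    for x in x_seq:                       # iterate over the 12 months
        z = np.concatenate([h, x])
        i = sigmoid(W["i"] @ z + b["i"])  # input gate
        f = sigmoid(W["f"] @ z + b["f"])  # forget gate
        o = sigmoid(W["o"] @ z + b["o"])  # output gate
        g = np.tanh(W["c"] @ z + b["c"])  # candidate cell state
        c = f * c + i * g                 # cell state carries phenology memory
        h = o * np.tanh(c)
    logits = W_out @ h                    # classify from the final state
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # softmax class probabilities

probs = lstm_classify(rng.normal(size=(n_steps, n_bands)))
print(probs.shape, float(probs.sum()))
```

The cell state acts as a memory over the growing season, which is why a class with a distinctive annual trajectory (deciduous rubber) separates more easily than one with a flat, evergreen signal.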
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall G1)

Session: D.02.04 Machine Learning for Earth System Observation and Prediction - PART 2

Earth, its weather and climate constitute a complex system whose monitoring and modelling have undergone remarkable progress in recent years. On one hand, enhanced spaceborne observations and the development and availability of novel in-situ sensor data have provided unprecedented information about our planet's systems. On the other hand, the integration of AI and big data analytics has opened new frontiers in how we approach the challenges of weather and climate modelling. Altogether, these elements are key drivers of innovation in Earth System Observation and Prediction (ESOP) for Weather and Climate.

Machine/Deep Learning (ML/DL) techniques have revolutionized numerous fields and have proven to be particularly advantageous in various applications such as image recognition, traffic prediction, self-driving vehicles, and medical diagnosis. These techniques have garnered significant attention and adoption within the Earth System Observation and Prediction (ESOP) community due to their ability to enhance our understanding and prediction capabilities of the Earth's complex dynamics. One prominent area where ML/DL techniques have proven invaluable is in the development of high fidelity digital models of the Earth on a global scale. These models serve as comprehensive monitoring, simulation, and prediction systems that enable us to analyse and forecast the intricate interactions between natural phenomena and human activities. By providing a holistic understanding of the Earth's dynamics, these models contribute to the achievement of the European Commission's Green Deal and Digital Strategy goals towards a green & digital transition.

ML/DL solutions also showcased promising advancements in data assimilation, weather forecasting and climate prediction. Algorithms can be trained to identify instances where physical models may exhibit inaccuracies and subsequently learn to correct their predictions accordingly. Moreover, AI-based models have the potential to create hybrid assimilation and forecasting models that combine the strengths of traditional, physics-based methodologies with the capabilities of ML/DL, ultimately enhancing the accuracy and reliability of predictions.

The aim of this session is to invite new ML4ESOP explorers to present their latest innovations in ESOP, with a specific focus on the exploration of new data sources and benchmarks for weather and climate modelling, the adaptation of large-scale data-driven Earth system models, and novel demonstrations of their applicability to weather and climate observation and prediction. This session invites experts from diverse fields to discuss how recent advances innovate on established ESOP approaches, to address current challenges, and to identify opportunities for future work.
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall G1)

Presentation: Towards a large-scale river discharge forecasting system based on Earth Observation, AI and precipitation forecasts

Authors: Valentin Fouqueau, Vincent Cloarec, Julianna Devillers
Affiliations: vorteX-io
Due to increasing urbanization and the intensification of rainfall events associated with global warming, forecasting flood events has become increasingly important. Predicting flood intensity, estimating territorial extent, or forecasting river behavior requires estimating the river's flow rate. Traditionally, the estimation of flow rate based on atmospheric conditions, primarily rainfall, is conducted using a rainfall-runoff transformation model. These conceptual models rely on input parameters such as land use, morphological traits of catchments, watershed response times, and more. These parameters, often semi-empirical, are challenging to estimate and require strong assumptions and extensive calibration for specific watersheds. Consequently, due to the necessary oversimplifications of these models, the results are often disappointing, limiting the models' effectiveness and adaptability across different regions and scenarios. Deep learning, particularly Long Short-Term Memory (LSTM) networks, has shown great potential in streamflow prediction, offering improvements over traditional hydrodynamic models. LSTMs can capture complex long-term dependencies in hydrological processes and generalize well across different basins, including ungauged ones, improving flood forecast reliability. Recent studies have demonstrated that LSTMs can predict streamflow at multiple temporal resolutions and outperform existing models in forecasting extreme events, even in the context of climate change. Additionally, integrating physical constraints into LSTM models has enhanced their physical interpretability and ensured more consistent predictions. LSTMs can also learn hydrological processes from data, providing an interpretable alternative to traditional models. To provide a predictive indicator for flooding, vorteX-io is currently developing a model trained on a vast dataset of flow measurements from European national networks.
The model training uses input datasets such as geomorphological data and the characteristics of each watershed, along with atmospheric datasets, with the flow rate as the target variable. These datasets include high-resolution national data, such as COMEPHORE in France for rainfall, and global datasets like ERA5 from ECMWF, covering key atmospheric variables like rainfall, wind, temperature, snow, and soil water content. This model should be able to forecast the flow rate of any river as a flood indicator, based on global geomorphological data and global atmospheric forecasts. We present here the developments and results obtained with this innovative system, designed to be deployed at large scale across Europe.
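The input layout common to such LSTM rainfall-runoff models — static catchment characteristics repeated at every timestep alongside the dynamic meteorological forcings — can be sketched as follows. Variable names and sizes are illustrative, not the vorteX-io configuration:

```python
import numpy as np

# Assemble one (timesteps, features) input tensor per catchment: dynamic
# forcings plus static watershed attributes tiled along the time axis.
n_steps = 365                             # one year of daily forcings
forcings = np.random.rand(n_steps, 5)     # e.g. rain, wind, temperature,
                                          # snow, soil water content
static = np.array([1250.0, 0.42, 310.0])  # e.g. area (km^2), forest
                                          # fraction, mean elevation (m)

x = np.concatenate([forcings, np.tile(static, (n_steps, 1))], axis=1)
print(x.shape)  # (365, 8): 5 dynamic + 3 static features per timestep
```

Feeding the static attributes at every step is what lets a single model generalize across basins, including ungauged ones, instead of being calibrated per watershed.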
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall G1)

Presentation: BSRLC-U: The first tri-annual 10-m maps of urban Residential, Industrial and Open spaces for the Baltic Sea region over two decades (2000 – 2021)

Authors: Dr Vu-Dong Pham, Dr Franz Schug, JProf. Dr. David Frantz, Dr Sebastian van der Linden
Affiliations: Earth Observation and Geoinformation Science Lab, Institute of Geography and Geology, University of Greifswald, SILVIS Lab, Department of Forest and Wildlife Ecology, University of Wisconsin, Geoinformatics - Spatial Data Science, Trier University
Detailed information on urban development over time—whether pertaining to residential, industrial, or transportation infrastructures—is critical for monitoring the pace and direction of regional growth and assessing its socio-economic impacts on the surrounding environment. In this context, Earth observation (EO) provides unique opportunities to map urban infrastructures across extensive spatial and temporal scales. Notably, the Landsat and Sentinel-2 satellite programs have been among the primary sources of EO data for urban-focused applications, consistently delivering comprehensive spatial and temporal coverage over the past four decades (Landsat) and recent years (Sentinel-2). However, the relatively coarse spatial resolution of Landsat imagery (30 meters) poses significant challenges in generating fine-grained maps of historical urban built-up types. To address this limitation, recent advancements in deep learning methods—particularly image super-resolution (SR) techniques—have enabled the upscaling of low-resolution (LR) imagery by leveraging training data from high-resolution (HR) images. One of the major drawbacks of existing SR methods is their limited applicability to real-world scenarios. Specifically, there are few large-scale, long-term time-series mapping products generated using SR-based approaches. This limitation arises from inconsistencies in data quality between recent satellite datasets (e.g., Sentinel-2 A/B, Landsat 8/9) and historical datasets (e.g., Landsat 5/7). Consequently, an SR model trained on recent data may not transfer directly to historical datasets. To address this challenge, we propose a potential solution: instead of utilizing raw satellite imagery, we apply SR to spectral-temporal metrics (STM) calculated from EO data time series. STM are pixel-wise time-series metrics (e.g., mean, median, percentiles) that exhibit greater consistency across time (inter-annual) and across different satellite sensors.
Specifically, we trained the SR model on LR and HR image pairs derived from Landsat STM and Sentinel-2 STM from the same temporal period. The trained model was then used to upscale historical Landsat STM to a 10-meter resolution, which serves as input data for subsequent processes. In urban-targeted applications, traditional methods such as pixel-based mapping often struggle to distinguish different types of built-up areas due to the absence of spatial context information. Alternative approaches, such as scene classification or semantic segmentation, leverage Convolutional Neural Networks (CNNs) to exploit spatial context, thereby improving the characterization of various built-up types. However, these methods have limitations, including reduced mapping unit resolution or reliance on manually hand-crafted training data. In this study, we adopt the concept of “center-patch” mapping, wherein a patch image is used as input, but only the central pixel is classified. This approach serves as a hybrid between pixel-based and scene-based classification, allowing CNN models to fully utilize the spatial information embedded in patch images while maintaining the high-resolution mapping characteristic of pixel-based methods. Unlike semantic segmentation, center-patch classification does not require labels for every pixel in image patches; only the central pixel needs to be labeled. This significantly simplifies reference data acquisition, particularly for large-scale applications. By combining super-resolution techniques with center-patch classification, we developed the Baltic Sea Region Land Cover – Urban (BSRLC-U) product, the first 10 m resolution tri-annual maps for the time period from 2000 to 2021 that classify three distinct types of built-up areas: (1) residential buildings, (2) industrial buildings, and (3) open spaces, roads, and railways. 
The BSRLC-U serves as a novel and complementary dataset to the existing BSRLC+ product (Pham et al., 2024, DOI: 10.1038/s41597-024-04062-w), which provides annual land cover data for the Baltic Sea Region at 30 m resolution from 2000-2021. To the best of our knowledge, BSRLC-U represents the first urban-focused mapping dataset in Europe that covers extensive regions and spans more than two decades. This dataset will be publicly available by the time of LPS2025.
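The spectral-temporal metrics at the core of this approach are straightforward per-pixel statistics over a stack of observations. The following sketch assumes a single-band stack with cloudy observations masked as NaN; the metric choice and tile size are illustrative, not the BSRLC-U processing chain:

```python
import numpy as np

# Spectral-temporal metrics (STM): per-pixel statistics over a year of
# observations, more stable across sensors and years than single scenes.

def spectral_temporal_metrics(stack):
    """stack: (time, rows, cols) reflectance for one band, with NaN for
    cloud-masked observations. Returns a (metrics, rows, cols) array."""
    return np.stack([
        np.nanmean(stack, axis=0),
        np.nanmedian(stack, axis=0),
        np.nanpercentile(stack, 25, axis=0),
        np.nanpercentile(stack, 75, axis=0),
    ])

rng = np.random.default_rng(1)
stack = rng.random((24, 50, 50))               # 24 observations, 50x50 tile
stack[rng.random(stack.shape) < 0.3] = np.nan  # simulate cloud masking
stm = spectral_temporal_metrics(stack)
print(stm.shape)  # (4, 50, 50): one layer per metric
```

Because each metric integrates over many acquisitions, the resulting layers are largely insensitive to which individual scenes were clouded out — the property that makes an SR model trained on recent STM transferable to historical Landsat STM.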
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall G1)

Presentation: Estimating Parameters of a Spatial Dryland Vegetation Model From Time Series of Satellite Images Using Differentiable Programming

Authors: Jens Van Der Zee, PhD Diego Marcos, Dr. Eric Siero
Affiliations: Wageningen University, Inria
The future state of dryland ecosystems under climate change remains uncertain, with potential outcomes ranging from desertification to regreening depending on various interacting environmental factors. Dryland ecosystems are characterized by complex vegetation dynamics that respond to a range of environmental drivers. Understanding these dynamics requires accurate parameter estimation in models that describe interactions between vegetation and water across space and time. In this work, we leverage differentiable programming to estimate the parameters of a spatial dryland vegetation model using time series of satellite images. This approach allows for simultaneous optimization of model parameters by efficiently calculating gradients of the loss function between a solution of the underlying equations and a time series of vegetation density images. The first part of the study involves a synthetic experiment, where we simulate spatial vegetation data using a known parameter set and test the ability of our method to recover these parameters. This synthetic setup enables validation of the differentiable programming framework in controlled conditions, highlighting its capability to handle the sparse and partially-observed non-linear dynamics of dryland environments. In the second part, we apply the developed method to real-world satellite imagery. We use time series data from dryland regions to estimate model parameters and investigate if the model is able to replicate characteristic vegetation patterns observed in drylands. We evaluate the model’s ability to extrapolate in space, time, and under different precipitation regimes. Preliminary results suggest that differentiable programming provides a promising tool for bridging the gap between ecological models and remote sensing data, allowing for improved model calibration and a better understanding of vegetation processes, especially in the context of climate variability. 
Our findings demonstrate the potential of machine learning techniques, particularly differentiable programming, to enhance spatial models of ecosystems by enabling direct assimilation of remote sensing data. This work contributes to the broader goal of combining machine learning with mechanistic models to understand and forecast ecosystem behaviour under changing environmental conditions.
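The core mechanism — backpropagating a data-misfit loss through every simulation step to get exact parameter gradients — can be shown on a toy problem. The non-spatial logistic growth model below stands in for the authors' spatial vegetation-water equations, and the hand-written reverse sweep stands in for what autodiff frameworks generate automatically:

```python
import numpy as np

def simulate(r, K, x0, n_steps):
    """Discrete logistic growth as a stand-in for a vegetation model."""
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x + r * x * (1.0 - x / K))
    return np.array(xs)

def loss_and_grad(r, K, x0, obs):
    """Squared-error loss against observations, plus exact gradients
    w.r.t. r and K from backpropagation through the simulation steps."""
    xs = simulate(r, K, x0, len(obs) - 1)
    loss = float(np.sum((xs - obs) ** 2))
    lam, g_r, g_K = 0.0, 0.0, 0.0
    for t in range(len(xs) - 1, 0, -1):
        lam += 2.0 * (xs[t] - obs[t])          # loss term at step t
        xp = xs[t - 1]
        g_r += lam * xp * (1.0 - xp / K)       # d(step)/d r
        g_K += lam * r * xp**2 / K**2          # d(step)/d K
        lam *= 1.0 + r * (1.0 - 2.0 * xp / K)  # d(step)/d previous state
    return loss, g_r, g_K

# Synthetic experiment: data generated with known parameters (r=0.3, K=1.0),
# gradient evaluated at a wrong guess and checked against finite differences.
obs = simulate(0.3, 1.0, 0.05, 30)
loss, g_r, g_K = loss_and_grad(0.5, 1.2, 0.05, obs)
eps = 1e-6
fd_r = (loss_and_grad(0.5 + eps, 1.2, 0.05, obs)[0] - loss) / eps
print(g_r, fd_r)  # the two gradient estimates should closely agree
```

With these gradients, recovering the true parameters is ordinary gradient descent; the value of differentiable programming is that the same recipe scales to spatially coupled equations and image-sized observations without deriving adjoints by hand.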
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall G1)

Presentation: Supporting Weather and Climate Application Development with ML-Friendly Earth Observation Data

Authors: Jörg Schulz, Roope Tervo, Viju John, Ken Knapp, Paolo Ruit, Llorenç Lliso, Pilar Rípodas, Xavier Calbet
Affiliations: EUMETSAT, NOAA NCEI, AEMET
Recent developments in ML methods have been proven to enhance the processing of Earth observation (EO) data. Yet, the adoption and operational use of ML methods are hindered by the availability of high-quality training data, necessary data preparation, training time, and infrastructure resources. EUMETSAT, together with its Member States and partners, addresses these challenges by preparing data for ML applications and extracting valuable information to create information data records. As one example, EUMETSAT, NOAA and JMA are using a distributed cloud infrastructure in the joint GEO-Ring project to create a quality-controlled radiance data set with global coverage from the measurements of all previous sensors on board geostationary satellites, which can be sustainably extended with future observations. The data set starts with the early SMS measurements in 1974, incorporates data from all past and still operating platforms, and will be continued in a consistent way with the new generation satellite instruments ABI on GOES, FCI on Meteosat Third Generation and AHI on JMA’s Himawari satellites. The data set has a spatiotemporal sampling of a few kilometres and 30 minutes over the whole time range and can offer better spatiotemporal sampling for the modern-era instruments. The quality of the data allows analysis of long-term climate variability up to trend estimates with some uncertainty. As another example, a new fusion dataset of surface radar (OPERA) and geostationary satellite (SEVIRI) measurements fosters the generation of ML-based downstream products. EUMETSAT and the Spanish Meteorological Service (AEMET) are producing a static training dataset containing OPERA weather radar composites of reflectivity, rain rate, and rain accumulation and all spectral channels from the SEVIRI instrument rapid scan data by interpolating SEVIRI data onto the spatial grid of OPERA weather radar data.
The data cover Europe with 2x2km resolution and 15-minute time step from 2008 to 2013 and are available in the European Weather Cloud (EWC) for EUMETSAT and ECMWF Member and Co-operating states. Moreover, EUMETSAT actively supports its Member States in detecting relevant meteorological features crucial for nowcasting, early warning, and product generation activities. To facilitate this, EUMETSAT and its Member States are developing a collaborative environment within the European Weather Cloud (EWC) for joint manual annotation, model development, and storing the identified features. EUMETSAT also plans to compose a database of long time series of meteorological features identified from climate data records (CDR). This database allows for the analysis of the development and relationships of the features, providing insights into climate patterns and trends. As part of these initiatives, EUMETSAT has evaluated generalized pre-trained ML models through transfer learning for object detection, specifically focusing on identifying and classifying tropical storms. This approach utilizes pre-trained models from other domains, significantly reducing the need for extensive data preparation and training. The results have shown that transfer learning can achieve comparable accuracies to bespoke models while using significantly less data, making it a highly efficient method for feature detection in EO data.
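The grid-matching step behind such a fusion dataset — putting satellite imagery onto the radar composite's grid — can be sketched with a nearest-neighbour lookup. The grids and values below are made up, and the operational SEVIRI/OPERA interpolation is more involved:

```python
import numpy as np

# Resample an image from a regular source grid onto a regular destination
# grid by nearest-neighbour lookup along each axis.

def regrid_nearest(src_vals, src_y, src_x, dst_y, dst_x):
    """src_vals: 2-D array on the (src_y, src_x) grid; the y/x arguments
    are 1-D coordinate axes (e.g. projected km). Returns values on the
    destination grid."""
    iy = np.abs(src_y[:, None] - dst_y[None, :]).argmin(axis=0)
    ix = np.abs(src_x[:, None] - dst_x[None, :]).argmin(axis=0)
    return src_vals[np.ix_(iy, ix)]

# A 3 km satellite grid resampled onto a 2 km radar grid over the same area.
src_y = np.arange(0, 300, 3.0); src_x = np.arange(0, 300, 3.0)
dst_y = np.arange(0, 300, 2.0); dst_x = np.arange(0, 300, 2.0)
sat = np.random.rand(src_y.size, src_x.size)
on_radar_grid = regrid_nearest(sat, src_y, src_x, dst_y, dst_x)
print(on_radar_grid.shape)  # (150, 150), matching the radar composite
```

Once both data sources share one grid, each radar pixel pairs directly with a vector of satellite channels — the sample layout ML training pipelines expect.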
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall G1)

Presentation: Machine Learning-based inference of InSAR coherence from detected backscatter

Authors: Dr. Francescopaolo Sica, Maximilian Barczynski, Prof. Dr. Michael Schmitt
Affiliations: University of the Bundeswehr Munich
Synthetic Aperture Radar (SAR) is a versatile remote sensing technology capable of acquiring high-resolution images of the Earth's surface regardless of weather or lighting conditions. SAR interferometry (InSAR) extends the potential of SAR by enabling the extraction of phase differences between two SAR acquisitions, facilitating applications such as topographic mapping and ground deformation monitoring. A key parameter in InSAR processing is coherence, which quantifies the correlation between two SAR images and indicates the quality of the measured interferometric phase [R1]. Coherence is traditionally derived from complex SAR data through computationally intensive processing steps, including image coregistration, interferogram generation, coherence estimation and, optionally, geocoding. In various scenarios, including mission planning, data selection, and scene analysis, it is crucial to predict coherence values without undergoing full InSAR processing. For example, coherence predictions can inform scene prioritization, evaluate potential system configurations, and anticipate temporal and spatial coherence variations due to environmental or geometric factors. Moreover, recent advances in SAR image analysis emphasize the importance of coherence as a critical parameter for scene understanding [R2]. However, InSAR coherence estimation is limited by the availability of suitable interferometric data, which may be unavailable for certain bands, locations, or observation times. To address these limitations, we propose a machine learning (ML)-based approach to infer coherence directly from detected SAR backscatter, bypassing the need for complex InSAR processing. This approach provides an efficient and versatile solution for coherence estimation, offering significant advantages for SAR remote sensing applications and mission design by enabling fast, highly accurate predictions without the need for full interferometric processing. 
Previous studies have shown that coherence can be approximated using statistical features of speckle distributions and backscatter intensity [R3], or their derived ratios and products [R4]. However, these methods face challenges in complex terrain, such as mountainous regions, or when preprocessing steps such as despeckling or geometric and radiometric corrections alter the data characteristics. In this study, we present a supervised learning approach using a residual U-Net architecture to infer pixel-wise coherence from geometrically corrected Ground Range Detected (GRD) Sentinel-1 backscatter data. Our model is trained on a dataset of paired backscatter and coherence values and incorporates auxiliary inputs such as land cover maps and a Digital Elevation Model (DEM) to improve predictions in challenging terrains. The proposed method demonstrates robust performance across diverse landscapes, including mountainous regions where traditional methods often fail. [R1] Touzi, R., Lopes, A., Bruniquel, J., & Vachon, P. W. (1999). Coherence estimation for SAR imagery. IEEE Transactions on Geoscience and Remote Sensing, 37(1), 135-149. [R2] Sica, F., Pulella, A., Nannini, M., Pinheiro, M., & Rizzoli, P. (2019). Repeat-pass SAR interferometry for land cover classification: A methodology using Sentinel-1 Short-Time-Series. Remote Sensing of Environment, 232, 111277. [R3] Guarnieri, A. M., & Prati, C. (1997). SAR interferometry: A "quick and dirty" coherence estimator for data browsing. IEEE Transactions on Geoscience and Remote Sensing, 35(3), 660-669. [R4] Aiazzi, B., Alparone, L., Baronti, S., & Garzelli, A. (2003). Coherence estimation from multilook incoherent SAR imagery. IEEE Transactions on Geoscience and Remote Sensing, 41(11), 2531-2539.
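The quantity the model learns to predict is the standard windowed coherence estimator [R1]: the magnitude of the normalised cross-correlation of two co-registered complex SAR images over a local estimation window. A compact sketch on synthetic speckle (window size and scenes are illustrative):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def coherence(s1, s2, win=5):
    """Sample coherence magnitude of two co-registered complex images,
    estimated in win x win boxcar windows ('valid' output size)."""
    num = sliding_window_view(s1 * np.conj(s2), (win, win)).sum(axis=(-2, -1))
    p1 = sliding_window_view(np.abs(s1) ** 2, (win, win)).sum(axis=(-2, -1))
    p2 = sliding_window_view(np.abs(s2) ** 2, (win, win)).sum(axis=(-2, -1))
    return np.abs(num) / np.sqrt(p1 * p2)

# Synthetic circular-Gaussian speckle in place of real SLC data.
rng = np.random.default_rng(0)
s = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
n = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
print(coherence(s, s).mean())        # 1.0: a scene is fully coherent with itself
print(coherence(s, n).mean() < 0.5)  # independent speckle: low coherence
```

Computing this for real data requires the full interferometric chain (co-registration, interferogram formation) on complex SLC products — precisely the processing the proposed backscatter-only inference bypasses.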
Add to Google Calendar

Wednesday 25 June 14:00 - 15:30 (Hall G1)

Presentation: SeasFireBench: A Benchmark for Data-driven Sub-seasonal to Seasonal Wildfire Forecasting

Authors: Ioannis Prapas, Ilektra Karasante, Professor Gustau Camps-Valls, Professor Ioannis Papoutsis
Affiliations: National Observatory Of Athens, National Technical University of Athens, University of Valencia
In contrast to the revolution in short to medium-range weather forecasting [1], long-range forecasting capabilities on the scale of weeks to months ahead lag behind [2]. The advancement of S2S (Subseasonal-to-Seasonal) forecasting skill for weather remains limited, primarily due to the chaotic nature of the atmosphere, and progress has depended mainly on improvements achieved by increasing the number of ensemble members [3]. For wildfires, however, there are indications of enhanced long-term predictability, because they are largely modulated by slow Earth-scale phenomena such as memory effects and teleconnections [4]. For example, extreme wildfires in Siberia have been linked to the previous year's anomalies of the Arctic Oscillation [5]. The availability of global public satellite data from the last three decades allows us to monitor wildfires, as well as their antecedent and coincident conditions [6]. As such, we can frame wildfire forecasting as a machine learning task, associating these conditions with the resulting burned area and learning models to forecast wildfires in a data-driven way. Early studies have revealed that ML-based wildfire forecasts can be more skillful than operational models at short forecasting horizons [7]. At longer forecasting horizons, the performance of machine learning models can be increased by incorporating information from teleconnections [8]. Although some scattered studies have explored machine learning-based wildfire forecasts at S2S scales and shown potential improvements in forecasting skill, these efforts remain fragmented: each study typically employs unique datasets and metrics, limiting their credibility among domain experts and disaster management professionals. Establishing a standardized benchmark is crucial to enabling rigorous performance assessment and building trust in data-driven approaches for wildfire forecasting. In this direction, our work uses the SeasFire cube to define a benchmark for S2S wildfire forecasting.
The SeasFire cube [9] is a global cloud-optimized dataset that contains a diverse set of fire driver variables, together with burned areas and fire emissions. We propose SeasFireBench, the first benchmark for the development and assessment of data-driven models for global S2S wildfire forecasting. SeasFireBench includes (i) a public cloud-optimized ML-ready dataset, (ii) a complete set of metrics to assess model performance spatially and temporally across fire regimes and fire seasons, and (iii) open-source baselines stemming from a range of models (physical and machine learning baselines). As such, SeasFireBench is positioned to advance wildfire modeling on S2S scales.

Acknowledgements: This work is part of the SeasFire project, which deals with "Earth System Deep Learning for Seasonal Fire Forecasting" and is funded by the European Space Agency (ESA) in the context of the ESA Future EO-1 Science for Society Call.

References:
[1] Rasp, Stephan, et al. "WeatherBench 2: A benchmark for the next generation of data-driven global weather models." Journal of Advances in Modeling Earth Systems 16.6 (2024): e2023MS004019.
[2] White, Christopher J., et al. "Potential applications of subseasonal-to-seasonal (S2S) predictions." Meteorological Applications 24.3 (2017): 315-325.
[3] Chen, Lei, et al. "A machine learning model that outperforms conventional global subseasonal forecast models." Nature Communications 15.1 (2024): 6425.
[4] Cardil, Adrián, et al. "Climate teleconnections modulate global burned area." Nature Communications 14.1 (2023): 427.
[5] Forkel, Matthias, et al. "Extreme fire events are related to previous-year surface moisture conditions in permafrost-underlain larch forests of Siberia." Environmental Research Letters 7.4 (2012): 044021.
[6] Pettinari, M. Lucrecia, and Emilio Chuvieco. "Fire danger observed from space." Surveys in Geophysics 41.6 (2020): 1437-1459.
[7] Kondylatos, Spyros, et al. "Wildfire danger prediction and understanding with deep learning." Geophysical Research Letters 49.17 (2022): e2022GL099368.
[8] Prapas, Ioannis, et al. "TeleViT: Teleconnection-driven transformers improve subseasonal to seasonal wildfire forecasting." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[9] Karasante, Ilektra, et al. "SeasFire as a Multivariate Earth System Datacube for Wildfire Dynamics." arXiv preprint arXiv:2312.07199 (2023).
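A benchmark of this kind pairs the dataset with agreed metrics. As a hedged illustration only (the actual SeasFireBench metric suite is defined by the project), a pixel-wise F1 over predicted and observed burned-area masks could look like:

```python
import numpy as np

def burned_area_f1(pred, target, eps=1e-9):
    # pixel-wise F1 between predicted and observed burned-area masks (boolean arrays)
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)
```

In practice such a score would be computed per forecast lead time, fire regime, and fire season to expose where skill degrades.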
Add to Google Calendar

Wednesday 25 June 14:15 - 14:35 (EO Arena)

Demo: A.02.16 DEMO - Interactive calculation of area-wide Farmland Habitat Biodiversity

Land use, its intensity, and management practices shape biodiversity and affect ecosystem functions and services. Farmland habitat use, condition, and quality can serve as proxies for assessing biodiversity, but developing reliable indicators, particularly for monitoring trends in habitat quality, remains challenging. To advance the development of such indicators, a generalized workflow for a farmland habitat biodiversity indicator (FHBI) was proposed by the OECD. Here, agricultural landscape and management diversity is utilized as a proxy for habitat quality. Based on already available data, the FHBI can be derived by assigning habitat quality scores to individual land cover/use classes based on their assumed influence on farmland species diversity. The aggregated indicator aims to enable the assessment of the status and trends of farmland habitat quality.
In this demo session we will present an interactive web-viewer, which enables users to assign individual quality values to land use/cover classes. This makes the calculation highly adaptable to user needs.
Preliminary version of the viewer: https://www.thuenen.de/fhbi-viewer
Introductory talk @ESA Biospace on FHBI background and the underlying data:
https://www.youtube.com/live/e-eQ8XhRrsE?si=6pE3hdK5svCDArTF
(talk starts at: 4:06:10h)
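Conceptually, the aggregation the viewer performs reduces to looking up user-assigned quality scores per land cover/use class and averaging them over the landscape. A minimal sketch, with invented class IDs and scores (the viewer's actual classes and weighting are defined by the FHBI workflow):

```python
import numpy as np

def fhbi_score(landcover, quality):
    # landcover: 2-D raster of integer class ids
    # quality: dict mapping class id -> user-assigned habitat quality in [0, 1]
    lut = np.full(max(quality) + 1, np.nan)   # classes without a score stay NaN
    for cls, q in quality.items():
        lut[cls] = q
    return np.nanmean(lut[landcover])          # aggregate indicator over the landscape

lc = np.array([[1, 1, 2], [2, 3, 3]])          # hypothetical class ids
quality = {1: 0.9, 2: 0.4, 3: 0.1}             # hypothetical user-assigned scores
```

The interactive viewer essentially lets users edit the `quality` mapping, which is exactly the step this sketch parameterizes.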

Speakers:


  • Marcel Schwieder - Thünen Institute
  • Felix Lobert - Thünen Institute

Add to Google Calendar

Wednesday 25 June 14:37 - 14:57 (EO Arena)

Demo: B.02.10 DEMO - Monitoring grassland productivity through remote sensing to provide drought adaptation strategies

Regularly monitoring grassland productivity is essential for the sustainable use of grassland resources and for improving our understanding of how current and predicted extreme events impact productivity, as well as of the adaptations needed in grassland management. This session will showcase tools developed by the Grassland Research and Innovation Lab as part of the ScaleAgData project (partners: IFAPA, EURAC, and Indra-Deimos). This Horizon Europe project aims to upscale real-time sensor data for EU-wide agricultural monitoring, including innovative data collection, promoting data sharing, and demonstrating benefits in precision farming. The Grassland RILab focuses on developing biomass products explicitly tailored to grasslands, with a special focus on providing services to cope with drought impacts. The lab uses ground sensors, data fusion, and modelling to effectively analyze optical and radar satellite data for monitoring biophysical parameters of grasslands. Spatially distributed ground sensor observations, which are seldom available, have been collected to validate biomass and biopars. In this demonstration, we will present: (i) preliminary results of a methodology for estimating gross primary production using an ML solution that incorporates satellite biopars, meteorological information, and soil moisture; this method utilizes data from Sentinel-1, Sentinel-2, Eddy Covariance tower measurements, and a Feedforward Artificial Neural Network model; and (ii) a tool, being developed in collaboration with a large livestock cooperative, that provides distributed information about grassland productivity to a large number of farmers via smartphones. Over the past twenty years, productivity in Mediterranean grasslands has been monitored under various climate conditions. The tool offers quantitative data on productivity and potential stocking rates at the field level under favorable, regular, and drought conditions.
It provides recommendations for adjusting the stocking rate based on the management intensity desired by the farmer, helping to maintain profitability and prevent overgrazing.
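The Feedforward Artificial Neural Network mentioned in (i) maps per-sample features to a GPP estimate. A minimal forward-pass sketch in NumPy (the feature choice, layer sizes, and random weights are illustrative assumptions, not the project's trained model):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    # one hidden ReLU layer mapping input features to a GPP estimate per sample
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

rng = np.random.default_rng(0)
# illustrative inputs per sample: [satellite biopar, meteorological driver, soil moisture]
x = rng.uniform(size=(4, 3))
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
gpp = mlp_forward(x, W1, b1, W2, b2)
```

In practice the weights would be fitted against Eddy Covariance GPP measurements and the inputs standardized; the sketch only shows the shape of such a network.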

Speakers:


  • María Pat González-Dugo - Andalusian Institute for Research and Training in Agriculture and Fisheries, IFAPA
  • Paolo Cosmo Silvestro - INDRA-DEIMOS
Add to Google Calendar

Wednesday 25 June 15:00 - 15:20 (EO Arena)

Demo: D.04.27 DEMO - The Sentinels EOPF toolkit: Notebooks and Plug-ins for using Copernicus Sentinel Data in Zarr format

#zarr

As part of the ESA Copernicus Earth Observation Processor Framework (EOPF), ESA is in the process of providing access to “live” sample data from the Copernicus Sentinel-1, -2 and -3 missions in the new Zarr data format. This set of reprocessed data allows users to try out accessing and processing data in the new format and to experience its benefits within their own workflows.

To help Sentinel data users experience and adopt the new data format, a set of resources called the Sentinels EOPF Toolkit is being developed. Development Seed, SparkGeo and thriveGEO, together with a group of champion users (early adopters), are creating a set of Jupyter Notebooks, plug-ins and libraries that showcase the use of Sentinel data in Zarr for applications across multiple domains and user communities, including users of Python, Julia, R and QGIS.

This demonstration will give a first glimpse of the initial set of notebooks and plug-ins of the Sentinels EOPF Toolkit, which facilitate the adoption of the Zarr data format by Copernicus Sentinel data users. Additionally, we will give an overview of toolkit developments and community activities planned throughout the project period.

Speakers:


  • Julia Wagemann - thriveGEO
  • Gisela Romero Candanedo - thriveGEO
  • Emmanuel Mathot - Development Seed

Add to Google Calendar

Wednesday 25 June 15:22 - 15:42 (EO Arena)

Demo: D.04.31 DEMO - NoR Updates and Road Map - session 3

This session will showcase the current status, major updates, and road map of the ESA Network of Resources initiative.

Speaker:


  • Francesco Barchetta - Starion for ESA
Add to Google Calendar

Wednesday 25 June 15:30 - 16:15 (Nexus Agora)

Session: A.05.13 Delivering Sustained Mode Climate Forcings – the Critical Role of Earth Observations

Climate forcings define the external drivers of ongoing climate change and variability. Up-to-date, accurate, and well-documented forcing datasets are needed to build confidence in model simulations, attribution of recent historical changes, and projections of future climate change. High-quality Earth observations are a critical component of many of the CMIP climate forcing datasets that drive the global model simulations supporting the IPCC, the climate science research community, a large set of operational products, and a wide range of climate service and downstream users.
With this wide and growing user base, and the resulting increase in the relevance of timely climate information, the CMIP community is working with partners across the globe to develop a pathway to sustained-mode delivery of forcing datasets with regular updates. This will require sustained funding, adequate resources, and support for addressing key scientific challenges such as the quantification of uncertainty across the observation-modelling interface.
This agora session will build on the discussions at the "Pathway to regular and sustained delivery of climate forcing datasets" workshop that took place in October 2024 at ECMWF. A panel of climate forcing dataset providers, Earth observation experts and climate service providers will give an overview of current and future observational needs, explain how these needs are being met by the Earth observation community, and identify the priority scientific and organisational challenges. Together with the audience they will then discuss how to make the best use of high-quality Earth observation data to achieve the vision of regular, sustained and robust climate forcing delivery.

Moderator:


  • Eleanor O’Rourke, CMIP International Project Office

Speakers:


  • Helene Hewitt - Met Office
  • Carlo Buontempo - Copernicus Climate Change Services
  • Zebedee Nicholls - Climate Resource/IIASA
  • Jarmo Kikstra - IIASA
  • Maureen Wanzala - World Climate Research Programme
  • Claire MacIntosh - ESA
Add to Google Calendar

Wednesday 25 June 15:30 - 17:00 (ESA Agora)

Session: A.05.08 Europe’s capacity for climate monitoring, prediction and services: the Earth Observation perspective

Leading European institutional partners – the European Commission, ESA, EUMETSAT and ECMWF, collectively referred to as the “4Es” – have made concerted efforts over recent decades to develop a sophisticated and unprecedented ecosystem of programmes that provide climate monitoring and prediction capacities and develop operational, authoritative climate services. Each entity brings a unique role and expertise to the table, resulting in an end-to-end system covering the entire “value chain” for actionable climate data and information.
Space infrastructure, most notably through the European Copernicus programme, funded and implemented by the EU, ESA and EUMETSAT, provides the basis for climate monitoring. Europe has taken a global lead in climate prediction and projection through hosting and supporting the WCRP's Coupled Model Intercomparison Project (CMIP) at ESA, with significant contributions from ECMWF. Both efforts provide key lines of evidence to support a thorough state-of-climate assessment on an annual basis, as done by the EU's Copernicus Climate Change Service (C3S), and on a decadal basis, as provided by the IPCC assessment reports. Based on the EU's (H2020, Horizon Europe), ESA's (Climate Change Initiative) and EUMETSAT's joint R&D efforts, the basis for operational climate services has been built, transitioning research accomplishments into operations. This has particularly been taken forward by the ECMWF-hosted C3S and CAMS and other providers of climate-relevant information, such as the EU's Copernicus Marine and Land services.
On the basis of this “European climate information ecosystem”, significant contributions have been made to address reporting requirements under international climate policies such as the UNFCCC Paris Agreement, leading to climate action.
European partners play an important role internationally in promoting coordination mechanisms for climate, carbon and GHG monitoring. This includes anticipating new priorities, e.g. CO2 emission monitoring in support of the Paris Agreement as promoted through the WMO's GHG Watch, and establishing roadmaps and groups for the space segment, the overall system architecture, and the R&D activities required to support monitoring.
This agora will explore how to respond to emerging priorities as well as new technological opportunities, to further develop the already significant contribution European partners make in an international context to address the imminent climate crisis and its growing challenges.

Agenda:


Part #1 - The European Habitat – Highlights from the 4 Es


  • Tim Lemmens - EC
  • Rune Floberghagen - ESA
  • Jörg Schulz - EUMETSAT
  • Carlo Buontempo - ECMWF
  • Mark Dowell - JRC, EC

Part #2 – Challenges and opportunities – towards the future


  • Dr Pepijn Veefkind - Deputy Head R&D Satellite Observations, Senior Scientist & TROPOMI Principal Investigator, KNMI
  • Fani Kallianou de Jong - Principal Climate Strategy and Delivery (CSD), EBRD
  • Richard Jones - UK Met Office, IPCC author
  • Martin Herold - GFZ, GCOS TOPC chair
  • Federico Fierli - DG-RTD, EC
Add to Google Calendar

Wednesday 25 June 15:30 - 16:15 (Frontiers Agora)

Session: C.02.24 FutureEO, Pioneering world-class Earth Observation for the benefits of society - Status and Future Prospects

This session focuses on ESA’s core Earth Observation research and development programme, FutureEO. Paving the way for future activities, FutureEO harnesses novel ideas to develop pioneering satellites, in turn allowing scientific excellence to flourish. This excellence is critical to addressing the environmental challenges of tomorrow and instrumental in boosting societal and economic resilience. The audience will first hear what the programme entails, how it keeps Europe a world leader in Earth Observation, and what comes next, before a panel discusses the current status and trends relevant to the programme.

Panellists:


  • Prof. Kathy Whaler - The University of Edinburgh
  • Philippe Martimort - ESA
  • Dirk Bernaerts - ESA
  • Anja Stromme - ESA
  • Inge Jonckheere - ESA
  • Malcolm Davidson - ESA
  • Gerd-Jan van Zadelhoff - KNMI
  • Charles Gallard - ASD Eurospace
  • Nicki McGoh - Caribou
  • Christoph Aubrecht - ESA
  • Maria Piera Padula - ESA
Add to Google Calendar

Wednesday 25 June 15:45 - 16:05 (EO Arena)

Demo: F.01.16 DEMO - Education & Professional Development Platform - Session 1

The Education & Professional Development (EPD) Platform of the Space Generation Advisory Council (SGAC) serves as a comprehensive resource for students and young professionals aiming to embark on or advance their careers in the space sector. Recognizing the challenges posed by the vastness and complexity of space-related disciplines, the EPD Platform offers tailored initiatives to bridge knowledge gaps and foster professional growth.

One of the cornerstone initiatives is the SpaceGen Academy, an e-learning platform that provides accessible and high-quality educational content on various space topics. This academy ensures that members can acquire foundational and advanced knowledge at their own pace, irrespective of their geographical location.

Complementing the academy is the Mentoring Committee, which facilitates personalized guidance by pairing members with experienced mentors in the industry. This mentorship program is designed to offer insights, advice, and support, thereby enhancing the mentees' professional trajectories.

The Career Development Platform is another pivotal component, offering a curated list of job postings, internships, and other career opportunities worldwide. This platform acts as a bridge between employers seeking fresh talent and SGAC members ready to contribute their skills and passion to the space sector.

To stimulate innovation and practical application of knowledge, the ACHIEVED Competition encourages members to engage in original and inventive mission designs. This competition not only fosters creativity but also provides participants with real-world challenges that hone their problem-solving skills.

For those seeking structured training, the ACHIEVED Academy offers courses in Space Systems Engineering, equipping members with the technical expertise required to excel in the industry.

Through these initiatives, the EPD Platform exemplifies SGAC's commitment to nurturing the next generation of space professionals, ensuring they are well-prepared to meet the evolving demands of the global space community.

Speakers:


  • Nikol Koleva - Executive Director, SGAC
  • Tatiana Komorná - Operations Officer, SGAC
  • Marcos Rojas - Education & Professional Development (EPD) Coordinator, SGAC
  • Antonino Salmeri - SGAC
Add to Google Calendar

Wednesday 25 June 16:07 - 16:27 (EO Arena)

Demo: D.05.08 DEMO - EvoLand: New methods for the next generation of Copernicus Land Monitoring

EvoLand is developing and testing new and innovative methods in support of the evolution of the Copernicus Land Monitoring Service. It integrates novel Earth Observation, in-situ data and the latest Machine Learning techniques to continuously monitor the status, dynamics and biomass of the land surface.

The project focuses on five key thematic domains: agriculture, forest, water, urban, and general land cover. Across these themes it is developing 11 prototype services that could potentially become part of the future CLMS baseline. These will be operationally benchmarked and qualified as candidate CLMS services at TRL 5-7, with the potential to be taken up and integrated into CLMS by the Entrusted Entities from 2025 onwards. More information on the prototypes is available at https://www.evo-land.eu/clms-prototypes

This demonstration will give an overview of the 11 prototype services, showing how they improve on existing CLMS services and how this helps CLMS support future data needs for policy monitoring and reporting. Initial versions of all prototypes are already available, and feedback from the community is encouraged. The demo will be performed using the EvoLand results portal.

EvoLand is funded by the European Union through the Horizon programme, and is undertaken by a consortium of 10 European EO companies coordinated by VITO.

Instructor:


  • Phillip Harwood
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall N1/N2)

Session: C.05.05 The German EnMAP Mission: 3 Years of hyperspectral data - From Science to Environmental Applications

The German hyperspectral satellite EnMAP has now been in orbit for three years, acquiring data at an increasing pace. User feedback collected during these years at conferences and workshops is extremely positive: the EnMAP HSI instrument generates data of quality beyond expectations. During this time of operations, the ground segment team, supported by the space segment and in collaboration with the science segment, gradually enhanced the mission's proficiency. They implemented new features and fine-tuned the observing strategy, optimizing the exploitation of the satellite's capabilities. Moreover, they further developed the processing pipeline, continuously improving the quality of the derived products.
We will give an overview of the status of operations including EnMAP observation strategy and synergies with other hyperspectral missions. A special focus will be on the current science strategy of the mission and successful application examples from science and industry.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall N1/N2)

Presentation: How do hybrid retrieval schemes cope with hyperspectral time-series of managed landscapes? - Deriving seasonal dynamics of crop development by quantifying CNC and NPV via Spectroscopy in Central Valley, California

Authors: Stefanie Steinhauser, Dr. Akpona Okujeni, Dr Patrick Hostert, Prof. Tobias Hank
Affiliations: Ludwig-Maximilians-Universität München (LMU), Humboldt-Universität zu Berlin (HU), German Research Centre for Geosciences (GFZ)
Accurate temporal and spatial information on crop status is essential for sustainable and efficient agricultural management. Precise quantitative knowledge, e.g. on nutrient uptake and/or carbon cycling, not only enhances soil health but also reduces the economic and environmental costs associated with inefficient fertilization, saving operating resources while avoiding emissions. Overuse of nitrogen fertilizers contributes to environmental issues such as non-point-source water pollution and greenhouse gas emissions, while undersupply results in poor crop yields in both quantity and quality, posing a threat to global food security. To achieve optimal management strategies, spatial measurements derived from Earth Observation (EO) are of high value. Strong correlations between chlorophyll content and, e.g., canopy nitrogen content (CNC) exist during the vegetative growth stages and are widely used in multispectral contexts to provide guidance for crop management in spring. However, using chlorophyll as a proxy becomes challenging once the plant has reached the reproductive stage, where pigments are gradually decomposed during senescence and the chlorophyll-nitrogen correlation becomes invalid for nutrient monitoring. The same limitations apply to the discrimination of crop residues, such as non-photosynthetically active vegetation (NPV), from bare soil backgrounds, which cannot be achieved using proxies but must be resolved in the shortwave infrared (SWIR). Narrow-band contiguous spectroscopy is the only measuring technique that provides access to the subtle SWIR absorptions of protein-based constituents (important for calculating nitrogen use efficiency) and carbon-based constituents (important for balancing the carbon budget in arable production systems).
Spatial mapping of these crucial variables finally becomes feasible thanks to spectroscopy recently making the jump to spaceborne sensors, which allow measuring different agricultural systems based on physical principles that can be implemented in a sensor-agnostic way. One example of a spaceborne hyperspectral EO system is the Environmental Mapping and Analysis Program (EnMAP). Dedicated tools have been developed and implemented in the free and open EnMAP-Box software, which allow combining the physical relations between reflectance signatures and biophysical/biochemical crop status with computationally efficient Machine Learning (ML) regression algorithms as part of so-called hybrid retrieval schemes. The EnMAP Hybrid Retrieval Workflow (EnHyR), for example, comprises a selection of physically based, invertible Radiative Transfer Models (RTMs, e.g. PROSAIL-PRO) that can be combined with various ML regression algorithms (e.g. Gaussian Process Regression, GPR) to perform hybrid variable mapping independently of the availability of in-situ data for model training. In-situ datasets are difficult to obtain due to high costs, time constraints, and environmental factors (e.g. cloud cover). By simulating a large number of spectra-parameter combinations, EnHyR enables compiling a comprehensive and extensive database without relying on in-situ measurements. The rich simulated training database can be optimized using different dimensionality reduction schemes, e.g. band selection or principal component analysis (PCA), and is subsequently streamlined using active learning (AL) heuristics, thus providing meaningful seed data for the different regression algorithms. For this study, the original training database consists of a Look-Up Table (LUT) of 2000 members derived from the PROSAIL-PRO model.
To avoid interference with strong signals that affect vegetation reflectance across the full spectral range, such as the leaf area index (LAI), a strict band selection was performed, retaining only the SWIR range (1400-2500 nm), which responds to the absorption coefficients of protein- and carbon-based constituents. The database is optimized using pool-based AL. In this process, redundancies are effectively removed, adequately preparing the training data for use with probabilistic regression algorithms such as GPR. To better tailor the training data to the two target variables (CNC and NPV), the AL process is supported by in-situ data collected in the MNI test area near Munich, Southern Germany, over several growing seasons during the past seven years. With spaceborne hyperspectral instruments only recently emerging, so far only snapshot observations exist of the intensively sampled test areas in central Europe. This mainly applies to the EnMAP mission, where competition for the limited observation slots over central Europe is high. The EnHyR procedure has nonetheless already been shown to return realistic quantities of CNC and NPV for airborne experimental validation datasets as well as for selected EnMAP scenes. With EnMAP now approaching three years in space, first time-series of high spectral resolution and high SNR images are starting to emerge, although not yet for central Europe. The preparation for future, more densely observed time-series of multi-sensor hyperspectral data, e.g. from CHIME and SBG, leads to two overarching research questions: 1. Are the EnHyR algorithms (in addition to successfully quantifying CNC and NPV for single observations) also capable of delineating meaningful seasonal dynamics of these two crucial variables? 2. How large are the effects of different sun-sensor-target geometries in heterogeneous time-series acquired from extremely agile platforms such as EnMAP?
To answer these questions, the trained models for CNC and NPV are applied to one of the first comprehensive EnMAP time-series, recorded over the Central Valley, California, USA. From May to November 2024, monthly acquisitions over the Central Coast, including the Central Valley, were collected with reasonable cloud cover, aiming at close-to-nadir observations (-4° to +3°). Land use in the Central Valley covers a diverse range of vegetables, fruits, and crops. This diversity of arable land use provides ample opportunities to demonstrate not only the spatial and temporal transferability of the hybrid GPR models but also their adaptability to seasonal dynamics and heterogeneous illumination conditions. The work is ongoing; first results are presented and the challenges encountered are put up for discussion with the scientific community.
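The hybrid retrieval idea, training a regressor on RTM-simulated spectra-parameter pairs instead of field data, can be sketched with a toy forward model and a pure-NumPy Gaussian-process posterior mean. The two-band "forward model", kernel length-scale, and noise level are invented for illustration (PROSAIL-PRO itself and the AL step are beyond this sketch):

```python
import numpy as np

def gpr_predict(X, y, Xq, length=0.3, noise=1e-3):
    # Gaussian process posterior mean with an RBF kernel (minimal sketch)
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    alpha = np.linalg.solve(k(X, X) + noise * np.eye(len(X)), y)
    return k(Xq, X) @ alpha

# toy LUT: simulate "spectra" (2 bands) from the variable to be retrieved
rng = np.random.default_rng(1)
v_train = rng.uniform(0.0, 1.0, 200)                               # e.g. a CNC-like quantity
X_train = np.stack([np.exp(-v_train), 0.5 * v_train ** 2], axis=1)  # invented forward model
v_test = np.linspace(0.1, 0.9, 9)
X_test = np.stack([np.exp(-v_test), 0.5 * v_test ** 2], axis=1)
v_hat = gpr_predict(X_train, v_train, X_test)
```

The point of the hybrid scheme is visible even in this toy: the regressor never sees in-situ samples, only simulated spectra-parameter pairs, yet it inverts the forward model at query spectra.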
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall N1/N2)

Presentation: A Dynamic & Automated Map for Monitoring of Sugar Cane Cultivations in Smallholder Regions of Western Kenya – Leveraging Hyperspectral EnMAP Imagery in the Cloud with Geo Engine

Authors: Leander Leist, Dr. Christian Beilschmidt, Dr. Jöhannes Drönner, Michael Mattig, Prof. Jörg Bendix
Affiliations: University of Marburg, Department of Geography, Geo Engine GmbH
Kenya's economy and food security largely depend on smallholder agriculture, which employs ~ 54% of the Kenyan population. Sugarcane is one of Kenya's most important cash crops. It is grown on larger farms and by smallholder farmers on parcels with an average size of ~0.4 acres. Sugarcane's status as a cash crop facilitates farmers engaging with contractors for trade, thereby gaining access to broader markets, material, knowledge, and credit. This dynamic harbors the potential for raising living standards and providing plannable income which is important for individual economic stability. However, many farmers are not yet engaging in such contracting. Additionally, sugarcane is often grown alongside other crops or only semi-regularly. These circumstances create situations where sugarcane shortages occur in some places while sugarcane is available in excess elsewhere. Sugarcane plantations also require sufficient irrigation throughout the growing season, making the crop susceptible to climate extremes (droughts), thereby putting farmers' livelihoods at risk. Challenge Remote sensing-based mapping of sugarcane with e.g. multispectral imagery can produce relatively high accuracies for large homogenous farms, however, with smaller farm sizes, performance decreases due to three main factors. First, as farm sizes decrease, spectral mixing occurs, hindering discrimination. Second, when Sugarcane is grown alongside natural swamp vegetation, the risk of overprediction increases due to spectral and structural similarities. Third, varying planting and harvesting dates as well as the variability of irrigation cause ambiguity in spectral space, impeding classification. 
Aim The CropHype project, as a collaborative effort between the University of Marburg’s Laboratory of Climatology and Remote Sensing and the Geo Engine GmbH aims to provide accurate crop-type maps for Kenyan smallholder agricultural regions, taking the region's specific spectral, spatial, and temporal characteristics into consideration. Such crop-type maps will serve as a foundational tool for enhancing food and economic security among smallholder farmers, directly contributing to the achievement of the United Nations Sustainable Development Goals. In this work, we present a dynamic sugar cane map, based on EnMAP hyperspectral data and operationalized via a machine learning model using the Geo Engine platform. Methods We chose Hyperspectral EnMAP data for classification because its exceptionally high discriminatory information allows tackling the three abovementioned shortcomings of classical multispectral approaches. Reference data was mapped by the local startup AgriBORA in northwestern Kenya across three growing seasons. We use raw spectra and transformations like derivatives and latent variables from Partial Least Squares Discriminant Analysis to extract highly expressive, temporally stable predictors for sugarcane classification. The predictors are selected via feature importance analyses to enable model explainability. The multi-seasonal reference data and a large set of EnMAP acquisitions enable robust training and validation. We employ an XGBoost model to perform in-season predictions on single EnMAP acquisitions. The predictions are made upon ingestion of new EnMAP imagery into the Geo Engine platform, which is hosted in DLR's EO-Lab cloud. The latter enables automatic ingestion, preprocessing, and classification as new EnMAP acquisitions become available as the season progresses. The data is automatically procured for end-user interpretation without placing complex or computationally heavy workloads on their local equipment. 
Results and presentation format: Our classification approach demonstrates substantial improvements in sugarcane classification, tackling the specific challenges encountered with other data sources and methodologies through the use of EnMAP hyperspectral data. The oral presentation will introduce the scenario, motivation, and methodology, and will also include a demonstration of the Geo Engine, showcasing its interactive capabilities for handling and interpreting results.
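As a rough illustration of the feature-engineering step described above (raw spectra plus derivatives, followed by a classifier), the sketch below builds derivative-augmented features from synthetic spectra and separates two toy classes. The class shapes, noise levels, and the nearest-centroid rule standing in for the XGBoost model are all invented for the example.

```python
import numpy as np

def spectral_features(spectra, wavelengths):
    """Stack raw spectra with their first spectral derivatives."""
    d1 = np.gradient(spectra, wavelengths, axis=1)
    return np.hstack([spectra, d1])

# two synthetic "classes" of smooth spectra (shapes and noise invented)
rng = np.random.default_rng(0)
wl = np.linspace(420, 2450, 200)
class_a = np.exp(-((wl - 800) / 400) ** 2) + 0.01 * rng.standard_normal((20, 200))
class_b = np.exp(-((wl - 1100) / 400) ** 2) + 0.01 * rng.standard_normal((20, 200))

X = spectral_features(np.vstack([class_a, class_b]), wl)
y = np.array([0] * 20 + [1] * 20)

# nearest-centroid rule as a tiny stand-in for the XGBoost classifier
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(axis=-1), axis=1)
accuracy = float((pred == y).mean())
```

Derivative features of this kind are one common way to stabilise classification against brightness differences between acquisitions, which is part of what makes multi-date prediction feasible.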

Wednesday 25 June 16:15 - 17:45 (Hall N1/N2)

Presentation: EnMAP Mission Status Overview

Authors: Dr. rer. nat. Laura La Porta, Prof. Dr. Sabine Chabrillat, Dr. Emiliano Carmona, Dr. Katrin Wirth, Dr. Nicole Pinnel, Dr. Sabine Baumann, Vera Krieger, Michael Bock, Dr. Miguel Pato, Daniel Schulze, Dr. Karl Segl, Dr. Akpona Okujeni, Rupert Feckl
Affiliations: German Space Agency, DLR
The Environmental Mapping and Analysis Program (EnMAP) is a German satellite launched on 1 April 2022. The satellite carries a hyperspectral instrument that samples about 200 spectral bands, from 420 nm to 1000 nm (VNIR) and from 900 nm to 2450 nm (SWIR). EnMAP has now been in orbit for more than two years and is acquiring data at an increasing pace. User feedback, collected during these years at conferences and workshops dedicated to hyperspectral imagery, is extremely positive: the EnMAP HSI instrument generates data of a quality beyond expectations. Thanks to the experience gained during operations, the Ground Segment team, supported by the Space Segment and in collaboration with the Science Segment, has gradually enhanced the mission's proficiency. They implemented new features and fine-tuned the observing strategy, thus optimizing the exploitation of the satellite's capabilities. Moreover, they further developed the processing pipeline, improving the quality of the derived products. In 2024 the EnMAP team joined forces with the PRISMA team to acquire data in parallel with both satellites, and also started collaborating with other flying hyperspectral missions to generate more and more match-ups. We will give an overview of the health state of the satellite and of the onboard resources (e.g. fuel). We will present the highlights of two and a half years of routine operations, which in 2024 ran almost uninterrupted and almost doubled the amount of data stored in the EnMAP archive. We will present statistics about the users (e.g. nationalities, research areas) and the acquired data (e.g. geographical coverage) and address the platforms from which the data may be retrieved. We will present the activities to support the development of commercial use cases for EnMAP data. We will conclude with an outlook on the plans and goals of the mission for the coming years.

Wednesday 25 June 16:15 - 17:45 (Hall N1/N2)

Presentation: EnsAD: EnMAP Satellite-based Algae Detection for Copernicus and Downstream Services

Authors: Dagmar Müller, Johannes Timm, Marcel König, Shun Bi, Dr Rüdiger Röttgers, Dr Martin Hieronymi, Jorge Garcia, Ana Ruescas, Kerstin Stelzer, Carsten Brockmann, Eefke van der Lee, Annika Grage, Karin Heyer
Affiliations: Brockmann Consult Gmbh, HEREON, University of Valencia, Federal Maritime and Hydrographic Agency BSH
The EnsAD project aims at further developing methods from preparatory studies for algae group recognition, opening up potential uses of hyperspectral satellite products for water quality assessment, and preparing for operational use with the Copernicus mission CHIME and the NASA mission PACE. The developed method must be robust and flexible enough to address both coastal and inland waters and to provide demonstration products for EnMAP data. National reporting and information services concerning water quality and harmful algal blooms profit from enhanced algae group detection, e.g. of cyanobacteria. The complete processing chain will be designed to feed into ecosystem model assimilation, starting from hyperspectral data and proceeding via an optical algae group product. Biogeochemical forecasts can be improved by assimilating satellite-based information on algae groups. Processing of hyperspectral water-leaving reflectance into phytoplankton optical types is solved by combining an extensive bio-optical water model [Bi et al., 2023, doi:10.3389/fmars.2023.1196352; Gege, 2013, doi:10.1016/j.cageo.2013.07.022] with a machine learning approach. Hyperspectral data (water-leaving reflectance spectra (400-750 nm) from EnMAP L2A v1.04.02 at 30 m and PACE L2A v2 at 1 km spatial resolution) of the North Sea and Baltic Sea are sampled and extracted per optical water type (OWT; [Bi & Hieronymi, 2024, doi:10.1002/lno.12606]). The full inversion of these spectra with the bio_optics framework [König et al., 2024, doi:10.5281/zenodo.10246860] provides concentrations of eight phytoplankton groups, non-algal particles (NAP), and CDOM absorption at 440 nm. After in-depth analysis and filtering, the spectra are reduced in dimensionality by principal component analysis (PCA), taking OWTs into account. For application to entire scenes, a machine learning model is trained to derive the in-water properties from the PCA loadings and continuous OWT parameters.
The processing workflow includes the same steps as applied to the training data: specifying OWT parameters, reducing spectral dimensions using the given principal components, and applying the machine learning model. Depending on the OWT, different sets of phytoplankton groups are allowed in the inversion. For example, the two cyanobacteria groups are only included for Baltic Sea spectra, and the chlorophyta group is restricted to very coastal or inland waters. The brown group (mainly diatoms), dinoflagellates, and cryptophytes are calculated everywhere, while the occurrence of coccolithophores is restricted to one OWT class in the North Sea. The phytoplankton group product from EnMAP data can potentially reveal the high spatial variability of these groups. EnMAP water-leaving reflectances are challenging to analyse as they are affected by strong spectral noise. The PCA shows that the noise is at least partially random, leading to smoother reconstructed spectra. Our approach of inherent forward modelling allows the inversion of spectrally noisy water-leaving reflectance spectra, because the expected spectral shapes of phytoplankton absorption and scattering are preset. The membership sum of the OWT classification is an effective filter for selecting reasonable water spectra and for removing haze and semitransparent clouds that are not identified by the EnMAP cloud screening. The machine learning inversion differentiates between the optical algae classes quite well. Concentrations of the different optical algae groups are calculated within reasonable ranges; the spatial patterns of the groups need further study and validation. Assimilation of phytoplankton groups in an ecosystem model requires hyperspectral satellite data at a larger scale and with higher temporal coverage, which the NASA PACE mission can provide. With a spatial resolution of 1 km, the study area of the North Sea and Baltic Sea is covered by PACE almost daily.
We processed PACE OCI data in a similar manner to the EnMAP data and built a database of daily phytoplankton optical group concentrations for ingestion into the ecosystem model. The released OCI remote sensing reflectances are currently provisional products (version 2), which already demonstrate the benefits for large-scale applications. The optical algae group concentrations, as well as SST, are directly assimilated into a predictive model. Traditional ecosystem models often rely on very few generic phytoplankton groups, which can obscure the nuanced details within plankton communities. In order to match the optical algae groups discernible by hyperspectral satellites, the number of phytoplankton groups within the biogeochemical model had to be increased, raising the computational cost. These plankton groups form an extended 3N8P2Z1D version of the operationally used HBM-PDAF-ERGOM model (Brüning et al., 2021, doi:10.23784/HN118-01). Our methodology utilizes the Local Error Subspace Transform Kalman Filter (LESTKF) of the PDAF framework (Nerger et al., 2012, doi:10.1175/MWR-D-11-00102.1) to assimilate near-real-time plankton group concentrations into model estimates of optical algae group distributions. By directly assimilating concentrations, we enhance the model's ability to capture the dynamics of these plankton groups individually, leading to improved predictions. The initial conditions for the phytoplankton are derived from typical concentrations in the optical algae group product, followed by a spin-up period. However, this innovative approach presents significant challenges, particularly concerning the long-term behaviour of the models and their computational footprint. The interactions among the various plankton groups are complex and not fully understood, which raises uncertainties in model projections and parameters. Validation of the split into phytoplankton optical groups is still ongoing.
In-situ measurements of algae groups and water-leaving reflectances are sparse, and match-ups with EnMAP overpasses are even sparser. Instead of a quantitative analysis, we therefore focus on qualitative comparison with abundance maps from a recurring summer campaign in the North Sea conducted by BSH and with the climatological abundance data provided by the Continuous Plankton Recorder. An intercomparison between the PACE-OCI phytoplankton group product and our approach is also planned. With further development of the atmospheric corrections for both EnMAP and PACE-OCI, the exploitability of the spectral information will hopefully increase, and the spectral and spatial noise in the phytoplankton group products will decrease accordingly. We advocate for continued interdisciplinary collaboration to enhance our understanding of plankton dynamics and improve the reliability of hyperspectral remote sensing applications in marine ecology. This research not only contributes to the field of remote sensing but also provides new insights for managing marine resources.
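The dimensionality-reduction-plus-inversion chain described above (PCA on reflectance spectra, then a learned mapping from PCA scores to in-water properties) can be sketched in a minimal, self-contained form. The endmember shapes, the noise level, and the plain linear regression standing in for the machine learning model are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(400, 750, 120)
n_samples = 300

# synthetic water-leaving reflectance spectra: mixtures of two invented
# pigment-like endmember shapes plus random spectral noise
em = np.stack([np.exp(-((wl - mu) / 60) ** 2) for mu in (480, 620)])
conc = rng.uniform(0, 1, (n_samples, 2))       # "phytoplankton group" abundances
R = conc @ em + 0.02 * rng.standard_normal((n_samples, len(wl)))

# PCA via SVD: project the noisy spectra onto the 5 leading components,
# which suppresses part of the random spectral noise
Rm = R - R.mean(axis=0)
U, s, Vt = np.linalg.svd(Rm, full_matrices=False)
scores = Rm @ Vt[:5].T

# plain linear regression as a stand-in for the ML inversion: scores -> abundances
A, *_ = np.linalg.lstsq(scores, conc - conc.mean(axis=0), rcond=None)
pred = scores @ A + conc.mean(axis=0)
rmse = float(np.sqrt(((pred - conc) ** 2).mean()))
```

Because random noise spreads across many trailing components, truncating to a few leading PCs yields smoother reconstructed spectra, which is the effect the abstract notes for the EnMAP data.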

Wednesday 25 June 16:15 - 17:45 (Hall N1/N2)

Presentation: EnMAP Science Activities 3 Years After Launch: Highlights

Authors: Sabine Chabrillat, Akpona Okujeni, Karl Segl, Daniel Scheffler, Kathrin Ward, Saeid Asadzadeh, Alexander Khokanovsky, Robert Milewski, Arlena Brosinsky, Theodora Angelopoulou, Astrid Bracher, Najoro Randrianalisoa, Tobias Hank, Stefanie Steinhauser, Benjamin Jakimow, Andreas Janz, Nicole Pinnel, Emiliano Carmona, Vera Krieger, Michael Bock, Laura La Porta
Affiliations: GFZ German research center for Geosciences and LUH Leibniz University Hannover, GFZ German research center for Geosciences, Alfred Wegener Institute and University Bremen, Alfred Wegener Institute, Ludwig-Maximilian University, Humboldt-University Berlin, Earth Observation Center DLR, German space agency DLR
The Environmental Mapping and Analysis Program (EnMAP) satellite mission was launched on 1st April 2022, designed to make a significant contribution to the availability of high-quality spaceborne imaging spectroscopy products for the characterization and monitoring of the Earth's environment and its changes. The EnMAP mission stands out for its sophisticated design with on-board calibration, high data quality, long-term development, and a strong accompanying science program for preparation and exploitation of the mission. EnMAP is led by the German Space Agency at the German Aerospace Center (DLR); the space segment was built by OHB, the ground segment is led by the Earth Observation Center at DLR, and the German Research Center for Geosciences (GFZ) leads the science segment. After a successful short commissioning phase, EnMAP entered into operation in November 2022. The EnMAP mission encompasses global coverage from 80° N to 80° S based on user demands. The freely accessible data are characterized by 30 m spatial resolution, high spectral resolution with a spectral sampling distance of 6.5 nm and 10 nm in the VNIR and SWIR regions respectively, and high signal-to-noise ratio. The primary scientific goals of the hyperspectral EnMAP mission are to study environmental changes, investigate ecosystem responses to human activities, and monitor the management of natural resources. The EnMAP scientific exploitation and support program in the satellite operational phase focuses on the demonstration of scientific achievements for various cutting-edge environmental applications of benefit to society today. Emphasis is also placed on the external validation of EnMAP supported by the EnSAG and international validation teams from the EnMAP science community, as well as on developments in open-source software, algorithms, and educational tools.
In this presentation, we present the results of the external EnMAP validation and highlight the science potential of EnMAP across various environmental conditions, including high and low light levels, dense forests, Antarctic glaciers, inland and coastal waters, deserts, and agricultural areas. This enables the use of satellite-based imaging spectroscopy data for applications in fields such as the retrieval of vegetation traits, mapping of soil composition, raw materials exploration, methane emission detection, assessment of water quality, and snow and ice properties. Overall, EnMAP's performance exceeds the mission requirements, and significant potential for scientific exploitation in various geo- and bio-fields is demonstrated in support of upcoming EU Copernicus services. Further, these advances shall support the establishment of synergies and collaboration with concordant imaging spectroscopy missions, and support the development of open benchmark datasets and algorithms as key long-term objectives, which can serve as precursors for future missions like CHIME and SBG.

Wednesday 25 June 16:15 - 17:45 (Hall N1/N2)

Presentation: Leveraging EnMAP hyperspectral time series for monitoring fire-prone ecosystems

Authors: Akpona Okujeni, Shawn Schneidereit, Kushnerniva Laurent, Dr Patrick Hostert
Affiliations: Helmholtz Center Potsdam, Gfz German Research Center For Geosciences, Potsdam, Germany, Humboldt-Universität zu Berlin, Geography Department, Earth Observation Lab, Germany
Earth Observation is advancing rapidly, with hyperspectral satellite missions unlocking novel possibilities for time series analyses of ecosystems worldwide. Vegetation monitoring in particular benefits from high spectral resolution data acquired regularly throughout the season, enabling detailed descriptions of vegetation types and providing insights into vegetation conditions over time. Current scientific missions like EnMAP and PRISMA provide the opportunity to showcase the potential of hyperspectral time series, paving the way for future operational hyperspectral missions such as ESA's CHIME and NASA's SBG. These upcoming missions are expected to significantly enhance global vegetation monitoring, helping to better understand and measure environmental processes and the impacts of global change. This study provides an overview of recent advancements in hyperspectral remote sensing, focusing on time series analyses for mapping and monitoring vegetation in fire-prone Mediterranean ecosystems. We successfully retrieved near-monthly EnMAP time series over two consecutive years for three study sites in California. These sites represent a diverse range of ecosystems and ecosystem transitions, comprising forests, open woodlands, shrublands, and grasslands, as well as urban and agricultural areas. Our analyses highlight the value of EnMAP hyperspectral time series and spectral unmixing modeling for mapping plant life-form fractions and tracking changes in plant life-form conditions through fractional cover estimates of green and non-photosynthetic vegetation. By integrating hyperspectral-temporal metrics and generalizing spectral mixture models over time, our analyses underscore the growing potential of hyperspectral time series for enhanced vegetation mapping and monitoring. The derived plant life-form maps and condition time series reveal patterns of vegetation composition and seasonality across ecosystems.
Reference data, combining information from visual interpretation of high-resolution imagery from Google Earth and field surveys, demonstrates the accuracy and quality of the results. Overall, this work contributes to advancing the field of satellite-based hyperspectral time series applications, with a particular focus on monitoring fire-prone ecosystems. EnMAP enables analyses that were previously not feasible, providing a valuable foundation for future applications, as the use of satellite-based hyperspectral time series continues to grow. Future work will focus on improving temporal resolution and spatial coverage by integrating data from additional hyperspectral missions, including PRISMA and EMIT. With ongoing efforts to expand this approach beyond California’s fire-prone ecosystems, we aim to establish a framework for hyperspectral time series analysis and vegetation monitoring globally.
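The fractional-cover idea behind the spectral unmixing mentioned above can be illustrated with a minimal linear mixing model: a pixel spectrum is modelled as a weighted sum of endmember spectra, and the weights (cover fractions) are recovered by least squares. The three endmember shapes and the unconstrained solver are stand-ins chosen for the example; operational unmixing would typically add non-negativity and sum-to-one constraints.

```python
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(420, 2450, 150)

# hypothetical endmember spectra (columns): green vegetation, NPV, soil
E = np.column_stack([
    np.exp(-((wl - 550) / 300) ** 2),   # "green vegetation" reflectance shape
    0.3 + 0.0002 * (wl - 420),          # "non-photosynthetic vegetation"
    0.1 + 0.0003 * (wl - 420),          # "soil"
])

true_frac = np.array([0.5, 0.3, 0.2])   # cover fractions mixed into one pixel
pixel = E @ true_frac + 0.002 * rng.standard_normal(len(wl))

# unconstrained least-squares unmixing of the pixel spectrum
frac, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

The high spectral dimensionality is what makes spectrally similar endmembers (e.g. non-photosynthetic vegetation versus soil) separable at all; with only a few broad bands, the corresponding least-squares problem becomes much more ill-conditioned.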

Wednesday 25 June 16:15 - 17:45 (Room 0.14)

Session: A.03.08 Terrestrial Carbon Cycle Assimilation System - PART 2

The Terrestrial Carbon Community Assimilation System (TCCAS) is designed around the newly developed D&B terrestrial biosphere model (https://doi.org/10.5194/egusphere-2024-1534). D&B builds on the strengths of its two component models, DALEC and BETHY, in that it combines the dynamic simulation of the carbon pools and canopy phenology of DALEC with the dynamic simulation of water pools, and the canopy model of photosynthesis and energy balance of BETHY. Both component models have a long track-record of successful data assimilation applications. TCCAS includes a suite of dedicated observation operators that allows the simulation of solar-induced fluorescence (SIF), fraction of absorbed photosynthetically active radiation (FAPAR), vegetation optical depth from passive microwave sensors, and surface layer soil moisture. The model is embedded into a variational assimilation system that adjusts a control vector to match the observational data streams. For this purpose TCCAS is provided with efficient tangent-linear and adjoint code. The control vector consists of a combination of initial pool sizes and process parameters in the core model and in the observation operators.
TCCAS and D&B are open source developments. The session will provide a combination of presentations and hands-on exercises to participants with an interest in applying the systems for their research or their teaching. Topics covered include: The terrestrial carbon cycle, D&B, observations and observation operators, fundamentals of data assimilation and their implementation in TCCAS.
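A toy version of the variational scheme described above (not the TCCAS code itself): a quadratic cost J(x) = 0.5*(x-xb)' B^-1 (x-xb) + 0.5*(Hx-y)' R^-1 (Hx-y) is minimised by gradient descent, with the transpose of a linear observation operator playing the role of the adjoint. All matrices and numbers are invented for the example.

```python
import numpy as np

# toy 3D-Var: control vector of size 2, three observations (all values invented)
B = np.diag([1.0, 0.5])                              # background error covariance
R = np.diag([0.1, 0.1, 0.2])                         # observation error covariance
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # linear observation operator
xb = np.array([0.2, 0.8])                            # prior (background) control vector
y = np.array([0.5, 0.6, 1.2])                        # observations

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def grad(x):
    """Gradient of J; H.T acts as the adjoint, mapping residuals to control space."""
    return Binv @ (x - xb) + H.T @ Rinv @ (H @ x - y)

x = xb.copy()
for _ in range(500):          # plain gradient descent stands in for a real minimiser
    x = x - 0.02 * grad(x)

# analytic minimiser of the quadratic cost, for comparison
xa = np.linalg.solve(Binv + H.T @ Rinv @ H, Binv @ xb + H.T @ Rinv @ y)
```

In TCCAS the observation operators are nonlinear and the control vector is large, which is why machine-generated tangent-linear and adjoint code, rather than a hand-written gradient, is essential for efficiency.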

Speakers:


  • M. Drusch
  • T. Kaminski
  • W. Knorr
  • T. Quaife
  • P. Rayner
  • M. Scholze
  • L. Smallmann
  • M. Voßbeck
  • M. Williams
  • Sönke Zaehle
  • Songyan Zhu

Wednesday 25 June 16:15 - 17:45 (Room 0.94/0.95)

Session: C.02.05 MAGIC – Preparing for the ESA-NASA satellite gravity constellation - PART 2

ESA and NASA have discussed collaboration on satellite gravity missions for more than a decade, both by engaging the community (e.g. GOCE Workshop, Munich 2011) and in dedicated discussions under the JPPG (ESWG, 2015). The two agencies recently co-signed a Joint Statement of Intent for cooperation on the MAss Change and Geosciences International Constellation (MAGIC). MAGIC is the jointly developed concept for collaboration on future satellite gravity observations that addresses the needs of the international user community and stakeholders; it consists of the staggered deployment of two satellite pairs: GRACE-C (NASA and DLR) and the Next-Generation Gravity Mission (NGGM, ESA). MAGIC reflects the two agencies' interest in joining efforts to build a constellation of two satellite pairs, one polar and one inclined, to drastically increase temporal and spatial resolution and to provide, among other things, information essential for water management, for monitoring and forecasting hydrological extremes (i.e. droughts/floods), and for monitoring global tipping points. The constellation is founded on a joint reference document co-signed in October 2020, the MAGIC Mission Requirements Document (MRD), which consolidated the needs of the international user community as described in the IUGG 2015 publication (Pail et al., 2015) and the NASA 2017 Decadal Survey (NASEM). Furthermore, the first ESA-NASA MAGIC Science and Applications Workshop in Assisi, Italy, in November 2023 refined the needs of the international user community and stressed the importance of demonstrating operational capabilities of mass change monitoring from space. The two agencies have implemented a joint science and applications plan for MAGIC to coordinate relevant activities. This session invites contributions related to MAGIC Level-2 and Level-3 product development, as well as Level-1 product development for the individual GRACE-C and NGGM missions.
Constellation data processing, algorithm development and mission performance aspects shall be addressed. Contributions related to the generation of higher-level (e.g. Level-4) products involving data assimilation, data fusion and AI techniques are also encouraged. This session shall also address the novel science and operational services and applications enabled by MAGIC and user perspectives related to synergies between the Science Data Systems of ESA and NASA.

Wednesday 25 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: MAGIC-AMOC: Monitoring the Atlantic Meridional Overturning Circulation with MAGIC

Authors: Rory Bingham, Chris
Affiliations: University Of Bristol
The Atlantic Meridional Overturning Circulation (AMOC) plays a central role in balancing Earth's energy budget and redistributing oceanic mass and the tracers it carries. A key driver of the AMOC is North Atlantic Deep Water (NADW) formation in the Nordic and Labrador Seas. As Earth warms, surface freshening of these seas due to increased ice melt and freshwater input threatens to inhibit NADW formation and drastically reduce the strength of the AMOC, with profound and wide-reaching implications for Earth's climate and weather patterns. Recent analysis goes as far as to suggest that the AMOC is at risk of imminent collapse. Such a collapse would precipitate a rapid reorganisation of the climate system and prevailing weather patterns, including, ironically, the plunging of Northwestern Europe into a much colder regime; the adjustment would put an enormous strain on the economies and infrastructure of affected countries and tax the ability of governments, and of society more generally, to respond. The hugely successful RAPID programme has sustained continuous AMOC measurement at 26N since 2004, revealing a possible AMOC slowdown since 2010. However, the array is labour-intensive and expensive to maintain, and it measures at only a single latitude, so its long-term viability remains questionable. The ability to monitor the AMOC from space would overcome these limitations and would therefore have enormous scientific and societal benefits. Such a possibility is offered by the fact that AMOC variability has a distinctive fingerprint in bottom pressure along the western boundary of the North Atlantic. However, the relatively narrow across-slope width of this fingerprint poses a significant technical challenge to single-pair gravity missions such as GRACE.
By flying in a double-pair Bender formation, the MAGIC mission can potentially overcome these resolution and measurement-noise hurdles and accurately measure changing AMOC strength from space, complementing, or even supplanting, in-situ monitoring, and thereby providing a robust AMOC early-warning system. Sophisticated data processing strategies may, however, still be necessary to extract the AMOC signal from the inevitable measurement noise and from leakage of larger continental hydrological signals. Here we present a comprehensive assessment of the ability of the MAGIC mission to monitor AMOC changes from space. First, using a range of ocean models sampling the full range of potential AMOC changes and associated western-boundary bottom pressure variations, we examine the limits of AMOC detectability under the spherical harmonic transformation of the modelled ocean bottom pressure fields in the presence of competing oceanographic signals. We pay particular attention to indications of AMOC collapse or significant weakening. Having established this baseline, we next consider the extent to which contamination due to leakage from continental sources degrades AMOC detectability, and consider approaches to dealing with this issue. Next, we repeat this process focusing instead on the impact of simulated measurement noise, which introduces a trade-off between omission and commission error reduction. Finally, we demonstrate how, with careful treatment of the combined contamination from leakage and measurement noise, MAGIC can successfully monitor AMOC variations at multiple latitudes through the Atlantic with an accuracy comparable to in-situ measurements.
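The fingerprint-detection problem described above can be caricatured as estimating the amplitude of a known spatial pattern in a noisy field: project the observed field onto the (unit-norm) fingerprint and compare the estimate against its formal error. The pattern, the noise level, and the matched-filter estimator below are purely illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

rng = np.random.default_rng(3)
n_grid = 2000                       # grid points along the western boundary strip

# hypothetical unit-norm AMOC bottom-pressure "fingerprint" pattern
fingerprint = rng.standard_normal(n_grid)
fingerprint /= np.linalg.norm(fingerprint)

sigma = 0.5                         # per-point noise std (stand-in for mission noise)
true_amplitude = 2.0                # signal amplitude to be recovered
field = true_amplitude * fingerprint + sigma * rng.standard_normal(n_grid)

# matched filter: project the observed field onto the unit-norm pattern
est = float(fingerprint @ field)
est_err = sigma                     # formal std of the estimate for a unit-norm pattern
snr = true_amplitude / est_err      # detection signal-to-noise ratio
```

In the realistic setting the "noise" term also contains leakage from continental hydrology, which is spatially correlated rather than white; that is precisely why the abstract emphasises careful treatment of leakage alongside measurement noise.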

Wednesday 25 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Exploring the Role of NGGM and MAGIC in Enhancing Global Water Storage Modeling Through Data Assimilation

Authors: Dr. Helena Gerdener, Yorck Ewerdwalbesloh, Anne Springer, Prof. Dr. Jürgen Kusche
Affiliations: University of Bonn
The gravity missions GRACE (Gravity Recovery And Climate Experiment) and its successor GRACE Follow-On have made it possible to derive total water storage anomalies (TWSA) globally from space since 2002. As these observations contain water mass variations on Earth, the missions provide an unprecedented opportunity to observe surface and subsurface water storage. For example, they were successfully used to study water losses due to glacier melting in Greenland, anthropogenic groundwater use in India, and extreme dry events in the Amazon River basin. However, GRACE/-FO TWSA are limited to a coarse spatial resolution of about 300 km and require additional information for the vertical disaggregation into different water storage compartments such as soil moisture, groundwater, etc. Therefore, GRACE/-FO TWSA have in the past been assimilated into global hydrological models to improve the spatial and vertical resolution while also improving the models' realism. For example, at IGG a system was developed that globally assimilates GRACE/-FO TWSA into the WaterGAP global hydrology model, which operates at a 0.5° (~50 km) spatial resolution. The future NGGM and MAGIC missions promise to provide a finer spatial resolution than the GRACE/-FO missions. Nonetheless, it is still unclear how the improvement in the spatial resolution of the observations will translate into improvements in the realism of the assimilation and its disaggregation into single compartments. A synthetic framework will be set up for South America from 2003 to 2019, in which model simulations are equipped with error uncertainties from the NGGM and MAGIC simulations and assimilated into the existing assimilation framework. With this setup, we aim to assess the uncertainty of the downscaled and disaggregated data that can be achieved by integrating the future missions.
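A minimal sketch of the disaggregation idea behind TWSA assimilation: a stochastic ensemble Kalman analysis updates several storage compartments from a single observation of their vertical sum, distributing the innovation according to the ensemble covariances. The compartment values, error settings, and plain EnKF update are assumptions for illustration, not the IGG/WaterGAP implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ens, n_state = 30, 4   # 4 storage compartments (e.g. snow, soil, surface, groundwater)

# ensemble of model states; the observation is the vertical sum (TWSA)
ens = rng.normal([10.0, 50.0, 5.0, 100.0], [2.0, 8.0, 1.0, 15.0], (n_ens, n_state))
H = np.ones((1, n_state))        # TWSA observation operator: sum over compartments
y_obs = np.array([180.0])
r_obs = np.array([[25.0]])       # observation error variance, here (5 units)^2

# stochastic EnKF analysis: update each compartment from the single TWSA observation
A = ens - ens.mean(axis=0)
HA = A @ H.T
P_HT = A.T @ HA / (n_ens - 1)                 # cross-covariance: state vs. observed TWSA
S = HA.T @ HA / (n_ens - 1) + r_obs           # innovation covariance
K = P_HT @ np.linalg.inv(S)                   # Kalman gain (n_state x 1)
perturbed = y_obs + rng.normal(0.0, 5.0, (n_ens, 1))
ens_a = ens + (perturbed - ens @ H.T) @ K.T

twsa_prior = float((ens @ H.T).mean())
twsa_post = float((ens_a @ H.T).mean())       # pulled toward the observation
```

The vertical disaggregation happens through the cross-covariance term: compartments with larger ensemble spread absorb a larger share of the TWSA innovation.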

Wednesday 25 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Assessing the potential of MAGIC and future quantum mission constellations for hydrology and climate applications

Authors: Annette Eicker, Klara Middendorf, Philipp Zingerle, Marius Schlaak, Roland Pail, Laura Jensen
Affiliations: Hafencity University, Technical University of Munich, Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences
Satellite gravimetry observations as provided by the GRACE and GRACE-FO missions have revolutionized our understanding of the global water cycle under climate change. But despite the groundbreaking success of ongoing satellite gravity observations, limitations of the current missions in terms of accuracy, spatial/temporal resolution, latency, and the length of the time series still hamper their use for geophysical applications and their comprehensive operational integration into scientific services and monitoring programs such as the Copernicus Climate Change Service. In this presentation we analyze end-to-end simulation output of the MAGIC mission and of future quantum multi-satellite constellations to demonstrate their potential for improved applications in hydrology and climate science. The results were obtained within ESA's QSG4EMT (Quantum Space Gravimetry for monitoring Earth's Mass Transport Processes) project. Uncertainty estimates derived from basin-averaged time series of differently sized hydrological units clearly show the innovations achievable by MAGIC, both in accuracy and in spatial resolution, as well as the further improvements to be expected from quantum-based satellite constellations. The stability of linear trends depends not only on the measurement accuracy but also on the length of the available time series, and can thus be expected to improve strongly with new satellite missions. The detectability of climate-driven changes to the seasonal water cycle will be showcased as an example of a climate application that is still challenging with currently available satellite gravity data but will become feasible with the MAGIC mission and beyond.
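The point about trend stability and record length can be made quantitative with the formal least-squares trend error for white noise: the slope variance falls roughly with the cube of the record length, so doubling the record shrinks the trend uncertainty by about a factor of 2^(3/2). A small sketch (monthly sampling and unit noise are arbitrary choices for the example):

```python
import numpy as np

def trend_sigma(sigma_noise, n_epochs):
    """Formal std of a least-squares linear trend fitted to white noise.

    For epochs t = 0..N-1, var(slope) = sigma^2 / sum((t - mean(t))^2)."""
    t = np.arange(n_epochs)
    return sigma_noise / np.sqrt(((t - t.mean()) ** 2).sum())

# monthly sampling, unit noise: compare a 10-year with a 20-year record
sig_10yr = trend_sigma(1.0, 120)
sig_20yr = trend_sigma(1.0, 240)
ratio = sig_10yr / sig_20yr   # ~ 2**1.5 for a doubled record length
```

Real TWSA series have correlated noise and strong seasonal signals, so the actual gains differ, but the cubic-in-length scaling explains why trend estimates keep improving as the combined GRACE/GRACE-FO/MAGIC record grows.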

Wednesday 25 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Prospects of resolving short-term ice sheet processes with MAGIC

Authors: Ingo Sasgen, Mariia Usoltseva, Bert Wouters
Affiliations: Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Chair of Astronomical and Physical Geodesy, Technical University of Munich, Delft University of Technology
The GRACE and GRACE-FO satellite missions have been essential for studying changes in continental ice masses driven by climate change. However, these data are limited to monthly temporal resolution and are affected by striping noise. These limitations often hinder the detection of short-term mass change signals such as melt or accumulation events, which are critical for understanding and quantifying the response of the ice sheets to present and future extremes in a warming climate. The MAGIC constellation, featuring one near-polar pair and one pair inclined at ~70°, holds great potential to overcome these limitations. Here we present results of our investigation into the detectability of short-term ice sheet mass signals with the MAGIC mission concept. We simulate sub-monthly mass variations of the surface mass balance for Greenland and Antarctica and compare these signatures against noise fields derived from MAGIC mission scenarios. We demonstrate that the greatly improved noise level of MAGIC will considerably advance the detection of short-term ice sheet signals. We show that MAGIC is able to detect weekly mass changes at the basin scale in Greenland and bi-weekly mass changes in Antarctica. This capability highlights MAGIC's potential to capture rapid melt and accumulation events, offering enhanced insights into dynamic ice mass changes and advancing our understanding of ice sheet processes.

Wednesday 25 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: High-Resolution PyGLDA-based Hydrological Data Assimilation to Reveal Added Values of Future NGGM and MAGIC Satellite Gravity Products

Authors: Dr Fan Yang, Associate Professor Maike Schumacher, Ms Leire Retegui-Schiettekatte, Mr Marius Schlaak, Professor Roland Pail, Ehsan Forootan
Affiliations: Geodesy group, Department of Sustainability and Planning, Aalborg University, Astronomical and Physical Geodesy, TUM School of Engineering and Design
State-of-the-art Data Assimilation (DA) can be applied to take advantage of time-variable satellite gravity observations to constrain the vertical sum of water storage simulations of large-scale hydrological models. These DA implementations are, however, often performed regionally, and if globally, at low spatial resolution. This choice is made to manage the considerable computational demands of hydrological DA at a global scale and to avoid numerical problems, e.g., instabilities related to the inversion of covariance matrices. To fully exploit the potential of future satellite gravity observations, including the GRACE-C, NGGM, and MAGIC missions, we will test the performance of AAU's Python-based open-source DA system, known as PyGLDA, to integrate the global estimates of terrestrial water storage (TWS) from these missions with a global hydrological model at 0.1° spatial resolution. The main novelties of our experiments are: (i) testing the effect of the spatial resolution and error structure of the future gravity missions on the global fine-resolution DA-derived water storage estimates; (ii) evaluating the effects of basin-scale and grid-scale strategies on experiment (i); and (iii) evaluating the effect of domain localization and neighbouring-weighted global aggregation on the efficiency of global DA implementations. As observations, monthly GRACE-C-like, NGGM-like, and MAGIC-like TWS estimates with their full error covariance matrices will be used. The data sets are the outputs of closed-loop simulations of the ESA SING project, which are computed by considering both instrumental and dealiasing errors. The relevance of the experiments for operational hydrological applications will be evaluated by computing indicators such as wetness/dryness indices and storage deficits to explore the performance of the DA experiments in downscaling satellite gravity products.
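The core analysis step of assimilating TWS observations with full error covariances into an ensemble of model states can be illustrated with a minimal stochastic Ensemble Kalman Filter update. This is a generic sketch under simplifying assumptions (a linear observation operator, a perturbed-observation update), not PyGLDA's actual implementation; `enkf_update` and all variable names are illustrative:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_cov, rng=None):
    """Minimal stochastic Ensemble Kalman Filter analysis step.

    ensemble     : (n_state, n_members) array of model states (e.g. gridded TWS)
    obs          : (n_obs,) observation vector (e.g. satellite-gravity TWS)
    obs_operator : (n_obs, n_state) linear observation operator H
    obs_cov      : (n_obs, n_obs) full observation-error covariance R
    """
    rng = np.random.default_rng() if rng is None else rng
    n_state, n_members = ensemble.shape

    # Ensemble anomalies around the ensemble mean
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean

    # Sample covariances projected into observation space
    HA = obs_operator @ A
    P_xy = A @ HA.T / (n_members - 1)            # state-obs cross covariance
    P_yy = HA @ HA.T / (n_members - 1) + obs_cov # innovation covariance

    # Kalman gain (P_yy is symmetric, so solve instead of explicit inverse)
    K = np.linalg.solve(P_yy, P_xy.T).T

    # Perturbed-observation update: each member sees a noisy copy of obs
    perturbed = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), obs_cov, size=n_members).T
    return ensemble + K @ (perturbed - obs_operator @ ensemble)
```

With a small observation error relative to the ensemble spread, the analysis ensemble mean is pulled strongly toward the observations, which is the mechanism by which the coarse gravity fields constrain the fine-resolution model states.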
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: The Solid Earth Response to Earthquake Cycle Processes using MAGIC

Authors: Dr. Jeanne Sauber, Dr. Shin-chan Han, Dr. Bryant
Affiliations: NASA GSFC, University of Newcastle
Global gravity change measurements by the GRACE and GRACE-FO (2002 – present) missions have provided new insight into Earth surface and interior processes such as the earthquake seismic cycle. The time series of gravimetric observations enabled by these two initial missions will be extended with the launch of GRACE-C (polar pair of MAGIC, scheduled for 2028) and the Next-Generation Gravity Mission (NGGM, inclined pair of MAGIC), with a staggered deployment of the two satellite pairs. Here, we report the predicted and observed gravimetric changes associated with the full range of earthquake seismic cycle processes (inter-, pre-, co- and post-seismic) and compare these to the simulated performance of the MAGIC dual pair. As background, during the GRACE/GRACE-FO era (2002-2024 to date) there have been 25 earthquakes of magnitude ≥ 8.0. Since many of these events are located beneath the oceans, satellite gravity measurements have provided unique long-wavelength information on coseismic slip and the response of the solid Earth to these events. For the 5 largest events (M > 8.5) we have inverted the gravimetric data for the independent components of the earthquake centroid moment tensor and a simple rheological structure (Han et al., 2013). Numerous authors have documented and modeled more than 20 years of gravimetric change associated with portions of the seismic cycle of these great earthquakes. With additional information provided by seismic, cGPS, and InSAR data, we have been able to forward model GRACE/GRACE-FO coseismic and post-seismic gravity changes for an additional 7+ events, 7.9 < M < 8.5, to better understand a range of processes associated with these great earthquakes. The 12 events we studied included shallow (<~50 km) megathrust, strike-slip, and normal fault ruptures, as well as intermediate-depth earthquakes (>100 km) and one deep (~600 km) source earthquake (Han et al., 2024).
In the initial ESA-NASA MAGIC Mission Requirements Document (Oct. 2020), the solid Earth sub-team (AJSST: I. Panet, J. Sauber) formulated the Threshold and Target resolution and accuracy needed to advance understanding of earthquake cycle processes. To understand and advance modeling of these processes for natural hazard assessments, we specified short-term (daily to weekly), monthly (as well as seasonal to interannual) and long-term (trend) resolution and accuracy. An example of a short-term process is individual satellite passes over a recent earthquake rupture for rapidly constraining coseismic slip. For post-seismic relaxation processes, short-term, monthly and long-term trend observations would better enable separation of the contrasting processes of fault afterslip, poroelastic relaxation and viscoelastic relaxation in the upper mantle, with implications for subsequent geohazards. The GRACE era was an unusual time with 5 events of M > 8.5; in past decades there were long periods with no great earthquakes of M > 8.5 (for instance, 1970 – 2000). With an extended time series, greater range accuracy provided by the laser ranging instruments on both satellite pairs, a lower altitude for the inclined pair, and reduced noise from the addition of the inclined pair observations, the MAGIC constellation should ensure we capture a range of large to great earthquakes at various stages of the seismic cycle.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall E1)

Session: F.04.01 Earth Observation for European Agricultural Policies

Earth observation technology is increasingly important in the formulation and implementation of European agricultural policies such as the Common Agricultural Policy (CAP). By utilizing satellite imagery, policymakers gain valuable insights into various aspects of agriculture, including land use, crop health, and environmental impact. These data allow for monitoring crop growth, assessing soil conditions, and identifying areas in need of conservation or restoration. Moreover, Earth observation helps in implementing sustainable agricultural practices, optimizing resource management, and mitigating the effects of climate change on agriculture. By integrating this technology into policy-making processes, European agricultural policies can become more effective, evidence-based, and responsive to the evolving needs of farmers and the environment.

Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall E1)

Presentation: Large-scale implementation of agriculture land abandonment identification – The Italian example

Authors: Philippe LOUDJANI, Mr. Daniele BORIO, Davide RIZZI, Salvatore CARFI, Lorenzo
Affiliations: Joint Research Centre of The EC
The abandonment of agricultural land in Europe is a growing concern, with significant social, economic and environmental implications. Abandonment refers to the cessation of farming activities on previously cultivated land, often driven by economic pressures such as the low profitability of farming in marginal areas, for instance mountainous or remote regions. Identifying abandoned land is essential within a general framework of competition for land availability, for instance for land stewardship programmes that reintegrate such land into conservation or productive use. The work that will be presented concerns an ongoing project carried out in collaboration with the main Italian paying agency (AGEA) to determine areas at high risk of farmland abandonment and to identify abandoned plots. The identification of at-risk land is based on a GIS platform that combines information from the Land Parcel Identification System (LPIS), the history of farmers' declarations (GSA), the distribution of farm parcels and even the altitude of parcels. Within the identified at-risk areas, abandoned plots are detected using machine-learning techniques based on multi-year trends in Sentinel-2 and Sentinel-1 data. Protocols are defined according to the main types of land cover (arable land, grassland, permanent crops) and by combining different markers (vegetation indices, Sentinel-2 10-band spectral curves, Sentinel-1 coherence, etc.). Some results from the first year of large-scale implementation will be presented and discussed.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall E1)

Presentation: Leveraging Sentinel-1 and Sentinel-2 Data With Machine and Deep Learning for Common Agricultural Policy Compliance Monitoring in Castille and León

Authors: Mr. Félix Hernando Sancho, Mr. Sergio Álvarez Álvarez, Mr. Gustavo Río Briones, Dr. Vanessa Paredes Gómez, Mr. Vicente del Blanco Medina, Mr. David Nafría García, Mr. Alberto Gutiérrez García
Affiliations: Agro-Technological Institute Of Castilla y León (ITACyL)
The current Common Agricultural Policy (CAP) programming period (2023-2027) brings the New Delivery Model (NDM) and changes the approach in which the Policy is monitored and evaluated, shifting from a compliance-based model to a new performance-oriented one. Member States (MSs) have more flexibility in designing and implementing the interventions and measures to achieve the policy's objectives in their CAP Strategic Plans (SP). MSs are also responsible for monitoring both the performance of the SP and the beneficiaries' compliance with the requirements stated in the interventions. Paying Agencies (PAs) are the bodies in charge of carrying out this compliance monitoring. The Integrated Administration Control System (IACS) ensures the efficiency with which direct payments are made to farmers applying for CAP aids. One of the building blocks of this system is the Area Monitoring System (AMS), which is defined as "a procedure of regular and systematic observation, tracking and assessment of agricultural activities and practices on agricultural areas by Copernicus Sentinels satellite data or other data with at least equivalent value" (European Commission, 2021). The Regional Ministry of Agriculture of Castilla y Leon, in coordination with the Spanish Agrarian Guarantee Fund (FEGA), is one of these PAs in Spain. The Agro-technological Institute of Castilla y Leon (ITACyL) is a public entity that gives technical support to the Regional Ministry of Agriculture and runs and manages the AMS in the region. The system makes intensive use of the data provided by the Copernicus services, more precisely of Sentinel-2 and Sentinel-1 data. Castilla y Leon is the largest region in Spain (94,224 km²), with more than five million hectares of cropland, distributed over more than three and a half million parcels and declared by fifty-six thousand applicants for CAP subsidies (in 2024).
The AMS must provide evidence (markers) for the PA to make a fair judgement on the compliance of each of these parcels with the monitorable requirements. These markers inform on the type of crop grown in the parcel, the presence/absence of homogeneity within the plot and the detection of certain agricultural practices (e.g. ploughing, sowing, harvest). To deliver such markers, the system carries out a continuous analysis of Sentinel-2 data over each agricultural parcel along the agricultural campaign. More precisely, the signatures of several indices such as the Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), Tasseled Cap Brightness (TC Brightness), Normalized Burn Ratio (NBR) and Normalized Difference Tillage Index (NDTI) are systematically computed and analysed. This analysis delivers information on the parcel from different points of view, such as crop vegetation development, the implementation of certain agricultural practices, the detection of irrigation events, the identification of burned areas and, finally, the presence of bare soil during sensitive periods of the year. The system also benefits from Sentinel-1 data. Several indices combining the two possible backscatter polarizations (VV, VH) support the identification of certain agricultural practices such as ploughing and, at the same time, serve as a proxy for NDVI when optical data are not available due to cloud coverage. While all these remote sensing signals provide paramount information, additional methods are required for crop identification. Machine Learning algorithms help with this task. In this regard, CyL's AMS operationally implements the data-mining tool C5.0 as its crop classification algorithm. The C5.0 algorithm combines various input variables, among others temporal series of Sentinel-2 images, digital terrain models and climatological data, to derive a pixel-based land cover map (Gómez, V.P., 2018).
The algorithm is run twice a year and the output, the crop type map, enters the AMS as the primary source of information. In the same direction of crop identification, the use of a Convolutional Neural Network (CNN) to derive an object-based crop type map has also been explored. This map informs on the three most likely crops planted in the parcel, giving additional information for compliance checking. The above-mentioned analysis of the spectral indices for event detection presents a clear limitation when trying to identify the date on which an event took place. The implementation of a Recurrent Neural Network (RNN) for estimating the most likely date on which a harvest event was performed has also been explored. More precisely, we explored Long Short-Term Memory (LSTM) networks, which capture long-term temporal dependencies. In this case, the inputs for the LSTM were statistical metrics from the NDVI, NDWI and TC Brightness indices as well as the number of days between acquisition dates. Nevertheless, we found some difficulties when implementing this approach. Cloud coverage, presence of shadows, multiple or missed event detections, insufficient data and early harvests were the most remarkable drawbacks encountered. All these methodologies (C5.0, CNN and LSTM) require a reliable ground-truth data set for both training and validating the models. Carrying out a sampling campaign to collect ground-truth data every season over the whole extent of CyL is unaffordable. Paredes et al. (2020) describe the ground-truth data source employed for the pixel-based crop type map. As for event detection, we explored the use of alternative data sources as ground truth to validate the detection of an event. This communication presents an operational system that leverages the synergistic combination of Sentinel-1 and Sentinel-2 data to support the efficient delivery of CAP funds to farmers.
The system also benefits from the integration of Machine Learning and Deep Learning algorithms, which enhance automation and improve the accuracy of decisions regarding beneficiaries' compliance with CAP requirements. However, implementing this system in a large region such as CyL highlights obstacles and challenges, not only related to data gaps but also to the timely delivery of information to the PA. References: European Commission (2021). Regulation (EU) 2021/2116 of the European Parliament and of the Council of 2 December 2021 on the financing, management and monitoring of the common agricultural policy. Paredes-Gómez, V., Gutiérrez, A., Del Blanco, V., & Nafría, D. A. (2020). A Methodological Approach for Irrigation Detection in the Frame of Common Agricultural Policy Checks by Monitoring. Agronomy, 10(6), 867. https://doi.org/10.3390/agronomy10060867. Gómez, V. P., Medina, V. D. B., Bengoa, J. L., & García, D. A. N. (2018, July). Accuracy assessment of a 122 classes land cover map based on Sentinel-2, Landsat 8 and DEIMOS-1 images and ancillary data. In IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium (pp. 5453-5456). IEEE.
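The spectral markers described in this abstract are mostly normalized-difference indices, which all share the same arithmetic form. A minimal sketch follows; the band pairings reflect common Sentinel-2 formulations (B4 red, B8 NIR, B11/B12 SWIR), but the exact band choices and the `s2_indices` helper are illustrative assumptions, not ITACyL's implementation:

```python
import numpy as np

def normalized_difference(a, b):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (a - b) / (a + b)

def s2_indices(b4, b8, b11, b12):
    """Compute a set of common indices from Sentinel-2 surface reflectance.

    Band pairings used here (one common convention among several):
      NDVI = (B8 - B4)  / (B8 + B4)   -> vegetation vigour
      NDWI = (B8 - B11) / (B8 + B11)  -> canopy water content (Gao variant)
      NBR  = (B8 - B12) / (B8 + B12)  -> burned-area detection
      NDTI = (B11 - B12)/ (B11 + B12) -> tillage / crop residue
    """
    return {
        "NDVI": normalized_difference(b8, b4),
        "NDWI": normalized_difference(b8, b11),
        "NBR": normalized_difference(b8, b12),
        "NDTI": normalized_difference(b11, b12),
    }
```

Because the functions accept arrays, the same code computes per-pixel index images for a whole parcel, whose time series can then be analysed for event detection as the abstract describes.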
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall E1)

Presentation: Revealing Spatial and Temporal Patterns of Agricultural Land Use in Germany for the Last Three Decades

Authors: Gideon Okpoti Tetteh, Dr Vu-Dong Pham, Dr. Marcel Schwieder, Felix Lobert, Alexander Gocht, Dr Sebastian van der Linden, Dr. Stefan
Affiliations: Thünen Institute of Farm Economics, Earth Observation and Geoinformation Science Lab & Interdisciplinary Centre for Baltic Sea Region Research, University of Greifswald, Thünen Institute of Farm Economics, Thünen Institute of Farm Economics, Thünen Institute of Farm Economics, Earth Observation and Geoinformation Science Lab & Interdisciplinary Centre for Baltic Sea Region Research, University of Greifswald, Thünen Institute of Farm Economics
Changes in agricultural land use are driven by environmental conditions as well as policies, economic factors, and technological advancements. The spatiotemporal monitoring of these changes is essential to creating an objective database that enables the provision of evidence-based support to decision-makers. A long time series of land use maps makes it possible to evaluate the drivers and consequences of land use and land use change, e.g., by political interventions or climate change. Within the context of the European Common Agricultural Policy (CAP) framework, crop rotation and diversification are key sustainability objectives under measures like eco-schemes and the Good Agricultural and Environmental Conditions (GAEC). Changes in crop rotation and diversity can serve as a valuable proxy for assessing agricultural land use intensity. With Earth observation (EO) data, crop rotation and diversity can be monitored in space and time at a large scale. To effectively monitor crop rotation and agricultural land-use changes, a long time series of agricultural land-use maps is required. With such maps, crop sequences can be generated and further used to extract crop rotations. For more recent years, an increasing number of detailed agricultural land use maps are becoming available via EO or official agricultural statistics. However, historical information on agricultural land use at a high spatial resolution, especially before the Sentinel era, is rather scarce. This study aims to close this gap by mapping the dominant agricultural land use types in Germany for each year between 1990 and 2023 in a consistent manner. For the aforementioned timeframe, we processed all available Landsat and Sentinel-2 images with a cloud cover below 70% to generate surface reflectance images and masked out clouds using the Framework for Operational Radiometric Correction for Environmental monitoring (FORCE) (Frantz, 2019).
All data were resampled to a spatial resolution of 30 m, and the six bands common between Landsat and Sentinel-2 as well as spectral indices were used. Since agricultural land underwent considerable changes throughout the past decades in Germany, with a continuous loss of agricultural land to urban areas, we followed a two-step approach. Firstly, we created a mask containing six broad land-cover types: forests, cropland, grassland, wetlands, settlements, and other land. For this classification, we extracted 10,000 samples per class from the German Authoritative Topographic Cartographic Information System (ATKIS) and used them to train a recently published transferable deep learning approach (Pham et al., 2024). This approach, which uses a one-dimensional convolutional neural network (1D-CNN), was proposed to deal with the limited availability of historical reference data for training. It employs Temporal Encoding (TE) to assign all clear-sky observation pixels to their respective acquisition dates in a constant array of 365 days. It then employs two data augmentation techniques, namely Random Observations Selection (ROS) and Random Day Shifting (RDS), to synthetically characterize temporal patterns in data-sparse situations and different environmental conditions. The ATKIS samples were grouped by year and used individually to train yearly models. With these yearly models, we created a time series of annual land-cover maps, which were validated with all remaining ATKIS pixels that were not used for training. The annual F-scores for the cropland and grassland classes ranged between 0.84–0.91 and 0.66–0.76, respectively. Secondly, the annual cropland and grassland masks were used as the basis for the agricultural land use classification. We derived training data from the Integrated Administrative Control System (IACS), which was available for several German federal states and years between 2006 and 2022.
We aggregated the land use information in IACS to 14 main agricultural land-use classes (grassland and 13 crop types). Per class, 1,000 training samples were extracted per year and subsequently merged to create a multi-year reference data pool. Based on this data pool, we trained the aforementioned 1D-CNN, which we subsequently used to map the 14 agricultural land-use classes for each year. Each map was validated using all remaining IACS pixels not used for training. The overall accuracies ranged between 68.9% in 2012 and 93.1% in 2009. The maps revealed comprehensive spatial and temporal patterns, which will now be further explored within the context of major political upheavals and changes in agricultural and environmental conservation policies. We demonstrate in this study the ability of satellite-based monitoring to provide consistent, large-scale insights into agricultural dynamics and highlight that the integration of multi-source data and deep learning is essential for tracking long-term trends and addressing data gaps in traditional agricultural statistics. By leveraging satellite imagery, this study not only enhances our understanding of past trends but also supports the development of predictive tools for future agricultural planning. References Frantz, D. (2019). FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond. Remote Sensing, 11, 1124. Pham, V.-D., Tetteh, G., Thiel, F., Erasmi, S., Schwieder, M., Frantz, D., & van der Linden, S. (2024). Temporally transferable crop mapping with temporal encoding and deep learning augmentations. International Journal of Applied Earth Observation and Geoinformation, 129, 103867.
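The temporal encoding and augmentation steps described in this abstract can be sketched roughly as follows. This is a simplified illustration of the ideas attributed to Pham et al. (2024), not their exact implementation; all function names and parameter choices here are our own assumptions:

```python
import numpy as np

def temporal_encode(values, days, n_days=365):
    """Temporal Encoding (TE): place clear-sky observations at their
    acquisition day-of-year in a fixed-length array; unobserved days
    remain NaN so every pixel shares the same input shape."""
    series = np.full(n_days, np.nan)
    series[np.asarray(days) - 1] = values  # days are 1-based day-of-year
    return series

def random_observation_selection(days, keep_frac, rng):
    """ROS: randomly drop observations to mimic sparser time series
    (e.g. cloudier years or sensors with fewer revisits)."""
    days = np.asarray(days)
    n_keep = max(1, int(round(keep_frac * len(days))))
    return np.sort(rng.choice(days, size=n_keep, replace=False))

def random_day_shift(days, max_shift, rng, n_days=365):
    """RDS: shift all acquisition dates by a common random offset to
    mimic phenological shifts across years and regions."""
    shift = rng.integers(-max_shift, max_shift + 1)
    return np.clip(np.asarray(days) + shift, 1, n_days)
```

Applying ROS and RDS to the acquisition dates before temporal encoding yields synthetic training samples with varied observation density and timing, which is what allows a model trained on recent reference data to transfer to data-sparse historical years.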
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall E1)

Presentation: Enhanced spectral information for monitoring land-related EU policies

Authors: Dr. Katja Berger, Dr Patrick Hostert, Dr. Akpona Okujeni, Dr. Saskia Foerster, Zoltan Szantoi, Dr. Martin Schlerf, Saeid Asadzadeh, Dr. Inge Jonckheere, Christelle Vancutsem, Dr. Jean Baptiste Feret, Dr. Anke Schickling, Marco Celesti, Dr. Simon Proud, Sabine Chabrillat, Martin Herold
Affiliations: GFZ Helmholtz Centre for Geosciences, Geography Department, Humboldt-Universität zu Berlin, Integrative Research Institute of Transformations of Human-Environment Systems (IRI THESys), Humboldt-Universität zu Berlin, Umweltbundesamt - German Environment Agency, Directorate of Earth Observation Programmes, European Space Agency (ESA), Department of Geography and Environmental Studies, Stellenbosch University (SU), Luxembourg Institute of Science and Technology, Remote Sensing and Natural Resources Modelling Group, European Commission, Joint Research Centre (JRC), TETIS, INRAE, AgroParisTech, CIRAD, CNRS, Université Montpellier, European Space Agency, European Space Research and Technology Centre (ESA-ESTEC)
The effective translation of scientific research in the Earth observation (EO) domain into usable products and services for policy-makers and -owners remains a significant challenge. Due to their increasing value and quality, the role of EO-based products is evolving beyond mere awareness tools, becoming increasingly important in shaping the design, implementation, monitoring and assessment of environmental and agricultural policies of the European Union (EU). This study introduces a comprehensive framework to evaluate the current and required technology readiness levels (TRL) of EO-derived variables for mapping, monitoring, assessing, and managing crop, forest, soil, and water resources, ultimately supporting policy implementation and compliance. By systematically analyzing existing policies of the post-Green Deal era, such as the evolution of the Common Agricultural Policy (CAP), the EU Regulation on Land, Land Use Change and Forestry (EU LULUCF), the Regulation on Deforestation-free Products (EUDR), the EU Biodiversity strategy for 2030 or the proposed Forest Monitoring Law (FML) and Soil Monitoring Law (SML), and evaluating the potential of emerging spaceborne observation capacities, we aim to identify opportunities and requirements for developing value-added products. Those products may include non-photosynthetic vegetation (NPV), canopy nitrogen content (CNC), different leaf pigment contents, crop types, species levels, soil organic carbon content (SOC), mineralogic composition and abundances, and dissolved organic carbon concentrations. The actual TRLs of these variables range from 4 (=functional verification/tested in a laboratory) to a maximum of 6 (=demonstrated/verified in relevant environments), and the goal would be to obtain the most mature readiness level 9, i.e. variable fully verified and proven in an operational environment. 
Towards the end of this decade, the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) is anticipated to routinely deliver data with an unprecedented combination of spatial, spectral and temporal resolution, complementing the data streams currently offered by multispectral systems like Sentinel-2 (S2). Furthermore, the S2 Next Generation (S2NG) mission, currently planned for launch in the mid-2030s, will offer enhanced multispectral capabilities by providing additional bands for specific applications, and a higher spatial resolution than S2. While S2NG will have more than 20 bands mostly at ~5-10m spatial resolution and 5 days revisit time, the CHIME mission will provide more than 200 bands at 30 m pixel size every 11 days with two units. Both missions, CHIME and S2NG, will provide significant technological improvements, in particular in the spectral domain, to allow the preparation of high-level TRL EO-based products. Among others, this is expected for assessing forest degradation (for instance due to drought), or post-fire recovery, relevant for the FML. Furthermore, specific nutrients, like CNC, can be obtained and used to optimise fertilization strategies in the context of the CAP. Moreover, hyperspectral data is a valuable source for the retrieval of SOC and monitoring NPV that can both potentially support the CAP eco-schemes for the estimation of carbon stocks and provide information for the SML. Also, spectral discrimination of plant (crop, tree) species and improved biodiversity mapping are facilitated, serving the Biodiversity strategy and similar EU legislation. 
In summary, from our investigations, we recommend:
● EO time series and large-area demonstrations for more robust retrieval assessments and to better track land dynamics, with a focus on Green Deal policy requirements;
● Exploration of data from scientific precursor missions (EnMAP, PRISMA, DESIS) to bring products to higher TRLs towards future service development and operational uptake;
● Development and scaling of advanced processing and retrieval methodologies across a range of conditions and in Green Deal priority parameters;
● Enhancing the in-situ component and integrated ground/satellite data analysis required for robust calibration and validation of advanced methodologies;
● Exploration of synergies of future sensor systems (i.e. CHIME, S2NG), complementing each other in the spectral, temporal and spatial domains.
In addition, for an ultimately successful user (policy) uptake of EO-based products and services, early engagement of the users is critical, as stakeholders' trust in the products and services will develop in line with their involvement. Therefore, early consultation, including product co-design, will serve as the basis for higher-TRL developments for numerous EO-based variables. We conclude that upcoming spaceborne imaging spectroscopy and enhanced multispectral data streams will emerge as a game-changer in the early 2030s. These higher-level products will be invaluable tools for implementing EU policies, enabling enhanced traceability of key processes and changes. To ensure the EU achieves its ambitious climate targets outlined in the Green Deal, preparedness for the upcoming EO missions within the Copernicus services is critical. Therefore, we strongly advocate for increased investment in research and development to bridge the gap between the current TRLs of EO-based variables and the required levels for operational implementation.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall E1)

Presentation: Integrating Earth Observation, survey, and statistical data streams to monitor the impact of agricultural policies

Authors: Marijn Van Der Velde, Martin Claverie, Raphael d'Andrimont, Momtchil Iordanov, Fernando Fahl, Nicolas Lampach, Guido Lemoine, Emanuele Lugato, Melissande Machefer, Matteo Marcantonio, Laura Martinez-Sanchez, Linda See, Fernando Sedano, Jon Skoien
Affiliations: EC Joint Research Centre, EC AGRI, Seidor Consulting, European Dynamics, EC Eurostat, Landlife Ecospatial Labs, IIASA, ARHS Developments
Transitioning to more sustainable food systems requires a comprehensive monitoring system. Recently, Tóth et al. (2024) presented a comprehensive framework and dashboard built on a set of environmental, economic and social indicators. A variety of sources, including sales, survey, statistical, and model data, underpins this framework. Notably, none of the >120 indicators are based on Earth Observation (EO). The need to increase the scope, thematic coverage, as well as the spatial detail of such monitoring frameworks provides new opportunities, not only for EO, but also for geospatial information from farmers' declarations and agricultural census data. Relevant data streams include the Copernicus High Resolution Layer on Crop Types, the Farm Structure Survey census data from 9 million European farm holdings, recently released by Eurostat following statistical disclosure principles guaranteeing confidentiality, as well as increasingly available farmers' declarations from geospatial applications (GSA) under the High-Value Dataset Implementing Regulation. Furthermore, very high-resolution products, e.g. mapping trees in agricultural landscapes (Liu et al., 2023), as well as citizen-science based efforts open new avenues to collect relevant monitoring data. Given their EU-wide extent, computing a variety of new farm and landscape level metrics from these geospatial data layers becomes possible. Importantly, integrating these data streams can enhance spatial targeting of agricultural policies and facilitate the monitoring of policy impact at farm to regional level. We illustrate this with use-cases that focus on ex-post monitoring of crop management practices, tracking woody feature loss and expansion, and incorporation of Copernicus High Resolution Layers to improve predictions of organic carbon in agricultural soils.
Further integrating EO in data reporting and modelling workflows can facilitate synergies between the evidence provision and monitoring aspects of agricultural policies. References Liu et al. (2023). The overlooked contribution of trees outside forests to tree cover and woody biomass across Europe, Science Advances, 9 (37), doi: 10.1126/sciadv.adh4097. Tóth et al. (2024). EU food system monitoring framework. From concepts to indicators. doi:10.2760/94456, JRC137971.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall E1)

Presentation: Multimodal Learning from Satellite Time Series and Field Images for Crop Type Mapping

Authors: Adel Abbas, Dr. Claudia Paris, Dr. Andy Nelson
Affiliations: Department of Natural Resources, Faculty of Geo-information Science and Earth Observation (ITC), University of Twente
1. Introduction and motivation Large-scale crop type maps are a key tool for the management of agrifood systems and, more broadly, for meeting the Sustainable Development Goals (SDG-2 in particular). They are already deployed in critical applications like the allocation of farm subsidies and are typically produced using remote sensing (RS) data, which has become more readily available in the last decade with initiatives like the European Copernicus Programme. Sentinel-2 satellite data, with high temporal frequency and spatial resolution, have been used to generate seasonal irrigation and cereal maps at the global level [1]. Due to crop rotation practices, crop type maps should be updated seasonally or at least annually. For computational efficiency, global products are generated with classical machine learning algorithms, although many specialised deep learning (DL) architectures have recently been proposed [2]. Current state-of-the-art performance is achieved by transformer-based models using satellite image time series (SITS) [3], [4]. These require a large amount of expensive labelled data for training supervision and validation, yet still face challenges in generalising to new spatial and temporal contexts [5]. To mitigate the lack of reference data, recent studies have demonstrated the potential of photo interpretation of ground-level images to produce noisy labels for the supervision of SITS models [6], [7]. Despite its potential, the combination of satellite data (SITS in particular) with ground-level images (field images in particular) is still in an exploratory phase in agricultural mapping and has been tested only in limited spatial contexts. In this European-level study, we evaluate different field image collection protocols, DL models of different sizes, and fusion approaches.
We analyse different strategies to combine Sentinel-2 SITS and field images in Europe, leveraging the public availability of crop field images and field data reported in the 2018 Land Use and Land Cover Survey (LUCAS) [8]. The “close-up cover” and “landscape” images in the LUCAS dataset were tested and compared to assess the most effective way to collect field images in agricultural areas. While the “close-up images” provide detailed views of crop features (e.g., leaves, bark, flowers, or fruits), the “landscape images” (from the survey point and its four cardinal directions) offer broader context about the agricultural area. Instead of relying only on ground-level images, we also propose a multimodal framework to generate reference crop data. The main contributions of our work are (1) recommendations for field image collection protocols, (2) an evaluation of vision models for crop type classification in field images, (3) an evaluation of multimodal fusion strategies for SITS and field images, and (4) a multimodal framework to generate reference crop data using both sources, and its application to crop type mapping at the European level. 2. Methodology and experiments Multimodal dataset preparation: In the first step of the proposed method, we prepare the multimodal dataset. We consider Sentinel-2 SITS and field images collected on crop parcels across Europe in 2018. Specifically, we employ the Sentinel-2 SITS provided in the Sen4Map dataset [9] that align with field data collected during the 2018 LUCAS survey. LUCAS provides boundary polygons, geo-tagged landscape and close-up images, and a curated set of close-up images. These data allow us to create multimodal datasets with varying numbers of labelled samples, field-image composition, and quality.
DL model training: We train unimodal and multimodal classifiers using different DL architectures, including convolutional neural networks (i.e., ResNet) and vision transformers (i.e., ViT-B, ViT-L) of different sizes. For field images, we evaluate models trained either on “close-up images” or on “landscape images”. For SITS, we consider transformer-based architectures as in [9]. We also introduce a multimodal pseudo-labelling pipeline, in which the trained multimodal classifier generates pseudo-labels for the unlabelled samples. This allows us to increase the number of annotations, and the augmented set is then used to supervise the training of a SITS model at the continental level. Multimodal fusion strategies: To identify the most effective solution for integrating SITS with field images, we consider three fusion strategies: late fusion, decision fusion, and ensemble aggregation. In late fusion, each modality is processed by a modality-specific encoder that extracts feature representations, which are then merged into a joint representation to be classified. Decision fusion involves training parallel classifiers for each modality, which produce individual predictions; the final prediction, which serves to compute the training loss, is obtained by aggregating the outputs of the unimodal classifiers. Finally, unlike decision fusion, ensemble aggregation operates in two steps: first, each modality-specific classifier is trained independently; then, at inference, the predicted probabilities from each classifier are aggregated to obtain the final prediction. Preliminary results: All tests are performed on the same set of parcels, which are not part of any training recipe. Our unimodal results confirm that data relevance matters for field images. If the inference data contain landscape images, models trained on landscape images perform better than models trained on close-ups, and vice versa.
However, the drop in performance is larger when transferring from landscape to close-up images than in the opposite direction. Data quality also matters: encoders trained only on the curated images (1k samples) achieved performance on par with encoders trained on many more non-curated samples (16k samples). As expected, larger models like ViT-L achieved the best macro F1-score (0.86), while smaller models like ViT-B reached an F1-score of 0.73. For SITS, the best macro F1-score, obtained with the VViT architecture [9], was 0.72, comparable to ViT-B. Finally, the best fusion strategy was late fusion, which consistently outperformed the unimodal alternatives, reaching a macro F1-score of 0.88. 3. Conclusion With this contribution, we evaluate the impact of field image types (i.e., “close-up” versus “landscape”), data quality, and sample size on crop classification accuracy to identify optimal protocols for field image collection in agricultural contexts. We explore the multimodal integration of Sentinel-2 SITS and field images for crop type mapping and show its potential to generate robust reference data compared to unimodal approaches. With the growing availability of both data sources, our framework can enable the generation of more robust crop maps at large scale, in support of global agricultural policy and decision-making processes. References: [1] K. Van Tricht et al., ‘WorldCereal: a dynamic open-source system for global-scale, seasonal, and reproducible crop and irrigation mapping’, Earth Syst. Sci. Data, vol. 15, no. 12, pp. 5491–5515, Dec. 2023, doi: 10.5194/essd-15-5491-2023. [2] L. Miller, C. Pelletier, and G. I. Webb, ‘Deep Learning for Satellite Image Time-Series Analysis: A review’, IEEE Geosci. Remote Sens. Mag., pp. 2–45, 2024, doi: 10.1109/MGRS.2024.3393010. [3] V. S. F. Garnot and L. Landrieu, ‘Lightweight Temporal Self-attention for Classifying Satellite Images Time Series’, in Advanced Analytics and Learning on Temporal Data, V. Lemaire, S. Malinowski, A. Bagnall, T.
Guyet, R. Tavenard, and G. Ifrim, Eds., Cham: Springer International Publishing, 2020, pp. 171–181. doi: 10.1007/978-3-030-65742-0_12. [4] M. Tarasiou, E. Chavez, and S. Zafeiriou, ‘ViTs for SITS: Vision Transformers for Satellite Image Time Series’, in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada: IEEE, Jun. 2023, pp. 10418–10428. doi: 10.1109/CVPR52729.2023.01004. [5] E. Rolf, K. Klemmer, C. Robinson, and H. Kerner, ‘Mission Critical -- Satellite Data is a Distinct Modality in Machine Learning’, Feb. 02, 2024, arXiv: arXiv:2402.01444. Accessed: Feb. 09, 2024. [Online]. Available: http://arxiv.org/abs/2402.01444 [6] M. Paliyam, C. Nakalembe, K. Liu, R. Nyiawung, and H. Kerner, ‘Street2Sat: A Machine Learning Pipeline for Generating Ground-truth Geo-referenced Labeled Datasets from Street-Level Images’. [7] J. L. Soler, T. Friedel, and S. Wang, ‘Combining Deep Learning and Street View Imagery to Map Smallholder Crop Types’, Proc. AAAI Conf. Artif. Intell., vol. 38, no. 20, Art. no. 20, Mar. 2024, doi: 10.1609/aaai.v38i20.30225. [8] R. d’Andrimont et al., ‘LUCAS cover photos 2006–2018 over the EU: 874 646 spatially distributed geo-tagged close-up photos with land cover and plant species label’, Earth Syst. Sci. Data, vol. 14, no. 10, pp. 4463–4472, Sep. 2022, doi: 10.5194/essd-14-4463-2022. [9] S. Sharma, R. Sedona, M. Riedel, G. Cavallaro, and C. Paris, ‘Sen4Map: Advancing Mapping With Sentinel-2 by Providing Detailed Semantic Descriptions and Customizable Land-Use and Land-Cover Data’, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 17, pp. 13893–13907, 2024, doi: 10.1109/JSTARS.2024.3435081.
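The ensemble-aggregation strategy described in the abstract above (train each modality-specific classifier independently, then aggregate their predicted probabilities at inference) can be sketched minimally as follows. This is a hypothetical NumPy illustration, not the authors' implementation; the function name, averaging rule, and toy probabilities are invented for illustration:

```python
import numpy as np

def ensemble_aggregate(prob_sits, prob_image):
    """Fuse two independently trained unimodal classifiers at inference
    by averaging their predicted class probabilities (ensemble aggregation),
    then return the argmax class per sample."""
    fused = (prob_sits + prob_image) / 2.0
    return fused.argmax(axis=1)

# Toy softmax outputs: two parcels, three crop classes.
p_sits  = np.array([[0.6, 0.3, 0.1],
                    [0.2, 0.5, 0.3]])   # SITS classifier
p_image = np.array([[0.5, 0.4, 0.1],
                    [0.1, 0.3, 0.6]])   # field-image classifier

print(ensemble_aggregate(p_sits, p_image))  # fused class index per parcel
```

Late fusion, by contrast, would merge the encoder feature vectors before a single classification head, which requires joint training of the fused model rather than post-hoc averaging.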
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall K1)

Session: C.02.17 Celebrating 15 Years of CryoSat for climate science: shaping the future of polar altimetry - PART 2

This conference session is dedicated to celebrating the 15th anniversary of the launch of the CryoSat satellite, a cornerstone in Earth observation and climate science. Since its deployment, CryoSat has been instrumental in advancing our understanding of the cryosphere, offering unprecedented insights into the dynamics of ice sheets, sea ice, and glaciers. The session will review the mission's contributions to cryosphere science, oceanography, hydrology and climate change studies, and discuss the satellite's ongoing impact and future directions in Earth science research.

The session will feature 5-6 distinguished scientists who will deliver keynote speeches, each focusing on different aspects of the results and developments stemming from the CryoSat mission. These experts represent the leading edge of research in cryospheric sciences, oceanography, hydrology and climate change, and will provide comprehensive analyses of how CryoSat's data has transformed our understanding in these fields.

At 17:45, the keynotes will be followed by light refreshments, a 15th-anniversary birthday cake, and a photo opportunity.

Session Schedule:


CryoSat: 15 years of successful monitoring of polar cryosphere changes


  • Prof. Rene Forsberg

Echoes Through Time: 15 Years of CryoSat at the Frontiers of Glacier Monitoring


  • Dr. Livia Jakob

Earth Observations from Space and Climate Change


  • Prof. Anny Cazenave
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L3)

Session: C.01.12 New science-driven Earth Observation Mission ideas – Are you ready to boost future satellite Earth Observation?

Are you ready to boost the future of Earth Observation with new science-driven Earth Observation mission ideas? Ideas which potentially enable breakthrough science tackling future (2040+) needs and challenges? Do you have a new scientific mission idea at a very early stage, before entering Phase 0, which may also include innovative technical aspects? Then this is your session. We are looking for mission ideas which combine scientific excellence with innovative technology to target scientific, environmental and societal challenges. The ideas should be in line with the new ESA EO Strategy “Earth Science in Action for Tomorrow’s World” (ESA Earth Observation Science Strategy 2040) within the action areas “Frontier Science and Discovery” and/or “Filling critical observation gaps: preparing for tomorrow starts today”.

Ideas can originate from former ESA activities, national agencies, international activities or any other initiatives.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L3)

Presentation: GALENE: A satellite mission proposed at Earth Explorer-ESA program for observing coastal and inland aquatic ecosystems and wetlands

Authors: Malik Chami, Pr Astrid Bracher, Xavier Briottet, Pr Maycira Costa, Pr Alexander Damm-Reiser, Dr Arnold Dekker, Dr Shungu Garaba, Dr Peter Gege, Dr.ssa Claudia Giardino, Dr Els Knaeps, Pr Tiit Kutser, Pr Richard Lucas, Dr Dani Odermatt, Dr Gerard Otter, Dr Nima Pahlevan, Dr Nicole Pinnel, Dr Sindy Sterckx, Dr Kévin Turpie
Affiliations: Sorbonne Université, Alfred Wegener Institute - Helmholtz Center for Polar- and Marine Research, ONERA, University of Victoria, University of Zurich, CSIRO, Carl von Ossietzky Universität Oldenburg, German Aerospace Center (DLR), National Research Council (CNR), VITO, University of Tartu, Aberystwyth University, Eawag, TNO, NASA, University of Maryland Baltimore County
Coastal and inland aquatic ecosystems are of fundamental interest to society and the economy, given their tight link to urbanization and economic value creation. These ecosystems, which are continuously impacted by natural processes and human activities, play a significant role in the carbon cycle and comprise critical habitats for biodiversity. Systematic, high-quality and global observations, such as those provided by satellite remote sensing, are key to understanding complex aquatic systems. While a multitude of remote sensing missions have been specifically designed for studying ocean biology and biogeochemistry, as well as for evaluating terrestrial environments, missions dedicated to studying critical coastal and inland aquatic ecosystems at global scale are non-existent. Thus, these ecosystems remain among the most understudied habitats on the Earth’s surface. A satellite mission called Global Assessment of Limnological, Estuarine and Neritic Ecosystems (GALENE) is proposed to ESA’s Earth Explorer Mission Idea call to respond to current and future challenges linked to coastal and inland ecosystems. The mission concept consists of a synergy of three innovative instruments, namely a hyperspectral sensor, a panchromatic camera and a polarimeter. GALENE will thus provide optimized measurements of these aquatic ecosystems by enabling adaptive spectral, spatial, multidirectional and polarimetric sampling of properties and processes in the water column, benthic habitats and associated wetlands. GALENE will substantially contribute to solving global water challenges, including combating water pollution, ensuring a clean drinking water supply for all, and protecting coastal areas and populations. The GALENE science objectives and the mission’s main innovative features will be presented.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L3)

Presentation: Monitoring the vertical structure of the ocean color in the upper ocean using a space-borne oceanic profiling lidar

Authors: Cédric Jamet, Davide Dionisi, Peng Chen, Paolo Di Girolamo, Yongxiang Hu, Dong Liu, Xiaomei Lu, Iwona Stachlewska, Yudi Zhou
Affiliations: Log, CNR, State Key Laboratory of Satellite Ocean Environment Dynamics, Second Institute of Oceanography, Ministry of Natural Resources, School of Engineering, University of Basilicata, Science Directorate, Lidar Science Branch, NASA Langley Research Center, Ningbo Innovation Center, State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, University of Warsaw, Faculty of Physics
Passive ocean color space-borne observations began in the late 1970s with the launch of the CZCS space mission. An uninterrupted record of global ocean color data has been sustained since 1997. These passive observations have enabled a global view of the distribution of marine particles (phytoplankton, total suspended matter, colored dissolved organic matter, particulate organic carbon). However, these measurements are limited to clear-sky, daylight, high Sun elevation angles and ice-free oceans, and they are exponentially weighted toward the ocean surface (providing no information on the vertical distribution of marine particles). Moreover, the processing of ocean color images requires knowledge of the atmospheric components (gases, air molecules and aerosols). Lidar is an acronym for LIght Detection And Ranging. This “laser radar” technique has been used for a wide range of atmospheric applications. For ocean applications, lidar has mainly been employed on board aircraft to estimate bathymetry in coastal waters, detect fish schools, or detect scattering layers. Lidar techniques have another advantage: better penetration into seawater, up to three times that of passive sensors. Despite these varied oceanic applications, this active remote sensing technique has not received significant attention from the ocean color remote sensing community. Several reasons can explain this: the cost and size of the instrument, the lack of sampling swath, the few wavelengths that can be used for detection, and the lack of a space-borne oceanic profiling lidar. However, the technique has regained interest from the ocean community in recent years. New studies used the lidar signal from the space-borne CALIOP instrument on board CALIPSO to estimate particulate backscatter and showed that the CALIOP signal provides accurate estimates of this parameter over the globe.
As none of the aforementioned space-borne lidars have been designed to measure vertical profiles of the upper ocean in terms of bio-optical and biogeochemical properties (their scientific objectives being the study of aerosols and clouds for CALIOP, ice topography for ATLAS and atmospheric wind for ALADIN), we face a lack of specific measurement requirements. There is therefore a pressing need for such observations. Here, we propose a cutting-edge and breakthrough oceanic profiling lidar: a tri-wavelength (UV-blue-green) lidar with a vertical resolution of at least one meter and cross- and co-polarization at each band. The vertical profiling capabilities of a space-borne lidar can yield additional key ocean properties. Thanks to this new space-borne lidar dedicated to ocean color, profiles of chlorophyll-a and particulate organic carbon can be estimated down to 80 m depth in oceanic waters. This will help to improve plankton detection in polar oceans, better constrain the marine carbon cycle and the carbon flux between the surface and the deep ocean, reduce uncertainty in global ocean phytoplankton and net primary production estimates, and characterize plankton physiology.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L3)

Presentation: SLAINTE: A SAR constellation to observe vegetation water dynamics, stress and resilience

Authors: Prof.dr.ir. Susan Steele-Dunne, Ana Bastos, Francesco de Zan, Wouter Dorigo, Stef Lhermitte, Christian Massari, Jalal Matar, Dr David Milodowski, Diego Miralles, Albert Monteith, Marc Rodriguez Cassola, Christopher Taylor, Stefano Tebaldini, Lars Ulander
Affiliations: TU Delft, Universität Leipzig, delta phi remote sensing, Technische Universität Wien, KU Leuven, Italian National Research Council, German Aerospace Center (DLR), Microwaves and Radar Institute, University of Edinburgh, Ghent University, Chalmers University, UK Centre for Ecology and Hydrology & National Centre for Earth Observation UK, Politecnico di Milano
SLAINTE was initially developed in an ESA New Earth Observation Mission Idea (NEOMI) study. SLAINTE is science-driven, aiming to capture sub-daily variations in vegetation water storage due to stomatal regulation, water uptake and loss, and their evolution in response to environmental stress at landscape scales. In particular, sub-daily observations of vegetation optical depth, vegetation water content, plant water content and the wet-dry canopy state are essential to (1) enable the estimation of sub-daily evaporation and its components, (2) unveil sub-daily coupling mechanisms and feedbacks between the carbon and water cycles, (3) quantify the sub-daily stress response of vegetation to abiotic and biotic disturbance, and (4) detect early signs of vegetation health decline, reveal ecosystem resilience to environmental stressors, and support the development of early-warning metrics for ecosystem shifts. The availability of sub-daily SAR also allows us to extend the range of resolved surface soil moisture changes to sub-daily scales. These are essential to (1) quantify agricultural water use for more sustainable use of water resources, optimize crop production and understand the impact of irrigation on local and regional climate, (2) develop a fundamental understanding of how vegetation and soil water status influence the time and space scales of predictability of mesoscale convective storms, and (3) explain the runoff generation processes and infiltration dynamics that drive the triggering and evolution of geo-hydrological hazards like flash floods, debris flows and landslides. The unique observations from SLAINTE would enable frontier science and discovery in several themes identified in the new ESA Earth Observation Science Strategy 2040, namely Carbon Cycle, Water Cycle, Ecosystem Health, Extremes and Hazards, and Energy Fluxes.
Filling the critical observation gap at sub-daily scales is expected to provide unprecedented knowledge of processes that are critical to understanding the resilience of terrestrial ecosystems and their water resources, and to meeting future needs and challenges as we face increasing climate variability and extremes, and pressures from human land and water use. SLAINTE is therefore highly relevant for several Articles (4, 5, 7, 8) of the Paris Agreement, several SDGs (2, 6, 11, 13, 15), as well as the EU Green Deal and the Convention on Biodiversity. A first mission concept was developed in response to ESA’s 12th Call for Earth Explorers, comprising a constellation of decametric-resolution monostatic SAR satellites in LEO, displaced in orbit to provide sub-daily imagery. Though SLAINTE can benefit from decades of SAR development for ESA missions, the development of a constellation of SARs with interferometric capability using small-class platforms poses unique challenges. This requires significant innovation in payload and spacecraft design and sizing, and the development of (In)SAR solutions to meet the ambitious requirements. By stimulating and making the necessary technological advances, SLAINTE can also be considered a demonstrator, showing that SAR constellations can complement Sentinel-1 NG and ROSE-L as part of a system of systems to address the scientific and societal challenges of the coming decades.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L3)

Presentation: Towards data-driven fire management:​ From​ comprehensive fuel characterization data to satellite sensors design​

Authors: Marta Yebra, Nicolas Younes
Affiliations: Australian National University
Wildfires represent a significant and growing global challenge, emphasizing the need for advanced tools to monitor and manage fire risk effectively. Live Fuel Moisture Content (LFMC) and vegetation flammability are critical variables for predicting fire behavior and informing management strategies. While remote sensing offers the potential to provide large-scale observations of these parameters, progress has been constrained by the lack of sufficient datasets for calibration and validation, as well as by the limitations of current satellite capabilities in accurately capturing LFMC and forest flammability. To address these challenges, we integrate remote sensing technologies with a comprehensive data framework designed to enhance forest flammability monitoring in Australia and worldwide. Central to this initiative is Globe-LFMC 2.0, a global dataset encompassing 47 years of data and over 280,000 LFMC measurements from diverse plant species across 15 countries. This dataset serves as a foundation for the calibration and validation of algorithms, enabling more accurate LFMC estimates derived from satellite imagery. The OzFuel hyperspectral satellite instrument has been specifically optimized for Australia’s diverse landscapes. It delivers high spatial, temporal, spectral, and radiometric resolutions, improving the capacity to monitor forest flammability dynamics at an unprecedented level of detail. This tailored design overcomes the limitations of existing systems, facilitating reliable, high-resolution tracking of LFMC and flammability dynamics across fire-prone regions. The instrument's design leverages data from EUCFlamm, which combines spectral reflectance information with the physical and biochemical properties of eucalypt species, known drivers of wildfire behavior in Australian ecosystems. This integration ensures that the monitoring capabilities of OzFuel are closely aligned with the unique characteristics of Australian vegetation and fire risk. 
The OzFuel Project represents a substantial advancement in wildfire science and management, particularly in regions such as Australia, where fire risks are intensifying. By combining advanced remote sensing technologies with extensive datasets, the initiative provides improved tools for predicting and monitoring forest flammability, supporting proactive strategies for wildfire prevention and mitigation. Future developments aim to enhance Globe-LFMC and EUCFlamm as well as the satellite instrument design to position OzFuel as an essential resource for wildfire science.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L3)

Presentation: The Concept and Design of BNU Satellites for Real Time Application

Authors: Prof. Qiao Wang, Prof. Xiaoyuan Li, Prof. Guoqiang Wang, Prof. Kun Jia, Jinlong Fan
Affiliations: Satellite Application Engineering Innovation Center, Faculty of Geographical Science, Beijing Normal University, Faculty of Geographical Science, Beijing Normal University, State Key Laboratory of Earth Surface Processes and Resource Ecology, Faculty of Geographical Science, Beijing Normal University
Earth observation satellite development worldwide supports and promotes remote sensing applications in many sectors. In most current applications, the logic of retrieving information from a satellite image is that the user must first gain access to the image before work on it can begin, and for most satellites in the world the latency of access is hours or days. For emergency applications and operational business, however, such latency is not tolerable and delays the quick-response measures that need to be taken. At present, the latency of delivering satellite data to users via traditional ground stations is close to its practical limit. The concept of real-time remote sensing, proposed by Prof. Qiao Wang of Beijing Normal University in 2020, therefore seeks a new solution: in emergency applications, circulate the derived information directly from the satellite to the users instead of transmitting the satellite image itself. The aim is to accelerate the delivery of information to users, targeting minute-level latency in the near future. The central concern of this concept is the speed of remote sensing applications: delivering timely information into users’ hands through real-time remote sensing is an important sub-direction of remote sensing development.
Fortunately, the concept will be tested by the coming twin small BNU satellites, each lighter than 100 kg and expected to be placed in orbit in 2026. One carries an 8-channel imaging spectrometer covering the optical and near-infrared spectrum with a ground sampling distance (GSD) of 20 m, while the other carries a 26-channel imaging hyperspectrometer covering the optical, near-infrared and mid-infrared spectrum with GSDs of 10 and 30 m. In the planned work mode, the multispectral satellite flies a few minutes ahead of the hyperspectral one, scanning for abnormal earth-surface conditions and sending the identified areas of interest (AOIs) to the hyperspectral satellite following just a few minutes behind. On-board computing chips immediately process the images captured by the twin satellites, retrieve information about the abnormal surface conditions, and compile it into a text message that is relayed to the end user’s equipment on the ground via the Chinese navigation satellite series. These considerations will be fulfilled by the BNU satellites. Feedback and experience from the community will be used to improve the design, and the user community is warmly welcomed to engage in implementing this kind of real-time remote sensing application. Keywords: Small Satellite; Multispectral; Hyperspectral; Earth Observation; Real Time Remote Sensing; On-board Computing; Beijing Normal University
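The lead/follow tasking flow described in this abstract (leader flags anomalies, follower confirms the AOI and compiles a text alert) can be sketched as a toy Python illustration. This is purely hypothetical: the function names, thresholds and cell scores are invented for illustration and bear no relation to the actual BNU flight software:

```python
# Toy sketch of a two-satellite lead/follow tasking flow:
# the multispectral "leader" flags anomalous ground cells, and the
# hyperspectral "follower" re-images only those cells and compiles
# a short text alert for relay to the ground.

def detect_aoi(multispectral_scene, threshold):
    """Leader: return cells whose anomaly score exceeds the threshold."""
    return [cell for cell, score in multispectral_scene.items()
            if score > threshold]

def confirm_and_report(aoi, hyperspectral_scene, threshold):
    """Follower: check only the AOI cells and compile a text alert."""
    confirmed = [cell for cell in aoi
                 if hyperspectral_scene.get(cell, 0.0) > threshold]
    return f"ALERT cells={sorted(confirmed)}" if confirmed else "NO EVENT"

# Toy anomaly scores per ground cell (cell id -> score).
ms = {"A1": 0.9, "A2": 0.2, "B1": 0.7}   # leader's multispectral pass
hs = {"A1": 0.8, "B1": 0.3}              # follower's hyperspectral pass

aoi = detect_aoi(ms, threshold=0.5)       # leader flags A1 and B1
print(confirm_and_report(aoi, hs, 0.5))   # follower confirms A1 only
```

The point of the design is that only the short final string, not the imagery, needs to leave the satellites, which is what makes minute-level latency plausible.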
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L3)

Presentation: Sea-Air-Ice-Land INteractions (SAILIN) mission: taking the pulse of our planet’s fluxes

Authors: Estrella Olmedo, A. Naveira-Garabato, A. Alvera-Azcárate, E. Cardellach, I. Corbella, F. Doblas, G. Duveiller, M. Grégoire, A. Haumann, N. Karlsson, M. Piles, L. Renault, M. Arias, J. Barbosa, B. Buongiorno-Nardelli, R. Catany, D. Ciani, J. Closa, C. Colombo, R. Diez-García, N. Duffo, C. Gabarró, V. Gaias, A. García-Espriu, E. García-Ladona, V. González-Gambau, C. González-Haro, J. Haapala, H. Hewitt, T. Jagdhuber, C. Kittel, A. Moreno, R. Oliva, M. Pablos, M. Portabella, F. Scala, A. Silvano, A. M. Solgaard, L. Sorensen, G. Spreen, A. Stoffelen, L. Tuami, A. Turiel, M. Umbert, M. Unwin, M. Vall-Llossera, J.P. Wigneron, A. Zurita, M. Martin-Neira
Affiliations: Institut Of Marine Science (csic), SOTON-NOC, U. Liege, ICE-CSIC, UPC, BSC, Max Plank Institute for Biogeochemistry, AWI/LMU, GEUS, UV, IRD, ZBT, RDA, CNR-ISMAR, Argans, Airbus, POLIMI, VEGA, FMI, METOFFICE, DLR/UniA, University Grenoble, DLR, DTU, U. Bremen, KNMI, SSTL, INRAE, ESA
The mass and energy fluxes between the ocean, atmosphere, ice and land are crucial to understanding a climate that is changing dramatically. Interactions between the ocean and the atmosphere are complex. They encompass exchanges of gases, water, particles, momentum, energy and heat through the air-sea interface. Terrestrial and oceanic systems are connected by rivers and continental runoff, which deliver fluxes of essential biogeochemical components (e.g., carbon, nitrogen and phosphorus) in inorganic and organic forms. The margins of the ice sheets are the areas where the largest ice mass loss from Greenland and Antarctica is occurring, and sea ice is changing rapidly, strongly altering surface ocean properties and the associated fluxes of heat, freshwater and carbon. However, these air-sea-land-ice interactions are poorly understood and, as a result, are often inadequately represented or even neglected in global climate models. In the last call for ideas for future Earth Explorer missions, we presented SAILIN, whose main objective is to provide accurate estimates, at an unprecedentedly fine spatial resolution of 15 km, every 3 days globally and daily over polar regions, in all-weather conditions, of: freshwater fluxes at the sea-air, ocean-ice, sea-land and land-air interfaces; heat fluxes at the air-sea, sea-land and ocean-ice interfaces; and water mass transformations in the near-surface ocean. For this, we proposed the Tri-HEX concept, a novel three-platform formation flying in general circular orbits, yielding an alias-free imaging system by means of high-resolution distributed L-band synthetic aperture radiometers combined with three GNSS-R instruments to enhance the sensing of sea roughness and of soil moisture in densely vegetated areas. The three hexagonal spacecraft have a 3 m envelope diameter, in line with the available envelope of a Vega-C launcher.
The effective radiometric sensitivity is expected to be 0.2 K, outperforming values associated with other L-band missions (0.36 K and 0.3 K in the case of SMOS and CIMR, respectively). The spatial resolution at nadir is 15 km, also improving preceding values (33-50 km for SMOS) and on future planned L-band missions (36x64 km for CIMR). SAILIN provides multi-angular observational capabilities, with an incidence angle range of 0-60 degrees and a fully alias-free swath of 1200 km. In this talk we will present the SAILIN proposal, as well as the main outcomes from the ESA/ACEO evaluation report and some of the future avenues of the mission.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall K2)

Session: C.02.14 The EarthCARE Mission’s First Year in Orbit: Opening new Horizons for Cloud, Aerosol and Radiation Science - PART 4.

The Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) satellite mission aims to improve our understanding of cloud-aerosol-radiation interactions and the Earth's radiation budget, so that they can be modelled more reliably in climate and numerical weather prediction models. To achieve this objective, EarthCARE measures the three-dimensional structure of clouds, precipitation and aerosols, along with collocated observations of solar and terrestrial radiation.

This ESA-JAXA mission was successfully launched in May 2024 and, following the satellite and instrument commissioning phase, now provides unique co-registered observations from a suite of four instruments on a common platform: (1) the ATmospheric LIDar (ATLID), (2) the Cloud Profiling Radar (CPR), (3) the Multi-Spectral Imager (MSI) and (4) the BroadBand Radiometer (BBR). EarthCARE's global observations include vertical profiles of natural and anthropogenic aerosols, vertical profiles of ice and liquid water content, the cloud mesoscale distribution, precipitation microphysics, estimates of particle size, convective vertical air motions, as well as atmospheric radiative heating and cooling profiles. In addition to enabling new insights into climate science and providing unique data for NWP improvements, EarthCARE continues the heritage measurements of CloudSat, CALIPSO and Aeolus, and bridges to future missions such as NASA's Atmosphere Observing System mission (AOS) and Aeolus-2.

The session invites contributions from the science community on EarthCARE and related science themes, including Passive and Active Observational Techniques; Cloud and Precipitation Microphysics, Aerosols and Radiation Process Studies; Radiation and Earth Radiation Budget; Scientific and User Applications; as well as Long-Term Data Records. In addition, scientific synergies with heritage, operational and future satellite missions, as well as with ground-based, air- or ship-borne campaign activities, are welcome.

Contributions on Modelling, Assimilation and Parameterisation at Global, Regional and Cloud Level, enhancing high-resolution atmospheric numerical model activities through evaluation and improvement using novel satellite observations from EarthCARE and related satellite missions, are particularly invited. A focus is placed on the use of cutting-edge atmospheric climate and weather models, including "global km-scale" or "global storm-resolving" models, and commensurate Earth observations of clouds, aerosols and convection.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall K2)

Presentation: Adding ATLID/EarthCare to a Long-Term Cloud Record

Authors: Artem Feofilov, Prof. Dr. Hélène Chepfer, Dr. Vincent Noël, Marius Dahuron
Affiliations: LMD / Sorbonne University / Ecole Polytechnique, Laboratoire d’Aérologie, CNRS, Université de Toulouse
Clouds exert multifaceted radiative effects on Earth's energy budget, serving both as reflectors of incoming solar radiation and as insulators trapping outgoing infrared radiation. Consequently, clouds contribute to both surface cooling and warming processes, exerting a profound influence on regional and global climate dynamics. Despite their crucial role in the Earth's energy balance, uncertainties persist regarding their feedback mechanisms. A comprehensive understanding of clouds, including their spatial coverage, vertical distribution, and optical properties, is imperative for accurate climate prediction. Satellite-based observations, particularly those from active sounders, offer continuous monitoring of clouds with high vertical and horizontal resolution, starting from 2006. However, comparing cloud data from different spaceborne lidars presents challenges due to variations in wavelength, pulse energy, detector type, and local time of observation. This study discusses a methodology aimed at reconciling cloud data derived from several disparate spaceborne lidar platforms: CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation), which operated from 2006 to 2023; ALADIN/Aeolus (Atmospheric Laser Doppler Instrument), operating in 2018-2023; ICESat-2, which has been operating since 2018; and ATLID/EarthCARE (ATmospheric LIDar), launched in 2024. For historical reasons, we use the Scattering Ratio at 532 nm (SR532) as the baseline for our cloud definition. The numerator is the Attenuated Total Backscatter at 532 nm (ATB532), and the denominator is a would-be Attenuated Molecular Backscatter at 532 nm (AMB532), calculated under the assumption of a cloud-free atmospheric profile. To consider a layer as cloudy, we require the fulfillment of two conditions: SR532 > 5 and (ATB532 − AMB532) > 2.6×10⁻⁶ m⁻¹ sr⁻¹.
Correspondingly, for measurements performed at other wavelengths we convert the retrieved optical properties to SR532 and ATB532 so that like is compared with like. We show that with this approach one can retrieve comparable clouds both for CALIOP and ALADIN using real measurements and for CALIOP and ATLID using synthetic measurements. For a pair of lidars overlapping in time, it is straightforward to fine-tune the aforementioned cloud detection parameters to ensure a seamless transition between the datasets: the data are collocated in time and space, and the collocated dataset is analyzed with respect to cloud fraction at different latitudes, altitudes, and seasons. The differences are explored and a correction is introduced, which might be linked to the sensitivity of a given instrument or to its noise. However, if the instruments do not overlap in time, an intermediate step is needed. We show how ICESat-2 can be used as a reference to match CALIOP's and ATLID's clouds. For a series of quite different spaceborne lidars, we have developed a method of compensating for (a) wavelength, (b) differences in noise levels, and (c) the gap between observational periods. With these corrections, we have produced and analyzed the joint CALIOP-ATLID cloud dataset.
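As a minimal sketch of the two-threshold cloud test described in this abstract (the function name, array layout and synthetic five-layer profile are illustrative assumptions, not part of the authors' processing chain):

```python
import numpy as np

def detect_cloud_layers(atb532, amb532,
                        sr_threshold=5.0,
                        excess_threshold=2.6e-6):
    """Flag cloudy layers in a lidar profile.

    atb532 : Attenuated Total Backscatter at 532 nm (m^-1 sr^-1)
    amb532 : Attenuated Molecular Backscatter at 532 nm, computed
             under the assumption of a cloud-free atmosphere (m^-1 sr^-1)
    A layer is cloudy when SR532 = ATB532/AMB532 exceeds 5 and the
    backscatter excess ATB532 - AMB532 exceeds 2.6e-6 m^-1 sr^-1.
    """
    atb532 = np.asarray(atb532, dtype=float)
    amb532 = np.asarray(amb532, dtype=float)
    sr532 = atb532 / amb532          # scattering ratio
    excess = atb532 - amb532         # particulate backscatter excess
    return (sr532 > sr_threshold) & (excess > excess_threshold)

# Synthetic 5-layer profile: only the third layer scatters strongly
amb = np.full(5, 1.0e-6)
atb = np.array([1.1e-6, 1.2e-6, 8.0e-6, 1.0e-6, 0.9e-6])
mask = detect_cloud_layers(atb, amb)
print(mask.tolist())  # only the third layer passes both thresholds
```

Requiring both conditions avoids flagging thin layers where a high scattering ratio arises only because the molecular return is very small.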
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall K2)

Presentation: Initial Assessment of the Direct Impact of EarthCARE Observations on Weather Forecasts

Authors: Mark Fielding, Marta Janisková, Angela Benedetti, Will McLean, Marijana
Affiliations: ECMWF
Accurate weather forecasts rely on precise initial conditions, yet existing observation systems struggle to fully resolve the vertical structure of clouds and precipitation. When EarthCARE was conceived, its mission objectives were targeted at improving models through a better understanding of the physical processes of, and between, clouds, aerosols and radiation. However, as model fidelity has improved, cloud-related observations have increasingly been used in data assimilation to help constrain the initial conditions and produce more accurate forecasts. These cloud-related observations tend to be from passive sensors with limited information on the vertical structure of cloud and precipitation. The wealth of information in EarthCARE's unique profiling observations therefore has the potential to fill this gap in the current observing system and improve the representation of clouds in the model analysis. Thanks to EarthCARE's frequent down-links and fast processing chain (its observations are often available within just a few hours), the potential for EarthCARE's direct impact on weather forecasts is real. In this presentation, we will showcase the first assimilation of EarthCARE radar and lidar observations into the European Centre for Medium-Range Weather Forecasts (ECMWF) global model and assess its impact. This work builds on earlier studies with CloudSat and CALIPSO, which demonstrated significant improvements in medium-range temperature and wind forecasts. Using data from the first year of EarthCARE's lifetime, we will perform a comprehensive analysis of the effect of CPR radar reflectivity and ATLID lidar backscatter on forecast skill at different lead times. Issues of quality control, bias correction and observation error specification will be investigated through sensitivity experiments, exploiting EarthCARE's radiation measurements as independent verification.
Synergies with other cloud-related observations that are routinely assimilated at ECMWF will also be explored and used to show the non-local impact of EarthCARE’s relatively narrow swath. Finally, we will discuss the schedule for operational implementation.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall K2)

Presentation: Simultaneous evaluation of clouds and radiation in the ECMWF model using EarthCARE

Authors: Robin Hogan, Shannon Mason, Mark Fielding
Affiliations: European Centre for Medium-Range Weather Forecasts, University of Reading
One of the many strengths of EarthCARE is the near-perfect co-location of retrieved profiles of cloud and aerosol with corresponding measurements of outgoing shortwave and longwave radiative fluxes. Not only does this allow us to test “radiative closure”, whereby radiative transfer calculations are performed on the retrieved profiles and compared to the measured fluxes, but it also provides new insights into the performance of atmospheric models. By simultaneously evaluating modelled cloud properties and radiative fluxes, we are more easily able to pinpoint the cause of radiative errors in models. In this talk we present preliminary results of evaluating several versions of the ECMWF weather forecast model in this way, including the operational 9-km medium-range weather forecast, the operational 40-km “CAMS” forecast including prognostic aerosols, and more recent research versions of the ECMWF model. The initial period of comparison is August-September 2024 to coincide with the EarthCARE-ORCESTRA Model Intercomparison Project (ECOMIP) described by Satoh et al. in this symposium. We use the operational ACM-CAP synergistic retrieval product, developed at ECMWF, which combines information from the radar, lidar and imager to obtain a “best estimate” of the properties of clouds, aerosols and precipitation. We particularly focus on aspects of clouds that should be better retrieved by EarthCARE than by the A-Train, such as cirrus extinction coefficient and particle size thanks to the high spectral resolution capability of the lidar and the higher sensitivity of the radar. The comparisons are stratified by cloud type to identify the causes of specific biases in the model. We also evaluate the forecasts as a function of forecast lead time to determine both how biases develop and the rate at which forecast skill degrades for different cloud types. 
The results are discussed in terms of the likely causes of the differences and what improvements are needed to the representation of cloud and radiation processes in the model.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall K2)

Presentation: ECOMIP: a new atmospheric model intercomparison project with validation data from EarthCARE and the ORCESTRA field campaign

Authors: Masaki Satoh, Dr. Woosub Roh, Robin Hogan, Prof. Bjorn Stevens
Affiliations: Atmosphere And Ocean Research Institute, The University Of Tokyo, Typhoon Science and Technology Research Center, Yokohama National University, European Centre for Medium-Range Weather Forecasts, The Max Planck Institute for Meteorology
EarthCARE offers exciting new opportunities for evaluating and improving the representation of clouds, aerosols, precipitation and radiation in atmospheric models at the scale of individual storms. We propose the EarthCARE-ORCESTRA Model Intercomparison Project (ECOMIP), targeting the period of the ORCESTRA field campaign from Aug. 9 to Sep. 29, 2024. The experimental design is as follows:
• Evaluation of models with EarthCARE data, both in observation space using satellite simulators (including radar reflectivity, Doppler velocity and the lidar Mie and Rayleigh channels) and in model space by comparing to observed radiative fluxes and to retrievals of quantities such as water content, particle size and precipitation rate.
• Types of experiments: (1) 2-day simulations initialized on each day from Aug. 9 to Sep. 29, 2024 (with some days selected for special focus), to be compared directly to observations on the second day of each simulation; (2) a free run through the two months Aug.-Sep. 2024, to be evaluated statistically, e.g. using contoured frequency by altitude diagrams (CFADs).
• Model data will be extracted in a "curtain" underneath EarthCARE, as well as in a 3D domain in the central Atlantic where the ORCESTRA campaign took place.
Currently, we are coordinating more than five modeling groups interested in participating in this MIP. We encourage participation from more groups, including global storm-resolving models (resolution better than 5 km), global weather and air-quality models, regional km-scale models and conventional climate models. In this talk, we present a preliminary analysis using the 870 m-mesh simulation by NICAM (Nonhydrostatic Icosahedral Atmospheric Model) together with EarthCARE/CPR observations, comparing cross sections along the satellite tracks and CFADs of radar reflectivities and Doppler velocities. We will discuss how to evaluate vertical mass transport using the CPR Doppler velocity data.
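A CFAD of the kind proposed for the statistical evaluation is simply a 2D histogram of a variable (e.g. radar reflectivity) against altitude, with each altitude row normalised to a frequency distribution. A minimal sketch with synthetic data (the function name, bin edges and toy reflectivity profile are our assumptions, not ECOMIP specifications):

```python
import numpy as np

def cfad(values, altitudes, value_bins, altitude_bins):
    """Contoured Frequency by Altitude Diagram (CFAD).

    Histogram `values` (e.g. reflectivity in dBZ) against `altitudes`,
    then normalise each altitude row so it sums to 1, giving the
    frequency distribution of the variable at each height.
    """
    hist, _, _ = np.histogram2d(altitudes, values,
                                bins=[altitude_bins, value_bins])
    row_sums = hist.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        freq = np.where(row_sums > 0, hist / row_sums, 0.0)
    return freq  # shape: (n_altitude_bins, n_value_bins)

# Synthetic example: reflectivity decreasing with height
rng = np.random.default_rng(0)
alt = rng.uniform(0, 12e3, 10000)                    # altitudes in metres
dbz = 10.0 - alt / 1e3 + rng.normal(0, 3, alt.size)  # synthetic dBZ
f = cfad(dbz, alt, value_bins=np.arange(-20, 21, 2),
         altitude_bins=np.arange(0, 12001, 1000))
assert np.allclose(f.sum(axis=1), 1.0)  # each row is a distribution
```

Because each row is normalised independently, a CFAD compares the shape of the distribution at each height and is insensitive to how many samples fall in each altitude bin, which makes it suitable for comparing model curtains with narrow-swath satellite sampling.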
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall K2)

Presentation: Evaluations of a Global Storm-Resolving Model Using EarthCARE and a Satellite Simulator

Authors: Woosub Roh, Masaki
Affiliations: Atmosphere and Ocean Research Institute, The University of Tokyo
Global circulation models (GCMs) have long been used to simulate global weather and climate systems and have been evaluated with satellite data to improve their representations of clouds and precipitation. Recently, the emergence of global storm-resolving models (GSRMs; Satoh et al., 2019; Stevens et al., 2019) has introduced a paradigm shift. With kilometer-scale resolutions, these models capture mesoscale convective systems in unprecedented detail, surpassing the granularity of traditional GCMs. GSRMs aim to reduce uncertainties associated with cumulus parameterization in GCMs by incorporating advanced cloud microphysics schemes, which simulate the complex behaviors of hydrometeors, such as nucleation and riming, leading to more accurate representations of cloud and precipitation processes. The resolution of GSRMs closely matches the along-track sampling of active satellite sensors, typically less than 5 km, enabling direct comparisons between satellite observations and GSRM outputs without relying on subgrid-scale assumptions. Several studies have utilized satellite active sensor data to evaluate and refine the accuracy of microphysical representations within these models (e.g., Roh and Satoh, 2014; Roh et al., 2017; Ikuta et al., 2021). The EarthCARE cloud radar, with its capability to observe Doppler velocity, provides a valuable opportunity to evaluate and improve GSRMs. Doppler velocity measurements capture downward motions linked to the terminal velocity of hydrometeors and upward motions associated with convective processes, offering insights into the microphysical and dynamical properties of convective systems. Satellite simulators, which integrate comprehensive radiative transfer models to replicate satellite signals from atmospheric model outputs, play a crucial role in bridging the gap between simulations and observations. 
These tools enable robust assessments of GSRMs by facilitating direct comparisons between simulated and observed satellite signals, thereby enhancing model accuracy and alignment with observational data. In this study, we evaluate GSRM simulations conducted at 3.5 km and 870 m horizontal resolutions using NICAM (Satoh et al., 2014) and EarthCARE data. Radar reflectivity and Doppler velocity were simulated with the Joint Simulator (Hashino et al., 2013; Roh et al., 2020), a satellite simulator. We investigate the utility of radar reflectivity and Doppler velocity as metrics for assessing microphysical processes and interpreting convective dynamics in both observations and simulations. Furthermore, we address the limitations of observational data, such as sampling constraints and restricted variable coverage, and discuss how GSRM simulations can be leveraged to enhance the interpretation and utility of EarthCARE data.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall K2)

Presentation: EarthCARE radiative closure assessment: Initial results

Authors: Jason Cole, Dr. Howard Barker, Dr Zhipeng Qu, Ms. Meriem Kacimi, Almudena Velázquez Blázquez, Dr. Shannon Mason, Dr. Robin Hogan
Affiliations: Environment And Climate Change Canada, Royal Meteorological Institute of Belgium, European Centre for Medium Range Weather Forecasts
The EarthCARE satellite mission’s primary goal is to retrieve profiles of cloud and aerosol properties, from active and passive sensor measurements, well enough that, when acted on by radiative transfer models, simulated top-of-atmosphere shortwave (SW) and longwave (LW) radiative fluxes agree, on average, to within 10 W m⁻² of the corresponding values inferred from collocated measurements made by the on-board broadband radiometer (BBR). In essence, this amounts to a near-continuous radiative closure assessment of EarthCARE products that will benefit both algorithm developers and subsequent process studies that use EarthCARE’s products. In a first for satellite missions, 3D radiative transfer models (RTMs) will be used operationally to compute SW and LW fluxes as well as SW and LW broadband radiances for the three views of the BBR. The 3D RTMs require additional information to account for the horizontal transport of photons, which can significantly impact mean fluxes and radiances for some atmosphere-surface conditions. This information is obtained by mapping retrieved profiles into 3D domains of size ~30 km across-track and ~50 km along-track; mean 3D fluxes and radiances are computed for the inner (assessment) domain of size ~5 x 21 km. To maintain continuity with previous missions, 1D RTMs will operate directly on each ~1 km-wide column of retrieved properties, which will also be averaged up to the ~5 x 21 km assessment domain. For the closure assessment, RTM radiances are treated as though they were BBR observations; both sets of radiances are combined in the same way, as dictated by EarthCARE’s angular distribution models, to produce “effective” TOA fluxes. These fluxes are differenced and, using as many reliable uncertainties as possible, the probability that they differ by less than 10 W m⁻² is computed.
This presentation will provide an overview of the closure methodology and show results for several months’ worth of EarthCARE observations partitioned according to cloud regime and season.
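As an illustration of such a probabilistic closure test, the sketch below assumes independent Gaussian flux errors; the function, error model and numerical values are hypothetical and not the operational EarthCARE algorithm:

```python
import math

def closure_probability(flux_rtm, flux_bbr, sigma_rtm, sigma_bbr,
                        tolerance=10.0):
    """Probability that the true RTM-minus-BBR TOA flux difference lies
    within +/- tolerance (W m^-2), assuming the two flux estimates carry
    independent Gaussian errors. This error model is an illustrative
    assumption, not the operational EarthCARE uncertainty treatment.
    """
    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    diff = flux_rtm - flux_bbr
    sigma = math.hypot(sigma_rtm, sigma_bbr)  # combined 1-sigma error
    return phi((tolerance - diff) / sigma) - phi((-tolerance - diff) / sigma)

# A domain where simulated and observed effective fluxes differ by 4 W m^-2
p = closure_probability(flux_rtm=241.0, flux_bbr=237.0,
                        sigma_rtm=3.0, sigma_bbr=4.0)
print(round(p, 3))  # high probability that closure holds within 10 W m^-2
```

The same calculation, applied per assessment domain and aggregated over cloud regimes and seasons, gives the kind of statistic the abstract describes.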
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall E2)

Session: D.01.08 4th DestinE User eXchange - Addressing Application Needs for Climate Adaptation

Addressing Application Needs for Climate Adaptation

The Climate Change Adaptation Digital Twin (Climate DT) provides climate projections across multiple decades, aiming to operationalise their production. The goal for the coming years is to provide users with updated simulations every year. It will also produce specific information for selected sectors impacted by climate change, such as renewable energy and urban planning. In this way, the data produced can help urban planners, industry leaders, and others design infrastructure that is more resilient to climate change.

In this session, researchers involved in the Climate DT will provide an update on its progress. Speakers will also demonstrate how the digital twin can support different users, using real-life examples of how various stakeholders are using this data to improve their climate adaptation efforts.

Presentations and speakers:


The Climate Digital Twin


  • Sebastian Milinski - ECMWF

Application examples:


The Energy Indicator application developed within the Climate DT – user engagement and experience story


  • Sushovan Ghost - Barcelona Supercomputing Center

From data to decisions: How the Hydroland application can help users to adapt to future droughts and floods


  • Aparna Chandrasekar - Helmholtz Centre for Environmental Research

Building cooler cities together: Stakeholder engagement in urban heat management using DestinE climate information


  • Dirk Lauwaet - VITO

Using km-scale storylines to understand the impact of warming on extreme events


  • Katherine Grayson - Barcelona Supercomputing Center

Understanding climate risk


  • Discussion with participants
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.31/1.32)

Session: B.04.02 Addressing multi-hazards: Compounding and Cascading Events through Earth Observation

Our planet faces a growing threat from complex, interconnected natural disasters. This session explores how Earth Observation (EO) data and technologies, possibly coupled with Digital Twins (DTs), can be leveraged to understand, predict, and ultimately mitigate the impacts of multi-hazards.

We'll delve into:
- Compounding and Cascading Events: Analyze how seemingly separate events like floods and droughts can combine to create more devastating consequences.
- EO for Multi-Hazard Risk Assessment: Discover how EO data from satellites helps map vulnerabilities, monitor real-time conditions, and forecast potential multi-hazard scenarios.
- The Role of Digital Twins in Multi-Hazard Management: Explore how DTs can integrate EO data with other sources to create a virtual representation of a region, enabling simulations and risk assessment for multi-hazards.
- Actionable Solutions from EO and DTs: Showcase real-world applications of EO and DTs in mitigating multi-hazard risks and improving preparedness.

This session targets anyone interested in utilizing EO and DTs for effective multi-hazard management. We'll foster discussion on best practices, emerging technologies, and the path forward for a more resilient future.

Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: ARCEME: Adaptation and Resilience to Climate Extremes and Multi-hazard Events

Authors: Khalil Teber, Dr. Mélanie Weynants, Fabian Gans, Marcin Kluczek, Dr. Jędrzej S. Bojanowski, Jan Musiał, Miguel D. Mahecha
Affiliations: Leipzig University, Max Planck Institute for Biogeochemistry, CloudFerro S.A.
Cascading droughts and extreme precipitation events are frequent compound events that pose a wide range of threats to ecosystems, whether natural or managed, and to society as a whole. For instance, these events can disrupt water supplies, ruin crops, cause physical destruction of man-made structures, and lead to ecological degradation of natural ecosystems. The severity of such impacts depends on the intensity of the cascading hazards and on the exposure and vulnerability of the affected systems. Despite their importance, empirical research dedicated to investigating their complex dynamics remains insufficient. The limited scope of existing studies restricts our ability to generalize findings and draw actionable insights, highlighting a critical gap in current disaster risk research and management practices. One important reason for this is the lack of data sampled across multiple events from different hydroclimatic and socioeconomic settings. To address this gap, ARCEME (Adaptation and Resilience to Climate Extremes and Multi-hazard Events), a project funded by the European Space Agency (https://rsc4earth.de/project/arceme/), aims to leverage available high-resolution Sentinel-1 and Sentinel-2 remote sensing data to improve our understanding of cascading droughts and extreme precipitation events and their impacts on society and vegetation. ARCEME is structured around three main objectives: (1) produce a weather-based database of cascading droughts and extreme precipitation events that have impacted society and/or ecosystems; (2) use high-resolution Sentinel-1 and Sentinel-2 data to characterize the identified natural disasters and understand their complex dynamics and interactions; (3) develop analytical case studies at different scales to operationalize the use of Earth observation (EO) data for vulnerability assessment and disaster risk reduction.
As part of its commitment to open science, the ARCEME project will ensure that all data products generated within its framework are shared in alignment with the FAIR (Findable, Accessible, Interoperable, and Reusable) data and software principles. This approach is intended to maximize the usability of the project's outputs, fostering broader collaboration and enabling further advancements in disaster risk research and mitigation strategies.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Investigating the Link Between Marine Heat Waves (MHWs) and Compound Drought-Heatwaves (CDHW) events in Europe Using Observation Data

Authors: Fabíola Silva, Jørn Kristiansen, Ana Oliveira, Chunxue Yang, Costanza Bartucca, Charlotte Stark, João Paixão, Beatriz Lopes, Luísa Barros, Inês Girão, Rita Cunha, Manvel Khudinyan, Andrea Storto, Johannes Langvatn, Ingrid Vigna, Sofia Aguiar, Tiago Garcia, Tine Eklund
Affiliations: Colab +Atlantic, Norwegian Meteorological Institute, Italian National Research Council, Institute of Marine Sciences
Weather extremes are occurring with greater frequency and intensity across Europe due to climate change. Events that were once considered rare are becoming more common, with significant implications for adaptation and mitigation strategies. The concurrence of extremes, such as heatwaves and droughts, exacerbates these impacts, creating complex challenges at various scales. Simultaneously, marine heatwaves (MHWs) are increasing in intensity, duration, and frequency, with profound impacts on marine ecosystems and potential links to terrestrial extreme weather events. Ocean and land-based heat extremes show similar warming trends globally, indicating the connectivity between the Earth's subsystems. However, the specific teleconnections that explain such connectivity at the regional scale are still underexplored. How well do we understand the relationship between these three types of extremes? Can we use Earth Observations (EO) to reveal underlying common spatiotemporal features? And can such a discovery allow us to better predict such events, anticipate their impacts and promote early response measures to mitigate them? Within the framework of XHEAT, an ESA-funded project, we explore the relationship between MHWs and Compound Drought-Heatwave (CDHW) events in Europe, examining whether teleconnection patterns serve as a link between oceanic and atmospheric extremes. The goals of XHEAT are to (1) develop novel EO-based extreme event indicators, (2) identify the distribution and frequency of compound extreme events, (3) disclose the drivers and cascade effects of MHWs and summer CDHWs by employing data-driven methods, (4) identify the multiple weather and climate drivers leading to compound eXtreme HEAT in Europe, and (5) investigate the social risks and impacts of compound extreme events, focusing on agriculture and forestry management.
Particularly, we will research whether climate modes and prolonged extreme sea surface temperature (SST) events detected over the North Atlantic-Mediterranean basin are leading to more frequent warm and dry summer conditions, as observed via land surface temperature (LST) and soil moisture (SM) in continental Europe, aiming to illustrate how persistent MHWs in the North Atlantic and Mediterranean basins modify atmospheric stability and lead to CDHW events across Europe. This work emphasizes the critical need for integrating marine and atmospheric datasets in climate studies, offering new insights into the cascading impacts of oceanic changes on societal risks. Future efforts will expand on these findings by developing a comprehensive framework that merges oceanic and atmospheric metrics, aiming to improve real-time monitoring and enhance the probabilistic prediction of compound and cascading extreme events in Europe across various timescales. Keywords: Ocean-atmosphere Interaction, Marine heatwaves (MHWs), Compound Extreme Events, Predictive Capabilities
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: The Role of Earth Observation in Advancing Our Understanding of High Sustained Temperature Leading to Dry Conditions Compound Events: The UK Science Case

Authors: Erin Mills, Dr Roxana Ciurean, Dr Annie Winson, PhD Luke Bateson, Dr Kay Smith
Affiliations: British Geological Survey
Throughout 2022, the UK experienced its driest weather in over 40 years. This culminated in July, when temperatures exceeded 40 degrees Celsius for the first time since records began. The unprecedented hot, dry conditions resulted in hazard and multi-hazard interactions across the region that had not previously been experienced, nor to this extent. Multi-hazards induced by sustained high temperatures in conjunction with below-average precipitation resulted in events with high direct and indirect impacts, and longer residence times, for both society and ecosystems, particularly across the South East of England. Interactions included an increase in wildfire occurrence in response to drought conditions, landslide and subsidence events in response to shrink-swell and heave, and significant flash flooding events associated with rapid drops in temperature over dry surface conditions. The timely and efficient derivation of spatio-temporal information and products from EO data provides instrumental technologies and methodologies for predicting, monitoring and assessing the occurrence of single natural hazard events and their potential impacts. However, the use of EO-derived environmental indicators to characterize the complex causal relationships and underlying mechanisms that result in cascading or compounding multi-hazard impacts is less well understood. Utilizing EO-derived data on climatic and environmental parameters allows for a deeper understanding of interactions both pre- and post-hazard event, and may give detail about how an event unfolds across space and time. This can be demonstrated using time series analysis of single indicators, or of two or more indicators of interrelated hazard events. This work considers interactions across heatwaves, droughts, wildfires, flooding, subsidence and landslides.
Within this study, we aim to contribute to the state of the art by using long-term, open-source EO satellite data to identify trends, thresholds and tipping points within time series of established environmental metrics that indicate the dynamic evolution of a multi-hazard event. This information is supplemented with in-situ observations, in-house models and impact narratives to help identify environmental precursors and chains of effects that may be suggestive of multi-hazard event onset conditions across the SE region of the UK, with the aim of scaling the approach more widely. By utilizing several vulnerability and impact assessment models (such as impact chains) and data repositories, we will demonstrate the utility of EO techniques and datasets in enhancing multi-hazard risk assessment and mitigation management. In this presentation, we briefly introduce the research context, questions, methodological approaches, preliminary results, and future direction of the UK Science Case as part of the High Impact Multi-hazards Science (EO4Multihazards) project funded by the European Space Agency (2023-2026).
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Beyond Extremes: Reconstructing Compound Events and Their Impacts in Europe

Authors: Niklas Luther, Dr. Arthur Hrast Essenfelder, Dr. Andrej Ceglar, Dr. Andrea Toreti, Cosmin Madalin Marina, Dr. Jorge Perez-Aracil, Dr. Odysseas Vlachopoulos, Prof. Dr. Sancho Salcedo-Sanz, Dr. Elena Xoplaki
Affiliations: Climate and Environmental Change, Centre for International Development and Environmental Research, Justus Liebig University, European Commission, Joint Research Centre (JRC), Climate Change Centre, European Central Bank, Department of Signal Processing and Communications, Universidad de Alcalá, Alcalá de Henares, Department of Geography, Justus Liebig University
Compound events are known to significantly amplify socio-economic risks compared to single extreme events. However, studies investigating the interconnections between such events and their impacts often face notable limitations. A common challenge is the predefined classification of events as extreme, such as droughts, heat waves, or floods. This threshold-based approach, often rooted in meteorological, physical, or statistical criteria, can constrain the range of meteorological phenomena considered and potentially disregard events that contribute to observed impacts. Importantly, impactful events may also arise from the combination of non-extreme conditions that fail to meet conventional thresholds. As a result, critical processes driving compound event-related risks may be overlooked. This study addresses these gaps by directly linking climate events to their socio-economic impacts. Specifically, we focus on wet and warm late winters followed by dry and warm springs, examining their role in driving agricultural damages across Europe. Using kernel regularized generalized canonical correlation analysis combined with preimage techniques, we identify dominant spatial patterns associated with crop yield losses. Our findings reveal that the most critical hazard is the combination of a severe April drought, preceded by moderately warm conditions in late winter, which exacerbate the drought's impact. Additionally, we demonstrate that imbalanced random forests can effectively predict these compound events at the local scale and estimate thresholds for the climate variables to reconstruct their impacts. Notably, some of the thresholds for temperature and moisture derived from our analysis are lower than those typically used to define extreme events. This underscores that significant agricultural impacts can result from the interaction of conditions that may appear benign individually but become hazardous when combined in space and time. 
Furthermore, it highlights that focusing exclusively on a single extreme, such as an April drought, risks underestimating the full scope of compound event-related risks. By identifying a valuable precursor, such as late winter moderate warmth, this approach provides critical insights that can enhance the effectiveness of multi-hazard early warning systems.
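As a toy illustration of reconstructing compound events from per-variable thresholds, the sketch below flags years in which two individually non-extreme conditions co-occur. The variables, thresholds and anomaly values are invented for illustration and are not those estimated in the study.

```python
import numpy as np

def compound_event_flag(t_winter, sm_april, t_thresh, sm_thresh):
    """Flag years where a moderately warm late winter (anomaly above
    t_thresh) is followed by a dry April (soil-moisture anomaly below
    sm_thresh). Thresholds would come from a trained classifier; here
    they are illustrative values only."""
    t_winter = np.asarray(t_winter, dtype=float)
    sm_april = np.asarray(sm_april, dtype=float)
    return (t_winter > t_thresh) & (sm_april < sm_thresh)

# Synthetic anomalies over 6 years
t = np.array([0.2, 1.1, 0.9, -0.5, 1.3, 0.4])    # late-winter temperature (K)
sm = np.array([0.1, -0.8, 0.2, -1.0, -0.9, 0.0])  # April soil moisture
flags = compound_event_flag(t, sm, t_thresh=0.8, sm_thresh=-0.6)
print(flags.tolist())  # years 1 and 4 (0-based) meet both sub-extreme criteria
```

Note that neither threshold is "extreme" on its own; the impact signal only emerges from the conjunction, which is the point the abstract makes.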
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Multi-hazard risk assessment for Arctic coastal environments

Authors: Annett Bartsch, Barbara Widhalm, Anna Irrgang, Rodrigue Tanguy, Goncalo Vieira, Ingmar Nitze, Birgit Heim, Mareike Wieczorek, Rustam Khairullin
Affiliations: b.geos, Alfred Wegener Institute for Polar and Marine Research, University of Lisbon
Arctic coastal settlements and infrastructure are vulnerable to coastal erosion, Sea-Level Rise (SLR) and permafrost thaw. It is expected that by 2100, more than two thirds of settlements will be exposed to at least one of these hazards, and more than one third to several risks. Satellite observations are fundamental for such assessments. They can be used to quantify current rates of change for coastal erosion and permafrost thaw. Coastline retreat can be derived semi-automatically from various types of EO data. Within the framework of the ESA Polar Cluster project EO4PAC, Landsat has been shown to be suitable for circumpolar assessments over several decades. Ground temperature change information for the entire northern hemisphere starting from 1997 became available through the ESA CCI+ Permafrost project. The objective of Permafrost_CCI is to develop and deliver permafrost maps, mainly derived from satellite measurements, as ECV products. The associated parameters required by GCOS for the ECV Permafrost are “Depth of active layer (m)” and “Permafrost temperature (K)”. Algorithms have been identified which can provide these parameters by ingesting a set of global satellite data products (Land Surface Temperature (LST), Snow Water Equivalent (SWE), and landcover) into a permafrost model calculating the ground thermal regime. Satellite observations are also fundamental for the consistent identification of settlements and infrastructure across the Arctic. This can be achieved through fusion of AI-derived information from Sentinel-1 (SAR) and Sentinel-2 (multispectral). The 10 m recordings support the identification of roads and human-impacted areas. All of these EO-based records - coastal erosion, ground temperature change and coastal settlement monitoring - continue to be developed, and specifically their combination with additional EO-based records is being explored.
This includes monitoring InSAR subsidence in relation to ground thaw and landcover information from the recently published CALU dataset (Circumarctic Landcover Units, 10 m, fusion of Sentinel-1 and Sentinel-2), as well as extending the validation of permafrost thaw records. The advancement of the multi-risk assessment contributes, for example, to the HORIZON Europe project ILLUQ, led by the Alfred Wegener Institute for Polar and Marine Research. Recent developments and results will be presented.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Monitoring Decline in Aleppo Pine Forests (Pinus halepensis) Using Satellite Image Time Series

Authors: Isabel González, Ph.D Jessica Esteban, Ph.D Hugo Mas, José Luís Tomé, Ph.D Lucia Yañez, Ph.D Mariluz Guillén, Ph.D Nur Algeet, Ph.D Santiago Martín
Affiliations: Agresta S. Coop, Laboratori de Sanitat Forestal. CIEF. VAERSA- Conselleria de Medi Ambient, Aigua, Infraestructures i Territori (Generalitat Valenciana)
This study was conducted as part of the LIFE ADAPT-ALEPPO project (LIFE20 CCA/ES/001809), which aims to develop innovative tools to support the adaptation of Iberian Aleppo pine forests to climate change and demonstrate their practical application. In this context, a methodology was developed to detect and evaluate decline processes in the habitat of Pinus halepensis across the Iberian Peninsula and the Balearic Islands, leveraging time series analyses of Landsat satellite imagery. The approach employs trend analyses based on spectral indices to identify persistent reductions in vegetation vigor that are mild to moderate in severity yet span several years. The study primarily focuses on water stress as the key driver of forest decline, often compounded by biotic agents such as bark beetles, fungi, and mistletoe. To preliminarily validate the detected decline processes, a field campaign was carried out in 2023. Severity values derived from the maps were compared to field data collected in the same plots. A confusion matrix was generated using the diagonal expansion method to account for the challenges in delineating severity classes. For forest stands with a canopy cover ≥ 60%, the model achieved an overall accuracy of 0.84, despite some classes displaying high omission and commission errors. The results reveal the areas affected by forest decline as of spring 2023 and classify these areas into various severity levels. This methodology offers the potential for continuous annual monitoring, enabling the early identification of forest stands most vulnerable to biotic agent attacks and supporting the implementation of timely preventive measures.
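A minimal version of the trend analysis described above is a per-pixel linear slope over an annual stack of a spectral index, with persistent negative slopes indicating declining vigor. This is a generic sketch with invented values, not the project's actual processing chain.

```python
import numpy as np

def vigor_trend(index_stack, years):
    """Per-pixel linear trend (slope per year) of a spectral-index time
    series, e.g. annual NDVI composites. index_stack has shape
    (n_years, rows, cols). A persistent mildly negative slope over
    several years is the kind of decline signal the method targets."""
    years = np.asarray(years, dtype=float)
    t = years - years.mean()
    stack = np.asarray(index_stack, dtype=float)
    # Vectorised least-squares slope: sum(t * (x - mean_x)) / sum(t^2)
    anomalies = stack - stack.mean(axis=0)
    return np.tensordot(t, anomalies, axes=(0, 0)) / np.sum(t**2)

years = np.arange(2015, 2023)
# Synthetic 2x2 scene: one pixel declining by 0.02 index units/yr
stack = np.full((8, 2, 2), 0.7)
stack[:, 0, 0] = 0.7 - 0.02 * np.arange(8)
slopes = vigor_trend(stack, years)
print(np.round(slopes[0, 0], 3))  # ≈ -0.02
```

In practice a robust estimator such as Theil-Sen is often preferred over ordinary least squares for noisy satellite time series, but the idea is the same.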
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F1)

Session: A.04.01 Estimating and observing local-scale GHG emissions - PART 2

Recent advances in Earth Observation capabilities have revolutionized our ability to detect CO2 and methane emissions at facility-to-city scale and from individual emission sources.

These localised sources often present good potential for rapid mitigation and are thus high priority targets for rapid action in response to the Paris Agreement. International policy initiatives such as the European Green Deal and the Global Methane Pledge set targets for emissions reductions that are increasingly backed by legislation, with a requirement for monitoring and reporting systems in relevant industries.

Various space-based observations are suitable for estimating methane emissions from, e.g., landfills and the oil & gas and coal mining industries, and increasingly also for CO2 emissions from, e.g., power plants and cities. However, the observing system is characterised by a large and increasing diversity of satellite instruments and retrieval algorithms, including substantial involvement of New Space. Efforts to integrate and harmonise facility-scale emission estimates, to develop estimates of uncertainty, including for increasingly prevalent AI techniques, and to develop good practice both within and across satellite platforms are rapidly evolving.

This session aims to present an overview of topics related to estimating emissions of CO2 and CH4 from sources ranging from point sources up to megacities in spatial extent. It will showcase recent scientific advances, including new instruments, new detection and retrieval methods, their uncertainties, validation, data uptake, and efforts towards integration into global initiatives.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F1)

Presentation: AI for methane emission detection and mitigation

Authors: Anna Vaughan, Gonzalo Mateo-Garcia, Dr. Itziar Irakulis-Loitxate, Marc Watine, Pablo Fernandez-Poblaciones, Richard Turner, James Requeima, Dr. Javier Gorroño, Cynthia Randles, Luis Guanter, Manfredi Caltagirone, Claudio Cifarelli
Affiliations: United Nations Environment Programme, University of Cambridge, Vector Institute, Universitat Politecnica de Valencia
Mitigating methane emissions is the fastest way to stop global warming in the short term and buy humanity time to decarbonise. A potent greenhouse gas, methane is responsible for over 25% of the warming experienced to date. A distinct feature of methane amongst the greenhouse gases is that a few sources contribute a large percentage of emissions, providing the potential for substantial reduction by mitigation action at relatively few locations. Remarkably, despite their climate impact and increasing public awareness after events such as the Nord Stream pipeline leak or the record-breaking Kazakhstan blowout, systematic documentation and regular detection of emissions from these sites has not been available. This is primarily due to the challenges of detecting methane in vast satellite datasets. While a variety of satellites are capable of detecting methane, only Sentinel-2 and Landsat currently offer the combination of high revisit frequency and spatial resolution needed for global detection and attribution of emissions. Artificial intelligence (AI) presents a promising pathway for automatically detecting emissions in these datasets. However, this potential remains unrealized due to significant challenges. Key obstacles include detecting weak signals in multispectral imagery, the lack of a large, verified global dataset of methane plumes in multispectral data, and the difficulty of fostering collaboration with governments and asset owners to respond to alerts. In this study, we introduce MARS-S2L, a machine learning model for global methane emission detection in Sentinel-2 and Landsat multispectral imagery. We hand-annotate an extensive dataset of over 53,000 satellite images, including 4,230 emission events from 707 distinct emitter sites across diverse global regions, and use this to train a probabilistic model for emission detection.
Globally, MARS-S2L achieves a mean average precision of 0.67, compared to 0.31 for a previous state-of-the-art emission detection model developed for ideal background conditions. Taking a threshold of 0.5 for binary classification, MARS-S2L achieves an accuracy of 0.90, recall of 0.77, precision of 0.37, and a 9% false positive rate, compared to 0.91, 0.42, 0.32 and 6% for the baseline. By flux rate, the recall of the model is slightly lower for weak plumes: 0.75 for plumes between 1 and 4 t/h (50% of the test data), rising to 0.85 for large plumes (>4 t/h). Evaluation on the Stanford controlled release experiments demonstrates similar performance, capturing some of the weak releases and all of the large ones (>3 t/h). In addition, we assess the performance of MARS-S2L when generalising to 172 new locations unseen at training time. At these locations MARS-S2L achieves a mean average precision of 0.58, demonstrating solid generalisation to unseen sites. We present in-depth case studies of systematically labelled data in Turkmenistan, the Permian Basin and for offshore platforms. MARS-S2L is deployed operationally within the United Nations Environment Programme’s Methane Alert and Response System (MARS), as part of the International Methane Emissions Observatory (IMEO). During a six-month trial period, the model achieved 457 near-real-time detections across 22 countries, with 71 of these resulting in formal notifications to governments and stakeholders. A notable success story from MARS has been the mitigation of a persistent super-emitter in Algeria. This source, responsible for releasing approximately 27,000 tonnes of methane annually for over a decade, was successfully mitigated following notifications based on MARS-S2L detections. Eliminating this single leak is equivalent to removing approximately 480,000 vehicles from U.S. roads or neutralizing the methane emissions of the entire countries of Estonia, Albania or Iceland.
This demonstrates the potential for AI to assist in real world action reducing emissions, both for methane and greenhouse gases more broadly.
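The headline figures above (accuracy, recall, precision, false positive rate) follow the standard binary-detection definitions; the sketch below computes them from probabilistic scores at a fixed threshold. It is a generic illustration with made-up labels and scores, not the MARS-S2L evaluation code.

```python
import numpy as np

def detection_metrics(y_true, y_score, threshold=0.5):
    """Binary detection metrics from probabilistic scores: a detection
    is declared where the score meets the threshold, then accuracy,
    recall, precision and false-positive rate are computed from the
    resulting confusion counts."""
    y_pred = np.asarray(y_score) >= threshold
    y_true = np.asarray(y_true, dtype=bool)
    tp = np.sum(y_pred & y_true)    # plumes correctly detected
    tn = np.sum(~y_pred & ~y_true)  # clean scenes correctly rejected
    fp = np.sum(y_pred & ~y_true)   # false alarms
    fn = np.sum(~y_pred & y_true)   # missed plumes
    return {
        "accuracy": (tp + tn) / len(y_true),
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
        "fpr": fp / (fp + tn),
    }

m = detection_metrics([1, 1, 0, 0, 1, 0], [0.9, 0.4, 0.6, 0.2, 0.8, 0.1])
print(m["recall"], m["fpr"])  # 2/3 recall, 1/3 FPR
```

Mean average precision, by contrast, sweeps the threshold rather than fixing it, which is why the abstract reports it separately from the threshold-0.5 figures.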
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F1)

Presentation: Detection of European Solid Waste Landfill Infrastructure Using Deep Learning and Sentinel-2 Satellite Imagery to Support Methane Reduction

Authors: Emma Schoenmakers, Abdullah Al Nayeem, Anchal Chawla, Stijn Dellaert, Hugo Denier van der Gon
Affiliations: Department of Air Quality & Emissions Research, Netherlands Organisation for Applied Scientific Research (TNO), Spatial Health Lab, College of Health, Oregon State University, D2Win
Global methane emissions have been on the rise for the past 20 years, with the waste management sector playing a key role in their anthropogenic component. In Europe, methane emissions from the waste sector have remained stagnant, despite the continent's commitment to the Global Methane Pledge to reduce methane emissions across sectors by 30% by 2030 compared to 2020 levels. This trend highlights the urgent need to reduce uncertainties in methane emission reporting and to improve infrastructure data to support targeted mitigation strategies. To address these gaps, we explored the potential of advanced satellite technology combined with open-source datasets, such as those for solid waste landfill data, integrated within the TNO Global Point Source (TNO-GPS) emission inventory, to refine the location accuracy of methane-emitting facilities. With the anticipated deployment of the Twin Anthropogenic Greenhouse Gas Observer (TANGO), supported by prior information from TNO-GPS, refining high-resolution infrastructure data - especially for methane-intensive sources like landfills - will be essential for precise and timely emission monitoring at the individual facility level. This study built on a deep learning approach using a custom U-Net model, trained on Sentinel-2 satellite imagery from 10 bands and a manually curated, ground-truthed dataset of as few as 100 known landfill and non-landfill sites across Europe, to perform binary classification for landfill detection. When applied to over 900 potential European landfill sites, our model correctly detected more than 618 landfills and flagged potentially false location coordinates for over 60 point sources, with a test accuracy of 73% across diverse backgrounds. These findings underline the importance of refining landfill location data for methane reporting, but also the importance of routine detection of false positives to minimize satellite mis-targeting.
Future efforts will focus on expanding geographic scope to the globe and refining methodologies to enhance detection sensitivity and accuracy. By addressing infrastructure-level uncertainties, this approach offers a scalable, data-driven pathway to support global methane reduction goals.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F1)

Presentation: Unveiling Methane Emissions from Geostationary satellite: The Monitoring Potential of MTG-FCI

Authors: Shanyu Zhou, Javier Gorroño, Dr. Itziar Irakulis-Loitxate, Rasmus Lindstrot, Luis Guanter
Affiliations: Research Institute of Water and Environmental Engineering (IIAMA), Universitat Politècnica de València (UPV), EUMETSAT
The Meteosat Third Generation Flexible Combined Imager (MTG-FCI) offers a unique combination of temporal, spatial, and spectral capabilities, acquiring data every 10 minutes over Europe and Africa with spatial sampling between 500 and 1000 m at nadir. As a multispectral imager spanning solar and thermal infrared regions, including near- and shortwave infrared wavelengths, the FCI provides a promising platform for the detection and monitoring of methane emissions, complementing missions such as Sentinel-2 and Sentinel-3/SLSTR. This study evaluates the potential of MTG-FCI for methane detection, focusing on retrieval thresholds, accuracy assessments, and the instrument's limitations. Two retrieval approaches were tested: the multi-band multi-pass (MBMP) band-ratio method and the mono-temporal matched-filter retrieval. Using simulated methane plumes, an end-to-end sensitivity analysis was conducted for two contrasting regions within the MTG-FCI disk: Niger, characterized by flat, homogeneous desert terrain, and Algeria, with relatively heterogeneous surfaces. In Niger, the matched-filter retrieval achieved a 1-sigma retrieval error of 0.49 ppb, slightly outperforming the MBMP method (0.53 ppb). In Algeria, the matched-filter error increased to 1.01 ppb due to surface variability, while the MBMP method maintained a comparable error (0.52 ppb) to that in Niger. These findings suggest that the matched-filter method excels in flat terrains, while the MBMP approach is more robust in complex landscapes, effectively mitigating surface-induced variability. In a real-world scenario, the MTG-FCI detected a transient methane release from a compressor station in Algeria, corroborated by independent VIIRS observations, demonstrating its capability for operational monitoring. Additionally, simulations of point source emissions with flux rates between 0 and 70 t/h showed that the MBMP method could reliably detect plumes at 40 t/h, with detection limits of 30-40 t/h in Algeria. 
These results underscore the potential of the MTG-FCI for monitoring large methane emissions, offering opportunities for near-real-time detection, continuous tracking, and improved response strategies to mitigate methane emissions in line with global climate objectives.
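The mono-temporal matched-filter retrieval mentioned above has a standard textbook form: each mean-subtracted pixel spectrum is projected onto a target signature, whitened by the scene covariance. The sketch below illustrates that estimator on synthetic data; the band count, signature values and units are invented, and this is not the MTG-FCI processing chain.

```python
import numpy as np

def matched_filter(radiance, target):
    """Classic matched-filter enhancement retrieval.
    radiance: (n_pixels, n_bands) scene spectra; target: (n_bands,)
    unit-enhancement signature (e.g. background mean scaled by the CH4
    unit absorption). Returns a per-pixel enhancement estimate
    alpha = (x - mu)^T S^-1 t / (t^T S^-1 t)."""
    mu = radiance.mean(axis=0)
    X = radiance - mu
    cov = X.T @ X / (radiance.shape[0] - 1)       # scene covariance
    cov_inv = np.linalg.inv(cov + 1e-8 * np.eye(cov.shape[0]))
    return (X @ cov_inv @ target) / (target @ cov_inv @ target)

rng = np.random.default_rng(0)
n_pix, n_bands = 500, 4
background = rng.normal(1.0, 0.01, size=(n_pix, n_bands))
target = np.array([-0.02, -0.05, -0.01, 0.0])  # toy absorption signature
scene = background.copy()
scene[0] += 3.0 * target                       # inject a plume pixel
alpha = matched_filter(scene, target)
print(round(float(alpha[0]), 1))
```

The estimator recovers an enhancement near the injected value of 3 for the plume pixel and values scattered around zero elsewhere; over heterogeneous surfaces the covariance inflates and the retrieval noise grows, which is the behaviour the abstract reports for Algeria versus Niger.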
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F1)

Presentation: Inverse Modelling of CH4 Emissions in the Upper Silesian Coal Basin Using WRF and Airborne Lidar

Authors: Sebastian Wolff, Friedemann Reum, Christoph Kiemle, Gerhard Ehret, Mathieu Quatrevalet, Martin Wirth, Andreas Fix
Affiliations: German Aerospace Center (DLR, Institute of Atmospheric Physics)
Methane (CH4) is the second most important anthropogenic greenhouse gas (GHG) with respect to radiative forcing. Atmospheric concentrations of CH4 have shown a significant increase over the past decades. A large fraction of global anthropogenic CH4 emissions originates from point sources, e.g. coal mine ventilation shafts. International treaties foresee GHG emission reductions, entailing independent monitoring and verification support capacities. Considering the spatially widespread distribution of point sources, airborne remote sensing approaches are favorable, in order to enable rapid survey of larger areas. In this respect, active remote sensing by airborne lidar is promising, such as provided by the integrated-path differential-absorption lidar CHARM-F operated by DLR. Installed onboard the German research aircraft HALO, CHARM-F serves as a demonstrator for the upcoming German-French satellite mission MERLIN (MEthane Remote sensing Lidar missioN). CHARM-F measures weighted vertical column mixing ratios of CO2 and CH4 below the aircraft. In spring 2018, measurements were taken in the Upper Silesian Coal Basin (USCB) in Poland. The USCB is considered to be a European hotspot of CH4 emissions, covering an area of approximately 50 km × 50 km. Coal mining releases CH4, which is emitted from the mines through ventilation shafts. Determining their emission rates is challenging. On the one hand, measurements are relatively sparse in relation to the high number of coal mines. On the other hand, the ventilation shafts are located in close proximity to each other, causing their exhaust plumes to overlap and become unattributable. To cope with these difficulties in estimating the emission rates of individual shafts, we apply a Bayesian inverse modelling approach. Specifically, we employ the Weather Research and Forecast Model (WRF) coupled to the CarbonTracker Data Assimilation Shell (CTDAS), an Ensemble Kalman Filter. 
CTDAS-WRF propagates an ensemble realization of the a priori CH4 emissions forward in space and time, samples the simulated CH4 concentrations along the measurement’s flight path, and updates the a priori emission rates to achieve the best possible fit to the measured values. As a result, we obtain a regularized a posteriori best emission estimate for the individual ventilation shafts. The analysis involves two inversion runs. In the initial run, all ventilation shafts are treated as separate state vector elements at the grid-cell resolution. The extent to which individual sources can be disentangled depends on the spatial distribution of the measurements relative to the prevailing atmospheric transport conditions. In cases where the initial inversion cannot disentangle individual sources — indicated by strong anti-correlation in the posterior covariance matrix — the affected state vector elements are clustered. This is followed by a second run, in which each cluster of sources is treated as a single state vector element. This two-step process advances the capabilities for disentangling emission sources while maintaining consistency with the data's spatial and transport-driven constraints. Here, we report on the results of this inverse modelling approach, including individual and clustered emission estimates and their uncertainties.
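The ensemble update at the heart of such a system can be sketched with a minimal stochastic ensemble Kalman filter: the gain is built from ensemble cross-covariances between emissions and simulated observations. Everything below (footprint matrix, emission values, error variances) is a toy setup, not the CTDAS-WRF configuration.

```python
import numpy as np

def enkf_update(ensemble, sim_obs, y, obs_err_var):
    """Stochastic ensemble Kalman filter update of emission rates.
    ensemble: (n_members, n_sources) prior emission realizations;
    sim_obs: (n_members, n_obs) simulated concentrations per member
    (the transport-model sampling along the flight path);
    y: (n_obs,) measured values. A minimal textbook EnKF."""
    rng = np.random.default_rng(1)
    Xp = ensemble - ensemble.mean(axis=0)   # state perturbations
    Hp = sim_obs - sim_obs.mean(axis=0)     # obs-space perturbations
    n = ensemble.shape[0] - 1
    P_xh = Xp.T @ Hp / n                    # state/obs cross-covariance
    P_hh = Hp.T @ Hp / n                    # obs-space covariance
    R = obs_err_var * np.eye(len(y))
    K = P_xh @ np.linalg.inv(P_hh + R)      # Kalman gain
    # Update each member against perturbed observations
    y_pert = y + rng.normal(0.0, np.sqrt(obs_err_var), size=sim_obs.shape)
    return ensemble + (y_pert - sim_obs) @ K.T

# Toy example: two shafts observed through a linear footprint matrix H
H = np.array([[1.0, 0.2], [0.3, 1.0], [0.6, 0.6]])
truth = np.array([5.0, 2.0])                        # true emissions (t/h)
rng = np.random.default_rng(0)
prior = rng.normal([3.0, 3.0], 1.0, size=(200, 2))  # prior ensemble
sim = prior @ H.T                                   # simulated observations
y = H @ truth                                       # (noise-free) measurements
post = enkf_update(prior, sim, y, obs_err_var=0.01)
print(np.round(post.mean(axis=0), 1))  # posterior mean moves toward the truth
```

The anti-correlation criterion from the abstract corresponds to strongly negative off-diagonal terms in the posterior covariance of the state, signalling that only the sum of the two sources is constrained, at which point clustering them is the natural remedy.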
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F1)

Presentation: Detection and attribution of transient methane emissions using tiered Machine Learning models with TROPOMI and Sentinel-2 data

Authors: Berend J. Schuit, Joannes D. Maasakkers, Tobias A. de Jong, Matthieu Dogniaux, Jason McKeever, Dylan Jervis, Daniel J. Varon, Ilse Aben
Affiliations: SRON Netherlands Institute for Space Research, GHGSat Inc., School of Engineering and Applied Sciences, Harvard University, Department of Earth Sciences, Vrije Universiteit Amsterdam
In order to achieve the Global Methane Pledge target of reducing anthropogenic methane emissions by 30% by 2030, prompt and significant action is required. Mitigating the largest methane super-emitters provides an important opportunity for significant short-term emission reduction. Large emission events from the gas industry are a key part of these emissions but are difficult to characterize as they tend to only last a few hours. We provide a first estimate of regional emissions from gas transmission using a two-step machine learning approach analyzing both the global TROPOMI methane data as well as the high-resolution Sentinel-2 archive. We use an improved version of the TROPOMI machine learning models developed in Schuit et al. 2023 [1] to detect methane super-emitter plumes in TROPOMI data (daily global coverage at 7 x 5.5 km² resolution). Using this approach, we find emissions from a large number of persistent emitters, where targeted high-resolution instruments (~25 m resolution) can be used to pinpoint emissions. However, we also find that about a quarter of detected emission plumes comes from large transient events such as large short-term blowdowns from compressor stations, where “tip-and-cue” of future observations of high-resolution satellites is not feasible. Here we focus on exactly those plumes, identifying over 1000 transient emission plumes in 2018-2024 TROPOMI data. To be able to identify the facilities responsible for these plumes, we use the large archive of Sentinel-2 (MSI, global coverage every 5 days at 20m resolution) data that provides the ability to pinpoint the largest methane emissions. To automate this follow-up process, we have developed a Convolutional Neural Network to scan the Sentinel-2 data to detect and localize methane plumes. We have trained this model using realistic synthetic modeled emission plumes which are embedded in a background of real Sentinel-2 observations. 
We use a diverse range of surface types as background to make sure that the model can deal with complicating factors such as clouds, snow, waterbodies and agricultural fields to make it suitable for global application. Combined, the machine-learning-based analysis of these two Copernicus satellite datasets provides an unprecedented look at difficult-to-capture transient emissions. We will show both our long-term regional analysis focused on quantifying total detected emissions as well as application to recent data demonstrating the ability of our system to quickly pinpoint large emission events to individual transmission facilities. [1] Schuit, B. J.; Maasakkers, J. D.; Bijl, P.; Mahapatra, G.; Van den Berg, A.-W.; Pandey, S.; Lorente, A.; Borsdorff, T.; Houweling, S.; Varon, D. J.; McKeever, J.; Jervis, D.; Girard, M.; Irakulis-Loitxate, I.; Gorroño, J.; Guanter, L.; Cusworth, D. H.; Aben, I.: Automated detection and monitoring of methane super-emitters using satellite data. Atmos. Chem. Phys., 23, 9071-9098, https://doi.org/10.5194/acp-23-9071-2023, 2023.
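Embedding a synthetic plume into real observations, as described above, can be illustrated with a simple Beer-Lambert-style attenuation of band radiances by the simulated methane enhancement. The exact radiative treatment used by the authors may differ; the coefficients and values here are invented.

```python
import numpy as np

def embed_plume(background, plume_enhancement, unit_absorption):
    """Embed a synthetic methane plume into a real radiance band via
    Beer-Lambert attenuation: r' = r * exp(-k * dX).
    background: (rows, cols) observed radiance; plume_enhancement:
    (rows, cols) simulated column enhancement; unit_absorption: scalar
    per-band absorption coefficient (illustrative value)."""
    return background * np.exp(-unit_absorption * plume_enhancement)

bg = np.full((4, 4), 100.0)                 # flat background radiance
plume = np.zeros((4, 4))
plume[1:3, 1:3] = 2.0                       # simulated plume core
out = embed_plume(bg, plume, unit_absorption=0.05)
print(round(float(out[1, 1]), 2), float(out[0, 0]))  # → 90.48 100.0
```

Because the plume only darkens the absorbing band, training labels are known exactly, while the background keeps all the real-world confounders (clouds, water, field boundaries) the model must learn to ignore.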
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F1)

Presentation: Evaluation Of Methane Emission Estimation from Super-Resolved Sentinel-5P L1 Using Hyperspectral Data

Authors: Daniele Settembre, Dott Davide De Santis, Prof Fabio Del Frate
Affiliations: "Tor Vergata" University of Rome
The aim of this work is to estimate methane concentration in the atmosphere using hyperspectral data and machine learning techniques. This study focuses on the advanced monitoring of atmospheric methane emissions by integrating multi-resolution satellite data and employing innovative machine learning approaches. Key datasets utilized include GHGSat high-resolution optical sensor data, PRISMA hyperspectral satellite data, and Sentinel-5P observations. These data sources were leveraged to develop a super-resolution product capable of enhancing the spatial detail of methane concentration maps. Methane concentration estimates have been successfully derived from PRISMA data, which has a spatial resolution of 30 m x 30 m, using the MAG1C algorithm. This methodology, as demonstrated in previous work (Settembre et al., 2024), has yielded significant and encouraging results. To validate these estimates, the regions identified by PRISMA as having increased methane concentrations were cross-referenced with the corresponding pixels from the GHGSat demonstrative product, which has a spatial resolution of approximately 4 km, and with a bicubic upsampling of the Sentinel-5P product to match the GHGSat resolution. The PRISMA estimates were resampled through a weighted average to align spatial resolutions and ensure consistency for comparison. The comparative analysis, focusing on petrochemical plants and waste management sites, revealed a strong agreement between PRISMA and GHGSat over a range of backgrounds, with slight discrepancies observed with Sentinel-5P due to its coarser spatial resolution. The bias between the values obtained from Sentinel-5P and GHGSat, compared to those calculated using the algorithm, indicates a mismatch of 22 ppb and 5 ppb, respectively. These biases are relatively small and can be considered negligible within the scale of interest.
Initially, validation efforts primarily utilized GHGSat data from its demonstrative version, which provided useful but limited resolution for comparison. Given the promising results of the initial comparison, an ESA Third Party Mission project was proposed and accepted (ESA EarthNet - Project 92210), which provided the opportunity to access GHGSat data with higher resolution and improved precision. These data enabled validation efforts that demonstrated good agreement between GHGSat and PRISMA-derived estimates, with local methane emission differences reaching up to 300 kg/hr. For Sentinel-5P, which provides a coarser resolution of 3.5 x 7.5 km, a super-resolution algorithm was applied to the SWIR bands, showing promising preliminary results. Notably, tests conducted on 300 m resolution products yielded a Root Mean Square Error (RMSE) of 0.186, indicating an excellent correlation between original and super-resolved data. GHGSat data exhibited a notable capability to detect local emission sources with smaller plume dimensions, which explains occasional overestimations by PRISMA due to its sensitivity to smaller source areas, where higher concentrations are detected. Comparisons with super-resolved radiance-derived estimates further confirmed the robustness of the approach, yielding encouraging results. The innovation of this work lies in the integration of multi-resolution datasets and advanced algorithms to enhance methane monitoring at local scales, overcoming limitations of current data. The results have significant implications for science, climate policy, and operational services, offering a refined understanding of methane emission dynamics. This study demonstrates the potential of combining hyperspectral data with machine learning to improve atmospheric methane monitoring, contributing to effective mitigation strategies in combating climate change. Reference: D. Settembre, D. De Santis and F.
Del Frate, "Exploring Methane Column Estimation from Prisma Data," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 3700-3703, doi: 10.1109/IGARSS53475.2024.10640582.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.15/1.16)

Session: A.07.06 Monitoring river discharge variability in a context of climate change: bridging satellite technology, ground data and modelling

River discharge is recognized by the Global Climate Observing System as an Essential Climate Variable due to its crucial role in regulating the flow of freshwater into the oceans, influencing salinity and thermohaline circulation, which are fundamental components of the Earth's climate system. In addition to its importance for water resource management and flood forecasting, the impact of climate change on river discharge has received increasing interest, and understanding it is needed to develop effective adaptation strategies. However, the decline in ground-based monitoring networks since the 1980s has made it increasingly difficult to accurately estimate river discharge, a challenge exacerbated by the rise in extreme weather events such as floods and droughts.
Satellite remote sensing, including altimetry, optical, and multispectral sensors, has emerged as a valuable and effective solution for global river discharge monitoring, providing extensive spatial coverage and long-term data. Multi-mission approaches enhance the spatial and temporal resolution as well as the accuracy of these estimates. However, it is essential to integrate remote sensing data with ground-based observations and hydrological models to achieve a more comprehensive and long-term understanding of river discharge.
The European Space Agency (ESA), through its Climate Change Initiative, is making significant investments in the study of river discharge with the aim of providing long-term data that are crucial for analysing trends and understanding the impacts of climate change. By combining satellite technologies with traditional methods, researchers can deepen their understanding of river dynamics and their role in the Earth's climate, thereby supporting the development of more effective climate adaptation strategies.
This session is dedicated to presenting advances in river discharge estimation from space and will cover algorithm development, novel and merged products, and applications including assimilation of satellite and in situ data in models. The final goal is to outline future directions and to guide upcoming investments in the satellite sector, with a particular focus on improving river discharge estimation, also in the context of climate change. Participation is open to all who wish to contribute and share scientific requirements, with the goal of identifying the necessary levels of precision, accuracy, temporal resolution, and spatial scale of river discharge as an essential climate variable, needed for effective climate studies.

Wednesday 25 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Twenty years of satellite-based discharge time series for small ungauged rivers with uncertainty quantification

Authors: Daniel Scherer, Christian Schwatke, Denise Dettmering, Florian Seitz
Affiliations: DGFI-TUM
Long-term discharge time series are essential to identify and adapt to the most significant potential effects of climate change. However, the availability of in-situ discharge records has declined over the past decades. While many recent approaches to remotely sensed discharge require data from the new SWOT mission, we developed an optimization approach to estimate discharge time series using space-based sensors available for the past 20 years. It combines multi-mission nadir satellite altimetry with high-resolution satellite imagery. The method is suitable for filling spatial gaps in the sparse in-situ network since it only requires approximate boundary conditions but no prior calibration data. We construct the high-resolution river bathymetry and cross-sectional geometry using deep-learning image classification and a hypsometric curve. With the 3-meter image resolution, we can detect variations even in small rivers (>50 m wide). The average and extreme river slope can be observed with lidar or wide-swath altimetry. The unobserved part of the bathymetry and the unknown roughness coefficient are optimized based on the principle of mass conservation. Assuming steady and gradually varied flow, we calculate flow velocity and discharge using the Manning equation. By accounting for errors and uncertainties in the different input quantities, we provide realistic uncertainties, which are crucial for data assimilation. The approach requires minimal input and is applied globally to various river sections, yielding a median normalized root mean square error (NRMSE) of 12%. The 90% uncertainty range includes 91% of the in-situ validation data. Additionally, we discuss the feasibility and impact of using time-variable river slopes. With the new SWOT mission, we identify reaches with a stable stage-slope relationship, which we use to reconstruct a slope time series for the period before the launch of SWOT.
We expect a decrease in uncertainty of up to 20%, reduced errors, and a more constrained optimization process, further reducing the required expert input.
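The Manning step described above can be sketched as follows (a minimal illustration in SI units; the function and variable names are ours, not the authors' code):

```python
def manning_discharge(n, area, hydraulic_radius, slope):
    """Discharge from the Manning equation for steady, gradually varied flow:
    v = (1/n) * R^(2/3) * S^(1/2),  Q = A * v.
    n: roughness coefficient [-], area: cross-sectional area A [m^2],
    hydraulic_radius: R [m], slope: water surface slope S [-]."""
    velocity = (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
    return area * velocity

# Hypothetical cross-section: n=0.03, A=100 m^2, R=2 m, S=1e-4.
print(manning_discharge(0.03, 100.0, 2.0, 1e-4))  # ~52.9 m^3/s
```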

Wednesday 25 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Progress towards satellite requirements to capture water propagation in Earth’s rivers

Authors: Arnaud Cerbelaud, Cédric H. David, Tamlin Pavelsky, Sylvain Biancamaria, Pierre-André Garambois, Cécile Kittel, Fabrice Papa, Paul Bates, Mohammad J. Tourian, Michael Durand, Renato Frasson, Hind Oubanas, Guy Schumann, George Allen, Angelica Tarpanelli, Sly Wongchuig, Jeffrey Wade, Manu Tom, Pierre-Olivier Malaterre, Jean-Francois Crétaux, Karina Nielsen, Omid Elmi, Christian Schwatke, Daniel Scherer, Antara Dasgupta, Simon Munier, Dongmei Feng, Peter Bauer-Gottwein, Faisal Hossain, Dai Yamazaki, Benjamin Kitambo, Simon Mischel, Laetitia Gal, Adrien Paris, Matthew Bonnema, Kaushlendra Verma, Colin J. Gleason, Konstantinos Andreadis, Nicolas Picot, Ernesto Rodriguez, Jérôme Benveniste
Affiliations: NASA Jet Propulsion Laboratory, Caltech, Department of Earth, Marine and Environmental Sciences, University of North Carolina, Université de Toulouse, LEGOS (CNES/CNRS/IRD/UT3), INRAE, RECOVER, Aix-Marseille University, DHI A/S, School of Geographical Sciences, University of Bristol, Institute of Geodesy, University of Stuttgart, School of Earth Sciences, and Byrd Polar and Climate Research Center, The Ohio State University, UMR G-EAU - AgroParisTech, BRGM, Cirad, IRD, INRAE, Institut Agro Montpellier, Univ Montpellier, RSS-Hydro, Virginia Polytechnic Institute and State University, CNR/IRPI, Department of Space Research and Technology, Technical University of Denmark, Deutsches Geodätisches Forschungsinstitut der Technischen Universität München (DGFI-TUM), Institute for Hydraulic Engineering and Water Resource Management, RWTH Aachen University, CNRM, Université de Toulouse, Météo-France, CNRS, College of Engineering and Applied Science, University of Cincinnati, Technical University of Denmark, University of Washington, Department of Civil and Environmental Engineering, Global Hydrological Prediction Center, Institute of Industrial Science, The University of Tokyo, Congo Basin Water Resources Research Center (CRREBaC), Global Runoff Data Centre (GRDC), Federal Institute of Hydrology (BfG), Hydro-Matters, University of Massachusetts, CNES, COSPAR
The water in Earth’s rivers propagates as waves through space and time across global hydrographic networks. A detailed understanding of river dynamics globally is essential for achieving accurate knowledge of surface water storage and fluxes to support water resources management and water-related disaster forecasting and mitigation. Global in situ data are crucial to support such an investigation but remain difficult to obtain at adequate spatiotemporal scales, if they even exist. Many expectations are placed on remote sensing techniques as key contributors. Despite a rapid expansion of satellite capabilities, however, it remains unclear what temporal revisit, spatial coverage, footprint size, spatial resolution, observation accuracy, latency time, and variables of interest from satellites are best suited to capture the space-time propagation of water in rivers. Additionally, the ability of numerical models to compensate for data sparsity through model-data fusion remains elusive. We review recent efforts to identify the type of remote sensing observations that could enhance understanding and representation of river dynamics. Key priorities include: (a) resolving narrow water bodies (< 50-100 m), (b) further analysis of signal accuracy versus hydrologic variability and relevant technologies (optical/SAR imagery, altimetry, microwave radiometry), (c) achieving 1-3 day observation intervals, (d) leveraging data assimilation and multi-satellite approaches using active constellations, and (e) measurement of new variables for accurate water flux and discharge estimates. We recommend a hydrology-focused, multi-mission observing system comprising: (1) an advanced single or dual-satellite mission for fine-scale surface water measurements, and (2) a cost-effective multi-satellite constellation targeting dynamic processes.

Wednesday 25 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Toward a Global Scale Runoff Estimation Through Satellite Observations: the STREAM Model

Authors: Francesco Leopardi, Luca Brocca, Carla Saltalippi, Jacopo Dari, Karina Nielsen, Peyman Saemian, Nico Sneeuw, Mohammad J. Tourian, Marco Restano, Jérôme Benveniste, Stefania Camici
Affiliations: National Research Council, Research Institute for Geohydrological Protection, Dept. of Civil and Environmental Engineering, University of Perugia, Division of Geodesy, National Space Institute, Technical University of Denmark, Institute of Geodesy, University of Stuttgart, SERCO, ESA-ESRIN, COSPAR
Climate change alters familiar environments and impacts our daily lives. In these circumstances, it is essential to monitor river discharge for a range of activities, including water resource management and flood risk reduction. However, in-situ stations have some limitations, such as low density, incomplete temporal coverage, and data access delays, which make continuous spatio-temporal monitoring of river discharge a challenging task. For this reason, researchers and space agencies have developed new satellite-based methods for estimating runoff and river discharge. Among these, the European Space Agency (ESA) has funded the STREAM (SaTellite-based Runoff Evaluation And Mapping) and STREAM-NEXT projects, which exploit satellite observations of precipitation, soil moisture, terrestrial water storage, altimetric water level, and snow cover fraction within a conceptually parsimonious model, STREAM, to estimate runoff and river discharge. Applied to more than 40 basins worldwide, including some of the largest basins in the world (e.g., Mississippi-Missouri, Amazon, Danube, Murray-Darling, and Niger), the STREAM model has shown good ability to replicate observed river discharge, even in basins with a high degree of human pressure where flow is regulated by dams, reservoirs, or floodplains, or in heavily irrigated areas. The positive results achieved have paved the way for regionalizing the parameters of the STREAM model to make it applicable on a global scale. Through the calibration of the STREAM model on the 40 pilot catchments, it was possible to obtain a large set of parameters that were linked, through specific relationships, to various features including climate, soil characteristics, vegetation and topographic attributes. This approach yielded regionalized STREAM parameters. This study aims to evaluate the efficacy of the STREAM runoff and river discharge estimates, derived from regionalized parameters, across a diverse range of basins.
To this end, a comparative analysis will be conducted between observed and simulated river discharge, as well as between simulated and modeled land surface runoff estimates. This contribution aims to demonstrate how the use of readily available information processed through a conceptual regionalized hydrological model can bring benefits in estimating river discharge and producing runoff maps, even in basins characterised by intricate interactions between natural and anthropogenic phenomena.

Wednesday 25 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Global Scale River Discharge from The SWOT Mission

Authors: Hind Oubanas, Pierre-Olivier Malaterre, Kevin Larnier, Michael Durand, Colin Gleason
Affiliations: INRAE G-EAU, The Ohio State University, University of Massachusetts Amherst, HydroMatters
The Surface Water and Ocean Topography (SWOT) satellite is the first mission to provide a global river discharge product, estimated through the unique simultaneous observation of water surface elevation, slope and extent at an unprecedented spatial resolution. This mission product is produced by the SWOT Discharge Algorithm Working Group (DAWG) and delivered to the public through PODAAC and Hydroweb.next. Given the mission requirements for a river discharge data product, the SWOT mission has driven the development of algorithms specifically designed for operational use. Six discharge algorithms were implemented and serve as the foundation of "Confluence", a cloud-based platform purpose-built to produce SWOT discharge estimates. Discharge algorithms solve the inverse problem of river discharge estimation using a diverse range of methodological approaches, spatiotemporal scales and levels of representation of the underlying river dynamics. To accommodate the diverse applications of SWOT discharge data, two distinct branches of discharge estimates will be disseminated: "gauge-constrained" and "gauge-unconstrained" discharge. The "gauge-constrained" discharge estimates may incorporate in-situ measurements at some prior or calibration steps, ensuring enhanced accuracy and reliability. Conversely, the "gauge-unconstrained" discharge relies solely on SWOT measurements, providing valuable insights into ungauged basins where conventional in-situ measurements may be lacking. In this presentation, we will introduce the SWOT global discharge product, which fills the gap in river discharge monitoring from space. We will present the latest available discharge product, discussing its spatiotemporal coverage and the current status of validation at the time of the meeting. Our focus will be on the product derived from the nominal (science) orbit, which provides global coverage at each cycle of 21 days.
Differences between the gauge-constrained and gauge-unconstrained products will be explored. Preliminary results have also examined the impact of data quality filters on discharge performance, showing that stricter filters bring performance closer to pre-launch expectations. Remaining challenges related to data and algorithms will also be discussed.

Wednesday 25 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Assimilation of CCI Discharge Products to Enhance Large-Scale Hydrological Simulations for Climate Studies

Authors: Malak Sadki, Gaëtan Noual, Vanessa Pedinotti, Simon Munier, Verma Kaushlendra, Gilles Larnicol, Sylvain Biancamaria, Adrien Paris, Laetitia Gal, Alice Andral, Clément Mathieu Jacques Albergel
Affiliations: Magellium, CLS, CNRM, LEGOS, HydroMatters, ESA
Large-scale hydrological models are essential for simulating river dynamics and supporting climate studies by providing insights into long-term trends and freshwater variability. However, their accuracy often suffers from uncertainties in model inputs and process representation, which can be addressed through data assimilation. Satellite remote sensing, with tools like altimetry and multispectral sensors, offers wide coverage and long-term data to fill gaps left by declining ground networks. This study builds on the first phase of discharge estimation work from the ESA Climate Change Initiative (CCI), leveraging 20 years of high-resolution satellite-derived discharge data (2000–2020) to evaluate how these new long-term datasets improve the accuracy of two large-scale hydrological models, CTRIP and MGB, in the context of climate studies. Using ensemble Kalman filter frameworks (HyDAS and HYFAA), the approach integrates discharge data derived from altimetry and multispectral imagery, along with water surface elevation (WSE) anomaly observations, to assess their impact on model performance over the Niger and Congo basins, with each dataset coming with its own characteristics (spatial and temporal sampling, uncertainty). Discharge assimilation proved to be more effective than WSE anomaly assimilation, as it provides a more direct input for improving model accuracy. Temporal data density emerged as a key factor, particularly in the Niger Basin, where dense observations significantly reduced bias and improved long-term trend simulations. Higher temporal resolution also better captures flow variability, critical for both seasonal climate studies and short-term predictions like extreme events. In the Congo Basin, the study highlights trade-offs between spatial coverage, data quality, and temporal resolution, emphasizing the need for balanced data acquisition strategies.
This work underscores the added value of long-term, high-resolution satellite-derived discharge products for data assimilation applications, advancing the capability of large-scale hydrological models to support climate studies. By integrating these approaches, the study contributes to improving predictions of seasonal flow dynamics and extreme hydrological events, thereby informing adaptation strategies in the context of climate change.
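The ensemble Kalman filter analysis step used in such frameworks can be illustrated for a scalar, directly observed state (a simplified sketch with hypothetical numbers, not the HyDAS or HYFAA implementation):

```python
import random

def enkf_update(ensemble, obs, obs_var, rng):
    """One EnKF analysis step with perturbed observations for a scalar,
    directly observed state: each member is nudged toward the observation
    with Kalman gain K = P / (P + R), where P is the ensemble forecast
    variance and R the observation error variance."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((m - mean) ** 2 for m in ensemble) / (n - 1)  # forecast variance P
    k = p / (p + obs_var)                                 # Kalman gain K
    return [m + k * (obs + rng.gauss(0.0, obs_var ** 0.5) - m) for m in ensemble]

# Hypothetical forecast ensemble of discharge [m^3/s] pulled toward a gauge value.
rng = random.Random(42)
forecast = [rng.gauss(100.0, 10.0) for _ in range(500)]
analysis = enkf_update(forecast, 120.0, 25.0, rng)
```

After the update, the ensemble mean lies between the forecast mean and the observation, and the ensemble spread shrinks, which is the behaviour exploited when assimilating discharge or WSE anomalies.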

Wednesday 25 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Leveraging high-resolution Earth observations to assess the degree of evaporation-soil moisture coupling for various Noah-MP runoff parameterizations during drought

Authors: Sara Modanesi, Domenico De Santis, Daniela Dalmonech, Alessio Collalti, Francesco Avanzi, Gabriëlle De Lannoy, Tommaso Moramarco, Nunzio Romano, Paolo Nasta, Fabio Massimo Grasso, Martina Natali, Christian Massari
Affiliations: Research Institute for the Geo-Hydrological Protection, National Research Council, Research Institute for Geo-Hydrological Protection, National Research Council, Institute for Agriculture and Forestry Systems in the Mediterranean, National Research Council, CIMA Research Foundation, Department of Earth and Environmental Sciences, KU Leuven, Department of Agricultural Sciences, University of Naples Federico II, Institute of Atmospheric Sciences and Climate, National Research Council, Department of Civil and Environmental Engineering, University of Perugia
Meteorological droughts, characterized by prolonged precipitation deficits and frequently increased atmospheric water demand, exert a significant influence on the water budget dynamics in river basins. These dynamics arise from the complex interplay between land surface evaporation and basin storage capacity, which drives the transition from meteorological to hydrological droughts, marked by a decline in streamflow and groundwater recharge. Accurately modeling these hydrological processes requires a precise representation of evaporation, soil moisture, and vegetation dynamics, including their links to the carbon cycle. However, land surface model simulations face challenges due to their sensitivity to parameterization choices. Among these, the parameterization of runoff is particularly crucial, as it shapes the degree of soil moisture-evapotranspiration coupling, which in turn affects the partitioning of water fluxes and, consequently, streamflow, groundwater recharge, and vegetation dynamics. Despite its importance, the impact of different runoff schemes on drought representation in land surface models remains underexplored. In this context, the new era of high-resolution satellite observations offers a valuable resource to assess these parameterizations, thereby enhancing our understanding of soil moisture-evaporation coupling dynamics. The objective of this study was to assess the sensitivity of Noah-MP land surface model simulations to different runoff schemes, integrated within the NASA Land Information System framework. Our analysis focused on the interactions between hydrological and carbon cycles, which are influenced by these parameterizations in the Mediterranean region. In particular, two river basins were considered: one in central Italy and one in southern Italy. To determine an optimal parameterization, we leveraged satellite-derived datasets to evaluate soil moisture-evaporation coupling strength using Triple Collocation analysis. 
Additionally, we examined the challenges inherent in modeling hydrological drought periods and identified potential factors contributing to suboptimal parameterization performance. These findings offer valuable insights into improving land surface model representations of hydrological and carbon cycle interactions, with implications for the management of water resources and the health of agroecosystems in drought-prone regions. Acknowledgements This work was funded by the Next Generation EU - Italian NRRP, Mission 4, Component 2, Investment 1.5, call for the creation and strengthening of ‘Innovation Ecosystems’, building ‘Territorial R&D Leaders’ (Directorial Decree n. 2021/3277) - project Tech4You - Technologies for climate change adaptation and quality of life improvement, n. ECS0000009. This work reflects only the authors’ views and opinions, neither the Ministry for University and Research nor the European Commission can be considered responsible for them.
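The Triple Collocation analysis mentioned above estimates the error variance of each of three collocated, independently erring estimates of the same signal from their pairwise covariances; a minimal sketch with synthetic data (assuming zero-mean, mutually independent errors and a common calibration, and not tied to the study's datasets):

```python
import random

def tc_error_variance(x, y, z):
    """Covariance-based Triple Collocation estimate of the error variance
    of x, given three independent estimates of the same signal:
    var_ex = cov(x,x) - cov(x,y) * cov(x,z) / cov(y,z)."""
    n = len(x)
    def cov(a, b):
        ma, mb = sum(a) / n, sum(b) / n
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    return cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z)

# Synthetic truth plus independent noise of known standard deviation.
rng = random.Random(0)
truth = [rng.gauss(0.3, 0.1) for _ in range(50000)]
x = [t + rng.gauss(0.0, 0.02) for t in truth]  # e.g. satellite product
y = [t + rng.gauss(0.0, 0.04) for t in truth]  # e.g. model output
z = [t + rng.gauss(0.0, 0.06) for t in truth]  # e.g. reanalysis-like product
```

With enough samples the recovered error variances approach the true values (0.0004, 0.0016, 0.0036), which is how coupling-strength diagnostics can rank datasets without a ground-truth reference.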

Wednesday 25 June 16:15 - 17:45 (Room 1.34)

Session: B.03.13 Space in Action: Driving the Green Transition from Policy to Impact Sectors

ESA emphasizes a partner-centric approach to the global green transition, in which practical space-based capabilities are harnessed by green transition actors across governments, institutions, industry and civil society. This insights session will highlight contributions from several ESA partners and stakeholders, showcasing how they use and integrate space data and services in their strategies, decision-making and operational processes in support of climate action. Partners will have the opportunity to present their needs and challenges related to EO, fostering dialogue with space providers to identify existing solutions or collaboration opportunities to co-develop new ones. The session will feature a series of short presentations by partners and stakeholders covering several critical topics, from enhancing policy and decision-making processes to accelerating decarbonization in key industrial sectors and enabling informed finance and investment strategies.

Speakers:


  • Fani Kallianou de Jong. Principal Manager in the Climate Strategy and Delivery Department of the European Bank for Reconstruction and Development
  • Dušan Chrenek. Principal Adviser for ‘Digital for the Green Transition’ in the Directorate-General for Climate Action of the European Commission
  • Mohammad Qasim. Products Manager for Carbon and Biodiversity at the Forest Stewardship Council
  • Mila Luleva. Head of Remote Sensing at Rabobank
  • Nikoletta Fodor. Project Officer at Solar Power Europe
  • Nga Thi Viet Nguyen. Senior Economist at the World Bank

Wednesday 25 June 16:15 - 17:45 (Room 0.96/0.97)

Session: B.03.02 Monitoring and measuring climate adaptation using earth observation - PART 2

The Intergovernmental Panel on Climate Change (IPCC) describes adaptation as the process of adjustment to actual or expected climate and its effects, in order to moderate harm or exploit beneficial opportunities. With over 1 degree Celsius of global warming already experienced, adapting to current and future climate change is now a far greater focus for all levels of policymaking than ever before. Yet, the IPCC Sixth Assessment Report (AR6) concluded that despite progress, adaptation gaps exist and will continue to grow at current rates of implementation. This has also been confirmed by the UNEP-WASP Adaptation Gap Report 2023.

The decision text from the UNFCCC Conference of the Parties (COP28) held in December 2023 in Dubai, UAE, affirms a framework for a Global Goal for Adaptation (GGA), of which monitoring, evaluation and learning is one of the four core targets. The text states that by 2030, all Parties have designed, established and operationalized a system for monitoring, evaluation and learning for their national adaptation efforts and have built the required institutional capacity to fully implement the system.

In the COP28 Decision on the GGA, Parties are urged to enhance adaptation action in the following areas: water supply and safe potable water, food and agricultural production, health, ecosystems and biodiversity, infrastructure and settlements, poverty and livelihoods, and protection of cultural heritage, with adaptation actions to be guided by science, including indicators, metrics and targets.
Satellite Earth Observation (EO) has revolutionized systematic observation and has played a pivotal role in understanding climate change to date, yet its potential to support adaptation implementation, monitoring and evaluation is only beginning to be explored. This session will highlight current adaptation actions using EO across the focus areas of the GGA listed above.

Wednesday 25 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Monitoring drought effects on vegetation with Sentinel-1 backscatter signal: A case study over Mozambique

Authors: Carina Villegas-Lituma, Mariette Vreugdenhil, Samuel Massart, Bernhard Raml, Pavan Muguda Sanjeevamurthy, Wolfgang Wagner, Rogerio Borguete Alves Rafael
Affiliations: Technical University of Vienna, Universidade Eduardo Mondlane - Faculty Of Agronomy And Forestry Engineering
Droughts are defined as prolonged periods of insufficient rainfall and high temperatures. They are among the most critical abiotic stress factors driving plants to their wilting point and triggering physiological stress that can ultimately result in plant death. Mozambique frequently experiences severe droughts, which have a significant impact on its economy and the livelihoods of its population. With an agricultural sector heavily dependent on rain-fed systems, the country is highly vulnerable to drought events. Smallholder farmers, who constitute 99.6% of landowners, are particularly at risk due to their reliance on rainfall for crop production. Understanding the effects of drought on vegetation is crucial for strengthening climate adaptation strategies in Mozambique. While optical remote sensing data are commonly used for drought monitoring, atmospheric interference limits their effectiveness. In contrast, Sentinel-1's Synthetic Aperture Radar (SAR) provides all-weather, day-and-night imaging, enabling continuous monitoring of vegetation structure, biomass, and water content. To date, the potential of the Sentinel-1 cross-polarization ratio (CR) for monitoring drought effects on vegetation in Mozambique remains unexplored. This study evaluates the potential of CR for monitoring drought effects at the regional level, focusing on the 2019/2020 drought year and comparing it with the non-drought year 2020/2021. To examine CR’s sensitivity to vegetation dynamics across forests, grasslands, and croplands, we integrated the ASCAT Soil Water Deficit Index (SWDI), Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), MODIS Normalized Difference Vegetation Index (NDVI), and drought impact data from the Famine Early Warning Systems Network (FEWS NET). Additionally, the study compares CR with MODIS NDVI and explores how drought alters the seasonal patterns of both radar and optical data.
Our analysis suggests that variations in the time series of CR during the drought year reflect alterations in the physiology of deciduous forests. In grassland areas, CR exhibited only minimal increases during the growing season, suggesting a possible latency in vegetation growth due to drought. In cropland areas, a decline in CR was observed during the harvest season, with notable regional differences tied to agricultural development levels. The correlation between Sentinel-1 CR and optical data varied by land cover type. Forested areas showed predominantly negative correlations, whereas grassland and cropland displayed positive correlations with high median values. Spatially, CR anomalies over forested areas revealed positive anomalies during the drought year, followed by a shift to negative anomalies in the following year. Across traditional cropping systems, deviations in optical and radar data aligned with grassland patterns during both drought and non-drought years. These findings underscore the potential and limitations of CR to monitor the physiological responses of forests, grasslands, and croplands to drought in Mozambique, providing valuable complementary insights to traditional optical data sources.

Wednesday 25 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Earth Observation and Agent-Based Population Simulation for Resilience Against Climate Change and Related Emergencies

Authors: Matthias Schramm, Martin Bicher, Niki Popper, Wolfgang Wagner, Christoph Draxler, Jürgen Waser
Affiliations: TU Wien, dwh GmbH, VRVis GmbH
Climate change is increasingly driving the occurrence of extreme weather events, posing significant challenges worldwide. In Europe, the rising frequency and intensity of floods and droughts are impacting infrastructure, mobility, healthcare, and supply chains. These evolving climatic conditions necessitate resilience-building measures across multiple sectors to safeguard lives and maintain the functionality of critical systems. Addressing the consequences of such events requires both short-term responses, such as evacuation planning, rapid public information dissemination, and the provision of substitute infrastructure, and long-term strategies, including adapting settlement patterns and ensuring food security. However, a critical obstacle to effective climate adaptation and mitigation lies in transforming raw monitoring data into actionable, value-added information tailored to decision-makers’ needs. In this context, a research project has been started to address this challenge by developing advanced simulations and scenarios that enhance decision-making processes in Austria across sectors such as healthcare, mobility, and energy, particularly in the context of inland flood events. The project leverages multi-platform Earth Observation (EO) data and geospatial datasets related to flood dynamics, which are processed or collected from platforms like the Green Transition Information Factory (GTIF) developed by the European Space Agency (ESA). Key data inputs include flood extent and inundation monitoring, snow cover extent, precipitation and atmospheric conditions, soil moisture, and water levels, complemented by static datasets such as airborne Digital Elevation Models (DEM) and infrastructure information.
This geodata is integrated – after standardizing its metadata – into two Digital Twin environments: one focusing on machine-learning algorithms to simulate flood dynamics and thus enabling predictive modeling of future flood scenarios, and the other modelling socio-demographic processes, simulating the cascading effects of climate-induced disruptions on population mobility, infrastructure usage, and network processes. In a co-development approach, governmental decision-makers and insurance sector representatives are actively involved to ensure the resulting tools’ alignment with real-world requirements and the stakeholders’ priorities. The primary objective is to generate AI-based scenarios to understand the impacts of various climate change factors and extreme events on critical infrastructure, human mobility, and network processes. These scenarios will inform strategies that enable policymakers to address both short-term needs and long-term climate adaptation goals. To ensure the validity and robustness of these methods, simulations are trained and tested in diverse use cases from Europe and beyond. Each use case is designed to capture varying temporal scales, spatial extents, and thematic focuses, ensuring their broad applicability. Examples include integrating flood dynamics with population and routing models to optimize evacuation strategies, conducting risk group analyses to prioritize healthcare access during crises, and quantifying cause-effect relationships between flooding and socio-economic impacts. These use cases not only validate the models but also demonstrate their relevance and utility in addressing pressing societal challenges. In addition, a reduced data model will be generated using high-resolution geodata and the calculated flood scenarios. This model will be enhanced by linking it with auxiliary datasets on population dynamics, mobility, settlement patterns, and infrastructure configurations.
Newly developed interfaces will enable seamless integration of these datasets into actionable insights, facilitating the creation of "what-if" scenarios and modeling the impacts of extreme events on dynamic processes within the population. The outcomes of this project will enhance the efficiency and reliability of planning processes across various sectors. For example, healthcare systems can better allocate resources during flood events, while infrastructure managers can optimize flood protection measures for highways and rail networks. By publishing the resulting geoinformation for Austria on the GTIF Explore platform, the project not only ensures open access for national stakeholders but also provides a scalable template for international developers and decision-makers. Ultimately, by combining EO data with AI-driven socio-economic modeling, this project bridges the gap between data provision and actionable insights. It provides decision-makers with robust tools to efficiently plan for climate change impacts, fostering resilience and ensuring a sustainable future for affected communities.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: The Synergistic Power of Earth Observation and Simulation Modelling for Evaluating Climate Adaptation in Agriculture

Authors: Claas Nendel, Gohar Ghazaryan
Affiliations: Leibniz Centre for Agricultural Landscape Research (ZALF), Institute of Biochemistry and Biology, University of Potsdam, Integrative Research Institute on Transformations of Human-Environment Systems (IRI THESys), Humboldt-Universität zu Berlin, Global Change Research Institute of the Czech Academy of Sciences, Geography Department, Humboldt University of Berlin
Earth observation (EO) is a powerful tool to detect and monitor relevant variables on the Earth’s surface to evaluate climate change impact as well as climate mitigation and adaptation measures at various scales. However, some variables of interest will remain difficult, or virtually impossible, to observe from above. Examples are greenhouse gas emissions, soil carbon stock changes in permanently vegetated areas, and nitrate leaching. At the same time, there is an increasing need to predict future states of many variables of interest (e.g. crop production), to serve early-warning systems for climate hazards or to project even longer-term futures under different scenarios, which is not possible using EO methods alone. While an increasing effort is under way to come as close as possible to real-time observation of observable variables using EO, the most reliable approach currently available to derive non-observable variables and their spatiotemporal dynamics is the use of process-based simulation models. With respect to informing climate protection, adaptation and mitigation policies, process-based models add the dimension of prediction capability, which EO does not offer. However, their demand for input data is huge, and no matter how well these models have been designed, implemented and calibrated, the quality and spatio-temporal resolution of the input data largely determine the quality of the simulation. For large-area assessments (e.g. at national or continental level), EO plays a pivotal role in delivering such data. Here, we demonstrate the most recent advancements in feeding process-based agroecosystem models with EO information on a range of desired variables: 1.
Crop rotation patterns: The presentation will highlight the use of Sentinel-2 and Sentinel-1 data for high-resolution national-scale crop type mapping, including first attempts to identify intermediate crops and their position in the crop rotation, and the use of artificial intelligence to extrapolate crop rotation patterns into the future as a basis for (i) advanced large-area scenario simulations for carbon dynamics in soils and related emissions and (ii) a high-resolution assessment of potential climate change adaptation measures at the level of crop rotations and the use of intermediate and non-traditional crops. 2. Sowing dates: The presentation will elaborate on the use of Sentinel-2 and Sentinel-1 data and a range of fusion products to determine sowing dates for crops across large areas. Combined with high-quality ground-truth data, accurate identification of the sowing date is essential input for a simulation model that then simulates the subsequent growth and development processes of crops towards proper yield estimates and related ecosystem services and disservices. It furthermore delivers the precondition to assess any changes in sowing dates over time as a climate change adaptation measure. 3. High-resolution soil data: Agroecosystem simulations require soil information as a prerequisite for site-dependent responses of crop growth and development and feedbacks to soil processes, including emissions. Here, we report on recent advances using Sentinel-2 and Sentinel-1 data with inverse modelling to identify field-scale properties that do not easily emerge from observation alone, e.g. subsoil moisture availability. 4. Supplementary irrigation: The assessment of crop production across large areas via EO suffers from an unstable relationship between observable vegetation indices and actual yields and from a lack of predictive power.
Simulation models enter here as a tool to predict within-season yields at the field and larger scales, using weather data products. Supplementary irrigation is missing in these products, but this information is essential to reproduce the respective response of crops using models. We present a large-scale identification of irrigated fields in a heterogeneous, rain-fed landscape, using Sentinel data, and a method to quantify irrigation water use with the help of a simulation model. 5. Residue management: Soil carbon and moisture dynamics on agricultural fields are influenced by crop and soil management. Crop residue management, including the off-take of cereal straw after harvest for different purposes or straw purposely left as mulch, has a strong impact on long-term carbon and soil moisture dynamics. We show how Sentinel data can be used to identify the fate of cereal straw and what the consequences are for simulations of long-term soil carbon dynamics. All these items represent essential pieces of information to inform simulations of agricultural production, soil carbon sequestration, greenhouse gas emissions, water use efficiency and nitrate leaching. The presentation will highlight the use of Sentinel-2 and Sentinel-1 data for national-scale mapping activities and phenology assessment, while Sentinel-3 and Sentinel-2 provide information on crop conditions, including drought and heat stress. EO data in combination with a process-based simulation model can support scenario-based assessments as well as a better spatiotemporal understanding of changes in cropping systems. Furthermore, this information presents the best-informed baseline for proposing and testing climate change adaptation and mitigation measures in agriculture.
The use of process-based agroecosystem models in combination with EO-provided input data currently serves, among others, the agricultural production and greenhouse gas emission outlook 2050 for Germany, and the greenhouse gas emission monitoring and accounting system of the Czech Republic.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: A nationwide Heat Vulnerability Index for climate adaptation planning in urban areas based on remote sensing and demographic data

Authors: Dr. Michael Foerster, Kathrin Wagner, Marion De Simone, Annett Frick, Benjamin Stöckigt, Sascha Gey, Franziska Löffler, Viktoria Engnath, Stefan Heiland, Mohamed Salim, Sebastian
Affiliations: LUP - Luftbild Umwelt Planung GmbH, Technische Universität Berlin, Stadt Leipzig - Amt für Stadtgrün und Gewässer
Municipalities play a key role in the process of climate change adaptation, especially regarding the mitigation of the urban heat island effect and the protection of sensitive population groups, e.g. children and elderly people. Thus, quantitative and qualitative knowledge of the current spatial exposure to heat in relation to these sensitivities is urgently required for sustainable urban development strategies and measure planning, in order to enhance synergies. Key to this is reliable and easy-to-use data and a methodology which allows for future monitoring at low cost. The UrbanGreenEye project (Remote Sensing Indicators for Monitoring of Urban Areas for Climate Change Adaptation) aspires to provide Germany-wide quantitative indicators based on earth observation data, mainly from the European Copernicus satellite program. This data shows the current state as well as the past development and is therefore a valuable option for cost-effective monitoring. The developed indicators are integrated with demographic data in a Heat Vulnerability Index (HVI) in order to perform a spatial hotspot analysis. The indicators used to calculate the HVI are the following: (i) the Land Surface Temperature (LST), (ii) the green volume, (iii) the imperviousness, (iv) the population density and (v) the population age. The demographic data (population density and age) come from the German census database, which is updated every ten years. The LST is based on the thermal infrared sensors of the Landsat program and is calculated from data of the past 5 years. The green volume and the imperviousness are based on satellite images (Sentinel-2) and computed using a transformer AI model, trained and validated with high-resolution aerial images. The Heat Vulnerability Index is calculated by combining these indicators, discussed and validated in consultation with around 15 municipalities from all over Germany participating in the UrbanGreenEye project.
The UrbanGreenEye Heat Vulnerability Index will provide a standardized index, directly accessible and free of charge for any German municipality. It will also be updated every year so that the data stays up to date, supporting different uses, in particular controlling and monitoring purposes. The Heat Vulnerability Index is a tool to differentiate and prioritize areas for climate adaptation measures within a reference area. It can therefore be used as a powerful communication and decision-support tool.
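The combination step described above can be sketched in code. This is a minimal illustration, not the UrbanGreenEye formula: the min-max normalisation, the inversion of the mitigating green-volume layer, and the equal-weight average are all assumptions, and the cell values are invented.

```python
import numpy as np

def min_max(x):
    """Scale an indicator (here a 1-D array of cells) to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def heat_vulnerability_index(lst, green_volume, imperviousness,
                             pop_density, pop_age):
    """Combine the five indicators into one index per cell.

    Green volume mitigates heat, so it enters inverted (1 - normalised).
    Equal weights are an illustrative assumption.
    """
    layers = [
        min_max(lst),
        1.0 - min_max(green_volume),
        min_max(imperviousness),
        min_max(pop_density),
        min_max(pop_age),
    ]
    return np.mean(layers, axis=0)

# Toy example: three cells, from a leafy suburb to a dense, hot core.
hvi = heat_vulnerability_index(
    lst=[28.0, 33.0, 38.0],          # land surface temperature, deg C
    green_volume=[9.0, 4.0, 1.0],    # m^3 of vegetation per m^2
    imperviousness=[20, 60, 95],     # percent sealed surface
    pop_density=[800, 4000, 9000],   # inhabitants per km^2
    pop_age=[30, 40, 55],            # mean age, years
)
print(hvi)  # monotonically increasing: the dense core is the hotspot
```

On real rasters the same arithmetic applies per pixel; the choice of normalisation and weights is exactly what the consultation with the participating municipalities would settle.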
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Improving water infrastructure safety and resilience coupling MT-InSAR, classical monitoring, modelling and machine learning

Authors: Miguel Marchamalo Sacristan, Antonio Miguel Ruiz-Armenteros, Francisco Lamas-Fernández, Alvaro Hernández-Cabezudo, Alfredo Fernández-Landa, Diego Macarrón-Robles, Isabel Granados, Alfredo Granados, Ignacio González-Tejada, Luis Jordá, Rubén Martínez-Marín, Juan Gregorio Rejas-Ayuga
Affiliations: Universidad Politecnica De Madrid, Department of Cartographic, Geodetic and Photogrammetry Engineering, University of Jaén, Research Group RNM-282 Microgeodesia Jaén, Centre for Advanced Studies in Earth Sciences, Energy and Environment (CEACTEMA), Universidad de Granada, Detektia Earth Surface Monitoring SL
Water cycle infrastructures are critical for the adaptation of our society to current and future climate scenarios. Among them, dams are key elements for the provision of water, while their safety plans must be improved to face new climatic and service challenges. Infrastructure monitoring is currently evolving quickly, triggered by advances in remote sensing, modelling and machine learning. Dam managers face a context of multiple sources of monitoring information, normally not integrated, and this can hinder the adoption of new technologies in real life. There is a need for reliable tools integrating monitoring data with contextual information, modelling outputs and machine learning, for example, for early warnings. Advances in SAR interferometry and processing technologies now enable quasi-real-time monitoring of dams and their associated regions and infrastructures. Several barriers impede the widespread adoption of advanced technologies in dam monitoring and management: lack of standardization, insufficient integration of available technologies and rigidity of official safety procedures. The implementation of standardized databases with consistent and comparable information is a pivotal step toward overcoming these barriers. Beyond the advantages of integrating multi-temporal InSAR (MT-InSAR) into dam monitoring systems, common databases simplify data querying and enable the rapid retrieval of accurate results. Such databases improve information tracking and management by establishing restrictions and validations to ensure data consistency and precision. Moreover, they facilitate updates and modifications, enhancing the adaptability and utility of the monitoring framework. EyeRADAR-Dam, a pilot tool for the surveillance and management of embankment dams, has been developed through collaboration between industry and academia, with scientific validation supporting its design.
The platform’s strength lies in its ability to integrate monitoring data with contextual information such as climate, topography, geotechnical parameters, hydrology, and flow data, in an accessible and quantitative platform. This work presents the integration of the EyeRADAR-Dam analysis tool in the operational monitoring of dams selected in the frame of the SIAGUA project, located in the Guadalquivir Basin (Spain). The SIAGUA project investigates the integration of satellite data, in situ monitoring and expert judgment for the surveillance and control of water cycle infrastructures. The analysis employs a two-scale approach, simultaneously monitoring entire reservoirs, including slopes and associated infrastructures, while providing detailed surveillance of critical infrastructure and the dam body itself. As part of a new generation of management platforms, it exemplifies the integration of space-based data into routine operational workflows. Results show that the integration allows information from classical auscultation, remote sensing and external variables to be analysed together efficiently, in terms of both time and outputs. A second level of analysis allows integrating modelling results and running machine learning algorithms. We conclude that integrated monitoring and analysis is essential to increase the safety and resilience of water infrastructures.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Use of remote sensing in support of carbon credits for regenerative agricultural practices

Authors: Urs Schulthess, Dr. Simon Fonteyne, Dr. Tek Bahadur Sapkota
Affiliations: CIMMYT
Practices of regenerative agriculture that aim at restoring and maintaining soil health are key to sustaining crop productivity in the long run. Farmers who follow these practices are entitled to so-called carbon credits. These carbon credits rely on measurement, reporting, and verification (MRV) procedures, which can be quite labor intensive and costly, given that large areas need to be monitored over long time spans. However, the monetary value of these carbon credits is generally low, so efficient and cost-effective MRV methods are required. One of the primary advantages of remote sensing for MRV is its ability to gather extensive data over large areas without the need for extensive on-the-ground measurements. We are using Sen4CAP to identify tillage practices and crop residue retention for an area covering more than 30,000 km2 in the Valle de Santiago in Central Mexico over several years. In that region, farmers can grow up to two crops per year, thanks to irrigation during the dry winter months. However, depending on the availability of water, some farmers may leave the land fallow over winter. These fallow lands are tilled over a long period spanning from December to June, e.g., a tillage operation with a plough in January may be followed up with a harrowing operation in June. Moreover, the residues start to decompose over time, aided by occasional rains. Some farmers also burn the residues, depending on the circumstances. Hence, this region serves as an ideal test-bed to assess the sensitivities and accuracies of different algorithms to detect tillage practices and residues over a prolonged time. We also aim to estimate carbon sequestration from root growth. To do this, we estimate biomass production from the satellite images and then apply a fixed root:shoot ratio to approximate the root dry matter (carbon).
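The root-carbon step at the end of the abstract is simple arithmetic, sketched below. The function name, the root:shoot ratio of 0.2, the carbon fraction of 0.45 and the biomass value are illustrative assumptions, not the values used in the study; only the structure (biomass × root:shoot ratio × carbon fraction) follows the text.

```python
def root_carbon(shoot_biomass_t_ha, root_shoot_ratio=0.2, carbon_fraction=0.45):
    """Approximate root carbon (t C/ha) from above-ground biomass (t/ha)
    via a fixed root:shoot ratio and an assumed carbon fraction of dry matter."""
    root_dry_matter = shoot_biomass_t_ha * root_shoot_ratio
    return root_dry_matter * carbon_fraction

# e.g. 12 t/ha of above-ground biomass estimated from satellite imagery
print(root_carbon(12.0))  # 12 * 0.2 * 0.45 = 1.08 t C/ha
```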
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F2)

Session: A.03.05 Opportunities and challenges for global monitoring of photosynthesis from space

In recent years there has been rapid progress in understanding the dynamics of photosynthesis from space, with the advancement of remote sensing observatories that enable the retrieval of solar-induced chlorophyll fluorescence (SIF). The remotely sensed SIF signal has demonstrated potential in detecting and monitoring plant stress and gross primary productivity (GPP) across various vegetation types and spatial and temporal scales. The upcoming FLuorescence EXplorer (FLEX) mission aims at providing new insights into the carbon cycle of terrestrial ecosystems by means of its unique measurements and products. In this session we welcome contributions from the scientific and user community linked to the consolidation of the link between SIF and the carbon balance, carbon cycle studies, agriculture, forestry, food security and crop production. Updates on recent modelling activities coupling remote sensing signals and ecosystem processes are also welcome, including the integration of products from active/passive research and operational EO missions.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F2)

Presentation: Solar-induced chlorophyll fluorescence quantum efficiency retrieved from in-situ and airborne crop observations using DART modelling

Authors: Omar Regaieg, Prof Zbyněk Malenovský, Dr. Bastian Siegmann, Jim Buffat, Julie Krämer, Nicolas Lauret, Dr. Valérie Le Dantec
Affiliations: University Of Bonn, Forschungszentrum Jülich GmbH, Centre d’Etudes Spatiales de la Biosphère
Although remotely sensed solar-induced fluorescence (SIF) is increasingly used as a proxy for vegetation photosynthetic activity at different spatial and temporal scales, the top-of-canopy (TOC) SIF signal is strongly affected by non-physiological vegetation factors such as canopy structure, leaf and soil background optical properties, atmospheric conditions, etc. The removal of these confounding factors requires downscaling methods capable of extracting the physiologically relevant information from remotely sensed SIF. Some existing approaches scale TOC SIF down to the leaf emission efficiency using spectral reflectance indices (e.g., FCVI, NIRv, NIRvH). These methods usually rely on simplifications limiting their performance. More advanced methods combine numerical optimization or machine learning techniques with the Soil Canopy Observation, Photochemistry and Energy fluxes (SCOPE) model. Since SCOPE is a one-dimensional (1D) model, the applicability of these approaches is limited mainly to spatially homogeneous canopies such as agricultural crops. To overcome these limitations, we propose a novel method that scales SIF down to the level of the photosystems (i.e., PSI and PSII) using the 3D Discrete Anisotropic Radiative Transfer (DART) model with its latest and highly efficient DART-Lux mode. The method, estimating the photosystem-level Fluorescence Quantum Efficiency (FQE), consists of physical TOC SIF simulations based on a 3D representation of the actual vegetation canopy architecture with realistic optical properties, accounting for instantaneous illumination conditions. The steady-state photosystem-level FQE is estimated by optimizing the simulated TOC SIF against its corresponding field or airborne measurements. The method was first applied to in-situ diurnal SIF measurements acquired over an alfalfa crop with the Fluorescence Box (FloX) system.
The resulting diurnal photosystem FQE trends decreased from late morning (9.00 am) till afternoon (4.00 pm), with a slight increase in the late afternoon (between 4.00 and 7.00 pm) in the case of fully closed canopies. The obtained FQE diurnal courses correlated significantly with the photosynthetic yield of PSII measured by the active leaf fluorescence instrument MiniPAM (R = 0.87, R² = 0.76 before and R = -0.82, R² = 0.67 after 2.00 pm). The method was then extended to airborne TOC SIF images acquired with the HyPlant imaging spectrometer over the same alfalfa field. The per-pixel FQE estimates formed narrow, bell-shaped, near-Gaussian histograms with a lower coefficient of variation than the input TOC SIF radiance, indicating a reduction of the spatial heterogeneity caused by confounding factors. This work demonstrates the necessity of TOC SIF downscaling for extracting physiologically meaningful SIF information about plant photosynthetic performance that can be used, for instance, in terrestrial carbon cycle modeling.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F2)

Presentation: Assimilating SIF in the LDAS-Monde system using a deep learning operator: application on global cropland

Authors: Pierre Vanderbecken, Jasmin Vural, Oscar Rojas-Munoz, Bertrand Bonan, Timothée Corchia, Jean-Christophe Calvet
Affiliations: Météo-france
Solar-Induced chlorophyll Fluorescence (SIF) results from the photosynthetic activity of leaves. The recent level-2 SIF product from the TROPOMI instrument onboard Sentinel-5P provides a daily estimate of SIF around the globe. Independently of its uncertainties, this product enables more up-to-date vegetation monitoring. The questions we address here are: (1) can the SIF from TROPOMI be simulated using land surface model variables as predictors with a neural network, and (2) does its assimilation improve vegetation and carbon flux monitoring in the LDAS-Monde system? Since SIF results from complex small-scale physics unresolved in our land surface model, we used deep learning to build the observation operator, adopting a feed-forward neural network architecture. This approach has yielded good results for other remote sensing products such as ASCAT backscatter. The predictors (aka features) we used as inputs of the operator are either geographical parameters or observed Leaf Area Index (LAI). The main idea of training on observed LAI rather than model output is not to bias the learning towards the model used. Furthermore, during the assimilation process, we do not want an observation operator that only acknowledges the model prediction. This view assumes that the observations used for the training are the closest available to the truth. This observation-driven operator is trained on croplands around the globe for a whole year of observations and evaluated on another year. The predictions exhibit satisfying agreement with the observations, while the operator architecture remains simple enough to be linearised by finite differences in our Simplified Extended Kalman Filter data assimilation scheme. The main asset of TROPOMI SIF is its temporal coverage, allowing a more frequent correction of the vegetation state than the vegetation products we usually use.
To evaluate its impact in assimilation, we ran experiments with varying error formulations and in co-assimilation with other observations. Vegetation monitoring is not always improved by assimilating TROPOMI SIF alone, but it has a significant positive impact when co-assimilated with the CLMS LAI product.
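The two ingredients named in the abstract, a feed-forward network mapping predictors to SIF and its linearisation by finite differences for a Simplified Extended Kalman Filter, can be sketched as follows. This is not the operational Météo-France code: the network size, the tanh activation, the random weights and the predictor values are all illustrative assumptions.

```python
import numpy as np

# Toy feed-forward observation operator: 3 predictors -> 1 simulated SIF.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)) * 0.5, np.zeros(1)

def sif_operator(x):
    """One hidden layer with tanh activation; x = predictors, returns SIF."""
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)

def jacobian_fd(f, x, eps=1e-6):
    """Linearise the operator around x by central finite differences,
    as a Kalman-filter scheme would do to obtain the observation Jacobian."""
    x = np.asarray(x, dtype=float)
    J = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

x0 = np.array([2.5, 0.3, -0.1])    # e.g. [observed LAI, geo feature 1, geo feature 2]
H = jacobian_fd(sif_operator, x0)  # sensitivity of simulated SIF to each predictor
print(sif_operator(x0), H)
```

The key property exploited in the abstract is that such a small architecture is smooth and cheap, so the finite-difference Jacobian H is a good local linearisation of the operator.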
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F2)

Presentation: Improving the GPP of boreal evergreen needleleaf forests estimated by a land surface model through a physiologically-based representation of NPQ and co-assimilation of space-borne SIF and in situ GPP

Authors: Fabienne Maignan, Lucas Leverne, Camille Abadie, Cédric Bacour, Vincent Tartaglione, Yuan Zhang, Thomas Andrew Black, Vladislav Bastrikov, Nina Raoult, Philippe Peylin, Anja Krieger-Liszkay
Affiliations: Laboratoire des Sciences du Climat et de l'Environnement, Faculty of Land and Food Systems, University of British Columbia, Science Partners, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Université Paris-Saclay, Institute for Integrative Biology of the Cell
Estimating the gross primary production (GPP) of land surfaces is challenging due to its multiple driving factors, which vary across temporal and spatial scales. Solar-induced fluorescence (SIF) has emerged as a valuable proxy for GPP and is now available globally from various spaceborne instruments. To correctly exploit SIF estimates, land surface models (LSMs) must incorporate a representation of non-photochemical quenching (NPQ) processes, which allow for the thermal dissipation of excess absorbed energy. In this study, we assimilated SIF estimates from the ESA TROPOSIF products to improve the GPP simulated by the ORCHIDEE LSM over boreal evergreen needleleaf forests (BorENF). Boreal forests account for approximately 20% of the global forest carbon sink, playing a crucial role in mitigating climate change. We first developed an NPQ model that distinguishes between a sustained NPQ, which dominates in winter when needles absorb solar energy but cannot photosynthesize due to low temperatures, and a reversible NPQ. We then assimilated TROPOSIF estimates at a 0.1° spatial resolution and 8-day temporal resolution across 20 homogeneous grid cells, sampling the spatial distribution of BorENF, along with in situ FLUXNET GPP estimates. We performed three experiments, assimilating only SIF data, only GPP data, and jointly SIF and GPP data. The results were evaluated over independent grid cells and sites (“local scale”), as well as at the regional scale. Compared to the prior simulations, the posterior ones show improved mean seasonal cycles for both GPP and SIF.
At the local scale, for SIF the RMSD reduction over the 2019-2022 period is 89% (SIF-only), 65% (GPP-only), and 90% (SIF-GPP). For GPP, the RMSD reduction evaluated at FLUXNET sites is 39% (SIF-only), 37% (GPP-only) and 35% (SIF-GPP), while the RMSD reduction evaluated at grid cells over the 2001-2021 period is 65% vs FLUXCOM-X-BASE and 63% vs FluxSat (SIF-only), 65% and 65% (GPP-only), and 73% and 72% (SIF-GPP).
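The skill metric quoted throughout these results, percentage RMSD reduction of the posterior relative to the prior simulation against a reference, can be computed as below. The time series here are synthetic, purely to illustrate the metric; they are not study data.

```python
import numpy as np

def rmsd(sim, ref):
    """Root-mean-square difference between a simulation and a reference."""
    sim, ref = np.asarray(sim, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((sim - ref) ** 2)))

def rmsd_reduction(prior, posterior, ref):
    """Percentage by which assimilation reduced the RMSD vs the reference."""
    return 100.0 * (1.0 - rmsd(posterior, ref) / rmsd(prior, ref))

# Synthetic illustration: assimilation shrinks a constant bias of 1 to 0.1.
ref = [1.0, 2.0, 3.0, 4.0]
prior = [0.0, 1.0, 2.0, 3.0]
posterior = [0.9, 1.9, 2.9, 3.9]
print(rmsd_reduction(prior, posterior, ref))  # 90.0 (a 90% RMSD reduction)
```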
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F2)

Presentation: Terrestrial vegetation fluorescence quantum yields: a satellite-based study

Authors: Catherine Morfopoulos, Professor Colin
Affiliations: Imperial College Of London
Photosynthesis is the core engine of vegetation productivity, usually estimated by Gross Primary Production (GPP), the rate of carbon fixed by photosynthesis per unit of ground area. A better understanding of ecosystem productivity relies on two main streams of information: observations and modelling. However, both streams have severe limitations with respect to GPP: 1- no large-scale measurements exist for GPP; 2- while satellites typically measure light reflected by foliage, the light reactions (i.e., light absorption by photosystems generating reducing power and energy for carbon fixation) are still described empirically in vegetation models. Leaf fluorescence is a natural process by which chlorophyll pigments re-emit part of the light energy they absorb, at longer wavelengths and lower intensity. Chlorophyll fluorescence occurs in the photosynthetic apparatus and is tightly linked to the processes generating reducing power and energy to drive carbon assimilation. For more than 40 years, scientists have measured plant photosynthesis using Pulse-Amplitude-Modulation (PAM) chlorophyll fluorometers, instruments in which fluorescence is measured under actinic illumination and saturating pulses of light to retrieve the quantum yield of photosynthesis. A little more than a decade ago, a breakthrough in spaceborne earth observation occurred: using narrow-band observations in the oxygen A-band, the first global Solar Induced Chlorophyll Fluorescence (SIF) measurements were obtained. For the first time, the scientific community had access to observations directly linked to photosynthetic processes, raising high expectations to constrain carbon uptake by terrestrial vegetation. SIF is now measured at global scale by instruments onboard several satellites, including the Greenhouse gases Observing Satellite (GOSAT), MetOp-A/GOME-2, the Orbiting Carbon Observatory 2 (OCO-2) and the TROPOspheric Monitoring Instrument (TROPOMI).
In this study we calculate, from the different satellite SIF datasets, the quantum yield of fluorescence (ɸF), i.e. how much absorbed energy is re-emitted in the form of fluorescence. We compare satellite-retrieved ɸF 1- between the different satellite products, 2- with data measured at leaf and meso-scale and 3- with ɸF modelled by the P-model, a new-generation vegetation model based on optimality principles. Environmental drivers of changes in ɸF are also explored. Finally, we discuss difficulties arising when comparing vegetation model simulations with SIF measurements.
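The quantity at the heart of this abstract is a ratio, sketched below as a back-of-envelope calculation. The function name and numbers are illustrative assumptions; a real retrieval must first convert the narrow-band SIF radiance into a full-spectrum, hemispherically integrated emission before dividing by absorbed PAR.

```python
def fluorescence_quantum_yield(sif_emitted, apar):
    """phi_F = energy re-emitted as fluorescence / absorbed PAR
    (both in the same energy units, so the result is dimensionless)."""
    return sif_emitted / apar

# e.g. 4 units of full-spectrum fluorescence for 400 units of absorbed PAR
phi_f = fluorescence_quantum_yield(4.0, 400.0)
print(phi_f)  # 0.01, i.e. roughly 1% of the absorbed energy re-emitted
```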
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall F2)

Presentation: Monitoring Physiological and Structural Vegetation Traits through Multi-Mission Reflectance and Solar Induced Fluorescence Radiative Transfer Model Inversion.

Authors: Christiaan van der Tol, Dr Simon Blessing, Else Swinnen, Dr Ralf Giering, Dr Catherine Morfopoulos, Kris Vanhoof
Affiliations: University Of Twente (UT-ITC), FastOpt, VITO, Imperial College London
Solar induced chlorophyll fluorescence (SIF) of terrestrial vegetation has often been used as a proxy for gross primary productivity (GPP) and environmental stress. The remote sensing signal of SIF depends on three processes: the absorption of photosynthetically active radiation (aPAR), the efficiency of the alternative energy dissipation pathways including photochemistry and non-photochemical dissipation, and the structure of the vegetation that traps part of the emitted fluorescence (the escape probability). Because it is not possible to disentangle these three independent processes without using ancillary or prior information, it is useful to combine SIF with reflectance observations. Recent studies established theoretical relationships among reflectance, the escape probability of fluorescence, and aPAR, using radiative transfer theory and models. These enable the joint retrieval of aPAR and energy dissipation efficiencies from reflectance and SIF. In the context of the ESA-CCI Vegetation Parameters project, we developed an improved algorithm, OptiSAIL, for the direct, physically based inversion of vegetation traits from reflectance, including aPAR and LAI. We extended OptiSAIL with elements of the SCOPE model to enable the simulation of ‘as measured’ SIF in the viewing geometry and at the desired wavelength. This enables a radiative transfer model inversion constrained by multi-sensor reflectance observations (Sentinel-3 OLCI, SPOT4/5-VEGETATION1/2, PROBA-V, SNPP- and NOAA20-VIIRS, and MetOpA/B/C-AVHRR) combined with a TROPOMI product of SIF (TROPOSIF). A cost function is minimized which includes the following components: the difference between simulated and observed SIF for each TROPOMI pixel (~3.5 x 7 km resolution), the difference between simulated and observed reflectance (at 1 km resolution), and the difference between prior and posterior vegetation properties.
For the plant structural traits and the fluorescence emission efficiency, a mixed prior is used that includes distributions established from the literature and information based on retrievals from earlier time steps, using a parameterization of the temporal autocorrelation. The output includes, among others, ‘green’ aPAR (chlorophyll absorption), a temporally smoothed SIF product at 1-km resolution from which the viewing-angle dependence is removed but which is otherwise consistent with TROPOSIF, and posterior estimates of the fluorescence emission efficiency, all with uncertainty estimates (posterior covariance matrix). We show initial results, demonstrate insights gained from the project, and discuss potential applications beyond the scope of the CCI Vegetation Parameters project.
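The three-component cost function described above can be written schematically as below. This is a toy evaluation, not OptiSAIL: the diagonal error variances, the array sizes and all numerical values are illustrative assumptions; the real inversion minimizes this kind of function over the vegetation parameters through a radiative transfer forward model.

```python
import numpy as np

def cost(params, sif_sim, sif_obs, refl_sim, refl_obs,
         prior, var_sif, var_refl, var_prior):
    """Schematic three-part cost: SIF misfit + reflectance misfit +
    departure from the prior parameters, each weighted by its variance."""
    j_sif = np.sum((sif_sim - sif_obs) ** 2 / var_sif)
    j_refl = np.sum((refl_sim - refl_obs) ** 2 / var_refl)
    j_prior = np.sum((params - prior) ** 2 / var_prior)
    return 0.5 * (j_sif + j_refl + j_prior)

# Toy evaluation with invented numbers: parameters equal the prior,
# SIF off by one error standard deviation, one reflectance band likewise.
p = np.array([1.0, 2.0])
j = cost(p,
         sif_sim=np.array([0.9]), sif_obs=np.array([1.0]),
         refl_sim=np.array([0.3, 0.4]), refl_obs=np.array([0.3, 0.35]),
         prior=np.array([1.0, 2.0]),
         var_sif=0.01, var_refl=0.0025, var_prior=1.0)
print(j)  # 1.0: two one-sigma misfits contribute 0.5 each, the prior term 0
```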

Wednesday 25 June 16:15 - 17:45 (Hall F2)

Presentation: The Role of Vegetation Fluorescence in Photosynthesis and Carbon Cycle Research: Information Content of FLEX Science Products

Authors: Jose Moreno
Affiliations: University of Valencia
In recent years, vegetation fluorescence has become a new type of information provided by Earth Observation satellites, used globally in photosynthesis and carbon models and also in practical applications such as agriculture and food security. The unique capability of the fluorescence emitted by chlorophyll in plant leaves to reflect the internal processes occurring inside makes the fluorescence signal a highly relevant indicator for terrestrial vegetation studies, also allowing the determination of vegetation stress and of the regulation of plant photosynthesis by environmental conditions. Although global information has been provided by satellites having the needed spectral resolution to resolve the fluorescence signal, such information has been retrieved from instruments designed for atmospheric chemistry and other applications, not optimized for chlorophyll fluorescence retrievals. In particular, they cannot retrieve the full spectrum of the fluorescence emission and, most importantly, they only provide the fluorescence signal, which alone is not enough information to derive photosynthesis rates and stress conditions in vegetation. As a result, the fluorescence signals have been used as just another proxy, like another vegetation index, in some cases assuming very simplistic correlation approaches with Gross Primary Productivity or other photosynthesis/carbon related variables. The idea of using chlorophyll fluorescence for photosynthesis and carbon studies is not new; in fact, the concept for a FLEX (FLuorescence EXplorer) mission was initially proposed to ESA as an Earth Explorer back in 1998. The FLEX mission was selected for implementation in 2015 after years of developments and consolidation, both in technical and scientific aspects.
With a launch planned for 2026, FLEX is the first space mission specifically designed for global mapping of chlorophyll fluorescence, together with all the information needed to quantify photosynthesis rates and to derive quantitative estimators of vegetation stress conditions. To perform a quantitative global mapping of the actual photosynthetic activity of terrestrial ecosystems, the FLEX satellite carries the FLORIS spectrometer, specially optimized to derive the spectrally resolved vegetation fluorescence emission, and the spectral variability in surface reflectance indicative of chemical adaptations in regulated heat dissipation, with a spatial resolution of 300 m at global scale, adequate to resolve land surface processes associated with vegetation dynamics. FLEX is designed to fly in tandem with Copernicus Sentinel-3. Together with FLORIS, the OLCI and SLSTR instruments on Sentinel-3 provide all the information necessary to retrieve the emitted fluorescence and to allow proper derivation of the spatial and temporal dynamics of vegetation photosynthesis from such global measurements. OLCI and SLSTR data also help in the compensation for atmospheric effects and the derivation of the additional biophysical information needed to interpret the variability in fluorescence measurements. FLEX will provide quantitative physiological indicators that account for vegetation health status and environmental stress conditions, to better constrain global terrestrial carbon models. The science products to be provided by the FLEX mission are not restricted to the basic chlorophyll fluorescence measurements, but also include estimates of regulated and non-regulated energy dissipation, needed to quantify actual photosynthesis.
Together with canopy temperature and other variables characterizing vegetation status, Level-2 products include instantaneous photosynthesis rates and estimates of vegetation stress based on ratios between actual and potential photosynthesis and on variable PSI/PSII contributions tracking photosynthesis dynamics. Level-3 products are derived by means of spatial mosaics and temporal composites, also providing multi-temporal products such as the activation/deactivation of photosynthesis, growing-season length and related vegetation phenology indicators. Finally, by means of data assimilation of FLEX time series and ancillary information into advanced dynamical models, Level-4 products provide time series of Gross Primary Productivity and more advanced vegetation stress indicators. Such science products can be directly used by vegetation dynamical models, carbon and climate models, and different applications. In this presentation, a detailed review of FLEX science products will be provided, explaining the physical basis, the interpretation of the information content, and the proper usage in models and applications. While L2 data are intended for rapid use of the information in direct applications, the more elaborate L3/L4 products can later be used when more precise information is required, in particular for carbon cycle and photosynthesis models. An important aspect to be pointed out is that all FLEX products are provided with the corresponding individual uncertainties, propagated from the initial raw measurements to the final science products. Users can then not only apply the products in the corresponding research but also properly take all the provided uncertainties into account.
This is particularly critical for the assimilation of FLEX science products into global photosynthesis/carbon models, where uncertainties are absolutely needed, but also in applications where quantitative use of the information is preferable to qualitative interpretations. Efforts are being put in place to guarantee proper Cal/Val activities and dedicated validation networks for FLEX products. However, it must be pointed out that the use of FLEX science products in current models of photosynthesis and carbon assimilation would be suboptimal, because most such models are not designed to ingest the type of information FLEX will provide. Completely new modelling schemes, specially optimized to use FLEX science products as inputs, are being developed as part of the preparations for the FLEX scientific exploitation. Improved parameterizations in Land Surface Models can later be used as land modules in Global Dynamical Vegetation Models and integrated into Earth System Models and Global Climate Models, where the potential of the new FLEX science products can be fully exploited, hopefully also improving the predictive capabilities of such global models with respect to vegetation impacts in future climate scenarios.

Wednesday 25 June 16:15 - 17:45 (Room 1.85/1.86)

Session: A.09.02 Dynamic Antarctica: From the coastal margins to the deep interior - PART 2

Since 1992 mass loss from the Antarctic Ice Sheet has contributed approximately 8 mm to global mean sea level rise (IMBIE; Otosaka et al., 2023). Mass loss from the Antarctic Ice Sheet not only increases global mean sea level, but also impacts ocean circulation, marine biology and Antarctic sea ice. Current changes in Antarctica have been driven by both ocean and atmospheric melting of ice shelves and grounded margins. Intrusions of warm Circumpolar Deep Water onto the continental shelf around Antarctica are the primary driver of ice-shelf basal melting, while atmospheric warming and extreme events have increased supraglacial melting on ice shelves and in coastal regions, leading to firn densification and increased potential for supraglacial ponding and ice-shelf collapse.

Changes at the coastal margins, such as ice-shelf thinning, weakening and collapse, reduce the ability of ice shelves to provide buttressing to inland grounded ice. Therefore, the impact of these changes can propagate inland, with the potential to destabilize large areas of the ice sheet. Meanwhile, the dynamics and stability of the grounded ice sheet are controlled by multiple factors, including the bed topography, basal friction, subglacial hydrology, geothermal heat flux and englacial temperature.

Advances in satellite Earth observation, data processing, modelling and AI/ML are making it increasingly possible to monitor change and its impacts, and to improve understanding of these processes. This session will highlight recent advances in our understanding of the dynamics of the Antarctic Ice Sheet, including:
- Interactions between the atmosphere and ice-sheet surface: surface mass balance, firn evolution, supraglacial hydrology, and the impact of extreme events.
- Quantifying ice-shelf basal melting, its spatial distribution, and drivers.
- The dynamics and stability of inland ice: bed topography, basal friction, subglacial hydrology, geothermal heat flux.

Wednesday 25 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Antarctic Subglacial Lakes: New Active Lakes and Their Behaviour, With CryoSat-2 and Radio Echo Sounding

Authors: Sally Wilson, Anna Hogg, Richard Rigby, Noel Gourmelen, Isabel Nias, Thomas Slater
Affiliations: University Of Leeds, University of Edinburgh, University of Liverpool, Northumbria University
Lying beneath several kilometres of ice, subglacial lakes remain one of the most enigmatic components of an ice sheet system. In Antarctica so far, 146 active subglacial lakes have been detected via ice surface elevation changes associated with lake draining and filling behaviour. Observations of active subglacial lakes, which transport water beneath the ice sheet by draining via currently unknown mechanisms, are limited to the satellite era; just 36 “complete” fill-drain cycles have been observed globally thus far. Therefore, knowledge of subglacial lake variability and its causes is limited. Here, we use a decade of ice surface observations (2010-2020), from the ESA CryoSat-2 mission, to identify 85 newly detected active subglacial lakes in Antarctica, adding 37 complete drainage and 34 complete filling events to the existing archive. We delineate time-varying perimeters to investigate subglacial lake variability over time. Twenty-five clusters of lakes are detected, providing the opportunity to investigate interconnected subglacial hydrological pathways. Many new subglacial lakes are located close to the grounding zone, with implications for increased grounding line discharge, as lake drainages have the potential to increase ice speed and influence ice shelf basal melt rates. We investigate the influence of bed elevation and hydraulic potential on these 85 new active subglacial lakes, using in-situ measurements from Radio Echo Sounding (RES) surveys collated in the Bedmap3 archive, providing a validation metric for our new lake locations. Extending this analysis to the entire Antarctic active subglacial lake archive, we note that fewer than 50% of active lakes coincide with RES survey lines. Of the locations where RES campaigns have surveyed the ice base, we find that the vast majority of subglacial lakes are situated within both bedrock lows and hydraulic potential sinks.
We also assess the bed’s influence on filling and draining behaviours by comparing bed and hydraulic potential geometries across 8 behavioural patterns observed in the subglacial lake archive: (1) slow fill with fast drainage, (2) similar fill and drainage rates, (3) rapid fill with slow drainage, (4) quiescent at high stand, (5) quiescent at mid stand, (6) quiescent at low stand, (7) slow fill only, (8) slow drain only. We find that RES surveys fail to image the ice base in a number of locations due to the presence of subglacial water, which can scatter radar away from the sensor. These results provide insights into both new and existing areas of subglacial hydrological activity in Antarctica and their evolution over time, which are vital to resolve in order to understand their impacts on Antarctic Ice Sheet stability.
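The hydraulic potential sinks referred to here are commonly computed with the Shreve (1972) formulation. A minimal sketch, with assumed constant densities and the usual simplification that subglacial water pressure equals the ice overburden:

```python
RHO_ICE = 917.0     # ice density, kg m^-3 (assumed value)
RHO_WATER = 1000.0  # water density, kg m^-3
G = 9.81            # gravitational acceleration, m s^-2

def hydraulic_potential(bed_elev_m, surface_elev_m):
    """Shreve hydraulic potential (Pa): elevation term at the bed plus
    ice overburden pressure. Subglacial water flows down-gradient of
    this field, so local minima are candidate lake locations."""
    thickness = surface_elev_m - bed_elev_m
    return RHO_WATER * G * bed_elev_m + RHO_ICE * G * thickness
```

Because water is denser than ice, bedrock lows under a flat ice surface are also potential lows, which is why bedrock depressions and hydraulic potential sinks so often coincide.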

Wednesday 25 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Estimating Geothermal Heat Flow in Ice-Covered Regions Using Bayesian Inversion and SMOS Data

Authors: Judith Freienstein, Wolfgang Szwillus, Marion Leduc-Leballeur, Giovanni Macelloni, Jörg Ebbing
Affiliations: Kiel University, Institute of Applied Physics “Nello Carrara”
We propose a novel approach for estimating and reconciling geothermal heat flow (GHF) in ice-covered regions, where direct observations are sparse and geophysical data interpolations often exhibit high uncertainties. Accurate GHF estimation is crucial for understanding the thermal structure of the lithosphere and its influence on ice sheet dynamics, particularly in remote regions such as Antarctica. However, the challenges posed by limited in-situ data require innovative methods relying on multiple sources of information. Our approach integrates Solid Earth models with ice temperature profiles derived from remote sensing microwave emission data from the Soil Moisture and Ocean Salinity (SMOS) satellite mission. Geothermal heat flow is estimated using a Bayesian inversion framework, treating GHF as a free parameter to derive posterior distributions that best match the observed ice temperature profiles. This probabilistic approach allows us to quantify uncertainties and explore the parameter space of potential GHF values. Stationary thermal modelling is used to achieve convergence between ice and lithospheric temperature models at the base of the ice sheet. Through stochastic inversion, we estimate lithospheric thermal parameters, such as radiogenic heat production, using the posterior distribution derived from the ice temperature retrieval as a prior constraint in the GHF inversion. This approach enables us to derive GHF estimates that are consistent with both ice temperature profiles and Solid Earth models. We apply this methodology to the Dome A region in Antarctica, where an independent approach (by Wolovick et al. 2022) to estimate GHF based on glaciological constraints from ice-penetrating radar data provides an opportunity to validate the SMOS data in terms of their ability to constrain geothermal heat flow. This application allows us to assess the utility of SMOS-derived observations for GHF estimation.
Future work will incorporate three-dimensional modelling and integrate topographic and sedimentary data to enhance the accuracy and spatial resolution of GHF estimates.
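As a schematic illustration of treating GHF as a free parameter in a Bayesian framework (toy forward model, Gaussian likelihood and prior, hypothetical numbers; not the actual SMOS/thermal-model inversion), an unnormalized posterior can be evaluated over a grid of candidate values and then normalized:

```python
import numpy as np

def ghf_posterior(ghf_grid, forward_model, t_obs, t_sigma,
                  prior_mean, prior_sigma):
    """Normalized posterior over candidate GHF values.

    `forward_model(ghf)` returns a modelled ice temperature profile and
    stands in for the stationary thermal model constrained by the
    SMOS-derived temperatures described above."""
    log_post = np.array([
        -0.5 * np.sum(((forward_model(g) - t_obs) / t_sigma) ** 2)  # likelihood
        - 0.5 * ((g - prior_mean) / prior_sigma) ** 2               # prior
        for g in ghf_grid
    ])
    p = np.exp(log_post - log_post.max())  # subtract max for numerical stability
    return p / p.sum()
```

The spread of the resulting distribution directly quantifies the GHF uncertainty, which is the key advantage of the probabilistic approach over a single best-fit value.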

Wednesday 25 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Sources of Extreme Precipitation over Antarctica

Authors: Rajashree Datta, Dr. Adam Herrington, Dr. David Schneider, Dr. Luke Trusel, Dr. David Bonan, Dr. Jesse Nusbaumer
Affiliations: TU Delft, Climate & Global Dynamics Lab, Penn State University, University of Washington
The overall gain and loss of snow and ice at the surface of the Antarctic ice sheet is strongly driven by rare extreme events, some of which result from atmospheric rivers transporting moisture from the tropics towards the South Pole. In recent years, the Southern Ocean region has witnessed major changes, including sequential record lows for sea ice extent and warming oceans, with direct impacts on the Antarctic ice sheet itself. As an example, in 2022, the AIS experienced the first positive annual mass balance since satellite-based estimates have been available, largely due to a single snowfall event driven by an atmospheric river. Developments in the Digital Twins space have highlighted the importance of simulating the impacts of extreme events in a changing climate, and coupled Earth System Models (ESMs) are the most critical tools in reaching this goal. ESMs can provide historical, future and idealized realizations of AIS surface mass balance and near-surface conditions. Weaknesses of ESMs include sea ice biases in fully coupled models (where those constrained by observations sometimes perform better) and a relatively coarse resolution (as high-resolution models are costly and can therefore rarely manage more than limited runs). The combination of ESMs and observations provides a unique opportunity to attribute extreme events to specific observed drivers, e.g. La Niña conditions and/or reduced sea ice extent. Here, we employ a variable-resolution grid over Antarctica, using CESM2 (VR-CESM2), which retains the extensibility of an ESM while avoiding the computational costs of a climate model run at high resolution globally. This setup uses observed sea surface temperature and sea ice concentration imported from historical observations as well as specific experiments constructed from the observational record for sea ice and SSTs.
We also implement moisture-tagging (linking precipitation to a moisture source region on the globe), and produce high spatial and temporal resolution atmosphere and ice sheet surface outputs, which can be used to detect atmospheric rivers, calculate extreme precipitation events (EPEs), and estimate their impact, including the rain/snow partition over vulnerable regions near grounding lines in Antarctica. We will quantify sources of extreme precipitation within the 1990-2020 period as well as present results of two idealized experiments. These experiments were generated from observations and designed to test the impacts of sea ice decline/SST patterns tied to La Niña events (versus sea ice/SST patterns associated with El Niño conditions).

Wednesday 25 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Subglacial Hydrology in central Dronning Maud Land, East Antarctica revealed by InSAR time series

Authors: Jelte Van Oostveen, Calvin Shackleton, Yngvar Larsen, Geir Moholdt
Affiliations: NORCE Norwegian Research Centre, Norwegian Polar Institute
Beneath the Antarctic Ice Sheet, the world's largest freshwater reservoir, vast amounts of liquid water are produced, stored, and transported toward the ocean. This subglacial meltwater, generated primarily by geothermal heat, frictional heating, and the immense pressure of overlying ice, plays a crucial role in controlling ice dynamics while also transporting nutrients – essential for primary production – to the ocean. Despite its significance, subglacial hydrology remains one of the least understood aspects of the ice sheet system, primarily due to the challenges of direct observation and the complexity of the underlying processes. Current knowledge of subglacial meltwater volumes, flow pathways, and physical processes beneath the ice is largely theoretical, thus relying heavily on modeling. Observations are limited and typically focus on specific phenomena, such as active subglacial lakes, leaving many aspects of the system poorly constrained, even though such data are critical for validating models and advancing our understanding of the subglacial environment. In this study, we present a decade-long (2014–2024) time series of subglacial activity beneath slow-moving ice (0–200 m yr⁻¹) in central Dronning Maud Land, East Antarctica. Using quadruple differential InSAR derived from Sentinel-1 Interferometric Wide (IW) and Extra Wide (EW) swath data, we identify and track >1000 elevation anomalies over time. This approach enables us to trace water movement from upstream regions to the grounding line, revealing multiple hydrological pathways, identifying potential subglacial lakes, and pinpointing plume exit locations. We estimate conservative volume fluxes and, where possible, recharge rates. To interpret these findings, we cross-reference InSAR-derived water flow patterns with hydropotential water routing models, using an ensemble mean of 50 stochastically simulated bed topographies – derived from ground-penetrating radar (GPR) observations – as input.
This study provides unique evidence of subglacial hydrology, significantly expanding the observational record and highlighting the utility of Sentinel-1-based InSAR time series for investigating subglacial processes. Our findings demonstrate the potential to transform our understanding of Antarctic subglacial systems, particularly when integrated with advanced modeling approaches.

Wednesday 25 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: A fine resolution mass budget of the Antarctic Ice Sheet

Authors: Tom Slater, Inès Otosaka, Andrew Shepherd, Jan Wuite, Thomas Nagler, Lukas Krieger, Dana Floricioiu, Amber Leeson, Sainan Sun, Benjamin Davison, Professor Anna
Affiliations: Centre for Polar Observation and Modelling, Department of Geography and Environmental Sciences, ENVEO IT GmbH, Remote Sensing Technology Institute, German Aerospace Center (DLR), Lancaster Environment Center, Lancaster University, Department of Geography and Environmental Sciences, Northumbria University, School of Geography and Planning, University of Sheffield, School of Earth and Environment, University of Leeds
We divide Antarctica into 4779 contiguous ice drainage units of 10 km width at the grounding line and with a median area of 155 km² – 62 times smaller than existing delineations. Using these units, we create a fine-resolution assessment of Antarctic mass balance and explore its sensitivity to choices of input data and its similarity to estimates from satellite altimetry and gravimetry. Total ice discharge varies between 1,910 ± 30 and 2,204 ± 19 Gt/yr, and net accumulation between 1,996 ± 24 and 2,172 ± 27 Gt/yr, depending on the source of bed elevation and surface mass balance data, respectively. Our preferred estimate of Antarctic mass balance is -141 ± 30 Gt/yr over the period 2005 to 2021, based on a choice of ancillary data that leads to optimal agreement with satellite altimetry and gravimetry. Our new approach paves the way for fine-resolution reconciled estimates of the ice sheet imbalance to support improvements in numerical ice sheet models.
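The input-output method described here amounts to differencing net accumulation and ice discharge over the drainage units. A toy sketch (hypothetical numbers, simple quadrature error combination assuming independent per-unit uncertainties; not the authors' processing chain):

```python
import numpy as np

def ice_sheet_balance(acc, acc_err, dis, dis_err):
    """Total mass balance (Gt/yr) from per-unit net accumulation `acc`
    and ice discharge `dis`, with uncertainties combined in quadrature
    (assumes the per-unit errors are independent)."""
    acc, acc_err = np.asarray(acc), np.asarray(acc_err)
    dis, dis_err = np.asarray(dis), np.asarray(dis_err)
    balance = acc.sum() - dis.sum()                              # gain - loss
    error = np.sqrt(np.sum(acc_err ** 2) + np.sum(dis_err ** 2))  # quadrature
    return balance, error
```

With ~4779 units, correlated errors in the shared bed elevation and surface mass balance inputs would in practice inflate the quadrature estimate, which is one reason the choice of ancillary data matters so much here.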

Wednesday 25 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Ice structure mapping: a secondary objective of ESA’s BIOMASS SAR mission

Authors: Jørgen Dall, Anders Kusk, Julius Gräs Kokholm
Affiliations: Technical University of Denmark
Ice mapping is one of the secondary objectives of ESA’s P-band SAR mission, BIOMASS, to be launched in 2025 [1]. BIOMASS will be the first P-band SAR in space, and until recently, P-band data from Antarctica and from the dry-snow zone of Greenland have not been acquired with airborne SAR, either. However, newly acquired airborne P-band SAR data have enabled an assessment of the potential of P-band for ice structure mapping in Antarctica, hence characterization of melt and compaction processes. First results will be presented. P-band penetrates deeper into ice sheets, ice shelves and glaciers than the higher frequencies that have been used by space-based SAR systems until now. The deeper penetration is beneficial for structural mapping, e.g. mapping of firn depths, ice inclusions in the firn, and aquifers. A more experimental application is mapping of the basal topography of ice shelves. A deeper penetration leads to scattering from more stable scatterers and hence longer coherence time, which is beneficial for ice velocity mapping with speckle tracking and interferometry. However, ice velocity mapping is not addressed in this presentation, as data with appropriate temporal baselines have not been acquired. From 11 December 2023 to 14 February 2024, DTU Space completed an airborne radar campaign in Antarctica in support of BIOMASS. Data were acquired with POLARIS, a polarimetric airborne SAR and ice sounding radar that has been developed by DTU Space, commissioned by ESA [2]. Like BIOMASS, POLARIS is a P-band radar. BIOMASS has two data acquisition phases with orbits designed for Polarimetric SAR interferometry (PolInSAR) and SAR Tomography (TomoSAR), respectively. Likewise, the POLARIS flight tracks are designed for both PolInSAR and TomoSAR. In the percolation zone, a characterization of ice sheet structure with PolInSAR has previously been attempted [3][4]. However, 90% of the Antarctic ice sheet is in the dry snow zone where the ice surface never melts. 
Specific meteorological conditions also distinguish Antarctica from other snow-covered regions. Longitudinal snow dunes are widespread, and in the Dome C region they result in a highly anisotropic surface roughness [5]. With POLARIS, PolInSAR data have been acquired at two sites with different snow dune orientation. Previously, TomoSAR has also been used to characterize ice structure, but that, too, was in the percolation zone [6][7]. With POLARIS, both PolInSAR and TomoSAR data have been acquired over the Shackleton ice shelf. This allows for the development of more realistic surface and volume scattering models than the simple random-volume-under-surface model that has previously been used in PolInSAR [3]. The POLARIS data also enable an assessment of the feasibility of mapping the basal topography of ice shelves with TomoSAR. Lacking P-band ice shelf data, an assessment has previously been attempted with ice parameters derived from P-band ice sounder data [8]. On a coarse scale, the basal topography of ice shelves can be inferred from the surface topography, but finer details are of interest, as (1) the ice shelf is not in hydrostatic equilibrium near the grounding line, and (2) basal channels can enhance the circulation of warm ocean water under ice shelves. In turn, ocean water intruding beneath the ice sheet in the grounding zone can accelerate the mass loss. The BIOMASS mission is designed for TomoSAR forest mapping, but since ice shelves are much thicker than the height of a forest, and since the attenuation of radar signals in ice is very different from that in forests, the feasibility of the ice shelf mapping remains to be demonstrated. The impact of ionospheric scintillations is worse at high latitudes, and it must be emphasised that the methods that are developed for mitigation of ionospheric scintillations in regard to BIOMASS forest mapping may prove insufficient in the polar regions.
This remains a risk until BIOMASS data become available during the commissioning phase.
References:
[1] S. Quegan, T. Le Toan, J. Chave, J. Dall, J-F. Exbrayat, D.H.T. Minh, M. Lomas, M.M. D’Alessandro, P. Paillou, K. Papathanassiou, F. Rocca, S. Saatchi, K. Scipal, H. Shugart, T.L. Smallman, M.J. Soja, S. Tebaldini, L. Ulander, L. Villard, M. Williams, “The European Space Agency BIOMASS mission: Measuring forest above-ground biomass from space”, Remote Sensing of Environment, vol. 227, pp. 44-60, June 2019.
[2] J. Dall, S.S. Kristensen, V. Krozer, C.C. Hernández, J. Vidkjær, A. Kusk, J. Balling, N. Skou, S.S. Søbjærg, E.L. Christensen, “ESA’s polarimetric airborne radar ice sounder (POLARIS): Design and first results”, IET Radar, Sonar & Navigation, vol. 4, no. 3, pp. 488-496, June 2010.
[3] J. Dall, K.P. Papathanassiou, H. Skriver, “Polarimetric SAR interferometry applied to land ice: modeling”, Proceedings of the EUSAR 2004 conference, pp. 247-250, Ulm, May 2004.
[4] G. Fischer, K.P. Papathanassiou and I. Hajnsek, “Modeling Multifrequency Pol-InSAR Data from the Percolation Zone of the Greenland Ice Sheet”, IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 4, pp. 1963-1976, 2019.
[5] M. Poizat, G. Picard, L. Arnaud, C. Narteau, C. Amory, F. Brun, “Widespread longitudinal snow dunes in Antarctica shaped by sintering”, Nature Geoscience, vol. 17, pp. 889-895, September 2024.
[6] F. Banda, J. Dall, and S. Tebaldini, “Single and multipolarimetric P-band SAR tomography of subsurface ice structure”, IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 5, pp. 2832-2845, May 2016.
[7] G. Fischer, M. Jäger, K.P. Papathanassiou and I. Hajnsek, “Modeling the Vertical Backscattering Distribution in the Percolation Zone of the Greenland Ice Sheet with SAR Tomography”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 11, pp. 4389-4405, 2021.
[8] J. Dall, T. Taillade, M. Engdahl, “Mapping of the basal topography of ice shelves from space”, Proceedings of the IEEE 2023 International Geoscience and Remote Sensing Symposium, Pasadena, pp. 7750-7753, July 2023.

Wednesday 25 June 16:15 - 17:45 (Room 0.49/0.50)

Session: F.05.01 Satellite EO data benefit the economy, the environment and our societies: the evidence and the stories

Understanding and quantifying the societal benefits derived from the use of EO is critically important in prioritizing, using, and enhancing the value of EO in decision making. Earth Observation data and service providers strive to improve the understanding of the benefits that society derives from their own missions and from EO in general, in order to improve services and products and to substantiate prospective analyses for the planning of future missions.
From various interactions and studies, there is overwhelming evidence that the benefits generated by EO are substantial – they include economic savings and efficiency gains, improved compliance with regulations, reduced pollution and improved understanding of climate change, to mention just a few. However, while recent years have witnessed a proliferation of examples and use cases, quantitative and structured assessments have yet to gain traction.
The proposed session pursues improvements in EO data use impact assessments and in the understanding of the mechanisms through which EO data use can generate benefits for society. The session will convene the international community that is undertaking similar efforts and invite them to exchange good practices. The focus will be on exemplary case studies and methodologies that have been applied successfully and credibly.

Wednesday 25 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: A Comprehensive Framework for Assessing Socio-Economic and Environmental Benefits of ESA Activities, Including EO Missions

Authors: Marta Salieri
Affiliations: EUROPEAN SPACE AGENCY (ESA)
Over the past decade, several pilot studies have been conducted to evaluate the environmental impacts of the European Space Agency's (ESA) activities, with Earth Observation (EO) missions among them. EO systems play a critical role in monitoring environmental changes, supporting climate action, and contributing essential data for sustainability goals worldwide. However, for sustainability assessments to gain meaningful traction and provide enhanced value in decision-making, these efforts must be complemented by a unified, actionable framework. Building on a series of publications, impact assessments, and impact studies, including the present activity, ESA aims to address this gap by developing a standardized approach with carefully selected indicators tailored to assess the sustainability benefits of space projects. This initiative will enable ESA to better understand and mitigate the environmental footprint of EO and other space activities, while promoting best practices across the industry for more sustainable space exploration. This framework supports ESA's broader Sustainability Strategy, which aims to provide project and programme managers with guidelines to:
- Assess the alignment of ESA projects and activities with sustainability objectives.
- Highlight the positive contributions of new ESA projects and activities to sustainable societal development.
- Identify and evaluate key objectives and associated indicators/criteria for measuring both the positive and negative sustainability and environmental impacts of ESA projects and activities.
To refine its sustainability impact methodology, ESA has selected two activities from each of its main programme directorates as case studies, based on their potential contributions to sustainability goals. Simplified Life Cycle Assessments (LCAs) will be applied to some of these activities, while others will serve as benchmarks without LCAs.
The selected case studies with LCA applied are divided into two general categories:
- Technology case studies: these focus on reference systems or technologies with alternative, more environmentally friendly solutions.
- Space mission case studies: these consider a reference scenario for a given sector or application without space systems. The alternative scenario includes a full space mission (e.g., navigation, communications, Earth Observation) that enables various applications like monitoring, air traffic management, or transport optimization.
LCAs for the selected studies are being conducted to identify environmental hotspots and propose eco-design mitigation strategies aimed at reducing the negative impacts identified through the LCA. Ultimately, this approach enables the assessment of the net environmental impacts of space systems. Two case studies from the Earth Observation Programme (EOP), the ESA Green Transition Factory (GTIF) and Soil Moisture and Ocean Salinity (SMOS), are being undertaken to serve as benchmarks for other activities within the directorate to evaluate their benefits. Based on lessons learned from the case studies and on a supporting workshop, a conceptual framework enabling the systematic evaluation of ESA activities in terms of their sustainability impact will be developed. Case studies scheduled for 2024 and 2025 will deliver quantitative and qualitative evidence presented in a straightforward, visual and easily communicable manner. The results of the sustainability impact studies will enable ESA to present at this symposium a certified methodology for a solid sustainability and environmental impact analysis of the benefits of space projects.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Water management in Finland: Open EO-service providing status assessment of marine and lake water quality

Authors: Dr Jenni Attila, MS Hanna Alasalmi, MS Vesa Keto, MSc Sakari Väkevä, M. Sc. Jesse Anttila, Eero Alkio, Sampsa Koponen, Mikko Kervinen, Eeva Bruun
Affiliations: Finnish Environment Institute (syke)
The requirements set for defining the status of water areas, for example by the European Union Water Framework Directive (WFD), the Marine Strategy Framework Directive (MSFD), and the Habitats Directive (HD), share common core elements with respect to status assessments. The status assessment is done using discrete classes for water bodies, sub-areas or critical sites representative of a certain area of interest. For the HD over marine areas these are marine habitats, such as lagoons and small narrow bays, whereas in the WFD and MSFD they are coastal water bodies. Typically, chlorophyll a (chl-a), turbidity, and surface temperature are among the main variables considered. In addition, nutrients such as total phosphorus, and the transparency of the water, usually determined as Secchi disk depth, are accounted for. During 2024, the 4th WFD reporting has been underway in EU Member States (MS). In Finland, the obligations set by the EU for WFD reporting are demanding, concerning about 4,600 lakes and more than 270 coastal water bodies, often with irregular shapes and a multitude of islands. Most of these water areas represent a large range of optically complex waters: absorption-dominated humic waters that form one extreme of Case II waters (aCDOM at 400 nm ranges between 0.1 and 29 m−1, and even higher over humic-rich lake water bodies). The Tarkka web application (tarkka.syke.fi) by the Finnish Environment Institute is used to provide these EO data together with the station sampling data collected within the reporting unit, the water body. In addition to the authorities responsible for status assessment, the service is open to the public, which has made it popular among citizens and the media. What makes it interesting for the public is that the service provides a comprehensive archive of true color images over the Baltic Sea and its drainage basin.
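The discrete status classes described above can be illustrated with a short sketch. The class boundaries, water-body identifiers and chl-a values below are hypothetical, chosen only for the example; real WFD class boundaries are type-specific and set nationally.

```python
import numpy as np
import pandas as pd

# Hypothetical chl-a class boundaries (micrograms/l) for one water-body type
bounds = [2.0, 4.0, 8.0, 16.0]
labels = ["high", "good", "moderate", "poor", "bad"]  # WFD ecological status classes

# Hypothetical summer-mean chl-a per water body (identifiers invented)
summer_mean_chl = pd.Series({"WB-001": 1.4, "WB-002": 5.6, "WB-003": 21.0})

# Map each water body's mean onto a discrete status class
status = pd.cut(summer_mean_chl, [0] + bounds + [np.inf], labels=labels)
print(status)
```

In practice such a classification would be applied per water-body type, combining EO statistics with station sampling, as the abstract describes.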
According to a recent ‘Sentinel benefits for society’ story (EARSC, 2022), ‘Using satellite data is especially helpful in a country like Finland where the large amount of water bodies would imply enormous associated costs for authorities should they have to use traditional monitoring methods across the whole country. Sentinel data therefore helps authorities to improve water quality at a lower cost, which in turn improves the quality of life for citizens, aids in the protection of biodiversity and helps to ensure environmental sustainability’. The primary users of the Tarkka service are the regional environmental centres and Syke. Furthermore, the Ministries of the Environment and of Agriculture and Forestry obtain strong regulatory benefits, as they have access to better information - more comprehensive reports and data from Syke and the regional centres - on which to base new policy decisions (EARSC, 2022). WFD status assessment in Finland has utilized EO chl-a for three rounds already: the first time for the 2nd classification period in 2012 over coastal waters (Attila et al., 2018). During the 3rd classification round, Copernicus Sentinel-2 MSI-derived chl-a was utilized over most of the 250 coastal and over 2,000 lake water bodies. For the 4th WFD period, we will increase the use of EO-based information in status assessment by extending the availability of EO data to approximately 800 new lake water bodies and including satellite-based observations of total phosphorus over the south-west archipelago and certain nutrient-rich Finnish lakes. The satellite instruments used are Sentinel-2 A&B MSI, Landsat 8&9 OLI, and Sentinel-3 OLCI. The water quality parameters provided are chl-a, turbidity, Secchi depth and surface temperature. For the forthcoming 4th WFD reporting, EO data and a separate analysis part of the Tarkka service are linked directly to the national water management system used by the authorities responsible for status assessment.
EO is taken into account in the WFD assessment via modelling. The backbone of the Tarkka service is the STATUS database, which contains EO-based water quality information for the regional authorities responsible for the WFD assessment. The STATUS database is used to store water-body-based daily statistics of EO water quality (chl-a, turbidity, surface temperature). Time series of observations, starting from 2013, are maintained via automated daily processing, including interfaces for following the data flow and, if needed, quality assurance by the operator. In this presentation we will give an overview of the solutions made to integrate EO into national water management, directive reporting and status assessment. We also briefly review cross-comparisons between EO chl-a and monitoring-site observations, which show good correlation over various water types, and the correspondence between EO- and station-sampling-based WFD status assessments of over 3,000 lakes and 260 coastal water bodies in Finland. Our presentation will focus on the environmental, regulatory, innovation, science & technology and societal benefits of the web application, which serves on average 300 individual users daily. Although Tarkka was developed with authorities in mind, citizens and society benefit from the economic, societal and environmental impacts of the service. References: EARSC (2022). Sentinel benefits for society - Water management in Finland, https://earsc.org/sebs/wp-content/uploads/2022/11/Water-quality-in-Finland-Final.pdf Attila, J., Kauppila, P., Alasalmi, H., Kallio, K. K., Keto, V., Bruun, E. (2018). Applicability of Earth Observation chlorophyll a data in assessment of water status via MERIS with implications for the use of OLCI sensors. Remote Sensing of Environment, 212, 273-287. https://doi.org/10.1016/j.rse.2018.02.043
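As a rough illustration of the kind of water-body-based daily statistics such a database stores, the following sketch aggregates hypothetical per-pixel EO retrievals into daily per-water-body summaries. All column names and values are invented for the example and are not the actual STATUS schema.

```python
import pandas as pd

# Hypothetical EO retrievals: one row per valid pixel, tagged with water body and date
obs = pd.DataFrame({
    "water_body": ["Lake_A", "Lake_A", "Lake_A", "Lake_B", "Lake_B"],
    "date": pd.to_datetime(["2023-07-01"] * 5),
    "chl_a": [5.2, 6.1, 5.8, 12.4, 11.9],  # micrograms/l
})

# Daily per-water-body statistics, as a STATUS-style database might store them
daily = (
    obs.groupby(["water_body", "date"])["chl_a"]
       .agg(n_pixels="count", mean="mean", median="median", std="std")
       .reset_index()
)
print(daily)
```

In an operational chain this aggregation would run in the automated daily processing, appending each day's statistics to the time series that started in 2013.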
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: The Sentinel Benefits Study Methodology - A Practical Guide for Practitioners to Evaluating the Benefits Derived from the Use of Earth Observation Data

Authors: Lefteris Mamais, Mr. Geoff Sawyer, Mr. Dimitrios Papadakis
Affiliations: Evenflow, EARSC
The Sentinel Benefits Study (SeBS) was a project managed by the European Space Agency and funded by the European Commission. The study, which ran from March 2017 to July 2024, developed cases showing how EO-derived products based on data generated by one or more Sentinel satellites deliver value to society and citizens. The Sentinel satellites form a crucial part of the EU’s Copernicus Programme, providing space-based observations on a full, free and open basis. Data coming from the Sentinels – together with other data collected by contributing missions and ground, sea or airborne instruments – are used to support key economic and societal areas such as agriculture, insurance, disaster management and climate change monitoring. Sentinel data are thus a key component of the Copernicus Programme, contributing to the Copernicus Services and serving as a crucial data source used by companies and public organisations to deliver products and services. SeBS developed a rigorous methodology designed to be applicable to any type of EO data and/or derived services. The core of the SeBS methodology is to understand how a specific EO-derived service is used by an organisation which, in turn, benefits others and ultimately society and citizens at large. This rests on a robust and structured framework which analyses how value is generated across six well-defined dimensions: Economic, Environmental, Regulatory, Innovation & Entrepreneurship, Science & Technological Research, and Societal benefits. The approach is bottom-up, based on the evaluation of a value chain; in this it differs from many cost-benefit studies, which work top-down at a macro-economic scale. By the end of SeBS, 22 full cases had been analysed along with a further 15 short cases. The latter were introduced during the study as a way to tell the story without a deep analysis of the economic benefits.
At the core of the analysis, SeBS deploys a value chain approach to study the benefits brought about by the use of Copernicus Sentinel data. This starts with the identification and selection of cases wherein Copernicus Sentinel data are operationally used and contribute to better decisions or actions by different actors along the value chain. This step requires a careful selection of each case to be analysed against a set of key criteria. If several cases are being looked at, there may also be a filter on the types of case in order to meet a set of portfolio requirements. Typically, key criteria include a requirement for operational use of a service based, at least in part, on Sentinel data. The participation of an engaged service provider and a primary user of the service who are willing to share information is also an absolute must. Once a case has been selected, the core analysis focusses on the value chain in order to “develop the picture”. This starts with an in-depth investigation of how the EO service fits into the business process of the primary user. Typically, each value chain analysed has four tiers: tier 1 is the “supplier” of the EO service; tier 2 is the “primary user”; tier 3 comprises the “secondary users”, sometimes referred to as “other beneficiaries”, i.e. all the specific users of the service provided by the primary user; and tier 4 is the wider community of “citizens and society”. There may also be tertiary beneficiaries which sit alongside the main value chain, such as government bodies or other institutions with a direct responsibility or interest in the sector. This framework was developed over time as more SeBS cases were analysed and clear patterns emerged in terms of where and how value was manifesting. Within this step, analyses of the quantitative and qualitative benefits for each stakeholder, in each tier and against each value dimension, are performed.
For each dimension of value, SeBS has developed sets of “indicators” which can be used to scrutinise the existence or magnitude of benefit. For example, economic benefits can materialise against indicators such as capital expenditure reduction, cost savings, increased revenues or increased employment. Environmental benefits can be documented against indicators such as reduced pollution or reduced natural resource depletion. Scientific benefits can manifest against indicators such as the number of academic publications or increases in research budgets. The final step is to extend the analysis to the required boundary, considering three aspects: geographic extension, increased market penetration and improved technological maturity. Thus, SeBS would typically carry out an extrapolation of the benefits to the national boundary, especially for benefits in the economic dimension. A key lesson learned in SeBS has been the importance of ensuring that not only is the primary user fully committed to supporting the case analysis but that their hierarchy also supports it; several times the study came up against such barriers halfway through an analysis. Moreover, storytelling became a very important part of the work. Not all cases are suited to being fully analysed, most commonly because the EO service is not yet fully operational or because the key stakeholders are unable or unwilling to share key business information. Where detail or depth of understanding is limited, this can lead to a “short case”, in which SeBS provides the essence of the story, including details of benefits in several dimensions, but does not go as far as a full economic analysis.
All methodological considerations developed in SeBS are publicly accessible through a dedicated website and through scientific publications, with the hope that they can form a tool in the hands of practitioners seeking to understand the value generated by EO data across different use cases.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Benefits to who? Unlocking the Potential of Inland Water Monitoring through Earth Observation for Low-Income Countries

Authors: Dr. Harriet Wilson, Evangelos Spyrakos, Andrew Tyler
Affiliations: University Of Stirling
The work presented in this abstract includes input from many contributors who participated in the activities. Water quality and quantity have a strong impact on a country's gross domestic product (GDP), for example through lowering agricultural yields and increasing healthcare pressures. Water scarcity can also have a disproportionate impact on marginalised communities, which are often forced to live in areas with less available safe water. Meanwhile, reporting against the UN’s Sustainable Development Goals (SDGs) has demonstrated stark contrasts in the availability of in-situ water data between low- and high-income countries. Earth observation (EO) has the transformative potential to democratise access to water quality data, offering significant benefits for public health, food security, and climate resilience in both low- and high-income countries. In this study we investigated the current use, the opportunities, and the challenges to be overcome to realise the potential of Earth observation for inland water monitoring, using a number of tools for engaging with EO and water experts, private companies, NGOs, policy-makers, and governmental and intergovernmental organisations.
Our findings are based on a series of interconnected research activities: 1) Survey: we surveyed 240 individuals from the water sector to understand the barriers to uptake and to compare knowledge and attitudes between low- and high-income countries; 2) Hackathon: we gathered inputs from 35 participants from 22 countries through a curated hackathon experience to find solutions for realising the potential of EO for localised water management challenges; 3) Co-design workshop: we organised a UNEP-funded seed-funding co-design workshop with SDG focal points from four African low-income countries facing transnational water issues; 4) Inclusive open EO network: ongoing activities from an inclusive open-access Earth observation network within the United Nations Environment Programme’s World Water Quality Alliance (WWQA). The findings of these activities represent the ideas and perspectives of a wide snapshot of the current water community, across different parts of the water value chain and different countries, from organisations to concerned community leaders. Analysis has revealed patterns in the current distribution of benefits derived from EO within the water sector, and actions to increase the realised benefits among the communities and countries that have the most to gain. Assessment of the global survey demonstrated that, overall, skills and access determined the perceived usefulness of and familiarity with EO. Further analysis, however, revealed that the country of work was a far greater determinant of EO use than any other factor, and that water professionals from low-income countries perceived a greater lack of access, infrastructure, capacity-building activities and awareness.
This result, further confirmed by EO market reports, demonstrates that while the last decade has seen rapid advancements in EO capabilities, including lucrative downstream sectors, the benefits of EO for water monitoring remain disproportionately concentrated in high-income countries, potentially exacerbating disparities. Expanding on these results, the authors hosted a hackathon event to generate a greater understanding of the root causes of low uptake of EO among lower-income countries, and innovative solutions that can help to address them. A broad summary of these ideas is: 1) promoting awareness of the potential benefits of EO through media campaigns for decision makers and communities; 2) creating an accessible hub to collate all available tools and credible resources for users; 3) developing education strategies focused on the Global South; and 4) establishing an Intergovernmental Panel for Water Quality to improve data integration and inform policy. Practical lessons and a wealth of user stories have also been gathered from the co-design workshop with SDG focal points and from the WWQA EO network, which brings together EO experts, local water forums and interested members of the public. Insights include effective and less effective techniques for knowledge-sharing across cultural, geographical and language barriers, as well as ideas for presenting technical information in a way that creates interest among non-technical audiences. Monitoring critical resources such as inland water has impactful implications for economic growth, public health, food security, and more. These collective activities demonstrate an encouraging enthusiasm from a community of (potential) EO users globally. The ideas and perspectives gathered in this study are essential for driving EO innovation, research, and industry towards a more equitable future.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: The Value Generated by ERA5

Authors: Catherine Mandler, Lefteris Mamais
Affiliations: ECMWF
ECMWF’s ERA5 is a globally recognized meteorological dataset which is considered a leader in its field. Meteorological reanalysis datasets combine observational meteorological data with numerical weather models to “fill in the gaps” left by observations alone. Meteorological reanalyses have transformed how sectors that rely on anticipating meteorological activity operate on a routine basis. These changes can be seen in the academic, public and private sectors. Our study aimed to evaluate the impacts of ERA5 and the benefits it brings to society and the economy. ERA5 is one of the top choices among meteorological reanalysis datasets globally. It provides high-quality data on the past, present and future states of the climate, and tools to enable climate change adaptation and mitigation strategies for the policy and business worlds. The ERA5 reanalysis starts in January 1940 and is updated in near real-time on a consistent basis. It is the fourth iteration of the ERA meteorological reanalysis series produced by ECMWF with the support of the European Commission and the ECMWF Member States. The primary objective of the study was to evaluate the socio-economic impact of the ERA5 dataset from its final release in 2018 to the end of 2023. The SeBS framework was utilized in this study because it readily accommodates a multidimensional, 360-degree analysis of benefits. The SeBS framework requires that a selection of use cases be chosen which demonstrate operational use of ERA5 and contribute to better decisions or actions by the primary user and various actors down the value chain. The impacts and benefits are assessed at various levels of the value chain, qualitatively and quantitatively, across six dimensions: Economic, Environmental, Regulatory, Innovative, Scientific and Societal.
Our approach included an expansive interview campaign, multidisciplinary desk research and key perspectives from industry and academic experts. Across all sectors, ERA5 is considered an essential input relied upon by respondents. Many expressed gratitude for all that ERA5 provides, and anticipation for the upcoming ERA6 reanalysis dataset. Monetized and non-monetized benefits were analysed in a multidimensional way to reflect the impacts and benefits of ERA5 on the sectors of analysis. Our study evaluated the impacts and benefits of ERA5 in nine sectors: Energy; ML & AI; Agriculture; Water Management; Tourism and Cultural Heritage; Public Health; the Climate Science-Policy Nexus; Media and Public Awareness; and NWP. Ultimately, 10 use cases were chosen, including two from the energy sector. The findings from five of the nine selected sectors were monetized. Value of Information methodology was applied to two of the monetized sectors, the Climate Science-Policy Nexus and Media and Public Awareness. Nuanced qualitative findings highlight how ERA5 is being used to create and add value in all of the sectors selected for this study. All 79 respondents expressed gratitude and enthusiasm for the role ERA5 plays in their routine work. Please read the final report, “The Value Generated by ERA5”, for more details on the methodology and insights from the findings. The findings show how ERA5 has supported various parts of society and the global economy. Additionally, they highlight ways in which ERA5 supports advancement towards EU policy goals, such as the EU Green Deal and RePowerEU, and may shed light on opportunities for ERA5 and the overall ERA reanalysis series to offer similar support in the future.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Use Of Earth Observation To Quantify The Decoupling of Economic Growth And Air Pollution

Authors: Michael Bittner, Rosina Engert, Renée Bichler
Affiliations: University of Augsburg, University of North Carolina, German Aerospace Center (DLR)
The growth of an economy based primarily on the combustion of fossil fuels leads to the production of air pollutants such as nitrogen dioxide. Economic growth and the emission of nitrogen dioxide are therefore causally linked. In line with the goals formulated by the United Nations, the EU Green Deal provides for a sustainable decoupling of the economy from air pollution. According to estimates by the German Bundesbank, the measures required to achieve this will require considerable investment. Indications as to which measures are particularly effective, and which are not, are therefore of great interest. Satellite-based measurements of air pollutants offer the possibility of recording decoupling processes worldwide, for any region, consistently and with uniform quality. A so-called "economic indicator" is presented, aimed at recording the decoupling of economic activity and air pollution for any region in the world. Satellite-based observations of tropospheric NO2 column concentrations and economic data from the OECD are used. These data are coupled with spectral analytical methods (e.g. wavelet analysis) to enable the most sensitive diagnostics possible. It is demonstrated that fluctuations in economic activity can be well captured. In this way it is possible, in principle, to assess which measures for decoupling economic activity and air pollution have proven particularly efficient. In addition, we present initial indications that the effect of mobile working may also be recognized in the tropospheric nitrogen dioxide column concentration, as commuting traffic has a significant impact on the NO2 load. Commuters are expected to no longer follow a regular five-day (workdays) and two-day (weekend) cycle but a much more irregular pattern. This finding is based on a daily time series of tropospheric NO2 column density data from the Ozone Monitoring Instrument (OMI) from 2004 until 2024 for the greater Milan area, Italy.
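The basic idea of such a decoupling indicator can be sketched with synthetic data. The abstract's diagnostics use spectral methods such as wavelet analysis; the sketch below substitutes a much simpler pair of measures (a trend ratio and a correlation of detrended fluctuations), and all series and coefficients are invented for illustration.

```python
import numpy as np

# Synthetic annual series (illustrative only): an economic index and tropospheric
# NO2 columns sharing a business-cycle fluctuation but with opposite trends,
# mimicking a decoupling economy.
years = np.arange(2005, 2025)
cycle = np.sin(2 * np.pi * (years - 2005) / 7.0)   # ~7-year business cycle
econ = 100 + 2.0 * (years - 2005) + 5.0 * cycle    # growing economic index
no2 = 8.0 - 0.15 * (years - 2005) + 0.4 * cycle    # declining NO2 column load

def detrend(x, t):
    """Remove a linear trend so only fluctuations remain."""
    a, b = np.polyfit(t, x, 1)
    return x - (a * t + b)

# Trend ratio: a negative value indicates decoupling (economy up, pollution down)
trend_ratio = np.polyfit(years, no2, 1)[0] / np.polyfit(years, econ, 1)[0]

# Fluctuation coupling: correlation of the detrended series captures whether
# economic ups and downs still imprint on the NO2 signal
r = np.corrcoef(detrend(econ, years), detrend(no2, years))[0, 1]
print(trend_ratio, r)
```

A wavelet-based analysis would resolve this coupling as a function of period and time rather than as a single correlation, which is what makes it sensitive to events such as the shift to mobile working.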
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G2)

Session: A.08.04 Submesoscale air-sea interactions: understanding, observability and impact

The interactions between oceans and atmosphere occurring over 70% of the Earth’s surface shape the environment where humanity lives, by regulating heat, freshwater, gas, and momentum transfer at the surface. For example, the recent increase of carbon dioxide (CO2) and heat into the atmosphere is largely buffered by the absorption from the oceans, which now contain about 25% of the anthropogenic CO2 (Le Quéré et al., 2018) and about 90% of the excess heat with respect to the mid 20th century (Arias et al., 2021). Complex interactions between the ocean and atmosphere are thus essential for understanding Earth's dynamic climate systems. High-resolution observations are crucial for capturing the small-scale processes that significantly influence the Earth's energy budget and contribute to global sea level changes. These processes span a range of scales and involve intricate exchanges of momentum, heat, and gases at the air-sea interface, critical for shaping weather patterns and broader climate changes.

Improving our understanding and modeling of these interactions requires integrating advanced observational techniques with model developments, addressing significant gaps in our current knowledge and capabilities.
This improved understanding is essential for accurately predicting Earth system dynamics and assessing environmental impacts, underscoring the importance of improving Earth Observation capabilities to advance our knowledge of ocean-atmosphere interactions.

The observation of submesoscale air-sea interactions in Synthetic Aperture Radar (SAR) imaging data has been a topic of intense research over the last two decades. The recently launched Surface Water and Ocean Topography (SWOT) mission introduces a novel means to identify and study air-sea interactions through the provision of high-resolution sea surface height (SSH) information (e.g. contrasts over fronts obtained from SSH measurements), possibly combined with complementary near-collocated wind and SST observations. Sun glitter observations can also provide a wealth of information for understanding these small-scale processes.
In the future, ESA’s EE10 mission, Harmony, will provide multistatic SAR imaging capabilities through the addition of passive bistatic SAR receivers, together with spatially and temporally collocated multi-view Thermal Infra-Red (TIR) observations for SST and cloud motion measurements. This will make it possible to resolve high-resolution winds, waves, surface currents and sea surface temperature differences at the air-sea interface.

This session is dedicated to the progress in understanding air-sea interactions from Earth Observation data, to identify gaps and opportunities in our ability to observe, model or parameterize such processes/mechanisms in air-sea coupling. Multi-sensor techniques to combine different data sources are encouraged.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G2)

Presentation: Advancing Ocean Wind Stress Retrieval: Insights from Sentinel-1 SAR and In Situ Observations

Authors: Anis Elyouncha
Affiliations: University Of Gothenburg
Ocean wind stress is a fundamental driver of atmosphere-ocean interactions, facilitating the exchange of momentum, heat, and gases. These exchanges profoundly influence climate dynamics, weather systems, ocean circulation, and marine ecosystems. Accurate measurements of ocean surface wind stress are critical for understanding these processes and improving predictions of the complex atmosphere-ocean interactions. Despite its importance, current wind stress observations rely primarily on sparse, localized, and costly in situ measurements. Synthetic Aperture Radar (SAR) provides a unique opportunity to fill this gap by responding directly to ocean surface roughness, which is influenced more by wind stress than wind speed. However, SAR is typically used to derive wind speed, which is then converted to wind stress using a prescribed drag coefficient—introducing uncertainties. High-resolution wind stress fields are particularly critical for forcing regional ocean, wave, and weather models. This study explores the feasibility of directly retrieving wind stress from SAR data, bypassing the uncertainty associated with drag coefficient parameterization. Momentum exchange between the ocean and atmosphere is influenced by sea state, including the complex interaction of wind-driven waves and swell. Existing algorithms, while incorporating sea state effects, do not fully account for wave propagation direction or distinguish between wind waves and swell. This is particularly limiting in mixed swell-dominated seas. Using SAR-derived wave spectra, we evaluate the effects of swell parameters and their directional alignment with wind on wind stress estimation. Our analysis leverages Sentinel-1 SAR data alongside two in situ eddy correlation datasets. The first dataset, from the Ocean Observatories Initiative (OOI), was collected in the Southern Ocean in 2016, representing a fetch-unlimited, swell-dominant environment. 
The second dataset, from the Integrated Carbon Observation System (ICOS), covers the period 2017–2024 at the Östergarnsholm station in the Baltic Sea, a fetch-limited, wind-sea-dominant environment. These datasets provide complementary perspectives on diverse wind and wave conditions. We co-locate in situ wind stress measurements with SAR backscatter and associated meteorological, oceanic, and wave parameters to analyze the relationship between normalized radar cross section (NRCS) and wind stress. Comparisons are made between wind stress estimated from bulk drag coefficient parameterization and direct in situ measurements. Additionally, we investigate the influence of wave age, steepness, and wind-wave misalignment on wind stress and drag coefficient variability. Our findings demonstrate a clear monotonic relationship between NRCS and wind stress, which can be modeled using a geophysical model function (GMF) analogous to those used for wind-NRCS relationships. The results also highlight the critical influence of sea state parameters—particularly wave age, steepness, and directional misalignment—on the NRCS-stress relationship. While additional data collection is required to refine and validate this approach, these findings lay the groundwork for a more accurate and direct retrieval of ocean wind stress from SAR observations, with significant implications for regional and global modeling efforts.
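The proposed retrieval rests on a monotonic NRCS-stress relationship that can be modelled with a geophysical model function (GMF). The sketch below fits a simple power-law GMF to synthetic co-located samples and then inverts it to retrieve stress from a measured NRCS; the power-law form, coefficients and noise level are assumptions for illustration, not the study's actual GMF.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

# Synthetic co-located samples (illustrative): wind stress tau [N/m^2] and NRCS
# sigma0 in linear units, generated from an assumed power law with 5% noise
tau = rng.uniform(0.02, 0.5, 200)
sigma0 = 0.8 * tau**0.55 * (1 + 0.05 * rng.standard_normal(200))

def gmf(tau, a, b):
    # Monotonic power-law model of the NRCS-stress relationship
    return a * tau**b

# Fit the GMF coefficients from the co-located samples
(a_hat, b_hat), _ = curve_fit(gmf, tau, sigma0, p0=(1.0, 0.5))

# Invert the fitted GMF to retrieve stress directly from a measured NRCS,
# bypassing a wind-speed retrieval and drag-coefficient parameterization
tau_retrieved = (0.3 / a_hat) ** (1.0 / b_hat)
print(a_hat, b_hat, tau_retrieved)
```

A real GMF would additionally depend on incidence angle, polarization and sea-state parameters such as wave age and wind-wave misalignment, as the abstract discusses.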
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G2)

Presentation: Environmental Control of Wind Response to Sea Surface Temperature Patterns From Remote Sensing Data

Authors: Lorenzo Francesco Davoli, Fabien Desbiolles, Paco Lopez Dekker, Agostino N. Meroni, Claudia Pasquero
Affiliations: Università degli Studi di Milano-Bicocca, Institut de Recherche pour le Développement, Technische Universiteit Delft
Air-sea interactions are ubiquitous but extremely diverse depending on the scale and region. At large scales, the atmosphere drives ocean responses through several mechanisms, while meso- and sub-mesoscale sea surface temperature (SST) patterns have been shown to influence surface wind convergence, atmospheric boundary layer (ABL) mixing and low-level cloud dynamics. Considerable differences emerge from region to region depending on the environmental conditions and their influence on air-sea interaction mechanisms. The dependence of mesoscale SST-wind coupling mechanisms on large-scale atmospheric features - specifically, surface wind patterns and air-sea temperature differences - has been demonstrated using reanalysis data, but it still awaits further confirmation from observational studies at higher resolution. Several case studies show interesting interactions occurring at the sub-mesoscale, dampening or reinforcing the overlying mesoscale phenomena depending on the situation. However, a systematic assessment of sub-mesoscale air-sea interaction is still missing due to the scarcity of high-resolution observations on a global scale. A better understanding of the role played by environmental conditions at different scales would not only improve our description of surface winds but could also provide crucial information on shallow cumulus dynamics, which is a major source of uncertainty in current climate models. We aim to fill these gaps by employing high-resolution satellite observations (at grid spacings ranging from 1 km to 12.5 km) of surface wind (from ASCAT and the Sentinel-1 L2 OCN product) and SST (from AVHRR), together with new products estimating atmospheric instability and surface fluxes from Sentinel-1 Synthetic Aperture Radar (SAR) data. The goal is to study the variability of SST-wind coupling intensity as a function of the environmental conditions and to determine whether the same behaviour occurs at the sub-mesoscale.
We analyze 2 years of data (2020-2021) over the Gulf Stream region, as it is a region with strong SST gradients, a wide range of atmospheric stability conditions and large-scale winds up to 25 m/s. The two main SST-wind coupling mechanisms acting at the mesoscales (downward momentum mixing, DMM, and pressure adjustment, PA) are taken into consideration according to the metrics defined by Meroni (2022, 2023). In this formulation, the contributions of the two mechanisms to wind divergence are disentangled by separating the wind and SST gradient components along (DMM) and across (PA) the background wind direction. The intensity of the coupling between these gradients is evaluated as a function of the large-scale surface wind speed and a few proxies of atmospheric instability that have also been shown to modulate shallow convective phenomena. Estimates of atmospheric instability (namely Obukhov length and buoyancy flux) are retrieved from Sentinel-1 SAR images through a machine-learning algorithm (O’Driscoll et al. 2023). The scenes are acquired in WV mode, which is the default acquisition mode over the ocean and consists of 20 km by 20 km vignettes sampled roughly every 100 km along the orbit; only scenes containing convective conditions are considered. These SAR-derived environmental conditions are compared with the air-sea temperature difference and buoyancy flux from hourly ERA5 data to determine whether their use contributes to a more appropriate characterization of the environmental conditions and, as a consequence, of the air-sea interactions themselves. The large-scale background wind, strictly related to ABL mixing and friction velocity, is also included as an environmental condition; it is obtained by low-pass filtering MetOp-A ASCAT wind data.
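The along/across-wind decomposition described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the sign conventions and the convention that the wind direction is measured counter-clockwise from east are all assumptions.

```python
import numpy as np

def sst_gradient_components(grad_sst_x, grad_sst_y, wind_dir_deg):
    """Project an SST gradient onto the along-wind and cross-wind directions
    of the large-scale background wind.

    The along-wind (downwind) component is the one associated with the
    downward momentum mixing (DMM) mechanism; the cross-wind component is
    the one associated with pressure adjustment (PA).

    wind_dir_deg: direction the background wind blows toward,
                  degrees counter-clockwise from east (assumed convention).
    """
    theta = np.deg2rad(wind_dir_deg)
    u_hat = np.array([np.cos(theta), np.sin(theta)])   # along-wind unit vector
    n_hat = np.array([-np.sin(theta), np.cos(theta)])  # cross-wind unit vector
    g = np.stack([grad_sst_x, grad_sst_y])
    downwind = u_hat @ g    # DMM-related SST gradient component
    crosswind = n_hat @ g   # PA-related SST gradient component
    return downwind, crosswind
```

For a purely zonal SST gradient, a westerly background wind yields a pure downwind component, while a southerly wind moves the whole gradient into the cross-wind (PA) component.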
A control analysis conducted on the MetOp database before the colocation with SAR data has shown that the intensity of DMM coupling peaks for background winds around 8 m/s and near-neutral atmospheric conditions (estimated from the ERA5 air-sea temperature difference and buoyancy flux). These results confirm the well-known nature of the DMM mechanism, which is enhanced by the mixing induced by the entrainment of momentum from the troposphere into the ABL. The use of Obukhov length estimates from ERA5 as a proxy for atmospheric instability yields no significant signals in the coupling intensity. In fact, the Obukhov length describes the balance between turbulent production by surface stress and the buoyancy flux, equating situations of strong instability and strong winds with situations of marginal instability and weak winds, which nonetheless exhibit significantly different mixing dynamics. The same methodology has been applied to the database of co-located MetOp-A and Sentinel-1 observations, with two major drawbacks. First, a drastic reduction in the amount of data, which results in a scarcity of strong SST gradients and therefore in an overall weakening of the coupling signal. Second, as the SAR database contains only shallow convection scenes, the sensitivity to atmospheric stability is very limited. In this case, DMM is more active in situations of high wind speed (around 12 m/s) and near-neutral atmospheric conditions, but with some concerns regarding the robustness of the detected signal. Both the control and co-located analyses conducted on the PA mechanism have shown no significant results when aggregating the data according to the above-mentioned environmental conditions. This behaviour is likely due to a general scarcity of the strong SST Laplacians needed for PA to develop, suggesting the need for a larger database.
To effectively employ SAR-derived estimates in the study of SST-wind interactions, it is necessary to expand the analysis beyond the limits imposed by the current SAR database. To this end, a new dataset of SAR observations from Sentinel-1 Interferometric Wide (IW) Swath mode scenes is currently under development. The IW mode is not the default acquisition mode over the ocean, but it is often sampled near coastal areas (where most of the western boundary currents and the associated SST patterns lie) and major islands (as in the North-West Atlantic Tropical region). IW scenes consist of continuous swaths of 200 km width, providing a much larger amount of data compared with the sparse 20 km by 20 km images of the WV mode. Finally, the availability of the Sentinel-1 L2 OCN product, containing surface wind at 1 km resolution, makes it possible to extend the analysis to the sub-mesoscales, once coupled with an appropriate SST product. Since SAR wind retrieval is strongly dependent on wind speed and direction, this step requires an extended investigation of the biases and limits introduced by the use of such a product. The challenges posed by this new methodology are of major interest on the path toward the deployment of the Harmony constellation, as they provide useful insight into the potential of the new observations that the mission will make available to the scientific community.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G2)

Presentation: The Air-Sea Interaction (ASI) submesoscale: physics and impact

Authors: Louise Nuijens, Jacob Wenegrat, Paco Lopez-Dekker, Claudia Pasquero, Larry O'Neill
Affiliations: Delft University Of Technology, University of Maryland, Ocean Park, University of Milan-Bicocca, Oregon State University
During a 2023 Lorentz Workshop in Leiden, the Netherlands, a diverse community working in atmospheric and oceanic sciences and in observations and modeling met to discuss emerging ideas and questions on physical air-sea interaction (ASI) processes at the ASI submesoscale. In this presentation, I will present the outcomes of the workshop white paper - an effort of more than thirty workshop participants. By outlining leading challenge questions and opportunities to target these questions, the white paper invites the scientific community to take up the challenge to combine efforts across the different disciplines. We introduce a conceptually useful new scale: the ASI-submesoscale, which contains the range of length scales for which the marine atmospheric boundary layer remains in disequilibrium with a submesoscale ocean forcing feature (200 m – 200 km), also encompassing the scales of transition between forward and inverse energy cascades in both the ocean and atmosphere. We also make a distinction between different hypothetical interactions: from zero or agnostic interaction, in which the ocean and the atmosphere are essentially uncoupled, to one-way forced interaction, in which ASI-submesoscale variability in one fluid influences the evolution of the other fluid through air-sea fluxes without subsequent feedback. In two-way interaction, a forced response of one boundary layer to the fluxes induced by the other introduces a feedback either at a different scale or location, so that the coupling is weak, or with a (near) match of scale or resonant frequency, so that the coupling is strong. The key challenge identified at the workshop is to unravel if and how observed atmospheric and oceanic structures (e.g. cloud organization, ocean eddies and filaments) within the ASI-submesoscale involve strong two-way coupling that can lead to an integrated or rectified response.
In other words: how does atmospheric and oceanic structure at scales of 200 m - 200 km influence air-sea interaction and the statistics of larger-scale circulations? Leading challenge questions are identified, including to what extent errors introduced by long-standing assumptions in the derivation of air-sea fluxes are systematic and affect estimates of air-sea energy and momentum exchange on larger scales. Key climate zones and weather phenomena in which the ASI submesoscale can play important roles are presented, including, in the tropics, the low-wind-speed doldrums and high-wind-speed cyclones or storms; in the subtropics, shallow cloud organization; and in the extra-tropics, western boundary currents, storm tracks and marine heat waves. We discuss the great potential of a new era of high-resolution coupled models and space-borne observations, as well as transformative field ‘super-sites’. We advocate for models to make use of semi-coupled runs, (un)plugging of coupled feedbacks, smoothing of structure in specific regions, and multi-nested simulations with grid refinement to study upscaling effects of ASI at submesoscales. While many new and existing in-situ and remote sensing observations can measure relevant parameters for ASI-submesoscale processes, they are often made only in the marine atmospheric boundary layer (MABL) or in the ocean boundary layer (OBL). "Supersites" can effectively bundle longer-term measurements in the ocean, the atmosphere, and at their interface, and a few examples will be given. Despite rapidly expanding Earth Observation capabilities and exciting upcoming missions, including ESA’s tenth Earth Explorer mission Harmony, there are still critical gaps in space-borne observations, e.g. of near-surface air temperature and humidity measured from space and of surface current vectors, which are needed for understanding air-sea interaction and coupling. Several EO mission concepts recommended for further study and review will be discussed.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G2)

Presentation: Air-sea interaction in the tropical Pacific Ocean - an intercomparison between ERA5, Synthetic Aperture Radar, and buoy

Authors: Justin Stopa, Doug Vandemark, Ralph Foster, Alexis Mouche, Bertrand
Affiliations: The University Of Hawai`i At Manoa
The need for accurate, global air-sea fluxes crosses many atmospheric, oceanographic, and climate disciplines, as these fluxes form the boundary conditions for observational and numerical studies. Atmospheric stratification, characterized through a bulk Richardson number (the ratio of buoyancy to wind-shear production), can be used to improve the diagnosis of marine atmospheric boundary layer flux estimate uncertainties from in situ, satellite, or model products. This work focuses on inter-comparing stratification states from ERA5, Sentinel-1 (S-1) synthetic aperture radar images, and moored buoys from the Tropical Atmosphere Ocean array in the Pacific Ocean. The buoy array provides an in-situ measure and a full time history of the diurnal cycle - complementing the 06 AM/PM S-1 overpasses. The diurnal wind variations in the Eastern Pacific are modulated by the cold surface waters, and ERA5 is consistently more unstable compared to S-1 and the buoy observations. Despite the sparse S-1 sampling, time-space transitions between the three SAR-derived stratification states reasonably match the near-continuous point observations measured at the buoy. There is a marked change in the wind speed when rolls are present, suggesting that a critical stratification Ri separates convective and forced-convective states. The S-1-derived stratification states, now exceeding ~10 years, could be used to constrain flux parameterizations - especially in regions lacking in-situ observations.
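For readers unfamiliar with the quantity, the bulk Richardson number mentioned above can be written in one common form (the symbols here are assumed, not taken from the abstract): with Δθ_v the air-sea virtual potential temperature difference across measurement height z, θ_v a reference virtual potential temperature, and U the mean wind speed at z,

```latex
\mathrm{Ri}_b \;=\; \frac{g\, z\, \Delta\theta_v}{\theta_v\, U^2}
```

where negative values indicate convective (unstable) stratification, so a critical Ri of the kind invoked above separates buoyancy-dominated from shear-dominated regimes.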
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G2)

Presentation: Highly resolved observations of the coupled ocean-atmosphere system: the EE10 Harmony and EE11 WIVERN missions

Authors: Bertrand Chapron, Paco Lopez-Dekker, Alessandro Battaglia, Maryam Pourshamsi, Marcel Kleinherenbrink, Bjorn Rommen, Dr Ishuwa Sikaneta, Jean-Christophe Angevain
Affiliations: Ifremer, TU-Delft, European Space Agency, Politecnico di Torino
Data from satellite sensors have important and profound applications in weather and climate analyses. At relatively low spatial resolution, present-day Machine Learning (ML) models trained on data-assimilating global models have now emerged as very robust emulators of numerical weather models. These ML models, with a spatial resolution reaching about 30 km and a typical temporal resolution of 6 hours, have achieved speedups of up to four orders of magnitude while maintaining comparable skill. However, ML models have thus far not proven fully skilful when applied at km-scale resolution, and/or to forecast the rapidly evolving intensity of high-impact weather events. The dynamics of the coupled ocean-atmosphere system at the km-scale, especially during the development of extremes, are generally far more complicated than at the 30 km scale, possibly leading to largely enhanced energetic exchanges and intensified wind conditions. Km-scale remote sensing observations indeed show that the ocean-atmosphere boundary layer flow is certainly not a region of homogeneous turbulence. More specifically, Synthetic Aperture Radar measurements, especially under stormy conditions, often convincingly demonstrate that a large fraction of the turbulent flow in the regions away from deep convective rainbands can be highly organized into large eddies, detected as intense horizontal roll vortices that are approximately aligned with the mean wind. Likely spanning the entire depth of the marine-atmosphere boundary layer, these organised large eddies can strongly increase the flux of momentum between the underlying surface and the main body of the storm compared to an equivalent hurricane boundary layer flow without these redistributing features.
In this presentation, we will discuss the potential of the future EE10 Harmony mission and the proposed EE11 WIVERN mission to provide essential observations that advance the description of the planetary boundary layer (PBL). WIVERN would provide the first measurements of wind within clouds and precipitation, while Harmony will provide very high-resolution ocean surface observations, yielding more quantitative information about the processes that govern the exchange of momentum, heat and moisture at the ocean-atmosphere interface. Accordingly, both Harmony and WIVERN should uniquely provide new sources of information for developing improved boundary layer parameterizations, to capture the stochastic nature of the coupled ocean-atmosphere system at these scales, and thus to provide the necessary new insights into severe storms.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G2)

Presentation: SARWAVE: Revealing Sea-State variability with wide acquisitions of Sentinel-1 SAR

Authors: Lisa Maillard, Frédéric Nouguier, Antoine Grouazel, Quentin Febvre, Robin Marquart, Bertrand Chapron, Alexis Mouche
Affiliations: Ifremer, Univ. Brest, CNES, CNRS, IRD, Laboratoire d’océanographie physique et spatiale, Self employed in Data Analysis
Monitoring the sea state is essential to document and understand processes occurring at the air-sea interface. Recent advances in instrumentation and satellite data processing have greatly enhanced our capacity to observe sea states from space. The ongoing ESA SARWAVE project aims to derive sea-state information from Interferometric Wide Swath acquisitions of Sentinel-1, enabling investigation of both large- and fine-scale processes by providing quantitative information over the entire swath (see presentations in A.08.02 for details on the method). Moreover, several space missions now provide highly resolved sea-state information, such as the Sentinel-1 SAR, Sentinel-2 optical instruments and the SWOT interferometer. In the microwave regime, SARs are unique sensors providing sea-state information over large swaths, giving access to spatial characteristics of wave fields such as spatial gradients, directional wave properties and wave periods, which are key assets to study signatures of extreme wind events. There is now a unique opportunity to combine these independent measurements to study and document complex interactions between the ocean and atmosphere. In the first part of this presentation, we will demonstrate the capabilities of Sentinel-1 to accurately infer wave parameters, including the significant wave height of swell and wind-sea, as well as wave periods. Geophysical parameters retrieved from wide-swath SAR acquisitions are statistically validated against in situ buoy measurements, conventional altimeter products and wave models. The agreement is found to be very good and will be discussed in detail. In the second part of the presentation, we will dive into typical case studies of wave interactions with their environment.
We will investigate how wide-swath SAR acquisitions capture the wind action on wave growth in the Mediterranean Sea under fetch-limited conditions; how their extensive spatial coverage offers new perspectives to study wave-current interactions in the Agulhas Current; and how the peculiar configuration of SAR (incidence, polarization) maintains a high sensitivity to strong wind conditions, providing measurements of the extreme sea states caused by tropical cyclones and medicanes. When available, we will take advantage of co-location to highlight the strengths, weaknesses and complementarity of the different sensors for documenting sea-state conditions. As a striking example, we will present some remarkable co-located cases of Sentinel-1 and SWOT KaRIn, both providing two-dimensional maps of wave height derived with two different algorithms, opening new perspectives for synergistic studies. The high resolution now offered by these sensors provides valuable insights into ocean wave geometry, kinematics and directional characteristics.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G1)

Session: D.02.04 Machine Learning for Earth System Observation and Prediction - PART 3

Earth, its weather and climate constitute a complex system whose monitoring and modelling has undergone remarkable progress in recent years. On one hand, enhanced spaceborne observations and the development and availability of novel in-situ sensor data have provided unprecedented information about our planet's systems. On the other hand, the integration of AI and big data analytics has opened new frontiers in how we approach the challenges of weather and climate modelling. Altogether, these elements are key drivers of innovation in Earth System Observation and Prediction (ESOP) for Weather and Climate.

Machine/Deep Learning (ML/DL) techniques have revolutionized numerous fields and have proven to be particularly advantageous in various applications such as image recognition, traffic prediction, self-driving vehicles, and medical diagnosis. These techniques have garnered significant attention and adoption within the Earth System Observation and Prediction (ESOP) community due to their ability to enhance our understanding and prediction capabilities of the Earth's complex dynamics. One prominent area where ML/DL techniques have proven invaluable is in the development of high fidelity digital models of the Earth on a global scale. These models serve as comprehensive monitoring, simulation, and prediction systems that enable us to analyse and forecast the intricate interactions between natural phenomena and human activities. By providing a holistic understanding of the Earth's dynamics, these models contribute to the achievement of the European Commission's Green Deal and Digital Strategy goals towards a green & digital transition.

ML/DL solutions also showcased promising advancements in data assimilation, weather forecasting and climate prediction. Algorithms can be trained to identify instances where physical models may exhibit inaccuracies and subsequently learn to correct their predictions accordingly. Moreover, AI-based models have the potential to create hybrid assimilation and forecasting models that combine the strengths of traditional, physics-based methodologies with the capabilities of ML/DL, ultimately enhancing the accuracy and reliability of predictions.

The aim of this session is to invite new ML4ESOP explorers to present their latest innovations in ESOP, with a specific focus on the exploration of new data sources and benchmarks for weather and climate modelling, the adaptation of large-scale data-driven Earth system models, as well as novel demonstrations of their applicability to weather and climate observation and prediction. This session invites experts from diverse fields to discuss how recent advances innovate on established ESOP approaches, to address current challenges, and to identify opportunities for future work.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G1)

Presentation: Advancing Surface NO2 Estimation Across Europe Using Machine Learning: Integration of Cloud Filling and Extended Spatio-Temporal Analysis

Authors: Ka Lok Chan, Richard Siddans, Brian Kerridge, Barry Latter
Affiliations: RAL Space
In this study, we expand our previously developed machine learning-based framework for estimating surface NO2 concentrations in Germany to encompass the entire European region. Leveraging tropospheric NO2 vertical column densities (VCDs) from the TROPOMI satellite, along with meteorological parameters, we employ a neural network model to predict surface NO2 concentrations with high accuracy. This study incorporates two key advancements: (1) the application of machine learning techniques to fill gaps caused by cloud cover and invalid data, thereby enhancing data continuity, and (2) the integration of neighbouring pixel information to better account for local transport and dispersion of NO2. These innovations significantly improve data reliability and capture spatial transport dynamics, which are essential for understanding urban and regional NO2 variations. Our model is validated against ground-based air quality monitoring data across Europe, demonstrating robust performance, with Pearson correlation coefficients (R) exceeding 0.80. Additionally, sensitivity analyses emphasize the critical roles of neighbouring pixel data and meteorological inputs in enhancing the accuracy of predictions. By applying the trained neural network, we estimate surface NO2 concentrations over Europe from 2018 to 2024, offering insights into seasonal, weekly, and long-term trends. The resulting dataset reveals intricate spatial and temporal patterns, including significant reductions in NO2 concentrations during the COVID-19 lockdowns. Annual average NO2 exposure estimates confirm a general decline, reflecting reduced industrial and transportation activities during 2020. The high-resolution surface NO2 maps generated by this study provide a valuable tool for interdisciplinary applications. These maps can be integrated with population density data and epidemiological profiles, such as demographics of respiratory or cardiovascular diseases, to assess the health impacts of NO2 exposure. 
Furthermore, they enable evaluation of the economic implications of emission reductions, such as increased life expectancy, enhanced labour productivity, and reduced healthcare burdens. This work underscores the potential of advanced machine learning techniques in creating actionable datasets for air quality management, public health research, and policy development across Europe.
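As a rough illustration of the neighbouring-pixel idea described in this abstract, a feature vector for one location could combine the centre pixel's TROPOMI NO2 VCD, its surrounding window (to represent local transport and dispersion), and co-located meteorological values. This is a hedged sketch under assumed array layouts; `build_features` is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def build_features(vcd_grid, met_grids, i, j, radius=1):
    """Build one illustrative feature vector for pixel (i, j).

    vcd_grid:  2-D array of tropospheric NO2 vertical column densities.
    met_grids: list of 2-D arrays of meteorological variables on the same grid.
    radius:    half-width of the (2r+1) x (2r+1) neighbourhood window.

    Layout (assumed): flattened VCD window first, then centre-pixel
    meteorological values.
    """
    window = vcd_grid[i - radius:i + radius + 1, j - radius:j + radius + 1]
    met = [grid[i, j] for grid in met_grids]
    return np.concatenate([window.ravel(), met])
```

A network trained on such vectors sees both the local column amount and its spatial context, which is the role the abstract attributes to the neighbouring-pixel information.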
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G1)

Presentation: Gap Filling Sentinel-2 Observations for Improved Vegetation Greenness Forecasting

Authors: Maria Gonzalez-Calabuig, Dr. Miguel-Ángel Fernández-Torres, Prof. Dr. Gustau Camps-Valls
Affiliations: Image and Signal Processing, Universitat De València, Universidad Carlos III
Earth observation from satellite sensors allows monitoring natural ecosystems by deriving spatially explicit and temporally resolved biogeophysical parameters [1, 2]. Optical remote sensing, however, suffers from missing data, mainly due to the presence of clouds, sensor malfunctioning, and atmospheric conditions [3]. This study proposes a novel deep learning architecture to address gap filling of satellite reflectances, more precisely the visible and near-infrared bands, and illustrates its performance on high-resolution Sentinel-2 data. We introduce GANFilling, a generative adversarial network [4] capable of sequence-to-sequence translation, which comprises convolutional long short-term memory [5] layers to effectively exploit dependencies in space-time series data. Our approach differs from the existing literature in its unrestricted inference: while most works require either paired data, external sources (e.g., SAR), or quality masks to guide the recovery of missing information, our model does not impose any constraints on additional sources of information. We focus on Europe and evaluate the method's performance quantitatively and qualitatively. Quantitatively, our model offers the best trade-off between denoising corrupted data and preserving noise-free information, underscoring the importance of considering multiple metrics jointly when assessing gap-filling tasks. Qualitatively, it successfully deals with various noise sources, such as clouds and missing data, constituting a robust solution for multiple scenarios and settings. The quality of the enhanced Sentinel-2 reflectances is illustrated and quantified in the relevant downstream application of vegetation greenness forecasting. Improving the quality of forecasts is important for deriving better products for monitoring ecosystems and supports better decision-making by governmental bodies.
Our focus lies on forecasting the kernel normalized difference vegetation index (kNDVI) [6], considered a de facto better proxy for vegetation photosynthetic activity. We evaluate the benefits of training a forecasting model using data gap-filled by our model against using data based on the traditional approach of discarding corrupted regions guided by quality masks. To compare both strategies, we perform an analysis per land cover class and provide spatial maps depicting the difference in temporal Pearson correlation coefficients between the two. GANFilling enhances forecasting in approximately 70% of the considered regions in Europe. Our research underlines the utility of deep learning for Earth observation data, which allows for improved spatially and temporally resolved monitoring of the Earth’s surface.
References:
[1] Mateo-Sanchis, A., Adsuara, J.E., Piles, M., Munoz-Marí, J., Perez-Suay, A., Camps-Valls, G., 2023. Interpretable long short-term memory networks for crop yield estimation. IEEE Geoscience and Remote Sensing Letters 20, 1–5. doi:10.1109/LGRS.2023.3244064.
[2] Persello, C., Wegner, J.D., Hänsch, R., Tuia, D., Ghamisi, P., Koeva, M., Camps-Valls, G., 2022. Deep learning and earth observation to support the sustainable development goals: Current approaches, open challenges, and future opportunities. IEEE Geoscience and Remote Sensing Magazine 10, 172–200. doi:10.1109/MGRS.2021.3136100.
[3] Zeng, C., Shen, H., Zhang, L., 2013. Recovering missing pixels for Landsat ETM+ SLC-off imagery using multi-temporal regression analysis and a regularization method. Remote Sensing of Environment 131, 182–194. doi:10.1016/j.rse.2012.12.012.
[4] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y., 2014. Generative adversarial nets, in: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K. (Eds.), Advances in Neural Information Processing Systems, Curran Associates, Inc.
[5] Shi, X., Chen, Z., Wang, H., Yeung, D.Y., Wong, W.K., Woo, W.C., 2015. Convolutional LSTM network: A machine learning approach for precipitation nowcasting, in: Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., Garnett, R. (Eds.), Advances in Neural Information Processing Systems, Curran Associates, Inc.
[6] Camps-Valls, G., Campos-Taberner, M., Moreno-Martínez, Á., Walther, S., Duveiller, G., Cescatti, A., Mahecha, M.D., Muñoz-Marí, J., García-Haro, F.J., Guanter, L., Jung, M., Gamon, J.A., Reichstein, M., Running, S.W., 2021. A unified vegetation index for quantifying the terrestrial biosphere. Science Advances 7, eabc7447. doi:10.1126/sciadv.abc7447.
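For reference, the kNDVI of [6] applies a radial basis function kernel to the NIR and red reflectances; with the paper's simple pixel-wise length-scale choice sigma = 0.5*(NIR + Red), it reduces to tanh(NDVI²). A minimal sketch (function name assumed):

```python
import numpy as np

def kndvi(nir, red, sigma=None):
    """Kernel NDVI (Camps-Valls et al., 2021) with an RBF kernel.

    With the default length scale sigma = 0.5 * (nir + red), this
    simplifies to tanh(NDVI**2), bounded in [0, 1).
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    if sigma is None:
        sigma = 0.5 * (nir + red)  # pixel-wise length scale (the paper's heuristic)
    return np.tanh(((nir - red) / (2.0 * sigma)) ** 2)
```

The saturation of tanh is what makes kNDVI less sensitive than NDVI to extreme values, one reason it is considered a better proxy for photosynthetic activity.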
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G1)

Presentation: Local Enhanced Global Ensemble Digital Terrain Model in 30m: a Community-based Open Data Service to Support Regional and Global Modeling

Authors: Yu-feng Ho, Doctorate Leandro Parente, Doctorate John Lindsay, Doctorate Carlos Grohmann, Doctorate Hannes Reuter, Doctorate Tomislav Hengl
Affiliations: OpenGeoHub, Department of Geography, Environment & Geomatics, The University of Guelph, Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG) of the Universidade de São Paulo (USP), Eurostat, European Commission
Digital Elevation Models (DEMs) are critical components of Earth system modeling, providing essential data for a range of applications such as soil mapping and hydrological analyses. However, a fully open, global Digital Terrain Model (DTM) with a 30-meter resolution does not yet exist. This study explores the use of machine learning techniques to fuse two high-quality Digital Surface Models (DSMs): the Copernicus DEM (GLO-30) and the ALOS Global Digital Surface Model (AW3D30). According to Bielski et al. (2024), both GLO-30 and AW3D30 outperform other global DEMs: they ranked global DEMs considering multiple aspects, and the results show that GLO-30 and AW3D30 are comparable and significantly better than NASADEM, SRTM, and ASTER as DSMs [1]. In addition to the DSMs, canopy and building models, Landsat biophysical indices and other auxiliary data, such as landform classification and slope at coarse resolution, are included in the terrain mapping. The machine learning model is trained for terrain height by combining ATLAS/ICESat-2 L3A Land and Vegetation Height (ICESat-2 ATL08) Version 6 and Global Ecosystem Dynamics Investigation (GEDI) Level 2 data. Both datasets are global space-based Light Detection and Ranging (LIDAR) measurements, orbiting from pole to pole and between 51.6° N and S, respectively [2,3]. The concept of transfer learning is applied in this modeling. First, the global model is trained according to a stratification of landform classes, canopy height and building height. Afterwards, a local model based on 5° x 5° tiles is trained using additional local samples from ICESat-2 ATL08 and GEDI to improve accuracy by taking the local context into account. The local models are subsequently used to predict local tiles and create a locally enhanced global ensemble digital terrain model. The result is validated against a global-coverage reference DTM [4].
The model prediction is evaluated not only with common machine learning metrics but also through the consistency of derived land relief parameters, such as hillshade and slope. It significantly improves the accuracy and bias compared with the Multi-Error-Removed Improved-Terrain DEM (MERIT), and it shows more realistic land relief patterns than the Forest And Buildings removed Copernicus DEM (FABDEM) [5,6]. In conclusion, the prediction takes advantage of both high-quality DSMs and produces a single consistent DTM. This study contributes to the International Society for Geomorphometry (ISG) effort toward a large-scale, high-resolution global topographic and hydrological service. The research is also the first global terrain modeling to fuse both ICESat-2 ATL08 and GEDI data worldwide. Global space-based lidar is useful not only for mapping terrain but also for estimating canopy height, canopy cover, and even aboveground biomass. The use of transfer learning enables the integration of billions of data points, facilitating machine learning applications for Earth system modeling. Last but not least, the local-enhancement feature allows subsequent modeling, empowering anyone who possesses local data to improve the model locally and hence achieve a community-based open data service.
1. Bielski, C., López-Vázquez, C., Grohmann, C. H., Guth, P. L., Hawker, L., Gesch, D., ... & Strobl, P. (2024). Novel approach for ranking DEMs: Copernicus DEM improves one arc second open global topography. IEEE Transactions on Geoscience and Remote Sensing, 62, 1-22.
2. Morison, J. H., Hancock, D., Dickinson, S., Robbins, J., Roberts, L., Kwok, R., Palm, S. P., Smith, B., Jasinski, M. F. & the ICESat-2 Science Team. (2023). ATLAS/ICESat-2 L3A Ocean Surface Height (ATL12, Version 6) [Data set]. Boulder, Colorado USA. NASA National Snow and Ice Data Center Distributed Active Archive Center. https://doi.org/10.5067/ATLAS/ATL12.006. Date accessed 12-03-2024.
3. Dubayah, R., Hofton, M., Blair, J., Armston, J., Tang, H., & Luthcke, S. (2021). GEDI L2A Elevation and Height Metrics Data Global Footprint Level V002 [Data set]. NASA EOSDIS Land Processes Distributed Active Archive Center. Accessed 2024-12-03 from https://doi.org/10.5067/GEDI/GEDI02_A.002
4. Guth, P. L., Riazanoff, S., Corseaux, A., Grohmann, C., Trevisani, S., & López-Vázquez, C. (2023). DEMIX 1" Reference DEMs version 2.0 (2.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.8086806
5. Yamazaki, D., Ikeshima, D., Tawatari, R., Yamaguchi, T., O'Loughlin, F., Neal, J. C., Sampson, C. C., Kanae, S., & Bates, P. D. (2017). A high accuracy map of global terrain elevations. Geophysical Research Letters, 44, 5844-5853. https://doi.org/10.1002/2017GL072874
6. Hawker, L., Uhe, P., Paulo, L., Sosa, J., Savage, J., Sampson, C., & Neal, J. (2022). A 30 m global map of elevation with forests and buildings removed. Environmental Research Letters, 17(2), 024016.
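The two-stage transfer-learning scheme described above (a global model refined tile by tile with local spaceborne-lidar samples) can be illustrated with a minimal sketch. The linear learner, the residual-correction step, and all synthetic numbers below are simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    """Ordinary least-squares fit with an intercept term."""
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    A = np.column_stack([X, np.ones(len(X))])
    return A @ coef

# Synthetic "global" training set: features = [DSM height, canopy height]
X_glob = rng.normal(size=(1000, 2))
y_glob = X_glob[:, 0] - 0.8 * X_glob[:, 1] + rng.normal(0, 0.1, 1000)  # terrain ~ DSM - canopy

global_coef = fit_linear(X_glob, y_glob)

# Local tile where canopy affects the DSM slightly differently
X_loc = rng.normal(size=(200, 2))
y_loc = X_loc[:, 0] - 0.95 * X_loc[:, 1] + rng.normal(0, 0.1, 200)

# Transfer step: model the *residual* of the global predictor on local lidar samples
resid_coef = fit_linear(X_loc, y_loc - predict(global_coef, X_loc))

def local_predict(X):
    """Locally enhanced prediction: global model plus local residual correction."""
    return predict(global_coef, X) + predict(resid_coef, X)

rmse_global = np.sqrt(np.mean((predict(global_coef, X_loc) - y_loc) ** 2))
rmse_local = np.sqrt(np.mean((local_predict(X_loc) - y_loc) ** 2))
assert rmse_local <= rmse_global  # local enhancement cannot hurt on the local fit
```

The same pattern (freeze the global predictor, learn a local correction) is one simple way to realize the "local enhancement" idea without retraining the global model from scratch.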
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G1)

Presentation: Combining Vision Embedding of Satellite Images and Message Passing Graph Neural Networks for Accurate Hyper-Local Weather Forecasting at Arbitrary Locations

Authors: Jonathan Giezendanner, Qidong Yang, Dr. Johannes Jakubik, Daniel Salles Civitarese, Dr. Eric Schmitt, Dr. Anirban Chandra, Dr. Jeremy Vila, Dr. Detlef Hohl, Dr. Chris Hill, Dr. Campbell Watson, Dr. Sherrie Wang
Affiliations: MIT, IBM Research, Shell Information Technology International Inc.
With the escalating impacts of climate change, the demand for precise and localized weather forecasting has never been more critical. As global climate patterns become increasingly unpredictable and complex, the limitations of traditional weather modeling approaches have become apparent. While global and regional weather predictions have long been the standard, modern applications require a far more granular understanding of atmospheric conditions. The growing complexity of weather systems demands innovative approaches that can bridge the gap between large-scale meteorological models and hyper-local environmental conditions. Industries such as renewable energy, agriculture, disaster management, and urban planning rely increasingly on accurate, location-specific weather predictions. Wind and solar power generation, for instance, requires a precise understanding of local wind speeds, solar radiation, and atmospheric conditions. Similarly, wildfire prevention and management depend on precise local weather patterns to predict and mitigate potential risks. Traditional weather forecasting methodologies have predominantly relied on global regular grid models, which generate predictions based on broad, uniform spatial intervals. These approaches inherently struggle to capture the nuanced microclimatic variations that characterize local environments. Topographical features, land use, proximity to water bodies, and urban heat islands can dramatically influence local weather patterns in ways that cannot be adequately represented by coarse-grained global models. In response to these challenges, our research introduces a cutting-edge approach utilizing a heterogeneous graph neural network (GNN) designed to dramatically improve the spatial resolution and accuracy of weather forecasts. By leveraging advanced machine learning techniques, our model offers a transformative solution to the long-standing challenge of downscaling gridded forecasts to arbitrary locations of interest. 
The proposed multi-modal GNN model represents a sophisticated integration of multiple data sources. At its core, the model combines classical graph neural network message passing methods with state-of-the-art vision embedding techniques. This innovative approach allows for the comprehensive integration of diverse data types, each treated as a distinct node embedding within the graph neural network architecture. Our model ingests and processes multiple data streams, including high-resolution historical weather observations (wind speed, temperature, humidity, dewpoint) and satellite-imagery-derived environmental characteristics such as topographical data and land cover and land use information. The graph neural network's message passing mechanism enables a dynamic aggregation of information across these heterogeneous data sources. Each prediction location becomes a node that can draw insights from its neighboring nodes, creating a contextually aware prediction framework. This approach allows the model to capture complex, non-linear relationships that traditional interpolation methods might overlook. By training the model end-to-end on diverse datasets including HRRR (High-Resolution Rapid Refresh), ERA5 reanalysis data, and other foundation weather models, we demonstrate the model's ability to generate locally accurate weather forecasts across various lead times. Experimental validation reveals that our heterogeneous GNN model consistently outperforms existing machine learning and traditional downscaling techniques. The model's performance stems from its ability to dynamically integrate multiple data sources, learn complex spatial-temporal relationships, and adapt to diverse local environmental contexts. As climate variability continues to challenge our existing predictive capabilities, innovative approaches like our heterogeneous graph neural network model represent crucial steps toward more resilient, adaptive environmental forecasting technologies.
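The message-passing idea described above, where each prediction location is a node that aggregates its neighbours' embeddings, can be sketched in a few lines of numpy. The mean aggregation, tanh update, and random weights are illustrative assumptions, not the model's actual architecture.

```python
import numpy as np

def message_passing_round(h, edges, W_self, W_neigh):
    """One round of mean-aggregation message passing.

    h       : (n_nodes, d) node embeddings
    edges   : list of directed (src, dst) pairs
    W_self, W_neigh : (d, d) learned weight matrices (random stand-ins here)
    """
    n, d = h.shape
    agg = np.zeros_like(h)
    deg = np.zeros(n)
    for src, dst in edges:
        agg[dst] += h[src]
        deg[dst] += 1
    deg = np.maximum(deg, 1)           # avoid division by zero for isolated nodes
    msgs = agg / deg[:, None]          # mean over incoming neighbours
    return np.tanh(h @ W_self + msgs @ W_neigh)

rng = np.random.default_rng(1)
d = 4
# Three nodes: two weather stations (0, 1) and one arbitrary prediction location (2)
h = rng.normal(size=(3, d))            # embeddings of observations / imagery features
edges = [(0, 2), (1, 2), (0, 1), (1, 0)]
W_self = rng.normal(size=(d, d)) * 0.1
W_neigh = rng.normal(size=(d, d)) * 0.1

h1 = message_passing_round(h, edges, W_self, W_neigh)
assert h1.shape == h.shape
```

After a round, node 2's embedding mixes information from both stations, which is the mechanism that lets the network produce forecasts at locations with no direct observations.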
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G1)

Presentation: Assimilating Brightness Temperatures of Microwave Sensors Into the LDAS-monde System Using a Neural Network

Authors: Jasmin Vural, Pierre Vanderbecken, Bertrand Bonan, Timothée Corchia, Jean-Christophe Calvet
Affiliations: Météo-France
The EU project CORSO aims to improve knowledge of the anthropogenic part of CO2 emissions and therefore relies on adequate monitoring of carbon-cycle variables. To improve the state estimates of these variables, especially the leaf-area index (LAI) and the gross primary production (GPP), we aim to optimally combine the information obtained from microwave satellite observations with the land-surface model ISBA. We investigate the impact of the observations of two different passive remote-sensing instruments, SMAP and AMSR2, on the analyses produced with our assimilation system LDAS-monde. LDAS-monde uses a simplified extended Kalman filter and the CO2-responsive version of the ISBA model. The key development here is the direct assimilation of the brightness temperatures provided by the microwave sensors. To allow for this direct assimilation, an observation operator is needed to transform the model variables into observation space. As the usual approach of using a radiative transfer model as observation operator can be computationally expensive for high-resolution observations, we exploit the increasing capabilities of machine learning techniques. Accordingly, we developed an observation operator that applies a feedforward neural network (NN) to the model variables. In this contribution, we present the challenges as well as the potential of developing NN-based observation operators for microwave observations and the impact of their assimilation on the LAI analyses. We trained the NN with variables from the open-loop run of LDAS-monde as well as LAI observations as predictors, but found that employing forcing variables can add further value. Furthermore, the exact setup of the NN seems to play only a minor role in its ability to reproduce the observational data.
In contrast, the preprocessing of the observational data is of major importance; in particular, the presence of radio-frequency interference (RFI) contamination can compromise the performance of the training. Eventually, we found a working setup and implemented the resulting observation operator in LDAS-monde. We ran different assimilation experiments with special regard to the influence of the polarisations and the observation error, and verified the impact of the microwave observations on the global LAI analysis against LAI satellite observations.
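A minimal sketch of how an NN-based observation operator can slot into an extended-Kalman-filter update is given below. The two-variable state, the random stand-in network weights, and the finite-difference Jacobian are illustrative assumptions; the actual LDAS-monde operator and its trained coefficients are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy feedforward NN mapping model variables (e.g. soil moisture, LAI)
# to a brightness temperature; weights are random stand-ins for a trained net.
W1, b1 = rng.normal(size=(8, 2)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)) * 0.5, np.zeros(1)

def obs_operator(x):
    """H(x): model space -> brightness-temperature space."""
    hidden = np.tanh(W1 @ x + b1)
    return W2 @ hidden + b2

def jacobian(x, eps=1e-6):
    """Central finite-difference Jacobian of H, as an EKF linearization."""
    J = np.zeros((1, len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (obs_operator(x + dx) - obs_operator(x - dx)) / (2 * eps)
    return J

x_b = np.array([0.3, 2.0])      # background state: soil moisture, LAI
B = np.diag([0.01, 0.25])       # background error covariance
R = np.array([[1.0]])           # observation error covariance
y = obs_operator(x_b) + 0.5     # synthetic brightness-temperature observation

# Extended Kalman filter analysis step using the NN as observation operator
H = jacobian(x_b)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)          # Kalman gain
x_a = x_b + (K @ (y - obs_operator(x_b))).ravel()     # analysis state
assert x_a.shape == x_b.shape
```

The appeal of the NN operator is visible even in this toy: evaluating and differentiating it costs a few matrix products, versus a full radiative transfer simulation per observation.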
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall G1)

Presentation: A DEEP LEARNING APPROACH FOR REGULAR RAINFOREST MONITORING WITH SENTINEL-1 TIME SERIES

Authors: Ricardo Dal Molin Jr., Paola Rizzoli, Prof. Dr. Laetitia Thirion-Lefevre, Prof. Dr. Régis Guinvarc’h
Affiliations: Microwaves and Radar Institute, German Aerospace Center (DLR), SONDRA, CentraleSupélec, Université Paris-Saclay
The latest advances in remote sensing hold a central role in regularly providing Earth Observation (EO) data on a global scale for numerous applications in the scope of reaching environmentally sustainable goals. Indeed, over vast and inhospitable areas, such as the Amazon rainforest, these data represent the main information source for assessing landscape changes related to the rampant advances of logging, agriculture, cattle ranching and mining. Currently, the main operational projects for monitoring the Amazon biome -- the Brazilian Amazon Rainforest Monitoring Program by Satellite (PRODES), the Real-Time Deforestation Detection System (DETER), and the land use and land cover mapping system (TerraClass) -- rely on optical satellite imagery. However, in tropical areas, these observations are often hindered by extensive cloud coverage, which means that cloud-free and analysis-ready images are essentially restricted to the dry season. A potential data source up to this task arises from spaceborne synthetic aperture radar (SAR) systems, whose active microwave imaging sensors allow for data retrieval day or night, through clouds or smoke. Since SAR images are overall harder to interpret because of distortions related to their side-looking acquisition geometry, susceptibility to speckle noise, and insensitivity to the visible light spectrum, in this study we propose combining radar features extracted from dense interferometric SAR (InSAR) short time series of Sentinel-1 Interferometric Wide Swath (IW) data with a deep learning classification scheme to accurately map the state of the forest throughout the year. To this end, we consider time series covering only 24 days (i.e., three acquisitions at a repeat cycle of 12 days) to produce land cover maps over environmentally critical areas as often as possible.
The proposed methodology is based on the joint use of SAR backscatter (sensitive to structural and dielectric characteristics of the scene) and interferometric coherences at different temporal baselines (which characterize typical decorrelation sources in repeat-pass interferometry, such as temporal decorrelation) to perform pixel-wise classification of land cover classes of interest (e.g., forest, water bodies, urban areas, and cattle or agricultural fields). Preliminary analyses of Sentinel-1 IW time series from the year 2017 over 11 non-overlapping scenes indicate that land cover classes might exhibit particular variations depending on seasonal or meteorological conditions (different SAR backscatter responses of grasslands depending on the humidity level and varying interferometric coherence of fields according to cropping cycles, for instance), which could be confused with an actual change of class if not properly addressed. Another challenge in performing short-term predictions stems from the fact that the ground truths are usually available on a yearly basis. To address this, we first propose a sampling of the training data which is representative of the different seasonal conditions of this region, based on precipitation and soil moisture estimates. Then, we apply deforestation masks to ensure that the reference holds throughout the year, so that intra-class patterns caused by seasonality effects are also learned. Finally, due to the noisy nature of SAR data and the high spatial dependency among neighboring pixels, the classification is performed with a convolutional neural network (CNN) based on the U-Net model. Preliminary results show that overall accuracies above 90% can be achieved throughout the whole year with the proposed method, making it a potential tool for mapping the Amazon rainforest at large scale with unprecedented temporal resolution.
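The 24-day input stack described above (three backscatter acquisitions plus coherences at 12-day and 24-day temporal baselines) can be assembled as in the following sketch. All values are synthetic, and the nearest-centroid classifier at the end is merely a stand-in for the actual U-Net CNN.

```python
import numpy as np

rng = np.random.default_rng(3)
H = W = 32                                   # toy scene size in pixels

# Three Sentinel-1 acquisitions over 24 days (12-day repeat cycle)
sigma0 = rng.normal(-10, 2, size=(3, H, W))  # backscatter [dB], one layer per date

# Interferometric coherence at 12-day and 24-day temporal baselines
coh12 = rng.uniform(0, 1, size=(2, H, W))    # pairs (t0,t1) and (t1,t2)
coh24 = rng.uniform(0, 1, size=(1, H, W))    # pair (t0,t2)

# Stack everything into one (channels, H, W) cube: the input a CNN such as
# U-Net would consume for pixel-wise land cover classification.
features = np.concatenate([sigma0, coh12, coh24], axis=0)
assert features.shape == (6, H, W)

# Stand-in for the CNN: nearest-centroid labelling of per-pixel feature vectors
flat = features.reshape(6, -1).T             # (H*W, 6)
centroids = rng.normal(size=(4, 6))          # 4 classes: forest, water, urban, field
labels = np.argmin(((flat[:, None, :] - centroids) ** 2).sum(-1), axis=1)
label_map = labels.reshape(H, W)
assert label_map.shape == (H, W)
```

In the real pipeline the per-pixel classifier is replaced by a U-Net operating on spatial patches, which is what exploits the neighbourhood dependency the abstract mentions.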
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.11/0.12)

Session: A.08.09 Marine and Coastal Carbon - PART 2

By absorbing 25% of the anthropogenic carbon dioxide in the atmosphere, oceans are a major sink of carbon and a key component of the Global Carbon Cycle. Ocean carbon uptake/release spatial distributions and trends are inherently triggered by and impact a vast set of biological, physical and chemical processes.
This session welcomes contributions on the different carbon pools and processes in the marine and coastal ocean including:
- both the inorganic (including Ocean Acidification) and organic carbon domains, demonstrating how remote sensing, together with in-situ data and numerical modelling, can improve the characterization and understanding of the different pools of carbon in the ocean (Dissolved Inorganic Carbon - DIC, Dissolved Organic Carbon - DOC, Particulate Inorganic Carbon - PIC, Particulate Organic Carbon - POC).
- the key processes that determine the fluxes of carbon among these pools, such as Primary Production and Export, or between interfaces, such as Air-Sea or Land-Sea exchanges.
- Coastal blue carbon ecosystems (e.g. mangroves, seagrass, salt marshes) and the role of remote sensing to 1- monitor those ecosystems in terms of e.g. ecosystem extent, carbon stock, carbon sequestration potential, etc., 2- monitor/predict the impact of external drivers on blue carbon ecosystems and their carbon stocks/sequestration potentials and 3- quantify the added value of climate change mitigation and adaptation strategies (e.g. conservation, restoration, creation).
This session is also open to studies and procedures addressing how EO-derived marine and coastal carbon products can support Global Carbon Budget modelling efforts, and also contribute to informing evidence-based policies (IPCC and Paris Agreement).
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Remote sensing monitoring of the spatio-temporal dynamics of the marine carbon pools of the global coastal ocean over the last two decades.

Authors: Hubert Loisel, Dr Roy El Hourany, Dr Vincent Vantrepotte, Dr Daniel S.F. Jorge, Dr Marie Monterro, Dr Lucile Duforêt-Gaurier, Marine Bretagnon, Philippe Bryère, Antoine Mangin
Affiliations: LOG, ACRI-ST
The increase of atmospheric CO2 levels by about 10% since the beginning of the 21st century and its impact on the Earth’s climate and the biosphere represent a major concern. Based on a compilation of in situ data extrapolated over the global coastal ocean, previous studies indicate that the world’s coastal shelves absorb about 0.25 Pg C/year (~17% of the oceanic CO2 influx), although these areas represent only 7% of the oceanic surface area. However, large uncertainties in coastal carbon fluxes and stocks exist due to their undersampling in both space and time. Moreover, all of the methods used to assess the land carbon sink rely upon accurate estimates of oceanic carbon as a key constraint or input. In this context, satellite remote sensing of ocean colour plays a central role, as this Essential Climate Variable is currently the only one allowing coastal and open ocean waters to be monitored globally and systematically, at high spatial and temporal resolutions. Based on recent algorithms developed in the frame of several research projects funded by CNES, EUMETSAT, ANR, and the Copernicus Marine Service, we present and discuss the first global view of key aquatic carbon components over the global coastal ocean at high spatial resolution. The different algorithms developed to assess particulate organic carbon (POC), dissolved organic carbon (DOC), and the in-water partial pressure of CO2 (pCO2w) from ocean colour remote sensing will be presented with their quantified uncertainties. These algorithms will then be applied to the remote sensing reflectances obtained from the ESA Data User Element project GlobColour to assess the spatio-temporal patterns of POC, DOC, and pCO2w over the global coastal waters from 1998 to 2024. The main temporal patterns will be presented and discussed, and the global coastal carbon hot spots of significant long-term change will be identified.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Assessing the potential of remote sensing for large-scale Blue Carbon monitoring through the ESA Coastal Blue Carbon project.

Authors: Benoit Beguet, Christophe Proisy, Marlow Pellatt, Oscar Serrano, Christine Dupuy, Amélie Séchaud, Manon Tranchand-Besset, Thibault Catry, Elodie Blanchard, Karen Kohfeld, Miguel A. Mateo, Imad El-Jamaoui, Natacha Volto, Nicolas Lachaussée, Marie-Aude Sévin, Timothée Cook, Pierre Coan, Alvise Ca'zorzi, Fanny Noisette, Virginie Lafon, Aurélie Dehouck
Affiliations: i-Sea, AMAP, IRD, AMAP, IRD, CIRAD, CNRS, INRAE, Univ. Montpellier, Simon Fraser University (SFU), Centro de Estudios Avanzados de Blanes, CSIC, Littoral Environnement et Sociétés (LIENSs), UMR 7266, CNRS-La Rochelle Université, ESPACE-DEV, IRD, Univ. Montpellier, Univ. Guyane, Univ. La Réunion, Univ. Antilles, BlueSeeds, Université du Québec à Rimouski, Vois-là
Coastal Blue Carbon Ecosystems (BCE) such as mangroves, seagrasses, and tidal salt marshes play a crucial role in sequestering carbon, providing valuable ecosystem services, and enhancing coastal resilience against global change impacts. There is a pressing need to provide actionable data and tools to policymakers, conservationists, and stakeholders to support the effectiveness of policies aimed at conserving and restoring BCEs. But how can the limited number and representativeness of datasets be dealt with to ensure comprehensive and robust carbon storage maps of such ecosystems across three different continents? This is a key challenge we aim to address in the ESA Coastal Blue Carbon Project, focusing on highly representative pilot regions in France, Canada, Spain and French Guiana. Earth Observation (EO) methods are essential to monitor and assess BCEs’ potential for carbon storage. They provide a unique opportunity to up-scale estimates from local to regional and global scales. While ESA’s Copernicus programme (Sentinel-1 and Sentinel-2) provides high global availability and high repeatability at large scale, very-high-resolution imagery (Pleiades, Spot-6/7, Planet) at local scale is necessary to assess the geographical distribution of BCEs and capture finer information on the environmental drivers of blue carbon storage. These include vegetation species for salt marshes, fragmentation and density for seagrass meadows, and forest structure for mangroves. We developed a common methodological framework to study mangroves, seagrasses and tidal salt marshes using a multi-sensor approach and leveraging existing datasets to validate the models. For seagrasses and salt marshes, we designed a standardized approach to build a consistent reference dataset, usable at both resolutions (VHR and HR) and integrated into well-known test sites with existing field datasets.
The advantage of this approach is that it enables rigorous comparison of simple pixel-level classifiers (such as random forest) and more advanced patch-based classifiers (such as CNNs) at both scales. A simple typology compatible with both local and global scales is defined for the three BCEs, allowing methods and results to be compared. First, habitat mapping is performed with a semi-automatic framework; then, depending on data availability, local carbon stock values are applied to each class in the habitat map to produce carbon stock maps. Otherwise, the biomass of each habitat is modelled from a multivariate analysis based on ground measurements, before applying allometric relationships from the study-site-specific literature to predict carbon stock maps. Not all ecosystems are equally mature in terms of in-situ knowledge of blue carbon storage capacity and in terms of EO-based estimation. For mangroves, extensive field data from forest inventories and tree growth equations are employed to calculate the above-ground biomass (AGB) and carbon of forest stands at the time of acquisition of a large dataset of very-high-resolution Pleiades satellite imagery (50 cm) in French Guiana. A texture analysis is performed on Pleiades images to (1) map and label the diversity of mangrove habitats in terms of canopy properties and (2) predict the associated AGB at a 1-ha scale, which is then transformed into carbon maps based on a total (soil, below- and above-ground) carbon storage model. For intertidal (saltmarsh and seagrass) and underwater (seagrass) habitats, the estimation of carbon storage via EO is less direct, as a large part of this storage is in fact located in the soil. This therefore requires more in-depth studies to fully understand the potential and limits for these habitats.
Many questions arise from this project and remain open, such as the consistency of reference data through time, the need to derive specific carbon storage average values for distinct bioclimatic regions, and uncertainty mapping as a function of BC habitat, spatial resolution, field-based models, etc. Which environmental factors will improve carbon estimates using remote sensing (tidal range, coastal morphology, ice and growing season, rivers, latitude), and how does this affect regional upscaling of site-based data?
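The class-to-stock step described above (applying local carbon stock values to each class of the habitat map) reduces to a per-pixel look-up table, sketched below. The class codes, per-habitat stock values, and pixel size are placeholders, not figures from the project.

```python
import numpy as np

# Hypothetical mean carbon stocks per habitat class (Mg C / ha); values are
# placeholders for illustration, not project results.
stock_per_class = {0: 0.0,    # unvegetated / water
                   1: 120.0,  # mangrove
                   2: 40.0,   # salt marsh
                   3: 25.0}   # seagrass

habitat_map = np.array([[1, 1, 0],
                        [2, 3, 3],
                        [0, 2, 1]])          # toy classified scene

# Per-pixel stock via a look-up table indexed by class code
lut = np.array([stock_per_class[k] for k in sorted(stock_per_class)])
stock_map = lut[habitat_map]

pixel_area_ha = 0.25                         # e.g. 50 m x 50 m pixels
total_stock = stock_map.sum() * pixel_area_ha  # scene total in Mg C
assert stock_map.shape == habitat_map.shape
```

When local stock values are unavailable, the `lut` step would be replaced by the biomass model plus allometric relationships the abstract describes.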
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: SwedCoast-BlueCarb project: mapping eelgrass extent and health

Authors: Samantha Lavender, Dr Susanne Krazer
Affiliations: Pixalytics Ltd, Stockholm University
In support of broader activities to protect eelgrass beds, stop ongoing losses and facilitate recovery, satellite Earth Observation (EO) based research is being carried out on populations around the Swedish coast. The SwedCoast-BlueCarb project (https://www.spaceclimateobservatory.org/swedcoast-bluecarb), funded by the Swedish and UK Space Agencies and recognised by the Space for Climate Observatory, aims to use a combination of EO and in situ data (for validation) to assess the possible effects of mitigation efforts against climate change in the designated test areas, which include the contrasting conditions found around Kalmar and on the Swedish West Coast. Initial project activities sought to find relevant project sites and set up collaborations within academia and monitoring programs; laboratory-based activities have since focused on understanding the absorption and reflectance spectra of submerged vegetation. In addition, EO activities have focused on consistently processing satellite datasets, including testing atmospheric correction approaches and developing a modelling-based approach using the surface reflectance that can concurrently derive the optical properties and submerged vegetation. This approach includes applying Copernicus Sentinel-2 satellite data to map the eelgrass extent, with associated uncertainties, where the populations exist at a detectable depth. The aim is also to understand the water's optical status, which can impact light availability and health, with the coarser-resolution (300 m versus 20 m for Sentinel-2) marine-focused Copernicus Sentinel-3 mission providing supporting information. WorldView-2 imagery is also being processed to showcase the potential improvements from higher-resolution (2 m) commercial satellites. Now that the initial modelling approach is working, ongoing work uses a machine learning model to speed up the modelling so that whole Sentinel-2 scenes can be run systematically.
Ultimately, an automated processing system is being developed to systematically provide products openly online within a GIS-style portal, supporting local authorities and conservationists with the required information to conserve existing populations and potentially support replanting activities. The progress, including the portal, will be showcased during the presentation.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Dissolved organic matter dynamics along the European coasts: new insights from the TRaversing European Coastlines (TREC) expedition

Authors: Chiara Santinelli, Giancarlo Bachi, Paola Bertucci, Peer Bork, Emmanuel Boss, Vittorio Brando, Marco Carloni, Giovanni Checcucci, Margherita Costanzo, Douglas Couet, Colomban De Vargas, Valtere Evangelista, Mirco Guerrazzi, Morgan Guillam, Tom Jordan, Cristian Marchese, Dr Victor Martinez-Vicente, Claudia Tropea, Silvia Valsecchi
Affiliations: Biophysics Institute, National Research Council (CNR-IBF), European Molecular Biology Laboratory (EMBL), School of Marine Sciences, University of Maine, Institute of Marine Sciences, National Research Council (CNR-ISMAR), CNRS & Sorbonne Université, Station Biologique de Roscoff, Research Federation for the study of Global Ocean Systems Ecology & Evolution, Plymouth Marine Laboratory, National Biodiversity Future Center
Dissolved Organic Matter (DOM) in the oceans is the largest, the most complex, and the least understood reservoir of reactive carbon on Earth. Holding 6.6 × 10^14 kg of carbon and being the main source of energy for heterotrophic microbial communities, DOM plays a crucial role in the global carbon cycle. Its dynamics in coastal areas are very complex, since many different sources, both natural and anthropogenic, contribute to its pool, and the interaction of several biotic and abiotic factors drives its concentration, quality, distribution and evolution. In coastal areas, DOM shows a non-linear response to changes in environmental forcing and human pressure, and can be considered a synthetic descriptor of water quality and aquatic ecosystems’ health. Thanks to the ESA HyperBOOST (“Hyperspectral Bio-Optical Observations Sailing on Tara”) project, which supported our participation in the Tara EUROPA expedition within the EMBL TREC (“TRaversing European Coastlines”) program, we have the unique opportunity to take on the significant challenge of systematically exploring the complexity of DOM dynamics along the European coastline from Finland to Greece. TREC is an international multidisciplinary research program whose main goal is to holistically measure biodiversity and contextual parameters in waters, sediments, soils and the air across land-sea gradients, in order to understand the cross-talk between these ecosystems and their response to natural and anthropogenic factors. More than 500 DOM samples were collected in 2023 and 2024 along the European coastlines, including the Baltic Sea, the North Sea and the Mediterranean Sea. This unprecedented spatial coverage of standardized DOM data offers a comprehensive overview of DOM distribution along the European coasts. All the samples were collected and filtered following the exact same protocol by trained personnel and were analyzed in a single laboratory (CNR-IBF, Italy), making the dataset very robust.
The high similarity among the three replicates collected at each site demonstrates the quality of the data, with no report in the literature covering such extensive geographical and environmental scales. While the highest values of dissolved organic carbon (DOC) and of absorption and fluorescence of the chromophoric fraction of DOM (CDOM) are observed in the Baltic Sea, the lowest ones are found in the western Mediterranean Sea, with a marked increase close to river mouths, in particular in the northern Adriatic Sea. CDOM shows a significant linear correlation with its fluorescent fraction (FDOM) and with the DOC concentration, opening up the possibility of improving DOC quantification through remote sensing. Our data also show differences in DOM quality among basins, suggesting the need for regional corrections of current algorithms. Improved satellite algorithms for the assessment of DOC would help fill many of the DOM knowledge gaps in coastal areas. They could provide more accurate estimates of the fluxes from land and their spatial and temporal variability, as well as better predictions of how anthropogenic activities and climate change affect DOM dynamics in coastal areas, where terrestrial inputs of carbon affect the functioning of coastal ecosystems.
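The reported CDOM-DOC linear correlation is what makes a regression-based DOC retrieval from satellite CDOM products plausible. A sketch on synthetic match-up data follows; the slope, intercept, noise level, and wavelength are chosen purely for illustration and are not the expedition's values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic match-up data: CDOM absorption (e.g. at 443 nm) vs measured DOC.
# The linear form mirrors the reported CDOM-DOC correlation; the numbers
# below are arbitrary illustrative choices.
a_cdom = rng.uniform(0.02, 1.5, 100)                  # m^-1
doc = 60.0 + 45.0 * a_cdom + rng.normal(0, 2, 100)    # umol C L^-1

# Fit the empirical DOC-vs-CDOM relationship
slope, intercept = np.polyfit(a_cdom, doc, 1)

def doc_from_cdom(a):
    """Estimate DOC (umol C L^-1) from satellite-derived CDOM absorption."""
    return intercept + slope * a

pred = doc_from_cdom(a_cdom)
r = np.corrcoef(pred, doc)[0, 1]
assert r > 0.9   # strong linear relationship recovered from the match-ups
```

The regional differences in DOM quality mentioned above would translate into basin-specific `slope`/`intercept` pairs rather than a single global fit.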
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Trends in Coastal Ocean Primary Productivity

Authors: Dr Yanna Alexia Fidai, Shubha Sathyendranath, Gemma Kulk
Affiliations: Plymouth Marine Laboratory, National Centre of Earth Observation, Plymouth Marine Laboratory
The coastal ocean is a region of socio-economic and ecological importance. Yet, this system is under immense pressure from global climate change and other anthropogenic hazards, which threaten ecosystem services and increase the vulnerability of the growing coastal population and infrastructure. Whilst the coastal ocean is under pressure, it can also be part of the solution to manage and adapt to changes, with the phytoplankton ecosystem as an example of this. Primary production by phytoplankton plays an important role in the global carbon cycle through the conversion of inorganic carbon in the water to organic carbon via photosynthesis. This process is not only important for global climate regulation, but also essential for supporting all coastal ecosystems and the services they provide. In this study, we explored changes in phytoplankton primary production in the global coastal ocean from 1998-2022 within Longhurst's ecological provinces. We address three key questions: (1) In which coastal provinces does primary production undergo significant changes? (2) What are the underlying causes of these changes? (3) Is the aggregation of data into large areas (such as the ecological provinces) suitable for investigating the underlying causes of any observed change in the global coastal ocean? To address these questions, we have undertaken trend analysis of primary production model outputs based on the Ocean-Color Climate Change Initiative (OC-CCI) data, and applied a linear regression analysis using a seasonal decomposition of the time series and the autoregressive integrated moving average (ARIMA) method. We further explored the impact on primary production of sea surface temperature (from SST CCI), phytoplankton chlorophyll-a concentration from OC-CCI (while recognising that chlorophyll-a concentration was an input to the primary production calculations, and that, therefore, the two are not independent) and upwelling (Bakun Index).
We also studied the impacts of missing data (with a systematic bias) and of data aggregation methods at individual pixel level on apparent or artificial trends. Results showed that there were five (out of 23) coastal provinces with statistically significant increasing or decreasing linear trends in primary productivity, and that all regions with significant trends are also upwelling regions. Through this work we contribute to developing an understanding of the changes experienced in the global coastal ocean, which is essential knowledge for management of coastal challenges and pressures globally. This is a contribution to the UKRI NERC funded FOCUS Project (NE/X006271/1), the ESA SCOPE Project and the Simons Foundation CBIOMES Project.
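The trend-analysis step (seasonal decomposition of the time series followed by a linear fit) can be sketched on synthetic monthly primary-production data as below. The seasonal cycle, trend magnitude, and units are illustrative assumptions, and the ARIMA component of the actual analysis is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

# 25 years (1998-2022) of synthetic monthly primary production: a seasonal
# cycle plus a small positive trend; all numbers are illustrative.
months = np.arange(25 * 12)
season = 30 * np.sin(2 * np.pi * months / 12)
trend = 0.05 * months                          # imposed trend per month
pp = 500 + season + trend + rng.normal(0, 5, months.size)

# Seasonal decomposition: remove the mean climatology of each calendar month
climatology = pp.reshape(25, 12).mean(axis=0)
anomaly = pp - np.tile(climatology, 25)

# Linear trend fitted to the deseasonalized anomalies
slope, _ = np.polyfit(months, anomaly, 1)
assert abs(slope - 0.05) < 0.01                # recovers the imposed trend
```

Testing `slope` for statistical significance (and repeating the fit per Longhurst province) is the step that identifies the provinces with significant change.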
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Multiscale coastal blue carbon habitat mapping and stock assessments - a comprehensive approach using Copernicus Sentinel-2 and Contributing Missions

Authors: Dr Kasper Hancke, Robert N Poulsen, Dr Núria Marbà, Dr Iris E. Hendriks, Dr Julia Mañez, Dr Alex Morell, Lidia Cucala, Dr Laura Claramonte, Dr Hege Gundersen, Dr Charlie Lavin, Toms Buls, Dr Christian Lindemann, Dimitris Poursanidis
Affiliations: Foundation for Research And Technology Hellas, NIVA, SpectroFly ApS, Global Change Research Group, IMEDEA (CSIC-UIB)
High-resolution, digital, and systematic data are crucial for the protection and efficient management of marine coastal habitats, which are exposed to heavy pressure from human and climate impacts. Satellite remote sensing data alone often lack the image resolution and revisit time needed to assess blue carbon ecosystems, their areal extent and carbon stock, and the health of habitats such as seagrass meadows and macroalgal forests. This study presents the first results from a pilot project that employs a multiplatform approach to map coastal habitats, identify key ecosystem habitat types and species, and attempt to quantify blue carbon stock in the above-ground biomass. By integrating ground-collected data, high-resolution (cm-scale) drone data, and multiscale satellite remote sensing data from Copernicus Missions and Contributing Missions, we demonstrate how a multi-sensor approach can enhance data quality and spatial coverage for assessing ecosystem distribution and carbon stocks. Using ground truth data, RGB and multispectral imagery from drones and satellites, along with drone-based LIDAR data, we demonstrate a way to assess seagrass distribution, biomass and carbon stocks at different scales while utilizing advanced methods to quantify the habitat’s cover at the pixel level, allowing the transition among the sensors. This pilot marks the first step towards developing cost-effective methodologies for long-term assessments and monitoring of blue carbon habitats and coastal zone biodiversity. The project is a collaborative effort between the HEU projects C-BLUES and OBAMA-NEXT, and the Norwegian drone infrastructure SeaBee.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall M1/M2)

Session: C.03.09 The Copernicus services - PART 2

The Copernicus programme is headed by the EC, which is also responsible for setting requirements and managing the environmental services, while ESA coordinates the space infrastructure.
The wealth of Copernicus satellite observations, coming from the Sentinel and Contributing missions, and of in situ data, coming from ground-based, sea-borne or air-borne monitoring systems, feed a set of thematic services in different domains: marine, land, atmosphere, emergency, climate change and security.
The services convert all the space and non-space data into actionable information used by public authorities, international organisations and users for environment protection, management of urban areas, regional and local planning, agriculture, forestry, fisheries, health, transport, climate change, sustainable development, civil protection and tourism, among others.

Presentations and Speakers:


The Copernicus Emergency Management Service:


  • Peter Salamon – JRC

The Copernicus Marine Environment Monitoring Service:


  • Pierre-Yves Le Traon – Mercator Ocean

The Copernicus service for Security applications:


  • Sonja Gyallay-Pap – FRONTEX
  • Filipe Lisboa – EMSA
  • Denis Bruckert – SatCen

The Copernicus In-situ component:


  • Jose Miguel Rubio – EEA
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L1/L2)

Session: A.05.04 Advances at the observation-modelling interface - PART 2

As observations become more sophisticated, and models more complex, so too does the link between them. Rapidly expanding processing and data storage requirements bring new challenges for model evaluation and validation, and the need for tools, toolboxes and cross community collaboration. Novel assimilation techniques, developments in inverse modelling, and improved treatment of combined model-observation uncertainties, require close collaboration between communities.
This session welcomes submissions at the interface of Earth observation and modelling. Relevant topics include, but are not limited to:
• The role of observations in climate forcings
• Observational requirements to enable the next generation of CMIP model benchmarking
• The role of emerging technologies, including machine learning and artificial intelligence, in advancing the assessment of ESMs
• Development of toolboxes and metrics for model assessment
• Novel observations relevant to high-resolution model processes
• Model-observation integration approaches for process understanding


Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Quantifying and evaluating the influence of cloud controlling factors on cloud properties using causal inference

Authors: Lisa Bock, Axel Lauer, Veronika
Affiliations: Deutsches Zentrum Für Luft- Und Raumfahrt (DLR), Institute of Atmospheric Physics
As a key component of the hydrological cycle and the Earth’s radiation budget, clouds play an important role in both weather and climate. Our incomplete understanding of clouds and their role in cloud-climate feedbacks leads to large uncertainties in climate projections. Using causal discovery as an unsupervised machine learning method we aim to systematically analyse and quantify causal interdependencies and dynamical links between cloud properties and their controlling factors. This approach goes beyond correlation-based measures by systematically excluding common drivers and indirect links. The method we chose is based on the Fast Causal Inference (FCI) Algorithm that considers also the presence of unobserved variables. By quantifying the causal effect of each cloud controlling factor for different cloud regimes we expect to be able to better understand the dominant processes determining the micro- and macro-physical properties of clouds. Specifically, causal inference is used to investigate the links between cloud properties such as cloud cover, cloud water path, cloud top height and cloud radiative effects and so-called cloud controlling factors, i.e., quantities that impact cloud formation and temporal evolution of the cloud (e.g., sea surface temperature, water vapour path and lower tropospheric stability). For this, satellite data such as from the ESA CCI cloud, sea surface temperature and water vapour products are used together with ERA5 reanalysis data. By applying causal effect estimation, the strengths of the individual links in the resulting causal graph are quantified. The same method is then applied to output from global climate models, which allows for a process-based model evaluation of clouds and their controlling factors. In a first case study, we quantify causal links between cloud properties and cloud controlling factors over the subtropical ocean off the west coast of South America. 
The consistently high amount of cloud cover in this region is dominated by a single cloud type, the marine stratocumulus. The workflow developed for this example case will then be applied to other cloud regimes.
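The step from correlation to causation described above rests on conditional-independence testing. The toy numpy sketch below is not the FCI algorithm itself (whose machinery is far more involved), but it illustrates the primitive FCI builds on: a strong correlation between two cloud variables can vanish once a common driver such as SST is conditioned out. All variable names and numbers here are synthetic.

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y given z: correlate the residuals
    left after regressing each variable on z (plus an intercept)."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
sst = rng.normal(size=5000)                    # common driver (e.g. SST)
cloud_cover = sst + 0.1 * rng.normal(size=5000)
cloud_lwp   = sst + 0.1 * rng.normal(size=5000)

print(np.corrcoef(cloud_cover, cloud_lwp)[0, 1])  # strong raw correlation
print(partial_corr(cloud_cover, cloud_lwp, sst))  # near zero given the driver
```

Constraint-based discovery methods such as FCI run many tests of this kind over different conditioning sets to orient edges in the causal graph, while also accounting for possible unobserved confounders.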
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Benchmarking Earth System Model simulations with ESMValTool and ESA CCI satellite data

Authors: Axel Lauer, Manuel Schlund, Lisa Bock, Birgit
Affiliations: DLR Institute of Atmospheric Physics
Earth system models (ESMs) are important tools to improve our understanding of present-day climate and to project future climate change. For this purpose, ESMs are continuously improved and extended with additional and more detailed processes, resulting in more complex models. Particularly for new model versions, it is important to assess early on how well the historical climate is reproduced across many different climate-relevant variables and to systematically analyze, evaluate, understand, and document possible shortcomings. In order to identify which model biases need to be addressed with higher priority, putting model deviations from observations into the context of the deviations shown by other state-of-the-art ESMs is a promising option. The Earth System Model Evaluation Tool (ESMValTool) has been extended to monitor running simulations or benchmark existing ones against observations in the context of results from an ensemble of models such as the Coupled Model Intercomparison Project (CMIP). ESMValTool is an open-source, community-developed software package for the evaluation and analysis of ESMs. Model output can be benchmarked by calculating metrics such as the root mean square error (RMSE), the coefficient of determination (R²) or the Earth mover’s distance (EMD) relative to observationally-based reference datasets. These results are directly compared to the same metric calculated for an ensemble of models such as CMIP6, which provides a measure for the range of values that can be considered typical for state-of-the-art models. Results can be displayed in different types of plots such as map plots and zonal means (using stippling and hatching), time series including diurnal and seasonal cycles (via uncertainty bands), bar plots, or portrait diagrams.
Here, example applications are shown that use satellite data from the European Space Agency’s (ESA) Climate Change Initiative (CCI) and results from the European Centre for Medium-Range Weather Forecasts Reanalysis v5 (ERA5) as reference data for comparison with model output.
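The three benchmarking metrics named above are simple to state. The numpy-only sketch below is an illustrative re-implementation on synthetic fields, not ESMValTool code; it uses the fact that for equal-size 1-D samples the Earth mover's (Wasserstein-1) distance reduces to the mean absolute difference of the sorted values.

```python
import numpy as np

def benchmark(model, reference):
    """RMSE, R², and 1-D Earth mover's distance between two fields."""
    model, reference = np.ravel(model), np.ravel(reference)
    rmse = np.sqrt(np.mean((model - reference) ** 2))   # root mean square error
    ss_res = np.sum((reference - model) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                          # coefficient of determination
    # EMD between equal-size 1-D samples: mean |difference| of sorted values.
    emd = np.mean(np.abs(np.sort(model) - np.sort(reference)))
    return rmse, r2, emd

# A "model" field that is the reference shifted by a constant bias of 0.5:
ref = np.sin(np.linspace(0, 2 * np.pi, 100))
rmse, r2, emd = benchmark(ref + 0.5, ref)
print(rmse, emd)  # both close to 0.5: a pure bias shows up equally in RMSE and EMD
```

Note the metrics answer different questions: RMSE and R² penalize point-by-point disagreement, while EMD compares the distributions of values and is insensitive to where errors occur spatially.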
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Indicators of Global Climate Change: Annually Updated Climate Forcings and Observations

Authors: Chris Smith, Pr Piers Forster, Tristram Walsh, Debbie Rosen, Jiddu Brorsema, Alex Borger
Affiliations: Vrije Universiteit Brussel, International Institute for Applied Systems Analysis, University of Leeds, University of Oxford, Climate Change Tracker, Data for Action Foundation
Every seven years or so, the Intergovernmental Panel on Climate Change (IPCC) updates their assessments of the physical climate, which includes the global mean surface temperature change since pre-industrial times, greenhouse gas concentrations, radiative forcing, and remaining carbon budget. While the IPCC is the gold standard in international climate assessments, its update frequency is too sparse in this critical decade where policymakers need updated information to base climate policy upon. The Indicators of Global Climate Change (IGCC) project was started in 2023 to annually update these high-level IPCC assessments, using methods that were traceable to the original IPCC Working Group 1 Sixth Assessment report but using recently updated observational data, and including over 50 IPCC authors and other scientists. In June 2025, IGCC will publish its third annual update, reporting up to the calendar year 2024. An annual update of the anthropogenic and natural radiative forcing is one product from IGCC, for which we find that the large majority of the approximately 3 watts per square metre increase in radiative forcing since 1750 is due to anthropogenic influences. This is composed by and large of a positive (and increasing) greenhouse gas forcing approaching 4 watts per square metre offset by negative (and weakening) aerosol forcing of around -1 watts per square metre. Using the IGCC radiative forcing time series split into anthropogenic and natural components, we can attribute historical warming to anthropogenic and natural factors, and project these scenarios into the near future under different emissions scenarios, mapping out the policy space that is consistent with limiting warming to Paris Agreement targets. A variety of methods are used to obtain the radiative forcing assessment. 
Greenhouse gas concentrations and emissions of aerosol precursors and reactive gases from global inventories, combined with climate model results, are used to determine the major anthropogenic factors. Earth Observation methodologies also play a role, currently for the volcanic radiative forcing, which is determined from estimates of stratospheric aerosol optical depth obtained from satellite retrievals combined into the Global Satellite-based Stratospheric Aerosol Climatology (GloSSAC) dataset, with the most recent year using data from the Ozone Mapping and Profiler Suite Limb Profiler (OMPS-LP). The eruption of Hunga Tonga-Hunga Ha'apai in 2022 was unusual in that it emitted substantial amounts of water vapour into the stratosphere, causing a net positive warming. We use data from the Microwave Limb Sounder (MLS) instrument to monitor the ongoing excess stratospheric water vapour loading from this eruption, finding that the signal was persisting into 2024. There are areas of the radiative forcing methodology that would benefit from improvements, and where Earth Observation would greatly assist. One is the assessment of radiative forcing from land use change, which was last updated in 2014, and where combining satellite albedo retrievals with climate-model-derived transfer functions would provide a near real-time estimate. Another is a separation of wildfire emissions into forcing and feedback components, teasing out climate-driven contributions to wildfires and retaining the anthropogenic component as the forced response. Alongside a peer-reviewed paper published in Earth System Science Data every year, we provide a user-facing interactive dashboard showing the key climate indicators at igcc.earth.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Hydroterra+: advancing water cycle observation for enhanced forecasting and monitoring of intense weather events

Authors: Georgios Fragkoulidis, Christos Giannaros, Dr Martina Lagasio, Professor Vassiliki Kotroni, Antonio Parodi, Dr. Francesco Uboldi, Professor Andrea Monti Guarnieri, Dr. Christian Rossi
Affiliations: Cima Research Foundation, Polytechnic University of Milan, National Observatory of Athens, European Space Agency
Hydroterra+, subsequently referred to as “H+,” is an ESA Earth Explorer 12 (EE-12) candidate mission focused on the observation of rapid water cycle processes, particularly within Europe, the Mediterranean basin, and selected regions of Africa. These areas are particularly susceptible to climate change, and H+ aims to address a significant observational deficiency regarding the water cycle and, specifically, the processes that occur over hours to days at a regional scale. Existing and planned low Earth orbit missions do not sufficiently cater to this requirement. H+ is an advancement of the Hydroterra candidate mission for Earth Explorer 10, benefiting from the insights gained during its Phase 0 study and subsequent field experiments to improve scientific readiness. The mission is structured around four scientific objectives, which have been carefully selected to address critical uncertainties in water cycle science, where improved temporal sampling is a vital component. Among these scientific objectives is the study of intense storms in the Mediterranean and selected regions of Africa (SO1). Various studies have established that the assimilation of Integrated Water Vapor (IWV), measured in kg m−2, or its equivalent, Zenith Total Delay (ZTD) [m], plays a beneficial role in improving heavy rainfall forecasts. This has been demonstrated through research involving InSAR (Pichelli et al., 2015; Mateus et al., 2016; Mateus et al., 2018; Lagasio et al., 2019; Pierdicca et al., 2020) and GNSS techniques, to cite only part of a very rich literature. The integration of ZTD maps obtained from Earth Observation (EO) data, characterized by high spatial resolution and rapid revisit intervals, is anticipated to significantly enhance the performance of NWP models in predicting intense weather events. This is where H+ has the potential to make a substantial impact.
Preliminary findings from the scientific study activities associated with SO1 objectives will be shared and examined.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L1/L2)

Presentation: WIVERN: Leveraging Unique Observations to Benchmark Earth System Models Globally and Regionally

Authors: Dr Cathy Hohenegger, Dr Alessandro Battaglia, Prof Anthony Illingworth, Dr Rémy Roca, Maryam Pourshamsi
Affiliations: European Space Agency (esa)
Advances in computing technologies now enable next-generation Earth System Models (ESMs) to simulate the climate system at grid spacings of just a few kilometres, allowing for the explicit representation of small-scale processes. Benchmarking these models against high-quality observational data is crucial to assess their credibility, improve parameterisations, enhance predictive capabilities, and identify discrepancies between theoretical expectations and simulation outcomes. However, due to the high computational cost of such simulations, sufficient observations must be available regionally over short timescales (e.g., monthly). Challenges also remain in addressing key uncertainties, particularly in representing sub-grid processes such as cloud microphysics and turbulence, which significantly influence model performance. The WIVERN (WInd VElocity Radar Nephoscope) mission concept, one of two candidate missions competing for selection as the Earth Explorer 11 mission within ESA’s FutureEO programme, offers transformative potential for ESM evaluation. WIVERN’s unique capability to provide global, vertically-resolved wind profiles in cloudy and precipitating regions, coupled with its enhanced observations of cloud micro- and macro-physical properties, can directly address critical gaps in model benchmarking. Specifically, these critical gaps are areas where existing observational data are insufficient to: - Validate and improve the representation of small-scale processes, such as cloud dynamics, precipitation, and wind fields, within ESMs. - Address uncertainties in the representation of sub-grid processes, such as cloud microphysics and turbulence, which have a major influence on model performance. - Provide the spatial and temporal resolution required to benchmark high-resolution climate models effectively, particularly over short timeframes and regional scales.
By delivering unprecedented spatial and temporal resolution and significant coverage, WIVERN data will enable comprehensive validation even at regional scale from only a few months of measurements. This presentation will examine whether WIVERN can meet these expectations. It will use global climate simulations conducted with the ICON model at a grid spacing of 10 km as a virtual reality for WIVERN and assess how well WIVERN can reproduce the simulated climate regionally and over short time scales.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Development of the Passive and Active Microwave radiative TRAnsfer (PAMTRA) simulator for the next generation climate models and Earth Observation capabilities

Authors: Davide Ori, Dr. Nils-Arne Dreier, Dr. Vera Schemann, Mario Mech, Susanne Crewell, Dr. Florian
Affiliations: University Of Cologne
Model observation simulators are essential components of Earth observation and modeling systems, as they enable the development of retrieval methods, data assimilation algorithms, and model-observation comparisons. With a few exceptions optimized for specific instruments, all-sky microwave radiative transfer operators are usually applied as diagnostic tools on the output of a weather/climate model. This approach requires storage of a huge amount of temporary data, including detailed cloud properties, that is then synthesized into a smaller dataset of brightness temperatures and radar reflectivities. While there is copious literature on model evaluation for coarse-scale models or high-resolution simulations on limited domains, the upcoming global km-scale simulations pose concrete challenges in handling such a great amount of data to enable comparison with satellite observations. The emerging trend of increasing complexity in the representation of cloud microphysics only adds to this already demanding issue. With this contribution, we illustrate the recent developments of the Passive and Active Microwave radiative TRAnsfer (PAMTRA) tool to work alongside and together with the ICOsahedral Nonhydrostatic (ICON) model as a microwave instrument simulator for various (ground-based, airborne, and satellite) platforms. The two models run together, eliminating the need to store temporary data and allowing high-resolution synthetic observations to be obtained from PAMTRA at the model time step, which is particularly convenient for track-following airborne and satellite measurement comparisons. At the same time, the two models are strictly segregated and communicate through an ad-hoc interface, eliminating the risk of introducing spurious errors in the model codes and allowing the use of PAMTRA as a prototype for various analysis applications that would run as model plugins.
This objective has required extending the internal workflow of PAMTRA with new and efficient methods (look-up tables) to allow PAMTRA to run online with ICON without relevant performance losses. We illustrate the application of this new PAMTRA implementation to aircraft and satellite platforms. The results show a substantial improvement in the comparison between the model and observations at high spatial resolutions and a significant reduction in data storage. Altogether, these developments offer the climate science community much easier access to reliable synthetic microwave observations, and we believe it will help expand the range of applications of the current and future Earth observation datasets.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.61/1.62)

Session: F.04.28 The role of cities in advancing policy objectives through Earth Observation data and technology

As cities are the centres of most human life and activities, the way they are designed, function and move has a huge impact in determining our relationship with the planet. Indeed, the flow of urban life – encompassing the goods we consume, methods of transport, waste management practices, heating of buildings, and how we cohabit with green spaces and other species – influences not only urban contexts themselves, with direct impact on our health, but also has broader implications for life on Earth. Urban policies at international, regional, and local levels provide frameworks for cities to assess and manage their environmental impact, guiding them in measuring, monitoring, and reporting on their footprints, hence enabling an informed urban planning response.
Earth Observation data and technology provide a means to quantify impacts, detect changes, evaluate trends, detect anomalies, respond to natural disasters and feed models and what-if scenario analysis, all of which are essential for informed urban planning and decision-making. Moreover, by offering a consistent, global perspective, EO allows for comparability across different scales and regions. In combination with other data sources, such as local inventories, aerial imagery, IoT, and commercial high-resolution data, EO technologies provide city managers with a comprehensive overview of urban dynamics and priorities, supporting urban planning actions accordingly and effective communication with the public.
Despite its growing use, the impact of EO data varies across cities in Europe and worldwide due to differences in accessibility, integration capabilities, and local expertise. This session will explore best practices of EO-integrated solutions in urban management, while discussing key barriers and opportunities in the uptake of these for monitoring and reporting on policy objectives.

Speakers:


  • Dyfed Aubrey - UN Habitat
  • Thomas Kemper - European Commission JRC
  • Monika Kuffer - University of Twente
  • Yifang Ban - KTH Royal Institute of Technology
  • Tomáš Soukup - GISAT
  • Mattia Marconcini - DLR
  • Zina Mitraka - FORTH
  • Iphigenia Keramitsoglou - NOA
  • Fabio Del Frate - University Tor Vergata
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.14)

Session: B.02.02 Managing the Urban Green Transition with Earth Observation data and advanced analytics

Urban areas are at the forefront of the global green transition and the decarbonization of society, addressing challenges like climate change, resource scarcity, and social equity. Cities contribute approximately 70% of global CO2 emissions, highlighting their critical role in climate mitigation. Current policies and international agreements, such as the European Green Deal and networks like C40 Cities, are driving the push towards net zero carbon goals, necessitating innovative solutions.

This session explores the role of urban analytics, predictive analytics, and Earth Observation (EO) in supporting this mission. By leveraging data-driven approaches and satellite imagery, cities can optimize their green transition strategies, address local challenges, and effectively align resources.

Key topics include:
• Advances in Digital Twin Technologies and Predictive Analytics for managing the Urban Green Transition.
• Integration of novel analytics (e.g. AI, NLP, agent-based modelling) and EO data for managing the Urban Green Transition.
• EO-based analytic services and solutions from local to global scales to support urban green transition, including energy management, building energy efficiency and retrofitting, sustainable mobility, air quality, urban green management, and more.
• Best practices in data sharing and collaboration among cities, service providers, and stakeholders, to develop EO-based green transition services.
• Utilization of multimodal datasets and technologies for comprehensive urban analysis.

This session is targeted towards individuals engaged in or interested in the application of novel urban analytics and EO for managing the urban green transition. By fostering a collaborative environment, we aim to accelerate the development and adoption of innovative solutions applicable across various cities and regions.
Add to Google Calendar

Wednesday 25 June 16:15 - 17:45 (Room 1.14)

Presentation: Building Energy-efficiency Estimation with Artificial Intelligence

Authors: Dr Gianluca Murdaca, Mr Francesco Asaro, Mr Filippo Cazzarolli, Dr Emanuele Strano, Dr Mattia Marconcini, Dr Alessandra Feliciotti
Affiliations: MindEarth SA
Urban areas, responsible for nearly 70% of global CO₂ emissions, are central to the global green transition, requiring innovative tools for energy efficiency planning and retrofitting strategies. Policies such as the European Green Deal and initiatives like "100 Climate-Neutral Cities by 2030" call for innovative, data-driven solutions to address the complex challenges of energy efficiency, urban retrofitting, and sustainable development. In response to these pressing needs, MindEarth, as part of the ESA-funded Building Energy-efficiency Estimation with Artificial Intelligence (BEE-AI) project, has developed a dedicated novel framework. It uses cutting-edge AI to integrate Earth Observation (EO) data, street-level optical and thermal imagery collected via the proprietary MindEarth smartphone app, cadastral information, morphological indicators, and Energy Performance Certificates (EPCs). By leveraging these diverse datasets, the approach enables a comprehensive assessment of building energy performance and retrofitting potential, offering actionable insights to support evidence-based decision-making and policy implementation for sustainable urban development. The proposed framework combines implicit features computed from multimodal inputs (façade optical and thermal imagery, roof optical imagery) using a vision foundation model, alongside explicit features (construction age, roof and façade material, roof shape, presence of green roofs or PV panels) extracted with specialized models. Additionally, morphological features are incorporated to construct a comprehensive descriptive vector of 2120 features. This ultimately serves as input to a gradient-boosted tree model, which is trained to perform downstream energy performance classification by grouping EPC classes into three categories: AB, CD, and EFG.
The methodology has been demonstrated across three use cases focused on residential buildings in Milan, Copenhagen, and Vienna, representing diverse urban environments and data availability scenarios. In all cases, the framework delivered a detailed spatial assessment of residential energy efficiency and retrofitting potential. Analysis of façade characteristics identified clusters of buildings with low energy performance (classified as E or lower), supporting local authorities in prioritizing retrofitting efforts. Roof material and shape classification, combined with PV installation detection, provided insights into untapped renewable energy potential, particularly for residential and mixed-use buildings. Additionally, façade thermal behavior analysis highlighted areas of poor insulation, enabling targeted energy-saving interventions. Validation metrics across these pilots have been rigorously evaluated to assess the methodology's robustness and reliability. For EPC classification, the model achieved an F1 Micro score of 0.719 and an F1 Macro score of 0.711, indicating solid overall performance. Specialized models for roof material classification exhibited strong results, with an F1 Micro score of 0.793 and an F1 Macro score of 0.719. Among individual classes, the model performed best for tiles (F1: 0.897), stone (F1: 0.826), and plastic (F1: 0.818). Similarly, for photovoltaic (PV) installation detection, the framework yielded high accuracy with class-specific precision and recall above 0.87. These results highlight the framework's capacity to generate precise and consistent outputs, demonstrating its versatility and reliability across diverse urban contexts and application areas. Beyond the immediate applications demonstrated in these pilots, the framework offers significant potential for broader adoption, serving as a critical tool for advancing global green transition goals.
Its ability to operate across diverse urban environments makes it particularly valuable for cities aiming to enhance energy efficiency and achieve climate neutrality. By dynamically integrating multimodal datasets, the methodology overcomes the limitations of single-source approaches, offering granular insights into building energy performance and retrofitting potential. Advanced AI algorithms enable the extraction of complex features, enhancing both predictive accuracy and adaptability. The modular and automated design ensures scalability and reproducibility, making the framework suitable for large-scale implementation in cities with varying levels of data availability and infrastructure. While the results underscore the transformative potential of this approach, several challenges remain. Data accessibility and harmonization—especially for Energy Performance Certificates (EPCs)—pose ongoing obstacles due to regulatory differences, privacy restrictions, and inconsistencies in benchmarking systems across European countries. Addressing these challenges will require close collaboration with municipalities, housing associations, and national regulators, as well as alignment with harmonization initiatives like the Energy Performance of Buildings Directive (EPBD). Where EPC data is unavailable, incorporating alternative metrics such as energy consumption or CO₂ emissions could offer a viable solution. Street-level imagery availability presents similar challenges. Licensing restrictions on platforms like Google Street View and the limited quality or coverage of open alternatives such as Mapillary often necessitate supplementary in-situ campaigns. Accurate thermal imagery collection, essential for identifying heat loss, depends on specific conditions such as winter evenings. Tools like the MindEarth app provide a means to address these gaps by enabling localized, project-specific data collection. 
Despite these limitations, the methodology lays the groundwork for expanded applications across Europe and beyond. Validation in diverse urban contexts, including Copenhagen, Milan, and Vienna, demonstrates its adaptability and robustness. Partnerships with stakeholders such as KAB, HOFOR, Immobiliare.it Insights, and Urban Innovation Vienna have ensured the framework is aligned with real-world needs, providing actionable insights to support energy planning and retrofitting strategies. Key outcomes include precise classification of EPC ratings, identification of low-performing buildings, and assessments of renewable energy potential, offering critical guidance for resource allocation and policy development. This AI-driven framework bridges the gap between scientific innovation and practical application, empowering stakeholders to design targeted interventions, optimize resources, and monitor progress toward energy efficiency and climate neutrality. By integrating EO data with advanced AI techniques, the framework delivers scalable, impactful solutions for the global green transition. Supporting efforts in energy efficiency, retrofitting, and climate resilience, it represents a foundational tool for sustainable urban development and future climate-neutral initiatives.

Wednesday 25 June 16:15 - 17:45 (Room 1.14)

Presentation: Spatially-optimized urban greening for reduction of population exposure to land surface temperature extremes

Authors: Emanuele Massaro
Affiliations: European Commission Joint Research Centre
Urban areas are at the forefront of the global green transition, contributing approximately 70% of global CO2 emissions and facing increasing risks from climate change, resource scarcity, and social inequities. Rising urban populations, compounded by anthropogenic climate change, have dramatically increased exposure to extreme heat, making cities critical arenas for climate mitigation and adaptation strategies. The urgency to mitigate urban heat risks is recognized in global initiatives such as the European Green Deal and C40 Cities, which advocate for net-zero carbon transitions. However, despite the growing emphasis on urban resilience, tools that effectively guide city-specific greening interventions to reduce heat exposure remain limited. This study addresses this gap by introducing a novel, spatially optimized approach to urban greening, integrating Earth Observation (EO) data, predictive analytics, and urban analytics to support the urban green transition. This research utilizes a spatial regression model to quantify and mitigate population exposure to Land Surface Temperature (LST) extremes across 200 cities globally, representing diverse climate zones. Exposure is defined as the product of the number of days per year when LST exceeds a 90th percentile threshold and the urban population exposed, measured in person-days. By leveraging satellite-derived datasets, including the Normalized Difference Vegetation Index (NDVI) and proximity to water bodies, we developed a robust framework for predicting exposure and assessing mitigation strategies through urban greening. The methodology integrates multiple datasets, such as MODIS thermal imagery for LST, ERA5 data for climate baselines, and the European Commission’s Global Human Settlement Layer for urban boundaries and population distributions, ensuring technical rigor and global applicability. 
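The exposure metric defined above (days per year over a 90th-percentile LST threshold, multiplied by the population affected) is simple enough to sketch directly. The illustrative NumPy function below is an assumption about shapes and naming, not the authors' code:

```python
import numpy as np

def exposure_person_days(lst_series, population, pctl=90):
    """Population exposure to LST extremes, in person-days.

    lst_series : (days, cells) array of daily LST values
    population : (cells,) residents per grid cell
    A day counts as extreme for a cell when its LST exceeds that
    cell's `pctl`-th percentile, per the definition in the abstract.
    """
    thresh = np.percentile(lst_series, pctl, axis=0)   # per-cell threshold
    hot_days = (lst_series > thresh).sum(axis=0)       # extreme days per cell
    return float((hot_days * population).sum())        # person-days

# toy example: one year of synthetic LST over three cells
rng = np.random.default_rng(0)
lst = rng.normal(30.0, 5.0, size=(365, 3))
pop = np.array([1000, 5000, 200])
total = exposure_person_days(lst, pop)
```

In the study the LST stack would come from MODIS thermal imagery and the population grid from the Global Human Settlement Layer; here both are synthetic.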
Key Contributions and Findings

Spatial Regression Model Development and Validation: The spatial regression model was rigorously validated using a k-fold cross-validation framework, achieving an R² ≥ 0.8 across diverse climatic and geographic contexts. This predictive accuracy underscores the model’s robustness in estimating population exposure to LST extremes using minimal input variables. Unlike traditional models, which often rely on generalized assumptions, our approach explicitly accounts for spatial autocorrelation, leveraging spatial lag modeling techniques to improve predictions.

Role of Urban Vegetation in Reducing Heat Exposure: The findings highlight the critical role of urban vegetation in mitigating heat exposure. Increasing NDVI values through targeted urban greening significantly reduces LST exposure. For example, increasing NDVI by 0.3 in targeted areas resulted in a 12% reduction in exposure in temperate cities, 16% in continental zones, and up to 32% in tropical regions. These results emphasize the need for location-specific strategies that prioritize high-exposure areas.

Optimization of Greening Strategies: A key innovation of this study is the demonstration of spatially optimized greening interventions. Targeting high-exposure areas reduced the amount of vegetation required by up to 70% compared to uniform greening strategies, offering cost-effective and resource-efficient solutions for urban planners. For instance, in Paris, a localized greening strategy achieved the same exposure reduction as uniform greening while requiring only 14% of the total NDVI increment.

Global Patterns and Regional Variability: The analysis revealed distinct regional trends in LST exposure and mitigation potential. Cities in tropical regions showed the greatest vulnerability to heat extremes but also the highest mitigation potential through targeted greening. In contrast, temperate cities displayed more moderate exposure levels but still benefited from optimized interventions. This regional variability underscores the importance of tailoring greening strategies to local climatic and socio-economic conditions.

Urban Greening as a Scalable Mitigation Strategy: The methodology developed in this study is scalable and replicable across cities worldwide, facilitated by the consistent availability of EO data. The use of globally accessible datasets, such as MODIS and the Global Human Settlement Layer, ensures that this approach can be applied to cities with varying levels of data infrastructure and technical capacity.

Relevance to the Urban Green Transition

This study makes significant contributions to the urban green transition by providing actionable insights for policymakers, urban planners, and stakeholders. It aligns with the session’s emphasis on leveraging EO data and predictive analytics to optimize urban sustainability. Specifically, the research demonstrates how spatial optimization can maximize the impact of urban greening initiatives while minimizing resource use, addressing critical challenges such as land scarcity, cost constraints, and equity in green space distribution. The findings are directly relevant to key areas of the urban green transition, including:

Energy Efficiency and Retrofitting: By reducing LST, urban greening lowers cooling demands in buildings, contributing to energy efficiency and reducing greenhouse gas emissions.

Sustainable Mobility: Cooler urban environments promote pedestrian and cycling-friendly spaces, enhancing sustainable mobility options.

Air Quality Improvement: Increased vegetation cover contributes to improved air quality by reducing particulate matter and other pollutants, complementing efforts to decarbonize urban areas.

Equitable Green Space Distribution: The optimization framework ensures that greening interventions prioritize high-exposure and vulnerable areas, addressing social equity concerns in urban planning.

Methodological Innovation and Technical Correctness

This study exemplifies technical innovation in urban analytics by integrating EO data with advanced spatial modeling techniques. The use of the spatial lag model to address autocorrelation in urban data represents a significant methodological advancement, ensuring accurate predictions of LST exposure and mitigation potential. The robustness of the model was demonstrated through extensive validation across diverse cities, highlighting its reliability and applicability. Moreover, the integration of multimodal datasets, including thermal imagery, vegetation indices, and population distributions, provides a comprehensive understanding of urban heat dynamics. The methodology’s reliance on globally available datasets ensures scalability, enabling cities worldwide to adopt this approach regardless of resource constraints.

Policy and Application Relevance

The insights generated by this research have immediate implications for urban policy and planning. By providing a data-driven framework for targeting greening interventions, the study supports evidence-based decision-making and resource allocation. The results can inform urban adaptation strategies, green infrastructure investments, and climate action plans, contributing to the broader goals of the European Green Deal and other global sustainability initiatives. Key policy recommendations include:

Prioritizing high-exposure areas for urban greening to maximize impact and cost efficiency.

Integrating urban greening with other mitigation strategies, such as increasing surface albedo and enhancing water bodies, to achieve synergistic effects.

Developing standardized metrics for monitoring and evaluating the effectiveness of greening interventions, ensuring accountability and continuous improvement.

Promoting data-sharing collaborations among cities, EO service providers, and research institutions to enhance the scalability and impact of urban sustainability initiatives.

Broader Implications and Future Directions

The research contributes to the growing body of knowledge on urban resilience and sustainability, emphasizing the potential of EO-based analytics to address complex urban challenges. By combining high-resolution satellite data with predictive modeling, this study advances the field of urban climate adaptation and provides a blueprint for integrating sustainability frameworks into urban planning. Future research should explore the integration of additional mitigation strategies, such as urban water management and reflective surface technologies, to complement greening efforts. Additionally, further studies could examine the socio-economic dimensions of urban heat exposure, including disparities in green space access and the role of community engagement in adaptation planning.

Conclusion

This study provides a robust, data-driven framework for advancing the urban green transition, addressing critical challenges in climate adaptation, resource efficiency, and social equity. By leveraging EO data and spatial optimization techniques, the research offers innovative solutions for mitigating urban heat exposure and enhancing the livability of cities worldwide. The findings align with the session’s objectives, showcasing the transformative potential of EO-based analytics and interdisciplinary collaboration in achieving sustainable urban futures.

Reference: Massaro, E., Schifanella, R., Piccardo, M., Caporaso, L., Taubenböck, H., Cescatti, A., & Duveiller, G. (2023). Spatially-optimized urban greening for reduction of population exposure to land surface temperature extremes. Nature Communications, 14(1), 2903.
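The spatial optimization itself is not published as code in this abstract, but the prioritization idea it describes — green the highest-payoff cells first until the desired exposure reduction is met — can be illustrated with a simple greedy sketch (function name and inputs are hypothetical):

```python
import numpy as np

def targeted_greening(per_cell_gain, target_reduction):
    """Greedy sketch of spatially targeted greening.

    per_cell_gain    : (cells,) exposure removed by greening each cell,
                       e.g. from raising its NDVI by a fixed increment
    target_reduction : total exposure (person-days) to remove
    Returns (chosen cell indices, exposure actually removed),
    greening the highest-payoff cells first.
    """
    order = np.argsort(per_cell_gain)[::-1]   # largest gain first
    chosen, removed = [], 0.0
    for i in order:
        if removed >= target_reduction:
            break
        chosen.append(int(i))
        removed += float(per_cell_gain[i])
    return chosen, removed
```

Because the largest gains are taken first, the target is typically met with far fewer greened cells than a uniform strategy — the effect behind the up-to-70% vegetation saving reported above.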

Wednesday 25 June 16:15 - 17:45 (Room 1.14)

Presentation: Spatial Planning Digital Twins for Sustainable Cities

Authors: Mark Birkin, Daniel Arribas-Bel, Wenhua Jiang, Nicolas Malleson, Matthew Tipuric, David Wagg
Affiliations: University Of Leeds, Alan Turing Institute, University of Liverpool, University of Sheffield
In response to the climate crisis, net zero strategies are needed for planning and policy-making at national, regional and local scales. The EU Green Deal provides the framework for a coordinated policy response to the sustainability challenge across the continent of Europe. Nevertheless, understanding the complex interplay between human behaviour, climate dynamics and policy intervention is rife with complexity and uncertainty. In this paper we will describe initial steps in a project to develop technology components for spatial planning in a city region in support of the transition to net zero. The project builds on “DemoLand”, a project at The Alan Turing Institute to develop a spatial decision support system. DemoLand has been developed in partnership with UK government and an English city council. DemoLand provides a user-friendly interface to several predictive models that link urban land use to a set of indicators of relevance for urban policy makers, including air quality, house prices, and accessibility. The mathematical description of land use used in the models relies on available open vector data as well as satellite imagery from the Sentinel-2 mission. These are combined through the use of recent foundation models for vision (e.g., SatlasPretrain, NASA-IBM, Clay) to obtain predictions of the indicators of interest. To supplement the insights gained from EO data, we intend to build a multi-level agent-based model (MABM) to capture the behaviour of households, policy makers and markets. We will use the model to understand future energy use and the impact of ‘what if?’ policy scenarios, producing estimates of energy requirements to the medium-term future (in the order of ten years) for individual residential households under uncertain climatic and economic conditions. The model will work at multiple different scales so that it can be informed by, and integrated with, earth observation images.
At the ‘high’ level it will use DemoLand to draw on insight from earth observation images from the European Space Agency Copernicus database. At the ‘middle’ level it will capture policy-maker decisions on their local regions. At the finest level it will include individual ‘household’ agents that make decisions and perform actions related to energy use, such as increasing energy consumption in a cold winter, or adoption of an alternative heating source in response to government subsidies. A new user interface to the combined DemoLand-MABM will be provided via a micro-services web application architecture and built so that users can develop and test scenarios, based on specific geographical locations. The result will be a ‘scenario planning digital twin’ to support decisions by policy-makers. The project will explore two use cases which focus on priority objectives from the EU Green Deal. The first use case – promotion of clean energy – is targeted on the transition to renewable energy sources, such as wind, solar, and hydroelectric power. The second use case – making homes more energy efficient – will support initiatives aimed at increasing energy efficiency in buildings, transportation, and industry. The success of this project will depend on strong collaborations with city planners, commercial organisations, regulators, government departments and members of the public. Partners will be rewarded with robust insights, evidence-based assessments and strategic guidance towards the mitigation of climate change through policy.

Wednesday 25 June 16:15 - 17:45 (Room 1.14)

Presentation: "3-30-300" rule compliance analyses with satellite data

Authors: Alen Berta
Affiliations: CGI Deutschland
In urban areas there are many stressors that can influence human health and well-being - starting from the urban heat island effect, air pollution or noise to a lack of areas for relaxation. On the other hand, these effects can be mitigated or reduced by green and blue infrastructure. Studies have shown that well-developed green and blue infrastructure in urban areas has a positive impact on well-being. These impacts can be direct, for instance living close to such infrastructure, which can increase physical activity, or indirect, improving air quality and reducing urban heat island effects. Recently, a new paradigm or framework regarding green infrastructure has arisen under the name “3-30-300 rule” (Konijnendijk, 2022). This rule requires that every citizen should: 1) be able to see at least 3 trees from their home, 2) have 30 percent tree canopy cover in their neighbourhood and 3) not live more than 300 m away from the nearest park or green space, which in return brings many benefits including increased physical activity, improved air quality, and reduced urban heat island effects. The 3-30-300 rule can be used by city planners and decision makers as it provides a measurable target for urban greening efforts, for monitoring changes or during development of new areas, while also being simple information that can be provided to citizens, enhancing transparency. The rule has its challenges to be measured, but according to ongoing discussions, high-resolution land cover maps and various spatial and network analyses are recommended for measuring it. Within this implementation, we introduce one possibility for measuring each component of this rule using high-resolution satellite imagery, ML-based land cover classification and advanced GIS spatial analyses. This approach is also implemented in the CGI Healthy Urban Habitat index, as a crucial pillar and baseline for quantifying green infrastructure characteristics.
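As a rough illustration of how the three components might be checked once land cover and green-space layers exist, the toy function below is entirely hypothetical — a real workflow would use viewshed analysis for tree visibility and network (walking) distance rather than straight-line distance to park centroids:

```python
from math import hypot

def check_3_30_300(visible_trees, canopy_fraction, home, green_spaces):
    """Toy 3-30-300 compliance check for one dwelling.

    visible_trees   : number of trees visible from the home
    canopy_fraction : neighbourhood tree-canopy cover (0..1)
    home            : (x, y) position in metres
    green_spaces    : list of (x, y) park/green-space centroids
    """
    near_park = any(hypot(home[0] - x, home[1] - y) <= 300.0
                    for x, y in green_spaces)
    return {
        "3_trees": visible_trees >= 3,         # rule 1: see 3 trees
        "30_canopy": canopy_fraction >= 0.30,  # rule 2: 30% canopy
        "300_m": near_park,                    # rule 3: park within 300 m
    }
```

Aggregating such per-dwelling checks over a city yields the kind of compliance map the abstract describes.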

Wednesday 25 June 16:15 - 17:45 (Room 1.14)

Presentation: Urban Thermal Dynamics Analysis with Remote Sensing and Deep Learning for Micro-Scale Vegetation-Based Cooling Strategies

Authors: Priyanka Rao, Patrizia Tassinari, Daniele Torreggiani
Affiliations: Department of Agri-Food Sciences and Technologies, University Of Bologna
Urbanization and climate change have amplified urban heat islands (UHIs) [1, 2], exacerbating thermal stress, increasing energy demands, and threatening public health and ecosystems, particularly as urban areas are projected to host 70% of the global population by 2050 [3]. While existing research primarily focuses on broad-scale assessments of urban heat mitigation, these studies often rely on coarse-resolution data or generic strategies that fail to capture fine-scale thermal variations in urban design, land use, and population distribution [4]. This lack of localized analysis limits their applicability to urban planning at the micro-scale, where the challenges and solutions originate. In addition, mapping the heat stress hotspots is crucial for planning targeted interventions [5]. Therefore, to address the gaps, detailed, spatially explicit studies are required to evaluate the drivers of urban thermal dynamics and identify scalable [6] and effective strategies to mitigate heat stress. Urban green spaces (UGS) present a promising solution [7], offering cooling benefits through evapotranspiration, shading, and enhanced surface albedo. However, their effectiveness varies widely depending on vegetation type, urban morphology, and local climatic conditions, necessitating high-resolution spatial analysis to understand and optimize the complex interactions between these factors. This study integrates Landsat-derived Land Surface Temperature (LST) and landscape morphology datasets from Copernicus Land Monitoring Services (CLMS), including urban morphology, tree cover, and land cover, along with population data and built-up surface area from the Global Human Settlement Layer (GHSL), derived from Sentinel-2 composites, to analyze thermal variations within metropolitan boundaries at a grid scale. Using regression-based modeling, LST was downscaled to improve spatial resolution from 30 m to 10 m.
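The regression-based downscaling step can be sketched as fitting a linear LST-predictor relationship at 30 m and re-applying it to the same predictors at 10 m. The minimal NumPy version below is illustrative only: the study's actual model and predictor set may differ, and the residual-correction step common in thermal sharpening methods is omitted for brevity:

```python
import numpy as np

def downscale_lst(coarse_lst, coarse_pred, fine_pred):
    """Regression-based LST downscaling sketch.

    coarse_lst  : (n,) LST at the coarse (30 m) cells
    coarse_pred : (n, k) predictors (e.g. NDVI, built-up) at 30 m
    fine_pred   : (m, k) the same predictors at 10 m
    Fits LST = X @ beta (with intercept) at the coarse scale by least
    squares, then evaluates the fitted model on the fine-scale grid.
    """
    X = np.column_stack([coarse_pred, np.ones(len(coarse_lst))])
    beta, *_ = np.linalg.lstsq(X, coarse_lst, rcond=None)
    Xf = np.column_stack([fine_pred, np.ones(len(fine_pred))])
    return Xf @ beta
```

The assumption behind the approach is that the LST-predictor relationship is scale-invariant between 30 m and 10 m, which is why the coarse-scale coefficients can be reused on the fine grid.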
A convolutional neural network (CNN)-based model was utilized to analyze the influence of key factors, including built-up surfaces, vegetation cover, urban land types, and population density, on surface thermal dynamics. The model demonstrated strong predictive capabilities, highlighting the dominant role of built-up surfaces in driving thermal variations, followed by contributions from other variables. These findings underscore the importance of spatially explicit analysis for understanding and mitigating urban heat. This study underscores the intricate relationship between urban design, population density, and thermal dynamics, highlighting the importance of context-specific strategies for neighborhood-scale heat mitigation. Offering actionable insights into the cooling potential of UGS provides a valuable framework for urban planners and policymakers to design sustainable, climate-resilient cities. Integrating advanced remote sensing techniques with deep learning models showcases a robust approach to addressing critical urban climate challenges, paving the way for informed decision-making and enhanced urban livability.

Keywords: Urban green spaces, Climate resilience, Land surface temperature, Convolutional neural network (CNN), Regression, Remote sensing

References
[1] G. Manoli, S. Fatichi, M. Schläpfer, K. Yu, T. W. Crowther, N. Meili, P. Burlando, G. G. Katul, and E. Bou-Zeid, “Magnitude of urban heat islands largely explained by climate and population,” Nature, vol. 573, no. 7772, pp. 55–60, 2019.
[2] P. Rao, P. Tassinari, and D. Torreggiani, “Exploring the land-use urban heat island nexus under climate change conditions using machine learning approach: A spatio-temporal analysis of remotely sensed data,” Heliyon, vol. 9, no. 8, 2023.
[3] United Nations Department of Economic and Social Affairs, 2018. Accessed: May 20, 2023.
[4] Y. Yao, C. Chang, F. Ndayisaba, and S. Wang, “A new approach for surface urban heat island monitoring based on machine learning algorithm and spatiotemporal fusion model,” IEEE Access, vol. 8, pp. 164268–164281, 2020.
[5] P. Rao, P. Tassinari, and D. Torreggiani, “A framework analyzing climate change, air quality and greenery to unveil environmental stress risk hotspots,” Remote Sensing, vol. 16, no. 13, p. 2420, 2024.
[6] Q. Weng, “Thermal infrared remote sensing for urban climate and environmental studies: Methods, applications, and trends,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 64, no. 4, pp. 335–344, 2009.
[7] D. E. Bowler, L. M. Buyung-Ali, T. M. Knight, and A. S. Pullin, “A systematic review of evidence for the added benefits to health of exposure to natural environments,” BMC Public Health, vol. 10, pp. 1–10, 2010.

Wednesday 25 June 16:15 - 17:45 (Room 1.14)

Presentation: A national map of local climate zones: a tool based on satellite data for the French National Climate Adaptation Plan

Authors: Benjamin Piccinini, Dr. Julien Bouyer, Teodolina Lopez, Arnaud Ceyte, Quentin Gautier
Affiliations: Cerema, Satellite Applications Team, Occitanie regional directorate, Cerema, TEAM research team, East regional directorate, Cerema, ENDSUM Research team
The urban heat island (UHI) effect, intensified by climate change, poses a critical threat to urban resilience worldwide. As urban areas expand, the increased impervious areas, reduced vegetation, and concentrated human activity heighten the risk of overheating, particularly during extreme heat events. Addressing this challenge requires reliable, actionable data on urban morphology and land use to guide local adaptation measures. The concept of Local Climate Zones (LCZs), as introduced by Stewart & Oke (2012), provides a robust framework for understanding urban climate variability. We developed the first nationwide Local Climate Zones (LCZ) dataset for France, based on a method developed by Cerema research & engineering teams throughout several years of research and development, with support from CNES (through the SatLCZ Space Climate Observatory project) and the French Ministry for Ecological Transition. Urban Atlas from Copernicus Land is used to provide homogeneous typo-morphological units. The 2022 national coverage SPOT6/7 satellite imagery (1.5 m resolution) of France provided by DINAMIS and BD TOPO v3.3 (French national geographical institute IGN) are used to produce a five-class land cover map (buildings, roads and impervious areas, permeable bare soils, water areas, vegetation). The BD TOPO building layer provides building heights. The digital surface model (MNS-Correl from IGN) and digital elevation model (RGE ALTI 1M from IGN) help distinguish herbaceous and tree vegetation. These data are used to derive morphological indicators and land cover indicators to classify the elementary typo-morphological units into Local Climate Zones. This initiative provides a detailed and consistent classification of urban areas across metropolitan France, covering 42 million inhabitants in urban areas with populations exceeding 50,000.
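To make the indicator-based classification concrete, here is a deliberately simplified toy assignment of built-type LCZs from a few morphological indicators; the thresholds only loosely echo Stewart & Oke (2012) and are not Cerema's production rules:

```python
def classify_lcz(building_fraction, mean_height_m, pervious_fraction):
    """Deliberately simplified built-type LCZ assignment.

    building_fraction : plan-area share of buildings in the unit (0..1)
    mean_height_m     : mean building height in metres
    pervious_fraction : share of permeable/vegetated surface (0..1)
    Thresholds are illustrative, loosely inspired by Stewart & Oke (2012).
    """
    if building_fraction < 0.10 and pervious_fraction > 0.60:
        return "LCZ A-D (natural)"
    compact = building_fraction >= 0.40
    if mean_height_m >= 25:
        return "LCZ 1 (compact high-rise)" if compact else "LCZ 4 (open high-rise)"
    if mean_height_m >= 10:
        return "LCZ 2 (compact midrise)" if compact else "LCZ 5 (open midrise)"
    return "LCZ 3 (compact low-rise)" if compact else "LCZ 6 (open low-rise)"
```

In the actual workflow, each indicator is computed per typo-morphological unit from the land cover, building and elevation layers listed above before a rule set of this general shape is applied.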
Outputs are openly available via a webmapping platform (https://bit.ly/plateforme-LCZ) and through the French national platform data.gouv.fr (https://www.data.gouv.fr/fr/datasets/cartographie-des-zones-climatiques-locales-lcz-de-83-aires-urbaines-de-plus-de-50-000-habitants-2022/), ensuring easy accessibility for local authorities, urban planners and private companies. This work directly supports the 3rd French National Climate Change Adaptation Plan (PNACC3, Measure 13). By providing detailed spatial information, the LCZ maps enable cities to identify heatwave-vulnerable areas, focus further diagnostics, and implement targeted adaptation measures. We will emphasize several operational use-case examples in cities of different sizes: resilience diagnostics and action plans, urban planning, city greening strategy, cooling network potential identification… Beyond national boundaries, we will show how the approach may be adaptable to international contexts (with an example in Thailand), providing a scalable framework for addressing urban climate challenges.

Wednesday 25 June 16:30 - 16:50 (EO Arena)

Demo: A.08.17 DEMO - CNES cloud platform and services to optimize SWOT ocean data use

#pangeo

The Surface Water and Ocean Topography (SWOT) mission, a joint venture between the United States (NASA) and France (CNES), with contributions from the Canadian Space Agency (CSA) and the United Kingdom Space Agency (UKSA), has been measuring the world's surface waters for more than two years, providing the first high-resolution mapping of our planet's water resources. SWOT's innovative KaRIn (short for Ka-band Radar Interferometer) instrument provides remarkable insights into the study of fine structures (down to about 10 km) of the ocean circulation, coastal processes, and freshwater stock variations in lakes and rivers (greater than 100 m).

As part of the SWOT ocean data dissemination, this demonstration will showcase the cloud-based tools and services offered by CNES. In particular, we will present the CNES cloud platform for hosting SWOT projects (high computing power with CPU and GPU capacities, very fast and optimized remote access to SWOT data products, etc.) together with SWOT-specific Pangeo-based libraries, powerful tools, dedicated tutorials illustrating simple use cases (intercomparison with other satellite data or in-situ measurements, cyclone monitoring, coastal applications, etc.), and technical support (helpdesk) for smooth sailing on the platform.

Speakers:


  • Cyril Germineaud - CNES

Wednesday 25 June 16:52 - 17:12 (EO Arena)

Demo: D.03.34 DEMO - EDC & Pangeo Integration on EarthCODE

#stac #pangeo

This demonstration will provide a concise yet comprehensive overview of how the Pangeo ecosystem (on EDC) integrates seamlessly into EarthCODE. During this 20-minute talk, participants will learn about EarthCODE's core capabilities that support FAIR (Findable, Accessible, Interoperable, and Reusable) and open-science principles for Earth Observation (EO) data.

We will showcase:
- The integration of Pangeo's scalable, reproducible scientific workflows within EarthCODE, enabling users to efficiently discover, access, and process large EO datasets.
- Key functionalities such as dataset access via EarthCODE Science Catalog using STAC and OGC standards.
- Practical examples demonstrating data analysis with Pangeo tools, including data loading with Xarray, visualization using HvPlot, and scalable computation leveraging Dask.
- Real-world use cases featuring Copernicus Sentinel satellite data

The demonstration will highlight how researchers can easily adapt existing workflows to their needs and ensure reproducibility by publishing results directly through EarthCODE's integrated platforms.

Speakers:


  • Deyan Samardzhiev - Lampata
  • Ewelina Agnieszka Dobrowolska - Serco
  • Anne Fouilloux - Simula Labs

Wednesday 25 June 17:00 - 19:00 (Schweizerhaus, Prater 116)

Social: Cloud-native Geospatial Community Social.
Register to attend here

#cloud-native

Socialize with other members of the Cloud-Native Geospatial community.

Wednesday 25 June 17:15 - 17:35 (EO Arena)

Demo: C.01.28 DEMO - Innovative space-based solution for the Maritime Domain Awareness

Unseenlabs is a European company headquartered in France, pioneer and global leader in space-based radio-frequency (RF) detection for Maritime Domain Awareness. The company owns and operates its constellation of satellites. Each satellite is equipped with exclusive, in-house developed technology that enables the detection, characterization, and geolocation of shipborne RF emitters in maritime areas worldwide, anytime and regardless of weather conditions.

The Unseenlabs constellation is specifically designed to address gaps in maritime surveillance, particularly those arising from the limitations of the Automatic Identification System (AIS). While AIS is the conventional shipborne maritime security system, it can be turned off voluntarily, and is easily jammed or spoofed, leading to significant blind spots when monitoring maritime traffic. By capturing and analyzing RF signals emitted by onboard systems, the Unseenlabs solution provides a more comprehensive and reliable overview of maritime traffic.

The session will consist of a demonstration of Unseenlabs services through a presentation of use cases and a showcase of the technology, using the internal visualisation tool to better understand the use of RF data.
The ongoing technological development of the service with the arrival of the next generation of satellites will be shared. The extended services will be introduced with appropriate use-cases as well.

Speaker:


  • Rosario Ruiloba Quecedo - Unseenlabs

Wednesday 25 June 17:37 - 17:57 (EO Arena)

Demo: C.02.23 DEMO - CryoSat Companion: A LLM Support Tool for Cryospheric Science

CryoSat Companion is a custom-built large language model (LLM), developed on top of ChatGPT and tailored for cryospheric science. Whether you're a researcher or just someone interested in how we monitor the changing climate, CryoSat Companion helps you explore the world of satellite altimetry and the cryosphere. It provides an engaging, conversational way to learn about Earth’s ice-covered regions and how they are changing, with insights derived from CryoSat-2 land ice elevation measurements.

Speakers:


  • Carolyn Michael - Earthwave Ltd
  • Nicola Sorace - Earthwave Ltd

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: F.04.01 - POSTER - Earth Observation for European Agricultural Policies

Earth observation technology is increasingly important in the formulation and implementation of European agricultural policies such as the Common Agricultural Policy (CAP). By utilizing satellite imagery, policymakers gain valuable insights into various aspects of agriculture, including land use, crop health, and environmental impact. These data allow for monitoring crop growth, assessing soil conditions, and identifying areas in need of conservation or restoration. Moreover, Earth observation helps in implementing sustainable agricultural practices, optimizing resource management, and mitigating the effects of climate change on agriculture. By integrating this technology into policy-making processes, European agricultural policies can become more effective, evidence-based, and responsive to the evolving needs of farmers and the environment.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Advancing European Agricultural Policies through Innovative Earth Observation Solutions by Planet

Authors: Mr Diego Vanelli, Marta Longo
Affiliations: Planet Labs
The implementation of the Common Agricultural Policy (CAP) requires robust, efficient, and timely monitoring systems to support National Paying Agencies (NPAs) in ensuring compliance, promoting sustainable agricultural practices, and optimizing resource allocation across Europe. Planet, following its strategic integration of Sinergise, is at the forefront of delivering cutting-edge Earth Observation (EO) solutions that significantly enhance the capabilities of NPAs in implementing European agricultural policies.

Daily High-Resolution Monitoring with PlanetScope: Planet's constellation of satellites, known as PlanetScope, offers unprecedented daily, high-resolution satellite imagery of the Earth's landmass. This daily revisit capability is transformative for agricultural monitoring, providing NPAs with near real-time insights into farming activities. The high spatial resolution allows for detailed observation of individual fields, enabling the detection of subtle changes in land use and crop conditions. Continuous data streams from PlanetScope empower NPAs to monitor compliance with CAP regulations more effectively. Timely detection of unauthorized land conversions, adherence to crop diversification requirements, and assessment of ecological focus areas become feasible. The daily imagery reduces the risk of missing critical events due to cloud cover or less frequent satellite passes, ensuring comprehensive monitoring.

Advanced Markers and Analytics by Sinergise: Building upon PlanetScope's rich dataset, Sinergise has developed sophisticated algorithms and markers that extract actionable information. These markers identify specific agricultural events and conditions, such as sowing dates, crop emergence, growth stages, harvesting activities, and crop type classification through spectral analysis. Automating the detection of these events significantly reduces the need for labor-intensive field inspections. Sinergise's markers utilize machine learning techniques and advanced image processing to analyze temporal patterns and spectral signatures associated with different crops and practices. This automation enhances NPAs' efficiency, allowing strategic allocation of resources and targeted field visits when necessary.

Agricultural Monitoring System (AMS) Platform: Sinergise has also developed a comprehensive Agricultural Monitoring System (AMS) platform that integrates PlanetScope data, advanced markers, and other relevant datasets into a cohesive, user-friendly interface. The AMS platform is tailored to NPAs' specific needs, offering tools for data management, analysis, visualization, and reporting.

Enhancing CAP Implementation and Supporting Sustainable Agriculture: The combined capabilities of PlanetScope imagery, Sinergise's markers, and the AMS platform provide NPAs with a powerful toolkit for effective CAP implementation. These innovations contribute to:
- Improved Compliance Monitoring: Automated detection of agricultural practices and land use changes enables more efficient enforcement of CAP regulations.
- Resource Optimization: By reducing reliance on extensive field inspections, NPAs can allocate resources strategically, focusing on areas identified through EO data analysis.
- Data-Driven Decision Making: Access to high-quality, timely data supports better decision-making processes, policy adjustments, and development of targeted support measures for farmers.
- Environmental Sustainability: Enhanced monitoring promotes sustainable farming practices, biodiversity conservation, and reduced environmental impacts, aligning with CAP's environmental objectives.

Practical Applications and Benefits: Several NPAs have begun implementing Planet and Sinergise's solutions, witnessing significant improvements in monitoring capabilities. For instance, NPAs have successfully used the AMS platform to detect non-compliant land use changes promptly, leading to timely interventions and reduced instances of fraud. The markers developed by Sinergise have enabled accurate crop type classification, assisting in verifying farmer declarations and facilitating fair subsidy distribution. Farmers also benefit from these innovations. Access to accurate, timely information helps them optimize agricultural practices, comply with regulations, and improve yields. The transparency and efficiency introduced by these technologies foster better relationships between NPAs and the farming community.

Conclusion: The integration of Sinergise's technologies with Planet's satellite imagery represents a significant advancement in applying EO for European agriculture. The synergy between daily high-resolution imagery, advanced data processing, and comprehensive platform solutions equips NPAs with the tools necessary to implement CAP objectives more effectively. These innovations streamline compliance monitoring and support broader goals of promoting sustainable agriculture and environmental stewardship across Europe. Leveraging Planet and Sinergise's strengths positions NPAs to address modern agricultural policy implementation challenges. This collaborative effort underscores the transformative potential of EO technologies in enhancing policy effectiveness, supporting farmers, and contributing to the agricultural sector's sustainability.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Detection and Classification of Nitrogen Fertilized Fields Using Sentinel-2 Imagery: an AI-Based Approach With Comparative Reflectance Analysis of Fertilizer Types

Authors: Sara Romagnoli, Beatrice Gottardi, Enrico Baglione
Affiliations: CGI, Istituto Nazionale di Geofisica e Vulcanologia
Nitrogen plays a critical role in plant growth, being a key constituent of chlorophyll, amino acids, nucleic acids, and proteins essential for development. However, nutrient levels in soils decline after each harvest, necessitating the application of fertilizers to maintain agricultural productivity. Nitrogen fertilizers replenish soil nitrogen, enabling crop growth, but their effectiveness depends on precise management. Over-application can harm plants, reduce yield, and lead to environmental degradation. Farmers must strike a balance to maximize yield while minimizing waste and ecological damage. The environmental, climatic, and health impacts of nitrogen use have prompted stringent regulations by the European Union and national governments, which specify the timing and amounts of nitrogen applications. Currently, enforcement relies on citizen reports and costly, labor-intensive on-site inspections, which are highly inefficient. This paper aims to address these challenges by presenting the first outcomes of a study leveraging Sentinel-2 satellite imagery and spectral indices to detect and classify nitrogen-fertilized areas. Artificial intelligence was used to train a model capable of identifying nitrogen fertilizer application and mapping its spatial distribution over the inspected area. For the first time ever, the possibility of identifying different types of nitrogen fertilizers based on their unique reflectance spectra was explored: a crucial distinction as nitrogen content varies significantly by fertilizer type. The project, launched in May 2024, has achieved significant results over its study area, the Emilia Romagna region of Italy, demonstrating high potential for further model calibration and adaptation to other global regions.
The final goal is to develop a near-real-time system able to precisely evaluate nitrogen application rates and detect the timing and locations of fertilizer use, enabling authorities to monitor compliance and farmers to optimize fertilizer use. This innovation could enhance agricultural productivity, reduce environmental impact, protect human health, and lower greenhouse gas emissions, fostering sustainable farming practices worldwide.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Spectral Signatures of Sustainability: Using Sentinel-2 to Monitor Soil Management Practices

Authors: Hakki Emrah Erdogan, Philippe Loudjani
Affiliations: European Commission - Joint Research Center
The intersection of sustainable land management, the European Green Deal, and the Common Agricultural Policy (CAP) calls for a reshaping of agriculture towards sustainable practices. This study explores the use of Sentinel-2 satellite imagery to monitor sustainable soil management under the EU’s Common Agricultural Policy (CAP). Sentinel-2’s multispectral capabilities were leveraged to track tillage/non-tillage practices, supporting CAP's strategic goals and advancing the EU’s commitment to sustainability and carbon neutrality. Key spectral indices, including the Normalized Difference Vegetation Index (NDVI), Normalized Difference Infrared Index (NDII5), Normalized Difference Tillage Index (NDTI), and Normalized Difference Water Index (NDWI), were used to assess vegetation health, soil moisture, and surface characteristics. Ground-truth data from Denmark and Spain validated the methodology, and temporal filtering improved accuracy in detecting tillage practices. Results showed that Sentinel-2 data significantly enhanced agricultural land monitoring within the IACS and LPIS frameworks, supporting CAP implementation and promoting soil health and carbon farming. This research demonstrates the transformative potential of Sentinel-2 for sustainable agriculture, aiding EU member states in achieving CAP objectives and advancing the European Green Deal’s environmental goals.
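The spectral indices named above are all simple normalized band differences; as an illustrative sketch (the band choices follow common Sentinel-2 conventions and are an assumption here, not necessarily the authors' exact configuration):

```python
def normalized_difference(a: float, b: float) -> float:
    """Generic normalized difference (a - b) / (a + b), guarded against zero."""
    return (a - b) / (a + b) if (a + b) != 0 else float("nan")

# Assumed Sentinel-2 band mapping (common convention, not from the abstract):
#   B03 = green, B04 = red, B08 = NIR, B11 = SWIR1, B12 = SWIR2
def ndvi(b08, b04):   # vegetation vigour
    return normalized_difference(b08, b04)

def ndti(b11, b12):   # tillage / crop-residue signal
    return normalized_difference(b11, b12)

def ndwi(b03, b08):   # surface water / moisture
    return normalized_difference(b03, b08)
```

In practice these would be computed per pixel on atmospherically corrected reflectance rasters and then filtered over time, as the abstract describes.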
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Crops classification algorithms comparison to support the CAP controls

Authors: Enrico Chiesa, Samuele De Petris, Alessandro Farbo, Filippo Sarvia, Enrico Borgogno Mondino
Affiliations: Università degli Studi di Roma "La Sapienza", Università degli Studi di Torino, Food and Agriculture Organization of the United Nations (FAO)
In the Common Agricultural Policy (CAP) framework, paying agencies must check the consistency between farmers’ declarations, provided to obtain subsidies, and the crops actually present in the field. The control process is usually performed by checking a small sample of fields by ground surveys. However, remote sensing allows the monitoring of the entire declared area while saving time and money, allowing ground surveys to focus on areas that are incorrectly detected or not monitorable due to the geometric resolution of the satellite data used in the process. In this context, the European Commission has recognised the Copernicus Sentinel-2 (S2) satellite mission as one of the main tools to support the CAP controls. S2 data allow for the computation of spectral indices, such as the Normalised Difference Vegetation Index (NDVI), which can be analysed over time to monitor the phenological development of different crops. The NDVI multi-temporal profiles can also be used within appropriate classification algorithms (supervised and unsupervised) with the aim of identifying the crops present in the field. In this work, S2 NDVI images covering the period between the 1st of November 2023 and the 30th of October 2024 were collected over the Piemonte region (NW Italy). Ground reference data containing crop type information were provided by the Agenzia Piemontese per le Erogazioni in Agricoltura (ARPEA) and were used to train and validate the crop map classifications. Four crop macro-categories were defined, including permanent meadows, winter, summer and double crops (two different crops that are grown and harvested sequentially within the same field during a single calendar year). The classification was performed using different algorithms. In particular, supervised classifiers such as minimum distance, random forest and artificial neural network were compared to a rules-based classifier involving phenological metrics derived by NDVI profile analysis.
The performances of each algorithm were evaluated using confusion matrices derived from the comparison between the classification maps and the validation set provided by ARPEA. Moreover, further analysis was made using the conditional kappa value and a Z-test to assess the significance of differences among classifiers with increasing computational complexity. Finally, an inference test was conducted over a new study area where the four algorithms were never trained before, to investigate the generalization capability of the selected classifiers. Accordingly, classifying macro-categories of crops can be the first step in the mapping process, simplifying subsequent crop type identification, especially in complex contexts with frequent crop rotations during the growing season. Future developments are expected to use these macro-categories as masks to support the generation of detailed crop type maps.
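The rules-based classifier mentioned above works from phenological metrics of the NDVI profile; a toy sketch of that idea follows (the metrics and thresholds are illustrative placeholders, not the study's calibrated rules):

```python
def phenology_metrics(ndvi, doy):
    """Simple phenological metrics from a parcel-level NDVI profile,
    given NDVI values and matching day-of-year acquisition dates."""
    peak = max(range(len(ndvi)), key=lambda i: ndvi[i])
    # Count distinct vegetation peaks (interior local maxima above 0.5).
    n_peaks = sum(
        1 for i in range(1, len(ndvi) - 1)
        if ndvi[i] > ndvi[i - 1] and ndvi[i] > ndvi[i + 1] and ndvi[i] > 0.5
    )
    return {"peak_doy": doy[peak],
            "amplitude": max(ndvi) - min(ndvi),
            "n_peaks": n_peaks}

def classify_macro_category(metrics):
    """Toy decision rules for the four macro-categories."""
    if metrics["n_peaks"] >= 2:          # two growth cycles in one year
        return "double crop"
    if metrics["amplitude"] < 0.3:       # low seasonality
        return "permanent meadow"
    # Single-peak crops split by timing of maximum greenness.
    return "winter crop" if metrics["peak_doy"] < 150 else "summer crop"
```

A winter cereal, for instance, peaks in spring (low day-of-year), while maize peaks in summer, which is what the timing rule exploits.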
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Multi-frequency SAR Time Series for the Detection of Sowing Events and Early-Season Crop Classification

Authors: Basile Boland, Tom Kenda, Jean Bouchat, Pierre Defourny
Affiliations: Earth and Life Institute, Université catholique de Louvain, European Space Agency (ESA), European Space Research Institute (ESRIN)
In the context of the sustainable development goals driving 21st-century debates, agriculture faces the challenge of reducing environmental impact while meeting global consumption needs. Tillage activities influence various terrestrial dynamics, including soil erosion, runoff, carbon sequestration and water and carbon dioxide exchange. Monitoring tillage events on a regional scale is therefore essential to facilitate the transition to more sustainable agricultural systems. Synthetic aperture radars (SARs), such as Sentinel-1 and TerraSAR-X, are particularly sensitive to soil roughness in agricultural fields and can retrieve this information despite cloud cover. The objective of this study is to assess the individual and combined contributions of X-band and C-band SAR data for sowing event detection and early-season crop classification for the most common spring crops grown in Wallonia (Belgium), i.e., beet, potato, and maize. Along with SAR acquisitions, in situ data were collected during a field campaign carried out at the beginning of the 2021 growing season. Between March 26th and May 31st, 166 parcels were visited 7 times, with an average of 11 days between each visit to match the revisit time of TerraSAR-X. At each visit and for each plot, the soil cover, i.e. bare, covered, presence of crop residues, and its roughness, i.e. shallow or deep furrows, crop mounds, or seedbed, were recorded. A sowing period detection method based on VV coherence drop showed promising results in highlighting homo- and heterogeneity in the spatial (regional mapping) and temporal distribution of sowing events for the studied crops. Although the available validation data for sowing detection has been limited, expanding the study to a regional and national scale within the context of the BELCAM (BELgian Collaborative Agriculture Monitoring) project will enable greater validation opportunities.
Despite these limitations, the results provided valuable insights into crop classification, revealing that the sowing period for each crop type significantly influences classification performance when using C-band or X-band data. More globally, Random Forest classifications have primarily highlighted three key parameters that contribute to better classifications: the necessity of a roughly balanced dataset, the significance of the sowing period, and the critical role of high temporal resolution. While Sentinel-1 outperformed TerraSAR-X for beet and potato field classification, the best overall results were achieved using a combination of both sensors' acquisitions, which leveraged the small-scale roughness sensitivity of X-band data and the sensitivity to sub-surface soil layers of C-band data. In this context, beet fields were accurately classified in early April, potato fields in late April, and maize fields in the final weeks of May. This underscores the importance of utilizing multi-frequency information to accurately classify selected crops as early as possible in the growing season. Sowing event detection and early-season crop classification act as valuable tools for assisting farm advisory services and agricultural insurance companies, improving or building new agricultural policies, running yield or nitrogen uptake models, etc. The results also offer promising perspectives for further improvement and analysis in the monitoring of tillage practices. Further efforts targeting the mapping of tillage or no-tillage practices could improve reporting of greenhouse gas emissions at a regional level, better accounting for the diversity of farming practices, as well as the annual assessment of farmers' efforts to move towards conservation agriculture. These results could also be used by public administrations as part of the Common Agricultural Policy land monitoring system.
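The VV coherence-drop detection of sowing periods described above can be sketched in a few lines (the drop threshold is a placeholder, not the study's calibrated value):

```python
def detect_sowing_windows(coherence, dates, drop=0.2):
    """Flag acquisition intervals where repeat-pass VV interferometric
    coherence falls sharply between consecutive dates, a proxy for
    tillage/sowing disturbance of the soil surface.

    coherence: per-parcel coherence time series (0..1)
    dates:     acquisition dates aligned with the series
    drop:      minimum coherence loss to flag (illustrative threshold)
    """
    windows = []
    for i in range(len(coherence) - 1):
        if coherence[i + 1] - coherence[i] <= -drop:
            windows.append((dates[i], dates[i + 1]))
    return windows
```

The detected window only brackets the sowing event between two acquisitions, which is why the revisit time of the sensor bounds the temporal precision.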
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Delineation of management zones for the application of fertilizers in the environment of an innovative precision farming approach based on remote sensing data in Brandenburg, Germany

Authors: Larissa Torney
Affiliations: Deutsches GeoForschungsZentrum GFZ
The focus of the project “Digimix-PA” lies on precise fertilization practices, where organic and mineral N-fertilizers are combined. The nutrient application needs to be adapted in space and time, dependent on the uses and absorption capacity of the plants. Therefore, the ingredients of the fertilizers, the current nutrient supply and the capacity of the soil need to be known in advance. To apply the exact amount of fertilizer with regard to the current conditions in the field, it is necessary to divide the field into different management zones. These zones represent different soil and potential growth conditions of the vegetation, which gives an indication for adapting upcoming management practices, especially fertilization. To achieve this, we combined satellite-based, phenology-dependent time-series data of vegetation growth patterns with proximal sensing data from a multi-spectral soil-sensor platform acquired in September 2022. The satellite data were acquired with the Sentinel-2 multispectral sensor over the last five years. With its spatial resolution of 10 m and a relatively high repetition rate, the datasets were appropriate for a 17-ha field-based analysis. The input datasets were combined with a multi-step clustering algorithm for generating different zones. One zone represents uniform growing conditions, where zone-dependent precise field management practices can be adapted. The clustering algorithm is based on a hierarchical clustering approach for the individual input datasets, which are combined by a consensus clustering algorithm, resulting in different management zones. The validation of the management zones is conducted by a statistical comparison between the expressions of the underlying input datasets per cluster. Therefore, a higher deviation of the values for the entire field is assumed, compared to the deviation of the values for one cluster.
The management zones are interpreted as a recommendation for farmers, which serve the purpose to adapt their management practices within the framework of the existing possibilities.
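The consensus step of the multi-step clustering described above can be illustrated with a minimal evidence-accumulation sketch (pure Python and greatly simplified; the project's actual pipeline operates on Sentinel-2 time series and soil-sensor rasters, and its exact consensus algorithm is not specified here):

```python
def coassociation(labelings):
    """Evidence-accumulation step: for each pair of pixels, the fraction
    of the input clusterings in which the pair shares a cluster."""
    n = len(labelings[0])
    co = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    co[i][j] += 1.0
    m = len(labelings)
    return [[v / m for v in row] for row in co]

def consensus_zones(labelings, threshold=0.5):
    """Greedy cut of the co-association matrix into management zones:
    pixels that co-cluster in at least `threshold` of the inputs are
    merged into one zone."""
    co = coassociation(labelings)
    n = len(co)
    zone = [-1] * n
    next_zone = 0
    for i in range(n):
        if zone[i] == -1:
            for j in range(n):
                if zone[j] == -1 and co[i][j] >= threshold:
                    zone[j] = next_zone
            next_zone += 1
    return zone
```

Each input labeling would come from a hierarchical clustering of one dataset (e.g. one NDVI time series or one soil-sensor layer), and the consensus zones are the areas where those independent clusterings agree.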
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Enhancing Crop Monitoring through Crop-specific Copernicus Sentinel-2 data: Insights from the JRC MARS Bulletin Use Cases

Authors: Martin Claverie, Mattia Rossi, Fernando Sedano, Julien Morel, Maurits Vandenberg, Giacinto Manfron, Lorenzo Panarello, Stefan Niemeyer
Affiliations: European Commission, Joint Research Centre (JRC)
The launch of the Copernicus Sentinel-2 mission promised a shift towards crop-specific agricultural monitoring using remote sensing, moving away from coarse-resolution data for monitoring the Earth. However, a decade later, it has not yet revolutionized the field. Barriers such as the lack of a long-term legacy and the absence of in-season crop type maps impede its full integration into crop monitoring systems. Nevertheless, as the 2024-2025 crop season marks the satellite's tenth observational season, we are entering a new era, ready to leverage its long-term analytical potential. Major continental or national crop monitoring systems have not yet adopted Sentinel-2 data operationally. The JRC MARS activity, which is the European Commission's official crop monitoring platform, faces the same constraints. The monthly JRC MARS Bulletin, which supports the Common Agricultural Policy and ensures market transparency, relies only on coarse-resolution remote sensing data unsuited for crop-specific monitoring. However, starting in 2022, irregular ad-hoc Sentinel-2 analyses began to enhance its monitoring capabilities. These enhancements are exemplified by several use cases presented below: (i) Regional-based Leaf Area Index (LAI) and Fraction of Absorbed Photosynthetically Active Radiation (fAPAR) time series in Ukraine (2022-2024 seasons) using early-season, pixel-based crop classification, and in the Netherlands (September and October 2024) utilizing the Geospatial Aid Applications (GSA) from farmer declarations. In both cases, the methodology involved filtering and smoothing parcel-level time series, followed by regional aggregation. (ii) Detection and timing of rapeseed flowering across parcels in Brandenburg state, Germany, for the 2024 season. Given the exceptional early flowering this year, we used a threshold-based approach on a pixel basis to determine the yellowing of fields, leveraging the previous year's GSA dataset.
Compared with some ground observations of early flowering, the full flowering stage was detected reliably with a realistic lag of one to two weeks, showing great potential for effectively detecting this phenophase. (iii) Winter cereals and rapeseed crop area estimation based on early-season winter crop detection in Russian-occupied territories of Ukraine (2023 and 2024 seasons). These use cases have been presented in several issues of the JRC MARS Bulletin since 2022 (visible at https://joint-research-centre.ec.europa.eu/monitoring-agricultural-resources-mars/jrc-mars-bulletin_en). They highlight the potential of integrating high-resolution satellite data into agricultural monitoring systems for more crop-specific and accurate monitoring. While current applications remain qualitative, they also demonstrate the potential of Sentinel-2 imagery to complement coarse-resolution earth observation systems and enhance important aspects of the quantitative analyses, ultimately contributing to more accurate yield modelling.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Drought Adaptation Strategies Through a Multi Data Approach

Authors: Valeria Gentile, Manuela Balzarolo, Odunayo David Adeniyi, Léa Duran, Athanassios Argiriou, Miltos Anastasiadis, Matteo Colombo, Juan Charneco, Lelia Ataliani, Luisa Pinto, Ines Camilo, Noemi Parada Santiago, Marta Santos, Mariano Corzo
Affiliations: CMCC, Atos, University of Patras, Dotsoft, Ubitel, Infotrend Innovations Company, EDIA, Junta de Andalucía
Satellite images have long been used for environmental monitoring due to their synoptic capability to cover wide areas in a short time and to collect data over the long term. Furthermore, the availability of ground truth data and the advancement of machine learning tools have made forecasting and prediction of environmental indicators and parameters possible, using a multi-data (remotely sensed and ground based) approach. This information is useful to improve preventive management of natural risks, in order to mitigate their effects on the environment. Among natural hazards, drought is one of those most afflicting the Mediterranean area, impacting people's health, agriculture and, consequently, the economy. In the last decades, the frequencies of occurrence and intensities of droughts have increased, and the ever-growing requirement for water resources and the compound uncertainty of hydroclimatic factors aggravate the impact of droughts on agro-ecosystems. Recent research has increased understanding of droughts and led to many different definitions of drought: meteorological, hydrological, agricultural and socio-economic. In Germ of Life, a project co-funded by the Interreg Euro-MED Programme, we study the impact of drought on vegetation growth and productivity, the so-called agricultural drought, with the aim of developing a drought vulnerability assessment tool in cooperation with stakeholders in Portugal, Spain, Italy and Greece. This tool is based on the use of Earth observation and climate data and machine learning algorithms to calculate compound indicators of multifaceted (meteorological, hydrological, agro-ecological) droughts and their impacts on the soil-vegetation-water processes and resources. Forecasting of selected key drought indicators is done using machine learning algorithms and forecasted meteorological variables available from climate services (e.g. C3S Copernicus).
The ultimate goal of this work is to promote drought adaptation strategies, guiding the actions of stakeholders and policy makers in the short term, based on the prediction of meaningful indicators and constant monitoring of natural ecosystems.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: NEOCAP

Authors: Patryk Jaskula
Affiliations: Airbus
Supporting agriculture has been at the forefront of Airbus’s Earth Observation activities. For three decades, since the very beginning of commercial Earth Observation, the CAP has been one of the first systematic uses of EO data, in particular the SPOT family for HHR image analysis. As Europe's leading satellite operator and VHR data provider, we pride ourselves on efficiency in assigned CAP coverage, hitting a 100% success rate in recent years and ensuring effective coverage in demanding conditions. From scattered acquisitions to full country coverage, our reactive tasking and frequent revisit capability guarantee data supply throughout the entire agricultural cycle. Thanks to the advancement of the Pleiades Neo constellation, the European agricultural community can benefit from 30 cm native resolution, RGB, Near-infrared, and Red-Edge spectral bands, giving more accurate crop development information for precision agriculture, as well as Leaf Area Index readings to quantify foliage development and chlorophyll content to assess crop nutrition and health status, and to better monitor forests. Relying on Airbus data, users can also detect missing vines within vineyard rows to meet regulatory specifications, extract mapping of field boundaries, identify Good Agricultural & Environmental Conditions (GAECs), and perform preliminary characterization of the surrounding environment of arable fields (e.g., hedges, tree lines, and small walls).
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Assessing pan-European crop diversity and rotation using the High Resolution Layers on Crop Types from 2017 to 2021

Authors: Momchil Yordanov, Laura Martinez-Sanchez, Martin Claverie, Melissande Machefer, Raphaël d’Andrimont, Marijn van der Velde
Affiliations: Joint Research Center
Crop diversity is a landscape descriptor, which estimates how crops are distributed through space or time for a given area unit and/or time period. When reaching a sufficient threshold, it is associated with beneficial impacts on biodiversity and ecosystem services provision, soil health, yield stability, and the overall resilience of food systems. Whether spatial or temporal, estimating crop diversity can provide important information for policy-makers. Spatial crop diversity (alpha, beta, gamma) is computed through analysis of the area-weighted crop distributions for a given area unit, while temporal diversity considers the sequence of crops cultivated across a reference area during the study period (e.g. pixel or parcel level sequences). We leverage data from the Copernicus Land Monitoring Service (CLMS) High Resolution Layers (HRL) Vegetative Land Cover Characteristics (VLCC) for 19 Crop Types (CTY), covering the EEA38 to provide the first wall-to-wall estimate of the state of crop diversity at such an extensive spatial and temporal scale. For the spatial aspect area counting is performed separately for a regular grid with cell size 500x500m and for the EU-27 Local Administrative Units (LAUs) and results are further aggregated through both grid (1km, 2km, 5km, 10km, 20km, 50km, 100km) and administrative regions (NUTS-3 to NUTS-0). Temporal diversity is observed on a per-pixel basis and is calculated for each unique sequence; these are later aggregated through the administrative levels. Predominant management relevant crop rotation cycles can be deduced from these crop sequences. When considering gamma spatial diversity, we found that 45.5 % of EU Member States had their maximum in 2018, while 36.4% had their minimum in 2019. At a resolution of 10 km grid cell size, we found that gamma diversity remained stable for 83.1% of the EEA38, while it decreased for 12.1%, and increased for the remaining 4.8% between 2017 and 2021. 
The drivers behind these observed patterns need to be explored carefully. Regarding crop rotation, where we only consider the main crop, we found that the most common rotations and associated diversity are, akin to their spatial counterpart, scale-dependent. The most common rotations are between winter cereals such as barley and rye, and maize. Excluding permanent crops, a total of 13.8% of the EEA38 area is covered by monocultures (i.e. a single crop cultivated for all 5 years). The crop that is most often observed as a monoculture across the EU/EEA38 is wheat closely followed by maize. A major strength of the Copernicus HRL VLCC CTY is that it provides an annual, harmonized, and spatially consistent crop type product for a relevant set of years from 2017 to 2021, soon to be updated up to 2024. This is a prerequisite when considering making such crop diversity or rotation data operational in the future. In an agri-environmental policy context, these considerations also need to take into account other data sources that can provide such information (e.g. farmers’ declarations). Elsewhere, we (Claverie et al.) evaluate the overall quality of the HRL Crop Types in terms of crop classification accuracy.
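The area-weighted spatial diversity described above can be illustrated with a Shannon-index sketch (one plausible diversity metric for a grid cell or administrative unit; the exact index used for the HRL CTY analysis is not specified here):

```python
import math

def shannon_diversity(crop_areas):
    """Area-weighted Shannon diversity H = -sum(p_i * ln p_i), where p_i
    is the share of the unit's cropped area occupied by crop type i.

    crop_areas: mapping crop name -> area within the reference unit.
    H = 0 for a monoculture; H grows with both the number of crops and
    the evenness of their area shares.
    """
    total = sum(crop_areas.values())
    return -sum(
        (a / total) * math.log(a / total)
        for a in crop_areas.values() if a > 0
    )
```

Aggregating such per-cell values upward (1 km, 10 km, NUTS regions) is what makes the reported diversity figures scale-dependent.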
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Integration of Satellite Technologies for Agricultural Land Classification (STALC)

Authors: Ivana Puškarić, Dragan Divjak, Luka Stemberga
Affiliations: LIST LABS LLC
The STALC (Integration of Satellite Technologies for Agricultural Land Classification) project focuses on the use of advanced satellite technologies and machine learning to enhance and automate the classification of agricultural land. Agricultural land is a critical resource for food production, ecological balance, and economic development. By integrating data from satellite missions such as Sentinel-1 and Sentinel-2 with field measurements of soil quality, climate, and terrain relief, the project aims to support sustainable resource management, spatial planning, and the preservation of high-value agricultural land. This approach reduces the need for traditional field inspections. This methodology seeks to provide a centralized tool for characterizing agricultural land, reducing analysis costs, ensuring sustainable agriculture, and increasing control over natural resources, not only in Croatia but throughout the European Union. Field visits by experts for rating assessments should not be neglected or discontinued; this model proposes the use of field samples as control points and training data to enhance the generated machine learning models. The project introduces an innovative system that applies machine learning to analyze satellite data and generate classification models that exceed the capabilities of traditional methods. For soil evaluation, pedological (soil) maps will be used as the main source of variables not observable remotely, while NDVI and NDMI will be used mainly to describe soil moisture and land cover. Variables such as precipitation and temperature will be extracted from climate models. By incorporating meteorological and terrain data, the system also considers factors like flood susceptibility and drought conditions. Results will be visualized through an interactive web platform, providing stakeholders, including farmers and policymakers, with accessible and practical tools for land management.
The primary goal is to develop a scientifically advanced system for evaluating agricultural land quality to aid informed decision-making in spatial planning and resource development. The project focuses on creating and validating a prototype classification model, tested in pilot areas of Croatia’s Istrian Peninsula, and complemented by an interactive map for real-time data visualization. This promotes sustainable land management and improved agricultural practices. Key phases include data collection, model development, and validation. Field and meteorological data are combined with satellite observations to train machine learning algorithms capable of analyzing soil fertility, terrain features, and climate conditions. The validated model aims to achieve over 90% precision in detecting key land features, with results presented via an intuitive mapping interface to support spatial planning and policy development. The ultimate goal is to reach TRL 3: a functional remote classification method capable of classifying and evaluating agricultural land. To integrate all the data sources in machine learning, an approach based on boosting (XGBoost) or a convolutional neural network (CNN) will be applied. In collaboration with the City of Poreč and the Institute of Agriculture and Tourism, STALC is expected to provide innovative tools for sustainable land management with potential applications at both national and international levels. The system improves efficiency, reduces land analysis costs, and protects high-value agricultural areas from degradation. By offering accurate, accessible data, the project supports better decision-making, contributing to the sustainability of agriculture and alignment with European policies such as the European Green Deal (EGD) and the EU's Common Agricultural Policy (CAP).
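A minimal sketch of the boosting approach outlined above, fusing per-parcel satellite indices with soil and climate variables in a single feature table. Feature names, class labels, and data are illustrative assumptions, not STALC's actual inputs; scikit-learn's gradient boosting stands in for the proposed XGBoost.

```python
# Hedged sketch: gradient boosting on fused satellite + soil + climate
# features for land-quality classification. All names/values are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 500
# Assumed columns: mean NDVI, mean NDMI, annual precipitation, mean
# temperature, slope, soil-quality score from pedological maps.
X = rng.normal(size=(n, 6))
# Synthetic land-quality label driven by vegetation and soil terms.
y = (X[:, 0] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)
print(f"holdout accuracy: {accuracy_score(y_te, model.predict(X_te)):.2f}")
```

In practice the field samples mentioned in the abstract would supply the training labels `y`, with the satellite and climate rasters aggregated per parcel to build `X`.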
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Dashboard Service Supporting Agricultural Decision-Making Based on Satellite and In-Situ Data

Authors: Stefan Brand
Affiliations: EOX IT Services GmbH
The EOX AgriApp is a flexible dashboard service for continuous remote sensing analysis of agricultural parcels or, more generally, any monitoring areas of interest. It visually presents satellite information such as vegetation profiles of different satellite spectral signals, interactively linked to image time series down to the parcel level—even allowing for comparison between parcels. Value-added derived information such as machine learning- or threshold-based markers (crop type, harvest events, mowing events, vegetation cover changes, …) is available as chart annotations and thematic map layers. A compliance rule engine evaluates specific parcel-level requirements and provides customized thematic map visualizations. The service is rounded off by a high-performance full-dataset search engine with dynamic filter combinations and interactive charts, allowing for detailed analysis of machine learning results and compliance rule engine outcomes. Usage areas for the EOX AgriApp include—but are not restricted to—Common Agricultural Policy implementation, drought and irrigation monitoring, CO2 flux visualization and quantification, and natural capital and biodiversity monitoring. In the context of the CAP Area Monitoring System, the EOX AgriApp has been in successful operative use within the Austrian and Irish Paying Agencies since the beginning of 2023, serving near real-time insights for millions of parcels. Initial development of the EOX AgriApp was partially funded by the ESA InCubed programme within the EO-WIDGET project.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Towards Biodiversity Restoration in Agricultural Landscapes: Hedgerow Mapping and Analysis in Bavaria

Authors: Verena Huber Garcia, Dr. Sarah Asam, Johanna Buchner, Michael Stellmach, Dr. Ursula Gessner
Affiliations: German Aerospace Center (DLR), Bayerisches Landesamt für Umwelt (LfU)
Agriculture in Europe is regulated under a comprehensive framework of policies designed to ensure food security, environmental sustainability, rural development, animal welfare and economic viability for farmers. Building on existing regulations, the newly adopted EU Nature Restoration Law, introduced in February 2024, places an even greater emphasis on agricultural ecosystems. To enhance biodiversity, EU member states must demonstrate progress in indicators such as the share of agricultural land with high-diversity landscape features, which include hedgerows and other woody features, field margins, ponds and stone walls, among others. To assess the status quo and monitor changes over time at the national scale, spatially explicit data need to be collected and updated regularly. Traditional methods, such as field observations or reporting, are still necessary but no longer sufficient to meet this demand. In this context, Earth Observation technologies offer a powerful solution for assessing environmental indicators across large areas efficiently. Hedgerows and other woody vegetation features are widely recognized as key components of high-diversity landscapes. They constitute important habitats for flora and fauna and provide forage, shelter and breeding grounds for many species. In addition, their importance for biodiversity is closely tied to their distribution and interconnectivity, as they function as migration corridors for numerous species. Beyond biodiversity, hedgerows provide important ecosystem services including erosion control and regulation of the microclimate and of the nutrient, carbon, and water cycles. In this study, we examine the heterogeneity of agricultural landscapes in Bavaria, Germany, with a focus on the occurrence and distribution of hedgerows.
We integrate a wall-to-wall hedgerow map derived from orthophotos by means of a Convolutional Neural Network, which provides information on the precise location and extent of hedgerows, with other sources such as the High-Resolution Layer (HRL) Small Woody Features (SWF) layer from the Copernicus Land Monitoring Service as well as crop type maps derived from Sentinel-1 and Sentinel-2 imagery. These datasets enable spatial and statistical analyses to assess regional variations of landscape heterogeneity. Our findings reveal significant regional differences, identifying areas with a high density of woody landscape features and strong habitat connectivity. At the same time, we also pinpoint regions that would profit from targeted interventions, such as the planting of hedgerows, to meet the goals outlined in the new EU Nature Restoration Law. These insights underline the importance of hedgerow conservation and restoration for promoting biodiversity in agricultural landscapes and provide valuable guidance to policy makers for achieving broader environmental objectives.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Identifying summer and winter crops with Sentinel-2 data for catchment-level nutrient runoff modelling

Authors: Emilie Susi, Evelyn Uuemaa
Affiliations: Department of Geography, Institute of Ecology and Earth Sciences, University of Tartu
Detection of summer and winter crops using satellite imagery is an important step in environmental monitoring, helping to assess the agricultural impact on environmental conditions and nutrient runoff risk. During the autumn-to-spring period, summer crops leave soil exposed, increasing susceptibility to erosion and nutrient leaching. Identifying areas covered by summer crops is, therefore, a pertinent indicator for modelling nutrient runoff at the catchment level. The aim of the study is to develop an effective and accurate method for distinguishing between summer and winter crops, utilizing various remote sensing data and spectral indices. The methodology primarily utilizes satellite data, such as Sentinel and Landsat, which enable high temporal and spatial resolution monitoring of land cover. The study area is an approximately 5x5 km region in Southern Estonia, where crop type was identified in all agricultural fields in autumn and spring. In addition, previous years’ Estonian Agricultural Board registry data on crop type were used for validation. Vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and the Bare Soil Index (BSI), were calculated in Google Earth Engine. Two approaches were used: 1) a threshold-based approach, in which a BSI threshold was identified that classifies each field as either summer or winter crop; and 2) a Random Forest machine learning model to classify crops. The study also incorporates harmonics of the time series of indices into the machine learning model. This research is highly relevant in the context of sustainable agriculture and environmental management. Monitoring crop types contributes to understanding land use dynamics, optimizing agricultural practices, and implementing strategic interventions to mitigate environmental degradation.
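The two approaches described can be sketched as follows: a simple BSI threshold, and a Random Forest trained on harmonic coefficients fitted to each field's index time series. The threshold value, harmonic order, and synthetic time series are illustrative assumptions, not the study's actual parameters.

```python
# Hedged sketch: (1) BSI threshold, (2) Random Forest on harmonic
# coefficients of a per-field index time series. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def harmonic_features(doy, values, order=1):
    """Least-squares fit of mean + annual harmonics; returns coefficients."""
    t = 2 * np.pi * np.asarray(doy) / 365.25
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, np.asarray(values), rcond=None)
    return coef  # [mean, cos1, sin1, ...]

# Approach 1: autumn bare soil (high BSI) suggests a summer crop.
def is_summer_crop(bsi_autumn_mean, threshold=0.1):  # threshold is assumed
    return bsi_autumn_mean > threshold

# Approach 2: Random Forest on harmonic coefficients per field.
rng = np.random.default_rng(1)
doy = np.arange(10, 360, 20)
fields, labels = [], []
for i in range(200):
    winter = i % 2 == 0
    phase = 0.0 if winter else np.pi  # shifted green-up for summer crops
    ndvi = 0.5 + 0.3 * np.cos(2 * np.pi * doy / 365.25 + phase)
    ndvi += rng.normal(scale=0.05, size=doy.size)
    fields.append(harmonic_features(doy, ndvi))
    labels.append(int(winter))

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(np.stack(fields), labels)
print("training accuracy:", clf.score(np.stack(fields), labels))
```

The harmonic coefficients compress each time series into a few phase- and amplitude-like features, which is one common way to feed irregular satellite time series to a Random Forest.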
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Monitoring of Common Agriculture Policy (CAP) standards with Copernicus earth observation data for the efficient use in administrative practice

Authors: Christine Wessollek, Eric Kosczor, Daniel Kinalczyk, Pierre Karrasch, Nora Egli
Affiliations: TUD Dresden University Of Technology, Junior Professorship in Environmental Remote Sensing, Saxon State Office for Environment, Agriculture and Geology (LfULG)
The Common Agricultural Policy (CAP) of the EU promotes sustainable agricultural practices through various measures, including the introduction of nine standards for Good Agricultural and Environmental Conditions (GAEC). These standards aim to encourage environmentally friendly practices that protect natural resources such as soil, water, and biodiversity. GAEC requirements include crop rotation, the establishment of buffer strips along water bodies, the protection of carbon-rich soils such as wetlands and peatlands, as well as the preservation of grasslands. Compliance with these measures supports sustainable land use, contributes to environmental and climate goals, and ensures agricultural productivity in the long term. As the EU spends some 40% of its budget on agriculture subsidies, the inspection of GAEC is essential to ensure adherence to these standards and to guarantee that public funds are used effectively to achieve the objectives of the CAP. Without proper monitoring, there is a risk of non-compliance, which could undermine environmental goals and create unfair competition among farmers. Compliance with GAEC standards is, in practice, still often checked through time-consuming on-site inspections. Because it is impossible for governing bodies to continuously check all agricultural areas with this procedure, only randomly selected parcels are checked for potential violations. By nature, this leads to a low detection rate of violations and constitutes an ineffective use of resources. Here, remote sensing technologies can play a key role. Earth observation satellites offer high-resolution imagery and frequent revisit times, allowing for detailed monitoring of land use, crop conditions, and environmental changes on small and large scales. Therefore, our project focuses on evaluating the feasibility of monitoring GAEC standards in Saxony, Germany, based on Sentinel data.
For those GAEC standards identified to have the potential to be monitored by satellite imagery, we develop approaches to detect non-compliance. Finally, the methods developed will be implemented in an interactive application that is intended to support regional authorities in monitoring GAEC standards. This application allows for evidence-based selection of agricultural plots for on-site inspection. To date, we have successfully addressed GAEC 1 (Maintain a certain share of permanent grassland of the total agricultural area), GAEC 2 (Protect wetlands and peatlands) and GAEC 9 (Protect environmentally-sensitive permanent grasslands in Natura 2000 sites), for which we developed a pipeline for the automatic detection of non-compliance candidates in near real-time based on Sentinel-2 data. In the remaining project period, we aim to realize an operational workflow for GAEC 6 (protect soil by defining rules for minimum soil cover) and an algorithm draft for GAEC 8 (Maintain non-productive areas and landscape features, and ensure the retention of landscape features through, for example, a ban on cutting hedges and trees during the bird breeding and rearing season). Our results so far demonstrate that the integration of remote sensing technology can effectively increase the efficiency of CAP monitoring. The operational implementation of our project outputs is scheduled for the near future to support the agricultural monitoring tasks of the authorities in charge.
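The evidence-based selection of parcels for inspection could look roughly like this: parcels whose winter-period vegetation index indicates bare soil are flagged as non-compliance candidates, in the spirit of the GAEC 6 minimum soil cover rule. The threshold, index statistic, and parcel values below are illustrative assumptions, not the project's operational rules.

```python
# Hedged sketch: flag parcels whose winter median NDVI suggests bare soil
# as candidates for on-site inspection. Threshold and IDs are made up.
import numpy as np

def flag_candidates(parcel_ndvi: dict, threshold: float = 0.25) -> list:
    """Return parcel IDs whose winter-period median NDVI is below threshold."""
    return [pid for pid, series in parcel_ndvi.items()
            if np.median(series) < threshold]

parcels = {
    "SN-001": [0.12, 0.15, 0.10],   # mostly bare soil over winter
    "SN-002": [0.45, 0.52, 0.48],   # vegetated cover crop
}
print(flag_candidates(parcels))  # → ['SN-001']
```

In an operational pipeline the per-parcel series would come from zonal statistics over Sentinel-2 scenes, and the flagged list would feed the interactive application's inspection queue.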
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Counting pixels: an analysis of statistical assumptions in times of synoptic, multi-sensor high accuracy crop mapping

Authors: Guido Lemoine, Martin Claverie, Marijn van der Velde
Affiliations: European Commission, Joint Research Centre
Crop type mapping has taken a huge leap since the introduction of full, free and open high resolution data from the European Union’s (EU) Copernicus Sentinel-1 and -2 and the United States Landsat sensors. Combined with big data analysis platforms and rapid progress in machine and deep learning (MDL) techniques, this has led to the routine production of high accuracy synoptic crop map products with consistency at continental scale. The recent release of the Copernicus High Resolution Vegetated Land Cover Characteristics (HR VLCC) service includes a crop type (CTY) layer for the EEA38 countries [1]. A key element of the CTY layer is that it is largely based on applying MDL to time series of features extracted from both the multispectral Sentinel-2 and microwave Sentinel-1 sensors. The MDL models are trained, tested and validated with extensive ground truth data. In this paper, we address the derivation of crop area estimates from CTY, in particular the established claim that “pixel counting” is not a valid method to arrive at area estimates. Our hypothesis is that this claim stems from an era in which crop area estimation with Earth Observation (e.g. Activity B in Europe in the early 1990s) was based on poorly designed samples of a limited number of image footprints, a low revisit frequency of multispectral sensor data (typically 3-4 per growing season), mostly due to limited financial means, and far less sophisticated classification methodology and processing capabilities, leading to suboptimal producer and user accuracies (well below 0.8) for only a handful of crop types. In this context, statistical methods were fine-tuned to derive unbiased stochastic estimates for crop area from the stratified sample, including the need to compensate for the high (c)omission errors. Whereas “pixel counting” came out as one of the logical flaws, it was never clearly distinguished from a wider plethora of methodological weaknesses.
Activity B was discontinued in the late 1990s. The availability of HR VLCC CTY offers a unique opportunity to revisit these statistical assumptions. CTY is currently available for the period 2017-2021 (and foreseen also for 2022 and 2023 before the start of LPS). Furthermore, we have access to the open data sets that were used by the HR VLCC consortium for the MDL model development. In particular, these include multi-annual full-country feature coverages of Common Agricultural Policy (CAP) arable crop declarations filed through the national GeoSpatial Aid (GSA) applications. GSA features typically outline the agricultural areas for which, among others, an annual crop cultivation is registered as an attribute. The annual GSA set is usually available to authorities responsible for EU CAP monitoring and control (and their equivalent in Switzerland) towards the end of May of the current year. Increasingly, the finalized GSA data are released in anonymized form as open data. We can, thus, create full-country crop maps directly from GSA and derive crop area estimates through simple aggregation. This method is already applied by various National Statistical Offices, e.g. [2]. We will use Denmark, the Netherlands, Switzerland, and France as test cases. Since we can substitute GSA sets with their matching CTY layers, we can enumerate the impact of residual classification noise on crop area estimates, i.e. test the equivalent of “pixel counting”. We can use GSA feature aggregation of CTY, for instance, using a majority rule to isolate local classification noise from overall errors in class assignment. In a next step, we apply (stratified) sampling grids, such as the one used in the LUCAS surveys, to the national coverages to test the statistical regression estimators that have been proposed in literature.
Finally, we conclude on the suitability of “pixel counting” for crop area estimation and extrapolate our findings to the need to estimate subtle annual change in crop area, which is expected to be a major contribution of the HR VLCC CTY data use in future CAP indicator frameworks, especially if error sources are properly understood and accounted for. References: [1] https://www.copernicus.eu/en/news/news/observer-vegetated-land-cover-characteristics-vlcc-new-era-land-monitoring [2] https://www.cbs.nl/nl-nl/onze-diensten/methoden/onderzoeksomschrijvingen/korte-onderzoeksomschrijvingen/landbouwtelling
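The contrast between naive "pixel counting" and the parcel-majority aggregation described above can be sketched with toy arrays; the rasters below stand in for a classified CTY layer and rasterised GSA parcel membership, and are not real data.

```python
# Hedged sketch: compare naive pixel counting with parcel-majority
# aggregation, which assigns each parcel its modal class and thereby
# isolates local classification noise. Toy 3x4 rasters, two parcels.
import numpy as np
from collections import Counter

cty = np.array([[1, 1, 2, 2],
                [1, 1, 2, 2],
                [1, 2, 2, 2]])          # classified crop-type raster
parcel_id = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [0, 0, 1, 1]])    # rasterised parcel membership

# Naive pixel counting: area per class = raw pixel tally x pixel area.
pixel_counts = Counter(cty.ravel())

# Parcel-majority aggregation: the whole parcel takes its modal class.
majority_counts = Counter()
for pid in np.unique(parcel_id):
    mask = parcel_id == pid
    majority = Counter(cty[mask]).most_common(1)[0][0]
    majority_counts[majority] += mask.sum()

print("pixel counting:", dict(pixel_counts))      # class 1: 5 px, class 2: 7 px
print("parcel majority:", dict(majority_counts))  # class 1: 6 px, class 2: 6 px
```

The one mislabelled pixel inside parcel 0 inflates the class-2 area under pixel counting but is absorbed by the majority rule, which is the mechanism the study exploits to separate local noise from systematic class-assignment errors.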
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Early-season Cotton Yield Estimation in Türkiye Using Satellite-Derived Inputs with XLSTM

Authors: Furkan Yardimci, Abdullah Kerem Erdemir, Dr. Mustafa Serkan Işık, Prof. Dr. Alp Erturk, Geomatics Engineering Esra Erten
Affiliations: Kocaeli University, Istanbul Technical University, OpenGeoHub Foundation
Accurate early-season yield estimation is critical for optimizing decision-making and enhancing resource management in agricultural practices. This study focuses on early cotton yield estimation in Türkiye by using multi-source Earth observation (EO) features collected during the first three to five months of the growing season. Specifically, the analysis is conducted on seasonal data from May to September, during which the cotton flowering and boll development stages take place, leading up to harvesting towards the end of September in Türkiye. Early yield estimation has the potential to support timely decision-making in cotton cultivation, providing farmers with valuable insights to optimize resource allocation, irrigation, and other management strategies. This research covers three distinct agricultural regions of Türkiye: South-Eastern Anatolia, the Mediterranean, and the Aegean Regions, characterized by different climatic conditions and soil properties, as well as distinct agricultural practices [1]. The dataset used in this study includes the ERA-5 Reanalysis climate model, which provides variables such as temperature, rainfall, evapotranspiration, soil moisture, and solar radiation, along with soil data from SoilGrids, and satellite data-driven features such as the enhanced vegetation index (EVI) from Sentinel-2 and radar backscatters from Sentinel-1 synthetic-aperture radar (SAR) measurements. These multi-source data offer a comprehensive view of the biophysical and structural changes in cotton and the environmental conditions influencing its yield, and allow for accurate modelling of crop yield under varying climatic and soil conditions. For yield estimation, eXtended Long Short-Term Memory (xLSTM) [2], a deep learning technique efficient at capturing temporal dependencies in sequential data, is employed. The study compares the performance of models trained with data from the first three, four, and five months, starting from May.
Preliminary results reveal that the model's performance remains consistent across these periods, with R² values of 0.864 for the first three months, 0.859 for the first four months, and 0.866 for the complete five months. These findings suggest that early-season data can provide reliable yield estimates even before the full growing season concludes, positioning early yield estimation as a vital tool for anticipating crop outcomes and enabling proactive measures for farmers and policymakers well ahead of harvest. Additionally, this study examines the post-hoc interpretability of the regression models through the Shapley Additive Explanations (SHAP) method, a popular explainable AI (XAI) technique, to identify the most influential features for yield prediction at different stages of the growing season. According to the analysis, early-season crop models are mostly driven by temperature conditions (dew-point and soil) and soil properties, mainly bulk density, pH, and soil texture. However, the results for the five-month model exhibit greater variability across different model runs, suggesting less stability in model behaviour when incorporating the full five months of data. High-resolution EO signals begin to show up in the feature importance of the five-month model. This could indicate that models are incorporating more complex interactions as the harvesting season approaches. This work highlights the potential of using early-season data, particularly from the first three to four months of the season, for reliable cotton yield estimation, offering a powerful tool for improving cotton crop management and decision-making in Türkiye. The results suggest that during early yield estimation, particularly up to the vegetative stage, the low spatial resolution static and meteorological variables, which reflect geographic distinctions, play a more important role than the high spatial resolution features that capture in-field variability.
The impact of high spatial resolution imaging data (Sentinel-1/-2) becomes apparent from the flowering stage of cotton onwards. Furthermore, the interpretability of the models, through SHAP analysis, has the potential to provide valuable insights into the key environmental and soil factors influencing yield prediction, helping farmers, agronomists, and policymakers better understand and optimize cotton cultivation practices. [1] M. S. Isik, M. F. Celik and E. Erten, "Unveiling the High-Resolution Cotton Yield Variations from Low-Resolution Statistics: Lessons from a Nationwide Study in Turkey," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 5040-5043, doi: 10.1109/IGARSS53475.2024.10641634. [2] M. Beck, K. Pöppel, M. Spanring, A. Auer, O. Prudnikova, M. Kopp, G. Klambauer, J. Brandstetter, and S. Hochreiter, "xLSTM: Extended Long Short-Term Memory," arXiv preprint arXiv:2405.04517, May 2024.
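The post-hoc feature-attribution step can be illustrated in miniature. The study uses SHAP on an xLSTM; the sketch below substitutes permutation importance on a gradient-boosting regressor as a simpler, dependency-light stand-in, and all feature names and data are invented for illustration.

```python
# Hedged sketch: rank yield-model features by permutation importance
# (a stand-in for the SHAP analysis described). Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 400
features = ["dewpoint_T", "soil_T", "bulk_density", "pH",
            "EVI_mean", "VV_backscatter"]  # illustrative names
X = rng.normal(size=(n, len(features)))
# Synthetic yield driven mainly by a temperature and a soil term, echoing
# the early-season pattern reported in the abstract.
y = 2.0 * X[:, 0] + 1.5 * X[:, 2] + 0.3 * X[:, 4] + rng.normal(scale=0.3, size=n)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranked = sorted(zip(features, imp.importances_mean), key=lambda p: -p[1])
for name, score in ranked:
    print(f"{name:>15s}: {score:.3f}")
```

Both SHAP and permutation importance attribute the model's predictions to individual inputs; SHAP additionally provides per-sample, signed attributions, which is what the study uses to compare growth stages.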
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: BirdWatch - a Copernicus satellite-based service to measure and improve farmland biodiversity

Authors: Nastasja Scholz, Dr. Annett Frick, Ursula Ploetz, Dr. Ruth Sonnenschein, Dr Bartolomeo Ventura, Prof Dr Damaris Zurell, Levin Wiedenroth, Dr Nika Oman Kadunc, Dr Nejc Vesel, Dr Ine Rosier, Dr Rik Hendrix, Tomas Orlickas, Martynas Rimgaila, Mindaugas Busila
Affiliations: LUP GmbH, University of Potsdam, Eurac Research, National Paying Agency under the Ministry of Agriculture of the Republic of Lithuania, Vito, Sinergise, Agro Digital Solutions
The decline in bird numbers and in bird species richness is a clear sign of worsening ecosystem health. The strongest declines are observed on farmland, especially where intensive farmland management is practised. At the same time, agroecological farming practices can very well be in harmony with the ecosystems they rely on. BirdWatch, funded under the Horizon Europe Programme, focuses on improving the state of biodiversity of the EU's agricultural landscape, in line with the EU Green Deal, the EU Biodiversity Strategy for 2030 and the Farm to Fork Strategy. Leveraging Copernicus satellite data, the project assesses agricultural areas in terms of their suitability as habitats for farmland birds and strategises ways to enhance agroecological conditions. Birds are excellent indicators of the health and sustainability of natural environments. They are both visible and audible, and they rely on different elements constituting the environment, such as land cover type, landscape structure, vegetation health, climatic conditions, and the presence of food sources (insects, arthropods, seeds, etc.). In this way, they are also sensitive to changes in their environment and thus support us in achieving a broader understanding of the state of an ecosystem. The BirdWatch project employs species distribution modelling to link bird occurrence data with species-specific habitat requirements and data on environmental descriptors. Thereby, it establishes relationships between bird presence and absence and the environmental conditions, which make it possible to explore the factors impacting farmland habitat suitability as well as the relative importance of these factors. The development of each habitat model starts with the assessment of the climate suitability in a target region.
Satellite data are then used to quantify essential environmental descriptors, including structural variability, land cover type, crop type, mowing intensity or soil moisture, to assess habitat suitability on a landscape scale. Knowing the state of habitat suitability and the habitat requirements, BirdWatch establishes a link to the available agri-environmental interventions under the EU’s Common Agricultural Policy (CAP) and identifies which of them have to be applied to improve the farmland conditions. The optimised agri-environmental interventions are selected in such a way as to ensure that they are not in conflict with any spatial, operational, policy or ecological requirements. To achieve this, BirdWatch uses spatial optimisation, taking into account the relevant local constraints of the farmers who need to implement specific agri-environmental measures as part of their obligations under the CAP. Benefiting from the satellite imagery of the European Union's Copernicus Programme, with its high temporal resolution and wide spatial coverage, BirdWatch is well suited to evaluate the success of agri-environmental interventions but also to identify and propose targeted policy adjustments as needed. Upon project completion, the service will be accessible through a web-based GIS application in the project regions of Flanders, Germany, Lithuania, and South Tyrol.
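A species distribution model of the kind described links presence/absence observations to environmental descriptors; one minimal form is a logistic regression. The descriptor names, data, and candidate-parcel values below are illustrative assumptions, not BirdWatch's actual models or variables.

```python
# Hedged sketch: logistic-regression species distribution model relating
# bird presence/absence to environmental descriptors. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
# Assumed descriptors: hedgerow density, mowing intensity, soil moisture.
X = rng.uniform(size=(n, 3))
# Synthetic presence: likelier with dense hedgerows, low mowing intensity.
logit = 4 * X[:, 0] - 3 * X[:, 1]
presence = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

sdm = LogisticRegression().fit(X, presence)
# Habitat suitability for a hypothetical candidate parcel:
suitability = sdm.predict_proba([[0.8, 0.2, 0.5]])[0, 1]
print(f"suitability: {suitability:.2f}")
```

The fitted coefficients expose the relative importance and direction of each descriptor, which is the kind of output the project uses to match parcels to suitable agri-environmental interventions.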
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Crops Classification by Procrustes Analysis to Support the CAP Controls

Authors: Dr. Samuele De Petris, Ph.D. Filippo Sarvia, Enrico Chiesa, Alessandro Farbo, Enrico Borgogno-Mondino
Affiliations: University Of Turin, Food and Agriculture Organization of the United Nations
The effective implementation of the European Union’s Common Agricultural Policy (CAP) relies on accurate monitoring systems for ensuring compliance with subsidy and environmental standards. Crop type classification using remote sensing data, particularly from the Sentinel-2 (S2) mission, has emerged as a support tool within this framework. S2 offers high temporal, spatial, and spectral resolution, enabling new scenarios based on a detailed and accurate identification of crop types across diverse agricultural landscapes. This study proposes a novel supervised algorithm combining spectral feature time series derived from S2 data of the year 2024 and Procrustes analysis to enhance crop type classification accuracy and scalability. Spectral feature time series capture phenological patterns unique to different crop types, while Procrustes analysis aligns and compares these temporal signatures (specific to each crop type) to identify crop-specific growth trajectories. By integrating these techniques, the proposed algorithm addresses challenges in distinguishing crops with complex phenological characteristics and improves classification performance in heterogeneous agricultural landscapes. The algorithm was tested on a diverse agricultural dataset located in NW Italy encompassing various crop types and seasonal variations. For training and testing the proposed approach, reference data mapping each crop type were provided by ARPEA (Agenzia Regionale Piemontese per le Erogazioni in Agricoltura), a local Italian CAP paying agency. The median time series was computed and used as the reference temporal profile for each crop type. The Procrustes analysis was implemented at the pixel level in order to fit the local time series to the reference ones. The roto-translation and anisotropic scaling (RTAS) parameters were consequently estimated using singular value decomposition Procrustes analysis and mapped at the pixel level, creating new layers for each class (i.e.
rotation angle, translation and scaling factors). These new discriminants were used to find the class whose reference profile requires the smallest transformation, and the corresponding label was assigned to the pixel. This method offers significant advantages for CAP controls by enabling automated, large-scale, and cost-effective monitoring of agricultural parcels. The proposed approach demonstrates the potential of advanced time-series analysis and Procrustes techniques in CAP monitoring. Future research will explore the integration of multi-source remote sensing data and further refinement of the algorithm to accommodate mixed cropping systems and smallholder farms, ensuring inclusivity and precision in agricultural policy enforcement across the EU.
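The SVD-based Procrustes step can be sketched as follows: each pixel's index time series, embedded as 2-D phenology points, is aligned to every class's reference profile, and the class with the smallest alignment residual wins. The sketch estimates a rotation, isotropic scale, and translation, a simplification of the roto-translation/anisotropic-scaling (RTAS) transform described; the class names and profiles are invented.

```python
# Hedged sketch: classify a pixel by the Procrustes residual against each
# class's reference phenological profile. Similarity transform only
# (rotation + isotropic scale + translation); data are synthetic.
import numpy as np

def procrustes_residual(ref, obs):
    """Residual after best rotation + scale + translation of obs onto ref."""
    ref_c = ref - ref.mean(axis=0)
    obs_c = obs - obs.mean(axis=0)
    U, s, Vt = np.linalg.svd(ref_c.T @ obs_c)
    R = U @ Vt                               # optimal orthogonal transform
    scale = s.sum() / (obs_c ** 2).sum()     # optimal isotropic scale
    aligned = scale * obs_c @ R.T
    return np.linalg.norm(ref_c - aligned)

days = np.linspace(0, 1, 12)
references = {  # per-class median profiles, embedded as (time, index) points
    "maize": np.stack([days, np.sin(np.pi * days)], axis=1),
    "wheat": np.stack([days, np.cos(np.pi * days)], axis=1),
}
# A pixel whose series matches the maize profile up to a constant offset:
pixel = np.stack([days, np.sin(np.pi * days) + 0.02], axis=1)

label = min(references, key=lambda c: procrustes_residual(references[c], pixel))
print(label)  # → maize
```

Minimising the residual is one way to operationalise "the class requiring the smallest transformation"; the study instead maps the estimated RTAS parameters themselves as per-class discriminant layers.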
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Automatic Detection of Sewage Spills on the Insula EO Platform From Sentinel 2 Satellite Imagery

Authors: Gianluca Brunelli, Simona Gargiulo, Erminia De Grandis, Beatrice Gottardi, Roberto De Rienzo
Affiliations: Cgi, Italy
The use of sewage in agriculture offers potential benefits by providing nutrients and water for crop growth, but it also introduces risks of soil degradation, water contamination, and public health problems if not managed properly. Effective monitoring of sewage storage and its application in agricultural systems is crucial to mitigate these risks. Remote sensing technologies provide innovative solutions to detect and analyze sewage holding practices in agricultural areas. Using satellite imagery and advanced analytical techniques, remote sensing enables the identification of sewage storage sites, monitoring of their distribution, and assessment of environmental impacts, promoting sustainable agricultural practices. Insula, a cloud-based platform operated by CGI Italia, represents an advancement in Earth Observation (EO) technology. This cutting-edge platform is specifically created to simplify and accelerate the processing of large EO datasets, addressing the complexities of large-scale data analysis with remarkable efficiency. Leveraging the latest technological innovations, Insula provides users with a versatile suite of tools designed for big data analysis, anomaly detection, and predictive modeling, supplying insights into Earth's dynamic phenomena. The objective of this work is to implement an accurate and easily replicable algorithm, fully integrated into the Insula platform, that allows the identification of potential spills and the monitoring of where they occur. Specifically, the algorithm aims to generate binary maps identifying sewage presence. To address this issue, an initial prototype for classifying sewage spills based on Sentinel-2 data was implemented over a selected test area. In detail, Sentinel-2 data were collected weekly over the Emilia Romagna region in Italy between October 2023 and July 2024, and a decision-tree approach was developed using the RED, NIR and SWIR spectral bands to detect sewage spill events.
A further approach is under evaluation and consists of implementing an automatic classification by combining spectral analysis with a deep learning model. In detail, a U-Net architecture using Sentinel-2 bands and different spectral indices, such as the NDVI (Normalized Difference Vegetation Index), is implemented to detect the presence of sewage and delineate its shape within agricultural landscapes. Sentinel-2 time series data are then combined with the ground truth over the test areas to train and validate the model. This paper presents the results obtained with both methodologies, which have been applied in different regions of Northern Italy, such as Emilia-Romagna and Lombardy, together with a comparative analysis of the two approaches. The goal of this work is to propose a final algorithm, fully integrated into the Insula platform and with access to the full Copernicus archive, enabling end users to generate sewage spill maps across the entire national territory and demonstrating its scalability at different latitudes.
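The band-threshold logic of such a decision-tree classifier can be sketched as follows; note that the threshold values and reflectances below are illustrative placeholders, not the values used by the authors:

```python
import numpy as np

def sewage_mask(red, nir, swir, red_max=0.12, nir_max=0.18, swir_max=0.15):
    """Illustrative decision-tree rule: sewage ponds/spills are dark and wet,
    so reflectance is low in RED, NIR and SWIR. Thresholds are placeholders,
    not the values derived in the study."""
    dark = red < red_max      # low visible reflectance
    wet = swir < swir_max     # moisture suppresses SWIR reflectance
    not_veg = nir < nir_max   # rules out bright, healthy vegetation
    return dark & wet & not_veg

# Tiny synthetic scene: one dark/wet pixel, one vegetated pixel.
red = np.array([0.05, 0.08])
nir = np.array([0.10, 0.45])
swir = np.array([0.07, 0.25])
mask = sewage_mask(red, nir, swir)
```

The output is a binary map of the kind the abstract describes, ready to be vectorized or aggregated over a region of interest.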

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: D.02.12 - POSTER - Big-data Driven AI Techniques in Ocean Applications: Enhancing Marine Monitoring and Analysis

The vast and dynamic nature of the ocean presents significant challenges for environmental monitoring and marine ecosystem management. The emergence of big-data-driven AI techniques as powerful tools for modelling and monitoring the oceans has opened new frontiers in ocean applications, providing new ways to interpret and analyze large volumes of ocean data and allowing us to better analyze, predict, and monitor marine systems from physical, ecological and biogeochemical perspectives. AI-powered tools, such as machine learning and deep learning techniques, enable efficient processing of large and complex datasets generated from various sources, including satellite products, underwater sensors, and autonomous vehicles. These techniques facilitate long-term oceanographic parameter monitoring, near real-time pattern detection and predictive modeling of ocean systems, and therefore have the potential to revolutionize our understanding of the complex processes and phenomena that occur in the marine environment. This session invites contributions that address a range of topics related to the use of AI for ocean applications, including but not limited to:
•Development and application of AI algorithms for ocean data analysis and monitoring in marine biogeochemistry and ecosystems, including plankton biodiversity, fisheries and aquaculture
•Integration of multiple data sources (in situ, remote sensing, etc.) for improved ocean parameter estimations
•Use of AI for prediction and forecasting of ocean dynamics, including extreme events such as hurricanes and tsunamis
•Data processing and pattern recognition: ML and DL algorithms process satellite imagery and sensor data automatically, speeding up analysis and thereby saving time and resources traditionally spent on data interpretation (e.g. eddy and front detection, phytoplankton blooms, HABs, ...)
•Innovative uses of AI for oceanographic research and exploration, including autonomous ocean vehicles and robotics
•Challenges and solutions: the computational, data-integration and interdisciplinary challenges of using AI in ocean applications (e.g. homogenization and harmonization of data, data fusion, gap filling, ...)

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Enhancing Offshore Infrastructure Monitoring: Synthetic Data Generation for Deep Learning-Based Object Detection on Sentinel-1 Radar Imagery

Authors: Robin Spanier, Prof. Dr. Claudia Künzer, Dr. Thorsten Höser, Marco Ottinger
Affiliations: German Aerospace Center (DLR), Earth Observation Center (EOC), German Remote Sensing Data Center (DFD), University of Wuerzburg, Institute of Geography and Geology
The recent and ongoing expansion of marine infrastructure, including offshore wind farms, oil and gas platforms, artificial islands, and aquaculture facilities, highlights the need for effective monitoring systems. Precise quantification in space and time is crucial to planning the future expansion, usage, management, and impact of marine offshore infrastructure. In the past decade, numerous studies have explored the detection and monitoring of offshore infrastructure using space-borne data and remote sensing techniques. Recently, deep learning-based approaches have emerged as a powerful tool for these tasks. However, the development of robust and reliable object detection models depends on the availability of comprehensive, balanced training datasets. Manual annotation of existing objects is the standard method for dataset creation, but it falls short when samples are scarce, particularly for underrepresented object classes, shapes, and sizes. To address this limitation, we propose a deep learning-based approach for generating synthetic training data by modifying and retraining a stable diffusion model. This approach aims to augment the manually annotated image-label pairs and to enhance dataset quality and diversity. We validate this approach by applying the object detector YOLOv10 to efficiently detect and classify offshore infrastructure objects (specifically offshore oil and gas platforms) on Sentinel-1 radar imagery in three diverse test regions: the Gulf of Mexico, the North Sea, and the Persian Gulf. We will present an analysis of the impact of our synthetic data generation approach on training results, with a focus on how unbalanced classes can be better represented and model performance improved. This study underscores the critical importance of balanced datasets and highlights synthetic data generation as an effective strategy to address common challenges in remote sensing.
Furthermore, it reaffirms the pivotal role of Earth observation in advancing offshore infrastructure monitoring by demonstrating the first test results of our model on unseen data.
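The class-balancing idea behind synthetic augmentation can be sketched as follows. This is not the authors' pipeline; it only illustrates how one might compute a per-class quota of synthetic samples for a generator (e.g. a fine-tuned diffusion model) to produce, and the class names are hypothetical:

```python
import collections

def synthetic_quota(labels, target=None):
    """Return how many synthetic samples per class would be needed to level
    the class distribution of a training set. Purely illustrative."""
    counts = collections.Counter(labels)
    target = target or max(counts.values())
    return {cls: target - n for cls, n in counts.items()}

# Hypothetical, heavily imbalanced label set for an offshore-object detector.
labels = (["wind_turbine"] * 50
          + ["oil_platform"] * 5
          + ["artificial_island"] * 2)
quota = synthetic_quota(labels)
```

In a real workflow the quota would drive how many images the generative model synthesizes for each underrepresented class before retraining the detector.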

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Improved monitoring of phytoplankton functional types in the Arctic Ocean based on big-data driven machine learning methods

Authors: Dr Hongyan Xi, Alfredo J. Bellido Rosas, Ehsan Mehdipour, Aurélien Prat, Marine Bretagnon, Antoine Mangin, Pr Astrid Bracher
Affiliations: Alfred Wegener Institute, Helmholtz-centre For Polar And Marine Research, School of Business, Social and Decision Sciences, Constructor University, ACRI-ST, Institute of Environmental Physics, University of Bremen
In the Arctic Ocean (AO), phytoplankton are highly influenced by sea ice conditions and brine release. Phytoplankton community composition can be dynamic due to the timing of nutrient- and light-dependent biological production, particularly under the much faster climate-induced changes in the AO. Both uncertainty assessment and validation have shown that the current global products of phytoplankton functional types (PFT) available from the Copernicus Marine Service show larger gaps and higher uncertainties in the AO compared to the low-latitude oceans. This study proposes to exploit marine big-data-driven machine learning (ML) methods and aims to set up an improved approach for better quantification of the chlorophyll a concentration (Chla) of multiple PFTs (namely diatoms, haptophytes, dinoflagellates, green algae and prokaryotic phytoplankton) in the AO. In situ data on phytoplankton community composition are the essential element for model development, testing, and evaluation. We use a large in situ PFT dataset for the AO, derived from HPLC pigment concentrations, covering the period since 1997 and collected on our own expeditions and from other sources, as well as underway hyperspectral absorption measurements from our flow-through AC-S system. As model input, parameters including ocean colour bio-optical data and physical and biogeochemical variables are considered important for the PFT model construction. We will acquire these data from the Copernicus Marine Service catalogue to ensure consistency by using the same data provider. We perform feature selection to identify the most sensitive predictors among the input variables, focusing on the subset that significantly enhances prediction or decision-making. Three powerful ML techniques, including a one-dimensional convolutional neural network (1D-CNN), a gradient boosting machine (GBM) and a self-organizing map (SOM), will be tested.
Each algorithm will utilize the selected input variables (predictors) and the in situ PFT matchup data (target variables) to train the sub-models and generate independent prediction output of the PFTs. The final PFT prediction model will be determined by applying a ridge regression to the trained ML models to achieve a training ensemble with optimal predictive performance. Evaluation of the ML-based PFT model will be carried out by 1) intercomparison with current PFT global products on the Copernicus Marine Service to highlight differences between the two products, which will be further evaluated against field measurements; and 2) validation with underway hyperspectral absorption-derived PFT data collected in the Fram Strait, which allows both quantitative and qualitative evaluations with a higher number of matchups thanks to the high frequency of the hyperspectral measurements. The ML-based PFT model will then be applied to all consolidated input data spanning from 1997 to the present (from May to September only) to generate PFT data for the AO over the last decades. Such a PFT dataset with improved accuracy will allow reliable long-term monitoring and trend analyses of the surface phytoplankton community structure, helping to detect potential shifts and changes in phytoplankton diversity in the AO under the Arctic amplification effect.
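The ridge-regression stacking step described above can be sketched as follows. Random noise stands in for the three base learners (1D-CNN, GBM, SOM); the closed-form ridge solution combines their outputs into an ensemble prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in predictions from three base models for one PFT's Chla:
# columns = models, rows = samples; each model is the truth plus noise.
y = rng.uniform(0.1, 2.0, size=200)                 # in situ target Chla
P = np.column_stack([y + rng.normal(0, s, 200) for s in (0.1, 0.2, 0.3)])

# Closed-form ridge regression over model outputs:
# w = (P^T P + lambda * I)^-1 P^T y
lam = 1e-2
w = np.linalg.solve(P.T @ P + lam * np.eye(P.shape[1]), P.T @ y)
ensemble = P @ w

rmse_single = min(np.sqrt(np.mean((P[:, j] - y) ** 2)) for j in range(3))
rmse_ensemble = np.sqrt(np.mean((ensemble - y) ** 2))
```

Because the combination weights are fitted to the matchup targets, the stacked ensemble attains a lower (in-sample) error than any single base model, which is the motivation for the final ridge step.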

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Deep Learning for Near Real-Time Oil Spill Detection Triggering Forecasting Applications in a Digital Twin of the Ocean Framework

Authors: Noemi Fazzini, Sabrina Outmani, Antonios Parasyris, Vassiliki Metheniti, George Alexandrakis, Nikolaos Kampanis, Maria Luisa Quarta
Affiliations: Meeo, Sistema GmbH, Coastal & Marine Research Lab, FORTH-IACM
The EU-funded Iliad project builds on the assets resulting from two decades of investments in policies and infrastructures for the blue economy, aiming at establishing an interoperable, data-intensive Digital Twin of the Ocean (DTO). Iliad combines ocean data, advanced models, Machine Learning (ML) and computing infrastructures to create high-resolution, real-time ocean digital twins, focusing on different regions and thematic sectors, e.g. marine renewables, biodiversity, fisheries and pollution. The current work presents the Cretan Oil Spill Detection and Forecasting DTO developed in the framework of Iliad. The DTO focuses on early detection of marine oil spills and forecasting of spill trajectories to support rapid response, minimizing environmental and economic impacts. The system utilizes Deep Learning (DL) for near real-time oil spill detection from Sentinel-1 SAR imagery. The approach incorporates statistical and DL techniques for object detection and semantic segmentation. Input data consist of 10 m resolution dual-polarization Sentinel-1 images acquired with the Interferometric Wide Swath (IW) Ground Range Detected mode at high resolution (GRD-HD). Images are automatically pre-processed in their VV polarization using customized functions based on SNAPpy. A trained Fully Convolutional One-Stage (FCOS) model is employed for object detection, fine-tuned with a dataset of over 1000 SAR images with four distinct classes: oil spill, look-alike, ship, and land [1]. A U-Net model, trained on the same dataset, is employed for the segmentation of the detected oil spill regions. The accuracy of predictions on new images depends on consistent preprocessing steps applied to both the training and inference data, including speckle filtering, linear-to-dB conversion and contrast enhancement. Oil spill segmentation can be performed using either a SNAPpy-based adaptive thresholding algorithm or a U-Net model, each with distinct strengths.
The U-Net model is generally faster and produces cleaner segmentation with less noise, while SNAPpy provides more precise delineation, particularly in complex spill scenarios. Detected oil spills trigger the MEDSLIK-II model to predict spill fate and transport, supported by high-resolution metocean forecasts. Forecasting models are tailored to the area of spill detection. For the regional application of the Cretan Sea, high-resolution forecasts are used to drive the oil spill forecast, e.g., a 3 km spatial resolution atmospheric model (WRF), a 1 km hydrodynamic model (NEMO), and a 1 km wave model (Wavewatch III). For the global case, lower resolution forecasts are used as alternatives. The oil spill detection and forecasting system is tested on real world oil spill incidents. To ensure reproducibility, interoperability and scalability, the entire pipeline is containerized using Docker and orchestrated with Common Workflow Language (CWL) protocols, following best practices for efficient and standardized environmental modeling workflows. This approach makes the system scalable and ready for deployment across different platforms, enabling efficient oil spill detection and forecasting operations in real-time. Acknowledgements: This research has received funding from the European Union’s H2020 RIA programme under GA No 101037643. References: [1] M. Krestenitis, G. Orfanidis, K. Ioannidis, K. Avgerinakis, S. Vrochidis, and I. Kompatsiaris, “Oil spill identification from satellite images using deep neural networks,” Remote Sens., vol. 11, no. 15, p. 1762, 2019.
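The dark-patch thresholding idea behind the SNAPpy-based segmentation can be illustrated with a crude global version (the authors' algorithm is adaptive and locally windowed; the backscatter values and the factor k below are assumptions for illustration only):

```python
import numpy as np

def adaptive_dark_mask(db_img, k=1.5):
    """Flag pixels darker than (mean - k*std) of the scene. Oil damps
    capillary waves, so slicks appear as dark patches in VV backscatter
    (dB). A crude, global stand-in for windowed adaptive thresholding."""
    thresh = db_img.mean() - k * db_img.std()
    return db_img < thresh

# Synthetic dB scene: clean-sea backscatter with an inserted dark "slick".
rng = np.random.default_rng(1)
sea = rng.normal(-8.0, 1.0, size=(64, 64))
sea[20:30, 20:30] = -18.0
mask = adaptive_dark_mask(sea)
```

A real pipeline would follow this with morphological cleaning and the look-alike discrimination that the FCOS/U-Net models provide.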

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Sensor-Agnostic Seagrass Mapping across Spatial Scales: Evaluating Satellite Sensors in the Baltic Sea

Authors: Eike M. Schütt, Florian Uhl, Philipp R. Schubert, Thorsten Reusch, Natascha Oppelt
Affiliations: Earth Observation and Modelling, Kiel University, Marine Evolutionary Ecology, GEOMAR Helmholtz Center for Ocean Research
Seagrass ecosystems play a crucial role in marine biodiversity and are important carbon sinks, but face growing pressures and resulting degradation worldwide. Monitoring and protecting these ecosystems requires reliable data on habitat distribution, yet traditional survey methods struggle to meet this need due to the large spatial extent and fragmented nature of seagrass habitats. Satellite Earth Observation (EO) presents a promising, cost-effective alternative for large-scale, repetitive seagrass mapping. However, the lack of standardized practices for EO-based seagrass mapping, including satellite sensor selection, data preprocessing and model development, remains a barrier to widespread application and scalability. To address these issues, we developed a versatile, sensor-agnostic processing pipeline around the XGBoost machine-learning model to streamline EO-based seagrass mapping. This pipeline automates key processing steps, including data preprocessing and feature selection. This is achieved by coupling Recursive Feature Elimination (RFE) with Shapley Additive exPlanations (SHAP; Lundberg et al. 2020), which also improves model interpretability. Using this pipeline, we evaluate the fitness-for-purpose of four widely used satellite sensors (Pléiades-1, WorldView-3, Planet SuperDove, and Sentinel-2) in seven different configurations (single scenes, composites, de-striping) in the south-western Baltic Sea. We tested the sensors on three tasks: detecting submerged aquatic vegetation, distinguishing seagrass from algae, and estimating seagrass cover percentage. Our results reveal notable differences in the sensors’ ability to capture key aspects of seagrass habitats. Due to their coarser ground sample distance, Sentinel-2 and Planet SuperDove fail to resolve meadow-scale habitat fragmentation. However, Sentinel-2 proves valuable for cost-effective regional-scale assessments of aggregated metrics such as overall vegetation cover.
The very-high-resolution sensors WorldView-3 and Pléiades-1 (spatial resolution < 2 m) capture the fine-grained, meter-scale structure that is critical for assessing habitat fragmentation. However, no sensor reliably distinguished seagrass from algae, underscoring an ongoing challenge for EO-based coastal ecosystem mapping. While this issue deserves more attention in future work on EO-based seagrass mapping, the presented sensor-agnostic pipeline for automated preprocessing, feature selection and modelling offers a robust pathway to accessible and scalable monitoring of coastal habitats.
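The recursive feature elimination loop at the heart of such a pipeline can be sketched as follows. For a self-contained example, an ordinary-least-squares fit stands in for the XGBoost model and scale-weighted coefficient magnitudes stand in for SHAP importances; the data are synthetic:

```python
import numpy as np

def rfe_rank(X, y, n_keep=2):
    """Minimal recursive feature elimination: repeatedly fit a model and
    drop the least important feature. OLS coefficients (scaled by feature
    std) stand in for XGBoost + SHAP importances. Returns kept columns."""
    cols = list(range(X.shape[1]))
    while len(cols) > n_keep:
        Xa = X[:, cols]
        coef, *_ = np.linalg.lstsq(Xa, y, rcond=None)
        importance = np.abs(coef) * Xa.std(axis=0)  # scale-aware importance
        cols.pop(int(np.argmin(importance)))        # drop weakest feature
    return cols

# Synthetic "bands": only features 0 and 3 actually drive the target.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(0, 0.1, 300)
kept = rfe_rank(X, y, n_keep=2)
```

In the real pipeline the same loop would wrap an XGBoost fit and rank features by mean |SHAP| value instead, which also yields the interpretability benefit the abstract mentions.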

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Forecasting Water Quality from Space

Authors: Jorge García Jiménez, Dr Ralf Quast, Jorrit Scholze, Dr. Carsten Brockmann
Affiliations: Brockmann Consult, Universitat de València
Water Quality (WQ), defined as the biological, physical and chemical state of water, is a key environmental parameter, having far-reaching implications for humans, ecosystems, and socio-economic domains. With global warming and increasing environmental uncertainty, complex and often unpredictable interactions between known and emerging environmental drivers have become frequent in aquatic ecosystems. Hence, early-warning protocols and forecast models have been developed based on in-situ measurements, enabling data-based decisions, policies and interventional actions. However, they are often limited by the inherent point-based spatial coverage of in-situ measurements. Biogeochemical models are helpful in simulating marine processes on larger time and space scales, but they often can’t characterize the variability, complexity and fine scale required for coastal regions and extreme events, especially when influenced by external and interlinked environmental drivers such as riverine nutrient inputs or anthropogenic activities. In contrast to the two above techniques, satellite data provide a synoptic perspective and have a proven record of retrieving critical WQ parameters such as Chlorophyll-a concentration. Despite these advantages in synoptic overview, time series density, and consistency of methodology, forecasting water quality relying largely on satellite observation has so far been much less developed, owing to challenges inherent to this technology such as cloud cover and data scarcity. Significant improvements have been made since Machine Learning (ML) and Deep Learning methods entered forecast science. The ability to uncover information from complex and high-dimensional data spaces is one of the reasons why these methods can capture non-linear relationships beyond traditional statistical methods. Most of these methods require either gap-free data or only minimal gaps in the data to capture system dynamics accurately.
The FC-WQ project has been selected for funding by the ESA FutureEO-1 Permanently Open Call in order to bridge this gap and derive short-term predictive maps of Chlorophyll-a concentrations up to 7 days into the future for coastal areas. The basic idea is to benefit from the good quality of EO products of biological and physical parameters, as well as the good predictive skill of physical models, and to train a model which forecasts the biological parameters using the history of observations and predictions of the physics as input. An innovative machine learning forecast approach was developed and implemented, combining forecast outputs from marine (CMEMS) and meteorological (ECMWF) numerical models with Sentinel-3 OLCI-derived Chlorophyll-a observations. The Extreme Gradient Boosting (XGBoost) algorithm was chosen for its efficiency in handling sparse datasets and its explainability possibilities. To tailor the model’s performance to specific coastal dynamics and processes, a custom loss function was developed to incorporate physical constraints, allowing the training process to focus on critical regions rather than relying solely on the statistical distribution of the data. This, combined with explainability techniques, ensures greater transparency in model behavior, supports alignment with physical principles, and fosters trust in ML-based predictions. Results indicate a median absolute percentage deviation (MAPD) of less than 30% for forecasts up to six days into the future, and the methodology demonstrates some capability in filling daily observational gaps caused by cloud coverage, with a MAPD of less than 25%. This meets the accuracy target defined for the project, which was chosen to match the uncertainty of satellite-derived chlorophyll-a concentration.
Further, comparative analysis shows meaningful improvement over persistence models, demonstrating the machine learning approach's potential to extract non-trivial patterns by utilizing the complementary information provided by multi-source data. While the current model surpasses naïve baseline predictions, it also highlights the challenges of forecasting marine chlorophyll dynamics using limited observational data. Biogeochemical models maintain an advantage in their ability to provide mechanistic insights into ecosystem processes, while the ML model demonstrates strengths in pattern recognition and gap-filling capabilities. Together, these approaches offer complementary perspectives that can enhance marine ecosystem forecasting and management.
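The MAPD skill metric quoted above is straightforward to compute; a minimal sketch with made-up observation and forecast values:

```python
import numpy as np

def mapd(pred, obs):
    """Median absolute percentage deviation (in %) between forecasts
    and observations, the skill metric reported in the abstract."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return float(np.median(np.abs(pred - obs) / np.abs(obs)) * 100.0)

# Toy example: deviations of 10%, 10% and 0% give a median of 10%.
obs = np.array([1.0, 2.0, 4.0])
pred = np.array([1.1, 1.8, 4.0])
score = mapd(pred, obs)
```

Using the median rather than the mean makes the metric robust to the occasional large relative error, which matters for skewed quantities like chlorophyll-a concentration.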

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: A remote sensing foundation model for the ocean using Sentinel-3

Authors: Geoffrey Dawson, Dr Andrew Taylor, Sarah Jackson, David Moffat, Prof Chunbo Luo, Tong Luo, Paolo Fraccaro, Dr Remy Vandaele, Prof Hywel Williams, Dr Anne Jones
Affiliations: IBM Research, Science and Technology Facilities Council, Plymouth Marine Laboratory, University of Exeter
Foundation models are increasingly used in machine learning workflows for earth observation tasks. These models are trained on a large amount of unlabelled data, and the trained models are then used for a variety of downstream tasks. One of the perceived advantages of a foundation model is that it can achieve equal or better performance in downstream tasks, compared to other deep learning methods, while using less training data and computational effort. So far, remote sensing foundation models have primarily been developed for land applications. Models such as this could be particularly important for the ocean, where for many applications there tends to be relatively little high-quality data from in situ measurements; downstream tasks with little labelled data stand to benefit most from a geospatial foundation model. Here we build an ocean remote sensing foundation model based on Sentinel-3 Ocean and Land Colour Instrument (OLCI) measurements. We use the same Vision Transformer (ViT) architecture as Prithvi-100M, a foundation model trained on the Harmonized Landsat and Sentinel-2 (HLS) dataset over land. We apply the same self-supervised encoder with a Masked AutoEncoder (MAE) learning strategy, where random parts of the image are masked and the model is trained to reconstruct them. We train on a global dataset of OLCI L2 measurements and use the mean square error to measure the difference between the reconstructed image and the original image. To improve performance, we also investigate the use of additional data relevant to ocean applications. For example, we include sea surface temperature from Sentinel-3 Sea and Land Surface Temperature Radiometer (SLSTR) measurements as an additional band. We also test the use of additional parameters related to the time and location of the measurement, for example region, ocean depth, and time of year.
We then demonstrate how we can use Terratorch (an open-source fine-tuning framework for Geospatial Foundation Models) to fine-tune this model for a variety of downstream tasks. We use primary production quantification and classification of harmful algal blooms as examples. The details of the fine-tuning are presented in the related abstract ‘Advancing Marine Earth Observation with AI Foundation Models’.
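The patch-masking step of MAE pre-training can be sketched as follows; the tile, patch size and mask ratio are illustrative, not the values used for the OLCI model:

```python
import numpy as np

def mask_patches(img, patch=4, mask_ratio=0.75, rng=None):
    """Split an image into square patches and zero out a random subset,
    as in Masked-AutoEncoder pre-training; the encoder-decoder would then
    be trained to reconstruct the hidden patches under an MSE loss."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape
    gh, gw = h // patch, w // patch
    n_patches = gh * gw
    hidden = rng.choice(n_patches, size=int(n_patches * mask_ratio),
                        replace=False)
    out = img.copy()
    for idx in hidden:
        r, c = divmod(int(idx), gw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return out, hidden

# Toy single-band 8x8 "tile" with strictly positive values.
img = np.arange(64.0).reshape(8, 8) + 1.0
masked, hidden = mask_patches(img, patch=4, mask_ratio=0.75,
                              rng=np.random.default_rng(0))
```

The reconstruction loss is then evaluated only on the hidden patches, which is what forces the encoder to learn transferable representations from unlabelled imagery.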

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Diffusion Models for Sea Surface Height Reconstruction

Authors: Nils Lehmann, Jonathan Bamber, Prof. Xiaoxiang Zhu
Affiliations: Technical University Of Munich, University of Bristol, Munich Center for Machine Learning
Numerous satellite altimeters observe sea surface height (SSH) along one-dimensional tracks. However, gridded SSH maps offer more complete spatio-temporal coverage and are commonly utilized by physical oceanographers to study ocean dynamics [1]. A central component of global ocean circulation are mesoscale eddies (50-300 km) and small-scale processes, which contain between fifty and ninety percent of the ocean’s kinetic energy (KE) [2]. The current operational method is based on the statistical optimal interpolation approach DUACS (Data Unification and Altimeter Combination System) [3], but it has been shown to smooth or distort eddies [4]. More recently, several works have demonstrated the applicability of discriminative deep learning approaches for SSH interpolation [5] [6] [7]. [5] and [7] use a ConvLSTM [8] model under a variational data assimilation framework that can be trained end-to-end, termed 4DVarNet. [6] also use a ConvLSTM but with separate encoders for altimetry observations and sea surface temperature (SST) maps. In follow-up work, [9] use the 3D-adapted video prediction model SimVP [10]. In contrast, we propose to instead use a generative model to reconstruct gridded SSH maps from partial SSH observations and SST maps, and we evaluate our methodology within the recently proposed OceanBench [11] framework. Specifically, we use a diffusion model and adapt it as a conditional generative model for altimetry interpolation across the different OceanBench tasks: Nadir, Nadir-SWOT and Nadir-SST. By tuning the parameters of the exponential noise schedule of the diffusion process, we find that the diffusion model can be applied flexibly across the different tasks. The result is a generative model that resolves both spatial and time scales significantly better than previous approaches and inherently comes with a notion of predictive uncertainty based on quantile regression.
Because the generative model can better resolve small scale processes, we also find significant improvements of derived strain and vorticity. Since we can capture length scales significantly better through the diffusion model approach, our work demonstrates the strong potential of tackling a wider variety of inverse problems. In particular, we plan to use our existing framework for high resolution SWOT coastal data, such that its application to the historical altimeter record could improve coastal resolution. [1] Klein P, Lapeyre G, Siegelman L, Qiu B, Fu L L, Torres H, Su Z, Menemenlis D and Le Gentil S 2019 Earth and Space Science 6 795–817 [2] Morrow R, Fu L L, Rio M H, Ray R, Prandi P, Le Traon P Y and Benveniste J 2023 Surveys in Geophysics 44 1243–1286 [3] Taburet G, Sanchez-Roman A, Ballarotta M, Pujol M I, Legeais J F, Fournier F, Faugere Y and Dibarboure G 2019 Ocean Science 15 1207–1224 [4] Ballarotta M, Ubelmann C, Pujol M I, Taburet G, Fournier F, Legeais J F, Faug`ere Y, Delepoulle A, Chelton D, Dibarboure G et al. 2019 Ocean science 15 1091–1109 [5] Febvre Q, Sommer J L, Ubelmann C and Fablet R 2023 arXiv preprint arXiv:2309.14350 [6] Martin S A, Manucharyan G E and Klein P 2023 Journal of Advances in Modeling Earth Systems 15 e2022MS003589 [7] Beauchamp M, Febvre Q, Georgenthum H and Fablet R 2023 Geoscientific Model Development 16 2119–2147 [8] Shi X, Chen Z, Wang H, Yeung D Y, Wong W K and Woo W c 2015 Advances in neural information processing systems 28 [9] Martin S A, Manucharyan G E and Klein P 2024 Geophysical Research Letters 51 e2024GL110059 [10] Gao Z, Tan C, Wu L and Li S Z 2022 Simvp: Simpler yet better video prediction Proceedings of the IEEE/CVF conference on computer vision and pattern recognition pp 3170–3180 [11] Johnson J E, Febvre Q, Gorbunova A, Metref S, Ballarotta M, Le Sommer J et al. 2024 Oceanbench: the sea surface height edition vol 36
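The exponential noise schedule mentioned above can be illustrated with a minimal forward-diffusion sketch; the sigma bounds are assumed values for illustration, not the tuned parameters of the study:

```python
import numpy as np

def noised(x0, t, sigma_min=0.01, sigma_max=1.0, rng=None):
    """Forward diffusion with an exponential (geometric) noise schedule,
    sigma(t) = sigma_min * (sigma_max / sigma_min) ** t for t in [0, 1].
    Returns the noised field and the noise level used."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = sigma_min * (sigma_max / sigma_min) ** t
    return x0 + sigma * rng.standard_normal(x0.shape), sigma

# Noise a toy "SSH field" lightly (t=0) and heavily (t=1).
x0 = np.zeros((4, 4))
_, s_low = noised(x0, 0.0, rng=np.random.default_rng(0))
_, s_high = noised(x0, 1.0, rng=np.random.default_rng(0))
```

In conditional generation, the reverse process starts from heavy noise and denoises step by step while being conditioned on the partial along-track observations, which is what turns the diffusion model into an interpolator.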

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Fusion of Multi-Source Data for Comprehensive Assessment of Phytoplankton Composition

Authors: Ehsan Mehdipour, Dr. Hongyan Xi, Dr. Alexander Barth, Dr. Aida Alvera-Azcárate, Prof. Dr. Adalbert Wilhelm, Prof. Dr. Astrid Bracher
Affiliations: Alfred Wegener Institute (AWI), Helmholtz Centre for Polar and Marine Research, School of Business, Social & Decision Sciences, Constructor University, GeoHydrodynamics and Environment Research (GHER), University of Liège, Institute of Environmental Physics (IUP), University of Bremen
Assessing the impact of climate change on marine ecosystems requires high-quality, globally consistent data on phytoplankton composition, a crucial indicator of ocean health and carbon cycling. This data is typically obtained through optical observations using various platforms, including ships, satellites, and autonomous profiling floats. While each platform contributes valuable insights, integrating these datasets is essential for producing a unified, spatiotemporally comprehensive view of phytoplankton dynamics. Such integrated data significantly improves campaign planning and detailed analysis, while also enabling comparisons with other physical and biogeochemical parameters. However, the integration of multi-source datasets poses considerable challenges due to their heterogeneous nature, with variations in spatial and temporal resolution and inconsistencies in uncertainty estimates. This study introduces a novel methodology for integrating multi-source datasets to estimate phytoplankton functional type (PFT) chlorophyll-a (Chla) concentration across diverse spatiotemporal scales. The approach was applied to data collected during the RV Polarstern PS113 transatlantic expedition. The study integrates PFT Chla data retrieved from 1) the Sentinel-3 Ocean and Land Colour Instrument (OLCI) multispectral reflectance and 2) hyperspectral absorption data from a flow-through AC-S sensor installed in the ship’s inline water system. For both datasets, spectral data were analysed using empirical orthogonal functions (EOFs) to extract key features, followed by a predictive modelling approach using multi-linear regression. Satellite data for phytoplankton groups in the Atlantic Ocean often has large gaps caused by cloud cover and limitations in retrieval algorithms. To reconstruct this data and create a consistent satellite product, we used the Data-Interpolating Convolutional Autoencoder (DINCAE), an advanced AI-based reconstruction technique. 
This method improved pattern recognition and produced robust, gap-filled satellite data with quantified uncertainties. The datasets from the shipborne AC-S sensor were then seamlessly integrated with the reconstructed satellite data using Optimal Interpolation, a data assimilation technique that incorporates spatial and temporal covariance information. The result was a high-resolution, spatiotemporally continuous dataset that covered a corridor along the cruise path and the expedition timeframe. Validation of the final integrated product against PFT Chla concentrations derived from High-Performance Liquid Chromatography (HPLC) pigment analysis of water samples collected during the expedition demonstrated the method’s accuracy. The fused dataset exhibited accuracy comparable or, in certain instances, superior to the original satellite products, confirming the effectiveness of the interpolation process. The implications of this study extend beyond the PS113 expedition. The presented approach offers a scalable framework for integrating diverse datasets from other regions and observational platforms. Future applications will target high-priority areas such as the Arctic Ocean, where climate change impacts are particularly pronounced. Additionally, efforts are underway to generalize the process, enabling its application to similar optical sensor datasets from a wide array of platforms, thereby enhancing the global monitoring and understanding of marine ecosystems in a changing climate.
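The Optimal Interpolation update used for the fusion step follows the textbook analysis equation; a minimal one-dimensional sketch, with made-up covariances and a single ship observation correcting a three-point satellite background:

```python
import numpy as np

def optimal_interpolation(xb, B, y, H, R):
    """Textbook OI/Kalman analysis: blend a background field xb (error
    covariance B) with observations y (operator H, error covariance R):
    xa = xb + K (y - H xb),  K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain
    return xb + K @ (y - H @ xb)

# Background: satellite PFT Chla on a 3-point grid; spatially correlated
# background errors pull neighbours along with the observed point.
xb = np.array([0.5, 0.5, 0.5])
B = 0.04 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))))
H = np.array([[0.0, 1.0, 0.0]])   # ship (AC-S) samples the middle point
y = np.array([0.8])
R = np.array([[0.01]])
xa = optimal_interpolation(xb, B, y, H, R)
```

Because B encodes spatial correlation, the single observation corrects not only the sampled grid point but also its neighbours, which is how the method spreads shipborne information into a continuous corridor.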
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Using Sentinel-2 data to quantify marine traffic in the Archipelago Sea 2018-2024: Where, when and how much?

Authors: Janne Mäyrä, Elina Virtanen
Affiliations: Finnish Environment Institute (Syke)
Monitoring and measuring the amount of marine traffic is vital for estimating its effects on marine habitats. Recreational boating in particular can exert considerable local pressure and threaten habitats formed by submerged aquatic vegetation (SAV). As smaller vessels are able to enter the shallow areas where SAV communities thrive, they increase water turbidity and sedimentation, as well as pollution through potential gray water discharges. However, while commercial marine traffic has long been monitored via the Automatic Identification System (AIS), there is a knowledge gap regarding the trends, volume and local hotspots of recreational boating. Leisure boating is typically measured either through questionnaires, by logging visits to harbors, or with visual observations during field work, which, while providing accurate data, have very limited spatial and temporal coverage. Also, while there is a large body of studies on marine vessel detection from satellite imagery, most of it focuses on commercial traffic. In this study, we used the workflow developed in our earlier research [1] to detect and quantify marine traffic in the Archipelago Sea using Sentinel-2 images. The Archipelago Sea is the largest archipelago in Europe and the major hub for recreational boating on the Finnish coast. Our study period spans from 2018 to 2024, and covers the main leisure-boating period (May-September) on the Finnish coast for each year. In total, we processed over 1,600 individual Sentinel-2 tiles from 441 timesteps, of which 159 produced suitable results for further analyses. Most of these data were from May, June or July, as only 24 usable images were from August and September. We used HELCOM AIS data to classify our results into commercial and recreational traffic, as well as to further validate our results.
Based on our results, marine traffic in the Archipelago Sea was concentrated close to the established navigation routes: the median distance from a route was 135 meters, and 75% of all detections were less than 600 meters away from the routes. July, the main summer holiday season in Finland, was the most active month each year, and almost half of all detected vessels were from July. Of the observed years, 2022 had the least detected traffic, whereas the two preceding years were the most active. These observations correspond with the COVID-19 travel restrictions and their lifting in Finland, suggesting that there was a clear surge in recreational boating during the pandemic. To the best of our knowledge, this is the first time that marine traffic, including recreational boating, has been quantified at this scale in the Archipelago Sea. Our results also show how openly available satellite images, coupled with powerful object detection techniques, can support the quantification of recreational boating without relying on resource-heavy methods (e.g. AIS, high-resolution images). The information can be used to support the development of spatial marine management and conservation measures, and further in ecological research and impact avoidance. Bibliography: [1] Janne Mäyrä, Elina A. Virtanen, Ari-Pekka Jokinen, Joni Koskikala, Sakari Väkevä, and Jenni Attila, "Mapping recreational marine traffic from Sentinel-2 imagery with YOLOv8", http://dx.doi.org/10.2139/ssrn.4827287
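The distance-to-route statistics reported above (median distance, 75th percentile) can be computed by assigning each detected vessel the distance to its nearest navigation-route vertex. The sketch below uses made-up planar coordinates in metres, not the actual Archipelago Sea detections; `nearest_route_distance` and `percentile` are illustrative helpers.

```python
import math
from statistics import median

def nearest_route_distance(vessel, route_points):
    """Distance (m) from one detection to the closest route vertex."""
    vx, vy = vessel
    return min(math.hypot(vx - rx, vy - ry) for rx, ry in route_points)

def percentile(values, q):
    """Simple nearest-rank percentile, q in [0, 100]."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, round(q / 100 * (len(ordered) - 1)))]

# A straight fairway sampled every 100 m, and four detected vessels
route = [(float(x), 0.0) for x in range(0, 10_000, 100)]
vessels = [(500.0, 120.0), (1500.0, 90.0), (2500.0, 640.0), (3500.0, 200.0)]
dists = [nearest_route_distance(v, route) for v in vessels]
med, p75 = median(dists), percentile(dists, 75)
```

In practice the routes would come from nautical chart fairway geometries and the distances from a vectorized nearest-neighbour query, but the summary statistics are computed the same way.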
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Satellite detection of small vessels to monitor their potential impact on marine protected areas

Authors: Enrique Portalés-Julià, Javier Menéndez-Blázquez, Gonzalo Mateo-García, David March, Luis Gómez-Chova
Affiliations: Image Processing Laboratory, University of Valencia, Cavanilles Institute of Biodiversity and Evolutionary Biology (ICBiBE), University of Valencia, International Methane Emissions Observatory, United Nations Environmental Programme
Human activities, such as resource extraction, pollution, underwater noise, and species composition changes, are significantly altering oceans and seas, with no marine area left untouched and 41% heavily impacted by multiple anthropogenic factors. However, there is still a gap in understanding the pace and cumulative impact of these pressures due to their expanding and diversifying nature. Effective ocean management must address these impacts through monitoring and predictive techniques. Marine protected areas (MPAs), as controlled and regulated spaces, provide an ideal setting to study how human activities and climate change affect marine ecosystems. Concretely, in the Cabrera and Atlantic Islands National Parks (Spain), these marine activities are carried out by small recreational vessels and artisanal fishing boats, whose access is regulated. A good characterization of the spatiotemporal patterns of human activities is key to understanding anthropogenic effects in these marine protected areas, and therefore effective monitoring of vessels in these areas is essential. In this regard, tracking systems such as automatic identification systems (AIS) and vessel monitoring systems (VMS) are commonly used in industrial activities; however, their use is not mandatory for recreational activities. Moreover, these systems may have limitations in areas with high vessel density, restrictions on their public access, and may not always be used as legally required. Data from Earth observation satellites can be used as an alternative to fill the information gap left by these methods. Recent studies have demonstrated the potential to detect previously unmonitored global industrial fishing activities using publicly available resources such as the Sentinel-1 and Sentinel-2 satellite missions. However, recreational vessels often fall below the spatial resolution of these satellites (10-20 metres). In this work, we present an AI-driven system for monitoring recreational activities using remote sensing data.
To achieve this goal, we have built a high-quality dataset of PlanetScope constellation imagery (3-meter resolution) containing approximately 20,000 manually labeled instances of small vessels identified in the Spanish terrestrial-marine National Parks. We processed the annotated dataset into an analysis-ready data format, obtaining approximately 6,000 reference image-mask pairs of 128x128 pixels. We performed a comparative analysis on this dataset, evaluating several fully convolutional neural networks (FCNN) and object detection models. These methods achieved a mean per-object detection F1 score of 0.91 across more than ten different detection approaches. Alongside the high-resolution commercial imagery dataset, we also compiled a manually labeled dataset of Sentinel-2 imagery, comprising more than 3,000 vessel instances. To our knowledge, this is the first dataset focusing on small recreational vessels along the Mediterranean coast. It allowed us to replicate the analyses performed with Planet imagery and to examine the resolution limitations of Sentinel-2 data for this monitoring task. Both datasets and models will be made publicly available to foster collaboration and innovation in this field. We have applied our multispectral detection models over the extent of these marine protected areas, characterizing the spatial and temporal patterns of human activities. Moreover, we compared AIS data with our detected vessels to quantify the proportion of vessels that are not monitored when relying solely on traditional monitoring systems. This approach has the potential to be scaled up to other parts of the Mediterranean Sea, where different practices such as recreational and artisanal fishing, also performed with small vessels, are potentially putting significant pressure on marine ecosystems. Acknowledgements: This work was supported by the Spanish Ministry for Ecological Transition and the Demographic Challenge (project PPNN/3002/2023).
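The per-object detection F1 score quoted above is typically computed by matching predicted boxes to reference boxes via intersection over union (IoU). The sketch below shows one common greedy variant with an IoU threshold of 0.5; the exact matching rule used in the study may differ, so treat this as an illustration of the metric rather than the authors' implementation.

```python
def iou(a, b):
    """Intersection over union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_f1(preds, refs, thr=0.5):
    """Greedy per-object matching: a prediction is a true positive when
    its best IoU with an unused reference box reaches the threshold."""
    unmatched = list(refs)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda r: iou(p, r), default=None)
        if best is not None and iou(p, best) >= thr:
            tp += 1
            unmatched.remove(best)
    fp, fn = len(preds) - tp, len(unmatched)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
```

Unmatched predictions count as false positives and unmatched reference vessels as false negatives, so the score penalizes both spurious detections and missed boats.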
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Operational High-Resolution Ocean Current Forecasts for Maritime Stakeholders

Authors: Evangelos Moschos, Hannah Bull, Inès Larroche, Amélie Pesnec, Théo Archambault, Pierre Garcia
Affiliations: Amphitrite, LIP6, Sorbonne Université
We introduce a machine learning framework capable of delivering 14-day operational forecasts of ocean surface currents at high resolution on a global scale. This neural network leverages real satellite data and in-situ observations, eliminating the need for regional numerical models or synthetic experiments to learn ocean dynamics. Our model is trained using physical constraints for coastal currents. Moreover, we incorporate hourly tidal currents to enhance accuracy in regions where tidal currents dominate. Validation against drifter-based in-situ data demonstrates superior accuracy compared to numerical forecasting methods. Producing accurate real-time nowcasts and forecasts of ocean surface currents is particularly challenging due to the indirect and often incomplete nature of satellite remote sensing data. Our approach employs a multi-stage, multi-arm network designed to integrate sparse and noisy data sources into precise, high-resolution forecasts. Since ocean surface currents leave discernible signatures in high-resolution sea surface temperature (SST) and chlorophyll imagery, the model uses previous days of these satellite images as inputs. Additionally, sparse measurements from traditional nadir altimeters and higher-resolution data from the new SWOT altimeter measure sea surface height, from which we can derive geostrophic ocean currents. Past satellite altimetry data are incorporated as inputs, while future altimetry observations serve as targets during training. The model architecture is a multi-arm encoder-decoder neural network designed to forecast sea surface height and surface currents at high resolution. It leverages decades of sparse Nadir satellite altimetry data, recent data from the wide-swath SWOT altimeter, high-resolution observations, and drifter measurements to capture both large-scale patterns and small-scale dynamic features. 
To further refine predictions, the model applies physical constraints on coastal currents, ensuring consistency with ocean dynamics. Additionally, hourly tidal currents are integrated in regions where they dominate the overall flow. A positional encoding module enhances the model’s ability to account for temporal and geographic variability. The resulting forecasts demonstrate superior accuracy compared to leading methods, particularly in regions with high kinetic energy and rapidly evolving ocean conditions. To validate our ocean current model, we use a global drifter dataset and in situ measurements collected from ships. The drifter data provide direct observations of surface currents, allowing us to compare both the angle and magnitude of the currents forecasted by our model against real-world measurements. Additionally, we compare the observed effects of currents on ships with our model's predictions. This allows us to refine our understanding of how accurately the model captures real-world dynamics, ensuring its reliability for practical applications such as navigation and ocean forecasting. We demonstrate applications of this forecasting model in maritime route optimisation. By generating reliable MetOcean data - particularly ocean current forecasts - we empower vessels to optimize their routes, resulting in substantial fuel savings and reduced CO2 emissions. Our findings are based on pilot studies conducted in collaboration with shipowners and captains navigating the Mediterranean Sea. Furthermore, we will present how satellite observation of the ocean currents can improve the detection of plastic hotspots and enhance clean-up in the Pacific Ocean. These innovations provide critical tools not only for maritime stakeholders, such as shipping and environmental agencies, but also for scientists engaged in operational oceanography, enabling more precise ocean monitoring and decision-making.
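The step "from which we can derive geostrophic ocean currents" follows the standard balance u = -(g/f) ∂η/∂y, v = (g/f) ∂η/∂x applied to the sea surface height (SSH) field. The sketch below uses central finite differences on a tiny grid with an illustrative mid-latitude Coriolis parameter; it is a textbook calculation, not the model's internal machinery.

```python
G = 9.81    # gravitational acceleration, m s-2
F = 1.0e-4  # Coriolis parameter near 45 deg N, s-1

def geostrophic(ssh, dx, dy):
    """Return (u, v) on the interior points of a 2-D SSH grid (metres)."""
    ny, nx = len(ssh), len(ssh[0])
    u = [[0.0] * nx for _ in range(ny)]
    v = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            # Central differences for the SSH slope
            deta_dx = (ssh[j][i + 1] - ssh[j][i - 1]) / (2 * dx)
            deta_dy = (ssh[j + 1][i] - ssh[j - 1][i]) / (2 * dy)
            u[j][i] = -G / F * deta_dy
            v[j][i] = G / F * deta_dx
    return u, v

# SSH rising northward by 1 cm per 10 km gives a uniform westward flow
ssh = [[0.01 * j for _ in range(4)] for j in range(4)]
u, v = geostrophic(ssh, dx=10_000.0, dy=10_000.0)
```

A northward SSH slope of this size yields a zonal current of roughly -0.1 m/s in the Northern Hemisphere, which is why altimetry-derived height fields are such a strong constraint on the surface current forecast.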
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Coastal Pond Aquaculture Expansion in Asia: A Multi-Decadal Satellite Time Series Analysis

Authors: Marco Ottinger, Juliane Huth, Dr Felix Bachofer, Dr. Thorsten Hoeser, Robin Spanier, Kemeng Liu, Prof. Dr. Tobias Ullmann, Prof. Dr. Claudia Kuenzer
Affiliations: German Aerospace Center (DLR), Earth Observation Center (EOC), German Remote Sensing Data Center (DFD), University of Wuerzburg, Institute of Geography and Geology, Department of Remote Sensing,
Aquaculture in Asia has undergone significant expansion, driven by rising fish demand and overfishing. Coastal pond systems, as crucial protein sources, are prevalent in low-lying coastal areas of South, Southeast, and East Asia and vital to regional food security and economies. Employing Earth observation time series data, this study monitored pond aquaculture development in Asia and revealed dramatic changes in the spatiotemporal distribution of these systems. Covering 22 countries, we focused on a 300,000 km coastline, applying a 200-km inland and seaward buffer, resulting in 261 coastal parcels for efficient satellite data processing. Using dense optical satellite imagery, the evolution of coastal pond aquaculture across Asia was analyzed at continental scale from the 1980s to 2019. An automatic multi-sensor approach combining Sentinel-1 Ground Range Detected (GRD) and Sentinel-2 Surface Reflectance (SR) images was used to generate a pond reference dataset of 3.6 million pond objects with an accuracy of 91.9%. Based on this reference pond object dataset, mean annual water masks were derived from the entire Landsat archive (1984-2019), providing multispectral data with 30-meter resolution. These water masks enabled long-term monitoring of the reference pond objects to determine the yearly aquaculture status (active/inactive) for each reference pond identified in 2019. The use of water indices (NDWI, MNDWI, AWEI, and WIFI) and receiver operating characteristic (ROC) curve analysis across eight test sites ensured precise water detection, with the WIFI index performing best. Results indicate a dramatic expansion of pond aquaculture, with active pond areas increasing from 6,500 km² in 1988 to over 19,000 km² in 2019, a more than threefold increase. China leads with 40% of the total pond area, followed by Indonesia (13%), India (11%), Vietnam (7.7%), and Thailand (7.2%).
Comparisons with FAO aquaculture production data demonstrate correlations between reference pond area expansion and officially reported aquaculture production volumes. However, deviations suggest that additional factors, such as changes in environmental conditions, market dynamics, diseases, policy shifts, and economic factors, influence production trends beyond spatial expansion alone. These findings reveal the rapid transformation of Asia's coastal aquaculture and emphasize the need for ongoing monitoring to ensure sustainable development. By integrating geospatial data and long-term satellite observations, this research provides essential insights for policymakers and resource managers, supporting strategies to balance economic growth with environmental conservation and reduce the ecological footprint of expanding pond aquaculture in the coastal zone.
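Two of the water indices named above have well-known normalized-difference forms: NDWI = (Green - NIR) / (Green + NIR) and MNDWI = (Green - SWIR) / (Green + SWIR), with water flagged where the index exceeds a threshold. The sketch below uses a zero threshold as a common starting point; the study tuned per-site thresholds via ROC analysis, and the AWEI and WIFI formulations are not reproduced here.

```python
def ndwi(green, nir):
    """McFeeters NDWI from surface reflectance: positive over open water."""
    return (green - nir) / (green + nir)

def mndwi(green, swir):
    """Modified NDWI, using shortwave infrared instead of NIR."""
    return (green - swir) / (green + swir)

def is_water(green, nir, threshold=0.0):
    """Flag a pixel as water when its NDWI exceeds the threshold."""
    return ndwi(green, nir) > threshold

# An inundated pond pixel: water absorbs strongly in the NIR
flooded = is_water(green=0.12, nir=0.03)  # high index -> water
dry = is_water(green=0.25, nir=0.35)      # vegetation/soil -> not water
```

Applying such a test per pixel and per year to the Landsat archive is what turns each reference pond polygon into the active/inactive time series described above.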
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: B.04.02 - POSTER - Addressing multi-hazards: Compounding and Cascading Events through Earth Observation

Our planet faces a growing threat from complex, interconnected natural disasters. This session explores how Earth Observation (EO) data and technologies, possibly coupled with Digital Twins (DTs), can be leveraged to understand, predict, and ultimately mitigate the impacts of multi-hazards.

We'll delve into:
- Compounding and Cascading Events: Analyze how seemingly separate events like floods and droughts can combine to create more devastating consequences.
- EO for Multi-Hazard Risk Assessment: Discover how EO data from satellites helps map vulnerabilities, monitor real-time conditions, and forecast potential multi-hazard scenarios.
- The Role of Digital Twins in Multi-Hazard Management: Explore how DTs can integrate EO data with other sources to create a virtual representation of a region, enabling simulations and risk assessment for multi-hazards.
- Actionable Solutions from EO and DTs: Showcase real-world applications of EO and DTs in mitigating multi-hazard risks and improving preparedness.

This session targets anyone interested in utilizing EO and DTs for effective multi-hazard management. We'll foster discussion on best practices, emerging technologies, and the path forward for a more resilient future.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Advancing Multi-Hazard Risk Management With On-Board AI and Earth Observation: A Conceptual Framework for Addressing Cascading Hazards

Authors: Pietro Di Stasio, Dr Deodato Tapete, Paolo Gamba, Silvia Liberata Ullo
Affiliations: University Of Sannio, University of Pavia, Italian Space Agency (ASI)
Natural events, often perceived as isolated phenomena, tend to trigger cascading processes that amplify damages and make emergency management more complex. Phenomena such as landslides induced by earthquakes or floods caused by hydraulic obstructions highlight the interconnected nature of primary and secondary hazards, exacerbating vulnerabilities in affected communities and infrastructures. These cascading events [1] demonstrate the complex interplay between physical, social, and environmental systems, underscoring the urgent need for innovative tools capable of capturing these dynamics. Examples from the literature, such as the 2010 Haiti earthquake and the 2022 Luding earthquake in China, are referenced to demonstrate the complexity of cascading hazards and their devastating effects on vulnerable systems [2]. These events highlight how primary phenomena can trigger chains of secondary processes, such as landslides, soil liquefaction, and floods, which exacerbate the impacts on vulnerable populations and infrastructures [3]. This research introduces a novel conceptual framework that leverages Earth Observation (EO) and Artificial Intelligence (AI) technologies, particularly their direct implementation on satellite platforms, offering a transformative approach to tackling the challenges of cascading hazards. EO technologies, such as those provided by Sentinel-1 and COSMO-SkyMed missions [4], have demonstrated their capability to monitor environmental changes with high spatial and temporal resolution. These missions are crucial for identifying surface deformation, hydrological changes, and other indicators of hazard dynamics. When integrated with on-board AI models, these technologies could revolutionize hazard management by enabling satellites to autonomously detect and analyze primary and secondary hazards in real-time. This advancement aligns with the growing need for rapid, precise, and data-driven disaster response strategies. 
The integration of EO and AI technologies represents a significant step toward innovative disaster management solutions, with profound implications for scientific research, policy-making, and operational services. Indeed, despite its infancy, on-board AI processing [5][6] represents a transformative opportunity for disaster risk management. In practice, these technologies can significantly reduce latency, ensuring that critical information reaches decision-makers and emergency responders in near real-time. This capability is particularly valuable in monitoring remote or inaccessible regions, where delays in data transmission and analysis can hinder effective response efforts. Beyond immediate disaster response, on-board AI could facilitate more frequent and regular monitoring of hazard-prone areas, offering predictive insights that support proactive risk mitigation strategies. Despite its potential, this vision faces key challenges, including the current lack of operational satellite missions capable of autonomous AI-driven Synthetic Aperture Radar (SAR) data processing. Addressing these limitations requires cutting-edge experimental designs, the development of tailored hardware, and strategies to optimize algorithmic performance within the constraints of satellite systems. These challenges are explored with a focus on ensuring technical correctness and the feasibility of the proposed approach, offering a pathway to validated, scalable solutions. Prior to focusing on specific case studies, this research adopts a broader, conceptual approach to investigate the integration of EO and AI technologies for multi-hazard risk management. Accordingly, a key contribution of this research is the development of a conceptual framework for multi-hazard risk assessment, which integrates spatial, temporal, and vulnerability data [7] to systematically model the interactions between primary and secondary hazards.
The framework is designed to combine qualitative insights, such as causal relationships between hazards, with quantitative data-driven approaches for modeling cascading events. For example, in a scenario where an earthquake triggers embankment failures and subsequent flooding, the framework would incorporate EO data, such as interferometric SAR to monitor ground deformation of the affected embankments and optical imagery to assess river morphology and embankment conditions. AI plays a key role in processing these datasets through operations like data decimation (prioritizing affected regions), extraction of objects of interest (isolating critical features like embankments and rivers), onboard classification (assessing stability of structures), and change detection (identifying evolving risks like new cracks or shifting water levels). These capabilities would allow for real-time identification and analysis of hazard interactions, with predictive models [8] simulating how primary events (e.g., seismic activity) could lead to secondary effects (e.g., embankment breaches and subsequent inundation of neighbouring land). While the framework is conceptual at this stage, exploring the integration of these AI-driven operations into EO workflows, particularly for SAR and optical data, is a promising avenue for future research. Traditional approaches often treat hazards as independent events, overlooking the interconnected dynamics that define cascading phenomena. This framework challenges such paradigms, emphasizing instead the need to model the temporal evolution of events and their compounded impacts. By simulating these interactions, the proposed framework provides a foundation for designing tools that support rapid, data-driven decision-making during complex disaster scenarios. Moreover, it highlights the importance of integrating pre-event vulnerability assessment with real-time hazard monitoring to enhance preparedness and mitigation strategies. 
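The on-board change-detection operation described above can be illustrated in its simplest form: two co-registered patches (e.g. backscatter before and after a seismic event) are differenced, and pixels whose absolute change exceeds a threshold are flagged, yielding a compact product that could be downlinked instead of the full scene. This is a deliberately naive sketch under hypothetical values; real SAR change detection would operate on calibrated, despeckled imagery with statistically derived thresholds.

```python
def change_mask(before, after, threshold):
    """Binary change mask (1 = changed) plus the changed-pixel fraction."""
    mask = [[1 if abs(a - b) > threshold else 0
             for b, a in zip(row_b, row_a)]
            for row_b, row_a in zip(before, after)]
    changed = sum(sum(row) for row in mask)
    return mask, changed / (len(mask) * len(mask[0]))

# Backscatter-like patches before/after an event; one pixel collapses,
# as a cracked or breached embankment section might
before = [[100, 100], [100, 100]]
after = [[100, 40], [100, 100]]
mask, fraction = change_mask(before, after, threshold=30)
```

Even this minimal form shows the payoff for on-board processing: the mask and the changed-pixel fraction are orders of magnitude smaller than the imagery they summarize.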
As a consequence, this study focuses on the overarching concepts and technical requirements for future EO missions that aim to address cascading hazards comprehensively; this innovation contributes directly to enhanced decision-making and operational services, supporting rapid responses during complex disaster scenarios. In conclusion, this research emphasizes both the immense potential and the current limitations of on-board AI for EO missions. By addressing these challenges, it aims to advance the capabilities of satellite systems for managing cascading hazards more effectively. The study underscores the importance of aligning technological innovation with the needs of disaster-prone communities, fostering interdisciplinary collaboration to overcome existing barriers. Ultimately, it contributes to the development of scalable, sustainable strategies for addressing the growing complexity of global environmental risks, building a more resilient and prepared global community in the face of an increasingly uncertain future. References [1] PESCAROLI, G.; ALEXANDER, D. What are cascading disasters? UCL Open Environment, 2019, 2019.1. [2] ZHANG, Shuai, et al. Increased human risk caused by cascading hazards - A framework. Science of the Total Environment, 2023, 857: 159308. [3] TSOUTSOS, Michail-Christos. Cascading Effects of Major Natural Hazards in Greece. In: Proceedings. MDPI, 2023. p. 27. [4] TAPETE, Deodato; CIGNA, Francesca. COSMO-SkyMed SAR for detection and monitoring of archaeological and cultural heritage sites. Remote Sensing, 2019, 11.11: 1326. [5] DI STASIO, Pietro, et al. Early detection of volcanic eruption through Artificial Intelligence on board. In: 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE). IEEE, 2022. p. 714-718. [6] Razzano, F., Di Stasio, P., Mauro, F., Meoni, G., Esposito, M., Schirinzi, G., & Ullo, S. L. (2024).
"AI Techniques for Near Real-Time Monitoring of Contaminants in Coastal Waters on Board Future Phisat-2 Mission," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 16755-16766, 2024, doi: 10.1109/JSTARS.2024.3455992. [7] DE ANGELI, Silvia, et al. A multi-hazard framework for spatial-temporal impact analysis. International Journal of Disaster Risk Reduction, 2022, 73: 102829. [8] M. R. Busto, T. Eda, T. Udagawa and T. Sekine, "Case-Study: Two-Phase AI Prediction for Onboard Synthetic Aperture Radar (SAR) Data Processing," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 7320-7324, doi: 10.1109/IGARSS53475.2024.10642771.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: Exploring the potential of remote sensing data to assess the combined effects of drought and heatwave over the Amazon Basin in the year of 2023.

Authors: Vitor Miranda, Undergraduate Ronaldo Albuquerque, MSc. João Geirinhas, PhD Leonardo de Faria Peres, PhD Renata Libonati, PhD. Juan Carlos Jimenez, PhD Isabel Trigo
Affiliations: Instituto Português do Mar e da Atmosfera - IPMA, Universitat de Valencia - UVEG, Universidade Federal do Rio de Janeiro - UFRJ, Instituto Dom Luiz - IDL
Recent studies underscore a marked warming trend in the Amazon basin, coupled with projections of a potential tipping point by 2050, where the intensification of unprecedented climatic extremes such as droughts and heatwaves threatens a shift from its well-established, energy-limited hydrological regime to one more dependent on water availability. Observations from September-November 2023 reveal co-occurring extreme drought and heat events, with record-low Amazon River levels and amplified land surface temperatures (LST), emphasizing the region's vulnerability. Co-occurring drought and heatwave conditions are associated with strong evaporation rates and sharp decreases in soil moisture levels. Such drying trends may reach a point where the land surface loses its capability to meet the atmospheric water demand, triggering a feedback loop in which soil desiccation amplifies surface temperatures through enhanced sensible heat fluxes. This promotes the re-amplification and the propagation in space and time of the already established drought and heatwave conditions. This study examines the relationship between all-sky Land Surface Temperature (All-Sky-LST), derived primarily from the GOES satellite series, and root-zone soil moisture provided by the Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF). Findings suggest that prolonged droughts, combined with persistent heatwaves, may accelerate soil desiccation and disrupt evapotranspiration processes, amplifying the region's climate vulnerabilities. Analyses for specific Amazon areas, i.e., Northeast (NE), Northwest (NW), Southeast (SE), and Southwest (SW), reveal distinct dynamics. The NE, SW, and SE regions showed soil moisture deficits and/or high LST anomalies during the September-November 2023 period, highlighting their susceptibility to extreme climatic events and increasing risks of ecological disruption.
In contrast, the NW region exhibited relative resilience, with stable soil moisture levels and moderate temperature responses throughout this period. Our findings highlight the importance of enhanced remote sensing capabilities for the study of the Amazon basin resilience in the face of escalating extreme events.
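The anomalies discussed above are conventionally expressed as standardized departures from a climatology, so that a compound drought-heat event appears as a strongly positive LST anomaly coinciding with a strongly negative soil-moisture anomaly. The sketch below illustrates that diagnostic with made-up numbers, not actual GOES or H-SAF values.

```python
from statistics import mean, stdev

def standardized_anomaly(value, climatology):
    """(value - climatological mean) / climatological standard deviation."""
    return (value - mean(climatology)) / stdev(climatology)

# September-November LST climatology (deg C) for one grid cell, then the
# hypothetical 2023 value; likewise for root-zone soil moisture (m3 m-3)
lst_clim = [24.1, 24.8, 23.9, 24.5, 24.3, 24.7]
sm_clim = [0.30, 0.28, 0.33, 0.31, 0.29, 0.32]
z_lst = standardized_anomaly(26.4, lst_clim)  # strongly positive: heat
z_sm = standardized_anomaly(0.18, sm_clim)    # strongly negative: drought
```

Mapping where the two z-scores are simultaneously extreme is one simple way to delineate the compound-event regions the abstract contrasts (NE/SE/SW vs. the more resilient NW).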
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: ARCEME event database for cascading drought and extreme precipitation events

Authors: Dr. Mélanie Weynants, Khalil Teber, Miguel D. Mahecha, Marcin Kluczek, Dr. Jędrzej S. Bojanowski, Jan Musiał, Fabian Gans
Affiliations: Max Planck Institute for Biogeochemistry, Leipzig University, CloudFerro S.A.
Recent advances in the study of compound and cascading extreme events have highlighted the growing need for extensive amounts of data to support empirical and systematic investigations. Events involving multiple hazards often have complex spatio-temporal dynamics, and they require robust data sources to enable meaningful analysis and improve the capacity to model their impacts. Despite the overwhelming increase in the availability of high-resolution Earth Observation (EO) data in recent years, sampling compound and cascading extreme events remains challenging. This limits the current number of analysis-ready datasets. The lack of such datasets restricts the scope of scientific studies and hinders efforts to generalize their findings. Here, we introduce the ARCEME cascading event database, currently under development, whose aim is to sample, globally, cascading drought and extreme precipitation events that impact society and ecosystems. The events, occurring between 2016 and 2024, are sampled from different hydro-climatic and socio-economic settings around the world. A dual event-screening approach is used to identify the cascading events. The first approach relies on time series of climate variables to detect weather extremes. The second approach uses records from the Emergency Events Database (EM-DAT) of events that resulted in loss of human life and economic damage. The overall result is a collection of cascading drought and extreme precipitation events that impacted society and/or whose interaction with vegetation can be studied, leveraging multi-sensor EO data. The database will ultimately comprise data cubes of Sentinel-1 and Sentinel-2 images with a spatial coverage of 10 by 10 km, spanning one year before and one year after the extreme precipitation events. This structured approach ensures that the data captures the pre-event conditions, the event itself, and its aftermath, providing a comprehensive basis for detailed analysis.
The ARCEME cascading event database aims to make available a high-quality, well-structured, analysis-ready database suitable for a wide range of applications, such as synoptic global studies on cascading and compound events, regional and local case studies on single extreme events, or machine learning applications aimed at predicting the impacts of extreme events on the vegetation state.
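The first screening approach, detecting weather extremes from climate-variable time series, is commonly implemented with percentile thresholds: a drought spell when precipitation stays at or below a low climatological percentile for several consecutive steps, an extreme-precipitation event when a value exceeds a high percentile. The sketch below uses illustrative 10th/95th percentiles and a made-up weekly series; it is not ARCEME's actual screening code.

```python
def percentile(values, q):
    """Nearest-rank percentile of a sample, q in [0, 100]."""
    ordered = sorted(values)
    return ordered[int(q / 100 * (len(ordered) - 1))]

def screen_events(precip, low_q=10, high_q=95, min_dry_steps=3):
    """Return (drought start indices, extreme-precipitation indices)."""
    lo, hi = percentile(precip, low_q), percentile(precip, high_q)
    extremes = [t for t, p in enumerate(precip) if p > hi]
    droughts, run = [], 0
    for t, p in enumerate(precip):
        run = run + 1 if p <= lo else 0  # count consecutive dry steps
        if run == min_dry_steps:
            droughts.append(t - min_dry_steps + 1)
    return droughts, extremes

# Weekly precipitation (mm): a dry spell followed by a heavy downpour,
# i.e. the drought-then-extreme-rain cascade the database targets
weekly_mm = [5, 0, 0, 0, 6, 5, 80, 4, 6, 5]
droughts, extremes = screen_events(weekly_mm)
```

A drought event immediately followed by an extreme-precipitation event in the same series is exactly the cascading pattern that, cross-checked against EM-DAT impact records, qualifies a site for a Sentinel-1/Sentinel-2 data cube.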
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: During Drought Lights Go Out – EO-Based Monitoring Of Ecuador´s Drought Cascade

Authors: Weston Buckley Anderson, Valerie Graw, Inga Lammers, Niels Dedring, Dr. José Jara-Alvear, Dr. Pablo Guzman
Affiliations: Ruhr-University Bochum (RUB), Universidad del Azuay (UDA), University of Maryland (UMD), Department of Geographical Sciences
Drought conditions persisted across multiple countries of South America in 2023 and 2024. In 2024, Ecuador also faced one of its worst droughts in more than 60 years, leading to water and electricity shortages as well as fire outbreaks across the country, affecting thousands of hectares of land and putting urban areas at risk. While 80% of Ecuador's energy is renewable and largely dependent on hydropower, drought conditions are impacting the urban system intensively, resulting in extended power cuts of up to 14 hours per day. In addition, fire outbreaks affect urban settlements, national wildlife and biodiversity. Ongoing dry conditions are therefore triggering a cascade of risk in the country, and the detection of tipping points affecting urban systems with regard to water shortages and fire hazard is of key importance. The use of hydropower in Ecuador has increased rapidly since 2005, with production tripling within only 15 years. While hydropower is known as a clean and sustainable energy source, the establishment of a dam within geographically dynamic regions represents a challenge of its own: the maintenance of hydropower dams, but also their high dependence on stable climate conditions, need to be considered. This study integrates Earth observation data to provide a spatio-temporal profile of the drought cascade in Ecuador, starting from ENSO models for early warnings of drought conditions, through precipitation deficits and decreasing water dam levels, to the decrease of night-time lights in urban centers, in combination with fire hazard and risk mapping throughout the country. In combination with groundwater monitoring based on LDAS (Land Data Assimilation System), sequential time series analysis of Sentinel-1 data will provide insights into decreasing water levels close to hydropower dams. 
Data fusion of Sentinel-1 and Sentinel-2 data will support the temporal characterization of drought conditions, with a focus on vegetation condition, surface water and soil moisture monitoring. Phenometrics across scales will support the identification of drought dynamics for the water catchment areas. Sentinel-3 will further contribute data on Land Surface Temperature (LST). Power cuts and the lack of access to energy are monitored with daily NASA Black Marble data to estimate energy shortages. This research identifies drivers and tipping points within a complex coupled human-environment system with regard to drought dynamics, to provide a better understanding of interdependencies and the stability or instability of systems.
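As a minimal illustration of the kind of time-series screening such monitoring relies on, a standardized-anomaly (z-score) check can flag periods where an observed quantity, such as night-time radiance or precipitation, drops far below a baseline. The series and threshold below are invented for the example, not taken from the study:

```python
import numpy as np

def standardized_anomaly(series, baseline_len):
    """Z-score of each value relative to the mean and standard deviation of
    an initial baseline period -- a minimal stand-in for anomaly screening
    of monitoring time series."""
    series = np.asarray(series, dtype=float)
    mu = series[:baseline_len].mean()
    sigma = series[:baseline_len].std(ddof=1)
    return (series - mu) / sigma

# Illustrative monthly radiance values: a stable baseline, then a sharp drop
radiance = [50, 52, 49, 51, 50, 48, 51, 50, 49, 52, 30, 28]
z = standardized_anomaly(radiance, baseline_len=10)
flagged = [i for i, v in enumerate(z) if v < -2.0]  # months with strong deficit
```

Real drought indicators (e.g. standardized precipitation indices) use fitted distributions and seasonal baselines rather than a single global mean, but the flagging logic is analogous.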
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone J-K)

Poster: The Future Impacts of Multihazard Events

Authors: Dr Annie Winson, Claire Dashwood, Kathryn Leeming, PhD Luke Bateson, Katy Lee, Erin Mills, Leo Rudczenko, Anna Harrison, Dr Kay Smith, Miss Holly Hourston, Rob Shaw
Affiliations: British Geological Survey
The 'Future Impacts of Multihazard Events (FIME)' project is a UK Government-funded project to improve our understanding of how future climate change will increase the impacts of hazardous events on the people and properties of the UK. Multihazard events involve the interaction of more than one natural hazard, for example large winter storms and landslides. Due to climate change we expect the incidence and impact of hazardous events to increase, meaning that more properties and people over a larger area will be affected than before. Currently, hazards such as clay shrink-swell have the largest geological impact on homeowner insurance claims in the UK (£3 billion in the last decade), as they lead to subsidence and heave damage. The UKCP18 climate projections anticipate a greater chance of warmer, wetter winters and hotter, drier summers, which is likely to increase the frequency and magnitude of such hazards. We also anticipate that sea level change will lead to a larger number of UK residents becoming exposed to coastal flooding as well as storm surges and winds, driven by increases in the intensity, frequency and duration of storms. It is essential that we begin to develop an understanding of the potential scale of these impacts, as well as methods for identifying and monitoring areas of key concern. FIME aims to connect three BGS assets related to multihazards and climate change (eTOMRAP, GeoClimate and the GeoSure Insurance Product, GIP) to advance our understanding of the location, magnitude and risk from future multihazard events. eTOMRAP, the extended Tool for Multihazard Risk Assessment in Python, takes in spatial data for hazards and couples it with building data to assess multihazard risk across the UK, incorporating multihazard impact chain modelling, cascading hazards and fragility failure curves. 
The BGS GIP gives an index-level assessment of the potential for a geological deposit to create financial insurance loss due to natural ground movement. The BGS GeoClimate shrink-swell national datasets show the potential change in subsidence due to future changes in climate and identify the areas projected to experience the largest increases in susceptibility to subsidence over the next century. Initial work in the FIME project focuses on categorizing both the hazard and exposure changes that are likely to develop through time: 1) modelling future multihazards: here we create a multihazard impact network for a case study area in the northeast of the UK, adding a forecasting component aligned with various future climate projections. We will use ground motion trends derived from InSAR to inform this network and identify areas that are changing most rapidly. 2) future exposure assessment in our case study location, producing datasets that describe a range of potential future changes in exposure scenarios. This will be informed by satellite-derived land use change / city growth maps that will be forward modelled using generative AI. By integrating Earth Observation datasets with our understanding of how multihazards interact and impact the public, we aim to develop an understanding of changing multihazard risk scenarios and propose methods for monitoring these for the wider UK.
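Fragility curves of the kind mentioned above are commonly expressed as a lognormal CDF of hazard intensity: the probability of reaching or exceeding a damage state rises smoothly with intensity. The sketch below shows this standard form; the parameter values are invented for illustration and are not FIME's calibrated curves:

```python
import math

def fragility(im, theta, beta):
    """Probability of reaching or exceeding a damage state at hazard
    intensity `im`, using the common lognormal fragility form
    P = Phi(ln(im / theta) / beta), where `theta` is the median capacity
    and `beta` the lognormal dispersion (Phi = standard normal CDF)."""
    return 0.5 * (1.0 + math.erf(math.log(im / theta) / (beta * math.sqrt(2.0))))

# Example: damage probability at weak vs. strong hazard intensity
p_low = fragility(0.1, theta=0.4, beta=0.5)   # well below median capacity
p_high = fragility(0.8, theta=0.4, beta=0.5)  # well above median capacity
```

By construction the curve passes through 0.5 at the median capacity `theta`, which is why that parameter is often reported alongside the dispersion.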
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: C.05.05 - POSTER - The German EnMAP Mission: 3 Years of hyperspectral data - From Science to Environmental Applications

The German hyperspectral satellite EnMAP will have been in orbit for 3 years, acquiring data at an increasing pace. User feedback collected during these years at conferences and workshops is extremely positive. The EnMAP HSI instrument generates data of a quality beyond expectations. During the time of operations, the ground segment team, supported by the space segment and in collaboration with the science segment, gradually enhanced the mission's proficiency. They implemented new features and fine-tuned the observing strategy, optimizing the exploitation of the satellite's capabilities. Moreover, they further developed the processing pipeline, continuously improving the quality of the derived products.
We will give an overview of the status of operations including EnMAP observation strategy and synergies with other hyperspectral missions. A special focus will be on the current science strategy of the mission and successful application examples from science and industry.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Mapping Forest Canopy Nitrogen Content From EnMAP Imaging Spectroscopy by Coupled Leaf-Canopy Radiative Transfer Models and Gaussian Process Regression

Authors: Nizom Farmonov, Dr. Boris Thies, Prof. Dr. Jörg
Affiliations: Philipps-universität Marburg, Department Of Geography, Laboratory For Climatology And Remote Sensing
Leaf nitrogen is an essential nutrient for the plant growth cycle and plays a key role in the functioning, dynamics, and biodiversity of forest ecosystems. In many studies, nitrogen (N) has been estimated from remote sensing data mainly on the basis of parametric regression models that establish the relationship between chlorophyll content and N. However, chlorophyll pigments within a plant contain only a minimal portion of N, accounting for less than 2% of total nitrogen in the leaf. Recent studies show that leaf protein content (Cp) is the primary nitrogen-containing biochemical component and a more meaningful proxy for N. There is a lack of research on the nitrogen content of the forest canopy that exploits the correlation between Cp and N, leveraging continuous spectral signatures from hyperspectral imagery. In this study, we used the new version of the leaf optical properties model, PROSPECT-PRO, and the invertible forest canopy reflectance radiative transfer model INFORM to generate a training database for machine learning modelling. In the PROSPECT-PRO leaf model, the dry matter of the leaf is separated into Cp and carbon-based constituents (CBC), allowing Cp to be linked to N via the protein-to-nitrogen conversion factor of 4.43. During a field campaign, tree climbers collected sun and shade leaf samples from the upper part of the canopy, and the nitrogen (N) content of the leaves was analyzed in a laboratory. In addition, nondestructive biophysical traits (e.g., leaf chlorophyll and anthocyanin content) were measured using a DUALEX optical leaf-clip meter, and these were used for PROSPECT-PRO leaf model parametrization. Spaceborne hyperspectral EnMAP data were collected one day after the in situ measurements. Gaussian process regression (GPR) was trained on the output of the RTMs and validated against the measured ground N data. 
Active learning was used to select the most meaningful training samples against ground-truth data. The optimized ML model was applied pixel by pixel to the EnMAP data, and associated uncertainty maps were calculated. The results of this research will be presented at the Living Planet Symposium 2025.
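Two ingredients of this workflow, the protein-to-nitrogen conversion and a Gaussian process regressor, can be sketched as follows. The GPR here is a bare-bones RBF-kernel implementation standing in for the model trained on PROSPECT-PRO/INFORM simulations; all inputs below are synthetic toy data:

```python
import numpy as np

CP_TO_N = 4.43  # protein-to-nitrogen conversion factor cited in the abstract

def protein_to_nitrogen(cp):
    """Convert leaf protein content Cp to nitrogen content: N = Cp / 4.43."""
    return np.asarray(cp, dtype=float) / CP_TO_N

def gpr_predict(X_train, y_train, X_test, length_scale=0.2, noise=1e-6):
    """Bare-bones Gaussian process regression: RBF kernel, small noise term,
    posterior mean only (no uncertainty propagation, unlike the study)."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale**2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    return rbf(X_test, X_train) @ np.linalg.solve(K, y_train)

# Toy check: learn a smooth spectrum-to-trait relation and predict it back
X = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X[:, 0])
y_hat = gpr_predict(X, y, X)
```

A production retrieval would use a full GP library (with hyperparameter optimization and predictive variances, which is what enables the per-pixel uncertainty maps mentioned above); this sketch only shows the core posterior-mean computation.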
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Environmental Rehabilitation in the Western Negev: Asbestos Mapping with EnMAP Satellite Data

Authors: Elad Sagi
Affiliations: Israel Space Agency
The October 2023 conflict in the Western Negev resulted in widespread environmental damage, including potential asbestos contamination from damaged buildings. To address this hazard, the Ministry of Environmental Protection and the Israel Space Agency spearheaded a project utilizing advanced remote sensing technology. This initiative employed hyperspectral data acquired by the German Environmental Mapping and Analysis Program (EnMAP) satellite to identify and map asbestos-containing materials. Preliminary results demonstrate the efficacy of this approach, achieving an 84% success rate in asbestos detection. This study highlights the potential of EnMAP data for rapid and accurate asbestos mapping in conflict zones, facilitating informed decision-making for targeted remediation and safeguarding public health during post-conflict environmental rehabilitation.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: EnMAP and PRISMA time series for agricultural practice

Authors: Matthias Wocher, Anne Schucknecht, Anita Bayer, Stefanie Steinhauser, Tobias Hank
Affiliations: OHB System AG, Ludwig-Maximilians University
The hybrid approach, combining physically based modelling and machine learning, is considered a promising generalized and transferable method for the retrieval of crop traits from hyperspectral data, providing important quantitative information on crop status and health for crop monitoring and management. With the launch of the EnMAP and PRISMA missions in 2022 and 2019, respectively, spaceborne spectrometric imagery is increasingly available, and multiple acquisitions can be obtained for tasked test sites over the growing season. Still, validation of hybrid models has so far mostly been performed on airborne campaign datasets and could not yet be evaluated using EnMAP data, as ground measurements were not available close in time to the satellite overpass. In this study, we evaluate the performance of hybrid retrieval models applied to a time series assembled from EnMAP and PRISMA data acquired over winter wheat and summer barley fields in southwest Poland, which were accompanied by a measurement campaign in 2024. In the HyLAP project (DLR call on commercial usage of EnMAP), the established retrieval models will serve as generalized base predictors for primary information products. The derived quantitative information products on crop traits are the basis for secondary information products that can be used directly by agricultural practitioners. In our hybrid retrieval workflow, first, a synthetic database of over 2000 members is generated using PROSAIL-PRO, representing a large variety of crop types and growth stages. The database is then used to train a Gaussian process regression (GPR) algorithm. During the model training process, the 4-year Munich-North-Isar (MNI) dataset is integrated in an Active Learning (AL) framework to improve sample selection from the database. After training, the models are retrained with scene-specific bare soil spectra obtained from an automatic bare soil detection algorithm to reflect current local soil conditions. 
Model performance is evaluated by means of full-scene mappings of a 2024 spaceborne hyperspectral time series of fields in SW Poland, where a field campaign was performed from March until July with in-situ measurements once per month on three winter wheat and three summer barley fields. The collected data included measurements of chlorophyll a+b (Cab), LAI, fresh and dry aboveground biomass (AGBfresh/AGBdry), as well as area-based carbon (Carea) and nitrogen (Narea) contents. Despite frequent cloud cover in the region in summer 2024, three EnMAP (31.03., 01.05., and 05.05.2024) and four PRISMA (05.05., 28.05., 15.06., 12.08.2024) acquisitions were obtained. PRISMA data were spectrally resampled to the EnMAP spectral configuration to facilitate model application and mapping. In terms of model performance on MNI data, AL showed a good ability to select meaningful samples from the training database, resulting in lightweight and efficient GPR models by reducing the number of training samples by 82% on average. Results reached a coefficient of determination (R²) and root-mean-square error (RMSE) of 0.88 and 3.98 µg cm-2 for Cab, 0.94 and 0.38 m² m-2 for LAI, 0.75 and 204 g m-2 for AGBdry, 0.76 and 269 g m-2 for AGBfresh, 0.80 and 1.74 g m-2 for Narea, and 0.83 and 21 g m-2 for Carea. To improve the hybrid retrieval workflow, the next steps are the integration of scene-specific bare soil spectra into all models, the estimation and mapping of all the biophysical and biochemical crop variables for each image and, finally, the evaluation of the model performance over all available measurements collected during the 2024 field campaign.
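The spectral resampling step, matching PRISMA spectra to the EnMAP band configuration, can be illustrated with Gaussian spectral response functions. The band centres and FWHM values below are placeholders for the example, not the actual EnMAP band definition:

```python
import numpy as np

def resample_to_bands(src_wl, src_refl, dst_centers, dst_fwhm):
    """Average a finely sampled source spectrum into target bands, weighting
    each source wavelength by a Gaussian spectral response function centred
    on the target band centre."""
    src_wl = np.asarray(src_wl, dtype=float)
    src_refl = np.asarray(src_refl, dtype=float)
    out = np.empty(len(dst_centers))
    for i, (c, fwhm) in enumerate(zip(dst_centers, dst_fwhm)):
        sigma = fwhm / 2.3548  # convert FWHM to Gaussian sigma
        w = np.exp(-0.5 * ((src_wl - c) / sigma) ** 2)
        out[i] = (w * src_refl).sum() / w.sum()
    return out

# Toy 1 nm source spectrum resampled to three coarse bands
src_wl = np.arange(400.0, 901.0, 1.0)
src_refl = np.full_like(src_wl, 0.3)  # spectrally flat target
bands = resample_to_bands(src_wl, src_refl, [450.0, 650.0, 850.0], [8.0, 8.0, 10.0])
```

Operational resampling would use the sensors' published spectral response functions rather than idealized Gaussians, but the weighted-average principle is the same.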
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: EnPT: An Open-Source Tool for Custom Processing of EnMAP Hyperspectral Data

Authors: Daniel Scheffler, Dr. Niklas Bohn, Dr. Alexander Kokhanovsky, Dr. Mariana A. Soppa, Prof. Dr. Astrid Bracher, Dr. Karl Segl, Dr. Maximilian Brell, Prof. Dr. Sabine Chabrillat
Affiliations: Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, Jet Propulsion Laboratory, California Institute of Technology, Alfred Wegener Institute, AWI, Helmholtz Centre for Polar and Marine Research, Institute of Environmental Physics, University of Bremen, HySpex – Norsk Elektro Optikk AS, Leibniz University Hannover, Institute of soil science
The Environmental Mapping and Analysis Program (EnMAP) is a German hyperspectral satellite mission that has been operational since its launch in April 2022. It provides high-quality hyperspectral data for scientific research and applications in Earth observation, enabling advanced analyses across diverse fields such as land surface monitoring, water quality assessment, and vegetation studies. EnMAP offers data products at three processing levels: Level-1B, which includes radiometrically and spectrally corrected top-of-atmosphere (TOA) radiance in sensor geometry; Level-1C, geometrically corrected radiance; and Level-2A, bottom-of-atmosphere (BOA) reflectance corrected for atmospheric effects. These standardized data products serve as a foundation for advanced applications and user-driven processing workflows. The EnMAP Processing Tool (EnPT), developed by the German Research Centre for Geosciences (GFZ), is an advanced, open-source processing pipeline designed to provide an alternative to the official EnMAP ground segment processing for generating Level-2A BOA reflectance products from Level-1B TOA radiance data. With its flexible architecture and focus on advanced algorithms, EnPT particularly addresses expert users with high demands for accuracy and customization in hyperspectral data processing. EnPT executes a sequence of pre-processing tasks that can be tailored to meet individual user requirements. The pipeline supports radiometric conversion, atmospheric correction, co-registration, and orthorectification in a fully automated workflow, with optional components that users can activate as needed. For atmospheric correction, users may choose from multiple algorithms: SICOR for land surfaces, ACwater/Polymer for water bodies, and the recently integrated ISOFIT for spectrally optimized corrections over land. 
Additionally, the EnFROSP processor is now available, delivering advanced atmospheric correction for snow and ice surfaces and enabling the retrieval of snow-specific parameters, such as albedo, grain size, and impurity load. This makes EnPT particularly suitable for demanding applications in cryosphere research, climate studies, and beyond. EnPT’s geometric processing capabilities ensure precise alignment and geospatial accuracy. Its co-registration is powered by the AROSICS algorithm, which corrects spatial misalignments between the VNIR and SWIR detectors or aligns data to user-specified references, overcoming challenges such as varying resolutions or surface coverage changes. Orthorectification leverages customizable digital elevation models, ensuring reliable spatial fidelity across diverse environments. An important recent advancement is the inclusion of EnMAP-specific uncertainty layers as part of the pipeline output, derived through the ISOFIT atmospheric correction process. These layers represent the first spectral uncertainty estimates for EnMAP Level-2A products, providing users with valuable insights into retrieval reliability and enhancing confidence in their analyses. EnPT is implemented as a Python-based solution, providing three accessible interfaces to meet diverse user needs: a command-line interface, a graphical interface integrated into the EnMAP-Box QGIS plugin, and a Python API for seamless integration into custom workflows. This combination ensures usability for a wide range of tasks while maintaining the advanced functionality required by expert users. By providing a customizable, accurate, and efficient processing solution, EnPT enables the scientific community to fully leverage EnMAP’s hyperspectral capabilities. Its modular design, advanced algorithms, and continuous development based on user feedback make it a reliable choice for tackling complex hyperspectral remote sensing tasks.
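The core idea behind co-registration tools such as AROSICS, detecting the spatial misalignment between two images, can be illustrated with basic FFT cross-correlation. The real algorithm adds subpixel precision, tie-point grids and robustness far beyond this sketch, which only recovers integer circular shifts:

```python
import numpy as np

def estimate_shift(ref, tgt):
    """Estimate the integer (dy, dx) translation that maps `ref` onto `tgt`
    via FFT-based cross-correlation (assumes circular shifts and equal
    image sizes)."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(tgt)
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks in the upper half of the range to negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shift = estimate_shift(ref, np.roll(ref, (3, -5), axis=(0, 1)))
```

In a registration workflow the estimated offset would then be applied (or modelled spatially, for locally varying misalignments) before resampling the target image onto the reference grid.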
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Monitoring Ecosystem Dynamics via Comprehensive Plant Traits Using EO Data Cubes

Authors: Chaonan Ji, Prof Dr Hannes Feilhauer, Stefanie Holzwarth, Eya Cherif, David Montero, Moritz Mischi, Luis Maecker, Prof Miguel D. Mahecha
Affiliations: Remote Sensing Centre for Earth System Research, Leipzig University, German Centre for Integrative Biodiversity Research (iDiv), Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Remote Sensing Data Center (DFD), German Aerospace Center (DLR)
Rapid changes in terrestrial ecosystems due to climate impacts and human activities are leading to declines in biodiversity and ecosystem functionality. Climate-related extreme events such as droughts, heatwaves, and storms further exacerbate this decline, underscoring the need for dynamic ecosystem mapping methods. Remote sensing, especially using multi-modal Earth Observation data, provides a scalable, temporally comprehensive approach to assessing ecosystem extent and condition. Hyperspectral remote sensing (HS) enables detailed monitoring of plant chemical and physiological traits, enhancing insights into vegetation health as well as fundamental ecosystem processes. While HS data from in-situ, drone, and airborne sources have been employed, spaceborne sensors like EnMAP and PRISMA provide broader spatial coverage, increased revisit rates, and lower costs to users. Nevertheless, the availability of HS data is limited, complicating long-term, temporally consistent observations. The Bavarian Forest National Park, known for its conservation-focused management and often used as an experimental field for introducing new remote sensing techniques through data pool initiatives (Holzwarth et al., 2019), was chosen as the study area. The park has experienced significant dieback of spruce trees due to bark beetle disturbances. Using this area, we developed a methodology that integrates multispectral and hyperspectral Earth Observation (EO) data into a framework for ecosystem change analysis. Initially, we processed spaceborne multispectral data, such as Landsat and Sentinel-2, to compute vegetation indices (Montero et al., 2023). These indices were used to create a regional data cube covering 1984 to the present. Concurrently, we collected spaceborne hyperspectral datasets, such as EnMAP (Chabrillat et al., 2024) and PRISMA (Lopinto et al., 2020), and airborne hyperspectral archives (e.g., HySpex) to create a plant traits dataset (Cherif et al., 2023). 
The Earth System Data Cube (ESDC) framework, which aims to unify a large influx of data into a common and simple spatio-temporal data structure, was applied to efficiently interoperate and exploit these time series of datasets (Mahecha et al., 2020; Estupiñan-Suarez et al., 2021; Montero et al., 2024). This approach enables integration of plant traits with long-term time series, facilitating inferences of both past and potential future ecosystem states through temporal hindcasting and forecasting. We develop a regional demonstrator plant traits data cube using the ESDC framework, integrating diverse EO datasets to map ecosystem extent and condition. We also establish a workflow for combining current multispectral time series with discrete hyperspectral data. With upcoming spaceborne hyperspectral missions (e.g., NASA's SBG, ESA's CHIME), this framework marks a critical step toward comprehensive ecosystem analysis leveraging hyperspectral data. Future spaceborne hyperspectral missions have the potential to enable detailed and consistent ecosystem monitoring at unprecedented spectral resolution. The developed approach, which integrates multi-source hyperspectral and multispectral data within a dynamic Earth Observation data cube framework, offers substantial potential for monitoring and forecasting ecosystem dynamics. References: 1. Holzwarth, Stefanie, Uta Heiden, Marco Heurich, Jörg Müller, Martin Stáry, and Andrew Skidmore. "Data pool initiative for the Bohemian Forest Ecosystem; 4 years of success." (2019). 2. Montero, David, César Aybar, Miguel D. Mahecha, Francesco Martinuzzi, Maximilian Söchting, and Sebastian Wieneke. "A standardized catalogue of spectral indices to advance the use of remote sensing in Earth system research." Scientific Data 10, no. 1 (2023): 1-20. 3. Cherif, Eya, Hannes Feilhauer, Katja Berger, Phuong D. Dao, Michael Ewald, Tobias B. Hank, Yuhong He et al. 
"From spectra to plant functional traits: Transferable multi-trait models from heterogeneous and sparse data." Remote Sensing of Environment 292 (2023): 113580. 4. Mahecha, Miguel D., Fabian Gans, Gunnar Brandt, Rune Christiansen, Sarah E. Cornell, Normann Fomferra, Guido Kraemer et al. "Earth system data cubes unravel global multivariate dynamics." Earth System Dynamics 11, no. 1 (2020): 201-234. 5. Estupinan-Suarez, Lina M., Fabian Gans, Alexander Brenning, Victor H. Gutierrez-Velez, Maria C. Londono, Daniel E. Pabon-Moreno, Germán Poveda et al. "A regional Earth system data lab for understanding ecosystem dynamics: An example from tropical South America." Frontiers in Earth Science 9 (2021): 613395. 6. Montero, David, Guido Kraemer, Anca Anghelea, César Aybar, Gunnar Brandt, Gustau Camps-Valls, Felix Cremer et al. "Earth System Data Cubes: Avenues for advancing Earth system research." arXiv preprint arXiv:2408.02348 (2024). 7. Chabrillat, Sabine, Saskia Foerster, Karl Segl, Alison Beamish, Maximilian Brell, Saeid Asadzadeh, Robert Milewski et al. "The EnMAP spaceborne imaging spectroscopy mission: Initial scientific results two years after launch." Remote Sensing of Environment (2024): 114379. 8. Lopinto, Ettore, and Cristina Ananasso. "The Prisma hyperspectral mission." In Proceedings of the 33rd EARSeL Symposium, Towards Horizon, vol. 12. 2020.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: EnMAP data from a user's view

Authors: Sabine Baumann, Emiliano Carmona, Nicole Pinnel, Anett Gidofalvy, Marco Vereda, Martin Habermeyer, Sebastian Hartung, Mirco Tegler
Affiliations: DLR, DLR
The Environmental Mapping and Analysis Program (EnMAP) is a German spaceborne imaging spectroscopy mission aimed at providing accurate information on the state and evolution of ecosystems, their response to human activities and the management of natural resources. EnMAP was launched in 2022 and has been operating since then. The acquired data are mainly dedicated to scientific usage, and all mission data are freely available for scientific use worldwide. As of now, more than 100 000 images are available and about 3000 users have registered to receive data. By the design of the mission, the satellite makes acquisitions on demand as commanded by dedicated orders. Any registered user can order an acquisition but, especially over Europe, this can lead to conflicting orders when the requests target the same geographic areas in the same time window. The EnMAP planning system accepts scientific observation requests for future acquisitions from users, upon submission and approval of an observation proposal, and also gives access to archived data. Over the course of more than three years of operation, several enhancements have been made to the mission, significantly improving the quality and accessibility of data. To shed more light on the workflow behind the ordering process, data acquisition and data reception, we want to show the different structures implemented in the mission’s strategy. There are several options for receiving data, as well as several possibilities for processing levels and parameter settings. Additionally, prior registration is necessary to enter the corresponding platform and portal. This step helps us monitor who is accessing the mission data and supports the building of a secure and trusted community around the project. We show the different workflows for setting up orders for future acquisitions as well as for receiving data from the catalogue archive. 
To address the growing need to exploit the satellite’s capabilities as optimally as possible, we have implemented several acquisition strategies aimed at maximizing data coverage while minimizing unnecessary overlaps or gaps. These strategies are designed to balance user needs, ensuring that each acquisition serves the most critical objectives. They are implemented as Foreground and Background missions and cover highly frequented areas as well as the continuous creation of time series. By demonstrating the workflows and the underlying strategies in action, we will highlight how we have fine-tuned the process to meet various user requirements while ensuring the satellite is used to its fullest potential. Our goal is to provide a clear, hands-on understanding of how these workflows can be leveraged to streamline the ordering process, optimize acquisition plans, and ultimately deliver high-quality, timely data to the users.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Three Years in Orbit: Statistics from the Ground Segment of the EnMAP Mission

Authors: Martin Habermeyer, Dr. Sabine Baumann, Dr. Emiliano Carmona, Prof. Sabine Chabrillat, Sabine Engelbrecht, Sebastian Hartung, Dr. Laura LaPorta, Dr. Miguel Pato, Dr. Nicole Pinnel, Mathias Schneider, Daniel Schulze, Peter Schwind, Dr. Katrin Wirth
Affiliations: DLR - German Aerospace Center, DLR - German Aerospace Center, Helmholtz Center Potsdam, GFZ German Research Center for Geosciences
The EnMAP mission was launched on April 1st, 2022, and has now been in orbit for just over three years, delivering data products of a quality exceeding expectations. There were challenges in the context of acquisition planning and data processing, especially in serving the high demand over regions with many acquisition requests. In this contribution we present the status of EnMAP mission products based on statistics from the Ground Segment’s perspective. These statistics have been derived from the metadata accompanying every data product. The presentation will cover the performance of the mission after the adaptation of the acquisition strategy, namely the adaptation of the background mission and the introduction of the foreground mission. Furthermore, the effects of integrating the additional ground station Inuvik will be discussed and the benefits of the introduction of back-to-back imaging will be shown. The aim of this presentation is to better explain the ordering and tasking options of EnMAP based on these numbers and to derive recommendations for data acquisition. Background Mission: EnMAP prioritizes acquisition requests entered by the users. These requests are distributed unevenly over the globe, both temporally and spatially. In order to exploit the mission’s capacity, the remaining time is used for the “Background Mission”, with over 600 high-interest sites worldwide and the mapping of large areas of land, which are tasked with a lower priority. Foreground Mission: Since the start of operations, experience has shown that not all user requests could be fulfilled. This was partly due to short acquisitions ordered by users, which resulted in a mutual blocking of (short) concurrent acquisition requests. To overcome this problem, the “Foreground Mission” was introduced. A set of pre-selected areas over Europe has been identified in collaboration with users, which is now periodically observed with a high priority. 
Up-to-date information about the status of the observations and future plans is shared on the EnMAP web site (www.enmap.org). Back-to-Back Imaging: Acquisition requests are prioritized and integrated into a timeline, which arranges requests, data downlinks, and other planned activities conflict-free. The time elapsing between two observations used to be ~400 s, which resulted in a distance between two subsequent targets of ~2750 km on the ground. This restriction is now overcome by the introduction of back-to-back imaging. This mode allows taking up to three image acquisitions within a single attitude maneuver, reducing the separation between consecutive acquisitions to 105 seconds (~790 km on the ground). Integration of the Inuvik Ground Station: The Inuvik ground station was successfully integrated into the Ground Segment workflow in August 2024, resulting in an increase in acquisition opportunities, more frequent health monitoring of the satellite and more flexible up- and downlinks. In this contribution we discuss the statistics of all data products acquired in the first three years and show the evolution of the Ground Segment for optimizing the performance of the EnMAP mission regarding acquisitions in the above context.
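The effect of relaxing the minimum separation between observations can be sketched as a toy greedy conflict-free timeline. The request times below are invented, and this is a simple model of the separation constraint only, not the actual EnMAP planning system:

```python
def schedule(request_times, min_gap):
    """Greedy conflict-free timeline: accept each acquisition start time
    (in sorted order) only if it begins at least `min_gap` seconds after
    the previously accepted acquisition."""
    accepted, last = [], None
    for t in sorted(request_times):
        if last is None or t - last >= min_gap:
            accepted.append(t)
            last = t
    return accepted

requests = [0, 50, 120, 300, 420, 500]       # illustrative start times (s)
before = schedule(requests, min_gap=400)     # ~400 s separation era
after = schedule(requests, min_gap=105)      # with back-to-back imaging
```

With the tighter 105 s separation the same set of requests yields more accepted acquisitions, which is the benefit back-to-back imaging provides in practice.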

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Advancing Hyperspectral Data Analysis with the EnMAP-Box

#cloud-native

Authors: Benjamin Jakimow, Andreas Janz, Akpona Okujeni, Leon-Friedrich Thomas, Patrick Hostert, Sebastian van der Linden
Affiliations: Humboldt-Universität zu Berlin, GFZ German Research Centre for Geosciences, University of Helsinki, Universität Greifswald
Imaging spectroscopy data from missions such as EnMAP, EMIT, PRISMA, and the upcoming CHIME and SBG initiatives offer transformative potential for environmental monitoring, agriculture, and mineral exploration. As imaging spectroscopy (IS) data from satellites becomes increasingly accessible, there is a growing need for advanced tools that enable users to handle, visualize and analyze images with hundreds of collinear bands, while seamlessly integrating them with data from other sources, such as multispectral raster data and measurements from field spectroscopy. Existing GIS and remote sensing software often fall short due to high costs, restricted accessibility, or insufficient flexibility for processing data from different sensors. The EnMAP-Box (Jakimow et al. 2023) is an open-source Python plugin for the QGIS geoinformation system, designed to address these challenges. Developed as part of the EnMAP mission activities (Chabrillat et al. 2024), the EnMAP-Box offers comprehensive functionality with a focus on, though not limited to, IS data and spectral libraries. With over 150 algorithms integrated into the QGIS Processing Framework, users can generate classification maps, estimate continuous biophysical parameters, and conduct advanced analyses. These processing algorithms are highly adaptable, capable of running in environments ranging from laptops to cloud-based processing clusters, and are easily embedded into extensive workflows. The EnMAP-Box serves as a platform for domain-specific applications, such as the EnMAP Preprocessing Tools (EnPT, Scheffler et al. 2023) for radiometric corrections, the EnMAP Geological Mapper (EnGeoMap) and EnMAP Soil Mapper (EnSoMap) for mineral and soil classification, and the Hybrid Retrieval Workflow for Quantifying Non-Photosynthetic Vegetation (Steinhauser et al. 2024). Its versatility has made the EnMAP-Box a valuable resource across a multitude of Earth Observation applications.
Our presentation focuses on the latest innovations in EnMAP-Box version 3.16, which represent a significant step forward in functionality, e.g.:

• Preprocessing of hyperspectral time series: a new pipeline transforms imaging spectroscopy data into Analysis-Ready Data (ARD), providing a consistent and accessible format for time-series analysis. This facilitates multitemporal investigations, such as phenology tracking and multi-temporal classification, while ensuring spectral and spatial consistency across sensor constellations.
• Eased visualization and editing of spectral libraries, and integration of spectral libraries into raster processing workflows.
• Execution of computationally intensive EnMAP-Box algorithms on high-performance clusters, addressing the growing demand for scalability in hyperspectral data analysis.
• Deep-learning-based semantic segmentation with SpecDeepMap.

The EnMAP-Box has proven to be a go-to environment for scientific, professional and educational settings (Foerster et al. 2024), serving researchers, students, public authorities, land managers, and private companies. Its state-of-the-art algorithms, embedded in the most important open-source GIS environment, position the EnMAP-Box as a cutting-edge tool for Earth Observation, and specifically for IS applications. Attendees will gain insights into how these tools enable scalable, reproducible workflows for both researchers and operational users. Concluding the presentation, we will outline the roadmap for the EnMAP-Box, focusing on planned developments until the end of 2026, including enhanced cloud-native capabilities and additional tools for emerging hyperspectral missions. This presentation aims to empower the remote sensing community to tackle complex environmental challenges with powerful and easy-to-use solutions.
References:
Chabrillat, S., Foerster, S., Segl, K., Beamish, A., Brell, M., Asadzadeh, S., Milewski, R., Ward, K.J., Brosinsky, A., Koch, K., Scheffler, D., Guillaso, S., Kokhanovsky, A., Roessner, S., Guanter, L., Kaufmann, H., Pinnel, N., Carmona, E., Storch, T., Hank, T., Berger, K., Wocher, M., Hostert, P., van der Linden, S., Okujeni, A., Janz, A., Jakimow, B., Bracher, A., Soppa, M.A., Alvarado, L.M.A., Buddenbaum, H., Heim, B., Heiden, U., Moreno, J., Ong, C., Bohn, N., Green, R.O., Bachmann, M., Kokaly, R., Schodlok, M., Painter, T.H., Gascon, F., Buongiorno, F., Mottus, M., Brando, V.E., Feilhauer, H., Betz, M., Baur, S., Feckl, R., Schickling, A., Krieger, V., Bock, M., La Porta, L., Fischer, S., 2024. The EnMAP spaceborne imaging spectroscopy mission: Initial scientific results two years after launch. Remote Sensing of Environment 315, 114379. https://doi.org/10.1016/j.rse.2024.114379
Foerster, S., Brosinsky, A., Koch, K., Eckardt, R., 2024. Hyperedu online learning program for hyperspectral remote sensing: Concept, implementation and lessons learned. International Journal of Applied Earth Observation and Geoinformation 131, 103983. https://doi.org/10.1016/j.jag.2024.103983
Jakimow, B., Janz, A., Thiel, F., Okujeni, A., Hostert, P., van der Linden, S., 2023. EnMAP-Box: Imaging spectroscopy in QGIS. SoftwareX 23, 101507. https://doi.org/10.1016/j.softx.2023.101507
Scheffler, D., Brell, M., Bohn, N., Alvarado, L., Soppa, M.A., Segl, K., Bracher, A., Chabrillat, S., 2023. EnPT – an Alternative Pre-Processing Chain for Hyperspectral EnMAP Data, in: IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium. IEEE, pp. 7416–7418. https://doi.org/10.1109/igarss52108.2023.10281805
Steinhauser, S., Wocher, M., Halabuk, A., Košánová, S., Hank, T., 2024. Introducing the Potential of the New EnMAP-Box Hybrid Retrieval Workflow for Quantifying Non-Photosynthetic Vegetation, in: IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, pp. 4073–4076. https://doi.org/10.1109/igarss53475.2024.10642095

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Mapping Kaolinite by EnMAP Hyperspectral Imagery Using Machine Learning Algorithms Trained on Synthetic Spectra: a Case Study of Cuprite Hills, Nevada, USA

Authors: Anna Buczyńska, Dariusz Głąbicki, Saeid Asadzadeh, Raymond Kokaly
Affiliations: Wrocław University of Science and Technology, Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences, United States Geological Survey
Since the launch of Hyperion, the first spaceborne hyperspectral imaging system, in early 2000, numerous hyperspectral satellite missions have been developed to image the Earth's surface at medium spatial resolution. Hyperspectral data enable the identification and characterization of surface constituents, including surface mineralogy, in a fast and cost-effective manner. Machine Learning (ML) and Deep Learning (DL) methods are promising tools for detecting surface mineralogy in remote sensing data. A challenge that has not been fully addressed, however, is the fact that minerals usually occur in close association with each other, forming mixtures in the retrieved pixel spectra. Accordingly, generating high-quality training data is still an open challenge for the effective application of ML/DL methods. To address this, we synthetically generated mineral mixtures for kaolinite, as an exemplary mineral, via Monte Carlo simulations and trained selected ML/DL models to detect this mineral over the Cuprite test site (Nevada, USA), imaged by the EnMAP satellite instrument on July 7, 2023. To this end, reflectance spectra of kaolinite and 21 minerals commonly co-occurring with it in geologic environments (alunite, montmorillonite, jarosite, chlorite, calcite, muscovite, hematite, illite, dolomite, quartz, pyrophyllite, olivine, epidote, pyroxene, goethite, dickite, buddingtonite, nontronite, opal, chalcedony, halloysite) were selected from the USGS spectral library. The synthetic training dataset, comprising >10,000 individual spectra, was generated by linear combinations of two, three, and four minerals, sampled in random proportions using Monte Carlo simulation. For training, the spectra containing more than 20% kaolinite were divided into three subclasses: low (20%–30%), medium (30%–40%), and high kaolinite proportion (40%–55%). These classes were used to train the algorithms and map kaolinite in the EnMAP imagery.
The EnMAP data was pre-processed by removing the overlapping channels in the VNIR and SWIR range and removing the atmospheric and noisy bands. The test data consisted of the mineral mapping product yielded by the Material Identification and Characterization (MICA) method developed by the U.S. Geological Survey. We evaluated the performance of five algorithms: Random Forest (RF), eXtreme Gradient Boosting (XGBoost), Support Vector Machine (SVM), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN). The accuracy of the results was assessed using the error matrix constructed by comparing the classification results with the test dataset, together with accuracy measures including overall accuracy, Kappa coefficient, precision, recall, and F1-score. Our analysis showed that the CNN algorithm yields the highest accuracy for kaolinite detection, with overall accuracy, Kappa coefficient, precision, recall, and F1-score equal to 89.6%, 69.4%, 88.8%, 66.2%, and 75.8%, respectively. The lowest performance was obtained by the RF method, yielding 82.6%, 39.9%, 91.5%, 30.8%, and 47.7%, respectively. While all classifiers correctly identified kaolinite in the central parts of the area, towards the southern margin, where kaolinite co-occurs with montmorillonite, none of the methods could map kaolinite accurately. This study indicates the great potential of ML/DL methods for quantitative mineral mapping using satellite-borne hyperspectral data. Future work involves mapping other mineral classes, applying the methodology to other hyperspectral sensors, and testing more DL models.
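The training-data generation described above, random linear combinations of library endmembers with Monte Carlo-sampled proportions binned into kaolinite-abundance subclasses, can be sketched roughly as follows. Note this is a toy illustration: the endmember "library" is random placeholder data rather than actual USGS spectra, and all names and counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder endmember library: 22 "minerals" x 200 bands of random
# reflectance, standing in for USGS library spectra resampled to EnMAP bands.
n_minerals, n_bands = 22, 200
library = rng.uniform(0.05, 0.9, size=(n_minerals, n_bands))
KAOLINITE = 0  # index of the target mineral in this toy library

def synth_mixture(n_endmembers):
    """Draw `n_endmembers` distinct minerals and mix them linearly in
    random (Dirichlet-sampled) proportions; return the mixed spectrum
    and the kaolinite fraction."""
    members = rng.choice(n_minerals, size=n_endmembers, replace=False)
    fractions = rng.dirichlet(np.ones(n_endmembers))  # sums to 1
    spectrum = fractions @ library[members]
    return spectrum, float(fractions[members == KAOLINITE].sum())

def label(frac):
    """Kaolinite-abundance subclass, following the abstract's bins."""
    if frac <= 0.20:
        return "background"
    if frac <= 0.30:
        return "low"
    if frac <= 0.40:
        return "medium"
    if frac <= 0.55:
        return "high"
    return "excluded"  # above the abstract's top bin

# Two-, three- and four-mineral mixtures, 1000 Monte Carlo draws each.
dataset = [synth_mixture(k) for k in (2, 3, 4) for _ in range(1000)]
labels = [label(f) for _, f in dataset]
```

The Dirichlet draw is one convenient way to sample proportions that sum to one; the abstract does not specify the sampling distribution actually used.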

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: The EnMAP Foreground Mission – High priority acquisitions for enhanced science and synergies

Authors: Dr. Nicole Pinnel, Vera Krieger, Sabine Chabrillat, Dr Akpona Okujeni, Karl Segl, Dr Max Brell, Dr. Sabine Baumann, Martin Habermeyer, Dr Emiliano Carmona, Dr Laura La Porta
Affiliations: Earth Observation Center, German Aerospace Center (DLR), German Space Agency, German Aerospace Center (DLR), Geosciences, Potsdam, Germany (GFZ), Leibniz University Hannover, Institute of Soil Science
Since November 2022, the Environmental Mapping and Analysis Program (EnMAP) (www.enmap.org) has attracted more than 2,800 registered users from over 100 countries. Users frequently access the EnMAP data archive, downloading approximately 5,400 tiles per month, and can request future observations through the EnMAP Instrument Planning Portal (https://planning.enmap.org). The demand for future observations, however, varies by region, with Europe showing the highest demand, leading to issues like overlapping orders in the same orbit. To address these challenges and improve mission efficiency, EnMAP has adjusted its acquisition strategies and planning. The goal is to expand access to EnMAP data for a larger user community and to increase data acquisition in areas with very high demand. In March 2024, EnMAP introduced a new acquisition strategy, the so-called "Foreground Mission", focusing on extended 990-kilometer flightlines over Europe. Initially, a set of 29 flightlines over Germany and Europe (Spain, France, Italy, Poland, Greece) was identified in collaboration with the science user community. These flightlines are scheduled in quasi-nadir view (+/- 5° off-nadir angle) regularly every 27 days (approximately 20 flightlines per month) to enhance data coverage and create systematic time series during the vegetation period (March to November). Field campaigns were also supported with high-priority orders to maintain repeated acquisitions, while simultaneous overpasses were coordinated in close collaboration with other hyperspectral missions such as PRISMA, DESIS, and EMIT. In 2025, the Foreground Mission will continue high-priority tasking over Europe and will focus globally during the European winter months, with high-frequency time series over selected areas and biodiversity hotspots. The acquisition plan is updated on a daily basis and adjusted based on other mission priorities such as ground activities or weather conditions.
Users can receive advanced information about future priority observations at https://www.enmap.org/data_tools/foreground_mission/. Successfully acquired data are accessible to all registered users approximately six days after acquisition via the EnMAP Data Archive on the EOWEB Geoportal. This informative presentation emphasizes the outcomes of the 2024 Foreground Mission initiative and outlines the efforts of the mission consortium to support concurrent field campaigns and mission synergies in 2025.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Spaceborne Hyperspectral Data Application for Characterizing Debris-Covered Glaciers in High-Mountain Environments. The Khumbu Glacier Case Study.

Authors: Marco Casu, PhD student Claudia Collu, PhD Francesco Dessì, Professor Maria Teresa Melis
Affiliations: Sapienza University, Cagliari University
Debris-covered glaciers, predominantly found in the Hindu-Kush-Karakoram and Himalayan ranges, are dynamic and complex systems that play a critical role in regional hydrology and climate regulation. These glaciers are characterized by a surface layer of debris, whose physical and geochemical properties significantly influence melt rates and glacier dynamics. Understanding the lithological composition and thermal properties of this debris is crucial for modeling melt patterns and addressing the impacts of climate change. However, while most studies focus on the geometry, mass balance, and morphology of debris-covered glaciers, a significant research gap remains in their lithological characterization. This study seeks to address this gap by integrating EnMAP hyperspectral data with ASTER emissivity and land surface temperature data. The Khumbu Glacier, the focus of this study, is located in northeastern Nepal within the Sagarmatha National Park. EnMAP spaceborne hyperspectral data was used to identify and map lithological variations across the glacier. By analyzing small-scale diagnostic absorption features within the 420–2450 nm spectral range, this approach enables the discrimination of different lithological components and their spatial distribution across the glacier. Additionally, ASTER emissivity and land surface temperature data were employed to explore the spatial correlation between lithological composition and thermal patterns. This analysis identified cooler regions associated with SiO₂-rich debris and warmer zones in moraines and low-lying areas. The results provide significant insights into the composition and distribution of supraglacial debris. Quartz-rich felsic debris was found dominant through ASTER emissivity analysis. EnMAP-derived classifications revealed distinct lithological zones, aligning closely with existing cartography. 
A combination of linear unmixing and band ratio techniques, such as the Al-OH phyllosilicate relative band depth, highlighted substantial geochemical variability in the glacier’s debris, particularly in the lower ablation zones. By integrating remote sensing and field-based approaches, this study aims to advance the understanding of the lithological dynamics of debris-covered glaciers. Future research will focus on validating the produced cartography using samples collected during the field campaign conducted in October. A total of 350 spectra were extracted under laboratory conditions and will be used to validate hyperspectral-based classifications, while also providing new and accurate mineralogical information through XRD (X-Ray Diffraction) laboratory techniques to enhance the completeness of the spectral data. The preliminary outcomes of this research underscore the potential for expanding the study by correlating VNIR-SWIR hyperspectral data with thermal data. Linking lithology and temperature could yield critical insights into glacier melt patterns, contributing to strategies for mitigating the impacts of climate change and natural hazards.
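Band-ratio indicators such as the Al-OH relative band depth mentioned above compare the reflectance of an absorption feature's shoulders to its centre. The sketch below illustrates the general technique on a toy spectrum; the wavelengths, shoulder positions, and Gaussian dip are illustrative choices, not the study's actual parameters.

```python
import numpy as np

def relative_band_depth(refl, wl, left, center, right):
    """Relative band depth: mean of the two shoulder reflectances divided
    by the reflectance at the absorption centre; values above 1 indicate
    a present (deeper) absorption feature."""
    r = lambda w: float(np.interp(w, wl, refl))
    return 0.5 * (r(left) + r(right)) / r(center)

# Toy continuum with a Gaussian absorption dip near 2200 nm, roughly
# where Al-OH features occur in the SWIR.
wl = np.linspace(2100.0, 2300.0, 101)
refl = 0.5 - 0.15 * np.exp(-(((wl - 2200.0) / 15.0) ** 2))
rbd = relative_band_depth(refl, wl, left=2160.0, center=2200.0, right=2240.0)
```

Mapping this ratio per pixel over a hyperspectral cube yields the kind of relative-band-depth image used to highlight phyllosilicate variability.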

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: First Retrievals of Aerosol Optical Thickness, Surface Reflectance and Cloud Properties using EnMAP Radiance Data with the XBAER Algorithm

Authors: Simon Laffoy, Dr Marco Vountas, Professor Linlu Mei, Prof. Dr. Hartmut Bösch
Affiliations: Institut für Umweltphysik (IUP) University of Bremen, University of the Chinese Academy of Science
We present work and results of the XBAER4EnMAP project, which focuses on the adaptation and further development of the mature eXtensible Bremen Aerosol/cloud and surfacE parameters Retrieval (XBAER) algorithm for the HyperSpectral Imager (HSI) instrument on board the Environmental Mapping and Analysis Program (EnMAP) satellite mission. The XBAER algorithm retrieves aerosol optical thickness (AOT), surface reflectance (SRF) and cloud parameters, and was previously developed using radiance data from both the MEdium Resolution Imaging Spectrometer (MERIS) onboard Envisat and the Ocean and Land Colour Instrument (OLCI) onboard Sentinel-3, both of which provide radiance data at a spatial resolution of 300 m to 1.2 km. We seek to update XBAER to utilize EnMAP's higher-resolution 30 m radiance data, and to produce data products at an equivalent spatial resolution. We present a comparison of co-located OLCI and EnMAP top-of-atmosphere reflectances (RTOA), as well as an analysis of which EnMAP bands or composite bands would best replace OLCI bands within the XBAER algorithm. Following on from this, we use these co-located OLCI and EnMAP scenes as input for XBAER and present a comparison of the resulting AOT and SRF at OLCI's lower spatial resolution. We discuss the challenges of updating XBAER's existing cloud mask to handle input data of a significantly higher spatial resolution. Additionally, we present how an attempt to derive higher-resolution data products with XBAER's existing lower-resolution surface parameterization scheme demands modifications elsewhere in the algorithm to account for pixel effects. We present preliminary results of AOT at the full 30 m spatial resolution of EnMAP as derived by XBAER, and we compare this to AOT from co-located AErosol RObotic NETwork (AERONET) sites.
We discuss the challenge of comparing a satellite-derived data product to the point-site measurements of AERONET, particularly given EnMAP's relatively small scene size and low global coverage. Finally, we summarize the work still to be performed as part of the project.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Optimizing EnMAP Satellite Operations: Acquisition Strategies and Data Access

#stac

Authors: Emiliano Carmona Flores, Dr. Sabine Baumann, Sabine Chabrillat, Sabine Engelbrecht, Martin Habermeyer, Sebastian Hartung, Dr. Laura La Porta, Dr. Nicole Pinnel, Dr. Miguel Pato, Mathias Schneider, Daniel Schulze, Peter Schwind, Dr. Katrin Wirth
Affiliations: German Aerospace Center (DLR), German Research Center for Geosciences (GFZ), Leibniz University Hannover, German Aerospace Center (DLR), German Aerospace Center (DLR)
Since its launch on April 1, 2022, the EnMAP mission has been delivering high-quality hyperspectral data to a growing global user base. The mission follows an open-access policy, providing freely available data for scientific use. Additionally, users are encouraged to submit observation proposals, granting the opportunity to task EnMAP over specific areas of interest. This approach requires mission operations to balance the needs of the users with the technical capabilities of the satellite. This contribution summarizes the status of the tasking strategy and data access in the EnMAP mission, the challenges that have been encountered, and the optimizations that have been introduced during the initial years of operation.

Acquisition Strategy: While many current satellite missions offer global mapping capabilities, this is not yet available for the current generation of imaging spectroscopy missions. Today's hyperspectral missions, like EnMAP, operate with a tasking mission concept, in which acquisitions are prioritized and scheduled for specific locations. EnMAP follows a strategy that prioritizes observations entered by the users. To maximize the observation capacity, the remaining time is used for the so-called Background Mission, which targets over 600 high-interest sites worldwide and, at the same time, aims to map large land surface areas. Experience from the first few years of operation shows that user requests are distributed very unevenly around the globe, which makes it a real challenge to fulfill them all. In addition, users often order short acquisitions, i.e. single EnMAP products (30 x 30 km), underutilizing the potential observation capabilities of the satellite. Moreover, there is a growing number of requests for time series data, which is difficult to satisfy over the most requested geographic areas due to competing orders from different users. To address these difficulties, the so-called Foreground Mission was introduced.
In this approach, a set of pre-selected areas over Europe is periodically observed, and up-to-date information about the status of the observations and future plans is shared on the EnMAP web site (www.enmap.org). The targets for the Foreground Mission were defined in collaboration with users representing different application areas. Following its positive reception by the user community, this initiative will be extended to other geographic areas in the future. In this contribution, we present the EnMAP acquisition approach and its optimization. We discuss the statistics of the different observation modes and provide best-practice recommendations for effectively tasking the EnMAP satellite to acquire data.

Data access: The open-access data policy of EnMAP allows users to access and download more than 110,000 products from the mission archive simply by registering as an EnMAP user. Archived products can be ordered and processed on demand, with users selecting their preferred parameters, including product level and the type of atmospheric correction. The available data product levels are:

- Level 1B: radiometrically corrected data in sensor geometry
- Level 1C: radiometrically and geometrically corrected, orthorectified data
- Level 2A: atmospherically and geometrically corrected, orthorectified surface reflectance data with two atmospheric correction modes (land mode and water mode)

The on-demand processing approach ensures that EnMAP products adapt better to user needs and are generated using the up-to-date version of the processing software, which is regularly updated with enhancements. On the other hand, high-volume product requests can lead to long waiting times. To address the needs of users with no specific processing requirements or those interested in large data volumes, EnMAP provides the complete set of Level 2A products processed with a standard set of parameters, available at the EOC Geoservice and EOLab platforms.
This dataset, verified as CEOS Analysis Ready Data (CEOS-ARD) for land applications, is optimized for large-scale use and easily accessible through the Geoservice STAC API, facilitating data discovery and access. In this contribution we will present the different deliverable EnMAP products, the available options for obtaining them, and the main differences that users can expect in each case.
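As a rough illustration of what STAC-based discovery looks like, the snippet below only constructs the JSON body of a standard STAC /search request. The endpoint URL and the collection id "ENMAP_L2A" are placeholders, not confirmed identifiers from the Geoservice catalogue.

```python
import json

# Placeholder endpoint: consult the EOC Geoservice STAC catalogue for the
# real search URL and the exact EnMAP Level 2A collection id.
STAC_SEARCH_URL = "https://example.org/stac/search"

def build_search(bbox, start, end, collection="ENMAP_L2A"):
    """JSON body for a POST to a STAC /search endpoint: filter by
    collection, bounding box, and an RFC 3339 datetime interval."""
    return {
        "collections": [collection],
        "bbox": list(bbox),            # [min_lon, min_lat, max_lon, max_lat]
        "datetime": f"{start}/{end}",
        "limit": 100,
    }

# Example: Berlin area, 2024 vegetation period.
body = build_search((13.0, 52.3, 13.8, 52.7),
                    "2024-03-01T00:00:00Z", "2024-11-30T23:59:59Z")
payload = json.dumps(body)  # ready to POST with any HTTP client
```

The same body works with generic STAC clients (e.g. pystac-client), which iterate over the returned items and their downloadable assets.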

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: A.04.01 - POSTER - Estimating and observing local-scale GHG emissions

Recent advances in Earth Observation capabilities have revolutionized our ability to detect CO2 and methane emissions at facility-to-city scale and from individual emission sources.

These localised sources often present good potential for rapid mitigation and are thus high priority targets for rapid action in response to the Paris Agreement. International policy initiatives such as the European Green Deal and the Global Methane Pledge set targets for emissions reductions that are increasingly backed by legislation, with a requirement for monitoring and reporting systems in relevant industries.

There are various space-based observations suitable for estimating methane emissions from, e.g., landfills and the oil & gas and coal mining industries, and increasingly also for CO2 emissions from, e.g., power plants and cities. However, the observing system is characterised by a large and increasing diversity of satellite instruments and retrieval algorithms, including substantial involvement of New Space. Efforts to integrate and harmonise facility-scale emission estimates, to develop estimates of uncertainty, including for increasingly prevalent AI techniques, and to develop good practice both within and across satellite platforms are rapidly evolving.

This session aims to present an overview on topics related to estimating emissions of CO2 and CH4 from sources ranging from point sources up to megacities in spatial extent. It will showcase recent scientific advances, including new instruments, new detection and retrieval methods, their uncertainties, validation, data uptake, and efforts towards integration into global initiatives.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Testing CO2M Satellite Applications: Sensitivity Testing with Meso Scale Atmospheric Model DEHM

Authors: Niels S Hvidberg, Anne Sofie Lansø, Hoyeon Shi, Jesper H. Christensen, Jacob Høyer, Camilla Geels
Affiliations: Aarhus University, Danish Meteorological Institute
The Copernicus Anthropogenic CO2 Monitoring (CO2M) mission is part of the European integrated monitoring and verification support capacity (MVS). As new policies targeting national emission reductions are implemented, there will be a need for politically unbiased verification of countries' reported emissions. A new group of three satellites is planned to be launched as part of the CO2M mission, carrying equipment capable of detecting anthropogenic CO2 signals from megacity to country scales with a much wider swath than previously available. This study focuses on simulating Level 2 data products for the CO2M satellites to investigate their sensitivity to changes in city CO2 emissions and hence the potential to detect changes related to emission reductions at the city scale. Atmospheric CO2 concentrations are modelled with the combined atmosphere-biosphere model DEHM (Danish Eulerian Hemispheric Model) and SPA (Surface Plant and Atmosphere) around Copenhagen, Denmark. The model uses an annual CO2 emissions inventory for Denmark at 1x1 km2 grid resolution together with CAMS TEMPO temporal profiles. The resulting traffic emissions are then scaled using data from an activity-based traffic model for the Greater Copenhagen Area (COMPASS) running a high and a low emission scenario. As DEHM is driven by Weather Research and Forecasting (WRF) model meteorology, the model data can be combined with cloud data from WRF and estimated instrument uncertainties to generate a simulation scheme similar to the CO2M satellites. The sensitivity analysis for the simulated Level 2 data will be presented, with the resulting data reflecting the high and low emission scenarios. Additionally, a comparison of the variability and surface-to-column propagation within the modelled atmospheric CO2 of the two scenarios is presented to discuss the sensitivity analysis.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Hotspots Detection on Landsat and Sentinel-2 via Gaussian Mixture Modeling for Estimating Industrial CO2 emissions

Authors: Antoine Tadros, Rafael Grompone, Florentin Poucin, Simon Lajouanie, Carlo de Franchis, Charles Hessel
Affiliations: Université Paris-Saclay, ENS Paris-Saclay, CNRS, Centre Borelli, Kayrros SAS
In light of the increasing concern over greenhouse gas emissions, this research focuses on the role of steel mills and cement works as heavy sources of CO2 emissions. These industries significantly contribute to environmental pollution due to their energy-intensive processes and reliance on fossil fuels. In response, we propose an innovative monitoring framework that relies on a statistical analysis of Sentinel-2 and Landsat images to accurately assess the activity status of the aforementioned facilities. Given that the activity of such facilities produces a significant amount of heat, we propose to detect hotspots as anomalies using the Reed-Xiaoli (RX) algorithm [1]. The RX method efficiently discriminates between normal background activities and anomalous heat emission events by analyzing spectral data. To enhance reliability, we integrate the a contrario framework, which allows us to express the computed p-values as an expected Number of False Alarms (NFA). This statistical measure provides a clear interpretative advantage, as it quantifies the expected number of false detections in complex industrial environments. One of the principal challenges in this domain lies in modeling the background environment of industrial sites, where traditional Gaussian models can fall short in capturing the complexity of dynamic scenes. To address this limitation, we propose to use Gaussian Mixture Models (GMM), which offer a more flexible and robust representation of heterogeneous background processes. GMMs enable the accommodation of multiple overlapping distributions, making them particularly suited for the varied backgrounds that the model might encounter. The use of GMM allows for the joint segmentation of the scene in homogeneous areas while also providing a setting on which to run the RX algorithm. 
By focusing on these critical industrial sources, our study aims to provide valuable information that can drive targeted interventions and policy decisions to reduce overall carbon footprints. [1] I. S. Reed and X. Yu, "Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution," in IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 10, pp. 1760-1770, Oct. 1990, doi:10.1109/29.60107.
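A minimal numerical sketch of the RX detector is given below, with a two-mode synthetic background standing in for the segments a fitted GMM would provide; all data here is synthetic, and the a contrario NFA stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def rx_scores(pixels: np.ndarray) -> np.ndarray:
    """Classic Reed-Xiaoli detector: squared Mahalanobis distance of each
    pixel spectrum from the background mean under the sample covariance."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    centered = pixels - mu
    return np.einsum("ij,jk,ik->i", centered, cov_inv, centered)

# Synthetic 6-band scene: two background modes (a crude stand-in for the
# segments a fitted GMM would provide) plus one bright hotspot pixel.
bg_a = rng.normal(0.2, 0.02, size=(500, 6))
bg_b = rng.normal(0.6, 0.02, size=(500, 6))
hotspot = np.full((1, 6), 0.95)

# Running RX per segment keeps each background covariance unimodal, which
# is the motivation for using a GMM on heterogeneous industrial scenes.
scores_a = rx_scores(np.vstack([bg_a, hotspot]))
top_anomaly = int(scores_a.argmax())  # index 500, the appended hotspot
```

Running the detector on the pooled scene instead would inflate the covariance across both modes and dilute the hotspot's score, which illustrates the benefit of the GMM-based segmentation.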

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Development of an End-to-End Simulator (E2ES) for the performance assessment of the Copernicus anthropogenic CO2 Monitoring mission (CO2M).

Authors: Laurent Blanot, Nicolas Bertaud, Meriem Chakroun, Julien Demaria, Stéphane Ferron, Eirini Milioti, Marion Piccinelli, Frederic Rouffi, Patricia Schippers, Romain Sumerot, Veronique Bruniquel, Eric Jeansou, Stéphane Bonnot, Bruno Chetrite, Elise Mathieu, Jens Dannenberg, Carlo Zelli, Angela Birtwhistle
Affiliations: ACRI-ST, TAS-F, ESA/ESTEC, OHB System AG
The anthropogenic CO2 Monitoring mission (CO2M) is one of the High-Priority Candidate Missions (HPCM) endorsed by ESA for the expansion of the Copernicus Sentinel missions. The main goal of CO2M is the measurement of the anthropogenic carbon dioxide (CO2) concentration in the Earth's atmosphere. The increased performance with respect to other CO2 missions outside Europe will make it possible to meet challenging mission requirements and to characterize CO2 sources. The CO2M satellites, on a polar orbit (≈735 km), will carry a payload of three complementary instruments:

1) CO2i, the main instrument, a spectrometer with four spectral channels allowing the retrieval of the CO2 and CH4 mixing ratios (NIR, SWIR1 and SWIR2 channels) as well as the NO2 concentration (VIS channel), the latter being a tracer of the anthropogenic origin of the measured CO2.
2) MAP, a multi-angle, multi-spectral, multi-polarization imager, which provides useful information on aerosols.
3) CLIM, a three-band imager providing useful information on cloud coverage at high spatial resolution.

An important aspect of the preparation of the CO2M mission is the development of an End-to-End Simulator (E2ES) during phase B2CD (pre-flight), which allows verifying that the expected performances of the mission, in terms of retrieved radiometry (L1) and retrieved geophysical quantities (L2), are met. The E2ES consists, for each of the three instruments, of a series of software modules, which 1) simulate the signal measured at the entrance of the instrument (GM + SGM modules) and at the output of the instrument, then sent to the ground by telemetry (OPSI module); 2) process the TM data in order to retrieve an estimation of the signal at the entrance of the instrument (L1GPP module) and an estimation of the geophysical quantity put at the entrance of the simulation (L2P module); 3) compare the true and retrieved quantities at L1 and L2 level (PEM module).
The development of the E2ES is performed by a consortium led by OHB (prime contractor) and Thales Alenia Space in France. Within this consortium, ACRI-ST is responsible for the implementation of several modules of the E2ES. This presentation provides an overview of the E2ES software, highlighting the status of the developments. It is focused on the modules developed by the ACRI-ST team.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Methane source identification from hyperspectral infrared satellite observations: a physically informed neural network-based inversion approach

Authors: Rocco Giosa, Giuliano Liuzzi, Pamela Pasquariello, Guido Masiello, Carmine Serio, Maria Ragosta, Francesco Carbone, Italia De Feis, Lorenzo Cassini, Fabio Della Rocca, Marco D’Emilio, Christian Natale Gencarelli
Affiliations: University of Basilicata, IIA, National Council of Research, IAC, National Council of Research, University “La Sapienza” Department Of Civil, Building And Environmental Engineering, University of Basilicata, Department of Health Sciences, IGAG, National Council of Research
Monitoring greenhouse gas emissions, particularly methane (CH4) and carbon dioxide (CO2), is becoming increasingly vital to combating climate change and achieving international objectives such as the Paris Agreement and the Global Methane Pledge. CH4, with a radiative effect much greater than that of CO2 and significant anthropogenic sources, presents a high-priority opportunity for rapid mitigation. Recent advancements in satellite-based Earth Observation have enabled new technologies and techniques to quantify emissions at scales ranging from individual facilities to entire cities. However, the diversity of satellite instruments, retrieval algorithms, and observation strategies underscores the need for harmonized approaches to emission estimation and validation across platforms, as well as for new approaches that increase the computational efficiency of retrievals. Artificial intelligence techniques offer a promising solution, especially in architectures that integrate physical models into the learning process to improve the accuracy and reliability of retrievals and to exploit the information usually provided to more classical retrieval approaches. In this context, we present the results of the PRIN-MVP(*) (Methane Vertical Profiling) project, which develops an innovative retrieval scheme for the identification of methane emissions and the derivation of CH4 vertical profiles from hyperspectral thermal and mid-infrared satellite observations. The methodology developed in this work is based on a Physics Informed Neural Network (PINN), which is able to ingest physical information about the constraints of the inverse problem and the vertical profile to be retrieved. The methodology was developed, applied, and validated on IASI data, using about 1,000,000 clear-sky spectra, partly simulated by the σ-IASI/F2N radiative transfer code and partly observed, along with atmospheric state parameters (e.g., temperature, latitude).
Spectra were compressed through principal component analysis (PCA), considering the optimized spectral range for methane, defined through an Averaging Kernels (AK) study. The work aims to demonstrate the power of PINNs for the identification of CH4 sources. In this regard, we will show the results obtained from PINNs, and the first steps to create a public repository and website to describe the algorithm and distribute data products. (*) The PRIN 2022 PNRR MVP Project is funded by The European Union – Next Generation EU, Mission 4 Component 1 CUP C53D23010210001
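As a rough illustration of the compression step described above, the sketch below projects radiance spectra onto their leading principal components; the function names and dimensions are illustrative placeholders, not the PRIN-MVP implementation.

```python
import numpy as np

def pca_compress(spectra, n_components):
    """Project spectra (n_samples x n_channels) onto the leading
    principal components of the dataset. Illustrative sketch only."""
    mu = spectra.mean(axis=0)
    X = spectra - mu
    # SVD of the centred data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components]
    scores = X @ components.T          # compressed representation
    return scores, components, mu

def pca_reconstruct(scores, components, mu):
    """Map compressed scores back to (approximate) spectra."""
    return scores @ components + mu
```

In a pipeline of this kind, the compressed scores rather than the full spectra would be fed to the network, together with auxiliary state parameters such as temperature and latitude.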
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Retrievals of CO2 and CH4 Maps from the EnMAP Satellite Using RemoTeC and a Matched Filter

Authors: Leonie Olivia Scheidweiler, Lennart Resch, Benedikt Löw, Ida Jandl, Andreas Baumgartner, Claas Köhler, Andre Butz
Affiliations: Institute of Environmental Physics, Heidelberg University, School of Geography, Earth and Atmospheric Sciences, University of Melbourne, CSIRO Environment, German Aerospace Center (DLR), Earth Observation Center, Heidelberg Center for the Environment (HCE), Heidelberg University, Interdisciplinary Center for Scientific Computing, Heidelberg University
Satellite measurements are an indispensable tool for the global monitoring of anthropogenic greenhouse gas emissions. While it is not a primary mission goal, the Environmental Mapping and Analysis Program (EnMAP) satellite is able to measure carbon dioxide and methane emission plumes from localized hotspot sources. We compare methods to get emission estimates from these measurements. We analyze observations of point and area sources of these gases taken by the EnMAP satellite. We use a physical retrieval (RemoTeC), which yields integrated column densities for the targeted greenhouse gases, and a statistical retrieval (matched filter), which outputs enhancements only, but does so much faster and requires less auxiliary data. Observed sources include a power plant in Riyadh, Saudi Arabia, an oil and gas facility in Turkmenistan, and the Pinto landfill near Madrid, Spain. We further analyze airborne measurements of coal mine ventilation shafts in the Upper Silesian Coal Basin, Poland, taken during the CoMet campaign. First results indicate that the physical retrieval performs better at distinguishing surface albedo structures from the spectral signal of the target gas for some surfaces. The retrieved image contains a striping pattern caused by detector properties. The matched filter yields a smoother and destriped enhancement map, as it determines detector properties from the measurement statistics. It yields a lower total integrated mass enhancement for the emission plume than the physical retrieval. We aim to improve the retrieval tools. One approach is to use the covariance matrix from the statistical retrieval as input information for the physical retrieval. A first analysis suggests that this combined method results in a smoothed and destriped image without sacrificing the accuracy of the physical retrieval.
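The statistical retrieval mentioned above can be sketched as a classical matched filter, estimating a per-pixel enhancement from the scene's own mean and covariance. This is a minimal illustration of the general technique under simplified assumptions, not the authors' implementation.

```python
import numpy as np

def matched_filter(pixels, target):
    """Per-pixel enhancement estimate via the classical matched filter.
    pixels: (n_pixels, n_bands) radiances; target: spectral signature of
    a unit enhancement. Illustrative sketch only."""
    mu = pixels.mean(axis=0)
    X = pixels - mu
    cov = X.T @ X / (len(pixels) - 1)
    cov += 1e-6 * np.eye(cov.shape[0])   # small ridge for numerical stability
    w = np.linalg.solve(cov, target)     # w = Sigma^-1 t
    # alpha = (x - mu)^T Sigma^-1 t / (t^T Sigma^-1 t)
    return X @ w / (target @ w)
```

Because the background statistics are estimated column by column in practice, this construction also absorbs detector-dependent striping, which is consistent with the smoother maps described above.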
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Assessing the Detection Potential of Hyperspectral Satellites for Global Greenhouse Gas Monitoring: Insights From TANGO Simulations

Authors: Harikrishnan Charuvil Asokan, Dr Jochen Landgraf, Prof. Dr. André Butz
Affiliations: Heidelberg University, SRON Netherlands Institute for Space Research
Monitoring and reporting anthropogenic greenhouse gas (GHG) emissions, mainly CO2 and CH4, is critical for informing climate mitigation strategies. While conventional satellites provide broad spatial coverage, hyperspectral satellites like the upcoming TANGO mission, scheduled for launch in 2026-27 under ESA’s SCOUT program, can detect smaller, localised point sources with high spatial resolution. TANGO operates with a swath width of approximately 30 km and a ground pixel resolution of ~300 m, enabling the detection of emitters that are often too small to be captured by conventional satellites. TANGO’s ability to roll dynamically (up to ±30°) enables the selection of high-priority targets along its orbital trajectory, enhancing its observational flexibility. This study evaluates the detection potential of TANGO, incorporating its orbital parameters, dynamic manoeuvrability, target prioritisation strategies, and practical cloud coverage scenarios. The prioritisation strategies used in the simulations included CO2 and CH4 prioritisation schemes, reflecting the operational flexibility to focus on different types of sources depending on mission objectives. Using the TNO global point source inventory as a baseline, simulations under the current detection limits of TANGO demonstrate that approximately 200 targets can be realistically detected per repeat cycle (4 days) after applying a two-step cloud filtering process. The Clear-Sky scenario, serving as an idealised upper limit, highlights TANGO’s potential to detect up to 1100 targets per repeat cycle under ideal conditions, depending on the prioritisation strategy, with CH4 prioritisation showing higher counts due to its larger baseline inventory of sources. Moreover, the integration of cloud-forecast information elevates detection efficiency by circumventing cloud-filled areas, achieving up to a 35% improvement in the number of detected targets. 
Seasonal trends indicate reduced detection during Northern Hemisphere winter, driven by solar zenith angle constraints, further stressing the operational challenges caused by cloud cover and illumination conditions. Additionally, the study investigates the operational significance of Enhanced Detection Limits, which expand the baseline inventory by including smaller emission sources, suggesting potential future improvements in global GHG monitoring coverage. The study highlights the critical trade-offs between detection efficiency, prioritisation strategies for CO2 and CH4 sources, and operational constraints. The results emphasise the potential of hyperspectral satellite missions like TANGO to bridge significant gaps in global GHG emission inventories, particularly for distributed and under-monitored sources. These observations contribute to advancing space-based monitoring capabilities, providing actionable data for emissions mitigation and policy development.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: The MicroCarb PayLoad Ground Segment: description of a ground segment for new CO2 data

Authors: Sarah Guibert, Elodie Jaumouille, Valérie Lefort, Céline Menard, Sophie Pelou, Xavier Toubeau, Philippe Landiech, Didier Pradines, Céline Tison
Affiliations: CNES
MicroCarb will be the first European mission dedicated to monitoring CO2 fluxes from space, with a target launch in mid-2025 on Vega-C. MicroCarb has been developed in a partnership led by CNES under French Government funding, with major contributions from UKSA, EUMETSAT and the EU. MicroCarb will make global measurements of the atmospheric CO2 column-integrated concentration at high accuracy (random error < 1 ppm, regional bias < 0.2 ppm) from an affordable micro-satellite. The main objective of the MicroCarb mission is the study of natural fluxes for a better understanding of their mechanisms. Within the MicroCarb mission, CNES is in particular responsible for the development of the PayLoad Ground Segment (PLGS), which includes:
• a payload management function, in charge of instrument monitoring and control,
• a payload expertise function, in charge of instrument calibration and characterization,
• a product processing function, in charge of generating mission products up to Level 3,
• a products expertise function, in charge of quality monitoring and algorithm expertise,
• a data server function, in charge of data transfers and the operational data archive,
• a long-term data archive, storing science data during the mission lifetime and after decommissioning,
• a data dissemination function, in charge of distributing products to the science community as well as information about mission events to the public.
The PLGS is located at CNES Toulouse and EUMETSAT Darmstadt, Germany. PLGS components are operated by CNES and EUMETSAT operational and expert teams. The PLGS uses AERIS services dedicated to the atmosphere for the distribution of L1-L3 MicroCarb products to the scientific community.
This presentation:
• briefly introduces the MicroCarb mission and its objectives,
• presents the PLGS architecture and the partnership organization,
• describes the different ground segment components, focusing in particular on the several acquisition modes, the different processing chains, and the associated technical challenges.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Synthetic datasets for benchmarking point-source methane emissions detection approaches

Authors: Mahrukh Niazi, Timothy Davis, Robert Huppertz, Dr Maxime Rischard
Affiliations: Orbio Earth
A growing number of spaceborne sensors can be used to monitor for point-source methane emissions, yet a dearth of imagery associated with emissions events of known sizes hampers the validation of methods. Where data do exist from controlled releases or coincident observations, they are frequently available only in small numbers, over a limited geographic region, or for a single instrument. These limitations frustrate the determination of generalized validation metrics and inhibit cross-sensor comparisons. We present an approach to determining the false positive rate (FPR) and probability of detection (POD) by inserting synthetic methane plumes into real satellite imagery. FPR is the rate at which an algorithm reports methane plumes where none are present. POD quantifies the probability that a methane emission of a given rate will be detected. It depends on the sensing instrument, algorithmic approach, atmospheric and solar conditions, and the albedo properties of a location. Increasing the model's sensitivity to achieve a higher POD typically involves accepting a higher FPR; decoupling FPR and POD makes this tradeoff explicit. Together, FPR and POD provide direct measures of an algorithm's sensitivity and offer critical insights into how reliably it can detect methane emission events. We use a set of synthetic methane plumes that were generated by large eddy simulations (LES). The LES plumes simulate methane point-source emissions under controlled conditions, realistically capturing how methane disperses in the atmosphere. These are inserted into real satellite images captured by various instruments by adjusting the radiance in each spectral band according to the methane absorption in that band as determined by the simulation. The flux rates of the plumes can also be scaled to capture a large diversity of simulated emission events, and to produce POD curves against emission size.
This approach allows us to produce arbitrarily large datasets and to control the distribution of land cover types and seasonality of imagery. Having large and diverse datasets of methane plumes is critical for estimating generalized algorithmic performance. Our approach also allows us to create matching datasets across sensors for direct comparisons. An acceptable FPR must be determined based on operational considerations. Using a large and diverse synthetic dataset allows for a more reproducible determination of FPR compared to earlier methods that relied on hand-labeled datasets, which are often limited and inconsistent. The synthetic plumes are inserted into real imagery that is unlikely to contain true methane emissions events. Given a fixed FPR, the corresponding POD curves and detection limits can then be determined. We initially tested our validation approach on Sentinel-2 imagery, determining the POD for a deep learning model. We found that land cover types with a low signal to noise ratio (SNR) made the plume detection problem more difficult for the model, as reflected by a reduced POD. Accordingly, we split our dataset into three regions, corresponding to the homogeneity of image content, and retrieved POD values for each region. This analysis enables us to assess model performance in specific geographies. Looking forward, the value of this work will be to make similar assessments of the relative performance of various algorithms and across different instruments, i.e. to allow the community of scientists working on spaceborne methane detection to share reproducible validation benchmarks.
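The band-by-band radiance adjustment described above can be approximated with a simple Beer-Lambert attenuation; the shapes, units, and absorption coefficients in this sketch are placeholders, not values from the study.

```python
import numpy as np

def insert_plume(radiance, plume_dxch4, k_eff):
    """Insert a simulated plume into real imagery by attenuating each
    band's radiance: L' = L * exp(-k_band * dXCH4). Shapes:
    radiance (bands, H, W); plume_dxch4 (H, W) methane column
    enhancement; k_eff (bands,) effective absorption per unit
    enhancement. Illustrative sketch with placeholder units."""
    tau = k_eff[:, None, None] * plume_dxch4[None, :, :]
    return radiance * np.exp(-tau)
```

Scaling `plume_dxch4` scales the optical depth directly, which is how a single LES plume could represent a range of flux rates when building POD curves against emission size.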
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Simulating CH4 Emissions in the Po Valley with WRF-GHG: Validation Against TROPOMI Observations and Ground-based Measurements

Authors: Marco D'Emilio, Pamela Pasquariello, Guido Masiello, Carmine Serio, Giuliano Liuzzi, Dr. Francesco Carbone, Dr. Christian Natale Gencarelli, Rocco Giosa, Lorenzo Cassini
Affiliations: University of Basilicata, IIA-CNR, IGAG-CNR, University of Rome "La Sapienza"
The increasing concentration of methane (CH₄) in the atmosphere represents one of the most pressing challenges for climate change mitigation due to its high global warming potential and significant contribution to the greenhouse effect. The Po Valley, characterized by a high density of agricultural, industrial, and urban activities, is one of the European regions with the highest CH₄ emissions. To enhance our understanding of the spatial and temporal distribution of CH₄ in this area, we performed numerical simulations using the WRF-Chem model with the WRF-GHG chemical module. The WRF-GHG module is specifically designed for greenhouse gas modeling and treats gases such as CH₄ as passive tracers. This approach assumes that CH₄ does not undergo chemical reactions within the atmosphere, focusing on its transport and dispersion influenced by meteorological conditions, thereby reducing computational complexity while maintaining accuracy in capturing spatial and temporal dynamics. The model combined the Emissions Database for Global Atmospheric Research (EDGAR-2024_GHG) inventory, background data provided by the Whole Atmosphere Community Climate Model (WACCM) database, and biomass burning emissions from the Fire INventory from NCAR (FINN) database. The simulation was conducted for the entire year 2022, with a 7-day spin-up run, to properly initialize the atmospheric conditions. We employed two nested domains: the outer domain with a spatial resolution of 10 km and the inner domain with a resolution of 5 km. This nesting strategy allowed for a detailed representation of local emission sources and atmospheric processes within the Po Valley while accounting for broader regional influences, with the innermost domain also including part of central Europe. 
Model outputs were compared with satellite observations from the TROPOspheric Monitoring Instrument (TROPOMI) sensor onboard Sentinel-5P, and ground-based measurements from Integrated Carbon Observation System (ICOS) stations for further validation, assessing the reliability of simulated CH₄ concentrations and their seasonal trends. As part of the PRIN-MVP project, the annual dataset generated through these simulations will serve as a critical input for validating a physics-informed neural network-based retrieval system for CH₄, further advancing the capability to monitor and mitigate greenhouse gas emissions.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Advancing Point-Source GHG Emission Monitoring with Microsatellites: A Coverage and Imaging Mode Analysis

Authors: Jinyoung Shin, Jae-Pil Park, Sujong Jeong, Geuk-Nam Kim, Namgyu Kim, Taemin Kim, Jaemin Hong, Yu-Ri Lee, Kwangwon Lee, Seongwhan Lee
Affiliations: Nara Space Technology Inc., Climate Laboratory, Seoul National University
As regional emission sources of Greenhouse Gases (GHGs) gain priority for rapid mitigation under the Paris Agreement, microsatellites have emerged as a crucial tool for bridging gaps in global GHG monitoring systems. Missions such as GHGSat, Carbon Mapper, TANGO-Carbon, GEISAT, and GHOSt—representative of platforms under 100 kg—have demonstrated advanced capabilities in detecting localized CO₂ and CH₄ emissions. However, the effectiveness of these systems hinges on their spatial and temporal coverage and observation validity under diverse environmental conditions. Factors such as solar zenith angle (SZA), viewing zenith angle (VZA), surface albedo, weather, and source persistence influence their performance. Furthermore, imaging modes (e.g., Nadir mode, target forward motion compensation (FMC) mode, offshore glint mode) play critical roles in determining their suitability for emissions monitoring at facility-to-city scales. This study evaluates the spatial and temporal coverage performance of microsatellite-based GHG observation systems through numerical simulations incorporating environmental and instrumental conditions. Beyond traditional revisit performance analysis, the study includes advanced assessments that account for valid observation conditions, such as acceptable SZA and VZA ranges, defined by the required minimum signal-to-noise ratio (SNR). Weather conditions and emission source persistence are factored in, with results presented as probabilities of successful observation within a given time window. Given the strong influence of latitude on environmental conditions, the performance of these systems is analyzed for emission sources across various latitudes. Revisit capability is significantly influenced by imaging modes as well, especially for low-earth-orbit microsatellites with limited size and volume. 
Imaging strategies that extend exposure time or optimize the geometrical relationship between the Sun, target, and observatory are critical for ensuring valid observations under non-ideal conditions, such as heterogeneous surface environments or high SZAs. Two well-established imaging modes, target FMC mode and offshore glint mode, are briefly summarized and applied to the coverage analysis. The analysis extends to the spatial and temporal coverage of microsatellite constellations under different operational orbits, including Sun-Synchronous Orbit (SSO) and mid-inclined orbit configurations, achievable through commercial rideshare launches. Observation characteristics such as SZA and local imaging times are evaluated on a seasonal basis to provide deeper insights into orbit selection. Finally, the results are discussed in the context of integrating microsatellite observations into global GHG monitoring initiatives and addressing uncertainties in detection and estimation processes. Keywords: Greenhouse Gases (GHGs), Microsatellite, Coverage Analysis, Constellation.
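The validity screening described above can be illustrated with a standard low-order solar zenith angle approximation and a simple threshold; the declination formula and the 70° cutoff below are generic assumptions, not the study's actual criteria.

```python
import math

def solar_zenith_deg(lat_deg, day_of_year, hour_utc, lon_deg=0.0):
    """Approximate solar zenith angle in degrees from a low-order
    declination formula. Illustrative sketch, accurate to ~1-2 degrees."""
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (hour_utc + lon_deg / 15.0 - 12.0)  # deg from solar noon
    phi, d, h = (math.radians(v) for v in (lat_deg, decl, hour_angle))
    cos_sza = (math.sin(phi) * math.sin(d)
               + math.cos(phi) * math.cos(d) * math.cos(h))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sza))))

def observation_valid(sza_deg, max_sza=70.0):
    """Flag an observation as valid if SZA is below an assumed cutoff."""
    return sza_deg <= max_sza
```

A check of this kind, combined with VZA limits and cloud probabilities, yields the per-target probability of a successful observation within a given time window.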
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Parameterizing the Performance of Emissions Quantification Methods from Space Using Synthetic Data

Authors: Sherrie Wang
Affiliations: Massachusetts Institute Of Technology
Carbon dioxide (CO2) emissions from the energy sector amounted to 36.8 Gt in 2022. Globally, a majority of these emissions come from power plants with self-reported data. An atmospheric-based approach that uses remote sensing is an avenue for high-resolution, top-down, and transparent greenhouse gas emission tracking. However, the efficacy of these methods is limited by sensor sensitivity, topography, the availability of accurate high-resolution wind field information, overpass frequency, and adverse meteorological conditions. Furthermore, CO2 enhancements from anthropogenic sources tend to be small compared to background levels, making it challenging to attribute emissions to specific emitters. While there is a growing body of literature in this space, generalizability remains a concern, as comparisons of existing methodologies have been limited to case studies and have yet to prove scalable beyond large coal power plants. There has been a rise in the use of deep learning for methane emissions quantification, although studies are limited in scale, leaving variation in performance untangled and unparameterized. This work explores the feasibility of scaling up CO2 emissions monitoring using a deep learning approach to examine electricity-generating facilities. We start with a simulated dataset over part of Europe to parameterize how performance varies with meteorological conditions, spatial resolution, and temporal aggregation. We notably explore the generalizability of our methodology by examining performance on out-of-sample facilities. These results are compared against baseline methods common in the literature (e.g., cross-sectional flux and integrated mass enhancement).
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: NarSha – South Korea’s First Pinpoint Methane Monitoring Microsatellite Constellation

Authors: Dr Geuk-Nam Kim, Jae-Pil Park, Dr Sujong Jeong, Jinyoung Shin, Namgyu Kim, Taemin Kim, Dr Jaemin Hong, Dr Yu-Ri Lee, Paweł Snopek, Michał Zięba, Alexandra Jercaianu, Jooin Lee, Dohyun Kim, Dr Hyojeong Lee, Dr Jungkyu Lee, Hyun Min Kim, Dr Kwangwon Lee, Seongwhan Lee, Dr Dong Yeong Chang, Dr Young-Jun Choi
Affiliations: Nara Space Technology Inc., Climate Laboratory, Seoul National University, Scanway S.A., Korea Astronomy and Space Science Institute
To combat global warming and reduce greenhouse gas emissions, 158 countries have committed to the Global Methane Pledge (GMP), launched during COP26 in 2021. The pledge aims to reduce methane emissions by 30% by 2030, making satellite and geospatial technologies vital for monitoring and enforcement. Methane (CH4), which has contributed 30% to global warming since industrialization, is over 80 times more potent than carbon dioxide (CO2) in warming the planet over 20 years. However, methane emissions are often significantly underreported, with some sectors, such as oil and gas, underestimating emissions by 70-80%. In 2020, South Korea’s methane emissions were approximately 27 million metric tons of CO2 equivalent (MtCO2e), accounting for 4.1% of its total greenhouse gas emissions. About 97% of these emissions came from agriculture, waste, and the energy sector, with 71% identified as fugitive emissions originating from energy operations. Limited research on downstream emissions suggests a high likelihood of underreporting. As one of the most industrialized nations, South Korea has seen a sharp rise in greenhouse gas emissions, ranking as the world’s 9th largest emitter in 2021. To meet its carbon neutrality target by 2050, South Korea must reduce methane emissions by 40% from 2018 levels. Accurate measurement of methane sources and distribution is crucial to achieving this goal. Although advancements in technology have improved global methane monitoring, satellite data coverage remains incomplete, especially in equatorial regions, offshore areas, and northern oil and gas production zones. The urgent need to address these challenges led to the launch of the NarSha Project, South Korea’s first spaceborne methane monitoring initiative. Spaceborne observations of methane emissions provide large-scale coverage compared to ground-based measurements, which offer relatively higher accuracy at localized levels.
The point-source spaceborne observatory can help address gaps in knowledge about greenhouse gas emissions, such as leaks, venting, and flaring, by identifying inconsistencies between data collected from space and ground-based instruments. Additionally, integrating data from multiple sources enhances the ability to scale insights from local to regional levels with greater precision. A collaboration between Nara Space Technology, Seoul National University's Climate Laboratory, and the Korea Astronomy and Space Science Institute, together with international partner Scanway, aims to develop and launch the Korean Methane Monitoring Microsatellite (K3M) by the end of 2026. The mission targets the operation of a microsatellite constellation to monitor methane emissions with a detection threshold of 100 kg/h and provide Level 2 data, defined as methane concentrations, within four weeks upon request. The K3M, equipped with a hyperspectral imager operating in the short-wave infrared (SWIR) range, will offer spectral resolution finer than 1 nm within the methane absorption band (1625-1670 nm). With a ground sampling distance of under 50 meters at an altitude of 500 km, the satellite will cover swath widths exceeding 10 km, significantly enhancing local-scale and pinpoint methane monitoring capabilities. The linear variable filter (LVF) based hyperspectral imager is designed as a diffraction grating concept given the limited pointing capability of the microsatellite platform. To meet the payload specification for detecting methane emissions with a precision of 60 ppb, the payload is designed to have a signal-to-noise ratio (SNR) greater than 150 under conditions of 0.2 albedo and a solar zenith angle (SZA) of 40 degrees.
The imager operates in three observation modes to detect methane: target forward motion compensation (FMC) mode, which provides the highest SNR; nadir mode, optimized for low SZA; and offshore glint mode, ideal for imaging targets over water surfaces, such as oil and gas infrastructure in the ocean. The 16U CubeSat-sized microsatellite platform features fine pointing capabilities, ensuring higher SNR during observations, with a pointing accuracy of 0.02 degrees and stability of 0.01 degrees. The platform is equipped with an AI-driven data processing unit and a high-speed X-band transmitter, enhancing overall mission capability. The constellation will consist of 12 satellites distributed across three orbital planes, achieving over 90% system completeness within four weeks. The constellation is set to be fully deployed by the end of 2028, enabling global coverage and delivering precise methane emission data in the shortest possible time. Moreover, the constellation will act as a critical tool for identifying local emitters while significantly improving the accuracy of estimates of methane's contribution to global warming.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Monitoring Gas Flaring Volumes With Deep Learning on Sentinel-2 Data

Authors: Zhuldyz Darynova, Léo Turon, Clément Killisly, Benoit Blanco
Affiliations: SeaOwl Energy Services, CVA Group, TotalEnergies SA
Gas flaring, although decreasing due to industry efforts to eliminate routine practices, still significantly contributes to global greenhouse gas emissions. This study presents an advanced methodology for detecting and quantifying gas flaring volumes by integrating Sentinel-2 satellite imagery with deep learning techniques and radiative transfer modeling. Specifically, we employ the U-Net deep learning architecture for site detection, trained on a dataset of 3,000 manually verified, cloud-free images encompassing global onshore and offshore sites. Each training image comprises two channels: a normalized shortwave infrared (SWIR) image for network training and a binary mask labeling flaring hot pixels, enabling accurate detection of small flare areas that typically occupy only a few pixels in satellite imagery. To enhance the customized U-Net model’s performance, a weighted cross-entropy loss function was utilized, assigning greater weight to flare pixels. The model’s sigmoid activation function allows for pixel-wise detection probabilities ranging from 0 to 1, ensuring robust identification of flaring signals. This optimization reduces false positives and significantly improves detection accuracy over traditional threshold-based methods. Our results demonstrate U-Net’s capability to adapt to small detection areas while minimizing artifacts, offering superior detection of flaring activities even in complex backgrounds. We quantified the radiant heat from detected flares using Planck curve fitting for offshore sites and SWIR radiance models for onshore sites. Calibration against reported data from four sites yielded high accuracy, achieving coefficients of determination (R²) of 0.91 for offshore and 0.98 for onshore flaring volumes, with discrepancies below 18%. The methodology outperforms conventional tools such as VIIRS imagery, particularly in detecting smaller and weaker flares often overlooked by lower-resolution systems.
A robust data pipeline, facilitated by Google Earth Engine, streamlines the process by incorporating cloud masking, atmospheric corrections, and image preprocessing. This enables efficient handling of large datasets and ensures scalability for global monitoring efforts. Validation across additional sites, including high-flaring regions, confirmed the model’s reliability, with errors below 15% in areas experiencing significantly higher flaring activities. This study underscores the potential of integrating advanced machine learning techniques with remote sensing data for precise environmental monitoring. The validated approach provides critical insights for policymakers and industry stakeholders, facilitating improved regulation and mitigation strategies. Future work will extend this methodology by incorporating data from additional satellite platforms to enhance temporal coverage and refine interpolation accuracy for global flaring activities.
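The weighting scheme described above can be made concrete with a short sketch. Everything here is illustrative: the function name, the 10:1 flare weight, and the plain-NumPy formulation are our assumptions, not the authors' implementation (which would sit inside the U-Net training loop):

```python
import numpy as np

def weighted_bce(probs, mask, flare_weight=10.0):
    """Pixel-wise weighted binary cross-entropy for rare flare pixels.

    probs: sigmoid outputs in (0, 1); mask: binary labels, 1 = flaring hot
    pixel, 0 = background. flare_weight up-weights the rare flare class
    (the 10.0 default is an illustrative value, not from the paper).
    """
    probs = np.clip(probs, 1e-7, 1.0 - 1e-7)          # numerical safety
    weights = np.where(mask == 1, flare_weight, 1.0)  # heavier flare pixels
    per_pixel = -(mask * np.log(probs) + (1 - mask) * np.log(1 - probs))
    return float(np.mean(weights * per_pixel))
```

With this loss, misclassifying a flare pixel costs ten times more than misclassifying a background pixel, which counteracts the extreme class imbalance of few-pixel flares.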
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Leveraging Satellite Data for Localized CO2 Emission Measurements to Catalyze New Business Opportunities: Carb-Chaser System

Authors: Frederic Pistone, Alexandre Guevezov, Denis Simeoni, Pascal Prunet, Eric Péquignot, Alain Retière, Hervé Hamy, Céline Serrano, Jérôme Valour
Affiliations: Thales Alenia Space, OHB Sweden, IRT Saint Exupery, SPASCIA, WaltR, Everimpact, QAIrbon
The rapid advancement in satellite technology has enabled precise monitoring of greenhouse gas emissions, particularly CO2, on a global scale. However, the ability to measure CO2 emissions at a localized level has profound implications for both scientific research and emerging commercial markets. In this presentation, we explore how high-resolution satellite data, powered by Multispectral Partial Interferometry, can provide detailed insights into localized CO2 emissions and how this data can be harnessed to create new business opportunities. The maturation of Multispectral Partial Interferometry technology is enabling the development of new satellite constellations tailored to meet these emerging needs. The integration of satellite-derived CO2 measurements enables businesses to monitor environmental compliance, optimize carbon offset strategies, and enhance sustainability reporting. Additionally, the availability of such precise data supports new business avenues in environmental regulation, allowing companies to adhere to and anticipate stringent regulatory frameworks. Our findings demonstrate the potential for satellite data to become a pivotal asset in the commercial data market, driving innovation in sectors such as environmental certification, urban planning, agriculture, and air quality. The implications for environmental policy-making and corporate governance are significant, as businesses increasingly seek actionable, accurate data to inform their environmental initiatives. This paper provides a comprehensive analysis of the current state of satellite-based CO2 measurements, the challenges and opportunities in local-scale monitoring, and the potential pathways for commercialization of this valuable data resource. The work presented is supported by the French and Swedish Space Agencies CNES and SNSA, with Thales Alenia Space selected as the study’s prime contractor.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Development of a Level 1 to Level 4 Processing System for Estimating Greenhouse Gas Emissions of Localized Sources: Methane Emissions as Derived from Sentinel-5 Precursor, PRISMA and EnMAP

Authors: Stefan Noël, Michael Hilker, Jonas Hachmeister, Michael Buchwitz, Oliver Schneising-Weigel, Maximilian Reuter, Heinrich Bovensmann, John P. Burrows, Hartmut Bösch
Affiliations: IUP University of Bremen
Methane is an important greenhouse gas significantly contributing to climate change. A large fraction of methane is emitted by localized sources such as landfills, oil and gas fields and coal mines. The spectral radiance measurements of satellite instruments such as TROPOMI on Sentinel-5 Precursor (S5P), PRISMA and EnMAP are sensitive to methane concentration changes near the Earth’s surface. This information can be extracted from the measured radiances (Level 1 (L1) product) by retrieving atmospheric methane vertical column information. In our case, the resulting Level 2 (L2) product is either the column-averaged dry-air mole fraction of methane (XCH4) or its anomaly relative to the scene average. Emission information (Level 4 (L4) product) can be obtained from the L2 product in a second processing step. Within the ESA projects MEDUSA (https://climate.esa.int/nl/projecten/medusa/) and SMART-CH4 (https://smart-ch4.lsce.ipsl.fr/) we are developing methods to derive methane emission information from satellite sensors such as TROPOMI, PRISMA and EnMAP. For TROPOMI we use the L2 XCH4 product generated with the Weighting Function Modified DOAS (WFMD) algorithm within the ESA project GHG-CCI (https://climate.esa.int/en/projects/ghgs/). For sensors with high spatial resolution, such as PRISMA and EnMAP, we have developed a retrieval suite to generate L2 products using methods such as the Matched Filter (MF). To obtain emission information we use a Cross-Sectional-Flux (CSF) based retrieval scheme. We are currently developing a processing scheme up to L4 which can be applied to data from different satellite instruments in a consistent and largely automated way, using, as far as possible, the same methods for all sensors. The latest status of this activity will be presented, as well as first results in terms of emission estimates for various localized emission sources.
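The Cross-Sectional-Flux idea has a compact textbook form: integrate the methane column enhancement along a transect across the plume and multiply by an effective wind speed normal to the transect. The sketch below illustrates only that formula, under units we chose ourselves; it is not the MEDUSA/SMART-CH4 processor, and the function name is ours:

```python
import numpy as np

def csf_flux(delta_omega, pixel_width_m, wind_speed_ms):
    """Cross-Sectional-Flux emission estimate from one plume transect.

    delta_omega:   methane column enhancement per pixel along the
                   transect (kg m^-2).
    pixel_width_m: across-track pixel size (m).
    wind_speed_ms: effective wind speed normal to the transect (m s^-1).
    Returns the emission rate in kg s^-1.
    """
    line_density = np.sum(delta_omega) * pixel_width_m  # kg per metre of plume
    return line_density * wind_speed_ms
```

In practice several transects downwind of the source are averaged, and the dominant uncertainty is usually the effective wind speed rather than the column integral.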
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Monitoring Methane Variations and Anomalies in Coal Mining Areas of Shanxi Province, China Using Sentinel-5P Data

Authors: Mr. Botao He, Prof. Yong Xue, Mr. Wenhao Liu
Affiliations: China University of Mining and Technology, Nanjing University of Information Science & Technology
Methane (CH₄) is the second most important greenhouse gas in the world. The atmospheric sources of CH₄ primarily stem from two categories: natural and anthropogenic. Natural sources encompass wetlands, oceans, termites, methane hydrates, and vegetation, while anthropogenic sources involve natural gas production, coal mining activities, biomass burning, rice fields, livestock, and landfills. Coal mining activities are one of the important sources of CH₄ emissions, and emission characterization in the coal sector is also important for estimating atmospheric methane. This paper focuses on Shanxi province, using the Global Coal Mine Tracker (GCMT) and Sentinel-5P (S5P) data. The GCMT data provide asset-level details on coal type, capacity, production, reserves and resources, methane emissions, geolocation, and over 30 other categories. The column-averaged dry-air mixing ratio of CH₄ (XCH₄) from S5P was used for CH₄ anomaly identification. To guarantee top-tier data quality, we filtered out pixels with a qa_value of less than 0.5. The main research contents include characterization of XCH₄ temporal variations and monitoring of XCH₄ anomalies. The results show that the monthly average concentration of XCH₄ in Shanxi was higher than that over the whole of China from January 2019 to December 2023. The monthly mean concentration curves show consistent trends, with XCH₄ increasing year by year. Analysis of monthly average XCH₄ data from different years shows that the XCH₄ changes in Shanxi have an obvious seasonal pattern (autumn > winter > summer > spring). The results also show the spatial distribution of the annual mean XCH₄ and XCH₄ enhancement in Shanxi from 2019 to 2023. The XCH₄ enhancement for a pixel is calculated by subtracting the median XCH₄ of the S5P observation scene. We found that XCH₄ enhancement in Shanxi has clear spatial distribution characteristics.
There are clearly anomalous areas with high XCH₄ enhancement values (greater than 50 ppb) in central (Lvliang City), central-eastern (Yangquan City) and southeastern Shanxi (Changzhi City and Jincheng City). They are located in the Qinshui Coalfield, one of the six major coalfields. The maximum values in these high-value anomaly areas are 158.82 ppb (2019), 181.73 ppb (2020), 153.16 ppb (2021), 177.70 ppb (2022) and 112.98 ppb (2023), respectively. These high-value anomaly areas are consistent with the distribution of coal mine points in the GCMT data and with coal mine remote sensing images of the anomalous XCH₄ enhancement areas, which explains the abnormally high XCH₄ enhancement values there. In addition, when analyzing the annual mean spatial distribution of XCH₄, we noticed an unexpected pattern: the XCH₄ values were consistently zero in some areas, such as the central, eastern, and southern parts of Shanxi. Combined with a Digital Elevation Model (DEM), it is apparent that these areas of missing data lie at higher elevations, more than 2000 meters above sea level. The steep terrain likely causes the missing data in these mountainous regions.
Keywords: XCH₄, Sentinel-5P, Coal Mine, Remote Sensing
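The enhancement calculation described above (qa_value filtering, then subtraction of the scene median) can be sketched in a few lines; the function name and the NaN handling for rejected pixels are our assumptions:

```python
import numpy as np

def xch4_enhancement(xch4, qa, qa_min=0.5):
    """Per-pixel XCH4 enhancement relative to the scene median.

    xch4: XCH4 values (ppb) for one S5P observation scene.
    qa:   qa_value per pixel; pixels below qa_min are excluded (NaN),
          mirroring the quality filtering described above.
    Returns xch4 minus the median of the remaining scene values.
    """
    x = np.where(qa >= qa_min, xch4, np.nan)
    return x - np.nanmedian(x)
```

Subtracting the scene median rather than the mean makes the background estimate robust to the very anomalies one is trying to detect.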
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Integrating Multi-Source Remote Sensing for Scalable Methane Emission Monitoring in Irrigated Rice Cultivation

Authors: Dr. Nuntikorn KITRATPORN, Mr. Panu NUEANGJUMNONG, Thuy Le Toan, Stéphane Mermoz, Dr. Chompunut CHAYAWAT, Ms. Kanjana KOEDKURANG
Affiliations: Geo-informatics and Space Technology Development Agency (GISTDA), Centre D’Etudes Spatiales de la Biosphère (CESBIO), GlobEO
Methane (CH4) significantly contributes to global warming due to its high heat-trapping capacity. As global efforts to mitigate methane emissions intensify following commitments made during COP26, the agriculture sector is under increased scrutiny due to its substantial contributions. Rice cultivation in particular is a prominent source of methane emissions, driven largely by water management practices, cultivation area, and cropping density. Thailand has a total land area of approximately 51.31 million hectares, of which nearly 50 percent is used for crop production; rice cultivation occupies approximately 40 percent of this agricultural area. To address methane emissions from this sector, low-emission rice cultivation practices based on reduced or intermittent flooding of rice paddies have been promoted. Therefore, accurate and scalable methods to monitor rice cultivation activities are essential for methane emission estimation. To demonstrate the potential of remote sensing technology, this study integrates multi-source satellite imagery with in-situ measurements to obtain the inputs needed for methane estimation based on the Intergovernmental Panel on Climate Change (IPCC) Monitoring, Reporting and Verification (MRV) system. A web-based platform was also developed to visualize the potential impacts of scaling up low-emission cultivation practices. Satellite-based synthetic aperture radar (SAR) data from C-band Sentinel-1 were employed to classify rice phenology, and L-band ALOS-2 PALSAR-2 data for field submersion status. The time series of Sentinel-1 polarization backscatter was analyzed to determine sowing dates and cultivation periods. Water regime classification, which is essentially linked to methane emission scaling, was conducted using full polarimetric ALOS-2 PALSAR-2 data with pixel-based thresholding and parcel-level aggregation to distinguish between continuous submersion, single drainage, and multi-drainage periods, as defined by the IPCC (2019).
Ground-based water-level measurements were carried out in 40 rice fields, while methane emissions were measured in one experimental field using both flux towers equipped with eddy covariance systems and the manual gas chamber method. Initial results illustrate the effectiveness of satellite data in estimating rice phenology and submersion status over large areas, overcoming the scalability limitations of in-situ sensors. Comparisons between satellite-derived estimates and in-situ flux measurements of methane emissions showed promising agreement, validating the robustness of this hybrid approach. The web-based platform further facilitates stakeholder engagement, so that the status of mitigation efforts can be evaluated and their potential impact anticipated.
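For context on the IPCC-based estimation mentioned above: the Tier 1 seasonal formula scales a baseline daily emission factor by a water-regime scaling factor, which is where the satellite-derived submersion classes enter. The sketch below shows only that structure; the function name and default values are illustrative placeholders, not the study's calibrated figures:

```python
def rice_ch4_emission(area_ha, days, ef_baseline=1.19, sf_water=1.0):
    """IPCC Tier 1-style seasonal CH4 estimate for a rice parcel.

    CH4 (kg) = EFc * SFw * days * area, where EFc is a baseline daily
    emission factor (kg CH4 ha^-1 day^-1) for continuously flooded
    fields and SFw the water-regime scaling factor (< 1 for single or
    multiple drainage). Defaults are illustrative assumptions.
    """
    return ef_baseline * sf_water * days * area_ha
```

A parcel classified from ALOS-2 PALSAR-2 as multi-drainage would thus receive a lower SFw than a continuously submerged one, directly lowering its estimated seasonal emission.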
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Documentary best practice for facility scale methane emissions from remote sensing

Authors: Paul Green, Alex Arnstein, John Worden, Annmarie Eldering, Evan Sherwin, Adam Brandt, Paul Palmer, Kalyani Ramanan
Affiliations: National Physical Laboratory, UK, NASA JPL, NIST, Lawrence Berkeley National Laboratory, Stanford University, University of Edinburgh
In response to international agreements, including the COP26 Methane Pledge and the COP28 Oil and Gas Decarbonization Charter, the reduction of fugitive methane emissions from industrial processes is high on the international agenda. The response over the last few years has taken several forms, from the launch of the UNEP IMEO programme to the development and launch of new monitoring capability. Actions to meet these objectives are being enshrined in national targets and policies, and most recently in new regulation. 2023 legislation in the US and EU not only provides an obligation to verifiably reduce emissions but also opens the door to the use of new methods and techniques to identify and report leaks and to perform third-party verification of the Oil & Gas sector's emission reporting to the regulators. Simultaneously, listed companies are being required to report their emissions in corporate disclosures as part of their evidence base in moving towards a low-carbon economy. These diverse goals can only be quantifiably met with verified data, with satellite-derived measurements being a key part of the solution to this data need. The global reach and inherent spatial sampling/mapping properties of on-orbit instruments lend themselves to consistent surveys across borders, cataloguing sources (and sinks) of GHG emissions to the atmosphere and identifying their geographic locations. To address this data need, a number of new on-orbit sensors from commercial and philanthropic (new space) stakeholders are joining the longer-running public missions tracking methane concentrations at a range of scales (tens of metres to a few kilometres). Innovations have also shown that some public missions not originally designed to monitor methane can be used to do so, albeit limited to the more intense point sources.
This plenitude of satellite data has enabled multiple actors to enter the methane emissions product landscape, ranging from start-ups to academia, space agencies, on-orbit asset owners and international organizations. This rapidly growing yet organic trade in methane emissions data has developed to meet a market need but is susceptible to a number of pitfalls. In a relatively immature and rapidly developing field, divergent emissions estimates from multiple actors will cast doubt on credibility, whereas questionable methods and data quality from new, non-expert actors could significantly undermine the reputation of the entire sector. A suitably agile, quantifiably evidenced best-practice framework would enable this market: endorsing reputable suppliers, rooting out bad actors and providing the necessary confidence to the user/customer base. Internationally adopted best practice, based on transparency, traceability, independence, and evidenced QA, would enable and ensure fit-for-purpose data, interoperable between suppliers, as an EO contribution to impactful climate action feeding into the wider carbon-cycle economy. This presentation will report on the CEOS-led initiative within the GHG task team to develop best practice for quantifying, reporting, validating and assessing facility-scale methane emissions using remote sensing. The programme has run for two years, with the aim of an internationally adopted v1.0 practical implementation of a best-practice standard by the end of 2025.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Strengthening evidence of mitigation actions through MARS multi-satellite approach

Authors: Manuel Montesino-SanMartin, Dr. Javier Gorroño, Dr. Gonzalo Mateo-García, Dr. Itziar Irakulis-Loitxate
Affiliations: International Methane Emissions Observatory, United Nations Environment Programme, Research Institute of Water and Environmental Engineering (IIAMA), Universitat Politècnica de València (UPV)
Methane is a critical target for rapid mitigation due to its large warming potential and relatively short lifetime in the atmosphere. The Methane Alert and Response System (MARS), managed by UNEP’s International Methane Emissions Observatory (IMEO), detects and monitors large methane emissions globally through satellite technology, and notifies governments and Oil and Gas companies to drive mitigation action. MARS integrates data from more than 12 satellite instruments with multiple designs (e.g., band imagers and hyperspectral sensors) and methane detection capabilities (~1-1000 ppb). In 2024, MARS contributed to the successful mitigation of large methane emission sources in the Oil and Gas sector in several countries. Mitigations require two types of evidence for IMEO confirmation: (1) proof of action from the operator addressing the notified emission, and (2) an interruption in the plumes detected by MARS. While the operator's proof of action and the reported ground situation are essential for confirming the mitigation, satellite data play a key role in independently verifying that the emission is below the detection limits. Here, we explore a quantitative framework for using satellite data to collect the second type of evidence. The framework relies on the Probability of Detection (PoD) for new observations after the mitigation date. The PoD represents the likelihood of detecting an emission at a specified flowrate under given conditions. MARS calculates the PoD by running its detection and quantification pipeline with simulated plumes in emission-free scenes following the mitigation date. As new evidence is collected, PoDs are combined to assess the confidence in the mitigation action. Once a specific confidence threshold is exceeded, the mitigation can be confirmed.
Here we describe the methods, present PoD calculations across various instruments, environments (both onshore and offshore), and surface types (temporally constant and variant), and finally demonstrate their use on a real-world mitigation scenario. This probabilistic approach helps non-experts understand that satellite data do not constitute definitive proof of the absence of an emission, but they do offer additional confidence that reductions have taken place.
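One plausible way to combine per-overpass PoDs into an overall mitigation confidence, assuming the observations are independent (an assumption of ours, not stated in the abstract), is to take the complement of the joint probability of missing the emission in every pass:

```python
import numpy as np

def mitigation_confidence(pods):
    """Combine per-overpass Probabilities of Detection.

    If the source were still emitting at the reference flowrate, the
    chance of missing it in every pass is prod(1 - PoD_i); the
    complement is taken as confidence that the emission has stopped.
    Assumes independent observations (our simplification).
    """
    pods = np.asarray(pods, dtype=float)
    return float(1.0 - np.prod(1.0 - pods))
```

Under this scheme, two overpasses each with PoD 0.5 yield a confidence of 0.75, and additional non-detections push the confidence toward a chosen confirmation threshold.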
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: CO2 mixing ratio profiling in the lower troposphere using the Raman lidar technique

Authors: Marco Di Paolantonio, Davide Dionisi, Dr Annalisa Di Bernardino, Tatiana Di Iorio, Noemi Franco, Giovanni Giuliano, Anna Maria Iannarelli, Gian Luigi Liberti, Donato Summa, Paolo Di Girolamo
Affiliations: Institute of Marine Sciences, Italian National Research Council (CNR-ISMAR), University of Basilicata, Sapienza - University of Rome, Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), Serco Italia SpA, Institute of Methodologies for Environmental Analysis, Italian National Research Council (CNR-IMAA)
While CO₂ has a relatively uniform vertical distribution in the atmosphere, making its total atmospheric content well-monitored through in-situ and satellite measurements, accurate quantitative knowledge of source/sink processes, which occur mostly near the surface, requires vertically detailed sampling as well as sub-diurnal temporal sampling. Accurate CO₂ profiles with high vertical and temporal resolution, such as those potentially provided by ground-based Raman lidars, have significant potential to impact climate research and modelling. These measurements could help address current gaps in the comprehension of CO₂ sink mechanisms by the biosphere, e.g., estimating forests’ carbon capture capabilities, and in quantifying CO₂ sources. Although the space and ground network for CO₂ monitoring has expanded steadily over the past years, it does not provide the spatial and temporal resolution needed for a quantitative analysis of sources and sinks. The ro-vibrational Raman lidar technique, employed to measure water vapor mixing ratio profiles, has been investigated over the past decades for its application to CO₂ profile measurements. Despite its high potential, the technique has faced challenges due to the low signal-to-noise ratio associated with CO₂ Raman signals. However, recent advancements in laser technology, spectral selection, and signal detection devices have renewed interest in the technique, leading to the development of a new generation of lidar systems specifically designed for this purpose. In the frame of the project CONCERNING (COmpact RamaN lidar for Atmospheric CO₂ and ThERmodyNamic ProfilING), a Raman lidar has been developed. This instrument is capable of providing accurate and high-resolution profiles of the CO₂ and water vapor mixing ratios, temperature, particle backscattering, extinction and depolarization ratio. The results of the CO₂ profile measurements carried out by this system will be presented at the conference.
The CO₂ mixing ratio is derived from ro-vibrational Raman backscattered signals, calibrated using a novel method adapted from water vapor Raman lidar measurements. Specifically, the calibration constant is determined vicariously using daytime lidar background signals proportional to the zenithal sky radiance. Field measurements were conducted in Potenza, Italy, integrated over a 2-hour time window. The obtained profile has an uncertainty ranging from 1 ppm with 150 m vertical resolution at 1 km to 5 ppm with 300 m vertical resolution at around 4 km, enough to clearly highlight concentration gradients in the lower troposphere. The CO₂ profiles obtained were compared with data from the OCO-2 satellite, showing agreement within one standard deviation.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: The Space Carbon Observatory Next Step (SCARBON)

Authors: Laurence Croizé, Laure Brooker, Céline Belloc, Jordi Corbera, Yann Ferrec, Olga Gonsiorová, Cécile Gourgon, Silvere Gousset, Mélissa Heidir, Thomas Hurot, Lenka Medová, Friedemann Reum, Juana Rodrigo, Eric Tatulli
Affiliations: Dota, Onera, Université Paris-Saclay, Airbus Defence and Space, Institut de Planétologie et d’Astrophysique de Grenoble, Université Grenoble-Alpes,, Institut Cartogràfic i Geològic de Catalunya, Absolut System, German Aerospace Center (DLR), LTM, University Grenoble-Alpes/CNRS, GRANT GARANT SRO
The Space CARbon Observatory Next step (SCARBOn) project seeks to advance the development of a satellite-based system that will monitor greenhouse gas (GHG) emissions with unprecedented frequency and accuracy. The system will comprise a constellation of small satellites equipped with an innovative miniaturized CO2 and CH4 detection instrument, NanoCarb, alongside a compact aerosol sensor, SPEXone. With these instruments, the SCARBOn system is expected to achieve twice-daily, high-precision global measurements, with the ability to monitor the diurnal variations of fossil CO2 emissions. This data will be crucial for tracking emissions from major point sources such as large power plants (~20 MtCO2 per year) and major cities (>35 MtCO2 per year). The SCARBOn satellite constellation is designed to detect and quantify these emissions. Each NanoCarb spectral imager is designed with a 200 km swath and multi-pixel imaging capabilities. The targeted spatial resolution is 2x2 km, and the targeted column-concentration precision is 1 ppm for CO2 and 6 ppb for methane. Rather than pursuing ultra-high spatial resolution imagery, SCARBOn prioritizes sub-daily temporal resolution coupled with comprehensive spatial coverage to improve the accuracy of current emissions inventories and to monitor emission trends and the impact of mitigation measures. Although the primary focus is CO2 detection, the system will also capture methane as a secondary target. The SCARBOn project, which began in January 2024 with Horizon Europe funding from the European Commission, is a continuation of the earlier SCARBO project [1, 2, 3]. It is led by Airbus Defence and Space in collaboration with an interdisciplinary consortium that includes space industry partners, SMEs, and scientific and technical institutes.
The overarching goal of SCARBOn is to mature the technical and industrial definition of the NanoCarb instrument and of the SCARBOn constellation, targeting an operational system availability by the end of the decade. This paper firstly presents an overview of the key achievements from the SCARBO project, which have significantly advanced the development of SCARBOn. In particular, the major feasibility challenges associated with the NanoCarb instrument have been overcome: one of the critical breakthroughs was the successful extraction of geophysical data from partial interferograms. Additionally, the manufacturing process for interferometric plates was validated, marking a key milestone in the instrument’s development. As a result, the proof of concept for the NanoCarb instrument has progressed substantially, raising the sensor technology's readiness level from TRL2 to TRL4. Furthermore, the complete retrieval chain, encompassing the entire data processing sequence (from Level 0 to Level 2) for SCARBO-SPEX, has been finalized. Simulations have confirmed the high potential of NanoCarb's truncated interferogram—despite its extremely compact design—to deliver Level 2 performance reaching and even outperforming the precision requirements thanks to the synergistic advantages of integrating NanoCarb measurements with those from SPEXone. Furthermore, from a system perspective, both instrument concepts have been confirmed as compatible for deployment on small satellites (130 kg class). This compatibility paves the way for the envisioned cost-effective constellation of 24 satellites, which promises to significantly reduce the time to market. 
SCARBOn's main challenges are now (i) to pursue the TRL raising of the NanoCarb instrument by improving its optical design and the manufacturing of key components; (ii) to address the entanglement of GHG and albedo information in truncated interferograms and assess mitigation actions, then compare the approach with other observing concepts; (iii) to pursue demonstration campaign opportunities to validate the proposed improvements for the space instrument. In the second part of this paper, an example of a SCARBOn outcome will be presented. It will show how a co-design approach based on the Cramér-Rao Lower Bound was used to update the configuration of the interferometric core for the NanoCarb-CO2 and NanoCarb-CH4 airborne prototypes. Theoretical results show a sensitivity increase of about x2 for CO2 and x3 for CH4. The main factor contributing to this improvement is the Fabry-Perot finesse, which helps to better discriminate between the different gas signatures. However, increasing the finesse reduces the mean transmission, which would negatively impact the signal. To counter this effect, the current system design balances the reduced transmission with longer integration times, allowing for sustained performance even with a lower transmission rate. Further refinement of the design was achieved using the MEDOC model, which incorporates additional parameters such as the depths of the different etchings and the thickness of the wafer, which was ordered at the beginning of last summer. The final design for the CO2 camera was compared with the earlier CO2 SCARBO design. The study revealed that the CO2 scale factor random error is reduced by more than a factor of two. Additionally, performance has become more constant with the variation of the incidence angle. Correlations between albedo and the retrieved CO2 products are also decreased. Finally, preliminary results on the flight model design improvement and performance assessment will be presented.
In summary, preliminary results from the refined model design indicate continued improvement in both sensitivity and accuracy. These advancements are expected to enhance the overall performance of the SCARBOn system, moving closer to the project’s goal of providing reliable, precise measurements of greenhouse gas emissions from fossil fuel sources. This work has received funding from the European Union’s H2020 research and innovation programme under grant agreement No 769032 (SCARBO) and the Horizon Europe programme under grant agreement No 101135301 (SCARBOn).
[1] L. Brooker Lizon-Tati, S. Val Serra, H. Bovensman, C. Crevoisier, M. Dogniaux, L. Croizé, Y. Ferrec, E. Le Coarer, S. Gousset, B. Sic, M. Smit, and the SCARBO consortium, SCARBO: A constellation of small satellites for the monitoring of anthropogenic greenhouse gases, 73rd International Astronautical Congress (IAC), Paris, France, 18-22 September 2022.
[2] S. Gousset, L. Croizé, E. Le Coarer et al., "NanoCarb hyperspectral sensor: on performance optimization and analysis for greenhouse gas monitoring from a constellation of small satellites", CEAS Space J., Sept. 2019.
[3] M. Dogniaux, C. Crevoisier, S. Gousset, E. Le Coarer, Y. Ferrec, L. Croizé, ... & L. Brooker (2021). The Space CARBon Observatory (SCARBO) concept: Assessment of XCO2 and XCH4 retrieval performance. Atmospheric Measurement Techniques Discussions, 1-38.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Capabilities of the Airborne Visible InfraRed Imaging Spectrometer 4 (AVIRIS-4) for the quantification of anthropogenic CH4 emissions

Authors: Sandro Meier, Marius Voegtli, Andreas Hueni, Prof. Dr. Dominik Brunner, PD Dr. Gerrit Kuhlmann
Affiliations: Empa, Laboratory for Air Pollution/Environmental Technology, Remote Sensing Laboratories, Department of Geography, University of Zurich
Airborne imaging spectroscopy is a powerful tool to survey large areas of methane emission hot spots such as fossil fuel production regions, to detect unknown leaks and to reduce the uncertainties in known sources (e.g., Alexe et al. 2015; Fraser et al. 2013). However, most of the currently available point source imagers of CH4 are limited by spatial and/or spectral resolution, which hinders the precision and accuracy of the emission estimates (Bousquet et al. 2018). The Airborne Visible InfraRed Imaging Spectrometer 4 (AVIRIS-4) is a state-of-the-art imaging spectrometer which can also be used for CH4 detection (Green et al. 2022; Shaw et al. 2022). It is part of the Swiss Airborne Research Facility for the Earth System (ARES), which provides a unique platform for high-resolution airborne observations. AVIRIS-4 offers enhanced stability and signal-to-noise ratio (SNR) with its new Compact Wide-Swath Imaging Spectrometer II (CWIS-II). This improvement enables high-quality CH4 retrievals for ground pixel sizes ranging from 0.3 to 5.0 m, depending on the flight altitude. Such a high spatial resolution is needed to detect and thus quantify the plumes of small CH4 sources. In July 2024, we conducted test campaigns focusing on landfills, compressor stations, and coal mines across Italy, France and Germany. In September 2024, we participated in a controlled release experiment in Pau, France, followed by measurement flights over coal mine areas in Upper Silesia, Poland. In total, we were able to acquire over 160 flight lines at various altitudes. We employed a matched filter algorithm to retrieve CH4 column concentrations from imaging spectroscopy (Foote et al., 2020). Emission estimates were subsequently derived using the Integrated Mass Enhancement (IME) method (Kuhlmann et al., 2024). To ensure the robustness of these approaches, we tested our findings through high-resolution simulations conducted with the MicroHH model (van Heerwaarden et al., 2017).
The results of the release experiment are used to characterize the capability to detect and quantify methane emissions with the AVIRIS-4 instrument. Our preliminary analysis shows that AVIRIS-4 can detect methane at a spatial resolution of 30-50 cm, which allows for the differentiation of spatially proximate plumes. The initial findings suggest that AVIRIS-4 can be employed to estimate emissions exceeding 10 kg/h. Additionally, we present CH4 maps and first emission estimates from a landfill in Italy and coal mine shafts in Poland.

REFERENCES

Alexe, M., Bergamaschi, P., Segers, A., Detmers, R., Butz, A., Hasekamp, O., Guerlet, S., Parker, R., Boesch, H., Frankenberg, C., Scheepmaker, R.A., Dlugokencky, E., Sweeney, C., Wofsy, S.C., & Kort, E.A. (2015). Inverse modelling of CH4 emissions for 2010–2011 using different satellite retrieval products from GOSAT and SCIAMACHY. Atmos. Chem. Phys., 15, 113-133.

Bousquet, P., Pierangelo, C., Bacour, C., Marshall, J., Peylin, P., Ayar, P.V., Ehret, G., Bréon, F.-M., Chevallier, F., Crevoisier, C., Gibert, F., Rairoux, P., Kiemle, C., Armante, R., Bès, C., Cassé, V., Chinaud, J., Chomette, O., Delahaye, T., Edouart, D., Estève, F., Fix, A., Friker, A., Klonecki, A., Wirth, M., Alpers, M., & Millet, B. (2018). Error Budget of the MEthane Remote LIdar missioN and Its Impact on the Uncertainties of the Global Methane Budget. Journal of Geophysical Research: Atmospheres, 123, 11,766-11,785.

Foote, M.D., Dennison, P.E., Thorpe, A.K., Thompson, D.R., Jongaramrungruang, S., Frankenberg, C., & Joshi, S.C. (2020). Fast and Accurate Retrieval of Methane Concentration From Imaging Spectrometer Data Using Sparsity Prior. IEEE Transactions on Geoscience and Remote Sensing, 58, 6480-6492.

Fraser, A., Palmer, P.I., Feng, L., Boesch, H., Cogan, A., Parker, R., Dlugokencky, E.J., Fraser, P.J., Krummel, P.B., Langenfelds, R.L., O'Doherty, S., Prinn, R.G., Steele, L.P., Van Der Schoot, M., & Weiss, R.F. (2013). Estimating regional methane surface fluxes: the relative importance of surface and GOSAT mole fraction measurements. Atmospheric Chemistry and Physics, 13, 5697-5713.

Green, R.O., Schaepman, M.E., Mouroulis, P., Geier, S., Shaw, L., Hueini, A., Bernas, M., McKinley, I., Smith, C., Wehbe, R., Eastwood, M., Vinckier, Q., Liggett, E., Zandbergen, S., Thompson, D., Sullivan, P., Sarture, C., Gorp, B.V., & Helmlinger, M. (2022). Airborne Visible/Infrared Imaging Spectrometer 3 (AVIRIS-3). In 2022 IEEE Aerospace Conference (AERO) (pp. 1-10).

IPCC (2013). Climate change 2013: the physical science basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, 1535.

Kuhlmann, G., Koene, E.F.M., Meier, S., Santaren, D., Broquet, G., Chevallier, F., Hakkarainen, J., Nurmela, J., Amorós, L., Tamminen, J., & Brunner, D. (2024). The ddeq Python library for point source quantification from remote sensing images (Version 1.0).

Shaw, L.A., Geier, S., Mckinley, I.M., Bernas, M.A., Gharakhanian, M., Dergevorkian, A., Eastwood, M.L., Mouroulis, P., & Green, R.O. (2022). Design, alignment, and laboratory calibration of the Compact Wide Swath Imaging Spectrometer II (CWIS-II). Imaging Spectrom. XXV: Appl., Sens., Process. 12235, 1223502-1223502–10. https://doi.org/10.1117/12.2634282

Thorpe, A.K., Roberts, D.A., Bradley, E.S., Funk, C.C., Dennison, P.E., & Leifer, I. (2013). High resolution mapping of methane emissions from marine and terrestrial sources using a Cluster-Tuned Matched Filter technique and imaging spectrometry. Remote Sensing of Environment, 134, 305-318.

van Heerwaarden, C.C., van Stratum, B.J.H., Heus, T., Gibbs, J.A., Fedorovich, E., & Mellado, J.P. (2017). MicroHH 1.0: a computational fluid dynamics code for direct numerical simulation and large-eddy simulation of atmospheric boundary layer flows. Geosci. Model Dev., 10, 3145-3165.
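The matched-filter retrieval referenced in this abstract (Foote et al., 2020) can be illustrated with a simplified sketch. This version assumes a diagonal background covariance estimated per scene; the published method uses a full covariance with a sparsity prior, and the toy spectra below are invented:

```python
def matched_filter(pixels, target):
    """Score each pixel spectrum against a target signature.

    Simplified matched filter with a diagonal background covariance:
    score = (x - mu)^T S^-1 t / (t^T S^-1 t), with mu and S estimated
    from the scene itself.
    """
    n, d = len(pixels), len(target)
    mu = [sum(p[i] for p in pixels) / n for i in range(d)]        # background mean
    var = [sum((p[i] - mu[i]) ** 2 for p in pixels) / n + 1e-12   # per-band variance
           for i in range(d)]
    denom = sum(target[i] ** 2 / var[i] for i in range(d))
    return [sum((p[i] - mu[i]) * target[i] / var[i] for i in range(d)) / denom
            for p in pixels]

# Invented toy scene: three background spectra plus one pixel with an
# enhancement of 0.5 units along the (hypothetical) CH4 target signature.
target = [1.0, 2.0]
pixels = [[1.0, 2.0], [1.2, 1.9], [0.8, 2.1], [1.5, 3.0]]
scores = matched_filter(pixels, target)
```

The score is largest for the pixel containing the target-shaped enhancement, which is the property the retrieval exploits to map column concentration enhancements.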
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Monitoring Methane Emissions From Landfills in Greece Using Sentinel-5P TROPOMI Data

Authors: Thomas Panou, Doctor Maria Elissavet Koukouli, Professor Dimitrios Balis
Affiliations: Laboratory of Atmospheric Physics, Department of Physics, Aristotle University of Thessaloniki
Methane, a significant greenhouse gas (GHG), is known for its high global warming potential (GWP), approximately 28 times greater than that of carbon dioxide over a 100-year timeframe, as highlighted in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). One of the most significant sources of methane emissions globally is municipal solid waste (MSW) landfills, where organic waste decomposes under anaerobic conditions, producing methane as a byproduct. These emissions contribute substantially to the overall GHG emissions associated with waste management practices, making the monitoring and mitigation of landfill methane emissions an urgent priority in efforts to combat climate change. The focus of this study is the evaluation of methane emissions from the largest landfills in Greece, which are among the key contributors to national GHG emissions in the waste management sector. Given the critical need for accurate, high-resolution data on methane concentrations, this research utilizes satellite observations from the TROPOspheric Monitoring Instrument (TROPOMI), an advanced sensor aboard the Sentinel-5 Precursor (S5P) satellite mission. TROPOMI provides near-real-time measurements of atmospheric methane with an unprecedented spatial resolution, enabling the identification and quantification of localized methane hotspots, including emissions from specific landfill sites. In this study, methane emissions were analyzed over a period of six years to identify temporal and spatial patterns associated with major landfill operations in Greece. The analysis focuses on key landfills such as Filis-Ano Liosia near Athens, and other prominent sites, which are responsible for managing a significant portion of the country's municipal waste. Methane levels demonstrate significant seasonal fluctuations and an overall increasing trend over time, with a growth rate of 2.09% observed for the Greek region. 
In several instances, the methane levels over major landfills surpass the national average methane concentration for Greece during the 2019–2024 period (1866.15 ± 23.45 ppbv). Notably, for the Filis-Ano Liosia landfill, the average residual methane concentrations exhibit a relative deviation of 44% from the national average residual methane concentrations. The high-resolution data obtained from the Sentinel-5P mission can enable policymakers and stakeholders to identify emission hotspots and evaluate the effectiveness of mitigation measures, such as the installation of methane capture systems or improved waste management practices. Furthermore, the findings emphasize the need for coordinated efforts to reduce methane emissions in Greece, aligning with national and international climate targets, including the European Union’s commitment to achieving net-zero emissions by 2050.
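The relative deviation quoted above is ordinary percentage arithmetic; a minimal sketch, using invented numbers rather than the study's residual values:

```python
def relative_deviation(site_mean, reference_mean):
    # Percentage deviation of a site-level mean from a reference mean.
    return 100.0 * (site_mean - reference_mean) / reference_mean

# Invented example: a site mean 44% above a reference mean of 100.0.
dev = relative_deviation(144.0, 100.0)
```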
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: Low-Temperature Hotspot Detection In VIIRS Imagery For The Monitoring of Industrial CO2 Emissions

Authors: Charles Hessel, Gilles Recouvreux, Florentin Poucin, Carlo de Franchis, Simon Lajouanie, Antoine Tadros, Rafael Grompone
Affiliations: Kayrros SAS, Université Paris-Saclay, ENS Paris-Saclay, CNRS, Centre Borelli
Industrial activity is a major source of CO2 emissions. Following the Paris Agreement, and to support mitigation efforts, it is important to develop tools for monitoring greenhouse gases (GHGs) from the industry sector. In this context, we propose a method for the detection of hotspots in VIIRS imagery bands. We focus on low-temperature hotspots, such as those encountered in industrial facilities. These hotspots are indicative of activity, which can be directly linked to CO2 emissions. The VIIRS sensor is currently onboard three satellites: Suomi-NPP, NOAA-20 and NOAA-21. They all acquire images at night for a limited number of bands. Our method focuses on these nighttime acquisitions. With each satellite imaging the entire Earth every day and every night, any place on Earth is observed at least three times per night. The bands considered in our algorithm are imagery bands with a resolution of 375 meters per pixel. The method we propose takes advantage of the several images acquired each night to detect points with weak signal. Our work focuses on industrial assets, whose activity can be assumed constant over the night. We can therefore base our detection on multiple images at once. Furthermore, we restrict ourselves to nighttime images, in which the heat signal is much more visible. Our approach is statistical and uses the a contrario framework. It has a single parameter with a clear interpretation: the expected number of false detections that can be tolerated in one image. This multi-image approach makes it possible to lower the detection thresholds without increasing the number of false detections. We apply our method to several kinds of industrial facilities. We validate our detections using Sentinel-2 and Landsat images acquired over these locations. Furthermore, for validation we make the most of the Landsat nighttime images available for a subset of these locations.
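The a contrario control of false detections can be sketched as follows, assuming an i.i.d. Gaussian background noise model. This is a strong simplification of the authors' method; the function names and parameters are illustrative:

```python
import math

def nfa_threshold(num_pixels, num_images, eps=1.0, sigma=1.0):
    """Detection threshold on the multi-image mean such that the expected
    number of false alarms (NFA) over num_pixels tests stays below eps."""
    def tail(t):
        # P(mean of num_images N(0, sigma^2) samples > t)
        return 0.5 * math.erfc(t * math.sqrt(num_images) / (sigma * math.sqrt(2.0)))
    lo, hi = 0.0, 20.0 * sigma
    for _ in range(200):                    # bisection on the threshold
        mid = 0.5 * (lo + hi)
        if num_pixels * tail(mid) > eps:
            lo = mid
        else:
            hi = mid
    return hi

def detect(stack, threshold):
    """Flag pixels whose temporal mean over the image stack exceeds threshold."""
    n = len(stack)
    means = [sum(img[i] for img in stack) / n for i in range(len(stack[0]))]
    return [i for i, m in enumerate(means) if m > threshold]
```

Averaging N images shrinks the background noise on the mean by a factor of sqrt(N), so the threshold derived from the same false-detection budget drops accordingly; this is the mechanism by which the multi-image approach lowers detection thresholds without raising the false-alarm count.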
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: TNO Global Point Sources Emission Inventory for Greenhouse Gases

Authors: Stijn Dellaert, Emma Schoenmakers, Riccardo Nanni, Hugo Denier van der Gon
Affiliations: Netherlands Organization For Applied Scientific Research (TNO)
Identification and quantification of large sources of greenhouse gas emissions using earth observation is instrumental in supporting emission reduction policies to limit the impact of climate change. Current emission inventories have limitations for use in supporting emission reductions as they are typically only available as sector- and country-level estimates. To guide observation-based monitoring and mitigation, emissions data needs to be specified by location and should include auxiliary data to match both the satellite component and the national reporting obligations. Here we present a global greenhouse gas emissions database combining the best available existing facility-level emission data. The TNO global point sources emission inventory is a hybrid global inventory, combining the most accurate, comprehensive and up-to-date point source emission datasets for carbon dioxide (CO2) and methane (CH4), as well as co-emitted nitrogen oxides (NOx). Data sources include ClimateTRACE, CarbonMapper, E-PRTR/LCP, and other datasets. The inventory covers a large number (>100k) of emission sources across various emission intensive sectors with their exact coordinates and expected annual emission rates. Sectors include power generation, cement production, iron/steel production, oil & gas production, refineries, landfills and coal mines. In total, individual sources included in the TNO emission inventory cover some 105 Megatons of CH4 emissions (~30% of global total anthropogenic CH4 emissions) and 18.5 Gigatons of CO2 emission (~40% of global total anthropogenic CO2 emissions). To improve the comparability of prior emission data with observations, each emission source is linked to an associated temporal profile, which allows an estimate of hourly emission rates for specific hours in the year. 
Furthermore, the inventory is accompanied by a set of gridded sector-level emissions (0.1 x 0.1 degree resolution) that, together with the point sources, give a complete picture of global anthropogenic emissions, making the TNO emission inventory suitable as input to chemical transport models (CTMs) for larger-scale inversions and data assimilation. We are implementing several steps for validating emission sources in the TNO emission inventory. Methods include cross-referencing with other datasets, image recognition of facilities in combination with deep learning (see abstract Schoenmakers et al.), and automated searching for co-located CH4 plumes in TROPOMI satellite imagery. The validated inventory can support multiple use cases. The current TNO emission inventory has provided up-to-date source locations and prior emission information at facility level to support the science case for the Twin Anthropogenic Greenhouse Gas Observers (TANGO) mission, by allowing estimation of the number, expected source strength and spatial spread of detectable sources of CO2 and CH4 under assumed instrument specifications. TANGO, a new ESA Scout satellite mission (expected launch in 2027) consisting of two satellites, TANGO-Carbon and TANGO-Nitro, will aim to monitor and quantify emissions of the greenhouse gases methane (CH4) and carbon dioxide (CO2), and of co-emitted nitrogen oxides (NOx), at high spatial resolution. The TNO global point sources emission inventory can support the targeting of greenhouse gas satellite missions like TANGO to optimally utilize observation capacity. To aid in the prioritization of sources, the inventory includes data quality indicators for each emission value and set of spatial coordinates, making it possible to prioritize the observation and quantification of potentially large emission sources with large uncertainty in their emission levels.
Finally, the TNO emission inventory will be ready to absorb new source-level emission data generated by the TANGO mission and other EO missions to continuously update and verify existing emission estimates and allow for periodic monitoring of large emission sources.
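The temporal-profile scaling described in this abstract (annual totals distributed into hourly rates) can be sketched as follows; the profile values below are invented placeholders, not TNO's actual profiles:

```python
def hourly_rates(annual_total, profile):
    """Distribute an annual emission total over hours using a temporal profile.

    profile: one non-negative weight per hour (8760 for a non-leap year);
    weights are normalised so the hourly rates sum back to the annual total.
    """
    total_weight = sum(profile)
    return [annual_total * w / total_weight for w in profile]

# Invented example: a flat profile yields a constant hourly rate.
rates = hourly_rates(8760.0, [1.0] * 8760)
```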
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone M-N)

Poster: MEDUSA: Methane Emissions Detection Using Satellites Assessment

Authors: Ilse Aben
Affiliations: SRON Netherlands Institute for Space Research, UPV, IUP-UB, GHGSat, Kayrros, UoL, BIRA
The proliferation of satellite-based methane detection technologies has opened up new abatement opportunities but also spurred a need for harmonization, verification, and contextualization of the various satellite products for policy users. The ESA MEDUSA project brings together leading European methane teams to help users navigate this somewhat bewildering ecosystem and support them in putting the new methane tracking technologies to good use. Over the past few years, a large number of satellite instruments have become available providing information on emissions from methane hot spots and point sources. These satellites encompass so-called flux mappers (e.g., TROPOMI) with ‘city-scale’ resolution and high-resolution (20-500 m) instruments that can detect and quantify emissions at ‘facility level’. There are new high-resolution satellites that have been specifically designed to measure methane (e.g., the GHGSat constellation), while new analysis techniques have also made it possible to extract methane signals from satellites that were not designed for that purpose but can now be used to detect large methane super-emitters under favorable conditions (e.g. homogeneous terrain). The latter group consists of both band imagers (e.g., Landsat, Sentinel-2, Sentinel-3, GOES) with regular global coverage as well as targeting hyperspectral imagers (e.g., PRISMA, EnMAP, EMIT). As such, the current space-based methane point source/hot spot observing system consists of a heterogeneous plethora of instruments with widely different sensitivities, spatial resolutions, and coverage characteristics. The system will further expand over the next few years with the launch of, and data becoming available from, additional flux mappers (e.g., Sentinel-5, GOSAT-GW), high-resolution mappers (e.g., Carbon Mapper, launched June 2024), and intermediate-resolution instruments such as MethaneSAT (launched March 2024).
This makes the overall system complex and difficult to understand for policy users who have strong needs for the information on super-emitters provided by these satellite instruments, to enable mitigation activities and improve the reporting of emissions under the UNFCCC Paris Agreement. Within the MEDUSA project, we will therefore compare the various satellite products, both from the same instrument using different algorithms and/or research teams, as well as between different satellites. Moreover, where possible we will validate the emission products, such as through controlled releases. We will also develop a framework for methane flux uncertainty for the various types of instruments. In addition, we will perform a few case studies highlighting the potential of (the synergistic use of) these point source/hot spot methane observations. At the ESA Living Planet Symposium, we will present the MEDUSA project in more detail and provide an overview of its status.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: D.03.03 - POSTER - Impact through Reproducibility in Earth Observation Science and Applications

The science and R&D community in EO and Earth Science are delivering a wealth of innovative EO research data (new products, new methods, new algorithms). FAIR and Open Science principles are increasingly being adopted as fundamental parts of the scientific research cycle. What are the most relevant approaches today that enable reproducibility in EO? What lessons can we learn from the most adopted solutions and how can we leverage technologies to gain more impact and use of the research data? What are the champions of reproducibility among the scientific community? Which domains are most advanced and how can we transfer knowledge and tools across the various EO domains?
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Overview of the services provided to Earth and environmental data producers by the DATA TERRA Research Infrastructure, with a focus on satellite EO data

Authors: Anne Puissant, Frédéric Huynh, Sébastien Payan, Erwann Quimbert, Emmannuel Chaljub, Yvan Le Bras
Affiliations: IR Data Terra - THEIA
The consequences of global change on the Earth system are multiple: increases in air temperature and sea level, stronger weather events, and impacts on ecosystem biodiversity and natural hazards. But the detection of changes and impacts is still difficult because of the diversity and variability of the Earth's environments (oceans, land surfaces, atmosphere, solid Earth). While there has been a clear increase in the number of environmental observations, whether by in situ, laboratory or remote sensing measurements, each observation is both costly to acquire and unique. The number and variety of data acquisition techniques require efficient methods of improving data availability via interoperable portals, which facilitate data sharing according to FAIR principles for producers and users. In this context, the DATA TERRA Research Infrastructure (RI) for Earth data is the entry point to access all the French environmental observation data. As a digital infrastructure, DATA TERRA works closely with Earth observation research infrastructures and space agencies. It is backed by a continuum of distributed and interconnected platforms proposing services that span the full data cycle, from access to value-added processing, thus enabling the exploitation of large volumes of data - notably satellite data - and the generation of information through advanced on-demand and systematic processing services. At national, European and international levels, it is advancing the development of open science and the implementation of FAIR approaches, contributing to space missions and applications and to the initiative to generate digital twins of the Earth. The objective of this talk is to present the DATA TERRA strategy for satellite data and products analysis in the domains of marine sciences (ODATIS Data hub), continental surfaces (THEIA), atmospheric sciences (AERIS), solid Earth science (FormaTerre), and ecological science (PNDB).
DATA TERRA relies on several scientific consortia to promote and develop innovative processing methods and products, with a focus on hybridizing several data types.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Enhancing knowledge reuse and impact with the GEO Knowledge Hub

Authors: Kalamkas Yessimkhanova, Felipe Carlos, Paola De Salvo
Affiliations: GEO Secretariat
We face unprecedented crises on our planet, such as deforestation, climate change, land degradation, and many other challenges. Thanks to Open Science practices, like Open Data and Open Knowledge, Earth Observation (EO) Applications have emerged as powerful tools, providing valuable insights about our planet. These insights empower us to understand our current crises and their solutions, serving as a base for informed policies and decision-making. Although EO Applications' growth and popularity were built on open practices, they currently face significant challenges, such as a lack of reusability, knowledge transfer, and capacity building. These problems reduce their use and potential. The Group on Earth Observations (GEO) is a global partnership of governments, institutions, and organizations created to enhance the use of EO Applications in policy development and decision-making. To support Open Data and Open Knowledge, the GEO Community, through the Data and Knowledge Working Group (DK-WG), promotes best practices, principles, and guidelines for sharing and managing data and knowledge, such as the GEO Data Sharing and Data Management Principles, the GEO Open Data Licensing Guidance, and the GEO Open Knowledge Statement. GEO also has an infrastructure to promote the use, sharing, and preservation of data and knowledge. This infrastructure consists of several components, each focusing on specific topics. A key component is the GEOSS Platform, which enhances the accessibility and discoverability of EO datasets from various sources in one place. Another integral part of the infrastructure is the GEO Knowledge Hub (GKH), an innovative, cloud-based open-source digital library designed to facilitate the preservation and sharing of open and closed EO Applications.
Leveraging the power of InvenioRDM, the GKH (https://gkhub.earthobservations.org) provides many features, including powerful search methods, preservation capabilities (e.g., Digital Object Identifiers), a marketplace for closed applications, and specialized curation tools. Experts from the DK-WG and other members of the GEO community support the operation and continuous enhancement of the GKH. To maximize reusability and reproducibility, the GKH adopts a content publication process that evaluates the organization, availability, and clarity of the shared EO Applications. This process follows best practices and is carried out together with the content provider and members of the GEO Community. Based on this approach, the GKH is supporting various institutions and initiatives within the GEO Community, such as EuroGEO, GEOGLAM, GEOGLOWS, and the Food and Agriculture Organization (FAO), in sharing and preserving their EO applications. As a result, several open and closed applications are available in the GKH as reusable and well-documented content. Alongside the best practices adopted in the publication process, and to empower people to reuse the GKH content, the DK-WG has started carrying out capacity-building webinars on the GKH content. In these webinars, well-documented and reusable EO Applications are presented, with details on how to access them in the GKH, how to use the materials available, whom to contact for assistance, and so forth. By pursuing such activities, the process of generating new knowledge from existing knowledge is encouraged, resulting in a circular knowledge economy in which prior knowledge supports the creation of new knowledge. This speeds up the evolution of knowledge and reduces resource waste, as every developed application is preserved and reused. Based on the capabilities of the GKH, various EO Applications have been shared and many webinars have been developed, engaging over 700 participants.
In this work, we aim to share and discuss the strategies employed to enhance the impact and reuse of knowledge and present the outcomes of two years of activities on this topic.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: EarthCODE - a FAIR and Open Environment for collaborative research in Earth System Science

Authors: Garin Smith, Dr Chandra Taposeea-Fisher, Dr Anca Anghelea, Ewelina Dobrowolska, Daniele Giomo, Francesco Barchetta, Stephan Meissl
Affiliations: Esa, Telespazio UK, Serco Italy, Starion Group, EOX IT Services
The Open Science and Innovation Vision included in ESA’s EO Science Strategy (2024) addresses 8 key elements: 1) openness of research data, 2) open-source scientific code, 3) open-access papers with data and code, 4) standards-based publication and discovery of scientific experiments, 5) scientific workflows reproducible on various infrastructures, 6) access to education on open science, 7) community practice of open science, and 8) EO business models built on open source. EarthCODE (https://earthcode.esa.int) is a strategic ESA EO initiative to support the implementation of this vision. EarthCODE (Earth Science Collaborative Open Development Environment) aims to provide an integrated, cloud-based, user-centric development environment for European Space Agency (ESA) Earth science activities and projects. EarthCODE looks to maximise the long-term visibility, reuse and reproducibility of the research outputs of such projects by leveraging FAIR and open science principles, thus fostering a sustainable scientific process. EarthCODE proposes a flexible and scalable architecture developed with interoperable open-source blocks, with a long-term vision evolving by incrementally integrating industrially provided services from a portfolio of the Network of Resources. EarthCODE platform collaborators participate in creating an integrated architecture, with interoperable solutions and federated capabilities. Additionally, EarthCODE is a utilisation domain of EOEPCA+, thus contributing to the development and evolution of open standards and protocols to enable internationally interoperable solutions. EarthCODE will provide an Integrated Development Platform, giving developers the tools needed to develop high-quality workflows that allow experiments to be executed in the cloud and reproduced end-to-end by other scientists. EarthCODE is built around existing open-source solutions, building blocks and platforms, such as the Open Science Catalogue, EOxHub and EOEPCA.
It has additionally begun to integrate platform services from DeepESDL, Euro Data Cube, Polar TEP and the openEO federation on CDSE platforms, with more being added annually through ESA best practices. With its federated approach, EarthCODE will have the capability to facilitate processing on other platforms, e.g. DeepESDL, ESA Euro Data Cube, openEO Cloud/openEO Platform and AIOPEN/AI4DTE. The roadmap for the portal includes an initial portal release by the end of 2024, followed by the capability to publish experiments in Q1 2025 (including development, publishing, finding and related community engagement), and by mid-2025 a further release with reproducibility capabilities around accessibility and execution functionalities. Collaboration and federation are at the heart of EarthCODE. As EarthCODE evolves, we expect to provide solutions allowing federation of data and processing. EarthCODE's ambition is to deliver a model for a Collaborative Open Development Environment for Earth system science, where researchers can leverage the power of the wide range of EO platform services available to conduct their science, while also making use of FAIR open science tools to manage data, code and documentation, create end-to-end reproducible workflows on platforms, and have the opportunity to discover, use, reuse, modify and build upon the research of others in a fair and safe way. Overall, EarthCODE aims to enable the elements of the EO Open Science and Innovation vision, including open data, open-source code, linked data/code, open-access documentation, end-to-end reproducible workflows, open-science resources, open-science tools, and a healthy community applying all these elements in their practice.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Systematic Reference Data Quality Assessment For Global Crop Mapping

Authors: Arun Pratihast, Hendrik Boogaard, Jappe Franke, Charlotte Van Haren, Santosh Karanam, Christina Butsko, Juan Carlos Laso Bayas, Jeroen Degerickx, Kristof Van Tricht, Steffen Fritz
Affiliations: Wageningen Environmental Research (WENR), Wageningen University & Research, International Institute for Applied Systems Analysis (IIASA), Vlaamse Instelling Technologisch Onderzoek (VITO)
High-quality reference data is essential for training and validating satellite-based crop classification algorithms, as it directly impacts the accuracy and reliability of the resulting maps. In contrast, poor-quality datasets introduce biases, propagate errors, and degrade machine learning performance, which makes users cautious in their reliance on external reference data. Even when datasets are open and comply with FAIR (Findable, Accessible, Interoperable, Reusable) principles, uncertainties about their quality often hinder broader reuse. To address these challenges, systematic evaluation of dataset quality is critical. Within the ESA-funded WorldCereal project, we developed a semi-automatic framework to assess the fitness of reference datasets for crop classification. The framework evaluates data quality across three dimensions: spatial, temporal, and thematic accuracy. Spatial accuracy measures positional correctness by examining GPS errors, spatial resolution, and visual alignment with field boundaries. Temporal accuracy assesses consistency with observation dates or crop calendars, which is particularly important in regions with complex cropping systems or multiple growing seasons. Thematic accuracy evaluates the reliability of labels, such as land cover or crop type, using evidence from field observations, expert assessments, or automated classifications. A confidence score is calculated for each label type by averaging the three accuracy scores, with weights assigned to emphasize the most relevant dimension based on the dataset type. To date, we have evaluated more than 97 datasets, and around 20 datasets (more than 2000 features) were visually assessed using high-resolution satellite imagery, such as Google Earth. These confidence scores have been used as selection criteria to train satellite-based classification models, enabling the generation of local to global crop type maps within the WorldCereal project.
The framework is currently implemented in the WorldCereal Reference Data Module (RDM), where the assessment is carried out by a WorldCereal moderator. By ensuring rigorous quality assessment, the framework helps guarantee that datasets are suitable for current applications and future reuse, enabling accurate crop mapping and building trust in publicly accessible reference data. The next step is to enhance the framework by developing observation-specific confidence scores, leveraging metrics such as proximity to roads, spatial indices, and NDVI for the current collection's crop type assessments.
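The per-label confidence score described above (a weighted average of the three accuracy dimensions) can be sketched as follows; the weights shown are illustrative placeholders, not the WorldCereal values:

```python
def confidence_score(spatial, temporal, thematic, weights=(1.0, 1.0, 1.0)):
    """Weighted average of spatial, temporal and thematic accuracy (each in [0, 1]).

    Weights emphasise the dimension most relevant to the dataset type;
    the defaults here are illustrative placeholders.
    """
    scores = (spatial, temporal, thematic)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Equal weights: plain mean of the three dimensions.
score = confidence_score(0.9, 0.6, 0.9)
```

With equal weights the score is the plain mean (0.8 here); raising one weight, e.g. thematic accuracy for crowd-sourced crop labels, pulls the score toward that dimension.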
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G)

Poster: Cubes & Clouds – A Massive Open Online Course for Cloud Native Open Data Sciences in Earth Observation

#pangeo #stac #cloud-native

Authors: Michele Claus, Anne Fouilloux, Tina Odaka, Juraj Zvolensky, Stephan Meißl, Tyna Dolezalova, Robert Eckardt, Jonas Eberle, Alexander Jacob, Anca Anghelea
Affiliations: Eurac Research, Simula Research Laboratory AS, IFREMER Laboratoire d'Oceanographie Physique et Spatiale (LOPS), Ignite education GmbH, EOX IT Services GmbH, Eberle Web- and Software-Development, European Space Agency
Earth Observation (EO) scientists are facing unprecedented volumes of data, which continue to grow with the increasing number of satellite missions and advancements in spatial and temporal resolution. Traditional approaches, such as downloading satellite data for local processing, are no longer viable. As a result, EO science is rapidly transitioning to cloud-based technologies and open science practices. However, despite the swift evolution and widespread adoption of these new methods, the availability of training resources remains limited, posing a challenge for educating the next generation of EO scientists. The free Massive Open Online Course Cubes & Clouds - Cloud Native Open Data Sciences for Earth Observation (https://eo-college.org/courses/cubes-and-clouds/) introduces data cubes, cloud platforms, and open science in Earth Observation. Aimed at Earth Science students, researchers, and Data Scientists, it requires basic EO knowledge and basic Python programming skills. The course covers the entire EO workflow, from data discovery and processing to sharing results in a FAIR (Findable, Accessible, Interoperable, Reusable) manner. Through videos, lectures, hands-on exercises, and quizzes, participants gain both theoretical knowledge and practical experience in cloud-native EO processing. Students who successfully complete the course should be able to confidently use cloud platforms for EO research and share their work following open science principles. The hands-on exercises use Copernicus data accessed through the SpatioTemporal Asset Catalog (STAC) to demonstrate two approaches for defining Earth Observation (EO) workflows: the openEO API and the Pangeo software stack. Participants engage in similar exercises using both methods, allowing them to compare their benefits and limitations. This approach provides a deeper understanding of the importance of APIs and standards in modern EO practices.
In the final exercise, participants collaborate on a community snow cover map, mapping small areas of the Alps and submitting results to a STAC catalogue and web viewer. This project demonstrates their ability to apply EO cloud computing and open science practices while adhering to FAIR standards. Upon successful completion of the course, each participant will receive a certificate that can be listed in their CV or shared easily. The talk will guide the audience through the topics covered in Cubes & Clouds, show how they are presented in the EO College e-learning platform, explore the links to the exercises carried out on CDSE, and demonstrate the open science aspect in the community mapping project.
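The discovery step of the workflow above can be illustrated with a minimal, self-contained sketch of STAC-style item search by bounding box and time range. The course exercises use a live STAC API (e.g. on CDSE); here the items are mock dictionaries following the STAC item structure, and the helper names are hypothetical.

```python
# Minimal sketch of STAC-style discovery: filter catalogue items by
# bounding box ([west, south, east, north]) and ISO date range.
from datetime import datetime

items = [
    {"id": "S2_T32TPS_20230401", "bbox": [11.0, 46.3, 11.6, 46.8],
     "properties": {"datetime": "2023-04-01T10:20:00Z"}},
    {"id": "S2_T32TPS_20230715", "bbox": [11.0, 46.3, 11.6, 46.8],
     "properties": {"datetime": "2023-07-15T10:20:00Z"}},
]

def intersects(a, b):
    """True if two [west, south, east, north] boxes overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def search(items, bbox, start, end):
    """Return ids of items overlapping bbox and acquired within [start, end]."""
    out = []
    for it in items:
        t = datetime.fromisoformat(
            it["properties"]["datetime"].replace("Z", "+00:00"))
        if intersects(it["bbox"], bbox) and start <= t.date().isoformat() <= end:
            out.append(it["id"])
    return out

# Spring acquisitions over a small Alpine test area:
hits = search(items, bbox=[11.2, 46.4, 11.4, 46.6],
              start="2023-03-01", end="2023-05-31")
```

In the actual exercises, the same query shape is expressed through a STAC API client, with openEO or the Pangeo stack consuming the returned assets.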

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: F.04.08 - POSTER - Earth Observation for Nature Finance and Ecosystem Accounting

The integration of Earth Observation (EO) for nature finance mechanisms and ecosystem accounting is essential for achieving a nature-positive scenario and for advancing sustainable economic policies, ensuring transparent reporting of environmental dependencies and impacts.
The contribution of companies and financial institutions towards nature-positive goals plays a pivotal role in addressing the global biodiversity crisis by redirecting financial flows towards environmentally sustainable and Nature-based solutions (NbS). The Kunming-Montreal Global Biodiversity Framework (GBF) and the European Corporate Sustainability Reporting Directive (CSRD) highlight the need for standardized biodiversity metrics and risk assessments to align financial decisions with sustainability goals. EO provides spatially explicit, high-quality, and reliable data to support the development of nature-positive financial mechanisms and transparent biodiversity-debit/credit schemes.
Ecosystem accounting, as formalized by the UN SEEA Ecosystem Accounting framework, provides a structured approach to integrating ecosystems into national economic planning. Ecosystem Extent Accounts, which monitor spatial changes in ecosystems, rely on robust EO-based methodologies to improve classification, detect ecosystem change, and ensure temporal consistency. EO also plays a key role in Ecosystem Condition and Service Accounts, enabling efficient monitoring of ecosystem health and the services they provide.
This session will explore EO-based solutions for nature finance and ecosystem accounting, addressing challenges such as data standardization, EO-based metrics, and integration of EO in national reporting systems.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: BioDivER – an Earth Observation based environmental reporting tool

Authors: Guido Riembauer, Julia Haas, Lennart Meine, Oliver Herrmann, Thomas Pioch
Affiliations: Mundialis GmbH & Co. KG, Climate & Company
In the context of the EU Green Deal, ESG (Environmental, Social and Governance) reporting plays an important role in monitoring the progress of companies towards a sustainable economy in a transparent way. While economic and social metrics are largely established, the way that environmental performance is to be reported is still subject to ongoing effort. A central framework in this respect is the European Sustainability Reporting Standards (ESRS). The project BioDivER (BioDiversity Environmental Reporting) analyses how Earth Observation data can be used to support the environmental ESRS-compliant reporting obligations of commercial entities. As a theoretical basis, we rely on the SEEA-EA based ENCORE framework to identify general dependencies, impacts and potential mitigation practices between economic sectors, ecosystems, and in consequence biodiversity. While biodiversity is hardly measurable by Earth Observation methods, it relies on stable ecosystems which in turn provide services that are important for economic activity. Thus, we use environmental data to map the provision of sector-relevant ecosystem services as well as biodiversity-relevant metrics in the vicinity of company sites. The main project outcome, currently under development, will be a platform where companies can provide a list of their sites and economic sector(s). Time series of EO-derived environmental datasets (e.g. vegetation cover from yearly land cover maps, groundwater anomalies derived from gravitational missions, etc.) are then automatically analysed in the vicinity of the company sites. In an ongoing exercise, we research suitable Earth Observation data products that have relevance for both ecosystem services and ESRS data points while featuring large spatial and temporal coverage (e.g. EO-derived biophysical parameters provided by the Copernicus Land Monitoring Service) which can extend the BioDivER data portfolio. These datasets are then translated into relevant reporting metrics.
Finally, the tool generates a report that contains a general description of sector-dependent ecosystem impact and dependency ratings, a trend analysis of ecosystem service provision and relevant environmental metrics across all sites, and a detailed site-specific breakdown that allows users to identify potentially problematic sites. The most important finding of the project is the understanding of potential users' needs. While numerous variables are defined by frameworks like ESRS or State of Nature (SON), it is yet unclear for which metrics and domains companies have the largest information gaps that BioDivER could support. Therefore, test users from different economic sectors and company sizes are involved in the development process from the beginning. Their feedback is crucial for the selection of the included metrics and the required report format. The backend of the BioDivER platform is based on the geoprocessing engine actinia, which is able to ingest and analyse large volumes of geodata due to its scalable and cloud-optimised architecture. Automatic processing chains can be defined and executed via a REST interface in a straightforward manner. For this project, actinia will be extended to allow for the automatic generation of reports based on trend identification from time series. Following the research project, BioDivER shall be upscaled to become a commercial service. We expect a high demand since ESG reporting obligations will gradually be extended to smaller, not publicly listed companies that can save costs by using the envisaged service instead of having to build up competences themselves. This research project is funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) and administrated by the German Space Agency within the German Aerospace Center (DLR).
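The trend identification step mentioned above can be sketched as an ordinary least-squares slope over a yearly series of an environmental metric. This is an illustrative sketch, not actinia's actual implementation; the function, example values, and reporting threshold are assumptions.

```python
# Illustrative trend identification: fit a least-squares slope to a yearly
# time series of an environmental metric (e.g. vegetation cover fraction
# around a site) and flag sites with a marked decline.

def linear_trend(years, values):
    """Ordinary least-squares slope of values against years."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = [2018, 2019, 2020, 2021, 2022]
cover = [0.62, 0.60, 0.57, 0.55, 0.52]   # declining vegetation cover fraction
slope = linear_trend(years, cover)        # fraction per year
declining = slope < -0.01                 # assumed reporting threshold
```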

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Mapping the 60-year Evolution of Agricultural Parcel Extents: A Customised Segmentation Approach Combining Historic and Modern Remote Sensing Imagery

Authors: Eric Kosczor, Matthias Forkel, Anna Cord
Affiliations: TUD Dresden University of Technology, Junior Professorship in Environmental Remote Sensing, University of Bonn, Institute of Crop Science and Resource Conservation
Monitoring the extent of European agricultural landscapes is key for analysing the conditions of farmland ecosystems and their impact on ecosystem services. Within the agricultural landscape, the change in parcel sizes, along with crop diversity, has been proven to have notable impacts on habitat complementation and connectivity, affecting species diversity. Thus, there is a need for accurate delineation of agricultural parcels, which is possible with high-resolution remote sensing data but requires an adaptive and robust algorithm. Here we exploit the strengths of the Segment Anything Model (SAM) for the automatic segmentation of farmland parcels in historic panchromatic CORONA imagery from the year 1965 and in modern digital orthophotos from 2021/2022 for the German state of Saxony. Saxony experienced severe transitions in agricultural landscape structure during and after the 1960s due to substantial policy changes such as collectivisation, and is a suitable example for examining changes in parcel extent. We selected several study areas throughout the state representing various soil, terrain and meteorological conditions for which scenes from both data sources were pre-processed and harmonised. We used administrative data from regional authorities to mask non-farmland before we applied a two-step SAM algorithm. Post-processing included the removal of falsely detected segments based on a compactness score and morphological operations. Accuracy analysis shows mean Intersection over Union (IoU) values of up to 0.8 for historic and above 0.9 for modern images. Preliminary comparisons between the two time steps further suggest a significant increase in parcel size from 1965 to the 2020s. The developed approach is expected to be easily transferable to other data sources and can support comparable efforts for mapping single-type ecosystem extents.
Furthermore, the results can serve as a basis for subsequent applications in landscape ecology, such as the calculation and evaluation of landscape configuration metrics. We plan to apply the method to a larger area and to add more time steps in the future, allowing us to make statements about long-term changes in agricultural extent and future policy implications for ecosystem services in that area.
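The Intersection over Union metric reported above is straightforward to compute on raster masks; the sketch below shows it on small boolean arrays (the masks and the shift scenario are illustrative, not the study's data).

```python
# Sketch of the Intersection over Union (IoU) metric used to evaluate
# segmented parcels against reference parcels, on boolean raster masks.
import numpy as np

def iou(mask_a, mask_b):
    """IoU of two boolean masks; 1.0 means perfect overlap."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# A predicted parcel shifted by one column relative to the reference:
ref = np.zeros((10, 10), dtype=bool)
ref[2:8, 2:8] = True    # 36-pixel reference parcel
pred = np.zeros((10, 10), dtype=bool)
pred[2:8, 3:9] = True   # same size, shifted one column
score = iou(ref, pred)  # 30 shared pixels / 42 in the union
```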

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: C.02.14 - POSTER - The EarthCARE Mission’s First Year in Orbit: Opening new Horizons for Cloud, Aerosol and Radiation Science

The Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) satellite mission aims to improve our understanding of cloud-aerosol-radiation interactions and the Earth's radiation budget, such that they can be modelled with better reliability in climate and numerical weather prediction models. To achieve this objective, EarthCARE measures the three-dimensional structure of clouds, precipitation and aerosols, along with collocated observations of solar and terrestrial radiation.

This ESA-JAXA mission was successfully launched in May 2024 and, following the satellite and instrument commissioning phase, provides unique co-registered observations from a suite of four instruments located on a common platform: (1) ATmospheric LIDar (ATLID), (2) Cloud Profiling Radar (CPR), (3) Multi-Spectral Imager (MSI) and (4) BroadBand Radiometer (BBR). EarthCARE global observations include vertical profiles of natural and anthropogenic aerosols, the vertical contribution of ice and liquid water content, the cloud mesoscale distribution, precipitation microphysics, estimates of particle size, convective vertical air motions, as well as atmospheric radiative heating and cooling profiles. In addition to enabling new insights into climate science and providing unique data for NWP improvements, EarthCARE continues the heritage measurements of CloudSat, CALIPSO and Aeolus, and bridges to future missions such as NASA's Atmosphere Observing System mission (AOS) and Aeolus-2.

The session invites contributions from the science community on EarthCARE and related science themes, including Passive and Active Observational Techniques; Cloud and Precipitation Microphysics, Aerosols and Radiation Process Studies; Radiation and Earth Radiation Budget; Scientific and User Applications; as well as Long-Term Data Records. In addition, scientific synergies with heritage, operational and future satellite missions as well as with ground-based, air- or ship-borne campaign activities are welcome.

Contributions on Modelling, Assimilation and Parameterisation at Global, Regional and Cloud Level are particularly invited, especially those enhancing high-resolution atmospheric numerical modelling through evaluation and improvement using novel satellite observations from EarthCARE and related missions. A focus is placed on the use of cutting-edge atmospheric climate and weather models, including "global km-scale" or "global storm-resolving" models, and commensurate Earth observations of clouds, aerosols and convection.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: The Potential of the ERATOSTHENES CARO National Facility in the EMMENA Region: Observations over Cyprus During the First Year of EarthCARE Mission

Authors: Rodanthi Elisavet Mamouri, Argyro Nisantzi, Hossein Panahifar, George Kotsias, Maria Poutli, Athina Savva, Constantinos Chrysostomou, Patric Seifert
Affiliations: ERATOSTHENES Centre of Excellence, Cyprus University of Technology, Faculty of Engineering and Technology, Cyprus, Leibniz Institut für Troposphärenforschung, Leipzig, Germany
Cyprus holds a strategic position in the Eastern Mediterranean, Middle East, and North Africa (EMMENA) region, serving as a crossroads between Europe, Asia, and Africa. This unique location reflects the meteorological conditions and coastal dynamics of the broader EMMENA area, making Cyprus—and particularly the coastal city of Limassol—an exceptional natural laboratory for advanced research on climate change, aerosol-cloud-precipitation interactions, and the interlinked dynamics of weather, precipitation, and aridity. A major barrier to advancing our understanding of the complex atmospheric processes influencing climate is the lack of extensive ground-based monitoring stations. While satellites provide broad regional and global atmospheric data, they require calibration with ground-based measurements to ensure accurate integration into atmospheric models used for climate change projections. Establishing a modern observational supersite in Cyprus is crucial for investigating the atmospheric and environmental systems in the rapidly urbanizing and increasingly polluted EMMENA region. This area, which is highly impacted by climate change, faces challenges such as severe pollution, frequent dust storms, and declining precipitation. Contemporary approaches to weather, climate, and environmental forecasting rely on advanced modeling paired with cutting-edge observational data. This study highlights recent progress in establishing a permanent, state-of-the-art atmospheric remote sensing station in Limassol, Cyprus. The Cyprus Atmospheric Remote Sensing Observatory (CARO) was developed through collaboration between the Leibniz Institute for Tropospheric Research (TROPOS) and the ERATOSTHENES Centre of Excellence under the EU H2020 Teaming project EXCELSIOR. The observatory is designed to provide long-term atmospheric profiling, including wind, humidity, aerosol and cloud properties, and precipitation fields, to support critical research in this dynamic and vulnerable region.
The ERATOSTHENES CARO National Facility in Limassol, Cyprus, equipped with advanced aerosol (PollyXT-Cyp station) and cloud (CLOUDNET Limassol) remote sensing instrumentation, supports the CAL/VAL activities of the ESA/JAXA EarthCARE satellite mission, launched in May 2024, through its involvement in the CORAL project. This work will showcase selected cases highlighting the complex aerosol conditions characteristic of the Eastern Mediterranean, alongside the first intercomparison of Level 1 and Level 2 data products during EarthCARE overpasses over Cyprus. Acknowledgements: The authors acknowledge the ‘EXCELSIOR’: ERATOSTHENES: EΧcellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project (www.excelsior2020.eu). The ‘EXCELSIOR’ project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No 857510, from the Government of the Republic of Cyprus through the Directorate General for the European Programmes, Coordination and Development, and the Cyprus University of Technology. This study was supported by the ATARRI Horizon Europe Widespread Twinning Project. ATARRI receives funding from the European Union’s Horizon Europe Twinning Call (HORIZON-WIDERA-2023-ACCESS-02) under grant agreement No 101160258.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: POLIPHON for 355 nm wavelength: novel conversion parameters and their application to EarthCARE/ATLID and ground-based lidar data

Authors: Julian Hofer, Ulla Wandinger, Holger Baars, Leonard König, Albert Ansmann
Affiliations: Leibniz Institute for Tropospheric Research (TROPOS), Leipzig
In the framework of the ESA AIRSENSE project (https://www.grasp-earth.com/portfolio/airsense/), the goal of this study is to make the POLIPHON method (Polarization Lidar Photometer Networking; e.g., Mamouri and Ansmann, 2016) applicable to EarthCARE/ATLID data on a global scale. POLIPHON allows the retrieval of microphysical properties for different aerosol components from polarization lidar observations. These microphysical properties are used, by means of parameterizations (e.g., DeMott et al., 2015), to estimate profiles of particle mass, cloud condensation nuclei (CCN) and ice-nucleating particle (INP) concentrations, which are of great importance for aerosol-cloud-interaction studies. The conversion of aerosol-type-dependent particle extinction coefficients into aerosol microphysical properties is the central part of the POLIPHON computations. The conversion parameters needed for this are determined from AERONET aerosol climatologies of optical and microphysical properties (Holben et al., 1998; Dubovik and King, 2000; Dubovik et al., 2006) by regression analyses (e.g., Ansmann et al., 2019). Up to now, this method has mainly been applied to observations at 532 nm wavelength. POLIPHON conversion parameters valid for 355 nm were previously reported by Mamouri and Ansmann (2017) only for a limited number of regions (aerosol types), namely Barbados (dust and marine aerosol), Cabo Verde (dust and marine aerosol), Cyprus (dust and continental/pollution aerosol) and Germany (continental/pollution aerosol). Therefore, there is a need to revisit the AERONET observations to obtain conversion factors valid at 355 nm (the Aeolus and EarthCARE/ATLID wavelength) as well. In this study, about 50 globally distributed AERONET stations corresponding to predominant aerosol types (dust, marine, smoke, continental/pollution) are used to derive the novel conversion parameters. Many additional, previously not analyzed AERONET stations are included in the retrieval.
In particular, many more marine AERONET stations (used for the retrieval of conversion parameters of marine aerosol) and urban AERONET stations (used for the retrieval of conversion parameters of continental/pollution aerosol) are considered. The AERONET stations used for the calculation of dust and smoke conversion parameters at 355 nm are generally the same as in Ansmann et al. (2019, 2021) for 532 nm but contain additional time periods. Using the novel conversion parameters and auxiliary data such as temperature and pressure profiles (X-MET, Eisinger et al., 2024), POLIPHON will be applied to ATLID profiles (A-PRO/EBD, Donovan et al., 2024) and to ground-based multiwavelength lidar observations such as from PollyNET (Baars et al., 2016) during EarthCARE overpasses for testing purposes; results will be presented at the symposium. After successful testing, the final aim is to make the POLIPHON algorithm and products operational, in accordance with the EarthCARE production chain, to provide global estimates of microphysical and cloud-relevant aerosol properties from EarthCARE/ATLID observations. References: Ansmann, A., Mamouri, R.-E., Hofer, J., Baars, H., Althausen, D., and Abdullaev, S. F.: Dust mass, cloud condensation nuclei, and ice-nucleating particle profiling with polarization lidar: updated POLIPHON conversion factors from global AERONET analysis, Atmos. Meas. Tech., 12, 4849–4865, https://doi.org/10.5194/amt-12-4849-2019, 2019. Ansmann, A., Ohneiser, K., Mamouri, R.-E., Knopf, D. A., Veselovskii, I., Baars, H., Engelmann, R., Foth, A., Jimenez, C., Seifert, P., and Barja, B.: Tropospheric and stratospheric wildfire smoke profiling with lidar: mass, surface area, CCN, and INP retrieval, Atmos. Chem. Phys., 21, 9779–9807, https://doi.org/10.5194/acp-21-9779-2021, 2021. Baars, H., Kanitz, T., Engelmann, R., Althausen, D., Heese, B., Komppula, M., Preißler, J., Tesche, M., Ansmann, A., Wandinger, U., Lim, J.-H., Ahn, J. Y., Stachlewska, I.
S., Amiridis, V., Marinou, E., Seifert, P., Hofer, J., Skupin, A., Schneider, F., Bohlmann, S., Foth, A., Bley, S., Pfüller, A., Giannakaki, E., Lihavainen, H., Viisanen, Y., Hooda, R. K., Pereira, S. N., Bortoli, D., Wagner, F., Mattis, I., Janicka, L., Markowicz, K. M., Achtert, P., Artaxo, P., Pauliquevis, T., Souza, R. A. F., Sharma, V. P., van Zyl, P. G., Beukes, J. P., Sun, J., Rohwer, E. G., Deng, R., Mamouri, R.-E., and Zamorano, F.: An overview of the first decade of PollyNET: an emerging network of automated Raman-polarization lidars for continuous aerosol profiling, Atmos. Chem. Phys., 16, 5111–5137, https://doi.org/10.5194/acp-16-5111-2016, 2016. DeMott, P. J., Prenni, A. J., McMeeking, G. R., Sullivan, R. C., Petters, M. D., Tobo, Y., Niemand, M., Möhler, O., Snider, J. R., Wang, Z., and Kreidenweis, S. M.: Integrating laboratory and field data to quantify the immersion freezing ice nucleation activity of mineral dust particles, Atmos. Chem. Phys., 15, 393–409, https://doi.org/10.5194/acp-15-393-2015, 2015. Donovan, D. P., van Zadelhoff, G.-J., and Wang, P.: The EarthCARE lidar cloud and aerosol profile processor (A-PRO): the A-AER, A-EBD, A-TC, and A-ICE products, Atmos. Meas. Tech., 17, 5301–5340, https://doi.org/10.5194/amt-17-5301-2024, 2024. Dubovik, O. and King, M.: A flexible inversion algorithm for retrieval of aerosol optical properties from sun and sky radiance measurements, J. Geophys. Res., 105, 20673–20696, https://doi.org/10.1029/2000JD900282, 2000. Dubovik, O., Sinyuk, A., Lapyonok, T., Holben, B. N., Mishchenko, M., Yang, P., Eck, T. F., Volten, H., Munoz, O., Veihelmann, B., Van der Zande, W. J., Leon, J.-F., Sorokin, M., and Slutsker, I.: Application of spheroid models to account for aerosol particle nonsphericity in remote sensing of desert dust, J. Geophys. Res., 111, D11208, https://doi.org/10.1029/2005JD006619, 2006.
Eisinger, M., Marnas, F., Wallace, K., Kubota, T., Tomiyama, N., Ohno, Y., Tanaka, T., Tomita, E., Wehr, T., and Bernaerts, D.: The EarthCARE mission: science data processing chain overview, Atmos. Meas. Tech., 17, 839–862, https://doi.org/10.5194/amt-17-839-2024, 2024. Holben, B. N., Eck, T. F., Slutsker, I., Tanré, D., Buis, J. P., Setzer, A., Vermote, E., Reagan, J. A., Kaufman, Y. J., Nakajima, T., Lavenu, F., Jankowiak, I., and Smirnov, A.: AERONET – a federated instrument network and data archive for aerosol characterization, Remote Sens. Environ., 66, 1–16, https://doi.org/10.1016/S0034-4257(98)00031-5, 1998. Mamouri, R.-E. and Ansmann, A.: Potential of polarization lidar to provide profiles of CCN- and INP-relevant aerosol parameters, Atmos. Chem. Phys., 16, 5905–5931, https://doi.org/10.5194/acp-16-5905-2016, 2016. Mamouri, R.-E. and Ansmann, A.: Potential of polarization/Raman lidar to separate fine dust, coarse dust, maritime, and anthropogenic aerosol profiles, Atmos. Meas. Tech., 10, 3403–3427, https://doi.org/10.5194/amt-10-3403-2017, 2017.
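The central POLIPHON conversion step described in the abstract (extinction coefficient to volume, then to mass concentration) can be sketched as follows. The conversion factor and density values below are illustrative order-of-magnitude assumptions, not the novel 355 nm parameters derived in this study.

```python
# Minimal sketch of the POLIPHON-style conversion: an aerosol-type-dependent
# extinction profile is converted to volume concentration with a conversion
# factor c_v, and then to mass concentration with an assumed particle density.
# Units: 1 um^3/cm^3 of particles at 1 g/cm^3 corresponds to 1 ug/m^3.
import numpy as np

def extinction_to_mass(extinction_inv_Mm, c_v, density_g_cm3):
    """Mass concentration (ug/m^3) from particle extinction (1/Mm).

    v = c_v * sigma   (volume concentration, um^3/cm^3)
    m = rho * v       (mass concentration, ug/m^3)
    """
    volume = c_v * np.asarray(extinction_inv_Mm)
    return density_g_cm3 * volume

# Dust-like example profile (extinction in 1/Mm); c_v and density assumed:
sigma = np.array([200.0, 120.0, 40.0])
mass = extinction_to_mass(sigma, c_v=0.65, density_g_cm3=2.6)
```

In the full method, c_v depends on the aerosol component separated by the polarization lidar retrieval, which is exactly what the new 355 nm regression parameters provide.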

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Quality Assurance of the Ground-based Aerosol High Power Lidar Measurements in the framework of the ATMO-ACCESS pilot for EarthCARE Cal/Val

Authors: Livio Belegante, Gabriela Ciocan, Doina Nicolae, Victor Nicolae, Alexandru Dandocsi, Volker Freudenthaler, Nikolaos Siomos, Michael Haimerl, Maria Mylonaki, Aldo Amodeo
Affiliations: National Institute Of Research And Development For Optoelectronics - Inoe 2000, Remote Sensing Department, University of Bucharest, Faculty of Physics, Fakultät für Physik, Meteorologisches Institut, Ludwig-Maximilians-Universität, Consiglio Nazionale delle Ricerche – Istituto di Metodologie per l'Analisi Ambientale CNR-IMAA
The Aerosol, Clouds and Trace gases Research Infrastructure (ACTRIS) provides high-quality datasets from the ground using a wide variety of measurement techniques, serving many applications, including Calibration/Validation (Cal/Val) of satellite instruments. The six Topical Centres of ACTRIS are responsible for developing the Quality Assurance/Quality Control (QA/QC) procedures. The Cal/Val of the EarthCARE Atmospheric Lidar (ATLID) is served in this sense by the Centre for Aerosol Remote Sensing (CARS), which comprises specialized units for the calibration of photometers and of both low- and high-power aerosol lidars (https://www.actris.eu/topical-centre/cars). Its primary goal is to ensure accurate measurements of the optical characteristics of tropospheric and lower stratospheric aerosols, enabling the use of these data as reliable QA data for applications such as aerosol typing, Cal/Val activities, and data assimilation in models. A large-scale experiment was organized in the framework of the European Commission-funded ATMO-ACCESS project (Sustainable Access to Atmospheric Research Facilities), with the goal of preparing ACTRIS and ACTRIS-associated facilities for the provision of harmonized and high-quality datasets to validate pre-operational ATLID data products. The QA protocols for aerosol high-power lidars were designed in such a way as to allow an automated and centralized data processing chain. Additional data such as the distance of full overlap, maximum altitude height, depolarization calibration factors, trigger delay and bin shift are objectively assessed from the QA data and passed to the data processing chain in order to optimize the quality of the final optical products.
Key QA tests for high-power aerosol lidars include: 1) the Telecover Test to determine the distance of full overlap and evaluate deviations in near-range signals caused by optical misalignments; 2) the Rayleigh Fit Test to validate lidar signals against molecular backscatter profiles and evaluate deviations in far-range signals caused by optical misalignments, systematic signal distortions, and wrong background subtraction; 3) Polarization Calibration to assess systematic errors in depolarization measurements; 4) the Zero Bin Test to ensure exact synchronization between the laser emission and the start of data acquisition and a correct range correction; 5) Dark Signal Measurements to identify and mitigate trigger-synchronous electronic distortions and background noise affecting the signal quality. Test data are fed into the ATLAS software (developed by CARS to carry out all the QA tests: https://github.com/nikolaos-siomos/ATLAS), providing the CARS experts with insights into the lidar performance. The results are subsequently integrated into the Single Calculus Chain (SCC, https://scc.imaa.cnr.it/) to fine-tune the "operational configurations" employed by ACTRIS National Facilities (NFs) for automated lidar data processing. Furthermore, these results help in assessing systematic errors, therefore improving the reliability and accuracy of the resulting datasets. Over 60 reports have been generated for more than 22 lidar stations over the last three years, including both ACTRIS NFs and associated stations. Looking ahead, participation is expected to grow, with up to 50 lidar stations anticipated to join these efforts, further enhancing the Cal/Val activities within the ACTRIS framework. This work aims to highlight the impact of QA on raw data and associated lidar products, emphasizing the importance of a comprehensive QA approach for all reference data used in satellite validation.
Acknowledgements: This work is supported by the European Commission under the Horizon 2020 – Research and Innovation Framework Programme, through the ATMO-ACCESS Integrating Activity under grant agreement No 101008004 and by the Romanian Core Program within the National Research Development and Innovation Plan 2022-2027, carried out with the support of MCID, project no. PN 23 05. We also acknowledge all contributing persons and their institutions in the framework of the ATMO-ACCESS pilot for EarthCARE Cal/Val and the AECARE (ACTRIS for EarthCare L2 product evaluation) project, as well as the ACTRIS ERIC for the support provided to CARS. This work receives funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871115. ACTRIS-D is funded by the German Federal Ministry for Education and Research (BMBF) under grant agreements 01LK2001A-K & 01LK2002A-G.
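The idea behind the Rayleigh Fit Test listed among the QA tests can be illustrated with a toy example: scale the range-corrected signal to a molecular backscatter profile in an assumed aerosol-free far range and report the relative deviation. The profile shape, fit window, and function are hypothetical simplifications; ATLAS implements the full operational test.

```python
# Toy Rayleigh fit: least-squares scaling of the range-corrected signal to a
# molecular backscatter profile inside a far-range fit window, then the mean
# relative deviation within that window (near zero for a well-behaved signal).
import numpy as np

def rayleigh_fit_deviation(range_m, rc_signal, beta_mol, fit_lo, fit_hi):
    """Scale rc_signal to beta_mol in [fit_lo, fit_hi] m and return the
    mean relative deviation inside the fit window."""
    win = (range_m >= fit_lo) & (range_m <= fit_hi)
    # least-squares scale factor minimizing |scale*signal - beta_mol| in win
    scale = np.sum(beta_mol[win] * rc_signal[win]) / np.sum(rc_signal[win] ** 2)
    return float(np.mean(np.abs(scale * rc_signal[win] - beta_mol[win])
                         / beta_mol[win]))

z = np.linspace(1000.0, 15000.0, 200)
beta_mol = 1e-6 * np.exp(-z / 8000.0)   # idealized molecular profile
signal = 3.0e5 * beta_mol               # well-aligned system: pure scaling
dev = rayleigh_fit_deviation(z, signal, beta_mol, 8000.0, 12000.0)
```

A real implementation would add signal distortions (background offset, overlap) and flag the station when the deviation exceeds a tolerance.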

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Cloud top heights from ATLID and from ATLID/MSI synergy

Authors: Athena Augusta Floutsi, Moritz Haarig, Anja Hünerbein, Holger Baars, Sebastian Bley, Leonard König, Henriette Gebauer, David Donovan, Gerd-Jan van Zadelhoff, Ulla Wandinger
Affiliations: Leibniz Institute for Tropospheric Research (TROPOS), Royal Netherlands Meteorological Institute (KNMI)
Since its launch on 28 May 2024, EarthCARE has been performing global observations of the atmosphere with unprecedented accuracy. EarthCARE’s payload consists of two active instruments, an atmospheric lidar (ATLID) and a cloud profiling radar (CPR), and two passive instruments, a multispectral imager (MSI) and a broad-band radiometer (BBR). The active instruments provide detailed vertical information on aerosol and clouds, while the passive instruments provide information on spectral radiances and radiation at the top of the atmosphere. Together, these synergistic observations enhance our understanding of aerosol, cloud, and radiation interactions. The vertical distribution of clouds plays an important role in a number of meteorological and climatological processes, acting as a proxy for the meteorological conditions and the thermodynamic and microphysical processes that enable cloud formation. Therefore, cloud top height (CTH) is a valuable source of information on, e.g., the vertical distribution of liquid water, the thermodynamic phase of clouds and cloud classification. EarthCARE delivers CTH in three different products, A-CTH, M-COP and AM-CTH, which utilize information from ATLID (Wandinger et al., 2023), MSI (Hünerbein et al., 2024) and ATLID/MSI synergy (Haarig et al., 2023), respectively. While active remote sensing is powerful when it comes to vertical information, it lacks wider spatial coverage. Passive remote sensing, on the other hand, suffers from limited vertical accuracy but provides broader horizontal coverage. One way to overcome this problem is the synergistic CTH difference, which is measured along the track and then transferred to the swath in the AM-CTH product. In this presentation, the CTH products from two of the EarthCARE processors (A-LAY and AM-COL) will be presented in detail. Special emphasis will be given to the verification and validation of the CTH products.
In particular, first validation results against ground-based measurements from the PollyNET network (Baars et al., 2016), but also against measurements available from the EarthCARE Validation Teams (ECVT), will be presented. References Baars, H., Kanitz, T., Engelmann, R., Althausen, D., Heese, B., Komppula, M., Preißler, J., Tesche, M., Ansmann, A., Wandinger, U., Lim, J.-H., Ahn, J. Y., Stachlewska, I. S., Amiridis, V., Marinou, E., Seifert, P., Hofer, J., Skupin, A., Schneider, F., Bohlmann, S., Foth, A., Bley, S., Pfüller, A., Giannakaki, E., Lihavainen, H., Viisanen, Y., Hooda, R. K., Pereira, S. N., Bortoli, D., Wagner, F., Mattis, I., Janicka, L., Markowicz, K. M., Achtert, P., Artaxo, P., Pauliquevis, T., Souza, R. A. F., Sharma, V. P., van Zyl, P. G., Beukes, J. P., Sun, J., Rohwer, E. G., Deng, R., Mamouri, R.-E., and Zamorano, F.: An overview of the first decade of PollyNET: an emerging network of automated Raman-polarization lidars for continuous aerosol profiling, Atmos. Chem. Phys., 16, 5111–5137, https://doi.org/10.5194/acp-16-5111-2016, 2016. Haarig, M., Hünerbein, A., Wandinger, U., Docter, N., Bley, S., Donovan, D., and van Zadelhoff, G.-J.: Cloud top heights and aerosol columnar properties from combined EarthCARE lidar and imager observations: the AM-CTH and AM-ACD products, Atmos. Meas. Tech., 16, 5953–5975, https://doi.org/10.5194/amt-16-5953-2023, 2023. Hünerbein, A., Bley, S., Deneke, H., Meirink, J. F., van Zadelhoff, G.-J., and Walther, A.: Cloud optical and physical properties retrieval from EarthCARE multi-spectral imager: the M-COP products, Atmos. Meas. Tech., 17, 261–276, https://doi.org/10.5194/amt-17-261-2024, 2024. Wandinger, U., Haarig, M., Baars, H., Donovan, D., and van Zadelhoff, G.-J.: Cloud top heights and aerosol layer properties from EarthCARE lidar observations: the A-CTH and A-ALD products, Atmos. Meas. Tech., 16, 4031–4052, https://doi.org/10.5194/amt-16-4031-2023, 2023.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: ESA Mobile Lidar for EarthCARE mission: EMORAL & ATLID

Authors: Iwona Stachlewska, Afwan Hafiz, George Georgoussis, Doina Nicolae, Pol Ribes Pleguezuelo, Jonas von Bismarck
Affiliations: University of Warsaw - Faculty of Physics, Raymetrics SA, National Institute of Research and Development for Optoelectronics, European Space Agency
The ESA & JAXA EarthCARE mission serves the community with a wealth of climate-relevant parameters and observables. A major effort has gone into designing, building, and remotely operating a range of instrumentation able to provide data variables that will allow better assessments of climate. A number of dedicated tools are available to the community for visualizing, processing, and analyzing EarthCARE data. Cal/Val activities, currently organized on a wide scale during the commissioning phase, are of course crucial for assessing the quality of the produced data. For direct comparisons with the spaceborne ATLID lidar data, our team uses the mobile lidar system EMORAL, which was built in a collaborative effort between the research and private sectors within the POLIMOS activity (Stachlewska et al., 2024). POLIMOS pursues several goals, among which ESA Cal/Val efforts play a crucial role. The EMORAL lidar is installed in a van, can be operated autonomously in the field, and is easy to relocate and move directly under the ATLID overpass track of EarthCARE. The POLIMOS team performed dedicated observations collocating the EMORAL lidar under ATLID overpasses, e.g. in Wroclaw, Ojrzanow, Tresna, Wzdoł Rzadowy, and Przewloka in Poland. Using various tools and approaches, we will show the results of direct comparisons of the two lidars, as well as the application of unusual evaluation methods for deriving data products, such as the two-stream approach (Stachlewska & Ritter, 2010). We will highlight the advantages of operating a complex mobile lidar system in collocation with a spaceborne lidar, especially in the context of improving our understanding of which aerosol and cloud properties can be sensed by ATLID and how they should be interpreted. 
The EMORAL lidar, equipped with 5 Mie channels (2 of which provide depolarization information), 3 Raman channels, and a broadband fluorescence channel, gives a panoply of physical quantities that can be used for aerosol typing at different altitudes. It provides information up to 15 km above the Earth's surface on biogenic, geogenic and anthropogenic aerosol types and, moreover, is able to discriminate them from ice, mixed-phase and liquid clouds. Joint data analysis of EMORAL and ATLID therefore presents a great basis for the interpretation of the ATLID lidar data products. References: °Stachlewska, I.S. et al. (2024). EMORAL—Mobile Mie-Raman Lidar with Fluorescence, Polarization and Water Vapor Observational Capabilities for Satellite Cal/Val Field Campaigns. In: Singh, U.N., Tzeremes, G., Refaat, T.F., Ribes Pleguezuelo, P. (eds) Space-based Lidar Remote Sensing Techniques and Emerging Technologies. LIDAR 2023. Springer Aerospace Technology. Springer, Cham. https://doi.org/10.1007/978-3-031-53618-2_21 °Stachlewska, I. S. and Ritter, C.: On retrieval of lidar extinction profiles using Two-Stream and Raman techniques, Atmos. Chem. Phys., 10, 2813–2824, https://doi.org/10.5194/acp-10-2813-2010, 2010.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Exploring Aerosol Composition and Vertical Distribution: Validation of EarthCARE ATLID with CAMS forecast

Authors: Xuemei Wang, Ping Wang, Thanos Tsikerdekis, David P. Donovan, Gerd-Jan van Zadelhoff
Affiliations: KNMI
Aerosols contribute to one of the greatest uncertainties in the global radiative forcing of a changing climate by reflecting, scattering and absorbing radiation, as well as by acting as cloud condensation nuclei to form clouds. However, the effects of aerosols vary depending on their type, because different compositions influence how aerosols interact with radiation and their hygroscopicity. Similarly, the vertical distribution of aerosols also plays an important role, as meteorological conditions provide the background for aerosol-cloud and aerosol-radiation interactions. Consequently, we need a thorough understanding of the vertical profiles of various aerosol compositions in the atmosphere. The EarthCARE (Earth Cloud, Aerosol and Radiation Explorer) mission, launched in May 2024, is measuring aerosol and cloud properties globally. The ATmospheric LIDar (ATLID) onboard EarthCARE efficiently measures aerosol profiles at a 355 nm wavelength, while the Target Classification (TC) product provides valuable information about cloud and aerosol (sub-)types per layer. The combination of the two allows us to understand how aerosols of different composition vary with altitude. However, both the ATLID lidar and the TC products are still in their preliminary stages and require validation and calibration. In our study, we aim to validate the combination of ATLID aerosol and TC products against the CAMS (Copernicus Atmosphere Monitoring Service) forecast. The CAMS forecast provides high-quality data by incorporating observations, which enhances the accuracy of atmospheric composition estimates. The co-located comparisons will allow us to better understand the quality of the observations and the vertical distribution of aerosols. We also investigate the aerosol optical depth (AOD), which is a useful parameter for quantifying the influence of aerosols on the radiation balance. 
The composition-specific AOD information will further enable us to assess radiative forcing according to each aerosol type. Therefore, such comparisons will provide valuable insights into the lidar measurements, eventually allowing us to use the observations to improve global models for better climate change predictions.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Radiative closure assessment using A-Train satellite data and EarthCARE synergetic retrieval algorithm

Authors: Dr Zhipeng Qu, Howard Barker, Jason Cole, Meriem Kacimi, Shannon Mason, Robin Hogan
Affiliations: Environment And Climate Change Canada, European Centre for Medium-Range Weather Forecasts
Launched in May 2024, ESA and JAXA’s Earth Cloud, Aerosol, and Radiation Explorer (EarthCARE) satellite mission delivers detailed observations of cloud, aerosol, and radiative properties, playing a crucial role in advancing our understanding of atmospheric processes and their impacts on the global climate system. EarthCARE facilitates continuous radiative closure assessments using both 1D and 3D broadband radiative transfer models. This capability provides invaluable insights into understanding and quantifying Earth’s radiation budget, which is essential for improving climate predictions and weather models. To support the EarthCARE mission, the Clouds, Aerosol, and Precipitation from Multiple Instruments using a Variational Technique (CAPTIVATE) algorithm was developed at the European Centre for Medium-Range Weather Forecasts (ECMWF). CAPTIVATE synergistically retrieves cloud, aerosol, and precipitation properties by integrating observational data from EarthCARE’s Atmospheric Lidar (ATLID), Cloud Profiling Radar (CPR), and Multi-Spectral Imager (MSI). This integrated approach ensures comprehensive and accurate characterizations of atmospheric constituents. CAPTIVATE’s adaptability extends its utility beyond EarthCARE; it can be applied to observations from the A-Train satellite constellation, including CloudSat, CALIPSO, and MODIS. To validate the simulated radiative fluxes, CERES observations are employed in a manner similar to how the Broad-Band Radiometer (BBR) data will be used for EarthCARE’s radiative closure assessment. This study presents radiative closure assessments using observations from A-Train satellites with evenly sampled days in 2009, alongside data from EarthCARE during its first year of operation. The results will evaluate current retrieval algorithms and propose potential improvements for radiative closure assessments based on both A-Train and EarthCARE observations. 
Additionally, the study underscores the advantages of incorporating direct extinction retrievals from EarthCARE’s High Spectral Resolution Lidar (HSRL). In summary, this study highlights the critical role of EarthCARE and CAPTIVATE in advancing radiative closure assessments, contributing to more accurate climate models and improving numerical weather prediction (NWP) systems. By comparing EarthCARE data with historical A-Train observations, the research aims to refine retrieval techniques and deepen our understanding of Earth’s radiation budget.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Use of State-of-the-Art Spectral Radiation and Aerosol Measurements for EarthCARE Validation

Authors: Kyriaki Papachristopoulou, Dr. Stelios Kazadzis, Dr. Andreas Hüni, Dr. Vassilis Amiridis, Dr. Dimitris Balis, Dr. Natalia Kouremeti, Angelos Karanikolas, Anna Moustaka, Dr. Eleni Marinou, Dimitra Kouklaki
Affiliations: Physikalisch-Meteorologisches Observatorium Davos, World Radiation Center (PMOD/WRC), Remote Sensing Laboratories, Dept. of Geography, University of Zurich (UZH), Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens (IAASARS/NOA), Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki (AUTH), Institute for Particle Physics and Astrophysics, ETH Zurich, Laboratory of Climatology and Atmospheric Environment, Department of Geology and Geoenvironment, National and Kapodistrian University of Athens (NKUA)
The EarthCARE satellite mission, launched in May 2024, delivers valuable insights into the three-dimensional structure of clouds, precipitation, and aerosols, alongside co-located radiative flux measurements. These simultaneous measurements will be utilized to assess the representation of clouds, precipitation, aerosols, radiative fluxes, and estimated heating rates in weather and climate models, enhancing at the same time our understanding of cloud and aerosol radiative effects and their interactions (Wehr et al., 2023). ESA coordinates many EarthCARE-related calibration and validation activities aiming to evaluate the mission products, including the ACROSS activity, coordinated by the National Observatory of Athens (NOA), which aims to organize field experiments in the Mediterranean region focused on EarthCARE validation (Marinou et al., 2024). The ACROSS Mediterranean experiment will be conducted in 2026 as a joint effort by several institutes and universities in Europe, deploying advanced state-of-the-art instruments tailored to meet the mission’s validation requirements at three Mediterranean supersites. PMOD/WRC, the world reference institute for solar measurements and aerosol optical depth as designated by the World Meteorological Organization (WMO), in collaboration with the University of Zurich (UZH) and Aristotle University of Thessaloniki (AUTH), will participate in the ACROSS activity under the RACE-ECV (Radiation Closure Experiments for EarthCARE Validation) project by contributing highly accurate photometric suites in synergy with other ground-based remote sensors, filling a critical gap in ACROSS. Atmospheric measurements (aerosols and clouds) and solar spectral radiation data are required at the surface for a detailed evaluation of EarthCARE's atmospheric and radiation products. Specifically, the analysis of the synergistic dataset will be utilized for radiation closure experiments aimed at verifying EarthCARE products. 
The RACE-ECV project will launch an intensive campaign in spring 2025 at two sites in Greece as a preparatory phase for the upcoming ACROSS Mediterranean experiment. The high-quality synergistic datasets will be used not only for EarthCARE validation but also for various atmospheric studies, linking the results to new EU projects focused on aerosol-cloud-radiation interactions (e.g., CERTAINTY). The first comparisons and radiative closure studies are envisaged to be presented upon the availability of the synergistic EarthCARE products. Acknowledgements: This research was financially supported by the RACE-ECV project (Grant Agreement SBFI-633.4-2021-2024/PMOD - EarthCARE 202/2) funded by SBFI, the CERTAINTY project (Grant Agreement 101137680) funded by Horizon Europe program and the PANGEA4CalVal project (Grant Agreement 101079201) funded by the European Union. References: Marinou et al., "Across Mediterranean Experiment for the Cal/Val of the Earthcare Mission," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 1141-1143, doi: 10.1109/IGARSS53475.2024.10642456. Wehr et al., "The EarthCARE mission – science and system overview", Atmos. Meas. Tech., 16, 3581–3608, https://doi.org/10.5194/amt-16-3581-2023, 2023.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: First Examples of Aerosol Optical Thickness from the Multi-Spectral Imager (MSI) on EarthCARE

Authors: Nicole Docter, René Preusker, Nils Madenach, Anja Hünerbein, Sebastian Bley
Affiliations: Leibniz Institute For Tropospheric Research, Institute of Meteorology, Freie Universität Berlin
With the recent launch of the EarthCARE mission in May 2024, horizontal and vertical aerosol information becomes available from one platform. While ATLID (ATmospheric LIDar) observes aerosol properties vertically resolved along the satellite track, MSI (Multi-Spectral Imager) offers complementary horizontal context on a 150 km swath. The passive aerosol product (M-AOT, Docter et al., 2023) is based on measurements of the four available MSI bands in the visible to shortwave infrared (670 nm, 865 nm, 1650 nm and 2210 nm) and uses the optimal estimation technique. The product contains columnar estimates of aerosol optical thickness over ocean at 670 and 865 nm and, where possible, over land at 670 nm. The algorithm itself is separated into ocean and land retrieval branches and uses the EarthCARE Hybrid End-To-End Aerosol Classification (Wandinger et al., 2023) to ensure consistency between MSI- and ATLID-based aerosol products by assuming consistent underlying aerosol optical properties. We will give an overview of the underlying algorithm and present first product examples based on EarthCARE MSI measurements, together with a first verification of aerosol optical thickness using Aerosol Robotic Network (AERONET), Maritime Aerosol Network (MAN) and AERONET – Ocean Color (AERONET-OC) data. References: Docter, N., Preusker, R., Filipitsch, F., Kritten, L., Schmidt, F., and Fischer, J.: Aerosol optical depth retrieval from the EarthCARE Multi-Spectral Imager: the M-AOT product, Atmospheric Measurement Techniques, 16, 3437–3457, https://doi.org/10.5194/amt-16-3437-2023, 2023. Wandinger, U., Floutsi, A. A., Baars, H., Haarig, M., Ansmann, A., Hünerbein, A., Docter, N., Donovan, D., van Zadelhoff, G.-J., Mason, S., and Cole, J.: HETEAC – the Hybrid End-To-End Aerosol Classification model for EarthCARE, Atmospheric Measurement Techniques, 16, 2485–2510, https://doi.org/10.5194/amt-16-2485-2023, 2023.
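Since M-AOT relies on optimal estimation, a minimal sketch of the underlying maximum a posteriori update for a linear forward model may help to illustrate the technique; the Jacobian, covariances, and radiance values below are synthetic placeholders, not the actual M-AOT configuration:

```python
import numpy as np

# Sketch of a one-step optimal-estimation (MAP) retrieval with a linear
# forward model y = K x + noise. All numbers are hypothetical.
K = np.array([[1.0], [0.8]])   # Jacobian: 2 radiances w.r.t. 1 state (AOT)
S_e = np.diag([0.01, 0.01])    # measurement error covariance
S_a = np.array([[0.25]])       # prior covariance of the state
x_a = np.array([0.1])          # prior aerosol optical thickness
y = np.array([0.32, 0.27])     # "measured" radiances

# Posterior covariance and state estimate (standard linear OE solution)
S_hat = np.linalg.inv(np.linalg.inv(S_a) + K.T @ np.linalg.inv(S_e) @ K)
x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - K @ x_a)
```

The posterior covariance `S_hat` is always smaller than the prior `S_a`, quantifying how much information the radiances add to the prior AOT.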
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Ground Measurements of the ATLID Laser Beam at Ultra-High-Energy Cosmic Ray Observatories

Authors: Oliver Lux, Oliver Reitebuch, Georgios Tzeremes, Kotska Wallace
Affiliations: Deutsches Zentrum für Luft- und Raumfahrt, Institut für Physik der Atmosphäre, Observatorio Pierre Auger, Telescope Array Project, European Space Agency, European Space Research and Technology Centre
Following the activation of the Atmospheric Lidar (ATLID) laser transmitter onboard EarthCARE in summer 2024, its ultra-violet (UV) laser pulses have been consistently observed by the Pierre Auger Observatory (Auger) in Argentina and the Telescope Array Project (TA) in the USA. These observatories are the world’s largest facilities dedicated to detecting ultra-high-energy cosmic rays, whose origins remain one of the most profound mysteries in high-energy astrophysics. A key detection method involves capturing UV fluorescence light produced during air showers—cascades of secondary particles created when cosmic rays interact with the Earth’s atmosphere. Auger and TA deploy arrays of telescopes to monitor fluorescence emissions over areas of approximately 3000 km² and 700 km², respectively. During its 25-day orbital repeat cycle, EarthCARE overpasses each observatory at least once, allowing sideward-scattered UV light from the ATLID laser beam to be recorded similarly to air-shower fluorescence. The ground-based observations enable precise geometric reconstruction of the laser beam’s trajectory through the atmosphere. By combining these observations with detailed modelling of molecular and aerosol scattering, as well as optical losses along the beam path, ATLID’s laser energy at its exit point in space can be determined. The approach was successfully demonstrated during ESA’s Doppler wind lidar mission Aeolus, which also utilized a UV laser transmitter, providing independent validation of the instrument performance and improving the precision of its laser beam's ground track reconstruction. Unlike Aeolus, whose visibility at Auger was limited to a few months per year, ATLID’s year-round observability at both Auger and TA enables continuous monitoring of its laser signal strength and beam divergence. 
Additionally, the close temporal proximity of EarthCARE’s overpasses of the two observatories in opposite hemispheres enables precise cross-calibration of the energy scales of Auger and TA. This calibration is essential for refining the celestial map of cosmic ray arrival directions and increasing sensitivity to the highest-energy particles in the universe. This contribution will showcase the unique synergy between the EarthCARE mission and cosmic ray observatories, representing a significant advancement in both active satellite remote sensing and the physics of cosmic rays.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Unfiltering of the EarthCARE BBR: First results

Authors: Carla Salas, Almudena Velázquez Blázquez, Nicolas Clerbaux, Edward Baudrez, Carlos Domenech
Affiliations: Royal Meteorological Institute Of Belgium, GMV
The Broad-Band Radiometer (BBR) instrument on EarthCARE measures outgoing SW (0.25 to 4 µm) and TW (0.25 to >50 µm) radiances at the Top Of the Atmosphere (TOA) with three fixed telescope views (fore, nadir and aft) in an along-track configuration. The signal provided by the BBR is a radiance filtered by the spectral response of the instrument; this is corrected in the unfiltering process in order to reduce the effect of a limited and non-uniform spectral response. In practice, the BBR SW and TW measurements are converted into solar and thermal unfiltered radiances within the BM-RAD processor. In a first step, the LW radiance is estimated from the SW and TW measurements. Secondly, the inter-channel contaminations, i.e., the part of the LW signal due to reflected solar radiation and the part of the SW signal due to planetary radiation, are accounted for. Finally, multiplicative factors are applied to estimate the unfiltered solar and thermal radiances from the SW and LW channels, respectively. The methodology is based on theoretical regressions between filtered and unfiltered radiances. The regressions have been derived from a large geophysical database of spectral radiance curves simulated using radiative transfer models. Each BBR telescope uses an array of 30 microbolometer detectors, allowing the BBR measurements to be averaged over different spatial domains, namely standard (10x10 km²), small (5x10 km²) and full (17x10 km² for nadir and 28x10 km² for the off-nadir views), all defined by the level-1 B-NOM product, and additionally the Assessment Domain (AD) resolution on the Joint Standard Grid (~5x21 JSG pixels), used for the radiative closure assessment and defined within BM-RAD. 
The main inputs to the processor are the level-1 B-NOM product, which gives the filtered shortwave (SW) and longwave (LW) radiances over the standard, small, and full domains; the level-1 B-SNG product, which gives the filtered SW/TW radiances at detector level; the MSI cloud mask and cloud phase product from level-2 M-CM; the Joint Standard Grid (X-JSG); and ancillary meteorological data from X-MET. Regarding the algorithm, two unfiltering algorithms are used: the stand-alone algorithm (for the SW and LW channels) and the MSI-based algorithm (for the SW channel). The stand-alone algorithm estimates the unfiltered radiances from the filtered ones without using information from the imager; the unfiltering is done according to the measured broadband radiances and the land use classification. The MSI-based algorithm makes use of the MSI cloud mask and cloud phase in the unfiltering. This study will show an overview of the BBR unfiltered radiances since the EarthCARE launch in May 2024. Verification studies before launch have shown that expected accuracies are better than 0.5% for the solar radiances and better than 0.1% for the thermal radiances at one standard deviation. The BM-RAD product is essential in the EarthCARE production chain, since the unfiltered radiances produced will be used in the calculation of the TOA fluxes by the BMA-FLX processor and also in the ACM-DF processor to verify the EarthCARE radiative closure.
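As a rough illustration of the unfiltering chain described in this abstract, the sketch below applies the three steps to scalar radiances; the regression and unfiltering coefficients (`a0`, `a1`, `k_sw`, `k_lw`) are hypothetical placeholders, not BM-RAD values:

```python
# Illustrative sketch of BBR unfiltering. Coefficients are hypothetical;
# the operational processor uses regressions fitted to a large database
# of simulated spectral radiance curves.

def unfilter_bbr(L_sw_f, L_tw_f, a0=0.02, a1=0.98, k_sw=1.04, k_lw=1.01):
    """Convert filtered SW/TW radiances (W m-2 sr-1) into unfiltered
    solar and thermal radiances."""
    # Steps 1-2: estimate the filtered LW radiance from SW and TW, with
    # inter-channel contamination folded into the regression coefficients.
    L_lw_f = L_tw_f - a0 - a1 * L_sw_f
    # Step 3: channel-specific multiplicative unfiltering factors.
    L_solar = k_sw * L_sw_f
    L_thermal = k_lw * L_lw_f
    return L_solar, L_thermal

L_solar, L_thermal = unfilter_bbr(L_sw_f=120.0, L_tw_f=210.0)
```

In the real processor the coefficients vary with scene type (cloud mask, cloud phase, land use), which is precisely what distinguishes the stand-alone and MSI-based branches.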
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Overview of ATLID Level 1 processing and calibration

Authors: Annabel Chantry, David Donovan, Fabien Marnas, Gerd-Jan van Zadelhoff, Bertrand Corselle, Geraud De Villele
Affiliations: European Space Agency, Royal Netherlands Meteorological Institute (KNMI), Airbus Defence and Space S.A.S
ATLID is the UV atmospheric lidar payload onboard the EarthCARE (Earth Clouds, Aerosols and Radiation Explorer) satellite, which was successfully launched on 28 May 2024 from Vandenberg on a SpaceX launch vehicle. ATLID is a lidar with a high spectral resolution capability, allowing it to discriminate between the molecular (Rayleigh) and particulate (Mie) contributions to the backscattered radiation. Additionally, the cross-polarized component of the signal is measured using a dedicated channel. The spectral separation is performed via the High Spectral Resolution Etalon (HSRE) included in the telescope focal plane of the instrument, and polarizing optics are used to separate the cross-polarized backscattered signal (do Carmo et al., 2021). The main products retrieved at Level 1 are the crosstalk-corrected Attenuated Backscatter (ATB) profiles corresponding to the Mie, Rayleigh and cross-polar channels, which should be derived with high accuracy to support higher levels of processing. The complex set-up requires accurate calibrations in order to deliver products with the required accuracy. Indeed, at level 0, the signal in each channel is a combination of the atmospheric signal components mixed via several crosstalk effects (Mie spectral crosstalk χ, Rayleigh spectral crosstalk ϵ, polarisation crosstalk ψ∥ and ψ⊥). These level 0 products are ingested by the ATLID EarthCARE Ground Processor (ECGP); the main step of the ECGP processing is to correct the acquired instrumental signals for these crosstalk effects, as well as to correct for the radiometric sensitivity of each channel, in order to retrieve range-corrected attenuated backscatter. Two dedicated calibration modes are routinely executed and ingested by the ECGP. The first is a spectral calibration to regularly tune the emitted laser wavelength to the transmittivity peak of the high-resolution etalon (by minimising ϵ). 
Regular dark current calibrations are also performed to adjust the level of dark current involved in the radiometric calibration (Eisinger et al., 2024). Additionally, to ensure constant calibration of the products, the spectral and polarisation crosstalk coefficients as well as the absolute radiometric calibration constants of each channel are constantly monitored thanks to innovative algorithmic methods. This presentation will give an overview of ATLID calibration operations and the associated Level 1 processing, as well as the derivation of the most critical variables for the calibration of the Level 1 products. The stability of key calibration variables such as the crosstalk parameters and the lidar constants will be presented, and the quality and accuracy of ATLID Level 1 products will be discussed. References: do Carmo, J.P.; de Villele, G.; Wallace, K.; Lefebvre, A.; Ghose, K.; Kanitz, T.; Chassat, F.; Corselle, B.; Belhadj, T.; Bravetti, P.: ATmospheric LIDar (ATLID): Pre-Launch Testing and Calibration of the European Space Agency Instrument That Will Measure Aerosols and Thin Clouds in the Atmosphere. Atmosphere 2021, 12, 76. https://doi.org/10.3390/atmos12010076. Eisinger, M., Marnas, F., Wallace, K., Kubota, T., Tomiyama, N., Ohno, Y., Tanaka, T., Tomita, E., Wehr, T., and Bernaerts, D.: The EarthCARE mission: science data processing chain overview, Atmos. Meas. Tech., 17, 839–862, https://doi.org/10.5194/amt-17-839-2024, 2024.
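The crosstalk correction described in this abstract amounts to undoing a linear mixing of the true signal components in the three channels; the following minimal sketch uses a hypothetical 3x3 crosstalk matrix (the real coefficient structure and values differ):

```python
import numpy as np

# The measured Mie, Rayleigh and cross-polar signals are modelled as a
# linear mixture of the true atmospheric components; the correction then
# amounts to inverting the crosstalk matrix. Coefficients are hypothetical.
chi, eps = 0.05, 0.03            # Mie / Rayleigh spectral crosstalk
psi_par, psi_perp = 0.02, 0.04   # polarisation crosstalk terms

# Rows: measured (Mie, Rayleigh, cross-polar); columns: true components.
C = np.array([
    [1.0 - chi, eps,       psi_par],
    [chi,       1.0 - eps, psi_par],
    [psi_perp,  psi_perp,  1.0    ],
])

true = np.array([2.0, 5.0, 0.5])           # "true" backscatter components
measured = C @ true                        # what the detectors record
recovered = np.linalg.solve(C, measured)   # crosstalk-corrected signals
```

Because the crosstalk coefficients drift, they must be monitored continuously, which is why their stability is a key topic of the presentation.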
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: ECLiAP: A Physics-Based Framework for Advanced CCN Retrievals Using Lidar Observations

Authors: Piyushkumar Patel, Bastiaan van Diedenhoven
Affiliations: SRON Netherlands Institute For Space Research
Cloud Condensation Nuclei (CCN) are central to aerosol–cloud interactions (ACIs), playing a pivotal role in cloud formation and influencing cloud microphysics, precipitation, and radiative forcing. Despite their critical importance, CCN remain one of the least understood components of the climate system, contributing significantly to uncertainties in global climate predictions. Accurate, vertically resolved global observations of CCN are vital for quantifying their impact on aerosol-cloud-climate interactions and improving climate models. This study presents an advanced retrieval framework, ECLiAP (Patel et al., 2024), specifically designed to estimate CCN concentrations from lidar measurements, including those from single-wavelength systems like ATLID onboard EarthCARE. ECLiAP employs multi-wavelength aerosol-type-specific lookup tables, integrating particle size distributions, aerosol optical properties, and hygroscopic growth factors. These properties are derived using κ-Köhler theory to compute critical diameters for CCN activation across six supersaturation levels (0.07% to 1.0%). The algorithm accounts for five distinct aerosol subtypes with bimodal size distributions and applies type-specific hygroscopic adjustments to improve the accuracy of dry aerosol property retrievals. The framework can be adapted to single-wavelength lidars like ATLID by exploiting depolarization measurements to refine aerosol type identification. Synergy between EarthCARE and NASA's PACE satellite may be leveraged by using SPEXone polarimetric data to improve aerosol size and composition inputs and to provide water uptake estimates (Van Diedenhoven et al., 2022) that refine hygroscopic corrections. The robust capabilities of ECLiAP have been validated against NASA airborne lidar data (ORACLES campaign), showcasing strong correlations with independent CCN counter measurements. 
These validations demonstrate its potential to deliver global, vertically resolved CCN datasets, enabling the first-ever multi-year CCN climatology by integrating past (CALIOP-CALIPSO), current (ATLID-EarthCARE), and future (NASA AOS) lidar missions. This effort will facilitate direct CCN–cloud droplet number concentration (Nd) studies, improving estimates of Effective Radiative Forcing (ERFaci) and cloud susceptibility metrics. The resulting CCN climatology will provide critical insights into aerosol-cloud interactions over decadal scales, significantly reducing uncertainties in aerosol climate forcing and advancing predictions of climate sensitivity and future warming scenarios. References Patel, P. N., Jiang, J. H., Gautam, R., Gadhavi, H., Kalashnikova, O., Garay, M. J., Gao, L., Xu, F., & Omar, A. (2024). A remote sensing algorithm for vertically resolved cloud condensation nuclei number concentrations from airborne and spaceborne lidar observations. Atmospheric Chemistry and Physics, 24(5), 2861–2883. https://doi.org/10.5194/ACP-24-2861-2024 Van Diedenhoven, B., Hasekamp, O. P., Cairns, B., Schuster, G. L., Stamnes, S., Shook, M., & Ziemba, L. (2022). Remote sensing of aerosol water fraction, dry size distribution and soluble fraction using multi-angle, multi-spectral polarimetry. Atmospheric Measurement Techniques, 15(24), 7411–7434. https://doi.org/10.5194/AMT-15-7411-2022
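The κ-Köhler activation step described above can be sketched with the widely used approximation relating hygroscopicity κ, critical supersaturation, and critical dry diameter; all numerical values are illustrative, not ECLiAP lookup-table inputs:

```python
import math

# Sketch of the kappa-Koehler step: for a given aerosol hygroscopicity
# kappa and supersaturation level, compute the critical dry diameter above
# which particles activate as CCN (valid approximation for kappa >~ 0.1).
def critical_dry_diameter(kappa, supersat_pct, T=298.15):
    """Critical dry diameter (m) for CCN activation at a given
    supersaturation expressed in percent."""
    sigma_w = 0.072   # surface tension of water, N m-1
    M_w = 0.018015    # molar mass of water, kg mol-1
    rho_w = 997.0     # density of water, kg m-3
    R = 8.314         # gas constant, J mol-1 K-1
    A = 4.0 * sigma_w * M_w / (R * T * rho_w)   # Kelvin parameter, m
    Sc = 1.0 + supersat_pct / 100.0             # critical saturation ratio
    return (4.0 * A**3 / (27.0 * kappa * math.log(Sc)**2)) ** (1.0 / 3.0)

# e.g. a sulfate-like aerosol (kappa ~ 0.6) at 0.2% supersaturation
d_crit = critical_dry_diameter(kappa=0.6, supersat_pct=0.2)
```

Less hygroscopic types (smaller κ) yield larger critical diameters, which is why type-specific hygroscopic adjustments matter for the retrieval accuracy.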
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Investigating Aerosol Impacts on Cloud Properties Using Remote Sensing Techniques

Authors: Gabriela Ciocan, Livio Belegante, Alexandru Dandocsi, Camelia Talianu, Doina Nicolae, Razvan Pirloaga, Anca Nemuc
Affiliations: National Institute Of Research And Development For Optoelectronics - Inoe 2000, Remote Sensing Department, University of Bucharest, Faculty of Physics
Atmospheric aerosols significantly influence the Earth’s radiation balance and climate systems, particularly through their interactions with clouds. These interactions, which modify cloud properties such as lifetime, height, and condensation nuclei number, contribute to uncertainties in global radiative forcing. The "first indirect effect" of aerosols refers to increased cloud optical depth due to a reduction in cloud droplet radius, with liquid water content held constant, as aerosol concentration increases (IPCC Fourth Assessment Report, 2007). This study presents a method using the DIVA platform (https://cloud.grasp-sas.com/) in conjunction with stand-alone datasets to analyse aerosol-cloud interactions and illustrate the first indirect effect. The DIVA platform is a collaborative environment for developing complex data processing and analysis tools in a user-friendly setting. DIVA already integrates observational data from various ground-based systems, including LiDAR, photometers, and spectrometers, as well as satellite and model data; more importantly, it hosts the GRASP (Generalized Retrieval of Aerosol and Surface Properties) and NATALI (Neural network Aerosol Typing Algorithm based on LIdar data) algorithms to produce advanced data products for improved characterization of atmospheric dynamics and composition. In addition to its versatility in supporting calibration and validation activities for ESA and Copernicus satellite missions, DIVA proves to be a valuable tool for aerosol and cloud characterization. GRASP (https://www.grasp-earth.com/) is employed to retrieve aerosol properties such as particle size distribution, absorption, and single scattering albedo. It is especially effective in challenging conditions such as bright desert surfaces, where surface reflectance can dominate aerosol signals. The study involves data from multiple instruments, including LiDAR, photometer, cloud radar, and Doppler LiDAR. 
While the cloud radar and Doppler LiDAR data are used independently as stand-alone datasets, they complement the GRASP retrievals based on LiDAR and photometer data. Altogether, the aerosol and cloud data provide the necessary synergy for retrieving aerosol concentration profiles, liquid water cloud properties, and the atmospheric boundary layer (ABL). The quantification of these variables allows for a comprehensive analysis of aerosol-cloud interactions and provides a detailed understanding of how aerosol properties affect cloud formation and behaviour. Central to the methodology is the calculation of aerosol concentration profiles from LiDAR and photometer data, the retrieval of liquid water cloud properties from cloud radar data, and the determination of the ABL from Doppler LiDAR data. The GRASP profiles, obtained from LiDAR and photometer data, are used to analyse fine and coarse aerosol modes and to derive single scattering albedo profiles. Aerosol typing is performed for specific layers within the boundary layer to assess how different aerosol types affect cloud properties. Furthermore, the study examines the differences between aerosol and cloud characteristics at different altitudes. By comparing these differences across altitudes, we identify the altitude range that is most effective at describing the first indirect effect. We calculate the mean aerosol concentration and radar reflectivity factor for each time step. The ABL is derived using the methodology developed by Manninen et al. (2018). An important aspect of the study is the analysis of the aerosols’ optical and physical properties and how they change in relation to their interactions with clouds, as well as with distance from the ABL boundary. The full methodology will be presented in detail during the conference.
The results of this study include the validation of the first indirect effect and the statistical characterization of aerosol types and their impact, or lack thereof, on the effect. Furthermore, these findings provide a robust foundation for satellite missions like EarthCARE to improve the understanding of aerosol-cloud interactions. Acknowledgements: This work is supported by the Romanian Core Program within the National Research Development and Innovation Plan 2022-2027, carried out with the support of MCID, project no. PN 23 05. This research has been partially funded by the ESA project DIVA (Demonstration of an Integrated approach for the Validation and exploitation of Atmospheric missions), contract 4000121773/17/I-EF. We acknowledge the support of the ACTRIS ERIC for the long-term sustainability of the ground-based infrastructure providing data to this study.
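The first indirect effect summarized above can be illustrated with a minimal numerical sketch. This is not the DIVA/GRASP implementation, only the standard textbook relations with illustrative values: at fixed liquid water content, increasing the droplet number concentration N shrinks the effective radius (r_e proportional to N^(-1/3)) and raises the cloud optical depth (tau proportional to N^(1/3)).

```python
import math

RHO_W = 1000.0  # density of liquid water, kg m^-3

def effective_radius(lwc, n_d):
    """Droplet effective radius (m) for liquid water content lwc (kg m^-3)
    and droplet number concentration n_d (m^-3), assuming monodisperse drops."""
    return (3.0 * lwc / (4.0 * math.pi * RHO_W * n_d)) ** (1.0 / 3.0)

def optical_depth(lwp, r_e):
    """Cloud optical depth from liquid water path lwp (kg m^-2) and r_e (m)."""
    return 3.0 * lwp / (2.0 * RHO_W * r_e)

# Doubling droplet number at constant liquid water: r_e shrinks, tau grows.
lwc, lwp = 0.3e-3, 0.1           # 0.3 g m^-3, 100 g m^-2 (illustrative)
for n_d in (50e6, 100e6):        # 50 and 100 cm^-3, expressed in m^-3
    r_e = effective_radius(lwc, n_d)
    print(f"N={n_d/1e6:.0f} cm^-3  r_e={r_e*1e6:.1f} um  "
          f"tau={optical_depth(lwp, r_e):.1f}")
```

Doubling N at constant liquid water increases tau by a factor of 2^(1/3), about 1.26, which is the qualitative signature of the first indirect effect.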
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Aerosol layer properties from ATLID and from ATLID/MSI synergy

Authors: Athena Augusta Floutsi, Moritz Haarig, Holger Baars, Sebastian Bley, Leonard König, Henriette Gebauer, Nicole Docter, René Preusker, David Donovan, Gerd-Jan van Zadelhoff, Ulla Wandinger
Affiliations: Leibniz Institute for Tropospheric Research (TROPOS), Freie Universität Berlin, Institute for Space Science, Royal Netherlands Meteorological Institute (KNMI)
EarthCARE’s payload is ideal for synergistic retrieval approaches since it consists of two active and two passive instruments. The active instruments, ATLID (atmospheric lidar) and CPR (cloud profiling radar), provide vertically-resolved profiles of aerosol and cloud properties. The two passive instruments, BBR (broad-band radiometer) and MSI (multispectral imager), provide information on the total and shortwave radiation at the top of the atmosphere and spectral radiances across the cross-track swath. The combination of ATLID and MSI measurements is necessary to achieve the mission objective of deriving the top-of-atmosphere radiative flux with an accuracy of 10 W m−2 for every 100 km². Information on the aerosol layers (vertical layer boundaries and mean optical properties), but also on the whole atmospheric column, is important for understanding the aerosol types present in the atmosphere, identifying their sources, and finally assessing their radiative impact. EarthCARE provides such information along track from ATLID in the A-ALD (Aerosol Layer Descriptor) product and across track from MSI in the M-AOT (Aerosol Optical Thickness) product. The synergistic AM-ACD (Aerosol Columnar Descriptor) product combines the vertically resolved ATLID information with the horizontally spread columnar information from MSI. Here, we focus on the ATLID and ATLID/MSI synergy products that are processed by the A-LAY (Wandinger et al., 2023) and the AM-COL (Haarig et al., 2023) processors, respectively. The A-ALD product contains the boundaries of the aerosol layers along with their respective errors. Most importantly, the aerosol layer-mean optical properties, i.e., the extinction and backscatter coefficient, the lidar ratio and the particle linear depolarization ratio at 355 nm, as well as the aerosol type probabilities, are provided.
Within the A-ALD, the aerosol optical thickness at 355 nm is also calculated for the whole atmospheric column and for the stratosphere separately. The product belongs to the L2a EarthCARE data, which are, among others, required input for the synergistic algorithms combining data from ATLID and MSI, i.e., AM-COL. The synergistic combination of the aerosol optical properties from ATLID at 355 nm and from MSI at 670 and 865 nm (ocean only) enables the calculation of the Ångström exponent at the wavelength pairs of 355/670 and 355/865 nm as part of the AM-ACD product. The Ångström exponent adds additional information to the vertically-resolved aerosol classification from ATLID, which is combined with the MSI aerosol classification and then transferred to the swath. In this presentation, both the A-ALD and the AM-ACD products will be presented in detail. Special emphasis will be given to the verification and validation of the products, especially for multilayer scenarios and aerosol layers close to clouds. First validation results against ground-based measurements from the PollyNET network (Baars et al., 2016), but also against measurements available from the EarthCARE Validation Teams (ECVT), will be presented. References: Baars, H., Kanitz, T., Engelmann, R., Althausen, D., Heese, B., Komppula, M., Preißler, J., Tesche, M., Ansmann, A., Wandinger, U., Lim, J.-H., Ahn, J. Y., Stachlewska, I. S., Amiridis, V., Marinou, E., Seifert, P., Hofer, J., Skupin, A., Schneider, F., Bohlmann, S., Foth, A., Bley, S., Pfüller, A., Giannakaki, E., Lihavainen, H., Viisanen, Y., Hooda, R. K., Pereira, S. N., Bortoli, D., Wagner, F., Mattis, I., Janicka, L., Markowicz, K. M., Achtert, P., Artaxo, P., Pauliquevis, T., Souza, R. A. F., Sharma, V. P., van Zyl, P. G., Beukes, J. P., Sun, J., Rohwer, E. G., Deng, R., Mamouri, R.-E., and Zamorano, F.: An overview of the first decade of PollyNET: an emerging network of automated Raman-polarization lidars for continuous aerosol profiling, Atmos. 
Chem. Phys., 16, 5111–5137, https://doi.org/10.5194/acp-16-5111-2016, 2016. Haarig, M., Hünerbein, A., Wandinger, U., Docter, N., Bley, S., Donovan, D., and van Zadelhoff, G.-J.: Cloud top heights and aerosol columnar properties from combined EarthCARE lidar and imager observations: the AM-CTH and AM-ACD products, Atmos. Meas. Tech., 16, 5953–5975, https://doi.org/10.5194/amt-16-5953-2023, 2023. Wandinger, U., Haarig, M., Baars, H., Donovan, D., and van Zadelhoff, G.-J.: Cloud top heights and aerosol layer properties from EarthCARE lidar observations: the A-CTH and A-ALD products, Atmos. Meas. Tech., 16, 4031–4052, https://doi.org/10.5194/amt-16-4031-2023, 2023.
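The Ångström exponent described in the abstract follows the standard two-wavelength relation, alpha = -ln(tau_1/tau_2) / ln(lambda_1/lambda_2). A minimal sketch with illustrative AOT values (not actual AM-ACD output):

```python
import math

def angstrom_exponent(tau_1, lam_1, tau_2, lam_2):
    """Angstrom exponent from aerosol optical thickness at two wavelengths.

    tau_1, tau_2: AOT at wavelengths lam_1, lam_2 (same unit, e.g. nm).
    Fine-mode-dominated aerosol gives larger values; coarse dust gives
    values near zero."""
    return -math.log(tau_1 / tau_2) / math.log(lam_1 / lam_2)

# Illustrative combination: ATLID AOT at 355 nm with MSI AOT at 670 nm.
print(angstrom_exponent(0.30, 355.0, 0.12, 670.0))
```

A spectrally flat AOT yields an exponent of zero, which is a convenient sanity check for the implementation.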
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: The ATLID-FeatureMask L2a Processor: Initial Results Using EarthCARE L1 Data

Authors: Gerd-Jan van Zadelhoff, David Donovan, Ping Wang, Diko Hemminga
Affiliations: KNMI
The Earth Clouds, Aerosol, and Radiation Explorer (EarthCARE) mission, a joint ESA/JAXA endeavor, is designed to enhance understanding of the interactions between clouds and aerosols and their influence on the Earth’s radiative balance. Central to this mission is the synergistic use of active and passive sensors, combining information from four highly specialized instruments: the High Spectral Resolution Lidar ATLID, the Cloud Profiling Doppler Radar (CPR), the Multi-Spectral Imager (MSI), and the Broad-Band Radiometer (BBR). EarthCARE provides an integrated view of cloud and aerosol distributions, complemented by radiation measurements. ATLID's ability to distinguish Mie (particle) from Rayleigh (molecular) backscatter allows for precise retrievals of extinction, backscatter, and depolarization with vertical resolutions of 100 m up to 20 km altitude and 500 m from 20 to 40 km. The ATLID FeatureMask (A-FM) processor is the first step in the retrieval chain for ATLID Level 2a (L2a) data products. It plays a fundamental role by identifying regions of "significant return" in the lidar signal, which may correspond to aerosols or cloud particles. Importantly, the A-FM does not classify the type of feature but ensures the separation of particle-dominated regions from “clear-sky” regions. This preliminary segmentation allows for the optimal application of subsequent microphysical retrieval algorithms, such as cloud phase determination and aerosol typing. Unlike traditional approaches that depend on fixed thresholds or signal-to-noise ratio limits, the A-FM processor is based on image reconstruction techniques. This ensures robust feature detection even when lidar signals from optically thin clouds or tenuous aerosols approach the noise floor. By focusing on the correlation structure of the backscatter profiles rather than using predefined thresholds, the algorithm can maintain full single-shot resolution. 
This is critical for producing higher-quality aerosol retrievals and deriving cloud-aerosol feature fractions at varying resolutions, enhancing the accuracy of EarthCARE’s higher-order data products. Actual ATLID Level 1b (L1b) data from the commissioning phase, along with pre-launch simulated data, demonstrate the A-FM processor’s capability to detect cloud and aerosol features effectively, even in challenging conditions. The successful application of the algorithm to actual measurements confirms its robustness in detecting fine-scale atmospheric structures. In summary, the A-FM processor provides an essential, robust, and adaptive foundation for the EarthCARE lidar retrieval chain. As calibration and validation activities progress, early results affirm the utility of the A-FM’s feature mask in advancing our understanding of cloud-aerosol interactions and their radiative effects, reinforcing the mission’s pivotal role in climate science. References: van Zadelhoff, G.-J., Donovan, D. P., and Wang, P.: Detection of aerosol and cloud features for the EarthCARE atmospheric lidar (ATLID): the ATLID FeatureMask (A-FM) product, Atmos. Meas. Tech., 16, 3631–3651, https://doi.org/10.5194/amt-16-3631-2023, 2023. Wang, P., Donovan, D. P., van Zadelhoff, G.-J., de Kloe, J., Huber, D., and Reissig, K.: Evaluation of Aeolus feature mask and particle extinction coefficient profile products using CALIPSO data, Atmos. Meas. Tech., 17, 5935–5955, https://doi.org/10.5194/amt-17-5935-2024, 2024.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: DISC support to the EarthCARE Cal/Val activities

Authors: Dr Eleni Marinou, Silke Gross, Holger Baars
Affiliations: IAASARS, National Observatory of Athens (NOA), Institut für Physik der Atmosphäre, Deutsches Zentrum für Luft- und Raumfahrt (DLR), Leibniz Institute for Tropospheric Research (TROPOS)
The successful launch of the ESA-JAXA EarthCARE (Earth Cloud, Aerosol, and Radiation Explorer) satellite mission in May 2024 paved the way towards a better understanding of cloud-aerosol-radiation interactions and the Earth radiation budget, and their more accurate representation in climate and numerical weather prediction models. EarthCARE collects observations of the atmosphere using a high spectral resolution lidar (ATLID), a Doppler cloud radar (CPR), a multi-spectral imager (MSI), and a broadband radiometer (BBR). In order to fully exploit EarthCARE’s capabilities, early and ongoing calibration and validation (Cal/Val) activities are critical to ensure the quality, credibility, and integrity of its products. The activities of the ESA validation component are solicited by an Announcement of Opportunity (AO) issued in 2017 and are continuously enhanced with new contributions. The activities currently comprise an international community with 42 validation projects, including airborne, ground-based, satellite, and modeling Cal/Val components. Apart from the diversity of the Cal/Val teams and the reference platforms, the Cal/Val of the EarthCARE mission brings several additional validation challenges arising from its multi-sensor complexity/diversity and the innovation of its standalone and synergistic products. ESA’s EarthCARE Data, Innovation and Science Cluster (DISC) supports the Cal/Val activities of EarthCARE products throughout the mission's lifetime. To this end, a dedicated working group performs Cal/Val activities and intercomparison studies based on relevant datasets provided by the EarthCARE Validation Teams (ECVT) for the assessment of the quality of the EarthCARE products. 
Additionally, it supports the coordination of the AO Cal/Val activities through: the consolidation of validation findings, the organization of knowledge transfer, the organization of interactions between subgroup members and the algorithm and instrument teams, the provision of pre-campaign guidance and recommendations, and the support of ad-hoc plan refinement during large Cal/Val campaigns. The working group co-led the formulation of the Best Practice Protocol for the Validation of Aerosol, Cloud, and Precipitation Profiles (ACPPV), which is the reference document for the validation of the mission's profiling products. Furthermore, validation tools tailored to the EarthCARE mission are provided. These include product readers, visualization and intercomparison codes, and platforms. Additionally, the EarthCARE simulators of the ATLID, MSI, and CPR instruments are provided, and their use by the EarthCARE Cal/Val team is supported. The codes and tools are available through the ESA EarthCARE website and Cal/Val confluence spaces.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: What is the EarthCARE DISC?

Authors: Amanda Hall, Kajal Haria, Gerd-Jan van Zadelhoff, Antje Ludewig, Timon Hummel
Affiliations: Telespazio UK, KNMI, ESA
With the successful launch of the Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) in May 2024 and the release of the Level 1 data products to the public targeted for early 2025, EarthCARE is one of ESA’s latest success stories. The EarthCARE mission is a joint venture between the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) and is the most complex of ESA’s Earth Explorer missions. EarthCARE carries a high-performance ultraviolet (UV) Atmospheric Lidar (ATLID), a Doppler Cloud Profiling Radar (CPR, provided by JAXA), a Multi-Spectral Imager (MSI) and a Broad-Band Radiometer (BBR). These four instruments work in synergy to monitor clouds and aerosols and their role in regulating the climate. The data products, which are being released in stages, include single-instrument products at Level 1 and Level 2a, and Level 2b synergistic products that use data from two or more instruments. However, there are many activities that go on in the background, both before and after data release, that users of the data are often unaware of. These activities are carried out by the EarthCARE Data, Innovation and Science Cluster (DISC). So, what is the EarthCARE DISC? The DISC is an international consortium of experts from 16 organisations, based across nine countries, that is led by the Royal Netherlands Meteorological Institute (KNMI). The aim of this consortium is to support ESA in ensuring that the EarthCARE data products are of the highest quality. To meet this aim, the EarthCARE DISC carries out several different tasks, making use of its product, algorithm, and instrument expertise. The EarthCARE DISC project began in September 2024, with the aim to develop a complete product quality assurance framework for the EarthCARE mission data. The DISC project team is organised into six Expert Centres, with DISC activities split into 12 Tasks. 
The EarthCARE DISC activities focus on:
• Evolution and maintenance of the Level 1, Level 2, Calibration and Auxiliary processors,
• Calibration and Validation, including data quality control and monitoring,
• Operation of the Instrument Calibration and Monitoring Facility (ICMF),
• Tools for data processing, visualisation, data quality and the Level 2 testbed,
• Assimilation of EarthCARE products within numerical weather prediction models,
• User support for all ESA products, and
• Project communication and outreach.
The first three months of the EarthCARE DISC project formed the successful Ramp-Up phase, which saw the DISC team prepare for and take over responsibility for various activities from the Commissioning team. The DISC also set up the processes and tools required for operational activities such as instrument and data quality reporting, data validation, project communications and outreach, and user support. The EarthCARE DISC became fully operational in December 2024 by entering the Optimisation Stage. During 2025, the tasks of the DISC will continue to evolve and expand with the increased availability of EarthCARE data products to the user community. The EarthCARE DISC will function throughout the operational phase of the mission, which is designed to have a lifetime of three years. If you are interested in learning more about the EarthCARE DISC activities that go on ‘behind the scenes’, please come to see our presentation.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: The validation of the depolarization ratio measured by ATLID

Authors: Dr. Moritz Haarig, Dr. David Patrick Donovan, Dr. Ulla Wandinger, Fabien Marnas, Annabel Chantry, Leonard König, Gerd-Jan van Zadelhoff, Henriette Gebauer, Dr. Julian Hofer, Dr. Athena A. Floutsi, Holger Baars
Affiliations: Leibniz Institute For Tropospheric Research (TROPOS), Royal Netherlands Meteorological Institute (KNMI), European Space Agency–ESTEC
The atmospheric lidar (ATLID) onboard EarthCARE observes the depolarization ratio at 355 nm. The particle linear depolarization ratio is defined as the ratio of the cross-polar to the co-polar component of the received particle signal. It is of great value for target classification, because it enables the separation of ice clouds (high depolarization ratio) from liquid water clouds (low depolarization ratio) and of desert dust (high depolarization ratio) from anthropogenic pollution (low depolarization ratio). Accurate observations of the depolarization ratio are of high interest for the separation of aerosol components (e.g., Wandinger et al., 2023) and the subsequent estimation of cloud condensation nuclei and ice-nucleating particles (e.g., Mamouri and Ansmann, 2016; Choudhury and Tesche, 2022). These estimates are essential for the study of aerosol-cloud interactions, one of the main objectives of the synergistic EarthCARE mission. The ATLID L1 signal ratio of the Mie cross-polar signal to the Mie co-polar signal already represents the calibrated depolarization ratio. This is possible because of ATLID’s comprehensive characterization of the cross talk between the three detection channels. The spectral cross-talk correction eliminates the molecular (Rayleigh) contribution to the depolarization ratio, and the polarization cross-talk correction removes the leakage of the Mie co-polar signal into the Mie cross-polar channel and vice versa. Besides extensive characterization efforts prior to launch, the cross-talk assessment continues in space. We show statistical verification using cirrus clouds and liquid stratus clouds. Liquid cloud droplets have a depolarization ratio of 0 (in the single-scattering regime) and can be used to verify the cross-talk correction in the low depolarization ratio regime. However, it might be difficult to detect just the cloud top with a vertical resolution of 100 m. 
From suborbital observations of cirrus clouds, we know that the depolarization ratio of ice crystals is enhanced, with values ranging from 0.3 to 0.6, which can be used to check the calibration in the high depolarization ratio regime. The large variation in the depolarization ratio of cirrus clouds depends on the ice crystal size, which is influenced by the formation mechanism (homogeneous or heterogeneous), the temperature range and other factors. These variations challenge a statistical verification of ATLID’s depolarization ratio in cirrus clouds. Therefore, additional validation efforts with suborbital measurements are needed. Several aircraft equipped with a polarization lidar have flown under the EarthCARE track, such as the HALO of DLR during PERCUSION or NASA’s HSRL-2 during PACE-PAX. The data are currently being analyzed and will be available soon. Furthermore, EarthCARE overflights over ground-based polarization lidars will be used. PollyNET (Baars et al., 2016) offers several stations with calibrated polarization lidars, e.g., at Mindelo (Cabo Verde), Leipzig (Germany) and Dushanbe (Tajikistan). The validation will be performed for low depolarizing targets such as anthropogenic pollution or humid marine aerosol, enhanced depolarization ratios such as mineral dust, and high depolarization ratios such as collocated cirrus cloud observations. Besides the L1 signal ratio, the particle linear depolarization ratio, which is an L2 product from the A-EBD processor (Donovan et al., 2024), will be validated. It is important to note that ALADIN, the lidar onboard Aeolus, was designed to detect just the co-polar signal and was therefore missing a significant part of the signal in the presence of depolarizing particles such as mineral dust. The work is still ongoing, but results can be expected by the Living Planet Symposium. 
References: Baars, H., Kanitz, T., Engelmann, R., Althausen, D., Heese, B., Komppula, M., Preißler, J., Tesche, M., Ansmann, A., Wandinger, U., Lim, J.-H., Ahn, J. Y., Stachlewska, I. S., Amiridis, V., Marinou, E., Seifert, P., Hofer, J., Skupin, A., Schneider, F., Bohlmann, S., Foth, A., Bley, S., Pfüller, A., Giannakaki, E., Lihavainen, H., Viisanen, Y., Hooda, R. K., Pereira, S. N., Bortoli, D., Wagner, F., Mattis, I., Janicka, L., Markowicz, K. M., Achtert, P., Artaxo, P., Pauliquevis, T., Souza, R. A. F., Sharma, V. P., van Zyl, P. G., Beukes, J. P., Sun, J., Rohwer, E. G., Deng, R., Mamouri, R.-E., and Zamorano, F.: An overview of the first decade of PollyNET: an emerging network of automated Raman-polarization lidars for continuous aerosol profiling, Atmos. Chem. Phys., 16, 5111–5137, https://doi.org/10.5194/acp-16-5111-2016, 2016. Choudhury, G. and Tesche, M.: Estimating cloud condensation nuclei concentrations from CALIPSO lidar measurements, Atmos. Meas. Tech., 15, 639–654, https://doi.org/10.5194/amt-15-639-2022, 2022. Donovan, D. P., van Zadelhoff, G.-J., and Wang, P.: The EarthCARE lidar cloud and aerosol profile processor (A-PRO): the A-AER, A-EBD, A-TC, and A-ICE products, Atmos. Meas. Tech., 17, 5301–5340, https://doi.org/10.5194/amt-17-5301-2024, 2024. Mamouri, R.-E. and Ansmann, A.: Potential of polarization lidar to provide profiles of CCN- and INP-relevant aerosol parameters, Atmos. Chem. Phys., 16, 5905–5931, https://doi.org/10.5194/acp-16-5905-2016, 2016. Wandinger, U., Floutsi, A. A., Baars, H., Haarig, M., Ansmann, A., Hünerbein, A., Docter, N., Donovan, D., van Zadelhoff, G.-J., Mason, S., and Cole, J.: HETEAC – the Hybrid End-To-End Aerosol Classification model for EarthCARE, Atmos. Meas. Tech., 16, 2485–2510, https://doi.org/10.5194/amt-16-2485-2023, 2023.
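The statistical verification described in the abstract (liquid stratus tops near zero depolarization, cirrus between 0.3 and 0.6) can be sketched as follows. This is an illustrative outline, not the A-EBD or DISC code; the function names, tolerance, and signal values are hypothetical:

```python
import statistics

# Expected particle linear depolarization ratios at 355 nm for the two
# calibration targets named in the abstract (single-scattering regime).
LIQUID_EXPECTED = 0.0
CIRRUS_RANGE = (0.3, 0.6)

def depol_ratio(cross, co):
    """Particle linear depolarization ratio: Mie cross-polar over co-polar."""
    return cross / co

def check_liquid_calibration(samples, tolerance=0.05):
    """Mean depolarization over liquid stratus tops should sit near zero;
    a larger offset hints at residual polarization cross talk.
    samples: iterable of (cross_signal, co_signal) pairs."""
    mean = statistics.mean(depol_ratio(x, c) for x, c in samples)
    return abs(mean - LIQUID_EXPECTED) <= tolerance, mean

# Illustrative (synthetic) cloud-top signal pairs (cross, co):
ok, mean = check_liquid_calibration([(0.8, 40.0), (1.2, 38.0), (0.9, 41.0)])
print(ok, round(mean, 3))
```

An analogous check against `CIRRUS_RANGE` would exercise the high-depolarization end of the calibration, mirroring the two-regime strategy in the abstract.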
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: The fractal dimension of cloud measured with radar, lidar, and imagery

Authors: Dr Adam Povey, Mr Ansel Ong
Affiliations: NCEO, University of Leicester
Clouds are ubiquitous on Earth, covering about two-thirds of the planet at any given moment. They are a dominant influence on the planetary energy budget, and therefore climate, by reflecting, scattering, and absorbing solar and thermal radiation. The properties of clouds are influenced by the humidity and aerosol available, and can respond quickly to changing conditions. Adjustments in the behaviour of clouds due to human activity are a major source of uncertainty in predictions of future climate, and one part of reducing that uncertainty is a better understanding of cloud morphology. As clouds are smaller than the typical pixel in a climate model, they must be represented by approximations rather than explicit physics. Climate models therefore make various assumptions about the shape, size, and evolution of clouds. These should be validated against real clouds, but imaging the full volume of a cloud is a distinct challenge due to their size. The fractal dimension describes the relationship between the area and perimeter of a two-dimensional shape. It has been estimated for clouds from visible/infrared imagery from a number of satellites, and the results have been used to evaluate the fidelity of clouds within cloud-resolving models. Those studies indicated that models were largely behaving realistically, but they only assessed the horizontal spread of multi-kilometre-scale clouds. This presentation outlines the first estimation of the fractal dimension of clouds within vertically resolved observations, through use of a synergistic lidar/radar feature mask at a number of ground sites. This high-resolution product better captured small clouds than prior studies, finding an index consistent with previous estimates but systematically smaller; however, it struggles with mesoscale systems. To overcome that issue, the method will be applied to the EarthCARE feature mask to quantify the cloud fractal dimension of larger systems and confirm whether self-similar geometry is maintained during vertical growth. 
(That is not guaranteed given that different processes drive the spread of cloud horizontally and vertically.) This study was not possible before the launch of EarthCARE due to ambiguities in the collocation of A-Train observations.
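The perimeter-area fractal dimension underlying this study can be sketched as follows. This is the standard estimator for D in the relation P proportional to A^(D/2), not the authors' exact pipeline, and the helper name is hypothetical:

```python
import math

def fractal_dimension(areas, perimeters):
    """Perimeter-area fractal dimension: fit ln P = (D/2) ln A + c by
    least squares across a population of shapes and return D.
    Smooth (e.g. circular) boundaries give D = 1; more convoluted
    boundaries give larger D."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(p) for p in perimeters]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 2.0 * slope

# Sanity check with circles of different sizes: P = 2*pi*r, A = pi*r^2,
# so the fitted dimension should be exactly 1.
radii = [1.0, 2.0, 5.0, 10.0]
print(fractal_dimension([math.pi * r * r for r in radii],
                        [2.0 * math.pi * r for r in radii]))
```

Applied to cloud masks, the areas and perimeters come from connected cloudy regions in the feature mask; classic imagery studies report cloud values well above 1, reflecting convoluted boundaries.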
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: The ESA EarthCARE Instrument Calibration and Monitoring Facility

Authors: Fay Emmett, Sam Lavender, Silvia Pardo Martinez, Sergio Perelli, Fabrizio Tringali, Federica Selva, Timon Hummel
Affiliations: Telespazio UK, ESA ESRIN, NV5, Exprivia
The main objective of the Earth Cloud, Aerosol and Radiation Explorer (EarthCARE), a joint European Space Agency (ESA) and Japan Aerospace Exploration Agency (JAXA) mission that was successfully launched in May 2024, is to take measurements of global profiles of aerosols, clouds, precipitation and associated radiative fluxes that can be used for their reliable representation in weather forecasting and climate models. These measurements are provided by the mission’s four instruments: the Atmospheric Lidar (ATLID, ESA), the Cloud Profiling Radar (CPR, JAXA), the Multi-Spectral Imager (MSI, ESA) and the Broad-Band Radiometer (BBR, ESA). At the end of the mission's commissioning phase, most of the activities related to the mission, excluding flight operations, were handed over to the EarthCARE Data, Innovation and Science Cluster (DISC) project for the mission’s operational phase. The ESA DISCs are user data quality frameworks tailored to the innovative characteristics of Earth Explorers such as EarthCARE (ESA’s sixth Earth Explorer), grouping product, algorithm and instrument expertise in a single cluster. The EarthCARE Instrument Calibration and Monitoring Facility (ICMF), a core and complex component of the EarthCARE Payload Data Ground Segment (PDGS), is responsible for providing monitoring and offline calibration activities that are not fully handled within the nominal ESA Level 1 processing chains (the real-time chain). The ICMF, whose maintenance and evolution are provided by the EarthCARE DISC Task 3 team, is composed of 16 calibration processors for ATLID (e.g. fine spectral calibration, science dark current calibration, and optional Mie co-polar channel absolute calibration), 5 calibration processors for MSI (e.g. VNS (visible/NIR channel) sensitivity and correction factors, TIR (thermal channel) offset monitoring and correction), and 6 calibration processors for BBR (e.g. sun calibration and blackbody temperature inversion). 
The operation of the ICMF allows for the update of relevant on-ground calibration parameters, via the generation of auxiliary data file updates that are supplied as inputs to the Level 0 / Level 1 processing chains, and for the update of relevant on-board calibration parameters, via the generation of configuration table items to be uploaded in-flight to the instrument. This presentation will explore the ICMF and its calibration processors in more detail.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Validation of EarthCARE MSI L1 data using the MSI forward simulator tool

Authors: Nils Madenach, Sebastian Bley, Nicole Docter, Anja Hünerbein, Rene Preusker
Affiliations: Leibniz Institute for Tropospheric Research, Freie Universität Berlin
ESA's and JAXA's Earth Explorer mission EarthCARE was launched in May 2024. For the validation of the L1 spectral radiances and brightness temperatures of EarthCARE's Multi-Spectral Imager (MSI), the MSI-simulator tool has been developed. As a Python-based forward simulator, the tool is designed to be flexible, easy to use and lightweight in terms of input data. As input, it uses information on the solar-satellite geometry, the surface characterization, and the vertically resolved atmospheric state, such as temperature, pressure, gases and particles (liquid clouds, ice clouds and aerosols). The data can come from ground, airborne or satellite measurements or from model simulations. We will present the MSI-tool and a case study for MSI validation using different data from the PERCUSION (Persistent EarthCARE underflight studies of the ITCZ and organized convection) campaign that took place in August and September 2024 at Cape Verde. Furthermore, we will cross-validate the results with MTG-FCI data and forward simulations using the FCI forward simulator tool.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Evaluation of EarthCARE Aerosol Extinction and Backscatter Profiles from Raman Lidars.

Authors: Larisa Sogacheva
Affiliations: FMI
The Earth Clouds, Aerosols, and Radiation Explorer (EarthCARE) satellite carries four advanced instruments: the Atmospheric Lidar (ATLID), the Cloud Profiling Radar (CPR), the Multispectral Imager (MSI), and the Broadband Radiometer (BBR). Together, these instruments allow the exploration of the role that clouds and aerosols play in Earth’s climate system. The aerosol vertical distribution is of crucial importance in radiative transfer calculations and in the study of aerosol-cloud interaction. Raman lidars provide detailed insights into aerosol extinction, backscatter coefficients, and depolarization ratio, enabling the profiling of aerosol optical properties at various altitudes. Raman lidars operate globally. International networks like the European Aerosol Research Lidar Network (EARLINET) and the Portable Lidar Network (PollyNET) are built on lidar measurements. These networks play a crucial role in aerosol research and satellite data validation. EARLINET (https://www.earlinet.org/index.php?id=earlinet_homepage) includes over 30 European sites, providing continuous and campaign-based lidar measurements. PollyNET (https://polly.tropos.de), which focuses on multiwavelength Raman-polarization lidar observations for aerosol characterization, operates several stations globally, including in Europe (Germany, Finland, Portugal, Poland, and Greece). In addition, ceilometers and other ground-based remote sensing tools can be employed to observe the vertical aerosol distribution. Those measurements contribute to networks like the Aerosols, Clouds, and Trace gases Research Infrastructure (ACTRIS, https://www.actris.eu) to enhance understanding of aerosol properties and their impact on air quality and climate systems. In our study, we record and analyse ground-based lidar data from the Vehmasmäki station in Eastern Finland (62°44′ N, 27°33′ E). 
At Vehmasmäki, aerosol vertical profile measurements are conducted using a range of lidar systems and instruments (https://egusphere.copernicus.org/preprints/2024/egusphere-2024-3032/egusphere-2024-3032.pdf). The site is equipped with a multi-wavelength PollyXT lidar, a Vaisala CL61 ceilometer, a HALO Photonics Streamline Pro Doppler lidar, and various in situ instruments for the characterization of aerosol particles up to 10 μm, as well as meteorological quantities from the station and a 318 m tall mast. In addition, a holographic imaging instrument (ICEMET) was installed on site in 2021, allowing the shape and size distribution of 5–200 μm aerosol particles to be determined. For each EarthCARE overpass episode, we acquire the corresponding EarthCARE Level 1.5 total attenuated backscatter profile and compare it with our ground-based lidar data. The results will be reported.
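The satellite-versus-ground comparison described above can be sketched numerically. This is a minimal, generic interpolation-plus-statistics pattern: the altitude grids and Gaussian backscatter profiles below are invented placeholders, not Vehmasmäki or EarthCARE data.

```python
import numpy as np

def compare_profiles(alt_ground, beta_ground, alt_sat, beta_sat):
    """Interpolate a satellite backscatter profile onto the ground lidar's
    altitude grid, then return mean bias and Pearson correlation."""
    beta_sat_i = np.interp(alt_ground, alt_sat, beta_sat)
    bias = float(np.mean(beta_sat_i - beta_ground))
    r = float(np.corrcoef(beta_sat_i, beta_ground)[0, 1])
    return bias, r

# Synthetic profiles: an aerosol layer centred near 2 km altitude
alt_g = np.linspace(0.1, 10.0, 100)                    # km, ground grid
beta_g = 1e-6 * np.exp(-((alt_g - 2.0) / 0.8) ** 2)    # sr^-1 m^-1
alt_s = np.linspace(0.0, 12.0, 60)                     # coarser satellite grid
beta_s = 1e-6 * np.exp(-((alt_s - 2.0) / 0.8) ** 2)

bias, r = compare_profiles(alt_g, beta_g, alt_s, beta_s)
print(bias, r)
```

For identical underlying profiles the bias is near zero and the correlation near one; real comparisons would additionally handle averaging over the overpass footprint and quality flags.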
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Using EarthCARE to Disentangle Cloud Adjustments to Aerosol Perturbations

Authors: Johanna Mayer, Blanka Piskala Gvozdikova, Arthur Avenas, Edward Malina, Thorsten Fehr, Daniele Gasbarra
Affiliations: ESA, Shamrock Space Services c/o ESA-ESRIN
Aerosol-cloud interactions (ACIs) have a significant impact on the Earth's radiative budget but, despite many years of research, substantial uncertainties remain in quantifying their effects. The recently launched EarthCARE satellite provides a unique opportunity to address these uncertainties by providing simultaneous information on aerosols, clouds and radiation. In this work, we explore the capabilities of EarthCARE to improve our understanding of ACI, with a focus on the cloud adjustments of liquid clouds to aerosol perturbations. The synergy of the atmospheric lidar (ATLID), cloud-profiling radar (CPR), and multi-spectral imager (MSI) aboard EarthCARE allows measurements of vertically resolved cloud properties such as droplet number concentrations, droplet radii, liquid water concentrations, and precipitation rates, as well as vertical profiles of aerosol properties. We propose a causal approach that combines physical knowledge in the form of a causal graph with the detailed observations from EarthCARE. This approach aims to disentangle the various (counteracting) physical processes involved in cloud adjustments, in order to improve our physical understanding and ultimately the representation of ACIs in climate models. We present first results that highlight the potential of EarthCARE data to advance our knowledge of cloud adjustments. As this project is still in progress, we are eager to discuss our methods and results with the EarthCARE and modeling communities to ensure that our research meets their needs and effectively contributes to the understanding of ACI.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Monitoring and assimilating EarthCARE ATLID aerosol products in ECMWF’s IFS-COMPO

Authors: Will McLean, Mark Fielding, Michael Rennie, Angela Benedetti, Marta Janisková, Shannon Mason, Robin Hogan, Marijana Crepulja, Luke Jones
Affiliations: ECMWF
Accurate air quality forecasts are vital for policy makers and for issuing public health warnings, but can be difficult to validate. New observations from EarthCARE’s Atmospheric Lidar (ATLID) could represent a step change in evaluating the vertical distribution of aerosols in models, thereby improving the accuracy of air quality forecasts and analyses. ATLID is a high spectral resolution lidar operating at 355 nm, optimised for the remote sensing of aerosol and thin cloud layers; it provides the depolarisation ratio along with the backscatter and extinction coefficients, allowing for differentiation between aerosol types. The European Centre for Medium-Range Weather Forecasts (ECMWF) implements the Copernicus Atmosphere Monitoring Service (CAMS) on behalf of the European Union. CAMS produces global forecasts for atmospheric composition twice a day, using a state-of-the-art global data assimilation and modelling framework, the Integrated Forecasting System (IFS), set up in atmospheric composition mode (IFS-COMPO). We currently assimilate multiple satellite-derived aerosol optical depth (AOD) retrieval products. AOD is a column-integrated quantity; as such, it does not provide a constraint on the altitude of an aerosol layer, unlike the height-resolved global aerosol profiles from ATLID. This presentation will give an overview of the work undertaken to fully exploit ATLID’s novel high-quality aerosol products in IFS-COMPO, building on lessons learnt from ECMWF’s role in using the atmospheric composition products from ESA’s Aeolus mission. Results from monitoring and assimilation carried out as part of the EarthCARE Data, Innovation, and Science Cluster (DISC) activities are presented, continuing the precursor work of the PEARL (Preparations for EarthCARE Assimilation - Radar and Lidar) aerosol activities in preparing IFS-COMPO to use the ATLID aerosol products.
Comparisons of the aerosol fields in ECMWF’s latest operational IFS-COMPO cycle with the aerosol products derived from ATLID measurements will be shown. The first results from assimilation of the lidar aerosol profiles will be presented, along with verification of both the ATLID products and the model fields against ground-based measurements.
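The contrast drawn above between column-integrated AOD and height-resolved profiles can be illustrated with a short numerical sketch: AOD is the vertical integral of the aerosol extinction coefficient, so very different layer altitudes can yield nearly identical AOD values. The grid and extinction values below are invented for illustration.

```python
import numpy as np

def aod_from_extinction(z_m, alpha_m):
    """Trapezoidal integral of extinction alpha(z) [m^-1] over height z [m]."""
    dz = np.diff(z_m)
    return float(np.sum(0.5 * (alpha_m[:-1] + alpha_m[1:]) * dz))

z = np.linspace(0.0, 10_000.0, 101)                     # 0-10 km, 100 m steps

# Two very different vertical distributions with comparable total AOD:
low_layer = np.where(z < 1_000.0, 5e-4, 0.0)            # extinction near surface
high_layer = np.where((z > 4_000.0) & (z < 5_000.0), 5e-4, 0.0)  # elevated layer

print(aod_from_extinction(z, low_layer), aod_from_extinction(z, high_layer))
```

Both columns integrate to roughly the same AOD, which is why a column product alone cannot distinguish a surface haze from an elevated plume, whereas a lidar profile can.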
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Overview of Japanese Validation Activities for JAXA EarthCARE Products

Authors: Toshiyuki Tanaka, Dr. Takuji Kubota
Affiliations: JAXA (Japan Aerospace Exploration Agency)
The EarthCARE satellite (Japanese name: Hakuryu) is a collaborative mission between Japan and Europe, successfully launched on May 28, 2024 (UTC), aboard the SpaceX Falcon-9 rocket. The satellite is equipped with advanced instruments, including the Cloud Profiling Radar (CPR) developed by Japan, and three European instruments: the Atmospheric Lidar (ATLID), the Multispectral Imager (MSI), and the Broadband Radiometer (BBR). EarthCARE aims to: (1) Enhance scientific understanding of climate change mechanisms related to clouds and aerosols through the analysis of EarthCARE observation data. (2) Support climate adaptation strategies by incorporating these mechanisms into climate models. (3) Facilitate operational applications in numerical weather prediction and atmospheric environmental monitoring through reliable EarthCARE products. The EarthCARE JAXA Level 2 (L2) products are categorized into two types: L2a products, derived from a single sensor, and L2b products, created through synergistic use of multiple sensors. L2a products and CPR-ATLID synergistic L2b products are scheduled for release nine months after launch (March 2025), while other L2b products will be released 18 months after launch. Validation results will also be published alongside the products as quality information. Validation activities are conducted by the JAXA EarthCARE Validation Team, consisting of experts from universities and research institutions, primarily Principal Investigators (PIs) of JAXA’s Research Announcement on the Earth Observations. The validation employs diverse datasets, including ground-based network observations, satellite data, and aircraft campaigns. For early validation, emphasis is placed on sample size, leveraging satellite and ground-based network observations. Additionally, intensive validation efforts are being conducted using specialized instruments, such as radars, lidars and other instruments installed at the NICT headquarters in Koganei, Japan. 
International collaboration is a cornerstone of the EarthCARE validation activity, involving a key partnership with the European Space Agency (ESA) for sharing validation-related information and promoting exchanges between European and Japanese researchers through jointly organized international workshops. We also cooperate with the Deutsches Zentrum für Luft- und Raumfahrt (DLR) and the U.S. National Oceanic and Atmospheric Administration (NOAA), sharing ground-based and aircraft campaign datasets as well as scientific expertise to support robust validation. This presentation will highlight the progress and preliminary results of validation activities for JAXA Level 2 products.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: The EarthCARE CPR L2A C-PRO data product: Updates and performance evaluation

Authors: Pavlos Kollias, Mr. Bernat Treserras, Alessandro Battaglia
Affiliations: McGill University, Stony Brook University, Politecnico di Torino
The joint European Space Agency (ESA) and Japan Aerospace Exploration Agency (JAXA) Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) satellite mission was successfully launched on May 28, 2024. The EarthCARE mission features a 94 GHz Cloud Profiling Radar (CPR) that offers high sensitivity and a narrow footprint, and is the first to provide Doppler velocity measurements from space. The launch of the EarthCARE CPR came just a year after the decommissioning of NASA’s CloudSat CPR, which operated for 17 years (2006 to 2023) and provided the first-ever detailed view of clouds and precipitation. The CloudSat observations and data products offer a great heritage for developing the EarthCARE CPR algorithms. However, the EarthCARE CPR has new measurement capabilities, such as Doppler velocity, that are subject to many sources of uncertainty and biases, and a new set of challenges, including the very frequent presence of mirror images. Here we describe the operational status and post-launch updates of the Level 2, single-instrument (L2A) processing and quality control of the raw CPR reflectivity and Doppler velocity observations (C-PRO). The raw CPR observations and auxiliary information are used as input to three L2A algorithms: (1) C-APC: Antenna Pointing Characterization; (2) C-FMR: CPR feature mask and reflectivity; (3) C-CD: Corrected CPR Doppler Measurements. These algorithms apply quality control and corrections to the CPR primary measurements and derive important geophysical variables, such as hydrometeor locations and best estimates of particle sedimentation fall velocities. The C-PRO processor operates nominally, producing outputs without interruptions. Several software bugs have been resolved, and updates to some of the C-PRO algorithms became necessary, primarily due to issues related to the C-NOM L1b input product.
We will describe the improvements implemented to address several challenges including the CPR receiver noise floor estimation, the mitigation of the second trip and mirror images, the detection of the melting layer, the CPR antenna pointing correction and the Doppler velocity unfolding.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Evaluation of the EarthCARE BBR solar and thermal radiative fluxes

Authors: Carla Salas Molar, Almudena Velázquez Blázquez, Carlos Domenech, Edward Baudrez, Nicolas Clerbaux, Irene Hidalgo, Jason Cole
Affiliations: GMV, Royal Meteorological Institute of Belgium, Environment and Climate Change Canada (ECCC)
Launched in May 2024, the Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) satellite mission is a joint venture between the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA), conceived to advance our understanding of the role that clouds and aerosols play in reflecting incident solar radiation back into space and trapping infrared radiation emitted from Earth’s surface. The satellite’s payload includes four instruments designed to synergistically retrieve vertical profiles of clouds and aerosols, along with atmospheric radiation data, in order to determine atmospheric heating rates and top-of-atmosphere radiances and fluxes and thereby improve numerical atmospheric models. The Broad-Band Radiometer (BBR) instrument on board EarthCARE provides accurate top-of-atmosphere (TOA) outgoing solar and thermal radiances obtained in an along-track configuration at three fixed viewing directions (nadir, fore and aft). The BBR’s multi-viewing capability is key to capturing the Earth's radiation field from multiple angles, providing a comprehensive view of both reflected and emitted energy. The BMA-FLX operational EarthCARE L2 product focuses on TOA radiative fluxes, based on a radiance-to-flux conversion algorithm fed mainly by unfiltered broad-band radiances from the BBR instrument (provided by the BM-RAD product), along with auxiliary data from EarthCARE’s L2 cloud products and modelled geophysical databases. The Multi-Spectral Imager (MSI) and Atmospheric Lidar (ATLID) on board the satellite are the two instruments that support cloud identification. The conversion algorithm models the angular distribution of solar radiation reflected by the Earth-atmosphere system, and the thermal radiation emitted by it, returning flux estimates to be used for the radiative closure assessment of the mission in the ACMB-DF product. Different methods are employed for the solar and thermal BBR flux retrieval models.
Models for shortwave radiances are created for different scene types and constructed from Clouds and the Earth’s Radiant Energy System (CERES) data using a feed-forward back-propagation artificial neural network (ANN) technique. Longwave models are based on correlations between BBR radiance field anisotropy and spectral information provided by MSI narrow-band radiances. Both retrieval algorithms exploit the multi-viewing capability of the BBR, co-registering radiances and providing flux estimates for each view, ensuring their integrity before being combined into the optimal flux of the observed target. The reference height where the three BBR measurements are co-registered corresponds to the height where most reflection or emission takes place and depends on the spectral regime. Longwave observations are co-registered at the cloud top height, but the most radiatively significant height for shortwave radiances is highly influenced by cloud properties and therefore determined by minimizing flux differences between nadir, fore and aft fluxes. The estimated fluxes are scene-dependent, meaning that they vary with surface, cloud properties and angular geometry. Since the outgoing TOA flux is only dependent on the solar geometry and the radiometric properties of the atmospheric-surface domain, the fluxes obtained from the three BBR views should ideally converge, but in operational scenarios discrepancies arise due to scene anisotropy, the limitations in the Angular Distribution Models (ADMs) and the co-registration of the three views. To address this, the co-registration at the reference level aligns radiances and fluxes from the oblique views with the nadir observation. The study presented here shows an analysis of the performance and optimization of the BMA-FLX processor during the first year of the mission. 
Recognizing that surface type and cloud properties strongly influence the flux retrieval accuracy, our study focuses on assessing cloud parallax effects and the performance of the ADMs. For cloudy scenes, a surface-specific cloud classification has been conducted using the International Satellite Cloud Climatology Project (ISCCP) scheme, which categorizes clouds by cloud-top pressure and optical thickness. This approach to cloud categorization, tailored to specific surface types and using cloud data from the MSI L2 processor (M-CLD), divides cloud types (cirrus, cirrostratus, deep convection, altocumulus, altostratus, nimbostratus, cumulus, stratocumulus and stratus) across nine surface categories (water bodies, forests, savannahs, grasslands, shrubs, deserts, permanent snow, sea ice and fresh snow). By analyzing fore, nadir and aft flux retrievals across these combinations, we evaluate the co-registration performance and assess the sensitivity of ADMs to diverse surface and atmospheric conditions. For each cloud-surface group, we compare BM-RAD radiances with co-registered radiances from BMA-FLX, focusing on the displacement of oblique views relative to nadir and pinpointing scenarios where the flux retrieval diverges from expectations. The availability of commissioning data has been crucial in optimizing the scene definitions used by the ADMs and validating the co-registration methodology employed by the BMA-FLX processor. This study seeks to validate the readiness of the BMA-FLX processor for release to the scientific community, based on a thorough evaluation of data collected during the mission's first year.
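The ISCCP categorization named above maps each cloudy pixel onto a 3×3 grid of cloud-top pressure versus optical thickness. A minimal sketch follows, using the standard ISCCP thresholds (440/680 hPa and optical thicknesses 3.6/23, which are not stated in the abstract itself) and the nine cloud type names it lists.

```python
def isccp_cloud_type(ctp_hpa: float, tau: float) -> str:
    """Classify a cloudy pixel with the standard ISCCP scheme:
    rows by cloud-top pressure (high < 440 hPa, mid 440-680, low > 680),
    columns by optical thickness (thin < 3.6, medium 3.6-23, thick > 23)."""
    row = 0 if ctp_hpa < 440 else (1 if ctp_hpa <= 680 else 2)
    col = 0 if tau < 3.6 else (1 if tau <= 23 else 2)
    names = [
        ["cirrus", "cirrostratus", "deep convection"],
        ["altocumulus", "altostratus", "nimbostratus"],
        ["cumulus", "stratocumulus", "stratus"],
    ]
    return names[row][col]

print(isccp_cloud_type(300.0, 30.0))  # high, thick cloud -> "deep convection"
print(isccp_cloud_type(800.0, 2.0))   # low, thin cloud  -> "cumulus"
```

In the study described above, each such cloud type would additionally be stratified by the nine surface categories before evaluating fore/nadir/aft flux consistency.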
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Radiative Effects of Dust Aerosols and Water Vapor During the ORCESTRA Campaign Over the Atlantic

Authors: Dimitra Kouklaki, Alexandra Tsekeri, Anna Gialitaki, Bernhard Mayer, Claudia Emde, Silke Gross, Eleni Marinou, Vassilis Amiridis, Stelios Kazadzis
Affiliations: Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS), National Observatory Of Athens, Department of Geology and Geoenvironment, National and Kapodistrian University of Athens, Department of Physics and Astronomy, University of Leicester, Meteorological Institute, Ludwig-Maximilians-University, Institut für Physik der Atmosphäre, Deutsches Zentrum für Luft- und Raumfahrt (DLR), Physikalisch-Meteorologisches Observatorium Davos/World Radiation Center
Aerosols play a significant role in attenuating solar radiation and influencing atmospheric thermodynamic stability, particularly over heavily affected regions like the Atlantic, underscoring their fundamental impact on Earth's energy budget and climate through radiative heating or cooling (IPCC, 2022). However, quantifying these effects is challenging due to the diverse and complex nature of aerosols. For desert dust particles, the difficulty lies in defining their optical properties and in accurately monitoring their extensive atmospheric distribution. This study aims to assess and improve the understanding of the radiative effects of dust aerosols and water vapor (WV), as well as their impact on atmospheric heating rates, by adopting non-spherical particle shapes and their intrinsic microphysical and optical properties during severe dust events. To achieve this, airborne and satellite observations are employed, along with Radiative Transfer (RT) modeling. The study utilizes data from the ORCESTRA/PERCUSION campaign (https://orcestra-campaign.org), conducted across the Cabo Verde Islands, Barbados and the Atlantic Ocean in August and September 2024, a period that coincides with maximum trans-Atlantic dust transport. The airborne measurements were conducted with the German research aircraft HALO (High Altitude and LOng Range Research Aircraft) and provided, amongst others, extensive lidar measurements, radiances at the aircraft altitude, and meteorological profiles from dropsondes. The RT calculations also took the measured WV concentrations into account, for the investigation of the interaction of dust and WV with solar radiation. The simulated radiances at different altitudes were compared with airborne solar radiation measurements performed by a sophisticated spectrometer, specMACS (the Munich Aerosol Cloud Scanner), onboard HALO, as well as with radiation measurements at the top of the atmosphere from the Broad-Band Radiometer (BBR) of the ESA EarthCARE mission.
The findings highlight the necessity of detailed atmospheric field measurements in validating satellite-based products and optimizing climate models.
Acknowledgements: This research was financially supported by the PANGEA4CalVal project (Grant Agreement 101079201) funded by the European Union, the CERTAINTY project (Grant Agreement 101137680) funded by the Horizon Europe programme, and the AIRSENSE project, which is part of the Atmosphere Science Cluster of ESA's EO Science for Society programme. DK, AT and SK would like to acknowledge COST Action HARMONIA (International network for harmonization of atmospheric aerosol retrievals from ground-based photometers), CA21119, supported by COST (European Cooperation in Science and Technology).
References: Intergovernmental Panel on Climate Change (IPCC) (2022), Land–climate interactions. In: Climate Change and Land: IPCC Special Report on Climate Change, Desertification, Land Degradation, Sustainable Land Management, Food Security, and Greenhouse Gas Fluxes in Terrestrial Ecosystems. Cambridge University Press, 131–248.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: The MTG/FCI Optimal Cloud Analysis product as complementary information for the analysis of EarthCARE cloud observations

Authors: Loredana Spezzi, Alessio Bozzo, Phil Watts, John Jackson, Andre Belo do Couto, Johan Strandgren, Dr. Bertrand
Affiliations: EUMETSAT
The simultaneous availability of EarthCARE and the Flexible Combined Imager (FCI) on the Meteosat Third Generation (MTG) satellite presents an opportunity to enhance cloud products by combining observations from both active and passive sensors. In this contribution, we present the MTG/FCI Optimal Cloud Analysis (OCA) products, soon to be released by EUMETSAT. We introduce the OCA algorithm, the main features of the FCI-OCA products (10-minute repeat cycle, 2 km spatial resolution, ability to retrieve cloud top height and microphysics for up to two cloud layers) and the validation of these products. This validation is currently based on ACTRIS/Cloudnet ground-based measurements and will be extended to EarthCARE L2A and L2B cloud products as soon as they become available. We aim to highlight the complementary nature of EarthCARE and FCI cloud observations over Europe and Africa (extendable to other areas using analogous GEO sensors). On the one hand, EarthCARE cloud profiling and optimized measurements of droplet and ice crystal sizes offer valuable data for validating passive imager cloud products and improving the cloud model assumed in their retrievals. On the other hand, EarthCARE provides snapshots of atmospheric conditions with limited spatial and temporal coverage, and numerous potential applications would benefit from simultaneous cloud retrievals from GEO imagers to enable broader analysis of the temporal evolution and life cycle of clouds and their radiative impacts.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Ice cloud microphysical balance in cirrus clouds captured by high-resolution climate simulations and Doppler radar observations

Authors: Tatsuya Seiki, Dr. Akira Noda, Dr. Yuichiro Hagihara, Dr. Hiroaki Horie
Affiliations: Japan Agency For Marine-earth Science And Technology, National Institute of Information and Communications Technology
Cloud microphysical parameters have been estimated based on observations conducted within the temperature range of 0°C to -30°C, and in lower-temperature regions, extrapolated values (or globally fixed constants) have been used. It is fundamentally difficult to directly observe the growth of ice cloud particles under extremely low-temperature conditions and, consequently, to constrain the uncertainties in the parameters included in the theoretical formulations of cloud microphysics. Therefore, these estimates in cloud microphysics schemes generally lack physical reliability in cirrus simulations. The cloud microphysical processes occurring within cirrus clouds, which are widely distributed in the upper troposphere, are dominated by three mechanisms: collision, sublimation, and gravitational settling. The number of uncertain parameters in the theoretical modeling of these processes is relatively small. Therefore, the aim of this study is to investigate the balance of particle growth across a two-dimensional parameter space and constrain the uncertain parameters by utilizing simultaneous observations of radar echoes and Doppler velocities. To validate the theoretical framework of cloud microphysics, ground-based observations using HG-SPIDER are employed. Furthermore, the separation of sublimation and collision effects within the two-dimensional parameter space is achieved using climate models. For the climate model analysis, a budget analysis of radar reflectivity is conducted.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Advancing Aerosol-Cloud-Lightning Interactions: The Storm Data Cube

Authors: Blanka Piskala Gvozdikova, Johanna Mayer, Arthur Avenas, Edward Malina, Thorsten Fehr, Daniele Gasbarra
Affiliations: European Space Agency (ESA) - ESRIN, European Space Agency (ESA) - ESTEC, Shamrock Space Services c/o ESA-ESRIN
Lightning activity is generally considered to be enhanced by increased aerosol concentrations, yet the simultaneous impacts of aerosols and thermodynamic processes complicate this response. Furthermore, limited observational data restricts our ability to capture detailed in-cloud microphysics, challenging direct studies of how aerosols influence microphysical and electrical processes in clouds. In 2024, ESA launched two missions that have the potential to enhance our ability to explore these relationships: the EarthCARE mission, equipped with advanced lidar and radar systems for detailed aerosol and cloud profiling, and the geostationary MTG-I1 (Meteosat Third Generation - Imager 1), which includes the Lightning Imager (LI). The LI instrument offers novel capabilities to observe lightning from space over Europe and Africa, complementing similar instruments already covering the Americas, the Pacific and Atlantic Oceans, and China. In addition, both EarthCARE and MTG-I host multispectral imagers that provide additional contextual information on clouds and aerosols. Building on these observational advancements, this project develops a comprehensive data cube of deep convective events, such as thunderstorms and tropical cyclones. A data cube is a multidimensional array of values, often used to represent different variables in a consistent format that facilitates analysis. Our storm data cube merges information on aerosol retrievals, along with cloud microphysics, satellite-based lightning observations, and relevant ECMWF model data, creating a robust foundation for studying aerosol-cloud-lightning interactions. Initial case studies of specific storms further illustrate how different data layers within the cube can reveal these interactions. By making the data cube accessible to the scientific community, we aim to advance research in the cloud microphysics and electrification domain.
As this project evolves, we are committed to engaging with potential users to understand their needs and refine the data cube to support a wide range of research applications.
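The data-cube idea described above (a multidimensional array holding co-gridded variables) can be sketched in a few lines. The variable names, grid sizes, and synthetic fields below are invented for illustration and do not reflect the project's actual layout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical axes: (variable, time, lat, lon)
variables = ["aod_355nm", "cloud_top_height_km", "flash_rate_per_min"]
n_time, n_lat, n_lon = 4, 10, 12
cube = np.full((len(variables), n_time, n_lat, n_lon), np.nan)

def layer(name, t):
    """Return the 2-D (lat x lon) field for one variable at one time step."""
    return cube[variables.index(name), t]

# Fill one time step with synthetic co-gridded fields
cube[variables.index("aod_355nm"), 0] = rng.uniform(0.05, 0.8, (n_lat, n_lon))
cube[variables.index("flash_rate_per_min"), 0] = rng.poisson(2.0, (n_lat, n_lon))

# A typical data-cube query: mean flash rate in high- vs low-aerosol pixels
aod = layer("aod_355nm", 0)
flashes = layer("flash_rate_per_min", 0)
high = flashes[aod > 0.4].mean()
low = flashes[aod <= 0.4].mean()
print(cube.shape, float(high), float(low))
```

Because every variable shares the same grid, cross-variable masking and aggregation like this reduce to simple array operations; libraries such as xarray extend the same pattern with labelled dimensions and coordinates.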
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: EarthCARE PDGS status and lessons learned after one year of operations

Authors: Christophe Caspar, Beniamino Abis, Stefano April, Paola D'Aulerio, Eliana Barbera, Lorenzo Di Ciolo, Alessio Farina, Carlo Giarizzo, Timon Hummel, Alessandro Piro, Riccardo Ricelli, Jean-Michel Rosaz, Antonio Russo, Gertrud Waich, Stefania Ciccodemarco, Tullio Crisafulli, Fabio Padula
Affiliations: ESA/ESRIN, Starion Group, Serco Italia, Exprivia
The EarthCARE (Earth Clouds, Aerosol and Radiation Explorer) mission, launched in May 2024, hosts four instruments (three from ESA and one provided by JAXA) to improve our understanding of the role that clouds and aerosols play in regulating Earth’s climate. The Payload Data Ground Segment (PDGS) is the component of the overall EarthCARE Ground Segment in charge of receiving instrument source packets via X-band, processing them to different levels, and disseminating the resulting instrument-specific and synergistic products to users just a few hours after sensing. The PDGS is also in charge of the routine calibration and monitoring of the three ESA instruments, product quality control, and the planning of payload operations. While the EarthCARE PDGS inherits from earlier developments, some aspects specific to this mission have led to new concepts. In particular, the synergistic nature of the mission results in a complex processing model involving about 30 different processors, which has led to the definition of a formal model of the processing chain allowing for a streamlined update process for processor versions and the production tree. The presence of a Japanese instrument on board also imposes tight dependencies between the ESA and JAXA Ground Segments in terms of processing and payload planning. After a brief introduction of the EarthCARE PDGS functions and architecture, this talk will share the experience gained and lessons learned after six months of commissioning of this challenging mission.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Continuous Validation of EarthCARE’s First Year Through Monitoring within ECMWF’s Data Assimilation System

Authors: Mark Fielding, Shannon Mason, Professor Robin Hogan, Will McLean, Michael Rennie, Marta Janisková, Angela Benedetti, Maijana Crepulja, Mohamed
Affiliations: ECMWF
Calibration and validation are critical during the commissioning phase of any scientific satellite mission. For EarthCARE, a groundbreaking multi-instrument satellite, this phase is particularly vital to ensure its instruments achieve their full potential, delivering innovative observations of Earth’s clouds, aerosols, and their radiative impacts. A unique feature of EarthCARE is its capacity to provide spaceborne radar and lidar measurements in near-real time, unlocking opportunities for real-time monitoring and data assimilation. In this presentation, we will provide a comprehensive overview of the near-real-time monitoring of EarthCARE’s Cloud Profiling Radar (CPR) and Atmospheric Lidar (ATLID) during both the commissioning and operational exploitation phases. The health and stability of these instruments will be assessed by comparing forward-modelled observations from the European Centre for Medium-Range Weather Forecasts (ECMWF) data assimilation system with the actual measurements. This method is particularly effective for identifying drifts or changes in instrument calibration over time, ensuring data reliability for scientific and operational use. Comparisons will be made with observations from CloudSat and CALIPSO, using the model as a stepping-stone for a useful sanity check on the relative calibration of the equivalent EarthCARE instruments. A bespoke bin-by-bin monitoring approach for ATLID will be highlighted, showcasing how anomalies such as ‘hot pixels’ (damage to the lidar detector from high-energy particles) can be detected almost instantaneously and corrected with a high degree of accuracy. This capability ensures data integrity throughout the mission and could be useful input for reprocessing of level 2 datasets. The value of continuous monitoring extends beyond routine diagnostics, as it provides a rich temporal context for more intensive calibration and validation campaigns, which are often constrained by specific locations or time periods.
By bridging gaps between periodic validation efforts, this continuous stream of quality assessments ensures the long-term reliability and scientific utility of EarthCARE’s observations, supporting both research and operational applications.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: B.03.02 - POSTER - Monitoring and measuring climate adaptation using earth observation

The Intergovernmental Panel on Climate Change (IPCC) describes adaptation as the process of adjustment to actual or expected climate and its effects, in order to moderate harm or exploit beneficial opportunities. With over 1 degree Celsius of global warming already experienced, adapting to current and future climate change is now a far greater focus for all levels of policymaking than ever before. Yet the IPCC Sixth Assessment Report (AR6) concluded that, despite progress, adaptation gaps exist and will continue to grow at current rates of implementation. This has also been confirmed by the UNEP-WASP Adaptation Gap Report 2023.

The decision text from the UNFCCC Conference of the Parties (COP28) held in December 2023 in Dubai, UAE, affirms a framework for a Global Goal for Adaptation (GGA), of which monitoring, evaluation and learning is one of the four core targets. The text states that by 2030, all Parties have designed, established and operationalized a system for monitoring, evaluation and learning for their national adaptation efforts and have built the required institutional capacity to fully implement the system.

In the COP28 Decision on the GGA, Parties are urged to enhance adaptation action in the following areas: water supply and safe potable water, food and agricultural production, health, ecosystems and biodiversity, infrastructure and settlements, poverty and livelihoods, and protection of cultural heritage. Adaptation actions are to be guided by science, including indicators, metrics and targets, among others.
Satellite Earth Observation (EO) has revolutionized systematic observations and played a pivotal role in understanding climate change to date, yet its potential to support adaptation implementation, monitoring and evaluation is only beginning to be explored. This session will highlight current adaptation actions using EO across the focus areas of the GGA listed above.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Advancing Drought Resilience through Parameter Optimisation of Agro-Ecological Model

Authors: Olga Wold, Roland Baatz, Michael Berg-Monicke, Claas Nendel
Affiliations: Leibniz Centre for Agricultural Landscape Research (ZALF), Institute of Biochemistry and Biology, University of Potsdam
The effects of climate change are becoming increasingly apparent in agricultural systems across Central Europe. As a result, there is a necessity for the development of forecasting models that can accurately predict drought events and improve the proactive strategies that enhance resilience and adaptation. This study focuses on the development and calibration of the MONICA (Model of Nitrogen and Carbon Dynamics in Agro-Ecosystems) crop model to enhance drought resilience in Germany's agricultural sector. The objective is to improve the model's capacity to simulate and predict crop yield dynamics in response to diverse climatic and soil conditions by integrating high-resolution input data, including weather observations from the Deutscher Wetterdienst (DWD), soil information from BÜK200 with a 1 km resolution, and regional yield statistics at NUTS-3 level. Additionally, as a land use mask, a crop map created using machine learning techniques based on Sentinel-1 and Sentinel-2 data was employed to provide spatial information on crop distribution. As a next step, a sensitivity analysis was conducted to identify the key parameters influencing the simulated crop yield and above-ground biomass, particularly in the context of drought and heat stress. Five parameters were determined to be highly sensitive and were then calibrated at the NUTS-3 regional level using observed yield data, with a particular focus on reducing errors in regions with differing soil and microclimatic conditions. In contrast to previous studies, which used a uniform calibration approach across Germany, this study utilizes region-specific calibration to better reflect the heterogeneity of local conditions. Results demonstrate a significant improvement in the model's performance following calibration. The discrepancies between simulated and observed yields, measured using metrics such as RMSE, were substantially reduced.
The calibrated MONICA model not only captures the yearly yield dynamics but also shows the ability to detect and simulate the impacts of extreme drought events, such as those observed in 2003 and 2018. This highlights its potential as a tool for assessing regional drought impacts. By incorporating seasonal forecasts into the calibrated model, it is possible to simulate potential drought events and provide early warnings to stakeholders. The integration of seasonal weather forecasts into the MONICA model therefore opens pathways for operational drought prediction. This work demonstrates the importance of region-specific calibration for improving crop model outcomes, especially in capturing spatial and temporal variability. In future work, the focus will be on upscaling the simulations to cover the entire Central European region. This will involve integrating additional earth observation datasets, such as ERA5, provided by ECMWF (the European Centre for Medium-Range Weather Forecasts). Additionally, drought indices derived from remote sensing, such as the Standardized Precipitation-Evapotranspiration Index (SPEI) or soil moisture anomalies, will be incorporated. The findings of this study are therefore not only applicable to Germany but can also help improve drought resilience and develop land-based adaptation strategies elsewhere. In the poster presentation, we look forward to sharing the newest findings from our ongoing research on advanced calibration tools and yield simulations conducted over Germany and possibly over the Czech Republic. Simulation results are compared to observed yield data, providing valuable insights into the effectiveness and real-world applicability of the modelling approaches.
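The calibration-quality comparison described above reduces to an error metric such as RMSE between simulated and observed regional yields. A minimal sketch with invented yield values (not actual MONICA output or DWD/NUTS-3 data):

```python
import numpy as np

def rmse(observed, simulated):
    """Root-mean-square error between observed and simulated yields."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))

# Invented NUTS-3 regional yields (t/ha), before and after calibration.
observed     = [7.2, 6.8, 5.9, 6.5]
uncalibrated = [8.1, 7.9, 7.0, 7.6]
calibrated   = [7.0, 6.9, 6.1, 6.4]

print(rmse(observed, uncalibrated))  # larger error before calibration
print(rmse(observed, calibrated))    # smaller error after calibration
```

Minimising this quantity per region, rather than with one national parameter set, is what the region-specific calibration strategy amounts to.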

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: The future of dam monitoring: Integrating EGMS InSAR monitoring with conventional techniques.

Authors: Antonio Miguel Ruiz-Armenteros, Miguel Marchamalo-Sacristán, Francisco Lamas-Fernández, Álvaro Hernández-Cabezudo, Alfredo Fernández-Landa, José Manuel Delgado-Blasco, Matus Bakon, Milan Lazecky, Daniele Perissin, Juraj Papco, Gonzalo Corral, José Luis Mesa-Mingorance, José Luis García-Balboa, Admilson Da Penha Pacheco, Juan Manuel Jurado-Rodríguez, Joaquim João Sousa
Affiliations: Department of Cartographic, Geodetic and Photogrammetry Engineering, University of Jaén, Campus Las Lagunillas s/n, 23071 Jaén, Centre for Advanced Studies in Earth Sciences, Energy and Environment (CEACTEMA), University of Jaén, Campus Las Lagunillas s/n, 23071 Jaén, Research Group RNM-282 Microgeodesia Jaén, University of Jaén, Campus Las Lagunillas s/n, 23071 Jaén, Topography and Geomatics Lab. ETS ICCP, Universidad Politécnica de Madrid, Department of Civil Engineering, University of Granada, Spain, Detektia Earth Surface Monitoring S.L., Spain, insar.sk s.r.o., Department of Finance, Accounting and Mathematical Methods, Faculty of Management and Business, University of Presov in Presov, School of Earth and Environment, University of Leeds, IT4Innovations, VSB-TU Ostrava, Raser Limited, Hong Kong, CIRGEO, Università degli Studi di Padova, Department of Theoretical Geodesy and Geoinformatics, Slovak University of Technology in Bratislava, Inteligencia Geotécnica SpA, Center for Technology and Geosciences, Department of Cartographic and Surveying Engineering, Federal University of Pernambuco, Cidade Universitária, Av. Prof. Moraes Rego, 1235, Recife 50670-901, Department of Computer Science, University of Jaén, Universidade de Trás-os-Montes e Alto Douro, Vila Real, INESC-TEC - INESC Technology and Science, Porto, 4200-465
Embankment dams are critical infrastructures essential for water supply, power generation, and flood control. Their inherently complex nature and sensitivity to environmental and geotechnical factors demand continuous and rigorous monitoring to ensure safety and minimize risks to nearby populations. Traditional monitoring methods, such as leveling, total station measurements, and geotechnical sensors, while precise, face limitations in spatial coverage, temporal frequency, and cost-effectiveness. Over the past few decades, Interferometric Synthetic Aperture Radar (InSAR) has emerged as a groundbreaking remote sensing technique for detecting millimeter-scale ground deformation. Using satellite-borne radar sensors, InSAR offers wide spatial coverage, high measurement accuracy, and retrospective time-series analysis, making it a transformative tool for dam monitoring. The European Ground Motion Service (EGMS), developed under the Copernicus Program, is a key advancement in this field, providing standardized, high-quality InSAR products across Europe. These include velocity maps, displacement time-series, and orthorectified datasets, enabling a holistic understanding of surface dynamics. Integrating EGMS InSAR products with traditional monitoring methods forms a comprehensive framework for embankment dam monitoring. This integration enhances stability assessment by combining InSAR’s broad spatial coverage and temporal insights with in situ sensor data, numerical models, and expert evaluations. Such a hybrid approach strengthens early warning systems and improves disaster preparedness, reducing risks associated with dam failures. EGMS and InSAR offer numerous benefits: early deformation detection, spatially comprehensive risk identification beyond point-based measurements, and time-series analyses that reveal trends and anomalies. 
Moreover, they enhance the identification of areas susceptible to landslides, differential settlements, and other phenomena threatening dam stability, all while potentially reducing costs compared to conventional monitoring. However, challenges such as signal degradation due to dense vegetation, atmospheric interference, and limitations in detecting non-linear displacements must be addressed for optimal use. Additionally, calibration with dense GNSS networks and independent validation against in situ data are vital for ensuring accuracy and reliability. The systematic integration of InSAR with complementary techniques—including topographic surveys (leveling, GNSS), geotechnical sensors (piezometers, inclinometers), and numerical models—overcomes the limitations of individual methods. Expert interpretation further enriches this process, offering nuanced insights into dam behavior under diverse conditions. Looking ahead, the future of dam monitoring lies in intelligent systems that merge InSAR data with real-time in situ measurements, predictive numerical models, and artificial intelligence-based analytics. These systems will enable proactive and sustainable water infrastructure management, optimizing resource use while safeguarding populations and ecosystems. This study explores the potential of InSAR for dam monitoring through case studies from the SIAGUA project. SIAGUA demonstrates the integration of satellite data, in situ monitoring, and expert knowledge for water infrastructure surveillance. The SIAGUA-SAT subproject leverages InSAR datasets from Copernicus satellites and others (e.g., Sentinel-1A/B, PAZ, TSX) to create deformation maps and time-series analyses of dams, reservoirs, and surrounding areas. These examples highlight the practical advantages of combining InSAR with conventional methods, aligning with the session's focus on innovative geotechnical engineering practices and resilience building.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Monitoring Urban Development Strategies to Mitigate Surface Urban Heat Islands Using Pixel-Wise LST and Air Temperature Regression

Authors: Hugo Poupard, Fabien Castel, Tarek Habib
Affiliations: Murmuration-SAS
Surface Urban Heat Islands (SUHI) are a significant urban climate challenge, where certain urban land cover characteristics—such as a lack of vegetation, low-albedo surfaces (e.g., black roofs), and extensive impervious surfaces—lead to elevated surface temperatures in cities compared to surrounding rural areas. Remote sensing data, particularly from Landsat satellites, has been widely used to analyze SUHI patterns and their correlation with land use changes (Zhou et al., 2011). However, one key challenge in previous studies is the inability to directly compare Land Surface Temperature (LST) across time periods due to variations in atmospheric conditions in the context of monitoring urban planning measures (Chudnovsky et al., 2004). This study introduces a pixel-wise regression framework that normalizes LST data using air temperature (Tair) to overcome this limitation and allows for meaningful thermal comparisons across different periods and Tair scenarios (Marzban et al., 2020). The framework integrates LST data from Landsat 8/9 (resampled from 100 m to 30 m resolution) and air temperature data from ERA5 reanalysis. Pixel-level regressions are calculated separately for two distinct time periods: one representing conditions before the urban intervention and the other after the intervention. These periods are defined interactively by the user within the tool and are processed dynamically during the analysis. By normalizing LST to specific Tair scenarios, the methodology compensates for atmospheric variability and enables consistent thermal comparisons. The user can select scenarios (e.g., 20°C or 30°C) to generate thermal change maps, quantifying the impact of urban planning measures on surface temperatures. Two case studies demonstrate the utility of this framework. 
In "L’Île du Ramier" in Toulouse, the redevelopment of the "Parc des Expositions" into a green space (completed in 2020) led to LST reductions of 5°C under a 20°C air temperature scenario and 11°C under a 30°C scenario. Similarly, in the "Village Olympique de Paris," the implementation of green roofs (completed in 2024) resulted in significant LST reductions of 7°C under a 20°C air temperature scenario and 11°C under a 30°C scenario. These findings underscore the effectiveness of green infrastructure and high-albedo surfaces in mitigating urban heat (Zhou et al., 2014). While the lack of field validation data is a limitation, the simplicity and scalability of this framework make it suitable for large-scale applications, especially for municipalities with limited budgets. Its ability to align LST with Tair ensures robust and consistent thermal comparisons, enabling urban planners to effectively evaluate and monitor the thermal impacts of land use changes. Other limitations include challenges in representing finer details due to the resolution of input data, such as accurately capturing narrow vegetation strips or urban canopies, and potential errors in interpreting canopy-covered areas where satellite thermal signals may not fully represent surface conditions. Additionally, because this methodology computes LST based on air temperature, it can be coupled with projected Tair datasets from climate models. This enables simulations and projections of future SUHI dynamics under various climate scenarios, providing valuable insights for long-term urban resilience planning. In the future, the approach could also incorporate a tool to identify the percentage contribution of features—such as vegetation, albedo, or impervious surfaces—to the cooling or warming of specific areas, enhancing the understanding of SUHI drivers and supporting more targeted mitigation strategies (Zhou et al., 2011).
References:
- Marzban, F., Sodoudi, S., & Preusker, R. (2020). The influence of land-cover type on the relationship between NDVI–LST and LST–Tair. Theoretical and Applied Climatology, 142, 791–803.
- Zhou, W., Qian, Y., Li, X., Li, W., & Han, L. (2014). Quantifying how urban landscape heterogeneity affects land surface temperature at multiple scales. Ecological Indicators, 45, 108–117.
- Chudnovsky, A., Ben-Dor, E., & Saaroni, H. (2004). Diurnal thermal behavior of selected urban objects using remote sensing measurements. Energy and Buildings, 36(11), 1063–1074.
- Zhou, W., Huang, G., & Cadenasso, M. L. (2011). Does spatial configuration matter? Understanding the effects of land cover pattern on land surface temperature in urban landscapes. Landscape and Urban Planning, 102(1), 54–63.
- Yang, X., & Jin, Y. (2010). Remote sensing and geospatial technologies for urban climate studies: Applications and case studies. GIScience & Remote Sensing, 47(3), 271–286.
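The pixel-wise regression idea can be sketched in a few lines (a deliberately simplified illustration with synthetic values, not the authors' implementation): fit LST = a + b·Tair independently per pixel for the before and after periods, then evaluate both fits at a chosen Tair scenario and take the difference.

```python
import numpy as np

def pixelwise_fit(lst_stack, tair):
    """Fit LST = a + b * Tair independently for each pixel.

    lst_stack : (n_scenes, ny, nx) LST observations for one period.
    tair      : (n_scenes,) co-located air temperatures.
    Returns intercept a and slope b, each of shape (ny, nx).
    """
    n, ny, nx = lst_stack.shape
    X = np.column_stack([np.ones(n), tair])       # (n, 2) design matrix
    y = lst_stack.reshape(n, -1)                  # (n, ny*nx)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # (2, ny*nx)
    a, b = coef
    return a.reshape(ny, nx), b.reshape(ny, nx)

# Toy single-pixel example: after a green-space intervention the same
# pixel runs cooler at any given air temperature (synthetic lines).
tair = np.array([15.0, 20.0, 25.0, 30.0])
before = (5.0 + 1.4 * tair).reshape(4, 1, 1)  # pre-intervention LST
after  = (3.0 + 1.2 * tair).reshape(4, 1, 1)  # post-intervention LST

a0, b0 = pixelwise_fit(before, tair)
a1, b1 = pixelwise_fit(after, tair)
scenario = 30.0                               # evaluate at a 30 °C scenario
change = (a1 + b1 * scenario) - (a0 + b0 * scenario)
print(round(float(change[0, 0]), 2))  # -> -8.0 (cooling at Tair = 30 °C)
```

Because both periods are evaluated at the same Tair, the differencing cancels the atmospheric variability that would otherwise confound a direct LST comparison.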

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Landscapes in Flux: Modeling Urban Growth and Climate Futures in Western Germany 2050

Authors: Jun. Prof. Dr. Andreas Rienow, Jiaqi
Affiliations: Ruhr University Bochum
In Germany, a land consumption of less than 30 ha per day is envisaged by 2030 and a net zero land take by 2050. With stable values of 50 ha per day in 2024, those goals seem to be hard to fulfil. In that regard, the effect of growing metropolitan areas on the climate of local neighbourhoods is becoming an increasingly important topic in regional planning. The study is a contribution to the climate change-related land cover simulation efforts in Germany. It investigates future land consumption rates based on the Patch-generating Land Use Simulation (PLUS) framework, which enables simulation of the generation and evolution of patches of multiple land use and land cover classes and exploration of the mechanisms of land use change in the Rhine-Ruhr metropolitan region. The PLUS model combines a random forest-based decision probability approach for analysing land expansion with a cellular automata (CA) model that utilizes a multi-type stochastic seeding mechanism. This integration allows for the identification of driving factors behind land use and land cover change and the prediction of patch-level changes in landscapes. We present the calibration, validation, and scenario implementation of PLUS for the Rhine-Ruhr metropolitan region 2006-2050. As land use and land cover information, we utilize the 27 Urban Atlas classes provided by the Copernicus Land Monitoring Service for the years 2006, 2012, and 2018. We generate spatially explicit data on over 20 socioeconomic and geobiophysical driving forces, establishing random forest-based growth probability maps for the CA component. Besides a baseline scenario, we calculate a climate change adaptation scenario and an economic prosperity scenario for 2050. These scenarios enable a better understanding of potential future dynamics of land use changes under varying assumptions.
By analysing the scenario diversity, we detect areas depicting a higher sensitivity to the impacts of regional spatial planning decision-making, offering valuable insights for sustainable urban development strategies. These insights highlight the importance of proactive spatial policies to balance environmental, economic, and social needs while addressing the challenges of land resource management in a changing climate.
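The coupling of a learned growth-probability surface with a stochastically seeded cellular automaton can be illustrated with a deliberately minimal sketch (a single binary urban/non-urban class and invented probabilities, far simpler than the multi-class PLUS model with its random forest component):

```python
import numpy as np

def ca_step(urban, growth_prob, seed_rate=0.02, rng=None):
    """One toy CA update: non-urban cells convert with a probability
    given by a learned growth-probability surface, applied in full for
    cells adjacent to existing urban land; a small stochastic seeding
    factor lets new patches appear away from existing ones.
    """
    rng = rng or np.random.default_rng()
    # 4-neighbour count of urban cells (edges padded with zeros).
    padded = np.pad(urban, 1)
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:])
    adjacency = np.where(neigh > 0, 1.0, seed_rate)  # seeding far from patches
    p_convert = growth_prob * adjacency
    converts = (rng.random(urban.shape) < p_convert) & (urban == 0)
    return urban | converts.astype(urban.dtype)

# Toy landscape: one urban seed cell, growth probability highest near it.
urban = np.zeros((20, 20), dtype=np.uint8)
urban[10, 10] = 1
yy, xx = np.mgrid[0:20, 0:20]
growth_prob = np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 50.0)

rng = np.random.default_rng(42)
for _ in range(10):  # simulate 10 time steps
    urban = ca_step(urban, growth_prob, rng=rng)
print(int(urban.sum()))  # urban area has grown around the seed
```

In PLUS the probability surface comes from a random forest trained on the observed 2006-2018 transitions and their driving forces, rather than from the synthetic Gaussian used here.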

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Support Local Climate Adaptation Actions Using Earth Observation Data

Authors: Shaojuan Xu
Affiliations: Research Institute For Regional and Urban Development, Geography Department, Humboldt University of Berlin
Greenhouse gas emissions from energy use in buildings are responsible for about half of the total emissions at the city level. The low efficiency of buildings makes it a challenge for many cities to become climate-neutral. To make matters worse, rising energy prices leave many families unable to afford heating and cooling, affecting human health and well-being. Many governments have started initiatives to improve building energy efficiency and reduce energy poverty but are facing difficulties due to a lack of precise and detailed information at the local level. To tackle these issues, our project, “Climate Adaptation through Thermographic Campaign and Heat Mapping (CATCH4D)”, took a data-driven approach and developed digital tools to support decision-makers in local government’s action on building energy renovation and energy justice in practice. Firstly, we used aerial survey and 3D photogrammetry techniques to build a 3D thermographic model to investigate building energy leaks at a city scale. Then, we integrated socioeconomic data at the building level to analyse the contributors and the spatial distribution of energy-poor households. Lastly, we transferred our data and model into a 3D WebGIS application, which enables users to visualise the energy loss from buildings, retrieve information for energy renovation and assess possible energy poverty. Our project has produced a 3D thermal model of the entire city of Dortmund, Germany. Users can zoom, pan, and tilt the model to visualise heat loss from building roofs, walls, and windows at a fine scale. It provides an intuitive way to inform property owners about their building energy efficiency, support their actions in building renovation, and reduce energy costs. Our digital products will form part of the social instruments for stakeholder engagement in low-income neighbourhoods.
Supported by neighbourhood management offices, our model and data analysis results will be presented at the Property Owner Forum and Energy Consulting Workshops, involving local communities, local authorities, civil engineers, energy experts, and property owners. We expect that our products will a) raise awareness of energy efficiency in the public, commercial, and private sectors; b) enable civil engineers, energy experts and public energy consulting offices to provide on-site consultation on building renovation and energy saving; and c) help local social welfare offices support the identified energy-poor households.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: The Global Oasis Knowledge Hub

Authors: Dr. Jessica Hetzer, Dr. Rainer Krug, Dr. Aidin Niamir
Affiliations: Senckenberg Biodiversity And Climate Research Centre
Oases are fertile ecosystems within some of the world's driest regions and are vital socio-economic systems. In countries with drylands, up to 15% of the total population lives in oases, amounting to as many as 224 million people in Middle Eastern and African countries alone. These unique environments represent remarkable examples of human adaptation to extreme conditions, relying on intricate agricultural practices that integrate crops, livestock, and water management. However, their significance extends far beyond local sustenance, directly intersecting with peace and the Climate-Security nexus. As fragile hubs of life in arid landscapes, oases serve as crucial lifelines for communities, fostering economic stability, cultural heritage, and ecological balance. They are often critical to regional peace, as their resources—particularly water and fertile land—are shared and managed collectively. Yet, mounting climatic, environmental, and anthropogenic pressures threaten their ecological and cultural integrity. When oasis ecosystems fail to function effectively, the consequences can be severe, including critical threats to water and food security. Such disruptions can trigger resource-driven conflicts and may even force large-scale migrations. A significant knowledge gap in understanding the current state and recent developments of oases lies in the scattered and fragmented nature of information about these ecosystems. Although some remote-sensing-based products have been developed, effectively linking these tools with field-based knowledge and research insights—often sparsely published across diverse journals—remains a substantial challenge. This disconnect limits our ability to form a comprehensive understanding of the potential insights derived from Earth observation. To address these challenges, we developed the Global Oasis Knowledge Hub, a transformative initiative designed to advance systematic research and knowledge sharing. 
The Hub's open-access literature database, containing over 12,000 entries, provides invaluable insights into oasis ecosystems, empowering researchers, policymakers, and practitioners to explore the latest findings. Furthermore, the Hub creates a pathway for exploring new research areas and collectively closing knowledge gaps by offering a transparent and adaptable framework.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Biodiversity Recovery in the Salt Marshes: Assessment of Heterogeneity and Climate Vulnerability

Authors: Hana Švedová, Matúš Hrnčiar, M.A. Jan Labohý, Helena Chytrá, Júlia Buchtová, Marie Kotasová Adámková, Antonín Zajíček
Affiliations: World from Space, Masaryk University, VUMOP
Wetlands and salt marshes are irreplaceable components of agricultural landscapes. They support biodiversity, provide essential ecosystem services, and help to mitigate the negative effects of drought and downstream flooding. However, since the 1970s, wetlands have become increasingly endangered as a result of agricultural intensification, drainage, and inappropriate water management practices. The objective of this research is the restoration of degraded wetland habitats in Moravian Pannonia, with the aim of ensuring their long-term sustainability. The research component focusing on EO data has two principal objectives. The first is to assess and monitor the heterogeneity and biodiversity of the habitats, while the second is to evaluate their vulnerability to climate change. This serves as a baseline for management strategies aimed at reducing vulnerability. The majority of the monitored area is characterised by the dominance of undesirable vegetation, with only a third supporting diverse communities. A reduction in invasive species may serve to enhance habitat heterogeneity. Furthermore, the analysis of spectral heterogeneity is being employed to monitor the efficacy of management interventions. The heterogeneity is assessed using the Spectral Variability Hypothesis, with satellite data from PlanetScope used to calculate Shannon entropy as a measure of spectral diversity. The results reveal higher spectral heterogeneity in areas near ponds and along linear vegetation, in contrast to areas dominated by reeds and other homogeneous vegetation. These findings emphasise the necessity of targeting management efforts to create mosaics of smaller, diverse habitats. The climate change vulnerability of the selected habitats is assessed against the identified climate stressors (namely, rising average air temperature, heat waves, droughts, and floods).
The evaluation incorporates EO data from Landsat missions, meteorological data, hydrological and terrain modelling, and expert knowledge, following IPCC guidelines (exposure, sensitivity, and adaptive capacity). The results have demonstrated that there is an increasing exposure to rising air temperatures and prolonged dry periods. The analysis revealed a high level of sensitivity in habitats dependent on water availability and in areas with sparse vegetation. In contrast, the highest adaptive capacity was observed in regions with well-established water retention features. As it is unlikely that exposure and sensitivity will decline significantly in the near future, the most effective way to reduce vulnerability is to promote increased adaptive capacity. The annually updated results will assist in the implementation of effective adaptive management strategies and inform policy-making decisions to ensure the continued thriving and sustainability of the wetlands in the region.
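A minimal sketch of the Spectral Variability Hypothesis workflow described above (a single synthetic band with illustrative bin and window sizes; the study itself uses multispectral PlanetScope imagery): Shannon entropy of binned reflectance values is computed in a moving window, so homogeneous stands score low and mosaic habitats score high.

```python
import numpy as np

def shannon_entropy(values, n_bins=8):
    """Shannon entropy (bits) of pixel values binned over [0, 1]."""
    counts, _ = np.histogram(values, bins=n_bins, range=(0.0, 1.0))
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def entropy_map(band, window=5, n_bins=8):
    """Moving-window Shannon entropy over a single reflectance band."""
    ny, nx = band.shape
    half = window // 2
    out = np.zeros((ny, nx))
    for i in range(half, ny - half):
        for j in range(half, nx - half):
            win = band[i - half:i + half + 1, j - half:j + half + 1]
            out[i, j] = shannon_entropy(win.ravel(), n_bins)
    return out

# Toy scene: homogeneous reed bed (left) vs heterogeneous mosaic (right).
rng = np.random.default_rng(1)
scene = np.empty((20, 20))
scene[:, :10] = 0.30 + rng.normal(0, 0.001, (20, 10))  # uniform reeds
scene[:, 10:] = rng.uniform(0.05, 0.60, (20, 10))      # mixed habitats

ent = entropy_map(scene)
print(ent[10, 4] < ent[10, 15])  # mosaic side is more entropic -> True
```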

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: B.02.02 - POSTER - Managing the Urban Green Transition with Earth Observation data and advanced analytics.

Urban areas are at the forefront of the global green transition and the decarbonization of society, addressing challenges like climate change, resource scarcity, and social equity. Cities contribute approximately 70% of global CO2 emissions, highlighting their critical role in climate mitigation. Current policies and international agreements, such as the European Green Deal and networks like C40 Cities, are driving the push towards net zero carbon goals, necessitating innovative solutions.

This session explores the role of urban analytics, predictive analytics, and Earth Observation (EO) in supporting this mission. By leveraging data-driven approaches and satellite imagery, cities can optimize their green transition strategies, address local challenges, and effectively align resources.

Key topics include:
• Advances in Digital Twin Technologies and Predictive Analytics for managing the Urban Green Transition.
• Integration of novel analytics (e.g. AI, NLP, agent-based modelling) and EO data for managing the Urban Green Transition.
• EO-based analytic services and solutions from local to global scales to support urban green transition, including energy management, building energy efficiency and retrofitting, sustainable mobility, air quality, urban green management, and more.
• Best practices in data sharing and collaboration among cities, service providers, and stakeholders, to develop EO-based green transition services.
• Utilization of multimodal datasets and technologies for comprehensive urban analysis.

This session is targeted towards individuals engaged in or interested in the application of novel urban analytics and EO for managing the urban green transition. By fostering a collaborative environment, we aim to accelerate the development and adoption of innovative solutions applicable across various cities and regions.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Probabilistic Approach to Effective Road Detection in Noisy Satellite Imagery

Authors: Nina Krašovec, Dr. Aleš Marsetič
Affiliations: ZRC SAZU
Road detection is important for road monitoring, disaster management, traffic management, urban planning, and GPS navigation. It is also useful for extracting roads that can be used as a reference for precise orthorectification. Reliable, highly accurate and fast orthorectification of high-resolution data is of paramount importance for various spatial applications. Automatically detected road networks can be used to orthorectify various types of imagery. However, it can be a challenging task to detect roads in images from small satellites with low radiometric range, which can contain higher levels of noise. One such satellite is Slovenia's first microsatellite, NEMO-HD. It was launched in 2020 and is operated by the Slovenian Centre of Excellence for Space Sciences and Technologies SPACE-SI. The satellite provides four spectral channels (RGB-NIR) with a spatial resolution of 5.6 m and a panchromatic channel with a resolution of 2.8 m. It is aimed at monitoring smart cities, rivers and the sea, forests, agriculture, droughts, floods, and invasive plants (Rodič et al., 2022). In order to obtain spatially accurate data from the images, they have to be orthorectified. ZRC SAZU and SPACE-SI have developed an automatic orthorectification processing chain that requires specific ancillary data:
- road network data (e.g. OpenStreetMap (OSM) data),
- digital terrain model (DTM) data (e.g. Shuttle Radar Topography Mission (SRTM) data, airborne laser scanning (ALS) data),
- orthophoto or orthoimage data (e.g. national orthophoto data, Copernicus data, ALS data).
For the automatic orthorectification process to work, a binary road mask must be detected (extracted) from the raw satellite image to be orthorectified. Road detection using traditional single point estimates often struggles to cope with the inherent noise and variability of satellite data, particularly from small satellites such as NEMO-HD.
Bayesian statistics provides a framework for considering many possible parameter sets. This presentation introduces the use of ensemble-based methodology through approximate Bayesian methods to increase prediction accuracy, improve robustness to overfitting, and handle uncertainties in road detection tasks. For road detection, multispectral NEMO-HD data is complemented by freely available PlanetScope optical satellite imagery to improve model learning and increase the amount of training data. The main Dove constellation provides images with four spectral channels (RGB-NIR), 3 m spatial resolution and high temporal resolution due to the number of satellites in low Earth orbit. The images are downsampled to NEMO-HD resolution and used to train a U-Net model with a ResNet-101 encoder and additional ensemble integration. Reference road masks are derived from OSM and are often noisy, i.e. they miss roads or include mapped roads that are not visible in the image. We first test the model with all roads, and then additionally discard smaller roads based on OSM tags in order to find the optimal set of roads for labelling without additional manual work. Detection is highly dependent on the spatial resolution of the input image and the amount of noise. Due to the high radiometric variability, the lower signal-to-noise ratio of NEMO-HD satellite images, and sometimes poor labelling, a single point estimate may overfit to different subsets of data or noise during model learning, which can be addressed by ensemble-based methods. Two different approximate Bayesian methods were tested: (1) Monte Carlo (MC) Dropout extends the idea of conventional training dropout to the inference phase. At inference time, multiple forward passes are made and MC Dropout randomly deactivates neurons in the network following a Bernoulli distribution. This can be seen as sampling from an approximate posterior distribution over the model parameters.
The final predictions are then obtained from these sampled models by model averaging. This averaging approximates the integral over the posterior distribution, in which the expected value of a function representing the model output or prediction given the model parameters is computed with respect to the posterior distribution. (2) Anchored ensembles use randomized maximum a posteriori probability (MAP) sampling. They approximate the posterior distribution by using anchor points that are sampled from a prior distribution and remain fixed during training. This is achieved by adding a regularization term to the loss function that minimizes the anchoring loss. Both approaches produce multiple predictions. MC Dropout uses a different dropout mask in each forward pass, which can be seen as a prediction from a different model. In an anchored ensemble, each set of parameters, regularized towards different values sampled from the anchor distribution, represents a different model. We show that the two ensemble methods substantially improve road prediction in noisy satellite imagery compared to the standard single point estimate approach. This improvement is particularly reflected in increased recall, the most critical metric in our use case, as it ensures that fewer roads are missed. Predicted road masks are used in the process of ground control point (GCP) extraction, where the detected road patches are matched to the reference road patches. The ensemble-based methodology can improve this step by exploiting the uncertainty generated during road extraction, calculated from the standard deviations across multiple model predictions: pixels where predictions vary widely indicate less certainty in the prediction. This makes area-to-area matching an uncertainty-informed process, increasing the efficiency and accuracy of the matching and the subsequent transformation.
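The two ensemble schemes above can be illustrated with a minimal NumPy sketch (not the authors' implementation: the toy `forward` model, dropout rate, pass count and anchor weight are all hypothetical). It shows MC Dropout inference by model averaging, the per-pixel standard deviation used as the uncertainty map, and the anchored-ensemble regularization term:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w, drop_rate=0.5, rng=rng):
    """One stochastic forward pass: a Bernoulli dropout mask is applied
    to the weights at inference time, then a sigmoid 'road' score."""
    mask = rng.random(w.shape) > drop_rate
    return 1.0 / (1.0 + np.exp(-(x * (w * mask)).sum(axis=-1)))

# MC Dropout at inference: T passes, averaged for the final prediction;
# the per-pixel std is the uncertainty later used in GCP matching.
x = rng.normal(size=(64, 64, 4))   # toy 4-band image patch
w = rng.normal(size=4)
T = 30
samples = np.stack([forward(x, w) for _ in range(T)])
road_prob = samples.mean(axis=0)    # model-averaged road probability
uncertainty = samples.std(axis=0)   # high std = less trustworthy pixel

# Anchored-ensemble loss: the task loss plus a pull towards anchor
# parameters sampled once from the prior (randomized MAP sampling).
def anchored_loss(task_loss, theta, theta_anchor, lam=1e-2):
    return task_loss + lam * np.sum((theta - theta_anchor) ** 2)
```

Each ensemble member would use its own `theta_anchor`; the averaging and std computation over members are identical to the MC Dropout case.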
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Estimating the Global Area of Built Structures and Human-Made Impervious Surfaces Using a Sample of BlackSky and PlanetScope Data

Authors: Alexandra Tyukavina, Dr. Matthew Hansen, Andres Hernandez-Serna, Theodore Kerr, Maxim Potapov
Affiliations: University of Maryland, James Hubert Blake High School
Monitoring urban areas from space is the only feasible solution for a global assessment of the urban green transition. Medium- and medium-high-resolution optical imagery, such as that from the Landsat and Sentinel-2 satellites (10-30 m) and the PlanetScope constellation (3-4 m), is suitable for mapping built-up vs. green portions of urban areas and monitoring changes in the composition of the urban environment over time. However, to validate these maps and to estimate the area of various types of components within the urban landscape, higher-resolution (~1 m or finer) optical imagery is critical. This project aims to employ BlackSky imagery (~1 m resolution) to estimate the global area under various human-made structures and impervious surfaces in 2022-2023 and to estimate the accuracy of Landsat- and PlanetScope-based maps using the resulting sample. The BlackSky constellation (14 satellites in 2022) provides the unique capability of repeat imaging of the same location at ~1 m resolution on the same day. The BlackSky imagery archive mostly covers large urban areas; in the current project, we are trying to produce a truly global estimate, including areas with medium and low urban density, which did not have any imagery available in the archive. The project is part of the NASA Commercial SmallSat Data Acquisition program’s on-ramp vendor evaluation. For this global analysis we sampled 300 2.5x2.5 km blocks via stratified random sampling, with high-, medium- and low-built-up strata constructed from Landsat-based maps (Hansen et al., 2022; Potapov et al., 2022). We then tasked the BlackSky constellation to acquire one minimally cloudy image per sample block between May 2022 and June 2023. This resulted in 75% of the sample having cloud-free imagery coverage for the entire sample block (from one cloud-free scene or multiple partially cloudy scenes) and 11% having some (partially cloudy) BlackSky imagery.
The remaining 14% of the sample was interpreted from PlanetScope imagery (3-4 m), which was also used as the georeferencing target for the BlackSky imagery. Within each sample block, 20 random points were sampled and visually interpreted to assign one of the reference classes. We distinguished between built 3D structures (houses, commercial buildings, greenhouses, etc.), flat impervious surfaces (roads, parking lots, bridges, railroads, etc.), mining pits, and tall built-up structures without large footprints on the ground (pipes, fences, oil wells, power lines, etc.). We will present global area statistics for these different types of built surfaces and preliminary map validation results. Our insights regarding the challenges of tasking commercial satellites to acquire imagery over a global sample of locations will help inform the planning of global land cover map validation and estimation efforts. References: Hansen, M. C., Potapov, P. V., Pickens, A. H., Tyukavina, A., Hernandez-Serna, A., Zalles, V., ... & Kommareddy, A. (2022). Global land use extent and dispersion within natural land cover using Landsat data. Environmental Research Letters, 17(3), 034050. Potapov, P., Hansen, M. C., Pickens, A., Hernandez-Serna, A., Tyukavina, A., Turubanova, S., ... & Kommareddy, A. (2022). The global 2000-2020 land cover and land use change dataset derived from the Landsat archive: first results. Frontiers in Remote Sensing, 3, 18.
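A stratified design like the one described lends itself to the standard stratified estimator: per-stratum block means combined with area weights, and variances of the stratum means summed with squared weights. The sketch below is a generic illustration with made-up stratum weights and block-level proportions, not the project's actual numbers:

```python
import numpy as np

def stratified_estimate(weights, block_props):
    """Stratified estimate of a class proportion.

    weights     : fraction of total area in each stratum (sums to 1)
    block_props : per-stratum arrays of block-level proportions
                  (e.g. fraction of a block's 20 points labelled 'built')
    Returns (estimate, standard_error) using the usual
    stratified-random-sampling formulas.
    """
    est, var = 0.0, 0.0
    for w, p in zip(weights, block_props):
        p = np.asarray(p, dtype=float)
        n = p.size
        est += w * p.mean()
        var += w**2 * p.var(ddof=1) / n  # variance of the stratum mean
    return est, np.sqrt(var)

# Hypothetical example: 3 strata (high/medium/low built-up density)
weights = [0.02, 0.18, 0.80]
props = [np.array([0.6, 0.7, 0.5]),
         np.array([0.2, 0.1, 0.3]),
         np.array([0.0, 0.05, 0.0])]
estimate, se = stratified_estimate(weights, props)
```

The same machinery yields map accuracy estimates when `block_props` holds per-block agreement rates between the map and the interpreted points.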
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Using Satellites in Support of Reducing Urban Methane Emissions From Solid Waste

Authors: Ilse Aben, Joannes Maasakkers, Matthieu Dogniaux, Xin Zhang, Srijana Lama, Marianne Girard, Dylan Jervis, Jason McKeever, Berend Schuit, Shubham Sharma, Ana Lopez-Norena, Daniel Varon
Affiliations: SRON Netherlands Institute for Space Research, GHGSat Inc, Harvard University
The importance of reducing methane emissions to mitigate climate change in the short term has been recognised at the highest political level. By now, over 150 countries have signed up to the Global Methane Pledge (GMP) to reduce emissions by 30% by 2030 compared to 2020. Solid waste is responsible for about 10% of global anthropogenic methane emissions. With growing urbanization, waste production is expected to increase by a further 70% by 2050. As such, waste is an important sector to address when aiming to reduce (urban) methane emissions and a key pathway to realizing the GMP. Using the TROPOMI satellite instrument on ESA's Sentinel-5 Precursor, we have shown that we can detect methane hot spots/super-emitters globally, coming from large emitting sources in oil & gas production fields and coal mining regions, but also corresponding to urban areas (Schuit et al., 2023). In fact, we have detected methane hot spots with TROPOMI in some 67 urban areas so far. These urban hot spot detections have been used to tip-and-cue high-spatial-resolution satellites (e.g., GHGSat, EnMAP, PRISMA) to identify the main responsible sources. In all these urban areas, we detected major methane emissions coming from managed landfills as well as waste dumps that are often also associated with additional environmental problems (Dogniaux et al., 2024; Zhang et al., 2024). Since May 2023, SRON leads the TWOS (Targeting Waste emissions Observed from Space) project funded by the Global Methane Hub, a philanthropic organisation that supports projects to reduce methane emissions. Within TWOS, we use satellite observations of methane at both the urban level (TROPOMI) and the facility level (GHGSat) to support local stakeholders with the mitigation of solid waste methane emissions. For this, SRON partners with GHGSat, which operates a constellation of 10+ satellites measuring methane at 25-meter resolution.
Within TWOS we monitor methane emissions from ten waste disposal sites in ten different cities in the Global South over a period of three years. These cities are Santo Domingo, Rio de Janeiro, Sao Paulo, Brasilia, Buenos Aires, Dakar, Abidjan, Lagos, Nairobi, and Delhi. Reducing waste emissions involves a complex system of organizations, and optimal mitigation approaches are highly site-specific. We therefore also partner with NGOs such as C40, RMI, GMH, and CATF to best engage with the local stakeholders and optimally support emission mitigation. On a larger scale, SRON is one of the partners in the LOW-Methane (Lowering Organic Waste Methane) initiative to accelerate the delivery of the GMP by cutting 1 million tonnes of annual methane emissions from the solid waste sector well before 2030. For this, LOW will unlock up to 10 billion in public and private investment across as many as 40 jurisdictions. Methane measurements (including from satellites) are key in providing the necessary data to establish baseline emissions, pinpoint where the largest emissions are coming from, and monitor progress in reducing them. In this presentation we will illustrate and elaborate on the above developments of relevance to reducing urban greenhouse gas emissions.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Quantifying the Impact of Urban NBS on Heat Island Mitigation Using High-Resolution EO and In-situ Data

Authors: Efthymios Papachristos, Stylianos Kossieris, Dr. Panagiotis Michalis, Chrysovalantis Tsiakos, Georgios Tsimiklis, Dr. Angelos Amditis
Affiliations: Institute Of Communication And Computer Systems (ICCS)
Urban areas in the Mediterranean region are increasingly impacted by the Urban Heat Island (UHI) effect, which is amplified by extreme climatic conditions. Various Nature-Based Solutions (NBS) have been proposed and implemented to reduce the impact of urban overheating. However, these solutions have mainly been applied at local scales and in specific areas of interest, limiting knowledge about their spatial impact on heat mitigation. As a result, the effectiveness of NBS in reducing heat impact remains challenging to monitor and quantify using Earth Observation (EO) technologies, which limits understanding of their wider benefits and their upscaled implementation over larger areas. This study focuses on upscaling NBS applications and quantifying their impacts at larger scales by integrating EO techniques with data from in-situ sensors and crowdsourcing applications. This is achieved by employing a multi-sensor data fusion approach, combining geospatial datasets derived from EO and ground-based measurements to generate high-resolution temperature data, which is expected to enable a localized assessment of NBS effectiveness in mitigating heat within Mediterranean regions. The methodology is based on a downscaling, cross-modality framework that combines measurements from the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS), Land Surface Temperature (LST) products derived from the SLSTR instrument on the Sentinel-3 satellite mission, and in-situ data produced either by professional sensors or by Citizen Science participatory campaigns. While the methodology is designed to be broadly applicable across various regions, it is specifically being implemented as a use case for the city of Marseille, where different NBSs are being developed in two areas of the city.
The methodology is based on the analysis of several parameters, such as LST measurements at different resolutions, vegetation indices, precipitation, land cover, and wind speed and direction, among others. This will enable us to bridge the gap in NBS efficiency estimation, as a first step towards an NBS upscaling strategy.
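One common way to realize such an LST downscaling step is a TsHARP-style regression: fit the relation between coarse-resolution LST and a predictor such as NDVI, then apply it at the fine scale. This is a generic sketch of that family of methods, not necessarily the exact framework used in this study; the synthetic arrays and the linear form are illustrative only:

```python
import numpy as np

def downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine):
    """TsHARP-style sharpening sketch: fit LST ~ a*NDVI + b at the
    coarse scale, then predict LST at the fine scale using fine-scale
    NDVI. (A residual-correction step is omitted for brevity.)"""
    a, b = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    return a * ndvi_fine + b

# Illustrative synthetic scene: LST (in K) decreases with vegetation.
rng = np.random.default_rng(1)
ndvi_coarse = rng.uniform(0.1, 0.8, size=(10, 10))       # e.g. 1 km grid
lst_coarse = 320.0 - 25.0 * ndvi_coarse + rng.normal(0, 0.5, size=(10, 10))
ndvi_fine = rng.uniform(0.1, 0.8, size=(100, 100))       # e.g. 100 m grid
lst_fine = downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine)
```

In a fusion setting, Sentinel-3 SLSTR could supply `lst_coarse`, ECOSTRESS or in-situ readings could constrain the residual correction, and the same regression idea extends to the multi-predictor case (land cover, precipitation, wind) mentioned above.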
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Emission Observatory – piloting air quality and greenhouse gas emission hotspot visualization dashboard for Africa

Authors: Johanna Tamminen, Janne Hakkarainen, Iolanda Ialongo, Henrik Virta, Seppo Hassinen, Anne Hirsikko, Hannakaisa Lindqvist, Jari Liski, Katja Loven, Olli Nevalainen, Harri Pietarila, Anu-Maija Sundström
Affiliations: Finnish Meteorological Institute
We demonstrate the pilot version of the Emission Observatory (www.emissionobservatory.org), a dashboard for visualizing air quality, greenhouse gases and emission hotspots in Africa. The dashboard builds on satellite data of NO2, SO2, CO2 and CH4 measured by TROPOMI/S5-P, OMI/EOS-Aura, OCO-2 and OCO-3, together with recently developed emission estimation methods. The aim of the dashboard is to support authorities and decision makers by improving the monitoring of air pollution and GHG emissions in areas where ground-based observations are sparse and existing emission inventories are incomplete. The first version of the Emission Observatory dashboard, launched in fall 2024, focuses on emissions at selected areas in Africa related to the oil and gas industry, the mining industry, power plants and megacities. In this project we collaborate with the Ministry of Foreign Affairs of Finland and meteorological institutes in Africa. The Finnish Meteorological Institute (FMI) has over 30 years of experience in supporting meteorological and climate co-operation and capacity building in over 100 developing countries. Currently FMI is running 10 projects targeted especially at eastern Africa. In the development of the Emission Observatory, we utilize the expertise from these ongoing capacity-building projects and the collaboration to test and pilot the developed service among the participating institutes. In this presentation we demonstrate the current dashboard, its data and analysis streams, and discuss the potential of extending the dashboard to other applications and geographical locations.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Decision-making tool for Night-Time Light policies with SDGSAT-1 and Biodiversity Data in Haute-Savoie, France

Authors: Aurélien Mure, Emma Bousquet, Tom Brunelle, Camille Lhenry, Vincent Delbar, Phd candidate Samuel Busson, Jean-François Bretaud, Phd Florian Greffier, Anne Gizard, Arnaud Ceyte, Quentin Gautier
Affiliations: Cerema, Satellite images team, La TeleScop, Cerema, Light and biodiversity team, Cerema, Lighting research team, Cerema, urban and biodiversity team, SYANE, Haute-Savoie energy syndicate
Introduction
Artificial night-time lighting represents a critical yet underexplored intersection of energy management, urban development, and biodiversity conservation. The ORENOS project pioneers a novel approach to assessing and improving artificial lighting quality by integrating satellite imagery with ecological and urban data. Focused on Haute-Savoie, a rapidly urbanizing and biodiverse region in France, ORENOS seeks to equip local authorities (the SYANE energy syndicate) with actionable tools to balance energy efficiency, biodiversity preservation, and urban lighting needs.
Methodology
The project leverages cutting-edge imagery from the SDGSAT-1 satellite operated by CBAS, developed for monitoring the United Nations Sustainable Development Goals (SDGs). Its 10-meter panchromatic spatial resolution and three color bands enable precise radiance mapping at a regional scale, offering insights into artificial lighting intensity and distribution over four years. Innovative pre-processing techniques for radiometric and geometric corrections are being developed to automate image analysis. Radiance data, combined with street lighting databases, albedo field measurements, and ecological datasets, will enable the creation of two unique indicators:
1. Lighting Quality Indicator: This metric evaluates the efficiency and appropriateness of lighting installations, identifying issues such as over-lighting, malfunctioning luminaires, and improper light orientation (e.g., upward-directed flux contributing to skyglow).
2. Biodiversity Pressure Indicator: By cross-referencing radiance maps with ecological databases, this indicator quantifies the potential impact of artificial lighting on nocturnal biodiversity, particularly in areas of high ecological sensitivity.
Night-time satellite images provide information about ascending light, which is composed of both direct flux and ground-reflected flux.
A simplified albedo is estimated from daytime Pléiades or SPOT 6/7 images, as material optical properties strongly condition the contribution to light pollution. A key innovation lies in the project’s integrative modeling approach, which combines satellite-based radiance data with expected lighting patterns derived from road classifications, urban layouts, localized reflectivity measurements and local artificial-light databases. By comparing observed and modeled radiances, the project will not only identify discrepancies but also suggest actionable interventions to optimize lighting systems for reduced ecological disturbance and improved energy efficiency.
Expected outcomes
The anticipated outcomes include multi-annual radiance maps, regional biodiversity sensitivity overlays, and comprehensive lighting quality assessments, all tailored to the Haute-Savoie context. These tools will provide local decision-makers with a clear, data-driven basis for prioritizing renovations, switching off unnecessary lighting, and adopting technologies that are more environmentally friendly and cost-effective. As the tools will be designed together with the user (SYANE), they will meet user needs and be replicable for other French and international policymakers focused on lighting. ORENOS represents a transformative step in applying Earth observation to urban and ecological management, bridging the gap between large-scale satellite data and local actionable insights. By addressing the dual challenges of ecological degradation and energy overuse, the project not only contributes to sustainable development but also demonstrates the untapped potential of night-time satellite-based tools for regional governance and planning.
Keywords: Artificial lighting, SDGSAT-1, light pollution, biodiversity conservation, energy efficiency, sustainable urban planning, satellite innovation.
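The observed-versus-modelled radiance comparison could be sketched per pixel as follows. The two-term radiance model (direct upward flux plus albedo-weighted ground-reflected flux with a Lambertian factor) and the flagging threshold are simplifications for illustration; they are not the project's actual indicator definitions, and all input values are invented:

```python
import numpy as np

def modelled_radiance(direct_up, ground_flux, albedo, k=1 / np.pi):
    """Expected upward radiance per pixel: directly emitted upward light
    plus ground-reflected light (Lambertian reflection factor 1/pi)."""
    return direct_up + k * albedo * ground_flux

def lighting_quality(observed, modelled, over_factor=1.5):
    """Flag pixels whose observed radiance exceeds the modelled value by
    more than `over_factor` (candidate over-lighting / skyglow)."""
    ratio = observed / np.maximum(modelled, 1e-9)
    return ratio, ratio > over_factor

albedo = np.array([[0.1, 0.2], [0.3, 0.15]])   # from daytime imagery
direct_up = np.full((2, 2), 0.5)               # upward-directed flux (model)
ground_flux = np.full((2, 2), 10.0)            # street-lighting inventory
observed = np.array([[0.9, 1.2], [3.0, 1.0]])  # SDGSAT-1 night radiance
ratio, flagged = lighting_quality(
    observed, modelled_radiance(direct_up, ground_flux, albedo))
```

Here the flagged pixels would be candidates for field inspection (over-lighting, misaimed luminaires), while the un-flagged ratio map feeds the lighting quality indicator.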
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: B.02.07 - POSTER - Social-ecological land systems: practical approaches for improved mapping

This session invites presentations on integrating remote sensing products with data from other diverse sources to address the complex spatial patterns and dynamics of social-ecological land systems in the context of land-use and climate change. The focus will be on exploring practical applications, analyzing the complementary nature of various data sources, identifying gaps, and discussing ways to enhance transdisciplinary data synthesis efforts.
The study of socio-ecological systems is crucial for sustainability, adaptation and territorial planning. Capturing the dynamics of these systems requires a comprehensive understanding of both biophysical and socioeconomic dimensions, generally through detailed conceptual models. However, obtaining the appropriate input data is often a major challenge. The increasing availability of open access satellite imagery and a wide array of specialized products, coupled with improved accessibility to analysis-ready datasets, and the development of powerful computational analytical tools, present new opportunities to develop a better understanding of social-ecological systems and their dynamics for improved decision-making. Transdisciplinary approaches greatly enhance the process, from model design and data source identification to finding creative solutions for integrating and adapting data created for different purposes.
One practical application of this approach is social-ecological mapping based on identifying land-use typologies. Land classifications that consider human-nature interactions provide a valuable framework for studying land systems, understanding processes and contextualizing sustainable development initiatives. These methods analyze spatial patterns of characteristics along a multidimensional continuum to identify areas with similar profiles, often combining data-driven and expert-based approaches.
This session will highlight example initiatives and studies that pioneer data integration and transdisciplinary approaches for long-term social and ecological monitoring, and examine how different drivers of change impact diverse regions, from local to global scales.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Integrating global land cover and human mobility data to understand human-nature interactions

Authors: Anne Cathrine Linder, Laura Alessandretti, David Lusseau
Affiliations: National Institute of Aquatic Resources, Technical University Of Denmark, Department of Applied Mathematics and Computer Science, Technical University Of Denmark, Copenhagen Center for Social Data Science, University of Copenhagen
One of the grand challenges of sustainability science is understanding the principal trade-offs between human well-being and the natural environment. Such trade-offs depend on how well-being benefits are derived from spending time in nature and how such use of nature may in turn threaten biodiversity. This knowledge is crucial for human activities to be sustained while keeping the planet in a habitable state for our species. One poorly understood aspect of socio-ecological systems is cultural ecosystem services (CES). CES are co-produced by people undertaking activities in nature and are generally defined as intangible benefits people obtain from nature exposure. One of these non-material benefits that can be obtained from human-nature interactions is human well-being. However, we still have a poor understanding of how well-being benefits emerge from human-nature interactions, because we lack detailed observations of the cumulative exposure to nature for a large number of individuals. Smartphone users collect a wide range of information, such as geolocation, on their devices, which they can choose to make available to the apps they use. Associating such data with nature exposure and environmental features can contribute to understanding socio-ecological systems. We have developed a unique dataset of human movement across various land cover types, which offers a detailed view of global human mobility with 25 billion data points across five million individuals living in 245 countries. This smartphone dataset contains anonymized GPS location data, sampled hourly for approximately five million individuals from 2017 to 2019 by a global smartphone and electronics company, together with users’ self-reported age, gender, and country of residence, thus providing the sociodemographic context for understanding human-nature interactions. We qualified the habitat exposure of individuals using the ESA WorldCover data (at 10 m resolution) to determine the ecotype of individuals.
We find that individuals can be classified into persistent ecotype groups defined by the frequency distribution of habitat use. This work provides the foundation for developing spatio-temporal CES exploitation decision models to identify habitats and their socio-ecological contexts most likely to yield well-being benefits to users without endangering biodiversity. This research highlights how remote sensing products can be combined with human mobility data to increase our understanding of socio-ecological systems and their dynamics for improved decision-making.
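The grouping step described above (ecotypes defined by the frequency distribution of habitat use) can be sketched as follows. This is a generic illustration, not the authors' pipeline: the land cover class codes, the toy GPS fixes and the use of a tiny k-means are all assumptions:

```python
import numpy as np

def habitat_profile(visit_classes, n_classes):
    """Per-individual frequency distribution over land cover classes,
    built from the class labels of that individual's GPS fixes."""
    counts = np.bincount(visit_classes, minlength=n_classes)
    return counts / counts.sum()

def kmeans(profiles, k, iters=50, seed=0):
    """Tiny k-means clustering of habitat-use profiles into groups."""
    rng = np.random.default_rng(seed)
    centers = profiles[rng.choice(len(profiles), k, replace=False)]
    for _ in range(iters):
        # assign each profile to its nearest center, then update centers
        labels = np.argmin(((profiles[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = profiles[labels == j].mean(axis=0)
    return labels

# Hypothetical classes: 0 = built-up, 1 = cropland, 2 = forest
individuals = [np.array([0, 0, 0, 2]), np.array([0, 0, 0, 0]),
               np.array([2, 2, 2, 0]), np.array([2, 2, 2, 2])]
profiles = np.stack([habitat_profile(v, 3) for v in individuals])
ecotypes = kmeans(profiles, k=2)  # e.g. "urban" vs "forest" exposure groups
```

In practice the class labels per GPS fix would come from sampling the WorldCover raster at each fix location, and the number and interpretation of groups would be determined from the data.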
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Optimizing Land Cover Classification with Limited Training Data: A Multi-Temporal Machine Learning Approach at the Valencian Anchor Station

Authors: David Garcia Rodriguez, Dra Ana Perez Hoyos, Dra Beatriz Martinez Diaz, Dr Ernesto Lopez Baeza, Dr Jose Javier Samper Zapater, Dr Juan Jose Martinez Dura
Affiliations: Universitat de Valencia
The Valencian Anchor Station (VAS), located in eastern Spain, is a pivotal site for the calibration and validation of remote sensing instruments and products, particularly for European and global Earth observation missions. Given its critical role in validation activities, an accurate land cover characterization is essential. This study performs a detailed land cover classification for the VAS, covering a core area of 10×10 km² and an extended region of 30×30 km², for the year 2021, using multi-temporal Sentinel-2 imagery. The research focuses on several key aspects: (1) feature selection to optimize classification performance; (2) temporal analysis at monthly and seasonal resolutions to identify optimal data combinations; (3) evaluation of six machine learning (ML) algorithms: Classification and Regression Trees (CART), Gradient Tree Boosting (GTB), k-Nearest Neighbors (k-NN), Naïve Bayes (NB), Random Forest (RF), and Support Vector Machines (SVM); and (4) the effect of reducing training sample sizes while extending classification to buffer zones of 1 km, 5 km, and 10 km. The study highlights the superior performance of the Random Forest (RF) model, which achieved the highest overall accuracy (90%) and Kappa coefficient (87%) with seasonal composites of July and August, coinciding with the phenological peak of vineyards, the predominant land cover in the area. Producer’s and user’s accuracy values were above 76% across most classes, although challenges were observed for certain land covers, such as bare soil and fruit trees, due to spectral overlaps. Feature selection played a critical role, with 12 key features identified using Pearson correlation and permuted variable importance measures, which minimized redundancy and enhanced model stability. Temporal analysis underscored the influence of seasonal variability, demonstrating that classification accuracy improves when aligned with the phenological stages of dominant land cover types.
The methodology also assessed the transferability of classification to extended buffer zones without increasing reference data. Results showed that while accuracy slightly decreased in larger buffers, the RF model maintained robust performance, demonstrating the feasibility of scaling the approach. The findings were benchmarked against global datasets, including ESA WorldCover and ESRI Land Use/Land Cover, validating the reliability and adaptability of the proposed methods. This research provides valuable insights into the efficient application of traditional ML algorithms for high-resolution land cover classification, even with limited training data. By offering a practical, reproducible framework, the study contributes to advancing land cover monitoring in specialized validation areas and supports the broader implementation of remote sensing techniques in similar contexts worldwide.
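The correlation-based part of the feature selection described above can be illustrated with a simple redundancy filter: a feature is dropped when its absolute Pearson correlation with an already-kept feature exceeds a threshold. This is a generic sketch with a made-up threshold and synthetic features, not the authors' exact procedure:

```python
import numpy as np

def prune_correlated(X, threshold=0.9):
    """Greedy redundancy filter: keep a feature only if its absolute
    Pearson correlation with every already-kept feature is below
    `threshold`. X has shape (n_samples, n_features)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

# Synthetic example: one band, a nearly duplicated index, one
# independent feature (names and values are hypothetical).
rng = np.random.default_rng(2)
n = 500
red = rng.normal(size=n)                                  # e.g. a red band
index_like = 2.0 * red + rng.normal(scale=0.01, size=n)   # near-duplicate
texture = rng.normal(size=n)                              # independent
X = np.column_stack([red, index_like, texture])
kept = prune_correlated(X)   # the near-duplicate column is dropped
```

In the study this kind of filter would be paired with permuted variable importance from the trained RF to rank the surviving features.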
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: D.02.05 - POSTER - EO-based solutions to address civil security-related scenarios

The present geopolitical landscape, coupled with heightened awareness of global implications, is giving rise to security scenarios of greater complexity than those encountered in the past. Together with traditional security scenarios (e.g. monitoring of critical infrastructure, humanitarian aid, maritime awareness), new ones are emerging (e.g. energy security, health security). These new challenges require multidisciplinary efforts to fully understand their causes and implications as well as to develop new policies that work towards an adequate response.

Earth Observation (EO) data are widely recognized as a valuable tool in support of decision and policy-making processes which, together with other data sources (e.g. statistics, in situ, geolocation), can significantly contribute to the analysis of the above scenarios. The rapid expansion of remote sensing satellite constellations dedicated to EO has revolutionized the continuous monitoring of crisis areas, providing unprecedented temporal and spatial insights. In the future, the role of space-based remote sensing will only increase, driven by the growing involvement of governments, public institutions, and private companies. The current data flow coming from traditional EO players and new space actors opens the door to a wealth of data to be acquired, catalogued, processed, analyzed and visualized.

On the other hand, new technological trends to process and manage vast amounts of Big geospatial data are also developing at a fast pace and their exploitation is becoming fundamental to fully unlock the potential of the available data for a better understanding of security-related scenarios.

This session aims at incentivizing the interest of the EO community in civil security-related activities through technical contributions based on Big Data analytics, Machine Learning, Artificial Intelligence, Data Fusion, Semantic Knowledge Graphs, Advanced Image Processing and other relevant technologies along the different steps of the data value chain. In particular, the session will demonstrate how these innovative methods and technologies can improve current capabilities to address the whole spectrum of civil security applications.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Projecting Ukraine’s Agricultural Future: synergy of the Global Change Analysis Model and Satellite-Based Insights on Crop Areas and Income Trends Amid Conflict

Authors: Dr. Nataliia Kussul, Dr. Andrii Kolotii, Sofiia Drozd, Volodymyr Kuzin, Yevhenii Salii, Prof. Andrii Shelestov, PhD. Nazar Kholod
Affiliations: University Of Maryland, Space Research Institute NAS Ukraine and SSA Ukraine, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Pacific Northwest National Laboratory
Agriculture, particularly crop production, is one of the cornerstones of Ukraine’s economy, providing significant revenue and employment and sustaining rural livelihoods. Ukraine plays a crucial role in global food security, supplying significant quantities of wheat, corn, and sunflower oil to international markets. However, the ongoing conflict caused by the russian invasion in 2022 has profoundly disrupted Ukraine’s agricultural sector, creating challenges that threaten both the country’s economic stability and global food supplies. The physical damage to farmland, displacement of labor, and restricted access to valuable agricultural areas have significantly impacted Ukraine’s ability to produce and export essential crops. The ongoing war has also severely impacted access to reliable statistical data for the agricultural sector, especially in regions near the russian border and in areas of active conflict, where data collection is either severely limited or altogether impossible. This lack of data presents a major challenge for accurately forecasting crop trends and planning effective agricultural recovery and economic resilience in the future. Given these data limitations, our study takes a new approach by combining advanced satellite-based land cover maps with projections from the Global Change Analysis Model (GCAM). GCAM is an integrated assessment model developed and maintained at the Pacific Northwest National Laboratory’s Joint Global Change Research Institute (JGCRI, 2023). The model provides insight into how global and regional climate, socioeconomic, and policy scenarios affect land use, water, energy, and economic outcomes. Satellite-based data, enhanced by machine learning techniques as demonstrated by Kussul et al. (2023) and presented in the form of land cover maps, offer high-resolution and accurate insights into Ukraine’s agricultural landscape.
This approach enables mapping crop areas with higher accuracy, providing reliable data in regions where ground-based surveys are not feasible due to wartime security concerns. By using these satellite-based maps, which give a spatially comprehensive overview of Ukraine’s agricultural areas, to calibrate GCAM, we create a foundation for producing more precise projections of future crop areas, crop production, and agricultural revenue streams, even in areas affected by the ongoing conflict. Through GCAM’s projections, we explore how Ukraine’s agricultural sector may evolve under different climate, socioeconomic, and geopolitical conditions. These projections carry a degree of uncertainty due to the volatility introduced by the conflict, particularly concerning international trade routes, resource access, and regional stability. Ultimately, this research underscores the added value of integrating satellite data with advanced global modeling to bridge information gaps in conflict-affected areas where the quality of official statistics can be limited. By adding high-resolution, accurate land cover maps, we offer a more robust basis for projecting Ukraine’s agricultural future and planning for economic resilience. These satellite-based insights are critical for stakeholders navigating Ukraine’s complex agricultural and economic landscape, who can use the findings to make informed decisions about supporting Ukraine’s agricultural recovery, promoting sustainable practices, and stabilizing food supplies. In an era marked by conflict and uncertainty, these projections offer a pathway for understanding and adapting to the challenges facing Ukraine’s agricultural sector.
References:
1. JGCRI, 2023. GCAM Documentation (Version 7.0). https://github.com/JGCRI/gcam-doc. Joint Global Change Research Institute. https://zenodo.org/doi/10.5281/zenodo.11377813.
2. Kussul, Nataliia, et al. "Assessing damage to agricultural fields from military actions in Ukraine: An integrated approach using statistical indicators and machine learning." International Journal of Applied Earth Observation and Geoinformation 125 (2023): 103562.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Improving Railway Safety: Using Satellite Data and AI for Monitoring and Risk Prediction

Authors: Simona Reale, Mattia Reale, Gabriele Punzo, Silvia Liberata Ullo
Affiliations: University of Sannio
The efficient and safe operation of railway infrastructure is crucial for the transportation sector, impacting economic growth and societal well-being. With advancements in remote sensing (RS) technologies, Synthetic Aperture Radar (SAR) and optical satellite data have emerged as powerful tools for observing and monitoring railway systems. This research summarizes an extensive analysis conducted across Italy, focusing on several critical sites within the Italian railway network. Various case studies were considered, yielding promising results, and both classical methodologies and advanced techniques based on artificial intelligence (AI) were taken into account. The ultimate goal is to provide a new paradigm: an efficient tool for enhancing the maintenance and management of railway networks. This study highlights the use of SAR and optical technologies for railway maintenance and prevention, demonstrating how these technologies can improve infrastructure efficiency and safety. By leveraging public-private partnerships, including collaborations with companies such as Trenitalia, RFI, and Italferr, we aim to address the unique demands of both institutional and commercial stakeholders. An in-depth analysis of the state of the art (SOTA) revealed a surprising lack of similar studies in Italy. SAR and optical satellite data, primarily sourced from the ESA Copernicus missions, were integrated to offer a comprehensive approach to railway infrastructure monitoring, demonstrating through various case studies how innovative solutions can arise from combining the two data types. SAR data, particularly Differential Interferometric SAR (DInSAR) techniques, were employed to monitor ground deformation and identify potential risks to railway infrastructure. Landslides are one of the primary geological threats to railways, and SAR technology provides a reliable means to detect and assess such hazards under certain conditions.
One of our notable case studies involved a landslide that occurred on the tracks in Valbrenta, Italy, in January 2024. Two SAR images acquired by ESA's Sentinel-1 satellite, one before and one after the event, were analyzed. By applying interpolation techniques in the ESA SNAP software, the terrain displacement was measured with millimeter-level precision. Tools such as Google Earth further enabled a visual representation of the damage, enhancing understanding of the event's impact. Optical satellite data from Sentinel-2, on the other hand, were used to calculate indices such as the NDVI (Normalized Difference Vegetation Index) and NDWI (Normalized Difference Water Index), which measure vegetation growth and water presence, respectively. For instance, a case study focusing on the Valle Caudina railway line in Italy revealed that the NDVI tripled, as expected, during the railway's period of inactivity compared to its operational state in 2019. These technologies demonstrate significant potential for future applications, offering precise tools for monitoring infrastructure stability, detecting environmental changes, and managing risks associated with the several factors affecting railway lines. Additionally, a machine learning (ML) model has been developed as part of an ongoing effort to enable predictive monitoring of railway infrastructure. This approach aims to identify and mitigate potential issues before they escalate into critical failures. Additional case studies explore how predictive models could address geological risks, such as the above-mentioned landslides, which pose significant threats to railway networks. By integrating RS data with ML algorithms, the study provides a foundation for developing scalable and adaptable models to forecast these risks in various regions.
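The two indices mentioned above follow standard normalized-difference formulas; a minimal sketch using Sentinel-2's conventional band assignments (B8 for near infrared, B4 for red, B3 for green):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).
    For Sentinel-2, NIR is band B8 and red is band B4."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): (green - NIR) / (green + NIR).
    For Sentinel-2, green is band B3 and NIR is band B8."""
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    return (green - nir) / (green + nir)

# Toy surface reflectances: healthy vegetation is bright in NIR, water is dark
veg_index = ndvi(nir=0.6, red=0.2)      # positive for vegetated pixels
water_index = ndwi(green=0.3, nir=0.1)  # positive for open water
```

Both indices range from -1 to 1 and are typically computed per pixel over whole reflectance arrays; the scalar inputs here are only for illustration.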
The research involved compiling a robust dataset, including historical landslide records from the Italian catalog (ITALICA) and geospatial factors such as precipitation, slope, topographic wetness index, proximity to infrastructures, land cover, and lithology. This dataset was processed and optimized for ML applications, highlighting the potential for future refinement and expansion. Several ML algorithms were evaluated, with the Gradient Boosting Classifier emerging as the most effective due to its accuracy. The model was trained using Python to assess landslide susceptibility, generating a susceptibility index that quantifies risk levels in different areas. The model’s flexibility allows for scenario-based predictions, including projections under potential climate change conditions. Preliminary results indicate an accuracy of 79% in predicting landslide occurrences, demonstrating the model’s effectiveness. Notably, proximity to infrastructures was identified as the most critical factor influencing landslide risk, with approximately 85% of past landslides occurring within 143 meters of transportation infrastructure. Projections for 2041–2050 suggest that overall landslide susceptibility in Italy will remain consistent with current levels, with localized increases expected in northeastern Italy. This research not only highlights the immediate applicability of predictive modeling to railway infrastructure but also serves as a springboard for future developments. The methodology can be extended to other regions or adapted to include additional risk factors and datasets. By combining advanced RS techniques and ML, this study opens pathways for building proactive, scalable, and adaptive systems to safeguard infrastructure, optimize maintenance strategies, and anticipate challenges linked to climate and environmental changes. 
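A susceptibility model of the kind described can be sketched as follows. The feature set and synthetic labels here are purely illustrative stand-ins for the study's actual dataset (ITALICA landslide records plus geospatial factors), and the accuracy obtained on toy data says nothing about the reported 79%:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Illustrative predictors standing in for the study's factors
# (precipitation, slope, distance to infrastructure, ...).
X = np.column_stack([
    rng.gamma(2.0, 50.0, n),   # precipitation proxy (mm)
    rng.uniform(0, 45, n),     # slope (degrees)
    rng.exponential(200, n),   # distance to infrastructure (m)
])
# Synthetic rule: wetter, steeper, closer-to-infrastructure cells slide more often
risk = 0.004 * X[:, 0] + 0.02 * X[:, 1] - 0.003 * X[:, 2]
y = (risk + rng.normal(0, 0.5, n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
# predict_proba yields a continuous susceptibility index in [0, 1]
susceptibility = model.predict_proba(X_te)[:, 1]
accuracy = model.score(X_te, y_te)
```

The continuous `predict_proba` output plays the role of the susceptibility index described above, which can then be thresholded or mapped per grid cell.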
By analyzing these case studies, the research highlights the benefits, challenges, and potential improvements in using these technologies for railway monitoring and maintenance. The findings are expected to contribute to the development of more effective and efficient monitoring systems, ultimately enhancing the safety and reliability of railway operations. The case studies, focusing on specific Italian sites, cover diverse geographic and environmental contexts, including urban areas, coastal regions, and hilly or mountainous terrains where landslides can occur and affect railway functionality. The research shows how risk-prone areas can be identified and managed through data processing techniques that combine traditional methods and advanced AI-based approaches. Moreover, to monitor railway infrastructure, the software QGIS (Quantum Geographic Information System) has been used. In a detailed QGIS project, various essential shapefiles are integrated to analyze the impact of hydrogeological risks on railway infrastructure. These include the Italian railway network, Hydrogeological Risk Areas for landslides (PAI), and the Inventory of Landslides in Italy (IFFI). The Italian railway network shapefile provides a comprehensive map of the country’s railway lines, offering crucial insights into their geographic layout. The Hydrogeological Risk Areas for landslides shapefile highlights areas designated by the Italian government as high-risk for landslides, enabling the identification of railway sections exposed to these hazards. The Inventory of Landslides in Italy records all documented landslides, providing valuable historical and current data on landslide activity across the country. 
Integrating these datasets into a unified QGIS platform has allowed for the identification of railway sections that intersect high-risk landslide or subsidence areas, enabling the creation of a prioritized list of the most vulnerable sections, based on factors such as their proximity to the ground or placement on artificial structures. The analyzed case studies span a wide temporal and spatial interval, showcasing the flexibility of these technologies in addressing different scenarios. The integration of classical and AI-driven data processing has proven instrumental in extracting actionable insights, leading to timely and effective interventions. This study not only underscores the benefits and challenges of using SAR and optical satellite data for railway monitoring and maintenance but also identifies potential improvements in predicting risks and activating suitable actions under specific conditions. The findings will contribute to developing more effective and efficient strategies for enhancing the safety and reliability of railway operations and supporting managers in making informed decisions. In conclusion, we believe that this study is significant for supporting companies such as Trenitalia, RFI, Italferr, and others involved in the management and maintenance of railway networks in Italy. Furthermore, the proposed paradigms and models could not only be limited to the railway sector but could be extended to other fields such as road infrastructure management, dam monitoring, and environmental surveillance, offering innovative solutions for a wide range of applications. The adoption of SAR and optical technologies, supported by innovative business models and public-private partnerships, can ensure more precise and timely maintenance. This approach will enhance the safety and reliability of the national railway system, meeting the evolving needs of the market effectively. 
By embracing these new paradigms and innovative models, we aim to provide valuable insights that can drive the future of railway infrastructure management.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Using met-ocean repositories to support the creation of large datasets for AI applications

Authors: Paolo Vavasori, Federica Braga, Angela Carmen Cristofano, Roberto Del Prete, Margareth Di Vaia, Amedeo Fadini, Federico Franciosa, Maria Daniela Graziano, Sergio Iervolino, Andrea Mazzeo, Stefano Menegon, Gian Marco Scarpa, Marisa Sperandeo, Giuliano Vernengo, Diego Villa, Davide
Affiliations: CNR-ISMAR Venezia
Artificial Intelligence (AI) algorithms are increasingly recognized as powerful tools across a wide range of disciplines, including remote sensing applications for physical oceanography and maritime security. The successful implementation of these tools requires the availability of large amounts of information and extensive training datasets. In this contribution we present an automated procedure for retrieving and processing meteo-oceanographic data from public repositories to support the analysis of potentially large collections of satellite images from an AI application perspective. For each image, depending on its acquisition time and location, the algorithm retrieves gridded fields of key atmospheric and oceanographic variables from Copernicus services, computes derived quantities, and stores the whole information in a self-explanatory and interoperable netCDF file, additionally generating 2D plots of the extracted variables. More specifically, hourly atmospheric fields (10 m wind components, total cloud cover, and total precipitation) at 1/4° grid resolution are retrieved from the ERA5 reanalysis via the Copernicus Climate Change Service (C3S). Oceanographic quantities such as wave spectral parameters, surface current velocity components, potential temperature, and surface and near-surface (upper 10 m) salinity and temperature are collected from the Copernicus Marine Service (CMEMS) at hourly to daily frequency and spatial resolutions ranging from 1/24° to 1/5°. Wave steepness, surface and near-surface potential density anomaly and its horizontal gradients, as well as surface current convergence and near-surface buoyancy frequency, are additionally computed in the process. This procedure has been developed as part of the UEIKAP (Unveil and Explore the In-depth Knowledge of earth observation data for maritime Applications) Project, funded by the Italian Ministry of University and Research.
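One of the derived quantities mentioned, surface current convergence, can be computed from the gridded velocity components with finite differences. A minimal sketch (the grid spacing and toy velocity field are illustrative, not CMEMS values):

```python
import numpy as np

def surface_convergence(u, v, dx, dy):
    """Horizontal convergence -(du/dx + dv/dy) of a gridded surface current.
    u, v: 2-D velocity components (m/s) on a regular grid with spacing
    dx, dy (m); positive values indicate convergent flow."""
    dudx = np.gradient(u, dx, axis=1)  # d/dx along columns
    dvdy = np.gradient(v, dy, axis=0)  # d/dy along rows
    return -(dudx + dvdy)

# Purely convergent toy field: u = -k*x, v = -k*y with k = 1e-3 s^-1,
# so the convergence is 2e-3 s^-1 everywhere.
x = np.linspace(-1e3, 1e3, 21)
y = np.linspace(-1e3, 1e3, 21)
X, Y = np.meshgrid(x, y)
conv = surface_convergence(-1e-3 * X, -1e-3 * Y, dx=100.0, dy=100.0)
```

On real CMEMS grids the spacing would be derived from the latitude/longitude coordinates rather than assumed uniform in metres.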
The project aims to create an innovative Artificial Intelligence (AI) architecture for the detection of non-collaborative vessel wakes in satellite products, particularly Synthetic Aperture Radar (SAR) and optical images. The methodology has been applied to a dataset of over 1,000 SAR and optical images, complementing them with environmental information. The algorithm's ability to select which parameters are downloaded and computed offline specifically meets the needs of the UEIKAP Project, which focuses on the factors influencing wake detectability and the appearance of look-alike features, such as wind streaks or filaments, in satellite-borne sensor imagery. Moreover, the modular structure of the implemented procedure allows easy adaptation to other scopes, enabling its use in a range of oceanographic applications.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: SaferPlaces AI-based Digital Twin Platform for Flood Risk Intelligence in Urban Area

Authors: Stefano Bagli
Affiliations: Gecosistema Srl
Flooding remains one of the most devastating natural disasters, exacerbated by climate change, rapid urbanization, and aging infrastructure. The impacts of floods extend beyond environmental concerns to critical civil security challenges, such as protecting lives, safeguarding infrastructure, and ensuring the continuity of essential services. Addressing these challenges requires innovative tools that leverage the vast capabilities of Earth Observation (EO) data, advanced modeling, and real-time analytics. The SaferPlaces platform exemplifies such innovation, providing high-resolution, real-time flood risk intelligence to support decision-making in both immediate crises and long-term resilience planning. The growing frequency and severity of flooding, as highlighted by events like the Emilia-Romagna floods of May 2023, underscore the urgency of advancing flood risk management technologies. Traditional flood risk assessment methods, which often rely on static, resource-intensive tools, fail to address the dynamic and complex nature of modern flood scenarios. SaferPlaces addresses these limitations by integrating EO data from satellites such as Sentinel-1 and Sentinel-2 with in situ measurements, IoT networks, and advanced hydrodynamic modeling. This integration enables the platform to provide actionable insights tailored to the needs of diverse stakeholders, from municipal planners and civil protection agencies to insurers and policymakers.

Earth Observation Data: A Foundation for Flood Risk Intelligence

EO data has revolutionized environmental monitoring by offering unparalleled spatial and temporal insights into natural and human-made phenomena. Sentinel-1, with its synthetic aperture radar (SAR) capabilities, delivers all-weather, day-and-night observations, making it invaluable for monitoring flood extents during extreme weather events.
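A common way such SAR-based flood extents are derived, shown here only as a generic sketch and not as SaferPlaces' actual algorithm, is to threshold the post-event backscatter (open water scatters specularly and so appears dark) and additionally require a marked drop relative to a pre-event image:

```python
import numpy as np

def flood_extent(pre_db, post_db, water_thresh_db=-18.0, drop_db=3.0):
    """Flag pixels as flooded where post-event backscatter is both dark
    (low sigma0, typical of open water) and markedly lower than in the
    pre-event image. Both thresholds are illustrative and scene-dependent."""
    pre_db = np.asarray(pre_db, float)
    post_db = np.asarray(post_db, float)
    return (post_db < water_thresh_db) & ((pre_db - post_db) > drop_db)

# Toy 2x2 sigma0 chips (dB): two pixels darken sharply after the event
pre = np.array([[-8.0, -9.0], [-10.0, -8.5]])
post = np.array([[-20.0, -9.5], [-22.0, -8.0]])
mask = flood_extent(pre, post)
```

The change-detection term helps separate newly flooded land from permanent water bodies, which are dark in both acquisitions.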
Sentinel-2 complements this capability by providing high-resolution optical imagery, enabling detailed assessments of land cover changes and infrastructure impacts (Drusch et al., 2012). During the Emilia-Romagna floods, Sentinel-1 data was used to map flood extents with remarkable precision, providing critical information for evacuation planning and resource allocation. By incorporating these EO data into its platform, SaferPlaces was able to offer real-time flood maps that guided emergency responders in prioritizing high-risk areas. This application highlights the potential of EO data to enhance situational awareness and decision-making during flood events.

The SaferPlaces Platform: A Transformative Approach to Flood Risk Management

At the core of SaferPlaces is its ability to create a digital twin of urban areas: a dynamic, virtual representation of a city that evolves with real-time data and changing conditions. Digital twins allow stakeholders to simulate flood scenarios under various conditions, such as extreme rainfall, rising sea levels, or infrastructure changes (Batty, 2018). This capability is essential for understanding the interplay between urban development and flood risks, enabling more precise and adaptive risk assessments. For example, in Rimini, Italy, SaferPlaces was used to evaluate the effectiveness of nature-based solutions for coastal flood mitigation. By modeling scenarios with and without these interventions, the platform quantified the potential benefits, helping policymakers justify investments in green infrastructure. This application underscores the role of SaferPlaces in integrating flood resilience into urban planning and policy development. In addition to digital twin technology, the platform leverages real-time data integration to enhance its responsiveness during crises. IoT devices, such as water level sensors and weather stations, provide continuous measurements that are incorporated into the platform’s models.
This capability allows SaferPlaces to deliver up-to-the-minute alerts and recommendations, enabling stakeholders to respond proactively to emerging threats.

Advancing Scientific Understanding Through High-Resolution Mapping and Modeling

One of the key strengths of SaferPlaces is its ability to generate high-resolution flood maps that capture the spatial variability of flood impacts. By combining EO data with hydrodynamic models, the platform produces maps that account for real-time conditions, forecasted weather, and climate change projections. These maps are critical for identifying vulnerable areas, assessing the extent of damage, and planning recovery efforts. During the Emilia-Romagna floods, SaferPlaces used EO data to conduct post-event analyses, quantifying economic losses and guiding resource allocation for recovery. These insights not only supported immediate response efforts but also informed long-term strategies for building resilience against future floods. By integrating historical records with real-time data, SaferPlaces provides a comprehensive understanding of flood risks, bridging the gap between short-term crisis management and long-term adaptation.

Enhancing Civil Security Applications

The capabilities of SaferPlaces extend beyond flood risk management to broader civil security applications. The platform’s integration of diverse data sources and advanced analytics makes it a versatile tool for protecting critical infrastructure, supporting emergency response, and enabling risk transfer mechanisms. For instance:
1. Critical Infrastructure Protection: SaferPlaces helps assess the vulnerability of key assets, such as transportation networks, power grids, and hospitals, to flood risks. By identifying high-risk areas, the platform supports proactive measures to enhance infrastructure resilience.
2. Emergency Response: The platform delivers real-time alerts and flood maps that guide evacuation planning and resource deployment.
During the Emilia-Romagna floods, these capabilities were instrumental in prioritizing interventions and minimizing losses.
3. Insurance and Risk Management: SaferPlaces supports the development of parametric insurance products by providing high-resolution risk maps and damage assessments. These tools enable insurers to price risks accurately and develop innovative risk transfer mechanisms.
4. Climate Adaptation Policy: By modeling future flood scenarios under various climate pathways, the platform informs policies that prioritize investments in flood resilience. This aligns with the IPCC’s recommendations for anticipatory adaptation measures to reduce climate risks (IPCC, 2022).

Technological Innovations Driving SaferPlaces

SaferPlaces exemplifies the integration of emerging technologies into flood risk management. The platform’s AI-driven algorithms process vast geospatial datasets, transforming raw EO data into actionable intelligence (Reichstein et al., 2019). Advanced image processing techniques enable the generation of detailed flood maps, while cloud-based scalability ensures robust performance even during large-scale crises. The fusion of EO data with in situ observations and historical records provides a multi-dimensional view of flood risks. This holistic approach enhances the accuracy and reliability of risk assessments, making SaferPlaces a valuable tool for both immediate and long-term applications. By combining scientific rigor with practical usability, the platform bridges the gap between data and decision-making.

Future Directions: Scaling EO-Based Solutions for Resilient Security

As climate change accelerates and security scenarios grow more complex, the demand for integrative solutions like SaferPlaces will only increase. The platform’s ability to address both immediate flood risks and long-term resilience positions it as a critical tool for civil security.
Its success during high-impact events, such as the Emilia-Romagna floods, demonstrates its readiness to tackle the challenges posed by climate-induced risks. Looking forward, the integration of new EO technologies, such as hyperspectral imaging and higher-resolution SAR sensors, will further enhance SaferPlaces’ capabilities. Additionally, advancements in machine learning and data fusion techniques will enable the platform to process and analyze even larger volumes of data, delivering insights with greater speed and precision.

Conclusion

The SaferPlaces platform represents a paradigm shift in flood risk management, leveraging EO data, digital twin technology, and advanced analytics to provide actionable insights for civil security. By addressing both immediate crisis needs and long-term resilience, the platform exemplifies the transformative potential of EO-based solutions. Its proven efficacy during events like the Emilia-Romagna floods underscores its value in protecting communities, safeguarding infrastructure, and fostering sustainable urban development. As climate-induced risks continue to grow, platforms like SaferPlaces will play an increasingly vital role in shaping a safer, more resilient future.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Preventive Infrastructure Monitoring: AI-Driven Integration of SAR and Meteorological Data for Bridge Failure Prediction

Authors: Andrea Cavallini, Tommaso Martire, Angie Catalina Carrillo Chappe
Affiliations: Starion Group
Satellite-based Synthetic Aperture Radar (SAR) provides valuable all-weather, day-and-night acquisitions, essential for large-scale infrastructure monitoring. SAR measurements offer a powerful tool for monitoring Earth's surface, enabling the detection of ground deformation, structural changes, and potential hazards such as subsidence and flooding. This capability is increasingly important as climate change accelerates infrastructure deterioration, impacting functionality, safety, operation, and maintenance. SAR can identify and quantify changes in various environments: for instance, it can monitor bridges for deflection and cracking, assess ground-motion risks to people and property, track road infrastructure conditions (including pavement degradation and landslides) to enable timely repairs and improve road safety, and support urban planning by identifying areas of subsidence and monitoring the impact of construction activities. Given that the design life of bridges is typically more than 100 years, they are more exposed to climate change risks than other transportation infrastructure. Changes in temperature and precipitation, together with sea level rise, are expected to have a great impact on the safety and functionality of bridges. A wide range of SAR-based techniques has been developed to assess structural health, enabling the detection and monitoring of subtle surface displacements, deformation patterns, and changes over time. By exploiting SAR data, engineers and researchers can gain valuable insights into infrastructure behaviour, identify potential risks, and support proactive maintenance strategies. Traditional InSAR technology, which uses pairs of radar images to measure ground deformation, faces significant challenges because variations in the ground's reflectivity and atmospheric effects, especially water vapor, can introduce errors into the measurements.
Multi-Temporal InSAR (MT-InSAR) was developed to overcome these limitations: instead of using just two images, MT-InSAR analyzes multiple radar images collected over time to create a time series of ground deformation. The most widely used technique is Persistent Scatterer Interferometry (PSI), which measures surface deformation by comparing multiple SAR images and represents an advanced class of Differential Interferometric Synthetic Aperture Radar (DInSAR) techniques. However, DInSAR is limited by the need for a large number of points to accurately assess ground deformation parameters, especially at high spatial and temporal resolutions. One of the main challenges in DInSAR analysis lies in the automated identification of points of interest, which often requires extensive and tedious data processing to ensure the reliability and accuracy of the results. In addition, climate change has exacerbated the frequency and intensity of extreme weather events, posing significant risks to the integrity of aging infrastructure designed and constructed without these modern challenges in mind. As a result, it is important to assess the vulnerability of such structures to extreme events. Artificial Intelligence (AI) is revolutionizing infrastructure management by enabling data-driven, predictive analytics for proactive maintenance and risk mitigation. By leveraging SAR measurements, AI facilitates the correlation of historical and real-time climate data with infrastructure behavior, enabling predictive modeling and early warning systems for potential failures. Advanced AI algorithms not only reduce noise and enhance phase unwrapping accuracy but also learn to identify and mitigate decorrelation noise and atmospheric effects from extensive training datasets.
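The basic DInSAR observable these methods refine is the interferometric phase; a minimal sketch of the standard conversion from unwrapped phase to line-of-sight displacement (the sign convention varies between processors):

```python
import numpy as np

SENTINEL1_WAVELENGTH_M = 0.0555  # C-band, ~5.55 cm

def los_displacement(unwrapped_phase_rad, wavelength=SENTINEL1_WAVELENGTH_M):
    """Convert unwrapped differential interferometric phase (radians) to
    line-of-sight displacement (metres). One 2*pi phase cycle corresponds
    to lambda/2 of motion, so d = -lambda * phi / (4 * pi)."""
    return -wavelength * np.asarray(unwrapped_phase_rad, float) / (4.0 * np.pi)

# One full fringe (2*pi) maps to lambda/2, i.e. roughly 2.8 cm of LOS motion
one_fringe = los_displacement(2.0 * np.pi)
```

This per-fringe sensitivity of a few centimetres is what, after multi-temporal stacking, yields the millimetre-level deformation rates used for structural monitoring.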
Through automation of critical steps in the DInSAR workflow, such as coherence estimation, phase filtering, and deformation detection, AI dramatically accelerates data processing, paving the way for near-real-time monitoring systems. Beyond efficiency, AI enables the seamless integration of DInSAR data with weather models and other sources, enhancing the precision and reliability of displacement measurements. Most importantly, AI's forecasting capability shines in its ability to analyze both historical trends and real-time data to predict future risks. By providing proactive risk assessments, AI empowers decision-makers with the tools needed for timely disaster prevention and mitigation, fundamentally transforming the way infrastructure health is monitored and maintained. This enables the development of predictive models that can assess the probability of failure for specific infrastructure components under diverse climatic conditions. Additionally, AI facilitates the implementation of real-time monitoring systems that continuously analyze SAR data and meteorological conditions. These systems can identify early warning indicators of infrastructure distress, enabling proactive maintenance and risk mitigation strategies. This work focuses on multiple aged bridge sites, a subset of which experienced incidents (e.g. Ponte Albiano Magra 08/04/2020, Viadotto Torino-Savona 24/11/2019, Ponte Morandi 14/08/2018, Ponte Ozzanello 30/10/2023, Ponte Molina 18/01/2023). Historical Sentinel-1 SAR data and meteorological data, such as temperature, precipitation, wind speed, and humidity, were collected for each site to analyze potential correlations between environmental factors and structural degradation. This analysis aimed to identify early warning indicators that could be used to assess the risk of future incidents and implement timely mitigation measures.
Our initial findings indicate a correlation between the algorithm's predicted risk and the observed risk of bridge failure. The AI framework consists of a Convolutional Neural Network (CNN) that extracts discriminative features, such as deformation patterns and textural changes, from Sentinel-1 acquisitions. These features are concatenated with the corresponding meteorological time series and fed into an Attention-based Bidirectional Long Short-Term Memory (Att-BiLSTM) network to predict the likelihood of structural failure. Structural failure was considered to have occurred only when an actual incident, such as a collapse or landslide, was recorded (ground truth data). This binary classification approach allowed for the evaluation of the model's ability to accurately identify high-risk scenarios. Unlike traditional DInSAR methods, which primarily provide a retrospective assessment of infrastructure health, our AI-powered approach offers predictive capability. By leveraging real-time, open-source data streams, the model enables timely identification of potential infrastructure vulnerabilities and prioritization of mitigation strategies. This proactive approach empowers decision-makers to take preventative actions before catastrophic failures occur. By providing quantitative risk evaluations, this methodology supports decision-makers within environmental protection agencies in making more informed maintenance plans and potential interventions. This AI-driven approach not only enhances SAR data processing but also forecasts risk factors by analyzing a wide range of structural and climatic variables. Specifically, it enables the prediction of potential hazards and vulnerabilities for bridges, supporting early intervention strategies.
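The attention step of such an Att-BiLSTM head can be illustrated in isolation: score each timestep's hidden state against a query vector, softmax the scores, and pool the sequence with the resulting weights. This is a toy numpy sketch of the mechanism, not the authors' trained model:

```python
import numpy as np

def attention_pool(hidden, query):
    """Attention pooling over a sequence of hidden states.
    hidden: (T, D) array of per-timestep states (e.g. BiLSTM outputs);
    query: (D,) vector (learned in a real model). Returns the
    weighted-sum context vector (D,) and the attention weights (T,)."""
    scores = hidden @ query              # (T,) similarity per timestep
    w = np.exp(scores - scores.max())    # numerically stable softmax
    w /= w.sum()
    return w @ hidden, w

# Toy sequence of three 2-D states; the query favours the first dimension,
# so timesteps 0 and 2 receive almost all of the attention mass.
hidden = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
query = np.array([10.0, 0.0])
context, weights = attention_pool(hidden, query)
```

In the full model the context vector would feed a final classification layer producing the failure likelihood.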
In the future, this method could be extended to other critical infrastructure, such as railways and roads, offering site-specific risk assessments and improving the ability to ensure safety, reliability, and long-term operational performance.
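As a rough illustration of the fusion step described above, the following NumPy sketch concatenates per-acquisition SAR features with meteorological variables and applies a toy attention pooling followed by a sigmoid risk score. The feature sizes, random weights, and single-layer attention are hypothetical stand-ins for the CNN + Att-BiLSTM pipeline, not the authors' implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(features, w):
    # Score each time step, normalize to attention weights, take weighted sum.
    scores = features @ w
    alpha = softmax(scores)
    return alpha @ features, alpha

rng = np.random.default_rng(0)
T = 24                                  # 24 acquisitions in the time series
sar_feats = rng.normal(size=(T, 8))     # hypothetical CNN features per Sentinel-1 scene
met_feats = rng.normal(size=(T, 4))     # temperature, precipitation, wind, humidity
x = np.concatenate([sar_feats, met_feats], axis=1)  # fused sequence, shape (T, 12)

w_att = rng.normal(size=x.shape[1])     # toy attention parameters
context, alpha = attention_pool(x, w_att)

w_out = rng.normal(size=x.shape[1])     # toy classifier weights
risk = 1.0 / (1.0 + np.exp(-(context @ w_out)))     # failure probability in (0, 1)
print(context.shape, round(float(risk), 3))
```

In the real framework the recurrent BiLSTM layers would replace the single pooling step, but the shape of the data flow (per-scene features, fused with weather series, reduced to one risk score) is the same.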
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: ATR Toolbox: Operational end-to-end training environment for feature detection and recognition purposes based on high-resolution SAR imagery

Authors: Oliver Lang, Cornelius Grupp, Pawel Kluter, Diana Weihing, Christoph Stahl, Monika von der Werth
Affiliations: Airbus Defence and Space
AI-driven automatic feature or target recognition in SAR imagery (SAR-ATR) is well documented, and several neural networks are optimized for use with SAR imagery (see e.g. [1]). While scientifically well covered, the operational application of SAR-ATR often suffers from various drawbacks, in particular for object recognition and classification tasks based on high-resolution SAR imagery. Typical limitations are the availability of labelled training data, including detailed object annotation, and the significant variety of object representations. The latter is affected, for example, by the viewing geometry, the varying clutter situation, and the surface interaction of the received radar signal. Further complexity is added by the wide range of imaging modes and processing levels of SAR imagery, resulting in a large range of representations of a given object class. In this presentation we feature an operational end-to-end system for SAR-ATR, including labelling, training and prediction components. The system focuses on usage in an operational scenario and allows scaling up to a variety of use cases and SAR sensors. In the presentation we address a set of utilities supporting operational usage and reducing the need for in-depth domain knowledge in object model training. In particular, we introduce label prediction for faster annotation, the usage of synthetic SAR imagery for augmentation purposes, a reinforcement learning element for fast training convergence, and the incremental refinement of trained models to extend their applicability. We furthermore address validation routines and performance measures. The training environment is complemented by a portable inference client. An exemplary integration into a commercial geospatial software suite is presented. The introduction of the ATR toolbox is augmented by object recognition examples (e.g. 
aircraft and vessel types) derived from high-resolution TerraSAR-X imagery and a discussion about the model’s performance and generalization capability. Reference: [1] X. X. Zhu et al., "Deep Learning Meets SAR: Concepts, models, pitfalls, and perspectives," in IEEE Geoscience and Remote Sensing Magazine, vol. 9, no. 4, pp. 143-172, Dec. 2021, doi: 10.1109/MGRS.2020.3046356.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Terrain-Trafficability assessment for optimised route planning using soil moisture & load bearing information derived from EO data

Authors: Dr. Raul Scarlat, Catalin Patilea, Naman Jain, Dr. Gopika Suresh
Affiliations: Marble Imaging AG
Accurate terrain trafficability assessment is essential for applications ranging from military logistics and disaster response to agricultural operations and infrastructure planning. Traditional methods for evaluating soil strength and vehicle mobility often rely on ground-based measurements, which are time-consuming, labor-intensive, and limited in spatial and temporal coverage. To address these limitations, we propose a novel approach that leverages Earth Observation (EO) data to derive comprehensive trafficability assessments over large geographic areas. In the first phase of our methodology, we extract surface soil moisture from multispectral data provided by Sentinel-2 satellites. The Short-Wave Infrared (SWIR) bands of Sentinel-2 are particularly sensitive to soil moisture content in the top 0–5 cm layer of the soil. By applying indices such as the Modified Soil Moisture Monitoring Index (MSMMI) and the Normalized Soil Moisture Index (NSMI), we obtain quantitative estimates of surface soil moisture expressed as volumetric water content percentages. This step provides a critical snapshot of moisture conditions in bare or sparsely vegetated terrains. To enhance the accuracy of soil moisture estimation, we integrate surface moisture data from Sentinel-2 with Synthetic Aperture Radar (SAR) data from Sentinel-1 satellites. SAR imagery offers the advantage of penetrating deeper into the soil and is sensitive to surface roughness and texture variations that influence moisture retention and runoff. By deriving the Soil Water Index (SWI) from SAR data, we adjust for terrain roughness and account for subsurface moisture variations. This fusion of multispectral and SAR data yields a more robust soil moisture profile that encompasses both surface and subsurface layers. 
The subsequent step involves translating the comprehensive soil moisture data into soil strength parameters, specifically the cone index (CI), which measures the soil's resistance to penetration and its ability to support loads. Soil moisture has a direct impact on soil strength; higher moisture content typically reduces the soil's load-bearing capacity. We employ empirical models that relate soil moisture content to cone index values, tailored to specific soil types such as clay or sand. These models consider the unique behavioral characteristics of different soils under varying moisture conditions, allowing for more accurate soil strength predictions. Incorporating vehicle-specific requirements is crucial for practical trafficability assessments. Each vehicle has a Vehicle Cone Index (VCI), indicating the minimum soil strength necessary for it to traverse terrain without becoming immobilized. By comparing the calculated soil cone index with the vehicle's VCI, we assess the likelihood of successful passage. A soil cone index lower than the vehicle's VCI suggests potential mobility challenges, including the risk of the vehicle becoming stuck or experiencing reduced maneuverability. The final component of our methodology involves synthesizing the soil strength data and vehicle parameters into a comprehensive trafficability model. This model generates a trafficability score that categorizes terrain into zones of varying mobility potential: areas where vehicles can operate without issues, regions where slowdowns may occur, and zones that are impassable. This scoring system provides actionable insights for route planning, operational logistics, and risk management. Our proposed approach offers several significant advantages. By utilizing readily available satellite data from Sentinel-1 and Sentinel-2, the method is scalable and capable of providing near-real-time updates. 
The integration of multispectral and SAR data enhances the reliability of soil moisture and strength estimations. Moreover, the methodology is adaptable to different soil types and vehicle specifications, making it broadly applicable across various sectors. The anticipated outcomes of this work include improved efficiency in vehicle mobility planning, reduced operational risks associated with uncertain terrain conditions, and enhanced decision-making capabilities for stakeholders. The approach has the potential to transform how organizations assess and respond to terrain challenges, leading to safer and more effective operations. This research will be presented at the ESA Living Planet Symposium, contributing to the exploration of innovative applications of EO data in geophysical analysis and operational contexts. We aim to engage with the broader scientific community to refine the methodology further and explore collaborative opportunities for implementation.
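The moisture-to-mobility chain described above can be sketched in a few lines; the cone-index coefficients and the go/slow-go margin below are invented placeholders for illustration, not the calibrated empirical models used in the study:

```python
def cone_index(moisture_pct, soil="clay"):
    """Illustrative empirical soil-strength model: the cone index (CI)
    drops as volumetric moisture rises; coefficients are placeholders."""
    base, slope = {"clay": (800.0, 14.0), "sand": (500.0, 6.0)}[soil]
    return max(base - slope * moisture_pct, 0.0)

def trafficability(ci, vci, margin=1.5):
    """Compare soil cone index with the Vehicle Cone Index (VCI)."""
    if ci >= margin * vci:
        return "go"        # vehicle can operate without issues
    if ci >= vci:
        return "slow-go"   # passage possible, slowdowns likely
    return "no-go"         # soil too weak, risk of immobilization

# e.g. SWI-adjusted Sentinel-2 moisture of 35% over clay, vehicle VCI of 280
ci = cone_index(moisture_pct=35.0, soil="clay")
print(ci, trafficability(ci, vci=280.0))
```

Applying this per pixel over the fused Sentinel-1/Sentinel-2 moisture map yields the three-zone trafficability score described in the abstract.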
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Improving Tiny Object Detection in Enhanced Sentinel-2 Images Using Density and Detection Methods

Authors: Christian Ayala, Juan Francisco Amieva, Pablo Vega, Roland Perko, Sebastian Aleksandrowicz
Affiliations: Tracasa Instrumental S.L., Joanneum Research, Space Research Centre of the Polish Academy of Sciences
Globally and freely available satellite images, such as those provided by Sentinel-2 with a ground sampling distance (GSD) of 10 meters, are currently limited by their spatial resolution when attempting detection of vehicles such as aircraft. This limitation raises the question of the smallest object sizes that can be reliably detected for object detection applications. As highlighted by Kaur et al., small object size remains one of the main challenges in object detection. In this study, we address this limitation by enhancing the resolution of the input data and combining two complementary methods for object density estimation and detection. The study is framed within the development of a system for continuous monitoring of critical transport infrastructure. Leveraging the continuous acquisition and availability of Sentinel-2 imagery, the system is designed to detect tiny objects and proactively generate activity reports. Aircraft were selected as the target objects for this proof of concept, as they are often close to the GSD of Sentinel-2 images and exhibit variations in size, shape, and color. For the proof of concept, 5,378 aircraft were annotated across 118 areas of interest, with sizes ranging from extra-small and small (up to 12–16 meters in length) to large and extra-large (up to 50 meters or beyond). The fully automated workflow for object detection consists of the following steps: 1. Super-Resolution: The limited spatial resolution of Sentinel-2 images presents challenges in detecting objects like aircraft, especially smaller ones. The first step in the workflow enhances image sharpness using a super-resolution deep learning model. Specifically, an Enhanced Deep Residual Network (EDSR) was trained using a high-resolution reference sensor as ground truth, as proposed by Galar et al. The resulting model, tailored for defense use cases, increases the spatial resolution of Sentinel-2 images to 2.5 meters GSD. 2. 
Object Detection: Using super-resolved images as input, an object detection model was employed to identify aircraft. The YOLO (You Only Look Once) family of models was chosen for its balance between speed and accuracy. Unlike traditional sliding window approaches, YOLO tackles object detection as a single regression problem, predicting bounding boxes and class probabilities in a single pass. The latest YOLOv8 architecture incorporates innovations such as spatial attention mechanisms, which focus the model on relevant parts of the image for precise localization, and a novel feature fusion module that combines high-level semantic features with low-level spatial details. These features are crucial for detecting small aircraft in satellite imagery, where objects are often located in specific patterns and vary significantly in size. 3. Density Estimation: For very small, densely packed objects that often occlude each other, instance-wise detection may be unfeasible. To address this, density estimation techniques, originally developed for cell counting in microscopy and later adapted for crowd counting, are employed. A custom-tailored Vision Transformer-based model was developed, extending the approach of Liang et al. This model not only estimates object density at the pixel level but also derives object locations. 4. Data Fusion: Despite the improved spatial resolution of super-resolved Sentinel-2 images, challenges remain due to the varying sizes of target objects, which range from 107 to 3,272 square meters. Some aircraft may occupy only a few pixels, making them difficult to detect. Object detection models may either fail to identify such small objects or produce false positives by overcompensating. To mitigate this, a post-processing step integrates the outputs of the density estimation model. Heatmaps generated by the density estimation approach are used to filter out false predictions, reducing false positives while preserving true detections. 
This integrated workflow significantly reduces the number of false detections without compromising true positives, thereby improving precision. The enhanced object detection capabilities make monitoring critical transport infrastructure using freely available satellite imagery feasible, despite the initial limitations posed by the high false prediction rates. Furthermore, this method demonstrates the downstream benefits of super-resolution, achieving acceptable performance in detecting small objects. The approach opens new possibilities for globally monitoring strategic areas of interest, offering a cost-effective and scalable solution for tracking activity in critical transport infrastructure.
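The data-fusion step, in which density heatmaps veto implausible detections, might look roughly like the following minimal sketch (the box format in metres, the 2.5 m GSD, and the density threshold are assumptions for illustration, not the project's actual code):

```python
import numpy as np

def filter_by_density(boxes, heatmap, gsd=2.5, thr=0.1):
    """Keep a detection only if the density heatmap supports it at the
    box centre; suppresses false positives while retaining true hits."""
    kept = []
    for x0, y0, x1, y1 in boxes:           # box corners in metres
        r = int(((y0 + y1) / 2) / gsd)     # heatmap row of box centre
        c = int(((x0 + x1) / 2) / gsd)     # heatmap column of box centre
        if heatmap[r, c] >= thr:
            kept.append((x0, y0, x1, y1))
    return kept

heat = np.zeros((40, 40))
heat[10, 10] = 0.8                         # density mass near one aircraft
boxes = [(20.0, 20.0, 32.0, 32.0),         # centre falls on the supported cell
         (70.0, 70.0, 82.0, 82.0)]         # no density support: dropped
print(filter_by_density(boxes, heat))      # only the first box survives
```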
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Transformer-Based Networks for Efficient Vectorization: A Case Study of Building Rooftop Extraction

Authors: Zhimeng HE, Yuwei Cai, Brian Barrett, Meiliu Wu
Affiliations: University of Glasgow
Advancements in Artificial Intelligence (AI) have significantly increased the feasibility of creating high-precision, detailed, and up-to-date building rooftop maps, which are crucial for applications in urban planning and the development of digital twins. Over recent years, researchers have made substantial progress in developing deep learning models to improve the performance of building extraction, focusing on architectural innovations and spatial module enhancements. For instance, shortcut connections in ResNet have improved gradient flow in deep networks. Later, U-Net integrated skip connections into an encoder-decoder structure, forming a foundation for segmentation tasks in remote sensing. Further advanced enhancements have followed, such as Atrous Spatial Pyramid Pooling (DeepLab v3) capturing multi-scale features, and UNet++ incorporating dense skip connections for dynamic feature fusion. Moreover, improvements to spatial modules, such as attention mechanisms and channel recalibration (e.g., SENet), have further refined segmentation accuracy. Notably, models with emerging attention-based architectures, including Vision Transformers (ViTs) and the Swin Transformer, have delivered state-of-the-art performance in global context capture, and innovations in multi-path designs and lightweight modules have made segmentation tasks more efficient, especially in urban areas. Despite this significant progress, most existing methods only produce raster-based outputs. This has posed challenges for downstream applications (e.g., building rooftop analysis) that require precise vectorized outputs, which offer enhanced geometric accuracy and seamless spatial integration with other datasets. Moreover, efficient vectorization has remained a difficult task. 
Current approaches to vector extraction include: (1) Raster-to-Vector, converting rasterized outputs into vector formats through post-processing techniques such as boundary-following algorithms; (2) End-to-End Vector Generation, employing deep learning techniques to directly predict vector vertices and edges; and (3) Neural Field-based Methods, utilizing implicit representations for detailed spatial optimization. However, each method still encounters limitations, including dependence on raster quality, difficulty in capturing fine-grained details, and high computational costs. To address these gaps, this study proposes a novel transformer-based architecture that extends the Raster-to-Vector framework for improved vectorized building extraction. Specifically, our proposed method consists of a two-stage process: raster feature refinement and vectorization. In the first stage, a transformer-based encoder processes high spatial resolution remote sensing imagery to refine spatial features and to capture global context effectively. Next, the decoder generates precise intermediate rasterized representations. In the second stage, these raster outputs are processed through a streamlined vectorization pipeline, which leverages contour extraction algorithms. This process is optimized by using the refined spatial features from the transformer network to ensure sharp and accurate boundary delineation, resulting in high-quality vectorized building footprints. To demonstrate the effectiveness of our approach, we compare our model with the Raster-to-Vector method, the End-to-End Vector Generation method (Graph Neural Networks), and the Neural Field-based method (Signed Distance Function (SDF)), based on the SpaceNet and WHU datasets. These datasets encompass diverse and complex urban environments, providing a robust testing ground for assessing the scalability and adaptability of our proposed model. 
In particular, we benchmark the models using metrics such as IoU and runtime performance in real-world scenarios. The results demonstrate that by incorporating advanced attention mechanisms and multi-scale feature extraction, our proposed transformer-based architecture achieves state-of-the-art performance across these metrics, obtaining superior accuracy and IoU of 95.33% and 75.35%, respectively, with high computational efficiency. The approach demonstrates greater efficiency and scalability in real-world scenarios, validated through extensive evaluations on the SpaceNet and WHU datasets. By directly generating building footprints in vectorized formats (e.g., shapefiles), our proposed model can be applied to various high-resolution remote sensing and geospatial applications for urban analytics, such as building detection, land use analysis, and urban structure representation. This method can also enhance GIS data integration, better supporting detailed spatial analysis and informed decision-making for urban planning and management in the future.
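For reference, the IoU metric used in such benchmarks reduces to a simple set overlap on binary masks; this toy example uses 8x8 masks rather than actual SpaceNet/WHU tiles:

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-Union between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True      # 4x4 ground-truth rooftop
pred = np.zeros((8, 8), bool); pred[3:7, 2:6] = True  # prediction shifted one row
print(round(iou(pred, gt), 3))  # 12 shared pixels / 20 in the union = 0.6
```

For vector outputs, the same definition applies with polygon areas in place of pixel counts.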
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Enhancing Copernicus Security Services - EU governmental crisis management hub for forced population displacement - THEIA project

Authors: Liza Panagiotopoulou, Betty Charalambopoulou, Juan Francisco Romero Quesada, Jose Santos, Pol Kolokoussis, Christos Kontopoulos
Affiliations: Geosystems Hellas S.A., SATCEN
The Copernicus Service for Security Applications (CSS) aims to support European Union (EU) policies by providing critical information on Europe's security challenges. It enhances crisis prevention, preparedness, and response across four key areas: Border Surveillance, Maritime Surveillance, Support to EU External and Security Actions, and Research and Development (R&D) for Earth Observation (EO) Security. By leveraging state-of-the-art (SotA) technology and space capabilities, CSS contributes to civil security, law enforcement operations, and crisis management in Europe, while also supporting the EU's external actions. This creates opportunities for expanded outreach across a wider range of related applications. However, CSS still faces observational gaps in space-based intelligence. The combined threats posed by recent wars, climate change, and political crises underscore the urgent need for EU decision-makers to access qualitative, exclusive, fast, and resilient information. One of the most critical challenges is population displacement resulting from conflicts, compounded by climate change, extreme weather events, food insecurity, and poverty. In this context, addressing these challenges, particularly population displacement and its contributing factors, is of paramount importance. The THEIA project, which integrates data fusion across different EO missions including CSS, together with processing and analysis, especially through the use of Geospatial Artificial Intelligence (GeoAI) and Machine Learning, aims to significantly enhance the effectiveness of existing services. Through the amalgamation of multi-temporal EO data and diverse datasets, THEIA empowers better decision-making and adapts to changing policy and user needs. 
This technological advancement, strengthened by GeoAI, boosts detection capabilities and ensures timely access to vital information, bridging the gap between operational capabilities and stringent security requirements. Through the integration of non-space data and end-user intelligence, THEIA's supply chains create value not only at the operational level but also at regional and local levels, promoting better coordination. Additionally, THEIA plays a critical role in advancing EU-independent capabilities and technologies, supporting the consolidation of the European space ecosystem, and facilitating the sustainable coexistence of legacy and New-Space solutions. Its services are tailored to a wide range of end-users, including EU entities such as Frontex, Member State Ministries of Defence, Intelligence Agencies, Police Forces, NATO, and potentially non-EU organizations, such as the United Nations. THEIA's development leverages the complementary strengths of the EU Satellite Centre (SATCEN) and the EU Agency for the Space Programme (EUSPA), enhancing situational awareness for the EU and its Member States. Rather than duplicating the existing capabilities of the Copernicus Security Service (CSS) in EU Member States, THEIA is designed to complement them effectively. The THEIA project receives funding from the European Union's Horizon Europe programme (Space for Research / Copernicus for Security) under Grant Agreement 101190051.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: SPECTRE: Marine traffic monitoring through an innovative AI-powered multi-sensor multi-mission framework

Authors: Andrea Mazzeo, Valerio Pisacane, Renato Aurigemma, Andrea LoRusso, Marco Focone, Prof. Maria Daniela Graziano, Alfredo Renga, Maria Salvato, Angela Carmen Cristofano, Fabiana Ravellino, Angela Volpe, Maria Virelli, Diodato Tapete
Affiliations: University Of Naples "Federico II", Euro.soft srl, Italian Space Agency (ASI)
Monitoring maritime traffic has long been a subject of study for researchers in the field of remote sensing, and a key interest of undeniable strategic value from both civil and military perspectives. In the current state of the field, illegal maritime activities, including human trafficking, poaching, smuggling, drug trafficking, and the concerning issue of voluntarily sunk unregistered ships, highlight the critical need for a tool capable of conducting thorough, regular, and ongoing surveillance of the Mediterranean. Numerous Earth observation missions, either already in orbit or planned for the future, have ship detection capabilities, exploiting a plethora of different sensors with a wide variety of spectral characteristics and spatial resolutions. However, the use of single missions does not provide the spatial and temporal capabilities necessary to properly monitor illegal or otherwise unreported maritime traffic. The SPECTRE project (ShiP dEteCTion based on aRtificial intElligence; agreement n. 2023-4-E.0), funded by the Italian Space Agency in the framework of the RESEARCH DAY ASI 2020 programme, aims to develop an AI-based tool able to integrate data from Synthetic Aperture Radar (SAR) and optical satellite missions, namely the Sentinel platforms, COSMO-SkyMed, and future missions such as IRIDE. The use of several missions with different capabilities and orbital parameters allows for much higher monitoring frequency, enabling tracking of single vessels across much wider areas and longer time frames than would otherwise be possible. Deep Learning (DL) techniques are exploited for the processing chain of both optical and SAR products. All satellite imagery is pre-processed with knowledge of sea characterization, filtering out surface phenomena that might interfere with ship hull detection. 
AI detection models are developed and trained for SAR and optical imagery, and an intelligent algorithm for temporal and spatial matching is used to pair products from different missions to track targets. At the end of the project a WEBGIS-based interface will be delivered to enable the monitoring of non-cooperative vessels.
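A minimal sketch of such spatio-temporal matching is shown below, assuming detections as (timestamp in seconds, (lat, lon)) tuples and a maximum plausible vessel speed as the kinematic gate; both the data format and the greedy nearest-neighbour pairing are illustrative assumptions, not SPECTRE's actual algorithm:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def match(det_a, det_b, max_speed_kn=30.0):
    """Pair each detection from one mission with the slowest-implied-speed
    (i.e. most plausible) detection from another mission's later pass."""
    pairs = []
    for t_a, pos_a in det_a:
        feasible = []
        for t_b, pos_b in det_b:
            dt_h = abs(t_b - t_a) / 3600.0          # time gap in hours
            if dt_h == 0:
                continue
            speed_kn = haversine_km(pos_a, pos_b) / dt_h / 1.852
            if speed_kn <= max_speed_kn:            # kinematically reachable?
                feasible.append((speed_kn, (t_a, t_b)))
        if feasible:
            pairs.append(min(feasible)[1])
    return pairs

s1 = [(0, (40.00, 14.00))]                          # Sentinel-1 detection at t = 0 s
s2 = [(3600, (40.10, 14.00)),                       # COSMO-SkyMed pass 1 h later, ~11 km away
      (3600, (42.00, 16.00))]                       # too far to reach at 30 kn
print(match(s1, s2))
```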
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Advancing Maritime Surveillance: Integrated AI-Based Wake Detection in Earth Observation Imagery for Enhanced Civil Security Applications

Authors: Andrea Mazzeo, Maria Daniela Graziano, Giuliano Vernengo, Davide Bonaldo, Angela Carmen Cristofano, Sergio Iervolino, Roberto Del Prete, Margareth Di Vaia, Marisa Sperandeo, Diego Villa, Federico Franciosa, Gian Marco Scarpa, Federica Braga, Paolo Vavasori, Amedeo Fadini, Stefano Menegon
Affiliations: Università Degli Studi Di Napoli Federico II – DII, Università Degli Studi Di Genova – DITEN, Consiglio Nazionale delle Ricerche – Istituto di Scienze Marine
The monitoring and detection of uncooperative maritime vessels, commonly known as "dark ships," have become pressing concerns in civil security. These vessels, often small in size and engaged in illicit activities such as smuggling, human trafficking, illegal fishing, and unauthorized territorial incursions, operate without active Automatic Identification System (AIS) transponders or manipulate transmitted information to avoid detection. The lack of effective monitoring of these vessels threatens national security, environmental protection, and maritime law enforcement. The UEIKAP project (Unveil and Explore the In-depth Knowledge of Earth Observation data for maritime Applications) aims to enhance maritime surveillance capabilities by developing advanced methodologies for detecting ship wakes using Earth Observation (EO) data. By focusing on the analysis of wakes generated by moving ships, which inherently contain rich information about a vessel's characteristics and movements, UEIKAP seeks to provide robust tools for maritime domain awareness that address critical civil security scenarios. The project is funded under the Research Projects of National Interest (PRIN) programme by the Italian Ministry of University and Research. Existing ship detection methods utilizing optical and Synthetic Aperture Radar (SAR) imagery have limitations, particularly in detecting small, fast-moving, or uncooperative vessels. Challenges stem from factors such as limited satellite image resolution, the dynamic maritime environment, and prevalent sea clutter and natural phenomena that can obscure or mimic vessel signatures. Hull detection techniques, while effective for larger vessels with active AIS transponders, are inadequate for identifying dark ships that deliberately evade detection. Moreover, hull detection alone does not provide comprehensive situational awareness, lacking information about a vessel's speed, heading, and historical trajectory. 
Ship wakes, conversely, are more prominent features in EO imagery and serve as reliable indicators of vessel presence and activity. Wakes can extend over several kilometers and are influenced by the vessel's speed, heading, hull design, and surrounding meteo-marine conditions. Analyzing these wakes allows for the extraction of critical information, including the vessel's instantaneous position, velocity, heading, and even estimates of its size and type. The UEIKAP project capitalizes on this potential by integrating state-of-the-art Deep Learning (DL) techniques and data fusion to develop a comprehensive ship wake detection and analysis system. This approach aligns with the need for leveraging EO-based solutions, Artificial Intelligence (AI), and data fusion to address complex civil security scenarios. The system architecture is presented in full, including the wake detection algorithm based on the YOLO family of detection models, as well as the inclusion of a classification model for discriminating ship wakes from other natural and anthropogenic surface features. A comprehensive dataset comprising both synthetic and real-world Earth Observation (EO) imagery is developed to train and validate the ship wake detection model. The primary sources of EO data are the Sentinel-1 and Sentinel-2 satellite missions operated by the European Space Agency (ESA), providing high-quality, freely accessible imagery suitable for maritime surveillance applications. Sentinel-1 supplies C-band Synthetic Aperture Radar (SAR) data, which is invaluable for maritime applications due to its all-weather, day-and-night imaging capabilities. The project employs Sentinel-1 terrain-corrected images in the VV polarization mode. VV polarization is chosen for its sensitivity to surface roughness and its effectiveness in capturing sea surface features, including ship wakes. 
For optical observations, the project utilizes Sentinel-2 Level 2A images, which provide high-resolution multispectral data across 13 spectral bands ranging from visible to shortwave infrared wavelengths. In both cases, wake regions are labelled using polygonal masks, capturing the complex shapes and extents of wakes. Each image is also tagged with relevant metadata such as acquisition time, sensor parameters, and environmental conditions to facilitate context-aware analyses, as well as corresponding AIS data for validation purposes. Understanding the influence of environmental factors on wake visibility is critical. As such, meteo-marine data are obtained from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 dataset, providing important parameters like wind speed and sea surface temperature. Incorporating meteo-marine data helps the models distinguish between genuine wakes and similar natural phenomena influenced by environmental conditions. The first results of the UEIKAP system's implementation are presented, highlighting its effectiveness in detecting and analyzing ship wakes under various conditions. The performance metrics of the DL-based wake detection model are discussed, including detection rates, false positive rates, and robustness across different sea states and imaging conditions. Several case studies gathered in the areas of interest of the project are presented to demonstrate the system's capability to detect uncooperative vessels not visible through traditional hull detection methods. The integration of meteo-marine data and AIS is shown to significantly enhance detection accuracy and provide valuable contextual information. Furthermore, the data fusion approach is shown to provide a significant contribution to comprehensive maritime domain awareness, enabling identification of vessel trajectories and estimation of the vessel state. Finally, the validation campaigns planned for Summer 2025 are presented. 
These campaigns will involve coordinated efforts between satellite data acquisition, meteo-marine measurements, and on-site observations, with the objective of evaluating system performance. Outcomes from these campaigns will inform further development and optimization, ensuring effectiveness in operational settings.
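Detection rate and false positives against AIS-validated ground truth, as discussed above, can be computed with a simple one-to-one matching sketch; the kilometre coordinates and the matching distance are illustrative assumptions, not the project's evaluation protocol:

```python
def detection_metrics(detections, truths, max_dist=1.0):
    """Match detections to AIS-validated wake positions (one-to-one,
    within max_dist) and report detection rate and false positives."""
    unmatched = list(truths)
    tp = 0
    for d in detections:
        hit = next((t for t in unmatched
                    if abs(d[0] - t[0]) <= max_dist
                    and abs(d[1] - t[1]) <= max_dist), None)
        if hit is not None:            # detection explained by a real wake
            unmatched.remove(hit)
            tp += 1
    fp = len(detections) - tp          # alarms with no AIS-confirmed wake
    detection_rate = tp / len(truths) if truths else 1.0
    return detection_rate, fp

truths = [(10.0, 10.0), (50.0, 50.0)]  # wakes confirmed by AIS tracks
dets = [(10.5, 9.8), (80.0, 80.0)]     # one hit, one false alarm
print(detection_metrics(dets, truths))
```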
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Night-time Satellite and Aerial Image Denoising with Online Complex Noise Modeling and Deep Learning

Authors: Sandeep Kumar Jangir, Dr. Reza Bahmanyar, Dr. Tobias Storch
Affiliations: German Aerospace Center (DLR)
At any given time, half of the Earth's surface experiences daytime, while the other half is in darkness. Most Earth observation missions focus on daytime imaging, overlooking the significant applications of night-time imagery. Artificial lighting has profoundly shaped human societies, enabling night-time activities and linking light emissions to social factors (e.g., wealth) and environmental impacts (e.g., light pollution). Night-time light imagery has been widely used to monitor conflict areas, assess the impacts of natural disasters, and detect damage to infrastructure such as roads, bridges, and buildings [3]. Additionally, it has been employed to analyze urban changes during conflicts, monitor power outages in disaster-stricken regions, and evaluate humanitarian crises through changes in night-time illumination patterns. Recent advances in instrumentation have driven rapid growth in night-time remote sensing research [1]. However, progress remains constrained by the limited number of sensors and their shortcomings, including insufficient spatial resolution, spectral information, radiometric sensitivity, and revisit frequency. Furthermore, continuous operation of imaging sensors introduces noise artifacts such as dark current noise, fixed pattern noise, and hot pixels, which reduce the signal-to-noise ratio and hinder downstream applications such as classification and detection [4]. Traditional filtering methods struggle to address the diverse and complex noise types inherent to night-time imagery due to the ill-posed nature of the problem. Deep learning (DL) approaches [2] have emerged as a robust alternative, effectively learning noise patterns from data and handling complex degradations while preserving image details and colors, especially in low-light conditions. These models outperform traditional methods without requiring manual tuning and adapt to new noise profiles. 
To address the scarcity of paired noisy and clean datasets for training night-time denoising models for satellite and aerial images, we propose a comprehensive noise modeling framework. By analyzing camera dark frames, we model a range of degradations, including stripe and pattern noise, hot pixels, dark current noise, and simpler noise types like RGB and black-and-white noise. During training, these degradations are injected into satellite and aerial images in various combinations and intensities, generating unique noise patterns in each mini-batch. This approach enables our DL model to learn effective noise removal while preserving the fidelity of details. Our trained model not only removes noise from raw images (8, 12, or 16-bit depths) but also strategically fills in missing information at noisy pixel locations. Comparative evaluations demonstrate that our approach outperforms state-of-the-art denoising methods. Furthermore, we highlight the practical benefits of denoising for a downstream night-time remote sensing application. This work provides a robust simulation model and training framework, advancing the utility of night-time imagery for both research and practical applications.
References: [1] Cynthia L. Combs, Steven D. Miller, "A review of the far-reaching usage of low-light nighttime data," Remote Sensing, vol. 15, no. 3, p. 623, 2023, MDPI. [2] Michael Elad, Bahjat Kawar, Gregory Vaksman, "Image denoising: The deep learning revolution and beyond—a survey paper," SIAM Journal on Imaging Sciences, vol. 16, no. 3, pp. 1594-1654, 2023, SIAM. [3] Daniele Cerra, Nina Merkle, Corentin Henry, Sebastian Gapp, Veronika Gstaiger, "Increases in Night Lights Intensity Reveal Extreme Events: A Case of Study on the Ongoing Conflict in Ukraine," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 10, pp. 53-59, 2024, Copernicus GmbH. [4] Tiago S. Nazaré, Gabriel B. Paranhos da Costa, Welinton A. Contato, Moacir Ponti, "Deep convolutional neural networks and noisy images," in Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: 22nd Iberoamerican Congress, CIARP 2017, Valparaíso, Chile, November 7–10, 2017, Proceedings 22, pp. 416-424, 2018, Springer.
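The degradation-injection idea can be illustrated with a minimal NumPy sketch. The function, parameter names, and noise intensities below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def inject_night_noise(clean, rng, hot_frac=1e-3, stripe_sigma=0.02, dark_sigma=0.01):
    """Corrupt a clean [0, 1] image with noise types modelled from dark frames.

    hot_frac     -- fraction of pixels replaced by saturated hot pixels
    stripe_sigma -- std of per-column offsets (fixed-pattern stripe noise)
    dark_sigma   -- std of additive Gaussian dark-current noise
    """
    noisy = clean.copy()
    h, w = clean.shape
    # Additive dark-current noise over the whole frame.
    noisy += rng.normal(0.0, dark_sigma, size=(h, w))
    # Column-wise stripe (fixed-pattern) noise: one offset per column.
    noisy += rng.normal(0.0, stripe_sigma, size=(1, w))
    # Hot pixels: random locations driven to full scale.
    mask = rng.random((h, w)) < hot_frac
    noisy[mask] = 1.0
    return np.clip(noisy, 0.0, 1.0), mask

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.2)  # stand-in for a clean night-time patch
noisy, hot = inject_night_noise(clean, rng, hot_frac=0.01)
```

Drawing fresh parameters for every mini-batch, as the abstract describes, means the network rarely sees the same degradation twice, which discourages overfitting to any single noise profile.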

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Vessel Detection Leveraging Satellite Imagery and YOLO in Maritime Surveillance

Authors: Rita Magalhães
Affiliations: Instituto Superior Técnico, CEiiA
Vessel detection is essential for maritime surveillance, supporting the monitoring of fishing, commercial and transportation activities, as well as detecting suspicious or illegal vessels and aiding in search and rescue operations. Satellite imagery has become increasingly valuable for Earth Observation applications due to advancements in resolution and availability and the wide range of unadulterated spatio-temporal data. The European Space Agency (ESA) offers free optical imagery from the Sentinel-2 satellite, providing a cost-effective solution for large-scale maritime surveillance, especially vessel detection. Deep learning models, particularly You Only Look Once (YOLO), have demonstrated impressive real-time object detection performance. YOLO’s single-shot detection excels at processing satellite imagery for vessel detection, combining accuracy and speed. This article focuses on comparing YOLO versions 8 and 10 for vessel and wake detection using a dataset provided by CEiiA. The models were tested to determine the optimal configuration for this task and, after evaluation, a YOLOv8 configuration was selected. Following this selection, the dataset was expanded using assisted annotations and further tests showed performance improvements. The final YOLOv8 model achieved an F1-score of 88.69% (with 90.47% precision and 86.98% recall), an Intersection over Union (IoU) of 72.91%, a mAP50 of 87.53% and a mAP50-95 of 61.20%. In Mäyrä et al. (2024), the best performing model achieved a test precision of 87.60% and a recall of 87.40%, resulting in an F1 score of 87.50% and a validation mAP50 of 86.70%. Our model outperforms the only paper to our knowledge testing YOLOv8 and Sentinel-2 for vessel detection in all metrics except recall, where the latter model may be slightly more effective at minimizing false negatives.
There was a notable disparity between the performance of the vessel and wake classes, with the vessel class consistently outperforming the wake class. Single-class training improved bounding box localization and delineation; however, it slightly reduced overall performance, possibly due to a loss of contextual information. Additionally, tests on high-resolution images were conducted and the model reached a general F1-score of 60.61% (with 73.17% precision and 51.72% recall) and an IoU of 64.36%, confirming that models trained on lower-resolution data can still perform relatively well on higher-resolution images. Nonetheless, there is potential for further enhancement through fine-tuning. These results confirm that YOLO models are highly effective for real-time vessel detection and can be reliably applied in maritime surveillance.
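As a sanity check on scores like those above, the F1 value follows directly from the stated precision and recall, and box IoU is a short computation. A minimal sketch, independent of any particular YOLO implementation:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# The reported F1 of 88.69% follows from the stated precision and recall.
print(round(f1_score(0.9047, 0.8698), 4))  # 0.8869
```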

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: B.02.05 - POSTER - Restoring Biosphere Resilience: Transforming Agriculture, Forestry and Other Land Use (AFOLU) from Carbon Source to Sink

To address climate change effectively, emissions must be halved every decade to achieve net zero emissions by 2050. A core component of this is the transformation of agriculture into a major carbon sink, as projected in current IPCC climate models. These models have assumed a shift from source to sink in Agriculture, Forestry and Other Land Use (AFOLU)—primarily agriculture—in the next two decades. In fact, even if fossil fuel use were eliminated today, emissions from the food system alone are enough to put the 1.5°C and 2°C targets out of reach. This pivotal role of the food system in climate dynamics underscores its broader impact on the planet. The global food system places immense pressure on Earth systems, primarily due to land-clearing (slash-and-burn practices and biomass burning), animal farming (enteric fermentation and manure management), and crop production (fertilisers and emissions from land and paddies). This makes it the single largest driver of planetary boundary transgressions and Earth system instability.

Agricultural ecosystems cover more than 40% of the global land surface, making agricultural land the largest terrestrial biome on the planet, with animal agriculture taking up 83% of it. In the past 300 years, a staggering 55% of all ice-free land has been converted into croplands, pastures and rangelands, leaving only 45% for natural or semi-natural ecosystems. Earth’s terrestrial ecosystems store a vast amount of carbon, about 60 times the yearly human-induced greenhouse gas emissions, with soil containing roughly 70% of this (1500–2400 GtC). To harness this potential, we must begin to reclaim agricultural land, a process facilitated by a global shift towards sustainable plant-based food sources, which could ultimately free up 75% of the agricultural land for rewilding and restoration.

The use of Earth Observation data in land applications is well-explored and maturing quickly, providing in-depth insights and monitoring services for terrestrial ecosystems, which in turn supports the transformation of food systems. Therefore, we invite Earth Observation data application researchers and engineers to submit abstracts for this session that:

  • Showcase best practices, case studies and research utilizing Earth Observation for Agroecology, Nature restoration, Agriculture, Forestry and Other Land Use (AFOLU) statistics monitoring, carbon sink tracking (e.g., via Net Primary Production (NPP)/ Above Ground Biomass (AGB)), monitoring nutrient loading in terrestrial & aquatic ecosystems, detecting resilience within agricultural landscapes for early warning systems and others.

This session aims to raise awareness, nurture, support, and expand a community committed to transforming the current food system into one that regenerates and strengthens the biosphere’s innate resilience—one that preserves and restores land and aquatic ecosystems, allocates cropland to the most productive regions, adopts land management systems that work with nature rather than against it, transitions to plant-based food sources, and serves as a future carbon sink.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Leveraging multi-decadal satellite image time series to characterize grassland history for climate reporting

Authors: Lukas Blickensdörfer, M. Sc. Tom Brög, Felix Lobert, Marcel Schwieder, Patrick Hostert, Stefan Erasmi
Affiliations: Thünen Institute Of Farm Economics, Humboldt-Universität zu Berlin, Earth Observation Lab, Humboldt-Universität zu Berlin, Integrative Research Institute of Transformations of Human-Environment Systems (IRI THESys)
An increasing number of nations require spatially explicit data for the compilation of greenhouse gas inventories. The amended EU LULUCF Regulation (2018/841) will require all EU member states to make use of the best available data from 2028 onwards. However, the availability of nationwide, high-resolution land use data is currently limited. This applies especially to data that reflect mitigation measures on agricultural land, which contribute to reduced carbon emissions and thus to climate protection. The distribution and history of grasslands are key elements in the monitoring and modelling of carbon emissions for greenhouse gas inventories on agricultural land. Mineral soils under permanent grassland generally have higher carbon stocks than under arable land use. The carbon sink of grassland also depends on the number of years of non-tillage use. Approaches that generate such information on agricultural land over decades are therefore of great importance. The aim of this study is hence to map the history of grassland distribution, the year of initial grassland establishment, and periods of non-inversion tillage in grassland at national level for Germany from dense satellite image time series back to 1990, the reference year of climate reporting. All available Landsat 5, 7, 8, 9, and Sentinel-2 observations since 1986 are used. We include 4 years prior to 1990 to prevent unwanted time series artefacts of the multiannual moving windows that are part of our algorithm within the relevant time period. The mapping of the grassland history requires a consistent distinction between cropland and permanent grassland in the agricultural landscape. Grasslands show different soil cover and management regimes compared to cropland. We therefore start by mapping the occurrence of bare soil within the satellite data time series from 1986 to 2023.
This is done using a random forest classification model trained on reference data derived from the European Land Use/Cover Area frame Survey (LUCAS). The survey plots are labeled based on the photos accompanying LUCAS from the years 2006, 2009, 2012, 2015, and 2018. As the resulting model is used to map bare soil occurrence back to 1986, we test its temporal transferability in a cross-validation framework using annual folds. From the time series of bare soil occurrence, we calculate a relative measure of seasonal bare soil frequency over time. From this information we derive the year of initial grassland establishment and periods of non-inversion tillage grassland use. As reference data for grassland establishment, we use regionally available agricultural land use data from the European Integrated Administration and Control System (IACS) reaching back to 2006. Finally, a holdout sample of the IACS data is used to assess the quality of the mapped grassland-establishment indicator. We find that throughout the time series, bare soil observations can be differentiated well from non-bare soil observations (overall accuracy of 92.3 ± 0.6 %), and no significant accuracy decrease is observed when transferring the model in time. Based on the bare soil time series, the model differentiates grassland newly established within the analysis time frame from grassland established before the analysis time frame with an overall accuracy of 98.7 ± 0.1 % and an F-score for newly established grassland of 75.4 ± 0.6. When new grassland is predicted correctly, the mean absolute error of the year of establishment is slightly over 1 year when compared to the IACS validation sample. Annual and regional variations of the results are further analyzed and discussed. We will present the Germany-wide products and data on the history of grassland management and outline the workflow for integrating the data into climate reporting at national level back to the IPCC baseline (1990).
Our approach will provide insights into the value of remote sensing products for climate reporting in the LULUCF sector for agricultural land. The use of Earth observation in combination with land use data that is also available in other European states enables a high-resolution, area-wide, consistent and comparable assessment at the European level in the future. Data on grassland age as well as past and current grassland distribution is also valuable for questions related to ecology and biodiversity. Making such data accessible to the wider community will therefore enable critical research beyond climate reporting.
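The annual-fold transferability test described in the abstract can be sketched as follows. This toy example uses synthetic "spectra" and a nearest-centroid stand-in for the authors' random forest, so all names and values are illustrative assumptions:

```python
import numpy as np

def temporal_cv(X, y, years, test_year):
    """Train on all years except `test_year`, evaluate on `test_year` only."""
    train, test = years != test_year, years == test_year
    # Toy nearest-centroid classifier standing in for the random forest.
    c0 = X[train & (y == 0)].mean(axis=0)
    c1 = X[train & (y == 1)].mean(axis=0)
    pred = (np.linalg.norm(X[test] - c1, axis=1)
            < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
    return (pred == y[test]).mean()

rng = np.random.default_rng(1)
n = 600
years = rng.choice([2006, 2009, 2012, 2015, 2018], size=n)  # LUCAS photo years
y = rng.integers(0, 2, size=n)                      # 1 = bare soil, 0 = vegetated
X = rng.normal(0.0, 0.3, size=(n, 4)) + y[:, None]  # separable synthetic "spectra"
acc_by_year = {yr: temporal_cv(X, y, years, yr) for yr in sorted(set(years))}
```

A stable accuracy across held-out years is what supports the abstract's claim of no significant accuracy decrease when the model is transferred in time.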

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: SEN4MOZ: Mapping Shifting Cultivation Dynamics in Conservation Areas of Mozambique using Copernicus Data

Authors: Lennart Eichfuss, João Chibue Domingos, Pauline Hammer, Dr Patrick Hostert, Patrick Meyfroidt, Natasha Ribeiro, Philippe Rufin
Affiliations: UCLouvain, Humboldt-Universität zu Berlin, Eduardo Mondlane University, F.R.S. - FNRS
Conservation areas are key in reducing deforestation and degradation in African woodlands. However, achieving conservation targets equitably requires balancing conservation with competing land uses in and around protected areas that are crucial for the livelihoods of local communities relying on semi-subsistence agriculture. Depending on the protection status, local communities may obtain rights to practice shifting cultivation within protected areas or designated buffer zones. Our knowledge regarding the dynamics of shifting cultivation in conservation areas, however, remains scarce. On the one hand, the diversity of actors, related land uses, and geographies challenges the generation of useful generic insights on the impacts of conservation efforts. On the other hand, the lack of suitable maps of heterogeneous shifting cultivation systems with small plots in a spatially explicit, timely, and thematically detailed manner challenges the understanding of the interplay between protected area effectiveness and agriculture-based livelihoods. Currently available map products have multiple limitations in this regard. First, existing cropland maps disagree in fragmented shifting cultivation systems of Africa, or explicitly exclude shifting cultivation in their cropland definition. Second, existing cropland data products have limited temporal coverage, either only covering single points in time or not capturing the dynamics at sufficient temporal detail. Third, existing data products do not explicitly account for fallows as a key component in shifting cultivation systems but either combine them with active cropland or exclude them entirely from their definition of agricultural areas.
Lastly, current map products do not allow for quantifying either the woody vegetation cover lost due to land clearing at the fine spatial resolution needed to capture relevant processes, or the rates of woody vegetation regrowth, which are critical to determine the mid-term impacts of shifting cultivation. Consequently, workflows for producing spatially, temporally and thematically detailed maps of the extent and status of shifting cultivation agriculture are needed to better understand the impacts of shifting cultivation on conservation areas. The Sen4MozParks project leverages satellite EO data to provide spatially and temporally detailed information about shifting cultivation. Our findings support the formulation of public policies and incentive programs that promote sustainable agricultural practices, assisting in both the conservation of protected areas and the socio-economic development of the people that depend on these land resources. Key project outcomes are annual maps of active versus short-term fallow cropland extent as well as annual maps of fractional woody vegetation cover. The methodological foundation for producing these maps is the application of state-of-the-art time-series pre-processing and analysis workflows using Sentinel-2 and Sentinel-1 time series. We use machine-learning models for mapping active and fallow cropland annually and apply active learning routines to facilitate efficient training data collection. We apply regression-based unmixing approaches leveraging land surface phenological metrics for producing annual fractional woody cover estimates. We integrate image processing routines that minimize inter-annual variability in the outputs that may arise from variations in clear-sky observation density.
We here provide initial insights on workflow development, outlining potential and challenges related to our methodology across three case study regions, covering the Maputo National Park (PNAM), Gilé National Park (PNAG), and Niassa Special Reserve (NSR), including associated buffer zones. These areas represent diverse regions of southern, central, and northern Mozambique, each with unique histories of protection status, land use rights, and agricultural activities, representing a good basis to assess the transferability of our workflow. The resulting knowledge about patterns and dynamics in shifting cultivation systems shall provide robust information to strengthen the communication and collaboration between local communities and protected area managers, reducing conflicts related to access to and use of natural resources.
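As an illustration of the unmixing step, a two-endmember linear mixing model can be solved in closed form. The endmember spectra below are hypothetical, and this sketch omits the phenological metrics and regression models the project actually uses:

```python
import numpy as np

def unmix_fraction(pixel, woody, nonwoody):
    """Estimate the woody cover fraction of a mixed pixel by least squares
    against two endmember spectra, clipped to the physical [0, 1] range."""
    # Solve pixel ≈ f * woody + (1 - f) * nonwoody for the scalar f.
    d = woody - nonwoody
    f = np.dot(pixel - nonwoody, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

woody = np.array([0.05, 0.30, 0.20, 0.40])     # hypothetical endmember spectra
nonwoody = np.array([0.20, 0.25, 0.35, 0.15])
mixed = 0.7 * woody + 0.3 * nonwoody           # a pixel with 70 % woody cover
print(unmix_fraction(mixed, woody, nonwoody))  # ≈ 0.7
```

Regression-based unmixing generalises this idea by training a regressor on many such synthetic mixtures, which tends to be more robust to endmember variability than a direct inversion.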

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Towards an Operational EO-integrated LULUCF and Carbon Removal Monitoring, Reporting and Verification Service at Pan-European level

Authors: Nathalie MORIN
Affiliations: GeoVille Information Systems and Data Processing GmbH
The 196 countries signatory to the Paris Agreement (2015) report on their annual anthropogenic greenhouse gas (GHG) emissions by sources and removals by sinks to the United Nations Framework Convention on Climate Change (UNFCCC). The European Union has set ambitious targets through its Green Deal Agenda to reach climate neutrality by 2050. The Land Use, Land Use Change and Forestry (LULUCF) sector plays a crucial role as the main carbon pool compensating emissions from the agricultural, energy and industry sectors. The latest 2023 revision of the EU LULUCF Regulation compels Member States (MS) to meet new requirements using geospatially explicit land use conversion data and higher-tier approaches for key categories such as forest from 2027 onwards. Earth Observation (EO), in particular current and future Copernicus satellite missions and derived thematic products, can support policymakers by providing wall-to-wall maps of Activity Data in compliance with the TACCC Principles of Transparency, Accuracy, Consistency, Completeness and Comparability. Within a EUROPEAID-funded activity, we achieved the first EO-based Monitoring-Reporting-Verification (MRV) system for monitoring emissions and removals from LULUCF in Türkiye, based on Landsat time series over the 1990-2015 period. Lessons learned from this activity as well as a longstanding production history for the Copernicus Land Monitoring Services (CLMS) led to the successful implementation of the first CORINE Land Cover Plus (CLCplus) Instances. The latter are based on the ingestion of the latest updates of existing CLMS data products, which are combined through the extraction of rulesets according to a common EAGLE nomenclature in a 100m grid expert product. CLCplus LULUCF Instances Activity Data (AD) maps and area statistics at EEA-38 level provide an independent proxy for the verification of countries’ LULUCF inventories in support of the European Environment Agency (EEA)’s LULUCF MRV system and EU policy framework.
Further significant national and European R&D investments in recent years (> 5 million euros) led to technological advances ready to be integrated, thereby shortening the time towards operational MRV systems. SMARTCO2 (Austrian Space Applications Program - ASAP 17) aimed at evaluating the requirements and gaps in GHG reporting, revealing the high potential of an EO-based MRV service for the forest carbon offset market in Europe. The Austrian R&D project GHG-KIT (ASAP 18) addressed the transition between the Landsat and Sentinel era and provides an EO-integrated LULUCF prototype over the 9-year period 2015-2023 and a concept for backward extension for the verification of the Austrian LULUCF inventory. The ESA GDA Climate Resilience developed a proof-of-concept in collaboration with the World Bank for MRV2.0 wall-to-wall Above-Ground Biomass (AGB) mapping in Mozambique. Furthermore, BREATHE - EO-based enhancement and verification of LULUCF Inventories for Forest & Biomass - provides an MRV system for carbon emissions and removals from AGB mapping based on NASA GEDI spaceborne LiDAR merged with Harmonized Landsat Sentinel-2 (HLS) data at 30m resolution, countrywide over Austria and Türkiye for the period 2019-2023. CarbComply (ASAP 20) aims to establish an innovative EO-based compliance evidence service in support of the EU Deforestation-free Regulation (EUDR) in Austria, specifically focusing on deforestation and forest degradation, carbon emissions and removals from long-term AGB density time series, and soil organic carbon (SOC) monitoring for arable land in Austria. Uniting the outcomes of the above investments now offers a scientifically proven technological solution and opportunity to establish an EO-integrated LULUCF and Carbon Removal MRV service covering transition periods of 20-35 years.
If implemented on a high-performance cloud operations system, it can enable the roll-out at pan-European level, addressing the needs of both the compliance and private carbon markets, in support of MS GHG reporting, EU-level MRV as well as the overhauling of the EU Emissions Trading System.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Aligning Geospatial Methods for Corporate Greenhouse Gas Accounting of Deforestation

Authors: Lucia Fitts Vargas, Oliver James, Alexi Ernstoff, David Gibbs, Elizabeth Goldman, Nancy Harris, Thierry Hohmann, Matthew Ramlow, Lindsey Sloat, Alexandra Stern, Benjamin Wielgosz, Chelsea Wilson, Caroline Winchester
Affiliations: World Resources Institute, Quantis
Globally, the Agriculture, Forestry, and Other Land Uses (AFOLU) sector contributes 21% of anthropogenic greenhouse gas (GHG) emissions (11.9 ± 4.4 Gt CO2e yr-1, IPCC 2022), with land use change accounting for about half of these emissions. The expansion of agricultural commodities into non-agricultural land, including native ecosystems, is the primary driver of land use change. For many consumer goods companies, emissions from deforestation and other land-use changes dominate their Scope 3 emissions, which are emissions not directly produced by a company but which arise indirectly through their upstream supply chain sourcing. To track these emissions, companies use corporate GHG inventories. The GHG Protocol’s Land Sector and Removals Standard (LSRS) offers high-level requirements and guidance on how to report emissions from land use change, including deforestation. However, given the breadth of the audience, the sectors it serves and the issues it covers, the LSRS offers considerable flexibility on the data and methods companies can use to estimate them. As a result, companies sourcing the same commodities from the same supply chains may report different GHG emissions and land tracking metrics and consequently arrive at different conclusions when tracking progress towards reducing emissions. This can lead to confusion, inefficiencies, and contradicting claims from companies that are based solely on methodological or data differences rather than on-the-ground realities. To address the urgent need to improve comparability of results and provide a harmonized resource for the industry, the World Resources Institute and Quantis, in collaboration with other expert partners, are developing a set of best practices based on geospatial data and methods that operationalize key parts of the LSRS, including land use change (LUC) emissions.
Using geospatial data for GHG reporting provides consistency in approaches across scales and levels of traceability, helping ensure that a company’s path towards emissions reductions results from tangible on-the-ground actions. By publishing detailed methods at different levels of traceability and offering open-source tables of emission factors for at least 42 key commodities, the goal is to promote consistency and comparability in corporate inventory reporting. This will enable companies to track progress credibly, facilitating meaningful comparisons and driving progress in reducing GHG emissions in supply chains.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Recent national and sub-national forest loss trends

Authors: Juan Carlos Laso Bayas, Orysia Yaschun, Zoriana Romanchuk, Martina Dürauer, Steffen Fritz
Affiliations: International Institute for Applied Systems Analysis
Earth Observation has contributed greatly to forest loss tracking, with several data sets using satellite imagery to detect tree cover and tree loss over the last decades. Nevertheless, forest loss is different from tree loss, as it implies not merely the presence or absence of trees (including tree crops like oil palm) but of a forested ecosystem, with varying definitions of “forest”. According to FAO, forest is a land-use category that also includes temporarily unstocked forests; forest loss can nevertheless be teased out of tree loss (and similar) products, although additional adjustments are needed. We produced global estimates of forest loss for the last 7 years by masking tree loss derived from the yearly Hansen et al (2013) layer (from 2017 onwards) using two auxiliary data sets. The first mask employed two classes from the 2015 Global Forest Management layer (Lesiv et al, 2022): class 11, i.e., naturally regenerating forests without any signs of management, including primary forests, and class 20, i.e., naturally regenerating forests with signs of forest management (e.g., logging, clear cuts). The second data set used to delimit forested areas was the JRC Global Map of Forest Cover 2020 (Bourgoin et al 2023). Here, all pixels from this second data set that also coincided with pixels belonging to either class 11 or 20 in the Global Forest Management layer, and where tree loss was reported in the Hansen layer, were counted as deforestation. Since all these data sets have different spatial resolutions, they were all aggregated to a pixel size of 1 km and the masking was done at this resolution. We summarized the resulting forest loss first at national level (administrative level 0) using the administrative boundaries obtained from GADM (www.gadm.org).
For countries with comparatively high deforestation trends when contrasted with the average forest loss, further forest loss statistics were calculated at subnational level (GADM administrative level 1). The talk will present the results of this classification, national and subnational deforestation numbers, the potential advantages and pitfalls of the procedure, and its potential use to inform forest loss policies.
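The masking logic can be sketched with toy rasters; the array values are illustrative, and the real workflow first aggregates all layers to the common 1 km grid:

```python
import numpy as np

# Toy 1 km grids: Hansen tree loss (boolean), Global Forest Management class,
# and the JRC Global Map of Forest Cover 2020 (boolean).
tree_loss = np.array([[1, 1, 0], [0, 1, 1]], dtype=bool)
fml_class = np.array([[11, 53, 20], [11, 20, 31]])  # 11/20 = naturally regenerating
jrc_forest = np.array([[1, 1, 1], [1, 0, 1]], dtype=bool)

# Tree loss counts as forest loss only inside naturally regenerating forest
# (classes 11 and 20) that the JRC layer also maps as forest in 2020.
natural = np.isin(fml_class, (11, 20))
forest_loss = tree_loss & natural & jrc_forest
print(int(forest_loss.sum()))  # number of 1 km cells counted as deforestation
```

Summing the resulting boolean raster within GADM administrative polygons then yields the national and subnational forest loss statistics the talk describes.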

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Ex-Ante Baseline Deforestation Risk Modelling

Authors: Mohan Smith, Aida Mashkouri
Affiliations: Revalue
Carbon projects focussed on Avoiding Unplanned Deforestation and Degradation (AUDD) make up a key component of the carbon credit market. The credibility of AUDD credits is currently in doubt, as credits can be issued with poor correlations to actual CO2 emission avoidance. We address this with a new statistical approach for ex-ante baseline projections to understand deforestation risk prior to intervention in the Ruvuma region of Tanzania. This method is designed to surpass the VERRA ex-ante risk model and help re-establish market confidence in AUDD projects. A critique of the VERRA methodology is that it is overly simplistic and relies heavily on a single variable. It is hard to justify that this approach correctly characterises the underlying drivers of deforestation. Binning forest regions by distance to the forest edge, calculating a deforestation rate for each bin, and employing a simple persistence forecast into the future is not a valid approach to produce long-term projections or to characterise risk. It does not capture the temporal evolution of the system or its stochasticity. Instead, we present a more comprehensive approach to understanding deforestation by constraining all physically meaningful drivers of deforestation. We utilise remote sensing data to measure distance metrics to various points of infrastructure, accessibility metrics between these points, geographical characteristics of the project area and human population data. We engineer these datasets to extract physically justified information. We then combine them within simple regression models to establish their relative importance in approximating historic deforestation likelihood. With deforestation likelihood we model potential future deforestation trajectories using Markov chain methods. This enables us to comprehensively understand the likelihood of future deforestation rates in a dynamic way that captures the evolution of the system.
Whilst this approach does not take into account other socio-economic, cultural or political factors that may lead to deforestation, it provides our best characterisation of those elements of the deforestation problem that are quantifiable and predictable. The work, specifically the evaluation of the deforestation drivers and their importance, should further enhance dynamic baseline estimation and control matching approaches. In this way we hope to add credibility to all elements of AUDD project baselines, such that AUDD credits can confidently be linked to at least 1 tonne of CO2 avoidance.
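The trajectory idea can be sketched as a Monte Carlo simulation of a two-state (forest/cleared) Markov chain. The per-pixel risk values below are hypothetical stand-ins for the regression outputs, and the sketch treats clearing as absorbing, which is a simplification:

```python
import numpy as np

def simulate_trajectories(p_deforest, years, runs, rng):
    """Monte Carlo forest-loss trajectories for a two-state Markov chain.

    p_deforest -- per-pixel annual probability of the forest -> cleared
                  transition (cleared is absorbing in this sketch)
    Returns an array of shape (runs, years) with the forest fraction left.
    """
    n = p_deforest.size
    out = np.empty((runs, years))
    for r in range(runs):
        forest = np.ones(n, dtype=bool)
        for t in range(years):
            cleared = forest & (rng.random(n) < p_deforest)
            forest &= ~cleared
            out[r, t] = forest.mean()
    return out

rng = np.random.default_rng(42)
# Hypothetical per-pixel risks from a regression on distance/accessibility drivers.
risk = rng.uniform(0.0, 0.05, size=2000)
traj = simulate_trajectories(risk, years=20, runs=200, rng=rng)
lo, hi = np.percentile(traj[:, -1], [5, 95])  # uncertainty band on year-20 cover
```

Unlike a single persistence forecast, the ensemble of runs yields a distribution of plausible baselines, so the projected emission avoidance carries an explicit uncertainty band.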

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Aboveground Biomass Net-Change and Private Landholdings in the Brazilian Amazon: How Are They Related?

Authors: Mateus Macul, Dr Nezha Acil, Dr Fernando Espirito-Santo, Prof Heiko Balzter
Affiliations: University of Leicester
In the Brazilian Amazon biome, 33% of land is private and 46% is public, with 26% of native vegetation being under private ownership and 52% public [1]. Although public land used for conservation is crucial for protecting the forest cover, private lands still hold 9 Gt of carbon in the aboveground biomass (AGB) and are often associated with forest loss [1–3]. In fact, it is clear that land-use change (LUC) in the Amazon forest happens mainly in regions occupied by landholdings. However, it is still not clear how farmlands are shaping the forest cover, and consequently, shaping the AGB balance. Can AGB net-change provide insights into how land tenure is shaping forest cover in the Brazilian Amazon? AGB dynamics in forest cover, described by their balance over time, can be affected by two main LUC processes: forest disturbances (deforestation and degradation) and forest recovery. Although deforestation rates have been controlled in recent years [4], studies have shown that forest regeneration is below its potential, whereas AGB decreases over time, mainly due to human-induced forest degradation, such as fire and selective logging [5–7]. Comparing annual forest cover maps, forest net recovery is responsible for offsetting no more than 10% of deforestation emissions [8,9]. AGB net change derived from LUC processes could provide insights into how the forest cover changes over the years. This study aims to understand the relationship between AGB net-change and the landholdings in the Brazilian Amazon biome by combining: 1) the 100 m resolution AGB net-change from 2015-2021 derived from the European Space Agency's (ESA's) Climate Change Initiative (CCI) Biomass team AGB data version 5.1 [10] and; 2) land tenure data for private landholdings from the Brazilian Environmental Rural Cadastre (CAR) processed and assembled by Camara et al. (2023) [2].
The relationship was assessed on a 1° (~100 km) grid with the following variables calculated for each grid cell: number of properties, average size of properties, total sum of AGB net-change, mean AGB net-change, and median AGB net-change. To isolate the effect of private properties, we applied a mask of private landholdings to the AGB net-change data. Results showed that the number and size of properties were inversely correlated: the more properties, the smaller their size. Additionally, number and size scale logarithmically: as size increases, the number of properties drops abruptly. Regarding the effects on forest biomass, grid cells with smaller properties are more associated with negative AGB net-change, whilst neutral and positive net changes are more common in grid cells with larger properties. This is supported by literature showing that small properties can be associated with higher deforestation rates, as smallholders lack resources, which forces them to expand their use of land to increase income. On the other hand, larger landholders can afford to leave land unused for longer, where the forest can naturally regrow [11]. The relation between the size and number of private properties and the AGB balance can provide insights into the forest cover balance in the Brazilian Amazon. Investigating how landholding size scales and is distributed across the biome can support the understanding of how farmlands shape the forest cover. Therefore, this work highlights the importance of capturing the scaling relation between the size and number of landholdings, potentially modelling the distribution of farmland sizes, to further understand the deforestation and forest regeneration processes driven by farmlands.
References:
[1] Freitas F L M, Englund O, Sparovek G, Berndes G, Guidotti V, Pinto L F G and Mörtberg U 2018 Who owns the Brazilian carbon? Glob. Change Biol. 24 2129–42
[2] Camara G, Simoes R, Ruivo H M, Andrade P R, Soterroni A C, Ramos F M, Ramos R G, Scarabello M, Almeida C, Sanches I, Maurano L, Coutinho A, Esquerdo J, Antunes J, Venturieri A and Adami M 2023 Impact of land tenure on deforestation control and forest restoration in Brazilian Amazonia Environ. Res. Lett. 18 065005
[3] Pacheco A and Meyer C 2022 Land tenure drives Brazil’s deforestation rates across socio-environmental contexts Nat. Commun. 13 5759
[4] INPE 2019 Projeto PRODES Digital: mapeamento do desmatamento da Amazônia com imagens de satélite Inst. Nac. Pesqui. Espac.
[5] Heinrich V H A, Dalagnol R, Cassol H L G, Rosan T M, de Almeida C T, Silva Junior C H L, Campanharo W A, House J I, Sitch S, Hales T C, Adami M, Anderson L O and Aragão L E O C 2021 Large carbon sink potential of secondary forests in the Brazilian Amazon to mitigate climate change Nat. Commun. 12 1785
[6] Qin Y, Xiao X, Wigneron J-P, Ciais P, Brandt M, Fan L, Li X, Crowell S, Wu X, Doughty R, Zhang Y, Liu F, Sitch S and Moore B 2021 Carbon loss from forest degradation exceeds that from deforestation in the Brazilian Amazon Nat. Clim. Change 11 442–8
[7] Fawcett D, Sitch S, Ciais P, Wigneron J P, Silva-Junior C H L, Heinrich V, Vancutsem C, Achard F, Bastos A, Yang H, Li X, Albergel C, Friedlingstein P and Aragão L E O C 2023 Declining Amazon biomass due to deforestation and subsequent degradation losses exceeding gains Glob. Change Biol. 29 1106–18
[8] Silva Junior C H L, Heinrich V H A, Freire A T G, Broggio I S, Rosan T M, Doblas J, Anderson L O, Rousseau G X, Shimabukuro Y E, Silva C A, House J I and Aragão L E O C 2020 Benchmark maps of 33 years of secondary forest age for Brazil Sci. Data 7 269
[9] Smith C C, Espírito‐Santo F D B, Healey J R, Young P J, Lennox G D, Ferreira J and Barlow J 2020 Secondary forests offset less than 10% of deforestation‐mediated carbon emissions in the Brazilian Amazon Glob. Change Biol. 26 7006–20
[10] Santoro M and Cartus O 2024 ESA Biomass Climate Change Initiative (Biomass_cci): Global datasets of forest above-ground biomass for the years 2010, 2015, 2016, 2017, 2018, 2019, 2020 and 2021
[11] D’Antona Á O, VanWey L K and Hayashi C M 2006 Property Size and Land Cover Change in the Brazilian Amazon Popul. Environ. 27 373–96
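As a purely illustrative sketch (not the authors' code), the per-grid-cell summaries and the inverse count-size relation described in the abstract could be computed along these lines; all data, seeds, and variable names here are synthetic assumptions:

```python
import numpy as np

def grid_cell_stats(sizes_ha, agb_net_change):
    """Summarise one grid cell: property count, mean property size,
    and sum/mean/median AGB net-change (inputs are per-property values)."""
    return {
        "n_properties": len(sizes_ha),
        "mean_size_ha": float(np.mean(sizes_ha)),
        "agb_sum": float(np.sum(agb_net_change)),
        "agb_mean": float(np.mean(agb_net_change)),
        "agb_median": float(np.median(agb_net_change)),
    }

rng = np.random.default_rng(42)
# Synthetic cells: many small properties with AGB losses vs. few large ones
small_cell = grid_cell_stats(rng.lognormal(3, 0.5, 500), rng.normal(-2.0, 1.0, 500))
large_cell = grid_cell_stats(rng.lognormal(6, 0.5, 20), rng.normal(0.5, 1.0, 20))

# The inverse count-size relation appears as a negative slope in log-log space
counts = np.array([small_cell["n_properties"], large_cell["n_properties"]])
sizes = np.array([small_cell["mean_size_ha"], large_cell["mean_size_ha"]])
slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
```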
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Towards an EO-driven and very high-resolution bookkeeping model (EO4BK project)

Authors: Thomas Gasser, Antoine Lefebvre, Maxime Knibbe, Alexandre Rouault, Dr. Philippe Ciais, Ronny Lauerwald, Daniel Goll, Yang Su, Ana Bastos, Sebastian Wieneke, Dominic Schierbaum, Stephen Sitch, Thais Rosan, Scott Barningham, Lina Mercado
Affiliations: IIASA, KERMAP, LSCE, INRAE, University of Leipzig, University of Exeter
Land-use and land-cover change (LULCC) fluxes are among the most uncertain components of the global carbon cycle, presenting significant challenges for accurately assessing the global carbon budget. Uncertainties stem from inconsistent definitions (e.g. between scientific estimates and national greenhouse gas inventories), different modeling tools and methodologies, limited resolution of land-use data, and the complex interplay of natural and anthropogenic processes. Here, we summarize progress on the ESA-funded EO4BK project, which aims to address some of these challenges by pioneering a new approach that integrates high-resolution Earth Observation (EO) data into a Bookkeeping (BK) model to advance the accuracy and granularity of LULCC flux estimates. Unlike process-based Dynamic Global Vegetation Models (DGVMs), BK models are computationally efficient and data-driven, making them ideal for leveraging very large datasets and deriving EO-driven insights. Our aim is to develop a next-generation BK model capable of using spatially explicit EO data at resolutions of 10 to 30 meters to enhance global and regional carbon budget assessments. Our work focuses on Europe and Brazil as case studies. These regions represent contrasting landscapes with diverse land use practices and varying levels of data availability. By combining Sentinel-1 and Sentinel-2 satellite imagery with deep learning techniques, we will classify crop types (especially distinguishing C3 and C4 crops), identify key agricultural management practices such as double cropping, no-till farming, and irrigation, and map LULCC transitions. Combined with fine-grained EO data on gross primary productivity and above-ground biomass from existing products, as well as with a set of backstop parameters for soil carbon obtained using a DGVM, this will inform and parameterize the BK model, which will be benchmarked against existing models in use in the annual Global Carbon Budget. 
Expected outcomes include improved spatial and temporal estimates of LULCC-related carbon fluxes, detailed assessments of the impact of agricultural transitions and management on carbon budgets, and ultimately insights into the trade-offs between climate mitigation strategies and ecosystem services. Additionally, this work will produce open datasets, a new and open BK modeling platform, and a roadmap for operationalizing EO-based BK models in future carbon budget assessments.
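To illustrate the general bookkeeping idea in miniature (this is a toy sketch, not the EO4BK model): a BK model attributes committed emissions to each land-use transition via prescribed response curves, for instance an exponential release of the cleared carbon stock over time. The parameter values below are arbitrary assumptions.

```python
import numpy as np

def bookkeeping_flux(area_converted, c_density_loss, tau, years=50):
    """Toy bookkeeping response curve: land cleared in year 0 releases its
    carbon stock as an exponential decay with e-folding time tau (years).
    area_converted in ha; c_density_loss in tC/ha; returns annual tC fluxes."""
    t = np.arange(years)
    # Annual emission = total committed stock * fraction released in year t
    annual = area_converted * c_density_loss * (np.exp(-t / tau) - np.exp(-(t + 1) / tau))
    return annual

emissions = bookkeeping_flux(area_converted=100.0, c_density_loss=150.0, tau=10.0)
total = emissions.sum()  # approaches the full committed stock as years grow
```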
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Time series mapping of the world’s 25 important crops

Authors: Liangzhi You, Dr. Deepak Ray, Dr Zhe Guo, Shuang Zhou
Affiliations: International Food Policy Research Institute (ifpri), University of Minnesota
There have been significant advancements both in crop type mapping using satellite-based observations and pixel-level classification, and in crop type mapping with crop statistics. Both methods have their advantages and limitations, but in recent years opportunities to bring the two together have become more favorable and can potentially lead to a new generation of global fine-resolution crop type maps. In our presentation we will share (1) the current state of crop mapping using the (statistical) SPAM modeling system for circa 2020; specifically, we will report on the major update, currently underway, to the version 1 circa-2020 product released in the summer of 2024. (2) We will discuss how we are developing a time series version of the SPAM model outputs, and of the University of Minnesota crop harvested area and yield maps (a separate product), for 25 important global crops. (3) Finally, we will discuss how we are attempting to fuse remote sensing crop type mapping with statistical approaches, which is leading to better locational accuracy of crop types. The resulting higher-quality time series of global crop harvested area and crop yield maps can be deployed not only to visualize the locations of crop production on existing natural landscapes around the world at much higher resolutions; it can also be used to compute the time evolution of production sourced from natural lands lost through deforestation and grassland loss, which was never possible before.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Increasing Carbon Loss from Forest to Cropland Conversion in the 21st Century

Authors: Shi Jun Wee, Xiao-Peng Song, Alexandra Tyukavina, Dr John Armston, Dr Peter Potapov, Dr Hao Tang
Affiliations: University of Maryland, National University of Singapore
The rising global demand for food, fuel, and feed has contributed to a massive ~100 Mha expansion of croplands in the early 21st Century (Potapov et al., 2022). Cropland expansion is responsible for half of global deforestation (FAO, 2022), with most newly established croplands originating from tropical forests (Gibbs et al., 2010), exemplifying a major conflict between Sustainable Development Goals #2 (Zero Hunger) and #15 (Life on Land). The replacement of forest ecosystems, which are vital carbon sinks, contributes substantially to greenhouse gas emissions. To quantify carbon loss from forest-to-cropland conversion, we combined Global Ecosystem Dynamics Investigation (GEDI) footprint-level aboveground biomass density (AGBD) data with high-resolution global maps of forest loss and cropland extent, to produce spatially explicit estimates of gross aboveground carbon loss (50°N to 50°S). Between 2004 and 2019, we estimated a mean carbon loss of 28.4 Tg C/yr from forest-to-cropland conversion, with rates tripling throughout the analysis period. Tropical countries across three continents are among the top contributors of carbon loss, including Brazil and Bolivia (South America), the Democratic Republic of the Congo (Sub-Saharan Africa), and Cambodia (Southeast Asia). Major hotspots of carbon loss are located in the Chaco, Cerrado, and southern Amazon biomes in South America, DRC, Tanzania, and Zambia in Sub-Saharan Africa, and Cambodia in Southeast Asia. While higher rates of forest-to-cropland conversion usually corresponded to higher carbon loss, several areas diverged from this pattern. For example, Argentina showed relatively lower carbon loss (ranked #14 nationally) despite higher conversion rates (ranked #2) due to the clearing of forests with lower biomass density. Our time-series maps also revealed a distinct geographic displacement of carbon loss hotspots in South America throughout the study duration. 
Overall, our findings can help clarify the uncertainties in the carbon budget, which is crucial since land-use change is one of the most uncertain components of the global carbon budget (Houghton et al., 2012). Moreover, forest-to-cropland conversion remains a significant pathway of land-use change, underscoring the importance of quantifying its associated carbon loss.
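As a minimal sketch of the overlay logic described above (not the authors' GEDI-based estimator), gross aboveground carbon loss can be summed over pixels flagged as both forest loss and new cropland; the arrays, pixel area, and the commonly assumed 0.5 carbon-to-biomass ratio are illustrative assumptions:

```python
import numpy as np

CARBON_FRACTION = 0.5  # commonly assumed carbon fraction of dry aboveground biomass

def gross_carbon_loss(agbd, forest_loss, cropland, pixel_area_ha):
    """Sum aboveground carbon over pixels that experienced forest loss AND
    became cropland. agbd in Mg/ha; masks are boolean arrays; returns Mg C."""
    converted = forest_loss & cropland
    return float((agbd[converted] * pixel_area_ha).sum() * CARBON_FRACTION)

# Tiny synthetic 2x2 scene (30 m pixels ~ 0.09 ha each)
agbd = np.array([[200.0, 50.0], [120.0, 0.0]])
forest_loss = np.array([[True, True], [False, True]])
cropland = np.array([[True, False], [True, True]])
loss = gross_carbon_loss(agbd, forest_loss, cropland, pixel_area_ha=0.09)
```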
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Dynamic Baseline Approach Based on Barlow Twins for ARR Projects in Brazil

Authors: Markus Immitzer, Josue Lopez Ruiz, Talita Nogueira Terra Parizzi, Clara Rajadel-Lambistos, Kleber Trabaquini, Clement Atzberger
Affiliations: Mantle Labs Ltd
Nature-based solutions (NbS) in forests refer to projects and strategies that harness the natural processes and ecosystem services provided by forests to address environmental challenges such as climate change, biodiversity loss, and water management. These projects rely on protecting, restoring, and sustainably managing forests and other natural ecosystems to provide ecological, social, and economic benefits. Baselines, which describe what would have happened without the project, are often used to assess and quantify the benefits of such projects. Many past NbS projects were criticized because they usually work with static baselines, which have the major disadvantage of not accounting for changes in socio-environmental conditions. In addition, non-representative baselines were often consciously or unconsciously used for the projects, leading to overly optimistic carbon credits. The dynamic baseline concept provides a more flexible, accurate, and transparent framework for assessing the impacts of nature-based solutions. It reflects evolving social, political, and environmental conditions, allows for adaptive management, and increases a project's credibility. The recently published VCS Methodology VM0047 essentially follows this concept for ARR projects but leaves much open in terms of implementation. We have implemented the requirements for agroforestry projects in Brazil with a scientific approach using state-of-the-art Earth Observation (EO) data. The basis is a Barlow Twins foundation model, which utilizes advances in self-supervised learning (SSL) from multi-spectral EO time series. This non-contrastive foundation model condenses all available spectral observations within a year into a few orthogonal, highly informative representations at 10 m (for Sentinel-2) and 30 m (for Landsat 7/8).
This means that the same information is available for each data point, describing the entire annual development of the multispectral data in a compressed manner, even in areas with very high cloud cover. An initial donor pool for selecting counterfactuals was defined for each project unit (same ecoregion, excluding protected areas and already registered projects), and only areas within a 100 km radius of the project's centroid were included. We then added a donor-pool refinement step based on annual stacks of Barlow Twins representations, with five representation layers per year, considering the five years preceding project implementation. Based on this input data, we identified the 150 best-matching counterfactuals with respect to the standardized representation layers. Amongst the 100 best-matching controls, the 30 most similar counterfactuals were finally selected based on the Euclidean distance calculated for the 10-year above-ground biomass (AGB) values. The annual AGB products were themselves created by combining the representations with sparse Global Ecosystem Dynamics Investigation (GEDI) full-waveform measurements. This ensured that the control points had the same history in terms of spectral-temporal similarity and the same AGB trajectories as the project units, which was verified by a standardized difference of means (SDM) test. The selected controls remain in place for the entire duration of the project (> 40 years). The AGB development of the project units is compared annually with that of the controls to determine the additionality of the project implementation. This approach therefore allows for a better assessment of additionality and thus improves the monitoring and transparency of ARR projects.
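The distance-based matching step can be sketched as follows. This is an illustrative simplification (standardized features, plain Euclidean nearest neighbours) with synthetic data, not the project's actual pipeline:

```python
import numpy as np

def select_counterfactuals(project_feats, donor_feats, k):
    """Rank donor-pool candidates by Euclidean distance to a project unit,
    after standardising each feature over the donor pool (zero mean, unit sd)."""
    mu, sd = donor_feats.mean(axis=0), donor_feats.std(axis=0) + 1e-9
    z_donor = (donor_feats - mu) / sd
    z_proj = (project_feats - mu) / sd
    dist = np.linalg.norm(z_donor - z_proj, axis=1)
    return np.argsort(dist)[:k]  # indices of the k closest candidates

rng = np.random.default_rng(0)
donors = rng.normal(0, 1, size=(200, 25))        # e.g. 5 layers x 5 pre-project years
project = donors[17] + rng.normal(0, 0.01, 25)   # near-duplicate of donor 17
best = select_counterfactuals(project, donors, k=30)
```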
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: A preliminary framework for evaluation of spatially resolved uncertainties of land cover maps

Authors: Anna Pustogvar, Samuel E. Hunt, Bernardo Mota, Kevin J. Tansey, Heiko Balzter
Affiliations: National Physical Laboratory (NPL), University of Leicester, National Centre for Earth Observation
Land cover maps provide essential information on biophysical cover on the Earth’s surface. They are typically produced from Earth Observation (EO) data using machine learning (ML) models, which assign each pixel a single land cover class (per-pixel classification) or fractions of several land cover classes (sub-pixel classification). EO-derived land cover maps are used in a variety of applications, including national greenhouse gas accounting, land management, biodiversity monitoring, etc. Land cover information is also used as auxiliary data in retrievals of other EO products. For example, in the ESA Sentinel User Preparation project Urban LST for Public Health, sub-pixel land cover maps are being developed to improve land surface temperature retrievals over urban environments. The land cover mapping community has established protocols for assessing the quality of land cover maps. However, these protocols are mainly focused on evaluating the quality of a land cover map as a whole. This is performed by comparing a land cover map with an independent reference dataset and deriving global and class-specific quality metrics, known as overall accuracy and users’ and producers’ accuracies. These metrics do not provide any information on how the quality of a land cover map varies spatially, from pixel to pixel. Knowing the spatial variation of the quality is important for a number of reasons. Spatially resolved uncertainties can help users to choose the best land cover map for their specific area of interest. Also, if a land cover map is used as auxiliary data in another EO product, the uncertainty budget for that product will be incomplete without evaluating the per-pixel uncertainties of the land cover map. In this poster, we will present a preliminary framework for evaluation of spatially resolved uncertainties for EO-derived land cover maps. This framework follows the metrological stepwise approach for uncertainty evaluation.
It starts with defining the measurand and establishing the traceability. Then, it proceeds to the evaluation of individual uncertainty sources (in this poster, we will show only a selection of these sources, e.g., per-pixel radiometric uncertainties). In the final step, the uncertainties are propagated through the entire processing chain to the final land cover map. This framework is applied to two land cover maps: a land cover map over a countryside environment produced by a ML classification model using multispectral data and a land cover map over an urban environment produced by a ML regression model using hyperspectral data.
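One standard way to propagate per-pixel radiometric uncertainty through a classification step is Monte Carlo perturbation; the toy single-band threshold classifier below is an assumption for illustration only, not the poster's ML models:

```python
import numpy as np

def mc_class_probabilities(reflectance, u_reflectance, classify, n_draws=500, seed=1):
    """Propagate a pixel's standard radiometric uncertainty through a classifier
    by Monte Carlo: perturb the input and record how often each class is assigned."""
    rng = np.random.default_rng(seed)
    labels = [classify(reflectance + rng.normal(0, u_reflectance)) for _ in range(n_draws)]
    values, counts = np.unique(labels, return_counts=True)
    return dict(zip(values.tolist(), (counts / n_draws).tolist()))

# Hypothetical threshold classifier for a pixel near the decision boundary:
# reflectance 0.31 +/- 0.02 against a 0.30 class boundary
classify = lambda x: int(x > 0.30)
probs = mc_class_probabilities(0.31, 0.02, classify)
```

The resulting class-membership frequencies give a per-pixel expression of classification uncertainty that can be carried into a downstream product's uncertainty budget.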
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Annually Mapping Dominant Leaf Type of Bavaria State in Germany Using Sentinel-2 Images

Authors: Qian Song, Dr. Xiaoshan Ni
Affiliations: Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, Shanghai University of Technology and Science
In 2019, in order to respond effectively to global climate change, the German Federal Government officially set a goal of achieving greenhouse gas neutrality by 2045. Since then, forests, as an important carbon sink, have received unprecedented attention. However, in recent years Germany's forests have faced severe challenges, with their area and health severely affected by human activities (such as excessive logging) and natural factors (including drought, extreme weather events, pests and diseases). Faced with these challenges, the German government and relevant agencies are actively seeking effective solutions. Annual, long-term and detailed analysis of forest loss helps to gain a deeper understanding of the patterns of vegetation loss and the reasons behind them, and provides an important basis for formulating more scientific and reasonable policies and measures. As one of the largest contiguous forests in Germany and indeed in Europe, the Bavarian forests play a vital role in the country's overall carbon neutrality strategy. Therefore, this study quantified the dynamic changes of different dominant leaf types (DLT, comprising non-tree-covered areas, broadleaved trees, and coniferous trees) by generating a 10-meter resolution DLT map of Bavaria, Germany, annually from 2015 to 2023, aiming to explore the main causes of forest loss and how to enhance the carbon sequestration capacity of forests in support of Germany's ambitious carbon neutrality goal. Harmonized Sentinel-2 images with low cloud coverage (<35%), acquired in the summer months (May to August) of the same year, were mosaicked into a yearly remote sensing image of the state of Bavaria. The reference DLT map for the year 2018, which was validated by ground measurements, was downloaded from the Copernicus Land Monitoring Service.
We co-registered the DLT map with the acquired Sentinel-2 mosaic for 2018 and divided the whole area into training and test sets at a ratio of 8:2. A U-Net-based deep learning model was then trained and validated with the prepared dataset. Results show that over 92% of the pixels are correctly mapped for both the training and test data. The trained model was then used to map DLT over Bavaria annually. Due to severe radiometric errors, we excluded 2016 from the analysis. Our results reveal a significant increase in broadleaf forest cover (from 15.85% to 18.43%) and a slight decrease in coniferous forest area (21.6% to 20.3%) from 2015 to 2023. Due to the abnormal extreme droughts of 2018 and 2021, non-forest cover has increased significantly. Geographically, the Upper Franconia and Lower Bavaria regions changed more dynamically than other regions, especially in 2021, which might result from pest infestation. Broadleaf forests have also become increasingly dynamic since 2020.
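The scene-selection rule described above (same-year summer acquisitions under 35% cloud cover) amounts to a simple metadata filter; the sketch below is a hypothetical illustration with made-up scene IDs, not the authors' processing chain:

```python
from datetime import date

def select_mosaic_scenes(scenes, year, max_cloud=35.0, months=range(5, 9)):
    """Keep scenes from May-August of the given year with cloud cover
    below the threshold, mimicking the yearly mosaic selection criteria."""
    return [s for s in scenes
            if s["date"].year == year
            and s["date"].month in months
            and s["cloud_pct"] < max_cloud]

scenes = [
    {"id": "S2A_20180612", "date": date(2018, 6, 12), "cloud_pct": 12.0},
    {"id": "S2B_20180705", "date": date(2018, 7, 5), "cloud_pct": 48.0},   # too cloudy
    {"id": "S2A_20181101", "date": date(2018, 11, 1), "cloud_pct": 5.0},   # off-season
    {"id": "S2B_20170520", "date": date(2017, 5, 20), "cloud_pct": 8.0},   # wrong year
]
selected = select_mosaic_scenes(scenes, 2018)
```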
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Continuous Monitoring of Cropland Cover and Management to Support Carbon Modeling and Climate Action

Authors: Felix Lobert, Marcel Schwieder, Tom Broeg, Jonas Alsleben, Katja Kowalski, Akpona Okujeni, Selma Busse, Patrick Hostert, Stefan Erasmi
Affiliations: Thünen Earth Observation, Thünen Institute of Farm Economics, Earth Observation Lab, Geography Department, Humboldt-Universität zu Berlin, Earth Observation for Ecosystem Management, School of Life Sciences, Technical University of Munich, GFZ Helmholtz Center Potsdam, GFZ German Research Center for Geosciences, Institute of Geography, University of Goettingen, Integrative Research Institute of Transformations of Human-Environment Systems (IRI THESys), Humboldt-Universität zu Berlin
Agriculture is of great importance for global sustainability challenges, both as a significant contributor to greenhouse gas emissions and as a potential carbon sink. Croplands, while essential for food security, put pressure on natural ecosystems, contributing to biodiversity loss, soil degradation, and climate change. Transforming cropland management to support sustainable practices and reduce emissions from agricultural land use is essential in addressing these challenges. Monitoring, understanding, and modeling these transformations require accurate and comprehensive information on cropland status and management. Earth Observation provides the basis for frequent, large-scale monitoring of croplands, but appropriate methods are needed to turn the data into information for decision-makers. In our work, we propose a framework for the continuous monitoring of cropland and soil status by assessing fractional cover time series of photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV), and soil on croplands with regression-based spectral unmixing. As a use case, we focus on Germany by unmixing time series of Sentinel-2 and Landsat imagery at a national scale. Our framework introduces a novel strategy for enhancing the quantification of soils by including a soil-specific spectral unmixing approach that accounts for their spectral variability. This innovation is particularly relevant for annually cultivated croplands with frequent bare soil exposure. Our models predicted all three cover fractions for our reference sites with mean absolute errors (MAE) between 0.13 and 0.19. Introducing soil-specific unmixing reduced the MAE of the model predictions for soil by 11.3% and for NPV by 15.1% without compromising PV predictions. The soil-specific method proved especially effective in regions with bright soils, where the spectral signatures of NPV and soil showed less pronounced differences in the Sentinel-2 and Landsat spectral feature space.
Overall, our models were able to accurately predict fractional vegetation cover throughout the whole cultivation period and our results underline the added value of incorporating a soil-specific adjustment into the unmixing workflow. Based on these findings, we applied the soil-specific unmixing models to all the croplands of Germany. The resulting dense time series of fractional vegetation cover provides a powerful tool for characterizing cropland management practices at high spatial and temporal resolution on a national scale. The generated information is particularly critical during the winter season when accurate identification of whether a field is left as bare fallow or covered by winter crops, cover crops, or spontaneous vegetation is essential for robust carbon stock modeling. Additionally, detailed phenological information, such as the grow-up and termination phases of cover crops, can be derived from the fractional cover time series and can further contribute to enhanced modeling of carbon flux estimates. Overall, the high spatial and temporal resolution enables a more precise assessment of land-use and management impacts on soil organic carbon stocks and greenhouse gas emissions compared to existing approaches that only build on crop type information. This research contributes to the goals of reducing emissions from agricultural land use by providing actionable data for modeling the agricultural carbon cycle and soil health, supporting the shift toward sustainable croplands that sequester carbon.
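For intuition, linear spectral unmixing can be sketched as a least-squares fit of a pixel's spectrum against endmember spectra, with the fractions constrained to [0, 1] and normalised to sum to one. This is a deliberately simplified stand-in (with made-up four-band spectra) for the authors' regression-based, soil-specific approach:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate PV/NPV/soil cover fractions by unconstrained least squares,
    then clip to [0, 1] and renormalise so the fractions sum to one.
    endmembers: (n_bands, n_classes); pixel: (n_bands,)."""
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    f = np.clip(f, 0.0, 1.0)
    return f / f.sum()

# Synthetic endmember spectra (4 bands) for PV, NPV, soil (columns)
E = np.array([[0.05, 0.25, 0.30],
              [0.08, 0.30, 0.35],
              [0.45, 0.35, 0.40],
              [0.20, 0.40, 0.55]])
true_f = np.array([0.6, 0.1, 0.3])
pixel = E @ true_f          # noiseless mixture, so the fit recovers true_f
f_hat = unmix(pixel, E)
```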
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Investigating Planet Imagery to Monitor Crop Residue Management: First Test in North Italy.

Authors: Giulio Tellina, Mirco Boschetti, Monica Pepe, Andrea Ferrarini, Federico Filipponi, Francesco Nutini
Affiliations: Consiglio Nazionale Delle Ricerche - Istituto per il Rilevamento Elettromagnetico dell'Ambiente (CNR-IREA), Università Cattolica del Sacro Cuore (Unicatt), Consiglio Nazionale Delle Ricerche - Istituto di Geologia Ambientale e Geoingegneria (CNR-IGAG)
In the context of climate change mitigation, carbon farming is an innovative and sustainable approach that relies on agricultural practices (e.g. cover crops, minimum tillage, crop residue management) to sequester carbon in soil organic carbon (SOC). Carbon farming integrates traditional and sustainable agricultural soil management practices with state-of-the-art technology such as ground sensors, climate models and satellite imagery for monitoring and assessing the positive impact of such regenerative practices. These technologies also make it possible to set up protocols to monitor, report and verify (MRV) the impact of agricultural practices, in order to quantify and certify carbon sequestration. MRV protocols are fundamental in ensuring that agricultural practices are carried out correctly and that greenhouse gas emissions are effectively reduced. Satellite imagery plays a crucial role in MRV protocols since it allows phenomena to be monitored in their spatial and temporal variability, producing spatially explicit information. The aim of this work is to investigate the potential of Planet imagery to detect changes due to tillage and harvesting operations. A second objective is to check whether it is feasible to use multispectral optical data to identify the management of crop residues under carbon farming practices, such as keeping crop residues on the field at the end of the growing season, and to identify the presence and duration of such practices. This work is done within the framework of REMOTE-C, a project funded by the Italian Research Ministry, which focuses on developing an approach to estimate changes in SOC using a spatialized version of the Roth-C model fed by Remote Sensing (RS) products. In this context, developing advanced methods to identify key agricultural practices, namely i) harvesting time, ii) residue management and presence, and iii) tillage timing and type, is of key importance.
The identification of these dates is critical because they represent a direct assessment of conservation and regenerative management practices and also act as inputs to models that simulate SOC dynamics. The study area is located in northern Italy (Emilia-Romagna region), where 5 fields have been managed and monitored since 2021 by the Department of Sustainable Crop Production (DI.PRO.VE.S) of Università Cattolica Del Sacro Cuore. Each of the 5 experimental fields is divided into a test plot, where carbon farming actions are applied (e.g. cover crops, incorporation of all crop residues), and a control plot, where standard management is performed (e.g. removing most of the crop residues). These field experiments aim at delineating new carbon farming protocols and provide a unique opportunity to monitor similar fields with different residue management. The dates of all agricultural practices are available for these fields and are used as reference to explore the capability of remote sensing data to detect changes before and after soil preparation and harvesting. As a first step, 8,006 Planet satellite images were obtained over the study area. The data cover the period from August 2016 to January 2024, with almost daily coverage. The dataset includes images with four spectral bands (blue, green, red, near-infrared) from 2016 to 2022 and expands to eight bands, including Red Edge and Coastal Blue, which are interoperable with Sentinel-2, from 2020 to 2024. The high spatial resolution (3 m) and extended time window of these images allow detailed analyses of changes related to agricultural practices and land management over time. A database was created by extracting reflectance values from pixels within test and control fields for the period when the field experiments were conducted (2021-2024), with dates that were too cloudy labelled accordingly.
First, the time series of Planet bands and spectral indices (SI) are analyzed to inspect whether changes in spectral features are related to harvesting/tillage and whether differences are recognizable between test and control fields. Secondly, statistical analysis of differences in spectra and SI is performed on i) any given field between successive dates, to test whether harvesting and tillage can be identified, and ii) test versus control fields, to verify whether some spectral features can potentially be used for automatic detection of crop residues. An early test of Planet RGB photointerpretation and preliminary statistical analysis show that harvesting and tillage are detectable from spectral-temporal change analysis, making the search for spectral features for automatic mapping of field management promising. Once these spectral features have been verified on a wider dataset, they will be used to label Planet images, finding patches of harvesting/tillage to be used as training samples for a machine learning classification, exploiting the delta of reflectance between two dates (t1 pre-event and t2 post-event) and the spectral response of the second date (post-event), following a well-known procedure commonly adopted for burned area mapping (see Stroppiana et al. 2021). The final result will be a map with field-level information reporting the dates of practices (harvesting and tillage) and the duration of crop presence (a proxy of management and information about carbon degradation processes).
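The two-date delta approach can be illustrated with a simple NDVI-drop test between a pre-event and a post-event acquisition. This sketch uses made-up reflectances and an arbitrary threshold; the project's change detection relies on richer spectral features:

```python
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-9)

def detect_change(red_t1, nir_t1, red_t2, nir_t2, drop_threshold=0.3):
    """Flag pixels whose NDVI drops by more than a threshold between a
    pre-event (t1) and a post-event (t2) date, a crude proxy for harvest
    or tillage exposing soil and residues."""
    delta = ndvi(red_t1, nir_t1) - ndvi(red_t2, nir_t2)
    return delta > drop_threshold

# One vegetated pixel that is harvested, one stable bare-soil pixel
red_t1 = np.array([0.05, 0.20]); nir_t1 = np.array([0.50, 0.25])
red_t2 = np.array([0.25, 0.21]); nir_t2 = np.array([0.30, 0.26])
changed = detect_change(red_t1, nir_t1, red_t2, nir_t2)
```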
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Classification of Post-Deforestation Land Use with Multi-Modal Deep Learning

Authors: Jan Pišl, Dr. Gaston Lenczner, Dr. Gencer Sümbül, Professor Martin Herold, Camilo Ernesto Zamora Ospina, Professor Jan Dirk Wegner, Professor Devis Tuia
Affiliations: Environmental Computational Science and Earth Observation (ECEO) EPFL, German Research Centre for Geosciences (GFZ), Department of Mathematical Modeling and Machine Learning (DM3L)
Tropical forests play a key role in preserving global climate stability due to their rich biodiversity and ability to capture and store carbon. However, the rates of tropical deforestation remain alarmingly high. Developing effective policy responses to reduce deforestation requires understanding of the driving forces behind it. It is known that the majority of deforestation is driven by agricultural expansion, but more detailed data is needed. Drivers differ regionally, lead to different levels of carbon emissions, and change over time. To enable localized, targeted measures, each deforestation event needs to be attributed to a specific agricultural commodity or another land use type. Remote sensing allows monitoring of land use conversion following deforestation events, providing a proxy for such drivers. However, recognizing individual commodities is challenging due to similarities in spectral responses between fine-grained classes, the limited spatial resolution of free satellite imagery, and the limited labeled data. In this work, we address these challenges by developing a deep learning model trained to recognize post-deforestation land uses from multi-modal data. As inputs, the model uses a time series of Sentinel-2 images, geographic coordinates of the deforested location, and country-level statistics related to the production of agricultural commodities. Our approach leverages the fact that the production of agricultural commodities is geographically clustered due to climatic, cultural, and economic factors. By incorporating additional data modalities, the model learns associations between specific commodities and their spatial distribution, improving the model’s predictions where satellite imagery alone is insufficient. Our model recognizes eleven land use classes, including seven forest-risk commodities linked to the majority of tropical deforestation—beef, soy, palm oil, rubber, cocoa, coffee, and wood fiber. 
To integrate the different modalities, we transform all inputs to a common representation with modality-specific encoders, which are then passed to a Transformer neural network, where the self-attention modules enable interactions between them. To train the model, we compile a training dataset from public sources. Results demonstrate that integrating multi-modal inputs improves prediction quality, achieving a classification accuracy of 87%, an improvement of 9% over models trained exclusively on satellite imagery. To reach the same accuracy as image-only models (78%), our model requires five times less training data. The additional modalities are small in size and can be easily obtained at no cost for any location globally. The model architecture can also be extended to include more input modalities, such as climatic or topographic variables, which could further improve model performance. Overall, we show that integrating multiple modalities enables accurate recognition of fine-grained land use classes, with a particular focus on agricultural commodities associated with tropical deforestation. The model can be used to attribute tropical deforestation to drivers in an automatic, repeatable, and scalable way, supporting the design and implementation of targeted interventions to protect tropical forests.
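The fusion scheme described above, modality-specific encoders projecting to a common representation followed by self-attention over the joint token sequence, can be illustrated with a minimal NumPy sketch. The dimensions, random linear projections, and single attention layer below are hypothetical stand-ins, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # shared embedding dimension

def encode(x, W):
    # modality-specific encoder, reduced to a linear projection for this sketch
    return x @ W

s2_series = rng.normal(size=(12, 10))  # 12 Sentinel-2 time steps x 10 features
coords    = rng.normal(size=(1, 2))    # geographic coordinates of the location
stats     = rng.normal(size=(1, 7))    # country-level commodity statistics

# one joint token sequence across all modalities: (12 + 1 + 1, d)
tokens = np.vstack([
    encode(s2_series, rng.normal(size=(10, d))),
    encode(coords,    rng.normal(size=(2, d))),
    encode(stats,     rng.normal(size=(7, d))),
])

def self_attention(X):
    # single-head scaled dot-product attention: every token attends to all others,
    # which is what lets the modalities interact
    scores = X @ X.T / np.sqrt(X.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ X

fused = self_attention(tokens)
assert fused.shape == (14, d)
```

In a real Transformer the encoders are learned, attention is multi-head with learned query/key/value projections, and a classification head sits on top; the sketch only shows the shared-token-space idea.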

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Integrating Earth Observation and Modelling for Carbon Budgeting in Finnish Boreal Lakes

Authors: Mr Sakari Väkevä, Marie Korppoo, Minna Kuoppala, Jenni Attila, Marko Järvinen, Markus Huttunen, Pirkko Kortelainen, Karoliina Koho
Affiliations: Finnish Environment Institute (Syke), Geological Survey of Finland (GTK), Department of Geosciences and Geography, University of Helsinki (UH)
The boreal lakes of Finland are accumulating allochthonous carbon. This is often linked to drainage from specific soil types and land use classes, particularly peatlands ditched for forestry or agriculture. Successive lakes within river networks retain transported organic carbon in bottom sediments through processes such as phytoplankton settling, flocculation, or direct sedimentation of organic particulates, or release it into the atmosphere as greenhouse gases (GHGs), such as methane and carbon dioxide. Lakes also contribute to carbon cycling through macrophyte production (blue carbon) and other related geochemical processes. In Finnish lakes, blue carbon is predominantly associated with helophytes such as the common reed (Phragmites australis), which exhibit significant above-surface biomass and are often accompanied by thick organic sediment beds. While nitrogen and phosphorus cycling are well-documented due to extensive research, regulatory frameworks, and cross-border initiatives, carbon cycling in boreal lakes remains less understood and has lacked an established accounting scheme. The 2019 wetlands refinement to the Intergovernmental Panel on Climate Change (IPCC) Guidelines for National GHG Inventories suggested that regulated natural lakes (where surface area and/or residence time are changed by >10 per cent) should be treated as reservoirs from the point of view of GHG reporting. In a country like Finland with numerous boreal lakes (57,000 of more than one hectare in size) and a long history of lake regulation, this can constitute a significant monitoring and reporting challenge. Recently, water ecosystem management has also gained attention as a tool to support national climate targets, raising expectations for increasing carbon storage by sustainable blue carbon utilization. 
The Blue Lakes project (2023–2025) aims to quantify carbon stock magnitudes, cycling and burial in three Finnish pilot lakes—Oulujärvi, Vesijärvi, and Kallavesi—and scale these findings nationally using field data, existing environmental databases, catchment-level modelling, and Earth Observation (EO). The lakes were visited during field campaigns in August 2023, where we collected and measured variables such as macrophyte abundance (species, stem counts, diameters, height), porewater characteristics, and sediment cores. EO played a key role in planning and executing the fieldwork. One of the project’s aims is to enhance Finland’s catchment-scale water quality model (WSFS-Vemala) by integrating a macrophyte (blue carbon) module and improving the representation of carbon species and their processes. This update is motivated by the fact that boreal lakes differ significantly from the traditional (inorganic) carbonate buffer system: most of the Total Organic Carbon (TOC) is in dissolved form, dissolved organic carbon far exceeds the dissolved inorganic carbon (DOC ≫ DIC), and the chemistry of DOC gives the water additional buffering capacity. The goal of this work is to prepare WSFS-Vemala to deliver upscaled carbon budgets and carbon cycle dynamics, giving water managers tools for anticipating the impacts of land use changes, ecosystem modifications (such as ditching and blue carbon harvesting), or restoration efforts under various climate scenarios. EO supports the Blue Lakes analyses in multiple ways. It can provide direct estimates of the extent of blue carbon ecosystems and phenological growth cycles. EO can also provide the diffuse irradiance attenuation coefficient (Kd) to improve light climate inputs for the phytoplankton module. Proxy variables like absorption of Coloured Dissolved Organic Matter (aCDOM) can complement field monitoring when TOC data is not available. We will showcase the types of EO data analyzed and incorporated in catchment-scale water quality modelling.
We focus on high-resolution instruments such as EU and ESA’s Sentinel-2 and USGS’s Landsat, which, under optimal conditions, can deliver dozens of cloud-free scenes for lakes during summer. We also demonstrate the added value of EO by comparing model outcomes before and after the proposed upgrades. The project "Blue Lakes: Digitizing the carbon sink potential of boreal lakes" has received funding from the European Union – NextGenerationEU instrument and is funded by the Research Council of Finland, Call: Special RRF funding for research on key areas of green and digital transition (project number: 352811).
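The role of the EO-derived diffuse attenuation coefficient Kd mentioned above follows the Beer-Lambert law, I(z) = I0 · exp(-Kd · z). A minimal sketch (the Kd value below is a hypothetical figure for a humic boreal lake, not from the project):

```python
import math

def irradiance_at_depth(i0, kd, z):
    """Beer-Lambert attenuation of downwelling irradiance with depth z (m):
    I(z) = I0 * exp(-Kd * z), with Kd in 1/m."""
    return i0 * math.exp(-kd * z)

# hypothetical EO-derived Kd of 1.5 m^-1
remaining = irradiance_at_depth(1.0, 1.5, 2.0)
assert remaining < 0.05   # under 5% of surface light remains at 2 m depth
```

This is why an accurate Kd from satellite data matters for the phytoplankton module: small changes in Kd shift the euphotic depth substantially in dark, DOC-rich waters.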

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Mapping Salt Marsh Extent in Atlantic Canada

Authors: Elisha Richardson, Koreen Millard, Danika van Proosdij
Affiliations: Carleton University, Saint Mary's University
Salt marsh restoration has been identified as a natural climate solution pathway, with Environment and Climate Change Canada (ECCC) estimating up to 44,129 hectares of tidal marsh suitable for restoration, resulting in the potential for 1.5 TgCO2e per year of carbon mitigation (Drever et al., 2021). Each zone of the salt marsh, characterised by differences in elevation, vegetation, and inundation rate, provides a different capacity for carbon sequestration (van Proosdij et al., 2006). Therefore, as we work towards carbon accounting frameworks for tidal marsh ecosystems, we must be able to define the characteristics of each zone and understand how this extent has changed over time. Understanding of historical change presently relies on records of agricultural conversion, which estimate that salt marsh ecosystems have lost 75-90% of their historical extent in Atlantic Canada (Tiner, 2013). Current global and regional salt marsh inventories remain incomplete, with notable gaps in Newfoundland and Labrador, the Northwest Territories, and Nunavut, and low regional accuracy in much of Canada (Rabinowitz & Andrews, 2022). This problem is further exacerbated in areas of high tidal variability, such as the Bay of Fundy (Desplanque & Mossman, 2004). To this end, a better method for the remote sensing of salt marshes, particularly in the context of GHG inventories, is required. This study explores the use of multitemporal and multisensor imagery (Sentinel-1 and Sentinel-2) to produce 10 m salt marsh extent maps for the Bay of Fundy, improving on existing products both in regional accuracy (~85%) and in the delineation of high and low marsh classes, as broadly determined by their dominant vegetation type (Spartina patens and Spartina alterniflora, respectively). Findings indicate that machine learning methods perform best with a combination of optical (Sentinel-2) and radar (Sentinel-1) imagery, but only when tidally and phenologically appropriate imagery is selected.
Low tide imagery is best suited for classification as it exhibits less flooded vegetation, and a novel method of sorting imagery by tidal stage using water station data and Sentinel-2 metadata is employed to reduce the need for analyst image selection. Image classifications were tested using early spring, mid-summer, and late fall data (informed by the phenological cycles of S. alterniflora and S. patens) to determine which season or combination of seasons would provide the best class separation, concluding that combined low-tide imagery from all seasons produced the highest overall and per-class accuracies. Classified salt marsh maps for 2020 and 2023 were then used to generate extent change maps and uncertainty values, as would be required for reported activity data. Currently there exists no standardized method for reporting uncertainty in land cover conversion in salt marshes, contributing to the difficulty of including salt marsh extent change in Canada’s IPCC (Intergovernmental Panel on Climate Change) carbon reports.

References:
Desplanque, C., & Mossman, D. J. (2004). Tides and their seminal impact on the geology, geography, history, and socio-economics of the Bay of Fundy, eastern Canada. Atlantic Geology, 40(1). https://doi.org/10.4138/729
Drever, C. R., Cook-Patton, S. C., Akhter, F., Badiou, P. H., Chmura, G. L., Davidson, S. J., Desjardins, R. L., Dyk, A., Fargione, J. E., Fellows, M., Filewod, B., Hessing-Lewis, M., Jayasundara, S., Keeton, W. S., Kroeger, T., Lark, T. J., Le, E., Leavitt, S. M., LeClerc, M.-E., … Kurz, W. A. (2021). Natural climate solutions for Canada. Science Advances, 7(23), eabd6034. https://doi.org/10.1126/sciadv.abd6034
Rabinowitz, T. R. M., & Andrews, J. (2022). Valuing the salt marsh ecosystem: Developing ecosystem accounts. Environment Accounts and Statistics Analytical and Technical Paper Series. Statistics Canada Catalogue no. 16-001-M
Tiner, R. W. (2013). Tidal wetlands primer: an introduction to their ecology, natural history, status, and conservation (1st ed.). University of Massachusetts Press.
van Proosdij, D., Ollerhead, J., & Davidson-Arnott, R. G. D. (2006). Seasonal and annual variations in the volumetric sediment balance of a macro-tidal salt marsh. Marine Geology, 225(1–4), 103–127. https://doi.org/10.1016/j.margeo.2005.07.009
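The tidal-stage sorting described above, matching scene acquisition times against water-level station records to retain low-tide imagery, might look like the following minimal sketch. The function name, threshold, and example timestamps are hypothetical, not the study's actual pipeline:

```python
from datetime import datetime, timedelta

def low_tide_scenes(scene_times, gauge_records, max_level_m,
                    max_gap=timedelta(hours=1)):
    """Keep scene acquisition times (from Sentinel-2 metadata) whose nearest
    water-level reading, within max_gap, is below max_level_m."""
    kept = []
    for t in scene_times:
        # nearest gauge record in time
        ts, level = min(gauge_records, key=lambda rec: abs(rec[0] - t))
        if abs(ts - t) <= max_gap and level < max_level_m:
            kept.append(t)
    return kept

gauge = [(datetime(2023, 7, 1, 10, 0), 0.8),   # low tide reading (m)
         (datetime(2023, 7, 1, 16, 0), 6.2)]   # high tide reading (m)
scenes = [datetime(2023, 7, 1, 10, 20), datetime(2023, 7, 1, 15, 50)]
assert low_tide_scenes(scenes, gauge, max_level_m=2.0) == [scenes[0]]
```

Automating this matching removes the analyst-selection step the abstract mentions, which matters in macrotidal settings like the Bay of Fundy where a few hours changes the exposed marsh extent dramatically.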

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: A comparison of stratification strategies for area estimation of forest and forest change

Authors: Andreas Vollrath, Erik Lindquist, Nandika Tsendbazar, Sytze van Bruin
Affiliations: UN-FAO, Wageningen University
Precise estimates of forest area and change help nations meet their reporting obligations under various international agreements and are a requirement to receive payments for verified reductions in carbon emissions under carbon accounting standards. Over the last decade, many countries have used sample-based inference to report area estimates of deforestation to the United Nations Framework Convention on Climate Change (UNFCCC). Change is rare relative to large stable land cover classes, and estimates are challenging to obtain with sufficient precision to judge progress towards the desired goal of reducing deforestation over time. Failure to produce precise estimates can prohibit countries from taking advantage of funding and impact efforts to mitigate or adapt to climate change. The statistics for sample-based area estimation are well established. However, large-scale, operational application of statistical inference to meet rigorous climate finance criteria is challenging. In stratified area estimation, a categorical map of stable forest, non-forest, and additional change classes is used to design and distribute a sample for reference data collection to estimate forest change areas precisely. However, the stratification layer is prone to error, which may lead to change sample sites being detected in the large stable strata. Under typical sample allocation strategies, those reference sites carry a disproportionately large weight in the estimation process and inflate the estimation uncertainty beyond acceptable levels. The introduction of a probable-change stratum via a spatial buffer that would catch most of those change sample sites has been useful, but not always successful. Moreover, the continuous production of high-quality national maps poses challenges for countries with limited resources and capacities. To seek improvements over the current good-practice status quo, we introduce a stratification approach employing a continuous layer of forest change probabilities.
The approach is specifically designed to address and alleviate the issue of omission of likely change areas in the large stable stratum. Essentially, the procedure establishes a statistical buffer stratum based on the change probability layer. The subsequent Neyman-type sample unit allocation accounts for both the stratum size and the within-strata variability according to the change probability layer. The approach is demonstrated by a comparative numerical study against established methodologies, using simulations with data from various regions of the world. Moreover, the work addresses practical considerations regarding the usability of both new and existing methodologies for practitioners, considering their anticipated expertise level in Earth Observation (EO) and advancements in their current efforts to precisely estimate areas of forest change.
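The Neyman-type allocation mentioned above distributes sample units in proportion to N_h · S_h, the stratum size times the within-stratum standard deviation, so a small but variable change stratum receives far more samples than its area share alone would give it. A minimal sketch with illustrative, hypothetical strata and standard deviations:

```python
def neyman_allocation(n_total, stratum_sizes, stratum_stds):
    """Neyman allocation: n_h proportional to N_h * S_h."""
    weights = [N * s for N, s in zip(stratum_sizes, stratum_stds)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# illustrative strata: stable forest, stable non-forest, mapped change,
# and a statistical buffer built from the change-probability layer
sizes = [900_000, 80_000, 5_000, 15_000]   # pixels per stratum (hypothetical)
stds  = [0.05, 0.05, 0.40, 0.30]           # assumed within-stratum std of change
alloc = neyman_allocation(1000, sizes, stds)
assert alloc == [811, 72, 36, 81]          # change/buffer strata are oversampled
```

Under proportional allocation the mapped-change stratum here would receive only ~5 of the 1000 units; accounting for within-stratum variability raises that to 36, which is exactly the lever the proposed probability-layer stratification exploits.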

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Estimation of land cover change areas using high spatiotemporal resolution land cover products: a case study of Uganda

Authors: Olga Nepomshina, Martin Herold, Dr. Nandika Tsendbazar, Dr. Panpan Xu, Daniela Requena Suarez
Affiliations: German Research Centre for Geosciences (GFZ), Department of Remote Sensing and Geoinformatics, Wageningen University, Laboratory of Science and Remote Sensing
The global climate change mitigation community has been raising awareness of the importance of timely land cover change reporting to better understand the effects of land use change on climate change. Accurate land cover change (LCC) estimates on the national scale are crucial for decision-makers to introduce effective sustainable development strategies and reporting on national greenhouse gas inventories (NGHGI). This study aims to provide a reliable baseline for policymakers on change area estimates and contribute to integrating AFOLU (Agriculture, Forestry, and Other Land Use) sector evaluations for land-based emissions and carbon sequestration potential assessment. The primary research question focuses on evaluating existing high-resolution global land cover products for their effectiveness in estimating land cover changes and accounting for associated greenhouse gases (GHG). In the current research, an operational framework is developed based on good practices for accurate area estimates of land cover change utilizing high-resolution land cover products. We propose an assessment for the case study of Uganda, where we integrate the open access Dynamic World (DW) and ESRI global land cover products of 10-meter resolution for land cover change area estimation. We consider the timeframe from 2019 to 2023. After harmonizing different land cover legends to IPCC categories, we introduce a stratification focused on land cover transition processes (including deforestation, reforestation, urbanization, land abandonment, cropland expansion, wetland expansion, wetland degradation, and other transitions) to collect a robust reference sample on LCC. To generate an accurate stratification, we assess the agreement of LC transition processes based on the two global land cover products. At the Uganda level, reference data on land cover change was then collected based on this stratification. For data collection, we collaborated with two Ugandan interpreters.
The post-processing phase evaluates accuracy and LCC estimates, encompassing both change and no-change levels, as well as specific transition types within a single land cover product. We address related uncertainties and integrate the AFOLU estimates for carbon accounting. In further research, we plan to expand the framework to the regional and global scales to improve the accuracy of land cover change mapping and to extend the study of estimating associated uncertainties.
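For context, the standard stratified estimator underlying such area estimates and their uncertainties, following common good-practice guidance, can be sketched as follows. All numbers below are hypothetical, not results from the Uganda case study:

```python
import math

def stratified_area(total_area, strata):
    """Stratified estimator of a change-class area and its standard error.

    strata: list of (W_h, n_h, k_h) = (map-based stratum weight,
            reference sample size, count of the change class in the sample).
    """
    p = sum(W * k / n for W, n, k in strata)                       # area proportion
    var = sum(W**2 * (k / n) * (1 - k / n) / (n - 1) for W, n, k in strata)
    return total_area * p, total_area * math.sqrt(var)

# hypothetical numbers for a ~240,000 km2 country area
strata = [(0.90, 400, 4),     # stable stratum: rare omitted change
          (0.08, 300, 30),    # buffer stratum around mapped change
          (0.02, 300, 240)]   # mapped-change stratum
area, se = stratified_area(240_000, strata)   # area estimate and its std. error
```

Even a handful of change observations in the large stable stratum dominates the variance here, which is the inflation problem the stratification design is meant to contain.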

Wednesday 25 June 17:45 - 18:30 (Frontiers Agora)

Session: B.01.04 Accelerating efforts of global Climate Finance with EO – how ESA is partnering with the multilateral climate funds

Over the last 15+ years, the European Space Agency (ESA) has established strategic partnerships with different International Financial Institutions (IFIs) with the aim of mainstreaming space-based Earth Observation (EO) technology into their operations as a catalyst for sustainable long-term action. Those partnerships have been fuelled by a pipeline of ESA-funded initiatives, including most recently the Global Development Assistance (GDA) programme through which ESA mobilises the European EO ecosystem to support development projects and programmes funded and implemented by global development banks and multilateral climate funds (MCFs). Partnerships thereby follow agreed cooperation principles of co-design and alignment of resources to stimulate uptake and adoption at scale.

Considering the potential of space applications and technologies to contribute to climate action in line with global policy and financing frameworks, ESA is currently strengthening its collaboration with the MCFs, in particular with the Green Climate Fund (GCF), the Global Environment Facility (GEF), the Climate Investment Funds (CIF), the Adaptation Fund (AF) and the Loss and Damage Fund (FRLD). The increased financial leveraging power of these funds to further amplify development assistance mechanisms is vital to national efforts to safeguard communities, contain emissions, adapt and build resilience.

This agora will highlight the motivations and nature of the collaboration between ESA and the MCFs through a panel discussion with key representatives from GCF, GEF, CIF, AF, and FRLD. The transition from the opening remarks to the panel discussion will be led by the World Bank (WB), in recognition of its role as a leading Implementing Entity (IE) across several climate funds and as a longstanding partner in leveraging ESA’s EO capabilities – demonstrated through over 14 years of collaboration and successful case studies. Challenges regarding the mainstreaming of EO into programming and implementation as well as potential avenues to address those challenges will be explored. Some of the high priority thematic sectors to be addressed include food security, forestry, land restoration, water resources management and urban sustainability.

Opening: welcome & keynotes


  • Alex Chunet - ESA, EO Application Engineer
  • Rune Floberghagen - ESA, Head of Climate Action, Sustainability and Science Department

Panel Discussion:


  • Green Climate Fund - Kevin Horsburgh, Climate Science Lead
  • Adaptation Fund - Marcus Johannesson, Senior Climate Change Specialist
  • World Bank - Xueman Wang, Program manager for the GEF8 Global Platform for Sustainable Cities, TBC
  • International Fund for Agricultural Development - Gladys Morales, Global Head of Innovation
  • Climate Investment Fund - Paul Hartman, Lead, Nature, People and Climate Investment Program (TBC)

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: A.08.04 - POSTER - Submesoscale air-sea interactions: understanding, observability and impact

The interactions between oceans and atmosphere occurring over 70% of the Earth’s surface shape the environment where humanity lives, by regulating heat, freshwater, gas, and momentum transfer at the surface. For example, the recent increase of carbon dioxide (CO2) and heat in the atmosphere is largely buffered by absorption into the oceans, which now contain about 25% of the anthropogenic CO2 (Le Quéré et al., 2018) and about 90% of the excess heat with respect to the mid-20th century (Arias et al., 2021). Complex interactions between the ocean and atmosphere are thus essential for understanding Earth's dynamic climate systems. High-resolution observations are crucial for capturing the small-scale processes that significantly influence the Earth's energy budget and contribute to global sea level changes. These processes span a range of scales and involve intricate exchanges of momentum, heat, and gases at the air-sea interface, critical for shaping weather patterns and broader climate changes.

Improving our understanding and modeling of these interactions requires integrating advanced observational techniques with model developments, addressing significant gaps in our current knowledge and capabilities.
This improved understanding is essential for accurately predicting Earth system dynamics and assessing environmental impacts, underscoring the importance of improving Earth Observation capabilities to advance our knowledge of ocean-atmosphere interactions.

The observation of submesoscale air-sea interactions in Synthetic Aperture Radar (SAR) imaging data has been a topic of intense research over the last two decades. The recently launched Surface Water and Ocean Topography (SWOT) Mission introduces a novel means of identifying and studying air-sea interactions through the provision of high-resolution sea surface height information (e.g. contrasts over fronts obtained from SSH measurements), possibly to be combined with information from e.g. complementary near-collocated wind and SST observations. Sun glitter observations can also provide a wealth of information to understand these small-scale processes.
In the future, ESA’s EE10 Mission, Harmony, will provide multistatic SAR imaging capabilities through the addition of passive SAR bistatic receivers, and spatially and temporally collocated multi-view Thermal Infra-Red (TIR) observations for SST and cloud motion measurements, which will make it possible to resolve high-resolution winds, waves, surface currents and sea surface temperature differences at the air-sea interface.

This session is dedicated to the progress in understanding air-sea interactions from Earth Observation data, to identify gaps and opportunities in our ability to observe, model or parameterize such processes/mechanisms in air-sea coupling. Multi-sensor techniques to combine different data sources are encouraged.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Atmospheric Gravity Waves signature on the sea surface: Insights from SWOT and OSCAR observations

Authors: Adrien Martin, Eva Le Merle, David McCann, Christine Gommenginger, Mahmoud El Hajj, Francesco d'Ovidio, Tania Casal, Yannice Faugere
Affiliations: NOVELTIS, National Oceanography Centre, National Oceanography Centre, LOCEAN-IPSL, ESA-ESTEC, CNES
Air-sea interactions are fundamental in regulating the Earth's climate through the transfer of heat, freshwater, gas, and momentum between the ocean and the atmosphere. These interactions help limit the increase of CO2 and heat in the atmosphere, with oceans absorbing about 25% of anthropogenic CO2 and 90% of excess heat. The literature highlights the importance of small-scale processes in driving these exchanges. Hence, to improve our understanding of the Earth system, it is critical to improve the observation and modelling of processes at these scales. The recent Surface Water and Ocean Topography (SWOT) satellite mission, launched in December 2022, delivers groundbreaking 2D and high-resolution observations of oceanic dynamics with centimeter-level precision in the measurement of Sea Level Anomaly (SLA). SWOT also provides information on the ocean surface roughness, which is highly complementary to SLA to interpret processes at the interface between the ocean and atmosphere. During the SWOT Cal/Val 1-day repeat orbit in 2023, several campaigns were conducted under the Adopt a Crossover (AdaC) consortium. In the Western Mediterranean Sea, the BioSWOT-Med campaign focused on the biogeophysical processes at the core and edge of an eddy north of Minorca. In the context of the ESA Earth Explorer 11 candidate, SeaSTAR, its airborne demonstrator, OSCAR, was flown below the western sub-swath of SWOT Track 3 during the Cal/Val period. Three flights were conducted on the 5th, 7th and 8th of May 2023. The results we are presenting here concern data observed on 8 May, which were characterized by the presence of atmospheric gravity waves (AGW). These AGW are visible in both Normalized Radar Cross Section (NRCS) and SLA data from SWOT. These waves observed in SLA, with an amplitude of about 2 cm, correspond to the inverse barometric effect of the AGW, a phenomenon confirmed by high-resolution sea level pressure data from AROME (resolution ~1 km).
The signature visible in NRCS is due to increased wind speed. This phenomenon is also evident in OSCAR data across the three antennas in both NRCS and phase. This presentation will provide a comprehensive overview of the atmospheric gravity waves signature on the sea surface, highlighting SWOT's unparalleled capability to produce 2D maps of SSH with km-scale resolution and centimeter-level precision.
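The inverse barometric effect invoked above relates a surface pressure anomaly to a sea level response via dh = -dp / (rho * g). A quick check, with standard seawater density and gravity values, shows that a pressure perturbation of about 2 hPa yields a sea level anomaly of about 2 cm, consistent with the amplitude reported:

```python
def inverse_barometer_m(delta_p_pa, rho=1025.0, g=9.81):
    """Static sea level response (m) to a surface pressure anomaly (Pa):
    dh = -dp / (rho * g)."""
    return -delta_p_pa / (rho * g)

# a 2 hPa (200 Pa) pressure drop under an atmospheric gravity wave trough
sla = inverse_barometer_m(-200.0)
assert abs(sla - 0.0199) < 0.001   # ~2 cm sea level rise
```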

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Observing waves, current, wind and coastal bathymetry from satellite optical sensors: The «Multi-Angle Sunglint for Air-Sea Interactions» (MASAI) concept

Authors: Nicolas Rascle, Fabrice Ardhuin, Bertrand Chapron, Frédéric Nouguier, Alexis Mouche, Rafael Almar, Erwin Bergsma, Renaud Binet, Fabrice Collard, Francisco Ocampo-Torres, Carlos Villarreal Olavarrieta, Rodney Mora
Affiliations: Ifremer
Jointly observing waves, ocean currents and winds from space has clearly been identified as a breakthrough, with a potential to revolutionize monitoring capabilities of physical air-sea interactions and ocean mesoscale, submesoscale and coastal circulations. In this context, most proposed satellite mission concepts use some form of microwave scatterometer and Doppler radar measurements to probe ocean surface currents, waves and winds. Those microwave observations suffer from some limitations, in particular due to the complex surface physics involved in microwave backscatter and to the complex calibration and processing (e.g. synthetic aperture) of the radar impulse. These limitations translate into difficulties, for instance, in interpreting the radar signal in terms of surface current and in interpreting processed image backscatter modulations in terms of the directional wave spectrum. Optical observations do not suffer as severely from those limitations and thus offer a unique opportunity to complement, validate and calibrate microwave observations of wind, waves, current and coastal bathymetry for cloud-free scenes. We propose here a concept of an optical satellite mission dedicated to such purpose. It is based on multiple view angles within the sunglint. The viewing geometry is optimized to retrieve simultaneously wind, waves and current at high spatial resolution. A proof of concept is performed using opportunistic satellite observations.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: A.09.08 - POSTER - Understanding the state of the cryosphere with satellite altimetry

Altimetry missions have provided unique observations of the polar regions for more than three decades, transforming our understanding of the cryosphere in the process. Currently operating satellite altimeters such as NASA’s ICESat-2 and ESA’s CryoSat-2 are now the backbone of the long-running altimetry record and, together with SWOT, Sentinel-3, and SARAL, they provide year-round observations of changes in Earth's sea ice, ice sheets, ice shelves, icebergs, the polar oceans, mountain glaciers, and terrestrial snow. In a complementary way, airborne missions are providing calibration and validation data for these satellite altimeters as well as unique high spatial resolution data for studies of small-scale features such as ice ridging, rifts, and melt ponds. This session will demonstrate the vital role of both continuous and high-resolution altimetry in advancing our understanding of the processes involved in a changing cryosphere. We encourage submissions which use altimetry observations as a primary data set as well as those which integrate altimetry measurements into model results to provide new insights into cryospheric processes.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Spaceborne Laser and Radar Sea Ice Freeboards: From Winter to Summer

Authors: Catherine Taelman, Dr. Jack Landy, Dr. Polona Itkin, Dr. Anthony Doulgeris, Dr. Rolf-Ole Jenssen
Affiliations: UiT The Arctic University of Norway, the Norwegian Polar Institute, NORCE
Retrieving accurate sea ice thickness estimates from radar altimetry requires correctly locating the snow-ice interface in the waveform returns. Standard re-tracking algorithms commonly assume that Ku-band radar waves penetrate through the entire snow cover, which implies that the main radar scattering surface is the snow-ice interface. While this assumption is generally true for cold, dry, brine-free, and homogeneous snowpacks, several studies have shown that deviating geometrical and physical properties in the snow cover, as well as changes in surface roughness, can lead to significant surface and volume scattering of the radar signal within the snow cover. The exact dielectric behavior of snow on sea ice in the microwave spectrum is, however, not yet well understood. Strong radar returns from within the snowpack could bias the re-tracked radar freeboards high, leading to overestimation of the ice freeboards and subsequent sea ice thickness estimates. As the Arctic sea ice cover is becoming thinner, more mobile, and primarily composed of first-year ice (FYI), it becomes increasingly important to improve ice thickness retrievals for this ice type. Snow covers on FYI are typically characterized by a saline bottom layer due to upwards brine wicking from the ice or flooding. This saline bottom layer is less typical for snow covers on multiyear ice (MYI) due to brine drainage during the previous melt seasons. It has been shown that brine-wetted snow alters the Ku-band radar signal penetration, as do other snow properties such as liquid water content and the presence of metamorphic features. However, the resulting hypothesized overestimation of spaceborne Ku-band radar freeboards remains to be verified. We take a first step towards verifying this hypothesis by observing the mean Ku-band radar scattering surface over a full snow accumulation and melt cycle for FYI and MYI.
Using Sentinel-1 radar imagery, we manually delineate a set of study sites for FYI and another for MYI in a fast ice area off the East Greenland coast, in western Fram Strait, for the period January – August 2022. We gather all spaceborne altimetry observations from the ICESat-2, CryoSat-2, and Sentinel-3A&B missions that cross the study sites during this time period, as well as all coinciding Sentinel-1 imagery. The spaceborne observations are linked to in-situ snow and sea ice measurements collected during the CIRFA-22 research cruise, as well as weather data from stations and reanalysis. The CIRFA-22 cruise went to western Fram Strait in April and May 2022. The unique advantage of this cruise was a combination of immobile landfast ice and drift ice sampling locations, both composed of sea ice of various age, roughness and weather history. The following in-situ measurements are available: ice thickness and snow properties for a number of different study sites, snow and sea ice thickness transects made with a Magnaprobe and an EM conductivity meter (EM-31), drone-derived snow depth transects, and high-resolution vertical temperature profiles in the snow and ice over the course of several weeks obtained from SIMBA (Snow and Ice Mass Balance Array) buoys, from which snow depth and ice thickness are derived. Using the combination of spaceborne observations and in-situ measurements, we study the seasonal evolution of the laser and radar freeboards over FYI and MYI. We investigate potential biases in the retrieved freeboards as a function of snow properties and atmospheric conditions. Furthermore, we examine whether the radar waveform retracking algorithm might introduce biases.
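The leverage behind the hypothesized overestimation follows from the hydrostatic equilibrium relation used to convert freeboard to thickness: a small freeboard bias is amplified by roughly a factor of ten. A minimal sketch, with typical literature density values rather than the study's own:

```python
# Hydrostatic equilibrium: a freeboard bias amplifies into a much larger
# thickness bias. Density values are typical literature numbers, used here
# purely for illustration (not those of the study).
RHO_WATER = 1024.0   # kg/m^3, seawater
RHO_ICE = 917.0      # kg/m^3, first-year sea ice
RHO_SNOW = 300.0     # kg/m^3, snow

def ice_thickness(freeboard_ice, snow_depth):
    """Sea ice thickness from ice freeboard via hydrostatic equilibrium."""
    return (RHO_WATER * freeboard_ice + RHO_SNOW * snow_depth) / (RHO_WATER - RHO_ICE)

# d(thickness)/d(freeboard): every centimetre of freeboard bias maps to
# roughly ten centimetres of thickness bias
amplification = RHO_WATER / (RHO_WATER - RHO_ICE)              # ~9.6
bias = ice_thickness(0.16, 0.20) - ice_thickness(0.15, 0.20)   # ~0.096 m
```

This amplification factor is why even centimetre-scale scattering biases within the snowpack matter for the thickness record.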
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Sea Ice Thickness, Drift, and Deformation Estimates from Airborne Laser Scanning Data and Machine Learning: Unveiling New Process Understanding

Authors: Nils Hutter, Dr Lorenzo Zampieri, Prof Cecilia Bitz, Prof Christian Haas
Affiliations: Geomar Helmholtz Centre For Ocean Research Kiel, European Centre for Medium-Range Weather Forecasts (ECMWF), University of Washington, Alfred-Wegener-Institute Helmholtz-Centre for Marine and Polar Research
During the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) expedition, an airborne laser scanner (ALS) was employed to map the sea-ice surface at sub-meter resolution (0.5 m). Over the course of 64 flights between September 2019 and September 2020, we collected extensive data on sea-ice and snow surface elevation, capturing the full annual cycle with high spatial resolution and coverage (2D maps of 5x10 km²). Repeated surveys of the same areas allowed us to monitor the drift and deformation of the ice as well as snow re-distribution. Using RAFT, a machine-learning algorithm to solve optical flow, we derived drift and deformation estimates that match the resolution of the freeboard data. This unique dataset, which includes both freeboard and deformation data of the same ice on the same grid over extended time periods, provides valuable insights into the processes shaping the icescape and its snow and ice thickness distributions. Here, we present findings on dynamic ridging and lead opening, snow redistribution, and thermodynamic growth and melt. We highlight an analysis of snow drifts linking their geometry and orientation to major wind events during their formation as well as studying their positioning along the ridge profile. Furthermore, the comprehensive measurement program during MOSAiC enables us to enhance ALS surveys with other field data. We show how in-situ snow and ice thickness and freeboard measurements can be used to train a convolutional neural network (CNN) to incorporate additional contextual topographic information when converting freeboard data to ice thickness. Given that features of thick snow or ice, such as snow drifts and pressure ridges, exhibit distinct surface characteristics, the trained CNN achieves higher accuracy in predicting snow and ice thicknesses concurrently compared to standard methods relying on snow climatologies and hydrostatic approximation. 
We will present a comprehensive comparison of different machine learning methods with standard approaches used in altimetry, integrating both 2-D and 1-D topographic information. With the latter, we highlight the potential to extend similar methods to satellite altimetry data of comparable resolution, such as ICESat-2. This work underscores the potential of integrating machine learning with high-resolution ALS data to advance our understanding and predictive capabilities of sea-ice dynamics and properties.
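Deformation estimates of the kind derived here from RAFT displacement fields are conventionally computed from the spatial gradients of the drift. A minimal numpy sketch, in which the grid spacing and flow field are illustrative assumptions:

```python
import numpy as np

def deformation(u, v, dx, dt):
    """Divergence and total shear rate from gridded drift components.

    u, v : displacement fields [m] between two surveys (axis 0 = y, axis 1 = x)
    dx   : grid spacing [m];  dt : time separation [s]
    """
    du_dy, du_dx = np.gradient(u / dt, dx)
    dv_dy, dv_dx = np.gradient(v / dt, dx)
    divergence = du_dx + dv_dy                     # opening (+) / closing (-)
    shear = np.sqrt((du_dx - dv_dy) ** 2 + (du_dy + dv_dx) ** 2)
    return divergence, shear

# uniformly diverging flow on a 10 m grid: constant divergence, zero shear
y, x = np.mgrid[0:50, 0:50] * 10.0
div, shear = deformation(0.001 * x, 0.001 * y, dx=10.0, dt=1.0)
```

Note that `np.gradient` returns derivatives in array-axis order (row direction first), which the sketch relies on.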
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Greenland's Supra-Glacial Lakes: Harnessing Sparse Climate Data for AI-Driven Mapping

Authors: Daniele Fantin, Mr Jacob Hay
Affiliations: Science and Technology AS
Supraglacial lake volume is an important parameter with respect to understanding the hydrology of the Greenland ice sheet. Previous studies have focussed on mapping the outline of supraglacial lakes using both optical and radar satellites such as Sentinel-2 (S2) and Sentinel-1, respectively. The depth of the lakes is known to have high variance, and volume estimates can’t be accurately obtained solely based on the lake extent. Nevertheless, the lake depth can be measured using the ICESat-2 (IS2) LiDAR altimeter instrument, which captures along-track 2D profiles and has a monthly repeating sub-cycle. The IS2 dataset captures only a sparse spatial and temporal record of lake depth. Since supraglacial lakes occur seasonally and change in depth and size rapidly, a dense record both spatially and temporally is desirable. S2 can potentially be exploited to retrieve, or at least approximate, such a dense record of lake depth. The spectral signature of the lake changes subtly with lake depth. Previously, S2 has been used to map supraglacial lake depth through a radiative transfer modelling approach. This approach is limited as it typically only takes single-pixel input, without taking into account contextual information such as neighbouring pixels, terrain, atmospheric conditions, floating ice, or shading. By taking into account such contextual information, a deep learning approach has the potential to yield improved results. To the best of our knowledge, deep learning has not previously been exploited to model supraglacial lake depth from S2. Our method covers the entire extent of the lakes, unlike the IS2 tracks, and makes use of contextual information. We trained several UNet variants on a dual target of both extent and depth estimates using a combined loss function that mixed Focal loss and MAE loss. Depth predictions were therefore only effectively trained on a small sample set of IS2 data, presented as lines intersecting lakes.
Independent validation was performed over a selection of lakes on the 79N and Sermeq Kujalleq glaciers by our project partners at ASIAQ. The validation further shows that our models have a clear tendency to underestimate depths when compared to radiative transfer models that rely on single-band estimates. Lake extent predictions achieved an F1-score of 0.885, although models trained on dual output showed some tendency to invent lake cover. This tendency can be remedied by augmentation strategies like cut-mix or cow-mix, introducing artificial lake edges towards the lake interior. Similar artefacts are seen in the depth estimates, demonstrating some of the potential issues common for UNet receptive fields. We present a deep-learning approach to estimate lake extent and depth based on Sentinel-2 imagery and partial training labels derived from IS2. The project is part of the phase 2 ESA Greenland Ice Sheet Climate Change Initiative (ESA GIS CCI+).
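The dual-target training described above, with dense extent labels but depth supervision only along sparse IS2 track lines, can be written as a combined loss whose MAE term is masked to track pixels. A framework-agnostic numpy sketch (the weighting and gamma are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def focal_loss(p, target, gamma=2.0, eps=1e-7):
    """Binary focal loss for the dense lake-extent channel."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(target == 1, p, 1 - p)      # probability of the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def masked_mae(pred_depth, depth, track_mask):
    """MAE on depth, evaluated only where IS2 supervision exists."""
    m = track_mask.astype(bool)
    return float(np.abs(pred_depth[m] - depth[m]).mean())

def combined_loss(p_extent, extent, pred_depth, depth, track_mask, w=1.0):
    """Dense extent term plus sparse, track-masked depth term."""
    return focal_loss(p_extent, extent) + w * masked_mae(pred_depth, depth, track_mask)
```

In a training framework the same masking would be applied to the depth head's gradient, so that pixels without IS2 coverage contribute only to the extent term.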
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: CryoSat and GRACE help each other: Monthly Antarctic ice sheet mass changes at tens of kilometres resolution from combining satellite data

Authors: Maria Kappelsberger, Martin Horwath, Matthias Willen, Veit Helm, Sanne Veldhuijsen, Michiel van den Broeke
Affiliations: Institut für Planetare Geodäsie, TU Dresden, Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Institute for Marine and Atmospheric research Utrecht (IMAU), Utrecht University
The Antarctic Ice Sheet (AIS) is likely to be a major contributor to future sea-level rise, which drives the need for a thorough understanding of ice-sheet mass changes. However, estimates of ice mass changes over recent decades from satellite gravimetry (GRACE and GRACE-FO, hereafter simply GRACE) and satellite altimetry (e.g. CryoSat-2) differ strongly in several regions of Antarctica and have large uncertainties. GRACE has the great advantage of being directly sensitive to mass changes at monthly resolution, but is limited by spatial resolution (a few hundred kilometres), correlated noise and—especially in Antarctica—uncertainties introduced by the application of the Glacial Isostatic Adjustment (GIA) correction. CryoSat-2 captures ice mass changes at high spatial and temporal resolution, but suffers from assumptions about volume-to-mass conversion and correlated errors in space and time resulting from the altimetry-data processing chain, e.g. time-variable radar-signal penetration into snow and firn. Here we present results from a method that is able to exploit the advantages of GRACE and CryoSat-2 while overcoming their limitations. The inverse approach allows co-estimation of GIA instead of using GIA model output, and incorporates results from regional climate modelling and firn modelling to improve the volume-to-mass conversion for satellite altimetry. The mass balance time series are compared with data from ICESat-2 and estimates from the Ice sheet Mass Balance Inter-comparison Exercise (IMBIE). We also discuss perspectives of satellite data combination with future missions such as MAGIC and CRISTAL.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: 15 Years of CryoSat Data Quality Control: Evolution and Status of the Ice Processors

Authors: Harry Cook, Alice Charles, Erica Turner, Alessandro di Bella, Tommaso
Affiliations: Telespazio UK
Now in its 15th year in orbit, the European Space Agency’s (ESA) polar-orbiting CryoSat satellite was specifically designed to measure changes in the thickness of polar sea ice as well as the elevation of ice sheets and mountain glaciers. To reach this goal, CryoSat products must meet the highest data quality and performance standards. This is achieved through improvements to the Instrument Processing Facilities (IPFs). Processing algorithms are improved based on feedback and recommendations from Quality Control (QC) activities, Calibration and Validation campaigns and the Scientific Community. Since launch, CryoSat QC activities have been performed by the Telespazio UK-led Quality Assurance for Earth Observation (IDEAS-QA4EO) service, on behalf of ESA/ESRIN. IDEAS-QA4EO routinely monitors all operational CryoSat ice and ocean products to detect anomalies, support investigations, and prevent the distribution of poor-quality data to users. QC activities also provide valuable input to CryoSat processor evolution by identifying and tracking anomalies to be resolved in an updated IPF. Over the past 15 years, QA4EO has supported the transfer to operations of four major software upgrades for the Ice processor. The team also played an important role in subsequent reprocessing campaigns to ensure data for the whole mission was consistent and of a high quality. The latest processor upgrade – Ice Baseline-F – brings improvements to all Near Real Time and Offline ice products, including improved snow depth climatology models, updated high resolution tidal models over polar oceans and evolutions to retracking for winter processing. In 2025, the Ice Baseline-F reprocessing campaign will reprocess the full mission dataset from July 2010 onwards to the Baseline-F standard, thereby providing an improved dataset to users. IDEAS-QA4EO has provided QC throughout this process to ensure these updates are applied effectively and without regression.
This poster provides an overview of the CryoSat Ice data quality status and the QC activities performed by the IDEAS-QA4EO consortium over the past 15 years. This covers both operational QC and results from the recent reprocessing campaigns. It describes the main features of the current operational Ice Baseline-E processor and the planned evolutions for the future Baseline-F release.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: 15 Years of CryoSat Data Quality Control: Evolution and Current Status of the Ocean Processors

Authors: Alice Charles, Harry Cook, Erica Turner, Alessandro Di Bella, Tommaso Parrinello
Affiliations: Telespazio UK, European Space Agency (ESA)
Going beyond its original ice-monitoring mission objective, CryoSat is now a valuable source of data for the oceanographic community. CryoSat’s radar altimeter can measure high-resolution geophysical parameters from the open ocean to the coast. The first CryoSat Ocean Processors were installed into operations in April 2014 to generate products specifically designed for ocean applications. Since then, the CryoSat ocean products have evolved through updates and improvements to the CryoSat Ocean Processors. These changes improve data performance and quality to ensure the full scientific exploitation of CryoSat ocean products in a broad range of oceanographic and climate studies. Since the last processor upgrade in September 2024, the CryoSat Ocean products have been generated with Baseline-D. Baseline-D provided an update to the SAR/SARIn SAMOSA retracker, improved sea state bias, wind speed and sigma-0 solutions, and an upgraded surface-type mask, models and corrections. The CryoSat ocean products are routinely monitored as part of Quality Control (QC) activities by the ESA/ESRIN Sensor Performance, Products and Algorithms (SPPA) office with the support of the Quality Assurance for Earth Observation (IDEAS-QA4EO) service led by Telespazio UK. IDEAS-QA4EO are closely involved in CryoSat Ocean Processor validation activities prior to the installation of a new processor at the Payload Data Ground Segment. Activities involve the verification of test data generated with the new processor and validation that all expected changes have been made and no additional data quality anomalies have been introduced. Once a new processor version is installed into operations, the full mission dataset is reprocessed from July 2010 to present day. This poster provides an overview of the CryoSat ocean data quality status over the last 11 years.
This includes the results of QC activities performed by the IDEAS-QA4EO consortium on both operational and reprocessed data products, the main evolutions implemented to the ocean processors, and anticipated evolutions for the future.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Enhanced Spatial and Temporal Resolution of Greenland Surface Elevation Changes Using a State-Space Model

Authors: Natalia Havelund, Sebastian Simonssen, Louise Sørensen, Karina Nielsen, Mai
Affiliations: DTU Space
Surface elevation change over the Greenland Ice Sheet (GrIS) is an important parameter for understanding ice sheet dynamics and the impacts of climate change. Monitoring these changes provides insights into how the ice sheet responds to warming temperatures and changing climate patterns. Within the framework of the C3S (Copernicus Climate Change Service), we have developed an operational time series of surface elevation changes derived from radar altimetry data, using observations from as far back as 1992. While the current C3S product, with a spatial resolution of 25 km, is valuable for understanding large-scale trends and as input for climate modeling, capturing localized variations is essential for improving insights into the processes driving these changes. To address this, we leverage CryoSat-2 data as part of the ESA Climate Change Initiative (CCI+) and apply a state-space model (SSM) capable of analyzing spatial and temporal dimensions. The SSM significantly enhances spatial resolution to a grid size of 5 km, enabling a detailed reconstruction of monthly surface elevation changes across the ice sheet. This approach uncovers finer-scale patterns and enhances our ability to monitor and understand localized processes driving changes in the ice sheet over time. The gridded elevation changes are generated using the RTMB package in R, with pre-processing steps that remove the ice sheet's surface topography by subtracting a reference surface derived from ArcticDEM, so that the model is applied only to the anomalies. The SSM captures spatial interdependencies through a Gaussian Markov Random Field (GMRF) and temporal correlations using an autoregressive (AR1) process. The sparse precision matrix in the GMRF ensures computational efficiency while maintaining robust spatial correlations.
Additionally, an iterative weighting scheme reduces the impact of noisy or outlier data by dynamically adjusting weights for data points that deviate significantly from the initial model, resulting in more accurate and reliable elevation estimates. This novel dataset offers high-resolution insights into Greenland’s ice sheet changes, allowing for improved assessments of climate change impacts on the Greenland Ice Sheet and enhancing operational climate monitoring.
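For intuition, the temporal half of such a model, an AR(1) latent state observed with noise plus an iterative weighting scheme that down-weights outliers, can be sketched in one dimension (the GMRF spatial component is omitted and all parameter values are illustrative):

```python
import numpy as np

def ar1_kalman(y, phi=0.95, q=0.01, r=0.04, weights=None):
    """Kalman filter for the AR(1) state-space model
    x_t = phi * x_{t-1} + w_t (var q),  y_t = x_t + v_t (var r)."""
    n = len(y)
    w = np.ones(n) if weights is None else weights
    x, p = 0.0, 1.0
    xs = np.empty(n)
    for t in range(n):
        x, p = phi * x, phi ** 2 * p + q        # predict
        r_t = r / max(w[t], 1e-6)               # down-weighted point -> noisier
        k = p / (p + r_t)                       # Kalman gain
        x, p = x + k * (y[t] - x), (1 - k) * p  # update
        xs[t] = x
    return xs

def robust_ar1(y, n_iter=3, c=2.5):
    """Iteratively down-weight observations far from the current fit."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        resid = y - ar1_kalman(y, weights=w)
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid))) + 1e-9
        w = np.where(np.abs(resid) > c * scale,
                     c * scale / (np.abs(resid) + 1e-12), 1.0)
    return ar1_kalman(y, weights=w), w

rng = np.random.default_rng(0)
y = 0.05 * rng.standard_normal(200)
y[100] += 5.0                                   # spurious elevation spike
xs, w = robust_ar1(y)                           # spike is down-weighted
```

The full model solves the spatial and temporal dimensions jointly through a sparse precision matrix; this sketch only illustrates the AR(1) and reweighting mechanics.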
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Advancing ice altimetry using deep learning: Waveform retracking with AWI-ICENet1

Authors: Angelika Humbert, Veit Helm, Erik Loebel
Affiliations: Alfred Wegener Institute Helmholtz Centre For Polar And Marine Research
Satellite altimetry is essential for monitoring and understanding changes in the Antarctic and Greenland ice sheets. It relies on accurate and continuous observations of surface elevation. In addition to surface slope and complex topography, the spatial and temporal variability of radar pulse penetration into the snowpack is a significant source of error. This leads to an inaccurate measurement of the true surface elevation and consequently affects estimates of surface elevation change (SEC). We developed and applied a deep convolutional neural network (CNN) to improve precision and correct SEC estimates. This CNN was trained on a simulated waveform dataset, considering different surface slopes, topography and attenuation. The trained network was then applied as a CNN-retracker to both Ku- and Ka-band altimeters over the Antarctic and Greenland ice sheets. Our results demonstrate that the CNN-retrieved SEC has significantly lower uncertainty and is less affected by time-variable radar penetration, outperforming conventional retracking algorithms such as OCOG or TFMRA, thus eliminating the need for backscatter or leading-edge corrections typically applied in SEC processing. In this contribution, we will introduce our study and present an updated version of the AWI-ICENet1 retracker, which is applicable to historical, recent, and future altimetry missions. This includes Ku- and Ka-band as well as LRM and SAR sensors. We will present time series and volume change estimates for the Antarctic and Greenland ice sheets and compare these with conventional retracking solutions and ICESat-2. Overall, this technique opens up new opportunities for radar altimetry processing, allowing for more accurate observations of ice surface elevation and SEC.
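For comparison with the CNN approach, a conventional threshold retracker of the TFMRA type mentioned above can be sketched as follows (the smoothing width and 50 % threshold are typical choices, assumed here rather than taken from the study):

```python
import numpy as np

def tfmra_retrack(waveform, threshold=0.5, smooth=5):
    """Threshold-first-maximum retracker: returns a sub-bin retracking point.

    Smooth the waveform, subtract a noise floor, locate the first local
    maximum, then find where the leading edge crosses `threshold` times
    that maximum, refined by linear interpolation."""
    wf = np.convolve(waveform, np.ones(smooth) / smooth, mode="same")
    wf = np.clip(wf - wf[:10].mean(), 0.0, None)    # noise floor from early bins
    peaks = np.flatnonzero((wf[1:-1] > wf[:-2]) & (wf[1:-1] >= wf[2:]) &
                           (wf[1:-1] > 0.1 * wf.max())) + 1
    first_max = peaks[0]
    level = threshold * wf[first_max]
    lead = np.flatnonzero(wf[:first_max + 1] >= level)[0]
    if lead == 0:
        return 0.0
    # linear interpolation between the bins straddling the crossing
    return lead - 1 + (level - wf[lead - 1]) / (wf[lead] - wf[lead - 1])

# synthetic waveform: flat floor, linear leading edge, decaying tail
waveform = np.r_[np.zeros(50), np.linspace(0.0, 1.0, 21), np.linspace(1.0, 0.2, 30)]
gate = tfmra_retrack(waveform)                      # near bin 60 (50 % power point)
```

The penetration problem discussed in the abstract arises because the leading edge itself is distorted by volume scattering, so a fixed-threshold rule like this one tracks a surface that moves with snowpack conditions.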
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: CRISTAL SEA ICE & ICEBERG L2 PROCESSING: BASELINE APPROACH

Authors: Steven Baker, David Brockley, Anne Braakmann-Folgmann, Albert García-Mondéjar, Lin Gilbert, Christian Haas, Stephan Hendricks, Jack Landy, Alan Muir, Robert Ricker, Julienne Stroeve, Jean Tournadre, Michel Tsamados, Dr Michele Scagliola, Dr Jérôme Bouffard, Dr Paolo Cipollini
Affiliations: UCL, UiT The Arctic University of Norway, isardSAT, Alfred-Wegener-Institut Helmholtz-Zentrum, NORCE Norwegian Research Centre AS, University of Manitoba, Ifremer, RHEA for ESA, ESA-ESRIN, ESA-ESTEC
The Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL) is a satellite system developed as part of the European Union's Copernicus Expansion Mission (CEM) activities. The primary objective of the CRISTAL mission design is to measure and monitor variability of Arctic and Southern Ocean sea ice thickness and its snow depth. Built on the heritage of CryoSat, CRISTAL's main instrument is an Interferometric Radar Altimeter for Ice and Snow (IRIS). This technical solution is based on a dual-frequency synthetic-aperture radar altimeter with interferometric capabilities at Ku-band and a second Ka-band frequency. Moreover, CRISTAL will be equipped with a passive microwave radiometer to aid in atmospheric corrections and surface-type classification. This presentation provides an overview of the current baseline approach taken in designing a L2 processing chain to make use of these hardware capabilities in deriving sea ice thickness and snow depth over polar oceans during both the winter and summer seasons. Additionally, this Level-2 Ground Processor Prototype (GPP) will implement the geophysical retrieval algorithms to derive ice shelf thickness, iceberg distribution and volume. To achieve the Mission performance objectives, advanced geophysical retrieval algorithms for sea ice and icebergs are needed, so the CLEV2ER activity will carry out R&D studies to support the definition and implementation of the new retrieval algorithms.
Such R&D studies are geared towards addressing current scientific inquiries regarding sea ice retrieval in the context of CRISTAL: (i) evaluating variations in sea ice thickness derived from Ku-band and Ka-band altimetry, resulting from differing penetration depths in snow cover on sea ice; (ii) enhancing the quantity of sea surface height measurements in regions covered by ice through the utilization of multi-peak waveform retracking, identification of leads from nearby tracks, and swath processing; and (iii) expanding the capacity to retrieve geophysical data in challenging sea ice conditions. As part of the activity, a Transcoding Tool has been developed in order to adapt L1B data from other altimeters into the CRISTAL L1B format so their products can be used by the GPP to emulate CRISTAL L2 products, validate the new algorithms and derive performance metrics. This presentation will detail the first preliminary outcomes from these research tasks.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Arctic and Antarctic sea ice thickness estimates with a physical model

Authors: Zhaoqing Dong
Affiliations: Hohai University / National Satellite Ocean Application Service
In this study, we estimate radar freeboard for Arctic and Antarctic sea ice using the FBEM physical waveform model combined with a waveform-fitting process applied to HY-2B pulse-limited radar altimeter data. By incorporating ancillary data on snow depth, snow density, seawater density, and ice density, we derive estimates of sea ice thickness for both polar regions. Our approach aims to develop a robust and reliable sea ice thickness observation product that ensures maximum consistency with current radar altimetry observations from CryoSat-2 (CS-2) and Sentinel-3A/B in Synthetic Aperture Radar (SAR) mode.
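A conversion chain of the kind described, from radar freeboard through ancillary snow and density data to thickness, is commonly implemented with a wave-speed correction for slower radar propagation in snow followed by the hydrostatic relation. A sketch using typical literature values (the study's actual parameters are not given here):

```python
def snow_wave_speed_correction(snow_depth, rho_snow=320.0):
    """Extra range (m) from slower Ku-band propagation in snow,
    using c_snow/c = (1 + 0.51 * rho) ** -1.5 with rho in g/cm^3."""
    rho = rho_snow / 1000.0
    return snow_depth * ((1 + 0.51 * rho) ** 1.5 - 1)

def sea_ice_thickness(radar_freeboard, snow_depth,
                      rho_w=1024.0, rho_i=917.0, rho_s=320.0):
    """Radar freeboard -> ice freeboard -> thickness (hydrostatic equilibrium).
    Densities in kg/m^3 are typical values, not the study's."""
    fb_ice = radar_freeboard + snow_wave_speed_correction(snow_depth, rho_s)
    return (rho_w * fb_ice + rho_s * snow_depth) / (rho_w - rho_i)
```

For a 15 cm radar freeboard under 20 cm of snow, the sketch yields a thickness of roughly 2.5 m, illustrating how sensitive the product is to the assumed snow depth and densities.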
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: The EOLIS dataset: monitoring land ice from CryoSat-2 swath processing

Authors: Andrea Incatasciato, Livia Jakob, Carolyn Michael, Sophie Dubber, Noel Gourmelen, Tristan Goss, Julia Bizon, Martin Ewart, Alex Horton, Alessandro Di Bella, Jerome Bouffard, Tommaso Parrinello
Affiliations: Earthwave Ltd, University of Edinburgh, ESRIN, European Space Agency
Satellite radar altimetry has been routinely used to monitor land ice heights since the 1990s. However, the launch of CryoSat-2 - the first altimetry mission to carry a synthetic aperture radar interferometer on board - has allowed several technical breakthroughs and led to many new applications that were previously unforeseen. One such breakthrough is swath processing of CryoSat’s SARIn mode, which makes full use of the information contained in CryoSat’s waveforms and yields one to two orders of magnitude more measurements than the conventional so-called Point-Of-Closest-Approach (POCA) technique. Following on from the early demonstration of the technique and of its potential impact, the CryoTEMPO EOLIS (Elevation Over Land Ice From Swath) dataset now routinely provides elevation information over land ice at high resolution on a monthly basis. The dataset allows the use of radar altimetry in new environments such as the more complex terrain over glaciers and ice caps, as well as new applications thanks to the superior spatial and temporal resolution, such as the more precise quantification of subglacial lake drainage events. Currently, the EOLIS dataset is provided at monthly intervals over both ice sheets as well as all RGI v7.0 glacier regions that intersect with the CryoSat-2 SARIn processing mode mask, with a recent extension to the Antarctic Ice Shelves. Future developments will include the publication of gapless annual ice sheet wide digital elevation models, combining EOLIS and the CryoTEMPO Land Ice data sets for coverage over the entirety of the ice sheets. Additionally, a new baseline of EOLIS will be released in 2025, with improved data formatting for ease of use and a complete review of the processing technique to enhance accuracy and coverage. CryoTEMPO-EOLIS has already been used to study the evolution of the ice sheets, as well as to produce glacier mass balance timeseries included in intercomparison studies like GlaMBIE.
The algorithmic development within EOLIS has also been used as input into the development of CRISTAL, the new polar ice altimeter mission planned to start in 2028. With the aim of making CryoSat-2 altimetry data available to non-altimetry experts and encouraging its use more broadly by the community, the platform CS2EO (cs2eo.org) provides advanced data access to the EOLIS suite datasets. All these aspects will be presented in the poster.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Combining Altimetry and Digital Elevation Models to Map Antarctic Peninsula Glacier Evolution

Authors: Romilly Close, Professor Malcolm McMillan, Dominic Hardy, Dr Emma Woolliams
Affiliations: University Of Lancaster, National Physical Laboratory, UK Centre for Polar Observation and Modelling, Centre of Excellence in Environmental Data Science, Lancaster University
The Antarctic Peninsula is vulnerable to climate change and is one of the fastest warming regions on Earth. However, estimates of its glacial mass loss vary significantly, leading to large uncertainties in estimating the sea level rise contribution from the region. These uncertainties arise from the fact that the Antarctic Peninsula is difficult to monitor, with challenging topography, regionally variable climates, and numerous small, fast-flowing, marine-terminating outlet glaciers. These challenges affect in-situ, aircraft-based, and satellite-borne measurements. Common approaches for quantifying ice sheet mass balance include gravimetric, input-output, and volumetric methods. In volumetric techniques it is common to use radar (e.g. CryoSat-2) or laser altimetry (e.g. ICESat-2). In theory, it is also possible to utilise digital elevation models (DEMs) constructed using pairs of stereoscopic satellite images (e.g. Reference Elevation Model of Antarctica - REMA). However, challenges in georeferencing these data, together with the data volumes of these high resolution datasets, have generally precluded their use over large ice sheet areas, with a few notable exceptions (Hugonnet, McNabb et al. 2021). Despite these challenges, timestamped DEMs, such as REMA, represent a potentially valuable and underutilised data stream, given that they have a resolution of two metres and typically cover an area of 17 km x 120 km with each acquisition. As such, they are well suited to resolving small-scale, heterogeneous signals of marine glacier dynamical imbalance across the Antarctic Peninsula and, with scalable workflows, offer the opportunity to make new regional mass balance assessments. Despite their potential, however, without co-registration to known reference points the uncertainty of DEMs is much higher than for altimetry satellites. This has limited the widespread use of DEM data for ice sheet studies.
In mountain glacier regions, such as the Alps, numerous previous studies have shown that it is possible to co-register DEM data to reference data, provided there is sufficient exposed stable ground over which to perform the co-registration. The resulting co-registered product gives wide data coverage with significantly improved DEM accuracy. However, across the Antarctic Peninsula there is a lack of stable ground to perform such co-registration, an issue highlighted by Hugonnet, McNabb et al. (2021), who removed two thirds of the acquired REMA DEMs (around 5900 tiles) from their workflow due to this issue. Here, we build upon the methods introduced by Shean, Joughin et al. (2019), which focused on the Amundsen Sea Sector of West Antarctica and used coincident altimetry data to perform an ‘on ice’ co-registration of timestamped REMA tiles. Specifically, we develop a scalable approach to matching REMA and coincident altimetry data, and extend the focus to explore the impact of different co-registration methods, altimetry satellites and applicability to other regions, namely demonstrating the workflow over Fleming and Drygalski glaciers on the Antarctic Peninsula. To perform the data matching, we test the use of auxiliary datasets (velocity and height change rate) to identify coincident altimetry records where the ice surface has not moved or changed in height significantly. This allows us to identify areas that are quasi-stable, thus circumventing the need to have stable bedrock areas within our scene of interest. Based upon this approach, we assess the impact of co-registering to different altimetry sources (e.g. ICESat-2 versus CryoSat-2 swath processing), and of using different co-registration methods, to determine the resultant accuracy that can be achieved.
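One widely used co-registration method of the kind compared here is the aspect/slope fit of Nuth and Kääb (2011), which can equally be driven by quasi-stable altimetry points rather than bedrock. A minimal least-squares sketch (input names and units are illustrative assumptions):

```python
import numpy as np

def nuth_kaab_shift(dh, slope, aspect):
    """Horizontal DEM shift from elevation differences over (quasi-)stable
    terrain, via the aspect/slope relation of Nuth & Kaab (2011):

        dh / tan(slope) = a * cos(b - aspect) + c

    dh     : DEM minus reference elevations [m]
    slope  : terrain slope [rad]
    aspect : terrain aspect, clockwise from north [rad]
    Returns (east_shift, north_shift, bias_term), shifts in metres.
    """
    y = dh / np.tan(slope)
    # cos(b - aspect) expands to cos(b)cos(aspect) + sin(b)sin(aspect),
    # so the fit is linear in p = a*cos(b), q = a*sin(b), and c
    A = np.column_stack([np.cos(aspect), np.sin(aspect), np.ones_like(aspect)])
    (p, q, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    a, b = np.hypot(p, q), np.arctan2(q, p)
    return a * np.sin(b), a * np.cos(b), c
```

In practice the fit is applied iteratively, shifting the DEM and refitting until the estimated shift converges.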
Ultimately, this work aims to contribute to the development of scalable processing pipelines that fuse altimetry and DEM data in order to better constrain mass loss from topographically complex ice sheet regions, such as the Antarctic Peninsula.

References:
Hugonnet, R., R. McNabb, E. Berthier, B. Menounos, C. Nuth, L. Girod, D. Farinotti, M. Huss, I. Dussaillant, F. Brun and A. Kääb (2021). "Accelerated global glacier mass loss in the early twenty-first century." Nature 592(7856): 726-731.
Shean, D. E., I. R. Joughin, P. Dutrieux, B. E. Smith and E. Berthier (2019). "Ice shelf basal melt rates from a high-resolution digital elevation model (DEM) record for Pine Island Glacier, Antarctica." The Cryosphere 13(10): 2633-2656.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Mapping Sea Ice Concentration and Volume in the Arctic with CryoSat-2

Authors: Amy Swiggs, Dr Isobel Lawrence, Professor Andrew Shepherd
Affiliations: Centre for Polar Observation and Modelling, Northumbria University, European Space Agency, ESRIN
Declines in sea ice concentration (SIC) and sea ice thickness (SIT) have been observed across the Arctic since the start of the satellite record, accompanied by strong interannual and spatial variability that demonstrates the need for high resolution observations of sea ice. Furthermore, knowledge of sea ice volume is critical for understanding Arctic sea ice variability and dynamics but has historically been difficult to quantify due to the challenges in retrieving Arctic-wide SIT. The launch of CryoSat-2 in 2010 led to the production of SIT up to 88°N, complemented by laser altimetry SIT from ICESat-2 in 2018. While SIT is commonly estimated with satellite altimetry, SIC is generated from passive microwave (PMV) sensors that extend back to the 1970s. SIC is also frequently used in the process of classifying return altimetry waveforms to ensure diffuse waveforms returned from open ocean are not misclassified as ice floes. Currently, no sea ice volume estimate independent of PMV sensors exists. Any data gaps or missing coverage from PMV sensors would result in limitations in our ability to estimate Arctic-wide sea ice volume. Furthermore, the application of ancillary datasets can increase the lag time on sea ice volume processing, thereby inhibiting near-real-time sea ice volume data. Here, we thus utilise the high spatial sampling of CryoSat-2 to map winter SIC in the Arctic from 2010 to 2023. We employ a bias adjustment generated from Landsat 8 validation imagery and undertake a comprehensive comparison to SIC estimates from PMV, Canadian Ice Service ice charts, and Landsat 8 imagery. We find that CryoSat-2 can detect greater ice pack variation due to its high spatial sampling resolution and performs well against observations from other sensors. From this assessment, we therefore use SIT and SIC from CryoSat-2 to generate a winter sea ice volume estimate in the Arctic.
We compare our sea ice volume estimate to existing products, evaluate their spatial and temporal differences, and consider the limitations and future potential of sea ice volume from satellite altimetry in the Arctic.
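As a back-of-envelope illustration of the volume calculation described above, the per-cell contribution is simply thickness × concentration × cell area, summed over the grid. A minimal sketch with hypothetical values (not the study's data or grid):

```python
import numpy as np

# Hypothetical gridded fields: sea ice thickness (m), concentration (0-1),
# and grid-cell area (km^2). Real CryoSat-2 grids would be read from file.
sit = np.array([[2.1, 1.8], [0.9, 1.5]])       # metres
sic = np.array([[0.95, 0.80], [0.40, 0.70]])   # fraction
cell_area_km2 = 25.0 * 25.0                    # e.g. a 25 km grid cell

# Volume per cell: thickness (m -> km) * concentration * area (km^2)
volume_km3 = (sit / 1000.0) * sic * cell_area_km2
total_volume = volume_km3.sum()
print(f"Total winter sea ice volume: {total_volume:.2f} km^3")
```

Real estimates would also mask open-ocean cells and propagate the SIT and SIC uncertainties into the summed volume.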

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: SAR Altimetry Modelling for Monitoring of Lake Ice Properties

Authors: Justin Murfitt, Claude Duguay, Ghislain Picard, Pierre Zeiger, Elena Zakharova, Marcus Engdahl
Affiliations: H2O Geomatics, Institut des Géosciences de l’Environnement-Université Grenoble Alpes, EOLA Micro-Enterprise, European Space Agency
The cryosphere is a key component of the Earth’s climate system. Ranging from seasonal snow and freshwater ice cover to polar ice sheets and sea ice, these components impact the planetary energy balance and local climate conditions, and provide socio-economic benefits to local communities. However, the cryosphere is experiencing major changes due to increasing temperatures as a result of climate change, including changing lengths of seasonal freshwater ice coverage and thinning ice covers. These lake ice characteristics are identified as essential climate variables (ECVs) and are key to understanding how the planet is changing. Remote sensing has played a crucial role due to its planet-wide monitoring capabilities. Satellite altimetry has become a tool of increasing interest for lake ice due to its capability to monitor lake ice type and thickness. Most recently, altimetry systems offer synthetic aperture radar modes, which provide measurements with higher spatial resolution and higher accuracy compared to traditional radar altimetry. To aid in the use of these data, modelling of SAR altimetry measurements over lake ice is needed. While models exist for this function, there is currently no common framework for accessing and using these different solutions. The SAMS-Cryo project works to address this by developing a new module for SAR altimetry return signal simulation within the snow microwave radiative transfer model (SMRT). SMRT is internationally recognized and allows user flexibility in the construction of simulation media, the selection of electromagnetic models, and radiative transfer solving options. This presentation will highlight the use of the new SAR altimetry module for investigations into lake ice properties. The sensitivity of SAR altimetry to properties such as snow on the ice and ice thickness will be examined across different snow and ice conditions.
Additionally, observed SAR altimetry signals over lake ice will be compared with simulations. These results can help in the planning of future missions and the development of retrieval algorithms for lake ice properties.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Recent surface elevation changes of the Antarctic and Greenland ice sheets, from radar (Sentinel-3 AMPLI) and laser (ICESat-2 ATL15) altimetry

Authors: Jeremie Aublanc, Sebastian Bjerregaard Simonsen, Stine Kildegaard Rose, Carlos Yanez, François Boy, Franck Borde, Anouk Chamayou, Filomena Catapano, Pierre Féménias
Affiliations: CLS, DTU, CNES, ESA/ESTEC, ESA/ESRIN
Sentinel-3 is an Earth observation satellite series developed by the European Space Agency as part of the Copernicus Programme. It currently consists of two satellites: Sentinel-3A and Sentinel-3B, launched on 16 February 2016 and 25 April 2018, respectively. The satellites carry a radar altimeter among their on-board instruments, to provide operational topography measurements of the Earth’s surface. In 2022-2023, an innovative Level-2 processing chain was developed at CLS, with the aim of retrieving surface topography from Sentinel-3 radar altimetry measurements acquired over land ice: the so-called “Altimeter Data Modelling and Processing for Land Ice” (AMPLI) software. Compared to the ground segment processing, the performance is significantly improved thanks to an innovative relocation method performed by means of numerical modelling. The improvements are particularly significant over the ice sheet margins, where the most dramatic changes are occurring. In 2024, a complete reprocessing of the Sentinel-3A and Sentinel-3B missions was performed with the AMPLI software, for measurements taken over Antarctica and Greenland. The new “Sentinel-3 AMPLI Demonstration Products” are planned to be openly released by ESA during Q1 2025. The NASA ICESat‐2 mission, launched in September 2018, carries a single instrument: the Advanced Topographic Laser Altimeter System (ATLAS), a photon‐counting laser altimeter using 532 nm wavelength laser pulses. One of the primary applications of the ICESat-2 mission is the monitoring of surface mass balance of glaciers and ice sheets. The outcomes are critical for quantifying and understanding the consequences of global warming on Earth’s cryosphere. Among the different products generated by the ground segment, the ICESat-2 ATL15 data set provides time-varying gridded estimates of land ice topography.
In this study, the Surface Elevation Change (SEC) derived from Sentinel-3 AMPLI topography estimates is compared with that extracted from ICESat-2 ATL15 products. The assessment is performed over the recent period from 2019 to 2024, on both the Antarctic and Greenland ice sheets. This work is therefore of major importance in confirming that the current topographic changes taking place in the polar ice sheets can be consistently observed by two different altimetry technologies. This study also underlines the potential of the Sentinel-3 mission for ice sheet mass balance monitoring. With the future launch of Sentinel-3C and Sentinel-3D, the constellation is planned to operate until at least 2035, which will deliver a continuous ~20-year record of ice sheet observations. Aside from the climate change outcomes, the impact of snow volume scattering on the surface topography retrieved from Ku-band delay-Doppler altimetry is also analysed. A correction for this effect is investigated, which is of interest for other polar altimetry missions that can operate with delay-Doppler capabilities (i.e. SAR mode) in Ku-band: CryoSat-2 and the upcoming CRISTAL mission.
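At its simplest, the surface elevation change compared here is the fitted trend of an elevation time series within each grid cell. A minimal sketch with hypothetical values (not actual AMPLI or ATL15 data):

```python
import numpy as np

# Hypothetical elevation time series for one grid cell (years, metres).
# Each mission would supply such a series; SEC is the fitted linear trend.
t = np.array([2019.0, 2020.0, 2021.0, 2022.0, 2023.0, 2024.0])
h_radar = np.array([100.00, 99.88, 99.74, 99.63, 99.50, 99.41])  # radar-like series
h_laser = np.array([100.02, 99.90, 99.76, 99.61, 99.52, 99.39])  # laser-like series

def sec(t, h):
    """Surface elevation change rate (m/yr) from a linear least-squares fit."""
    slope, _intercept = np.polyfit(t, h, 1)
    return slope

print(f"radar SEC: {sec(t, h_radar):+.3f} m/yr")
print(f"laser SEC: {sec(t, h_laser):+.3f} m/yr")
```

A real intercomparison would difference the two gridded SEC fields and examine the residuals spatially, e.g. against snow volume scattering proxies.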

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Multiscale assessment of surface roughness at the Queen Maud Land - Antarctica, using in-situ, ICESat-2 and CryoSat-2 measurements

Authors: Adriano Lemos, Heidi Sallila, Jaakko Seppänen, Aku Riihelä
Affiliations: Finnish Meteorological Institute
Surface roughness is an important and still poorly known parameter for the characterization of snow and sea ice cover. It directly impacts surface melt over, e.g., the Greenland Ice Sheet (Riihelä et al., 2019), as it is an important driver of albedo changes. Furthermore, a recent study identified sea ice roughness as a key source of uncertainty for sea ice freeboard retrieval from the CryoSat‐2 mission (Landy et al., 2020), and it also degrades laser altimeter observations of Antarctic snow (Brunt et al., 2019). The roughness of the detected surface impacts the return echo or the reflected photons of CryoSat-2 (CS2) and ICESat-2 (IS2), respectively. For CS2, smooth surfaces return a stronger echo, whereas rough surfaces return a weaker echo and cause changes in the waveform, which can bias the estimated elevation. For IS2, multiple surface and subsurface scattering can increase the travel time of photons, biasing the measured elevation. The main goal of this study is to assess the impact of surface roughness over land and sea ice by combining datasets collected in situ with CryoSat-2 and ICESat-2 measurements around the Finnish Antarctic research station (Aboa) in Queen Maud Land, Antarctica. We utilize the in-situ measurements collected during the 2022-2023 austral summer campaign and assess the impact of surface roughness on the CS2 and IS2 measurements during the same period. The in-situ dataset contains high-resolution laser drone measurements and snow pit data, providing valuable information on sub-footprint-scale surface roughness and observations of possible layers affecting radar penetration, respectively. In a first stage, surface elevation comparisons showed good agreement, at the centimetre scale, between the laser drone measurements and ATL06 ice surface points from IS2 over land ice. The next step is to assess how well the in-situ surface roughness compares with satellite-derived surface roughness estimates.
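One common way to quantify sub-footprint roughness from a profile such as the drone laser data is the RMS height about a fitted trend. A minimal sketch on synthetic values (the campaign's actual roughness metric may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical along-track elevation profile (m): a regional slope plus
# small-scale roughness, as a drone laser scanner might sample it.
x = np.linspace(0.0, 100.0, 501)                       # metres along track
surface = 0.02 * x + 0.05 * rng.standard_normal(x.size)

def rms_roughness(x, z):
    """RMS height (m) about a fitted linear trend, one common roughness metric."""
    trend = np.polyval(np.polyfit(x, z, 1), x)
    return np.sqrt(np.mean((z - trend) ** 2))

print(f"RMS roughness: {rms_roughness(x, surface) * 100:.1f} cm")
```

Computing this metric in windows matched to the CS2 and IS2 footprint scales would give the multiscale comparison the abstract describes.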

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: A Drone Based Radar for Detecting Sea Ice Thickness

Authors: Marlena Holloway, Andrew Shepherd, Paul Brennan, Tom Slater
Affiliations: Centre for Polar Observation and Modelling, University College London, ESRIN, European Space Agency, NASA Goddard Space Flight Center
Sea ice plays a critical role in regulating regional and global climate by influencing weather patterns and oceanic circulation. Knowledge of sea ice thickness is essential to improve our understanding of how the cryosphere will evolve and the impact this will have on its inhabitants and ecosystems. It is also fundamental to the safety and efficiency of maritime operations in the polar regions. Although satellite altimetry offers global coverage, its sampling is inadequate for synoptic assessments of sea ice conditions; the in situ measurements required for validation are costly and difficult to acquire; and the imminent end of life of CryoSat-2 and ICESat-2 poses a risk of an observational gap before successors can be launched. To address these challenges, we propose the development of a low-cost and scalable drone-based dual-frequency radar altimeter system for autonomous measurement of sea ice thickness. The project is structured around (1) the design and construction of the radar sensor, (2) the development of software for real-time retrieval of sea ice thickness, (3) the integration of the sensor and drone platform, and (4) the deployment and testing of the system over sea ice.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Towards optimised and operationalised land ice altimetry pipelines for sustained climate monitoring.

Authors: Clare Willis, Dr Jenny Magdalena, Dr Malcolm McMillan, Mr Alan Muir
Affiliations: Lancaster University, UK Center for Polar Observation and Modelling (CPOM)
Satellite radar altimetry has provided an effective technique for monitoring the polar ice sheets’ topography and mass balance for over three decades. The processing, and reprocessing, of altimetry data from historic sensors such as ERS-1 (1990s) and newer missions including CryoSat-2 (2010s) is essential in providing a long-term observational record. Altimetry data processing pipelines are often computationally intensive, requiring extensive spatiotemporal matching of large datasets to generate and validate measurements from each satellite mission. Processing pipelines must therefore be designed for frequent execution while prioritising readability, robustness, and scalability. Thoughtful pipeline design enhances accessibility, minimises rework, and reduces the technological and environmental footprint of altimetry research. Here we present examples from various ongoing projects, which form part of the UK Centre for Polar Observation and Modelling’s evolving package for altimetry data processing and validation. This code is designed to be researcher-friendly, enabling execution without in-depth knowledge of the underlying code, and broadening the reach of altimetry to new researchers. Specifically, we will present several examples of processing chain optimisations undertaken within the context of the following studies: (1) ESA’s FDR4ALT initiative, which generates a set of thematic data products (TDPs) by leveraging data from the historic ERS-1, ERS-2 and ENVISAT missions to provide a continuous climatic record; and (2) Cryo-TEMPO, an ESA-funded study to deliver a new era of innovative CryoSat-2 TDPs. By adopting data engineering and software engineering practices, our approach explores potential innovations in data storage, provides algorithm enhancements, and implements reusable code modules.
Additionally, we address the potential for workflow automation, ensuring altimetry processing chains remain efficient and sustainable as the needs of the community grow.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: 30 Years of Sea Ice Thickness and Volume over Arctic and Antarctic from Satellite Altimetry

Authors: Sara Fleury, Marion Bocquet, Fanny Piras, Frédérique Rémy, Eero Rine, Pierre Femenias, Nicolas Picot
Affiliations: LEGOS/CNRS, CLS, ESA/ESRIN, UNIS, CNES
Sea ice is both a key witness to and driver of climate change. While the extent and surface area of sea ice are well described thanks to the observations made over the last four decades using spaceborne imagery, the thickness of sea ice and changes in its volume remain poorly known. However, ice thickness is an essential variable for understanding past, present and future changes in sea ice, as well as its dynamics for climate models, and for better quantifying freshwater flows and their impact on ocean circulation. Despite improvements in the estimation of sea ice thickness from altimetry in recent years thanks to SAR and laser altimetry, the older radar altimetry missions such as Envisat and especially ERS-1 and ERS-2 have remained under-exploited until now. One of the main difficulties is that the footprint of these first altimeters in low-resolution mode was too large (about 15 km in diameter) compared with the scales of the pack ice and its fractures, or leads. To overcome this difficulty, we developed a machine learning technique to calibrate the LRM measurements from Envisat against the CryoSat-2 SAR measurements, taking advantage of their common period of flight. A second difficulty arose from the blurring problem that impacts ERS-1 and ERS-2. This problem is due to tracker-board instabilities, which are exacerbated over sea ice because of the strong backscattering variations, producing blurred waveforms. A correction dependent on tracker-board dynamics reduces this effect before calibration. We were then able to calibrate ERS-2 against Envisat, which had itself been calibrated against CryoSat-2, and finally ERS-1 against this new version of ERS-2. We thus obtained a homogeneous time series of 30 years of observations of sea ice thickness and volume over both hemispheres. These estimates are accompanied by uncertainties derived from a Monte Carlo methodology. We will present this methodology and the resulting time series, which is freely available.
We will also show validation results with field measurements, as well as initial comparisons with solutions derived from models.
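The calibration idea, mapping biased low-resolution-mode estimates onto coincident SAR-mode estimates over the common flight period, can be illustrated with ordinary least squares standing in for the authors' machine-learning model (all values synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coincident thickness estimates during the common flight
# period: LRM values are biased and noisier than SAR-mode values.
sar = rng.uniform(0.5, 4.0, 200)                         # SAR-mode thickness (m)
lrm = 0.7 * sar + 0.4 + 0.1 * rng.standard_normal(200)   # biased LRM thickness (m)

# Fit a linear mapping LRM -> SAR over the overlap (the real study used a
# machine-learning model; least squares stands in for it here).
a, b = np.polyfit(lrm, sar, 1)
lrm_calibrated = a * lrm + b

bias_before = np.mean(lrm - sar)
bias_after = np.mean(lrm_calibrated - sar)
print(f"mean bias before: {bias_before:+.3f} m, after: {bias_after:+.3f} m")
```

Chaining such mappings (ERS-2 onto Envisat, Envisat onto CryoSat-2) is what allows the successive calibration the abstract describes.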

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Enabling SAR altimetric simulations in the Snow Microwave Radiative Transfer (SMRT) model for the ice-sheet, sea-ice and lake-ice.

Authors: Ghislain Picard, Justin Murfitt, Pierre Zeiger, Elena Zakharova, Jérémie Aublanc, Claude Duguay
Affiliations: Université Grenoble Alpes, H2O Geomatics Inc., EOLA, Collecte Localisation Satellites
Radar altimeters are crucial tools for monitoring ice sheet volume and sea ice thickness changes, and show promising ability for the retrieval of ice thickness on frozen lakes and rivers. The retrieval of these geophysical quantities (e.g. elevation, freeboard, ice thickness) from the echo recorded by these altimeters is however challenging, because the echo comes not only from the surface (the so-called surface echo), as is the case over the ocean, but also from inside the snowpack and ice (the volume echo). The relative intensity of the surface and volume echoes and the penetration depth of the wave affect the recorded waveform shape, and contribute to the uncertainties in the retrieval of some of the altimetric parameters (e.g. surface elevation). On the other hand, this complexity may represent an opportunity to retrieve other parameters (e.g., the elevation of the snow/ice interface). Considerable efforts to improve the retracking algorithms have been made using empirical approaches. With the launch of new, more accurate radar altimeters, the availability of dual-frequency observations (e.g. AltiKa in tandem with CryoSat-2, Sentinel-3, or Sentinel-6), and the future dual-frequency Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL) mission, the question of penetration is becoming even more crucial, and the empirical approach less efficient. Several altimetric waveform models have been developed, but their application to the cryosphere has been limited to date. The Snow Microwave Radiative Transfer (SMRT) model was initially developed in the framework of the ESA project “Microstructural origin of electromagnetic signatures in microwave remote sensing of snow”, starting in 2015, with the specific aim of computing microwave thermal emission and backscatter of the terrestrial snowpack. SMRT was designed for extensibility from the beginning, and the different components of the radiative transfer problem (i.e. the description of the medium, the permittivity of the materials, the scattering/absorption model, the radiative transfer numerical solver) can have multiple formulations or representations, selectable by the user to perform specific simulations. The most notable extensions to date represent sea ice and lake ice in addition to the original terrestrial snowpack, and compute conventional low-rate-mode altimetric waveforms. With the generalization of SAR-mode altimeters in space, this latter extension is insufficient. Here, we present the development and validation of the SAR-mode / delay-Doppler altimetric extension of SMRT in the framework of the ESA SAMS project. In a first step, the SAR models proposed in the literature were analyzed in detail, revealing quite a large spectrum of approaches. The most notable difference relates to the degree of approximation of the base equation, resulting in either an analytical, quick but approximate model, or a precise model requiring intensive numerical calculations. In a second step, instead of selecting a single approach for SMRT, we implemented eight SAR models, selectable by the user, and performed an inter-comparison of these models. In the last step, we ran simulations at 18 sites on the Antarctic ice sheet where detailed in-situ measurements were available, demonstrating the performance of the models. In this poster, we present the numerical comparison of the eight models and the validation on the Antarctic ice sheet.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: How can High-Resolution Data Help Us Improve our Understanding of Multi-Decadal Ice Sheet Mass Balance for Greenland and Antarctica

Authors: Johan Nilsson
Affiliations: NASA Jet Propulsion Laboratory
The Greenland and Antarctic Ice Sheets, two massive reservoirs of ice that store the largest portion of Earth's freshwater, have undergone profound changes in recent decades as a result of a warming climate. These changes have led to substantial mass loss in these regions, which has been accelerating in recent years. Consequently, this raises major concerns about the potential magnitude of the ice sheets' contribution to future sea level rise. Thus, understanding the underlying mechanisms driving these transformations is of utmost importance, as it will enhance our understanding of the future state of our planet's cryosphere and global sea levels. We will present how high-resolution topographical data from various sensors and techniques can enhance both the spatial and temporal resolution of existing long-term records derived from satellite altimetry. These high-resolution, multi-sensor datasets can be merged to achieve sub-kilometer resolution at monthly time scales, particularly for areas undergoing rapid change, such as outlet glaciers and regions with complex topography. We will demonstrate how time-variable high-resolution DEMs and altimetry data (including laser and radar measurements, as well as SWOT data) can be integrated to provide detailed estimates of mass loss in these rapidly changing regions.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: CPOM Land Ice Data Processor and Products

Authors: Ben Palmer, Andrew Shepherd, Inès Otosaka, Dr Tom Slater
Affiliations: Centre For Polar Observation And Modelling, Northumbria University
The Antarctic and Greenland ice sheets store 68% of Earth’s freshwater, and as a result even modest losses raise the global sea level, increase coastal flooding and disturb oceanic currents. Satellite altimeter observations, available for over three decades since the early 1990s, have been critical to understanding the ice sheet response to climate change and informing numerical model improvements. Here, we present a description of the data processing steps used to calculate Antarctic and Greenland ice sheet surface elevation change using six satellite radar altimeter missions over three decades. This land ice processor has been under constant development at the Centre for Polar Observation and Modelling (CPOM) since the early 1990s. We show examples of Antarctic and Greenland ice sheet elevation change data products and their associated uncertainty derived from the CPOM processor. We validate these products against the independent Operation IceBridge and ICESat-2 datasets depending on their temporal overlap. These data products are freely available to the public, and the land ice processing chain continues to be under development.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Retrieval of Lake Ice Thickness on Canadian Frozen Lakes From Surface-Based, Polarimetric, Dual-Frequency Radar Altimetry

Authors: Clement Soriot, Julienne Stroeve, Anton Komarov
Affiliations: Univeristy Of Manitoba, Alfred Wegener Institute, University of Bremen, National Snow and Ice Data Center, Cooperative Institute for Research in Environmental Sciences, University of Colorado
Lake ice thickness (LIT) is a sensitive indicator of climate change, identified as a thematic variable of Lakes as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS). Several methods have been developed over the last decades (Murfitt and Duguay 2021, Mangilli et al. 2024), most of them using low-frequency microwave radar (L-, C-, X- and Ku-band). Here we propose to extend the range of frequencies for LIT retrieval by using Ku- and Ka-band radar simultaneously, as these two frequencies will be on board the next Copernicus CRISTAL mission (Kern et al. 2020). We present data collected using a surface-based, fully polarimetric (VV, HH, HV, and VH) Ku- and Ka-band Frequency Modulated Continuous Wave system (Stroeve et al. 2020) over several frozen Canadian lakes. In-situ measurements of the snow and ice layers and their respective properties were made along the radar transects. We then used methods adapted from Willatt et al. (2023) to estimate the lake ice thickness and the impact of the snow cover on this retrieval. LIT retrieved from the radar echograms showed good agreement with the in-situ LIT measurements at both Ku- and Ka-band. The impact of snow microstructure and snow/ice roughness on the retrieval is then discussed. The capacity of the Ka-band to retrieve LIT is also evaluated and compared with satellite altimetry waveforms. References: Kern, Michael, et al. "The Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL) high-priority candidate mission." The Cryosphere 14.7 (2020): 2235-2251. Mangilli, Anna, et al. "Improving the Estimation of Lake Ice Thickness with High-Resolution Radar Altimetry Data." Remote Sensing 16.14 (2024): 2510. Murfitt, Justin, and Claude R. Duguay. "50 years of lake ice research from active microwave remote sensing: Progress and prospects." Remote Sensing of Environment 264 (2021): 112616. Stroeve, Julienne, et al. "Surface-based Ku- and Ka-band polarimetric radar for sea ice studies." The Cryosphere Discussions 2020 (2020): 1-38. Willatt, Rosemary, et al. "Retrieval of snow depth on Arctic sea ice from surface-based, polarimetric, dual-frequency radar altimetry." Geophysical Research Letters 50.20 (2023): e2023GL104461.
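The basic geometry behind LIT retrieval from an echogram is the two-way travel time between the air/ice and ice/water returns, scaled by the wave speed in ice. A minimal sketch with an assumed permittivity (illustrative values, not campaign data):

```python
# Ice thickness from the two-way travel time between the ice-surface and
# ice-water echoes in a radar echogram, the basic relation behind LIT
# retrieval. Permittivity value is a typical assumption, not measured.
C = 299_792_458.0    # speed of light in vacuum (m/s)
EPS_ICE = 3.17       # assumed relative permittivity of freshwater ice

def lake_ice_thickness(dt_seconds, eps=EPS_ICE):
    """Thickness (m) from two-way travel time through the ice layer."""
    v_ice = C / eps ** 0.5          # propagation speed in ice
    return v_ice * dt_seconds / 2.0

# A ~1 m ice cover delays the bottom echo by roughly 12 ns:
print(f"{lake_ice_thickness(11.9e-9):.2f} m")
```

Snow on the ice and snow/ice roughness shift the apparent interface positions, which is why their impact on the retrieval is assessed separately at each band.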

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Cryo-TEMPO: Expanding the radar altimetry portfolio with CryoSat-2 Thematic Products over Land and Sea Ice, Polar and Coastal Oceans, and Inland Waters

Authors: Malcolm Mcmillan, Alessandro Di Bella, Dr Michele Scagliola, Dr Jérôme Bouffard
Affiliations: CPOM, Lancaster University, ESA-ESRIN
Since its launch in 2010, CryoSat-2 has continued the long-term radar altimeter record, and provided over a decade of measurements with which to monitor and understand our changing planet. With its unique orbit and payload, not only has CryoSat-2 far exceeded its primary mission objectives over both land and sea ice, but it has also delivered scientific and methodological advances across a diverse range of applications. Although these datasets have historically been processed by ESA to Level-2, following consultations with the wider community, it has become increasingly clear that there is significant untapped value that can be realised by expanding the user base through the development of dedicated L2P Thematic Products. Crucially, this requires simplified, agile and state-of-the-art products and processing flows, which deliver an easy-to-use dataset whilst maintaining the native along-track sampling of the original Level-2 products. Thus, ESA has embarked on a new path towards developing CryoSat-2 Thematic Products, which aim to drive further innovation and exploitation, and has created a model that has now been replicated across other radar altimeter missions. Here, we present the latest results from Cryo-TEMPO, an ESA-funded study that began in October 2020 with the goal of delivering a new era of innovative CryoSat-2 Thematic Products over a number of domains: land ice, sea ice, snow depth on sea ice, polar ocean, coastal ocean and inland water. The over-arching objectives of Cryo-TEMPO are (1) to implement dedicated, state-of-the-art processing algorithms over each thematic domain, (2) to develop agile, adaptable processing workflows that are capable of rapid evolution and processing at high cadence, (3) to create products that are driven by, and aligned with, user needs, thereby opening up the data to new communities of non-altimetry experts, and (4) to deliver transparent and traceable uncertainties associated with each thematic parameter.
In this poster presentation we shall provide an overview of the project, a review of the current generation of these thematic products, and a look forward to the evolutions that are being implemented within the next phase of the study.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Long term records of radar altimetry for monitoring ice sheet processes.

Authors: Jennifer Maddalena, Mal McMillan, Alan Muir, Lin Gilbert, Tom Slater, Qi Huang, Ben Palmer
Affiliations: Lancaster University, University College London, Northumbria University
Monitoring the Greenland and Antarctic ice sheets' response to surface and ocean warming is critical, as their combined melt has contributed approximately one-third of the global mean sea level rise since the early 1990s. Satellite radar altimetry is one technique used to monitor changes to the ice sheets. By tracking changes in ice sheet elevation over time, radar altimetry provides critical insights into the ice sheet processes driving mass loss. The radar altimetry record began in 1991 with the launch of ERS-1 and has continued through multiple subsequent missions, providing over three decades of records for monitoring ice sheets. The most significant advancement has been the launch of CryoSat-2 in 2010, the first cryosphere-specific mission, which employs a Synthetic Aperture Radar Interferometric Altimeter (SARIn) for higher spatial resolution and accuracy in areas that are challenging for the conventional altimetry on board the earlier missions. Over the past two decades, the UK Centre for Polar Observation and Modelling (CPOM) has made significant contributions to the development of long-term altimetry datasets for the purposes of monitoring changes in ice sheet height, volume and mass over Greenland and Antarctica. This study presents recent developments in CPOM’s comprehensive datasets derived from the satellite radar altimetry record, spanning missions from ERS-1 to CryoSat-2. We leverage ESA’s state-of-the-art altimetry datasets, including the Fundamental Data Records for Altimetry (ERS-1, ERS-2, and Envisat) and the CryoSat ThEMatic PrOducts (Cryo-TEMPO), which have demonstrated enhanced performance for measuring ice sheet elevation due to their optimised processing tailored for land ice. Using these advanced Level-2 altimetry datasets, we generate time series of ice sheet elevation, volume, and mass, developed as part of the UK’s Earth Observation Climate Information Service.
Here we showcase the datasets available via the CPOM land ice online portal, together with several illustrative applications. We aim to engage in discussions that can enhance the utility of CPOM datasets for both the scientific and downstream services communities.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Towards Operational Frameworks for Monitoring Ice Sheet Elevation change: A Kalman Filtering Approach

Authors: Robert Wassink, Malcolm Mcmillan, Jennifer Maddalena
Affiliations: CPOM, Lancaster University
Generating accurate estimates of ice sheet surface elevation change (SEC) is crucial to our understanding of the impact that ice sheets have on sea level rise. The 30-year radar altimetry record is one of our best resources for assessing these continental-scale changes spanning multiple decades. Conventionally, altimetry-derived SEC time series for ice sheets are computed by fitting a polynomial model to repeated measurements within small ice sheet regions for a single satellite mission. However, with the presence of multiple synchronously orbiting altimeters, there is now the opportunity to employ more advanced statistical methods capable of assimilating measurements from various sensors while acknowledging disparities in uncertainties. Here, we present results from work to develop one such approach, which employs Kalman filtering to fuse data from multiple sources. The Kalman filter is a recursive algorithm for estimating the states of dynamic systems where the true state is not directly observable. This is achieved by integrating state predictions with observations over time, thereby providing optimal estimates of a system's state. First devised in the late 1950s, the Kalman filter and its extensions have found widespread use in various fields, including Earth system science. Four-dimensional local ensemble transform Kalman filters have been used in atmospheric science for generating time series of spatiotemporally chaotic systems, and we aim to employ a similar technique for our applications to the cryosphere. Specifically, we present a novel approach for generating ice sheet SEC time series from radar altimetry data, utilising spatially localised Kalman smoothing and Gaussian mixture model approximations. We demonstrate this technique using Cryo-TEMPO CryoSat-2 radar altimetry measurements acquired over the Greenland Ice Sheet between 2010 and 2024.
The results will be assessed with two case studies over areas with different climatological regimes: (1) Greenland Summit and (2) Russell Glacier, making intercomparisons with existing approaches and validating against in situ observations. If successful, we hope this work will form the basis of a new framework for assimilating other current and historical missions to produce a single collective time series of SEC over ice sheets.
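The predict-update cycle at the heart of the approach can be sketched in a few lines. This is a minimal scalar illustration, not the authors' spatially localised Kalman smoother: the random-walk state model, noise variances, and synthetic two-sensor data are all invented for the example.

```python
# Illustrative sketch only: a 1-D Kalman filter fusing noisy elevation
# observations (with per-point variances) into one SEC time series.
import numpy as np

def kalman_fuse(times, obs, obs_var, process_var=0.01):
    """Recursively estimate elevation from observations with known variances."""
    x, P = obs[0], obs_var[0]          # initialise state from first observation
    estimates = [x]
    for t in range(1, len(obs)):
        dt = times[t] - times[t - 1]
        P = P + process_var * dt       # predict: random-walk noise grows with the gap
        K = P / (P + obs_var[t])       # Kalman gain weighs prediction vs observation
        x = x + K * (obs[t] - x)       # update state with the innovation
        P = (1.0 - K) * P              # update state uncertainty
        estimates.append(x)
    return np.array(estimates)

# Synthetic example: a surface lowering at 0.5 m/yr seen by two sensors
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))
truth = -0.5 * t
noise_var = np.where(rng.random(200) < 0.5, 0.2, 0.05)  # two sensors, two noise levels
z = truth + rng.normal(0.0, np.sqrt(noise_var))
est = kalman_fuse(t, z, noise_var)
```

The gain K automatically down-weights the noisier sensor, which is the property that lets measurements with disparate uncertainties be fused into a single time series.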
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Retrieval of Ice Sheet Topography from CryoSat-2 Waveforms Using Deep Learning

Authors: Joseph Phillips, Malcolm McMillan
Affiliations: Lancaster University, UK Centre for Polar Observation and Modelling, Centre of Excellence in Environmental Data Science
Between 1992 and 2020, the Antarctic Ice Sheet (AIS) lost approximately 2671 ± 530 billion tonnes of ice, with recent rates of loss approximately six times greater than those measured in 1979. This accelerated ice loss, predominantly occurring across parts of West Antarctica and the Antarctic Peninsula, has raised global sea levels by about 7.4 ± 1.5 mm over the same period. As Earth's climate continues to warm, projections suggest that the AIS contribution to sea-level rise could exceed several decimetres by 2100, with worst-case scenarios exceeding 1 meter. Detailed and long-term monitoring is therefore essential both to quantify, and to understand the processes driving, this change. While a range of techniques exist for remote observation of the cryosphere, our understanding of how ice sheets are changing is largely informed by satellite observations, with the longest continuous record coming from the technique of radar satellite altimetry, dating back to the early 1990s. These instruments work by transmitting a short radio-wave pulse towards Earth’s surface and recording the returned echo in the form of a discrete waveform. The principal use of altimetry over ice sheets is to derive estimates of ice sheet elevation change, which, ultimately, can be used to determine ice sheet mass imbalance. Although satellite radar altimetry has proven invaluable for monitoring polar ice sheets, single-antenna measurements used in both historical and contemporary missions suffer from ambiguity in the origin of surface reflections. Each power waveform therefore represents a distribution of potential surfaces within the satellite footprint. 
Conventional Level-2 (L2) Point-of-Closest-Approach processing chains address this ambiguity using the techniques of retracking and slope correction, but these become less certain over complex terrain, and also reduce the detailed information in the waveforms to a single elevation measurement at the surface’s point of closest approach to the satellite. This study introduces a novel deep learning-based approach to predict the across-track topographic distribution of potential surfaces directly from CryoSat-2 (CS2) Synthetic Aperture Radar Interferometric (SARIn) altimetry power waveforms. CS2 SARIn data is selected for this proof-of-concept study because (1) it provides a large range window, which is beneficial for complex ice surfaces, and (2) it allows the withheld interferometric information to be used as an additional source of validation for the trained network. Using CS2 waveforms from 2012–2022 and the 100 m resolution Reference Elevation Model of Antarctica (REMA), we derive 15 km-wide, across-track topographic profiles corresponding to each 1024-bin waveform. To filter waveform-profile mismatches, we employ a rudimentary waveform simulator, and additionally balance the dataset across surface slope and roughness, resulting in a training dataset of 600,000 waveform-profile pairs. We then train an ensemble of 16 ResNet-RS models, using Pinball loss, to predict the 5%, 50%, and 95% quantiles of topographic profiles alongside uncertainty estimates (1σ). Model performance is evaluated using withheld data from Pine Island (2012–2022) and temporally withheld data across the entire ice sheet (2023). Our results demonstrate that the method successfully predicts high-resolution surface distributions that consistently and accurately capture the true topography.
This study represents an early step toward integrating deep learning into conventional L2 processing chains, providing a framework for retrieving detailed surface topographic distributions directly from radar altimetry power waveforms. In doing so, we demonstrate how new, data-driven processing approaches can complement and enhance existing techniques, and thus contribute to maximising the exploitation of both current and future altimeter missions, such as Sentinel-3's Ku-band SAR and the Ka-band SAR that will be flown onboard CRISTAL.
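The Pinball loss used for the quantile training above has a compact closed form; minimising it drives a predictor toward the target quantile. A NumPy sketch (the actual ResNet-RS ensemble and training loop are not reproduced here):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: under-prediction costs tau per unit of error,
    over-prediction costs (1 - tau), so the minimiser is the tau-quantile."""
    err = y_true - y_pred
    return np.mean(np.maximum(tau * err, (tau - 1.0) * err))
```

For tau = 0.95 the loss penalises under-prediction 19 times more heavily than over-prediction, pushing the prediction up toward the 95% quantile of the profile distribution; tau = 0.5 recovers (half) the mean absolute error and hence the median.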
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: A Seamless Ice Sheet Digital Elevation Model Using CryoSat-2

Authors: Carolyn Michael, Sophie Dubber, Livia Jakob, Noel Gourmelen, Andrea Incatasciato, Martin Ewart, Alessandro Di Bella, Jerome Bouffard
Affiliations: Earthwave, The University of Edinburgh, ESA Centre for Earth Observation
The Greenland and Antarctic Ice Sheets contribute around a quarter of current sea level change and have the potential to raise sea level by several metres in the future. The surface elevation of ice sheets, and its temporal evolution, is one of the essential climate variables: it forms the basis for mass balance monitoring and for projections of the sea level contribution under future climate scenarios. This work explores the creation of a seamless and gapless annual digital elevation model (DEM) derived from CryoSat-2 radar altimetry measurements to aid in the ongoing study of the ice sheets' ever-changing topography. CryoSat-2 waveforms can be processed using two distinct techniques: (1) the conventional Point-Of-Closest-Approach (POCA), sampling a single elevation beneath the satellite, and (2) Swath processing, which produces a swath of elevation measurements across the satellite ground track beyond the POCA, increasing spatial and temporal resolution. CryoSat-2 operates in its Synthetic Aperture Radar Interferometric (SARIn) mode over the margins of the ice sheets, allowing both processing techniques; within the ice sheet interior, however, CryoSat-2 switches to its Low Resolution Mode (LRM), allowing solely the POCA technique. To achieve a comprehensive DEM encompassing the entirety of the ice sheet, whilst optimising data coverage, it is imperative to integrate and reconcile the outputs obtained from these distinct processing methodologies. This investigation uses two data sets provided by ESA’s CryoSat thematic product range: the CryoSat-2 ThEMatic PrOducts (CryoTEMPO) land ice data set, which applies the POCA processing technique and covers the entirety of the ice sheets, and the CryoTEMPO-EOLIS (Elevation Over Land Ice from Swath) data set, which provides a comprehensive point cloud specific to the ice sheet margins.
To create the seamless and gapless DEM, the EOLIS and CryoTEMPO land ice datasets are aggregated into a spatial grid, utilising a Gaussian Radial Basis Function kernel to consider both the spatial and temporal distribution of data points. To integrate EOLIS measurements from the margins of the ice sheet with CryoTEMPO land ice measurements from its interior, adjustments for variations in penetration are necessary to facilitate a seamless transition and mitigate the impact of anomalies. The combined and adjusted dataset is then post-processed to remove outliers, while missing data are interpolated using a combination of External Drift Kriging, Universal Kriging, and Radial Basis Function interpolation. Each method is selected based on the specific properties of the surface being interpolated, ensuring a gapless DEM. This poster will provide an overview of the gridding, merging, and interpolation methodologies. Additionally, an assessment of the accuracy of the dataset will be presented, with comparisons to existing datasets. This is a new CryoSat-2 product that will be made available in 2025.
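The spatio-temporal Gaussian kernel weighting described above can be sketched as follows. The length scales, coordinates, and the simple weighted mean are illustrative assumptions, not the product's actual parameters:

```python
import numpy as np

def rbf_weighted_elevation(px, py, pt, pz, cx, cy, ct,
                           space_scale=2000.0, time_scale=30.0):
    """Kernel-weighted mean elevation for a grid cell centred at (cx, cy)
    and epoch ct, from scattered points (px, py, pt) with elevations pz."""
    d2 = ((px - cx) ** 2 + (py - cy) ** 2) / space_scale ** 2 \
         + ((pt - ct) ** 2) / time_scale ** 2
    w = np.exp(-0.5 * d2)              # Gaussian RBF weight per point
    return np.sum(w * pz) / np.sum(w)

# Example: a far-away point barely influences the cell estimate
px = np.array([0.0, 10000.0])
py = np.array([0.0, 0.0])
pt = np.array([0.0, 0.0])
pz = np.array([100.0, 0.0])
cell_z = rbf_weighted_elevation(px, py, pt, pz, 0.0, 0.0, 0.0)
```

Points far from the cell centre in space or time receive exponentially small weights, so each grid cell is dominated by nearby, near-contemporaneous measurements.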
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Sea ice topography estimation from grazing GNSS reflectometry and Satellite Radar Altimetry

Authors: Matthias Raynal, Weiqiang Li, Estel Cardellach, Nicolas Picot, Franck Borde
Affiliations: Cnes, Institute of Space Studies of Catalonia, ESA/ESTEC
Sea ice thickness is a key parameter for studying the sea ice response to climate change. Changes in sea ice have important impacts on local ecosystems, ocean-atmosphere heat fluxes and human activities. However, this variable remains particularly difficult to measure with high precision over the whole Arctic and Antarctic regions. Since the 1990s, satellite altimetry has partially answered this need by providing freeboard estimates with unprecedented spatial coverage and relatively good temporal sampling. The freeboard estimate is based on the difference between heights measured above the floes and over the leads. Using additional information on water, ice and snow densities, and assuming hydrostatic equilibrium, an estimate of the ice thickness can be computed. This is not a straightforward operation, particularly because of the complexity of processing the radar altimeter signal needed to measure the lead and floe topography. Indeed, surface heterogeneities (mixes of open water, sea ice, leads, melt ponds …) and differing surface properties (presence of snow, ice type, etc.) significantly affect the radar signal and are only partially accounted for in current retracker models. In addition, the electromagnetic waves from radar altimeters exhibit different propagation properties depending on snow conditions, impacting the precision of the final topography measurement. The validation of altimetry datasets and the characterisation of residual errors in these complex regions remain difficult because of the small number of reference measurements available, which are usually retrieved from local in situ campaigns. In the frame of an ESA project, it was decided to explore the capability of retrieving sea ice topography from GNSS reflectometry at low elevation angles, or Grazing Angles (GA), and to assess how this technology could perform in these regions.
These measurements, with their different signal characteristics (footprint resolution, L-band signal …), could be of great interest to the sea ice community. The Institute of Space Studies of Catalonia (IEEC) performed the GA GNSS-R processing on measurements provided by Spire Global and acquired at low elevation angles. The dataset obtained was compared with radar altimetry measurements from Sentinel-3 in the Arctic Ocean, close to the Beaufort Sea. The results show impressive performance for the GNSS-R measurements: the topography signal has very low random error, and the lead/floe transitions are very well observed. These results, retrieved from a small data sample, are very promising and confirm that the GNSS-R technique can complement radar altimetry observations, for topography estimation but, most importantly, to further study snow and ice penetration and properties thanks to the different frequencies of the electromagnetic signals (L, Ku, Ka, C bands).
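The hydrostatic-equilibrium conversion from freeboard to thickness mentioned in the abstract reduces to one line. The density values below are typical literature figures, not values from this study:

```python
def ice_thickness(ice_freeboard_m, snow_depth_m,
                  rho_w=1024.0, rho_i=917.0, rho_s=300.0):
    """Sea ice thickness from ice freeboard and snow depth, assuming the
    floe floats in hydrostatic balance: rho_i*T + rho_s*hs = rho_w*(T - F)."""
    return (rho_w * ice_freeboard_m + rho_s * snow_depth_m) / (rho_w - rho_i)
```

With these densities, a 20 cm ice freeboard with no snow corresponds to roughly 1.9 m of ice; since dT/dF = rho_w / (rho_w - rho_i) is close to ten, small freeboard errors, such as those from radar penetration into snow, are strongly amplified in the thickness estimate.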
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: PRODEM: An Annual Series of Summer DEMs (2019–2024) for the Greenland Ice Sheet and Adjacent Ice Caps and Glaciers

Authors: Mai Winstrup, Heidi Ranndal, Signe Hillerup Larsen, Sebastian Bjerregaard Simonsen, Kenneth D. Mankoff, Robert S. Fausto, Louise Sandberg Sørensen
Affiliations: DTU Space, Technical University of Denmark (DTU), Geological Survey of Denmark and Greenland, Autonomic Integra LLC, NASA Goddard Institute for Space Studies
The surface topography of the Greenland Ice Sheet and adjacent ice masses evolves rapidly under the influence of seasonal, climatic, and dynamic processes. Traditional large-scale Digital Elevation Models (DEMs) often aggregate data over multiple years, thereby obscuring these short-term changes. To address this, we developed PRODEM, an annual series of high-resolution (500 m) summer DEMs spanning the years 2019 through 2024. These annual DEMs encompass both the Greenland ice sheet marginal zone and adjacent ice caps and glaciers, and current efforts are directed towards extending them to cover the entire ice sheet. Each PRODEM is constructed by fusing data from CryoSat-2 radar altimetry and ICESat-2 laser altimetry through a regionally adaptive kriging approach, after referencing these data to ArcticDEM elevations. The applied approach exploits the high spatial resolution of ArcticDEM, while enhancing it with a temporal resolution based on the satellite altimetry, thereby providing detailed insights into surface elevation dynamics. Validated through leave-one-out cross-validation, PRODEM effectively captures the spatial and temporal evolution of the ice sheet surface topography. Compared to ArcticDEM, the annual PRODEMs reveal a general lowering of elevations, while highlighting a detailed and complex spatial pattern of annual variability across the ice sheet and adjacent ice masses. The high spatio-temporal resolution allows for the detection of subtle changes in surface elevation that are often missed by coarser models, providing a more nuanced understanding of the localized impacts of climate change and ice dynamics combined. PRODEM offers a critical resource for studying the response of the ice sheet and surrounding ice caps and glaciers to environmental change. 
The detailed annual DEMs will be of value to a wide range of researchers and users studying ice sheet dynamics at local and regional scales, and facilitate the monitoring of ice mass balance, glacier flow rates, and surface melt processes, which are essential for predicting future sea-level rise and its global impacts. Furthermore, the extension of PRODEM to cover the entire ice sheet will enhance its utility, offering comprehensive coverage and enabling more accurate assessments of Greenland’s contribution to sea-level rise. This ongoing work underscores the importance of high-resolution, temporally consistent DEMs in advancing our understanding of the cryosphere in a changing climate. The current PRODEM series is available at https://doi.org/10.22008/FK2/52WWHG (Winstrup, 2024), and we plan to annually update the product henceforth.
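The leave-one-out cross-validation used to validate PRODEM follows a standard pattern: withhold each observation, re-interpolate from the rest, and compare. The sketch below substitutes simple inverse-distance weighting for the actual regionally adaptive kriging, so it illustrates the validation loop only:

```python
import numpy as np

def idw_predict(x, y, z, qx, qy, power=2.0):
    """Inverse-distance-weighted prediction at (qx, qy) from scattered points."""
    d = np.hypot(x - qx, y - qy)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * z) / np.sum(w)

def loo_errors(x, y, z):
    """Leave-one-out residuals: predict each point from all the others."""
    errs = []
    for i in range(len(z)):
        keep = np.arange(len(z)) != i          # withhold point i
        pred = idw_predict(x[keep], y[keep], z[keep], x[i], y[i])
        errs.append(pred - z[i])
    return np.array(errs)
```

On a constant field the loop returns zero errors; on real data the spread of `loo_errors` provides an empirical uncertainty for the interpolated surface.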
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Sentinel-3 Land STM: Performance of the S3A and S3B Surface Topography Mission over Land Ice

Authors: Stine Kildegaard Rose, Sebastian B. Simonsen, Jérémie Aublanc, Anouk Chamayou, Filomena Catapano, Pierre Femenias
Affiliations: DTU Space, Collecte Localisation Satellites, ESA-ESRIN
Sentinel-3 is an Earth observation satellite series developed by the European Space Agency as part of the Copernicus Programme. It currently consists of two satellites, Sentinel-3A and Sentinel-3B, launched on 16 February 2016 and 25 April 2018, respectively. Among their onboard instruments, the satellites carry a radar altimeter to provide operational topography measurements of the Earth’s surface. For land ice, the main objective of the Sentinel-3 constellation is to provide accurate measurements of the polar ice sheet topography to support ice sheet mass balance studies. Compared to previous missions carrying conventional pulse-limited altimeters, Sentinel-3 measures the surface topography with an enhanced spatial resolution, thanks to the onboard SAR Radar ALtimeter (SRAL), exploiting its delay-Doppler capabilities. To ensure the mission requirements are met, the Sentinel-3 Mission Performance Cluster (MPC) oversees the monitoring of the instrument and core product performances. Here, the MPC Land Ice ESLs (Expert Support Laboratories) present the current performance of the satellites over the Antarctic and Greenland ice sheets. Further improving the performance of Sentinel-3 Altimetry products is also a core element of the MPC. Hence, ESA and the MPC recently developed specialized delay-Doppler and Level-2 processing chains over (1) inland waters, (2) sea ice, and (3) land ice areas. The objective was to provide users with new dedicated “thematic products” for the three surfaces mentioned. For land ice, delay-Doppler processing with an extended window has been implemented to enhance the coverage of the ice sheet margins. The full-mission reprocessing of S3A/B data was finalized in 2023 and is now operational. This reprocessing shows excellent stability and homogeneity between S3A and S3B, addressing the Land Ice community's demands for consistency in deriving ice-surface elevation changes.
Significantly, we've streamlined this process exclusively through Sentinel-3, distinguishing it as the go-to mission for deriving essential climate parameters. Our presentation highlights Sentinel-3's recent performance over land ice, showcasing surface elevation change results derived solely from its thematic product. To validate our findings, we compare results with CryoSat-2 and ICESat-2, providing a benchmark for future evaluations. This reinforces Sentinel-3's reliability in a landscape with fewer polar radar altimeters, ensuring sustained and accurate assessments of land ice dynamics. As part of the Copernicus program, the Sentinel-3 mission will provide continuous operational observations of the cryosphere until at least 2030.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: CS2 Baseline F Level 2 Evolutions

Authors: David Brockley, Steven Baker
Affiliations: UCL
CryoSat-2, a flagship mission of the European Space Agency (ESA), has been instrumental in monitoring the Earth’s cryosphere, delivering high-resolution data on ice thickness, freeboard, and sea level for over a decade. As scientific requirements and technological capabilities evolve, the mission’s Level 2 (L2) processing chains (the critical step for deriving geophysical parameters from raw measurements) are undergoing significant updates to enhance performance, accuracy, and accessibility. This poster presents an overview of upcoming advancements in the L2 processing chains for CryoSat-2. Key updates include refined algorithms for retracking radar waveforms in SARIn mode, improved corrections for tidal and geophysical effects, and improvements to the discrimination of sea ice leads. This evolution of CryoSat-2’s L2 processing chain reflects a commitment to ensuring the mission remains a cornerstone of cryosphere and climate research, maximizing its scientific legacy for the global community. It is also a first step in aligning the processing approach with that of the upcoming CRISTAL mission, with a view to easing cross-comparison of data from the two missions.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Sensitivity of Sea Ice Concentration and Snow Depth to radiometer brightness temperatures at 23, 36 and 89 GHz

Authors: Laiba Amarouche, Antoine Feufeu, Morgane Farradeche, Marie-Laure Frery, Franck Bordes
Affiliations: CLS, CNES, ESA
The Sentinel-3 Microwave Radiometer (MWR) instrument has proven its ability to provide highly accurate geophysical parameters over ocean surfaces and also over ice, such as sea ice type (Tran et al. 2009). However, there is still considerable room for new products. Therefore, we explored the potential of the Sentinel-3 MWR 23.8 and 36.5 GHz channels and the AMSU-A nadir 89 GHz channel for estimating sea ice concentration and snow depth on sea ice. AMSU-A nadir 89 GHz has been used to anticipate the behavior of this high frequency over sea ice, which can be beneficial in supporting other altimetry missions such as S3NGT or CRISTAL, for which these retrievals are part of the mission observation requirements. Sea ice concentration (SIC) is essential for determining the sea ice extent as well as for estimating sea ice freeboard and thickness from altimeter measurements. Currently, the SIC information available in the Sentinel-3 altimeter Level 2 product is interpolated to the time and location of the altimeter measurement from external models (OSI SAF in the latest Sentinel-3 Thematic Land processing). Sentinel-3 MWR data could be used to estimate SIC collocated with altimeter measurements in order to provide more precise information in space and time. An analysis of the sensitivity of S3A/B MWR brightness temperatures to Sea Ice Concentration was carried out. From the state of the art (ESA Sea Ice CCI report, 2013, Ivanova et al. 2015), two algorithms have been identified as being applicable to the S3A/B MWR channels, characterized by two microwave frequencies (23.8 GHz and 36.5 GHz) and a near-nadir incidence angle: the One-channel algorithm (Pedersen, 1991) and the Bootstrap algorithm (two-channel) in frequency mode (Comiso, 1986 and 1995). We adapted and implemented these two algorithms using one year (2017) of S3A MWR brightness temperatures. The performance has been evaluated by computing the average root mean square error between the retrieved S3A MWR SIC and the OSI SAF SIC, taken as a reference, for both algorithms.
It was in the range of 5 % in winter and 6-8 % in summer, with a low average bias, most of the time less than 1 %. These results are very promising, as they show performance similar to other state-of-the-art algorithms applied to radiometer missions such as AMSR-E, SSM/I or SMMR (ESA Sea Ice Climate Change Initiative report, 2013). As for Snow Depth (SD) over sea ice, it is needed for an accurate determination of sea ice freeboard, which in turn is used in the estimation of sea ice thickness. Indeed, sea ice freeboard measurements using altimetry signals are affected by radar signal penetration into the snow, leading to non-negligible errors in the freeboard estimate and hence in the sea ice thickness. For this study, a snow-depth-on-sea-ice retrieval algorithm from the literature was studied in order to assess whether it can be applied using S3 MWR brightness temperature channels. This analysis was performed over a year of data and as a function of hemisphere and surface type. We used as reference the Ku-Ka altimetric snow depth (ASD) product from LEGOS. No direct correlation between snow depth derived from S3A MWR brightness temperatures and Ka-Ku ASD was found. We then extended the analysis by adding a high frequency: the AMSU-A 89 GHz nadir channel. The results of this analysis will be presented.
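The one-channel retrieval (Pedersen, 1991) referenced above amounts to a linear interpolation between brightness-temperature tie points for open water and consolidated ice. A sketch, with placeholder tie-point values rather than the study's calibrated ones:

```python
import numpy as np

def sic_one_channel(tb_k, tb_water=160.0, tb_ice=250.0):
    """Sea ice concentration in [0, 1] from one brightness temperature (K),
    assuming the scene is a linear mix of open-water and ice emission."""
    sic = (tb_k - tb_water) / (tb_ice - tb_water)
    return np.clip(sic, 0.0, 1.0)
```

The Bootstrap frequency-mode variant replaces the single channel with the position of the (23.8 GHz, 36.5 GHz) measurement pair relative to the open-water point and the consolidated-ice cluster, but keeps the same linear-mixing idea.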
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: High Resolution Freeboard From ICESat-2 Photon Cloud Data

Authors: Wenxuan Liu, Michel Tsamados, Taoyong Jin
Affiliations: Wuhan University, University College London
Reliable ICESat-2 observations of sea ice freeboard have been predominantly limited to winter months. In this study, we present an innovative approach to estimate higher resolution sea ice freeboard for both the Arctic and Antarctic regions, in winter and summer, using ICESat-2 ATL03 photon cloud data. This method involves two critical steps: height retrieval and sea ice classification. First, we introduce the Kernel Density Estimate (KDE) algorithm to calculate photon density for each individual photon, allowing for the extraction of signal photons and the resolution of surface heights at the photon-cloud data's native resolution. Compared to the official ATL07 product, the KDE algorithm enables more accurate height retrieval due to its ability to effectively reject after-pulse photons, thereby improving the precision of freeboard estimates. For sea ice classification, we sample ATL03 signal photons into along-track windows of <5 m, finer than the ~27 m resolution typically averaged in ATL07. Key parameters, such as photon density, photon rate, background rate, and the width of the height distribution, are derived to conduct the classification of the observations. We then apply this high-resolution data and extracted parameters to train several machine learning models, including Convolutional Neural Networks (CNN), Random Forest (RF), and Artificial Neural Networks (ANN). Ground truth data is provided by over 10 collocated Sentinel-2 images from both the Arctic and Antarctic, which correspond to nearly 700,000 ICESat-2 5 m sampling observations. The trained models classify the ICESat-2 observations into three categories: sea ice, lead, and gray ice, with classification accuracies exceeding 95%. To validate the performance of our approach, we compare the resulting sea ice freeboard estimates with near-coincident airborne lidar data (ATM).
The results show excellent correlations between ICESat-2 5 m sampling freeboard and ATM freeboard, with an average correlation coefficient greater than 0.94 for strong beams and greater than 0.91 for weak beams. Furthermore, the mean residuals are near zero, ranging between -0.01 and 0.01 cm, indicating a high level of agreement between the freeboard estimates derived from ICESat-2 and those from the ATM. Notably, this approach is applicable to summer data, as the classification model effectively distinguishes melt ponds from leads, preventing misclassification. This approach represents a significant advancement in the ability to retrieve high-accuracy sea ice freeboard estimates over both the Arctic and Antarctic, across both winter and summer seasons. The integration of high-resolution photon data with machine learning techniques provides a robust methodology for sea ice monitoring and offers valuable insights for climate change studies and sea ice dynamics.
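The KDE step described above can be illustrated with a toy photon cloud: each photon's density is computed from a Gaussian kernel over all photon heights, and the dense cluster of surface returns separates cleanly from sparse background noise. The bandwidth, threshold, and data are invented for the example:

```python
import numpy as np

def photon_density(heights, bandwidth=0.5):
    """Gaussian kernel density estimate evaluated at each photon's own height."""
    diff = heights[:, None] - heights[None, :]
    k = np.exp(-0.5 * (diff / bandwidth) ** 2)
    return k.sum(axis=1) / (len(heights) * bandwidth * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
surface = rng.normal(0.0, 0.1, 200)    # dense cluster: surface photon returns
noise = rng.uniform(-50.0, 50.0, 40)   # sparse solar-background photons
heights = np.concatenate([surface, noise])
dens = photon_density(heights)
signal = dens > 0.05 * dens.max()      # crude density threshold for signal photons
```

A real implementation would work per along-track window and tune the bandwidth to the photon rate; the O(n²) pairwise matrix here is only viable for small windows.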
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: CS2EO: Query Platform for Altimetry Data

Authors: Julia Bizon, Martin Ewart, Noel Gourmelen, Alex Horton, Tristan Goss, Curtis Burns, Andrea Incatasciato, Tommaso Parrinello, Jérôme Bouffard, Alessandro Di Bella, Carolyn Michael, Jonathan Alford, Robert Easthope, Marco Meloni
Affiliations: Earthwave Ltd, University of Edinburgh, European Space Agency
Coincident observations from altimetry missions such as CryoSat-2 and ICESat-2 are vital for advancing our understanding of the cryosphere. These observations enable key applications, including the measurement of snow depth, detection of surface elevation changes, and monitoring of temporal variability in ice and snow conditions. Efficient access to coincident datasets is crucial to unlock the full potential of these missions, facilitating insights into polar processes and their response to climate change. CS2EO is Earthwave’s high performance geospatial-temporal query platform for Earth Observation data. The portal was originally designed to support the CRYO2ICE campaign, and so was focused on enabling users to find coincident data in the polar regions between CryoSat-2 and ICESat-2 satellites. Since its inception, the portal has evolved into a comprehensive data access platform, offering a wide range of functionalities. The portal enables users to discover, visualise, combine, and download data from missions such as CryoSat-2, ICESat-2, CryoVEx, CryoTEMPO, Operation IceBridge, and AWI IceBridge. Combined datasets for CryoSat-2, linking the LRM, SAR, and SARIn L2 products, and for ICESat-2, linking the ATL06 and ATL07 products, allow users to view and intersect these products at the same time. Predicted ground track data is also available, allowing users to view and plan for future CryoSat-2 and ICESat-2 satellite passes and intersections. The website allows users to select intersects that are close in time (e.g. 3 hours) up to a 28 day separation, supporting diverse applications in sea ice, land ice, and ocean studies. Each dataset available on the website can also be queried for basic data download, allowing users to view the data independently rather than searching for intersections. 
The user-friendly interface allows researchers to define their area of interest by specifying a bounding box, drawing a polygon, uploading a GeoJSON, or selecting from a list of Known Locations - polygons defined within the RGI V7 and IMBIE datasets. The intuitive interface allows users to download only the data they are interested in, saving a large amount of both download time and data preparation time. The portal’s Time Series Processing functionality allows users to generate, visualise, and download time series for a variety of datasets. After selecting a dataset, users may choose which variable to plot, enabling them to focus on specific aspects of the data such as surface elevation or ice thickness. Multiple areas of interest can be selected for side-by-side analysis of different locations over time. Users may further enhance their analysis by adding percentile bands, which display the spread of data over the selected areas, or apply an outlier removal algorithm to remove spurious data points. Recent enhancements include user accounts with query and download histories, enabling researchers to revisit, modify, and resubmit past queries or retrieve recent downloads. The newly introduced Advanced Data Download feature allows users to filter data by column value, to reproject, and to reformat before downloading data, further improving the accessibility and usability of the data. This poster will present the features and capabilities of the CS2EO portal, illustrating its role in simplifying access to coincident and individual altimetry data. Feedback on potential features, datasets, and enhancements will be gathered to guide future developments.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Establishing FRM measurements for the Sentinel-3 altimeter on polar ice sheets and ice caps

Authors: Vincent Favier, Ghislain Picard, Emmanuel Le Meur, Laurent Arnaud, Arnaud Reboud, Lucile Fayon, Jérémie Aublanc, Geir Moholdt, Claire Miller, Mahmoud El Hajj
Affiliations: Université Grenoble Alpes, Collecte Localisation Satellites, Norwegian Polar Institute, NOVELTIS
With the launch of high-resolution radar altimeters, much research has focused on the monitoring of ice-sheet mass balance. For changes in surface elevation, it is important to monitor the same locations repeatedly over long periods and with a high accuracy, of the order of 1 cm to 10 cm. However, this level of accuracy is difficult to achieve, due to biases caused by signal penetration and scattering in snow, and because the accuracy of altimetry measurements is low in areas with complex topography. In these areas, to interpret the returned echo waveforms, the first echo is generally used to identify the ‘point of closest approach’ (POCA) to the satellite, but its geolocation can vary by several hundred metres between different overpasses, impeding repeated measurements at the same location. Calibration/validation (Cal/Val) programs for new altimeters are therefore necessary to interpret satellite data and assess their accuracy. This is particularly the case for Sentinel-3's SAR altimeter, with which glaciological applications have been developed on ice sheets. Airborne campaigns with dual lidar/radar systems onboard have been used for altimetry Cal/Val in polar regions in the framework of NASA’s Operation IceBridge and ESA’s CryoVEx program. In situ surveys of surface elevation have also been conducted using kinematic GNSS, with the antenna mounted on a snow vehicle/sledge, or by UAVs using photogrammetry or lidar. However, many of these surveys are only valid and useful for validation for a short time before/after the acquisition, given that the ice sheet surface is highly dynamic. Indeed, surface firn and fresh snow layers move with the ice flow and due to internal deformation and densification. Over time, new snow accumulates, sublimates or melts, and snow and firn erode and are redeposited in new places. All this means that the surface is changing dynamically, so there is no permanent reference surface for ground-based validation measurements.
Repeated measurements are required. Here we propose a strategy to produce fiducial reference measurements (FRM) at Sentinel-3’s POCA locations, dealing with the high spatial and temporal variability of the snow surface in Antarctica and in Svalbard. Our strategy is to measure the position of a platform (using a Global Navigation Satellite System (GNSS) receiver) equipped with a snow ranger. This platform can be 1) mounted on a snowmobile to provide almost instantaneous profiles of absolute height along satellite tracks, or 2) installed as a fixed ground station taking continuous measurements over long periods of time, giving access to high-frequency changes related to snow accumulation and erosion or ablation. Here we present this two-fold strategy to perform FRM measurements for Sentinel-3 altimetry validation in Svalbard and in Antarctica. A first fixed station was tested in the Alps, while the selection of the best sites for FRM in Antarctica is made according to the stability of the POCA location in time and to the quality of the returned radar waveforms. The first data are analyzed and compared with Sentinel-3 altimetry measurements and with available data from previous programs.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: 10+ years of Greenland Ice Sheet near-surface density evolution from remote sensing

Authors: Kirk Scanlan, Anja Rutishauser, Nicolaj Hansen, Sebastian Simonsen
Affiliations: DTU Space, Technical University of Denmark, Geological Survey of Denmark and Greenland, Department of National Centre for Climate Research (NCKF), Danish Meteorological Institute
Knowing near-surface density is fundamental when deriving an ice sheet mass balance from repeat satellite radar altimetry. Surface elevation change is inherently a volumetric measurement, and a density estimate is required to link changes in volume to changes in mass. Conventional means for assessing pan-ice sheet density are to 1) assume density is constant, 2) assume a simply varying spatial pattern (e.g., above or below the equilibrium line) or 3) rely on the outputs of numerical models. A complementary approach currently being investigated derives spatiotemporal patterns in Greenland Ice Sheet (GrIS) surface densities directly from satellite measurements. Here we will present pan-GrIS and decade-long monthly timeseries of both GrIS near-surface density and vertical density heterogeneity from radar altimetry (i.e., ESA CryoSat-2, CNES/ISRO SARAL) and passive microwave (i.e., ESA SMOS) satellite measurements. Near-surface densities are derived from the strength of CryoSat-2 (Ku-band) and SARAL (Ka-band) surface echoes and critiqued against both in-situ measurements and state-of-the-art numerical models (e.g., MAR and HIRHAM5). Once constrained, altimetry-derived near-surface densities are used to precondition a forward model of SMOS brightness temperatures, whose discrepancies relative to observations provide a proxy for vertical density heterogeneity in the GrIS firn pack. Our 10+ year observational records reveal a complex spatiotemporal evolution of the GrIS near-surface. Whereas climate models suggest constant near-surface densities in the GrIS interior, observations reveal a denser and more dynamic environment actively responding to known climatic events. At the same time, insight into vertical density heterogeneity from combined altimetry and passive microwave observations highlights progressive firn pore space renewal after strong melt events (e.g., 2012, 2019) and how it changes across the GrIS.
Overall, these results broaden our understanding of the observations we can make with satellite remote sensing in polar regions and represent new comparative datasets relevant for future model developments as well as GrIS mass balance estimates.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: CryoSat-2 - 15 Years of Successful Flight Operations

Authors: Jens Lerch, Giuseppe Albini, Angel Fernandez, Giuseppe Romanelli, Filippo Inno, Anne Hartmann, David Patterson, Thomas Demeillers, Aybike Kolmas, Marco Bruno, Alessandro Latino
Affiliations: ESA - European Space Agency, Telespazio Germany GmbH, Serco Services GmbH, Solenix Engineering GmbH
The CryoSat-2 spacecraft is one of the Opportunity missions of the ESA Living Planet programme, operated by the European Space Operations Centre (ESOC) in Darmstadt, Germany. The mission objective is to provide global, continuous measurements of ice-cover variations on the polar caps and continental ice sheets, plus other areas of land ice; this is achieved thanks to its multi-mode interferometric radar altimeter, SIRAL. Additionally, CryoSat-2 data are being used to support the oceanography community with respect to long-term sea-level monitoring. More than 15 years after its launch from Baikonur Cosmodrome in Kazakhstan on 8 April 2010 at 13:57 UTC, the CryoSat-2 satellite is in very good condition, and both the platform and its payload, composed of SIRAL (SAR Interferometer Radar Altimeter) and DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite), have performed remarkably well, even long after the end of the nominal mission of 3.5 years. Overall, the mission has proven very successful and the spacecraft's overall performance very reliable. This poster presentation provides an overview of the CryoSat-2 status and a historical perspective on the flight operations of the CryoSat-2 mission, from launch and commissioning to the current date. Special operations in support of mission performance and characterisation, and in reaction to anomalies, will be presented. Despite the robustness of the spacecraft, flight operations have not always been smooth. The Flight Control Team has been challenged by multiple anomalies over the course of the years. Innovative solutions have had to be found to address spacecraft aging and degradation, such as the reconfiguration of the RCS thruster branch to isolate a leak, the refresh of EEPROM images to mitigate degradation due to aging, the reconfiguration of the MMFU, and the development of new software to counteract the degradation of the star trackers.
The paper describes the most important problems encountered during the mission and the solutions put in place to preserve a healthy satellite during the mission extension and beyond. The key event in the three years since the last Living Planet Symposium was the reconfiguration of the cold-gas reaction control system to the redundant thruster branch in November 2023, to isolate a small propellant leak present on the nominal branch. This operation was successfully executed, the leakage was stopped, and sufficient propellant to maintain the ground track, control the attitude and perform collision avoidance manoeuvres is now expected to be available beyond 2030. In order to support the further continuation of the mission, a number of changes to the ground segment have been undertaken, including the recent adoption of a lights-out operations approach that discontinued the presence of operators on console 365 days per year, and the full integration of the CryoSat-2 and Swarm spacecraft operations teams into a single service.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Performances of the CRYO2ICE tandem in the new Arctic configuration

Authors: Javier Sanchez Martin, Jens Lerch, Angel Fernandez, Tommaso Parrinello
Affiliations: ESA, ESA, Telespazio Germany GmbH
The CRYO2ICE project aims to optimise observation coincidences between the radar altimeter on ESA’s CryoSat-2 and the laser altimeter on NASA’s ICESat-2 missions. Despite their significantly different orbital altitudes, the 19:20 resonance between their orbital periods produces overlapping tracks with a periodicity of 1.33 days. These tracks can extend up to distances of the order of a thousand kilometres along-track, enabling detailed coincident measurements. The geographical location of the coincidences is constrained to a relatively small orbital arc in the key revolution of the resonance cycle. By precisely adjusting the orbital phase of CryoSat-2, it is possible to fine-tune these spatial overlaps to specific regions of interest. This strategy defines the CRYO2ICE Arctic and Antarctic configurations, which maximise instrument overlap in the respective polar regions. Since July 2020, CryoSat-2 has been operated in this resonance-based orbit, starting in the Arctic configuration, and then switching to the Antarctic configuration in June 2022. In 2025, CryoSat-2 will return to the Arctic configuration. This requires another manoeuvre campaign to achieve the optimal orbital phase for maximising coincidences over the Northern Hemisphere. This work, presented at the Living Planet Symposium, describes the planning and execution of the 2025 manoeuvre campaign. It details the number of manoeuvres and the duration of the transition phase to reach the target orbital position. Additionally, we analyse the resulting Northern Hemisphere coverage and the spatial characteristics of the coincidences, accounting for effects such as the oscillation of the orbital plane inclination, and provide detailed maps illustrating the resulting overlap zones. Another important aspect is the temporal difference between observations. 
While CRYO2ICE achieves spatial overlap between the instruments' footprints, a time lag exists between their measurements, determined by the separation of the satellites’ orbital planes. The orbital plane alignment of the two satellites, in early 2025, offers a unique opportunity for near-simultaneous coincident observations. We will present the evolution of this temporal characteristic throughout the upcoming Arctic phase.
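The 19:20 resonance and the quoted coincidence periodicity can be checked with a back-of-the-envelope calculation. The orbital periods below are approximate assumed values (not mission ephemerides), so the result lands near, but not exactly at, the 1.33 days quoted in the abstract:

```python
# Rough check of the 19:20 CRYO2ICE resonance using assumed, approximate
# orbital periods: CryoSat-2 at ~717 km altitude, ICESat-2 at ~496 km.
t_cryosat2 = 99.16   # minutes per revolution (approximate)
t_icesat2 = 94.22    # minutes per revolution (approximate)

# A 19:20 resonance means 19 CryoSat-2 revolutions take about the same
# time as 20 ICESat-2 revolutions, so coincident tracks recur after:
repeat_minutes = 19 * t_cryosat2
repeat_days = repeat_minutes / (60 * 24)

print(f"period ratio: {t_cryosat2 / t_icesat2:.4f} (20/19 = {20 / 19:.4f})")
print(f"coincidence repeat: {repeat_days:.2f} days")  # ~1.3 days
```

With these assumed periods the ratio comes out within 0.1% of 20/19 and the repeat interval at roughly 1.3 days, consistent with the periodicity stated above.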
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Investigating the Influence of Cyclones on Sea Ice Thickness Variability in the New Arctic

Authors: Rachel Tilling, Linette Boisvert
Affiliations: NASA Goddard Space Flight Center, University of Maryland
The decline in Arctic sea ice extent over the past four decades has coincided with the Arctic warming at least two times faster than the rest of the planet. Since the late 1990s, sea ice has experienced an increased rate of extent decline, and transitioned to a more seasonal ice cover in a ‘New Arctic’ regime. Sea ice in the New Arctic is more susceptible to atmospheric and oceanic forcings and experiences large interannual variability in extent compared to the older ice cover of the 1980s-1990s. Episodic weather events like moisture intrusions and cyclones can immediately impact the sea ice state both dynamically and thermodynamically. However, the majority of studies exploring the response of sea ice to these events focus on sea ice extent rather than sea ice thickness. Here we produce year-round estimates of Arctic sea ice thickness by combining lidar altimetry data from NASA’s ICESat-2 satellite with ancillary snow loading data. Then, using a reanalysis-derived cyclone database, we investigate how individual cyclone events impact ice thickness. Firstly, the thickness changes of individual sea ice parcels will be analysed over the course of a cyclone using a Lagrangian sea ice parcel tracking database and data from coinciding ICESat-2 orbits. Secondly, we will explore the compound effects of cyclone events on monthly averaged sea ice thickness. By utilising the year-round measurement capabilities of ICESat-2, this will be the first time that summer sea ice data have been included in such an analysis.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: C.01.12 - POSTER - New science-driven Earth Observation Mission ideas – Are you ready to boost future satellite Earth Observation?

Are you ready to boost the future of Earth Observation with new science-driven Earth Observation Mission ideas? Ideas which potentially enable breakthrough science tackling future (2040+) needs and challenges? Do you have a new scientific mission idea at a very early stage before entering Phase 0, which may also include innovative technical aspects? Then this is your session. We are looking for mission ideas which combine scientific excellence with innovative technology to target scientific, environmental and societal challenges. The ideas should be in line with the new ESA EO Strategy “Earth Science in Action for Tomorrow’s World” (ESA Earth Observation Science Strategy 2040) within the action areas “Frontier Science and Discovery” and/or “Filling critical observation gaps: preparing for tomorrow starts today”.

Ideas can originate from former ESA activities, national agencies, international activities or any other initiatives.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: ERADICATE: A Mission Concept to Improve Food and Nutritional Security Through an Evaluation of Remote Airborne Detection of Invasive Agricultural Threats

Authors: Katie Gold, Yu Jiang, Ryan Pavlick, Rocio Calderon, Nik Cunniffe, Hakim Weatherspoon, Phil Townsend, Alyssa Whitcraft
Affiliations: Cornell University, NASA Headquarters, Cambridge University, University of Wisconsin Madison, University of Maryland College Park
Agricultural pests and diseases (biotic stressors) damage ~20% of the global harvest annually, resulting in losses and mitigation spending upwards of US$250B. Climate change exacerbates this crisis by accelerating the spread and virulence of biotic stressors and increasing their geographic range, potentially reducing global crop yields by up to a further 25% by the end of the century. This comes at a time when population growth is projected to increase food demand by 35–56% by 2050. To mitigate these challenges, a more strategic and scalable approach to crop protection is needed. We are developing ERADICATE (an Evaluation of Remote Airborne Detection of Invasive Agricultural Threats) as an innovative, science- and application-driven mission with high societal relevance, to develop imaging spectroscopy as a cornerstone technology for biotic stress detection and management. Imaging spectroscopy is uniquely suited for scalable detection of crop stress, offering rich remote sensing data to uncover fundamental pathomechanisms. Biotic stress alters how solar radiation interacts with leaves, the canopy, and the plant energy balance, changes that are capturable in visible to shortwave infrared (VSWIR, 400-2400 nm) hyperspectral imagery both pre- and post-symptomatically. Airborne imaging spectroscopy can even discriminate between visually identical abiotic and biotic stresses (e.g., wilt, discoloring, chlorosis) due to their divergent underlying spectral characteristics. Case studies have further shown that imaging spectroscopy captures differences that can be linked to pathogen lifestyles (e.g., biotrophic or necrotrophic) and isolate fitness, as well as host genotype. However, the biological mechanisms underlying imaging spectroscopy-based detection are poorly understood, as is the extent of evolutionary conservation of spectroscopically discernible mechanisms across pathosystems.
This is significant because the more evolutionarily conserved a process is, the more likely it is that spectral features associated with that process will be conserved too, and thus more widely useful. In contrast, abiotic stressors such as nitrogen deficiency produce observable differences by well-known mechanisms that can be readily characterized using remote sensing. Absent basic mechanistic understanding, separate ad hoc models must be developed for each biotic pathosystem, a bottleneck that has kept imaging spectroscopy at the margins of agricultural pest and disease surveillance despite its extraordinary potential. Discerning the underlying processes that shape the spectral characteristics of biotic stress will allow us to quickly develop methods for accurate detection of pests and diseases and reliable differentiation from non-biotic stresses, methods that can be employed with forthcoming space-based hyperspectral instruments such as those to launch with NASA’s SBG and ESA’s CHIME missions. These advancements, combined with the growing body of evidence that imaging spectroscopy can reveal both momentary plant status and plant evolutionary history have the potential to significantly impact our understanding of, and ability to respond to, plant disease and pest outbreaks, ultimately promoting more sustainable and productive agricultural practices. ERADICATE aims to harness imaging spectroscopy to develop a pioneering biotic stress surveillance system that will enable data-driven agricultural decision-making, reduce pesticide use, and enhance climate resilience in agriculture. The key scientific question driving our mission concept is: What are the mechanistic underpinnings of biotic stress detection using imaging spectroscopy, and to what extent are these signals conserved across pathosystems? 
We hypothesize that the evolutionary conservation of defense responses to biotic stress results in shared spectral features detectable across geographies, crops, and scales. We envision a mission that explores the extent to which biological commonalities between pathosystems translate into spectral commonalities, enabling the development of transferable detection models. However, ERADICATE is both science- and application-driven, with driving questions and objectives designed to produce a novel Earth observation capability that reveals basic pathomechanisms while informing sustainable crop management and agricultural policy development. Our mission concept includes objectives relating to mapping pest and disease outbreaks in U.S. agricultural systems; constructing a spectral taxonomy to assess similarities and differences between pathosystems across regions and crops; developing hybrid radiative transfer and machine learning models that integrate imaging spectroscopy with epidemiological models to assess risk and map loss and damage potential; and a custom-built computational “clean room” based on distributed edge-cloud computing to ensure cooperating stakeholder privacy. ERADICATE will help farmers leverage space agency Earth observations to address critical sustainability challenges, fostering more productive, adaptive, and sustainable agricultural systems. By combining airborne remote sensing data with domain-specific principles, ERADICATE will produce a robust, data-driven paradigm for actionable agricultural management. Expected outcomes include improved risk monitoring and dispersal tracking of biotic stressors, reduced pesticide use, enhanced crop longevity and quality, and higher agricultural yields.
Our mission concept aligns with NASA Earth Science priorities, particularly the Surface Biology and Geology (SBG) satellite mission outlined in the 2017 Decadal Survey and new ESA EO Strategy “Earth Science in Action for Tomorrow's World” (ESA Earth Observation Science Strategy 2040) within the action areas “Frontier Science and Discovery.” By addressing critical observation gaps, ERADICATE contributes to a broader mission of precision agriculture and advances the application of remote sensing in tackling global food and nutritional security challenges.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: PLUTO: a knowledge platform to boost remote sensing processing research and development

Authors: Julien Osman, Mickael Savinaud, Alexandre Fiche, Axelle Pochet, Frederic Ligeard, Yannick Tanguy, Guillaume
Affiliations: Cs Group - France
PLUTO is a CNES initiative that aims to create synergy in the scientific community and promote innovation in various application fields by gathering remote sensing tools, useful resources and users in the same place. Considering the great diversity of software available for remote sensing image processing, CNES wanted a place to inventory those tools, show their basic and advanced usages, and showcase how one can bring them to work together for various purposes. The portal presents four main sections:
- A “Getting Started” section to help newcomers understand the different steps of satellite imagery processing and orient visitors to related tools in the catalog. It is organized into subsections on subjects such as “Data access”, “Preprocessing” or “Visualization tools”.
- A catalog of tools for various applications in spatial imagery, from pre-processing to specific use cases such as feature extraction. Most of the tools are open source, as PLUTO promotes free software and shares available knowledge with a broad range of users. Confirmed scientists and developers, as well as newcomers, interns or simply curious people, can find appropriate resources on the portal. Each tool comes with a short description and key information such as its license and maturity (from prototype to maintained project), and redirects users to related resources such as the source code repository, documentation, tutorials, contacts or similar tools.
- A “Tutorials” section proposing a large set of tutorials and tips related to the tools listed in the catalog. Where learning material already exists online, PLUTO guides users to it; where quality tutorials are not available, PLUTO proposes its own notebooks on the subject.
- A “How to contribute” section: because PLUTO is not just a catalog but a participative knowledge platform, we strongly encourage users to share their own tools and propose contributions to the development team. This section assists users in the contribution process so the knowledge platform can grow and stay up to date.
PLUTO will be open to the public in early 2025.
Add to Google Calendar

Wednesday 25 June 17:45 - 18:30 (ESA Agora)

Session: F.01.07 LPS25 Sustainable Horizons Award

Join us for an inspiring Agora session at the LPS, where the three shortlisted candidates for the LPS Sustainable Horizons Award will present their groundbreaking projects. This award, organised by the Young ESA Environmental Committee, celebrates the innovative use of Earth Observation technologies to tackle global challenges, enhance societal well-being, and promote the sustainable management of natural resources and ecosystems. The session will also feature a keynote speech by the Director of Earth Observation Programmes, Simonetta Cheli, who will share insights on the importance of sustainability and the role of Earth Observation in achieving it. At the end of the presentations, the winner of the LPS Sustainable Horizons Award will be announced, recognizing the most innovative and impactful project among the finalists.

Speakers:


  • Marta Salieri - ESA
  • Lena Eberhard, Toyah Eglin, Reuben Langdon and Ricarda Leske (ESA Environmental Committee)
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: A.10.02 - POSTER - Geodetic satellite missions and their applications

Space-based geodetic techniques provide extremely accurate observations of a vast range of Earth system parameters, such as the gravitational field, the shape of the Earth, the reference frame, as well as navigation and positioning. The past 20 years have seen rapid development in the field, including highly successful endeavours such as ESA's GOCE mission, and the GRACE and GRACE-FO Third-Party Missions. As a direct consequence, the research and application community has developed a vast range of ideas building on the achievements of this golden age of space geodesy. This session deals with state-of-the-art results from the current set of geodetic satellite missions (e.g. GOCE, Swarm, GRACE, GRACE-FO), as well as plans and ideas for new initiatives in the field. This session invites contributions related to applications of geodetic satellite observations including, among others, gravimetry, altimetry, GNSS/VLBI, radio occultation, InSAR and GNSS reflectometry techniques. Applications of these missions go beyond traditional geodetic science, extending to climate, hydrology, solid Earth, cryosphere, oceanography and thermosphere science. Contributions relevant to the above-mentioned scientific disciplines, as well as operational applications of geodetic satellite missions, are also invited.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: A New Catalog of InSAR Derived Source Parameters for Global Seismicity, Generated via Automated Bayesian Inversion.

Authors: John Condon, Dr John Elliott, Dr Tim Craig, Susanna Ebmeier, Dr Stuart Nippress
Affiliations: University Of Leeds, AWE Blacknest
Seismic waveform-based inversions have provided source parameter catalogs that are used widely throughout the geophysics community across a range of disciplines, such as location calibration and validation of Earth structure models. Traditionally, source parameters are inverted for by waveform fitting of seismic waves at low frequencies. The accuracy of seismically derived event locations and depths depends upon the geographical station distribution and seismic travel-time models. InSAR, in contrast, provides near-global land coverage and can deliver robust event location and depth estimates in areas where seismic networks are sparse. We present a new automated scheme for the inversion of co-seismic interferograms for earthquake source parameters: 'GABIS' (Geodetic Automated Bayesian Inversion Software). We use pre-processed Sentinel-1 C-band co-seismic interferograms from the COMET-LiCS earthquake catalog. This scheme uses GACOS atmospheric corrections and a Bayesian inversion approach that provides a statistically meaningful estimate for each of the inverted source parameters. The inversion uses a uniform-slip Okada dislocation model in an elastic half-space as the forward model. We present a new dataset generated by applying our automated source inversion approach to events currently in the COMET-LiCS earthquake catalog. We compare our results against seismically derived source parameters for the same events, evaluating regional and systematic biases in source parameters and location estimates. We also compare against other recent non-Bayesian Sentinel-1-based event catalogs in the same manner. This dataset will be made available to the wider community, in order to supplement current 3-D global velocity model ground truth (GT) location validation sets, which tend to have a large Northern Hemisphere bias and are poorly distributed globally.
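The Bayesian idea behind this kind of geodetic inversion can be illustrated with a deliberately simplified sketch: invert noisy synthetic line-of-sight displacements for a single source depth via a grid-based posterior. The point-source forward model, noise level and all parameter values below are illustrative assumptions only, not the Okada-based forward model or sampler used by GABIS:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the forward model: surface displacement from a buried
# point source of fixed strength at depth d (an illustrative analogue,
# NOT the uniform-slip Okada model used in the actual inversion).
def forward(depth, x):
    return depth / (x**2 + depth**2) ** 1.5

x = np.linspace(-20.0, 20.0, 81)      # km along a profile
d_true = 5.0                          # km, synthetic "true" depth
sigma = 0.002                         # assumed Gaussian noise level
data = forward(d_true, x) + rng.normal(0.0, sigma, x.size)

# Grid-based Bayesian posterior with a flat prior on depth:
# p(d | data) ∝ exp(-0.5 * sum((data - forward(d))^2) / sigma^2)
depths = np.linspace(1.0, 15.0, 281)
log_like = np.array([-0.5 * np.sum((data - forward(d, x)) ** 2) / sigma**2
                     for d in depths])
post = np.exp(log_like - log_like.max())
post /= post.sum() * (depths[1] - depths[0])   # normalise the density

d_map = depths[np.argmax(post)]
print(f"MAP depth: {d_map:.2f} km (true {d_true} km)")
```

Unlike a least-squares point estimate, the normalised posterior gives a full uncertainty distribution for the parameter, which is the "statistically meaningful estimate" the abstract refers to; the real problem simply has many more parameters and a physical forward model.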
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: Towards a deep learning approach for improved phase unwrapping for rapid deformation and isolated regions

Authors: Eilish O'Grady, Andy Hooper, David Hogg, Dr Matthw Gaddes
Affiliations: University Of Leeds
Interferometric synthetic aperture radar (InSAR) is a valuable technique which can be used to monitor natural hazards through surface deformation (Lazecky et al., 2020). Wrapped interferograms show the phase difference between two co-registered SAR images taken at different times, recorded modulo 2π. This results in ambiguity in the number of whole phase cycles (the wrap count) between neighbouring pixels. Phase unwrapping is therefore an essential process to recover a continuous scale of relative deformation across the interferogram, removing this ambiguity. The process is described by Equation 1:

Φ(i,j) = Ψ(i,j) + 2π k(i,j)    (1)

where Φ(i,j) is the unwrapped phase, Ψ(i,j) is the wrapped phase and k(i,j) is the integer wrap count at position (i,j). However, the problem is ill-posed and research to improve phase unwrapping continues. The problem's ill-posed nature necessitates assumptions. A commonly applied assumption is the Itoh condition (Chen and Zebker, 2001), which states that the absolute phase difference between neighbouring pixels is no greater than one half phase cycle, or π radians. This assumes sufficient sampling to account for rapid deformation events, and it can also be broken in the presence of noise. Locations where the condition does not hold indicate discontinuities. Classic unwrapping techniques have generally sought to return the best unwrapped solution by minimising the difference between wrapped and unwrapped phase gradients raised to some penalisation power (Ghiglia and Romero, 1996; Chen and Zebker, 2001). Additional information is often used to guide the integration pathway. Branch-cut maps mark connections between residues, blocking the integration pathway and minimising the likelihood of error propagation. Residues are locations where integration around a loop of pixels does not equal 0, indicating that the Itoh condition does not hold (Goldstein et al., 1988).
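Equation 1 and the Itoh condition can be made concrete with a minimal 1-D sketch on synthetic data (illustrative values only, not code from the presented method): when neighbouring samples differ by less than π, integrating the re-wrapped phase gradients recovers the unwrapped phase, and the integer wrap count of Equation 1 falls out directly.

```python
import numpy as np

# Synthetic unwrapped phase ramp Φ, sampled finely enough that the
# Itoh condition |ΔΨ| <= π holds between neighbouring samples.
phi_true = np.linspace(0.0, 12.0, 200)        # unwrapped phase Φ
psi = np.angle(np.exp(1j * phi_true))         # wrapped phase Ψ in (-π, π]

# Re-wrap the sample-to-sample gradients and check the Itoh condition.
grad = np.angle(np.exp(1j * np.diff(psi)))
assert np.all(np.abs(grad) <= np.pi)

# Integrate the re-wrapped gradients to recover Φ (up to a constant,
# which here is fixed by the known first sample).
phi_unwrapped = np.concatenate(([psi[0]], psi[0] + np.cumsum(grad)))

# Equation 1: Φ = Ψ + 2πk, so the integer wrap count k is recoverable.
k = np.round((phi_unwrapped - psi) / (2 * np.pi))
assert np.allclose(phi_unwrapped, psi + 2 * np.pi * k)
assert np.allclose(phi_unwrapped, phi_true)
```

In 2-D interferograms the same integration becomes path-dependent wherever the Itoh condition fails, which is exactly why residues, branch cuts and the other strategies discussed here are needed.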
Other approaches use the quality of pixels to guide unwrapping (Zhong et al., 2014), or a statistical-cost network-flow algorithm (Chen and Zebker, 2001). In the latter approach, a cost is associated with the difference between wrapped and unwrapped phase gradients, based upon a probability density function. However, regions of low coherence can remain unwrapped, and there are often difficulties when unwrapping high fringe densities. Given the success of deep learning across multiple other fields (LeCun et al., 2015), it is unsurprising that its potential application to phase unwrapping has been explored. Generally, a supervised approach to training is taken, with proposed methodologies broadly split into one-step and multi-step approaches. One-step approaches go directly from wrapped phase to unwrapped phase, often utilising models based upon generative adversarial networks (GANs). Proposals include PU-GAN (Zhou, Pascazio, et al., 2022) and pu-pix2pix (Zhang et al., 2023), but these face issues with generalisation and smoothing. Multi-step approaches instead use deep learning models to target outputs which aid classical unwrapping methods. This includes optimising the arrangement of branch cuts (Zhou, Yu, et al., 2022), finding the wrap count (Spoorthi et al., 2019), isolating discontinuity locations (Wu et al., 2022) and finding phase or wrap count gradients (Chen, He, et al., 2023; Sica et al., 2022). When isolating wrap count gradients, the problem remains that there is no explicit identification of locations likely to have an absolute wrap count change between neighbouring pixels greater than one. Another issue is that most existing phase unwrapping methods assume noisy, low-coherence regions are small (Gao et al., 2023).
Such techniques can struggle to fully unwrap an interferogram in which coherent phase regions are separated by large noise patches, and few studies have tackled such cases (Gao et al., 2023). Here we present a deep learning method which builds upon others' work targeting wrap-count gradient classification for phase unwrapping. We trained a Brownian Bridge Diffusion Model (BBDM) (Bo et al., 2023) using a synthetic dataset (Gaddes et al., 2019) to 'in-paint' pre-defined masked regions of the wrapped phase. By doing so, we replace noise with more meaningful information for phase unwrapping, based on the surrounding phase values, to connect isolated continuous wrapped-phase regions prior to unwrapping. In synthetic data test cases, the BBDM is able to reproduce sharp fringe edges and atmospheric-like patterns to in-paint between regions separated by large noise patches, which will in turn aid phase unwrapping. A second deep learning model explicitly identifies likely discontinuity locations through the inclusion of an additional class when labelling wrap-count gradient values. We trained a multi-output supervised model using a synthetic dataset, capable of classifying the x and y gradients of wrap count into four classes: 0, ±1, and a '2' class for the explicit identification of cases where absolute changes greater than one wrap count have occurred. Output prediction certainty levels, combined with the classification predictions, are used to guide the order of unwrapping and to return the ambiguity number of each pixel whilst minimising the risk of error propagation. The unwrapped phase is then calculated per Equation 1.
References:
Bo, L, X Kaitao, L Bin, and L Yu-Kun (2023). "BBDM: Image-to-image Translation with Brownian Bridge Diffusion Models". arXiv: 2205.07680 [cs.CV]. url: https://arxiv.org/abs/2205.07680.
Chen, C and H Zebker (2001). "Two-dimensional phase unwrapping with use of statistical models for cost functions in nonlinear optimization". In: J. Opt. Soc. Am. A 18.2, pp. 338–351. doi: 10.1364/JOSAA.18.000338.
Chen, X, C He, and Y Huang (2023). "An error distribution-related function-trained two-dimensional InSAR phase unwrapping method via U-GauNet". In: Signal, Image and Video Processing. doi: 10.1007/s11760-022-02482-y.
Gaddes, ME, A Hooper, and M Bagnardi (2019). "Using machine learning to automatically detect volcanic unrest in a time series of interferograms". In: Journal of Geophysical Research: Solid Earth 124.11, pp. 12304–12322. doi: 10.1029/2019JB017519.
Gao, J, H Jiang, Z Sun, R Wang, and Y Han (2023). "A Parallel InSAR Phase Unwrapping Method Based on Separated Continuous Regions". In: Remote Sensing 15.5. doi: 10.3390/rs15051370. url: https://www.mdpi.com/2072-4292/15/5/1370.
Ghiglia, D and L Romero (1996). "Minimum Lp-norm two-dimensional phase unwrapping". In: Journal of the Optical Society of America A 13.10, pp. 1999–2013. doi: 10.1364/JOSAA.13.001999.
Goldstein, R, H Zebker, and C Werner (1988). "Satellite radar interferometry: Two-dimensional phase unwrapping". In: Radio Science 23.4, pp. 713–720. doi: 10.1029/RS023i004p00713.
Lazecký, M, K Spaans, PJ González, Y Maghsoudi, Y Morishita, F Albino, J Elliott, N Greenall, E Hatton, A Hooper, and D Juncu (2020). "LiCSAR: An automatic InSAR tool for measuring and monitoring tectonic and volcanic activity". In: Remote Sensing 12.15, p. 2430. doi: 10.3390/rs12152430.
LeCun, Y, Y Bengio, and G Hinton (2015). "Deep Learning". In: Nature 521.7553, pp. 436–444. doi: 10.1038/nature14539.
Sica, F, F Calvanese, G Scarpa, and P Rizzoli (2022). "A CNN-Based Coherence-Driven Approach for InSAR Phase Unwrapping". In: IEEE Geoscience and Remote Sensing Letters 19, pp. 1–5. doi: 10.1109/LGRS.2020.3029565.
Spoorthi, G, S Gorthi, and RK Sai Subrahmanyam Gorthi (2019). "PhaseNet: A Deep Convolutional Neural Network for Two-Dimensional Phase Unwrapping". In: IEEE Signal Processing Letters 26.1, pp. 54–58. doi: 10.1109/LSP.2018.2879184.
Wu, Z, T Wang, Y Wang, and D Ge (2022). "Deep-learning-based phase discontinuity prediction for two-dimensional phase unwrapping of SAR interferograms". In: IEEE Transactions on Geoscience and Remote Sensing 60, pp. 1–16. doi: 10.1109/TGRS.2021.3121906.
Zhang, L, G Huang, Y Li, S Yang, L Lu, and W Huo (2023). "A Robust InSAR Phase Unwrapping Method via Improving the pix2pix Network". In: Remote Sensing 15.19. doi: 10.3390/rs15194885. url: https://www.mdpi.com/2072-4292/15/19/4885.
Zhong, H, J Tang, S Zhang, and X Zhang (2014). "A Quality-Guided and Local Minimum Discontinuity Based Phase Unwrapping Algorithm for InSAR/InSAS Interferograms". In: IEEE Geoscience and Remote Sensing Letters 11.1, pp. 215–219. doi: 10.1109/LGRS.2013.2252880.
Zhou, L, V Pascazio, H Yu, and M Xing (2022). "PU-GAN: A One-Step 2-D InSAR Phase Unwrapping Based on Conditional Generative Adversarial Network". In: IEEE Transactions on Geoscience and Remote Sensing, p. 1. doi: 10.1109/TGRS.2022.3145342.
Zhou, L, H Yu, Y Lan, and M Xing (2022). "Deep Learning-Based Branch-Cut Method for InSAR Two-Dimensional Phase Unwrapping". In: IEEE Transactions on Geoscience and Remote Sensing 60, pp. 1–15. doi: 10.1109/TGRS.2021.3099997.
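The final reconstruction step can be sketched in one dimension, assuming Equation 1 takes the usual form phi_unw = phi_wrapped + 2*pi*k, with k the per-pixel ambiguity number. The trained classifiers are not reproduced here; their wrap-count gradient labels are emulated by the classical Itoh condition on a clean synthetic ramp:

```python
import numpy as np

# 1-D sketch: recover unwrapped phase from the wrapped phase plus integrated
# wrap-count gradients (the ambiguity numbers k), i.e. phi_unw = phi_w + 2*pi*k.
# The gradient labels (-1, 0, +1) stand in for the model's classifications.

def wrap(phase):
    """Wrap phase into (-pi, pi]."""
    return (phase + np.pi) % (2 * np.pi) - np.pi

def unwrap_from_gradients(phi_wrapped, k_gradients):
    """Integrate wrap-count gradients into per-pixel ambiguity numbers k."""
    k = np.concatenate([[0], np.cumsum(k_gradients)])  # reference pixel: k = 0
    return phi_wrapped + 2 * np.pi * k

# Synthetic smooth ramp, wrapped, then gradient labels recovered via the
# Itoh condition (true neighbour-to-neighbour gradient smaller than pi).
true_phase = np.linspace(0, 12 * np.pi, 200)
phi_w = wrap(true_phase)
dphi = np.diff(phi_w)
k_grad = np.zeros_like(dphi, dtype=int)
k_grad[dphi < -np.pi] = 1    # a wrap occurred: add one fringe
k_grad[dphi > np.pi] = -1
phi_unw = unwrap_from_gradients(phi_w, k_grad)
print(np.allclose(phi_unw, true_phase))  # True (ramp satisfies Itoh condition)
```

In the real problem the noise patches break the Itoh condition, which is exactly where the classified (and in-painted) gradients replace this simple rule.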

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: A decade of temporal gravity observed by the ESA Swarm satellites

Authors: João Encarnação, Daniel Arnold, Ales Bezdek, Christoph Dahle, Junyi Guo, Jose van den IJssel, Adrian Jaeggi, Jaroslav Klokocnik, Sandro Krauss, Torsten Mayer-Guerr, Ulrich Meyer, Josef Sebera, CK Shum, Pieter Visser, Yu Zhang
Affiliations: TU Delft, Astronomical Institute of the University of Bern, Astronomical Institute of the Czech Academy of Sciences, GFZ German Research Centre for Geosciences, School of Earth Sciences of the Ohio State University, Institute of Geodesy of the Graz University of Technology
Since 2014, we have exploited GPS data collected by the ESA Swarm satellites to produce monthly gravity field models representing geophysical mass transport processes at spatial resolutions larger than 1,500 km (half-wavelength). In doing so, we rely entirely on GPS data and avoid assumptions regarding temporal and spatial correlations. The resulting models are thus independent of GRACE and GRACE-FO data and therefore continue the monitoring of geophysical processes in parallel with the ll-SST technique and during gaps in its data records. We expect to continue providing these high-accuracy hl-SST gravity field solutions well into the future. Depending on the continued good health of the Swarm satellites, they can be used alongside or as validation for the ll-SST models, or as a substitute in case of a data gap between the GRACE-FO and GRACE-C/MAGIC mission periods. Our team comes from the Astronomical Institute of the University of Bern, the Astronomical Institute of the Czech Academy of Sciences, the Delft University of Technology, the Institute of Geodesy of the Graz University of Technology, and the School of Earth Sciences of the Ohio State University. We produce individual gravity field models following independent gravity field inversion strategies. From these, we produce combined gravity field solutions using weights derived with Variance Component Estimation by the International Combination Service for Time-variable Gravity Fields (COST-G). In doing so, our main goal is to ensure that the models are unbiased. We publish the models every 3 months, publicly accessible at ESA's Swarm Data Access server (https://swarm-diss.eo.esa.int) as well as at the International Centre for Global Earth Models (http://icgem.gfz-potsdam.de/series/02_COST-G/Swarm), with the support of the European Space Agency and the Swarm Data, Innovation, and Science Cluster.
In this contribution, we highlight the capability of the models produced from the Swarm GPS data to resolve large-scale geophysical signals over hydrological and cryospheric basins. We quantify the errors in the Swarm models by means of the signal variability over the oceans and assess them by comparing with GRACE and GRACE-FO solutions. We demonstrate that the team has successfully addressed the challenges of estimating gravity field models derived from GPS data during the occurrence of high solar activity, which has become more frequent in recent years.
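The Variance Component Estimation weighting behind such a combination can be sketched in miniature. The fixed-point iteration and the synthetic "solutions" below are illustrative assumptions, not the COST-G implementation:

```python
import numpy as np

# Minimal sketch of combining individual gravity-field solutions with weights
# from Variance Component Estimation (VCE). Each "solution" is a synthetic
# vector standing in for spherical-harmonic coefficient corrections.

rng = np.random.default_rng(42)
truth = rng.normal(size=50)                      # hypothetical coefficient vector
noise_levels = [0.5, 1.0, 2.0]                   # assumed per-centre noise levels
solutions = [truth + rng.normal(scale=s, size=truth.size) for s in noise_levels]

def vce_combine(solutions, n_iter=20):
    """Iteratively re-estimate variance components and the weighted mean."""
    sigma2 = np.ones(len(solutions))
    for _ in range(n_iter):
        w = 1.0 / sigma2
        combined = sum(wi * s for wi, s in zip(w, solutions)) / w.sum()
        for i, s in enumerate(solutions):
            r = s - combined
            # redundancy-adjusted residual variance of contribution i
            redundancy = len(s) * (1.0 - w[i] / w.sum())
            sigma2[i] = (r @ r) / redundancy
    return combined, sigma2

combined, sigma2 = vce_combine(solutions)
# The noisiest contribution receives the largest variance component,
# and therefore the smallest weight in the combination.
print(np.argmax(sigma2))
```

The redundancy factor accounts for each solution's own influence on the weighted mean, so the variance components are approximately unbiased at convergence.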

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: Detectability of Gravity Signals Related to Mantle Convection for Future Gravity Satellite Missions

Authors: Yixiati Dilixiati, Nico Sneeuw, Bart Root
Affiliations: Institute of Geodesy, University of Stuttgart, Department of Space Engineering, Delft University of Technology
The GRACE and GRACE-FO missions have provided over 20 years of time-variable gravity field data which have been utilized in various geoscientific disciplines. Numerous research topics have revealed successful findings, including the mass balance of the major ice sheets, sea level rise, groundwater depletion, and co- and post-seismic gravity changes associated with earthquakes. However, these novel research findings are mainly related to the surface or shallow parts of the Earth. There is no doubt that mass redistribution also occurs in the deeper interior of the Earth, e.g., in the mantle. Mass transport in the mantle may affect the time-variable gravity field at decadal time scales. On the other hand, gravity field signals coming from mantle convection can give insight into mantle viscosity, which is a critical parameter for geodynamic processes of the Earth's interior. This study addresses the challenges associated with detecting gravity signals resulting from mantle convection. To this end, we carry out simulations together with geodynamic modeling. The simulation is designed as a closed-loop simulation which provides synthetic observables with the MAGIC/NGGM constellation. We present results on the detectability of mantle signals using this simulation setup, incorporating realistic instrument noise and background model noise. Our findings indicate that, for decade-long time-variable gravity fields, mantle signals exceed the instrumental noise level. Moreover, observing mantle convection through gravity variations appears feasible, provided that the background models achieve sufficient accuracy.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: CAN SENTINEL-1 ALONG-TRACK MEASUREMENTS IN THE EARTH REFERENCE FRAME SUBSTITUTE MISSING GNSS DATA FOR STRAIN MAPPING?

Authors: Milan Lazecky, Andy Hooper, Muhammet Nergizci, Dehua Wang, Professor Tim Wright
Affiliations: University Of Leeds
The vast amount of Sentinel-1 data over the last decade and the high accuracy of related data, such as precise ephemerides, allow exploring measurements of solid Earth surface dynamics in a global reference frame. We previously reported 2-sigma errors better than 4 mm/year for large-scale northward motion in the Alpine-Himalayan Belt, measured using along-track interferometry within Sentinel-1 burst overlaps and averaged over large 250x250 km regions. The data were processed using the COMET LiCSAR system and corrected for solid Earth tides and the reference ionosphere using the IRI model. The measurement accuracy allowed us to identify a mean offset of the along-track satellite position in the updated ephemerides (August 2020) with a 2-sigma of 4 mm. With improved processing routines and auxiliary data, such as a GNSS-based ionosphere model (CODE GIM), we have further decreased the error of the estimated tectonic plate motion velocity in this region within the global reference frame of ITRF2014, in both directions, after decomposing overlapping data from both ascending and descending tracks; however, our northward velocity estimates still deviate from averaged GNSS velocities by a median offset of almost -8 mm/year. This bias remains under investigation but can be corrected. Here we will present results and challenges of the next iteration of this effort, with a cleaner updated dataset primarily over the Alpine-Himalayan Belt, and compare velocities at selected individual burst overlaps with available GNSS data. The aim is to evaluate the use of such InSAR measurements in regions without GNSS measurements as constraints for large-scale strain rate models. This is relevant for surface strain mapping of tectonic plate boundaries. Additionally, we will elaborate on a secondary objective of data-driven ionosphere mapping, since the GNSS-based model is limited in a) temporal sampling and b) coverage of GNSS stations, and c) is generalised towards an assumed constant height of the peak electron content.
Although the model can be improved by assimilation with IRI, we instead attempt to estimate the ionospheric gradient from our data using an intensity cross-correlation technique and compare it with the model values.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: Detection of Simulated Corner Reflector Displacement at Mårtsbo Geodetic Observatory Using Persistent Scatterer Interferometry

Authors: Faramarz Nilfouroushan, Nureldin Gido, Mathias Billenberg, Chrishan Puwakpitiya Gedara, Per-Anders Olsson
Affiliations: Lantmäteriet, University of Gävle, Former Employee, Lantmäteriet
Since 2020, Lantmäteriet (the Swedish mapping, cadastral, and land registration authority) has installed three active electronic transponders and various types of passive corner reflectors at different locations across Sweden. The plan is to further enhance the national geodetic infrastructure by installing additional passive reflectors co-located with permanent GNSS stations and absolute gravity points. These co-located GNSS stations and corner reflectors are expected to contribute to the development and maintenance of geodetic reference frames, as well as the validation of European ground motion services. Additionally, this colocation facilitates mapping relative ground motions estimated with InSAR into an absolute geodetic reference frame. In this contribution, we present the installation progress of these transponders and passive reflectors in recent years, alongside validation results for a corner reflector installed at the Mårtsbo Geodetic Observatory. To evaluate whether a vertical displacement of a corner reflector (CR) can be detected and estimated accurately using the Persistent Scatterer Interferometry (PSI) technique, we simulated it by placing a 9-mm thick disk beneath the CR and analysed the results using two software tools: SARPROZ and GECORIS. Our preliminary results are based on 90 SAR images of Sentinel-1A data acquired between January 2023 and May 2024 from three different ascending orbit tracks (ASC29, ASC102, and ASC175). These data were employed to generate Persistent Scatterer (PS) points and estimate Line of Sight (LOS) displacements. In July 2024, four months later, we removed the disk and introduced a second 9-mm displacement, and data processing and analysis are ongoing to evaluate how accurately this second displacement can be detected over different time intervals. The initial findings reveal that the corner reflector in Mårtsbo serves as a valuable geodetic device for monitoring ground motion. 
PSI analysis successfully detected the simulated displacement of the corner reflector in the line-of-sight (LOS) direction for each satellite orbit track, with an average difference of approximately 1.5 mm between the estimated and simulated displacement. The comparison between SARPROZ and GECORIS demonstrated strong agreement in the estimated LOS displacement for the corner reflector, indicating that both software tools provide accurate results when applying the PSI technique.
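A quick plausibility check for such an experiment is to project the simulated vertical step into the radar line of sight (LOS). The incidence angles below are assumed values typical of Sentinel-1 IW sub-swaths, not figures from the abstract; only the track names and the 9-mm step are taken from it:

```python
import numpy as np

# Project a 9 mm vertical displacement into the LOS for each (assumed)
# incidence angle: pure uplift maps to LOS as d_los = d_up * cos(incidence).

d_up = 9.0  # simulated vertical displacement of the corner reflector, mm

for track, inc_deg in [("ASC29", 33.0), ("ASC102", 39.0), ("ASC175", 44.0)]:
    d_los = d_up * np.cos(np.radians(inc_deg))
    print(f"{track}: expected LOS step ~ {d_los:.1f} mm")
```

With a ~1.5 mm average estimation error, steps of this size (roughly 6.5 to 7.5 mm in LOS) sit comfortably above the noise floor, consistent with the detection reported above.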

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: A Combined Sentinel-1 InSAR & GNSS Surface Velocity and Strain Rate Field for the Deforming Alpine-Himalayan Belt

Authors: John Elliott, Yasser Maghsoudi, Milan Lazecky, Dr Chris Rollins, Dr Jin Fang, Dehua Wang, Jess Payne, Dr Qi Ou, Professor Andrew Hooper, Professor Tim Wright
Affiliations: University Of Leeds, University of Exeter, GNS, University of Edinburgh
A continent-wide distribution of surface velocities and strain is essential for understanding the distribution of tectonic deformation and seismic hazard across the Alpine-Himalayan Belt. While previous studies have mainly used spatially sparse GNSS to measure deformation at such large scales, these approaches cannot adequately characterize shorter wavelength features of deformation. This study develops a data-driven approach using space-based Sentinel-1 radar observations (2016-2024) to improve estimates of surface velocities and strain rates for detecting the concentrations of ground deformation associated with active tectonic faulting, volcanism and land subsidence due to water extraction. We present a dataset of trans-national average surface velocities and time series at 1 km spatial resolution stretching from southern Europe to eastern China. This velocity field was produced using combined displacement time series from Sentinel-1 satellite radar tied to an AHB-wide GNSS reference frame compiled from previously published datasets. This integrated dataset yields continuous spatial deformation information and improves our understanding of seismic hazard and land surface processes across the actively deforming Alpine-Himalayan Belt, providing an important baseline measure of ground velocities for future studies. We will present the first trans-continental scale three-component velocity field of the Alpine-Himalayan Belt from satellite radar observations. Derived from these velocity fields, we will show extensive zones of strain accumulation associated with major faulting imaged across the continents. The horizontal velocity field is largely dominated by tectonic processes. In contrast, the vertical field is dominated by other surface processes, including large regions of fast and persistent subsidence due to water extraction from aquifers, and high-altitude regions prone to permafrost deformation.
We will present resulting time-series displacements estimated from the large dataset of Sentinel-1 interferograms. We will discuss the necessary corrections to the phase to account for tropospheric path delays, ionospheric disturbances, solid earth tides and relative plate motion. All the interferograms are openly available on the COMET-LiCS portal, and we will be making available the derived velocity fields (average line-of-sight and 3 component surface displacement rates), as well as strain rate estimates and coherence information. We will also be releasing the GNSS compilation created from over 100 published studies that we used in the referencing of the velocity field.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: GNSS data processed in PPP mode for the Estimation of the Local Solid Earth Tides: Improvements in Geophysical Investigations

Authors: Mr. Francesco Vespe, Mr. Jan Dousa, Mr. Carlo Doglioni, Mr. Pavel Vaclavovic, Mr. Jakub Nosek, Mr. Giovanni Nico, Ms. Eleonora Ficini, Olimpia Masci, Mr. Gianluca Sottili, Mr. Davide Zaccagnino, Mr. Luis Mendes, Mr. Francisco Amarillo-Fernandez
Affiliations: DIAN S.r.l., Italian Space Agency, Giuseppe Colombo Centre for Space Geodesy, Research Institute of Geodesy Topography and Cartography, Geodesy, Università La Sapienza, Earth Science, National Research Council of Italy, Institute for Applied Mathematics, National Research Council of Italy, Institute of Environmental Geology and Geoengineering, ESA/ESAC, Rhea Group for ESA, ESA/ESTEC
This work presents some results achieved in the frame of the TILDE project (Tidal Interplate Lithospheric Deformation of Earth). GNSS data collected at 98 stations, split into global and regional networks, have been used. The global network consists of 73 GNSS stations which have accumulated a stack of data 20 years long. The regional networks consist of 25 stations: 7 located in New Zealand, 1 in Kamčatka, and 17 in Italy, for which at least 3-year-long time series of data are available. All of these data have been processed in precise point positioning (PPP) mode in order to provide absolute displacements of the stations. This kind of observable is suitable for estimating Local Solid Earth Tides (LSET), i.e., models which depend on the geographical position of the selected sites. The LSET models are built by estimating Love and Shida numbers for each station and for each main tidal constituent. The objective is to investigate possible correlations between LSET and geological/geophysical events, such as tectonic plate movements, as well as earthquakes and volcanic activity/hazards. GNSS coordinates of the stations, expressed both in geocentric XYZ and local NEU reference frames, have been estimated in PPP mode, with sampling rates of 1 day and 3 hours. Different GNSS solutions have been generated according to the objectives of the project. The first was the background solution, in which the full IERS2010 tide model was applied. The second was obtained by switching off the entire tide model for all stations. The third, computed only for the global stations, switched off only the Long-Periodic Tides (LPT). This last solution was applied in order to lower the level of flicker noise in the GNSS time series when the Love and Shida numbers of the LPT had to be estimated. This analysis showed that there is a correlation between the latitude measured from the tectonic equator and Love numbers.
This confirms the theory that lunar tides contribute to triggering tectonic movements. An interesting result, relevant for the assessment and potential precursory signalling of seismic hazards, was the correlation found between the variation in time of the Love numbers of the diurnal (K1) and semi-diurnal (M2) tides and the occurrence of earthquakes near GNSS sites. For this purpose we selected global GNSS stations at a distance of <200 km from the epicentres of earthquake events. The investigation outlined that almost all of the seismic events are preceded by a drop in Love numbers, and this appears to hold regardless of the type of slip that occurred along the faults: compressive (i.e., reverse faulting), extensional (i.e., normal faulting), strike-slip, or a combination of them. This result could be explained by the rigidity of the crust/mantle, which plays a major role in triggering seismic events: for smaller values of the Love number, we indeed have a more rigid response of the Earth to tidal forcing. We plan to present further LSET solutions using GNSS coordinates estimated in kinematic PPP mode at a higher sampling rate of 30 s for GNSS sites close to earthquake occurrences, in order to obtain a more robust statistical corroboration of this evidence.
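The basic relation the Love-number estimation exploits can be shown with a one-line calculation: the radial solid-Earth tide displacement scales with the degree-2 Love number h2 as u_r = h2 * W / g, where W is the tide-generating potential at the site. The potential amplitude below is an assumed illustrative value, not project data:

```python
# Order-of-magnitude sketch of the radial body tide: u_r = h2 * W / g.
# h2 is the nominal degree-2 Love number; W_m2 is an assumed illustrative
# M2 tidal potential amplitude at a mid-latitude site (not from the abstract).

g = 9.81      # surface gravity, m/s^2
h2 = 0.61     # nominal degree-2 Love number
W_m2 = 1.37   # assumed M2 tide-generating potential amplitude, m^2/s^2

u_r = h2 * W_m2 / g
print(f"radial M2 tide amplitude ~ {u_r * 100:.1f} cm")
```

Decimetre-level body tides of this size are what make per-station Love numbers estimable from PPP displacement time series.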

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: Assessing GNSS reflectometry wind speed information over the ocean for NWP applications

Authors: Dr Sean Healy, Saleh Abdalla, Giovanna De Chiara, Christian Buckingham, Giuseppe Foti
Affiliations: ECMWF, National Oceanography Centre
Progress in operational numerical weather prediction (NWP) relies on both improvements in forecast models and better initial conditions, through the improved use of more high-quality in-situ and satellite observations. The potential value of any new observations in NWP needs to be assessed relative to the information that is already provided by other observing systems. Therefore, any new observations need to be tested in state-of-the-art NWP systems, in addition to the current global observing system, for extended experimental periods. Global navigation satellite system reflectometry (GNSS-R) observations provide surface roughness information over the ocean, from which surface wind speed estimates can be retrieved. These new data potentially complement the ocean surface wind vector information provided by operational scatterometers, because they have very different spatial sampling characteristics. However, they are not currently routinely assimilated in operational NWP systems. To assess their potential value, a new ESA study will investigate the forecast impact of these measurements when they are assimilated in the ECMWF NWP system, which already assimilates a comprehensive set of both in-situ and satellite observations used operationally, including five scatterometers. (Parallel assimilation activities will also be performed at the UK Met Office in this project, but they will not be reported here.) Unlike most previous research in this area, which has assimilated retrieved GNSS-R wind speed information, in this work we will assimilate the GNSS-R delay-Doppler map (DDM) normalised bistatic radar cross section (NBRCS) observations ("sigma0"). Therefore, a new forward model for NBRCS based on a machine learning approach will be developed and tested. This GNSS-R forward model development work will be informed by a previous study where an ASCAT scatterometer sigma0 forward operator was developed.
The NBRCS forward model will be used to assimilate a new, well calibrated CYGNSS dataset provided by the National Oceanography Centre (NOC) for an extended NWP impact experiment period of at least 3 months. The operator will also be used to assess the quality of Spire GNSS-R observations for the same period. This talk will focus on the initial development and testing of the GNSS-R NBRCS forward operator, which represents the first step towards assimilating these datasets. The plans for assimilating the CYGNSS NBRCS data in observing system experiments will also be presented.
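The role of such a forward operator, mapping model wind speed to the observed quantity rather than the reverse, can be illustrated with a toy regression. The functional form, coefficients, and data below are synthetic stand-ins, not the study's ML operator; a real operator would be a trained model with more predictors (incidence angle, sea state, etc.):

```python
import numpy as np

# Toy forward operator H(u): ocean wind speed -> expected DDM NBRCS ("sigma0").
# NBRCS falls with wind speed as the sea surface roughens; the 1/(1+0.8u)
# shape and all numbers here are illustrative assumptions.

rng = np.random.default_rng(0)
wind = rng.uniform(2.0, 25.0, size=500)                  # m/s, synthetic
nbrcs = 120.0 / (1.0 + 0.8 * wind) + rng.normal(0.0, 1.0, size=wind.size)

# fit a model linear in the feature 1/(1 + 0.8*u)
X = np.column_stack([np.ones_like(wind), 1.0 / (1.0 + 0.8 * wind)])
coef, *_ = np.linalg.lstsq(X, nbrcs, rcond=None)

def forward(u):
    """Forward model H(u): wind speed -> expected NBRCS."""
    return coef[0] + coef[1] / (1.0 + 0.8 * u)

print(round(forward(5.0), 1), round(forward(20.0), 1))  # NBRCS decreases with wind
```

In variational assimilation, H(u) and its sensitivity dH/du are what connect the sigma0 innovation to a wind-speed increment, which is why the operator, not the retrieval, is the first development step.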

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: Interseismic and Postseismic Deformation of 2023 Kahramanmaraş Earthquakes from Subswath and Burst Overlap Interferometry (SBOI)

Authors: Muhammet Nergizci, Milan Lazecky, Professor Tim J. Wright, Professor Andrew Hooper, Professor Ziyadin Çakir
Affiliations: University Of Leeds, Istanbul Technical University
The East Anatolian Fault Zone (EAFZ) is a critical tectonic boundary for studying strike-slip fault mechanics and the dynamics of continental convergence, as it accommodates the relative motion between the Anatolian and Arabian plates. Previous studies indicate that the interseismic slip rate along the main East Anatolian Fault (EAF) is ~10 mm/year. However, this rate decreases to around 4 mm/year along the Amanos Fault in the southwest, due to the tectonic complexity near the triple junction of the Arabian, Anatolian, and African plates. This region exhibits significant north-south deformation, which is challenging to observe using conventional InSAR techniques because the Line-of-Sight (LoS) component has low sensitivity to north-south motion. On February 6, 2023, two powerful left-lateral strike-slip earthquakes (Mw 7.8 and Mw 7.6) struck the EAFZ, producing extensive ruptures: 350 km along the East Anatolian Fault (EAF) and 150 km along the Çardak-Doğanşehir faults (ÇDF). In this study, we investigate the interseismic and postseismic deformation of the EAFZ using Subswath and Burst Overlap Interferometry (SBOI), a technique enabled by the Sentinel-1 TOPS acquisition mode. SBOI provides precise measurements of deformation in the north-south direction, complementing conventional InSAR observations of east-west and vertical displacements. This method allows us to constrain interseismic and postseismic deformation with mm/yr-level precision, although its accuracy can be affected by additional error sources. To mitigate these, we apply corrections for solid Earth tides and ionospheric effects, the latter being particularly significant for ascending tracks acquired at dusk when ionospheric activity is heightened. Ionospheric corrections are implemented using the external CODE GIM model to improve measurement precision.
Our analysis encompasses 22 frames (11 ascending and 11 descending), covering approximately 500,000 km² around the EAFZ and adjacent regions where GNSS measurements of crustal motions were available prior to the earthquake with a typical spacing of ~70 km. By deriving along-track InSAR time series for both interseismic and postseismic periods, we aim to capture large-scale deformation patterns with enhanced accuracy. We further integrate these along-track velocities with conventional across-track InSAR (which provides relative displacement measurements) and GNSS velocities to construct a high-resolution 3D velocity field tied to the global reference frame. Only a handful of large earthquakes have geodetic measurements spanning late-cycle interseismic deformation, coseismic deformation and early postseismic deformation. Our comprehensive 3D dataset for the 2023 earthquakes will significantly advance our understanding of the dynamics of large, strike-slip fault zones.
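The along-track sensitivity that burst-overlap methods exploit can be sketched with the standard spectral-diversity relation: a double-difference phase phi_dd in a burst overlap maps to an along-track shift via dx = phi_dd / (2*pi*df_ovl) * v_ground. The Doppler centroid difference and footprint ground velocity below are assumed Sentinel-1-like values, not mission specifications:

```python
import numpy as np

# Burst-overlap (spectral diversity) sensitivity sketch.
df_ovl = 4300.0    # Doppler centroid difference in the overlap, Hz (assumed)
v_ground = 6800.0  # antenna footprint ground velocity, m/s (assumed)

def along_track_shift(phi_dd):
    """Convert double-difference phase (rad) to along-track displacement (m)."""
    dt = phi_dd / (2.0 * np.pi * df_ovl)   # azimuth mis-timing, s
    return dt * v_ground

# phase corresponding to a 10 cm along-track displacement
phi = 0.10 / v_ground * 2.0 * np.pi * df_ovl
print(f"{np.degrees(phi):.1f} deg of overlap phase per 10 cm along-track")
```

Because tens of degrees of phase correspond to only a decimetre of motion, reaching mm/yr velocities requires the long time series and the tide/ionosphere corrections described above.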

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: Efficient DEM Error Detection and Mitigation in Multi-Temporal InSAR for Accurate Deformation Retrieval over Large Areas

Authors: Lei Zhang, Xinyou Song, Hongyu Liang
Affiliations: Tongji University
Digital Elevation Models (DEMs) are critical for removing topographic phases in InSAR observations. However, their limitations, such as outdated acquisitions and insufficient accuracy, are increasingly evident. Although modern SAR satellites feature a relatively narrow orbit tube, the phases induced by DEM errors cannot be ignored when retrieving subtle surface displacements. To mitigate the DEM errors, Multi-Temporal InSAR (MT-InSAR) techniques are typically employed, wherein the DEM error is modeled and estimated alongside ground deformations from a stack of DInSAR observations. However, the estimation of DEM errors can be biased when the deformation model significantly deviates from actual ground displacements. Additionally, since DEM error estimation is conducted at each selected coherent point, the process is computationally inefficient, especially for large areas. We propose here a detection-and-estimation strategy to mitigate DEM errors more efficiently. Recognizing that many points within an area of interest do not exhibit notable DEM errors, we suggest that it is unnecessary to estimate DEM errors at every point. First, we introduce a novel criterion based on phase gradient direction consistency (PGDC) analysis to identify DEM error pixels (DEPs). This approach offers a direct and intuitive visualization of DEM errors, a first in the field. Building on this, we develop a highly generalizable framework for DEM error correction, adaptable to diverse application scenarios. Comprehensive evaluations using simulated data and real-world cases from urban and mountainous regions demonstrate that the method effectively isolates DEM errors from deformation-induced phases with varying spatiotemporal patterns. By efficiently identifying and estimating DEM errors directly from wrapped phases, our approach simplifies the deformation retrieval process and can be seamlessly integrated into existing MT-InSAR frameworks.
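Why residual height error matters even inside a narrow orbital tube follows from the standard topographic phase relation, phi = 4*pi*B_perp*dh / (lambda*R*sin(theta)). The geometry values below are assumed Sentinel-1-like numbers, not figures from the abstract:

```python
import numpy as np

# Residual topographic phase from a DEM height error dh, for perpendicular
# baseline B_perp: phi = 4*pi*B_perp*dh / (lam * R * sin(inc)).
lam = 0.0556          # C-band wavelength, m
R = 850e3             # slant range, m (assumed)
inc = np.radians(39)  # incidence angle (assumed)

def dem_error_phase(b_perp, dh):
    """Residual topographic phase (rad) left by a DEM height error dh (m)."""
    return 4.0 * np.pi * b_perp * dh / (lam * R * np.sin(inc))

phi = dem_error_phase(b_perp=150.0, dh=10.0)   # 150 m baseline, 10 m error
print(f"{phi:.2f} rad (~{phi / (2 * np.pi):.2f} fringes)")
```

Because the phase scales linearly with B_perp, a stack with varying baselines separates this term from deformation, which is the signal that baseline-dependent detection criteria such as the one proposed here pick up.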

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: Widespread extent of irrecoverable aquifer depletion revealed by country-wide analysis of land surface subsidence hazard in Iran, 2014-2022, using two component Sentinel-1 InSAR time series

Authors: Jessica Payne, Andrew Watson, Dr Yasser Maghsoudi, Dr Susanna Ebmeier, Richard Rigby, Milan Lazecky, Mark Thomas, John Elliott
Affiliations: COMET, University Of Leeds, SatSense, Department of Earth and Environmental Sciences, University of Exeter, CEMACS, University of Leeds, School of Earth and Environment, University of Leeds
Ongoing depletion of Iran's groundwater, driven by human extraction, has contributed to 108 incidences of basin-scale land-surface subsidence exceeding 10 mm/yr, covering 29,600 km2 (1.8 %) of the country. 75 % of this subsidence correlates with agricultural land use. We find Karaj city, neighbouring Iran's capital Tehran, is exposed to the steepest surface velocity gradients (angular distortion, β) caused by differential vertical subsidence rates. In fact, in Karaj, around 23,000 people are exposed to 'high' levels of subsidence-induced hazard. We further use these velocity gradients to aid identification of structural and geological controls on surface velocity patterns for seven of Iran's most populated cities, identifying potentially unmapped tectonic faults buried beneath aquifer sediments. We demonstrate that most of Iran's subsidence caused by groundwater extraction is permanent (inelastic), with the spatial pattern of the proportion of inelastic deformation potentially depending on geology. During a recent, severe regional drought (2020-2023), we demonstrate the control of precipitation on the magnitude of the elastic, recoverable subsidence deformation, with the elastic-to-inelastic deformation ratio falling from 41-44 % pre-drought to 31-36 % post-drought. Additionally, we highlight the relationship of subsidence due to aquifer consolidation with adjacent vertical uplift, caused potentially by unloading of groundwater and, elsewhere, growth of salt crystals. To generate and estimate these ground displacements through time, we use automatically processed short baseline networks of Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) data, 2014-2022. We correct for atmospheric noise using weather model data and perform time series analysis in the satellite line-of-sight direction, serving this data through an open-access online portal.
For each subsidence region, we use this InSAR time-series data alongside GNSS velocity data to decompose line-of-sight velocities into 100 m resolution vertical and horizontal (east-west) surface velocity fields. We use temporal Independent Component Analysis to constrain the inelastic and elastic components of subsidence, automatically and manually respectively.
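The ascending/descending decomposition into vertical and east-west components amounts to a small least-squares problem per pixel. The heading and incidence angles and the sign convention below are assumed Sentinel-1-like choices, and the north component is taken as known (e.g. from GNSS) and removed beforehand:

```python
import numpy as np

# Decompose ascending and descending LOS velocities into (east, up).
def los_unit_vector(heading_deg, inc_deg):
    """(east, up) components of the ground-to-satellite LOS unit vector,
    for a right-looking SAR with the stated heading (assumed convention)."""
    h, i = np.radians(heading_deg), np.radians(inc_deg)
    return np.array([-np.cos(h) * np.sin(i), np.cos(i)])

G = np.vstack([
    los_unit_vector(heading_deg=-10.0, inc_deg=39.0),   # ascending (assumed)
    los_unit_vector(heading_deg=190.0, inc_deg=39.0),   # descending (assumed)
])

v_true = np.array([3.0, -25.0])     # mm/yr east, up: synthetic aquifer signal
d_los = G @ v_true                  # the two "observed" LOS rates
v_est, *_ = np.linalg.lstsq(G, d_los, rcond=None)
print(np.round(v_est, 6))           # recovers [3., -25.]
```

With only two viewing geometries the 2x2 system is exactly determined; in practice the inversion is done per pixel across the velocity grids, with GNSS tying the result to an absolute frame.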

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: Integration of multi-satellite geodetic data for High-Resolution Analysis of Lake Water Storage and Flood Risk

Authors: Mrs. Rafailia Adam, Mrs. Elisavet-Maria Mamagiannou, Mrs. Anastasia Triantafyllou, Mrs. Eleni Tzanou, Mr. Georgios Vergos
Affiliations: Laboratory of Gravity Field Research and Applications – GravLab, Department of Geodesy and Surveying, Aristotle University of Thessaloniki, Department of Surveying and Geoinformatics Engineering, School of Engineering, International Hellenic University
The increasing frequency of extreme weather events, particularly short-term flooding, poses significant challenges to societies, impacting human lives and economic development globally. In recent years, such floods have caused substantial damage, stressing the importance of monitoring and predicting hydrological changes in water bodies. This study investigates the spatial and temporal variations of Terrestrial Water Storage (TWS) over lakes, focusing on Lake Victoria as a case study, utilizing a combination of satellite data and advanced modeling techniques. The main aim is to assess whether these inland water bodies exhibit tendencies toward flooding, which could inform future water resource management and disaster mitigation strategies. Our research utilizes data from the Gravity Recovery and Climate Experiment (GRACE) and its successor, the GRACE Follow-On (GRACE-FO), specifically focusing on the Level-2 Monthly Geopotential Spherical Harmonics (GSH) products provided by the Center for Space Research (CSR) at the University of Texas, Austin. GRACE and GRACE-FO have been instrumental in measuring global TWS by detecting gravitational anomalies caused by mass redistributions on Earth, including variations in surface water, soil moisture, and groundwater. However, the relatively coarse spatial resolution of approximately 300 km inherent to GRACE data limits their application to localized hydrological analyses. To overcome this limitation, and achieve high-resolution TWS estimates, we incorporate satellite altimetry data from missions such as CryoSat, Jason-2, and the Surface Water and Ocean Topography (SWOT) satellite. These missions provide direct measurements of water surface elevation over lakes and rivers with a spatial resolution of approximately 50 m (for the latter), significantly enhancing the large-scale insights offered by GRACE and GRACE-FO. 
This approach proves particularly valuable in accurately capturing water level variations, especially in areas where in-situ data are sparse or unavailable. In addition to satellite data, this study employs land surface modeling through the Global Land Data Assimilation System (GLDAS). GLDAS integrates hydrometeorological observations with advanced modeling techniques to enhance the spatial resolution of GRACE-derived TWS. By combining GRACE-FO and altimetry data with GLDAS outputs, we aim to bridge the resolution gap and produce refined TWS estimates at the basin level. An innovative element of our approach is the application of artificial intelligence (AI) techniques to further increase the spatial resolution and predictive capabilities of TWS data derived from GRACE and GRACE-FO. Machine learning algorithms are employed to effectively interpolate and upscale satellite data, resulting in more detailed and actionable hydrological insights. The integration of GRACE and GRACE-FO data, satellite altimetry, GLDAS models, and AI-based spatial enhancement forms a comprehensive framework for analyzing TWS variations. The ultimate aim is to improve the resolution of TWS estimates and the assessment of associated flood risks, supporting proactive water resource management and improved disaster preparedness strategies.
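The resolution-bridging idea (coarse GRACE footprints refined with fine-resolution auxiliary fields) can be illustrated with a toy statistical-downscaling sketch. Everything below is synthetic and assumed; it is not the project's AI model, just the simplest form of the train-at-coarse-scale, apply-at-fine-scale pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fine-resolution predictor field (GLDAS-like storage) and its
# block average over a coarse GRACE-like footprint (4x4 fine pixels per cell).
fine = rng.normal(size=(8, 8))
coarse = fine.reshape(2, 4, 2, 4).mean(axis=(1, 3))

# Pretend the coarse GRACE TWS anomaly follows a linear relation to the
# block-mean predictor (slope 1.5, offset 0.2, chosen arbitrarily).
tws_coarse = 1.5 * coarse + 0.2

# Fit the relation at the coarse scale, then apply it at fine resolution.
A = np.vstack([coarse.ravel(), np.ones(coarse.size)]).T
slope, intercept = np.linalg.lstsq(A, tws_coarse.ravel(), rcond=None)[0]
tws_fine = slope * fine + intercept

# Mass-consistency check: block-averaging the downscaled field must return
# the coarse TWS it was trained on.
recovered = tws_fine.reshape(2, 4, 2, 4).mean(axis=(1, 3))
```

A machine-learning downscaler replaces the linear fit with a nonlinear regressor, but the same consistency constraint (fine-scale estimates must aggregate back to the GRACE observation) remains the key check.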

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: Joint assimilation of satellite-derived daily Terrestrial Water Storage and Surface Soil Moisture for improving land water storage estimates and flood predictions.

Authors: Leire Retegui-Schiettekatte, Manuela Girotto, Maike Schumacher, Henrik Madsen, Mohammad Shamsudduha, Ehsan Forootan
Affiliations: Geodesy Group, Department of Sustainability and Planning, Aalborg University, University of California Berkeley, DHI A/S, Institute for Risk & Disaster Reduction, University College London
A complete and realistic representation of land water storage conditions is essential to accurately model river discharge variability and predict extreme flood events. Models alone often display limitations in adequately representing land storage conditions and therefore rely on the integration of observations. In poorly gauged basins, satellite observations can be used to monitor the saturation state of the land. For example, satellite-derived Surface Soil Moisture (SSM) products can provide information on the saturation level of the shallow soil layers. The Gravity Recovery and Climate Experiment (GRACE) and its Follow-On mission (GRACE-FO) allow the estimation of Terrestrial Water Storage (TWS) changes, which represent the temporal variability in the vertical summation of water stored below and above the land surface. In this study, we integrate these satellite-derived daily TWS and SSM observations into a hydrological model through a sequential Data Assimilation (DA) framework. The objectives are (1) to assess how the joint multi-sensor assimilation of daily TWS and SSM observations can help improve the representation of land water storage within the model, and (2) to analyze the extent of the impact of this multi-sensor DA on river discharge estimates and flood predictions. The World Wide Water Resources Assessment (W3RA) water balance model, with a daily temporal resolution and a 0.1° spatial resolution, is chosen as the basis for our experiments. Sub-basin averaged TWS derived from the ITSG-Grace2018 daily product and SSM extracted from the ESA Climate Change Initiative (CCI) SM product (spatial resolution of 0.25°) are assimilated. A localized Ensemble Kalman Filter-based DA framework is implemented to jointly assimilate the spatially distributed observations in a balanced manner while appropriately accounting for both model and observation uncertainties. 
The Brahmaputra River Basin, a poorly gauged basin that is recurrently affected by large flood events, is chosen to develop and test our DA framework.
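The core of such a framework is the Ensemble Kalman Filter analysis step. The sketch below reduces it to a single grid cell with a perturbed-observation update; the three-storage state, the observation operator (SSM sees the top layer, TWS sees the vertical sum), and all error statistics are illustrative assumptions, not the W3RA configuration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_ens, n_state = 32, 3   # state: [surface SM, root zone, groundwater] (mm)

# Prior ensemble (columns are members), with illustrative means and spreads.
X = rng.normal([20.0, 120.0, 300.0], [5.0, 20.0, 40.0],
               size=(n_ens, n_state)).T

# Observation operator: SSM observes the top layer, TWS the vertical sum.
H = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 1.0]])
R = np.diag([4.0, 100.0])    # observation error variances (SSM, TWS)
y = np.array([25.0, 470.0])  # one SSM and one TWS observation

# Perturbed-observation EnKF analysis: K = P H' (H P H' + R)^-1,
# with P estimated from the ensemble anomalies.
A = X - X.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
Y = y[:, None] + rng.multivariate_normal(np.zeros(2), R, n_ens).T
Xa = X + K @ (Y - H @ X)     # analysis ensemble
```

The ensemble cross-covariances in P are what let the vertically integrated TWS observation update the unobserved groundwater store; spatial localization, as used in the study, additionally tapers these covariances with distance to suppress spurious long-range correlations.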

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: A Flexible Reference Frame Connection Procedure for InSAR Time Series Based on Open GNSS Data: a Case Study in Southern Italy

Authors: Laura Giaccio, Valeria Belloni, Roberta Ravanelli, Mattia Crespi
Affiliations: Sapienza University Of Rome, Geodesy and Geomatics Division, DICEA, Faculty of Civil and Industrial Engineering, University of Liege, Geomatics Unit, Department of Geography, Faculty of Sciences, Sapienza University Of Rome, Sapienza School for Advanced Studies
The reliable analysis of wide-area ground displacement fields from InSAR data is critical for various applications, yet it presents unique challenges that can often be overlooked when working on smaller areas. Displacements derived with radar interferometry are, in fact, always the result of a double difference: between the positions at the epochs of the primary and secondary images in time, and between the displacements of the pixel of interest and the origin pixel in space. Areas processed and unwrapped together will have the same origin pixel, while, in general, displacements resulting from two separate unwrapping procedures will be in different reference frames. Therefore, when the displacement field over a very large area is of interest, data must be carefully processed and calibrated in order to be comparable. The aim of this work is to provide a general and flexible approach, based on open-access data, to anchor InSAR time series to a common reference frame, easily scalable to areas of different sizes, exhibiting either linear or non-linear behavior, and with varying configurations of the available GNSS network. Many authors have dealt with this problem at regional, national or international scales [1][2][3][4][5], basing the calibration process on both leveling and GNSS data [1][3], or only on GNSS data [2][5]. Naturally, merging different techniques introduces challenges related to the different periods covered, the different time span between adjacent measurements, the uneven spatial coverage, the appropriate description of non-linear behaviors and the different characteristics of the technologies used. In this study we propose an approach to connect InSAR-derived displacement time series (obtained through the European Ground Motion Service) to ETRF2000, using GNSS time series from open access datasets (Nevada Geodetic Laboratory, EPN Densification). 
The procedure is performed by splitting the InSAR dataset into tiles, each tile consisting of a single burst and a single sub-swath of Sentinel-1 Interferometric Wide swath mode. Each GNSS time series is treated as a single source of information: when more than one station is available, a redundant system of equations is solved to find the motion of the reference point, while when no station is available the connection is performed using the overlapping areas with adjacent tiles, ultimately extending the available information to all tiles. The computations are carried out in a local reference frame originating at the centre of the study area, in order to account for the changes in the unit vectors of the north, east, and vertical directions across the study area with respect to a global reference system, without losing the practical advantages that a local reference system provides for InSAR applications. Non-linear behavior in GNSS time series is also properly modelled and taken into account. [1] T. Fuhrmann, M. Caro Cuenca, A. Knöpfler, F.J. van Leijen, M. Mayer, M. Westerhaus, R.F. Hanssen, B. Heck, Estimation of small surface displacements in the Upper Rhine Graben area from a combined analysis of PS-InSAR, levelling and GNSS data, Geophysical Journal International, Volume 203, Issue 1, October 2015, Pages 614–631, https://doi.org/10.1093/gji/ggv328 [2] Bischoff, C. A., Ferretti, A., Novali, F., Uttini, A., Giannico, C., and Meloni, F.: Nationwide deformation monitoring with SqueeSAR® using Sentinel-1 data, Proc. IAHS, 382, 31–37, https://doi.org/10.5194/piahs-382-31-2020, 2020. [3] Cuenca, M. C., Hanssen, R., Hooper, A., & Arikan, M. (2011, September). Surface deformation of the whole Netherlands after PSI analysis. In Proceedings of the Fringe 2011 Workshop, Frascati, Italy (pp. 9-23). [4] Ferretti, A., Passera, E., Capes, R., (2023). Algorithm Theoretical Basis Document, Document Code: EGMS-D3-ALG-SC1-2.0-006. 
https://land.copernicus.eu/en/technical-library/egms-algorithm-theoretical-basis-document. [5] Johnston, P. J., Filmer, M. S., Fuhrmann, T. (2021). Evaluation of methods for connecting InSAR to a terrestrial reference frame in the Latrobe Valley, Australia, Journal of Geodesy 95:115, https://doi.org/10.1007/s00190-021-01560-2
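The redundant system solved for a tile with several GNSS stations can be sketched as a one-parameter weighted least-squares problem. The station values and uncertainties below are hypothetical; in the actual procedure the GNSS velocities must first be projected into the InSAR line of sight and expressed in the local reference frame described above.

```python
import numpy as np

# Hypothetical tile with three co-located GNSS/InSAR points (mm/yr). The
# tile-wide reference-point motion v0 satisfies v_gnss_los = v_insar + v0 at
# every station, giving the redundant system A*v0 = d with A a column of ones.
v_insar = np.array([-4.1, 2.3, 0.8])      # InSAR LOS velocities at the pixels
v_gnss_los = np.array([-1.0, 5.2, 4.1])   # GNSS velocities projected to LOS
sigma = np.array([0.5, 0.8, 0.6])         # per-station uncertainties

d = v_gnss_los - v_insar                  # per-station estimates of v0
w = 1.0 / sigma**2
v0 = np.sum(w * d) / np.sum(w)            # weighted least-squares solution
residuals = d - v0                        # station-level consistency check
```

With a single unknown the normal equations collapse to a weighted mean; for a tile with no station, the same scheme would instead use offsets estimated over the overlap with already-connected adjacent tiles.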

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: GNSS Tomography as a Cost-Effective Tool for Atmospheric Water Vapor Monitoring

Authors: Pedro Mateus, Pedro Miranda, João
Affiliations: University of Lisbon, Instituto Dom Luiz (IDL)
Extreme precipitation events, increasingly intensified by climate change, result in landslides, floods, and debris flows, causing severe loss of life and extensive infrastructure damage. While research on the impact of climate change on extreme precipitation has been ongoing since the 1980s, the interactions between large-scale atmospheric flow and small-scale convective precipitation processes remain poorly understood. Enhancing weather and climate models necessitates connecting storm initiation and development to local environmental moisture conditions, particularly in deep convection. Water vapor dynamics are critical in triggering precipitation, making this variable essential for numerical weather prediction (NWP) models. However, significant observational gaps persist regarding the structure of the lower troposphere, a crucial region for understanding water and energy cycles. Multi-GNSS systems (e.g., GPS, GLONASS, Beidou, Galileo), initially designed for accurate positioning, navigation, and timing, also provide atmospheric data. Microwave signals transmitted by GNSS satellites are refracted by the atmosphere, with a portion of this refraction influenced by water vapor. This phenomenon allows for precisely estimating column-integrated or precipitable water vapor (PWV). Traditional GNSS methods have been unable to resolve the vertical structure of atmospheric water vapor. However, the growing spatial density of continuously operating GNSS stations has enabled advancements in tomographic algorithms, allowing for three-dimensional reconstructions of water vapor fields. This study explores how small-scale, spatially dense GNSS networks can cost-effectively monitor tropospheric water vapor profiles through tomographic inversion of Slant Integrated Water Vapor (SIWV) observations. 
The ultimate aim is to improve the understanding and forecasting of 3D water vapor dynamics at sub-kilometer scales, focusing on regions characterized by marine stratocumulus clouds and frequent extratropical or occasional tropical storms, such as the North Atlantic. This study includes comparisons of vertical PWV distributions derived from GNSS, radiosonde data, ERA5 atmospheric reanalysis, and spatial distribution analyses using InSAR-based PWV maps. Miranda, P. M. A., & Mateus, P. (2021). A new unconstrained approach to GNSS atmospheric water vapor tomography. Geophysical Research Letters, 48, e2021GL094852. https://doi.org/10.1029/2021GL094852 Miranda, P. M. A., & Mateus, P. (2022). Improved GNSS water vapor tomography with modified mapping functions. Geophysical Research Letters, 49, e2022GL100140. https://doi.org/10.1029/2022GL100140 Miranda, P. M. A., Adams, D. K., Tomé, R., Fernandes, R., & Mateus, P. (2023). Optimizing boundary conditions in GNSS tomography: A continuous 7-month case study in the Amazon. Geophysical Research Letters, 50, e2023GL105030. https://doi.org/10.1029/2023GL105030 Mateus P, Catalão J, Fernandes R, Miranda PMA. Atmospheric Water Vapor Variability over Houston: Continuous GNSS Tomography in the Year of Hurricane Harvey (2017). Remote Sensing. 2024; 16(17):3205. https://doi.org/10.3390/rs16173205 This work was funded by the Portuguese Fundação para a Ciência e a Tecnologia (FCT) I.P./MCTES through national funds (PIDDAC) – UIDB/50019/2020 (https://doi.org/10.54499/UIDB/50019/2020), UIDP/50019/2020 (https://doi.org/10.54499/UIDP/50019/2020), LA/P/0068/2020 (https://doi.org/10.54499/LA/P/0068/2020), and by 2022.15714.MIT project.
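At its core, the tomographic inversion of SIWV observations is a linear inverse problem: each slant observation is a path-length-weighted sum of the water-vapour densities in the voxels the ray crosses. The sketch below uses a made-up ray-geometry matrix and simple Tikhonov damping to show the principle; real implementations compute slant paths through a 3D voxel grid and apply the more sophisticated constraints discussed in the cited papers.

```python
import numpy as np

# Toy tomographic inversion (illustrative, not the authors' algorithm): each
# row of A holds the slant path length (m) of one station-satellite ray in
# each of four vertical layers; SIWV is the path-length-weighted sum of the
# layer water-vapour densities. The geometry matrix is made up but has full
# column rank, mimicking rays at different elevations and azimuths.
A = np.array([
    [1900.0, 1700.0, 1200.0,  600.0],
    [1400.0, 1300.0, 1100.0,  800.0],
    [1000.0, 1000.0,  900.0,  850.0],
    [ 800.0,  820.0,  830.0,  840.0],
    [ 600.0,  650.0,  700.0,  750.0],
    [ 520.0,  540.0,  560.0,  580.0],
])
rho_true = np.array([8.0, 5.0, 3.0, 1.0])   # g/m^3, decaying with height
siwv = A @ rho_true                          # synthetic slant observations

# Damped (Tikhonov-regularized) least squares: tomography is ill-posed when
# ray geometries overlap, so a small damping term stabilises the normal
# equations.
lam = 1e-3
rho = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ siwv)
```

Denser station networks improve the angular diversity of the rays, which is precisely what makes the columns of A separable and the vertical profile recoverable.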

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B-C)

Poster: Separating Volcanic Deformation From Atmosphere and Addressing Temporary Loss of Coherence Using Bayesian Estimation of Independent Components

Authors: Professor Andrew Hooper, Matthew Gaddes, Milan Lazecky, Camila Novoa Lizama, Susanna Ebmeier
Affiliations: COMET, School of Earth and Environment, University Of Leeds
Ground deformation is a key indicator of volcanic activity and routine acquisition by the Sentinel-1 satellite mission allows us to monitor volcano deformation globally. We have developed a system that routinely applies radar interferometry (InSAR) whenever a new Sentinel-1 image is acquired and updates the time series. However, interpreting the latest image, which is critical for volcano monitoring, can be difficult due to the presence of atmospheric signals and temporary loss of coherence caused by snow or seasonal vegetation. While the first issue can be partly addressed using a weather model, and further addressed through spatio-temporal filtering, weather models can sometimes be wildly inaccurate, particularly for volcanic islands, and filtering in time does not work well for the latest image, due to the lack of subsequent images. Here we present an approach for addressing both issues, using machine learning. We have previously developed an automated algorithm, based on independent component analysis, to identify new deformation patterns and also any changes in the rate of existing deformation patterns, both of which are key indicators of changes in volcanic activity. This algorithm first identifies independent components that appear in interferograms acquired during a training period, which typically represent regular atmospheric patterns and any long-lived deformation signals. When a new image is acquired, significant changes in the contribution of any signal are flagged, as are any patterns that are not represented by the independent components. In our new approach, we extend this algorithm. When a new image is acquired, we use a Bayesian approach to estimate the contribution of each component, with prior probabilities based on the variability of each component in the training period. The estimated atmospheric contributions are then subtracted to isolate the deformation signal. 
Our tests show that the atmospheric components are significantly better estimated using this approach than with a weather model. To address the issue of temporary loss of coherence, only pixels that are coherent in the new image are used to estimate the contribution of each independent component. However, because the components are known for all pixels that are coherent for at least part of the time, the estimated contributions can be used to estimate the signal for the missing pixels. While any new small-scale signal that is purely present in areas covered by incoherent pixels will still be missed, larger-scale signals that extend beyond these areas can be well characterised.
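The estimation of component contributions from coherent pixels only, followed by reconstruction at incoherent pixels, can be sketched as a Gaussian MAP (ridge-type) estimate. The component patterns, noise level, and prior variances below are synthetic assumptions (with prior mean zero for simplicity), not the authors' trained components.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_comp = 200, 3

# Synthetic stand-ins for the spatial patterns of the independent components
# (columns of S) and their true contributions in a newly acquired image.
S = rng.normal(size=(n_pix, n_comp))
a_true = np.array([2.0, -1.0, 0.5])
img = S @ a_true + 0.1 * rng.normal(size=n_pix)

# Temporary loss of coherence: only a subset of pixels is usable.
coherent = rng.random(n_pix) > 0.4
Sc, dc = S[coherent], img[coherent]

# Gaussian MAP estimate of the contributions: prior N(0, diag(tau2)) from the
# training-period variability, data noise variance sigma2. This is the
# Bayesian ridge solution a = (S'S/s2 + diag(1/tau2))^-1 S'd/s2.
sigma2 = 0.01
tau2 = np.array([4.0, 4.0, 1.0])
a_map = np.linalg.solve(Sc.T @ Sc / sigma2 + np.diag(1.0 / tau2),
                        Sc.T @ dc / sigma2)

# Because the component patterns are known everywhere, the contributions
# estimated from coherent pixels also predict the signal at incoherent ones.
recon_missing = S[~coherent] @ a_map
```

As the abstract notes, this reconstruction only recovers signals that project onto the known component patterns: a new small-scale signal confined to the incoherent area has no coherent-pixel footprint and is invisible to the estimator.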

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: F.05.01 - POSTER - Satellite EO data benefit the economy, the environment and our societies: the evidence and the stories

Understanding and quantifying the societal benefits derived from the use of EO is critically important in prioritizing, using, and enhancing the value of EO in decision making. Earth Observation data and service providers strive to improve the understanding of the benefits that society derives from their own missions and from EO in general, in order to improve services and products and to substantiate prospective analyses for the planning of future missions.
From various interactions and studies, there is overwhelming evidence that the benefits generated by EO are substantial – they include economic savings and efficiency gains, improved compliance with regulations, reduced pollution and improved understanding of climate change, to mention just a few. However, while recent years have witnessed a proliferation of examples and use cases, quantitative and structured assessments have still to gain traction.
The proposed session pursues improvements in EO data use impact assessments and in the understanding of the mechanisms through which EO data use can generate benefits to society. The Session will convene the international community undertaking similar efforts and invite them to exchange good practices. Focus will be on exemplary case studies and methodologies that have been successfully applied in a credible way.


Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Results of the Sentinel Benefits Study - Demonstrating the value of Sentinel Data through Rigorous Value Chain Analyses and Powerful User Stories

Authors: Lefteris Mamais, Mr. Geoff Sawyer, Mr. Dimitrios Papadakis, Ms. Lauriane Dewulf, Mr. Daire Boyle, Mr. Christopher Oligschlager, Ms. Julia Caufape, Mr. Nikolay Khabarov
Affiliations: Evenflow, EARSC, IIASA
The Sentinel Benefits Study (SeBS) was a project funded by the European Space Agency which ran from March 2017 to July 2024. The objective of SeBS was to demonstrate the value produced by the Copernicus Sentinels using a value-chain methodology. To achieve this, SeBS followed two streams of activity. The first entailed the execution of value chain analyses, in which specific case studies were selected, analysed and presented with the aim of showcasing the benefits brought by the usage of Sentinels data. At the core of the SeBS methodology is a robust and structured framework which analyses how value is generated across six well-defined dimensions, namely Economic, Environmental, Regulatory, Innovation & Entrepreneurship, Science & Technological Research, and Societal benefits. The findings of these cases not only contributed to creating a concrete picture of a wide range of benefits for different actors; they also helped to strengthen the methodological framework for the execution of such studies. This methodological framework was further enhanced by a number of cross-cutting studies carried out alongside the value chain analyses. The aim of these was to highlight the impact of the use of Copernicus Sentinel data across a wide range of topics. This included analyses of the impact of Copernicus Sentinels data within Academic Publications, as a key enabler for Innovation and Start-ups, and as a key resource in support of Environmental Compliance Assurance. Moreover, additional analysis relating to the Transversal Benefits across cases was performed. SeBS has developed an extensive portfolio of analysed cases which allowed for the extraction of meaningful lessons learned, the identification of common patterns, and the articulation of insights that can inform future studies and activities. This portfolio spans multiple sectors, countries, stakeholders and Sentinel data types. 
Case studies include Farm Management Support in Denmark, Flood Management in Ireland, Ground Movement Monitoring in Norway, Water Resources Management in Spain, and Invasive Species Detection in Croatia, to name but a few. Through its numerous analyses and the fundamental work on the underlying methodologies, SeBS has contributed to the work done by the GeoValue community – an international group of practitioners developing best practices in the analysis of the value and socio-economic benefits of geospatial information in support of decision makers. Across 22 comprehensively analysed cases, the total economic benefits calculated are significant. A minimum economic benefit figure of €242.5m (or €11m per case on average) has been calculated, with the maximum extending to approximately €531m (or €24.1m per case on average). Depending on the case, these benefits materialise in relation to capital expenditure reductions, cost savings, increased revenues and growth in employment as a result of the utilisation of Sentinel data. The study also analysed several other domains of benefit. Environmental benefits were a major upside in many cases, where SeBS documented how Sentinel data-powered services helped to reduce pollution, mitigate several negative impacts on biodiversity and contribute to the protection of natural resources. The use of Sentinel data was also found to have a profound impact on the design and implementation of regulatory measures and policies, as documented in many cases. The availability of Sentinel data was also proven to be a strong driver for innovation and entrepreneurship, with 73% of respondents to an innovation survey conducted in 2021 reporting that they use Sentinel data "a lot" and almost half of respondents reporting that they rely on Copernicus for their business to run at a competitive advantage. 
Whilst typically perceived as intangible, societal benefits are often considered by the interviewed stakeholders in each case as essential aspects of the greater picture. In many cases, they may even represent a key driver motivating the primary users to procure the Sentinel-based service in the first place. The study found that Sentinel data contribute to public health and civil security, and can even aid community building and quality of life for citizens across Europe and beyond. Copernicus Sentinel data also contribute to scientific output across the research community, as exhibited in a dedicated SeBS study which found that, up to and including 2021, there were 11,888 unique scientific publications from over 150 countries relating to Sentinels-1, -2, -3 and -5p. This figure comprises 4,194 conference papers presented at 475 conferences and 7,694 journal articles published across 890 journals. Year on year, the volume of publications saw significant increases: of the 11,888 overall publications studied, 3,684 (31%) were published in 2021 alone. Overall, SeBS shows that the use of Sentinel data is generating outstanding value, often in quite different ways. Not only has SeBS demonstrated the strength of a bottom-up methodological approach in quantifying benefits associated with the use of Sentinel-based products and services; it has also allowed for the development of a deeper understanding of the processes, actions and decisions faced by primary and secondary users, and of how Sentinel-based services truly contribute to their improvement. 
The power of storytelling, extensively utilised in SeBS, has helped to convey tangible information on these benefits to other actors in the ecosystem that can benefit from this knowledge: companies that are trying to reach out to users and sell their innovative solutions; users that are trying to understand the specifics and the real value of using new services that will bring change to their workflows; and decision makers that need concrete examples to guide them in defining the strategic vision and practical implementation of a programme like Copernicus.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Leveraging Earth Observation Data for Robust Supply Chain Resilience

Authors: Violeta Damjanovic-Behrendt
Affiliations: GreenTwin GmbH
Global supply chains are increasingly exposed to disruptions driven by systemic risks, including geopolitical conflicts, climate change, pandemics, raw material depletion, as well as Europe's efforts to achieve resource and technology independence from other nations. Recent geopolitical tensions and climate-induced disasters, such as floods, have demonstrated how even minor disruptions in one region can ripple across global supply networks, impacting industries and economies. This rising complexity underscores the urgent need to build resilience into supply chains. Modern technologies, such as cloud computing and Earth Observation (EO) data, are essential for achieving this resilience, offering advanced tools to monitor, predict, and mitigate risks effectively.
Cloud computing: cloud platforms facilitate real-time data sharing, advanced analytics, and seamless integration of tools and systems. Supply chain stakeholders can monitor operations, respond to disruptions, and optimise processes using cloud-based applications.
Earth Observation (EO) data: EO data, collected via satellites, provide critical insights into weather patterns, natural disasters, and environmental conditions. Integrating EO data with supply chain platforms enhances predictive analytics and situational awareness.
By combining these technologies, organizations can achieve:
• Real-Time Insights: stakeholders receive up-to-date information on risks and disruptions.
• Cost Efficiency: automated data analysis reduces manual interventions and reliance on external data sources like online portals, environmental services, and disaster management systems.
• Scalability: cloud platforms can handle expanding datasets and user bases as businesses grow.
• Enhanced Collaboration: cloud-enabled EO services foster collaboration among supply chain actors, from manufacturers to logistics providers, while also supporting companies in sustainability reporting and compliance with mandatory directives (e.g., the Corporate Sustainability Due Diligence Directive (CS3D)).
To maximize the impact of EO-based services, seamless integration of EO data with cloud-based platforms is essential. Key steps include:
• Data Ingestion and Processing: raw EO data is processed on cloud platforms using scalable computing resources, enabling real-time analytics.
• API Integration: cloud platforms integrate EO services via APIs, allowing users to access EO insights within existing supply chain management tools.
• Visualization and Decision-Making: dashboards provide intuitive visualizations, enabling users to quickly identify risks and opportunities.
• Collaboration and Communication: cloud-based tools enable stakeholders to share insights and coordinate responses to disruptions effectively.
We are currently developing a supply chain resilience platform that leverages EO data for the following services:
• Supply Chain Mapping: mapping supply chain networks with integrated EO insights.
• Risk Visualisation: cloud-based dashboards displaying risk zones and suggesting alternative transportation routes or storage locations.
• Environmental Monitoring: monitoring raw material exploitation for products offered through mapped supply chain networks.
• Resource Optimization: EO-driven decision-making to save resources and energy.
• Compliance Verification: verifying adherence to environmental and sustainability regulations, including carbon footprint assessments related to supply chain networks.
Some of these services are already available on our free supply chain resilience platform, which is partially funded through the Horizon Europe project ResC4EU (Resilience Supply Chains for Europe).
By integrating EO services into supply chain cloud platforms, companies can gain unparalleled insights, enhance their operational agility, and build robust supply chains capable of withstanding future challenges. Our primary motivation for leveraging EO data is to empower companies to adapt to and recover from risks more effectively. By providing actionable insights, we aim to help organizations proactively avoid potential disruptions or identify fast, reliable alternative solutions when disruptions occur. Our vision is to make EO-powered supply chain resilience a cornerstone for sustainable and secure business operations worldwide.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Recovery Observatory: contributing to more effective and resilient recovery strategies for disasters.

Authors: Aurélien Sacotte, Linda Tomasini, Andrew
Affiliations: Cnes
The Recovery Observatory (RO) is a key activity of the Committee on Earth Observation Satellites (CEOS) Working Group on Disasters, aiming to provide satellite maps and information for disaster recovery. It was set up as a pilot in December 2016 to support the monitoring of recovery and rehabilitation in the areas of southwest Haiti affected by Hurricane Matthew. Over time, the RO evolved into a generic and scalable CEOS activity ready to support other events by replicating the efforts initially made for Hurricane Matthew. From 2020 to 2023, the RO Demo activity, led by the French Space Agency (CNES) and the World Bank/GFDRR (Global Facility for Disaster Reduction and Recovery), was activated for five events by Tripartite Agreement partners:
• The Lebanon explosion (Beirut Port) in August 2020
• Hurricanes Eta and Iota in Central America in late 2020
• The Haiti earthquake in August 2021
• The historic Pakistan floods in September 2022
• The Libya floods in September 2023
To help with planning and monitoring after disasters, CEOS and the RO have been working collaboratively with the United Nations Development Programme (UNDP), the European Union Service for Foreign Policy Instruments (FPI), and the World Bank GFDRR. This teamwork and the existing Tripartite Agreement among these partners support the completion of Post-Disaster Needs Assessments (PDNAs) and help create effective plans for rebuilding and recovery after a disaster strikes. Building upon the experience gained through these activations, the RO recently began its transition to a pre-operational phase. In November 2023, CEOS approved a new project, the pre-operational RO, which started in 2024 and aims to become permanent by 2027:
a. 2024: The project began with a transition year, during which leadership shifted to international disaster recovery partners.
b. 2025-2026: The project will enter a pre-operational phase to prepare for becoming permanent in 2027. 
Last year, the pre-operational RO was activated by EU/FPI for floods in Armenia. In May 2024, heavy rainfall and consequent floods hit northern Armenia, particularly Lori and Tavush Provinces, where the rivers Debed, Aghstev, and Tashir all burst their banks, washing away roads, bridges and parts of a railway, and flooding the towns and villages located along them. The International Charter ‘Space and Major Disasters’ and the Copernicus EMS Risk and Recovery service were also activated, for disaster response and for assessment of post-flood landslide risk, respectively. The Regional Image Processing and Remote Sensing Service (SERTIT, Strasbourg University) worked closely with the Armenian government on behalf of the RO Team to provide information about: 1) the localisation, qualification and quantification of debris; 2) the estimation of flood propagation, as well as flood simulations in the river basins; 3) the potential for future landslides. RO activations like that in Armenia show how satellites can be used to quantify impacts, prepare authorities for better response to future events, and improve disaster-related decision-making processes. The transition to permanent status by 2027 promises to enhance the Recovery Observatory’s capacity to deliver timely and accurate information, ultimately contributing to more effective and resilient recovery strategies in the face of future disasters.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Harnessing Earth Observation and Socio-Economic Insights for Sustainable Aquaculture Development: Lessons from Bangladesh and Kenya

Authors: PhD Valentina Santarsiero, Mr. Lucio Martino, Dr. Giulio Ceriola, Dr. Daniela Iasillo, PhD Antonello Aiello, PhD Cristiano Rossignoli, PhD Lorenzo Longobardi
Affiliations: Planetek Italia s.r.l., WorldFish
Aquaculture plays a vital role in fostering economic growth and addressing rural poverty in numerous regions worldwide. The AQUADMC (Optimising AQUAculture Development and Market Connectivity) project focuses on integrating diverse datasets, including EO-derived information and socio-economic data, to support the development of freshwater and coastal aquaculture in various parts of Bangladesh and Kenya. By harnessing satellite imagery, socio-economic insights, and logistical analysis, the project seeks to optimise site selection, enhance market connectivity, and promote environmentally sustainable practices. The main typologies of aquaculture in Bangladesh include small ponds, closed artificial waterbodies commonly used to cultivate species like tilapia, catfish, carp and other indigenous species; seasonal cultured waterbodies, which are temporarily flooded areas ideal for rearing carp, catfish and indigenous fish during the wet season; baors, natural oxbow lakes with unique horseshoe-shaped formations that support the growth of carp, tilapia, and other indigenous fish; shrimp and prawn farming, which occurs in coastal ponds or brackish water systems to produce species like giant tiger prawns and freshwater prawns; and cage culture, where floating cages in rivers and lakes are used to grow fast-growing species such as tilapia and catfish in controlled environments. For Bangladesh, the focus was mainly on ponds, which are the most common mode of aquaculture. The methodology employs advanced satellite technologies to monitor water quality parameters in rivers, lakes and coastal areas. Indicators such as turbidity, chlorophyll-a concentrations, salinity and surface temperature are analysed to evaluate environmental suitability for aquaculture activities. Satellite images support the identification of existing and potential aquaculture sites, with an emphasis on their location, water quality, and proximity to essential resources.
For example, high- and mid-resolution satellite imagery and sensors such as Sentinel-2 and Sentinel-3 are used to map potential sites by assessing factors such as chlorophyll-a concentrations, sediment loads, and thermal regimes. Water quality data obtained through the application of algorithms on satellite imagery are complemented by historical, socio-economic data tracing the evolution of aquaculture practices in the analysed regions. For the socio-economic characterisation, the study incorporates various data points, including fish production and productivity of ponds, information regarding the level of development of Bangladesh (e.g., Human Development Index, Literacy Rate, Food Security, Poverty Rate), communication routes and data on investments in the agricultural sector. By understanding the spatial distribution of ponds and their connectivity with regional and urban markets, the study highlights the interplay between resource availability and socio-economic factors. To tackle logistical challenges, proximity analyses of seafood transportation and storage infrastructure are carried out. These assessments consider major road networks, access to waterways, and the availability of cold-chain logistics systems to ensure efficient connectivity between aquaculture sites and urban markets. This approach also highlights critical infrastructure gaps that could impede the movement of aquaculture products, emphasising the importance of targeted interventions to improve logistical efficiency. The integration of socio-economic and logistical data in the identification of aquaculture trends seems to demonstrate a strong alignment with water quality data derived from satellites for the monitoring of existing aquaculture sites and the possible location of new ponds. In addition, the study highlighted the role of aquaculture in sustaining livelihoods, with data indicating that the sector is growing strongly, contributing to rural economies. 
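The abstract does not name the specific water-quality algorithms applied to the satellite imagery; as one illustrative example of the kind of index commonly derived from Sentinel-2 data for chlorophyll-a estimation, here is a minimal sketch of the Normalised Difference Chlorophyll Index (NDCI). The band array names are assumptions, not the project's actual interface.

```python
import numpy as np

def ndci(b5_red_edge, b4_red):
    """Normalised Difference Chlorophyll Index: (B5 - B4) / (B5 + B4).

    Uses Sentinel-2 red-edge (B5) and red (B4) surface reflectance;
    higher values indicate higher chlorophyll-a in turbid inland and
    coastal waters.
    """
    b5 = np.asarray(b5_red_edge, dtype=float)
    b4 = np.asarray(b4_red, dtype=float)
    # Guard against division by zero over dark pixels
    return (b5 - b4) / np.maximum(b5 + b4, 1e-6)
```

A positive NDCI over a pond or coastal pixel would flag elevated chlorophyll-a, one of the suitability indicators listed above.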
The project delivers an interactive decision support tool that synthesises these data, providing policymakers and stakeholders with a comprehensive resource for making informed decisions. The tool offers insights into the operational, environmental and economic dimensions of aquaculture development, enabling balanced, data-driven strategies. This project represents an exemplary case of international cooperation in Earth observation. The fusion of global expertise, satellite technology and local knowledge promotes innovative solutions to complex challenges. By promoting cooperation between nations, organisations and research institutions, EO provides a solid framework to address local and global challenges in aquaculture development. These efforts highlight the critical role of collaborative approaches in promoting sustainable aquaculture development in Bangladesh, Kenya and beyond.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Cyprus Earth Observation for Public Sector (CEOPS) ESA PECS Project: Bridging EO Innovation and Public Sector Needs

Authors: Ms. Eleni Loulli, Thomas Papakosmas, Dr Christodoulos Mettas, Dr. Christiana Papoutsa, Marios Tzouvaras, Maria Prodromou, Andri Kalogirou, Sofia Fidani, Constantina Theofylaktou, Marianna Hadjichristodoulou, Stavroula Dimitriadi, Mr. Thrasos Stylianou, Christodoulos Dimitriou, Despina Makri, Eirini Nikolaou, Spyroula Georgiou, Maria Poutli, Athina Savva, Marina Doukanari, Josefina Kountouri, Kyriacos Themistocleous, Diofantos Hadjimitsis
Affiliations: Eratosthenes Centre Of Excellence, Geomatics (Cyprus) Ltd, Cyprus University of Technology
The “Cyprus Earth Observation for Public Sector (CEOPS)” ESA PECS project is a key response to the Cyprus Space Strategy 2022-2027, which envisions Cyprus becoming a regional digital hub in Earth Observation and space technologies. The strategy underlines the importance of creating a “home” market for EO applications. The public sector plays a crucial role as a key customer of EO services, infrastructure and data, while it can also serve as a reliable stakeholder and trusted customer. The CEOPS project aims to support this vision by mapping the EO capabilities and needs within the Cypriot ecosystem with the continuous involvement of public sector stakeholders. The ultimate goal is to identify high-priority EO solutions that are tailored to real-world requirements. The project consortium consists of the ERATOSTHENES Centre of Excellence (project coordinator) and Geomatics (Cyprus) Ltd. The ERATOSTHENES Centre of Excellence was established through the “EXCELSIOR” H2020 Widespread Teaming Phase 2 project, upgrading the existing Remote Sensing and Geo-Environment Lab that had operated within the Department of Civil Engineering and Geomatics of the Cyprus University of Technology since 2007. Geomatics (Cyprus) Ltd is a company specializing in advanced remote sensing technologies, including satellite and UAV imaging, spectroscopy, and custom system development, providing innovative geospatial solutions for research, government, and private sectors. The project begins with the development of an extensive EO services catalogue, which includes EO tools, algorithms and services relevant to the Cypriot public sector. This involves cataloguing current EO algorithms and services through a systematic literature review, use of databases and expert consultations. In line with the Copernicus Services Portfolio, the catalogue details existing capabilities, including technical specifications, data inputs, performance metrics and use cases.
The catalogue covers EO services tailored to Cyprus’ specific needs, and is structured in six domains, following the Copernicus services domains, i.e. Climate Change, Atmosphere, Land, Emergency, Security and Marine. Stakeholder engagement and requirements consolidation are essential to ensure that the project addresses the specific needs, gaps and challenges faced by public sector end-users. Initial engagements introduce the preliminary EO catalogue and identify gaps and stakeholders’ needs. To further tailor the catalogue, the CEOPS project gathers feedback through workshops, surveys and interviews, as well as through a global presentation of the EO catalogue. The global presentation provides a comprehensive overview of the preliminary EO catalogue and is designed to be user-friendly, ensuring that stakeholders from different backgrounds and domains can easily understand and navigate the information. Continuous stakeholder engagement ensures iterative refinement of the EO capabilities catalogue, while avoiding “user fatigue”. Finally, efforts focus on the shortlisting of potential prototype services. Stakeholder requirements are matched with EO capabilities, and high-priority solutions are identified. A gap analysis is performed to highlight the areas that need enhancement or customization. EO services are shortlisted based on their capacity to address challenges such as drought, marine biodiversity preservation, air quality and natural disaster preparedness, ensuring they are reliable, technically sound, and capable of being integrated seamlessly into existing workflows. The shortlisted services serve as the foundation for future activities, including demonstration pilots and pre-operational developments. Aligned with the Cyprus Space Strategy, the CEOPS project seeks to make EO data more accessible and user-friendly.
The project fosters synergies with ongoing European initiatives, such as Copernicus, DestinE, Horizon Europe and ESA’s FutureEO programme, and ensures that Cyprus’s EO capabilities follow the broader European space and environmental strategies. The CEOPS project not only enhances the use of space-based data in Cyprus, but also drives forward its digital transformation, positioning the public sector as a leader in the use of space-based data for innovation. The authors acknowledge the CEOPS (Cyprus Earth Observation for Public Sector) ESA PECS Project that has received funding under the European Space Agency (ESA) tender "Earth Observation capabilities and service needs in Cyprus (CY_AD03)" with contract no. 4000146291/24/NL/MH/fx. The authors also acknowledge the 'EXCELSIOR': ERATOSTHENES: EXcellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project (www.excelsior2020.eu). The 'EXCELSIOR' project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 857510, from the Government of the Republic of Cyprus through the Directorate General for the European Programmes, Coordination and Development, and the Cyprus University of Technology.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Using EO for Agricultural Resilience and to Support Communities in Afghanistan: The Char Dara Canal Desilting Project

Authors: Miss Alice Brindley, Miss Isabella Brittan, Mr Tim Buckley
Affiliations: Alcis Holdings
Afghanistan’s Kunduz Province, long vulnerable to drought, poor irrigation, and agricultural decline, represents a microcosm of the challenges facing communities in arid regions. In early 2024, USAID, through its Afghanistan Value Chains Program (AVCP), launched the Char Dara Canal Desilting Project in partnership with our team at Alcis to revitalise critical irrigation in Afghanistan. The project exemplifies how international collaboration and earth observation (EO) technologies can strengthen agricultural resilience and encourage community involvement in regions facing complex environmental and socio-economic challenges. Our team took a data-driven approach, using EO data to inform decision-making and maximise the project's impact. We used two satellite-based tools to understand the situation: the Normalised Difference Vegetation Index (NDVI) and the Vegetation Condition Index (VCI). NDVI helped us monitor how healthy crops were by analysing their greenness, while VCI measured how much drought had affected crops over time. Together, these tools highlighted areas suffering from drought, tracked crop health, and showed the difference in water availability between upstream and downstream parts of the Char Dara canal. This detailed analysis allowed the team to pinpoint a 6-kilometre stretch of the canal that needed urgent desilting. Years of neglect had caused a heavy build-up of silt, blocking water flow to downstream villages and severely affecting their agriculture. The desilting project employed 400 farmer-labourers in a cash-for-work scheme to remove 58,400 cubic metres of silt, equivalent to filling 23 Olympic-sized swimming pools, and restored eight sub-canals. By improving water flow across nearly 24,000 hectares of farmland, the project has transformed the livelihoods of over 10,000 people in 120 villages, with early indicators suggesting a significant increase in agricultural productivity.
Farmers who previously had to grow mainly wheat for basic income now hope to switch to more profitable crops like cotton, thanks to better access to irrigation. After the project was completed, satellite monitoring continued, using NDVI to check crop health and ensure the canal’s benefits last over time. This allows AVCP to track how well plants are growing from a distance and adjust management plans as needed to support Kunduz’s changing agricultural needs. Monitoring the Char Dara Canal remains an active and ongoing project, for which we are continually receiving fresh data and results. Our proposed presentation seeks to showcase the innovative application of earth observation and geospatial analysis in addressing agricultural sustainability and recovery. During our session, we will encourage conversations on how EO-driven solutions can be utilised globally, bridging environmental monitoring with socio-economic outcomes.
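The two indices named above follow standard definitions; the following is a minimal sketch (not the project's actual implementation), assuming reflectance arrays for the near-infrared and red bands and a per-pixel history of NDVI composites for the same time of year.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # Guard against division by zero over dark pixels
    return (nir - red) / np.maximum(nir + red, 1e-6)

def vci(ndvi_current, ndvi_history):
    """Vegetation Condition Index: where the current NDVI sits within the
    historical min-max range for the same pixel, scaled to 0-100.

    ndvi_history: (n_years, ...) stack of NDVI values; low VCI means the
    pixel is near its historical worst, i.e. drought-stressed.
    """
    ndvi_min = ndvi_history.min(axis=0)
    ndvi_max = ndvi_history.max(axis=0)
    return 100.0 * (ndvi_current - ndvi_min) / np.maximum(ndvi_max - ndvi_min, 1e-6)
```

Comparing VCI maps upstream and downstream of a canal, as described above, would reveal where restricted water flow is depressing crop condition.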
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Applications and Global Impacts of National Oceanic and Atmospheric Administration’s Joint Polar Satellite System

Authors: Dr. Satya Kalluri
Affiliations: NOAA/NESDIS/LEO
For over a decade, the Joint Polar Satellite System (JPSS) has been providing critical observations of our living planet’s atmosphere, biosphere, hydrosphere, and cryosphere. The multi-sensor satellite mission collects a variety of observations from the ultraviolet to the microwave region twice a day. These data are vital for Numerical Weather Prediction (NWP) models to provide accurate and timely forecasts of severe weather events, as demonstrated by several recent observing system experiments. The three satellites in the constellation, NOAA-21, NOAA-20 and the Suomi National Polar-orbiting Partnership (Suomi NPP), provide redundant and robust remote sensing measurements that are well calibrated and stable for monitoring Earth’s climate as well. JPSS is an important component of the global Low Earth Orbit (LEO) satellite network, and the mission data are used for several applications such as monitoring wildfires, droughts, floods, atmospheric ozone, aerosols, precipitation, ocean biology and sea ice, all of which impact life on Earth. JPSS satellites were critical in monitoring the evolution and far-reaching impacts of the 2023-2024 El Niño, including droughts, coral bleaching, heavy precipitation and floods over several parts of the globe. The long historical climate data record from NOAA’s LEO satellites is tracking the record-breaking warming events in 2024 that caused significant hardship to populations over several regions of the world. This presentation will provide a status of the current JPSS program as well as NOAA’s plans to ensure continuity of these critical measurements in the coming decades, beyond the current program of record. The impact of the JPSS program will be highlighted through examples of observations that contributed to the modeling, prediction and monitoring of recent significant environmental events with significant impact on life on Earth.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Uniting industry, policy and academia to launch climate satellite expertise – and the difference it makes - into the mainstream media orbit

Authors: Tara Thompson, Mrs Sally Stevens
Affiliations: Space4Climate
Overview – the challenge
Communication has the power to drive information from space into the hands of decision-makers across every sector of society, industry and government. If Earth Observation data does not reach those who can use it, what is the point? For effective take-up of climate data from space, and a meaningful return on the huge investment in gathering it, the many positive messages about its potential and the benefits it delivers across society must be communicated more widely, in terms that wider audiences understand and recognise as relevant and beneficial to their lives. This presentation will take the audience behind the scenes of a new initiative led by the National Centre for Earth Observation and Space4Climate, working closely with the UK Space Agency and European Space Agency, to propel the EarthCARE climate satellite mission story into the mainstream media in the UK. It is not about scientific and technical innovation – it is about what happens next. The aim of this initiative was to showcase academic-industry-policy collaboration at a combined Media Briefing. It was designed to tell one cohesive and captivating story through the voices of those involved from each sector and from many of the organisations involved. It traced the EarthCARE mission from its conception more than 30 years earlier, through its launch in May 2024, to the difference that its data will make, in terms that everyone can comprehend and relate to. At the same time we needed to demonstrate to the non-communications specialists that their science and technology could be communicated in a clear and accurate way, without dumbing down, and that there was an appetite for their story among the mainstream media and public.
We did this by bringing together the significantly different actors – academia, industry and policy – in person, in front of mainstream and specialist UK national and regional media in the first UK satellite launch Media Briefing held by ESA, striking several significant ‘firsts’ and setting a template for future climate satellite mission storytelling, including BIOMASS (launch currently scheduled for April 2025).
EarthCARE – telling a complex story simply
EarthCARE is the most complex European Space Agency climate satellite mission ever undertaken, with four different instruments and 73 collaborators around the world, including the UK, Europe and Japan. It was successfully launched and deployed on May 29th 2024 and, at the time of writing, has just completed its commissioning phase. It is the first time a single satellite will take four different measurements of the same column of atmosphere at one time. EarthCARE will tell us more about the role clouds and aerosols play in either reflecting incoming solar radiation back into space or trapping infrared radiation emitted from Earth’s surface – key triggers in the rate of climate change. EarthCARE will also improve climate projections and weather forecasts. The Media Briefing was hosted by the European Space Agency at ECSAT on the Harwell Space Campus – the first time ESA has held a pre-launch media briefing in the UK – and it has become a pilot for the series of upcoming UK-involved climate satellite launches. A friendly, informal approach, a good-sized model of EarthCARE and an open atmosphere were key. No dumbing down! One of the many remarkable achievements of the EarthCARE UK Media Briefing was that size really didn’t matter: from multinational corporation communications teams such as Airbus, to the micro comms teams at NCEO and Space4Climate, and every organisation in between, each component was given an equal voice.
The storytelling was comprehensive and in the first person – from Professor Anthony Illingworth, who conceived the mission in the 1990s, to the UK Space Agency, to the satellite designers – all put the complex science into lay-person’s terms without dumbing down, answering the ‘So what?’ question: the difference that EarthCARE will make in terms of everyday lives. The journalists ranged from experienced specialists from national broadcasters and newspapers to a writer for a leading scientific publication and regional newspaper and radio reporters. The widely differing needs of each type of media and each level of journalist – as well as the vastly differing needs of the academic, industry and agency actors behind EarthCARE – all had to be delicately balanced and met. And, of course, taking the time to gather a good understanding of the media’s own audiences and sharing that with the experts underpinned an effective communications event. This involved detailed preparations, diplomacy, tact, support, confidence building and some subtle training. The result was coverage across UK national newspapers and national TV news, as well as some regional and specialist media. In addition, EarthCARE’s launch coverage generated so much interest that there was also national coverage of each instrument’s first data being received – a useful step towards encouraging preparations to develop climate applications using its data when the time comes. The interest generated has also enabled us to widen the communications, engagement and outreach campaigns ahead of next year’s BIOMASS launch – another story of successful UK and European collaboration across science, industry and policy.
What audiences will gain
You won’t need to be a specialist in communications to learn from this presentation – there are clear learnings for everyone involved in every aspect along the Earth Observation supply chain.
We will share those and:
• how challenges in bringing together the diverse stories and media needs were overcome
• how balancing the disparate needs of industry, policy and academia was negotiated
• lessons learned that can benefit us all in communicating international scientific and technological successes.
We will evidence the immediate and longer-term impacts, what worked well and what lessons we have learned. There will be clear takeaways that every audience member can apply to their own role in facilitating or driving the use of climate data from space for climate action, development of actionable data platforms, developing climate services or making climate decisions to benefit our planet, our society and communities. If accepted for the Living Planet Symposium, we will draw on the academic, industry and policy contributors to the briefing to give all sides of our story, the pluses and minuses, and what the next steps are.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: F.04.13 - POSTER - Urban Resilience

Since the early 2000s, more than half of the world's population has lived in cities, and the trend of urbanization continues to grow: by 2050, 7 out of 10 people are expected to be living in cities. The speed and scale of urbanization lead to important development challenges and set clear needs for effective urban planning and development measures that enhance city resilience. The use of Earth Observations, and their integration with other sources of information into effective urban data analytics tools, can produce a quantum leap in tracking progress towards and achieving international development goals such as SDG 11, the Sendai Framework for Disaster Risk Reduction or the New Urban Agenda (Habitat III).
The advent of continuous data streams of high-quality, free-of-charge satellite observations, such as the Sentinels of the European Copernicus programme, in combination with the emergence of automated methods for large-scale data processing and image analysis, together with the falling cost of computing, offers unprecedented opportunities to efficiently monitor changes and trends in urban development globally. In addition, the synergistic use of EO data from different satellite sensors (radar/optical, HR/VHR, SAR/InSAR, TIR, hyperspectral, LiDAR), combined with ancillary datasets such as ground-based and airborne data, drone data, and citizen science data, opens new pathways to extract an unprecedented range of urban information. Urban remote sensing is therefore progressively evolving from traditional urban extent and land cover mapping into advanced urban applications, connecting to the monitoring of urban-related environmental parameters (impervious surfaces, green and blue infrastructures, urban welfare, air pollutants). Moreover, municipalities and city practitioners are showing growing interest in using these applications as decision support tools, leading to a stronger demand for interactive tools that deliver EO-integrated solutions as actionable information.
The series of LPS 2025 urban sessions will present the recent scientific advances in the application of remote sensing in urban applications, discuss opportunities and challenges which lie ahead for mainstreaming EO solutions into urban development practices and policies, and highlight future paths of research.
Topics of interest for the urban sessions include (but are not limited to):
• Multi-sensor, multi-scale and multi-temporal approaches to urban mapping
• Remote sensing methods for characterising urban areas (multispectral, hyperspectral, SAR/InSAR, TIR, LiDAR)
• Detailed LULC classification and change detection
• Cost-effective use of commercial data
• Downscaling (e.g., super-resolution)
• AI for urban
• 3D/4D mapping
• Night-lights applications
• UAVs/drones, aerial platforms
• Capacity building, education, citizen science, crowdsource data and tools for urban applications
• EO integration in urban social science and policy
• Urban planning and modelling of urban growth
• Health, well-being and liveability
• Urban ecology
• Nature-based solutions
• Urban energy infrastructure and renewables (demand, access, smart grids)
• Urban climate (Urban Heat Islands, pollution/air quality)
• Urban green and blue infrastructures
• Transport, Infrastructure and Sustainable Mobility
• Natural hazards, risk reduction and urban resilience
• Informal settlements
• Population distribution
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Hyperspectral characterization of local climate zones (LCZ) in small-scale urban and rural landscapes using PRISMA imagery

Authors: Giandomenico De Luca, Federico Carotenuto, Tommaso Giordano, Beniamino Gioli, Franco Miglietta, Lorenzo Genesio
Affiliations: Institute of Bioeconomy (IBE) - National Research Council of Italy (CNR)
Urban growth poses significant environmental challenges, including surface impermeabilization, alteration of local temperatures, and pressures on surrounding agroecosystems and natural environments. Effective urban governance and planning are crucial for developing strategies for climate change adaptation and mitigation, requiring a deep understanding of urbanization's impact on ecosystems. Although the application of urban climatology in urban and landscape planning remains limited, due to the lack of large-scale dense sensor networks and the expertise required for their modeling, remote sensing and geographical information systems (GIS) offer innovative methods to address this gap. One promising approach is the use of local climate zones (LCZs), which provide reliable tools for mapping urban temperature variations. Integrated with remote sensing, the LCZ classification system standardizes the description of urban climate and micro-climate patterns and supports heat flux quantification, providing valuable inputs for climate, air quality and energy-use modelling, as well as for landscape change analysis. Nevertheless, such studies typically focus on medium to large urban areas, whereas small-scale urban contexts, such as those common across the Italian territory, remain understudied. These smaller urban areas are characterized by a mixture of historical city centers and peri-urban zones with residential and industrial areas, surrounded by rural land intertwined with agroecosystems and natural ecosystems. In this context, detailed spatial and spectral information is particularly valuable for capturing the intricate interactions between urbanization and the surrounding environment. In this study, various datasets have been used to develop LCZs for small-scale urban and rural landscapes.
In particular, it focuses on the municipality of Prato (Tuscany, Italy), characterized by a landscape texture featuring clusters of historic and newly built dense urban areas, peri-urban residential and industrial zones, agricultural fields, and forested regions. Firstly, airborne datasets and ancillary in-situ meteorological measurements were employed to retrieve a detailed LCZs map of the study area. In this sense, LiDAR-based digital surface model (DSM) data was employed to obtain urban morphology information, including building height and density. A high-resolution land cover and land use (LCLU) classified map, derived from airborne hyperspectral data, was also used as supplementary information in LCZs mapping. Subsequently, a statistical correlation between the PRISMA imagery data and the high-resolution LCZ maps was performed for an initial characterization of the hyperspectral satellite data. PRISMA data was then used to classify LCZs through machine learning algorithms. Two configurations were tested: the first used the original pixel resolution (30 m) of the PRISMA data, while the second refined the spatial resolution of the PRISMA data through image fusion with Sentinel-2 multispectral data at 10 m resolution. While part of the airborne-based LCZ data was used to train the machine learning classification algorithms, another independent portion was used for validation. Overall and per-class F-score metrics were computed to assess the accuracy, highlighting the high potential of PRISMA data for LCZ characterization in small-scale urban and rural landscapes, particularly when the dense spectral information is coupled with the enhanced spatial resolution obtained from image fusion with Sentinel-2 data.
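The abstract does not specify which machine-learning algorithms were applied to the PRISMA spectra; as a deliberately simple stand-in for the pixel-wise supervised classification step (not the study's actual method), a nearest-centroid classifier over hyperspectral pixel spectra can be sketched as follows. The array shapes and function names are assumptions for illustration.

```python
import numpy as np

def train_centroids(X, y):
    """Compute per-class mean spectra from labelled training pixels.

    X: (n_pixels, n_bands) reflectance spectra; y: (n_pixels,) LCZ labels.
    Returns the class labels and their (n_classes, n_bands) centroids.
    """
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def classify(X, classes, centroids):
    """Assign each pixel spectrum to the LCZ class with the nearest centroid."""
    # Euclidean distance from every pixel to every class centroid
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

In the workflow described above, training pixels would come from the airborne-derived LCZ map, with the independent held-out portion used to compute the per-class F-scores.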

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Urbanization Pressure and Temporal Dynamics of Green Spaces in Cities Using High Spatial and Temporal Resolution PlanetScope Satellite Imagery

Authors: Dr. Achilleas Psomas, Dominique Weber, Giulia Casciano, Dr. Natalia Kolecka, Dr. Bronwyn Price, Dr Janine Bolliger, Christian Ginzler
Affiliations: Swiss Federal Research Institute WSL
Urban green spaces play a critical role in enhancing the ecological health, social well-being and overall livability of cities. They provide multiple benefits, ranging from reducing the urban heat island effect to improving air quality, promoting biodiversity and offering recreational opportunities for city residents. Despite their vital importance, urban green spaces are increasingly threatened by rapid urbanization, spatial expansion and shifts in human behavior. Understanding the complex dynamics of these green areas under urban pressure requires detailed spatio-temporal monitoring to capture subtle yet significant trends and changes. This study leverages high-resolution (3 m), high-frequency (daily) PlanetScope satellite imagery from 2019 to 2024 to analyze temporal changes in urban green spaces across five major Swiss cities: Zurich, Geneva, Basel, Bern and Lugano. Our objectives are threefold: (1) Quantify temporal changes in urban green space extent and assess trends related to vegetation health (NDVI) over a five-year period. This includes determining which cities experience the most significant changes and evaluating potential reasons behind these trends, with a focus on detecting removal or degradation of urban green. (2) Examine changes in different urban zones, including city centers, suburban and peri-urban areas. (3) Analyze vegetation dynamics through intra- and interannual variations of their spectral signals. As a side objective, we plan to use PlanetScope data to evaluate whether shifts in human behavior (such as increased park visits) during the COVID-19 period are detectable from space. Specifically, we aim to compare PlanetScope satellite imagery of urban parks collected during the pre-COVID, COVID and post-COVID periods to assess whether the increased visitation and additional pressure on green spaces during the 2021 and 2022 pandemic years left detectable traces.
To achieve our objectives, we will use high-resolution satellite imagery from the 4-band and 8-band SuperDove sensors of the PlanetScope constellation, collected from 2019 to 2024. Unlike coarser or less frequent sensors, the high spatial and temporal resolution of PlanetScope imagery allows us to capture subtle, rapid changes that would otherwise go undetected. We will generate daily, weekly and 16-day composites to assess urban green space dynamics. The primary analysis will involve calculating vegetation indices (NDVI, EVI and others) to assess changes in green space extent, health and vigor. We will combine these indices with machine learning algorithms for accurate urban green classification and change detection. Cities will be segmented into different urban zones, such as city centers and suburban and peri-urban areas, to systematically examine urban green changes and trends over the study period. Publicly available mobility data, such as the Google Mobility Reports and Apple's COVID-19 Mobility Trends Reports, will be used to assess the temporal variation in the pressure put on urban parks due to increased visitation. This analysis will help determine if the observed increase in park visitation can be detected through satellite-derived changes in the extent and condition of urban green spaces before and after the pandemic. Our expected findings will provide valuable insights to urban planners, scientists and policymakers, offering a data-driven foundation to support urban resilience efforts towards urban heat mitigation, environmental health and climate adaptation. By identifying specific areas where green spaces are under pressure, these data can support targeted interventions such as strategic planting initiatives, protective zoning policies and green infrastructure development. Furthermore, our analysis will demonstrate the utility of frequent, high-resolution PlanetScope satellite imagery in monitoring urban vegetation changes over time.
Overall, our study aims to help Swiss stakeholders respond proactively to ongoing urbanization challenges, ensuring large cities remain livable, sustainable and resilient in the long term.
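The index-and-composite step described above (NDVI from red/NIR bands, aggregated into 16-day windows) can be sketched as follows. This is a minimal numpy illustration assuming surface-reflectance inputs; the array shapes and names are invented for the example.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), with a small guard against /0."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def median_composite(daily_stack, period=16):
    """Collapse a (days, h, w) NDVI stack into `period`-day median composites."""
    n = daily_stack.shape[0] // period * period          # drop trailing days
    windows = daily_stack[:n].reshape(-1, period, *daily_stack.shape[1:])
    return np.nanmedian(windows, axis=1)

# Toy stack: 32 daily 4x4 NDVI scenes -> two 16-day composites
rng = np.random.default_rng(0)
stack = rng.uniform(0.2, 0.8, size=(32, 4, 4))
composites = median_composite(stack, period=16)
```

A median (rather than mean) composite is a common choice for daily constellations because it suppresses residual cloud and haze outliers without explicit masking.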

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Evaluation of the effects of drought and urbanization on urban trees using Sentinel-2 time series

Authors: Théo Le Saint, Dr Jean Nabucet, Dr Karine Adeline, Pr Laurence
Affiliations: LETG
Urban trees provide essential ecosystem services, including temperature regulation, carbon storage, and biodiversity conservation, which are crucial for enhancing urban living conditions. However, they are exposed to various stress factors in urban environments, such as limited light, restricted growth space, and increased exposure to pollutants. Moreover, climate forecasts predict an increase in the frequency and severity of droughts, which can lead to tree defoliation and increased sensitivity to pests and pathogens, and therefore increase tree mortality. Quantifying and monitoring the response of urban trees to stress factors can be used as a proxy for tree health assessment. The effects of stress factors (drought, urban conditions) on urban tree dynamics have already been observed in situ with dendrochronology and biochemical analysis. These effects have been shown on a large scale using remote sensing data, but the coarse spatial resolution of the data used (MODIS, 250 m) makes it impossible to discriminate species that suffer from stress from those that are more resilient. We first investigated tree dynamics in Rennes (France) at a fine spatial scale using Sentinel-2 time series to analyze the impact of drought events and urbanization on tree health. Specifically, we analyzed five deciduous tree species (Platanus acerifolia, Acer platanoides, Fraxinus excelsior, Quercus rubra and Quercus robur) during two years characterized by very contrasting climatic conditions (2021 and 2022) and along an urban-rural gradient. Vegetation dynamics were monitored using a vegetation index (ARVI). Phenological, productivity and disturbance metrics, which characterize respectively key dates and growth-cycle duration, tree growth and productivity, and intra-annual anomalies in tree dynamics, were derived from the filtered ARVI time series.
Drought events were characterized using a standardized precipitation index derived from climate data, and urban intensity was determined at pixel scale using the Copernicus Imperviousness Density data. Temporal metric values were aggregated at patch scale (groups of contiguous trees of the same species). The results showed a notable extension in the length of the growing season and maturity periods for four species in 2022 (the driest year), driven primarily by delayed senescence and end-of-season dates. Productivity metrics displayed mixed responses, with some species showing reduced growth under drought, while others, like Platanus acerifolia, exhibited increased productivity, suggesting potential resilience mechanisms. Elevated disturbance levels were observed in 2022, indicating higher stress conditions compared to the previous year. Furthermore, the results outlined that urban intensity was most often correlated with extended growing seasons and altered productivity dynamics. Based on these results, we then developed an exploratory approach at the city scale to identify trees with unusual temporal profiles that correspond to exacerbated tree responses to stress factors. We determined a reference profile for each tree species, corresponding to the average profile of trees growing in similar and contrasting urban conditions, and compared tree profiles to this reference profile based on statistical analysis. This analysis was conducted for the period 2016-2024 to monitor tree disturbance levels. This tool can be used by urban tree managers to locate, guide and schedule in situ investigations to monitor tree health and potentially identify trees in decline. This study underscores the value of Sentinel-2 time series in supporting urban tree management and policy decisions amidst changing environmental and climate conditions.
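One simple way to compare a tree patch's index profile against a species reference profile, as described above, is a per-date z-score summarized over the season. This is an illustrative statistic with synthetic data, not necessarily the authors' exact statistical analysis; all names and values are invented.

```python
import numpy as np

def profile_anomaly(patch_profile, reference_profiles):
    """Mean absolute z-score of one patch's index time series against the
    per-date mean and std of its species' reference profiles (sketch)."""
    ref = np.asarray(reference_profiles, dtype=float)   # (n_patches, n_dates)
    mu, sigma = ref.mean(axis=0), ref.std(axis=0)
    z = (np.asarray(patch_profile, dtype=float) - mu) / np.maximum(sigma, 1e-9)
    return float(np.mean(np.abs(z)))

# Toy ARVI-like profiles: 20 reference patches observed on 12 dates
rng = np.random.default_rng(1)
reference = 0.5 + 0.05 * rng.standard_normal((20, 12))
typical = reference.mean(axis=0)     # a patch that matches its species
stressed = typical - 0.2             # a strong, season-long negative anomaly
```

Patches whose anomaly score exceeds a chosen cutoff could then be flagged for in situ inspection, which matches the triage use case the abstract describes.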

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Towards an Annual Update of Urban Change Detection

Authors: Eve Poitevin, Yoann Courmont, Nicolas Granier, Carlos Dewasseige
Affiliations: Collecte Localisation Satellites
Urban areas are changing quickly, and every decision about how we use the land in urban spaces can either improve people's lives and reduce our impact on the environment or have precisely the opposite effect. In the framework of the Copernicus Land Monitoring Service, the European Urban Atlas is updated every three years. However, users require an annual update of urban changes in land cover/land use. This is one of the goals of the Horizon Europe project EVOLAND: to provide an automated solution capable of delivering annual updates of urban dynamics. To address this challenge, we propose an automated workflow designed to detect and characterize urban changes between two consecutive years using Sentinel-2 monthly mosaic (Level 3A) imagery (O. Hagolle et al., 2018). The proposed approach calculates differences between two annual images of the same region and employs Principal Component Analysis (PCA) (Deng, J. S. et al., 2008) to generate change maps. PCA isolates the most significant components of the difference image, and a threshold is applied to identify pixels representing changes. However, the detection process is influenced by seasonal variations and land cover dynamics (e.g., vegetation, snow, water), which may lead to false positives. The results were refined to focus solely on urban-related changes through the implementation of an exclusion mask incorporating water, forest, cloud, snow, and agricultural layers. This filtering process ensures the removal of non-urban variations and increases the relevance of detected changes. Following detection, Random Forest classification models trained on Corine Land Cover+ (CLMS) vector data are used to characterize changes. These models classify land cover categories for the detected polygons for each year, allowing characterization of multi-year changes through the majority class of each year.
Improving confidence in detected changes is achieved by incorporating a confidence index based on the occurrence of changes across multiple months. This index minimizes uncertainties caused by short-term, non-urban changes such as vegetation cycles or snow cover. Validation was performed using blind visual interpretation of Very High Resolution (VHR) and High Resolution (HR) reference datasets. The average accuracy of the automated change polygon detection was approximately 70%, which is encouraging for an automated approach. However, it underscores the importance of human supervision at this stage to ensure the reliability and precision of the results.
Keywords: Urban Dynamics, LULC, Sentinel-2, PCA, Random Forest, Change Detection, Annual Updates, Environmental Monitoring
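A difference-image PCA change detection of the kind described above can be sketched as follows. This is a minimal numpy illustration, not the EVOLAND implementation: the k-sigma threshold, the use of only the leading component, and the toy mosaics are all assumptions made for the example.

```python
import numpy as np

def pca_change_mask(mosaic_y1, mosaic_y2, k=2.0):
    """Difference-image PCA change detection (illustrative sketch).
    mosaic_*: (bands, h, w) annual mosaics. Flags pixels whose projection on
    the leading principal component of the difference image exceeds k sigma."""
    diff = np.asarray(mosaic_y2, dtype=float) - np.asarray(mosaic_y1, dtype=float)
    bands, h, w = diff.shape
    X = diff.reshape(bands, -1).T                  # pixels x bands
    X = X - X.mean(axis=0)                         # center before PCA
    _, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    pc1 = X @ vecs[:, -1]                          # leading component scores
    return (np.abs(pc1) > k * pc1.std()).reshape(h, w)

# Toy example: identical 3-band mosaics except a 2x2 "new building" block
y1 = np.zeros((3, 10, 10))
y2 = y1.copy()
y2[:, :2, :2] = 5.0
mask = pca_change_mask(y1, y2)
```

In practice this raw mask would then be intersected with the exclusion layers (water, forest, cloud, snow, agriculture) the abstract mentions before polygonization.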

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: An atlas of land cover, land use and ground motion information in Europe.

Authors: Lorenzo Solari, Ana Maria Ribeiro de Sousa
Affiliations: EEA-CLMS
Urbanization in Europe reflects a complex interplay between growth, sustainability, and adaptation to global challenges. According to Eurostat, almost 80% of European citizens live in cities, towns and suburbs, the latter expanding rapidly and pressuring the natural environment surrounding urban areas. The Urban Agenda for the EU defines three pillars of EU policymaking and implementation: (i) better regulation, (ii) better funding and (iii) better knowledge. Specifically, ‘knowledge’ emphasises the need for evidence-based decision-making, shared learning, and improved access to knowledge for urban stakeholders. In this context, the Copernicus Land Monitoring Service delivers to European users thematic layers and products that can serve as the knowledge base for the requirements of the Urban Agenda or similar EU actions. Urban Atlas (UA) provides high-resolution land use and land cover data for approximately 780 functional urban areas (cities above 50,000 inhabitants) in Europe. It delivers change and status layers based on very high resolution and high resolution (Sentinel-2) images, Building Block Height information and an updated Green Urban Areas layer that will rely on OpenStreetMap, TomTom and Strava information to extract the public and private character of these areas. The European Ground Motion Service (EGMS) provides high-precision data on ground movement and its temporal evolution across Europe. It is exclusively based on Sentinel-1 satellite interferometry, and it delivers three data products, two of which are calibrated with the aid of Global Navigation Satellite System information. Each product can, on its own, generate value-added information in urban areas: UA offers solutions for dynamically monitoring urban growth and green public areas, whereas EGMS gives solid information for urban risk assessment. However, data synergies are even more powerful.
For example, EGMS highlights the points of critical vulnerability of a city, and UA provides a high-resolution map of the natural and anthropic elements affected by ground motion; this information can be converted into exposure and, potentially, risk quantification. This is even more relevant in the case of vulnerable communities, e.g. flood-prone settlements. This presentation will show that the combination of UA and EGMS data can support the development and improvement of urban planning activities. By providing granular insights into land use changes, infrastructure (in)stability, and environmental factors, these datasets may enable policymakers to make informed decisions that align with global sustainability goals. This study highlights the transformative potential of integrating Earth Observation data into urban development practices and sets the stage for innovative research and applications in the urban domain.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Creating Cooler Cities: Modeling Present and Future Microclimate Impacts of Laser-Scanned Vegetation in European Cities

Authors: Jonathan Simon Terschanski, Research Professor Juha Aalto, Assistant Professor Sanna Ala-Mantila, Associate Professor Eduardo Maeda
Affiliations: University of Helsinki, Finnish Meteorological Institute, HELSUS - Helsinki Institute of Sustainability Science
Physical and topographic characteristics of urban structures cause cities to exhibit higher temperatures than natural environments. This phenomenon is known as the urban heat island. Most urban heat mitigation strategies emphasize the importance of green areas, because the physical properties of vegetation cause it to counteract urban heat islands. Besides their significance for human thermal comfort, urban parks, forests, and even individual street trees function as habitats, preserving urban biodiversity and providing ecosystem services to plants, animals, and city dwellers. Both perspectives depend on one immensely important property of vegetation: its ability to reduce the temperature range. This so-called temperature buffering effect is caused by shading, evapotranspiration and, sometimes, nightly heat retention, and can be characterized by substantial divergence (decoupling) from ambient climatic conditions. Decoupled microclimates shape the thermal environments of many organisms and stabilize biodiversity, albeit locally, by allowing species to withstand substantial changes in the surrounding ambient climate. Thus, effective urban biodiversity conservation necessitates a holistic understanding of urban microclimates. Climate challenges increasingly encountered by major urban areas and exacerbated by the urban environment create a certain urgency to increase our understanding of temperature buffering within urban green spaces, both from a human thermal comfort and a biodiversity conservation perspective. Thus, we want to find out:
(1) what the most important structural characteristics of vegetation regulating microclimate in urban green areas are;
(2) how effective such spaces are in maintaining stable microclimate conditions during extreme weather events;
(3) and how changes in global temperatures by the end of the century will affect the temperature buffering capabilities of urban green areas in Europe.
In the past, satellite-derived temperature estimates have greatly contributed to urban thermal studies, but, unlike field measurements, they cannot capture climate conditions below the canopy. However, our study utilizes remote sensing technology to measure vegetation structure directly, a method brought forward by the recent advent of light detection and ranging (Lidar or laser scanning) technology in forest ecology. In our multi-scale study design, we combine meso-scale airborne laser scanning data, used to estimate structural vegetation characteristics such as canopy cover, depth, and volume, with micro-scale field measurements of microclimate gathered by a network of urban temperature and moisture sensors in Helsinki and other European cities. We settled on this approach to clarify the role 3D vegetation structure plays in urban temperature buffering and expect to produce estimates of the cooling impacts of certain structural features across a continental gradient. Moreover, we will map microclimates across different cities, predict future changes in urban microclimates, and provide evidence-based practical recommendations for urban decision-makers. For the latter purpose, we closely collaborate with the City of Helsinki and the Finnish Meteorological Institute. Our presentation will showcase the importance of microclimate studies for increasing urban resilience, contributions of remote sensing technology to urban microclimate studies, our study design, and our experiences from conducting a large-scale microclimate study in a European metropolis. We will also reveal our preliminary results on the role vegetation structure played in the attenuation of extreme temperatures in Helsinki during summer 2025, based on data from our Helsinki Microclimate Observatory (Helmo).
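The temperature buffering (decoupling) effect described above is often quantified as the slope of below-canopy temperature regressed against ambient temperature, with slopes well below 1 indicating a buffered microclimate. The sketch below is one illustrative choice of metric with synthetic readings, not necessarily the authors' analysis.

```python
import numpy as np

def buffering_slope(t_ambient, t_canopy):
    """Least-squares slope of below-canopy vs ambient temperature.
    A slope of 1 means the microclimate tracks the ambient climate;
    values well below 1 indicate decoupling (buffering)."""
    slope, _ = np.polyfit(np.asarray(t_ambient, dtype=float),
                          np.asarray(t_canopy, dtype=float), 1)
    return float(slope)

# Toy paired series: a canopy site that tracks only half of the
# ambient variation (hypothetical sensor readings in degrees C)
ambient = np.array([12.0, 16.0, 20.0, 24.0, 28.0, 32.0])
canopy = 0.5 * ambient + 6.0
```

In a study like this one, such slopes could then be modelled against lidar-derived structure metrics (canopy cover, depth, volume) to identify which features drive the buffering.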

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Why Directional LST Data Might Fall Short for Urban Climate Adaptation Monitoring and Policy

Authors: Benjamin Bechtel, T. C. Chakraborty, Simone Kotthaus, Scott Krayenhoff, Alberto Martilli, Negin Nazarian, Panagiotis Sismanidis, James Voogt
Affiliations: Ruhr University Bochum, Atmospheric, Climate, and Earth Sciences Division, Pacific Northwest National Lab, École Polytechnique, Institut Pierre Simon Laplace (IPSL), School of Environmental Sciences, University of Guelph, Atmospheric Modelling Group, CIEMAT, School of Built Environment, University of New South Wales, Department of Geography and Environment, Western University
Adapting cities to climate change is a key challenge for humanity. Within this context, urban overheating has recently become a high priority for cities worldwide, which are about to invest billions of euros in measures to reduce this problem. To design sustainable urban planning strategies, detailed knowledge of the heterogeneous urban environment is urgently required. Remote sensing is, in principle, a powerful means to acquire spatially explicit data for any city and thus to inform these actions and monitor their impact. However, the strong urge to use existing datasets for timely urban heat exposure monitoring and climate adaptation information must be tempered by careful consideration of the physical limitations of remotely sensed LST, so as not to produce incomplete, wrong, or misleading indicators. Most importantly, remotely sensed LST provides a biased representation of the urban surface temperature and does not directly provide near-surface air temperature. These pitfalls are mainly linked to the complex nature of the LST-to-air-temperature coupling, which results in temporal and spatial mismatches between urban surface and air temperatures. Moreover, the urban LST signal requires very careful processing and interpretation due to the strong thermal anisotropy of heterogeneous 3D urban landscapes, emissivity assumptions, and a geometrical bias towards horizontal surfaces, among other effects. Readily available LST maps are therefore rather detached from the heat exposure at street level and should hence not be used directly to support urban planning without careful interpretation and detailed knowledge.
In this contribution we highlight the most relevant misconceptions and pitfalls, with the aims of, first, avoiding large investments based on the wrong metrics, and second, starting a conversation between urban climate science and urban remote sensing to develop pathways towards more suitable parameters and methods for monitoring urban climate more appropriately.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Advancing Renewable Energy Solutions for Cities

Authors: Dr. Annekatrin Metz-Marconcini, Dr.-Ing. Mattia Marconcini, Julian Zeidler
Affiliations: German Aerospace Center - DLR
Global urbanization has reached a critical turning point, with over half of the world’s population living in urban areas since 2008—a significant milestone in human history. This trend is expected to accelerate, with urban populations projected to rise to 68% of the global total by 2050. This rapid urban expansion presents profound challenges, including heightened resource demands, overburdened infrastructure, and increased air pollution. Urban transportation, as a major source of emissions, significantly impacts public health, contributing to respiratory illnesses and other health conditions. Additionally, cities account for 71–76% of global carbon dioxide (CO2) emissions and 67–76% of total energy consumption, underscoring their pivotal role in addressing climate change. The urban heat island (UHI) effect further exacerbates these challenges, causing localized temperature increases due to reduced vegetation, impervious surfaces, and heat-retaining materials. This phenomenon intensifies heat stress and related health impacts, driven by reduced evapotranspiration, increased heat storage, and anthropogenic heat emissions. Compounding this, urban sprawl often extends into disaster-prone areas, increasing the vulnerability of communities to floods, earthquakes, and other natural hazards, with marginalized populations disproportionately affected. Sustainable urban planning is crucial to addressing these challenges and creating livable, resilient cities that cater to the needs of current and future generations. In Europe, where urban areas occupy just 4% of the land but house 75% of EU citizens, cities contribute over 70% of CO2 emissions. As a result, they play a central role in meeting the European Green Deal’s climate neutrality goal by 2050. Here, key targets include increasing the share of renewable energy, with the Renewable Energy Directive mandating 32% by 2030—a target expected to rise in upcoming policy revisions. 
Solar energy is a cornerstone of this transition, with European cities actively promoting rooftop photovoltaic (PV) installations to generate clean energy and reduce carbon footprints. Streamlined approval processes and financial incentives have accelerated adoption, making solar rooftops a prominent feature of urban sustainability initiatives. By harnessing solar power, cities are not only reducing emissions but also setting a path for sustainable urban growth aligned with climate action goals. In this framework, the German Aerospace Center (DLR) has led efforts to support this transition through two landmark projects, namely ESA's GTIF Demonstrator for Austria (GTIF-AT) and EO Solar for Germany. Specifically, these initiatives provide urban planners and policymakers with comprehensive, spatially detailed data to systematically assess solar energy potential. Unlike existing solar cadasters, which are often region-specific and vary in data collection methods and detail, this approach offers a uniform, up-to-date representation of entire countries. This consistency makes it an invaluable resource for developing effective strategies to expand solar energy deployment and support broader energy transition goals. Leveraging high-resolution digital orthophotos (DOP20, 20 cm spatial resolution), digital surface models (DOM1, 1 m spatial resolution), and building footprints (sourced from OpenStreetMap in Austria and the Federal Agency for Cartography and Geodesy (BKG) in Germany), these projects analyze current solar utilization and identify the solar potential of buildings. In Austria, artificial intelligence (AI) techniques were implemented to detect rooftops with existing photovoltaic (PV) installations, while in Germany, the official core energy data register (MaStR) provided up-to-date insights into solar infrastructure.
A specialized model calculates the potential for rooftop PV installations by estimating electrical power based on factors such as peak sunshine hours, roof inclination, orientation, and shading from nearby trees or buildings. Additionally, the methodology can extend to open spaces, enabling the calculation of solar potential beyond rooftops. This flexibility makes the approach transferable to other countries, offering scalable solutions for promoting solar energy projects globally. Beyond solar energy, GTIF-AT also evaluates the potential of green roofs to mitigate the urban heat island effect and reduce energy consumption. Green roofs provide cooling benefits, conserve energy, and enhance climate resilience, aligning with sustainable development goals and advancing urban sustainability efforts. The data and insights generated by these projects significantly contribute to evaluating and promoting building-integrated solar installations. By identifying opportunities and encouraging targeted subsidies and strategic investments in renewable energy, these initiatives support greenhouse gas reduction targets and accelerate the energy transition. Projects like EO Solar and GTIF-AT empower cities to lead the way toward a resource-efficient, carbon-neutral future by mapping current capabilities and uncovering pathways for sustainable urban development.
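The kind of first-order estimate such a model produces can be illustrated as follows. All factor names and values below are placeholders chosen for the example; this is not the DLR model, which additionally accounts for per-roof geometry and shading from digital surface models.

```python
def rooftop_pv_yield_kwh(area_m2, peak_sun_hours_per_day,
                         panel_efficiency=0.20,
                         tilt_orientation_factor=0.9,
                         shading_factor=0.95,
                         performance_ratio=0.8):
    """First-order annual rooftop PV yield estimate (illustrative only).
    Installed capacity (kWp) follows from roof area times module efficiency
    at 1 kW/m2 STC irradiance; multiplying by equivalent peak sun hours and
    loss factors for tilt/orientation, shading and system losses gives an
    annual energy figure in kWh."""
    kwp = area_m2 * 1.0 * panel_efficiency           # installed capacity, kWp
    daily_kwh = kwp * peak_sun_hours_per_day         # kWh/day before losses
    return (daily_kwh * 365
            * tilt_orientation_factor * shading_factor * performance_ratio)

# Hypothetical roof: 50 m2 of usable area at 3.0 equivalent peak sun hours/day
annual_kwh = rooftop_pv_yield_kwh(50, 3.0)
```

The listed factors mirror the inputs the abstract names (peak sunshine hours, roof inclination, orientation, shading), which is why a per-building estimate of this form can be aggregated into city- or country-wide potential maps.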

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: A new shadow compensation approach for advanced retrievals of urban environmental information

Authors: Lars Groeneveld, Prof. Dr. Alexander Damm
Affiliations: Remote Sensing Laboratories, Department of Geography, University of Zurich, Eawag, Swiss Federal Institute of Aquatic Science and Technology
Climate change is expected to increase the frequency, intensity and duration of heat waves, while urban areas are particularly vulnerable due to the amplifying effect of urban heat islands. One proposed solution to mitigate heat stress is enhancing green infrastructure in cities. These nature-based solutions aim to promote cooling through increased evapotranspiration. However, understanding the complex responses of urban vegetation in the related carbon, energy and water cycles remains a challenge. Additionally, the potential of these nature-based solutions to mitigate urban heat stress in a changing climate is unclear. Remote sensing data facilitate monitoring of vegetation state and environmental conditions over large areas. However, studying highly heterogeneous urban environments via remote sensing faces the major challenge of cast shadows and geometrical-optical scattering. Dedicated analytical approaches are needed to account for cast and gradual shadow and to ensure accurate information retrieval from remote sensing measurements. Common analytical approaches typically assume that the observed surface is flat and fully illuminated. In reality, obstructing objects may cast shadows on the targeted surface or limit its sky view. In addition, tilted surfaces are differently exposed to the sun and to the hemisphere. In all these cases, the contribution of direct and diffuse irradiance varies, creating wavelength-dependent differences between effective and assumed irradiance. The wavelength dependency in particular leads to an incorrect conversion of radiance measurements to reflectance, which then propagates errors into derived data products. The often-used normalized difference vegetation index (NDVI) can decrease by up to 25% in shaded compared to fully illuminated pixels, causing underestimation in vegetation health assessments or in modelled evapotranspiration products. In our contribution, we present a physically based shadow compensation approach.
We develop the approach using spatially high-resolution airborne imaging spectroscopy data in two well-characterized urban test sites (i.e. the Swiss cities Basel and Zürich) and evaluate its operational application using Sentinel-2 satellite data. We expect that derived insights will allow us to advance retrievals of environmental information, extending beyond urban environments. In addressing a major challenge in remote sensing of complex 3D surfaces such as urban environments, this work ultimately aims to enhance our understanding of urban vegetation in the context of climate change.
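The error mechanism described above, a wavelength-dependent mismatch between the assumed (full-sun) and effective (diffuse-only) irradiance in shaded pixels, can be illustrated numerically. All irradiance and reflectance values below are hypothetical and chosen only to show the direction of the bias; none come from the study.

```python
import numpy as np

def reflectance(radiance, irradiance):
    """Radiance-to-reflectance conversion under a Lambertian assumption."""
    return np.pi * np.asarray(radiance, float) / np.asarray(irradiance, float)

# Hypothetical values for one vegetated pixel at (red, NIR) wavelengths.
# In shadow only diffuse skylight remains, and its share differs between
# red and NIR, so assuming full-sun irradiance is wrong in a
# wavelength-dependent way.
E_full = np.array([1.5, 1.0])       # assumed full-sun irradiance (red, NIR)
E_shadow = np.array([0.30, 0.12])   # effective diffuse-only irradiance
rho_true = np.array([0.05, 0.50])   # true surface reflectance (red, NIR)

L_shadow = rho_true * E_shadow / np.pi      # radiance actually measured
rho_wrong = reflectance(L_shadow, E_full)   # retrieval ignoring the shadow

ndvi = lambda r: (r[1] - r[0]) / (r[1] + r[0])
ndvi_true = float(ndvi(rho_true))
ndvi_shadow = float(ndvi(rho_wrong))
```

With these example numbers the shaded retrieval depresses NDVI relative to its true value, which is the direction of bias the abstract quantifies at up to 25%; a physically based compensation would substitute the effective irradiance per wavelength before the conversion.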

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: The Action Fund 2.0 Data2Resilience Project: Data-driven Urban Climate Adaption for Dortmund

Authors: Panagiotis Sismanidis, Benjamin Bechtel, Charlotte Hüser, Luise Weickhmann, Jonas Kittner
Affiliations: Ruhr University Bochum
Extreme heat endangers human health and well-being and impairs the use of public spaces. Dortmund's Integrated Climate Adaptation Master Plan prioritizes actions and measures to improve heat resilience. This project supports the city of Dortmund (Germany) in attaining this goal by deploying a state-of-the-art biometeorological sensor network and developing a nowcasting service for monitoring thermal comfort across the city. The project aims to pioneer the integration of thermal comfort data in smart-city ecosystems and provide actionable insights for the development of Dortmund's Heat Action Plan. Modeled, remotely sensed, and in-situ data are used to provide near-real-time information regarding outdoor thermal conditions. City officials of Dortmund are involved in the design of the dashboard and the weather station network, ensuring these meet their needs. The collected data will be used in a series of on-ground actions, supporting the evaluation of existing climate adaptation measures and the design of new ones. These actions include the mapping of areas with high potential for planting trees, the investigation of changes in human behavior during hot days, and the assessment of backyard greening strategies. To engage with local stakeholders, promote the role of citizen scientists, and disseminate the project, a series of workshops and on-site events are planned, such as climate comfort labs, mobile measurement campaigns, and climate walks with citizens. The overall goal of the project is for the city of Dortmund to adopt and integrate the developed network and nowcasting service into its smart-city ecosystem.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: The influence of urbanization on vegetation phenology in European capital cities

Authors: Doctoral researcher Amanda Ojasalo, Hannakaisa Lindqvist, Professor Tuuli Toivonen, Postdoctoral researcher Elias Willberg, Associate Professor Eduardo Maeda
Affiliations: University Of Helsinki, Finnish Meteorological Institute
Climate warming and urbanization are reshaping our living environment at a rapid pace, underscoring the importance of urban vegetation in regulating urban climates, preserving ecosystems, and enhancing human wellbeing. As vegetation phenology controls the functionality of these ecosystem services, it is essential to understand how the several urbanization mechanisms, including the Urban Heat Island (UHI), human management, and hydrological changes, influence the seasonal dynamics of vegetation. Earlier urban Start of Season (SOS), later End of Season (EOS), and longer Growing Season Length (GSL) have been widely observed from optical remote sensing data at coarse spatial resolution and are often associated with the UHI [1,2,3]. These shifts indicate enhanced photosynthetic activity and carbon sequestration potential in urban areas compared to their adjacent rural surroundings. However, these satellite-based observations suffer from the high number of mixed pixels in urban areas due to heterogeneous land cover and vegetation characteristics [4,5], and recent finer-spatial-scale assessments indicate that SOS and EOS variation could also be driven by differences in vegetation composition [6]. Consequently, urban phenology dynamics and their drivers need reassessment at high spatial detail and at the regional level to increase the understanding of these mechanisms, which supports the development of sustainable and carbon-smart urban greenery. We used the new Sentinel-2 based Copernicus High Resolution Vegetation Phenology and Productivity (HR-VPP) dataset to analyze the variability in SOS, EOS and GSL along urban-rural gradients in 37 European capital cities between 2017 and 2020. The phenology parameters are based on Plant Phenology Index (PPI) time series, and the data have a spatial resolution of 10 meters. We defined urban areas with the EEA/Copernicus Urban Atlas land cover and calculated the average SOS, EOS and GSL within urban areas and 10 surrounding buffer zones with a 2 km width.
All pixels within agricultural land cover were masked out from the calculations. We used the proportion of natural and semi-natural vegetated land cover within the urban areas and buffer zones to indicate the urban-rural gradient and analyzed the phenology variation along these gradients with a linear regression. To assess the regional variation between the cities, we classified them into four different climate zones [7]. To analyze how temperature, land cover and vegetation characteristics influence SOS and EOS variation along the gradients, we used a random forest regression model for all the cities and for each climate zone, using the Urban Atlas land cover, MODIS land surface temperature and Copernicus Dominant Leaf Type data. The results reveal that SOS is delayed along the urban-rural gradient in 21 out of 37 cities (R2 > 0.4, p < 0.05) as the proportion of natural and semi-natural vegetation increases, as expected from previous research. However, EOS is also delayed along the gradient in 17 out of 37 cities (R2 > 0.4, p < 0.05), contradicting the later urban EOS observed in other regions. SOS and EOS patterns are relatively similar across different climate types, except for “temperate, dry season, hot summer”, where the variation among and within the cities is greater. GSL does not show clear patterns along the gradients due to the earlier EOS, as it shortens in 6 out of 37 cities and lengthens in 6 out of 37 cities. Modelling results show variation in feature importances and responses between different climates, and vegetation types play a key role especially in the EOS variation. The proportion of non-tree vegetation advances EOS, coniferous trees delay it, and the influence of broadleaved trees depends on the climate zone.
In addition, increasing temperature advances EOS instead of delaying it, which may indicate a water deficit in urbanized areas that could hamper carbon sequestration in European capital cities under warming conditions. These results provide new insights into urban phenology and its drivers, highlighting the influence of vegetation characteristics and the value of high-resolution remote sensing data for analyzing fragmented, small-scale urban vegetation.
References
[1] Jia, W., Zhao, S., Zhang, X., Liu, S., Henebry, G. M., & Liu, L. 2021. Urbanization imprint on land surface phenology: The urban–rural gradient analysis for Chinese cities. Global Change Biology, 27(12), 2895–2904. https://doi.org/10.1111/GCB.15602
[2] Li, X., Zhou, Y., Asrar, G. R., Mao, J., Li, X., & Li, W. 2017. Response of vegetation phenology to urbanization in the conterminous United States. Global Change Biology, 23(7), 2818–2830. https://doi.org/10.1111/GCB.13562
[3] Zhou, D., Zhao, S., Zhang, L., & Liu, S. 2016. Remotely sensed assessment of urbanization effects on vegetation phenology in China’s 32 major cities. Remote Sensing of Environment, 176, 272–281. https://doi.org/10.1016/J.RSE.2016.02.010
[4] Chen, X., Wang, D., Chen, J., Wang, C., & Shen, M. 2018. The mixed pixel effect in land surface phenology: A simulation study. Remote Sensing of Environment, 211, 338–344. https://doi.org/10.1016/J.RSE.2018.04.030
[5] Tian, J., Zhu, X., Wu, J., Shen, M., & Chen, J. 2020. Coarse-Resolution Satellite Images Overestimate Urbanization Effects on Vegetation Spring Phenology. Remote Sensing, 12(1), 117. https://doi.org/10.3390/RS12010117
[6] Alonzo, M., Baker, M. E., Caplan, J. S., Williams, A., & Elmore, A. J. 2023. Canopy composition drives variability in urban growing season length more than the heat island effect. Science of The Total Environment, 884, 163818. https://doi.org/10.1016/J.SCITOTENV.2023.163818
[7] Beck, H. E., McVicar, T. R., Vergopolan, N., Berg, A., Lutsko, N. J., Dufour, A., Zeng, Z., Jiang, X., van Dijk, A. I. J. M., & Miralles, D. G. 2023. High-resolution (1 km) Köppen-Geiger maps for 1901–2099 based on constrained CMIP6 projections. Scientific Data, 10, Article 724. https://doi.org/10.1038/s41597-023-02549-6
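The gradient analysis described in the abstract (the mean of a phenology metric per buffer zone regressed against the share of natural and semi-natural vegetation) can be sketched as follows. The buffer-zone values below are synthetic illustrations, not HR-VPP data:

```python
import numpy as np

def gradient_trend(veg_fraction, mean_metric):
    """Linear trend of a phenology metric (e.g. mean SOS in day-of-year)
    along an urban-rural gradient, where the gradient is expressed as the
    fraction of natural/semi-natural vegetated land cover per zone.
    Returns (slope in days per unit fraction, R^2)."""
    slope, _intercept = np.polyfit(veg_fraction, mean_metric, 1)
    r = np.corrcoef(veg_fraction, mean_metric)[0, 1]
    return slope, r * r

# Urban core plus 10 surrounding 2 km buffer zones (synthetic values):
veg_fraction = np.linspace(0.15, 0.85, 11)
noise = np.array([0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2, 0.4, -0.1, 0.2, -0.3])
mean_sos = 95.0 + 12.0 * veg_fraction + noise   # SOS later toward the rural end

slope, r2 = gradient_trend(veg_fraction, mean_sos)
```

A positive slope with R2 > 0.4 and p < 0.05 is the pattern counted as a "delayed SOS" city in the abstract; the p-value test is omitted here for brevity.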
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Urban Scaling of Well-Being in Dutch Cities

Authors: Mirjam Van Hemmen, Dr Arend Ligtenberg, Professor Kirsten Maria de Beurs
Affiliations: Wageningen University
The United Nations Human Settlements Program (UN-Habitat) and the New Urban Agenda strongly advocate for well-managed cities that foster well-being for all. As cities become the primary habitat for humanity, the growing urban population highlights the urgent need to monitor and manage urban environments effectively. Indicators of well-being are collected and available at the city level for a number of countries. Moreover, urban well-being is shaped by various factors, including the built environment, access to green spaces, air quality, infrastructure, and social connectivity (Chen et al., 2023; Hajrasoulih et al., 2018; Mouratidis, 2021). Global remote sensing (RS) and geo-information science (GIS) data are readily available to map these physical characteristics of cities, enabling a large-scale analysis of the relationship between (census) well-being data and remotely sensed city characteristics. Systematic linkages of urban well-being to city characteristics are important information for decision-makers. Yet, the bulk of urban research is still composed of single case studies, and most urban remote sensing studies are on large cities within China, the USA, or the EU (Acuto et al., 2018; Zhu et al., 2019). To create a more balanced and comprehensive urban knowledge base, comparative analyses between urban areas that cover different geographical scales and city sizes are needed (Acuto et al., 2018; Reba & Seto, 2020; Seto et al., 2017). Complicating factors are the difficulty of defining an urban area delineation that is globally applicable and relevant to measuring urban well-being, and of linking urban census-unit data to remote sensing-based city boundaries. This study develops a methodology to systematically link urban well-being data and remotely sensed city characteristics.
The urban scaling method is used to investigate generalizable patterns in the relationship between cities and well-being, and whether urban environment characteristics can explain deviations from the norm. Cities in the Netherlands are used as a case study to assess the importance of the city definition and data interpolation methods. To assess the impact of a city definition on scaling results, the Functional Urban Area delineation and the Degree of Urbanisation Method by the EU & OECD are compared with a Dutch city definition (CBS, n.d.; Dijkstra et al., 2019, 2021). For urban well-being, indicators are used that cover physical health, mental health and social relationships. These indicators are available for several years at high resolution from the National Institute for Public Health and the Environment (RIVM). To link the RIVM data, based on neighbourhood boundaries, to a city area, the areal weighting, pycnophylactic and dasymetric binary interpolation methods are used and compared. The results comprise scaling exponents for different indicators of physical health, mental health and social relationships, the sensitivity of the results to the methods used, and a globally workable, well-being-relevant city definition. With current high urbanization rates, understanding the influence of the habitats we create on their residents is especially important. Identifying what is comparable and generalizable between cities will yield meaningful knowledge that can inform urban decision-makers.
REFERENCES
Acuto, M., Parnell, S., & Seto, K. C. (2018). Building a global urban science. Nature Sustainability, 1(1), 2–4. https://doi.org/10.1038/s41893-017-0013-9
CBS. (n.d.). Stedelijkheid. Retrieved April 22, 2024, from https://www.cbs.nl/nl-nl/onze-diensten/methoden/begrippen/stedelijkheid--van-een-gebied--
Chen, T.-H. K., Horsdal, H. T., Samuelsson, K., Closter, A. M., Davies, M., Barthel, S., Pedersen, C. B., Prishchepov, A. V., & Sabel, C. E. (2023). Higher depression risks in medium- than in high-density urban form across Denmark. Science Advances, 9(21). https://www.science.org
Dijkstra, L., Florczyk, A. J., Freire, S., Kemper, T., Melchiorri, M., Pesaresi, M., & Schiavina, M. (2021). Applying the Degree of Urbanisation to the globe: A new harmonised definition reveals a different picture of global urbanisation. Journal of Urban Economics, 125. https://doi.org/10.1016/j.jue.2020.103312
Dijkstra, L., Poelman, H., & Veneri, P. (2019). The EU-OECD definition of a functional urban area. https://doi.org/10.1787/d58cb34d-en
Hajrasoulih, A., Del Rio, V., Francis, J., & Edmondson, J. (2018). Urban form and mental wellbeing: Scoping a theoretical framework for action. Journal of Urban Design and Mental Health, 10(5).
Mouratidis, K. (2021). Urban planning and quality of life: A review of pathways linking the built environment to subjective well-being. Cities, 115. https://doi.org/10.1016/j.cities.2021.103229
Reba, M., & Seto, K. C. (2020). A systematic review and assessment of algorithms to detect, characterize, and monitor urban land change. Remote Sensing of Environment, 242. https://doi.org/10.1016/j.rse.2020.111739
Seto, K. C., Golden, J. S., Alberti, M., & Turner, B. L. (2017). Sustainability in an urbanizing planet. Proceedings of the National Academy of Sciences, 114(34), 8935–8938. https://doi.org/10.1073/pnas.1606037114
Zhu, Z., Zhou, Y., Seto, K. C., Stokes, E. C., Deng, C., Pickett, S. T. A., & Taubenböck, H. (2019). Understanding an urbanizing planet: Strategic directions for remote sensing. Remote Sensing of Environment, 228, 164–182. https://doi.org/10.1016/j.rse.2019.04.020
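Urban scaling fits a power law Y = Y0 · N^β between city population N and an aggregate indicator Y, with deviations from the fitted norm serving as city-level residuals. A minimal sketch of the exponent fit, using hypothetical populations and a noise-free synthetic indicator rather than RIVM data:

```python
import numpy as np

def scaling_exponent(population, indicator):
    """Fit log(Y) = log(Y0) + beta * log(N) by ordinary least squares.
    beta > 1 indicates superlinear scaling, beta < 1 sublinear; residuals
    from the fit flag cities that deviate from the scaling norm."""
    beta, log_y0 = np.polyfit(np.log(population), np.log(indicator), 1)
    return beta, np.exp(log_y0)

# Hypothetical city populations and a synthetic well-being aggregate:
population = np.array([50_000, 120_000, 200_000, 350_000, 650_000, 900_000])
indicator = 2.0 * population ** 0.9      # constructed with beta = 0.9, Y0 = 2.0

beta, y0 = scaling_exponent(population, indicator)
```

With noise-free input the fit recovers the constructed exponent exactly; with real indicator data the residuals around the fitted line become the quantity of interest.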
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Mapping the heat capacity of a city by diurnal airborne scanning

Authors: Daniel Kopkáně
Affiliations: Global Change Research Institute CAS
Urban ecosystems often suffer from severe overheating in summertime. The main cause of this (next to anthropogenic heat production) is the accumulation of heat energy within the mass of the city. Airborne observation is a strong tool to observe and describe the urban system at the level of individual streets: a level of detail that satellites cannot yet deliver, though they will be capable of it in the future, including in the thermal region. In this presentation we demonstrate how the synergy of FLIS data covering the visible, near-infrared and thermal regions with LIDAR data can be utilized to address the heat capacity of individual fragments (e.g. a house, parking lot, road, or urban greenery patch, with a pixel size of about 2 meters). The Flying Laboratory of Imaging Systems (FLIS) consists of an airborne carrier, imaging spectroradiometers and a laser scanner. A photogrammetric airplane, the Cessna 208B Grand Caravan with two hatches, serves as the airborne carrier. The basic sensor equipment consists of the hyperspectral sensors CASI-1500, SASI-600 and TASI-600, produced by the company Itres. The LMS Q780 airborne full-waveform laser scanner on board the plane is a product of the company Riegl. Our approach is based on a diurnal (day and night) airborne scanning campaign from August 2024 over the city of Brno (Czech Republic). In post-processing, we estimate the potential insolation of individual pixels from their position in the 3D model delivered by LIDAR and their albedo derived from the visible and near-infrared data. The surface temperature, corrected for emissivity (delivered by the optimized temperature/emissivity separation algorithm OTES) and atmosphere, then gives insight into the thermal storage capacity of the individual fragments. Since the urban heat island is strongly related to the energy balance (albedo and emissivity) and heat storage, our approach can provide valuable data for further work. It can be a valuable building block for urban digital twins.
Given the significant progress expected in the spatial resolution of Earth observation satellites, such a methodology could be upscaled worldwide.
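As a first-order illustration of the idea, the classical Apparent Thermal Inertia (ATI) proxy combines albedo with the diurnal surface temperature amplitude. The FLIS processing described above additionally corrects for emissivity and per-pixel insolation, which this sketch (with made-up pixel values) omits:

```python
import numpy as np

def apparent_thermal_inertia(albedo, t_day, t_night):
    """ATI = (1 - albedo) / (T_day - T_night): a simple per-pixel proxy for
    thermal storage capacity. Dark surfaces with a small diurnal temperature
    amplitude score high (large heat storage)."""
    return (1.0 - albedo) / (t_day - t_night)

# Two illustrative 2 m fragments (temperatures in kelvin):
albedo  = np.array([0.08, 0.25])    # asphalt road, urban greenery patch
t_day   = np.array([318.0, 303.0])
t_night = np.array([295.0, 290.0])

ati = apparent_thermal_inertia(albedo, t_day, t_night)
```

Note that for vegetation a small diurnal amplitude reflects evapotranspiration as much as storage, which is one reason the full energy-balance treatment in the abstract is needed for reliable fragment-level estimates.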
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Leveraging Earth Observation for Resilient and Sustainable Urban Development: The BUILDSPACE Approach

Authors: Dr Claudio Pisa, Vasileios Baousis, Dr Stamatia Rizou, Mr Iasonas Sotiropoulos
Affiliations: ECMWF, SingularLogic
The BUILDSPACE project is an initiative that harnesses space-driven technologies to address the challenges of urban development and construction. By aligning with the European Green Deal objectives, the project aims to reduce CO₂ emissions, enhance urban resilience, and improve living conditions through innovative, data-driven decision-making tools. BUILDSPACE delivers a suite of five advanced services designed to meet key needs in urban planning and construction. At the building scale, services focus on the generation and visualisation of high-fidelity digital twins of buildings, integrating data derived from drone scans and enabling precise simulations, analyses, and operational optimisation. These visualisation tools support user interaction through advanced interfaces, including virtual reality (VR) and augmented reality (AR), alongside traditional interactive displays. At the city scale, BUILDSPACE provides services to simulate and analyse climate-related scenarios, including urban heat and flooding, integrating data from local observations and offering valuable insights for urban planners and municipalities. These tools are map-centric and designed with an intuitive layer panel to support interactive exploration of diverse scenarios, fostering a user-friendly experience that aids evidence-based planning and policy-making. The project is validating and assessing its services across four European cities representing diverse climatic conditions. This validation focuses on two main scenarios: construction companies monitoring and optimising building processes through advanced digital tools, and municipalities and urban planners analysing the impacts of climate change on the built environment. BUILDSPACE advances the maturity of its technological components from TRL 5-6 to TRL 7-8 by the project’s conclusion, ensuring the delivery of market-ready solutions that drive innovation and sustainability in urban development.
BUILDSPACE also engages key stakeholders, including municipalities, construction companies, and urban planners, who are seeking innovative solutions to streamline construction processes, reduce errors, minimise carbon footprints, and improve resilience against climate change and natural disasters. By offering high-precision digital services tailored to simulate complex built-environment scenarios, the project addresses these needs comprehensively. Through the use of immersive visualisation technologies and advanced analytics, BUILDSPACE empowers stakeholders to evaluate energy performance, forecast energy demand, address energy poverty challenges, and strengthen resilience to urban heat and flood risks. BUILDSPACE aims to create a more connected and efficient environment for urban development and to align with EGNSS specifications and Copernicus services and to synchronise with the advances of DestinE, targeting Digital Twin technologies and data federation mechanisms. The project’s outcomes are expected to significantly enhance decision-making processes, fostering sustainability and resilience in urban planning and construction practices across Europe.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Land use and land cover changes in the Bucharest region following Romania's accession to the European Union in 2007

Authors: George-Călin Baltariu, Ana Navarro Ferreira, João Catalão
Affiliations: “Gheorghe Asachi” Technical University of Iasi, Faculty of Civil Engineering and Building Services, Iasi, Romania, Universidade de Lisboa, Faculdade de Ciências, Instituto Dom Luiz, Lisboa, Portugal
Following the political regime change in 1989 and Romania's accession to the European Union (EU) in 2007, the country underwent significant land use and land cover (LULC) changes. These changes were primarily driven by political, economic, technological, and demographic factors. In Eastern Europe, and particularly in Romania, property restitution emerged as a key catalyst for those changes. The shift to private ownership of agricultural and forest land resulted in cultivated land abandonment, deforestation, and the replacement of profitable crops with pastures, meadows, or natural afforestation. As a result, rural areas became dominated by small, semi-subsistence, family-operated farms. After 2007, land purchase or leasing by Romanian companies, supported by EU funding, and acquisitions by multinational corporations promoted land consolidation. By 2013, large agricultural holdings represented approximately one-third of all farms in the EU. An increasing trend in urbanization has surpassed other types of land use changes, largely driven by rural-to-urban migration toward major urban centers. CORINE Land Cover data have mainly been used to assess these transitional dynamics and identify long-term spatial and temporal trends. In this study, we present a comparative analysis of three key transitional periods: the political regime transition (1993–2000), the EU accession transition (2000–2015), and the post-transitional period (2015–2022). This analysis is based on historical Landsat satellite imagery, specifically data from Landsat 5 and 8, complemented by GIS tools and landscape metrics to characterize the primary trends in LULC changes within the Bucharest metropolitan area from 1993 to 2022. Population data were also incorporated to examine its correlation with urban expansion. A multi-temporal set of satellite images acquired between 1991 and 2022 was used to produce four reference LULC maps (1993, 2000, 2015, and 2022). 
These maps were generated using a Random Forest classifier and categorized into four major land classes: Urban, Forest, Agriculture, and Water. The LULC maps were validated with an independent set of random samples, achieving overall accuracies exceeding 90% and kappa coefficients above 0.80, except for the 2000 LULC map. Several landscape metrics were computed to quantify the spatial and temporal pattern of the LULC class dynamics in the test area over the past 30 years. The results highlighted substantial urban expansion around major cities, accompanied by the loss and fragmentation of agricultural land and a decline in rural settlements. Urban expansion became particularly pronounced during the latter two transitional periods, driven by economic growth in the 2000s and the recovery following the 2007–2008 financial crisis. Demographic data from the National Institute of Statistics (INS Romania) revealed that the Bucharest-Ilfov development region, encompassing Bucharest and Ilfov County, experienced population decline and minimal urban expansion during the first transitional period. In contrast, the subsequent periods showed a strong positive correlation between population growth and urban expansion, with an R² value of 0.82.
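The validation metrics quoted above (overall accuracy and the kappa coefficient) can be computed from a confusion matrix as sketched below; the toy sample uses the study's four classes but invented labels, not the actual validation set:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows: reference class, columns: predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = cm.sum()
    p_o = np.trace(cm) / n
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (p_o - p_e) / (1 - p_e)

# Toy validation sample; classes: 0 Urban, 1 Forest, 2 Agriculture, 3 Water
y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])
y_pred = np.array([0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 0])

cm = confusion_matrix(y_true, y_pred, 4)
oa = overall_accuracy(cm)
kappa = cohens_kappa(cm)
```

With 10 of 12 samples correct this gives an overall accuracy of about 0.83 and a kappa of about 0.78, i.e. below the paper's 90% / 0.80 thresholds.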
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Characterizing spatially explicit urban heat risk from Earth Observation data to drive urban renovations

Authors: Gabriele Oneto, Dr Luca Cenci, Ph.D. Fillippo Fraschini, Ph.D. Luca Pulvirenti, Ph.D. Katia Perini
Affiliations: Department of Architecture and Design, University of Genoa, CIMA Research Foundation
Mitigating urban heat is one of the primary concerns for contemporary administrations, as highlighted by evidence linking rising temperatures to increased morbidity. This aspect is particularly relevant in the context of climate change. Within this framework, Earth Observation (EO) and geospatial data and analysis techniques are effective tools to assess this risk for the population. Indeed, EO-based datasets can be used to characterize all the components required for risk assessment, i.e., hazard, exposure and vulnerability. Moreover, recent advances in morphometric spatial analysis have shown that heterogeneous data, such as those related to urban morphology, demographics, and land cover, can be effective in describing both the form and contents of cities. In this context, we take advantage of high spatial resolution Land Surface Temperature data acquired by the Landsat 8 and 9 satellites to characterize the cities’ Surface Urban Heat Island (SUHI), which we consider as the hazard in our risk analysis. We explore the usage of a Voronoi-based morphometrics technique, i.e., the Enclosed Tessellation (ET) by Fleischmann et al. (2021), to model and stack all risk components. During this step, we select and model over 60 indicators related to urban heat risk for the population from climate, demographics, morphology, land cover and land use, using strictly open-source data. We mostly use EO-derived datasets for morphology and topography (Urban Atlas, Tinitaly DEM), land cover (Copernicus High Resolution Layers), demography (WorldPop, Global Human Settlement Layer), and climate (Global Wind Atlas). We also use geospatial data from OpenStreetMap, direct census data from the Italian National Institute of Statistics (ISTAT), and a high-resolution DEM derived from EO and non-EO data (Tinitaly). We use multicriteria optimization algorithms to identify urban areas prone to SUHI risk.
We test and rank different algorithms, such as TOPSIS and VIKOR, to define risk, and compare the results from ET with Census Blocks (CB) for validation. Finally, we produce a typological characterization of SUHI-prone city areas through Hierarchical Clustering, enabling the comparison of different and similar conditions across cities. We present our findings for two Italian cities with different morphology and regional climate, Genoa and Turin, both located in Northern Italy. In order to compare the cities’ behaviour under similar critical conditions, Summer 2022 was used as the reference time period. For Northern Italy, this period was characterized by anomalous weather conditions, leading to drought events and heatwaves. In this study, we demonstrate the effectiveness of our approach by identifying where, and in which urban conditions, ET can be effective in mapping urban heat risk through spatially explicit correlations. This offers researchers and practitioners two benefits. Firstly, it directly correlates EO-derived SUHI information with architectural dimensions and non-EO data, simplifying and streamlining the design and implementation of mitigation strategies, such as green infrastructure and Nature-based Solutions. This urban characterization of risk is conducted at a higher spatial resolution, and with greater fidelity to actual urban form, than traditional spatial approaches such as CB. Secondly, we produce harmonized, spatially explicit ranking criteria for city comparison. This extensive, high-resolution application is possible thanks to the rich spatio-temporal distribution of EO data. We envision that this approach could be scaled up to drive regional or national urban renovation planning.
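A minimal TOPSIS closeness computation, one of the multicriteria algorithms named above, might look like this. The three indicators, weights, and per-cell values are hypothetical stand-ins for the 60+ real ones, and all criteria are treated as contributing positively to risk:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS. matrix: alternatives x criteria; benefit[j] is True
    if a higher value on criterion j pushes the alternative toward the
    ideal. Returns closeness to the ideal solution in [0, 1]."""
    m = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
    v = m * weights                               # weighted normalized scores
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Hypothetical ET cells scored on SUHI intensity (K), resident population,
# and impervious fraction; here all three increase heat risk:
cells = np.array([
    [4.5, 1200, 0.80],
    [2.0,  300, 0.40],
    [1.0,  100, 0.10],
])
weights = np.array([0.40, 0.35, 0.25])
benefit = np.array([True, True, True])

risk = topsis(cells, weights, benefit)
```

Because the first cell dominates on every criterion, it receives closeness 1.0 and the last cell 0.0; real indicator sets mix benefit and cost criteria, which the `benefit` mask handles.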
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Monitoring and Detection of Urban Developments through Integration of Multiple Satellite Image Sources (Radar and Optical): A Case Study in Türkiye

Authors: Tamer Özalp
Affiliations: Researchturk Space Co.
The world is constantly changing and becoming more complex in all aspects. The changes have unpredictable impacts and implications at various scales. The increasing complexities in urbanized systems pose challenges in their comprehension and management. The intricate structures and large spatial scales make visualization and analysis arduous. Conventional methods may fall short in accurately representing the situation, thereby impeding detailed analysis and decision-making processes. Consequently, communities seek new synergies to access timely, updated, standardized, reliable, user-friendly, and actionable information to make well-informed decisions. There is a pressing need for innovative monitoring systems to streamline and simplify urbanization processes. It is imperative to democratize information, integrate it into society, and ensure its accessibility to all. Earth Observation (EO)-based space techniques have become more comprehensible, accessible, and dependable for analyzing Earth resources and monitoring urban environments. The outcomes of numerous application-oriented studies have been promising thus far. This research primarily addresses the requirements of public and research organizations, with a particular focus on utilizing imaging radar systems for spatial feature extraction and remote mapping of sensitive areas in Turkey. The major concern is the expanding urbanization. The study's multidisciplinary approach incorporates EO technologies, particularly integrating SAR and optical satellite imagery data to produce detailed maps of land surface features. The study primarily aims to monitor and delineate spatial features of urban developments to enhance understanding of environmental, land, and sea changes, including land use, and to analyze their connections to reference data. The basic methodology involves detecting, delineating, recognizing, identifying, and interpreting urban features.
Leveraging 3D visualization, color manipulation, and the three-dimensional sensing capabilities of Synthetic Aperture Radar (SAR) data sets facilitates a better understanding of the surface, aiding in discriminating, locating, and mapping meaningful spatial information in the study area. The integration of spatially enhanced SAR and optical imagery data yields significant combined analysis results, providing highly acceptable outcomes for operational use, even in cases where extensive ground studies have been conducted. The imagery data taken from different wavelength bands are used in models that explain the processes controlling the development and configuration of new land surfaces in the region. The SAR and optical data results demonstrated the links between surface developments and remote sensing in the visible, infrared, and microwave spectra. By combining SAR and optical imagery data, land and sea surface features, objects, structures, patches, and changes are effectively mapped. Color composite analysis of SAR-enhanced images enabled optimal extraction of spatial information in the study region. Multi-seasonal and multi-year color composite SAR images highlight changes by displaying them in different color tones. EO-based systems facilitate timely identification, enabling proactive intervention, preventive measures, and documentation of urban development. This approach enhanced the precision and effectiveness of change detection. Multi-source imaging systems serve as a valuable input for Geographic Information Systems (GIS) analysis and can be a crucial data source for local and state governments, real estate companies, financial businesses, and individuals to make informed decisions. The integration of space-based EO multi-source data offers novel information and innovative aspects for land and sea surface mapping studies.
In conclusion, EO systems play a pivotal role in encouraging research and user community activities in the vital domain of urban development mapping and change detection.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: UrbanAI: Data-Driven Surface Mapping for Sustainable Urban Development

Authors: Adrian Fessel, Sarah Tesmer, Daro Krummrich
Affiliations: Ohb Digital Connect, Federal State Office GeoInformation Bremen
In the context of climate change, urbanization, and the increasing awareness of the environmental impacts of human settlements, accurate mapping and monitoring of urban surfaces are essential for effective planning and adaptation. As a co-development between OHB and the Federal State Office GeoInformation Bremen, UrbanAI addresses this need by developing a data-driven method for classifying urban land cover types and computing soil sealing levels, using exclusively high-resolution RGBI aerial imagery. Aligning local initiatives with global green transition objectives and policy, Bremen and other municipalities devise and implement adaptation strategies to overcome challenges posed by urbanization and climate change. One building block is the creation of a sealing and unsealing potential registry, which forms the basis for the development of an unsealing plan, a tool to guide policy measures aimed at enhancing urban resilience and mitigating climate impacts. As a joint development project, UrbanAI aims to specify, develop, and validate a sustainable method to generate the required data basis from Earth observation data, driven by continuous consultation with different stakeholder groups. This presentation highlights the results and challenges of this process. UrbanAI focuses on a deep learning method for the accurate segmentation of 16 surface and object types, as well as on an approach to estimate the degree of sealing from the model’s calibrated probabilities. The specified classes cover surface types relevant for sealing assessment, including types of vegetation, different water-bound surfaces, and types of pavement or built-up structures. They are complemented by classes with relevance for climate-related monitoring tasks, including grass-grid pavers, solar panels, and green roofs. A high variability of surface types is characteristic of the urban environment, and their abundance naturally leads to significant class imbalance.
Our technical focus is on the challenges involved in adapting and training a machine learning model for this setting, such that prediction accuracies are acceptable for further utilization by the end user. We report on the data labeling strategy and the design of a custom loss function employed in the project. We highlight the strong performance of the model and analyze residual errors to pinpoint their origins. We demonstrate the applicability of the model for the intended task via the computation of sealing degree and by measuring the abundance of surface types of interest for different administrative districts. Future directions for UrbanAI are centered on the goal of integrating its results into relevant administrative processes and minimizing the interaction required to apply the model across different regions, time periods, and data providers. To support this objective, planned next steps include expanding the training datasets to better address problematic and underrepresented classes, enhancing validation for rare classes in collaboration with the end user, and incorporating height data to improve the detection of complex structures such as green roofs. Furthermore, extending the analysis to multiple time points will enable monitoring of surface sealing dynamics and enhance the model’s generalization capability for unseen datasets through transfer learning.
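The step from calibrated class probabilities to a sealing degree can be sketched as a probability-weighted average of per-class sealing coefficients. The class names and coefficients below are entirely hypothetical placeholders, not the project's 16-class scheme:

```python
import numpy as np

# Hypothetical per-class sealing coefficients (0 = fully pervious, 1 = fully sealed);
# the real UrbanAI class list and weights are not given in the abstract.
SEALING_COEFF = np.array([0.0, 0.0, 1.0, 0.6, 1.0])  # e.g. grass, trees, asphalt, grass-grid paver, roof

def sealing_degree(probs):
    """Per-pixel sealing degree as the probability-weighted mean of class
    coefficients, using the model's calibrated class probabilities.

    probs: array of shape (H, W, C), summing to 1 over the last axis.
    """
    return probs @ SEALING_COEFF

# Toy example: one pure-grass pixel and one mixed asphalt/roof pixel.
probs = np.zeros((1, 2, 5))
probs[0, 0, 0] = 1.0
probs[0, 1] = [0.0, 0.0, 0.5, 0.0, 0.5]
deg = sealing_degree(probs)
```

Aggregating `deg` over district polygons would then yield the per-district sealing statistics the abstract mentions.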
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: AI-based Forecasting and Immersive Visualization for Flood and Mobility Resilience

Authors: Mr Francesco Asaro, Dr Simone Fratini, Dr Alessandra Feliciotti, Mr Marco Antonio Mazzeo, Dr Mattia Marconcini, Mr Ludovico Lemma, Mr Andreas Altenkirch, Mr Josselin Stark, Mr Marian Matees, Mr Erik Scatamburlo, Patrick Ebel
Affiliations: Mindearth s.r.l., Solenix Engineering GmbH, Bit S.P.A, European Space Agency - Φ-lab
Floods are among the most pervasive natural disasters globally, accounting for approximately 40% of all natural catastrophe losses. Their impact is both widespread and severe, particularly in countries like Denmark, where the threat of flooding is exacerbated by climate change and rising sea levels. Projections suggest that sea levels could rise by up to 1.2 meters within this century, significantly increasing the risk of flooding in urban and coastal areas. Currently, around 9% of Denmark's population is at risk of flooding, with the southern regions and Danish Islands being particularly vulnerable. Copenhagen, a major urban hub, is among the high-risk areas. Between 0.9% and 1.2% of Danish homes are already at risk of flooding, a number expected to nearly double by 2071. The economic impact of such events is stark; for instance, the 2011 cloudburst in Copenhagen resulted in approximately €800 million in damages. Addressing the challenges posed by floods, particularly in densely urbanized areas, requires not only accurate data but also robust models that go beyond direct impacts to consider the broader implications for urban accessibility and mobility. As part of the ESA-funded Immersive Visualisation and Metaverse for EO project (Immersive EO), the Visu4EO platform emerges as a groundbreaking solution to address these challenges by integrating Earth Observation (EO) data, Virtual Reality (VR), and Explainable AI (XAI) into a unified framework for flood and mobility resilience. The platform is designed to simulate, analyze, and visualize the cascading impacts of flood events on urban systems, empowering policymakers, urban planners, and other stakeholders with actionable insights for disaster risk reduction and resilience planning. A key strength of Visu4EO lies in its utilization of EO data to underpin its analytical models.
The platform leverages high-resolution Digital Elevation Models (DEMs), land cover maps derived from Sentinel-1 and Sentinel-2, as well as global datasets like MERIT Hydro and GEBCO. These datasets provide essential inputs for hydraulic simulations performed with the Deltares SFINCS (Super-Fast INundation of CoastS) model. SFINCS is a high-performance 2D flood simulation engine capable of modeling compound flooding scenarios, including coastal, riverine, and urban flooding. By integrating parameters such as rainfall intensity, sea-level rise, and land-use characteristics, the model produces dynamic flood maps, enabling a detailed understanding of flood extents and depths across urban areas. Complementing the hydraulic modelling, Visu4EO integrates extensive mobility data to evaluate the impacts of flooding on urban transportation networks. The platform incorporates high-frequency location-based (HFLB) data, population density metrics, and traffic flow data to simulate and predict mobility patterns. These datasets are fed into the Deep Gravity Model, an advanced deep-learning algorithm that learns to predict traffic dynamics from empirical origin-destination flows, uncovering complex mobility behaviors. By linking the outputs of the SFINCS and Deep Gravity models, Visu4EO provides a virtual environment where users can modify critical parameters related to flood events and infrastructural configurations to explore a wide range of scenarios. In particular, for flood event simulation, users can adjust rainfall intensity and duration to replicate varying storm conditions, from moderate precipitation to extreme cloudburst scenarios. They can simulate the effects of rising sea levels, specifying different increments to analyze the long-term impacts of climate change or sudden surges associated with storms.
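The Deep Gravity model generalizes the classical gravity model of human mobility. As a point of reference, here is a minimal numpy sketch of that classical baseline (population products over a power of distance, normalized per origin), which the neural model replaces with learned features; the populations and distances are made up for illustration:

```python
import numpy as np

def gravity_flows(pop, dist, beta=2.0):
    """Classical gravity baseline for origin-destination flows: the flow from
    zone i to zone j is proportional to pop_i * pop_j / dist_ij**beta, with each
    origin row normalized to trip probabilities. (Illustrative only; Deep
    Gravity replaces this closed form with a neural network over zone features.)"""
    d = dist.astype(float).copy()
    np.fill_diagonal(d, np.inf)          # exclude self-flows
    raw = np.outer(pop, pop) / d**beta
    return raw / raw.sum(axis=1, keepdims=True)

pop = np.array([1000.0, 500.0, 2000.0])          # hypothetical zone populations
dist = np.array([[0.0, 2.0, 5.0],
                 [2.0, 0.0, 3.0],
                 [5.0, 3.0, 0.0]])               # hypothetical distances (km)
flows = gravity_flows(pop, dist)
```

During a simulated flood, re-evaluating such a flow matrix with closed road segments (larger effective distances) is one simple way to propagate a flood scenario into mobility impacts.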
Additionally, the platform allows for the placement or modification of flood barriers, levees, or retention basins to evaluate their capacity to mitigate flood depth and spread. Changes to drainage system efficiency can also be explored by modifying flow rates, enabling users to test how urban drainage networks respond to different intensities of flooding and identify areas of potential failure. In terms of infrastructural transformations, users can interact with road networks by opening or closing specific road segments to simulate emergency evacuation routes or the loss of accessibility due to floodwaters. The virtual environment supports the placement of new infrastructure, such as flood defenses, culverts, or other mitigation structures, enabling users to analyze their effectiveness in water diversion and protecting urban assets. Urban planners can also model land-use changes, such as converting impermeable surfaces to permeable ones or incorporating green infrastructure, to reduce surface runoff and mitigate flooding risks. Furthermore, traffic management strategies can be simulated by rerouting traffic, adjusting signal timings, or designating evacuation corridors, helping to minimize congestion and ensure mobility during critical flood events. The immersive capabilities of Visu4EO are powered by Virtual Reality, offering an interactive and intuitive environment for exploring simulation results. Users can navigate through 3D representations of flood-affected areas, observing changes in water levels, road accessibility, and traffic flows in real time. This immersive experience enables stakeholders to assess vulnerabilities, evaluate the effectiveness of proposed interventions, and make informed decisions. For instance, urban planners can use the VR environment to simulate the placement of flood barriers or the re-routing of traffic and immediately visualize the outcomes of these actions. 
The ability to interact with and explore complex data in a 3D space enhances situational awareness and facilitates collaborative decision-making. Explainable AI plays a central role in the platform’s design, ensuring that the predictive models are transparent and interpretable. Using methods such as SHAP (Shapley Additive Explanations), Visu4EO provides users with clear explanations of how input variables influence model outputs. This capability is particularly important for building trust in the system, as it allows users to understand the factors driving specific predictions, such as why a particular road segment is expected to experience severe congestion during a flood event. XAI also supports the exploration of "what-if" scenarios, enabling stakeholders to simulate and compare the impacts of various interventions, such as implementing new flood defenses or altering traffic management strategies. Visu4EO’s modular and scalable architecture ensures its adaptability to diverse urban contexts and requirements. The platform consists of three main components: a scenario generation module, a simulation engine, and a VR rendering module. Users begin by defining scenarios in the generation module, specifying parameters such as rainfall intensity, road closures, and land-use changes. The simulation engine integrates the hydrological and mobility models to analyze these scenarios, while the VR module transforms the simulation results into an immersive 3D experience. This architecture supports deployment in both cloud-based and local environments, ensuring accessibility and flexibility for various user needs. The platform’s ability to simulate and visualize the cascading impacts of flooding and mobility disruptions makes it an invaluable tool for urban resilience planning. 
For example, decision-makers can use Visu4EO to prioritize investments in flood defenses, optimize traffic management during emergencies, and design evacuation strategies that minimize risk to human lives and assets. By providing a comprehensive framework for analyzing interconnected urban systems, the platform enables stakeholders to develop proactive and adaptive plans that enhance resilience to climate-related disasters. While Visu4EO is currently focused on flooding and mobility, its underlying principles and architecture make it well-suited for future expansions. The platform could be extended to address other urban challenges, such as air quality, urban heat islands, and public health impacts. For instance, it could incorporate S5P data to analyze the effects of traffic management on air pollution or use thermal imagery to study the cooling effects of urban green spaces. These potential developments highlight the platform’s versatility and its capacity to evolve into a comprehensive tool for sustainable urban management. As climate change and urbanization continue to reshape the global landscape, Visu4EO stands as a beacon of innovation, offering solutions that bridge the gap between data, analysis, and decision-making.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Advancing Understanding of Urban Green Spaces’ Cooling Effects Using Sub-Meter Resolution Imagery

Authors: Yue Zeng, Jianhua Guo, Xiao Xiang Zhu
Affiliations: Data Science in Earth Observation,Technical University of Munich, Munich Center for Machine Learning
Urban green spaces (UGS) play a critical role in mitigating the urban heat island (UHI) effect, contributing to more sustainable and livable cities. However, existing studies on UGS and their cooling effects face several significant challenges. First, most studies rely on low- to medium-resolution datasets, which are insufficient for accurately detecting and identifying UGS spatial patterns. This limitation leads to uncertainties in assessing the cooling efficiency of UGS, particularly in dense urban environments where detailed spatial information is crucial. Second, previous studies predominantly focus on single cities, restricting the generalizability and transferability of their results. This city-specific focus prevents researchers from capturing variations in UGS cooling effects across diverse climatic zones and geographical features. Therefore, it is crucial to address these gaps by employing high-resolution data and adopting more representative analytical methods. To address these challenges, this study constructs a high-resolution sub-meter UGS dataset for 50 major Chinese cities, derived using a semi-supervised deep learning method and delivering unprecedented spatial precision and accuracy. By employing a fixed-effects panel model, we assess the cooling effects of UGS on land surface temperature (LST). Our results reveal that UGS significantly mitigates the urban heat island effect, with an average cooling effect of approximately 0.297°C for every 10% increase in UGS coverage. Additionally, our findings show variation in cooling efficiency across different contexts. Specifically, the cooling effect of green spaces is most pronounced in public areas, open spaces, and high-rise areas, whereas it is comparatively weaker in industrial areas, compact spaces, and low-rise areas. These results offer a deeper understanding of the factors influencing UGS cooling effects and provide valuable insights for designing targeted mitigation strategies.
Our findings not only highlight the importance of UGS as a nature-based solution to combat urban heat but also underscore the need for integrating high-resolution remote sensing imagery into urban planning. By publicly releasing our high-resolution dataset, we aim to establish a benchmark for future research and guide policymakers in creating climate-resilient cities. We expect that the high-resolution UGS dataset and findings of this study will be essential for enhancing urban sustainability, fostering resilience, and addressing the growing challenges posed by climate change.
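The panel fixed-effects estimate described above can be illustrated with the standard within transformation: demeaning LST and UGS coverage within each city absorbs time-invariant city effects before fitting the pooled slope. A self-contained sketch on synthetic data; the slope of -0.0297 °C per percentage point is chosen only to mirror the reported magnitude (-0.297 °C per 10%), not taken from the study's data:

```python
import numpy as np

def within_estimator(y, x, groups):
    """Fixed-effects (within) estimator: demean y and x within each group,
    then fit the pooled slope by OLS. Absorbs time-invariant group effects."""
    yd, xd = y.astype(float).copy(), x.astype(float).copy()
    for g in np.unique(groups):
        m = groups == g
        yd[m] -= yd[m].mean()
        xd[m] -= xd[m].mean()
    return float((xd @ yd) / (xd @ xd))

# Synthetic panel: 5 cities x 20 observations, city-specific base temperatures,
# and a known cooling slope that the estimator should recover exactly.
rng = np.random.default_rng(0)
cities = np.repeat(np.arange(5), 20)
ugs = rng.uniform(0, 60, size=100)                       # UGS coverage in %
lst = np.array([30.0, 28.0, 33.0, 25.0, 31.0])[cities] - 0.0297 * ugs
slope = within_estimator(lst, ugs, cities)
```

Because the synthetic data are noise-free, the recovered slope equals the true value; on real data, cluster-robust standard errors would accompany the point estimate.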
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: How are Urban Heat Islands impacted by LULC Heterogeneity? German Cities in Focus

Authors: Keli Wang, Dr.-Ing. Yuanyuan Wang, Dr. Tobias Leichtle, Prof. Dr. Hannes Taubenböck, Prof. Dr.-Ing. habil. Xiaoxiang Zhu
Affiliations: Chair of Data Science in Earth Observation, Technical University of Munich, German Aerospace Center (DLR), Earth Observation Center (EOC), Institute for Geography and Geology, Julius-Maximilians-Universität Würzburg, Munich Center for Machine Learning
European cities have experienced more frequent and intense heat waves over the past decade. For a large part of Europe, 2022 was the warmest year on record. These heat waves significantly increased excess mortality rates. Furthermore, these effects are amplified by the Urban Heat Island (UHI) effect, which intensifies heat in dense urban areas. With advances in computing and the availability of large-scale remote sensing data, extensive studies have shown that the density of commercial areas and the extent of urban green space significantly impact the UHI effect. However, the influence of the heterogeneity of urban land use land cover (LULC), such as its composition and configuration, has received comparatively less attention. This study aims to address the question: How are UHIs impacted by heterogeneous LULC composition and configuration? We address this by means of a data-driven approach based on multi-source remote sensing data. We delineated multiple geospatial parameters that have been shown to be most relevant to UHIs, including LULC, residential and non-residential building density, and population density. To quantify the heterogeneity of LULC composition and configuration, a variety of class- and landscape-level landscape metrics (LMs), including edge density (ED), landscape shape index (LSI), and Shannon's diversity index (SHDI), were calculated. Land surface temperature (LST) data was employed as a proxy for UHIs. Based on the available data, 80 German cities with populations over 100,000 serve as the study area for this work. Methodology-wise, the Pearson correlation coefficient was first employed to provide a general understanding of the relationships between the various impact factors and UHIs. Moreover, to quantify the impact of these factors and heterogeneity indices on UHIs, both a linear model (global Ordinary Least Squares regression) and a non-linear model (LightGBM) were trained using these variables as input and UHIs as output.
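Of the landscape metrics listed, Shannon's diversity index has the most compact definition and can be sketched directly over a categorical LULC raster (an illustrative implementation, not the project's code):

```python
import numpy as np

def shannon_diversity(lulc):
    """Shannon's diversity index (SHDI) over a categorical LULC raster:
    SHDI = -sum_i p_i * ln(p_i), where p_i is the areal share of class i.
    0 for a single class; higher values indicate more heterogeneous LULC."""
    _, counts = np.unique(np.asarray(lulc), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

uniform = np.array([[0, 1], [1, 0]])    # two classes in equal shares -> ln(2)
mono = np.zeros((4, 4), dtype=int)      # a single class -> 0
```

Computing SHDI per city (or per analysis cell) yields the heterogeneity feature that is then related to LST in the regression models.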
Notably, SHAP (SHapley Additive exPlanations), a popular XAI method, was used to interpret the models and provide quantitative insights into feature importance and interactions. Preliminary findings indicate that although the general trends of these impact factors are similar, their relative importance and interactions vary depending on the analytical perspective. For instance, the proportion of built-up areas and the building coverage ratio generally serve as the two most significant features positively influencing UHIs. Conversely, the proportions of tree cover and water cover are the two primary features negatively impacting UHIs. Notably, the edge density of built-up areas demonstrates a negative association with UHIs, indicating a potential cooling effect from a more heterogeneous LULC.
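SHAP attributions are easiest to see in the linear special case, where they have a closed form. The sketch below (with hypothetical numbers) illustrates the additivity property that also holds for the TreeSHAP values computed for tree ensembles such as LightGBM:

```python
import numpy as np

def linear_shap(X, x, coef, intercept=0.0):
    """Exact SHAP values for a linear model: phi_j = coef_j * (x_j - E[X_j]).
    The base value is the expected model output over the background data X;
    base + sum(phi) reconstructs the prediction exactly (additivity)."""
    phi = coef * (x - X.mean(axis=0))
    base = intercept + coef @ X.mean(axis=0)
    return phi, base

# Hypothetical background data, coefficients, and one instance to explain.
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
coef = np.array([2.0, -1.0])
x = np.array([4.0, 3.0])
phi, base = linear_shap(X, x, coef, intercept=0.5)
pred = 0.5 + coef @ x
```

The sign of each `phi_j` is what supports statements like "edge density of built-up areas is negatively associated with UHIs" at the level of individual predictions.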
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: High resolution evapotranspiration for climate adaptation strategies

#stac

Authors: Stefano Natali, Leon Stärker, Stefanie Pfattner, Daniel Santillan, Maximilien
Affiliations: SISTEMA GmbH
The rapid pace of climate change is affecting numerous sectors, systems, individuals, and institutions worldwide, which have to adapt to its impacts. In Austria especially, climate change is becoming more and more noticeable, and its existence, pace, and impact are demonstrated by numerous measurements and observations. According to recent climate data (https://www.iea.org/articles/austria-climate-resilience-policy-indicator), the increase in annual mean temperature in the country has been more than twice the global average, with a larger impact in urban, agricultural, mountainous, and forest areas. These climate change signals can be observed in the rapid melting of glaciers and thawing of permafrost in alpine regions, the increasing number of hot days and tropical nights, and an increase in precipitation. The overall motivation of the proposed project is to develop services that can support climate adaptation strategies through the integration and monitoring of evapotranspiration (ET) using existing satellite missions and in-situ data. The tool, under development in the FFG project GET-ET for GTIF, will use ECOSTRESS and Sentinel-2 data to provide high-resolution ET data for urban applications, agriculture, and forest management. Even though methodologies to estimate evapotranspiration with Sentinel-2 have already been proposed, the current approach uses a combination of VIS-NIR high-resolution satellite data and ancillary data, such as land cover and climate reanalysis data, to train a deep learning model that estimates the ET values provided by ECOSTRESS. New foundation models trained on geospatial imagery make it possible to improve prediction capabilities with satellite imagery. The IBM pretrained model “Prithvi” is one example of an architecture suited to this task; its large-scale pretraining over all Sentinel-2 bands facilitates fine-tuning.
Prithvi shows meaningful capabilities for segmentation and regression that fit the objective of estimating evapotranspiration. The model can also be adapted to incorporate ancillary data as needed; this flexibility motivated the choice of this deep learning architecture. After training, the model can produce ET maps using only Sentinel-2 and ancillary data as input. High-resolution evapotranspiration maps are valuable tools in urban planning and the strategic design of green infrastructure, enabling climate-resilient planning for cities. They facilitate precise identification of areas that experience significant heat stress, known as urban heat islands (UHI), due to the lack of green infrastructure (GI). By highlighting urban heat islands, targeted green interventions can be introduced to mitigate these effects while enhancing natural cooling mechanisms, such as cooling corridors. Additionally, these maps are useful for monitoring larger green spaces, such as green roofs and parks, to assess their vitality over time. This ensures the long-term effectiveness of green infrastructure, maintaining its cooling benefits and enhancing the quality of urban living spaces. In the framework of the project, the usefulness of the produced ET maps is assessed through real use cases in the Vienna city area. The generated data and resulting information products are maintained as STAC collections (time-series data cubes) and made accessible, spatially disaggregated to block/district/commune level, via both a RESTful API and an interactive WebGUI/GIS. This functionality will be provided through the EODASH ecosystem, which allows visualization of a multitude of heterogeneous data sources, from serverless to OWS (OGC Web Services), providing interactive visualization, process triggering, and custom results display.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Mapping changes within urban areas: what is possible with spaceborne SAR sequences?

Authors: Babak Memar, Luigi Russo, Silvia Liberata Ullo, Antonietta Sorriso, Paolo
Affiliations: Sapienza University Of Rome, University of Pavia, University of Sannio
Urban areas are subject to multiple and very different changes, in both a two- and three-dimensional sense, mostly as a consequence of human activities such as urbanization, but also because of catastrophic and sudden events such as earthquakes, landslides, or floods. Using Synthetic Aperture Radar (SAR) sequences, it is possible to cope with both types of changes by combining interferometric coherence and backscatter amplitude, providing a semantically meaningful analysis of the changes detected in both city inner cores and suburban areas. Specifically, to detect multi-dimensional changes in urban areas from a stack of repeat-pass SAR data, amplitude and coherence time series can be jointly considered. SAR amplitude is used to extract changes in urban extent, i.e., in 2D, while interferometric coherence is sensitive to the presence of buildings and to their size, i.e., to 3D changes. Indeed, a procedure aiming to solve the overall problem of urban change detection, and able to discriminate among different change patterns, should on the one hand be able to track changes to the urban extent, and on the other hand be capable of recognizing changes due to upgrades of built-up/recreational areas (e.g., roof reconstruction for old buildings), as well as the enlargement or upgrade of specific built-up elements. Examples are a change in the number of floors of a single building, or the transformation of a residential block from traditional to more modern buildings. For instance, by means of such a procedure, different urbanization processes, such as infill, edge-expansion, and outlying growth, were easily detected and recognized, as described for a large subtropical city in South America over the last decade (2010–2021).
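The interferometric coherence underlying this analysis has a simple sample estimator. The sketch below evaluates it over a single patch of two co-registered complex SLC images; in operational processing it is computed in a moving window across the whole scene:

```python
import numpy as np

def coherence(s1, s2):
    """Sample interferometric coherence of two co-registered complex SLC
    patches: |sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2), in [0, 1].
    Values near 1 indicate stable scatterers (e.g. intact buildings); low
    values indicate decorrelation, e.g. from construction or demolition."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return float(num / den)

# Toy patches: an unchanged scene (coherence ~ 1) versus a fully decorrelated
# one (coherence near 0 for enough independent samples).
rng = np.random.default_rng(1)
stable = rng.normal(size=64) + 1j * rng.normal(size=64)
changed = rng.normal(size=64) + 1j * rng.normal(size=64)
```

Tracking this quantity through a multi-year stack, alongside the amplitude, is what yields the 2D/3D change time series discussed above.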
The research published in that work made it possible to depict urban cover over time and detect new built-up zones, quantify the intensity and extent of land take, describe, through trajectory analyses, urban cover and spatial configuration over the last decade in the overall city and in sectors characterized by different degrees of soil sealing, and eventually identify different urban expansion processes across the city. Moreover, the use of multiple tracks from the same sensor, its ascending and descending modes, or the exploitation of different sensors with different bands, spatial resolutions, and revisit times opens the path to worldwide exploitation of this wealth of information. Finally, the urgent need for semantic recognition of these multidimensional temporal patterns can be better addressed using Deep Learning (DL) techniques. In particular, DL-based methodologies to extract, depending on the spatial resolution of the sensor, the total built-up area or individual building footprints, and the average height in a block or the height of each building, have proved to provide good enough results to help recognize the peculiarities of the changes highlighted by a specific temporal pattern, to the point of being able to discriminate among constructions, demolitions, and renovations in different parts of the same city. DL has also proved useful for the segmentation of the temporal backscatter and/or coherence sequences to recognize when specific events occurred.
By comparing results achieved over the cities of Berlin and Milan using multi-year Sentinel-1, COSMO-SkyMed, and TerraSAR-X sequences, this presentation aims to show the currently achievable results, as well as the limits and the missing steps still to be implemented, for the characterization of urban areas at the global level in a truly multitemporal fashion, providing an additional layer of information beyond what is obtainable by considering changes as differences between classification maps of built-up structures or human settlements in different years.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Developing a Spectral Library of Urban Materials to Support Climate-Responsive Cities

Authors: Giannis Lantzanakis, Nektarios Chrysoulakis, Andreas Christen, Sue Grimmond, Joern Birkmann
Affiliations: Remote Sensing Lab, Institute of Applied and Computational Mathematics, Foundation for Research and Technology Hellas (FORTH), Albert-Ludwigs-Universität Freiburg, Environmental Meteorology, University of Reading, Urban Micromet, University of Stuttgart, Institute of Spatial and Regional Planning
Identifying urban surface materials is a crucial aspect of Earth Observation, as it has the potential to support urban planning and climate resilience. The radiative and thermal properties of construction materials significantly influence the urban microclimate, particularly by affecting the urban surface radiation budget. Materials such as asphalt and concrete absorb and store large amounts of energy, contributing to urban overheating. In recent years, new urban construction materials have been developed for applications such as cool roofs, permeable pavements, and reflective coatings. These materials are designed to reduce heat absorption and promote more sustainable urban environments. Accurate mapping of both traditional and new materials is therefore critical for urban planners to design and implement effective cooling strategies, optimize green infrastructure, and enhance energy efficiency. Each material type exhibits distinct behaviour across the electromagnetic spectrum, influenced by factors such as chemical composition, texture, and orientation. The various types of materials can be detected from space mainly by examining their interactions with solar radiation. Robustly analyzing spectral signatures from multispectral (MS) and hyperspectral (HS) satellite imaging is challenging in urban environments due to the varied composition of artificial materials and the presence of mixed pixels. Alternatively, in-situ HS reflectance signatures with spectral resolution finer than 5 nm provide higher spectral detail and can be down-sampled to match any available HS or MS satellite. However, existing in-situ spectral libraries, typically based on spectroradiometer measurements, often fall short in providing the spectral diversity necessary for training robust machine learning models for detailed urban surface cover classification.
To bridge this gap, an urban hyperspectral library has been developed as part of the urbisphere project to include high-resolution spectral data collected from a wide range of urban materials across Europe, covering a spectral range from 350 nm to 2500 nm. The HySpex Mjolnir VS-620 hyperspectral camera, extensively employed in urbisphere field campaigns, provides detailed hyperspectral measurements of complex surfaces, including vertical facades and shaded areas, which were previously challenging to capture using field spectroradiometers. In this study, hyperspectral signatures from the urbisphere library were adjusted to align with the multispectral bands of the WorldView-3 and Sentinel-2 satellites and then used to train an SVM classifier. The trained models were subsequently applied to classify WorldView-3 and Sentinel-2 imagery acquired over the city of Heraklion, Greece. The results highlight the enhanced capabilities of the library, demonstrating accurate identification of urban surface materials while eliminating reliance on labor-intensive, image-based end-member extraction.
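The down-sampling of library spectra to satellite bands can be sketched as a weighted average under each band's spectral response function (SRF). The Gaussian SRFs, band centers, and FWHMs below are placeholders for illustration, not the actual WorldView-3 or Sentinel-2 responses:

```python
import numpy as np

def resample_to_bands(wl, refl, centers, fwhms):
    """Down-sample a fine-resolution reflectance spectrum to broader satellite
    bands by averaging under Gaussian spectral response functions.

    wl, refl: wavelengths (nm) and reflectances of the library spectrum.
    centers, fwhms: band centers and full-widths-at-half-maximum (nm)."""
    out = []
    for c, fwhm in zip(centers, fwhms):
        sigma = fwhm / 2.3548                       # FWHM -> Gaussian sigma
        srf = np.exp(-0.5 * ((wl - c) / sigma) ** 2)
        out.append(float(np.sum(srf * refl) / np.sum(srf)))
    return np.array(out)

wl = np.arange(350.0, 2500.0, 5.0)     # 5 nm library sampling, as in the abstract
flat = np.full_like(wl, 0.3)           # spectrally flat target for a sanity check
bands = resample_to_bands(wl, flat, centers=[490.0, 665.0, 842.0], fwhms=[65.0, 30.0, 115.0])
```

The band-resampled signatures, computed with the sensor's published SRFs, are what feed the SVM training described above.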
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Large-Scale Application of Very High-Resolution Orthophotos for Mapping Impervious Surfaces: An Automated, AI-Based Approach in North Rhine-Westphalia, Germany

Authors: Jan-Philipp Langenkamp, Max Kreke, Jun.-Prof. Dr. Andreas Rienow
Affiliations: Ruhr-Universität Bochum
Impervious surface coverage, such as roads, pavements, and buildings, is closely linked with urban challenges including increased flood risk, intensified urban heat island effects, and land consumption. In response, Germany has set the goal of reducing land consumption for settlement and transportation purposes to less than 30 hectares per day by 2030, indirectly targeting the expansion of impervious surfaces. Currently, impervious surface statistics in Germany are primarily derived from approximations based on cadastral land use information, lacking comprehensive monitoring that is both spatially and thematically explicit. This data gap presents a significant challenge in accurately assessing and managing urban development impacts. Recent advancements in artificial intelligence (AI) have made very high-resolution digital orthophotos (DOPs) increasingly suitable as a data source for extracting detailed impervious surface information. Despite challenges posed by high spatial detail and low spectral resolution, state-of-the-art semantic segmentation models have achieved high segmentation accuracies in processing orthophotos. To address the need for very high-resolution impervious surface datasets, we developed an automated workflow that leverages the big data potential of the freely available DOPs of the federal state of North Rhine-Westphalia (NRW) in Germany. Our approach utilizes state-of-the-art deep learning segmentation algorithms, specifically the UNetFormer model, to create a spatially explicit dataset of impervious surfaces at 0.5 m resolution within the federal state. We utilized the UNetFormer model for binary semantic segmentation of impervious surfaces. This architecture combines Convolutional Neural Networks (CNNs) and Transformers to deliver high performance and computational efficiency in remote sensing applications. Our dataset comprised balanced DOPs resampled to a 0.5 m spatial resolution, paired with binary segmentation masks indicating impervious surfaces.
Since the DOPs of NRW are recorded asynchronously, covering both summer and spring periods, we ensured a balanced selection of examples from each timestamp. Target data were sourced from regional water associations and the city of Wuppertal, focusing on areas in the Ruhr region and Wuppertal. We segmented the imagery into 512 × 512 pixel patches, excluding misaligned or inaccurately annotated samples after visual inspection. To address variability in land use, acquisition periods, and seasonality, additional targets were manually digitized based on land use information from Germany’s Authoritative Real Estate and Cadastre Information System (ALKIS). We generated 2,282 patches in total, splitting them into 90% for training (2,053 patches) and 10% for validation (229 patches). We trained the UNetFormer model using a dual-loss strategy to enhance segmentation performance. The primary Dice Focal loss addressed high intra-class variability, while a secondary binary cross-entropy loss optimized intermediate features. Training employed the AdamW optimizer with an initial learning rate of 1e-4, adjusted via cosine annealing. Dropout (rate = 0.1) and weight decay (1e-4) were applied to prevent overfitting. Data augmentation techniques, such as horizontal/vertical flipping, random brightness-contrast adjustments, and Gaussian noise, were used to enhance model generalization. Training was conducted over 50 epochs. Our post-training workflow integrates the model into an automated, tile-based inference pipeline, generating segmentation masks for 1 km² tiles across NRW. This Docker-containerized pipeline encompasses data acquisition, model inference, post-processing, and data provisioning to create the final impervious surface dataset. It accesses input data via authoritative geodata Web Coverage, Web Map, and Web Feature Services.
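The dual-loss strategy can be sketched in simplified form. The snippet below combines a soft-Dice term with binary cross-entropy on predicted probabilities; the focal weighting of the actual Dice Focal loss is omitted for brevity, so this is an illustration of the idea rather than the project's implementation:

```python
import numpy as np

def dice_bce_loss(p, t, w_dice=0.5, eps=1e-6):
    """Combined soft-Dice + binary cross-entropy loss on predicted
    probabilities p against binary targets t. The Dice term handles class
    imbalance at the region level; BCE provides dense per-pixel gradients."""
    p = np.clip(p, eps, 1.0 - eps)
    bce = float(-(t * np.log(p) + (1 - t) * np.log(1 - p)).mean())
    dice = 1.0 - (2.0 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)
    return w_dice * dice + (1.0 - w_dice) * bce

# Sanity check: a near-perfect prediction scores far lower than a poor one.
t = np.array([1.0, 1.0, 0.0, 0.0])
good = dice_bce_loss(np.array([0.99, 0.98, 0.02, 0.01]), t)
bad = dice_bce_loss(np.array([0.10, 0.20, 0.90, 0.80]), t)
```

In the training setup described above, the BCE term is attached to intermediate features as an auxiliary supervision signal rather than to the final output alone.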
To improve dataset reliability and address the seasonal heterogeneity in NRW's DOP data, we refined the segmentation results using weighted superimposition. This approach combines model-generated logit mask outputs with cadastre-accurate ALKIS data from the corresponding Web Feature Service. The result is an 8-bit-integer GeoTIFF providing detailed spatial information on impervious surfaces. Since the DOPs are updated every two years on an asynchronous schedule, the result offers a temporal snapshot of impervious surfaces across NRW. To evaluate the model's performance and generalizability in impervious surface segmentation, we employed two distinct evaluation approaches. On the validation set, comprising 229 patches, the model achieved an overall accuracy (OA) of 95%, with an Intersection over Union (IoU) of 89% and an F1-score of 94% for the impervious surface class. To assess generalizability, we analysed the model's performance in three representative cities of NRW: Muenster, Dortmund, and Cologne. In the absence of comprehensive ground truth datasets for these cities, we employed a random sampling method, selecting 1,500 pixels per city for manual inspection and labelling on the DOP. The model exhibited robust performance across all cities, achieving an OA of 95% to 96%. For the impervious surface class, it attained an IoU of 82% to 87% and F1-scores ranging from 90% to 93%. The workflow processed a 1 km² DOP in approximately 7 seconds on an NVIDIA A40 GPU, implying around 70 hours for complete coverage of NRW. Precise identification of impervious surfaces is crucial for urban planning, hydrological modelling, and climate adaptation. Our approach fills a significant gap by providing spatially explicit datasets that enable more detailed analyses and statistics. Building upon our promising results, we aim to overcome current limitations in future developments. 
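The reported metrics (OA, IoU, F1) follow directly from a binary confusion matrix, as in this minimal numpy sketch; function and variable names are illustrative:

```python
import numpy as np

def binary_seg_metrics(pred, truth):
    """Overall accuracy, IoU, and F1 for the positive (impervious) class
    computed from two binary masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # correctly detected impervious pixels
    fp = np.sum(pred & ~truth)   # pervious pixels labeled impervious
    fn = np.sum(~pred & truth)   # missed impervious pixels
    tn = np.sum(~pred & ~truth)  # correctly rejected pixels
    oa = (tp + tn) / pred.size
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return oa, iou, f1

pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
oa, iou, f1 = binary_seg_metrics(pred, truth)  # oa=0.75, iou=0.5, f1=2/3
```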
Our primary focus will be addressing the constraints of the low temporal and spectral resolution inherent in DOPs. To achieve this, we propose integrating Sentinel-2 satellite data into our established workflow, using super-resolution techniques to provide higher temporal and spectral resolution. This integration aims at more frequent and comprehensive mapping of impervious surfaces.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: A.07.05 - POSTER - Monitoring and predicting surface water and flood dynamics

The socio-economic consequences of floods are rising rapidly, as floods are the most frequent and impactful weather-related disasters, affecting nearly 800 million people in the past decade and causing economic losses exceeding $300 billion. In this context, remote sensing has emerged as a critical tool for data collection and observation, especially in regions where field surveys and gauging stations are limited, such as remote areas and developing nations. The integration of remotely-sensed variables—like digital elevation models, river width, flood extent, water level, flow velocities, and land cover—into hydraulic models offers the potential to significantly enhance our understanding of flood processes and improve predictive capabilities.

Over recent decades, research has focused on optimising the use of satellite observations, supported by both government and commercial initiatives, and numerous datasets from airborne sensors, including aircraft and drones. Recent advancements in Earth observation (EO) have further enhanced the monitoring of floods and inland water dynamics, utilising optical imagers, Synthetic Aperture Radars (SARs), and Global Navigation Satellite System Reflectometry (GNSS-R) to detect surface water, even in densely vegetated regions. Radar altimeters now measure water levels over smaller lakes and rivers. However, despite these advancements, the update frequency and timeliness of most remote sensing data products are still limited for capturing dynamic hydrological processes, which hinders their use in forecasting and data assimilation. Additionally, spatial and temporal inconsistencies across different sensors pose challenges in creating integrated multi-sensor products, such as fused surface water and flood extent products, water volume estimates, and wetland maps.

The scientific community has increasingly recognized the potential of remotely-sensed data for calibrating and validating hydraulic models and for revolutionising real-time flood monitoring. With the expansion of open data from sources such as the European Space Agency (ESA), and the availability of more Earth observation data than ever before, this progress is expected to continue.

This session invites cutting-edge presentations on flood monitoring and mapping through remotely-sensed data, focusing on:

- Remote sensing data for flood hazard and risk mapping, including commercial satellite missions and airborne sensors (aircraft and drones);
- Remote sensing techniques for monitoring flood dynamics;
- The use of remotely-sensed data for calibrating or validating hydrological or hydraulic models;
- Data assimilation of remotely-sensed data into hydrological and hydraulic models;
- Enhancements in river discretization and monitoring through Earth observations;
- River flow estimation using remote sensing;
- Machine learning and deep learning-based flood mapping or predictions;
- Ideas for developing multi-satellite data products and services to improve the monitoring of flood and surface water dynamics.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Assessing the quality of a SAR-based flood mapping algorithm under flood and no-flood situations

Authors: Mark Edwin Tupas, Florian Roth, Bernhard Bauer-Marschallinger, Wolfgang Wagner
Affiliations: University of the Philippines, Technische Universität Wien
Most Earth-observation-based (EO) mapping operations rely on ancillary triggers, such as above-normal rainfall, inundation predicted by hydrologic models, or manually initiated event activation, to launch flood mapping workflows. This approach can lead to delays or completely missed flood events, resulting in missed opportunities to provide essential information for flood assessments. The Copernicus Emergency Management Service's (CEMS) Global Flood Mapping (GFM) component therefore deviates from this approach: CEMS GFM offers a completely automated workflow for all incoming Sentinel-1 imagery. Using this approach raises issues that have not previously been considered in the flood mapping literature. One notable issue is false labeling in no-flood situations. An abundance of flood mapping algorithms has been proposed in the literature, most of which supposedly show promising results. However, most assessment methodologies use classification accuracy metrics computed for flood events only, and some of these methods are known to be biased towards more significant flood events. Relying on such assessments alone may produce overfitted algorithms biased toward labeling floods, so applying these EO algorithms to non-flooded scenes often results in significant false alarms. Additionally, several well-known drivers cause marked false alarms in SAR datasets, including environmental causes such as dry soils, frozen ground, or changes in the vegetation cycle. While recent works have proposed improvements to the prevailing assessment paradigm, they have yet to address no-flood image results. Considering these issues, we have been analyzing a component of the GFM ensemble workflow, the TU Wien flood mapping algorithm, using a novel time-series approach to assess false alarms. First, this methodology entails individual analysis of each scene for its false positive rate (FPR), based on the assumption that no flood occurred. 
Plotting the FPR time series against ancillary information such as ERA5-Land (i.e., soil moisture and temperature) gives insights into the prevailing scene-level SAR backscatter dynamics that cause false positives. Moreover, the spatial patterns of false positives aggregated over the time series, i.e., the false positive percentage per pixel, provide valuable insights into which areas are susceptible to false labeling. Cross-referencing with land cover information identifies agriculture, grasslands, and bare soil areas as particularly troublesome at most sites across the globe. Further, agricultural areas often show plot-level distinctions, suggesting that particular crops and agricultural practices affect false positive susceptibility. This study further highlights the false positive issue in fully automated operations, reiterating the need for further developments in algorithm robustness and for interventions such as exclusion masking and sensitivity flagging. We conclude that this assessment methodology is necessary for fully automated monitoring algorithms.
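The two quantities described above, a per-scene FPR time series and a per-pixel false positive percentage, can be sketched in numpy. This is a minimal illustration assuming binary flood masks for acquisitions known to be flood-free; all names are hypothetical:

```python
import numpy as np

def scene_fpr(flood_mask):
    """FPR of one scene under the no-flood assumption:
    every pixel labeled 'flood' counts as a false positive."""
    return float(flood_mask.mean())

def fp_percentage_per_pixel(mask_stack):
    """Fraction of acquisitions in which each pixel was falsely labeled flood
    (time axis first)."""
    return mask_stack.mean(axis=0)

# Stack of binary flood masks for 4 no-flood acquisitions over a 2x2 area
stack = np.array([
    [[0, 1], [0, 0]],
    [[0, 1], [0, 0]],
    [[0, 0], [0, 0]],
    [[0, 1], [1, 0]],
])
per_scene = [scene_fpr(m) for m in stack]   # FPR time series, one value per scene
per_pixel = fp_percentage_per_pixel(stack)  # top-right pixel flagged in 3 of 4 scenes
```

In practice the per-scene series would be plotted against ERA5-Land soil moisture and temperature, and the per-pixel map cross-referenced with land cover.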

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Enhancing Flood Mapping With Polarimetric Radar Beyond Backscatter Intensity

Authors: Felix Kasiti Isundwa, Dr Armando Marino, Dr Andrea Berardi, Dr Isabella Bovolo, Peter Hunter, Dr Claire Neil, Dr Cristian Silva Perez, Dr Thiago S. F. Silva
Affiliations: University of Stirling, The Open University, Durham University, Keen AI, Scottish Environment Protection Agency
Introduction: The occurrence of flood disasters has increased globally in the past decade. Since floods account for over 70% of annual natural disasters globally, understanding the occurrence, frequency, and extent of floods is vital for planning and mitigation. With the changing climate, the frequency and severity of floods are likely to increase, and with them the risk of severe flooding. It is thus vital to obtain accurate information on the occurrence and severity of floods, and remote sensing provides a useful platform for identifying inundated areas over large scales. Remote sensing platforms using optical data can be useful for identifying flooded areas. Methodologies utilizing indices such as the Modified Normalized Difference Water Index (MNDWI), Normalized Difference Water Index (NDWI), and Normalized Difference Moisture Index (NDMI) have been used to extract water surfaces (Ji, Zhang and Wylie, 2009). However, optical data are limited by cloud cover, can only be acquired during daytime, and cannot identify flooding under vegetation. Synthetic Aperture Radar (SAR) can collect data at any time of day and in almost any weather, at the expense of a slightly more complex interpretation of SAR backscattering compared to reflectance. Flood extraction from SAR images has been accomplished using classification methods (supervised and unsupervised), the application of thresholds, or change detection on a backscatter intensity image. Although these methods offer varied levels of accuracy, the use of polarimetric data has been suggested to offer better performance (Chini et al., 2016; Pillai and Dolly, 2024). Polarimetry has shown better performance than backscatter intensity alone when applied in change detection methods (Chini et al., 2016). 
In this research, polarimetric SAR data were used to delineate flood-inundated areas for flood events that occurred in Imola, Italy, in May 2023, and in the Forth Catchment, Scotland, on 8 October 2023. This study demonstrates how the use of polarimetric data can significantly improve the accuracy of open flood detection and, potentially, of flood detection under vegetation. Methodology: There was severe flooding in Imola, Italy, in May 2023, resulting in significant loss of life and property (Ghiglione and Bettiza, 2023). The Sentinel-1 satellite of the European Union Copernicus programme acquired SAR images over this region during the flood. Three flood images were acquired by PlanetScope optical sensors on the same days as the SAR images; these image pairs were acquired on 2023-05-22, 2023-05-23, and 2023-05-27. The SAR data were downloaded from the Copernicus portal in Single Look Complex (SLC) format and belong to three orbits. Besides the flood images, we considered over 180 existing historical Sentinel-1 SLC SAR images for each orbit over the same geographical area for the optimisation of the reference image selection and for detector performance analysis. The analysis was carried out separately for each orbit, such that only images acquired from the same orbit were compared. The three orbits, referred to here as ‘frames’, were analysed; frames 1 and 2 were used for optimisation and frame 3 for testing. As an independent test site, the Forth catchment was also mapped. A flood occurred on 2023-10-08 along tributaries of the River Forth (the rivers Teith and Forth) (Kendon, 2023). During this flood, Sentinel-1 acquired data over the flooded areas, which were used in the analysis. The SAR data were pre-processed into the elements of the covariance matrices. Thereafter, five change detection methods utilising polarimetric and backscatter intensities were tested. The polarimetric change detectors were the Optimisation of Power Difference (OPDiff), with Diff1 and Diff2 representing the differences in maximum and minimum eigenvalues respectively, together with their corresponding alpha angles; the Optimisation of Power Ratio (OPRatio), with Rat1 and Rat2 representing the maximum and minimum eigenvalues respectively (Marino and Hajnsek, 2014); and the difference in the Cloude-Pottier eigenvalues, with LamD1 and LamD2 representing the maximum and minimum eigenvalues respectively (Cloude and Pottier, 1996). The backscatter intensity difference and ratio detectors (DiffVV, DiffVH, RatVV, and RatVH, representing the difference in VV and VH and the ratio of VV and VH respectively) were applied to non-polarimetric data. The change detection was applied by comparing two images acquired before and during the flood. Twelve outputs were generated from the change detectors: OPDiff {Diff1, Diff2, Diff1*sin(Alpha1), and Diff2*cos(Alpha2)}, OPRatio (Rat1 and Rat2), backscatter intensity ratio (RatVV and RatVH), difference (DiffVV and DiffVH), and Cloude-Pottier eigenvalue difference (LamD1 and LamD2). Results: Individual detector performance was first assessed based on the area under the receiver operating characteristic (ROC) curve. Besides comparing the detectors, the effect of the reference image on the accuracy of the results was assessed by using the mean of all images acquired in March for all years on record, as well as the first, second, and third images acquired before the flood. When the mean AUC of frames 1 and 2 was considered, Rat2 was the best detector, followed by Rat1. The backscatter intensity ratios RatVV and RatVH were third and fourth, with values over 0.8; all other detectors had values below this, with the difference-based detectors ranked lowest. The ranking from fifth place was as follows: LamD2, DiffVH, LamD1, Diff1*sin(Alpha1), Diff2*cos(Alpha2), DiffVV, Diff1, and Diff2. 
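The intensity-based detectors (ratio and difference) and the AUC-based comparison can be illustrated with a minimal numpy sketch on synthetic data. This is not the authors' polarimetric implementation; all names and parameters are assumptions:

```python
import numpy as np

def ratio_detector(pre, post, eps=1e-6):
    """Pre/post intensity ratio in dB; flooding lowers backscatter over open water,
    so large positive values flag a flood-like change."""
    return 10 * np.log10((pre + eps) / (post + eps))

def diff_detector(pre, post):
    """Simple pre-minus-post intensity difference."""
    return pre - post

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
pre = rng.gamma(2.0, 0.05, 1000)               # synthetic pre-flood intensities
post = pre.copy()
labels = np.zeros(1000, dtype=int)
labels[:200] = 1
post[:200] *= 0.1                               # flooded pixels: strong backscatter drop
auc_ratio = roc_auc(ratio_detector(pre, post), labels)
auc_diff = roc_auc(diff_detector(pre, post), labels)
```

On this idealized example both detectors separate the classes perfectly; on real scenes speckle and land cover effects produce the differences in AUC reported above.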
When overall detection performance was assessed by combining probability of false alarm, probability of detection, AUC, F1-score, and accuracy, broadly similar rankings were recorded. The analysis showed that Rat2 and RatVV ranked highest, each with an overall performance of over 85%; Rat1 was third (84%) and RatVH fourth (80%), while the rest of the detectors had values below 80%. We then assessed how the reference image used in the change detection affected the flood detection results. It was clear that the selection of the reference image influences the detection results. Specifically, selecting the last image before a flood did not yield the highest accuracy; the second image before the flood was better in our scenario. Interestingly, when we averaged all images acquired in March for all years on record, the highest accuracy and the lowest probability of false alarm were recorded. Visual inspection of the detection results yields similar findings to the quantitative assessment: Rat1 and Rat2 depict the flooded area better, with minimal overprediction, while the difference-based detectors show significantly more overprediction. When the best detector (Rat2) was applied to the flood event in Scotland, its performance was comparable with existing operational flood mapping services of the Copernicus Emergency Management Service. This demonstrates the potential of applying this detector operationally for flood extent extraction and the advantage of using polarimetric data over backscatter intensity alone. Acknowledgements: This work was supported by the Natural Environment Research Council via an IAPETUS2 PhD studentship held by Isundwa Kasiti Felix (grant reference NE/S007431/1). We used publicly archived Sentinel-1 data accessed from https://search.asf.alaska.edu/#/ on 1 September 2023. 
PlanetScope high-resolution images were provided by Planet Labs PBC through the Education and Research Standard program at https://www.planet.com/explorer/.
References:
Chini, M. et al. (2016) ‘SAR coherence and polarimetric information for improving flood mapping’, 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 7577–7580. Available at: https://api.semanticscholar.org/CorpusID:20931659.
Cloude, S.R. and Pottier, E. (1996) ‘A review of target decomposition theorems in radar polarimetry’, IEEE Transactions on Geoscience and Remote Sensing, 34(2), pp. 498–518. Available at: https://doi.org/10.1109/36.485127.
Ghiglione, D. and Bettiza, S. (2023) Italy floods leave 13 dead and force 13,000 from their homes. Available at: https://www.bbc.co.uk/news/world-europe-65632655 (Accessed: 10 October 2023).
Ji, L., Zhang, L. and Wylie, B. (2009) ‘Analysis of Dynamic Thresholds for the Normalized Difference Water Index’, Photogrammetric Engineering & Remote Sensing, 75, pp. 1307–1317. Available at: https://doi.org/10.14358/PERS.75.11.1307.
Kendon, M. (2023) Exceptional rainfall in Scotland, 6 to 7 October 2023. Available at: https://www.metoffice.gov.uk/binaries/content/assets/metofficegovuk/pdf/weather/learn-about/uk-past-events/interesting/2023/2023_07_scotland_rain.pdf.
Marino, A. and Hajnsek, I. (2014) ‘A change detector based on an optimization with polarimetric SAR imagery’, IEEE Transactions on Geoscience and Remote Sensing, 52(8), pp. 4781–4798. Available at: https://doi.org/10.1109/TGRS.2013.2284510.
Pillai, L.G. and Dolly, D.R.J. (2024) ‘Flood detection using SAR images: A review’, Proceedings of the International Conference on Research Advances in Engineering and Technology - ITechCET 2022 [Preprint]. Available at: https://api.semanticscholar.org/CorpusID:267354432.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Observing Inland Water Body Dynamics Using Capella Space Commercial X-band Synthetic Aperture Radars

Authors: Mohammad Al-Khaldi, Professor Joel Johnson, Professor Emre Ertin, Dr Nithin Sugavanam
Affiliations: The Ohio State University
Observations of inland water at meter-level resolutions on a global scale can be conducted using optical systems such as Landsat and Planet. Despite the effectiveness of these systems, their utility over more complex scenes, where the visibility of water bodies is occluded by vegetation or cloud cover, remains limited. The steady increase in the availability of high-resolution microwave X-band observations of the Earth's surface provided by an emerging class of ‘SmallSat’ SAR observatories offers an alternative avenue where the concurrent requirements of all-weather operation and sub-meter spatial resolutions may be met. This presentation overviews the utility of a commercial X-band SAR dataset, a NASA CSDA-purchased archive of Capella Space collections, for sensing various properties of inland water bodies and their dynamics. To examine the sensitivity to dynamic changes in surface water extent, a sequence of acquisitions was made over scenes known to experience periodic variability, such that "dry" and "wet" phase observations are available over scenes with a variety of land classes and topographic conditions. Assessments of the level of sensitivity to these changes, any benefits these acquisitions offer in comparison to reference datasets, and their limitations will all be discussed. The sensitivity of the imagery to other properties, like surface wind speeds, will also be explored using the example of a series of collections conducted over the surface of Trout Lake, Wisconsin, USA. The choice was motivated by the lake's 16.079 km2 surface area, such that NRCS (Normalized Radar Cross Section) statistics over a homogeneous 'water-only' subset may be examined, as well as by the availability of reference wind speed data from nearby in situ sensors. In addition to utilizing gauge data as a reference, a climatology of historical wind information was used to schedule SAR acquisitions at the times of day with the highest likelihood of a specified wind speed. 
The series of acquisitions obtained successfully sampled winds ranging from 2 to 7 m/s. The dataset acquired also provides the opportunity to explore calibration stability across varied receivers, observation geometries, and imaging modes. The presentation will also place particular emphasis on these considerations in the context of monitoring inland water bodies and their dynamics.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Can I Trust my Flood Maps? A Comprehensive Analysis of Validation Strategies

Authors: Tim Landwehr, Prof. Dr. Antara Dasgupta, Prof. Dr. Björn Waske
Affiliations: Osnabrück University, RWTH Aachen University
Flood maps are recognized as essential tools for disaster management, shaping decisions on emergency aid, adaptation strategies, and insurance premiums. However, inconsistent validation practices and a lack of standardization frequently result in unreliable accuracy assessments. Such shortcomings undermine user trust and hinder the broader adoption of these tools. A key challenge in flood map validation is class imbalance: non-flooded areas significantly outnumber flooded areas and distort commonly used accuracy metrics, complicating the comparison of flood mapping methods across events. To assess which metrics are most robust to this issue and enable objective map evaluation, flood maps derived from Sentinel-1 synthetic aperture radar data were analyzed, with the Copernicus Emergency Management Service flood maps serving as benchmarks. Two primary aspects of validation were examined: sampling design and analysis. Drawing upon established practices in land cover mapping, various sampling strategies were evaluated, including random and systematic sampling with both equal and proportional allocation. The robustness of key accuracy metrics, such as Overall Accuracy, Matthews Correlation Coefficient, Precision, Recall, and the F1-Score, was examined, particularly with regard to their sensitivity to class imbalance. Results demonstrated that stratified sampling, which ensures balanced representation of both flooded and non-flooded areas, is crucial for producing reliable accuracy estimates. It was further observed that the choice of accuracy metric depends on the intended use of the flood map: (1) Overall Accuracy provides a general measure of performance. (2) Metrics such as the F1-Score, commonly used in flood mapping, require optimized sample designs to mitigate their tendency to overestimate flood extent. (3) Bootstrapping was identified as a valuable technique for assessing the stability and variability of accuracy metrics. 
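The bootstrapping of an accuracy metric such as the F1-score can be sketched as follows. This is a minimal numpy illustration on a synthetic, imbalanced sample; the sample size, class fraction, and noise level are assumptions:

```python
import numpy as np

def f1_score(pred, truth):
    """F1 for the flooded (positive) class from two binary label vectors."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    return 2 * tp / (2 * tp + fp + fn)

def bootstrap_f1(pred, truth, n_boot=1000, seed=42):
    """Resample validation pixels with replacement to estimate the spread of F1."""
    rng = np.random.default_rng(seed)
    n = len(pred)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        scores.append(f1_score(pred[idx], truth[idx]))
    return np.percentile(scores, [2.5, 97.5])  # 95% interval of the metric

# Imbalanced toy sample: ~5% flooded pixels with 2% label noise in the map
rng = np.random.default_rng(0)
truth = (rng.random(2000) < 0.05).astype(int)
pred = truth.copy()
flip = rng.random(2000) < 0.02
pred[flip] = 1 - pred[flip]
lo, hi = bootstrap_f1(pred, truth)
```

The width of the resulting interval indicates how stable the reported F1 is under the given sample design; repeating this under stratified versus proportional sampling exposes the sensitivity the abstract describes.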
A multi-criteria assessment approach combining global and class-specific metrics is deemed necessary to ensure a comprehensive evaluation of flood map accuracy. Results from this study pave the way for the development of globally standardized guidelines for flood map validation, which would improve the consistency, reliability, and trustworthiness of EO-derived flood risk and recovery products. By addressing the challenges posed by class imbalance and inconsistent validation practices, the research seeks to unlock the full potential of EO data for enhancing flood resilience and recovery efforts on a global scale.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Urban Flood Mapping Using Sentinel-1 and Sentinel-2 and a Capsule Network-Based Deep Learning Approach

Authors: Pouya Ahmadi, Mohammad Javad Valadan Zoej, Fatemeh Saba, Mehdi Mokhtarzade, Nazila Kardan, Olena Dubovyk
Affiliations: Faculty of Geomatics Engineering, K.N. Toosi University of Technology; Institute of Geodesy and Photogrammetry, Technische Universität Braunschweig; Department of Civil Engineering, Azarbaijan Shahid Madani University; Institute of Geography, University of Hamburg
Flooding is one of the most severe natural disasters worldwide, causing irreversible damage to economies, infrastructure, and human lives. Effective disaster management relies on accurately mapping flood-prone areas, which is crucial for timely responses and mitigation efforts. Despite advances in flood analysis using deep learning models and satellite imagery, accurate mapping remains challenging in urban areas affected by meandering rivers or waterlogging, where traditional Convolutional Neural Networks (CNNs) often struggle to preserve critical spatial relationships. To address these challenges, this study proposes a novel approach that combines a CNN with a Capsule Network (CNN-CapsNet) for urban flood mapping, where the CapsNet utilizes capsules, groups of neurons that encode spatial properties, to effectively preserve these relationships. In particular, the method integrates Sentinel-1 radar and Sentinel-2 optical datasets, employing a dual deep CNN architecture as a feature extractor. This architecture is designed to independently process data from the two satellite datasets and generate corresponding initial feature maps, which are then passed to a designed CapsNet. The feature maps are processed by both low-level and high-level capsules to capture feature presence and spatial relationships. The output is then fed into a fully connected layer for the classification task. The performance of the CNN-CapsNet model was evaluated in two flood-prone regions of Iran: Aq Qala and Khuzestan. The model was trained and validated using datasets from the March 2019 flood in Aq Qala and the 2020 flood in Khuzestan. The results demonstrated that the model performed exceptionally well, achieving accuracy, recall, and F1 scores of 97.21%, 98.34%, and 97.97%, respectively. These findings underscore the potential of CNN-CapsNet as a superior alternative to traditional CNNs for urban flood mapping. 
Its improved accuracy and ability to handle complex spatial relationships highlight its value in post-disaster emergency management, particularly in areas where conventional methods encounter difficulties with data complexity.
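For context, the capsule mechanism rests on the "squash" nonlinearity from the original Capsule Network formulation, which maps each capsule vector to a length in [0, 1) while preserving its orientation. The sketch below is a generic numpy illustration of that function, not the authors' CNN-CapsNet architecture:

```python
import numpy as np

def squash(vectors, axis=-1, eps=1e-8):
    """CapsNet squash: the output length can be read as the probability that
    the entity exists, while the vector orientation encodes its spatial properties."""
    sq_norm = np.sum(vectors ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * vectors

caps = np.array([[3.0, 4.0], [0.1, 0.0]])  # two capsules with 2-D pose vectors
out = squash(caps)
lengths = np.linalg.norm(out, axis=-1)     # strong capsule near 1, weak capsule near 0
```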

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Benefiting from the synergy of SAR, laser and wide-swath altimetry and a hydrodynamic model: the Arctic Lena River as a case study

Authors: Elena Zakharova, Inna Krylenko, Eugeny Filatov, Alexei
Affiliations: Eola, IWP RAN
In many regions of the Arctic, where climate warming results in degradation of permafrost, an increase in river water discharge has been observed. Destabilization of river banks and modification of water flow lead to intensified bank and channel erosion. At the same time, large Arctic rivers are among the main transport arteries for people living in these remote regions. When navigation conditions become unstable, closer monitoring is required. However, the water level gauging network for these river stretches is insufficient for safe navigation. In this context, hydrodynamic models combined with satellite observations are probably the only solution for accurately reproducing water and inundation regimes. They can also be used to run experiments to predict flow and navigation conditions under changing climate and fluvial environments. For Arctic rivers, the use of satellite altimetry to provide water level measurements in ungauged reaches has so far been limited by a lack of certainty in the interpretation of the altimetry signal over freshwater ice and by the high deviation of altimetric level retrievals from in situ observations in winter and early spring. However, in mid and high latitudes the most remarkable changes due to climate warming are observed in the winter water and ice regime, exactly when satellite altimetry estimates are least reliable. From this perspective, we face the challenge of improving our understanding of the altimetry signal behavior in winter in order to improve water level estimates over seasonally frozen rivers. For this, we use the simulations of a hydrodynamic model, satellite altimetry measurements, and in situ observations. For a 300 km-long reach of the Middle Lena River in Siberia, the 2D hydrodynamic model STREAM-2D was adapted using a detailed field bathymetric survey. The Forest And Buildings removed Copernicus DEM (FABDEM) was used for floodplain topography. 
The reach is anabranching, characterized by numerous sand bars, islands, and secondary channels. The model was forced by discharges provided by the gauging station situated at the upper limit of the modeled section. Three water level gauging stations are located within the first 100 km of the modeled section, providing data for validation of the model. The other 200 km, in the middle and lower reaches, are not covered by the observational network, so satellite observations become the main source of data here. Data from the SAR altimetry missions Sentinel-3A, Sentinel-3B, and Sentinel-6, the laser altimeter ICESat-2, and the wide-swath SWOT mission were used for verification of the model simulations in different hydrological phases and in zones with different water regimes (straight channels, meanders, secondary channels, floodplain). An experimental model run with the ice module switched off showed that in winter the Sentinel-3 altimetric signal is reflected mostly from the top of hummocked ice. The signal returns to the elevation of the water surface only when the open water fraction during the spring melt reaches 50%. The lower reach of the modeled section is located in the backwater zone created by the confluence of the Lena and Aldan Rivers. The modeled length of the backwater zone is 50 km and corresponds well to the observations of the ICESat-2 and SWOT satellites. We also present a comparison of the along-river water surface elevation profiles simulated by the model and measured by the SWOT mission, and discuss how the SWOT measurements perform when the river is frozen. This study is supported by the CNES TOSCA SWIRL and RSF № 24-17-00084 grants.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Monitoring tropical wetland storage using the SWOT mission

Authors: Yue Xu, Frédéric Frappart, Cassandra Normandin, Renaud Hostache, Luc Bourrel, Sakaros Bogning, Dr. Jean-Pierre Wigneron
Affiliations: ISPA, UMR 1391 INRAE/Bordeaux Sciences Agro, ESPACE-DEV, GET, University of Douala
The recently launched Surface Water and Ocean Topography (SWOT) mission is the first satellite to simultaneously provide gridded estimates of surface water extent and water level at medium to high spatial resolutions, with a 21-day repeat period. This information is particularly relevant for estimating surface water storage changes in wetlands and their complex hydrodynamics, which are still poorly known from regional to global scales. In this study, one and a half years of SWOT data will be analysed in terms of spatio-temporal patterns of flooding in two large river basins: the Ogooué River in Gabon, Equatorial Africa, and the downstream part of the Mekong River in South-East Asia. These two study cases were selected for their environmental conditions, characterized by the presence of flooded forests and rice paddies, respectively. In both basins, comparisons with external datasets will be performed, including in situ gauge station records, time series of water levels from Low Resolution Mode (LRM) and Synthetic Aperture Radar (SAR) radar altimetry missions, and time series of wetland extent maps derived from different Earth Observation (EO) missions. This study will present a comparison of the surface water extent, water levels, and derived surface water storage in the wetlands of these two large river basins against external sources. It will especially focus on the capability of SWOT to retrieve surface water extent and water levels in forested areas and in the presence of flooded arable fields used for growing semiaquatic crops. An analysis of complementary parameters, such as the backscattering coefficient, the dark water fraction, and the interferometric coherence, will be performed to define criteria for better delineating surface water extent and filtering out possible outliers present in the water level maps. This dataset will then be used to study the filling and draining of the considered wetlands and to estimate the residence time of water in them.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: A Decadal Survey of Flood Inundation in Afghanistan Using Sentinel-1 SAR Amplitude and Coherence Analysis

Authors: M. Sulaiman Fayez Hotaki, Dr.-Ing. Mahmud Haghshenas Haghighi, Prof. Dr. Mahdi Motagh
Affiliations: Institute of Photogrammetry and Geoinformation, Leibniz University Hannover, Helmholtz Centre Potsdam–GFZ German Research Centre for Geosciences
The Sentinel-1 mission has emerged as an alternative to optical remote sensing data for flood mapping, thanks to its regular global Synthetic Aperture Radar (SAR) data acquisition since 2014 and its ability to penetrate clouds, a common challenge in optical sensing. Typically, the amplitude of SAR images is used for flood mapping, as water exhibits a distinct dark signature that clearly distinguishes it from non-water areas. The advent of cloud-based platforms, such as Google Earth Engine, and the availability of data with short latency have increased data accessibility and the utilization of Sentinel-1 amplitude data for near-real-time flood mapping. In contrast, the phase information of SAR images is typically used in SAR interferometry and generally not employed for flood mapping, although it holds the potential to provide insights in regions where distinguishing between water and non-water areas using amplitude information is less straightforward. In this study, we combine Sentinel-1 amplitude and interferometric coherence to delineate floods across various environments. The amplitude analysis is conducted using Ground Range Detected (GRD) images. The processing includes Radiometric Terrain Correction (RTC) to normalize the impact of topography and reduce the effects of topographic variations in the flood detection process. Furthermore, we utilize digital elevation models as supplementary data to enhance water body detection. Our workflow is implemented in the Google Earth Engine (GEE) platform, allowing for the analysis of long-term data prior to the flood to learn the statistics of the amplitude and to better identify flood-related changes. The coherence analysis is performed using Single Look Complex (SLC) data with 12-day temporal baselines. We estimate the coherence using the Alaska Satellite Facility's HyP3 system.
Then, we assess the coherence differences between various pairs and, through analysis, distinguish the coherence changes associated with natural coherence decay from the coherence loss associated with flooded areas. We applied the method to various floods between 2018 and 2024 across Afghanistan. The results indicate that amplitude is generally successful in delineating flood extents; in vegetated areas in particular, it contributed significantly to flood delineation. However, integrating the coherence analysis greatly improved the outputs, especially in urban areas, where amplitude data alone failed to delineate flood boundaries accurately. Validation using high-resolution satellite imagery demonstrated F1 scores ranging from 67% to 82.7%, illustrating the method's ability to handle flood delineation in complex environments.
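The decision logic described in this abstract (an amplitude drop for open-water flooding, a coherence loss for urban flooding) can be sketched as follows. The grids and thresholds below are purely illustrative stand-ins, not values or code from the study:

```python
import numpy as np

# Hypothetical 2x2 toy grids: backscatter amplitude (dB) and coherence in [0, 1].
pre_amp  = np.array([[-8.0, -7.5], [-9.0, -6.0]])   # pre-flood backscatter
post_amp = np.array([[-16.0, -7.6], [-9.2, -6.1]])  # co-flood backscatter
coh_pre  = np.array([[0.80, 0.90], [0.85, 0.90]])   # pre-event coherence pair
coh_co   = np.array([[0.75, 0.30], [0.80, 0.88]])   # co-event coherence pair

AMP_DROP_DB = 6.0   # illustrative: open-water flooding darkens the scene
COH_DROP    = 0.4   # illustrative: urban flooding destroys coherence

open_water_flood = (pre_amp - post_amp) >= AMP_DROP_DB   # dark-water signature
urban_flood = (coh_pre - coh_co) >= COH_DROP             # coherence-loss signature
flood_mask = open_water_flood | urban_flood
```

Here pixel (0, 0) is flagged via the amplitude drop and pixel (0, 1) via coherence loss alone, mimicking the urban case where amplitude by itself fails.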
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Detecting temporal variations of river water surface slopes from ICESat-2

Authors: Linda Christoffersen, Prof. Peter Bauer-Gottwein, Prof. Louise Sandberg Sørensen, Dr. Karina Nielsen
Affiliations: Technical University Of Denmark, University of Copenhagen
ICESat-2 is a laser altimetry satellite launched in 2018. Its six laser beams are grouped into three pairs spaced across a 6.6 km wide swath, providing simultaneous surface elevation measurements at multiple positions. This unique beam configuration enables the calculation of local and time-varying water surface slopes (WSS) along river sections. These local river water surface slopes can be assimilated into hydraulic models and thereby contribute to improved river discharge estimates. To leverage ICESat-2's capabilities, we developed software to process ICESat-2 data and estimate local WSS where the satellite ground track intersects rivers catalogued in the SWOT River Database. The method is designed to work independently of the intersection angle between the satellite track and the river centerline, ensuring robustness under varying geometric conditions. The performance of this method was demonstrated using four years of ICESat-2 data over the Amur River in Asia. This analysis produced 7306 slope estimates, ranging from 0.6 cm/km to 651.04 cm/km, across 1693 river reaches. The median standard error for these estimates was 0.5 cm/km, highlighting the accuracy of the approach. On average, four slope estimates were obtained per river reach, with some reaches yielding as many as 22 estimates. The availability of these high-resolution WSS measurements enables the identification of river reaches where slopes vary over time. Temporal changes in WSS can result from dynamic processes such as flood waves or backwater effects caused by slope breaks, cross-sectional changes, or tributary junctions. This study demonstrates the utility of ICESat-2 for monitoring time-variable WSS and highlights the potential of these data for advancing hydrological research.
Furthermore, the processing methodology has been automated in the ICE2WSS R package, enabling efficient and scalable analysis of ICESat-2 data for rivers worldwide.
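At its core, the slope estimation step amounts to fitting a line to water surface elevation versus along-river distance at the beam crossings. The numbers below are hypothetical; the actual ICE2WSS processing (geolocation, outlier filtering, error propagation) is far more involved:

```python
import numpy as np

# Hypothetical crossing: along-river distance (km) and WSE (m) at six beam intersections.
chainage_km = np.array([0.0, 1.1, 2.0, 3.3, 4.1, 5.2])
wse_m = np.array([105.000, 104.989, 104.980, 104.967, 104.959, 104.948])

# Least-squares fit WSE = a * x + b; slope reported in cm/km, positive downstream.
a, b = np.polyfit(chainage_km, wse_m, 1)
slope_cm_per_km = -a * 100.0
```

For these invented elevations the fit yields a downstream slope of 1 cm/km, i.e. near the lower end of the range reported for the Amur River.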
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Automatic Detection of Water Pans in Agropastoral Areas of Taita Taveta County, Kenya

Authors: Pauline Ogola, Mr. Ian Ocholla, Mr Ilja Vuorinne, Mr James Mwadime, Dr Gretchen Gettel, Dr Janne Heiskanen, Dr Petri Pellikka
Affiliations: University of Helsinki, IHE Delft Institute for Water Education, Department of Water Systems and Ecosystems, Finnish Meteorological Institute, University of Nairobi, Wangari Maathai Institute for Environmental and Peace Studies, Taita Environmental Research and Resource Arc
Water scarcity is a critical challenge in semi-arid regions of sub-Saharan Africa, where livelihoods heavily depend on limited water resources. In Kenya, 80% of the land is classified as arid or semi-arid, supporting approximately 10 million people and contributing 15% to the national GDP through livestock and agriculture. Water pans, which are small, shallow reservoirs replenished by surface runoff, play a crucial role in supporting livelihoods by providing water for livestock, wildlife, and irrigation on small-scale agropastoral farms. However, these surface water bodies are increasingly vulnerable to climate change, highlighting the need for accurate mapping and monitoring to ensure water availability for communities. Advances in remote sensing and deep learning technologies have opened new opportunities for improving the detection and monitoring of water bodies. Yet the application of high-resolution satellite imagery and state-of-the-art deep learning techniques for mapping water pans remains underexplored in semi-arid regions of sub-Saharan Africa, despite its potential to significantly enhance water resource management across the continent. We aimed to assess various remote sensing data types for water pan detection and mapping, together with state-of-the-art object detection methods. We trained computer vision models including YOLOv5, YOLOv8, and Faster R-CNN to identify water pans using RGB and color-infrared aerial images, a digital terrain model derived from airborne lidar, and high-resolution data from the Pléiades and Planet satellites. Results demonstrated high performance for the very-high-resolution datasets, with YOLOv5 achieving the highest mAP50 of 98.5% with aerial RGB imagery. The models recorded much lower performance with the much coarser Planet images, with the highest mAP50 score of 36.3% recorded by the YOLOv5 model. Despite these challenges, larger water pans were still detectable using the Planet data.
These findings represent a first step towards developing a comprehensive and dynamic database of water pans in Kenya's semi-arid regions, paving the way for enhanced water resource management, planning, and sustainability in these vulnerable areas.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: EO4FLOOD: Earth Observation data for Advancing Flood Forecasting

Authors: Angelica Tarpanelli, Guy Schumann, Cecile Kittel, Jafet Andersson, Silvia Barbetta, Peter Bauer-Gottwein, Elia Cantoni Igomez, Luca Ciabatta, Denise Dettmering, Omid Elmi, Paolo Filippucci, Laetitia Gal, Miguel González-Jiménez, David Gustafsson, Yeshewatesfa Hundecha, Gilles Larnicol, Kevin Larnier, Christian Massari, Vanessa Pedinotti, Karina Nielsen, Adrien Paris, Beatriz Revilla Romero, Malak Sadki, Daniel Scherer, Paolo Tamagnone, Marta Toro Bermejo, Mohammad Javad Tourian, Karim Douch, Dr. Espen Volden
Affiliations: CNR-IRPI, RSS-Hydro, DHI, SMHI, DTU, GMV, TUM, University of Stuttgart, Hydro Matters, Magellium, ESA-ESRIN
Floods are among the most destructive natural disasters, causing severe damage to human health, the environment, cultural heritage, and economies. Over the past 50 years, Europe alone has experienced approximately 4,000 fatalities and $274 billion in economic losses due to floods. The situation is even more severe in developing regions, where the lack of infrastructure and resources intensifies the impacts of such disasters. As climate change exacerbates the frequency and intensity of flood events, there is an urgent need for innovative approaches to improve flood forecasting and reduce societal impacts. EO4FLOOD is an ESA-funded project demonstrating the potential of advanced satellite data to enhance the accuracy and timeliness of flood forecasting systems. The project focuses on integrating state-of-the-art satellite technologies with hydrological and hydraulic models to deliver reliable flood predictions up to seven days in advance. EO4FLOOD is structured around three main objectives:

1. Development of an Advanced EO Dataset: The EO4FLOOD dataset integrates high-resolution satellite products from ESA and non-ESA missions, providing global coverage of critical variables such as precipitation, soil moisture, snow, flood extent, water level and river discharge.
2. Integration into Flood Forecasting Models: By combining these datasets with machine learning-enhanced hydrological and hydraulic models, the project achieves more accurate flood predictions while quantifying uncertainty.
3. Demonstration for Science and Society: EO4FLOOD showcases the application of these tools in flood risk management and explores the influence of human activities, such as land-use changes and dam construction, on flood dynamics.
By leveraging cutting-edge algorithms and satellite products, EO4FLOOD provides a robust framework for advancing flood forecasting and supporting effective disaster preparedness and response, highlighting its broader implications for global flood risk management.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Integrated coastal-river water surface elevation datasets derived from SWOT and ICESat-2 over the Mekong Delta

Authors: Monica Coppo Frias, Cecile Marie Margaretha Kittel, Karina Nielsen, Aske Folkmann Musaeus, Christian Toettrup, Peter Bauer-Gottwein
Affiliations: DHI, Department of Geosciences and Natural Resource Management, University of Copenhagen, Department of Space Research and Technology, Technical University of Denmark
River deltas are home to more than 400 million people worldwide, serving as fundamental centers for industry and ecosystems of great ecological and economic importance. Some of the most densely populated rural and urban areas are in low-lying deltaic regions, such as the Mekong Delta. These areas are highly vulnerable to the impacts of climate change on coastal-river floods, which are driven by several factors, such as sea level rise, extreme river flows or storm surges. To mitigate these effects, accurate integrated coastal-river hydraulic models are essential for enhancing predictive capabilities for compound flooding events and developing effective contingency plans. However, the accuracy of hydraulic models is highly constrained by the quality of available observations, and creating reliable datasets for coastal-river domains presents several challenges that need to be addressed: a) the high spatial and temporal variability of coastal-estuary dynamics, b) the complex morphology of delta regions, with large floodplains, braided river channels and man-made structures, and c) the lack of continuous coastal-river datasets. Traditional in-situ monitoring provides data only at widely spaced stations, which limits coverage. As a result, satellite Earth observation (EO) has emerged as a solution to generate datasets with large spatial coverage and high spatial resolution. Missions such as SWOT or ICESat-2 are particularly well-suited to overcoming the monitoring challenges in coastal-river domains. The SWOT mission, with a 21-day revisit time, offers high spatial resolution that can deliver water surface elevation (WSE) and surface water extent observations for rivers as narrow as 50 meters. The ICESat-2 altimetry mission provides large spatial coverage at a resolution down to 70 cm, delivering observations of water surface elevation, land elevation, and surface water extent over rivers that are a few tens of meters wide.
This study utilizes SWOT and ICESat-2 observations over the Mekong Delta to generate continuous datasets that span from the river to the ocean to inform and validate integrated coastal-river hydraulic models. Coastal and estuarine WSE time series are derived by exploiting the SWOT global WSE raster product, along with a combination of the ICESat-2 inland water product ATL13, the sea surface height product ATL12, and the photon cloud product ATL03. Additionally, surface water extent time series are generated by merging SWOT raster data and the ICESat-2 ATL03 product, whose high resolution makes it possible to identify complex river formations and man-made infrastructure that were not observable by previous satellite EO missions. The resulting continuous coastal-river datasets map the interplay between near-coastal and estuarine dynamics, as well as the complex morphology of the Mekong Delta region, which are key factors in accurately simulating water level and inundation in delta areas. The datasets are expected to enhance hydraulic models and improve their predictive skill for compound flooding simulations.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Reassessing Ice Cover Detection in Internal Waters: A Novel Perspective

Authors: Vladimir Karaev, Dr. Maria Panfilova, Eugeny Sorokin, Dr. Yury Titchenko, Dr. Sonia Ponce de León, Dmitry Kovaldov, Eugeny Meshkov
Affiliations: Waves Team Lda, Centre for Marine Technology and Engineering, Instituto Superior Tecnico, Universidade de Lisboa, Institute of Applied Physics RAS
In recent decades, understanding and monitoring the impacts of climate change has become a critical scientific priority. One key indicator of these changes is the duration of ice cover on inland waters, which provides valuable insights into environmental shifts. Modern satellite radar technologies, including scatterometers, altimeters, and synthetic aperture radars (SAR), offer significant capabilities for monitoring. However, the small size of inland water bodies poses challenges for scatterometers, altimeters, and radiometers, while optical sensors are often hindered by cloud cover during ice formation and melting periods. Additionally, SAR cannot provide the frequent observations necessary for comprehensive monitoring. The SWOT (Surface Water and Ocean Topography) satellite offers high spatial resolution measurements at small incidence angles (<10°), but its narrow swath width (~120 km) limits observation frequency. Addressing this limitation, we propose using dual-frequency precipitation radar (DPR) data from the GPM (Global Precipitation Measurement) satellite. With incidence angles of ±18° and a radar swath width of 245 km, DPR provides a spatial resolution of ~5 km in the Ku and Ka bands. Preliminary analysis shows that rivers as narrow as 400–500 meters are detectable. Once ice cover forms, rivers become “invisible” on radar images, enabling precise determination of ice formation and breakup dates. For the initial test site, a section of the Volga River between Saratov and Volgograd was studied from November 1, 2021, to September 1, 2022. This section, where the river widens significantly, is clearly visible in radar images. Future research will focus on determining the minimum dimensions of inland waters detectable by radar. Notably, DPR radar imagery comprises 5 km “pixels,” defined by the antenna beamwidth. At small incidence angles, the normalized radar cross section (NRCS) of land is significantly lower than that of the water surface.
As a result, when a river wider than 500 meters falls within a 5 km radar footprint (pixel), the pixel's NRCS value contrasts sharply with surrounding land pixels, making the river distinctly visible in the radar image. Data processing has validated the potential of radar imagery for determining the dates of ice formation and breakup. Since DPR data have been available since 2015, it is possible to analyze trends in climate change over a substantial period. This study demonstrates for the first time that the duration of ice cover on inland waters can be effectively estimated using DPR data. Future work will focus on processing data for major rivers in northern Russia and Canada.
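The date-detection idea above, where a river pixel loses its NRCS contrast with the surrounding land once ice forms, can be illustrated with a toy time series. The contrast values and the 2 dB threshold are invented for illustration and are not taken from the study:

```python
import numpy as np

# Hypothetical per-acquisition contrast (dB) between a river pixel's NRCS and
# the surrounding land pixels; high contrast = open water, low contrast = ice.
contrast_db = np.array([6.0, 5.8, 5.5, 1.0, 0.5, 0.4, 0.6, 4.9, 5.7, 6.1])

ICE_THRESHOLD_DB = 2.0  # illustrative: below this, river indistinguishable from land
ice = contrast_db < ICE_THRESHOLD_DB

ice_on = int(np.argmax(ice))                      # first ice-covered observation
ice_off = int(ice_on + np.argmax(~ice[ice_on:]))  # first open-water observation after
ice_duration = ice_off - ice_on                   # ice-cover duration in observations
```

With daily acquisitions this would translate directly into ice formation and breakup dates; a real implementation would also have to handle noise and gaps in the GPM overpasses.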
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Advancing Urban Flood Mapping with SAR: Enhancing the UrbanSARFloods Dataset from SAR to InSAR

Authors: Jie Zhao, Prof. Dr. Xiao Xiang Zhu
Affiliations: Chair of Data Science in Earth Observation, Technical University of Munich
Urban flooding is a critical issue in flood risk management and urban planning, but challenges in data accessibility have made it difficult to address effectively. Among the accessible Earth observation data sources, SAR data has proven to be a valuable resource for flood monitoring due to its all-weather, day-and-night imaging capabilities. However, SAR-based urban flood mapping remains underdeveloped, with even fewer applications of deep learning methods. Insights from [1] indicate that the primary challenge lies in the absence of readily available, comprehensive datasets, while the review in [2] emphasizes that SAR intensity, coherence, and phase are all crucial for effective flood mapping. Consequently, there is an urgent need for a robust dataset specifically designed for urban flood scenarios, incorporating these SAR parameters to advance research and applications in the field. The UrbanSARFloods dataset introduced in [1] marks significant progress in addressing the lack of tailored datasets for urban flood mapping. Built on Sentinel-1 Single Look Complex (SLC) data, it incorporates both intensity and interferometric coherence features, enhancing its reliability for detecting urban floods. The dataset includes 8,879 512×512 tiles covering 807,500 km² across five continents and 18 flood events. It differentiates between flooded urban and open areas, utilizing semi-automatic annotations derived from Sentinel-1 data, complemented by manual annotations based on high-resolution optical imagery. Benchmark evaluations with state-of-the-art deep learning models highlight several challenges, including data imbalance and the limited effectiveness of conventional transfer learning approaches. These findings underscore the need for further advancements in both dataset design and algorithm development to improve urban flood detection accuracy [1].
In this conference contribution, we will present our ongoing efforts to extend the UrbanSARFloods dataset by collecting additional flood events globally and incorporating phase information, a feature often overlooked in flood mapping studies. Our goal is to leverage phase data to improve the delineation of flood boundaries in complex urban environments. Additionally, we aim to conduct a detailed analysis of the dataset's performance across diverse environmental conditions to better understand the strengths and limitations of existing methodologies and of each SAR feature. While this work is still in progress, we anticipate that the enhanced dataset will set a new benchmark for urban flood mapping and provide a solid foundation for developing more robust models capable of addressing the complexities of urban and open-area flood mapping at scale.

References
1. J. Zhao, Z.T. Xiong, X.X. Zhu. “UrbanSARFloods: a benchmark dataset for open and urban flood detection using Sentinel-1 SLC data”. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshop (2024).
2. J. Zhao, M. Li, Y. Li, P. Matgen, M. Chini. “Urban Flood Mapping Using Satellite Synthetic Aperture Radar Data: A Review of Characteristics, Approaches and Datasets”. IEEE Geoscience and Remote Sensing Magazine (in press, 2024).
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Near-Real-Time Multi-Source Flood Monitoring: Enhancing and Blending SAR and VIIRS-Based Flood Inundation Products

Authors: Dr. Qing Yang, Dr. Qingyuan Zhang, Dr. Sanmei Li, Dr. Josef M. Kellndorfer, Dr. Nicholas C. Steiner, Mr. Tyler Ruff, Ms. Jessie C. Moore Torres, Dr. Xinyi Shen, Mr. Sean
Affiliations: University Of Wisconsin-Milwaukee
Remote sensing has revolutionized flood monitoring and forecasting, with Near-Real-Time (NRT) systems providing critical multi-sensor flood products for hazard response and as reference data for flood modeling and assimilation. Synthetic Aperture Radar (SAR) and the Visible Infrared Imaging Radiometer Suite (VIIRS) are two primary satellite sensors utilized in NRT flood inundation mapping (FIM). SAR-based FIM systems, leveraging sensors like Sentinel-1, RCM, and RS2, deliver weather-penetrating, high-resolution flood delineation with frequent active acquisitions but are hindered by obstructions such as vegetation and urban infrastructure. VIIRS, on the other hand, offers robust temporal coverage and vegetation-tolerant mapping thanks to its revisit time of a few hours and its spectral capacity, but is limited by cloud cover and moderate spatial resolution. This study introduces GLOCAL (Global-LOCAL Solvers Integration Algorithm), a flood depth estimation algorithm that integrates SAR-based FIM with topographic data to address vegetation and urban occlusions. The resulting infilled areas align closely with VIIRS-based FIM and ground hydrological observations. Furthermore, we explore the fusion of infilled SAR and downscaled VIIRS-based FIM products to combine their strengths, enhancing the monitoring of flood dynamics. By incorporating high-resolution DEM products, such as CDSM, this approach advances blended FIM at a 30-meter scale, providing a more detailed depiction of flood events. This research contributes to the advancement of NRT remote sensing systems for practical flood inundation monitoring.
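GLOCAL itself is not described in enough detail here to reproduce, but the general idea of inferring flood depth from a SAR-derived flood extent plus a DEM can be sketched with a simplified wetted-edge approach, assumed here purely for illustration: take the water surface elevation from the flood boundary cells and subtract the DEM inside the mask:

```python
import numpy as np
from scipy import ndimage

# Hypothetical 1-D valley transect: DEM elevations (m) and a SAR-derived flood mask.
dem = np.array([12.0, 10.5, 9.8, 9.5, 9.9, 10.6, 12.2])
flood = np.array([False, True, True, True, True, False, False])

# Boundary cells of the flooded patch approximate the local water surface elevation.
boundary = flood & ~ndimage.binary_erosion(flood)
wse = dem[boundary].max()  # simple estimate: highest wetted-edge elevation

# Depth = water surface minus terrain, clipped to zero, only inside the flood mask.
depth = np.where(flood, np.clip(wse - dem, 0.0, None), 0.0)
```

For this toy transect the estimated water surface sits at 10.5 m and the deepest flooded cell gets 1.0 m of water; a real workflow would work on 2-D rasters and handle multiple disconnected flood patches separately.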
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Multi-sensor Integration of SAR and Optical Data for Robust Surface Water Mapping

Authors: Shagun Garg, Dr. Edoardo Borgomeo, Prof. Dr. Mahdi Motagh, Dr. Sivasakthy Selvakumaran
Affiliations: University Of Cambridge, Section of Remote Sensing and Geoinformatics, GFZ German Research Centre for Geosciences
Surface water bodies play a fundamental role in the Earth's hydrological cycle and are critical indicators of climate change impacts. These water systems not only support essential ecosystem services and biodiversity but also serve as primary water sources for agriculture, industry, and human consumption. In tropical regions, where water availability is increasingly volatile due to climate change, accurate monitoring of surface water dynamics becomes crucial for sustainable resource management and disaster preparedness. Despite this importance, current monitoring capabilities remain inadequate, particularly in regions experiencing rapid environmental change. Existing surface water mapping approaches rely heavily on optical satellite data, which faces significant limitations in tropical regions due to persistent cloud cover and atmospheric interference. While optical sensors from the Landsat and Sentinel-2 missions provide valuable spectral information for water detection, their effectiveness is severely compromised during wet seasons, when monitoring is most critical. Even when clouds do not cover the entire image, processing becomes challenging, and it is sometimes preferable to avoid using such data altogether due to the difficulty of managing the noise. Although Synthetic Aperture Radar (SAR) systems offer cloud-penetrating capabilities, they present significant challenges in distinguishing between open water and features with similar backscatter characteristics, such as flooded vegetation, barren lands, tarmac roads, and airport runways, limiting their standalone application in tropical contexts. To address these limitations, we propose a multi-sensor integration framework that systematically combines Sentinel-1 SAR (VV/VH polarization) with optical data from Landsat-8/9 and Sentinel-2. The methodology specifically addresses the geometric reconciliation between SAR's side-looking geometry and optical sensors' near-nadir observations.
We evaluate both pixel-based (Random Forest, SVM) and object-based (OBIA) classification approaches, with particular emphasis on their performance in handling mixed pixels and geometric distortions at water boundaries. The validation framework encompasses test sites across three distinct tropical regions, incorporating water bodies ranging from 0.5 to 500 hectares. Reference data includes high-resolution Planet imagery (3 m). Preliminary results indicate that object-based classification achieves higher accuracy (85-90%) compared to pixel-based approaches (75-80%) in areas with complex water-land boundaries. The integrated approach demonstrates the potential for sub-weekly monitoring frequency, with average temporal gaps reduced from 15-20 days (optical-only) to 4-6 days (combined sensors). The improvements in temporal resolution and classification accuracy will directly strengthen operational water resource monitoring capabilities, particularly in tropical regions. The frequent monitoring system will benefit a wide range of stakeholders, including water resource managers, disaster response teams, environmental agencies, and policymakers, by providing more timely and reliable data for water management.
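As a minimal illustration of the pixel-based branch, the sketch below trains a Random Forest on hypothetical per-pixel features (VV and VH backscatter plus an optical NDWI value). The feature distributions are synthetic, and the real workflow's preprocessing, geometric reconciliation, and validation are not shown:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-pixel feature vectors: [VV (dB), VH (dB), NDWI].
# Water is typically dark in SAR (low backscatter) and has positive NDWI.
water = np.column_stack([rng.normal(-18, 1.5, n),
                         rng.normal(-26, 1.5, n),
                         rng.normal(0.5, 0.1, n)])
land = np.column_stack([rng.normal(-8, 1.5, n),
                        rng.normal(-14, 1.5, n),
                        rng.normal(-0.2, 0.1, n)])
X = np.vstack([water, land])
y = np.r_[np.ones(n, dtype=int), np.zeros(n, dtype=int)]  # 1 = water, 0 = land

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred_water = clf.predict([[-17.0, -25.0, 0.4]])[0]  # dark SAR + high NDWI pixel
pred_land = clf.predict([[-9.0, -15.0, -0.1]])[0]   # bright SAR + low NDWI pixel
```

The combination of features is the point here: a tarmac runway would look water-like in SAR alone, but its negative NDWI pulls it back into the land class.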
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Exploring Hydrological Connectivity in European Wetlandscapes with SAR and SWOT

Authors: Abigail Robinson, Prof Fernando Jaramillo, Dr Anna Scaini
Affiliations: Department of Physical Geography, Stockholm University
Wetlands play a crucial role in maintaining ecosystem services such as carbon sequestration, flood attenuation, and biodiversity conservation, making their hydrological functioning critical to understanding and managing these environments. Despite their importance, there remains a significant gap in our knowledge of the hydrological processes that govern wetlands, especially concerning how water is stored and moves within the landscape. This gap is further exacerbated by the complexity of these systems, which arises from the diverse types of wetlands, variations in management practices, and the scarcity of in-situ data. However, due to recent advances in remote sensing, including the launch of the Surface Water and Ocean Topography (SWOT) satellite, combined with the growing accessibility of artificial intelligence (AI) tools, new opportunities exist to bridge these knowledge gaps in wetland surface hydrology. This project explores a key but underexplored aspect of wetland hydrology: hydrological connectivity, which plays a key role in ecosystem service delivery. Here, we use the term hydrological connectivity to describe surface water pathways that connect water bodies within a wetland-rich landscape, known as a wetlandscape. The primary objective of this project is to analyse hydrological connectivity across diverse wetlandscapes in Europe. We will focus on a selection of 5-15 wetlandscapes representing various environmental conditions and levels of human-induced alteration. By examining a diverse array of wetlands, we aim to better understand how human activities and management practices impact hydrological connectivity and, consequently, the resilience of these wetlands. Additionally, we aim to provide a method for assessing surface water extent dynamics that can be applied to any environment with multiple dynamic water bodies, such as floodplains or multi-lake systems.
We first assess the spatial and temporal evolution of hydrological connectivity using a connected component and geostatistical connectivity function (GCF) analysis. The GCF analysis identifies connected ‘wet’ pixels as a function of distance in any given direction. This methodology helps us to predict the degree of surface water connectivity between individual water bodies in the landscape. Surface water in the landscape is indicated by a binary wet/dry mask produced by a deep learning algorithm based on Sentinel-1 SAR imagery. When available, water level data from SWOT complement the GCF analysis, providing a more comprehensive analysis of hydrological connectivity. So far, the methodology has been tested on one wetlandscape in Sweden, where it has successfully captured seasonal hydrological connectivity using connected components and GCF analysis based on SAR imagery. The probability that water bodies in a wetlandscape are connected coincides with water extent, most likely driven by an influx of precipitation and spring snow melt. Combined with data from SWOT, we hope to determine the exact threshold at which the wetlandscape switches from having a predominant water storage function to a water transmitting function. This approach integrates remote sensing and AI techniques and offers a new perspective on wetland hydrology that can improve both scientific understanding and water-related policy and decision-making. If hydrological connectivity can inform ecosystem service identification, then the findings of this project will be important for future restoration and conservation efforts, with the potential to allocate protected sites in the future.
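The connected-component part of the GCF analysis can be sketched as follows: label the wet patches in the binary mask, then estimate the probability that two wet pixels a given lag apart fall in the same patch. The mask below is a toy example, and this sketch only evaluates horizontal lags, whereas a full GCF considers multiple directions:

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary wet (1) / dry (0) mask from a SAR water classifier.
wet = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 1],
], dtype=bool)

labels, n_components = ndimage.label(wet)  # 4-connected wet patches

def gcf(labels, wet, lag):
    """P(two wet pixels a horizontal lag apart belong to the same component)."""
    a, b = labels[:, :-lag], labels[:, lag:]
    both_wet = wet[:, :-lag] & wet[:, lag:]
    if both_wet.sum() == 0:
        return 0.0
    return float((a[both_wet] == b[both_wet]).mean())

g1 = gcf(labels, wet, 1)  # short lags stay within one patch
g3 = gcf(labels, wet, 3)  # longer lags cross between disconnected patches
```

Here the mask holds two disconnected patches, so connectivity is perfect at lag 1 and zero at lag 3, i.e. the function decays with distance exactly as the GCF is meant to capture.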
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Enhancing River Discharge Simulations in Snow-Dominated Regions: Assimilating Sentinel-1 Snow Depth into a Hydrological Model to Improve Precipitation Estimates in Aosta Valley

Authors: Shima Azimi, Prof. Manuela Girotto, Riccardo Rigon, Silvia Barbetta, Christian Massari
Affiliations: University of Trento, CNR, Research Institute for the geohydrological protection, University of California Berkeley
Estimating precipitation in mountainous regions is challenging due to the intricate topography and complex atmospheric conditions. The limited spatial coverage of ground-based precipitation observations makes satellite-based and reanalysis-derived precipitation products essential for hydrological models representing the water cycle. However, mitigating the bias and uncertainty of these global precipitation products, particularly in mountainous regions, is crucial for achieving more accurate water budget estimates. In this study, a snow depth data assimilation approach is applied to the Aosta Valley catchment in northwest Italy to analyze the error patterns of snowfall across the region. We use a particle batch smoother to assimilate satellite-derived snow depth data from Sentinel-1 into the snow component of the GEOframe-NewAGE model, enhancing the accuracy of the European Meteorological Observations (EMO-1) precipitation product. To investigate how well Sentinel-1 snow depth retrievals capture the spatiotemporal patterns of snow depth in the Aosta Valley catchment, we conducted an evaluation using ground-observed snow depth data. This initial assessment provided sufficient confidence in the snow depth product, making it suitable for assimilation into the snow component of the hydrological model and, in turn, allowing a more in-depth investigation of its effectiveness. The results indicate that the corrected EMO-1 precipitation aligns more closely with the kriging interpolation results, which were derived from 81 precipitation gauges distributed across the catchment and used as the reference in this study. Additionally, the considerable improvement observed during the winter period highlights the effectiveness of the snow depth assimilation approach in correcting precipitation estimates. Furthermore, the performance of the hydrological model forced with both the corrected and original EMO-1 precipitation products was compared.
The results showed that the simulated river discharge was more accurate when the corrected EMO-1 was used as the input. Additionally, the predictive uncertainty analysis revealed that the corrected EMO-1 precipitation introduced less uncertainty into the model compared to the original EMO-1. By leveraging Sentinel-1 snow depth retrievals, we enhanced the capacity to accurately estimate water cycle components, particularly in snow-dominated regions. These advancements have broad implications for better water management, flood forecasting, and climate resilience in mountainous catchments.
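For readers unfamiliar with the particle batch smoother, its core is a Gaussian likelihood weighting of model realizations against a full batch of observations. The sketch below is a generic textbook illustration with toy numbers, assuming independent Gaussian observation errors; it is not the GEOframe-NewAGE setup used in this study.

```python
import numpy as np

def pbs_weights(particles_hx, obs, obs_var):
    """Particle batch smoother: weight each particle by the likelihood of
    the whole batch of observations, assuming independent Gaussian errors.

    particles_hx : (n_particles, n_obs) predicted snow depth per particle
    obs          : (n_obs,) observed snow depth (e.g. Sentinel-1 retrievals)
    obs_var      : observation error variance
    """
    residuals = particles_hx - obs                     # (n_particles, n_obs)
    log_w = -0.5 * np.sum(residuals**2, axis=1) / obs_var
    log_w -= log_w.max()                               # numerical stability
    w = np.exp(log_w)
    return w / w.sum()                                 # normalized weights

# Toy example: 3 particles, 2 observation times; particle 2 fits best
hx = np.array([[0.9, 1.1], [0.5, 0.4], [1.0, 1.0]])
y = np.array([1.0, 1.0])
w = pbs_weights(hx, y, obs_var=0.1**2)
```

The weighted particles can then be used to form a posterior estimate of the snow state over the whole assimilation window.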
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Regional mapping of the surface water mass changes by inversion of Line-of-Sight GRACE acceleration changes

Authors: Guillaume Ramillien, Lucia Seoane, José Darrozes
Affiliations: Centre National de la Recherche Scientifique (CNRS), Université Paul Sabatier, Géoscience Environnement Toulouse (GET), Observatoire Midi-Pyrénées (OMP)
We propose a new representation based on Regional Surface Harmonic Functions (RSHF) to determine water mass changes by inversion of Line-of-Sight (LoS) GRACE accelerations reduced for continental hydrology. Surface water mass anomaly maps are expressed as a linear combination of functions whose optimal coefficients are fitted from acceleration changes over given 10-day/monthly periods. Regional coefficients are recovered by applying a full-rank pseudo-inverse of the Newtonian operator. A closed-loop approach with a global hydrological model (e.g. GLDAS or WGHM) is used to validate the inversion procedure, which leads to accurate recovery when noise-free data are considered. In the case of synthetic noisy data, RSHF coefficients can be determined up to a certain order, i.e. long wavelengths. A comparison between different hydrology signal reconstructions is also made in terms of spatiotemporal resolution efficiency when Slepian functions are used. LoS GRACE acceleration variations can be computed by differentiating observed K-band range rate (KBRR) data with respect to time and then low-pass filtering them, before being inverted to produce series of regional solutions over various drainage basins.
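The recovery step described above is, in essence, a linear least-squares inversion. A minimal closed-loop illustration with a random stand-in forward operator (the real Newtonian operator relating regional coefficients to LoS accelerations is not reproduced here) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward operator G maps regional coefficients m to LoS accelerations d.
# A random matrix stands in for the Newtonian operator described above.
n_obs, n_coef = 200, 20
G = rng.normal(size=(n_obs, n_coef))
m_true = rng.normal(size=n_coef)
d = G @ m_true                       # noise-free "closed-loop" data

# Full-rank pseudo-inverse recovery of the coefficients
m_hat = np.linalg.pinv(G) @ d
```

With noise-free data the coefficients are recovered exactly, mirroring the closed-loop validation; with noisy data only the long-wavelength (low-order) part of the solution remains well constrained.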
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: A Decade of PALSAR-2 Data: Mapping Inundation Variability of Tropical Wetlands

Authors: Mr. Paul Senty, Dr. Cécile Kittel, Prof. Dr. Rasmus Fensholt, Dr. Stéphanie Horion
Affiliations: DHI, Department of Geosciences and Natural Resource Management, University of Copenhagen
Tropical wetlands store great quantities of carbon, emit large amounts of methane, support a rich biodiversity, and provide food to local communities. These ecosystems are highly dynamic, with seasonal flooding playing a central role in their functioning. However, our understanding of the seasonal and inter-annual variability of their inundation extent remains limited. L-band SARs, with their ability to sense through clouds and forest canopies, have been effective in detecting surface water changes. They have been used to produce static maps of maximum and minimum water extent, especially in the Amazon. Yet the year-to-year variability of flooding in these regions is understudied. The decade-long record of PALSAR-2 data (from 2014 to 2024) provides an unprecedented opportunity to quantify these variations at high resolution. In this study, we applied a machine learning algorithm to estimate the maximum inundation extent and the timing of flooding for each year between 2015 and 2024, for the two largest river basins: the Amazon and the Congo. The spatial patterns from the past two years were compared with high-resolution water extent maps from the SWOT mission. Yearly changes in flooding were validated using GRACE data and satellite altimetry. By comparing water level time series with radar data, we assessed how floodplains are connected to rivers and determined the water levels needed to flood different areas of the floodplain. The results show significant year-to-year differences in inundation extent and timing. For example, in 2023/2024, flooding in the Congo wetlands increased sharply, while the simultaneous historical drought in the Amazon led to smaller flooded areas than usual. These variations highlight how tropical wetlands respond to changing weather patterns. Our high-resolution maps can complement the coarser wetland extent maps (derived from passive microwave observations) which are currently used in methane emission models. 
Furthermore, our analysis of flooding variability and timing provides important information for understanding the resilience and vulnerability of tropical wetlands to climate change and shifting rainfall patterns. Acknowledgement: this research is part of the Global Wetland Center funded by the Novo Nordisk Foundation. More info: https://globalwetlandcenter.ku.dk/
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Initial development of CIMR Level-2 Surface Water Fraction algorithm

Authors: Roberto Fernandez-Moran, Andrés Terrer-Gómez, Moritz Link, Thomas Lavergne, Dr. Maria Piles
Affiliations: University Of Valencia, Norwegian Meteorological Institute
This study introduces the Surface Water Fraction (SWF) retrieval algorithm developed in preparation for the Copernicus Imaging Microwave Radiometer (CIMR) mission. CIMR is one of the six Copernicus Expansion Missions currently being implemented by the European Space Agency and the European Commission. The algorithm builds upon methodologies previously developed for AMSR-E and SMAP but has evolved to exploit CIMR K-band (18.7 GHz) observations. It is based on a Double Differential Ratio (DDR) approach that uses a pre-defined Lookup Table (LUT) of emissivity values representing pure land and water endmembers across varying soil moisture (SM) and vegetation optical depth (VOD) conditions. The SWF algorithm exploits the specific characteristics of passive microwave radiometry, which complements optical, infrared (IR), and radar-based techniques for global surface water monitoring. While optical-IR sensors deliver high spatial resolution data, their temporal coverage can be constrained by weather conditions, cloud interference, and reduced effectiveness in high-latitude regions. Similarly, despite their utility for detailed mapping, active radar techniques face challenges related to global coverage and acquisition frequency. Passive microwave sensors such as AMSR-E, AMSR2, and WindSat offer daily to 10-day temporal resolutions and the ability to penetrate clouds. However, these sensors are often constrained by coarse spatial resolution and reduced sensitivity to surface water under dense vegetation canopies, particularly at higher frequencies. In this study, the K-band was selected for its balance between spatial resolution and reduced atmospheric interference compared to higher frequencies, while still being sensitive enough to detect seasonal inundation. The SWF algorithm follows a two-step process. First, brightness temperatures at K-band are processed to derive emissivities. 
Second, the DDR method calculates SWF by comparing observed emissivities to LUT-derived reference values for land and water. Land emissivity references are derived by identifying pixels with a SWF near zero within the Land Parameter Data Record (LPDR), which provides a long-term global record of SWF, SM, VOD, and other key parameters. Water emissivity references are deduced using the Fresnel equations and the Double-Debye dielectric model. To demonstrate the algorithm's functionality and prepare it for operational use, data from AMSR2 has been utilized for initial testing. These tests provide a preliminary evaluation of the algorithm's performance and enable refinement of LUT parameters to improve retrieval accuracy. The input data for the retrieval process include brightness temperatures from K-band, soil moisture, and vegetation optical depth from AMSR2, supported by auxiliary datasets for flagging and masking. The algorithm presented in this study has been developed as part of the CIMR L2PAD project. The algorithm theoretical baseline documents (ATBDs) and associated code are planned to be made publicly available at https://github.com/CIMR-L2PAD in the future.
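As a rough illustration of endmember-based unmixing, the sketch below recovers a water fraction from a single-channel emissivity using linear mixing between land and water endmembers. This is a simplification for intuition only: the actual CIMR DDR algorithm uses double differential ratios and a LUT spanning SM and VOD conditions, and the endmember values below are assumed, not taken from the LPDR.

```python
import numpy as np

def surface_water_fraction(e_obs, e_land, e_water):
    """Simplified endmember unmixing: observed K-band emissivity modelled
    as a linear mix of pure-land and pure-water emissivities. (The CIMR
    DDR approach uses ratios of channels and a LUT of endmembers instead.)
    """
    swf = (e_obs - e_land) / (e_water - e_land)
    return np.clip(swf, 0.0, 1.0)    # fractions are bounded in [0, 1]

# Assumed endmembers: land emissivity 0.95, calm-water emissivity 0.45
swf = surface_water_fraction(np.array([0.95, 0.70, 0.45]), 0.95, 0.45)
```

A pixel matching the land endmember yields SWF = 0, one matching the water endmember yields SWF = 1, and intermediate emissivities map linearly in between.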
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: From satellite to applications: a comprehensive perspective from CNES for water monitoring from space

Authors: Delphine Leroux
Affiliations: CNES
Water scarcity is one of the most pressing global challenges of our time. While freshwater resources are limited and unevenly distributed across the globe, understanding and managing water availability and water quality is crucial for ensuring sustainable development. In a context of climate change, extreme events are also occurring more and more frequently throughout the world, impacting people through longer periods of drought or violent flood events. Ground-based measurements, while essential, are often constrained by geographic, financial, and logistical challenges, especially in remote or inaccessible areas. In this context, satellite observations play a vital complementary role. They provide invaluable insights over large areas and with consistent revisit times, offering a comprehensive view of hydrological phenomena, which is essential for effective water management. The French space agency CNES (Centre National d'Études Spatiales) has been actively involved in space-based hydrology for several decades, building on a strong history of collaboration with NASA. Over the last 30 years, the two agencies have worked closely in the field of altimetry, enabling precise monitoring of water levels in rivers, lakes, and reservoirs. This partnership has led to significant advances in understanding hydrological processes from space. Among the most notable recent advancements is the launch of the Surface Water and Ocean Topography (SWOT) mission in December 2022, which is a true game changer in hydrology. With its wide swath and high-resolution altimetry, SWOT provides unprecedented coverage and accuracy in monitoring water heights at the global scale. CNES is also at the forefront of developing ambitious EO satellite missions that contribute to the global understanding of the water cycle. SWOT is a prime example of such a mission, and soon it will be joined by another groundbreaking project: the TRISHNA mission, a joint effort with the Indian Space Research Organisation (ISRO). 
TRISHNA, which stands for Thermal InfraRed Imaging Satellite for High-resolution Monitoring of the Water Cycle and Agriculture, will provide high-resolution measurements of land surface temperature and evapotranspiration, two critical parameters for understanding water use and availability in agricultural and natural ecosystems. Both SWOT and TRISHNA benefit from significant scientific support and an ambitious downstream program. One of the key strategies at CNES for hydrology is the integration of satellite data into a global approach to the Earth system. The research infrastructure Data Terra is central to this vision, offering a platform for sharing and processing data from various missions. A key component of this is the THEIA platform, which focuses on continental surfaces, including hydrology through the hydroweb.next thematic hub, which provides access to SWOT data as well as to numerous other sources. By making such diverse datasets available, CNES enables researchers and operational services to use a wide array of information for hydrological studies, from large-scale water cycle modeling to more localized water management efforts. Moreover, CNES has set up programs focused on bridging the gap between scientific research and practical applications, enabling the operational use of satellite data. CNES is thus actively supporting the creation of new applications and services in response to user needs, through various programs such as the missions' downstream programs, which support science projects and research and development activities for new applications; the SCO (Space for Climate Observatory), which supports the development of applications for climate change adaptation; and the French national initiative France 2030, which supports the development of applications and services for issues of public interest. 
Also, CNES plays a key role in supporting the BASS (Business Applications and Space Solutions) program of the European Space Agency, which aims to operationalize space-based services. Through all these support programs, CNES helps ensure that satellite-based hydrological services are both scientifically robust and operationally relevant to end users. Looking towards the future, CNES is making substantial investments in the next generation of space-based hydrological observations, such as the S3NG-T (Sentinel-3 New Generation - Topography) mission within the Copernicus program, which will inherit the wide-swath concept from SWOT. Another development in focus is the SMOS-HR mission, which will improve soil moisture observations with a higher resolution, offering more accurate insights into water availability in the root zone, which is crucial for agricultural and drought management. Additionally, CNES is exploring the development of a high-revisit constellation for hydrology, which would provide daily measurements of water levels, further enhancing the ability to monitor and respond to changes in water resources with precision. CNES is also working on the NGGM-MAGIC mission, which should enhance the resolution and precision of GRACE-type satellite measurements of the gravity field, and thus improve the monitoring of total water storage on Earth. These investments in space-based hydrology are set to revolutionize the way we monitor and manage water resources, addressing the growing challenge of water scarcity in a rapidly changing world. By combining cutting-edge technology, international collaboration, support programs, and a commitment to open data sharing, CNES is positioning itself as a leader in the field of space-based hydrology, providing critical tools for climate adaptation and sustainable water management in the years to come.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Synergy of SWOT and other optical/radar missions to monitor very small, but high-impact, reservoirs in semi-arid Brazil

Authors: Marielle Gosset, Rafael Reis Alençar Oliveira, Alfredo Ribeiro Neto, Jonas Souza, Eduardo
Affiliations: IRD, GET
In the semi-arid Ceará region (Northeast Brazil), water resources are distributed across more than thirty thousand reservoirs of all sizes, from state-monitored reservoirs larger than 5,000 hectares down to very small farm reservoirs of less than 1 ha. Over the years, as the region underwent several severe droughts, the number of small reservoirs has grown in an uncontrolled manner. The very large number and small size of these “açudes” make it challenging to inventory them, analyze the hydrological processes, and monitor the amount of water they hold. Their small size is also a challenge for remote sensing, especially considering that cloudiness in the region limits the use of optical sensors. While severe droughts are a major issue for water resources in semi-arid Ceará, the intense convective rain events that occur during the short rainy season generate intense runoff and sudden floods, and dams are regularly breached. These phenomena involve high temporal variability and need to be monitored with sufficient time sampling (ideally weekly or better). Here we analyze the potential of the SWOT mission, in synergy with Sentinel-1 and Sentinel-2, to provide high-temporal-resolution monitoring of the water extent and water surface elevation (WSE) of the small reservoirs. To validate satellite-based monitoring, eleven reservoirs were equipped for intensive in situ monitoring, with pressure gauges to measure WSE and drone/bathymetry surveys to obtain a high-resolution DEM of the basins. In addition to this SWOT Cal/Val-specific equipment, Ceará's state database provides historical monitoring of about 200 reservoirs, information on water usage, and a hydrological network including a weather radar for high-spatiotemporal-resolution rainfall monitoring. 
Several combined satellite (SWOT-S1-S2) methods, from basic to AI-based, are being evaluated against these reference data to monitor the variability of small reservoirs and the occurrence of the most extreme events during the SWOT Cal/Val and science orbits, over the 2023 and 2024 rainy seasons.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: SAR Change Detection-based Flood Mapping Using New Space Technology

Authors: Angel Caroline Johnsy, Dr Qiaoping Zhang, Dr Valentyn Tolpekin, Michael Wollersheim, Pyry Poutanen, Brandon Wright, Dr Melanie Rankl
Affiliations: ICEYE Oy
Earth observation (EO) by space sensors provides human responders with a wealth of information during natural catastrophes. Synthetic Aperture Radar (SAR) sensors, which operate day and night with the capability to penetrate clouds, smoke, and fog, can provide critical information to authorities and local governments during the emergency response phase and the recovery phase [1]. Due to climate change, natural catastrophes such as floods and wildfires are increasing in frequency and severity, necessitating timely information to evacuate people, assess the damage, and mitigate the impacts of these events. Flood mapping using SAR images has traditionally been single-image-based. However, it has proven difficult to distinguish flooded from non-flooded areas based on variations within a single scene, as the SAR signatures over flooded areas can vary significantly with incidence angle, polarization, frequency, and land cover [2, 3]. This work explores the change detection technique for flood mapping using ICEYE’s microsatellite constellation, which operates at X-band and VV polarization. Its multi-mode imaging capabilities, ranging from 25 cm ground resolution with Long Dwelling Spotlight to 15 m ground resolution with TOPS mode, cover areas between 5x5 km and 100x500 km [4]. Thanks to the large constellation size and its high temporal and spatial resolution imaging capabilities, we were able to acquire a number of SAR images suitable for amplitude- and/or coherence-based change detection for dozens of flood events around the world, including the major flood events in many European countries in September 2024, the mega flood in Spain in October 2024, and the floods in the Philippines in November 2024. 
The change detection results are compared to single image-based water segmentation outputs, and they are further evaluated using ground observations and the flood delineation maps generated by the Copernicus Emergency Management Service (CEMS) team for the same event [5]. In this presentation, we also discuss the importance of optimally tasking SAR acquisitions from the early stage of the flood, through the flood peak, to the late stage of the flood. This is often achieved with direct support from meteorological experts, who can provide the timing of a major rainfall event for a given region. Challenges remain in identifying flooded streets in urban areas due to layovers and shadows from buildings or trees. [1] Moreira, A., Prats-Iraola, P., Younis, M., Krieger, G., Hajnsek, I., Papathanassiou, K.P., 2013. A tutorial on synthetic aperture radar. IEEE Geoscience and Remote Sensing Magazine, 1(1), pp. 6–43. https://doi.org/10.1109/MGRS.2013.2248301. [2] Amitrano, D., Di Martino, G., Di Simone, A., Imperatore, P., 2024. Flood detection with SAR: A review of techniques and datasets. Remote Sensing, 16, p. 656. https://doi.org/10.3390/rs16040656. [3] Pulvirenti, L., Chini, M., Pierdicca, N., Boni, G., 2016. Use of SAR data for detecting floodwater in urban and agricultural areas: The role of the interferometric coherence. IEEE Transactions on Geoscience and Remote Sensing, 54(3), pp. 1532–1544. https://doi.org/10.1109/TGRS.2015.2482001. [4] ICEYE Product guide, 2024. Available at: https://sar.iceye.com/5.1/productguide/products/ (accessed 26 November 2024). [5] Copernicus Emergency Management Service, 2024. Available at: https://emergency.copernicus.eu/ (accessed 26 November 2024).
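A common form of amplitude-based change detection, of the kind referenced above, is log-ratio thresholding of pre- and post-event backscatter: open water acts as a specular reflector, so a strong drop in backscatter flags likely flooding. A minimal sketch (the -3 dB threshold and the pixel values are illustrative assumptions, not ICEYE's operational method):

```python
import numpy as np

def log_ratio_change(pre, post, threshold_db=-3.0):
    """Flag pixels whose post-event backscatter dropped sharply relative
    to the pre-event image, using the log-ratio in decibels."""
    eps = 1e-6                                   # avoid log of zero
    ratio_db = 10.0 * np.log10((post + eps) / (pre + eps))
    return ratio_db < threshold_db

pre = np.array([0.20, 0.20, 0.18])    # linear backscatter before the event
post = np.array([0.02, 0.19, 0.01])   # after: pixels 0 and 2 darkened
flooded = log_ratio_change(pre, post)
```

Coherence-based change detection follows the same comparison logic but operates on interferometric coherence instead of amplitude, which helps in urban areas where double-bounce effects complicate amplitude signatures.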
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: CAMEO-WAGST: Cameroon Advanced Measurements for Enhanced Observations of Water levels using Affordable GNSS-IR and Sentinel-3&6 Technology

Authors: Dr. Loudi Yap, Dr. Makan Karegar, Professor Jürgen Kusche, Abdou Nasser Ngouh, Jiaming Chen, Chrétien Ngouanet, Ludovic Houetchak Kandé, Joseph Kamguia
Affiliations: Research Laboratory in Geodesy, National Institute of Cartography, Institute of Geodesy and Geoinformation, University of Bonn
In-situ surface water monitoring networks are crucial for calibrating and improving forecast models, validating satellite altimetry data, and developing early-warning flood systems. While satellite remote sensing offers the potential to gauge remote river and coastal areas and provides valuable data for hydrological forecasting, its effective use still requires expertise in processing algorithms and access to in-situ validation data. Cameroon's coastal region, located at the base of the Gulf of Guinea, is highly vulnerable to natural hazards such as erosion, flooding, storms, and sea-level rise. Despite the growing risks posed by climate change, there is no dedicated water-level monitoring system along the coast or for the major rivers in the country. In the CAMEO-WAGST project, recently funded by the Earth Observation Africa Research and Development Facility through the European Space Agency (ESA), we are using innovative, low-cost and low-power sensors based on Global Navigation Satellite System Interferometric Reflectometry (GNSS-IR). These sensors, known as Raspberry Pi Reflectors (RPRs), are being deployed to establish a monitoring network for river and sea levels in Cameroon, with the aim of achieving two key objectives. The first objective is to develop a prototype of cost-effective and low-maintenance infrastructure that can be adopted across Africa for operational early-warning flood and drought systems. The second objective focuses on validating altimetry data from ESA’s Sentinel-3 and Sentinel-6 satellites. These efforts will be supported by the development of reusable, open-source EO algorithms and applications, tailored to address African challenges with African solutions, particularly relevant for managing hydrological extremes and mitigating hazards such as floods and droughts.
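The GNSS-IR principle behind such reflectometry sensors can be summarized briefly: the SNR of a low-elevation satellite oscillates with sin(elevation) at a frequency proportional to the antenna height above the reflecting water surface, so recovering that frequency yields the water level. The sketch below is a minimal noiseless simulation of the textbook principle, not the RPR processing chain; all values are assumed.

```python
import numpy as np

lam = 0.1903                          # GPS L1 wavelength in metres
h_true = 3.2                          # antenna height above water (m)
sin_e = np.linspace(0.09, 0.42, 400)  # sin(elevation) over a rising arc

# Idealized detrended SNR: oscillates at frequency 2*h/lambda in sin(e)
snr = np.cos(4 * np.pi * h_true / lam * sin_e)

# Scan candidate heights and pick the one whose oscillation best matches
# (a discrete Fourier transform evaluated at candidate frequencies)
heights = np.linspace(1.0, 6.0, 2001)
power = [abs(np.sum(snr * np.exp(-4j * np.pi * h / lam * sin_e)))
         for h in heights]
h_est = heights[int(np.argmax(power))]
```

In practice a Lomb-Scargle periodogram is typically used because real SNR arcs are unevenly sampled and noisy, but the frequency-to-height mapping is the same.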
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Advancing Hydrological Forecasts with Data Assimilation of Earth Observation Datasets: Mid-Term Results from SEED-FD

Authors: Vanessa Pedinotti, Malak Sadki, Guillaume Fassot, Nicola Martin, Gilles Larnicol, Eric Marchand
Affiliations: Magellium-artal, ECMWF
Large-scale hydrological models are essential tools for simulating river dynamics and forecasting hydrological extremes, providing critical insights for water resource management, disaster preparedness, and climate adaptation. However, their accuracy is often hindered by uncertainties in model parameters, input data, and process representation—challenges that are particularly pronounced in data-scarce regions. Satellite Earth Observation technologies, such as altimetry and multispectral sensors, have transformed hydrological data collection by offering global coverage and long-term datasets, effectively addressing gaps left by the decline of ground-based monitoring networks. Despite these advancements, their potential remains underutilized in global flood forecasting. Among the techniques available for integrating EO datasets, data assimilation stands out as a powerful approach, enabling the real-time incorporation of observations to refine model predictions and enhance their reliability. As part of the SEED-FD (Strengthening Extreme Events Detection for Floods and Droughts) project, which aims to enhance the quality of hydrological forecasts within the Copernicus Emergency Management Service (CEMS) Early Warning Systems (EWS), this work investigates the use of an Ensemble Kalman Filter framework to assimilate diverse datasets, including discharge observations and water level anomalies, into the LISFLOOD hydrological model, a core component of the CEMS EWS. These datasets provide critical information for estimating hydrological states and parameters, improving both real-time forecasting and reanalysis efforts. By assimilating multi-variate observations, this work aims to address the uncertainties inherent in large-scale hydrological models and demonstrate the value of EO data for improving predictions of extreme hydrological events. 
Preliminary applications focus on data-rich river basins (Danube and Bhima basins), with the aim of assessing the contribution of different types of assimilated data and the possible interactions between their spatial and temporal density and quality. This presentation highlights the mid-term results of this data assimilation effort within the SEED-FD project, showcasing its contributions to advancing global flood and drought forecasting capabilities. As a next step, this methodology will be applied to use cases in data-scarce regions, demonstrating its scalability and potential for improving hydrological resilience worldwide.
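For context, a single analysis step of a stochastic Ensemble Kalman Filter, of the kind used in this work, can be written in a few lines. The sketch below is the generic textbook form with toy dimensions, not the SEED-FD/LISFLOOD implementation; the state size, observation operator, and error variances are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, obs, obs_var, H):
    """One stochastic EnKF analysis step.

    ensemble : (n_state, n_members) prior state ensemble
    obs      : (n_obs,) observation vector (e.g. discharge)
    obs_var  : scalar observation error variance
    H        : (n_obs, n_state) linear observation operator
    """
    n_obs = len(obs)
    n_members = ensemble.shape[1]
    # Ensemble anomalies and sample covariances
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = H @ A
    P_HT = A @ HA.T / (n_members - 1)                 # cross-covariance P H^T
    S = HA @ HA.T / (n_members - 1) + obs_var * np.eye(n_obs)
    K = P_HT @ np.linalg.inv(S)                       # Kalman gain
    # Perturbed observations: one independent draw per member
    obs_pert = obs[:, None] + rng.normal(0, np.sqrt(obs_var), (n_obs, n_members))
    return ensemble + K @ (obs_pert - H @ ensemble)

# Toy example: 2 state variables, 100 members, observe only the first state
ens = rng.normal(5.0, 2.0, size=(2, 100))
H = np.array([[1.0, 0.0]])
analysis = enkf_update(ens, np.array([3.0]), obs_var=0.1, H=H)
```

The analysis ensemble mean is pulled toward the observation and its spread shrinks, which is exactly the mechanism by which assimilated discharge and water level anomalies constrain the hydrological states and parameters.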
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: The Influence of Lake Size on the Accuracy of Satellite-Derived Water Level Measurements

Authors: David Lindao Caizaguano, Prof Fernando Jaramillo
Affiliations: Stockholm University
The accuracy of satellite-derived water surface elevation (WSE) measurements is crucial for hydrological studies and water resource management. This research investigates how lake area influences the accuracy of such measurements by analyzing preliminary data from 53 Swedish lakes, categorized by size: small (<1 km²), medium (1–10 km²), large (10–100 km²), and extra-large (>100 km²). WSE measurements were obtained from the first year of the Surface Water and Ocean Topography (SWOT) mission's science orbit, specifically using the Pixel Cloud Product, which provides high-resolution observations of Earth's surface water bodies. The SWOT mission employs Ka-band radar interferometry to capture detailed spatial and temporal variations in WSE, aiming to enhance our understanding of global hydrology. We compared the SWOT-derived WSE data from the Pixel Cloud Product with high-accuracy in-situ observations provided by the Swedish Meteorological and Hydrological Institute (SMHI) between August 2023 and May 2024. The analysis focused on assessing the performance of SWOT measurements across different lake size categories. Initial findings reveal a significant dependency of measurement accuracy on lake size. Smaller lakes exhibit higher Root Mean Square Error (RMSE) in satellite-derived WSE measurements compared to larger lakes. These results underscore the impact of lake area on the reliability of satellite-based hydrological data. While preliminary, this study offers valuable insights for refining satellite hydrology techniques and enhancing their applicability to smaller and more complex lake systems. The implications of our findings suggest the need for improved algorithms or calibration methods to increase the accuracy of SWOT measurements in small lakes, which could greatly benefit water resource management and hydrological modeling in regions with numerous small water bodies.
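The per-category accuracy assessment reduces to computing the RMSE of SWOT-minus-in-situ WSE differences within each size class. A minimal sketch (all numbers are invented for illustration, not SMHI or SWOT data):

```python
import numpy as np

def rmse(sat, insitu):
    """Root-mean-square error between satellite and in-situ WSE (metres)."""
    d = np.asarray(sat) - np.asarray(insitu)
    return float(np.sqrt(np.mean(d**2)))

# Illustrative pattern only: errors tend to shrink with lake area
errors_by_class = {
    "small (<1 km2)":     rmse([10.30, 9.80], [10.00, 10.00]),
    "large (10-100 km2)": rmse([10.05, 9.98], [10.00, 10.00]),
}
```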
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Optimizing a Random Forests Pipeline for SAR-based Flood Mapping

Authors: Paul Hosch, Prof. Dr. Antara Dasgupta
Affiliations: Institute of Hydraulic Engineering and Water Resources Management, RWTH Aachen University
Flood events, intensified by climate change and urbanization, demand accurate and timely mapping to support emergency response and future preparedness. Synthetic Aperture Radar (SAR), with its all-weather, light-independent imaging, is a powerful tool for rapid flood assessment, yet the complexity of backscatter interactions in urban and vegetated areas poses challenges for accurate flood delineation. Various SAR-based flood mapping techniques, such as thresholding, change detection, and active contouring, have been thoroughly explored. However, machine learning methods excel at integrating additional contextual information, such as land cover. Random Forest (RF), a robust ensemble-based machine learning model, has proven effective across domains and is popularly used for SAR-based flood mapping. Yet crucial factors that influence the performance of the RF algorithm, including the training strategy (e.g. the sampling design), input features, hyperparameters, and decision-thresholding methods, are rarely even presented in the literature, let alone critically discussed. In this study, we optimize an RF classification pipeline for binary flood detection and show the substantial impact of each of these factors on flood classification performance. The RF pipeline optimization for binary SAR-based flood mapping addresses the following aspects: (1) The underrepresentation of flooded pixels is an inherent challenge in flood mapping. Class imbalance in the training data causes traditional classifiers, including RF, to prioritize the majority class. To address this, the study compares mitigation strategies in the sampling design, balancing samples with stratification and the Synthetic Minority Oversampling Technique, alongside algorithmic modifications such as Weighted RF and Balanced RF to address the imbalance within the model itself. (2) The impact of training sample size in the sampling design is analyzed, comparing prediction accuracy and computational efficiency. 
(3) Feature importance measures are compared for feature reduction. Impurity-based, permutation-based, and Shapley-value-based feature importance calculations are used to recursively reduce a comprehensive pool of 72 SAR-based and 6 contextual features to an optimal feature set for flood mapping from SAR. (4) Hyperparameter tuning is explored to improve model performance, comparing Sequential Model-Based Optimization, exhaustive search, and random search in terms of prediction accuracy and computational time, addressing the relatively limited tunability of RF parameters compared to other machine learning classifiers. (5) Decision threshold tuning is conducted using precision-recall curves to further optimize classification performance. The different pipeline configurations are evaluated using Overall Accuracy and F1 scores, calculated with quality-controlled ground truth data from Copernicus Emergency Management Service Rapid Mapping products. We use a case study on the major 2024 flood event in central Europe, where we focus on Germany. Additionally, the spatial and temporal transferability of the optimized RF algorithm is assessed using data from the 2021 eastern Australia floods. Through this RF pipeline optimization, we provide critical insights into the application of RF for SAR-based flood mapping, specifically on the substantial impact of each choice made within the classification pipeline on the final map accuracy. Findings show the criticality of optimizing each of these factors when using RF algorithms for flood classification, with changes in overall accuracy of up to 20%. Given the dearth of adequate data to train larger and more complex deep learning models, this optimization strategy for RF paves the way for global-scale application, made possible through the objective and spatiotemporally transferable decision-making criteria illustrated herein, making it possible to automate the entire pipeline for operational SAR-based flood mapping.
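Two of the aspects above, algorithmic class-imbalance handling and decision-threshold tuning, can be sketched with scikit-learn on synthetic data. The feature count, class ratio, and parameters below are assumptions for illustration, not the study's configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for flood (1) vs. non-flood (0) pixels
X, y = make_classification(n_samples=2000, n_features=8, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Weighted RF: class_weight='balanced' up-weights the rare flooded class
rf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                            random_state=0).fit(X_tr, y_tr)

# Decision-threshold tuning on the precision-recall curve: pick the
# probability cutoff that maximises F1 instead of the default 0.5
proba = rf.predict_proba(X_te)[:, 1]
prec, rec, thr = precision_recall_curve(y_te, proba)
f1 = 2 * prec * rec / (prec + rec + 1e-12)
best_threshold = thr[np.argmax(f1[:-1])]     # f1[:-1] aligns with thr
flood_map = proba >= best_threshold
```

Balanced RF and SMOTE follow the same pattern with resampling moved either inside the tree-building step or into the sampling design, respectively.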
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: On the Use of SWOT Water Surface Elevations for Capturing Global Elevation Dynamics

Authors: Paula Wessling da Silva, Dr. Antara Dasgupta
Affiliations: Institute of Hydraulic Engineering and Water Resources Management, RWTH Aachen University
Flood forecasting accuracy strongly depends on the quality of the rainfall, streamflow, water level, channel geometry, and floodplain topography information available as inputs to numerical or data-driven models. Bathymetry and floodplain information are irreplaceable in any flood inundation modelling and are typically defined via digital elevation models (DEMs). The elevation information helps determine flow paths and velocities, which is particularly important in urban areas for predicting how quickly water will move through an area and where it will accumulate. After precipitation, topography is the main source of uncertainty in flood forecasting and the greatest source of uncertainty in flood risk models. At a global scale, currently available DEMs include FABDEM, MERIT DEM, COPDEM30, and ASTER GDEM. Of these, FABDEM is the most recent product, based on COPDEM30, the free and open version of the 12 m WorldDEM derived from the TanDEM-X satellite survey of the globe completed in 2012. Although FABDEM corrects for vegetation and hydrography, topography is dynamic by nature and changes faster than global elevation can be systematically mapped. While at regional and local scales some locations have access to more recently acquired, high-resolution LiDAR topography, better techniques to keep global DEMs updated with recent morphological changes are urgently needed, since these are the only sources of information available to many regions. Recent advances in satellite sensors provide an opportunity to improve topography data globally, regardless of the availability of local LiDAR data. In particular, the SWOT altimetry mission, which enables monthly monitoring of water surface elevations, can be used to improve current DEMs near water bodies, where they are most variable. The rasterized water surface elevation data can now be used to investigate biases in global DEMs at the intersection of the inundation extent with the DEMs.
These biases can then be propagated spatially to bias-correct global DEMs. The current study aims to test SWOT's capabilities in correcting floodplain elevation information, using case studies in Australia and Germany, selected due to the recent large flood events these countries experienced and the availability of high-resolution LiDAR data for validating the bias-corrected DEMs. Results suggest that SWOT water surface elevations can help improve elevation information globally and systematically, helping to reduce uncertainties in global flood inundation modelling and in a variety of other applications relying on high-resolution and precise elevation information.
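The core idea of estimating a DEM bias where the SWOT water surface meets the terrain can be sketched minimally (a numpy illustration assuming a single constant bias per tile; the study's spatial bias propagation is more elaborate, and the function name is hypothetical):

```python
import numpy as np

def bias_correct_dem(dem, swot_wse, water_mask):
    """Estimate a DEM bias at pixels where the SWOT-derived water
    surface intersects the DEM (the inundation edge), then subtract
    it from the whole tile.
    dem, swot_wse: 2D arrays (m); water_mask: boolean 2D array."""
    dry = ~water_mask
    # Edge pixels: water pixels with at least one dry neighbour
    # (np.roll wraps at borders; acceptable for this sketch).
    edge = water_mask & (
        np.roll(dry, 1, 0) | np.roll(dry, -1, 0) |
        np.roll(dry, 1, 1) | np.roll(dry, -1, 1))
    # At the inundation edge, the DEM elevation should match
    # the observed water surface elevation.
    bias = np.nanmedian(dem[edge] - swot_wse[edge])
    return dem - bias, bias
```

A real workflow would additionally filter edge pixels for SWOT quality flags and interpolate spatially varying biases rather than applying a single median offset.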
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: SAR-based UNet trained on authoritative governmental water masks to boost small-scale surface water and flood detection in Denmark

Authors: Simon Jakob Köhn, Ana Cristina Mosebo Fernandes, Casper Fibæk, Karina Nielsen
Affiliations: DTU Space, NIRAS A/S, ESA Phi-Lab
Floods are today one of the most common naturally occurring disasters, and climate change is expected to amplify their severity and frequency. Especially in remote areas, flood mapping and quantification are challenging. Thus, satellite Earth observation data such as synthetic aperture radar (SAR) and optical imagery play a vital role in flood mapping on global and regional scales. The high temporal frequency allows rapid mapping of flood extent, which enables fast disaster response and a better understanding of flood dynamics. Deep learning approaches combining synthetic aperture radar with digital terrain model information have recently increased in popularity. Surface water masks for model training are usually derived from optical multispectral imagery, using spectral indices such as the Normalised Difference Water Index (NDWI). Recent global models achieve high accuracy in large-scale flood mapping predictions. Floods in Denmark, however, rarely reach the spatial extent usually investigated in the literature. Yet flood risk in Denmark is not low, especially since coastal floods are predicted to increase in frequency and severity. We therefore investigate how high-resolution authoritative Danish governmental geodata in the form of river, lake, and ocean masks can be combined to generate a high-spatial-resolution ground truth mask for deep learning model training, and how that affects the minimum predictable spatial scale for surface water. We compare models trained on our newly generated ground truth mask against baseline Sentinel-2 NDWI mask-based models. Furthermore, we experiment with small additions to the standard UNet architecture to further boost performance and to best utilize the two model inputs, Sentinel-1 SAR and the Danish governmental digital terrain model (DTM).
The flood prediction capabilities of all models are evaluated on three flood instances of the October 2023 Baltic Sea flood that hit Denmark and northern Germany on 20–21 October. Furthermore, we validate the models on a river flood instance of Denmark's largest river (Gudenå) captured by Sentinel-1 on 13 January 2022. Each flood site's extent is manually mapped using the Sentinel-1 SAR VH polarization.
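The baseline masks above rely on the NDWI, which can be sketched in a few lines (a minimal numpy illustration; the zero threshold is a common default, not necessarily the one used in this work):

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Baseline Sentinel-2 water mask from the Normalised
    Difference Water Index: NDWI = (G - NIR) / (G + NIR).
    Pixels above the threshold are labelled water."""
    ndwi = (green - nir) / np.maximum(green + nir, 1e-9)  # avoid /0
    return ndwi > threshold
```

For Sentinel-2, the green and NIR inputs would typically be bands B3 and B8 in surface reflectance.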
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: FloodCatch: AI-driven flood detection application for agricultural policy monitoring

Authors: Aymeric Roumy, P. Cordier, Roberto Sabia, S. Schaack, M. Chauvin, J. Elkhiati, L. Fouques
Affiliations: Capgemini, ESA
Over the past two decades, flood events have surged by 134%, driven by the accelerating effects of climate change. These floods severely impact agriculture by damaging crops, eroding soil, and threatening food security. They also complicate regulatory compliance, because working on flooded or waterlogged soils is prohibited. On the other hand, compensation for flood damage is crucial. The Common Agricultural Policy (CAP) provides financial aid to affected farmers, helping them recover from crop and livestock losses. With projections indicating more frequent and severe floods ahead, the agricultural sector faces an urgent need for faster, smarter, and more resilient monitoring tools. Earth observation (EO) data is central to tracking and mitigating flood risk. As part of ESA's EOP-S sustainability strategy, EO-driven solutions (such as Sentinel-1 radar imagery) enable water detection even through clouds or at night. However, Sentinel-1's 12-day revisit time leaves a critical gap in daily monitoring, especially for floods that can unfold in just 48 hours. This limitation obstructs timely detection and delays response, underscoring the need for a more efficient, automated solution. FloodCatch, an initiative led by ESA and Capgemini, tackles this limitation by combining daily flood imputation with short-term flood forecasting in a user-centric geospatial application. To achieve this objective, we apply water and flood segmentation to Sentinel-1 products. The solution combines signal processing and unsupervised learning to generate spatial masks of water bodies and flooded areas from SAR data. Each pixel is classified as water or non-water with an associated probability score. Once generated, these segmentation masks serve as crucial inputs for our two models: flood imputation and forecasting.
The role of the imputation and forecasting model is to perform inference, in other words, to make predictions based on input data (composed of Sentinel-1, meteorological data, and a DEM). Training follows a loop-based paradigm in which inference and optimization alternate across the dataset. The objective is to reproduce a high-resolution ICEYE reference mask as accurately as possible, maximizing the Dice score (a continuous IoU) while minimizing the binary cross-entropy loss. Additionally, the model is trained to understand the evolution from date T to T+N, incorporating a dynamic constraint: the greater the temporal distance of the inferred mask, the more flexibility the model has to diverge from the Sentinel masks. By bridging Sentinel-1's temporal gap, FloodCatch delivers near-real-time flood intelligence, supporting farmers, insurers, and public authorities in mitigating the growing impacts of climate-change-driven flooding.
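The combined Dice and binary cross-entropy objective described above can be sketched minimally (a numpy illustration of the two loss terms; the actual training objective and weighting in FloodCatch may differ):

```python
import numpy as np

def dice_bce_loss(pred, target, eps=1e-7):
    """Combined soft-Dice (continuous-IoU-style overlap) and
    binary cross-entropy loss for flood-mask prediction.
    pred: probabilities in (0, 1); target: binary ground truth."""
    pred = np.clip(pred, eps, 1 - eps)  # numerical safety for log
    inter = np.sum(pred * target)
    dice = (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    bce = -np.mean(target * np.log(pred)
                   + (1 - target) * np.log(1 - pred))
    # Minimising (1 - dice) maximises overlap; bce penalises
    # per-pixel miscalibration.
    return (1 - dice) + bce
```

The Dice term rewards mask overlap regardless of class imbalance, while the BCE term keeps per-pixel probabilities calibrated; summing them is a common compromise.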
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Instantaneous Sea Surface Height Prediction Using Satellite Altimetry and Deep learning

Authors: Saeed Rajabi-Kiasari, Dr. Nicole Delpeche-Ellmann, Prof. Artu Ellmann
Affiliations: Department of Civil Engineering and Architecture, Tallinn University of Technology, Department of Cybernetics, School of Science, Tallinn University of Technology
Sea level prediction is crucial for applications such as coastal protection, marine engineering, safe shipping routes, and geo-hazard management. Traditional sea level modeling approaches often relied on single data sources such as tide gauges or hydrodynamic models, which face limitations in spatial and temporal resolution and discrepancies with real-world conditions. Advances in satellite altimetry, including the Surface Water and Ocean Topography (SWOT) mission, can revolutionize these methods with high-resolution measurements, enhancing accuracy and reliability while addressing existing gaps. By integrating satellite data with other sources through Multi-Sensor Data Fusion, a more robust, accurate, and comprehensive understanding of the marine environment can be expected. As a result, this research proposes a novel deep learning (DL) approach to predict satellite-based sea surface height (SSH), leveraging higher-resolution data to improve the accuracy of sea level predictions. The model also offers a solution to the challenges of computationally intensive data assimilation techniques. It will use tide gauge data for precise local measurements, hydrodynamic models to capture ocean dynamics, and meteorological data such as wind speed, atmospheric pressure, and wave height for environmental insights. A novel DL architecture will include attention mechanisms to prioritize spatially relevant data. Explainability features such as attention visualizations will provide insight into the relationships between inputs and predictions. The aim is to develop geoid-referenced SSH maps (the geoid being the equipotential surface of the Earth's gravity field) for real-time adaptable predictions, combining spatio-temporal information in a way that has not been done before in sea level research. This approach also represents the first integration of instantaneous SSHs in the modeling process, rather than using them only for validation purposes.
The approach will be tested in the Baltic and North Seas to evaluate its accuracy and adaptability under varied oceanographic conditions. The model could establish a new benchmark for SSH forecasting, advancing environmental monitoring, climate research, and operational oceanography with high-resolution, interpretable predictions.
Keywords: Sea Level Prediction; Satellite Altimetry; Deep Learning; Data Fusion; SWOT; Hydrogeodesy.
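The attention mechanism mentioned above can be sketched generically (a standard scaled dot-product formulation in numpy; this is an illustration of the concept, not the study's architecture):

```python
import numpy as np

def attention(query, keys, values):
    """Scaled dot-product attention: weight input locations
    (e.g. tide gauges or grid points) by their relevance to the
    prediction target encoded in the query.
    query: (d,); keys: (n, d); values: (n, dv)."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)     # similarity per input
    w = np.exp(scores - scores.max())      # numerically stable softmax
    w /= w.sum()
    return w @ values, w                   # weighted value, weights
```

The returned weights are exactly what "attention visualizations" would display to explain which inputs drove a given SSH prediction.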
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Advancing Wetland and Flood Monitoring Through Multi-Sensor Data intercomparison of SAR and GNSS-R Observations: the Yucatan Lake Case.

Authors: Mr. Paolo G. Madonia, Dr. Cinzia Zuffada, Dr. Nazzareno Pierdicca
Affiliations: Sapienza - Università Di Roma, Jet Propulsion Laboratory, California Institute of Technology, Scuola Superiore Studi Avanzati Sapienza
Wetlands and floodplains play a vital role in terrestrial water storage, ecosystem functioning, and global atmospheric methane emissions. However, accurately monitoring the dynamic distribution of surface water, particularly under vegetation and during rapid flooding events, remains challenging due to the weaknesses of single-sensor remote sensing approaches. Improving the monitoring techniques targeting these water-rich ecosystems is crucial for a better understanding of the equilibria at play, and also supports early-warning systems for flood-prone regions, providing actionable insights for rapid disaster response strategies. To address these challenges, this study investigates the combined use of Synthetic Aperture Radar (SAR) and Global Navigation Satellite System Reflectometry (GNSS-R) to monitor surface water in wetlands and flood-prone areas. SAR offers robust capabilities in detecting water extent with high spatial resolution and coverage, yet its temporal limitations, due to long satellite revisit times, make it less effective for capturing inundation events occurring on short timescales. At the same time, speckle noise in SAR images makes classification challenging, and ambiguities in the signature of water can lead to misclassification with other targets, such as shadows or parking lots in urban areas. On the other hand, GNSS-R provides high temporal revisit and the ability to penetrate vegetation, a crucial feature for observing several areas of interest for a better understanding of water dynamics. Despite these advantages, GNSS-R presents challenges in signal interpretation due to quasi-random measurement tracks that produce non-uniformly distributed data accruing over time. These do not allow producing images comparable to those of SAR; rather, they represent an alternative approach to sampling the Earth's surface.
Existing results demonstrate that this approach is sensitive to water features with unprecedented spatial detail, even in densely vegetated areas. This work builds upon the foundation established by previous studies [1,2,3], using methodologies suitable for comparing maps of SAR and GNSS-R data. Regarding penetration capability through vegetation, the use of low-frequency bands is an essential characteristic of the monitoring system in this application domain. Only a few SAR satellite platforms operate at L-band, whereas GNSS-R implicitly works at that frequency, as it collects the reflections of signals from navigation satellites operating at L-band. An important difference between the two systems, whose impact in this application is not yet fully understood, is the observation geometry: a SAR measures the Earth surface backscatter, whereas GNSS-R implements a bistatic, near-specular radar observation. The SAR and GNSS-R datasets considered in this study both exploit the L-band of the electromagnetic spectrum, in order to maximise vegetation penetration and enable a more direct and relevant comparison of the two techniques' capability of detecting water underneath vegetation. The GNSS-R data are from the Cyclone Global Navigation Satellite System (CYGNSS) mission, ensuring sub-daily temporal revisit times in the latitudinal band between +/- 38 degrees. The SAR data are from the ALOS-2 satellite of the Japanese space agency JAXA, carrying aboard the PALSAR-2 L-band SAR. The PALSAR-2 instrument provides spatial resolution ranging from a few meters up to one hundred meters, depending on the operational mode (spotlight, stripmap, ScanSAR). ScanSAR data are used in this work, with a swath width of about 400 km and a spatial resolution between 60 m and 100 m.
Extending the investigation carried out in [1,2], this study focuses on a multi-sensor analysis of Yucatan Lake, an oxbow lake located close to the Mississippi River at the border between Mississippi and Louisiana (United States of America). Its proximity to a major watercourse makes Yucatan Lake an excellent site to study seasonal changes in water extent and water level, as it experiences rapid and frequent overflows from the Mississippi River. For these reasons it is a cal/val site for several NASA missions, including the soon-to-be-launched NISAR. The anticipated outcomes of this research include an enhanced understanding of wetland hydrology and flood extent localization, as well as an improved characterisation of the underlying dynamics of the flooding phenomena. Depending on the outcomes, future developments of this study may involve modeling the coherent scattering contributions of water beneath vegetation using GNSS-R signals, improving radar-based discrimination of surface water and vegetation signals, and combining short-timescale GNSS-R measurements with hydrological models. [1] Downs, B., Mapping Inland Surface Water with Spaceborne GNSS Reflectometry and SAR, Ph.D. Thesis, Ohio State University (2023). [2] Chapman, B.D. et al., Comparison of SAR and CYGNSS Surface Water Extent Metrics, IEEE JSTARS 15, 3235–3245 (2022). [3] Archini, S., Radar and GNSS Reflectometry for mapping water bodies, floods and wetlands, M.Sc. Thesis, Sapienza – Università di Roma (2024).
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Data-driven analysis of landslide and flood risk assessment with EO data

Authors: Francisco Moral, Fran Martín
Affiliations: Earthpulse
Hydro-meteorological risks affect ecosystems, human health, infrastructure, agriculture, food security, and economies. They include hazards linked to water and weather, such as heavy rainfall, storms, hurricanes, droughts, and heatwaves, as well as their impacts such as floods, landslides, and water scarcity. Landslides and floods are among the most severe natural hazards, threatening lives and livelihoods, particularly in regions with complex geographies and socio-economic vulnerabilities. Climate change exacerbates these risks by intensifying the hydrological cycle, increasing evaporation rates, and causing more frequent and unprecedented weather events. This work presents ongoing experience with a data-driven modeling approach that leverages Earth Observation (EO) datasets, among others, to assess landslide and flood risks in Chile, Brazil, and the Dominican Republic. Substantial experience has been gained through two Proofs of Concept (PoCs), funded by the ESA project GDA FFF (a World Bank request tackling floods and landslides) and by the private company Abertis (in collaboration with Detektia) for landslides affecting road slopes. Datasets have been arranged according to the three risk component categories (hazard or susceptibility, vulnerability, and exposure) and analyzed as part of a comprehensive methodology. Integrating past event data with static and dynamic variables enables detailed assessment through hazard modelling and sheds light on the conditions that provoked an event. Sentinel-1 data is employed to detect flood extents using radar backscatter analysis and Artificial Intelligence (AI), in addition to deriving ground motion on a time-series basis, while Sentinel-2 provides multispectral information for moisture and vegetation indices.
Additionally, a Digital Elevation Model (DEM) is used to derive topographic conditioning and hydrological parameters critical for flood susceptibility modeling, along with other landscape descriptors such as land cover maps, the river network, and lithology. Furthermore, meteorological variables have been included to account for dynamic factors and probability of occurrence. The approach combines exposure datasets, such as population distribution and infrastructure maps, with local vulnerability indicators. Machine learning algorithms (Random Forests and Logistic Regression) have been used to train independent preliminary models for two areas of interest affected by landslides, considering the most relevant variables in Chile (rainfall, ground motion, slope, vegetation and moisture indices, and seismic activity) and in Brazil (slope, Topographic Position Index (TPI), and 10-day accumulated rainfall). The results show that 90% of the landslide events were classified as critical risk, highlighting the potential of EO and geospatial-based models for identifying landslide high-risk areas and critical periods according to meteorological conditions, which can guide early warning systems and inform targeted disaster risk reduction strategies. Phase 1 of the Dominican Republic project has recently concluded, delivering a detailed analysis focused on data collection to develop risk indices for landslides and floods. This process utilized global datasets, supplemented with higher-resolution local datasets where available. Based on the collected data and identified gaps, methodologies were proposed to integrate all relevant variables and apply either data-driven or theoretical modeling, depending on the presence of past event records. Additionally, risk monitoring strategies and tools were proposed to align with the dynamic nature of the datasets.
In the recently initiated Phase 2, the insights from Phase 1 will be leveraged to develop risk profiles for floods and landslides, with a focus on critical infrastructure. The ultimate goal is to devise adaptation strategies to assess future risks and promote resilient development. Results are expected in the coming months, including outputs such as dynamic flood probability risk maps (driven by rainfall conditions) and static landslide risk maps based on landscape characteristics. These Proofs of Concept emphasize the need for continuous monitoring of hazard-prone areas, recommending the integration of higher-resolution meteorological datasets and DEMs, as well as precise mapping of ongoing events, to improve the prediction of future disasters and their potential impacts. This work demonstrates the critical role of EO technologies in dynamic risk assessment and in fostering resilience in vulnerable regions, while advocating for the development of more robust and localized datasets to refine predictive capabilities. This includes investing in early warning systems, improving infrastructure, and developing policies that protect vulnerable communities. Understanding and preparing for these risks is essential for ensuring the safety, health, and well-being of populations worldwide in an increasingly unpredictable climate.
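As an illustration of the logistic-regression component of such susceptibility modelling, a minimal gradient-descent sketch on conditioning factors follows (numpy only; the features, data, and function names are illustrative, not the project's actual pipeline):

```python
import numpy as np

def train_logistic(X, y, lr=0.1, n_iter=2000):
    """Minimal logistic-regression susceptibility model: X holds
    conditioning factors (e.g. slope, accumulated rainfall),
    y flags past landslide/flood events (0/1)."""
    X = np.c_[np.ones(len(X)), X]          # prepend bias column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ w))       # predicted probability
        w -= lr * X.T @ (p - y) / len(y)   # gradient of log-loss
    return w

def predict_proba(w, X):
    """Susceptibility probability for new conditioning factors."""
    X = np.c_[np.ones(len(X)), X]
    return 1 / (1 + np.exp(-X @ w))
```

In practice, features would be standardised and the Random Forest variant would capture non-linear interactions that this linear model cannot.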
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Validation of Sentinel-3 and Sentinel-6 Derived Water Levels Using a Comprehensive Network of In-Situ Data Over Rivers

Authors: Stylianos Kossieris, Chrysovalantis Tsiakos, Georgios Tsimiklis, Dr. Angelos Amditis
Affiliations: Institute of Communication and Computer Systems (ICCS)
Inland water levels and their dynamics are critical components of the global water cycle and land surface hydrology, with important implications for water resource management, biodiversity preservation, and climate change mitigation. Historically, water level heights have been obtained from in-situ sensors, but the network is limited due to the prohibitive cost of deploying and maintaining state-of-the-art sensors. In addition, in some cases the scientific community has limited access to historical gauge data. As a result, the spatial distribution of sensors is not dense enough to serve as the only input for calibrating and validating hydrological or hydraulic models. In this context, space-based observations, in particular altimetry missions, supplement the limited gauge measurements, providing inland water level measurements with global coverage and consistent accuracy. However, validating altimetry-derived water levels poses significant challenges, primarily due to the limited access to historical in-situ data. In this study, a comprehensive validation of Sentinel-3 and Sentinel-6 MF derived water levels is conducted over different sections of the Danube River, using historical gauge measurements acquired by the Austrian and Romanian water authorities. In the framework of the CLARION Horizon Europe project, satellite altimetry measurements are utilized to establish a synergistic framework with in-situ data, in a common processing cycle for inland water level extraction, as well as to improve inland water level forecasting models. As we showed in a recent study summarizing space-based applications for inland water level monitoring between 2018 and 2022, the use of rivers for validating altimetry missions is limited. Also, Sentinel-6 MF data have not been widely validated over inland water bodies since the mission's launch in November 2020. Thus, water level time series from different gauges are used to validate the Sentinel-3 and Sentinel-6 missions.
Also, historical water level data from gauges along the Danube–Black Sea Canal in Romania, which has a width of approximately 150 m and a constant water level, are used to validate the Fully-Focused SAR (FF-SAR) method.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Multi-Sensor SAR-based Flood Mapping for High-Temporal Assessment of the 2020 Flood Event in Huế, Vietnam

Authors: Felix Bachofer, Elly Schmidt, Juliane Huth, Dr. Tung Pham Gia, Dr. Andreas Braun, Dr. André Assmann, Kerstin Bueche, Marco Ottinger, Assoc. Prof. Dr. Linh Nguyen Hoang Khanh
Affiliations: German Aerospace Center (DLR), Hue University - International School (HUIS), Department of Geography, University of Tuebingen, geomer GmbH
The 2020 flood event in Huế, Central Vietnam, ranks among the most devastating in recent history, highlighting the critical need for high-temporal monitoring to improve flood risk management and disaster response. The Thừa Thiên Huế province's topography is marked by a complex interplay of mountainous regions, low-lying plains, and an extensive lagoon system, making it highly vulnerable to diverse flood dynamics. Situated along the Hương River, where steep upstream gradients transition into flat downstream areas, Huế faces heightened risks of fluvial, pluvial, and coastal flooding, particularly during extreme weather events. The 2020 flood, driven by prolonged inundation lasting several weeks due to consecutive typhoons and tropical depressions, including Typhoons Molave and Goni, caused extensive damage and severely tested local disaster response capacities. This study leverages Earth Observation (EO) data from Sentinel-1 Synthetic Aperture Radar (SAR), COSMO-SkyMed, and TerraSAR-X acquisitions to capture the flood dynamics of the 2020 event. Threshold-based change detection algorithms and the Normalized Difference Index (NDI) were applied to delineate flood extents, revealing the spatial and temporal flood evolution. Drone imagery and in situ measurements from the early flood period provided critical reference data for accuracy assessment. Additionally, flood modeling using HEC-RAS simulated the hydrodynamic behavior of the Hương River system, offering detailed insights into water flow, inundation extent, and depths. Results from the HEC-RAS model were cross-compared with the SAR-derived data. The results illustrate the extreme challenge that prolonged rainfall poses to flood risk management systems. The findings underscore the importance of understanding flood dynamics for improving flood management and adaptation processes.
This research demonstrates the effectiveness of integrating multi-sensor EO data and hydrodynamic modeling for disaster risk reduction, providing scalable methodologies to address the increasing frequency of extreme weather events. Future work will focus on integrating additional EO sensors and enhancing automation in flood detection to further improve disaster response timeliness and accuracy.
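The NDI-based change detection described above can be sketched minimally (a numpy illustration; the -0.3 threshold and function name are illustrative assumptions, not values from the study):

```python
import numpy as np

def ndi_flood_mask(pre, post, threshold=-0.3):
    """Normalized Difference Index change detection between a
    pre-event and a co-registered during-flood SAR backscatter
    image (linear power). Open water darkens the image, so
    strongly negative NDI values flag newly flooded pixels."""
    ndi = (post - pre) / np.maximum(post + pre, 1e-9)  # avoid /0
    return ndi < threshold
```

In an operational chain, speckle filtering and terrain masking would precede this step, and the threshold would be tuned per scene, e.g. against the drone-based reference data.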
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Improving inland water altimetry through Bin-Space-Time (BiST) retracking: A Bayesian approach to incorporate spatio-temporal information

Authors: Mohammad J. Tourian, Omid Elmi, Shahin Khalili, Johannes
Affiliations: Institute Of Geodesy, University Of Stuttgart
Over the 30 years of its availability, satellite altimetry has established itself as an important tool for understanding the Earth system. Originally developed for oceanography and geodesy, it has also proven valuable for monitoring water level variations of lakes and rivers. However, when using altimetry for inland waters, there is always a critical issue: retracking, i.e., the procedure in which the range from the satellite to the water surface is (re)estimated. Current retracking methods rely heavily on single waveforms, which results in a high sensitivity to every individual peak in the waveform and a strong dependency on the waveform's shape. Here, we propose the Bin-Space-Time (BiST) retracking method, which moves beyond finding a single point in a 1D waveform and instead seeks a retracking line within a 2D radargram, also taking into account the temporal information over different cycles. The retracking line divides the radargram into two segments: the left-hand (Front) and right-hand (Back) side of the retracking line. Such a segmentation can be interpreted as a binary image segmentation problem, for which spatiotemporal information can be incorporated. We follow a Bayesian approach, exploiting a probabilistic graphical model known as a Markov Random Field (MRF). The problem is posed as a Maximum A Posteriori estimation of an MRF (MAP-MRF), which means finding a retracking line that maximizes a posterior probability density or, equivalently, minimizes a posterior energy function. Our posterior energy function is obtained from a prior energy function and a likelihood energy function, both depending on signal intensity and bin: 1) the prior, a bin-space energy function defined between first-order neighbouring pixels of a radargram, modeling the spatial dependency between their labels for given intensities and bins; and 2) the likelihood, a temporal energy function of a pixel for labeling it Front or Back given its overall temporal evolution.
The realization of the field with the minimum sum of the bin-space and temporal energy functions is then found through the maxflow algorithm. Consequently, the retracking line, i.e., the boundary between the Front and Back regions, is obtained. We apply our method to both pulse-limited and SAR altimetry data over nine lakes and reservoirs in the USA with different sizes and different altimetry characteristics. The resulting water level time series are validated against in situ data. Across the selected case studies, on average, the BiST retracker improves the RMSE by approximately 0.5 m compared to the best existing retracker. The main benefit of the proposed retracker, which operates in the bin, space, and time domains, is its robustness against unexpected waveform variations, making it suitable for diverse inland water surfaces.
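The BiST method solves the MAP-MRF problem with maxflow; as a simplified stand-in that conveys the idea of a smoothness-constrained retracking line through a radargram, a dynamic-programming sketch follows (energies, the jump constraint, and the function name are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def retracking_line(energy, max_jump=1):
    """Find, cycle by cycle, the bin sequence minimising a
    per-pixel energy plus a smoothness constraint limiting bin
    jumps between consecutive cycles (columns).
    energy: (n_bins, n_cycles); returns one bin index per cycle."""
    n_bins, n_cycles = energy.shape
    cost = energy[:, 0].copy()                  # best cost ending at bin b
    back = np.zeros((n_bins, n_cycles), dtype=int)
    for t in range(1, n_cycles):
        new = np.full(n_bins, np.inf)
        for b in range(n_bins):
            lo, hi = max(0, b - max_jump), min(n_bins, b + max_jump + 1)
            j = lo + int(np.argmin(cost[lo:hi]))  # best predecessor bin
            new[b] = cost[j] + energy[b, t]
            back[b, t] = j
        cost = new
    line = np.empty(n_cycles, dtype=int)         # backtrack the optimum
    line[-1] = int(np.argmin(cost))
    for t in range(n_cycles - 1, 0, -1):
        line[t - 1] = back[line[t], t]
    return line
```

The actual MAP-MRF formulation labels every radargram pixel Front or Back via graph cuts, which is strictly more general than this single-line dynamic program.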
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Operational Near Real Time Monitoring of Lakes and Rivers Water Level Exploiting the Copernicus Altimetric Constellation, Copernicus Global Land Services Current Performances and Roadmap: Towards the Inclusion of Swath Altimetry and the Copernicus Expansion Mission CRISTAL in the Services

Authors: Nicolas Taburet, Maxime Vayre, Jeremy Guilhen, Dr Beatriz Calmettes, Thérèse Barroso, Nadine Gobron
Affiliations: CLS, CNES, JRC
Freshwater resources are increasingly pressured to meet population needs and are also a fundamental element for industry and agriculture, thereby becoming an economic and political stake. With the intensification of extreme events, the monitoring of inland water levels, a proxy for surface freshwater stocks, navigability conditions on inland waterways, discharge, and flood risk management, is thus an important challenge. With the decreasing number of publicly available in situ water level records, the altimetry constellation offers a powerful and complementary alternative. The combined support of CNES (THEIA Hydroweb) and the European Copernicus Global Land Monitoring Service allows the operational monitoring of water levels over more than 20,000 rivers and lakes worldwide with less than 3 days' delay by exploiting satellite altimeters. The number of operational products is constantly growing thanks to the successful successive integration of the Sentinel-3A and Sentinel-3B missions, as well as Sentinel-6 MF to ensure continuity of the time series over historical targets. The service closely follows updates to the Sentinel altimeter commands and products, integrating them and evolving the service accordingly. In 2024 the processing workflow continued evolving and now integrates a river slope correction, which improves the precision of the water level estimates. This has been made possible by exploiting river slope databases derived from lidar altimetry (ICESat-2 data) as well as slope estimates provided by SWOT KaRIn data. This presentation will detail the validation and benefits of this algorithmic evolution and quantify the service performance by comparing the satellite altimetry time series with those of in situ gauging stations.
Finally, the service roadmap will be detailed and illustrated with results of R&D activities: a promising processing scheme that re-exploits SAR altimetry off-nadir signals, progress in using geodetic missions in preparation for integrating the upcoming CEM CRISTAL, and the prospect of including swath altimetry based on consolidated SWOT validation results.
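The river slope correction mentioned above can be illustrated with a minimal sketch: given a known water-surface slope for a reach, observations taken at slightly different along-river positions are referred to a common virtual-station chainage before building the time series. The function name, sign convention, and all numbers below are hypothetical, not the operational Hydroweb/Copernicus implementation.

```python
import numpy as np

def slope_correct(heights_m, chainage_km, slope_m_per_km, ref_chainage_km):
    """Refer water-level observations taken at different along-river
    positions to a common virtual-station chainage using a known
    water-surface slope (hypothetical helper, not the Hydroweb code)."""
    heights_m = np.asarray(heights_m, dtype=float)
    chainage_km = np.asarray(chainage_km, dtype=float)
    # A measurement downstream of the reference sits lower by
    # slope * distance, so add that drop back when referring it upstream.
    return heights_m + slope_m_per_km * (chainage_km - ref_chainage_km)

# Three passes crossing the river at slightly different chainages,
# with a uniform surface slope of 0.05 m/km (5 cm/km):
corrected = slope_correct([12.10, 12.00, 11.90],   # observed heights (m)
                          [98.0, 100.0, 102.0],    # chainage of each pass (km)
                          0.05, 100.0)             # slope, reference chainage
print(corrected)  # all passes now agree at 12.00 m for chainage 100 km
```

Without the correction, the 20 cm spread between these passes would appear as spurious water-level variability in the time series.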

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Remote sensing of river water surface dynamics over multi-branch rivers in the age of SWOT: extracting channel slope and meander from complex river networks

Authors: Sarah Franze, Simon Köhn, Linda Christoffersen
Affiliations: Technical University Of Denmark
Satellite remote sensing of river surface water has been revolutionized by the recently launched Surface Water and Ocean Topography (SWOT) mission. SWOT provides 2D images of geolocated surface water height across a wide swath, observing 96% of the land area between 78°S and 78°N with a temporal frequency of 21 days or higher. The high-level river SWOT product, RiverSP, is assigned to predefined river reaches based on the SWORD river centerline database (Altenau et al. 2021). For multi-channel or braided river systems, however, the SWORD centerlines provide a spatially coarse representation of the true, complex river geometry. The hydraulic visibility (Garambois et al. 2016) that SWOT offers over braided rivers can be exploited more fully by identifying the primary channels within the braided section and analyzing the water surface observations with respect to individual channels. Here, we present a method for extracting river centerlines from monthly surface water extent masks and processing the SWOT pixel cloud data over a 70 km section of the multi-channeled Brahmaputra River for the SWOT observation period (July 2023 – present). Water masks are extracted from multispectral Sentinel-2 images, and an image analysis procedure is applied to extract the detailed centerline geometry of the braided river network. SWOT pixel cloud data are then assigned to individual channels, and the channel slopes are computed for each observation date. These methods allow temporal monitoring of both channel meander and channel slope. Using the water-mask-derived centerlines, we find that in low-flow situations the slope difference between parallel main channels exceeds 2 cm/km, whereas in high-flow situations channel slopes are very similar. 
Additionally, if new channel centerlines are not extracted and all observed river levels are projected onto the SWORD centerline, the estimated channel slope can differ by up to 2 cm/km from the individual channel slopes in both low- and high-flow scenarios. These findings show that satellite remote sensing can now resolve the individual channels of a braided river system and their distinct flow regimes, and that in low-flow situations channel separation may be necessary to accurately observe and model river water dynamics. By utilizing SWOT data at the individual channel level, new insights into multi-channeled river water surface height and extent may greatly improve hydrologic models of complex river networks.
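The per-channel slope estimation described above can be sketched as a least-squares regression of water-surface height on along-channel distance, applied separately to each channel label. Everything below (the function, the channel labels, the 6 cm/km and 3 cm/km slopes, the noise level) is a synthetic illustration, not the authors' SWOT pixel-cloud processing.

```python
import numpy as np

def channel_slopes_cm_per_km(distance_km, height_m, channel_id):
    """Water-surface slope per labelled channel (cm/km): regress height
    on along-channel distance separately for each channel."""
    slopes = {}
    for cid in np.unique(channel_id):
        sel = channel_id == cid
        # Degree-1 polyfit of height [m] vs distance [km] -> slope in m/km
        slope_m_per_km = np.polyfit(distance_km[sel], height_m[sel], 1)[0]
        slopes[int(cid)] = 100.0 * slope_m_per_km   # m/km -> cm/km
    return slopes

# Synthetic low-flow scene over a 70 km reach: channel 0 falls 6 cm/km,
# channel 1 falls 3 cm/km, with ~1 cm height noise per observation.
rng = np.random.default_rng(0)
d = np.tile(np.linspace(0.0, 70.0, 50), 2)
cid = np.repeat([0, 1], 50)
h = np.where(cid == 0, 10.0 - 0.06 * d, 9.5 - 0.03 * d)
h = h + rng.normal(0.0, 0.01, h.size)
slopes = channel_slopes_cm_per_km(d, h, cid)
print(slopes)  # roughly {0: -6.0, 1: -3.0} cm/km
```

Projecting both channels onto a single centerline before the regression would blend the two regimes, which is the slope bias the abstract quantifies.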

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Assessing River Connectivity in the Danube River Basin Using Sentinel-1 Data and Advanced Machine Learning Techniques.

Authors: Oualid Yahia, Evangelos Spyrakos, Dr Armando Marino, Allan Audsley
Affiliations: University Of Stirling
In this study, we developed a geospatial processing chain using Sentinel-1 Synthetic Aperture Radar (SAR) data to evaluate river connectivity. We demonstrate the method in the Danube River Basin, specifically in Linz, Austria, and Budapest, Hungary. The focus was to improve the accuracy of water body detection, evaluate river connectivity, and extract key hydrological metrics such as river width, connectivity indices, and water levels across various temporal datasets. For maximum spatial and temporal coverage, Sentinel-1 Ground Range Detected (GRD) products were used as input to the classification pipeline. Preprocessing included orbit file application, subsetting, thermal noise removal, radiometric calibration, terrain correction using the SRTM 1Sec DEM, speckle filtering (Lee Sigma 3x3), conversion from linear to decibel scale, and co-registration of temporal imagery to ensure geospatial alignment. Python-based geospatial libraries, such as GDAL and Rasterio, were employed to automate workflows. The classification process employed machine learning (pixel-based) and deep learning (object-based) approaches to automatically identify water bodies. A time series of Sentinel-1 GRD products from 02/2022 to 10/2024 was used for training. The models under consideration were:
• U-Net: This model utilized 3,105 patches of size 256x256, with pre-processed VH and VV bands as input. Data samples were partitioned into 80% for training and 20% for testing, with a batch size of 16. Results: Accuracy: 0.98, Precision: 0.91, Recall: 0.97, F1 Score: 0.94.
• Pixel-Based Fusion Framework: This approach combined estimates from a Multilayer Perceptron and a Random Forest. The input feature vector comprised {VH, VV (pre-processed), DEM, and projected incidence angle}. A total of 1,378,956 balanced data samples were partitioned into 80% for training and 20% for testing. Results: Accuracy: 0.95, Precision: 0.95, Recall: 0.95, F1 Score: 0.95.
A temporal aggregation approach was implemented to generate water body probability maps. Using binary classification masks derived from multiple coregistered SAR images, the percentage of temporal water body presence (temporal persistence) at each pixel was calculated. The resulting water body persistence maps highlighted areas of consistent water presence, quantified from the proportion of images indicating water at a given pixel. Thresholding was applied to derive binary masks representing persistent water features (e.g., pixels classified as water in at least 75% of the observations). These masks were further validated against field data and shapefile barriers to assess accuracy. Key hydrological metrics included river width estimation, derived using distance transform techniques. River centrelines were extracted from the water masks, and gradients along the centreline were calculated to generate perpendicular transects for width measurement. Width profiles were analysed, with results aggregated into mean values and visualized spatially. River connectivity was assessed using a graph-based approach where nodes represented connected water regions and edges represented potential flow paths. The River Connectivity Index (RCI) was calculated to quantify hydrological continuity, integrating the positions of barriers such as bridges and dams from shapefile data. Connectivity graphs were iteratively refined by removing nodes with fewer than two edges and reconnecting interrupted paths to maintain flow visualization. Hydrological time series, including discharge and water level data, were statistically analysed against classified water extents. Anomalies were normalized to address seasonal variability, and regression models were applied to assess the correlation between satellite-derived water areas and field-observed hydrological variables. 
The incorporation of temporal aggregation, graph-based connectivity analysis, and visualization techniques enhanced our ability to monitor and quantify water resources. These methodologies are adaptable for scaling to larger riverine systems or integrating multi-sensor data for improved hydrological analysis.
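The temporal-aggregation step described above, computing the fraction of scenes in which each pixel is classified as water and thresholding at, for example, 75%, can be sketched in a few lines. The function name and the toy masks are illustrative, not the study's full SAR mask stack.

```python
import numpy as np

def persistence_map(mask_stack, threshold=0.75):
    """Temporal aggregation of co-registered binary water masks
    (time, rows, cols): per-pixel fraction of scenes classified as water,
    plus the thresholded persistent-water mask (e.g. water in >= 75%
    of observations)."""
    stack = np.asarray(mask_stack, dtype=float)
    persistence = stack.mean(axis=0)            # per-pixel water fraction
    return persistence, persistence >= threshold

# Four hypothetical 2x2 classification masks (1 = water, 0 = non-water)
stack = [[[1, 0], [1, 1]],
         [[1, 0], [0, 1]],
         [[1, 1], [1, 1]],
         [[1, 0], [1, 1]]]
persistence, persistent_water = persistence_map(stack)
print(persistence)       # fractions: [[1.  0.25] [0.75 1.  ]]
print(persistent_water)  # [[ True False] [ True  True]]
```

The resulting binary mask is the input the abstract then feeds to width estimation (distance transform) and to the graph-based connectivity analysis.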

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I-J)

Poster: Recovering noisy measurements over inland water bodies by regenerating L1B SAR altimetry waveforms using a Fully-Focused Sub-Synthetic Aperture Radar processing scheme

Authors: Shahin Khalili, Mohammad J Tourian, Omid Elmi, Johannes Engels, Nico Sneeuw
Affiliations: Institute of Geodesy, University of Stuttgart
The satellite nadir altimetry technique has been a powerful tool for understanding oceans and seas over the past few decades. Over rivers and small inland water bodies, however, it sometimes delivers noisy observations that may lead to gaps or false measurements in water level time series. Here, we aim to identify and recover anomalous measurements through reprocessing in the Level 1B (L1B) chain of satellite altimetry. To this end, we first detect abnormal waveforms that lead to anomalous water level measurements by extracting various parameters from the satellite's altimeter, such as the AGC parameter, tracker range, and features related to waveform shape, including the number and location of peaks, noise level, kurtosis, center of gravity, and peakiness. Abnormal waveforms are then identified by analyzing the distribution of these features. While previous studies focused solely on L2 data to retrack multi-peak and noisy waveforms, here we propose a robust strategy for regenerating abnormal waveforms in the L1B SAR processing chain by eliminating unwanted backscattered power in the waveforms. We introduce the Fully-Focused Sub-Synthetic Aperture Radar technique in the L1B processing chain by dividing the illumination time into smaller stacks consisting of multiple beam looks. Owing to factors including antenna side-lobe gain, the wide antenna footprint, and environmental unevenness, some of the beam looks may exhibit undesired patterns. Our approach addresses this issue by comparing the power of individual stacks with an analytically derived reference waveform and assigning each stack a weight based on its similarity to the reference. This reduces the impact of unwanted components in the final backscattered waveform and enables the regeneration of detected abnormal waveforms for inland waters. 
We applied the proposed methodology to Sentinel-3A, Sentinel-3B, and Sentinel-6MF data over various inland waters and validated our results against in-situ data. The validation shows that the water height time series obtained by retracking our newly generated waveforms (using either SAMOSA+ or OCOG) match the in-situ data significantly better; specifically, in terms of RMSE, the accuracy of the water level time series improved by 60% for the selected case studies.
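The stack-weighting idea, down-weighting sub-stacks that differ from an analytically derived reference waveform before they are combined, can be sketched as follows. This is a simplified stand-in (cosine similarity as the weight) rather than the authors' exact Fully-Focused Sub-SAR processor, and all waveform values are invented.

```python
import numpy as np

def regenerate_waveform(stacks, reference):
    """Weight each sub-stack waveform by its similarity (clipped cosine)
    to an analytically derived reference waveform, then combine, so that
    sub-stacks dominated by side-lobe or off-nadir returns contribute
    less to the regenerated waveform. Simplified sketch only."""
    stacks = np.asarray(stacks, dtype=float)     # (n_substacks, n_gates)
    ref = np.asarray(reference, dtype=float)
    sim = (stacks @ ref) / (np.linalg.norm(stacks, axis=1) * np.linalg.norm(ref))
    w = np.clip(sim, 0.0, None)
    w = w / w.sum()                              # normalised weights
    return w @ stacks                            # weighted combined waveform

ref = np.array([0.0, 0.2, 1.0, 0.6, 0.3])        # idealised single-peak shape
stacks = np.array([[0.0, 0.2, 1.0, 0.6, 0.3],    # clean sub-stack
                   [0.9, 0.1, 0.1, 0.1, 0.9]])   # contaminated sub-stack
wf = regenerate_waveform(stacks, ref)
print(wf)  # closer to the clean shape than a plain mean of the sub-stacks
```

With equal-weight multilooking the contaminated sub-stack would pull power into the wrong range gates; the similarity weighting suppresses it instead.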

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: D.02.09 - POSTER - Enhancement of EO products using advanced multi-instrument and multi-platform synergies

In the past five decades, a very large number of satellite instruments have been launched, providing a vast number of datasets for exploitation. It is now evident that efforts should focus on efficient approaches for interpreting this large amount of diverse satellite data, to advance EO capacities, enhance products, and support new applications. Indeed, no single sensor provides comprehensive information about a targeted object in a complex environment, and there is always a need to inject the missing information from complementary observations, modeling, or other sources of knowledge. In addition, once the instruments have been deployed in space, the quality of the measurements cannot be radically improved; the processing algorithms, however, remain under constant development and can be notably enhanced by fusing information to exploit the joint sensitivity of multi-instrument datasets.

This session focuses on algorithms and approaches exploring the synergies of complementary observations, such as: the synergy of passive imagery with active vertical profiling of the atmosphere and hyperspectral spectrometry; combining observations of different sensitivities obtained in different spectral ranges, or at different temporal or spatial scales; and combining satellite observations with sub-orbital observations and chemical transport model simulations. The presentations are expected to demonstrate the advantages of synergy methods using observations from the Copernicus Sentinels, EarthCARE, MTG, EPS-SG, PACE, and other recent and forthcoming advanced European and international satellite missions.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Integrating Earth Observation and AI for Advanced Geohazard Monitoring: The EASTERN Project

Authors: Osmari Aponte, Selene Bianco, Valeria Corcione, Matteo Maragliano, Giovanna Chiara Rodi, Stefano Marangoni, Lia Morra, Tatiana Tommasi, Antonio Alliegro, Michelangelo Caretto, Rosario Milazzo, Andrea Gatti, Eugenio Realini
Affiliations: Geomatics Research & Development (GReD), Politecnico di Torino, aizoOn Technology Consulting, Politecnico di Milano
Flooding is among the most frequent and devastating natural disasters, leading to significant socioeconomic losses, infrastructure damage, and public health crises. With climate change intensifying the frequency and severity of extreme weather events, innovative approaches to disaster monitoring and mitigation are urgently needed. The EASTERN project addresses these challenges by developing advanced Earth Observation (EO) models that integrate data from Copernicus Sentinel Synthetic Aperture Radar (SAR) satellites and ground-based Global Navigation Satellite System (GNSS) measurements. Leveraging Artificial Intelligence (AI) and deep learning techniques, EASTERN enables comprehensive analysis and monitoring of both direct consequences of flooding, such as landslides, and indirect impacts like the proliferation of disease vectors.
Use Case 1: Landslide Risk Monitoring
Landslides are a catastrophic direct consequence of heavy rainfall and flooding. Early detection and monitoring are vital for reducing risks to infrastructure and human lives. The EASTERN project enhances landslide monitoring by integrating SAR interferometry (InSAR) with GNSS measurements. InSAR provides high spatial resolution Line-of-Sight (LOS) displacement data by processing SAR images from successive satellite passes. However, LOS measurements are inherently one-dimensional and can be affected by atmospheric disturbances and orbital biases. GNSS stations offer precise, continuous three-dimensional (3D) deformation measurements but are spatially limited. EASTERN bridges these limitations by employing a deep learning framework that uses GNSS data to calibrate InSAR measurements. The model is trained on point clouds derived from InSAR data, incorporating spatial coordinates and parameters from SAR imaging geometries. By fusing these datasets, the network outputs a calibrated 3D deformation field. This integration mitigates atmospheric delays in SAR signals using GNSS data as a correction reference and integrates information beyond the GNSS line of sight to reconstruct a more realistic 3D deformation. The resulting high-resolution deformation maps are invaluable for monitoring infrastructure stability and identifying areas susceptible to landslides. Comparisons with non-machine learning techniques, using data from the European Ground Motion Service (EGMS) and GNSS stations in disaster-prone regions, have validated the methodology and enhanced the quality of deformation monitoring.
Use Case 2: Health Risks from Flooding and Disease Vectors
Beyond immediate impacts, flooding creates ideal conditions for the proliferation of disease vectors like mosquitoes, which transmit illnesses such as West Nile virus and malaria. The geographical expansion of these vectors due to climate change necessitates proactive monitoring and mitigation strategies. EASTERN employs SAR imagery combined with computer vision models to predict vector populations and assess associated health risks. Unlike optical sensors limited by weather conditions and vegetation cover, SAR sensors provide reliable data under all conditions and can penetrate vegetation, making them suitable for monitoring flooded and densely vegetated areas. The project integrates environmental predictors (temperature, rainfall, humidity, and wind speed) into its models. SAR imagery is used to derive surface roughness metrics that correlate with wind dynamics, a key factor influencing vector movement. Ground-truth data from entomological traps and meteorological stations are used to train and validate the models. The predictions yield high-resolution risk maps, enabling targeted interventions like vector control measures and strategic placement of monitoring traps. A notable innovation is the use of SAR-derived wind field data to estimate wind speeds over water surfaces, offering insights into vector movement patterns in flooded areas. This reduces reliance on ground-based weather stations, extending applicability to remote and unmonitored regions.
Discussion
The EASTERN project exemplifies the potential of integrating EO data with AI to tackle complex environmental challenges. By combining ground-based GNSS measurements with SAR imagery, the project enhances the spatial and temporal resolution of flood and landslide monitoring, overcoming the limitations inherent in each data source individually. The use of AI facilitates efficient processing of large datasets, uncovering patterns critical for accurate risk assessment. In health risk mitigation, the innovative use of SAR imagery to estimate environmental factors like wind dynamics advances disease vector monitoring. This capability addresses gaps in traditional approaches, enabling reliable monitoring even in regions lacking ground-based infrastructure. EASTERN’s methodology is scalable and adaptable to diverse geographic and environmental conditions. By leveraging freely available Copernicus data, such as Sentinel-1 SAR imagery, the project offers a cost-effective approach to addressing the growing impacts of climate change. Its focus on both direct and indirect consequences of flooding highlights the interconnected nature of environmental, socioeconomic, and health challenges, emphasizing the need for integrated solutions. The project’s contributions align with several Sustainable Development Goals (SDGs), including Goal 3 (Good Health and Well-being), Goal 13 (Climate Action), and Goal 15 (Life on Land). By providing actionable insights for disaster management and public health, EASTERN supports broader objectives of sustainable development and resilience building.
Conclusion
Through the innovative integration of EO data and AI, the EASTERN project showcases the transformative potential of satellite-based monitoring systems in addressing global challenges posed by climate change. It underscores the value of ESA’s Sentinel missions in enabling cutting-edge research and applications that benefit communities worldwide. By advancing the state of the art in landslide and health risk monitoring, EASTERN lays the groundwork for future EO-based solutions that safeguard lives and livelihoods in an increasingly uncertain climate.
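A greatly simplified, non-learned stand-in for the GNSS-to-InSAR calibration of Use Case 1 is to project GNSS east/north/up displacements into the radar line of sight and remove a planar ramp fitted to the InSAR-minus-GNSS misfit at the station locations; the EASTERN model replaces this with a deep network over InSAR point clouds. All geometry, names, and values below are synthetic.

```python
import numpy as np

def calibrate_insar(insar_xy, insar_los, gnss_xy, gnss_enu, los_unit):
    """Least-squares stand-in for the learned calibration: project GNSS
    east/north/up displacements into the line of sight, fit a planar ramp
    (orbital/atmospheric proxy) to the InSAR-minus-GNSS misfit at the
    station locations, and remove it from the whole map."""
    gnss_los = gnss_enu @ los_unit                          # GNSS in LOS
    near = np.linalg.norm(insar_xy[None] - gnss_xy[:, None], axis=2).argmin(axis=1)
    resid = insar_los[near] - gnss_los                      # misfit at stations
    A = np.c_[gnss_xy, np.ones(len(gnss_xy))]               # ramp a*x + b*y + c
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    ramp = np.c_[insar_xy, np.ones(len(insar_xy))] @ coef
    return insar_los - ramp

# Synthetic scene: a Gaussian deformation bowl plus a planar ramp bias.
gx, gy = np.meshgrid(np.arange(10.0), np.arange(10.0))
insar_xy = np.c_[gx.ravel(), gy.ravel()]
los_unit = np.array([0.38, -0.08, 0.92])
los_unit = los_unit / np.linalg.norm(los_unit)
true_los = 0.01 * np.exp(-((insar_xy[:, 0] - 5) ** 2 +
                           (insar_xy[:, 1] - 5) ** 2) / 8.0)
ramp = 0.002 * insar_xy[:, 0] + 0.001 * insar_xy[:, 1] + 0.005
insar_los = true_los + ramp                                 # biased observation

gnss_xy = np.array([[1.0, 1.0], [8.0, 2.0], [2.0, 8.0], [8.0, 8.0]])
near = np.linalg.norm(insar_xy[None] - gnss_xy[:, None], axis=2).argmin(axis=1)
gnss_enu = np.outer(true_los[near], los_unit)               # stations see truth
calibrated = calibrate_insar(insar_xy, insar_los, gnss_xy, gnss_enu, los_unit)
print(np.abs(calibrated - true_los).max())  # ramp removed, residual near zero
```

A plane cannot capture turbulent atmospheric delay or out-of-LOS motion, which is precisely the gap the project's learned 3D calibration targets.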

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: MERCURIO, an Innovative Monitoring System Employing Multisource Data for the Safety of Railway Infrastructure

Authors: Chiara Abbundo, Maria Daniela Graziano, Giancarmine Fasano, Flavia Causa, Diego Di Martire, Valerio Striano, Pasquale Rovito, Andrea Vitiello, Luca Continisio, Mario Pucciarelli, Mariano Focareta
Affiliations: University Of Naples Federico II, Distretto Tecnologico Aerospaziale Della Campania S.C.AR.L., Ente Autonomo Volturno S.R.L., Medinok S.p.A., Mapsat - Telerilevamento Euromediterraneo S.R.L, Sapienza University of Rome
In 2020, the global space economy grew by 4.4% over a revised 2019 total of $428 billion, reaching a total value of $447 billion, an increase of approximately 55% compared with ten years earlier, with a predominance of the commercial segment (“The Space Report 2021 Q2”). In this framework, the performance of the Earth observation market and its services is particularly significant. In 2021, the global turnover of Earth Observation (EO) data and value-added services amounted to €2.8 billion (“EUSPA-2022 EO and GNSS Market Report”). In 2023, global revenues from EO data and value-added services amounted to €3.4 billion, almost half of which was generated by the top three segments: Agriculture; Climate, Environment and Biodiversity; and Urban Development and Cultural Heritage (with projections suggesting significant growth in the Insurance and Finance segment). Within a decade, the overall revenues of the global EO data and value-added services market are projected to reach nearly €6 billion. The global EO data market is expected to see a CAGR of 5% by 2033, resulting in total revenues of almost €1 billion, while the considerably larger EO value-added services market generated around €2.8 billion in 2023. From 2023, the EO value-added services market is expected to experience a CAGR of almost 6%, achieving total revenues of almost €5 billion by 2033 (“EUSPA-2024 EO and GNSS Market Report”). It is within this framework of space value-added services that the rail sector plays a fundamental role, being among those showing ever-growing interest in the use of EO services. As the “EUSPA-2024 EO and GNSS Market Report” states, in the rail sector the major stakeholders include government agencies, infrastructure managers, private railway operators, and manufacturers. 
In this application context, GNSS- and EO-based solutions could help increase the safety of railway infrastructure while reducing the cost of on-site inspections. Various regions across Europe are currently modernising their rail infrastructure, also with a view to sustainability. In this context, the European Green Deal aims to invest almost €90 billion in rail-related infrastructure upgrades. Given the increasing need for ever more readily available measurements and monitoring of events on the line, EO, drones, and in situ sensors could help improve the reliability of rail infrastructure management and support predictive as well as proactive measures to ensure user safety. The Multi-Annual Work Program of Europe's Rail Joint Undertaking (EU-Rail) explicitly calls for the creation of intelligent railway infrastructure asset management systems; in Flagship Area 3, explicit reference is made to the application of remote observation technologies as one of the development directions. It is within this European strategic scenario that the idea and technological objectives of the MERCURIO project (an acronym derived from the Italian “SisteMa di monitoraggio intElligente e multipiattafoRma per la siCURezza dell'Infrastruttura ferrOviaria”) fit. MERCURIO is a research project that aims to contribute to the transformation towards smart railways which, thanks to the intensive use of Artificial Intelligence applied to the processing of heterogeneous data, can create a digital twin of the asset (over the last ten years, growing interest in the application of Artificial Intelligence in the railway sector has resulted in around 50 projects at European and international level, with a 70% share devoted to the development of new technological solutions dedicated to maintenance activities and infrastructure inspection). 
The MERCURIO project will develop an innovative multi-platform monitoring system to support the inspection and maintenance of regional railway infrastructures in Campania (Italy), integrating into a single decision-support tool all the results of an advanced processing chain, leading to a modular and scalable data fusion strategy through the intensive use of Artificial Intelligence techniques. The project addresses the need to provide regional railway infrastructures with a flexible tool that responds to the specific criticalities of the territories they cross, optimising maintenance procedures and thus preventing, in a timely manner, critical situations that may affect passenger safety, as well as the costs of line stoppages and on-site inspections, especially for use cases of interest such as:
• ground movement analysis;
• soil moisture trends in the vicinity of the railway network;
• monitoring of civil works;
• monitoring of tunnels and galleries.
To respond to such heterogeneous problems, MERCURIO will implement a modular approach that integrates heterogeneous data into a single platform: satellite data, distributed sensors, and sensors on drones will be collected in an open architecture, allowing the monitoring system to evolve with technological advances and the needs of particular use cases, with a view to adopting innovative asset monitoring techniques. The acquired data will be analysed through machine learning algorithms with the dual objective of exploring correlations potentially useful for defining risk reduction strategies and identifying areas of greater danger. In particular, several techniques will be used:
• unsupervised machine learning techniques, such as clustering algorithms, which will subdivide the data into groups to highlight similarities and differences across the different data categories. 
Clustering algorithms will make it possible to perform a multivariate analysis by grouping data of different kinds (such as mechanical characterisation of soils, hydrological data, moisture data, vegetation data, etc.) into a number of similarity classes. The different classes are associated with different degrees of criticality and visualised through a colour index (green-yellow-red). Soft clustering algorithms allow the same data point to be assigned to several clusters, with a probability of belonging to each class; this makes them useful for better defining the boundaries between zones of different criticality. The algorithms will be implemented in the Python language. Since clustering analysis is sensitive to the number and type of data, a stability analysis of the solutions will be performed, in which the algorithms are applied to different datasets under different scenarios;
• supervised machine learning techniques, such as artificial neural networks (ANNs), which will allow future hazard scenarios to be defined based on training with the available data; the algorithms will learn to identify areas with varying degrees of risk and assign a warning level for monitoring a railway line. During the training phase, the algorithms will be provided with known cases, each accompanied by the factors and variables characterising the event, and will generate hundreds of mathematical models that differ in the random combination of those variables. Model validation will consist of an iterative process in which the performance of the models in correctly classifying known cases never seen during the training phase is measured. 
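The clustering-to-colour-index step can be illustrated with a toy hard k-means (the project itself plans soft clustering): classes are ranked by mean feature level and mapped to the green/yellow/red criticality index. All feature names and values below are invented.

```python
import numpy as np

def cluster_criticality(X, k=3, n_iter=30):
    """Group heterogeneous monitoring features into k similarity classes
    with a minimal hard k-means (deterministic farthest-point seeding),
    rank the classes by mean feature level, and map them to the
    green/yellow/red criticality index. Hard-assignment sketch only."""
    X = np.asarray(X, dtype=float)
    centers = [X[0]]                                  # farthest-point seeding
    for _ in range(k - 1):
        dmin = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[dmin.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):                           # Lloyd iterations
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    order = centers.mean(axis=1).argsort()            # low -> high severity
    colour = dict(zip(order, ["green", "yellow", "red"]))
    return [colour[lab] for lab in labels]

# Hypothetical per-segment features (displacement rate, moisture anomaly):
# three well-separated regimes of 20 railway segments each.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(m, 0.2, size=(20, 2)) for m in (0.5, 3.0, 6.0)])
colours = cluster_criticality(X)
print(colours[0], colours[20], colours[40])  # green yellow red
```

A soft algorithm, as the abstract describes, would return class membership probabilities instead of hard labels, sharpening the boundaries between criticality zones.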
The integration of Artificial Intelligence with data from the various sources will then make it possible to implement a Geospatial Artificial Intelligence (GeoAI) model, defined as the collection, analysis, and exploitation of images and information to describe, evaluate, and virtually represent the physical characteristics and geographical activities observed in the territory. Depending on the scenarios analysed, the monitoring system will produce data that are extremely heterogeneous in type, frequency, and significance. Through data fusion, the intention is to combine these elements to generate integrated and objective information, so as to understand the evolution of phenomena and allow the best quantitative characterisation of conditions in the analysed areas. The objective is to constitute an information dataset on which the project WebGIS/DSS will be based. The monitoring system set up for the MERCURIO project brings an intrinsic complexity linked to the heterogeneity of the data sources, the data formats produced, and the information contained therein. The information extracted from multi-source data can be correlated directly where it refers to the same quantity, such as temperature or height, or indirectly through indices generated by appropriate processing of the multisource data. Furthermore, correct fusion and interpretation of the data generated by the system require a careful study of the investigated areas to allow the elaboration of a context model covering their geological, environmental, and infrastructural characteristics. The process of data homogenisation, fusion, and correlation will be followed by an information management phase based on a WebGIS/DSS platform. 
In conclusion, by employing the methods and tools described above, the MERCURIO platform can achieve functional and research goals that will allow it to monitor the already identified use cases, which today represent the main safety criticalities for the regional railway infrastructure, and to lead the way towards the employment of such a system in a wider range of applications, possibly also in other sectors of interest.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: EO-based Water Quality and Ship Detection Web Application for Decision-Making in the Baltic States

Authors: Edvinas Tiškus, Dr. Diana Vaičiūtė, Rūta Tiškuvienė, Prof. Dr. Martynas Bučas, Jonas
Affiliations: KU Marine Research Institute
Environmental monitoring is critical for the sustainable management of natural resources and maritime activities. The Baltic States face unique challenges in managing water quality and tracking maritime activities in their coastal and inland waters. To address these challenges, we present a web application based on open-source software, with automated processes, designed to provide real-time insights into water quality parameters and ship detection using Earth Observation (EO) data. The Water Quality application integrates Copernicus Sentinel-2 and Sentinel-3 data to deliver comprehensive information on water quality indicators such as chlorophyll-a, water surface temperature, total suspended matter, and coastal vegetation. Users can select between medium (300 m or 1000 m) and high (10 m or 100 m) spatial resolution datasets tailored to their needs, and a time series feature enables visualization and analysis of recent observations, with the future intention of expanding to historical trends. This streamlined interface allows decision-makers to download and analyze the latest data, ensuring timely and informed responses to environmental issues. In parallel, the ship detection application combines Sentinel-1 Synthetic Aperture Radar (SAR) data with Automatic Identification System (AIS) logs to provide an integrated maritime tracking solution. This application visualizes detected ships and allows users to compare ship shapes and positions with AIS data for cross-validation, which is particularly valuable for applications such as maritime safety, illegal fishing monitoring, and environmental protection. The applications are built on an open-source computing framework, ensuring scalable and efficient processing of large datasets. 
By leveraging open-source tools and automated workflows, the applications reduce operational costs and complexity, making them accessible to governmental institutions, environmental agencies, and other stakeholders across the Baltic States. The primary objectives of this study are to:
• Provide automated water quality monitoring for Baltic coastal and inland waters with high temporal and spatial resolution.
• Develop tools for decision-makers to access and analyze water quality data with minimal technical expertise.
• Deliver an operational ship detection service that integrates EO and AIS data for improved maritime management.
• Validate application performance using stakeholder feedback and in-situ data to ensure reliability and user relevance.
These innovative applications demonstrate the potential of EO-based tools to support environmental and maritime management. By bridging the gap between scientific data and practical applications, they deliver actionable insights that enhance the sustainability of Baltic water bodies and maritime operations. The project was funded by the European Space Agency under the contract “EO BALTIC PLATFORM FOR GOVERNMENTAL SERVICES (EO-BALP)”, No. 4000142702/23/I-NB.
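The SAR-versus-AIS cross-validation can be sketched as a greedy nearest-neighbour matching between detected ship positions and AIS reports; SAR detections left unmatched within a distance threshold are candidate "dark ships". The function name, coordinates, and threshold below are illustrative, not the EO-BALP implementation.

```python
import numpy as np

def match_detections(sar_xy, ais_xy, max_dist):
    """Greedy nearest-neighbour matching of SAR ship detections to AIS
    positions (coordinates assumed already projected to metres). Pairs
    are claimed closest-first and each AIS report is used at most once."""
    sar_xy = np.asarray(sar_xy, dtype=float)
    ais_xy = np.asarray(ais_xy, dtype=float)
    d = np.linalg.norm(sar_xy[:, None] - ais_xy[None], axis=2)
    matches, used = [], set()
    for i in d.min(axis=1).argsort():        # most confident detections first
        j = int(d[i].argmin())
        if d[i, j] <= max_dist and j not in used:
            matches.append((int(i), j))
            used.add(j)
    return matches

sar = [[0.0, 0.0], [500.0, 500.0], [5000.0, 5000.0]]  # SAR detections (m)
ais = [[10.0, -5.0], [480.0, 510.0]]                  # AIS reports (m)
matches = match_detections(sar, ais, max_dist=100.0)
dark = [i for i in range(len(sar)) if i not in {m[0] for m in matches}]
print(matches, dark)  # [(0, 0), (1, 1)] [2] -> detection 2 has no AIS match
```

Greedy matching is a deliberate simplification; an operational system would also compensate for vessel motion between the SAR acquisition and the nearest AIS timestamp.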
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Cloud-Based Validation of HISTARFM for Gap Filling in Sentinel-2 Data

Authors: Andrea Gonzalez-Ramirez, Emma Izquierdo-Verdiguier, Álvaro Moreno-Martínez, Jordi Muñoz-Marí, Nicholas Clinton, Francesco Vuolo, Professor Clement Atzberger, Professor Gustau Camps-Valls
Affiliations: Laboratorio de Telecomunicaciones, CINVESTAV del IPN, Unidad de Guadalajara, Institute of Geomatics, University of Natural Resources and Life Sciences (BOKU), Image Processing Laboratory (IPL), Universitat de València, Google, Inc., Mantle Labs
Optical remote sensing data has immense benefits for land monitoring, climate studies, and agricultural assessments, among other applications. However, cloud contamination poses a significant challenge when using these data. Optical data require clear atmospheric conditions to obtain surface information; otherwise, images have gaps and/or deteriorated signatures due to aerosols, clouds, and shadows, which reduces the usability of satellite temporal series. This problem is especially relevant in the cloudiest regions of the Earth and is often time-dependent. These challenges complicate spatiotemporal feature extraction, which trickles down through the processing chain and leads to land cover classification errors or data interpretation biases. Cloud detection techniques like thresholding and machine learning have been developed to identify cloud positions and mask images, considerably improving algorithms and results (Skakun et al., 2022). However, some limitations persist, requiring additional processing to fill the data gaps. Gap-filling approaches are essential for estimating missing data to maintain continuous temporal information. The HISTARFM algorithm, developed by Moreno-Martinez et al. (2020), provided excellent results by fusing MODIS and Landsat into monthly gap-filled reflectance data at 30 m spatial resolution. Still, the higher spatial detail offered by satellites such as Sentinel-2 is necessary for many applications. Such advancements enable support for different applications in agriculture, biodiversity, and climate studies, thereby improving our understanding of natural processes (Segarra et al., 2020). Therefore, a robust gap-filling algorithm at these spatial resolutions is fundamental. A main limitation of the original algorithm is that it cannot fuse information from more than two different sensors.
This work presents a preliminary validation of the HISTARFM adaptation, which estimates five-day reflectance data at 10 m spatial resolution by merging Sentinel-2 with the HISTARFM database. The first validation results, implemented in Google Earth Engine, show the potential of the new algorithm, providing reflectance RMSE lower than 0.01 for a couple of selected study areas in Europe. These findings emphasize the potential of the adapted HISTARFM algorithm to provide the scientific community with more accurate and continuous high-resolution datasets, enabling advanced research and improved environmental monitoring and land-management decision-making.
References:
Moreno-Martínez, A., et al. "Multispectral high resolution sensor fusion for smoothing and gap-filling in the cloud." Remote Sensing of Environment 247 (2020): 111901.
Skakun, S., et al. "Cloud Mask Intercomparison eXercise (CMIX): An evaluation of cloud masking algorithms for Landsat 8 and Sentinel-2." Remote Sensing of Environment 274 (2022): 112990.
Segarra, J., et al. "Remote sensing for precision agriculture: Sentinel-2 improved features and applications." Agronomy 10.5 (2020): 641.
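The reported RMSE below 0.01 comes from comparing gap-filled reflectance against reference observations. As a toy local illustration of that validation logic (the study itself runs in Google Earth Engine, and the function and array names here are hypothetical), per-band RMSE over cloud-free pixels can be computed as:

```python
import numpy as np

def band_rmse(estimated: np.ndarray, reference: np.ndarray,
              valid_mask: np.ndarray) -> float:
    """RMSE between gap-filled and reference reflectance for one band,
    restricted to valid (e.g. cloud-free, held-out) pixels."""
    diff = (estimated - reference)[valid_mask]
    return float(np.sqrt(np.mean(diff ** 2)))
```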
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: A multi-sensor strategy to observe coastal marine macroplastic litter

Authors: Maurizio Migliaccio, Ferdinando Nunziata, Dr Francesco Serafino, Annamaria Vicari, Dr Andrea Bianco, Mrs Anna Verlanti, Dr Nicola A. Famiglietti, Dr Antonino Memmolo, Prof Andrea Buono
Affiliations: Università Di Napoli Parthenope, Sapienza University of Rome, National Research Council, National Institute of Geophysics and Volcanology, Istituto Superiore per la Protezione e la Ricerca Ambientale
Remotely sensed measurements are of primary importance to fill the gaps between sparse in situ observations and to provide nearly uniform, nearly global coverage over long time scales [1]. The remote sensing of marine debris, in particular plastic pollution, is in its infancy [2]. Despite encouraging results from first experiments in detecting large floating items, overall marine debris monitoring represents a significant technological challenge. Global coverage is particularly important because some artificial marine debris, such as plastic, can travel over long distances and accumulate over time. Satellites can also help to survey remote, hard-to-reach areas, in which direct observations are sparse, expensive and difficult. Because of the huge diversity of composition and geometry of different types of debris, no single sensor can see it all [3]. A number of candidate sensors/platforms have already been tested in past missions, but each sensor has observational limitations related to spectral resolution, spectral range, sensitivity, revisit time, geospatial resolution and coverage. To this end, an integrated monitoring system combining sensors of different capabilities operated from different platforms is needed to advance future operational remote sensing of marine debris, especially in remote areas of the global oceans. Optical remote sensing tools, operated from fixed, ship-borne, air-borne and satellite platforms, have been used to observe floating and slightly submerged plastic litter [4]. Typically, true color composite RGB images are used, and the key requirement for the imaging technique is a fine spatial resolution to guarantee the detection of floating plastic and – hopefully – the characterization of the specific observed debris. The outcome of these preliminary investigations is that the identification of smaller debris is only possible from suborbital missions.
A combination of drone and satellite imagery was recently tested for detecting typical household items as floating plastic targets. In [4] it has been shown that floating plastic can be seen from space as bright objects and that there are key benefits to using very high (0.022 m) geo-spatial resolution imagery from drones to improve geo-referencing of Sentinel-2 satellite optical data, resampled to 10 m resolution. A prospective technique to observe submerged debris is active remote sensing using LIght Detection And Ranging (LIDAR), which measures the backscatter of an onboard laser from the ocean. With respect to radar technologies, in [5] the ability of a fixed-platform X-band marine RAdio Detection And Ranging (RADAR) to identify, discriminate, characterize and track small floating aggregations of marine litter, made up mainly of plastic debris, was demonstrated through controlled experiments where artificial plastic was released at sea under calm sea conditions. Another promising tool could be the Synthetic Aperture Radar (SAR), whose ability to detect surfactants such as biogenic films and oil slicks, and targets such as derelict fishing gear and larger items, by exploiting both intensity and polarimetric features of the backscattered microwave signals, has been widely demonstrated. When dealing with plastic litter, however, the particularities in composition and size of marine debris and their interaction with the background ocean make direct detection with SAR very challenging [6]. Up to now, only a preliminary experiment has been carried out, showing that the analysis of SAR data in combination with surface wind speed and Langmuir cell ocean circulation patterns appears quite promising. For example, VV-polarized SAR images had distinct dark signatures, which could be related to microbiological activity on microplastics [6].
This study investigates the sensitivity of remotely sensed measurements to floating plastic items, hereinafter aggregated plastic, collected using three remote sensing layers:
• the satellite layer, consisting of SAR and multi-spectral optical sensors;
• the aerial layer, consisting of drones equipped with optical cameras and LIDAR;
• the ground layer, consisting of X-band marine RADARs installed on both a fixed mast and a ship.
The microwave signature is discussed through outdoor controlled experiments where aggregated plastic, arranged in modules of different sizes and including different types of plastic, is deployed at sea by boat under different sea state conditions and is observed by the three layers of sensors. The main scientific objectives can be summarized as follows: a) to gain a better understanding of the interaction between microwave and optical frequencies and aggregated plastic under different environmental conditions using different platforms; b) to analyze the ability of space-borne very-high-resolution (VHR) X-band SAR and optical hyperspectral imagery to observe aggregated plastic of different sizes and compositions under different environmental conditions; c) to analyze the ability of measurements collected by X-band ground-based and ship-borne marine RADAR to observe aggregated plastic of different sizes and compositions under different environmental conditions; d) to analyze the ability of drone-based LIDAR to observe aggregated plastic and submerged plastic litter. In this study the lessons learned from two experimental campaigns are discussed. The campaigns were conducted in July 2024 off the Livorno coast (northern Italy) and in a lake in the south of Italy, where floating aggregated plastics were released into the water and observed by an X-band tower-based surveillance radar, a drone (equipped with multi-spectral and LIDAR sensors) and the satellite X-band COSMO-SkyMed Second Generation (CSG) SAR.
First experiments confirm the ability of all the platforms to observe aggregated plastic targets whose size is larger than 1 m × 1 m. A distinguishable macroplastic signature is visible in both co- and cross-polarised X-band SAR imagery. In addition, LIDAR observations also allow distinguishing plastic targets from natural phenomena occurring in both sea and inland waters.
References:
[1] UN Environment Programme – Mediterranean Action Plan – Barcelona Convention: "Programme 1 – Towards a pollution and litter free Mediterranean Sea and coast embracing circular economy".
[2] S. P. Garaba, J. Aitken, B. Slat, H. M. Dierssen, L. Lebreton, O. Zielinski, and J. Reisser, "Sensing ocean plastics with an airborne hyperspectral shortwave infrared imager," Environ. Sci. Technol., vol. 52, no. 20, pp. 11699–11707, 2018.
[3] N. Maximenko, P. Corradi, K. L. Law, et al., "Toward the integrated marine debris observing system," Front. Mar. Sci., vol. 6, no. 447, pp. 1–25, 2019.
[4] T. S. Veenstra and J. H. Churnside, "Airborne sensors for detecting large marine debris at sea," Marine Pollut. Bull., vol. 65, pp. 1–3, 2012.
[5] V. Martinez-Vicente, J. R. Clark, P. Corradi, et al., "Measuring marine plastic debris from space: Initial assessment of observation requirements," Remote Sens., vol. 11, pp. 2443, 2019.
[6] N. Davaasuren, A. Marino, C. Boardman, M. Alparone, F. Nunziata, N. Ackermann, and I. Hajnsek, "Detecting microplastics pollution in world oceans using SAR remote sensing," in Proceedings of 2018 IEEE International Geoscience and Remote Sensing Symposium, pp. 938–941, 2018.
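The common detection principle across the radar layers is flagging returns that stand out against the sea-clutter background. Purely as an illustration of that idea (operational marine radar and SAR detectors use more sophisticated, locally adaptive CFAR schemes; the threshold rule and names here are assumptions), a crude global-threshold detector over an amplitude image might look like:

```python
import numpy as np

def detect_bright_targets(amplitude: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Flag pixels brighter than the scene mean by k standard deviations,
    a simple stand-in for clutter-adaptive (CFAR-style) target detection."""
    mu, sigma = amplitude.mean(), amplitude.std()
    return amplitude > mu + k * sigma
```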
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Deep Learning Downscaling of LST: Enhancing Heat Stress Detection in Amazonian Rainforests

Authors: Anastasia Kozhevnikova, Dr Stuart King, Dr David Galbraith, Professor Emmanuel Gloor, Dr Amanda Lenzi, Ed Mitchard, Professor Patrick Meir
Affiliations: University Of Edinburgh, University of Leeds, Space Intelligence Ltd
Tropical rainforests play a crucial role in global carbon and water cycles, regulating climate and supporting biodiversity. However, these ecosystems face increasing threats from climate change, deforestation, and habitat fragmentation [1]. Fragmented forests are particularly vulnerable to heat stress [2], which can disrupt ecosystem functioning and accelerate biodiversity loss. Accurate assessment of heat stress in these regions is essential for understanding ecological impacts and guiding effective conservation strategies [3]. Currently, monitoring temperature variations in fragmented rainforests is hindered by the limited spatial and temporal resolution of existing satellite datasets [4, 5]. MODIS Aqua, with its daily afternoon (~1:30 pm local time) overpasses over the Amazon rainforest, provides peak daily LST measurements. However, its coarse spatial resolution (1 km) is insufficient to detect fine-scale temperature variations that are critical for identifying localised heat stress hotspots. In contrast, Landsat 8 offers a higher spatial resolution of 30 m (resampled from 100 m) but lacks the temporal frequency to capture peak temperatures, as its morning overpasses (~10 am) miss critical afternoon thermal extremes. Additionally, its 16-day revisit cycle does not capture daily fluctuations in temperature. These temporal and spatial trade-offs highlight the need for integrating complementary LST datasets through synergistic data fusion [6]. To address this challenge, we developed a novel approach using Convolutional Neural Networks (CNNs) to downscale MODIS Aqua LST data from 1 km to 100 m resolution. Our method employs high-resolution Landsat and Sentinel data to reconstruct land surface temperature extremes, essential for monitoring heat stress. To train the CNN, we generated synthetic training datasets that simulate coarse-resolution conditions from high-resolution imagery [7], allowing the model to learn intricate thermal patterns.
Additionally, key biophysical covariates were incorporated to account for the complex thermal variability of tropical rainforest canopies [8]. We expect the CNN-based algorithm to outperform traditional models like Random Forests, particularly in detecting extreme temperature variations [9]. Preliminary results suggest that this method significantly enhances spatial resolution, revealing localised heat stress hotspots in fragmented Amazonian forests that remain undetectable in coarser datasets. The resulting high-resolution thermal maps offer a better understanding of spatial heat distribution patterns, providing a more precise identification of vulnerable areas. To validate the accuracy and reliability of these maps, we plan to ground-truth them using UAV data collected in Brazil during concurrent MODIS overpasses. This will allow us to assess inter-scale variability in thermal imagery and ensure consistency across different resolutions. Furthermore, we aim to link these high-resolution thermal datasets with tree mortality patterns across the Amazon [10], as well as drought and precipitation records [11]. By comparing temperature extremes with optical indicators of tree browning [12], we will explore the complex relationships between heat stress and rainforest health. This methodology advances the monitoring of heat stress in the Amazon rainforest and provides a scalable framework for downscaling LST data in other complex ecosystems. The results provide insights vital for informing targeted conservation strategies and supporting sustainable forest management practices.
References:
[1] Parsons, L. A., Jung, J., Masuda, Y. J., Zeppetello, L. R. V., Wolff, N. H., Kroeger, T., ... & Spector, J. T. (2021). Tropical deforestation accelerates local warming and loss of safe outdoor working hours. One Earth, 4(12), 1730-1740.
[2] Tuff, K. T., Tuff, T., & Davies, K. F. (2016). A framework for integrating thermal biology into fragmentation research. Ecology Letters, 19(4), 361-374.
[3] Alves de Oliveira, B. F., Bottino, M. J., Nobre, P., & Nobre, C. A. (2021). Deforestation and climate change are projected to increase heat stress risk in the Brazilian Amazon. Communications Earth & Environment, 2(1), 207.
[4] Inamdar, A. K., French, A., Hook, S., Vaughan, G., & Luckett, W. (2008). Land surface temperature retrieval at high spatial and temporal resolutions over the southwestern United States. Journal of Geophysical Research: Atmospheres, 113(D7).
[5] Zhan, W., Chen, Y., Zhou, J., Wang, J., Liu, W., Voogt, J., ... & Li, J. (2013). Disaggregation of remotely sensed land surface temperature: Literature survey, taxonomy, issues, and caveats. Remote Sensing of Environment, 131, 119-139.
[6] Wu, P., Shen, H., Zhang, L., & Göttsche, F. M. (2015). Integrated fusion of multi-scale polar-orbiting and geostationary satellite observations for the mapping of high spatial and temporal resolution land surface temperature. Remote Sensing of Environment, 156, 169-181.
[7] Xiao, Y., Yuan, Q., Jiang, K., He, J., Wang, Y., & Zhang, L. (2023). From degrade to upgrade: Learning a self-supervised degradation guided adaptive network for blind remote sensing image super-resolution. Information Fusion, 96, 297-311.
[8] Wang, S., Luo, Y., Li, M., Yang, K., Liu, Q., & Li, X. (2022). A Taylor expansion algorithm for spatial downscaling of MODIS land surface temperature. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-17.
[9] Nguyen, B. M., Tian, G., Vo, M. T., Michel, A., Corpetti, T., & Granero-Belinchon, C. (2022). Convolutional neural network modelling for MODIS land surface temperature super-resolution. In 2022 30th European Signal Processing Conference (EUSIPCO) (pp. 1806-1810). IEEE.
[10] Esquivel-Muelbert, A., Phillips, O. L., Brienen, R. J., Fauset, S., Sullivan, M. J., Baker, T. R., ... & Galbraith, D. (2020). Tree mode of death and mortality risk factors across Amazon forests. Nature Communications, 11(1), 5515.
[11] Docherty, E. M., Gloor, E., Sponchiado, D., Gilpin, M., Pinto, C. A., Junior, H. M., ... & Galbraith, D. (2023). Long-term drought effects on the thermal sensitivity of Amazon forest trees. Plant, Cell & Environment, 46(1), 185-198.
[12] Bucha, T., Pavlenda, P., Konôpka, B., Tomaštík, J., Chudá, J., & Surový, P. (2024). Satellite Assessment of Forest Health in Drought Conditions: A Novel Approach Combining Defoliation and Discolouration. Forests, 15(9), 1567.
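The abstract's training strategy of simulating coarse-resolution conditions from high-resolution imagery can be sketched with a simple block-averaging degradation; this is an illustrative assumption, not the authors' exact degradation model, and the function name and scale factor are hypothetical:

```python
import numpy as np

def make_training_pair(fine_lst: np.ndarray, factor: int = 10):
    """Simulate a coarse (MODIS-like) LST tile from a fine (Landsat-like)
    one by block averaging. A downscaling CNN is then trained to learn the
    inverse mapping, coarse -> fine."""
    h, w = fine_lst.shape
    assert h % factor == 0 and w % factor == 0, "tile must divide evenly"
    coarse = fine_lst.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return coarse, fine_lst  # (input, target) pair for supervised training
```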
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Estimating High Resolution Building Height at a Country-Level Using Sentinel-2 and Sentinel-1 Mission Data

Authors: Riccardo Contu, Mikolaj Czerkawski, Valerio
Affiliations: ESA - ESRIN
Building counting and mapping are essential for estimating and monitoring population counts, providing a foundation for accurately mapping where people live and how they are distributed. Currently, Italy lacks a building map that includes both building perimeters and heights derived from satellite imagery to support population estimates, urban planning, disaster management, and sustainable development efforts. Mapping buildings automatically with remote sensing generally requires high-resolution imagery, which is costly and often limited. Recent studies have demonstrated the potential of using freely available, lower-resolution imagery like Sentinel-2 by applying super-resolution techniques (from 10 m to 2.5 m) to produce detailed building datasets [3]. This approach helps bridge the gap between the need for precise data and the limitations of high-resolution imagery. However, super-resolution often requires high computational power and results in hard-to-interpret outputs, and some of its known issues can hinder the reliability of the produced high-resolution imagery. The goal of this study is to obtain a height map of built-up environments, focusing on urban areas, using free imagery and avoiding any explicit super-resolution step. It is demonstrated that it is feasible to generate a 3D map at regular intervals using optical-multispectral and SAR images from Sentinel-1 and Sentinel-2, validated against high-resolution aerial imagery (1 m resolution) [1-2]. This work can also benefit the IRIDE mission, which will be launched in a few years. The IRIDE (Italian Radar Interferometer for Digital Earth Observation) mission is an Italian-led Earth observation program designed to create a satellite constellation for frequent, detailed, real-time monitoring of the Earth's surface.
IRIDE will provide higher-resolution products than Sentinel-1 and Sentinel-2, facilitating tasks such as building segmentation, individual building counting, and height prediction. Mapping the height and footprint of Italy's buildings can also enable relevant downstream tasks such as 3D change estimation from bitemporal images, population estimation based on building size, and the monitoring of green and built-up areas. This work opens new possibilities for building monitoring in Italy - as is already possible for other countries [1] - and, soon, for using IRIDE imagery for tasks previously achievable only with airborne high-resolution imagery, potentially transforming how building and urban changes are monitored nationwide.
[1] Yadav, Ritu, Nascetti, Andrea, and Ban, Yifang. How High Are We? Large-Scale Building Height Estimation Using Sentinel-1 SAR and Sentinel-2 MSI Time Series. Available at SSRN: http://dx.doi.org/10.2139/ssrn.4762421
[2] Sirko, W., Brempong, E. A., Marcos, J. T., Annkah, A., Korme, A., Hassen, M. A., ... & Quinn, J. (2023). High-Resolution Building and Road Detection from Sentinel-2. arXiv preprint arXiv:2310.11622.
[3] Cao, Y., & Weng, Q. (2024). A deep learning-based super-resolution method for building height estimation at 2.5 m spatial resolution in the Northern Hemisphere. Remote Sensing of Environment, 310, 114241.
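The cited approaches [1-2] use deep learning for height regression. As a deliberately minimal stand-in for that idea (not the study's method; all names and the feature choice are hypothetical), a pixel-wise linear least-squares baseline on stacked Sentinel-1/Sentinel-2 features could be fitted like this:

```python
import numpy as np

def fit_height_model(features: np.ndarray, heights: np.ndarray) -> np.ndarray:
    """Least-squares linear baseline mapping per-pixel features (e.g. S1
    VV/VH backscatter plus selected S2 bands) to reference building height.
    features: (n_pixels, n_features); heights: (n_pixels,)."""
    X = np.column_stack([features, np.ones(len(features))])  # append bias term
    coef, *_ = np.linalg.lstsq(X, heights, rcond=None)
    return coef

def predict_height(features: np.ndarray, coef: np.ndarray) -> np.ndarray:
    X = np.column_stack([features, np.ones(len(features))])
    return X @ coef
```

In practice such a linear baseline mainly serves as a sanity check against which CNN-based height estimators are compared.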
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Enhancing PM Estimation Through Multi-Platform Synergies: a Preparatory Study for the MAIA Mission

Authors: Marica De Lucia, Mariella Aquilino, Silvana Fuina, Cristina Tarantino, Matteo Picchiani, Giovanni Rum, Simona Zoffoli, Alfonso Monaco, Ester Pantaleo, Alessandro Fania, Roberto Cilli, Roberto Bellotti, Angela Morabito, Ilenia Schipa, Vincenzo Campanaro, Francesca Intini, Alessandra Nocioni, Annalisa Tanzarella, Lorenzo Angiuli, Roberto Primerano, Maria Patrizia Adamo
Affiliations: Institute of Atmospheric Pollution Research-⁠National Research Council of Italy (IIA-CNR), Italian Space Agency, Interateneo Physics Department "M. Merlin", University of Bari, National Institute for Nuclear Physics, Regional Environmental Protection Agency, Department of Biology, University of Naples "Federico II"
Urban particulate matter (PM) pollution represents a significant global public health concern, with 55% of the world's population exposed to its effects in urban areas. Despite the importance of PM monitoring for public health and the goals of the United Nations 2030 Agenda, current ground-based monitoring networks lack sufficient spatial coverage to assess population exposure at the intra-urban scale. In the framework of the agreement between NASA and the Italian Space Agency (ASI), signed in January 2023, the MAIA/PLATiNO-2 mission will provide an innovative tool aimed at linking exposure to different types of airborne particulate matter with adverse human health outcomes. The PLATiNO-2 satellite, provided by ASI, will carry the Multi-Angle Imager for Aerosols (MAIA) instrument, selected in 2016 as part of NASA's Earth Venture Instrument (EVI) program, with launch targeted for 2025. The MAIA mission will make frequent observations available (up to 3-4 times per week over Primary Target Areas) for selected areas of interest around the world, with a pixel size of about 1 km. Over such areas, the multi-angular and multispectral measurements of the MAIA instrument will be used to retrieve atmospheric optical depth and aerosol microphysical properties. These measurements, combined with near-surface data and chemical transport model computations, will be elaborated to provide maps of mass concentrations for PM10 and PM2.5. Speciated mass concentrations (for the nitrate, organic carbon, elemental carbon, sulfate, and dust components) will also be included for the latter parameter. In preparation for this upcoming mission, the APEMAIA project (https://iia.cnr.it/project/apemaia/) is developing a multi-modular system for estimating PM concentrations at sub-kilometre resolution (300 m), designed to later incorporate simulated MAIA data when they become available.
In this framework, the system integrates multi-source and multi-resolution satellite data (PRISMA, Sentinel-2, Sentinel-3, MODIS) with urban morphological information and meteorological variables provided by the Environmental Protection Agencies (ARPA Puglia and Lazio) through the WRF model. The methodology, which is based on ensemble learning techniques, will be tested on the urban area of Bari (the calibration site) and the cities of Rome and Taranto (the secondary study areas) during the period 2019-2022. The approach is based on a feature selection process utilizing SHAP (SHapley Additive exPlanations) values, which identified AOD and Planetary Boundary Layer Height (PBLH) as the most influential variables in the prediction. Of the ensemble learning techniques tested, XGBoost was found to be the most effective in capturing the complexity of non-linear relationships between variables, achieving high predictive performance with a coefficient of determination R² of 0.75 and an RMSE (root mean square error) of 5.6 µg/m³ for PM10 and an R² of 0.76 and an RMSE of 3.2 µg/m³ for PM2.5. The robustness of the model was validated using a dual strategy of k-fold cross-validation and leave-one-out, with the latter specifically employed to assess the spatial generalization capability. These results demonstrate the potential of the proposed methodology to produce accurate PM concentration maps at the intra-urban scale, contributing to the development of improved air quality monitoring systems in preparation for the MAIA mission.
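The k-fold part of the validation strategy described above can be sketched generically as below. This is not the study's code (which pairs XGBoost with SHAP-based feature selection and requires those libraries); it is only an illustration of cross-validated RMSE with pluggable, hypothetical `fit`/`predict` callables:

```python
import numpy as np

def kfold_rmse(X, y, fit, predict, k=5, seed=0):
    """k-fold cross-validated RMSE for a regression model.
    `fit(X_train, y_train)` returns a model object;
    `predict(model, X_test)` returns predictions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        pred = predict(model, X[test])
        errs.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(errs))
```

The leave-one-out variant used in the study for spatial generalization would instead hold out one site (or station) at a time.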
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Lessons Learned from COAST-VC Satellite and GOOS In Situ Co-Design Communities

Authors: Rashmi Sharma, Aurelien Carboniere, Paul DiGiacomo, Erin Urquhart, Merrie Beth Neely, Jérôme Benveniste, SeungHyun Son, Andrea Mccurdy
Affiliations: COL/UCAR, COSPAR, ISRO Space Applications Centre, CNES, NOAA, CISESS, University of Maryland, Global Science and Technology, NASA Headquarters
The benefits realized by integrating remote sensing and in situ design, implementation, and data product and service development are taking on increased significance across the ocean observing value chain. A keystone Program focused on this endeavor is the UN Ocean Decade Program Ocean Observing Co-design, hosted within the Global Ocean Observing System (GOOS). Co-design is working with six Exemplars: Ocean Carbon, Boundary Currents, Tropical Cyclones, Marine Heatwaves, Storm Surge, and Marine Life, to better realize integration and coordination. To accomplish this goal, the Program is also working in alignment with the CEOS COAST-VC to better determine their requirements for integration and product evolution as they relate to working groups such as WGCapD (Capacity Development), WGClimate, and WGDisasters. Of particular interest and importance is interoperability among satellite and in situ sensors. There is also a need to focus on which data sources and funded products are suitable for which applications in the observing system user community. This session will also explore more generally the important development of coordination within the ocean observing community. Collaborative activities will address the global need to identify and coordinate diverse and often disparate enterprise capacity, and to identify data requirements and actions for adaptation, participation, and mitigation. Co-Design Exemplar pilots, activities, and outcomes will reflect the value of this integrated, co-designed observing system, and demonstrate its enhanced importance and societal relevance.
Discussion will also include an update on integration efforts such as GOOS Co-design, the Exemplars, and funded modeling and end-user interaction; global projects such as the Observing Air-Sea Interactions Strategy (OASIS), presently maturing from concept to pilot phase, and the Tropical Pacific Observing System (TPOS), maturing from pilot to operational; and a review of GOOS and GOOS Regional Alliance (GRA) activities at various stages of maturity, for example the US-maintained Integrated Ocean Observing System (IOOS), as well as activities involving Small Island Developing States (SIDS) and the global south. In summary, this session will explore examples and case studies of projects and programs where satellite and in situ integration is occurring. The session will encourage discussion of how these communities can more routinely engage, with the goal of better exploiting the benefits of both in situ and remote platforms and the complementarity of their contributions, as well as new synergies to be realized through enhanced user engagement and system design.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Gap-filled LAI series for crop monitoring at 20 m resolution by fusing optical and SAR data based on CNN

Authors: Juan Li, Raffaele Gaetano, Dino Ienco, Marie Weiss
Affiliations: Inrae, Cirad, INRIA, Univ. Montpellier, Beijing Normal University
Introduction
The Leaf Area Index (LAI) is one of the critical parameters of canopy structure, defined as the total one-sided area of green leaves per unit of horizontal ground area (Chen and Black, 1992). It can reflect the growth and density of vegetation, water evaporation, and biomass accumulation, thus holding significant application value in crop monitoring (Desloires et al., 2024; Gaso et al., 2021). Global LAI products are commonly generated from surface reflectance data collected by optical sensors, leveraging the extensive spatial coverage provided by current Earth observation missions to enable large-scale crop monitoring. Currently, several global time-series LAI products have been released, such as GEOV, MODIS, VIIRS and GLASS. However, they typically suffer from two main problems. Firstly, the spatial and temporal resolutions are limited to 250 m and 8 days, respectively. This hampers the precise detection of localized changes in smaller cropland patterns and continuous variations in crop growth, thereby reducing crop monitoring effectiveness (Dahiya et al., 2024). Secondly, sensor failure and adverse weather conditions, particularly cloud contamination, can lead to gaps in optical sensor acquisitions, negatively affecting the retrieval of gap-free LAI data both spatially and temporally. The first issue has been partially alleviated with recent Earth observation missions like Copernicus Sentinel-2, which provides a high spatial resolution of 10-20 m and a high temporal revisit frequency of around 5 days. Unfortunately, the second issue is more prominent for high-spatial-resolution optical sensor acquisitions. Due to the spatial-temporal trade-off, these sensors generally have a lower temporal revisit frequency compared to coarse-spatial-resolution sensors with a daily revisit frequency, making their time series more sensitive to gaps.
According to previous studies, there are two main strategies to cope with gaps in optical sensor acquisitions. The first focuses on gap-filling algorithms for optical time series from a single sensor (Fang et al., 2008). These algorithms are effective for producing gap-free data; however, when cloud-free acquisitions from a single sensor are infrequent, resulting in long gaps, they may struggle to reconstruct the time series accurately. The second strategy combines multi-scale optical time series from remote sensing satellites collecting imagery at different spatial resolutions. This can provide temporally and spatially denser data, but it still suffers from spatiotemporal discontinuities. Particularly in tropical regions, high and persistent cloudiness may make all optical sensor data unavailable simultaneously, leading to long gaps. Moreover, for data with different spatial resolutions, the higher-resolution data are usually resampled to the lower resolution to ensure spatial consistency. For instance, in (Claverie et al., 2018), Sentinel-2 MSI data are upscaled to 30 m to facilitate integration with Landsat 8 OLI data, sacrificing the native spatial resolution of Sentinel-2 in exchange for a denser time series. Unlike optical data, Synthetic Aperture Radar (SAR) data allow all-weather observation thanks to the long wavelength of radar signals (Hu et al., 2023; Hamidi et al., 2024). However, the signal characteristics of SAR data differ significantly from those of optical data, so it is extremely difficult to retrieve vegetation variables from SAR data alone. Fortunately, SAR data are sensitive to vegetation canopy structure and dielectric properties, allowing them to describe vegetation growth processes (Bao et al., 2023). This makes SAR data an important complement to optical data for crop growth monitoring.
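As a concrete point of reference for the single-sensor gap-filling strategy discussed above, the sketch below shows the simplest such baseline: per-pixel linear interpolation across cloud gaps in an LAI time series. This is an illustrative example only, not the method of any of the cited studies; all dates and LAI values are invented.

```python
import numpy as np

def fill_gaps_linear(dates, lai, valid):
    """Fill cloud-induced gaps in a single-pixel LAI time series by
    linear interpolation between the nearest valid acquisitions.

    dates : 1-D array of acquisition times (e.g. day of year)
    lai   : 1-D array of LAI values (unreliable where ~valid)
    valid : boolean mask, True where the optical retrieval is cloud-free
    """
    dates = np.asarray(dates, dtype=float)
    lai = np.asarray(lai, dtype=float)
    # np.interp evaluates a piecewise-linear fit through the valid samples
    return np.interp(dates, dates[valid], lai[valid])

# Toy example: a 5-day revisit series with a cloudy stretch mid-season
dates = np.arange(0, 40, 5)                     # days 0, 5, ..., 35
lai   = np.array([0.5, 1.0, np.nan, np.nan, 2.5, 3.0, 3.2, 3.1])
valid = ~np.isnan(lai)
filled = fill_gaps_linear(dates, lai, valid)
# The two cloudy dates (days 10 and 15) are filled along the 1.0 -> 2.5 segment
```

As the abstract notes, such single-sensor interpolation degrades quickly as gaps lengthen, which motivates the multi-sensor fusion approaches discussed next.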
The fusion of optical and all-weather SAR data to produce gap-free remote sensing data has become a hot topic and has been shown to improve the accuracy of yield estimation for crop monitoring. The publicly accessible Sentinel-2 optical (S2) and Sentinel-1 SAR (S1) data facilitate this approach: both offer high spatial resolution and high temporal frequency, and the 10 m Sentinel-1 SAR data can maintain spatial consistency with the Sentinel-2 optical data. Combining Sentinel-2 and Sentinel-1 data therefore offers a novel way to generate high-spatial-resolution, gap-free LAI time series for crop monitoring. More recently, Pipia et al. (2019) and Mastro et al. (2023) both proposed multi-output Gaussian Process Regression (GPR) to fuse Sentinel-2 and Sentinel-1 data for filling gaps in LAI time series, while Zhao et al. (2023) used deep learning, employing a bidirectional LSTM (BiLSTM) network to combine Sentinel-2 and Sentinel-1 time series for retrieving gap-free winter wheat LAI. The GPR method is based on conditional probability statistics, so it outputs not only predicted values but also uncertainty information associated with the estimates. However, GPR has limitations for large-scale regional analysis due to its slow inversion efficiency. In contrast, deep learning overcomes this issue effectively, thanks to its high computational efficiency and powerful fitting capability. In this work, a novel deep-learning-based framework, building on previous experience in extracting vegetation indices from Sentinel data, is proposed to combine optical and SAR data and produce high-spatial-resolution, gap-free LAI time series for crop monitoring.
An operational workflow is then presented to automatically process optical and SAR time series into gap-free LAI time series estimates. To take spatial information into account, a Convolutional Neural Network (CNN) model is used to estimate LAI and extract features from multi-temporal optical and SAR data simultaneously. In summary, the main contributions of this work are: i) a novel multi-source strategy combining optical and SAR time series with an operational deep learning workflow to estimate gap-free LAI time series; ii) a comprehensive validation of the proposed framework against standard gap-filling approaches commonly employed in remote sensing, across representative Köppen climate zones and gap scenarios; iii) the ability to generate high-spatial-resolution, gap-free LAI time series for crop monitoring.
References:
(Bao et al., 2023) Xin Bao, Rui Zhang, Jichao Lv, Renzhe Wu, Hongsheng Zhang, Jie Chen, Bo Zhang, Xiaoying Ouyang, and Guoxiang Liu. Vegetation descriptors from Sentinel-1 SAR data for crop growth monitoring. ISPRS Journal of Photogrammetry and Remote Sensing, 203:86-114, 2023.
(Chen and Black, 1992) Jing M. Chen and T. A. Black. Defining leaf area index for non-flat leaves. Plant, Cell & Environment, 15(4):421-429, 1992.
(Claverie et al., 2018) Martin Claverie, Junchang Ju, Jeffrey G. Masek, Jennifer L. Dungan, Eric F. Vermote, Jean-Claude Roger, Sergii V. Skakun, and Christopher Justice. The Harmonized Landsat and Sentinel-2 surface reflectance data set. Remote Sensing of Environment, 219:145-161, 2018.
(Dahiya et al., 2024) Neelam Dahiya, Gurwinder Singh, Dileep Kumar Gupta, Kleomenis Kalogeropoulos, Spyridon E. Detsikas, George P. Petropoulos, Sartajvir Singh, and Vishakha Sood. A novel deep learning change detection approach for estimating spatiotemporal crop field variations from Sentinel-2 imagery. Remote Sensing Applications: Society and Environment, page 101259, 2024.
(Desloires et al., 2024) Johann Desloires, Dino Ienco, and Antoine Botrel. Early season forecasting of corn yield at field level from multi-source satellite time series data. Remote Sensing, 16(9):1573, 2024.
(Fang et al., 2008) Hongliang Fang, Shunlin Liang, John R. Townshend, and Robert E. Dickinson. Spatially and temporally continuous LAI data sets based on an integrated filtering method: Examples from North America. Remote Sensing of Environment, 112(1):75-93, 2008.
(Gaso et al., 2021) Deborah V. Gaso, Allard de Wit, Andres G. Berger, and Lammert Kooistra. Predicting within-field soybean yield variability by coupling Sentinel-2 leaf area index with a crop growth model. Agricultural and Forest Meteorology, 308:108553, 2021.
(Hamidi et al., 2024) Masoumeh Hamidi, Saeid Homayouni, Abdolreza Safari, and Hadiseh Hasani. Deep learning based crop-type mapping using SAR and optical data fusion. International Journal of Applied Earth Observation and Geoinformation, 129:103860, 2024.
(Hu et al., 2023) Xikun Hu, Puzhao Zhang, Yifang Ban, and Maryam Rahnemoonfar. GAN-based SAR and optical image translation for wildfire impact assessment. Remote Sensing of Environment, 2023.
(Mastro et al., 2023) Pietro Mastro, Margherita De Peppo, Alberto Crema, Mirco Boschetti, and Antonio Pepe. Statistical characterization and exploitation of synthetic aperture radar vegetation indexes for the generation of leaf area index time series. International Journal of Applied Earth Observation and Geoinformation, 124:103498, 2023.
(Pipia et al., 2019) Luca Pipia, Jordi Muñoz-Marí, Eatidal Amin, Santiago Belda, Gustau Camps-Valls, and Jochem Verrelst. Fusing optical and SAR time series for LAI gap filling with multioutput Gaussian processes. Remote Sensing of Environment, 235:111452, 2019.
(Zhao et al., 2023) Weiying Zhao, F. Yin, H. Ma, Q. Wu, J. Gomez-Dans, and P. Lewis. Combining multitemporal optical and SAR data for LAI imputation with BiLSTM network. arXiv, 2023.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Integrating Multi-Scale Earth Observation for Monitoring of Mining Waste: Insights from the MOSMIN Project

Authors: Moritz Kirsch, Dr. Sandra Lorenz, Dr. René Booysen, Dr. Cornelius Quigley, Dr. Verónica Rodríguez Tribaldos, Dr. Christopher Wollin, Dr. Christian Haberland, Dr. Oriol Monserrat, Dr. Calin Baciu, Dr. Farid Djeddaoui, Dr. Diana Comte, Aaron Moya, Dr. Richard Gloaguen
Affiliations: Helmholtz-Zentrum Dresden-Rossendorf (HZDR), GeoForschungsZentrum Potsdam (GFZ), Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Babes-Bolyai University, University of Sciences and Technology Houari Boumediene, FCFM-AMTC, Universidad de Chile
Mining operations generate significant volumes of waste from extraction (waste rock) and processing (tailings), which pose substantial geotechnical, environmental, and social risks. The EU-Horizon-funded "Multiscale observation services for mining-related deposits" (MOSMIN) project addresses these challenges by developing integrated Earth Observation (EO)-based services that leverage satellite, uncrewed aerial vehicles (UAV), and in-situ data. The project aims to deliver innovative multi-scale, multi-source solutions tailored to enhance environmental and geotechnical monitoring, in alignment with environmental, social, and governance (ESG) objectives in the raw materials industry. To achieve this, MOSMIN is developing a series of case studies across a diverse set of mining sites to showcase the application and value of these services in real-world contexts. One case study focuses on acid mine drainage (AMD) monitoring from source to sink. At the satellite scale, compositional mapping of waste rocks using multispectral and hyperspectral imaging identifies domains of acid generators and natural buffers. Temporal analyses of vegetation health and water quality provide insight into downstream impacts. These satellite observations are complemented by UAV-based hyperspectral imaging and ground truthing, which deliver detailed estimates of dissolved metal concentrations, key physicochemical properties in water bodies, and iron minerals such as jarosite and goethite in sediments. These results highlight the potential for integrated EO services to track AMD processes and their environmental impacts. Another case study involves geotechnical stability monitoring, combining surface and subsurface measurements. 
Corner-reflector-enhanced interferometric synthetic aperture radar (InSAR) detects surface displacement across tailings facilities with high precision, while ground-based fibre-optic sensing installed along tailings dams provides subsurface dynamic strain measurements from which high-resolution structural images of the dam over time can be derived. This combined approach produces critical insights into structural stability, enabling early detection of potential instabilities and supporting the development of robust monitoring services for tailings facilities. Tailings moisture monitoring represents another critical service under development. By integrating time-series data from SAR (Sentinel-1) and optical satellites (Sentinel-2), the project tracks surface moisture dynamics in tailings facilities. These observations are validated using 2D electrical resistivity tomography (ERT) profiles, which provide detailed subsurface moisture distribution data. Additionally, the service includes mapping the distribution of efflorescent surface salts, which play a critical role in influencing evaporation rates and the structural stability of the dam, further supporting proactive hydrological management. The MOSMIN case studies illustrate the versatility and impact of integrated EO services tailored to mining sector challenges. By combining complementary EO, UAV, and ground-based observations, MOSMIN provides actionable insights that address diverse geotechnical and environmental issues to enhance sustainability in mining operations.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: A.09.02 - POSTER - Dynamic Antarctica: From the coastal margins to the deep interior

Since 1992 mass loss from the Antarctic Ice Sheet has contributed approximately 8 mm to global mean sea level rise (IMBIE; Otosaka et al., 2023). Mass loss from the Antarctic Ice Sheet not only increases global mean sea level, but also impacts ocean circulation, marine biology and Antarctic sea ice. Current changes in Antarctica have been driven by both ocean and atmospheric melting of ice shelves and grounded margins. Intrusions of warm Circumpolar Deep Water onto the continental shelf around Antarctica are the primary driver of ice-shelf basal melting, while atmospheric warming and extreme events have increased supraglacial melting on ice shelves and in coastal regions, leading to firn densification and increased potential for supraglacial ponding and ice-shelf collapse.

Changes at the coastal margins, such as ice-shelf thinning, weakening and collapse, reduce the ability of ice shelves to provide buttressing to inland grounded ice. Therefore, the impact of these changes can propagate inland, with the potential to destabilize large areas of the ice sheet. Meanwhile, the dynamics and stability of the grounded ice sheet are controlled by multiple factors, including the bed topography, basal friction, subglacial hydrology, geothermal heat flux and englacial temperature.

It is increasingly possible to monitor change and its impacts, and to improve understanding of these processes, thanks to advanced satellite Earth observations and improvements in data processing, modelling and AI/ML. This session will highlight recent advances in our understanding of the dynamics of the Antarctic Ice Sheet, including:
- Interactions between the atmosphere and ice-sheet surface: surface mass balance, firn evolution, supraglacial hydrology, and the impact of extreme events.
- Quantifying ice-shelf basal melting, its spatial distribution, and drivers.
- The dynamics and stability of inland ice: bed topography, basal friction, subglacial hydrology, geothermal heat flux.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Ice Velocity and Discharge from SAR Satellite Missions: Current Status and Emerging Opportunities

Authors: Jan Wuite, Thomas Nagler, Markus Hetzenecker, Tanja Rahim, Helmut Rott
Affiliations: ENVEO - Environmental Earth Observation IT GmbH
As the flagship SAR mission of the Copernicus program, Sentinel-1 (S1) has demonstrated remarkable success across various applications for over a decade. Most notably, S1 has been a cornerstone for monitoring glaciers and ice sheets since 2014. One of the mission's key strengths lies in its systematic acquisition strategy for the polar regions, which ensures continuous coverage of the Greenland Ice Sheet and the Antarctic Ice Sheet margins, as well as other polar ice caps. This has enabled operational monitoring of key parameters such as ice velocity and discharge, capabilities that were previously limited to selected glaciers and specific dedicated campaigns. The loss of S1-B in December 2021 has posed challenges, which are expected to be solved with the launch of S1-C, but it has also encouraged efforts to enhance synergies with other satellite systems to maximize the utility of S1 data. This includes leveraging new missions like the L-band SAR satellite constellation SAOCOM. These efforts not only address current needs but also serve as preparation for new and upcoming missions such as NISAR and ROSE-L. We will present an overview of the highlights and recent developments of the work performed within the framework of the ESA CCI and Polar+ programs and the EU Copernicus Climate Change (C3S) ice sheets service regarding ice velocity and ice discharge monitoring using SAR EO data. This includes the development of advanced ice velocity products by integrating various techniques, such as InSAR and offset tracking, and utilizing data from multiple sensors operating at different frequency bands, including C-band and L-band. The velocity maps, combined with ice thickness data, form the basis for deriving and studying changes in ice discharge, freshwater fluxes and ice sheet mass balance.
Our results show that the combination of L-band and S1 (C-band) SAR enables the generation of detailed ice velocity maps from crossing-orbit InSAR data that surpass the capabilities of S1 alone. L-band data exhibit reduced fringe frequency in shear zones and on rapidly moving glaciers, enabling more reliable phase unwrapping in regions where S1 data often suffer from decorrelation and facilitating the retrieval of ice velocity in faster-moving regions. However, our results also highlight the higher sensitivity of L-band to ionospheric effects, which can occasionally reduce performance. These findings demonstrate the manifold complementarities and opportunities of Sentinel-1 and other current and upcoming SAR missions, ensuring a continued delivery of essential climate variables for comprehensive monitoring of polar ice masses and their response to climatic change.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Ice sheet discharge constrained by RINGS airborne surveys of bed topography in Dronning Maud Land and Enderby Land

Authors: Geir Moholdt, DML/EL RINGS Project
Affiliations: Norwegian Polar Institute
Knowledge of bed topography at the ice-sheet margin is critical for accurate estimation of Antarctic ice discharge and contribution to sea level change. RINGS is an Action Group within the Scientific Committee on Antarctic Research (SCAR) that aims to provide more accurate and complete reference bed topography data all around Antarctica. Here, we present a decade-long time series of ice sheet discharge by combining satellite-based records of surface velocity and elevation with newly collected airborne radio-echo sounding data of bed topography for the East Antarctic coast between 10°W and 70°E, a sector spanning almost a quarter of the continent, including Dronning Maud Land and Enderby Land (DML/EL). The data were collected along parallel lines following the ice sheet grounding line during two regional RINGS surveys in austral summers 2023-24 and 2024-25, facilitated through a scientific and logistic collaboration between Australia, Belgium, China, Finland, Germany, India, Japan, Norway, Sweden, and the USA. The results demonstrate the importance of accurate bed topography data for constraining ice discharge in Antarctica and for getting more scientific value out of satellite observations of the surface. The framework for the surveys and data analyses can be used as a guideline for similar efforts in other parts of Antarctica, which, put together, can improve current and future estimates of ice-sheet discharge, mass balance and the Antarctic contribution to sea level change.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Speed-up, slowdown, and redirection of ice flow on neighbouring ice streams in the Pope, Smith and Kohler region of West Antarctica

Authors: Heather Selley, Anna Hogg, Dr Benjamin Davison, Pierre Dutrieux, Thomas
Affiliations: University Of Leeds
The ice streams feeding Dotson and Crosson Ice Shelves are some of the fastest changing in West Antarctica. We use satellite observations to measure the change in ice speed and flow direction on eight ice streams in the Pope, Smith and Kohler region of West Antarctica from 2005 to 2022. Seven ice streams have sped up at the grounding line, with the largest increase in ice speed at Smith West Glacier (87 %) whilst Kohler West Glacier has slowed by 10 %. We observe progressive redirection of ice flowlines from Kohler West into the more rapidly thinning and accelerating Kohler East Glacier, resulting in the deceleration of Kohler West Glacier and eastward migration of the ice divide between Dotson and Crosson Ice Shelves. These observations reveal previously undocumented impacts of spatially varying ice speed and thickness changes on flow direction and ice flux into downstream ice shelves, which may influence ice shelf and sheet mass change during the 21st century.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Towards detecting perennial firn aquifers within Nivlisen Ice Shelf, East Antarctica

Authors: Dr Rebecca Dell, Dr Randall Scharien, Mr Connor Dean
Affiliations: Scott Polar Research Institute, Department of Geography, University of Victoria
Perennial Firn Aquifers (PFAs) offer a year-round mechanism for meltwater storage deep within the firn layer of both the Greenland and Antarctic Ice Sheets. Their presence is most extensively documented across the Greenland Ice Sheet, where they delay surface-to-bed meltwater routing. By contrast, in Antarctica, PFA observations are limited to the Antarctic Peninsula, where high ablation and accumulation regimes provide favourable conditions for the insulation and burial of surface melt. Despite the potential for PFAs within Antarctic ice shelves to contribute to ice-shelf damage through hydrofracture, little effort has been made to map their presence beyond the Peninsula. In an effort to begin to better understand the potential for PFAs within ice shelves on a pan-Antarctic scale, this study offers a case study for the Nivlisen Ice Shelf, situated within Dronning Maud Land, East Antarctica. Nivlisen Ice Shelf experiences high rates of ablation and accumulation, and prior studies have identified extensive surface meltwater networks, which traverse 20 km across the ice-shelf surface before terminating in a thick firn layer. These conditions make the ice shelf a likely candidate for PFA presence. Using a combination of C-band synthetic aperture radar (SAR) data from Sentinel-1 and the RADARSAT Constellation Mission (RCM), we deploy a method previously applied with success across the Greenland Ice Sheet, which exploits the relatively low backscatter values returned over PFAs to locate them within the ice shelf. We complement these C-band data with L-band SAR imagery, in order to explore the power of L-band data for PFA detection. With the upcoming launch of the NASA-ISRO Synthetic Aperture Radar (NISAR) satellite, along with the future launch of the Copernicus ROSE-L mission, both of which will provide freely available L-band data, our study offers the opportunity to understand the potential for combining C- and L-band data in studies of surface and subsurface meltwater.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Towards Large-Scale Mapping of the Supraglacial Hydrology of Earth’s Ice Sheets

Authors: Emily Glen, Dr. Alison Banwell, Prof. Malcolm McMillan, Dr. Amber Leeson, Dr. Jennifer Madalenna, Dr. Rebecca Dell, Ms. Laura Melling, Ms. Jacqueline Otto, Dr. Diarmuid Corr, Dr. Karla Boxall, Dr. Katie Miles, Dr David Parkes
Affiliations: Lancaster Environment Centre, Cooperative Institute for Research in Environmental Sciences (CIRES), UK Centre for Polar Observation & Modelling, Centre for Excellence in Environmental Data Science, Scott Polar Research Institute
The study of the distribution and dynamics of supraglacial hydrology on Earth’s ice sheets has become increasingly important in understanding ice sheet mass balance and the associated contribution to global sea-level rise. Supraglacial hydrology encompasses supraglacial lakes (SGLs), slush and meltwater channels – features that govern meltwater storage and routing on ice sheet surfaces each summer. While more abundant on the Greenland Ice Sheet (GrIS) due to warmer air temperatures, extensive supraglacial hydrology has recently been observed in low-elevation areas of the Antarctic Ice Sheet (AIS), including its ice shelves. In Greenland, SGLs may drain via hydrofracture, routing water to the bed and influencing ice velocity, while in Antarctica such drainage contributes to ice shelf collapse, with tentative evidence from the Antarctic Peninsula suggesting SGL drainage may trigger a dynamic response of grounded ice. Recent advances in Earth Observation technologies, including the availability of operational high-resolution optical imagery from Sentinel-2 and Synthetic Aperture Radar imaging from Sentinel-1, have revolutionised the ability to analyse these features at unprecedented spatial and temporal scales. However, processing these vast datasets presents significant challenges, driving the increasing use of Machine Learning (ML) algorithms and cloud computing platforms like Google Earth Engine (GEE) for efficient, scalable, and automated mapping. Here, we present ongoing progress towards large-scale surface hydrology mapping across both the Greenland and Antarctic ice sheets, which is being performed within the context of the ESA-funded 5D Antarctica and 4D Greenland studies. This work is leveraging machine learning techniques and the extensive archive of Sentinel-2 data to produce large-scale hydrological datasets.
Specifically, we will present two example case studies: a decadal-scale assessment of supraglacial hydrology and slush across the entire Greenland Ice Sheet, and results from an intercomparison exercise of the performance of different ML algorithms for mapping meltwater features, conducted within GEE. We will explore the lessons learnt from initial work performed over the GrIS, and how this will guide future work over the AIS. We will conclude by looking at the potential for future methodological developments, and the opportunities to connect to modelling communities for the purposes of constraining future ice sheet mass balance projections.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: 5D Antarctica: An integrated assessment of ice dynamic processes

Authors: Anna Hogg, Malcolm McMillan, Noel Gourmelen, Ghislain Picard, Dr. Thomas Nagler, Marion Leduc-Leballeur, Livia Jakob, Trystan Surawy-Stepney, Giovanni Macelloni, Dr Jan Wuite, Katie Miles, Kate Briggs, Carolyn Michael, Pierre Zeiger, Amber
Affiliations: University Of Leeds
Rapid mass loss from the Antarctic Ice Sheet is a major contributor to global sea level rise and introduces the largest uncertainty into projections of future change. Over the last 25 years, the Antarctic Ice Sheet (AIS) has been responsible for 7.6 ± 3.9 mm of global sea level rise (Shepherd et al., 2018), and, based on current climate pledges, Antarctica is projected to contribute a further -5 to 43 cm to global sea level rise by 2100, depending on the modelling assumptions used (Edwards et al., 2021). The vast majority (98.8%) of mass loss from Antarctica is due to changes in ice flow caused by interactions with the surrounding ocean, rather than due to atmospherically-driven surface processes (Slater et al., 2021). The aim of the ESA 5DAIS project is to produce a suite of Earth Observation datasets to characterise how the coastal ice sheet and ice shelves in Antarctica have changed over recent decades, and to make use of these datasets to investigate the physical processes driving this evolution. During this project we will provide the first comprehensive assessment of ice dynamic observations across the whole Antarctic ice sheet and ice shelves combined, at the highest spatial and temporal resolution possible, using satellite datasets acquired over the last 30 years. We will perform an integrated assessment of ice dynamic processes in Antarctica, delivering new knowledge about the interconnectedness of different glaciological parameters, and the timing and rate of ice dynamic change. We will analyse the environmental drivers of change, studying ice-ocean and ice-atmosphere interactions, partitioning their relative importance to different regions of Antarctica. This will deliver new knowledge about the forcing mechanisms driving present day change, which is critical for accurately projecting future ice sheet evolution. Here, we present an overview and early findings from the ESA 5D Antarctica research project.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: A Decade of High-Resolution Antarctic Ice Speed Variability from the Sentinel-1 Mission

#zarr #pangeo

Authors: Ross Slater, Anna E. Hogg, Pierre Dutrieux, Benjamin J. Davison, Richard Rigby, Benjamin Wallis
Affiliations: University Of Leeds, British Antarctic Survey, University of Sheffield
Highly uncertain ocean warming is driving dynamic imbalance and increased mass loss from the Antarctic Ice Sheet (AIS), with important global sea level rise implications. Ice velocity, measured primarily through satellite observations, is a key indicator of this change, and recent advances in Earth observation capabilities now allow measurements with unprecedented spatial and temporal resolution across key regions of the AIS. The Sentinel-1 synthetic aperture radar (SAR) satellites, part of the European Commission’s Copernicus program, have acquired repeat imagery over the ice sheet’s coastal margin at a combination of 6- and 12-day repeat intervals since 2014. Using an established offset-tracking processing chain, we generate Antarctic-wide mosaics of ice speed on a 100 m grid for each 6- and 12-day separated Sentinel-1 image pair between October 2014 and February 2024. We perform analysis with tools from the Pangeo software ecosystem, using the continent-wide mosaics to generate multi-terabyte ice velocity data cubes. The Xarray and Dask Python packages are used for distributed computation of these cubes, which are stored in the Zarr chunked-array format for optimised access. We analyse the spatial distribution of ice speed trends through the study period, as well as decadal, multi-year, and sub-annual variability in time series extracted from 445 flow units across the AIS. Of these time series, we identify 239 flow units that have accelerated and 206 that have decelerated, whilst the full distribution of trends consistently skews toward acceleration. Acceleration trends are found most prominently in West Antarctica and the West Antarctic Peninsula, but ice speed variability is spectrally broad, complex, and spatially heterogeneous across the continent. Strong multi-year variability in ice speed is observed predominantly in West Antarctica.
At sub-annual time scales, we identify seasonal speed variations on the Antarctic Peninsula and at scattered locations around the rest of the continent. This new dataset reveals the highly dynamic nature of the AIS, paving the way for improving our understanding of its interactions with other components of the Earth system.
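The per-pixel trend analysis described above can be sketched in a few lines. The sketch below uses plain NumPy on a small synthetic (time, y, x) speed cube; the actual workflow runs Xarray/Dask computations over Zarr-chunked, multi-terabyte cubes, and all array sizes and values here are invented for illustration.

```python
import numpy as np

# Synthetic (time, y, x) ice-speed cube: 10 years of annual mosaics over a
# 4 x 4 pixel patch, base speed 800 m/yr plus a per-pixel linear trend.
rng = np.random.default_rng(0)
years = np.arange(2014, 2024, dtype=float)            # time axis
trend_true = np.linspace(-5.5, 9.5, 16).reshape(4, 4) # trend field, m/yr per year
cube = 800.0 + trend_true * (years - years[0])[:, None, None]
cube += rng.normal(0.0, 0.5, cube.shape)              # measurement noise

# Per-pixel linear trend: flatten space and fit every pixel in one
# vectorised polyfit call (y may be 2-D with one column per pixel)
nt = len(years)
flat = cube.reshape(nt, -1)                           # (time, pixel)
slope, intercept = np.polyfit(years, flat, deg=1)
trend_est = slope.reshape(4, 4)

accelerating = int((trend_est > 0).sum())             # pixels speeding up
```

In the real analysis the same fit is expressed lazily over Zarr chunks, so that the continent-wide cube never has to fit in memory.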
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Melt detection in Greenland and Antarctica from SMOS enhanced resolution brightness temperatures

Authors: Pierre Zeiger, Ghislain Picard, Philippe Richaume, Arnaud Mialon, Nemesio Rodriguez-Fernandez
Affiliations: Institut des Géosciences de l’Environnement (IGE), Université Grenoble Alpes, Centre d’Etudes Spatiales de la Biosphere (CESBIO)
Time series of passive microwave observations have been used for decades to detect the wetting of snow, which causes a sharp increase in brightness temperature. 19 GHz observations are the most commonly used, due to a long time series spanning more than 40 years and a high sensitivity to surface melt. More recently, 1.4 GHz (L-band) observations became available with the launch of ESA’s SMOS and NASA’s SMAP missions in 2009 and 2015, respectively. L-band penetrates deeper into the snowpack and is sensitive to liquid water at depth. This information can be combined with surface melt products to provide more insight into meltwater percolation and refreezing on the ice sheets. A new SMOS melt product based on the CATDS Level 3 brightness temperature (L3TB) has recently been published. However, given the coarse spatial resolution of the L3TB, melting and non-melting areas are mixed in the footprint, which affects melt detection and has been identified as a strong limitation. For that reason, we developed a SMOS enhanced resolution brightness temperature product covering the entire Greenland and Antarctic ice sheets. We used the radiometer version of the Scatterometer Image Reconstruction (rSIR) algorithm developed at NSIDC to reconstruct enhanced resolution maps at 40° incidence (37.5°-42.5° bin) and 52.5° (50°-55°). The reconstructed datasets show a 30% resolution enhancement compared to the L3TB product at the same incidence. The effective spatial resolution was evaluated at ~30 km at 40° incidence and ~45 km at 52.5°. Both the new SMOS rSIR-enhanced 40° dataset and the L3TB were used as inputs to the melt detection algorithm to evaluate the impact of spatial resolution for this application. The resolution enhancement allows for the detection of new melting areas and for a large and widespread increase in the average number of melt days detected in both Antarctica and Greenland each year.
In specific areas such as the Wilkins Ice Shelf on the Antarctic Peninsula, this increase represents as much as 30 additional melt days in a melt season lasting at most 80-100 days. The spatial resolution of radiometer observations has a major impact on melt detection, and the new distributed products will be useful for future work on ice sheet hydrology, including liquid water content (LWC) retrieval with SMOS.
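The melt-detection criterion itself is only summarised above. As an illustrative sketch (not the authors' algorithm), a common passive-microwave approach flags a melt day when the brightness temperature rises well above a dry-snow winter reference; the threshold factor k is an assumed value:

```python
import numpy as np

def melt_days(tb, winter_mask, k=3.0):
    """Flag melt days in a brightness-temperature time series.

    tb          : 1-D array of daily brightness temperatures [K]
    winter_mask : boolean mask of dry-snow (winter) reference days
    k           : detection threshold in winter standard deviations
                  (k=3 is an assumed, illustrative value)
    """
    ref_mean = tb[winter_mask].mean()
    ref_std = tb[winter_mask].std()
    return tb > ref_mean + k * ref_std  # boolean melt flag per day

# Synthetic series: cold winter background with a short wet-snow spike
tb = np.full(100, 180.0)
tb[:30] += np.linspace(-1.0, 1.0, 30)   # small winter variability
tb[60:65] = 230.0                       # wet snow raises Tb sharply
winter = np.zeros(100, dtype=bool)
winter[:30] = True
flags = melt_days(tb, winter)
print(flags.sum())  # -> 5 detected melt days
```

With coarse footprints that mix melting and non-melting surfaces, the spike is diluted and may stay below the threshold, which is exactly the limitation the enhanced-resolution product addresses.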

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Ice Shelf Area and Ice Shelf Area Change From Sentinel-1 SAR

Authors: Dana Floricioiu, Lukas Krieger, Sindhu Ramanath Tarekere, Celia Baumhoer
Affiliations: German Aerospace Center (DLR)
Floating ice shelves fringe 74% of Antarctica's coastline, directly linking the ice sheet and the surrounding oceans. The landward extent of an ice shelf is the grounding line, which marks the transition from grounded ice to ice shelf, and its seaward limit is the ice shelf front, the boundary between the shelf and the ocean. The change in ice shelf area is an important indicator of ice shelf stability in a warming climate: it is affected by grounding line retreat, a possible consequence of ice thinning, and by calving events at the front, culminating in ice shelf disintegration or collapse. Two independent processing chains developed at DLR's Earth Observation Center are used to quantify the ice shelf perimeter around Antarctica. For the ice shelf fronts, we use IceLines, a deep learning-based framework providing calving front locations (CFL) on different temporal scales (daily, monthly, quarterly, annual) for Antarctic ice shelves, automatically extracted from Sentinel-1 radar imagery. The procedure is operational, and monthly releases of the datasets are available on DLR's GeoService data portal (https://download.geoservice.dlr.de/icelines/files/). The time series of gapless grounding lines is more challenging to obtain due to the limited availability of timely, coherent SAR acquisitions. We use the time-annotated Grounding Line Location (GLL) product of ESA's Antarctic Ice Sheet Climate Change Initiative to derive an average grounding line for a certain period, e.g., one year. Our custom procedure fills the gaps with grounding lines from manual and machine-learning delineations of temporally close Sentinel-1 DInSAR interferograms and external datasets. This study presents ice shelf area changes derived with a novel method developed to combine the grounding lines derived from Sentinel-1 A/B with contemporaneous Sentinel-1 A/B-based ice shelf fronts.
By analyzing area change in terms of changes at the ice shelf front and at the grounding line, we can attribute it to either grounding line retreat or ice shelf calving, providing information about its causes. Examples of annual ice shelf perimeters of major ice shelves from the start of the Sentinel-1 era to the present will be presented and discussed.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: A Novel Method for Creating Complete, Gapless Lines From Fragmented 2D Data of Antarctic Grounding Line Measurements

Authors: Lukas Krieger
Affiliations: Remote Sensing Technology Institute (DLR)
InSAR grounding line retrieval has resulted in numerous datasets that cover most of the ice shelves fringing Antarctica. With the wealth of data the Sentinel-1 mission collects over the Antarctic grounding zone, it has become feasible to process nearly 60 grounding lines per year for some locations. While the interferometric pipelines are well established and operational, the need for manual delineation of the landward side of the dense fringe belt detected in double-difference interferograms has prevented the efficient processing of dense time series. Most Antarctic-wide products contain manually traced individual grounding lines at low temporal resolution and are focused on spatial completeness. However, recent advances in automatic detection with deep neural networks now offer the possibility of creating long time series [Ramanath Tarekere et al. 2024]. Neither manually nor automatically delineated grounding lines are complete: they often contain gaps due to variations in interferometric coherence, tidal states, and the generally intricate geometry of the grounding line. Furthermore, these limitations result in a time series of fragmented 2D line segments that cannot be directly used to trace the perimeter of an entire ice shelf, let alone form a single circum-Antarctic grounding line. However, applications in ice shelf modelling, calculation of ice export in the mass-budget method, and retreat measurements all benefit from simple, continuous grounding line geometries that surround entire ice shelves. Currently, a widely used product that fulfills these requirements but lacks annotation and timeliness is the MEaSUREs Antarctic Boundaries dataset [Mouginot et al. 2017]. Here, we present a novel method that creates single, gapless, complete grounding lines for individual ice shelves for targeted time periods (annual, bi-annual, monthly). The fragmented grounding lines are stacked in time to distinguish grounded from floating ice.
We then apply a median threshold to detect regions grounded in at least 50% of the measurements. To create the gapless perimeter of an ice shelf, we prioritize grounding lines according to their acquisition time and fill the remaining gaps with measurements from other periods or other sensors. Combined with ice shelf calving front data, we can produce time-annotated ice shelf perimeter, area, and area-change products. References: Mouginot, J., Scheuchl, B. and Rignot, E. (2017). MEaSUREs Antarctic Boundaries for IPY 2007-2009 from Satellite Radar (NSIDC-0709, Version 2). Boulder, Colorado USA. NASA National Snow and Ice Data Center Distributed Active Archive Center. (Visited on 11/19/2024). Ramanath Tarekere, S. et al. (Mar. 11, 2024). “Deep Learning Based Automatic Grounding Line Delineation in DInSAR Interferograms”. In: EGUsphere, pp. 1–35. (Visited on 04/16/2024).
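The median-thresholding step can be sketched as follows; this is a minimal illustration with synthetic rasterized masks, not the authors' implementation (per-pixel data gaps and the time-prioritised gap filling are omitted):

```python
import numpy as np

def grounded_majority(masks):
    """Median-threshold a stack of rasterized grounded/floating masks.

    masks : (t, ny, nx) boolean array, True = grounded in that acquisition.
    Returns a grid that is True where the pixel is grounded in at least
    50% of the measurements (the temporal median of the stack).
    """
    frac = masks.mean(axis=0)   # fraction of acquisitions flagged grounded
    return frac >= 0.5

# Three fragmentary measurements of a 1 x 4 pixel strip
m = np.array([
    [[1, 1, 0, 0]],
    [[1, 0, 0, 0]],
    [[1, 1, 1, 0]],
], dtype=bool)
print(grounded_majority(m).astype(int))  # -> [[1 1 0 0]]
```

The boundary of the resulting grounded region is then a single, gapless line even where individual acquisitions had gaps.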

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Disentangling climate trends from natural variability using satellite altimetry and firn modelling

Authors: Maria T. Kappelsberger, Martin Horwath, Veit Helm, Johan Nilsson, Eric Buchta, Matthias O. Willen, Sanne B. M. Veldhuijsen, Michiel R. van den Broeke
Affiliations: Institut für Planetare Geodäsie, Technische Universität Dresden, Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Jet Propulsion Laboratory, California Institute of Technology, Institute for Marine and Atmospheric Research, Utrecht University
Projections of Antarctica’s contribution to sea-level rise are highly uncertain due to our limited understanding of the processes driving ice-mass changes in a warming climate. While the ice-dynamically induced hotspots of mass loss are known, a long-term increase in surface mass balance (SMB) is poorly constrained. This is because long-term climate trends in SMB are subtle and masked by the large natural SMB variability. In this study, we explore advanced methods to resolve long-term trends in SMB. We do this by jointly analysing elevation changes from satellite altimetry and from SMB and firn modelling in space and time back to 1992. By synergistically combining both data sets, we quantify the large interannual variations related to SMB and firn processes, allowing a robust estimation of the underlying long-term trends. To assess realistic trend uncertainty and significance, our approach accounts for temporal correlations in the altimetry time series due to both SMB-related time-variable signals and altimetry errors such as radar penetration. The analyses are carried out at the level of monthly grids with a spacing of a few kilometres. The resulting high-resolution spatial trend patterns enable interpretation and attribution to SMB-related or ice-flow-related processes.
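The abstract does not specify the covariance model used. As a hedged illustration of trend estimation with temporally correlated errors (not the authors' method), one can use generalized least squares under an assumed AR(1) residual model:

```python
import numpy as np

def gls_trend(t, y, phi):
    """Linear trend and its 1-sigma uncertainty via generalized least
    squares with an AR(1) error model (lag-1 autocorrelation phi).

    phi=0 reduces to ordinary least squares; phi>0 inflates the trend
    uncertainty, reflecting the reduced number of effectively
    independent samples in an autocorrelated series.
    """
    n = len(t)
    X = np.column_stack([np.ones(n), t])
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    C = phi ** lags                      # AR(1) correlation matrix
    Ci = np.linalg.inv(C)
    A = np.linalg.inv(X.T @ Ci @ X)
    beta = A @ X.T @ Ci @ y
    resid = y - X @ beta
    s2 = (resid @ Ci @ resid) / (n - 2)  # residual variance scale
    return beta[1], np.sqrt(s2 * A[1, 1])

t = np.arange(120) / 12.0        # 10 years of monthly samples
y = 0.05 * t                     # 5 cm/yr trend, noise-free for the demo
trend, sigma = gls_trend(t, y, phi=0.6)
print(round(trend, 6))  # -> 0.05
```

With real altimetry series the residuals carry SMB variability and radar-penetration errors, so the AR(1) assumption here stands in for whatever correlation structure the data exhibit.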

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Geothermal heat flow models for ISMIP-7 – Recommendations for Antarctica

Authors: Mareen Lösing, William Colgan, Helene Seroussi, Tobias Stål, Tong Zhang, Felicity McCormack, Jörg Ebbing
Affiliations: School of Earth and Oceans, The University of Western Australia, Australian Centre for Excellence in Antarctic Science, Department of Glaciology and Climate, Geological Survey of Denmark and Greenland, Thayer School of Engineering, Dartmouth College, Institute for Marine and Antarctic Studies, University of Tasmania, School of Natural Sciences (Physics), University of Tasmania, State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Securing Antarctica's Environmental Future, School of Earth, Atmosphere and Environment, Monash University, Kiel University
Geothermal heat flow (GHF) affects ice sheet thermal conditions, determining how the ice slides and internally deforms, as well as the rheological behaviour of the lithosphere. However, the distribution of GHF in polar regions remains poorly constrained, with few borehole-derived estimates and large discrepancies between available glaciological and geophysical estimates. These discrepancies relate to methodological differences and to data treatment or availability. Therefore, many ice sheet modelling studies use a homogeneous GHF estimate, an ensemble average, or vintage models that provide only a very simplified or misleading picture. Here, we discuss the suite of all available GHF models with respect to the underlying methodology and the data sets used, and provide recommendations on which models to use for different purposes. In general, the models fall into three classes: (1) models outdated by improved data availability, (2) models with methodological shortcomings due to oversimplified parametrization, and (3) preferred models. To further evaluate model applicability, we conduct an online expert elicitation to identify which heat flow models are most suitable for ice sheet modelling, as rated by experts in this field. For the preferred models, we discuss model uncertainty and data dependency in order to provide means to judge model suitability depending on the application, e.g. in the Ice Sheet Model Intercomparison Project (ISMIP7). For Antarctica, all models agree on the divide between low heat flow in East Antarctica and higher heat flow in West Antarctica, while amplitudes and spatial patterns vary between models. Nevertheless, the preferred models provide a baseline for more detailed local studies, where data sets that are not available on a continent-wide scale can be consulted.
Examples include depth estimates from magnetic data, which can be anchored to the regional baseline models, or studies that use local geology to predict local variations in geothermal heat flow. Please note that there is an accompanying poster discussing data and models for Greenland.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Tracking Antarctic Ice Sheet Elevation Changes with Satellite Altimetry and UAV Laser Scanners in Dronning Maud Land

Authors: Jaakko Seppänen, Heidi Sallila, Adriano Lemos, Dr Antero Kukko, Aku Riihelä
Affiliations: Finnish Meteorological Institute, Finnish Geospatial Research Institute FGI
Accurate measurements of snow and ice properties in the polar regions are essential for understanding ice sheet dynamics, mass balance, and the factors that influence sea level rise. Satellite altimeters are key instruments for making regular measurements of ice sheet height and thus mapping its changes. The Ice, Cloud and land Elevation Satellite (ICESat-2) carries the Advanced Topographic Laser Altimeter System (ATLAS), a space-based lidar that repeats the same trajectory every three months. The repeated measurements form a time series that can be used to interpret, for example, the mass balance of the glacier, the tidal influence on the floating ice shelf, and even the migration of the grounding line. Laser scanning provides high-resolution measurements that record detailed surface features and local variations in snow and ice properties. Our study combines data collected simultaneously by an unmanned aerial vehicle (UAV) and field observations of snow and ice in Antarctica with the ICESat-2 measurements. The field campaign was carried out in the Antarctic summer, from November 2022 to February 2023, around the Finnish research station Aboa, situated on Basen nunatak in Dronning Maud Land. The analysis sought connections between changes in snow properties and the photon product of the satellite data. During the field campaign, we used a UAV-borne laser scanner that measured surface topographic features at high resolution, providing surface height and roughness. Ground measurements determined snow grain size, stratification, water content, temperature, specific surface area (SSA) and albedo. By combining these data with repeated measurements from the satellite, we improve the spatial and temporal coverage of the data and enable detailed analysis of height changes over time. The field measurements help to confirm the connection between the evolution of the altimeter photon and height products and the quantified snow properties over time.
Through analysis of the time series, we observed the vertical movement of floating ice caused by ocean tides and estimated snowpack evolution during the observation period. These results help us to further develop the use of satellite altimetry to better understand the development of the ice sheet in Dronning Maud Land.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Quantification of tidal grounding line migration using Sentinel-1 observations

Authors: Sindhu Ramanath, Michael Engel, Dana Floricioiu, Jan Wuite, Lukas Krieger
Affiliations: DLR e.V., School of Engineering and Design, Technical University of Munich, ENVEO IT, Technical University of Munich
Grounding lines are the boundaries between grounded and floating ice in marine-terminating glaciers. They are considered a proxy for ice sheet stability, as they retreat landwards and advance seawards in response to changes in ice mass from melting and accumulation, respectively [1]. Grounding lines also move ephemerally due to ice shelf flexure induced by ocean tides. While Differential Interferometric SAR (DInSAR) is regarded as the most accurate grounding line mapping method, the derived lines contain tide-induced displacement arising from multiple tide states. Several studies have quantified this displacement to be heterogeneous and in the range of hundreds of metres [2]-[5]. Isolating the tidal displacement would help retrieve long-term migration trends and provide a better understanding of the dynamic processes at the ice-bed interface. In this study, we aim to quantify grounding line migration at the timescale of the tidal cycle from SAR data with a different method. Namely, we use a time series from 2019-2021 of line-of-sight (LOS) offsets from 6-day repeat-cycle Sentinel-1 acquisitions over Larsen C Ice Shelf (LCIS) and Thwaites Glacier. The datasets were generated using the differential range offset tracking method outlined in [6]. Unlike the DInSAR phase, SAR offsets are independent of coherence and capture only two tide states instead of three or four, making them an ideal data source for analysing tidal grounding line migration. For LCIS, the tidal deformation of the ice shelf is distinctly visible in the offsets derived from just one pair of SAR acquisitions. This signal resembles the deformation at the flexure zone obtained from unwrapping the interferometric phase, where the minimum tidal deflection point corresponds to the grounding line position [7]. The signal is weaker for Thwaites Glacier. We attribute this difference to the high tidal range of the Weddell Sea, 2-4 m [8].
We developed an algorithm based on Bayesian Model Averaging [9] which automatically picks out the grounding line from the offsets and provides the associated positioning uncertainty. We used this algorithm to generate a dense time series of grounding lines at a frequency of 6 days. We will compare these lines to those obtained from DInSAR for LCIS and Thwaites Glacier and investigate the relation between the grounding line positions and the falling and rising tidal states. References: [1] C. Schoof, “Ice sheet grounding line dynamics: Steady states, stability, and hysteresis,” J. Geophys. Res., vol. 112, no. F3, F03S28, 2007. DOI: 10.1029/2006JF000664. [2] V. Brancato, E. Rignot, P. Milillo, M. Morlighem, J. Mouginot, L. An, B. Scheuchl, S. Jeong, P. Rizzoli, J. L. Bueso Bello, and P. Prats-Iraola, “Grounding line retreat of Denman Glacier, East Antarctica, measured with COSMO-SkyMed radar interferometry data,” Geophys. Res. Lett., vol. 47, no. 7, 2020. DOI: 10.1029/2019GL086291. [3] H. Chen, E. Rignot, B. Scheuchl, and S. Ehrenfeucht, “Grounding zone of Amery Ice Shelf, Antarctica, from differential synthetic-aperture radar interferometry,” Geophys. Res. Lett., vol. 50, no. 6, 2023. DOI: 10.1029/2022GL102430. [4] P. Milillo, E. Rignot, P. Rizzoli, B. Scheuchl, J. Mouginot, J. Bueso-Bello, and P. Prats-Iraola, “Heterogeneous retreat and ice melt of Thwaites Glacier, West Antarctica,” Sci. Adv., vol. 5, no. 1, 2019. DOI: 10.1126/sciadv.aau3433. [5] P. Milillo, E. Rignot, P. Rizzoli, B. Scheuchl, J. Mouginot, J. L. Bueso-Bello, P. Prats-Iraola, and L. Dini, “Rapid glacier retreat rates observed in West Antarctica,” Nat. Geosci., vol. 15, no. 1, pp. 48–53, 2022. DOI: 10.1038/s41561-021-00877-z. [6] T. Nagler, H. Rott, M. Hetzenecker, J. Wuite, and P. Potin, “The Sentinel-1 mission: New opportunities for ice sheet observations,” Remote Sensing, vol. 7, no. 7, pp. 9371–9389, 2015. DOI: 10.3390/rs70709371. [7] E. Rignot, “Tidal motion, ice velocity and melt rate of Petermann Gletscher, Greenland, measured from radar interferometry,” J. Glaciol., vol. 42, no. 142, pp. 476–485, 1996. DOI: 10.3189/S0022143000003464. [8] L. Padman, M. R. Siegfried, and H. A. Fricker, “Ocean tide influences on the Antarctic and Greenland ice sheets,” Rev. Geophys., vol. 56, no. 1, pp. 142–184, 2018. DOI: 10.1002/2016RG000546. [9] K. Zhao, M. A. Wulder, T. Hu, R. Bright, Q. Wu, H. Qin, Y. Li, E. Toman, B. Mallick, X. Zhang, and M. Brown, “Detecting change-point, trend, and seasonality in satellite time series data to track abrupt changes and nonlinear dynamics: A Bayesian ensemble algorithm,” Remote Sens. Environ., vol. 232, 111181, 2019. DOI: 10.1016/j.rse.2019.04.034.
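The principle from [7], that the minimum tidal deflection point bounds the flexure zone at the grounding line, can be illustrated with a toy 1-D profile. The noise threshold and geometry below are assumptions for illustration; this is not the Bayesian Model Averaging algorithm described in the abstract:

```python
import numpy as np

def grounding_index(deflection, noise=0.01):
    """Locate the grounding line along a 1-D profile of tide-induced
    vertical deflection (grounded ice at index 0, open shelf at the end).

    The grounding line is taken as the last landward sample where the
    deflection is still within the noise floor, i.e. the minimum-
    deflection point bounding the flexure zone.
    noise : assumed measurement noise level [m]
    """
    above = np.abs(deflection) > noise
    first_flexing = int(np.argmax(above))  # first sample in flexure zone
    return max(first_flexing - 1, 0)

# Toy profile: flat grounded ice for x < 4 km, a linear flexure ramp,
# then the full 1 m tidal uplift on the freely floating shelf
x = np.linspace(0.0, 10.0, 101)            # km along profile
defl = np.clip((x - 4.0) / 3.0, 0.0, 1.0)  # m of vertical deflection
gi = grounding_index(defl)
print(round(float(x[gi]), 2))  # -> 4.0 (km)
```

In practice the deflection profile comes from differential range offsets between two tide states, and the picking is done probabilistically to yield a positioning uncertainty.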

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Mapping Subglacial Water Transport Beneath the Antarctic Ice Sheet With Sentinel-1 Interferometry

Authors: Jonas Kvist Andersen, Dr. Romain Millan, Dr. Anders Anker Bjørk
Affiliations: Department of Geosciences and Natural Resource Management, University Of Copenhagen, Univ. Grenoble Alpes, CNRS, IRD, INRAE, Grenoble-INP, IGE
The role of subglacial hydrology in modulating ice flow has become a rapidly evolving research field over the past few decades. Subglacial water can significantly influence ice dynamics by altering basal traction, potentially triggering short-term flow accelerations or modulating long-term stability. Capturing these processes is critical for improving ice sheet models, which aim to project future contributions to sea level rise in a warming climate. Subglacial and supraglacial lake drainage events have been shown to cause strong, albeit often brief, accelerations in ice flow on both the Antarctic and Greenland ice sheets (Stearns et al., 2008; Siegfried et al., 2016; Maier et al., 2023; Andersen et al., 2023). These events provide critical insights into the interaction between hydrology and ice dynamics. Additionally, episodic channelized water transport, revealed through vertical surface displacements of a few centimeters, has been observed in the far inland regions of both ice sheets using double-difference satellite Synthetic Aperture Radar (SAR) interferometry (Gray et al., 2005; Neckel et al., 2021; Andersen et al., 2023). Unlike lake drainage events, these episodic displacements often re-occur along defined pathways and are generally not directly linked to significant changes in horizontal ice flow velocity. The EU Copernicus Sentinel-1 SAR archive, spanning over a decade and featuring a 6-12 day repeat-pass period, enables high-temporal-resolution monitoring of vertical displacement signals across the Antarctic ice sheet. Double-difference displacement anomaly maps generated from Sentinel-1 interferometry isolate localized vertical motion, offering a unique means of studying transient subglacial hydrological processes. In this study, we present a Sentinel-1-based dataset of vertical displacements covering the Antarctic ice sheet north of the 78.5°S polar gap.
These displacement events are characterized by centimeter-scale surface displacements, resolved at a spatial resolution of 50 meters. Our approach enables the detection and mapping of both isolated and recurrent subglacial hydrological events over an extended time period. A key focus is placed on events where episodic subglacial water transport propagates toward the grounding zones of ice shelves, as such events may have significant implications for ice-ocean interactions and, ultimately, ice shelf stability. For such cases, we analyze the potential impacts of basal water outflow on horizontal ice flow speeds. This study offers new insights into the spatiotemporal distribution and pathways of subglacial water transport, providing a valuable resource for understanding the coupling between subglacial hydrology, ice dynamics, and ice shelf processes in Antarctica. It represents an important step toward improving models of ice sheet behavior by incorporating the impacts of dynamic subglacial water systems. References Andersen, J. K., Rathmann, N., Hvidberg, C. S., Grinsted, A., Kusk, A., Merryman Boncori, J. P., & Mouginot, J. (2023). Episodic subglacial drainage cascades below the Northeast Greenland Ice Stream. Geophysical Research Letters. https://doi.org/10.1029/2023GL103240 Gray, L., I. Joughin, S. Tulaczyk, V. B. Spikes, R. Bindschadler, and K. Jezek (2005), Evidence for subglacial water transport in the West Antarctic Ice Sheet through three-dimensional satellite radar interferometry, Geophysical Research Letters. doi:10.1029/2004GL021387. Maier, N., Andersen, J. K., Mouginot, J., Gimbert, F., & Gagliardini, O. (2023). Wintertime supraglacial lake drainage cascade triggers large-scale ice flow response in Greenland. Geophysical Research Letters. https://doi.org/10.1029/2022GL102251 Neckel, N., Franke, S., Helm, V., Drews, R., & Jansen, D. (2021). 
Evidence of cascading subglacial water flow at Jutulstraumen Glacier (Antarctica) derived from Sentinel-1 and ICESat-2 measurements. Geophysical Research Letters. https://doi.org/10.1029/2021GL094472 Siegfried, M. R., H. A. Fricker, S. P. Carter, and S. Tulaczyk (2016), Episodic ice velocity fluctuations triggered by a subglacial flood in West Antarctica, Geophysical Research Letters. doi:10.1002/2016GL067758. Stearns, L., Smith, B. & Hamilton, G. (2008), Increased flow speed on a large East Antarctic outlet glacier caused by subglacial floods. Nature Geoscience. https://doi.org/10.1038/ngeo356
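For orientation, the standard conversion from an unwrapped double-difference phase to vertical displacement is shown below; purely vertical motion is assumed (the usual assumption for subglacial lake filling/drainage signals), and the Sentinel-1 C-band wavelength and sign convention are illustrative assumptions, not values stated in the abstract:

```python
import numpy as np

WAVELENGTH = 0.0555  # m, Sentinel-1 C-band (~5.55 cm), assumed value

def phase_to_vertical(phi, incidence_deg):
    """Vertical displacement from unwrapped double-difference phase.

    phi           : unwrapped phase [rad]; a double difference cancels
                    steady horizontal flow, leaving transient motion
    incidence_deg : local radar incidence angle [deg]
    """
    los = WAVELENGTH * phi / (4.0 * np.pi)       # line-of-sight motion [m]
    return los / np.cos(np.radians(incidence_deg))

# One full fringe (2*pi of double-difference phase) at 35 deg incidence
dz = phase_to_vertical(2.0 * np.pi, 35.0)
print(round(dz * 100.0, 2), "cm")  # -> 3.39 cm
```

This centimetre-per-fringe sensitivity is what makes the few-centimetre uplift/subsidence signals of channelized water transport detectable in Sentinel-1 interferograms.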

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Calving front dynamics in coastal Dronning Maud Land, East Antarctica

Authors: Jennifer Arthur, Geir Moholdt, Lotte Wendt, Anca Cristea, Josephine Maton
Affiliations: Norwegian Polar Institute
Antarctica's ice shelves help to control the flow of glacial ice as it drains into the ocean. Iceberg calving is a key component of frontal ablation (i.e., mass loss at the calving front) at tidewater glaciers and ice shelves, and it plays a crucial role in the mass balance of the Antarctic Ice Sheet, with large, tabular calving events accounting for the majority of the mass loss from Antarctic ice shelves. While large calving events involve the initiation and propagation of crevasses and rifts over timescales of days to decades, less attention has been given to smaller-scale ice-shelf frontal ablation processes that also contribute to ice-shelf area and mass changes around Antarctica. Smaller-scale ice-shelf frontal ablation can be triggered by the collision of medium- to large-sized (>100 km²) tabular icebergs with ice shelf fronts, by increased basal melting and thinning towards the fronts, or by undercutting of ice-shelf fronts around the waterline by ocean waves in the absence of protective landfast sea ice. Here, we analyse calving front dynamics and frontal ablation (ice-shelf advance and retreat) rates in the coastal Dronning Maud Land region of East Antarctica since 2015, with a particular focus on the Fimbul Ice Shelf, where the loading bay for the Norwegian research station Troll is located. Using time series derived from semi-automated classification of Sentinel-1 radar imagery, we quantify ice-shelf frontal ablation and mass change rates. Our results reveal complex interannual patterns in calving front dynamics, demonstrating the importance of smaller-scale ice-shelf frontal ablation processes. These findings improve knowledge of localised calving dynamics and their drivers in this region of East Antarctica and provide detailed observational data for long-term scientific and operational monitoring of the calving front and for refining models of, for example, wave undercutting of ice-shelf fronts.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Using Sentinel-1 SAR Imagery and Machine Learning Methods to Investigate Buried Meltwater Lakes on an Antarctic Ice Shelf

Authors: Paula Suchantke, Rebecca Dell, Neil Arnold, Devon Dunmire
Affiliations: Scott Polar Research Institute, KU Leuven
Antarctica’s floating ice shelves fringe three quarters of the continent’s margins and buttress upstream grounded ice, ultimately reducing Antarctic contributions to global mean sea level rise. Each ice shelf is exposed to a unique set of glaciological and climatological characteristics, which determine its vulnerability to partial or complete disintegration. One component that may affect present-day and future ice-shelf stability is the presence of surface and sub-surface meltwater, which may drive firn-air depletion and lead to flexure and fracture of the ice shelf. Our understanding of surface meltwater occurrence is extensive, and recent work has documented widespread surface meltwater systems across numerous ice shelves during the austral summer. However, our knowledge of meltwater storage within the ice-shelf subsurface is limited. Liquid water can persist perennially beneath the ice-shelf surface due to the insulating properties of overlying layers of snow, firn, and/or ice. In this way, buried meltwater lakes introduce the possibility of hydrofracture outside of the melt season. Buried lakes are hard to distinguish, or completely invisible, in optical imagery, which has challenged investigations of their evolution and effect on ice-shelf stability. Here, we aim to test the potential of machine learning methodologies previously applied on the Greenland Ice Sheet to identify buried meltwater lakes on Antarctic ice shelves. In a deep learning approach using a convolutional neural network, the ice-shelf surface and subsurface are classified into several discrete classes, allowing for the detection of buried meltwater lakes. Early-stage qualitative interpretations of Sentinel-1 Synthetic Aperture Radar (SAR) imagery have identified several buried lakes near the grounding line on the western portion of the Wilkins Ice Shelf (Antarctic Peninsula), in the vicinity of Merger Island.
These subsurface lakes provide the opportunity not only to test the use of established machine learning methods in an Antarctic context but also to use Operation IceBridge airborne radar data to validate buried-lake detection from Sentinel-1 SAR imagery in Antarctica.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: POLARIS Airborne Radar Ice Sounding Campaign in Antarctica

Authors: Anders Kusk, Jørgen Dall
Affiliations: Technical University Of Denmark
From 11 December 2023 to 14 February 2024, DTU Space completed an airborne radar campaign in Antarctica in support of ESA’s BIOMASS synthetic aperture radar (SAR) satellite mission, to be launched in 2025. In addition to the BIOMASS support flights, the airborne campaign was part of the ICECAP (“Investigating the Cryospheric Evolution of the Central Antarctic Plate”) programme. Since 2008, the Australian Antarctic Division (AAD) has led 11 ICECAP campaigns, and for the 2023/24 season the POLARIS radar was used in its ice sounder configuration. POLARIS is a fully polarimetric airborne SAR and ice sounding radar operating at P-band (435 MHz), developed by DTU Space on commission from ESA. It is so far the only airborne polarimetric ice sounder in operation, a capability that enables the estimation of crystal orientation fabrics from airborne sounding [1]. This campaign is the first time that airborne P-band polarimetric ice sounding data have been acquired on the East Antarctic Ice Sheet. All data were acquired in fully polarimetric configuration. During the campaign, ice sounding data were acquired in Wilkes Land over Vanderford Glacier, the fastest-retreating glacier in East Antarctica [2], and over Law Dome, a small ice cap with extensive ice core records [3], situated between the Vanderford and Totten glaciers. A survey at Dome C was also carried out, with the aims of improving local bed elevation models, supporting the selection of new ice core drilling sites, and examining the potential for application of polarimetric methods in this region. During the campaign it was established that the 435 MHz frequency (a high frequency for an ice sounder) can be used to consistently profile the bed through ice thicknesses of up to 4 km. In this poster, we report on the results of the campaign, showing example images and measurements, including polarimetric signatures. [1] J.
Dall, “Estimation of crystal orientation fabric from airborne polarimetric ice sounding radar data”, Proceedings of IEEE 2020 International Geoscience and Remote Sensing Symposium, pp. 2975-2979, 2020. [2] Picton, H. J., Stokes, C. R., Jamieson, S. S. R., Floricioiu, D., and Krieger, L.: Extensive and anomalous grounding line retreat at Vanderford Glacier, Vincennes Bay, Wilkes Land, East Antarctica, The Cryosphere, 17, 3593–3616, https://doi.org/10.5194/tc-17-3593-2023, 2023. [3] Jong, L. M., Plummer, C. T., Roberts, J. L., Moy, A. D., Curran, M. A. J., Vance, T. R., Pedro, J. B., Long, C. A., Nation, M., Mayewski, P. A., and van Ommen, T. D.: 2000 years of annual ice core data from Law Dome, East Antarctica, Earth Syst. Sci. Data, 14, 3313–3328, https://doi.org/10.5194/essd-14-3313-2022, 2022

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Mass Balance and Ice Discharge of the Northern Antarctic Peninsula Derived From Multi-mission SAR Data

Authors: Dr. Thorsten Seehaus, Kaian Shahateet, Thomas Dethinne, Christian Sommer
Affiliations: Friedrich-Alexander-Universität Erlangen-Nürnberg, Université Grenoble Alpes, Department of Geography, Laboratory of Earth Observation and Ecosystem Modelling, SPHERES, University of Liège
The Antarctic Peninsula is home to some of the highest specific mass change rates in Antarctica. However, existing estimates for the northern Antarctic Peninsula (<70°S) are either spatially limited or significantly uncertain. This study provides a comprehensive assessment of the geodetic mass balance and ice discharge across the northern Antarctic Peninsula Ice Sheet using multi-mission SAR data back to the early 1990s. Repeat TanDEM-X acquisitions from the austral winters 2013 and 2017 allow the computation of spatially detailed ice surface lowering information. The results cover 96.4% of the study area, revealing a total mass loss of -24.1 ± 2.8 Gt/a. Hotspots of ice loss include former ice shelf tributary glaciers of the Prince-Gustav-Channel, Larsen-A and B, and Wordie ice shelves, with the Airy-Seller-Fleming glacier system showing the highest mass change rate (-4.9 ± 0.6 Gt/a) and Drygalski Glacier the greatest average surface elevation change (-2.30 ± 0.03 m/a). While changes in climatic mass balance (CMB) partly contributed to mass loss in the southern region, the primary driver was imbalanced high ice discharge. Ice discharge to the ocean, derived from glacier surface velocity fields and reconstructed ice thickness, showed an 11.7% increase from 89.8 ± 50.7 Gt/a in the early 1990s to 100.3 ± 31.0 Gt/a in the 2010s, based on a database of 13,784 flux measurements across 333 glaciers. When comparing ice discharge rates to climatic mass balance and calibrating with geodetic mass balance data, ice mass loss declined slightly from -20.8 ± 51.7 Gt/a in the 1990s to -19.37 ± 38.4 Gt/a in the 2010s for the observed glaciers. This suggests that increased ice discharge was offset by higher climatic mass balance rates. The study highlights the critical impact of ice thickness uncertainties on discharge estimates, with errors exceeding 80%, underscoring the need for improved ice thickness data. 
Regional trends indicate an overall increase in ice discharge, likely driven by atmospheric and oceanic influences.
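The geodetic part of such an assessment reduces, conceptually, to integrating elevation-change rates over the glacierised area and converting volume to mass. The sketch below is illustrative only: the function name and the density value (900 kg/m³, a common ice/firn assumption) are not taken from the study.

```python
import numpy as np

def geodetic_mass_change(dhdt, cell_area, rho=900.0):
    """Integrate surface-elevation-change rates into a mass change rate.

    dhdt:      per-cell elevation change [m/a] (NaN = no data)
    cell_area: per-cell area [m^2]
    rho:       assumed density of the lost/gained material [kg/m^3]
    Returns the mass change rate in Gt/a (1 Gt = 1e12 kg).
    """
    volume_rate = np.nansum(dhdt * cell_area)  # m^3/a
    return volume_rate * rho / 1e12            # kg/a -> Gt/a
```

For example, a uniform lowering of 1 m/a over 10,000 km² at 900 kg/m³ corresponds to a mass change of -9 Gt/a.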
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Determining Ice Flow Direction in Antarctica

Authors: Veronica Tollenaar, Etienne Legrain, Stef Lhermitte, Julien Seguinot, Andrés Rivera, Prof Ramon Hanssen, Maaike Izeboud, Loreline Faugier, Harry Zekollari
Affiliations: Department of Water and Climate, Vrije Universiteit Brussel, Laboratoire de Glaciologie, Université libre de Bruxelles, Department of Earth and Environmental Sciences, KU Leuven, Department of Geoscience and Remote Sensing, Delft University of Technology, Departamento de Geografía, Universidad de Chile
Over 65% of the surface of the Antarctic ice sheet (including ice shelves) flows slowly (< 15 meters/year). These slow-moving regions contain more than 75% of the volume of the ice sheet and include many blue ice areas which host meteorites and some of the world’s oldest ice. Ice flow directions and velocities are hard to constrain in these areas, given that ice moves at rates lower than the typical resolution of satellite observations (tens of meters), inhibiting the tracking techniques that are widely and successfully applied to fast-flowing regions and glaciers. Another class of satellite observations, interferometric synthetic aperture radar (InSAR), does allow the detection of small deformations that can be translated to an ice flow velocity component in the direction of the line of sight. However, these observations do not measure the actual direction of the ice flow. To estimate the direction of ice flow, ice sheet geometry can be used. Theoretically, the flow direction is governed by the surface slope, as gravitational forces act on the mass of ice. However, due to horizontal stress coupling, a regionally averaged slope of the ice sheet should be considered by smoothing elevation data over lengths of several (4-10) times the ice thickness. In several case studies, such smoothing has been applied using a uniform length for a larger region. With expanding computational resources, it is now feasible to use an adaptive smoothing filter that depends on the local ice thickness. By constructing a database of GNSS observations of ice flow directions from the literature, we optimize the smoothing of digital elevation models and estimate the performance of an algorithm that calculates the steepest gradient based on the Runge-Kutta 4th-order method. The resulting ice flow lines provide a direct assessment of flow patterns and highlight, for example, the configuration of complex flow patterns around (exposed) mountains in diverse blue ice areas across the continent.
Understanding these areas is of key importance for the search for the oldest ice. Additionally, we can estimate ice flow directions and uncertainties induced by the quality of the data (digital elevation model, bedrock topography) at any given location to reproject InSAR line-of-sight deformations to the direction of flow, and as such obtain 3D velocities in slow-flowing areas.
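The two numerical ingredients described above (thickness-dependent smoothing of the elevation model, then steepest-gradient tracing with a 4th-order Runge-Kutta scheme) can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: the blending of a small filter bank stands in for a truly spatially varying Gaussian, and the gradient is sampled nearest-neighbour rather than interpolated; the function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_adaptive(surface, thickness, dx, coupling=6.0, n_scales=4):
    """Smooth a DEM over ~`coupling` local ice thicknesses by blending a
    small bank of uniformly smoothed copies (a cheap stand-in for a
    spatially varying Gaussian filter)."""
    sigma = coupling * thickness / dx                        # target width, in pixels
    levels = np.linspace(sigma.min(), sigma.max(), n_scales)
    bank = np.stack([gaussian_filter(surface, s) for s in levels])
    idx = np.abs(sigma[..., None] - levels).argmin(axis=-1)  # nearest filter scale
    return np.take_along_axis(bank, idx[None], axis=0)[0]

def rk4_flowline(z, start, h=1.0, n_steps=500):
    """Trace a steepest-descent flow line on grid `z` with classical RK4."""
    gy, gx = np.gradient(z)

    def downslope(p):
        # Unit downslope direction at (row, col), nearest-neighbour sampled.
        i = int(round(np.clip(p[0], 0, z.shape[0] - 1)))
        j = int(round(np.clip(p[1], 0, z.shape[1] - 1)))
        v = np.array([-gy[i, j], -gx[i, j]])
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    path = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        p = path[-1]
        k1 = downslope(p)
        k2 = downslope(p + 0.5 * h * k1)
        k3 = downslope(p + 0.5 * h * k2)
        k4 = downslope(p + h * k3)
        p_new = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if not (0 <= p_new[0] < z.shape[0] and 0 <= p_new[1] < z.shape[1]):
            break  # flow line left the grid
        path.append(p_new)
    return np.array(path)
```

In practice one would first call `smooth_adaptive` on the DEM and then seed `rk4_flowline` at the GNSS sites to compare modelled and observed flow directions.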
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Lasering for insights into Antarctic snow surface roughness and mass balance – Finnish Antarctic research projects LAS3R and EXQALIBR

Authors: Aku Riihelä, Prof. Antero Kukko, Dr. Leena Leppänen, Dr. Adriano Lemos, Mr. Aleksi Rimali, Dr. Jaakko Seppänen, Dr. Heidi Sallila, Dr. Paula Litkey, Dr. Matti Lehtomäki, Dr. Juha Lemmetyinen
Affiliations: Finnish Meteorological Institute, Finnish Geospatial Research Institute
Finland operates an Antarctic research station called Aboa on Queen Maud Land (QML), providing a base for on-site investigations of the ice sheet. Here, we introduce select results from the Aboa field measurements and associated satellite data analysis from our concluded Research Council of Finland research project LAS3R (Low orbit altimetry, Albedo, and Antarctic Snow and Sea-ice Surface Roughness), as well as plans for our upcoming research project EXQALIBR (EXceptional Queen mAud Land Ice mass Balance incRease – causes and effects). The connecting thread between the projects is the extensive use of UAV-mounted laser scanners to investigate the surface roughness of Antarctic snow and ice (LAS3R) and, by revisiting the same areas during EXQALIBR field work, to ascertain western QML surface mass balance changes as reference material for our satellite- and model-based analyses of MB and SMB change over the region during the past decade. The laser scanner measurements are supported by a broad variety of radiative and physical measurements of the ice sheet surface. We will highlight select results from LAS3R related to collocated ICESat-2 and CryoSat-2 altimetry retrievals and related surface roughness impact assessments – with detailed results available elsewhere in this conference. We also illustrate the effect of surface roughness on snow albedo across our field sites based on intercomparisons of smooth snow albedo and roughness impact modeling (after Manninen et al., 2021) versus our reference field observations of albedo. We demonstrate that accounting for the surface roughness features of QML snow fields brings modeled snow albedos closer to measurements, although the effect magnitude is often smaller than the typical albedo measurement uncertainty (<5% relative). We discuss the challenges of laser scanning over snow fields with few distinguishing outcroppings or other prominent features, as well as the practical solutions needed for operations in the harsh Antarctic conditions.
Finally, we conclude by demonstrating the unique synergy between the two projects. We present our plans for the QML mass balance investigations and associated fieldwork planned for the 2025-26 field season. In particular, we draw attention to the insights obtainable by revisiting the laser-scanned measurement sites in order to obtain spatially well-resolved data on QML surface mass balance change, to be combined with a broad array of satellite gravimetry/altimetry observations, precipitation observations and snow modeling to unravel the mass balance changes by component. We expect that our contributions from these projects will be relevant for, e.g., the ongoing discussion on why QML mass balance is substantially increasing while most other Antarctic margins are rapidly losing ice mass.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Monitoring of supraglacial lakes on the Nansen ice shelf with active microwave satellite remote sensing: a preliminary assessment

Authors: Francesco De Biasio, Stefano Zecchetto, Matteo Zucchetta, Stefano Vignudelli, Emiliana Valentini, Marco Slavadore
Affiliations: National Research Council - Institute of Polar Sciences, National Research Council - Institute of Polar Sciences, National Research Council - Institute of Biophysics, National Research Council - Institute of Polar Sciences, University Institute for Advanced Study of Pavia (IUSS)
The evolution of the cryosphere components (snow cover, ice, and melt water) is key knowledge for understanding the impact of climate change in remote areas. Energy exchanges between the atmosphere and the ice sheet can facilitate the formation of supra-glacial lakes (SGLs). The Holistic Overview of the supra-glacial Lake-Ice-Snow TIming and Climate causality (HOLISTIC) project, funded by the Italian National Antarctic Research Program, aims at establishing a solid knowledge of the dynamics of SGLs and the surface hydrology over the Nansen Ice Shelf (Victoria Land, Antarctica), and the potential relationships with climatic drivers. The seasonal evolution of the surface triggers melting and drainage processes induced by albedo and meteo-climatic feedback. Among the various scientific questions, there is a clear need to identify, measure and monitor SGLs on the ice shelf. The project plans to synergistically use observations from active microwave remote sensing instruments, namely the SAR and radar altimeter onboard the Sentinel-1 and Sentinel-3 Copernicus missions of the European Space Agency (ESA). The approach adopted in this study is based on microwave sensors, which permit imaging the surface in all-weather conditions, and focuses on the analysis of the spatial and temporal variations of the snow layer above the large ice sheet, to describe the formation, characteristics and fate of SGLs, and possibly to integrate the clues coming from both instruments into a unique and consistent description of the surface hydrology dynamics. The detection of SGL extent and depth is very important for understanding glacier dynamics at different time and spatial scales. For this reason, other methodologies are used in our approach, involving also the use of imaging sensors in the visible spectrum (Sentinel-2, Landsat), useful for assessing the evolution of the surface hydrology.
A dedicated processing of Sentinel-3 altimetry data over ice at a higher posting rate (HPR, 80 Hz) will soon be started through the Altimetry Virtual Lab on the EarthConsole computing platform, thanks to European Space Agency (ESA) support. The detection of SGL extent, depth and spatial frequency will be crucial for understanding the glacier dynamics. The methodologies involved in the project activities will include different platforms and sensors, at different time and spatial scales. Moreover, a different processing strategy documented in Abileah and Vignudelli (2021), already validated over specular waters, will be tested over polar targets. Although Sentinel-1 SAR images over the Nansen Ice Shelf are available only in single polarization (HH), they may allow us to identify the presence of water in the SGLs and to retrieve the wind field over the sea surface. We anticipate that the results of this preliminary study may contribute to ESA’s CRISTAL mission (the continuation of CryoSat, which aims to measure the snowpack). The CRISTAL mission objective is to derive snow depth directly (dual-frequency approach): Ka-band scattering originates at the air-snow interface, while Ku-band scattering originates at the snow–sea-ice interface. SGLs are good targets for testing this idea.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Channelised-Basal-Melt-Induced Instability of Roi Baudouin Ice Shelf, East Antarctica: A Case Study of the D-29 Calving Event

Authors: Phoebe Jackson, Professor Ian Willis, Dr Rebecca Dell, Karla Boxall
Affiliations: Scott Polar Research Institute, University of Cambridge, Lancaster Environment Centre, Lancaster University
Localised ice-shelf thinning induced by underlying channelised basal melt has been hypothesised to precondition ice shelves to instability. Resultant flexural stresses that have the potential to initiate fracture can structurally weaken ice shelves, in some instances leading to major calving events. Though the basal channel network within western Roi Baudouin Ice Shelf has been widely reported on, no research to date has considered the potential vulnerability of the ice shelf to channelised-basal-melt-induced instability. Through a 15-year observational case study of the Roi Baudouin Ice Shelf that uses Landsat and Sentinel imagery alongside derived surface velocity and strain rates, we document the processes and forcing that led to the 950 km² D-29 calving event in June 2021. From 2006 to 2021, we observe the formation and evolution of two fractures, one longitudinal and one transverse, associated with channelised basal melt underneath the West Ragnhild Glacier, which drains ~10% of Dronning Maud Land’s ice discharge into western Roi Baudouin. These basal-channel-induced fractures structurally weakened the ice shelf such that when the West Ragnhild Glacier frontal margin was hit by iceberg D-28 on 31 May 2021, the fractures joined and released iceberg D-29. Thus, we suggest that channelised-basal-melt-induced fracture served as a precursor to the D-29 calving event and present the first evidence of basal channel-associated instability of Roi Baudouin Ice Shelf. As such, our findings highlight the importance of future patterns of channelised basal melt rates for the future stability of Roi Baudouin Ice Shelf. In turn, this will be dependent on potential regimes of subglacial melt flux and oceanic forcing under future climate scenarios. Our work contributes to the wider literature surrounding drivers of ice-shelf calving and stability, particularly across East Antarctica.
Ultimately, it will serve to inform regional and ice-sheet-wide assessments of mass balance, which are essential for constraining accurate projections of Antarctica’s global sea level contribution.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: A.05.04 - POSTER - Advances at the observation-modelling interface

As observations become more sophisticated, and models more complex, so too does the link between them. Rapidly expanding processing and data storage requirements bring new challenges for model evaluation and validation, and the need for tools, toolboxes and cross-community collaboration. Novel assimilation techniques, developments in inverse modelling, and improved treatment of combined model-observation uncertainties require close collaboration between communities.
This session welcomes submissions at the interface of Earth observation and modelling. Relevant topics include, but are not limited to:
• The role of observations in climate forcings
• Observational requirements to enable the next generation of CMIP model benchmarking
• The role of emerging technologies, including machine learning and artificial intelligence, in advancing the assessment of Earth system models (ESMs)
• Development of toolboxes and metrics for model assessment
• Novel observations relevant to high resolution model processes
• Model-observation integration approaches for process understanding
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: A novel moist static energy balance model (nEBM) for improving the (atmospheric) hydrological cycle representation

Authors: Nedim Sladić, Dr Tim Trent
Affiliations: University of Leicester
The hydrological cycle represents the largest movement of any substance on Earth and is crucial for life. It is therefore essential to understand the physical mechanisms of the water cycle and their feedback loops, which provide the framework to develop and build the models needed for climate studies. Given human-induced climate warming, this becomes a particularly critical research topic, as long-established patterns are disrupted by an intensifying water cycle. Two approaches, distinguished by their complexity, tackle water cycle questions. The complex approach relies on global general circulation models, often run as ensembles, to represent spatiotemporal patterns through complex, multidimensional mathematics. These models are computationally expensive precisely because of their multidimensionality. The alternative is therefore to break down this complexity with simplified energy budget models. Instead of solving the full set of equations, the focus is on the energy budget of a simplified model, often comprising one or two dimensions represented as interfaces. Additionally, the simplified version breaks the mathematics into summative parts, frequently producing a linearised tendency equation that encapsulates multiple smaller-scale models within a general model. This framework allows computational flexibility and captures variability, as the processes become disentangled and less dependent on each other. Hence, these models offer a promising starting point for understanding the planetary energy budget and for conducting climate variability and sensitivity experiments. This study focuses on improving the representation of the hydrological cycle within a novel two-dimensional energy balance model (nEBM) based on moist static energy transport.
This approach diverges from the classical "dry" model representation and should improve the treatment of net air-parcel enthalpy and more accurately capture precipitation over different regions. This poster will present results from a variety of climate sensitivity tests of the nEBM's response to external forcings (e.g. a doubling of CO2), with particular attention given to poleward transport and fluxes. The results of this study will be put in context with current climate model simulations from CMIP6 and satellite data products from ESA's Climate Change Initiative programme.
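For context, the classical one-dimensional ancestor of such energy balance models fits in a few lines. The sketch below is a standard dry diffusive EBM (Budyko/Sellers style) with textbook-style illustrative parameter values, not the two-dimensional moist nEBM described in the abstract; the function name is hypothetical.

```python
import numpy as np

def ebm_equilibrium(n=90, S0=1361.0, albedo=0.3, A=203.3, B=2.09, D=0.55):
    """Equilibrium of a 1-D diffusive energy balance model.

    Grid: x = sin(latitude). Outgoing longwave radiation is linearised as
    A + B*T (Budyko-style); D diffuses heat down the meridional temperature
    gradient. Solved by explicit pseudo-time relaxation with no-flux
    boundaries at the poles. Returns (x, T) with T in degrees Celsius.
    """
    x = np.linspace(-0.95, 0.95, n)
    dx = x[1] - x[0]
    # Annual-mean insolation, second-Legendre-polynomial approximation
    S = (S0 / 4.0) * (1.0 - 0.482 * (3.0 * x**2 - 1.0) / 2.0)
    T = np.zeros(n)
    dt = 2e-4  # small pseudo-time step (keeps explicit diffusion stable)
    for _ in range(20000):
        # Divergence of the diffusive flux: d/dx[(1 - x^2) dT/dx]
        flux = (1.0 - (x[:-1] + dx / 2.0) ** 2) * np.diff(T) / dx
        div = np.concatenate(([flux[0]], np.diff(flux), [-flux[-1]])) / dx
        T += (S * (1.0 - albedo) - (A + B * T) + D * div) * dt
    return x, T
```

The moist generalisation alluded to in the abstract replaces the dry temperature diffusion with diffusion of moist static energy, which changes how latent heat is transported poleward.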
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: StraitFlux - Tools for Precise Oceanic Transport Calculations on Various Modeling Grids

Authors: Susanna Winkelbauer, Michael Mayer, Leopold Haimberger
Affiliations: University Of Vienna, Department of Meteorology and Geophysics, ECMWF (European Centre for Medium-Range Weather Forecasts)
Oceanic transports play a crucial role in regulating the Earth's climate system, influencing heat, freshwater, and nutrient distribution across the globe. Mooring lines, placed strategically around the globe, provide important insights into key oceanic processes; however, their spatial and temporal coverage remains limited. This raises the need to assess oceanic transports from reanalyses and climate models in order to understand past variability and possible future changes. However, these assessments often face technical challenges due to the diversity and complexity of the model grids and discretization schemes used. This work presents StraitFlux, a newly developed Python tool designed to address these challenges by enabling precise calculations of oceanic transports from diverse climate models (e.g., the Coupled Model Intercomparison Project Phase 6, CMIP6) and reanalyses. Operating consistently with the discretization schemes of the models, StraitFlux allows for robust and accurate analyses and enables comprehensive assessments of oceanic transports as well as of the ocean’s vertical structure and flow patterns via cross-sections. By simplifying calculations and resolving challenges posed by varying model grids, StraitFlux offers a user-friendly approach for benchmarking model outputs against observational data and evaluating future changes of oceanic transports under different SSP scenarios. Analysis of the Arctic’s energy budget in CMIP6 models revealed tight connections between oceanic heat transport and the state of, and changes within, the Arctic Ocean and its sea ice, with biases propagating across the components. The transport diagnostics are used to weight and constrain CMIP6 models by evaluating their performance against observational data and assessing their independence from one another, and therefore help to identify and reduce biases and uncertainties in future projections. While initially applied to projections of Arctic change (e.g. sea ice), this methodology can be extended to other variables and is used to constrain diverse energy budget components, as well as large-scale oceanic transports such as the AMOC.
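Once the grid handling is done (the hard part that a tool like StraitFlux automates), a net transport through a strait section reduces to the velocity component normal to the section times the cell face area, summed over the cross-section. The generic sketch below illustrates only that final step and is not the StraitFlux API; the function name and argument layout are assumptions for illustration.

```python
import numpy as np

def section_transport(v_normal, dx, dz):
    """Volume transport through a vertical section, in Sverdrup.

    v_normal: (depth, along-section) velocity normal to the section [m/s]
    dx:       along-section cell widths [m]
    dz:       vertical cell thicknesses [m]
    """
    area = np.outer(dz, dx)                # cell face areas [m^2]
    return (v_normal * area).sum() / 1e6   # m^3/s -> Sv
```

For heat or freshwater transports the same sum is taken over velocity times a tracer anomaly relative to a reference value, which is where consistency with each model's native discretization becomes critical.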
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Can the upcoming Ice Cloud Imager (ICI) provide long-term insights into the vertical distribution of atmospheric ice mass?

Authors: Eleanor May, Patrick Eriksson
Affiliations: Chalmers University Of Technology
Ice clouds play an important role in Earth's climate system, yet they remain poorly understood. The Ice Cloud Imager (ICI) is designed to address this challenge. It will be hosted on the MetOp-SG B satellite as part of the Second Generation EUMETSAT Polar System (EPS-SG). ICI is a passive microwave sensor that will measure at frequencies from 183 GHz to 664 GHz. This includes novel sub-millimetre channels (325 GHz and above), which are particularly suited to ice hydrometeor observations.  While EUMETSAT's official ICI level-2 product will provide column-integrated variables, such as ice water path (IWP), there are currently no existing products offering vertical information on cloud ice retrieved from passive microwave observations. However, the first-of-their-kind, global observations that ICI will provide offer a unique opportunity to explore the possibility of developing such a dataset. In this study, we ask: can vertical ice mass profiles be retrieved from ICI observations? Such a dataset could serve as a complementary data source to those provided by active sensors. Furthermore, considering ICI's relatively long expected lifetime (~22 years), the data would provide significant benefits if used to constrain and validate climate models over long time periods. In the absence of real ICI observations, high-quality 'all-sky' simulations are needed to train the retrieval model. Rigorous radiative transfer calculations are performed using the Atmospheric Radiative Transfer Simulator (ARTS). Simulations are performed within three-dimensional atmospheric states to fully capture cloud structure variability. A range of ice hydrometeor sizes are represented, with each hydrometeor model including single scattering data and empirically-based assumptions on size and shape. Polarisation effects are captured through the approximation of azimuthal particle orientation. 
The resulting retrieval database will be used for operational level-2 retrievals for ICI performed at EUMETSAT. In this study, we train a probabilistic machine learning model using the simulated database. By using the same retrieval database that will be used for the official level-2 product, we gain insight into the feasibility of a vertical profile level-2 ICI product. We demonstrate that it is possible to retrieve ice water content (IWC) and vertical profiles of the mean mass diameter (Dm) of ice particles from ICI observations. Comparisons of individual swaths show good agreement between retrieved and simulated profiles. The resolution of IWC retrievals is estimated to be approximately 2.5 km, and the highest retrieval accuracy is achieved at altitudes between 3 and 14 km. Finally, our retrievals of IWC are shown to be statistically consistent with the radar/lidar-based product DARDAR. Equipped with state-of-the-art observation simulations, we are able to explore the capability of ICI prior to launch. If we fully exploit ICI's observations once they become available, we will possess a reliable and long-term source of vertical information on ice mass. In turn, this will help us advance our understanding of ice clouds and improve current climate models.
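The abstract does not spell out the loss behind the probabilistic model, but a common choice for retrievals of this kind is quantile regression: training one output per quantile with the pinball loss yields a distribution rather than a point estimate. A minimal, generic sketch (the function name is illustrative):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: an asymmetric absolute error whose minimiser,
    over a constant prediction, is the tau-quantile of y_true."""
    err = y_true - y_pred
    return np.mean(np.maximum(tau * err, (tau - 1.0) * err))
```

At tau = 0.5 the loss reduces to half the mean absolute error, so the best constant prediction is the median; tau = 0.9 penalises under-prediction nine times more than over-prediction, pushing the output toward the upper tail.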
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: A Synergistic Description of Upper Tropospheric Cloud Systems and Diabatic Heating: Towards Enhanced Process Understanding

Authors: Claudia Stubenrauch, Xiaoting Chen, Giulio Mandorli
Affiliations: CNRS IPSL Laboratoire de Météorologie Dynamique
Upper tropospheric clouds are most abundant in the tropics and often form as cirrus anvils from convective outflow, building mesoscale convective systems (MCSs). While latent heating is released into the atmosphere by the precipitating parts of these MCSs, the long-lasting anvils play a crucial role in modulating the Earth’s energy budget and heat transport. Convective organization may change the relationship between latent and radiative heating within and around the MCSs. We present a coherent long-term dataset which describes tropical upper tropospheric (UT) cloud systems for process and climate studies. In order to also investigate the cirrus surrounding these anvils, we used CIRS (Clouds from IR Sounders) data, retrieved from AIRS (Atmospheric InfraRed Sounder) and IASI (Infrared Atmospheric Sounding Interferometer) measurements, together with atmospheric and surface properties from the meteorological ERA reanalyses as input to artificial neural network (ANN) models to simulate the cloud vertical structure and radiative heating rates derived from CloudSat radar – CALIPSO lidar measurements, available only along narrow nadir tracks. In this way, we could expand this sparse sampling in space and in time. Furthermore, a rain rate classification, with an accuracy of about 70%, allows us to build objects of strong precipitation to identify convective organization. This dataset is now available at https://gewex-utcc-proes.aeris-data.fr/data/. We could demonstrate that this rain intensity classification is more efficient than cold brightness temperatures at detecting large latent heating, the latter derived from radar measurements of the Tropical Rainfall Measuring Mission (TRMM). While TRMM provides a diurnal sampling, the spatial coverage within a time window of one hour is only about 7%. Therefore, we also expanded these latent heating profiles over the whole tropics, using ANN regression.
Over ocean, the zonal averages of vertically integrated latent heating (LP) align well with those from the full diurnal sampling of TRMM. In combination with a cloud system analysis, we found that deeper convection leads to larger heavy-rain areas, with a slightly smaller thick-anvil emissivity. Convective organization enhances the mean atmospheric cloud radiative effect (ACRE) of the MCSs, in particular at small rain intensity. The projection of different MCS properties in the LP-ACRE plane can be further used for a process-oriented evaluation of parameterizations in climate models.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Global trends in marine ecological indicators – Understanding the ocean ecosystem’s response to climate forcing using remote sensing

Authors: Mayra Rodriguez Bennadji, Dr Gemma Kulk, Dr Shubha Sathyendranath
Affiliations: Plymouth Marine Laboratory, National Centre for Earth Observation
Monthly records of rising air and sea surface temperatures underscore the accelerating impacts of global warming. Over 90% of the excess heat generated by human activities is stored in the oceans, with significant thermal changes occurring in the sunlit layer. These changes play a critical role in air-sea interactions, driving climate feedback mechanisms and profoundly affecting marine life, which is sensitive to temperature variations. Rising sea surface temperature (SST) is fuelling unprecedented marine heatwaves, with potentially devastating consequences for marine ecosystems. Phytoplankton, the microscopic plants that form the foundation of the marine food web, play a key role in carbon sequestration and contribute roughly half of Earth's net primary production. They regulate essential biogeochemical processes, including the transfer of carbon to the deep ocean, with marine primary production in the surface ocean accounting for approximately 50 gigatons of carbon annually. Chlorophyll-a (Chl-a), a pigment in phytoplankton, serves as a proxy for phytoplankton biomass and is critical for assessing the optical, biological, and biogeochemical properties of oceans. Monitoring Chl-a variability at global and regional scales is essential to understanding climate-driven changes in marine ecosystems. However, identifying long-term trends remains challenging due to spatial and temporal limitations of in situ data. Satellite ocean-colour observations over nearly three decades provide unprecedented insights into phytoplankton dynamics at the global scale, revealing complex long-term patterns, yet key questions about trends in phytoplankton biomass and species composition remain unresolved. This study addresses these gaps by analysing long-term trends in satellite-derived Chl-a and SST data across global, regional, and fine spatial scales. 
Using statistical and trend analyses, we investigate how elevated SSTs and their variability influence Chl-a concentrations, focusing on ecological provinces defined by Longhurst and per-pixel analyses at 4 and 9 km resolution. Preliminary findings show that Chl-a concentrations from 2019 to 2024 remain within two standard deviations of the 1998–2018 mean, though 2024 values are (so far) below the long-term mean. Linear trend analysis indicates an intensification of SST-related impacts in recent years. Most ecological provinces exhibit a significant negative correlation between SST and Chl-a, while polar and certain coastal regions show a positive correlation. This reflects the temperature dependence of photosynthetic processes, where phytoplankton growth rates increase with warming at lower temperatures, but decline under thermal stress as temperatures exceed optimal thresholds. Future research will incorporate double integration, resilience, stability, and tipping point analyses to better understand these dynamics. The results highlight the intricate interactions between physical forcing and phytoplankton dynamics, offering critical insights into the resilience and vulnerability of ocean ecosystems in a warming world.
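The per-pixel core of such an analysis is conceptually simple. The sketch below (illustrative only, with hypothetical function names and a time × lat × lon array layout) computes a least-squares linear trend and a Pearson correlation along the time axis, the two quantities discussed above.

```python
import numpy as np

def pixel_trend(stack, t):
    """Least-squares slope per unit time at every pixel of a (time, y, x) stack."""
    tc = t - t.mean()
    anom = stack - stack.mean(axis=0)
    # sum over time of tc * anomaly, divided by sum of tc^2: the OLS slope
    return np.tensordot(tc, anom, axes=(0, 0)) / (tc**2).sum()

def pixel_correlation(a, b):
    """Pearson correlation along the time axis between two (time, y, x) stacks."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a**2).sum(axis=0) * (b**2).sum(axis=0))
```

Applied to co-located SST and Chl-a stacks, `pixel_correlation` would map the sign pattern described above (negative in most provinces, positive in polar and some coastal regions), while `pixel_trend` gives the per-pixel rate of change.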
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Navigating the Jungle of CMIP Data as a First-Time User: Key Challenges and Future Directions

Authors: Susanna Winkelbauer, Feba Francis, Julia K. Green, Stuart Jenkins, Stella Jes Varghese, Sian Kou-Giesbrecht, Christine Leclerc, Gaurav Madan, Kelvin Ng, James O. Pope, Abhnil Prasad, Indrani Roy, Serena Schroeter, Lina Teckentrup, Alexander J. Winkler
Affiliations: University Of Vienna, Department of Meteorology and Geophysics, Université catholique de Louvain, Earth and Life Institute, Louvain-la-Neuve, Belgium, Department of Environmental Science, University of Arizona, Tucson, AZ, USA, Atmospheric, Oceanic and Planetary Physics, Department of Physics, University of Oxford, Oxford, United Kingdom, Centre for Earth, Ocean and Atmospheric Sciences, School of Physics, University of Hyderabad, Hyderabad, India, Department of Earth and Environmental Sciences, Dalhousie University, Halifax, Nova Scotia, Canada, Department of Geography, Simon Fraser University, Burnaby, British Columbia, Canada, National Centre for Atmospheric Science, University of Reading, Reading, United Kingdom, School of Geography, Earth and Environmental Sciences, University of Birmingham, Birmingham, United Kingdom, UK Met Office, Exeter, United Kingdom, Climate Change Research Centre, School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, Australia, University College London (UCL), London, UK, CSIRO Environment, Hobart, Tasmania, Australia, Barcelona Supercomputing Center (BSC), Barcelona, Spain, Max Planck Institute for Biogeochemistry, Jena, Germany
Output generated by the different phases of the Coupled Model Intercomparison Project (CMIP) has underpinned countless scientific projects and serves as the foundation of the United Nations climate change reports. While CMIP was initially driven largely by scientific curiosity in the broader climate modeling community, CMIP output has also become a crucial source of data for disciplines more tangentially related to physical climate science, such as the economic modeling community. The upcoming CMIP phase 7 is expected to produce the largest amount of CMIP-related data to date. However, with an increasing number of modeling systems, represented realms, model complexity, variable names, experiments, and different grid types, the initial exposure to CMIP output has undoubtedly become an overwhelming experience for first-time users. For this presentation, we would like to start a conversation with users who are in, or have recent experience of being in, the early stages of employing CMIP outputs for their research, and together identify:
1) Key barriers and challenges experienced when first using CMIP data
2) Additional documentation/tools needed to facilitate the use of CMIP data
3) Key pieces of advice for new CMIP users
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: The potential of LUCAS for quality control of Cop4ALL-DE landcover

Authors: Thorsten Dahms, Nicole Habersack, Sebastian Claus, Patrick Merita, Carola Stolle, Dr. Michael Hovenbitzer
Affiliations: BKG (Federal Agency for Cartography and Geodesy)
The Cop4ALL-DE project (Copernicus for All – Germany) aims to promote the use of Copernicus data and services in Germany and demonstrate their potential in various application domains. A central component of the project is the provision of high-resolution land cover and land use information, which serves as a foundation for numerous applications in environmental management, spatial planning, and policymaking. To ensure the quality and reliability of these data, comprehensive validation is essential. As part of Cop4ALL-DE, the Federal Agency for Cartography and Geodesy (BKG) has undertaken the validation of the land cover datasets. This involves systematically comparing the land cover maps generated within the project with reference data to assess their accuracy and reliability. For this purpose, the LUCAS (Land Use/Cover Area Frame Survey) data from the European Union are utilized. The LUCAS database provides detailed, in-situ information on land cover and land use and serves as a harmonized, Europe-wide reference dataset. Its high precision and representative coverage make it an ideal basis for the validation of remote sensing products. In the validation process, the LUCAS points are systematically compared with the corresponding pixels in the project’s land cover datasets. Statistical metrics such as overall accuracy and class-specific accuracy are calculated to evaluate the agreement between the datasets. The results of this analysis provide not only an assessment of the quality of the generated land cover maps but also valuable insights for improving classification methods and data processing algorithms. The validation of Cop4ALL-DE land cover data by BKG using LUCAS points represents a critical step in ensuring data quality. This process enhances the credibility of the products and their practical applicability across diverse domains. 
Consequently, the project makes a sustainable contribution to the integration of Copernicus data into decision-making processes and applications in Germany.
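The statistical comparison described above reduces, in essence, to a confusion matrix between LUCAS reference points and the corresponding map pixels. A minimal sketch of the metrics involved, using a made-up 3×3 matrix (all counts and classes are hypothetical, not Cop4ALL-DE results):

```python
import numpy as np

# Hypothetical confusion matrix: rows = LUCAS reference classes,
# columns = mapped land cover classes (counts of validation points).
cm = np.array([
    [50,  3,  2],   # reference class 0
    [ 4, 40,  1],   # reference class 1
    [ 1,  2, 30],   # reference class 2
])

# Overall accuracy: correctly classified points over all points.
overall_accuracy = np.trace(cm) / cm.sum()

# Producer's accuracy per class: correct points divided by all
# reference points of that class (omission-error view).
producers_accuracy = np.diag(cm) / cm.sum(axis=1)

# User's accuracy per class: correct points divided by all points
# mapped to that class (commission-error view).
users_accuracy = np.diag(cm) / cm.sum(axis=0)

print(round(overall_accuracy, 3))
```

The same per-class metrics are what "class-specific accuracy" usually refers to in validation reports of this kind.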
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Atmospheric modes of variability as a driver for European drought conditions

Authors: Tobias Holmgaard
Affiliations: DMI
In a changing climate, it is important to understand drought occurrence in order to adapt to the impacts of droughts. For this purpose, better long-term weather forecasting of droughts is highly desired, which calls for a better understanding of the mechanisms behind drought occurrence and of whether previous droughts and their atmospheric drivers can be reconstructed. Here we focus on analysing the relationship between drought conditions and atmospheric modes of variability in the North Atlantic region. The Standardized Precipitation Index (SPI) is used as a drought index on timescales of 1, 3, 6, 12 and 24 months. Data on the Fraction of Absorbed Photosynthetically Active Radiation Anomaly (FAPARA) from the MODIS sensor on the Terra satellite have been used to verify the SPI as a drought index in Europe. SPI-3 is best for detecting vegetation-impact drought south of 55°N, outside high mountain ranges and the Iberian Peninsula. In the cold regions north of 55°N and at high elevations, anticorrelations with FAPARA are found rather than correlations. SPI-6 and SPI-12 have the best correlations on the Iberian Peninsula. An Empirical Orthogonal Function (EOF) analysis has been performed, finding the four most prominent modes of variability for the extended winter season (NDJFM), the extended summer season (MJJAS), all months (annual modes), and spring (MAM) at MSLP and at the 500 hPa level. The summer modes are shifted northwards compared to the winter modes and explain less variance. The PCs of the EOF modes have been correlated with the SPI values: SPI-1 NDJFM has the strongest correlations with the North Atlantic Oscillation (NAO) and the Scandinavian pattern. SPI-1 MJJAS is not strongly affected by the NAO, but strongly by the Scandinavian pattern (EOF3) and the East Atlantic/West Russia pattern (EOF4); only the Scandinavian pattern gives robust correlations for SPI-3 MJJAS.
SPI-6 MJJAS in central Europe is strongly affected by the spring EOF2 at the 500 hPa level. SPI-12 MJJAS is largely controlled by the extended winter season (NDJFM) NAO (EOF1) for Scandinavia and the Mediterranean. The NDJFM Scandinavian pattern (EOF3) is strongly related to MJJAS SPI-12 in northwestern Europe. These results have been tested for spatial autocorrelation (using Moran’s I); all results are globally highly clustered and significant at the 99% level, showing that large-scale circulation is to a large extent a driver of drought conditions. These results can be used when linking drought conditions from climate proxies such as pollen and tree rings to atmospheric modes of variability. Long-term forecasting and climate modelling of droughts will benefit from these results.
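The EOF/PC workflow described above can be sketched in a few lines: remove the climatology, take the SVD of the anomaly matrix, and correlate the leading principal components with a drought index. This is a generic illustration with synthetic data; the array sizes and the SPI series are placeholders, not the study's actual fields:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly MSLP anomaly field: time x grid points
# (a stand-in for the seasonal anomaly fields used in the study).
n_time, n_grid = 120, 200
field = rng.standard_normal((n_time, n_grid))
anom = field - field.mean(axis=0)          # remove the time mean

# EOF analysis via SVD: rows of vt are the spatial patterns (EOFs),
# u * s gives the principal-component (PC) time series.
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u * s                                 # PC time series
explained_var = s**2 / np.sum(s**2)         # variance fraction per mode

# Correlate the leading PC with a (here synthetic) SPI series,
# as is done mode-by-mode and timescale-by-timescale in the study.
spi = rng.standard_normal(n_time)
r = np.corrcoef(pcs[:, 0], spi)[0, 1]

print(explained_var[:4].sum())
```

In practice the anomaly field would also be area-weighted (e.g. by the square root of the cosine of latitude) before the decomposition; that step is omitted here for brevity.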
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Earth’s energy imbalance more than doubled in recent decades

Authors: Thorsten Mauritsen
Affiliations: Stockholm University
Global warming is caused by anthropogenic greenhouse gas emissions, which disrupt the delicate balance between incoming sunlight and reflected and emitted radiation from the Earth. This imbalance causes energy to accumulate in the atmosphere, oceans and land, and the cryosphere to melt, leading to rising temperatures, rising sea levels and more extreme weather around the globe. Despite the fundamental role of the energy imbalance in the climate system, which humanity has known about for more than two centuries, our ability to observe it is rapidly diminishing as NASA's successful CERES-equipped satellites are being decommissioned. Concerningly, the observed energy imbalance is rising much faster than expected, reaching 1.8 W m-2 in 2023 after more than doubling within just two decades. 2023 was also the first year to exceed 1.5 degrees of warming above pre-industrial levels. The strong upward trend in the imbalance is difficult to reconcile with climate models. Even when taking into account the increase in anthropogenic radiative forcing and the associated climate response, state-of-the-art global climate models can only barely reproduce the rate of change up to 2020 within the observational uncertainty. Since then the trend has strengthened further. A single Libera follow-on instrument is planned for launch by NASA in 2027, replacing a set of four reliable CERES scanners. This means that for a period of 5-10 years we will have a single point of failure in the observing system of Earth's energy budget, a position that could lead to irreparable damage to the observational record that is so important for climate monitoring. An ESA initiative, the Earth Climate Observatory (ECO), using an innovative setup of radiometers, is in development, as is a NASA initiative involving measuring the integral radiation pressure using spherical black satellites. Neither of these is likely to fly before the second half of the 2030s.
As a community we must work to understand and quantify the underlying causes of the observed changes in the energy imbalance, in particular as questions mount regarding the combined record imbalance and temperatures of 2023. And, together with funding agencies and policy makers, we must strive to secure a robust and reliable capability to observe the energy imbalance – perhaps the most fundamental quantity in the climate system.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Advances in Radiative Transfer and Assimilation of All-Weather Microwave and Radar Observations in NWP Models

Authors: Isaac Moradi, Dr. Arlindo da Silva, Dr. Satya Kalluri, Dr. Ronald Gelaro, Dr. Yanqiu Zhu, Dr. Benjamin Johnson, Dr. Benjamin Ruston
Affiliations: University of Maryland, NASA Global Modelling and Assimilation Office, NOAA's Office of Low Earth Orbit Observations, Joint Center for Satellite Data Assimilation, UCAR
Microwave and radar observations are crucial for enhancing the representation of clouds and precipitation in Numerical Weather Prediction (NWP) models. Passive microwave instruments provide valuable insights into the spatial distribution of precipitation and clouds, while radar systems offer essential information on the vertical structure of clouds, precipitation intensity, and hydrometeor properties. Together, these measurements significantly improve model accuracy, particularly in the lower troposphere, where cloud processes and precipitation most influence weather forecasting. Integrating microwave and radar data into NWP models helps refine model states, leading to more precise forecasts of high-impact weather events. Radiative transfer models play a key role in assimilating satellite observations into NWP models and in retrieving geophysical products from satellite measurements. The Community Radiative Transfer Model (CRTM) is widely used for various applications requiring radiative transfer (RT) calculations. CRTM utilizes bulk optical properties of hydrometeors to perform all-sky RT calculations. The first part of this presentation discusses the implementation and validation of a Discrete Dipole Approximation (DDA) cloud scattering database within CRTM. The DDA technique effectively simulates the optical properties of non-spherical hydrometeors in the microwave spectrum. To generate the required mass scattering parameters for CRTM, we incorporate size distributions that depend on water content and single-scattering properties. Results are evaluated using a collocated dataset, combining short-term forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System with satellite microwave data. Our findings show that the DDA-based lookup tables significantly reduce discrepancies between simulated and observed values compared to traditional Mie scattering tables. 
The second part of the presentation focuses on the challenges of assimilating active spaceborne radar measurements into data assimilation systems. Incorporating radar data into NWP models requires fast forward radiative transfer models and robust error modeling, which have proven challenging. We present the evolution, systematic evaluation, and sensitivity analysis of an innovative forward radar model, seamlessly integrated into the CRTM framework. This model, based on hydrometeor scattering properties derived from DDA, computes reflectivity and attenuated reflectivity for various radar instruments and zenith angles using the appropriate instrument-specific coefficients. Evaluation using measurements from CloudSat CPR and the Global Precipitation Measurement (GPM) DPR reveals strong agreement between model simulations and observations, contingent on the consistency between hydrometeor input profiles and measured reflectivity. Our efforts are currently focused on assimilating microwave and radar observations within the Joint Effort for Data assimilation Integration (JEDI) framework using enhanced radiative transfer capabilities. This includes the assimilation of data from spaceborne radar instruments such as GPM DPR, as well as microwave instruments like ATMS and TROPICS. These efforts are in preparation for the assimilation of EarthCare CPR observations and future data from the NASA Atmospheric Observing System (AOS) into NWP models.
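The lookup-table idea behind the bulk scattering approach (mapping hydrometeor water content to pre-computed DDA optical properties, then accumulating optical depth layer by layer) can be illustrated with a toy example. Everything here is invented for illustration: the table values, grid and function do not reflect the real CRTM/DDA database format or actual snow optical properties.

```python
import numpy as np

# Illustrative lookup table: bulk extinction coefficient of snow at one
# microwave frequency, tabulated against ice water content (IWC).
iwc_grid = np.array([0.01, 0.05, 0.1, 0.5, 1.0])        # g m^-3
ext_grid = np.array([0.002, 0.012, 0.026, 0.15, 0.33])  # km^-1 (made up)

def bulk_extinction(iwc):
    """Log-linear interpolation of extinction versus water content,
    clamped to the table range (np.interp clamps at the endpoints)."""
    return np.interp(np.log(iwc), np.log(iwc_grid), ext_grid)

# A model IWC profile can then be mapped layer by layer to an optical
# depth used in the radiative transfer / reflectivity calculation.
iwc_profile = np.array([0.02, 0.08, 0.3])   # g m^-3 per layer
dz = 0.5                                    # layer thickness, km
tau = np.sum(bulk_extinction(iwc_profile) * dz)
print(tau)
```

A real scattering database additionally depends on frequency, temperature, particle habit and the assumed size distribution; this sketch keeps only the water-content dimension.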
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: E.05.01 - POSTER - Traceable, Transparent Supply Chain Data for Monitoring: Examples from the Forest Data Partnership and A Call to Accelerate Industry Alignment.

Applying remote sensing monitoring data and models to supply chains requires traceability systems linking products to accurate and complete geographic ground (vector) data. The Forest Data Partnership (Google, UN FAO, Unilever, NASA SERVIR, USAID, and the World Resources Institute (WRI)) has been exploring innovative tools and systems to make supply chain data more accessible (open), well documented and interoperable, so that public and private sector data can be better integrated with remote sensing analysis.

This session will explore how open-source tools, open data exchange, anonymization, Creative Commons licensing and other soft infrastructure can unlock geospatial use cases around transparency, regulation (EUDR and CSDDD), ESG risk, double materiality, and other trends in disclosure and monitoring of supply chains. Examples include applying AI to quality control, ground-truth verification, open data exchange standards, and the land cover validation engine “What is in that plot?” (Whisp), a solution to implement convergence of evidence.

https://www.forestdatapartnership.org/news-events/navigating-data-challenges-and-compliance-for-deforestation-free-supply-chains
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Integrated EO Information for Advancing the MRV of European Forests Entering Nature Markets – EU-FOCIS

Authors: Dr Joanne Nightingale, Mr Matthew Scholes, Ms Nicole Reynolds, Professor Mathias Disney, Dr Cecilia Chavana-Bryant, Ms Wanxin Yang, Professor Jadu Dash, Dr Booker Ogutu, Mr Finn James, Dr Eija Honkavaara, Dr Eero Ahokas, Dr Antero Kukko, Dr Juha Hyyppä, Dr Miguel Noreira, Dr Rui Lopes, Dr Silvia Faria Estrela, Mr Leon DeBell, Dr Frank Martin Seifert
Affiliations: National Physical Laboratory, UCL Geography, University of Southampton, National Land Survey of Finland, CO2OFFSET, Scottish Forestry, European Space Agency, NCEO
Without harnessing the ability of nature to store carbon and help regulate the climate, it will be impossible to meet the goals of the Paris Agreement, and to avoid the worst risks of climate change. Afforestation is one of the few proven Greenhouse Gas Removal methods needed alongside emission reduction schemes to help countries meet international climate commitments. Private sector financing through Voluntary Carbon Markets is crucial for implementation at the scale needed to realise Net Zero targets such as the European Green Deal. The contribution approach to funding carbon sequestration and nature projects, which also inherently generate broader environmental, social, and economic benefits, is gaining traction as a high-impact route for corporate sustainability action. This, coupled with the high-profile scrutiny of tropical forest projects and methodologies, is spurring a strong demand and willingness of European corporates to invest in high-quality interventions closer to home. As a result, European project developers and brokers are eager to build a pipeline of projects that should deliver millions of tonnes of credit inventory over the next 20 years. There is a critical need to measure and monitor this credit inventory to quantify the climate mitigation outcomes being claimed. However, there are still significant uncertainties in current estimates of forest carbon stocks and sequestration potential. Currently, verification processes are infrequent and rely on manual field survey measurements of tree diameter-at-breast height (DBH), tree height, tree numbers and species. From these, species-specific estimates of total volume and then aboveground biomass (AGB) are made from empirical tables. These methods are hard to replicate or scale, are often regionally biased, and have been shown to significantly underestimate AGB. New EO-derived measurements could address these limitations.
Satellite-derived estimates of forest AGB/carbon will be critical for implementing and monitoring nature-based solutions at scale. However, there are currently no operational AGB/carbon data products available across Europe at the fine spatial and high temporal resolution appropriate for continuous monitoring of VCM project inventory. Therefore, there is a distinct and urgent need for a spatially and temporally consistent and operational AGB, carbon and health monitoring product for European forest management and carbon markets. We present an overview and initial results from the ESA-funded project EU-FOCIS (EU – Forest Observations for Carbon Market Integrity using Sentinels), which aims to develop and demonstrate a forest health and carbon information monitoring capability for Europe that can be used by product developers, verification bodies and project investors to quantify and independently check carbon stock/change through time. This monitoring product will combine two EO data streams: 1) spatially and temporally explicit estimates of terrestrial vegetation GPP (gross primary productivity) derived from ESA Copernicus Sentinel-2 MSI, and future CHIME and LSTM expansion satellite missions; and 2) well quantified ground measurements of AGB made with terrestrial laser scanning at individual tree level that will be used to parameterise and test new empirical models of forest AGB. This will facilitate development of a database of coefficients representing the portion of GPP converted to AGB for representative forestry compositions at different ages across Europe. These coefficients, applied to the GPP satellite data, will provide an EU-calibrated estimate of carbon stock, whilst the GPP time series will provide useful information on health and impacts resulting from management regimes (e.g. logging, fires, pests) and natural phenomena (e.g. weather, climate).
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: National Tree Species Mapping Using Multispectral Radar and Optical Time Series Satellite Imagery and In-situ Data

Authors: Ana Potočnik Buhvald, Prof. Krištof Oštir, Matej Račič, Dr. Tatjana Veljanovski, Dr. Liza Stančič, Dr. Andrej Kobler, Dr. Jernej Jevšenak, Peter Pehani, Dr. Aleš Marsetič, Dr. Urška Kanjir, Dr. Mitja Skudnik
Affiliations: University of Ljubljana, Faculty of Civil and Geodetic Engineering, Slovenian Forestry Institute, University of Ljubljana, Biotechnical Faculty,Department of Forestry and Renewable Forest Resources, Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU)
Slovenia, a biodiversity hotspot at the crossroads of four biogeographical regions — the Mediterranean, the Alpine, the Continental and the Pannonian — offers a unique and diverse ecological landscape. The country's rapidly changing terrain and climate create the conditions for diverse vegetation and a wide range of tree species. Slovenia is one of the most forested countries in Europe, with around 1.2 million hectares of forest covering 58.2% of the country's land area. This study investigates the integration of multisensor satellite image time series with forest inventory data for the classification of tree species in Slovenia. Using an annual comparison over three consecutive years, the study assesses the reliability of the model and investigates how satellite imagery can reduce the need for costly field surveys and thus improve forest monitoring and biodiversity assessment. A machine learning algorithm using Sentinel-1 and Sentinel-2 satellite imagery is used for a scalable annual nationwide classification of tree species. The study uses a reference dataset of over 27,000 homogeneous forest stands from in-situ data provided by the Slovenian Forest Service and focuses on 17 predominant tree species represented with over 98% homogeneity. Input features include Sentinel-1 coherence time series, Sentinel-2 spectral bands, key spectral indices (NDVI, IRECI) and combined Sentinel-1/2 data. This study is the first to use coherence data for the classification of tree species at the national level, overcoming the challenges associated with complex calculations and the need for significant computing power. LightGBM was used for tree species classification due to its speed, its flexibility in processing numerous input features and its ability to process complex time series of satellite images of different densities, including raw data with missing values.
This made it particularly suitable compared to other machine learning methods such as Random Forest (RF) and Support Vector Machines (SVM). The margin value, a metric reflecting the confidence of the model in the classification, was also calculated to assess the reliability of species identification per pixel. These margin values provide an innovative means of indirectly assessing species diversity by identifying areas of lower classification confidence, which often indicate ecological complexity or species heterogeneity. The results, based on an analysis over three consecutive years, show that the combination of Sentinel-1 and Sentinel-2 data significantly improves the classification accuracy for minority tree species such as Populus spp. (OA +0.3–0.6) and Quercus spp. (OA +0.1–0.8). The model achieved an overall accuracy of ~0.90 using all Sentinel-2 bands (except B10) and key spectral indices (NDVI, IRECI). The analysis underlines the importance of spectral and temporal features for distinguishing between coniferous and deciduous tree species and provides insights into phenological variations. The high classification accuracy for homogeneous stands demonstrates the ability of the models to support biodiversity monitoring, while the detection of mixed stands remains a challenge. Addressing this limitation will be a focus of future work. It is planned to explore advanced machine learning models and data fusion techniques to improve the detection and classification of mixed stands. The spatial patterns derived from the classification maps also provide a tool to assess biodiversity hotspots and enable targeted conservation and management measures. The high accuracy of the tree species classification maps allows forest managers and other conservationists to determine the distribution of species on a national scale, providing a reliable basis for planning forestry work in addition to field work and improving current data.
This study highlights the ecological richness of Slovenian forests and demonstrates the potential of satellite-based time series analysis to improve forest management and biodiversity assessment. It contributes to scalable real-time monitoring methods and provides valuable insights for tackling ecological challenges such as climate change, forest health and species conservation. The use of time series analyses opens up possibilities for timely monitoring of forest health and enables a timely response to disturbances such as pest outbreaks, diseases or storm damage. This work was supported by the Slovenian Research Agency as part of the research project J2-3055 (ROVI - Innovative radar and optical satellite image time series fusion and processing for monitoring the natural environment) and the research programmes P6-0079 (Anthropological and Spatial Studies), P2-0406 (Earth observation and geoinformatics) and P4-0107 (Forest Biology, Ecology and Technology).
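The per-pixel margin described above can be sketched briefly. The sketch assumes the margin is the gap between the two highest class probabilities, which is one common definition and may differ from the study's exact formulation; the probabilities below are hypothetical, not LightGBM output:

```python
import numpy as np

def classification_margin(proba):
    """Margin per pixel: probability of the predicted (top) class minus
    the runner-up probability. Low margins flag pixels where the model
    is unsure, often mixed or ecologically heterogeneous stands."""
    ordered = np.sort(proba, axis=1)        # ascending per row
    return ordered[:, -1] - ordered[:, -2]  # top minus second

# Hypothetical per-pixel class probabilities for 3 tree species.
proba = np.array([
    [0.80, 0.15, 0.05],   # confident pixel
    [0.40, 0.35, 0.25],   # ambiguous pixel, likely mixed stand
])
margins = classification_margin(proba)
print(margins)
```

Thresholding such margins yields the low-confidence areas the abstract proposes as an indirect indicator of species heterogeneity.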
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Intergovernmental Panel on Climate Change (IPCC) Tier 1 forest biomass estimates from Earth Observation

Authors: Neha Hunka, Laura Duncanson, John Armston, Ralph Dubayah, Sean Healey, Maurizio Santoro, Paul May, Arnan Araza, Clément Bourgoin, Paul Montesano, Dr. Christopher Neigh, Hedley Grantham, Peter Potapov, Svetlana Turubanova, Alexandra Tyukavina, Jessica Richter, Nancy Harris, Mikhail Urbazaev, Adrián Pascual, Daniela Requena Suarez, Martin Herold, Benjamin Poulter, Sylvia Wilson, Giacomo Grassi, Sandro Federici, Maria Sanz, Joana
Affiliations: University of Maryland
Introduction/Aim: Aboveground dry woody Biomass Density (AGBD) maps produced with Earth Observation (EO) data have a large potential to periodically provide a transparent, consistent and replicable picture of the state of the world’s forests. They can comply with the standards mandated by the UNFCCC, but are yet to be formally adopted in international policy guidance. Our research provides the first compilation of AGBD estimates in the format of Intergovernmental Panel on Climate Change (IPCC) Tier 1 values for natural forests [1], sourced from NASA’s GEDI and ICESat-2 missions and ESA's Climate Change Initiative (CCI). It also provides the underlying classification of global forests by ecozones, continents and status (primary, young (≤20 years) and old secondary (>20 years)). Methods: Our approach is based on a Boolean compilation of various EO-derived datasets, which leverages the strengths of layers of satellite-derived forest tree cover, height, age and land use classifications. In summary, first, layers that identify a potential forest status/condition class (e.g. primary forests) are merged, and second, layers that identify sources of disagreement (e.g. presence of plantations or deforestation detected in the delineated primary forests) are used to remove areas of potential commission errors. The classification is run on the collaborative open-science cloud-computing system, the ESA-NASA Multi-mission Analysis and Algorithm Platform (MAAP, https://scimaap.net/). MAAP hosts relevant data, processing algorithms, and computing capabilities in a common cloud environment, linked to public GitLab/GitHub repositories, ensuring the transparency of our methods. Upon classification, mean estimates of AGBD (and their associated errors) are sourced from GEDI’s hybrid inference estimators [2], high-northern-latitude estimates from ICESat-2 [3], and the ESA CCI Biomass map of 2020 [4].
Results: Across the world's natural forests (excluding planted forests), approximately 1678 Mha of primary forests, 1265 Mha of old secondary forests and 316 Mha of young secondary forests were identified. The trends in EO-derived AGBD estimates across these classes are captured well by the EO-datasets; like the IPCC values, the GEDI/ICESat-2 dataset estimates that primary Asian tropical rainforests and mountain systems harbour some of the highest AGBD globally, while the CCI dataset estimates that primary African rainforests harbour the highest AGBD. Model results show that there isn't sufficient evidence to indicate that GEDI/ICESat-2 estimates exhibit significant systematic differences to the IPCC values over all forest classes globally, although systematic differences are observed for the CCI estimates. Conclusion: The results of this article are a pioneering international effort from CEOS, presenting AGBD maps in a format practical for policy and adoptable, upon review, in the IPCC Emissions Factors Database.
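The two-step Boolean compilation described in the Methods (merge the layers that identify a class, then remove pixels flagged by disagreement layers) can be sketched with toy raster masks. The layer names and values below are hypothetical stand-ins, not the actual datasets used on MAAP:

```python
import numpy as np

# Hypothetical per-pixel masks (True where the layer flags the pixel).
intact_cover  = np.array([True,  True,  True,  False])  # candidate primary forest
tall_canopy   = np.array([True,  False, True,  False])  # supporting height layer
plantation    = np.array([False, False, True,  False])  # disagreement layer
deforestation = np.array([False, False, False, True])   # disagreement layer

# Step 1: merge layers that identify the status/condition class.
candidate_primary = intact_cover | tall_canopy

# Step 2: remove pixels where disagreement layers indicate likely
# commission errors (e.g. plantations inside delineated primary forest).
primary_forest = candidate_primary & ~(plantation | deforestation)

print(primary_forest)
```

At global scale the same logic is applied to full raster tiles rather than 1-D toy arrays, but the Boolean algebra is identical.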
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Enhancing Forest Biomass Monitoring for Sustainable Bioenergy Production using Multi-Source EO Data

Authors: Felix Bernsteiner, Christian Briese, Senmao Cao, Janik Deutscher, Gerald Irsiegler, Koimé Kouacou, Anh Nguyen, Gerhard Pelzmann, Martin Puhm, Christoph Reimer, Stefan Reimond, Sebastian Vogler, Andreas Wimmer
Affiliations: JOANNEUM RESEARCH Forschungsgesellschaft mbH, EODC Earth Observation Data Centre for Water Resources Monitoring GmbH, Beetle ForTech, Landwirtschaftskammer Steiermark
Wood biomass is still the most important bioenergy source in the EU and plays a crucial role in the green energy transition. However, balancing the need for bioenergy production with the conservation of forests as vital carbon storage ecosystems presents a key challenge. EU legislation addressing LULUCF, climate policy, and the bioenergy sector, as well as the EU Forest Strategy for 2030, foresee the use of damaged forest biomass (e.g., from windthrow or insect infestation) and wood residues as primary sources of biomass for energy production, while avoiding the use of wood biomass from intact and healthy forests. Addressing this challenge requires accurate and continuous monitoring of forest status and forest biomass resources to ensure sustainable and climate-neutral bioenergy management. In the proposed “Regulation on a monitoring framework for resilient European forests”, national forest authorities are requested to provide data on available “forest biomass for bioenergy” (Article 5, “Forest data collection framework”, 3n). With the GreenForCE project, we address these new requirements by combining national Austrian high-accuracy airborne LiDAR (ALS) data with the high temporal resolution of satellite Earth Observation (EO) data. This integration ensures improved above-ground biomass (AGB) estimation and the timely detection and classification of forest disturbances. A web-based framework combines these results with infrastructure data, such as the locations of sawmills and bioenergy plants, to provide actionable insights. These tools are designed to support decision-making processes, enabling sustainable resource management and strategic planning for the green energy transition. We will also further enhance existing ESA Green Transition Information Factory (GTIF) forest products and bioenergy solutions for Austria. The presented remote sensing solution was originally drafted at the Space4Energy Hackathon, co-organized by ESA in 2022.
An enhanced multi-source approach integrates satellite EO data, ALS data, forest inventory data, and forest growth models to provide accurate AGB estimates with a significantly improved accuracy over existing satellite-based products. Time-series analysis of Sentinel-2 imagery facilitates the detection of forest disturbances, while forest attributes such as tree species, stand age, and height are combined with AGB data to accurately quantify timber volume and potential wood residues. We use empirical data on tree species specific harvesting and processing wastes (e.g. in sawmills and during pulping) to model and map potential primary and secondary wood processing residues that could be used locally for bioenergy production. Processing workflows are deployed on cloud-based infrastructure to enable scalability. GIS tools aggregate regional statistics, analyse wood biomass potential at administrative levels (e.g. NUTS regions), and map infrastructure such as sawmills and bioenergy plants to support spatial planning and resource optimisation. The resulting maps of available AGB, detected newly disturbed forest areas for bioenergy use, and regional and annual statistics are currently embedded in a standalone web application to identify optimal locations for small-scale biomass plants, based on spatial proximity to biomass sources and existing infrastructure. It is planned to embed the results into the existing ESA GTIF Austria framework since part of the utilised data, such as forest disturbance information, is already included here. This work advances EO-based applications for sustainable forest biomass management by improving the accuracy and usability of AGB and disturbance data. It addresses the critical need for sustainable bioenergy solutions in line with EU climate and forest protection policies. 
The derived tools and outputs will support key aspects of policy-making, resource management, and regional economic planning, ensuring that bioenergy production contributes to climate neutrality without compromising forest ecosystems.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: A.08.09 - POSTER - Marine and Coastal Carbon

By absorbing 25% of the anthropogenic carbon dioxide in the atmosphere, oceans are a major sink of carbon and a key component of the Global Carbon Cycle. Ocean carbon uptake/release spatial distributions and trends are inherently triggered by and impact a vast set of biological, physical and chemical processes.
This session welcomes contributions on the different carbon pools and processes in the marine and coastal ocean including:
- both the inorganic (including Ocean Acidification) and organic carbon domains, demonstrating how remote sensing, together with in-situ data and numerical modelling, can improve the characterization and understanding of the different pools of carbon in the ocean (Dissolved Inorganic Carbon - DIC, Dissolved Organic Carbon - DOC, Particulate Inorganic Carbon - PIC, Particulate Organic Carbon - POC).
- the key processes that determine the fluxes of carbon among these pools, such as Primary Production and Export, or between interfaces, such as Air-Sea or Land-Sea exchanges.
- Coastal blue carbon ecosystems (e.g. mangroves, seagrasses, salt marshes) and the role of remote sensing to (1) monitor those ecosystems in terms of e.g. ecosystem extent, carbon stock, and carbon sequestration potential; (2) monitor and predict the impact of external drivers on blue carbon ecosystems and their carbon stocks and sequestration potentials; and (3) quantify the added value of climate change mitigation and adaptation strategies (e.g. conservation, restoration, creation).
This session is also open to studies and procedures addressing how EO-derived marine and coastal carbon products can support Global Carbon Budget modelling efforts, and also contribute to informing evidence-based policies (IPCC and Paris Agreement).

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Assimilation of satellite-derived surface carbon into ocean biogeochemical models to improve the ocean's carbon budget

Authors: Valentina Sicardi, Andrea Orihuela, Joan Llort, Martí Galí, Vladimir Lapin, Gemma Kulk, Lekshmi Krishnakumary, Sholovan Roy, Yohan Rupich-Robert, Etienne Tourigny, Raffaele Bernardello
Affiliations: Barcelona Supercomputing Center, Institut de Ciències del Mar (ICM-CSIC), Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre for Earth Observation, Plymouth Marine Laboratory, University of Reading
Ocean carbon budget estimates, whether derived from observations or from models, are still characterized by large uncertainties that reflect our incomplete understanding of the ocean carbon cycle. We aim to improve this understanding using newly developed remote sensing products and insights from science case studies of the ESA-funded project ‘Satellite-based observations of Carbon in the Ocean: Pools, fluxes and Exchanges’ (SCOPE). In our approach, we have assimilated an observations-based phytoplankton carbon (Cphy) dataset, produced by ESA’s SCOPE project, into the ocean biogeochemical model PISCESv2. By doing so, we improve the model's representation of key carbon fluxes and stocks, such as primary production, export production, and air-sea gas exchange. Specifically, the assimilation of the latest version of the Cphy dataset is expected to improve the spatial and temporal representation of carbon dynamics, particularly in regions where direct in situ observations are sparse, such as the open ocean and polar regions. To assess the impact of assimilation, we conduct two simulations using the EC-Earth3-CC Earth System Model. The control simulation allows biogeochemistry to evolve freely, driven by physical ocean conditions, while the assimilation simulation (Cphy-Assim) is nudged toward satellite-derived Cphy observations for the period 1959–2021. Both simulations use atmospheric forcing from ERA5 and assimilate temperature and salinity data from ORAS5 and EN4 products. The biogeochemical model PISCESv2, embedded within EC-Earth3-CC, simulates the cycling of carbon and nutrients as well as lower trophic levels, including phytoplankton and zooplankton. The Cphy-Assim reconstruction will not only reduce uncertainties in surface ocean carbon processes, but also lead to downstream improvements in deeper ocean processes, such as organic carbon remineralization and dissolved inorganic carbon (DIC) storage.
Validation is carried out using remote sensing products (e.g., primary production, DIC, air-sea CO₂ flux, also from SCOPE) and in situ and gridded data. This study demonstrates the critical role of satellite-derived observations in improving ocean carbon budgets, and proposes strategies for integrating remote sensing into biogeochemical modeling, enhancing our ability to predict ocean carbon dynamics and their implications for the climate system.
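The nudging (assimilation) step described above can be illustrated with a minimal numpy sketch of Newtonian relaxation of a modelled tracer toward observations. The variable names, time step, and relaxation timescale below are illustrative assumptions, not the actual EC-Earth3-CC/PISCESv2 implementation:

```python
import numpy as np

def nudge_field(c_model, c_obs, dt, tau):
    """Relax a modelled tracer field toward observations (Newtonian nudging).

    c_model : modelled phytoplankton carbon field (any array shape)
    c_obs   : satellite-derived Cphy mapped onto the model grid
    dt      : model time step (s)
    tau     : relaxation timescale (s); larger tau means weaker nudging
    """
    # Explicit Euler update of the nudging term: dC/dt = ... - (C - C_obs) / tau
    return c_model - (dt / tau) * (c_model - c_obs)

# Toy example: a uniform model field nudged toward a patchy "observed" field
c_model = np.full(4, 10.0)                    # mg C m^-3 (arbitrary)
c_obs = np.array([8.0, 12.0, 10.0, 9.0])
for _ in range(100):                          # 100 hourly steps, 5-day timescale
    c_model = nudge_field(c_model, c_obs, dt=3600.0, tau=5 * 86400.0)
```

After repeated steps the model field converges exponentially toward the observations, while a long `tau` leaves room for the model's own dynamics between updates.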

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Changes in Global Mangrove Height and Structure

Authors: Nathan Thomas, Dr Pete Bunting, Dr Marie Lachaise, Dr Manfred Zink, Dr Ake Rosenqvist, Dr Marc Simard, Dr Lola Fatoyinbo, Nesredin Kenzu Abdela, Barbara Schweißhelm, Irena Hajnsek
Affiliations: Edge Hill University, Aberystwyth University, DLR, SoloEO, NASA JPL, NASA GSFC, ETH Zurich
Mangrove forests are unique coastal ecosystems that are capable of sequestering and storing very large amounts of coastal blue carbon. They are among the most carbon-dense ecosystems in the tropics and, despite being traditionally overlooked within productive land initiatives, there has been a recent focus on quantifying their carbon stocks for inclusion in countries’ Nationally Determined Contributions (NDCs) and carbon crediting schemes. To achieve this effectively, gains and losses in mangrove forest structure must be accurately quantified. Mangroves are an important component of a broader interconnected blue carbon system that includes seagrasses, corals and salt marshes. Located at the transition zone between terrestrial and aquatic systems, mangroves intersect the most dynamic regions of our planet and are at the forefront of the impacts of climate change. Changes in mangrove extent and baselines of ecosystem biomass have been comprehensively mapped. However, until now, there has been no means of quantifying changes in mangrove structure at a global scale over a decadal timescale. Without this information, fluxes in coastal blue carbon cannot be adequately quantified. For over a decade, the DLR TanDEM-X mission has been generating high quality Digital Elevation Models of Earth’s surface. The longevity of the mission has now enabled the generation of decadal changes in elevation. The TanDEM-X DEM Change Maps (DCM) product is a global elevation model of changes in DEM height from a baseline TanDEM-X Edited DEM (EDEM) product. Using this new DCM data in combination with EDEM and TanDEM-X DEM baselines, changes in mangrove structure can now be quantified for the first time. We generate a global map of changes in mangrove forest height over a decadal period at 30 m resolution. We quantify country-level and global changes in mangrove structure and reveal hotspots of both a gain and loss of aboveground biomass.
Furthermore, we accurately quantify the change period on a per-pixel basis, allowing the derivation of mangrove growth and structure accumulation rates. This new dataset uses novel Earth Observation datasets to facilitate an opportunity for mangroves to be cemented in coastal blue carbon accounting initiatives. An increase in our knowledge of structural changes in mangrove ecosystems through time will allow the scientific community to understand fluxes in coastal blue carbon in more detail and more accurately estimate the carbon storage potential of reforestation initiatives.
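The per-pixel differencing and rate derivation described above can be sketched in a few lines of numpy. The arrays, thresholds, and time spans below are hypothetical placeholders for illustration, not the TanDEM-X DCM/EDEM product format:

```python
import numpy as np

# Hypothetical 2x2 pixel tiles: baseline-era and later canopy heights (m)
h_baseline = np.array([[12.0, 15.0], [8.0, 20.0]])
h_later = np.array([[14.0, 13.0], [9.5, 5.0]])

# Per-pixel time span between the two acquisitions (years); knowing this
# per pixel is what allows rates, not just differences, to be derived
dt_years = np.array([[10.0, 8.0], [10.0, 9.0]])

dh = h_later - h_baseline        # height change (m); negative values = loss
rate = dh / dt_years             # growth / loss rate (m per year)

gain_mask = dh > 1.0             # candidate growth hotspots (arbitrary cutoff)
loss_mask = dh < -1.0            # candidate disturbance / loss hotspots
```

Aggregating `dh` and `rate` over country polygons would then give the country-level statistics the abstract describes.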

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Is the efficiency of the biological carbon pump in transporting organic carbon changing due to ongoing climate change?

Authors: Mengyu Li, Dr. Marco Bellacicco, Dr. Bruno Buongiorno Nardelli, Dr. Emanuele Organelli
Affiliations: Institute of Marine Science (ISMAR), National Research Council of Italy (CNR)
The biological carbon pump (BCP) is an ensemble of mechanisms that transfer organic carbon, mainly produced by small, plant-like organisms (i.e., phytoplankton) through net primary production, from the surface to the deep ocean. The BCP influences the magnitude of carbon fluxes through three key pathways: particle transfer via sedimentation (i.e., biological gravitational pump), circulation-driven processes (i.e., physical pump), and zooplankton-mediated transport (i.e., migration pump). Among the physical pumps, particle injection makes a significant contribution to the BCP and requires precise characterization and quantification. More importantly, the role of biology in determining the strengths of these processes is far from being characterized, both regionally and worldwide. Under the ESA’s Ocean Carbon initiative, the “Satellite-based observations of Carbon in the Ocean: Pools, fluxes and Exchanges – SCOPE” project will synergistically leverage long-term time-series of Ocean Colour satellite measurements, BioGeoChemical-Argo (BGC-Argo) optical observations and 4D observation-based reconstructions to evaluate changes in the biological carbon pump under ongoing climate change. We primarily focus on the North Atlantic Ocean, developing synergies between multiple observational platforms to address the following scientific goals: (1) Assess the spatio-temporal changes of the physical particle injection pump from short (e.g., intraseasonal) to long-term (e.g., inter-annual) scales, with a focus on the relative strength of the physical processes (e.g., local change in mixed layer depth, vertical advection) that regulate organic carbon fluxes and export; (2) Evaluate how changes in oceanic productivity, phytoplankton community structure (i.e., PFTs), and the rate of particle remineralisation influence organic carbon fluxes.
The link between primary production and the consequent particulate organic carbon fluxes (e-ratio) and particle remineralisation (b-exponent) will be interpreted by integrating satellite-derived products and estimates with co-located in situ observations collected by more than 100 BGC-Argo floats in the North Atlantic Ocean since 2012.
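The e-ratio and b-exponent mentioned above are commonly connected through the classic Martin power-law flux curve. A minimal sketch, with arbitrary illustrative values rather than SCOPE results:

```python
def martin_flux(z, flux_ref, b, z_ref=100.0):
    """Particulate organic carbon flux at depth z (Martin curve):
    F(z) = F(z_ref) * (z / z_ref) ** (-b)
    Larger b means faster remineralisation with depth."""
    return flux_ref * (z / z_ref) ** (-b)

# Arbitrary illustrative numbers (not measured values)
npp = 50.0                      # net primary production, mmol C m^-2 d^-1
export_100m = 10.0              # POC flux at the 100 m reference depth
e_ratio = export_100m / npp     # fraction of NPP exported below 100 m

# Flux surviving to 1000 m for a canonical b-exponent of 0.86
flux_1000m = martin_flux(1000.0, export_100m, b=0.86)
```

Fitting `b` to observed flux profiles (e.g. from BGC-Argo-derived estimates) and pairing it with satellite NPP is one common way to link production, export, and remineralisation.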

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Regional Modification Of Air-Sea CO2 Fluxes Due To The Inclusion Of Quantified Ocean Biological Processes Within Satellite-based Assessments

Authors: Daniel Ford, Gemma Kulk, Dr Shubha Sathyendranath, Mayra Rodriguez, David Moffat, Heather Bouman, Galen McKinley, Amanda Fay, Thea Heimdel, Marion Gehlen, Frederic Chevallier, Jamie Shutler
Affiliations: University Of Exeter, Plymouth Marine Laboratory, University Of Oxford, Columbia University and Lamont-Doherty Earth Observatory, Laboratoire des Sciences du Climat et de l’Environnement
During the United Nations decade of ocean science (2021–2030), the UN and Intergovernmental Oceanographic Commission (IOC) decadal roadmap for Integrated Ocean Carbon Research (IOC-R) highlighted that the role of ocean biology, namely the plankton community, was a key issue to understanding the current state and future evolution of the global ocean CO2 sink. The global ocean sink can be estimated through observation-based approaches that interpolate in situ fugacity of CO2 in seawater (fCO2 (sw)) using a synergy of satellite and reanalysis observations with sophisticated interpolation techniques (e.g. neural networks), as routinely produced for global and regional carbon budgets. The inclusion of quantified biological processes such as net primary production (quantified net drawdown of CO2 by phytoplankton) or net community production (the balance of photosynthesis and respiration in the plankton community), instead of using proxy parameters such as chlorophyll-a, has been shown in recent work to modify the regional air-sea CO2 fluxes within the South Atlantic Ocean. The inclusion of these quantified biological parameters in a global fCO2 (sw) interpolation approach has not previously been attempted. Therefore, improvements in our ability to quantify the global ocean CO2 sink could be obtained through improved representation of ocean biology. Satellite-based datasets of chlorophyll-a and net primary production with global coverage are now available thanks to ESA science projects. In this work, a sensitivity analysis was conducted using three global fCO2 (sw) interpolation approaches (as submitted to the Global Carbon Budget). The three interpolation approaches were run (1) without any biological parameters (i.e. just ocean physical parameters), (2) using chlorophyll-a as a proxy for biological carbon drawdown and (3) using net primary production to quantify carbon drawdown by phytoplankton.
The effect of including the increasingly complex biological contributions was evaluated at the global and regional scale. Both the estimated fCO2 (sw) and resulting air-sea CO2 fluxes, all calculated with a consistent methodology, were investigated. Uncertainties in the fCO2 (sw) and air-sea CO2 fluxes were calculated using a comprehensive uncertainty budget to aid in the evaluations. Initial results indicate for one of the interpolation schemes (the UExP-FNN-U) a reduction in the fCO2 (sw) uncertainties by ~2 μatm (or ~ 10%) when chlorophyll-a or net primary production were included alongside the physical parameters. However, there was no significant improvement between using chlorophyll-a versus net primary production. When using the interpolated fCO2 (sw) to estimate the global ocean CO2 sink, only small differences were apparent, and all were within the calculated uncertainty. On the regional scale, differences in the estimated fCO2 (sw) (>20 μatm) were observed in regions of strong biological activity, when including either chlorophyll-a or net primary production. For example, in the Eastern Boundary upwelling systems, an increase in the fCO2 (sw) was observed, whereas in the South Subtropical Convergence a decrease in fCO2 (sw) was highlighted. Generally, the magnitude of these changes was larger for chlorophyll-a than for net primary production, which could indicate (as expected by theory) that chlorophyll-a (as a biological proxy) overestimates the biological contribution. These observations are consistent with previous regional work, for example in the South Atlantic Ocean. These changes in the fCO2 (sw) are likely to impact regional ocean CO2 uptake but currently appear to have a limited effect on the global ocean CO2 uptake due to compensating changes in the fCO2 (sw) on the global scale.
Nonetheless, the inclusion of quantified biological processes within these observation-based approaches is likely needed to provide consistency with other components of the global carbon cycle.
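The sensitivity experiment — interpolating fCO2 (sw) with and without a biological predictor — can be caricatured with synthetic data, using ordinary least squares as a stand-in for the neural-network schemes. All numbers and relationships below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic "observations": fCO2 (sw) driven by a physical proxy (SST) and a
# biological drawdown term (NPP), plus observational noise
sst = rng.normal(15.0, 5.0, n)      # deg C
npp = rng.normal(30.0, 10.0, n)     # arbitrary productivity units
fco2 = 380.0 + 1.5 * sst - 0.8 * npp + rng.normal(0.0, 2.0, n)  # uatm

def fit_rmse(X, y):
    """Least-squares fit with intercept; return RMSE of the residuals."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return np.sqrt(np.mean(resid ** 2))

rmse_physics = fit_rmse(sst[:, None], fco2)                  # physics only
rmse_with_bio = fit_rmse(np.column_stack([sst, npp]), fco2)  # + NPP predictor
```

In this toy setup the biology-aware fit recovers the noise floor, while the physics-only fit absorbs the unexplained biological drawdown into a larger residual, mirroring the kind of uncertainty reduction the abstract reports.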

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: A global observing system for ocean color validation: harnessing synergies between kinematical models, remote sensing, and in situ data

Authors: Dr. Jacopo Busatto, Dr Guglielmo Lacorata, Dr Federico Falcini, Dr. Gianluca Volpe, Dr. Salvatore Marullo, Dr. Rosalia Santoleri, Dr. Luca Centurioni, Dr. Maria Laura Zoffoli, Dr. Marco Bellacicco
Affiliations: CNR-ISMAR, Scripps Institution of Oceanography
Satellite observations of particulate backscattering (bbp) have greatly enhanced our understanding of ocean biology and biogeochemistry on large scales, serving as proxies for phytoplankton biomass or particulate organic carbon. bbp is essential for estimating organic carbon stocks and fluxes and ocean productivity, which is subsequently incorporated into coupled physical-biogeochemical models. However, the paucity of in situ multi-band bbp data hinders efforts to quantify uncertainties in satellite bbp and its derived products (Brewin et al., 2023). The bbp data gap is exacerbated in subtropical gyres, the Southern Hemisphere, and the Arctic Ocean (Valente et al., 2022). To address these observational deficiencies, Surface Velocity Programme (SVP) drifting buoys have been equipped with bbp sensors, the BO-SVP drifters (Bellacicco et al., 2024). The high sampling frequency, combined with a Lagrangian approach, enables a single drifter to pass over numerous satellite pixels in a single day, thus providing large in situ datasets for bbp validation activities that are not achievable with other in situ platforms. Thanks to their configuration, they also capture variability at satellite sub- and pixel scales, complementing satellite observations. The ESA INSPIRE project builds on these advances by developing a novel Global Observing System (GOS) to validate satellite bbp products, integrating remote sensing, Lagrangian modeling, and in situ data. The main innovations include: (i) advanced Lagrangian simulations using a sub-grid kinematic model applied to ocean currents datasets (e.g., GlobCurrent), calibrated through drifter dispersion properties (e.g. Finite Scale Lyapunov Exponent). This enables accurate reconstruction of sub- and mesoscale structures for reliable predictions of surface particle dispersion. (ii) constraining simulated trajectories with gapped satellite bbp data to assess variability and identify optimal deployment sites for BO-SVP drifters, maximizing match-up opportunities.
The GOS framework has potential applications for validating other ocean color variables (e.g., chlorophyll-a) across ongoing (e.g., Sentinel-3/OLCI, PACE/OCI) and future satellite missions (e.g., ESA CHIME, Sentinel Next-Generation).
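The Finite Scale Lyapunov Exponent used to calibrate the kinematic model measures how fast a drifter pair's separation grows from a scale δ to rδ: λ(δ) = ln(r)/τ, with τ the crossing time. A toy numpy sketch on a synthetic, exponentially separating pair (not real drifter data or the project's actual calibration code):

```python
import numpy as np

def fsle(times, separation, delta0, r=np.sqrt(2)):
    """Finite-Scale Lyapunov Exponent for one drifter pair.

    times      : sample times (days), increasing
    separation : pair separation at each time (km), assumed to start
                 below delta0 and to grow through r * delta0
    delta0     : separation scale of interest (km)
    r          : scale amplification factor
    Returns lambda(delta0) = ln(r) / tau, where tau is the time taken
    for the separation to grow from delta0 to r * delta0.
    """
    t0 = times[np.argmax(separation >= delta0)]       # first crossing of delta0
    t1 = times[np.argmax(separation >= r * delta0)]   # first crossing of r*delta0
    return np.log(r) / (t1 - t0)

# Synthetic pair separating exponentially at 0.1 / day
t = np.linspace(0.0, 60.0, 6001)
sep = 1.0 * np.exp(0.1 * t)
lam = fsle(t, sep, delta0=2.0)   # should recover roughly 0.1 per day
```

In practice λ(δ) is averaged over many pairs and scales; matching the simulated curve to the observed drifter curve is what "calibrated through drifter dispersion properties" refers to.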

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Understanding ground data collection needs for Multi-Resolution Satellite Imagery in Coastal Blue Carbon Monitoring

Authors: Amélie Séchaud, Benoit Beguet, Christophe Proisy, Thibault Catry, Elodie Blanchard, Oscar Serrano, Christine Dupuy, Natacha Volto, Imad El-Jamaoui, Marlow Pellatt, Manon Tranchand-Besset, Miguel A. Mateo, Nicolas Lachaussée, Karen Kohfeld, Marie-Aude Sévin, Timothée Cook, Pierre Coan, Alvise Ca'zorzi, Fanny Noisette, Virginie Lafon, Aurélie Dehouck, Meghana Paranjape
Affiliations: i-Sea, AMAP, IRD, AMAP, IRD, CIRAD, CNRS, INRAE, Univ. Montpellier, ESPACE-DEV, IRD, Univ. Montpellier, Univ. Guyane, Univ. La Réunion, Univ. Antilles, Centro de Estudios Avanzados de Blanes, CSIC, Littoral Environnement et Sociétés (LIENSs), UMR 7266, CNRS-La Rochelle Université, Simon Fraser University (SFU), BlueSeeds, Université du Québec à Rimouski (UQAR), Vois-là
The ESA Coastal Blue Carbon project aims to develop innovative tools and methodologies based on Earth Observation (EO) to monitor the extent and carbon stocks of coastal vegetated ecosystems such as seagrass meadows, tidal marshes and mangroves. Carbon storage in BCEs is controlled by several biological and geological drivers that can be identified and quantified during field surveys. However, these datasets are limited in space and time, thereby precluding upscaling estimates of blue carbon storage beyond local case studies. This project examines the relationship between the spatial coverage of field surveys and satellite imagery at various resolutions (Pleiades, Sentinel-2). Ground data collected through field surveys offer detailed information on vegetation type and carbon stocks in above- and below-ground biomass and soil. However, existing ground data are scarce and often lack spatial coverage, limiting their application on a broader scale. For the salt marshes, analyses focused on dominant species, including Halimione portulacoides, Aster tripolium and Spartina spp. These detailed local data serve as a foundation to assess their relevance for calibrating and validating remote sensing products while evaluating the capability of satellite sensors to capture these characteristics across scales. Another case study involved the mapping of blue carbon stocks and accumulation rates in seagrass Posidonia oceanica across the Mediterranean Sea. Thus, benefiting from EO data on environmental drivers coupled with existing maps of seagrass extent, we upscaled blue carbon storage with >75% prediction power based on Boosted Regression Tree models. Therefore, the validation exercise conducted in our project offers exciting opportunities for upscaling estimates from limited data-rich regions to broader local and regional scales.
Multi-sensor satellite imagery, with its varying resolution from 50 cm (Pleiades) to 10 m (Sentinel-2), provides a means of scaling these measurements across large areas, though challenges arise from the resolution discrepancies between fine-scale ground data and coarser satellite imagery. We reflect on the methodological aspects of ground data collection for remote sensing, emphasizing the importance of high-quality, representative field measurements in supporting satellite-driven models at different spatial resolutions, and discuss strategies for spatially integrating these datasets. We highlight the need for standardized ground sampling protocols for multi-resolution approaches in coastal blue carbon monitoring to enhance the accuracy and scalability of blue carbon assessments. Both the type of satellite sensor (VHR, HR, multi-spectral, radar, panchromatic) and the size of a field sample depend on the objective of the study, the ecosystem being studied, its diversity of habitats and their heterogeneity, the degree of fragmentation of that ecosystem and the level of precision required. What sample size will capture the heterogeneity of the studied habitats without taking too long or being prohibitively expensive, while being usable at the selected EO spatial resolution? We face several challenges in BCE surveys, such as site accessibility and habitat patches that are in many cases too thin to be distinguished in Sentinel-2 images. Photointerpretation and unsupervised machine learning habitat mapping can be used to design fieldwork campaigns to make sure samples are taken across the whole variability of each site. This provides a selection of polygons to visit to ensure good a priori representativeness (both spatially and thematically) and support teams on the ground to identify areas large enough for the different satellite spatial resolutions.
The study aims to determine how detailed but spatially limited field data can be integrated into large-scale satellite observations. It highlights the need for harmonized protocols to effectively connect spatial and temporal scales, supporting improved management of coastal ecosystems. The tools and methodologies developed under the ESA Coastal Blue Carbon project will benefit end-users interested in monitoring, reporting and verification steps linked to conservation and restoration Blue Carbon projects, among others.
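The unsupervised stratification idea above — clustering imagery so that field samples cover each site's spectral variability — can be sketched with a toy k-means on synthetic two-band pixels. This is an illustrative stand-in under invented data, not the project's actual habitat-mapping workflow:

```python
import numpy as np

def kmeans(X, k, n_iter=20):
    """Minimal k-means: cluster pixel spectra into k strata."""
    # Simple deterministic initialisation: centers spread across the array
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        # Assign each pixel to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy "image": two spectrally distinct habitat patches (e.g. band ratios)
rng = np.random.default_rng(1)
patch_a = rng.normal([0.1, 0.4], 0.02, (50, 2))
patch_b = rng.normal([0.5, 0.1], 0.02, (50, 2))
pixels = np.vstack([patch_a, patch_b])

labels, _ = kmeans(pixels, k=2)
# Pick a few candidate sampling pixels per stratum so fieldwork spans
# the full spectral variability of the site
samples = {j: np.flatnonzero(labels == j)[:3] for j in range(2)}
```

Sampling a fixed number of polygons per cluster is one simple way to guarantee the a priori representativeness the abstract calls for.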

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Characterizing Unique Phytoplankton Bio-Optics to Enhance Estimates of Pigments and Productivity in Antarctic Coastal Waters

Authors: Eva Scrivner, Dr. Jessica Turner, Dr. Oscar Schoffield, Michael Cappola, Dr. Heidi Dierssen
Affiliations: Department of Marine Science, University Of Connecticut, Department of Marine and Coastal Sciences, Rutgers University, School of Marine Science and Policy, University of Delaware
The Southern Ocean is a critical regime for the exchange of carbon between the atmosphere and ocean, yet drastic uncertainties remain in the region’s carbon budget. This is due, in part, to the unique bio-optics of Antarctic waters. Phytoplankton absorb less blue light in the Southern Ocean than the global ocean; as such, global remote sensing retrievals underestimate Chlorophyll-a (Chl-a), a key metric of primary productivity, by a factor of 2 – 2.5. Light limited conditions also contribute to increased Chl-a concentrations per phytoplankton cell (i.e. pigment packaging), further confounding remote estimates of carbon uptake in this region. Using a suite of radiometric, pigment, and imaging data, here we characterize the unique bio-optics of the Western Antarctic Peninsula, with the goal of refining algorithmic retrievals of chlorophyll, primary productivity, and phytoplankton community composition in this region. Data were collected during three cruises conducted through the National Science Foundation - Palmer Station Long Term Ecological Research Site (NSF Palmer Station LTER) in November 2021 and January 2023 and 2024. Reflectance measurements were conducted using a bow-mounted, solar tracking Satlantic Hyperspectral Surface Acquisition System (HyperSAS) radiometer from 350 – 800 nm and a handheld Analytical Spectral Devices (ASD) spectroradiometer from 350 – 2500 nm. Additional measurements include phytoplankton absorption using a Quantitative-Filter-Technique Integrating-Cavity Absorption-Meter (QFT-ICAM), phytoplankton taxonomic composition using High Performance Liquid Chromatography (HPLC), and flow-through particle imaging using an Imaging Flow Cytobot (IFCB). Field radiometry will be compared to remote sensing reflectance products from the MODIS and Sentinel 3 – OLCI sensors to validate sensor calibration and atmospheric correction in this region.
Refined algorithmic retrievals of phytoplankton, pigments, and productivity will then be parameterized using field absorption and radiometry data and applied to this validated imagery. The performance of this regionally tuned Southern Ocean algorithm will then be compared with global ocean productivity algorithms. Future cruise efforts will leverage coincident field radiometry data to validate the recently launched PACE sensor, assessing the utility of hyperspectral satellite imagery in monitoring these optically unique Antarctic waters. While ongoing, the data and models produced in this study can go on to improve currently limited understandings of Southern Ocean carbon dynamics in the wake of continued accelerated global climate change.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Advancing Mangrove Restoration: Deep Learning for Seedling Detection in UAV Imagery

Authors: Yuri Shendryk
Affiliations: Dendra Systems
Detecting mangrove seedlings is crucial for maintaining healthy mangrove ecosystems, which provide essential habitats for biodiversity and act as natural barriers against coastal erosion and storm surges. It also supports effective reforestation efforts, carbon sequestration, and informs adaptive management and policy implementation, ensuring the long-term viability of these critical coastal forests. Research so far has focused primarily on seedling detection in agricultural settings, while natural environments, such as mangrove forests, remain poorly studied. This gap highlights the need for advanced methods to accurately detect seedlings in complex, natural ecosystems. Unmanned aerial vehicle (UAV) imagery is ideal for seedling detection due to its high resolution, relatively extensive coverage, and ability to access remote areas, enabling efficient monitoring of mangrove forests for conservation and reforestation. However, the task is challenging because seedlings are small (< 15 cm) and often visually similar to surrounding vegetation, with varying light conditions and complex backgrounds further complicating accurate identification. In this study, ultra-high-resolution (8.5 mm) RGB imagery collected from UAVs was used to investigate the potential of mangrove seedling detection at a large scale. For this, 22 seeding sites covering a total area of 0.41 km² were surveyed around the Abu Dhabi area in the UAE in 2023 and 2024. A total of 11,088 image chips of 2.2m × 2.2m in size were extracted in a random stratified manner from UAV orthomosaics and manually labelled for seedling presence. All cross-shaped 4-leaf seedlings, along with their variants having 3 to 8 leaves, were manually identified in the imagery. Following the success of applying deep learning in crowd counting tasks (Khan et al., 2022), the model for seedling detection was based on Gaussian blurring (σ=2.5) of input seedling locations to create a density map.
The density map was then used to train a density estimation model to predict it from UAV imagery. Finally, the Difference of Gaussians (DoG) method (Marr et al., 1980) was applied to the predicted density map to accurately localize individual seedling locations. Specifically, the 11,088 labelled image chips were stratified into the train (90%, n=9,979) and validation (10%, n=1,109) datasets. The stratification was based on the number of seedlings per image chip grouped into five bins using a quantile strategy. Then a density estimation model was developed using a tiny configuration of MaxViT encoder (Tu et al., 2022) pre-trained on ImageNet-1K and UNet decoder (Ronneberger et al., 2015) with the last Softmax layer replaced by ReLU, resulting in a final model with 31.8 million parameters. The model was fine-tuned with multiple augmentations (e.g. flips, rotations and colour adjustments), batch size of 8, AdamW optimizer with an initial learning rate of 3e-05, weight decay of 0.01, and a CosineAnnealingLR scheduler. The model was optimized using a Tweedie loss (Tweedie variance power of 1.1) to account for the effect of zero inflation in the dataset for 30 epochs. During training, the F₁-score was also calculated on the validation dataset using multiple DoG parameter configurations to identify the best model for seedling detection. The best model achieved an F₁-score of 0.68 on the validation dataset (precision of 0.68 and recall of 0.69), which was used for the inference on all 44 UAV orthomosaics from 2023 and 2024. The balance between precision and recall shows that the model is effective at identifying most seedlings while keeping false positives to a minimum. However, future efforts should focus on fine-tuning the model with more training labels to further improve overall accuracy, thus enhancing the reliability of seedling monitoring in complex natural environments using UAV technology.
References:
Khan, M. A., Menouar, H., & Hamila, R. (2022). Revisiting Crowd Counting: State-of-the-art, Trends, and Future Perspectives (arXiv:2209.07271). arXiv. http://arxiv.org/abs/2209.07271
Marr, D., Hildreth, E., & Brenner, S. (1980). Theory of edge detection. Proceedings of the Royal Society of London. Series B. Biological Sciences, 207(1167), 187–217. https://doi.org/10.1098/rspb.1980.0020
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation (arXiv:1505.04597). arXiv. http://arxiv.org/abs/1505.04597
Tu, Z., Talebi, H., Zhang, H., Yang, F., Milanfar, P., Bovik, A., & Li, Y. (2022). MaxViT: Multi-Axis Vision Transformer (arXiv:2204.01697). arXiv. http://arxiv.org/abs/2204.01697
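Two generic steps named in the abstract — building a Gaussian-blurred density map from point labels, and localizing peaks with a Difference of Gaussians — can be sketched with scipy on synthetic data. The parameters, thresholds, and helper names below are illustrative assumptions, not Dendra Systems' actual model or code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def density_map(points, shape, sigma=2.5):
    """Rasterize (y, x) point labels and Gaussian-blur them into a density
    map whose integral equals the seedling count (sigma as in the abstract)."""
    canvas = np.zeros(shape)
    for y, x in points:
        canvas[y, x] = 1.0
    return gaussian_filter(canvas, sigma=sigma)

def dog_peaks(density, sigma1=1.0, sigma2=2.0, thresh=0.004):
    """Difference-of-Gaussians blob response followed by local-maximum
    picking to localize individual seedlings (illustrative parameters)."""
    dog = gaussian_filter(density, sigma1) - gaussian_filter(density, sigma2)
    local_max = maximum_filter(dog, size=5) == dog
    ys, xs = np.nonzero(local_max & (dog > thresh))
    return list(zip(ys.tolist(), xs.tolist()))

# Two synthetic "seedlings" on a 40x40 chip: blur, then recover locations
truth = [(10, 10), (30, 25)]
dens = density_map(truth, (40, 40))
detections = dog_peaks(dens)
```

In the actual pipeline the density map fed to `dog_peaks` would be the network's prediction rather than the blurred ground truth; the DoG step is the same either way.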

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Estimation of various carbon fractions in coastal waters by Sentinel-2 MSI and Sentinel-3 OLCI to support large-scale carbon cycle studies

Authors: Kaire Toming, Dr. Ele Vahtmäe, Dr. Martin Ligi, Dr. Tuuli Soomets, Laura Argus, Tiit
Affiliations: Estonian Marine Institute, University of Tartu
A thorough understanding of the global carbon pools and cycle is essential to understanding and predicting the effects of climate change. Coastal waters, which make up 7-11% of the ocean's total area, play a significant role in global carbon cycles and thereby contribute to climate regulation and are considered critical to achieving emission reductions necessary for fulfilling a variety of Sustainable Development Goals. Carbon monitoring in coastal waters involves unique challenges due to their complex ecosystem structure, seasonality, and sensitivity to climate impacts. Satellite remote sensing data could provide high spatial and temporal resolution for systematic carbon monitoring at the local, regional, and global levels. The highly dynamic nature of coastal areas, as well as their optical complexity and issues associated with aquatic remote sensing near land, places greater demands on remote sensing and its sensors as well. Currently, there are no satellite sensors that specifically address challenges related to coastal water remote sensing. Consequently, suboptimal sensors are used for developing and delivering water remote sensing products. Sentinel-2 MSI (S2) offers high spatial resolution for carbon monitoring and studies in optically complex coastal areas. Sentinel-3 OLCI (S3) offers better spectral band configuration, temporal resolution, and radiometric sensitivity than S2, but may not be sufficient due to the high spatial heterogeneity of near-coastal zones. In the frame of the ESA CoastalCarbonMapper project, we aim to develop and validate algorithms for innovative remote sensing products of carbon fractions—dissolved inorganic carbon (DIC), total inorganic carbon (TIC), dissolved organic carbon (DOC), particulate organic carbon (POC), and total organic carbon (TOC)—in coastal waters by using in situ data and S2 and S3 imagery. 
The following research questions were addressed: (1) What are the optical proxies for different carbon fractions in coastal waters? (2) Which remote sensing algorithms are the most suitable for mapping different carbon fractions in coastal waters using S2 and S3 imagery? In situ data on bio-optical and physical water parameters, as well as the concentrations of different carbon fractions and optically active water constituents (CDOM, Chl-a, and TSM), were collected and analysed from water samples in the laboratory. Water reflectance and bio-optical properties were mostly measured simultaneously with S2 overflights to obtain as many matchups as possible for further analyses. Measurements at the test site were taken in two different years (2023-2024) and four times during the ice-free season (in April, May, July, and September). Next, the concentrations of the various carbon fractions were compared to different water parameters to determine possible optical proxies, and different algorithms/methods for retrieving carbon fractions were tested. Further steps will include testing the retrieval algorithms with S2 and S3 imagery. The study represents a step forward in the domain of coastal water remote sensing and advances Earth observation science in general. The carbon fraction products proposed in this project are innovative for coastal seas and, if successful, will enable significant progress in fields ranging from research to monitoring and policymaking.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Anomalous Summertime CO2 sink in the subpolar Southern Ocean promoted by early 2021 sea ice retreat

Authors: Kirtana Naeck, Dr Jacqueline Boutin, Dr Sebastiaan Swart, Dr Marcel Du Plessis, Dr Liliane Merlivat, Laurence Beaumont, Antonio Lourenco, Dr Francesco D'Ovidio, Louise Rousselet, Dr Brian Ward, Dr Jean-Baptiste Sallée
Affiliations: LOCEAN, Department of Marine Sciences, University of Gothenburg, DT-INSU, University of Galway
The physical and biogeochemical processes governing the air-sea CO2 flux in the Southern Ocean are still widely debated. This presentation focuses on an anomalously large sink of CO2 observed north of the Weddell Sea in Summer 2022. The processes behind this anomalous situation are analyzed based on the combination of in situ observations, various satellite parameters (altimetric currents, Chl-a, ice concentration and sea surface salinity) and ocean model reanalysis. The “Southern Ocean Carbon and Heat Impact on Climate” cruise in Summer 2022 aimed at studying physical and biogeochemical processes in the Weddell Sea and in its vicinity. A “CARbon Interface OCean Atmosphere” (CARIOCA) drifting buoy was deployed in January 2022 in the subpolar Southern Ocean, providing hourly surface ocean observations of fCO2 (fugacity of CO2), dissolved oxygen, salinity, temperature and chlorophyll-a fluorescence for 17 months. An underwater glider was piloted with the buoy for the first 6 weeks of the deployment to provide vertical ocean profiles of hydrography and biogeochemistry. These datasets reveal an anomalously strong ocean carbon sink persisting for over 2 months, occurring in the region of Bouvet Island and associated with large plumes of chlorophyll-a (Chl-a). Based on Lagrangian backward trajectories reconstructed using various surface current fields, we identified that the water mass reaching the Bouvet Island region originated from the south-west, from the vicinity of the sea ice edge in Spring 2021. We suggest that a strong phytoplankton bloom developed there in November 2021 through dissolved iron supplied by early sea ice melt in 2021 in the Weddell Sea. These waters, depleted in carbon, then travelled to the position of the CARIOCA buoy. 
The very low values of ocean fCO2, measured by the buoy (down to 310 μatm), are consistent with net community production previously observed during blooms occurring near the sea ice edge, partly compensated by air-sea CO2 flux along the water mass trajectory. Early sea ice retreat might therefore have caused a large CO2 sink farther north than usual in Summer 2022, in the Atlantic sector of the subpolar Southern Ocean. Such events might become more frequent in the future as a result of climate change.
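The backward-trajectory idea used to trace the water mass can be sketched in a few lines. This is a toy illustration only: the velocity field below is synthetic and the integration is a simple Euler scheme, whereas a real reconstruction would use altimetric current fields and a higher-order integrator.

```python
# Toy Lagrangian backward trajectory: step a parcel backwards in time
# through a synthetic surface current field. All fields and step sizes
# are illustrative assumptions, not the study's actual data.
import numpy as np

def current(lon, lat, t):
    """Synthetic steady velocity field (degrees/day): gentle zonal drift."""
    u = 0.1 + 0.05 * np.sin(np.radians(lat))
    v = 0.02 * np.cos(np.radians(lon))
    return u, v

def backward_trajectory(lon0, lat0, days, dt=1.0):
    lon, lat = lon0, lat0
    path = [(lon, lat)]
    for step in range(int(days / dt)):
        u, v = current(lon, lat, -step * dt)
        lon -= u * dt  # minus sign: integrating backwards in time
        lat -= v * dt
        path.append((lon, lat))
    return path

# e.g. trace a parcel near Bouvet Island (3.4°E, 54.4°S) 60 days back
path = backward_trajectory(lon0=3.4, lat0=-54.4, days=60)
```

In practice such trajectories are seeded at the buoy position and integrated backwards through daily altimetry-derived currents, which is how the south-westerly origin near the sea ice edge would be identified.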
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Towards a New Database of Photosynthesis Parameters Estimated From Production Profiles

Authors: Marija Bačeković Koloper, Dr Žarko Kovač, Gemma Kulk, Dr Shubha Sathyendranath, Heather Bouman
Affiliations: Department of Physics, Faculty of Science, University of Split, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre of Earth Observation, Plymouth Marine Laboratory, Department of Earth Sciences, University of Oxford
Photosynthesis parameters are used in photosynthesis-irradiance functions to describe the rate of carbon assimilation as a function of available light. The first parameter, known as the initial slope, characterizes the response of photosynthesis at low light levels, while the second parameter, referred to as the assimilation number, represents the maximum rate of carbon assimilation at saturating light intensities. These parameters are integral to remote sensing and ecosystem models for calculating carbon assimilation rates from local to global scales. In the past few decades, substantial progress has been made in estimating global primary production using ocean-colour remote sensing observations. While the implementation of these models may differ, satellite-based primary production models have been shown to conform to the same principles with the same set of parameters. Yet, one of the major challenges in the application of satellite-based primary production models remains the assignment of appropriate model parameters, which forms the largest source of uncertainty in the computation of satellite-based primary production estimates. Recent efforts have been made to compile global in situ datasets of photosynthetic parameters for use in satellite-based primary production models, but relatively few photosynthesis-irradiance measurements exist globally, with many regions being under-represented. The largest publicly available global dataset of photosynthesis parameters includes approximately 8,500 pairs of initial slope and assimilation number values (Kulk et al., 2020). Here, we explore the use of in situ primary production measurements to obtain further information on photosynthetic parameters to improve satellite-based primary production estimates and reduce uncertainty. Using inverse modeling, based on the approach of Kovač et al. 
(2016), we estimate photosynthesis parameters from a global database of chlorophyll and primary production profiles measured in situ under natural light conditions, compiled by Mattei & Scardi (2021). This approach expands the existing global dataset by adding approximately 6,000 new pairs of photosynthesis parameters. We present the basic statistics of the newly estimated parameters alongside those of the measured chlorophyll and primary production profiles. Additionally, we analyze the spatial and temporal distributions of these parameters. Finally, we incorporate the parameters into models to assess their performance in estimating primary production profiles and water-column production.
References:
Kulk, G., Platt, T., Dingle, J., Jackson, T., Jönsson, B., Bouman, H. A., Babin, M., Brewin, R. J. W., Doblin, M., Estrada, M., Figueiras, F. G., Furuya, K., González-Benítez, N., Gudfinnsson, H. G., Gudmundsson, K., Huang, B., Isada, T., Kovač, Ž., Lutz, V. A., Marañón, E., Richardson, K., Raman, M., Rozema, P. D., Van de Poll, W. H., Segura, V., Tilstone, G. H., Uitz, J., van Dongen-Vogels, V., Yoshikawa, T. & Sathyendranath, S. (2020). Primary production, an index of climate change in the ocean: Satellite-based estimates over two decades. Remote Sensing, 12, 826. doi: 10.3390/rs12050826.
Kovač, Ž., Platt, T., Sathyendranath, S. & Morović, M. (2016). Analytical solution for the vertical profile of daily production in the ocean. Journal of Geophysical Research: Oceans, 121. doi: 10.1002/2015JC011293.
Mattei, F. & Scardi, M. (2021). Collection and analysis of a global marine phytoplankton primary production dataset. Earth System Science Data, 13, 4967–4985. doi: 10.5194/essd-13-4967-2021.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Enhancing Ocean Color Observations’ Description of Colored Dissolved Organic Matter by Retrievals of the Diffuse Attenuation in the UV from Sentinel-5P TROPOMI Data

Authors: Alfredo Joswar Bellido Rosas, Dr. Vladimir Rozanov, Andreas Richter, Alexei Rozanov, Emanuele Organelli, Dr. Riccardo Nanni, Dr. Hongyan Xi, Dr. Leonardo Alvarado, Dr. Astrid Bracher
Affiliations: Alfred Wegener Institute, Institute of Environmental Physics, University Bremen, Consiglio Nazionale delle Ricerche, Istituto di Scienze Marine (ISMAR-CNR)
The backscattered light from within the ocean carries information about surface ocean optical constituents, e.g., phytoplankton and the amount of light in the ocean. Global quantified insight into these parameters is important for estimating primary productivity and the ocean heat budget, and for a better understanding and modeling of biogeochemical cycles. Atmospheric sensors such as TROPOMI on Sentinel-5P, SCIAMACHY on ENVISAT and GOME-2 on the Metop satellites have proven to yield valuable information on phytoplankton diversity, sun-induced marine fluorescence, and the underwater light field. As commonly used for the retrieval of atmospheric trace gases, the oceanic parameters are inferred from differential optical absorption spectroscopy (DOAS) combined with radiative transfer modeling (RTM). SCIAMACHY and GOME-2 have rather large footprints, limiting the ability to resolve the ocean’s mesoscale dynamics of the surface biogeochemical compounds. The more recent satellite sensor TROPOMI, with its nearly daily coverage and 3.5 km by 5.5 km pixel size, enables us to resolve these scales much better. We have previously demonstrated the retrieval of two novel diffuse underwater attenuation coefficient (Kd) products in the UV range (312.5–338.5 nm and 356.5–390 nm), in addition to the short blue Kd (390–423 nm). These retrievals are based on identifying the strength of the vibrational Raman scattering (VRS) signal in the spectrally highly resolved TROPOMI data and then converting it via a look-up table (LUT) based on coupled atmospheric-oceanic RTM calculations of Kd in the specific wavelength region corresponding to the excitation of the VRS. In this study, we present our recent optimizations of the Kd retrievals for the application to TROPOMI. 
We have optimized the retrieval by a) improving the parametrization for phytoplankton and CDOM absorption applied to calculate the cross section used to fit VRS in TROPOMI data, and by b) extending the LUTs to include all dependencies of the VRS signal versus the Kd on the observation geometry (solar zenith angle, viewing angle, azimuth angle). We have further assessed the sensitivities of the retrievals’ parametrizations to atmospheric and oceanic parameters, specifically to temperature and the composition of phytoplankton and colored dissolved organic matter (CDOM). All of the latter have rather marginal effects (<2% and 5%, respectively) on the retrieved Kd uncertainty. While in the past our Kd UV and short blue products were evaluated only for the Atlantic Ocean by validation against in-situ data from one campaign, here we will show their validity on a global scale by comparing to the multispectral data set of Kd available from the global BGC-ARGO program. In particular, for the Mediterranean Sea, we explore the capabilities of the TROPOMI Kd products by adding more information on the underwater light field, particularly in the UV, to improve the classification of DOM properties and differentiate between terrigenous and locally formed sources, which enables describing the CDOM dynamics in more detail. We further use these data together with other ocean color products, such as chlorophyll concentration of specific phytoplankton types (PFTs), CDOM and the 4D-products of the ESA project 4DMedSea on the overall chlorophyll concentration (CHL), sea surface temperature and salinity, to differentiate ecoregions in these waters. We aim to improve previous regionalization approaches for the Mediterranean Sea through a better incorporation of fluxes between the various phytoplankton groups, export of particulate organic matter to depth, and the dynamics and storage of dissolved organic carbon and its colored fraction CDOM. 
Our study shows the first exploitation of these TROPOMI products in relation to the ocean’s carbon pool.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Toward improved validation of satellite particulate backscatter estimates for climate research: INSPIRE Project

Authors: Marco Bellacicco, Jacopo Busatto, Guglielmo Lacorata, Federico Falcini, Gianluca Volpe, Salvatore Marullo, Luca R. Centurioni, Rosalia Santoleri, Maria Laura Zoffoli
Affiliations: Institute of Marine Science, National Research Council of Italy, Lagrangian Drifter Laboratory, Scripps Institution of Oceanography, University of California, La Jolla
The particulate backscattering coefficient (bbp) is an indicator of phytoplankton biomass, particulate organic carbon, and particle size distribution in the ocean. It serves as input for modeling net marine primary production and net community production. Since bbp can be estimated from satellite imagery, it plays a fundamental role in quantifying primary production on a global scale and evaluating its spatial patterns. Therefore, accurate satellite-based bbp is required to constrain coupled physical and biogeochemical models, thereby improving climate projections. To date, most of the European Space Agency (ESA) Ocean Science Cluster-funded projects that utilize bbp have relied on global operational products (i.e., ESA OC-CCI). However, these products lack associated uncertainties relative to in situ measurements, limiting our understanding of their impact on ocean productivity and organic carbon export. The INSPIRE project aims to address this gap by developing a Global Observing System (GOS) specifically tailored for validating satellite bbp products, harmonising remote sensing, Lagrangian modelling and in situ data. This involves using a new generation of Surface Velocity Programme (SVP) drifting buoys equipped with a bbp sensor, known as the Backscatter-Optical (BO)-SVP drifter. Designed for extended deployment periods, they offer a promising solution for collecting data in challenging marine environments by combining the Lagrangian approach with a high sampling frequency. This project seeks to optimize drifter deployment locations to maximise the number of in situ observations usable for match-up activities, as well as to showcase a demonstration of the GOS. Finally, satellite bbp products will be validated with in situ measurements collected by BO-SVP drifters deployed in the global ocean, with a particular focus on the Mediterranean Sea. 
The drifters launched in the Mediterranean Sea were acquired through the ITINERIS (Italian Integrated Environmental Research Infrastructures System) project. The high-frequency observations obtained from BO-SVP drifters could be impactful in relation to the next generation of altimetry (e.g., NASA SWOT), hyperspectral ocean color satellite missions (e.g., NASA GLIMR, ESA Sentinel Next Generation (NG), and ESA CHIME), and future lidar missions (e.g., ASI CALIGOLA) for the detection of ocean processes across spatial and temporal scales, from larger to finer. In the future, a coordinated array of BO-SVP drifters, BGC-Argo floats, and other autonomous platforms working in synergy could provide the required surface and subsurface data at the temporal, spatial, and spectral (e.g., multi- or hyperspectral resolution) scales of interest to future satellite missions.
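The match-up activity mentioned above amounts to pairing each drifter bbp observation with satellite pixels inside a space-time window. A minimal sketch, assuming a ±3 h / 5 km window (these thresholds are illustrative assumptions, not the INSPIRE protocol):

```python
# Hypothetical satellite/in-situ match-up selection for drifter bbp data.
# Window sizes and record layout are assumptions for illustration.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points."""
    r = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

def find_matchups(drifter, satellite, max_hours=3.0, max_km=5.0):
    """Return (drifter_index, satellite_index) pairs inside the window."""
    pairs = []
    for i, d in enumerate(drifter):
        for j, s in enumerate(satellite):
            if abs(d["t"] - s["t"]) <= max_hours and \
               haversine_km(d["lat"], d["lon"], s["lat"], s["lon"]) <= max_km:
                pairs.append((i, j))
    return pairs

drifter = [{"t": 10.0, "lat": 38.0, "lon": 5.0}]           # times in hours
satellite = [{"t": 11.0, "lat": 38.01, "lon": 5.01},       # within window
             {"t": 20.0, "lat": 38.0, "lon": 5.0}]          # too late
pairs = find_matchups(drifter, satellite)
```

The high sampling frequency of the BO-SVP drifters matters precisely here: more drifter records per day means more candidate pairs falling inside a given overpass window.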
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Dynamic assignment of photosynthetic parameters for modelling primary production from satellite observations using machine learning

Authors: Mayra Rodriguez Bennadji, David Moffat, Shubha Sathyendranath, Professor Heather Bouman, Lekshmi Krishnakumary, Gemma Kulk
Affiliations: Plymouth Marine Laboratory, National Centre for Earth Observation, Department of Earth Sciences, University of Oxford
Phytoplankton couple solar energy to the marine biosphere through photosynthesis and thereby play a vital role in the global carbon cycle. Phytoplankton take up about 50 gigatonnes of carbon each year, comparable to the net primary production of terrestrial plants. Understanding the factors which influence phytoplankton primary production is crucial, as these can vary regionally and seasonally, affecting carbon fluxes at multiple scales. Marine photosynthesis can be studied using the photosynthesis-irradiance (PI) model, which relates light (irradiance) to photosynthesis per unit biomass. The model curve is defined by three parameters: the maximum photosynthetic rate (PmB), the initial slope (𝛼), and the light level where photosynthesis shifts from light-limited to light-saturated (Ek). Accurately determining these parameters can greatly improve marine primary production estimates, as these photosynthetic parameters form the basis of satellite-based primary production models as well as ecosystem models. This study uses machine learning methods, specifically random forests, alongside in situ and remote sensing data, to predict the three PI parameters and apply them at the global scale. The aim is to develop predictive models that map PI parameters dynamically, improving current satellite-based estimates of primary production and ultimately aiding our understanding of the ocean carbon cycle. 
The research follows these steps: (1) data cleaning – ensuring the datasets from in situ measurements and satellite match-ups are consistent and reliable, (2) variable selection - by conducting correlation analysis to create a training dataset, (3) model training and validation – implementing a spatial clustering technique for validation and dividing the dataset into training and testing subsets for robust model evaluation, (4) bias correction - utilising various methods to address systematic biases in the PI predictions, improving the alignment between modelled outputs and observations across spatial and temporal scales, and finally, (5) global mapping – to produce global maps of the PI parameters informed by the developed models. The remote sensing observed variables selected for building our machine learning model include sea surface temperature, salinity, chlorophyll-a, remote-sensing reflectance (490 nm), photosynthetically available radiation (PAR), mixed-layer depth, ocean depth, day-length, month, season, and Longhurst’s ecological provinces. Our machine learning model demonstrates significant predictive capability, with a coefficient of determination (r2) of 0.51 for PmB and 0.57 for 𝛼. These values indicate that the models capture a substantial portion of the variability in the data. Bias correction is incorporated to correct for extreme values, which is a key step that ensures that the predictive models generate more accurate and reliable outputs, supporting their application in global oceanographic research. Three techniques are investigated: (a) bias estimation using a second machine learning model of the residuals, (b) bias correction using simple linear regression, and (c) bias corrections using residual rotation. By providing dynamic estimates of PI parameters, this study contributes to refining global understanding of primary production. 
Improved primary production estimates can lead to more accurate estimates of the ocean's role in the global carbon cycle, helping scientists and policymakers understand how changes in marine ecosystems may impact atmospheric carbon dioxide levels and, ultimately, global climate change.
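The pipeline described above, in particular bias-correction technique (a), can be sketched in a few lines: a primary random forest predicts a PI parameter, and a second model is fitted to its residuals to correct systematic bias. Everything below is a synthetic illustration (placeholder predictors standing in for SST, chlorophyll-a, PAR and mixed-layer depth), not the study's code or data.

```python
# Sketch of bias correction via a second model of the residuals (step 4a).
# Predictors and target are synthetic stand-ins, not the project's dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# synthetic predictors: e.g. SST, chlorophyll-a, PAR, mixed-layer depth
X = rng.normal(size=(500, 4))
y = 2.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500)  # stand-in for PmB

# primary model predicts the PI parameter
primary = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
residuals = y - primary.predict(X)

# second model learns whatever systematic structure remains in the residuals
bias_model = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, residuals)

# corrected prediction = primary prediction + estimated bias
corrected = primary.predict(X) + bias_model.predict(X)
```

In a real application the residual model would be fitted on held-out data, and alternatives (b) simple linear regression and (c) residual rotation would be compared against it on the same validation folds.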
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: A study on the applicability of Geostationary Ocean Colour Imager (GOCI) to monitoring diurnal variations in ocean carbon budget around Korea coastal waters

Authors: Jongkuk Choi, Dr Deukjae Hwang, Dr Eunna Jang, Dr Taekeun Rho, Dr Youngbaek Son
Affiliations: Kiost
The world’s first geostationary ocean color satellite sensor, the Geostationary Ocean Color Imager (GOCI), has been successfully applied to monitoring the ocean surface environment, such as the concentration of total suspended sediments, the absorption coefficient of colored dissolved organic matter, and chlorophyll-a content. Suspended sediment dynamics, HAB growth, and current movement around the Korean Peninsula have also been evaluated using GOCI. Here, we have tested the applicability of GOCI to estimating particulate and dissolved organic carbon (POC and DOC) at the sea surface around the Korean Peninsula. In situ measurements of POC/DOC were collected in coastal waters near Jeju Island and employed to validate the GOCI-derived results calculated using existing algorithms based on GOCI-derived IOPs/AOPs. We could also examine the diurnal variations in POC/DOC using hourly GOCI images. This advantage of GOCI has also been applied and tested to estimate hourly primary production around the Korean Peninsula. The results imply that GOCI can effectively estimate the ocean carbon budget and, using the hourly images, monitor its diurnal variations in coastal waters.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: A Satellite-Based Approach To Estimate Ocean pCO2 and Air-Sea CO2 Fluxes in the Central Mediterranean

Authors: Mattia Pecci, Fabrizio Anello, Lorenzo De Silvestri, Tatiana Di Iorio, Daniela Meloni, Francesco Monteleone, Giandomenico Pace, Salvatore Piacentino, Damiano Sferlazzo, Elena Principato, Alcide di Sarra
Affiliations: Laboratory for Earth Observations and Analyses - ENEA
Oceans play a critical role in the global carbon cycle, absorbing approximately a quarter of anthropogenic CO₂ emissions. However, ocean-atmosphere CO₂ exchange exhibits significant spatial and temporal variability, influenced by complex feedback mechanisms. Continuous, spatially distributed observations are essential to improve our understanding of the carbon cycle. At the Lampedusa Oceanographic Observatory (35.49°N, 12.47°E) in the central Mediterranean, in situ measurements of ocean pCO₂, temperature, salinity, and ancillary parameters, such as pH and chlorophyll concentration, have been collected since December 2021. These measurements complement atmospheric CO₂ data from the nearby Lampedusa Atmospheric Observatory. Combined observations enable the calculation of atmosphere-ocean CO₂ fluxes. Satellite regional algorithms have been developed to estimate ocean pCO₂ using in situ data and satellite-derived proxies of physical and biological processes. Traditional regression analysis and machine learning approaches (using the XGBoost algorithm) have been implemented. Regression models incorporating sea surface temperature and salinity, chlorophyll concentration, photosynthetically active radiation, and wind speed show robust performance, with low bias and good agreement with observations. Best-performing models are characterised by bias in the order of 1%, root mean square difference (RMSD) in the order of 3% and coefficient of determination (R²) up to 0.94. Machine learning models exhibit a lower agreement with observations. The best-performing machine learning model, using only sea surface temperature as input, has a bias of approximately 1%, an RMSD of about 6%, and an R² of 0.76. However, the effectiveness of machine learning methods appears more constrained by the size of the training dataset. These algorithms allow for estimating CO₂ fluxes across broader areas using a wind-dependent parameterization for gas transfer velocity. 
Fluxes derived from satellite and model-based inputs align well with in situ data, demonstrating the feasibility of extending flux estimates across the central Mediterranean. This approach enhances our ability to monitor CO₂ dynamics and contributes to a better understanding of regional carbon cycling.
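The flux estimation step relies on a wind-dependent parameterization for gas transfer velocity, of the form k = a·U10²·(Sc/660)^-1/2 (a Wanninkhof-type quadratic relation). The sketch below is a minimal, illustrative implementation: the solubility constant and units are simplified assumptions, not the abstract's actual parameterization.

```python
# Minimal sketch of an air-sea CO2 flux with a quadratic wind-speed
# gas transfer velocity. Coefficients and solubility are illustrative.
import numpy as np

def gas_transfer_velocity(u10, schmidt=660.0):
    """k in cm/h: k = 0.251 * U10^2 * (Sc/660)^-0.5 (Wanninkhof-type)."""
    return 0.251 * u10 ** 2 * (schmidt / 660.0) ** -0.5

def co2_flux(u10, pco2_sea, pco2_air, solubility=0.035):
    """Flux ~ k * K0 * dpCO2; positive = outgassing, negative = ocean sink."""
    k = gas_transfer_velocity(u10)   # cm/h
    dpco2 = pco2_sea - pco2_air      # µatm
    return k * solubility * dpco2

# ocean pCO2 below atmospheric pCO2 -> the ocean acts as a CO2 sink
flux = co2_flux(u10=8.0, pco2_sea=380.0, pco2_air=420.0)
```

With satellite-derived pCO2 and wind fields as inputs, the same relation extends point flux estimates from the Lampedusa observatory across the wider central Mediterranean.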
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Agreement between phytoplankton communities using pigments, microscopy, and flow cytometry over three Atlantic Meridional Transects

Authors: Alexander Hayward, Dr Vanda Brotas, Dr Glen Tarran, Dr Simon Wright, Dr Robert Brewin, Dr Vera Veloso, Dr Andreia
Affiliations: Danish Meteorological Institute
Phytoplankton, microscopic autotrophs found in the sunlit layers of marine and freshwater ecosystems, represent a diverse group spanning multiple phylogenetic kingdoms. They are foundational to aquatic ecosystems, playing a central role in the biological carbon pump and serving as the primary entry point for organic matter into aquatic food webs. Understanding their community composition is crucial for evaluating global carbon budgets and ecosystem processes. Despite their ecological significance, characterising phytoplankton communities remains challenging due to their diversity and the limitations of existing analytical methods. Different approaches, including pigment analysis, microscopy, and flow cytometry, are commonly used, but systematic comparisons between these methods are limited. This study integrates data from pigment analysis, microscopy, and flow cytometry collected during three Atlantic Meridional Transect (AMT) campaigns in 2015, 2018, and 2019. The AMT program spans a wide latitudinal gradient from 50°N to 50°S, encompassing diverse phytoplankton communities. These range from oligotrophic regions dominated by small picoplankton to nutrient-rich, high-latitude areas where larger taxa and seasonal variation in abundance and diversity are prominent. By leveraging this multi-method approach, we aim to better understand the composition and dynamics of phytoplankton communities across these varying ecosystems. Our analysis revealed strong overall agreement between pigment-based community structures and those derived from microscopy and flow cytometry. However, notable discrepancies emerged for specific taxa, particularly nano- and pico-eukaryotes, due to the coarse taxonomic resolution of pigment data and the inherent challenges of identifying smaller taxa. These findings underscore the value of combining multiple methods, as no single approach alone can fully capture the complexity of phytoplankton communities. 
Moreover, the observed consistency across large spatial scales builds confidence in the application of these methods to study phytoplankton dynamics. Monitoring phytoplankton communities using an integrated approach is vital for understanding fluctuations in biomass and diversity. This comprehensive perspective is critical to distinguishing between changes driven by natural variability and those linked to climate change, ensuring more accurate predictions of ecosystem responses in a changing world.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone D-E)

Poster: Towards a global assessment of coastal dissolved organic carbon

Authors: Kaire Toming, Dr. Gemma Kulk, Tiit Kutser
Affiliations: Estonian Marine Institute, University of Tartu, Earth Observation Science & Applications Plymouth Marine Laboratory
The pool of dissolved organic carbon (DOC) plays a crucial role in ecological and biogeochemical processes. Several models have been developed to estimate DOC in coastal waters at a local scale using ocean-colour remote-sensing data. These models use absorption of coloured dissolved organic matter (CDOM) as a proxy for DOC or rely directly on the remote sensing reflectance. DOC can affect remote sensing reflectance through its optically active component—CDOM. However, only a small component of the DOC pool is chromophoric. Strong correlations between CDOM absorption and DOC have been observed in coastal regions. Still, the relationship between CDOM and DOC is often regionally and seasonally highly variable due to terrestrial inflows, land use and the proportion of autochthonous DOC in the overall DOC pool. While several regional DOC algorithms for coastal waters have been developed, there is currently no global algorithm capable of addressing the variability and complexity of DOC dynamics on a broader scale. In the Satellite-based observations of Carbon in the Ocean: Pools, fluxes and Exchange (SCOPE) project, funded by the European Space Agency (ESA), we aim to address this gap by developing a global DOC retrieval algorithm for coastal waters by using daily 4 km resolution data from the ESA Ocean Colour Climate Change Initiative (OC-CCI version 6.0) from 1997 to 2023. Waters shallower than 200 m were considered coastal waters. The global in situ data of DOC concentration were used to calibrate the model parameters and validate the predictions of the coastal DOC models. Additionally, sea surface salinity (GLORYS12 version 1) and sea surface temperature (ESA SST-CCI version 3.0) data were included in model development. 
Salinity indicates freshwater influence, which is closely tied to the distribution and variability of DOC and CDOM in coastal areas, while temperature affects microbial degradation and primary production, both of which influence DOC dynamics in coastal waters. High spatial resolution data are essential for capturing the complexity of coastal waters, where significant gradients in DOC can occur over small spatial and short temporal scales. Although the 4 km resolution of OC-CCI may not detect all fine-scale spatial features, it still provides a robust framework for developing a global algorithm that balances spatial coverage and computational feasibility. Before model development, a visual inspection of pair-wise correlations between different parameters was used to identify the most relevant predictor variables of coastal DOC concentration. Band ratios are often more effective than single bands in capturing the optical properties of water. Therefore, fifteen formulas based on 2- or 3-band combinations were tested with various combinations of OC-CCI spectral bands, resulting in 1,170 unique combinations. A variety of statistical methods, including multiple linear regression (MLR), random forest regression (RF), and extreme gradient boosting (XGBoost), were used to develop an algorithm for predicting DOC concentrations in coastal waters. Model performance was evaluated based on multiple metrics, including the coefficient of determination (R2), the root-mean-squared error (RMSE), and the mean absolute percentage error (MAPE). The results demonstrate that the model, using a combination of spectral bands and environmental parameters, effectively predicts coastal DOC concentrations. This work contributes to a better understanding of carbon dynamics in coastal ecosystems and provides a robust tool for future satellite-based assessments of DOC in global coastal waters.
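The band-combination screening described above can be illustrated with a small sketch: every 2-band ratio of a few nominal OC-CCI band centres is scored as a linear predictor of DOC. The data here are synthetic and the band list is only an assumption; the project additionally tests 3-band formulas, RF and XGBoost models.

```python
# Illustrative screening of 2-band-ratio DOC predictors (synthetic data).
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
bands = [412, 443, 490, 510, 560, 665]  # nominal OC-CCI band centres, nm
rrs = rng.uniform(0.001, 0.02, size=(300, len(bands)))
# synthetic "truth": DOC driven by the 443/560 ratio plus noise
doc = 50 * rrs[:, 1] / rrs[:, 4] + rng.normal(scale=0.5, size=300)

scores = {}
for i, j in itertools.permutations(range(len(bands)), 2):
    ratio = (rrs[:, i] / rrs[:, j]).reshape(-1, 1)
    pred = LinearRegression().fit(ratio, doc).predict(ratio)
    scores[(bands[i], bands[j])] = r2_score(doc, pred)

best = max(scores, key=scores.get)  # ratio with the highest R2
```

Repeating this scoring for 2- and 3-band formulas across all OC-CCI bands is what yields the 1,170 candidate combinations mentioned above, from which the best performers are carried into the MLR, RF and XGBoost models.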
Add to Google Calendar

Wednesday 25 June 17:45 - 18:30 (Nexus Agora)

Session: C.03.19 The future evolution of the Copernicus services

The Services established in the Copernicus Programme are delivering operational products for their users, but constant evolution is necessary to keep the output fit for purpose. This evolution is driven not only by newly available technology and new mission data, but also by constantly evolving user requirements and the policy context. This session provides a forum to discuss the challenges and opportunities of the Copernicus Services, and the plans or ideas they can implement for the future. Possible topics are Expansion mission data and Copernicus Contributing Missions, EU policy priorities and new legislation, the next Multi-annual Financial Framework (MFF) and Copernicus 3.0, and cooperation on digital matters or user support.

Moderators:


  • P. Potin and H. Zunker

Speakers:


  • Usue Donezar - EEA
  • Carlo Buontempo - ECMWF
  • Laurence Rouil - ECMWF
  • Peter Salamon - JRC
  • Pierre-Yves Le Traon - Mercator Ocean International

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: A.02.08 - POSTER - Impacts of fire in the Earth system

Fire is one of the critical factors affecting global ecosystems and societies, impacting atmospheric and vegetation composition, soil erosion and runoff, as well as human health, resources, and assets. The occurrence of catastrophic seasons with many human casualties and/or large burned areas in the last decades is often associated with heat waves and droughts, underscoring the decisive role of climate and weather. Important lessons can be learned from studying the impacts of those wildfires and analysing the implications for future fire seasons so we can continuously improve our understanding and management of fire.
We encourage all abstracts that explore fire occurrence in the Earth system at any temporal and spatial scale using remote sensing data and/or modelling and its impacts on (1) ecosystems, vegetation composition and structure, resilience, and fuel management; (2) atmospheric chemistry, air quality, and human health; (3) biogeochemical cycles, carbon budget, water and nutrients; (4) soil erosion; (5) burn severity; and (6) fire occurrence in the past, present and future.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: The Impact of Prescribed Moorland Burning in the UK on Air Quality

Authors: Ailish Graham, Dr Mike P Shewring, Professor Dominick V Spracklen, Dr David J T Douglas, Professor Martyn P Chipperfield
Affiliations: University Of Leeds, National Centre for Earth Observation, Royal Society for the Protection of Birds
In UK uplands, prescribed burning of heath, grass and blanket bog (moorland) is widely used to support gamebird populations for recreational shooting and to improve grazing for livestock. Burning encourages growth of new heather shoots that are more palatable and nutritious to gamebirds and livestock. Rotational burning is carried out on small areas in different years to create a mosaic of young and old heather. However, prescribed burning has become increasingly contentious in recent years due to concerns over impacts on ecosystem services, including water quality, flood risk, carbon storage in peat soils and air quality. Previously, one of the key barriers to understanding the impacts of prescribed burning on ecosystem services was a lack of information on the spatial and temporal extent of burning. However, the new era of high-resolution satellite data has made it possible to map annual moorland burning for the first time. We used annual burned area for 2017 to 2022 at 10 m × 10 m resolution, calculated using data from Sentinel-2. We partition annual burned area to daily burned area based on hotspot data from the Visible Infrared Imaging Radiometer Suite (VIIRS), combined with information on when conditions are suitable for burning according to temperature, rainfall, relative humidity and wind-speed fields from ERA-5. We then used the derived daily burned area to estimate the fire emissions associated with prescribed burning. These emissions are used to simulate the air quality impacts using the WRF-Chem regional model. In particular, the model is used to predict the contribution of prescribed burning to population-weighted exposure to PM2.5 concentrations. This allows us to quantify the impacts of UK prescribed burning on air quality for the first time.
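The annual-to-daily partitioning step might be sketched as below; this is a toy example with synthetic hotspot counts and a weather-suitability mask, and the variable names, thresholds, and fallback rule are illustrative assumptions rather than the study's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
annual_burned_ha = 1500.0   # hypothetical annual burned area for one moorland cell
days = 365

# Hypothetical daily VIIRS hotspot counts and an ERA-5-style suitability mask
hotspots = rng.poisson(0.05, days)   # most days have no detection
suitable = rng.random(days) < 0.4    # e.g. dry, mild, low-wind days

# Weight each day by hotspot counts on burn-suitable days;
# if no hotspots coincide with suitable days, fall back to the mask alone.
weights = (hotspots * suitable).astype(float)
if weights.sum() == 0:
    weights = suitable.astype(float)

daily_burned = annual_burned_ha * weights / weights.sum()
print(round(daily_burned.sum(), 1))
```

By construction the daily values sum back to the annual burned area, so the partition conserves the Sentinel-2-derived total.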

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: FRM4FIRE: Early recommendations for the characteristics and quality of airborne reference data used for satellite fire product evaluation

Authors: Farrer Owsley-Brown, Professor Martin Wooster, Bernardo Mota, Gareth Roberts, Dirk Schuettemeyer, Callum Middleton, Toby Wainwright
Affiliations: King's College London, National Physical Laboratory, University of Southampton, European Space Agency
The Fiducial Reference Measurements for Fire (FRM4FIRE) project aims to advance fire radiative power (FRP) validation activities by establishing protocols and standards for the validation of active fire and FRP data obtained from Earth observation. FRP is classified as an essential climate variable. However, it is currently categorised at the lowest level of validation by CEOS. FRP validation data is scarce due to the dynamic and ephemeral nature of active fires, as well as the logistical and technical challenges of obtaining independent, simultaneous FRP observations with satellite measurements. While the project will address uncertainties associated with satellite FRP products, this discussion will focus on the characterisation of airborne measurements used for satellite validation. The NCEO Airborne Earth Observatory (NAEO), based at King’s College London, has recently conducted several campaigns to collect airborne data over real-world wildfires and hot point targets, such as gas flares. Some of these measurements coincided with Sentinel-3’s FRP observations as part of a coordinated validation effort. Other measurements focussed on evaluating uncertainties within the airborne data itself. Using these data, this project will outline the protocols for collecting reference data for FRP validation and quantify the sources of uncertainty impacting airborne FRP retrievals. These sources include sensor-specific uncertainties such as the point spread function (PSF), modulation transfer function (MTF), geometric distortions from the optics, and post-processing techniques for georeferencing data. In addition to sensor-related factors, environmental effects, such as directional influences caused by vegetation, will also be considered. Ultimately, this work aims to establish a standard for collecting high-quality reference data for FRP validation and provide a framework for quantifying uncertainties. 
These advancements will support more reliable validation of satellite-based FRP measurements, which are widely used.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Top-Down Carbon Emission Estimates of the Extreme 2020 and 2024 Pantanal Wildfire Seasons

Authors: Ben Bradley, Dr Chris Wilson, Professor Martyn Chipperfield, Dr Carly Reddington, Dr Fiona O'Connor
Affiliations: University Of Leeds, National Centre for Earth Observation, Met Office
The Pantanal is the world’s largest wetland, with exceptionally high levels of biodiversity. However, its ability to act as a carbon sink and haven for wildlife is threatened by slash-and-burn biome destruction combined with climate-change-induced fire weather. Extreme wildfires in the Brazilian Pantanal burnt over 22% of its total area during 2024, triple the 12-year annual average¹. This comes in the wake of the devastating and unprecedented 2020 Pantanal wildfire season, which 2024 is on track to mirror. It is crucial to understand the consequences of this drastic change in the local wildfire regime, especially regarding the ability of the Pantanal to act as a carbon sink. Publicly accessible databases of wildfire characteristics and carbon fluxes exist; however, there can be considerable disagreement between them²,³. This is due to large uncertainties, originating from the underlying assumptions of the databases’ methodologies and the imperfect sensitivity of their satellite data inputs. Additionally, they can struggle to capture emissions from smouldering belowground biomass such as peat⁴, which is common in the Pantanal. Satellite observations of carbon monoxide (CO) released from wildfires provide another avenue to constrain these estimates, allowing more accurate carbon emissions to be measured⁵. Here, we derive carbon emission estimates for the 2020 and 2024 Pantanal wildfire seasons by utilising CO observations from the TROPOspheric Monitoring Instrument (TROPOMI). These are used as input to the inverse version of the global 3D chemical transport model TOMCAT, which uses data assimilation to optimally predict the surface fluxes of CO from wildfires. Emissions from several different fire inventories are scaled to match these CO fluxes, from which emission ratios are used to estimate the annual carbon emissions from the 2020 and 2024 Pantanal wildfires.
Analysis of carbon sequestration and methane emissions then allows the carbon cycle of the Pantanal to be evaluated for extreme fire years.
¹ ALARMES, https://alarmes.lasa.ufrj.br/
² Pan et al., 2020, Six global biomass burning emission datasets: intercomparison and application in one global aerosol model
³ Liu et al., 2020, Diagnosing spatial biases and uncertainties in global fire emissions inventories: Indonesia as regional case study
⁴ Datta and Krishnamoorti, 2022, Understanding the Greenhouse Gas Impact of Deforestation Fires in Indonesia and Brazil in 2019 and 2020
⁵ Byrne et al., 2024, Carbon emissions from the 2023 Canadian wildfires

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Landscape Fire Scars Monitoring in Eastern Europe with Deep Learning and Remote Sensing Data

Authors: Igor Glushkov, Ilona Juravlyova, Nickolay
Affiliations: Greenpeace
Landscape fires play an important role in transforming landscapes and contributing to greenhouse gas emissions. However, some types of landscape fires, especially short-term spring fires, are not well represented in official reports in many countries, and it is unclear which land-cover types are associated with the spread of spring fires. The Landscape Fire Scars Map of Eastern Europe is an experimental product created by the Greenpeace Global Mapping Hub, based on remote sensing data and an ensemble of pre-trained deep learning models. The map automatically delineates the boundaries of areas affected by wildfires using Sentinel-2 satellite data with a spatial resolution of 10 m/pixel, and provides regular updates every three days for the year 2024 over ten countries (Belarus, Bulgaria, Czechia, Hungary, Poland, Republic of Moldova, Romania, Russian Federation, Slovakia, Ukraine). The results of the volunteer fire mapping project in 2020 and our expert mapping contributions were used to create training and validation datasets to develop an ensemble of models (based on the DeepLabv3 architecture) for semantic segmentation of satellite data. The entire analyzed area (5 million square kilometers) is divided into 1° × 1° plots, and automatic processing is performed within each plot separately, followed by merging of the results. Post-processing includes the removal of polygon intersections, filtering based on Normalized Burn Ratio (NBR) index calculations for each burned area, intersection with thermal points, comparison of time series for each polygon within the analyzed area, and then cropping to Eastern European borders and geometry simplification. A land cover class was assigned to each scar polygon based on 10-meter resolution ESA WorldCover land use and land cover data.
As of 2024, the map and the method of its compilation are still in the testing phase, so the data will not be updated with the same frequency for the entire territory; more burnable areas (according to the number of FIRMS hotspots) will be prioritized for processing and analysis over less burnable areas. Fires on the map are classified into confirmed fires (visible on satellite imagery over a long period of time and/or overlapping with hotspots) and unconfirmed fires (not visible on other imagery or not overlapping with hotspots but detected on at least one satellite image). According to the landscape fire map, in 2024 the area of confirmed fire scars (overlapping FIRMS hotspots) is 18 million hectares; when unconfirmed areas are included, the total reaches 20 million hectares. Obtaining consistent and independent data across all countries to facilitate comparison with our preliminary findings is challenging. Consequently, a subset of countries was selected for comparison with other datasets. Specifically, our preliminary results were compared with official statistics from the Russian Federation, which reported a total burned area of 16 million hectares as of November 1, 2024. According to the official remote fire monitoring system (ISDM-Rosleskhoz), the area of landscape fires in Russia as of November 1 was approximately 13.7 million hectares, which is approximately 2 million hectares less than our result. In the Russian Federation, 51% of the total area of detected fires in 2024 occurred in forests. This indicates that, on average, forests and all other natural territories burn at approximately the same rate. Approximately 32% of the total area of fires occurred in herbaceous vegetation, excluding wetlands and vegetation on cultivated lands. The majority of such vegetation consists of agricultural lands that are either out of use and abandoned or are used very rarely (e.g., occasionally used pastures and hayfields).
Fires on such lands are most strongly associated with hazardous agricultural practices or legislative provisions that incentivize landowners to use fire in dangerous ways. Approximately 3% of the total area of landscape fires occurred on cultivated lands, involving the burning of stubble, straw, and other plant residues. This indicates that fire is still widely used in Russian agriculture, despite official prohibitions (burning stubble and straw, except for rice straw, is prohibited by law) and the detrimental effects of such burning on soil fertility. In Ukraine, the distribution of landscape fire scars reflects the consequences of the ongoing Russian invasion. The concentration of fires in combat zones forms an almost continuous contour of burned areas along the front line. This is yet another consequence of the war, affecting not only human populations but also ecosystems and landscapes. A web application representing the results of our work is still in development, and we plan to share it during the conference. As this landscape fire map is being produced – and is the first to work for such an area and with such a range of features in 2024 – there may be some errors in it. The authors are trying to identify and correct these as quickly as possible, and peer review and artifact removal will be carried out regularly during development and testing.
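The NBR-based filtering used in the post-processing chain could look roughly like the sketch below; the band values and the dNBR threshold are invented for illustration and are not the project's tuned parameters.

```python
import numpy as np

def nbr(nir, swir):
    # Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)
    return (nir - swir) / (nir + swir + 1e-9)

# Hypothetical mean Sentinel-2 reflectances per candidate scar polygon
# (B8 as NIR, B12 as SWIR), before and after the fire date.
pre  = {"nir": np.array([0.30, 0.28, 0.31]), "swir": np.array([0.10, 0.12, 0.11])}
post = {"nir": np.array([0.12, 0.27, 0.14]), "swir": np.array([0.22, 0.13, 0.20])}

dnbr = nbr(pre["nir"], pre["swir"]) - nbr(post["nir"], post["swir"])
burned = dnbr > 0.27   # an illustrative dNBR burned/unburned threshold
print(burned)
```

Polygons failing the threshold would be dropped (or demoted to "unconfirmed") before the hotspot intersection step.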

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Mapping Vegetation Changes with Five Years of Sentinel-1 and -2 Time Series in Fire-affected Fynbos Areas in the Overberg District, South Africa

Authors: Lieselotte Spieß, Celina Nolopp, Marco Wolsza, Dr. Jussi Baade, Andrew Skowno, Stephni Van Der Merwe, Vernon Visser, Prof. Dr. Christiane Schmullius
Affiliations: Department for Earth Observation, Friedrich-Schiller-University Jena, Department of Geography, Friedrich-Schiller-University Jena, South African National Biodiversity Institut, Department of Statistical Sciences, University of Cape Town
Fires are a natural component of the Fynbos ecosystem in South Africa, an ecosystem globally renowned for its high biodiversity and endemic species. Fires play a central role in influencing the dynamics of the Fynbos ecosystem and driving vegetation regeneration. At the same time, they pose a threat to biomass and the long-term stability of the ecosystem. This study aims to map the effects of fire on Fynbos vegetation in the southern Overberg District over the period from January 2018 until December 2022 and is part of an M.Sc. seminar at the University of Jena in cooperation with South African institutions. The data cubes are part of the South Africa Land Degradation Monitor (SALDi) project. The project employs high-resolution Earth observation data to analyze subtle land cover changes that may contribute to degradation. Funded under the SPACES II program by the German Federal Ministry of Education and Research (BMBF), the project utilizes data from international missions, including Sentinel-1 and -2 of the European Space Agency (ESA) and TanDEM-X of the German Aerospace Center (DLR). The identification and spatial delineation of burned areas were based on the Burned Area Product from the Moderate Resolution Imaging Spectroradiometer (MODIS), which provides global data with high temporal resolution. For a detailed analysis of vegetation changes, Sentinel-1 (S1) and Sentinel-2 (S2) data were employed. Sentinel-2 data facilitated the calculation of spectral indices, such as the Burned Area Index for Sentinel-2 (BAIS2) and the Enhanced Vegetation Index (EVI), used to quantify vegetation regeneration after fire. The Sentinel-1 data are utilized to analyze vegetation structure, providing insights into the physical properties of the landscape through radar backscatter and coherence.
Comparison of our S2 EVI time series with the EVI time series from the Landsat Vegetation Change Inspector (zandersamuel.users.earthengine.app/view/evi-trend-inspector) over the same period shows that the trends follow a similar pattern, and both time series indicate a decrease in EVI at the time of fire. However, the EVI values in the Venter time series exhibit more extreme fluctuations. The four decades of Landsat EVI time series indicate the strong variation of the Fynbos vegetation caused by fire. Fires cause significant changes in vegetation cover; hence, EVI clearly identifies burned areas. Vegetation regeneration begins within a few weeks post-fire; however, a detectable increase in S2 spectral vegetation indices occurs only after a longer period, i.e. months. This regeneration highlights the adaptability of the Fynbos ecosystem. To investigate the long-term stability of the woody Fynbos components, particularly in light of shifting fire regimes driven by climate change, S1 analyses will be added and presented. These are expected to separate leaf regeneration from fully or partly stand-replacing fire impacts in the 5-year observation period. This poster aims to demonstrate the capabilities of synergistic radar-optical time series in a very complex spatio-temporal environment. The goal is to explore operational perspectives from Earth observation for robust monitoring approaches for fire-affected areas and to identify remote sensing applications for ecosystem monitoring and conservation strategy planning.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Extreme Fire Sourced Haze in Mainland Southeast Asia: Using a New AQ Network to Evaluate the Outputs of Air Quality Models Fed with Satellite Data of Fire Emissions

Authors: Toby Wainwright, Dr Mark Grosvenor, Professor Martin Wooster, Dr Zixia Liu, Associate Professor Chittana Phompila, Dr Veerachai Tanpipat, Dr Quang Nguyen
Affiliations: Department of Geography, King's College London, Leverhulme Centre for Wildfires, Environment and Society, King's College London, NERC National Centre for Earth Observation, King's College London, Faculty of Forestry, The National University of Laos, Upper ASEAN Wildland Fire Special Research Unit, Kasetsart University, Vietnam Meteorological and Hydrological Administration
Global and regional atmospheric models, which are fed by satellite observations of fires and derived estimations of their emissions, indicate that air quality in northern parts of mainland Southeast Asia is amongst the worst in the world during the dry season, apparently due to smoke from agricultural biomass burning. A lack of in-situ air quality data in the region means it is not possible to validate the accuracy of these extreme model outputs. A study in Indonesia utilised ‘low-cost’ air quality sensors to assess the effectiveness of AQ models during the extreme 2019 peatland fires, and found that the Copernicus Atmosphere Monitoring Service (CAMS) datasets showed good agreement with ground-based data (Grosvenor et al., 2024). The approach from that study was repeated on a larger scale across Thailand, Laos and Vietnam, where a network of ‘low-cost’ particulate matter sensors (PurpleAir) was installed, a correction factor for biomass smoke was derived, and the data were used as a validation source for air quality models. Despite the good performance of CAMS in Indonesia, there are different uncertainties with satellite measurements of fires which could influence the accuracy of models in northern Southeast Asia. The satellite data utilised by these models may not fully capture all fire-sourced smoke emissions, as they can miss smaller agricultural fires, fires occurring outside of satellite overpasses, and those obscured by clouds. They can also potentially calculate emissions incorrectly by using generalised emission factors for tropical forests. Furthermore, the spatial resolution of models is often on the scale of tens of kilometers, encompassing diverse areas as a single datapoint and losing the variability within that grid cell. This can all lead to potential underestimations of human exposure to dangerous respiratory PM₂.₅.
Ideally, accurate air quality data, often originating from sophisticated and expensive in-situ monitoring systems, is used to validate these models, but these types of systems are lacking in LMICs, such as Laos where in-situ measurements are apparently non-existent. Often ‘low-cost’ sensors are used to validate model outputs in these regions, but many of these instruments assume a particle mass, which is often unsuitable for biomass derived particulate matter. To address this, a mass correction factor was derived in-situ from regional biomass burning sources in order to correct the sensor data. Additionally, field measurements were made for improved gas and particulate emission factors from these types of fires, to help refine model inputs. This makes the network the first of its kind with the ability to accurately quantify surface level mass concentration of fire-sourced PM₂.₅. We present discrepancies between the network dataset and the CAMS Near-Real-Time forecasts and the EAC4 reanalysis datasets and investigate the possible reasons for these differences.
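A single multiplicative mass correction factor of the kind described, derived from co-locating a low-cost sensor with a reference instrument, might be computed as below; all numbers are invented for illustration, not measured values from this network.

```python
import numpy as np

# Hypothetical co-located 1-hour averages (ug/m3):
# low-cost optical sensor vs. reference gravimetric-equivalent instrument.
sensor_pm25 = np.array([38.0, 55.0, 72.0, 90.0, 120.0])
reference_pm25 = np.array([25.0, 36.0, 48.0, 60.0, 80.0])

# Zero-intercept least squares: minimise ||reference - k * sensor||,
# giving k = sum(s * r) / sum(s * s).
k = np.sum(sensor_pm25 * reference_pm25) / np.sum(sensor_pm25 ** 2)
corrected = k * sensor_pm25
print(round(k, 3))
```

Because biomass-smoke particles differ in density and optical properties from the sensor's assumed aerosol, k here is below 1, scaling the raw readings down toward the reference values.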

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: A Wildfire Hazard Map for Germany

Authors: Johannes Heisig
Affiliations: Institute For Geoinformatics, University Of Münster
In the light of climate change, wildfires are increasingly recognized as a significant environmental threat in regions traditionally considered less vulnerable to such events, such as Central Europe. This trend is exemplified in Germany, where rising temperatures and prolonged droughts have recently heightened the risk of wildfires, posing substantial threats to local ecosystems and communities inadequately prepared to address such incidents. This study presents a comprehensive wildfire hazard map for Germany, developed through the integration of satellite Earth observation data and advanced modeling techniques. By considering key factors such as vegetation fuels, weather types, human activity, and historical fire occurrences, we aim to offer insights that inform policymakers, land managers, and the public on potential wildfire risks. The generated hazard map not only serves as a critical tool for proactive wildfire management and mitigation strategies but also emphasizes the need for increased awareness and education within a fire-illiterate society that may be ill-prepared for the emerging realities of wildfire risks in the region. Fire spread is driven by landscape properties, weather conditions, and ignition locations. To represent the fire landscape, we employed a widely adopted methodological framework consisting of eight data layers. These layers encompass surface fuels characterized by Scott and Burgan's standard fire behavior fuel models, canopy fuels defined by attributes such as canopy cover, canopy height, canopy base height, and canopy bulk density, as well as terrain features including elevation, slope, and aspect. Weather conditions were incorporated both directly — through wind speed and direction — and indirectly — via fuel moisture content.
A decade's worth of hourly weather data, encompassing variables such as air temperature, relative humidity, precipitation, wind speed and direction, and solar radiation, was binned into 120 distinct weather types. Fire spread was simulated repeatedly for each weather type, with the final hazard map weighted according to the relative frequency of each scenario. This approach effectively captures a diverse array of weather conditions while prioritizing their realistic probabilities. Additionally, we calculated dead and live fuel moisture for each weather type by applying the Nelson model. To address regional variations in weather patterns (e.g., alpine areas versus the north-western areas with maritime influence), we delineated six fire occurrence areas (FOAs) across Germany by spatially clustering grids based on bioclimatic parameters. Each FOA was assigned a unique set of weather types that reflect its specific characteristics. Recognizing that wildfire ignitions in Central Europe are primarily driven by human activities, we employed a geostatistical approach to predict ignition density. A point pattern model was developed to establish relationships between historical wildfire ignition locations and predictors such as proximity to major roads, urban and military zones, population density, forest road networks, and terrain features. The resulting predicted ignition density grid informed a stratified spatial random sampling strategy to select several hundred thousand ignition locations for subsequent fire spread simulations. Simulations were conducted using RandIG, an efficient command-line based software that implements the Minimum-Travel-Time (MTT) algorithm for modeling fire spread. Wildfire hazard was derived from two primary outputs of the simulation - conditional burn probability and flame length. The results for each weather type were weighted according to its relative frequency. 
Values of both components were categorized into defined classes and subsequently combined by applying an ordinal scale that ranges from ‘low’ to ‘very high’ fire hazard. The final map has a spatial resolution of 180 meters. The wildfire hazard map developed through this study serves not only as an essential tool for short- to medium-term wildfire mitigation but also as a first step towards long-term strategic planning in response to climate change. By providing a clear visualization of wildfire risks across the entire country, stakeholders can implement targeted interventions such as land management practices, community preparedness programs, and infrastructure improvements to reduce vulnerability. As climate change continues to increase the frequency and intensity of wildfires, collective efforts to adapt to these challenges will become more critical. The integration of data-driven insights into policy development and community engagement will empower stakeholders to proactively manage wildfire risks and safeguard both ecosystems and human livelihoods in the face of a changing climate.
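The frequency-weighted combination of simulation outputs into an ordinal hazard class might be sketched as follows; the class thresholds, weather-type frequencies, and grid values here are illustrative stand-ins for the study's calibrated ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n_types, n_cells = 4, 6

# Hypothetical per-weather-type simulation outputs for a few grid cells.
burn_prob = rng.random((n_types, n_cells)) * 0.2   # conditional burn probability
flame_len = rng.random((n_types, n_cells)) * 3.0   # flame length (m)
freq = np.array([0.5, 0.3, 0.15, 0.05])            # relative weather-type frequency

# Weight each weather type's output by how often that weather occurs.
bp = (freq[:, None] * burn_prob).sum(axis=0)
fl = (freq[:, None] * flame_len).sum(axis=0)

# Combine the two components on an ordinal scale via simple class thresholds.
bp_cls = np.digitize(bp, [0.02, 0.05, 0.10])       # classes 0..3
fl_cls = np.digitize(fl, [0.5, 1.0, 2.0])          # classes 0..3
hazard = np.minimum(bp_cls + fl_cls, 4)            # 0 = low ... 4 = very high
print(hazard)
```

Capping the summed classes yields the single ordinal hazard value per cell that the final 180 m map reports.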

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Assessment of forest fires and erosion prediction using Sentinel-1 and Sentinel-2: Analysis of advanced indices

Authors: Tomas Pugni, Diego Madruga, César Sáenz, Alfonso Bermejo-Saiz, José Alonso, Silvia Merino-De-Miguel, Alicia Palacios-Orueta
Affiliations: Department of Forest and Environmental Engineering and Management, ETSIMFMN, Universidad Politécnica de Madrid, Department of Agroforestry Engineering, ETSIAAB, Universidad Politécnica de Madrid, Research group Geo-QuBiDy, Universidad Politécnica de Madrid
Forest fires pose a significant threat to Mediterranean ecosystems, causing widespread damage to vegetation, disrupting soil structure, and increasing the risk of erosion. This study investigates the impact of a forest fire in a Mediterranean pine forest located in Juzcar, Spain, leveraging data acquired from the Sentinel-1 and Sentinel-2 satellites. By integrating multispectral and radar analysis, a methodology was developed to classify affected areas into three distinct categories: burned, dry, and healthy wooded areas. Advanced spectral indices, including MIRBI, dNBR, RdNBR, and RBR, were employed in conjunction with time series analysis of the Normalized Difference Vegetation Index (NDVI) to quantify fire severity and assess vegetation recovery. The analysis revealed a substantial decrease in NDVI values within the burned areas, indicative of significant vegetation loss and reduced photosynthetic activity. Furthermore, structural indices such as Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (fAPAR), and Fraction of Vegetation Cover (fCOVER) were analyzed to characterize the functional status of post-fire vegetation. These indices provide valuable insights into the vegetation's capacity to intercept sunlight and perform photosynthesis, crucial processes for ecosystem recovery. Utilizing Synthetic Aperture Radar (SAR) data from Sentinel-1, specifically backscatter in VV and VH polarizations, enabled the detection of alterations in soil structure and surface moisture. Radar backscatter is sensitive to terrain roughness and moisture content, providing valuable information on post-fire soil conditions. The analysis revealed significant changes in backscatter values within the burned areas, suggesting modifications in soil structure and moisture levels. Interferometric Synthetic Aperture Radar (InSAR) techniques were employed to model terrain stability and predict erosion risk based on biomass and vegetation cover loss. 
InSAR enables the detection of subtle ground movements, facilitating the identification of areas susceptible to erosion. The InSAR results indicated an increased susceptibility to erosion in the burned areas, emphasizing the need for effective mitigation measures. This integrated approach combines the spectral accuracy of Sentinel-2 with the active detection capabilities of Sentinel-1, offering a robust and dynamic perspective on fire impact and its relationship with erosion processes. The fusion of optical and radar data provides a more comprehensive understanding of the fire's effects, enabling a thorough damage assessment and facilitating the planning of recovery efforts. In conclusion, this study underscores the importance of remote sensing for the assessment and management of forest fires in Mediterranean ecosystems. The integration of multispectral and radar data provides a powerful tool for understanding fire impacts, predicting environmental risks, and promoting forest recovery.
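The relativized severity indices named in the abstract (RdNBR, RBR) are simple transforms of pre- and post-fire NBR; a minimal sketch with made-up pixel values:

```python
import numpy as np

nbr_pre = np.array([0.55, 0.40, 0.10])    # hypothetical pre-fire NBR per pixel
nbr_post = np.array([-0.20, 0.05, -0.05])  # hypothetical post-fire NBR

dnbr = nbr_pre - nbr_post
# Relativized indices reduce the dependence of severity on the
# amount of pre-fire vegetation, aiding comparison across stand types.
rdnbr = dnbr / np.sqrt(np.abs(nbr_pre))   # relativized dNBR
rbr = dnbr / (nbr_pre + 1.001)            # relativized burn ratio
print(np.round(rbr, 3))
```

The sparse-vegetation pixel (pre-fire NBR 0.10) gets a proportionally larger boost from the relativization than the dense pixels, which is the intended effect.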

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Terrestrial LiDAR Data for Structural Variable Estimation and Biomass Consumption Through Radiative Transfer Models

Authors: Adrián Enrico Baissero García, Dr. Mariano García Alonso, Dr. Patricia Oliva
Affiliations: Universidad De Alcalá
Forests are vital components of the global carbon cycle, acting as both carbon sinks and sources depending on their health and disturbance levels (Vahedifard et al., 2024). Monitoring changes in forest biomass and structure, particularly in the context of increasing wildfire intensity and frequency (Ruffault et al., 2020), is necessary for understanding ecosystem dynamics and managing resources (Bispo et al., 2016; Hämäläinen et al., 2024). Traditional field-based methods for biomass estimation, while reliable, are labor-intensive and often limited in scope. Terrestrial LiDAR systems (TLS) offer a fast, non-destructive, high-resolution alternative for capturing forest structure and estimating aboveground biomass (AGB) (Calders et al., 2020; Newnham et al., 2015; Delagrange et al., 2014; Hackenberg et al., 2015). The objective of this study was to estimate AGB retrieved from TLS data in areas affected by wildfires in Spain, using the validated results as a baseline for biomass consumption simulations generated with radiative transfer models. The study took place across three distinct wildfire-affected areas in Spain. The first site, located in Sierra de la Culebra, Zamora, experienced a megafire in 2022 that burned approximately 60,000 hectares. Measurements were carried out in 2023, at the plot and understory level, focusing on Pinus sylvestris and Pinus pinea. The second site, Sierra de Gata in Cáceres, was affected by a wildfire in 2023, which burned 10,863 hectares. Data collection occurred in 2024 at the tree level, targeting the same species, Pinus sylvestris and Pinus pinaster. The third site, Cerro Muriano in Córdoba, covered approximately 4000 hectares affected by a wildfire in 2024, with measurements conducted three months later. This smaller dataset included both tree- and understory-level measurements, focusing on Quercus ilex and Cistus ladanifer.
These three sites provided a diverse range of conditions and vegetation types, enabling analysis of biomass estimation under varying fire severities and structural characteristics. Data collection included both traditional field measurements for validation and TLS data for biomass estimation. Each plot represented varying degrees of fire severity and structural classes, including differences in understory presence and tree density. In the first wildfire (Sierra de la Culebra, Zamora), plots had a radius of 15 meters, while in the second (Sierra de Gata, Cáceres) and third (Cerro Muriano, Córdoba) wildfires, plots had a radius of 17 meters. This variation accounted for the high tree density in the first wildfire, which made it challenging to increase plot size due to occlusion. TLS data were acquired through five scans per plot, one at the center and four positioned approximately 7.5-8.5 meters away in each cardinal direction, prioritizing occlusion minimization. Biomass was estimated from diameter at breast height (DBH) using allometric equations proposed by Montero et al. (2005). At the plot level, DBH was measured for all trees, while height was recorded for one in every three trees, due to their similar sizes and heights. At the tree level, 20 trees of varying distances from the center, sizes, and thicknesses were selected per plot, with DBH, height, and centimetric geographic position measured. Geographic positions of all trees within the plot were also recorded to enable the extrapolation of TLS-derived data. For understory data, two 1×1 meter sections per plot containing at least one individual plant were destructively sampled. Biomass that fell outside the defined sections was removed, and the collected material was weighed to quantify understory biomass. 
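Allometric models of the kind used here (Montero et al., 2005) typically take a log-linear form, ln(W) = a + b·ln(DBH). A minimal sketch of that relation, with placeholder coefficients rather than the published species-specific values:

```python
import math

def allometric_biomass(dbh_cm, a, b, cf=1.0):
    # Log-linear allometric model: ln(W) = a + b * ln(DBH), i.e.
    # W = cf * exp(a) * DBH**b, where cf is an optional correction
    # factor for the log-transformation bias.
    return cf * math.exp(a + b * math.log(dbh_cm))

# Hypothetical coefficients for illustration only (not the published values):
biomass_kg = allometric_biomass(dbh_cm=25.0, a=-2.0, b=2.4)
```

With real coefficients, one such equation is applied per species and per biomass fraction (trunk, branch classes), matching the strata used later for consumption estimates.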
In burned plots, biomass consumption was visually estimated for each biomass fraction, including trunks, large branches (diameter > 7 cm), medium branches (2–7 cm), and fine branches with leaves (< 2 cm), following the strata of the Montero et al. (2005) allometric equations. Additionally, the GeoCBI (Geometrically Corrected Composite Burn Index) method was applied to assess fire severity. Biomass estimations derived from TLS data demonstrated high precision in unburnt plots, with an accuracy higher than 90%. Accuracy declined as fire severity increased, reaching approximately 70% in the most severely burnt plots. This decline was attributed to an average 38.8% reduction in laser pulse returns caused by the altered structural and spectral properties of burnt wood. These validated data served as the baseline for generating realistic forest simulations, representing variations in forest age, tree density, understory coverage and height, and terrain characteristics. Each simulated forest plot was first modeled in an unburnt state and subsequently subjected to varying degrees of virtual wildfire severity. This process replicated the consumption characteristics observed in the validated burn data, which were extracted from TLS measurements of the forest strata mentioned earlier. Radiative transfer models enabled the simulation of laser pulse interactions with forest elements across different fire severities. The models accounted for the scattering, absorption and reflectance of the laser pulse at the same wavelength as the validated data, 1610 nm. The simulation results of biomass consumption can further be used to develop combustion completeness models, which may incorporate more operationally accessible data, such as optical remote sensing, to assess carbon emissions effectively.
Bibliography:
Bispo, P.D.C., Santos, J.R.D., Valeriano, M.D.M., Graça, P.M.L.D.A., Balzter, H., França, H., Bispo, P.D.C., 2016. Predictive models of primary tropical forest structure from geomorphometric variables based on SRTM in the Tapajós region, Brazilian Amazon. PLoS ONE 11. doi:10.1371/journal.pone.0152009.
Calders, K., Adams, J., Armston, J., Bartholomeus, H., Bauwens, S., Bentley, L.P., Chave, J., Danson, F.M., Demol, M., Disney, M., Gaulton, R., Moorthy, S.M.K., Levick, S.R., Saarinen, N., Schaaf, C., Stovall, A., Terryn, L., Wilkes, P., Verbeeck, H., 2020. Terrestrial laser scanning in forest ecology: Expanding the horizon. Remote Sensing of Environment 251. doi:10.1016/j.rse.2020.112102.
Delagrange, S., Jauvin, C., Rochon, P., 2014. PypeTree: A tool for reconstructing tree perennial tissues from point clouds. Sensors 14, 4271–4289. doi:10.3390/s140304271.
Hackenberg, J., Wassenberg, M., Spiecker, H., Sun, D., 2015. Non destructive method for biomass prediction combining TLS derived tree volume and wood density. Forests 6, 1274–1300. doi:10.3390/f6041274.
Hämäläinen, A., Runnel, K., Ranius, T., Strengbom, J., 2024. Diversity of forest structures important for biodiversity is determined by the combined effects of productivity, stand age, and management. AMBIO 53, 718–729. doi:10.1007/s13280-023-01971-9.
Montero, G., Ruiz-Peinado, R., Muñoz, M., 2005. Producción de biomasa y fijación de CO2 por los bosques españoles. Instituto Nacional de Investigación y Tecnología Agraria y Alimentaria, Ministerio de Educación y Ciencia.
Newnham, G.J., Armston, J.D., Calders, K., Disney, M.I., Lovell, J.L., Schaaf, C.B., Strahler, A.H., Danson, F.M., 2015. Terrestrial laser scanning for plot-scale forest measurement. Current Forestry Reports 1, 239–251. doi:10.1007/s40725-015-0025-5.
Ruffault, J., Curt, T., Moron, V., Trigo, R.M., Mouillot, F., Koutsias, N., Pimont, F., Martin-StPaul, N., Barbero, R., Dupuy, J.L., Russo, A., Belhadj-Khedher, C., 2020. Increased likelihood of heat-induced large wildfires in the Mediterranean Basin. Scientific Reports 10. doi:10.1038/s41598-020-70069-z.
Vahedifard, F., Abdollahi, M., Leshchinsky, B.A., Stark, T.D., Sadegh, M., AghaKouchak, A., 2024. Interdependencies between wildfire-induced alterations in soil properties, near-surface processes, and geohazards. Earth and Space Science 11. doi:10.1029/2023EA003498.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Biases in MODIS Burned Area Products and Their Impact on the Fire Activity Decline in African Savannas

Authors: Adrià Descals, Rosa Ayuso, Ivan Janssens, Josep Peñuelas
Affiliations: University of Antwerp, CREAF
MODIS burned area products indicate a decline in burned area in African tropical savannas during the period from 2001 to 2021. This trend explains the burned area decline observed at the global scale, as fires in the African savanna account for most of the global burned area. Previous studies have proposed two main explanations for this decline in burned area. The first suggests that the decline is linked to changes in fire weather in the region. The second proposes that the observed decrease in burned area may be associated with the expansion of croplands in the African savanna, where fragmented areas between crops are less likely to burn. Additionally, other studies have indicated that the decline might reflect biases in the MODIS burned area products, primarily relating to potential changes in the size of fires. This study aims to determine whether biases from the low resolution of MODIS explain the decline in burned area within Africa's tropical savannas. To achieve this, we validated the MCD64A1 burned area product through random sampling and visual interpretation of Landsat images from 2005 and 2020, which represent the years of maximum and minimum burned area, respectively. We generated post-stratified burned area estimates to assess whether the annual burned area in 2020 was significantly lower than in 2005. The area estimates were categorized into areas fragmented and non-fragmented by crops in 2020. Non-fragmented areas include extensive savanna regions without crops, such as the savanna vegetation in the Central African Republic. Finally, we analysed trends in vegetation indices and fire intensity to determine whether the vegetation greening observed in the past two decades is associated with any potential bias in the MCD64A1 product. Results indicated that the annual burned area in 2020 was significantly lower than in 2005 in fragmented areas, supporting the idea that croplands contribute to the decline in fire activity, as suggested by previous studies. 
However, the difference between 2005 and 2020 was not significant in non-fragmented areas, indicating no detectable decline in burned area over the past two decades. These findings suggest that the decline observed using MODIS data may be partly an artifact of changes in fire regimes, specifically the increase in vegetation moisture and the occurrence of small, low-intensity fires in non-fragmented areas, which MODIS is unable to detect accurately. These findings have important implications for global change and fire studies that rely on MODIS burned area products at both the continental and global scales. As MODIS-based burned area products are widely used, their trends must be carefully validated at the regional level. Validation exercises are essential to support conclusions regarding greenhouse gas emissions and their trends in the context of global warming. Furthermore, our results challenge the widely accepted assumption of a pronounced decline in burned area in Africa reported in many previous studies.
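A post-stratified area estimate of the kind described above combines each stratum's share of the mapped area with the burned fraction observed in its reference samples. A minimal sketch of the estimator (function name and example strata are illustrative, not taken from the study):

```python
def post_stratified_estimate(strata):
    # strata: list of (stratum_weight, burned_fraction) pairs, where
    # the weight is the stratum's share of the total mapped area and
    # the fraction comes from reference-sample interpretation
    # (here, visually interpreted Landsat imagery).
    total = sum(w for w, _ in strata)
    if abs(total - 1.0) > 1e-9:
        raise ValueError("stratum weights must sum to 1")
    # Weighted mean of per-stratum burned fractions.
    return sum(w * p for w, p in strata)

# Hypothetical strata: map-burned vs map-unburned, with reference fractions.
burned_proportion = post_stratified_estimate([(0.10, 0.85), (0.90, 0.02)])
```

Repeating the estimate for 2005 and 2020, with standard errors per stratum, supports the significance test between the two years.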

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Quantifying Post-fire Recovery Through Forest Structure Indicators from Remote Sensing in a Mediterranean Landscape

Authors: Martí Perpinyà-Vallès, Dr Aitor Ameztegui, Dr Maria José Escorihuela, Laia Romero
Affiliations: Lobelia Earth, Universitat de Lleida, isardSAT
The Mediterranean region is widely recognized as a wildfire-prone ecosystem, characterized by species that have evolved specific adaptations to recover from forest fires. These adaptive traits generate highly heterogeneous landscapes, with vegetation at varying stages of growth and maturity. In recent years, significant research efforts have focused on mapping post-fire recovery trends in Mediterranean forests using satellite imagery. Despite these advancements, challenges persist in accurately quantifying post-fire forest structure. This limitation arises primarily from the widespread use of spectral indices, which serve only as proxies for recovery. These indices often saturate before vegetation fully regains its pre-fire structural attributes, limiting their utility in assessing true recovery trajectories. This study aims to provide a more comprehensive understanding of post-fire forest recovery by quantifying key structural indicators, including above-ground biomass (AGB), canopy height, canopy cover, and foliage height diversity (FHD). To achieve this, we employed a combination of satellite data sources and a Deep Learning U-Net-based model to generate high-resolution (10-m) forest structure variables across the entire region of Catalunya, covering the period from 2016 to 2024. The generated variables were rigorously validated against the 4th Spanish National Forest Inventory dataset, with associated uncertainties estimated to ensure robust assessments. Height and AGB are predicted with an R2 over 0.7 and an RMSE of 4 meters and 30 Mg/ha, respectively. FHD correlates highly (Pearson R > 0.7) with a measure of height diversity calculated with the Shannon-Wiener index applied to the different heights and expansion factors of the trees within each plot. Using a large dataset of wildfire profiles, we analysed 8 wildfires in coastal zones, 4 wildfires in mountainous areas, and 6 inland wildfires. 
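The height-diversity measure used for validating FHD is based on the Shannon-Wiener index, H' = -Σ p_i ln(p_i), applied over height classes. A minimal sketch of the index itself (illustrative, not the study's implementation):

```python
import math

def shannon_index(proportions):
    # Shannon-Wiener index H' = -sum(p_i * ln(p_i)) over classes with
    # p_i > 0; proportions are each class's share of the total
    # (e.g. of expansion-factor-weighted trees per height class).
    return -sum(p * math.log(p) for p in proportions if p > 0)

# Four equally occupied height classes give the maximum H' = ln(4).
h = shannon_index([0.25, 0.25, 0.25, 0.25])
```

The index is maximal when trees are spread evenly across height classes and zero when all trees fall in a single class, mirroring what FHD captures in the vertical foliage profile.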
These sites were chosen to exclude overlaps with more recent fires, enabling clear differentiation between distinct recovery stages. For each fire, we examined the distributions of forest structure indicators and observed diverse growth patterns relative to surrounding unburned areas. Although older wildfires typically exhibited higher levels of recovery, certain younger wildfires displayed notably rapid recovery trajectories. This variability highlights the need to consider additional factors influencing recovery dynamics. To elucidate these differences, we conducted an in-depth analysis incorporating various biophysical and environmental parameters. These included the characteristics of buffer zones (serving as pseudo-control plots), dominant plant species within and surrounding the wildfire-affected areas, topographical features (aspect and slope), and wildfire severity. These factors provided valuable insights into the complex interplay of environmental and biological processes driving recovery. Furthermore, temporal analyses of recovery trends during the 2016–2024 period revealed distinct patterns. Recent wildfires (0–5 years old) generally exhibited low recovery levels, particularly in terms of AGB, canopy height, and foliage height diversity. Fires aged 5–15 years demonstrated the steepest recovery gradients, while the oldest wildfires approached either full recovery or a state of signal saturation, suggesting stable structural conditions. This study advances our understanding of post-fire recovery by integrating detailed forest structural measurements with environmental and temporal analyses. The findings underscore the importance of adopting high-resolution remote sensing techniques alongside deep learning approaches to improve post-fire forest monitoring. 
This approach not only enhances our capacity to assess recovery but also provides valuable insights into the resilience and dynamics of Mediterranean forest ecosystems under increasing wildfire pressures.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Quantifying the effect of bush encroachment on fuels and fire emissions in southern Africa with a satellite-based data-model fusion approach

Authors: Daniel Kinalczyk, Matthias Forkel
Affiliations: TUD Dresden University of Technology, Junior Professorship in Environmental Remote Sensing
Bush encroachment, the expansion of woody biomass at the expense of diverse grassland ecosystems, has been documented across southern Africa since at least the late nineteenth century. This shift in vegetation cover has significant implications for the region's fire regimes and associated greenhouse gas emissions. Increased woody biomass results in a higher carbon stock, which, when burned, leads to greater emissions of carbon dioxide and other greenhouse gases compared to the grassland-dominated fires of the past. Using the modelling framework Satellite data-model fusion approach for Fuel loads, Fuel Moisture, Fuel consumption and Fire emissions (S4F), we quantify how changes in vegetation cover affect surface fuel loads and, ultimately, fire emissions. To this end, satellite data products such as Leaf Area Index (LAI) and fractional cover (fCover) from Sentinel-3, Soil Water Index (SWI) from ASCAT, as well as land cover (LC), above-ground biomass from ESA CCI, and burned area derived from Sentinel-2, are used to estimate fuel properties and fire emissions at a high spatial resolution of 300 m. Specifically, we quantify effects on combustion completeness and combustion efficiency, as well as on carbon monoxide (CO), carbon dioxide (CO2), nitrogen oxides (NOx) and particulate matter (PM) emissions. Results demonstrate a marked increase in greenhouse gas emissions from wildfires in southern Africa over time, correlating with the documented rise in woody biomass due to bush encroachment. This study highlights the critical role of vegetation dynamics in shaping fire emissions. It underscores the importance of integrating land cover changes and vegetation dynamics into regional and global fire emissions and carbon cycle models. 
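Fire emission estimates of this kind generally follow the classical bookkeeping relation of Seiler and Crutzen (1980): emitted mass of a species = burned area × fuel load × combustion completeness × species emission factor. A minimal sketch with illustrative numbers (not the S4F implementation; the emission factor below is a rough, hypothetical value):

```python
def fire_emission_g(burned_area_m2, fuel_load_kg_m2, combustion_completeness,
                    emission_factor_g_per_kg):
    # Bookkeeping approach: dry matter consumed by the fire, then scaled
    # by a per-species emission factor (grams emitted per kg dry matter).
    dry_matter_burned_kg = (burned_area_m2 * fuel_load_kg_m2
                            * combustion_completeness)
    return dry_matter_burned_kg * emission_factor_g_per_kg

# Illustrative only: 1 km2 burned, 0.5 kg/m2 fuel load, 80% consumed,
# a CO2 emission factor of ~1600 g per kg dry matter burned.
co2_g = fire_emission_g(1_000_000, 0.5, 0.8, 1600)
```

Within this relation, bush encroachment acts on the fuel-load term, while S4F's satellite-informed fuel moisture modifies combustion completeness, which is why both must be tracked to capture emission trends.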
We further highlight the need for Near Real Time (NRT) emissions models that continuously update fire emissions during fire seasons and account for both long-term gradual and short-term abrupt vegetation changes. With high-resolution NRT fire emission models, land managers and policymakers will receive information about how land and fuel management will affect fire emissions, supporting the planning of future actions.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Global-scale large forest fire modelling and prediction: a dance of fire-driving and fire-inhibiting factors

Authors: Simon Bowring, Philippe Ciais, Yidi
Affiliations: Laboratoire Des Sciences Du Climat Et De L'Environnement (LSCE)
Global process-based fire models are often integrated into the land surface component of Earth system models, and have been designed to reproduce mean annual aggregate burned area values at regional and global scales. For this reason, they reasonably reproduce the ~70-80% of observed global burned area that occurs in sub-Saharan African savannah and dry tropical forest biomes. Models are able to do so both in terms of approximate total area burned for this broad region, as well as for the general spatial distribution of where this burning occurs. However, most standard versions of process-based fire models have been shown to be incapable of reproducing seasonal and interannual burn patterns, with their global performance found to be worse than random for expressing this variation when comparing model outputs to observations on a global grid-by-grid basis. Part of the reason for these large limitations lies in the models' overreliance on savannah fire theory and insufficient representation of fire processes in other biomes. The lack of peat and peat-combustion representation in many or most land surface models is another notable weakness. Most striking, perhaps, is the omission from these models of theory-based process representation of extreme fire spread and intensity in forest landscapes, whose burning has been shown to release as much carbon as the annual fossil fuel emissions of countries with populations exceeding one billion people. This shortcoming stems from gaps in the theory itself. Many small-scale studies provide theoretical quantification of the sensitivity of fire spread to a range of drivers such as slope, wind, fuel moisture and density. However, these theoretical relationships are often not realized at landscape scale, due to a broader range of fire-inhibiting factors that are more difficult to generalize, as they may pertain to complex landscape-scale interactions. 
Thus, some species may burn better than others in wetter or drier conditions; groundwater might inhibit fire spread; and peat might either promote or suppress it depending on preceding primary production and rainfall. Here, we show that, given sufficient theoretical consideration of their model representation alongside hypothesis testing, these contradictory landscape-scale factors can be appropriately addressed by land surface models precisely because of their coupling of process complexity and coarse spatial resolution. This presentation will discuss how this is achieved in ORCHIDEE-SPITFIRE, a fire-enabled land surface model, the extent to which it is capable of reproducing past extreme fires in the satellite record, and the implications of its future predictions for the terrestrial carbon cycle.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: An Operational Fuel Characterization Model: A Cornerstone for Burned Area Prediction and Enhanced Fire Emissions Estimations

Authors: Francesca Di Giuseppe, Joe McNorton, Enza Di Tomaso, Anna Agusti-Panareda
Affiliations: ECMWF
Fuel characteristics—the quantity of available combustible material in an ecosystem, its composition, and moisture content—are essential variables for understanding fire dynamics and ecosystem processes. The total fuel load typically refers to the mass of dry vegetation, including both live fuels (e.g., shrubs, grasses, and trees) and dead fuels (e.g., fallen leaves, branches, and woody debris). Fuel moisture content (FMC) is the proportion of water within the plant mass, calculated as the ratio of the difference between moist and dry vegetation weight to the dry vegetation weight. Both fuel load and composition influence not only fire intensity and spread but also post-fire recovery and carbon emissions. High fuel loads increase the likelihood of intense and widespread fires, while fuel composition determines the type of fire, burn efficiency, and emission characteristics. Fuel moisture is critical for fire ignition and propagation, as the water content of fuels directly affects flammability and fire behaviour. In this talk, we present the latest efforts by ECMWF to characterize fuel quantity and moisture content at the global scale. This work has led to the development of conceptual, simplified models of vegetation physiology designed for integration into numerical weather prediction systems. The estimates of fuel characteristics combine satellite-derived LAI (Leaf Area Index) and land classification, assimilated soil moisture, and Net Primary Productivity (NPP) informed by CO₂ flux inversions as developed for CAMS. These approaches demonstrate the potential for long-term monitoring of fuel characteristics on a global scale and provide valuable constraints for improving operational fire emissions models.
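The FMC definition given above translates directly into code. A minimal sketch:

```python
def fuel_moisture_content(wet_mass_g, dry_mass_g):
    # FMC = (wet - dry) / dry: the mass of water held in the fuel
    # relative to the dry vegetation mass. Multiply by 100 to express
    # the result as a percentage.
    return (wet_mass_g - dry_mass_g) / dry_mass_g

# A sample weighing 130 g moist and 100 g oven-dry has an FMC of 30%.
fmc_percent = 100 * fuel_moisture_content(wet_mass_g=130.0, dry_mass_g=100.0)
```

Note that live fuels can exceed 100% FMC by this definition, since their water mass can be greater than their dry mass.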

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Predicting European Wildfire Occurrence and Analyzing Drivers with Explainable Artificial Intelligence (XAI)

Authors: Hanyu Li, Dr. Stenka Vulova, Dr. Alby Duarte Rocha, Prof. Dr. Birgit Kleinschmit
Affiliations: Technical University of Berlin
As the global climate changes, the frequency and severity of wildfires are escalating, causing widespread ecological damage, threatening human communities, and amplifying climate feedback loops. Accurately predicting wildfire occurrences and understanding the drivers behind them are essential for enhancing management and mitigation strategies. While machine learning has been used in wildfire research, Explainable Artificial Intelligence (XAI) remains underexplored. This study integrates historical observational data, climate projections from the Coupled Model Intercomparison Project Phase 6 (CMIP6), and advanced machine learning methods to predict wildfire occurrence across Europe and analyze its driving factors under future climate scenarios.

(1) Model Development. Our study covers the European region, including various ecosystems such as forests, shrublands, and grasslands. We selected key features from four categories: meteorological factors, vegetation, topography, and anthropogenic activity. We trained Artificial Neural Networks on historical observational data to predict wildfire occurrences, allowing the model to learn the complex relationships between environmental and human drivers of fire risk. After confirming the model's performance through validation on unseen data, we applied it to CMIP6 future climate scenarios.

(2) Future Predictions. The CMIP6 dataset includes a range of socio-economic pathways, from low-emission scenarios (e.g., SSP1-2.6) to high-emission scenarios (e.g., SSP5-8.5). Before using the CMIP6 projection data, we first calibrated them against historical data. By inputting these climate projections into our model, we forecast how wildfire-prone areas across Europe may evolve in the coming decades. This analysis provides critical insights into how fire risk zones may shift or expand under varying levels of global warming, offering valuable information for policymakers, fire management strategies, and vulnerable communities. 
(3) Driver Analysis with XAI. Despite the predictive power of machine learning models, their "black-box" nature limits interpretability. To address this, we employed SHAP (SHapley Additive exPlanations), a state-of-the-art XAI method, to explain the inner workings of our wildfire prediction model. SHAP helps us identify the most influential variables driving wildfire risk, both historically and in future projections. This transparency not only aids scientific understanding but also supports decision-making by clarifying which factors (e.g., temperature, vegetation type, human activity) contribute most to fire risk under different climate scenarios.

This study highlights the value of combining historical data, CMIP6 climate projections, and machine learning to predict wildfire occurrences across Europe. By incorporating XAI, we provide greater interpretability, allowing for a deeper understanding of the drivers of wildfire risk.
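SHAP approximates Shapley values: each feature's average marginal contribution to the model output over all orderings in which features can be "switched on". For a toy model with two features the exact values can be enumerated directly (illustrative only; the study's actual model is a neural network analysed with the shap library):

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    # Exact Shapley values: for every feature ordering, move features
    # one by one from the baseline to their observed values and record
    # each feature's marginal change in f; then average over orderings.
    # This brute force is only feasible for a handful of features.
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        z = list(baseline)
        prev = f(z)
        for i in order:
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev
            prev = cur
    return [v / len(orders) for v in phi]

# Toy linear "fire risk" model: risk rises with temperature (feature 0)
# and falls with moisture (feature 1). Coefficients are hypothetical.
risk = lambda z: 0.8 * z[0] - 0.5 * z[1]
phi = shapley_values(risk, x=[1.0, 1.0], baseline=[0.0, 0.0])
```

The attributions satisfy the efficiency property: they sum to the difference between the model output at x and at the baseline, which is the additivity SHAP exploits.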

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Near coincident GEDI Measurements Unveil Fire-induced Structural Changes in the Amazon Rainforest

Authors: Martín Domínguez Durán, Amelia Holcomb, Dr. Johannes Reiche, Guido van der Werf
Affiliations: Meteorology & Air Quality Group, Wageningen University, Laboratory of Geo-information Science and Remote Sensing, Wageningen University, Department of Computer Science, University of Cambridge
Forests play a crucial role in regulating the global climate, yet they are increasingly vulnerable to disturbances like wildfires, which are predicted to intensify in frequency and severity under climate change. Such disturbances not only release stored carbon, contributing to global warming, but also alter forest structure, with significant implications for biodiversity and ecosystem functions. Understanding these structural changes is particularly critical in the Amazon rainforest, a biodiversity hotspot and a major carbon sink, where fire-driven habitat loss could endanger niche-specific species. Traditional Earth Observation (EO) data have been instrumental in monitoring forest dynamics; however, these tools face limitations in capturing the complex structural information beneath dense canopies. Recent advancements, particularly from space-borne LiDAR missions like NASA's Global Ecosystem Dynamics Investigation (GEDI), offer a new perspective by providing 3D vertical measurements of forest structure. These data are invaluable for understanding the state of forest structure post-disturbance, but pose challenges due to their limited spatial coverage, lack of a consistent revisit cycle, and computationally demanding analysis requirements. Specifically, obtaining near-coincident GEDI footprint pairs—measurements of almost the same location—remains a non-trivial task, as GEDI data are not spatially indexed. Recent studies, however, have provided methodologies to overcome this limitation, enabling precise assessments of forest structural changes following disturbances (Holcomb et al., 2024). These near-coincident pairs are critical for guaranteeing that observed differences in structure reflect localized impacts rather than broader regional variations. In this study, we leverage near-coincident GEDI measurements to assess the structural impacts of forest fires across the South Amazon Basin. 
Using the MODIS Burned Area product (Giglio et al., 2015) and the VIIRS active fire dataset (Schroeder et al., 2014), we identify forest areas affected by fire and extract near-coincident GEDI footprint pairs with pre- and post-fire measurements. Differences in key GEDI-derived metrics, including the L2A (vertical canopy profile), L2B (biophysical metrics), L4A (aboveground biomass estimates) and L4C (waveform structural complexity) products, are analyzed to quantify fire-induced structural changes. First results will be shown for the vertical and spatial variability in fire severity across the South Amazon Basin. These results provide critical insights into the resilience dynamics of tropical forests in an era of increasing disturbances. By demonstrating the capability of GEDI data to reveal subtle structural changes, this research underscores the value of spaceborne LiDAR for improving our understanding of fire impacts on forest ecosystems.
References:
Giglio, L., Justice, C., Boschetti, L., & Roy, D. (2015). MCD64A1 MODIS/Terra+Aqua Burned Area Monthly L3 Global 500m SIN Grid V006 [Dataset]. NASA EOSDIS Land Processes Distributed Active Archive Center. https://doi.org/10.5067/MODIS/MCD64A1.006
Holcomb, A., Burns, P., Keshav, S., & Coomes, D. A. (2024). Repeat GEDI footprints measure the effects of tropical forest disturbances. Remote Sensing of Environment, 308, 114174. https://doi.org/10.1016/j.rse.2024.114174
Schroeder, W., Oliva, P., Giglio, L., & Csiszar, I. A. (2014). The New VIIRS 375 m active fire detection data product: Algorithm description and initial assessment. Remote Sensing of Environment, 143, 85–96. https://doi.org/10.1016/j.rse.2013.12.008

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Extreme Peatland Wildfires Disrupt Carbon Sequestration and Exhibit Dynamic Combustion Behaviour

Authors: Luke Richardson-Foulger, Nicole Reynolds
Affiliations: National Centre for Earth Observation - King's College London, Climate and Earth Observation Group, National Physical Laboratory
Peatland wildfires are estimated to have released an average of 0.5 Pg of carbon per year between 1997 and 2009. Healthy peatlands function as a natural carbon sink which can help mitigate climate change. The ability of peatlands to sequester carbon may be challenged by extreme wildfire events, which potentially cause accelerated decomposition of ancient peat reserves. In the context of a drying climate, there is increasing concern that peatland wildfires will increase in frequency and severity during this century. As such, it has become a priority to understand the effect of peatland wildfires on ecosystem health and long-term carbon sequestration. This study examines the effects of the 2018 Saddleworth Moor wildfire, one of the UK's "largest in living memory", on carbon sequestration and ecosystem recovery, and presents new insights into peatland burn dynamics. Saddleworth Moor is a 9,000-year-old blanket bog in North West England that burned for nearly a month following the 2018 heatwave. Airborne atmospheric sampling of the wildfire emissions indicates dynamic combustion behaviour across the timescale of the fire: an initial flaming-dominated phase (MCE ~ 0.95) burned surface vegetation, followed by a smoldering-dominated phase (MCE ~ 0.86) as the newly exposed peat layers ignited. This suggests that more severe peatland wildfires can permanently damage future ecosystem functioning by burning soil layers that are less easily recovered. VIIRS-derived plume outlines allowed plume concentration estimates over time from corresponding Sentinel-5P imagery, for comparison with aircraft observations. Sentinel-2-derived FCC imagery mapped a 9.5 km² burn area, enabling analysis of burn severity and post-fire vegetation regrowth patterns. Moisture data derived from InSAR imagery for the years preceding and following the fire allow for an assessment of two core questions: (1) How does peat moisture influence burn severity? 
(2) What are the implications of wildfires for the long-term carbon-sink potential of peatlands? This study aims to advance understanding of how peatland fires impact carbon dynamics and ecosystem resilience, and of how such fires evolve over time.
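The MCE values quoted above follow the standard definition of modified combustion efficiency, computed from excess (above-background) CO2 and CO mixing ratios in the plume. A minimal sketch:

```python
def modified_combustion_efficiency(delta_co2, delta_co):
    # MCE = dCO2 / (dCO2 + dCO), where dCO2 and dCO are excess molar
    # mixing ratios above background. Values near 0.95 and above
    # indicate flaming-dominated combustion; lower values (here ~0.86)
    # indicate smoldering-dominated combustion.
    return delta_co2 / (delta_co2 + delta_co)

# Illustrative plume enhancements (arbitrary units, same for both gases):
flaming = modified_combustion_efficiency(delta_co2=19.0, delta_co=1.0)
smoldering = modified_combustion_efficiency(delta_co2=6.0, delta_co=1.0)
```

Because CO is enriched relative to CO2 in smoldering combustion, the shift from ~0.95 to ~0.86 is the signature of the fire transitioning from surface vegetation to the peat layers.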

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Assessing Post-Fire Deadwood and Surface Dynamics using Multispectral and LiDAR UAVs in the Harz National Park, Germany

Authors: Robert Jackisch, Birgitta Putzenlechner, Simon Drollinger, Ephraim Schmidt-Riese, Elisabeth Dietze
Affiliations: Technische Universität Berlin, Geoinformation in Environmental Planning Lab, Georg-August-Universität Göttingen, Institute of Geography
Climate change increases the risk of forest fire following drought periods in the temperate forests of Central Europe. Areas with a high proportion of standing deadwood are considered particularly likely to fuel wildfires when dried out. However, especially in national parks, deadwood forms a fundamental element of conservation plans, supporting ecosystem functioning by fostering biodiversity and providing shade for future forests. Precise monitoring of deadwood stands and burnt areas is particularly important in view of the accumulation of disturbance events, in order to quantify effects on ecosystems. In the Harz National Park, a fire in August 2022 at the “Quesenbank” site burned an area of 13 ha within four days. Here, we aim at a comprehensive assessment of fire impact in this formerly even-aged stand of Norway spruce (Picea abies), which had turned into a calamity area after the bark beetle infestation that followed the recent drought events (2018-2022). This includes understanding the temporal evolution of deadwood breakdown and surface dynamics (e.g., potential sagging, soil erosion). We investigate the hypothesis that the burnt site exhibits higher surface and deadwood dynamics than a neighbouring area with comparable site conditions. We also examine the extent to which estimates of deadwood and surface dynamics differ across sensor types. Following the fire event, biannual campaigns using unoccupied aerial vehicles (UAVs) were carried out. The UAVs were equipped with multispectral, thermal, high-resolution RGB and light detection and ranging (LiDAR) sensors to monitor and scan 10 ha of burned and neighbouring unburned areas. For accurate 3D ground reference, we used a terrestrial laser scanner (TLS) in 2024 along a transect of the fire site. 
To capture the gradual breakdown of deadwood, derived orthoimages, 3D point clouds and canopy height models (CHMs) were analysed to estimate the number of standing dead trees, fractional cover and cm-scale surface changes. For processing the large volumes of high-resolution data, deep-learning methods were applied to the multispectral orthoimages and point clouds. Additionally, we collected ground-based references on biophysical vegetation parameters, such as fractional cover, plant area index (PAI) and fraction of absorbed photosynthetically active radiation (FAPAR), from upward-directed digital hemispherical photos. Ground-truthing was conducted in synchronization with the UAV campaigns. The ground reference data confirmed considerably lower fractional cover on burned areas. PAI and FAPAR in burned standing deadwood were lower than in unburned stands, thereby influencing light and microclimatic conditions. As the variation in reference data was relatively low during the observation period, we suggest that the main dynamics of standing-deadwood breakdown had already occurred within several weeks after the fire. Interestingly, we found quite heterogeneous microtopography due to granite boulders, with subsurface tunnelling and unstable surfaces. Upcoming work will include analyses of fire-influenced soil properties, morphodynamics and biogeochemical cycling in a region that still shows traces of past land use associated with the mining history of the Harz, such as relict charcoal kilns.
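For illustration, the canopy-height-model differencing that underlies the deadwood-breakdown estimates could be sketched as follows (a minimal example; the toy grid, heights and the 2 m loss threshold are illustrative assumptions, not values from the study):

```python
import numpy as np

def chm_change(chm_pre, chm_post, loss_threshold=2.0):
    """Difference two co-registered canopy height models (m) and flag
    cells with a height loss exceeding the threshold, e.g. where
    standing dead trees have broken down or fallen."""
    dchm = chm_post - chm_pre           # negative values = height loss
    loss_mask = dchm < -loss_threshold  # cells with substantial loss
    return dchm, loss_mask

# Toy 3x3 CHMs (heights in metres): one cell loses a ~10 m dead tree
chm_2022 = np.array([[10.0, 0.5, 0.2],
                     [ 9.5, 0.3, 0.1],
                     [ 0.4, 0.2, 0.0]])
chm_2024 = np.array([[ 0.5, 0.5, 0.2],
                     [ 9.4, 0.3, 0.1],
                     [ 0.4, 0.2, 0.0]])
dchm, lost = chm_change(chm_2022, chm_2024)
print(int(lost.sum()))  # number of cells flagged as tree loss
```

In practice such maps would be derived from the normalized LiDAR point clouds at cm-scale resolution rather than a toy grid.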

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: High-Resolution Burned Area Dataset Reveals Fire Dynamics and Human Influence Across Southern Amazonia (1990–2019)

Authors: Amin Khairoun, Erika Solano, Daniela Stroppiana, Bhogendra Mishra, Matteo Sali, Prof Emilio Chuvieco
Affiliations: Universidad de Alcalá, Environmental Remote Sensing Research Group, Department of Geology, Geography and the Environment, Consiglio Nazionale delle Ricerche – Istituto per il Rilevamento Elettromagnetico dell’Ambiente (CNR-IREA)
Fire plays multiple roles in shaping South American ecosystems, yet much of the recent scientific focus has been dedicated exclusively to the Amazon rainforest. Southern Amazonia encompasses diverse ecosystems, including tropical savannas, moist and dry tropical forests, and wetlands. Fires are prevalent in this region, and their characteristics are primarily driven by human activities such as deforestation and land-use changes for agriculture and cattle ranching. These fire events are intensifying with increasing human encroachment on these landscapes. Existing coarse-resolution Burned Area (BA) datasets fail to capture the small, fragmented fires widely prevalent in this region. To address this gap, we developed a high-resolution, long-term BA dataset using Landsat imagery for the period 1990–2019, covering an area of approximately 2.10 million km² (extent: 62°W–47°W; 24°S–12°S). The core algorithm was implemented on the Google Earth Engine (GEE) platform, followed by post-classification refinement to reduce noise and enhance BA patch delineation. Over the 30-year period, we found that annual BA averages nearly 10 million hectares (9.93 ± 1.71 Mha), with the period 1999–2010 being the most devastating. Due to the improved detection of smaller fires, our BA estimates exceeded those of coarse-resolution datasets, being 47.90 ± 9.84% and 67.51 ± 16.37% higher than MCD64A1 and FireCCI51, respectively. Our estimates were also substantially higher than other Landsat-based BA products: we estimate 154.38 ± 19.32% and 100.75 ± 24.55% more BA than MapBiomas and GABAM, respectively. The dataset was independently validated using a stratified sampling protocol, achieving a high Dice Coefficient (DC) of over 92%. Omission and commission errors were limited to 6% and 8.36%, respectively.
Fires were found to be frequent in the region, with about half the study area (> 49%) experiencing at least one fire and approximately 15% (equivalent to >31 Mha) burning at least three times over the 30 years. Spatially, human activity strongly influenced fire patterns. In less populated regions such as the northern Cerrado, adjacent to moist and dry Amazonian forests, and the western Pantanal, large fires predominantly occurred during the dry season. Conversely, the densely populated Atlantic belt was characterized by a large abundance of small fires, likely linked to agricultural practices. This work highlights the role of fire as a driver of deforestation and land-use change in developing regions, providing critical insights into fire dynamics across diverse ecosystems in southern Amazonia.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Arctic and Northern Latitude Peat and Non-peat Wildfire Aerosols During 2018-2024

Authors: Kerstin Stebel, Philipp Schneider, Johannes W. Kaiser, Magit Aun
Affiliations: NILU, University of Tartu
In a changing climate, with temperatures rising faster in the Arctic than in other regions, Northern high latitudes are increasingly vulnerable to extreme wildfires. This raises concerns about climate impacts, including increased air pollution in nearby cities, light-absorbing emissions accelerating ice melt, and disruptions to the Earth’s radiative balance. In Northern latitudes, melting permafrost exposes carbon-rich peat, formed over centuries in wetlands. Dried peatlands are highly flammable, with fires burning deep into the peat layers. Fires are altering aerosol patterns in the Arctic, which is observable in satellite and ground-based remote sensing observations. For example, Aerosol Optical Depth (AOD) in Ny-Ålesund, Svalbard has declined in spring due to regulatory actions in mid-latitudes, but has risen in summer and autumn, presumably due to an increase in wildfires in recent years. In the period 2003 to 2024, Fire Radiative Power (FRP) and the number of active fires from MODIS show an increase within the zonal band 60°N–90°N, but a decrease in 50°N–60°N. With the availability of Sentinel-5P observations from 2018 onwards, wildfires can be observed far more comprehensively. Therefore, this analysis addresses Northern Hemisphere and Arctic wildfires during 2018–2024, with a focus on distinguishing peat from non-peat fires. Fire locations and intensities are characterized by FRP observations from VIIRS. Medium and large fires are identified using a positive TROPOMI aerosol index (AI > 1) “absorbing aerosol mask” combined with a CO and NO₂ plume detection algorithm, outlining the smoke plumes near active fires. However, thick smoke plumes near active fires can often be indistinguishable from clouds, so aerosol information can be masked out, and trace gas measurements can be compromised by shielding under high aerosol loading. To address this, TROPOMI data products with relaxed quality thresholds are utilized for the initial plume segmentation.
In the next step, the fires are classified by their timing, FRP, landcover, peat percentage, and NO₂/CO trace gas ratios (using recommended quality criteria). We compare fresh plume aerosol characteristics near active fires using data from multiple satellites (Level 2 retrievals from TROPOMI, SLSTR, VIIRS, and MODIS MAIAC). This analysis incorporates measurements of multi-wavelength AOD and related aerosol parameters such as the Ångström exponent, single-scattering albedo, and aerosol layer heights. The aerosol characteristics of fires in Canada and Alaska are contrasted with those in Siberia and with background Arctic aerosols. This study is performed in the framework of the EC Framework Partnership Agreement on Copernicus User Uptake (FPCUP) “Arctic Peat- And Forest-fire Information System” and the ESA MIT3D CCN-1 project SVAR “A Satellite view on Aerosols in the Arctic: climatology, fire, and dust”.
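The "absorbing aerosol mask" step described above can be sketched with a minimal example (the toy arrays, the variable names and the relaxed QA cut-off of 0.1 are illustrative assumptions; the AI > 1 threshold follows the abstract):

```python
import numpy as np

def plume_mask(aerosol_index, qa_value, ai_threshold=1.0, qa_min=0.1):
    """Flag candidate smoke-plume pixels from a TROPOMI-like aerosol
    index field, using a relaxed quality threshold so that thick
    plumes misflagged as cloud are not screened out entirely."""
    valid = qa_value >= qa_min                 # relaxed QA screening
    return valid & (aerosol_index > ai_threshold)

# Toy 3x3 scene: two plume pixels survive the relaxed QA screening
ai = np.array([[0.2, 1.5, 2.3],
               [0.4, 1.2, 0.9],
               [0.1, 0.3, 0.2]])
qa = np.array([[0.9, 0.05, 0.8],
               [0.9, 0.80, 0.9],
               [0.9, 0.90, 0.9]])
mask = plume_mask(ai, qa)
print(int(mask.sum()))  # pixels classified as absorbing-aerosol plume
```

In the study the resulting mask is combined with CO and NO₂ plume detection near active fires; that step is omitted here.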

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: POST-FIRE RECOVERY ESTIMATION OF RECURRENTLY BURNED VEGETATION ACROSS MEDITERRANEAN REGIONS WORLDWIDE

Authors: Tiago Ermitão, Célia Gouveia, Ana Bastos, Ana Russo
Affiliations: Instituto Dom Luiz, Faculdade de Ciências da Universidade de Lisboa, Instituto Português do Mar e da Atmosfera, I.P., Max-Planck Institute for Biogeochemistry, Dept. of Biogeochemical Integration, Institute for Earth System Science and Remote Sensing, Leipzig University, CEF - Forest Research Centre, Associate Laboratory TERRA, School of Agriculture, University of Lisbon
Longer fire seasons with more severe fires are becoming more frequent throughout global fire-prone regions such as the Mediterranean biome. Although the typical vegetation of these areas can tolerate high exposure to extreme temperatures and has several traits and mechanisms to resist and recover from fire, increasing fire intensity and changing climate conditions can contribute to the loss of ecosystem resilience and enhance the potential for irreversible shifts in affected vegetation. Therefore, post-fire recovery, which constitutes a key component of ecosystem resilience and stability, is an important process to study across Mediterranean ecosystems. In this study, we rely on the MODIS Enhanced Vegetation Index (EVI) to estimate the recovery rates of recurrently burned vegetation across Mediterranean regions globally within the period 2001-2022. To do that, a mono-parametric statistical model is applied to the EVI time series. We focus on pixels burned repeatedly over this period, analysing how fire severity, pre-fire vegetation state and post-fire meteorological conditions modulate the recovery rates of different vegetation types. This approach overcomes the limitations of space-for-time substitution approaches, as it resolves recovery dynamics in space and time. We compare the recovery rates of burned vegetation across pixels that burned twice in the considered period. We find that vegetation tends to recover faster after the first event than the second, even though large contrasts are detected, which can be explained by regional differences in vegetation type, fire severity and post-fire meteorological conditions. We find a strong dependence of recovery rates on fire severity, particularly for the more extreme fires, with variations among different land cover categories. We also show that fire severity can promote a faster recovery of vegetation, especially in forested ecosystems.
An important modulating role of pre-fire conditions on fire severity is also found, as higher EVI before the fire event can result in stronger relative greenness loss. Additionally, precipitation and air temperature play an important role in recovery speed across the Mediterranean regions, reflecting the importance of water availability in these ecosystems and suggesting that future changes in precipitation and temperature have the potential to affect the regeneration of recurrently burned ecosystems. This study was performed under the framework of the DHEFEUS project, funded by the Portuguese Fundação para a Ciência e a Tecnologia (FCT) (https://doi.org/10.54499/2022.09185.PTDC). The work was also funded by FCT I.P./MCTES through national funds (PIDDAC): UIDB/50019/2020 (https://doi.org/10.54499/UIDB/50019/2020), UIDP/50019/2020 (https://doi.org/10.54499/UIDP/50019/2020) and LA/P/0068/2020 (https://doi.org/10.54499/LA/P/0068/2020). A.R. is supported by FCT through national funds from the MCTES within the Faculty of Sciences of the University of Lisbon, through https://doi.org/10.54499/2022.01167.CEECIND/CP1722/CT0006.
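The abstract does not specify the mono-parametric model used; one plausible form, shown here purely as an illustrative assumption, is an exponential relaxation towards the pre-fire EVI level with the recovery rate r as the single free parameter, fitted by least squares over a grid of candidate rates:

```python
import numpy as np

def fit_recovery_rate(t, evi, evi_pre, evi_post_fire):
    """Fit the single rate parameter r in the illustrative model
    EVI(t) = pre - (pre - post) * exp(-r * t) by grid-search
    least squares (t in years since the fire)."""
    rates = np.linspace(0.01, 5.0, 500)
    drop = evi_pre - evi_post_fire
    preds = evi_pre - drop * np.exp(-np.outer(rates, t))  # (rates, times)
    rss = ((preds - evi) ** 2).sum(axis=1)                # residual sums
    return rates[np.argmin(rss)]

# Synthetic yearly EVI observations recovering with true rate 0.5 / yr
t = np.arange(1, 9, dtype=float)
true_r = 0.5
evi = 0.6 - (0.6 - 0.2) * np.exp(-true_r * t)
r_hat = fit_recovery_rate(t, evi, evi_pre=0.6, evi_post_fire=0.2)
print(round(r_hat, 2))  # recovers approximately the true rate
```

Comparing r fitted after the first and second fire of a twice-burned pixel would give the kind of recovery-rate contrast discussed above.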

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: A Digital Twin for Wildfire risk adaptation planning: DT-WILDFIRE

Authors: Eleftheria Exarchou, Mirta Rodriguez Pinilla, Veronica Martin Gomez, Marc Benitez Benavides, Martin Senande Rivera, Peter Sousounis, Mariona Borràs, Guillem Canaleta, Martí Romaní, Eleni Toli, Panagiota Koltsida
Affiliations: Mitiga Solutions, Pau Costa Foundation, Athena Research Center, Universitat Autonoma de Barcelona
Wildfires pose a growing threat to populated areas of the Mediterranean basin. On one hand, rural abandonment has increased fuel loads, creating favourable conditions for large wildfires. On the other hand, climate change has exacerbated hot and dry atmospheric conditions and has been a key driver of the increasing risk, extent and severity of wildfires around the world. In addition, the rising number of homes in the wildland-urban interface (WUI), and the associated impacts of wildfires on lives and property, have led to an increasing need for mitigation and adaptation measures against wildfire risk. The Barcelona Metropolitan Area, a large metropolis with an extended WUI, is particularly vulnerable. Part of its population and infrastructure is located near the border of the Collserola Natural Park, an extensive forested area, and could potentially be threatened by large forest fires. This study presents a Digital Twin (DT) framework for the Barcelona Metropolitan Area, designed to assess the risk of extreme wildfires and how it is affected by heatwaves and droughts under different future climate scenarios. DT-WILDFIRE will leverage high-resolution climate model projections, fire spread models, satellite data, local observations, and advanced machine learning (ML) techniques to provide a granular understanding of future climate risks and their cascading impacts on wildfires. By combining the DT with a Living Lab to ensure co-development with local communities, this work aims to provide a tool that uses state-of-the-art technological approaches to create a comprehensive wildfire risk assessment under different future scenarios. Physical damage to residential and commercial real estate, including damage from smoke and business interruption, will be simulated as well.
This information about future wildfire risk will help authorities and society to design participatory risk reduction measures, including nature-based solutions, according to the different climate scenarios. This study takes place within CARMINE, a European Union-funded project whose goal is to provide knowledge-based climate resilient development pathways in metropolitan regions of Europe, to help the metropolitan communities become more climate resilient.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Can hyperspectral EO of landscape fires map proportions of flaming and smouldering combustion to improve fire emissions estimation?

Authors: Farrer Owsley-Brown, Prof Martin Wooster, Dr Mark Grosvenor
Affiliations: King's College London
Globally, landscape fires emit vast amounts of smoke, with the rate and composition of that smoke strongly depending on combustion phase (i.e., flaming vs. smouldering). Many pollutants that degrade air quality, such as CO, particulate matter (PM) and volatile organic compounds (VOCs), are predominantly emitted during smouldering combustion. However, current EO techniques do not account for the relative contributions of flaming and smouldering combustion to smoke production. Instead, they use fire-average emission factors (EFs) that assume a “typical” amount of flaming and smouldering activity. There has been growing interest in more dynamic emission factors and coefficients, initially to capture seasonal variations, for example. Incorporating the relative role of the combustion phases into emissions models could markedly enhance their accuracy, particularly for air quality forecasting. This research evaluated two techniques for distinguishing between flaming and smouldering combustion in EO data: sub-pixel fire temperature analysis and potassium (K-line) spectral emission detection. These methods were tested through laboratory experiments and real-world observations over Canadian boreal wildfires, alongside concurrent smoke composition measurements. We found that sub-pixel temperature analysis did not appear to significantly improve smoke emission estimates. In contrast, complementary FRP and K-line measurements significantly improved the accuracy of both smoke emission rates and composition estimates. Future applications of these methods, via integration with satellite datasets, could improve global emissions estimates and their operational utility for climate and air quality monitoring. The upcoming FLEX mission’s FLORIS instrument is particularly well suited for K-line observations due to its high spectral resolution. Furthermore, its close overpass time with Sentinel-3 means SLSTR-derived FRP and FLORIS-derived K-line data could be delivered together.
This opportunity could pave the way for more comprehensive global fire emissions monitoring.
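The FRP-to-emissions chain that this work builds on can be sketched as follows. The 0.368 kg MJ⁻¹ fuel-consumption coefficient is the widely used value from Wooster et al. (2005); the CO emission factor and the observation series are placeholder assumptions, not values from this study:

```python
import numpy as np

# Fuel consumed per unit of fire radiative energy (kg per MJ),
# following Wooster et al. (2005)
BIOMASS_PER_FRE_KG_MJ = 0.368

def smoke_emission_g(frp_mw, times_s, emission_factor_g_per_kg):
    """Integrate an FRP time series (MW) to FRE (MJ) with the
    trapezoidal rule, convert to fuel consumed, and apply an
    emission factor (g of species per kg of fuel burned)."""
    frp = np.asarray(frp_mw, dtype=float)
    t = np.asarray(times_s, dtype=float)
    fre_mj = float(np.sum((frp[1:] + frp[:-1]) / 2.0 * np.diff(t)))  # MW*s = MJ
    fuel_kg = BIOMASS_PER_FRE_KG_MJ * fre_mj
    return fuel_kg * emission_factor_g_per_kg

# One hour of observations every 10 min at a constant 50 MW,
# with a placeholder CO emission factor of 100 g/kg
t_obs = np.arange(0.0, 3601.0, 600.0)
frp_obs = np.full_like(t_obs, 50.0)
co_g = smoke_emission_g(frp_obs, t_obs, emission_factor_g_per_kg=100.0)
print(co_g / 1e6)  # tonnes of CO emitted
```

The study's contribution is, in effect, to make the emission factor phase-dependent (flaming vs. smouldering, informed by K-line detections) rather than a fixed fire-average value as in this sketch.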

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Validation of FireCCI burned area products

Authors: Daniela Stroppiana, Erika Solano Romero, Amin Khairoun, Bhogendra Mishra, M. Lucrecia Pettinari, Emilio Chuvieco
Affiliations: Consiglio Nazionale delle Ricerche - Istituto per il Rilevamento Elettromagnetico dell'Ambiente (CNR-IREA), Universidad de Alcalá, Environmental Remote Sensing Research Group, Department of Geology, Geography and the Environment
This work focuses on the validation activities carried out in the FireCCI project to assess the accuracy of regional and global Burned Area (BA) products derived from Earth Observation data. The validation of regional and global BA products follows a similar protocol: reference and classified burned-area polygons are compared to derive accuracy metrics (omission and commission errors, Dice Coefficient, Relative Bias). Source EO data for the generation of the reference fire perimeters were medium-resolution Landsat and Sentinel-2 imagery. Validation is carried out by comparing the BA product with reference fire perimeters derived over validation units covering an area of approximately 100 km x 100 km; over each unit, we defined a time series of consecutive medium-resolution images (i.e., a long unit) based on the following criteria: scene cloud cover, cumulative cloud cover, time interval between consecutive images, and length of the time series. Each image pair (i.e., a short unit) is classified to extract burned polygons between pre-fire and post-fire images with a Random Forest supervised classification algorithm trained, for each pair of images, on burned and unburned classes. Each fire polygon is assigned the acquisition date of the post-fire image of its pair as the date of detection. Fire perimeters are visually refined and checked to ensure the greatest accuracy of the generated dataset, and combined temporally into synthetic GeoPackage products with an attribute table containing information such as pre-fire (t1) and post-fire (t2) dates, source image scene identifier, detection dates, and category (burned, unburned or masked). The validation units are selected regionally and/or globally by stratified random sampling considering the biomes (Dinerstein et al., 2017) and the fire regimes (high/low fire activity derived from the annual total burned area).
The two global products FireCCI51 (2017-2021) and FireCCIS311 (2019-2022) are validated using about 100 validation units distributed among the global biomes. Reference fire perimeters are derived from Landsat images (Landsat 8 and Landsat 9). Results show that the Dice Coefficient (DC) for the FireCCI51 BA product is in the range 54.4%-69.2%, while for the FireCCIS311 BA product it is in the range 56.0%-68.1%; both products show a negative Relative Bias (RelB), indicating that the omission error is greater than the commission error, as expected for coarse-resolution products. A major issue is, in fact, the proportion of small burned areas that are not detected with coarse-resolution data (250-300 m). The FireCCISFDL BA product, derived from Landsat data (1990-2019), offers high-resolution multi-annual coverage for three Regions Of Interest (ROIs): Africa-Sahel (AF), South America-Amazonia (SA), and Russia-Siberia (SI). Fire reference perimeters were derived by combining Sentinel-2 (2019), Landsat 8 (2013-2018) and Landsat 5 (1990-2012) imagery to cover the entire period of the FireCCISFDL BA product, and validation units are selected by stratified random sampling. Validation involved comparing FireCCISFDL results with Landsat and Sentinel-2 fire areas across the ROIs, with 35, 29 and 26 units over Africa-Sahel, South America-Amazonia and Russia-Siberia, respectively. For the FireCCISFDL validation, the greatest accuracy is achieved over the Amazonian ROI (DC=48%, CE=33%, OE=44%), while the worst results are obtained for the Siberia ROI, which is also characterised by the lowest amount of area burned.
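The accuracy metrics named above follow standard area-based definitions, which can be sketched as (the toy areas are illustrative, not values from the validation):

```python
def ba_accuracy_metrics(tp, fp, fn):
    """Area-based burned-area validation metrics:
    tp = area burned in both reference and product,
    fp = area burned only in the product (commission),
    fn = area burned only in the reference (omission)."""
    dice = 2 * tp / (2 * tp + fp + fn)
    commission = fp / (tp + fp)
    omission = fn / (tp + fn)
    # Relative Bias: product burned area vs reference burned area
    rel_bias = ((tp + fp) - (tp + fn)) / (tp + fn)
    return dice, commission, omission, rel_bias

# Toy areas in km^2: the product omits more than it over-detects,
# so the Relative Bias comes out negative, as for the coarse products
dc, ce, oe, relb = ba_accuracy_metrics(tp=60.0, fp=10.0, fn=30.0)
print(round(dc, 3), round(ce, 3), round(oe, 3), round(relb, 3))
```

A negative RelB, as in this toy case, reproduces the pattern reported above where omission exceeds commission for coarse-resolution products.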

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Fires, Land Use, and Forest Loss in the South American Chaco: Links Between Fire Regimes, Climate, and Human Activity

Authors: Rodrigo San Martin, Catherine Ottlé, Dr. Anna Sörensson
Affiliations: Laboratoire des Sciences du Climat et de l’Environnement (LSCE/IPSL), CEA-CNRS-UVSQ, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Centro de Investigaciones del Mar y la Atmósfera (CIMA), CONICET ‒ Universidad de Buenos Aires, Instituto Franco-Argentino de Estudios sobre el Clima y sus Impactos (IFAECI - IRL 3351), CNRS-CONICET-IRD-UBA
The South American Chaco, spanning approximately 1,100,000 km², is home to the world's largest tropical dry forest and the second-largest wetland, making it a globally significant yet under-researched region. This study investigates fire dynamics over two decades (2001–2020), exploring their spatial and temporal patterns, connections to deforestation and agricultural expansion, and the influences of climate and human activity. By integrating three primary remote sensing datasets - FireCCI51 from ESA CCI Fire, ESA CCI Land Cover, and the FRYv2.0 fire polygon database - along with ERA5 and ERA5 Land climate data and additional demographic and topographic information, this research offers new perspectives on the interactions between fire regimes, environmental variables, and anthropogenic pressures. The findings highlight contrasting fire dynamics between the Dry and Wet Chaco subregions. In the Dry Chaco, fires predominantly occur during the dry season (May–September) and are closely associated with deforestation linked to agricultural expansion. In contrast, the Wet Chaco experiences fires throughout both the dry and wet seasons (May–September; October–April), with relatively stable land cover. Precipitation emerges as a critical factor affecting fire activity differently across the humidity gradient: positively in arid zones and negatively in wetter areas. A Fire Weather Index (FWI) was developed using ERA5 Land data at a 0.1° spatial resolution to analyze fire-weather relationships and assess the impact of climatic phenomena such as the El Niño–Southern Oscillation (ENSO). The results reveal that La Niña events are associated with increased burned areas, especially during drought years like 2019 and 2020. Random Forest algorithms identified topography, vegetation type, and meteorological factors, including wind speed, as the primary drivers of fire size. 
Furthermore, anthropogenic factors such as population and road density influence fire dynamics, with larger fires occurring in sparsely populated regions.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Wildfire Detection in the United Kingdom: a Comparative Study

Authors: Remy Vandaele, Dr Milto Miltiadiou, Dr Finley Gibson, Prof Hywel T.P. Williams, Dr Edward C.D. Pope
Affiliations: University Of Exeter, Met Office
In the United Kingdom (UK), it is estimated that an average of 32,000 wildfires occur every year, posing significant ecological and societal challenges [1]. It is thus of the greatest importance to better understand this phenomenon, map the locations of wildfires and identify their critical contributing factors. However, the database typically used to monitor and analyze wildfire occurrence in the UK is based primarily on the European Forest Fire Information System (EFFIS) [2], which uses processed MODIS and VIIRS satellite imagery [1]. Due to the moderate pixel resolution of these satellites, only fires larger than 30 hectares are reliably recorded. This is a real limitation for the study of wildfires in the UK, as more than 99% of wildfires are smaller than one hectare [1]. With the launch of high-resolution sensors (e.g. Sentinel-2 MSI, Landsat 8 OLI), it has now become possible to map smaller wildfires (e.g., [3]). However, to our knowledge these sensors are not yet being used to locate wildfires in the UK. Here we present an initial evaluation of the relevance and effectiveness of high-resolution satellite data for wildfire detection in the UK. More precisely, we compared three approaches to detect wildfires in the UK:
- The EFFIS database [2]: based on MODIS/VIIRS images, EFFIS provides reliable detection of large-scale fires but is constrained by its coarse spatial resolution.
- Wildfire burn indices: remote sensing indices such as the Normalized Burn Ratio (NBR) or the Mid-InfraRed Burn Index (MIRBI), applied to Sentinel-2 and Landsat 8 imagery, potentially offer the ability to map smaller wildfires with higher precision.
- Geospatial foundation models: the Prithvi model was pre-trained on Harmonized Landsat and Sentinel datasets and successfully fine-tuned on a dataset of burn scars from the United States [4]. It represents a state-of-the-art AI-driven wildfire detection method.
To validate our approach, we compiled a dataset of ground truth wildfire extent polygons derived from fire intervention records in the county of Dorset in England, spanning from 2013 to 2024. Using this dataset, we assessed the performance of each method in detecting wildfires, identifying the respective strengths and limitations of each. Results highlight the trade-offs between these approaches. While EFFIS provides reliable detection of large fires, it fails to capture the small-scale events that dominate UK wildfire occurrence. Burn indices from Sentinel-2 and Landsat 8 show improved detection capabilities for smaller fires but are limited by atmospheric conditions (e.g., cloud cover) and rapid post-fire vegetation recovery. The Prithvi geospatial foundation model demonstrates strong potential for detecting small burn scars but requires further fine-tuning to address regional land-use variations and misclassification of other disturbances. This study provides a critical comparison of wildfire detection methods, highlighting the importance of integrating high-resolution remote sensing and advanced AI models to address the unique challenges of wildfire monitoring in temperate regions like the UK. By identifying the strengths and limitations of each approach, we aim to inform the development of more effective wildfire monitoring frameworks tailored to the UK’s landscape and fire regime.
References
[1] Belcher, C. M. et al. (2021). UK wildfires and their climate challenges. Expert-led report prepared for the third Climate Change Risk Assessment. https://www.ukclimaterisk.org/wp-content/uploads/2021/06/UK-Wildfires-and-their-Climate-Challenges.pdf
[2] San-Miguel-Ayanz, J. et al. (2012). Comprehensive monitoring of wildfires in Europe: The European Forest Fire Information System (EFFIS). Approaches to Managing Disaster – Assessing Hazards, Emergencies and Disaster Impacts. https://doi.org/10.5772/28441
[3] Filipponi, F. (2019). Exploitation of Sentinel-2 time series to map burned areas at the national level: A case study on the 2017 Italy wildfires. Remote Sensing, 11(6), 622. https://doi.org/10.3390/rs11060622
[4] Jakubik, J., Roy, S., Phillips, C. E., et al. (2023). Foundation Models for Generalist Geospatial Artificial Intelligence. Preprint, arXiv:2310.18660. https://doi.org/10.48550/arXiv.2310.18660
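The NBR-based burn mapping mentioned among the compared approaches can be sketched as follows (the standard NBR and differenced-NBR definitions; the toy reflectances are illustrative assumptions):

```python
def nbr(nir, swir2):
    """Normalized Burn Ratio from NIR and SWIR2 reflectance
    (e.g. Sentinel-2 bands B8A and B12, Landsat 8 bands 5 and 7)."""
    return (nir - swir2) / (nir + swir2)

def dnbr(nir_pre, swir2_pre, nir_post, swir2_post):
    """Differenced NBR (pre-fire minus post-fire); larger positive
    values indicate likely burn scars."""
    return nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)

# Toy reflectances: healthy vegetation pre-fire, char post-fire
d = dnbr(nir_pre=0.4, swir2_pre=0.1, nir_post=0.15, swir2_post=0.25)
print(round(d, 3))  # strongly positive, consistent with a burn scar
```

In an operational mapping chain this would be applied per pixel to cloud-masked pre/post image pairs and thresholded to delineate burned polygons.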

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Quantifying wildfire combustion completeness in a Mediterranean forest using multitemporal airborne LiDAR data and Fire Radiative Energy

Authors: Mariano Garcia, Patricia Oliva, Adrian Baissero, Emilio Chuvieco
Affiliations: Universidad De Alcalá, Departamento de Geología, Geogafía y Medio Ambiente, Environmental Remote Sensing Research Group
Wildfires play a key role in the carbon cycle, with biomass burning emissions accounting for over 500 Tg C per year (Jones et al. 2024). Bottom-up approaches to estimating carbon emissions from biomass burning rely on burned area observations combined with aboveground biomass, emission factors and combustion completeness (CC) (Seiler and Crutzen 1980), the latter representing the proportion of biomass consumed by the fire. Biomass consumed by fires can be accurately estimated by differencing pre- and post-fire biomass estimates from remote sensing data (Garcia et al. 2017). Bi-temporal airborne LiDAR collected before and after a fire offers an unprecedented capability to estimate fire-caused changes in structure and to determine the change in biomass (McCarley et al. 2024; McCarley et al. 2020), from which combustion completeness can be estimated. Therefore, this study aims to estimate combustion completeness for different severity classes from bi-temporal airborne LiDAR data over a large wildfire that occurred in Spain in 2023, as well as to assess the relation between the LiDAR-based estimates and fire radiative energy (FRE) values derived from VIIRS hotspot data. Pre- and post-fire airborne LiDAR data over the study area have been pre-processed to generate normalized point clouds, from which different metrics describing the distribution of returns across the canopies are compared to assess the impact of the fire on forest structure. Aboveground biomass (AGB) is estimated from the normalized point cloud using the scale-invariant semi-mechanistic model proposed by Zhao et al. (2018). We fit separate models for the pre- and post-fire scenarios using field data collected over the study area in September 2024.
Seventeen plots with a 17 m radius were located using a stratified random sampling approach based on the LiDAR-derived pre-fire structural characteristics and the burn severity (including unburned plots) obtained from Sentinel-2 data. Field measurements included plot location with centimetric accuracy, diameter at breast height (DBH) for all trees with DBH > 7.5 cm, the height of 20 individuals within each plot, and a visual estimate of the proportion of biomass consumed by the fire for the following biomass fractions: foliage and branches < 2 cm; branches between 2-7 cm; branches thicker than 7 cm; and trunk. The data collected in the field are used to estimate pre-fire AGB values using the allometric equations proposed by Montero et al. (2005) based on DBH and species. For the post-fire estimates, AGB values were adjusted by the proportion of biomass consumed according to the field observations. From the estimated AGB for the pre- and post-fire scenarios, fuel consumption and combustion completeness coefficients are obtained for each burn severity level. Finally, the relationship between the consumed biomass and FRE values derived from VIIRS data, obtained by integrating the FRP values from the active fire detections observed at each image acquisition time, is to be assessed by means of a regression analysis.
References
Garcia, M., Saatchi, S., Casas, A., Koltunov, A., Ustin, S., Ramirez, C., Garcia-Gutierrez, J., & Balzter, H. (2017). Quantifying biomass consumption and carbon release from the California Rim fire by integrating airborne LiDAR and Landsat OLI data. Journal of Geophysical Research: Biogeosciences, 122, 340-353.
Jones, M.W., Veraverbeke, S., Andela, N., Doerr, S.H., Kolden, C., Mataveli, G., Pettinari, M.L., Le Quéré, C., Rosan, T.M., van der Werf, G.R., van Wees, D., & Abatzoglou, J.T. (2024). Global rise in forest fire emissions linked to climate change in the extratropics. Science, 386, eadl5889.
McCarley, T.R., Hudak, A.T., Bright, B.C., Cronan, J., Eagle, P., Ottmar, R.D., & Watts, A.C. (2024). Generating fuel consumption maps on prescribed fire experiments from airborne laser scanning. International Journal of Wildland Fire, 33.
McCarley, T.R., Hudak, A.T., Sparks, A.M., Vaillant, N.M., Meddens, A.J.H., Trader, L., Mauro, F., Kreitler, J., & Boschetti, L. (2020). Estimating wildfire fuel consumption with multitemporal airborne laser scanning data and demonstrating linkage with MODIS-derived fire radiative energy. Remote Sensing of Environment, 251, 112114.
Montero, G., Ruiz-Peinado, R., & Muñoz, M. (2005). Producción de Biomasa y Fijación de CO2 Por Los Bosques Españoles.
Seiler, W., & Crutzen, P.J. (1980). Estimates of gross and net fluxes of carbon between the biosphere and the atmosphere from biomass burning. Climatic Change, 2, 207-247.
Zhao, K., Suarez, J.C., Garcia, M., Hu, T., Wang, C., & Londo, A. (2018). Utility of multitemporal lidar for forest and carbon monitoring: Tree growth, biomass dynamics, and carbon flux. Remote Sensing of Environment, 204, 883-897.
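The bottom-up logic the abstract builds on (Seiler and Crutzen 1980) can be sketched in a few lines. This is an illustrative sketch, not the study's code; the function names, units and the numbers in the usage line are assumptions for demonstration only:

```python
# Sketch of combustion completeness from pre-/post-fire aboveground biomass,
# and the Seiler & Crutzen (1980) bottom-up emission estimate
# E = burned area * fuel load * combustion completeness * emission factor.

def combustion_completeness(agb_pre, agb_post):
    """Fraction of pre-fire aboveground biomass consumed by the fire."""
    if agb_pre <= 0:
        raise ValueError("pre-fire AGB must be positive")
    return (agb_pre - agb_post) / agb_pre

def fire_emissions(burned_area_m2, agb_pre_kg_m2, cc, emission_factor_g_kg):
    """Emitted mass (g) of a species: area * fuel load * CC * EF."""
    biomass_consumed_kg = burned_area_m2 * agb_pre_kg_m2 * cc
    return biomass_consumed_kg * emission_factor_g_kg

# Hypothetical values: 12.0 and 7.5 kg/m^2 of AGB before and after the fire.
cc = combustion_completeness(agb_pre=12.0, agb_post=7.5)  # 0.375
```

In the study this differencing is done per burn severity class, with AGB coming from the LiDAR-based Zhao et al. (2018) model rather than fixed numbers.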
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Quantifying the drivers of air pollutant emissions from wildfires in South America through land surface fire modelling and satellite data applications

Authors: Maria P. Velásquez-García, Dr Richard J. Pope, Dr Steven T. Turnock, Dr David P. Moore, Professor Martyn Chipperfield
Affiliations: School of Earth and Environment, University of Leeds, National Centre of Earth Observation, University of Leeds, Met Office, National Centre for Earth Observation, University of Leicester, School of Physics and Astronomy, University of Leicester
The incidence of wildfires is a growing concern, disrupting vulnerable ecosystems and extending beyond natural patterns of variability. In particular, wildfire emissions have significant effects on the radiation balance, cloud formation, precipitation, the carbon cycle and air quality. While there is a growing understanding of the adverse impacts of wildfires now and into the future, our understanding of the physical processes and anthropogenic interactions driving wildfire activity and variability is still lacking. Fire modelling is a relatively young, evolving research field, and large sources of uncertainty remain in our representation of wildfires in the Earth system. In the latest Coupled Model Intercomparison Project (Phase 6 - CMIP6), only half of the Earth System Models (ESMs) included interactive fire models, with fewer accounting for feedback mechanisms. Wildfire emission inventories are an important resource for decision-making, describing fire dynamics and their changes over time. However, only land-atmosphere models can help us interrogate the physical processes behind the fire patterns and variability seen in the observations and provide insight into future impacts. Therefore, to improve the representation of wildfire properties in the UK’s ESM (UKESM), which will contribute to CMIP7, it is now coupled with the Interactive Fire and Emission Algorithm for Natural Environments (INFERNO). In the past, the evaluation of INFERNO focused on terms like burned area and emitted carbon. However, there has been limited research on how INFERNO simulates air pollutant emissions and the dominant processes driving their spatiotemporal variability, which is the focus of this study. 
We have evaluated INFERNO against five wildfire emission inventories (2004-2022): the Global Fire Emissions Database (GFEDvn4s and GFEDvn5), the Fire INventory from NCAR (FINNvn2.5), the Global Fire Assimilation System (GFASvn1.2) and the Brazilian Biomass Burning Emission Model (3BEM-FRP), focusing on carbon monoxide (CO) over South America (SA), a region facing major ecosystem and socioeconomic challenges. CO is abundantly emitted by wildfires and is a key primary air pollutant, as well as a secondary pollutant precursor. There are also multiple long-term complementary atmospheric retrievals of CO from satellite missions, which are exploited in this work. We ran an INFERNO control simulation as well as multiple sensitivity experiments to quantify the importance of the underpinning factors (e.g. burned area, meteorological inputs, ignition rates) driving the INFERNO-simulated wildfire CO emissions for the northern (0°-12°N), mid (17°S-0°) and southern (55°S-17°S) parts of SA. We identified temporal consistency in the emission seasonal cycles from the inventories across all of SA. Here, we typically found that GFEDvn4s and FINNvn2.5 were at the lower and higher ends of the inventory ensemble range, respectively. Focusing on mid-SA, which accounts for 75% of the territory's wildfire emissions, we found that INFERNO performs reasonably well, with a bias of 30% against the ensemble average of the inventories. INFERNO also captured the overall trend of CO emissions for the long term (2004-2021) and shorter term (2014-2021), which the inventories displayed as predominantly negative and positive, respectively. The model experiments highlighted the regional representation of fire ignition as a dominant factor behind the model-observational differences. However, when the anthropogenic ignition component of INFERNO was scaled by the human development index (HDI) in southern and mid-SA, the bias against GFEDvn5 improved to less than 15%. 
Using the HDI scaling, there was approximately a 200% difference between this simulation and the control run, which neglected socioeconomic influences (i.e. the HDI). Changing the driving meteorology from ERA5 (used in the control run) to 20CRv3 or W5R5 resulted in a significant change in the modelled CO emissions (a bias of around 25%) but shifted the simulated CO emissions further away from the emission inventory range. Parametric changes in combustion completeness, average burned area and emission factor resulted in differences of around 25%, 10% and 20% against the control run, respectively. Overall, this study helps quantify the key processes underpinning fire emissions in model simulations, allowing us to target areas for improvement and better analyse the model's scientific results.
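The headline comparisons above reduce to simple ensemble and bias arithmetic. The sketch below shows one plausible formulation; the function names and the percentage-bias definition are assumptions for illustration, not taken from the study:

```python
# Sketch: ensemble mean of several inventory time series, and the percentage
# bias of a simulation against a reference, bias(%) = 100*(sum(sim)-sum(ref))/sum(ref).

def ensemble_mean(inventories):
    """Element-wise mean across a list of equal-length emission time series."""
    return [sum(vals) / len(vals) for vals in zip(*inventories)]

def percent_bias(sim, ref):
    """Total-emission bias of a simulated series relative to a reference, in %."""
    return 100.0 * (sum(sim) - sum(ref)) / sum(ref)

# Hypothetical two-step emission series from two inventories:
ref = ensemble_mean([[1.0, 2.0], [3.0, 4.0]])  # [2.0, 3.0]
bias = percent_bias([2.5, 3.5], ref)           # 20.0 (%)
```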
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: D.03.05 - POSTER - Harnessing Open and Community-Driven Innovation in Times of Emergencies

In an era marked by frequent climate-related disasters and global health crises, the need for rapid and effective innovation in Earth Observation (EO) is paramount. This session explores how open and community-driven innovation can provide timely and impactful solutions by leveraging the collective intelligence and collaborative efforts of diverse stakeholders.

Two key themes are envisaged to be the focus of this session:

1. Open Innovation in Crisis Response:
- The necessity of accelerated innovation during emergencies, such as the COVID-19 pandemic.
- Benefits of open innovation in EO, including reduced time-to-market, shared costs and risks, and enhanced creativity.
- EO case studies of rapid development in various sectors (e.g. healthcare: personal protective equipment manufacturing, medical devices, and vaccine technologies) driven by open innovation practices.
- The role of cross-boundary collaboration in addressing complex challenges swiftly and effectively.

2. Community-Driven Approaches:
- The transformative potential of collaborative development models, crowdsourcing, and citizen science in satellite EO.
- Examples of successful open-source projects and co-creation initiatives that have advanced EO tools and technologies.
- Strategies for harnessing the collective intelligence of distributed communities to analyse large datasets, identify patterns, and develop innovative EO solutions.
- Empowering citizen scientists in data collection, validation, and raising public awareness to democratise access to EO data and its applications.

The objectives of this session are:
- To highlight how integrating external knowledge and community engagement can enhance innovation performance during crises.
- To showcase real-world examples of rapid innovation and effective community-driven approaches.
- To foster dynamic discussions on best practices and future directions for open and collaborative EO initiatives.

In terms of target audience, this session is designed for climate scientists, disaster experts, NGOs, city managers, emergency community members, first responders, insurers, government bodies, researchers, developers, policymakers, and citizen science enthusiasts. It provides a unique opportunity for these groups to exchange ideas and collaborate on innovative solutions for climate and health crises.

Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: A.07.06 - POSTER - Monitoring river discharge variability in a context of climate change: bridging satellite technology, ground data and modelling

River discharge is recognized by the Global Climate Observing System as an Essential Climate Variable due to its crucial role in regulating the flow of freshwater into the oceans, influencing salinity and thermohaline circulation, which are fundamental components of the Earth's climate system. In addition to its importance for water resource management and flood forecasting, the impact of climate change on river discharge has attracted growing interest, and understanding it is needed to develop effective adaptation strategies. However, the decline in ground-based monitoring networks since the 1980s has made it increasingly difficult to accurately estimate river discharge, a challenge exacerbated by the rise in extreme weather events such as floods and droughts.
Satellite remote sensing, including altimetry, optical, and multispectral sensors, has emerged as a valuable and effective solution for global river discharge monitoring, providing extensive spatial coverage and long-term data. Multi-mission approaches enhance the spatial and temporal resolution as well as the accuracy of these estimates. However, it is essential to integrate remote sensing data with ground-based observations and hydrological models to achieve a more comprehensive and long-term understanding of river discharge.
The European Space Agency (ESA), through its Climate Change Initiative, is making significant investments in the study of river discharge with the aim of providing long-term data that are crucial for analysing trends and understanding the impacts of climate change. By combining satellite technologies with traditional methods, researchers can deepen their understanding of river dynamics and their role in the Earth's climate, thereby supporting the development of more effective climate adaptation strategies.
This session is dedicated to presenting advances in river discharge estimation from space and will cover algorithm development, novel and merged products, and applications including the assimilation of satellite and in situ data in models. The final goal is to outline future directions and guide upcoming investments in the satellite sector, with a particular focus on improving river discharge estimation, also in the context of climate change. Participation is open to all who wish to contribute and share scientific requirements, with the goal of identifying the levels of precision, accuracy, temporal resolution and spatial scale of river discharge, as an essential climate variable, needed for effective climate studies.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Earth Observation Hydrolab: from SWOT altimetry and multi-satellite to continental water cycle modeling

Authors: Santiago Pena Luque, Kevin Larnier, Pierre André Garambois, Mathilde Simeon, Nicolas Picot
Affiliations: CNES, Hydromatters, INRAE
Three decades of satellite altimetry have laid the groundwork for integrating satellite observations into hydrological models, whose improvement is key to advancing water cycle understanding and flood and drought forecasting in the context of climate change. The recent SWOT (Surface Water and Ocean Topography, NASA/CNES) mission has inaugurated a new era in inland water cycle observation. It provides uniquely dense measurements of lake and river elevations, slopes, and widths, enhancing hydraulic visibility. However, estimating river discharge remains challenging, particularly in cases of uncertain bathymetry and friction, or for ungauged rivers. To tackle this issue, a multi-satellite approach coupled with basin-scale hydrological-hydraulic (H&H) modeling is crucial to ensure physical consistency and effective data fusion. The Center for Hydrology from Space (CHS) first aims to estimate river discharge across local, river network, and continental scales. To achieve this, CHS maximizes the use of in situ and satellite data, including satellite altimetry and imagery of continental hydrosystems, basin surface descriptors, and atmospheric forcings. The initial focus is on providing continental-scale coverage over Europe, South America, and Africa. CHS is hosted at the SWOT Expertise Center at CNES. Initial implementations in CHS of reach-scale hydraulic solvers and discharge inversion algorithms will be conducted on the Ohio, Maroni, and Garonne Rivers. This effort will mirror the SWOT Confluence initiative but within a multi-satellite, basin-scale hydrologic modeling framework. Planned next steps include implementing hydraulic and hydrological models over large basins such as the Amazon and Congo, and finally coupled hydrological-hydraulic models at a continental scale. Key challenges include harmonizing swath and nadir altimetry measurements, improving the integration of basin-scale models with heterogeneous data, and maintaining an open and adaptable architecture. 
This architecture must permit the seamless incorporation of new algorithm versions, databases, and models without disrupting continuous integration processes.
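To illustrate why uncertain bathymetry and friction dominate the inversion problem, a Manning-type reach relation of the kind commonly used in river discharge inversion can be sketched as follows. This is a hedged sketch under a wide-rectangular-channel assumption, not CHS code, and the numbers in the usage line are made up:

```python
from math import sqrt

# Sketch: Manning-type discharge for a wide rectangular reach,
# Q = (1/n) * A**(5/3) * W**(-2/3) * sqrt(S).
# Friction n and the full cross-section A are largely unobserved from space,
# which is what makes the inversion ill-posed.

def manning_discharge(n, A, W, S):
    """Discharge (m^3/s) from Manning friction n, cross-sectional area A (m^2),
    river width W (m) and water surface slope S (m/m)."""
    return (1.0 / n) * A ** (5.0 / 3.0) * W ** (-2.0 / 3.0) * sqrt(S)

# Hypothetical reach: n=0.03, A=1000 m^2, W=200 m, S=1e-4 -> roughly 975 m^3/s.
q = manning_discharge(0.03, 1000.0, 200.0, 1e-4)
```

Satellite altimetry and imagery constrain W, S and changes in A, so the inversion must estimate the remaining unknowns (baseflow cross-section and n), which is why ancillary models and data fusion are emphasized above.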
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: River discharge estimations from Near-Infrared satellite data within the ESA river discharge Climate Change Initiative project

Authors: PhD Paolo Filippucci, PhD Angelica Tarpanelli, PhD Debi Prasad Sahoo
Affiliations: Istituto di Ricerca per la Protezione Idrogeologica (IRPI), Consiglio Nazionale delle Ricerche (CNR), International Crop Research Institute For Semi Arid Tropic
River discharge data are fundamental for applications across both short- and long-term timescales, ranging from water management and flood prediction to global freshwater monitoring. For these reasons, the Global Climate Observing System (GCOS) designated River Discharge as one of its Essential Climate Variables (ECVs). However, recent and accurate observations of discharge have become increasingly rare, partly due to geopolitical issues that hinder data sharing across national boundaries. To address this challenge, the European Space Agency (ESA) initiated the River Discharge Climate Change Initiative precursor study, aiming to derive long-term climate data records of river discharge for selected basins using satellite remote sensing observations. Several approaches are available to derive flow information from remote sensing. Among these, the use of Near-Infrared (NIR) imagery from multispectral sensors is emerging as a promising method to increase our capacity to estimate river discharge from satellite, due to the high temporal resolution and broad spatial coverage of such sensors compared to alternatives like altimeters. Specifically, the CM approach yields a proxy of river discharge by analyzing variations in reflectance signals retrieved from pixels near the river boundaries, which alternate between wet and dry states depending on river flow conditions. In this study, the CM approach was applied to over 50 sites worldwide, representing diverse flow conditions (average flows ranging from 11 m³/s to 168,387 m³/s). Data were sourced from multiple satellite sensors (MODIS Aqua, MODIS Terra, Sentinel-3, Landsat-5, Landsat-7, Landsat-8, Landsat-9 and Sentinel-2), characterized by different spatial and temporal resolutions. A unified proxy of river discharge was then obtained by combining the multispectral indices derived from each sensor. 
Two versions of the CM approach were tested: a calibrated model, optimized using recent in-situ data, and an uncalibrated model, tailored for application in data-scarce regions. River discharge estimates were derived using non-linear regression in areas with recent observational data, while for sites lacking such data, estimates were obtained through cumulative distribution function (CDF) matching. The approach yielded satisfactory results across all study areas, with particularly strong performance at sites with access to recent observations. In contrast, the uncalibrated approach showed reduced performance, especially for rivers in regions subject to periodic freezing, tropical climates, or data from coarse-resolution sensors. These limitations were attributed to the algorithm's challenges in handling unmasked snow and frozen areas, cloud interference, and the difficulty in accurately delineating river features using low-resolution data.
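The two ingredients described above, a dry-over-wet reflectance ratio used as a discharge proxy and cumulative distribution function (CDF) matching for sites without concurrent observations, can be sketched minimally. The function names and the empirical quantile scheme are assumptions for illustration, not the project's implementation:

```python
from bisect import bisect_left

def cm_proxy(refl_dry, refl_wet):
    """Reflectance ratio of a dry calibration pixel over a wet measurement
    pixel: rises with river flow as the 'wet' pixel darkens in the NIR."""
    return refl_dry / refl_wet

def cdf_match(value, proxy_record, discharge_record):
    """Map a proxy value to discharge by matching empirical quantiles
    of the proxy record onto those of a historical discharge record."""
    ps = sorted(proxy_record)
    qs = sorted(discharge_record)
    rank = bisect_left(ps, value)
    frac = rank / max(len(ps) - 1, 1)   # empirical quantile in [0, 1]
    idx = round(frac * (len(qs) - 1))
    return qs[idx]

# Hypothetical records: a median proxy value maps to a median discharge.
p = [1.0, 1.2, 1.5, 2.0, 2.5]
q = [10, 20, 30, 40, 50]
est = cdf_match(1.5, p, q)  # 30
```

The calibrated variant in the abstract instead fits a non-linear regression between proxy and concurrent in-situ discharge; CDF matching is the fallback where no overlap exists.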
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: An advance integration of satellite products for a Global Product of River discharge

Authors: Angelica Tarpanelli, Paolo Filippucci, Karina Nielsen, Alessandro Burini
Affiliations: CNR-IRPI, DTU-Space, EUMETSAT
Rivers are fundamental to the Earth's hydrological system, serving as conduits for freshwater flow and vital resources for ecosystems, societies and economies. Monitoring river discharge is critical for understanding the dynamics of the global water cycle, managing water resources and addressing the growing challenges posed by climate change. However, traditional in-situ measurements are sparse and unevenly distributed, particularly in remote and ungauged regions. Satellite observations provide an unprecedented opportunity to bridge this gap, enabling river discharge estimation across large spatial scales and diverse environments. This study introduces an advanced framework that integrates satellite data from optical and altimetry sensors to produce a global river discharge product suitable for hydrological assessments. Leveraging the capabilities of EUMETSAT’s satellite systems and Copernicus contributing missions, the approach synthesizes data from multiple platforms to maximize information retrieval and improve accuracy compared to products derived from individual satellites. Key innovations include the integration of complementary datasets to enhance temporal and spatial resolution, particularly in regions with limited ground-based observations. This initial phase of the study presents an analysis of over 300 globally distributed sites, encompassing diverse climatic zones and flow regimes. This comprehensive dataset is designed to assess the long-term applicability of satellite-derived discharge estimates for water resource management and climate impact assessments. Particular attention is given to evaluating the added value of the global product in ungauged basins and understanding its limitations in monitoring smaller rivers, where high-resolution data is critical. 
By exploring the scalability and consistency of satellite-based techniques, this research advances the development of global hydrological datasets and contributes to the broader goal of sustainable water management. The findings highlight the potential of these technologies to transform river discharge monitoring, enabling more informed decision-making in the face of global environmental challenges.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Satellite Altimetry-based Extension of global-scale in situ river discharge Measurements (SAEM)

Authors: Peyman Saemian, Dr.-Ing. Omid Elmi, Molly Stroud, Ryan Riggs, Benjamin M. Kitambo, Fabrice Papa, Prof. Dr. George H. Allen, Dr.-Ing. Mohammad J. Tourian
Affiliations: Institute Of Geodesy, University Of Stuttgart, Germany, Department of Geosciences, Virginia Polytechnic Institute and State University, Department of Geography, Texas A&M University, Laboratoire d’Etudes en Géophysique et Océanographie Spatiales (LEGOS), Université de Toulouse, CNES/CNRS/IRD/UT3
River discharge is a fundamental metric for understanding water flow dynamics through river systems. However, monitoring efforts are hampered by a declining number of active gauges and temporal data gaps. Remote sensing, especially radar-based techniques, offers an effective means of addressing this issue. We present the Satellite Altimetry-based Extension of the global-scale in situ river discharge Measurements (SAEM) dataset. SAEM employs multiple satellite altimetry missions to estimate discharge, complementing the existing worldwide network of national and international gauges. In this study, we analyzed 47,000 gauges and successfully estimated height-based discharge for 8,730 of them, approximately three times the coverage of the largest existing remote sensing-based datasets. These gauges represent about 88% of the total gauged discharge volume. The SAEM height-based discharge estimates achieve a median Kling-Gupta Efficiency (KGE) of 0.48, outperforming current global datasets. Beyond discharge time series, SAEM offers three additional products: (1) a catalog of Virtual Stations (VSs) with metadata, including coordinates, satellite altimetry mission details, distances to discharge gauges, and quality flags; (2) for VSs with high-quality discharge estimates, the specific IDs introduced by database providers such as Hydroweb.Next (formerly Hydroweb), the Database of Hydrological Time Series of Inland Waters (DAHITI), the Global River Radar Altimeter Time Series (GRRATS), HydroSat, and the water level time series generated specifically for this study (SAEM WL); and (3) rating curves for the VSs, mapping water levels to discharge using a Nonparametric Stochastic Quantile Mapping Function approach. The SAEM dataset can enhance hydrological modeling, support water resource management, and help tackle complex water-related challenges in the context of climate change.
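The KGE score quoted above combines correlation, variability and bias into one number. A minimal stdlib sketch of the 2009 Gupta et al. formulation (an assumption here, since the abstract does not specify the KGE variant):

```python
from math import sqrt
from statistics import mean, pstdev

# Sketch: Kling-Gupta Efficiency, KGE = 1 - sqrt((r-1)^2 + (a-1)^2 + (b-1)^2),
# with r the Pearson correlation, a the ratio of standard deviations and
# b the ratio of means (simulated over observed). KGE = 1 is a perfect match.

def kge(sim, obs):
    mu_s, mu_o = mean(sim), mean(obs)
    sd_s, sd_o = pstdev(sim), pstdev(obs)
    cov = mean((s - mu_s) * (o - mu_o) for s, o in zip(sim, obs))
    r = cov / (sd_s * sd_o)
    alpha, beta = sd_s / sd_o, mu_s / mu_o
    return 1 - sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

For example, a simulation that doubles every observed value keeps r = 1 but has alpha = beta = 2, giving KGE = 1 - sqrt(2), which is why KGE penalizes amplitude and bias errors that correlation alone would miss.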
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Impact of Freshwater Fluxes on Ocean Modelling Systems: Towards the Combination of In Situ and Satellite Measurements With Global Hydrological Models to Improve River Discharge

Authors: Louis Kern, Thomas Vaujour, Julia Pfeffer, Alejandro Blazquez, Romain Bourdallé-Badie, Gilles Garric, Gilles Larnicol, Claire Sirere, Andrea Storto, Chloé Thenoz, Chunxue Yang, Stéphanie Guinehut
Affiliations: Magellium, LEGOS, Mercator Océan international, CNR-ISMAR
Freshwater fluxes impact the ocean by adding water mass into the ocean basins and through regional density changes caused by fluctuations in local salinity. Changes in terrestrial water storage due to human activities and natural climate variability have been shown to impact global and regional sea level by a few millimeters to several centimeters over time scales ranging from a few days to a decade. However, most ocean analysis and forecast systems currently rely on climatological estimates of river discharge, which are by construction unable to capture the full variability of the global water cycle. Three techniques are explored here to constrain the freshwater fluxes flowing into the ocean. In-situ measurements at river gauges provide the most accurate river discharge measurements but with extremely sparse spatial and temporal coverage. The Copernicus Emergency Service Global Flood Awareness System (GloFAS) allows the estimation of river discharge globally with dense spatial and temporal coverage, but with local to regional biases linked to uncertainties in the hydrological modelling system. Satellite gravity measurements may be combined with atmospheric reanalyses to estimate river discharge in large river basins. The first objective of the F3O (Freshwater Fluxes in Ocean Forecasts) project is to assess the performance of each technique in recovering river discharge estimates. Artificial intelligence techniques will then be used to provide a hybrid river discharge estimate with high accuracy, high temporal and spatial resolution, and global coverage, with the potential to be produced in near-real-time. Neural network architectures such as the Long Short-Term Memory (LSTM) are particularly well suited to correcting biases and errors in time series. The hybrid river discharge dataset will then be implemented in ocean model systems that mimic the Copernicus Marine Service's delayed-time (reanalyses) and real-time (operational forecasting) products. 
The expertise and knowledge gained within the F3O project will contribute to improving the ocean monitoring and forecasting systems by integrating the latest in situ and satellite observations of the river discharge along with artificial intelligence techniques.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: A Satellite-Based Framework for River Discharge Estimation: A Hybrid Approach Integrating SAR, Optical, and Altimetry Data

Authors: Ceren Yazigulu Tural, Koray K. Yilmaz, Angelica Tarpanelli
Affiliations: Middle East Technical University, Research Institute for Geo-Hydrological Protection
Rivers are a critical component of the global water cycle, serving as dynamic pathways for freshwater flow and storage. However, global discharge data are limited, particularly in regions with sparse in-situ measurements. This study introduces a hybrid modeling framework that leverages advanced satellite observations combined with machine learning and deep learning algorithms to estimate river discharge. The framework combines Sentinel-2 optical imagery, Sentinel-1 Synthetic Aperture Radar (SAR) data, and satellite altimetry data from Sentinel-3 and Sentinel-6, leveraging their complementary strengths. The input variables for the model include total water surface area and water indices derived from Sentinel-1 and Sentinel-2, while satellite altimetry provides water level time series. Sentinel-1 effectively compensates for the limitations of optical sensors under cloudy conditions. Moreover, satellite altimetry data are particularly valuable in areas where lateral water expansion is constrained by topography and SAR or optical sensors are unable to detect variations. The hybrid model, combining Long Short-Term Memory (LSTM) networks and Random Forest Regression (RFR), estimates river discharge from satellite-derived measurements. To account for varying river morphologies, reach boundaries and river centerlines from the SWOT River Database (SWORD) are incorporated, ensuring robust adaptability to diverse conditions. The model is calibrated and validated against in-situ discharge measurements on corresponding dates from the Mississippi River (USA), Kizilirmak River (Türkiye), and Po River (Italy). Designed to achieve high accuracy across diverse climatic and topographical settings, the proposed framework offers a scalable solution for estimating river discharge. By integrating satellite observations with a hybrid methodology, this approach has significant potential for enhancing global hydrological assessments.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Using autonomous In-Situ Radiometry for Monitoring Suspended Sediments During Flood Events in the Po River, Italy

Authors: Gian Marco Scarpa, Dalin Jiang, Congju Fu, Debora Bellafiore, Francesca De Pascalis, Federica Braga
Affiliations: Institute of Marine Sciences-National Research Council (CNR-ISMAR), Earth and Planetary Observation Sciences (EPOS), Biological and Environmental Sciences, Faculty of Natural Sciences, University of Stirling, Department of Civil, Building and Environmental Engineering, Sapienza University of Rome
As part of the Italian Integrated Environmental Research Infrastructures System (ITINERIS) National Recovery and Resilience Plan (PNRR) project within the DANUBIUS-RI Italian Supersite, a WISPstation spectroradiometer was installed along the Po River (Italy) in September 2024, approximately 95 km from the river’s mouth. This instrument performs predefined sequences of above-water hyperspectral radiometric measurements. Since September 2024, the instrument has captured data during a particularly dynamic period marked by multiple flood events, including one that exceeded a discharge of 7,000 m³/s. This study investigates the potential of the WISPstation for providing high-frequency information on water turbidity and suspended sediment concentrations in river systems, with a focus on methodological improvements in data quality control and water quality parameter retrieval algorithms. Procedures for data quality control to exclude data affected by environmental perturbations, and methodologies for the removal of residual reflected skylight, were implemented to quality-control the remote sensing reflectance (Rrs) spectra obtained from above-water radiometric measurements. Specifically, different approaches were tested and compared to identify a robust processing chain capable of correcting data collected under sub-optimal conditions caused by clouds, variable viewing geometries, high glint perturbations, and low illumination levels. These procedures were crucial to ensuring the provision of high-quality Rrs data for satellite validation and for the retrieval of water quality parameters. In-situ radiometric data gathered from the WISPstation were used for match-up analyses with Sentinel-2 imagery acquired over river water. State-of-the-art algorithms were applied to the corrected in situ Rrs spectra to retrieve turbidity and suspended sediment concentrations. Their temporal variability was analyzed in relation to the river discharge and in situ physico-chemical parameters. 
The WISPstation demonstrated high sensitivity to turbidity variations and captured suspended sediment trends associated with the river flow. Our findings highlight the capability of fixed autonomous in-situ spectroradiometers to effectively support near-real-time monitoring of river sediment dynamics, improving the understanding of sediment transport processes, particularly in highly dynamic systems like the Po River. These analyses demonstrated promising results, confirming the consistency and reliability of satellite-derived products even in riverine environments, which are quite challenging for remote sensing applications. By providing high-quality reference measurements, the spectroradiometer data support preliminary quality control processes for satellite-derived water quality products, enhancing their accuracy and robustness.
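The residual-skylight removal described above follows the standard above-water radiometry convention, which can be sketched in one function. The fixed air-water reflectance factor rho ≈ 0.028 is a common low-wind assumption in the literature, not a value reported by the authors, and this is not the WISPstation pipeline:

```python
# Sketch: remote sensing reflectance with residual sky-glint removal,
# Rrs(lambda) = (Lt(lambda) - rho * Lsky(lambda)) / Ed(lambda),
# where Lt is total upwelling radiance, Lsky the sky radiance and Ed the
# downwelling irradiance, all per wavelength.

def rrs(lt, lsky, ed, rho=0.028):
    """Remote sensing reflectance (sr^-1) per wavelength band."""
    return [(t - rho * s) / e for t, s, e in zip(lt, lsky, ed)]

# Hypothetical single-band reading: Lt=1.0, Lsky=0.5, Ed=10.0 -> 0.0986 sr^-1.
spectrum = rrs([1.0], [0.5], [10.0])
```

The approaches the abstract compares can be read as different ways of choosing rho (or a residual offset) under sub-optimal glint and illumination conditions.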
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Advancing River Discharge Estimation Through Merged Radar Altimetry Water Surface Elevation Time Series and Rating Curve Approaches

Authors: Laetitia Gal, Sylvain Biancamaria, Adrien Paris, Benjamin Kitambo, Julien Lefebve, Elena Zakharova, Malik Boussaroque, Maxime Vayre
Affiliations: Hydro Matters, LEGOS, EOLA, CLS
Satellite altimetry provides a powerful alternative to traditional in-situ stage measurements for observing water surface elevation (WSE) and can then be used, with ancillary discharge data, to estimate river discharge, especially in regions with limited or discontinued ground-based observations. This study presents a two-step methodology that integrates the creation of merged satellite WSE time series from multiple altimetry missions with the development of rating curves (RCs) using in situ discharge, to estimate long-term (at least from 2002) discharge time series over a sample of stations (54 worldwide, covering various climatic and hydrodynamic conditions). This methodology has been applied in the frame of the ESA CCI River Discharge precursor project (https://climate.esa.int/en/projects/river-discharge/). The first step of the methodology focuses on computing long-term WSE time series by merging data from various missions, including TOPEX/Poseidon, ERS-2, Jason-1/2/3, Envisat, Sentinel-3A/B, and Sentinel-6. Intermission biases, arising from differences in sensor characteristics, are corrected to ensure consistency across satellite missions. The second step involves rating curve estimation (i.e. the relationship between discharge and WSE) using three distinct approaches based on data availability. Method 1 employs concurrent in-situ (or simulated) discharge and WSE time series to derive a Bayesian rating curve, while Method 2 applies a quantile-based approach when no temporal overlap exists between WSE and discharge data. Extensive validation confirms that the altimetry-based river discharge product (RD-alti) achieves the accuracy standards required by GCOS. Its strong correlation with in-situ data highlights its reliability, with an average NRMSE of 12.8% and an NSE of 0.72 across all stations. 
While the methodology demonstrates good efficiency, some uncertainty remains, particularly during extreme events, due to limited spatial and temporal data coverage or the use of old altimeter missions.
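The rating-curve step described above can be illustrated with a minimal sketch: fitting a power-law curve Q = a·(h − h0)^b in log space to concurrent WSE and discharge samples, then using it to convert altimetric WSE to discharge. All parameters and data here are synthetic, and the project's actual Bayesian and quantile-based methods are considerably more elaborate than this ordinary least-squares fit.

```python
import numpy as np

# Synthetic example: true rating curve Q = a * (h - h0)^b, with h0 assumed known
a_true, b_true, h0 = 15.0, 1.8, 2.0           # hypothetical parameters
h = np.linspace(2.5, 8.0, 40)                 # water surface elevation (m)
rng = np.random.default_rng(0)
q = a_true * (h - h0) ** b_true * rng.lognormal(0.0, 0.02, h.size)  # ~2% noise

# Linearise: log Q = log a + b * log(h - h0), then ordinary least squares
b_fit, log_a_fit = np.polyfit(np.log(h - h0), np.log(q), 1)
a_fit = np.exp(log_a_fit)

def discharge(wse):
    """Estimate discharge (m^3/s) from an altimetric WSE using the fitted curve."""
    return a_fit * (wse - h0) ** b_fit

print(round(a_fit, 2), round(b_fit, 2))
```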
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Current status of the ESA CCI River Discharge precursor project

Authors: Sylvain Biancamaria, Alice Andral, Nuria Bachiller Jareno, Silvia Barbetta, Fabien Blarel, Malik Boussaroque, Omid Elmi, Paolo Filippucci, Laetitia Gal, Kevin Larnier, Gilles Larnicol, Benjamin Kitambo, Julien Lefebve, Christian Massari, Simon Munier, Fernando Niño, Gaëtan Noual, Fabrice Papa, Adrien Paris, Vanessa Pedinotti, Malak Sadki, Peyman Saemian, Debi Prasad Sahoo, Stefan Simis, Nicolas Taburet, Angelica Tarpanelli, Mohammad Tourian, Maxime Vayre, Elena Zakharova, Clément Albergel
Affiliations: LEGOS, CLS, Plymouth Marine Laboratory, CNR-IRPI, Hydro Matters, University of Stuttgart, Magellium, CNRM, EOLA, ESA-ESCAT
River discharge is an Essential Climate Variable (Global Climate Observing System, 2022) that was not computed within the ESA Climate Change Initiative (CCI) before 2022. Since February 2023, ESA has therefore been funding the CCI River Discharge precursor project (https://climate.esa.int/en/projects/river-discharge/) to evaluate the potential of computing long-term river discharge time series from satellite data. No past or present satellite instrument is able to measure river discharge directly. It can only be derived indirectly, using three methodologies: (1) the use of multiple satellite radar altimeter long time series of water surface elevation to estimate discharge through a rating curve approach with ancillary discharge data; (2) the use of multispectral sensor data, specifically in the near-infrared (NIR) band, to compute the ratio between the reflectance of a dry pixel and a wet pixel, which is linked to river flow variations; and (3) the combination of these two approaches. This project is a proof of concept and is therefore not yet global. It aims at providing discharge estimates from at least 2002 to 2022 at 54 locations over 18 river basins. These selected targets cover different climatic zones (from the tropics to the Arctic), different drainage areas (from 50,000 km² to the Amazon basin), and different levels of human activity and in-situ observation availability. This poster will provide a general overview of the computed discharge products, their validation, the assimilation of these data into basin-scale hydrology models, and some preliminary climate assessments based on these products.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Combining R with Google Earth Engine to create new datasets for the CAMELS-CZ database

Authors: Ondrej Ledvinka
Affiliations: Czech Hydrometeorological Institute, Hydrology Database and Water Budget Department, Charles University, Faculty of Science, Department of Physical Geography and Geoecology
Inspired by the global interest in creating CAMELS (Catchment Attributes and MEteorology for Large-sample Studies) datasets, Czech hydrologists have also started collecting various geospatial data and time-series-related data in hydrology and climatology to create their own database, CAMELS-CZ, which may be routinely used when studying runoff generation and response in a changing climate on the territory of Czechia and its vicinity, as defined by the selected catchments. Different catchment attributes may serve as good predictors in spatial hydrology, where clusters of regions behaving similarly from a hydrometeorological point of view are sought. In particular, modern machine learning models may be developed on top of such a database, providing better insights into rainfall-runoff processes that are usually non-stationary and non-linear. Besides the time series of hydrological and climatological variables that are being opened in Czechia, spatial datasets play a very important role. However, the source spatial datasets can be hard to process on a local machine, even for a small country such as Czechia (with a territory of about 79,000 km²). The reason is the finer spatial resolution brought by modern technologies, which applies mainly to raster data. Therefore, it is advisable to make use of cloud-based approaches to process such large spatial data. Czech universities have good access to Google Earth Engine (GEE), but Czech hydrologists have not employed it much so far, and it is time to close this gap. Since many hydrologists and geographers in Czechia are familiar with R, the potential of the 'rgee' and 'tidyrgee' R packages was investigated. Today, the vector spatial data representing catchment divides allow us to study up to 738 catchments closed by a water-gauging station, ranging from 0.07 to 51,408.68 km².
Some of the smaller catchments are nested in the larger ones, so the values of some raster cells must be counted more than once when aggregating the data for the catchments, which GEE permits without requiring more complicated strategies. While learning to apply the combination of the 'rgee' package and GEE effectively, taking into account best practices that recommend avoiding mixing cloud and client functions, several scripts have been developed and refined to lower the computation time. Currently, datasets (i.e. class areas and their percentages) derived from the image collections or images of Copernicus CORINE Land Cover (1986-2018; 100 m horizontal resolution), Copernicus Global Land Cover (2015-2019; 100 m horizontal resolution) and ESA WorldCover (2020-2021; 10 m horizontal resolution) are ready to be used for all 738 catchments. Moreover, a time series of the Normalized Difference Vegetation Index (NDVI) derived from the MOD13A1.061 product as catchment means can be processed further. Many other products published on GEE may soon be considered good information sources as well. For instance, ERA5-Land or various GRACE products may be used at least for studying trends and various other aspects of catchment time series when it comes to climate change investigation.
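The per-catchment aggregation described above, where nested catchments cause some raster cells to be counted more than once, can be illustrated independently of GEE/rgee with plain boolean masks. The raster values and catchment shapes below are invented for demonstration only.

```python
import numpy as np

# Toy NDVI raster and two catchment masks; the small catchment is nested
# inside the large one, so its cells enter both aggregates -- the same
# overlap behaviour described for nested CAMELS-CZ catchments.
ndvi = np.array([[0.2, 0.4, 0.6],
                 [0.3, 0.5, 0.7],
                 [0.1, 0.2, 0.8]])

large = np.ones((3, 3), dtype=bool)           # whole grid
small = np.zeros((3, 3), dtype=bool)
small[0:2, 1:3] = True                        # nested sub-catchment

def catchment_mean(raster, mask):
    """Mean of the raster cells falling inside the catchment mask."""
    return float(raster[mask].mean())

means = {"large": catchment_mean(ndvi, large),
         "small": catchment_mean(ndvi, small)}
print(means)
```

In GEE this corresponds to a zonal reduction over overlapping catchment geometries, which the platform evaluates per geometry without any special handling for the overlap.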
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: The Global Runoff Database – A unique archive for in-situ river discharge data

Authors: Simon Mischel, Henning Plessow, Claudia
Affiliations: Federal Institute Of Hydrology
River systems are an integral part of the global water cycle and are linked to many processes on local, regional and global scales. Terrestrial monitoring of rivers is fundamental for the sustainable management of available water resources at the regional or catchment scale. The Global Runoff Data Centre (GRDC) operates under the auspices of the World Meteorological Organization (WMO) at the German Federal Institute of Hydrology (BfG). It holds the most substantive collection of quality-assured river discharge data on a global scale. Established in 1988 to support research on global and climate change and integrated water resources management, the GRDC has been a key partner in a number of data collection and data management projects. It connects national hydrological and hydrometeorological services, the primary providers of river discharge data and associated metadata, with the scientific research community utilizing this unique data collection. Currently, the Global Runoff Database contains river discharge data collected at daily or monthly intervals from more than 10,000 stations in 160 countries. This adds up to around 600,000 station-years with an average record length of 40 years. The GRDC archives international data up to 200 years old, fostering multinational and global long-term hydrological studies. As a trustworthy source of runoff data, the GRDC has been integrated into the WMO Catalogue for Climate Data, supporting the scientific community in analyzing global climate trends and assessing environmental impacts and risks. National Hydrological Services are encouraged to supply suitable river discharge data and associated station metadata for publication through the GRDC (http://grdc.bafg.de). In addition to the core dataset, the GRDC hosts special datasets collected in the framework of different projects. These include regional datasets, such as the BALTEX (Baltic Sea Experiment) dataset, covering discharge stations draining into the Baltic Sea.
There are also global project datasets such as the GTN-R (Global Terrestrial Network – Rivers) dataset, which contains stations located near river mouths in order to assess the freshwater flux into the ocean. The newest release is the GRDC extension to the Caravan dataset. It is based on a subset of hydrological discharge data and station-based watersheds from the GRDC which are covered by an open data policy (Attribution 4.0 International; CC BY 4.0). The dataset covers stations from 5,356 catchments in 25 countries, spans 1950 – 2023, and is already publicly available on Zenodo: https://zenodo.org/records/10074416. Regarding collaboration with the satellite Earth observation community, the GRDC provides a reliable source of in-situ river discharge and river water level data, which is, among other things, important for calibrating and validating satellite-based remote sensing data. Thus, the GRDC is an important bridge between in-situ monitoring data and satellite-derived data products.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: C.03.03 - POSTER - Advancing global-scale high resolution imaging spectroscopy in preparation for CHIME

The growing availability of high-resolution imaging spectroscopy products from missions such as EnMAP, EMIT, HISUI, PRISMA and DESIS is enabling a wide spectrum of novel scientific products and applications. The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) will provide routine observations over land and the coastal zone through the Copernicus Programme in support of EU and related policies for the management of natural resources, assets and benefits. This unique visible-to-shortwave-infrared spectroscopy-based observatory will make major contributions to fulfilling user requirements in the domains of environmental monitoring and management, with a focus on soil productivity, sustainable raw materials exploitation, sustainable use of nutrients and water in agriculture, and food security. A number of secondary applications will benefit from the routine provision of CHIME products, e.g. biodiversity, coastal and inland water quality monitoring, and methane and carbon dioxide detection from point sources. In this session we welcome contributions from the scientific and user communities encompassing CHIME preparatory activities, including L2 product development, calibration/validation, and downstream product and application prototyping. We also welcome contributions building on current spaceborne imaging spectroscopy missions and anticipated missions like SBG-VSWIR.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: The CHIME Observation Performance Simulator (OPSI) Software System

Authors: Nicolas Lamquin, Benjamin Finociety, Romain Sumérot, Sinh Khoa Nguyen, Meriem Chakroun, Eric Jeansou, Dr Véronique Bruniquel, Clarissa Hamann, Marco Spagnolli, Isabell Krisch, Richard Wachter, Johannes Schmidt, Stefano Baldacci, Dimitri Lebedeff, Vincent Soulignac, Hugo Monchatre, Yves Bobichon, Antonio Gabriele, Adrian Leonardo Garcia, Ignacio Fernández Núñez, Claudia Isola, Stefan Adriansen, Wouter Dierckx
Affiliations: ACRI-ST, OHB System AG, TAS-France, ESA/ESTEC, VITO
The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) is one of the High-Priority Candidate Missions (HPCM) endorsed by ESA for the expansion of the Copernicus Sentinel missions. CHIME will provide routine hyperspectral sampling of Earth surface reflectance over the solar spectral range (400-2500 nm) at a 30 m spatial resolution with a revisit of 22 (11) days with one (two) satellite(s). CHIME observations will support EU and related policies for the management of natural resources and assets, providing a major contribution in the domains of raw materials and sustainable agricultural management, with a focus on soil properties, sustainable raw materials development and agricultural services, including food security and biodiversity. The development of the CHIME mission is carried out by a consortium led by Thales Alenia Space in France (as prime contractor) and OHB System AG in Germany (for the instrument). The Observation Performance Simulator (OPSI) is a software tool being developed by ACRI-ST, under the management of the above partners as ATBD providers, to support the development and verification of the space segment as well as the development of the ground segment. OPSI is devoted to simulating the instrument acquisition and its different acquisition modes (along with the platform behaviour), to prototyping the corresponding ground segment processors, which calibrate the payload measurements to TOA radiance (at L1b) and geometrically refined and orthorectified TOA reflectance (at L1c), and to assessing the instrument performance by comparing true and estimated parameters generated at different stages. To accomplish these objectives, OPSI is composed of an Instrument Performance Simulator (IPS), a Ground Processor Prototype (GPP) and a Performance Assessment Module (PAM).
This presentation provides an overview of the OPSI software system, which offers the leading consortium and ESA a tool able to simulate the radiometric and geometric aspects of the mission, emulate onboard compression, and provide the mission performance figures to be expected from the instrument design.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Atmospheric correction of hyperspectral satellite missions with the MAGAC toolbox

Authors: Jorge Vicent Servera, Béatrice Berthelot
Affiliations: Magellium
Atmospheric correction is a critical step in the processing of optical satellite data. Its primary objective is to convert the top-of-atmosphere (TOA) radiance signal, as measured by satellite instruments, into surface reflectance. A key component of this process involves accurately modeling the absorption and scattering of electromagnetic radiation by aerosols before it reaches the satellite sensor. In many optical missions, it is standard practice to assume a fixed “aerosol type” with predefined and constant optical properties or, at most, to use the Angstrom exponent as a continuous variable to characterize the aerosol type. In recent years, atmospheric algorithms based on optimal estimation techniques have been developed. These instrument-agnostic algorithms offer the potential for highly accurate results while enabling uncertainty propagation by incorporating physical principles throughout the data processing chain. In this context, we introduce MAGAC (Magellium Atmospheric Correction), a versatile toolbox for processing hyperspectral satellite data. MAGAC distinguishes itself from other atmospheric correction algorithms through its detailed modeling of aerosols, which considers their optical properties (including the Angstrom exponent, asymmetry parameter, and single scattering albedo) alongside optical thickness. The aerosol optical properties are retrieved using an optimal estimation algorithm constrained by a priori information derived from climatological and meteorological data. This process is further supported by physics-aware machine-learning emulators of atmospheric radiative transfer models. In this presentation, we provide an overview of MAGAC’s aerosol retrieval algorithm and present its performance in the ESA/NASA ACIX-III Land activity. MAGAC was applied to process over 100 datasets from the PRISMA and EnMAP hyperspectral satellite missions and validated against ground-based AERONET stations.
This research demonstrates that MAGAC delivers aerosol optical properties along with associated uncertainties at decameter spatial scales. These capabilities pave the way for advancements in aerosol retrieval and atmospheric correction, facilitating their use as high-level products for operational, national, and commercial imaging spectroscopy missions (e.g., CHIME, EnMAP, PRISMA).
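The optimal-estimation retrieval with prior constraints that MAGAC describes can be sketched, for a linear forward model, as the standard maximum-a-posteriori solution weighted by the prior and measurement-noise covariances. Everything below (the forward matrix, state vector, and covariances) is synthetic and purely illustrative of the technique, not of MAGAC's actual implementation.

```python
import numpy as np

# Linear forward model y = K x + noise: x = aerosol state (e.g. AOT and
# Angstrom exponent), y = TOA measurements. All values are illustrative.
K = np.array([[0.8, 0.1],
              [0.2, 0.9],
              [0.5, 0.4]])
x_true = np.array([0.3, 1.4])
S_e = 0.01 ** 2 * np.eye(3)          # measurement-noise covariance
x_a = np.array([0.2, 1.2])           # a-priori state (e.g. climatology)
S_a = np.diag([0.2 ** 2, 0.5 ** 2])  # a-priori covariance

rng = np.random.default_rng(1)
y = K @ x_true + rng.normal(0.0, 0.01, 3)

# MAP / optimal-estimation solution, with the posterior covariance giving
# the retrieval uncertainty that accompanies the state estimate
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - K @ x_a)
print(x_hat, np.sqrt(np.diag(S_hat)))
```

A nonlinear radiative-transfer forward model would replace the single linear solve with an iterated Gauss-Newton update around the current state.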
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Analysing Spatio-Temporal Estimation of Canopy Nitrogen Content (CNC) Exploiting Different Space-Borne Hyperspectral Data: Developing Sensor Agnostic Models

Authors: Gabriele Candiani, Francesco Nutini, Lorenzo Parigi, Michael Marshall, Mariana Belgiu, Mirco Boschetti
Affiliations: Institute for the Electromagnetic Sensing of Environment - National Research Council of Italy, Faculty of Geo-Information Science and Earth Observation (ITC) - University of Twente
In the pre-industrial period, worldwide food production and consumption grew in parallel. Nowadays, global megatrends (climate change, population growth, technological change) have gradually shifted the supply-demand balance towards insufficient and unsustainable food production, with potentially dramatic environmental and humanitarian consequences (Hank et al., 2019). In this scenario, farmers are forced to increase yields while protecting their most important production factors, such as soil, water and air, from degradation and pollution (Fischer, 2009). In the coming years, new operational hyperspectral missions, such as Copernicus CHIME and NASA SBG, will provide an unprecedented stream of spectroscopic data to support sustainable agriculture and food security, providing value-added information for crop monitoring services. Estimating crop nitrogen (N) content is crucial for sustainable agriculture due to its significant impact on crop productivity and environmental health. Nitrogen is a vital nutrient for plant growth, influencing yield and quality. However, excessive use of nitrogen fertilizers can lead to environmental pollution, including water contamination and greenhouse gas emissions (Lassaletta et al., 2016). Efficient nitrogen management is essential to balance crop needs and minimize environmental impacts, thereby promoting sustainable agricultural practices. Previous methods for the assessment of crop nitrogen status have traditionally relied on its correlation with chlorophyll (Delloye et al., 2018). However, this correlation is significantly influenced by crop type and phenological stage (Guerif et al., 2007), resulting in a loss of predictive capacity as plants transition into the senescence phase.
Furthermore, this approach neglects the non-photosynthetic, protein-related nitrogen content in plants and the nitrogen translocation to grains (Berger, Verrelst, Féret, Wang, et al., 2020). The combination of hyperspectral data and physically based approaches (Féret et al., 2021) employing machine learning regression algorithms can provide the necessary framework for the development of sensor-agnostic solutions for nitrogen retrieval (Berger, Verrelst, Féret, Hank, et al., 2020; Candiani et al., 2022; Tagliabue et al., 2022; Verrelst et al., 2021). In this framework, the development and testing of efficient methods for the retrieval of crop traits are still ongoing, with the objective of assessing their exportability to different contexts using actual spaceborne datasets. In this study, a multi-year dataset collected from 2020 to 2024, which includes hyperspectral images from PRISMA and EnMAP as well as ground data, was employed to train both a machine learning regression algorithm (MLRA) and a hybrid approach optimized through an active learning (HAL) heuristic. These methods, defined for a generic hyperspectral sensor, were tested with the objective of evaluating the retrieval of crop traits, including Leaf Area Index (LAI), chlorophyll content at the canopy scale (CCC), and canopy nitrogen content (CNC). The results showed that the MLRA and HAL approaches can efficiently assess canopy-level traits (R² > 0.70 and rRMSE under 15%), emphasizing the importance of using real data in the model training phase. Maps of crop traits produced from PRISMA and EnMAP images acquired during the same period demonstrate comparable performance and exhibit a strong correlation (R² > 0.85, nRMSE < 0.11), thereby supporting the feasibility of an agnostic algorithm for the synergistic use of multiple hyperspectral sensors.
Finally, the potential of nitrogen estimation from space-borne hyperspectral data was evaluated through the examination of spatio-temporal series of PRISMA and EnMAP CNC products at both the district (i.e., 30 by 30 km) and field levels. The results demonstrate that the temporal trends are consistent with the different varieties cultivated and their respective sowing dates. This behavior is particularly evident at the field level during the ripening phase, in relation to nitrogen translocation to grains. Spatial patterns at the district scale are clearly related to farm management practices, while within-field variability was interpreted in relation to plant-soil interactions. The generated maps demonstrate how the quantification of crop traits is beneficial for assessing crop conditions and growth dynamics. In particular, the CNC maps provide a fundamental input for the assessment of crop nutritional status during the crop season and may provide valuable information for the estimation of final grain protein content.
References:
- Berger, K., Verrelst, J., Féret, J.-B., Wang, Z., Wocher, M., Strathmann, M., Danner, M., Mauser, W., & Hank, T. (2020). Crop nitrogen monitoring: Recent progress and principal developments in the context of imaging spectroscopy missions. Remote Sensing of Environment, 242, 111758. https://doi.org/10.1016/j.rse.2020.111758
- Berger, K., Verrelst, J., Féret, J.-B., Hank, T., Wocher, M., Mauser, W., & Camps-Valls, G. (2020). Retrieval of aboveground crop nitrogen content with a hybrid machine learning method. International Journal of Applied Earth Observation and Geoinformation, 92, 102174. https://doi.org/10.1016/j.jag.2020.102174
- Candiani, G., Tagliabue, G., Panigada, C., Verrelst, J., Picchi, V., Rivera Caicedo, J. P., & Boschetti, M. (2022). Evaluation of Hybrid Models to Estimate Chlorophyll and Nitrogen Content of Maize Crops in the Framework of the Future CHIME Mission. Remote Sensing, 14(8). https://doi.org/10.3390/rs14081792
- Delloye, C., Weiss, M., & Defourny, P. (2018). Retrieval of the canopy chlorophyll content from Sentinel-2 spectral bands to estimate nitrogen uptake in intensive winter wheat cropping systems. Remote Sensing of Environment, 216, 245–261. https://doi.org/10.1016/j.rse.2018.06.037
- Féret, J.-B., Berger, K., de Boissieu, F., & Malenovský, Z. (2021). PROSPECT-PRO for estimating content of nitrogen-containing leaf proteins and other carbon-based constituents. Remote Sensing of Environment, 252, 112173. https://doi.org/10.1016/j.rse.2020.112173
- Fischer, G. (2009). World food and agriculture to 2030/50: how do climate change and bioenergy alter the long-term outlook for food, agriculture and resource availability? In How to Feed the World in 2050: Proceedings of a Technical Meeting of Experts, Rome, Italy, 24-26 June 2009. http://www.fao.org/docrep/012/ak542e/ak542e00.htm
- Guerif, M., Houles, V., & Baret, F. (2007). Remote sensing and detection of nitrogen status in crops: Application to precise nitrogen fertilization. Progress of Information Technology in Agriculture, 593–601.
- Hank, T. B., Berger, K., Bach, H., Clevers, J. G. P. W., Gitelson, A., Zarco-Tejada, P., & Mauser, W. (2019). Spaceborne Imaging Spectroscopy for Sustainable Agriculture: Contributions and Challenges. Surveys in Geophysics, 40(3). https://doi.org/10.1007/s10712-018-9492-0
- Lassaletta, L., Billen, G., Garnier, J., Bouwman, L., Velazquez, E., Mueller, N. D., & Gerber, J. S. (2016). Nitrogen use in the global food system: Past trends and future trajectories of agronomic performance, pollution, trade, and dietary demand. Environmental Research Letters, 11(9). https://doi.org/10.1088/1748-9326/11/9/095007
- Tagliabue, G., Boschetti, M., Bramati, G., Candiani, G., Colombo, R., Nutini, F., Pompilio, L., Rivera-Caicedo, J. P., Rossi, M., Rossini, M., Verrelst, J., & Panigada, C. (2022). Hybrid retrieval of crop traits from multi-temporal PRISMA hyperspectral imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 187, 362–377. https://doi.org/10.1016/j.isprsjprs.2022.03.014
- Verrelst, J., Rivera-Caicedo, J. P., Reyes-Muñoz, P., Morata, M., Amin, E., Tagliabue, G., Panigada, C., Hank, T., & Berger, K. (2021). Mapping landscape canopy nitrogen content from space using PRISMA data. ISPRS Journal of Photogrammetry and Remote Sensing, 178, 382–395. https://doi.org/10.1016/j.isprsjprs.2021.06.017
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Integration of Plant Trait and Spectroscopy Data for Calibration and Validation of CHIME Vegetation Prototype Products.

Authors: Lucie Homolová, Petr Lukeš, Miroslav Pikl, Marian Švik, Vojtěch Bárta, Zuzana Lhotáková, Jan Hanuš, Lucie Kupková, Lucie Červená, Miguel Morata, Jose Luis Garcia Soria, Jochem Verrelst
Affiliations: Global Change Research Institute (CzechGlobe), Department of Geography, Masaryk University, Department of Experimental Plant Biology, Charles University, Department of Applied Geoinformatics and Cartography, Charles University, Image Processing Laboratory, University of Valencia
The development of higher-level vegetation products from spaceborne imaging spectroscopy data requires high-quality in situ reference measurements to support accurate product calibration and validation. For the CHIME mission, the following high-priority prototype products (L2B) have been defined: leaf nitrogen content, leaf water content, leaf mass per area, canopy nitrogen content, and canopy water content. However, imaging spectroscopy allows the retrieval of many more plant traits than those currently selected for CHIME, such as nutrients, photosynthetic pigments, cellulose, lignin, etc. Field measurements of these leaf and canopy traits do not yet follow any standardised (fiducial reference) protocols, and the availability of field reference data is not yet well coordinated globally. Global initiatives to collect plant traits include the TRY and EcoSIS databases and observation networks such as NEON, ICOS, TERN, ICP Forests, etc. The observation networks measure plant traits in a standardized way within each network, but how the measurement protocols differ between networks has been explored less. We compared protocols and variability in measured trait data between the American NEON and European ICOS networks. Another source of potentially suitable cal/val data for the CHIME vegetation products are individual ground campaigns carried out by different research teams. These can be relatively well synchronised with airborne hyperspectral campaigns or with EnMAP and PRISMA acquisitions, which have the same spatial resolution as CHIME and are therefore well suited for product development, testing and validation. This study presents a dataset of in situ measured forest biochemical and structural traits combined with plot-averaged hyperspectral signatures derived from airborne or spaceborne imaging spectroscopy. Measured traits include leaf chlorophyll, carotenoids, nitrogen, cellulose, protein, water content, leaf mass per area, and leaf area index.
The data were collected during several campaigns in the Czech Republic, conducted in temperate forests dominated mainly by Norway spruce and European beech. We tested this dataset for retrieving the forest traits using the algorithms currently developed for the CHIME vegetation products, which are based on the SCOPE radiative transfer model and machine learning methods.
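The hybrid retrieval scheme mentioned above (radiative-transfer simulations used to train a regression model, which is then applied to observed spectra) can be sketched with a toy forward model standing in for SCOPE and a ridge regressor standing in for the machine-learning step. The forward model, band count, and trait values below are all invented; the real pipeline uses full RTM simulations and more capable regressors.

```python
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(400.0, 2500.0, 50)           # nm, illustrative band centres

def toy_forward_model(trait, wavelengths):
    """Stand-in for an RTM such as SCOPE: reflectance as a simple function
    of a single canopy trait (e.g. CNC). Linear in the trait for clarity."""
    return 0.05 + 0.1 * np.outer(trait, np.exp(-wavelengths / 1000.0))

# 1) Simulate a training set of (trait, spectrum) pairs with sensor noise
traits = rng.uniform(0.5, 3.0, 200)
spectra = toy_forward_model(traits, wl) + rng.normal(0.0, 1e-3, (200, wl.size))

# 2) Train a regressor from spectra to trait (ridge regression stand-in)
X = np.hstack([spectra, np.ones((200, 1))])   # add bias column
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ traits)

# 3) Invert an unseen "observed" spectrum
obs = toy_forward_model(np.array([1.7]), wl)
pred = (np.hstack([obs, np.ones((1, 1))]) @ w).item()
print(round(pred, 2))
```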
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: A Physics-Informed Neural Networks hyperspectral unmixing method for ASI PRISMA data

Authors: Matteo Picchiani, Luigi Ansalone, Giorgio Licciardi, Francesco Longo, Vincenzo Pulcino, Roberto Luciani, Giovanni Paolo Blasone
Affiliations: Italian Space Agency
Hyperspectral unmixing aims to decompose the mixed pixels of a hyperspectral image into their constituent pure materials (endmembers) and their fractional abundances. Traditionally, this task has relied on linear or statistical models, often facing challenges like mixed endmembers, limited training data, and computational complexity. These algorithms exploit the inherent dimensionality of the data, often employing linear and non-linear techniques such as Singular Value Decomposition (SVD), Minimum Noise Fraction (MNF) and Independent Component Analysis (ICA) to identify the most significant spectral components. By analyzing these variations, the applied algorithm can isolate the purest spectral signatures, representing potential endmembers. However, the emergence of deep learning has ushered in a new era for hyperspectral unmixing, offering remarkable capabilities and overcoming these limitations. With their ability to handle complex non-linearities and spatial information, deep learning methods, such as autoencoders and convolutional neural networks, offer improved accuracy and robustness compared to statistical methods. Despite the remarkable advancements, deep learning-based unmixing methods face certain challenges. The need for large amounts of labeled training data can be a bottleneck, especially in scenarios with a high number of endmembers. Additionally, interpreting the complex inner workings of deep neural networks to understand their decision-making processes remains an ongoing research area. Regardless of the technique adopted, the success of spectral unmixing hinges on the accuracy and representativeness of the chosen endmembers. This is where the Italian Space Agency's (ASI) PRISMA (PRecursore IperSpettrale della Missione Applicativa) mission plays a role. Launched in 2019, PRISMA boasts a state-of-the-art hyperspectral sensor with a high spectral resolution, capturing data across a wide range of wavelengths.
This enhanced spectral detail offers significant advantages for endmember extraction. Moreover, the combination of ASI's large PRISMA data catalogue with the available hyperspectral library allows the exploitation of a semi-supervised approach based on Physics-Informed Neural Networks (PINNs) and expert systems, which rely on the context information of each processed pixel. The high spectral resolution of PRISMA data is particularly beneficial for the proposed automated endmember extraction techniques. The finer detail in the spectra allows the PINN algorithm to better differentiate between subtle spectral variations, leading to the identification of more accurate and representative endmembers. Accurate endmember identification is critical for various applications using PRISMA data, such as material discrimination, forest species classification, environmental monitoring and precision agriculture.
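The PINN approach itself is not spelled out in the abstract, but the classical linear mixing model it builds on can be sketched in a few lines: a pixel is modelled as an abundance-weighted sum of endmember spectra, and the abundances are recovered by least squares. The endmember spectra and abundances below are invented; operational unmixing would additionally enforce non-negativity and sum-to-one constraints during the solve rather than after it.

```python
import numpy as np

# Two hypothetical endmember spectra (rows) over 5 bands
E = np.array([[0.10, 0.20, 0.40, 0.60, 0.70],   # e.g. a vegetation-like signature
              [0.30, 0.30, 0.30, 0.35, 0.40]])  # e.g. a soil-like signature

a_true = np.array([0.7, 0.3])                   # true fractional abundances
pixel = a_true @ E                              # linear mixing model

# Unconstrained least squares, then clip and renormalise so the estimated
# abundances are non-negative and sum to one
a_hat, *_ = np.linalg.lstsq(E.T, pixel, rcond=None)
a_hat = np.clip(a_hat, 0.0, None)
a_hat /= a_hat.sum()
print(np.round(a_hat, 3))
```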
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Enhancing methane detection through hyperspectral imaging and machine learning algorithms for CHIME mission preparatory activities

Authors: Sara Liburdi, Alessandra Nguyen Xuan, Emiliana Valentini, Simone Mancon, Luca Ferrari, Ruben Berteletti, Andrea Taramelli
Affiliations: University Institute for Advanced Study of Pavia (IUSS), Institute for Environmental Protection and Research (ISPRA), Institute of Polar Sciences of the Italian National Research Council (ISP CNR), ARESYS s.r.l.
CHIME, the upcoming Copernicus Hyperspectral Imaging Mission for the Environment, will make a notable impact on environmental monitoring, especially in methane detection. Methane is a potent greenhouse gas that contributes significantly to global warming. Its emissions originate from multiple sources (pipelines, industrial sites, landfills, oil and gas extraction, and agricultural activities) and pose a critical environmental challenge due to their widespread and often undetected nature. Mapping these emissions is critical for environmental monitoring and sustainability, guiding climate action and meeting regulatory targets. This research addresses these challenges by developing advanced machine learning (ML) algorithms, specifically neural networks, that leverage the unique strengths of hyperspectral imaging for methane detection. Thanks to the spectral resolution of hyperspectral data, with hundreds of contiguous spectral bands, it is possible to capture the fine absorption features of methane at key wavelengths, particularly around 1600 nm and 2400 nm, where methane exhibits distinct spectral behavior. These advanced algorithms are designed to process and analyze the vast, high-dimensional datasets provided by hyperspectral sensors, distinguishing methane absorption features from background noise, such as water vapor or ground reflectance variability. The models are optimized to enhance sensitivity, minimize false positives, and adapt to the complexity of diverse environments and emission sources. Hyperspectral data from existing missions such as PRISMA, EnMAP, and EMIT will be used to train neural networks capable of identifying the unique spectral signatures of methane. Neural networks are particularly effective here because they can process large amounts of data and identify subtle spectral patterns that indicate the presence of methane, even in complex and variable environments.
This capability enables high sensitivity in detecting methane leaks from both concentrated sources, such as industrial facilities, and diffuse emissions across heterogeneous landscapes. The study will focus on a test area characterized by on-ground pipelines, providing a representative environment to evaluate the algorithm's performance in detecting methane emissions from critical infrastructure. The final products derived from this work include maps detailing the morphology of methane plumes and spatially explicit variations in methane concentration, integrated with key climatic components, including wind speed and direction from datasets such as ERA5 (ECMWF). By incorporating these atmospheric variables, the results provide a deeper understanding of methane dispersion dynamics, the influence of environmental conditions on plume behavior, and the identification of potential emission sources. This work demonstrates the potential of neural networks in methane mapping, providing an approach that directly supports CHIME's mission to advance EU policies such as the European Green Deal and the Paris Agreement. By refining detection methods for one of the most potent greenhouse gases, this research offers decision-makers detailed and actionable data on methane emissions, contributing to global climate resilience and sustainable resource management. In addition, the methodology enables the rapid detection of methane leaks along pipelines, ensuring timely intervention and reduced environmental impact.
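A common classical baseline for this kind of plume detection (distinct from the neural-network approach described above) is the matched filter; the sketch below applies one to synthetic spectra with an injected absorption dip, all data being made-up placeholders rather than CHIME/PRISMA products:

```python
# Sketch of a classical matched filter for methane enhancement, as
# commonly applied to hyperspectral data around CH4 absorption windows.
import numpy as np

def matched_filter(cube, target):
    """Per-pixel matched-filter score against a target signature.

    cube: (pixels, bands) radiance/reflectance matrix
    target: (bands,) CH4-like absorption signature
    """
    mu = cube.mean(axis=0)
    cov = np.cov(cube, rowvar=False)
    cov_inv = np.linalg.pinv(cov)
    centered = cube - mu
    num = centered @ cov_inv @ target
    den = target @ cov_inv @ target
    return num / den

rng = np.random.default_rng(0)
bands = 30
background = rng.normal(1.0, 0.01, size=(500, bands))
t = np.zeros(bands)
t[20:25] = -0.05                 # synthetic absorption dip (not a real CH4 spectrum)
plume = background.copy()
plume[:10] += t                  # inject the "plume" signature into 10 pixels
scores = matched_filter(plume, t)
print(scores[:10].mean() > scores[10:].mean())  # plume pixels score higher
```

The neural networks described in the abstract aim to improve on exactly the weaknesses of this baseline: sensitivity to background variability and confusion with water vapour features.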
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: SBG VSWIR Terrestrial Vegetation Database and Algorithm Development

Authors: K. Dana Chadwick, Christiana Ade, Yoseline Angel, Dhruva Kathuria, Evan Lang, Ting Zheng, Philip Brodrick, Petya Campbell, Fred Huemmrich, Kyle Kovach, Shawn Serbin, Alexey Shiklomanov, Philip
Affiliations: NASA Jet Propulsion Laboratory, California Institute Of Technology
NASA's Surface Biology and Geology (SBG) Mission's Visible to Shortwave Infrared (VSWIR) instrument is planned to produce foliar trait demonstration products across the globe at 30 m spatial resolution and a 16-day repeat. The VSWIR terrestrial vegetation algorithm team is working to aggregate training datasets that will allow for stress-testing algorithm strategies and planning for the desired interoperability of SBG VSWIR data products with other international sensor platforms, including ESA's CHIME mission. This work combines existing datasets with new campaigns that link in situ measurements to airborne spectral data and explores scaling algorithms with orbital precursors like NASA's EMIT mission. Here, we present an update on these efforts. Central to this initiative is the creation of SBG PLANTS, an open-source database integrating airborne and in situ datasets from the radiance level. This approach will allow the team to address inconsistencies caused by varying atmospheric correction methods across platforms. Supported by investments from the SBG mission, the U.S. National Science Foundation, and university-led initiatives, this project advances trait prediction algorithms and establishes a collaborative framework for data sharing among researchers worldwide. Early development has utilized data and dataset structure from the SBG High Frequency Timeseries (SHIFT) campaign to shape the database design. Planned contributions from the EnSpec lab and the National Ecological Observatory Network will ensure compatibility with established collection frameworks, enabling integration of diverse datasets. A key focus is fostering early community involvement, particularly for researchers new to the field, by supporting their data collection efforts.
Multiple campaigns being undertaken in 2025 will serve as beta tests for this collaborative approach, refining strategies for integrating diverse data sources and coordinating with field efforts being undertaken by multiple agencies and organizations. This effort aims to promote transparency and community engagement, and to enhance the accuracy and capability of the SBG VSWIR vegetation algorithms as the mission progresses.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Phytoplankton Taxa Identification in Lakes Using Hyperspectral Airborne Imagery

Authors: Loé Maire, Prof. Dr. Alexander Damm, Dr. Daniel Odermatt
Affiliations: Eawag, Swiss Federal Institute of Aquatic Science & Technology, Surface Waters – Research and Management, Department of Geography, University of Zurich
Phytoplankton is essential to lake ecosystems. It serves as the foundation of the food chain for higher organisms and plays a significant role in biogeochemical cycles. The composition and condition of phytoplankton species are key indicators of an ecosystem's resilience. Effective and reliable monitoring of phytoplankton taxonomic groups is crucial to understanding how lake ecosystems will respond to climate change. The differentiation of marine plankton taxa using optical satellite sensors has become firmly established in recent years. Recent and upcoming hyperspectral satellite missions further increase the potential for such applications in optically complex surface waters. However, robust and scalable retrieval methods for such ecosystems are still largely limited to the identification of cyanobacteria. The content of phytoplankton species, non-algal particles or dissolved organic matter in water influences its spectral reflectance, Rrs. Using in situ, airborne or spaceborne measurements of Rrs, we can thus analyse such constituents' abundance. Two main types of algorithms are used for such retrievals: empirical and analytical. The use of empirical algorithms prevailed until recently, but the increasing availability of hyperspectral data promotes the use of spectral matching algorithms and analytical forward models. We used the free radiative transfer model Water Colour Simulator (WASI) to perform inversion modelling for five phytoplankton absorption endmembers from the literature, which correspond to four taxa occurring in Greifensee, a eutrophic perialpine lake in Switzerland. The abundance of green algae, cryptophytes, cyanobacteria, and diatoms was retrieved using automated hyperspectral in situ measurements acquired daily by a WISPstation on a research platform in Greifensee. We used hyperspectral in situ Rrs from two periods of several months during which several consecutive blooms occurred.
The performance of this inversion was tested with reference to images acquired by a phytoplankton camera installed at the same location. The Aquascope phytoplankton camera, immersed at 3 m depth, automatically acquires hourly photos of aquatic particles. Around 100 phytoplankton taxa are classified in these images using machine learning algorithms. These phytoplankton abundance data served first to identify the main phytoplankton blooms occurring in Greifensee in the past five years, and second as validation data for the WASI inversion modelling results. We showed that WASI inversion modelling performs consistently even under changing illumination and cloudiness. In the summer of 2024, we acquired airborne images of Greifensee with the AVIRIS-4 sensor. AVIRIS-4 is a hyperspectral imaging spectrometer developed through the collaboration between NASA and the University of Zurich. It captures 224 spectral bands from 400 to 2500 nm. Simultaneously, we conducted in situ measurements at five locations across Greifensee. Rrs was measured at these locations and while cruising with a set of TriOS RAMSES sensors. A probe was used to measure inherent optical properties (IOPs) such as absorption, scattering and attenuation. We counted and classified phytoplankton species from water samples under the microscope to obtain a phytoplankton abundance database for different locations and depths. Using the airborne Rrs dataset and applying a similar workflow as introduced above with WASI inversion modelling, we can obtain the phytoplankton abundance in Greifensee at different locations on that day. The abundance database obtained from the microscope counting serves as validation data. This study aims to show the ability to retrieve phytoplankton taxa from hyperspectral airborne sensors using previous knowledge of relevant taxa and generic phytoplankton absorption spectra.
We will discuss a potential upscaling of this method to Earth observation data from the PACE or CHIME missions, using known spectral libraries and either known species compositions or, possibly, more candidate spectra.
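The inversion idea can be illustrated with a heavily simplified, synthetic sketch: a Gordon-type reflectance approximation with made-up absorption endmembers (this is not the actual WASI model, whose forward model and endmember spectra are far richer):

```python
# Sketch of forward-model inversion for phytoplankton endmember
# concentrations: Rrs ≈ f * bb / (a + bb), fitted by least squares.
# Endmember absorption spectra and optical constants are synthetic.
import numpy as np
from scipy.optimize import least_squares

wl = np.linspace(400, 750, 80)
a_water = 0.01 + 0.0001 * (wl - 400)          # placeholder water absorption
endmembers = np.stack([
    np.exp(-((wl - 440) / 30) ** 2),          # "green algae"-like absorption peak
    np.exp(-((wl - 560) / 25) ** 2),          # "cyanobacteria"-like absorption peak
])
bb = 0.005                                     # constant backscatter (toy value)

def forward(conc):
    """Simplified Rrs forward model for given endmember concentrations."""
    a = a_water + conc @ endmembers
    return 0.33 * bb / (a + bb)

truth = np.array([0.4, 0.1])
rrs_obs = forward(truth)                       # synthetic "observed" spectrum
fit = least_squares(lambda c: forward(c) - rrs_obs,
                    x0=[0.1, 0.1], bounds=(0, np.inf))
print(fit.x)  # recovers ≈ [0.4, 0.1]
```

The real workflow adds sun/sky glint, illumination geometry and many more water constituents, which is why a dedicated tool such as WASI is used rather than a two-parameter fit.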
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Exploring the use of PRISMA hyperspectral data to detect areas vulnerable to land degradation in Mediterranean forests

Authors: Simone Pascucci, Rosa Coluzzi, Vito Imbrenda, Maria Lanfredi, Giovanni Laneve, Saham Mirzaei, Dr. Stefano Pignatti, Angelo Palombo, Federico Santini, Tiziana Simoniello, Rajesh Vanguri, Luigi D'Amato, Maria Francesca Carfora, Italia Defeis, Diana Carolina Fonnegra Mora
Affiliations: CNR IMAA, Scuola Ingegneria Aeronautica (SIA), University of Rome "LA SAPIENZA",, Italian Space Agency (ASI), Istituto per le Applicazioni del Calcolo (IAC)
Land degradation is a serious concern for Mediterranean forests affected by frequent drought and heatwaves. Especially in coastal areas, the synergistic effect of climate conditions and anthropic pressure can reduce forest resilience, thereby triggering degradation and depleting the local natural resources. Developing advanced remote sensing tools to rapidly and accurately identify emerging degradation can be crucial for designing successful monitoring and intervention strategies. In the context of the project "SAPP4VU: Development of Prototype PRISMA Algorithms for Estimating Environmental Damage and Vulnerability to Land Degradation", financed by the Italian Space Agency (ASI), we are investigating the use of PRISMA hyperspectral satellite data to detect early warnings of land degradation, particularly those related to forest health, by analyzing spectral indices and vegetation functional traits through the Random Forest (RF) algorithm. The study focuses on coastal forests of the Metaponto Plain in Italy (Lat. 40° 22.241'N, Long. 16° 48.811'E), which is a tourism area and a specialized agricultural district. Frequent wildfires are the main problem that can trigger land degradation. Moreover, land use/cover change and tourism pressure cause serious problems for forest conservation in this area. For these reasons, in this forest we selected an area affected by repeated wildfires in the recent past (https://rsdi.regione.basilicata.it/) and used it to train the RF with the aim of identifying areas with similar properties by analyzing a single summer image (25-07-2023). The dominant species of the study area are pine (Pinus pinea), eucalyptus (Eucalyptus tereticornis) and acacia (Acacia sativa). L2D PRISMA images were downloaded from the ASI catalogue and then co-registered using the AROSICS algorithm, which uses Sentinel-2 images acquired at the closest date as a reference to ensure accurate co-registration (RMS of ~0.5 pixel).
Then, to retrieve the biophysical parameters (e.g., Leaf Area Index, chlorophyll and FAPAR) of the forest using PRISMA images, a hybrid model combining radiative transfer models (RTM) and machine learning (ML) was used. The INvertible Forest Reflectance Model (INFORM) embedded in the ARTMO GUI toolbox was used to generate the Lookup Table (LUT). For the generation of the LUT (about 20,000 spectra), we simulated the observation geometry corresponding to possible PRISMA acquisitions over the entire territory of Italy, considering the sun illumination angles for all the days of the year. The capability of the Partial Least Squares Regression (PLSR) method was assessed for retrieving the forest biophysical parameters. To validate the hybrid model outputs, we performed two field campaigns on November 11-14, 2023, and May 29-30, 2024. The in-situ data were gathered according to an Elementary Sampling Units (ESU) scheme that accounts for the spatial resolution of the PRISMA sensor by carrying out measurements in 30 by 30 m squares, including 5 transects each 10 m long. For the Leaf Area Index (LAI), the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), and the Fraction of Vegetation Cover (FCOVER), the LAI-2200 instrument was used. Concurrently with the LAI measurements, LCC values were acquired by the Dualex sensor and leaf disks were collected for laboratory analyses, where the weights of powdered leaves were measured after drying. The preliminary results show a good performance of the hybrid model (INFORM-PLSR) in retrieving LCC (RMSE = 0.37 µg/cm²), LAI (RMSE = 0.32 m²/m²), FAPAR (RMSE = 0.097) and FCOVER (RMSE = 0.012) using the PRISMA L2D data as input. Afterward, health and functional trait indices consisting of LAI, anthocyanins, chlorophyll, and carotenoids, which offer insights into plant physiology, health, and environmental responses, were used as input for vulnerability mapping.
In addition, we included water indices (the Moisture Stress Index and the Disease-Water Stress Index), capturing sensitivity to water stress and disease-related symptoms, and the Normalized Difference Vegetation Index (NDVI). Although additional analyses are still required, the results obtained by classifying these nine variables are already very satisfactory. The RF identifies areas affected by fires and zones affected by saltwater intrusion with a 0.07 out-of-bag error. Independently of the source of land degradation, the nine selected variables seem to capture the spectral response of stressed and/or sparse vegetation, which is typical of incoming degradation. The maximum value of the out-of-bag feature importance is obtained for the Moisture Stress Index, thereby highlighting the pivotal role of water scarcity in driving the patterns of degradation. This finding underlines the crucial role of local climates, which are Mediterranean and semi-arid in the study area. Summer drought can favor wildfires in stressed forests, thus further fuelling land degradation. In this context, effective remote sensing techniques able to assess the vegetation status are fundamental proactive tools to address Sustainable Development Goal 15, which aims to "protect, restore, and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss".
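The classification step above can be sketched with scikit-learn's Random Forest and its out-of-bag error, using synthetic stand-ins for the nine input variables (the strong separation along a moisture-stress-like variable below is an illustrative assumption, not the SAPP4VU data):

```python
# Sketch of RF classification with out-of-bag error on synthetic
# "healthy" vs "degraded" samples with nine variables, where the
# first variable plays the role of a Moisture Stress Index.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 600
X_healthy = rng.normal(0.0, 1.0, size=(n, 9))
X_degraded = rng.normal(0.0, 1.0, size=(n, 9))
X_degraded[:, 0] += 3.0          # degraded pixels shifted along the MSI-like axis
X = np.vstack([X_healthy, X_degraded])
y = np.array([0] * n + [1] * n)

rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                            random_state=0).fit(X, y)
oob_error = 1.0 - rf.oob_score_
print(round(oob_error, 3))                        # low OOB error
print(int(np.argmax(rf.feature_importances_)))    # → 0 (the MSI-like variable)
```

The out-of-bag error gives a validation estimate without a held-out set, and the feature importances provide the same kind of ranking that singled out the Moisture Stress Index in the study.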
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Leveraging PRISMA and EnMAP 2020-2024 Time Series for Improving Agricultural Soil Properties Retrieval

Authors: Francesco Rossi, Luca Marrone, Saham Mirzaei, Dr. Simone Pascucci, Khalil Misbah, Giovanni Laneve, Raffaele Casa, Dr. Stefano Pignatti, Alessia Tricomi
Affiliations: Institute Of Methodologies For Environmental Analysis IMAA-CNR, Department of Agriculture and Forestry Sciences (DAFNE), University of Tuscia, College of Agriculture And Environmental Sciences, Scuola Ingegneria Aeronautica (SIA), University of Rome "LA SAPIENZA", E-GEOS S.P.A
Soil is a vital natural resource that is finite, non-renewable and irreplaceable. The available scientific evidence suggests that soil degradation is an ongoing and worsening phenomenon. A comprehensive understanding of the characteristics of agricultural topsoil is essential for optimising food production and improving the overall efficiency of agroecosystems. Remote sensing techniques are increasingly being employed to gather critical information on topsoil properties. The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) will be an Earth observation satellite of the European Space Agency (ESA). It is expected to map large areas, with regular and systematic data acquisition every 14 days, delivering images containing more than 200 bands in the 400 – 2500 nm wavelength range. This creates new opportunities for using time-series data, which until now have only been sporadically available thanks to the launch of recent VNIR-SWIR hyperspectral satellites with open data policies, such as PRISMA and EnMAP. The retrieval of soil properties from spaceborne sensors is challenging due to the presence of multiple endmembers within a single pixel (spectral mixing) and the influence of external factors. Soil moisture (SM), photosynthetic vegetation (PV), and non-photosynthetic vegetation (NPV) significantly affect soil spectral reflectance. These are among the most important external parameters influencing the estimation of soil characteristics due to their highly dynamic nature. The present study aims to retrieve Soil Organic Carbon (SOC), CaCO₃ and topsoil texture properties using machine learning (ML) while minimising SM, PV and NPV effects with the External Parameter Orthogonalization (EPO) algorithm applied to the hyperspectral time series acquired over two agricultural farms in Italy between 2020 and 2024.
Field data were collected at the Italian sites of Jolanda and Maccarese, two of the largest agricultural enterprises in the country, renowned for their significant contributions to production and innovation. The Jolanda di Savoia farm is in north-eastern Italy (latitude 44.87°N, longitude 11.97°E). The area is alluvial and deltaic, formed by the Po river and the Apennines. The substrate consists of recent calcareous, medium to fine textured sediment and moderately decomposed organic material. The soil is highly variable, due to the presence of buried paleo-channels from several drainage activities of the palustrine environment carried out since the end of the 18th century. The Maccarese farm (41°52′N, 12°13′E, alt. 8 m a.s.l.), located near Rome, Central Italy, spans over 3,200 hectares and is renowned for its advanced farming techniques and sustainable practices, particularly in dairy farming, crop cultivation, and environmental conservation. The soil is a Cutanic Luvisol, with a prevailing sandy clay loam texture, becoming more clayey toward the north-east of the site. Between October 2019 and August 2024, a total of 9 EnMAP images (from Jolanda) and 39 PRISMA images (20 from Jolanda and 19 from Maccarese) were acquired and used in the time series analysis. At the same time, topsoil samples were collected from 103 agricultural fields (60 at Jolanda and 43 at Maccarese) and analysed in the laboratory to determine the SOC and the soil fractions (sand, silt and clay) of each sample. In addition, a local Soil Spectral Library (SSL) was established using all samples collected from the study areas following the standard protocol defined by the IEEE P4005 working group. Moreover, spectral reflectance was measured in the laboratory by varying SM levels. This SSL was used to train the EPO_SM algorithm, which aims to minimize the SM effects on soil reflectance.
Meanwhile, the time series-derived spectra were first clustered using the Normalized Difference Vegetation Index (NDVI) and the normalized Cellulose Absorption Index (nCAI), and then used to train the EPO_PV and EPO_NPV algorithms, respectively. Various pre-treatment techniques (e.g., spectral smoothing, transformation, and normalisation methods) were applied to both the reflectance dataset (used as a reference) and the EPO-projected reflectances from the time series. Additionally, an approach for performing "dryification" has been developed and is under assessment. A deep convolutional autoencoder was trained to extract the corresponding dry spectrum from a wet one. The model has been trained on the local SSL, with additional samples generated by inverting MARMIT-2, a multilayer radiative transfer model of soil reflectance, which represents a wet soil as a dry soil covered with a thin layer of liquid water. The dataset consists of pairs of wet and dry reflectance spectra, resampled to match the PRISMA/EnMAP band configurations. Concerning the architecture, the autoencoder includes blocks of convolutional layers, batch normalization, and ReLU activation functions. Downsampling is achieved through average pooling, while upsampling is done using transposed convolutions. The model is then applied to PRISMA/EnMAP images to mitigate SM effects. Partial Least Squares Regression (PLSR) and three advanced ML algorithms, namely Random Forest (RF), Support Vector Regression (SVR), and Gaussian Process Regression (GPR), were applied to the reflectance and the EPO-corrected reflectance to retrieve the topsoil SOC, CaCO₃, clay, silt and sand content. A K-fold (k=10) cross-validation approach was used for validation. In this context, different techniques for using time series of data were compared.
Preliminary results show that for EnMAP the RMSE for SOC was 1.13% for the reference dataset and improved to 1.06%, 1.037% and 1.045% after the application of EPO_SM, EPO_PV and EPO_NPV, respectively. For PRISMA, the RMSE for SOC was likewise 1.13% for the reference dataset and improved to 1.06%, 1.038% and 1.045% after the application of EPO_SM, EPO_PV and EPO_NPV, respectively. The results demonstrate that the accuracy of SOC and clay, silt and sand estimation is significantly enhanced by applying EPO transformations to multitemporal reflectance data. This suggests that time series of hyperspectral images can serve as valuable baseline information for precision agriculture and climate-smart farming. While EPO methods require extensive time series for training, "dryification" relies on the assumption of transferability from the SSL to satellite data. Results of "dryification" for SOC will then be compared to those obtained using EPOs, in order to analyse benefits and limitations. Future research should focus on leveraging physically based models. In particular, efforts should be directed toward addressing critical challenges associated with the use of models like MARMIT-2, such as the integration of the inherent noise of satellite hyperspectral data. Expanding the autoencoder training dataset to include both real and modelled reflectance spectra could be essential for enhancing the robustness of the analysis. Alternatively, the PROSAIL model can be used to generate synthetic data for training a general EPO, with an emphasis on minimizing the effects of PV and NPV. To this end, evaluating the integration of TIR missions such as LSTM, SBG-TIR, and TRISHNA is necessary, as their inclusion could improve NPV separation and spectral emissivity estimation, ultimately refining the estimation of topsoil mineralogical composition.
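The EPO step itself can be sketched in a few lines: an orthogonal projection that removes the subspace spanned by the wet-minus-dry difference spectra. The paired spectra below are synthetic stand-ins for the laboratory SSL measurements:

```python
# Sketch of External Parameter Orthogonalization (EPO): project out
# the spectral directions associated with an external factor (here,
# soil moisture), estimated from paired wet/dry spectra. Synthetic data.
import numpy as np

def epo_projection(wet, dry, n_components=2):
    """Return P such that X @ P suppresses moisture-related variation.

    wet, dry: (samples, bands) paired spectra under wet/dry conditions
    """
    D = wet - dry                                # external-effect matrix
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    V = vt[:n_components].T                      # dominant moisture directions
    return np.eye(wet.shape[1]) - V @ V.T

rng = np.random.default_rng(1)
bands, n = 50, 40
dry = rng.normal(0.3, 0.02, size=(n, bands))
moisture_dir = np.linspace(0, 1, bands)          # synthetic water-darkening shape
wet = dry - rng.uniform(0.05, 0.2, size=(n, 1)) * moisture_dir

P = epo_projection(wet, dry, n_components=1)
resid = (wet - dry) @ P
print(np.abs(resid).mean() < 0.01)               # moisture effect largely removed
```

After projection, the PLSR/RF/SVR/GPR regressions operate on spectra largely free of the external factor, which is the mechanism behind the RMSE improvements reported above.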
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: CORATHYP: An Atmospheric Correction Tool With Enhanced Atmospheric Characterization From Retrieval Techniques, ECMWF And CAMS Data - Application To SENTINEL-2 and ENMAP

Authors: Xavier Lenot, Thierry Erudel, Bruno Lafrance, Sophie Coustance, Camille Desjardins, Damien Rodat, Aimé Meygret
Affiliations: CS Group, CNES
Hyperspectral imaging from space is experiencing rapid growth, driven by the deployment of recent satellite missions such as PRISMA and ENMAP. Numerous future missions are planned, aiming to enhance spatial coverage, revisit frequency, and spatial resolution. These include CHIME, SBG, and the upcoming French hyperspectral mission. In this rapidly evolving context, the CORATHYP atmospheric correction code has been developed jointly by CNES and CS Group in order to extract hyperspectral surface reflectances from satellite measurements. The CORATHYP algorithm enables the processing of both multispectral and hyperspectral images by applying topographic corrections and accounting for adjacency effects. Within the CORATHYP algorithm, the initial step involves the initialization of the atmospheric composition using exogenous data. Specifically, surface pressure, ozone concentration, and relative humidity are sourced from the ECMWF portal, while the CAMS Copernicus service is employed to retrieve the Aerosol Optical Thickness (AOT) at 550 nm for seven aerosol types: organic matter, black carbon, dust, sulfates, sea salt, ammonium, and nitrates. The AOT ratios are used to establish the proportions of each aerosol type within the model, with the optical properties of the particles adjusted based on relative humidity. Subsequently, the water vapor content is retrieved using the APDA technique, which has been optimized for processing over water bodies through an iterative approach. When vegetation is present in the scene, the total Aerosol Optical Thickness is determined using the MAJA Dark Dense Vegetation algorithm; otherwise, the CAMS total AOT is applied. The final algorithm step involves the iterative calculation of surface reflectance. First, an initial retrieval is performed under the homogeneous terrain assumption.
Next, three terms describing adjacency effects are calculated: the irradiance coupling between the Earth and the atmosphere, the irradiance reflected from the surrounding relief, and the upwelling diffuse radiance. The final reflectance is obtained by accounting for scene heterogeneity. CORATHYP has been validated through the processing of SENTINEL-2 and ENMAP images. Results have been compared against reference Level 2 processors (MAJA for SENTINEL-2 and PACO for ENMAP) as well as in-situ measurements from the Gobabeb and La Crau calibration sites operated by CNES (ROSAS). This presentation will give an overall description of the CORATHYP algorithm and the results obtained during the validation process.
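The homogeneous-terrain first step can be sketched with the standard Lambertian coupling formula; the atmospheric terms below are illustrative constants, not CORATHYP outputs:

```python
# Sketch of the homogeneous-terrain surface reflectance inversion:
# invert rho_toa = rho_atm + T * rho_s / (1 - S * rho_s) for rho_s.
# rho_atm (path reflectance), T (total transmittance) and S (spherical
# albedo) would come from a radiative transfer code; here they are toy values.

def invert_homogeneous(rho_toa, rho_atm, T, S):
    """Surface reflectance under the homogeneous, Lambertian assumption."""
    y = (rho_toa - rho_atm) / T
    return y / (1.0 + S * y)

# Round trip: forward-model a known surface, then invert it back.
rho_s, rho_atm, T, S = 0.25, 0.05, 0.8, 0.15
rho_toa = rho_atm + T * rho_s / (1.0 - S * rho_s)
print(round(invert_homogeneous(rho_toa, rho_atm, T, S), 6))  # → 0.25
```

The adjacency terms described above then correct this first estimate for the fact that neighbouring pixels contribute scattered radiance, so the homogeneous assumption only holds approximately.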
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Soil Spectral Library for Soil Mapping

Authors: Jonti Shepherd, Prof. Dr. Eyal Ben-Dor, Or Amir, Ori Kanner, Dr. Nicolos Francos, Bar Efratti
Affiliations: Tel Aviv University
Soil Spectral Libraries (SSLs) are curated collections of soil samples systematically gathered from the field, analyzed in laboratories for various soil attributes, and cataloged with comprehensive metadata such as GPS coordinates and climate information. These samples undergo detailed chemical and spectral analyses, forming a robust foundation for soil mapping and monitoring applications via AI modeling. A critical concern for SSLs is the potential impact of prolonged storage on the stability of sample properties—both chemically and spectrally. This is especially relevant as SSLs are frequently updated with new samples or utilized to monitor soil attributes over time. To ensure their utility in remote sensing-based soil mapping, maintaining the integrity of SSL data over time is essential. Environmental factors at storage facilities could potentially alter stored samples, necessitating periodic re-measurements to uphold the accuracy and consistency of the library. The central question is whether soil properties in SSLs remain stable—both chemically and spectrally—over time and how this stability relates to field conditions and changes observed in situ over the years. To address this, we developed a transfer function to translate SSL data generated in laboratory conditions to field data collected under remote sensing frameworks. Our research revealed that soil samples stored at room temperature for up to 40 years showed no significant changes in organic or mineral properties. However, field measurements conducted using detailed spectroscopy and satellite spectral data (e.g., Landsat TM) indicated notable changes in soil properties at specific arid and semi-arid locations over time. This finding suggests that SSLs effectively "freeze" the chemical and spectral properties of soil samples, preserving them as a kind of "fossil standard." 
Such preserved samples can serve as a benchmark to assess soil changes driven by environmental processes, particularly those linked to climate change. In conclusion, SSLs offer a reliable means of monitoring soil dynamics over time through a combination of laboratory and field-based approaches, demonstrating their potential for long-term studies on climate change impacts, and can serve as a valuable reference dataset for hyperspectral monitoring of the environment.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Towards quantifying model error from spectral radiative transfer models: an application to PROSPECT

Authors: Jose Gomez-dans, Jose Luis García-Soria
Affiliations: King's College London, Leverhulme Centre for Wildfires, National Centre for Earth Observation (NCEO), Laboratory of Earth Observation (LEO) Image Processing Laboratory (IPL).
Radiative transfer (RT) models describe the interaction of photons with soil and canopy using a concise set of biophysical parameters. These models have advanced understanding of the factors controlling remote sensing measurements and the variables of interest to researchers. A key challenge lies in the "inverse problem," where model parameters such as leaf area index (LAI) or leaf pigments are derived by either inverting the model against observations within a Bayesian framework or using it to generate synthetic datasets for training machine learning algorithms. As biophysical parameters derived from optical and thermal data are increasingly proposed as Essential Climate, Agricultural, or Biodiversity Variables, robust frameworks for estimating uncertainty become critical. For land surface variables, this includes quantifying uncertainties introduced during pre-processing, such as radiometric calibration and atmospheric correction, and propagating these through parameter inversion. Recent studies have demonstrated comprehensive uncertainty quantification for optically derived parameters, including temporal interpolation [Yin et al. (2019), Gorroño et al. (2024), Yin et al. (2024)]. However, RT models inherently rely on assumptions and approximations. Deterministic approaches often treat models as error-free, ignoring biases and uncertainties arising from these assumptions. In this study, the PROSPECT family of leaf RT models is extended to generate stochastic outputs that explicitly characterise model-induced uncertainty. The proposed approach re-fits the PROSPECT model to well-characterised reference spectra, modelling residual leaf reflectance and transmittance as Gaussian Processes (GPs). From a Bayesian perspective, GPs serve as priors over functions, capturing spectral correlation and ensuring continuity. This stochastic formulation enables a flexible representation of model uncertainty. 
The implementation and validation of this method are discussed, along with tests of the uncertainty estimates in the context of hyperspectral leaf-level parameter retrieval, with a discussion on how to treat the uncertainty arising from using different models with different assumptions. Finally, potential extensions to canopy-level models and atmospheric RT are outlined, demonstrating the broader applicability of this approach to remote sensing and uncertainty quantification.
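To make the residual-as-a-GP idea concrete, the following minimal NumPy sketch (not the authors' code; the squared-exponential kernel, amplitude and length scale are invented for illustration) draws spectrally correlated residuals from a GP prior and adds them to a stand-in deterministic spectrum, yielding an ensemble of stochastic model outputs:

```python
import numpy as np

def sq_exp_kernel(wl, amp=0.01, length=50.0):
    # Squared-exponential covariance over wavelength (nm): nearby bands correlate,
    # which enforces the spectral continuity described in the abstract.
    d = wl[:, None] - wl[None, :]
    return amp**2 * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
wl = np.linspace(400, 2500, 211)              # wavelengths, nm
base = 0.3 + 0.1 * np.sin(wl / 300.0)         # stand-in for a deterministic PROSPECT spectrum

K = sq_exp_kernel(wl) + 1e-6 * np.eye(wl.size)      # small jitter for numerical stability
L = np.linalg.cholesky(K)
residuals = L @ rng.standard_normal((wl.size, 100))  # 100 GP draws of model error
stochastic = base[:, None] + residuals               # ensemble of stochastic spectra
```

Each column of `stochastic` is one plausible realisation of the model given its structural uncertainty; in the actual method the GP would be fitted to residuals against well-characterised reference spectra rather than specified a priori.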
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Implementation of Python Software for Estimating Vegetation Properties From Hyperspectral Satellite Data in the Prospect of CHIME

Authors: Viktor Ixion Mészáros, Louis Trinkle, Dávid D. Kovács, José Luis García-Soria, Pablo Reyes-Muñoz, Ana Belén Pascual-Venteo, Dr. Jochem Verrelst
Affiliations: Image Processing Laboratory (IPL), Universitat de València
The upcoming CHIME mission will provide routine, high-resolution hyperspectral data to advance sustainable agriculture and address climate change challenges. At the Image Processing Laboratory (IPL), University of Valencia, the vegetation retrieval module of the CHIME mission, Level-2B Vegetation (L2BV), is currently under development in MATLAB, a commercial software package, whereas ESA prefers the development of open-source processors. To align with this preference, a software package named PyL2BV has been developed in Python, an open-source programming language, for retrieving biophysical variables from hyperspectral data using models created in the Automated Radiative Transfer Models Operator (ARTMO) Toolbox. The work aims to assess the accuracy and efficiency of the PyL2BV software, particularly for the Python-based Gaussian Process Regression (GPR) retrieval method recently developed at IPL. GPR models were trained using a hybrid approach with simulated datasets generated by the Soil Canopy Observation, Photochemistry and Energy fluxes (SCOPE) radiative transfer model (RTM). Datasets were further optimised with active learning, and principal component analysis (PCA) was used for dimensionality reduction. Models were obtained for canopy vegetation traits, including Leaf Area Index (LAI), Canopy Chlorophyll Content (CCC), Canopy Water Content (CWC), the Fraction of Absorbed Photosynthetic Active Radiation (FAPAR), and Fractional Vegetation Cover (FVC). All models were trained and validated in ARTMO, using in situ data collected from the Grosseto (Italy) and Munich North Isar (MNI) field campaigns for LAI, CCC, and CWC, and theoretical cross-validation for FAPAR and FVC due to the lack of available in situ data. The models achieved strong predictive performance, with R² values between 0.65 (CCC) and 0.98 (FVC) and low RMSE, enabling reliable retrievals.
The PyL2BV software was assessed based on several key parameters: (1) accuracy, reflecting the quality of the retrievals; (2) speed, measuring the efficiency of the retrieval process; and (3) memory performance, addressing the software's resource usage during execution. To demonstrate the utility of the PyL2BV software, the models were applied to synthetic CHIME hyperspectral data and a scene from the Italian hyperspectral mission, PRecursore IperSpettrale della Missione Applicativa (PRISMA). Retrievals for all variables from both images showed consistent and accurate values with relatively low uncertainties. Finally, the PyL2BV retrievals were compared against results produced by the MATLAB L2BV using Pearson correlation. The PyL2BV software demonstrated notable computational advantages over the MATLAB-based implementation while maintaining high accuracy. The implementation of parallel processing achieved significantly faster execution times for the retrievals. Memory profiles also indicated lower memory usage, highlighting the software's suitability for large-scale data processing. Scatterplots between Python and MATLAB retrievals showed minor dissimilarities for CHIME, due to a difference in the spectral interpolation methods of the two platforms, while reaching perfect correlation for PRISMA, where no interpolation was used, thereby validating the successful development of the PyL2BV software for GPR retrievals. The work also lays a strong foundation for future applications, such as the Python implementation of the vegetation retrieval module of CHIME. Moreover, the improved efficiency of the PyL2BV software can significantly enhance operational workflows for vegetation monitoring, supporting our understanding of coupled environmental processes and climate change mitigation efforts.
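The hybrid PCA-plus-GPR retrieval scheme described above can be illustrated with a self-contained NumPy sketch (not the PyL2BV or ARTMO code; the synthetic "spectra", band count, component count and kernel settings are all invented): simulated spectra are compressed with PCA, and a GPR posterior mean maps the component scores to LAI:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hybrid training set: RTM-style simulated spectra (rows) labelled with LAI.
n, bands = 300, 150
lai = rng.uniform(0.0, 6.0, n)
spectra = np.outer(lai, np.linspace(0.1, 0.5, bands)) + 0.01 * rng.standard_normal((n, bands))

# PCA via SVD: project the spectra onto the leading k components.
mu = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mu, full_matrices=False)
k = 5
X = (spectra - mu) @ Vt[:k].T                 # n x k scores

# GPR posterior mean with an RBF kernel (equivalent to kernel ridge regression).
def rbf(A, B, ls):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

ls = X.std() * np.sqrt(k)                     # crude length-scale heuristic
K = rbf(X, X, ls) + 1e-4 * np.eye(n)          # noise term regularises the fit
alpha = np.linalg.solve(K, lai)

def predict(spec_new):
    x = (spec_new - mu) @ Vt[:k].T
    return rbf(x, X, ls) @ alpha

pred = predict(spectra[:10])                  # retrieve LAI for the first 10 samples
```

In the operational workflow the training spectra come from SCOPE simulations refined by active learning, and the hyperparameters are optimised rather than set heuristically.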
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: The AVIRIS-4 Airborne Imaging Spectrometer in Support of CHIME and SBG

Authors: Andreas Hueni, Marius Voegtli, Josquin Rosset, Jesse LaHaye, Mike Werfeli, Luc Sierro, Diego Villamaina
Affiliations: University Of Zurich, ZHAW, EPFL
Airborne imaging spectrometers play a key role in the mission preparation of future satellite-based sensor systems by producing data used in the development of end-to-end simulators and by the broader scientific user community for algorithm development. AVIRIS-4 is a state-of-the-art airborne imaging spectrometer built by NASA/JPL, with the data capture and mission control hardware and software developed by the University of Zurich (UZH), the Swiss Federal Institute of Technology Lausanne (EPFL), and the Zurich University of Applied Sciences (ZHAW). AVIRIS-4 was commissioned during a test flight in early 2024 and has since entered operational service within the framework of the Airborne Research of the Earth System (ARES) facility (https://ares-observatory.ch). The system is a sister instrument of AVIRIS-3 and EMIT, both operated by NASA/JPL. This technical compatibility will help pave the way for missions such as ESA’s CHIME and NASA’s SBG by providing airborne and space-based imaging spectrometer data that are easily interchanged between mission simulators and science teams. The calibration of AVIRIS-4 is being conducted at the Calibration Home Base (CHB) at the German Aerospace Center (DLR) in Oberpfaffenhofen, to establish a detailed characterization of the spatial, spectral and radiometric responses of the system, traceable to international standards. Raw image data are converted to at-sensor radiances using a common code base shared with NASA/JPL, while flight trajectory optimization, the optical camera model, and sensor alignment calibration are obtained with a custom optimization framework developed for AVIRIS-4. Georectification and atmospheric correction are carried out with the PARGE and ATCOR software packages, respectively.
Mission data are being disseminated through a new web portal under construction, based on the NASA/JPL MMGIS solution (https://popo.jpl.nasa.gov/mmgis-aviris), allowing users to spatially browse flight missions and inspect metadata and ‘quicklook’ imagery samples. The portal is also set up to provide access to legacy ARES data, namely APEX and AVIRIS-NG data, which include data acquired during the CHIME Hypersense campaigns in 2018 and 2021. Currently planned flight missions for 2025 include acquisitions for CHIME in support of the ESA Sentinel Users Preparation (SUP) initiative, combined with airborne SAR imaging to address the topics of biodiversity, drought stress, and agricultural phenology in preparation for synergies between CHIME and ROSE-L. In this contribution we will showcase the capabilities of AVIRIS-4 with calibration, validation and science results from the first year of operations, demonstrating the suitability of this system for forthcoming space-based optical EO mission developments.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone O-P)

Poster: Glacier ice spectroscopy: where art thou?

Authors: Biagio Di Mauro, Giacomo Traversa, Roberto Garzonio, Claudia Ravasio, Lea Hartl, Niklas Bohn, Alexander Kokhanovsky, Maximilian Brell, Lou-Anne Chevrollier, Adrien Wehrlé, Roberto Colombo
Affiliations: Institute of Polar Sciences, National Research Council of Italy, Milan, Italy, Department of Earth and Environmental Sciences, University of Milano-Bicocca, Milan, Italy, Institute for Interdisciplinary Mountain Research, Austrian Academy of Sciences, Austria, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA, German Research Centre for Geosciences, Potsdam, Germany, Department of Environmental Science, iClimate, Aarhus University, Roskilde, Denmark, Institute of Geography, University of Zürich, Zürich, Switzerland
In the context of the cryosphere, most of the efforts in imaging spectroscopy have been devoted to the study of snow. Nevertheless, glacier ice is an important component of the cryosphere, and it is strongly impacted by climate change due to its early resurfacing during the melting season in polar and non-polar regions. Recent progress in radiative transfer modelling now allows simulation of the bihemispherical reflectance of ice for varying key parameters such as ice grain or pore size, ice density, dust concentration, algae concentration and liquid water content. These variables are very important for determining the state of the surface (i.e. dry/melting) and for informing regional and global climate modelling. Imaging spectroscopy is a fundamental tool for monitoring specific properties of glaciers, sea ice, ice sheets and ice shelves, and future global hyperspectral missions such as CHIME and SBG will improve our observational capabilities over glacierized areas. In this contribution, we provide observational evidence that CHIME (ESA) precursors such as PRISMA (ASI) and EnMAP (DLR) offer sufficient spectral (~12 nm) and spatial (30 m) resolution to map glacier ice surface properties from space. To support our claim, we first present validation activities of reflectance and radiance products derived by different methods and from different satellite missions over several ice-covered areas, including flat surfaces in alpine and polar regions. We will present a match-up analysis based on hyperspectral data collected by aerial surveys in different glacial environments. We also show recent results on a series of specific algorithms for determining surface properties of glacier ice using hyperspectral satellite data. Those algorithms were developed by leveraging both radiative transfer modelling and a set of field campaigns conducted in polar regions and in the European Alps.
We focused on the retrieval of the liquid water content of ice and the concentration of light-absorbing impurities, both organic (i.e. algae) and inorganic (i.e. mineral dust). With our preliminary intercomparison of retrieval algorithms, we also provide a set of recommendations for the harmonization of future global cryospheric products from imaging spectroscopy data.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: A.08.08 - POSTER - Upper Ocean Dynamics

The upper ocean, by exchanging properties such as heat, momentum, gas, mass and freshwater with the atmosphere, plays a key role in shaping the Earth’s climate and influencing various environmental processes. It is characterized by numerous processes which superimpose and interact with each other, and which cover a wide range of spatial (from sub-mesoscale to basin-wide) and temporal (from high frequency to climatic) scales.
Different parameters are needed to properly describe the upper ocean dynamics (e.g. temperature, salinity, sea level, currents, wind, waves, mixed layer depth), and a large variety of active and passive instruments have been put into orbit over the last few decades providing more or less direct information about the upper-ocean dynamics (e.g. altimeters, including the recently launched SWOT mission, gradiometers, scatterometers, synthetic aperture radars, imaging radiometers operating at different wavelengths (microwave, infrared), and spectrometers). In this context, this session welcomes contributions exploring how multi-variable satellite observations, together with in-situ data and/or numerical modelling, can be consistently and systematically used in synergy to better observe and understand upper ocean dynamics, across different dynamical regimes and spatial and temporal scales.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Deriving Surface Currents Using Multi-Source Approach Through Variational Inverse Method: DIVAnd

Authors: Abel Dechenne, Alexander Barth, Aida
Affiliations: ULiège
Sea surface currents (SSC) are central to understanding upper ocean dynamics and interactions. Currents transport heat and mass, create conditions for up- or downwelling systems, and drive the content of primary variables such as oxygen and chlorophyll. SSC is thus an important factor conditioning primary production and, therefore, the biological interactions of the whole trophic chain. Surface currents also play a central role in the dispersion of pollutants, for example in oil spill models. For these reasons, SSC are among the Essential Ocean Variables (EOVs, as defined by the Global Ocean Observing System community), which confirms their importance to oceanographic knowledge and research. Consequently, analyses over large timescales and areas are valuable for capturing variability and recurrent circulation patterns. However, observations of SSC are sparse at the scales required for such larger (e.g. climatological) analyses, while complete gridded fields are needed to force models or to predict the dispersion of pollutants. To overcome this problem, researchers often use interpolation techniques to fill the gaps. We propose a new way to build time series analyses of surface currents from remote and in-situ observations, coupled with an innovative gap-filling technique. In this work, we consider three data sources: drifters, High-Frequency (HF) radars, and satellite altimetry. Drifters are typically considered the most direct and accurate of the three since they are the only in-situ measurement. The main benefit of satellite altimetry is its high spatial and temporal coverage, which compensates for the sparsity of drifters; however, altimetry gives only an indirect estimate of surface currents via the geostrophic relationship. Finally, HF radars give precious information on coastal dynamics, where non-geostrophic components tend to be larger.
Despite that, HF radars do not provide long time coverage due to their relatively recent deployment. These sources of information are analysed together in the same framework, and a complete gridded product is generated via a variational inverse method, DIVAnd (Data-Interpolating Variational Analysis in n dimensions), which is based on a cost function that penalizes abrupt gradients in the analyses. The approach also adds dynamical constraints to the cost function: the presence of the coastline, the generally small horizontal divergence, the Coriolis force, the surface pressure gradient, and the temporal coherence of the system. The current work is dedicated to Mediterranean Sea surface currents and proposes a 10-year time series for the whole basin, with a target effective resolution of 7 km and a temporal resolution of around 30 days. The work will be shared as an open-source approach in which any region of interest can be resolved. We are confident that this technique is innovative thanks to its multi-source approach and its consideration of hydrodynamical constraints. Results are validated against independent drifter observations (10% are removed from the dataset to perform cross-validation). They show that our reconstruction framework is efficient (maximum RMS values of 0.0917 m/s for u and 0.0951 m/s for v) and that the technique does not require large computing resources compared to a modelling approach. This work is part of the Blue-Cloud 2026 project, through which the approach will be opened to the public as a virtual laboratory on a computing platform provided by the project. For technical aspects of the DIVAnd technique, please refer to: Barth, A., Troupin, C., Emma, R., Alvera Azcárate, A., Beckers, J.-M., & Joaquín, T. (2021). Variational interpolation of high-frequency radar surface currents using DIVAnd.
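The core DIVAnd idea of fitting sparse observations under a gradient penalty can be sketched in one dimension (a toy illustration, not the DIVAnd package itself; the grid size, observation positions and weights are invented). The analysed field minimises a misfit-to-observations term plus a term penalising abrupt gradients, which fills the gaps smoothly:

```python
import numpy as np

n = 100
grid = np.linspace(0.0, 1.0, n)
obs_idx = np.array([5, 20, 40, 70, 95])            # sparse "drifter" sample locations
obs_val = np.sin(2 * np.pi * grid[obs_idx])        # pseudo-observations

# Observation operator H picks grid points; D is the first-difference operator.
H = np.zeros((obs_idx.size, n))
H[np.arange(obs_idx.size), obs_idx] = 1.0
D = np.diff(np.eye(n), axis=0)

eps, alpha = 1e-3, 1e-2                            # obs-error vs. smoothness weights

# Minimise (1/eps)||Hx - y||^2 + alpha ||Dx||^2  ->  linear normal equations.
A = H.T @ H / eps + alpha * (D.T @ D)
x = np.linalg.solve(A, H.T @ obs_val / eps)        # analysed gridded field
```

The real DIVAnd cost function works in n dimensions and adds the dynamical constraints listed above (coastline, weak divergence, Coriolis, pressure gradient, temporal coherence) as extra penalty terms of the same form.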
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: A multi-variables synergy between satellites and in situ ocean data to better estimate upper ocean dynamics

Authors: Brahim Boussidi, Clément Le Goff, Gwenaele Jan, Solène Jousset, Bertrand
Affiliations: Eodyn, Groupcls, Ifremer
In the upper ocean, surface current, sea surface temperature, salinity and height (U, SST, SSS, SSH) constrain the upper-layer dynamics. These variables interact with each other across parts of the spatial and temporal scales, while wind and waves exchange energy with them and modify their patterns. The objective of the Space4SafeSea project (supported by ESA) is to describe this surface dynamics as accurately as possible. The input data consist of in-situ AIS (Automatic Identification System) measurements and data from sensors in orbit: altimeters, the SWOT mission, etc. This work presents and compares two methods for mapping and merging ocean surface currents from AIS data and SSH from altimetry. The first method leverages ship drift estimates derived from AIS analysis [1,2]. These estimates are combined with along-track sea surface height measurements using the multiscale and multivariate MIOST (Multiscale Inversion of Ocean Surface Topography) mapping approach, which combines geostrophic signals from altimetry and AIS observations with ageostrophic signals unique to AIS data. The second method employs a learning-based variational data assimilation framework, 4DVarNet [3]. This machine learning framework has shown high performance in Observing System Simulation Experiments (OSSEs) [4]. Compared to the first method, 4DVarNet offers promising capabilities for reconstructing ocean surface circulation by integrating multimodal datasets with diverse characteristics through advanced learning strategies. This includes systematically incorporating high-resolution SST and ocean colour imagery, which are valuable for tracing upper ocean dynamics. The project enhances the potential for reconstructing ocean surface circulation, particularly by refining mapping in regions of dense maritime traffic and capturing rapid sub-mesoscale dynamics with enhanced temporal accuracy.
The performance of these methods will be evaluated using independent validation data, such as measurements from drifter networks and HF radar systems. Additionally, comparisons will be made with state-of-the-art ocean products, including the DUACS optimal interpolation product [5] and GlobCurrent [6], to assess their respective advantages and limitations. [1] Le Goff, C., Boussidi, B., Mironov, A., Guichoux, Y., Zhen, Y., Tandeo, P., & Chapron, B. (2021). Monitoring the greater Agulhas Current with AIS data information. Journal of Geophysical Research: Oceans, 126(5), e2021JC017228. [2] Boussidi, B., Le Goff, C., Galard, C., Carton, X., & Speich, S. (2024). The Analysis of North Brazil Current Rings from Automatic Identification System Data and Altimetric Currents. Remote Sensing, 16(15), 2828. [3] Fablet, R., Chapron, B., Drumetz, L., Mémin, E., Pannekoucke, O., & Rousseau, F. (2021). Learning variational data assimilation models and solvers. Journal of Advances in Modeling Earth Systems, 13(10), e2021MS002572. [4] Fablet, R., & Chapron, B. (2022). Multimodal learning-based inversion models for the space-time reconstruction of satellite-derived geophysical fields. arXiv preprint arXiv:2203.10640. [5] Pujol, M. I., Faugère, Y., Taburet, G., Dupuy, S., Pelloquin, C., Ablain, M., & Picot, N. (2016). DUACS DT2014: the new multi-mission altimeter data set reprocessed over 20 years. Ocean Science, 12(5), 1067-1090. [6] Rio, M. H., Mulet, S., & Picot, N. (2014). Beyond GOCE for the ocean circulation estimate: Synergetic use of altimetry, gravimetry, and in situ data provides new insight into geostrophic and Ekman currents. Geophysical Research Letters, 41(24), 8918-8925.
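As a toy illustration of multi-source merging (a drastic simplification, not the MIOST or 4DVarNet machinery; all numbers are invented), two noisy estimates of the same surface current can be combined by inverse-variance weighting, the elementary building block of any least-squares data fusion:

```python
import numpy as np

# Hypothetical zonal-current estimates at one point, with assumed error variances.
u_alti, var_alti = 0.30, 0.04 ** 2     # geostrophic estimate from altimetry, m/s
u_ais, var_ais = 0.38, 0.06 ** 2       # total-current estimate from AIS ship drift, m/s

# Inverse-variance weights: the more certain source counts more.
w_alti, w_ais = 1.0 / var_alti, 1.0 / var_ais
u_merged = (w_alti * u_alti + w_ais * u_ais) / (w_alti + w_ais)
var_merged = 1.0 / (w_alti + w_ais)    # always smaller than either input variance
```

The merged estimate lands between the two inputs, pulled toward the more accurate one, and its variance is reduced; MIOST generalises this principle to full space-time fields with multiscale covariances.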
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Estimating ocean currents from the joint reconstruction of absolute dynamic topography and sea surface temperature through deep learning algorithms

Authors: Daniele Ciani, Claudia Fanelli, Bruno Buongiorno Nardelli
Affiliations: CNR-ISMAR
Our study focuses on Absolute Dynamic Topography (ADT) and Sea Surface Temperature (SST) mapping from satellite observations, with the primary objective of improving the spatial resolution of satellite-derived ADT (and the derived geostrophic currents). Retrieving consistent high-resolution ADT and SST information from space is challenging, due to instrument limitations, sampling constraints and degradations introduced by the interpolation algorithms used to obtain gap-free (L4) analyses. To address these issues, we developed and tested different deep learning methodologies, specifically Convolutional Neural Network (CNN) models that were originally proposed for single-image super-resolution. Building upon recent findings, we conduct an Observing System Simulation Experiment (OSSE) relying on Copernicus numerical model outputs (with respective temporal and spatial resolutions of 1 day and 1/24°) and we present a strategy for further refinements. Previous OSSEs combined low-resolution L4 satellite-equivalent ADTs with high-resolution "perfectly known" SSTs to derive high-resolution sea surface dynamical features. Here, we introduce realistic SST L4 processing errors and modify the network to concurrently predict high-resolution SST and ADT from synthetic, satellite-equivalent L4 products. This modification allows us to evaluate the potential enhancement in ADT and SST mapping while integrating dynamical constraints through tailored, physics-informed loss functions. The neural networks are thus trained using OSSE data and subsequently applied to the Copernicus Marine Service satellite-derived ADTs and SSTs, allowing us to reconstruct super-resolved ADTs and geostrophic currents at the same spatio-temporal resolution as the model outputs employed for the OSSE. A 12-year-long time series (2008-2019) of super-resolved geostrophic currents is thus presented and validated against in situ measured currents from drogued drifting buoys and via spectral analyses.
This study suggests that CNNs are beneficial for improving standard altimetry mapping: they generally sharpen the ADT gradients, with a consequent correction of surface current directions and intensities with respect to the altimeter-derived products. Our investigation is focused on the Mediterranean Sea, a quite challenging region due to its small Rossby deformation radius (around 10 km). Similar methodologies are currently being adapted to improve the characterization of upper ocean dynamics in the North Atlantic Ocean and will contribute to scientific studies on North Atlantic sub-polar mode water formation and its relation to Atlantic Meridional Overturning Circulation variability.
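The geostrophic currents derived from the (super-resolved) ADT follow from the geostrophic balance, u = -(g/f) ∂η/∂y and v = (g/f) ∂η/∂x. A minimal f-plane NumPy sketch with invented, Mediterranean-like values (a 10 cm Gaussian anticyclone on a 7 km grid) illustrates the computation:

```python
import numpy as np

g = 9.81          # gravitational acceleration, m s^-2
f = 1e-4          # Coriolis parameter, s^-1 (mid-latitude value)
dx = dy = 7_000.0  # grid spacing, m

# Synthetic ADT field: a 10 cm Gaussian anticyclone centred on the grid.
y, x = np.mgrid[0:50, 0:50]
eta = 0.1 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / (2 * 8.0 ** 2))

# Geostrophic balance: velocities from the cross-gradient of topography.
deta_dy, deta_dx = np.gradient(eta, dy, dx)
u = -(g / f) * deta_dy          # zonal velocity, m/s
v = (g / f) * deta_dx           # meridional velocity, m/s
speed = np.hypot(u, v)
```

Sharpening the ADT gradients, as the CNNs do, directly increases these derived velocities, which is why super-resolution of the topography translates into corrected current intensities.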
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Mesoscale Dynamics in the Baltic Sea: Oxygen and Chlorophyll Transport Insights from the 4DBaltDyn Project.

Authors: Dr Anna Bulczak, Professor Miroslaw Darecki, Dr Andreas Lehmann, Luciana Fenoglio-Marc, Dr Rafael Catany, Maciej Janecki, Dr Daniel Rak, Dr Jaromir Jakacki, Marine Bretagnon, Philippe Bryère, Quentin Jutard, Artu Ellmann, Dr Nicole Delpeche-Ellmann, Professor Lidia Dzierzbicka-Glowacka, Dr Dawid Dybowski, Dr Roberto Sabia
Affiliations: Institute Of Oceanology, Polish Academy Of Sciences, GEOMAR, Albavalor, ACRI-ST, University of Bonn, NERSC, ESA, TalTech
The Baltic Sea's mesoscale (10–100 km) and sub-mesoscale (1–10 km) processes play a crucial role in shaping the region’s physical and biogeochemical dynamics, particularly in the transport and distribution of oxygen and chlorophyll. Mesoscale eddies drive the exchange of mass, energy, and nutrients across the thermocline and halocline, influencing temperature and salinity anomalies. Despite their importance, the three-dimensional structures and full impact of these eddies remain insufficiently explored. By facilitating vertical mixing and lateral transport, mesoscale eddies contribute to the ventilation of hypoxic deep-water regions, enhancing nutrient availability, stimulating phytoplankton growth, and supporting marine biodiversity. The 4DBaltDyn project combines satellite-derived data—such as sea surface temperature (SST), sea surface height (SSH), ocean colour, and sea surface salinity (SSS)—with high-resolution numerical models (115 m, 575 m, and 2.3 km) and produces novel high-resolution 4D Baltic products of ocean currents, temperature, salinity, oxygen concentration and chlorophyll-a concentration. This study uses these products together with in situ observations to provide a comprehensive understanding of eddy-induced transport processes and their effects on primary production and deep-water ventilation. Special emphasis is placed on regions experiencing low oxygen conditions, given their critical ecological importance. This study underscores the pivotal role of mesoscale dynamics in mitigating hypoxia and enhancing biological productivity in the Baltic Sea. By employing data-driven 4D reconstruction techniques and AI-powered analysis, 4DBaltDyn deepens insights into the complex interactions between physical, biological, and biogeochemical processes.
The project’s findings offer valuable contributions to ecosystem management, including forecasting harmful algal blooms, addressing oxygen depletion events, and informing the development of targeted conservation strategies for this dynamic and sensitive marine environment.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Mesoscale Eddy Trajectories Atlas – Networks (META-Networks): a new dataset and analytical tools to visualize and investigate eddy trajectories and interactions from multi-satellite altimetry products.

Authors: Francesco Nencioli, Antoine Delepoulle, Juliette Gamot, Isabelle Pujol, Gerald Dibarboure
Affiliations: Collecte Localisation Satellites, Centre National d'Etudes Spatiales
Mesoscale eddies are ubiquitous features of the global ocean. They can trap and transport water over large distances and therefore play a key role in regulating the ocean’s energy, heat and biogeochemical cycles. Furthermore, they impact the transport of surface tracers and therefore strongly influence the geographical distribution of tracers and ecological niches. Global observations of the large-scale mesoscale circulation (i.e. scales > 100 km) are provided daily by the sea surface height maps obtained from merging the along-track measurements acquired by the current constellation of satellite altimeters. Given the importance of mesoscale eddies, several studies in the last decades have used these multi-satellite SSH fields to analyze their characteristics and distribution, at both the global and regional level. Because of the large spatial and temporal extent of these datasets, such analyses required the development of automated methods to identify and track mesoscale eddies. Here we present the new Mesoscale Eddy Trajectories Atlas – Networks, which provides global tracks of mesoscale eddies from 1993 to present. The individual mesoscale eddies are detected based on the global fields of absolute dynamic topography reconstructed using the new Multiscale Interpolation gridding (MIOST). Eddy detection is performed in an Eulerian reference frame using the Py-Eddy-Tracker (PET) algorithm described by Mason et al. (2014) and available at https://github.com/AntSimi/py-eddy-tracker. After the eddies are identified, the PET algorithm links the individual eddies together to reconstruct eddy tracks. An important additional feature of META is that individual tracks are combined into a series of eddy networks based on eddy-eddy interactions at the beginning and end of each track.
Such interactions are identified via eddy overlapping between individual tracks, and two tracks are associated in the same network if the overlap ratio (defined as the intersection of the areas of the two eddies divided by the union of their areas) is larger than a given threshold. The advantage of such a network representation is that it enables the analysis of eddy characteristics not only during individual along-track evolution, but also at times of merging and splitting interactions with other eddies. Since individual eddy tracks are organized into a series of networks, the structure of the META-Network is significantly more complex than traditional eddy datasets. To guarantee ease of use of the dataset, we have also developed a series of functions with which users can access, display and analyse the META-Network. These functions, and a series of examples of applications in the form of Jupyter notebooks, are available at https://py-eddy-tracker.readthedocs.io/en/latest/index.html. The examples range from a simple display of the detected eddy fields, tracks and networks to the computation of more advanced eddy statistics, from the along-track evolution of individual characteristics to the geographical distribution of merging and splitting events. Furthermore, a series of Lagrangian methods and diagnostics are also provided and can be used as a complementary basis of comparison to assess the characteristics of the detected eddy networks. This combination of a dataset and its associated analytical tools makes META-Networks an ideal tool for a broad range of users and applications: from purely physical oceanography studies to interdisciplinary applications for which the detected eddy network needs to be combined with external fields from other in-situ or remotely sensed datasets.
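The overlap ratio used for network association is an intersection-over-union of the two eddy areas. A minimal sketch on rasterised, hypothetical circular eddies (the grid, radii and threshold value are invented for illustration):

```python
import numpy as np

def eddy_mask(cx, cy, radius, shape=(200, 200)):
    # Boolean mask of a circular eddy on a regular grid (e.g. 1 km cells).
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2

def overlap_ratio(a, b):
    # Intersection of the two eddy areas divided by the union of their areas.
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

m1 = eddy_mask(100, 100, 30)
m2 = eddy_mask(115, 100, 30)        # the "same" eddy one time step later, shifted 15 cells

r = overlap_ratio(m1, m2)
same_network = r > 0.2              # associate the two tracks above a chosen threshold
```

In META-Networks the same test is applied between the last eddy of one track and the first eddy of another, so that merging and splitting events chain tracks into a network.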
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: A collaborative data challenge for advancing sea surface current products: Insights from the WOC Project

Authors: Sammy Metref, Lucile Gaultier, Clément Ubelmann, Daniele Ciani, Bruno Buongiorno Nardelli, Florian Le Guillou, Fabrice Collard, Gilles Larnicol, Marie-Hélène Rio
Affiliations: Datlas, OceanDataLab, Consiglio Nazionale delle Ricerche, Istituto di Scienze Marine (CNR-ISMAR), Magellium, European Space Agency, European Space Research Institute (ESA-ESRIN)
Surface currents, as a central feature of upper ocean dynamics, transport heat, salt, and tracers over vast distances, influencing both local and global climate systems. They also drive the Lagrangian displacement of floating materials, from living resources to marine pollution, underscoring their critical importance for marine safety, environmental monitoring, and resource management. Recent advancements in observational and mapping techniques now provide daily to hourly products containing multiple components of the currents. To fully harness these advancements and address the complexities of assessing upper ocean current reconstructions, collaborative data challenges have emerged as powerful platforms for systematically evaluating and refining observation and mapping methodologies. The ESA World Ocean Circulation (WOC) project exemplifies this approach by driving innovation in ocean surface current mapping and offering robust tools for benchmarking methods against existing operational products. A metric package developed during the project has been integrated into the data challenge, offering a wide array of diagnostics. By engaging multidisciplinary teams, the WOC data challenge brought together state-of-the-art methodologies using diverse datasets, including satellite observations (e.g., altimetry and SST) and in-situ measurements (e.g., drifters), around an open and well-defined problem to improve the accuracy and reliability of surface current products. By comparing new developments to operational products, the data challenge can identify strengths, weaknesses, and areas for improvement, ultimately contributing to the refinement of operational ocean surface current data products. This presentation will first focus on the role of the WOC data challenge in not only benchmarking existing methods and products but also providing a reference platform for developing new methodologies.
The presentation will also give an overview of some of the latest ocean surface products and their performance. Overall, the lessons learned from the WOC data challenge underline the potential of open science frameworks and collaborative platforms in bridging the gap between methodological developments and operational products.
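The kind of diagnostic included in such a metric package can be illustrated with a minimal sketch (illustrative code, not the actual WOC package) comparing mapped surface currents against collocated drifter velocities:

```python
import numpy as np

def current_diagnostics(u_map, v_map, u_drifter, v_drifter):
    """RMSE and explained variance of mapped currents vs. collocated
    drifter velocities (1-D arrays of matched samples, m/s)."""
    err = np.hypot(u_map - u_drifter, v_map - v_drifter)
    rmse = float(np.sqrt(np.mean(err**2)))
    # Fraction of observed velocity variance captured by the reconstruction
    var_obs = np.var(u_drifter) + np.var(v_drifter)
    var_err = np.var(u_map - u_drifter) + np.var(v_map - v_drifter)
    explained = float(1.0 - var_err / var_obs)
    return rmse, explained
```

A perfect reconstruction gives zero RMSE and an explained variance of one; operational products are scored by how close they come to that limit.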

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Ocean mesoscale hot-spot at the Nordic high latitudes: the Lofoten Basin

Authors: Antonio Bonaduce, Artem Moiseev, Florian Le Guillou, Roshin Raj, Johnny A. Johannessen
Affiliations: NERSC, Datlas
The Norwegian Atlantic Current contributes fundamentally to the temperate climate of northwestern Europe and maintains an ice-free ocean well into the Barents Sea even in winter. The interaction between the two branches of the Norwegian Atlantic Current is not known in detail but is generally understood to be mediated by mesoscale eddies. In particular, the Lofoten Basin appears as a hot spot of mesoscale activity in several studies in the literature based on conventional altimetry. While satellite altimetry has made a fundamental contribution to our understanding of ocean circulation, the current constellation of nadir altimeters does not allow for resolving the spatial and temporal scales characterizing the intensification and dissipation of mesoscale features. The Surface Water and Ocean Topography (SWOT) mission, based on Ka-band Radar Interferometry (KaRIn), extends the capability of existing nadir altimeters to two-dimensional mapping of the ocean surface at an unprecedented spatial resolution. The fast-sampling phase (1-day repeat) of the mission also allowed for resolving the temporal evolution of mesoscale eddies. The Lofoten Basin is located in an area where the SWOT tracks cross, which was sampled twice a day over 90 days in 2023. Building on this unique opportunity, the results presented in this work rely on both the fast-sampling and science (21-day repeat) phases of the SWOT mission to compare KaRIn retrievals with conventional altimeters and to characterize the representation of the mesoscale field emerging from the different altimetry concepts. The mesoscale features are then collocated with high-resolution remote-sensing retrievals to characterize the contribution of eddies to the upper ocean dynamics. In particular, we build on the synergy between SWOT and SAR retrievals (i.e., from Sentinel-1 Doppler) to partition the geostrophic and ageostrophic contributions to the ocean mesoscale dynamics.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Eddy Kinetic Energy Intensification in the Mediterranean Sea From Three Decades of Satellite Altimetry Observations

Authors: Paul Hargous, Vincent Combes, Bàrbara Barceló-Llull, Ananda Pascual
Affiliations: Institut Mediterrani d'Estudis Avançats - Consejo Superior de Investigaciones Científicas - Universitat Illes Balears
Mesoscale activity plays a central role in ocean variability, substantially influencing the mixing of physical and biogeochemical tracers, such as heat and carbon, and driving changes in ecosystems. Eddy Kinetic Energy (EKE), a metric used to study the intensity of mesoscale processes, has recently been shown to increase in regions of intense EKE worldwide. Strong positive EKE trends are, for example, observed in the principal western boundary current regions, such as the Gulf Stream, the Kuroshio Extension, and the Brazil/Malvinas Confluence. In this study, we assess whether the Mediterranean Sea, known to be a hotspot for climate change impacts, also exhibits such intensification. Despite the wealth of observational data (in situ, satellite) and modeling experiments, there is a gap in understanding the long-term evolution of mesoscale dynamics and EKE trends in the Mediterranean Sea. This study investigates EKE trends in the Mediterranean Sea using altimetric data from the Copernicus Marine Service. Gridded altimetric products (L4) provide daily geostrophic velocities at the ocean surface from 1993 to 2023. The EKE is calculated from anomalies of these geostrophic velocities. We analyzed EKE trends computed from three different altimetric products: a global product derived from a stable two-satellite constellation (two-sat) and two others (global and European) incorporating all available satellites (all-sat). While all products reveal a general increase of EKE in the Mediterranean Sea over the period analyzed, trends calculated from the two-sat product are significantly smaller than those computed from the all-sat products. We surmise that this discrepancy is due to the increasing number of satellites over time used to construct the all-sat datasets, which enhances both their spatial and temporal coverage and, hence, their capacity to detect higher energy levels, and/or to an underestimate of the EKE detected by the two-sat product.
To further investigate these trends, along-track altimetric data (L3) were also used, with a specific focus on the Alboran region. This area, dominated by intense mesoscale activity, shows strong, statistically significant positive EKE trends. These findings highlight the importance of using altimetric products with a stable number of satellites, constructed for climate applications, when addressing long-term ocean variability analyses.
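The EKE computation described above (anomalies of gridded geostrophic velocities) reduces to a short numpy sketch; the (time, lat, lon) array layout is an assumption for illustration, not the authors' code:

```python
import numpy as np

def eddy_kinetic_energy(u, v):
    """EKE per unit mass (m^2/s^2) from geostrophic velocities (m/s).

    Anomalies are taken relative to the time mean at each grid point;
    arrays are assumed to be shaped (time, lat, lon).
    """
    u_anom = u - u.mean(axis=0)
    v_anom = v - v.mean(axis=0)
    return 0.5 * (u_anom**2 + v_anom**2)
```

A linear trend fitted to the time series of spatially averaged EKE then gives the intensification signal discussed in the abstract.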

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Accounting for a Wind Directional Effect in Sea State Bias for Sentinel-3 Delay/Doppler Altimeter Measurements

Authors: Ngan Tran, Laiba Amarouche, Doug Vandemark, Hui Feng, Salvatore Dinardo, Bruno Lucas, Carolina Nogueira-Loddo
Affiliations: CLS, UNH, EUMETSAT
The delay/Doppler (also called SAR) altimetry technique has opened a new era for altimetry applications since the launch of the CryoSat-2 mission, followed by the Sentinel-3A/B and, most recently, the Sentinel-6A missions. It has demonstrated significant advantages of this high-resolution (HR) dataset over the conventional low-resolution (LR) mode in terms of reduced measurement errors and finer along-track spatial resolution. But this innovation also raised questions about the different sensitivities of delay/Doppler measurements to ocean surface characteristics such as wave state, wave motion, surface roughness, currents, etc., compared with conventional data. These changing effects require adapting traditional processing approaches, notably to ensure seamless continuity of high-quality satellite sea level observations. In this study, we focus on an issue reported since 2019: HR/LR range biases between ascending and descending orbits that were found to be correlated with meridional wind patterns. This was first observed in Sentinel-3 data but also affects Sentinel-6 measurements. Different studies analyzed this feature and attributed it to the HR data. The biases have been linked to a wind-wave-induced Doppler shift showing up as an apparent horizontal wave motion. The bias is strongest, and of opposite sign, for up-wind and down-wind situations, zero for cross-wind situations, and is significant wave height (SWH) dependent. We propose an empirical correction in the form of a two-dimensional look-up table (LUT) designed as an additional contribution to the standard sea state bias (SSB) correction already applied to range data. We found that the correction eliminates most of the wind-direction-dependent biases observed between the two operating modes. Its application can improve the quality of Sentinel-3 marine altimetry data over the global ocean.
Results indicate the usefulness of this correction in providing consistent sea level measurements across all nadir altimeter missions.
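A two-dimensional LUT correction of this kind amounts to interpolating a tabulated bias in (wind projection, SWH) space. The sketch below is generic: the axes, the tanh-shaped values, and the function names are hypothetical stand-ins, not the correction proposed in the study:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical LUT axes: along-track wind projection (m/s) and SWH (m)
u_proj = np.linspace(-20.0, 20.0, 9)
swh = np.linspace(0.0, 10.0, 6)
# Hypothetical bias values (mm): antisymmetric in wind projection
# (up-wind vs down-wind), zero for cross-wind, growing with SWH
bias_mm = np.outer(np.tanh(u_proj / 10.0), swh)

ssb_wind_lut = RegularGridInterpolator((u_proj, swh), bias_mm)

def wind_ssb(u_proj_ms, swh_m):
    """Interpolate the wind-direction-dependent bias (mm) from the LUT."""
    return ssb_wind_lut(np.array([[u_proj_ms, swh_m]]))[0]

def corrected_range(range_m, u_proj_ms, swh_m):
    """Subtract the LUT bias (converted mm -> m) from the altimeter range."""
    return range_m - wind_ssb(u_proj_ms, swh_m) / 1000.0
```

The antisymmetry in wind projection reproduces the sign flip between up-wind and down-wind situations described in the abstract.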

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Towards the next combination mean dynamic topography model DTUUH25MDT

Authors: Per Knudsen, Ole Andersen, Nikolai Maximenko, Jan Hafner
Affiliations: DTU Space, Technical University of Denmark, IPRC, University of Hawaii at Manoa
When deriving the DTUUH22MDT combination model, it was found that the differences between the combination model and the geodetic model are within 10 cm. Hence, the effects of integrating the drifter velocities into the MDT are of that order of magnitude. Also, the scales appear to be shorter than a few hundred kilometers, agreeing well with the resolution of the geodetic model. For comparison, the CNES-CLS18 model showed larger differences and at longer scales. A recent comparison with the CNES-CLS22 model shows the same features as the CNES-CLS18 model. The differences between the DTUUH and the CNES-CLS models may be caused by differences in the choice of reference models, the processing of drifter data, the methodologies for integrating the data, etc. To assess the differences between the combination models, this presentation focuses on clarifying issues with the correction of the buoy data for wind-driven flows. Different strategies for deriving empirical correlations are tested, and regional averages are compared with velocities derived from the geodetic MDT. Furthermore, experiments integrating the drifter velocities with the geodetic model are carried out with different weighting schemes to assess how strongly the combination solution is constrained to the geodetic model and how regional biases in the drifter velocities affect the solution.
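The effect of a weighting scheme on how strongly the combination is constrained to the geodetic model can be illustrated with a simple inverse-variance combination (a hypothetical sketch; the actual DTUUH processing is considerably more involved):

```python
def combine_mdt(mdt_geodetic, mdt_drifter, sigma_g, sigma_d):
    """Inverse-variance weighted combination of two MDT estimates (m).

    sigma_g, sigma_d: assumed standard errors of the geodetic and
    drifter-derived estimates. A large sigma_d down-weights the drifter
    information, constraining the solution toward the geodetic model.
    """
    w_g = 1.0 / sigma_g**2
    w_d = 1.0 / sigma_d**2
    return (w_g * mdt_geodetic + w_d * mdt_drifter) / (w_g + w_d)
```

With equal errors the two inputs contribute equally; inflating the drifter error pushes the combined value back onto the geodetic field, which is the behaviour probed by the weighting experiments above.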

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Direct Observations of Ocean Surface Currents from Sentinel-1 Doppler shift: Separating Contributions from Wind, Wave, and Surface Current

Authors: Artem Moiseev, Fabrice Collard, Johnny Johannessen, Gilles Guitton
Affiliations: NERSC, OceanDataLab
Observations from the operational Sentinel-1 mission can be used to derive ocean surface current radial velocity in both the open ocean and coastal zones. These observations are valuable for monitoring mesoscale circulation near the coast and equatorial currents. The accuracy of the ocean surface current retrieved from the Sentinel-1 Doppler shift depends on two main factors: (1) instrument-related accuracy, such as platform attitude, antenna electronic mis-pointing, and temperature compensation, and (2) accuracy in partitioning the geophysical signal between wind, wave, and ocean surface current contributions. This presentation demonstrates the latest progress in deriving ocean surface current radial velocities from the Sentinel-1 mission, focusing on the understanding and partitioning of the geophysical Doppler shift achieved in the ESA World Ocean Circulation and ESA Upper Ocean Dynamics projects. The existing empirical algorithm for wave Doppler estimation (CDOP3SiX) relies on auxiliary wind and wave fields from collocated models to derive the necessary corrections for the sea state contribution. Therefore, the uncertainty of the estimated wave Doppler depends on (i) uncertainty in the auxiliary wind and wave model fields, and (ii) uncertainty related to the training of the empirical algorithm. To provide initial estimates of this uncertainty, we use an ensemble approach. An ensemble numerical weather prediction model was used to estimate the uncertainty associated with the auxiliary wind field used for wave Doppler estimation. We found that the biggest uncertainties in the derived ocean surface current are associated with (i) low winds below 3–4 m/s, and (ii) large uncertainties in wind direction, such as those associated with the passage of an atmospheric front. We also propose a function for approximating the uncertainty.
Overall, the method showed promising results in estimating the uncertainty, which should be complemented by wave ensemble information and the fitting of an ensemble-based empirical algorithm in the future. However, despite efforts to achieve a better empirical fit and provide an estimate of the uncertainty, this approach has fundamental limitations related to the use of model information, such as the time lag between the forecast field and the acquisition, and the underrepresentation of small-scale variability. To address these issues, we developed a new experimental method for estimating wave-induced contributions to the Doppler shift using the wave signal derived from SAR cross-spectra. The new algorithm relies on SAR capabilities to retrieve wind, wave, and surface current directly from (a) the amplitude and phase of the backscattered signal, and (b) the radial Doppler shift of the return pulse. The SAR-derived backscatter information (amplitude and phase) is used to derive the wind field (Mouche et al. 2011, routinely available from Sentinel-1 data) and the wave cross-spectra (Li et al. 2019, available from the parallel ESA SARWAVE project). These observational data from Sentinel-1 Wave Mode L2 OSW were used to parameterize the sea-state-induced ocean surface motion and replace the auxiliary model-based information applied in the existing retrieval algorithm. We found that this new, fully observation-based approach provides more accurate estimates of the sea state contribution to the radial Doppler shift than the state-of-the-art algorithm and an independent drifter climatology. The derived product can generate monthly mean radial velocity maps with a 2-degree pixel size, providing valuable information for monitoring the equatorial current.
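The partitioning step rests on the standard relation between a radial Doppler anomaly and line-of-sight surface velocity. The sketch below is a simplified illustration (function names are assumptions, the nominal C-band wavelength is approximate, and the real CDOP3SiX processing is far more involved):

```python
import math

def radial_velocity(f_dca, incidence_deg, wavelength=0.0555):
    """Convert a geophysical Doppler anomaly (Hz) to a radial surface
    velocity (m/s), assuming the nominal C-band wavelength of
    Sentinel-1 (~5.55 cm). Positive values: motion toward the radar."""
    return f_dca * wavelength / (2.0 * math.sin(math.radians(incidence_deg)))

def surface_current(f_dca, f_wave, incidence_deg):
    """Remove the wave-induced Doppler estimate (e.g. from CDOP3SiX or
    the cross-spectra approach above) to isolate the surface-current
    contribution."""
    return radial_velocity(f_dca - f_wave, incidence_deg)
```

The accuracy of the retrieved current is thus bounded by the accuracy of the wave Doppler term, which is exactly where the ensemble-based uncertainty estimates enter.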

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Improvements of a delay/Doppler altimetry retracker: from modeling to inversion

Authors: Alba Granados, Mònica Roca i Aparici, Chris
Affiliations: IsardSAT
Monitoring of ocean topography is a fundamental task of satellite altimetry missions. This monitoring is performed by constant analysis of long time series of a number of geophysical parameters describing the sea surface, which demands strict thresholds on observational error measures such as precision and accuracy. The geophysical parameters are estimated by minimising the discrepancies between the predictions of an observation model of the returned altimetric waveform and the actual measured waveform, that is, by solving an inverse problem. The discrepancies to be reduced are due not only to modeling errors but also to the noise added to the measurements. In order to improve observational error measures, a large effort has been made in developing advanced models of the altimetric echo, with special focus on reducing biases in the estimated parameters (Buchhaupt et al., 2023). However, special care must also be taken with the measurement errors (Mangilli et al., 2024): given the ill-posed nature of physically based inverse problems, these can be amplified in the inversion procedure, leading to solutions largely contaminated by spurious information. This work presents developments of an open ocean altimetric delay/Doppler retracker (Granados and McKeown, 2024) aimed at improving the quality of the estimates (sea surface height, significant wave height, and sigma-nought), both from the modeling and the measurement noise points of view. The retracker is based on a numerical model that allows for the use of the point target response (PTR) in CAL1 Sentinel-6 products to compensate for any in-flight PTR shape evolution. Modeling errors are tackled by including the satellite mispointing in the flat surface impulse response function. Regarding the measurement noise, statistical inversion theory is explored in order to mitigate its impact, giving continuity to the work of Halimi et al. (2015).
A study area in the South-West Indian Ocean is considered to analyse the quality of the estimated geophysical parameters.
References:
Buchhaupt, C., Egido, A., Smith, W. H., and Fenoglio, L. (2023). Conditional sea surface statistics and their impact on geophysical sea surface parameters retrieved from SAR altimetry signals. Advances in Space Research, 71(5):2332–2347.
Granados, A. and McKeown, C. (2024). Numerical convolutional model for retracking Sentinel-6 delay/Doppler altimetry data. In OSTST 2024.
Halimi, A., Mailhes, C., Tourneret, J.-Y., and Snoussi, H. (2015). Bayesian estimation of smooth altimetric parameters: Application to conventional and delay/Doppler altimetry. IEEE Transactions on Geoscience and Remote Sensing, 54(4):2207–2219.
Mangilli, A., Moreau, T., Piras, F., Daguzé, J., and Thibaut, P. (2024). Improvements of ocean retracker solutions for the reference missions: overview, results and perspectives. In OSTST 2024.
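The inverse problem at the heart of retracking (fitting a waveform model to the measured echo by minimising their discrepancy) can be sketched with a toy error-function leading-edge model; this is an illustration of the least-squares principle only, not the numerical retracker of the study:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def waveform_model(t, epoch, sigma, amplitude):
    """Toy ocean-echo leading edge: position (epoch), rise time (sigma,
    related to significant wave height) and amplitude are the unknowns."""
    return 0.5 * amplitude * (1.0 + erf((t - epoch) / (np.sqrt(2.0) * sigma)))

def retrack(t, waveform, x0=(0.0, 1.0, 1.0)):
    """Estimate (epoch, sigma, amplitude) by least-squares fitting."""
    res = least_squares(lambda p: waveform_model(t, *p) - waveform, x0)
    return res.x

# Synthetic noisy echo with known parameters, then re-estimation
rng = np.random.default_rng(0)
t = np.linspace(-10.0, 10.0, 128)
truth = (1.2, 2.0, 1.0)
echo = waveform_model(t, *truth) + 0.01 * rng.standard_normal(t.size)
epoch, sigma, amp = retrack(t, echo)
```

Because the problem is ill-posed, measurement noise propagates into the fitted parameters; this is the amplification the statistical-inversion developments above aim to mitigate.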

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Fine-Scale Structures of the Abrolhos Bank Circulation From SWOT, In Situ and Copernicus Data

Authors: Fabrice Hernandez, Pr. Alex Costa da Silva, Pr. Marcus Silva, Dr. Ramilla Vieira
Affiliations: Institut de Recherche pour le Développement/LEGOS, Universidade Federal de Pernambuco
In the frame of the SWOT satellite cal/val initiative in Brazil, the Universidade Federal de Pernambuco ship Ciencias do Mar IV performed successive surveys in May and September 2023 over the Abrolhos Bank (17-20°S along the Brazilian coast). In situ measurements (CTDs, shipborne ADCP, TSG), as well as moored data (AWAC instrument), were collected in order to be compared with SWOT sea surface height retrievals, but also with Copernicus Marine Service operational products in quasi-real time. This area is dominated by the poleward Brazil Current along the shelf, which meanders and generates seasonal eddy patterns that influence the upper circulation on the shelf. Underneath, equatorward Antarctic Intermediate Waters circulate below the thermocline. The objectives of this work are twofold: first, to characterize the coastal to open ocean continuum of ocean variability in this shallow area during the surveys; and second, to evaluate how well both SWOT and operational oceanography products provide a realistic representation of the high-frequency scales. The first results show the dominance of tidal dynamics over the Bank, and the influence of wind and sea-state variability. However, the Brazil Current patterns and associated mesoscale features are well captured by satellite data and models. Further work will evaluate the benefit of SWOT short-scale information to improve our understanding of mesoscale to submesoscale interactions in the area.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: A Hybrid Time/Space CNN Approach for Lagrangian Trajectory Simulation

Authors: Lorenzo Della Cioppa, Bruno Buongiorno Nardelli
Affiliations: Institute of Marine Sciences, National Research Council (ISMAR-CNR)
The analysis of Lagrangian trajectories plays a crucial role in understanding ocean dynamics. While satellite observations and numerical general circulation models (GCMs) have made significant advances, in situ Lagrangian drifters remain the most reliable source of ground-truth measurements available for ocean surface currents with almost global coverage. Satellite altimeter data, due to both technical and theoretical limitations, cannot describe all dynamical components of surface currents. In contrast, in situ drifters provide direct estimates of total currents. The simulation of Lagrangian trajectories is also critical for several operational and scientific applications, such as tracking and predicting the advection of tracers such as pollutants, debris, or small marine organisms. However, despite the widespread deployment of in situ drifters, their numbers are often insufficient to provide regular space/time sampling. Deploying drifters can be costly and time-consuming, and remote ocean regions are difficult to cover. As a result, numerical experiments using simulated trajectories are commonly employed when Lagrangian analyses are required. Standard methods to simulate Lagrangian trajectories involve integrating velocity fields. However, estimating sea surface velocities is a challenging task in itself. Velocities derived from satellite altimeter data assume geostrophic balance and, while this approximation is accurate for large-scale, steady flows, it fails to capture the complexities of evolving mesoscale and submesoscale features. Geostrophic balance is also not immediately applicable near the equator, where the Coriolis force vanishes, further limiting its accuracy. Additionally, satellite altimeter data coverage is limited in both time and space, necessitating ad-hoc methods to interpolate velocity fields onto a regular latitude-longitude grid.
On the other hand, synthetic velocity fields from high-resolution simulations depend heavily on the specific model used. Although many models assimilate real data or are forced by actual observations, their ability to simulate accurate surface velocities in the open ocean (evaluated through comparisons with real drifter velocities) is generally lower than that of satellite altimetry. Moreover, partly because of their inherently chaotic nature, no model currently simulates Lagrangian trajectories without some degree of error, even when assimilating in situ drifter data. In this presentation, we expand on existing artificial intelligence approaches and introduce a novel convolutional neural network architecture for simulating Lagrangian trajectories, taking inspiration from established computer vision algorithms. The proposed model takes as input the initial position of the trajectory and the selected Eulerian fields for the considered time frame. A common issue with AI tools in the natural sciences is that, although learning from real data, a model may fail to capture the underlying physical laws. Physically informed strategies have been proposed to overcome these problems, but they would require exact knowledge of the velocity fields, which is not available. Instead, we propose a hybrid U-net/LSTM network design that enforces causality and hence the physical meaning of the output. Furthermore, our model allows any observable tracer to contribute to trajectory generation, as long as it is defined on the same grid. In these preliminary steps, the model is trained on a synthetic dataset of trajectories and validated using independent simulations from the same dataset, integrated from Eulerian velocity fields produced by a GCM in the Mediterranean Sea. The input Eulerian fields of the model are zonal and meridional velocities, and sea surface temperature (SST).
This study is part of the Italian national ITINERIS project, Marine work package, whose goal is to integrate the different marine observation platforms and develop data-driven methods for ocean studies.
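The standard method the abstract contrasts against, integrating a trajectory through an Eulerian velocity field, can be sketched with a fourth-order Runge-Kutta step (the velocity callable and step sizes below are illustrative assumptions):

```python
import numpy as np

def advect(pos, velocity, dt, n_steps):
    """Integrate a Lagrangian trajectory dx/dt = u(x, t) with RK4.

    velocity: callable (pos, t) -> np.array([u, v]) sampling the
    Eulerian field (in practice, interpolated from a gridded product).
    """
    traj = [np.asarray(pos, dtype=float)]
    t = 0.0
    for _ in range(n_steps):
        x = traj[-1]
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(x + dt * k3, t + dt)
        traj.append(x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0)
        t += dt
    return np.array(traj)
```

Any error in the estimated velocity field propagates directly into the integrated trajectory, which is the motivation for learning the trajectory mapping end-to-end instead.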

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C-D)

Poster: Global Ocean CO2 Uptake By Long Lived Mesoscale Eddies Identified With a Synergistic Lagrangian Tracking Approach Driven By Altimeter Data

Authors: Daniel Ford, Jamie Shutler, Katy Sheen, Gavin Tilstone, Vassilis Kitidis
Affiliations: University Of Exeter, Plymouth Marine Laboratory
Mesoscale eddies are ubiquitous in the global oceans and modify the physical, biological and chemical properties of the oceans. Our understanding of their effect on the global ocean CO2 sink is largely constructed from in situ studies of single eddies or smaller regional subsets. Although these studies of isolated or individual eddies are important to understanding eddy dynamics, they currently cannot improve our understanding of seasonal, regional and global differences in the eddy modifications of synoptic-scale or global CO2 fluxes. In this study, we use a synergy of in situ and satellite observations and reanalysis data with a Lagrangian tracking approach driven by altimetry-based eddy trajectories to estimate how eddies modify the air-sea CO2 flux. Climate-quality satellite-based observations were prioritised in the approach where possible. In total, ~6000 long-lived (lifetimes greater than a year) mesoscale eddies were tracked globally between 1993 and 2022. Both anticyclonic (3244) and cyclonic (2752) eddies were assessed, and the air-sea CO2 fluxes calculated with comprehensive uncertainty estimates for each eddy. Our analysis shows that, globally, anticyclonic eddies enhance the air-sea CO2 flux by ~4.4 ± 2.5 % (2 sigma/95 % confidence propagated uncertainties), whereas cyclonic eddies appear to reduce the CO2 sink by ~0.3 ± 2.5 %. Cumulatively, the long-lived eddies drew down an additional ~91 Tg C between 1993 and 2022. Anticyclonic eddies increased uptake by ~103 Tg C, and cyclonic eddies reduced uptake by ~12 Tg C. The additional uptake by long-lived eddies equated to ~3 Tg C yr-1, or ~0.1 % of a global uptake of 2.9 Pg C yr-1. The eddies display regional characteristics in their modification of the air-sea CO2 flux. For example, anticyclonic eddies in the North and South Pacific Ocean enhanced the CO2 flux into the eddies by 6.0 ± 4.9 % (N = 718) and 6.0 ± 6.8 % (N = 606), respectively.
In the South Indian Ocean, cyclonic eddies acted to enhance the sink by 1.4 ± 3.8 % (N = 470), in contrast to the global cyclonic eddy median, which suggests the CO2 sink would be reduced. The propagated uncertainties can be used to assign confidence to these results. Although these modifications appear small, the long-lived eddies investigated in this study are only a small proportion (0.4%) of the altimetry-based eddy trajectories. Therefore, a much larger proportion of global eddies remains unaccounted for in our analysis of the global role of mesoscale eddies. Nonetheless, these results highlight the complexity of mesoscale eddies and their role in modifying air-sea CO2 fluxes. The data supporting this work provide a resource to investigate the mechanisms driving the modifications to the air-sea CO2 fluxes.
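Air-sea CO2 fluxes of this kind are conventionally computed with the bulk formulation F = k * K0 * (pCO2_sea - pCO2_air). The sketch below illustrates that formulation only; the unit choices and example values are assumptions, not the study's configuration:

```python
def air_sea_co2_flux(k_cmhr, solubility, dpco2_uatm):
    """Bulk air-sea CO2 flux F = k * K0 * (pCO2_sea - pCO2_air).

    k_cmhr:     gas transfer velocity (cm/hr)
    solubility: CO2 solubility K0 (mol m-3 atm-1)
    dpco2_uatm: sea-minus-air pCO2 difference (uatm)
    Returns the flux in mmol m-2 day-1 (negative = ocean uptake).
    """
    k_m_per_day = k_cmhr * 24.0 / 100.0          # cm/hr -> m/day
    return k_m_per_day * solubility * dpco2_uatm * 1e-6 * 1000.0
```

An eddy's modification of the flux then enters through its perturbation of the wind-dependent transfer velocity, the solubility (via SST) and the surface-water pCO2 inside the eddy contour.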

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: A.08.10 - POSTER - Coastal Ocean and Land-sea interaction

Around 10% of the world's population lives in coastal regions. These areas host a wide range of socio-economic activities, from fisheries and aquaculture to tourism and ecosystem preservation. They are, however, highly vulnerable to climate change, being particularly affected by sea level rise and erosion, and are subject to extreme events such as storm surges and floods.
They also play a crucial role in the Earth system as the interface between land and ocean, and are of fundamental importance for the fluxes of carbon, nutrients, pollutants and freshwater along the land-sea continuum.

This Session welcomes contributions on comprehensive data-driven reconstructions of coastal-region processes, based on the latest EO capabilities and on the exploitation of the growing set of sensors offering high-resolution data over coastal domains and interfaces. Although satellite EO has a prominent role, a complete representation of the overall 4D processes can only be achieved through a consistent, synergistic and multi-modal use of complementary assets such as in-situ measurements, modelling experiments and AI.

Cross-disciplinary thematic topics that are of interest for this Session are:
• Coastal ocean dynamics and sea level variability
• Extremes and Geohazards – e.g., flash floods, storm surges and coastal erosion.
• Multi-stressors – such as marine heatwaves (MHWs) and deoxygenation.
• Water Quality – including pollution and harmful algae proliferation.
• Ocean carbon and gas fluxes – with special emphasis on high-resolution carbon processes and blue carbon.
• Air-sea-land-ice fluxes/exchanges – with a characterization of exchanges of heat, momentum and freshwater, including atmospheric nutrient deposition and its influence on ocean biogeochemistry and biology.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Detection of potentially toxin-producing phytoplankton in coastal waters using machine learning and Sentinel-3 OLCI

Authors: Conor McGlinchey, Mortimer Werther, Jesus Torres Palenzuela, Luis Gonzalez Vilas, Yolanda Pazos, Maximo Frangopulos, Gemita Pizarro, Violeta Slabakova, Nina Dzembekova, Nataliya Slabakova, Matthew Blake, Andrew Tyler, Evangelos Spyrakos
Affiliations: University of Stirling, Swiss Federal Institute of Aquatic Science and Technology, University of Vigo, Technological Institute for the Control of the Marine Environment of Galicia (INTECMAR), Institute of Oceanology, Bulgarian Academy of Sciences, University of Magallanes, The Fisheries Development Institute (IFOP)
Understanding, predicting and managing Harmful Algal Blooms (HABs) in coastal waters through remote sensing is difficult due to their diverse species composition, temporal and spatial variability, and inconsistent relationships between biomass and toxicity. Currently, no single suitable classification method exists to detect potentially toxin-producing HABs across many coastal regions using satellite images. Here, we introduce a novel approach to detect potentially toxic HABs in coastal waters using machine learning and Sentinel-3 Ocean and Land Colour Instrument (OLCI) products. We compiled a large dataset covering 33 coastal regions across Europe, America, Asia, Africa, and Oceania, spanning the years 2016 to 2024. This dataset includes extensive in situ measurements of phytoplankton abundance and taxonomy, focusing on frequently toxin-producing genera such as Alexandrium spp., Pseudo-nitzschia spp., and Karenia spp. To evaluate our detection approach, we generated a large match-up dataset using three atmospheric correction models (C2RCC, POLYMER and ACOLITE) and the standard Copernicus Marine Environment Monitoring Service (CMEMS) level 3 Global Ocean Colour product from Sentinel-3 OLCI. We propose that applying machine learning techniques and combining optical, thermal, and environmental satellite-derived measurements associated with phytoplankton phenology can improve the identification of characteristics directly related to the HAB species. To enhance HAB classification performance, we developed an ensemble approach based on a meta-classifier that learns from several base classifiers. This ensemble model is expected to overcome the limitations of single classifiers when applied to global coastal water classification, thereby improving the generalisation ability of our detection approach. Furthermore, incorporating a range of satellite-derived products linked to phytoplankton phenology improves the model’s ability to generalise to unseen conditions. 
We hypothesise that integrating additional data sources will improve the detection of low to moderate-concentration blooms and mixed phytoplankton assemblages, expanding the utility of our approach to support regional monitoring activities. Our research contributes to the development of scalable, operational monitoring frameworks that can inform early-warning HAB systems, enhance ecosystem management, and support policy development. Our meta-classification approach demonstrates the potential for using Earth Observation techniques to address pressing environmental issues at the land-sea interface.
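A meta-classifier that learns from several base classifiers, as described above, corresponds to stacking. A minimal sketch on synthetic stand-in features follows (scikit-learn, with arbitrary base learners; this is not the authors' ensemble or data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for satellite-derived features (reflectances, SST,
# phenology metrics, ...) and a binary toxic/non-toxic bloom label
X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("svm", SVC(probability=True, random_state=0))]
# The meta-classifier (logistic regression) learns from the base
# classifiers' cross-validated predictions
meta = StackingClassifier(estimators=base,
                          final_estimator=LogisticRegression())
meta.fit(X_tr, y_tr)
acc = meta.score(X_te, y_te)
```

The appeal of this design is that the meta-level can compensate for regions or water types where an individual base classifier generalises poorly.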

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Towards Reliable Satellite-Based Water Quality Monitoring of Shallow Coastal Lagoons

Authors: Samuel Martin, David Doxaran, Philippe Bryère, Pierre Gernez
Affiliations: Laboratoire d'Océanographie de Villefranche (CNRS/SU), ACRI-ST, Nantes Université
Coastal lagoons are fragile ecosystems under significant anthropogenic pressures, such as aquaculture, tourism, and eutrophication. Their biogeochemical variability makes their monitoring challenging, particularly with in situ approaches, which are both costly and spatio-temporally limited. This justifies the growing interest in satellite observations for their monitoring, despite obstacles such as complex atmospheric corrections. Additionally, lagoons are often shallow systems with varying turbidity levels, resulting in a mix of optically shallow waters (OSW, where the sea bottom significantly contaminates the satellite measurement) and optically deep waters (ODW, where the bottom influence is negligible). This study aims to develop reliable satellite-based products to monitor water quality in shallow coastal lagoons, focusing on chlorophyll-a (chl-a), suspended particulate matter (SPM), and water transparency (expressed as Secchi depth, Zs). The research is centered on the Berre and Thau lagoons in the south of France, using Sentinel-2 MSI and Sentinel-3 OLCI data, radiometric measurements from a HYPERNETS autonomous radiometer (1) and field campaigns, as well as long-term biogeochemical monitoring networks. Results show that standard Sentinel-3 OLCI products perform poorly at the two sites, with errors and biases exceeding 100%, primarily due to atmospheric correction issues and sea-bottom contamination. To address these limitations, we redeveloped the satellite processing chain by:
1. Re-evaluating atmospheric correction algorithms for Sentinel-2 MSI and Sentinel-3 OLCI to identify the most effective processor(s).
2. Developing a probabilistic algorithm (OSW/ODW probability) to estimate the sea-bottom contamination for each pixel.
3. Identifying robust inversion methods for SPM and chl-a retrieval, including empirical, semi-empirical, and semi-analytical approaches.
For Sentinel-2 MSI, the Sen2Cor algorithm, combined with a supplementary correction, achieved the best performance (errors < 25% for visible bands). For Sentinel-3 OLCI, the C2RCC processor was most effective (errors ~35%). Empirical inversion algorithms consistently overestimated SPM and chl-a concentrations due to sea-bottom contamination, while the OSW Lee semi-analytical model (2) tended to underestimate them by erroneously attributing sea-bottom contributions. Integrating the Lee model with the OSW/ODW probability algorithm (Lee–OSW/ODW) significantly improved retrieval accuracy compared to other algorithms, highlighting the importance of identifying OSW areas before applying the inversion. The application of Lee–OSW/ODW satellite time series provides an innovative tool for assessing long-term ecological trends, aligning with the European Water Framework Directive (WFD). Analysis of the 2018 ecological crisis in Berre and Thau revealed new spatial insights and showed that the Berre lagoon’s transparency has improved since 2017, reflecting successful management actions. This study demonstrates the potential of advanced satellite products for reliable monitoring of coastal lagoons, offering actionable insights for sustainable management and ecological restoration. (1) See more on https://hypernets.eu/from_cms/summary (2) Lee, Z., Carder, K. L., Mobley, C. D., Steward, R. G., & Patch, J. S. (1999). Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization. Applied Optics, 38(18), 3831–3843.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Estimation of Water Clarity Based on PRISMA Hyperspectral Mission in Venice Lagoon, Italy

Authors: Congju Fu, Gian Marco Scarpa, Dr. Vittorio Ernesto Brando, Dr. Federica Braga
Affiliations: Institute of Marine Sciences-National Research Council (CNR-ISMAR), Sapienza University of Rome, Institute of Marine Sciences-National Research Council (CNR-ISMAR)
The status and variability of water quality in coastal areas are sentinels of anthropogenic activity and climate change. Water clarity is a well-established indicator of water quality and can be measured in multiple ways. Focusing on the Venice Lagoon in Italy, five metrics (Secchi disk depth (Zsd), Turbidity, colored dissolved organic matter (CDOM), total suspended matter (TSM), and chlorophyll-a (Chla)) were selected to jointly describe water clarity. A dataset of in situ measurements was collected from 2019 to 2024 (n=458), including biogeochemistry, apparent optical properties (AOPs) and physical parameters. It shows that: 1) the water quality of the Venice Lagoon is mainly oligotrophic and mesotrophic (93% of the Chla concentrations < 10 mg m-3, mean = 3.81 mg m-3, median = 1.41 mg m-3, standard deviation = 8.91 mg m-3); however, patches with hypereutrophic conditions (Chla up to 117.5 mg m-3) also exist. 2) high water-quality variability and complexity are present, ranging from clear to very turbid waters (TSM range is 0.7 - 166.6 g m-3, Zsd range is 0.1 - 9 m, CDOM range is 0.12 - 1.18 m-1, and Turbidity range is 0.35 - 151.63 FNU). 3) Zsd is dominated by Chla (r = -0.69) and TSM (r = -0.88). For Chla, nine models were applied to “PRISMA-like” remote sensing reflectance (Rrs(λ), sr-1) calculated from in situ Rrs (n = 286) using the PRISMA spectral response function, and the original parametrisations were retuned using a training dataset (n = 229). Model accuracy was assessed using a test dataset (n = 57). The results show that: 1) the NIR-red band ratio (slope = 0.52, intercept = 0.23) and neural network (NN) (slope = 0.72, intercept = 0.14) models perform best; 2) for Chla of 0 - 20 mg m-3 and > 50 mg m-3, the NN performs best, while for Chla of 20 - 50 mg m-3, the NIR-red band ratio performs best.
The NN was further applied to two PRISMA images (07/08/2023 and 11/09/2023) atmospherically corrected with ACOLITE/DSF, and a good correlation was found between the estimated and measured Chla (N = 27, slope = 0.55, intercept = 0.27, r = 0.90, RMSE = 5.87, MAE = 3.41, MSE = 34.42, bias = -1.78, MAPE = 0.78). For Zsd, the QAA-v6 algorithm was applied to the same PRISMA images, and the estimated values are consistent with synchronous in situ measurements (N = 21, slope = 0.91, intercept = -0.04, r = 0.95, RMSE = 0.52, MAE = 0.35, MSE = 0.27, bias = -0.19, MAPE = 0.22). A high negative correlation (r = -0.68) indicates that Zsd is mainly dominated by Chla in summer. Further analysis will examine the water clarity of the Venice Lagoon and how it is influenced by TSM, CDOM and Turbidity. This study demonstrates the excellent capability of the hyperspectral remote sensing satellite PRISMA for water-quality monitoring of coastal waters, and contributes to a comprehensive understanding of water-quality changes in river-sea systems.
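The match-up statistics quoted above (slope, intercept, r, RMSE, MAE, MSE, bias, MAPE) follow standard definitions. A minimal sketch of how such agreement statistics can be computed for paired in situ and satellite-derived values; the function name and output conventions are illustrative, not the authors' actual code:

```python
import numpy as np

def matchup_stats(measured, estimated):
    """Agreement statistics for paired in situ (measured) and
    satellite-derived (estimated) values, e.g. Chla or Zsd."""
    x = np.asarray(measured, dtype=float)
    y = np.asarray(estimated, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)   # linear fit: y = slope*x + intercept
    r = np.corrcoef(x, y)[0, 1]              # Pearson correlation coefficient
    diff = y - x
    mse = float(np.mean(diff ** 2))
    return {
        "N": int(x.size),
        "slope": float(slope),
        "intercept": float(intercept),
        "r": float(r),
        "RMSE": float(np.sqrt(mse)),
        "MAE": float(np.mean(np.abs(diff))),
        "MSE": mse,
        "bias": float(np.mean(diff)),              # mean of (estimated - measured)
        "MAPE": float(np.mean(np.abs(diff) / x)),  # assumes measured values > 0
    }
```

With this convention a negative bias, as reported for both Chla and Zsd, means the satellite product underestimates the in situ values on average.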
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Assessment of QuikSCAT-derived coastal winds bias and accuracy.

Authors: Giuseppe Grieco, Ad Stoffelen, Anton Verhoef, Dr. Marcos Portabella, Dr. Federico Cossu, Dr. Stefano Zecchetto, Dr. Andrea Zanchetta
Affiliations: Istituto di Scienze Marine - Consiglio Nazionale Delle Ricerche (ISMAR-CNR), Koninklijk Nederlands Meteorologisch Instituut (KNMI), Institut de Cienciès del Mar (ICM-CSIC), Istituto di Scienze Polari - Consiglio Nazionale delle Ricerche (ISP-CNR), Faculty of Intelligent Systems Engineering and Data Science - Persian Gulf University, Department of Music - Hong Kong Baptist University
Approximately 40% of the global human population lives within 100 km of the coast. Their lives are heavily affected by coastal phenomena such as land/sea breezes, hurricane landfall, katabatic winds, sea surface coastal currents, etc. Ocean Vector Wind (OVW) plays a key role in all these phenomena and is also important for renewable energy production. OVW is, in addition, an essential climate variable, so its accurate monitoring is crucial for several scientific and civil applications. Scatterometer-derived winds represent the gold standard for satellite-derived OVW. However, coastal scatterometer normalized radar cross sections (NRCS) may be contaminated by the presence of land in their footprint. This contamination may bias the retrieved winds up to 30 km off the coastline, leading to poor sampling of coastal areas. Recently, a new NRCS correction technique named “noise regularization” has been presented in application to the SeaWinds scatterometer, which flew onboard the US polar-orbiting satellite QuikSCAT, showing a dramatic improvement in coastal sampling. A preliminary comparison with a few Synthetic Aperture Radar (SAR)-derived wind maps from collocated Advanced SAR (ASAR) imagery shows good agreement. However, a systematic validation of this product is still lacking. In this study, we present two triple collocation analyses aimed at validating QuikSCAT-derived coastal winds, assessing their accuracy and determining whether any biases exist. In the first validation exercise, QuikSCAT winds are triple-collocated with European Centre for Medium-range Weather Forecasts (ECMWF) model winds and buoy measurements for the entire year 2007. In the second exercise, QuikSCAT winds are triple-collocated with ECMWF and SAR-derived winds. SAR winds are inverted from ASAR imagery using a Convolutional Neural Network (CNN) scheme on a grid with 900 m spacing.
While the triple-collocation results will be presented at the time of the conference, a preliminary Vector Root Mean Square Difference (VRMSD) analysis between collocated pairs shows values of 2.24 ms-1 in the open ocean and 2.67 ms-1 in coastal areas between ECMWF and QuikSCAT winds, indicating the expected performance degradation towards the coast. However, the VRMSD increase near the coast could also be due to the poor spatial resolution of ECMWF winds, which is well known from the literature. This aspect will be disentangled by the time of the conference. In contrast, the VRMSD between QuikSCAT and ASAR winds amounts to 3.36 ms-1 in the open ocean and 3.48 ms-1 in coastal areas. In this case, the differences between the open ocean and the coast are much reduced, although both values are much higher than in the previous case. At this stage we can speculate that QuikSCAT and ASAR winds resolve smaller scales than ECMWF winds, although the ASAR wind quality is not yet known. These aspects will also be disentangled by the time of the conference.
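The VRMSD statistic used above is the root mean square of the vector differences between two collocated wind datasets. A minimal sketch, assuming winds are expressed as zonal (u) and meridional (v) components in m/s:

```python
import numpy as np

def vrmsd(u1, v1, u2, v2):
    """Vector Root Mean Square Difference between two collocated
    wind datasets: sqrt(mean(|vec1 - vec2|^2)) over all pairs."""
    du = np.asarray(u1, dtype=float) - np.asarray(u2, dtype=float)
    dv = np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)
    return float(np.sqrt(np.mean(du ** 2 + dv ** 2)))
```

Because the squared zonal and meridional differences are summed before averaging, VRMSD penalises both speed and direction disagreements in a single number.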
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Fusion methodology for advanced high-resolution products of ocean currents in coastal Mediterranean Sea regions – FOCCUS HE project

Authors: NATHALIE Verbrugge, Jérémy Augot, Eric Greiner, Christine Boone
Affiliations: Collecte Localisation Satellites
Coastal zones are the most heavily used and impacted areas of the global ocean due to the large concentration of human populations, numerous human activities, and various anthropogenic pressures. In light of escalating climate change impacts in the coastal zone and the European Union's emphasis on a sustainable blue economy, the need for comprehensive and high-resolution coastal observations has never been more critical. The “Forecasting and Observing the Open-to-Coastal Ocean for Copernicus Users” (FOCCUS) project, funded by the European Commission, aims to enhance Copernicus Marine services for coastal zones by leveraging both the Copernicus Marine Service and Member State coastal systems. However, due to the complexity of coastal zones, achieving this requires integrating remote-sensing and in-situ data to fully understand coastal dynamics and address observational gaps. As part of the FOCCUS project, we present here preliminary results of the machine learning approach used to reconstruct 2D maps of surface velocity currents in Mediterranean coastal areas. The aim of the method is to exploit the synergy between reanalyses of oceanic and atmospheric numerical models and satellite observations from different sensors to derive hourly currents on a 5x5 km grid. Sensitivity studies will be carried out on the selected input data. In particular, the reconstruction will benefit from the novel SWOT wide-swath altimetry dataset at 2 km resolution and/or gridded sea level maps incorporating these SWOT wide-swath data (Ballarotta et al., 2024). Finally, high-frequency radars will be used for validation purposes in the region of interest. Ballarotta, M., Ubelmann, C., Bellemin-Laponnaz, V., Le Guillou, F., Meda, G., Anadon, C., Laloue, A., Delepoulle, A., Faugère, Y., Pujol, M.-I., Fablet, R., and Dibarboure, G.: Integrating wide swath altimetry data into Level-4 multi-mission maps, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2024-2345, 2024.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Coastline dynamics in the Black Sea observed from optical and SAR satellite images

Authors: Dalin Jiang, Dr Armando Marino, Maria Ionescu, Mamuka Gvilava, Zura Savaneli, Carlos Loureiro, Evangelos Spyrakos, Andrew Tyler, Adrian Stanica
Affiliations: University Of Stirling, GeoEcoMar, GIS and RS Consulting Center, Tbilisi State University, Universidade do Algarve
Coastal zones are among the most productive environments and provide essential ecosystem services to human society. The coastal areas of the Black Sea are facing multiple pressures, including sea level rise, increasing intensity of storms, changing wind and wave conditions, and widespread anthropogenic disturbances. Monitoring and understanding coastal dynamics are crucial to identifying effective interventions that can support the sustainable development of the Black Sea coastal zone. Earth observation provides an efficient way to monitor coastline changes because of its ability to cover large spatial areas with short revisit times. The two main types of space-borne satellite data that can be used to monitor coastline changes are optical images and synthetic aperture radar (SAR) images. The advantages of optical images include the ability to clearly distinguish land and water, owing to high reflectance from land and low reflectance from water, but optical images can be affected by clouds and shadows. SAR images are not affected by clouds and can be used to detect small morphological changes, but can suffer from noise introduced by waves and vegetation changes. This study proposes a novel method for detecting coastline changes by combining optical and SAR satellite images. We obtained Sentinel-1 VH SAR images and Sentinel-2 MSI optical images with a spatial resolution of 20 m in July and August between 2016 and 2023 from Google Earth Engine (GEE). For the SAR images, we selected a reference image from 2016 and subtracted it from the images of all other years to characterize the changes; the same processing was repeated using the 2023 image as reference. The differences from these two steps were then averaged to establish the difference intensity. Accretion and erosion were identified from pixels where the difference-intensity values were above the 95th percentile and below the 5th percentile, respectively.
For the optical images, we first calculated the normalised difference water index (NDWI) using top-of-atmosphere (TOA) radiance. Then, coastlines for each year were extracted from the NDWI images using a threshold determined with Otsu's method. Linear regression analysis was carried out on the extracted coastlines from different years, and the coastline change rates were determined from the regression slope. The changes detected from the SAR and optical images were compared, and only locations where both SAR and optical images showed the same change direction were treated as real coastline changes and retained for further analysis. To validate our method, we compared the extracted coastlines with in situ measured coastlines on the Romanian and Georgian coasts between 2016 and 2023; the results show a sub-pixel average difference between satellite-derived and in situ coastlines of 11.8 m, indicating very good accuracy for the coastlines extracted from the optical images. Applying the proposed method to the entire Black Sea coast revealed that coastline changes between 2016 and 2023 occurred over a total area of 35.1 km². Of this, 23.9 km² corresponds to advancement of the shoreline into the sea, resulting from combined accretion and artificial development, while 11.3 km² was lost to erosion along the Black Sea coasts. When considering the different types of changes, 55.6% of the results are natural changes due to shoreline erosion or accretion, 34.6% are artificial changes (reclamation and construction along the coast), and 9.8% are noise caused by boat/ship movement or land cover changes on adjacent coastal land. Natural changes were observed mainly in deltaic and estuarine systems. Artificial changes were found mainly along the southern Black Sea coast, where construction of infrastructure such as airports, ports, harbours and jetties has been observed.
This study is supported by the European Union Horizon 2020 DOORS project (Developing Optimal and Open Research Support for the Black Sea, No 101000518).
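The optical processing steps described above, NDWI computation followed by Otsu thresholding, can be sketched as follows. This is an illustrative NumPy-only version, not the authors' GEE implementation; the Otsu routine is a minimal re-implementation of the standard method:

```python
import numpy as np

def ndwi(green, nir):
    """Normalised difference water index: (Green - NIR) / (Green + NIR).
    Water pixels have high NDWI; land pixels have low NDWI."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir)

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the histogram split that maximises the
    between-class variance of the two resulting classes."""
    hist, edges = np.histogram(np.ravel(values), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist.astype(float)
    total, grand_sum = w.sum(), (w * centers).sum()
    w0 = np.cumsum(w)                       # class-0 weight per candidate split
    w1 = total - w0                         # class-1 weight
    sum0 = np.cumsum(w * centers)
    with np.errstate(divide="ignore", invalid="ignore"):
        m0 = sum0 / w0                      # class means
        m1 = (grand_sum - sum0) / w1
        between = w0 * w1 * (m0 - m1) ** 2  # between-class variance
    between[~np.isfinite(between)] = -1.0
    return float(centers[int(np.argmax(between))])

def water_mask(green, nir):
    """Water mask: pixels whose NDWI exceeds the Otsu threshold."""
    index = ndwi(green, nir)
    return index > otsu_threshold(index)
```

The land/water boundary of the resulting mask is then the candidate coastline for each year's image.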
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Satellite Observations of Long-term Water Quality Properties using Improved Algorithms in the Chesapeake Bay

Authors: Seunghyun Son, Dr Nikolay Nezlin, Dr Salem Salem, Dr Merrie Beth
Affiliations: University of Maryland, NOAA/NESDIS/STAR
Satellite remote sensing is a valuable tool for monitoring water quality properties, such as chlorophyll-a and water clarity, in coastal waters at high spatial and temporal resolution. In turbid coastal waters such as the Chesapeake Bay, the largest estuary in the U.S., water quality is strongly influenced by a complex mixture of chlorophyll concentration, colored dissolved organic matter (CDOM), and total suspended sediments (TSS). Accurate measurement of water quality parameters in turbid coastal waters is essential for understanding coastal ecosystem health and its interaction with extreme events, such as hurricanes and flooding. User needs helped prioritize this work for the Committee on Earth Observation Satellites Coastal Observations, Applications, Services, and Tools Virtual Constellation (CEOS COAST-VC). This study utilizes two satellite ocean color datasets, from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Aqua (2002-2022) and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (SNPP) (2012-2023). In support of CEOS COAST-VC users, two water quality products were derived for the Chesapeake Bay pilot region: chlorophyll-a using an Extra-Trees machine learning model and water clarity (Secchi disk depth) using a semi-analytical model. The satellite-derived water quality products are analyzed to assess seasonal and interannual variability in water quality and to investigate water quality responses to significant river discharges of dissolved and particulate materials, as well as to extreme hurricane events that induce strong vertical mixing and storm surges. Improved coastal satellite-derived data products, co-designed with COAST-VC pilot region stakeholders, are available on the CEOS COAST-VC Application Knowledge Hub (https://www.star.nesdis.noaa.gov/socd/coast/).
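The abstract does not detail the Extra-Trees configuration; the following is a minimal sketch of the kind of tree-ensemble regression involved, using scikit-learn's ExtraTreesRegressor. The band set, the band-ratio relationship, and all data below are illustrative assumptions, not the study's actual model or match-up data:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(42)

# Synthetic "match-up" data: remote-sensing reflectance (Rrs) in four
# hypothetical bands, with chlorophyll-a tied to a band ratio plus noise.
rrs = rng.uniform(0.001, 0.02, size=(300, 4))
chla = 10.0 * rrs[:, 3] / rrs[:, 1] + rng.normal(0.0, 0.1, size=300)

# Train on one split and predict on a held-out split, as one would with
# satellite match-up datasets like those described in the abstract.
model = ExtraTreesRegressor(n_estimators=200, random_state=0)
model.fit(rrs[:240], chla[:240])
pred = model.predict(rrs[240:])
```

Extra-Trees differs from a random forest mainly in that split thresholds are drawn at random rather than optimised, which reduces variance and is robust for noisy bio-optical relationships.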
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Temporal and Spatial Variability of the Romanian Shoreline Over Four Decades

Authors: Ionuț Șandric, Andrei Toma, Bogdan-Andrei Mihai, Albert Scrieciu, Adrian Stănică
Affiliations: Doctoral School of Geography ‘Simion Mehedinți’, Faculty of Geography, University of Bucharest, Bucharest, Romania, Faculty of Geography, University of Bucharest, National Institute of Marine Geology and Geoecology (GeoEcoMar)
Shorelines serve as dynamic and ever-changing interfaces between terrestrial and aquatic environments, constantly reshaped by a complex interplay of natural and anthropogenic forces. These changes, which influence coastal morphodynamics, require regular and precise monitoring. The high-frequency revisit capabilities of the Landsat and Sentinel-2 constellations have revolutionized the ability to conduct detailed and continuous analyses of shoreline dynamics, offering new opportunities to monitor and understand coastal changes with greater precision and temporal resolution. This study investigates the temporal and spatial variability of the Romanian shoreline over the past four decades, using satellite remote sensing data combined with field measurements to characterize the dynamic behavior of this region. The Romanian coastline exhibits marked contrasts between its northern and southern segments, each shaped by distinct natural and anthropogenic influences. The northern coastline, dominated by the Danube Delta, reflects changes driven predominantly by natural processes such as sediment transport, deposition, overwash and erosion, while also being indirectly affected by human activities, including sediment disruption by river dams. In contrast, the southern coastline, heavily influenced by urbanization, the development of ports, coastal barriers, and tourist infrastructure, demonstrates more frequent and dramatic alterations, largely attributable to direct human intervention. To analyze these changes, the open-source CoastSat toolbox was employed to extract the position of the instantaneous waterline from satellite imagery. This Python-based tool uses multispectral data from Landsat (5 to 9) and Sentinel-2, relying on Google Earth Engine’s API to crop and download the images, while the classifications were carried out on local hardware.
The Modified Normalized Difference Water Index (MNDWI, the normalized difference between the Green and SWIR1 bands) and a sub-pixel border segmentation method (Marching Squares) were used to map the sand/water interface with high precision. Statistical analyses were complemented by field validation using GNSS measurements at multiple sites, ensuring the accuracy and reliability of the shoreline extraction methodology. Highly dynamic areas are present in both the northern and southern segments of the Romanian coastline, each showcasing distinct processes of deposition and erosion. In the northern region, significant deposition occurs in the Danube Delta, particularly around the Chilia branch and its outlets (Ukraine), where more than 300 meters of sediment accumulation has been observed in some areas. Conversely, severe erosion is concentrated along the coastline stretching from the mouth of the Sulina branch to the mouth of the Saint George branch, highlighting the contrasting dynamics within this region. In the southern region, the most representative example of dynamic coastal change is Mamaia Beach. Prior to the 2023 beach enlargement project, the area experienced severe erosion, with up to 100 meters of shoreline lost in certain locations. Following the intervention, however, the coastline saw significant expansion, with as much as 300 meters of beach width added artificially. The results illustrate the benefits of integrating remote sensing and field data for accurate, long-term monitoring of coastal dynamics, providing a better picture of the interplay between natural and human-induced processes that shape coastal evolution. Acknowledgments: This research was supported by Project 101112736, Restore4Life, Restoration of wetland complexes as life supporting systems in the Danube Basin.
The GNSS in situ measurements were performed within project PN23300301, which supports the development of an innovative management system for the dynamics of the Romanian Black Sea coastline through the integration of direct measurements, numerical modeling, and remote sensing to create a "Digital Twin" of Romania's Coastal Zone.
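The sub-pixel Marching Squares step described above can be sketched with scikit-image, whose measure.find_contours function implements marching squares with linear interpolation between pixels. This is a simplified stand-in for the CoastSat workflow; the zero threshold on MNDWI and the synthetic scene are illustrative choices:

```python
import numpy as np
from skimage import measure

def subpixel_shorelines(mndwi, level=0.0):
    """Extract the sand/water interface from an MNDWI image as
    sub-pixel contours at the given level, using Marching Squares.
    Returns a list of (N, 2) arrays of (row, col) coordinates."""
    return measure.find_contours(np.asarray(mndwi, dtype=float), level)

# Synthetic scene: land (MNDWI = -0.4) on the left, water (+0.4) on the
# right; the shoreline is recovered as a vertical contour at the
# interpolated zero crossing between columns 4 and 5 (col = 4.5).
image = np.full((10, 10), -0.4)
image[:, 5:] = 0.4
contours = subpixel_shorelines(image)
```

CoastSat itself refines the waterline further using a classified image and locally tuned thresholds; the example only illustrates how marching squares yields sub-pixel coordinates from a gridded index.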
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Towards an operational and multisource Coastal Observatory at national scale: Enhancing Coastal Dynamics Monitoring through Super-Resolution, Bathymetric Models, shoreline monitoring and LULC mapping.

Authors: Javier Becerra, Yoel Gerarado Teubel, Patricia Ruiz-Iñigo, Vicente Negro, Luis Juan Moreno-Blasco, Ana Patricia García Fletcher, Aurelio García-Rochera
Affiliations: 1. CENTRO DE OBSERVACIÓN Y TELEDETECCIÓN ESPACIAL SAU, 2. DIRECTORATE GENERAL OF COASTS (MITECO: MINISTRY FOR ECOLOGICAL TRANSITION AND THE DEMOGRAPHIC CHALLENGE), 3. ECOREL UPM (POLYTECHNIC UNIVERSITY OF MADRID)
Efficient monitoring and management of coastal dynamics are critical to mitigate the impacts of climate change and ensure the sustainable use of marine-terrestrial ecosystems. This study introduces an innovative methodology leveraging Earth Observation (EO) data to enhance the precision and reliability of coastal monitoring. Key innovations include the application of super-resolution techniques to Sentinel-2 imagery, the development of advanced algorithms for shoreline detection, state-of-the-art bathymetric modeling, and detailed land cover mapping. Using Generative Adversarial Networks (GANs), Sentinel-2 imagery is enhanced from 10 meters to 2.5 meters per pixel, significantly improving the accuracy of shoreline mapping. This super-resolution approach addresses challenges such as misclassification of foam zones and achieves superior differentiation of the water-sand interface. Additionally, the implementation of automatic shoreline detection algorithms ensures consistency across temporal and spatial scales, providing high-quality data essential for coastal management. The study also advances bathymetric modeling by integrating pseudo-bathymetry derived from Sentinel-2 band reflectances. An innovative post-processing methodology applied in this study consists of analysing the limitations of the algorithm itself, allowing us to determine the depths at which the algorithm is reliable. This insight improves the precision of shallow-water depth estimation and enhances the operational use of bathymetric models. Another essential component of the project is the generation of comprehensive Land Cover (LC) mapping, differentiating diverse classes such as Artificial Surfaces, Low Vegetation, Bare Soil, Beaches, Sand Dunes, Forests, Watercourses, Lakes, Lagoons, Reservoirs, Oceans, Shrublands, and Woody Crops, among others.
This enriched classification supports integrated coastal zone management by providing detailed insights into land use patterns, ecosystem interactions and their evolution through the years, showing how changes on the coast affect the whole ecosystem. The use of artificial intelligence models on Sentinel-2 time series enables a fine LULC dataset every five days, at a finer resolution than other CLMS products such as Coastal Zones. Developed in collaboration with the Directorate General of Coasts (MITECO: Ministry for the Ecological Transition and the Demographic Challenge) and coastal experts from UPM (Polytechnic University of Madrid), this project utilizes multitemporal super-resolved Sentinel-2 imagery combined with in situ validation campaigns and API services from the State Ports portal (Ministry of Transport and Sustainable Mobility), which provides real-time and historic data on temperature, pressure and several other physical and oceanographic variables. The integration of these data sources ensures comprehensive coverage of coastal dynamics, including shoreline evolution, sediment transport, and underwater topography. Preliminary results demonstrate substantial improvements in both accuracy and operational feasibility, with error rates significantly reduced compared to traditional methods. The scalable nature of this methodology offers potential for replication in diverse coastal regions worldwide, addressing vulnerabilities linked to rising sea levels and extreme weather events. This project underscores the importance of combining EO data with advanced AI methodologies for policymakers and coastal managers such as MITECO or the Spanish Ministry of Defence. By bridging gaps in data quality and extending the operational capacity of existing satellite missions, the proposed approach represents a significant step forward in EO-based coastal monitoring and contributes to global efforts in climate adaptation and resilience building.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Building an Inventory of the Marine Biogeochemical Responses to Wildfires Aerosol Deposition

Authors: Joan Llort
Affiliations: Barcelona Supercomputing Centre
Wildfire season length and intensity have increased dramatically in extratropical regions such as the boreal forests, the Mediterranean, and the Californian coast. The development of these large wildfires (i.e., megafires) is accompanied by the emission of large amounts of carbon and pyrogenic aerosols that can travel long distances in the atmosphere. Modelling work during the last decade has shown that this land-ocean pathway is an important source of soluble iron and other nutrients to remote oceanic regions. This mechanism could alleviate phytoplankton (micro)nutrient limitation, boost marine primary production and carbon export, and potentially compensate for the carbon emitted by wildfires. In line with these results, several observational studies have reported the occurrence of exceptional phytoplankton blooms associated with the deposition of pyrogenic aerosols in regions such as the Southern Ocean or the Arctic Ocean. Other studies found no apparent fertilisation effect after deposition but reported anomalous changes in the phytoplankton community structure or an increase in the seasonal accumulation peak. Learning from the much more developed study of the impact of dust on phytoplankton, the absence of a response is also a plausible outcome, although less well documented due to publication biases. Most studies reporting the influence of pyrogenic aerosols on marine biogeochemistry rely heavily on remotely sensed data. These datasets are an excellent tool to capture, quantify and correlate anomalous responses in aerosol and phytoplankton concentration. However, they cannot provide the variables that prove causation, such as aerosol chemical composition or deposition fluxes. This is why remotely sensed data must always be accompanied by in-situ measurements, aerosol sampling and modelling studies that build a sound body of evidence.
These complementary approaches are also essential to evaluate the impacts on bacteria, ligand concentration or carbon sequestration, all relevant questions that cannot be addressed remotely. In this study, we have thoroughly scanned the scientific literature exploring the links between pyrogenic aerosols and marine biogeochemistry. We have created an inventory of all the observed responses based on satellites, in-situ measurements and lab experiments. When possible, we have considered the oceanic preconditioning of the response, the aerosols' chemical composition and the known sources on land. Putting these pieces of evidence together allows us to map our current understanding of the influence of pyrogenic aerosols, a phenomenon of increasing importance given the observed increase in extratropical wildfires and its influence on the global carbon cycle.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: SAR Imprints of Submesoscale and Mesoscale Eddies in the Western Mediterranean Sea: Visual Observations vs. Automated Detection

Authors: Martin Gade, Zhikun Huang, Xiaoming Li, Nannan Zi
Affiliations: Universität Hamburg, Aerospace Information Research Institute CAS
Oceanic eddies are self-sustaining rotary currents with diameters ranging from several hundred metres to several hundred kilometres. They can be found worldwide in the upper ocean, can travel over long distances before dissipating, and play a key role in horizontal and vertical mixing of the upper ocean. Eddies with diameters smaller than the first baroclinic Rossby radius of deformation are traditionally referred to as submesoscale eddies, while mesoscale eddies have diameters larger than the Rossby radius. Global and regional studies of submesoscale and mesoscale oceanic eddies require a sufficient amount of remote sensing data, whose resolution must be high enough to resolve oceanic features at all scales of interest. Altimeter data have been used for the detection and tracking of mesoscale eddies worldwide; however, studies on finer scales require remote sensing data of higher spatial resolution. Here, synthetic aperture radar (SAR) is a sensor of choice, because of its high spatial resolution and because its operation is independent of daylight and weather conditions. This study makes use of the fact that submesoscale oceanic eddies may show up on SAR imagery as dark or bright spirals and can therefore be statistically analysed if a sufficient number of SAR images is available. Almost 7200 Sentinel-1 SAR-C images of the Western Mediterranean Sea acquired between October 2014 and September 2017 form the basis of the current research, in which submesoscale eddies were detected on SAR images following two different approaches: (1) visual inspection by a human operator and (2) an automated detection system named EOLO. We compare the results of the two approaches and highlight the cases in which the human operator and the automated system produced different results. In general, the automated system was more successful in detecting submesoscale eddies on SAR imagery, whereas the human operator found more mesoscale (i.e., larger) eddies.
We attribute these findings to the fact that, on the one hand, the automated system was less often confused by other SAR image patterns (interfering with the eddies’ imprints) and, on the other hand, the human operator could more easily extrapolate imprints of eddies larger than the size of the SAR image. In total, about 15,300 eddies were found by the automated system, of which 91.0% were submesoscale eddies, while the human visual inspection yielded only about 14,300 eddies, of which 87.4% were submesoscale eddies. The size distributions for all eddies look very similar, regardless of the approach used. Moreover, the seasonal variation of the eddies’ spatial distribution in the Western Mediterranean Sea follows the same pattern, with the highest densities found in autumn (Oct – Dec) and the lowest in spring (Apr – Jun). In summary, the EOLO system has shown very promising results, and it will be applied to more Sentinel-1 SAR-C data (eventually covering the satellites’ entire lifetime), thereby offering closer insights into both the generation and the SAR imaging mechanisms of submesoscale and mesoscale oceanic eddies.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: From the airborne Delta-X mission in the Mississippi River Delta to Deltas Worldwide with SWOT and NISAR.

Authors: Marc Simard
Affiliations: NASA-Jet Propulsion Laboratory
The NASA Delta-X airborne mission was designed to investigate the complex hydrodynamic processes occurring within the Mississippi River Delta (MRD), a critical coastal wetland region in the United States. The MRD is characterized by its extensive network of distributary channels, wetlands, and floodplains, which play a significant role in carbon sequestration, sediment transport, and ecosystem support. However, the delta is facing severe threats from sea-level rise, subsidence, and reduced sediment supply, making it a high-priority area for scientific study. The Delta-X mission utilized advanced airborne remote sensing technologies, including AirSWOT, UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar), and AVIRIS-NG (Airborne Visible/Infrared Imaging Spectrometer - Next Generation), to gather detailed measurements of water surface elevation, vegetation structure, and sediment distribution. These data were crucial for understanding the hydrodynamic and ecological processes that control water flow, sediment transport, and nutrient cycling in this vital region. Airborne instruments are particularly well suited to capturing rapid processes such as tides, by repeating measurements of flows several times during the tidal cycle. The AirSWOT instrument, a high-resolution Ka-band interferometric radar, was used to measure water surface elevation (WSE) and water surface slopes across the deltaic landscape. The ability to capture fine-scale variations in WSE allowed researchers to assess water flow patterns, detect small changes in water levels, and understand how water moves through the delta’s complex network of channels and wetlands. AirSWOT’s measurements were particularly useful for validating hydrodynamic models over open water surfaces—channels, rivers and lagoons—during different tidal conditions.
UAVSAR, another key instrument in the Delta-X mission, provided L-band radar data that is highly effective in penetrating vegetation cover, making it ideal for monitoring inundation extent and water level changes occurring when tides overflow from channels into wetlands. These two radar sensors complement each other by observing the flow connectivity across the distributary channels and wetlands. AVIRIS-NG, an imaging spectrometer, complemented these radar measurements by capturing detailed information on vegetation structure and suspended sediments in the water. The integration of data from AirSWOT, UAVSAR, and AVIRIS-NG enabled a comprehensive assessment of the MRD’s hydrodynamic and ecological processes, providing a clear picture of how water, sediments, and nutrients move through the delta. These data were used to calibrate and validate hydrodynamic and ecological models to understand the vulnerability of the MRD wetlands. While the Delta-X mission focused on the Mississippi River Delta, the approaches and technologies used can be applied to study other river deltas around the world using satellite missions like SWOT (Surface Water and Ocean Topography), NISAR (NASA-ISRO Synthetic Aperture Radar), and the European Sentinel-2 mission. These satellites offer unique capabilities that can extend the insights gained from Delta-X to a global scale. The SWOT mission provides global measurements of WSE, water surface slope, and discharge with unprecedented spatial resolution. SWOT’s Ka-band Radar Interferometer (KaRIn) is similar to AirSWOT but operates from space, allowing it to cover much larger areas and capture data from remote deltas that are difficult to access with airborne sensors. By using SWOT data, scientists can study the hydrodynamic processes in river deltas and estuaries worldwide where riverine and tidal interactions play critical roles in sediment transport and delta stability.
SWOT’s ability to measure water surface elevation with a spatial resolution of 100 meters or better, and its global reach, provide new insights into how water levels fluctuate in response to tides, river discharge, and storm surges. NISAR, launching in 2025, offers L-band radar measurements that are complementary to SWOT’s high-resolution water surface data. NISAR’s radar capabilities are particularly effective for monitoring inundation, water level changes and vegetation dynamics. To demonstrate this capability, we used ALOS/PALSAR and ALOS-2/PALSAR-2 data to measure water level changes in coastal wetlands and used longer time-series of Sentinel-1 to monitor flood extent and changes. NISAR will enable measurement of these changes globally every 12 days, allowing for continuous monitoring of river deltas, capturing short-term changes in hydrology, such as flooding events. The impact of tidal cycles will be obtained through time-series analysis of tidal components. By integrating NISAR and SWOT measurements of water surface elevation, we will gain a deeper understanding of the complex interactions between hydrology and vegetation in deltaic systems globally. The European Space Agency’s Sentinel-2 mission provides high-resolution optical imagery that can complement the radar measurements from SWOT and NISAR. Similar to the use of AVIRIS-NG, Sentinel-2’s multispectral data can be used to monitor vegetation health, sediment plumes, and water quality parameters. By combining Sentinel-2’s optical data with the hydrodynamic measurements from SWOT and NISAR, scientists can create integrated models that link water flow patterns with ecological responses in river deltas. The Delta-X mission has demonstrated the power of integrating airborne radar and hyperspectral sensors to study the complex hydrodynamics of the Mississippi River Delta.
By expanding these approaches to a global scale using satellite missions like SWOT, NISAR, and Sentinel-2, we can gain a better understanding of river deltas worldwide, especially those that are difficult to access. These observations, along with modeling efforts, will improve our ability to monitor and model the processes that sustain river deltas, helping to protect these vital ecosystems in the face of climate change and rising sea levels.
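As a small illustration of the WSE analysis described above, the along-channel water-surface slope can be estimated by a least-squares line fit; the distances and elevations below are synthetic, not Delta-X measurements.

```python
import numpy as np

# Hedged sketch: estimate water-surface slope from AirSWOT-like WSE samples
# along a channel via a linear least-squares fit (all values synthetic).
def water_surface_slope(distance_m, wse_m):
    """Return the slope (m of elevation per m of distance) of a linear WSE fit."""
    slope, _intercept = np.polyfit(distance_m, wse_m, 1)
    return slope

x = np.array([0.0, 1000.0, 2000.0, 3000.0])   # along-channel distance (m)
h = np.array([1.20, 1.15, 1.10, 1.05])        # WSE dropping seaward (m)
s = water_surface_slope(x, h)                  # about -5e-5 m/m
```

Repeating this fit for each tidal phase is one simple way to quantify how slope reverses between ebb and flood.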

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: A Satellite Perspective on Estuarine Dynamics: An Optical Water Types Approach

Authors: Giulia Sent, Dr Elizabeth Atwood, Dr Ana Brito, Evangelos Spyrakos, Dr Thomas Jackson, Dr Vanda Brotas, Dr Steve Groom
Affiliations: MARE-ULisboa, Plymouth Marine Laboratory, University of Stirling, Eumetsat
Estuaries, located in the transition zone between the freshwater from rivers and seawater from the ocean, are among the most productive and dynamic aquatic ecosystems in the world. At the same time, the anthropogenic pressures on these regions are growing, and the understanding and management of these vital ecosystems have now become a societal and regulatory priority. The highly dynamic nature and the strong spatial gradients that characterize transitional waters pose significant limitations on monitoring efforts, which are frequently based on conventional field survey methods. In this context, satellite remote sensing has been demonstrated to be a useful tool to monitor waterbodies from space over extensive spatial and temporal scales. Nevertheless, optical remote sensing in transitional waters has been far less successful than in other areas. The high optical complexity and dynamic range of water parameters (e.g. chlorophyll, suspended sediments and dissolved organic matter) frequently challenge the applicability and robustness of algorithms used to quantitatively retrieve water quality parameters from space. Hence, partitioning water masses into Optical Water Types (OWT) is now an established approach to differentiate waters with different properties. OWT classification can be used to calibrate and apply the most adequate algorithm for a specific water type, or it can be employed to analyze trends and patterns of water masses with distinct spectral signatures in a water system. In this study we aim to use an OWT classification as a proxy to investigate water-mass dynamics in a tide-dominated system (Tagus estuary, Portugal). We analyzed an 8-year dataset (2017-2024) of Sentinel-2 images, categorizing each image per tidal condition (high tide, low tide, three phases of flood and three phases of ebb). The spatiotemporal variability of dominant OWTs was then examined across these tidal conditions. The OWT cluster set developed in Atwood et al.
(2024) is employed in this work, as it has been trained on Sentinel-2 data from 6 different transitional water systems, including the Tagus estuary. Additionally, in-situ continuous measurements provided ecological context for the identified OWT classes. Our results show that OWT analysis captures significant aspects of the Tagus estuary's dynamics, with distinct OWTs dominating under different tidal conditions. For instance, low tide images reveal higher OWT classes, indicative of increased turbidity and riverine inputs, while a spatial gradient is evident, with turbidity decreasing toward the ocean. The shifting location of the Turbidity Maximum Zone under different tidal conditions is also visible from the OWT analyses. Some OWTs appear only under specific conditions; for example, OWT 7 (high chlorophyll content) occurs exclusively during high tide and ebb phases near the estuary inlet. These preliminary findings underscore the potential of OWT classification as an effective indicator for understanding the current state, trends and patterns of water quality in highly dynamic systems, providing a synoptic picture of the estuarine system state under different environmental conditions. Nonetheless, limitations arise from the Sun-synchronous orbit of Sentinel-2, which restricts observation of certain tidal conditions. For example, low tide is always observed during spring tides, while high tide coincides with neap tides. Consequently, some spring/neap and semidiurnal phase combinations remain unobservable, highlighting observational constraints in satellite-based studies of tide-dominated systems.
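The per-pixel OWT assignment step can be illustrated with a nearest-cluster rule; the two cluster means below are invented stand-ins, not the Atwood et al. (2024) cluster set, and operational schemes may use other similarity measures.

```python
import numpy as np

# Hedged sketch: assign a pixel spectrum to the optical water type (OWT)
# whose mean spectrum is closest in Euclidean distance.
def assign_owt(spectrum, cluster_means):
    distances = np.linalg.norm(cluster_means - spectrum, axis=1)
    return int(np.argmin(distances))

cluster_means = np.array([
    [0.020, 0.030, 0.010],   # hypothetical "clear water" OWT
    [0.050, 0.080, 0.070],   # hypothetical "turbid water" OWT
])
pixel = np.array([0.048, 0.075, 0.065])
owt = assign_owt(pixel, cluster_means)   # closest to the turbid class
```

Mapping every pixel of every scene this way, then grouping scenes by tidal phase, gives the kind of OWT-frequency maps discussed above.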

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Satellite-derived bathymetry in the Arctic: To what extent can we monitor?

Authors: Isabel Caballero, Richard P. Stumpf
Affiliations: Spanish National Research Council, NOAA
Satellite-derived bathymetry (SDB) emerges as a pivotal dataset for generating bathymetric maps crucial to understanding the hazards and impacts resulting from climate change. While SDB proves effective in clear water, its applicability to turbid areas remains notably limited. In the quest for routine SDB mapping, we explore the contribution of the Sentinel-2 mission in the Arctic. The proposed methodology incorporates a multi-scene compositing method to derive the maximum detectable depth, thus eliminating optically deep-water regions due to severe turbidity. Two study sites in Alaska, Nunivak Island and Kuskokwim Bay, are examined with the integration of turbidity information through a multi-temporal model. These regions are characterized by variable and complex water-quality patterns over time and space, where water masses accumulate suspended sediments due to their origin from glaciers. The semi-automated approach highlights the potential and restrictions for comprehensive operational SDB across these environments. Over Nunivak Island, results demonstrate its effectiveness with median errors <1.9 m and bias of -0.77 m for depths ranging from 0 to 10 m when validated against multibeam surveys. Furthermore, the study addresses the challenges posed in Kuskokwim Bay, where the application of SDB is not feasible due to the extreme turbidity levels. The findings underscore the capability to deliver valuable bathymetric information, with precise information on when and where SDB can perform depending on the conditions. This efficient technique holds promise for rapid and repeated application across a wider range of environments, helping to address various challenges in Arctic research, management, and navigation.
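A common band-ratio form of SDB is the log-ratio model of Stumpf et al. (2003); the sketch below uses that general form with illustrative, uncalibrated coefficients, not values from this study.

```python
import numpy as np

# Hedged sketch of the log-ratio SDB model: depth is a linear function of
# ln(n * Rw_blue) / ln(n * Rw_green). The tuning constants m1, m0 and n must
# be calibrated against in-situ depths; the defaults below are placeholders.
def sdb_depth(rw_blue, rw_green, m1=30.0, m0=28.0, n=1000.0):
    ratio = np.log(n * rw_blue) / np.log(n * rw_green)
    return m1 * ratio - m0

z = sdb_depth(rw_blue=0.012, rw_green=0.010)   # single-pixel example, metres
```

The ratio form is attractive in variable-turbidity water because both bands are attenuated similarly by the water column, which is part of what makes multi-scene compositing viable.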

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Automated Monitoring of Anthropogenic Coastal Changes Using Sentinel-2 Imagery and Machine Learning Techniques

Authors: doc.dr.sc. Ivan Tomljenović, Branimir Radun, dr.sc. Ivan Tekić, Ivona Žiža, dr.sc. Vladimir Kušan, Bruno Ćaleta, Luka Raspović
Affiliations: Oikon Ltd - Institute Of Applied Ecology
Coastal ecosystems, among the most dynamic and vulnerable environments, are increasingly subjected to anthropogenic pressures such as illegal shoreline grinding, gravel embanking, dumping of construction materials, and unauthorized pier construction. Monitoring these activities effectively requires timely, accurate, and scalable methodologies. This study introduced an innovative approach combining Sentinel-2 satellite imagery and machine learning techniques to detect and quantify human-induced coastal area changes, with a minimum detectable area of 400 m². Our research focused on integrating Object-Based Image Analysis (OBIA) with Random Forest (RF) classifiers for high-accuracy land-cover transformation monitoring. The methodology includes multi-resolution segmentation, spectral index calculation (e.g., NDWI, NDBI), and supervised classification of predefined categories, including vegetation, built-up areas, and water bodies. A 100-meter buffer zone around extracted shoreline areas was additionally analyzed to isolate significant changes in land cover in the coastal region. This approach was tested in three diverse coastal areas along the Croatian Adriatic, achieving classification accuracies exceeding 80%, with robust detection of anthropogenic activities. Initial market assessments revealed a significant demand for automated coastal monitoring solutions, with stakeholders including public institutions, environmental agencies, and regional governance bodies expressing interest. The study demonstrated that most entities lack functional systems for real-time coastal monitoring and would benefit from implementing this methodology. The results confirm the feasibility and reliability of the proposed system for operational monitoring of coastal environments. 
By leveraging Sentinel-2’s multispectral imagery and Random Forest’s classification efficiency, the method addresses the limitations of traditional manual monitoring systems, offering a scalable and cost-effective alternative. Future improvements should incorporate neural networks for enhanced classification accuracy and Sentinel-1 imagery for cloud-independent analysis. This research aligns with ESA’s mission to advance Earth observation and promote sustainable resource management. It sets a foundation for a web-based monitoring service that could be extended to diverse coastal regions globally, ensuring timely interventions to mitigate environmental degradation.
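The spectral indices named above are straightforward to compute; this sketch pairs them with a simple threshold rule standing in for the trained Random Forest classifier (band values and thresholds are illustrative, not from the study).

```python
# Hedged sketch: NDWI and NDBI from Sentinel-2-like band reflectances, with a
# toy threshold rule in place of the OBIA + Random Forest workflow.
def ndwi(green, nir):
    """Normalized Difference Water Index: high over open water."""
    return (green - nir) / (green + nir)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: high over built-up surfaces."""
    return (swir - nir) / (swir + nir)

def simple_label(green, nir, swir):
    # Stand-in decision rule; a real classifier would learn these boundaries.
    if ndwi(green, nir) > 0.3:
        return "water"
    if ndbi(swir, nir) > 0.0:
        return "built-up"
    return "other"

label = simple_label(green=0.08, nir=0.02, swir=0.01)   # strongly positive NDWI
```

In the described workflow, such index values are computed per segment (not per pixel) and fed to the Random Forest as features.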

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Resolving near-coastal remote sensing signal into contributions by bottom, water column, glint and the adjacency effect

Authors: Martin Ligi, Tiit Kutser, Tuuli Soomets
Affiliations: University Of Tartu Estonian Marine Institute
All waterbodies are changing in the variable climate conditions. Monitoring of these changes over large areas is possible only by using remote sensing. However, remote sensing of lakes is hampered by the nearby land, as part of the signal measured above the waterbodies originates from the nearby land, not from the water itself. This problem, called the adjacency effect, can be detected up to kilometres from the shore, which, for the majority of lakes on Earth, means that every water pixel is affected by the nearby land. Moreover, the signal measured near the shores may contain effects from the bottom, emerging vegetation, and sun and sky glint. In the case of marine remote sensing these problematic areas are usually excluded from the analyses as they are too complicated to resolve. This is not an option in lake remote sensing, as just a few tens of lakes (out of 117 million) are large enough to contain pixels free from the adjacency and other coastal effects. The monitoring of near-coastal waters is crucial, because most biological processes take place in these areas, which are currently masked out. This limits the use of remote sensing in environmental research and monitoring. The main reason there are no good solutions to correct for these issues is the lack of data with sufficient auxiliary information. Our aim was to study the very nearshore waters in order to collect data that enables us to resolve the contributions of the adjacency effect, lake bottom, sun and sky glint and the water column itself, and to test and develop algorithms for removing the adjacency effect and glint from Sentinel-2 imagery. To collect field data from the necessary conditions we used an unmanned surface vehicle equipped with radiometers, fluorometers, sonar, and underwater and in-air video cameras.
This package allows us to make high frequency reflectance measurements almost from the shore (from 20 cm water depth) to open parts of lakes and assess the contributions of bottom, water column, glint and the adjacency effect on the water reflectance. The setup enables us to conduct both remote sensing reflectance (with glint) and sky-blocked (glint free) measurements consecutively. All the parameters and auxiliary data measured together enable us to divide the transects into sections where we know that we have strong glint and bottom issues and where we don’t. The contribution of the adjacency effect to the remote sensing signal depends on the contrast between the (dark) water and (bright) land. Therefore, we have carried out experiments in lakes with different optical water properties (clear, eutrophic, high-absorbing) and coastal areas (agricultural, urban, natural forest, mountains) during different seasons: right after the ice cover, spring period, summer, and autumn. For example, in dark CDOM-rich lakes there is almost no water-leaving signal, and most of the remote sensing signal in the visible part of the spectrum is glint and the adjacency effect. In shallow waters with a bright bottom, or in lakes with massive phytoplankton blooms, the adjacency effect and glint may have a nearly negligible effect on the remote sensing signal. This presentation will introduce our approach for the fieldwork, the different spectra measured at the variety of sections and our solutions on how to correct for the main issues affecting the remote sensing signal.
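One standard building block for the glint problem described above is the above-water correction that subtracts a fraction of sky radiance (Mobley-style); the snippet is a generic sketch with illustrative numbers, not the authors' processing chain.

```python
# Hedged sketch: above-water remote-sensing reflectance with surface-reflected
# sky glint removed, Rrs = (Lt - rho * Lsky) / Ed, where rho ~ 0.028 is a
# typical sea-surface reflectance factor under low wind (values illustrative).
def rrs_glint_corrected(lt, lsky, ed, rho=0.028):
    """Lt: total upwelling radiance, Lsky: sky radiance, Ed: downwelling irradiance."""
    return (lt - rho * lsky) / ed

rrs = rrs_glint_corrected(lt=0.5, lsky=5.0, ed=100.0)   # radiance/irradiance units cancel to 1/sr
```

The sky-blocked measurements mentioned above give a direct glint-free reference against which such a correction can be checked.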

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Combined use of virtual altimetry stations and tide gauges to monitor and study variations in coastal sea level: the case of the Wouri estuary (Cameroon)

Authors: Lucrèce Djeumeni, Florence Birol, Raphael Onguene, Serge Raoul Nouanssi Dzonde, Fernando Niño, Fabien Léger
Affiliations: CNRS - LEGOS, Université de Toulouse, Université de Douala
The coastal zones of the Gulf of Guinea, which are densely populated and generally low-lying, are highly exposed to the risks of flooding and erosion. These issues are exacerbated by human activities and climate change. Therefore, it is crucial to understand and monitor the variations in coastal sea level in this region. However, the limited number of tide gauge stations in the Gulf of Guinea makes this objective difficult to achieve. Here, we are studying whether altimetry, with more than 30 years of data, can at least partially overcome the problem of the lack of coastal in-situ data in this area. Recent advances in instruments and data processing now make it possible to obtain altimetric sea level time series at less than 5 km from the coast (for example the Sea_Level_cci dataset from ESA). Here, based on the work of Cazenave et al. (2022) introducing virtual altimetric stations as pseudo tide gauges, we start by developing and validating a regional network of virtual stations derived from altimetry along the Gulf of Guinea coast. The tide being the main contributor to variations in coastal sea level in this zone, we calculate the amplitude and phase of the main tidal constituents from the long and continuous sea level time series of the virtual coastal stations. Then we reconstruct the complete tidal time series of sea level. Compared to information derived from available tide gauges, the coastal tidal signal derived from altimetry is slightly underestimated, but the differences do not exceed 6.5 cm for the entire Gulf of Guinea, representing less than 6% of the total tidal signal (see Djeumeni et al., 2024, https://doi.org/10.1016/j.ecss.2023.108600, for further details). We then focus on the Wouri estuary using the complementarity between a virtual altimetric station located just offshore of the estuary and the network of tide gauges located inside. We analyze the dynamic processes responsible for the observed sea level variations and how they vary in space.
The tide is the dominant factor, contributing more than 96% of the local sea level variations. The combined altimetry-tide gauge dataset highlights an amplification and shift of the tide as one moves into the estuary. We then show that factors other than the tide, although smaller in amplitude, significantly influence the maximum and minimum water level in the Wouri estuary: the seasonal signal from offshore, the river discharge and the inverse barometer. This work will be used to implement and validate a numerical model that will enable a more detailed study of flooding factors in the Wouri area and, ultimately, help mitigate the associated risks.
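The harmonic-analysis step described above, estimating the amplitude and phase of a tidal constituent from a long sea-level series, can be sketched by least squares; the series below is a synthetic single M2-like constituent, not Gulf of Guinea data.

```python
import numpy as np

# Hedged sketch: fit one tidal constituent eta(t) = A*cos(omega*t - phi) + mean
# by linear least squares on cosine/sine basis functions.
def fit_constituent(t_hours, sea_level, period_hours):
    omega = 2.0 * np.pi / period_hours
    basis = np.column_stack([np.cos(omega * t_hours),
                             np.sin(omega * t_hours),
                             np.ones_like(t_hours)])
    coeffs, *_ = np.linalg.lstsq(basis, sea_level, rcond=None)
    a, b, mean = coeffs
    return np.hypot(a, b), np.arctan2(b, a), mean   # amplitude, phase (rad), mean

t = np.arange(0.0, 30 * 24, 1.0)                          # 30 days, hourly
eta = 0.8 * np.cos(2 * np.pi / 12.42 * t - 0.5) + 0.1     # M2-like test signal
amp, phase, mean = fit_constituent(t, eta, period_hours=12.42)
```

In practice each main constituent (M2, S2, K1, ...) gets its own cosine/sine pair in the same least-squares system, and the fitted constituents are summed to reconstruct the full tidal series.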

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Deep Learning Large-Scale Sentinel-1 Database for Coastline Extraction

Authors: Gianpaolo Passarello, Sergio Vitale, Giampaolo Ferraioli, Gilda Schirinzi, Vito Pascazio
Affiliations: University of Naples Parthenope, Engineering Department, University of Naples Parthenope, Science and Technology Department
Coastal protection in response to the human security threats posed by climate change is a collective societal responsibility that depends on governmental initiatives and resources. These efforts aim to prevent the retreat of coastlines and safeguard both settlements and infrastructure located along the shore and in the hinterland from flooding and erosion associated with storms and rising sea levels [1]. The coastal zone evolves over a wide range of time scales, spanning from seconds to centuries, influenced by factors such as tides, waves, rising sea levels, fluvial sediment contributions, and various other localized processes. In some instances, the patterns of erosion and recovery of beaches caused by waves and storms can be anticipated through advanced erosion modeling techniques [2]. In other cases, for intricate environments and during extraordinary storm events, the erosion response frequently remains unpredictable or highly uncertain [3]. The location of the shoreline serves as a significant source of information in the practice of coastal management [4], since a chronological collection of shoreline positions provides insights into coastal changes. Given that observations of coastal change with high spatio-temporal resolution only exist at a small number of well-monitored sites [5], remote sensing represents an ideal opportunity for wide-scale and rapid monitoring; in particular, the ability to operate day and night in any meteorological conditions makes synthetic aperture radar (SAR) systems attractive for this task. Deep learning allows the development of computational models characterized by numerous processing layers, enabling them to learn and represent data across various levels of abstraction. This process emulates the brain’s ability to perceive and comprehend multi-modal information, effectively capturing the complex structures in large-scale datasets.
Deep learning encompasses a variety of methods, including neural networks, hierarchical probabilistic models, and various algorithms for both unsupervised and supervised learning. The recent increase in interest in deep learning techniques can be attributed to their demonstrated superiority over previous leading methods in numerous tasks, alongside the availability of intricate data from multiple sources [6]. At the same time, the ability to rapidly process huge volumes of data once correctly trained makes deep learning an appealing solution for all those tasks that are simple but time-consuming for a human operator. Specifically, convolutional neural networks (CNNs) have demonstrated effectiveness in extracting mid- and high-level abstract features from raw images through the integration of convolutional and pooling layers. Studies demonstrate that the feature representations acquired by CNNs are particularly proficient in the task of image segmentation [7,8], which, as reviewed in [9], is one of the best approaches for coastline extraction from SAR images. The development of a deep learning model presents a series of challenges. To create a robust model, it is essential to have access to a substantial amount of high-quality data for training purposes. When a model is trained on a restricted dataset, its ability to generalize effectively is compromised [10]. Typically, a model with weak generalization tends to overfit the training data [11]. To mitigate the issue of model overfitting and enhance generalization performance, it is essential to increase the size of the training dataset, as a larger volume of data reduces the likelihood of the model fitting excessively to individual samples. However, the data collection process is often resource-intensive, requiring significant time and financial investment. Data augmentation represents a viable solution to the challenge of constrained data availability [12].
This technique involves the artificial generation of additional data derived from the existing dataset. By applying various transformations to the current data, it eliminates the necessity for new data collection. Consequently, this approach enhances both the quantity and diversity of data within the datasets, thereby facilitating more robust training and testing of the model. The aim of this work is to propose a novel methodology, created and validated to extract ground truth data regarding the shape and location of coastlines from SAR images, specifically for deep learning applications. Starting from the preprocessing of Sentinel-1 SLC data and the use of auxiliary data from European organisations involved in land-use and sea-water monitoring, some well-known computer vision techniques finally make it possible to obtain a dual-polarization image dataset for European coasts. The resulting dataset was validated using a well-known deep learning technique, and the results obtained will be shown and compared with other state-of-the-art methodologies for coastline extraction. The final version of the paper will additionally showcase the final model's performance using data from various SAR sensors and SAR time series, aimed at assessing the potential changes in coastlines over time.
[1] Petzold, J., & Scheffran, J. (2024). Climate change and human security in coastal regions. Cambridge Prisms: Coastal Futures, 2, e5.
[2] Montaño, J., Coco, G., Antolínez, J. A., Beuzen, T., Bryan, K. R., Cagigal, L., ... & Vos, K. (2020). Blind testing of shoreline evolution models. Scientific Reports, 10(1), 2137.
[3] Vitousek, S., Buscombe, D., Vos, K., Barnard, P. L., Ritchie, A. C., & Warrick, J. A. (2023). The future of coastal monitoring through satellite remote sensing. Cambridge Prisms: Coastal Futures, 1, e10.
[4] Boak, E. H., & Turner, I. L. (2005). Shoreline definition and detection: a review. Journal of Coastal Research, 21(4), 688-703.
[5] Morton, R. A., Leach, M. P., Paine, J. G., & Cardoza, M. A. (1993). Monitoring beach changes using GPS surveying techniques. Journal of Coastal Research, 702-720.
[6] Voulodimos, A., Doulamis, N., Doulamis, A., & Protopapadakis, E. (2018). Deep learning for computer vision: A brief review. Computational Intelligence and Neuroscience, 2018(1), 7068349.
[7] Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3431-3440).
[8] Noh, H., Hong, S., & Han, B. (2015). Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1520-1528).
[9] Ciecholewski, M. (2024). Review of segmentation methods for coastline detection in SAR images. Archives of Computational Methods in Engineering, 31(2), 839-869.
[10] Neyshabur, B., Bhojanapalli, S., McAllester, D., & Srebro, N. (2017). Exploring generalization in deep learning. Advances in Neural Information Processing Systems, 30.
[11] Karystinos, G. N., & Pados, D. A. (2000). On overfitting, generalization, and randomly expanded training sets. IEEE Transactions on Neural Networks, 11(5), 1050-1057.
[12] Ding, J., Chen, B., Liu, H., & Huang, M. (2016). Convolutional neural network with data augmentation for SAR target recognition. IEEE Geoscience and Remote Sensing Letters, 13(3), 364-368.
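The data-augmentation strategy discussed in the abstract can be as simple as enumerating the eight dihedral variants of each training patch; this generic sketch uses a toy 2x2 array rather than real SAR patches.

```python
import numpy as np

# Hedged sketch: geometric augmentation for image patches via the 8 symmetries
# of the square (4 rotations, each with an optional horizontal flip).
def augment(patch):
    variants = []
    for k in range(4):
        rotated = np.rot90(patch, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants

patch = np.array([[1, 2],
                  [3, 4]])
augmented = augment(patch)   # 8 variants; the first is the original patch
```

For SAR specifically, geometric transforms like these are safer than photometric ones, since they preserve the speckle statistics of the original imagery.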

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Estimating dynamic carbon fluxes across the fluvial-marine system of the Mackenzie River Delta-Beaufort Sea

Authors: Annabeth McCall, Dr Martin Hieronymi, Dr Rüdiger Röttgers, Dr Paul Overduin, Dr Guido Grosse, Dr Lisa Bröder, Dr Julie Lattaud, Dr Bennet Juhls
Affiliations: Alfred Wegener Institute Helmholtz Center for Polar and Marine Research, Institute of Carbon Cycles, Helmholtz-Zentrum Hereon, ETH Zürich, University of Stockholm
Intensification of climate change in the Arctic is shifting annual mass fluxes of rivers to coastal seas, making these among the most rapidly changing coastal regions globally. However, due to their inaccessibility, the transport pathways and fate of dissolved organic carbon (DOC), particulate organic carbon (POC), and sediments remain poorly quantified and understood. Ocean-color satellite products have the potential to fill critical data gaps needed to evaluate how land-to-sea carbon fluxes in optically complex waters influence the broader pan-Arctic carbon budget. To fully capture the seasonal variability in fluxes, hydrography, and fluvial-marine biogeochemistry in these shallow coastal waters, accurate retrieval of ocean color products is essential for understanding the region's rapidly evolving ecosystems. This study focuses on the Canadian Arctic coast, where the Mackenzie River delivers the largest sediment load of all Arctic rivers to the Beaufort Sea via the Mackenzie Delta. Studies have shown significant changes in this region: freshwater discharge from the Mackenzie River has increased by ~25% since 2003, with annual DOC flux rising by 39% since the early 1970s (Tank et al., 2016) and suspended sediment loads increasing by 50% since 2009 (Doxaran et al., 2015). Recent findings by Matsuoka et al. (2022) reveal a decadal increase in late-summer organic carbon in the Mackenzie Delta, likely linked to permafrost thaw within the watershed. These changes in land-ocean fluxes may have significant consequences for coastal biodiversity and local Indigenous communities that depend on marine ecosystems. In addition to their remoteness, Arctic coastal waters present unique optical complexities, driven by high concentrations of colored dissolved organic matter (CDOM), suspended sediments, and fluctuating sea-ice conditions (Juhls et al., 2022). These challenges complicate the application of satellite remote sensing.
To validate the use of ocean color remote sensing (OCRS) to estimate POC, DOC, and sediment fluxes in such waters, comprehensive water sampling, bio-optical and radiometric measurements, plus ancillary biogeochemical and discharge data were collected across the fluvial-marine zone of the Mackenzie Delta during August and September 2024. While wide-ranging, the recent field observations also targeted low-salinity optical water types within deltaic tributaries and nearshore coastal waters (i.e., 0-12 psu), where data gaps currently exist. This new and extensive in-situ dataset should capture, at high resolution, the OC transport pathways along the land-sea transition zone. Comparing in-situ concentrations of POC, DOC, and total suspended matter (TSM) with in-situ particulate absorption (aP) and CDOM absorption (aCDOM) will enable evaluation and refinement of bio-optical relationships for these parameters. Ultimately, this will allow remote sensing retrievals to be improved and evaluated across OCRS data products. This study will provide critical insights into the strengths and limitations of current OCRS products in Arctic coastal zones and highlight the importance of in-situ measurements to support algorithm development for such a dynamic coastal region. Through these activities, we hope to refine our understanding of the magnitude, variability, and impact of fluvial and coastal organic carbon fluxes in optically complex river-sea systems, utilizing advances in remote sensing that will ultimately allow for flux estimates independent of in-situ observations.
References
Doxaran, D., Devred, E., and Babin, M. 2015. A 50 % increase in the mass of terrestrial particles delivered by the Mackenzie River into the Beaufort Sea (Canadian Arctic Ocean) over the last 10 years, Biogeosciences, 12, 3551–3565, https://doi.org/10.5194/bg-12-3551-2015.
Juhls, B., Matsuoka, A., Lizotte, M., Bécu, G., Overduin, P.P., Kassar, JE., Devred, E., Doxaran, D., Ferland, Forget, J., Hilborn, A., Hieronymi, M., Leymarie, E., Maury, J., Oziel, L., Tisserand, L., Anikina, DOJ., Dillon, M., and Babin, M. 2022. Seasonal dynamics of dissolved organic matter in the Mackenzie Delta, Canadian Arctic waters: Implications for ocean colour remote sensing, Remote Sensing of Environment, 283, 113327, https://doi.org/10.1016/j.rse.2022.113327.
Matsuoka, A., Babin, M., and Vonk, J. E. 2022. Decadal trends in the release of terrigenous organic carbon to the Mackenzie Delta (Canadian Arctic) using satellite ocean color data (1998–2019), Remote Sensing of Environment, 283, https://doi.org/10.1016/j.rse.2022.113322.
Tank, S. E., Striegl, R. G., McClelland, J. W., and Kokelj, S. V. 2016. Multi-decadal increases in dissolved organic carbon and alkalinity flux from the Mackenzie drainage basin to the Arctic Ocean, Environmental Research Letters, 11, 054015, https://doi.org/10.1088/1748-9326/11/5/054015.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Internal Solitary Waves off the Western Iberian Peninsula: from interactions to short timescale variability and mixing

Authors: Jorge Magalhaes, Professor José da Silva, Dr. Ana Pires, Professor Maarten Buijsman, Dr. Paulo Oliveira, Professor Jesus Dubert, Dr. Rita Nolasco, Mr. Martin Coubard, Dr. Ana Santos, Professor Ana Amorim
Affiliations: CIIMAR (Interdisciplinary Centre of Marine and Environmental Research), FCUP. Department of Geosciences, Environment and Spatial Planning (DGAOT), MARE – Marine and Environmental Sciences Centre, School of Ocean Science and Engineering, University of Southern Mississippi, Portuguese Institute for Sea and Atmosphere, I. P. (IPMA, IP), Instituto de Ciências da Terra, Polo Porto, Universidade do Porto, Departamento de Física e Centro de Estudos Do Ambiente e Do Mar (CESAM), Universidade de Aveiro, IMT Atlantique, Technopôle Iroise, Environment and Sea Department, Cascais Municipality
Physical oceanography increasingly relies on satellite remote sensing to survey the perpetually undersampled ocean. In this study we present a comprehensive collection of Synthetic Aperture Radar (SAR) images from Sentinel-1 to document the two-dimensional horizontal structure of Internal Solitary Waves (ISWs) off the Western Iberian Peninsula, which are observed to form a naturally occurring interaction hotspot between different ISW packets. In addition, the SAR dataset is complemented with moored temperature records to assess the variability scales and magnitudes of ISWs propagating onshore over the western Portuguese shelf. The combined data show significant variability on timescales of just a few days, and hence within the waves' typical propagation lifespan. A regional ocean circulation model is used together with linear theory to assess how mesoscale variability contributes to the patterns observed in the ISWs in the SAR and in situ data. Finally, some mixing parameters associated with ISWs are investigated in light of their dependence on variable wind conditions. Wind and ISWs are well known to mix the ocean's surface and inner stratification, but could their combined effect exceed the sum of their individual contributions? This study investigates this possibility, and it is found that regions exhibiting low bulk Richardson numbers (Ri<1/4, indicating the possibility of shear instabilities and hence turbulence and mixing) and temperature inversions increase nearly fourfold and by an order of magnitude, respectively, when comparing ISWs propagating under high- and low-wind regimes. Altogether, the findings presented for this study region may have important implications ranging from parametrizations in ocean models to our understanding of biogeochemical coastal processes modulated by diapycnal mixing.
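The bulk Richardson number criterion quoted above (Ri < 1/4 indicating possible shear instability) can be sketched as a simple two-layer calculation. This is an illustrative pure-Python sketch, not the study's processing code, and the layer values in the example are hypothetical:

```python
def bulk_richardson(delta_rho, delta_u, layer_depth, rho0=1025.0, g=9.81):
    """Bulk Richardson number Ri_b = g * drho * h / (rho0 * du^2).

    delta_rho   : density difference across the layer (kg m^-3)
    delta_u     : velocity difference across the layer (m s^-1)
    layer_depth : layer thickness h (m)
    """
    if delta_u == 0:
        return float("inf")  # no shear: stratification dominates
    return g * delta_rho * layer_depth / (rho0 * delta_u ** 2)

def shear_unstable(ri_b, critical=0.25):
    """Ri_b below 1/4 indicates possible shear instability and mixing."""
    return ri_b < critical

# Hypothetical example: strong shear across a 10 m layer, weak stratification
ri = bulk_richardson(delta_rho=0.5, delta_u=0.6, layer_depth=10.0)
```

For a two-layer approximation, strengthening the shear (larger delta_u) lowers Ri_b toward the critical value, which is how wind forcing can precondition the water column for ISW-driven mixing.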
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: The Copernicus Marine High-Resolution Coastal Service and its evolutions

Authors: Dimitry van der Zande, Kerstin Stelzer, Dr. Martin Böttcher, Carole Lebreton, Quinten Vanhellemont, Joppe Massant, Dr. Carsten Brockmann
Affiliations: RBINS, Brockmann Consult
High-quality satellite-based ocean colour products can provide valuable support and insights for the management and monitoring of coastal ecosystems. Today's availability of Earth Observation (EO) data is unprecedented, including traditional medium-resolution ocean colour systems (e.g. Sentinel-3/OLCI) and high-resolution land sensors (e.g. Sentinel-2/MSI). Each of these sensors offers specific advantages in terms of spatial, temporal or radiometric characteristics, enabling the provision of different types of ocean colour products. With the High-Resolution Coastal Service (HROC), the Copernicus Marine Service (CMS) provides high-resolution ocean colour products based on Sentinel-2/MSI data for European coastal waters. It offers 12 different products for the coastal waters (a 20 km strip along the coastline) of all European seas at 100 m spatial resolution. The primary variable, from which virtually all geophysical and transparency products can be derived, is the spectral Remote Sensing Reflectance (RRS). Together with the Particulate Backscatter Coefficient (BBP), it constitutes the optics product category. The spectral BBP product is generated from the RRS products using a quasi-analytical algorithm. The transparency products include turbidity (TUR) and Suspended Particulate Matter (SPM) concentration. The geophysical product consists of the Chlorophyll-a concentration (CHL) retrieved via a multi-algorithm approach with optimized quality flagging. We will present our experiences after 5 years of operational processing and highlight the implemented and planned evolutions of the service, including the de-striping of the Sentinel-2 data using the top-of-atmosphere glint correction (TGC) and the harmonization of the CHL product between Sentinel-2/MSI and Sentinel-3/OLCI using machine learning.
• Detector striping can be observed over water in Sentinel-2/MSI imagery, especially in the eastern part of the swath for summer/high-sun images.
One of the reasons for banding in the Sentinel-2 observations is the staggered positioning of the detectors. This causes differing viewing conditions per spectral band for adjacent detectors, which are not accounted for by the atmospheric correction algorithms in use. For atmospheric and land signals this is usually not a large issue, but for water surfaces different detectors and different bands can view different amounts of sun glint. In TGC, the pixel-wise spectral shape and intensity of the glint signal is modelled per band and detector at Top of Atmosphere (TOA), taking into account wind speed and aerosol optical thickness (AOT), using the OSOAA radiative transfer model. This TOA glint signal is then subtracted from the TOA observation, removing the banding effect.
• Due to differences between the Sentinel-2/MSI and Sentinel-3/OLCI sensors regarding spectral band sets and viewing geometry, CHL algorithms used for Sentinel-3/OLCI often cannot be directly transferred to Sentinel-2/MSI, so the chlorophyll-a products of the two sensors often diverge. A machine learning technique (LightGBM) was used to transfer the more complex CHL algorithms (e.g. band-ratio algorithms, switching algorithms, other machine learning algorithms) from Sentinel-3/OLCI to Sentinel-2/MSI. The usefulness of the updated HROC products will be demonstrated with a use case of eutrophication monitoring in the nearshore WFD zones in the Southern North Sea.
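To illustrate the harmonization idea only (the service itself trains a LightGBM model on Sentinel-2/MSI band data, which is not reproduced here), a minimal stand-in can fit a log-linear mapping between matched CHL retrievals from the two sensors. All function names and values below are hypothetical:

```python
import math

def fit_log_linear(chl_msi, chl_olci):
    """Fit log10(CHL_OLCI) ~ a * log10(CHL_MSI) + b by ordinary least squares.

    A deliberately simple stand-in for the LightGBM regressor described in
    the abstract: the real service learns the mapping from MSI band
    reflectances, not from a single MSI CHL estimate.
    """
    xs = [math.log10(c) for c in chl_msi]
    ys = [math.log10(c) for c in chl_olci]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def harmonise(chl_msi, a, b):
    """Map an MSI CHL value onto the OLCI CHL scale."""
    return 10 ** (a * math.log10(chl_msi) + b)

# Synthetic matchups following an exact power-law relation, for illustration
chl_msi = [0.5, 1.0, 2.0, 5.0, 10.0]
chl_olci = [2 * c ** 1.1 for c in chl_msi]
a, b = fit_log_linear(chl_msi, chl_olci)
```

Working in log space reflects the roughly log-normal distribution of CHL in natural waters; a tree-based learner such as LightGBM additionally captures the non-linear, multi-band structure that this sketch deliberately omits.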
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Demonstrating the contribution of ODATIS data hub’s medium-resolution products to coastal monitoring

Authors: Aurélien Prat, David Doxaran, Vincent Vantrepotte, Marine Bretagnon, Philippe Bryère, Antoine Mangin
Affiliations: ACRI-ST, Laboratoire d'Océanographie de Villefranche (LOV), UMR 7093, CNRS-Sorbonne Université, Laboratoire d’Océanologie et de Géosciences (LOG), UMR 8187, IRD, Univ. Littoral Côte d’Opale
Coastal areas are critical for human activity and are among the most productive ecosystems, offering valuable resources and services. However, they are increasingly impacted by human actions, pollution and climate change, which threaten their ecological health and sustainability. Remote sensing is an important tool for monitoring these regions, providing timely data to track environmental changes and human influence. More specifically, ocean color is a key variable for studying coastal waters, since it provides vital information about water quality, ecosystem health, and biological processes through chlorophyll concentration and primary production estimation, water quality assessment, and related applications. Despite the significant efforts made in recent years to facilitate access to ocean color data, there remains substantial potential to improve data products through better algorithms and broader accessibility. To this end, the "Ocean Color" Scientific Expertise Consortium under the ODATIS data hub, part of the Data Terra Research Institute (CNES), has identified a series of innovative products and algorithms that address the scientific community's needs at the national scale in France. The initiative will, among other things, enable a comparison of a wide range of products and provide access to a long-term archive of qualified ocean color data. This work could potentially be extended to other regions of the world in the future. In this context, ACRI-ST has been commissioned by CNES to create a medium-resolution (300 m) ocean color data archive focused on the French metropolitan coasts. This involves using specialized atmospheric corrections (Acolite, Polymer) and biogeochemical algorithms adapted to coastal environments, based on data from the MERIS, MODIS, and OLCI sensors. The final products are made freely available through an accessible online platform (https://odatis.acri-st.fr), supporting research and decision-making for coastal management.
This presentation will detail the operation of the medium-resolution product generation service and the validation process. It will then showcase the added value of these medium-resolution coastal products through examples of their application in coastal water monitoring. Results will be analyzed and explained.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Monitoring of Coastal Dynamics at the Island of Langeoog, Germany by Means of Multi-Sensor Satellite Data

Authors: Julia Holzner, Sandro Martinis, Dr. Simon Plank
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center
Langeoog is one of the East Frisian Islands in the German Wadden Sea, with topographical and bathymetric characteristics significantly influenced by environmental factors such as tides and currents. Coastal protection is of great importance for the preservation of the islands in order to counteract periodically occurring sand deficits caused by erosion processes, such as seasonal storm surges, and the resulting strong disturbance of the sediment balance. Langeoog is a special case in this context, as it is the only island of the barrier island chain that is largely managed without coastal protection structures such as groynes or revetments, owing to the prevailing current and sedimentation processes. The only measures taken to preserve the sand deposits on the north side of the island are sand replenishments carried out at regular intervals. To carry out such measures effectively and efficiently, detailed knowledge of the development of this highly dynamic coastal area is required. In this study, we investigate the suitability of optical multi-sensor satellite data for a comprehensive identification and analysis of coastal dynamics on the Island of Langeoog in the period 2018 to 2023. For this purpose, very high resolution (VHR) and high resolution (HR) optical satellite data were processed to estimate individual components of coastal dynamics such as the coastline, sediment accumulations such as sand flats, the location of the dune base and the extent of permanently dry sand areas under normal tidal conditions. Both index-based threshold methods and a random forest-based classification algorithm were used to extract these parameters. The results indicate that the data are very well suited to derive and analyse individual components of coastal dynamics.
The derivation of the Normalized Difference Water Index (NDWI) from VHR and HR optical data showed a sand plate approaching the northwestern tip of the island over the period 2021 to 2023, which attached to the coast during 2023, resulting in a seaward shift of the coastline. A further histogram analysis was performed to extract the beach areas of the island, and thus the landward boundary of the beach, the dune foot, from the NDWI information. Results showed that the position of the dune base remained stable over the study period. By applying the classification algorithm, dry sand areas (above the high-tide water line) could additionally be extracted from the image data; these particularly show the influence of erosion and accumulation events. This study demonstrates that multi-sensor optical satellite data are suitable for the holistic analysis of coastal dynamics, as the findings on the individual components can be linked to each other as well as to specific erosion events (storm surges) and accumulation events (sand supply). The results thus show great potential for supporting coastal monitoring in applied coastal protection.
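The NDWI-based land/water separation described above can be sketched as follows. This is a minimal pure-Python illustration with hypothetical reflectance values; the study derives its thresholds from a histogram analysis rather than the default zero threshold used here:

```python
def ndwi(green, nir):
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR)."""
    if green + nir == 0:
        return 0.0  # guard against empty/no-data pixels
    return (green - nir) / (green + nir)

def classify_water(green_band, nir_band, threshold=0.0):
    """Label each pixel as water (True) where NDWI exceeds the threshold.

    Water reflects strongly in green and absorbs in NIR, so NDWI > 0 is a
    common first-guess water mask; sand and vegetation fall below it.
    """
    return [ndwi(g, n) > threshold for g, n in zip(green_band, nir_band)]

# Hypothetical surface reflectances: one water pixel, one dry-sand pixel
labels = classify_water([0.06, 0.25], [0.02, 0.35])
```

Applied to a VHR/HR image time series, the resulting water masks trace the land-sea boundary per acquisition, from which coastline shifts such as the attaching sand plate can be mapped.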
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: EO and forcing data-driven forecast of turbidity for the operational monitoring of coastal works

Authors: Olivier Regniers, Nicolas Débonnaire, Stéphane Kervella, Rémi Budin, Sébastien Brasselet
Affiliations: I-sea, Hozro, HAROPA PORT
The short-term forecasting of water quality parameters, such as turbidity, can support the design of an operational monitoring system dedicated to coastal construction works. In coastal areas, turbidity levels are highly dependent on the combined action of natural and anthropogenic drivers. Forecasting such a complex system can be challenging and is often associated with high levels of uncertainty. Data-driven approaches and machine learning frameworks have recently emerged as promising tools for such applications. They rely on the use of both observational datasets, such as satellite-derived ocean parameters, and modelled forcing parameters to reconstruct or forecast fields of ocean variables. During the spring of 2025, extension works are planned within the harbour of Le Havre in the north of the French Atlantic coastline. These will involve the construction of a new 2 km long dyke to enable direct access for river vessels to the container terminals of the harbour, in all weather conditions and in complete safety. To monitor the environmental impact of those works, three monitoring stations equipped with turbidity probes were set up in the vicinity of the building site in 2024. One probe serves as a reference station and is located beyond the extent of the modelled turbidity plume generated by the works; the second and third are located within the modelled plume. Turbidity is continuously measured at these three locations and will be used, once the works have started, to trigger alert notifications to the harbour authorities whenever turbidity starts to diverge between the reference station and the other two stations. Such a divergence would potentially mean that the ongoing works are generating significantly more turbidity than naturally expected in the area, with potential impacts on the nearby ecosystems. However, it is also known that turbidity measured inside and outside the work area can diverge under certain natural forcing conditions.
A short-term turbidity forecasting system can help contextualize the observed discrepancies and provide complementary data for the decision-making process that may lead to a temporary shutdown of the ongoing works. The proposed data-driven forecasting model is based on a Random Forest framework powered by two main sources of data. The first is observational data based on the full archive of Sentinel-2 images collected over the area of interest between the summer of 2015 and mid-2023. All images were atmospherically corrected using a combination of the ACOLITE and C2RCC algorithms and converted to turbidity levels using a conditional single-band empirical algorithm calibrated during a previous in-situ measurement campaign. The turbidity values at each station's location were then extracted from the time series, yielding approximately 300 turbidity values per station. The second data source is forcing parameters chosen to capture the full complexity of the sediment dynamics in the study area. Forcings such as river flow (Seine River), tidal phase, currents, winds and waves were considered, several of which can be at play at the same time. Given the objective of an operational short-term forecast, the parameters describing each forcing must themselves be forecast, and access to these data via an API is mandatory. Parameters were collected only at times when a Sentinel-2 turbidity value is available and were averaged over different time intervals depending on their own frequency. These time intervals were set to reflect the short-term impact of wind, waves and tide levels and the longer-term impact of tidal range and river flow. Day of the year was also included to account for seasonal variations. Once collected, all these data were formatted into a single table and given to the Random Forest framework to train the model.
Our process begins with a variable selection step using the relative variable importance estimates provided by the Random Forest algorithm. A first model is trained with all the variables of interest, without any preconceptions about the data. A sensitivity analysis showed that with a subset of 10 variables, including their full time series (i.e. all the time intervals), a compromise between precision and parsimony is reached. A second and final model is then trained only on this subset of variables and further used for the operational forecast of turbidity. The operational deployment of the model consists of forecasting turbidity every 6 hours for the next 3 days using only available forcing data, without any dependence on the availability of EO data. In terms of validation, the full dataset of satellite observations and corresponding forcing data was sampled in a bootstrap approach (20 runs). For each iteration, 90% of the dataset is selected for model training and 10% is kept for validation. Validation statistics showed satisfactory results for all three stations, with a Mean Absolute Error (MAE) between 1.8 and 2.4 NTU and a Mean Absolute Percentage Error (MAPE) between 27 and 31%. First results of offline validation between predictions and in-situ data collected at one of the three stations (February to September 2024) revealed slightly weaker statistics, with an MAE of 2.7 NTU and a MAPE of 36%. Qualitative comparisons between temporal profiles of predictions versus in-situ data also made it possible to highlight trend changes and discrepancies. Most of the time (approx. 85%), the differences between predictions and in-situ measurements are below 5 NTU or below 30%. The remaining 15% corresponds either to underestimated narrow turbidity peaks or to longer periods during which the model deviates from reality.
When plotted against temporal profiles of the forcing parameters, these longer periods of discrepancy often appear to be associated with prolonged events of strong winds and/or waves that might not be properly captured by the model. In terms of forcing parameter importance in the model's prediction process, significant wave height, wind speed, tidal range and day of the year are the predominant explanatory variables. Despite the close vicinity of the Seine River mouth, the river flow has a limited impact on the prediction. This forecasting model will be deployed operationally in the coming months, as soon as the harbour works start. Further in-situ data will also be collected and will enhance the validation review of the system. The winter season, often associated with stronger dynamics, will also be analysed, providing further feedback on the prediction accuracy in more unstable conditions. Besides, this model has also shown great potential as an alert system for probe fouling events. Such an event occurred during spring 2024, and a significant divergence between predictions and in-situ data was observed within a few days. Overall, this system could be deployed at any location where forcing data are available, providing useful forecast data to help monitor the anthropogenic pressure of offshore works on natural water quality.
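The validation scheme described above (20 bootstrap runs of random 90/10 splits, scored with MAE and MAPE) can be sketched as follows. This is an illustrative pure-Python version in which a trivial mean predictor stands in for the trained Random Forest; all names are hypothetical:

```python
import random

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error (requires strictly positive targets)."""
    return 100.0 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true)

def bootstrap_validate(X, y, fit_predict, runs=20, train_frac=0.9, seed=0):
    """Repeated random 90/10 splits, as in the abstract's validation scheme.

    `fit_predict(X_train, y_train, X_test)` stands in for the Random Forest;
    any regressor with that calling convention can be plugged in.
    """
    rng = random.Random(seed)
    idx = list(range(len(y)))
    scores = []
    for _ in range(runs):
        rng.shuffle(idx)
        cut = int(train_frac * len(idx))
        tr, te = idx[:cut], idx[cut:]
        preds = fit_predict([X[i] for i in tr], [y[i] for i in tr],
                            [X[i] for i in te])
        scores.append((mae([y[i] for i in te], preds),
                       mape([y[i] for i in te], preds)))
    return scores

# Hypothetical example: constant turbidity, predicted by the training mean
X = [[i] for i in range(100)]
y = [10.0] * 100
mean_model = lambda Xtr, ytr, Xte: [sum(ytr) / len(ytr)] * len(Xte)
scores = bootstrap_validate(X, y, mean_model)
```

Averaging the per-run (MAE, MAPE) pairs yields summary statistics comparable to the 1.8-2.4 NTU / 27-31% figures reported for the real stations.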
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: FOCCUS: Advancing Copernicus for Coastal Resilience

Authors: Kelli Johnson, Dr. Joanna Staneva, Dr. Emma Reyes, Dr. Joaquin Tintoré, Dr. Antonio Bonaduce, Dr. Giorgia Verri, Dr. Ivan Federico, Dr. Pavel Terskii, Dr. Alena Bartosova, Dr. Kai H. Christensen, Dr. Quentin Jamet, Dr. Angelique Melet, Dr. Lőrinc Mészáros, Dr. Ghada El Serafy
Affiliations: Helmholtz-Zentrum Hereon, Balearic Islands Coastal Observing And Forecasting System, Stiftelsen Nansen Senter For Miljø Og Fjernmåling, Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, Swedish Meteorological And Hydrological Institute, Meteorologisk Institutt, Service Hydrographique Et Oceanographique De La Marine, Mercator Ocean International, Stichting Deltares, Mediterranean Institute for Advanced Studies (IMEDEA CSIC-UIB)
The Horizon Europe FOCCUS project (Forecasting and Observing the Open-to-Coastal Ocean for Copernicus Users, foccus-project.eu) enhances the Copernicus Marine Environment Monitoring Service (CMEMS) for coastal zones, enabling coastal resilience in the face of climate change. In collaboration with European Member State Coastal Systems (MSCS) owners and stakeholders, FOCCUS aims to enable adaptive coastal management strategies by improving and advancing the coastal dimension of CMEMS. FOCCUS enhances CMEMS’s coastal capabilities through three key pillars: i) developing novel high-resolution coastal data products by integrating multi-platform observations (remote sensing and in-situ), Artificial Intelligence (AI) algorithms, and advanced data fusion techniques for improved monitoring; ii) developing advanced hydrology and coastal models including a pan-European hydrological ensemble for improved river discharge predictions, and improving member state coastal systems by testing new methodologies in MSCS production chains while taking advantage of stochastic simulation, ensemble approaches, and AI technology; and iii) demonstrating innovative products and improved co-produced services that address both Environmental and Societal Challenges, enhancing the performance and societal relevance of coastal ocean forecasting systems in and downstream of CMEMS. As an endorsed UN Ocean Decade CoastPredict project, FOCCUS fosters international collaboration and co-production with end-users, policy-makers, and local communities. This ensures societal relevance and supports sustainable coastal management and climate adaptation strategies, to address risks such as coastal erosion, marine heatwaves, and coastal storm surge. Ultimately, FOCCUS provides high-quality, trusted marine knowledge needed for evidence-based management, supporting a sustainable blue economy, and building climate resilience for European coastal communities. 
The FOCCUS consortium unites 19 partners from 11 countries, fostering a truly interdisciplinary collaboration that spans oceanography, observational science, advanced modeling, and technology development, exemplifying the power of international cooperation in advancing ocean sciences and technologies to create adaptive, innovative coastal management strategies. FOCCUS is funded by the European Union (Grant Agreement No. 101133911). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Health and Digital Executive Agency (HaDEA). Neither the European Union nor the granting authority can be held responsible for them.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Innovative SAR-Based Shoreline Monitoring: Advancing Coastal Change Detection and Environmental Management

Authors: Dr Salvatore Savastano, Mr Mark E. Pattle, Mr Albert Garcia-Mondéjar
Affiliations: isardSAT Ltd, isardSAT
Coastal transformation is one of the most visible impacts of climate change, driven by processes such as erosion, sediment deposition, and rising sea levels. Monitoring these dynamic changes is essential for environmental management and coastal preservation efforts. Satellite remote sensing has emerged as a powerful tool for mapping these shifts, providing precise and continuous observations of coastal evolution. With advancements in imaging technology, satellite systems offer invaluable data to study and monitor the ongoing transformations of coastal landscapes over time. Today, a wide array of satellites, both optical and radar, acquire massive amounts of data globally. Programs like Copernicus have made high-quality satellite imagery freely accessible, enabling consistent monitoring of coastal evolution with reasonable revisit times. Numerous tools and software packages have been developed that utilise optical data, providing the geoscience community with robust methods for analysing coastal changes. Optical images are often preferred over radar images, which are considered less reliable for this purpose. However, the utility of optical images is significantly limited in regions with persistent cloud cover or extreme sea conditions, making it challenging to monitor long-term coastal changes. Synthetic Aperture Radar (SAR) imagery addresses these challenges by offering weather-independent, all-day imaging capabilities. Unaffected by darkness, clouds, or rain, SAR provides frequent and consistent updates for shoreline mapping. Furthermore, SAR can detect features that may be overlooked by optical sensors. To maximize the advantages of SAR technology, isardSAT has developed an innovative Shoreline (SL) processor (Savastano et al., https://doi.org/10.3390/jmse12010163) that utilizes Sentinel-1 (C-band) data to generate coastal change products. This processor can be adapted for other SAR missions operating at different frequencies.
It facilitates the extraction of shorelines from individual SAR images, enabling the creation of long-term time series to track coastline variations with high accuracy. Recent improvements to the processor have introduced new steps and analyses to better monitor coastal evolution and address findings from the referenced study. The SAR SL processor operates in three primary stages:
1. Georeferencing. Georeferenced images are prepared for analysis using the SNAP toolbox. This involves processing each Sentinel-1 dataset to ensure precise spatial referencing.
2. Shoreline Extraction. This stage comprises four key steps:
• Enhancement: Two advanced denoising algorithms (wavelet decomposition and anisotropic diffusion) are applied to reduce speckle while preserving structural features and textural details in the SAR images.
• Segmentation: A binary raster representing the land–sea boundary is generated through intensity thresholding. The Kittler method is employed to calculate an optimal threshold for separating land and sea pixels.
• Healing: The binary raster undergoes morphological operations to refine its accuracy. Binary opening removes noise caused by reflections from waves or vessels, while a minimum size filter eliminates small, irrelevant foreground features. Holes within valid foreground areas are filled to create a cohesive binary representation.
• Vectorization: The final binary raster is converted into a shoreline vector using the marching squares algorithm. An optional blurring step enhances sub-pixel precision, smoothing the vector and reducing aliasing. Each shoreline point is tagged with geographic coordinates (latitude and longitude) and associated metadata, including satellite incidence angle, local incidence angle derived from topography, and the Digital Elevation Model (DEM) used in terrain correction.
3. Shoreline Filtering. In the third stage, a new method refines the extracted SLs to capture coastal changes.
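The Kittler minimum-error thresholding named in the Segmentation step can be sketched as follows. This is an illustrative pure-Python implementation operating on a grey-level histogram, not the processor's actual code:

```python
import math

def kittler_threshold(hist):
    """Kittler-Illingworth minimum-error threshold on a grey-level histogram.

    Models the histogram as a mixture of two Gaussians (e.g. sea and land
    backscatter in a despeckled SAR intensity image) and returns the bin
    index that minimises the classification-error criterion J(t).
    """
    total = float(sum(hist))
    best_t, best_j = None, float("inf")
    for t in range(1, len(hist)):
        lo, hi = hist[:t], hist[t:]
        p1, p2 = sum(lo) / total, sum(hi) / total
        if p1 <= 0 or p2 <= 0:
            continue  # one class empty: no valid split here
        m1 = sum(i * h for i, h in enumerate(lo)) / (p1 * total)
        m2 = sum(i * h for i, h in enumerate(hi, start=t)) / (p2 * total)
        v1 = sum(h * (i - m1) ** 2 for i, h in enumerate(lo)) / (p1 * total)
        v2 = sum(h * (i - m2) ** 2 for i, h in enumerate(hi, start=t)) / (p2 * total)
        if v1 <= 0 or v2 <= 0:
            continue  # degenerate class variance
        j = (1 + 2 * (p1 * math.log(math.sqrt(v1)) + p2 * math.log(math.sqrt(v2)))
             - 2 * (p1 * math.log(p1) + p2 * math.log(p2)))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

# Hypothetical bimodal histogram: dark sea mode near bin 4, bright land mode near bin 16
hist = [1, 3, 8, 15, 20, 15, 8, 3, 1, 0, 0, 0, 1, 3, 8, 15, 20, 15, 8, 3]
t = kittler_threshold(hist)
```

Pixels below the returned bin index would be labelled as one class (here, sea) and the rest as the other; the subsequent Healing step then cleans up the resulting binary raster.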
It includes:
• Heatmapping: This step is used to analyse how the SLs are spatially distributed across the scene under investigation. Following one of the steps in Georeferencing (Collocation), all points along the SLs are aligned into a structured grid-like arrangement. For each point on the SLs, the number of overlapping lines is calculated by examining data from both ascending (ASC) and descending (DESC) tracks. The count of overlapping lines at each point is then converted into a percentage relative to the total number of lines (ASC and DESC combined). The resulting heatmap provides a clear visualization of SL dispersion and highlights significant patterns within the scene. Additionally, the heatmap serves as a basis for an optional filtering step, where points located far from the primary concentration of SLs are removed to further reduce speckle and improve data quality.
• Polygon Creation: The site under investigation is divided into adjacent polygons, constructed using one of two methods. In the first method, polygons are built along a Reference Line (RL), which can either be user-provided or derived from the heatmap by analysing SL concentration patterns. These polygons are oriented perpendicularly to the RL. In the second method, polygons are constructed around user-defined transects, offering an alternative approach for segmentation. The width of each polygon can be adjusted to user preferences, while the length is determined by either the SL distribution and associated heatmaps or user-defined parameters. Regardless of the method, it is recommended to include points with higher heatmap values while excluding outliers located far from the primary SL concentration, often attributed to noise. Overlaps between polygons are generally avoided, except in cases involving complex geometries in the RL or transects.
• Distance Assignment: The distance between each SL point and the RL is computed for points located within the boundaries of their respective polygons. This distance corresponds to the shortest separation between the individual SL point and the nearest point on the RL. To provide additional context, a sign is assigned to each distance to indicate the relative position of the SL point with respect to the RL: the distance is positive if the SL point lies seaward of the RL and negative if it lies landward of it. This approach ensures that each SL point's location is clearly defined in terms of its spatial relationship to the RL, providing a detailed representation of the coastline's dynamics.
• Filtering with Gaussian Mixture Distribution (GMD): The SL points undergo a filtering process based on the Gaussian Mixture Distribution (GMD) technique. The SL distances are modelled statistically as a mixture of m Gaussian components, with the appropriate number of components determined using the Bayesian Information Criterion (BIC). After identifying the optimal model, the Gaussian component with the largest population is selected as the primary reference. The mean and standard deviation of this dominant component are then computed and applied to filter the distances, ensuring a more refined representation of the shoreline points.
• Mean Distance Calculation: For each polygon, a time series of SL distances is generated from the unique timestamps. The Median Absolute Deviation (MAD) method filters outliers, and the mean and standard deviation are recalculated for the inlier distances. A further cleaning operation removes remaining outliers from the time series.
Using an iterative approach, this cleaning step identifies outliers by comparing the differences between consecutive data points against a designated threshold. Finally, change rates (CR) are determined from the time series via linear regression, with positive or negative slopes indicating accretion or erosion. The last two steps are performed separately on the SAR SLs from all tracks covering the Area of Interest (AOI), and incidence angles are analysed for each acquisition (ascending/descending paths) to assess the impact of satellite geometry and topography on SL positions.

Research Findings
The findings demonstrate that SAR-derived shorelines align with positions above the high-water mark across all studied sites. SLs from ascending and descending tracks exhibit either overlapping patterns or displacement. Overlapping patterns often reveal strong seasonal trends, while displacement highlights the influence of topography. Current investigations aim to better understand the variabilities observed in SAR SL detection. A further correlation analysis of the SLs from each track is performed, providing end users with combined or separate time series and change rates derived from SLs acquired in different acquisition geometries. This methodology provides a robust framework for detecting coastal variations across diverse landscapes, including flat beaches, cliffs, and dunes. It addresses challenges such as coastline orientation, radar incidence angle, backshore type, and soil moisture, ensuring adaptability to various coastal conditions.

Practical Applications
At the conference, several case studies will be presented, illustrating the utility of this methodology in addressing real-world challenges. Validation results from three different end users will be shared, demonstrating the effectiveness of the processor in various contexts.
Additionally, the latest processor version will be applied to the Norwich coast in the United Kingdom as part of the UK Gravel Barrier project led by the British Geological Survey. Initial results from this project will also be discussed, showcasing the tool's capability to support ongoing coastal research and management efforts.
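The GMD filtering and change-rate steps of the Shoreline Filtering stage can be sketched with scikit-learn and NumPy. The maximum component count and the 3-sigma cut-off below are illustrative assumptions, not the processor's actual parameters:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def filter_distances_gmd(distances, max_components=4, n_sigma=3.0):
    """Model signed SL-to-RL distances as a Gaussian mixture, choose the
    number of components by BIC, and keep only points within n_sigma of
    the dominant (largest-weight) component."""
    x = np.asarray(distances, float).reshape(-1, 1)
    models = [
        GaussianMixture(n_components=m, random_state=0).fit(x)
        for m in range(1, max_components + 1)
    ]
    best = min(models, key=lambda g: g.bic(x))      # BIC model selection
    k = int(np.argmax(best.weights_))               # dominant component
    mu = best.means_[k, 0]
    sd = float(np.sqrt(best.covariances_[k, 0, 0]))
    mask = np.abs(x[:, 0] - mu) <= n_sigma * sd
    return x[mask, 0], mu, sd

def change_rate(years, mean_distances):
    """Change rate (CR): slope of a linear fit to the per-polygon time
    series of mean distances (positive = accretion, negative = erosion)."""
    slope, _ = np.polyfit(years, mean_distances, 1)
    return slope
```

A usage sketch: feed the signed distances of one polygon into `filter_distances_gmd`, average the inliers per timestamp, then pass the yearly means to `change_rate`.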
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Trends of Sea Darkening and the Drivers of Water Optical Properties in Northwest European Shelf

Authors: Lenka Fronkova, Tiago Silva, Jon Barry, Naomi Greenwood, Elisa Capuzzo, Jennifer Graham, Richard Harrod, Kate Collingridge, Michelle Devlin, Miguel Ferreira-Lima
Affiliations: Centre for Environment Fisheries And Aquaculture Science, University of West of England
Available light in the marine environment is one of the key physical variables that links biogeochemistry and ecology (Capuzzo et al., 2015). Three main optically active water components, namely chlorophyll (CHL), suspended particulate matter (SPM) and coloured dissolved organic material (CDOM), determine the amount of absorbed and reflected photons in the water column and the overall light availability (Sathyendranath, 2000). The amount of available light in the water column is proportional to the rate at which nutrients can be converted into phytoplankton biomass and primary productivity (Cole & Cloern, 1987). As such, turbid waters with low light penetration show decreased phytoplankton bloom potential (Foden et al., 2010). Changes in phytoplankton blooms and primary productivity can, in turn, affect higher trophic levels, ranging from zooplankton to marine mammals and seabirds (Capuzzo et al., 2017), ultimately impacting coastal communities that depend on marine food sources. Based on Secchi disk depth (a measure of water clarity) and CHL in situ data, there is evidence that multiple places in the North Sea have undergone a decrease in the availability of underwater light (Capuzzo et al., 2015), referred to as sea darkening. Sea darkening in the 20th century has been linked with up to a three-week delay in the spring phytoplankton bloom (Opdal et al., 2019). However, findings from Wilson and Heath (2019), based on SPM derived from remote sensing and on modelling the effect of wind and tides on bed shear stress, contradict the sea darkening observed in Capuzzo et al. (2015) and Capuzzo et al. (2017), suggesting that North Sea water clarity started to recover at the end of the 20th century, between 1997 and 2017.
Despite the indications of sea darkening in the North Sea, the results of past studies are not unanimous, and more research is needed to understand the trends and drivers of optically active water constituents, given their impact on primary productivity and the overall food chain. Similarly, there is a research gap as to which natural and anthropogenic drivers are the main contributors to changes in the light climate of the northeast Atlantic. This study analyses diffuse attenuation coefficient (Kd), CHL and SPM trends between 1998 and 2020 in the Northwest European Shelf, derived from Ocean Colour Climate Change Initiative (OC-CCI), Yu et al. (2023) and GlobColour remote sensing data, within the Common Procedure (COMP) OSPAR management areas that are used to assess water quality condition in northeast Atlantic waters. Preliminary results of Generalised Additive Models (GAM) show a downward SPM trend from 1998 to 2015, followed by an upward trend until 2020. Although this result is in line with Wilson and Heath's (2019) finding of recovering water clarity at the end of the 20th century, the increasing SPM trend after 2015 is not observed in the in situ data. On the contrary, based on Kd, water clarity decreases in the same regions as SPM until 2015, apart from the Irish Sea and English Channel, which exhibit increased water clarity from 2015 onwards. CHL only shows statistically significant declining trends around the Norwegian Trench and English Channel. Overall, the inconsistencies in trend results between SPM, CHL and Kd could be linked to bias between different satellite sensors (Mélin, 2016) and to errors in deriving the three tested parameters in the very turbid Northwest European Shelf seas. On the other hand, in situ observations showed limited sensitivity to long-term trends due to inconsistent temporal and spatial sampling. To better understand changes in Kd, CHL and SPM between 1998 and 2020, the main controlling factors were determined using boosting models.
The results indicate that the main drivers are bed shear stress, river discharges and significant wave height. Offshore wind farms and bottom fishing, human factors that contribute to sea bottom disturbance and therefore decreased water clarity, did not appear to be significant drivers of Kd, CHL or SPM values during the tested period. Temporal trends in the drivers themselves are discussed in the context of the observed optical changes.
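The boosted-model driver ranking described above can be sketched with scikit-learn; the driver names and the synthetic data below are illustrative, not the study's actual inputs:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def rank_drivers(X, y, names):
    """Fit a boosted-tree model to a water quality variable (e.g. Kd) and
    rank candidate drivers by the model's relative feature importance."""
    model = GradientBoostingRegressor(random_state=0).fit(X, y)
    imp = model.feature_importances_
    order = np.argsort(imp)[::-1]
    return [(names[i], float(imp[i])) for i in order]

# Illustrative synthetic example: driver 0 dominates the response.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=300)
ranking = rank_drivers(X, y, ["bed_shear_stress", "river_discharge", "wave_height"])
```

Importances are relative (they sum to one), so they indicate which drivers the model relies on, not causal effect sizes.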
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: The Global TanDEM-X High-Resolution Coastline Product

Authors: Larissa Gorzawski, Leena Warmedinger, Carolin Keller, Martin Huber, Bernhard Lehner, Birgit Wessel, Achim Roth
Affiliations: Company For Remote Sensing And Environmental Research (SLU), German Aerospace Center (DLR), German Remote Sensing Data Center, McGill University, Department of Geography, Confluvio Consulting Inc.
The coastline is commonly understood as the object that separates the terrestrial from the marine environment. As such, it is an important element in a broad range of GIS applications, where it can be used to mask out data. Spatial planning in coastal, littoral and marine areas, as well as hazard modelling of tsunamis and storm surges, are among the most common tasks. In particular, a global coastline is required within the TanDEM-X and HydroSHEDS v.2 projects. To support these functions, the Global TanDEM-X High-Resolution Coastline Product, short TanDEM-X coastline, has been developed. The TanDEM-X coastline is a vector dataset with global coverage at a resolution of 10 m. It is derived automatically from the TanDEM-X dataset and complemented with manual checking to fulfill requirements defined from a hydrological perspective. In addition to the TanDEM-X Digital Elevation Model (DEM), the coastline is delineated from the associated amplitude layer (AMP) and height error map (HEM). Together, these layers allow open water to be identified from the backscatter and coherence characteristics of the radar signal over open water. To support the positioning of the TanDEM-X coastline, open water is only detected in the neighbourhood of a reference coastline, which is derived from the water quality layer of the Copernicus DEM. As a consequence, the TanDEM-X coastline remains within a specific margin around this reference coastline. While this works well to define the ocean-land boundary, it can create inconsistencies at the divide between ocean and inland waters, i.e. at estuaries, river mouths, lagoons, and similar objects, because ocean and inland waters cannot be differentiated purely from radar satellite data. To ensure that the requirements of hydrological applications, and in particular HydroSHEDS v.2 [1], are met, the coastline is manually positioned in selected areas, taking different existing coastline products into consideration.
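The margin constraint around the reference coastline can be sketched on rasterised masks with SciPy's Euclidean distance transform; the margin width is an illustrative parameter, not the product's actual value:

```python
import numpy as np
from scipy import ndimage as ndi

def constrain_to_margin(water_candidates, reference_coastline, margin_px):
    """Keep radar-detected open-water pixels only within a fixed distance
    of a reference coastline mask (in the product, the reference comes
    from the water quality layer of the Copernicus DEM).

    water_candidates, reference_coastline: 2-D boolean arrays.
    margin_px: margin width in pixels.
    """
    # distance (in pixels) of every cell from the reference coastline
    dist = ndi.distance_transform_edt(~reference_coastline)
    return water_candidates & (dist <= margin_px)
```

Constraining detection this way keeps the derived coastline near the reference, at the cost of the estuary/lagoon ambiguities the abstract describes.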
Nonetheless, it has to be acknowledged that the coast is a transition zone between the terrestrial and marine spheres, whereas the coastline is an artificial feature. As such, the coastline position can only be defined alongside a set of parameters. The TanDEM-X coastline consists of closed polygons representing mainland and islands; the polygons do not contain any holes and can be converted into line features. The overall quality of the coastline corresponds to that of the TanDEM-X dataset. Accordingly, the spatial resolution of 10 m is maintained in the coastal product. In this contribution we present the production process and the new Global TanDEM-X High-Resolution Coastline Product. The TanDEM-X coastline is a static product with a time stamp corresponding to the acquisition period of the TanDEM-X mission from 2010 to 2015. The Global TanDEM-X High-Resolution Coastline Product is open source under CC BY 4.0 and will be available at https://geoservice.dlr.de/.

[1] Lehner, B., A. Roth, M. Huber, M. Anand, and M. Thieme (2022), A sharper look at the world's rivers and catchments, Eos, 103, https://doi.org/10.1029/2022EO220167. Published 12 April 2022.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Suitability of Copernicus Marine Products for Estonian Coastal Areas (Baltic Sea)

Authors: Mirjam Uusõue, Oleksandr Borysenko, Kersti Kangro, Krista
Affiliations: Tartu Observatory, University of Tartu
The entire Baltic Sea, including Estonian coastal areas, is significantly impacted by eutrophication and pollution, which threaten ecological balance and water quality. To address these challenges, the European Union (EU) Water Framework Directive (WFD) has been implemented. It states that all EU member states must make efforts to achieve "good" status for all surface waters by 2027. This involves comprehensive monitoring and annual reporting of surface water status. While many indicators can be monitored only with in-situ data, the integration of remote sensing offers a complementary approach for monitoring water quality indicators such as chlorophyll-a, transparency and nutrients over large spatial and temporal scales. To support the reporting of WFD water statuses for Estonian coastal areas, the Cop4ESTCoast project has been initiated. This project aims to provide case studies and promote the Copernicus Coastal Hub by developing a prototype web interface utilizing two Copernicus Marine products:
- Baltic Sea Ocean Colour Plankton, Reflectances, Transparency and Optics L3 NRT, for daily observations of transparency and chlorophyll-a concentrations, and
- Baltic Sea Biogeochemistry Analysis and Forecast, for daily surface-layer nutrient (nitrate (NO3) and phosphate (PO4)) concentrations.
Transparency is calculated from the Kd490 values included in the Copernicus Marine product using an established algorithm. Buffer zones of 500 m around the islands and 1000 m around the mainland are used to minimize adjacency effects and bottom reflectance interference. This study explores the suitability and validation of Copernicus Marine products for Estonian coastal waters, where their application presents challenges. These coastal waters are highly diverse, including clear, turbid, chlorophyll-rich, and coloured dissolved organic matter-rich waters. Therefore, it is very important to validate these products.
Validation is performed using in-situ data on transparency, chlorophyll-a, nitrate and phosphate concentrations measured during national monitoring campaigns and sourced from the Estonian Environment Agency database kese.envir.ee. Additionally, long-term trends since 2016 are assessed to evaluate the reliability of the Copernicus Marine products and their potential to complement traditional monitoring for effective water quality management.
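The Kd490-to-transparency conversion can be sketched as follows. The abstract does not name its "established algorithm", so the simple inverse relation below, with an empirical coefficient (1.7 is a placeholder, not the project's value), is an assumption:

```python
import numpy as np

def secchi_from_kd490(kd490, coeff=1.7):
    """Illustrative conversion from the Kd490 diffuse-attenuation product
    to Secchi-depth transparency (m) via Zsd = coeff / Kd490.

    Non-positive Kd490 values (fill/invalid pixels) map to NaN.
    """
    kd = np.asarray(kd490, dtype=float)
    out = np.full_like(kd, np.nan)
    valid = kd > 0
    out[valid] = coeff / kd[valid]
    return out
```

In practice the coefficient would be tuned against the in-situ Secchi measurements used for validation.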
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Suitability of existing Satellite Chlorophyll products for year-round eutrophication assessments around the UK

Authors: Tiago Silva, Mike Best, Naomi Greenwood, Matt Holland, Dimitry van der Zande, Michelle
Affiliations: Centre For Environment, Fisheries And Aquaculture Science
The OSPAR Eutrophication assessment relies on a conceptual cause-effect framework to assess excessive primary production due to nutrient pollution (OSPAR 2022). Three indicators are used to measure i) a necessary condition, ii) the direct effect and iii) an indirect effect. The necessary condition is a high level of winter nutrients, the direct effect is an increase in the biomass of phytoplankton, and a resulting indirect effect is a reduction in the level of dissolved oxygen. The biomass of phytoplankton is assessed via chlorophyll a concentration. In the 2023 assessment a combination of in situ and satellite observations was used for the first time, with a 50% weight for each type. The inclusion of satellite estimates was made possible by improvements in chlorophyll algorithms that make them coherent across different water types, thus allowing the application of consistent concentration thresholds. For the Greater North Sea, the project JMP EUNOSAT (2017-2019) produced a spatially coherent algorithm (Lavigne et al., 2021). For the chlorophyll indicator, only the levels during the growing season are considered; for most regions this is between 1 March and 30 September. It has been shown from both in situ observations (Desmit et al. 2020) and satellite images (Alvera-Azcárate et al. 2021) of the North Sea that the spring bloom has been happening earlier and the autumn bloom later. In particular, in the Southern North Sea the autumn bloom now happens as late as November. At the same time, yearly averaged chlorophyll levels have declined from a peak in the late 1980s. A global increase in sea surface temperature has been predicted to reduce primary production (Capuzzo et al., 2018) but potentially to increase it in cooler waters, due to a northward shift of the phytoplankton community (Richardson and Schoeman, 2004).
The plankton community is also responding to imbalanced reductions in nutrients, with riverine inputs of phosphorus decreasing significantly while waterborne nitrogen inputs remain high. Year-round assessment of chlorophyll a levels over coastal and shelf waters presents additional challenges, mostly due to three factors: low winter sun angle, increased wind-driven turbidity and higher Coloured Dissolved Organic Matter levels due to increased precipitation. This study assesses the suitability and limitations of several existing chlorophyll products in the seas around the UK. The following aspects are assessed: i) availability and variability of satellite observations, ii) accuracy compared with in situ data and iii) biases related to winter weather systems. The datasets compared were from RBINS, OC-CCI and GlobColour, using respectively the algorithms from Lavigne et al. (2021), Sathyendranath et al. (2010) and Maritorena et al. (2010). The assessment shows which datasets work best for each COMP4 water body and characterises the error as a function of the season. The quantification of the error can be used to guide the integration of in situ and satellite estimates in eutrophication assessments.

Alvera-Azcárate, A., Van der Zande, D., Barth, A., Troupin, C., Martin, S., Beckers, J.-M., 2021. Analysis of 23 Years of Daily Cloud-Free Chlorophyll and Suspended Particulate Matter in the Greater North Sea. Front. Mar. Sci. 8. https://doi.org/10.3389/fmars.2021.707632
Capuzzo, E., Lynam, C.P., Barry, J., Stephens, D., Forster, R.M., Greenwood, N., McQuatters-Gollop, A., Silva, T., van Leeuwen, S.M., Engelhard, G.H., 2018. A decline in primary production in the North Sea over 25 years, associated with reductions in zooplankton abundance and fish stock recruitment. Global Change Biology 24, e352–e364. https://doi.org/10.1111/gcb.13916
Desmit, X., Nohe, A., Borges, A.V., Prins, T., De Cauwer, K., Lagring, R., Van der Zande, D., Sabbe, K., 2020.
Changes in chlorophyll concentration and phenology in the North Sea in relation to de-eutrophication and sea surface warming. Limnology and Oceanography 65, 828–847. https://doi.org/10.1002/lno.11351
Lavigne, H., Van der Zande, D., Ruddick, K., Cardoso Dos Santos, J.F., Gohin, F., Brotas, V., Kratzer, S., 2021. Quality-control tests for OC4, OC5 and NIR-red satellite chlorophyll-a algorithms applied to coastal waters. Remote Sensing of Environment 255, 112237. https://doi.org/10.1016/j.rse.2020.112237
Maritorena, S., d'Andon, O.H.F., Mangin, A., Siegel, D.A., 2010. Merged satellite ocean color data products using a bio-optical model: Characteristics, benefits and issues. Remote Sensing of Environment 114, 1791–1804. https://doi.org/10.1016/j.rse.2010.04.002
OSPAR (2022). Revision of the Common Procedure for the Identification of the Eutrophication Status of the OSPAR Maritime Area. OSPAR Agreement 2022-07.
Richardson, A.J., Schoeman, D.S., 2004. Climate impact on plankton ecosystems in the Northeast Atlantic. Science 305, 1609–1612. https://doi.org/10.1126/science.1100958
Sathyendranath, S., Brewin, R.J.W., Brockmann, C., Brotas, V., Calton, B., Chuprin, A., Cipollini, P., Couto, A.B., Dingle, J., Doerffer, R., Donlon, C., Dowell, M., Farman, A., Grant, M., Groom, S., Horseman, A., Jackson, T., Krasemann, H., Lavender, S., Martinez-Vicente, V., Mazeran, C., Mélin, F., Moore, T.S., Müller, D., Regner, P., Roy, S., Steele, C.J., Steinmetz, F., Swinton, J., Taberner, M., Thompson, A., Valente, A., Zühlke, M., Brando, V.E., Feng, H., Feldman, G., Franz, B.A., Frouin, R., Gould, R.W., Hooker, S.B., Kahru, M., Kratzer, S., Mitchell, B.G., Muller-Karger, F.E., Sosik, H.M., Voss, K.J., Werdell, J., Platt, T., 2019. An Ocean-Colour Time Series for Use in Climate Studies: The Experience of the Ocean-Colour Climate Change Initiative (OC-CCI). Sensors 19, 4285. https://doi.org/10.3390/s19194285
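The accuracy comparison against in situ data (aspect ii) can be sketched as a log-space match-up, since chlorophyll is approximately log-normally distributed. The statistics chosen here (log10 bias and RMSD) are a common ocean-colour convention, not necessarily the study's exact metrics:

```python
import numpy as np

def matchup_stats(sat_chl, insitu_chl):
    """Compare matched satellite and in situ chlorophyll values in log10
    space, returning bias (mean log difference), RMSD and sample count.

    A positive bias means the satellite product overestimates chlorophyll.
    """
    s = np.log10(np.asarray(sat_chl, dtype=float))
    i = np.log10(np.asarray(insitu_chl, dtype=float))
    diff = s - i
    return {
        "bias": float(diff.mean()),
        "rmsd": float(np.sqrt((diff ** 2).mean())),
        "n": int(diff.size),
    }
```

Running this per season and per COMP4 water body would characterise the error as a function of season, as the study describes.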
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Instant Waterline Detection Using Multi Mission and Multispectral Satellite Imagery: 40-Years Reconstruction of the Southern Latium Coastal Sector.

Authors: Giulia Iacobucci, Prof Daniela Piacentini, Prof Francesco
Affiliations: Earth Sciences Department, Sapienza University Of Rome
Coastal areas are widely recognized as among the most sensitive and dynamic geomorphological systems, especially with respect to weather and climatic conditions. Coastal erosion and deposition alternate cyclically, strictly linked to fluvial transport, wave and tide regimes, coastal flooding, sea level rise, tectonism, and anthropic pressure. Considering that about 2.15 billion people live in seashore areas, and future projections show relevant growth in coastal populations, the study of shoreline morphodynamics is relevant for integrating cost-effective and sustainable management. In the national framework of the National Recovery and Resilience Plan (PNRR) financed by Next Generation EU, the extended partnership RETURN (multi-Risk sciEnce for resilienT commUnities undeR a changiNg climate) aims to strengthen the research chains on environmental, natural and anthropogenic risks under climate change. Specifically, the scientific goal of Diagonal Spoke (DS) 8 (Science underpinning climate services for risk mitigation and adaptation) is to develop innovative models for forecasting atmospheric, hydrologic and marine impact-oriented indicators, along with their uncertainty assessment. For coastal areas, the variation of shoreline position and morphology has been identified as a key impact-oriented indicator for determining the effects of climate change. The Italian Peninsula has about 7,500 km of coasts, of which 943 km are eroding and 970 km are prograding, according to the recent report of the Italian Institute for Environmental Protection and Research (ISPRA, 2023) comparing the shorelines of 2006 and 2020. Therefore, the reconstruction of coastal dynamics in specific study areas is fundamental for effective land management, becoming an indispensable tool for government agencies and stakeholders.
The southern coastal area of the Latium region (Central Italy), with a seashore about 50 km long, offers an ideal study area for reconstructing shoreline morphodynamics since 1984. Multispectral and multi-mission satellite imagery such as Landsat 4, 5 and 8 and Sentinel-2 offers an incomparable dataset for reconstructing coastal morphodynamics. The main goals of the present work are i) the annual reconstruction of the instant waterline, and ii) the detection of erosional vs. depositional sectors and the determination of their rates. Through the computation of NDWI, about 40 instant shorelines have been reconstructed for the summer season from 1984 up to 2024. The application of the USGS Digital Shoreline Analysis System (DSAS) then allowed erosional and depositional sectors to be recognized, highlighting a maximum regression rate of about 1 m/yr (according to the Linear Regression Rate). Finally, the accuracy of the extracted shorelines has been evaluated, showing that those extracted from the satellite imagery in 1998, 2005, and 2019 are fully comparable to the shorelines reconstructed by ISPRA from orthophotos. Indeed, the average difference is within the uncertainty calculated for Landsat and Sentinel imagery.
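The NDWI waterline separation and the DSAS-style Linear Regression Rate can be sketched as follows; the band values and the zero water threshold below are illustrative:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI used to separate water from land:
    (Green - NIR) / (Green + NIR). Water pixels give positive values."""
    g = np.asarray(green, dtype=float)
    n = np.asarray(nir, dtype=float)
    return (g - n) / (g + n)

def linear_regression_rate(years, shoreline_positions):
    """DSAS-style Linear Regression Rate (LRR) for one transect: slope of
    shoreline position (m, positive seaward) against acquisition year, so
    a negative slope indicates regression (erosion)."""
    slope, _ = np.polyfit(years, shoreline_positions, 1)
    return slope
```

Per transect, the annual NDWI-derived waterline positions would be fed into `linear_regression_rate` to classify the sector as erosional or depositional.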
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Assessing the Effect of Water on Submerged and Floating Plastic Detection Using UAV and Sentinel-2 Remote Sensing Data and K-Means Clustering

Authors: Lenka Fronkova, Dr Ralph Brayne, Joseph Ribeiro, Martin Cliffen, Francesco Beccari, James Arnott
Affiliations: Centre for Environment Fisheries And Aquaculture Science, Headwall Photonics, Texo
Marine and freshwater plastic pollution is a worldwide problem affecting ecosystems and human health. Although remote sensing has been used to map large floating plastic rafts, there are research gaps in detecting submerged plastic due to the limited amount of in situ data. This study was the first to collect in situ data on submerged and floating plastics in a freshwater environment and to analyse the effect of water submersion on the strength of the plastic signal. A large 10 × 10 m artificial polymer tarpaulin was deployed in a freshwater lake for a two-week period and was captured by a multi-sensor unmanned aerial vehicle (UAV) system and the Sentinel-2 satellite. Spectral analysis was conducted to assess the attenuation of individual wavelengths of the submerged tarpaulin in hyperspectral UAV and multispectral Sentinel-2 data. A K-Means unsupervised clustering algorithm was used to classify the images into two clusters: plastic and water. Additionally, we estimated the optimal number of clusters present in the hyperspectral dataset and found that classifying the image into four classes (water, submerged plastic, near-surface plastic and buoys) significantly improved the accuracy of the K-Means predictions. The submerged plastic tarpaulin was detectable to ~0.5 m below the water surface in near infrared (NIR) (~810 nm) and red edge (~730 nm) wavelengths. However, the red spectrum (~669 nm) performed best, with ~84% true plastic positives, classifying plastic pixels correctly even up to ~1 m depth. These individual bands outperformed the dedicated Plastic Index (PI) derived from the UAV dataset. Additionally, this study showed that it is currently not possible, using either the Sentinel-2 bands or the Sentinel-2-derived indices (PI or the Floating Debris Index (FDI)), to determine whether and how much of a 10 × 10 m plastic tarpaulin is under the water surface.
Overall, this study showed that spatial resolution was more important than spectral resolution in detecting the submerged tarpaulin. These findings directly contributed to Sustainable Development Goal 14.1 on mapping large marine plastic patches of 10 × 10 m and could be used to better define systems for monitoring submerged and floating plastic pollution. The study was published here: https://doi.org/10.3390/rs16234405
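The K-Means step, including the estimation of the optimal number of clusters, can be sketched with scikit-learn. The silhouette criterion used here is an assumption, since the abstract does not name its selection method:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_pixels(spectra, k_range=range(2, 6)):
    """Run K-Means on per-pixel spectra (rows = pixels, columns = bands)
    for each candidate cluster count and keep the k with the highest
    silhouette score."""
    best_k, best_labels, best_score = None, None, -1.0
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(spectra)
        score = silhouette_score(spectra, labels)
        if score > best_score:
            best_k, best_labels, best_score = k, labels, score
    return best_k, best_labels
```

In the study's four-class case, the resulting clusters would then be mapped to water, submerged plastic, near-surface plastic and buoys by inspecting their mean spectra.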
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: F.02.03 - POSTER - Using satellite EO in global capabilities for disaster response

Satellite EO can provide damage mapping and situation mapping in a rapid fashion. This is illustrated by two large, operational EO capabilities for disaster response: the International Charter Space & Major Disasters and the Copernicus Emergency Mapping Service (CEMS) of the European Union. These EO-based services draw on a broad range of EO missions, radar and optical, with relevant products for several types of hazards, both hydrometeorological hazards and geohazards. Beyond awareness issues, the main barriers to user uptake are the quality of the geoinformation provided and its timeliness. Furthermore, with climate change, more and more extreme events are occurring in many regions of the world, increasing the need for robust services including wide-area damage mapping. Novel EO techniques and advanced IT can help enhance the accuracy and reduce the latency of disaster response services, including multi-sensor multi-temporal techniques combined with Artificial Intelligence to increase coverage and timeliness, and cloud-optimized methods able to access EO data in a seamless fashion from large virtual constellations. Moreover, because of climate change, there is an increasing need for EO-based geoinformation that can be used in the long term for the analysis and comparison of historical events, for better understanding risks and for multi-hazard risk management.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Montandon: The Global Crisis Data Bank

#stac

Authors: Emmanuel Mathot, Sanjay Bhangar, Sajjad Anwar
Affiliations: Development Seed
The International Federation of Red Cross and Red Crescent Societies (IFRC) started the Global Crisis Data Bank (GCDB, also known as Montandon) initiative in 2021 with support from UNDRR, UN OCHA, and the WMO. The GCDB aims to create a centralized database that harmonizes hazard, impact, and corresponding response data for every event from various sources, including GDACS, EM-DAT, and DesInventar. This addresses crucial gaps in the humanitarian data ecosystem, specifically for IFRC's own National Societies, in understanding the past to better prepare for the future. IFRC's vision for Montandon is to become the largest archive of structured data about current and historical disasters worldwide. This enables analysis to reveal patterns at various spatial and temporal resolutions. Montandon is the foundation for forecast models and systems and a dynamic database constantly reflecting the humanitarian community's approaches. Over the last year, the IFRC team has built several components of Montandon, including a data schema, data processing and transformation scripts, as well as a proof-of-concept API. Development Seed has started a new phase with the IFRC GO development team to enhance Montandon components, making the project operational for use within IFRC and the broader humanitarian community. We introduce a new technical approach to operationalize Montandon, also known as the 'Monty' database, utilizing established technologies to harmonize the data model. A key part of this process is model normalization, which involves integrating data into SpatioTemporal Asset Catalogs (STAC) to ensure efficient metadata management for disaster events. A dedicated extension (https://github.com/IFRCGo/monty-stac-extension) outlines specifications and best practices for cataloging all attributes related to hazards, impacts, and responses.
We perform an in-depth analysis of a range of data sources, including GDACS and DesInventar, to effectively extract, transform, and load data into a harmonized model. One of the primary challenges we encounter is the organization of information across various data models and definitions of disasters. A critical aspect of our work involves the implementation of event pairing, which is necessary to connect the different episodes associated with a disaster. This method enables the use of STAC API functions to effectively search, filter, and aggregate data on hazards and their impacts across one or multiple events. These functions are designed for broad application by platforms such as IFRC GO and the Disaster Response Emergency Fund (IFRC-DREF). They aim to provide essential data and actionable insights, enhancing decision-making for stakeholders involved in disaster management. Finally, this initiative also seeks to significantly improve the gathering and analysis of responses to various critical events. To achieve this, it will focus on integrating with established frameworks such as the Disasters Charter and the Copernicus Emergency Management Service. These collaborations are intended to enhance the systematic cataloging of satellite imagery and value-added products. Among the key outputs will be detailed flood maps that provide insightful visual representations of flood-prone areas, accurate forecasts outlining the projected paths of cyclones, and near-real-time mapping of wildfires to aid timely response and resource allocation at global and regional levels.
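The STAC-based normalization can be illustrated with a minimal STAC Item built as a plain dictionary. The `monty:hazard_code` property and the event id below are hypothetical placeholders, not fields or identifiers from the actual monty-stac-extension:

```python
import json
from datetime import datetime, timezone

def make_event_item(event_id, lon, lat, occurred, hazard_code):
    """Build a minimal, JSON-serializable STAC Item for a hazard event.

    Only core STAC Item fields are used; the 'monty:' property is an
    illustrative extension-style field, not the extension's real schema.
    """
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": event_id,
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "bbox": [lon, lat, lon, lat],
        "properties": {
            "datetime": occurred.isoformat(),
            "monty:hazard_code": hazard_code,   # illustrative field name
        },
        "links": [],
        "assets": {},
    }

item = make_event_item("example-eq-2024-001", 13.4, 42.3,
                       datetime(2024, 5, 1, tzinfo=timezone.utc), "EQ")
```

Because the item is plain JSON, it can be ingested by any STAC API, which is what enables the search, filter and aggregation functions described above.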
Keywords: Global Crisis Data Bank (GCDB), Montandon, International Federation of Red Cross and Red Crescent Societies (IFRC), Data Access, Interoperability, Humanitarian Data, Disaster Management, Hazard Data, Impact Data, Response Data, Data Schema, Data Processing, STAC (SpatioTemporal Asset Catalog), API (Application Programming Interface), Model Normalization, Disaster Response Emergency Fund (IFRC-DREF), Satellite Imagery, Copernicus
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Secure Decentralized Analytics for Earth Observation: Leveraging Zero-Knowledge Proofs for Trustable Onboard Machine Learning

Authors: Robert Cowlishaw, Dr Annalisa Riccardi
Affiliations: University Of Strathclyde
As machine learning models move onboard satellites, analysing data through onboard inference becomes an efficient way of reducing the volume of satellite data, especially large Earth observation data, that must be downlinked to the ground. However, trusting the analytics produced by these models means trusting the operator of the satellite, which is especially important in scenarios such as disaster response. A decentralised network of satellites operating together automatically requires a mathematical proof that the analytics are true. This work discusses a method of generating such a proof, based on a public model and a public analytic but a private dataset, to prove to other entities on the decentralised network that the analytics are true based on what the model infers from the satellite's dataset. This maintains the low downlink bandwidth requirements of single analytics while adding the ability to prove the analytic is correct. To generate this proof, a mechanism called a Zero-Knowledge (zk) Proof is used, comprising a proving mechanism run locally on the satellite and a verification mechanism, run on a decentralised blockchain network, to verify the proof in a secure way. A classification machine learning model task is used as a case study, which determines the type of disaster from an Earth observation image. The Ethereum blockchain network is used to run the verification on-chain within a smart contract. First, the process of encoding the model as a zk-circuit is shown, as well as the proof creation. A hash is created from the input image and placed on-chain to prove that the image has not already been used, followed by pre-processing with only zk-proof-compatible techniques. The zk proof and model inference are run side by side on this pre-processed input image, with two outputs: the analytics (the disaster type classification) and the zk proof. 
The proof is finally shown to be valid through the verification process, which is completed on-chain to maintain the security and trustability of the final proof. A multitude of images are used to generate many proofs, and other model architectures are discussed, to show the generalisation of the zk proof generation and verification technique in the Earth observation onboard processing scenario.
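The on-chain image-commitment step described above can be illustrated with a minimal sketch. The digest choice (SHA-256) and the in-memory `seen` set are stand-ins for illustration only; the actual system may use a different hash (e.g. Keccak on Ethereum) and stores commitments in a smart contract rather than in local memory:

```python
import hashlib

def image_commitment(image_bytes: bytes) -> str:
    """Hex digest suitable for publishing on-chain as a commitment."""
    return hashlib.sha256(image_bytes).hexdigest()

seen = set()  # stand-in for on-chain storage of past commitments

def register(image_bytes: bytes) -> bool:
    """Reject an image whose commitment has already been published."""
    digest = image_commitment(image_bytes)
    if digest in seen:
        return False
    seen.add(digest)
    return True

fake_tile = b"\x00\x01" * 512  # placeholder for real EO image bytes
assert register(fake_tile) is True   # first use: commitment accepted
assert register(fake_tile) is False  # replay of the same image: rejected
```

This only captures the anti-replay role of the hash; proving that the published analytic really came from running the model on the committed image is the job of the zk circuit.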
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: The CEOS Missions, Instruments, Measurements and Datasets (MIM) Database - Overview, Upcoming Missions, and Applications

Authors: Marie-Claire Greening
Affiliations: ESA
The Committee on Earth Observation Satellites (CEOS) Missions, Instruments, Measurements and Datasets (MIM) Database was established by ESA well over 30 years ago, and ESA continues to maintain this now highly-valued resource as a key contribution to the Earth observation (EO) community. The CEOS MIM Database is the only official annually-updated consolidation of the programmes and plans of civil space agencies. The database has a number of objectives:
● To encourage information sharing in support of international Earth observation coordination efforts.
● To facilitate Earth observation mission and measurement gap analyses.
● To connect the Earth observation user community with the satellite-operating agencies of CEOS.
The CEOS MIM Database currently features details of over 500 publicly-operated Earth observing satellite missions, and over 1000 instruments, that are currently operating or are planned for launch within the next 15 years. This presentation will touch on several aspects of the CEOS MIM Database, including:
● An overview of the current database, including information sources, capabilities and future plans;
● Recent trends in Earth observation missions and capabilities;
● The connection between the MIM and the EO Portal (eoportal.org), an independent “sister” site, also managed and maintained by ESA, that contains more in-depth engineering information about specific missions;
● Examples of significant international space agency future mission planning analyses that have used the MIM, including ESA’s Science Strategy and NASA’s Decadal Survey;
● An outline of the periodical publications of dedicated Earth observation handbooks that present in more detail the main capabilities and applications of EO in specific domains, the latest (2023) version of which was dedicated to the first UNFCCC Global Stocktake; and
● A case study on the use of the MIM for international coordination of greenhouse gas observations. 
The CEOS MIM Database is available online at database.eohandbook.com.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: C.02.12 - POSTER - ESA's Biomass mission

BIOMASS is the 7th ESA Earth Explorer mission. Earth Explorers are the backbone of the science and research element of ESA’s Living Planet Programme, providing an important contribution to the global endeavor of understanding the Earth system.
The overall objective of the Biomass mission is to reduce the uncertainty in the worldwide spatial distribution and dynamics of forest biomass in order to improve current assessments and future projections of the global carbon cycle. To enable this, the Biomass mission data products will provide consistent global estimates of forest biomass, forest height, forest disturbance and re-growth parameters.
The Biomass Satellite industrial Prime contractor is Airbus Defence and Space, Stevenage (UK). The radar payload is built by Airbus Defence and Space, Friedrichshafen (Germany).
The Biomass payload consists of a fully-polarimetric left-looking P-band SAR, the first of its kind in this frequency band for Earth observation purposes. The BIOMASS mission is designed to last 5 years and consists of two phases: a tomographic and an interferometric phase.
The Biomass Qualification Acceptance Review is scheduled to be concluded within 2024 with the satellite due for launch in 2025.

Biomass will provide global maps of the amount of carbon stored in the world's forests and how these change over time. Biomass will also provide essential support to UN treaties on the reduction of emissions from deforestation and forest degradation. Forest type and forest cover worldwide can be detected by today's satellites, but the mission’s unique capabilities will allow access to a global forest structural parametrisation obtained with homogeneous quality and sampling, enabling a multitude of global maps of these main forest parameters over the mission lifetime.
Apart from the above, the session also intends to cover the wider context of how carbon science has moved on and how Biomass, GEDI and many other elements provide the bigger picture.

The session will highlight the latest developments covering overall mission status, mission science and foreseen exploitation, higher-level products (implementation and algorithmic content) and ground segment.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Enhanced Orbit and Baseline Control Strategy for the BIOMASS Mission

Authors: Sérgio Brás, Itziar Barat, Mr Berthyl Duesmann
Affiliations: European Space Agency
BIOMASS is a mission from the European Space Agency (ESA) that will measure the amount of biomass and carbon stored in forests. This mission will provide key information about the carbon cycle and will allow a better understanding of the impact of human activity on the Earth system. BIOMASS will be the first spacecraft equipped with a P-band synthetic aperture radar (SAR) and will provide the backscattering coefficients for each of the different linear polarizations. By observing the same area of forest from different angles (i.e. baselines), the interferometric correlation between each of the acquisitions can be computed and used to derive an accurate quantification of the biomass. Moreover, tomography methodologies will be applied to the SAR measurements to derive complete information about the vertical structure of the forest. To satisfy the different science needs during the mission, two nominal phases are defined: i) a tomographic phase (TOM), used to build a baseline stack of interferometric observations and reconstruct the scattering of the vegetation as a function of height; and ii) an interferometric phase (INT), used to compute the forest biomass. BIOMASS will fly in a frozen-eccentricity, dawn-dusk, near-repeat-cycle orbit and will take advantage of the resulting ascending node drift to generate the observation baselines with the geometry necessary for the P-band interferometric calculations. The reference orbit is characterized by a repeat cycle of 3 days and a cycle length of 44 orbits, which corresponds to 14+2/3 revolutions per day. To reach global coverage, BIOMASS combines measurements of three sub-swaths. During the TOM phase, seven measurements are performed using sub-swath 1, followed by seven using sub-swath 2, and another seven using sub-swath 3. Since the repeat cycle of the reference orbit is 3 days, approximately the same scene is observed 3 days after an observation, but from a position slightly drifted westward. 
After the measurements using the 3 sub-swaths, a satellite repositioning manoeuvre (SRM) is performed so that the next measurement cycle uses sub-swath 1 to observe the areas adjacent to the last measurements with sub-swath 3. While the measurement stack during the TOM phase comprises seven observations, in the INT phase the measurement stack is composed of three observations with each sub-swath. In this work, we demonstrate that, after a full cycle, the distances between longitudinally consecutive ascending nodes of a near-repeat-cycle orbit are not constant. For this reason, the total equatorial distance to be observed by the instrument needs to be larger than the Earth's circumference divided by the cycle length of the reference orbit (in this case 44 orbits). Moreover, we show that, by performing orbit maintenance manoeuvres with the same periodicity as the orbit cycle, we can achieve a much tighter baseline and ground-track control than with a classical approach. Finally, the necessity of performing SRMs with a duration that is a multiple of the repeat cycle is discussed and its advantages are highlighted.
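The reference-orbit figures quoted above can be checked with simple arithmetic. This is a back-of-the-envelope sketch of the nominal, perfectly repeating geometry; as the abstract argues, a near-repeat-cycle orbit deviates from these constant spacings:

```python
# Nominal geometry of the quoted Biomass reference orbit: 44 orbits in a
# 3-day repeat cycle. Real near-repeat orbits drift from these values,
# which is precisely the effect exploited for interferometric baselines.
repeat_cycle_days = 3
cycle_length_orbits = 44

revs_per_day = cycle_length_orbits / repeat_cycle_days
print(revs_per_day)                         # 14.666... = 14 + 2/3 rev/day

node_spacing_deg = 360.0 / cycle_length_orbits
print(round(node_spacing_deg, 2))           # 8.18 deg between ascending nodes

earth_circumference_km = 40075.0            # equatorial circumference
print(round(earth_circumference_km / cycle_length_orbits))  # ~911 km spacing
```

The ~911 km figure is the equatorial distance each of the 44 ascending nodes would cover if the spacing were exactly constant; the abstract's point is that the swath budget must exceed this nominal value.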
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Quantifying the interaction between vegetation biomass dynamics and atmospheric CO₂

Authors: Hui Yang, Dr Maurizio Santoro, Siyuan Wang, Dr Philippe Ciais, Prof Markus Reichstein, Dr Simon Besnard, Dr Ingrid Luijkx, Dr Chunhui Zhan, Dr Nuno Carvalhais
Affiliations: Max Planck Institute for Biogeochemistry, Peking University, Gamma Remote Sensing, Laboratoire des Sciences du Climat et de l'Environnement, Helmholtz Center Potsdam GFZ German Research Centre for Geosciences, Wageningen University
Upcoming Earth observation missions will bring unprecedented detail and comprehensiveness for understanding and quantifying the role of terrestrial ecosystems in the global carbon cycle. Yet, existing missions and datasets are essential to approach longer-term dynamics and the processes underpinning biosphere-atmosphere feedback at climate scales. Here, exploring a global long-term dataset on vegetation aboveground biomass (AGB) dynamics, spanning from 1992 to 2019, and contemporaneous atmospheric CO₂ measurements, we focus on (1) understanding the contribution of biomass dynamics to the atmospheric CO₂ growth rate and (2) quantifying the CO₂ fertilization effect on plant biomass. Adopting a fully data-driven three-box model that simulates carbon dynamics within live vegetation, woody debris, and soil organic carbon pools, and considers wildfires and spatio-temporal changes in primary productivity, we are able to explain over 60% of the observed variability in the atmospheric CO₂ growth rate over the period 1997-2019 (R = 0.78, p-value < 0.05), with a low RMSE of 1.0 PgC yr⁻¹. Our results show that, globally, lagged effects from heterotrophic pools account for 50% of the variability in the atmospheric CO₂ growth rate, more than three times the direct contribution of transient effects from the live biomass pool. These findings highlight the importance of quantifying tree mortality and the cascading carbon release from litter and soils in shaping the terrestrial carbon balance. We further leverage these Earth observations to isolate the specific contribution of elevated CO₂ concentration to biomass dynamics using both local multiple regression and residual methods. The approach is evaluated across an ensemble of dynamic global vegetation model simulations, showing low errors (RMSE: 0.04 and 0.02) and high correlation (R²: 0.79 and 0.88; p-value < 0.005). 
Globally, satellite-derived estimates indicate an increase in AGB of 16.9% [13.9–18.8%] per 100 ppm rise in CO₂ concentration. These observation-based estimates are close to those estimated by current land surface models (16.3 ± 5.0%) but exceed estimates from global Earth system models (12.7 ± 6.5% for CMIP5, 13.2 ± 4.6% for CMIP6), suggesting that Earth system models underestimate the contribution of land ecosystems to dampening anthropogenic CO₂ emissions. Overall, vegetation-atmosphere interactions from annual to decadal time scales show both the strong role of carbon-loss processes and legacy dynamics, alongside a CO₂ fertilization effect on biomass that is modest, though larger than in global models. Ultimately, these results argue for the joint use of the FLEX and BIOMASS Earth Explorer missions to constrain assimilation, disturbance, and recovery processes in Earth system models.
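The three-box structure described above can be sketched as a minimal annual-step pool model. All pool sizes, turnover rates and the NPP value below are invented for illustration; the actual model is fully data-driven and additionally accounts for wildfires and spatio-temporal productivity changes:

```python
# Minimal sketch of a three-pool (live vegetation, woody debris, soil
# organic carbon) box model. All rate constants and stocks are invented
# placeholders, not the calibrated values of the study.
def step(pools, npp, k_veg=0.05, k_deb=0.10, k_soil=0.02, f_deb_to_soil=0.5):
    """Advance pools (PgC) by one year; return new pools and respired flux."""
    veg, deb, soil = pools
    veg_loss = k_veg * veg          # mortality / turnover out of live biomass
    deb_loss = k_deb * deb          # woody-debris decomposition
    soil_loss = k_soil * soil       # heterotrophic soil respiration
    veg += npp - veg_loss
    deb += veg_loss - deb_loss
    soil += f_deb_to_soil * deb_loss - soil_loss
    resp = (1 - f_deb_to_soil) * deb_loss + soil_loss  # flux back to atmosphere
    return (veg, deb, soil), resp

pools = (450.0, 75.0, 1500.0)   # invented initial stocks, PgC
for _ in range(10):
    pools, resp = step(pools, npp=60.0)
```

The structure makes the abstract's "lagged effects" point visible: carbon leaving the live pool keeps feeding respiration from the debris and soil pools for years afterwards.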
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: BIOMASS: Strategies and Challenges of a Special LEOP

Authors: Elia Maestroni, Jakob Karg
Affiliations: ESA
BIOMASS, the seventh Earth Explorer mission, aims to reduce the uncertainty in the global spatial distribution and dynamics of forest biomass. To achieve this objective, the spacecraft is equipped with the first space-borne P-band synthetic aperture radar (SAR), which will provide global maps of forest biomass stocks, forest disturbance, and growth. The BIOMASS SAR requires a large passive element to reflect the signal, an antenna dish of approximately 12 meters in diameter, composed of a reflective mesh and an impressive number of large and small structural elements, known as the Large Deployable Reflector (LDR). This huge "umbrella" is held in position by a 7-meter assembly comprising three booms connected by motorized locking hinges. During the launch, the reflector and booms are tightly folded and secured to the back of the spacecraft to fit inside the launcher fairing. This presentation addresses the unique challenges and strategies for deploying these elements in orbit during the Launch and Early Operations Phase (LEOP). Due to these complexities, the LEOP, conducted from the European Space Operations Centre (ESOC) in Darmstadt, Germany, requires nearly nine days of critical in-flight operations, compared to a more classical three-day LEOP of other EO spacecraft. This long duration, and the complexity of the deployment, create a set of unique constraints and drivers, both technical and programmatic, for the operators. A summary of the autonomous post-separation events and the activities performed by ground control after the first acquisition is provided. The presentation then focuses on how these drivers, low-level procedures, and constraints, were translated into an overall operational timeline that provides sufficient robustness for nominal and contingency recovery operations, without extending beyond the agreed level of affordability. 
The success of these critical operations is closely related to the level of preparation of the team of teams that executes and supports the early operations phase. This preparation covers not only technical understanding of the spacecraft and its operations but also the effectiveness of communication and decision-making processes. These aspects are extensively trained during the Simulations Campaign that precedes the launch. BIOMASS-specific aspects of such training are also presented.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Mapping Tropical Forest in Gabon: A successful cooperation between ESA - NASA – DLR

Authors: Irena Hajnsek, Matteo Pardini, Roman Guliaev, Ralf Horn, Martin Keller, Marc Jäger, Kostas Papathanassiou, Marc Simard, Temilola Fatoyinbo, Sassan Saatchi, Naiara Pinto, Michele Hofton, Bryan Blair, Ralph Dubayah, Tania Casal
Affiliations: German Aerospace Center, ETH Zürich, JPL NASA, NASA, University of Maryland, ESA
Tropical forests are particularly important. Although they cover only about 6% of Earth’s surface, they are home to approximately 50% of the world’s animal and plant species, and their trees store 50% more carbon than trees outside the tropics. At the same time, they are one of the most endangered ecosystems on Earth: about 6 million hectares per year are felled for timber or cleared for farming. Compared to the other components of the carbon cycle (i.e. the ocean as a sink and the burning of fossil fuels as a source), the uncertainty in the local land carbon stocks and carbon fluxes is particularly large. This is especially true for tropical forests, which remain poorly characterized compared to other ecosystems on the planet. More than 98% of the land-use change flux is thought to be due to tropical deforestation, which converts carbon stored as woody biomass (of which around 50% is carbon) into emissions. In the frame of ESA's BIOMASS mission, selected in May 2013 as the 7th Earth Explorer mission to meet the pressing need for information on tropical carbon sinks and sources through estimates of forest height and biomass, a first airborne campaign over tropical forest in Gabon was conducted. The campaign, called AfriSAR, took place over four forest sites in Gabon, where two acquisitions in different seasons were made: the first conducted by ONERA (SETHI system, July 2015) and the second by DLR (F-SAR system, February 2016). After 7 years, a second campaign, called GABONX, was conducted over exactly the same test sites, plus two further sites selected by the Gabonese partners (the space agency AGEOS and the Ministry of Forest), with the same configuration. This time the campaign also serves the mission objectives of ROSE-L, with its six-day repeat-pass cycle, acquiring fully polarimetric and multi-baseline datasets in L- and P-band. 
The GABONX campaign was conducted in May 2023, when polarimetric tomographic stacks were acquired with DLR's airborne radar F-SAR system. Lidar data over the same test sites were acquired with NASA's airborne lidar LVIS system. One year later, radar data were acquired in L-band and P-band with NASA's airborne UAVSAR system. The main purpose of this work is to present an overview of the cooperation between NASA, ESA and DLR for the common purpose of investigating tropical forest in L- and P-band for forest height and structure estimation. Reference: Matteo Pardini, Marivi Tello, Victor Cazcarra-Bes, Konstantinos P. Papathanassiou, Irena Hajnsek, "L- and P-Band 3-D SAR Reflectivity Profiles Versus Lidar Waveforms: The AfriSAR Case," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 10, 2018.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: On the Use of Multi-Squint for the Ionospheric Calibration of SAR Images: The Biomass Case

Authors: Felipe Betancourt Payan, Dr. Marc Rodriguez-Cassola
Affiliations: German Aerospace Center (DLR)
The Biomass mission from the European Space Agency (ESA) is a response to the current need to measure environmental variables that assess the state of the planet in its fight against climate change. With measuring above-ground biomass (AGB) at a global scale as its primary objective, this ambitious Synthetic Aperture Radar (SAR) mission will be the first to operate a fully polarimetric P-band (435 MHz central frequency) system from space. This mission concept poses several challenges that require innovative solutions. Given the low frequency, the ionosphere is expected to introduce significant degradation in the observations, related to phase errors (defocusing, azimuth shifts and phase screens in the interferograms), intensity scintillation and Faraday rotation. All these effects directly relate to the Total Electron Content (TEC) between the satellite and the imaged scatterers. For this reason, the ionospheric calibration of the system is closely linked to high-resolution ionospheric imaging and the extraction of geophysical parameters. The SAR principle is based on the synthesis of a very long antenna by coherently adding echoes as a platform (a satellite in this case) moves past the targeted scene to generate high-resolution 2-D complex images. As the platform moves past the scene, it looks at it with varying squint angles in the along-track (azimuth) coordinate; the squint angle is the angle between the instantaneous look direction and the center of the beam. Most current calibration algorithms assume this squint angle is very small, neglecting the variation of the ionospheric and system effects inside the beam (the beam-center approximation). This assumption, at first, seems to hold for Biomass (with a 3-degree azimuth beam aperture). 
In this talk, we discuss this approximation and show that it might leave considerable errors in the calibration, ionospheric imaging and parameter extraction that can benefit from appropriate squint-angle accommodation. In particular, state-of-the-art Faraday rotation calibration in quad-pol data and TEC estimation are made using the Bickel and Bates approach. The transformation between Faraday rotation angle and TEC is done with the angle between the line of sight and the local geomagnetic field vector (Bk). This estimation is highly sensitive to system calibration residuals and to knowledge of the geomagnetic field and geometry. We will show that, with an efficient multi-squint Bickel and Bates approach, it is possible to accurately account for the Bk variation inside the beam and cancel residuals characteristic of the beam-center approximation. Likewise, we will discuss how this approach helps detect errors due to local magnetic field uncertainty and further system calibration errors. The results are shown with simulated data from the Biomass End-to-End Performance Simulator (BEEPS) and with real Biomass data (if available by the time of the presentation).
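The link between Faraday rotation, TEC and the geomagnetic projection along the line of sight follows the standard first-order relation Ω ≈ K·B∥·TEC/f², with K ≈ 2.36×10⁴ in SI units. The sketch below uses illustrative input values (not Biomass calibration data) and deliberately ignores the variation of B∥ across the beam that the multi-squint approach is designed to capture:

```python
import math

# First-order Faraday rotation: omega = K * B_parallel * TEC / f**2.
# K ≈ 2.36e4 in SI units (rad with TEC in electrons/m^2, B in tesla, f in Hz).
K = 2.36e4

def faraday_rotation_deg(tec_el_per_m2, b_parallel_tesla, freq_hz):
    """One-way Faraday rotation angle in degrees (beam-center value only)."""
    omega_rad = K * b_parallel_tesla * tec_el_per_m2 / freq_hz**2
    return math.degrees(omega_rad)

tecu = 1e16          # 1 TECU in electrons/m^2
f_p_band = 435e6     # Biomass centre frequency
f_l_band = 1.27e9    # ALOS-PALSAR, for comparison
b_par = 4e-5         # ~40 microtesla, illustrative field projection

print(round(faraday_rotation_deg(20 * tecu, b_par, f_p_band), 1))  # ~57.2 deg
print(round(faraday_rotation_deg(20 * tecu, b_par, f_l_band), 1))  # ~6.7 deg
```

The 1/f² scaling is why a moderate 20 TECU ionosphere, almost benign at L-band, rotates P-band polarizations by tens of degrees and makes the calibration residuals discussed above so consequential.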
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Ionospheric Irregularity Height Estimation Based on Scintillation Signatures in SAR

Authors: Felipe Betancourt Payan, Dr. Jun Su Kim, Dr. Kostas Papathanassiou, Dr. Marc Rodriguez-Cassola, Dr. Gerhard Krieger
Affiliations: German Aerospace Center (DLR)
Low-frequency synthetic aperture radar (SAR) benefits from its penetration capability to image volumetric and sub-surface scatterers (e.g., ice, dense forest and the ground below tree canopies). At the frequency of the ALOS-PALSAR instrument (L-band or 1.27 GHz), the quality of the products is highly affected by the ionosphere and its irregularities. In particular, the turbulent component of the equatorial ionosphere (with the local variations in electron density and total electron content) introduces rapid spatially changing phase errors to the propagating radar wavefronts. Assuming the turbulent irregularities are located in a narrow vertical space, it is common to approximate the disturbances to a phase-modulating screen. The one-way wave propagation model is the following: free-space propagation between the satellite and the ionospheric turbulent layer, followed by phase modulation and then free-space propagation from the turbulent layer to the ground. As a result, the radar cross-section of the imaged scatterers is modified with the so-called intensity scintillations. Because of the elongated nature of the equatorial ionospheric irregularities, the signature is also known as stripes. The stripe signature has been well-known since the beginning of the ALOS-PALSAR operations. The stripes are known to be oriented along the geomagnetic field and their structure is directly related to the spatial structure of the ionospheric irregularities. It is also known that the stripes are usually more visible when the satellite track is parallel to the geomagnetic field vector projected on the horizontal plane, as seen from the line of sight. Even in the presence of irregularities, the stripes are not necessarily visible if the flight direction is off this geomagnetic field projection. In the literature, the ionospheric irregularity height is usually related to the height of maximum ionization, which can be estimated from standard models such as NeQuick and IRI. 
Besides considering the model uncertainties, we are particularly interested in the accurate estimation of the irregularity height (which may or may not coincide with the height of maximum ionization). This information is crucial from the calibration point of view and as a geophysical parameter that serves the community. SAR accomplishes its high along-track (azimuth) resolution by coherently processing the returned phase history, which relates to the varying range between targets on the ground and the satellite as it moves. This way, it is possible to synthesize a very large antenna with a sharper resolution than the real aperture. The azimuth processing takes place with a matched filter, which acts as an averaging filter. The matched filter is adapted to the range between the satellite and the imaged targets. Consequently, if we consider that the ionosphere is a target in between, its signature is smeared in azimuth in the raw and compressed data. Semi-focusing of the data is accomplished by tuning the range parameter in the matched filter. In particular, it has previously been shown that using this approach (with knowledge of the ionospheric irregularity height and consequently the range between the satellite and the phase screen) allows estimating the Faraday rotation angle in quad-pol data, in principle at a resolution limited only by the radar resolution, the sampling frequency and the noise. In this presentation, we show that semi-focusing can also be used to visualize the ionospheric stripes even when they are initially not visible in the focused data. By semi-focusing, it is possible to decouple the scintillation from the azimuth processing. In other words, it is possible to undo the synthetic aperture averaging to see the scintillation at high resolution. With a maximum-contrast autofocus, it is possible to estimate the irregularity height. The methodology is validated with an ALOS-PALSAR dataset. 
We also present another method for estimating the ionospheric irregularity height based on azimuth sub-looks (less averaging takes place, and the stripes become visible by processing only parts of the full synthetic aperture). Sub-look processing has been used in the past, for example, in the state-of-the-art method for ionospheric irregularity height estimation at high latitudes known as the Faraday rotation parallax. We show that it is possible to use a similar principle with the scintillation signature instead. The two main benefits are (i) the scintillation parallax proposed in this work operates at equatorial latitudes (where there is no Faraday rotation signature) and (ii) there is no need for full polarimetry, since the scintillation can be measured in single-channel images. References: F. Betancourt-Payan, J. S. Kim, K. P. Papathanassiou, M. Rodriguez-Cassola and G. Krieger, "Estimating Ionospheric Height From Amplitude Scintillation Signatures in SAR Images," IEEE Geoscience and Remote Sensing Letters, vol. 21, pp. 1-5, 2024, Art no. 4013405, doi: 10.1109/LGRS.2024.3431023.
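The parallax principle behind the sub-look method can be sketched with flat, small-angle geometry: a screen feature at height h, viewed from two sub-look phase centres separated along-track by Δx_sat, appears shifted on the ground by Δx_ground = Δx_sat·h/(H−h), which inverts directly for h. All numbers below are illustrative placeholders; the real method estimates the shift by correlating stripe patterns between sub-look images and works with the full spherical SAR geometry:

```python
# Flat-geometry, small-angle sketch of height-from-parallax: a ray from the
# satellite at (x_sat, H) through a fixed screen point at height h hits the
# ground at a position that shifts by dx_ground = dx_sat * h / (H - h) when
# the viewpoint moves by dx_sat.
def screen_height(dx_ground_m, dx_sat_m, sat_height_m):
    """Invert the parallax relation for the phase-screen height."""
    return sat_height_m * dx_ground_m / (dx_sat_m + dx_ground_m)

H = 700e3       # illustrative LEO altitude (m)
h_true = 350e3  # illustrative F-layer irregularity height (m)
dx_sat = 5e3    # along-track separation between sub-look phase centres (m)

dx_ground = dx_sat * h_true / (H - h_true)  # forward model of the shift
print(screen_height(dx_ground, dx_sat, H))  # recovers 350000.0
```

The same inversion underlies both the Faraday rotation parallax and the scintillation parallax; only the observable used to measure the ground shift differs.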
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone R-S)

Poster: Updates from the Project Office Biomass: Building a User Community in Germany and Beyond

Authors: Nicole Börner, Hui Yang, Siyuan Wang, Stefanie Johnson, Dr Nuno Carvalhais, Dr. Friedrich Bohn, Prof. Andreas Huth, Kostas Papathanassiou, Prof. Dr. Christiane Schmullius, Dr. Carsten Pathe
Affiliations: Max Planck Institute for Biogeochemistry, Helmholtz Centre for Environmental Research, German Aerospace Center, Friedrich Schiller University, Peking University, Earth Observation Services GmbH
The project office BIOMASS (EEBIOMASS) aims to build a user community and communicate relevant information on the ESA Earth Explorer Mission BIOMASS. Our goal is to improve understanding of the key role of present and future Earth observations for quantifying the land biosphere contribution to the global carbon cycle. EEBIOMASS is a coordination initiative of the Max Planck Institute for Biogeochemistry, the Microwaves and Radar Institute of the German Aerospace Center, the Helmholtz Centre for Environmental Research and the Department for Remote Sensing at the Friedrich Schiller University Jena. EEBIOMASS synthesizes, produces and disseminates relevant information to the scientific, academic and wider user communities and fosters communication to maintain a necessary dialogue in the broader context of the upcoming BIOMASS mission. EEBIOMASS actively engages with the community to identify gaps between the mission products and the collective requirements regarding (1) tomographic and interferometric data; (2) biomass data products; (3) terrestrial carbon cycle and Earth system science; (4) secondary mission objectives; and (5) education, in order to develop solutions within the context of BIOMASS. To do so, the EEBIOMASS approach involves four key steps: 1) gathering information across user communities on the current BIOMASS-like data sources and tools used for their different purposes; 2) collecting information about the mission steps, timings and data deliverables; 3) contrasting user expectations with mission commitments to identify gaps; and 4) proposing solutions to close gaps according to existing data and tools beyond BIOMASS. Activities from the EEBIOMASS team include organizing online webinars, training courses, namely the online PolInSAR course, and summer schools (https://eebiomass.org/education/). 
We also conduct surveys to identify the goals and data needs of the BIOMASS community at large, with the further aim of contributing to the development of a community-oriented data utilization concept. EEBIOMASS has coordinated fifteen workshops (https://eebiomass.org/workshops/), with speakers from all over the world, covering a wide range of research topics related to BIOMASS downstream research, such as forest mapping (covering aspects of uncertainties, forest structure, and co-existing high-resolution products), terrestrial carbon cycle dynamics, and global vegetation modelling, among other topics. Ultimately, the BIOMASS mission products cater to a diverse group of users, including technical and non-technical experts, industry and services, and teaching and research beyond the fundamental scientific purposes. As the mission comes close to fruition, the EEBIOMASS community has grown steadily over the years, reaching 720 members by the end of November 2024. Overall, we outline here the current state of the synthesis, assessment and analysis conducted by the Project Office BIOMASS team, and we welcome discussions on BIOMASS data, user needs and potential data applications.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: C.01.10 - POSTER - Microwave Instrument Concepts and Technologies to enable new EO Science Missions

New microwave instrument concepts and technologies are enabling new Earth observation science missions.
This session aims to discuss new microwave remote sensing instrument concepts and present advances in microwave instrument technologies and related instrument pre-development activities.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Digital Beam Forming and Synthetic Aperture Interferometry: two sides of the same coin?

Authors: Dr Eric Anterrieu, Dr Nemesio Rodriguez-Fernandez, Alexandre Mege, Patrice Gonzalez
Affiliations: CNRS, Airbus Defence and Space, CNES
Digital beam forming and synthetic aperture interferometry are signal processing techniques that mix the signals collected by an array of elementary antennas to obtain high-resolution images with the aid of a computer. Although aperture synthesis was the accepted term for many years, since the idea was to synthesize a complete aperture from a diluted one using interferometric data, it is now supplanted by the alternative expression synthesis imaging, as the idea is to synthesize high-resolution images by computer from whatever data are available. Both paradigms can be seen as two sides of the same coin. On one hand, there is additive synthesis with digital beam forming, where the signals collected by the elementary antennas are combined all together in an additive way, so that all the antennas of the array are simultaneously involved in measuring the antenna temperature seen by the array. On the other hand, there is multiplicative synthesis with synthetic aperture interferometry, where the signals collected by the elementary antennas are combined pairwise to measure the correlation between the signals collected by antenna pairs, yielding samples of the coherence function, also termed complex visibilities. In both cases, the measured data, an antenna array temperature map for digital beam forming and a set of complex visibilities for synthetic aperture interferometry, have to be inverted with the aid of a computer in order to retrieve the high-resolution brightness temperature distribution observed by the array. Although the two techniques use the same signals and share the same goal, several differences deserve attention when comparing them, and these will be the guiding idea of this contribution. The most important one concerns the level and the spatial distribution of the floor error that is observed when inverting the measured data provided by the two paradigms. 
From a qualitative point of view, whatever the polarization, the spatial distribution of the reconstruction floor error in the synthesized field of view exhibits interesting differences from one paradigm to the other. From a quantitative point of view, global metrics show a significant reduction of the level of this error in every polarization with digital beam forming compared to synthetic aperture interferometry. Moreover, with respect to the directional signature of this floor error, it turns out that digital beam forming performs better than synthetic aperture interferometry at any incidence angle, leading to an almost constant level of floor error over a wide range of incidence angles, whereas synthetic aperture interferometry has a clear angular signature. This is not without interest for the science derived from the inversion of multi-angular brightness temperatures, as will be the case with FRESCH (a Fine Resolution Explorer for Salinity, Carbon and Hydrology), a mission proposal to study ocean-land-ice interfaces that has been submitted to EE12. However, all these conclusions are derived from numerical simulations conducted with the antenna array of SMOS, not from experimental results. This is why the FANTASIOR project (a Flexible ANTennas Array for Synthesis Imaging in Observational Radiometry) is under development. This phased array aims at recording on digital media the radio signals captured by the elementary antennas, in order to combine them at a later time to produce either complex visibilities or antenna array temperatures, and to invert them so as to compare the two paradigms experimentally.
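The additive/multiplicative duality described in this abstract can be illustrated numerically: the output power of a digital beam former equals the same quadratic form evaluated on the complex visibilities, since the time average of |w^H x|^2 is exactly w^H V w, where V is the visibility (correlation) matrix. A minimal sketch, in which the array geometry, source direction and noise level are illustrative assumptions and not SMOS parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_samp = 8, 4096
d = np.arange(n_ant)                      # element positions, half-wavelength units

# Toy scene: one point source at direction-cosine u0, plus receiver noise.
u0 = 0.3
steer = np.exp(1j * np.pi * d * u0)
src = rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp)
noise = 0.1 * (rng.standard_normal((n_ant, n_samp))
               + 1j * rng.standard_normal((n_ant, n_samp)))
x = np.outer(steer, src) + noise          # signals at the elementary antennas

# Multiplicative synthesis: pairwise correlations -> complex visibilities.
V = x @ x.conj().T / n_samp

# Additive synthesis: digital beam forming toward direction-cosine u.
def dbf_power(u):
    w = np.exp(1j * np.pi * d * u) / n_ant
    return np.mean(np.abs(w.conj() @ x) ** 2)

# The beamformed power is the same quadratic form evaluated on the visibilities.
u = 0.3
w = np.exp(1j * np.pi * d * u) / n_ant
p_dbf = dbf_power(u)
p_vis = np.real(w.conj() @ V @ w)
```

The identity holds exactly for any steering direction, which is why the two paradigms share the same information content; the differences discussed above arise in how each measurement is inverted, not in the raw data.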
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Alternative applications of SWOT KaRIn time series

Authors: Urs Wegmüller, Christophe Magnard
Affiliations: Gamma Remote Sensing
The Surface Water and Ocean Topography (SWOT) mission is a joint collaboration between NASA (USA), CNES (France), CSA (Canada) and UKSA (UK), and was launched on Dec. 16, 2022. One of the instruments on SWOT is a near-nadir viewing Ka-band Radar Interferometer (KaRIn), which collects data over two distinct 50 km swaths using two SAR antennas located at the opposite ends of a 10 m boom. For the main mission applications, higher-level data products such as sea surface heights, water masks, a lake product and a river product are available. In addition, the single-pass interferometric Ka-band SLC pairs can be downloaded. For our investigation we downloaded KaRIn SLC data over several sites, including agricultural areas, forest, built-up areas, coastal areas and sea ice. In our contribution we present the processing techniques we used, the difficulties encountered, and the results obtained using SWOT KaRIn time series over land. A distinctive feature of SWOT KaRIn data is that at the very low (< 6 deg.) incidence angles, layover effects become very significant. The interferometric phase relates to the height difference between the effective scattering height of the target and the reference height used in the time-domain back-projection algorithm that was used to focus the KaRIn data. In areas affected by layover, the reference height used does not correspond to a combination of the heights of the contributing areas, but to the height of the contributing area with the lowest height. This approach is well suited to the inland water application, as it yields a nearly constant interferometric phase for inland water surfaces that are affected by layover but dominate the backscatter in the laid-over region. The Ka-band backscatter over water is high, except under occasional very calm conditions. Over land surfaces the backscatter is lower than over water, with lower values for rougher or more densely vegetated areas. 
The backscatter enables applications such as monitoring changes on agricultural fields and tracking the development of sea ice. The results clearly indicate potential beyond the main mission applications and are of interest to improve our understanding of Ka-band backscatter, InSAR phase and coherence at near-nadir viewing angles.
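The phase-to-height relation mentioned in this abstract can be sketched with the standard flat-Earth single-pass cross-track InSAR conversion. The carrier frequency, slant range, baseline and incidence angle below are rounded assumptions for a KaRIn-like near-nadir geometry, not mission specifications:

```python
import numpy as np

# Simplified flat-Earth relation: a 2*pi phase cycle corresponds to one
# "height of ambiguity". All numbers are illustrative assumptions.
c = 299_792_458.0
f = 35.75e9                  # Ka-band carrier frequency (assumed)
lam = c / f
R = 890e3                    # slant range, ~orbit height at near nadir (assumed)
B = 10.0                     # physical cross-track baseline (boom length)
theta = np.deg2rad(3.0)      # near-nadir incidence angle (assumed)

# Height of ambiguity for a single-pass (bistatic) interferometer.
h_amb = lam * R * np.sin(theta) / B

def phase_to_height(phi):
    """Height offset from the reference surface for interferometric phase phi."""
    return phi * h_amb / (2 * np.pi)
```

With these toy numbers the height of ambiguity is a few tens of meters, which is consistent with the qualitative statement that a water surface dominating a laid-over region produces a nearly constant phase: its height offset from the back-projection reference maps to a single phase value.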
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Harmony SAR phase synchronization and performance validation using a prototype End-to-End simulator

Authors: Daniele Mapelli, Tatiana Martelli, Pietro Guccione, Matteo Aletti, Davide Giudici, Riccardo Piantanida, Lonardo Carrer
Affiliations: Aresys S.r.l.
HARMONY is the 10th Earth Explorer Mission, envisaged as a mission with two passive SAR satellites that orbit in formation with one of the Copernicus Sentinel-1 satellites (the illuminator) to address key scientific questions related to ocean, ice, and land dynamics. Two separate mission phases are foreseen: (i) the along-track bistatic stereo phase, where the two passive SARs fly ahead of and behind S-1 at a large along-track distance (350 km is the design value), and (ii) the across-track bistatic phase, where the two HARMONY satellites fly on the same side of S-1, maintaining the along-track distance of the previous phase but with, in addition, an across-track displacement of a few hundred meters to allow for interferometric measurement. The current paper is devoted to the presentation of the prototype E2E simulator, composed of (i) a SAR Instrument Data Breadboard Simulator (IDBS), (ii) a SAR Level 1 Breadboard Processor (L1BP) and (iii) a Performance Assessment Tool (PAT), developed by Aresys in the framework of HARMONY SAR performance. The main goal of the activity is twofold: (i) to verify and demonstrate compliance with the SAR requirements, and (ii) to verify and validate the L1 Processor for bistatic data. Besides validating the “canonical” SAR performances (e.g., IRF parameters, NESZ, ambiguity-to-signal ratio), the prototype E2E simulator is also exploited to validate a possible data-driven phase synchronization algorithm proposed in the framework of the Harmony Phase B2 activities. In fact, like many bistatic SARs, HARMONY suffers from synchronization issues in both phase/frequency and time. The technique we propose resorts to the Multi Squint approach to implement an adaptive slow-time tracking of the USO-driven phase contribution. The synchronization aims to estimate and compensate this contribution before the single-pass interferometric applications of the across-track bistatic mission phase. 
After a brief presentation of the Multi Squint synchronization method's theoretical approach, the current paper focuses on significant experimental results to prove the method's effectiveness.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Effects on Brightness Temperature Bias from Simulated Antenna Pattern of the Arctic Weather Satellite Radiometer.

Authors: Catalina Medina, Roland Albers, Alistair Bell, Axel Murk
Affiliations: Institute of Applied Physics, University of Bern
The Arctic Weather Satellite (AWS) is a smallsat prototype for a meteorological constellation called EPS-Sterna [1]. The payload consists of a cross-track scanning microwave radiometer built by AAC Omnisys operating in the 54, 89, 183 and 325 GHz bands. As AWS is baselined for a constellation and will be manufactured in comparatively large quantities, it needs to be limited in complexity and size to maintain affordability. For the optics this is achieved by foregoing beam co-alignment. Instead, a split-block feed cluster with 4 feedhorns directly illuminates the continuously spinning scan reflector. No individual feed is located in the focal point of the reflector, which leads to scan-angle-dependent performance variations. Although the total spillover variation has been minimized in the design, the beam patterns for each frequency still exhibit far side-lobe contributions which change their shape and spatial distribution depending on scanning angle. The University of Bern has performed physical optics (PO) [2] simulations in all frequency bands of the instrument. In order to take multiple reflections between the optical elements and the surrounding spacecraft structure into account, we also performed method of moments (MoM) simulations up to 183 GHz. All simulations were conducted using the TICRA Tools software package. As the 54 GHz feed edge taper is the largest, this band suffers from the largest spillover. For 89 GHz and 183 GHz the spillover is reduced, but still significant. At 325 GHz the scanning reflector is underilluminated and spillover is not a concern. By simulating the full radiative sphere of the instrument, we have knowledge of the distribution of power arriving at the radiometer from each viewing angle. These contributions can be broken down into cold sky, absorber terminations of the spacecraft, and Earth. The spillover in view of the instrument absorber and cold sky is easier to compensate, as their brightness temperatures are known. 
For spillover in view of Earth, the brightness temperature is not known and varies depending on the geolocation of each far side lobe. Each region of the antenna pattern with a view of Earth will contribute to the measured brightness temperature according to the power distribution and its projection onto the Earth geoid. This work focuses on simulating the brightness temperature bias due to the instrument's non-Gaussian antenna pattern under different Earth scenarios. Every simulation is made with the MoM-simulated antenna patterns and compared with a perfect Gaussian beam with the same half-power beamwidth (HPBW), which is used in the operational L1b processor. Multiple scanning angles for each frequency are considered. First, we generate simple Earth scenarios considering two different temperature masses that do not change with viewing angle. Secondly, we generate more complex Earth scenarios by incorporating radiative transfer models. These include standard atmospheric models with varying brightness temperatures, depending on frequency and scanning angle, and considering different surface emissivities. These scenarios are convolved with the antenna patterns to get a brightness temperature at each scanning angle. Finally, we compare the impact on the calibration of brightness temperature of using a simple Gaussian pattern versus a complex antenna pattern with far side lobes in Earth view. We think this work is relevant for all AWS end users, as well as members of the wider microwave radiometry community. [1] Towards EPS-Sterna, EUMETSAT Std., 2024, brochure: PRG.FS.03, V1. [Online]. Available: https://www.eumetsat.int/media/51305 [2] R. Albers, M. M. Bilgic, K.-E. Kempe, A. Bell and A. Murk, "Spillover Analysis and Mainbeam Characterisation of Arctic Weather Satellite Radiometer Using Method of Moments," in IEEE Open Journal of Antennas and Propagation, vol. 5, no. 6, pp. 1795-1804, Dec. 2024, doi: 10.1109/OJAP.2024.3462601.
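The side-lobe bias mechanism described in this abstract can be sketched with a toy 1-D model: the measured antenna temperature is the pattern-weighted average of the scene, so a far side lobe looking at a region of different brightness temperature biases the result relative to an ideal Gaussian main beam. All numbers below (HPBW, side-lobe level and position, scene temperatures) are illustrative assumptions, not AWS values:

```python
import numpy as np

ang = np.linspace(-60, 60, 2401)                 # viewing angle grid (degrees)
hpbw = 2.0
sigma = hpbw / (2 * np.sqrt(2 * np.log(2)))      # Gaussian width from HPBW
gauss = np.exp(-0.5 * (ang / sigma) ** 2)        # ideal main beam

# Same main beam plus a -30 dB far side lobe centred at +40 degrees.
sidelobe = 1e-3 * np.exp(-0.5 * ((ang - 40) / 5) ** 2)
pattern = gauss + sidelobe

# Two-temperature scene: warm surface in the main beam, cold region under the lobe.
scene = np.where(ang < 20, 280.0, 150.0)

def antenna_temperature(g, tb):
    # Pattern-weighted average of the scene brightness temperature.
    return np.sum(g * tb) / np.sum(g)

bias = antenna_temperature(pattern, scene) - antenna_temperature(gauss, scene)
```

Even a -30 dB lobe produces a bias of a sizeable fraction of a kelvin in this toy case, which is why the geolocation-dependent Earth-view spillover discussed above matters for calibration.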
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Link Budget Analysis for the Upcoming PLATiNO-1 SAR Mission in Bistatic Configurations

Authors: Gerardo Di Martino, ALESSIO DI SIMONE, Antonio Iodice, Daniele Riccio, Giuseppe Ruello, Giovanni Paolo Blasone, Deodato Tapete, Simona Zoffoli
Affiliations: University of Naples Federico II, Italian Space Agency
Introduction: In recent years, spaceborne bistatic synthetic aperture radar (SAR) systems have been gaining increasing attention in the context of microwave remote sensing and Earth observation [1], [2]. Indeed, compared with monostatic SAR platforms, bistatic systems enable new imaging modes and applications owing to the larger number of degrees of freedom of the acquisition geometry. Single-pass interferometry with a single transmitter and two receivers is just one example, but other applications can be envisioned in support of fundamental societal challenges, ranging from climate change to natural and anthropogenic disaster monitoring and mitigation [3]. Several space-based bistatic SAR missions will become fully operational in the near future and are planned to offer bistatic imaging capabilities with large bistatic angles on a systematic basis. The Italian Space Agency (ASI) is currently developing the PLATiNO-1 (PLT-1) mission [4], which will operate jointly with the COSMO-SkyMed Second Generation (CSG) satellites to form a spaceborne bistatic SAR system [5]. PLT-1 is conceived to work in two consecutive phases: during phase 1 (Ph-1), it will operate in the same orbit as CSG at fixed along-track distances of up to 350 km, enabling SAR acquisitions with large bistatic angles; during phase 2 (Ph-2), PLT-1 will be lowered to a 400 km-altitude orbit, where it will operate in a nominal monostatic mode, with bistatic imaging capabilities over selected areas when in conjunction with CSG [4]. Methodology: A preliminary assessment of the expected imaging performance of the upcoming PLT-1 mission is carried out here by means of a power budget analysis in both operating phases, namely Ph-1 and Ph-2. The imaging capabilities are measured through a quantitative evaluation of the signal-to-noise ratio (SNR) and the Noise-Equivalent Sigma Zero (NESZ) using the standard bistatic SAR equation. 
Both parameters are computed on the final Level-1 product, i.e., after range compression and azimuth focusing, starting from the input system and scene parameters, including antenna gain, transmitting power, imaging geometry, spatial resolution, and target scattering coefficient. The analysis is carried out assuming different imaging geometries – i.e., different relative positions between the satellites –, polarizations, and natural surface types, including smooth and rough bare soil surfaces, whose electromagnetic behavior is described via proper scattering models. Results: As expected, the lower altitude of PLT-1 during Ph-2 allows for better imaging performance in terms of NESZ with respect to Ph-1. Indeed, during Ph-1 it emerged that targets with an NRCS lower than -20.7 dB are hardly observable, while during Ph-2 the weakest observable target has an NRCS around -22.9 dB, which is aligned with the CSG monostatic system, whose estimated NESZ is -22.6 dB. Additionally, in VV imaging mode, the largest SNR values are achieved with small along-track and large positive cross-track displacements, regardless of the terrain type, due to the lower Rx-target distance and the larger scattering coefficient. Conversely, in the cross-pol channel, large along-track distances are preferable. Acknowledgment: This work was supported by the Italian Space Agency through the Project “SimulAzione e Modellazione del Sistema BistAtico COSMO-SkyMed/Platino (SAMBA)” (Agreement n. 2023-5-HB.0), CUP n. F63C23000860001. References: [1] G. Farquharson et al., “The New Capella Space Satellite Generation: Acadia,” 2023 IEEE IGARSS, Pasadena, USA, 2023, pp. 1513-1516. [2] P. López-Dekker et al., "The Harmony Mission: End of Phase-0 Science Overview," 2021 IEEE IGARSS, Brussels, Belgium, 2021, pp. 7752-7755. [3] G. Di Martino, A. Di Simone, A. Iodice, D. Riccio and G. Ruello, "Baseline Decorrelation in Bistatic Interferometric SAR Systems Over Bare Soil Surfaces," IEEE Trans. Geosci. 
Remote Sens., vol. 62, pp. 1-13, 2024, Art no. 2007113. [4] V. Pulcino et al., “PLATiNO-1 mission: a compact X-band monostatic and bistatic SAR,” IAC 2025, Milan, Italy, 2024. [5] A. Renga et al., “Bistatic SAR techniques and products in a long baseline spaceborne scenario: application to PLATiNO-1 mission,” EUSAR 2024, Munich, Germany, 2024.
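The kind of power budget evaluation described in this abstract can be sketched with the textbook bistatic radar equation, evaluated per resolution cell after compression. Every numerical value below (power, gains, ranges, bandwidth, losses and processing gain) is an illustrative assumption, not a PLT-1 or CSG specification:

```python
import numpy as np

# Textbook bistatic radar equation for a single resolution cell; parameter
# values are placeholders for illustration only.
def bistatic_snr_db(sigma0_db, pt=2e3, gt_db=40.0, gr_db=35.0, f=9.6e9,
                    rt=620e3, rr=450e3, res_area=3.0 * 3.0,
                    t_sys=450.0, bw=300e6, losses_db=5.0, proc_gain_db=85.0):
    k = 1.380649e-23                       # Boltzmann constant
    lam = 299_792_458.0 / f
    gt, gr = 10 ** (gt_db / 10), 10 ** (gr_db / 10)
    sigma = 10 ** (sigma0_db / 10) * res_area   # RCS of one resolution cell
    num = pt * gt * gr * lam ** 2 * sigma
    den = (4 * np.pi) ** 3 * rt ** 2 * rr ** 2 * k * t_sys * bw
    # proc_gain_db stands in for range + azimuth compression gain (assumed).
    return 10 * np.log10(num / den) - losses_db + proc_gain_db

# NESZ is the sigma0 giving SNR = 0 dB; since SNR is linear in sigma0,
# it follows directly from the SNR of a 0 dB reference target.
nesz_db = -bistatic_snr_db(0.0)
```

The separate transmitter and receiver ranges (rt, rr) replace the single R^4 term of the monostatic equation, which is how the relative satellite positions enter the analysis described above.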
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Radio Frequency Interference (RFI) Survey of the Passive Microwave Bands using the Earth Observation RFI Scanner (EORFIScan)

Authors: Raul Onrubia, Roger Oliva, Jose Barbosa, Ioannis Nestoras, Adriano Jordao, David Duncan, Niels Bormann, Yan Soldo, Flavio
Affiliations: Zenithalblue Technologies
Radio Frequency Interference (RFI) has been steadily increasing all over the globe. These signals pose a critical threat to passive microwave satellite radiometers, since these instruments measure the noise emitted naturally by each body. Any artificial signal above the noise level jeopardizes the retrieval of geophysical parameters from orbit, which are invaluable for Earth Observation (EO). Feeding these contaminated retrievals to Numerical Weather Prediction (NWP) models could lead to weather forecast errors. To minimize their impact, it is recommended to flag and remove these data prior to use, and to report the RFI sources to the authorities so that they can be shut down if they lie within a protected passive microwave band. The Earth Observation RFI Scanner (EORFIScan) is a project developed by Zenithal Blue Technologies in collaboration with Research and Development in Aerospace GmbH under an ESA contract. The system is focused on the detection of RFI contamination in Earth Observation products and has been designed as a modular system so that new sensors can be easily added. Currently, SMOS, AMSR2, AMSU-A, MWHS-II and AMR-C are integrated to survey all passive microwave radiometry bands up to 166 GHz. A collection of RFI detection techniques is used to optimize the detection of contaminated observations, including the normalized polarization ratio, spatial variability, kurtosis, skewness, standard deviation, outlier detection, image enhancement using a high-pass filter, and the normalized RFI index [1-2], and the data are flagged according to three threshold levels. These thresholds are computed statistically and depend on the terrain type, the incidence angle and the latitude, among other parameters. They can be adjusted to be stricter in areas where RFI contamination is expected, using external information such as an airport database or a population density map. 
The resulting flags can be added by overwriting unused flags in the EO product, so that users do not need to modify their product-reading routines, which facilitates the integration of the new products. The number of detections, the number of observations and the contaminated observations are also stored in a database to compute the expected RFI probability associated with ground points over a certain period. The current research is surveying the RFI contamination in the passive microwave radiometric bands at 6.9, 7.3, 10.7, 18.7, 23.8, 31, 34, 36.5, 50.3, 52.8–59.3, 68–92, 114.25–122.25, 166, and 183 GHz, with the analysis conducted for the year 2022 using monthly subperiods, and at 1.4 GHz for July 2019. The results have been validated for SMOS, AMSR2 and AMSU-A by the European Centre for Medium-Range Weather Forecasts (ECMWF). For SMOS, the expected brightness temperatures are simulated using the Community Microwave Emission Modelling (CMEM) platform; for AMSR2 and AMSU-A, all-sky simulations are produced using RTTOV-SCATT. For the SMOS validation, ECMWF focused on validation at the brightness temperature level. The results showed that all three threshold levels improved the performance metrics with respect to the RFI detection method implemented in the current SMOS operational processor. Moreover, the least strict threshold achieved that performance while flagging fewer observations than the operational processor, making it a more efficient method to detect RFI. The strictest threshold showed a drastic reduction in the RFI contamination around the globe, with some residual errors in areas where the models do not fit the observations properly, such as the Sahel in sub-Saharan Africa, and the Himalayas. 
CESBIO also conducted an analysis of the soil moisture retrievals, and their results showed that the chi-squared probability of the soil moisture retrievals moved closer to 1 (the ideal value) when removing the observations flagged by EORFIScan, compared with the SMOS operational RFI detection method, indicating a good tendency in the resulting quality of the retrievals. The analysis of AMSR2 has shown contamination that is stronger and more extensive than that reported in the literature [2]. New contamination is present at 6.9 GHz in the North Sea and Yellow Sea, and at the very southern tip of Africa. At 7.3 GHz, reflections from satellites reported in Japan [3] can be seen with a larger extension and near the Arabian Sea, the US west coast, and the Gulf of Mexico. The typical reflection over the sea at 10.7 GHz from geosynchronous satellites over Europe can also be seen near the coast of the Bay of Bengal. The results at higher bands are still being evaluated, although less RFI than expected, or none at all, has been found so far. The flagged data from AMSR2 and AMSU-A were assessed via all-sky radiance departures using ECMWF's NWP model. Most flagged points exhibit positive departures, indicative of the presence of RFI contamination. The removal of contaminated observations leads to a much more symmetric PDF of departures over the sea at C-band, demonstrating the usefulness of the applied methods to detect RFI contamination. No clear signs of interference in the AMSU-A frequency bands could be found at the time of writing this abstract. The AMSR2 and AMSU-A validation has been documented in a stand-alone report [4]. The EORFIScan software has shown outstanding capabilities in identifying and flagging RFI contamination across various sensors and frequency bands. Validation results indicate that it outperforms current RFI detection methods, enhancing data quality for weather forecasting and Earth Observation. 
These findings position EORFIScan as a strong, efficient, and user-friendly solution to tackle the increasing issue of RFI contamination, facilitating its integration into operational processes and contributing positively to the scientific community's work with data from radiometers in these bands. [1] R. Onrubia, R. Oliva, P. Weston, P. de Rosnay, S. English, J. Barbosa, I. Nestoras, “The Ground RFI Detection System (GRDS), a New Concept for RFI Detection in Earth Observation Missions”, ESA Living Planet Symposium 2022, Bonn (Germany), 2022. [2] D. W. Draper, “Radio Frequency Environment for Earth-Observing Passive Microwave Imagers,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 6. Institute of Electrical and Electronics Engineers (IEEE), pp. 1913–1922, Jun. 2018. doi: 10.1109/jstars.2018.2801019. [3] Y. Wu, M. Li, Y. Bao, and G. P. Petropoulos, “Cross-Validation of Radio-Frequency-Interference Signature in Satellite Microwave Radiometer Observations over the Ocean,” Remote Sensing, vol. 12, no. 20. MDPI AG, p. 3433, Oct. 19, 2020. doi: 10.3390/rs12203433. [4] Duncan, D., and N. Bormann, 2024: Assessing RFI flags at passive microwave bands with an NWP model. ESA AO/1-11605/22/NL/SD, ESA Contract Report. https://doi.org/10.21957/eefd6f0954
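As an example of one of the statistical detectors listed in this abstract, kurtosis-based flagging exploits the fact that purely thermal emission is Gaussian distributed (kurtosis 3), while man-made signals such as a continuous-wave carrier shift this value. A minimal sketch on synthetic data; the amplitudes and threshold are illustrative assumptions, not EORFIScan settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def kurtosis(x):
    # Pearson (non-excess) kurtosis: 3 for a Gaussian.
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

n = 100_000
noise = rng.standard_normal(n)                        # thermal emission only
cw = 1.5 * np.sin(2 * np.pi * 0.01 * np.arange(n))    # continuous-wave RFI
rfi = noise + cw                                      # contaminated block

def rfi_flag(x, tol=0.15):
    """Flag a sample block whose kurtosis departs from the Gaussian value 3."""
    return abs(kurtosis(x) - 3.0) > tol

flag_clean, flag_rfi = rfi_flag(noise), rfi_flag(rfi)
```

A sinusoid has kurtosis 1.5, so the mixture's kurtosis drops below 3, which is what the detector picks up; pulsed RFI instead raises it above 3, so the test is two-sided.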
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Data Driven DSAR Synchronization: From Raw Data Simulation to Algorithm Implementation and Testing

Authors: Gianluca Coppa, Antonio Gigantino, Francesca Pelliccia, Maria Salvato, Antonio Moccia, Maria Daniela Graziano, Alfredo Renga
Affiliations: University of Naples Federico II
Distributed Synthetic Aperture Radar (DSAR) is gaining increasing interest thanks to the possibility of breaking up a large monolithic SAR satellite into several, or many, much more compact and cooperating satellites, i.e., microsatellites, nanosatellites and CubeSats. Gentle performance degradation in case of the loss of one or a few platforms, and easier system upgradability, are further advantages coming from the distributed nature of the sensor. Even though the DSAR literature is rich and diversified, important questions remain open, mainly in the field of time and clock synchronization and with reference to image reconstruction and processing [1]. These problems become more difficult when an illuminator of opportunity is used. Versatility can be added to the system in this case by exploiting a pre-existing SAR mission. Specifically, in the SIMO (Single Input Multiple Output) working mode, a single platform transmits the signal and several receivers collect the echo reflected from the scene. Indeed, the DSAR product resulting from the coherent combination of the receivers' data can be exploited to complement and improve the information from the monostatic image generated by the transmitter [2]. A first demonstration of the DSAR principle and capabilities from space has been performed through experiments on TanDEM-X data, using just two receivers. However, no real DSAR spaceborne acquisitions are currently available to verify and test specific technological solutions. In the framework of the Space It Up project, funded by the Italian Space Agency (ASI) and the Italian Ministry for Universities and Research (MUR), this work contributes to some DSAR open issues. Specifically, a DSAR simulation and processing environment able to generate and elaborate realistic DSAR raw data is developed, including the most prominent error sources, such as time, frequency and phase synchronization errors, together with noise, navigation and pointing uncertainties [1]. 
The simulator is herein used to generate the data necessary for testing algorithms that address specific synchronization problems, such as PRF reconstruction and the compensation of oscillator offsets. The work is mainly focused on data-driven synchronization, in which one assumes that no direct link is available to exchange clock information between the transmitter and the receivers. Different methods are analysed and characterized, including autocorrelation for precise computation of the PRF of the transmitter and thus correct echo windowing [3], and autofocusing for clock error determination and compensation [4]. The reference scenario is a DSAR mission where the receivers operate from CubeSat platforms. [1] M. D. Graziano, A. Renga, M. Grasso, and A. Moccia, “Error sources and sensitivity analysis in formation flying synthetic aperture radar,” Acta Astronautica, vol. 192, pp. 97–112, 2022, doi: https://doi.org/10.1016/j.actaastro.2021.10.047. [2] A. Renga, A. Gigantino, and M. D. Graziano, “Multiplatform Image Synthesis for Distributed Synthetic Aperture Radar in Long Baseline Bistatic Configurations,” IEEE Transactions on Aerospace and Electronic Systems, vol. 59, no. 6, pp. 9267–9284, 2023, doi: 10.1109/TAES.2023.3316171. [3] A. Moccia and M. D’Errico, “Bistatic SAR for Earth Observation,” in Bistatic Radar: Emerging Technology, M. Cherniakov, Ed., Chichester, UK: John Wiley & Sons, 2008, pp. 67–92. [4] G. Coppa, A. Gigantino, A. Renga, M. D. Graziano, A. Moccia, and A. Mazzeo, “Image Simulation and Processing for Time and Phase Synchronization in Spaceborne Distributed Synthetic Aperture Radar,” Proceedings of the International Astronautical Congress, IAC, vol. 2024-October, 2024.
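The autocorrelation-based PRF reconstruction mentioned in this abstract can be sketched on synthetic data: the received envelope is periodic at the (unknown) pulse repetition interval, so the first non-zero-lag autocorrelation peak reveals it. The sample rate, PRF and noise level below are illustrative assumptions, not mission parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100_000.0                       # receiver sample rate, Hz (assumed)
prf_true = 2_000.0                   # transmitter PRF to recover, Hz (assumed)
pri_samples = int(fs / prf_true)     # pulse repetition interval in samples

# Received envelope: one strong echo per PRI, buried in noise.
n = 50 * pri_samples
x = 0.2 * rng.standard_normal(n)
x[::pri_samples] += 1.0

# Autocorrelation; the first peak away from zero lag sits at the PRI.
r = np.correlate(x, x, mode="full")[n - 1:]          # lags 0 .. n-1
search = r[pri_samples // 2: 2 * pri_samples]        # skip the zero-lag peak
lag = np.argmax(search) + pri_samples // 2
prf_est = fs / lag
```

In a real receiver the estimated PRI would then drive the echo windowing, so that each receive window stays aligned with the transmitter's pulses despite the absence of a direct synchronization link.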
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: StriX with F-STEC: Toward the First High-Resolution Wide-Swath Imaging Mode for Constellations of Small SAR Satellites

Authors: Federica Bordoni, Dominik Thois, Marwan Younis, Gerhard Krieger, Alberto Moreira, Naoto Onodera, Gerald Baier, Mauro Mariotti, Budhaditya Pyne, Stefan Andrei Chelariu, Toshihiro Obata
Affiliations: German Aerospace Center (DLR), Synspective
In recent years we have witnessed the emergence and rapid spread of commercial constellations of small synthetic aperture radar (SAR) satellites for Earth observation (EO) operating in X-band. In fact, X-band systems can take advantage of mature space technology, advanced miniaturization, and commercial off-the-shelf components. Moreover, in X-band, antennas with a relatively small size can be used to generate high-quality SAR images. As examples, the Finnish ICEYE, the US Capella Space, and the Japanese Synspective build and operate constellations based on compact, low-cost spacecraft in the 100-200 kg class. Small satellite (smallsat) constellations are going to offer novel, unique support for EO applications, thanks to their ability to provide global access with high temporal and spatial resolution. In fact, a very short revisit time can be obtained by means of multiple satellites deployed into orbits with different inclinations. For instance, Synspective's StriX constellation is expected to achieve near-real-time revisit with about 30 satellites (for comparison, the next generation of large-satellite missions has a revisit time of a few days to weeks [1]). With regard to SAR image quality, smallsat constellations can achieve very high azimuth resolution over small areas by exploiting the smallsats' mechanical agility to increase the synthetic aperture in spotlight-like modes [2], [3]. In stripmap mode, larger swaths can be imaged at high/medium resolution. Specifically, for SAR images with a resolution up to 3 m, the maximum swath width is around 30 km. The inability to image larger scenes at high resolution with satisfactory quality represents a limitation of current systems: especially for a prompt response to disasters, such as earthquakes or extreme weather events, observing a wide area in detail would be very useful. 
In order to overcome this limitation and enable the StriX satellites to image large areas more efficiently, Synspective is currently leading an innovative feasibility study on the first high-resolution wide-swath (HRWS) SAR operational mode for smallsats. Within this study, the Microwaves and Radar Institute of the German Aerospace Center (DLR) is supporting the design of the operational mode and the evaluation of the achievable SAR imaging performance. The idea behind the study is to realize the first SAR satellite exploiting the frequency scanning (F-Scan) technique [4, 5]. The F-Scan technique, also denoted as frequency scan for time-of-echo compression (f-STEC), relies on: 1) the transmission of a frequency-modulated signal; 2) the use of a frequency scanning antenna, i.e., an antenna whose beam steering direction depends on the signal frequency; 3) the scan of the imaged swath by the antenna beam from far to near range. An f-STEC operational mode appears particularly suitable for HRWS imaging by means of X-band small SAR satellites for several reasons. First of all, it exploits a pencil beam, both on transmission and reception, to image a wide swath. As a consequence, compared to conventional modes, f-STEC is expected to improve the sensitivity and range ambiguity suppression. Second, the “time-of-echo compression”, i.e., the compression of the echo duration, allows for longer pulse duty cycles or, equivalently, lower peak powers, resulting in reduced system complexity. In addition, as regards the implementation, f-STEC analog beamforming has benefits in terms of lower hardware complexity and cost, compared to other beamforming techniques for HRWS imaging. Last but not least, in X-band a 1200 MHz bandwidth is reserved by the International Telecommunication Union for EO, i.e., a large system bandwidth is available (this is a prerequisite for achieving HRWS imaging through the f-STEC technique). 
This contribution will present the analysis and the results obtained on the achievable SAR imaging performance of the StriX smallsats, exploiting for the first time an f-STEC operational mode for HRWS imaging. References: [1] A. Moreira, G. Krieger, M. Younis, M. Zink, “Future spaceborne SAR technologies and mission concepts,” in Proc. IEEE Int. Geoscience and Remote Sensing Symposium (IGARSS), USA, July 2023. [2] https://www.iceye.com/sar-data/imaging-modes [3] https://support.capellaspace.com/hc/en-us/article_attachments/30977561560340 [4] C. Roemer, “Introduction to a new wide area SAR mode using the F-Scan principle,” in Proc. IEEE Int. Geoscience and Remote Sensing Symposium (IGARSS), USA, July 2017. [5] M. Younis, F. Q. de Almeida, T. Bollian, M. Villano, G. Krieger, and A. Moreira, “A synthetic aperture radar imaging mode utilizing frequency scan for time-of-echo compression,” IEEE Trans. on Geoscience and Remote Sensing, Vol. 60, 2022.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Simulation Analysis of the Imaging Performance of the Upcoming PLATiNO-1 SAR Mission

Authors: Gerardo Di Martino, Alessio Di Simone, Antonio Iodice, Daniele Riccio, Giuseppe Ruello
Affiliations: University of Naples Federico II
Synthetic aperture radar (SAR) systems operating in bistatic acquisition geometries offer several advantages over conventional monostatic systems in terms of revisit time, acquisition modes, and applications. Indeed, bistatic systems can be implemented in companion configurations, where the Earth-backscattered signal transmitted by an existing SAR is opportunistically received and processed by a receive-only platform to enable bistatic SAR acquisitions. Such an approach makes it possible to enhance the imaging capabilities of the monostatic platform by exploiting low-cost, low-power and compact platforms that can be accommodated on small satellites, such as CubeSats, even in constellation formation, e.g., distributed SAR. As a matter of fact, several public agencies are devoting great effort to the development of spaceborne bistatic SAR missions. The European Space Agency (ESA) selected the Harmony project as the tenth Earth Explorer mission. Harmony is conceived as a companion SAR mission, comprising a constellation of two receive-only satellites equipped with a C-band SAR payload operating in tandem with a Sentinel-1 satellite, which serves as an illuminator [1]. The Italian Space Agency (ASI) is currently active in the development of the PLATiNO-1 (PLT-1) mission [2]. PLT-1 is an X-band spaceborne SAR that will provide bistatic imaging capabilities in cooperation with the COSMO-SkyMed Second Generation (CSG) satellites [3]. PLT-1 will operate in two phases: during the first phase (Ph-1), PLT-1 and CSG will fly on the same orbit at distances of a few hundred kilometers, thus forming a large bistatic angle; during the second phase (Ph-2), the orbit of PLT-1 will be significantly lowered and the satellite will operate in a conventional monostatic mode. However, bistatic images will be acquired over specific areas where a sufficient beam overlap with CSG is obtained.
In order to provide an appropriate evaluation of the imaging capabilities of the upcoming PLT-1 mission, here we derive a signal processing scheme suited to both operational phases and to different target types. To this end, we considered the processing chain described in [4] and generalized it to deal with extended targets, such as bare soil surfaces with different degrees of roughness, and with non-aligned Tx-Rx orbits, as is the case in Ph-2 of the PLT-1 mission. Synthetic quality indicators, such as spatial resolution, peak-to-sidelobe ratio, integrated sidelobe ratio, and distributed target ambiguity ratio, are computed as a function of the acquisition geometry. It is demonstrated that, during Ph-1, the bistatic geometry does not lead to a significant degradation of the imaging performance for point targets compared to the monostatic system. Acknowledgment: This work was supported by the Italian Space Agency through the Project “SimulAzione e Modellazione del Sistema BistAtico COSMO-SkyMed/Platino (SAMBA)”, (Agreement n. 2023-5-HB.0), CUP n. F63C23000860001. References: [1] P. López-Dekker et al., "The Harmony Mission: End of Phase-0 Science Overview," 2021 IEEE IGARSS, Brussels, Belgium, 2021, pp. 7752-7755. [2] V. Pulcino et al., "PLATiNO-1 mission: a compact X-band monostatic and bistatic SAR," IAC 2024; 75th International Astronautical Congress, Milan, Italy, 2024. [3] A. Renga et al., “Bistatic SAR techniques and products in a long baseline spaceborne scenario: application to PLATiNO-1 mission,” EUSAR 2024, Munich, Germany, 2024. [4] G. Di Martino, A. Di Simone, A. Iodice, D. Riccio and G. Ruello, "Efficient Processing for Far-From-Transmitter Formation-Flying SAR Receivers," IEEE Trans. Geosci. Remote Sens., vol. 61, pp. 1-19, 2023, Art no. 5211419.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: A digital receive and transmit module for a generic radar electronic

Authors: Frederic Robert, Laurent Rys, Alexandre Houpert
Affiliations: Thales Alenia Space
The Digital Altimeter with Integrated Circuit study aims at the design of a generic digital Receive and Transmit Module (RT-Module) based on qualified FPGAs and discrete DACs and ADCs, with the development, design, manufacturing and validation of a proof-of-concept breadboard. The Receive and Transmit Module was defined to be part of a generic Central Electronic Unit architecture that should cover a large set of radar applications, such as nadir altimeters, nadir interferometric altimeters, wide-swath altimeters, and next-generation wave and current radars. The first use case will be the Sentinel-3 Next Generation Topography mission. The proposed poster will present how the concept has been proved through the implementation of a representative breadboard. The characterisations performed on this proof of concept allow a preliminary assessment of the key performances achievable by the product. The results obtained in terms of signal quality over the bandwidth, jitter and stability (power spectral density) are fully in line with expectations. The concept for a generic Reception/Transmission module based on discrete digital components is therefore validated, and it will be improved for further flight models on a solid design basis.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: The potential of Wideband Communication Transmissions as Signals of Opportunity for Inland and Coastal Altimetry

Authors: James Garrison, Rashmi Shah, Sidd Subramanyam, Y. Tony Song
Affiliations: Purdue University, NASA Jet Propulsion Laboratory
Altimetry is a vital tool for coastal management. It provides critical data for understanding sea level rise, developing dependable wave models, and predicting wave surges and potential flooding events. Fine-resolution (both spatial and temporal) water level data is essential for calibrating and validating coastal and general circulation models. Additionally, coastal altimetry can reveal patterns of freshwater influx from the continents, including surface discharge from streams and submarine groundwater discharge. Inland water bodies and river streamflow are also important targets for satellite altimetry, with applications in many components of the water cycle including river discharge and surface water storage. Advanced satellite altimeters, such as Surface Water and Ocean Topography (SWOT), achieve excellent spatial resolution (~1 km) but lack the necessary temporal resolution. Their revisit times (~10-21 days depending on location) are too infrequent to capture the rapid fluctuations inherent in coastal dynamics, where small-scale oceanic features exhibit fast variability. This scarcity limits our ability to understand the full spectrum of coastal processes necessary for forecasting coastal hazards such as storm surges, floods, and hurricane damage, forecasts that provide crucial actionable data for protecting communities and infrastructure. Radar altimetry also misses smaller inland water bodies with smooth surfaces due to very low backscatter, the so-called “dark water” problem. The Signals of Opportunity (SoOp) technique reuses existing signals from ubiquitous geostationary satellites, eliminating the need for powerful transmitters on the observing constellation itself. This significantly reduces size, weight, and power (SWaP) requirements, enabling a low-cost constellation of small satellites. SoOp is based upon the “free” utilization of large bandwidths of valuable spectrum (worth tens of billions of dollars) that would otherwise be inaccessible for science.
Altimetry observations of sea surface height (SSH) from a constellation can provide dense coverage and high temporal resolution (~1 day). Additionally, the bistatic configuration of SoOp uses forward scatter, producing a stronger signal for smoother surfaces. This property, combined with the denser spatial sampling enabled by a constellation of small satellites, indicates that SoOp altimetry could improve our understanding of the water cycle and fill important gaps in hydrology. A preliminary analysis of a typical direct broadcast satellite occupying 400 MHz of spectrum in Ku-band (12.45 GHz, EIRP 65 dBW) and 500 MHz in Ka-band (18.5 GHz, EIRP 74 dBW) shows that tracking errors as low as 3.2 cm (Ku) and 1.7 cm (Ka) can be achieved from a typical altimetry orbit of 1280 km. This assumes 1 ms of coherent integration and 1 s of incoherent averaging. In addition to tracking error, orbit determination of the source must be included in the full error budget. The sensitivity of the altimetry error to the non-cooperative transmitter position in GEO is about 10⁻² (i.e., scientifically useful cm-level altimetric error requires only m-level source position error). Unfortunately, this required precision is still far better than anything routinely available (e.g., two-line elements at ~10 km accuracy). Our proof-of-concept experiment demonstrated a method for passive orbit determination applying very long baseline interferometry (VLBI) techniques to a distributed network of low-cost stations constructed largely of consumer electronics. Extrapolating the performance observed in that experiment at S-band predicts that cm-level precision would be possible with the wider-bandwidth signals described above.
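Two of the numbers above follow from simple back-of-the-envelope relations, sketched below. The sqrt-N reduction assumes independent incoherent looks, which is an idealization; the values come from the abstract.

```python
import math

# 1 ms coherent integration within 1 s of incoherent averaging gives ~1000
# independent looks (idealized), reducing noise by sqrt(N).
coherent_int = 1e-3   # s
incoherent_avg = 1.0  # s
n_looks = incoherent_avg / coherent_int
noise_reduction = math.sqrt(n_looks)  # ~31.6x

# Sensitivity of the altimetric error to the GEO source position error is
# ~1e-2, so a 1 m transmitter position error maps to ~1 cm of height error.
sensitivity = 1e-2
height_err_m = sensitivity * 1.0  # for a 1 m source position error
print(n_looks, round(noise_reduction, 1), height_err_m)
```

This is why m-level passive orbit determination of the GEO source is sufficient for cm-level altimetry.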

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: DopSCA for direct ocean current measurement

Authors: Ad Stoffelen, Alexandre Payez, Paco Lopez Dekker, Owen O'Driscoll
Affiliations: KNMI, TUD
In the ESA Marine-surface Ocean Motion from Scatterometers (MOMS) project, the EPS-SG SCA scatterometer is considered for ocean current measurement. Given the absence of direct satellite measurements of ocean surface motion to date, using SCA in Doppler mode would be very useful and timely. Most requirements for ocean currents are based on the geostrophic balance assumption, and currents are derived indirectly from ocean height measurements. Belmonte-Rivas and Stoffelen (2019) tested how such state-of-the-art currents reduce the biases between earth-fixed ERA5 surface vector winds and ocean-relative ASCAT winds, but found that the monthly ERA5 biases were in fact not reduced and remained typically around 1 m/s. Furthermore, they found that the ocean mesoscale patterns in particular were unfavorable in the Copernicus Marine Service ocean current products, while the large-scale current corrections do improve physical consistency. These large deviations suggest that direct ocean current measurements are needed to monitor ocean mesoscale eddies and to better understand ageostrophic effects, which are relevant for ocean mixing processes. Earlier work indicated that instantaneous sub-m/s accuracies on the geophysical Doppler velocity would be possible for SCA. Doppler velocities are known to comprise wave motion and currents, where the former is generally dominant. On scatterometer scales, these wave-motion contributions may be assessed by means of the wind vector. SCA, which is expected to produce superior ocean-surface vector winds, will therefore have a superior capability to correct the Doppler measurements for wave motion, such that an ocean current residual remains. Since scatterometer-scale ocean eddies typically evolve on a monthly time scale, a factor-of-five gain in accuracy may typically be achieved by averaging the ocean current residuals from multiple SCA overpasses.
This requires a well-established procedure to isolate the ocean-surface currents from the total geophysical Doppler, i.e., eliminating the ocean wave motion contribution. Global DopSCA data will need to be obtained in order to develop such a novel procedure. SCA opportunities for Doppler measurement will be discussed with EUMETSAT in order to investigate and establish the wind-dependent wave-motion corrections for SCA. With such corrections, the instantaneous ocean current may be estimated and aggregated over many days in order to obtain an accurate local ocean current estimate at the SCA measurement scale. In an era where coupled air-sea interaction models are being developed, the user community is in dire need of direct ocean current information and wave motion measurements, which may help reduce today's large wind and stress model biases. Given the lack of direct ocean motion measurements and their clear geophysical relevance, any SCA opportunity for ocean current measurement would be appreciated by the user community.
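The factor-of-five averaging gain quoted above can be sketched under the standard assumption of independent errors per overpass (the 0.5 m/s single-pass accuracy below is an assumed illustrative value, consistent with the "sub-m/s" statement):

```python
import math

# If single-overpass current residuals have independent errors, averaging N
# overpasses reduces the error by sqrt(N); a factor-5 gain needs N = 25.
def passes_needed(gain):
    return math.ceil(gain ** 2)

def averaged_sigma(sigma_single, n):
    return sigma_single / math.sqrt(n)

n = passes_needed(5.0)          # 25 overpasses for a factor-5 accuracy gain
sigma = averaged_sigma(0.5, n)  # assumed 0.5 m/s instantaneous -> averaged value
print(n, sigma)
```

This is consistent with averaging over roughly a month, the time scale on which scatterometer-scale eddies evolve.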

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Assessment of Hongtu-1 Multi-Static X-Band SAR Constellation Interferometry

Authors: Urs Wegmüller, Christophe Magnard, Othmar Frey
Affiliations: Gamma Remote Sensing, ETH Zurich
In 2023, the Chinese company PIESAT launched the multi-static X-band SAR constellation Hongtu-1 (HT1). HT1 consists of the active mono-static SAR sensor HT1-A and three additional passive SAR receivers, HT1-B, HT1-C and HT1-D. The passive sensors are arranged as a cartwheel in a circle around the active sensor. For our SAR interferometric investigation, we were able to use a multi-static HT1 recording. After a brief introduction of HT1, we describe the processing performed. Based on the phases of the six single-pass interferometric pairs, we calculated height differences relative to the Copernicus DEM. Larger deviations were observed mainly for mining and forest areas. Thanks to the simultaneous acquisition of the interferometric pairs, the high spatial resolution and the good signal quality, the necessary processing was relatively easy to perform. Besides the interferometric phase, we also investigated possible applications of the multi-static coherence. Forest can be recognized by its reduced single-pass coherence values. Based on the results, we expect that the multi-static HT1 coherence and its dependence on the interferometric baseline can be used to estimate parameters such as forest biomass. [1] Wegmüller, U.; Magnard, C.; Frey, O. Assessment of Hongtu-1 Multi-Static X-Band SAR Constellation Interferometry. Remote Sens. 2024, 16, 3600. https://doi.org/10.3390/rs16193600.
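The conversion from single-pass interferometric phase to height used in such analyses can be sketched via the standard height-of-ambiguity relation. All numbers here (wavelength, slant range, incidence angle, perpendicular baseline, phase) are assumed for illustration, not HT1 system values.

```python
import math

# Height of ambiguity: the height change corresponding to a 2*pi phase cycle.
def height_of_ambiguity(wavelength, slant_range, incidence_deg, b_perp, bistatic=True):
    # factor 1 for single-pass bistatic acquisitions (one-way path difference),
    # 2 for repeat-pass monostatic geometry
    m = 1.0 if bistatic else 2.0
    return wavelength * slant_range * math.sin(math.radians(incidence_deg)) / (m * b_perp)

def phase_to_height(phase_rad, h_amb):
    """Map unwrapped interferometric phase to a height difference."""
    return h_amb * phase_rad / (2.0 * math.pi)

h_amb = height_of_ambiguity(0.031, 600e3, 35.0, 300.0)  # X-band, assumed geometry
dh = phase_to_height(math.pi / 2, h_amb)                 # quarter-cycle phase
print(round(h_amb, 1), round(dh, 1))
```

The baseline dependence of this sensitivity is also what makes multi-baseline coherence useful for parameters such as forest biomass.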

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: On the Challenges of Coherent Ambiguity Removal in the Harmony Mission

Authors: Dominik Richter, Marc Rodriguez-Cassola, Pau Prats-Iraola, Paco Lopez-Dekker
Affiliations: German Aerospace Center (DLR), Delft University of Technology
Companion Synthetic Aperture Radar (SAR) extensions offer the possibility to enhance existing missions with single-pass interferometric capabilities. The companion satellites are typically receive-only, simplified versions of the active system, using smaller platforms and antennas. Compared to their full-performance counterparts, critical performance metrics are degraded; in particular, the capability to reject received energy from ambiguous regions in SAR acquisitions. The impact of their correlated signatures in SAR interferograms was analyzed in [1]. Recent investigations [2] within the scope of ESA’s Earth Explorer 10 Harmony mission [3, 4, 5] showed significant phase bias due to strong azimuth ambiguities, even exceeding the error budgets required for scientific applications. By means of two removal algorithms [5, 6], the phase bias may be removed to achieve the performance goals. So far, the algorithms have been designed and evaluated in simulated environments using synthetically generated data, either at SLC or raw signal level. A step towards operationally applicable removal techniques requires adaptation to complex squinted bistatic acquisition geometries and to the imaging properties of ambiguities (e.g., defocusing or mis-registration arising from differential filter mismatch), as well as evaluation of residual errors due to the impact of randomly moving sea surfaces (e.g., Doppler frequency shifts and accelerations [7, 8, 9]), which are expected to cause a performance degradation due to model mismatch. As shown in [6], the complex-valued ratio c_i of the aliased ambiguous signal and its main counterpart can be calculated from the spectral representation of the imaging operator. Let us assume that the imaging operation of the main and ambiguous signals can be represented by a transfer function A_{1,i}(f_D, f_r; r_P, r_Tx,1, r_Rx,1) for a given geometry characterized by the position vectors, in the Doppler frequency (f_D) and range frequency (f_r) domain; here, i indicates the signal part.
The second channel follows the same definition, yielding a transfer function A_{2,i}(f_D, f_r; r_P, r_Tx,2, r_Rx,2). SAR focusing applies a kernel matched to the phase of the transfer function indexed by i = 0. Any mismatch between the focusing kernel and the ambiguous signal transfer function causes a change in the ratio c_i. In general, removal of coherent ambiguities in squinted bistatic cases requires c_i to be azimuth- and range-dependent, because those geometries result in residual co-registration errors in azimuth and range and in a defocusing of ambiguous signatures [10, 11], due to variability in the differential geometry, e.g., Doppler centroid or effective velocity. Also, acquisition modes other than stripmap require a pixel-wise evaluation due to time-varying imaging geometries. The magnitude of mismatch errors increases with the interferometric baseline. In the case of Harmony’s ATI mode, the small baselines relax the requirement for very accurate computation of these ratios. In more general configurations, e.g., the helical formation of TanDEM-X or Harmony in stereo configuration for sea topography measurement [12] in an arbitrary four-channel interferometer setup, a careful calculation of the ratios is required for proper suppression of ambiguities. We identify the following general challenges regarding the removal of coherent ambiguities, which must be resolved to achieve the theoretical removal capabilities of the proposed algorithms and an overall high interferometric performance: • required processing steps for an operational implementation of the removal algorithms, taking into account, e.g., the imaging mode (especially burst-based modes) and arbitrary non-linear geometries; • implementation for Harmony with its high squint and bistatic geometry; • identification of tuning potential of the algorithms once the instrument is operational, e.g., improving antenna pattern knowledge and baseline knowledge in complex geometries; • residual interferometric errors, e.g.,
residual synchronization errors; • effects of model mismatches due to residual co-registration errors and (bistatic) differential focusing mismatches; • adaptation of coherent ambiguity removal to larger interferometric baselines. To assess the capability and implementation of the removal algorithms, we employ real ambiguous signatures from selected TanDEM-X single-pass interferometric acquisitions. The acquired data are re-processed to obtain the stronger azimuth ambiguities of a reduced-performance companion system. Two exemplary scenarios are depicted in Fig. 1 and Fig. 2: the former shows signatures of strong currents in a coastal scenario, the latter an open-ocean scene with an inhomogeneity in the upper part of the backscatter map due to an atmospheric front. Essentially, the algorithms require a pixel-by-pixel evaluation for general bistatic geometries due to changing topographic heights and arbitrary orbits. For a first realization, only block-wise processing in a linearized geometry is performed, already showing suppression of the phase bias from coherent ambiguities, as shown in Fig. 1 (d) and Fig. 2 (d) and summarized in Tab. 1. The four scenes evaluated so far (featuring coastal areas with strong currents, atmospheric fronts, and pure swell) cover a spread of ambiguity-to-signal power suppression by the system from -10.4 dB to -16.3 dB. The achieved suppression of the root-mean-squared interferometric phase bias is on the order of 65%, with one exception at 43.1%. Further investigation is pending regarding the impact of the scene [7, 8, 9] and of SAR processing (focusing, system calibration, bistatic synchronization, and interferometric processing [10, 11]) on the removal performance and on residual phase errors not related to coherent ambiguities. This contribution will highlight the most recent relevant findings and draw a conclusion on the maturity of an operationally implemented coherent ambiguity removal algorithm and its expected performance with respect to theoretical assessments.
A PDF version of this abstract with figures and tables can be obtained on inquiry with the author. References: [1] Villano, M., Krieger, G.: Impact of azimuth ambiguities on interferometric performance. Vol. 9, No. 5, IEEE Geoscience and Remote Sensing Letters, 2012, p. 896-900. [2] Zonno, M., Richter, D., Thaniyil Prabhakaran, A., Rodriguez-Cassola, M., Zurita Campos, A., Del Castillo Mena, J., Olea Garcia, A.: Impact of coherent ambiguities on InSAR performance for bistatic SAR missions: The Harmony mission case, Leipzig: EUSAR 2022, 2022. [3] López-Dekker, P., Rott, H., Prats-Iraola, P., Chapron, B., Scipal, K., De Witte, E.: Harmony: An Earth Explorer 10 mission candidate to observe land, ice, and ocean surface dynamics, Yokohama: IGARSS 2019, 2019. [4] López-Dekker, P., Chapron, B., Johnsen, H.: Observations of sea surface winds and sea surface deformation with the Harmony mission, Leipzig: EUSAR 2021, 2021. [5] López-Dekker, P., Li, Y., Iannini, L., Prats-Iraola, P., Rodriguez-Cassola, M.: On azimuth ambiguities suppression for short-baseline along-track interferometry: The STEREOID case, Yokohama: IGARSS 2019, 2019. [6] Richter, D., Rodriguez-Cassola, M., Zonno, M., Prats-Iraola, P.: Coherent azimuth ambiguity suppression based on linear optimum filtering of short along-track baseline SAR interferograms, Leipzig: EUSAR 2022, 2022. [7] Hasselmann, K., Raney, R. K., Plant, W. J., Alpers, W., Shuchman, R. A., Lyzenga, D. R., Rufenach, C. L., Tucker, M. J.: Theory of synthetic aperture radar ocean imaging: A MARSEN view. Journal of Geophysical Research, Vol. 90, No. C3, 1985, p. 4659. [8] Alpers, W. R., Bruening, C.: On the relative importance of motion-related contributions to the SAR imaging mechanism of ocean surface waves. Vol. GE-24, No. 6, IEEE Transactions on Geoscience and Remote Sensing, 1986, p. 873-885. [9] Rufenach, C. L., Alpers, W. R.: Imaging ocean waves by synthetic aperture radars with long integration times. Vol. 29, No. 3, IEEE Transactions on Antennas and Propagation, 1981, p. 422-428. [10] Bara, M., Scheiber, R., Broquetas, A., Moreira, A.: Interferometric SAR signal analysis in the presence of squint. Vol. 38, No. 5, IEEE Transactions on Geoscience and Remote Sensing, 2000, p. 2164-2178. [11] Rodriguez-Cassola, M., Prats-Iraola, P., De Zan, F., Scheiber, R., Reigber, A., Geudtner, D., Moreira, A.: Doppler-related distortions in TOPS SAR images. Vol. 53, No. 1, IEEE Transactions on Geoscience and Remote Sensing, 2015, p. 25-35. [12] Theodosiou, A., Kleinherenbrink, M., López-Dekker, P.: Wide-swath ocean altimetry using multisatellite single-pass interferometry, Vol. 61, IEEE Transactions on Geoscience and Remote Sensing, 2023, pp. 1-21.
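The separation idea behind the removal algorithms can be illustrated, under strong simplifications, as a two-channel linear inversion: if each channel's pixel is a superposition of the main signal m and the ambiguous signal a with known complex ratios c_1 and c_2, the pair can be inverted per pixel. The ratios below are assumed values for a synthetic check, not computed from a real transfer function.

```python
import numpy as np

def separate(y1, y2, c1, c2):
    """Solve [y1; y2] = [[1, c1], [1, c2]] @ [m; a] for (m, a), with c1 != c2."""
    A = np.array([[1.0, c1], [1.0, c2]], dtype=complex)
    m, a = np.linalg.solve(A, np.array([y1, y2], dtype=complex))
    return m, a

# Synthetic check: build two channels from a known main/ambiguous pair.
m_true, a_true = 1.0 + 0.5j, 0.2 - 0.1j
c1, c2 = 0.3 * np.exp(1j * 0.4), 0.3 * np.exp(-1j * 1.1)
y1 = m_true + c1 * a_true
y2 = m_true + c2 * a_true
m_est, a_est = separate(y1, y2, c1, c2)
print(np.allclose([m_est, a_est], [m_true, a_true]))
```

In practice the challenge discussed in the abstract is precisely that the ratios vary with azimuth, range, and geometry, and any error in them leaves a residual ambiguous contribution.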

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Ocean Dynamics and Surface Exchange with the Atmosphere (ODYSEA): a NASA Earth System Explorers candidate mission with a strong contribution from CNES

Authors: Fabrice Ardhuin, Sarah Gille, Mark Bourassa, Melaine Fewings, Tong Lee, Alexander Wineteer, Lionel Renault, Bia Villas Bôas, Tom Farrar, Sophie Cravatte, Paul Chang, Gregg Jacobs, Jackie May, Elisabeth Rémy, Ernesto Rodriguez, Zorana Jelenak, Florent Lyard, Clément Ubelmann, Fanny Girard-Ardhuin, Magali Tello, Said Kaki
Affiliations: LOPS, UCSD/SIO, FSU, OSU, JPL, LEGOS, Colorado School of Mines, WHOI, NOAA, NRL, Mercator Ocean International, UCAR, DATLAS, CNES
The mechanics of the air-sea interface are driven by the difference between the wind and current velocities, which directly determines the exchanges of momentum and mechanical energy between the atmosphere and the ocean. Mechanics at the ocean surface have profound impacts on the air-sea fluxes of heat, moisture, and carbon, and hence on weather and climate. Ocean-atmosphere interactions occur through thermal coupling and dynamical coupling. ODYSEA focuses on dynamical coupling, which has been studied mostly with numerical models due to a lack of the necessary measurements. Over the past decade, models have revealed that the surface current should play a dominant role in setting the energy level of the ocean mesoscale, with cascading effects on the position and intensity of western boundary currents such as the Gulf Stream. Recent research also suggests that wind-current interactions play an important role in atmospheric dynamics and may be related to persistent biases in atmospheric models. As of today, surface currents are measured at only a few locations, and estimates from satellite data rely on geostrophic and Ekman balances that are only valid away from the Equator and at large space and time scales. ODYSEA is one of the four mission concepts selected by NASA for a competitive Phase A and a future down-selection to the two Earth System Explorers planned for launch in 2030 and 2032. ODYSEA is a low Earth orbit polar satellite carrying a conically scanning Doppler Ka-band scatterometer, with a single beam providing measurements of radar backscatter (sigma0) and Doppler velocity (from pulse-pair phase differences) at a constant incidence angle of 55° over a 1700 km wide swath.
For any measurement cell on the sea surface, ODYSEA provides two measurements of sigma0 and Doppler velocity that are used to retrieve the surface wind vector and the surface current velocity vector, with only the along-track current component available over a narrow band in the center of the swath and the cross-track component at the outer edges of the swath. The mission concept is designed to maximize spatial coverage, with 80% of the ocean sampled every day, and to complement the existing scatterometer constellation with measurements at 4:30 AM / 4:30 PM. The ODYSEA measurement concept builds on the highly successful DopplerScatt airborne instrument that was used in the S-MODE experiment program. In scaling up DopplerScatt to a satellite instrument, the incidence angle was kept and the measurement cell size was increased; this preserves most of the measurement physics and its successful validation during S-MODE. The ODYSEA mission is designed to achieve two scientific objectives related to the measurement and understanding of ocean-atmosphere dynamic coupling: the first measurement of the air-sea mechanical energy flux (also known as “wind work”), and the analysis of the surface current response to wind and its implications for upper-ocean mixing. Many applications will benefit from this new source of data, and the mission will provide near-real-time surface wind and current fields available for assimilation into operational atmospheric and oceanic models.
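The pulse-pair measurement principle mentioned above can be sketched with the standard monostatic relation v_los = λΔφ/(4πτ), where Δφ is the interferometric phase between successive pulses and τ their spacing. The PRF and phase values below are assumed for illustration, not ODYSEA instrument parameters.

```python
import math

C = 3.0e8  # speed of light, m/s

def pulse_pair_velocity(dphi_rad, prf_hz, freq_hz):
    """Line-of-sight velocity from pulse-pair phase difference (monostatic)."""
    lam = C / freq_hz
    tau = 1.0 / prf_hz
    return lam * dphi_rad / (4.0 * math.pi * tau)

v_los = pulse_pair_velocity(0.02, 4000.0, 35.75e9)  # Ka-band, assumed PRF and phase
# Project the line-of-sight velocity onto the horizontal at 55 deg incidence.
v_surf = v_los / math.sin(math.radians(55.0))
print(round(v_los, 3), round(v_surf, 3))
```

The example shows why Doppler scatterometry is demanding: cm/s-level surface velocities correspond to milliradian-level pulse-pair phases.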

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: An Observing System Simulator for the C2OMODO Mission Microwave Radiometers: Current Capabilities and On-going Developments of the RadioSpy Platform

Authors: Pauline Martinet, Laura Hermozo, Roseline Schmisser, Pierre Cardoso, Friederike Hemmer, Laure Chaumat
Affiliations: Thales, CNES - Centre National d'Etudes Spatiales
Through the combination of two satellite trains, the Atmospheric Observing System (AOS) is an international Earth observation program that aims at better understanding the link between aerosols, deep convection and precipitation in the Tropics. Within AOS, the C²OMODO (Convective Core Observations through Microwave Derivatives in the trOpics) mission, developed by the National Centre for Space Studies (CNES), will operate a tandem of microwave radiometers (MWRs) flying less than two minutes apart. This mission goes beyond the common observations of cloud properties already documented by previous satellite missions by providing brightness temperature temporal derivatives (dTB/dt), in order to document the dynamical properties (such as condensed mass fluxes) within deep convection. To achieve this, two identical multi-frequency all-sky MWRs, called C²OMODOR, are being developed by CNES. They will provide high-resolution measurements at the 89 GHz window channel and around the two water vapor absorption lines at 183.31 GHz and 325.15 GHz. The C²OMODOR instruments will be deployed on board two satellites of the inclined orbit of the AOS program, AOS-Storm: one on the NASA-provided platform C²OMODO-sat and the other on the JAXA PMM (Precipitation Measuring Mission). In order to prepare the processing chains of the C²OMODO observations from level 0 to level 1 and level 2 products, the development of an observing system simulator is a crucial step. To that end, Thales supports the development of the CNES platform RadioSpy, which aims at simulating the brightness temperature measurements observed by each MWR under different instrumental and system (time gap, attitude maneuvers, etc.) configurations. Although RadioSpy is already able to provide brightness temperature (TB) simulations at the instrumental geometry from a realistic high-resolution convective scene, the current configuration assumes a perfect instrument and does not take into account all the specificities of the orbit geometry.
This poster presents the current developments led by Thales and CNES to produce more realistic TB simulations. These developments include the integration of the instrumental spectral response function, the real antenna gain, as well as parasitic noise due to antenna imperfections contributing to the signal loss while passing through the different parts of the instrument. The implementation and assessment of specific interpolation methods are also key for the processing of C²OMODO observations, in order to obtain TB measurements on a common grid from which dTB/dt derivatives can properly be derived. Thanks to the implementation of specific interpolation methods within RadioSpy, sensitivity studies of several instrumental or mission input parameters on dTB/dt will be performed, such as the impact of the combined distance between the two satellite swaths and the viewing geometry on dTB/dt, depending on the microphysical properties (size, life cycle, growth or dissipation) of the convective system. As the C²OMODO measurements within both the 183.31 GHz and 325.15 GHz absorption lines are significantly affected by ice scattering, a sensitivity study of the different parametrizations used within the RTTOV-SCATT radiative transfer model on the simulated TB will be discussed. The different sensitivity studies conducted with RadioSpy will ultimately provide valuable input to the performance assessment of the C²OMODO tandem.
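The tandem measurement principle reduces, once both radiometers are interpolated to a common grid, to a finite difference in time. The sketch below uses synthetic TB values and an assumed 120 s separation for illustration.

```python
import numpy as np

def dtb_dt(tb_first, tb_second, dt_s):
    """Finite-difference brightness-temperature rate in K/s.

    Assumes both fields have already been interpolated to the same grid,
    which is the role of the interpolation methods discussed above.
    """
    return (np.asarray(tb_second) - np.asarray(tb_first)) / dt_s

tb1 = np.array([[240.0, 235.0], [230.0, 228.0]])  # K, leading radiometer
tb2 = np.array([[237.0, 234.0], [229.5, 228.0]])  # K, trailing radiometer, 120 s later
rate = dtb_dt(tb1, tb2, 120.0)
print(rate)  # negative values indicate decreasing TB, e.g. growing ice scattering
```

The small magnitude of these rates (hundredths of K/s) illustrates why instrument noise, antenna imperfections and interpolation errors must be modelled carefully in RadioSpy.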

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Using WBSCAT Wideband Scatterometer Data for Characterization of Hydroterra+ Data Acquisition

Authors: Charles Werner, Silvan Leinss, Andreas Wiesmann, Urs Wegmüller, Valeria Gracheva, Juha Lemmetyinen, Thomas Nagler, Marco Chini, Julia Kubanek
Affiliations: Gamma Remote Sensing AG, ESA ESTEC, Finnish Meteorological Institute, ENVEO IT GmbH, Luxembourg Institute of Science and Technology (LIST)
Hydroterra+ is currently one of four candidate missions under consideration as part of the ESA Earth Explorer 12 program. It comprises a C-band Synthetic Aperture Radar operating in a low-inclination geostationary orbit with the goal of monitoring various processes and drivers in the water cycle such as soil moisture, flooding, water vapor, and snow cover. The mission is also able to capture potentially hazardous ground motion associated with landslides and volcanoes. The proposed geostationary orbit permits daily and sub-daily observations over multiple regions in Europe and Africa with a very slow (< 5 m/s) circular track velocity. The SAR imaging aperture time for this orbital geometry can be selected from under an hour to ~8 hours, permitting a tradeoff between azimuth resolution and observation time that allows either greater area coverage or shorter repeat observation intervals over the same area [1]. If parts of the scene decorrelate during the aperture time, the raw image data can no longer be fully focused for these regions. The decorrelation statistics of vegetated and snow-covered ground at C-band should therefore be investigated, because these covers can decorrelate on the time scales relevant to the Hydroterra+ observations. WBSCAT is a tower-based microwave scatterometer that measures coherent, polarimetric radar backscatter from 1-40 GHz [2]. Gamma Remote Sensing AG developed WBSCAT for the European Space Agency (ESA) to support mission development and microwave studies of a wide range of ground covers including snow, ice, and agricultural fields. The instrument was located at the Davos-Laret CryoNet Station (LAR) at 1514 m altitude and operated in cooperation with the WSL Institute for Snow and Avalanche Research SLF in Davos during the winters of 2018-2019 and 2019-2020 [3][4]. Since then it has operated during subsequent winters at FMI in Sodankylä, Finland.
The WBSCAT scatterometer was also deployed in a corn field in Luxembourg during the 2022 and 2023 growing seasons as part of the ESA LUXSCAT campaign, involving LIST (Luxembourg) and other research groups [5]. These data are particularly well suited to evaluating coherence and backscatter variations on time scales of minutes to days. In both campaigns significant in-situ data were collected. The degradation of azimuth resolution depends on the coherence time of the targeted region of interest. Decorrelation is due to various processes, some gradual, others specific random events (vegetation growth, a gust of wind, precipitation, or human-related change) that lead to the rearrangement of scatterers within the resolution element. We report on the measured coherence and backscatter variations at L- and C-band for data collected during the WBSCAT campaigns in Davos and the possible implications for the analysis and processing of Hydroterra+ data over Alpine snow cover.
[1] M. Monti, A. Monti-Guarnieri, A. Renga, F. Pelliccia, and G. Blasone, “Near Zero Inclination Geosynchronous Synthetic Aperture Radar: System Definition and Feasibility Study,” ARSI’24 - 8th Workshop on Advanced RF Sensors and Remote Sensing Instruments, ESTEC, Paper S03-P2, 4-6 November 2024, https://atpi.eventsair.com/arsi-2024/programme.
[2] C. Werner, M. Suess, U. Wegmüller, O. Frey and A. Wiesmann, "The ESA Wideband Microwave Scatterometer (WBSCAT): Design and Implementation," IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019, pp. 8339-8342, doi: 10.1109/IGARSS.2019.8900459.
[3] A. Wiesmann et al., "ESA SnowLab Project: 4 Years of Wide Band Scatterometer Measurements of Seasonal Snow," IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019, pp. 5745-5748, doi: 10.1109/IGARSS.2019.8898961.
[4] C. Werner, A. Wiesmann, R. Caduff, M. Schwank, O. Frey, U. Wegmüller, C. Mätzler, S. Leinss, and M. Suess, “High Temporal Resolution Time-Series of Calibrated Radar Backscatter and Interferometric Phase and Coherence of Snowpack at Davos-Laret during Winter 2019-2020 using the WBSCAT Microwave Scatterometer,” 2022 AGU Fall Meeting, San Francisco, California, Paper C55D-0432.
[5] M. Chini, C. Lopez-Martinez, N. Pierdicca, C. Werner, D. Phan, C. Bossung, M. Machwitz, J. Iffly, O. Faber, P. Matgen, J. Kubanek, “A Ground Based Experiment to Assess Multifrequency Sensitivity and Coherence Properties of Radar Backscatter of Corn,” IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, California, Paper FR4.R6.2.
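The coherence-time analysis at the heart of this study can be illustrated with a toy estimator. The sketch below (synthetic data, not WBSCAT measurements) estimates the temporal coherence of a complex backscatter time series at a given lag, for a simulated scatterer signal whose coherence decays exponentially:

```python
import numpy as np

# Illustrative sketch: temporal coherence of a complex backscatter series,
# |<s(t) s*(t+lag)>| normalized by the signal powers of the two segments.

def temporal_coherence(s, lag):
    a, b = s[:-lag], s[lag:]
    num = np.abs(np.mean(a * np.conj(b)))
    den = np.sqrt(np.mean(np.abs(a) ** 2) * np.mean(np.abs(b) ** 2))
    return num / den

rng = np.random.default_rng(0)
n = 50000
rho = 0.99  # per-sample coherence; coherence at lag k is rho**k

# Complex Gaussian AR(1) process with exponentially decaying coherence.
drive = rng.standard_normal(n) + 1j * rng.standard_normal(n)
s = np.empty(n, dtype=complex)
s[0] = drive[0]
for t in range(1, n):
    s[t] = rho * s[t - 1] + np.sqrt(1 - rho**2) * drive[t]

print(round(temporal_coherence(s, 1), 2))    # close to rho (~0.99)
print(round(temporal_coherence(s, 100), 2))  # roughly rho**100 (~0.37)
```

A GEO-SAR aperture can only be focused over regions whose coherence time exceeds the integration time, which is why such lag-dependent statistics matter for Hydroterra+.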

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: GRaWAC: G-band Radar for Water Vapor and Arctic Clouds

Authors: Sabrina Schnitt, Mario Mech, Jens Goliasch, Thomas Rose, Linnu Bühler, Nils Risse, Susanne Crewell
Affiliations: Institute for Geophysics and Meteorology, University of Cologne, Radiometer Physics GmbH
The vertical water vapor distribution is a key variable for weather and climate monitoring and prediction. Existing techniques, however, are limited by low temporal, spatial or vertical resolution, or cannot be applied in cloudy or precipitating conditions. In the Arctic, where water vapor is a central component of multiple feedback processes contributing to Arctic Amplification, optical techniques are additionally obstructed by polar night conditions. The Differential Absorption Radar (DAR) technique offers a way to overcome some of these challenges, as water vapor profiles can be retrieved in cloudy and precipitating conditions independent of solar illumination. The G-band Radar for Water vapor and Arctic Clouds (GRaWAC) is a novel and unique Doppler-capable FMCW Differential Absorption Radar with simultaneous dual-frequency operation at 167.3 and 174.7 GHz. Both transmitted frequencies are referenced to a single oscillator, with signal generation by Direct Digital Synthesis. The generated chirp signals are fully synchronized and multiplied to transmitted RF frequencies of 167.3 GHz and 174.7 GHz, also providing local oscillator signals for the receiver downconversion mixers. GRaWAC’s high sensitivity of -43 dBZ at 1 km range and 1 s integration time, and vertical resolution of up to 20 m, enable promising retrieval possibilities even in low-humidity conditions. GRaWAC’s versatile design facilitates operation from ground, ship or aircraft in harsh environmental conditions. We present measurements from recent deployments at the AWIPEV station, Ny-Ålesund, Svalbard, aboard the RV Polarstern on the PS144 cruise through the central Arctic, and aboard the Alfred Wegener Institute’s Polar-6 research aircraft. Based on these measurements, we highlight technical advantages and limitations of the system.
We discuss the performance of the G-band radar compared to conventional cloud radars at lower frequencies, assess the stand-alone water vapor retrieval capabilities and investigate synergy options with simultaneous passive microwave radiometer measurements.
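The differential-absorption principle behind such a two-frequency retrieval can be sketched numerically. In the toy example below (all coefficients invented, not GRaWAC calibration values), the range derivative of the log power ratio between the more-absorbed and less-absorbed channels isolates the water-vapor attenuation difference, from which vapour density follows:

```python
import numpy as np

# Hedged DAR sketch: recover vapour density from two-channel radar profiles.

def retrieve_wv(p_on, p_off, rng_m, dkappa):
    """Water vapour density [kg/m^3] from on/off-line power profiles.

    dkappa: differential mass absorption coefficient [m^2/kg] between the
    two channels (assumed known from spectroscopy; value here is made up).
    """
    log_ratio = np.log(p_off / p_on)   # grows with path-integrated vapour
    d = np.gradient(log_ratio, rng_m)  # derivative with respect to range
    return d / (2.0 * dkappa)          # factor 2: two-way path

# Forward-simulate a uniform 5 g/m^3 vapour layer and invert it back.
rng_m = np.linspace(100.0, 2000.0, 100)
rho_true = 5e-3                               # kg/m^3
dk = 4e-4                                     # m^2/kg (illustrative)
p_off = 1.0 / rng_m**2                        # geometric falloff only
p_on = p_off * np.exp(-2.0 * dk * rho_true * rng_m)  # extra on-line loss
rho_hat = retrieve_wv(p_on, p_off, rng_m, dk)
print(round(float(np.median(rho_hat)) * 1e3, 2))  # g/m^3, recovers ~5.0
```

Range-common terms (geometry, hydrometeor backscatter) cancel in the ratio, which is what makes the retrieval usable inside cloud and precipitation.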

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Calibration and Inversion of OSCOM Airborne Campaign

Authors: Shilei Wang, Dr. Marcos Portabella, Xiaolong Dong, Wenming Lin, Qingliu Bao
Affiliations: National Space Science Center (NSSC), Chinese Academy of Sciences (CAS), Barcelona Expert Center (BEC), Institut de Ciències del Mar (ICM-CSIC), Nanjing University of Information Science and Technology (NUIST), Piesat Information Technology Co.,Ltd.
Satellite-based observations of simultaneous ocean surface winds and currents are crucial for a better understanding of air-sea interactions, ocean biophysical dynamics, and processes occurring across multiple scales. In recent years, various satellite mission designs have been introduced to enable the simultaneous measurement of both parameters. Among these, the rotating pencil-beam Doppler scatterometer stands out for its ability to provide wider swath coverage compared to other remote sensing technologies. Nonetheless, certain areas within the swath experience limited azimuthal angle diversity, which can complicate the inversion process. Accurate retrieval of ocean winds plays a crucial role in estimating ocean currents, as it allows for the removal of surface velocity components induced by wind-wave interactions, which would otherwise distort the velocity measurements. Calibrating the Normalized Radar Cross Section (NRCS) is fundamental due to its sensitivity to both wind speed and direction, and such calibration is vital for obtaining an accurate wind field. Once the wind field is inverted, the Doppler frequency shift information from the Doppler scatterometer is used to retrieve ocean currents. This study focuses on a comprehensive calibration and subsequent wind and current field retrieval of data from an airborne Ka-band rotating pencil-beam Doppler scatterometer campaign, which serves as the prototype for the Chinese Ocean Surface Current Multiscale Observation Mission (OSCOM). Before the inversion of ocean winds, the calibration of the backscatter measurements incorporates two distinct techniques to correct for azimuthal variations in the backscatter signal, which exceed the variations expected from the Geophysical Model Functions (GMFs) used in Ka-band scatterometry. Both techniques are based on what is referred to as target or numerical ocean calibration (NOC).
The first method applies an azimuth-dependent calibration, while the second modifies the GMF to better match the observed azimuthal modulation after the initial calibration. The retrieved wind speeds range from 4.7 to 7 m/s, with wind directions around 155°. When comparing the winds from OSCOM with those from the European Centre for Medium-Range Weather Forecasts (ECMWF) model output, the differences in wind speed and direction show standard deviations (SD) of less than 1.17 m/s and 21.28°, respectively. Winds derived from the azimuth-dependent calibration exhibit lower bias and SD in wind speed compared to those obtained using the modified GMF calibration, although they show a higher SD in wind direction. Furthermore, an analysis of azimuthal diversity and the shape of the Maximum Likelihood Estimation (MLE) cost function indicates that the primary sources of measurement noise are related to sampling issues, specifically arising from the OSCOM viewing geometry and relatively high levels of measurement noise. In Doppler scatterometer measurements, the wind-wave induced artifact velocity can be removed by inverting the wind field and applying the Ka-band GMFs. Two inversion methods for ocean current retrieval were compared in this study. The stepwise inversion approach first calculates the total surface velocity and subsequently subtracts the wind-wave induced artifact velocity to derive the ocean current velocity. Alternatively, the joint inversion method is based on a cost function that simultaneously incorporates both the NRCS and Doppler frequency shift. By minimizing this cost function, both the wind vector and the ocean current vector can be retrieved, providing a more integrated approach for ocean current estimation. When comparing the ocean current retrieval results from the stepwise inversion method with in situ data at a specific location, the differences in current speed and direction are found to be 0.28 m/s and 26.96°, respectively.
In contrast, the joint inversion method produces larger differences, indicating that the stepwise inversion approach provides ocean current estimates in better agreement with the buoy reading at this location. This suggests that while the joint inversion method offers a more integrated solution, its performance may be affected by the complexities of simultaneous retrieval, particularly in regions where data quality or azimuthal diversity is limited. Further research is needed to optimize the joint retrieval procedure. The calibration results offer quantitative insights that enhance the retrieval of ocean winds and surface currents using both NRCS and phase information from Doppler scatterometry. To further validate these inversion results, a more comprehensive reference dataset is needed, ideally supplemented by coincident in situ wind measurements that match the spatial resolution of OSCOM, such as buoy data. The calibration techniques introduced in this study could be extended to future airborne Doppler scatterometer experiments, providing a framework for evaluating the performance of different instrument configurations. In addition, the inversion of ocean currents and winds could contribute to advancing research on the mechanisms underlying the interaction between ocean currents and winds.
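The stepwise inversion described above can be reduced to a one-line subtraction once a wind-wave artifact model is available. The sketch below uses an invented toy artifact model, not the actual Ka-band GMF, purely to show the structure of the step:

```python
import numpy as np

# Toy stepwise inversion: measured line-of-sight surface motion minus a
# wind-predicted wave artifact leaves the ocean-current component.

def artifact_velocity(wind_speed, rel_azimuth_deg):
    """Hypothetical wind-wave artifact model [m/s]: scales with wind speed
    and varies with wind direction relative to the radar look."""
    return 0.03 * wind_speed * np.cos(np.radians(rel_azimuth_deg))

def stepwise_current(v_total_los, wind_speed, rel_azimuth_deg):
    return v_total_los - artifact_velocity(wind_speed, rel_azimuth_deg)

# Scene: 6 m/s wind along the look direction over a 0.3 m/s current.
wind, current_true = 6.0, 0.3
v_total = current_true + artifact_velocity(wind, 0.0)  # what the radar sees
print(round(stepwise_current(v_total, wind, 0.0), 3))  # recovers 0.3
```

The joint inversion instead folds both NRCS and Doppler terms into a single cost function minimized over wind and current simultaneously; the tradeoff reported above is between that integration and the stepwise method's robustness.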

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Performance Analysis of a Geostationary SAR for Interferometric Applications

Authors: Matteo Monti, Alessandro Gatti, Andrea Virgilio Monti-Guarnieri, Silvia Bianchini, Giorgio Boni, Martina Lagasio, Antonio Parodi, Nazzareno Pierdicca, Alvaro Campos, Alfredo Renga, Francesca Pelliccia, Giovanni Paolo Blasone, Deodato Tapete, Simona Zoffoli
Affiliations: Politecnico Di Milano, Università degli Studi di Firenze, Università degli Studi di Genova, CIMA Research Foundation, Sapienza Università di Roma, Università degli Studi di Napoli Federico II, Agenzia Spaziale Italiana
A SAR in a Near Zero Inclination (NZI) geostationary orbit would be a unique mission, providing persistent observations with sub-daily interferometric revisit and enabling observation of fast-evolving fields such as water vapor, landslides and deformations, flooding, and any other source of surface change, with all-weather capability. In all these applications, SAR interferometry has proven to be a game changer, consolidated over decades of LEO-SAR missions for Earth observation. GEO-SAR will open new perspectives in terms of both scientific applications and operational capabilities, thanks to frequent interferometric revisit and rapid access and response times to events, with almost immediate data availability. Besides the revisit, the peculiar North-South view of an NZI GEO-SAR makes it complementary to the mostly East-West line of sight of near-polar orbiting LEO-SAR. This is relevant not only for the estimation of the E-W component of deformations, but also for the visibility of slopes oriented along the parallels. The aim of the paper is to analyse the impact of the different view angles and the major impairments to GEO-SAR, such as the spatio-temporal variation of the troposphere and ionosphere, as well as the temporal decorrelation due to clutter from vegetation, changing soil moisture and the presence of standing water, and to compare these with the benefits that come from the reduced temporal decorrelation, assuming both C- and X-band configurations. These impairments are caused by the long integration times, up to hours, required to reach the desired resolution, which characterize the NZI GEO-SAR concept. The analyses will be based on the large amount of data available on the evolution of water vapor and the ionosphere, and on Sentinel-1 global decorrelation maps and models. Furthermore, the impact of geometry in terms of incidence and view angles will be considered. The results will provide an advance on the feasibility and benefits of the NZI GEO-SAR.
This work has been supported by the Italian Space Agency in the framework of ASI project “Images”, contract n. 2023-32-HH.0.
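The hours-long integration times mentioned above follow from a simple aperture-length argument. The back-of-envelope sketch below uses illustrative numbers only (a C-band wavelength, a nominal GEO slant range, and the ~5 m/s track velocity cited for NZI concepts) to show how azimuth resolution trades against aperture time:

```python
import math

# Illustrative aperture-time/resolution tradeoff for a slow-moving GEO-SAR:
# azimuth resolution ~ lambda * R / (2 * v * T), so T = lambda * R / (2 * rho * v).

def aperture_time_s(wavelength_m, slant_range_m, track_speed_ms, res_m):
    aperture_len = wavelength_m * slant_range_m / (2.0 * res_m)  # needed L
    return aperture_len / track_speed_ms                         # T = L / v

LAM_C = 0.056   # C-band wavelength [m]
R_GEO = 3.8e7   # nominal slant range from GEO [m] (illustrative)
V = 5.0         # circular track velocity [m/s]

for res in (400.0, 100.0, 25.0):
    t = aperture_time_s(LAM_C, R_GEO, V, res)
    print(f"{res:6.0f} m azimuth resolution -> {t / 3600:5.2f} h aperture")
```

With these numbers, sub-100 m resolution already requires well over half an hour of integration, which is exactly why temporal decorrelation of the scene becomes the dominant impairment to study.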

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: D.02.04 - POSTER - Machine Learning for Earth System Observation and Prediction

Earth, its weather and climate constitute a complex system whose monitoring and modelling have undergone remarkable progress in recent years. On one hand, enhanced spaceborne observations and the development and availability of novel in-situ sensor data have provided unprecedented information about our planet's systems. On the other hand, the integration of AI and big data analytics has opened new frontiers in how we approach the challenges of weather and climate modelling. Altogether, these elements are key drivers of innovation in Earth System Observation and Prediction (ESOP) for Weather and Climate.

Machine/Deep Learning (ML/DL) techniques have revolutionized numerous fields and have proven to be particularly advantageous in various applications such as image recognition, traffic prediction, self-driving vehicles, and medical diagnosis. These techniques have garnered significant attention and adoption within the ESOP community due to their ability to enhance our understanding and prediction capabilities of the Earth's complex dynamics. One prominent area where ML/DL techniques have proven invaluable is in the development of high-fidelity digital models of the Earth on a global scale. These models serve as comprehensive monitoring, simulation, and prediction systems that enable us to analyse and forecast the intricate interactions between natural phenomena and human activities. By providing a holistic understanding of the Earth's dynamics, these models contribute to the achievement of the European Commission's Green Deal and Digital Strategy goals towards a green & digital transition.

ML/DL solutions also showcased promising advancements in data assimilation, weather forecasting and climate prediction. Algorithms can be trained to identify instances where physical models may exhibit inaccuracies and subsequently learn to correct their predictions accordingly. Moreover, AI-based models have the potential to create hybrid assimilation and forecasting models that combine the strengths of traditional, physics-based methodologies with the capabilities of ML/DL, ultimately enhancing the accuracy and reliability of predictions.

The aim of this session is to invite new ML4ESOP explorers to present their latest innovations in ESOP. A specific focus will be placed on the exploration of new data sources and benchmarks for weather and climate modelling, the adaptation of large-scale data-driven Earth system models, and novel demonstrations of their applicability to weather and climate observation and prediction. This session invites experts from diverse fields to discuss how recent advances innovate on established ESOP approaches, to address current challenges, and to identify opportunities for future work.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: STeMP: A Protocol for Spatio-Temporal Models

Authors: Jan Linnenbrink, Fabian Lukas Schumacher, Marvin Ludwig, Jakub Nowosad, Hanna Mayer
Affiliations: University Of Münster, Adam Mickiewicz University
Spatio-temporal modeling, often relying on remote sensing data, is widely used in ecology and geosciences to track land cover changes, predict species distributions, and estimate agricultural productivity. Transparency of spatio-temporal models is therefore key for evaluating, interpreting, and reproducing them. This transparency can be achieved through the implementation of standardized model protocols. Several such protocols exist for machine-learning models in general (e.g., Model Cards, REFORMS), as well as for specific domains like species distribution modelling (ODMAP) and individual-based modelling (ODD). Despite the availability of general and domain-specific protocols, no existing framework fully addresses the specific requirements of spatio-temporal modeling and remote sensing. Here, we present ideas for a protocol tailored to spatio-temporal models, the SpatioTEmporal Models Protocol (STeMP), to fill this gap. Drawing on existing model reports, we propose a protocol consisting of modules that relate to different modelling steps: Overview, Data, Model, Interpretation, Uncertainty Considerations, Software/Tools and Workflow/Reproduction Recipe. Each module includes specific criteria to guide users. For example, the ‘Overview’ module addresses the training domain or area, while ‘Data’ focuses on the spatial resolution of predictor variables. ‘Model’ includes details on variable selection, and ‘Interpretation’ involves sensitivity analysis. Additional modules cover uncertainty considerations, software implementation, and reproducibility. Furthermore, we aim to provide the option to generate automated reports and supporting figures from user-provided datasets (e.g., the spatial locations of the training and test data or a model file). The protocol will be available as a web-based application with the option of creating and editing model protocols in a simple and user-friendly way, rendering them as PDF overview pages or saving them as metadata files.
Such a protocol can be useful for modellers to document their models, for reviewers to evaluate them, and for end-users to understand them. This poster invites the audience to explore the STeMP protocol, provide feedback, and help shape a standardized framework for spatio-temporal modeling. We will present the protocol structure, the modules and criteria, and the envisioned web-based application. Our goal is to encourage discussion of our proposed model report and to emphasize the need for standardized and comparable model information. Through collaboration, we aim to establish STeMP as a widely accepted standard, driving transparency and reproducibility in the spatio-temporal modeling community.
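As a thought experiment, the module structure named in the abstract could map onto a machine-readable record. The sketch below is purely hypothetical (the field names are illustrative, not the protocol's actual schema) and shows how such a record could be flattened into a metadata file:

```python
from dataclasses import dataclass, field, asdict

# Hypothetical machine-readable STeMP-style record, one attribute per
# module named in the abstract; the schema is invented for illustration.

@dataclass
class STeMPRecord:
    overview: dict = field(default_factory=dict)        # training domain/area
    data: dict = field(default_factory=dict)            # predictor resolution
    model: dict = field(default_factory=dict)           # variable selection
    interpretation: dict = field(default_factory=dict)  # sensitivity analysis
    uncertainty: dict = field(default_factory=dict)
    software: dict = field(default_factory=dict)
    reproduction: dict = field(default_factory=dict)

    def to_metadata(self):
        """Flatten to a plain dict, ready to serialize as a metadata file."""
        return asdict(self)

rec = STeMPRecord(
    overview={"training_area": "Central Europe", "period": "2018-2023"},
    data={"predictors": ["NDVI", "elevation"], "resolution_m": 10},
    model={"algorithm": "random forest", "variable_selection": "forward"},
)
meta = rec.to_metadata()
print(sorted(meta))  # the seven module names of the record
```

A web application could populate such a record from form input and render it either as a PDF overview page or as the metadata file the abstract envisions.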

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Exploration of Machine Learning Techniques for ASIM Data Analysis

Authors: Jose E. Adsuara, Javier Navarro-González, Adrián Pérez-Suay, Paul Connell, Victor Reglero, Francisco Gordillo, Nikolai Østgaard, Torsten Neubert
Affiliations: IPL, Universitat De València, Instituto de Astrofísica de Andalucía, IAA-CSIC, Birkeland Center, University of Bergen, Technical University of Denmark
The Atmosphere-Space Interactions Monitor (ASIM), aboard the International Space Station (ISS), is an ESA mission designed to detect and locate Terrestrial Gamma-ray Flashes (TGFs). Since its launch on April 2, 2018, ASIM has operated its Modular X- and Gamma-ray Sensor (MXGS), which records continuous data with microsecond resolution using two detectors: the Low Energy Detector (LED) for 50-400 keV and the High Energy Detector (HED) for 300 keV to 40 MeV. MXGS produces background observations, sample observations (summarized data over continuous timelines), and trigger observations (detailed datasets captured during transient events). These transient events are often linked to Terrestrial Gamma-ray Flashes (which the mission was designed to detect) and sometimes to other phenomena such as magnetar flares or other high-energy astronomical or "space-weather" events. Exploring these phenomena is critical for advancing astrophysical research and our understanding of the Earth-space "electric circuit". This work focuses on applying Machine Learning techniques to enhance ASIM data analysis, particularly in bridging the gap between low-resolution and high-resolution observations. Most of the data consist of low-resolution background samples, while high-resolution data are only available during triggers. Through the exploration of super-resolution, we aim to transform low-resolution datasets into high-resolution representations for better analysis of transient events, while also applying denoising techniques to these data.
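The low-to-high resolution problem described above has a trivial baseline worth stating explicitly: redistributing each coarse count bin uniformly over its fine sub-bins. Any learned super-resolution model must beat this. The sketch below uses toy counts, not ASIM products:

```python
import numpy as np

# Naive "super-resolution" baseline for binned photon counts: spread each
# coarse bin uniformly over its fine sub-bins, conserving the total count.

def naive_upsample(coarse_counts, factor):
    return np.repeat(coarse_counts / factor, factor)

# A short transient: fine-resolution truth, then its 4x coarser observation.
fine_true = np.array([0, 1, 0, 2, 9, 14, 7, 3, 1, 0, 0, 1], dtype=float)
coarse = fine_true.reshape(-1, 4).sum(axis=1)  # what a "sample" mode sees
fine_hat = naive_upsample(coarse, 4)

print(coarse.tolist())        # coarse bins hide the sub-bin structure
print(float(fine_hat.sum()))  # total counts conserved
```

The uniform baseline erases exactly the sub-bin structure (e.g. the sharp rise inside the second coarse bin) that a trained model, conditioned on trigger-mode examples, would attempt to recover.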

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: The E-CONTRAIL dashboard

Authors: Jeroen van Gent, Hugues Brenot, Irene Ortiz, Javier García-Heras, Amin Jafarimoghaddam, Ermioni Dimitropoulou, Pierre de Buyl, Nicolas Clerbaux, Evelyn Otero, Parthiban Loganathan, Klaus Sievers, Manuel Soler
Affiliations: BIRA-IASB, University Carlos III of Madrid, Royal Meteorological Institute of Belgium, KTH Royal Institute of Technology, Klaus Sievers Services
Global aviation contributes to anthropogenic climate change by warming the Earth’s near-surface atmosphere through both carbon dioxide (CO₂) and non-CO₂ effects. The latter category has largely been underexposed in recent years, mainly because many of the processes involved lack thorough understanding. One of the non-CO₂ effects on global radiative forcing is the impact of condensation trails (contrails) and aviation-induced cirrus clouds. Contrails are generally thought to have a net warming effect on their environment, but this is difficult to quantify and involves large uncertainties, since contrails are subject to meteorological, regional, and seasonal variations, each characterised by different spatiotemporal scales. In response to the EU call HORIZON-SESAR-2022-DES-ER-01, the E-CONTRAIL project has applied cutting-edge deep-learning techniques that leverage remote sensing detection methods for the prediction of the climate impact of contrails and aviation-induced cloudiness (AIC), and for the reduction of the uncertainties involved. The aim of E-CONTRAIL is the quantitative forecast of contrail formation probability and the accompanying radiative forcing in European airspace for the next 48 hours. This predictive scheme is based on one year of data from the Spinning Enhanced Visible Infra-Red Imager (SEVIRI) on Meteosat Second Generation (MSG), accompanied by data on 3D meteorological parameters. Data from the Flexible Combined Imager (FCI) on the first MTG-I satellite will also be considered. The first step in the assessment of aviation-induced contrail effects is the detection of contrails in the satellite imagery. This has been done by applying a neural network model to the MSG SEVIRI data, trained on images from the Geostationary Operational Environmental Satellite-16 series taken between April 2019 and April 2020.
The SEVIRI data also serve as input to the so-called Optical Cloud Analysis system, which is used in the project to derive physical cloud characteristics. These parameters are subsequently used to derive radiative forcing (RF) estimates. Geospatially matching these RF estimates with the contrail detections yields the RF contribution of individual contrails in space and time. The results of the above exercise were subsequently used to feed a deep-learning algorithm that combines the data with meteorological parameters in order to predict radiative forcing in the case of aviation-induced cloudiness. To demonstrate the final results of the project, an interactive dashboard has been developed that visualises the contrail formation likelihood forecast. The baseline configuration is a map of European airspace overlaid with colour-coded polygons outlining regions that are forecast to be prone to contrail formation. Interacting with these regions shows additional information such as total radiative forcing, along with meteorological data. For the one-year period of data, advanced features include the option to compare the contrail formation prediction with the presence of actual air traffic and contrail/cirrus detections.
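The geospatial matching step described above reduces, per scene, to overlaying the per-pixel RF field with the boolean contrail-detection mask. The toy sketch below (all numbers invented, not E-CONTRAIL data) shows that attribution:

```python
import numpy as np

# Toy geospatial matching: attribute radiative forcing to detected
# contrail pixels by masking the per-pixel RF field.

def contrail_rf(rf_field, contrail_mask):
    """Sum RF [W/m^2-equivalent per pixel] over detected contrail pixels."""
    return float(rf_field[contrail_mask].sum())

rf = np.array([[0.0, 0.2, 0.0],
               [0.5, 1.1, 0.4],
               [0.0, 0.3, 0.0]])
mask = np.array([[False, True, False],
                 [False, True, False],
                 [False, True, False]])  # a linear contrail, one pixel wide
print(round(contrail_rf(rf, mask), 2))   # RF attributed to this contrail
```

Repeating this per time step yields the RF contribution of individual contrails in space and time, the quantity that then feeds the forecasting model.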

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Correlation of Green Values and Protein Content in Oats

Authors: Solveigh Hemprich
Affiliations: Universität Rostock
The agricultural sector has transformed over the past two decades as a result of the adoption of new technologies and digitalisation. In this context, the fields of 'smart farming' and 'remote sensing' are crucial. Remote sensing of plant parameters, including growth height, drought stress, pathogen infection, and nutrient composition, generates data that is used to inform agricultural decision-making. The protein content of cereals is an agronomically significant parameter: wheat, barley, rice and oats are essential for global food security, so reliable yields and stable product quality are crucial. This study investigates whether it is possible to predict the protein content of oat grains before harvest using remote sensing and machine learning. It presents the definitive results of a trial conducted at the Julius Kühn-Institut in Groß Lüsewitz (Northern Germany), where a total of 1,240 plots were cultivated with oats. A drone equipped with a multispectral camera was used to record the spectral signature of each plot, and vegetation indices were subsequently calculated from the data. Five vegetation indices (VIs) were used to model the protein content in oats: the Optimised Soil Adjusted Vegetation Index (OSAVI), the Green Normalised Difference Vegetation Index (GNDVI), the Normalised Difference Vegetation Index (NDVI), the Normalised Difference Red Edge Index (NDRE), and the Enhanced Vegetation Index (EVI). Four machine learning models were then employed for analysis in RStudio: an artificial neural network (ANN), a partial least squares regression model (PLSR), a support vector regression model (SVR), and a random forest model (RF). Evaluation showed that the best results were achieved on the second-to-last and last days of data collection (07.07 and 14.07.2023).
The ANN model showed the highest level of predictive accuracy (R² = 0.479), and the NDRE index showed the highest level of correlation within the ANN and SVR models (R² = 0.34).
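For reference, the five indices used in the study follow from standard band-ratio formulas. The example band reflectances below are made up; the formulas are the commonly published ones (OSAVI per Rondeaux et al., EVI with the usual 2.5/6/7.5 coefficients):

```python
# The five vegetation indices named above, from band reflectances.

def ndvi(nir, red):      return (nir - red) / (nir + red)
def gndvi(nir, green):   return (nir - green) / (nir + green)
def ndre(nir, rededge):  return (nir - rededge) / (nir + rededge)
def osavi(nir, red):     return (nir - red) / (nir + red + 0.16)
def evi(nir, red, blue): return 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)

# Example plot-mean reflectances for a dense green canopy (illustrative).
nir, red, green, rededge, blue = 0.45, 0.05, 0.10, 0.20, 0.03
print(round(ndvi(nir, red), 3))        # 0.40 / 0.50 -> 0.8
print(round(ndre(nir, rededge), 3))    # red-edge based, sensitive to N status
```

NDRE's use of the red-edge band, where reflectance responds to chlorophyll and nitrogen content, is consistent with it showing the strongest correlation with protein in the ANN and SVR models.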

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: EO Data Synergy for WIVERN Gap-Filling Using Machine Learning

Authors: Adian Dawuda, Maryam Pourshamsi, Marcel Kleinherenbrink, Dr Cathy Hohenegger, Alessandro Battaglia, Mauro Federici, Fabian Senf, Dr Rémy Roca
Affiliations: European Space Agency (ESA), Max-Planck-Institute for Meteorology, Politecnico di Torino, Leibniz Institute for Tropospheric Research, CNRS
WIVERN (WInd VElocity Radar Nephoscope) is one of two candidate Earth Explorer 11 mission concepts, currently undergoing Phase A development in the European Space Agency’s FutureEO Programme. The mission is set to fill a significant gap in global wind profile observations by providing insights into atmospheric dynamics through the first space-based measurements of in-cloud wind and the micro- and macrophysical properties of clouds and precipitation systems. Such measurements are essential for a better understanding of weather phenomena, extreme events, and climate modelling. Flying at an orbital altitude of 500 km with an off-nadir pointing angle of about 38 degrees, WIVERN’s conically scanning W-band Doppler radar captures an approximately 800-km swath, performing 12 revolutions per minute. The satellite will be placed in a near-polar orbit with an average global revisit time of 1.5 days at the equator. The scanning configuration results in approximately 35 km gaps between successive scans along the satellite’s ground track, in which no data are acquired. While in-cloud observations are unavailable in these gaps, it is possible to reconstruct them using Artificial Intelligence methods based on the synergy with other observations, for example cloud-top observations obtained from spaceborne instruments covering the same domain. These include high-temporal-resolution observations from geostationary satellite imagers such as MSG (Meteosat Second Generation) SEVIRI (Spinning Enhanced Visible and Infrared Imager). This research plans to explore the filling of gaps within the WIVERN swath for identified data products using a machine learning model trained on temporally close, collocated SEVIRI observations. Both WIVERN and MSG retrievals are simulated based on outputs of the ICON (Icosahedral Nonhydrostatic) numerical weather prediction model.
The primary focus lies on convective cloud systems which are often associated with severe weather events and are a source of uncertainty in our current understanding of atmospheric dynamics. The results are anticipated to establish a foundation for the development of more extensive gap filling methods for WIVERN observations. Overall, this work aims to increase the potential scientific impact of the mission, ultimately increasing our understanding of storm evolution, structure and dynamics.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Subsurface Ocean Insights: Advancing Severe Weather Forecasting through AI

Authors: Veronica Nieves, Javier Martinez-Amaya
Affiliations: University of Valencia
Hurricanes are among the most destructive natural phenomena, and their intensification is strongly influenced by complex ocean-atmosphere interactions. While traditional forecasting primarily emphasizes near-surface ocean conditions, subsurface dynamics remain underexplored despite their pivotal role in fueling storm evolution. Our research uses advanced AI methodologies to analyze subsurface ocean data spanning several hundred meters, offering unprecedented insight into pre-hurricane conditions. This approach achieves a prediction precision exceeding 73% for severe hurricane formation, revealing critical subsurface heat redistribution processes that govern hurricane intensification in a warming ocean influenced by climate change. By incorporating detailed temporal and spatial patterns, these findings enhance understanding of the physical processes driving extreme weather events. They underscore the importance of leveraging subsurface ocean information to develop robust early-warning systems, enabling better preparedness and response strategies in an era of increasingly severe and unpredictable storms.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Robust U-Net- and Transformer-based Cloud Masking Approaches for Multispectral Earth Observation Satellite Missions

Authors: Tianyi You, Michael Greza, Prof. Dr Boris Jutzi
Affiliations: Technical University of Munich
In light of the mission requirements of the upcoming Bavarian multispectral Earth observation mission "CuBy", an automated data processing pipeline is required during the mission operational phase. An integral component of this pipeline is a neural-network-based cloud masking processor for assessing data quality. This work therefore presents the development of an end-to-end architecture for a robust cloud masking neural network model adapted to multispectral satellite imagery, combined with a dehazing approach for further data enhancement. In recent state-of-the-art literature, CNN-based approaches are commonly used for cloud masking; works such as Cloud-Net and MF-CNN were developed specifically for this application. Transformer-based approaches are comparatively rare, yet promising, since their extraction of long-range dependencies suits complex, multi-dimensional features. In this work, both CNN-based and transformer-based approaches are developed and adapted to multispectral PlanetScope imagery, chosen for its spectral similarity to the data expected from the planned CuBy mission. To fulfill the mission data requirements, a diverse and comprehensive spatio-temporal dataset covering worldwide locations is retrieved with an automated API data-fetching pipeline. The ground-truth masks provided by Planet Labs are generated by neural network models trained on hand-labeled masks. Architecturally, the transformer cloud masking model is based on MaskFormer with a Swin transformer backbone, while the CNN-based model is a U-Net specifically adapted to the multispectral imagery.
The U-Net-based approach serves as a baseline and is clearly outperformed by the MaskFormer-based approaches, which are adapted to multispectral PlanetScope imagery in RGB, 6-, and 9-channel configurations based on wavelength-aligned spectral band allocation. The MaskFormer-based approaches initialized with pre-trained RGB weights from the urban domain (ADE20K) prove effective at domain adaptation to remote sensing of the atmosphere, achieving higher performance than the U-Net approach. Both the U-Net-based and MaskFormer-based approaches are further fine-tuned extensively across different biomes and regions to validate and enhance model robustness. On the same PlanetScope test dataset, the MaskFormer-based approaches achieve on average a mean IoU (mIoU) of 0.73 and an overall accuracy (OA) of 0.90, surpassing the U-Net-based approach by 30.2% (mIoU) and 9.3% (OA). This improvement is larger than that reported for other state-of-the-art approaches, such as CSDFormer with OCRNet as the baseline on Landsat imagery, highlighting the competitiveness of the cloud masking results in the benchmarking process and reflecting the robust model capability attributable to factors such as the transformer-based architecture and the mask-based mechanism. Following the cloud masking step, the physics-based dehazing approach delivers qualitatively convincing results, especially in the urban domain. In addition, several characteristics of the U-Net-based and MaskFormer-based architectures are evaluated and compared in terms of feature representation, context awareness, and misclassification errors. On the same scenes, the U-Net-based approach excels at capturing fine-grained details through local feature extraction, but encounters difficulties in differentiating homogeneous features.
The MaskFormer-based approaches, in contrast, capture global context across broader regions and semantic relations between classes, resulting in better feature separation and object boundaries in complex scenes, such as polar regions. Noise interference in the satellite imagery is further discussed in relation to satellite orbits, chromatic lens aberration, and cloud altitudes, which contribute to errors and diminish the accuracy of the ground-truth masks in the dataset. The main limitation of the U-Net-based approach lies in misclassifying urban land-based features as haze or clouds, whereas the MaskFormer-based approaches occasionally introduce artifacts on singular patches. A detailed analysis further examines the contribution of the additional spectral bands to performance, including a correlation-coefficient analysis and an evaluation of the neural network model weights. The model weights for the RGB, 6-, and 9-channel configurations indicate that the most influential channels for cloud masking correspond to wavelengths near the Green, Green I, Blue, and Yellow bands. This result is consistent with the statistical analysis, though subject to class imbalance in the satellite imagery. The analysis validates the significant value of incorporating additional spectral information and highlights the channels most critical to the cloud masking model, further enhancing model explainability in line with physics-guided principles.
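The mIoU and OA figures reported above are standard segmentation metrics. As a reading aid, here is a minimal sketch of how they are computed from a confusion matrix; the two-class setup and toy data are assumptions, not the authors' evaluation code:

```python
import numpy as np

def cloud_mask_metrics(y_true, y_pred, n_classes=2):
    """Mean IoU and overall accuracy from paired integer label maps."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    # Confusion matrix: rows = true class, columns = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)
    oa = np.trace(cm) / cm.sum()
    tp = np.diag(cm)
    # Per-class IoU = TP / (TP + FP + FN)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)
    return float(iou.mean()), float(oa)

# Toy 2-class (clear / cloud) masks
truth = np.array([[0, 0, 1, 1], [0, 1, 1, 1]])
pred = np.array([[0, 0, 1, 0], [0, 1, 1, 1]])
miou, oa = cloud_mask_metrics(truth, pred)  # → (0.775, 0.875)
```

The same two numbers, computed over the full test dataset, are what the reported 0.73 mIoU and 0.90 OA summarize.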
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Detecting Greenhouse Gas Anomalies in Satellite Data With Topological Data Analysis

Authors: Laia Amorós, Olga Kuznetsova
Affiliations: Finnish Meteorological Institute, Aalto University
Monitoring atmospheric trace gases from space provides a global perspective on Earth's climate system. Greenhouse gases like carbon dioxide (CO₂), nitrogen dioxide (NO₂), and methane (CH₄) play important roles in driving climate change and are key indicators of anthropogenic activity. Accurate tracking of their concentrations is essential for understanding climate dynamics, assessing environmental impacts, and supporting mitigation strategies. Satellite instruments, such as the TROPOspheric Monitoring Instrument (TROPOMI) and the Orbiting Carbon Observatory-2 (OCO-2), deliver high-resolution measurements of CO₂, NO₂, and CH₄, enabling detailed global-scale analysis of atmospheric trace gas distributions. A critical challenge lies in distinguishing genuine atmospheric anomalies (that is, significant deviations or unusual patterns in trace gas concentrations) from background variability and observational noise. These anomalies often signal important environmental events, such as industrial emissions, wildfires, volcanic eruptions, or methane leaks, which can have profound implications for climate science, environmental management, and public policy. Accurate identification of such anomalies is essential not only for understanding their underlying causes but also for enabling timely responses to mitigate their impacts. For instance, detecting an unexpected surge in methane concentrations could help identify and address leaks from natural gas infrastructure, while recognizing regional spikes in nitrogen dioxide might inform efforts to regulate urban air quality. However, detecting these anomalies is not straightforward. Atmospheric trace gases exhibit inherent variability driven by natural processes, such as seasonal cycles, weather patterns, and other interactions. This variability, coupled with the distinct temporal and spatial characteristics of each gas, complicates the identification of meaningful deviations. 
Additionally, satellite observations face challenges such as instrumental noise, data gaps due to cloud cover, and the limited spatial swath of current satellite missions, which can hide subtle but significant patterns. These factors need advanced analytical approaches capable of robustly distinguishing anomalies from background variations, even in the presence of incomplete or noisy data. Topological Data Analysis (TDA) provides a novel framework for tackling these challenges by revealing the underlying structure of high-dimensional data in a mathematically rigorous yet intuitive way. Unlike traditional statistical or machine learning approaches that often rely on predefined models or assumptions, TDA focuses on the geometric and topological properties of the data itself, offering a flexible approach. One of its most powerful tools, persistent homology (PH), captures and summarizes topological features (such as clusters, loops, or voids) across a range of spatial or temporal scales. This ability to analyze data at multiple resolutions makes it particularly suited to studying atmospheric processes, where signals of interest may emerge at vastly different scales, from global patterns to localized anomalies. The 0-th homology, which detects connected components or clusters, is especially useful for identifying regions with similar behavior, such as areas of elevated trace gas concentrations. Meanwhile, higher-dimensional homologies, which identify more complex structures such as loops (1st homology) or voids (2nd homology), offer insights into intricate patterns that may correspond to dynamic atmospheric processes, such as circulating air masses or regions of reduced gas concentrations surrounded by elevated levels. 
Persistent homology encodes the "birth" and "death" of these features as the data is viewed through different thresholds, providing a rich, multi-scale representation that highlights patterns which might otherwise be obscured by noise or averaging effects. By using PH to analyze CO₂, NO₂, and CH₄ data, we can detect subtle but significant anomalies, even in the presence of noise or missing data, making it a powerful tool for satellite-based observations with inherent limitations such as cloud cover or narrow swaths. Additionally, TDA’s ability to compare the topology of datasets across time or space allows for tracking changes in atmospheric conditions, offering a unique perspective on both transient events and long-term trends. This innovative methodology has the potential to enhance our understanding of atmospheric processes, support real-time anomaly detection, and inform climate policy. Its application can improve the identification of critical events, such as unexpected methane releases or regional pollution spikes, providing actionable insights for environmental monitoring, regulatory compliance, and international collaboration on climate goals.
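The birth/death bookkeeping of 0-th persistent homology described above can be made concrete with a short union-find sweep. This is a pure-NumPy sketch of what dedicated PH libraries (e.g., GUDHI or Ripser) compute, using a 1-D signal as a stand-in for a trace-gas field; it is not the authors' code:

```python
import numpy as np

def superlevel_persistence_0d(values):
    """0-th persistent homology of the superlevel sets of a 1-D signal,
    via a union-find sweep from high to low threshold. Each local maximum
    births a connected component; when two components merge, the one with
    the lower birth value dies (the "elder rule")."""
    order = np.argsort(values)[::-1]          # visit samples high -> low
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i], birth[i] = i, values[i]
        for j in (i - 1, i + 1):              # already-activated neighbours
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    young, old = sorted((ri, rj), key=lambda r: birth[r])
                    pairs.append((birth[young], values[i]))
                    parent[young] = old
    pairs = [(b, d) for b, d in pairs if b != d]    # drop zero-persistence pairs
    pairs.append((birth[find(order[0])], -np.inf))  # essential component
    return pairs

bars = superlevel_persistence_0d(np.array([0, 3, 1, 2, 0]))
# The secondary peak (height 2) is born at 2 and dies at the saddle at 1;
# the global maximum (height 3) never dies.
```

An anomalous concentration spike would appear here as a bar with unusually long persistence, which is exactly the multi-scale signature that separates genuine anomalies from noise.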
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Advancing Ocean Insights: AI-Driven 4D Joint Reconstruction of Physical and Biological Fields in the Mediterranean Sea Within the ESA 4DMED-Sea Project

Authors: Dr Michela Sammartino, Dr Simone Colella, Dr Lorenzo Della Cioppa, Dr Bruno Buongiorno Nardelli
Affiliations: Institute of Marine Sciences, Italian National Research Council (ISMAR-CNR)
A comprehensive understanding of the ocean interior is crucial for describing the evolution of the marine environment in time and space and the effects of climate change on it. Even though the oceans can be explored using different techniques and platforms, many of the processes occurring within them remain unexplored. In this context, satellites can improve the observation and investigation of marine surface processes: with their synoptic capabilities, they provide high-resolution spatial and temporal data at both global and regional scales. Nevertheless, while multiple essential ocean variables can be remotely sensed, satellite sensors cannot detect subsurface layers. The exploration of the ocean depths relies on in situ observations acquired by different platforms, such as oceanographic vessels, gliders, or profiling floats. Most of these techniques provide accurate measurements, but they are sparse in both time and space. One of the most effective approaches to bridging the gap between the surface and the vertical dimension is to use both types of measurements, satellite and in situ, synergistically. Nowadays, this synergy can be realized using Artificial Intelligence models. The main advantage of machine learning algorithms is their ability to approximate, in principle, any function without a priori knowledge of the relationship between the input co-predictors and the desired output of a model. Unlike conventional modelling techniques, they can resolve non-linear problems and manage the complexity that characterizes most ocean processes. In this study, we showcase some of the key findings of the ESA 4DMED-Sea project, which focuses on exploiting Europe's Earth Observation (EO) capabilities, including the Sentinel missions, and promotes synergistic analyses by integrating satellite data with in situ measurements through advanced models based on Artificial Intelligence and Machine Learning (AI/ML).
Within the ESA 4DMED-Sea project, we focused on the Mediterranean Sea, carrying out a data-driven approach to combine surface observations and vertical profiles. We developed an AI-based algorithm to project surface values to depth and provide a 4D reconstruction of key physical and biological variables (temperature T, salinity S, density D, and chlorophyll-a Chl). Unlike traditional AI approaches, our deep neural network (DNN) includes physical constraints on the density predictions, turning the model into a physically-informed DNN. The model was validated on an independent dataset of in situ profiles, demonstrating good performance in reconstructing particular processes, such as bloom events in specific sub-basin areas. The 4D physical tracers have then been integrated with surface geostrophic currents to obtain 4D geostrophic velocities, while the reconstructed 4D Chl and T, along with atmospheric and ocean model outputs, have been used to estimate Mediterranean 4D primary production (PP) fields. Finally, the combined physical-biological 4DMED dataset was made freely available online. The scientific analysis of the 4D data is ongoing; however, these reconstructions provide novel perspectives on the interaction between biological and physical processes across surface and subsurface layers, opening new avenues for advanced research on ocean dynamics.
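The idea of a physical constraint on density predictions can be sketched as a composite loss: a data term plus a penalty tying the predicted density to an equation of state evaluated on the predicted T and S. The exact constraint used in 4DMED-Sea is not spelled out in the abstract, so a simplified linear equation of state (not TEOS-10) and all coefficients below are illustrative assumptions:

```python
import numpy as np

# Reference values and expansion coefficients for a *simplified linear*
# equation of state -- an illustrative stand-in, not the project's constraint.
RHO0, ALPHA, BETA, T0, S0 = 1025.0, 2.0e-4, 7.6e-4, 15.0, 35.0

def linear_eos(T, S):
    """Density from temperature and salinity (linearized, illustrative)."""
    return RHO0 * (1.0 - ALPHA * (T - T0) + BETA * (S - S0))

def physics_informed_loss(pred, target, lam=0.1):
    """Data MSE on (T, S, rho) plus a penalty that forces the predicted
    density to be consistent with the equation of state of predicted T, S."""
    T, S, rho = pred
    Tt, St, rhot = target
    data = np.mean((T - Tt)**2) + np.mean((S - St)**2) + np.mean((rho - rhot)**2)
    consistency = np.mean((rho - linear_eos(T, S))**2)
    return data + lam * consistency

T = np.array([15.0, 10.0]); S = np.array([35.0, 36.0])
rho = linear_eos(T, S)
perfect = physics_informed_loss((T, S, rho), (T, S, rho))        # 0.0
violated = physics_informed_loss((T, S, rho + 1.0), (T, S, rho + 1.0))
```

Even when a prediction matches the (noisy) targets, the consistency term penalizes density fields that are physically incompatible with the predicted T and S, which is what distinguishes a physically-informed DNN from a purely data-driven one.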
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: An AI-based Prediction Service of Extremely Heat Islands in Cities for Urban Planning and Citizen Protection

Authors: Andrea Cavallini, Fernando Félix-Redondo, Angie Catalina Carrillo Chappe
Affiliations: Starion Group
The construction materials and urban planning of our cities have contributed over the years to the emergence of areas with significantly higher temperatures than the surrounding environment, particularly during the summer months. The Urban Heat Island (UHI) effect is known to be amplified by factors such as the expansion of artificial surfaces, which decreases evapotranspiration and enhances the absorption of solar radiation, the lack of vegetation, and heat generated by human activities, with various negative implications for the urban environment. UHIs often reach temperatures that far exceed levels considered hazardous to human health. In the context of climate change, such episodes are becoming increasingly frequent, so a predictive system like this one can help health authorities issue timely alerts to the population or establish legal restrictions on outdoor labor, helping to prevent heat-related incidents during these periods. Initial efforts to predict these extreme urban heat islands have generally been based on numerical weather prediction or linear regression models using data from nearby weather stations and temperature time series. These techniques have proven insufficient for the complex, multifaceted problem of urban heat islands. There is, however, a recent trend toward machine learning methods, which have demonstrated their effectiveness on other complex problems influenced by multiple variables. Although this methodological shift has enabled the transition from purely supporting tools to predictive ones, many of these efforts have not achieved the resolution or accuracy required in urban environments. Fortunately, in recent years, the new sensors available on Earth observation satellites and the use of deep learning models have made it possible to extend these models to entire urban areas with unprecedented resolution, allowing the problem of UHI prediction to be addressed effectively.
The proposed system is an application that uses the latest advances in Artificial Intelligence (AI) and multitemporal EO data analysis to predict maximum temperatures over a geographic area with a minimum lead time of 24 hours and a spatial resolution of up to 20 meters per pixel. The model has been trained by integrating data from Earth observation satellites such as Sentinel-3, ground temperature measurements, atmospheric data, and land-use information. Applied to the available historical data, the mean absolute error does not exceed 2 degrees Celsius for the peak temperature. The task is framed as an image-to-image regression problem, addressed with an AI model based on a spatio-temporal Transformer network to achieve high-resolution spatiotemporal prediction of maximum temperatures over a city. Transformers are particularly well suited to capturing long-term dependencies and handling spatiotemporal information effectively. The model's performance has been evaluated using the Mean Squared Error (MSE). The prototype allows users to generate predictions over several test cities, producing a forecast temperature map and highlighting the specific areas that exceed a predefined threshold. In the future, the system could be redesigned as a service and extended with a subscription mechanism for alerts and a historical database of recorded temperatures, allowing it to identify changes in trends, which is highly useful for assessing the effectiveness of potential mitigation measures.
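The evaluation and alerting steps above reduce to two simple operations on temperature maps: error metrics (MAE/MSE) against observations, and a boolean mask of pixels exceeding a hazard threshold. A minimal sketch with toy arrays; the 40 °C threshold and array values are assumptions, not project data:

```python
import numpy as np

def evaluate_forecast(pred, obs):
    """MAE and MSE of a predicted maximum-temperature map vs. observations."""
    err = pred - obs
    return float(np.mean(np.abs(err))), float(np.mean(err**2))

def hazard_mask(pred, threshold_c=40.0):
    """Pixels whose forecast maximum temperature exceeds the alert threshold."""
    return pred > threshold_c

pred = np.array([[38.5, 41.0], [39.0, 42.5]])   # toy forecast map (deg C)
obs = np.array([[39.0, 40.0], [39.5, 42.0]])    # toy observed maxima
mae, mse = evaluate_forecast(pred, obs)          # → (0.625, 0.4375)
alerts = hazard_mask(pred)                       # True where > 40 deg C
```

At the system's stated accuracy, a per-pixel MAE below 2 °C, maps like `alerts` are what would drive the timely-warning use case.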
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Evaluating Deep Learning for Downscaling GRACE-Derived EWT in Flood Monitoring

Authors: Anastasia Triantafyllou, Rafailia Adam, Elisavet Mamagiannou, Assistant professor Elena Tzanou, Professor George Vergos
Affiliations: Laboratory of Gravity Field Research and Applications – GravLab, Department of Geodesy and Surveying, Aristotle University of Thessaloniki, Department of Surveying and Geoinformatics Engineering, School of Engineering, International Hellenic University
Equivalent Water Thickness (EWT) from the GRACE and GRACE-FO satellite missions can be used both for monitoring water mass variation and as an index for detecting and managing extreme flooding events. However, the coarse temporal and spatial resolution of GRACE-class data significantly limits the ability to accurately identify and monitor localized hydrological phenomena, especially over medium- and small-sized hydrological basins. To address this limitation, this study proposes a deep learning model for downscaling GRACE-derived EWT, hence improving its spatial resolution. The model incorporates Convolutional Neural Networks (CNNs) and draws on high-resolution datasets, including Sentinel-1 Synthetic Aperture Radar (SAR) water extent and flood maps, precipitation data from the Integrated Multi-satellite Retrievals for GPM (IMERG), and surface water extent data from the Surface Water and Ocean Topography (SWOT) mission. The CNN architecture consists of multiple convolutional layers followed by pooling layers, which capture spatial correlations in the predictor variables while reducing dimensionality and mitigating the risk of overfitting. The model was tested over Thessaly, Greece, where a significant flooding event, estimated as a 1000-year flood, took place in September 2023. The calculated EWT values provided valuable insights into key challenges, including the sensitivity of deep learning models to data quality, the need to harmonize datasets with differing temporal resolutions, and the necessity of large, diverse training datasets. While the deep learning model successfully generated downscaled EWT predictions, incorporating additional local hydraulic data is essential to simulate extreme hydrological events more accurately.
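The building blocks named above, convolutional layers followed by pooling layers, can be illustrated with a naive single-channel NumPy forward pass. This is a reading aid only; the study's actual architecture, weights, and multi-predictor inputs are not published in the abstract:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive single-channel 2-D valid convolution (cross-correlation form)."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool2(x):
    """2x2 max pooling with stride 2 (assumes even spatial dimensions)."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

field = np.arange(16, dtype=float).reshape(4, 4)  # stand-in predictor grid
feat = conv2d_valid(field, np.ones((3, 3)))       # 2x2 feature map
pooled = max_pool2(field)                         # 2x2 pooled summary
```

The convolution mixes each pixel with its spatial neighbourhood (capturing the spatial correlations mentioned above), while the pooling step halves each dimension, which is what reduces dimensionality and limits overfitting.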
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Spaceborne SAR and Machine Learning for Crop Damage Mapping: Exploring Potential and Scalability

Authors: Linara Arslanova, Dr. Ian McCallum, Dr. Milutin Milenković, Dr. Sören Hese, Prof. Dr. Christiane Schmullius
Affiliations: Friedrich-Schiller University of Jena, International Institute for Applied System Analysis
Yield losses from extreme weather events have increased due to climate change, resulting in substantial economic impacts. However, less than one-third of these losses are typically insured, underscoring the need for robust global crop monitoring systems. Spaceborne synthetic aperture radar (SAR) offers a dense time series for crop monitoring and mapping, operating independently of light conditions and with minimal susceptibility to cloud interference. Despite these advantages, monitoring smallholder lands over large areas and detecting sub-pixel agricultural patterns remain challenging. Addressing these limitations requires either high-resolution SAR imagery—an expensive solution—or advanced methods to interpret SAR signal composition for effective upscaling. Data-driven approaches offer powerful tools to extract insights from SAR data and map small-scale agricultural patterns. Over the past decade, unmanned aerial vehicle (UAV) systems have become cost-effective platforms for high-resolution data collection, offering flexibility in acquisition timing and parameters. These data provide valuable references for large-scale spaceborne crop monitoring, offering insights into how agricultural patterns at finer scales contribute to the SAR signal at a large scale. Our work builds on findings from the BMWi/DLR Radar-Crop-Monitor Project and exploratory collaborations with the International Institute for Applied Systems Analysis (IIASA), demonstrating the potential of machine learning (ML) techniques to predict crop canopy damage at the pixel level using Sentinel-1 data. Results underscore the potential of ML for small-scale damage mapping. Scaling these methods globally requires extensive, balanced datasets to ensure effective model training, validation, and regional transferability. 
We introduce a valuable reference dataset and workflows tailored to process diverse UAV sensor data and extract agricultural patterns, leveraging three years of UAV data collection across three study areas in Germany. We invite contributions to expand and enhance this dataset and associated tools, fostering collaboration in advancing predictive modeling for global crop monitoring. Beyond canopy damage prediction, these resources are adaptable for broader agricultural monitoring and exploration applications, supporting global efforts to mitigate climate-related yield losses.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Reconstructing 3D Cloud Profiles from 2D Satellite Imagery using Deep Learning

Authors: Joppe Massant, Emiliano Diaz, Lilli Freischem, Stella Girtsou, Margarita Bintsi, Giuseppe Castiglione, Michael Eisinger, Emmanuel Johnson, William Jones, Anna Jungbluth
Affiliations: Ghent University, H-CEL, Universitat de València, University of Oxford, Department of Physics, National Technical University of Athens, National Observatory of Athens, Imperial College London, Computing, University of Sussex, European Space Agency, ESA-ECSAT, Institut des Géosciences de l'Environnement, CNRS, European Space Agency, ESA Climate Office
Clouds represent a major source of uncertainty in Earth system models. In particular, the altitude and vertical distribution of clouds strongly influence their radiative properties and impact Earth’s climate. As such, constructing real-time 3D global cloud maps with a high temporal frequency could significantly improve these models. Satellites such as CloudSat and, more recently, EarthCARE help us gain information about the vertical distribution of cloud properties. However, with their narrow track width and long revisit times (>15 days), we only obtain a snapshot of our atmosphere and cannot directly study how clouds evolve on a timescale of minutes to hours. In comparison, imagery from passive sensors onboard geostationary satellites is available at high temporal cadence (15 minutes) but only provides a “top-down” view without directly probing atmospheric profiles. In this study, we extend the sparse vertical profiles in both space and time and create 3D cloud maps from 2D satellite imagery using machine learning. More specifically, we use imagery in 11 spectral channels from the MSG/SEVIRI instrument, collocated with vertical profiles measured by CloudSat’s Cloud Profiling Radar (CPR) for 2010. MSG/SEVIRI has a much higher data volume than CloudSat/CPR (10 TB vs. 150 GB per year). To maximally leverage the available information, we first pre-train our model on the 2D satellite imagery through spatial masking, and later fine-tune it to predict vertical profiles from the aligned image-profile pairs. While the model directly predicts clouds in 3D, the loss is calculated only over the vertical slice measured by CloudSat/CPR. Since the CloudSat overpass can fall anywhere in the input image, and the model does not know beforehand where it will be evaluated, the reconstructions are learned everywhere.
We compare our approach to a U-Net-based model previously used to derive 3D cloud maps from satellite imagery (Brüning, S., Niebler, S., and Tost, H., 2024). We further improve our model by incorporating spatial and temporal encodings based on the SatMAE approach (Cong, Y. et al., 2023), through which the model gains extra information about the measurement time and location of the image. We demonstrate that self-supervised pre-training yields significant prediction improvements, especially with geospatial encoding, which strongly reduces regional biases. With this, we create a “geospatially-aware” ML model that reconstructs 3D cloud maps, ready to be applied to EarthCARE data once available.
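The slice-restricted training signal described above, a full 3D prediction supervised only along the CPR ground track, can be sketched in a few lines. Shapes, the per-row column indices, and the toy values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def slice_loss(pred_3d, truth_curtain, track_cols):
    """MSE between a predicted 3-D cloud field, sampled along the CPR
    ground track (one column index per image row), and the measured
    2-D along-track x height radar curtain."""
    rows = np.arange(pred_3d.shape[0])
    sampled = pred_3d[rows, track_cols, :]   # (rows, height bins)
    return float(np.mean((sampled - truth_curtain) ** 2))

pred = np.zeros((4, 5, 3))              # toy (rows, cols, height bins) field
track = np.array([2, 2, 3, 3])          # CPR column index per image row
truth = np.ones((4, 3))                 # toy radar curtain along the track
loss = slice_loss(pred, truth, track)   # → 1.0
```

Because the track can land on any column of the input image, gradients from this loss still teach the model to produce plausible profiles at every pixel, which is why the reconstructions generalize off-track.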
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Integrating Deep Learning with Aerial Laser Scanning to Estimate Tree Diameter in Short Rotation Coppice

Authors: Clara Lößl, Viacheslav Barkov, Lisa Schulz-Nielsen, Marcel Storch, Dr. Ralf Pecenka, Dr. Thomas
Affiliations: Joint Lab KI-DS Osnabrück
The transition to sustainable production methods for energy and consumables is crucial for meeting growing global demands while preserving environmental stability. Among the pressing challenges are climate change, resource depletion, biodiversity loss, and accumulating non-recyclable waste, all of which threaten current living standards in developed societies. Short rotation coppices (SRC) of fast-growing wood species present an innovative solution to these challenges, offering a promising alternative to traditional materials and fossil fuels. SRC systems combine elements of forestry and agricultural approaches, with trees planted densely in rows and harvested at ground level every 2-8 years, providing a continuous biomass supply. This offers a practical solution for improving biomass production efficiency while promoting environmental sustainability. The advancement of SRC systems can be further accelerated through advanced monitoring of key tree characteristics. Accurate measurement of diameter at breast height (DBH) is particularly crucial, as it enables reliable biomass estimation and effective SRC plantation management. Traditional manual measurements are time-consuming and labor-intensive, creating a need for efficient remote monitoring solutions. Our study, conducted at the Leibniz Institute for Agricultural Engineering and Bioeconomy in Potsdam, Germany, investigated a 1-hectare poplar SRC plantation. The site, established in 2018 and currently unharvested, represents typical early-stage SRC conditions with trees ranging from 2.1 to 17.6 cm in DBH. Unlike conventional forestry sites, the plantation features dense spacing (4.8 meters between rows and 1.0 meter within rows), creating unique monitoring challenges. In February 2024, we collected data from 740 individual trees. Tree positions were recorded using a Stonex S9111 Plus real-time kinematic Global Navigation Satellite System receiver, while DBH was measured manually with a tape measure. 
We collected airborne laser scanning data using a RIEGL miniVUX-1UAV LiDAR scanner mounted on a DJI M600 drone platform, achieving a point cloud density of 2,227 points/m² from 25 meters height. We developed and validated a deep learning approach for DBH estimation from LiDAR data, achieving a mean absolute error of less than 0.7 cm, improving upon existing point cloud UAV tree diameter estimation methods. Our method advances the monitoring capabilities for SRC systems, supporting their scaled deployment as a sustainable source of biomass for energy and materials production. Ultimately, our findings could accelerate the adoption of SRC systems, contributing to the broader transition toward sustainable resource management.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: SENTINELS AnyTIME: Advancing Agricultural Monitoring with Real-Time Cloud-Free Sentinel Data

Authors: Andreas Walli, Dr. David Kolitzus
Affiliations: Geoville
SENTINELS AnyTIME introduces a groundbreaking solution to address challenges in Earth Observation (EO) caused by cloud cover. By harnessing cutting-edge AI techniques, particularly Deep Bayesian Neural Networks, the project offers a real-time Sentinel-1/2 fusion-based time-series reconstruction API service. AI uses Sentinel-1 radar data to predict optical-like imagery, ensuring a consistent, uninterrupted, cloud-free EO data baseline that supports daily monitoring and transforms AgroFood applications, AgriTech solutions, and the wider European EO Value Adding ecosystem. In EO applications, cloud cover and shadows hinder data consistency and quality. This inconsistency impedes critical time-series analyses required for effective agricultural decision-making. SENTINELS AnyTIME addresses this challenge by delivering reliable, cloud-free EO imagery, enabling seamless monitoring across regions and seasons. It combines Sentinel-1 radar and Sentinel-2 optical imagery with advanced AI methods to reconstruct high-quality, cloud-free datasets. Its two primary offerings are: 1. Cloud-free Sentinel-2 bands: ensuring consistent spectral data by removing cloud-induced gaps. 2. Predefined and customized indices and variables: delivering tailored insights, such as vegetation health or soil condition metrics. This approach employs temporal AI frameworks (e.g., Bayesian methods and recurrent neural networks) and spatial techniques (e.g., convolutional neural networks) to detect optically hidden events and reconstruct ground reflectance. Addressing industry needs: extensive research and stakeholder engagement have ensured that SENTINELS AnyTIME meets the requirements of key users in agricultural applications, such as the Agrifood industry or national CAP Paying Agencies in Europe, for monitoring within the framework of the Area Monitoring System. The service has shown its value in various applications and has been benchmarked against purely optical solutions.
Transforming agricultural monitoring: SENTINELS AnyTIME addresses critical gaps in agricultural monitoring by enabling continuous, cloud-free data access. The service has been put into operation for various applications and has shown increased accuracy in monitoring winter cover cropping, EO-based determination of phenological stages, and the identification of agricultural practices such as mowing and harvest detection. This empowers farmers, trading houses, and processors with actionable insights, driving sustainability and efficiency across the value chain. Moreover, its versatility extends to various industries requiring persistent monitoring, showcasing its broad impact on the EO sector. The service also holds significant value for other sectors, including forestry, disaster response, and renewable energy. In conclusion, SENTINELS AnyTIME represents an innovative cloud-free processing approach that enhances data quality, consistency, and accessibility, delivering transformative benefits to agriculture and other sectors. The project received funding from the European Space Agency (ESA) InCubed Programme.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Advancing Wetland Segmentation and Classification using Earth Observation and AI

Authors: Dr Khunsa Fatima, Dr Robert Parker, Ms Chandana
Affiliations: National Centre for Earth Observation, School of Physics and Astronomy, University of Leicester
Wetlands, though covering only about 6% of Earth’s land surface, are among the largest natural sources of methane—a greenhouse gas with a global warming potential over 25 times that of CO₂ over a century. Despite their critical roles in carbon storage, water filtration, and climate regulation, nearly two-thirds of these vital ecosystems have disappeared since the early 20th century. Many recent studies have suggested that the observed acceleration in the global methane trend may be attributed to wetland-climate feedback in the tropics. Accurate and up-to-date wetland extent data are essential to quantify tropical wetland methane emissions and improve climate change predictions. However, current datasets often suffer from inaccuracies, infrequent updates, and insufficient classification. Given the diversity of vegetation types supported by wetlands and their dynamic nature—driven by seasonal variations in temperature, precipitation, and soil moisture—a significant research gap exists in achieving precise, updated segmentation and classification of wetlands. This study exploits the capabilities of artificial intelligence (AI) and Earth Observation (EO) data to develop a convolutional neural network-based deep learning (DL) model for the segmentation and classification of tropical and midlatitude wetlands. The model is designed for multi-resolution capabilities, utilising different EO datasets. For medium-resolution predictions (500m), the model is trained using MODIS data, enabling wetland extent and classification across tropical and midlatitude regions. For 16 identified priority areas (e.g., Indonesia, Southeast Asia, the Indo-Gangetic Plain, the Amazon basin, and others), the model targets high-resolution predictions (30m) by utilising Landsat and Sentinel datasets. Depending on the availability of PlanetScope imagery, the model aims to achieve even finer resolutions (~3m) for specific regions. 
Each step will be supported by datasets encompassing topography, hydrology, soil moisture, and climate variables. As the first step in this pipeline, a U-Net-based DL model was trained on the MODIS Surface Reflectance dataset (MOD09A1, 8-day composite, Level-3) at 500m spatial resolution, covering the period from 2009 to 2023. This model successfully identifies permanent water (defined as >80% water presence over 2 years) and an "others" class. The subsequent phase of the pipeline focuses on refining the model to classify three categories: wetlands, permanent water, and uplands (encompassing forests, grasslands, agricultural areas, urban regions, and bare soil). The final phase involves differentiating between wetland sub-classes. To ensure robustness, comprehensive model evaluation will employ metrics such as overall accuracy, IoU, precision, recall, and per-class accuracy. The resulting model is expected to deliver accurate and precise wetland extent and classification data. This information will complement and enhance state-of-the-art hydrological models, enabling a deeper understanding of complex tropical wetland systems and their role in global methane dynamics.
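The >80%-over-2-years rule for permanent water described above reduces, per pixel, to a simple frequency threshold on a stack of per-composite water detections. A minimal sketch (the stack shape and cadence are assumptions for illustration):

```python
import numpy as np

def label_permanent_water(water_flags):
    """water_flags: (T, H, W) boolean stack of per-composite water
    detections over a 2-year window (e.g. 8-day MODIS composites).
    Returns an (H, W) mask where water presence exceeds 80%,
    matching the >80%-over-2-years definition described above."""
    presence = water_flags.mean(axis=0)
    return presence > 0.8

# Toy 2x2 scene over 10 composites: one always-wet pixel,
# one seasonally wet pixel, and two dry pixels.
flags = np.zeros((10, 2, 2), dtype=bool)
flags[:, 0, 0] = True          # wet in 10/10 composites -> permanent
flags[:4, 0, 1] = True         # wet in 4/10 composites  -> not permanent
mask = label_permanent_water(flags)
```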
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: WeSea – Near-Real-Time Monitoring of the Coastal Zone: Change Detection by a Machine Learning Model

Authors: M.Eng in Geodesy and Geoinformatics Luka Stemberga, Helena Lesko, Dragan Divjak
Affiliations: LIST LABS LLC
The degradation of coastal environments due to anthropogenic pressures, particularly in ecologically sensitive areas such as Croatia's coastal belt, poses significant challenges to sustainable management and development. Croatia's coastal regions, many of which are protected under the NATURA 2000 network, are under increasing pressure from illegal construction, environmental degradation, and natural alterations. Traditional methods of monitoring and managing these areas rely heavily on citizen reports and periodic inspections, which are often inadequate for addressing the scale and urgency of these issues. To bridge this gap, our project leverages Earth Observation (EO) data and Machine Learning (ML) techniques to develop a near-real-time monitoring system for detecting and classifying coastal changes. This innovative system combines advanced EO data processing with ML-driven classification and change detection, offering a robust and automated solution for coastal management, monitoring, and alerting. The system integrates data from multiple satellite missions, including Sentinel-1, Sentinel-2, and PlanetScope, to maximize spatial and temporal coverage. Cloud-based processing capabilities enable the system to perform change detection every 3–5 days, provided weather conditions permit clear imagery. At its core, the system employs a Change Detection module utilizing the Multivariate Alteration Detection (MAD) algorithm for robust image differencing. Complementing this, the Classification module incorporates ML models such as Random Forest and XGBoost, trained on auto-labeled datasets derived from historical EO data and in-situ field validation. Together, these components ensure accurate detection and classification of changes, focusing on illegal constructions, environmental damage, and natural events like coastal erosion or vegetation shifts.
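The MAD algorithm mentioned above derives change indicators from canonical correlation analysis of the two acquisition dates: each image is projected onto paired canonical variates, and their differences (the MAD variates) are small wherever nothing changed. A compact sketch, not the project's implementation:

```python
import numpy as np
from scipy.linalg import eigh

def mad_variates(X, Y):
    """Multivariate Alteration Detection (MAD) sketch.
    X, Y: (N, p) pixel-by-band arrays from two acquisition dates.
    Returns (N, p) MAD variates: differences of paired canonical
    variates, small in unchanged pixels and large where change occurred."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = len(Xc)
    Sxx = Xc.T @ Xc / (n - 1)
    Syy = Yc.T @ Yc / (n - 1)
    Sxy = Xc.T @ Yc / (n - 1)
    # Generalized eigenproblem for squared canonical correlations rho^2.
    M = Sxy @ np.linalg.solve(Syy, Sxy.T)
    M = (M + M.T) / 2.0                 # symmetrize against round-off
    rho2, A = eigh(M, Sxx)              # A^T Sxx A = I
    rho = np.sqrt(np.clip(rho2, 0.0, 1.0))
    B = np.linalg.solve(Syy, Sxy.T) @ A
    B = B / np.maximum(rho, 1e-12)      # scale so var(b^T y) = 1
    U = Xc @ A                           # canonical variates of X
    V = Yc @ B                           # canonical variates of Y
    return U - V

# "No change" scene: second date equals the first plus small noise,
# so all MAD variates should stay near zero.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
Y = X + rng.normal(scale=0.01, size=(500, 3))
mad = mad_variates(X, Y)
```

In practice the MAD variates are standardized and thresholded (often via a chi-squared statistic on their sum of squares) to produce a binary change mask.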
The project’s initial deployment and testing were conducted in Split-Dalmatia County, a region particularly vulnerable to illegal coastal developments and environmental disruptions. A custom-designed interactive web GIS platform serves as the primary user interface, allowing stakeholders to visualize detected changes, access detailed classification information, and generate analytical insights. The system supports public engagement by enabling citizens to view coastal changes while providing authorities and local governments with actionable intelligence for timely interventions. By integrating a public-facing component with administrative tools, the system promotes transparency and community involvement in environmental conservation. Field results have shown that the system can autonomously detect over 90% of incidents larger than 50 m² with an error margin below 30%. This performance significantly surpasses traditional methods, which are often hindered by delays and limited spatial coverage. By replacing manual, in-situ investigations with automated monitoring and alerting, the system offers a cost-effective and scalable solution for coastal management. Its capabilities are particularly valuable for regions with limited resources for environmental monitoring and enforcement. Beyond its technical achievements, the system addresses broader policy and management goals. It aligns with European Union initiatives such as the Marine Strategy Framework Directive and the Water Framework Directive. Additionally, it supports Croatia’s commitments under the Protocol on Integrated Coastal Zone Management to the Barcelona Convention. By providing a scalable, replicable framework, the system has the potential to enhance coastal monitoring and decision-making across the Mediterranean and other vulnerable regions globally.
Future developments for the system include refining the ML models, particularly the change classification module, to improve accuracy and robustness in diverse environmental conditions. Expanding the geographic scope of the system is another key goal, enabling its adoption in other coastal regions facing similar challenges. Enhancements to the user experience and interface are also planned to ensure accessibility for a broader range of users, from policymakers and researchers to local communities and environmental NGOs. Additionally, the integration of supplementary data sources, such as Very High Resolution (VHR) satellite imagery, will further enhance the system’s capabilities for detecting and classifying smaller or more complex changes. This project demonstrates the potential of combining EO data with advanced ML techniques for coastal and environmental management. By providing timely, accurate, and actionable insights, it empowers stakeholders to make science-driven decisions that promote sustainable coastal management and conservation. The system not only addresses immediate challenges in coastal monitoring but also lays the groundwork for future advancements in the use of EO data and AI for environmental applications. Through partnerships with local authorities, international organizations, and the broader research community, this initiative contributes to global efforts to safeguard coastal ecosystems and ensure their sustainable use for future generations.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Toward Estimating Pollutant Emissions From Fired-Clay Brick Kilns Across India Using Sentinel-2 Satellite Imagery and Machine Learning

Authors: Clément Goldmann, Sugandha Arora, Chuanlong Zhou, Fabian Gieseke, Dr. Philippe Ciais, Kushal Tibrewal, Harish Phuleria
Affiliations: Laboratoire des Sciences du Climat et de l’Environnement, LSCE/IPSL, CEA-CNRS-UVSQ, Université Paris Saclay, Department of Information Systems, University of Münster, Interdisciplinary Program in Climate Studies, Indian Institute of Technology Bombay, The International Methane Emissions Observatory (IMEO)
India, the world’s second-largest brick producer, operates over 100,000 brick kilns, many of which are inefficient and highly polluting. These kilns contribute approximately 170 kt of PM2.5 (15% of the national total) and 120 Mt of CO2 (6% of the national total) annually, alongside significant quantities of SOx and NOx emissions [1]. Transitioning from traditional Fixed Chimney Bull’s Trench Kilns (FCBTKs) to cleaner technologies such as Zigzag Kilns (ZZKs) can reduce coal consumption by 20% and particulate matter emissions by 70% [2]. Despite the sector’s environmental significance, comprehensive datasets for brick kiln locations across India remain scarce, with only Boyd et al. (2021) having produced a nationwide map [3]. Most studies in the brick kiln sector rely on high-resolution satellite imagery [4], which is costly and often inaccessible. This cost barrier likely explains the lack of a complete dataset covering brick kiln locations and types across the country. We utilized Sentinel-2 imagery, which is openly available and offers a spatial resolution of 10–20 m, providing a more cost-effective alternative. While Sentinel-2 has been employed in previous studies [5], these have been limited to specific regions in northern India, underscoring the need for a comprehensive, nationwide approach as proposed in this study. Using a curated dataset of 8,700 geo-tagged labels, which includes a subset of labels from the study by Tibrewal et al. (2023), we polygonized the brick kilns to create training data. These polygons represent brick kilns across 15,000 km² (a small fraction of India’s land area) and were used to retrieve Sentinel-2 images corresponding to the labeling dates. The imagery was retrieved and preprocessed using Google Earth Engine (GEE), which enabled the application of a cloud-removal algorithm to enhance data quality. Leveraging this enriched dataset, we developed a novel method combining Sentinel-2 imagery and deep learning.
A ResNet34 model was trained to detect brick kilns and classify their operational technologies (FCBTK, Zigzag, or Circular). Utilizing the spatial and spectral properties of Sentinel-2, the brick kilns were identified based on their distinct visual patterns, particularly the ochre-colored features visible in the RGB bands. Additionally, the NIR and SWIR bands were used. The NDVI index was used to identify abandoned brick kilns, which often resemble operational ones but are frequently covered by vegetation, helping to reduce false positives for active brick kilns. Our ResNet34 model achieved good performance on a hold-out set, with a precision of 0.77, a recall of 0.97, and an accuracy of 0.76. Additionally, the centroids of the detected brick kilns closely align with their actual locations, with most being within 30 meters, demonstrating the model’s spatial accuracy. Applying this model across India resulted in 79,506 detected brick kilns. A comparative analysis with ResNet50 and YOLOv3 models showed further possible improvements, particularly in reducing false positives and in refining technology classification capabilities. In addition to location mapping, we are developing annual gridded emission maps specifically for fired-clay brick kilns, estimating emissions of CO2 and pollutants such as PM2.5, black carbon, NOx, and SOx. These maps are derived from emission factors associated with kiln operations and technology types. They will provide time-series data on emissions from brick kilns across India, highlighting reductions achieved through the adoption of cleaner technologies. By identifying regional hotspots, evaluating the effectiveness of regulations, and pinpointing areas where cleaner technology adoption is lagging, the project offers policymakers actionable insights. The resulting dataset will be made available as an open-source benchmark, enabling researchers and practitioners to refine brick kiln mapping and further improve emission assessments.
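The NDVI screen for abandoned kilns described above can be illustrated with a toy patch test; the threshold values below are assumptions for illustration, not those used in the study:

```python
import numpy as np

def ndvi(red, nir):
    """NDVI from Sentinel-2 red (B4) and NIR (B8) reflectances."""
    return (nir - red) / np.maximum(nir + red, 1e-9)

def likely_abandoned(red_patch, nir_patch, veg_threshold=0.4, frac=0.3):
    """Illustrative screen (thresholds are assumptions): flag a
    detected kiln patch as likely abandoned when a substantial
    fraction of its pixels is vegetated."""
    vegetated = ndvi(red_patch, nir_patch) > veg_threshold
    return vegetated.mean() > frac

# Toy patches: a bare ochre kiln vs. a vegetation-overgrown one.
bare_red, bare_nir = np.full((8, 8), 0.30), np.full((8, 8), 0.35)
green_red, green_nir = np.full((8, 8), 0.05), np.full((8, 8), 0.45)
```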
These results demonstrate the feasibility of leveraging Sentinel-2 imagery for large-scale, cost-efficient monitoring of brick kilns, providing a scalable framework to track the evolution of kiln locations, technologies, and emissions over time. This approach establishes a robust database to support future mitigation strategies, offering valuable tools for policy assessment, pollution monitoring, and sustainable development in the brick kiln sector. [1] Tibrewal, K., Venkataraman, C., Phuleria, H. et al. Reconciliation of energy use disparities in brick production in India. Nat Sustain 6, 1248–1257 (2023). [2] P. Baral, A. Eil, J. Li, and E. Saikawa, Dirty Stacks, High Stakes: An Overview of Brick Sector in South Asia. 2020. [3] D. S. Boyd, B. Perrat, X. Li, et al., “Informing action for United Nations SDG Target 8.7 and interdependent SDGs: Examining modern slavery from space”, Humanities and Social Sciences Communications, vol. 8, no. 1, 2021. [4] J. Lee, N. R. Brooks, F. Tajwar, et al., “Scalable deep learning to identify brick kilns and aid regulatory capacity”, Proceedings of the National Academy of Sciences, vol. 118, no. 17, e2018863118, 2021. [5] P. Misra, R. Imasu, S. Hayashida, A. A. Arbain, R. Avtar, and W. Takeuchi, “Mapping brick kilns to support environmental impact studies around Delhi using Sentinel-2”, ISPRS International Journal of Geo-Information, vol. 9, no. 9, p. 544, 2020.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Multi-channel U-Net for Separating Temporal Gravity Signals

Authors: Betty Heller-Kaikov, Prof. Dr. Roland Pail, Martin Werner
Affiliations: Technical University of Munich
Measurements of the temporal variations of the Earth’s gravity field, such as those obtained by the satellite missions GRACE and GRACE-FO, include signal contributions from all geophysical phenomena that involve a redistribution of masses. These include hydrological phenomena such as floods and droughts, ice mass changes, atmospheric phenomena, or earthquakes. Thus, the user community of temporal gravity data spans a multitude of scientific disciplines, many of which investigate processes of societal relevance. As only the sum of all mass redistribution signals can be measured, an optimal exploitation of the data for the various downstream applications requires a method to extract specific signals of interest from the measured sum of signals. In our contribution, we present a neural network-based framework that separates the individual gravity signal components based on prior knowledge of their typical space-time behavior. The algorithm uses a multi-channel U-Net architecture, which translates a 2-dimensional one-channel input image containing the sum of signals to a 2-dimensional multi-channel output image containing the separated signals. To obtain the required 2-dimensional sample format, we slice the originally 3-dimensional data along one of its three axes: latitude, longitude, or time. We develop and test the software in a closed-loop simulation environment, attempting to separate the temporal gravity signals induced by processes in the atmosphere and oceans (AO), the hydrosphere (H), the cryosphere (I), and the solid Earth (S) given by the Updated ESA Earth System Model. In this test example of separating the signals AO, H, I and S from their sum, we obtain relative RMS prediction errors of 19 % (AO), 23 % (H), 43 % (I) and 67 % (S). To allow real-world applications of the software, we provide the full chain of data formatting and preprocessing, neural network training and testing.
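The slicing step that turns the 3-dimensional (latitude, longitude, time) grids into 2-dimensional U-Net samples can be sketched as follows; the grid dimensions are illustrative:

```python
import numpy as np

def slice_samples(cube, axis):
    """Slice a (lat, lon, time) data cube into 2-D training samples
    along one of its three axes, as described above. axis=0 yields
    (lon, time) images, axis=1 yields (lat, time), axis=2 yields
    (lat, lon) snapshots."""
    return [np.take(cube, i, axis=axis) for i in range(cube.shape[axis])]

# Illustrative cube: a 10-degree global grid over 24 monthly epochs.
cube = np.zeros((18, 36, 24))
lat_slices = slice_samples(cube, axis=0)    # 18 (lon, time) samples
time_slices = slice_samples(cube, axis=2)   # 24 (lat, lon) snapshots
```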
Furthermore, we present possibilities to tune the algorithm regarding specific target signals or questions of interest. To apply the framework to real GRACE or GRACE-FO data, we suggest training the network on physics-based, forward-modeled data to introduce prior knowledge of the statistical behavior of the individual signals, and subsequently applying it to real level 2b gravity data. The latter will additionally require validation strategies using knowledge from domain experts. Finally, we note that our method not only applies to present-day gravity data but also represents a contribution to optimally exploiting the data of future satellite missions such as the ESA-NASA satellite gravity mission MAGIC. More generally, the method can be applied to any spatio-temporal dataset representing the sum of several signal contributions that need to be separated.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Advanced Analysis of Agroforestry Fractional Covers Leveraging Sentinel-2 Data and Deep Learning Techniques

Authors: Nasir Farsad Layegh, dr. Claudia Paris, Dr. Abel Chemura, Dr. Roshanak Darvishzadeh
Affiliations: Faculty of ITC, University of Twente
Agroforestry is a land-use management system where trees or shrubs are grown around or among crops or pastureland. These systems play a crucial role in improving biodiversity, enhancing soil health, and contributing to climate resilience. Accurate mapping and monitoring of agroforestry systems are essential for informed decision-making and effective management due to their diverse benefits and functions. One of the main challenges in monitoring these systems is the presence of Agroforestry Fractional Covers (AFCs), areas where different agricultural and forest components coexist, creating heterogeneous landscapes that are difficult to map and monitor accurately using conventional approaches. With the advent of Earth Observation technologies, particularly Sentinel-2's high-resolution multispectral imagery within the European Union's Copernicus Programme, the capacity to map and monitor these components has significantly advanced. However, traditional machine learning methods often struggle to capture the complexity of agroforestry systems when using Sentinel-2 data, especially in identifying and delineating AFCs, due to their inherently mixed nature at the pixel level. This characteristic of AFCs complicates the interpretation and analysis of data, requiring more sophisticated approaches that can address both the heterogeneity and temporal dynamics of these systems. This study employs a three-stage approach to characterize AFCs in Sentinel-2 imagery, leveraging both spectral and temporal information. The first stage involves building a time series from Sentinel-2 Level-2A data, stacking imagery captured over the past five years to analyze seasonal variations, phenological cycles, and long-term trends. Monthly intervals are selected to balance detail and practicality, being frequent enough to capture seasonal changes while keeping data volumes manageable and minimizing cloud coverage issues.
Gaps in the data, mainly from cloud cover during the rainy seasons, are filled using a weighted linear interpolation method. For each month, composite images are created using a quality-weighted approach, where individual bands are merged based on factors like cloud probability and viewing geometry. The generated monthly composites are processed using a linear Vertex Component Analysis (VCA) unmixing technique to address the mixed pixels challenge. The VCA algorithm identifies pure spectral signatures (endmembers) of distinct components like trees, grass, crops, and soil, without requiring prior scene information. Through the enforcement of physical constraints (abundance sum-to-one and non-negativity), monthly abundance maps are generated that show the distribution of these components across the landscape. These maps decompose the complex spectral patterns of AFCs into their constituent parts, preparing the data for temporal analysis. The final stage examines the temporal patterns of these unmixed components. The abundance time series is fed into a transformer-based deep learning model, following an encoding step that incorporates both positional and seasonal information. This encoding captures not just the sequence of events but also their cyclical nature. The transformer architecture, with its attention heads and encoder layers, can detect complex patterns in time series data. This approach is particularly relevant for the agroforestry context due to its ability to dynamically weigh different time periods, enabling the understanding of how components like crops and trees develop through their growth cycles. 
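The abundance sum-to-one and non-negativity constraints mentioned above are commonly enforced with fully constrained least squares (FCLS): the endmember matrix is augmented with a heavily weighted row of ones, and non-negative least squares supplies the rest. A sketch with made-up endmember spectra (the study's own solver may differ):

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(E, pixel, delta=1e3):
    """Fully constrained least-squares unmixing sketch.
    E: (bands, m) endmember matrix (e.g. from VCA); pixel: (bands,).
    The sum-to-one constraint is enforced softly by appending a row of
    ones weighted by delta; nnls enforces non-negativity."""
    bands, m = E.shape
    E_aug = np.vstack([E, delta * np.ones((1, m))])
    p_aug = np.append(pixel, delta)
    abundances, _ = nnls(E_aug, p_aug)
    return abundances

# Toy 4-band scene with hypothetical tree / crop / soil endmembers.
E = np.array([[0.05, 0.10, 0.30],
              [0.08, 0.15, 0.32],
              [0.04, 0.12, 0.35],
              [0.45, 0.30, 0.38]])
true_a = np.array([0.6, 0.3, 0.1])   # 60% tree, 30% crop, 10% soil
pixel = E @ true_a                    # noise-free mixed pixel
a = fcls_abundances(E, pixel)
```

On the noise-free toy pixel, the recovered abundances match the true fractions and sum to one, which is exactly what the monthly abundance maps described above require per pixel.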
The motivation behind this research is threefold: (1) to develop a methodology for disentangling the spectral signatures of different agroforestry components within mixed pixels, (2) to leverage the temporal dimension of Sentinel-2 data through transformer-based deep learning to capture seasonal and phenological patterns, and (3) to provide stakeholders with accurate, temporally-aware mapping tools that can support sustainable land management decisions and climate resilience strategies. The expected outcome of this research is to achieve a detailed mapping of AFCs, enhancing our ability to monitor and manage complex agroforestry systems. By integrating Sentinel-2 data with advanced unmixing techniques and transformer-based models, we envision producing abundance maps that accurately reflect the spatial and temporal distribution of agroforestry components. This can not only improve classification accuracy but also enable the prediction of future states of these dynamic landscapes.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A new unsupervised method for detecting anomalous areas in geospatial data. Applications to plume detection from greenhouse gas satellite data.

Authors: Elias Ervelä, Laia Amorós, Johanna Tamminen, Antti Mikkonen, Hannakaisa Lindqvist
Affiliations: Finnish Meteorological Institute, University of Helsinki
With the increasing urgency to address climate change, there has been a surge of interest in monitoring human-caused emissions through satellites that measure greenhouse gases, which currently appear to be the most viable route to monitoring at global scale. The massive amounts of data produced by these satellites have increased the need for automated, efficient tools to extract knowledge from the data. Emissions from point sources, such as power plants, can produce distinct plumes that are visible in satellite data. Automated plume detection is key to identifying and monitoring the largest sources of human-caused greenhouse gas emissions. We introduce a new unsupervised learning method called SCEA (Spatial Clustering of Elevated-valued Areas) for detecting high-valued areas in non-gridded geospatial data. Inspired by density-based clustering, the SCEA algorithm identifies clusters by grouping high-valued points with their nearby neighbors, taking into account the measurement value of the points in the process. Originally, the SCEA algorithm was developed for plume detection from greenhouse gas satellite data, but the method is general enough to be applicable to various types of geospatial data with a single value dimension in addition to the spatial dimensions. We demonstrate the performance of the SCEA algorithm on the simulated SMARTCARB NO2 satellite dataset, assessing its ability to find point sources with co-located plumes in different noise scenarios. The SCEA algorithm reached a precision (the fraction of found plumes that were actually plumes) of 0.974, 0.884, and 0.661 in noise-free, low-noise, and high-noise scenarios, respectively. For point sources with annual emissions of 1000 tonnes and above, the SCEA algorithm reached a recall (the fraction of point sources whose plumes were correctly identified) of 0.758, 0.660, and 0.548 for the noise-free, low-noise, and high-noise scenarios, respectively. Keywords: plume detection, satellites, unsupervised learning, greenhouse gas monitoring.
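The published SCEA implementation is not reproduced here, but its core idea, seeding clusters at elevated-valued points and growing them through nearby neighbors that are also elevated, can be illustrated with a minimal greedy variant:

```python
import numpy as np

def cluster_elevated(points, values, value_thresh, eps):
    """Minimal illustration (not the published SCEA implementation):
    seed clusters at points whose value exceeds value_thresh and
    greedily attach neighbors within distance eps that are also
    elevated, so membership depends on both location and value.
    Returns -1 for unclustered (background) points."""
    n = len(points)
    labels = np.full(n, -1)
    cluster = 0
    for s in np.where(values > value_thresh)[0]:
        if labels[s] != -1:
            continue
        labels[s] = cluster
        stack = [s]
        while stack:
            i = stack.pop()
            d = np.linalg.norm(points - points[i], axis=1)
            near = (d < eps) & (labels == -1) & (values > value_thresh)
            for j in np.where(near)[0]:
                labels[j] = cluster
                stack.append(j)
        cluster += 1
    return labels

# Two elevated blobs (plume-like) amid low-valued background points.
pts = np.array([[0, 0], [0.1, 0], [0.2, 0.1],    # blob A
                [5, 5], [5.1, 5.0],              # blob B
                [2, 3], [9, 1]], dtype=float)    # background
vals = np.array([3.0, 2.5, 2.8, 3.1, 2.9, 0.2, 0.1])
labels = cluster_elevated(pts, vals, value_thresh=1.0, eps=0.5)
```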
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Advanced Tree Detection and Classification for Afforestation Monitoring Using High-Resolution Satellite Imagery: A Case Study from Medina, Saudi Arabia

Authors: Dr. Massimo Zotti, PhD Gamal Abdelnasser Allam Abouzied, Dr. Giuseppe Maldera, Dr. Khalid Tijani, Dr. Alberto Morea, Eng. Raffaele Nutricato, Dr. Davide Oscar Nitti, Eng. Ahmad Yahya Alneami, Dr. Hassan Ali, Dr. Nada Mohammad Almogheer
Affiliations: Planetek Italia s.r.l., GAP srl - Geophysical Applications Processing srl, NCVC - National Center for Vegetation Cover Development and Combating Desertification
This study presents an innovative monitoring methodology developed for Saudi Arabia's ambitious national afforestation campaign, specifically focusing on one of its major implementation sites in Medina, where 50,000 trees are being planted. This initiative is part of a broader national commitment to plant 5 million local desert plants by 2030, supporting Saudi Vision 2030's environmental protection and sustainable development goals. By integrating advanced remote sensing techniques, machine learning, and distributed computing, the research contributes directly to these objectives, providing a scalable solution for vegetation monitoring and sustainable land management. Data Acquisition and Preprocessing: The analysis was conducted using high-resolution Pléiades Neo 3 satellite imagery, acquired in May 2024. The dataset covers 166.33 km² with a spatial resolution of 0.27 meters, making it suitable for detailed vegetation analysis. The preprocessing phase involved a series of critical steps to enhance image quality and accuracy, including radiometric corrections, orthorectification, and Pan-Sharpening Fusion. The Pan-Sharpening Fusion process combined high-resolution panchromatic data with lower-resolution multispectral data, significantly improving spatial resolution while preserving spectral information. This enhanced dataset provided the foundation for subsequent vegetation classification and analysis. Feature Extraction Using PCA and Vegetation Indices: To optimize the classification process, Principal Component Analysis (PCA) was employed to reduce the dimensionality of the multispectral data. PCA remaps the information from co-registered images into a new set of components, capturing the majority of the spectral variance. The first four PCA bands were used alongside original spectral bands (Blue, Green, Red, and Near-Infrared) and vegetation indices, including NDVI, SAVI, MSAVI2, and GNDVI, to create a comprehensive feature set for tree classification.
These indices provided critical insights into vegetation health, density, and chlorophyll content, enhancing the classifier's ability to distinguish between vegetation types and other land cover features. Classification Using the Random Forest Algorithm: The Random Forest (RF) algorithm was selected for its robustness in handling high-dimensional data and its ability to effectively classify vegetation features. Key steps in the classification workflow included: 1. ROI (Region of Interest) Selection: Regions of Interest were identified and extracted using endmember collection spectra to ensure accurate spectral signature representation for key classes, including trees, soil, and other land cover types. 2. Training and Optimization: The RF classifier was trained using the selected ROIs and the combined feature set, including PCA-transformed bands and vegetation indices. Parameter optimization was performed to maximize classification accuracy and minimize misclassification errors. 3. Output Classes: The classifier categorized the land cover into distinct classes, with a specific focus on isolating the tree class for further analysis. The integration of PCA and vegetation indices with the spectral bands significantly enhanced the classification process, enabling the model to capture subtle spectral variations that distinguish vegetation from other land cover types. This combination improved the accuracy of the classification, particularly in challenging arid environments where vegetation often exhibits lower spectral reflectance contrasts. Post-Processing and Distributed Computing: The post-processing phase was designed to extract meaningful information from the classified tree class. The workflow was divided into several key steps to accurately count and classify trees based on size: 1. Tile-Based Segmentation: To handle the large dataset efficiently, the study area was divided into 56 tiles, each processed independently using distributed computing.
This approach enabled parallel processing of tiles, significantly reducing computational time and leveraging the power of parallel computing for scalability. 2. Region Labeling and Morphological Analysis: Connected regions were identified and labeled within the binary mask of the tree class. Morphological properties, such as orientation, bounding box, perimeter, and area, were calculated for each connected region. 3. Tree Size Classification: Based on the area distribution of connected regions, trees were divided into three subclasses: large, medium, and small. 4. Polygon Extraction and Coverage: Polygons delineating the connected regions were extracted, and an optimal circle-fitting algorithm was applied to minimize the error between the region’s area and the area derived from the fitted circles. 5. Hough Transform for Circle Detection: A constrained circular Hough transform was applied within the boundaries of each polygon to determine the optimal centers and radii of circles representing individual trees. This comprehensive post-processing workflow produced a detailed count of 1,074 large trees, 3,181 medium trees, and 14,187 small trees, providing a precise assessment of the afforestation project’s progress. The Role of Parallel Computing in Scalability The use of distributed computing, implemented through parallel processing, was integral to the scalability of this workflow. By dividing the dataset into smaller tiles and processing them simultaneously, the framework efficiently managed the computational demands of analyzing high-resolution imagery. This approach not only reduced the time required for processing but also demonstrated the feasibility of applying the methodology to larger regions or datasets, making it adaptable for regional or global afforestation monitoring initiatives. Validation and Future Directions Field validation was conducted using ground truth measurements to assess the accuracy of the tree classification and counting process. 
The results confirmed the reliability of the proposed workflow, particularly in detecting and classifying large and medium-sized trees. However, small trees presented greater challenges due to their lower spectral contrast and overlapping canopies. Future work will focus on enhancing the temporal analysis of vegetation growth to monitor afforestation projects over time. This will involve integrating multi-temporal satellite imagery and developing advanced machine learning algorithms to improve the classification of mixed land cover types. Additionally, exploring the use of higher-resolution sensors and incorporating drone-based data for ground control points could further refine the accuracy of the workflow.

Conclusion
This study presents a robust and scalable framework for monitoring afforestation projects using high-resolution satellite imagery, advanced image processing, and distributed computing techniques. The integration of PCA and vegetation indices with spectral bands significantly enhanced the classification process, while the use of distributed computing accelerated the tree counting workflow, enabling efficient analysis of large datasets. This methodology directly supports Saudi Arabia's Vision 2030 goals of environmental protection and sustainable development by enabling precise monitoring of afforestation campaigns, such as the Medina project and the broader initiative to plant 5 million local desert plants by 2030. The combination of advanced remote sensing techniques, machine learning, and distributed computing highlights the transformative potential of these technologies in addressing complex environmental challenges. This study serves as a benchmark for future research and operational projects aimed at promoting afforestation, biodiversity conservation, and sustainable land management on a regional and global scale.
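The region-labeling and size-classification post-processing steps can be sketched as a simple connected-component pass over the binary tree mask; the area thresholds below are placeholders, not the study's values:

```python
import numpy as np

def label_regions(mask):
    """4-connected component labeling of a binary tree mask.
    Returns a label image and the area (pixel count) of each region."""
    labels = np.zeros(mask.shape, dtype=int)
    areas = []
    next_label = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already belongs to a labeled region
        next_label += 1
        labels[start] = next_label
        stack, area = [start], 0
        while stack:  # iterative flood fill
            r, c = stack.pop()
            area += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = next_label
                    stack.append((nr, nc))
        areas.append(area)
    return labels, areas

def classify_by_area(areas, small_max=8, medium_max=40):
    """Bucket region areas into small/medium/large (placeholder thresholds)."""
    return ["small" if a <= small_max else "medium" if a <= medium_max else "large"
            for a in areas]
```

In practice the per-region morphological properties (orientation, bounding box, perimeter) would be derived from the same labeled regions.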

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Exploring Multi-level Uncertainty Quantification for Remote Sensing Image Classification: How Label Embeddings Can Effectively Target Calibration and Out-of-Distribution Detection Performance

Authors: Christoph Koller, Dr. Katharina Hechinger, Prof. Dr. Göran Kauermann, Xiao Xiang Zhu
Affiliations: Technical University of Munich (TUM), German Aerospace Center (DLR), LMU Munich
Modern deep learning models have achieved superior performance in almost all fields of remote sensing. An often neglected aspect of these models is the quantification and evaluation of predictive uncertainties: for a classification task, the analysis typically focuses solely on performance metrics such as accuracy or the loss. A notion of uncertainty, on the other hand, indicates the model’s indecisiveness among the given classes and is essential for understanding where the model struggles to classify the data samples. In this work, we distinguish three levels of uncertainty, starting with the typical softmax pseudo-probabilities as level-1 uncertainty. As a next level, we utilize the more flexible Dirichlet framework as the model output space, and additionally consider a Bayesian setting with an uninformative prior. For the level-3 uncertainty, we incorporate the statistical framework introduced by Hechinger et al. (2024) into our deep learning pipeline. More concretely, an empirical Bayes setting is used in which a latent embedding of the label space is iteratively estimated via the marginal likelihood of the fully parameterized label space. The estimated embeddings are then learned by the network in three different settings: two regression losses use the embeddings directly, while the closed-form solution of the Kullback-Leibler (KL) divergence uses the embedding parameterized as a Dirichlet distribution. For evaluating the different levels of uncertainty, we investigate the label evaluation subset of the So2Sat LCZ42 dataset, which contains label votes from multiple remote sensing experts. We evaluate the predictive uncertainties by means of Out-of-Distribution (OoD) detection and calibration performance. For the OoD setup, we utilize common uncertainty metrics such as the model confidence or the entropy of the softmax, as well as more advanced metrics like the Dempster-Shafer metric, the mutual information, or the expected entropy.
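The simplest of these uncertainty scores, the predictive (softmax) entropy, can serve as an OoD indicator: higher entropy flags samples the model is indecisive about. A generic sketch, not the study's implementation:

```python
import numpy as np

def softmax_entropy(probs):
    """Predictive entropy of softmax outputs; higher values indicate
    greater indecisiveness and thus potential out-of-distribution inputs."""
    p = np.clip(probs, 1e-12, 1.0)   # guard against log(0)
    return -(p * np.log(p)).sum(axis=-1)

scores = softmax_entropy(np.array([[0.5, 0.5],    # maximally uncertain
                                   [1.0, 0.0]]))  # fully confident
```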
In the OoD experiments, the Bayesian Dirichlet setting with an uninformative prior achieves the best performance. In the calibration analysis, we investigate not only the common expected calibration error (ECE) but also the maximum calibration error as well as the static calibration error, which incorporates the entire predictive distribution of the model. Overall, the embedding-based approaches can show strong performance for the calibration experiments when compared to classical models. Reference: Katharina Hechinger, Christoph Koller, Xiao Xiang Zhu, and Göran Kauermann. Human-in-the-loop: Towards label embeddings for measuring classification difficulty. arXiv preprint arXiv:2311.08874, 2024.
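The expected calibration error used above is typically computed by binning predictions by confidence and comparing per-bin accuracy with per-bin confidence; a minimal sketch (equal-width binning is an assumption about the scheme used):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: weighted average of |accuracy - confidence| over confidence bins."""
    conf = probs.max(axis=1)  # top-class confidence per sample
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece
```

The maximum calibration error replaces the weighted sum with a max over bins; the static calibration error applies the same binning per class over the full predictive distribution.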

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A glacier-rock glacier inventory for the semi-arid Andes generated using a deep-learning approach.

Authors: Daniel Thomas, Benjamin Robson, Shelley MacDonell, Adina Racoviteanu, Daniel Hölbling
Affiliations: University Of Bergen, Waterways Centre for Freshwater Management, Université Grenoble Alpes, University of Salzburg
Rock glaciers are vital components of high mountain ecosystems, serving as significant reservoirs of freshwater and indicators of permafrost. In the semi-arid Andes (27°–35°S), where water resources are crucial yet scarce, understanding the distribution and dynamics of rock glaciers is essential for hydrological studies and freshwater management strategies. However, creating comprehensive inventories of rock glaciers is challenging due to the spectral similarity between rock glaciers and the surrounding rocky terrain, which hampers traditional remote sensing mapping methods. Therefore, inventory creation often relies on manual delineation—a time-consuming and labour-intensive approach that involves a high degree of analyst subjectivity. As such, existing inventories are often incomplete or incorrectly include landforms that are not rock glaciers, highlighting the need for an automated, systematic approach. Our study presents the preliminary results of a glacier–rock glacier inventory created for the semi-arid Andes using a deep learning approach applied to high-resolution (<1 m) WorldView-2 and WorldView-3 imagery datasets. We trained the network on existing rock glacier inventories from different regions of the world before fine-tuning it for the semi-arid Andes. The inventory covers an area of over 35,000 km², indicating the ability of deep learning to create inventories over large spatial scales. Following classification, our inventory was quality-assured in compliance with the guidelines established by the Rock Glacier Inventories and Kinematics (RGIK) Action Group in 2023 to standardise rock glacier inventory creation. Rock glaciers in the inventory include attribute data such as their morphological characteristics, upslope contributing areas, and kinematic status and activity levels where existing data are available.
While omission and commission errors still occur, our preliminary results demonstrate that the deep learning approach significantly reduces the manual effort traditionally required for regional rock glacier inventory development. Although not yet fully automated, the approach streamlines the mapping process, reduces analyst subjectivity, and increases consistency across the region.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: AI4RSSI: an AI-Based Approach for Generating Synthetic Copernicus Sentinel-2 Images

Authors: Francesco Cristiano Pignatale, Claudio Mammone, Long P. Chau
Affiliations: Telespazio Germany GmbH
Satellite imagery and associated metadata are fundamental for providing insights for Earth Observation studies, for building reliable models that improve our understanding of the evolution of the climate, and for fostering applications for current and future space-borne missions. In designing space-borne missions, it is crucial to properly set a series of specifications (e.g. instrument resolutions, orbit characteristics) that best match the mission’s requirements and goals. To this end, it is paramount to establish in advance, with the best possible confidence, the output of the proposed specifications (e.g. images, metadata) under different conditions, including, for example, extreme settings and corner cases. Synthetic data can help in assessing these problems. Having a series of predictions in advance could reduce the risk associated with non-compliance and allow the assessment of the necessary mitigation strategies. Furthermore, synthetic data can be extremely important for speeding up the parallel development, testing, verification, and validation of space-borne missions’ data processors, retrieval algorithms, and parameter configurations. Synthetic data can also be used in generative or predictive models aimed at studying, for example, the evolution and extent of climate change. The goal of this project is to review the state of the art in the field and then build a Generative AI-based framework for producing synthetic satellite imagery for a set of agreed variable parameters (e.g. pixel resolution, aerosol optical thickness, solar zenith angle, sensing time, …). The Copernicus Sentinel-2 database constitutes the data source and the sandbox in which we develop this prototype tool, as its data are publicly available and cover a large range of different observational parameters and conditions.
In this work we present the proof of concept, its processing chain, and preliminary results showing Generative AI-based synthetic Copernicus Sentinel-2 True Color Images produced under different sets of initial conditions. Our approach can then be extended to different bands (e.g. RGB, NIR, SWIR), and the whole method can be scaled to include different types of sensors and data types. This work is conducted with the contribution of the Technische Universität Darmstadt.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Using SAR Sentinel-1 Imagery and U-Net Model for Oil Spills Monitoring

Authors: Bartosz Staroń, PhD Michał Lupa
Affiliations: AGH University of Krakow
Oil spills in marine environments are frequent occurrences that pose significant threats to aquatic ecosystems and biodiversity. Prompt detection is essential for implementing timely response measures and minimizing the detrimental effects on marine life. Synthetic Aperture Radar (SAR) imagery serves as an effective tool for the surveillance of marine waters, capable of acquiring data regardless of time of day, cloud cover, or weather conditions. This all-weather, day-and-night capability makes SAR data particularly suitable for detecting and mapping oil spills. In this project, a U-Net convolutional neural network model was employed for the semantic segmentation of radar images to identify and visualize oil spills. This approach transforms complex radar data into accessible visual formats that can be interpreted by individuals without specialized knowledge of radar imagery. The model was trained on a dataset comprising 15 SAR images captured by the Sentinel-1 satellite sensor, each accompanied by corresponding ground-truth masks indicating oil-contaminated areas. To enhance the model's robustness and improve its generalization capabilities, the dataset was augmented using various techniques and divided into smaller patches of resolution 512x512 pixels. This process increased the dataset's size and variability, which is crucial for effective deep learning training. The results indicate that the U-Net model effectively segmented the radar images, successfully identifying areas affected by oil spills. However, some noise and misclassification errors were observed in certain regions, likely due to the relatively small size of the training dataset limiting the model's ability to generalize fully. Despite these challenges, the model shows promise in detecting oil spills in new, unseen images while preserving the spatial context of the original data. 
To improve performance in future work, expanding the dataset by acquiring more SAR images and refining the ground-truth masks is recommended. Additionally, incorporating advanced post-processing techniques could help reduce noise and enhance segmentation accuracy. These findings highlight the significant potential of integrating deep learning models with SAR data for effective environmental monitoring and facilitating rapid responses to ecological disasters. The computations and visualizations were conducted on virtual machines with access to the EODATA catalogue, a service offered by CloudFerro S.A.
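The patch-splitting step described above can be sketched as follows; only the non-overlapping tiling is shown, since the exact augmentation pipeline is not specified in the abstract:

```python
import numpy as np

def extract_patches(image, mask, patch=512):
    """Split a SAR scene and its ground-truth mask into non-overlapping
    patch pairs of size patch x patch for training."""
    h, w = image.shape[:2]
    pairs = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            pairs.append((image[r:r + patch, c:c + patch],
                          mask[r:r + patch, c:c + patch]))
    return pairs

# A 1024 x 1024 scene yields four 512 x 512 image/mask training pairs
pairs = extract_patches(np.zeros((1024, 1024)),
                        np.zeros((1024, 1024), dtype=np.uint8))
```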

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Deep Learning for Fire Risk Prediction: Application of Metric Learning and Explainability Methods

Authors: Stella Girtsou, Konstantinos Alexis, Alexis Apostolakis, Dr. Giorgos Giannopoulos, Dr. Harris Kontoes
Affiliations: National Observatory Of Athens, Athena Research Center
Various machine learning approaches, from classical algorithms like Random Forest and XGBoost to deep learning models like U-Net and CNNs, have been applied to fire prediction, but metric learning methods remain underexplored. Metric learning aims to develop a metric that minimizes the distance between similar samples and maximizes it for dissimilar ones. We train a Siamese network with triplet loss, where the input consists of an anchor, a positive (same class), and a negative (different class). The triplet loss ensures the anchor is closer to the positive than to the negative. Our method begins with feature extraction from 25 factors influencing forest fires, including topography, meteorology, and Earth Observation data. After preprocessing, the dataset is formatted for input into Siamese Neural Networks, where triplets (anchor-positive-negative) are created for training. Since generating all possible combinations from approximately 540 million samples is computationally expensive, we focus on constructing effective triplets. Many triplets, whether too easy or too difficult, would not contribute to training, as they result in small or zero gradients, slowing convergence. After converting multispectral images into CSV files, numerical features are normalized, and categorical ones are one-hot encoded. We conduct experiments to evaluate the best triplet composition by calculating similarity metrics, specifically Manhattan distance, between fire/non-fire and fire/fire pairs. Fire instances that are too distant, or non-fire instances that are similar to the fire class, are excluded from training to ensure effective learning.

Machine Learning. We developed an innovative architecture for metric learning aimed at optimal classification.
The model is designed so that a Siamese neural network and a classifier are trained in parallel, with the dual objective of (1) generating new vector representations that better describe the samples and make them more distinguishable in feature space, and (2) effectively classifying these representations into two target classes (fire/non-fire). The architecture consists of two main components: the TripletNet and the Classifier.

Loss functions. We use a weighted loss function that combines the Triplet Loss and the Cross-Entropy Loss to train the model. The Triplet Loss learns a vector space in which an anchor vector is more similar to vectors from the same class (positives) than to vectors from a different class (negatives). The Cross-Entropy Loss calculates the discrepancy between the network's output logits and the target labels, using the sigmoid function for binary classification tasks. To address class imbalance, we introduce class weighting (fire/non-fire), with the optimal weight determined through extensive experimentation. The final loss function is given by L = r1 × L_triplet + r2 × L_cross-entropy, where r1 and r2 are the weighting factors that balance the contributions of each loss component.

Experiments. The evaluation dataset covers the entire territory of Greece, defined by a grid of 500-meter cells. This setup results in approximately 889,000 grid cells per day. We focus on the four-month period from June to September, as this is when the majority of wildfires occur in Greece, spanning the years 2010 to 2020. This approach yields a final dataset of about 1.19 billion samples, representing daily snapshots of each grid cell. For model development, data from 2010 to 2018 is designated for training and validation, while the years 2019 and 2020 are reserved for testing.
To generate the triplet dataset, we experimented with (a) the proportion of hard, semi-hard, and easy triplets to be generated, (b) the number of triplets for each fire anchor instance, (c) the number of triplets for each non-fire anchor instance, and (d) the fire-to-non-fire sample ratio, specifying how many more non-fire samples are used compared to fire samples. This process resulted in 262 variations of the dataset. From the interpretation of the different triplet variations, it becomes clear that:

1) The number of triplets with a fire anchor has a strong positive correlation with all the metrics (sh1, Specificity, Sensitivity).
2) The number of triplets with a non-fire anchor also has a positive correlation with model performance, but to a much lower degree than the previous one.
3) The fire-anchor to non-fire-anchor ratio does not seem to play a significant role in performance, which makes sense, as the values tested result in almost balanced datasets or datasets where the fire class dominates.

Regarding the composition of triplets with a fire anchor: 1) hard negative examples have a negative impact on model performance; 2) moderately hard and easy negative examples improve model performance. For the composition of triplets with a non-fire anchor: 1) hard fire examples slightly hinder training; 2) easy and moderately hard fire examples slightly benefit training. Additionally, the effectiveness of the model is studied in relation to the number of fire triplets and their composition; we observe that effectiveness increases as the number of triplets increases. On top of this work, we propose model-agnostic solutions towards representative sampling for producing local prediction explanations spanning the whole dataset, to tackle the slow processing times of explainable-AI frameworks when generating vast amounts of local interpretations.
The proposed method is evaluated on a separate test dataset that maintains the original class distribution for the study area. We employ sensitivity (the ratio of correctly identified fire areas) and specificity (the ratio of correctly identified non-fire areas) to evaluate our methods, achieving high performance of 80% in both metrics and the desired balance between them.
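The combined objective L = r1 × L_triplet + r2 × L_cross-entropy can be sketched as follows; the margin value and the use of Manhattan distance inside the loss are illustrative assumptions (the abstract only states Manhattan distance for triplet mining):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull the anchor toward the positive and
    push it away from the negative by at least the margin."""
    d_pos = np.abs(anchor - positive).sum(axis=-1)  # Manhattan distances
    d_neg = np.abs(anchor - negative).sum(axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def combined_loss(l_triplet, l_cross_entropy, r1=0.5, r2=0.5):
    """Weighted objective L = r1 * L_triplet + r2 * L_cross-entropy."""
    return r1 * l_triplet + r2 * l_cross_entropy

a = np.array([[0.0, 0.0]])
easy = triplet_loss(a, np.array([[0.0, 0.0]]), np.array([[3.0, 0.0]]))  # margin satisfied
hard = triplet_loss(a, np.array([[1.0, 0.0]]), np.array([[1.0, 0.0]]))  # margin violated
```

Easy triplets yield zero loss (and zero gradient), which is exactly why the mining strategy above discards them.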

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: The Value of Hybrid Neural-Numerical Surrogate Models for Accelerated Greenhouse Gas Retrievals

Authors: Fiona Lippert, Andrew Barr, Dr Jochen Landgraf
Affiliations: SRON Netherlands Institute for Space Research
Global measurements of greenhouse gases in the Earth’s atmosphere are essential for monitoring climate change and anthropogenic emissions. The Copernicus Sentinel-5 and CO2M missions have high-resolution spectrometers on board to quantify the abundance and spatial distribution of greenhouse gases. However, converting the measured radiances into precise atmospheric concentrations relies on time-consuming multiple-scattering radiative transfer calculations, which need to be performed at each iteration of a retrieval, creating a major computational bottleneck. Machine learning has the potential to address this issue by emulating radiative transfer processes with learned surrogate models that are orders of magnitude faster than traditional physics-based simulators. Yet, greenhouse gas retrievals require high spectral accuracy to leverage subtle changes in radiances caused by variations in atmospheric composition – especially for CO2, where detecting small enhancements relative to the background is crucial for estimating meaningful fluxes. Learned surrogate models must therefore meet stringent requirements in terms of both computational speed and predictive accuracy to be applicable in practice. Hybrid modeling approaches, integrating machine learning with physics-based numerical simulators, offer a powerful trade-off between speed and accuracy. Unlike purely data-driven models, which must learn all relevant physics from scratch, hybrid models benefit from prior physical knowledge, allowing them to focus on capturing complex phenomena not addressed by the physics component. To be effective, however, the physics component must provide valuable information while avoiding significant computational overhead. In this study, we explore hybrid models that combine the efficiency and flexibility of neural networks with prior knowledge encoded in non-scattering radiative transfer simulations.
Non-scattering spectra capture relevant features such as molecular absorption, solar irradiance, and surface reflective properties, while being significantly faster to compute than high-fidelity multiple-scattering simulations. This makes them an ideal candidate for the physics component in hybrid surrogate models. We evaluate three different strategies to integrate non-scattering spectra with neural networks: (1) training neural networks to predict correction terms that account for scattering effects not included in the non-scattering calculations, (2) using non-scattering spectra as additional network inputs, or (3) a combination of the two. We systematically compare these strategies to purely data-driven approaches using a dataset of 100,000 shortwave infrared radiance spectra simulated under synthetic surface and atmospheric conditions. Overall, the developed surrogate models achieve promising spectral errors below 1%. Preliminary results indicate that hybrid models integrating physics-based non-scattering spectra with neural networks can indeed improve the accuracy of the generated radiance spectra. This work takes a significant step towards near-real-time global greenhouse gas monitoring. By enabling retrievals at a fraction of current computational costs, hybrid surrogate models could facilitate large-scale reprocessing of mission data and support timely identification of environmental hazards such as wildfires and methane leaks, enhancing mitigation efforts and decision-making.
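The three integration strategies can be sketched generically as follows; the function names and interfaces are illustrative assumptions, not the authors' code:

```python
import numpy as np

def hybrid_spectrum(state, nonscat, net, strategy="correction"):
    """Combine a fast non-scattering baseline spectrum with a learned network.
    - "correction": network predicts a residual added to the baseline (strategy 1)
    - "input":      baseline is concatenated to the network inputs   (strategy 2)
    - "both":       baseline used as input AND corrected additively  (strategy 3)
    `net` is any callable mapping a feature vector to a spectrum."""
    if strategy == "correction":
        return nonscat + net(state)
    if strategy == "input":
        return net(np.concatenate([state, nonscat]))
    if strategy == "both":
        return nonscat + net(np.concatenate([state, nonscat]))
    raise ValueError(strategy)
```

In strategy (1) the network only has to learn the scattering residual, which is typically a smoother target than the full radiance spectrum.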

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Advanced Greenhouse Gas Predictions: Leveraging Ecosystem-Specific Analyses at ICOS sites using ML Models

Authors: Pablo Catret Ruber, Dr. David Garcia-Rodriguez, Dr. Domingo Jose Iglesias Fuente, Dr. Juan José Martínez Durá, Dr. José Javier Samper-Zapater, Dr. Ernesto López-Baeza
Affiliations: University Research Institute on Robotics and Information and Communication Technologies (IRTIC), Universitat de València, Valencian Institute of Agricultural Research (IVIA), Faculty of Physics, Universitat de València, Albavalor, Science Park Universitat de Valencia
Greenhouse gas (GHG) concentrations, encompassing key compounds such as carbon dioxide (CO₂), carbon monoxide (CO), and methane (CH₄), are critical indicators of the ongoing dynamics of climate change. Their monitoring and prediction are essential for comprehending the processes driving global warming and for designing robust mitigation strategies aimed at curbing its impacts. These gases play diverse roles in the atmosphere, ranging from direct contributions to the greenhouse effect to their interactions with other atmospheric components that influence radiative forcing. Given the urgency of addressing climate change, there is a growing need for accurate and reliable models capable of forecasting these GHG concentrations under various environmental conditions to provide actionable insights for researchers, policymakers, and other stakeholders. This study focuses on developing and evaluating a long-term high-resolution temporal forecasting framework tailored to the prediction of CO₂, CO, and CH₄ concentrations. The data utilized for this analysis are sourced from multiple stations within the Integrated Carbon Observation System (ICOS), a European research infrastructure dedicated to high-precision monitoring of carbon fluxes and GHG concentrations. By leveraging the extensive and diverse datasets provided by ICOS, this study seeks to uncover patterns and trends in GHG behavior while refining the methodologies used for their prediction. An essential characteristic of this study is that each forecasting model is trained independently on a distinct time series corresponding to the data from each ICOS measurement station. Unlike generalized models that pool data from various sources, this approach allows each model to adapt specifically to the temporal and spatial dynamics of its corresponding site. 
This station-specific training ensures that the models capture the unique emission patterns, local atmospheric conditions, and environmental influences present at each location. By focusing on localized training, the study enhances predictive accuracy and highlights the inherent heterogeneity in GHG dynamics across different ecological and geographical contexts. Another key innovation lies in the ability of the forecasting models to function without relying on exogenous variables. Traditional forecasting approaches often require supplementary inputs, such as weather data, human activity indices, or other environmental variables, to improve accuracy. However, the models developed in this study are designed to predict future GHG concentrations solely based on their past observations. By leveraging the intrinsic temporal patterns and dependencies within the historical data, the models effectively forecast future values without the need for additional external information. This capability makes them more robust and universally applicable, especially in scenarios where access to exogenous data is limited or inconsistent. A cornerstone of this research is the advanced analysis of various factors that can significantly influence the accuracy of GHG forecasts. The study examines land cover types classified according to the ESA WorldCover classification system, which provides detailed information on global land use and land cover patterns. By integrating this data, the study assesses how different terrestrial ecosystems and human-modified landscapes contribute to GHG emissions and affect prediction models. Furthermore, the Köppen climate classification system is employed to account for climatic variability across regions, thereby enabling the models to adapt to diverse weather patterns and seasonal changes. 
Additional geographical parameters, such as latitude and elevation above sea level, are incorporated to capture the spatial variability of GHG concentrations, which can be influenced by atmospheric circulation, temperature gradients, and regional emission sources. Another innovative aspect of this study is the incorporation of vertical variability in GHG concentrations. Many ICOS stations are equipped with sensors placed at multiple measurement heights, allowing for the observation of vertical gradients within the atmospheric boundary layer. By integrating these multi-height measurements, the study captures the complex dynamics of gas exchange processes between the surface and the atmosphere, offering a more nuanced understanding of GHG distribution. Moreover, the study introduces a gap-filling capability to address the challenges posed by data gaps of varying durations in GHG observations. These gaps, which can occur due to instrument malfunctions, maintenance activities, or extreme weather conditions, pose a significant obstacle to the continuous monitoring of GHG concentrations. The developed models incorporate advanced techniques to impute missing data, ensuring the integrity of long-term datasets. By leveraging temporal correlations and machine learning algorithms trained on historical datasets, the models can fill data gaps spanning from short-term interruptions to long-duration absences. This capability is critical for maintaining the continuity of datasets used in climate modeling and forecasting, particularly when observations are sparse or irregular. To achieve the objectives of this research, a diverse array of forecasting models is trained and rigorously evaluated. These include traditional statistical models, which provide baseline comparisons and insights into linear relationships within the data. In parallel, state-of-the-art machine learning (ML) approaches are employed to explore nonlinear patterns and interactions among variables. 
Deep learning architectures, known for their ability to handle large and complex datasets, are also tested to assess their potential for capturing intricate temporal dependencies. Hybrid techniques, which combine elements of statistical and machine learning models, are further explored to optimize predictive performance. The gap-filling capability is integrated into these models, allowing them not only to predict future GHG concentrations but also to reconstruct past trends from incomplete datasets. By focusing on time series specific to each station, the study ensures that the predictions remain localized and accurate while leveraging station-specific historical data to maximize the models' effectiveness. A key goal of the study is to identify the methods that provide the most accurate and consistent predictions across diverse ecological and geographical contexts. This is achieved through a systematic evaluation of model performance within the predefined environmental categories, including land cover types, climatic zones, and altitudinal ranges. The evaluation criteria include metrics such as mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). By comparing the models across these metrics, the study highlights the trade-offs between complexity, computational efficiency, and predictive accuracy, ultimately guiding the selection of optimal approaches for different scenarios. The implications of this research extend beyond theoretical modeling. Accurate long-term forecasting of greenhouse gas concentrations is essential for informing climate action at multiple levels. For policymakers, these forecasts provide critical information for anticipating trends in GHG emissions and tailoring mitigation and adaptation strategies.
At the local level, high-resolution predictions can aid in the development of targeted interventions, such as urban planning initiatives aimed at reducing CO₂ emissions or agricultural practices designed to minimize methane release. At the global scale, the insights gained from this research contribute to the broader scientific understanding of carbon and methane cycles, facilitating international efforts to monitor and reduce GHG emissions in line with the goals of the Paris Agreement. In conclusion, this study represents a significant step forward in the field of greenhouse gas forecasting. By combining high-resolution ICOS data with cutting-edge modeling approaches, it provides a robust framework for predicting CO₂, CO, and CH₄ concentrations under diverse environmental conditions. The integration of factors such as land cover type, climate classification, latitude, and elevation, as well as vertical variability in GHG measurements, ensures that the models are grounded in a comprehensive understanding of the processes driving GHG dynamics. The station-specific training of the models and their independence from exogenous variables underscore their versatility and practicality. Furthermore, the inclusion of a gap-filling capability addresses the challenge of incomplete datasets, enhancing the utility and reliability of the forecasting framework. Ultimately, the research underscores the critical role of accurate GHG forecasting in advancing scientific knowledge, guiding climate policy, and fostering the development of resilient and sustainable systems.
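The evaluation metrics named in the abstract (MAE, RMSE, MAPE) can be sketched in a few lines of numpy; the concentration values below are illustrative placeholders, not ICOS measurements:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error: average magnitude of the residuals
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # Root Mean Square Error: penalizes large residuals more heavily
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    # Mean Absolute Percentage Error: scale-independent, in percent
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Illustrative CO2 concentrations (ppm); not real station data
obs = np.array([410.0, 412.5, 415.0, 413.2])
fcst = np.array([409.0, 413.0, 414.0, 414.2])
scores = {"MAE": mae(obs, fcst), "RMSE": rmse(obs, fcst), "MAPE": mape(obs, fcst)}
```

Because RMSE squares the residuals before averaging, it always exceeds (or equals) MAE on the same data, which is why comparing the two hints at how error is distributed across outliers.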
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Multi-Task Learning for Field Boundaries Segmentation and Crop Classification Using Remote Sensing Imagery

Authors: Rim Sleimi, Florian Werner, Albert Abelló
Affiliations: Hydrosat
Field boundaries segmentation (FBS) and crop classification (CC) are essential tasks in precision agriculture, supporting applications from yield prediction to sustainable resource management. In regions stricken by water scarcity, accurately delineated field boundaries combined with crop classification maps can play a critical role in water productivity monitoring. By integrating these datasets with evapotranspiration estimates derived from remote sensing (RS), it becomes possible to assess water use efficiency at the field level. This enables stakeholders to identify areas where water is being used inefficiently, prioritize high-productivity crops, and implement precision irrigation strategies. Such applications not only optimize resource utilization but also contribute to mitigating the impacts of drought and ensuring sustainable agricultural practices in vulnerable regions. Recent advancements in remote sensing for CC and FBS have achieved substantial success using deep learning techniques. However, existing methods tackle these tasks separately, necessitating distinct models and workflows which present several inefficiencies and limitations. Separate models require dedicated computational resources for training, inference, and storage, resulting in increased costs and energy consumption—factors critical for large-scale applications in precision agriculture. Moreover, the lack of synergistic processing misses opportunities to enhance performance; for instance, boundary information could refine crop classification predictions especially near field edges, while crop type data could improve field delineation accuracy in heterogeneous regions. This study proposes a unified multi-task learning framework that concurrently optimizes CC and FBS within a single, end-to-end model. The Fractal ResUNet architecture, known for its ability to capture multi-scale features and preserve spatial details, is adapted to simultaneously perform both tasks. 
The approach leverages shared feature extraction layers within the Fractal ResUNet, coupled with task-specific output heads tailored to each objective. By utilizing shared spatio-temporal features from Sentinel-2 remote sensing imagery, the framework enhances predictive accuracy for both tasks while reducing computational overhead. Preliminary experiments demonstrate that this multi-task approach improves both boundary delineation and crop classification accuracy compared to single-task baselines, highlighting its potential to reduce computational overhead while enhancing prediction performance. This work highlights the feasibility and potential of multi-task learning in remote sensing, offering a unified solution to the intertwined challenges of field delineation and crop classification, paving the way for scalable and integrated agricultural monitoring systems.
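The shared-encoder, two-head structure behind such a multi-task framework can be illustrated with a toy numpy sketch; the random weights and shapes are placeholders and do not reproduce the Fractal ResUNet itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "shared encoder": one linear layer + ReLU over per-pixel features.
n_pixels, n_features, n_hidden, n_crops = 16, 10, 8, 5
W_shared = rng.normal(size=(n_features, n_hidden))
W_crop = rng.normal(size=(n_hidden, n_crops))    # crop-classification head
W_edge = rng.normal(size=(n_hidden, 1))          # field-boundary (edge) head

x = rng.normal(size=(n_pixels, n_features))      # per-pixel input features
shared = np.maximum(x @ W_shared, 0.0)           # features reused by both tasks

# Task-specific output heads on top of the shared representation
crop_logits = shared @ W_crop
crop_logits -= crop_logits.max(axis=1, keepdims=True)   # numerical stability
crop_probs = np.exp(crop_logits)
crop_probs /= crop_probs.sum(axis=1, keepdims=True)     # softmax over crop classes
edge_prob = 1.0 / (1.0 + np.exp(-(shared @ W_edge)))    # sigmoid boundary probability
```

The key design point is that a single forward pass through the shared layers serves both outputs, which is where the computational savings over two separate single-task models come from.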
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Reconstruction of Arctic sea ice thickness (1992-2010) based on a hybrid machine learning and data assimilation approach

Authors: Léo Edel, Jiping Xie, Julien Brajard, Dr Laurent Bertino
Affiliations: Nersc, Bjerknes Center for Climate Research
Arctic sea ice thickness (SIT) remains one of the most crucial yet challenging parameters to estimate. Satellite data generally presents temporal and spatial discontinuities, which constrain studies focusing on long-term evolution. Since 2011, the combined satellite product CS2SMOS enables more accurate SIT retrievals that significantly decrease modelled SIT errors during assimilation. Can we extrapolate the benefits of data assimilation to past periods lacking accurate SIT observations? In this study, we train a machine learning (ML) algorithm to learn the systematic SIT errors between two simulations of the model TOPAZ4 over 2011-2022, one with CS2SMOS assimilation and another without any assimilation, to predict the SIT error and extrapolate the SIT prior to 2011. The ML algorithm relies on SIT coming from the two versions of TOPAZ4, various oceanographic variables, and atmospheric forcings from ERA5. Over the test period 2011-2013, the ML method outperforms TOPAZ4 without CS2SMOS assimilation when compared to TOPAZ4 assimilating CS2SMOS. The root mean square error of Arctic averaged SIT decreases from 0.42 to 0.28 meters and the bias from -0.18 to 0.01 meters. Also, despite the lack of observations available for assimilation in summer, our method still demonstrates a crucial improvement in SIT. Relative to independent mooring data in the Central Arctic between 2001 and 2010, mean SIT bias reduces from -1.74 meters to -0.85 meters when using the ML algorithm. In the Beaufort Gyre, our method approaches the performance of a basic correction algorithm. Ultimately, the ML-adjusted SIT reconstruction reveals an Arctic mean SIT of 1.61 meters in 1992 compared to 1.08 meters in 2022. This corresponds to a decline in total sea ice volume from 19 690 to 12 700 km³, with an associated trend of -3 153 km³ per decade. These changes are accompanied by a distinct shift in SIT distribution. 
Our innovative approach proves its ability to correct a significant part of the primary biases of the model by combining data assimilation with machine learning. Although this new reconstructed SIT dataset has not yet been assimilated into TOPAZ4, future work could enable the correction to be further propagated to other sea ice and ocean variables.
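The error-learning strategy described above, fitting a model to the assimilated-minus-free-run SIT difference and then applying it as a correction, can be sketched on synthetic data. Ordinary least squares stands in for the actual ML algorithm, and all values are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: free-run SIT plus two "forcing" predictors, and an
# "assimilated" SIT that differs by a systematic, state-dependent error.
n = 500
sit_free = rng.uniform(0.5, 3.0, n)
forcing = rng.normal(size=(n, 2))
true_error = 0.3 * sit_free - 0.2 * forcing[:, 0] + 0.1   # hidden systematic bias
sit_assim = sit_free + true_error + rng.normal(scale=0.01, size=n)

# "Training": regress the assim-minus-free error on the predictors
X = np.column_stack([sit_free, forcing, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, sit_assim - sit_free, rcond=None)

# "Extrapolation": apply the learned correction to the free run
sit_corrected = sit_free + X @ coef
rmse_before = np.sqrt(np.mean((sit_free - sit_assim) ** 2))
rmse_after = np.sqrt(np.mean((sit_corrected - sit_assim) ** 2))
```

The point of the construction is that once the mapping from model state and forcings to the systematic error is learned, it can be applied in periods where no assimilated product exists, which is exactly the pre-2011 extrapolation setting.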
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: COUPLED VAE AND INTERPOLATOR APPROACH FOR FAST HYPERSPECTRAL IMAGE EMULATION

Authors: Chedly Ben Azizi, Claire Guilloteau, Gilles Roussel, Matthieu Puigt
Affiliations: LISIC
The generation of high-fidelity synthetic data plays a pivotal role in enabling large-scale data analysis across various fields, including remote sensing, climatology, and astrophysics. In remote sensing, particularly in hyperspectral imaging, synthetic data generation is critical for preparing and validating observation missions and supplementing real datasets during operations. This is particularly crucial for addressing challenges in weather forecasting, climate modelling, and environmental monitoring, where large, high-quality datasets are needed to train machine learning models, validate algorithms, and test operational scenarios. Synthetic data also allows exploration of rare or extreme events that may not be adequately captured in real-world observations, providing a robust foundation for predictive modelling. In hyperspectral imaging, generating synthetic data is crucial for simulating complex physical phenomena such as radiative transfer and land-ocean interactions, enabling the emulation of diverse Earth surface conditions. However, modeling non-linear physics, such as ocean dynamics or radiative transfer, often requires computationally expensive numerical simulations, prohibiting their use for iterative methods like the Metropolis-Hastings Markov Chain Monte Carlo (MCMC) algorithm. Efficient, accurate, and scalable synthetic data generation techniques are therefore critical for advancing Earth System Observation and Prediction capabilities. In this work, we explore the Variational Autoencoder (VAE) framework as a tool for hyperspectral data emulation, addressing both spectra-level and image-level complexities. Initially, we tested our method on Sentinel-3 Ocean Colour data, demonstrating its potential for generating synthetic data for both spectra and image-level emulation. Subsequently, we extended the approach to a simulated hyperspectral dataset, generated using PROSAIL and 6S radiative transfer models. 
For this dataset, we selected Sentinel-2 images containing at least 80% land pixels, ensuring a focus on terrestrial features, and used PROSAIL and 6S inversion to produce corresponding vegetation traits as trait maps. Our modular approach involves two steps. First, the VAE is trained to reconstruct hyperspectral data from a compressed latent representation, capturing key scene-level complexities. Second, an interpolator maps biophysical parameters, such as vegetation traits, to the latent space, enabling efficient and flexible emulation of hyperspectral data based on environmental inputs. Although results for image-level emulation on the simulated dataset are still pending, our method has already shown promise with Sentinel-3 data, outperforming traditional methods such as artificial neural networks (ANNs) and kernel ridge regression. It achieves a high coefficient of determination (R²) of 0.89 and operates 2.5 times faster than ANNs on GPUs. Future work will validate these methods on the simulated dataset and explore their applications to practical challenges, such as biophysical parameter retrieval. We also aim to investigate advanced architectures like Generative Adversarial Networks (GANs) to further enhance efficiency and accuracy.
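The two-step modular pipeline (an interpolator from biophysical parameters to latent space, then a decoder from latent space to spectra) can be sketched with toy linear maps; the weights below are random placeholders for the trained components, not the actual VAE:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for the two trained components (random weights here;
# in the study both are learned from hyperspectral data).
latent_dim, n_traits, n_bands = 4, 3, 50
W_interp = rng.normal(size=(n_traits, latent_dim))   # interpolator: traits -> latent
W_dec = rng.normal(size=(latent_dim, n_bands))       # decoder: latent -> spectrum

def emulate(traits):
    # Step 1: map biophysical parameters (e.g. vegetation traits) to latent space
    z = traits @ W_interp
    # Step 2: decode the latent vector into a synthetic hyperspectral spectrum
    return z @ W_dec

traits = rng.uniform(size=(10, n_traits))            # 10 hypothetical trait vectors
spectra = emulate(traits)                            # (10, n_bands)
```

The modularity is the design point: the expensive component (the decoder) is trained once, and new environmental inputs only require a cheap pass through the interpolator, which is what makes the emulator fast enough for iterative methods such as MCMC.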
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Comparative Analysis of StyleGAN2-ADA and Improved Diffusion Models for SAR Ocean Pattern Generation: A Spectral-Domain Validation Approach

Authors: Omid Ghozatlou, Mobina Keymasi, Prof. Silvia Liberata Ullo, Prof. Mihai
Affiliations: Ceospacetech
Introduction
Synthetic Aperture Radar (SAR) image generation has emerged as a critical field in Earth Observation, addressing the fundamental challenge of limited data availability for deep learning applications [1]. The generation of high-quality synthetic SAR data is particularly crucial due to the expensive and time-consuming nature of SAR data acquisition. While traditional approaches have relied on physical simulators, recent advances in generative AI, particularly Diffusion Models and GANs, have opened new possibilities for creating realistic SAR imagery [2].

Technical Challenges and Methodology
The unique characteristics of SAR imagery, including speckle noise and complex electromagnetic properties, present significant challenges for generative models. These challenges are further complicated by the need to maintain physical consistency and preserve the intricate patterns specific to different ocean phenomena, such as wind streaks, sea ice, and oceanic fronts. Recent research has demonstrated that Probabilistic Diffusion Models (PDMs) show particular promise in generating complex and realistic structures in radar-based satellite data, though sampling time remains a significant challenge [3].

Research Framework
This study presents a comprehensive comparison between StyleGAN2-ADA [4] and an improved PDM [5] for generating synthetic SAR ocean imagery across 10 distinct categories [6]. The research evaluates the performance of both generative models through quantitative metrics, including FID scores, and introduces a novel validation approach using Fast Fourier Transform (FFT) spectral analysis to assess the fidelity of generated patterns.

Results and Analysis
Visual inspection of the generated samples demonstrates distinct characteristics between the two approaches.

Model Performance
- The improved PDM excels in preserving fine-grained textural details and local patterns.
- StyleGAN2-ADA demonstrates robust performance in maintaining global structure consistency.
- Both methods successfully capture unique features specific to each ocean category.

Spectral Analysis
The FFT analysis reveals distinctive frequency distribution patterns across real and generated images. The spectral signatures provide quantitative validation of the generated patterns' authenticity, particularly in:
- Frequency component distribution
- Energy concentration patterns
- Structural similarity to real SAR phenomena

Implications and Contributions
This comparative framework establishes new benchmarks for evaluating synthetic SAR image generation methods, advancing data augmentation techniques in Earth Observation applications. The research provides crucial insights for selecting appropriate generative models based on specific requirements in SAR image synthesis and ocean pattern simulation, contributing to the broader field of remote sensing and maritime surveillance.

References
[1] Z. Huang, X. Zhang, Z. Tang, F. Xu, M. Datcu and J. Han, "Generative Artificial Intelligence Meets Synthetic Aperture Radar: A Survey," IEEE Geoscience and Remote Sensing Magazine, doi: 10.1109/MGRS.2024.3483459.
[2] O. Ghozatlou, M. Datcu and B. Chapron, "GAN-Generated Ocean SAR Vignettes Classification," IEEE Geoscience and Remote Sensing Letters, vol. 21, pp. 1-5, 2024, Art no. 4017405, doi: 10.1109/LGRS.2024.3466970.
[3] D. Qosja, S. Wagner and D. O'Hagan, "SAR Image Synthesis with Diffusion Models," 2024 IEEE Radar Conference (RadarConf24), Denver, CO, USA, 2024, pp. 1-6, doi: 10.1109/RadarConf2458775.2024.10549257.
[4] O. Ghozatlou, M. Datcu and B. Chapron, "GAN-Based Ocean Pattern SAR Image Augmentation," IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 2023, pp. 4056-4059, doi: 10.1109/IGARSS52108.2023.10283353.
[5] Tuel, A., Kerdreux, T., Hulbert, C. and Rouet-Leduc, B., 2023. Diffusion Models for Interferometric Satellite Aperture Radar. arXiv preprint arXiv:2308.16847.
[6] Wang, C., Mouche, A., Tandeo, P., Stopa, J., Longépé, N., Erhard, G., Foster, R., Vandemark, D. and Chapron, B. (2018). Labeled SAR imagery dataset of ten geophysical phenomena from Sentinel-1 wave mode (TenGeoP-SARwv). SEANOE. https://doi.org/10.17882/56796
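The FFT-based spectral validation can be illustrated with a radially averaged power spectrum, a common way to compare the frequency-energy distributions of real and generated images. The two test fields below are synthetic stand-ins, not SAR vignettes:

```python
import numpy as np

rng = np.random.default_rng(3)

def radial_power_spectrum(img):
    # 2D FFT -> power spectrum, averaged over rings of (near-)constant frequency
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    return sums / np.maximum(counts, 1)   # guard against empty rings

# Illustrative 64x64 "images": a smooth low-frequency field vs. white noise
x = np.linspace(0, 4 * np.pi, 64)
smooth = np.sin(x)[None, :] * np.cos(x)[:, None]
noise = rng.normal(size=(64, 64))

ps_smooth = radial_power_spectrum(smooth)
ps_noise = radial_power_spectrum(noise)
# A low-frequency-dominated field concentrates energy at small radii;
# white noise spreads it across all frequencies.
low_frac_smooth = ps_smooth[:8].sum() / ps_smooth.sum()
low_frac_noise = ps_noise[:8].sum() / ps_noise.sum()
```

Comparing such curves for real versus generated vignettes quantifies whether the generator reproduces not just the visual texture but the distribution of energy across spatial scales.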
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A generic and scalable neural scheme for data-driven reconstruction, prediction and uncertainty quantification in Climate Sciences

Authors: Maxime Beauchamp, Jacob L. Hoyer, Ioanna Karagali, Guisella Fabiola Gacitua Lovera, Suman Singha, Tore Wulf Hansen, Till Soya Rasmussen, Ronan Fablet
Affiliations: DMI, IMT Atlantique
In recent years, deep learning-based methods have become popular to address key challenges related to the reconstruction, prediction and uncertainty quantification of geophysical variables made available through large satellite-based datasets, from partial and potentially noisy observations. Among these new approaches, neural adaptations of traditional data assimilation schemes have been proposed and shown promising results, even outperforming state-of-the-art operational products in some cases. We present here new extensions of 4DVarNet, primarily developed on idealised Sea Surface Height (SSH) supervised experiments, in which the prior models and the solver are jointly learned to estimate the state of the dynamical system to be reconstructed. In these recent extensions, we embed the algorithm in a convenient end-user framework so that the model, originally developed on small domains, is now scalable to basin and global scales, making use of a patch-based strategy, potentially coupled with a physically sound multiscale decomposition. We also explore how the deterministic approach used in the core version of the method can be extended to short-term forecasting, and how a stochastic interpretation of the bi-level optimization scheme can be designed for efficient sampling from the posterior distribution. Demonstrators are provided for real-world applications of L4 products for global SSH, as well as regional SST on the North and Baltic Seas and sea ice concentration on the Pan-Arctic at high resolution.
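The patch-based strategy for scaling a small-domain model to large grids can be sketched as tiling, per-patch inference, and averaged blending of the overlaps. The identity "model" below is a placeholder for a trained 4DVarNet-style reconstructor:

```python
import numpy as np

def reconstruct_from_patches(field, patch, stride, model):
    # Apply `model` patch-wise and blend overlapping outputs by averaging,
    # a common trick for running small-domain models on basin/global grids.
    out = np.zeros_like(field, dtype=float)
    weight = np.zeros_like(field, dtype=float)
    h, w = field.shape
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            out[i:i + patch, j:j + patch] += model(field[i:i + patch, j:j + patch])
            weight[i:i + patch, j:j + patch] += 1.0
    # Note: if (h - patch) is not a multiple of stride, border rows/columns
    # are missed; real implementations pad the domain to avoid this.
    return out / np.maximum(weight, 1.0)

# Identity model as a stand-in; with full coverage the output equals the input
field = np.arange(64, dtype=float).reshape(8, 8)
recon = reconstruct_from_patches(field, patch=4, stride=2, model=lambda p: p)
```

Overlap-averaging suppresses seams at patch edges, which is why the stride is usually chosen smaller than the patch size.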
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Learnt high-resolution encoding for improving the training of low-resolution fluid flow and weather models

Authors: Constantin Le Cleï, Prof. Nils Thuerey, Dr. Zhitong Xiong, Prof. Xiaoxiang Zhu
Affiliations: TUM
Fluid flow and weather state forecasting have recently seen the advent of data-driven models, allowing for accurate long-term simulations at a fraction of the computational cost compared to physics-based solvers. Traditionally, the latter require a model of the subgrid forcing (either derived from physical principles or, more recently, learned from data), accounting for the unresolved grid-scale components. By contrast, pure machine-learning (ML) approaches are more generic, as they learn everything end-to-end, while being agnostic to any physics knowledge. In particular, ML-based methods excel for low-resolution forecasting tasks, which are crucial when data availability is limited, and where physical simulations typically struggle. For those tasks, it is however unclear whether higher-resolution information can also benefit the training of purely data-driven methods, as it does for physics-based solvers, or whether the former are able to learn the dynamics solely using low-resolution information. To answer that question, we equip the generic pipeline with two new components: a high-resolution encoder, which aims at finding the optimal forcing term for the low-resolution forecasting task, and a subgrid model, which learns to reproduce the encoded representation with access to low-resolution inputs exclusively. Both components are learned end-to-end along with the forecasting model to ensure a trade-off between the optimality and learnability of the encoded representation. We evaluate our method on diverse test cases, including 2D quasi-geostrophic equations and turbulence, and show that this approach can improve the long-term rollout accuracy and the physical groundedness of ML-based models while only requiring access to low-resolution data at inference time. 
We also demonstrate the benefits of the method compared to learnt data-assimilation-based approaches, where one first downscales the input to a higher resolution and then produces a forecast, which often comes at the cost of sacrificing high-frequency details. The results appear to be consistent across different ML-based surrogate models, such as U-Nets and diffusion models. Furthermore, the marginal improvement correlates with decreasing data resolution, i.e., our method appears to become more helpful as fewer grid scales are resolved.
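The target that the high-resolution encoder approximates can be written down in the classical subgrid-forcing form: the mismatch between coarse-graining the high-resolution step and stepping the coarse-grained state. A toy sketch, with block-averaging as the coarse-graining operator and a diffusion-like stand-in for the solver:

```python
import numpy as np

def coarsen(field, k):
    # Block-average coarse-graining from high to low resolution
    h, w = field.shape
    return field.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def step(field):
    # Toy "solver" step: simple diffusion via periodic neighbor averaging
    return 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                   + np.roll(field, 1, 1) + np.roll(field, -1, 1))

rng = np.random.default_rng(4)
hr = rng.normal(size=(16, 16))

# The "ideal" subgrid forcing: what the low-resolution step misses relative
# to coarse-graining the high-resolution step.
forcing = coarsen(step(hr), 4) - step(coarsen(hr, 4))
corrected = step(coarsen(hr, 4)) + forcing
```

In this formulation the corrected low-resolution step matches the coarse-grained high-resolution step by construction; the learning problem in the abstract is to predict such a forcing (or a learnable encoding of it) without access to the high-resolution state at inference time.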
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Artificial Intelligence for Earth Observation: A Machine Learning Approach for Weather Prediction on the European Weather Cloud

Authors: Dario Cappelli, Marco Di Giacomo, Lorenzo Giuliano Papale, Mario Papa, Ten.Col. Francesca Marcucci
Affiliations: Tor Vergata, University of Rome, GEO-K srl, Italian Air Force
This study presents an AI-based approach to forecast severe weather phenomena, focusing on thunderstorms (TH) and fog (FOG), by leveraging advanced Machine Learning (ML) techniques for the post-processing of Numerical Weather Prediction (NWP) data. The goal is to enhance early detection and improve aviation hazard forecasts. Conducted at the Italian Air Force Meteorological Centre and co-funded under the EUMETNET-SRNWP-EPS Project, this research integrates ML algorithms into a robust Decision Support System (DSS) designed for meteorologists. The methodology employs the XGBoost algorithm, a decision-tree-based ensemble method optimized for speed and accuracy. XGBoost is particularly suited for structured meteorological datasets, allowing for precise classification and regression tasks. Predictive features were extracted from NWP models, including COSMO-IT (2.2 km resolution) and ECMWF outputs, and co-located in space and time with target variables. For thunderstorms, the Rapidly Developing Thunderstorm (RDT) dataset served as the target, while the fog detection task relied on METAR observations detailing visibility and meteorological conditions. Input features ranged from traditional meteorological parameters such as temperature, pressure, and relative humidity to specialized indices like CAPE, SWEAT, and Helicity. Incremental XGBoost forms the cornerstone of this research, enabling the model to continuously adapt to new data without requiring complete retraining. This incremental approach, also known as online or streaming learning, processes smaller data subsets iteratively, ensuring that the model remains efficient and responsive to evolving weather patterns. By training the base model with historical data and subsequently updating it weekly with new information, the system captures seasonal variability and enhances its predictive accuracy. 
For instance, a 7-day incremental update strategy was shown to optimize performance for both TH and FOG scenarios, minimizing computational resources while maintaining robust forecasting capabilities. The workflow begins with the base model training using historical data spanning multiple months. For thunderstorms, the base model was trained on data from March to August 2023, tested on mid-September 2023, and incrementally updated with daily and weekly data until the end of the month. A similar incremental strategy was applied to fog predictions using datasets containing over one million rows, including 47,566 FOG events. The incremental process reduced training time exponentially compared to the traditional approach and demonstrated comparable or superior performance in test cases. This research also explores the application of ensemble techniques to enhance the robustness of predictions. By varying initial conditions within the COSMO-IT-EPS model, 20 sets of predictors were generated and fed into the XGBoost model, producing binary outputs for each pixel. The ensemble output, calculated as a probabilistic aggregation, further improved accuracy, particularly in fog prediction scenarios. These techniques demonstrate the model's versatility and capacity to adapt to complex meteorological challenges. Deployment of the DSS on the European Weather Cloud (EWC) exemplifies the integration of cloud-based computational resources in meteorology. The EWC provides scalable infrastructure for training and deploying ML models, leveraging GPU-accelerated RAPIDS libraries to streamline data processing. This setup enables the system to operate in near real-time for prediction and testing, while offline training phases ensure continuous improvement. Case studies illustrate the model's application and effectiveness. For thunderstorms, the system accurately identified convective cells with lead times sufficient for proactive risk mitigation, aligning well with radar observations. 
In fog scenarios, the model improved visibility forecasts, reducing operational disruptions at airports. Confusion matrices and comparative analyses highlight the advantages of the incremental approach, including reduced computational loads and enhanced adaptability to changing conditions. The incremental XGBoost approach offers significant benefits in scalability and resource efficiency. By storing only the trained model and small updates instead of the entire dataset, the system supports dynamic training without compromising performance. This capability is especially valuable for handling the increasing variability introduced by climate change, ensuring the model remains relevant in dynamic operational environments. Future steps include extending the methodology to larger datasets and additional weather phenomena, such as wind shear and heavy precipitation, to broaden the scope of the DSS. Further development of open-source tools and repositories will support reproducibility and collaboration within the meteorological community. Integrating high-resolution satellite imagery and radar data will enhance spatial and temporal forecasting capabilities, paving the way for more accurate and actionable predictions. Moreover, future research will also investigate the application of alternative, cutting-edge neural network architectures such as transformers, graph neural networks (GNNs), and spiking neural networks (SNNs). In conclusion, this study demonstrates the transformative potential of Machine Learning in meteorology, particularly through the integration of incremental XGBoost into operational forecasting workflows. By combining numerical modeling with advanced ML techniques, the research establishes a scalable, adaptive framework for severe weather prediction, addressing critical challenges in aviation hazard forecasting and laying the foundation for future innovations in meteorological science.
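The probabilistic ensemble aggregation described above reduces to averaging the members' binary maps per pixel; a minimal sketch with random stand-in member outputs (in the study these would come from XGBoost applied to the perturbed COSMO-IT-EPS predictors):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative stand-in: 20 ensemble members, each yielding a binary
# fog/no-fog decision per pixel of a small grid.
n_members, h, w = 20, 4, 6
member_maps = rng.integers(0, 2, size=(n_members, h, w))

# Probabilistic aggregation: fraction of members predicting the event
prob_map = member_maps.mean(axis=0)

# A decision threshold turns the probability back into a binary warning
warning = prob_map >= 0.5
```

The threshold is an operational tuning knob: lowering it trades more false alarms for fewer missed events, which matters when the cost of a missed fog episode at an airport is high.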
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Pan-European Multi-Seasonal Land Cover Mapping Model

Authors: Mr. Marco Galatola, PhD. Edoardo Arnaudo, Claudio Rossi
Affiliations: Links Foundation
Land cover mapping is the process of identifying and classifying the physical materials on Earth's surface, such as forests, grasslands, urban areas, and water bodies. It plays a crucial role in understanding environmental dynamics, managing natural resources, and guiding sustainable development. Accurate land cover data helps monitor biodiversity, assess climate change impacts, and support disaster management by providing insights into land use and habitat distribution over time. As environmental pressures grow, the demand for precise, up-to-date land cover data continues to increase, due to its pivotal role in promoting resilience and sustainability at local, regional, and global scales. Manually annotating land cover data is a time-consuming and labor-intensive process, often requiring extensive fieldwork and expert knowledge to accurately classify different land cover types. This method is prone to human error and is not scalable, particularly when dealing with large, complex regions or multi-temporal data. Additionally, it can be expensive and difficult to maintain consistent classification standards across different areas and time periods. The importance of an automatic model lies in its ability to process vast amounts of satellite or aerial imagery quickly and consistently, providing accurate, up-to-date land cover maps at a much larger scale. Automated models can integrate various data sources and adapt to changing environmental conditions, making them essential for real-time monitoring, long-term land management, and addressing the growing need for rapid, large-scale environmental assessments. Several existing European land cover datasets provide valuable resources for environmental monitoring and land management. One of the most widely used is the CORINE Land Cover (CLC) dataset, which offers detailed land cover classifications across Europe at a resolution of 100 meters. 
CLC is updated every six years and covers a wide range of land cover types, making it a fundamental tool for environmental policy and planning. Another important dataset is Urban Atlas, which provides high-resolution land cover data specifically for urban areas. With more reliable labels compared to CLC, Urban Atlas supports urban planning and development, though its coverage is limited to select regions across Europe. The LUCAS (Land Use/Cover Area Frame Survey) dataset, created by Eurostat, offers detailed in-situ data on land use and land cover, collected through field surveys across Europe. While LUCAS offers the most reliable labels, it is point-based and too sparse for training a segmentation model, making it more suited for validation rather than direct model training. Among the available datasets, Urban Atlas is better suited as the base for a land cover dataset. While its coverage is more limited compared to global datasets like CLC, its high spatial resolution and reliable labels make it the most suitable choice for mapping and analyzing land cover types. Additionally, Urban Atlas's focus on cities ensures that the data reflects the complex land cover variations typical of these environments, which are often challenging to capture in coarser, less detailed datasets like LUCAS. Based on this we developed a pan-European, multi-seasonal dataset consisting of Sentinel-2 tiles and Urban Atlas land cover annotations from 2018. Sentinel-2 was chosen for its accessibility and consistent presence over time, providing frequent and reliable imagery. The labels were further refined using High Resolution Layers (HRL), datasets provided by the Copernicus Land Monitoring Service that offer detailed land cover information at a high spatial resolution. These layers are designed to complement broader datasets like CLC and provide more precise information on specific land cover types, such as forests, grasslands, and impervious surfaces. 
To ensure the generalization of the models, the dataset was divided into training, validation, and test sets based on city boundaries. The Urban Atlas classes were combined into ten broader categories: roads, urban areas, bare vegetation, wetlands, water, grasslands, agricultural land, broadleaved forests, coniferous forests, and shrubs. By aggregating similar land cover types, we reduce the dataset's complexity, making it more manageable for model training and analysis. This approach allows the models to focus on the key classes while still capturing important spatial and seasonal variations. We trained and evaluated a benchmark of state-of-the-art deep learning models, including convolutional neural network-based architectures like UNet and HRNet, as well as transformer-based models such as SegFormer. To enhance performance and accelerate convergence, we utilized SeCo pre-training weights, which are specifically designed for remote sensing applications. These weights provide a strong initialization, enabling the models to better capture spatial and spectral features from satellite imagery. The results demonstrated that the deep learning models effectively captured the spatial and seasonal variations in land cover, achieving high accuracy across multiple regions and land cover types. The use of a multi-seasonal dataset proved beneficial in enhancing the model's robustness, particularly in distinguishing seasonal variations, and enabling mapping in any period of the year. Additionally, the city-based dataset split ensured the model's generalization, highlighting its ability to perform consistently across different urban and rural landscapes. Overall, the benchmark models showed strong potential for large-scale, automated land cover mapping in diverse European environments. This work was partially funded by the Horizon Europe projects FUTUREFOR (GA n.101180278) and RescueME (GA n.101094978).
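Aggregating fine-grained labels into broader categories is typically a lookup-table remap over the label raster; a sketch with hypothetical class codes (these are illustrative, not actual Urban Atlas IDs):

```python
import numpy as np

# Hypothetical mapping from fine-grained label codes to broader categories
remap = {11: 1, 12: 1,   # dense / discontinuous urban -> urban areas
         21: 2, 22: 2,   # arable land / permanent crops -> agricultural land
         31: 3,          # broadleaved forest
         0: 0}           # no-data stays no-data

# Build a lookup table so the remap is a single vectorized indexing operation
lut = np.zeros(max(remap) + 1, dtype=np.int64)
for src, dst in remap.items():
    lut[src] = dst

label_map = np.array([[11, 21],
                      [31, 12]])     # tiny illustrative label raster
merged = lut[label_map]              # vectorized lookup-table remapping
```

Indexing with a lookup table scales to full Sentinel-2 tiles far better than a per-pixel dictionary lookup, since the whole remap runs as one array operation.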
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Deep Learning models for short-term drought prediction in the Horn of Africa

Authors: Riccardo D'Ercole, Dr. Paolo Sanò, Dr Daniele Casella, Giulia Panegrossi
Affiliations: Consiglio Nazionale delle Ricerche - CNR-ISAC, University of Naples Federico II
The Horn of Africa has historically been vulnerable to droughts, with recent studies indicating that over the past two decades, the frequency, severity, and duration of these events have increased. Droughts have profound impacts, including livestock losses, crop failures, and food security crises. To address these challenges effectively, the integration of advanced remote sensing products with existing early warning systems is crucial, especially in regions with limited data availability. In particular, monitoring environmental hazards at sub-seasonal or sub-monthly temporal resolutions enables the prompt detection of sudden extreme events, which may not be captured adequately by coarser temporal resolution datasets. Despite this potential, high temporal resolution remote sensing instruments have been underutilized in drought monitoring. Unlike for rapidly occurring phenomena such as floods or wildfires, the benefits of frequent monitoring are less apparent for droughts. However, evidence suggests that droughts can escalate rapidly due to persistent atmospheric anomalies lasting weeks. Prolonged periods without precipitation, coupled with unusually high surface temperatures, strong winds, and clear skies, can accelerate evapotranspiration and lead to a rapid depletion of root-zone soil moisture. When such short-term water stress coincides with critical crop growing seasons, it can severely compromise agricultural yields. High temporal resolution data introduce substantial data volumes, posing significant processing and computational challenges. In this context, Deep Learning (DL) models are well-suited to exploit the dense information embedded in such datasets. Requena-Mesa et al. developed the EarthNet2021 dataset (https://doi.org/10.48550/arXiv.2104.10066), which uses daily meteorological outputs and static variables (e.g., Digital Elevation Models) to predict vegetation conditions derived from Sentinel-2 at a fine spatial resolution (20 m) every five days.
While this dataset is limited to Europe, it has enabled the development and testing of advanced spatiotemporal Deep Learning architectures for the task, exploiting deterministic models like the ConvLSTM or probabilistic ones like diffusion models. Efforts to monitor droughts at finer spatial resolutions mark significant progress in understanding the phenomenon. However, there remains unexploited potential in leveraging high temporal resolution instruments. Most of the existing studies rely on data from Low Earth Orbit (LEO) satellites, which generally offer higher spatial resolution at the cost of lower temporal revisit frequencies. Consequently, these studies seldom achieve daily or sub-daily monitoring. In contrast, research has demonstrated the advantages of deriving vegetation status datasets from high temporal resolution data, which can help mitigate cloud cover effects, ensure more consistent vegetation time series through stable viewing angles, and better capture short-term phenological variations. Such improvements can considerably enhance the detection of drought phenomena. To assess the utility of high temporal resolution datasets in the task of Earth surface forecasting, we use a vegetation status dataset (i.e. NDVI) derived from the geostationary satellite MSG SEVIRI (https://doi.org/10.1016/j.jag.2024.104264). We combine it with hydrological variables derived from the ERA5 reanalysis product and soil moisture from the Advanced Scatterometer (ASCAT) on board the series of Meteorological Operational (Metop) satellites. This approach creates a dataset with coarse spatial but high temporal resolution, specifically designed for predicting regional-level Earth surface dynamics in the Horn of Africa (Ethiopia, Somalia, Kenya, and Djibouti), offering robustness against cloud contamination.
After the creation of the dataset, we evaluate alternative data-driven methods for predicting vegetation status, testing the performance of state-of-the-art DL architectures over varying short-term temporal horizons (from 10 to 60 days). After using a ConvLSTM model as a baseline for the vegetation forecasting task, we frame the problem with a graph-based structure based on the WaveNet architecture, transforming single pixels into nodes and modeling the connections between them as edges. Additionally, we use a Diffusion model with a U-Net architecture as the denoiser and demonstrate its ability to forecast extreme dry events. Indeed, employing a probabilistic model for predicting vegetation status captures the uncertain evolution of vegetation well, and we demonstrate its robustness in the case of a historical drought in the area during the years 2020-2022. To further understand the Earth system dynamics, we apply explainability techniques to identify the relative importance of individual meteorological features in predicting vegetation dynamics. This comprehensive approach aims to improve the practical applications of drought monitoring in the region and to increase the timeliness of detection of agricultural drought phenomena.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Recent Advances in iota2: Enhancing Large-Scale Land Use Mapping with Cutting-Edge Tools and Techniques

Authors: Eugène Planteur, Arthur Vincent, Mickaël Savinaud
Affiliations: CS Group
Remote sensing is an ever-growing field, allowing for more and more accurate land cover products. Large-scale Earth Observation (EO) missions such as Sentinel and Landsat facilitate the automated generation of detailed land use maps over large areas of interest. For example, the OSO product is an annual land use map of France, which uses Sentinel-2 data. However, the sheer volume of data processed in such projects requires robust software capable of handling these demands. iota2, developed by CESBIO and CNES with support from CS Group, is an open-source processing chain designed for managing large collections of images to produce land use maps via machine learning. This is possible thanks to modern, high-performance tools such as OTB, dask, or PyTorch. iota2 integrates a diversity of widely used sensors, including SAR (Sentinel-1), optical (Sentinel-2, Landsat-5), and thermal (Landsat 8/9), and offers the possibility of using any other satellite imagery and of combining multiple sensors for a single classification. Another key feature is the validity analysis and processing of training data: if a pixel is considered invalid (e.g. the pixel is cloudy, flagged as no-data by the provider...), iota2 will instead interpolate its value using data from other dates, hence harmonizing and assuring consistency in the training dataset. iota2 is continuously evolving, incorporating new features, simplifying usage, and improving code quality. Initially designed for land cover mapping, it now also supports regression algorithms [1] to predict geo- or bio-physical variables. Recent developments also include the integration of thermal data from Landsat 8/9, which, when combined with Sentinel-2, enhances detection of specific land types such as urban areas. Another important aspect of iota2 is its modularity. It can easily be installed using conda and run with a simple configuration file.
Additionally, users can define custom spectral indices, either to improve classification or simply to produce maps of these indices. As explained before, users can also provide any kind of satellite imagery as input, even if it comes from a sensor not yet natively supported. It is also possible to train a model without iota2 and use iota2 only for the preprocessing of images and prediction. iota2 is developed and maintained by CS Group, while CESBIO contributes its domain expertise and specialized knowledge. 1: Rivas H., Touchais H., Thierion V., Millet J., Curtet L., Fauvel M. (2024). Nationwide operational mapping of grassland first mowing dates combining machine learning and Sentinel-2 time series, https://doi.org/10.1016/j.rse.2024.114476.
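The gap-filling behaviour described above (replacing invalid, e.g. cloudy, pixels with values interpolated from other acquisition dates) can be sketched as per-pixel linear interpolation over the time axis. This is a simplified illustration of the idea, not iota2's actual implementation:

```python
import numpy as np

def gapfill_pixel(values: np.ndarray, valid: np.ndarray) -> np.ndarray:
    """Fill invalid acquisitions (cloudy / no-data) of one pixel's time series
    by linear interpolation between the surrounding valid dates.
    Simplified sketch of iota2-style temporal gap-filling, not its real code."""
    dates = np.arange(len(values), dtype=float)
    if not valid.any():
        return values.copy()  # nothing to interpolate from
    # np.interp holds the first/last valid value flat beyond the series ends
    return np.interp(dates, dates[valid], values[valid])

series = np.array([0.2, np.nan, np.nan, 0.8, 0.6])  # NDVI-like series, 2 cloudy dates
filled = gapfill_pixel(series, ~np.isnan(series))
print(filled)  # cloudy dates replaced by interpolated values
```

Interpolating on a common set of dates like this is also what harmonizes series from different sensors onto one consistent temporal grid.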
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: C.01.08 - POSTER - Optical instrument concepts and technologies to enable new EO science missions

This theme aims to discuss new optical remote sensing techniques and instrument concepts, highlighting how this will expand our understanding of the earth, complement existing assets and improve the capability and performance of existing earth observation missions.
The session aims also at fostering discussions of potential future Earth Observation missions enabled by innovative optical payloads and technologies.

Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Can a Constellation of CubeSats using a Miniaturized Optical Ranging Instrument help reduce Temporal Aliasing in Gravity Field Recovery?

Authors: Matthew Darrow, Prof. Dr. Roland Pail, Dr. Thomas Gruber, Elisabeth Paul, Dr. Bastian Eder
Affiliations: Technical University Of Munich, MUnique Technologies GmbH
Due to temporal aliasing effects, where high frequency signals are incorrectly aliased into low-frequency solutions, gravimetry missions such as GRACE/GRACE-FO have struggled to observe the Earth’s gravity field below around 300 km spatial resolution for monthly solutions. This effect is most simply addressed by generating a larger number of gravity observations of low-low satellite-to-satellite tracking (LL-SST). Past, current, and future planned gravimetry missions have made use of large satellites with extremely sensitive and accurate instruments onboard in order to sense the gravity field through which they fly. These large satellites are limited in number by the cost of development and launch, since many parts and instruments must be specifically designed for the mission. Making use of smaller satellites in the CubeSat or microsatellite class can drastically reduce the per-satellite cost of development and therefore enable many more satellites to be sent into orbit. In the past this was only theoretical, because the micrometer-level ranging accuracy requirement had never been met on a CubeSat-sized platform, but this is changing with the development of the Dynamic Optical Ranging and Timing (DORT) system being designed by MUnique Technologies in Munich. The DORT instrument is largely fiber-based, allowing it to fit into a much smaller volume while still achieving sub-micrometer accuracy. In addition to the miniaturized ranging system, various other institutions are researching small accelerometers, which are also necessary for gravimetry. Although both of these technologies are still developmental at this stage, it is of interest to investigate what impact these instruments might have if they were to make CubeSat gravimetry possible. Environmental effects such as thermal stability and small vibrations must also be taken into account for the satellite design and are currently a limiting factor for fully achieving a 6U CubeSat body.
With the opportunity of CubeSat gravimetry within reach, the design of the satellite bus and mission has begun in order to see how such a mission might improve on results from the GRACE/-FO missions and reduce temporal aliasing errors. A tradeoff between instrument quality and the observation geometry, in the form of more observations at a larger number of orbital planes, is analyzed. Full-scale simulations using the open-source software GROOPS were performed to evaluate and optimize a constellation of up to four pairs of satellites, and the outputs were continuously compared against the mission objectives of future gravimetry missions. Preliminary results suggest that a larger constellation can improve short-term solutions of one day to one week, owing to the larger number of gravity measurements within the short time frame.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: An Overview of the Evolution and Future of Detector Technologies for Living Planet Missions and the Positive Impact of Institutional Development on the Market Dynamic for Commercial and Export Segments.

Authors: Jerome Pratlong
Affiliations: Teledyne-e2v
Living Planet missions give us the ‘big’ picture. A thorough understanding of the Earth’s system is critical for the future of our planet, especially to better predict environmental changes and mitigate the effects of global change. Imaging missions and their detectors have played a valuable role and will enable, thanks to new technologies, future, more demanding missions with more bands and even higher resolution. We already see a shift, with the commercial space sector playing a bigger role in Earth observation (EO) missions that offer higher-revisit solutions. To support the increasing complexity of EO missions, and in particular Earth Explorer missions and Earth Watch missions, new detector developments are required that will push the boundaries of current capabilities. New requirements show a need for more spectral bands, demanding larger detectors, to extract more precise information. At the same time the resolution needs to increase, leading to smaller pixel pitch designs. These new missions can be enhanced by complementary missions such as active satellites with onboard LiDAR technology (Aeolus is a successful example) or the lightning mapper technology which has been selected for NASA and NOAA’s GeoXO mission – the next generation of Earth observation satellites that will operate into the 2050s. The development of these new technologies is costly and long-term. It is therefore essential to maximise the use of these institutional developments in commercial and export programmes to ensure sustainability. Teledyne e2v has produced many detectors, from CCD to CMOS, for the visible spectrum, but also infrared detectors for more recent missions. In the presentation we will review existing missions like the Sentinels, EarthCARE, ATLID and their technologies. In a second part we will cover the technology evolution for the new-generation missions such as Sentinel NG, as well as current designs for the CO2M, CHIME, TRUTHS and FLEX missions.
Examples such as Aeolus and the lightning mapper will be used to demonstrate how technology can enhance Earth Watch. The third part of the presentation will address future technology needs for even more bands and higher resolution. Lastly, the market dynamics of the export and commercial segments, which take advantage of these institutional developments and investments, will be presented.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Exploring the Capability of Future SBG-TIR Mission for Volcanic Ash and SO2 Retrievals

Authors: Lorenzo Guerrieri, Stefano Corradini, Malvina Silvestri
Affiliations: INGV
The SBG-TIR (Surface Biology and Geology – Thermal InfraRed) mission is the result of a collaboration between NASA/JPL and ASI that will produce a new sensor scheduled for launch in 2027. The payload consists of a thermal sensor, with 8 channels in the 3.9-12.0 μm spectral range, and a visible camera with two channels positioned at 655 (red) and 835 (near infrared) nm. The TIR instrument will have a ground spatial resolution of about 60 m at nadir and a revisit time of about 3 days. Within the framework of the THERESA (THErmal infRarEd SBG Algorithms) project, the capability of the future SBG-TIR mission for volcanic ash and SO2 retrievals has been studied. Two different algorithms, both working in the TIR range, have been used for this purpose: the Look-Up-Table procedure (LUTp, Corradini et al., 2009) and Volcanic Plume Retrieval (VPR, Pugnaghi et al., 2016). Both procedures are able to retrieve the ash aerosol optical depth (AOD), the ash effective radius (Re) and the vertical column density of ash (VCDa) and SO2 (VCDs) inside a volcanic cloud. Both LUTp and VPR have so far been successfully applied to several TIR satellite sensors such as MODIS (on the Terra & Aqua platforms), SEVIRI (on Meteosat Second Generation) and SLSTR (on Sentinel-3). Using the MODTRAN 5.3 radiative transfer model, the behaviour of the six SBG-TIR channels (from 8.3 to 12.0 μm) has been simulated. As a test case, four synthetic images of Mt. Etna (Italy) have been produced, simulating summer/winter atmospheric conditions and pre/syn-eruptive scenarios. All the outputs of LUTp and VPR have been compared with the true simulated values to obtain relative errors, considering six different SBG-TIR band triplets (channels 1-4-5, 2-4-5, 1-4-6, 2-4-6, 1-5-6, 2-5-6). The results are encouraging and confirm the possibility of obtaining reliable quantitative estimates of volcanic ash and SO2 from the future SBG-TIR sensor. Bibliography: Corradini S., Merucci L., and Prata A.
J.: Retrieval of SO2 from thermal infrared satellite measurements: correction procedures for the effects of volcanic ash, Atmos. Meas. Tech., 2, 177–191, https://doi.org/10.5194/amt-2-177-2009, 2009. Pugnaghi S., Guerrieri L., Corradini S., Merucci L.: Real-time retrieval of volcanic cloud particles and SO2 by satellite using an improved simplified approach, Atmos. Meas. Tech., 9, 3053–3062, https://doi.org/10.5194/amt-9-3053-2016, 2016.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: A.04.03 - POSTER - Monitoring Greenhouse Gases from Space - Methods and Validation

Since the launch of SCIAMACHY on ENVISAT, the uncertainties surrounding the global distribution of potent Greenhouse Gases (GHGs) such as Methane (CH4) and Carbon dioxide (CO2) have been dramatically reduced. Yet, despite advances in technology in the decades since SCIAMACHY, with missions such as GOSAT and Sentinel-5P, comparisons of satellite observations with bottom-up inventories show that significant uncertainties still remain. Key to reducing these uncertainties are the validation networks providing confidence in the satellite retrievals, and the advancement of retrieval methods, including sophisticated use of machine learning and advanced uncertainty quantification methods.

This session is dedicated to presenting the current state of the art in methods and validation for the remote sensing of GHGs, such as but not limited to CH4 and CO2, including results from current missions and ground-based networks such as Sentinel-5P, GOSAT-1/2, PRISMA, EnMAP and the TCCON and COCCON networks. Presentations of advanced remote sensing techniques and methods, leveraging open science and machine learning, are strongly encouraged in this session.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: A Satellite-Based Comparison of Air Pollution in Romania (2020 vs 2024)

Authors: Irina Diaconu
Affiliations: GISBOX
Air quality is a critical concern across the entire planet, particularly in urban areas where pollution levels often exceed safe thresholds. Space-borne monitoring instruments, such as those from the European Space Agency (ESA), have been instrumental in tracking and assessing key indicators of air pollutants. Among these, the Sentinel-5P mission, which employs the TROPOMI (TROPOspheric Monitoring Instrument), provides detailed, high-resolution data on pollutants like carbon monoxide (CO), nitrogen dioxide (NO2), sulfur dioxide (SO2), and ozone (O3). Romanian cities, as reported by the European Environment Agency (EEA), frequently rank low in terms of air quality, reflecting a need for more in-depth and systematic air pollution monitoring. This study aims to assess the air quality in Romania by comparing pollutant data recorded by Sentinel-5P for two distinct reference years: 2020, when global lockdowns due to the COVID-19 pandemic significantly slowed economic activities, and 2024, a year characterized by typical economic and social conditions. The data collection will focus on the first six months of each year, capturing pollution levels during both colder months, when emissions from heating tend to be higher, and warmer months, when photochemical reactions often increase ozone concentrations. By analyzing the variations between these two years, the study aims to identify whether the temporary reduction in industrial and vehicular activities in 2020 had a lasting impact on air pollution levels or if pollution returned to pre-pandemic levels by 2024. To ensure a comprehensive understanding of air pollution distribution and its influencing factors, the satellite data from Sentinel-5P will be correlated with other environmental and demographic variables, including vegetation cover, topography (through digital elevation models), and population density statistics.
This holistic approach allows for a more nuanced analysis of how different factors—such as human activities, natural landscapes, and population clusters—contribute to or mitigate the spread of pollutants. The study will employ spatial and statistical analysis techniques to compare pollutant concentrations across Romania for the two selected periods. A key element of the research is determining whether there are significant differences between the first half of 2020, a period marked by unprecedented reductions in traffic and industrial emissions due to lockdown measures, and the corresponding period in 2024, which represents a "normal" year. This comparison could reveal the extent to which air quality improved temporarily due to human restrictions and whether the effects of such an event can inform future air quality management policies. The outcomes of this study are expected to provide critical insights into the relationship between human activities and air pollution, particularly in urban environments. The findings will contribute to a better understanding of the dynamic nature of air quality in Romania and will inform local and national authorities about potential strategies to improve air quality in the future. Additionally, this research could serve as a model for similar comparative studies in other European regions, helping to develop more effective, evidence-based approaches to pollution mitigation.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Assessment of methane retrieval algorithms for EnMAP shortwave infrared observations

Authors: Juan Bettinelli
Affiliations: Deutsche Zentrum Für Luft- Und Raumfahrt E. V. (DLR)
The Environmental Mapping and Analysis Program (EnMAP) satellite provides hyperspectral measurements with high spatial resolution. Despite its rather low spectral resolution for atmospheric remote sensing in the shortwave infrared (SWIR) range, it is possible to detect local methane (CH4) enhancements from strong point-like sources such as coal mines or landfills. In this study, data-driven models (using fast linear least squares) and physics-based retrievals (using iterative estimation schemes) are applied to EnMAP measurements of identified CH4 point sources. The methods include Generalized Least Squares, Separable Least Squares, Nonlinear Least Squares, Matched Filters, and Singular Value Decomposition. The results of the different retrieval schemes are assessed with regard to their uncertainties and statistical significance. Particular attention is given to addressing degeneracies between surface reflectivity and molecular absorption features caused by the sensor's low spectral resolution. Retrieval algorithms are tested under diverse environmental backgrounds with both homogeneous and heterogeneous spectral reflectivity. Retrieval errors are quantified and evaluated using various statistical methods. Well-established approaches [Hochstaffl et al., "Methane retrievals from airborne HySpex observations in the short wave infrared"], successfully applied to other platforms such as airborne HySpex measurements, are adopted for this analysis.
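Among the methods listed above, the classical matched filter has a particularly compact form: estimate the background mean and covariance from the scene itself, then project each mean-subtracted pixel spectrum onto the target absorption signature. The signature and scene below are synthetic placeholders (not EnMAP data or this study's configuration); a minimal sketch:

```python
import numpy as np

def matched_filter(radiances: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Classical matched filter: per-pixel enhancement estimate
    alpha = (x - mu)^T C^-1 t / (t^T C^-1 t),
    with background mean mu and covariance C estimated from the scene."""
    mu = radiances.mean(axis=0)
    diff = radiances - mu
    C = np.cov(diff, rowvar=False) + 1e-8 * np.eye(diff.shape[1])  # regularize
    Cinv_t = np.linalg.solve(C, target)
    return diff @ Cinv_t / (target @ Cinv_t)

rng = np.random.default_rng(0)
scene = rng.normal(1.0, 0.01, size=(500, 4))  # 500 pixels, 4 synthetic SWIR channels
t = np.array([-0.02, -0.05, -0.01, 0.0])      # assumed absorption signature (negative: absorption)
scene[0] += 2.0 * t                           # inject an enhancement of strength 2 at one pixel
alpha = matched_filter(scene, t)              # alpha[0] recovers roughly that strength
```

The same scene-derived covariance is what makes the filter sensitive to the surface-reflectivity degeneracies the abstract mentions: heterogeneous backgrounds inflate C along directions that overlap the absorption signature.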
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: The Use of Machine Learning for Greenhouse Gases Concentration Retrievals from Space: First Results and Analysis

Authors: Laure Corazza, Lloyd Mace, Virgine Capelle, Nicolas Meilhac, Raymond Armante, Cyril Crevoisier
Affiliations: Laboratoire de météorologie dynamique (LMD/IPSL), École polytechnique, FX-Conseil, École Polytechnique
Following the Paris Agreement, there has been an increasing need to better understand the global sources and sinks of greenhouse gases. Previous space missions such as GOSAT-1/2, OCO-2/3 or TROPOMI/Sentinel-5P focus specifically on point-based observation of the solar radiation reflected by Earth’s surface. These measurements, with footprints of several kilometers, make it possible to estimate column averages of CO₂ and CH₄ through the inversion of the radiative transfer equation. This discrete imaging method, however, has the disadvantage of limited spatial coverage and resolution, which means that the satellite can miss important emission zones such as large cities or power plants. This is why future Earth observation missions based on spectro-imagers are of particular interest: continuous measurements will make it possible to cover large areas of several dozens of kilometers consistently and on a regular basis. On the other hand, analysing such images with a point-by-point method is doomed to be computationally expensive, and there is a need to develop a more efficient way to process this upcoming large amount of information on our atmosphere. The use of machine learning presents an interesting solution to this problem: by training neural networks with known atmospheric datasets, the machine learning model can learn to quickly analyse the measurements and extract the column averages of the gases from the radiance spectra, thus bypassing the need for the expensive radiative transfer inversion. Such approaches have been successfully applied to the processing of observations in the thermal infrared to retrieve weighted columns of CO₂ and CH₄ (e.g. from the IASI instrument).
The objective of this poster is therefore to present the first results of a new machine learning retrieval method - based on the non-linear inference scheme (NLIS) algorithm that relies on the radiative transfer code 4A/OP and the spectroscopic database GEISA - to retrieve CO₂ and CH₄ column averages, using as targets the upcoming missions GOSAT-GW by JAXA, CO2M by ESA and GESat by Absolute Sensing. These satellites aim to image the natural and anthropogenic emissions of the two main greenhouse gases using different spatial and spectral resolutions, target precisions and swath widths. Comparing the machine learning retrievals across these different missions will make it possible to perform the analysis on a variety of data types and to better understand the ability of neural networks to quickly analyse large datasets from spectro-imagers. The 4A/OP radiative transfer model will be used to generate synthetic measurements with the mission specifications. Overall, this will help to better prepare for the data analysis segment of these missions, considering that GOSAT-GW, CO2M and GESat are all planned to launch in 2024-2025.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Validation of GHG weighted columns using vertical profiles measured by balloon-borne AirCore air sampler

Authors: Cyril Crevoisier, Jérôme Pernin, Axel Guedj, Thomas Ponthieu, Lilian Joly, Nicolas Dumélié, Michel Ramonet, Morgan Lopez, Serge Cruzel
Affiliations: CNRS/LMD, GSMA/URCA, LSCE/IPSL, CNES
In the framework of the MAGIC initiative led by CNRS and CNES, regular measurements of atmospheric concentration profiles of CO2, CH4, CO, temperature, relative humidity and winds are made using balloon-borne AirCore instruments and their associated meteorological soundings at 3 stations in France. These stations (Aire-sur-l’Adour, Moulin de la Housse, and Trainou) form the French AirCore network. As of 3 December 2024, 324 flights have been operated, leading to an extensive dataset spanning 2014-2024. Since 2019, the average number of flights per year is about 40. Among them, depending on weather conditions, three quarters of the flights have been organized at the overpass time of a satellite providing weighted columns of CO2 or CH4: OCO-2, TROPOMI/Sentinel-5P, GOSAT-1/2 and IASI-A/B/C. In addition, a dozen flights per year have been organized in the context of annual MAGIC field campaigns in France, Canada and Sweden at the overpass times of the satellites. The availability of measurements between 0 and 25 km provides a unique way to validate various weighted columns of GHG. To that end, extensive work has been done during the first three annual MAGIC field campaigns to validate retrieved AirCore profiles in both the troposphere (using simultaneous in-situ measurements with research aircraft) and the stratosphere (taking advantage of the possibility to fly several instruments on board large stratospheric balloons from CNES). In this talk, we will present the status of the French AirCore program and will: - present the latest 2024 release of the whole AirCore-Fr dataset and analyze the 8 years of variation of atmospheric variables measured by AirCore. - document the precision achieved with AirCore in terms of total and mid-tropospheric column computation for validation of weighted columns derived from space observation.
- introduce a new methodology for collocating AirCore profiles and Level 2 products derived from space observation, relying on the identification of common air masses to select the columns that can actually be compared to profiles. - highlight the crucial need to get a proper measurement of the stratospheric part of the profile in order to validate total or partial column of GHG at the precision level needed for space-borne validation. - present the preparation work for installing new AirCore launch sites in the equatorial and tropical region.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: An AI-Based Convolution-Attention Model for Accurate Methane Emission Detection Using Sentinel-2 Imagery

Authors: Ali Radman, Masoud Mahdianpari, Fariba Mohammadimanesh
Affiliations: Memorial University of Newfoundland, C-CORE, Canada Centre for Remote Sensing, Natural Resources Canada
Methane is a potent greenhouse gas with a significant global warming potential, making its monitoring essential for mitigating climate change. A substantial portion of methane emissions arises from anthropogenic activities, often leading to concentrated point source emissions that require precise detection. Satellite-based remote sensing, particularly with sensors equipped with shortwave infrared (SWIR) bands, has proven effective in observing these emissions. Sentinel-2, with its multispectral capabilities, offers a valuable platform for methane monitoring due to its balanced spatial and temporal resolution. However, detecting methane plumes in satellite imagery presents challenges, including surface heterogeneity, atmospheric variability, and high levels of background noise. To address these challenges, we developed a hybrid convolutional-attention U-Net model that builds on the proven U-Net architecture, integrating advanced multi-scale spectral-spatial attention mechanisms. This combination enables the model to extract local and global features while capturing long-range dependencies crucial for methane plume detection. The proposed model incorporates multi-scale convolutional (MSC) modules to capture features at different resolutions, channel attention (CA) modules to prioritize relevant spectral information, and spatial attention (SA) modules to focus on important spatial regions. These components enhance the model’s sensitivity to methane plume characteristics, improving its ability to distinguish true methane emissions from noise and artifacts. Additionally, Grouped Attention Gates (GAG) replace traditional skip connections, facilitating better feature fusion between encoder and decoder layers, which is essential for handling complex environmental conditions and background heterogeneity. 
The dataset used for training and evaluation comprised 64,432 samples of 2×2 km areas with 20 m grid resolution, equally divided into positive (plume) and negative (non-plume) samples. Positive samples were created by simulating methane plumes with the WRF-LES model, covering emission rates from 600 to 30,000 kg/h under varying wind speeds (1–10 m/s). The plumes were overlaid onto real Sentinel-2 background images, while negative samples consisted of unaltered backgrounds collected from diverse global locations, primarily methane hotspots. Each image in the dataset included three input bands: methane concentration (retrieved using the multi-band multi-pass (MBMP) technique), Albedo, and Albedo difference (calculated as the difference between the main and reference images in MBMP). This combination of layers provides additional context, aiding the model in differentiating true plumes from background noise and artifacts. The proposed model achieved strong results in methane plume detection. For pixel-level segmentation, it recorded an F1-score of 93.78% and an Intersection over Union (IoU) of 88.29%, demonstrating its ability to accurately capture plume shapes and extents. At the scene level, the model achieved an F1-score of 96.97%, effectively classifying patches with methane emissions while minimizing false positives. These results reflect the advantages of the model’s attention-based architecture in handling complex environmental conditions. Comparatively, prior studies testing U-Net models on Sentinel-2 imagery for methane detection reported significantly lower performance, highlighting the substantial improvements brought by the proposed approach. To further assess its practical applicability, the model was tested on real-world Sentinel-2 imagery from methane emission hotspots in Turkmenistan and Algeria. 
The model successfully detected methane emissions in challenging scenarios with substantial background noise, distinguishing plumes from non-methane features such as water bodies or industrial artifacts. This validation underscores the model’s robustness and potential for operational deployment in global methane monitoring initiatives. With its ability to automate methane plume detection at scale, the proposed model supports efforts to mitigate greenhouse gas emissions and combat climate change.
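The reported pixel-level scores follow the standard definitions; a minimal sketch of computing F1 and IoU from binary plume masks (helper name is ours):

```python
import numpy as np

def f1_and_iou(pred, truth):
    """Pixel-level F1 and IoU for binary plume masks (1 = plume)."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return f1, iou

pred = np.array([[1, 0], [0, 0]])
truth = np.array([[1, 1], [0, 0]])
f1, iou = f1_and_iou(pred, truth)   # tp=1, fp=0, fn=1
print(round(f1, 3), iou)             # 0.667 0.5
```

Note that F1 = 2·IoU/(1+IoU), so the reported pair (93.78 % F1, 88.29 % IoU) is internally consistent.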

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: CarbonSense V2: Expanding the Dataset for Data-Driven Carbon Flux Modelling

Authors: Matthew Fortier, Dr Oliver Sonnentag, Dr Chris
Affiliations: Universite de Montreal
In this work, we introduce CarbonSense V2, an expanded multimodal dataset designed for data-driven carbon flux modeling. The original CarbonSense dataset combined eddy covariance data with satellite imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) across 385 field stations, formatted specifically for machine learning applications. Along with this dataset, we released a multimodal dataloader and two benchmark models for site-level carbon flux modeling: an XGBoost baseline and a novel deep learning model, EcoPerceiver, which achieved state-of-the-art performance in net ecosystem exchange (NEE) prediction across diverse ecosystems. CarbonSense V2 builds on this foundation by enhancing the data available for each site. Notably, we integrate methane data from the FLUXNET CH4 release, extending coverage to all sites in the dataset that report methane emissions. We also incorporate imagery from digital repeat photography of vegetation (‘phenocams’) at over 100 sites, providing images at 6-hour intervals. This high-resolution imagery allows advanced multimodal models to leverage computer vision techniques for better information synthesis. Additionally, V2 includes data at daily and monthly aggregations alongside the original half-hourly measurements, offering greater flexibility for model training. Accompanying CarbonSense V2 is an updated dataloader, designed to simplify integration with a variety of machine learning models. This dataloader allows for input and output variable subset selection, supports time-series modelling, and performs customizable on-the-fly data normalization. We provide benchmarks on the expanded dataset using both the original XGBoost model and an updated version of EcoPerceiver. We are releasing the data processing pipeline for CarbonSense V2 under the CC-BY-4.0 open license.
This will not only allow researchers to refine and customize the data processing, but also allow for new data integration from both individual contributors and larger network releases (e.g. AmeriFlux or Integrated Carbon Observation System releases). We hope that V2 becomes a living dataset that grows as the community continues to publish data from eddy covariance sites around the world, eventually complemented by plot-scale flux measurements made with chamber techniques. The purpose of this work is to lower the barrier of entry for deep learning researchers who wish to get involved with applications in climate change. By providing a machine learning-ready dataset and benchmarks, researchers can focus on building novel neural architectures which can better leverage the wealth of available data for data-driven carbon flux modeling, without having to navigate the collection, curation, and preprocessing of biophysical and geospatial data.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Machine Learning Models for Multi-sensor Detection of Methane Leaks in Hyperspectral Data

Authors: Vit Ruzicka, Gonzalo Mateo García, Anna Vaughan, Dr. Itziar Irakulis-Loitxate, Manuel Montesino San Martin, Claudio Cifarelli, Luis Guanter
Affiliations: University Of Oxford, United Nations, UNEP - International Methane Emissions Observatory, University of Cambridge, Research Institute of Water and Environmental Engineering (IIAMA), Universitat Politècnica de València, Environmental Defense Fund
We develop machine learning models for the semantic segmentation of methane leak events in hyperspectral data from a variety of sensors, and demonstrate the generalisation ability of models trained on data from one sensor when applied to data from other sensors. This is particularly relevant when different instruments capture unbalanced numbers of these rare events. Methane is the number one target for reducing greenhouse gases in the atmosphere, as it is roughly 84 times more potent than carbon dioxide. Fast, automated detection of methane leak events remains a challenge because the usual products require manual verification; automating this detection with machine learning models will accelerate the processing of newly captured data. Space-deployed hyperspectral sensors provide an excellent opportunity for global monitoring of the Earth and can detect methane leak events with higher sensitivity to low-emission sources than multispectral sensors. Creating high-quality labelled datasets for training machine learning models is a manual and slow process, and datasets created from different sensors contain different numbers of captured events. We therefore explore training machine learning models on data from sensors with many recorded events in order to detect these rare events in data from other sensors. We base our model on the architecture of the recently published HyperSTARCOP model and train it on a newly created dataset of high-quality labelled data from NASA's EMIT sensor. We then explore the generalisation ability of these models on data from other sensors, such as PRISMA and EnMAP, observing good zero-shot detection capabilities, and explore model fine-tuning. We use the data annotation provided by the public Methane Alert and Response System (MARS) portal of the United Nations UNEP IMEO team; this dataset contains high-quality labels made by experts in the field.
We aim to provide the processed data in a machine-learning-ready format alongside our codebase for the broader machine learning community. Additionally, we explore three different variants of matched filter products, demonstrating improved performance when using the recently proposed wide matched filter (WMF). In our preliminary results, we observe successful zero-shot generalisation when leveraging matched filter products computed from different source hyperspectral data. By computing the same or similar products from various source sensors, we can use our models with the same input/output modalities. The models we develop and train will be used by the MARS team to speed up the detection of new methane leak events in the daily processed hyperspectral data; multi-sensor approaches allow us to detect these rare events regardless of the source sensor. Detections from our system will trigger automatic alerts in the PlumeViewer portal used by the MARS team experts to find and annotate methane leak events.
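Matched filter products of the kind mentioned above follow the classical per-pixel score alpha = tᵀC⁻¹(x − μ) / (tᵀC⁻¹t), with background mean μ and covariance C estimated from the scene; a self-contained sketch on synthetic spectra (illustrative only, not the WMF variant):

```python
import numpy as np

def matched_filter(pixels, target):
    """Classical matched filter: per-pixel enhancement score along `target`.
    pixels: (N, B) spectra; target: (B,) unit methane absorption signature."""
    mu = pixels.mean(axis=0)
    C = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])  # regularised
    cinv_t = np.linalg.solve(C, target)
    return (pixels - mu) @ cinv_t / (target @ cinv_t)

rng = np.random.default_rng(0)
scene = rng.normal(size=(500, 6))                 # 500 background spectra, 6 bands
t = np.array([1.0, -0.5, 0.3, -0.2, 0.1, -0.4])   # toy CH4 signature (invented)
scene[0] += 6.0 * t                               # inject one plume pixel
scores = matched_filter(scene, t)
print(scores.argmax())                            # index of the injected plume pixel
```

The whitening by C⁻¹ is what suppresses correlated background clutter; the WMF and other variants differ mainly in how μ and C are estimated.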

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Five Years of GOSAT-2 Retrievals with REMOTEC: XCO2 and XCH4 Data Products With Quality Filtering by Machine Learning

Authors: Andrew Barr, Dr Mari Martinez-Velarte, Dr Jochen Landgraf, Dr Tobias Borsdorff
Affiliations: SRON Netherlands Institute For Space Research
Measuring concentrations of greenhouse gases to an accuracy at which regional sources and sinks can be identified is fundamental to monitoring and mitigating the impact of such gases on the Earth’s changing climate. Here we present the scientific XCO2 and XCH4 data products, retrieved with RemoTeC from the Greenhouse Gases Observing Satellite-2 (GOSAT-2), which has the capability to measure total columns of CO2 and CH4 to the necessary requirements. Central to the generation of XCO2 and XCH4 is the post-retrieval quality flagging step. Previous versions of RemoTeC products relied on threshold filtering, flagging data using boundary conditions on a list of retrieval parameters. Here we implement a new quality filtering approach based on random forest classifier (RFC) models, applied to GOSAT-2 as part of the European Space Agency (ESA) Climate Change Initiative+ (CCI+) programme. The Total Carbon Column Observing Network (TCCON) is a ground-based network of Fourier transform spectrometers that measure telluric absorption spectra at infrared wavelengths, and is the standard used by the scientific community to validate satellite measurements of XCO2 and XCH4. Low systematic biases are essential for extracting meaningful fluxes from satellite data products, and requirements have been set by the Global Climate Observing System (GCOS). Through TCCON validation we find that all data products are within the breakthrough requirements, with good RMSE for XCH4 (<15 ppb) and XCO2 (<2 ppm). We derive station-to-station biases of 4.2 ppb and 0.5 ppm for XCH4 and XCO2 respectively, and linear drifts of 0.6 ppb yr−1 and 0.2 ppm yr−1 for XCH4 and XCO2 respectively. Data from TCCON are also used to train the classification models, where good- and bad-quality retrievals are classified via the bias of GOSAT-2 with respect to TCCON.
We find that quality filtering with the machine learning approach simultaneously increases data yield by up to 85 % and improves RMSE by up to 30 % compared with the old threshold filtering. Through inter-comparison with the TROPOspheric Monitoring Instrument (TROPOMI), we show that the RFC models generalise well to the global ensemble. For XCH4, GOSAT-2 and TROPOMI are highly correlated, with standard deviations of less than 18 ppb and globally averaged biases close to 0 ppb. The inter-satellite bias between GOSAT and GOSAT-2 is significant, with an average global bias of -15 ppb. This is comparable to the bias seen between GOSAT and TROPOMI, consistent with our findings that GOSAT-2 and TROPOMI are in close agreement.
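The validation statistics quoted above (mean bias, RMSE, and linear drift against a reference such as TCCON) can be computed as in this sketch on synthetic data (function name and numbers are ours):

```python
import numpy as np

def validation_stats(t_years, sat, ref):
    """Mean bias, RMSE, and linear drift of satellite-minus-reference values."""
    diff = sat - ref
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    drift_per_year = np.polyfit(t_years, diff, 1)[0]   # slope of the residuals
    return bias, rmse, drift_per_year

# synthetic XCH4 example: constant 4 ppb offset plus 0.6 ppb/yr drift
t = np.linspace(0.0, 5.0, 60)            # time in years
ref = np.full_like(t, 1900.0)            # reference XCH4, ppb
sat = ref + 4.0 + 0.6 * t                # satellite XCH4, ppb
bias, rmse, drift = validation_stats(t, sat, ref)
print(round(drift, 2))                    # 0.6
```

Station-to-station bias is then typically the standard deviation of per-station mean biases computed this way.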

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Comparison Between Full Physics and Machine Learning XCO2 Retrievals Over Large Plumes

Authors: Cédric Bacour, François-Marie Bréon, Denis Jouglet, Pierre Lafrique, Imane Souffer, Frédéric Chevallier
Affiliations: LSCE / CEA, CNES, Capgemini Technology Services
The French MicroCarb mission will be launched in June 2025 to provide scientific information on the global carbon cycle. The instrument is designed to provide estimates of the column-averaged dry-air mole fraction of atmospheric carbon dioxide (XCO2). MicroCarb is built around a high-spectral-resolution infrared grating spectrometer aboard a microsatellite with four spectral bands: three bands (an O2 band at 0.76 μm and weak and strong CO2 absorption bands at 1.60 μm and 2.04 μm) similar to those of the OCO-2 (NASA) instrument, plus an additional band around 1.27 µm centred on an O2 absorption band. The primary acquisition modes, nadir over land and glint over oceans, will provide a spatial resolution of approximately 5 × 8 km². Additional acquisition modes are planned to target specific regions of interest. The operational processing chain to retrieve XCO2 from the reflected radiance measurements will rely on inversion of a full-physics radiative transfer model (4A-OP) based on optimal estimation, using the 4ARTIC software (prototyped and operated by CNES, developed by Thalès Service), which accounts explicitly for most absorption and scattering processes in the atmosphere. In parallel, we have developed a computationally efficient AI-based retrieval method using neural networks, trained on actual L1 radiance spectra and XCO2 products from the CAMS (Copernicus Atmosphere Monitoring Service) global atmospheric inversions. In this study, we assess the performance of these two approaches applied to OCO-2 observations over pre-identified carbon dioxide plumes resulting from strong localized emissions. Their retrieval accuracy is evaluated against the official ACOS product.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: DEVELOPING A NETWORK OF SITES FOR GHG MEASUREMENTS AND SATELLITE VALIDATIONS IN THE PO VALLEY

Authors: Andrè Achilli, Paolo Pettinari, PhD Elisa Castelli, Enzo Papandrea, Angela Marinoni, Francescopiero Calzolari, Claudio Campenni
Affiliations: CNR-ISAC, University of Bologna - DIFA
Monitoring greenhouse gases (GHGs) like methane and carbon dioxide is crucial for understanding their role in climate change and guiding mitigation strategies. Satellites such as Sentinel-5P and GOSAT1/2, along with ground-based networks like TCCON and COCCON, play a key role in delivering valuable data on GHG concentrations. To support the validation of upcoming missions like CO2M and Sentinel-5, and to improve the precision of GHG measurements, CNR-ISAC has acquired new ground-based remote sensing instruments. These additions aim to enhance the integration of satellite and ground-based datasets, ensuring reliable and accurate information for addressing global climate challenges. In the frame of the Earth Moon Mars (EMM) project, CNR-ISAC has set up a new EM27/SUN Fourier transform spectrometer (FTS) in Bologna, developed by Bruker and compliant with the COCCON network. It measures direct-sun spectra in the NIR range and, using the PROFFAST software developed by KIT, we can retrieve the dry-air mole fractions (Xgas) of CO2, CO, CH4, and H2O. We have also expanded our current network of SkySpec-2D instruments (MAX-DOAS systems developed by Airyx) with two additional FRM4DOAS-compliant instruments. One has been installed at the “Ottavio Vittori” climatic observatory on top of Monte Cimone (2165 m above sea level), while the other is located on the rooftop of the CNR-ISAC building in Bologna near the EM27/SUN. These instruments, together with the SkySpec-2D already present at San Pietro Capofiume (SPC), will help provide insightful information about GHG and pollutant gas accumulation in the Po Valley, while also offering sites with different environmental backgrounds for future satellite validations. Here we compare, for the EM27/SUN, the retrieved XCH4 and XCO against TROPOMI, and XCH4 and XCO2 against GOSAT1/2. For the SkySpec-2D, we compare the retrieved NO2 and HCHO vertical column densities against TROPOMI.
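The dry-air mole fractions (Xgas) mentioned above are conventionally obtained by ratioing the retrieved gas column to a dry-air column inferred from the co-retrieved O2 column; a sketch with illustrative numbers (not PROFFAST code):

```python
O2_DRY_AIR_MOLE_FRACTION = 0.2095   # mole fraction of O2 in dry air

def x_gas(column_gas, column_o2):
    """Column-averaged dry-air mole fraction: Xgas = col_gas / col_dry_air,
    with the dry-air column inferred from the co-retrieved O2 column."""
    column_dry_air = column_o2 / O2_DRY_AIR_MOLE_FRACTION
    return column_gas / column_dry_air

# illustrative columns in molecules/cm^2
col_air = 2.1e25
col_o2 = O2_DRY_AIR_MOLE_FRACTION * col_air
col_co2 = 420e-6 * col_air                 # i.e. a 420 ppm scene
print(round(x_gas(col_co2, col_o2) * 1e6, 6))   # 420.0 (ppm)
```

Using the O2 ratio cancels common-mode errors (e.g. pointing and surface-pressure effects) that affect both columns alike, which is why the networks adopt it.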
The results, together with the quality assurance provided by the COCCON and FRM4DOAS networks, demonstrate that the systems are suitable for satellite validations. Particular importance is given to the next planned missions, Sentinel-5 and CO2M, which will help monitor CO2, CH4, CO, NO2, HCHO, O3, and H2O. Another instrument that will join our facilities in the future is the IFS 125HR. This is a high-resolution FTS, and it will be equipped to register spectra compliant with the NDACC and TCCON networks. It will be acquired in the frame of the ITalian INtegrated Environmental Research Infrastructures System (ITINERIS) and will be located at the CNR-ISAC of Bologna. The IFS 125HR will allow us to monitor additional trace gases compared to those already retrieved with the EM27/SUN and SkySpec-2D. The complete list of gases is: O3, HNO3, HCl, HF, CO, N2O, CH4, HCN, C2H6, ClONO2, CO2, H2O, O2 and HDO. The main difference between the two FTS instruments is that the high resolution of the IFS 125HR allows us to obtain information about gas concentrations at different atmospheric layers, while the portability of the EM27/SUN allows the system to be used for measurement campaigns in remote areas. Part of the research activities described here was carried out with the contribution of the Next Generation EU funds within the National Recovery and Resilience Plan (PNRR), Mission 4 - Education and Research, Component 2 - From Research to Business (M4C2), Investment Line 3.1 - Strengthening and creation of Research Infrastructures, Project IR0000038 – “Earth Moon Mars (EMM)”. EMM is led by INAF in partnership with ASI and CNR. We acknowledge the support of the Italian Ministry for Universities and Research through IR0000032 – ITINERIS, Italian Integrated Environmental Research Infrastructures System (D.D. n.
130/2022 – CUP B53C22002150006) Funded by EU – Next Generation EU PNRR – Mission 4 “Education and Research” – Component 2: “From research to business” – Investment 3.1: “Fund for the realization of an integrated system of research and innovation infrastructures”.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Methane Constellation

Authors: Strategist Frederic Voulouzan
Affiliations: Airbus
Airbus is working towards providing commercial space-based, high-resolution methane monitoring services targeted at industrial facilities in various verticals. Methane is the second-largest greenhouse gas contributor to climate change. It is also actionable at scale, since many methane emission sources can be suppressed or diminished; slashing methane emissions is therefore generally considered a fast way to moderate climate change in the short term. Institutional and private satellite-based monitoring services already exist at macro scale, detecting leaks from the order of tons per hour down to hundreds of kg per hour. However, continuous monitoring of methane leaks at facility level, enabling operational corrective actions, requires a more detailed spectral resolution than is available on the market today. Based on its heritage and experience in developing leading-edge spaceborne instruments for monitoring various atmospheric gases, Airbus aims to develop a fleet of satellites capable of detecting methane emissions with unprecedented precision, targeting facilities with a low detection threshold that can allow industries to reach and maintain their emissions goals.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Advanced Deep Learning Model for landfill methane detection using PRISMA satellite data

Authors: Mohammad Marjani, Dr. Fariba Mohammadimanesh, Ali Radman, Dr. Daniel J. Varon, Dr. Masoud Mahdianpari
Affiliations: Memorial University Of Newfoundland, Natural Resources Canada, Harvard University
Methane (CH4) is a greenhouse gas that significantly contributes to global warming, making the monitoring of its emissions a critical approach to mitigating climate change. The waste, energy, and agricultural sectors collectively account for roughly half to two-thirds of CH4 emissions. Landfills, in particular, are significant contributors due to anaerobic conditions that promote CH4 production. The Environmental Protection Agency (EPA) reported that landfills accounted for 18% of human-caused CH4 emissions in 2017. With population growth projected to double waste generation, global CH4 emissions will likely rise substantially. Remote sensing technologies, however, can support landfill monitoring and help manage these emissions. Detecting CH4 emissions from landfills is a critical first step toward controlling and reducing atmospheric greenhouse gas concentrations. Advancements in remote sensing technology have significantly enhanced the ability to monitor CH4 emissions from space on a global scale. Cutting-edge satellites like the PRecursore IperSpettrale della Missione Applicativa (PRISMA), launched by the Italian Space Agency, offer high-precision hyperspectral data that is freely accessible to the public. This availability, combined with the hyperspectral sensitivity of PRISMA, has facilitated numerous successful detections of CH4 plumes, as documented in recent studies. However, most of these studies rely on manual methods, such as thresholding and hand-drawing plume boundaries, to delineate CH4 plume extents, highlighting the need for more automated and efficient approaches. Deep learning (DL) techniques have advanced significantly in recent years. While DL models have shown great potential in various remote sensing applications, their application to CH4 plume detection remains underexplored, with only a few studies addressing this issue and none specifically focusing on landfill sources.
This study employs the state-of-the-art Swin Transformer (ST) model for CH4 plume detection over landfill sites to address this gap. PRISMA data were used to detect CH4 plumes, leveraging the sensor's sensitivity in two key absorption regions: a weaker band spanning 1650–1750 nm and a stronger band covering 2200–2400 nm. Data were collected from multiple locations, including sites in India, Iran, Pakistan, Mexico, and other regions. These locations were selected from previous reports available on the GHGSat Spectra and Carbon Mapper websites, as well as an extensive review of the existing literature. Subsequently, the matched filter (MF) algorithm was applied to determine the correlation between the reference (target) CH4 spectrum and the spectral data in the image; a high correlation value signifies the presence of CH4. The MF algorithm was initially employed to produce CH4 concentration maps from PRISMA satellite imagery, which were then used to construct a training dataset. Plumes that were visually detectable and showed no correlation with surface artifacts or albedo were manually delineated using polygons, taking wind direction into account to separate the plume from the background. Plumes visible in RGB images were excluded during this step. Lastly, the portions of the images containing landfills with identified plumes were divided into 200 × 200 pixel patches, each comprising 54 spectral bands. In total, 163 PRISMA images of landfills worldwide were analyzed, of which 41 were identified as containing CH4 emissions. A fundamental limitation of DL techniques is their reliance on large training datasets. To address this issue, two data augmentation strategies were employed: the first involved rotating the images randomly by 90, 180, or 270 degrees; the second shifted the landfill regions within each image patch while ensuring all areas remained visible.
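The two augmentation strategies can be sketched as follows (using wrap-around rolls as a simple stand-in for the paper's visibility-preserving shifts):

```python
import numpy as np

def augment(patch, shifts=((10, 0), (0, 10))):
    """Yield rotated and shifted copies of an (H, W, B) hyperspectral patch."""
    for k in (1, 2, 3):                        # 90, 180, 270 degree rotations
        yield np.rot90(patch, k, axes=(0, 1))
    for dy, dx in shifts:                      # translations via wrap-around roll
        yield np.roll(patch, (dy, dx), axis=(0, 1))

patch = np.zeros((200, 200, 54))               # one PRISMA-sized patch, 54 bands
variants = list(augment(patch))
print(len(variants))                           # 5 augmented copies per patch
```

Both operations act only on the spatial axes, leaving the 54 spectral bands untouched, so the plume's spectral signature is preserved.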
These augmentation techniques expanded the training dataset significantly, increasing the number of patches from 41 to 1,585. The ST was chosen for CH4 plume detection in satellite imagery due to its ability to extract local and global spatial dependencies. Unlike traditional CNNs, which rely on fixed receptive fields, the ST uses a shifted-window attention mechanism that processes the image in smaller patches, enabling efficient and scalable feature extraction. This hierarchical approach allows the model to extract fine-grained spectral details of CH4 plumes while recognizing broader spatial patterns, which are essential for distinguishing CH4 from background noise and surface artifacts. After training the ST model, its performance was evaluated on a separate test set consisting of 13 CH4 plumes that were not included in the training data. The evaluation used three standard metrics: precision, recall, and F1-score. The model achieved a precision of 0.85, indicating a high proportion of correctly detected CH4 plumes among all detected instances. The recall value of 0.96 suggests the model was highly sensitive, successfully identifying most of the actual CH4 plumes in the test images. The F1-score, the harmonic mean of precision and recall, was 0.90, reflecting a balanced performance between the two metrics. These results demonstrate that the ST model is effective at accurately detecting CH4 plumes, with strong overall performance in identifying and correctly classifying them. After training and evaluation, the trained ST was used to extract the boundaries of the CH4 plumes from the PRISMA satellite images. These extracted boundaries were then utilized to estimate the methane emission rate of each plume using the Integrated Mass Enhancement (IME) method. The dataset used for CH4 plume detection in this study includes observations from various landfills across multiple countries.
For instance, one of the landfills in Lahore, Pakistan, was observed twice, with an average estimated methane emission rate of 3180 kg/h. Another example is the landfill in Tehran, Iran, which was observed twice, with a significantly lower average emission rate of 2712.5 kg/h. In Buenos Aires, Argentina, a single observation recorded an estimated emission rate of 3829 kg/h. Several landfills from India, such as those in Delhi and Mumbai, were observed multiple times, with emission rates ranging from 1141 kg/h in Mumbai to 5148 kg/h in one of the Delhi locations. Other notable cases include the landfill in Casablanca, Morocco, with a high methane emission rate of 4381 kg/h observed over two occasions. These observations highlight the global variation in methane emissions from landfills, with significant differences in emission rates even within a single country, underscoring the importance of localized monitoring for effective climate change mitigation.
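The IME-based rate estimates reported above take the standard form Q = U_eff · IME / L with plume length scale L = sqrt(plume area); a toy sketch with invented numbers (not the study's actual inputs):

```python
import numpy as np

def emission_rate(enhancement_kg_m2, mask, pixel_size_m, u_eff_m_s):
    """IME method: Q = U_eff * IME / L (kg/s), with L = sqrt(plume area).
    enhancement_kg_m2: per-pixel CH4 mass enhancement; mask: binary plume mask."""
    area = mask.sum() * pixel_size_m ** 2                        # plume area, m^2
    ime = (enhancement_kg_m2 * mask).sum() * pixel_size_m ** 2   # total excess mass, kg
    L = np.sqrt(area)                                            # plume length scale, m
    return u_eff_m_s * ime / L                                   # emission rate, kg/s

mask = np.zeros((10, 10))
mask[4:6, 4:8] = 1                         # an 8-pixel plume
dCH4 = np.full((10, 10), 2e-3)             # uniform 2 g/m^2 enhancement
q = emission_rate(dCH4, mask, pixel_size_m=30.0, u_eff_m_s=2.0)
print(round(q * 3600, 1))                  # kg/h, ~1.2 t/h for these toy numbers
```

In practice U_eff is an effective wind speed calibrated against large-eddy simulations, which is the main source of uncertainty in the resulting rates.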

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Experience from a Satellite-Airborne Experiment to Detect Methane Emissions from Municipal Solid Waste Landfills in the Czech Republic

Authors: Olga Brovkina, Adam Bednařík, Jan Hanuš, Carles
Affiliations: Global Change Research Institute CAS
Localized, low-intensity methane emissions might seem insignificant individually, but their cumulative impact can be substantial when aggregated across numerous sources such as landfills, leaky pipelines, and agricultural operations. Landfills, in particular, are significant contributors to anthropogenic methane emissions, accounting for approximately 11% of global methane emissions from human activity, according to the Global Methane Initiative. Detecting and quantifying these emissions is challenging, especially for smaller, diffuse sources. This study evaluated methane (CH₄) detection capabilities using GHGSat satellite data and airborne hyperspectral (HS) shortwave infrared (SWIR) data over two municipal solid waste (MSW) landfills in the Czech Republic. The planned campaign included synchronized satellite and airborne acquisitions during the summer and autumn of 2024, supported by ground-based measurements of CH₄ concentrations and fluxes. GHGSat data were acquired using an imaging spectrometer with a spectral resolution of 0.1 nm and a spatial resolution of 25 m, operating in the 1630–1675 nm spectral range. Airborne HS data were captured with the SASI-600 sensor (Itres®) onboard the Flying Laboratory of Imaging Systems (FLIS, https://olc.czechglobe.cz/en/main-page/), offering a spectral resolution of 15 nm and a spatial resolution of 0.75 m, within the 950–2450 nm spectral range. Ground-based measurements of CH₄ concentrations and fluxes were conducted using a portable handheld infrared thermometer and a soil chamber connected to a portable greenhouse gas analyzer (Picarro GasScouter G4301, Picarro, CA, USA). GHGSat Inc. provided ready-to-use processed maps for the two landfills. Processing of airborne HS data involved identifying spectral signatures characteristic of methane absorption and applying spectral unmixing to isolate methane signals from background noise, enabling spatial mapping of methane emissions. 
While GHGSat data did not detect emissions at the selected sites, its high spatial resolution and wide-area coverage make it a powerful tool for identifying larger or more intense sources and monitoring global methane trends. In contrast, airborne HS SWIR data demonstrated higher sensitivity to localized, low-intensity methane emissions, successfully identifying CH₄ hotspots with rates as low as 1 kg/hour. These findings emphasize that methane detection and quantification require tailored solutions, with different technologies suited to specific scales and emission characteristics. Though emissions of 1 kg/hour may seem minor, their detection and quantification are critical for comprehensive methane management, achieving climate goals, and understanding the broader impact of diffuse, low-intensity sources. Supported by the ESA Third-Party Mission project (PP0096477) and aligned with the Czech Academy of Sciences’ Strategy-21 program, "Space for Mankind," these findings contribute to advancing methodologies for methane emission detection and quantification, with significant implications for climate change mitigation strategies.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Observations of greenhouse gases at Sodankylä and comparisons with satellite-borne observations

Authors: Dr. Rigel Kivi, Mr. Pauli Heikkinen, Juha Hatakka, Hermanni Aaltonen, Tomi Karppinen, Marika Honkanen, Hannakaisa Lindqvist, Johanna Tamminen, Jouni Pulliainen, Prof. Huilin Chen
Affiliations: Finnish Meteorological Institute, Finnish Meteorological Institute, University of Groningen
Multiple space-borne missions provide valuable data on greenhouse gases (GHGs). Current or upcoming missions include ESA's Sentinel-5P; the upcoming Copernicus Carbon Dioxide Monitoring mission CO2M; ESA's Sentinel-5; the CNES (French Space Agency) MicroCarb mission; NASA's OCO-2 mission; and the GOSAT and GOSAT-2 missions, a joint effort of the Japanese Ministry of the Environment, the National Institute for Environmental Studies, and the Japan Aerospace Exploration Agency (JAXA). Ground-based measurements are needed to provide confidence in space-borne observations. At the Sodankylä supersite in northern Finland we have established several observational programs that are essential for satellite data calibration and validation. Here, we first report the Fourier Transform Spectrometer (FTS) measurements of CO₂ and CH₄ at Sodankylä from 2009 to late 2024, and comparisons with satellite-borne observations. During its 16 years of operation the FTS instrument has been stable and good data coverage has been obtained. The long-term FTS observations were processed with the latest GGG2020 retrieval used in the global Total Carbon Column Observing Network (TCCON). We find good agreement with GOSAT observations of XCO₂ and XCH₄, available since 2009: the mean relative difference in XCO₂ has been -0.10 ± 0.01 % and in XCH₄ 0.21 ± 0.02 %. Secondly, fiducial reference measurements of CO₂, CH₄, N₂O and other gases have been obtained using a balloon-borne AirCore system. Balloon-borne measurements were taken in the vicinity of the TCCON site, allowing for direct comparison between remote sensing retrievals and in-situ observations. We have therefore been able to link TCCON remote sensing observations to the World Meteorological Organization reference scales for greenhouse gases.
Finally, we present comparisons of the TCCON a priori versus balloon-borne profiles, showing good agreement between reference measurements and the a priori profiles generated with the newest TCCON GGG2020 algorithm.
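The satellite-versus-ground comparison metric quoted above can be sketched in a few lines; the values below are synthetic placeholders, not the actual GOSAT/TCCON data:

```python
import numpy as np

def mean_relative_difference(satellite, ground):
    """Mean relative difference (%) between coincident satellite and
    ground-based column averages, with the standard error of the mean."""
    satellite = np.asarray(satellite, dtype=float)
    ground = np.asarray(ground, dtype=float)
    rel = (satellite - ground) / ground * 100.0   # per-coincidence difference, %
    return rel.mean(), rel.std(ddof=1) / np.sqrt(rel.size)

# Synthetic XCO2 values (ppm) purely for illustration; the comparison in the
# abstract uses coincident GOSAT soundings and Sodankylä TCCON retrievals.
mean, sem = mean_relative_difference([399.5, 400.1, 401.0], [400.0, 400.4, 401.2])
```

Reporting the standard error alongside the mean, as in "-0.10 ± 0.01 %", conveys how well the mean offset itself is constrained by the number of coincidences.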
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Fiducial Reference Measurements for Greenhouse Gases and their use for validation of remote sensing products

Authors: Mahesh Kumar Sha, Martine De Mazière, Justus Notholt, Huilin Chen, Angelika Dehn, Filip Desmet, David W. T. Griffith, Frank Hase, Pauli Heikkinen, Benedikt Herkommer, Nicholas Jones, Rigel Kivi, Nicholas Kumps, Bavo Langerock, Neil A. Macload, Jamal Makkor, Winfried Markert, Christof Petri, Corinne Vigouroux, Damien Weidmann, Minqiang Zhou
Affiliations: Royal Belgian Institute For Space Aeronomy (bira-iasb), University of Bremen, University of Groningen, ESA/ESRIN, University of Wollongong, Karlsruhe Institute of Technology, Finnish Meteorological Institute, STFC Rutherford Appleton Laboratory
Space-based measurements of greenhouse gases (GHGs) have been continuously improving over the last decades, especially with missions like SCIAMACHY, GOSAT, OCO-2, and the follow-on missions from both traditional space agencies and new space missions. Calibration (Cal) and validation (Val) form an integral part of a global integrated Earth observation data system for ensuring that it provides reliable information on the measured variable. The Quality Assurance Framework for Earth Observation (QA4EO) provides a set of principles, guidance, and specific tools to encourage provision of internationally consistent quality indicators on the delivered data. It requires Cal/Val of satellite data through an independent data set providing comparable observations. Therefore, the independent data itself and the associated uncertainties must be fully characterized and documented in compliance with the QA4EO principles. This independent reference data is agreed upon by the community, is ideally tied to the international system of units (SI), and is referred to as Fiducial Reference Measurements (FRM). The Total Carbon Column Observing Network (TCCON) and the Network for the Detection of Atmospheric Composition Change – Infrared Working Group (NDACC-IRWG) are two such reference measurement networks providing total and/or partial column concentrations of GHGs and other important atmospheric gases. The data (total and partial column concentrations) from these two networks form the basis for the validation of satellite-derived trace gas products, for model validation, and for carbon cycle and other scientific research activities. However, the number of stations is limited, and the geographical coverage is uneven and does not cover the full range of measurand space. 
To fill in this gap, several portable low spectral resolution FTIRs, one of which is the EM27/SUN that is used by the Collaborative Carbon Column Observing Network (COCCON), have been extensively tested and characterized in the framework of ESA’s Fiducial Reference Measurements for Ground-Based Infrared Greenhouse Gas Observations (FRM4GHG; https://frm4ghg.aeronomie.be/) project and showed excellent performance. The spectrometers have demonstrated the ability to provide high quality data (total column concentrations of CO2, CH4, CO, H2O) comparable to that of TCCON. The use of portable spectrometers offering extended spectral coverage towards mid-infrared for complementing NDACC-IRWG observations of some species (e.g., HCHO, CH4, N2O, OCS, …) is a further topic studied by FRM4GHG. These low-resolution spectrometers are useful for achieving a denser distribution of stations, bridging geographical gaps for various atmospheric and surface conditions, source regions of particular interest, and creating a large latitudinal distribution of stations. This presentation will provide an overview of the state-of-the-art of the current high- and low-resolution FTIR spectrometers and present plans for future improvements. A few example cases of satellite data validation using FRM4GHG data will be shown.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Recent Improvements for the Operational TROPOMI CH4 Retrieval

Authors: Soumyajit Mandal, Tobias Borsdorff, Mari Martinez Velarte, Christina Dwiriyanti, Jochen Landgraf
Affiliations: Netherlands Institute for Space Research - SRON
Methane (CH4) is a potent greenhouse gas and a major contributor to global warming, second only to carbon dioxide (CO2). Although less abundant than CO2, CH4 has a higher global warming potential. Human activities, such as livestock farming, rice cultivation, and extraction of fossil fuels, are significant sources of CH4 emissions. Space-based monitoring of CH4 has become essential for identifying emission hotspots and understanding methane's role in climate processes, informing climate change mitigation strategies. The TROPOspheric Monitoring Instrument (TROPOMI) on ESA's Sentinel-5 Precursor satellite provides daily CH4 total column measurements with a high spatial resolution of 7 × 5.5 km². The product offers high accuracy (better than 1%) and precision (below 1.5%), and has been extensively validated against ground-based TCCON FTIR observations and other satellite data, such as GOSAT and GOSAT-2. TROPOMI’s CH4 data is used globally by scientists, policymakers, and industries to track emissions, refine atmospheric models, and support climate change mitigation efforts. In this talk we will explore recent updates of the TROPOMI CH4 data product that improve its accuracy and reliability. A key development is the integration of an AI-based cloud filtering method, which makes the data product independent of SUOMI-VIIRS, ensuring consistent quality even during instrument outages. This AI-driven cloud masking is now part of the operational product, available to the data user community. We have also focused on improving CH4 retrievals in high-latitude regions, where high solar zenith angles complicate measurements. By incorporating pseudo-spherical approximations, we have enhanced retrieval accuracy, particularly in these challenging areas. Additionally, we are working on a new AI-based a posteriori quality-filtering approach for the TROPOMI CH4 product. 
This will allow users to select data based on their specific accuracy requirements, providing better coverage of the data product. These advancements are key preparations for the upcoming Near-Real-Time CH4 product that will be developed by ESA in the near future.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Unveiling Methane Emitters: China's Spaceborne Imaging Spectroscopy Breakthroughs and Plans

Authors: Prof Pengfei Li
Affiliations: State Key Laboratory Of Infrared Physics, Shanghai Institute Of Technical Physics, Chinese Academy Of Sciences
Spaceborne imaging spectroscopy of methane emissions has emerged as a critical tool in understanding and mitigating global climate change. This study presents significant advancements in China's satellite monitoring capabilities for methane emission sources, highlighting key achievements and future plans. Our research team has developed the Advanced Hyperspectral Imager (AHSI) payload, deployed on multiple satellites including Gaofen-5-02, Gaofen-5-01A, Ziyuan-1-02D, and Ziyuan-1-02E, along with corresponding retrieval techniques. Utilizing AHSI data, complemented by international satellites such as Sentinel-2 and PRISMA, we have conducted extensive monitoring of global methane hotspots, including coal mines, oil and gas facilities, and landfills. Our refined retrieval methodologies, which integrate physical models with data-driven approaches, have achieved a concentration precision of 20 ppb and a detection threshold of 100 kg/hour for methane plumes. This enhanced sensitivity has led to the discovery of widespread, high-intensity methane plumes that significantly exceed emissions recorded in current authoritative inventories. These findings suggest a potential underestimation of global methane emissions and underscore the importance of satellite-based monitoring for accurate quantification. Our monitoring efforts have also provided valuable insights during major geopolitical events, such as the Russia-Ukraine conflict and the Nord Stream pipeline explosion, where we detected extensive methane plumes. These observations demonstrate the capability of satellite monitoring to provide rapid, large-scale assessments of anthropogenic and disaster-related methane emissions. Looking ahead, China is developing new satellite missions to further advance methane emission monitoring capabilities. We aim to establish deep collaborations with relevant organizations in the European Union and the United Nations to promote global satellite monitoring of methane emission sources. 
This study highlights the critical role of satellite-based monitoring in understanding methane dynamics and supports the urgent need for continued investment in and refinement of these technologies. As we progress with new satellite missions and international collaborations, we anticipate significant improvements in our ability to detect, quantify, and mitigate methane emissions, contributing substantially to global climate change mitigation efforts.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Aerosol Induced Uncertainties in Satellite CO2 Retrievals

Authors: Timo Virtanen, Anu-Maija Sundström, Elli Suhonen, PhD Antti Lipponen, PhD Antti Arola, Christopher O'Dell, Robert R. Nelson, Hannakaisa Lindqvist
Affiliations: Finnish Meteorological Institute, Climate Research Programme, Finnish Meteorological Institute, Space and Earth Observation Centre, Colorado State University, Jet Propulsion Laboratory, California Institute of Technology
Accurate global monitoring of CO₂ is critical for understanding carbon sources and sinks and for assessing mitigation efforts. While ground-based measurements provide valuable data, they are limited in remote areas where CO₂ dynamics remain poorly understood. Satellite remote sensing of CO₂ fills this gap by offering continuous, global observations. High precision is required to observe the small gradients in CO₂ levels. Atmospheric aerosols, which scatter and absorb sunlight in ways difficult to quantify, pose a major challenge for satellite CO₂ retrieval accuracy, especially in polluted regions with high emissions. This study [1] examines the impact of aerosols on column-averaged dry-air mole fraction of CO₂ (XCO₂) retrievals from NASA’s OCO-2 by comparing its retrieved aerosol information to MODIS. Results show that OCO-2 underestimates aerosol optical depth (AOD) in high aerosol conditions relative to MODIS and AERONET data. This results in misclassification of some of the high aerosol cases as good quality (low AOD) in the OCO-2 quality filtering, with resulting biases in XCO₂. We also find a correlation between AOD and XCO₂, which can be attributed to co-emission of aerosols and CO₂. This leads to a sampling bias because high AOD cases are preferentially filtered out of the OCO-2 XCO₂ dataset, which results in high CO₂ regions having fewer good quality retrievals. Increasing the AOD acceptance threshold from 0.2 (used by OCO-2) to 0.5, as planned for the future CO2M mission, would improve urban data coverage by 30%, enhancing the monitoring of anthropogenic emissions. Furthermore, using TCCON data as a validation source for XCO₂, we find a weak but statistically significant low bias in OCO-2 XCO₂ at high AOD values, indicating that urban CO₂ emissions may be underestimated. These findings underscore the need for improved aerosol handling in satellite CO₂ retrievals for reliable climate assessments. 
[1] Virtanen, T. H., Sundström, A.-M., Suhonen, E., Lipponen, A., Arola, A., O'Dell, C., Nelson, R. R., and Lindqvist, H.: A global perspective on CO2 satellite observations in high AOD conditions, Atmos. Meas. Tech. Discuss. [preprint], https://doi.org/10.5194/amt-2024-77, in review, 2024.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Quantifying Environmental Impacts on Bias in Arctic TROPOMI Methane Retrievals Using Machine Learning

Authors: Tuomas Häkkilä, Hannakaisa Lindqvist, Ella Kivimäki, Tomi Karppinen, Dr. Rigel Kivi, Johanna Tamminen
Affiliations: Finnish Meteorological Institute
Methane is one of the most significant greenhouse gases. Accurate observations inform our understanding of the distribution of methane as well as the magnitude of methane sources and sinks. While ground-based methods for the detection of methane emissions exist, satellite measurements are the only viable option for global and timely coverage. The conditions in the Arctic, e.g. high solar zenith angles and snow cover, however, present a particular challenge for satellite retrievals. The TROPOspheric Monitoring Instrument (TROPOMI) onboard the European Space Agency’s Sentinel-5 Precursor satellite produces observations of total-column methane (XCH₄), among other atmospheric constituents, at a medium resolution even at high latitudes. Since its launch in late 2017, TROPOMI has enabled the data-driven study of methane emissions through its continuous, global coverage on a daily basis, producing millions of observations per day. These observations are a valuable resource, but also necessitate information on their reliability. For example, recent research has shown a seasonality in the bias of TROPOMI XCH₄ measurements compared to ground-based observations at high latitudes. We present ongoing work on the evaluation of environmental factors impacting TROPOMI measurements of methane concentrations in the Arctic. We compare the performance of multiple machine learning methods in predicting the bias present in TROPOMI XCH₄ measurements. As a reference we use ground-based observations at four Total Carbon Column Observing Network (TCCON) stations located at high latitudes: East Trout Lake (Saskatchewan, Canada), Sodankylä (Finland), Ny Ålesund (Svalbard, Norway), and Eureka (Nunavut, Canada). The bias predictions are trained on a set of both meteorological (e.g. temperature, humidity) and retrieval-based (e.g. solar zenith angle, scanline) variables. 
To further analyse the bias, we then examine the importance of each variable for predicting the bias, in an effort to identify variables that contribute to the bias itself. This research provides important data concerning the accuracy of satellite-borne methane observations in the Arctic, and also informs future developments in satellite retrieval algorithms.
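The variable-importance step described above can be illustrated with a toy permutation-importance calculation. Everything here is a synthetic stand-in: the predictors, the linear "bias model" and the numbers are illustrative assumptions, not the authors' actual machine learning setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: predict a satellite-minus-ground XCH4 bias from a few
# predictor columns (think SZA, humidity, scanline; names are hypothetical).
n = 500
X = rng.normal(size=(n, 3))
bias = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Minimal bias predictor: ordinary least squares with an intercept column.
design = np.c_[np.ones(n), X]
coef, *_ = np.linalg.lstsq(design, bias, rcond=None)
predict = lambda X_: np.c_[np.ones(len(X_)), X_] @ coef
baseline_mse = np.mean((bias - predict(X)) ** 2)

def permutation_importance(X, y, predict, baseline_mse, n_repeats=10):
    """Increase in MSE when one predictor column is shuffled: a simple,
    model-agnostic estimate of that variable's contribution to the fit."""
    imp = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        mses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])            # break the link to column j only
            mses.append(np.mean((y - predict(Xp)) ** 2))
        imp[j] = np.mean(mses) - baseline_mse
    return imp

imp = permutation_importance(X, bias, predict, baseline_mse)
```

Because column 0 drives most of the synthetic bias, its importance dominates; the same shuffle-and-rescore idea works unchanged with any of the machine learning regressors the abstract compares.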
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: A multi-objective genetic algorithm approach to optimise GOSAT methane observations for high-latitude regions

Authors: Dr Lakshmi Bharathan, Dr Robert Parker
Affiliations: National Centre for Earth Observation, University of Leicester
The Greenhouse gases Observing SATellite (GOSAT) provides a long-term (~15-year) data record of atmospheric methane with near-surface sensitivity, with the objective of measuring continental-scale sources and sinks (Kuze et al., 2009). The University of Leicester has created a well-validated, operational long-term global data set of column methane concentration based on a proxy retrieval algorithm that uses CO2 as a proxy to account for scattering-induced changes in light path length in the atmosphere (Parker et al., 2020). However, data coverage over high-latitude regions is limited due to factors such as reduced sunlight, frequent cloud cover, and weak signal detection, particularly during winter months. To improve the data density across high latitudes, it is necessary to optimise the post-retrieval data quality filtering process, as standard filters tend to discard a substantial amount of data, particularly in high-latitude areas. Genetic Algorithms (GAs) are promising tools for enhancing the efficiency and accuracy of data analysis in satellite remote sensing (Mandrake et al., 2013; Li et al., 2024). The present work explores the application of a GA to optimise the retrieval of methane data from the GOSAT satellite over high-latitude regions (north of 50°N). GAs, inspired by Darwinian natural selection, treat potential solutions as "individuals" or "chromosomes" – sets of rules or parameters relevant to the specific problem. In our problem, potential solutions/chromosomes are sets of data quality filter parameters, and the objective is to maximise data coverage over high-latitude regions while maintaining data quality. 
The GA iterates through generations: fitter individuals (filter combinations) are selected as "parents", and crossover (combining genetic material from the parent chromosomes/solutions) and mutation (random modifications promoting genetic diversity) produce new solutions/offspring, so that the population evolves towards the best solution as the fitness of each individual is evaluated. Relaxing quality filters can lead to increased differences between satellite and ground-based observations, implying a worsening of the data quality. Thus, to arrive at the best possible filter values that provide the maximum enhancement in data density with the least effect on comparison with ground-based truth, we carry out a non-dominated sorting, ideal for analysing trade-offs between multiple objectives, to select the most suitable solutions which need to be carried forward to the next generation. We use ground-based column methane observations from the Total Carbon Column Observing Network (TCCON) to drive the algorithm towards an optimal solution that ensures minimal impact on data quality (Wunch et al., 2011). Initial findings suggest that GA-optimised post-filtering thresholds can lead to a significant increase in valid satellite soundings (up to 60% of the initial volume) over high latitudes at the cost of a 5 ppb change in root-mean-square error (RMSE) relative to TCCON observations. Future research will investigate the generalisability and regional applicability of this approach. This work demonstrates the potential of GAs for improving data coverage in challenging high-latitude regions while maintaining the accuracy and precision of the data. References: Kuze, A., Suto, H., Nakajima, M., and Hamazaki, T.: Thermal and near infrared sensor for carbon observation Fourier-transform spectrometer on the Greenhouse Gases Observing Satellite for greenhouse gases monitoring, Appl. Opt., 48, 6716–6733, https://doi.org/10.1364/AO.48.006716, 2009. 
Li, K., Bai, K., Jiao, P., Chen, H., He, H., Shao, L., Sun, Y., Zheng, Z., Li, R., and Chang, N.-B.: Developing unbiased estimation of atmospheric methane via machine learning and multiobjective programming based on TROPOMI and GOSAT data, Remote Sens. Environ., 304, 114039, https://doi.org/10.1016/j.rse.2024.114039, 2024. Mandrake, L., Frankenberg, C., O'Dell, C. W., Osterman, G., Wennberg, P., and Wunch, D.: Semi-autonomous sounding selection for OCO-2, Atmos. Meas. Tech., 6, 2851–2864, https://doi.org/10.5194/amt-6-2851-2013, 2013. Parker, R. J., Webb, A., Boesch, H., Somkuti, P., Barrio Guillo, R., di Noia, A., Kalaitzi, N., Anand, J. S., Bergamaschi, P., Chevallier, F., Palmer, P. I., Feng, L., Deutscher, N. M., Feist, D. G., Griffith, D. W. T., Hase, F., Kivi, R., Morino, I., Notholt, J., … Wunch, D.: A decade of GOSAT Proxy satellite CH4 observations, Earth Syst. Sci. Data, 12, 3383–3412, https://doi.org/10.5194/essd-12-3383-2020, 2020. Wunch, D., Toon, G. C., Blavier, J.-F. L., Washenfelder, R. A., Notholt, J., Connor, B. J., Griffith, D. W. T., Sherlock, V., and Wennberg, P. O.: The Total Carbon Column Observing Network, Phil. Trans. R. Soc. A, 369, 2087–2112, https://doi.org/10.1098/rsta.2010.0240, 2011.
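The GA loop described above (selection, single-point crossover, mutation) can be sketched compactly. This toy version uses synthetic "soundings" and a single weighted fitness that trades data yield against an RMSE budget; the real work uses multi-objective non-dominated sorting and actual GOSAT/TCCON data, so every name and number here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(thresholds, quality_vars, errors, rmse_budget=5.0):
    """Toy fitness: fraction of soundings passing all threshold filters,
    penalised when their RMSE (ppb) exceeds a fixed budget."""
    keep = np.all(quality_vars < thresholds, axis=1)
    if keep.sum() == 0:
        return -np.inf
    rmse = np.sqrt(np.mean(errors[keep] ** 2))
    return keep.mean() - max(0.0, rmse - rmse_budget)

# Synthetic soundings: three quality variables (e.g. an AOD proxy, surface
# pressure difference, fit chi-square) and errors that grow with them.
n, d = 2000, 3
quality_vars = rng.uniform(0, 10, size=(n, d))
errors = rng.normal(scale=4 + 0.5 * quality_vars.sum(axis=1) / d, size=n)

pop = rng.uniform(2, 10, size=(20, d))                  # initial chromosomes
for generation in range(30):
    scores = np.array([fitness(c, quality_vars, errors) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]             # selection: fittest half
    cuts = rng.integers(1, d, size=10)                  # single-point crossover
    children = np.array([np.r_[a[:k], b[k:]]
                         for a, b, k in zip(parents, parents[::-1], cuts)])
    children += rng.normal(scale=0.2, size=children.shape)  # mutation
    pop = np.vstack([parents, children])                # elitist replacement

best = pop[np.argmax([fitness(c, quality_vars, errors) for c in pop])]
```

Replacing the scalar fitness with a vector of objectives (yield, RMSE) and the `argsort` selection with non-dominated sorting turns this into the multi-objective scheme the abstract describes.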
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: The Fusional-P UOL-FP CO2 full physics retrieval algorithm for the CO2M Mission

Authors: Hartmut Boesch, Leif Vogel, Antonio Di Noia, Klaus Bramstedt, Pablo Echevarria, Vijay Natraj
Affiliations: University Of Bremen, Kaioa Analytics, University of Rome Tor Vergata, NASA JPL
The Copernicus Carbon Dioxide Monitoring mission (CO2M) is the first satellite mission specifically designed to measure CO2 and CH4 with sufficient coverage, revisit and accuracy to allow the determination of anthropogenic CO2 and CH4 emissions. The payload of CO2M will consist of a multi-channel shortwave-infrared spectrometer, a cloud imager (CLIM) and a Multi-Angle Polarimeter (MAP) for aerosol information. The availability of co-located aerosol information is a key aspect of CO2M, necessary to achieve sufficient accuracy even under enhanced atmospheric aerosol loadings. To generate geophysical parameters such as the CO2 and CH4 columns from the satellite measurements, advanced retrieval algorithms are required that aim to accurately describe the physics of the transport of light in the atmosphere. The Fusional-P UOL-FP algorithm, developed for CO2M, is a full-physics retrieval algorithm based on the Optimal Estimation scheme. The core algorithm has been developed for the OCO mission over the last two decades. One of the novel aspects of the retrieval is its capability to ingest scene-based aerosol information from the MAP Level-2 dataset and utilise it for the CO2 and CH4 retrievals. Detailed and realistic orbit simulations have been conducted to test the retrieval and assess its performance. This algorithm will be implemented in the ground segment of the CO2M mission as one of three operational algorithms. This presentation will provide a description of the retrieval algorithm, discuss its aerosol approach, and present the results of the orbit simulations.
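At its core, an Optimal Estimation retrieval of the kind mentioned above is a Gauss-Newton iteration toward the maximum a posteriori state. A minimal sketch with a toy linear forward model (the matrix K, state vector and covariances are illustrative assumptions; the real algorithm uses a radiative-transfer forward model and a far larger state vector):

```python
import numpy as np

def oe_step(x, y, F, K, xa, Sa_inv, Se_inv):
    """One Gauss-Newton optimal-estimation update of the state vector x,
    given measurement y, forward model F with Jacobian K, prior xa, and
    inverse prior/measurement covariances Sa_inv, Se_inv."""
    A = K.T @ Se_inv @ K + Sa_inv                    # posterior precision
    b = K.T @ Se_inv @ (y - F(x) + K @ (x - xa))
    return xa + np.linalg.solve(A, b)

# Toy linear forward model y = K @ x: for a linear model, one step converges.
K = np.array([[1.0, 0.2], [0.1, 1.0], [0.5, 0.5]])
x_true = np.array([400.0, 1.9])                      # e.g. XCO2 (ppm), XCH4 (ppm)
y = K @ x_true                                       # noise-free synthetic spectrum
xa = np.zeros(2)                                     # prior state
Sa_inv = 1e-8 * np.eye(2)                            # very weak prior constraint
Se_inv = np.eye(3)                                   # unit measurement noise
x_hat = oe_step(xa, y, lambda x: K @ x, K, xa, Sa_inv, Se_inv)
```

With noise-free linear data and a weak prior, the update recovers the true state; a nonlinear forward model simply repeats `oe_step` until convergence.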
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Comparison of GHG Vertical Concentrations Estimated by COCCON EM27/SUN and Satellite Missions in Rome (Italy)

Authors: Giacomo Gostinicchi, Stefano Casadio, Anna Maria Iannarelli, Nicola Ferrante, Lorenzo Veltri Gomes, Annalisa Di Bernardino, Carlos Alberti, Darko Dubravica, Frank Hase, Akihiko Kuze, Angelika Dehn
Affiliations: Serco Italia, Sapienza University of Rome, Physics Department, Institute of Meteorology and Climate Research (IMK-ASF), Karlsruhe Institute of Technology (KIT), Japan Aerospace Exploration Agency, EOP-GMQ, ESRIN, ESA
In the framework of the collaboration between the European Space Agency (ESA) and the Karlsruhe Institute of Technology (KIT) and in the context of the COllaborative Carbon Column Observing Network activities (COCCON, https://www.imk-asf.kit.edu/english/COCCON.php), in June 2023 an EM27/SUN FTIR spectrometer was installed in the Atmospheric Physics Laboratory (APL, 41.90° N, 12.51° E) of Sapienza University, Rome downtown. It is the first instrument of this type operating in Italy. The APL site is part of the "Boundary-layer Air Quality-analysis Using Network of Instruments" supersite (BAQUNIN, https://www.baqunin.eu/) and was selected as a specific urban site thanks to its favourable position and the simultaneous operation of the BAQUNIN instrumental suite (Iannarelli et al., 2022). Since then, the BAQUNIN team has performed measurements in cloud-free conditions, and processed the acquired measurements using the PROFFAST v2.4 software (https://www.imk-asf.kit.edu/english/3225.php) and PROFFASTpylot v1.3 software (https://www.imk-asf.kit.edu/english/4261.php) to retrieve accurate information about the greenhouse gas (GHG) columnar amounts of H₂O, CO, CO₂ and CH₄, with the aim of monitoring the daily evolution of such species in the urban environment and of providing high quality GHG data for satellite Cal/Val purposes. This work presents results of the comparisons between collocated COCCON (BAQUNIN APL) and a) satellite-based observations of XH₂O, XCO, XCO₂ and XCH₄ from the JAXA GOSAT and GOSAT-2 satellites (sources: NIES and EORC (Kuze et al., 2022)); b) the bias-corrected XCO2 dataset from NASA OCO-2 (source: GES DISC). REFERENCES: - Iannarelli et al., 2022, "The Boundary Layer Air Quality-Analysis Using Network of Instruments (BAQUNIN) Supersite for Atmospheric Research and Satellite Validation over Rome Area". Bull. Am. Meteorol. Soc. 2022, 103, E599–E618. - Kuze et al., 2022, A. Kuze, Y. Nakamura, T. Oda, J. Yoshida, N. Kikuchi, F. Kataoka, H. Suto, K. 
Shiomi, “Examining partial-column density retrieval of lower-tropospheric CO2 from GOSAT target observations over global megacities”. Remote Sensing of Environment, 273 (2022 ), Article 112966, https://doi.org/10.1016/j.rse.2022.112966 - GOSAT NIES dataset: JAXA/NIES/MOE - Japan Aerospace Exploration Agency / National Institute for Environmental Studies / Ministry of the Environment. Data source: GOSAT Data Archive Service (GDAS), https://www.gosat.nies.go.jp/en/. - GOSAT-2 NIES dataset: JAXA/NIES/MOE - Japan Aerospace Exploration Agency / National Institute for Environmental Studies / Ministry of the Environment. Data source: GOSAT-2 Product Archive, https://prdct.gosat-2.nies.go.jp/index.html.en. - GOSAT EORC dataset: GOSAT-EORC-Research L2 Product (V03) (Partial Column). JAXA EORC – GHG Trends Viewer, https://www.eorc.jaxa.jp/GOSAT/index.html. - GOSAT-2 EORC dataset: GOSAT-2-EORC-Research L2 Product (V03) (Partial Column). JAXA EORC - GHG Trends Viewer, https://www.eorc.jaxa.jp/GOSAT/index.html. - OCO-2 dataset: OCO2_L2_Lite_FP. NASA/GSFC, Greenbelt, MD, USA, NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), https://disc.gsfc.nasa.gov/datasets. Accessed November 20th, 2024 at doi: 10.5067/70K2B2W8MNGY
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Portable Fourier Transform Spectrometer measurements of greenhouse gases at the Arctic Space Centre in Sodankylä, Finland: instrument coherence and sub-pixel variability

Authors: Tomi Karppinen, Catherine Fait, Dr. Hannakaisa Lindqvist, Dr. Rigel Kivi, Dr. Frank Hase, Dr. Mika Aurela
Affiliations: Finnish Meteorological Institute, Karlsruhe Institute of Technology, Finnish Meteorological Institute
A portable Fourier Transform Spectrometer (FTS), named EM27/SUN, measures direct solar light at shortwave infrared wavelengths, at 0.5 cm-1 spectral resolution, to retrieve total column concentrations of atmospheric gases such as CO2, CH4, CO and H2O. EM27/SUN measurements started at the Arctic Space Centre of the Finnish Meteorological Institute (FMI) in Sodankylä in 2017 in the framework of the ESA-funded Fiducial Reference Measurements for Greenhouse Gases (FRM4GHG) project. First, the measurements were made using a visiting instrument from the Karlsruhe Institute of Technology (KIT). In 2020, FMI acquired an EM27/SUN (serial number 122, hereafter SN122) which has been measuring ever since, contributing to the Collaborative Carbon Column Observing Network (COCCON). From 2020 until 2023, SN122 measured beside a high-resolution FTS, a Bruker IFS125 HR (0.02 cm-1 spectral resolution). In 2023, it was also accompanied by another EM27/SUN from KIT. For the measurement season 2024 (April – November), SN122 was moved to Halssiaapa, a wetland site 1 km away from the original measurement site. Here, we present the four-year time series of SN122. Retrieval of this time series is performed locally using the PROFFAST software and customizing the existing Python-based wrappers (ProffastPylot, TUM EM27 pipeline) to enable automated near real-time data processing. The measurements are compared to co-located low-resolution FTS measurements to assess the coherence among the ground-based instruments. The ground-based measurements are also used to evaluate the performance of the methane measurements from the TROPOspheric Monitoring Instrument (TROPOMI). For the 2024 data, we study the short-distance variability in the presence of local natural emissions. During that time, SN122 was located at the Halssiaapa wetland whereas another EM27/SUN instrument and the IFS125 HR were measuring at the original permanent location. 
We study the differences between the instruments prior to relocation and, using that as a baseline, we assess whether the effects of local emissions at the wetland can be observed by ground-based remote sensing in meteorological conditions where non-wetland instruments can be considered as the background. Year 2024 measurements allow also assessment of sub-pixel variability in TROPOMI satellite instrument validation.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Advances on the Emission Estimation Using the Divergence Method for Individual Satellite Overpasses With Noise Reduction

Authors: Anssi Koskinen, Janne Nurmela, Hannakaisa Lindqvist, Johanna Tamminen
Affiliations: Finnish Meteorological Institute, University of Helsinki
With ongoing climate change and rising global temperatures, quantifying greenhouse gas (GHG) emissions has become increasingly critical. One of the primary uses of remote sensing tools is to observe greenhouse gases, such as CO2 and CH4. Over the past decade, advancements in Earth observation (EO) and remote sensing have enabled the development of instruments with superior spatial and spectral resolution, accuracy, and coverage, which are essential for accurately monitoring these emissions. Combining satellite observations with inverse modeling has further strengthened our capacity to detect and quantify diverse GHG sources, from urban centers to natural environments. To meet the requirements for monitoring CO2 emissions, the European Copernicus programme is preparing a dedicated CO2 Monitoring (CO2M) satellite constellation, scheduled for launch in 2026. This mission will deliver carbon dioxide (CO2) and nitrogen dioxide (NO2) observations with both along-track and cross-track pixel resolution of 4 km and a swath width of 250 km. Compared to other public satellite data products, such as TROPOMI on board Sentinel-5P, CO2M will produce observations with improved spatial resolutions on regional and global scales. Supporting the CO2M mission, there have been ongoing methodological advances on how to convert total column observations into emission estimates. One related project was the ESA-funded COCO2 project, conducted from 2021 to 2023, which developed the data-driven emission quantification (ddeq) Python library (Kuhlmann et al., 2024). The ddeq library includes various inversion methods for converting satellite observations into emissions, such as the cross-sectional flux method, Gaussian plume model, integrated mass enhancement and divergence method. In our study, we aim to improve the ddeq library, especially focusing on the implementation of the divergence method (Beirle et al., 2019, 2021; Hakkarainen et al., 2022). 
In the ddeq version 1.0, the divergence map over the area of interest is computed by gridding and averaging fluxes over longer time periods using multiple overpasses. This reduces the background noise but makes comparing and cross-validating the divergence method with other methods in v1.0 challenging, since the other methods perform emission estimations using individual overpasses. The divergence can be computed for individual overpasses, but numerical differentiation combined with the noisy background resulted in significant noise in the divergence map, especially for CO2. Our goal is to explore various methodologies for reducing noise prior to applying numerical divergence computation. By implementing effective techniques, we aim to achieve a significant reduction in noise during the differentiation, resulting in smoother divergence maps. We also plan to compute the divergence and estimate the emission rates from single overpasses using satellite coordinates (along and cross track), omitting the regridding step of the divergence map, thereby saving computational time. References: Kuhlmann, G., Koene, E., Meier, S., Santaren, D., Broquet, G., Chevallier, F., Hakkarainen, J., Nurmela, J., Amorós, L., Tamminen, J., and Brunner, D.: The ddeq Python library for point source quantification from remote sensing images (version 1.0), Geosci. Model Dev., 17, 4773–4789, https://doi.org/10.5194/gmd-17-4773-2024, 2024. Beirle, S., et al.: Pinpointing nitrogen oxide emissions from space, Sci. Adv., 5, eaax9800, https://doi.org/10.1126/sciadv.aax9800, 2019. Beirle, S., Borger, C., Dörner, S., Eskes, H., Kumar, V., de Laat, A., and Wagner, T.: Catalog of NOx emissions from point sources as derived from the divergence of the NO2 flux for TROPOMI, Earth Syst. Sci. Data, 13, 2995–3012, https://doi.org/10.5194/essd-13-2995-2021, 2021. 
Hakkarainen, J., Ialongo, I., Koene, E., Szeląg, M. E., Tamminen, J., Kuhlmann, G., and Brunner, D.: Analyzing Local Carbon Dioxide and Nitrogen Oxide Emissions From Space Using the Divergence Method: An Application to the Synthetic SMARTCARB Dataset, Front. Remote Sens., 3, 878731, https://doi.org/10.3389/frsen.2022.878731, 2022.
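The noise problem described above can be illustrated with a minimal sketch of the divergence step. This is not the actual ddeq implementation; function and parameter names are illustrative. The column field is smoothed before differentiation, the horizontal flux is formed with the wind components, and the divergence is obtained numerically.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def divergence_map(vcd, u, v, dx, dy, sigma=1.0):
    """Illustrative divergence-method step: smooth the vertical column
    density (vcd) before numerical differentiation to limit noise
    amplification, then compute div(F) = dFx/dx + dFy/dy."""
    vcd_smooth = gaussian_filter(vcd, sigma)   # noise reduction prior to differentiation
    fx = vcd_smooth * u                        # zonal flux component
    fy = vcd_smooth * v                        # meridional flux component
    # np.gradient differentiates along the requested axis: rows = y, cols = x
    dfx_dx = np.gradient(fx, dx, axis=1)
    dfy_dy = np.gradient(fy, dy, axis=0)
    return dfx_dx + dfy_dy
```

For a source-free scene with uniform column and wind, the divergence is zero everywhere; over a point source, integrating the divergence map around the source approximates its emission rate.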
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Retrieving tropospheric water vapor isotope composition via balloon-borne flask sampling: a step toward calibrating remote sensing full-column H2O/HDO pairs

Authors: Dr. Daniele Zannoni, Dr. Rigel Kivi, Mr. Pauli Heikkinen, Dr. Hans Christian Steen-Larsen, Tor Olav Kristensen, Prof. Thomas Röckmann
Affiliations: Ca’ Foscari University, Finnish Meteorological Institute, University of Bergen, Utrecht University
The stable isotope composition of water is a powerful tracer of the atmospheric hydrological cycle, as it reflects the phase-change processes along the source-sink trajectory of air parcels. Recent advancements in remote sensing technology, both satellite-based (e.g. the TROPOspheric Monitoring Instrument (TROPOMI) on board the Copernicus Sentinel-5 Precursor satellite) and ground-based (e.g. the Total Carbon Column Observing Network (TCCON)), now enable the use of water vapor H₂O/HDO pairs to enhance our understanding of weather processes acting on regional and global scales at daily and sub-daily resolution. In this study, we present the latest results of the Water vapor Isotopologue Flask sampling for the Validation Of Satellite data (WIFVOS) project, which aims to provide an effective balloon-borne flask sampler that enables validation of remote sensing H₂O/HDO products at lower cost than conventional aircraft observations. During the most recent WIFVOS field campaign, held in late July and early August 2024 at the TCCON site in Sodankylä, northern Finland, four successful balloon launches were conducted using a newly designed flask sampler. Each flight collected four flask samples during the descent phase, providing humidity and isotopic composition data of water vapor at predefined pressure levels up to 3000 m ASL. To evaluate the quality of the retrieved samples, an RS92 radiosonde was attached to the sampler. Results showed excellent agreement between humidity measured with the radiosonde and water vapor sampled with the flasks (mean absolute error = 484 ppm) and a clear isotopic trend within the troposphere and boundary layer. Two additional low-altitude flights were performed with a drone equipped with a modified, lighter version of the sampler. These low-altitude flights assessed sampling reproducibility, yielding standard deviations (1σ) of 152 ppm, 0.08 ‰ and 0.3 ‰ for mixing ratio, δ18O and δD, respectively (δ vs VSMOW).
Finally, we also provide a brief discussion on the sampler’s potential for measuring other gas species in the lower atmosphere (e.g. CH₄).
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Greenhouse Gas Emissions Retrieval Algorithm Using Satellite Remote Sensing And Bottom-Up Model Estimates At A National-Level In Denmark

Authors: Pau Fabregat, Charlotte Scheutz, Sine Munk Hvidegaard, Andreas Rønne Stokholm, Louise Anne Klotz
Affiliations: Technical University of Denmark
Satellite missions such as GOSAT and Sentinel-5P have enhanced the capabilities for quantifying greenhouse gas (GHG) column concentrations in the atmosphere over the past years, bringing us one step closer to deriving accurate GHG emissions from anthropogenic and natural sources. Nonetheless, a significant effort is still needed to reduce the uncertainty of the measurements when compared to bottom-up model estimates and in-situ measurements (e.g. flux towers) by enhancing the algorithms and processing chains of satellite data. In an effort to build the capability of delivering regional- to national-level GHG emission coverage of Denmark at high temporal and spatial resolution, expertise in Earth Observation (EO) and GHG emissions is bridged between the Sustainability and Space departments at the Technical University of Denmark (DTU). The main objective is the development of state-of-the-art algorithms for processing satellite data, reinforced by expert knowledge of GHG source emitters and physical processes. The project involves combining multiple satellite missions that allow the quantification of methane (CH4) and carbon dioxide (CO2) concentrations with high accuracy, spatial resolution and revisit time. Sentinel-5P is used to quantify the GHG column concentrations in the atmosphere at a rather coarse pixel resolution of approximately 7 km, while Sentinel-2 data is used to increase the spatial resolution of the inferred GHG emissions. Various Sentinel-2 spectral bands and derived land cover products are explored for the spatial resolution enhancement. Moreover, auxiliary data such as meteorological data from the ECMWF Reanalysis v5 (ERA5) dataset are used to refine the GHG emissions derived from the satellite data sources. The GHG emissions derived from satellite data are compared to emissions derived from bottom-up model estimates, providing insights into how uncertainties between data- and model-derived emissions relate to each other.
Furthermore, enhancements to the data processing chain are proposed and tested based on the observed uncertainties and knowledge of local GHG emission sources, and their effect on the estimates is assessed.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Machine Learning for Methane Observation

Authors: Enno Tiemann, Dr. Alexander Kläser, Dr Konrad Heidler, Dr Rochelle Schneider, Prof Xiao Xiang Zhu
Affiliations: Technical University Munich, OHB Digital Connect GmbH, European Space Agency Φ-lab, Munich Center for Machine Learning
Methane (CH₄) is a potent greenhouse gas with a global warming potential 86 times greater than carbon dioxide over a 20-year period, making methane emission mitigation critical for addressing climate change. Traditional methods for methane detection rely on labour-intensive manual checks and adjustments, while machine learning (ML) offers scalable alternatives that can significantly enhance performance. However, the adoption of ML approaches for methane observation remains limited, primarily due to the lack of standardized datasets for model training, comparison and generalization [1]. To address this gap, we propose a novel, globally diverse synthetic dataset and corresponding ML models for methane detection, plume segmentation, and emission rate estimation. Methane observation typically involves three tasks: (1) detecting and quantifying methane column concentrations per pixel, (2) segmenting methane plumes, and (3) estimating emission rates and the source's location. Traditional approaches rely on physical models, such as physical inversion of radiance measurements or the integrated mass enhancement (IME) method, to tackle these tasks. These techniques, while accurate, are computationally intensive and lack scalability. Recent advancements in ML, particularly convolutional neural network (CNN)-based models such as U-Net, have demonstrated potential for faster processing and improved segmentation accuracy. However, most existing ML models are highly task-specific, designed for localized datasets with limited variability in background or sensor characteristics. Consequently, the generalizability and comparability of these approaches are limited. Our work addresses these limitations by introducing a comprehensive synthetic dataset for methane plume detection and analysis, designed to facilitate ML-based generalization across diverse, global scenarios.
This dataset integrates simulated plumes with real satellite imagery to provide ground truth data for training and evaluation. We simulate methane plumes using Gaussian plume models with random perturbations to replicate realistic emission scenarios, incorporating variables such as emission rates and wind speeds. These synthetic plumes are then superimposed onto a large set of globally distributed satellite images from Landsat 8/9 and Sentinel-2, spanning diverse terrains including forests, meadows, sand, and gravel. The use of multispectral sensors enables comparisons across varying spectral properties and sensor specifications. The synthetic dataset employs radiative transfer modelling to simulate methane's impact on radiance and incorporates sensor-specific noise and atmospheric conditions using Beer's Law. This approach ensures a high level of realism while maintaining consistency across locations and sensors. Compared to existing datasets, our methodology offers unparalleled scalability and diversity, addressing a critical bottleneck in methane observation research: the availability of a large-scale, standardized dataset. However, the accuracy of this representation of real plumes may vary due to its sensitivity to atmospheric parameters. To demonstrate the utility of this dataset, we also develop ML models for methane observation. Two modelling strategies are explored: (1) an end-to-end regression model that directly estimates emission rates from Level-1 sensor data, and (2) a modular approach using separate models for plume segmentation and emission rate estimation. For segmentation, we employ U-Net-based architectures, while regression tasks leverage CNNs with advanced features such as attention skip connections. These baselines allow us to systematically evaluate model performance across diverse backgrounds, plume shapes, sensor types, and geographic locations, providing a robust framework for assessing generalizability.
Our contribution establishes a novel dataset along with a fully ML-based methodology as foundational tools for advancing ML-based methane observation. By enabling standardized comparisons and promoting generalizable ML approaches, this work supports the development of scalable, accurate models for methane detection and emission estimation. These advancements have the potential to accelerate the global adoption of ML in methane monitoring, fostering more effective climate change mitigation strategies through improved detection and quantification of methane emissions. [1] Tiemann, Enno, et al. "Machine learning for methane detection and quantification from space-a survey." arXiv preprint arXiv:2408.15122 (2024).
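The synthetic plume generation described above can be sketched under simplifying assumptions: a steady-state, vertically integrated Gaussian plume with an assumed crosswind dispersion growth rate. Names and values below are illustrative, not the dataset's actual parameters.

```python
import numpy as np

def column_plume(q, u, x, y, growth=0.08):
    """Vertically integrated Gaussian plume enhancement (mass per unit area)
    for a point source of strength q (kg/s) at the origin, with wind speed
    u (m/s) blowing along +x. 'growth' sets how fast the crosswind spread
    sigma_y widens downwind (an assumed value, not from the paper)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    sigma_y = growth * np.maximum(x, 1.0)   # crosswind spread (m)
    c = q / (np.sqrt(2.0 * np.pi) * sigma_y * u) * np.exp(-0.5 * (y / sigma_y) ** 2)
    return np.where(x > 0, c, 0.0)          # no plume upwind of the source
```

Mass is conserved in this form: at any downwind distance, integrating the enhancement across the crosswind direction recovers q/u.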
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Data assimilation and emission inversion developments for current and future Copernicus greenhouse gas services at ECMWF

Authors: Luca Cantarello, Nicolas Bousserez, Ernest Koffi, Panagotis Kountouris, Richard Engelen
Affiliations: ECMWF
As part of the activities of the Copernicus Atmosphere Monitoring Service (CAMS), the European Centre for Medium-Range Weather Forecasts (ECMWF) provides daily forecasts and near real-time global analyses of greenhouse gas (GHG, i.e. CO2 and CH4) concentrations using observations from various satellite instruments as input. In addition, ECMWF is also building the global component of the new Copernicus CO2 emissions Monitoring and Verification Support (CO2MVS) capacity for anthropogenic GHG emissions, with the aim of supporting and tracking the progress of countries in implementing the Paris Agreement. This presentation will focus on various data assimilation updates currently being tested for the near real-time GHG operational forecasting suite, some of which are expected to be implemented in the next cycle of the Integrated Forecasting System (IFS). Among others, the impact of a revised background error covariance matrix (initially built using an ensemble data assimilation system) and the assimilation of TROPOMI CH4 retrievals will be discussed, including an evaluation against independent observations; an exploration of the use of the Ensemble of Data Assimilation (EDA) and flow-dependent background errors for GHG is also underway and its results will be shown. Secondly, an overview of the main scientific and technical data assimilation developments carried out at ECMWF in preparation for the CO2MVS will be given. In particular, the new emission inversion system for multiple atmospheric composition species (including both reactive gases and GHG) will be presented. The new system is based on the IFS 4D-Var assimilation algorithm, but makes it possible to jointly optimise the emissions, the atmospheric composition concentrations, and the meteorology.
An evaluation against independent inventories and in-situ observations will be reported, and it might be complemented with the results of some Observing System Simulation Experiments (OSSEs) based on an EDA configuration which has been developed at ECMWF for testing and validation purposes. Lastly, plans to extend the length of the assimilation windows in the emission inversion system beyond 12 or 24 hours to increase the constraint on long-lived species (e.g. CO2) will be introduced.
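For a linear observation operator, the variational analysis underlying such a system reduces to minimising a quadratic cost function. The following toy sketch shows that step in its gain-matrix form; it is not the IFS incremental 4D-Var, and all names are illustrative.

```python
import numpy as np

def variational_analysis(xb, B, y, R, H):
    """Minimiser of the quadratic cost
        J(x) = (x - xb)^T B^{-1} (x - xb) + (y - Hx)^T R^{-1} (y - Hx)
    written in the equivalent gain-matrix form x = xb + K (y - H xb),
    with K = B H^T (H B H^T + R)^{-1}.
    xb: background state, B: background error covariance,
    y: observations, R: observation error covariance, H: linear operator."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)
```

With equal background and observation error variances, the analysis falls halfway between background and observation, which is the expected behaviour of such a weighted estimate.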
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Sub-pixel Cloud Fraction Retrieval Based on the CO2M Multi-Angular Polarimetric Satellite Measurements

Authors: Babak Jahani, Zihao Yuan, Bastiaan van Diedenhoven, Otto Hasekamp, Guangliang Fu, Sha Lu
Affiliations: SRON Netherlands Institute for Space Research, Institute of Environmental Science (CML), Leiden University
The Earth’s atmosphere contains suspended particles and molecules with a wide range of characteristics. Their interaction with radiation (in both the solar and thermal spectral regions) affects the transfer of energy as well as its spatial distribution in the atmosphere, influencing the weather at any moment and the climate in the long term. Multi-Angular Polarimetric (MAP) observations have great potential for quantifying the properties (e.g., size, concentration) of aerosol particles with high accuracy. For this reason, a MAP is included on the Copernicus Carbon Dioxide Monitoring satellite mission (CO2M) to provide a correction of the light path to meet the mission’s stringent requirements for CO2 column retrievals. However, for both trace gas and aerosol retrievals it is essential to filter out any cloud-contaminated measurements, because clouds strongly interact with radiation and cover 60–70% of the Earth’s surface at any given time. The focus of this study is a neural network algorithm designed at SRON for quantifying cloud coverage at the sub-pixel level based on the MAP instrument on CO2M. MAP observations are particularly sensitive to the presence of clouds, as cloud droplets and ice crystals produce pronounced and distinct angular features in polarized radiances. This algorithm is an adaptation of an approach initially developed at SRON for the POLDER (POLarization and Directionality of Earth Reflectances) instrument onboard the PARASOL (Polarisation and Anisotropy of Reflectances for Atmospheric Science coupled with Observations from a Lidar) platform [1]. It is trained on an extensive synthetic measurement dataset and quantifies the sub-pixel cloud fraction. Here, we will provide information on the algorithm’s performance at different cloud coverage levels and as a function of surface and cloud properties.
We particularly focus on the detection of thin clouds and/or those with low cloud fraction that are generally challenging to detect with more traditional instruments. We will outline the detection and application limits of the technique and discuss the suitability of the approach for augmenting the cloud detection capabilities of the CO2M mission. [1] Yuan, Z., Fu, G., van Diedenhoven, B., Lin, H. X., Erisman, J. W., and Hasekamp, O. P.: Cloud detection from multi-angular polarimetric satellite measurements using a neural network ensemble approach, Atmos. Meas. Tech., 17, 2595–2610, https://doi.org/10.5194/amt-17-2595-2024, 2024.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Carbon-I, a NASA Earth System Explorer Mission concept for Greenhouse Gas Observations

Authors: Christian Frankenberg
Affiliations: Caltech
The past two decades have seen tremendous improvements in greenhouse gas (GHG) remote sensing from space, including global area flux mappers like SCIAMACHY, GOSAT, OCO-2, GOSAT-2, TROPOMI and OCO-3 among others, with more missions planned, such as CO2M. In the past decade there has also been an increase in GHG point source imagers, a field that has grown rapidly after initial successes using AVIRIS-NG and subsequent VSWIR spectrometers (coarser spectral resolution over a broader range). While area flux mapper missions have been effective at measuring GHG with high accuracy, fundamental gaps persist in the humid tropics, where data yields are 2–3 orders of magnitude lower than elsewhere. Here, we discuss the Carbon Investigation (Carbon-I), which was selected for a Phase A mission concept study within NASA’s Earth System Explorer call. Carbon-I provides a unique combination of global land coverage, high spatial resolution, and very high sensitivity required to quantify CH4, CO2, and CO emissions at both the area and point source scale. Given the importance of the tropics for global carbon budgets and in particular natural methane emissions, Carbon-I specifically targets the remaining data and knowledge gaps within the tropics, enabling a step change in our capabilities of observing the tropics with more even temporal and spatial sampling. Carbon-I is a single-band spectrometer covering the 2040 to 2380 nm spectral range with 0.7 nm spectral sampling and spatial sampling ranging from ~30 m in a local target mode to ~300 m for global land mapping. We will discuss the sweet spot in the tradeoff between spatial and spectral resolution, the multi-species trace gas capabilities (CH4, CO2, CO, HDO, H2O, N2O, potentially ethane), the capabilities to measure GHG area fluxes and point sources while minimizing surface interferences, and our approach to account for atmospheric scattering effects.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Measuring natural methane fluxes with the French-German Methane Remote Sensing LIDAR Mission MERLIN

Authors: Sabrina Zechlau, Philippe Bousquet, Gerhard Ehret, Philippe Landiech, Achim Friker, Carole Daniel, Matthias Alpers, Cyril Crevoisier, Andreas Fix
Affiliations: Ludwig Maximilians-Universität München (LMU), Deutsches Zentrum für Luft- und Raumfahrt (DLR), Laboratory for Sciences of Climate and Environment (LSCE), Centre National d'Ètudes Spatiales (CNES), Laboratoire de Météorologie Dynamique (LMD)
Methane is known to be the second largest anthropogenic contributor to greenhouse-gas-induced warming after carbon dioxide. Despite that fact, the global methane budget remains highly uncertain, with large differences between top-down and bottom-up approaches. This is because much less is known about methane sources and sinks on temporal and spatial (global/regional) scales, and about their sensitivity to global warming, than for carbon dioxide. Natural emissions of methane from permafrost and from the large number of wetlands, lakes, and rivers are expected to increase substantially during this century due to rapid climate warming. Therefore, disentangling natural and anthropogenic methane fluxes has high scientific priority. However, natural methane fluxes and other diffuse methane sources require unbiased measurements with good coverage in order to measure the small concentration gradients and deduce the underlying fluxes. The French-German Methane Remote Sensing LIDAR Mission MERLIN, jointly developed by CNES and DLR, is designed to measure atmospheric methane columns with high accuracy and is therefore able to close this knowledge gap on natural emissions from space in the top-down approach. Hence, MERLIN is expected to lead to a better quantification of global and regional sources and sinks, aiming at reducing uncertainties in the global methane budget. To accomplish this, MERLIN will rely on its Integrated Path Differential Absorption (IPDA) lidar to access the methane atmospheric column concentration (XCH4) at all latitudes and in all seasons. In particular, MERLIN will be able to provide spaceborne methane observations also in regions with high cloud cover and at high latitudes in wintertime and in the so-called shoulder seasons.
The IPDA measurements are insensitive to ground albedo variations and atmospheric aerosol load and will therefore provide highly valuable information for closing knowledge gaps concerning global to regional methane distributions, and for informing European policies on climate change mitigation. The launch of the MERLIN satellite is expected in 2029. In this presentation, we will provide an overview of the MERLIN mission and discuss its expected performance, together with the plans and needs for validation activities.
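The IPDA principle mentioned above can be written compactly. The sketch below is the generic textbook relation, not MERLIN's operational processing chain: the differential absorption optical depth (DAOD) follows from energy-normalised ground returns at the on-line and off-line wavelengths, and XCH4 is obtained by dividing by a precomputed integrated weighting function (here just an assumed scalar).

```python
import math

def ipda_daod(p_on, p_off, e_on, e_off):
    """Generic IPDA relation: DAOD = 0.5 * ln((P_off * E_on) / (P_on * E_off)),
    where P are received ground-return powers and E the transmitted pulse
    energies at the on-line and off-line wavelengths."""
    return 0.5 * math.log((p_off * e_on) / (p_on * e_off))

def xch4_from_daod(daod, iwf):
    """Column-averaged dry-air mole fraction from the DAOD and an integrated
    weighting function (IWF); a scalar stand-in for the real pressure-weighted
    integral."""
    return daod / iwf
```

Because only the ratio of the two returns enters, multiplicative factors common to both wavelengths (surface albedo, aerosol transmission) cancel, which is the insensitivity the abstract refers to.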
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: F.02.07 - POSTER - Essential Agricultural Variables: Building Blocks for Global Agriculture Monitoring and Policy Support

The growing global challenges of food security, climate change, and disaster risk reduction require more than just data—they demand coordinated, actionable insights. GEOGLAM (Group on Earth Observation Global Agricultural Monitoring), a G20-endorsed international community of practice, plays a pivotal role in this mission by harnessing Earth Observation (EO) technologies to enhance decision-making processes in agriculture and food security.
In early 2019, GEOGLAM's international partners developed the “Community Research and Development Agenda for GEOGLAM: Operational Research and Development,” a foundational document that identified key variables crucial for agricultural production forecasting. This effort catalyzed the creation of the Essential Agricultural Variables (EAVs).
EAVs help integrate EO data with global policy frameworks such as the UN Sustainable Development Goals, the Paris Agreement on Climate Change, and the Sendai Framework for Disaster Risk Reduction. EAVs empower stakeholders to translate policy into data needs, understand data gaps, and make informed decisions that address critical global challenges.
Significant progress has been made since the initial release of the EAVs in May 2022. These efforts have focused on refining and implementing EAVs across diverse agricultural contexts. The CEOS LSI-VC working group has played a pivotal role in advancing the EAV concept and getting support from different space agencies across the world. This session will explore the evolution of EAVs, highlighting examples such as cropland and crop type mapping, evapotranspiration (ET), and surface water monitoring, and will discuss the future potential of EAVs in enhancing global agricultural monitoring.
For more detailed information on the EAV framework, visit https://agvariables.org/
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Empirical Rice Yield Forecasting Models for Eastern Spain Using Remote Sensing.

Authors: Fàtima Della Bellver, Belen Franch Gras, Italo Moletto-Lobos, Javier Tarín-Mestre
Affiliations: Global Change Unit, Parc Científic, Universitat de València, Department of Geographical Sciences, University of Maryland
There is a clear global challenge to improve food production methods, making them more sustainable without compromising production capacity, and even enhancing it. New agricultural technologies that align with emerging environmental challenges are essential. In this context, rice is one of the most important crops worldwide, feeding nearly half of the global population. In Europe, the situation is critical, as the rice cultivation area has decreased by 16.55% and production has fallen by 24.86% since 2020 [1]. This, combined with political restrictions on exports from major producing countries, has led to a rise in rice prices compared to 2019. The FAO rice price index recorded a 41.22% increase for Japonica rice and a 27.15% increase for Indica rice in 2023 [1]. There is an urgent need for new technologies that can support the improvement of agricultural production. Remote sensing has emerged as a valuable tool in the technological advancement of agriculture. Within this framework, the objective of this research is to anticipate the final rice yield in several fields of l’Albufera de València at each of the most critical phenological phases of rice, making the prediction as early as possible. Sentinel-2 imagery is atmospherically corrected using the LaSRC algorithm from NASA, and cloud masking from Sen2Corr, as implemented in the Sen2Like processor, is applied. The investigation was carried out in L’Albufera de València and Amposta, Spain, where ground-truth yield data from local rice producers are available. The Bomba and JSendra varieties are analysed over the 2017–2022 period. The results obtained by Franch et al. (2021) [2] demonstrated that JSendra and Bomba differ considerably in spectral behavior and phenology, which implies that a separate yield model must be developed for each variety.
A temporal normalization of the spectral surface response is carried out, considering the cultivation cycle of each plot. For this we use growing degree days (GDD), as they are related to the phenological cycle of rice. Next, the spectral responses from Sentinel-2 are analyzed and integrated into linear models, allowing us to choose the best-performing band combinations. The best combinations are used to build the RyOI (Rice yield Optical Index). Finally, several traits of the RyOI are used to train the linear models which forecast the rice yield. The findings indicate that linear models can help in yield prediction; the best models show an R² of 0.8 for Bomba and 0.5 for JSendra, both at the peak of the season. These findings highlight the potential of linear models for yield forecasting and their role in advancing precision agriculture. By integrating remote sensing with modeling techniques, this study provides valuable insights into optimizing rice production while addressing environmental challenges, thereby promoting sustainability in agriculture. [1] FAO. Rice Price Update, FAO, Food and Agriculture Organization of the United Nations, available online: https://www.fao.org/markets-and-trade/commodities/rice/fao-rice-price-update/en/ [2] Franch, B.; Bautista, A.S.; Fita, D.; Rubio, C.; Tarrazó-Serrano, D.; Sánchez, A.; Skakun, S.; Vermote, E.; Becker-Reshef, I.; Uris, A. Within-Field Rice Yield Estimation Based on Sentinel 2 Satellite Data. Remote Sens. 2021, 13, 4095. https://doi.org/10.3390/rs13204095 [3] Franch, B.; Vermote, E.; Skakun, S.; Santamaria-Artigas, A.; Kalecinski, N.; Roger, J.-C.; Becker-Reshef, I.; Barker, B.; Justice, C.; Sobrino, J.A. The ARYA crop yield forecasting algorithm: Application to the main wheat exporting countries. International Journal of Applied Earth Observation and Geoinformation. 2021, 104, 102552. https://doi.org/10.1016/j.jag.2021.102552
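The GDD-based temporal normalisation described in the abstract accumulates thermal time rather than calendar time. A minimal sketch follows; the base temperature of 10 °C is a common choice for rice, assumed here rather than taken from the study.

```python
def cumulative_gdd(tmax, tmin, t_base=10.0):
    """Cumulative growing degree days from daily max/min air temperatures:
    each day contributes max(((Tmax + Tmin) / 2) - Tbase, 0)."""
    total = 0.0
    series = []
    for hi, lo in zip(tmax, tmin):
        total += max((hi + lo) / 2.0 - t_base, 0.0)  # no negative contributions
        series.append(total)
    return series
```

Expressing each plot's spectral time series against its own cumulative GDD, rather than the calendar date, aligns observations from plots sown on different dates to a common phenological axis.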
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Crop biophysical and yield information estimation using time series of Sentinel-2 data

Authors: Mr. Feng Yin, Professor Mathias Disney, Professor Philip Lewis, Professor Richard Pywell, Dr. John W. Redhead
Affiliations: University College London (UCL), National Centre for Earth Observation (NCEO), UK Centre for Ecology & Hydrology (UKCEH)
We present a new method for estimating crop biophysical parameters from Earth Observation (EO) data using a dynamic crop canopy radiative transfer (RT) model, called archetype. This model extends the PROSAIL RT model by incorporating an empirical first-order model that captures the temporal variation of PROSAIL parameters. This allows the archetype model to describe the spectral (400–2500 nm) and biophysical dynamics of crop canopies throughout the growth season. Using an ensemble framework with a weighted K-nearest neighbour method, we can simultaneously retrieve time series of all PROSAIL biophysical parameters and their associated uncertainties. The model was developed using a large dataset of Sentinel-2 observations of maize from Northeast China in 2019, under the assumption of smooth temporal variation and archetypal coordination of biophysical parameters for specific crops. The processing workflow involves mapping reflectance to biophysical parameters using an inverse model operator, aligning the parameters to a consistent time frame via a double logistic model of leaf area index (LAI), and deriving the archetype as median values from synchronised samples. We validated the archetype-retrieved parameters against ground measurements collected near Munich, Germany (2017–2018). Validation results show correlation values > 0.8 for LAI and leaf brown pigment content (Cbrown), with an RMSE of 0.94 m²/m² for LAI and 0.15 for Cbrown. The chlorophyll content (Cab) and canopy water content (Cw) were retrieved at a higher level of accuracy, with correlation values around 0.9 and an RMSE of 6.59 µg/cm² for Cab and 0.03 g/cm² for Cw. Comparison with forward-modelled hyperspectral reflectance revealed that retrieved parameters explained ~90% of canopy reflectance variability, with an RMSE of ~0.05 in reflectance units.
To demonstrate its utility, we applied archetype-derived parameters to county and field-level crop yield estimation in the UK and US. The yield estimation model is calibrated using county level (i.e. spatially aggregated) yield statistics, but is able to estimate both county and field level yield in the absence of field-level yield data. County level validation results showed RMSE values of 0.68–0.83 t/ha (6–8%) and correlations of 0.90–0.94 for US maize, and RMSE values of 0.40–0.56 t/ha (5–7%) with correlations of 0.78–0.84 for UK winter wheat. At the field level, RMSE values were 1.63–1.99 t/ha (13–16%) with correlations of 0.65–0.72 for US maize and 1.12–1.57 t/ha (12–18%) with correlations of 0.71–0.76 for UK winter wheat. We also show that the archetype model approach can be used to successfully make in-season yield predictions for US maize and UK winter wheat. We show that a key benefit of the archetype model approach is that biophysical parameter estimation can be continuously updated by adding new Sentinel-2 observations as they become available. This approach provides accurate field-level yield predictions 4–5 months before the end of the harvest, comparable to using all observations. This 4–5 month lead time coincides with the period when the World Bank Commodity Price Index shows its highest volatility. Predictive ability for a yield model of this kind can therefore provide valuable information for financial decision making related to crop production.
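The double logistic LAI model used in the workflow to align retrievals to a common time frame has the standard form sketched below: a green-up sigmoid minus a senescence sigmoid. Parameter names and test values are illustrative; the study's fitted values are not reproduced here.

```python
import numpy as np

def double_logistic_lai(t, lai_max, t_up, r_up, t_down, r_down):
    """LAI(t) = lai_max * (1 / (1 + exp(-r_up * (t - t_up)))
                         - 1 / (1 + exp(-r_down * (t - t_down)))),
    with t in day of year, t_up/t_down the inflection dates of green-up
    and senescence, and r_up/r_down the corresponding slopes."""
    t = np.asarray(t, float)
    green_up = 1.0 / (1.0 + np.exp(-r_up * (t - t_up)))
    senescence = 1.0 / (1.0 + np.exp(-r_down * (t - t_down)))
    return lai_max * (green_up - senescence)
```

Fitting this curve to sparse LAI retrievals gives each field a smooth trajectory whose inflection dates can be used to synchronise samples across fields before deriving the archetype medians.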
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Transfer Learning for National-Scale Crop Field Delineation in Sub-Saharan Africa using Pan-Sharpened SPOT 6/7 Data

Authors: Philippe Rufin, Leon-Friedrich Thomas, Sá Nogueira Lisboa, Natasha Ribeiro, Almeida Alberto Sitoe, Dr Patrick Hostert, Patrick Meyfroidt
Affiliations: UCLouvain, Humboldt-Universität zu Berlin, University of Helsinki, Eduardo Mondlane University, Nitidæ, F.R.S. - FNRS
Navigating social, economic, and environmental trade-offs to build sustainable agriculture requires fine-grained monitoring of essential agricultural variables (EAV) worldwide. In sub-Saharan Africa, however, the design of science-based policies to improve the sustainability of agriculture is challenged by limited data on fundamental properties such as the distribution of active cropland, crop types, and field boundaries across large regions. Here, we integrate very high spatial resolution SPOT 6/7 data and state-of-the-art deep learning methods to derive detailed crop field delineations across large spatial extents. We based our approach on a pre-trained Residual U-Net (FracTAL ResUNet), fine-tuned using quality-filtered pseudo-labels to minimize the requirements for costly, human-annotated reference data. We deployed the fine-tuned model on image mosaics of pan-sharpened SPOT 6/7 data (1.5 m spatial resolution) from the Airbus OneAtlas Living Library for the target years 2017 and 2023. The results of our work are the first national-level datasets of individual fields for Mozambique (covering ~800,000 km²) for both years. The field delineation model discriminates cropland from non-crop land cover and thus operates independently of ancillary cropland masks. The model is trained to minimize omission errors, leading to comparatively high rates of commission errors. To account for this issue, we subsequently implemented a machine-learning-based error removal based on multiple field-level attributes (e.g. prediction confidence, size, and geometry), as well as landscape-level attributes (e.g. multi-scale prediction confidence). For validation, we compared predictions with a set of human-labeled land cover samples and field boundaries. We obtained high class-specific user's and producer's accuracies and mean field-level intersection over union (IoU) scores of around 0.7, representing an increase of more than 0.1 compared to the pre-trained model.
Up to 60% of the fields reached IoU scores above 0.8, in line with the state of the art in field delineation for smallholder landscapes reported in previous local-scale studies. Our approach allowed us to overcome key challenges, including highly fragmented landscapes with very small (< 0.02 ha) fields, pronounced heterogeneity in land management, and steep gradients in climate, topography, and land use intensity. The associated field-level uncertainty estimates permit users to balance omission and commission errors to match the accuracy requirements of the intended downstream applications. The assessment of change between 2017 and 2023 is currently challenged by the inconsistent distinction between active and fallow cropland across the two years, requiring further consistency improvements for assessments of cropland change, land consolidation, and land fragmentation processes. The presented approach opens new research avenues towards operational production of the field boundary EAV in sub-Saharan Africa, particularly due to its low reference data requirements and the resulting ability for highly efficient transfer across heterogeneous geographies.
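For readers unfamiliar with the IoU scores quoted above, the metric can be sketched in a few lines of Python; the pixel-set representation and toy values below are illustrative only, not the authors' implementation:

```python
def iou(pred, ref):
    """Intersection over Union between two rasterized fields,
    each given as a collection of (row, col) pixel coordinates."""
    pred, ref = set(pred), set(ref)
    union = pred | ref
    if not union:
        return 1.0  # both empty: treat as perfect agreement
    return len(pred & ref) / len(union)

def mean_iou(pairs):
    """Mean IoU over matched (predicted, reference) field pairs."""
    scores = [iou(p, r) for p, r in pairs]
    return sum(scores) / len(scores)

# Two toy 'fields' as pixel coordinate lists
pred = [(0, 0), (0, 1), (1, 0), (1, 1)]
ref = [(0, 1), (1, 1), (1, 2), (0, 2)]
print(iou(pred, ref))  # 2 shared pixels / 6 in the union ≈ 0.333
```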
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: The Use of Remote Sensing to Study the Progression of the Mealybug Pest in Citrus Orchards in Valencian Province Within the Co-Fruit AGROALNEXT Project.

Authors: Fàtima Della Bellver, Belen Franch Gras, Alberto San Bautista, Constanza Rubio, Italo Moletto-Lobos, Cesar Guerrero-Benavent
Affiliations: Global Change Unit, Parc Científic, Universitat de València (Paterna), 46980, Spain, Departamento de Producción Vegetal, Universitat Politécnica de València, Centro de Tecnologías Físicas, Universitat Politécnica de València
The Cotonet de les Valls mealybug is an insect pest proliferating in citrus orchards in Castellón, Spain, leading to significant economic losses in the agricultural sector. The European Copernicus program has facilitated the development of various agricultural monitoring tools using remote sensing technology. This study aims to analyze how surface reflectance is affected by the presence of the mealybug pest through remote sensing data. The Sen2Like (version 4.3.0) processor is used to retrieve the images, with the atmospheric correction of Sen2Corr (version 3.1.0). Shadows due to angular effects can influence the spectral responses, especially given the complex morphology of the trees, so the Bidirectional Reflectance Distribution Function (BRDF) [1] is employed to minimize those effects and to reduce the detected seasonality in the spectral responses. The research analyzes the spectral response of around 70 fields distributed across the province of Castellón. The mealybug levels have been determined through monthly captures from 2020 to 2023. The aim is to explore the relationship between cotonet infestation and various optical bands, highlighting RED, NIR, and SWIR, as well as the Normalized Difference Vegetation Index (NDVI). Correlations have been identified between mealybug levels and changes in the spectral response and its anomalies with respect to the year 2018, particularly in the SWIR channel and in NDVI; both stand out for being capable of detecting mealybug presence from June onwards. The results indicate that remote sensing data can be useful in the efficient and objective management of the mealybug pest. In particular, specific spectral ranges, especially the SWIR, are effective in distinguishing between affected and healthy fields year-round, with significant differentiation observed in the latter half of the year.
Overall, this study contributes to the development of innovative surveillance tools for combating agricultural threats effectively and sustainably. [1] Franch B, Vermote E, Skakun S, Roger J-C, Masek J, Ju J, Villaescusa-Nadal JL, Santamaria-Artigas A, “A Method for Landsat and Sentinel 2 (HLS) BRDF Normalization”, Remote Sensing, vol. 11, pp. 632, 2019, doi: 10.3390/rs11060632
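The NDVI and the anomalies relative to the 2018 baseline follow the standard definitions; a minimal sketch with made-up reflectance values (not the study's data):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and RED reflectance."""
    return (nir - red) / (nir + red)

def anomaly(value, baseline):
    """Deviation of an observation from its reference-year baseline."""
    return value - baseline

# Toy June values for one field: 2018 baseline vs. a stressed year
base_2018 = ndvi(nir=0.40, red=0.08)   # healthy canopy
obs = ndvi(nir=0.30, red=0.12)         # reduced NIR, raised RED under stress
print(round(anomaly(obs, base_2018), 3))  # -0.238
```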
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: A Multi-Year Pan-European Crop Type Map at 10 m Resolution: Introducing a New HRL Copernicus Product

Authors: Kasper Bonte, Kristof Van Tricht, Bert De Roo, Jeroen Degerickx, Laurent Tits, Roel Van Hoolst, Ludvig Forslund, André Stumpf
Affiliations: VITO, EEA, GAF
Agricultural land accounts for nearly one-third of Europe’s landscape and represents one of the most dynamic land uses due to crop rotation. Mapping the multi-annual variation of cropland at the European scale is crucial for understanding the status of agricultural systems and their temporal dynamics. Such insights are not only valuable for assessing agricultural productivity and sustainability but also for supporting the implementation and evaluation of key agricultural policies, including the Common Agricultural Policy (CAP). By providing consistent, large-scale information, these efforts contribute to the effective monitoring of Europe’s agricultural landscape. In this context, the new Copernicus High Resolution Layer (HRL) Crop Type product represents a critical step forward. This product provides pan-European, high-resolution (10 m) data on crop types, offering annual updates from 2017 onward, with production currently guaranteed through 2024. In this presentation, we will dive into this new HRL Crop Type product, developed within the HRL Vegetated Land Cover Characteristics (VLCC) project. This innovative product represents the first operational approach to crop type identification at a pan-European scale, classifying up to 17 distinct crop classes, including both perennial and annual crop types. The classification methodology consists of a supervised transformer-based feature extractor followed by a downstream classification head. This approach directly processes pre-processed time series data from Sentinel-1 SAR and Sentinel-2 optical imagery, complemented by topography and meteorological datasets. The transformer architecture leverages an attention mechanism to autonomously identify and extract relevant features from the inputs, eliminating the need for manual feature engineering.
To ensure robust predictions across diverse European landscapes, a stratified dataset of over one million training samples was developed, drawing on harmonized parcel declarations (GSAA) and LUCAS data. The classification model provides a probabilistic output for each of the 17 classes at 10 m resolution, assigning the most probable crop type class to each pixel. The entire workflow has been operationalized using OpenEO, enabling scalability and efficiency. Additionally, the classification results undergo spatial cleaning and temporal harmonization to enhance both spatial and temporal consistency. For cases where classification confidence is low, the product includes separate categories for unclassified arable and permanent crops, leading to a final product with 19 classes. Finally, this presentation will emphasize the most critical applications of the HRL Crop Type product, particularly its relevance to the CAP. Additionally, insights into the product's accuracy and inherent limitations will be shared, offering a balanced perspective on its capabilities and areas for further improvement. The presentation will close with an outlook on expected product updates for the coming years.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Comparison and validation of spatial Reference Evapotranspiration datasets over Africa

Authors: Suzan Dehati, Bich Tran, Poolad Karimi, Marloes Mul
Affiliations: IHE Delft Institute For Water Education, Wageningen University & Research, Delft University of Technology
Reference Evapotranspiration (RET) is one of the Essential Agricultural Variables used in water resource and agricultural planning and management. RET is often calculated using meteorological data from ground-based weather stations; however, regions like Africa and the Near East have limited ground-based meteorological data, and global RET products derived from climate reanalysis and earth observation data are increasingly employed to fill the gap. It is therefore imperative to evaluate the performance of these RET data products. This study evaluates the performance of eight open-access spatial RET products against RET calculated using the FAO-56 Penman-Monteith method with data from 173 meteorological stations of the Trans-African Hydro-Meteorological Observatory (TAHMO) network for the period 2018 to 2022. Seven of the selected RET products are based on the FAO-56 Penman-Monteith equation, and one product is based on the Slob-de Bruin approach, which uses satellite data on shortwave radiation to estimate RET. The performance of the products was assessed using statistical metrics: the coefficient of determination (R²), Bias, Root Mean Square Error (RMSE), and Relative Bias (RBias). Findings reveal that high-resolution RET products, such as MSG and Bristol, generally demonstrate stronger alignment with ground-based estimates than lower-resolution RET products, particularly in temperate and tropical climates. In contrast, the performance of most products decreases in arid and semi-arid regions, except for FEWSNET. The variance of RET between the different products is largest in these areas (with MSG showing a negative Bias and all other RET products a positive Bias, with ERA5, WaPORv2, WaPORv3 and GEOS5 showing a large positive Bias). The choice of RET product will therefore affect the agricultural management of some of Africa’s largest irrigation schemes located in this climate class, as it depends on RET for estimating crop water requirements.
On the other hand, the performance of RET data products improves with increasing elevation. The study shows that the quality of the input data has more influence on RET performance than the model implementation. It also illustrates the challenges of validating coarse-spatial-resolution data (the highest-resolution product, MSG, is 5 km, compared with the coarsest, FEWSNET, at 100 km) with ground-based data.
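The four validation metrics named above can be reproduced as follows; note that some studies define R² as the squared Pearson correlation rather than the coefficient of determination used here, and the sample values are invented:

```python
import math

def metrics(obs, est):
    """R², Bias, RMSE and Relative Bias of estimates against observations."""
    n = len(obs)
    bias = sum(e - o for o, e in zip(obs, est)) / n
    rmse = math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / n)
    mean_o = sum(obs) / n
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    r2 = 1 - ss_res / ss_tot        # coefficient of determination
    rbias = bias / mean_o
    return {"R2": r2, "Bias": bias, "RMSE": rmse, "RBias": rbias}

obs = [3.0, 4.0, 5.0, 6.0]   # station-based RET (mm/day), toy values
est = [3.2, 4.1, 5.3, 6.4]   # gridded product, toy values
print(metrics(obs, est))
```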
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Cropland and Crop-Type Specific Product Fusion for Global Monitoring and Yield Forecasting

Authors: Francesco Collivignarelli, Michele Meroni, Filip Sabo, Dr Hervé Kerdiles, Felix Rembold
Affiliations: EC-JRC
Operational yield forecasting systems frequently leverage meteorological gridded data and remote sensing techniques. In data-driven approaches, official yield statistics are modelled using environmental drivers, e.g., precipitation, or biomass proxies, such as the Fraction of Absorbed Photosynthetically Active Radiation (FPAR). To extract these indicators at the appropriate spatial scale, the use of auxiliary data is essential. Common approaches rely on mask layers, which allow only the target (agriculture) to be considered, discarding other land cover types. However, the available masks are mostly non-crop-specific. Efforts made in this regard have led to satisfactory results for major crops of global importance, such as wheat, corn, soybean, and rice, or for groups of crops. Although these products cover a significant part of subsistence agriculture, in many countries the most important staple crops are different. To fill this gap, we produced a dataset of 25 global crop-specific Area Fraction Images (AFI), derived from existing data. The methodology relies on selecting the best available product by comparison with national-level crop areas from FAOSTAT. The input data include crop-type-specific products derived using remote sensing, covering major global crops; for minor crops, we relied on low-resolution (10 km) crop-specific products based on allocation models (SPAM). To improve their spatial detail, we carried out a refinement using a higher-resolution (500 m) non-crop-specific AFI, under the assumption that the crop-specific proportion specified by the allocation model in each cell can be homogeneously distributed over the higher-resolution non-crop-specific cells. This approach allows for the creation of high-resolution crop-specific AFIs, which are crucial for accurate yield forecasting. The resulting global crop-specific AFIs are used in the framework of a global yield forecast project, carried out by the University of Valencia, FAO-GIEWS, and EC-JRC.
Out of a total of 4850 nation-crop pairs globally, more than half (2850) showed an absolute deviation against FAOSTAT within 33%, while the majority (4170) were considered fit for purpose, as they met the minimum requirements set by experts. Our dataset provides a valuable resource for researchers, policymakers, and agricultural stakeholders, enabling them to develop more accurate yield forecasting models and inform decision-making processes. By leveraging this dataset, we aim to improve the accuracy and reliability of yield forecasting, ultimately contributing to more efficient agricultural practices and better food security. The dataset and methodology presented in this poster can be used to address a notable need in crop-specific yield forecasting, providing insights for agricultural science and contributing to informed policymaking and decision-making, particularly for regions dependent on crops that are not typically included in global assessments.
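The homogeneous-distribution assumption used for the SPAM refinement can be sketched as follows, assuming equally sized fine cells and toy numbers:

```python
def disaggregate(crop_area, cropland_fracs):
    """Distribute a coarse-cell crop area over equally sized fine cells,
    in proportion to each cell's (non-crop-specific) cropland fraction."""
    total = sum(cropland_fracs)
    if total == 0:
        return [0.0] * len(cropland_fracs)
    return [crop_area * f / total for f in cropland_fracs]

# Coarse cell holding 0.9 units of a minor crop's area; four fine cells
# with non-crop-specific cropland fractions 0.8, 0.4, 0.0 and 0.6
fracs = disaggregate(0.9, [0.8, 0.4, 0.0, 0.6])
print([round(f, 2) for f in fracs])  # [0.4, 0.2, 0.0, 0.3]
```

The crop area is conserved by construction: the fine-cell shares always sum back to the coarse-cell total.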
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Optimizing Hybrid Models for Canopy Nitrogen Mapping from Sentinel-2

Authors: Emma De Clerck, Dávid D.Kovács, Dr. Katja Berger, Dr. Martin Schlerf, Dr. Jochem
Affiliations: Universitat de València
Background: Canopy Nitrogen Content (CNC) is a critical indicator of plant health, nutrient availability, and agricultural productivity. Nitrogen plays a key role in photosynthesis, growth, and yield formation, making its precise monitoring essential for sustainable agriculture. While hyperspectral models for CNC estimation have shown high potential, they are often limited by their inability to provide continuous, large-scale spatial coverage. In contrast, Sentinel-2's high-resolution multispectral imaging, with its broad global coverage and accessibility, offers a valuable alternative for large-scale CNC estimation. Although the spectral bands of multispectral sensors like Sentinel-2 do not offer the same level of spectral detail as hyperspectral sensors, they still provide critical information that can be used to estimate CNC across agricultural regions. With the growing pressures of climate change, population growth, and food security concerns, leveraging multispectral Earth Observation (EO) data for reliable, scalable CNC monitoring is increasingly important for effective decision-making, policy support, and resource management. Objective: This study developed a scalable and optimized method for estimating CNC using Sentinel-2 data. By integrating radiative transfer models with machine learning techniques such as Gaussian Process Regression (GPR) and Active Learning, we propose a novel approach for reliable nitrogen content estimation at a regional scale. The study contributes to the ongoing efforts to support agricultural monitoring, food security, and policy decision-making. Methods: We developed a hybrid modelling framework that combines PROSAIL-PRO simulations with machine learning algorithms to estimate CNC. Specifically, we employed the Euclidean distance-based diversity (EBD) Active Learning algorithm for efficient data sampling, ensuring diverse and representative training data.
Two models were tested: a protein-based model (Cprot-LAI) and a chlorophyll-based model (Cab-LAI), utilizing Sentinel-2’s 10 m and 20 m spatial resolution bands. These models were trained and validated across multiple agricultural sites to ensure robustness across various cropping systems. The resulting models were integrated into Google Earth Engine and OpenEO for easy access and scalability, supporting global application. Results: The GPR models demonstrated strong predictive performance, outperforming other machine learning methods, including kernel ridge regression and neural networks. Validation results showed good accuracy, with NRMSE values of 16.76% and 18.74% for the protein- and chlorophyll-based model, respectively, and R² values of 0.47 and 0.51. The models exhibited high consistency across independent validation sites, with R² values of 0.58 and 0.71, and NRMSEs of 21.47% and 20.17%, for the Cprot-LAI and Cab-LAI model, respectively. The methodology was applied to the Iberian Peninsula, generating CNC maps with relative uncertainty below 30%, demonstrating the potential for global agricultural applications. Furthermore, the models were validated over two growing seasons, allowing for the analysis of seasonal nitrogen dynamics. Conclusion: This research demonstrates the potential of Sentinel-2 data combined with machine learning techniques to estimate CNC at a regional scale, providing valuable insights into agricultural productivity and policy support. The developed models can aid in identifying nutrient deficiencies, optimizing fertilizer usage, and assessing crop health, which are crucial for enhancing agricultural productivity while minimizing environmental impacts. By integrating this approach into open-source platforms like Google Earth Engine and OpenEO, the methodology offers scalability for global agricultural monitoring and policy support.
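The NRMSE values reported above can be computed as below; normalization conventions vary between studies (observed range vs. mean), so this sketch assumes range normalization and uses invented values:

```python
import math

def nrmse(obs, est):
    """RMSE normalized by the observed range, expressed in percent."""
    n = len(obs)
    rmse = math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / n)
    return 100.0 * rmse / (max(obs) - min(obs))

obs = [1.0, 2.0, 3.0, 4.0]   # reference CNC values (toy numbers)
est = [1.1, 2.2, 2.9, 4.2]   # model estimates (toy numbers)
print(round(nrmse(obs, est), 2))  # 5.27
```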
This tool can support sustainable agricultural practices, contribute to food security, and align with international climate and sustainability goals, making it an important asset for decision-makers in agriculture worldwide.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Integrating Sentinel-1 and Sentinel-2 Data for Wheat and Rice Yield Prediction in the Nile Delta

Authors: Javier Tarín-Mestre, Belen Franch, Italo Moletto-Lobos, Fàtima Della-Bellver, César José Guerrero-Benavent, Katarzyna Cyran, Ahmed El Baroudy, Zoltan Szantoi
Affiliations: Global Change Unit, Image Processing Laboratory, Universitat De València, Department of Geographical Sciences, University of Maryland, Faculty of Agriculture, Tanta University, Climate, Science and Applications, European Space Agency
Climate change is a challenge for all sectors, especially agriculture. Rising temperatures in Africa affect agricultural production, making it necessary to develop variables that describe agriculture, such as yield forecasts. The EO Africa project aims to sustainably adopt Earth Observation (EO) and related space technology in Africa. Our contribution to this project was to use Sentinel-2 (S2) optical data to monitor wheat and rice yield between 2016 and 2023 in the Gharbia governorate (Egypt). We applied a field-scale yield model for each crop type, both trained in Spain, and studied their transferability to this region by evaluating their performance against field-level yield data collected in Egypt. For wheat, we also tested an alternative model that integrates Sentinel-1 (S1) data fusion with S2. Growing Degree Days (GDDaccum) were used to enhance transferability across seasons and regions. The field-level wheat fusion model showed slightly better results with forecasts in April (one month prior to harvest), with R2 = 0.55 and RMSE = 0.5 t/ha, against R2 = 0.40 and RMSE = 0.8 t/ha for the optical model. Rice showed an error of 0.8 t/ha and R2 = 0.38 in July (two and a half months before harvest). Both optical models needed a scale factor to correct for the different average yields across regions. To evaluate the models at the governorate level, we developed seasonal crop type masks. At this scale, the optical wheat model provided better results, with RMSE = 0.4 t/ha. The rice model showed good agreement with reference data, with RMSE = 0.3 t/ha.
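Accumulated Growing Degree Days as used above follow a standard daily-averaging formula; a minimal sketch with an assumed base temperature and toy data:

```python
def gdd(tmax, tmin, t_base=0.0):
    """Daily growing degree days from max/min air temperature (°C)."""
    return max(0.0, (tmax + tmin) / 2.0 - t_base)

def gdd_accum(daily, t_base=0.0):
    """Accumulated GDD over a season given [(tmax, tmin), ...]."""
    return sum(gdd(tx, tn, t_base) for tx, tn in daily)

# Three toy days with an assumed 0 °C base temperature
season = [(22.0, 10.0), (25.0, 12.0), (18.0, 8.0)]
print(gdd_accum(season))  # 16.0 + 18.5 + 13.0 = 47.5
```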
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Challenges in Crop Type Mapping: Integrating Field Data, Spectral Analysis, and Remote Sensing

Authors: Livia Peiser, Filippo Sarvia, Federica Chiozza, José Bofana, Anis Bousselmi, Maki Abdourahman
Affiliations: Food and Agriculture Organization of the United Nations, Universidade Católica de Moçambique - Faculdade de Ciências Agronómicas (UCM-FCA), Institut National des Grandes Cultures, Food and Agriculture Organization of the United Nations
In recent years, crop mapping activities have received significant attention due to the urgent need to monitor agricultural production in the context of severe climate challenges affecting many countries. Droughts, floods and other extreme weather events increasingly threaten food security. In this context, crop detection, monitoring and mapping have become important tools for estimating damage, planning recovery efforts and ensuring prompt action. In addition, monitoring crop phenological development allows for early estimation of potential yields, providing basic indications of expected production levels. This aligns with the goals of the Food and Agriculture Organization of the United Nations of enhancing agricultural resilience, improving food security, and supporting sustainable agricultural practices in vulnerable regions. The wide availability of satellite platforms now offers unprecedented spatial, temporal and spectral coverage, enabling the mapping of major crop types. Various spectral indices can be derived from satellite images to capture the specific phenological activities associated with different crops. These indices, such as the NDVI (Normalized Difference Vegetation Index), the EVI (Enhanced Vegetation Index), or specialized indices for arid or high-biomass regions, are important for recognizing crop types and analyzing their growth dynamics. Despite the abundance of satellite data, a deep understanding of the local agricultural context remains important for accurate crop mapping. Knowledge of crop phenological calendars and growth characteristics is still essential. When linked with the spectral data, this knowledge supports crop type identification and provides the input features for the classification process. Given these details, classification algorithms can effectively leverage spectral and temporal data to map crops across entire regions.
A preliminary study is therefore required to design a field form that allows for accurate and objective data collection. This step ensures that the information collected is relevant and directly supports the final mapping objectives. A well-structured field protocol needs to capture attributes such as crop type, growth stage and position, reducing ambiguity and improving the reliability of ground truth data. In this context, examples of crop type mapping conducted in different countries, such as Tunisia and Mozambique, will be presented. Sentinel-2 data served as the input, and various spectral indices were calculated. Starting from the field data, training and validation datasets were prepared. Subsequently, a supervised classification using the random forest algorithm was implemented on Google Earth Engine (GEE) to generate the crop type maps. The corresponding confusion matrix was computed and analyzed to assess and compare the quality of the resulting maps. This work discusses all steps involved in the crop classification process, highlighting challenges in field data collection, the selection of spectral indices, and the identification of appropriate temporal periods. The aim is to provide a comprehensive overview of crop mapping, emphasizing the important role of local agronomic knowledge, as well as the development of standardized field protocols and methodological approaches, in achieving accurate results. These efforts support the generation of reliable crop maps, aiding agricultural planning and resource management.
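The confusion matrix and the overall accuracy derived from it can be computed as follows; the class names and counts below are illustrative only:

```python
from collections import Counter

def confusion_matrix(truth, pred, classes):
    """Rows = reference class, columns = predicted class."""
    counts = Counter(zip(truth, pred))
    return [[counts[(t, p)] for p in classes] for t in classes]

def overall_accuracy(matrix):
    """Fraction of samples on the matrix diagonal."""
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return correct / total

classes = ["wheat", "maize", "other"]
truth = ["wheat", "wheat", "maize", "other", "maize", "wheat"]
pred  = ["wheat", "maize", "maize", "other", "maize", "wheat"]
m = confusion_matrix(truth, pred, classes)
print(m, round(overall_accuracy(m), 2))  # [[2, 1, 0], [0, 2, 0], [0, 0, 1]] 0.83
```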
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: The Potential of Synthetic Data Generation in Predicting Agricultural Variables Using Machine Learning and Remote Sensing Data in Data Scarce Situations

Authors: Hamid Ebrahimy, Martin Atzmueller
Affiliations: Joint Lab - Artificial Intelligence and Data Science, Osnabrück University, Semantic Information Systems Group, Osnabrück University, German Research Center for Artificial Intelligence (DFKI)
In recent years, the integration of machine learning (ML) algorithms and remote sensing data has become common practice for assessing agriculture-based indicators and variables across different spatial and temporal scales. Given that the quantity and quality of training data significantly influence the applicability and performance of ML algorithms, their effective use in the agricultural sector can be difficult and sometimes unattainable. This is because, in several agricultural variable prediction tasks, obtaining sufficient reference data is hindered by limitations such as cost, privacy concerns, data lag, and missing historical data. Generating synthetic data through algorithmic approaches is among the easiest-to-implement and most effective solutions to this issue. With this approach, by producing artificial yet realistic samples that mimic the original data's characteristics, the training dataset becomes larger and more comprehensive, which can lead to more efficient and reliable outcomes. In this research, we utilized several data synthetization algorithms in different scenarios for predicting different agricultural variables using ML and remote sensing data. Specifically, we worked on three tasks: crop yield prediction, crop type mapping, and soil moisture prediction. The main research question was: to what extent can the selected data synthetization methods improve agricultural variable prediction when working with limited and incomplete reference data records? To investigate this, different types of data synthetization methods, including diffusion models, statistical models, and generative adversarial networks, were evaluated and tested in this study.
For the evaluation of the methods, two types of assessment were applied: inspection of the similarity of the synthetic data to the original data, and evaluation of agricultural variable prediction performance with the addition of synthetic data. Even though we performed several diverse experiments in terms of datasets, tasks, and models, the addition of synthetic data was, overall, a successful practice: in the majority of cases the generated synthetic data closely resembled the original dataset and also enhanced ML model performance. However, our results indicated that the addition of synthetically generated data had a variable impact across experiments; it enhanced prediction performance in some cases, showed no effect in others, and occasionally diminished the accuracy of outcomes. This might be attributed to the intrinsic characteristics of the variables, the methodological workflow, the original sample size, and the selected predictor algorithm. In conclusion, our experiments showed that carefully designed and screened data synthetization algorithms can help improve agricultural variable prediction at different scales, especially when reference data are scarce and/or incomplete, which is a prevalent situation in agriculture-related tasks.
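Of the synthesis families mentioned, the simplest statistical baseline is jittering real samples with noise scaled to each feature's spread; the sketch below is a generic illustration of that idea, not one of the study's algorithms:

```python
import random

def synthesize(samples, n_new, noise=0.05, seed=42):
    """Generate synthetic feature vectors by jittering real samples
    with Gaussian noise scaled to each feature's observed spread."""
    rng = random.Random(seed)
    dims = list(zip(*samples))
    spreads = [max(d) - min(d) or 1.0 for d in dims]  # fallback for flat dims
    out = []
    for _ in range(n_new):
        base = rng.choice(samples)
        out.append(tuple(v + rng.gauss(0, noise * s)
                         for v, s in zip(base, spreads)))
    return out

# Toy (feature, target) pairs, e.g. (NDVI, yield in t/ha)
real = [(0.2, 3.1), (0.4, 2.8), (0.3, 3.0)]
extra = synthesize(real, n_new=5)
print(len(extra))  # 5 synthetic samples resembling the originals
```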
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone S-T)

Poster: Spatiotemporal Monitoring of Cropland Soil Organic Carbon Changes From Space

Authors: Tom Broeg, Prof. Dr. Axel Don, Prof. Dr. Martin Wiesmeier, Prof. Dr. Thomas Scholten, Dr. Stefan Erasmi
Affiliations: Thünen Institute of Farm Economics, Thünen Institute of Climate-Smart Agriculture, Bavarian State Research Center for Agriculture, Institute for Agroecology and Organic Farming, Department of Geosciences, Soil Science and Geomorphology, University of Tübingen
Soils are an essential part of the global carbon cycle and play an important role in our mission toward climate neutrality. Policies such as the European Green Deal heavily rely on sustainable soil management to achieve comprehensive goals regarding soil health, climate protection, and biodiversity. It has been shown that changes in soil organic carbon (SOC) are closely related to agricultural practices, but monitoring can be challenging due to the high spatial variability. Thus, advanced high-resolution soil information systems are needed to track long-term changes in SOC and to inform about sustainable management and policy options to support emission reductions and carbon sequestration in agricultural land use. Starting with the launch of Landsat 5 in 1984 and Sentinel-2 in 2015, the availability of multispectral earth observation (EO) images has increased substantially during the last decades. Multitemporal soil samples can be combined with the corresponding EO data to feed spatiotemporal models and generate information on SOC status, trends, and changes. Recent studies have shown that soil reflectance composites (SRC) are suitable for computing high-resolution, area-wide SOC maps; however, they have rarely been used to predict dynamic SOC trends. This is because changes in cropland SOC are usually very subtle and slow (<0.5 g kg-1 year-1) and sufficient reference data from long-term soil monitoring studies are missing in many cases. In this study, we used repeated SOC samples from 1986 to 2022 and a time series of multispectral SRC (Landsat and Sentinel-2) to model high-resolution cropland SOC trends for almost four decades. An in-depth validation of the temporal model uncertainty and the accuracy of the derived SOC trends was conducted based on a network of 100 long-term monitoring sites that were continuously resampled every five years.
While the general static SOC content prediction accuracy was high, the direct validation of the derived SOC trends revealed a significantly higher uncertainty, even though predicted and measured values showed similar distributions. Classifying the results into declining and increasing SOC trends, we found that 95% of all sites were either correctly identified or predicted as stable, highlighting the potential of our findings. Increased accuracies for SOC trends were found in soils with higher SOC contents and at sites with reduced tillage. Based on the signal-to-noise ratio and temporal model uncertainty, we were able to show that the time frame necessary to detect SOC trends strongly depends on the absolute SOC changes present in the soils. Our results show that it is possible to generate significant cropland SOC trend maps based on EO data and underline the necessity for direct validation with repeated soil samples and long-term SOC measurements. Furthermore, they highlight the potential of earth observation for independent large-scale monitoring that could be used to verify SOC accrual claimed by carbon farming schemes and support national greenhouse gas inventories.
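A per-site SOC trend of the kind validated above reduces to an ordinary least-squares slope over the resampling dates; a stand-alone sketch with invented values:

```python
def trend(years, soc):
    """Ordinary least-squares slope of SOC over time (g/kg per year)."""
    n = len(years)
    my = sum(years) / n
    ms = sum(soc) / n
    num = sum((y - my) * (s - ms) for y, s in zip(years, soc))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# A site resampled every five years with a subtle decline (toy numbers)
years = [2000, 2005, 2010, 2015, 2020]
soc = [15.0, 14.9, 14.7, 14.6, 14.4]
print(round(trend(years, soc), 3))  # -0.03 g/kg per year
```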
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: F.04.02 - POSTER - Supporting Global Food Security through Earth Observation

Remote sensing technologies play a crucial role in shaping agriculture policies and food systems worldwide. By leveraging satellite imagery and data processing technologies, we can monitor agricultural activities worldwide, from planting to harvest. Earth observation data also provides valuable insights into crop health, soil moisture levels, and land use patterns, allowing policymakers and humanitarian organizations to make informed decisions about resource allocation, disaster response, and food distribution. By harnessing the power of Earth observation, we can enhance resilience in food systems, improve agricultural productivity, and ensure food security for communities around the globe.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Leveraging Machine Learning for National-level Yield Prediction to Support Food Security

Authors: Inti Luna, Vasileios Sitokonstantinou, Dr. Maria Piles, Jordi Muñoz, Gustau Camps-Valls, Filip Sabo, Dr Michele Meroni, Francesco Collivignarelli, Petar Vojnovic, Dr Hervé Kerdiles, Dr Felix Rembold, Mario Zappacosta, Jonathan Pound, Giulia Caivano
Affiliations: Image Processing Laboratory (IPL) - Universitat de València, European Commission - Joint Research Centre (JRC), Food and Agriculture Organization of the United Nations (FAO)
Accurately predicting crop yields is essential for achieving food security (SDG 2). We developed operational machine learning models for predicting national-level crop yield of the main staple crops in 78 food-insecure countries. We used FAOSTAT yield data as the response variable and JRC-ASAP agrometeorological data as predictors, starting from 2002. To overcome data limitations, with only one yield record per year per country, countries were divided into 16 groups based on temporal similarity in ASAP crop phenology. Remote sensing and meteorological descriptors were extracted and aggregated nationally using the ASAP cropland mask, including maximum NDVI, average temperature, radiation, and rainfall variables. Additionally, to account for a possible technological time trend (e.g., improved seeds, fertilization), which our descriptors may not capture, we introduced a "trend" feature based on a yield autoregressive model. We used Gaussian Process (GP) regression because it performs well with small datasets and can approximate nonlinear functions while providing prediction confidence intervals. We systematically tested the GP models using different input feature combinations and country-pooling strategies, with leave-one-year-out cross-validation for model evaluation. Models predict harvest at 75% of the growing season to support the FAO Global Information and Early Warning System (GIEWS) for food and agriculture and JRC analysts. Initially, models with significant coefficients of determination (> 0.3) were considered suitable for operational forecasts; in the current version, however, the normalized root mean square error (nRMSE) is used to compare model performance. The forecast includes (i) predictions, (ii) accuracy metrics, and (iii) explanatory metrics (https://agricultural-production-hotspots.ec.europa.eu/data/yield-forecast/). The forecasting system is under revision to improve performance.
Country and crop type coverage is being widened to cover 89 countries and each country's three most important crops. More explanatory variables are being considered, including socio-economic variables (e.g., conflicts, output and input prices, GDP per capita), improved crop calendars and crop-type specific masks, and additional environmental drivers (soil moisture, soil information). Furthermore, we will explore modeling using longer historical environmental data records from FAO-ASIS (from 1984). This study significantly contributes to addressing food insecurity country-by-country, empowering informed decision-making, and demonstrates the potential of machine learning to address global challenges.
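As an illustration of the GP regression with leave-one-year-out cross-validation described in this abstract, the sketch below uses scikit-learn with entirely synthetic features and yields (it is not the operational JRC/FAO code; feature names merely follow the abstract):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-in for the real inputs: one national record per year.
# Columns mimic the descriptors named above (max NDVI, temperature,
# radiation, rainfall); values here are random.
years = np.arange(2002, 2023)
X = rng.normal(size=(len(years), 4))
y = 2.0 + 0.5 * X[:, 0] + rng.normal(scale=0.1, size=len(years))  # yield (t/ha)

# Leave-one-year-out cross-validation: each year is predicted by a model
# trained on all remaining years.
preds, sigmas = [], []
for i in range(len(years)):
    train = np.arange(len(years)) != i
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X[train], y[train])
    mu, sd = gp.predict(X[i:i + 1], return_std=True)  # mean and predictive std
    preds.append(mu[0])
    sigmas.append(sd[0])

preds = np.array(preds)
# Normalized RMSE, the metric the abstract uses to compare models
nrmse = np.sqrt(np.mean((preds - y) ** 2)) / y.mean()
print(f"LOYO nRMSE: {nrmse:.3f}")
```

The predictive standard deviations returned by the GP are what make the confidence intervals mentioned in the abstract possible.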
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Earth Observation to support national crop statistics in regions with little ground truth: a case study for maize and rice area estimation in the northern Korean Peninsula during 2019-2024

Authors: Sheila Baber, Dr. Maarten Jongsma, Dr. Patrick Fox, Dr. Justin Fisk, Trevor Skaggs, Professor Jan Dirk Wegner, Dr. Xiaopeng Song, Dr. Sergii Skakun, Dr. Inbal Becker-Reshef, Manuel Marcaida III
Affiliations: University of Maryland, Department of Geographical Sciences, Wageningen University & Research, BIOS Applied Metabolic Systems, Swedish Civil Contingencies Agency, Element 84, University of Zurich, Department of Mathematical Modeling and Machine Learning, University of Strasbourg, ICube Laboratory, Cornell University, Nutrient Management Spear Program, College of Agriculture and Life Sciences
Crop type maps, especially those produced over several years, can shed light on food availability and changing policies towards sustainable land use. Some of the locations most at risk of food insecurity are countries for which national agricultural statistics such as cultivated areas are unavailable or not readily accessible for external surveys. One such location is the Democratic People’s Republic of Korea (DPRK). The main staple crops in the country are rice, maize, soy, and potatoes. While national crop area statistics for some years are available publicly through international organizations, gaps exist for certain years, particularly those that overlap with the COVID-19 pandemic. In this study, we apply a mix of methods that have been used in agro-climatically similar regions such as Northeast China, including unsupervised and supervised machine learning methods and Earth observations, to estimate areas of maize and rice for 2019-2024. A challenge posed in transferring these methods is the domain shift between Northeast China and the DPRK, with differences in maize yield, management practices, and the proportion of land cover under maize. We start with an initial rice mapping effort using the rule-based methods of Dong et al. (2016), which rely on time series of vegetation indices and flooding signals. This is followed by a supervised approach to maize classification, accounting for maize-specific vegetation indices and using training data from Northeast China (You et al. 2023; Li, H. et al. 2023; Li, X. et al. 2023). The approaches used to address the domain challenge mentioned above include supplementing the Northeast China dataset with unsupervised maize proxy data and adjusting the classification probability outputs. To take into account the biases inherent in Earth Observation-based classifications, we apply a stratified random sampling approach for area estimation (Olofsson et al. 2014).
In the absence of ground truth data over the sample sites, we rely on photo-interpretation of multi-resolution satellite imagery, primary texts, and consultation with individuals who have visited the country. Preliminary results for rice show a relatively stable rice area throughout the study period, despite the challenges posed by the COVID-19 pandemic. While every location has its own set of contexts, the approaches to map generation and validation could be used to assess food security conditions in other isolated countries.
Citations:
Dong, J., Xiao, X., Menarguez, M. A., Zhang, G., Qin, Y., Thau, D., ... & Moore III, B. (2016). Mapping paddy rice planting area in northeastern Asia with Landsat 8 images, phenology-based algorithm and Google Earth Engine. Remote Sensing of Environment, 185, 142-154.
Li, H., Song, X. P., Hansen, M. C., Becker-Reshef, I., Adusei, B., Pickering, J., ... & Justice, C. (2023). Development of a 10-m resolution maize and soybean map over China: Matching satellite-based crop classification with sample-based area estimation. Remote Sensing of Environment, 294, 113623.
Li, X., Qu, Y., Geng, H., Xin, Q., Huang, J., Peng, S., & Zhang, L. (2023). Mapping annual 10-m maize cropland changes in China during 2017–2021. Scientific Data, 10(1), 765.
Olofsson, P., Foody, G. M., Herold, M., Stehman, S. V., Woodcock, C. E., & Wulder, M. A. (2014). Good practices for estimating area and assessing accuracy of land change. Remote Sensing of Environment, 148, 42-57.
You, N., Dong, J., Li, J., Huang, J., & Jin, Z. (2023). Rapid early-season maize mapping without crop labels. Remote Sensing of Environment, 290, 113496.
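The stratified estimator of Olofsson et al. (2014) cited in this abstract can be sketched as follows; the mapped areas, sample sizes, and reference counts below are illustrative placeholders, not the study's data:

```python
import numpy as np

# Stratified area estimation in the spirit of Olofsson et al. (2014).
# Strata come from the map classes; reference labels come from
# photo-interpreted sample units. Stratum 0 = mapped rice, 1 = non-rice.
A_h = np.array([150_000.0, 850_000.0])  # mapped area of each stratum (ha)
n_h = np.array([100, 100])              # sample units drawn per stratum
y_h = np.array([88, 6])                 # sample units whose reference label is rice

W_h = A_h / A_h.sum()                   # stratum area weights
p_h = y_h / n_h                         # reference-rice proportion per stratum

# Bias-adjusted area proportion and area of rice
p_rice = np.sum(W_h * p_h)
area_rice = p_rice * A_h.sum()

# Standard error of the stratified proportion (simple random sampling
# within strata), scaled back to area units
var_p = np.sum(W_h**2 * p_h * (1 - p_h) / (n_h - 1))
se_area = np.sqrt(var_p) * A_h.sum()
print(f"Estimated rice area: {area_rice:,.0f} ± {1.96 * se_area:,.0f} ha (95% CI)")
```

The point of the adjustment is that the final area estimate comes from the reference sample, not from counting mapped pixels, so classification bias is corrected and a confidence interval can be reported.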
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Early-Season Crop Mapping Using Sentinel-2 Time-Series Data and the Random Forest Algorithm

Authors: Simona Gargiulo, Gianluca Brunelli, Erminia De Grandis, Beatrice Gottardi, Camilla Della Monica, Carlo De Michele, Pietro Sciusco
Affiliations: CGI, Ariespace, Planetek
In the context of global climate change, agriculture must meet food supply demands while minimizing the use of natural resources and reducing its environmental impact. Early-season crop mapping is essential in modern agricultural monitoring, serving as a key tool for yield prediction and irrigation planning, which are critical for informed decision-making and marketing strategies related to food security. Remote sensing technology is now widely used as a suitable tool for agricultural surveys, offering benefits such as extensive spatial coverage, rapid data acquisition, and cost-effectiveness. Time series of multispectral images, e.g. Sentinel imagery, have become one of the most important data sources for crop mapping, as they reflect crop phenology and allow crops to be differentiated according to their growth status at different growth periods. The growing trend of cloud-based geospatial platforms has improved remote sensing data accessibility, computation time and automation. In this context, Insula is an innovative Earth Observation (EO) platform operated by CGI Italy, designed to handle the processing of large-scale EO data efficiently. It supports the integration of new services and datasets driven by user needs thanks to its standard interfaces, resource scalability to optimize large-scale processing, and a robust monitoring system that provides real-time supervision to identify problems and make rapid adjustments for greater efficiency and accuracy. The goal of this work is to propose an accurate and easily reproducible algorithm, fully integrated into the Insula platform, that allows the early detection of spring-summer, autumn-winter, and multi-annual crops before the start of the irrigation period. Sentinel-2 time-series data across one year are retrieved from the Insula platform and combined with ground truth over the Emilia Romagna and Campania regions in Italy to train and validate a random forest classifier.
Satellite spectral bands, as well as several spectral indices (e.g. the Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), the Modified Normalized Difference Water Index (MNDWI), and others), are used as inputs to the classifier, and a feature importance analysis is performed to evaluate the most effective features for crop mapping. This paper presents the results obtained by running the classifier in various regions of Italy, such as Emilia Romagna, Lombardy, Veneto, and Campania, to account for different agricultural practices. A target accuracy of 85% was achieved in the early identification of spring-summer, autumn-winter, and multi-annual crops. As a further improvement, the algorithm will be tested across the full national territory to demonstrate its scalability at multiple latitudes.
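A minimal sketch of the bands-plus-indices random forest workflow with feature importance analysis follows; the reflectances and crop labels are synthetic, and this is not the Insula implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in for per-parcel Sentinel-2 band statistics
# (reflectance in [0, 1]); real inputs would be time-series composites.
n = 500
green, red, nir, swir = (rng.uniform(0.05, 0.6, n) for _ in range(4))

# Spectral indices named in the abstract
ndvi = (nir - red) / (nir + red)
gndvi = (nir - green) / (nir + green)
mndwi = (green - swir) / (green + swir)

X = np.column_stack([green, red, nir, swir, ndvi, gndvi, mndwi])
# Illustrative labels: 0 = spring-summer, 1 = autumn-winter, 2 = multi-annual
y = rng.integers(0, 3, n)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Feature importance analysis: rank the most effective inputs
names = ["green", "red", "nir", "swir", "NDVI", "GNDVI", "MNDWI"]
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:6s} {imp:.3f}")
```

With real labelled parcels, the same importance ranking would indicate which bands or indices drive the early-season separation between the three crop groups.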
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Space Applications at the Service of Food Security in the Mediterranean Region

Authors: Mr. Gabriele Redigonda, James Francis, Mr. Lucas Bersegol, Ms. Shadi Rochard, Ms. Laura Corbett
Affiliations: European Space Policy Institute
Food security has become one of the highest political priorities at the national, regional, and international levels, notably emphasised in the UN Secretary General's Common Agenda and the most recent statements under the G7 umbrella. It is well understood that climate change is a key determinant of current and future risks at stake for food security. However, given the unique climate vulnerability of the Mediterranean region, prioritised and elevated attention is required, directly underpinning the need for intervention to address the humanitarian crisis stemming from food insecurity. Although climate policies in the region are increasingly recognising the urgency of the issue, with the value of EO and other space-based solutions acknowledged to a limited extent, a clear gap has been identified in the translation of data into tangible solutions for end users, and an untapped general potential for such solutions for food security. This ESPI study conducted policy analysis of the critical climate drivers of food security in the region and focused on the uptake and potential of innovative EO and space solutions for supporting food security in the region. The results highlight the criticality of the production and consumption segments of food systems as most vulnerable to climate change, with many space projects available addressing these areas directly. Other components such as logistics and transformation were shown to be less vulnerable. The analysis also showed how EO and other space solutions can be used to mitigate the effects of climate change through the advancement of carbon sequestration in agriculture, GHG emission reduction, and the monitoring of the food system for sustainable practices. The study finally provides recommendations for unleashing the potential of EO and space-based solutions, calling for concrete actions at all levels and with a multidisciplinary approach.
In particular, policy and regulatory measures play a central role in this regard, as demonstrated by an initial analysis of the state of affairs, complemented by a broad consultation campaign involving both policymakers and decision-makers from Mediterranean countries, as well as relevant international organisations such as FAO, WFP, and IFAD. These regional recommendations and solutions have the potential to support global food security activities and initiatives.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Mapping the Cropping Practices with Earth Observation Satellite Image in Support of Sustainable Agricultural Management

Authors: Jinlong Fan, Dr. Xianwei Li, Prof. Sijun Qin, Prof. Guojun Li, Xueyuan Qian, Qiaomei Su
Affiliations: Beijing Normal University, Beidahuang Group Jiansanjiang Branch Office, Taiyuan University of Technology
The easy access to high-resolution satellite data worldwide has improved agricultural monitoring at various scales. In addition to yield forecasting, crop growth condition monitoring and crop area estimation, the monitoring of cropping practices from sowing to harvest with satellite data has recently drawn attention from the agricultural remote sensing community, as this information is key to crop management along the crop growth cycle. Thanks to Sentinel-2, Landsat 8/9 and GF-1 satellite data, time series from multiple satellite sources increase the frequency of valid observations of cropping management practices during the whole crop growth cycle, and may thus provide timely information in support of decision making for agricultural production management and sustainable development. In this study, a suite of state-owned farms in Northeast China was selected as the study area, where the crop field size is 20 hectares on average and modernized agriculture is under development. The crop types in the study area are rice, corn and soybean, of which rice cultivation accounts for more than 80%. In this study, an approach based on remote sensing classification with Random Forest was developed to identify the different cropping practices on each available and valid satellite image.
Thus, the cropping practices, including rice field preparation readiness in spring, dryland and paddy field crop mapping in summer, and harvest and plough progress in fall, could be retrieved from the satellite imagery in the study area since 2021. Validation based on the error matrix revealed accuracies higher than 90% for all these classified results. In this study, timeliness is of importance for the farm decision makers, as action on crop management is expected to be taken immediately based on the information retrieved from real-time satellite imagery. Therefore, processing code for these open-access satellite images was developed to speed up computation and reduce the interaction between computer and operator. However, the preparation of training samples remains the most time-consuming step in the whole process. Over a few years of application, the methodology was tuned to be effective and efficient, and the information can be made available in the early evening once the morning satellite image proves valid and usable. This immediately retrieved information may help decision makers in the allocation of agricultural machinery, the evaluation of the effect of policy implementation, and understanding what is going on in the field in real time. Keywords: Cropping Practices; Agricultural Monitoring; Earth Observation; GF; Sentinel; Landsat
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: A.02.06 - POSTER - Advances in land surface phenology monitoring and applications

Land surface phenology (LSP) plays a key role in monitoring and understanding the seasonal dynamics of terrestrial ecosystems, providing critical insights into how these ecosystems respond to environmental changes, including those driven by climate change. This makes LSP a vital tool in predicting and managing ecosystem responses to climate variability and environmental stressors. This field has seen significant advances over the last decade, particularly with the advent of more sophisticated remote sensing technologies and data processing techniques and the development of new phenological products, such as the pan-European High-Resolution Vegetation Phenology and Productivity (HR-VPP) dataset included in the Copernicus Land Monitoring Service. We invite contributions on recent advancements in LSP monitoring, emphasizing the use of innovative techniques to detect phenological changes, the application of sub-daily resolution data, new tools for processing and analysing satellite data, and applications in various fields such as agriculture, forestry, ecology, public health, or climate science.

This session also welcomes any contribution concerning the intercomparison of LSP and complementary phenological observations, including in-situ human observations, phenocams (digital cameras capturing vegetation changes), and flux towers (measuring exchanges of carbon, water, and energy). The synergy between these observation methods can address inherent discrepancies and limitations, leading to a more accurate and holistic view of terrestrial ecosystems and their responses to climate change. It is expected that these contributions will provide further insight into the CEOS LPV phenology validation protocol.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: The Impact of PlanetScope-Sentinel-2 Data Fusion on Phenometrics Retrieval

Authors: Magdalena Main-Knorn, Claas Nendel, Gohar Ghazaryan
Affiliations: Leibniz Center for Agricultural Landscape Research, Institute of Biochemistry and Biology, University of Potsdam, Integrative Research Institute on Transformations of Human-Environment Systems (IRI THESys), Humboldt-Universität zu Berlin, Global Change Research Institute of the Czech Academy of Sciences, Earth Observation Lab, Geography Department, Humboldt-Universität zu Berlin
Monitoring crop phenology is crucial for optimizing agricultural management, resource allocation, and climate adaptation. Phenological shifts, influenced by climate change, extreme weather, and management practices, affect ecosystem services, agricultural productivity, and food security. Near real-time monitoring of crop cover and growth during early growth stages enables farmers to evaluate germination success and adapt practices to mitigate potential yield losses. Satellite remote sensing (RS) time series enable the extraction of Land Surface Phenology (LSP) metrics, such as Start-, Peak-, and End-of-Season (SoS, PoS, and EoS, respectively), by capturing variations in crop characteristics like biomass and chlorophyll content. These metrics correspond to phenological transition dates and provide insights into crop growth dynamics at the field and sub-field levels. However, achieving both high temporal frequency and spatial detail in LSP retrieval remains a challenge due to trade-offs in satellite datasets. This study investigates the potential of PlanetScope-Sentinel-2 (PS-S2) data fusion to improve the accuracy of LSP metrics for maize phenological phases in Germany during 2021 and 2022. Using multiple vegetation indices (Normalized Difference Vegetation Index - NDVI, Normalized Difference Phenology Index - NDPI, Normalized Difference Red Edge - NDRE, Modified Soil-Adjusted Vegetation Index - MSAVI) and retrieval methods (adaptations of TIMESAT methods: First-of-Slope, Median, Relative Amplitude, and Seasonal Amplitude), we derived phenometrics from S2 observations and fused datasets. A Random Forest-based model was employed for a novel data fusion approach over maize fields, based on PS-S2 image matching pairs within a three-day temporal window (one prior and one ensuing S2 observation), addressing the critical temporal and spatial resolution trade-off.
Results were compared to interpolated phenological data derived from the German Weather Service (DWD) and the European Vegetation Phenology and Productivity products (HR-VPP, version 1.05). This is among the first studies to rigorously evaluate PS-S2 fusion for field-scale phenology detection across key phenological stages of an economically critical crop. Results indicate that the RF-based fusion model outperformed other fusion approaches across all test sites, years and vegetation indices. While fusion showed clear potential, its effectiveness depended heavily on the phenological phase, applied method, and analysed year. Emergence (BBCH 10): The SoS metrics for the maize emergence stage revealed consistent patterns across methods and years, with Median retrieval methods benefiting most from fusion. Fusion improved RMSE by 13.3% and MAE by 15.2%, demonstrating its potential for early-stage monitoring, critical for germination assessment and management. As the detection of this stage is missing in the HR-VPP products, a comparison could not be conducted, which emphasises the novelty of our approach. Beginning of Shooting (BBCH 31): Results varied significantly by year, site, and retrieval method. In 2022, fusion improved SoS accuracy when applying the First-of-Slope (26.5% RMSE improvement) and Seasonal Amplitude (10.1% RMSE improvement) methods. By contrast, fusion led to declines in SoS accuracy for the Median retrieval methods (e.g., −88.8% RMSE). MSAVI-based Relative Amplitude methods achieved the best accuracy for the fused dataset (15-day delay), while NDRE-based Median methods performed best for the S2-only dataset (9-day delay). These mixed results highlight the sensitivity of fusion effectiveness to temporal and spatial variations. Harvest (BBCH 99): For EoS metrics associated with maize harvest, fusion consistently improved accuracy, especially for grain maize fields.
Using the First-of-Slope and Seasonal Amplitude methods, RMSE and MAE improvements ranged from 8% to 10.7%. The Relative Amplitude method showed modest benefits, while Median retrieval methods exhibited negligible improvements. Across datasets, the NDRE-based Relative Amplitude method achieved the highest accuracy (e.g. a 14–15-day delay in 2021). The results demonstrate the benefit of fusion in determining EoS, while also highlighting the efficacy of methods applied to Sentinel-2 observations without fusion. Summarizing the significance and novelty, this study reveals that PS-S2 fusion offers significant potential to enhance phenometrics retrieval, particularly for the emergence and harvest phases of maize. However, it challenges the assumption that more data always results in better accuracy, demonstrating that fusion outcomes depend strongly on retrieval methods and data temporal resolution. The year- and site-dependent variability highlights the importance of tailoring methods to specific phenological phases and crop types. As one of the first comprehensive analyses of PS-S2 fusion for maize phenology detection, this work provides valuable insights for optimizing multi-source data integration and advancing agricultural monitoring strategies. Furthermore, these findings provide a foundation for the use of multimodal datasets to enhance the precision of crop models and improve decision-making in precision agriculture.
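The TIMESAT-style amplitude-threshold retrieval of SoS discussed in this abstract can be illustrated with a minimal sketch; the NDVI curve is an idealized synthetic season and the 25% amplitude fraction is an assumption, not the study's exact parameterization:

```python
import numpy as np

def sos_seasonal_amplitude(doy, vi, frac=0.25):
    """Start-of-Season as the first day the index rises above
    base + frac * (peak - base): a seasonal-amplitude rule in the
    spirit of TIMESAT (frac is a user choice)."""
    base, peak = vi.min(), vi.max()
    thresh = base + frac * (peak - base)
    first = np.where(vi >= thresh)[0][0]
    if first == 0:
        return float(doy[0])
    # Linear interpolation between the observations straddling the threshold
    d0, d1 = doy[first - 1], doy[first]
    v0, v1 = vi[first - 1], vi[first]
    return float(d0 + (thresh - v0) / (v1 - v0) * (d1 - d0))

# Idealized NDVI seasonal curve sampled every 5 days
doy = np.arange(1, 366, 5)
vi = 0.2 + 0.6 * np.exp(-(((doy - 200) / 60.0) ** 2))
sos = sos_seasonal_amplitude(doy, vi)
print(f"SoS (day of year): {sos:.1f}")
```

Denser observations (e.g. from PS-S2 fusion) shrink the interpolation interval around the threshold crossing, which is one mechanism by which fusion can sharpen SoS estimates.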
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: TerEcoData: a service to monitor terrestrial ecology changes from Earth Observing systems

Authors: Aline Deprez, Jean-Philippe Malet, David Michéa, Anne Puissant
Affiliations: Data-Terra / THEIA data hub – UAR 2013 CNRS, École et Observatoire des Sciences de la Terre/EOST – UAR 830 CNRS, Université de Strasbourg, Institut Terre et Environnement de Strasbourg/ITES – UMR 7063 CNRS, Université de Strasbourg, Laboratoire Image, Ville, Environnement/LIVE – UMR 7362 CNRS, Université de Strasbourg
Natural landscapes (grasslands, meadows, forests) are essential for environmental well-being because of the ecosystem services they provide. These landscapes are under increasing pressure from climate change, resource extraction, and human-induced disturbances. The result is habitat change, often accompanied by a loss of species diversity. Satellite remote sensing enables global and systematic monitoring of biodiversity dynamics. The objective of this work is to present the Terrestrial Ecology Data service (TerEcoData), developed by the French Data-Terra/THEIA Research Infrastructure, which aims at providing advanced indicators of ecosystem properties (vegetation changes in space and time, vegetation phenological lifecycles, habitat biodiversity) from high-resolution Sentinel-2 time series. The on-demand service offers the creation of multiple products according to the user's expertise in satellite remote sensing. The web service allows users to select the period and region of interest, the indicators to be computed, and their representation (pixels, polygons). It is based on a datacube approach that manages and processes massive satellite datasets, derives zonal statistics and time-series analyses, and is optimized for high-performance computing environments. The generated products can be visualized online and downloaded. Since only the needed information is extracted, the products are directly usable on a simple desktop computer for further advanced processing. The functionalities of TerEcoData and examples of products to quantify ecological changes over Alpine (southern French Alps) and agricultural (Grand Est region, France) landscapes will be presented. The service was developed with the support of the Space Climate Observatory programme of CNES.
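The datacube zonal-statistics idea behind such a service can be sketched as follows; the NDVI cube and zone ids are synthetic, and this is not the TerEcoData implementation:

```python
import numpy as np

# Minimal zonal-statistics sketch of the datacube idea: reduce a pixel
# time series to per-zone summaries so users download only the extracted
# indicators, not the full imagery.
rng = np.random.default_rng(2)
ndvi = rng.uniform(0.1, 0.9, size=(12, 100, 100))  # (time, y, x): monthly NDVI cube
zones = rng.integers(0, 3, size=(100, 100))        # 3 polygons rasterized to zone ids

stats = {}
for z in np.unique(zones):
    series = ndvi[:, zones == z]                   # all pixels of zone z over time
    stats[int(z)] = {"mean": series.mean(axis=1), "std": series.std(axis=1)}

for z, s in stats.items():
    print(f"zone {z}: mean NDVI over year = {s['mean'].mean():.3f}")
```

Each zone's output is a short time series of summary statistics, which is why the extracted products remain small enough for a simple desktop computer.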
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Understanding underlayer dynamics of a fire-prone Mediterranean Tree-Grass Ecosystem using In Situ Data, a 3D Radiative Transfer Model and multi-scale remote sensing data

Authors: Mathilda Porterie, Professor M. Pilar Martín, Doctor Vicente Burchard-Levine, Karine Adeline, Professor Xavier Briottet
Affiliations: Office National d’Etude et de Recherche Aerospatial (ONERA), Département d’Optique et Techniques Associées (DOTA), Université de Toulouse F-31055, Environmental Remote Sensing and Spectroscopy Laboratory (SpecLab), Spanish National Research Council (CSIC), Tec4AGR0 group, ICA, Spanish National Research Council (CSIC)
In Mediterranean ecosystems, periods of severe heat and drought are frequent, and such extreme conditions are expected to become more common and intense in the coming years, thus lengthening fire-prone seasons [1], [2]. In Mediterranean tree-grass ecosystems (TGE), two vegetation layers (trees and grass) compose the fuel source, with the herbaceous underlayer recognized as the most flammable living fuel [3]. Remote sensing provides useful data for mapping and monitoring the functional traits of vegetation, which can provide insightful information on its flammability potential, such as fuel moisture content, and related proxies such as pigments, water and dry matter content. However, modeling herbaceous vegetation presents unique challenges due to high species diversity, leading to overlapping phenological cycles and the coexistence of photosynthetic and non-photosynthetic materials. While the relationship between spectral variables and vegetation traits has long been studied for photosynthetically active vegetation, the same is not true for senescent material [4]. Besides, the leaf and canopy radiative transfer models (RTMs) such as PROSPECT and PROSAIL commonly used to estimate plant traits assume uniform optical properties within a vegetation layer, which limits their applicability to ecosystems where photosynthetic and senescent plant material are mixed. The 1D SenSCOPE RTM [5] partially addresses this problem by representing light interactions and the physiology of photosynthetically active and senesced leaves in the same vegetation layer, though it neglects geometric effects. Transitioning to 3D RTMs is essential to account for both spatial and optical heterogeneity, yet challenges persist in modeling mixed materials within complex canopies [6].
These challenges are further compounded by the temporal dynamics of phenological fractions and the physiological states of photosynthetic vegetation, which shift over time, making it difficult to develop an RTM parameterization that is sufficiently generalizable to capture these variations. Capturing seasonal variations in optical properties, which reflect changes in vegetation flammability throughout the phenological cycle, is crucial for enhancing the accuracy and realism of fire prevention efforts. This study aims to (1) design a more suitable 3D model of the herbaceous vegetation stratum in a Mediterranean TGE accounting for the inherent variability of phenophases and then (2) validate the simulated spectra by comparison with measured spectra at multiple scales, from ground to airborne and satellite, and along different phenophases. We employed the Discrete Anisotropic Radiative Transfer (DART) RTM [7], incorporating site-specific structural and optical parameters to simulate reflectance of the herbaceous vegetation in a Mediterranean TGE located in Spain (Majadas de Tietar - latitude: 39.94° N; longitude: -5.77° W) during three phenological stages (biomass peak - BP, grass decay - GD, and summer drought - SD [8]). A novel modeling approach is proposed that differentiates two phenological fractions within a 3D DART scene composed of two distinct vegetation volumes representing photosynthetic (PV) and non-photosynthetic (NPV) vegetation with individual optical properties, but sharing identical geometric parameters to ensure spatial overlap. The optical properties of the NPV volume are derived from in situ spectral libraries of senescent material acquired at the site, while the PV volume is modeled using the PROSPECT leaf RTM. Leaf density within each vegetation volume is represented based on in situ Leaf Area Index (LAI) measurements.
Other PROSPECT parameters, such as pigment, water and dry matter content, were parameterized using in situ measurements, while DART model parameters, including structural and angular distributions, are based on the literature. As the proportions of the different vegetation fractions vary throughout the year, as does the physiological state of the PV vegetation, each phenophase was parameterized with specific ranges; thus, separate 3D mockups were constructed for each of the three studied phenophases. Simulated spectra were compared with reflectance datasets acquired at the study site at three different spatial scales: ground measurements from an ASD FieldSpec® 3 spectroradiometer, airborne images acquired by a Compact Airborne Spectrographic Imager (CASI-1500i) hyperspectral sensor, and multispectral satellite images from Sentinel-2 MSI. The Spectral Angle Mapper (SAM) and Root Mean Squared Error (RMSE) were calculated to quantify shape and intensity similarity between the simulated and measured reflectance spectra. High levels of agreement between observed and simulated spectra across phenophases and spectral domains were found when comparing with the ASD FieldSpec measurements. The simulated spectra were validated in the spectral range from 400 to 2500 nm, and good agreement was found for the BP (SAM = 6.7°) and SD (SAM = 7.0°) phenophases, while GD presented higher discrepancies (SAM = 11°). The Short Wave InfraRed (SWIR) region (1450 to 2300 nm) exhibited the highest similarities between simulated and measured reflectance spectra, particularly for SD (SAM SWIR1 = 0.3° and SAM SWIR2 = 1.9°). In contrast, the VIS domain (400 to 700 nm) showed greater discrepancies, especially for BP and GD (SAM VIS BP = 5.7° and SAM VIS SD = 4.9°). The Near InfraRed (NIR) region (700 to 1350 nm) was generally well simulated, though performance declined significantly for GD.
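The SAM and RMSE similarity measures used above can be sketched as follows; the two spectra are synthetic placeholders (a scaled-and-perturbed copy standing in for a simulation), not site data:

```python
import numpy as np

def spectral_angle_deg(a, b):
    """Spectral Angle Mapper: angle (degrees) between two reflectance
    spectra, insensitive to overall intensity scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def rmse(a, b):
    """Root Mean Squared Error: intensity (not shape) mismatch."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Illustrative spectra over 400-2500 nm: a "measured" curve and a
# "simulated" one with a small gain offset and shape perturbation.
wl = np.linspace(400, 2500, 211)
measured = 0.3 + 0.1 * np.sin(wl / 300.0)
simulated = measured * 1.05 + 0.01 * np.cos(wl / 150.0)

sam = spectral_angle_deg(measured, simulated)
err = rmse(measured, simulated)
print(f"SAM  = {sam:.2f} deg")
print(f"RMSE = {err:.4f}")
```

Because SAM ignores a pure intensity scaling while RMSE does not, reporting both (as above) separates shape agreement from brightness agreement.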
Comparison with hyperspectral airborne data for the BP and GD phenophases showed a more accurate representation of the observed data across most of the CASI range (400 to 1000 nm), with SAM ≃ 4.2°. However, differences emerged in the VIS region, where measured CASI spectra exhibited a distinct reflectance peak near 560 nm and an absorption feature around 500 nm not observed in the simulations. In the NIR region, simulated reflectance for SD was overestimated (~0.3) compared to observed values (~0.25), approaching the simulation lower bound. Finally, the simulated reflectance aligns well with measured Sentinel-2 reflectance across most spectral regions, except in SWIR 1 (band 11) for the GD and SD phenophases, where simulated reflectance values were lower than observed. In the VIS region, notable differences were observed, with simulated reflectance values higher than the measured Sentinel-2 spectra (approximately 1 compared to the observed 0.5). Despite the shape differences noticed in those results, the RMSE values across spectral regions, phenophases, and sensors remained low (0.01–0.06%), demonstrating good overall agreement between the simulated and measured median reflectance. To conclude, distinguishing PV and NPV vegetation volumes within the 3D model improves the precision and realism of the two phenological fractions by using respective in situ measurements as input parameters to define their optical properties and proportions. However, discrepancies detected in the comparison of the simulated and measured spectra could be related to unaccounted real-world elements within the 3D scene, such as flowers, particularly when comparing across different measurement scales. Future prospects will target vegetation trait estimation from these DART-simulated synthetic datasets by training machine learning algorithms.
[1] J. Piñol, J. Terradas, and F. Lloret, "Climate warming, wildfire hazard, and wildfire occurrence in coastal eastern Spain", Clim. Change, vol. 38, no. 3, pp. 345-357, 1998.
[2] J. E. Keeley and A. D. Syphard, "Climate change and future fire regimes: examples from California", Geosciences, vol. 6, no. 3, p. 37, 2016.
[3] M. Sánchez-Pinillos et al., "Spatial and temporal variations of overstory and understory fuels in Mediterranean landscapes", For. Ecol. Manag., vol. 490, p. 119094, Jun. 2021, doi: 10.1016/j.foreco.2021.119094.
[4] B. Lu, C. Proctor, and Y. He, "Investigating different versions of PROSPECT and PROSAIL for estimating spectral and biophysical properties of photosynthetic and non-photosynthetic vegetation in mixed grasslands", GIScience Remote Sens., vol. 58, no. 3, pp. 354-371, 2021.
[5] J. Pacheco-Labrador et al., "senSCOPE: Modeling radiative transfer and biochemical processes in mixed canopies combining green and senescent leaves with SCOPE", bioRxiv, 2021.
[6] J. R. Melendo-Vega et al., "Improving the Performance of 3-D Radiative Transfer Model FLIGHT to Simulate Optical Properties of a Tree-Grass Ecosystem", Remote Sens., vol. 10, no. 12, Dec. 2018, doi: 10.3390/rs10122061.
[7] J.-P. Gastellu-Etchegorry et al., "Discrete anisotropic radiative transfer (DART 5) for modeling airborne and satellite spectroradiometer and LIDAR acquisitions of natural and urban landscapes", Remote Sens., vol. 7, no. 2, pp. 1667-1701, 2015.
[8] Y. Luo et al., "Using near-infrared-enabled digital repeat photography to track structural and physiological phenology in Mediterranean tree-grass ecosystems", Remote Sens., vol. 10, no. 8, p. 1293, 2018.
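The two comparison metrics used above are straightforward to compute; the sketch below is a minimal illustration (with invented toy spectra, not the study's data) of how SAM captures shape similarity independently of magnitude, while RMSE captures intensity differences:

```python
import numpy as np

def spectral_angle_deg(sim, obs):
    """Spectral Angle Mapper: angle (degrees) between two reflectance
    spectra treated as vectors; 0 deg means identical spectral shape."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    cos = np.dot(sim, obs) / (np.linalg.norm(sim) * np.linalg.norm(obs))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def rmse(sim, obs):
    """Root Mean Squared Error between simulated and observed reflectance."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

# Toy spectra: identical shape, different magnitude -> SAM ~ 0, RMSE > 0
obs = np.array([0.05, 0.10, 0.40, 0.35])
sim = 1.2 * obs
print(spectral_angle_deg(sim, obs))  # ~0: identical shape
print(rmse(sim, obs))                # non-zero: intensities differ
```

This is why the abstract reports both: a low SAM with non-negligible RMSE indicates the simulated spectral shape is right even where absolute reflectance is biased.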

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Senescence in the Arctic - a Case Study on the Dynamics of Tundra Plant Communities in Svalbard Using High-Resolution UAV Imagery

Authors: Janik Hoffmann, Lena Josephina Jäger, Andreas Jørgensen, Assoc. Prof. Larissa Teresa Beumer, Assoc. Prof. Lennart Nilsen, Antonio Jose Castañeda, Dr Mirjana Bevanda
Affiliations: Julius-Maximilian-University Würzburg, Earth Observation Research Cluster, Arctic University of Norway, Department of Arctic and Marine Biology, The University Centre in Svalbard, Department of Arctic Biology
Global warming is altering plant phenology, potentially shifting the timing and duration of growth seasons. While considerable research has focused on spring phenology, there remains a significant knowledge gap in understanding the impacts of climate change on autumn processes, such as leaf senescence, particularly in Arctic regions, where data acquisition is challenging. This has implications for our understanding of potential changes in growth season duration, with effects on vegetation-climate feedbacks and ecological dynamics. Unmanned Aerial Vehicles (UAVs) have gained attention due to their ability to provide high spatial resolution data, making them an effective tool for bridging the gap between field observations and satellite-based phenology monitoring. To date, however, no study has utilized UAVs to investigate autumn senescence in Arctic ecosystems. This study assesses the applicability of low-cost UAVs for monitoring autumn senescence in Arctic tundra, aiming to close both the “seasonal gap” in understanding autumn phenology and the “scaling gap” between high-resolution field data and satellite observations. Specifically, our aim is to leverage the high resolution of our dataset to spatially discriminate between different plant community types and thereby assess differences in their temporal senescence patterns. The research was conducted in Adventdalen, Svalbard, where multispectral UAV data were collected weekly over six weeks in autumn 2023. A DJI Mavic 3 Multispectral drone captured RGB and Near-Infrared (NIR) time series with a spatial resolution of two centimeters, covering a total area of more than 30 square kilometers spanning common tundra plant communities. Flight areas were selected based on the distribution of 12 ground-truth plots equipped with RGB timelapse cameras and NDVI sensors.
Weekly phenological data from 185 tagged plant individuals across these plots monitored senescence indicators, including leaf color and photosynthetic activity, at individual and species levels. This multi-scale dataset represents the backbone of our current study. We evaluated the performance of common remote sensing vegetation indices and a color model derived from RGB that represents colors using hue, saturation, and value (HSV), to assess their reliability in capturing vegetation browning progression. A supervised object-based classification approach, incorporating both UAV and ground-truth data, will be employed to map plant community distributions and assess temporal patterns in senescence. Initial analyses show that our UAV dataset effectively captures differences in leaf senescence progression across plant communities, with earlier senescence at drier sites (e.g., Luzula heath), whereas communities at wetter sites (e.g., Dupontia wetland) appeared to senesce later. This finding aligns with our ground-truth data and supports our assumption that adding a temporal component to our spectral metrics could further enhance the capacities of the classification model. Our findings highlight the potential of UAVs for monitoring plant senescence in Arctic environments, providing an approach to study inter-annual shifts in phenology. When combined with satellite data, this methodology could enable broad-scale, high-frequency monitoring of Arctic vegetation phenology, offering insights into vegetation-climate feedbacks in remote High-Arctic regions.
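The RGB-to-HSV transformation mentioned above can be sketched with Python's standard `colorsys` module; the weekly mean canopy colours below are invented for illustration (not data from the study) and show how the hue channel tracks the green-to-brown shift:

```python
import colorsys

def hue_deg(r, g, b):
    """Hue (degrees, 0-360) of an 8-bit RGB pixel; healthy greens sit
    near 120 deg, and browning shifts the hue toward yellow/red (< 60 deg)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

# Hypothetical weekly mean canopy colours for one plot (green -> brown)
weeks = [(70, 120, 40), (90, 115, 45), (120, 100, 50), (140, 90, 60)]
hues = [hue_deg(*rgb) for rgb in weeks]
print([round(h) for h in hues])  # hue declines as the canopy browns
```

Because hue separates colour from brightness, it is less sensitive than raw RGB to the changing illumination between weekly flights, which is one motivation for using an HSV-type model alongside vegetation indices.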

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Time Series Analysis Using Sentinel-1 and Sentinel-2 to Monitor Vegetation Dynamics in Kruger National Park, South Africa

Authors: Hanna Arlaud, Katharina Franke, Dr. Jussi Baade, Caspar Moritz Jahn, Tercia Strydom, Marco Wolsza, Prof. Dr. Christiane Schmullius
Affiliations: Department for Earth Observation, Friedrich Schiller University Jena, Department of Physical Geography, Friedrich Schiller University Jena, Scientific Services, South African National Parks
Kruger National Park (KNP), one of South Africa's largest and most iconic wildlife reserves, offers a unique and relatively undisturbed natural environment, making it an ideal location for studying vegetation dynamics. Monitoring these dynamics is essential for understanding ecosystem health and the broader impacts of environmental changes, particularly in the context of climate variability. This study uses large-scale Earth observation data, accessed through data cubes as part of the South African Land Degradation Monitor (SALDi) project, to analyze subtle land cover changes that contribute to degradation. Funded under the SPACES II program by the German Federal Ministry of Education and Research (BMBF), the project utilizes data from international missions, including Sentinel-1 and -2 of the European Space Agency (ESA) and TanDEM-X of the German Aerospace Center (DLR). The study leverages the capabilities of remote sensing data, particularly from the Sentinel-1 and Sentinel-2 satellites, to monitor and assess vegetation dynamics over time. Specifically, we address the following question: Are there detectable changes in the length and characteristics of the vegetation period over the last six years in the southern regions of KNP? By analyzing Sentinel-1 and Sentinel-2 scenes from 2018 to 2023, alongside additional soil moisture data from SMAP and in situ stations, as well as precipitation data from MSWEP, we assess vegetation dynamics in a defined research area in the southern part of the park. Temporal analyses are conducted to identify annual trends and seasonal variations, using vegetation indices such as the Enhanced Vegetation Index (EVI) to monitor changes. Statistical methods for trend detection and seasonal decomposition are employed to separate recurring patterns from long-term shifts.
Additionally, correlation analysis or multiple regression models are used to explore relationships between vegetation dynamics, as measured by the EVI, and environmental drivers such as precipitation and soil moisture. Preliminary results indicate annual trends in EVI across different land cover classes, highlighting differences in vegetation responses to environmental conditions. The outcome of this research provides a clearer understanding of how vegetation phenology, particularly the timing of the growing season, has shifted in response to climate variability over the past six years. These findings are important for understanding how KNP's natural vegetation adapts to changing environmental conditions and provide insight into broader ecological responses. This work is part of an MSc seminar at the University of Jena in cooperation with South African institutions.
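The seasonal-decomposition step described above can be sketched as a naive additive decomposition (a simplified stand-in for the statistical methods used in the study), here on a synthetic monthly EVI series with an imposed trend and seasonal cycle:

```python
import numpy as np

def decompose(evi, period):
    """Naive additive decomposition of a regular EVI series:
    trend    = centred moving average over one full period,
    seasonal = mean deviation from the trend per position in the cycle,
    residual = whatever remains. Edge samples are distorted by padding."""
    n = len(evi)
    kernel = np.ones(period) / period
    trend = np.convolve(evi, kernel, mode="same")
    detrended = evi - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, n // period + 1)[:n]
    return trend, seasonal, evi - trend - seasonal

# Synthetic 6-year monthly EVI: weak upward trend + annual cycle
t = np.arange(72)
evi = 0.3 + 0.001 * t + 0.15 * np.sin(2 * np.pi * t / 12)
trend, seasonal, resid = decompose(evi, 12)
print(round(seasonal.max(), 2))  # recovered amplitude close to the true 0.15
```

Once the recurring seasonal component is removed, the residual trend can be tested for monotonic change (e.g., with a rank-based trend test), which is the logic behind separating recurring patterns from long-term shifts.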

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Taking the next step in vegetation productivity estimation: towards 10m vegetation class-specific GPP estimates

Authors: Astrid Vannoppen, Kasper Bonte, Bert De Roo, Dr. Tim Ng, Else Swinnen, Laurent Tits, Roel Van
Affiliations: Vito
Monitoring vegetation productivity requires high-quality and long-term vegetation productivity metrics in absolute units. Currently, several operational services¹ enable this monitoring, which is crucial for understanding seasonal ecosystem dynamics and responses to climate variability and environmental stressors. Light-use efficiency (LUE) models, such as the Copernicus Gross Primary Production (GPP) algorithm, can be used to estimate aboveground GPP in absolute units, after partitioning the total GPP into above- and below-ground GPP. The Copernicus GPP algorithm applies the simplified LUE concept to estimate GPP (g C/m² day). LUE biomass models are based on the theory that vegetation growth is proportional to the time-integral of light absorbed and used in photosynthesis by vegetation. The GPP algorithm describes how absorbed photosynthetically active radiation (estimated by combining FAPAR, incoming solar radiation, and a climatic efficiency factor) is converted to gross biomass by combining it with biome-specific maximum LUE terms and several stress factors. Within the Horizon Europe project EvoLand and the EEA-funded operational service project HR-VPP2, the Copernicus GPP algorithm is being adapted to enhance the spatial resolution and vegetation-specific accuracy, aiming to provide vegetation class-specific GPP estimates at 10 m resolution on a 10-daily basis. The model will incorporate Sentinel-2-derived FAPAR, significantly improving the spatial resolution. Additionally, gap-filling the FAPAR aims to better model small temporal variations in GPP, particularly in croplands, where current products fail to capture peak values or small variations, leading to an underestimation of GPP. The vegetation-specific accuracy of the Copernicus GPP algorithm is improved by re-calibrating the LUE term for specific land cover classes using all available Fluxnet data.
This re-calibration leverages high-resolution land cover information by integrating the HRL-VLCC dataset with the CLC+ land cover map. The Copernicus GPP model has also been adapted to account for the different photosynthetic pathways of C3 and C4 crops. With the release of the pan-European crop type maps in HRL-VLCC, it will become possible to make this distinction, ensuring an improvement in the GPP estimates. These significant improvements to the Copernicus GPP algorithm will be presented during the LPS session. ¹ https://land.copernicus.eu/en/products/vegetation
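The simplified LUE concept described above reduces to a product of a few terms; the sketch below uses illustrative parameter values (not the operational Copernicus coefficients, which are biome-specific and re-calibrated as described):

```python
def gpp_lue(fapar, global_radiation, eps_max, stress=1.0, par_fraction=0.48):
    """Simplified light-use-efficiency model:
    GPP (g C m-2 day-1) = APAR x maximum LUE x stress scaling,
    with APAR = FAPAR x PAR and PAR taken as a fixed fraction of
    global radiation (MJ m-2 day-1). All values here are illustrative."""
    apar = fapar * global_radiation * par_fraction  # absorbed PAR, MJ m-2 day-1
    return apar * eps_max * stress

# Example: dense cropland canopy on a clear summer day, mild water stress
print(round(gpp_lue(fapar=0.8, global_radiation=28.0,
                    eps_max=1.8, stress=0.85), 2))
```

The sketch makes the two improvement levers of the abstract concrete: FAPAR (now Sentinel-2-derived and gap-filled, improving the 10 m temporal signal) and eps_max (re-calibrated per land cover class, including the C3/C4 distinction).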

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Precision Phenology: Proper Propagation of Uncertainties For The Validation of Land Surface Phenology Products

Authors: Harry Morris, Luke Brown, Dr. Jadu Dash, Niall Origo
Affiliations: National Physical Laboratory, University of Salford, University of Southampton
Land surface phenology (LSP) is the study of seasonal patterns in plant phenophases derived from satellite time series of vegetation indices or biophysical variables. It plays an essential role in monitoring, at local and global scales, the response of the terrestrial biosphere to environmental changes such as water stress and changes in temperature. Changes in the timing and duration of phenological events can influence the carbon cycle, the water cycle, and interactions between species. Furthermore, these products are often used to assess the variability of carbon sequestration globally. Phenological products have currently reached validation stage 2 of the CEOS Land Product Validation (LPV) hierarchy, meaning that product accuracy has been evaluated over a significant set of global sites and that the temporal and spatial consistency of several satellite products has been intercompared. Currently, validation reference sets focus on species-specific phenological observations, such as Nature's Calendar in the UK, or ground-based imaging, such as the global Phenocam network and GPP measurements from flux towers. However, most products lack an uncertainty assessment due to the lack of uncertainties in both the satellite and ground reference measurements. Characterising uncertainty in these products is essential for further understanding the uncertainties in estimates of carbon and water cycling from terrestrial biosphere models. There has been a recent focus on the use of automated systems for the validation of biophysical products such as Leaf Area Index (LAI) and fraction of Absorbed Photosynthetically Active Radiation (fAPAR), through the development of wireless fAPAR networks and automated Digital Hemispherical Photography (DHP) systems. Such systems characterise the temporal seasonality of vegetated sites.
However, they also offer a unique opportunity for propagating uncertainties associated with individual measurements through to phenological metrics such as the Start of Season (SOS), Length of Season (LOS) and End of Season (EOS). This would allow for a metrological approach to the validation of LSP products. Such networks have been deployed in forests and agricultural sites across the globe by the National Physical Laboratory (NPL) and by the Ground-Based Observations for Validation (GBOV) consortium. In this poster, an overview of the automated datasets and their uncertainty characterisation is shown. Furthermore, a metrological method for propagating these uncertainties through the data smoothing and phenological metric derivation is demonstrated using the Community Metrology Toolkit (CoMet) python package. Finally, we propose an approach for the validation of LSP products over these reference sites.
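One standard way to propagate per-observation uncertainties to a phenometric is Monte Carlo sampling; the sketch below uses a synthetic fAPAR season, an assumed 50%-of-amplitude SOS definition, and a simple moving-average smoother, all of which are illustrative stand-ins for the CoMet-based method:

```python
import numpy as np

rng = np.random.default_rng(0)

def sos_doy(doy, vi):
    """Start of season: first day the (smoothed) index crosses
    50% of its seasonal amplitude (an assumed, illustrative definition)."""
    thresh = vi.min() + 0.5 * (vi.max() - vi.min())
    return doy[np.argmax(vi >= thresh)]

# Hypothetical 10-daily fAPAR season with per-observation uncertainty
doy = np.arange(1, 366, 10)
fapar = 0.2 + 0.5 / (1 + np.exp(-(doy - 120) / 15))
u = 0.03  # standard uncertainty per observation

# Monte Carlo: perturb observations, smooth, re-derive SOS for each draw
draws = [sos_doy(doy, np.convolve(fapar + rng.normal(0, u, fapar.size),
                                  np.ones(3) / 3, mode="same"))
         for _ in range(500)]
print(np.mean(draws), np.std(draws))  # SOS estimate and its spread
```

The spread of the resulting SOS draws is the propagated uncertainty of the phenometric, which is exactly what a metrological comparison between ground reference and satellite LSP products needs on both sides.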

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Intercomparison of Satellite Vegetation Phenology Products Across Europe: Insights From MODIS, VIIRS, MR-VPP and HR-VPP Phenology Products

Authors: Mr. Jose A. Caparros-Santiago, Dr. Victor Rodriguez-Galiano, Dr. Getachew M. Mulualem, Dr. Christophe Lerebourg, Dr. Jadu Dash
Affiliations: Departamento de Geografía Física y Análisis Geográfico Regional, Universidad de Sevilla, School of Geography and Environmental Sciences, University of Southampton, ACRI-ST, Sophia-Antipolis
Satellite-derived vegetation phenology products are essential tools for tracking ecosystem dynamics and assessing vegetation responses to climate variability and change. Land surface phenology products, such as the Collection 6 MODIS Land Cover Dynamics product (MCD12Q2 C6), the VIIRS Global Land Surface Phenology product (VNP22Q2), the Medium-Resolution Vegetation Phenology and Productivity product (MR-VPP), and the pan-European High Resolution Vegetation Phenology and Productivity product (HR-VPP), offer phenometrics at varying spatial resolutions, each with unique strengths and limitations. The MCD12Q2 C6 product estimates vegetation phenology using weekly EVI2 (two-band Enhanced Vegetation Index) composites at a spatial resolution of 500 m. A spline-based technique is applied to the EVI2 time series, providing a flexible fitting approach that minimises noise and disturbances while preserving the seasonal signal. Phenometrics are derived using relative thresholds tailored to capture distinct phenological stages. These thresholds correspond to green-up (15%), green-up midpoint (50%), and maturity (90%) in the pre-maximum segment of the seasonal curve, and start of senescence (90%), senescence midpoint (50%), and dormancy (15%) in the post-maximum segment. As a benchmark product for long-term phenological monitoring, MCD12Q2 C6 has provided continuous global data since the early 2000s, making it suitable for understanding vegetation dynamics and their responses to environmental change over the last 20 years. Building on MODIS capabilities, the VNP22Q2 product offers three-day EVI2 composites at the same spatial resolution, significantly increasing the temporal frequency for detecting vegetation transitions. The VNP22Q2 product uses the Hybrid Piecewise Logistic Model (HPLM) to identify key phenological dates (green-up onset, maturity onset, senescence onset, and dormancy onset) through analysis of the curvature change rate.
Available since 2012, VNP22Q2 ensures continuity with the MCD12Q2 C6 phenological record. The MR-VPP product provides phenological data for the European continent for the period 2000 to 2021, derived from MODIS time series at a spatial resolution of 500 m. It uses the Plant Phenology Index (PPI) to estimate key phenometrics using a threshold-based method. The HR-VPP product, based on Sentinel-2 data, represents a significant advancement in spatial resolution, offering phenological metrics at 10 m. By utilising the PPI and fitting seasonal trajectories with a double logistic function, HR-VPP provides detailed estimations of the start of season (SOS) and end of season (EOS) using thresholds of 25% and 15% of seasonal amplitude, respectively. Its high spatial resolution makes it uniquely suited for localised and regional studies, enabling the analysis of phenological dynamics in heterogeneous landscapes and specific land cover types where coarser-resolution datasets may be insufficient. Thus, the phenological products differ in their spatial resolution, temporal frequency, and methods for deriving phenological metrics. This study focuses on comparing these phenological products to assess their consistency, identify discrepancies, and evaluate their complementary roles in multiscale phenological monitoring. This evaluation will provide information to refine LSP monitoring tailored to Europe's heterogeneous landscapes. The analysis leverages latitudinal and altitudinal gradients to evaluate each product's ability to capture vegetation phenological responses to specific bioclimatic and topographic conditions across Europe. Specifically, a latitudinal transect was defined from the Mediterranean to the boreal regions to assess phenological dynamics across different ecosystems. In addition, two elevation transects were analysed, focusing on areas with large elevation gradients: the Alps and the Pyrenees.
The intercomparison focuses on specific and unchanged land cover types, minimising confounding factors and ensuring a robust comparison of phenological metrics across products. The intercomparison leverages statistical approaches, including the Pearson correlation coefficient to evaluate the strength of relationships between products, p-values to assess the significance of correlations, and Root Mean Square Error (RMSE) to quantify the magnitude of discrepancies. These metrics provide a comprehensive framework for identifying product alignment and areas requiring refinement. Future research should prioritise exploring additional datasets to further deepen our understanding of global vegetation dynamics and their response to environmental changes.
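The amplitude-threshold phenometrics described above (the HR-VPP convention of 25% of seasonal amplitude for SOS and 15% for EOS) can be sketched on a synthetic single-season curve; the Gaussian seasonal shape below is purely illustrative:

```python
import numpy as np

def sos_eos(doy, ppi, sos_frac=0.25, eos_frac=0.15):
    """HR-VPP-style phenometrics: SOS is where the seasonal curve first
    rises above sos_frac of its amplitude, EOS where it last falls below
    eos_frac (both fractions measured above the seasonal minimum)."""
    amp = ppi.max() - ppi.min()
    above_sos = ppi >= ppi.min() + sos_frac * amp
    above_eos = ppi >= ppi.min() + eos_frac * amp
    sos = doy[np.argmax(above_sos)]                       # first upward crossing
    eos = doy[len(ppi) - 1 - np.argmax(above_eos[::-1])]  # last downward crossing
    return sos, eos

# Synthetic single-season daily PPI curve peaking at DOY 190
doy = np.arange(1, 366)
ppi = np.exp(-((doy - 190) / 60.0) ** 2)
sos, eos = sos_eos(doy, ppi)
print(sos, eos)  # SOS before the peak, EOS after it
```

Because each product applies different fractions (or, for MCD12Q2/VNP22Q2, curvature-based dates) to differently smoothed curves, systematic SOS/EOS offsets between products are expected even over identical vegetation, which is precisely what the RMSE part of the intercomparison quantifies.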

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: EnrichedEuroCrops: Integrating Satellite Data for Crop Type and Phenology Assessment

Authors: Michael Engel, Colin Moldenhauer, Marco Körner
Affiliations: Technical University of Munich (TUM)
We present an extension of the EuroCrops dataset incorporating Copernicus Sentinel-2 satellite data. The original dataset is a collection of publicly available self-declared crop type labels in the European Union. The inclusion of multispectral satellite data significantly enhances the dataset, providing a more detailed and accurate representation of agricultural fields. This new version supports a wide range of research applications, from crop type classification to phenology assessment. By leveraging advanced statistical measures and ensuring data consistency across different projections and processing levels, we aim to facilitate research and innovation in the agricultural and environmental sciences. For the sake of spectral precision, we considered every single pixel of a Sentinel-2 tile as a polygon and computed its overlap with the field parcel itself, weighting its contribution to the statistics accordingly. This approach provides a more accurate representation of the data. By doing so, researchers can analyze even very small parcels effectively, which would otherwise be missed by statistics computed only from pixels whose midpoints fall inside a field parcel polygon. Training neural networks requires large amounts of data to make accurate predictions on unseen data. By including small parcels in the training data, researchers may improve the performance of their models. The weighted statistics ensure that these small parcels are appropriately represented, enhancing the overall quality of the dataset. This feature is especially valuable for applications like crop classification, yield prediction, and environmental monitoring, where small parcels play an important role as well. To ensure consistency and usability, we first homogenized the field parcel data from all partial datasets per country of the original dataset.
Every entry within our dataset is delivered in the same projections, namely WGS84 and a corresponding best-fitting UTM zone. Additionally, we computed the pole of inaccessibility (POI) of each polygon and its area based on a reprojection to the Lambert azimuthal equal-area projection centred on that POI. This enables users to assess the distortion bias of spectral statistics based on satellite data within a specific projection, such as UTM for Sentinel-2. We deliver the computed statistics (standard and weighted), pixel weights, and the respective raw data for both L1C and L2A processing levels. These data enable users to compute their own statistics involving multiple field parcels at once, for example. Regarding atmospherically corrected data, we ensured that only the consistent parts of the tiles are used, i.e., the parts as close as possible to the center points of the respective tiles. The processing was done using AWS and its corresponding open data buckets as maintained by Sinergise. For the sake of interoperability with other statistical data, such as those from Eurostat, we added the NUTS3 region of each field parcel. This allows users to incorporate additional information, such as the gross domestic product or similar metrics, into their analyses. In summary, the EnrichedEuroCrops dataset offers a robust and comprehensive tool for researchers. The integration of satellite data, the provision of both standard and weighted statistics, and the homogenization of field parcel data across countries ensure that users have access to high-quality, consistent, and detailed information. This dataset is meant to drive advancements in crop classification, yield prediction, phenology assessment, and beyond, supporting the scientific community in addressing critical agricultural and environmental challenges.
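The effect of the overlap-weighted statistics described above can be sketched as a weighted mean and standard deviation; the NDVI values and overlap fractions below are invented for illustration:

```python
import numpy as np

def weighted_stats(values, weights):
    """Overlap-weighted mean and standard deviation of pixel values,
    where each weight is the fraction of the pixel covered by the parcel."""
    v = np.asarray(values, float)
    w = np.asarray(weights, float)
    mean = np.average(v, weights=w)
    var = np.average((v - mean) ** 2, weights=w)
    return mean, np.sqrt(var)

# Toy parcel touching four pixels: two fully inside, two sliver overlaps
ndvi = np.array([0.80, 0.78, 0.20, 0.25])   # last two are mostly road/neighbour
frac = np.array([1.00, 1.00, 0.05, 0.02])   # overlap fraction per pixel

w_mean, w_std = weighted_stats(ndvi, frac)
print(round(w_mean, 3))       # dominated by the fully covered pixels
print(round(ndvi.mean(), 3))  # naive unweighted mean is biased low
```

For a parcel smaller than one pixel, every weight is a small fraction, but the weighted mean still reflects the parcel's spectrum, whereas a midpoint-based rule might select no pixel at all.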

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Multi-decadal temporal reconstruction of Sentinel-3 biophysical trait maps

Authors: Dávid D.Kovács, Pablo Reyes-Muñoz, Katja Berger, Viktor Ixion Mészaros, dr. Gabriel Caballero, Dr. Jochem Verrelst
Affiliations: University of Valencia, GFZ Helmholtz Centre for Geosciences
Earth observation (EO) satellites aim to provide imagery for long-term environmental assessment to monitor and analyze vegetation changes and dynamics; however, each mission's lifespan limits the temporal extent of its record. The Sentinel-3 (S3) satellite was launched in 2016, and its sensor, the Ocean and Land Colour Instrument (OLCI), provides key data that enable vegetation monitoring. However, despite providing continuity for previous EO missions, the longest single-sensor data stream available from OLCI is still shorter than a decade, restricting the opportunities for long-term consistent vegetation monitoring. To address this shortcoming, this study develops a novel temporal reconstruction methodology to extend S3 vegetation products (VPs), including Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), Fractional Vegetation Cover (FVC), and Leaf Chlorophyll Content (LCC), back to 2002. Using the Multi-Output Gaussian Process Regression (MOGPR) algorithm, which learns the underlying interconnections within several time series variables, the consistency of the S3 temporal reconstruction was iteratively tested using different predictor variables (e.g., MODIS NDVI, LAI, and FAPAR). Thanks to the integration of long-standing, reliably produced vegetation products from MODIS since 2002, the MOGPR algorithm could successfully fuse and guide S3-based VPs to simulate their multi-decadal period. Because the employed algorithm is a Gaussian process, which inherently provides uncertainties alongside its mean estimates, the simulated multi-decadal results also feature confidence intervals. Sensor fusion and training used data from 2019, whereas all other available years since 2002 were used for validation. Reconstructed profiles were prototyped and validated across diverse land covers using Copernicus Land Monitoring Service and in-situ reference data from the GBOV network.
The most consistent reconstructed product was FVC (R = 0.96, NRMSE = 0.17) over mixed forests compared to CGLS estimates. FVC also yielded the highest validation statistics (R = 0.93, ρ = 0.92, NRMSE = 0.14) over croplands. Due to the scarcity of reference LCC products, the reconstructed S3-LCC profiles were compared to the OLCI and MERIS Terrestrial Chlorophyll Indices (OTCI, MTCI). The correlation metrics provided strong evidence of the reconstructed LCC product's integrity, with the highest correlations over deciduous broadleaf forests, mixed forests, and croplands (R > 0.9). For a spatio-temporal assessment of the reconstructed S3 products, the MOGPR algorithm also produced vegetation maps, which were correlated with the CGLS reference data. The results highlight MOGPR's potential to harmonize multi-sensor datasets, enabling robust long-term environmental assessments. This framework supports the continuity of Earth observation data for studying vegetation dynamics and advancing long-term climate impact research.
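The Gaussian-process backbone of this approach can be sketched in its single-output form; MOGPR additionally learns cross-covariances between several series (e.g., the S3 and MODIS products), which this minimal sketch, with invented LAI values, deliberately omits:

```python
import numpy as np

def gp_predict(x_tr, y_tr, x_te, length=30.0, sigma_f=0.3, sigma_n=0.02):
    """Single-output GP regression with an RBF kernel, returning the
    posterior mean and standard deviation at the test times; the
    per-time-step uncertainty is what yields confidence intervals."""
    def k(a, b):
        return sigma_f ** 2 * np.exp(
            -0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_tr, x_tr) + sigma_n ** 2 * np.eye(len(x_tr))
    Ks = k(x_te, x_tr)
    ym = y_tr.mean()                        # centre around the series mean
    mean = Ks @ np.linalg.solve(K, y_tr - ym) + ym
    cov = k(x_te, x_te) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Hypothetical LAI series with a long observation gap between day 20 and 80
x_tr = np.array([0.0, 10.0, 20.0, 80.0, 90.0, 100.0])
y_tr = np.array([0.5, 0.9, 1.6, 2.1, 1.5, 1.0])
x_te = np.linspace(0.0, 100.0, 11)
mean, std = gp_predict(x_tr, y_tr, x_te)
# std is largest inside the gap, where no observations constrain the fit
```

In the multi-output setting, a correlated series (such as a long MODIS record) shrinks the posterior uncertainty across the unobserved S3 years in exactly the way the single-series case shrinks it near observations.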

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Wald5Dplus And Beyond: An Open Benchmark Dataset For Forest Characterization From Sentinel-1 And -2 Time Series

Authors: M. Sc. Sarah Hauser, Andreas Schmitt
Affiliations: Hochschule München University Of Applied Sciences, Institute for Applications of Machine Learning and Intelligent Systems (IAMLIS)
The demand for high-quality reference data in AI and machine learning (ML) applications is critical, especially in forestry, where detailed and accurate monitoring of structural and ecological dynamics over time is essential. While AI and ML show considerable promise in deriving forest attributes from satellite data, their success hinges on the availability of robust, multi-sensor, and multi-temporal datasets [1]. Among the various fusion techniques available [3], one stands out for its advanced capabilities: hypercomplex bases (HCBs) [2]. HCBs enable a pixel-level fusion that integrates both spectral and structural information from radar (Sentinel-1) and optical (Sentinel-2) data. This approach maintains orthogonality, preserving the integrity of each sensor’s unique contributions without distortion, and is fully reversible, which safeguards the quality of data for repeated analyses. The result is a cohesive and consistent dataset that allows for precise monitoring of forest attributes across time series. The Wald5Dplus dataset leverages these HCB-based methods [2] within its Analysis Ready Data (ARD) cubes, which organize and compress high-dimensional data in a format that is readily accessible and compatible with GIS and image processing software. This structure is highly efficient, allowing seamless integration of up to 512 data layers annually, which include both mean annual intensity and detailed temporal variations. These ARD cubes enable AI-driven applications to access a wealth of information with minimal processing, supporting rapid deployment of ML models for tracking forest phenology, canopy structure, and biomass. Wald5Dplus serves as an open-access benchmark dataset [4] specifically designed to advance AI in forest ecosystem monitoring. 
Funded by the German Aerospace Center (DLR) with support from the German Federal Ministry for Economic Affairs and Climate Action, Wald5Dplus addresses the rigorous data requirements of AI applications in environmental science, providing a reliable foundation for scalable, high-resolution forest monitoring across diverse ecological regions [1]. The Wald5Dplus dataset is organized around three ecologically and structurally distinct test sites in Germany: Bavarian Forest National Park, Steigerwald, and Kranzberg Forest. These sites were chosen to encompass diverse environmental conditions, providing a robust foundation for training and validating AI models adaptable to various forest ecosystems [1]:
- Bavarian Forest National Park offers a mix of mature and regenerating stands at high elevation, capturing variability in tree height and biomass density.
- Steigerwald, dominated by deciduous species at lower elevation, captures seasonal variations in leaf coverage and species composition, ideal for phenology studies.
- Kranzberg Forest, a controlled experimental site, enables analysis of forest management impacts under consistent conditions.
Together, these sites form a comprehensive dataset that encompasses a broad spectrum of forest types, supporting the development of robust AI models adaptable to different ecological settings. The reference data within Wald5Dplus, derived from high-resolution LiDAR and multispectral sources, provide extensive, high-precision metrics for each UTM 33N 10 m pixel [1]. Key metrics include crown area, tree count, area coverage, crown volume, and mean tree height, establishing Wald5Dplus as an invaluable resource for AI applications in forest monitoring. These high-resolution metrics ensure the dataset's applicability across a wide range of research and operational scenarios in forestry and environmental monitoring.
Wald5Dplus integrates Sentinel-1 SAR and Sentinel-2 optical data through a specialized multi-sensor fusion using hypercomplex bases (HCBs) [2]. This process establishes a shared radiometric frame, allowing pixel-level fusion that combines radar and multispectral optical data with minimal distortion. Initially, Sentinel-2 reflectances are transformed into "Kennaugh-like" elements to align with Sentinel-1’s Kennaugh elements, normalizing spectral bandwidths to balance the dominance of the NIR channel. A linear transformation into Kennaugh space then produces one total intensity element (K0) and seven spectral/polarimetric components (K1–K7). By stacking 64 acquisitions over a year, the dataset creates a temporal HCB structure with both a mean annual reflectance layer (K*_0) and 63 temporal layers (K*_1 to K*_63) that document seasonal changes. The data are stored as UInt8 for efficient, lossless storage, accessible to GIS and image processing platforms [2]. This fusion yields an advanced Analysis Ready Data (ARD) cube comprising up to 512 layers annually, i.e. 64 × 8 fused Kennaugh-like elements [1]. During initial testing, multiple regression models, including Random Forest (RF), linear regression, Support Vector Regression (SVR), and gradient boosting, were tested to determine the most effective model for predicting forest parameters. RF consistently outperformed the other models, delivering the highest accuracy and adaptability across diverse forest metrics. Additionally, the effectiveness of statically fused data (spectrally and polarimetrically fused) was evaluated against dynamically fused data, which integrates spectral, polarimetric, and temporal dimensions. The dynamic fusion approach yielded significantly better results, reinforcing the importance of temporal information in capturing forest dynamics.
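The temporal stacking described above (one annual mean layer plus 63 temporal layers, compressed to UInt8) can be sketched roughly as follows; the function name and the simple linear rescaling are illustrative assumptions, not the reversible HCB quantisation used in Wald5Dplus:

```python
import numpy as np

def build_ard_cube(acquisitions):
    """Stack one fused Kennaugh-like element from 64 acquisitions into a
    64-layer cube: layer 0 is the annual mean, layers 1..63 hold temporal
    deviations from that mean (illustrative, lossy uint8 rescaling)."""
    acq = np.asarray(acquisitions, dtype=np.float32)  # shape (64, H, W)
    mean_layer = acq.mean(axis=0)                     # annual mean intensity
    deviations = acq[1:] - mean_layer                 # 63 temporal layers
    cube = np.concatenate([mean_layer[None], deviations], axis=0)
    # Rescale to 0..255 for compact UInt8 storage; keep (lo, hi) to invert
    lo, hi = float(cube.min()), float(cube.max())
    cube_u8 = np.round(255 * (cube - lo) / (hi - lo + 1e-12)).astype(np.uint8)
    return cube_u8, (lo, hi)

acq = np.random.default_rng(0).normal(size=(64, 8, 8))
cube_u8, scale = build_ard_cube(acq)   # 64 layers, uint8
```

Eight such cubes, one per fused Kennaugh-like element, would yield the up to 512 annual layers mentioned above.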
Building on these findings, Wald5Dplus employs an ensemble-based Random Forest (RF) model optimized for multi-output regression, which emerged as the most robust approach. This RF ensemble, trained over two years of data across the three AOIs, serves as a meta-model that learns forest attributes such as tree height, crown volume, and species classification using only a subset of reference data. By strategically splitting AOIs and employing subsets of data, such as tiles or single-year training sets, the RF model is capable of building independent yet robust models that maintain predictive accuracy while generalizing across forest regions. This method reduces dependency on exhaustive data, making it more practical and adaptable for large-scale forest monitoring [1]. Tests confirm that the project's RF ensemble models achieve excellent predictive accuracy for critical forest variables, such as tree height, crown volume, and species classification, within the individual AOIs using the fused data [1, 6]. Building on these results, ensemble RF models were developed by training on data tiles across the AOIs and refining them through cross-validation. These RF models demonstrated strong predictive performance across various forest types and structural attributes. In cross-AOI testing, where models trained in one area were applied to a distinct, geographically separate AOI, results remained robust, albeit with a slight decrease in accuracy, an expected outcome given the differences in ecological and spatial characteristics. Nevertheless, the cross-AOI results underscore the adaptability of these models, affirming their capability to generalize effectively beyond the initial AOIs. This transferability suggests significant potential for applying Wald5Dplus-trained models in broader, scalable applications, supporting forest monitoring initiatives across similar mid-European forest ecosystems and potentially beyond.
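The multi-output RF setup can be illustrated with scikit-learn, whose RandomForestRegressor natively handles multiple targets; the array shapes and synthetic data below are placeholders, not the Wald5Dplus features:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins: X holds per-pixel fused temporal layers, y holds
# reference metrics (e.g. tree height, crown volume, tree count)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.normal(size=(200, 3))

# RandomForestRegressor supports multi-output regression out of the box
rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(X, y)
preds = rf.predict(X[:10])   # one row per pixel, one column per attribute
```

An ensemble in the sense described above would train several such models on disjoint AOI or tile subsets and aggregate their predictions.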
In parallel, radar-only studies in the Bavarian Forest National Park, one of Wald5Dplus’s test sites, have shown the potential of dual-frequency polarimetric SAR for estimating forest parameters [5]. However, despite promising results within specific areas, these models faced challenges in regional transferability, especially for attributes like tree height and crown area that rely on spectral diversity. The limitations of radar-only data underscore the need for multi-sensor fusion, which Wald5Dplus provides by integrating both SAR and optical data. This multi-sensor fusion offers improved interpretability and predictive power, making it a significant upgrade over single-sensor datasets. As an open-access benchmark dataset, Wald5Dplus is hosted on Zenodo [4], making it a versatile resource for regional and global environmental monitoring. Its availability allows researchers worldwide to rigorously test, validate, and refine AI models across varied climate and forest conditions, advancing the development of scalable, AI-driven forest management solutions and contributing to sustainable, data-driven approaches to forest conservation and climate resilience.

References
[1] Hauser, S., Ruhhammer, M., Schmitt, A., & Krzystek, P. An Open Benchmark Dataset for Forest Characterization from Sentinel-1 and -2 Time Series. Remote Sens. 2024, 16, 488.
[2] Schmitt, A., Wendleder, A., Kleynmans, R., Hell, M., Roth, A., & Hinz, S. Multi-Source and Multi-Temporal Image Fusion on Hypercomplex Bases. Remote Sens. 2020, 12, 943.
[3] Zangl, R., Hauser, S., & Schmitt, A. Guidelines for the Practical Use of Image Data Fusion in Remote Sensing. GIS-Zeitschrift für Geoinformatik, 4/2022.
[4] Hauser, S., Schmitt, A., Krzystek, P., & Ruhhammer, M. (2024). Wald5Dplus (1.0.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.10848838
[5] Ruhhammer, M., Hauser, S., Schmitt, A., & Wendleder, A. Forest Parameter Estimation from Dual-Frequency Polarimetric SAR. Conference Paper, 2024.
[6] Hauser, S., Schmitt, A., & Krzystek, P. Forest5Dplus: An Open Benchmark Data Set for the Estimation of Forest Parameters from Sentinel-1 and -2 Time Series with Machine Learning Methods. Conference Paper, 2023.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Standardizing Spectral Mixing Approaches for Global Non-Photosynthetic Vegetation Mapping with Hyperspectral EnMAP data

Authors: Shawn Schneidereit, Kushnerniva Laurent, Dr. Katarzyna Lewińska, Patrick Hostert, Dr. Akpona Okujeni
Affiliations: Humboldt-Universität zu Berlin, GFZ Potsdam
Spectral unmixing techniques enable the estimation of fractional cover for photosynthetically active vegetation (PV), non-photosynthetic vegetation (NPV), and substrate (SUB), providing critical insights into ecosystem states and vegetation health. Among these, NPV holds particular value for monitoring vegetation stress, such as drought impacts, or assessing fuel conditions for wildfire risk. However, accurately quantifying NPV fractional cover using multispectral imagery remains challenging due to its spectral similarities with soils and other substrate materials. Hyperspectral data, with its ability to resolve cellulose and lignin absorption features in the shortwave infrared (SWIR), has shown strong potential for separating NPV from SUB. NPV is therefore considered a key vegetation variable that is expected to be quantified with improved accuracy using the latest hyperspectral satellite missions such as EnMAP, PRISMA, or EMIT. Despite these advances, most NPV quantification efforts have been restricted to local studies, specific biomes, or short temporal windows. Global-scale analyses remain limited, constrained by spectral heterogeneity, biome diversity, and environmental variability across spatiotemporal scales, as well as the lack of standardized spectral unmixing approaches that can be universally applied over large areas. Using globally distributed hyperspectral data from EnMAP, this study contributes to the development of a standardized spectral unmixing framework for reliable NPV fractional cover estimation across diverse ecosystems and global biomes. To enable applications across large spatial scales, we propose adopting standardized spectral mixing approaches based on consistent mixing spaces and endmember locations within these mixing spaces. The mixing space defined by the Normalized Difference Vegetation Index (NDVI) and the Cellulose Absorption Index (CAI) provides a robust framework in this regard.
CAI and NDVI are surrogates for dry and green vegetation, with PV, NPV, and SUB endmembers being consistently located at the vertices of the triangular feature space spanned between NDVI and CAI. This universal mixing space ensures cross-biome and temporal consistency in endmember identification. A detailed analysis of the CAI-NDVI mixing space serves as a key step towards global NPV mapping. EnMAP’s global hyperspectral coverage facilitates data collection across ecological gradients and phenological cycles. For this study, EnMAP tiles were selected through stratified sampling along temperature and precipitation gradients to capture diverse biome characteristics. NDVI-CAI mixing spaces were computed for each tile, and their topologies and distributions were analyzed. Candidate endmembers were identified and validated using high-resolution reference datasets. This analysis highlights cross-biome consistency and endmember variability, foundational for robust spectral unmixing applications. The NDVI-CAI mixing space consistently displayed triangular structures across biomes, with vertices representing PV, NPV, and SUB. PV and NPV endmembers encompassed diverse vegetation growth forms, while SUB endmembers included a range of soil and substrate types. Although the triangular topology was broadly consistent, density variations were observed under extreme climatic conditions. For example, deserts exhibited sparse clustering near the PV vertex due to limited vegetation. Supplementing these gaps with field or laboratory spectra and universal endmember libraries could improve spectral unmixing in such environments. These results underscore the advantages of EnMAP hyperspectral data for resolving cellulose and lignin absorption features, enabling detailed analyses of NDVI-CAI mixing spaces that surpass the capabilities of multispectral sensors. This approach provides a pathway for global NPV fractional cover estimation using standardized unmixing frameworks. 
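The unmixing idea behind the triangular NDVI-CAI mixing space can be sketched as fully constrained linear unmixing with the three endmembers at the triangle vertices; the endmember coordinates below are hypothetical placeholders, not values derived from EnMAP:

```python
import numpy as np

# Hypothetical endmember coordinates (NDVI, CAI) at the triangle vertices
E = np.array([[0.85, 0.02],   # PV: high NDVI, low CAI
              [0.15, 0.25],   # NPV: low NDVI, high CAI
              [0.10, 0.00]])  # SUB: low NDVI, low CAI

def fractions(ndvi, cai):
    """Fully constrained linear unmixing in the 2-D NDVI-CAI space:
    solve E^T f = x subject to sum(f) = 1 (barycentric coordinates)."""
    A = np.vstack([E.T, np.ones(3)])   # 3 equations, 3 unknown fractions
    b = np.array([ndvi, cai, 1.0])
    return np.clip(np.linalg.solve(A, b), 0.0, 1.0)  # PV, NPV, SUB

pv, npv, sub = fractions(0.40, 0.10)   # a pixel inside the triangle
```

Clipping handles pixels slightly outside the triangle; operational workflows would additionally renormalize or use constrained least squares.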
Our findings demonstrate the robustness of NDVI-CAI mixing spaces for fractional cover mapping of PV, NPV, and SUB across diverse biomes. EnMAP’s hyperspectral capabilities significantly enhance NPV differentiation compared to multispectral approaches. Future research will focus on creating global spectral libraries from identified endmembers to support scalable and reliable unmixing workflows. Expanding this analysis to additional hyperspectral sensors, such as EMIT and PRISMA, will validate mixing-space trends and assess the potential for multi-sensor integration. These efforts aim to extend spatial and temporal coverage, cementing hyperspectral data as an essential tool for global vegetation monitoring and ecosystem assessment.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Ecosystem Change Analysis in the European Arctic Permafrost Regions (1984–2024): A Multidimensional Assessment Using Satellite Imagery and Automated Tools

Authors: Teona Daia-Creinicean, Dr Marina Virghileanu, Carmen Bizdadea, Ionut Sandric
Affiliations: Faculty Of Geography, University Of Bucharest
The European Arctic permafrost regions are experiencing rapid ecosystem changes driven by climate change, making them critical areas for long-term environmental monitoring. This study investigates ecosystem dynamics from 1984 to 2024, focusing on vegetation and water bodies, using multidimensional spectral and biophysical indices derived from a combination of Landsat, Sentinel-2, and Sentinel-1 satellite imagery. Key vegetation indices, including NDVI, GNDVI, SAVI, and MSAVI, were analyzed alongside water indices such as NDWI, MNDWI, and AWEI_NSH, as well as Sentinel-1 radar-based metrics, including the Radar Vegetation Index (RVI), the Normalized Difference Radar Index (NDRI), the backscatter coefficient (σ°), the VH/VV backscatter ratio, polarimetric coherence, interferometric coherence, and temporal backscatter variation, to reveal significant trends and anomalies over the 40-year period. A custom framework combining Microsoft Planetary Computer tools and automated scripts was developed to preprocess, analyze, and visualize the vast dataset. These tools facilitated efficient extraction of spectral indices and anomaly calculations, ensuring consistency across the temporal and spatial domains. The integration of optical and radar imagery, particularly Sentinel-1, enabled a more comprehensive assessment of permafrost regions by capturing changes in vegetation and water dynamics under varying conditions, including cloud cover. The results indicate a general increase of approximately 0.3 units in vegetation indices, reflecting higher chlorophyll content and denser vegetation cover, particularly after 2013. Conversely, water indices showed a corresponding decrease of about 0.3 units, suggesting an expansion of inland water surfaces. These complementary trends illustrate the profound effects of climate change on Arctic ecosystems, where thawing permafrost and altered hydrological regimes drive ecological transformations.
Analysis of anomalies relative to the mean values over the study period further highlights a critical turning point around 2013. Vegetation indices shifted from negative anomalies (down to -0.4) in 1984–2013 to positive anomalies (up to +0.2) after 2013, reflecting a notable densification of vegetation cover in recent years. Maximum NDVI values were predominantly observed in the last 7–10 years, while peak NDWI values were concentrated in the earlier decades, underscoring contrasting temporal patterns between vegetation growth and water surface dynamics. This study demonstrates the value of integrating long-term satellite imagery with advanced computational tools to monitor and understand ecosystem responses in permafrost regions. The findings provide crucial insights into the ongoing transformations in Arctic landscapes, emphasizing the need for sustained monitoring to anticipate and manage the impacts of climate change in these sensitive environments.

Acknowledgement: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101086386, EO-PERSIST - A Cloud-Based Remote Sensing Data System for Promoting Research and Socioeconomic Studies In Arctic Environments (https://www.eo-persist.eu).
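The anomaly computation relative to the long-term mean can be sketched in a few lines; the toy NDVI series below is synthetic and only illustrates the sign flip around the record's midpoint, not the actual 2013 turning point:

```python
import numpy as np

def index_anomalies(series):
    """Anomaly of each annual index value relative to the long-term mean."""
    s = np.asarray(series, dtype=float)
    return s - s.mean()

# Synthetic annual NDVI means for 1984-2024 with a monotonic greening trend
years = np.arange(1984, 2025)
ndvi = np.linspace(0.45, 0.75, years.size)
anom = index_anomalies(ndvi)
# Early years fall below the 41-year mean, later years above it
```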

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Evaluating Different Approaches for Medium-Resolution Land Surface Phenology Estimation Using In-Situ Leaf Unfolding Observations of Deciduous Broadleaf Trees in Spain

Authors: Mr. Jose A. Caparros-Santiago, Mr. Guilhem N. Jeannet-Chaves, Mr. Jose Ollega, Prof. Victor Rodriguez-Galiano
Affiliations: Departamento de Geografía Física y Análisis Geográfico Regional, Universidad de Sevilla.
Land surface phenology (LSP), the study of seasonal patterns in vegetated land surfaces using satellite-derived vegetation indices (VIs), is an essential tool for understanding vegetation dynamics at regional to global scales. Phenological trajectories of VIs enable the estimation of phenological metrics, or phenometrics (e.g., start of the growing season (SOS), end of the growing season (EOS)). These phenometrics are ecologically meaningful indicators of key events in vegetation greenness linked to phenological processes. The choice of VI is critical for estimating LSP, as they exhibit varying sensitivities to specific properties of the vegetated land surface (e.g., biomass quantity, canopy structure) and atmospheric conditions (e.g., absorption and scattering by ozone, aerosols, or water vapor). Moreover, satellite-based VIs are inherently prone to noise from atmospheric disturbances, sensor inconsistencies, and data gaps. To address these challenges, smoothing algorithms are indispensable for minimising noise and enhancing the quality of phenological trajectories. The choice of smoothing technique can significantly impact phenometric accuracy. Over-aggressive smoothing may suppress biologically meaningful variations in phenological trajectories, reducing the observed amplitude of vegetation activity and shifting the timing of critical events, such as the maximum or minimum greenness. This can lead to inaccuracies in phenometric estimates. Conversely, insufficient smoothing may leave noise unfiltered, causing short-term anomalies from atmospheric effects or transient disturbances to be misinterpreted as genuine phenological events. LSP extraction techniques also play a crucial role in determining phenometric accuracy. Complex techniques designed to capture rapid phenological transitions are often more susceptible to noise, increasing the risk of over-interpreting non-phenological disturbances. 
Simpler techniques, while effective under specific conditions, may be overly sensitive to extreme values (e.g., minimum or maximum vegetation indices). Consequently, assessing the accuracy of phenometrics is vital, as inaccuracies can propagate through LSP analyses and affect downstream applications, including vegetation phenological trends, ecosystem modelling, and climate impact assessments. This study evaluates the accuracy of SOS estimates using in situ leaf unfolding observations (i.e., first and second leaf unfolding) for deciduous broadleaf trees in the Iberian Peninsula and the Balearic Archipelago. Time series for the Normalised Difference Vegetation Index (NDVI) and the two-band Enhanced Vegetation Index (EVI2) were generated for the period 2001–2019 using the MOD09Q1 product, which is based on 8-day composites with a spatial resolution of 250 metres. The vegetation index time series were smoothed using four algorithms: double logistic function (DL), asymmetric Gaussian function (AG), Savitzky-Golay filter (SG), and Harmonic Analysis of Time Series algorithm (HANTS). SOS was extracted from the smoothed time series using four techniques: first derivative (1DER), second derivative (2DER), threshold (TH), and inflection point (IN). The agreement between SOS estimates and in situ leaf unfolding observations was assessed using Pearson’s correlation coefficient, p-value, and absolute differences. These comparisons were performed for Atlantic and Mediterranean deciduous broadleaf trees across the study region, providing quantifiable insights into the uncertainties of integrating satellite-based and in situ phenological data. The strongest correlations were observed between EVI2-derived SOS estimates and in situ leaf unfolding observations. EVI2 SOS dates also showed lower differences with in situ leaf unfolding dates. 
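One of the smoothing/extraction combinations evaluated here, Savitzky-Golay smoothing followed by a threshold on the seasonal amplitude, might be sketched as follows; the window length, threshold fraction, and toy seasonal curve are illustrative assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter

def sos_sg_threshold(vi, doy, frac=0.5, window=7, poly=2):
    """Smooth the VI series with a Savitzky-Golay filter, then return the
    first day of year where the curve reaches `frac` of its amplitude."""
    smooth = savgol_filter(vi, window_length=window, polyorder=poly)
    level = smooth.min() + frac * (smooth.max() - smooth.min())
    return doy[np.argmax(smooth >= level)]

doy = np.arange(1, 366, 8)                            # 8-day composites
vi = 0.2 + 0.6 * np.exp(-((doy - 180) / 60.0) ** 2)   # toy seasonal curve
sos = sos_sg_threshold(vi, doy)                       # spring green-up date
```

The derivative- and inflection-based techniques mentioned above would replace the threshold rule with extrema of the first or second derivative of the smoothed curve.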
The outperformance of EVI2 could be attributed to its reduced sensitivity to atmospheric disturbances and soil background effects, factors that frequently interfere with NDVI measurements. Additionally, EVI2's heightened sensitivity to subtle changes in vegetation greenness allows it to more accurately capture variations in vegetation dynamics. These attributes make EVI2 particularly robust for monitoring phenological events in heterogeneous landscapes, as it minimises inaccuracies in satellite-derived phenometrics. For Atlantic deciduous broadleaf vegetation, EVI2-derived SOS estimates from the SG-IN combination exhibited the highest and statistically significant correlations with both first leaf unfolding (r = 0.55) and second leaf unfolding (r = 0.54). In contrast, for Mediterranean deciduous broadleaf vegetation, EVI2-derived SOS estimates from the HANTS-2DER combination showed the highest statistically significant correlations, with r = 0.35 for first leaf unfolding and r = 0.48 for second leaf unfolding. Overall, the agreement between SOS estimates and leaf unfolding observations was lower for Mediterranean deciduous broadleaf vegetation. This discrepancy can be attributed to the greater landscape heterogeneity in the Mediterranean region, compounded by the lower representativeness of deciduous broadleaf trees compared to the Atlantic region. Although this study suggests the use of EVI2 and the described methods to estimate SOS for deciduous broadleaf vegetation in the Atlantic and Mediterranean biogeographic regions, future research should prioritise the use of high-resolution spatial data to further improve the ground representativeness of LSP estimates. Satellite platforms such as Sentinel-2 hold significant promise in this regard. 
Improved spatial resolution might refine phenological models and promote a closer match between satellite-derived phenometrics and in situ phenological observations, particularly in ecologically diverse regions such as the Mediterranean.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Forest thinning influences phenological dates and their heterogeneity derived from Sentinel-2 Data

Authors: Haiyin Ye, Dr. Nicole Estrella, Xaver Wangerpohl, Johanna Kauffert, Prof. Dr. Annette Menzel
Affiliations: TUM School of Life Sciences, Ecoclimatology, Technical University of Munich, Institute for Advanced Study, Technical University of Munich
Land surface phenology (LSP) derived from remote sensing data is a valuable tool for forest ecosystem monitoring. In addition to the estimated dates of the phenological stages, the spatial heterogeneity of these estimates is also essential for understanding forest biodiversity and may play a crucial role in determining the resilience of the forest. Former studies have discussed the biotic and abiotic factors related to phenological heterogeneity, including climate change, insect richness, and plant functional traits, which are all factors from the environmental perspective. However, a research gap exists regarding the influence of human activity and management. For the first time, this study explores the potential influences of forest thinning management on the spatial patterns as well as the spatial heterogeneity of LSP derived from the normalized difference vegetation index (NDVI) obtained from Sentinel-2 data. The study areas comprise 22 plots of the Bavarian Growth Monitoring Trials in southern Germany, where oaks are the dominant species and the types of forest thinning practices are documented. The phenological metrics were inferred from the seasonal NDVI curves of the years 2017–2022 (approx. 55 measurements per year, except for 2017 with 21 measurements), including start of season (SOS), end of season (EOS), and length of season (LOS). In addition, the NDVI values at five fixed dates of the year were also extracted to explore the inter-annual variations. We were able to show with mixed linear models that thinning treatments affect the LSP and phenological heterogeneity in forests. Specifically, we found that (1) the phenological onset dates, as observed through the mentioned LSP metrics (SOS, EOS, LOS), significantly differ between plots with and without thinning treatments. The parcels with thinning treatment display earlier SOS and EOS, and shorter LOS, compared to parcels without thinning treatment (p < 0.001).
Also consistent with our expectations, (2) the spatial heterogeneity of phenology appears to be significantly higher in the thinned stands (p < 0.01). Finally, our results showed that (3) the influence of thinning treatments on NDVI values at fixed dates of the year becomes progressively weaker as spring progresses, and conversely increases again during EOS. These findings suggest that forest thinning can influence phenological metrics (e.g. SOS, EOS, LOS) as well as the spatial heterogeneity of phenology as perceived by NDVI time series. Based on our remotely sensed indices, our results suggest that forest thinning treatments influence the timing and spatial heterogeneity of greening. This may have important implications for forest management strategies.
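A mixed linear model of the kind used here (fixed thinning effect, random plot intercept) can be sketched with statsmodels; the data frame below is synthetic, with an assumed earlier SOS in thinned plots, and is not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic plot-level records: SOS (day of year), thinning indicator,
# and plot ID used as the random-effect grouping factor
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "plot": rng.integers(0, 22, n),
    "thinned": rng.integers(0, 2, n),
})
df["sos"] = 110 - 4 * df["thinned"] + rng.normal(0, 3, n)  # earlier if thinned

# Mixed linear model: fixed thinning effect, random intercept per plot
fit = smf.mixedlm("sos ~ thinned", df, groups=df["plot"]).fit()
effect = fit.params["thinned"]   # negative: thinned plots green up earlier
```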

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Unexplained InSAR Closure Phases in Areas with Various Land Cover and Climate Conditions

Authors: Yan Yuan, Paco Lopez-Dekker
Affiliations: Delft University of Technology
Closure phases, introduced in Interferometric Synthetic Aperture Radar (InSAR), are defined as the circular summation of three multilooked interferometric phases, φ12+φ23+φ31, and often exhibit non-zero values. These non-conservative closure phases challenge the validity of the implicit phase consistency assumption in SAR interferometry. Phase consistency assumes a geometric interpretation of the interferometric phase, where the expected values of the three interferometric phases are redundant under the condition that their sum, the closure phase, equals zero. Recent studies using real data examples have revealed extensive non-conservative multilooked closure phases arising from phase noise and geophysical signals [1, 2, 3, 4, 5, 6, 7, 8]. Analytical expressions have been developed for two mechanisms: one involves the variation in propagation properties of dielectric media due to soil moisture changes, and the other relates to volume scattering in the presence of perpendicular baselines [2, 1]. Understanding the origin of closure phases is essential for accurate deformation estimation. Furthermore, while the geophysical origin of closure phases has been confirmed, it is not yet fully understood. In this study, we select the extensive Iberian Peninsula as our research area, as it encompasses diverse land cover types [9] and spans multiple climate zones [10]. Quantifying unexplained signals under various geophysical conditions using known mechanisms provides new insights into the geophysical origins of closure phases. Large systematic closure phases of geophysical origin often appear in low-coherence scenarios with high phase noise levels. Estimating closure phases under these conditions requires maintaining a certain level of coherence or applying extensive spatial averaging (multilooking) to reduce noise. To minimize coherence loss due to temporal decorrelation, we use continuous 6-day revisit Sentinel-1 acquisitions to construct closure phases. 
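The closure phase construction itself is straightforward to sketch with NumPy: form the three multilooked interferograms and take the phase of their product. The boxcar multilook below is a simplification of the adaptive multilooking described next; with purely geometric (consistent) phases the closure phase vanishes:

```python
import numpy as np

def closure_phase(s1, s2, s3, looks=5):
    """Multilooked closure phase phi_12 + phi_23 + phi_31 from three
    co-registered complex SAR images (boxcar multilook for simplicity)."""
    def multilook(ifg):
        h = ifg.shape[0] - ifg.shape[0] % looks
        w = ifg.shape[1] - ifg.shape[1] % looks
        return ifg[:h, :w].reshape(h // looks, looks,
                                   w // looks, looks).mean(axis=(1, 3))
    i12 = multilook(s1 * np.conj(s2))
    i23 = multilook(s2 * np.conj(s3))
    i31 = multilook(s3 * np.conj(s1))
    return np.angle(i12 * i23 * i31)   # zero when phases are consistent

# Consistent (purely geometric) phase offsets: the closure phase vanishes
rng = np.random.default_rng(0)
s = rng.rayleigh(size=(20, 20)) * np.exp(1j * rng.uniform(-np.pi, np.pi, (20, 20)))
cp = closure_phase(s, s * np.exp(0.3j), s * np.exp(0.5j))   # all ~0
```

Non-zero values of `cp` on real data are exactly the geophysical and noise-driven inconsistencies the study analyzes.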
For multilooking, we employ an adaptive strategy incorporating a statistical test on amplitudes to select suitable samples for averaging, ensuring they are realizations of the same random distribution. Using the multilooked products, we analyze the statistics of closure phases for sub-regions categorized by various land cover and climate conditions. We start by estimating the standard deviation of the closure phases under the null hypothesis that they arise from phase noise, and then assess the statistical significance of the geophysical closure phases. Next, we evaluate the mean values and percentiles of closure phases and discuss their spatiotemporal patterns. Subsequently, we apply existing interferometric models for soil moisture variation and volume scattering mechanisms [2, 1, 11] to evaluate the weighted linear correlation between modeled and observed closure phases. To quantify the contribution of each component to the closure phases, we calculate the R-squared score to estimate the explainable variance attributed to known mechanisms. For cases with high unexplained variance, we visualize and discuss time-series examples to investigate unexplained signals that are not yet captured by current models. Our results confirm the widespread presence of geophysical closure phases, characterize closure phase signatures in areas with varying geophysical conditions, highlight distinct attributes that reveal the mechanisms underlying the geophysical components, and suggest new opportunities for using closure phases to detect Earth surface variations.

References
[1] Francesco De Zan, Mariantonietta Zonno, and Paco Lopez-Dekker. Phase inconsistencies and multiple scattering in SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing, 53(12):6608–6616, 2015.
[2] Francesco De Zan, Alessandro Parizzi, Pau Prats-Iraola, and Paco López-Dekker. A SAR interferometric model for soil moisture. IEEE Transactions on Geoscience and Remote Sensing, 52(1):418–425, 2013.
[3] Yusuf Eshqi Molan and Zhong Lu. Can InSAR coherence and closure phase be used to estimate soil moisture changes? Remote Sensing, 12(9):1511, 2020.
[4] Yujie Zheng, Heresh Fattahi, Piyush Agram, Mark Simons, and Paul Rosen. On closure phase and systematic bias in multilooked SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing, 60:1–11, 2022.
[5] Simon Zwieback, Xingyu Liu, Sofia Antonova, Birgit Heim, Annett Bartsch, Julia Boike, and Irena Hajnsek. A statistical test of phase closure to detect influences on DInSAR deformation estimates besides displacements and decorrelation noise: Two case studies in high-latitude regions. IEEE Transactions on Geoscience and Remote Sensing, 54(9):5588–5601, 2016.
[6] Simon Zwieback, Scott Hensley, and Irena Hajnsek. Assessment of soil moisture effects on L-band radar interferometry. Remote Sensing of Environment, 164:77–89, 2015.
[7] Francesco De Zan, Paloma Saporta, and Giorgio Gomba. Spatiotemporal analysis of C-band interferometric phase anomalies over Sicily. In EUSAR 2022; 14th European Conference on Synthetic Aperture Radar, pages 1–4. VDE, 2022.
[8] Yan Yuan, Marcel Kleinherenbrink, and Paco López-Dekker. On crop growth and InSAR closure phases. IEEE Transactions on Geoscience and Remote Sensing, 2024.
[9] Marcel Buchhorn, Bruno Smets, Luc Bertels, Bert De Roo, Myroslava Lesiv, Nandin-Erdene Tsendbazar, Linlin Li, and A.J. Tarko. Copernicus Global Land Service: Land Cover 100m: Version 3 Globe 2015–2019: Product User Manual. 2020.
[10] Murray C. Peel, Brian L. Finlayson, and Thomas A. McMahon. Updated world map of the Köppen-Geiger climate classification. Hydrology and Earth System Sciences, 11(5):1633–1644, 2007.
[11] Francesco De Zan and Giorgio Gomba. Vegetation and soil moisture inversion from SAR closure phases: First experiments and results. Remote Sensing of Environment, 217:562–572, 2018.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Which Vegetation Index and Threshold Should Be Used? A Novel Optimisation Framework for Accurate Detection of Key Crop Phenological Phases in Germany

Authors: Abdelaziz Htitiou, Markus Möller, Tanja Riedel, Heike Gerighausen
Affiliations: Julius Kühn Institute (JKI)—Federal Research Centre for Cultivated Plants, Institute for Strategies and Technology Assessment, Julius Kühn Institute (JKI)—Federal Research Centre for Cultivated Plants, Data Processing Department
Accurate information on crop phenological development phases such as emergence, heading, and maturity at the field level is vital for effective agricultural monitoring, crop production surveillance, and yield prediction. While this information is traditionally gathered through field observation surveys, maintaining continuous phenological observations across multiple field locations is often challenging and time-consuming. On the other hand, satellite remote sensing, as a non-contact detection and monitoring technology, offers an effective and objective means of estimating and monitoring crop phenological characteristics at an unprecedented spatial and temporal resolution. By tracking temporal changes in vegetation indices (VIs) through satellite time series, it enables the extraction of phenological metrics, such as the start of the growing season (SOS) and the end of the growing season (EOS), based on when VIs reach specific thresholds. However, three main shortcomings make direct use of this approach challenging. Firstly, optical remote sensing is inherently affected by clouds, snow, and aerosols, which cause discontinuities in the time series and reduce the accuracy and reliability of phenological metric estimations. Secondly, beyond outliers caused by clouds, VI time series may capture off-season growth cycles unrelated to the target crop types. These off-season cycles, often caused by weeds, catch crops, or double cropping, can also interfere with accurate characterisation of the target crop’s phenological metrics. Lastly, the SOS/EOS thresholds used to obtain these satellite-derived phenological metrics are often set empirically, which often leads to metrics that lack direct biological relevance, making it challenging to align them with ground-based phenological observations.
To mitigate these challenges, this study develops a satellite-to-field phenological optimisation framework for characterising key crop phenological phases by combining multiple Sentinel-2 VI time series with ground-based phenological observations. The framework is applied to winter wheat, sugar beet, and corn in a large case study covering three federal states in north-central Germany. It is built on four core pillars: (1) a practical approach, called UE-WS, for enhanced retention of original valid pixels, upper-envelope extraction, and smoothing of nine raw VI time series (NDVI, EVI, EVI2, PPI, NDPI, SAVI, MSAVI, NDRE, and NDGI); (2) an automated method, based on the peaks and pits of the UE-WS-filtered VI curves, for identifying true and valid phenological cycles specific to the target crop; (3) a threshold optimisation approach that combines two years of ground phenological measurements (2019–2020) from the German Weather Service (DWD) with the UE-WS-filtered VI time series and uses Optuna, an advanced hyper-parameter optimisation method, to determine the most suitable vegetation index and the optimal SOS/EOS thresholds for accurately identifying key phenological phases across crops; and (4) validation of the most accurate VI–threshold combinations against independent field-based phenological observations. The results demonstrate that, despite the challenges of using optical data to monitor crop phenology and growth dynamics in such a cloud-prone region, the framework effectively captures the natural growth phases and valid seasonal variations of the studied crops during their in-season cycles using the UE-WS approach and the phenological cycle determination method.
The results also indicate that no single index or SOS/EOS threshold consistently achieves the highest accuracy across all crops and phenological phases. However, selecting both the optimal index and the optimal threshold for each specific crop and phenophase substantially bridges the satellite- and field-based phenological measures, achieving R² values above 0.95 and an RMSE of less than one week. This tailored framework outperforms the use of generic, non-optimised VIs and thresholds, which often fail to account for the unique growth patterns and phenological characteristics of each crop type. It establishes a robust means of adapting remote sensing tools to specific crop and growth-stage requirements, paving the way for more precise, data-driven agricultural decision-making at the field and crop level. Future work aims to incorporate additional data sources and more field-based measurements to further refine these findings and enhance their broader applicability.
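The threshold-optimisation step can be illustrated with a minimal sketch. The study uses Optuna to search for the SOS/EOS thresholds that best match ground observations; the same idea is shown here dependency-free with a plain grid search over candidate thresholds. All function names and the synthetic setup are illustrative, not the authors' implementation.

```python
import numpy as np

def sos_from_threshold(dates, vi, threshold):
    """Satellite-derived start of season (SOS): the first date at which the
    smoothed VI curve reaches the given fraction of its seasonal amplitude."""
    level = vi.min() + threshold * (vi.max() - vi.min())
    return dates[int(np.argmax(vi >= level))]

def optimise_threshold(fields, observed_sos, grid=np.arange(0.05, 1.0, 0.05)):
    """Grid-search the SOS threshold that minimises the RMSE against
    field-observed dates; `fields` is a list of (dates, vi) series."""
    best_t, best_rmse = None, np.inf
    for t in grid:
        pred = np.array([sos_from_threshold(d, v, t) for d, v in fields])
        rmse = np.sqrt(np.mean((pred - observed_sos) ** 2))
        if rmse < best_rmse:
            best_t, best_rmse = t, rmse
    return best_t, best_rmse
```

In the study this search is run per crop, per phenophase, and per vegetation index, so that each combination receives its own tailored index and threshold.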
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Scalable Solutions for Monitoring Rice Phenology: A Comparative Study of Rule-Based and Machine Learning Approaches in South Asia

Authors: Pauline Kimani, Timothy Foster, Dr Ben Parkes, Dr Shu Kee Lam, Dr Alexis Pang
Affiliations: The University of Manchester, The University of Melbourne
Rice production in South Asia is critical to regional and global food security, yet the region experiences significant rice yield gaps and vulnerability to climate change. Timely rice establishment is crucial for rice yields as it ensures phenological phases are synchronised with favourable weather and environmental conditions. However, the few existing rice phenology maps for South Asia are based on coarse-resolution satellite data that obscure field-level variability and have generally not been validated against farm-level data. Our study utilises data from over 9,600 farms across India's Indo-Gangetic Plains, spanning 700,000 km². We assess the ability of an array of satellite sensors (MODIS, Sentinel-2, and Sentinel-1) to capture and map the spatiotemporal variability of rice transplanting in this region. First, we evaluate the effectiveness of optical data and standard rule-based approaches for estimating rice phenology dates. We then employ random forest models for improved estimation of transplanting dates from relevant indices and backscatter coefficients derived from optical and radar data, respectively. Models were developed separately for each sensor and for scenarios that integrate optical and radar data based on these phenological time-series metrics. Satellite-based estimates of transplanting dates were validated against field data to demonstrate the potential for improved estimation of rice phenology using multi-sensor data. In our accuracy assessments, rule-based approaches using optical data showed only moderate performance, highlighting their limitations in cloud-prone regions and emphasizing the need for refined thresholds and extensive validation when applying these techniques. Random forest models provided more robust estimates of transplanting dates, with significant improvements in accuracy when radar data complemented optical indices.
We found complex patterns of variability in rice transplanting dates within our study area, indicated by nuanced changes in rice fields during the transplanting period, as influenced by monsoon timing and socio-economic factors such as irrigation access. Our high-resolution rice transplanting maps can help to identify areas prone to delayed transplanting, paving the way for developing actionable insights to support farmers in optimizing planting practices under climate risk. By refining methodologies and integrating machine learning, our study offers scalable approaches to monitor crop phenology in South Asia’s smallholder rice systems.
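The rule-based side of such approaches can be sketched briefly. A widely used heuristic for transplanting detection in flooded paddies is the seasonal minimum of Sentinel-1 VH backscatter, since standing water sharply lowers the radar return. The function below is an illustrative sketch of that general idea, not the study's exact rule set or thresholds.

```python
import numpy as np

def transplant_date_from_vh(dates, vh_db, window=3):
    """Rule-based sketch: flooding at transplanting depresses Sentinel-1 VH
    backscatter, so after light smoothing the seasonal minimum of the VH
    series (in dB) is taken as the transplanting date."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(vh_db, kernel, mode="same")
    return dates[int(np.argmin(smoothed))]
```

A random forest replaces this single hand-crafted rule with a learned mapping from many such time-series features (indices, backscatter statistics) to the observed transplanting date, which is why it degrades more gracefully when individual features are noisy.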
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Assessment of plant nutritional parameters in cereals by PlanetScope and Sentinel-2 multispectral data

Authors: Beáta Šusliková, Kateřina Kuchaříková, Karel Niederhafner, Vojtěch Lukas, Jakub Elbl, Jan Křen
Affiliations: Mendel University In Brno
The use of satellite data with high spatial and temporal resolution is a key approach for assessing the nutritional status of field crop stands during the growing season in precision agriculture. This study focuses on the use of multispectral data from the PlanetScope (Planet Labs) and Sentinel-2 satellites for assessing nutritional status and monitoring the development of field crops during the 2022 and 2023 growing seasons. A field experiment covering 29 ha was conducted in the South Moravia region (Czech Republic; N 48°58.03', E 16°38.39') with the support of the research projects TA CR TQ03000882 and MENDELU AF-IGA2023-IP-036. The crop survey was carried out in winter wheat (2022) and winter barley (2023) between BBCH 29 and BBCH 50 by plant sampling, with a total of 20 samples across the field. The plant samples were analysed for nitrogen content and aboveground biomass, from which nitrogen uptake (Nupt) and the Nitrogen Nutrition Index (NNI) were calculated. Various vegetation indices were calculated from Sentinel-2 and PlanetScope data, including EVI (Enhanced Vegetation Index), GNDVI (Green Normalized Difference Vegetation Index), NDRE (Normalized Difference Red Edge Index), NDVI (Normalized Difference Vegetation Index), SAVI (Soil Adjusted Vegetation Index), SRI (Simple Ratio Index), and TCARI (Transformed Chlorophyll Absorption Ratio Index). Correlation analyses and regression modelling were performed between the vegetation index values and the crop parameters obtained by traditional plant sampling. The PlanetScope results showed that in 2022, in the early stages of vegetation (BBCH 30), the vegetation indices EVI (R = 0.91), GNDVI (R = 0.889), NDVI (R = 0.886), SAVI (R = 0.886) and SRI (R = 0.886) achieved the highest correlations with the measured crop parameters.
In the second half of the crop growing season (BBCH 50), the vegetation indices NDRE (R = 0.826) and GNDVI (R = 0.792) reached high correlations with the stand parameter fresh biomass. In 2023, almost all vegetation indices reached high correlations with biomass, NNI and Nupt at BBCH 29-30, except TCARI and EVI. In the second term, only the vegetation indices GNDVI and NDRE reached statistically significant correlations with biomass and nitrogen uptake per hectare. In the last observation period (BBCH 50), the number of vegetation indices reaching statistically significant correlations increased, especially with stand parameters such as Nupt and NNI. The results of the study suggest that the PlanetScope-derived vegetation indices EVI, NDVI and SAVI are more appropriate in the early growth stages of vegetation. In the later growth stages, it is more effective to use vegetation indices such as NDRE and GNDVI, which can better highlight differences in nitrogen status for optimising the fertilization strategy through site-specific crop management practices in precision agriculture.
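The R values reported above are Pearson correlation coefficients between per-sample index values and the measured crop parameters. A minimal sketch of that computation, with standard formulas for two of the indices (the band names are generic placeholders, not a specific product's band labels):

```python
import numpy as np

def gndvi(nir, green):
    """Green Normalized Difference Vegetation Index."""
    return (nir - green) / (nir + green)

def ndre(nir, red_edge):
    """Normalized Difference Red Edge Index."""
    return (nir - red_edge) / (nir + red_edge)

def pearson_r(index_values, crop_parameter):
    """Pearson correlation (the R reported in the study) between per-sample
    vegetation index values and a measured crop parameter such as Nupt."""
    return float(np.corrcoef(index_values, crop_parameter)[0, 1])
```

With 20 samples per field, each reported R is computed from the index value extracted at each sampling point paired with the laboratory measurement for that point.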
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Pan-European mapping of Cropping Patterns at 10 m spatial resolution: methodology and product development

Authors: Kasper Bonte, Astrid Vannoppen, Bert De Roo, Kristof Van Tricht, Laurent Tits, Roel Van Hoolst, Ludvig Forslund, André Stumpf
Affiliations: VITO, EEA, GAF
Monitoring cropping patterns, including cropping phenology, bare soil persistency, and temporary fallow land, provides critical insights into the dynamics of agricultural landscapes. Climate change, rising food demands driven by population growth, and disruptions in food production due to geopolitical instabilities impact agricultural landscapes continuously. Accurate and scalable products for agricultural landscape monitoring are therefore essential for guaranteeing global food security. Remote sensing-based cropping pattern information offers the potential to provide regular, field-level updates on agricultural landscape dynamics. Within this framework, the HRL-VLCC project, part of a new Copernicus Land Monitoring Service (CLMS), has developed high-resolution layers for cropping patterns. These products deliver detailed information on yearly cropping pattern dynamics over annual cropland at a 10-meter spatial resolution at the pan-European scale. They make it possible to evaluate agricultural production, monitor environmental conditions, and track changes over time, thereby providing policymakers with valuable information for developing sustainable agriculture and food security strategies. The cropping pattern products are all based on field-level seasonal information. A key component of the methodology is the delineation of homogeneous fields, which ensures the derivation of uniform cropping pattern information. For all delineated fields located within annual cropland, critical phenological events such as crop emergence and harvest dates are detected for both the main and secondary growing seasons. These detected phenological events form the foundation for five cropping pattern products: main crop seasonality, secondary crop seasonality, bare soil period delineation, fallow land detection, and annual crop characteristics.
The crop emergence and harvest detection methodologies leverage time series of Sentinel-1 and Sentinel-2 indices at the field level. The raw data are extracted and fused through a radar-based gap-filling approach using the CropSAR technology to generate continuous time series of optical FAPAR. From these time series, together with Sentinel-1-derived indices, dedicated algorithms detect emergence and harvest events, enabling the derivation of up to two growing seasons per calendar year, categorized into main and secondary seasons based on region-specific expert rules. These rules provide detailed information for each season, including emergence, harvest, and duration. Secondary seasons are further classified into four categories using rule-based criteria related to their emergence timing and seasonal length. Additionally, bare soil period products quantify the number of days fields remain bare before and after the main growing season, while fallow land products determine whether arable cropland remains unmanaged within a specific year by analyzing emergence and harvest activity. Lastly, annual crop characteristics summarize the number of growing seasons within a calendar year and identify crop type rotation over a three-year period. These cropping pattern methodologies ensure consistent monitoring of agricultural landscapes, with products updated annually since 2017 and provided at the pan-European scale. In this LPS session, the methodology applied to derive the different cropping pattern products will be presented, as well as some potential applications.
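The core of emergence/harvest detection on a gap-filled FAPAR series can be illustrated with a simple sketch: emergence as the first upward crossing of a fraction-of-amplitude threshold and harvest as the last downward crossing. The fraction below is an arbitrary illustrative value; the project's actual detection algorithms are region-specific and considerably more elaborate.

```python
import numpy as np

def season_events(dates, fapar, frac=0.25):
    """Detect emergence (first upward crossing) and harvest (last downward
    crossing) of a fraction-of-amplitude threshold on a gap-filled FAPAR
    series; `frac` here is an arbitrary illustrative value."""
    level = fapar.min() + frac * (fapar.max() - fapar.min())
    above = np.flatnonzero(fapar >= level)
    return dates[above[0]], dates[above[-1]]
```

Running such a detector over a calendar year, and again over the residual of the main season, is one simple way to separate main and secondary growing seasons before the expert rules classify them.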
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Sentinel-2 based Land Surface Phenology in complex Southern African Landscapes

Authors: Jari Mahler, Dr. Achim Röder, JProf. Dr. David Frantz
Affiliations: Trier University
The Kavango Zambezi Transfrontier Conservation Area (KAZA) in Southern Africa is one of the world's largest and most ecologically significant transboundary conservation regions, encompassing diverse ecosystems and communities. KAZA faces critical socio-economic and ecological challenges, including climate change with one of the highest projected increases in temperature globally, coupled with declining rainfall in an already hot and dry system. These challenges are compounded by expanding human land use, habitat fragmentation, conservation efforts and human-wildlife conflict. Addressing them requires robust, scalable environmental data to understand and manage the complex interactions between ecological processes and socio-economic activities. However, a substantial data gap exists in the spatial and temporal monitoring of KAZA's vegetation dynamics. Traditional vegetation classifications often fail to capture the nuanced ecological variability and phenological shifts across diverse ecosystems. In contrast, satellite-derived Land Surface Phenology (LSP) provides continuous, ecologically interpretable and intuitive metrics that can inform applied research, conservation strategies, ecosystem management, and climate adaptation. KAZA is a complex mosaic of ecosystems, ranging from dense forests and dry savannas to wetlands, spanning multiple climate zones and shaped by diverse topography, land use, land cover, and hydrological dynamics. These factors present challenges for LSP calculations, which require both a high spatial resolution, to capture isolated tree clusters or fragmented habitats, and a well-defined time frame—the phenological year—for the consistent estimation of annual metrics. This study employs a pixel-specific, polar-based LSP approach that partitions the time series of a vegetation index into phenological years.
After outlier removal, local smoothing and gap filling, this method transforms Sentinel-2 Enhanced Vegetation Index (EVI) data into polar coordinates, where the date of each observation determines the direction from the origin, and the corresponding EVI value determines the vector length. This transformation creates a cyclical representation of vegetation dynamics, allowing the start of each phenological year to be calculated based on the geometric center of the resulting polar figure. This study applies a data-driven statistical approach to explore potentially useful LSP parameters for the target area. In total 39 yearly LSP parameters and their multi-year statistics (e.g. standard deviation, mean) are calculated, including parameters describing timing (e.g., start/end of season), event-specific values (e.g., early minimum, peak of season), integrals (area under the minimum, latent integral), and speed of seasonal vegetation development (e.g., rising/falling rates). The 39 LSP parameters, along with their multi-year statistical summaries, are used as input for an exploratory factor analysis with rotation to identify latent ecological dimensions. While the exact dimensions depend on the patterns observed in the data, they represent aspects such as seasonality, combined timing, or multi-year stability. This process groups related parameters into a smaller, more concise set of variables, simplifying their ecological interpretation. The outcome is a streamlined reduction of the Sentinel-2 EVI time series into a focused set of ecologically meaningful and intuitive metrics, specifically tailored to the target area's unique characteristics, while capturing most of the variance in the data. The identified set of LSP parameters and ecological dimensions will be evaluated for their plausibility and correlated with field vegetation data, ensuring their ecological interpretability and relevance.
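The polar construction can be sketched as follows: each observation's day of year becomes an angle, its EVI value a vector length, and the start of the phenological year is placed opposite the direction of the resulting mean vector, i.e. in the least vegetated part of the annual cycle. This is an illustrative simplification of the geometric-center approach described above, not the authors' exact algorithm.

```python
import numpy as np

def phenological_year_start(doy, evi):
    """Map each observation's day of year to an angle and its EVI value to a
    vector length, then place the start of the phenological year opposite
    the direction of the mean vector, i.e. in the least vegetated part of
    the annual cycle."""
    theta = 2.0 * np.pi * doy / 365.0
    mean_angle = np.arctan2(np.sum(evi * np.sin(theta)),
                            np.sum(evi * np.cos(theta)))
    start_angle = (mean_angle + np.pi) % (2.0 * np.pi)
    return start_angle * 365.0 / (2.0 * np.pi)   # back to a day of year
```

Because the break point is derived per pixel from the data itself, the resulting phenological years remain consistent in Southern Hemisphere landscapes where the growing season straddles the calendar-year boundary.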
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Seasonal Variability Dynamics and Drivers of Uncertainties in LAI, Chlorophyll, Vegetation Parameter Retrievals and Indices in Forest Time-Series

Authors: Amie Elizabeth Corbin, Dr.ir. Peter M. van Bodegom, Dr.ir. Joris Timmermans
Affiliations: Leiden University, Delft University of Technology
Ecosystem and species phenology are now considered essential biodiversity variables (EBVs), crucial for monitoring, researching, and forecasting biodiversity at national and global levels. The temporal dynamics of forest ecosystems provide insights into their state and how they respond to environmental stressors. Remote sensing and environmental modelling approaches have been key in monitoring these ecosystem dynamics and EBV retrievals. However, a large challenge remains in determining what level of agreement between model output and observation-based data is adequate, considering that such data are prone to uncertainties. While satellite observations of leaf area index (LAI) often serve as a pillar for environmental modelling, LAI retrieval can exhibit major uncertainties and sensitivity – or lack thereof – to specific phenological changes, such as green-up, peak growth, and senescence, in particular types of ecosystems, or upon environmental stress. The circumstances under which these uncertainties and sensitivities arise remain not fully explored, particularly across varying environments and phenological phases. Even where specific remote sensing products like the Plant Phenology Index (PPI) have been developed to address some limitations of LAI, the extent to which these indices align with in-situ measurements or outperform LAI in detecting critical phenological transitions is not entirely established. Uncertainty regarding remotely observable biophysical parameters such as chlorophyll and canopy water is likely even greater. Many of these variables are often simply assumed to be constant over time. This is especially problematic given that uncertainties in these parameters may amplify uncertainties in LAI and other model outputs. In this study, we analyze time-series data for seasonal variability, sensitivity and uncertainty associated with retrieved LAI, PPI, and in-situ biophysical leaf measurements at two needle-leaf forest stands.
Field data were collected at a flux tower site in a temperate coniferous stand of Douglas firs (Pseudotsuga menziesii) in Speulderbos, The Netherlands, every two weeks from April to October in 2017 and 2018. Additional flux data were collected in a boreal Scots pine (Pinus sylvestris) stand in Sodankylä, Finland, over an eight-week period in 2017. At each site, leaf and needle samples were also collected to determine biophysical leaf traits including leaf chlorophyll, leaf mass per area (LMA), leaf carbon and leaf nitrogen. Environmental data such as photosynthetically active radiation (PAR) were also recorded. LAI was retrieved using Sentinel-2 data and in-situ fAPAR. A combined temporal cross-correlation and sensitivity analysis was performed to evaluate lead and lag relationships between LAI, PPI, and the biophysical leaf traits, as well as their sensitivity to phenological (green-up, peak and senescence) and environmental changes. Results show that seasonal dynamics are substantial, not only for LAI and PPI, but also for the biophysical parameters. A trait intercomparison suggests that chlorophyll may exhibit greater sensitivity than structural measures like LAI to stress conditions such as drought, as well as during transitional phenological phases. Accounting for the temporal dynamics of other biophysical parameters helps reduce the uncertainties in the temporal variation of LAI estimates. These findings underscore the importance of incorporating both structural and physiological metrics for a more holistic assessment of ecosystem function and resilience.
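The lead/lag part of the cross-correlation analysis can be sketched simply: slide one series against the other and report the lag with the highest Pearson correlation over the overlapping segments. A minimal sketch under stated assumptions (uniform sampling step, illustrative lag range), not the study's exact procedure:

```python
import numpy as np

def best_lag(series_a, series_b, max_lag=10):
    """Return the lag (in time steps) at which series_b best aligns with
    series_a, using the Pearson correlation of the overlapping segments;
    a positive result means series_b trails series_a."""
    lags = list(range(-max_lag, max_lag + 1))
    scores = []
    for k in lags:
        a = series_a[max(0, -k): len(series_a) - max(0, k)]
        b = series_b[max(0, k): len(series_b) - max(0, -k)]
        scores.append(np.corrcoef(a, b)[0, 1])
    return lags[int(np.argmax(scores))]
```

Applied to, say, an LAI series and a chlorophyll series, the sign of the best lag indicates which trait responds first to a phenological transition or a stress event.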
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: PlotToSat: Leveraging Earth Observation Data for Scalable Forest Ecology and Environmental Modelling

Authors: Milto Miltiadou, Dr Emily R Lines, Prof Hywel Williams, Dr Stuart Grieve, Prof Paloma Ruiz-Benito, Dr Finley Gibson, Dr Remy Vandaele, Dr Verónica Cruz-Alonso, Dr Julen Astigarraga, Julián Tijerín Triviño, Edward C.D. Pope
Affiliations: Department of Computer Science, University Of Exeter, Department of Geography, University of Cambridge, School of Geography, Queen Mary University of London, Department of Life Sciences, Universidad de Alcalá, Department of Biodiversity, Universidad Complutense de Madrid, Met Office Hadley Centre, Met Office
PlotToSat is a new tool that leverages the Python API of Google Earth Engine (GEE) to enable efficient Earth observation (EO) time-series extraction across multiple regions and facilitates easy integration with ground observations, supporting phenology-related studies. This presentation demonstrates PlotToSat through two novel applications: tree genera identification and soil/vegetation carbon dynamics. The latest published version of PlotToSat supports Sentinel-1 and Sentinel-2 time-series extraction at up to hundreds of thousands of circular plot regions and fuses the extracted data with ground measurements. The plot regions are defined by the longitude and latitude provided in a CSV file, and by a radius that is constant across all plots. New updates of PlotToSat enhance its usability by supporting shapefiles as input for extracting EO time-series across multiple polygons, regardless of shape. It further provides embedded functionalities for extracting soil-, vegetation- and water-related indices. PlotToSat has a flexible object-oriented design that allows the integration of new datasets in the future. To illustrate its efficiency, PlotToSat extracted Sentinel-1 and Sentinel-2 time-series from 15,962 plots distributed across Spain and multiple image tiles (an estimated 18.3 TB of Sentinel-1 and Sentinel-2 data) in less than 24 hours on the GEE cloud. PlotToSat was initially created to enable phenological studies in forest ecology at scale, but it can be applied to any application that requires EO time-series from multiple non-connected regions spread across a landscape. For example, forest plot networks, consisting of multiple systematically distributed plots, are crucial for understanding forest ecosystem dynamics, but the spatial distribution of plots is typically extensive and non-continuous.
Collecting field data across plot networks is time-consuming and expensive, so the temporal resolution of acquired data is often low (e.g. once per decade). The continuous release of EO data brings new opportunities in forest ecology, as EO data can enhance the spatio-temporal resolution of phenological studies and allow multi-sensor data to be combined to understand how forest phenology is changing under varying climatic conditions. Our first application of PlotToSat is tree genera classification in peninsular Spain. We extracted time-series of Sentinel-2 data from 9,642 single-tree-genus plots of the Fourth Spanish National Forest Inventory. The mean annual phenological cycle of the Normalized Difference Vegetation Index (NDVI) was computed using Sentinel-2 data collected in 2018, 2019, and 2020, yielding 12 NDVI values, one per calendar month. These 12 values served as independent features to train a k-NN classifier. As a non-parametric method, k-NN is well-suited for data with high intra-class variability, like the differing spectral signatures of the leaves. The plots were divided into three folds, and the results were cross-validated, using two folds for training and one fold for evaluation each time. The results suggest the existence of subgroups within large genera classes, an important observation in forest ecology. Our second application relates to the Joint UK Land Environment Simulator (JULES), a versatile terrestrial ecosystem model used to study terrestrial processes. We simulated daily soil and vegetation carbon storage using JULES, provided at a 5 km × 5 km resolution, and extracted Sentinel-1 GRD SAR and Sentinel-2 optical time series using PlotToSat. The pixel resolution of Sentinel-1 GRD SAR data and Sentinel-2 optical data ranges from 10 to 60 meters, with temporal resolutions of 6 and 5 days, respectively.
The GreenSight project explores the phenological and direct relationships between EO time series—comprising vegetation, soil, and water-related indices, such as the Normalized Difference Vegetation Index—and the vegetation and soil carbon estimates produced by JULES. The study area is the UK Greenbelts, which are open land areas surrounding urban regions. Results show that efficient EO time-series construction using PlotToSat allows JULES outputs to be tested at previously inaccessible temporal and spatial resolutions.
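The tree-genera classification step above can be sketched with a minimal k-NN on the 12 monthly NDVI features. The distance metric, the value of k, and the toy data below are illustrative, not the study's exact settings.

```python
import numpy as np

def knn_predict(train_profiles, train_genera, query_profile, k=3):
    """Classify a plot's 12-month mean NDVI profile by majority vote among
    the k nearest training profiles (Euclidean distance)."""
    dist = np.linalg.norm(train_profiles - query_profile, axis=1)
    nearest = train_genera[np.argsort(dist)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

Because k-NN makes no assumption about class shape, a genus whose plots split into several distinct NDVI profiles still classifies well, which is consistent with the subgroup structure the study reports within large genera.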
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Isolating Phenological Patterns of Grasses and Trees Across an Environmental Gradient in Tropical Savannas Using Earth Observation Techniques

Authors: Frances O'Leary, Prof. Timothy Baker, Kyle Dexter
Affiliations: School of Geography, University Of Leeds, School of Geosciences, University of Edinburgh, Royal Botanic Garden Edinburgh
Monitoring phenological patterns of trees and grasses in tropical biomes is important for understanding the acquisition phase of the carbon cycle; however, vegetation type-specific phenologies remain understudied in the dry tropics. Tropical savannas play a significant role in the global carbon cycle through carbon flux between vegetation and the atmosphere, which varies seasonally, through growth and senescence events, and interannually, through disturbance events such as fire and herbivory. This study evaluates the efficacy of NASA and ESA Earth Observation datasets alongside various vegetation indices in tracking land surface phenology across 11 one-hectare plots located along environmental gradients in Africa. A statistical approach separates grass and tree phenology using land cover and vegetation index datasets, ultimately aiming to enhance our understanding of tropical savanna vegetation interactions with the global carbon cycle. We compared phenological metrics derived from Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), Enhanced Vegetation Index (EVI), and Green Chromatic Coordinate (GCC) time series obtained from MODIS, VIIRS, and Sentinel-2 with GCC and visible-NDVI data from 11 phenocam plots at three study sites across an environmental gradient in sub-Saharan Africa. The study sites are Bicuar National Park in southern Angola, Ongava National Park in northwestern Namibia, and Kilwa in southeastern Tanzania. Plots in Ongava have the most arid climate, with average annual rainfall between 300 and 600 mm. Plots in Bicuar National Park experience an intermediate climate, with average annual rainfall of 600-800 mm. The plots in Kilwa, Tanzania, have the most humid climate, averaging 800-1200 mm of annual rainfall. The phenocams provided daily ground-truth data, assumed to be the most accurate due to their high temporal resolution and minimal atmospheric interference.
Monitoring occurred from November 2011 to September 2024, quantifying the start, peak, and end of season for each dataset. Using a simulated time series dataset, we established known percentages of grass and tree land cover and NDVI values per pixel. A linear regression model was applied to groups of 50 pixels to extract mean grass and tree NDVI values at each time point. This output was then validated against the simulated data to assess the accuracy of isolating vegetation type-specific phenologies. Our findings revealed significant variations in start, peak, and end of season dates across all datasets and indices. The linear regression technique demonstrated high accuracy in predicting grass and tree NDVI, identifying the most effective Earth Observation dataset for applying this method to real satellite data. However, the accuracy of the linear unmixing approach may be influenced by noise present in actual satellite datasets. This study presents the first application of linear unmixing of Earth Observation datasets to isolate vegetation type-specific phenologies in tropical savannas. This research not only contributes valuable insights into the phenology of savanna ecosystems but also offers practical applications for improving carbon cycle modelling and informing conservation strategies in the context of climate change.
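The per-timestep linear unmixing can be sketched as a least-squares regression of observed pixel NDVI on known grass and tree cover fractions; solving this over a group of pixels at each time point recovers the mean grass and tree NDVI curves separately. A minimal, illustrative version:

```python
import numpy as np

def unmix_endmembers(cover_fractions, pixel_ndvi):
    """Per-timestep linear unmixing: regress observed pixel NDVI on known
    grass/tree cover fractions (n_pixels x 2) to recover the mean grass and
    tree NDVI at that time step."""
    endmembers, *_ = np.linalg.lstsq(cover_fractions, pixel_ndvi, rcond=None)
    return endmembers   # [grass NDVI, tree NDVI]
```

Repeating the regression at every acquisition date yields two separate NDVI time series, from which grass- and tree-specific start, peak, and end of season can then be extracted.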
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Urban trees phenology: A comparison of PhenoCam and satellite-derived phenology metrics.

Authors: Dimitris Tsirantonakis, Zina Mitraka, Andreas Christen, Nektarios Chrysoulakis, Sue Grimmond, Daniel Fenner, Fred Meier
Affiliations: Remote Sensing Lab, Institute of Applied and Computational Mathematics, Foundation for Research and Technology Hellas (FORTH), Chair of Environmental Meteorology, Faculty of Environment and Natural Resources, University of Freiburg, Department of Meteorology, University of Reading, Chair of Climatology, Institute of Ecology, Technische Universität Berlin
Urban vegetation phenology monitoring has emerged as a key Earth Observation application in the context of sustainable city development under climate change, as vegetation plays a key role in regulating urban climates. Urban trees are known to contribute to the urban carbon and water cycles, and because of this, the potential benefits of green infrastructure for climate adaptation and heat mitigation are at the forefront of European and global directives. Accurate city-scale monitoring of vegetation cycles can be beneficial in more than one way, including providing robust information to numerical weather prediction and urban surface models. The latter can lead to informative decision-making tools for the sustainability of urban environments and improvement of the quality of life for urban populations. Despite rapid advancements in the EO sector over the past decade, detailed and accurate vegetation phenology mapping in the complexity of cities remains a challenge. Studies continue to highlight the lack of quantitative information needed to further understand, monitor, and model the feedback between urban trees and their environment. In this study, we present a continuous, year-long, species-specific urban tree phenology dataset acquired using a PhenoCam mounted on a flux tower in a residential area of Berlin, Germany. We use the Normalized Difference Vegetation Index (NDVI) and Green Chromatic Coordinate (GCC) to analyze the phenology timings (start of green-up, midpoint of green-up, maturity, peak, start of senescence, midpoint of senescence, dormancy) for 2023 for the nine deciduous tree species observed in the images. The results reveal that the different vegetation indices (VIs) can exhibit significant variations in the identified timings. The most pronounced differences occur in Fagus sylvatica trees, where GCC fails to effectively represent the seasonal phenology cycle due to leaf coloring.
Overall, GCC consistently identified earlier green-up and senescence than NDVI, with median differences of 11 days for mid-green-up and 12 days for peak phenology. As demonstrated in previous studies, the phenology dynamics derived from the PhenoCam correlate with the carbon and heat flux dynamics recorded by the flux tower instruments; e.g., the carbon capture maximum occurs right after the peak of the season, when maximum leaf area is reached. Following this, we use our dataset as a reference to evaluate phenology timings and curve shapes of NDVI and two-band Enhanced Vegetation Index (EVI2) time series extracted from sensors that are widely referenced in the literature, namely PlanetScope, Sentinel-2, Landsat and HLS (the Harmonized Landsat and Sentinel-2 dataset). Our initial results highlight the decisive role not only of sensor resolution, but also of VI selection and of the interpolation and fitting methods, in the identified phenology timings. Intercomparison of the phenology timings estimated from NDVI and EVI2 of the same satellite sensor showed minimal differences. However, when the phenology timings estimated from the high-resolution PlanetScope VIs were compared with the corresponding PhenoCam results, larger discrepancies emerged: the median difference in mid-green-up timing between PlanetScope NDVI and PhenoCam GCC was two days, it extended to 14 days against PhenoCam NDVI, and in both cases the difference in peak timing reached 20 days. Similarly, we observe that, due to the heterogeneity of the urban environment at the resolution of Sentinel-2 and Landsat (and hence HLS), the VI time series are sensitive to the surface fractions and the different vegetation types present in each pixel, making the interpretation of phenology timings challenging.
The abovementioned findings provide significant insight for improving methods in urban vegetation monitoring and their application in urban climate studies.
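For concreteness, the two indices used above can be computed directly from camera and sensor values, and a phenology timing extracted by an amplitude-fraction rule. The sketch below is illustrative only (function names and the 50% mid-green-up convention are assumptions, not the authors' implementation): GCC is the green fraction of total RGB brightness, and NDVI contrasts near-infrared against red reflectance.

```python
import numpy as np

def gcc(red, green, blue):
    """Green Chromatic Coordinate: green fraction of total RGB brightness."""
    return green / (red + green + blue)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

def transition_date(doy, vi, fraction=0.5):
    """Day of year at which a (smoothed) VI series first crosses a given
    fraction of its seasonal amplitude - a common mid-green-up definition."""
    vi = np.asarray(vi, dtype=float)
    threshold = vi.min() + fraction * (vi.max() - vi.min())
    idx = np.argmax(vi >= threshold)  # index of the first crossing
    return doy[idx]
```

With `fraction=0.5` this corresponds to the mid-green-up (50% amplitude) timing; other phenometrics use different fractions or the falling limb of the curve.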
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Methodology for estimating composition in mixed forest stands from phenological analysis of Sentinel-1 and Sentinel-2 time series

Authors: Tomas Pugni, Diego Madruga, José Luis Tomé, Nur Algeet, Alfonso Bermejo, Laura Recuero, Alicia Palacios, Silvia Merino De Miguel
Affiliations: Universidad Politécnica De Madrid, Agresta S. Coop.
The analysis of mixed forest stands, composed of broadleaved and coniferous trees in varying proportions, represents an interesting challenge for forest management. In this context, time series of spectral indices and metrics derived from satellite images have proven to be valuable tools to assess the phenological performance of these forest formations. This work addresses the estimation of phenological parameters in pure stands and their extrapolation to characterise the floristic composition in mixed stands, using Sentinel-1 and Sentinel-2 satellite data. The proposed methodology uses time series of classical spectral indices such as NDVI (Normalised Difference Vegetation Index), fAPAR (fraction of Absorbed Photosynthetically Active Radiation) and LAI (Leaf Area Index), all derived from Sentinel-2 at 10 m resolution. These indices provide key information on the vigour, productivity and phenological behaviour of tree stands, as well as changes in these variables over the growing season. In addition, Sentinel-1 derived metrics, such as radar coherence, are used to detect structural changes in the forest canopy, such as those resulting from leaf loss. The time series are expected to reveal phenological patterns that may vary according to the proportion of broadleaves and conifers present in the mixed stands. Intermediate behaviours are expected to be observed in mixed stands, reflecting the combined contribution of both vegetation types. The Sierra de Guadarrama, which includes the National Park and its Protection Zone, has been selected as a pilot area for this study. This region, with a diversity of ecosystems and environmental conditions, offers an ideal scenario to evaluate the proposed methodologies. To build the models and validate the results, plots from the Fourth National Forest Inventory (IFN-4) are used.
On the other hand, the 1:25,000 Forest Map of Spain (MFE25) allows spatialisation of these data, facilitating integration with information derived from Sentinel-1 and Sentinel-2. In addition to remote sensing variables, the analysis considers abiotic factors that may influence species phenology, such as latitude, altitude, orientation, slope and soil characteristics. These factors help to contextualise observations and improve accuracy in estimating floristic composition. The proposed methodology not only contributes to the advancement of knowledge on the phenological dynamics of mixed stands, but also has practical applications in forest management and conservation. By characterising the composition and functioning of these stands with greater precision, management strategies can be designed that are better adapted to the specific conditions of the territory. Furthermore, the integration of remote sensing data with forest inventory information establishes a replicable framework for similar studies in other regions.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Synthesis of Sentinel-2 and PAR Sensor Data for Creating Land Surface Phenology Maps Over the Czech Republic

Authors: Marian Švik, Ondřej Nezval, Patrik Paľovčík, Růžena Janoutová
Affiliations: Global Change Research Institute CAS, Masaryk University
Forest ecosystems provide irreplaceable functions and play a significant role in the stability of Earth's ecosystems. Currently, various changes in abiotic environmental conditions and biotic factors are negatively impacting forests. One of the main changes is the extension of the vegetation season (VS), associated with the shortening or interruption of the dormant period. While several methods exist for evaluating VS, a unified automated method for its determination is still lacking. Commonly used methods are based on spectral vegetation indices (VIs) derived from remote sensing data, especially satellite imagery. The current trend in remote sensing involves using data with higher spatial and temporal resolutions, allowing for the creation of VI time series that increase the accuracy of seasonal VI trajectories and the determination of VS length. Within the biomonitoring network DendroNetwork, a measurement system of sub-canopy Photosynthetically Active Radiation (PAR) sensors has been developed, which allows for refining the determination of the VS or growing season. The aim of this study is the automated evaluation of VS using analytical approaches implemented on a web platform, integrating ground monitoring and remote sensing products to create a fully automated evaluation of VS with high spatial and temporal resolution. The resulting layers will be accessible on the EnviLAB web platform for both professionals and the general public. These maps can be used for timely evaluation of the impacts of climate change on forest ecosystems and for proposing adaptation strategies for forest management in the Czech Republic. Preliminary results indicate a significant relationship between daily sums of PAR and VIs, which we have used to fill gaps in the VI time series. These dense time series have been utilized to calculate the start, end, and length of the vegetation season over small regions in the Czech Republic. 
Our goal is to provide a near real-time assessment using the most current cloud-free Sentinel-2 images and PAR sensor data for each phenological indicator. Additionally, we have constructed layers comparing current values to the long-term average for each phenological indicator.
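The PAR-to-VI relationship used above for gap filling can be sketched as a simple linear regression calibrated on days where both series are observed. This is a hypothetical minimal version; the abstract does not specify the actual regression model used:

```python
import numpy as np

def fill_vi_gaps(par_daily, vi_daily):
    """Fill missing VI values (NaN) using a linear fit VI ~ a*PAR + b,
    calibrated on days where both PAR and VI were observed."""
    par = np.asarray(par_daily, dtype=float)
    vi = np.asarray(vi_daily, dtype=float)
    valid = ~np.isnan(vi) & ~np.isnan(par)
    a, b = np.polyfit(par[valid], vi[valid], deg=1)  # least-squares line
    filled = vi.copy()
    gaps = np.isnan(vi) & ~np.isnan(par)
    filled[gaps] = a * par[gaps] + b  # predict VI on gap days from PAR
    return filled
```

The densified series can then feed a standard start/end-of-season extraction, as described in the abstract.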
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: An Innovative Workflow Using Sentinel-2 Imagery to Estimate Seasonal Development from Land Surface Greenness in the High Arctic

Authors: Lia Lechler, PhD Virve Ravolainen
Affiliations: Norwegian Polar Institute
The Arctic is warming at an unprecedented rate, with large changes in land surface greenness and wide-ranging consequences for the terrestrial ecosystem. The ecosystem responses of the Arctic tundra—particularly greening and browning of vegetation—are more complex and heterogeneous than previously thought. Using high-resolution, high-revisit-frequency satellite data to monitor tundra vegetation in the Arctic is an important tool for tracking land surface greenness and enhancing understanding of the underlying causes of change in this ecosystem. Land surface greenness is used as a proxy for vegetation productivity and growing season development and is often calculated as a vegetation index (e.g., Normalised Difference Vegetation Index, NDVI). While metrics for calculating growing season trends (e.g., earlier start of the growing season, higher land surface greenness) in the Eurasian High Arctic are available from medium-resolution satellites, no corresponding high-resolution product exists yet. We have thus established a new, stepwise workflow to estimate growing season metrics with NDVI—start of season, magnitude and timing of maximum greenness, and end of season—from Sentinel-2 data, combining several published methods. The steps include using the best available cloud filter, incorporating modelled snow cover (early and late in the season), and dynamically fitting a growing season curve based on the quantity and quality of the satellite data. We apply our workflow in Svalbard and present examples of growing season metrics in ecological focus areas. Our workflow is a step towards easier processing and analysis of Sentinel-2 data from the High Arctic. We will make our growing season metrics available for ecological applications, in which investigating the effects of climate change on High Arctic ecosystems is a high priority.
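A common way to "dynamically fit a growing season curve" to sparse, cloud- and snow-filtered NDVI samples is a double-logistic model. The sketch below is illustrative, not the published workflow; the parameter names and starting values are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, sos, k1, eos, k2):
    """Seasonal NDVI model: logistic rise centred on `sos`, logistic fall
    centred on `eos`, sitting on a winter baseline `base`."""
    rise = 1.0 / (1.0 + np.exp(-k1 * (t - sos)))
    fall = 1.0 / (1.0 + np.exp(-k2 * (t - eos)))
    return base + amp * (rise - fall)

def fit_season(doy, ndvi, p0=(0.1, 0.6, 150.0, 0.1, 250.0, 0.1)):
    """Least-squares fit of the model to NDVI samples; returns the tuple
    (base, amp, sos, k1, eos, k2)."""
    params, _ = curve_fit(double_logistic, doy, ndvi, p0=p0, maxfev=10000)
    return params
```

The fitted `sos` and `eos` inflection parameters then serve directly as start- and end-of-season metrics, and the curve maximum gives the magnitude and timing of peak greenness.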
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Comprehensive Validation of the High-Resolution Vegetation Phenology and Productivity (HR-VPP v2) products through Ground-Based Networks

Authors: Getachew Mehabie Mulualem, Dr. Victor F. Rodriguez Galiano, Jose Antonio Caparros-Santiago, Christophe Lerebourg, Ludvig Forslund, Professor Jadu Dash
Affiliations: School of Geography and Environment, University of Southampton, Physical Geography and Regional Geographic Analysis, Universidad de Seville, ACRI-ST, Sophia-Antipolis, European Environment Agency (EEA)
In this study, we will present our proposed methodology for the evolution of the High-Resolution Vegetation Phenology and Productivity (HR-VPP) validation. This is an essential step to support Europe's environmental goals using data from the Copernicus Land Monitoring Service. Derived from Sentinel-2, these high-resolution metrics on vegetation phenology and productivity capture detailed vegetation dynamics. Accurate validation of these metrics is crucial for understanding climate variability and its impacts. We will use ground observations, including eddy covariance Gross Primary Productivity (GPP), PhenoCams, and phenophases of ground observations from phenological networks across Europe. The proposed methodology for validation of version 2 will be exemplified using the version 1 product. Additionally, we will conduct product consistency assessments and time series analyses, comparing HR-VPP data with other equivalent products. For each selected validation tile, we plan to derive GPP phenology metrics from each flux tower site and compare them with the mean HR-VPP phenology metrics within a 100 m radius buffer around the flux tower. We will include 73 flux towers with at least four years of data from 2017-2023, ensuring a representative distribution across various land covers. Additionally, images from the Phenocam network will be processed to derive the vegetation greenness index, the Green Chromatic Coordinate (GCC). The mean phenological metrics of HR-VPP pixels over a 3x3 grid centered on the Region of Interest (ROI) of Phenocam images will be compared with GCC time series values. To maintain consistency with HR-VPP products, we will adapt the TIMESAT approach to derive phenological metrics from both the GCC and GPP time series data. 
Direct phenological observations from different observation networks (AEMET, FENOCAT, PEP725, SWE-NPN and TEMPO) will be averaged within a buffer of 4 km, considering the correspondence between genera and the following vegetation groups: deciduous broad-leaved trees and shrubs, evergreen coniferous trees, evergreen broadleaved trees and shrubs, and crops. Phenometric and phenophase dates will be associated using HR-VPP product pixels near ground-based phenological observation stations, considering different land covers. As HR-VPP is still in the method development stage, the validation methodology will evolve in tandem with advancements in production processes. This multi-source validation approach will ensure that HR-VPP products remain robust and reliable, thereby supporting environmental monitoring and management across Europe.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Identification of double-cropping parcels using Time Series of Vegetation Indices derived from Sentinel-1 and Sentinel-2 Images

Authors: Gabriel Bonifaz-Barba, Jesus Alvarez-mozos, Maria Gonzalez-Audicana
Affiliations: Institute for Sustainability & Food Chain Innovation (IS-FOOD), Dep. Engineering, Public University of Navarre
Although not a widespread practice, double-cropping is becoming increasingly common in Mediterranean irrigated agricultural areas. It typically consists of sowing a summer crop after harvesting a winter or early spring crop, allowing two harvests in the same year. Identifying double-cropping parcels poses a challenge in the context of crop classification using remote sensing techniques. The availability of dense satellite image time series (e.g., the Copernicus and Landsat programs), with medium to high spatial resolutions suitable for agricultural applications, opens the possibility to explore new approaches to address this issue. In particular, approaches based on time series analysis of Vegetation Indices (VIs) obtained from these images are especially promising, as they effectively describe crop phenological dynamics at the parcel level. Several descriptors can be derived from these VI time series. Examples of such descriptors, known as Land Surface Phenology (LSP) metrics, are the start of the growing or greening season (SOS), the peak or maximum of season (POS), and the onset of senescence or end of season (EOS), among others (Zeng et al., 2020). This study evaluates the effectiveness of LSP metrics derived from VI time series obtained from Sentinel-2 and Sentinel-1 images in identifying crop-specific phenology cycles and, consequently, parcels with more than one crop per year. With this aim, an adaptation of the methodology described by Meroni et al. (2021) was applied to time series of the Normalized Difference Phenology Index (NDPI) and the Radar Vegetation Index (RVI). This methodology is based on finding points that may correspond to SOS, POS and EOS in filtered and smoothed VI time series, discarding spurious relative maxima (POS) and minima (SOS or EOS) by retaining only cycles with an amplitude – defined as the difference between POS and the mean of SOS and EOS – greater than a predefined threshold.
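The amplitude criterion just described can be sketched as follows on an already filtered and smoothed VI series. This is a loose illustration of the cycle-counting idea, not the actual adaptation of Meroni et al. (2021); the default threshold and helper names are assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def count_crop_cycles(vi, min_amplitude=0.2):
    """Count growth cycles in a smoothed VI series. Each candidate cycle
    pairs a local maximum (POS) with its nearest flanking minima (SOS, EOS)
    and is retained only if POS - mean(SOS, EOS) exceeds `min_amplitude`."""
    vi = np.asarray(vi, dtype=float)
    peaks, _ = find_peaks(vi)      # candidate POS points
    troughs, _ = find_peaks(-vi)   # candidate SOS/EOS points
    # treat the series endpoints as candidate minima as well
    mins = np.sort(np.concatenate(([0], troughs, [len(vi) - 1])))
    cycles = 0
    for p in peaks:
        sos = vi[mins[mins < p].max()]  # nearest minimum before the peak
        eos = vi[mins[mins > p].min()]  # nearest minimum after the peak
        if vi[p] - 0.5 * (sos + eos) > min_amplitude:
            cycles += 1
    return cycles
```

A parcel whose annual series yields two retained cycles would then be flagged as double-cropped.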
This work was carried out in Navarre (Spain), where the identification of double-cropping parcels is a problematic task for the agricultural inspection services. The input data for this study were provided by the Agroindustry Service of the public company INTIA under the Government of Navarra. It consists of a database of parcels where farmers planned to cultivate two or three successive crops during the 2017 campaign, including the specific crops per parcel together with the vector file of the Land Parcel Identification System (LPIS) for this year. This database, with 428 multi-cropping parcels, was supplemented with another nearly 400 parcels with a single crop. The study period ranged from September 2016 to February 2018, with a crop considered as cultivated during the 2017 campaign if its POS and EOS occurred within this year. The accuracy of the results was validated using two complementary approaches: first, by comparing the number of crop cycles identified per parcel with data reported by farmers, and second, by comparing these crop cycle counts with those estimated through visual inspection of VI time series conducted by agronomic experts familiar with crop growth patterns. The findings indicate that NDPI data provided 86% accuracy against farmer-reported data, which increased to 96% when validated through expert visual inspection of the VI time series. In contrast, the RVI data showed lower performance, with 69% accuracy compared to farmer-reported data and 73% agreement with expert validation. While these results demonstrated the good performance of the applied methodology in identifying double-cropping parcels, further validation is required in future campaigns through on-site ground inspections. BIBLIOGRAPHY Meroni, M., d’Andrimont, R., Vrieling, A., Fasbender, D., Lemoine, G., Rembold, F., Seguini, L., & Verhegghen, A. (2021). Comparing land surface phenology of major European crops as derived from SAR and multispectral data of Sentinel-1 and -2.
Remote Sensing of Environment, 253(June 2020). https://doi.org/10.1016/j.rse.2020.112232 Zeng, L., Wardlow, B. D., Xiang, D., Hu, S., & Li, D. (2020). A review of vegetation phenological metrics extraction using time-series, multispectral satellite data. Remote Sensing of Environment, 237(August 2019), 111511. https://doi.org/10.1016/j.rse.2019.111511
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: The Back to Basics (B2B) Sentinel-2 data filtering method

Authors: Philippe LOUDJANI
Affiliations: Joint Research Centre of The EC
Noise and cloud effects are significant challenges in Sentinel-2 data processing, affecting data usability. Several solutions have been proposed, ranging from cloud masking (e.g. the Scene Classification Layer (SCL)) and atmospheric correction to machine learning-based interpolation and data fusion with other satellite sources. Despite these improvements, there is still a need for continued research and innovation to enhance the reliability and usability of Sentinel-2 imagery for diverse applications. In this Back to Basics (B2B) filtering process, we propose a novel method to identify, at parcel level, acquisition dates for which the spectral information is considered reliable, based on comparison with the physically expected spectral reflectance of ground features. The first developments were carried out with a view to using Sentinel-2 information for the monitoring of agricultural land. For each Sentinel-2 acquisition date, the filtering is based on the spectral responses from 10 bands (2 to 8a, 11 and 12). Acquisition dates are considered reliable and retained when the spectral band curve fits the expected typical spectral response curves of vegetation or bare soil. A second step of the filtering method subdivides the spectral curves into different phenological phases (e.g. sparse vegetation, growing vegetation, senescent vegetation) or categories (e.g. dry or wet bare soils). Once filtered, the retained information provided promising results for the identification of agricultural practices such as harvest, grassland mowing, tillage or non-tillage, or even the presence of green cover. The methodological steps and some first results of their use will be illustrated during the presentation. Further developments are ongoing through the screening of intra-parcel pixel spectral response curves.
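A curve-matching step of this kind could, in principle, look like the shape tests below. This is a loose sketch under an assumed band ordering and with purely illustrative thresholds; the actual B2B criteria are not given in the abstract:

```python
import numpy as np

# Assumed Sentinel-2 band order: B02, B03, B04, B05, B06, B07, B08, B8A, B11, B12
def looks_like_vegetation(refl):
    """Crude shape test for a vegetation spectrum: green reflectance bump,
    strong red-edge contrast, NIR plateau above the SWIR bands.
    Thresholds are illustrative, not the B2B values."""
    b02, b03, b04, b05, b06, b07, b08, b8a, b11, b12 = refl
    red_edge_rise = b08 > 2.0 * b04        # steep rise from red to NIR
    green_peak = b03 > b02 and b03 > b04   # green bump in the visible
    nir_over_swir = b8a > b11 > b12        # NIR plateau above SWIR
    return red_edge_rise and green_peak and nir_over_swir

def looks_like_bare_soil(refl):
    """Crude shape test for a soil spectrum: reflectance rising smoothly
    from blue to SWIR1, with no vegetation-like red-edge jump."""
    b02, b03, b04, b05, b06, b07, b08, b8a, b11, b12 = refl
    visible_to_swir1 = [b02, b03, b04, b05, b06, b07, b08, b8a, b11]
    monotonic = all(np.diff(visible_to_swir1) >= -0.01)
    return monotonic and b08 < 2.0 * b04
```

An acquisition date would then be retained when at least one of the expected shapes is matched, and rejected (as cloud, shadow or noise) otherwise.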
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Comparative Analysis of Vegetation Indices from Sentinel-2 Data and Ground-Based Phenology: Insights into Forest Ecosystem Monitoring

Authors: Ana Potočnik Buhvald, Prof. Krištof Oštir, Dr. Mitja Skudnik, Dr. Urška Kanjir
Affiliations: University of Ljubljana Faculty of Civil and Geodetic Engineering, Slovenian Forestry Institute, University of Ljubljana, Biotechnical Faculty, Department of Forestry and Renewable Forest Resources, Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU)
Introduction Understanding forest phenology — the timing and patterns of natural events — is vital for assessing environmental change and its impact on ecosystem functions. In particular, it is critical for understanding ecosystem responses to climate change, as phenological shifts affect the carbon and water cycles, as well as species interactions (Caparros-Santiago et al., 2021). Recent temperature increases in temperate forests have advanced spring and delayed autumn phenology, disrupting ecosystem dynamics (Piao et al., 2019; Zhao et al., 2022). Traditionally, ground-based observations provided detailed insights but were limited in scale (Moon et al., 2022). Satellite remote sensing, particularly with sensors like Sentinel-2, enables large-scale phenology monitoring with improved resolution. Vegetation indices (VIs), such as NDVI and EVI, are widely used to track phenological events (Zeng et al., 2020). Recent advancements, including red-edge VIs like IRECI, offer refined tools for species-specific studies (Misra et al., 2016). In this study, we evaluate the effectiveness of 15 vegetation indices (VIs) and Sentinel-2 spectral bands for extracting phenological metrics at 10 m resolution for three widespread European broadleaved tree species (Fagus sylvatica, Betula pendula, Tilia spp.) from 2018–2020. Using a dynamic threshold method, we calculate the start (SOS) and end (EOS) of the season, comparing the results with ground-truth data from the National Phenological Database. A statistical analysis identifies the most representative VIs and spectral bands, which are then generalized to native European beech forest covering Slovenia's diverse geographic and climatic zones. This study highlights the utility, limitations, and inconsistencies of commonly used VIs, confirming their robustness for stand-level phenology studies and assessing the influence of elevation and biogeographical region on phenological variability.
Materials and methods This study is performed across Slovenia, a country with extensive forest cover (around 60% of the country) and relatively large climatic gradients. We used the phenological data of 118 broadleaved trees distributed throughout Slovenia at altitudes between 40 and 1,365 m above sea level, located in different biogeographical regions: Alpine, Continental, Mediterranean and Pannonian. The observations followed BBCH codes 11 (SOS is when 10% of the leaves have their final shape but not yet their final size and colour) and 94 (EOS is when 50% of the leaves have changed colour from green to yellow, red or brown). Our ground data is the phenological archive of systematic monitoring by the Slovenian National Phenological Network (NPN) since 1951. We focussed on the period 2017-2022, which corresponds to the availability of Sentinel-2 data. Between 2017 and 2022, we created Sentinel-2 satellite image time series (SITS) for each observed tree location, using the nearest pixel within a 20 m crown circle (4 Sentinel-2 pixels). The open-source Python library eo-learn was used to process Sentinel-2 images. Sentinel-2 L2A bands were first obtained using sentinelhub-py, along with cloud masks (empirically selected threshold of 0.99) and cloud probabilities (0.54). For each pixel, we used SITS of all 12 Sentinel-2 multispectral bands (B01-B12) and 15 VIs sensitive to canopy chlorophyll dynamics: SR (Simple Ratio), NDVI (Normalized Difference Vegetation Index), TNDVI (Transformed NDVI), KNDVI (Kernel NDVI), EVI (Enhanced Vegetation Index), EVI2, SAVI (Soil-Adjusted Vegetation Index), GNDVI (Green NDVI), GNDVI2, GCI (Green Chlorophyll Index), ARVI (Atmospherically Resistant Vegetation Index), IRECI (Inverted Red-Edge Chlorophyll Index), DSWI (Disease Water Stress Index), BWDRVI (Blue-Weighted Difference Vegetation Index) and NDRE1 (Normalized Difference Red-Edge).
All variables were smoothed with the Savitzky-Golay filter (window=31, order=3) to reduce cloud and atmospheric effects. SITS at 7-day intervals were interpolated and smoothed, then further interpolated to daily values. For the extraction of the phenological metrics (SOS and EOS) we used one of the most commonly applied methods in remote sensing phenological studies, the VI threshold method, assuming that phenological stages begin when smoothed VI values reach a specific threshold. A range of thresholds (30% to 60% at 10% intervals) was tested to identify a single optimal value suitable for deciduous trees in central European forests. Results The distribution of results shows that the extraction of phenological metrics for deciduous tree species is best represented when using the Sentinel-2 bands in the red-edge and the NIR. The VIs most sensitive in detecting forest seasonal changes, and thus most useful in phenology research, are KNDVI, EVI, EVI2, SAVI, IRECI and NDRE1. The correlation matrix shows that the selected bands and VIs are highly correlated features, with Pearson coefficients among KNDVI, EVI, EVI2 and SAVI greater than 0.9. Our results also show that for modelling the SOS phenology metrics it is better to use VIs than a single Sentinel-2 band, using a threshold of 0.5, while for EOS prediction it is better to use the NIR bands B08 and B8A from Sentinel-2 with a threshold of 0.4. However, the statistical values show that SOS is more reliably predicted from remote sensing data than EOS. Our model evaluation highlights key phenological species (e.g. Fagus sylvatica, Betula pendula, and various Tilia species), which are widespread and represent European tree diversity. The evaluation shows that there is no generally superior VI for modelling the SOS and EOS of deciduous trees.
For Fagus sylvatica, IRECI shows higher precision for SOS with lower variability, while SAVI performs better for EOS with minimal differences and errors, but these results do not argue in favour of exclusive use of these indices in future research.
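The smoothing-plus-threshold extraction described above can be sketched as follows. The Savitzky-Golay window and order follow the abstract; the helper itself, and the use of a single amplitude fraction for both SOS and EOS, are illustrative simplifications:

```python
import numpy as np
from scipy.signal import savgol_filter

def sos_eos(doy, vi, threshold=0.5, window=31, order=3):
    """Smooth a daily VI series with a Savitzky-Golay filter and return
    (SOS, EOS): the first and last day on which the smoothed curve exceeds
    `threshold` of its seasonal amplitude."""
    smooth = savgol_filter(np.asarray(vi, dtype=float), window, order)
    level = smooth.min() + threshold * (smooth.max() - smooth.min())
    above = np.where(smooth >= level)[0]  # days above the dynamic threshold
    return doy[above[0]], doy[above[-1]]
```

In the study's setup the threshold fraction would be 0.5 for SOS (on VIs) and 0.4 for EOS (on the NIR bands), applied to daily-interpolated series.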
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: C.02.05 - POSTER - MAGIC – Preparing for the ESA-NASA satellite gravity constellation

ESA and NASA have discussed collaboration on satellite gravity missions for more than a decade, both by engaging the community (e.g. the GOCE Workshop, Munich 2011) and in dedicated discussions under the JPPG (ESWG, 2015). The two Agencies recently co-signed a Joint Statement of Intent for cooperation on the MAss Change and Geosciences International Constellation (MAGIC). MAGIC is the jointly developed concept for collaboration on future satellite gravity observations that addresses the needs of the international user community and stakeholders, consisting of a staggered deployment of two satellite pairs: GRACE-C (NASA and DLR) and the Next-Generation Gravity Mission (NGGM, ESA). MAGIC addresses the interest in joining efforts to build a constellation of two satellite pairs, one polar and one inclined, to drastically increase the temporal and spatial resolutions and provide, among other things, information essential for water management, for monitoring and forecasting of hydrological extremes (i.e. droughts and floods), and for monitoring global tipping points. The constellation is founded on a joint reference document co-signed in October 2020, the MAGIC Mission Requirements Document (MRD), which consolidated the needs of the international user community as described in the IUGG 2015 publication (Pail et al., 2015) and the NASA 2017 Decadal Survey (NASEM). Furthermore, the first ESA-NASA MAGIC Science and Applications Workshop in Assisi, Italy, in November 2023, refined the needs of the international user community and stressed the importance of demonstrating operational capabilities of mass change monitoring from space. The two agencies have implemented a joint science and applications plan on MAGIC to coordinate relevant activities. This session invites contributions related to MAGIC Level-2 and Level-3 product development, as well as Level-1 product development of the individual GRACE-C and NGGM missions.
Constellation data processing, algorithm development and mission performance aspects shall be addressed. Contributions related to the generation of higher-level (e.g. Level-4) products involving data assimilation, data fusion and AI techniques are also encouraged. This session shall also address the novel science and operational services and applications enabled by MAGIC and user perspectives related to synergies between the Science Data Systems of ESA and NASA.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: NGGM/MAGIC contributions to geodesy in geoid modeling and precise orbit determination

Authors: Prof. George S. Vergos, Prof. Dimitrios Tsoulis, Dr. Thomas Loudis Papanikolaou, Prof. Dimitrios Natsiopoulos, Mrs. Elisavet Maria Mamagiannou, Georgia Gavriilidou
Affiliations: Laboratory of Gravity Field Research and Applications – GravLab, Department of Geodesy and Surveying, Aristotle University of Thessaloniki, DORUS Space Lab, ACT 2603, Department of Surveying and Geoinformatics Engineering, School of Engineering, International Hellenic University
The SING project ("Studying the Impact of the NGGM and MAGIC missions") aims to evaluate the scientific and operational contributions of the Next-Generation Gravity Mission (NGGM) and the Mass-change And Geosciences International Constellation (MAGIC) satellite missions to geodesy, among other disciplines. This initiative aims to generate advanced Level-3 and Level-4 data products from these missions’ observations, with a focus on understanding and forecasting changes in terrestrial water storage, sea level, and ice masses. The resulting data are intended to enhance Earth observation models, supporting a wide range of applications across climate science, hydrology, and oceanography. The contribution of GravLab (AUTH) to this project focuses mostly on geodesy itself: an effort to outline, determine and quantify the impact of NGGM/MAGIC on geoid/potential determination, both static and time-variable, and on improvements to precise orbit determination for both gravity-related and altimetry missions. In this study, we utilize a 12-year simulated dataset with monthly time sampling (up to degree and order 120) and a respective 12-year trend (up to degree and order 160) represented as Spherical Harmonic Coefficients (SHCs), provided within the SING project. This dataset enables us to assess the impact of the MAGIC/NGGM constellation on geodesy and, in particular, on the International Height Reference Frame (IHRF). By analyzing the SHC data, we estimate gravitational potential (W), geoid heights (N), and physical heights (H) at IHRF core sites and assess both their linear and long-term trends. The aim is to assess the recoverable accuracy in potential and geoid determination based on NGGM/MAGIC data for IHRF sites, and furthermore to evaluate their temporal evolution and whether they can be reliably combined with altimetry and Sentinel-1 SAR data to close the height combination scheme at the sea-land boundaries over e.g. tide-gauge and IHRF sites.
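As background for the potential and geoid estimates mentioned above, the quantities follow from standard spherical harmonic synthesis and the Bruns formula (textbook relations, e.g. Heiskanen and Moritz, Physical Geodesy; not a SING-specific formulation):

```latex
T(r,\varphi,\lambda) = \frac{GM}{r} \sum_{n=2}^{n_{\max}} \left(\frac{a}{r}\right)^{n}
\sum_{m=0}^{n} \left( \Delta\bar{C}_{nm}\cos m\lambda + \Delta\bar{S}_{nm}\sin m\lambda \right)
\bar{P}_{nm}(\sin\varphi), \qquad N = \frac{T}{\gamma},
```

where T is the disturbing potential, \Delta\bar{C}_{nm} and \Delta\bar{S}_{nm} are the SHC differences with respect to a normal (reference ellipsoid) field, \bar{P}_{nm} are the fully normalized associated Legendre functions, and \gamma is normal gravity on the ellipsoid; geoid heights N then follow directly from the Bruns formula.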
Furthermore, the impact of the MAGIC/NGGM missions is being investigated within Precise Orbit Determination (POD), a dynamic system with high sensitivity to gravitational effects. The POD scenarios applied here aim at assessing the MAGIC/NGGM simulated geopotential model datasets. The assessment is carried out on the POD performance of the GRACE Follow-On mission, with inter-satellite laser ranging data analysis serving as an external validation tool for relative orbit determination in the along-track direction. POD will further be applied to satellite altimetry missions to reveal the MAGIC/NGGM performance for the orbital constituents correlated with the altimeter instrument and therefore related to the observed sea level.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: How can COST-G support MAGIC?

Authors: Ulrich Meyer, Dr. Mathis Blossfeld, Prof. Dr. Adrian Jäggi
Affiliations: University of Bern, Astronomical Institute, Deutsches Geodätisches Forschungsinstitut
The Combination Service for Time-variable Gravity fields (COST-G) started its operation as a product center of the International Gravity Field Service (IGFS), under the umbrella of the International Association of Geodesy (IAG), in 2019. COST-G operationally provides combinations of the monthly gravity field solutions computed by Analysis Centers (ACs) worldwide from GRACE-FO orbit and range-rate data, as well as long-wavelength solutions derived from kinematic orbits of the Swarm mission to support bridging potential gaps in the GRACE-FO time-series. Recently, Chinese ACs processing GRACE and GRACE-FO data have joined COST-G, and a second, extended release of combinations of re-processed monthly GRACE solutions has been published. COST-G is a direct outcome of the European Gravity Service for Improved Emergency Management (EGSIEM), a Horizon 2020-funded project (2015-2017) in which a prototype combination service was developed. Within EGSIEM, combination on the normal equation (NEQ) level was studied, and the combination of the long-wavelength gravity field coefficients with NEQs derived from Satellite Laser Ranging (SLR) was also discussed. COST-G performs its current combination on solution level, because not all ACs provide normal equations, and because the noise models implemented at the different ACs, and consequently the uncertainty information in the individual NEQs, are very diverse. But with the availability of realistic uncertainty information for the most important background models, e.g. from the NEROGRAV project, and with the implementation of empirical covariance modeling techniques by an increasing number of ACs, combination on NEQ level will return to the focus of COST-G. Moreover, the International Laser Ranging Service (ILRS) is currently studying, in the frame of a pilot project, the generation and combination of gravity field NEQs based on observations of the geodetic SLR satellites.
A combination of the combined SLR NEQs provided by the ILRS with the combined GRACE and GRACE-FO NEQs from COST-G is foreseen. In view of the MAGIC mission, where the combination of diverse contributions from the MAGIC ACs and SLR on NEQ level is also planned, COST-G will be in an excellent position to provide support.
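The difference between the two combination levels can be illustrated with a toy example. The sketch below (synthetic matrices, not COST-G software) combines two analysis-centre estimates once by stacking normal equations and once as a covariance-weighted mean of the individual solutions; when full, consistent covariances are available the two routes coincide:

```python
import numpy as np

truth = np.array([1.0, -0.5, 0.25])            # invented "coefficient" vector

def make_ac(sigma, seed):
    """One synthetic analysis centre: observations -> NEQ and own solution."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(20, 3))               # design matrix
    y = A @ truth + rng.normal(0, sigma, 20)   # noisy observations
    N = A.T @ A / sigma**2                     # normal matrix
    b = A.T @ y / sigma**2                     # right-hand side
    x = np.linalg.solve(N, b)                  # the AC's own solution
    C = np.linalg.inv(N)                       # and its covariance
    return N, b, x, C

N1, b1, x1, C1 = make_ac(0.1, 2)
N2, b2, x2, C2 = make_ac(0.2, 3)

# NEQ-level combination: add the normals and solve once
x_neq = np.linalg.solve(N1 + N2, b1 + b2)

# Solution-level combination: covariance-weighted mean of the two solutions
W1, W2 = np.linalg.inv(C1), np.linalg.inv(C2)
x_sol = np.linalg.solve(W1 + W2, W1 @ x1 + W2 @ x2)
```

The equivalence holds only when each solution comes with its complete covariance; with diverse AC noise models and incomplete NEQ information, solution-level combination with empirically determined weights is the pragmatic choice.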
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: In-flight calibration of the NGGM accelerometers

Authors: Sean Bruinsma, Prof. Dr. Pieter Visser, Dr. Jean-Charles Marty, Dr. Julien Laurent-Varin, Dr. José Van den IJssel, Dr. Christian Siemes, Dr. Joao Teixeira da Encarnaçao
Affiliations: GET/CNES, TU Delft, Faculty of Aerospace Engineering
CNES and TU Delft are developing the algorithms and modules for the calibration of the accelerometers of the NGGM satellites, based on satellite ‘shakings’ and precise orbit determination. Both NGGM satellites will be equipped with three accelerometers: one in the center of mass and one on each side along the normal axis. The calibration based on satellite shaking provides all calibration parameters except the biases and a few other parameters, such as the distance between the accelerometers. Fitting the orbital positions of the kinematic precise orbit yields three biases and three scale factors per arc as calibration parameters. We will present the calibration method and a performance assessment, which take advantage of the three accelerometers and the attitude sensors onboard each spacecraft, as well as the proposed implementations in the CNES and TU Delft software, respectively.
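In its simplest form, the orbit-based part of such a calibration reduces to a linear least-squares fit of a bias and a scale factor per axis against a reference acceleration. The sketch below is purely illustrative (synthetic numbers, not the CNES/TU Delft modules):

```python
import numpy as np

# Synthetic reference non-gravitational acceleration for one axis (m/s^2),
# e.g. as would be derived from precise orbit determination. Values invented.
rng = np.random.default_rng(4)
a_ref = rng.normal(0.0, 1e-7, 500)

true_scale, true_bias = 0.98, 3e-8             # assumed instrument parameters
a_meas = true_scale * a_ref + true_bias + rng.normal(0, 1e-9, 500)

# Least-squares estimate of scale and bias: a_meas ~= scale * a_ref + bias
A = np.column_stack([a_ref, np.ones_like(a_ref)])
(scale_est, bias_est), *_ = np.linalg.lstsq(A, a_meas, rcond=None)
print(f"scale: {scale_est:.4f}, bias: {bias_est:.2e} m/s^2")
```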
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Integrating Future Satellite Gravimetry Missions with Regional Land Surface Models: Capturing Water Storages and Fluxes Under Extreme Conditions

Authors: Anne Springer, Yorck Ewerdwalbesloh, Dr. Helena Gerdener, Prof. Dr. Jürgen
Affiliations: University of Bonn
Satellite gravimetry missions have been demonstrated to be of great value for the monitoring of global water resources. Terrestrial water storage anomalies derived from the GRACE and GRACE-FO missions have been incorporated into hydrological and land surface models through data assimilation techniques, thus improving the simulation of individual water storage components such as soil moisture, snow and groundwater. These advancements support a wide range of applications, including drought monitoring, investigation of groundwater pumping, and landslide prediction. Data assimilation updates the model state by balancing observation and model errors. However, the coarse spatial and temporal resolution of the current GRACE-FO mission imposes specific limitations on assimilation into high-resolution regional models. Upcoming missions, such as MAGIC and future quantum gravimetry missions, promise higher spatial and temporal resolution and improved accuracy. It is anticipated that these advances will significantly increase the utility of satellite gravimetry for regional-scale data assimilation frameworks. Notably, sub-monthly gravity field solutions will offer significant advantages for applications such as flood prediction, while higher spatial resolution will be particularly beneficial for regions with complex terrain, e.g., in the context of snow modeling. In this contribution, we assess the potential impact of future satellite gravimetry missions on data assimilation frameworks using the Community Land Model (CLM) over Europe. Land surface models (LSMs) simulate the coupled dynamics of water, energy, and biogeochemical cycles within vegetation and soil systems. The objective of this study is to assess the impact of increased resolution and accuracy of future missions on the representation of individual water storage components and associated water fluxes, particularly during extreme events. 
To this end, we conduct Observing System Simulation Experiments (OSSEs) utilizing two versions of CLM (CLM3.5 and CLM5). One model version is treated as the synthetic truth under varying error scenarios corresponding to different future gravity missions, including MAGIC. Assimilation experiments are conducted with the other model version to quantify improvements in water storage and flux representation.
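The core of the assimilation update can be illustrated with a scalar example. The snippet below (a toy sketch with invented numbers, not the CLM framework) applies a Kalman-type update of a modelled terrestrial water storage value with a gravimetry observation; a more accurate mission (smaller observation variance) pulls the analysis closer to the observation and shrinks the analysis uncertainty:

```python
# Scalar Kalman update of modelled terrestrial water storage (TWS) with a
# satellite-gravimetry observation. All numbers are invented for illustration.
tws_model = 120.0      # model forecast of the TWS anomaly (mm)
var_model = 25.0       # assumed model error variance (mm^2)
tws_obs = 100.0        # gravimetry-derived TWS observation (mm)
var_obs = 16.0         # observation error variance (mm^2); smaller for MAGIC

K = var_model / (var_model + var_obs)              # Kalman gain
tws_analysis = tws_model + K * (tws_obs - tws_model)
var_analysis = (1.0 - K) * var_model               # reduced analysis variance
print(tws_analysis, var_analysis)
```

The OSSEs generalize this to full state vectors and ensemble-based error estimates, but the weighting principle is the same.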
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Toolbox for MAGIC

Authors: Per Knudsen, Americo Ambrozio, Marco Restano
Affiliations: DTU Space, Technical University of Denmark, European Space Agency - ESRIN
The MAGIC missions are envisaged to advance the applications of satellite-based gravity field information for tracking changes in mass distribution and transport in groundwater storage, ice sheets and oceans. The GOCE User Toolbox (GUT) was originally developed for the utilisation and analysis of GOCE products to support applications in Geodesy, Oceanography and Solid Earth Physics. GUT consists of a series of advanced computer routines that carry out the required computations without requiring expert knowledge of geodesy. Hence, with its advanced routines for handling gravity field information rigorously, GUT may support the MAGIC mission in developing Level-2 and Level-3 products. Focusing on the MAGIC mission goal of unprecedented recovery of ocean bottom pressure, a more flexible processing of the gravity field information may become essential. Furthermore, an integration of ocean bottom pressure changes with changes in the geostrophic surface currents may advance the analyses further. GUT facilitates such flexible processing and, in addition, contains tools for the computation of the dynamic ocean topography and the associated geostrophic surface currents.
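One of the computations mentioned above, surface geostrophic currents from the dynamic ocean topography, can be sketched with standard f-plane geostrophy on a synthetic grid (this is not GUT code; grid, latitude and topography are invented):

```python
import numpy as np

# Geostrophic velocities from a gridded dynamic ocean topography eta:
# u = -(g/f) * d(eta)/dy,  v = (g/f) * d(eta)/dx  (Northern Hemisphere).
g = 9.81                                       # gravity (m/s^2)
f = 1.0e-4                                     # Coriolis parameter at ~45N (1/s)
dx = dy = 25e3                                 # grid spacing (m)

y, x = np.mgrid[0:20, 0:20]                    # grid indices (rows = y)
eta = 0.5 * np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 30.0)  # DOT bump (m)

deta_dy, deta_dx = np.gradient(eta, dy, dx)    # gradients along y then x
u = -(g / f) * deta_dy                         # eastward velocity (m/s)
v = (g / f) * deta_dx                          # northward velocity (m/s)
```

A topography high in the Northern Hemisphere yields clockwise (anticyclonic) flow around it, which the signs above reproduce.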
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: First Level-2a gravity field results of the NGGM and MAGIC End-To-End Mission Performance Evaluation Study

Authors: Markus Hauk, Christoph Dahle, Josefine Wilms, Irvin Deaan Swart, Frank Flechtner, Roland Pail, Philipp Zingerle, Petro Abrykosov, Thomas Gruber, Thorsten Mayer-Gürr, Sean Bruinsma, Mirko Reguzzoni, Lorenzo Rossi, Federica Migliaccio
Affiliations: GFZ Potsdam, Technische Universität Berlin, Physical Geodesy, Technische Universität München, Institute of Astronomical and Physical Geodesy, Technische Universität Graz, Institute for Geodesy, Centre National d'Etudes Spatiales (CNES), Politecnico di Milano, Department of Civil and Environmental Engineering
The Mass change and Geosciences International Constellation (MAGIC) will be composed of two satellite pairs flying in different orbit planes. The NASA/DLR-developed first pair (GRACE-C) will be launched in 2028 into a near-polar orbit at an altitude of around 500 km, while the ESA-developed second pair (NGGM) will be launched in 2032 into a controlled orbit with an inclination of 70° at approximately 400 km altitude. The aim of MAGIC is to extend and improve the time series of GRACE-like single-pair gravity missions by providing enhanced spatial and temporal resolution as well as reduced uncertainty and latency to address international user needs. In view of this mission, the main goal of the NGGM and MAGIC End-To-End Mission Performance Evaluation Study is the establishment of a concept for the NGGM/MAGIC Ground Processor from Level-0 to Level-3. One of the central parts of the processor is the NGGM and MAGIC End-To-End Mission Performance Evaluation Simulator (NEMPES), including a Level-1b to Level-2a gravity field recovery processing module operated by Distributed Processing Facilities (DPFs). Each of the contributing institutions will provide independent Level-2a gravity field products based on its own software, but on a common realistic test scenario with related test data sets and background models. This presentation provides an overview of simulation results computed by the different DPFs in the framework of NEMPES in order to validate the gravity modeling software and to assess the expected performance of NGGM/MAGIC based on Level-2a gravity field products. The evaluation of the simulated End-To-End Level-2a gravity fields is done according to the requirements and recommendations set out in the NGGM Mission Requirements Document (MRD).
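A typical metric in such Level-2a evaluations is the error degree RMS of the estimated spherical harmonic coefficients with respect to the reference field, compared against the signal degree RMS. A minimal sketch (synthetic coefficients with an assumed power-law decay; not NEMPES output):

```python
import numpy as np

rng = np.random.default_rng(7)
nmax = 60
# Coefficient triangles c[n, m] (n >= m); the signal decay 1e-8/n^2 and the
# 1e-12 noise level are assumptions purely for illustration.
c_true = np.zeros((nmax + 1, nmax + 1))
c_est = np.zeros_like(c_true)
for n in range(2, nmax + 1):
    c_true[n, :n + 1] = 1e-8 / n**2
    c_est[n, :n + 1] = c_true[n, :n + 1] + rng.normal(0, 1e-12, n + 1)

def degree_rms(c):
    """RMS per degree n over all orders m, for degrees 2..nmax."""
    return np.array([np.sqrt(np.mean(c[n, :n + 1] ** 2))
                     for n in range(2, nmax + 1)])

signal = degree_rms(c_true)
error = degree_rms(c_est - c_true)             # error degree RMS
```

The degree at which the error curve crosses the signal curve then indicates the effective spatial resolution of the solution.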
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Rifting detectability in the Gulf of Aden from MAGIC simulated data

Authors: Lorenzo Rossi, Alessandro Regorda, Mirko Reguzzoni, Carlo Iapige De Gaetani, Riccardo Barzaghi, Roberto Sabadini, Anna Maria Marotta
Affiliations: Department of Civil and Environmental Engineering, Politecnico di Milano, Department of Earth Sciences, University of Milan
The MAGIC mission promises to significantly improve the knowledge of a large variety of Solid Earth processes in terms of both spatial and temporal resolution. Among them, the opening of the Gulf of Aden is of great geophysical interest, being one of the most active extensional tectonic processes on Earth. In particular, the ongoing extension at the northern and southern borders of the Gulf of Aden is responsible for the displacement of the contact between the oceanic crust and the continental crust at a rate of about 1.25 cm/yr. The gravity signal produced by this tectonic process is expected to increase linearly over the lifetime of the MAGIC mission at a rate with a well-defined spatial pattern, motivating an investigation of its detectability with a highly accurate satellite gravity mission such as MAGIC is expected to be. More precisely, the rate pattern is characterized by two peaks about 200 km apart, with maximum values of about 0.06 μGal/yr, evaluated at 5 km height along a profile perpendicular to the ridge and passing through the centre of the gulf. In this framework, a simulation of the MAGIC mission was performed over a period of seven years under the hypothesis of isolating the gravity signal of this tectonic process. The observations of gravitational differences along the line of sight connecting the two satellites of each pair were simulated along the orbits, by assuming the previously mentioned constant rate pattern and adding an instrumental noise consistent with the MAGIC specifications. These simulated data were processed by the so-called space-wise approach to produce seven yearly gravity field solutions with the corresponding error covariance matrices, to be used for the estimation of the linear trend in the Gulf of Aden.
The space-wise approach is based on the remove-restore principle and consists of a first estimate of the global gravity field in terms of spherical harmonics by least-squares adjustment of the simulated data, followed by a local refinement by collocation using an empirically estimated covariance function for the residual gravity signal. Comparing the estimated pattern of the linearly varying gravity signal with the actual pattern, properly smoothed to reduce the impact of the inevitable loss of high frequencies in the signal, shows that the two are consistent at a spatial resolution of about 200 km. This confirms the intrinsic potential of the MAGIC mission in detecting the rifting process under study, even though the spatial resolution is not sufficient to reconstruct the two-peak pattern of the gravity rate.
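The effect of limited spatial resolution on the two-peak rate pattern can be illustrated in one dimension. The sketch below uses the peak separation and amplitude quoted above, but the peak width and smoothing radius are assumptions; smoothing to roughly 200 km resolution merges the two peaks into a single maximum:

```python
import numpy as np

x = np.arange(-500.0, 501.0, 5.0)              # profile coordinate (km)

def peak(x0):
    # Gaussian peak, 0.06 muGal/yr amplitude, assumed 40 km width
    return 0.06 * np.exp(-((x - x0) ** 2) / (2 * 40.0 ** 2))

rate = peak(-100.0) + peak(100.0)              # two peaks ~200 km apart

# Gaussian smoothing with an assumed ~100 km radius (moving-average kernel)
sigma_km = 100.0
kx = np.arange(-300.0, 301.0, 5.0)
kernel = np.exp(-kx ** 2 / (2 * sigma_km ** 2))
kernel /= kernel.sum()
smoothed = np.convolve(rate, kernel, mode="same")

i0 = np.argmin(np.abs(x))                      # profile centre
ip = np.argmin(np.abs(x - 100.0))              # location of one original peak
```

The original profile dips at the centre between the two peaks, whereas the smoothed profile peaks at the centre with reduced amplitude, mirroring the reported loss of the two-peak structure at ~200 km resolution.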
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: NGGM/MAGIC contributions to the static and time-variable IHRF and height combination realization

Authors: Elisavet-Maria Mamagiannou, Mr George S. Vergos
Affiliations: Laboratory of Gravity Field Research and Applications – GravLab, Department of Geodesy and Surveying, Aristotle University of Thessaloniki, Greece, GR-54124
The Mass-Change and Geosciences International Constellation (MAGIC) mission will mark a significant advancement in satellite gravimetry and Earth observation. MAGIC’s constellation of four satellites will be arranged in two pairs: NASA’s GRACE-C pair (P1) and ESA’s Next Generation Gravity Mission (NGGM) pair (P2). As the scientific community prepares for this novel and promising mission, it is worth noting that the constellation is expected to exceed the capabilities of previous missions, specifically the Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GRACE-FO), by producing mass-change data products with enhanced spatial and temporal resolution, minimal data latency, and higher accuracy. MAGIC aims to deliver 30-day gravity models with a geoid accuracy at the 1 mm level for wavelengths of ~167 km (d/o 120), while the long-term trend is expected to be determined at the 0.1 mm/yr level for the same spatial frequencies. In this work, we aim to assess the projected performance of 30-day Level-2a data products from MAGIC/NGGM, focusing on Spherical Harmonic Coefficients (SHCs) derived from GRACE and GRACE-FO Release-06 Global Geopotential Models (GGMs) spanning 2002 to 2024, which are used as real-world input data to simulate the expected NGGM/MAGIC observations. The input SHCs are properly filtered and converted into temporal gravity field data, serving as proxies for MAGIC/NGGM observations and facilitating the analysis of potential scenarios for future gravity field and mass-change products. The analysis concentrates on evaluating the projected accuracy and spatial resolution of MAGIC/NGGM data in estimating gravity field and mass-change products, including Equivalent Water Thickness (EWT), gravitational potential, geoid heights, and physical heights.
These products, particularly gravitational potential, are essential at core stations of the International Height Reference Frame (IHRF) in order to determine their evolution over time. Within this study, Singular Spectrum Analysis (SSA) is applied to predict gravity field products, examining multiple scenarios with varying mission durations and observational accuracies. Each scenario provides valuable insights into the anticipated performance and capabilities of the MAGIC/NGGM mission, in particular for vertical datum unification, the realization of the IHRF, and height combination along the sea/land boundary, for both the static and the time-variable component.
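The SSA step can be sketched in its most basic form: embed the series in a trajectory matrix, take the SVD, and reconstruct from the leading components. The snippet below is illustrative only (a synthetic trend-plus-annual series, not the study's implementation):

```python
import numpy as np

# Synthetic 20-year monthly series: trend + annual cycle + noise (invented)
rng = np.random.default_rng(8)
t = np.arange(240) / 12.0
clean = 0.5 * t + np.sin(2 * np.pi * t)
series = clean + rng.normal(0, 0.1, t.size)

# 1) Embedding: trajectory matrix of lagged windows
L = 60                                         # window length (5 years)
K = series.size - L + 1
X = np.column_stack([series[i:i + L] for i in range(K)])

# 2) Decomposition and truncation to the leading components
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 4                                          # trend (2) + annual cycle (2)
Xr = (U[:, :r] * s[:r]) @ Vt[:r]

# 3) Diagonal averaging (Hankelization) back to a time series
recon = np.zeros_like(series)
count = np.zeros_like(series)
for j in range(K):
    recon[j:j + L] += Xr[:, j]
    count[j:j + L] += 1
recon /= count

rmse = np.sqrt(np.mean((recon - clean) ** 2))
```

Forecasting variants of SSA extend the reconstructed components beyond the observed span, which is the sense in which it is used for prediction here.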
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Application of the space-wise approach for regional solutions from NGGM/MAGIC simulated data

Authors: Mirko Reguzzoni, Lorenzo Rossi, Öykü Koç, Federica Migliaccio
Affiliations: Department of Civil and Environmental Engineering, Politecnico di Milano
Future satellite gravity missions, such as NGGM/MAGIC or missions based on quantum technology, promise to provide data allowing for time-variable gravity field investigations with higher accuracy and spatio-temporal resolution than those based on GRACE and GRACE-FO data. In this context, the question arises whether a series of global solutions in terms of spherical harmonic coefficients is still a good option for estimating local signals that have a stronger amplitude than the global average and, therefore, may be inferred with a higher spatial resolution. The space-wise approach, a data processing strategy that has been applied in several studies and is currently included in the NGGM and MAGIC End-To-End Mission Performance Evaluation Study, could provide a possible answer to this question. This method basically consists of two steps in the framework of a remove-restore procedure. Firstly, a global spherical harmonic solution is computed by least-squares adjustment. Then, a regional grid prediction by collocation is performed on the residuals, thus refining the global solution by exploiting the local characteristics of the gravity field. In the latter task, a crucial role is played by the modelling of signal and noise covariances, which should be empirically driven by the observations and cannot neglect the temporal aliasing due to gravity field variations during the analyzed time span. The method also provides an estimate of the full error covariance matrix of the grid values, which may be useful for subsequent investigations. This matrix is computed by formal error covariance propagation and is a-posteriori rescaled according to Monte Carlo simulations.
In this work, the processing scheme of the space-wise approach for regional solutions is outlined and then applied to simulated data of the NGGM/MAGIC mission to show the possible improvement of a regional solution with respect to a global one, considering as an example the estimation of the total water storage anomaly in terms of equivalent water height for some hydrological basins of interest.
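The collocation refinement step can be sketched in one dimension. The snippet below is illustrative: the Gaussian signal covariance, correlation length and noise level are assumed here, whereas in the space-wise approach the covariances are estimated empirically from the observations:

```python
import numpy as np

rng = np.random.default_rng(9)
d = 50.0                                       # assumed correlation length (km)

def cov(r):
    """Isotropic Gaussian signal covariance as a function of distance (km)."""
    return np.exp(-(r / d) ** 2)

x_obs = np.linspace(0.0, 500.0, 40)            # observation points (km)
signal = np.sin(x_obs / 80.0)                  # synthetic residual signal
y = signal + rng.normal(0, 0.05, x_obs.size)   # noisy observations

x_grd = np.linspace(0.0, 500.0, 101)           # prediction grid
Cpp = cov(np.abs(x_obs[:, None] - x_obs[None, :]))   # obs-obs covariance
Csp = cov(np.abs(x_grd[:, None] - x_obs[None, :]))   # grid-obs covariance
D = 0.05 ** 2 * np.eye(x_obs.size)             # observation noise covariance

# Least-squares collocation prediction: s_hat = Csp (Cpp + D)^-1 y
s_hat = Csp @ np.linalg.solve(Cpp + D, y)
rmse = np.sqrt(np.mean((s_hat - np.sin(x_grd / 80.0)) ** 2))
```

The full error covariance of the grid values follows from the same matrices (Css - Csp (Cpp + D)^-1 Csp^T), which is what the approach rescales via Monte Carlo.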
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: NGGM/MAGIC to Improve River Discharge and Runoff Estimation

Authors: Francesco Leopardi, Luca Brocca, Usman Liaqat, Jacopo Dari, Carla Saltalippi, Stefania Camici
Affiliations: National Research Council, Research Institute for Geohydrological Protection, Dept. of Civil and Environmental Engineering, University of Perugia
The terrestrial water storage anomaly (TWSA) data acquired by the GRACE (Gravity Recovery And Climate Experiment) mission and its extension GRACE-FO (GRACE Follow-On) constitute a crucial source of information for hydrological applications. The data can be used to derive valuable insights into groundwater storage and its fluctuations over time, which is of paramount importance for determining water availability, simulating river discharge in watersheds, producing runoff maps, and predicting and mitigating potential drought and flood consequences. TWSA data have been employed as input to the STREAM (SaTellite-based Runoff Evaluation And Mapping) model, enabling the simulation of river discharge and the production of runoff maps for several significant basins worldwide. These include the Mississippi-Missouri, Amazon, Danube, Murray-Darling, Lena, Yenisei, Mackenzie and Niger basins, and the results have demonstrated a markedly positive impact of TWSA (Kling-Gupta efficiency values exceeding 0.7). However, when smaller basins (e.g., Danube, Ebro, Rhine) are analyzed, performance is found to decline as a consequence of the low resolution of the TWSA data. The forthcoming NGGM/MAGIC (Next Generation Gravity Mission / Mass-change And Geosciences International Constellation) mission, funded by the ESA-NASA collaboration, will address this challenge by providing data with improved spatio-temporal resolution, enhanced performance and reduced latency. The aim of this work is to investigate how the spatio-temporal resolution and the accuracy of TWSA impact river discharge estimation. For that purpose, synthetic TWSA data were generated by perturbing daily ERA5-Land TWSA data with two different error magnitudes. The data were first smoothed with a 5-day moving average, and then perturbed with an error of 4.2 mm and, separately, with a larger error of 42 mm.
These three different clusters of data (i.e., ERA5-Land TWSA, ERA5-Land TWSA +4.2 mm error and ERA5-Land TWSA +42 mm error) were then used as input to the STREAM model to simulate daily river discharge time series and produce average runoff maps for the period 2003-2012 over three European basins, Rhine, Danube and Elbe. For all the investigated basins, the streamflow results show a slight decrease in Kling-Gupta efficiency index (KGE) when comparing the ERA5-Land TWSA data with the ERA5-Land TWSA data including errors. The difference becomes more pronounced as the error increases, leading to a significant decrease in performance. Similarly, the analysis carried out on the average runoff maps over the same period shows that as the error added to the original data increases, the percentage Root Mean Square Error (RMSE) increases for all the basins investigated. These results highlight the impact of TWSA data on river discharge simulations and runoff mapping. They underline the urgent need for more advanced data to improve the understanding of complex natural phenomena and the impact of ongoing climate change on the entire hydrological cycle.
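The Kling-Gupta efficiency used above follows the standard definition KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), with correlation r, variability ratio alpha and bias ratio beta. A minimal implementation (not the STREAM code; the synthetic discharge series is invented) showing how a larger input error degrades the score:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency of a simulated vs. observed series."""
    r = np.corrcoef(sim, obs)[0, 1]            # linear correlation
    alpha = np.std(sim) / np.std(obs)          # variability ratio
    beta = np.mean(sim) / np.mean(obs)         # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

rng = np.random.default_rng(10)
obs = 100.0 + 10.0 * np.sin(np.arange(365) / 58.0)   # synthetic daily discharge
sim_good = obs + rng.normal(0, 1.0, obs.size)        # small input error
sim_bad = obs + rng.normal(0, 10.0, obs.size)        # large input error
```

A perfect simulation gives KGE = 1; increasing the error added to the input lowers all three terms' agreement and hence the score.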
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Evaluating the Impact of Future Satellite Gravimetry Missions (NGGM/MAGIC) on the Closure of the Sea Level Budget

Authors: Marie Bouih, Benoit Meyssignac, Julia Pfeffer, Robin Fraudeau, Gilles Larnicol, Alejandro Blazquez, Prof. Dr. Roland Pail, Marius Schlaak, Ilias Daras
Affiliations: Magellium, CNES, Technical University of Munich, European Space Agency, Earth & Mission Science Division, ESTEC
The closure of the Sea Level Budget (SLB) is a crucial step in understanding sea level change, requiring precise and accurate estimates of both mass and steric components. Satellite gravimetry advancements, including different configurations of future missions such as GRACE-C (single polar pair), NGGM (single inclined pair), and MAGIC (double pair), offer promising opportunities to enhance SLB closure at various temporal and spatial scales. Each configuration provides unique strengths in terms of spatial coverage, temporal resolution, and sensitivity to mass changes, which are critical for capturing the ocean mass and freshwater exchanges accurately. This study investigates the potential of these missions to reduce uncertainties and improve the consistency of global and regional sea level observations. Synthetic observations derived from an ocean model are used to assess the impact of GRACE-C-like, NGGM, and MAGIC on SLB closure. This is achieved by simulating the resolution of satellite altimetry, gravimetry, and in-situ measurements, including Argo temperature and salinity profiles. The uncertainty propagation approach evaluates discrepancies between observed and modeled sea level components. It provides uncertainty covariance matrices for total, mass, and steric components. The analysis will focus on comparing the uncertainties in the SLB residuals for different mission configurations (GRACE-C-like, NGGM, and MAGIC), with an emphasis on key metrics such as sea level trends and accelerations. Additionally, the structural and measurement uncertainties of the different missions are estimated, updating methods from the SLBC CCI+ project to incorporate GRACE-C, NGGM, and MAGIC. The combination of the uncertainty matrices for each sea level component will allow us to evaluate the enhanced observability of sea level changes and the potential for improved operational services. 
We expect that higher spatial and temporal resolutions, particularly with MAGIC's double-pair configuration, will significantly reduce uncertainties compared to current systems. This work highlights how the NGGM and MAGIC missions can enhance the observability and understanding of sea level dynamics, thereby paving the way for their integration into climate monitoring frameworks and operational services.
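The residual-uncertainty logic can be sketched directly: for a budget residual total - mass - steric with independent errors, the component covariances add, so reducing the mass-component error shrinks the residual uncertainty. A minimal sketch with assumed diagonal covariances (all numbers invented, not the SLBC CCI+ matrices):

```python
import numpy as np

def residual_cov(C_total, C_mass, C_steric):
    # residual = total - mass - steric; independent errors => covariances add
    return C_total + C_mass + C_steric

n = 12                                         # e.g. monthly epochs of one year
C_total = 1.0 * np.eye(n)                      # altimetry error covariance (mm^2)
C_steric = 2.0 * np.eye(n)                     # Argo steric error covariance
C_mass_now = 4.0 * np.eye(n)                   # GRACE-FO-like mass error
C_mass_magic = 1.0 * np.eye(n)                 # assumed smaller MAGIC error

sig_now = np.sqrt(np.diag(residual_cov(C_total, C_mass_now, C_steric)))
sig_magic = np.sqrt(np.diag(residual_cov(C_total, C_mass_magic, C_steric)))
```

Full off-diagonal covariances enter the same way and additionally control the uncertainty of derived trends and accelerations.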
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Future Satellite Gravimetry: Towards a Direct Time-Variable Parametrization

Authors: Philipp Zingerle, Prof. Dr. Roland Pail, Dr. Thomas Gruber
Affiliations: Technical University Of Munich
The limited achievable temporal resolution poses one of the main limitations of current satellite gravity field missions such as GRACE and GRACE-FO and is ultimately responsible for temporal aliasing. Increasing the temporal resolution is thus one of the most important tasks for future satellite gravity field missions. This, however, can only be achieved by means of larger satellite constellations, since the temporal resolution can basically be defined as the time needed to achieve global observation coverage. This needed (retrieval) time scales inversely with the number of satellites: e.g., a two-pair mission such as the upcoming ESA next generation gravity mission (ESA NGGM) can easily achieve global coverage with sufficient spatial resolution within less than a week, while for a hypothetical 6-pair mission even an independent daily retrieval is feasible. Future missions will hence allow sampling the gravity field at much shorter intervals. Until now, however, the parametrization of the gravity field within these intervals usually only accounts for static behavior, resembling a step function in the time domain. Obviously, such behavior is unnatural and does not follow the actual progression of the gravity field. So, even if sufficient temporal resolution were available, temporal aliasing would not be fully mitigated, owing to this mis-parametrization. In this contribution, we therefore investigate the impact of a direct time-variable parametrization through continuous spline functions. Using the example of fictitious multi-pair missions, we show how such a spline parametrization with daily and sub-daily support points can be applied: firstly, we show that the spline parametrization allows a numerically stable and correct solution in a closed-loop scenario.
Secondly, based on a more realistic scenario with a higher-resolution temporal gravity signal, we also highlight the practical limitations of the spline approach for smaller constellations and present some strategies to minimize the impact.
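The contrast between a step-function and a continuous spline parametrization can be illustrated with a one-dimensional toy fit. The sketch below (piecewise-linear hat functions on daily support points; purely illustrative, not the study's parametrization) fits a slowly varying signal both ways:

```python
import numpy as np

# Slowly varying "gravity" signal over 6 days, sampled densely, plus noise
rng = np.random.default_rng(12)
t = np.linspace(0.0, 6.0, 600)
signal = np.sin(2 * np.pi * t / 6.0)
y = signal + rng.normal(0, 0.05, t.size)

knots = np.arange(0.0, 7.0, 1.0)               # daily support points 0..6

# Step parametrization: one constant per day (a step function in time)
day = np.minimum(t.astype(int), 5)
A_step = np.zeros((t.size, 6))
A_step[np.arange(t.size), day] = 1.0

# Continuous parametrization: piecewise-linear hat functions on the knots
A_spl = np.maximum(0.0, 1.0 - np.abs(t[:, None] - knots[None, :]))

fit_step = A_step @ np.linalg.lstsq(A_step, y, rcond=None)[0]
fit_spl = A_spl @ np.linalg.lstsq(A_spl, y, rcond=None)[0]

def rmse(fit):
    return np.sqrt(np.mean((fit - signal) ** 2))
```

The continuous basis follows the signal's actual progression with the same number of daily support points, which is the essence of the mis-parametrization argument; higher-order splines extend the idea with smoother basis functions.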
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Updating the ESA Earth System Model for Future Gravity Mission Simulation Studies: ESA ESM 3.0

Authors: Linus Shihora, Volker Klemann, Laura Jensen, Robert Dill, Lone Stumpe, Dr. Yoshiyuki Tanaka, Ingo Sasgen, Bert Wouters, Henryk Dobslaw
Affiliations: Deutsches Geoforschungszentrum GFZ, The University of Tokyo, Alfred Wegener Institut, Delft University of Technology
The ESA Earth System Model (ESA ESM; Dobslaw et al., 2015) provides a synthetic data-set of the time-variable global gravity field that includes realistic mass variations in the atmosphere, oceans, terrestrial water storage, continental ice-sheets, and the solid Earth across a wide range of spatial and temporal frequencies. For more than 10 years it has been widely applied as a source model in end-to-end simulation studies for future gravity missions, and it has also been utilized to study novel gravity observing concepts on the ground. For those purposes, the ESM needs to include a wide range of signals even at very small spatial scales, which might not yet have been reliably observed by any active satellite mission. In this contribution, we present first steps towards version 3.0 of the ESA ESM. The projected changes include the utilization of a small ensemble of co- and post-seismic earthquake signals, an updated GIA model, and additional mass balance signals from previously unconsidered Arctic glaciers. Extreme hydrometeorological events as well as climate-driven and anthropogenic impacts on continental water storage will be represented through an update of the hydrological component. Additionally, the ESM will separately include ocean bottom pressure variations along the western slope of the Atlantic, representing variations in the meridional overturning circulation as a critically important component of the interactively coupled global climate system. ESA ESM 3.0 will be available from January 2007 until December 2020 with sub-daily sampling. It will also be augmented with synthetic error time-series to facilitate stochastic modelling of residual background model errors. Dobslaw, H., Bergmann-Wolf, I., Dill, R., Forootan, E., Klemann, V., Kusche, J., & Sasgen, I. (2015). The updated ESA Earth System Model for future gravity mission simulation studies. Journal of Geodesy, 89(5), 505–513. https://doi.org/10.1007/s00190-014-0787-8
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Assessment of updated dealiasing products based on the numerical ocean dynamics model TUGO

Authors: Maya Nocet-Binois, Pascal Gegout, Florent Lyard, Jean-Michel Lemoine, Alexandre Couhert, Sean
Affiliations: Cnes
Temporal aliasing of fast gravity signals arising from the Earth's fluid envelope is a major source of errors in the gravity field models from the GRACE/GRACE Follow-On missions. Minimizing these errors, in order to accurately detect and analyze gravity signals that inform on solid Earth dynamics and the climate system, is a major challenge. GET and LEGOS in Toulouse have started calculating dealiasing products for the (re-)processing of GRACE and GRACE-FO. The products are based on an updated and enhanced version of the numerical ocean model TUGO and on the 3D atmosphere of ECMWF ERA5. The experimental Atmosphere and Ocean Dealiasing products (DAC0-AOD1B) include: (1) the atmospheric time-variable gravitational potential generated by the density on ECMWF ERA5 model levels (including hydrometeors, i.e. specific cloud rain/snow/liquid/ice water contents), distributed realistically with altitude above the geoid, and (2) the ocean time-variable gravitational potential computed from the ocean surges (forced by ERA5 ocean surface fields). The water height anomalies are realistically placed on the geoid. The surges derive from the TUGO hydrodynamical model without assimilation, using the new FES2022 unstructured grid validated against altimetry tides by the Ocean HFD team. The DAC0-AOD1B products based on FES2022 and FES2014 are compared with the AOD1B products of GFZ after reduction of the GRACE/GRACE-FO data, i.e. at the gravity field solution level. To that end, monthly gravity field models of GRACE and GRACE-FO are calculated with one or the other dealiasing product, and the resulting models are compared and their differences analyzed. Secondly, the new ocean tide model FES2022 is compared, also at gravity field solution level, with the previous FES model, FES2014.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Developments in Level-3 ice-sheet mass balance products from GRACE to MAGIC

Authors: Martin Horwath, Thorben Döhne, Matthias Willen, Roland Pail
Affiliations: TUD Dresden University of Technology, Institute for Planetary Geodesy, Technical University of Munich, TUM School of Engineering and Design
The ice sheets' contribution to global sea-level change in the 21st century is the least understood among the sea-level contributions and potentially the largest. Therefore, ice sheet mass balance has been a primary application of GRACE and GRACE-FO. It is important to continue its monitoring by satellite gravimetry, to advance the methodology of satellite gravimetry analysis with respect to ice sheets, and to prepare adaptations to the conditions of the MAGIC constellation. Here we review the developments of Gravimetric Mass Balance (GMB) products within ESA’s Climate Change Initiative (CCI) Antarctic Ice Sheet CCI and Greenland Ice Sheet CCI projects (https://data1.geo.tu-dresden.de/, https://climate.esa.int/en/projects/ice-sheets-antarctic/, https://climate.esa.int/en/projects/ice-sheets-greenland/). These products have also been taken up by the Copernicus Climate Change Service as well as by the GravIS Gravity Information Service (https://gravis.gfz-potsdam.de/). The GMB products are currently available as monthly gridded products in 50 km x 50 km polar stereographic grids and as basin products representing mass changes of large drainage basins as well as of the entire ice sheets. These Level-3 products are derived from Level-2 monthly spherical harmonic gravity field solutions by the methodology of tailored sensitivity kernels (Groh and Horwath 2021, doi: 10.3390/rs13091736; Döhne et al. 2023, doi: 10.1007/s00190-022-01697-8), which can also be regarded as mascon solutions based on the analysis of spherical harmonic Level-2 products. The method seeks a compromise (realized by a formal least-squares optimization) between leakage errors and errors propagated from the GRACE Level-2 solutions and therefore crucially depends on information about the latter. We report recent developments of these GRACE and GRACE-FO based products.
They include: GRACE - GRACE-FO gap filling by utilizing information of interannual signals due to surface mass balance fluctuations; improved comprehensiveness of uncertainty characterization; embedding ice-sheet solutions into a fully globally consistent mass change solution; comparison with results from alternative satellite gravimetry analyses. We show how a recent excess of snowfall in East Antarctica is reflected in the GMB time series. Finally, we use expected MAGIC error characteristics to anticipate and discuss the improvement of resolution as well as potential new science applications that will be achievable with MAGIC.
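The least-squares compromise behind tailored sensitivity kernels can be illustrated with a minimal numpy sketch. The setup below is a toy stand-in, not the actual GMB processing: the response matrix, synthetic signal scenarios, and diagonal error covariance are all assumptions for illustration.

```python
import numpy as np

# Hypothetical dimensions: n_coef spherical-harmonic coefficients,
# n_sig synthetic signal scenarios used to constrain leakage.
rng = np.random.default_rng(42)
n_coef, n_sig = 50, 200

# A[i, j]: response of coefficient j under synthetic signal scenario i
A = rng.normal(size=(n_sig, n_coef))
# d[i]: "true" regional mass change for scenario i
d = rng.normal(size=n_sig)
# Sigma: error covariance of the Level-2 coefficients (diagonal here)
Sigma = np.diag(rng.uniform(0.1, 1.0, n_coef))

# Formal least-squares compromise: the first term penalizes leakage
# (misfit to the synthetic scenarios), the Sigma term penalizes noise
# propagated from the Level-2 solutions into the kernel weights.
w = np.linalg.solve(A.T @ A + Sigma, A.T @ d)

# Applying the kernel to a monthly coefficient anomaly gives the
# regional mass estimate as a simple weighted sum.
delta_C = rng.normal(size=n_coef)
mass_estimate = w @ delta_C
```

The key design choice mirrored here is that the kernel weights depend explicitly on the Level-2 error covariance, which is why the method "crucially depends on information about the latter".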
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Monitoring Terrestrial Mass Changes: The CNES's Level-2 and Level-3 Gravimetry Products

Authors: Hugo Lecomte, Alejandro Blazquez, Benoît Meyssignac, Lionel Zawadzki, Jean-Michel Lemoine, Claude Boniface, Julia Pfeffer, Alexandre Boughanemi
Affiliations: LEGOS, CNRS, Univ. Toulouse, IRD, CNES, CNES, Magellium
The Centre National d’Etudes Spatiales (CNES) is actively contributing to the development and distribution of space gravimetry Level-2 and Level-3 products. These products are essential for the study of mass redistribution within the Earth system. Mass change products enable the study of various physical processes such as tides, sea-level change, post-glacial rebound, terrestrial water storage change, co-seismic deformation, atmospheric/oceanic circulation, Earth’s core processes and others. As an analysis center for the processing of the Gravity Recovery And Climate Experiment (GRACE) and GRACE Follow-On (GRACE-FO) missions, CNES distributes Level-2 spherical harmonic coefficients that combine Satellite Laser Ranging data with GRACE data at the normal-equation level, inverted using truncated singular value decomposition. CNES also contributes to the combined solution produced by the International Combination Service for Time-variable Gravity Fields (COST-G). This presentation highlights the different Level-3 products developed and distributed by CNES. The Laboratoire d'Etudes en Géophysique et Océanographie Spatiales (LEGOS) and CNES have developed a processing chain based on GRACE(-FO) Level-2 spherical harmonic products from various analysis centers. This processing chain generates Level-3 products that are corrected for low-degree and solid Earth signals (including earthquakes and post-glacial rebound), filtered, and accompanied by uncertainty estimates. These L3 products are designed to study water mass transfers with a spatial resolution of a few hundred kilometers. The Level-3 total water storage changes data product is distributed via the Data Terra/Theia inland water thematic hub hydroweb.next (hydroweb.next.theia-land.fr). The terrestrial water storage is the sum of groundwater, surface water, soil moisture, snow and glacier water contents.
This information can be associated with other satellite observations to study particular components of the land water budget. It is essential for freshwater resource management and for drought and flood forecasting. The Level-3 barystatic sea-level product is distributed via the Data Terra/ODATIS hub. This oceanography product is combined with sea-level change estimates from satellite altimetry and ocean temperature profiles from in-situ measurements to generate a Level-4 Ocean Heat Content product, also distributed at the same portal. This product places a strong constraint on the Earth's energy imbalance and is crucial to monitor the oceans' role in the climate system. The Level-3 ice-sheet and glacier mass change products will be distributed via the Data Terra/ForMaTerre hub. This cryosphere product relies on gravimetry to estimate ice sheet and glacier mass changes within the framework of the Ice Sheet Mass Balance Inter-comparison Exercise (IMBIE) project. It tracks glacier and ice-sheet dynamics and provides an estimate of the cryosphere component of the sea-level budget. These different products exemplify the key role of CNES in developing innovative services based on satellite gravimetry missions. They will make use of future gravimetry mission data, including from the MAGIC mission, and will be further enhanced through continuous development and the improved spatial and temporal resolution provided by MAGIC.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Development of an open-source Level-3 processor for time-variable satellite gravimetry data

Authors: Eva Boergens, Daniela Rabe, Christoph Dahle, Josefine Wilms, Robert Dill, Frank Flechtner, Alison Beamish, Romulo Goncalves, Henryk Dobslaw, Martin
Affiliations: GFZ Helmholtz Centre For Geosciences, Technical University Berlin
Many users of time-variable satellite gravimetry data from the GRACE and GRACE-FO missions employ gridded Level-3 data for various applications in, e.g., hydrology, glaciology, oceanography, and the climate sciences. Level-3 data are available either as mass concentration (mascon) blocks processed directly from Level-1B sensor data with additional geophysical constraints, or, alternatively, sensor data are initially inverted into Level-2 gravity fields represented in Stokes coefficients, which are subsequently transformed into a surface mass distribution using the thin-layer assumption. Since no a priori constraints are imposed during the estimation, spatially correlated high-frequency noise must be removed via post-processing. In addition, various low-degree coefficients that are not (or only poorly) observed by GRACE/-FO need to be augmented from external data, and several geophysical corrections must be applied. GFZ currently provides such globally gridded Level-3 data via its GravIS portal (gravis.gfz-potsdam.de). To make GRACE/-FO data even more accessible, we intend to release the code used to process the GravIS Level-3 data products “ocean bottom pressure” and “terrestrial water storage” as an open-source Python package. This software will allow researchers to calculate customized Level-3 data by, for example, applying filtering methods best suited to their specific areas of interest or exchanging any of the geophysical corrections. To enhance usability, the package will be easily installable via common Python package managers like pip and conda, and the processing pipeline will be configurable through a user-friendly JSON file. As the code will be open-source, users can also develop new processing features for specific needs. In addition to GRACE/-FO data, this contribution will also specifically focus on Level-3 products derived from simulated Level-2 global gravity fields from both GRACE-C and NGGM, which will eventually form the MAGIC double-pair constellation.
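A JSON-configurable Level-3 pipeline of the kind described above might be driven by a file like the one sketched below. All key names and values here are illustrative assumptions, not the package's actual schema.

```python
import json

# Hypothetical configuration for a Level-3 processing run; the key names
# and values below are illustrative assumptions, not the real schema.
config_text = """
{
  "product": "terrestrial_water_storage",
  "level2_source": "GFZ_RL06",
  "filter": {"type": "DDK", "strength": 3},
  "replace_low_degrees": {"degree1": "TN-13", "c20": "TN-14"},
  "corrections": ["GIA", "geocenter"],
  "grid": {"resolution_deg": 1.0}
}
"""
config = json.loads(config_text)

# A pipeline would dispatch on these entries, e.g. selecting the spectral
# filter and the low-degree replacements before gridding the coefficients.
filter_type = config["filter"]["type"]
resolution = config["grid"]["resolution_deg"]
```

Keeping the run definition in one declarative file makes it easy to exchange a filter or a geophysical correction without touching the processing code, which is the usability goal the abstract describes.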
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: The NGGM/MAGIC Mission Performance Evaluation Framework

Authors: Dr. Thomas Gruber, Roland Pail, Ilias Daras
Affiliations: Technical University Of Munich, European Space Agency
The Next-generation Gravity Mission (NGGM) is planned to constitute the inclined pair of the double-pair Bender-type constellation MAGIC. In the frame of the Mission Performance Evaluation Study, an NGGM/MAGIC end-to-end mission simulator from product Level 0 to Level 3 will be defined and implemented by a European consortium of 9 partner institutions, representing a first prototype for an operational NGGM and MAGIC ground processor. The mission performance evaluation framework (MPEF) is composed of the end-to-end simulator and a performance assessment module. The MPEF architecture is defined as a distributed system composed of a central processing facility for Level 0 to Level 1b processing and for data archiving and dissemination, and sub-processing facilities for processing Level 2 and higher-level products (gravity fields, orbits, application products). In the sub-processing facilities, pre-developed high-performance software to process these products is available at the partner institutions and just needs to be connected to the processing framework. In the current phase B1 of the NGGM/MAGIC project, the MPEF design is being developed and prototype processors are being implemented. The MPEF represents a first step towards an operational ground processing system, which connects new developments with existing highly sophisticated satellite processors. At the poster, we will give an overview of the design of the MPEF, focusing on its architecture, the data flow and the definition of NGGM/MAGIC products.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Improved ocean tide models for gravity field recovery by means of the MAGIC double-pair constellation

Authors: Markus Hauk, Christoph Dahle, Frank Flechtner, Michael Murböck, Natalia Panafidina, Josefine Wilms
Affiliations: GFZ Potsdam, Technische Universität Berlin, Physical Geodesy
Between June 2021 and June 2023, GFZ performed various full-scale simulations for NGGM/MAGIC within the GRACE-FO Science Phase Study (FP/3-17121/21/I-DT-lr) funded under ESA's Third Party Mission (TPM) Program. Several potential mission constellations were investigated and numerically simulated to narrow down the trade space of a potential MAGIC constellation, to provide feedback to parallel Phase-A industry system studies, and to identify an optimum constellation setup regarding science return, technical feasibility, and costs. All simulations were performed based on realistic instrument noise and background model error assumptions as agreed with ESA. A special focus was set on the development and application of extended parameterization techniques for improved de-aliasing of short-term mass variations. One of the approaches investigated was the application of variance-covariance information from mutual differences of an ensemble of five different ocean tide models. Despite the fact that the MAGIC constellation already provides substantial self-de-aliasing capabilities, the consideration of ocean tide model uncertainties during Level-2 gravity field recovery leads to significantly reduced tidal aliasing errors. We investigated the self-de-aliasing capabilities of the MAGIC constellation by co-estimating the eight major tidal constituents over a time span of only one year, including the ocean tide model uncertainty information, with the goal of improving ocean tide models for de-aliasing purposes during gravity field recovery. Results demonstrate that the tidal constituents can be estimated with significance, resulting in an improved ocean tide model with reduced model error characteristics.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: A.03.05 - POSTER - Opportunities and challenges for global monitoring of photosynthesis from space

In recent years there has been rapid progress in understanding the dynamics of photosynthesis from space with the advancement of remote sensing observatories that enable the retrieval of solar-induced chlorophyll fluorescence (SIF). The remotely sensed SIF signal has demonstrated potential in detecting and monitoring plant stress and gross primary productivity (GPP) across various vegetation types and spatial and temporal scales. The upcoming FLuorescence EXplorer mission (FLEX) aims at providing new insights into the carbon cycle of terrestrial ecosystems by means of its unique measurements and products. In this session we welcome contributions from the scientific and user community linked to the consolidation of the link between SIF and carbon balance, carbon cycle studies, agriculture, forestry, food security and crop production. Updates on recent modeling activities coupling remote sensing signals and ecosystem processes are also welcome, including the integration of products from active/passive research and operational EO missions.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Assimilating S3 and S5P products into a prototype model for estimating terrestrial carbon fluxes from combined data streams.

Authors: Pablo Reyes-Muñoz, Dr. Jochem Verrelst
Affiliations: University Of Valencia
Carbon fluxes from photosynthesis and respiration in terrestrial ecosystems determine carbon sinks on land, which are key elements of the global carbon budget and influence Earth's climate. At the same time, new-generation satellites from the Copernicus program are aimed at long-term monitoring of the environment, with a special focus on climate change. The Sentinel-3 (S3) mission, which measures radiation along optical and near-infrared spectral bands through the OLCI instrument, is excellent for capturing global vegetation dynamics due to its optimal balance between spatial resolution and revisit time. In parallel, Sentinel-5P (S5P), designed to measure trace gases through TROPOMI (the TROPospheric Monitoring Instrument), provides high spectral resolution data around the red peak at 743-758 nm, linked to solar-induced fluorescence (SIF) emitted during photosynthesis as residual energy. The combination of these missions offers unique features for capturing terrestrial carbon flux (TCF) dynamics. In this work, we processed combined data streams from S3 and S5P using Gaussian process regression (GPR), a Bayesian machine learning algorithm, to estimate TCFs and study the predictive capacity of different S3 and S5P vegetation products. The GPR model, implemented in Google Earth Engine, paves the way for efficient processing of combined data streams in cloud computing services. The global consistency of the GPR-based TCF products was evaluated against benchmark products, namely the MOD17A2H gross primary productivity (GPP) and net primary productivity (NPP), the LPJ-GUESS GPP product and the FLUXNET partition GPP product. Intra-annual temporal correlation (year 2019) between the mentioned products showed the best agreement with MOD17A2H, with modal R values above 0.9 and RMSE below 0.5 Kg C m⁻² y⁻¹. Correlation with the FLUXNET partition GPP products varied by vegetation type, with R² over 0.6 and RMSE below 1 Kg C m⁻² y⁻¹ in most vegetation types.
Additionally, an alternative GPR model trained with ground data and estimating TCFs as a function of leaf area index (LAI) and 14 climate variables, including soil variables, yielded similar results. This suggests that integrating S3 and S5P data streams can capture a comprehensive set of climate information influencing photosynthesis. The findings of this study pave the way for a new synergistic approach to utilising data sets from upcoming satellite missions. This is particularly promising in view of the upcoming FLEX mission, which enhances current sensor features at an unprecedented spatial resolution that will allow better capture of SIF over diverse land cover, leading to improved global estimations of TCFs. The developed workflow for processing S3 and S5P may serve as a prototype for the future S3 and FLEX tandem, as well as an innovative way to take advantage of the new capacities offered by cloud computing services.
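As a minimal sketch of the GPR machinery underlying such products (written here in plain numpy rather than the authors' Google Earth Engine implementation, and with synthetic stand-ins for the S3/S5P predictors):

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-pixel predictors (e.g. an OLCI vegetation
# index and a TROPOMI SIF value) and a synthetic "GPP" target
X = rng.uniform(0, 1, size=(80, 2))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1]

# Exact GP regression via Cholesky factorization of the kernel matrix
K = rbf(X, X) + 1e-6 * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# Predict at 5 new points: GPR yields a mean and a variance per estimate,
# which is how such Bayesian products can carry per-pixel uncertainties
Xs = rng.uniform(0, 1, size=(5, 2))
Ks = rbf(Xs, X)
mean = Ks @ alpha
v = np.linalg.solve(L, Ks.T)
var = np.clip(np.diag(rbf(Xs, Xs)) - (v ** 2).sum(axis=0), 0.0, None)
```

The predictive variance is the practical advantage of GPR over many other regressors in this setting: every flux estimate comes with its own uncertainty.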
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Improving Global Primary Production Monitoring Through Microwave Remote Sensing in a Machine Learning Framework

Authors: Moritz Müller, Dipl. Raul Lezameta, Univ.Ass.in Dipl.-Ing.in Ruxandra-Maria Zotta, Projektass. Dr. MSc. Pierre Laluet, Univ.Prof. Dr.rer.nat. Wouter Arnoud Dorigo
Affiliations: TU Wien
Gross Primary Production (GPP) serves as a critical indicator of ecosystem function and its response to climate change. While numerous GPP estimates from different sources (model data, satellite derived, flux tower) exist, discrepancies between these datasets remain, emphasizing the need for new datasets and continuous improvement, particularly in understanding carbon-climate feedbacks and ecosystem resilience. By refining the VODCA2GPP product [Wild et al., 2022], which estimates GPP using Vegetation Optical Depth (VOD) from microwave remote sensing, we present methodological improvements developed within the GLANCE project that incorporate three major advancements. First, we integrate land cover information to improve model generalizability across different biomes, addressing the imbalanced distribution of in-situ validation stations and enhancing the model's ability to capture ecosystem-specific carbon uptake patterns. Second, we incorporate soil moisture data to account for water availability, which is the primary constraint on vegetation productivity in many biomes and is particularly crucial for understanding drought responses and ecosystem stress. Third, we utilize VODCA L-band observations to better capture biomass accumulation patterns, as L-band measurements penetrate deeper into the vegetation canopy, providing improved sensitivity to vegetation structure and water content. Additionally, we transitioned from X-band to Ku-band VOD observations due to their superior signal-to-noise ratio in Mediterranean regions - the geographic focus of the GLANCE project. This change in input data enhances data quality while maintaining the model's overall performance. The enhanced model was validated using an expanded set of in-situ measurements, including data from WARM Winter and CH4 datasets, which significantly extends our validation capabilities across different climatic zones and ecosystem types. 
Validation against FLUXNET in situ measurements and comparisons with leading datasets, including MODIS, FLUXCOM, and the TRENDY-v7 ensemble, demonstrate significant improvements in model accuracy and reliability, particularly in regions with sparse ground observations. This updated VODCA2GPP version offers a valuable resource for analyzing global vegetation dynamics, enabling better monitoring of ecosystem responses to environmental change and improving our understanding of the terrestrial carbon cycle. This research has been funded through the EOWAVE and ESA xECV GLANCE projects. Reference: Wild, B., Teubner, I., Moesinger, L., Zotta, R.-M., Forkel, M., and Dorigo, W. A.: VODCA2GPP – a new, global, long-term (1988–2020) gross primary production dataset from microwave remote sensing, Earth System Science Data, 14(3), 1063–1082, 2022, doi: 10.5194/essd-14-1063-2022.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Improving the simulation of cropland CO2 fluxes using a modified vegetation photosynthesis and respiration model

Authors: Henri Hassan Bazzi, Philippe Ciais, David Makowski, Nicolas Baghdadi, Diego Santaren
Affiliations: AgroParisTech, UMR TETIS, Laboratoire des Sciences du Climat et de l’Environnement, UMR 1572 CEA-CNRS-UVSQ, Université Paris-Saclay, Université Paris-Saclay, AgroParisTech, INRAE, UMR 518 MIA Paris-Saclay, INRAE, UMR TETIS
Croplands represent 12% of the global vegetated area and contribute 16% of the global Net Ecosystem Exchange (NEE). Due to agricultural intensification from irrigation, fertilizers, and better cultivars, cropland NEE increased by 25% between 2003 and 2019. Better monitoring of CO₂ fluxes in croplands can help better understand the drivers of crop yields, analyze the impacts of climate change on cropland productivity, and inform policies for improving the C budget for croplands. Remotely sensed data-driven models, such as SAFY-CO₂ and SAFY-WB, are commonly used to simulate NEE and its two main components, Gross Ecosystem Exchange (GEE) and Ecosystem Respiration (RECO), for croplands. These models perform well at individual sites after calibration but may have limitations for large-scale applications. The Vegetation Photosynthesis Respiration Model (VPRM) is one of the data-driven models used to estimate cropland NEE, GEE and RECO. One advantage of this model is that it includes a limited number of parameters that can be optimized using flux tower data. VPRM is driven by MODIS-derived vegetation indices (the Enhanced Vegetation Index, EVI, and the Land Surface Water Index, LSWI) at 500 m spatial resolution, in addition to temperature and radiation climate data, to describe the light-dependent part (GEE) of the NEE. The ecosystem respiration flux is described by a simple function of air temperature. In the European Union, 38% of the total land area is cropland, and the heterogeneous features of crop systems in Europe, with a majority of small to medium-sized agricultural holdings and a diversity of crop rotations, require high-resolution information to estimate cropland NEE. Field sizes in Europe are either small (between 0.64 and 2.56 ha) or medium (between 2.56 and 16 ha), and according to European farmland statistics, about 63.8% of European agricultural holdings are less than 5 ha.
In addition to the use of MODIS data (500 m) in the VPRM, croplands are treated as a single vegetation class regardless of the specific crop type and the inter-crop variation effects on NEE, GEE and RECO. However, the complex cropping system in Europe makes the quantification of cropland NEE and GEE challenging under the assumption of a single crop type. In this study, we modified the Vegetation Photosynthesis and Respiration Model (VPRM) to estimate cropland CO₂ fluxes at higher spatial resolution, adapting it to the European cropping system. Four main modifications to the VPRM were proposed. The first replaces the MODIS vegetation indices with Sentinel-2 (10 – 20 m) indices for higher-resolution flux mapping. The second adds an explicit soil moisture function in the GEE equation to account for water stress in the CO₂ fluxes. The third replaces the linear relationship between RECO and temperature by expressing RECO as a function of biomass (GEE), temperature, and soil moisture. The final modification proposes a crop-specific optimization of the modified VPRM (MVPRM) to obtain crop-type parameters, contrary to the conventional lumping of all crop types into one set of parameters. The MVPRM was calibrated using eddy covariance measurement sites for 11 different crop types. Several scenarios for parameter optimization were evaluated and compared, including the use of MODIS or Sentinel-2 vegetation indices, a generic set of parameters for all crop types vs. a different set per crop type, and a set of parameters per individual site. The objectives of this study are, first, to compare the performances of the standard and modified VPRM versions using MODIS and S2 vegetation indices for simulating CO₂ fluxes over cropland sites in Europe; second, to assess the performances using a default generic parameterization for all crop types vs.
crop-specific parameterization of the VPRM; and finally, to propose new crop-specific VPRM parameterizations for simulating CO₂ fluxes for different crop types. Results primarily showed that S2 data simulated the CO₂ fluxes better than MODIS data, leading to a root mean squared error (RMSE) for NEE of less than 3.5 μmol/m²/s with S2 compared to 5 μmol/m²/s with MODIS, especially during the peak growing season, where MODIS underestimated the CO₂ fluxes for most crop types. The addition of the soil water content function as a driver of GEE enhanced both GEE and NEE estimates. The MVPRM for GEE mainly decreased the difference between the measured and simulated standard deviations, better capturing fluctuations in GEE, which in turn were mainly related to daily/weekly variations caused by drought events and water stress in croplands. The comparison between the initial VPRM equation for RECO and the modified VPRM showed that a simple model with a linear relation between ecosystem respiration and temperature underestimates the amplitude of RECO at crop sites and is not appropriate for applications concerned with the seasonal cycle of global CO₂ and the annual carbon balance. Adding both a productivity dependence (GEE) and a soil water content dependence to the RECO equation in VPRM enhanced the description of RECO. Finally, findings demonstrated significantly better performance when the parameters are optimized per crop type instead of for all crop types lumped together, with lower RMSE and Akaike information criterion (AIC), despite a larger number of parameters. Combined with the availability of crop-type land cover maps, the use of S2 data and the crop-type-specific VPRM parameterization presented in this study provide a step forward for upscaling cropland carbon fluxes at the European scale.
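The model structure described above can be sketched in a few lines. The light-dependent GEE term follows the published VPRM form (Mahadevan et al. 2008); the modified respiration function below is only an illustrative stand-in for the abstract's idea of adding GEE and soil-moisture dependence, with assumed functional form and parameter values.

```python
def gee_vprm(evi, par, t_scale, p_scale, w_scale, lam=0.2, par0=570.0):
    """Light-dependent VPRM flux (published form):
    GEE = lambda * Tscale * Pscale * Wscale * EVI * PAR / (1 + PAR/PAR0)."""
    return lam * t_scale * p_scale * w_scale * evi * par / (1.0 + par / par0)

def reco_standard(t_air, alpha=0.2, beta=0.5):
    """Original VPRM respiration: linear in air temperature."""
    return alpha * t_air + beta

def reco_modified(t_air, gee, soil_moisture,
                  alpha=0.15, gamma=0.1, delta=0.8, beta=0.3):
    """Illustrative modified respiration adding a productivity (GEE) and a
    soil-moisture dependence, as described in the abstract; the exact
    functional form and all parameter values here are assumptions."""
    return alpha * t_air + gamma * abs(gee) + delta * soil_moisture + beta

# Example evaluation for one half-hourly record (toy inputs)
g = gee_vprm(evi=0.5, par=400.0, t_scale=1.0, p_scale=1.0, w_scale=1.0)
r_std = reco_standard(20.0)
r_mod = reco_modified(20.0, g, soil_moisture=0.3)
```

Because the modified RECO responds to productivity and soil moisture, it can reproduce the larger respiration amplitude that a purely temperature-linear model misses at crop sites.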
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Opportunities regarding a network of SIF-capable sensors alongside eddy covariance towers

Authors: Simon De Canniere, Luis Alonso, Tommaso Julitta, Andreas Burkart, Bert Gielen, Dario Papale, Marco Celesti
Affiliations: Antwerp University, Mediterranean Center of Environmental Studies (CEAM), JB hyperspectral, Centro Euro-Mediterraneo sui Cambiamenti Climatici, ESA ESTEC
Sun-induced chlorophyll fluorescence (SIF) is an increasingly relevant remote sensing signal that serves as a proxy for photosynthesis. ESA dedicated its 8th Earth Explorer mission, the FLuorescence EXplorer (FLEX), to it. At ground level, tower-based SIF measurements provide the full diurnal cycle, offering the possibility to reveal sub-daily dynamics of photosynthesis. SIF provides complementary information to eddy covariance (EC), as SIF is sensitive to photosynthetic electron transport, while EC is sensitive to photosynthetic carbon uptake. Neither is a direct measurement of photosynthesis, and both pose problems when interpreted in terms of photosynthesis. EC fluxes might be hard to translate to photosynthesis, as they are also affected by respiration, and because the EC footprint changes as a function of wind conditions. SIF interpretation is hampered by the lack of understanding regarding the SIF-photosynthesis relationship, as well as by radiative transfer problems associated with the canopy structure. The SIF-photosynthesis relationship is highly non-linear, as non-photochemical quenching regulates the amount of absorbed energy that is dissipated before being used in photosynthesis. The canopy structure affects SIF as leaves scatter far-red light around the canopy, making the upscaling between the leaf and canopy scale non-linear. SIF data quality varies with illumination conditions, where blue-sky days lead to higher quality SIF data compared to partially cloudy days. The combined use of EC and SIF information would lead to a more robust determination of actual photosynthesis. With the number of SIF observations alongside EC measurements growing, there is great potential for linking SIF to photosynthesis. A proper intercomparison of ground-based SIF alongside EC towers has the potential to unravel the wide variety of photosynthetic reactions to environmental conditions at each ecosystem.
SIF can be especially interesting in the light of flux partitioning, as it is not affected by respiration. In the light of the upcoming FLEX mission, SIF is an especially promising signal to upscale the photosynthetic knowledge obtained from EC-based studies, while EC-SIF sites provide a needed reference for FLEX cal/val activities. For SIF and EC to be jointly studied at different sites, a few hurdles need to be overcome. These are linked to calibration, sampling strategies, atmospheric effects and differences in retrieval methods, albeit the latter two can be accounted for during post-processing. A joint effort by EC and SIF experts will be required to benefit maximally from the increasing availability of SIF data. As such, there lies a great opportunity in constructing a joint database consisting of tower-based SIF and EC data.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Unraveling the Nonlinear Dynamics of Photosynthesis and Fluorescence under Stress

Authors: Cynthia Chardi Raga, Dra. Shari Van Wittenberghe, Dr. Adrián Moncholí-Estornell, Sara Pescador-Dionisio, Clara García-Martínez, Eva Neuwirthová, Dra. Inmaculada García-Robles, Dra. Carolina Rausell-Segarra, Dr. Sergio González-Nebauer, Dr. José Felicísimo Moreno-Méndez, Dra. M. Pilar Cendrero-Mateo
Affiliations: Imaging Processing Laboratory, University Of Valencia, Department of Genetics, University Of Valencia, Plant production department, Valencia Polytechnic University.
Photosynthetic efficiency is a critical determinant of plant growth and productivity. While fluorescence-based remote sensing provides valuable insights into photosynthetic processes, the relationship between fluorescence and photosynthetic efficiency is often assumed to be linear. This study challenges this assumption and explores the nonlinear dynamics of these processes under various stress conditions. We conducted a series of experiments on several crop species, including Zea mays (maize), Triticum aestivum (wheat), Camelina sativa (camelina), Solanum lycopersicum (tomato), and Hordeum vulgare (barley), subjecting them to different levels of light, water, and nitrogen stress. Tomato and barley were also measured at different time points to obtain a temporal series. By simultaneously measuring gas exchange and chlorophyll fluorescence, we investigated the interplay between photosynthetic, non-photochemical quenching (NPQ), and fluorescence yields. Our results reveal a complex, non-linear interplay between these parameters. Three response phases were identified: light-damage, light-protection, and light-usage phases. NPQ capacity significantly modulates plant stress responses, determining the duration and magnitude of distinct response phases. Plants adapted to low and intermediate light exhibited shorter light-protection and light-usage phases but longer light-damage phases than those adapted to high light. When comparing control plants to those subjected to water or nitrogen deficit, significant differences in fluorescence yield magnitudes emerged, suggesting that these variations were primarily attributable to the treatments rather than light intensity alone. A closer examination of light curves revealed a common pattern: water-stressed plants displayed shorter and more rapid recovery phases compared to nitrogen-deficient plants.
Additionally, substantial differences were observed between control and treatment groups, particularly in terms of the duration and speed of the recovery phases. Throughout the time series, both the duration and magnitude of these phases fluctuated significantly, highlighting the dynamic nature of plant responses to stress. These findings highlight the importance of considering nonlinear dynamics in interpreting fluorescence-based remote sensing data. By understanding the factors influencing the nonlinear relationship between fluorescence and photosynthesis, we can improve the accuracy of remote sensing techniques for monitoring plant health and productivity.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Fluorescence combined with Spectral Unmixing Using HyPlant Airborne Data of an alfalfa field

Authors: Ana Belen Pascual-Venteo, Adrian Pérez Suay, Adrian Moncholi, Bastian Siegmann, Shari Van Wittenberghe
Affiliations: Universitat De València, Institute of Bio- and Geosciences, Plant Sciences (IBG-2), Forschungszentrum Jülich GmbH
Current and upcoming vegetation imaging spectroscopy satellites are set to provide a valuable new data stream that will enhance existing remote sensing products and facilitate the creation of new ones. The sensors aboard ESA’s Fluorescence Explorer (FLEX) will span the full 500–780 nm range, specifically designed to monitor the partitioning of photosynthetic energy by focusing on key pigments involved in light reactions. To accurately quantify photosynthetic efficiency, it is essential to understand the dynamic absorption behavior of these pigments, as it complements the fluorescence data. The HyPlant airborne imaging spectrometer was created as the primary reference instrument and demonstrator for the FLEX satellite mission. It was the first airborne sensor specifically optimised to capture solar-induced chlorophyll fluorescence (SIF) by means of two oxygen absorption features of the electromagnetic spectrum, located at 687 nm (O2-B) and 760 nm (O2-A). Our study is based on data collected from a campaign carried out in an alfalfa field in La Cendrosa, north-east Spain, where a fluorescence box (FloX) allows continuous monitoring of chlorophyll fluorescence and captures high-resolution spectral data. This site provided a unique opportunity to explore the interactions between carbon uptake and environmental conditions in a controlled environment. At the same time, we used HyPlant airborne data to enhance our analysis, allowing real-time evaluation of the performance of our unmixing models. By employing advanced models such as Constrained Least Squares (CLS), Potential Unmixing (POT), and Bilinear Unmixing (BIL), we can decompose complex spectral signatures and differentiate the contributions of various vegetation types and soil components. Our results demonstrate promising accuracy, with a root mean square error (RMSE) of approximately 0.14 for the Potential Unmixing model, 0.16 for Bilinear Unmixing, and 0.18 for Constrained Least Squares.
While BIL and CLS exhibited slightly higher errors compared to POT, they offered more consistent results, effectively avoiding overfitting of the original endmembers. These findings highlight the importance of model selection in spectral unmixing and reveal the potential of these techniques to better understand the photosynthesis process. In conclusion, integrating a pigment-based spectral unmixing of the estimated absorbance from satellite observations provides significant improvements in disentangling the different fractions of absorbed photosynthetically active radiation. The insights gained from this research not only lay a solid foundation for the FLEX mission but also contribute to broader efforts in carbon cycle research and monitoring.
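The CLS, POT, and BIL formulations compared above are not spelled out in the abstract; purely as a hedged illustration of the constrained least squares idea, the following Python sketch solves a fully constrained unmixing (non-negative, sum-to-one abundances) by appending a strongly weighted sum-to-one row to a non-negative least squares problem. The endmember values are synthetic, not campaign data.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(spectrum, endmembers, weight=1e3):
    """Fully constrained least squares unmixing (sketch).

    spectrum:   (bands,) mixed-pixel spectrum
    endmembers: (bands, n_endmembers) endmember matrix
    Non-negativity comes from NNLS; the sum-to-one constraint is
    enforced softly via a heavily weighted extra equation.
    """
    n_em = endmembers.shape[1]
    A = np.vstack([endmembers, weight * np.ones((1, n_em))])
    b = np.concatenate([spectrum, [weight]])
    fractions, _ = nnls(A, b)
    return fractions

# Synthetic two-endmember example: a 60/40 mixture is recovered exactly.
em = np.array([[0.10, 0.80],
               [0.20, 0.60],
               [0.30, 0.40],
               [0.50, 0.20]])
mixed = em @ np.array([0.6, 0.4])
est = fcls_unmix(mixed, em)
rmse = float(np.sqrt(np.mean((mixed - em @ est) ** 2)))
```

The weighted extra row is a common implementation trick: raising `weight` tightens the sum-to-one constraint at the cost of conditioning.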
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Studying Spatial Variability of Solar-Induced Chlorophyll Fluorescence (SIF) and Its Relationship with Gross Primary Production (GPP) in Castelporziano Forest

Authors: Adnan Yousaf, Dr. Simone Sabbatini, Dario Papale
Affiliations: Department of Innovation in Biological, Agri-food and Forest Systems (DIBAF), University of Tuscia, 01100, Euro-Mediterranean Center on Climate Change (CMCC), IAFES Division, 01100
Solar-Induced Chlorophyll Fluorescence (SIF) is a reliable proxy for gross primary production (GPP) across spatial and temporal scales. However, the dependency of top-of-canopy SIF measurements on illumination and observation geometries remains a key factor influencing the SIF-GPP relationship. Angular effects in SIF measurements can introduce significant uncertainty, which hinders accurate GPP quantification and its integration into carbon cycle models. While satellites now use SIF to monitor plant photosynthesis on a global scale, ground-based validation at finer spatial and temporal resolutions is crucial for improving the interpretation of satellite-derived SIF. In this study, a high-resolution in-situ FLoX sensor equipped with a Pan-Tilt Unit (PTU) is installed on a newly established eddy covariance tower over a Pinus pinaster L. harvested forest in Castelporziano, Italy. The primary objective of this research is to investigate the relationship between SIF and GPP using high-resolution measurements combined with eddy covariance data. Complementary periodic leaf-level measurements will be conducted to bridge the gap between canopy-level SIF observations and biochemical processes, providing critical insights for validating and refining the SIF-GPP relationship across spatial and temporal scales. Before installation, the FLoX sensor was tested on a homogeneous grass surface to evaluate the influence of sun-target-sensor geometry on SIF spatial variability across different pan and tilt angles. The FLoX system, comprising two spectrometers (FULL: 400-900 nm and FLUO: 650-800 nm), captures high-resolution SIF in the red and far-red spectral regions. The spectrometers have a 25° field of view to measure upwelling radiance, while cosine receptors with a 180° opening angle record downwelling irradiance. 
The PTU enables multi-angular measurements, covering horizontal pan positions from -30° to 30°, and vertical measurements start at 0° (nadir) and progress through tilt angles of 10°, 20°, and 30°. At each tilt angle, four pan positions are measured, totalling 12 observations per cycle. Each measurement lasts one minute, after which the system resets to the 0° nadir position. SIF was retrieved using the improved Fraunhofer Line Discrimination (iFLD) method, ensuring precise fluorescence signal detection and robust data for analyzing the impact of viewing geometry on SIF variability. This study is expected to improve our understanding of angular effects on SIF measurements, enhance SIF-based GPP estimation accuracy, and facilitate the validation of satellite-derived SIF datasets, ultimately contributing to better modeling of photosynthetic activity and carbon fluxes.
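The study retrieves SIF with the iFLD method; its instrument-specific correction factors are beyond a short sketch, but the underlying Fraunhofer Line Discrimination principle at an O2 absorption band can be illustrated in Python (a minimal sketch with synthetic numbers, not the iFLD implementation used here):

```python
def fld_sif(e_out, e_in, l_out, l_in):
    """Standard Fraunhofer Line Discrimination (FLD).

    e_out, e_in: downwelling radiance (irradiance / pi) outside and
                 inside the absorption band, mW m-2 nm-1 sr-1
    l_out, l_in: upwelling radiance outside and inside the band
    Solves L = R*E + F assuming reflectance R and fluorescence F are
    constant across the band; iFLD relaxes this assumption with
    spectral correction factors.
    """
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

# Synthetic check: with truly constant R and F the retrieval is exact.
R, F = 0.4, 1.5
e_out, e_in = 120.0, 40.0            # strong in-band O2-A absorption
l_out, l_in = R * e_out + F, R * e_in + F
sif = fld_sif(e_out, e_in, l_out, l_in)   # -> 1.5
```

Because fluorescence fills in the absorption line, the in-band upwelling radiance is proportionally larger than pure reflection would allow, which is exactly what the two-equation system exploits.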
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: RStoolbox: An R package for Remote Sensing Data Analysis

Authors: Konstantin Müller, Jakob Schwalb-Willmann, Dr Martin Wegmann, Dr Benjamin Leutner
Affiliations: University of Würzburg, The Landbanking Group
The application of satellite remote sensing has shown continuous growth over the past few decades. With global change, processes such as natural disasters, urban growth and land transformation have become increasingly visible on the Earth's surface within the few decades that Earth has been observed by remote sensing satellites. Since most of these data are freely available, there is a need for open-source tools and easy-to-apply methods to work with them and transform data into useful information. Hence, well-implemented methods for data analysis and data processing are key to environmental research applications using remote sensing data. With our open-source R package RStoolbox, we serve this need by providing a methods-rich, highly versatile pipeline for data pre-processing, analysis and visualization. Offering seamless integration into other state-of-the-art R packages for spatial data processing such as terra or ggplot2, RStoolbox has found wide acclaim in the remote sensing community, with over 180k downloads to date. Moreover, the package has been designed with efficiency in mind by implementing many of the supplied algorithms in C++. Preprocessing: While many remote sensing data providers offer pre-processed, ready-to-use data, there are cases in which it is useful to be able to pre-process remote sensing base products locally. For example, pre-processed data are not always available for every region on Earth. In addition, pre-processing may differ from regional data provider to provider. RStoolbox offers multiple pre-processing methods to prepare remote sensing data for data analysis. These consist of radiometric conversion, topographic illumination correction, atmospheric correction as well as cloud and cloud-shadow masking. Additionally, it includes histogram matching to generate comparable image data across acquisitions, which is helpful for mosaicking. Analysis and Visualization: RStoolbox offers various analysis methods for remote sensing data.
Spectral indices are widely used to ease data interpretation and serve as predictors in remote-sensing-supported modelling. RStoolbox implements a multitude of established spectral indices, including vegetation indices such as the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI), or indices suitable to indicate other surface characteristics such as the Normalized Burn Ratio Index (NBR) and the Modified Normalized Difference Water Index (MNDWI). Additionally, we give the user the possibility to easily integrate their own spectral indices, which can be piped through the C++ backend of RStoolbox. Moreover, we provide dimensionality reduction techniques such as Principal Component Analysis (PCA) or Tasseled Cap Transformations (TCT) with pre-defined parameters for multiple multispectral sensors. For subsequent analyses, we contribute functions for fitting or training unsupervised and supervised classification and regression models using statistical models and Machine Learning. By integrating with the caret package, users can choose between many modelling algorithms such as Random Forest, xgboost or Support Vector Machines. For sub-pixel analyses, RStoolbox provides functions for Spectral Mixture Analyses, e.g. to calculate fractions of surface types or materials per pixel using a Multiple Endmember Spectral Mixture Analysis (Spectral Unmixing). Lastly, our package offers functions to identify spatial, temporal and spectral patterns, implementing Change Vector Analysis (CVA), Fractional Cover Mapping, and Spectral Angle Mapping. RStoolbox is not only useful for scientific analysis but has also proven valuable for educational purposes. In teaching, RStoolbox can be used by lecturers and students to jointly explore methods and concepts by applying them live. 
In addition, the open-source nature of the package allows students to learn from code, motivating them to start developing and implementing their own solutions to research questions in Earth observation and geo-analysis. Packages like RStoolbox itself have profited from this already: Through our international Applied Earth Observation of the Living Environment (EAGLE) degree program within the Earth Observation Research Cluster (EORC) at the University of Würzburg, Germany, ideas and features have been contributed directly by young students. This not only enhances the students' learning about good open-source software but also benefits the remote sensing community.
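RStoolbox is an R package, so the following Python snippet is only a language-neutral illustration of the arithmetic behind a spectral index such as NDVI, not RStoolbox code:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index, (NIR - Red)/(NIR + Red);
    eps guards against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense canopies reflect strongly in the NIR and absorb red light,
# so healthy vegetation yields values approaching 1.
dense = ndvi(0.45, 0.05)    # ~0.8
sparse = ndvi(0.20, 0.15)   # ~0.14
```

Because the function is pure NumPy, it works equally on scalars and on whole band rasters, which is the same band-algebra pattern RStoolbox accelerates through its C++ backend.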
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Assessment of the coupling between SIF and ecosystem carbon fluxes during periods of rapid vegetation functioning shifts triggered by rain pulses

Authors: Dr. Ana López-Ballesteros, Dr. María Pilar Cendrero-Mateo, Dr. Adrián Moncholí-Estornell, Álvaro Doblas-Rodrigo, Dr. Arnaud Carrara, Dr. José Javier Peguero-Pina, Dr. Shari Van Wittenberghe
Affiliations: Agri-Food Research and Technology Centre of Aragon (CITA), Laboratory of Earth Observation, Image Processing Laboratory, University of Valencia, Mediterranean Center for Environmental Studies (CEAM)
One of the most important challenges in remote sensing of vegetation is the quantification of actual photosynthetic activity of plants. Recent studies have shown the potential of combining sun-induced fluorescence (SIF) and energy-dissipating mechanisms such as non-photochemical quenching (NPQ), derived from vegetation reflected radiance, to assess the impact of abrupt climate changes on photosynthesis. In Mediterranean ecosystems, vegetation functioning is strongly constrained by water availability, which presents strong temporal variations at both seasonal and interannual scales. In that context, rain pulses, defined as discrete precipitation events occurring during drought periods, can rapidly activate plant photosynthesis after an inactive period due to water stress. This phenomenon has been directly measured through leaf-, plant- and ecosystem-scale CO2 exchange in several water-limited ecosystems and drought-tolerant plant species. However, the link between gas exchange and SIF-NPQ responses during these rapid changes in vegetation functioning has not been deeply explored, especially in observational field studies. The main objective of this study is to evaluate the potential of seasonal and diurnal SIF-NPQ dynamics to track the short-term vegetation response to rain pulses. To do so, we collected tower-based spectroscopy data with the FloX system at canopy level next to the Manzanera eddy covariance station (ES-Mnz, FLUXNET) established in May 2023. The site is characterized by a dense holm oak woodland located in central-eastern Spain under Mediterranean climatic conditions with a continental influence. At this site, turbulent carbon and water fluxes at ecosystem scale are continuously measured by means of the eddy covariance technique together with a standard suite of meteorological and soil microclimate variables. In addition, point-based top-of-the-canopy (TOC) fluorescence and reflected radiance are monitored.
Preliminary analysis shows that rain pulses can provoke a net CO2 uptake four days after precipitation occurs, indicating a remarkable increment in gross primary productivity (GPP). We hypothesize that the recovery of photosystem II activity may occur before GPP responds causing a shift in the SIF-NPQ-GPP relationship. This ongoing work will shed light on the SIF-NPQ seasonal and diurnal dynamics and its capabilities and limitations to track the GPP patterns in Mediterranean ecosystems that greatly influence interannual variability of global terrestrial carbon sink. Acknowledgements: PID2022-137022OB-C32, PID2022‐136478OB‐C32, IJC2020-045630-I, TED2021-129499A-I00, from MCIN/AEI/10.13039/501100011033 and EU Next Generation EU/PRTR; ERC-2021-STG-101041768; Research Group S74_23R from DGA; CISEJI/2023/48.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: A Lightweight SIF-Based Crop Model for Predicting Crop Yields (Australia Wheat)

Authors: Jinru Xue, Professor Alfredo Huete, Professor Xiaoliang Lu
Affiliations: University Of Technology Sydney, Northwest A&F University
As Australia’s primary staple and export crop, wheat necessitates reliable yield mapping to ensure timely alerts about food insecurity. Conventionally, crop yields are estimated using either process-based or statistical models, but both face challenges in large-scale application due to the extensive data required. Recent studies have shown that the gross primary production (GPP) of plants can be mechanistically estimated from the fraction of open PSII reaction centers (qL), solar-induced chlorophyll fluorescence (SIF), and readily accessible meteorological datasets including air temperature (Tair), dew-point temperature (DEW), and soil water content (SWC). qL can be modeled as a hyperbolic function of SIF and Tair. Along with these theoretical advances, the resolution of satellite SIF has greatly improved, boosting the potential for accurate large-scale crop yield estimation. In this study, we develop a SIF-based lightweight crop model which uses qL and SIF to track crop GPP. This approach allows for a direct mechanistic estimation of GPP without the need to explicitly account for numerous complex agro-climatic processes. We apply this model to estimate Australian wheat yields from 2019 to 2022. The model exhibits strong predictive power, explaining 86% of wheat yield variance at the regional level (RMSE: 91 kilotons, rRMSE: 7.24%) and 91% at the state level (RMSE: 1509 kilotons, rRMSE: 14.13%). Australian wheat yields exhibit a positive correlation with soil water content and vapor pressure deficit (VPD) when VPD remains below 0.80 kPa. However, the correlation turns negative once VPD exceeds this threshold. We also identify the main sources of error in estimating wheat yield as: (1) inaccuracies in estimating the harvested area of wheat, and (2) the relatively low spatial resolution of current satellite SIF data.
Our model, with its lightweight design and its ability to mechanistically estimate crop photosynthetic CO2 assimilation, offers a promising, novel framework for practical, large-scale crop yield mapping.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Chlorophyll fluorescence and the xanthophyll cycle: unlocking early stress signals for scalable crop monitoring

Authors: Sara Pescador Dionisio, Dra. M. Pilar Cendrero-Mateo, Dr. Adrián Moncholí-Estornell, Sergio G. Nebauer, María Jesús Rodrigo, Alexander V Ruban, Jose Moreno, Dra. Inmaculada García-Robles, Dra. Shari Van Wittenberghe
Affiliations: University Of Valencia, Institute of Agrochemistry and Food Technology (IATA-CSIC), Polytechnic University of Valencia, Queen Mary University of London
Early stress detection of crops, dealing with the complexity of potential triggers and the further applicability to large monitoring scales, requires a comprehensive and bottom-up approach to achieve a thorough understanding of the signals showing the very first symptoms of the alterations in the photosynthetic light reactions. Chlorophyll fluorescence emission is one of the most direct measures of plant photosynthetic activity. However, fluorescence efficiency alone is insufficient to detect early stress or photosynthetic downregulation due to its complex - non-linear and non-constant - relationship with plant photosynthesis. The latter is influenced by the regulation of energy-dissipating mechanisms, such as non-photochemical quenching (NPQ), where specific carotenes and xanthophyll pigments play a pivotal role. Our work focuses on laying the foundation for the remote estimation of early stress combining fluorescence measurements with the detection of the xanthophyll-driven activation of NPQ. We performed a controlled stress experiment in which active and passive fluorescence measurements were taken, and leaves were sampled for pigment and molecular analysis. Applying a spectral fitting algorithm based on reflectance we were able to detect changes affecting the 500-600 nm region that indicate the activation of regulated heat dissipation, consistent with the active fluorescence results and pigment dynamics. Furthermore, we investigated how plants respond upon high light illumination by adapting their carotene and xanthophyll leaf composition, finding that all carotenoids quickly respond to equilibrate the excessive energy. As expected, these pigment dynamics correlate to the trends observed in passively measured fluorescence during NPQ induction. 
Disentangling the role of each pigment in energy dissipation will greatly enhance our knowledge of how plants use energy, which can lead to improvements in food production and help us understand how plants adapt to different climatic conditions. Our next steps will be the upscaling of these advancements to canopy and image level using a climate chamber with which we are able to combine active and passive image spectroscopy measurements in controlled experiments. This future challenge coincides with the upcoming launch of advanced vegetation imaging spectroscopy satellites, such as the European Space Agency's Fluorescence Explorer (FLEX), designed to assess plant health by measuring the fluorescent signal emitted by vegetation through photosynthesis. The presented results demonstrate the current algorithm development aiming for the complex retrieval of photosynthetic energy partitioning based on the key pigment players in the light reactions, using FLEX-like input products.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Detection of Dynamic Antenna Absorption Behavior at Plant Canopy Level Using Automated VNIR Imaging Spectroscopy

Authors: Clara García Martínez, Adrián Moncholí-Estornell, Sara Pescador-Dionisio, Dr. María Pilar Cendrero-Mateo, Cynthia Chardi-Raga, María Jesús Rodrigo, José Moreno, Dr. Shari Van Wittenberghe
Affiliations: University Of Valencia, Institute of Agrochemistry and Food Technology (IATA-CSIC)
In the context of climate change, understanding photosynthesis is essential for assessing CO2 capture and biomass production of forest and agricultural systems. The Fluorescence Explorer (FLEX) mission is planned for this purpose, focusing on investigating and monitoring photosynthesis through fluorescence (F) using the Fluorescence Imaging Spectrometer (FLORIS) and Sentinel-3 synergistically. As a framework for future FLEX observations, significant efforts are required to combine hyperspectral data and fluorescence signals and translate them into meaningful insights regarding the underlying photosynthetic processes. Therefore, our study aims to enhance the understanding of dynamic absorbance adjustments at the whole-plant level under abiotic stress and recovery conditions using automated VNIR imaging spectroscopy. Tomato plants were cultivated in a growth chamber using controlled diurnal light course patterns. In the experiment, we compared control plants with water-deprived plants kept without water for one week. Reflectance dynamics and heat dissipation mechanisms were monitored at the whole plant level using sequential active fluorescence imaging and VNIR hyperspectral scanning, with automated TOC imaging spectroscopy scans performed under dark and steady-state light conditions (PAR300, PAR700, PAR1100, and PAR1500). In addition, leaf-level measurements, including spectroscopy and pigment extraction, were performed. Water deficit triggered a short-term adaptation observed in the pigment pools as increased relative lutein (Lut) and neoxanthin (Neo) pools normalized to Chl a concentration. Meanwhile, normalized zeaxanthin (Zea) accumulation increased under excessive light, regardless of the treatment (control or water deficit). The observed changes in TOC reflectance revealed light acclimation of the plants, suggested to involve both chemical and structural changes in the antenna.
A two-step acclimation of the photosynthetic antenna was observed through shifted absorption towards low-energy wavelengths, both in the green (500-600 nm) and the red-edge (680-750 nm) region. The first step, acclimation from darkness to PAR300, was less treatment-sensitive (control vs drought vs recovery) but more strongly light-dependent. In contrast, acclimation to high light conditions (PAR1100 to PAR1500) revealed slower antenna behavior that was highly treatment-sensitive. The intensity of the spectral features, characterized by a broader ΔA with maxima around 530-540 nm, intensified with the severity of the drought stress. This suggests that a threshold level of excitation energy, sustained over several days, is required to activate these internal responses. Also, water-deprived plants showed an additional, further red-shifted reflectance at both leaf and canopy levels in the red-edge, modifying the reflectance’s first derivative. Rewatering alleviated stress-induced spectral changes, reversing the red shift and revealing reversible and well-regulated TOC dynamics. This was accompanied by a reduced photoprotective pigment pool and restored QY values, confirming effective plant adaptation to rewatering conditions. Although much remains to be understood at the molecular level about the regulation of excessive energy dissipation, our results demonstrate that imaging TOC spectroscopy of progressively quenched states of whole plants can reveal the transient absorption phenomena expected for regulated energy dissipation by the antenna. We believe this offers a promising potential for revealing the causes behind the dynamic reflectance changes related to the downregulation of photosynthesis and for the upscaling of early stress detection at larger monitoring scales.
Add to Google Calendar

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Validating and Unmixing DESIS Sun-Induced Fluorescence (SIF) Over Agricultural Fields: A Comparative Analysis With HyPlant Observations

Authors: Mohammed Hajaldaw, Dainius Masiliūnas, Na Wang, Dr Lammert Kooistra, Jim Buffat, Carlos Camino, Stefan Maier
Affiliations: Laboratory of Geo-Information Science and Remote Sensing, Wageningen University & Research, School of Agriculture, Food and Ecosystem Sciences (SAFES), University of Melbourne, Institute for Advanced Simulation, IAS-8: Data Analytics and Machine Learning, Forschungszentrum Jülich, Joint Research Centre (JRC), European Commission (EC), Remote Sensing Technology Institute, German Aerospace Center (DLR)
Photosynthesis is essential for biomass production as it enables plants to convert sunlight into energy, making its monitoring vital for plant health and agricultural productivity. This process can be better understood using Sun-Induced Fluorescence (SIF), which is a weak electromagnetic radiation emitted by plants during photosynthesis. In recent decades, remote sensing has been used in research to monitor SIF at various spatial and spectral scales, with satellite-level research primarily conducted using atmospheric chemistry satellites that have incidental SIF measurement capabilities. However, the limited spatial resolution of these satellites poses challenges in validating their SIF products as well as accurately unmixing the SIF signal for large-scale agricultural monitoring applications, such as stress detection and crop yield estimation. In contrast, the potential of using recently launched spectroscopy missions with higher spatial resolution, such as the DLR Earth Sensing Imaging Spectrometer (DESIS), for SIF-related applications has not yet been fully explored. This study aims to improve large-scale agricultural monitoring by investigating how the observation scale affects the accuracy of SIF retrieval. In addition, it explores the possibility of using land cover fraction maps to unmix the SIF signal, thereby generating SIF maps for different vegetation types. In this research, DESIS was utilized as a stepping stone to validate SIF scaling and unmixing in preparation for the upcoming FLuorescence EXplorer (FLEX) mission. For validation, hyperspectral data was collected using HyPlant, which is a high-performance airborne imaging spectrometer designed for vegetation monitoring. The study area is located at the agricultural research station Campus Klein Altendorf in Germany, covering an area of approximately 1.76 km² (4.4 km by 0.4 km).
Both DESIS and HyPlant datasets were collected around noon on June 13, 2023, under clear sunny weather conditions at the study site, with a time difference of 39 minutes. While DESIS data has a spatial resolution of 30 m, HyPlant data offers a much finer resolution of 1 m. To assess the impact of the observation scale, two O2A (Oxygen absorption band around 760 nm) DESIS SIF products were compared against their corresponding measurements from HyPlant. The validation map was produced using the Spectral Fitting Method (SFM), while DESIS products were generated using the three Fraunhofer Line Discrimination (3FLD) and Spectral Fitting Method Neural Network (SFMNN) retrieval methods. The comparison involved calculating multiple error metrics including the bias, R-squared (R²), Root Mean Square Error (RMSE) and Normalized Root Mean Square Error (NRMSE) to evaluate the retrieval accuracy, spatial pattern consistency and accuracy across different land cover classes. Regarding SIF unmixing, a framework was developed utilizing the DESIS SFMNN SIF product, and its performance was evaluated against the observed values using the previous error metrics. The framework involves predicting the average SIF value for each land cover class within the mixed pixels using a set of trained random forest (RF) models (one model for each class). Model training involved using 90 m spatial resolution land cover fractions, principal components (PCs), and mixed SIF values as predictors. To prepare the land cover fraction maps, a classification image was first generated from HyPlant data at 1 m resolution using RF classification. Subsequently, binary masks were created for each class, assigning a value of 1 to the target class and 0 to all other classes. These masks were then aggregated to 90 m resolution, with the mean value calculated for each 90 m pixel to produce the fraction maps.
Regarding other predictors, PCs that capture approximately 95% of the total variance were generated by performing principal component analysis on the spectral reflectances and related vegetation indices at 30 m resolution. These PCs along with the SFMNN SIF map, were then aggregated to 90 m resolution to finalize preparing predictors. In contrast, the response variable was derived by aggregating the SFMNN values per class to a pixel size of 90 m using the average. Finally, the data of each class was split into train (80%) and test (20%) subsets, then a RF model was trained on its training subset and validated using the test subset. For model evaluation, HyPlant data was aggregated to 30 m, 90 m and 300 m (FLEX pixel size) resolutions to assess how the model performs when using sensors with varying spatial and spectral resolutions. The initial comparison between DESIS SFMNN and HyPlant SFM SIF maps at 30 m resolution showed a positive correlation, with an R² of 0.59, a bias of 0.00 mW m⁻² nm⁻¹ sr⁻¹, a RMSE of 0.46 mW m⁻² nm⁻¹ sr⁻¹, and a NRMSE of 10.77%. Additionally, the analysis indicated that SFMNN SIF values tended to overestimate their corresponding SFM values in non-fluorescent pixels, while underestimating them in fluorescent pixels. The class-based comparison revealed a strong correlation in the trees class (R² = 0.59), while the non-fluorescent class exhibited a weak correlation (R² = 0.37). Moreover, the bias between the datasets was small across the fluorescent classes, while the non-fluorescent class had a significant bias of 0.33 mW m⁻² nm⁻¹ sr⁻¹. Furthermore, the RMSE and NRMSE metrics were generally low for the grass and trees classes (RMSE = 0.44 mW m⁻² nm⁻¹ sr⁻¹), but higher for the crops and non-fluorescent classes. Overall, these findings suggest that DESIS SIF products have the potential for large-scale agricultural monitoring, though further improvements are needed. 
Regarding the unmixing framework, the preliminary validation results of HyPlant data at 30 m demonstrated strong model performance over crops with high correlation and low errors (R² = 0.89, NRMSE = 6.34%). The grass and trees classes showed moderate performance, with R² values of 0.78 and 0.63, respectively, while the non-fluorescent class showed almost no correlation (R² = 0.05). Across all categories, the NRMSE was less than 10.4% and the biases were small, indicating no significant systematic overestimation or underestimation errors. At 90 m spatial resolution, stronger correlations were observed for the crops and grass classes compared to their 30 m counterparts. In contrast, the trees class experienced a slight decline in its R² value from 0.63 to 0.61, while the correlation in the non-fluorescent class remained the same. Overall, the better performance at 90 m resolution is likely because the RF models were trained at this resolution and are therefore better adapted to that scale. However, given that the primary goal is to unmix DESIS 30 m resolution SIF, the results indicate strong performance and sufficient reliability for the proposed framework in SIF unmixing applications. This study contributes to advancing the use of high spatial resolution SIF data for monitoring vegetation health and productivity by providing valuable insights into the accuracy and potential applications of DESIS SIF products before FLEX becomes operational. Future work will focus on validating and unmixing SIF maps generated by the FLEX satellite mission.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Development of a precise full-SIF Retrieval Method Based on Principal Component Analysis Using HyPlant Hyperspectral Data

Authors: Miguel Morata Dolz, José Luis García-Soria, Bastian Siegmann, M.Pilar Cendrero-Mateo, Dr. Jochem Verrelst
Affiliations: University Of Valencia, Institute of Bio- and Geosciences, Plant Sciences (IBG-2), Forschungszentrum Jülich GmbH
The FLEX (Fluorescence Explorer) satellite mission, undertaken by the European Space Agency (ESA), is designed to monitor global vegetation health by capturing sun-induced chlorophyll fluorescence (SIF). This measurement provides key insights into photosynthetic activity and plant stress. The HyPlant sensor serves as an airborne technological precursor for FLEX, offering high-resolution hyperspectral data that is critical for the refinement of SIF retrieval techniques. HyPlant has acquired hundreds of flight lines over diverse vegetation types and environmental conditions. This extensive dataset is used to evaluate new full-SIF retrieval methods. This study explores a novel SIF retrieval methodology designed to improve the accuracy and reliability of fluorescence and reflectance estimations from complex spectral data. One widely used approach for SIF retrieval is the Spectral Fitting Method (SFM), and its derivative, SpecFit. The SpecFit method employs fitting algorithms across the entire spectral region of fluorescence emission, spanning from the red (670 nm) to the far-red and near-infrared regions (780 nm). In this approach, the reflectance (R) is modeled using a cubic spline, while SIF is represented as a sum of Gaussian, Lorentzian, or Voigt functions. However, these functional simplifications can result in less reliable retrievals under complex environmental conditions or canopy structures. To address this issue, we propose a novel method that employs a principal component analysis (PCA)-based basis set to model the shapes of R and SIF spectra more robustly, under the principles of physical radiative transfer. An alternative to empirical approaches such as SpecFit is the use of physically based radiative transfer models (RTMs). RTMs provide a comprehensive physical foundation for SIF retrieval by simulating coupled soil-canopy reflectance and fluorescence spectra. 
The SCOPE (Soil-Canopy Observation of Photosynthesis and Energy fluxes) model is a strong candidate for this task, offering outputs for both canopy reflectance and SIF. However, its high computational cost limits its application for large-scale or operational datasets. To overcome this limitation, an extensive dataset of reflectance and fluorescence spectra was generated using SCOPE. Subsequently, the dataset was subjected to PCA to identify the principal components that capture the greatest variability in the spectral distributions. The eigenvectors derived from PCA define a basis set where only the dominant components are retained, thus reducing the computational complexity without compromising accuracy. Using hyperspectral measurements of upwelling and downwelling radiance, the principal component coefficients were optimised to minimise the cost function. The full spectra of R and SIF were reconstructed by applying the inverse PCA transformation to the retrieved coefficients, thus ensuring that spectral features are preserved and that the oversimplifications inherent in traditional fitting methods are mitigated. The development and testing of this method were based on high-quality hyperspectral imagery acquired by the HyPlant sensor. By comparing the PCA-based retrieval results with those obtained using SpecFit and SCOPE simulations, we demonstrated the robustness and potential of this new approach for complex canopy scenarios. In conclusion, the PCA-based full-SIF retrieval methodology represents a promising advancement in SIF retrieval studies, combining the physical realism of RTMs with computational efficiency. The results highlight the capability of this approach to capture the nuanced variability in reflectance and SIF spectra, making it suitable for the operational demands of FLEX. Future work will focus on optimizing the method further for real-time applications and validating it against field measurements to ensure readiness for the FLEX mission.
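The PCA basis-set idea can be illustrated with a simplified sketch: derive an eigenvector basis from a library of simulated spectra, fit a measured spectrum by least squares in the reduced basis, and reconstruct via the inverse transform. This deliberately omits the coupled reflectance/SIF radiance cost function of the actual method, and the function names are illustrative:

```python
import numpy as np

def pca_basis(simulated_spectra, n_components):
    """Eigenvector basis from a library of simulated spectra (rows = spectra)."""
    mean = simulated_spectra.mean(axis=0)
    centred = simulated_spectra - mean
    # SVD of the centred library gives the principal directions;
    # keep only the dominant components as the reduced basis
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_components]  # mean spectrum, basis of shape (k, bands)

def fit_and_reconstruct(measured, mean, basis):
    """Least-squares coefficients in the reduced basis, then inverse transform."""
    coeffs, *_ = np.linalg.lstsq(basis.T, measured - mean, rcond=None)
    return mean + basis.T @ coeffs  # reconstructed full spectrum
```

Because only the leading eigenvectors are retained, the fit has far fewer free parameters than a band-by-band inversion, which is the source of the computational saving claimed for the method.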

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Evaluating modeled carbon and nitrogen cycles via leaf chlorophyll content and remote sensing observations

Authors: Tuuli Miinalainen, Amanda Ojasalo, Sönke Zaehle, Holly Croft, Dr. Mika Aurela, Mikko Peltoniemi, Silvia Caldararu, Tea Thum
Affiliations: Finnish Meteorological Institute, Max Planck Institute for Biogeochemistry, School of Biosciences, University of Sheffield, Natural Resources Institute Finland, Trinity College
Slowing down climate change calls for a strengthening of natural carbon sinks. To support the development of climate policies, it is essential to understand global carbon dynamics and narrow down uncertainties related to land-atmosphere interactions. Remote sensing observations provide valuable information on the state and dynamics of global vegetation. One recent advance in vegetation remote sensing (RS) is the retrieval of leaf chlorophyll (chl). Leaf chl is a photosynthetic pigment in plants that facilitates the harvesting of light energy from solar radiation to drive photosynthesis. Leaf chl is strongly linked to the leaf nitrogen allocated to photosynthetic fractions and is a strong proxy for plants’ photosynthetic capacity. Hence, RS leaf chl observations provide a novel opportunity for benchmarking the modeled terrestrial nitrogen and carbon cycles. Terrestrial biosphere models (TBMs) are process-based models that are designed to examine carbon fluxes between the land surface and the atmosphere. In this project, we harnessed an RS-based leaf chl product by Croft et al. [1] for evaluating a TBM called "Quantifying Interactions between terrestrial Nutrient CYcles and the climate system" (QUINCY) [2]. QUINCY simulates fully coupled carbon and nitrogen cycles and provides diagnostics for leaf chl. The RS leaf chl is derived from the ENVISAT MERIS full resolution reflectance data with a two-stage radiative transfer model for 2003–2011. We ran site-level simulations for over 500 sites distributed worldwide to represent all major global biomes. To make sure that the RS leaf chl data corresponds to our QUINCY simulation parametrization, we analyzed the land cover values for each site from the land cover map ESA-CCI-LS that was used in the retrieval of RS leaf chl. We selected only the RS data pixels for which the land cover classification matches the QUINCY ecosystem type assumed for each site. 
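The pixel-selection step can be sketched as a simple label match; `matched_site_chl` is a hypothetical helper, not part of QUINCY or the Croft et al. product:

```python
import numpy as np

def matched_site_chl(rs_chl, rs_landcover, quincy_class):
    """Mean RS leaf chlorophyll over pixels whose land-cover label matches
    the ecosystem type assumed for the site in QUINCY; NaN if none match."""
    chl = np.asarray(rs_chl, dtype=float)
    match = np.asarray(rs_landcover) == quincy_class
    return float(chl[match].mean()) if match.any() else float("nan")
```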
For a subset of 153 sites, we compared the QUINCY gross primary productivity (GPP) to the FLUXNET ground-level observations. Our results suggested a good agreement between QUINCY and RS leaf chl. However, QUINCY modeled lower leaf chl variation for some of the ecosystem types, which suggests that the modeling framework is missing factors that contribute to the observed variation. There was also a good agreement in carbon uptake, as the QUINCY annual GPP correlates well with the FLUXNET observations (r = 0.7, p < 1e-23). For the majority of sites (∼57 %), though, QUINCY underestimates annual GPP by 27 % on average. Our study paves the way for more versatile use of satellite observations within TBMs. [1] Croft, H., Chen, J.M., Wang, R., Mo, G., Luo, S., Luo, X. et al., The global distribution of leaf chlorophyll content, Remote Sens. of Environ., 236, 111479, 2020. [2] Thum, T., Caldararu, S., Engel, J., Kern, M., Pallandt, M., Schnur, R., Yu, L. and Zaehle, S., A new model of the coupled carbon, nitrogen, and phosphorus cycles in the terrestrial biosphere (QUINCY v1.0; revision 1996), Geosci. Model Dev., 12, 4781, 2019.

Wednesday 25 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Reliability-Enhanced GPP Simulations Within a Land Surface Model Through the Co-Assimilation of Space-Borne SIF Retrievals and In Situ GPP Estimates

Authors: Cédric Bacour, Vincent Tartaglione, Fabienne Maignan, Philippe Peylin
Affiliations: LSCE / CEA
Through photosynthesis, terrestrial vegetation fixes approximately 120 to 170 GtC per year, making it both the largest and most uncertain component of the global carbon cycle. This is reflected by the large variation in the spatio-temporal distribution of the Gross Primary Productivity (GPP – the total amount of CO2 absorbed and turned into organic matter through photosynthesis) estimated by data-driven approaches or simulated by land surface models. Enhancing understanding of GPP drivers at site-to-regional scales and reducing uncertainties in land surface models are crucial for improving confidence in projections of global carbon cycle dynamics and their feedbacks on climate change. In this study, we improve the simulation of terrestrial GPP dynamics by the ORCHIDEE land surface model through parameter optimization using a 4DVar data assimilation framework. This framework co-assimilates Solar-Induced Chlorophyll Fluorescence (SIF) data retrieved from the Copernicus Sentinel-5P TROPOMI instrument (TROPOSIF product, provided at 0.1° spatial and 8-day temporal resolutions for a selection of homogeneous pixels) and daily GPP estimates derived from eddy-covariance flux measurements. The observation operator for SIF follows a process-based description of leaf fluorescence and its integration at the canopy level, accounting for canopy structure: at the leaf level, fluorescence emission follows the FluorMODleaf concept and depends on a parametric NPQ function of air temperature, PAR, and the degree of photosynthetic saturation; top-of-canopy SIF is then computed using a two-stream radiative transfer scheme, accounting for sunlit and shaded leaf fractions within each canopy layer. Depending on the plant functional type, about 10–15 parameters controlling photosynthesis, carbon allocation, and phenology are optimized. 
Three data assimilation experiments are conducted to evaluate the observational constraints provided by each dataset and their complementarity: two experiments in which SIF and GPP are assimilated individually and one in which they are assimilated jointly. We quantify the improvement in model reliability by comparing ORCHIDEE simulations using the optimized parameters, from site/pixel to regional scales, against independent data not used in the assimilation process.
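The abstract does not give the assimilation objective explicitly, but a generic 4DVar cost function for co-assimilating the two data streams would take the standard form, with background state x_b, background-error covariance B, and per-stream observation operators H_s and error covariances R_s (all notation here is an assumption, not taken from the study):

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
 + \tfrac{1}{2}\sum_{s \in \{\mathrm{SIF},\,\mathrm{GPP}\}} \sum_{t}
   \big(\mathbf{y}_{s,t}-H_{s,t}(\mathbf{x})\big)^{\mathsf T}\mathbf{R}_{s}^{-1}\big(\mathbf{y}_{s,t}-H_{s,t}(\mathbf{x})\big)
```

In the single-stream experiments the outer sum runs over one dataset only; in the joint experiment both terms constrain the same parameter vector.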

Wednesday 25 June 18:00 - 18:20 (EO Arena)

Demo: D.03.31 DEMO - SNAP in Action - Various Application Examples throughout the week demonstrating the power of SNAP for EO data visualisation, analysis and processing - session 4

SNAP is the ESA toolbox for visualising, analysing and processing optical and microwave EO data. SNAP supports a large number of current and past satellite sensors as well as generic data formats. SNAP addresses all kinds of users, from early-stage students through experienced researchers to production managers responsible for public and commercial EO processing services.

In a series of demonstrations we showcase this breadth of possibilities across a variety of real-life land and water applications. Demonstrations will be repeated multiple times to allow as many participants as possible to join a specific demonstration. We will tailor the daily programme from a set of prepared demonstrations according to the themes of the day and user needs expressed during the conference.

The following list gives a glimpse of the demonstrations from which we can select:
1. Sentinel-1 ETAD processing with SNAP
2. Change Detection Monitoring
3. Supporting new SAR missions with SNAP
4. “Live” fire evolution in Los Angeles using Sentinel-2 imagery
5. Burned Areas Detection – Mehedinti, Romania
6. Monitoring Drought Evolution – Dobrogea, Romania
7. Water Quality in urban areas at the example of the city of Hamburg
8. Interpreting Hyperspectral Data for coastal habitat mapping

Thursday 26 June

1064 events

Thursday 26 June 08:30 - 10:00 (Room 0.96/0.97)

Session: C.06.10 Intercomparison Exercises in Earth Observation – ACIX, CMIX and RAMI

In this session, we present and discuss the recent results of four Intercomparison Exercises (ACIX-Land, ACIX-Aqua, CMIX and RAMI) in the optical Earth Observation domain. These exercises serve as independent benchmarking mechanisms for Earth Observation simulations and retrieval algorithms.

The Atmospheric Correction Intercomparison eXercise (ACIX) was jointly initiated by ESA and NASA within the framework of CEOS WGCV for the evaluation of atmospheric correction algorithms for the retrieval of surface reflectance, aerosol optical depth (AOD) and water vapour in two different exercises over land and water surfaces. The current implementations of ACIX-III Land and Aqua evaluate algorithms on space-borne imaging spectroscopy data (PRISMA/EnMAP) over validation networks such as RadCalNet, HYPERNETS and AERONET(-OC), as well as ad-hoc campaigns.

After the first ACIX exercise, the need for a separate activity on cloud masking emerged, and with it the Cloud Masking Intercomparison eXercise (CMIX). The current CMIX-II looks at both physical and machine-learning-based algorithms with different validation data sets over Landsat-8 and Sentinel-2 scenes. In cooperation with the University of Maryland/NASA GSFC, CMIX supports the development of a sky camera network for cloud masking reference data.

The Radiation Transfer Model Intercomparison Exercise (RAMI), supporting the World Climate Research Programme (WCRP) and the International Radiation Commission (IRC) and organised by the European Commission’s Joint Research Centre (JRC), assesses (1D)-3D radiative transfer models, with a focus on vegetated surfaces. RAMI comprises six exercise phases spanning more than twenty years. Current implementations are oriented towards satellite and in-situ observations, especially Copernicus optical missions. RAMI-V proposes more than 100 000 experiments, with variables such as BRF, albedo, and absorption and transmission through the canopy, while RAMI4ATM is the first RAMI exercise dedicated to surface-atmosphere coupling. The atmosphere-surface scenes include seven atmosphere families with increasing complexity over eight surface types.

Presentations and speakers:


Introduction to Intercomparison Exercises for Earth Observation


  • Noelle Cremer

ACIX-III - Results from the third implementation of the Atmospheric Correction Inter-comparison eXercise over Land (Part I) and Water (Part II)


  • Noelle Cremer
  • Claudia Giardino

CMIX-II – Status of the second implementation of the Cloud Mask Intercomparison eXercise


  • Jan Wevers

RAMI4ATM/RAMI V – Design and results of current Radiation Transfer Model Intercomparison Exercises


  • Christian Lanconelli

Discussion on future developments, towards harmonisation of Intercomparison Exercises in the optical Earth Observation domain


  • Noelle Cremer
  • Christian Lanconelli
  • Claudia Giardino
  • Jan Wevers

Thursday 26 June 08:30 - 10:00 (Hall G2)

Session: E.03.01 Commercial Earth Observation Missions: Embracing New Paradigms and Innovative Models

This session focuses on presenting ideas for new commercial Earth observation missions that address current and emerging market needs. Presenters will discuss how these missions cater to a balanced mix of institutional and commercial customers, highlighting the unique demands of each segment.

The session will feature detailed examples of various business models, illustrating where public support is prevalent and where it is minimal or absent. Presenters will analyze the benefits of different approaches, showcasing how public-private partnerships and fully private initiatives can both drive innovation and meet market demands effectively. This comprehensive overview aims to provide valuable insights into the evolving landscape of commercial Earth observation missions and their impact on the industry.

Thursday 26 June 08:30 - 10:00 (Hall G2)

Presentation: The Role of Public and Private Sectors in Addressing Emissions Reduction and the Unifying Opportunity for Space to Drive Net Zero

Authors: Dan Wicks
Affiliations: GHGSat
Tackling greenhouse gas emissions inherently requires close collaboration across both public and private sectors, as neither can achieve meaningful progress in isolation. Governments play a crucial role in setting policy and regulatory frameworks, establishing incentives for sustainable practices, and investing in green technology and infrastructure. Private industry drives innovation and is ultimately responsible, through action, for implementing reductions. By aligning goals and pooling resources, these sectors can accelerate the transition to Net Zero and address the climate crisis more effectively. This presentation will showcase outcomes of the UK Space Agency-funded Methane programme, delivered by Satellite Applications Catapult and GHGSat. This programme has given UK organisations access to high-resolution methane emissions satellite data from the GHGSat constellation and explored the role of satellite data in supporting emissions reduction strategies, including the explicit roles of both public and private sectors and the levers available to them, such as:
• Replacement of modelled estimates of methane emissions in the UK National Atmospheric Emissions Inventory (NAEI) with empirical data from measurement.
• Support for the implementation of reduction schemes and incentives, for example expansion of the UK emissions trading scheme and carbon border adjustment mechanism, by providing independent Monitoring, Reporting and Verification (MRV).
• Enforcement of existing and future environmental and health-and-safety regulation, such as Leak Detection and Repair (LDAR) and emissions-threshold penalties relating to GHG emissions, with improved data.
• Support for operators to diversify income sources, for example gas capture for energy generation, carbon credits to support methane emissions reduction efforts, or future acquisition of facilities.
• Ensuring operational sites are managed effectively, managing potential risks and improving resource allocation, such as reducing gas revenue loss through leakage or reducing the risk of safety incidents.
• Demonstrating ESG commitment, and notably performance relative to competitors in the market, to help attract customers and investors, enhance valuation and minimise negative publicity and potential reputational damage.
This presentation will highlight key use cases and lessons learnt on business models and mechanisms for driving public-private partnership in response to the challenge of emissions reduction. It will focus in particular on the role of space as a unique enabler of change, considering also the relative contributions of public-sector mission capabilities, such as those delivered by programmes like Copernicus, versus emergent private-sector missions. Recommendations will be given on next steps and the pathway to adoption.

Thursday 26 June 08:30 - 10:00 (Hall G2)

Presentation: A Hybrid Constellation Strategy for Multispectral Observation

Authors: Alix De Beusscher, Nicolas
Affiliations: Aerospacelab
The Sentinel-2 mission is providing invaluable multispectral imagery for a wide range of applications such as agriculture, forestry, urban planning, and environmental monitoring. With more than 10,000 registered users and consistently high satisfaction levels, the mission is celebrated for its exceptional radiometric accuracy, Signal-to-Noise Ratio (SNR), and Modulation Transfer Function (MTF). These qualities make it indispensable for precise mapping and analysis. However, despite its strengths, the mission faces notable limitations, particularly in its revisit time of five days, which poses challenges for applications requiring near-real-time data, such as rapid disaster response or time-sensitive crop management. The next-generation Sentinel-2 (Sentinel-2 NG) promises to address some of these gaps, with planned improvements in spatial, temporal, and spectral performance. Nevertheless, the increasing diversity of multispectral data users underscores the need for complementary solutions that go beyond institutional missions. This presentation will explore the integration of Sentinel-2 and Sentinel-2 NG with a companion smallsat Multispectral Imaging (MSI) constellation, forming a hybrid model to deliver enhanced revisit capabilities while retaining access to high-quality data and ensuring seamless interoperability. Smallsat constellations can reduce revisit times to daily or even sub-daily frequencies, enhancing existing applications such as detailed crop management, or wildfire and flood monitoring. These missions, while generally offering lower radiometric quality and shorter lifespans compared to institutional ones like Copernicus Sentinel-2 or the Landsat program, are well-suited to address the majority of multispectral user needs, where timeliness often outweighs peak performance. For instance, operational monitoring of crop stress or water resources frequently does not require the highest level of precision if used between intervals when Sentinel data are available. 
This hybrid approach introduces an innovative trade-off: leveraging Sentinel-2 for baseline data while utilizing commercial missions to fill temporal gaps and provide tailored, on-demand imagery. The presentation will outline the technical, operational, and business implications of such a partnership, emphasizing the complementary roles of public and private players. It will also address challenges such as data harmonization, calibration between platforms, and the financial models necessary to sustain this ecosystem. This hybrid constellation model represents a pragmatic and forward-looking solution to current revisit limitations. By combining institutional rigor with private-sector innovation, this framework has the potential to elevate Earth observation capabilities, ensuring a more responsive future for multispectral applications while maximizing the impact of investments in Sentinel-2 and Sentinel-2 NG.

Thursday 26 June 08:30 - 10:00 (Hall G2)

Presentation: Destination Earth: pioneering sustainable and attractive business models for a digital platform to institutional and private actors

Authors: Dr Rochelle Schneider, Nicolas Monzali, Madleen Bultez, Pierre Arnaud, Christophe Taillandier
Affiliations: ESA, Mews Partners
The European Commission is driving the Destination Earth (DestinE) initiative, a cutting-edge digital platform designed to support climate change mitigation. DestinE serves a dual ecosystem of users and developers, ranging from public policymakers to industrial sectors and the general public. Its development is spearheaded by three entrusted entities: ECMWF, responsible for creating digital twins to deliver advanced and precise predictions of Earth’s climate; EUMETSAT, managing the Datalake infrastructure for data hosting and access services; and ESA, developing the Service Platform, which serves as the central access point for users to explore and leverage DestinE data. This initiative unlocks opportunities for service providers in Earth observation and climate sectors. However, two critical challenges must be addressed to ensure the platform’s attractiveness and long-term impact within this dual public-private ecosystem. Enabling business opportunities for service providers poses a challenge, as the core components of the platform are developed with public funding. Another critical issue is ensuring the long-term sustainability of DestinE beyond 2030, when public funding might decrease. Inspired by benchmarks from the Software-as-a-Service (SaaS) industry and aligned with public funding policies, this paper explores three potential solutions. The first focuses on a freemium business model for service providers, defining free and paid features to balance platform accessibility, public utility, revenue generation and alignment with public funding principles. The second examines the evolution of funding sources, proposing new funding structures and contribution fees to support ongoing platform management, operations, and innovation, ensuring financial sustainability beyond 2030. 
The third addresses collaborative frameworks for innovation, positioning DestinE as a reference platform for research and academia by providing access to accurate Earth system modeling and fostering innovation through collaborative environments, where academia and researchers contribute the latest advancements and open-source technologies to enhance the platform. The presentation concludes with recommendations for implementing sustainable business models at both the service provider and platform levels. It outlines strategies to onboard service providers and establish collaborative ecosystems, aiming to maximize DestinE’s attractiveness and utility for diverse user groups within the public-private dual ecosystem.

Thursday 26 June 08:30 - 10:00 (Hall G2)

Presentation: The Golden Twins: Leveraging LEO Satellite and CubeSat Capabilities to Offer Tailored Services to Greek Islands' Stakeholders

Authors: Vasiliki (Betty) Charalampopoulou, Polychronis (Pol) Kolokoussis, Alexandra Papadaki, Thomas Papakosmas, Nikolaos Antonios Livanos, Ioannis Gitas, Vaios Lappas, Charalabos Ioannidis
Affiliations: Geosystems Hellas SA (GSH), EMTECH SPACE S.A., Laboratory of Forest Management and Remote Sensing (FMRS), Aristotle University of Thessaloniki (AUTH), Dept. of Aerospace Science and Technology, School of Sciences, National and Kapodistrian University of Athens (NKUA), School of Rural, Surveying and Geoinformatics Engineering (SRSGE), National Technical University of Athens (NTUA)
The Golden Twins (GT) project presents an innovative approach to address emerging market needs in Earth Observation (EO) by offering an advanced, satellite-agnostic platform tailored to the unique demands of institutional and commercial customers. Designed to enhance safety, sustainability, and resilience, the GT platform focuses on three core applications: forest fire monitoring and prevention, extreme precipitation event tracking, and maritime vessel monitoring. These services provide actionable insights and rapid response capabilities, catering to sectors such as tourism, insurance, maritime operations and public safety. The Golden Twins platform is designed as a data-agnostic system, able to consume imagery from all present Greek CubeSats as well as all future Greek LEO missions. It leverages a diverse array of data sources beyond the Copernicus Sentinel-1 & 2 missions, including the HSD CubeSat constellation, the Ermis mission, forthcoming microsatellites from the Greek National Smallsat Programme, as well as Copernicus contributing missions, to ensure high revisit rates and near-real-time insights. This capability positions GT as a complementary system to governmental initiatives, offering tailored solutions to commercial markets, focusing on tourism-driven industries, insurance providers, and maritime operators.
Key Features:
- Services Offered:
1. Forest Fire Monitoring & Prevention: Offers fire risk assessments, near-real-time fire detection, and post-fire mapping. It supports safety procedures for high-risk areas, evacuation insights, and infrastructure damage assessment, addressing the needs of tourism operators and insurance firms.
2. Extreme Precipitation Monitoring: Tracks floods and their aftermath (flooded areas and debris accumulation), enabling rapid infrastructure damage assessment and facilitating rapid safety notifications and recovery efforts, enhancing operational continuity in flood-prone areas.
3. Maritime Vessel Monitoring: Utilizing a combination of EO (optical, SAR) and AIS data, this service ensures comprehensive vessel tracking, detecting dark vessels and illegal activities, and monitoring port congestion.
- User Interaction and System Design: GT’s modular, interoperable design supports easy access through a web-based interface, allowing users to track incidents, generate reports, and receive alerts customized to their needs. By collaborating with pilot users and stakeholders, the platform ensures alignment with real-world requirements and operational scenarios.
Market Differentiation and Added Value: GT represents a balanced public-private partnership model, strengthening both Greek and European EO capabilities while fostering innovation and economic growth. It brings together cutting-edge technical expertise from leading Greek public universities (AUTH, NTUA, NKUA) and the commercial acumen of industrial partners GSH and EMTECH. Supported by ESA’s InCubed program (the first project funded through InCubed in Greece), GT leverages institutional backing to drive foundational technical advancements while establishing a sustainable business model through targeted commercial engagement. Utilizing a subscription-based pricing strategy, GT serves diverse sectors such as tourism, insurance, and maritime industries, while also offering tailored, high-value services to public agencies, including civil protection and local authorities. This synergy ensures GT not only addresses critical societal needs but also establishes itself as a scalable, future-proof platform in the evolving EO market. Golden Twins integrates real-time EO data and advanced AI to provide faster, more accurate monitoring compared to government-focused competitors. Unlike institutional-only services, GT offers a scalable, cost-effective, and adaptable commercial solution, with significant potential for applications on other remote islands worldwide. 
GT’s modular, interoperable design positions it as a first-of-its-kind platform for Greek EO markets, addressing gaps left by institutional-only services. Its high revisit rate and robust data integration surpass traditional methods, enabling stakeholders to address challenges posed by climate change, such as wildfire risks and flooding, with precision and speed.

Thursday 26 June 08:30 - 10:00 (Hall G2)

Presentation: ESA & The Division of Innovative Labor: Analyzing the Value Creation of Upstream Public R&D in the European Earth-Observation Ecosystem

Authors: Maximilian Schwaiger
Affiliations: UCL - Institute for Innovation and Public Purpose (IIPP)
The Copernicus program, administered by the EU Agency for the Space Programme (EUSPA), forms the backbone of European Earth Observation (EO) capabilities. In particular, ESA’s contribution in the form of the Sentinel satellite missions has been crucial to the development and implementation of the Copernicus ecosystem. While all Sentinel missions and components are developed and implemented by ESA, their application feeds publicly funded R&D into a wider European space innovation system. ESA being firmly in charge of crucial R&D components of Sentinel seemingly fits ESA’s strategy of upstream creation for mid- and downstream commercialization opportunities (Mazzucato & Robinson, 2019). However, the reality of midstream commercialization paints a different picture. The Division of Innovative Labor framework (Arora & Gambardella, 1994; Eickelpasch & Fritsch, 2004; Hagedoorn, 2002) suggests that, particularly in high-tech sectors such as aerospace and satellite engineering, an ongoing shift in technological regimes toward general and abstract knowledge incentivizes a specialization of actors in the inventive process. This is further supported by the traditionally strong IP protection mechanisms in these sectors, which allow for a risk-free diffusion of codified knowledge (Arora & Gambardella, 1994; Cowan & Foray, 1997). Despite the traditionally high entry costs into the sector, this would imply a greater possibility for small, highly specialized organizations to focus on specific aspects of the innovative process in the upstream segment, leading to increased opportunities for the procurement of specialized components or codified knowledge by other actors. In an alternative reading, ESA could be seen as the provider of such specialized knowledge and could thereby fill a gap left by lacking corporate R&D efforts (Arora et al., 2015; Kantor & Whalley, 2023).
This, in turn, would imply a greater degree of commercialization of EO satellite technologies in the midstream segment (Deleidi & Mazzucato, 2018; Robinson & Mazzucato, 2018). As neither appears to be the case in the Copernicus program, this raises the question: “What role do ESA’s Sentinel missions play in the division of innovative labor for the commercial European EO ecosystem?”. This question implicitly addresses value creation by public upstream R&D in EO. Shedding further light on it could help shape future ESA missions from a market-shaping perspective (Mazzucato, 2016; Weinzierl, 2018), by highlighting downstream value creation and impulse-setting through public R&D spending. It could further lead to a better understanding of the organizational capabilities necessary for such value creation (Kattel, 2024). To address this research question, a study of contracts (e.g. grants and procurement) at the locus of such interaction, and of the remit and goal of the interaction, is necessary. I therefore employ a multi-step qualitative network analysis (QNA) framework. This framework expands on traditional quantitative centrality measurements of networks by adding a layer of qualitative high-level expert interviews with actors central to the network, in order to uncover the meaning and importance attributed to the relationships their organization engages in (Ahrens, 2018; Heath et al., 2009; Schipper & Spekkink, 2011). For the network construction, I propose to use organizations involved in the Sentinel missions as network nodes and volume-weighted contracts as edges. I further intend to employ eigenvector centrality as a topological measurement, as it attributes network centrality based on the quality of connections a node has, thus aligning with the research aims of this paper. For the qualitative interviewing step, I aim to conduct 10 to 15 expert interviews with central organizations identified in the quantitative step.
Here, I intend to use the concentric circles interviewing technique, in which interviewees are asked to situate their organization’s partners on a graph of several concentric circles and describe their interaction and relationship in detail. This method, which originates in the social sciences, has been demonstrated to be a reliable tool for QNA (Ahrens, 2018). I expect this study to shed further light on the role of public organizations in the division of innovative labor. This is of interest to ESA as it may contribute to its mission evaluation for Sentinel in particular and its commercialization strategy in general. It further adds value to LPS panel F05.02 by enriching the existing debate on evaluating public upstream R&D spending with an in-depth mixed-methods study adopting a QNA lens. References: Ahrens, P. (2018). Qualitative network analysis: A useful tool for investigating policy networks in transnational settings? Methodological Innovations, 11(1). https://doi.org/10.1177/2059799118769816 Arora, A., & Gambardella, A. (1994). The Changing Technology of Technological Change: General and Abstract Knowledge and the Division of Innovative Labor. Research Policy, 23, 523-532. Arora, A., Belenzon, S., & Patacconi, A. (2015, January). Killing the Golden Goose? The Decline of Science in Corporate R&D. NBER Working Paper Series, Working Paper 20902: http://www.nber.org/papers/w20902 Cowan, R., & Foray, D. (1997). The Economics of Codification and the Diffusion of Knowledge. Industrial and Corporate Change, 6(3), 595-622. Deleidi, M., & Mazzucato, M. (2018, February). Putting Austerity to Bed: Technical progress, aggregate demand and the supermultiplier. IIPP Working Paper 2018-02: https://www.ucl.ac.uk/bartlett/public-purpose/sites/public-purpose/files/iipp_wp2018-02_putting_austerity_to_bed-_technical_progress_aggregate_demand_and_the_supermultiplier.pdf Eickelpasch, A., & Fritsch, M. (2004).
Stimulating the Division of Innovative Labor by Regional Competition for R&D Subsidies – A New Approach in German Innovation Policy. Paper presented at "Regions and Fiscal Federalism", 25th-29th August 2004, Porto. Hagedoorn, J. (2002). Inter-firm R&D partnerships: an overview of major trends and patterns since 1960. Research Policy, 31, 477-492. Heath, S., Fuller, A., & Johnston, B. (2009). Chasing shadows: defining network boundaries in qualitative social network analysis. Qualitative Research, 9(5), 645-661. Kantor, S., & Whalley, A. T. (2023, July). Moonshot: Public R&D and Growth. NBER Working Paper Series, Working Paper 31471: https://www.nber.org/system/files/working_papers/w31471/w31471.pdf Kattel, R. (2024). Past, present and future of innovation agencies in Europe. UCL-IIPP Working Paper Series (IIPP WP 2024-12). Available at: https://www.ucl.ac.uk/bartlett/public-purpose/wp2024-12 Mazzucato, M. (2016). From Market Fixing to Market-Creating: A new framework for innovation policy. Industry and Innovation, 23(2), 140-156. Retrieved from https://discovery.ucl.ac.uk/id/eprint/10045784/1/Mazzucato_From_Market_fixing.pdf Mazzucato, M., & Robinson, D. K. (2016). Lost in Space? NASA and the Changing Public-Private Eco-System in Space. SPRU Working Paper Series. Robinson, D. K., & Mazzucato, M. (2018). Co-creating and directing Innovation Ecosystems? NASA's changing approach to public-private partnerships in low-earth orbit. Technological Forecasting and Social Change, 136, 166-177. Schipper, D., & Spekkink, W. (2011). Balancing the Quantitative and Qualitative Aspects of Social Network Analysis to Study Complex Social Systems. Complexity, Governance & Networks, 2(1), 5-22. Weinzierl, M. (2018). Space, the Final Economic Frontier. Journal of Economic Perspectives, 32(2), 173-192.
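The quantitative step described above can be illustrated with a minimal sketch of eigenvector centrality on a volume-weighted contract network. The organisation names, contract volumes, and the pure-Python power iteration below are all hypothetical, for illustration only, and are not taken from the study.

```python
def eigenvector_centrality(edges, iterations=200):
    """Power iteration on (A + I); the identity shift guarantees convergence
    even on bipartite networks and leaves the dominant eigenvector unchanged."""
    nodes = sorted({n for a, b, _ in edges for n in (a, b)})
    idx = {n: i for i, n in enumerate(nodes)}
    size = len(nodes)
    # Symmetric, volume-weighted adjacency matrix (undirected network).
    adj = [[0.0] * size for _ in range(size)]
    for a, b, w in edges:
        adj[idx[a]][idx[b]] += w
        adj[idx[b]][idx[a]] += w
    x = [1.0] * size
    for _ in range(iterations):
        y = [x[i] + sum(adj[i][j] * x[j] for j in range(size))
             for i in range(size)]
        top = max(y)                      # max-normalise: scores lie in (0, 1]
        x = [v / top for v in y]
    return dict(zip(nodes, x))

# Hypothetical volume-weighted contract edges (e.g. EUR millions).
contracts = [
    ("ESA", "PrimeA", 120.0),
    ("ESA", "PrimeB", 80.0),
    ("PrimeA", "SupplierC", 15.0),
    ("PrimeB", "SupplierC", 10.0),
]
scores = eigenvector_centrality(contracts)
```

Because eigenvector centrality weights a node by the centrality of its neighbours, an agency holding large contracts with well-connected primes scores highest, which is exactly the property the abstract cites as the rationale for this measure.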

Thursday 26 June 08:30 - 10:00 (Room 0.11/0.12)

Session: A.04.02 GHG monitoring for policy

In order to support the Paris Agreement and related mitigation efforts, it is increasingly recognised that greenhouse gas monitoring based on measurements can play a vital role. In 2024, WMO received approval to proceed with implementing a Global Greenhouse Gas Watch (G3W) to support national entities with measurement-based data to estimate emissions. UNEP's International Methane Emissions Observatory and its Methane Alert and Response System (MARS) is another example of a new initiative to support policy and trigger mitigation. This session also invites submissions from activities supporting the development of GHG Monitoring and Verification Systems around the world, serving policy and users interested in data uptake. This session aims to bring together the people generating policy-relevant information with the actual policymakers.


Thursday 26 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: From satellite detections to methane emission mitigation through UNEP's Methane Alert and Response System (MARS)

Authors: Itziar Irakulis-Loitxate, Manuel Montesino-SanMartin, Gonzalo Mateo-García, Meghan Demeter, Luis Guanter, Giulia Bonazzi, Anna Vaughan, Vit Ruzicka, Tobias A. de Jong, Shubham Sharma, Joannes D. Maasakkers, Ilse Aben, Tharwat Mokalled, Florencia Carreras, Konstantin Kosumov, Robert A. Field, Małgorzata Kasprzak, Adriana Valverde, Andreea Calcan, Claudio Cifarelli, Manfredi Caltagirone
Affiliations: International Methane Emission Observatory, United Nations Environment Program, Research Institute of Water and Environmental Engineering (IIAMA), Universitat Politècnica de València (UPV), Environmental Defence Fund, Computer Laboratory, University of Cambridge, University of Oxford, SRON Netherlands Institute for Space Research
As part of international efforts to reduce methane emissions and support the Global Methane Pledge, the United Nations Environment Programme (UNEP) launched the International Methane Emissions Observatory (IMEO). This initiative provides open, reliable, and actionable methane data globally and puts it directly into the hands of those who can act on it. Among the tools enabling this progress, satellite remote sensing has proven revolutionary. In particular, the detection of point source emissions with high-resolution satellites, such as Sentinel-2, Landsat, PRISMA, EnMAP, or EMIT, has advanced enormously in recent years, allowing the identification of the specific facilities responsible for the emissions. Building on these advancements, UNEP launched the Methane Alert and Response System (MARS) in 2023 under the management of IMEO. MARS uses information from more than a dozen high- and medium-resolution methane-sensitive space missions, with the ultimate goal of identifying active sources and driving mitigation actions. It detects recent methane emissions and notifies governments and oil and gas (O&G) companies worldwide. Once a point source is identified, MARS analyzes historical data to understand the source's behavior and keeps it under continuous monitoring in case emissions reappear. After a one-year pilot phase in 2023 and another year in a nominal phase, sending more frequent notifications and expanding the number of countries and companies notified, MARS now reports recent emissions (typically no older than 15 days) from the O&G sector to all countries in the world. This has yielded more than 1,400 plumes notified by the end of 2024 and several mitigation success stories in different countries around the world.
In addition, in its emissions detection and attribution exercise, MARS also detects hundreds of plumes from other sectors, such as coal or waste, which, together with all O&G plumes, are already openly available on the IMEO Eye on Methane Data platform. However, the large number of emissions detected at coal mines and landfills has encouraged MARS to expand its analysis towards metallurgical coal mines and to contribute more actively to IMEO's efforts in the waste sector. In this contribution, we will give a brief overview of MARS and an update on its status and methods, including results obtained with the recently integrated satellite data, and present some of the most interesting mitigation cases. Additionally, we will share initial results from the MARS pilot in metallurgical coal and discuss its broader contributions to methane mitigation in other sectors.

Thursday 26 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: How Point Source Imaging Satellites Are Changing the Landscape for International Methane Emissions

Authors: Jean-francois Gauthier
Affiliations: GHGSat
In support of the Paris Agreement and many supporting pledges and commitments, such as the Global Methane Pledge and the Oil and Gas Decarbonization Charter (OGDC), much of the focus has turned to the increasingly important need to reconcile bottom-up calculated inventories with top-down measurements. While many new technologies have emerged in support of more accurate methane measurement, no method has captured attention as much as satellites. In the last decade, several new types of systems have emerged to complement the global mappers typically launched by space agencies. Point source imagers with great sensitivity, such as GHGSat's constellation and Carbon Mapper's first satellite Tanager-1, have made it possible to pinpoint emissions from individual sources, shedding unprecedented light on specific leaks, bringing additional precision to policy-making, and driving mitigation. MethaneSAT is another recently launched system, sitting between global mappers and point source imagers in terms of capability and focusing on mapping area fluxes over regions such as oil and gas basins. To properly inform policies, data from all of these complementary systems must come together to provide the best possible picture of emissions. This presentation will discuss how the emergence of point source imagers is changing the landscape of international methane emissions. In particular, the following will be discussed:
• How point source imagers can work in a tip-and-cue system with global mapper satellites, with specific examples of GHGSat and Sentinel-5P/TROPOMI working together.
• How systems like GHGSat's constellation and IMEO's MARS can drive mitigation by alerting emitters.
• The importance of data validation, focusing on the ESA Third Party Mission (TPM) and NASA Commercial SmallSat Data Acquisition (CSDA) programs, both of which GHGSat has successfully passed, and community-wide activities such as those led by NIST in the US and NPL in the UK.
• The need for frequent monitoring of individual sites in order to properly qualify the profile of emissions, particularly in terms of persistence, with examples from GHGSat's constellation.
• How coincident monitoring of sites with multiple systems allows for cross-validation of results, with examples.
For satellites to live up to their full potential as a tool to drive both emissions mitigation and effective policies, collaboration between systems is crucial. The presentation will conclude with suggestions regarding next steps to build on current international efforts for standardization across systems.

Thursday 26 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: CEOS-CGMS coordinated roadmap to monitor carbon dioxide and methane from space

Authors: Dr Yasjka Meijer
Affiliations: ESA, CEOS-CGMS
The Committee on Earth Observation Satellites (CEOS) and the Coordination Group on Meteorological Satellites (CGMS) recognise that high-quality, systematic observations of atmospheric carbon dioxide (CO₂) and methane (CH₄) from space-based sensors can make critical contributions to an integrated global greenhouse gas (GHG) observing system. The joint CEOS-CGMS Working Group on Climate (WGClimate) was directed to establish a GHG Task Team to implement the roadmap for a constellation architecture for monitoring CO₂ and CH₄ from space. The primary objective of the GHG Roadmap is to coordinate efforts across CEOS and CGMS agencies to maximise the quality, utility, transparency and continuity of space-based GHG products for science and policy applications. Its ultimate goal is to facilitate the development of fit-for-purpose operational systems that integrate space-based GHG estimates with ground-based, airborne and shipborne observations of CO₂ and CH₄ to address the needs of a diverse range of stakeholders. The first issue of the GHG Roadmap in 2020 focused on delivering space-based CO₂ and CH₄ products to support the Paris Agreement’s Global Stocktake (GST) process. Issue 2 of the roadmap, published in 2024, continues to support that goal but has been updated to accommodate lessons learned from the first GST. Its scope has also been expanded to support the rapid evolution of the international GHG science, inventory, policy and regulatory communities. It specifically includes collaboration with the new World Meteorological Organization Global Greenhouse Gas Watch (WMO G3W) and United Nations Environment Programme International Methane Emissions Observatory (UNEP IMEO). In this presentation, we will outline the approach for delivering fit-for-purpose products to be co-developed with stakeholders that support GHG monitoring for policy.

Thursday 26 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Unlocking the Potential of Satellite-Derived Methane Emissions Data for Regulators and Policy Makers, Learnings From the UK

Authors: Sarah Cheesbrough, Mr Daniel Wicks
Affiliations: Satellite Applications Catapult, GHGSat
From 2023 to 2024, the UK has actively explored the use of asset-level methane emissions data derived from Earth observation satellites. The “Methane Monitoring Data Supply for the UK Programme,” led by the Satellite Applications Catapult in partnership with GHGSat and funded by the UK Space Agency, addresses critical gaps in understanding and valuing the benefits of satellite-derived greenhouse gas (GHG) data. Methane, a potent greenhouse gas with a high global warming potential, is a primary focus due to its substantial contribution to the UK’s overall emissions profile. The programme has provided access to GHGSat’s data archive and enabled satellite tasking, supporting research institutions, public sector bodies, and private companies in exploring the data. Through extensive stakeholder engagement, reaching over 150 organisations via technical training, business sprints, workshops, and outreach, the programme has fostered collaboration and capacity building. Notably, a co-hosted session with the Environment Agency brought together cross-sector regulators to explore the integration of satellite data into regulatory decision-making, address operational challenges, and identify policy opportunities for the future. From our engagement, we have defined the following requirements, which capture the public sector opportunity to use satellite data to combat greenhouse gas emissions:
• Enhancing the UK National Atmospheric Emissions Inventory (NAEI): incorporating satellite-derived methane data to complement modelled estimates, building on the UK’s leadership in global inventory accuracy through growing evidence and confidence in the approach over time.
• Supporting reduction schemes and incentives: enabling independent Monitoring, Reporting, and Verification (MRV) to support mechanisms like the UK Emissions Trading Scheme and the Carbon Border Adjustment Mechanism, ensuring compliance with emerging EU regulations critical for trade continuity.
• Strengthening enforcement of environmental regulations: augmenting tools for Leak Detection and Repair (LDAR) and emissions threshold penalties, integrating satellite data to provide new insights into asset-level emissions and mitigation strategies.
This presentation will share findings from the programme, which concludes in March 2025, addressing challenges to operational adoption and barriers identified through stakeholder engagement. Spanning sectors such as oil and gas, financial services, waste management, and agriculture, the insights highlight sector-specific needs and broader policy implications. As public bodies worldwide aim to achieve net-zero targets by 2050, this work underscores the transformative potential of satellite data as a cornerstone for evidence-based decision-making and climate action.

Thursday 26 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: The Copernicus Monitoring Service for Anthropogenic Greenhouse Gas Emissions

Authors: Ernest Koffi, Dr Nicolas Bousserez, Dr Luca Cantarello, Dr Anna Agusti-Panareda, Dr Panagiotis Kountouris, Dr Auke Visser, Dr Aura Lupascu, Richard Engelen
Affiliations: ECMWF
The European Commission has entrusted the European Centre for Medium-Range Weather Forecasts (ECMWF) with building an operational global Monitoring and Verification Support (MVS) capacity for anthropogenic CO2 and CH4 emissions as part of the Copernicus Atmosphere Monitoring Service (CAMS). CAMS already produces global daily forecasts and near-real-time analyses of atmospheric concentrations of CO2 and CH4. The new extended system combines model data and satellite retrievals to create a complete and consistent estimate of the atmospheric state and the underlying greenhouse gas (GHG) emissions using ECMWF’s IFS model and 4D-Var data assimilation system. The MVS system consists of improved prior fluxes from process-based models and national anthropogenic emissions inventories, as well as a novel methodology that enables the optimization of fluxes over long time periods (i.e., weeks to months). Research activities have been carried out for several years to develop this capacity by a large consortium of partners under the EU-funded CHE, CoCO2, CORSO, and CATRINE projects. The first observational data input to the MVS will consist of existing relevant satellites, later complemented by the upcoming Copernicus CO2M and Sentinel series of satellites. To ensure that the future monitoring system is fit for purpose, a comprehensive evaluation of its products against independent atmospheric observations (i.e., surface, airborne, total-column concentration measurements, and local flux measurements) is essential, together with comparisons with flux estimates from existing state-of-the-art inversion systems. In this presentation, we will show some of the existing CAMS greenhouse gas products and preliminary results of the new MVS capacity. We will also illustrate how these products can support policy-makers in curbing greenhouse gas emissions and in mitigation actions.
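The variational estimation underlying such a system can be summarised by the standard 4D-Var cost function; this is the generic textbook form, not necessarily the service-specific formulation:

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\top}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
 \;+\; \tfrac{1}{2}\sum_{i}\bigl(\mathbf{y}_i - H_i(\mathbf{x})\bigr)^{\top}\mathbf{R}_i^{-1}\bigl(\mathbf{y}_i - H_i(\mathbf{x})\bigr)
```

where x is the control vector (here, the atmospheric state and the GHG fluxes to be optimized), x_b its prior estimate, B the prior error covariance, y_i the observations in time bin i (e.g. satellite CO2 and CH4 retrievals), H_i the observation operator including atmospheric transport, and R_i the observation error covariance. Minimising J balances fidelity to the prior fluxes against fidelity to the observations.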

Thursday 26 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Using satellite observations of co-emitted species to better constrain CO2 emissions

Authors: Dr Auke Visser, Dr Nicolas Bousserez, Dr Luca Cantarello, Dr Aura Lupascu, Ernest Koffi, Dr Panagiotis Kountouris, Richard Engelen
Affiliations: ECMWF
Operational monitoring of greenhouse gas (GHG) emissions via inverse modelling is crucial to monitor compliance with emission reduction targets as set in international climate agreements. In the context of a new Copernicus service focused on GHG emission monitoring (entitled the CO₂ Monitoring and Verification Support capacity, or CO2MVS), inverse modelling capabilities have been added to the Integrated Forecasting System (IFS) via the 4D-Var assimilation system to allow for CO₂ emission estimation using satellite observations of atmospheric CO₂ columns. Here, we explore a multi-species inversion setup that additionally uses satellite observations of carbon monoxide (CO) and nitrogen dioxide (NO₂), air pollutants co-emitted with CO₂ in combustion processes, which can help isolate the fossil fuel CO₂ (ffCO₂) signal from observed atmospheric CO₂ variability. A critical step in the development of this multi-species inversion framework is the definition of the prior error covariance matrix (B matrix) for emissions of ffCO₂ and co-emitted species. This enables the propagation of information in space, time, and across species. We discuss the derivation of B matrix parameters for anthropogenic emissions, based on an ensemble of global, daily CO₂ emissions obtained by propagating parameter uncertainties in an ffCO₂ emission model, and a dataset of cross-species emission error correlation. We additionally consider an approach to evaluate the robustness of the derived B matrix parameters via sensitivity analyses. We subsequently estimate ffCO₂ emissions as a by-product of IFS NOx emission inversions. This is done by applying the prior CO₂:NOx emission ratio to posterior NOx emissions, accounting for uncertainties in this ratio and in prior CO₂ emissions. We will show an evaluation of the NOx-derived posterior CO₂ emissions by comparison with global and country-level ffCO₂ emission budget estimates, as well as with a range of independent surface and column CO₂ observations.
We will conclude by sharing perspectives towards the operational implementation of this multi-species emission inversion system.
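The ratio-based step described above can be sketched with simple first-order uncertainty propagation. All numbers below, and the assumption of independent relative errors that add in quadrature for a product, are illustrative placeholders, not CO2MVS values or methodology:

```python
import math

def ffco2_from_nox(nox_posterior, co2_nox_ratio, rel_err_nox, rel_err_ratio):
    """First-order sketch: derive posterior ffCO2 by applying a prior
    CO2:NOx emission ratio to posterior NOx emissions.

    For a product of independent factors, relative errors add in quadrature;
    the returned sigma is the resulting 1-sigma absolute uncertainty.
    """
    ffco2 = nox_posterior * co2_nox_ratio
    rel_err = math.sqrt(rel_err_nox ** 2 + rel_err_ratio ** 2)
    return ffco2, ffco2 * rel_err

# Hypothetical numbers: 2.0 Tg N/yr posterior NOx, ratio 500 Tg CO2 per Tg N,
# 10% NOx inversion error, 20% prior-ratio error.
est, sigma = ffco2_from_nox(nox_posterior=2.0, co2_nox_ratio=500.0,
                            rel_err_nox=0.10, rel_err_ratio=0.20)
```

In the real system the ratio and its uncertainty would vary by sector, region, and time, and the errors would be correlated through the B matrix rather than independent; the sketch only shows the structure of the calculation.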

Thursday 26 June 08:30 - 10:00 (Hall F2)

Session: A.02.04 Advances in Monitoring and Management of Forest Ecosystems - PART 1

About 4 billion hectares, or 31 percent of the global land surface, is covered by forests according to the latest FAO Forest Resource Assessment (FRA 2020). Forests are a valuable and irreplaceable ecosystem, playing an important role from ecological, social, and economic perspectives. Monitoring of forests is essential for a better understanding of how to protect and sustainably manage this valuable ecosystem.

Information needs vary and include forest area, forest types, forest structure, disturbances, health, and biomass. An unprecedented wealth of observations from optical and microwave sensors, active and passive, from low to high resolution allows new ways to monitor forests. Emphasis will be put on advances in detecting forest disturbances, forest health monitoring, species identification, support to sustainable forest management, estimating forest biomass, and forest carbon accounting. This session will showcase some of the more recent key achievements, including methods/algorithms, science, and applications.

Thursday 26 June 08:30 - 10:00 (Hall F2)

Presentation: Unveiling the added value of combining Sentinel-2 data with PlanetScope and aerial orthophotos for tree species classification in two central European forest ecosystems

Authors: Ardalan Daryaei, Michael Lechner, Noah Mihatsch, Anna Iglseder, Markus Hollaus, Hannes Hoffert-Hössl, Christian Prohaska, Markus Immitzer
Affiliations: BOKU University, Institute of Geomatics, Technische Universität Wien, georaum GmbH Ingenieurbüro für Geographie und Landschaftanalyse, Swiss Federal Research Institute WSL
As species extinction rates continue to rise and global environmental conditions deteriorate at an unprecedented pace, there is an urgent need for accelerated knowledge about species number, diversity, composition, and interactions, as well as their habitats. Specifically, in forest ecosystems, tree species diversity plays a pivotal role in explaining overall ecosystem functioning. Traditional methods, such as forest inventory and field-based approaches for acquiring information on tree species, are often costly, time-intensive, and impractical for large-scale applications. However, advancements in remote sensing technology provide new opportunities to collect such data efficiently. Various sensors, with a wide range of spatial, spectral, and temporal resolutions, are now being utilized to gather information on tree species. This study focused on comparing and combining data from three multispectral remote sensing sensors - Sentinel-2 (S2), PlanetScope (PS), and aerial orthophotos - for tree species classification. The study was conducted in two forested areas in Austria: the riparian forests of National Park Donau-Auen (NDPA), where 9 tree species were classified, and the Biosphere Reserve Wienerwald (BRWW), where 12 species were classified. The primary objectives were to assess the performance of each dataset in classifying tree species and to evaluate how combining these datasets could enhance classification accuracy. The rationale for combining datasets stems from the complementary nature of the spectral and phenological information in S2 and PS data and the detailed structural information provided by aerial orthophotos. Both mono-temporal and multitemporal S2 and PS datasets were analyzed alongside four combinations of datasets: S2 + PS, S2 + orthophoto, PS + orthophoto, and S2 + PS + orthophoto. 
A robust reference dataset (835 samples for NDPA and 1,283 samples for BRWW) was used in conjunction with a Random Forest classification algorithm and recursive feature selection to perform the tree species classifications. When comparing mono-temporal datasets, Sentinel-2 consistently outperformed PlanetScope. For NDPA and BRWW, S2 achieved the highest overall accuracies of 63.7% and 70.6%, respectively, compared to 58.1% and 57.4% for PS. The use of multitemporal S2 data further improved accuracy, achieving 78.3% for NDPA and 83.3% for BRWW, compared to 74.4% and 77.2% for multitemporal PS data. Combining datasets resulted in slight improvements over the sole use of S2 and PS data. The highest overall accuracies were 81.0% for the S2 + PS + orthophoto combination in NDPA and 86.7% for the S2 + PS combination in BRWW. Classification accuracies were higher in BRWW for most classifications, which may be attributed to the larger reference dataset and the inclusion of more phenologically and morphologically distinct tree species on this site. Overall, this study highlights the well-known benefit of using multitemporal datasets and also the potential of combining multispectral remote sensing datasets, particularly S2 and PS, to enhance tree species classification. S2 data demonstrated superior accuracy and stability, especially in mono-temporal analyses. Notably, the study differentiated fairly well between three closely related poplar species, Populus alba, Populus ×canadensis, and Populus nigra, in the riparian forests of NDPA.
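The recursive feature-selection loop used in studies like this one can be sketched in outline. The toy importance score below (between-class spread of per-class feature means) merely stands in for the Random Forest feature importances the authors use, and the spectral samples and species labels are invented:

```python
def class_mean_spread(samples, labels, feature):
    """Toy importance proxy: spread of per-class means for one feature index."""
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x[feature])
    means = [sum(v) / len(v) for v in by_class.values()]
    return max(means) - min(means)

def recursive_feature_selection(samples, labels, keep):
    """Drop the least important feature until only `keep` remain."""
    features = list(range(len(samples[0])))
    while len(features) > keep:
        scores = {f: class_mean_spread(samples, labels, f) for f in features}
        features.remove(min(scores, key=scores.get))  # discard weakest feature
    return sorted(features)

# Invented samples with 3 "spectral" features: feature 0 is uninformative,
# feature 2 separates the two classes most strongly.
samples = [(0.5, 0.2, 1.0), (0.5, 0.3, 1.2),   # "oak"
           (0.5, 0.4, 3.0), (0.5, 0.5, 3.2)]   # "beech"
labels = ["oak", "oak", "beech", "beech"]
```

In practice the scorer would be retrained importances from the Random Forest at each elimination step, and the stopping criterion would typically be cross-validated accuracy rather than a fixed feature count.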

Thursday 26 June 08:30 - 10:00 (Hall F2)

Presentation: Towards understanding vegetation water dynamics in temperate forests using Sentinel-1 in a coupled modelling approach

Authors: Johanna Kranz, Prof. Dr. Matthias Mauder, Dr. Thomas Jagdhuber, Florian Hellwig, Prof. Dr. Matthias Forkel
Affiliations: TU Dresden University Of Technology, Junior Professorship in Environmental Remote Sensing, TU Dresden University Of Technology, Chair of Meteorology, German Aerospace Center (DLR), Microwave and Radar Institute, University of Augsburg, Institute of Geography
Background. Changes in plant phenology, such as earlier leaf unfolding and delayed autumn senescence, can alter the water cycle, e.g. by increasing transpiration and influencing precipitation patterns, resulting in limited availability of moisture in soil and vegetation during summer. Moreover, the increased occurrence of droughts in combination with higher temperatures increases forest fire danger. Microwave remote sensing offers potential to analyse both dynamics in vegetation phenology and water content because of the sensitivity of radar data to structural, organic, and dielectric properties of the scattering materials. Microwave data, e.g. Sentinel-1 SAR data, are widely used for the retrieval of soil and vegetation parameters (e.g., soil moisture) and more recently for live fuel moisture content (LFMC) in Mediterranean and western US ecosystems (Rao et al., 2020; Quaife et al., 2022). However, it is still unclear to what extent Sentinel-1 is sensitive to moisture content within temperate forests. Here we aim to develop an approach to retrieve both LFMC and intercepted water from Sentinel-1 backscatter time series for temperate forests in Central Europe from 2022 to 2023. Methods. To retrieve LFMC and intercepted water, we couple a semi-empirical backscattering model (Water Cloud Model; Attema & Ulaby, 1978) with a dielectric mixing model (Ulaby & El-rayes, 1987; de Jong et al., 2002). The model incorporates in-situ measurements of LFMC and rainfall interception calculated from precipitation and throughfall data using the model proposed by Rutter et al. (1971). Furthermore, remote sensing data on vegetation structure, including leaf area index (LAI) and fractional vegetation cover (FCOVER) from Sentinel-3, are integrated into the model.
After calibration, the model is inverted for LFMC and interception retrieval using a neural network and prior information on precipitation, Sentinel-1 backscatter, LAI and the temporal dynamics of the attenuation values predicted by the coupled Water Cloud Model. The information on LAI and precipitation allows the model to be applied and inverted at larger spatial scales. The model's calibration and inversion are evaluated in temperate evergreen needleleaf (ENF) and deciduous broadleaf (DBF) forests using meteorological data and ground measurements of LFMC.

Initial results. The calibrated model demonstrates the impact of water-related variables, specifically interception and LFMC, on the attenuation coefficient. As water content increases, canopy transmissivity decreases, with a more pronounced effect observed for co-polarisation than for cross-polarisation. Moreover, the relationship between LFMC and attenuation is stronger in magnitude than that of interception. The hybrid physical and data-driven approach allows the individual effects of the two vegetation descriptors, LAI and LFMC, to be analysed separately by eliminating either structural or moisture-related changes in the seasonal pattern of the Sentinel-1 signal.

Outlook. A deeper understanding of how these variables affect backscatter could improve the accuracy and applicability of microwave remote sensing in assessing plant phenology, ecohydrology and wildfire danger. By combining the strengths of a data-driven with a physics-driven model, the approach could be extended to cover larger areas of comparable vegetation, even without in-situ measurements of LFMC and intercepted water.

References:
Attema, E.P.W., Ulaby, F.T., 1978. Vegetation Modeled as a Water Cloud. Radio Science 13, 357–364. https://doi.org/10.1029/RS013i002p00357
de Jong, J.J.M., Klaassen, W., Kuiper, P.J.C., 2002. Monitoring of rain water storage in forests with satellite radar. IEEE Trans. Geosci. Remote Sensing 40, 338–347. https://doi.org/10.1109/36.992793
Quaife, T., Pinnington, E.M., Marzahn, P., Kaminski, T., Vossbeck, M., Timmermans, J., Isola, C., Rommen, B., Loew, A., 2022. Synergistic retrievals of leaf area index and soil moisture from Sentinel-1 and Sentinel-2. International Journal of Image and Data Fusion 1–18. https://doi.org/10.1080/19479832.2022.2149629
Rao, K., Williams, A.P., Flefil, J.F., Konings, A.G., 2020. SAR-enhanced Mapping of Live Fuel Moisture Content. Remote Sensing of Environment 245, 111797. https://doi.org/10.1016/j.rse.2020.111797
Rutter, A.J., Kershaw, K.A., Robins, P.C., Morton, A.J., 1971. A predictive model of rainfall interception in forests, 1. Derivation of the model from observations in a plantation of Corsican pine. Agricultural Meteorology 9, 367–384. https://doi.org/10.1016/0002-1571(71)90034-3
Ulaby, F.T., El-rayes, M.A., 1987. Microwave Dielectric Spectrum of Vegetation - Part II: Dual-Dispersion Model. IEEE Transactions on Geoscience and Remote Sensing GE-25, 550–557. https://doi.org/10.1109/TGRS.1987.289833
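The semi-empirical Water Cloud Model referenced above can be sketched as a short function. This is a minimal numpy sketch of the standard Attema & Ulaby (1978) form only; the coefficients A and B, the vegetation descriptor, and the soil backscatter value are illustrative placeholders, not the calibrated quantities from this study.

```python
import numpy as np

def water_cloud_model(V, sigma0_soil_db, theta_deg, A=0.12, B=0.30):
    """Water Cloud Model (Attema & Ulaby, 1978), minimal sketch.

    V              : vegetation water descriptor (placeholder for, e.g., an
                     LFMC-based canopy water content)
    sigma0_soil_db : bare-soil backscatter [dB] (illustrative value)
    theta_deg      : local incidence angle [deg]
    A, B           : empirical coefficients (placeholders; fitted in practice)

    Returns total backscatter in dB.
    """
    theta = np.deg2rad(theta_deg)
    # Two-way canopy attenuation (squared transmissivity): drops as V rises
    tau2 = np.exp(-2.0 * B * V / np.cos(theta))
    # Vegetation (volume scattering) contribution, linear units
    sigma_veg = A * V * np.cos(theta) * (1.0 - tau2)
    # Soil contribution attenuated by the canopy, linear units
    sigma_soil = tau2 * 10.0 ** (sigma0_soil_db / 10.0)
    return 10.0 * np.log10(sigma_veg + sigma_soil)

# With no vegetation water (V = 0) the model reduces to the soil backscatter
print(water_cloud_model(V=0.0, sigma0_soil_db=-12.0, theta_deg=38.0))
print(water_cloud_model(V=1.0, sigma0_soil_db=-12.0, theta_deg=38.0))
```

The abstract's coupled approach would additionally link V to LFMC and interception through the dielectric mixing model; that coupling is not reproduced here.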
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F2)

Presentation: Evaluating high-resolution, remote sensing-based tree species maps with forest inventory data for biodiversity applications

Authors: Benjamin Dechant, Jens Kattge, Fabian Schneider, Ryan Pavlick, Kyle Kovach, Sarah Graves, Ben Weinstein, Stephanie Bohlmann, Daniel Johnson, Sean McMahon, Kristina Anderson-Texeira, David Orwig, James Lutz, Philip Townsend
Affiliations: German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Leipzig University, Max Planck Institute for Biogeochemistry, Aarhus University, NASA Headquarters, University of Wisconsin, University of Florida, Smithsonian Environmental Research Centre, Smithsonian Conservation Biology Institute, Harvard University, Utah State University
Large-scale information on tree species occurrence and abundance is crucial to improve our understanding of spatial patterns and temporal changes of biodiversity. To obtain reliable maps of tree species at larger scales, remote sensing-based approaches are needed, given the large effort of conducting detailed tree censuses in the field. Recently, great progress has been made with large-scale tree species mapping around NEON sites in the USA using high-resolution airborne imagery. However, remote sensing-based species classification approaches can have limitations regarding the correct segmentation of tree crowns and the misclassification of species, as well as difficulties in detecting rare species. This can result in uncertainties and biases when the species maps are used for biodiversity applications. Here, we make use of tree census data from four ForestGEO plots in the USA to conduct a detailed evaluation of high-resolution, remotely-sensed tree species maps for different biodiversity-related applications. The ForestGEO plots include deciduous, conifer and mixed forests and are co-located with NEON sites. We generated tree species maps from tree census data using allometric scaling and then compared them to the remote sensing-based maps across multiple spatial resolutions. Our comparative evaluation includes the overall spatial patterns of tree species occurrence and abundance, the fractional cover per species, spatial patterns of relevant diversity metrics related to species richness and abundance, and the interpretation of functional diversity maps based on foliar trait maps derived from airborne imaging spectroscopy. Our results indicate that there is considerable potential to use the remote sensing-based tree species maps for biodiversity applications, especially at the coarser spatial resolutions of operational and future hyperspectral satellites (30–60 m). Nevertheless, when using such maps, an awareness of their limitations is necessary to avoid misinterpretation of results based on them. Our findings have important implications for improving remote sensing-based species mapping and biodiversity-related analyses using remotely-sensed tree species maps.
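The resolution-dependent comparison of richness and abundance metrics described above can be sketched as follows. The toy species raster, the species count, and the block size are hypothetical stand-ins for the census- and remote-sensing-derived maps; only the metrics (species richness and Shannon diversity per aggregation block) follow standard definitions.

```python
import numpy as np

def block_diversity(species_map, block):
    """Species richness and Shannon diversity per aggregation block.

    species_map : 2-D integer array of species IDs (toy stand-in for a
                  crown-level species map)
    block       : block edge length in pixels (a coarser "satellite" grid)
    """
    h, w = species_map.shape
    richness = np.zeros((h // block, w // block))
    shannon = np.zeros_like(richness)
    for i in range(h // block):
        for j in range(w // block):
            tile = species_map[i*block:(i+1)*block, j*block:(j+1)*block]
            _, counts = np.unique(tile, return_counts=True)
            p = counts / counts.sum()              # relative abundances
            richness[i, j] = len(counts)           # number of species
            shannon[i, j] = -(p * np.log(p)).sum() # Shannon diversity H'
    return richness, shannon

rng = np.random.default_rng(0)
toy = rng.integers(0, 5, size=(8, 8))  # 8x8 map with 5 hypothetical species
rich, shan = block_diversity(toy, block=4)
print(rich)
print(shan)
```

Computing the same metrics from both the census-derived and the remote-sensing-derived map at several block sizes would reproduce the kind of multi-resolution comparison the abstract describes.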
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F2)

Presentation: Using the Crop Water Stress Index for measuring water stress in a humid beech forest and a potato field

Authors: Martin Schlerf, Stan Schymanski, Christian Bossung, Jean-Francois Iffly, Franz Ronellenfitsch, Richard Keim
Affiliations: Luxembourg Institute Of Science And Technology
With an increase in droughts and competition for water, the water status and needs of farmland and forests will become more relevant to water managers in the future. With the upcoming thermal infrared missions, the Copernicus LSTM (Land Surface Temperature Monitoring) and the NASA SBG (Surface Biology and Geology) missions, water stress indices are expected to find wider application in vegetation water management. The Crop Water Stress Index (CWSI) is a widely used, remotely-sensed measure of water stress. Its main advantage over pure measurements of canopy temperature (Tc) is that it accounts for meteorological variables, mainly air temperature (Ta) and vapor pressure deficit (VPD), whereas Tc alone is sensitive to highly fluctuating environmental factors. To calculate the empirical CWSI, the lower baseline, representing a fully transpiring condition, and the upper baseline, representing a non-transpiring condition, need to be established from Tc, Ta, and VPD measurements. The CWSI has proven to be a robust method for monitoring water stress in crops of arid and semi-arid regions. However, so far little research has been done to test its applicability in humid regions. Despite its success in crops, the CWSI has hardly been used at all to evaluate water stress in natural vegetation. Because of that, the relation between the CWSI and physiology-based stress measures in woody plants has not yet been explored. Here, we present the results of two field experiments, on potato and on beech, each combining ground physiological or soil moisture measurements with low-altitude fixed sensors and drone flights above the canopy to calibrate remotely sensed indications of stress. The main objectives were to i) assess whether the remotely sensed CWSI can trace variations in ground-based water stress measurements, i.e. stem radius change (beech) and soil moisture (potato), and ii) compare the upper and lower baselines for beech and potato in a humid climate.
In the beech experiment (Bambësch site, Luxembourg), stem radius changes, sap flow and soil moisture (SM) were measured at three young beech trees. From a tower within the stand, TIR canopy temperatures were measured over three treetops every 10 minutes between May and October 2023, along with net radiation (R), Ta, humidity (H), and wind speed (WS). Depletion of the tree water store results in reductions in stem radius during the day, which can be measured using a dendrometer. Diurnal variations in stem radius can be separated into a growth and a Tree Water Deficit (TWD) component (Zweifel et al. 2021). In the potato experiment (Hamerstorf site, Germany), different irrigation treatments were carried out: optimum irrigation (OP), reduced irrigation (RD), and no irrigation as a control treatment. Six TIR thermometers were set up at six potato plots to record continuous Tc of three RD and three OP treatments from June to September in 2018 and 2019. Continuous SM measurements were made at RD and OP positions. Ta, R, H, and WS were measured every two minutes and aggregated to thirty-minute intervals. Over both sites, TIR drone flights were carried out under humid and drought conditions using a DJI Zenmuse H20T broadband IR imager attached to a DJI Matrice M300 UAV. CWSI calibration equations were established from the ground measurements and applied to the drone images. Complementary hyperspectral (Headwall Nano Hyperspec and M384 sensors) or multispectral images were acquired. The results of the beech experiment show that the CWSI captures tree water deficit (TWD) patterns very well. Stomata likely respond to TWD as follows: an increase in water stress results in larger TWD values and stomatal closure, which in turn leads to less evaporative cooling, followed by larger values of Tc and CWSI. The results of the potato experiment reveal that the different stress metrics increased in R2 against SM from Tc (R2=0.58) to Tc-Ta (R2=0.76) to CWSI (R2=0.74-0.87). CWSI-SM relations calibrated in one year (2019) and applied to the other (2018), and vice versa, yielded absolute errors of 1-3% SM; such accuracy is considered good enough to support irrigation management. Comparing the outcomes of both experiments reveals comparable values of the upper baselines (Tc-Ta) of about 4-5°C and comparable slopes of the lower baselines (delta Tc-Ta / delta VPD = -2.5 / 20 for beech and -5.0 / 35 for potato). In both experiments, significant relations between the remotely sensed CWSI and ground measurements of vegetation stress were found.
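The empirical CWSI computation described above can be sketched as a short function, assuming the common Idso-type formulation in which the lower baseline is linear in VPD. The baseline coefficients below are illustrative placeholders, not the calibrated beech or potato baselines reported in the abstract.

```python
import numpy as np

def cwsi(tc, ta, vpd, lower_intercept=2.0, lower_slope=-2.5, upper=4.5):
    """Empirical Crop Water Stress Index (Idso-type baselines), sketch.

    tc, ta : canopy and air temperature [degC]
    vpd    : vapor pressure deficit [kPa]
    lower_intercept, lower_slope : lower (fully transpiring) baseline,
        Tc - Ta = intercept + slope * VPD   (placeholder coefficients)
    upper  : upper (non-transpiring) baseline Tc - Ta [degC] (placeholder)

    Returns CWSI clipped to [0, 1]; 0 = unstressed, 1 = fully stressed.
    """
    dt = tc - ta
    dt_lower = lower_intercept + lower_slope * vpd
    return np.clip((dt - dt_lower) / (upper - dt_lower), 0.0, 1.0)

# A canopy cooler than the air at high VPD is transpiring: low CWSI
print(cwsi(tc=24.0, ta=26.0, vpd=2.0))
```

In practice the two baselines would be fitted from the continuous Tc, Ta and VPD records at each site, then the function applied per pixel to the drone TIR imagery.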
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F2)

Presentation: Mapping and Tracking Forest Functional Traits From Spaceborne Hyperspectral Imagery

Authors: Giulia Tagliabue, Dr. Cinzia Panigada, MSc Beatrice Savinelli, Dr. Luigi Vignali, Dr. Micol Rossini
Affiliations: University of Milano-Bicocca
Forest ecosystems, covering about one-third of the Earth's land surface, are essential for providing a wide array of benefits, including maintaining ecological balance, supporting biodiversity, regulating climate, and supplying essential ecosystem services crucial for human livelihoods. However, these vital ecosystems are facing unprecedented challenges, with climate change intensifying issues like deforestation, drought, degradation, and wildfires. Mapping and tracking changes of key forest functional traits such as leaf chlorophyll content (LCC), leaf carotenoid content (Ccx), leaf nitrogen content (LNC), leaf water content (LWC), leaf mass per area (LMA) and leaf area index (LAI) is crucial for understanding their responses to environmental stressors and for managing these vital resources. Optical remote sensing offers a promising avenue for evaluating the condition and functionality of forests at global scale. Yet, accurately estimating forest traits from spaceborne imagery has long been hampered by methodological and technological limitations. This contribution addresses these challenges by investigating and comparing the capabilities of machine learning regression algorithms (MLRA) and hybrid approaches, a combination of MLRA with radiative transfer simulations, for retrieving forest traits from PRISMA spaceborne hyperspectral imagery. Hybrid models offer a potential solution to the drawbacks of data-driven approaches, such as the requirement for vast amounts of training data and susceptibility to bias from sensor characteristics. This study represents a significant step forward as it is, to the best of our knowledge, the first to employ real spaceborne hyperspectral data for retrieving multiple plant traits in forest canopies. The research was carried out in the Ticino Park, a mixed, temperate forest in northern Italy. 
An intensive field campaign was conducted in the summer of 2022, coinciding with four PRISMA overpasses, to collect trait samples at fifty sites for calibrating and validating the retrieval schemes. This unique dataset encompasses a diversity of forest types, structures, and conditions, allowing for a robust assessment of the models. Results highlighted the ability of PRISMA images and hybrid models to accurately quantify the six key forest traits considered. The hybrid models, employing the PROSPECT-PRO-INFORM model for forward simulations, achieved high accuracy for LNC (r2=0.82, nRMSE=9.5%), LWC (r2=0.98, nRMSE=3.5%), LMA (r2=0.93, nRMSE=6.6%) and LAI (r2=0.83, nRMSE=11.7%); LCC (r2=0.67, nRMSE=13.5%) and Ccx (r2=0.45, nRMSE=13.5%) yielded less accurate results. Significantly, the results showed that hybrid approaches, which capitalise on the strengths of both physics-based and statistical models, performed slightly better than purely statistical approaches. The application of these models to PRISMA images acquired before and after a severe drought event in the summer of 2022 further demonstrated the robustness and practicality of the approach. The models accurately captured the drought-induced changes in forest traits, showing a decoupling between LCC and LNC related to the early degradation of the photosynthetic pigments under persistent stress conditions. This showcases their effectiveness for tracking changes in forest health status under dynamic environmental conditions, fostering the use of new-generation models and hyperspectral spaceborne imagery as powerful tools for understanding and managing forest ecosystems amid ongoing environmental changes.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F2)

Presentation: Bridging Traditional Knowledge and Technology to Address Biodiversity Gaps in the Amazon

Authors: Polyanna Da C. Bispo, Celso H. L. Silva Junior, James F. Moura Jr, Fernando G. de Carvalho, Loïc Pellissier, Paulo M. L. A. Graça, Nivia P. Lopes, Pitágoras C. Bispo
Affiliations: Department of Geography, School of Environment, Education and Development, University of Manchester, Manchester, Instituto de Pesquisa Ambiental da Amazônia (IPAM), Brasília, GO, Graduate Program in Biodiversity Conservation, Federal University of Maranhão, São Luís, MA, University for International Integration of the Afro-Brazilian Lusophony, Instituto Tecnológico Vale (ITV), R. Boaventura da Silva, 955 - Nazaré, Belém, Ecosystems Landscape Evolution, Institute of Terrestrial Ecosystems, Department of Environmental Systems Science, ETH Zürich, Zürich, Instituto Nacional de Pesquisas da Amazônia (INPA), Manaus, Amazonas, Department of Animal Science, Federal University of Roraima, Boa Vista, Roraima, Laboratório de Biologia Aquática, Faculdade de Ciências e Letras de Assis, Universidade Estadual Paulista, Assis, São Paulo
Our Amazonian BioTechQuilombo project, recently approved under the Amazon+10 Call for Expeditions, addresses critical biodiversity knowledge gaps in the Amazon. This initiative brings together over 40 scientists, 10 Quilombola leaders, and six funding agencies from Brazil, the UK, and Switzerland. By integrating traditional Quilombola knowledge with cutting-edge technologies such as remote sensing and environmental DNA (eDNA) analysis, the project aims to comprehensively assess biodiversity in Amazonian conservation areas over 36 months. The project focuses on four understudied regions, including Quilombola territories and a traditional non-Quilombola black community, fostering local community involvement in conservation. Key objectives include: (1) diagnosing biodiversity gaps by combining traditional knowledge and scientific methodologies; (2) improving biodiversity measurement through automated systems and advanced technologies; and (3) empowering Quilombola communities to monitor and manage biodiversity sustainably. Outputs will include species inventories, characterisation of forest structure (biomass, height, canopy openness) and landscape, digital biodiversity catalogues, and training programs for Quilombola researchers. Fieldwork will involve expeditions to Pará, Amazonas, and Roraima, integrating traditional ecological knowledge with advanced tools such as satellite imagery (Sentinel-1 and Sentinel-2), LiDAR, drone surveys, and eDNA analysis. Permanent forest plots will assess floristic composition and structural diversity, while drone and camera trap surveys will monitor terrestrial and canopy species. Molecular analyses will establish a reference library for aquatic and terrestrial species, supported by audio sensors and AI tools to process collected data. This collaborative approach fosters long-term biomonitoring and conservation practices.
Inclusive collaboration is at the heart of the project, with circular meetings and knowledge exchange sessions with Quilombola leaders and researchers. Community-driven outputs, including field guides, educational workshops, and biodiversity dashboards, will ensure meaningful participation and legacy-building. Training programs and international exchange opportunities will strengthen local leadership in biodiversity conservation and enhance cross-disciplinary dialogue. Beyond scientific contributions, the project promotes socio-ecological resilience, sustainable resource use, and the recognition of Quilombola cultural heritage. It provides a replicable model for integrating traditional knowledge with technology in conservation, supporting the global biodiversity agenda. By the project's conclusion, it aims to leave a legacy of empowered communities, a robust biodiversity database, and actionable frameworks for Amazon ecosystem monitoring.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 0.49/0.50)

Session: A.10.04 4D Earth: The future of modelling Earth's interior with EO data and its applications.

Within the broader scope of scientific exploitation of satellite assets, the European Space Agency (ESA) is funding a critical mass of R&D activities aimed at enhancing the observation and understanding of the Oceans from space.
The ESA Ocean Science Cluster consists of a portfolio of several research opportunities and networking actions promoting synergistic research and fostering European and international scientific collaboration.
About 40 projects currently belong to the Ocean Science Cluster, grouped into six main sub-cluster topics: Ocean Health, Ocean Extremes, Coastal Ocean including Land-Sea interactions, Ocean Carbon, Upper-ocean Dynamics including Air-Sea interactions, and the Ocean’s role in the Earth and Climate System.
This Agora will showcase mini-talks highlighting outcomes of a sub-selection of the Cluster projects, with special emphasis on those projects at the intersection of the sub-cluster domains.
Specific attention will also be devoted to how the Ocean Cluster grand challenges map onto, and adhere to, the ocean science questions of the recently published ESA EO Science Strategy document.
Lastly, through interactive brainstorming with the audience, plans and ambitions of the ESA Ocean Agenda 2026+ will be shared and discussed.

Presentations and speakers:


4D Dynamic Earth - Towards a Digital Twin of the Solid Earth


  • Bart Root

4DEarth+Swarm core project: milestones and forthcoming challenges


  • Julian Aubert

Rapid mass redistributions in the mantle: interactions with tectonic plates and core-mantle boundary


  • Isabelle Panet

Imaging Electrical Conductivity of the Earth’s crust-mantle across scales: challenges and future opportunities


  • Alexander Grayver

Panel Discussion


Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall M1/M2)

Session: C.03.18 The critical role of CCM for a resilient society and environment

The rapidly growing Earth Observation (EO) landscape is reshaping the data ecosystem, with innovative platforms, sensors, and commercial services bringing new opportunities to meet the diverse and evolving needs of users. This session will explore how Copernicus Contributing Missions (CCMs) play a vital role in the European EO landscape — from very high-resolution optical and SAR imagery to emerging domains such as hyperspectral, thermal and atmospheric data — addressing critical societal, environmental, and business challenges.
Building on the strategic priorities outlined in key initiatives such as the evolution of the Copernicus Programme, the discussion will focus on how CCMs complement and enhance Sentinel observations (both current and future missions). By filling critical spatial, temporal, and thematic observation gaps, CCMs enable a more comprehensive and detailed view of our planet. CCM data are instrumental to address diverse challenges in disaster management, climate resilience, agriculture, infrastructure, and security, and to support key EU policy objectives, such as the Common Agricultural Policy, European Green Deal, the Paris Agreement, and the EU Climate Adaptation Strategy.
The session will provide a dynamic and interactive dialogue, engaging representatives from the EO commercial sector, institutional stakeholders, and the end-users to share innovative approaches fostering a roadmap for the future of EO services.

Speakers:


  • The co-chairs - DG-DEFIS/ESA
  • Quentin Gillet - ICEYE
  • Daniel Sprengler - Constellr
  • Pierre Alain Bosc - Airbus
  • Markel Aramberri - Satlantis
  • Malathy Eskola - Kuva Space
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F1)

Session: F.04.07 Earth Observation for Tracking Global Sustainability and Biodiversity Targets - PART 1

The 2030 Agenda for Sustainable Development and the Kunming-Montreal Global Biodiversity Framework (GBF) set ambitious targets for biodiversity conservation, ecosystem restoration, and sustainable development. Achieving these goals requires timely, reliable, and spatially explicit data to track progress and inform decision-making. Earth Observation (EO) plays a critical role in monitoring environmental indicators, supporting national reporting, and guiding effective conservation and restoration efforts.
EO provides unparalleled capabilities to support operational monitoring of SDG indicators, helping countries integrate geospatial data into National Statistical Systems to track development policies and sustainability progress. The integration of EO with socio-economic data is essential for delivering high-quality, timely, and actionable information to measure progress on SDG targets. However, challenges remain in EO data accessibility, standardization, and operational integration to ensure that national and global reporting frameworks effectively benefit from EO-based solutions.
In the context of biodiversity, EO is key to supporting national monitoring and reporting on the GBF indicators. EO is also an essential tool for ecosystem conservation and restoration, supporting the identification, mapping, and management of priority areas for protection and rehabilitation. The ability to assess biodiversity at multiple scales, from protected areas to entire landscapes, provides an opportunity to bridge the gap between scientific advancements and national reporting needs. There is a growing demand for accessible and standardized EO-based indicators to support conservation efforts and assess the effectiveness of protected area management. Monitoring ecosystem conditions, connectivity, and resilience is crucial to tracking progress toward restoration targets (e.g., GBF Target 2 and EU Nature Restoration Law). The integration of EO with in-situ data (e.g., bioacoustics, eDNA, LTER) further enhances conservation planning and adaptive management strategies.
This session will explore the latest EO-based approaches for tracking SDG indicators, biodiversity targets, and ecosystem conservation and restoration efforts.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F1)

Presentation: EU-Mon: Scalable national solutions for coastal eutrophication monitoring (SDG 14.1.1a), serving Statistical Offices and beyond

Authors: Dr. Paola Di Lauro, Dr. Giulio Ceriola, Dr. Silvana Cotrufo, Daniela Drimaco, Dr. Alessandro Marin, Dr. Roberto Di Rienzo
Affiliations: Planetek Italia s.r.l., CGI Italy
Eutrophication, the excessive enrichment of water with nutrients such as nitrogen and phosphorus, is a critical challenge affecting water quality and aquatic ecosystems worldwide. Manifestations such as hypoxia and harmful algal blooms underscore the need for effective monitoring and mitigation strategies, particularly in coastal areas where the pressures of human activities are most acute. The EU-Mon (Eutrophication Monitoring) project developed three sub-indicators aligned with the Sustainable Development Goal (SDG) 14.1.1a index of coastal eutrophication and implemented them on CGI Insula, a user-friendly Earth Observation (EO) platform as a service designed to handle the processing of large-scale EO data efficiently. The project aimed to deliver a scalable and adaptable solution for national-level coastal eutrophication monitoring and reporting for SDG 14.1.1a, leveraging the power of cloud-based processing and analytics. Designed to integrate seamlessly into existing national systems, the solution addresses the needs of diverse stakeholders, including statistical offices, line ministries, and environmental agencies. Its architecture prioritizes three key aspects: nationwide scalability, a robust cloud infrastructure, and ease of use through an intuitive interface. Collaboration with two Early Adopters, Tanzania and Albania, underscores the project's adaptability to diverse national contexts and its commitment to supporting countries in reporting and achieving their SDG targets. In Tanzania, the National Bureau of Statistics (NBS) has played a pivotal role by defining user requirements, validating outputs, and ensuring the alignment of the developed indicators with national monitoring frameworks.
Similarly, in Albania, relevant line ministries and statistical offices involved through the Resource Environmental Center Albania (REC Albania) have actively participated in refining workflows and providing feedback to enhance the usability and accuracy of the indicators. EU-Mon’s focus on scalability ensures its applicability across a wide range of contexts, from localized assessments to nationwide implementations. By harnessing EO data from the Sentinel-3 satellites and numerical models from the Copernicus Marine Environment Monitoring Service (CMEMS), the platform generates three sub-indicators: Chlorophyll-a deviations, Chlorophyll-a anomalies, and the Indicator for Coastal Eutrophication Potential (ICEP). These indicators, developed in accordance with UNEP guidelines, provide actionable insights into the state and drivers of coastal eutrophication, enabling decision-makers to target interventions effectively. The development and deployment of these indicators followed a phased and iterative approach. Initially, the solutions were implemented at pilot sites to validate the methodology and demonstrate feasibility in localized contexts. This was followed by scaling to larger test sites, where further refinements were made based on the specific needs and challenges encountered. Through a user-centered approach, feedback from stakeholders – collected through the two Early Adopters – was systematically incorporated, enabling continuous improvement of the workflows, ensuring their alignment with user requirements, and laying the groundwork for broader national and regional adoption. This iterative process culminated in the ability to extend the solution to national territories, demonstrating its robustness and adaptability. The adoption of a scalable cloud-based architecture and automated workflows ensured that processing times remained relatively short, even when handling the large datasets required for nationwide applications.
As a result, EU-Mon successfully transitioned from localized proof-of-concept implementations to a fully operational prototype service that supports rapid and effective reporting on SDG 14.1.1a at both national and sub-national levels. This milestone was achieved through the development and national-level scaling of three sub-indicators—Chlorophyll-a deviations, Chlorophyll-a anomalies, and the Indicator for Coastal Eutrophication Potential (ICEP)—designed specifically to meet the reporting needs of statistical offices and line ministries. These indicators provide actionable insights aligned with UNEP guidelines, enabling countries to monitor and report on coastal eutrophication with the precision required for effective SDG tracking. Moreover, the approach was validated and evaluated by both the mentioned Early Adopters (EAs), who played an active role in defining requirements, validating outputs, and providing feedback. This iterative process ensured that the indicators and workflows were not only scientifically sound but also tailored to the operational realities of national reporting systems. Furthermore, the demonstrated scalability and adaptability of the solution confirm its applicability beyond the pilot countries, offering a globally relevant and scalable model for SDG 14.1.1a monitoring. This success underscores the project’s potential to drive effective reporting and decision-making on coastal eutrophication at a global scale.
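The Chlorophyll-a deviation and anomaly sub-indicators described above amount to comparing monthly observations against a baseline climatology. A minimal numpy sketch, assuming a simple percent-deviation definition and an illustrative +50 % flagging threshold (the project's exact UNEP-aligned formulas and thresholds are not given here):

```python
import numpy as np

def chl_deviation(chl_month, chl_climatology):
    """Percent deviation of monthly chlorophyll-a from a baseline climatology.

    chl_month       : observed monthly-mean chl-a per pixel [mg m-3]
    chl_climatology : long-term monthly mean for the same month [mg m-3]
    Positive values mean chl-a above the baseline (a eutrophication signal).
    """
    return 100.0 * (chl_month - chl_climatology) / chl_climatology

# Three toy coastal pixels (values are made up for illustration)
obs = np.array([1.2, 3.0, 0.8])
base = np.array([1.0, 1.5, 1.0])
dev = chl_deviation(obs, base)

# Flag pixels exceeding an illustrative +50 % deviation threshold
flagged = dev > 50.0
print(dev)
print(flagged)
```

In the operational setting the same per-pixel comparison would run over Sentinel-3-derived chl-a composites for the whole national coastal zone, which is what makes the cloud-based scaling described above necessary.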
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F1)

Presentation: Citizen science and Earth Observation Data for “Rescuing” the SDGs

Authors: Dilek Fraisl, Dr Linda See, Dr Steffen Fritz, Dr Ian McCallum
Affiliations: International Institute for Applied Systems Analysis (IIASA), Citizen Science Global Partnership (CSGP)
The UN Sustainable Development Goals (SDGs), adopted by the UN General Assembly in 2015, represent a global call to action to tackle the world’s most pressing challenges, from poverty to environmental degradation. Achieving these goals requires a data-driven approach, grounded in accurate, timely, and comprehensive data to guide policy and decision-making. Despite improvements in data availability over the past decade, with less than five years remaining to achieve the SDGs, substantial data gaps remain, limiting the ability to effectively monitor progress and guide policies and actions. Traditional data sources, such as censuses and household surveys, are insufficient to address these data gaps. New data sources, including Earth Observation (EO) data and citizen science, defined as public participation in scientific research and knowledge production, offer innovative and complementary solutions. Scientific literature has demonstrated the potential of these alternative data sources to fill critical gaps. For example, Fraisl et al. (2020) conducted a systematic review of SDG indicators and citizen science initiatives, showing that citizen science data are already contributing or could potentially contribute to monitoring 33% of SDG indicators. Their analysis also revealed a significant overlap with EO data contributions. According to GEO (2017), EO data are relevant to 29 SDG indicators, and Fraisl et al. found that citizen science could support 24 of these, demonstrating the complementarity between the two. Since publishing the aforementioned study in 2020, Fraisl et al. have been working with National Statistical Offices (NSOs) and UN agencies to demonstrate how this potential can be realized. 
A notable example is their collaboration with the Ghana Statistical Service, the Environmental Protection Agency in Ghana, and UNEP (as the custodian agency), which resulted in existing citizen science data on marine plastic litter being integrated into Ghana’s official statistics, as well as into the monitoring and reporting of SDG indicator 14.1.1b, Plastic Debris Density, under the leadership of the Ghana Statistical Service (GSS). This initiative bridged local data collection efforts with national and global monitoring processes and policy agendas through the SDG framework. The results were included in Ghana’s 2022 Voluntary National Review of the SDGs, reported on the UN SDG Global Database, and are informing national policies in Ghana. This makes Ghana the first country to use citizen science data for monitoring and reporting an SDG indicator. Their findings, published in Fraisl et al. (2023), provide valuable lessons for the EO community, not only from a technical perspective but also in building effective partnerships with NSOs, UN agencies, civil society organizations, academia, and other stakeholders to leverage EO data for SDG monitoring and reporting and sustainable development. This example highlights one of the ways citizen science is being utilized to support the SDGs. The oral presentation will showcase additional examples that emphasize the synergy between citizen science and EO in bridging SDG data gaps, informing or reshaping policies, and mobilizing action. It will also feature initiatives such as the Citizen Science Global Partnership (CSGP), hosted by the International Institute for Applied Systems Analysis (IIASA), which seeks to advance citizen science for a sustainable world and foster collaboration with the EO community to leverage the combined potential of citizen science and EO data for achieving sustainability. 
References:
Fraisl D, Campbell J, See L et al (2020) Mapping citizen science contributions to the UN sustainable development goals. Sustain Sci 15:1735–1751. https://doi.org/10.1007/s11625-020-00833-7
GEO (2017) Earth Observations 2030 Agenda for Sustainable Development, V1.1. Japan Aerospace Exploration Agency (JAXA) on behalf of GEO under the EO4SDG Initiative. Available at: https://www.earthobservations.org/documents/publications/201703_geo_eo_for_2030_agenda.pdf
Fraisl D, See L, Bowers R et al (2023) The contributions of citizen science to SDG monitoring and reporting on marine plastics. Sustain Sci 18:2629–2647. https://doi.org/10.1007/s11625-023-01402-4
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F1)

Presentation: Mapping Old-growth Forest by Means of Sentinel-2 and LiDAR Data in Various Regions of Europe

Authors: Manuela Hirschmugl, Carina Sobe, Peter Meyer, Kris Vandekerkhove
Affiliations: Joanneum Research, Nordwestdeutsche Forstliche Versuchsanstalt, Research Institute for Nature and Forest, Dept. of Geography and Regional Science, University of Graz
1. Introduction
1.1 Policy Context
Old-growth forests (OGF) represent one of the most vital ecosystems in Europe, playing a critical role in biodiversity conservation and climate regulation. These forests, which are in the final stages of natural forest dynamics - from late maturity to eventual collapse - offer a range of essential ecosystem services. They support biodiversity by providing habitats for a wide array of species (Paillet et al., 2010), help to regulate climate and sequester carbon (Luyssaert et al., 2008), and protect soil and water resources (Brockerhoff et al., 2017). Given their ecological importance and their key role in other ecosystem services, including the regulation of local climate regimes and the maintenance of human health, the mapping and preservation of Europe’s remaining OGFs are critical (Barredo et al., 2021). This is addressed in the EU’s biodiversity strategy for 2030 (European Commission, 2023), which calls for the strict protection of all remaining EU primary and old-growth forests.
1.2 Old-Growth Forest Identification and Mapping
OGFs are not defined or identified by a single attribute (da Silva et al., 2019); rather, a combination of several factors serves as indicators. To identify and map OGFs, however, a common definition had to be adopted. After scientific considerations and many iterations, the following definition, based on the Convention on Biological Diversity, was adopted by the European Commission (European Commission, 2023): ‘A forest stand or area consisting of native tree species that have developed, predominantly through natural processes, structures and dynamics normally associated with late-seral developmental phases in primary or undisturbed forests of the same type. 
Signs of former human activities may be visible, but they are gradually disappearing or too limited to significantly disturb natural processes.’ The current state-of-the-art in OGF mapping in Europe using remote sensing has been reviewed (Hirschmugl et al., 2023b), pointing out three different approaches: parameter-based, direct and indirect approaches. Direct approaches show a high potential for large-area mapping, but so far lack operational applications and related sound accuracy assessment (Hirschmugl et al., 2023b). Therefore, the aims of this study are: (i) to develop an RS-based direct mapping methodology for OGF in different bio-geographical regions in Europe and (ii) to apply a scientifically sound accuracy assessment. This accuracy assessment is based on homogeneous field measurements generated through the LIFE+ PROGNOSES project.
2. Material and Methods
The test sites are located in four countries in different bio-geographical regions of Europe: (1) the Sonian Forest (SOFO) in Belgium, (2) the National Park Kalkalpen (NPKA) in Austria, (3) the Abruzzo National Park (ABNP) in Italy and (4) the Central Balkan National Park (CBNP) in Bulgaria. The forests are all part of the UNESCO World Heritage Beech Forests. We used Sentinel-2 (S-2) data for all sites and combined them with airborne LiDAR available for two of these sites (SOFO, NPKA). We included S-2 spectral information and various indices (BrightnessRGB, GNDVI, NDVI, NDRE1, TC Greenness) derived from the time series of the vegetation season of the observation year, S-2-based textural features (GLCM features) and 3-dimensional LiDAR data (vegetation height: max, mean, standard deviation; tree density; canopy cover; curvature range; foliage height diversity). In addition, we added terrain parameters (slope, elevation). The parameters were selected based on previous studies in Germany (Adiningrat et al., 2024). 
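The feature-based random forest regression described above can be sketched as follows. All feature names, the synthetic age-height relationship, and the parameters are illustrative assumptions, not the study's actual data or model configuration:

```python
# Illustrative sketch of a per-site age-regression setup using random
# forests on S-2/LiDAR-style features. Feature names and the synthetic
# age relationship are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(0.2, 0.9, n),   # e.g. NDVI (seasonal mean)
    rng.uniform(0.0, 1.0, n),   # e.g. a GLCM texture feature
    rng.uniform(5.0, 45.0, n),  # e.g. LiDAR max vegetation height [m]
    rng.uniform(0.0, 35.0, n),  # e.g. terrain slope [deg]
])
# Synthetic stand age loosely tied to canopy height, for illustration only
y = 4.0 * X[:, 2] + rng.normal(0, 15, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, rf.predict(X_te))
print(f"internal MAE: {mae:.1f} years")
```

The internal evaluation on held-out samples corresponds to the first accuracy-assessment level mentioned in the abstract; the independent validation against field plots is a separate step.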
Based on these parameters, separate random forest (RF) regression models were trained and applied to all four test sites. Accuracy assessment was done on two levels: first as an RF-internal evaluation, which shows the general ability of the model to fit the data. The independent validation will be based on field-measured plots that were not used to train the RF regression model.
3. Results
Age maps have so far been produced for SOFO, NPKA (see Fig. 2) and CBNP. The results of the RF-internal accuracy assessment show mean errors of 49.1 (S-2 only) and 42.2 (S-2 and ALS) years for SOFO, 43.0 (S-2) and 33.6 (S-2 and ALS) years for NPKA, and 33.1 (S-2) years for CBNP. As in previous studies, the models tend to overestimate the age of young forests and underestimate the age of old forests. The next step is the comparison with the plot data, which is challenging, as the plots are small and not always accurately positioned. The field plots include not only age but also other parameters, such as the presence or absence of habitat trees, standing and lying deadwood, gaps, etc., for the assessment of old-growthness. These parameters are also part of the spectral signal; we therefore evaluate whether we can assess not only age but also old-growthness. The idea is to map larger areas with this approach to pinpoint potential OGFs at relatively low cost. For a detailed assessment, field inspection will still be necessary. This will help to implement the EU Commission’s pledge to protect all remaining OGFs in Europe. At the same time, this approach can also help identify suitable or most promising areas for nature restoration in the frame of the Nature Restoration Law, which can, in turn, serve as stepping stones between protected areas.
References
Adiningrat, D.P., Schlund, M., Skidmore, A.K., Abdullah, H., Wang, T., Heurich, M., 2024. Mapping temperate old-growth forests in Central Europe using ALS and Sentinel-2A multispectral data. Environ. 
Monit. Assess. 196, 841. https://doi.org/10.1007/s10661-024-12993-5
Barredo, J.I., Brailescu, C., Teller, A., Sabatini, F.M., Mauri, A., Janouskova, K., European Commission, Joint Research Centre, 2021. Mapping and assessment of primary and old-growth forests in Europe.
Brockerhoff, E.G., Barbaro, L., Castagneyrol, B., Forrester, D.I., Gardiner, B., González-Olabarria, J.R., Lyver, P.O., Meurisse, N., Oxbrough, A., Taki, H., Thompson, I.D., Van Der Plas, F., Jactel, H., 2017. Forest biodiversity, ecosystem functioning and the provision of ecosystem services. Biodivers. Conserv. 26, 3005–3035. https://doi.org/10.1007/s10531-017-1453-2
da Silva, L.P., Heleno, R.H., Costa, J.M., Valente, M., Mata, V.A., Gonçalves, S.C., da Silva, A.A., Alves, J., Ramos, J.A., 2019. Natural woodlands hold more diverse, abundant, and unique biota than novel anthropogenic forests: a multi-group assessment. Eur. J. For. Res. 138, 461–472. https://doi.org/10.1007/s10342-019-01183-5
European Commission, 2023. Commission Guidelines for Defining, Mapping, Monitoring and Strictly Protecting EU Primary and Old-Growth Forests (Commission Staff Working Document No. 62). Brussels.
Hirschmugl, M., Sobe, C., Di Filippo, A., Berger, V., Kirchmeir, H., Vandekerkhove, K., 2023. Review on the Possibilities of Mapping Old-Growth Temperate Forests by Remote Sensing in Europe. Environ. Model. Assess. 28, 761–785. https://doi.org/10.1007/s10666-023-09897-y
Luyssaert, S., Schulze, E.-D., Börner, A., Knohl, A., Hessenmöller, D., Law, B.E., Ciais, P., Grace, J., 2008. Old-growth forests as global carbon sinks. Nature 455, 213–215. https://doi.org/10.1038/nature07276
Paillet, Y., et al., 2010. Biodiversity Differences between Managed and Unmanaged Forests: Meta‐Analysis of Species Richness in Europe. Conserv. Biol. 24, 101–112. https://doi.org/10.1111/j.1523-1739.2009.01399.x
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F1)

Presentation: Integration of satellite observations to SDG eutrophication indicators in Finnish coastal and lake waterbodies

Authors: Eero Alkio, Dr. Laura Hoikkala, Dr. Jenni Attila, Dr. Vivi Fleming, MS Sari Mitikka, MS Hanna Alasalmi, MS Vesa Keto, MS Pekka Hurskainen
Affiliations: Finnish Environment Institute (Syke)
The UN global manual on measuring SDG indicator 14.1.1a, Index of coastal eutrophication, presents three levels of indicators for the goal. Level 1 indicators are produced by UNEP based on global or globally modelled data, level 2 indicators are based on national and regional monitoring, and level 3 indicators are supplementary. Level 2 consists of indicators of chlorophyll-a concentration, national modelling of coastal eutrophication potential, and in situ concentrations of nitrogen, phosphorus, and silica. Both water sampling data and satellite data are allowable data sources for the level 2 indicators of SDG 14.1.1a (UNEP 2021). The use of satellite data can greatly improve the confidence of the chlorophyll assessment due to its high spatial and temporal coverage. We therefore investigate the possibility of including satellite data in the SDG 14.1.1a level 2 chlorophyll-a indicator for Finland’s coastal waters. To avoid duplicate work and potentially contrasting results, we use Syke’s existing processed water quality datasets and calculate the coastal-zone results hierarchically from waterbody-specific results: first to water-type level, and then to the Finnish coastal zone by taking the area-weighted average of the water-type-specific results and the open-sea assessment-unit-specific results from HELCOM’s chlorophyll-a assessment. Our plan is to merge the aforementioned data into a single combined indicator value for the Finnish coastal zone. Besides the UN Sustainable Development Goals (SDGs), Syke is involved in other reporting obligations and commitments that Finland has regarding the eutrophication status of coastal waters: Descriptor 5 of the Marine Strategy Framework Directive (MSFD, 2008/56/EC) requires that human-induced eutrophication is minimized, and one of the goals of HELCOM’s Baltic Sea Action Plan (HELCOM BSAP) is a Baltic Sea unaffected by eutrophication. 
In addition, the ecological status assessment of the Water Framework Directive (WFD, 2000/60/EC) is closely related to eutrophication assessment. These established assessments need to be considered when planning the SDG indicators, to avoid duplicate work and contradictions between the different assessments. During the project, the match between the two data types is analyzed both visually, using the analysis tool of the open web map application Tarkka (tarkka.syke.fi) by the Finnish Environment Institute (Syke), and with statistical analyses. Tarkka’s analysis tool visualizes satellite observations and station sampling data using various functionalities, such as bars, time series, histograms, and pie charts. In addition, the statistical correspondence between the two datasets will be explored during the project. Over the coastal waterbodies, the match between the two datasets, based on visual inspection of the station-specific time series, was generally good. However, some of the waterbody-specific classification results differ between the analyzed data types, potentially due to differences in data availability and the temporal distribution of the sampling data. Furthermore, the satellite data algorithms for interpreting chlorophyll-a concentration are undergoing a re-development cycle estimated to be complete at the end of 2024. Hence, a more refined and accurate chlorophyll-a satellite dataset will become available for visual and statistical analysis. Data analysis with this refined dataset is likely to provide further insights into the implementation of EO data in the level 2 SDG 14.1.1a indicator. In our presentation we will give an overview of the EO data provided for the SDG indicator development, a validation analysis between EO and in situ datasets, and how EO data could complement and improve the previously reported SDG indicator results.
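The hierarchical, area-weighted aggregation described above (waterbody results rolled up to water types, then to a single coastal-zone value) can be sketched as follows; the water-type names, areas, and indicator values are invented for illustration and are not Syke's actual data:

```python
# Hedged sketch of hierarchical, area-weighted aggregation:
# waterbody -> water type -> coastal zone. Values are illustrative.
from collections import defaultdict

# (water_type, waterbody_area_km2, chlorophyll-a indicator value)
waterbodies = [
    ("inner_archipelago", 120.0, 4.1),
    ("inner_archipelago",  80.0, 3.5),
    ("outer_archipelago", 300.0, 2.2),
    ("outer_archipelago", 150.0, 2.8),
]

def weighted_mean(pairs):
    """Area-weighted mean of (area, value) pairs."""
    total = sum(a for a, _ in pairs)
    return sum(a * v for a, v in pairs) / total

# Step 1: waterbody results -> water-type results (keep total area as weight)
by_type = defaultdict(list)
for wtype, area, value in waterbodies:
    by_type[wtype].append((area, value))
type_results = {t: (sum(a for a, _ in p), weighted_mean(p))
                for t, p in by_type.items()}

# Step 2: water-type results -> single coastal-zone indicator value
coastal_value = weighted_mean(list(type_results.values()))
print(round(coastal_value, 3))
```

In practice the second step would also include HELCOM's open-sea assessment-unit results as additional (area, value) pairs.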
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F1)

Presentation: The IDEAtlas Data Portal: Bridging Data Gaps and Advancing Earth Observation for Inclusive Cities

Authors: Bedru Tareke, Paulo Silva Filho, Monika Kuffer, Prof. Dr. Claudio Persello, Dr. Raian V. Maretto, Dr. Jon Wang, Dr. Angela Abascal, Dr. Priam Pillai, Dr. Binti Singh, Juan Manuel D'Attoli, Dr. Caroline Kabaria, Dr. Juilo Pedrassoli, Dr. Patricia Brito, Dr. Peter Elias, Dr. Elio Atenógenes, Andrea Ramírez Santiago, Ruth Leska, Jan Streitenberger, Jati Pratomo, Dr. Wahyu Mulyana, Dennis Mwaniki, Dr. Dana R. Thomson, Dr. Ronald Jansen
Affiliations: University of Twente, ITC, Aeronautics Institute of Technology, University of Navarra, Pillai College of Engineering, KRVIA, CISUR - Center for Social and Urban Integration, African Population and Health Research Center (APHRC), Universidade Federal da Bahia, University of Lagos, National Institute of Statistics and Geography (INEGI), GeoVille, Urban and Regional Development Institute (URDI), UN-Habitat, CIESIN, Columbia University, United Nations Statistics Division
Low-and-Middle-Income Countries (LMICs) are rapidly urbanizing; a large part of this growth occurs in deprived urban areas (e.g., slums and informal settlements). Such areas typically lack access to services, infrastructure, and durable housing, among other things. SDG Target 11.1 prioritizes access to adequate, safe and affordable housing and basic services, and the upgrading of slums. Indicator 11.1.1 is Tier 1, meaning it has an established methodology and data are regularly produced by at least 50% of all countries (where relevant). However, data are often dated, patchy, and absent at the city scale because estimates are often derived from national household sample surveys. Many studies have presented Earth Observation models to map informal areas, ranging from classical machine learning to advanced deep-learning approaches. However, most models are designed for specific cities and are often tested in small, localized areas or in cities that are very similar to each other. Furthermore, model outputs are treated as a final result and are not improved using local knowledge. Thus, most models are built without user engagement and rely on reference data that lacks local knowledge, often curated by researchers through on-screen digitizing. As a result, they fail to capture the diversity of settlements within metropolitan regions (e.g., peri-urban areas) and across different cities. IDEAtlas, an ESA-funded project, develops state-of-the-art methodologies for mapping deprived urban areas for a global sample of cities, emphasizing user engagement throughout the process. 
The developed deep-learning model is a Multi-Branch Convolutional Neural Network (MB-CNN), which is able to (i) effectively learn spatial-contextual features from the input Earth Observation (EO) images, (ii) effectively fuse multiple EO data sources (SAR and optical) and geospatial features, (iii) extract a consistent informal settlement extent (binary maps) and a slum severity index (a continuous index varying between 0, least deprived, and 1, most deprived), where the extent of informal settlements can be derived by simply thresholding the continuous deprivation level, and (iv) produce multi-temporal maps of settlement extent for the years 2019-2023. City-level metrics are derived by combining the obtained settlement extent maps with population density data. To facilitate efficient interaction with the local partners, we have designed a user-centric Portal (https://portal.ideatlas.eu) where model outputs can be visualized and feedback on both the reference data and the outputs can be provided. The Portal is co-developed based on user feedback. It includes an open section for accessing gridded model outputs and a protected section designed for city user engagement to safeguard potentially sensitive data (e.g., in cities with high eviction pressure). The collected data are crucial for iteratively improving the models, which utilize free Copernicus satellite data (Sentinel-1 and -2). As part of our user-centred design process, we have performed continuous cycles of interaction with our local partners, e.g., in the form of Living Labs (co-anchors, early adopters, and other local stakeholders), to understand information requirements and improve both the reference data and the produced maps. The active involvement and firsthand knowledge of the local partners ensure the most accurate representation possible of informal settlements and other areas within each city. 
Rerunning the models with improved reference data allowed a substantial improvement of the model results in detecting informal settlements across a global sample of cities. The portal also allows the provision of information for local decision-makers. For example, in Buenos Aires, we could support the identification of newly developing informal settlements that were not included in the National Registry of Informal Settlements (RENABAP - Registro Nacional de Barrios Populares) of the Argentinean Government. The newly detected settlements allowed the RENABAP field team to conduct ground surveys to include settlements that had not yet been included in their database. This demonstrates that the validation process has been beneficial for both sides, not only helping to improve our reference datasets but also helping our local partners to update and improve their own data and knowledge of the areas being mapped. The current version of the IDEAtlas Portal allows users to interact with the data provided through a graphical user interface, as well as to add, edit and delete their own data and thus make it accessible to other users. The portal is hosted on a cloud infrastructure, enabling multiple users to work simultaneously without performance issues. In addition, the user interface is built on an architecture that is fully virtualized and based on an open tech stack. This is a first step towards an EO-based capability that enables the mapping of informal settlements to support SDG indicator 11.1.1 in a scalable and affordable way.
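Deriving the binary settlement extent from the continuous slum severity index, as described above, reduces to a simple thresholding step. A minimal sketch with invented index values and an illustrative threshold (not the IDEAtlas model output or its operational threshold):

```python
# Sketch: binary informal-settlement extent from a continuous
# deprivation index in [0, 1] via thresholding. Values are invented;
# in practice the threshold would be tuned with local-partner feedback.
import numpy as np

# Hypothetical 4x4 grid of model outputs (slum severity index)
severity = np.array([
    [0.05, 0.10, 0.62, 0.80],
    [0.12, 0.55, 0.71, 0.90],
    [0.03, 0.08, 0.49, 0.66],
    [0.01, 0.02, 0.11, 0.58],
])

THRESHOLD = 0.5  # illustrative cut-off
extent = severity >= THRESHOLD  # boolean settlement-extent map

print(int(extent.sum()), "of", extent.size, "cells flagged")
```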
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall F1)

Presentation: Space to Grow: EO-Based Urban Sustainability Assessment from Pixels to Policy

Authors: Tomas Soukup, Pavel Vlach, Vojtech Dubrovsky, Thomas Esch, Mattia Marconcini
Affiliations: GISAT, DLR
Sustainable urbanization presents both a global challenge and an opportunity. Rapid urban growth requires accurate, harmonized, and actionable data to inform planning and policy decisions. Modern Earth Observation (EO) technologies offer unparalleled capacity to provide spatially and temporally rich insights into urbanization trends and dynamics. However, barriers such as the complexity of handling vast datasets and generating meaningful indicators often hinder their operational use in urban monitoring and SDG reporting. The presentation outlines a recent initiative, supported by ESA's Earthnet programme, exemplifying how cutting-edge EO datasets and open tools can address these challenges. Building on technologies developed under multiple ESA projects such as the Urban Thematic Exploitation Platform (Urban TEP) and EuroDataCube (EDC), the collaboration integrates the current top global urban datasets, including the World Settlement Footprint (WSF) and Global Human Settlement Layer (GHSL) product suites. The project streamlines the production of urban information, focusing on SDG indicator 11.3.1, which evaluates land use efficiency through the ratio of the land consumption rate to the population growth rate. This metric is essential for assessing whether urban expansion aligns with sustainable development objectives. The Urban Expansion Insight platform is built using open-source web visualization frameworks, demonstrating how recent advancements in web technologies fill the "last-mile delivery gap" in EO-based services. These frameworks can transform complex datasets into interactive and user-friendly tools, allowing diverse stakeholders to explore, analyze, and report urbanization trends. The application supports both administrative boundaries and functional urban areas, providing flexibility to address varying reporting and analytical needs. 
Features such as customizable visualizations, interactive analysis, and multi-temporal exploration exemplify the power of modern web technologies in democratizing access to complex EO data.
Collaboration with UN-Habitat
Collaboration with UN-Habitat has been central to aligning the tool with global urban monitoring standards and practical needs. UN-Habitat, as the custodian of the SDG 11 indicators, recognizes the tool's potential to support national and local authorities in adopting EO technologies for urban monitoring and reporting. The platform empowers users in developing countries to calculate key metrics like land consumption and population growth rates, aligning their reporting with international standards. Beyond operational use for national reporting, the tool offers significant potential for shaping future UN-Habitat guidance: it allows users to integrate and compare multiple EO-based datasets, such as WSF and GHSL, and to evaluate the strengths and limitations of different data sources globally. This capability is vital for understanding how dataset characteristics, such as resolution and scale, influence the urbanization narratives they tell. Such insights inform refinements to global indicator methodologies and ensure their applicability across diverse contexts.
Application and Future Directions
The tool exemplifies how EO technologies can bridge the gap between data generation and actionable insights. By leveraging advanced datasets and modern visualization technologies, it transforms complex EO outputs into accessible, easy-to-use resources. These resources empower stakeholders to explore urbanization trends, assess land use efficiency, and produce standardized reports aligned with SDG 11. Looking ahead, UN-Habitat plans to expand the tool's applications, integrating additional urban indicators and refining its analytical capabilities. 
Planned follow-up activities include training sessions for selected countries, enhancing local capacities to utilize the platform effectively. The insights gained from comparative analyses of datasets will further inform updates to indicator methodologies, ensuring they remain robust and relevant to evolving global needs.
Conclusion
This initiative demonstrates the transformative potential of EO technologies and modern web visualization frameworks in addressing sustainable urbanization challenges. By combining advanced datasets, user-friendly interfaces, and strong institutional collaboration, the platform sets a benchmark for leveraging EO capabilities in SDG monitoring. It highlights how innovative technologies can empower stakeholders to translate complex data into actionable insights, fostering sustainable urban development worldwide.
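SDG indicator 11.3.1, discussed above, is computed by UN-Habitat as the ratio of the land consumption rate (LCR) to the population growth rate (PGR), each expressed as a log-ratio over the same period. A sketch with illustrative figures (the city numbers are invented):

```python
# SDG 11.3.1 land use efficiency: LCRPGR = LCR / PGR, where
#   LCR = ln(Urb_t2 / Urb_t1) / years
#   PGR = ln(Pop_t2 / Pop_t1) / years
# Input figures below are illustrative, not real data.
import math

def lcr_pgr(urb_t1, urb_t2, pop_t1, pop_t2, years):
    """Ratio of land consumption rate to population growth rate."""
    lcr = math.log(urb_t2 / urb_t1) / years
    pgr = math.log(pop_t2 / pop_t1) / years
    return lcr / pgr

# Hypothetical city: built-up area 100 -> 130 km2,
# population 1.0M -> 1.2M, over 5 years
ratio = lcr_pgr(100.0, 130.0, 1_000_000, 1_200_000, 5)
print(round(ratio, 2))
```

A ratio above 1 indicates that land is being consumed faster than the population is growing, i.e. decreasing land use efficiency.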
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.34)

Session: D.05.02 Advancing Optimization, Resilience and Innovation Capabilities to Evolve ESA's Observation

This session offers a forward-looking exploration of key aspects of ESA Observation Framework (EOF) operations, focusing in particular on innovations in data acquisition, advanced processing, long-term archiving, and the IT resources needed to manage complex scientific algorithms, larger data volumes, synergetic processing and evolving data formats. It emphasizes the adoption of novel technologies, highlighting the importance of operational efficiency and resilience in an environmentally sustainable ecosystem.

Discussions will focus on how these advancements can evolve the ESA Observation Framework operations for Copernicus and ESA Earth Explorers missions to meet user needs and cost-efficiency goals, including the need to implement a collaborative environment to enable the maximization of data exploitation.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.34)

Presentation: The "Onboarding" of new Copernicus missions operations in ESA's Earth Observation Framework

Authors: Olivier Colin, Roberto De Bonis, Alessandra Buongiorno, Andrea Perrera, Ivan Di Lodovico, Bogdana Tsonevska, Davide Castelletti, Vincent Dumoulin, Franck Desbouillons, Nigel Houghton, Roberto Sciarra, Jordi Farres, Berenice Guedel, Ines Sanz Morere, Olivier Barois, Razvan Cosac, Jolyon Martin, Davide
Affiliations: European Space Agency - Esrin
The recurring launches of new Sentinel satellite units, as well as the upcoming Copernicus Expansion and Sentinel Next Generation missions, lay down a challenging agenda: bringing every new Copernicus satellite into routine operation over the coming decade. With this prospect in mind, ESA completed between 2018 and 2021 a major transformation of the Copernicus operations setup, referred to as ESA's Earth Observation Framework (EOF). This featured an important shift in the operations paradigm, aimed at streamlining end-to-end operations via inter-related service-based industrial undertakings and fostering maximum usage of European public cloud infrastructures. Following the successful migration of Sentinel-1, -2, -3 and -5P operations into the EOF in 2021, and the demonstration of seamless operations on that basis ever since, 2024 was the occasion to design and put in place a new EOF "onboarding" approach in preparation for embarking new satellites into the framework. This model was first put into practice with the swift onboarding of the Sentinel-2C unit in autumn 2024, then confirmed with that of the Sentinel-1C unit by the end of the year; both were successful and validated the concept. Sentinel-1D is next on the timeline, and the approach will be further matured to onboard the future Copernicus Expansion and Next Generation missions. The talk will outline the onboarding approach and elaborate on its benefits as first revealed by the recent onboarding of the Sentinel-1 and -2 recurrent units. 
It will also delineate how the approach extends to onboarding brand-new missions, which involves additional activities: evolving the underlying factorised EOF capabilities, developing the necessary mission-specific elements, and building specific expertise, all articulated towards an effective end-to-end integration of every new mission into operations, as and when needed, to sustain successful and efficient Copernicus operations for decades to come.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.34)

Presentation: A new Generic Processing Orchestration System for Earth Explorer missions

Authors: Torben Keßler, Richard Hofmeister, Knut Bernhardt, Alessandra Rech
Affiliations: ESA ESRIN, Werum Software & Systems AG
In the frame of the ESA Earth Explorer Programme, missions are proposed by the scientific community to address specific key aspects of the Earth system. These missions, introducing innovative observation strategies and technologies, drive advancements in scientific data processing, enabling the extraction of unprecedented insights from the collected data. Traditionally, the processing facility within the Payload Data Ground Segment (PDGS) of a satellite mission required mission-specific features and interfaces, while many operational requirements, features, and concepts were reused across the solutions of different missions. Leveraging these commonalities, and aiming to pave the way towards the new ESA EO Framework for Earth Observation Science Missions, a new generic processing orchestration system for scientific Earth Observation missions has been developed and is presented in this study. This software system was designed for a number of Earth Explorer missions and has matured into an open-source, generic multi-mission processing framework. The framework supports the transformation of computational environments from workstations and on-premise clusters to public-cloud environments. Its flexibility allows it to address a number of use cases, ranging from classical systematic processing facilities and on-demand processing tasks to archive services. The parallel execution of processing chains for different missions, leveraging the elasticity provided by the automated scaling and load balancing features of the system, is presented. For interactive and potentially multi-mission operations, extended features are introduced in the generic operator user interface. The generic system is based on the core processing facilities in use for the current EarthCARE and BIOMASS missions. 
The cooperation with stakeholders from the scientific missions' communities and with industrial IT experts ensures the further development of this stable and flexible processing environment. Additionally, the system's adaptability to evolving technological landscapes and its potential for integration with future missions highlight its significance in advancing Earth observation capabilities. This comprehensive approach not only enhances the efficiency and effectiveness of data processing but also fosters collaboration and innovation within the Earth observation community.
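As a toy illustration of the multi-mission idea (not the actual system's interfaces), a generic orchestrator can be reduced to a registry of mission-specific processors dispatched in parallel by a common framework:

```python
# Toy sketch of generic multi-mission orchestration: a registry maps
# mission names to processor callables, and a generic runner executes
# jobs in parallel. Mission names and processors are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def process_earthcare(product):
    return f"EarthCARE:{product}:L1 processed"

def process_biomass(product):
    return f"BIOMASS:{product}:L1 processed"

# Mission-specific elements plug into the generic framework here
PROCESSORS = {"EarthCARE": process_earthcare, "BIOMASS": process_biomass}

def run(jobs):
    """Dispatch (mission, product) jobs to the registered processors."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(PROCESSORS[mission], product)
                   for mission, product in jobs]
        return [f.result() for f in futures]

results = run([("EarthCARE", "orbit_0001"), ("BIOMASS", "orbit_0002")])
print(results)
```

In a real deployment the thread pool would be replaced by cloud workers with automated scaling and load balancing, as described in the abstract.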
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.34)

Presentation: COPERNICUS REFERENCE SYSTEM PYTHON: AN INNOVATIVE WORKFLOW ORCHESTRATION WITH THE ADOPTION OF THE SPATIOTEMPORAL ASSET CATALOG

#stac #cloud-native

Authors: Nicolas Leconte, Pierre Cuq, Vincent Privat
Affiliations: CS Group, Airbus
This presentation gives an overview of the Reference System Python (RS Python) developed within the Copernicus program for the Sentinel-1, -2, and -3 missions. The system will be able to expand its capabilities to include Sentinel-5P and pave the way for other upcoming Copernicus missions. RS Python orchestrates processing chains in a standard environment, from retrieving input data from the ground station to processing it and providing the final products through an online catalog. The Copernicus Reference System Software has been developed with Agile methods since 2021 under Copernicus, the European Union's Earth observation program implemented by the European Space Agency. Reference System Python is a continuation of the services implemented during the first phase, with adaptations reflecting the lessons learned during the first two years and the evolving CSC Ground Segment context. It offers a set of services necessary to build Copernicus processing workflows relying on Python frameworks. One main goal of the product is to provide an easy-to-use toolbox for anyone wanting to test and integrate existing or new Python processors. It is fully open-source and available online in a public Git repository (https://github.com/RS-PYTHON), allowing anyone to use it and even contribute. The major component is called rs-server. It exposes REST endpoints in the system and controls user access to all sensitive interfaces that require authentication. These endpoints can be called directly via HTTPS with OAuth2 authentication or by using our client named rs-client, a Python library with examples provided to ease the use of RS Python. It simplifies interactions with the system by embedding methods to call the various services of rs-server and handling more complex tasks under the hood, such as authenticating with an API key. 
RS Python can stage data (download and ingest in the catalog) using various protocols, like OData (Open Data protocol) or STAC (SpatioTemporal Asset Catalog), and multiple sources including CADIP (CADU Interface delivery Point) stations, AUXIP (Auxiliary Interface delivery Point, also known as ADGS, Auxiliary Data Gathering Service) stations, PRIP (Production Interface delivery Point) and LTA (Long Term Archive) stations. The ground stations still use the OData protocol, so until they are STAC-ready, RS Python performs STAC’ification on the fly to provide a unified experience and a unique protocol inside the system. The catalog is based on stac-fastapi-pgstac and is STAC-compliant. On top of that, we deploy STAC browser instances that provide a friendly Graphical User Interface (GUI) over the web browser. Our catalog, as well as the stations, are now easily searchable using all kinds of metadata. We use Prefect, an innovative Python orchestrator, to trigger the staging, processing, and inventorying of data and metadata. The processing can run locally on a laptop, or on Dask clusters to perform distributed computing with auto-scaled workers to achieve maximum performance when it’s needed. The auto-scaling features are applied at two different levels: nodes (infrastructure) and pods (services). This allows optimization of the number of running machines to handle the processing tasks and the number of tasks running in parallel on the available resources. It’s also designed with a sustainable approach, to reduce the cost, usage, and carbon footprint to the minimum. RS Python provides access to JupyterLab for the end-user. The end-user can build or start pre-made Prefect workflows from rs-client libraries. Grafana and OpenTelemetry enhance project monitoring and observability by providing real-time visualization and comprehensive data collection. 
Grafana provides interactive dashboards for tracking performance, while OpenTelemetry standardizes telemetry data, enabling seamless integration across systems. RS Python will run the refactored Python processors from the Sentinel missions provided in the context of ESA’s CSC Data Processors Re-engineering project. Another goal is to be able to run any Python processor, making it a reference platform. With the RS Python open-source solution, one can set up a platform to support Copernicus Ground Segment operation-related activities such as processor validation and benchmarking, implementation and fine-tuning of data processing workflows, re-processing and production services, data quality investigations, and integration of new processors and missions. In that sense, the Reference System is already used in other contexts, such as ESA's Earth Explorer missions. Finally, the system is Cloud Native and designed to run with optimal performance in a fully scalable Kubernetes cluster. Yet it’s still possible to install it locally on a laptop ... so anyone can play with it!
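The on-the-fly STAC’ification of OData station responses described above can be pictured as a simple mapping from a product record to a STAC Item. The sketch below is illustrative only: the input field names (`Id`, `PublicationDate`, `DownloadUrl`) are hypothetical stand-ins, not the actual CADIP/AUXIP schema or the rs-server implementation.

```python
from datetime import datetime, timezone

def odata_to_stac_item(product: dict) -> dict:
    """Map an OData-style product record to a minimal STAC Item.

    Field names in `product` are illustrative assumptions, not the
    real station schema used by rs-server.
    """
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": product["Id"],
        "geometry": None,  # raw station products carry no footprint
        "properties": {"datetime": product["PublicationDate"]},
        "assets": {
            "data": {"href": product["DownloadUrl"], "roles": ["data"]},
        },
        "links": [],
    }

# Hypothetical OData record, loosely shaped like a station response
record = {
    "Id": "DCS_01_S1A_20250101T000000_ch1.raw",
    "PublicationDate": datetime(2025, 1, 1, tzinfo=timezone.utc).isoformat(),
    "DownloadUrl": "https://station.example/odata/Products('x')/$value",
}
item = odata_to_stac_item(record)
```

A real STAC’ification layer would of course also carry over geometry, collection membership and provider-specific metadata; the point here is only that one unified item model can front several OData sources.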
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.34)

Presentation: Evolutions in the Copernicus Space Component Ground Segment

#zarr #stac

Authors: Jolyon Martin, Berenice Guedel, Betlem Rosich
Affiliations: European Space Agency
The Copernicus Space Component (CSC) Ground Segment (GS) is based on a service-based architecture and a clear set of management and architectural operations principles, hereafter referred to as the ESA EO Operations & Data Management Framework (EOF). ESA needs to guarantee the continuity of the ongoing operations with the maximum level of performance for the flying Copernicus Sentinels while facing the technical and financial challenges of adapting to the evolutions of the CSC architecture, including the Copernicus Expansion Missions and Next Generation Sentinels. The EOF encompasses all the activities necessary to successfully deliver the expected level of CSC operations entrusted to ESA (i.e. establishment and maintenance of the new baseline, procurement actions, operations management, reporting, etc.). The EOF implementation is based on a service architecture with well-identified components that exchange data over the Internet, respecting defined interfaces. A service presents a simple interface to its consumer that abstracts away the underlying complexity. Combined with deployment on public cloud infrastructure, this offers great adaptability to evolving operational scenarios, in particular with regard to scalability. This presentation aims to introduce the ongoing and planned evolutions of the Ground Segment architecture. Recognising community-driven initiatives in interoperability such as STAC, and tapping into the rich framework for scientific computing offered by Python, Dask and Zarr, the EOF intends to further streamline the interfaces within the Ground Segment and to open more opportunities, empowering an open ecosystem of service providers leveraging and enhancing the capabilities of the Copernicus programme.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.34)

Presentation: Advancing Earth Observation with the ESA Copernicus Earth Observation Processor Framework (EOPF): New Approaches in Data Processing and Analysis Ready Data

#zarr

Authors: Davide Castelletti, Vincent Dumoulin, Roberto De Bonis, Kathrin Hintze, Jolyon Martin, Betlem Rosich
Affiliations: ESA
The ESA Sentinel missions, a fundamental component of the Copernicus Earth Observation program, deliver a comprehensive range of essential data for the monitoring of Earth's environment. The presentation will focus on the ESA Copernicus Earth Observation Processor Framework (EOPF), which aims to innovate the data processing infrastructure supporting the Sentinel missions, including the use of open-source tools and cloud computing platforms. A key highlight will be the adoption of the Zarr data format, which facilitates the storage and access of multidimensional data across all Sentinel missions, improving data interoperability, scalability, and performance. Additionally, the presentation will cover the development of Analysis Ready Data (ARD) products, in particular in the context of the Sentinel-1 mission. ARD products streamline processing by providing ready-to-use datasets for immediate analysis and are crucial for a wide range of applications, from climate change monitoring to disaster response and resource management. Finally, we will explore the evolving processor design for the Copernicus Expansion (COPEX) missions, emphasizing the need for new data processing approaches to handle the increasing volume, complexity, and diversity of satellite data.
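A minimal illustration of why a chunked layout such as Zarr improves access to multidimensional products: given a chunk shape, one can compute which chunks a small windowed read touches and fetch only those, instead of the whole file. The shapes below are mock values loosely modelled on a Sentinel-2-like cube, not actual EOPF product definitions.

```python
from itertools import product

def chunks_for_window(shape, chunk_shape, window):
    """Return the chunk indices a windowed read touches.

    shape:       full array shape, e.g. (bands, rows, cols)
    chunk_shape: chunk size per dimension
    window:      half-open (start, stop) per dimension
    Illustrates why chunked formats such as Zarr allow reading a small
    spatial subset without downloading the whole product.
    """
    ranges = []
    for (start, stop), c in zip(window, chunk_shape):
        ranges.append(range(start // c, (stop - 1) // c + 1))
    return list(product(*ranges))

# Mock 13-band, 10980x10980 cube with 1024x1024 spatial chunks:
# reading one band over a 300x200 pixel window touches only 4 chunks.
touched = chunks_for_window(
    shape=(13, 10980, 10980),
    chunk_shape=(1, 1024, 1024),
    window=((0, 1), (2000, 2300), (5000, 5200)),
)
```

In an object store each chunk is a separate key, so this index arithmetic translates directly into a handful of small HTTP range requests.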
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.34)

Presentation: From Vertical PDGS to the ESA EO Framework for EO Science Missions: Drivers, Key Elements and Status

Authors: Massimo Romanazzo, Francesca Cipollini, Damiano Guerrucci, Andrea Della Vecchia, Ivan Famoso, Alessandra Rech, Joost Van Bemmelen
Affiliations: ESA/ESRIN
ESA is responsible for preparing the operations of, and subsequently operating, the flying EO Science missions (including Earth Explorer and Earth Watch missions) and the Heritage missions at the required level of performance and in line with mission requirements, adopting the most suitable technical, operational and management approach to maximise scientific impact within the financial envelope set at programmatic level. In the context of R&D activities, ESA is also responsible for granting authorised users access to selected collections from Third Party missions. In recent years the “new” EO Science missions have shown increasing complexity both on the purely scientific side (e.g. algorithm and processing complexity) and on the data management side (e.g. data volumes, operational dependencies with other missions), while the legacy missions have demonstrated a remarkable longevity. To face the corresponding challenges, the operational approach and the corresponding technical/operational solutions will evolve so as to provide scalability (to best fit the overall operation volume evolving over time) and flexibility (to rapidly integrate state-of-the-art technological advancements) while ensuring a secure and robust operational environment and increasing cost-effectiveness. Building on the above needs and on the impact of the Copernicus Ground Segment transformation on the EO community, the former ESA and TPM Payload Data Ground Segment (PDGS) is evolving into a service-oriented architecture, relying increasingly on European industry’s state-of-the-art technology and infrastructure and on harmonised interfaces. This operational framework, referred to as the Earth Observation Framework for EO Science Missions (EOF-EOS), greatly streamlines the “onboarding” process required to prepare the operations of new ESA EO Science missions, since it benefits from existing operational interfaces, processes and an overall operations model. 
New scientific missions, by their nature, introduce novel scientific data sensing technologies and acquisition methodologies that require mission-specific dedicated developments and services (e.g. for data processors and mission planning) on top of the baseline cross-mission services set up to support the new mission operations – this constitutes one of the biggest challenges the EOF-EOS mission onboarding process must deal with. This presentation will introduce the driving factors for the evolution towards the EOF-EOS, its reference architecture, operations concept and latest implementation status, with a focus on the process of onboarding new missions onto the operational framework and the benefits this new concept will introduce for ESA and the EO science community.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 0.94/0.95)

Session: C.01.02 Innovative UAV Applications for Earth Observation - PART 1

Unoccupied Aerial Vehicles (UAVs) are flexible and efficient acquisition systems that can fill an important gap among spaceborne, airborne, and ground-based measurements. UAVs provide very high-resolution data acquisitions, even over inaccessible areas, and have demonstrated their ability to support various environmental and urban applications.

We invite contributions addressing the most recent innovations in the use of UAVs for Earth Observation and environmental and urban monitoring, with a focus on:

-Data acquisition for Earth Observation and atmospheric research
-Synergies and data fusion between UAVs and spaceborne, airborne, and ground-based measurements
-Real-time processing and analysis of UAV-acquired data
-Applications including but not limited to:
  - Agriculture and precision farming
  - Forestry and forest monitoring & inventory
  - Urban monitoring and urban green management
  - Disaster management
  - Conservation management
  - Monitoring of critical infrastructure (e.g., roads, coastal protection)
-UAVs in support of Earth Observation campaigns
-Transferable methods for environmental and infrastructure monitoring that can be applied by various actors (e.g., foresters, farmers, technicians of public administrations)

By focusing on innovative UAV applications and transferable methodologies, we aim to showcase the potential of UAV technology in advancing Earth Observation, to help develop future satellite missions and to advance environmental monitoring practices.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: Methodological Considerations For Studying Spectral-Plant Diversity Relationships

Authors: Christine Wallis, Anna Crofts, Mark Vellend
Affiliations: Technische Universität Berlin, Geoinformation in Environmental Planning, University of Sherbrooke, Biology
The Spectral Variation Hypothesis (SVH) posits that higher spectral diversity indicates higher biodiversity, which would allow imaging spectroscopy to be used in biodiversity assessment and monitoring. However, its applicability varies due to ecological and methodological factors. Key methodological factors impacting spectral diversity metrics include spatial resolution, shadow removal, and spectral transformations. This study investigates how these methodological considerations affect the application of the SVH across ecosystems and sites. Field surveys were conducted in forest and grassland ecosystems at five sites of the Canadian Airborne Biodiversity Observatory (CABO). Hyperspectral surveys were conducted alongside plant inventories using two comparable hyperspectral push-broom imagers that collected spectral information over 288 spectral bands covering the visible and near-infrared (NIR) regions. The µCASI sensor used at the open vegetation sites was mounted on an unoccupied aerial vehicle (UAV) and recorded data at 3 cm spatial resolution. The CASI-1500 sensor used at forest sites was onboard a Twin Otter fixed-wing aircraft and recorded data at ~2 m spatial resolution. We analyzed three variance-based spectral diversity metrics across and within vegetation sites, examining the effects of illumination corrections, spatial resolution, and shadow filtering on the spectral-plant functional diversity relationship. Our findings highlight that the relationship between spectral diversity metrics and functional diversity is strongly influenced by methods, especially spectral transformations. These illumination corrections notably impacted the spectral regions of importance and the resulting relationships to plant functional diversity. 
Depending on methodological choices, we observed correlations that varied not only in strength but also direction: in open vegetation we saw negative correlations when using brightness normalization, and positive correlations when using continuum removal. Shadow removal and spatial resolution were important but had less impact on the correlations. By systematically analyzing these methodological aspects, our study not only aims to guide researchers through potential challenges in SVH studies but also highlights the inherent sensitivity of spectral-functional diversity relationships to methodological choices. The variability and context-dependence of these relationships across and within sites emphasize the need for adaptable, site-specific approaches, presenting a key challenge in developing robust methods to enhance biodiversity monitoring and conservation strategies. Our results underscore the importance of standardizing methodological approaches in SVH studies to improve the reliability of biodiversity assessments using imaging spectroscopy. This study provides critical insights for optimizing biodiversity monitoring frameworks, which are essential for informing conservation efforts in diverse ecosystems.
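As a concrete, simplified instance of the variance-based metrics discussed above, the sketch below computes the mean Euclidean distance of pixel spectra to the plot centroid, with optional brightness normalization. It is a generic illustration of why such transformations change the metric, not the authors' actual processing chain; the data are synthetic.

```python
import numpy as np

def spectral_diversity(spectra, brightness_normalize=False):
    """Mean Euclidean distance of pixel spectra to the plot centroid.

    spectra: (n_pixels, n_bands) reflectance array.
    Brightness normalization (dividing each spectrum by its L2 norm)
    is one common transformation; others, e.g. continuum removal,
    would alter the metric differently.
    """
    spectra = np.asarray(spectra, dtype=float)
    if brightness_normalize:
        spectra = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)
    centroid = spectra.mean(axis=0)
    return float(np.mean(np.linalg.norm(spectra - centroid, axis=1)))

rng = np.random.default_rng(0)
base = rng.uniform(0.05, 0.5, size=(100, 288))        # mock 288-band plot
bright = base * rng.uniform(0.5, 1.5, size=(100, 1))  # per-pixel brightness
# Normalization removes the brightness component of the spectral variance
raw = spectral_diversity(bright)
norm = spectral_diversity(bright, brightness_normalize=True)
```

Comparing `raw` and `norm` on the same pixels makes the study's point tangible: the metric's value, and hence any diversity correlation built on it, depends on the transformation applied first.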
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: UAV lidar-based characterisation of individual trees across ICOS sites

Authors: Benjamin Brede, Bert Gielen, Dr. Martin Herold, Linda Luck, Francesco Minunno, Geike de Sloover, Dr Johannes Wilk, Dr Kim Calders
Affiliations: GFZ Helmholtz Centre For Geosciences, University of Antwerp, University of Helsinki, Ghent University
Recent advances in UAV platforms and lidar technology have fostered innovative applications and led to growing adoption of UAV lidar for forestry purposes. In particular, there is continued interest in bridging between the field and satellite scales with UAVs, as UAVs make it possible to capture forest structural information at relevant spatial scales in a time-efficient manner. In this context, this contribution will address the structural characterisation of eddy covariance/flux tower footprints with UAV lidar for the purpose of linking the structure and physiology of individual trees to observed fluxes as well as to satellite observations. First, an overview of potential UAV-lidar capabilities and derivable metrics will be given, including but not limited to individual tree segmentation, diameter at breast height, tree height, crown metrics, above-ground biomass (AGB), canopy cover, leaf area index (LAI), and structural diversity. For a spatially complete representation of a flux tower footprint, detection of all individuals in the area is important, but due to occlusion and dense canopies this might not always be possible. Strategies to compensate for this will be discussed, including the role of new scanning technology and acquisition protocols, increased point cloud density, and modelling methods. Additionally, upscaling strategies from plot-scale terrestrial laser scanning (TLS) will be addressed. Second, the requirements of relevant forest productivity models will be examined. Here, it will be important to identify which variables can, and which cannot, readily be observed with UAV lidar. One such difficult variable is tree species, which needs to be derived from other observations, such as high-resolution airborne surveys, UAV-RGB or stand maps. An opportunity-gap analysis will be presented. Third, the status of current and future UAV and TLS acquisition campaigns at ICOS sites will be summarised. 
For all sites, the plan is to capture the majority of the tower footprint with the UAV. In the absence of detailed footprint information, this translates to an area of roughly 500 x 500 m around each flux tower. For TLS, the focus will be on representative areas within the core footprint. The target in site selection is to cover a climatic gradient across Europe, as well as a gradient in forest structure and functioning. The work presented here contributes to two recently started projects, the ESA Digital Twin Component Forest and Horizon Europe NextGenCarbon. In both projects, forest observation experts and ecosystem modellers work together to improve the integration of field, terrestrial, UAV, airborne and space-borne observation streams into ecosystem functional models.
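A toy example of the kind of footprint-level structural metrics listed above, computed from height-normalized lidar returns. The 99th-percentile height and the 2 m cover threshold are common conventions assumed here for illustration, not the specific definitions used in ICOS protocols, and the point heights are synthetic.

```python
import numpy as np

def canopy_metrics(z, cover_threshold=2.0):
    """Simple footprint-level metrics from height-normalized lidar returns.

    z: 1-D array of return heights above ground (m).
    """
    z = np.asarray(z, dtype=float)
    canopy = z[z > cover_threshold]
    return {
        "top_height_m": float(np.percentile(z, 99)),      # robust stand height
        "canopy_cover": float(np.mean(z > cover_threshold)),
        "mean_canopy_height_m": float(canopy.mean()),
    }

rng = np.random.default_rng(1)
# Mock plot: 70 % canopy returns around 20 m, 30 % ground/understorey returns
z = np.concatenate([rng.normal(20, 3, 700), rng.uniform(0, 1.5, 300)])
m = canopy_metrics(z)
```

Occlusion in dense stands biases exactly these quantities, which is why the abstract stresses acquisition protocols and point density: returns that never reach the lower canopy simply do not appear in `z`.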
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: A systematic review: Individual tree species identification using deep learning and high-resolution imagery

Authors: Zhongyu Xia, Jan Dirk Wegner, Prof. Dr. Arthur Gessler, Prof. Dr. Verena C. Griess, Dr. Mirela Beloiu Schwenke
Affiliations: Department of Environmental Systems Science, Institute of Terrestrial Ecosystems, ETH Zurich, Department of Mathematical Modeling and Machine Learning, University of Zurich, Forest Dynamics, Swiss Federal Institute for Forest, Snow and Landscape Research WSL
Accurate and fast tree species inventory is a cornerstone of sustainable forest management, providing insights into forest dynamics, ecological value, invasive species, and biodiversity assessments. Over the past decade, rapid advances in remote sensing and machine learning, particularly deep learning, have revolutionized tree species identification at the individual tree level. Despite the growing interest in individual tree species identification (ITSI) research, a systematic synthesis encompassing three key aspects (tree species, data types, and deep learning architectures) remains lacking. This review addresses these gaps through a meta-analysis of 34 metadata variables extracted from 96 peer-reviewed studies published between 2012 and September 2024. The results showed that temperate forest biomes were the most studied, with a significant emphasis on urban and plantation settings. Coniferous species, particularly those within the Pinus and Picea genera, dominated the research landscape. Regarding data sources, RGB and near-infrared bands acquired by unmanned aerial vehicles (UAVs) were predominantly utilized, with resolutions ranging from 0.69 cm to 320 cm per pixel. Approximately one-third of the studies incorporated LiDAR data, primarily to enhance crown segmentation accuracy. However, only 11% of the studies employed multi-temporal data, indicating a potential area for future exploration. Convolutional Neural Networks (CNNs) were the most applied deep learning architectures, featuring in 88% of the reviewed studies. More sophisticated approaches, for example Graph Neural Networks (GNNs), Recurrent Neural Networks (RNNs), and attention-based methods, have been deployed much more rarely but hold great promise for future research. 
This review highlights the progress and current trends in AI-driven ITSI, demonstrating that the increasing availability of fine-grained, cost-effective high-resolution imagery combined with CNNs is shaping the emerging trend in tree species inventory. It underscores the need for standardized datasets, the integration of multi-temporal data, and the exploration of diverse forest types and species to improve model generalization and applicability. Future research should focus on these areas to advance the field and support effective forest management practices.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: Adaptive Computation for Resource-Efficient Flood-Water Segmentation

Authors: Vishisht Sharma, Dr. Lisa Landuyt, Dr. Sam Leroux, Dr. Pieter Simoens
Affiliations: University of Ghent, VITO (Vlaamse Instelling voor Technologisch Onderzoek)
Flooding is one of the most destructive natural disasters, inflicting significant harm on infrastructure, ecosystems, and human livelihoods; with the increasing frequency of floods, rapid and effective situational awareness after flood events is important for minimizing their impacts. Unmanned Aerial Vehicles (UAVs) have emerged as a valuable technology for disaster management, offering high flexibility in high-resolution data acquisition. In this research, we focus on leveraging UAV-based imagery for flood delineation, a key geospatial task that is crucial for effective flood crisis management. Our work is driven by two main connected objectives: (1) creating a robust, specialized dataset for water segmentation using oblique UAV imagery in flood scenarios, and (2) developing adaptive computation techniques for energy-efficient segmentation of imagery on board the UAV, to maximize flight time. The outcomes of this research have significant implications for flood crisis management. A significant challenge in flood monitoring and response lies in the accurate segmentation of water bodies and inundated areas from aerial imagery. Existing datasets, such as FloodNet (Rahnemoonfar et al., 2021), are often limited in resolution, geographic specificity, diversity of water conditions, or viewing geometry. To address this, we introduce a custom dataset tailored for water segmentation in flood scenarios. This dataset comprises UAV-acquired oblique imagery, capturing complex real-world conditions such as varying water turbidity, urban structures, and changing weather and lighting conditions. By focusing on oblique angles, the dataset emphasizes scenarios in which UAVs are typically used by crisis responders. Labeling large-scale datasets for water segmentation is labor-intensive and thus a costly process. 
To overcome this, we leverage SAM2 (Segment Anything Model 2), a state-of-the-art model for interactive segmentation, in a semi-supervised framework to pseudo-label UAV-acquired flood imagery and other relevant datasets. SAM2 is used to pre-annotate video frames: segmentation masks are generated on initial static frames, specifying the water bodies to be tracked, and SAM2 then propagates these initial masks through the video to produce per-frame segmentation masks. This semi-supervised approach not only accelerates the labeling process but also enables the inclusion of diverse imagery from different flood scenarios, thereby improving model generalizability. Processing UAV-acquired data in real time poses significant computational challenges, particularly when deployed on platforms with constrained processing power and battery. To address this, we develop adaptive computation techniques that dynamically allocate processing power based on the complexity of the input data. The resulting models deliver high segmentation accuracy while minimizing latency and power consumption, making them ideal for deployment on UAVs and edge computing devices in field settings. One such adaptive technique, SegBlocks, is a modular framework that dynamically adjusts computational complexity based on the input image’s content. The architecture divides the input UAV imagery into blocks and processes only the regions likely to contain water or flooded areas at full resolution, while less relevant regions are processed with reduced complexity. This selective focus enables real-time processing without sacrificing segmentation accuracy. SegBlocks is implemented on top of lightweight convolutional models, ensuring efficient resource utilization. 
By adopting this architecture, we achieve a significant reduction in latency and power consumption, making it well-suited for deployment on edge devices, such as those onboard UAVs. By integrating this adaptive computation technique, our solution prioritizes speed and accuracy, essential for delivering actionable insights during flood events. This resource-efficient approach bridges the gap between high-resolution UAV data acquisition and the real-time demands of flood crisis responders. This research presents a comprehensive framework for leveraging UAV-acquired imagery in flood crisis management, addressing critical challenges in data generation, processing efficiency, and model generalizability. By focusing on oblique UAV imagery, we bridge a significant gap in existing datasets, offering a valuable resource for water segmentation in complex flood scenarios. The adaptive computation techniques and semi-supervised labeling pipeline introduced in this study advance the state of the art in resource-efficient computer vision, enabling real-time flood monitoring on edge devices. The methodologies and tools developed in this research have far-reaching applications beyond flood crisis management. The ability to rapidly and accurately detect and delineate water bodies from oblique UAV imagery supports a wide range of humanitarian and environmental use cases, including disaster response, urban water management, agricultural water monitoring, and conservation efforts. The flexibility of our framework also allows it to be seamlessly adapted to address other types of crises, such as wildfire detection or victim identification. By enabling real-time analysis on edge devices, our solution empowers first responders, public administrators, and environmental managers with actionable data to make informed decisions during critical events.
Reference: M. Rahnemoonfar, T. Chowdhury, A. Sarkar, D. Varshney, M. Yari and R. R. Murphy, "FloodNet: A High-Resolution Aerial Imagery Dataset for Post Flood Scene Understanding," IEEE Access, vol. 9, pp. 89644-89654, 2021, doi: 10.1109/ACCESS.2021.3090981.
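The block-wise selective processing idea behind SegBlocks can be sketched as follows. The per-block "relevance" test here is a trivial colour heuristic standing in for a learned policy, and nothing below reflects the actual SegBlocks implementation; it only illustrates how skipping irrelevant blocks cuts the compute budget.

```python
import numpy as np

def blockwise_segment(img, block=64, rel_threshold=0.1):
    """Toy SegBlocks-style selective processing.

    img: (H, W, 3) float image. A cheap per-block test (fraction of
    blue-dominant pixels, a stand-in for a learned relevance policy)
    decides which blocks get the "expensive" segmentation pass; the
    rest are skipped. Returns the mask and the fraction of blocks
    processed at full cost.
    """
    H, W, _ = img.shape
    mask = np.zeros((H, W), dtype=bool)
    processed = total = 0
    for i in range(0, H, block):
        for j in range(0, W, block):
            tile = img[i:i + block, j:j + block]
            total += 1
            blueish = tile[..., 2] > tile[..., :2].max(axis=-1)
            if blueish.mean() > rel_threshold:  # cheap relevance test
                processed += 1
                mask[i:i + block, j:j + block] = blueish  # full-cost pass
    return mask, processed / total

# Mock frame: blue-dominant "water" in the left half, green "land" right
img = np.zeros((128, 128, 3))
img[:, :64, 2] = 0.8
img[:, 64:, 1] = 0.8
mask, frac = blockwise_segment(img)
```

In a real system the expensive pass would be a convolutional network and the relevance policy would itself be learned; the latency saving scales with the fraction of blocks skipped, here half of them.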
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: Using a Data Fusion Approach to Characterise Fuel Loads and Predict Fire Probability Across the Brazilian Cerrado

Authors: Natasha Lutz, Dr Manoela Machado, Dr Imma Oliveras Menor
Affiliations: Environmental Change Institute, School of Geography and the Environment, University of Oxford, Woodwell Climate Research Center, Botanique et Modelisation de l'Architecture des Plantes et des Vegetations (AMAP), Institut de Recherche pour le Développement Délégation Régionale Occitanie
Climate change and land-use change are predicted to alter fire regimes globally, with potential ecological impacts that include increased plant physiological stress, reduced soil moisture, and diminished carbon storage capacity. Understanding the probability of fire occurrence within ecosystems is integral to predicting the spatial distribution of these ecological changes. Vegetation structural metrics such as height, canopy cover and leaf area index, as well as topographical variables, influence microclimates and the distribution of forest fuels, which are important factors determining the spread of fires. Upscaling these metrics from the plot scale to the ecosystem and biome level without a loss of fine-scale information is a significant challenge in understanding how fuel loads relate to fire regimes at scales relevant for policy. This study develops a novel method of estimating fuel loads across the Brazilian Cerrado biome using a multi-scale data fusion approach. The Brazilian Cerrado is one of the most fire-adapted and biodiverse ecosystems in the world and is at risk from land-use change. The structure and accumulation of surface, herbaceous, and woody vegetation were characterized within multiple sample plots at five locations across the Cerrado using mobile under-canopy LiDAR, UAV LiDAR and UAV multispectral data, and field aboveground biomass density (AGBD) data. LiDAR full waveforms for these locations were simulated from the UAV LiDAR point clouds, which were then used to relate plot-level metrics to on-orbit spaceborne LiDAR data from the Global Ecosystem Dynamics Investigation (GEDI). The relationships between fuel structural and complexity metrics (height, plant area index, canopy cover), AGBD, fuel type and fire history were explored for the Cerrado. 
A range of modelling methods (multiple linear regression, random forest and convolutional neural networks) was explored with spatially contiguous remote sensing data (Sentinel, Landsat) to create wall-to-wall maps of fuel loads and fire probability. Models were stratified by vegetation type: savanna, grassland, and transitional forest. Ongoing development of these models includes efforts to extend them over several years to map shifts in fire regimes. The study will provide insights to better estimate shifts in ecological impacts and fire probability.
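As a minimal stand-in for the multiple-linear-regression baseline mentioned above, the sketch below fits a fuel-load model from two mock predictors via ordinary least squares. Predictor names, units and coefficients are invented for illustration and have no relation to the study's actual data.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept, via np.linalg.lstsq.

    X: (n_samples, n_predictors), y: (n_samples,).
    Returns [intercept, b1, b2, ...].
    """
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(2)
# Mock predictors: canopy height (m) and an NDVI-like index
height = rng.uniform(2, 25, 200)
ndvi = rng.uniform(0.2, 0.9, 200)
# Synthetic fuel load with known coefficients plus noise
fuel = 1.5 * height + 4.0 * ndvi + rng.normal(0, 0.5, 200)
coef = fit_linear(np.column_stack([height, ndvi]), fuel)
```

Stratifying by vegetation type, as the study does, amounts to fitting such a model separately per stratum so that savanna, grassland and transitional forest each get their own coefficients.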

Thursday 26 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: An End-to-End deep learning Framework for Detailed Forest Inventory

Authors: Wen Fan, Dr. Jiaojiao Tian, Jonas Troles, Prof. Dr. Thomas Knoke
Affiliations: Technical University Of Munich, German Aerospace Center, University of Bamberg
Forests are vital for climate change mitigation, carbon storage, flood prevention, water purification, and local economic development. Detailed information at the level of individual trees is a key element of forest monitoring. Tree counting, tree species classification, vitality prediction and parameter estimation are all components of a forest inventory. The accurate delineation of individual tree crowns (ITC) is one of the most fundamental tasks for tree information acquisition. Remote sensing technology can collect large and comprehensive datasets quickly and therefore has great potential for applications in forest ecology and management. ITC segmentation using remote sensing data faces challenges due to canopy heterogeneity, crown overlap and data quality issues. Although some studies have explored deep learning methods for ITC delineation, the need for high-quality annotated datasets has limited the application of deep learning-based methods. Thanks to the BaKIM project, a high-quality annotated dataset, BAMFORESTS, is available. BAMFORESTS contains 27,160 labeled trees over 105 hectares in the study area of Bamberg, Germany, provided as a COCO-format dataset with a tile resolution of 2048x2048 pixels. The study area includes a rich diversity of tree species, mainly seven: Scots pine, European beech, oak, Norway spruce, European larch, Douglas fir and silver fir. Tree vitality is classified as healthy, declining or dead. We propose an end-to-end learning-based framework for extracting ITC boundaries from UAV RGB imagery. In addition, tree species, vitality, and individual tree parameters (i.e. crown area, tree height and crown diameter) are automatically inferred and extracted, which contributes to quickly obtaining a detailed picture of forest distribution. The framework uses ConvNeXt-V2 as the backbone for feature extraction.
To study the transfer-learning ability of the framework, we further tested the trained model on the aerial Kranzberg forest dataset. In the experiments, the overall accuracy of individual tree segmentation and classification for seven tree species exceeded 70%, while the prediction accuracy for vitality surpassed 60%. Experimental results show that our proposed end-to-end framework achieves superior model performance at a lower time cost for large-scale missions. It offers robust and efficient ITC segmentation, tree species classification, vitality estimation and parameter estimation, providing a detailed forest inventory. This has the potential to support various forestry applications, such as tree harvesting cycle management; the resulting tree distribution and vitality maps can be used to support forest management.

Thursday 26 June 08:30 - 10:00 (Hall L1/L2)

Session: B.01.03 Transforming Global Analysis Ready Earth Observation Data into Actionable Information to Drive Local Climate and Environmental Actions via Co-creation - PART 1

Earth Observation (EO) from satellites plays a key role in observing changes in climate and environment globally and locally (e.g. climate change, environmental changes and hazards, biodiversity, humanitarian crises, etc.). Currently, a wide variety of key climate and environmental variables (e.g. temperature, precipitation, greenhouse gases, air pollutants, surface properties, land-use changes, fires, forests, snow/ice, ocean color, biomass, etc.) are collected and provided to users as Analysis-Ready Data (ARD). While satellite-based ARD has global utility by nature, it is not necessarily fully tailored to particular local issues, challenges and needs.
Ultimately, EO data should be turned into equitable and actionable solutions for addressing local environmental and societal challenges and delivered where they are needed most. To fully unlock the potential of global ARD, it should be further transformed into Action-Ready Information (ARI) with greater information granularity via co-creation with local collaborators and stakeholders. For example, key climate and environmental variables are also being collected locally using various observation platforms on the ground, on the water and in the air. Such data can be used not only for regular evaluation and validation of satellite EO, but also to create customized, value-added ARI tailored to local needs. Identifying potential gaps between ARD and local needs is also critically important; these gaps need to be mitigated and ultimately closed. In this regard, the involvement of local stakeholders in the process is extremely important. Despite the proven power of community science, local projects often struggle to remain self-sustaining due to financial challenges. In addition to research, we need economically viable and/or politically feasible approaches and solutions to lead us to local goals. Via co-creation, using ARI to address local challenges should also contribute to addressing global climate and environmental issues.

This session highlights, but is not limited to, EO (in the broader sense) data applications for addressing local climate and environmental issues, challenges and mitigation actions, and discusses the gaps and ways to enhance the use of EO to address them together with local communities. The session invites not only satellite EO data providers and their users at various stages and levels, but also broader participants: for example, people who collect, analyze or use EO locally; engineers and companies who develop EO technologies and wish to scale up; and researchers and practitioners who seek synergy or combined use of global satellite EO and local EO. We also invite broader stakeholders from the policy side and private sector who would like to discuss potential approaches to using EO for climate mitigation monitoring and sustainable development from a social science, policy or business perspective to obtain maximum return for local communities. With input from many unique local cases, we expect to synthesize the input and co-create global knowledge to guide us towards a sustainable future by further enhancing our ability to monitor and address the global-but-local challenges that we face.

Thursday 26 June 08:30 - 10:00 (Hall L1/L2)

Presentation: Making Deforestation Alerts More Actionable: Alert Integration, Guidelines and South-South Exchange

Authors: Johannes Reiche, Sarah Carter, Sylvia Wilson, Paul Berkowitz, Dr. Ruth Nogueron Chang, Dr. Andreas Vollrath, Dr. Erik Lindquist, Dr Amy Hudson Pickens, Dr. Robert M Masolele, Bart Slagter, Dr. Johannes Balling
Affiliations: Wageningen University, World Resources Institute, United States Geological Survey, University of Hawaii at Hilo, Food and Agriculture Organization of the United Nations, University of Maryland
Recent advances in satellite monitoring have led to near-real-time monitoring of forest and tree cover loss on local to global scales. Near-real-time alert systems leveraging optical and radar satellite imagery can identify forest disturbances with daily, weekly, or monthly frequency and low latency. These capabilities enable rapid responses to illegal and unsustainable forest activities by facilitating on-the-ground interventions, potentially preventing further forest clearing. Moreover, early warning systems can identify areas at high risk of deforestation, allowing for targeted preventive measures, such as focused patrolling. The availability of multiple optical and radar-based forest disturbance alerts, each with varying detection capabilities depending mainly on the satellite sensor used, poses a challenge for users in selecting the most suitable system for their monitoring needs and workflow. Integrating multiple alerts holds the potential to address the limitations of individual systems. While well-established guidelines exist for monitoring annual forest loss and reporting annual statistics, there is a notable lack of guidance for implementing and institutionalizing deforestation alerts. Limited south-south exchange on the use of alerts further hinders their adoption by countries beginning to explore and utilize these systems. This work presents our efforts to make deforestation alerts more actionable by integrating multiple deforestation alert systems, providing open-source solutions, developing guidance on deforestation alert use and stimulating south-south exchange. First, we provide an overview of our alert integration work. We present results of our study (Reiche et al., 2024) integrating RAdar for Detecting Deforestation (RADD) alerts (Sentinel-1) and optical-based Global Land Analysis and Discovery Sentinel-2 (GLAD-S2) and GLAD-Landsat alerts. Using two confidence rulesets, we applied the integration at ten 1° sites across the Amazon Basin. 
We introduced a highest-confidence class, in addition to the existing low- and high-confidence classes. The integration improved detection timeliness by days to months and reduced the delay to increased confidence. Combining alerts resulted in an average detection rate of 97%, showcasing the complementary strengths of optical and radar sensors under diverse environmental conditions and drivers, such as fires, selective logging, and persistent cloud cover. The most significant improvements were observed when integrating RADD and GLAD-S2 alerts, leveraging the high temporal frequency and the spatial detail of 10 m Sentinel-1 and Sentinel-2 data. We discuss the implications of this approach and demonstrate that alert integration is a crucial step to make multi-system alerts more user-friendly, providing stakeholders with consistent and reliable information on new forest disturbances. These results provide the underpinning for the Global Forest Watch integrated alert product. Google Earth Engine code to integrate various alert datasets (e.g. RADD, GLAD-L, and GLAD-S2) using the proposed rulesets is openly available. Second, we present an overview of the GFOI guidance module on deforestation alerts (Carter et al., 2024), developed as part of the GFOI R&D Component, and in collaboration with SilvaCarbon. Through a series of regional SilvaCarbon workshops across the tropics and exchanges with deforestation alert producers and users, we provided a comprehensive overview of key terminology, use cases, system characteristics, and existing alert systems. Additionally, the module offers guidance on producing custom alerts using tools such as the open FAO SEPAL platform. The guidance document aims to support forest policy experts, decision-makers, and forest management practitioners with the knowledge needed to effectively access and utilize deforestation alerts. 
A series of country-specific examples of deforestation alert use are included, emphasizing the critical role of south-south knowledge exchange in fostering the adoption of alerts.
References:
Reiche J, Balling J, Pickens A H, Masolele R N, Berger A, Weisse M J, Mannarino D, Gou Y, Slagter B, Donchyts G, Carter S (2024) Integrating satellite-based forest disturbance alerts improves detection timeliness and confidence. Environmental Research Letters, 19, 054011.
Carter S, Reiche J, Berkowitz P, Chang R N, Wilson S (2024) GFOI Module: Deforestation Alerts. Global Forest Observation Initiative. https://www.reddcompass.org/mgd/resources/GFOI-DeforestationAlertModule-20240909.pdf
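The published confidence rulesets are those of Reiche et al. (2024) and the openly available Google Earth Engine code. As a purely illustrative sketch (the rules below are hypothetical, not the published ruleset), per-pixel confidence flags from multiple alert systems might be combined like this:

```python
# Sketch: merge per-pixel confidence flags from several alert systems
# (e.g. RADD, GLAD-L, GLAD-S2) into one integrated confidence class.
# The rules here are illustrative only, not the published GFW ruleset.

def integrate_alerts(confidences):
    """confidences: dict mapping system name to 'low', 'high',
    or None (no alert at that pixel for that system)."""
    flags = [c for c in confidences.values() if c is not None]
    if not flags:
        return None  # no system alerted at this pixel
    n_high = sum(1 for c in flags if c == "high")
    # Agreement between systems, or a high-confidence detection confirmed
    # by another system, promotes the pixel to a new highest-confidence class.
    if n_high >= 2 or (n_high >= 1 and len(flags) >= 2):
        return "highest"
    if n_high == 1:
        return "high"
    # Two independent low-confidence detections reinforce each other.
    return "high" if len(flags) >= 2 else "low"

print(integrate_alerts({"RADD": "high", "GLAD-S2": "low"}))  # highest
print(integrate_alerts({"RADD": "low", "GLAD-S2": None}))    # low
```

The timeliness gain described above comes from taking, per pixel, the earliest detection date across systems while the confidence class is upgraded as further systems confirm.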

Thursday 26 June 08:30 - 10:00 (Hall L1/L2)

Presentation: High Resolution Multispectral and SAR Remote Sensing Analysis of Agricultural Management Practices and Agricultural Intensity along the Senegal River Valley

Authors: Niklas Heiss, Dr. Jonas Meier, Dr. Frank Thonfeld, Dr. Verena Huber García, Dr. Ursula Gessner, Prof. Dr. Claudia Kuenzer
Affiliations: German Aerospace Center (DLR)
In the context of global change, the sustainable management of agricultural resources has become imperative, particularly for enhancing crop development in regions facing multifaceted challenges. West Africa, characterized by a rapidly growing population, persistently low agricultural yields, and climatic, social, economic and ecological stressors, exemplifies the need for innovative solutions. Addressing these pressures requires strategies for sustainable agricultural intensification – boosting yields without expanding cropland or exacerbating climate change impacts – and the implementation of robust mitigation and adaptation measures tailored to local socio-economic contexts. Remote Sensing (RS) is a powerful tool for monitoring agricultural development across space and time, enabling actionable insights for these challenges. A detailed literature review on the potential of RS for mapping small-scale agricultural and cropping systems in West Africa highlights the importance of high-resolution multispectral sensors in agricultural monitoring, driven by advances in sensor technology and computing. This study synthesizes existing knowledge, identifying remote sensing potential and key research gaps to address challenges in West African smallholder agriculture. In this context, the Senegal River Valley, a critical agricultural production area with a focus on rice, offers large potential for sustainable intensification. Its regulated river system enables various cropping intensities, with up to three cropping cycles a year through irrigation, yet the region faces climate-related risks such as flooding, shifts in rainy season onset, and socio-economic constraints.
A currently ongoing study within the framework of the COINS (Co-developing innovations for sustainable land management in West Africa smallholder farming systems) project leverages high-resolution multispectral Sentinel-2 and Planet data and Synthetic Aperture Radar (SAR) Sentinel-1 data to analyze agricultural management practices and cropping intensity across the Senegal River Valley, while also integrating data from stakeholder collaboration. By deriving key parameters such as field-specific agricultural management practices, we aim to assess the spatial and temporal dynamics of agricultural activities. As a first step, our approach focuses on detecting management practices including irrigation and cropping schedules. These parameters exhibit large variability in agricultural practices and farm sizes along the Senegal River. This contribution presents initial findings from these analyses. In the lower valley and delta regions towards the west, farming systems are more intensive. In contrast, moving eastward into the middle and upper valley, the agricultural landscape shifts towards smaller, subsistence-based systems. These smaller Senegalese farms face greater constraints in accessing resources such as sufficient irrigation, inputs, and labor, reflecting a gradient of decreasing agricultural intensity and commercialization along the river. These insights provide a foundation for optimizing land use, enhancing resilience to climate impacts, and supporting access to financing and insurance for smallholder farmers. This demonstrates that advanced RS methodologies can greatly contribute to sustainable farming practices and food security, fostering adaptation in smallholder systems within the Senegal River Valley amidst the broader challenges of climate change and population growth.

Thursday 26 June 08:30 - 10:00 (Hall L1/L2)

Presentation: Monitoring the environment and the territory of Taranto city through cloud based geoservices exploiting Copernicus services and missions

Authors: Dr. Giulio Ceriola, Dr. Giuseppe Maldera, PhD Antonello Aiello, Dr. Matteo Villa, Dr. Marcello Rossi, PhD Valentina Santarsiero, Dr. Andrea Piccolo, Dr Antonio Zilli, Dr. Marco Lippi
Affiliations: Planetek Italia s.r.l., Distretto Tecnologico Aerospaziale Pugliese
The Calliope Project aims to create systems for monitoring the environment and territory of the city of Taranto, Italy, to support improved land management and urban development through sensors, platforms for data management and presentation, and applications for specialized analysis. The project is coordinated by the Municipality of Taranto and co-financed by MIMIT (Ministry of Enterprises and Made in Italy). Space and aerial Earth Observation technologies were identified as a powerful source of information. DTA (Aerospace Technology District of Brindisi, Italy), partner of the Municipality of Taranto in the CALLIOPE project, has developed UAS-based services, provided space EO service requirements, and has coordinated the Planetek Italia team, a DTA consulting company, in the development of an innovative platform based on EO data. The platform is intended to monitor the environment by focusing on the quality of air, water and public green areas through three cloud-based geospatial services, each with a dedicated dashboard. The platform integrates EO-based geoinformation services with aerial data and geoinformation provided by other partners, and provides a powerful dashboard to extract geo-analytics supporting assessment, monitoring and alerting. The Air Quality Monitoring Service – Rheticus® Air Quality – exploits data from Copernicus Sentinel-5P and the Copernicus Atmosphere Monitoring Service (CAMS) to monitor several pollutants in near real time (NRT) in the city of Taranto and its surroundings, providing alerts if health threshold values (defined by local authorities) are reached. The monitored pollutants are PM2.5, PM10, CO, CH4, NO2 and SO2, at spatial resolutions ranging from 5 to 7 km. Monitoring is done daily (NRT) and also includes decadal and monthly means. Furthermore, geo-analytics over the year 2024 are provided for all these pollutants.
The Water Quality Monitoring Service, namely Rheticus® Marine, utilizes data from Copernicus Sentinel-2 and Sentinel-3 as well as from the Copernicus Marine Environment Monitoring Service (CMEMS) to monitor water quality and eutrophication-related parameters at different scales in the port of Taranto and the surrounding coastal area. The monitored parameters include chlorophyll, turbidity, water transparency and sea surface temperature, obtained through widely tested models and algorithms at spatial resolutions ranging from 10 m to 1 km. Monitoring is done daily in NRT, and an alerting system informs users when a specific parameter goes beyond thresholds based on geo-analytics calculated over the year 2024 on a daily, decadal and monthly basis. These geo-analytics are also available to the user. Rheticus® Urban Green is the third service; it performs urban green monitoring and provides an assessment of the health status of vegetation through indexes derived from Sentinel-2, including SAVI (Soil Adjusted Vegetation Index), LAI (Leaf Area Index), and NDVI (Normalized Difference Vegetation Index), at a spatial resolution of 10 m. The service monitors the health status of green areas in the city of Taranto and its surroundings on a monthly basis for the year 2024. Each service has a dedicated dashboard optimized to deliver information via a high-performance web interface with the following features: filters to select parameters, the ability to select a specific date from all available dates, spatial geo-analytics, extraction of graphs with the time series of the displayed parameter, base maps, and a help section. A time slider widget allows the user to examine all measurements collected during the period of image availability. The final users of the services are the Local Health Agency of Taranto (ASL Taranto) and the Taranto Municipality.
These users were strongly committed to supporting the project and evaluating the platform and the derived products, recognising the platform as a powerful tool to support them in their institutional activities. This allowed the development of very focused services and practices of EO data exploitation which, thanks to open and global Copernicus products, can be implemented in other cities to support sustainable development and management of urban territory and the nearby environment. Environmental and territorial monitoring is a key element in facing the global challenges of sustainability, climate change, and urban liveability. The users recognised that the Calliope platform developed by Planetek represents an advanced tool based on satellite data, designed to provide accurate analyses and predictive tools to support strategic planning and operational management. By combining new technologies with a user-friendly interface, the platform reduces the complexity of satellite data into ready-to-use information to improve environmental monitoring and to evaluate and protect air, water, and soil quality. The users identified key benefits deriving from the Calliope EO-based platform, including: protection and enhancement of the environment and natural resources; support for data-driven policy and regulatory decisions; optimization of emergency plans and resource management; development of innovative solutions for environmental management (also in support of the private sector, for example the Taranto port authority); improved quality of life and urban resilience; and promotion of environmental awareness through transparent data.
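The vegetation indices behind the Urban Green service (NDVI, SAVI) have standard definitions. A minimal numpy sketch of how they are computed from red and near-infrared surface reflectance (Sentinel-2 bands B4 and B8 at 10 m):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index; L is the standard
    soil-brightness correction factor (0.5 for intermediate cover)."""
    return (nir - red) * (1 + L) / (nir + red + L)

# Example reflectance values for two pixels (illustrative numbers)
red = np.array([0.05, 0.10])
nir = np.array([0.45, 0.30])
print(ndvi(nir, red))  # dense healthy vegetation yields values near 0.8
print(savi(nir, red))
```

A monthly monitoring service like the one described would evaluate these indices on each cloud-free composite and track their per-polygon statistics over time.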

Thursday 26 June 08:30 - 10:00 (Hall L1/L2)

Presentation: Rheticus Forest Carbon Offset: Transforming EO Data into Analytics for the Assessment of Emissions Offsetting Investments

Authors: Dr. Maurizio Laterza, Dr. Nicola Nicastro, Dr. Matteo Villa, Dr. Adriano Vulpio, Dr. Ana Raposo
Affiliations: Planetek Italia s.r.l., ESA ECSAT
According to the IPCC (Intergovernmental Panel on Climate Change) Working Group I contribution to the Sixth Assessment Report, the anthropogenic contribution to greenhouse gas (GHG) emissions has led to unprecedented warming of the Earth and extreme climate events. The IPCC also identified finance as one of the main catalysts and drivers to reduce net GHG emissions and enhance resilience to climate change impacts. Moreover, through the Corporate Sustainability Reporting Directive (CSRD), the EU made it mandatory for multinational and large companies to disclose data on the impact of their activities on people and the planet and any sustainability risks they are exposed to. REDD+ (Reducing Emissions from Deforestation and Forest Degradation) projects are an example of how companies can purchase carbon credits to offset their emissions, but it is not easy to assess how reliable the generated carbon credits are. Under the activity “Finance for a Green Transition” of the Generic Programme Line “Business Applications - Space Solutions” (BASS) managed by the European Space Agency (ESA), Planetek Italia is carrying out a Feasibility Study on the upcoming service “Rheticus® Forest Carbon Offset”. The service will be built on the existing Rheticus® geo-platform operated by Planetek Italia, a cloud-based infrastructure for the automatic processing of Earth Observation (EO) data that delivers continuous monitoring services. The service turns EO data and EO-derived datasets into site reports that provide a reliability assessment of emissions-offsetting investments for companies' sustainability strategies by evaluating forest carbon offsets of REDD+ projects. The service generates a synthetic reliability rating of REDD+ projects by combining objective geospatial information that considers structural (forest cover) and functional (water stress and fire) factors over at least 10 years of past data.
The service aims to use Sentinel-2 L2A data, Copernicus Global Land Cover, ESA WorldCover, MODIS Land Cover, and other global EO-based forest monitoring datasets, analysed through statistical analysis of time series. The rationale for integrating these space assets revolves around creating a service capable of generating multi-temporal information in the most automated way possible, to be integrated into analysis scenarios. The feasibility study encompasses Customer Engagement and Value Proposition Definition, the Technical Feasibility Assessment, the Viability Assessment, the Proof of Concept, and Preparation for Service Implementation. Several customer segments are targeted: consultancy companies benefit from a ranking of projects; investment funds benefit from reliable credit estimations that can be safely commercialised along the value chain; carbon accounting software houses benefit from objective measurements of forest data; and private market users gain more certainty about the carbon credits they are investing in. In particular, customers from consultancy companies and private market users are currently involved in the Proof of Concept of the service, helping to validate the critical assumptions regarding customer desirability and commercial viability, and co-designing the Key Performance Indicators to assess the aspects of the service that are of critical importance for users, customers, and stakeholders.

Thursday 26 June 08:30 - 10:00 (Hall L1/L2)

Presentation: U-Climat: Leveraging Satellite Data and Artificial Intelligence for Climate Risk and Impact Assessment

Authors: Valeria La Pegna, Davide De Santis, Dr. Giorgia Guerrisi, Lorenzo Giuliano Papale, Dr. Ilaria Petracca, Prof. Giovanni Schiavon, Fabio Del Frate
Affiliations: Tor Vergata University Of Rome
The increase in the frequency and intensity of extreme weather events due to climate change poses significant environmental, economic and social challenges, and prevention is one of the key actions to counter the growing effects of climate change. Given the increasingly devastating impacts, advanced operational solutions such as accurate climate risk assessment have become a priority. Today, Remote Sensing (RS) data can be used for a wide variety of Earth Observation (EO) purposes and often represents a better alternative to traditional survey methods. This is mainly because remote sensing provides a global view of the area of interest at different spatial and temporal scales, acquires data regularly and quickly, and can access remote locations. Established EO programs, such as the European Copernicus program, ensure a combination of reliable, freely available and systematic data. These characteristics make remote sensing an extremely valuable source for counteracting and mitigating the effects of climate change. In fact, satellite data are increasingly used in emergency situations, for example to map the areas affected by a disastrous event or to provide support to rescue teams. In spite of this, further efforts are still needed to actually transform EO data into concrete actions that help in dangerous situations and support the population. In the current work, we propose a new service, U-Climat (Urban-CLimat Impact Mitigation Adaptation Tool), that aims to address these needs by combining satellite EO data with Artificial Intelligence (AI) to develop advanced climate risk and impact mapping tools. In the field of EO, AI has emerged as a powerful driver of knowledge: Deep Learning (DL) techniques, chiefly Neural Networks (NNs), are now used in the analysis of geospatial data throughout processing chains.
U-Climat's objective is to furnish a risk assessment through a dynamic evaluation of exposure to damage from extreme events and the quantification of damage caused by such events, ultimately providing an innovative and precise assessment of climate risk and impact. First results were obtained in the context of extreme flood events, employing a multi-parametric approach using multi-source EO data. The key parameters considered included precipitation, soil moisture, and Land Surface Temperature (LST), obtained from different satellite missions for selected areas of interest: precipitation from the Global Precipitation Measurement (GPM) mission, which provides accurate observations of rain and snow; soil moisture from the Global Land Data Assimilation System (GLDAS); and LST from the Moderate Resolution Imaging Spectroradiometer (MODIS). Data were processed and analyzed using Google Earth Engine and Python environments, enabling efficient handling of large datasets across different temporal scales (hourly, daily, and monthly). First, the data were harmonized in terms of spatial and temporal resolution; then statistical analyses were conducted. For example, climatological anomalies were calculated by determining long-term averages of monthly values over 20-year periods, and extreme values were obtained by focusing on peak parameter values near event occurrences. Statistical techniques, including box plots and cross plots, were employed to evaluate relationships between input variables and the occurrence of extreme events. Preliminary analyses revealed significant correlations between precipitation, soil moisture, and LST during extreme flood events. Higher precipitation and soil moisture levels strongly correlated with flood occurrences, consistent with the hydrological characteristics of flood-prone events. LST showed a secondary but notable influence.
These findings informed the design of the NN, which achieved promising classification accuracy during initial testing phases. In more detail, the processed data were used to train a fully connected NN whose input features included averaged parameter values for the days preceding flood events recorded over the last twenty years. The NN model output is a binary classification: event or non-event. To enhance the model's predictive capabilities, additional parameters such as land cover, terrain slope, and urban infrastructure density will be integrated. Expanding the dataset to include higher-spatial-resolution inputs and exploring alternative machine learning architectures, including Convolutional Neural Networks (CNNs), will further refine the risk assessment. In addition, we identified some possible end users who could benefit from this service. For example, U-Climat can have significant implications for the insurance sector, particularly for property and casualty insurers: the capacity to quantify and predict climate risks facilitates more effective policy pricing, portfolio management, and regulatory compliance. Furthermore, this tool can inform urban planning and disaster management strategies, thereby reducing economic losses and enhancing societal resilience. By integrating EO data with AI, U-Climat offers a robust, scalable approach to climate risk assessment in urban areas. The combination of satellite big data and machine learning provides a unique opportunity to address one of the most pressing challenges of our time. The results underscore the potential of interdisciplinary approaches to mitigate the impacts of climate change through advanced scientific and technological innovation.
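The climatological-anomaly step described in this abstract (long-term monthly means over a 20-year record, then departures from them) can be sketched with synthetic data; the 20-year monthly layout and the gamma-distributed precipitation values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic record: 20 years of monthly precipitation totals (years x months)
precip = rng.gamma(shape=2.0, scale=50.0, size=(20, 12))

# Long-term monthly climatology: mean over the 20 years for each calendar month
climatology = precip.mean(axis=0)   # shape (12,)

# Climatological anomaly: departure of each observation from its month's mean
# (numpy broadcasting subtracts the (12,) climatology from every year's row)
anomaly = precip - climatology      # shape (20, 12)

# By construction, the anomalies average to zero for every calendar month
print(np.allclose(anomaly.mean(axis=0), 0.0))
```

Peak anomalies in the days or months preceding documented events are then candidates for the "extreme value" features that feed the binary event/non-event classifier.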

Thursday 26 June 08:30 - 10:00 (Hall L1/L2)

Presentation: Harnessing AI and EO for enhancing community resilience and sustainable resource management

Authors: Catalin Patilea, Dr. Raul Scarlat, Dr. Meghna Sengupta, Naman Jain, Dr. Gopika Suresh
Affiliations: Marble Imaging AG
The integration of Very High Resolution (VHR) Earth Observation (EO) data with Artificial Intelligence (AI) presents unprecedented opportunities for addressing global challenges in resilience and sustainability. This study presents four tools currently under development at Marble Imaging and explores advanced methodologies and applications across diverse domains, demonstrating the transformative potential of EO and AI. These tools are being developed in close partnership with end users to ensure smooth integration of the results into policy making and quick customer uptake. In addition, the tools will be optimised by the integration of Marble Imaging VHR data (80 cm VNIR and 6 m SWIR), which will be ingested into the workflows starting 2024. Marble Imaging's first satellite is fully financed through funding via the German Aerospace Center (DLR) and European Space Agency (ESA) InCubed. In this paper, we present four key areas where Marble Imaging's advanced analytics will be vital for supporting decision making with respect to community resilience and sustainable management: terrain trafficability assessment, aquaculture mapping, situational awareness, and the development of the Precious Coast Information System (PCIS). 1. Aquaculture Mapping: A convolutional neural network (CNN)-based approach is employed to detect and map aquaculture infrastructure globally using Sentinel-2 imagery. AI models, trained on annotated datasets representing diverse aquaculture systems, demonstrated high accuracy in identifying offshore cages, floating pens, and pond systems. VHR optical imagery and SAR data were utilized for validation, confirming the scalability and robustness of the Sentinel-2-based methodology. This approach provides actionable insights for ecosystem management, regulatory compliance, and resource allocation, contributing to sustainable aquaculture practices.
This tool is being developed in close co-operation with potential end-users in the aquaculture regulation and certification industry, supporting efficient and optimised due diligence, certification and regulatory policy making. 2. PCIS (Precious Coast Information System), a Modular Coastal Vulnerability Assessment System: This innovative software solution addresses coastal vulnerabilities by integrating various EO data, AI methodologies, and socio-economic datasets. With modular tools for monitoring coastal erosion, water quality, deforestation, and other environmental factors such as water turbidity and river outflows, PCIS empowers stakeholders to make informed decisions to protect their assets and reduce financial losses. Its scalability and the planned incorporation of Marble Imaging’s proprietary VHR satellite data ensure comprehensive and cost-effective vulnerability assessments. By 2026, the integration of 80 cm VHR data from Marble Imaging’s proprietary satellite constellation will enhance PCIS’s capability to deliver precise, tailored, and cost-effective solutions. 3. Terrain Trafficability Assessment: A novel framework for terrain trafficability analysis leverages Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 multispectral imagery to estimate soil moisture and derive load-bearing capacity. By integrating indices such as the Modified Soil Moisture Monitoring Index (MSMMI) and Soil Water Index (SWI), the framework produces detailed soil strength parameters, including the Cone Index. By integrating VHR multispectral data from the Marble constellation and leveraging the 6 m SWIR bands, the developed workflow will provide even more detailed estimates of soil strength and trafficability. These parameters are correlated with vehicle-specific requirements to classify terrain mobility, enhancing route planning and reducing operational risks in applications ranging from disaster response to military logistics. 4.
Situational Awareness Tool: To enhance operational awareness and disaster response, we present a multi-source data fusion tool that combines VHR EO imagery with open-source intelligence (OSINT). The system integrates SAR and multispectral imagery with geolocated news reports, social media content, and crisis datasets. Advanced AI algorithms enable change detection and object recognition, providing indications of changing risks that are vital for situational awareness with regard to environmental crimes, conflicts and crises. This tool is designed for dual-use applications, supporting both civil and security/military entities in monitoring critical areas and in responding effectively to emergent threats. These innovations demonstrate the capability of EO and AI to provide scalable, accurate, and actionable insights across diverse sectors. For quick and efficient customer feedback while developing these tools, we present demonstrations of the tools using Streamlit, allowing customers to understand the power of EO-based analytics and the reduction in time and money they can expect by using such tools. This also allows us to understand customer requirements with respect to product delivery. This work contributes to the broader scientific discourse on leveraging EO data to meet global societal challenges, aligning with the objectives of the ESA Living Planet Symposium.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.14)

Session: C.04.01 - MTG mission

The status of development of ESA missions will be outlined in four sessions of 1h30 each (equivalent to a full day).
Participants will have the unique opportunity to gain valuable insights in technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and the status of mission development activities will be presented together with industrial and science partners.

MTG mission


Overview of ESA and EUM meteo missions: 1 programme


  • Graeme Mason – ESA
  • Pieter Van den Braembussche – ESA

MTG-S Launch and Commissioning: Preparations, User Expectations, and Data Outlook


  • Jochen Grandell – EUMETSAT

IRS status and performance


  • Luis Rieger – OHB

Sentinel-4: the first ESA geostationary atmospheric quality mission on-board MTG-S1


  • Giorgio Bagnasco – ESA

FCI In-Orbit results


  • Alessandro Burini – EUMETSAT

LI In-Orbit results


  • Bartolomeo Viticchie – EUMETSAT
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 0.14)

Session: A.02.11 Forest Carbon Monitoring

ESA’s Forest Carbon Monitoring project (FCM) has developed a toolset for EO-based forest biomass and carbon monitoring, aiming to meet the requirements of different forestry stakeholder groups. The toolset provides means for the stakeholders to respond to increasing forest monitoring and reporting requirements, ranging from 1) policy related reporting on the compliance market (e.g. CRCF and LULUCF) to 2) voluntary reporting by private entities and 3) scientific analyses and development (e.g. Digital Twins) by academic institutes and international organizations.
The tools have been developed together with the project’s user partners including private companies, administrative departments and international organizations. They take advantage of multi-source EO datasets in combination with field reference data (where available). The tools have been demonstrated in over ten use cases in Europe and in the tropics, ranging in size from continental level mapping to small privately owned forest estates. The tools can be used on the Forestry TEP platform and will be made available as OGC compatible application packages through the ESA NoR portal.

The aim of this networking session is to introduce the toolset to new stakeholders and interact with the existing and prospective future user base. The session is envisioned to start with a general presentation of the project and an introduction to the available tools. This is followed by presentations by existing users on their experiences from the use case demonstrations. Most of the time will be reserved for discussion aiming to respond to questions, gather feedback and ideas for further development, and create connections with potential future users. LPS2025 is an excellent venue for arranging the stakeholder engagement event as it brings together a wide range of different types of stakeholders interested in the capabilities and potential of EO-based forest monitoring methods.

Moderators:


  • Gesche Schifferdecker - EFI

Speakers:


  • Jukka Miettinen - VTT
  • Zsofia Koma - NIBIO
  • Basanta Gautam - Southpole
  • Alessandro Cescatti - JRC
  • Naomi Swickard - Verra
  • Eva Gabriel de Francisco - Centre de la Proprietat Forestal
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K1)

Session: B.04.06 Fire Detection and Monitoring from Earth Observation Data for Rapid Disaster Response - PART 1

Fueled by a changing climate, wildfires are becoming more frequent and destructive every year. Remote sensing data from a variety of sources can help to detect and monitor fire outbreaks. This session aims to show capabilities of remote sensing (public satellites, cubesat constellations, UAVs, observation planes) in combination with recent advances in AI and data processing. The session welcomes contributions including but not limited to:
- Novel algorithms for fire detection
- Fire spread modeling
- Burned area mapping
- New data sources such as upcoming missions
- Benchmark or training datasets
- Multi-modal data for fire monitoring
- On-orbit processing
This session brings together research, policy and industry in fire preparedness and response informed by remote sensing data. It provides a platform for a fruitful exchange about the current state of technology, best practices as well as upcoming opportunities for novel techniques in this area.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K1)

Presentation: Bridging the Resolution Gap: Deep Learning-Based SEVIRI Downscaling for High-Resolution Near-Real-Time Monitoring

Authors: Maria Dekavalla, Chrysovalantis Tsiakos, Georgios Tsimiklis, Angelos Amditis
Affiliations: Institute of Communication and Computer Systems
The escalating frequency and severity of wildfires in Greece and other drought-prone regions affected by heatwaves and strong winds underscore the critical need for accurate, near-real-time fire detection and monitoring systems. These systems enable faster response times, optimise resource allocation, and safeguard communities and ecosystems against increasing fire risks. Among the most effective techniques for detecting burned and actively burning areas is monitoring active fires with satellite systems, known for their reliability, availability, rapid execution, and ability to monitor vast areas. Satellite sensors with fire detection and monitoring capabilities are available on Low Earth Orbit (LEO) and Geostationary Equatorial Orbit (GEO) platforms, offering distinct trade-offs between spatial and temporal resolution. LEO platform sensors, such as MODIS and Sentinel-3, provide a spatial resolution of 1 km in the visible-shortwave infrared (VIS-SWIR) bands but are limited by infrequent data updates, typically only twice daily. This limitation hinders their effectiveness in early fire detection and continuous monitoring. Conversely, GEO platform sensors, like the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI), can provide a more frequent temporal sampling of individual regions, which allows them to capture the temporal evolution and diurnal cycle of wildfires. Although SEVIRI has a lower spatial resolution (~3.5 km) than LEO platform sensors such as MODIS or VIIRS, its data are currently the only satellite data that can be used for fire detection and provide synoptic wildfire conditions over Europe. This highlights an urgent need to enhance SEVIRI’s spatial resolution to support emergency response management and decision-making. 
Advances in Deep Learning (DL) offer promising opportunities to improve the spatial resolution of SEVIRI and other geostationary satellite sensors and enhance their fire detection and monitoring capabilities. While some recent studies have explored DL methods for improving the spatial resolution of low-resolution GOES geostationary satellite data, none have focused on improving the spatial resolution of SEVIRI data for operational wildfire detection and monitoring. The ESA-funded ASIMOV project addresses this gap by introducing a novel framework that utilises DL-based techniques to downscale SEVIRI multispectral imagery, enabling continuous monitoring of active fire progression with spatial resolutions comparable to LEO sensors. This framework incorporates an ablation study involving variations of DL architectures, including U-Net and autoencoders, alongside tailored loss functions and evaluation metrics. Models were trained on a benchmark dataset of 1,635 SEVIRI-MODIS Active Fire image pairs from Greece, aligned with wildfire ignition points recorded by the Greek Fire Brigade from 2023–2024. Wildfire-specific ancillary data, such as topography (e.g., elevation and slope), were integrated into the training features to enhance model robustness. Specialised loss functions were employed to address dataset imbalances, including disparities between fire and no-fire pixels and the lower frequency of large wildfires. Hyperparameter optimisation was conducted using Bayesian Optimization with the Tree-structured Parzen Estimator (TPE), identifying optimal configurations of models and loss functions. The best-performing configurations included autoencoders with combined Dice and binary cross-entropy (BCE) losses and Jaccard with weighted BCE losses. Final training sessions incorporated one input feature ([SEVIRI 3.9 μm]) or twelve features ([SEVIRI 3.9 μm, HRV, 10.8 μm, 12 μm, 8.7 μm, 9.7 μm, 1.6 μm, 0.6 μm, 0.8 μm, elevation, slope, and topographic position index (TPI)]).
The training was conducted over 300 epochs with early stopping (patience = 15 epochs) and learning rate decay based on validation loss plateaus. The trained models were benchmarked against state-of-the-art DL approaches to downscaling GOES multispectral images. Evaluations were performed on the entire test dataset as well as on subsets filtered for small, medium, and large size fires. Results demonstrated that the optimised DL models and specialised loss functions outperformed reference models, producing high-resolution active fire maps from SEVIRI data that closely approximated the spatial resolution of MODIS products. With SEVIRI’s higher temporal resolution, these DL models enable the generation of near-real-time fire progression maps every five minutes. These advancements enhance current wildfire monitoring capabilities and lay the groundwork for downscaling multispectral images from the FCI sensor onboard the recently launched MTG platform. This could achieve even finer spatial resolution, below 1 km, for future operational fire detection and monitoring.
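The combined Dice and binary cross-entropy loss mentioned in the abstract can be sketched as follows; this is an illustrative NumPy version with an assumed equal weighting between the two terms, not the project's actual implementation:

```python
import numpy as np

def dice_bce_loss(pred, target, w_dice=0.5, eps=1e-7):
    """Combined Dice + binary cross-entropy loss for fire/no-fire masks.

    pred   -- predicted fire probabilities in [0, 1]
    target -- binary ground-truth mask (1 = fire pixel)
    w_dice -- assumed weighting between the two terms (illustrative)
    """
    pred = np.clip(pred, eps, 1 - eps)
    # Dice term: penalises poor overlap, robust to fire/no-fire imbalance
    inter = (pred * target).sum()
    dice = 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    # BCE term: per-pixel log loss
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()
    return w_dice * dice + (1 - w_dice) * bce
```

The Dice term addresses the fire/no-fire pixel imbalance the authors mention, since it depends on overlap rather than the (dominant) count of correctly predicted background pixels.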
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K1)

Presentation: A Deep Learning Approach for Active Fire Detection Using Multi-Temporal Geostationary Satellite Data

Authors: Jayendra Praveen Kumar Chorapalli, Max Bereczky, Dmitry Rashkovetsky, Paul Walther, Martin Werner
Affiliations: OroraTech GmbH, Technical University of Munich
Challenge: Wildfires, both a natural phenomenon and a destructive force, significantly affect the environment, ecosystems, economies, and public security. Detection and real-time monitoring of wildfire locations are crucial in fighting wildfires and reducing human casualties and property damage. Geostationary satellites, such as Himawari-8/9, offer unique advantages due to their high-frequency data acquisition, enabling the characterization of rapidly changing fire dynamics. However, traditional rule-based remote sensing methods used for fire detection in geostationary satellites often fail to detect small fires, e.g., due to coarse sensor resolution. Furthermore, the rigidity of these approaches hinders their adaptability to diverse climate zones and regional differences. Applying deep learning-based methods to the problem also poses challenges, as the skewed distribution of fire versus non-fire areas in the captured data and the temporal mismatch between labels and satellite images pose significant problems. Methodology: To address these challenges, we propose a deep learning approach for detecting active fires, trained on multitemporal geostationary satellite data. As labels, we use high-resolution multi-sensor active fire detections generated by aggregating individual detections from various sensors with a smaller ground sampling distance (GSD) than that of the Himawari-8 satellite. We hypothesize that these high-resolution labels can also improve the detection capabilities of smaller fires in coarse-resolution imagery. To enhance the model's ability to detect fires at their onset, we utilize time-series data comprising multiple images captured before the fire, leading up to the labeled timestamp, allowing for the detection of subtle changes in the signal strength caused by the start of a fire.
In addition to the information contained in the input bands, we evaluate auxiliary data such as land cover, biome types, elevation, geolocation, and observation time as potential features in the training process. Data from 2020-2021 is used for training and 2022 for testing. We propose using a time-aware convolutional neural network (3D U-Net) to account for the data's multi-modal and time-dependent properties to perform pixel-based segmentation on the input images. To evaluate the effect of using time-series data on the performance of the proposed model, it is compared against a 2D U-Net model that is applied to mono-temporal Himawari-8 data. The model's performance is evaluated using Precision, Recall, and F1-score. Results: Numerous experiments were conducted to evaluate the performance of the U-Net 3D and U-Net 2D models under various configurations. The most notable findings are as follows: When using Medium Wave Infrared (MWIR) and Long Wave Infrared (LWIR) bands as input features, U-Net 3D achieved Precision, Recall, and F1-scores of 0.47, 0.28, and 0.35, respectively, outperforming U-Net 2D, which scored 0.41, 0.21, and 0.28. Incorporating all spectral bands along with temporal information led to the best performance, with U-Net 3D achieving a Precision of 0.60, Recall of 0.33, and F1-score of 0.43, compared to U-Net 2D’s 0.50, 0.21, and 0.30, respectively. These findings demonstrate the importance of leveraging time-series and multi-modal data, with U-Net 3D delivering consistently superior results. While the reported performance metrics suggest that the proposed model still underperforms, especially in recall, this can be explained by the fact that the performance was evaluated using labels generated at a much higher level of detail than the input imagery. A comparison to the official Himawari Active Fire product is planned to understand whether this approach provides a step forward in detecting fires in geostationary satellite data.
Work in progress: We are currently integrating the official Himawari Active Fire product into our analysis to directly compare its performance with that of the U-Net 3D model. This comparison aims to determine whether using a deep learning model offers advantages over rule-based approaches for fire detection. Additionally, we plan to incorporate Himawari-9 data into our dataset and retrain the model, enabling us to evaluate its performance on Himawari-9 observations and assess any changes in accuracy. We aim to have these results finalized and ready to present at the conference.
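The Precision, Recall, and F1-score reported above are standard pixel-wise segmentation metrics; a minimal NumPy sketch (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def segmentation_scores(pred_mask, true_mask):
    """Pixel-wise Precision, Recall and F1 for binary fire masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()   # correctly detected fire pixels
    fp = np.logical_and(pred, ~true).sum()  # false alarms
    fn = np.logical_and(~pred, true).sum()  # missed fire pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Because fire pixels are rare, these metrics ignore the dominant true-negative background, which is why they are preferred over plain accuracy for this task.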
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K1)

Presentation: Diffusion Foundation Model for Robust Wildfire Monitoring Across Diverse Geographical Regions

Authors: Ali Shibli, Andrea Nascetti, Yifang Ban
Affiliations: KTH Royal Institute of Technology
Wildfire monitoring using satellite imagery is a critical application for mitigating the devastating effects of wildfires on ecosystems, infrastructure, and human life. However, it remains a challenging task due to the vast variability in geographical, climatic, and environmental conditions across regions. Traditional machine learning models often face difficulties in generalizing across regions, particularly in data-scarce environments. Despite the growing interest in foundation models for remote sensing, little work has been done to develop specialized foundation models tailored for wildfire monitoring, which require addressing unique challenges like regional variability and domain shifts. This research aims to address this concern by developing and assessing a remote sensing foundation model specifically tailored for wildfire monitoring, with a focus on generalization across diverse regions through large-scale pretraining and domain adaptation techniques. Specifically, we explore generative diffusion models pretrained on large-scale satellite datasets, such as Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 MultiSpectral Instrument (MSI) imagery. By leveraging multi-modal features and addressing challenges such as domain shifts and regional variability, the model significantly improves performance in downstream tasks, including fire localization and damage assessment. Our methodology includes pretraining a diffusion model on curated datasets from Sentinel-1 and Sentinel-2 imagery and fine-tuning it for specific wildfire-related tasks. Novelty is introduced through the integration of domain adaptation techniques, such as knowledge distillation and multi-task learning approaches that simultaneously optimize for fire localization and damage assessment. Validation is performed using wildfire datasets from diverse geographic regions, including the United States and Canada, with metrics such as F1 score and Intersection-over-Union (IoU). 
Preliminary results indicate that the proposed approach performs well across various wildfire-related tasks, including segmentation and damage assessment. This research sets the stage for developing reliable, robust wildfire monitoring systems by combining multi-modal data, addressing domain shifts, and reducing reliance on labeled data through self-supervised pretraining strategies.
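The Intersection-over-Union (IoU) metric used for validation above can be sketched as follows (an illustrative NumPy version, not the authors' code):

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection-over-Union for binary burned-area masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, true).sum() / union
```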
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K1)

Presentation: NRT (Near Real Time) Copernicus Sentinel-3 fires: Major evolution with the Baseline Collection 3 for operational and climate users

Authors: Julien Chimot, Dr. Edouard Martins, Dr. Andrea Meraner, Sauli Joro, Dr. Bertrand Fougnie
Affiliations: Eumetsat
The EUMETSAT Central Facility (CF) and the Land Satellite Application Facility (LSAF) have for several years jointly led the European NRT global fire products from Member State and Copernicus space-borne sensors. The Copernicus Sentinel-3 NRT Fire Radiative Power (FRP) product is generated by the OFRaP-CS3 (Optimized Fire Radiative Power for Copernicus Sentinel-3) processor, led exclusively by the EUMETSAT CF and entrusted by the European Commission (EC). EUMETSAT, the European Centre for Medium-Range Weather Forecasts (ECMWF), and King's College London (KCL) collaborate to prepare the future assimilation of this product by the Copernicus Atmosphere Monitoring Service (CAMS) into the Global Fire Assimilation System (GFAS), the Copernicus Emergency Management Service (CEMS), and other emergency service applications. The OFRaP-CS3 algorithm baseline is a joint development by EUMETSAT scientific experts and the KCL team, led by Prof. Dr M. Wooster and W. Xu (https://www.eumetsat.int/S3-NRT-FRP - Chimot et al., 2021 - EUM/SEN3/DOC/21/1255140 v1.0). The night-time Collection 1 mostly follows the original specifications from Wooster et al. (2021), originally implemented by the Sentinel-3 Mission Performance Centre (S3MPC), with several EUMETSAT developments added to enhance false alarm filtering: e.g. warm waters, cloud edges, the South Atlantic Anomaly, and F1 radiometry shooting anomalies. The day-time Collection 2 was released for the first time on 9 December 2021 (https://www.eumetsat.int/release-collection-2-s3-nrt-fire-radiative-power) as a demonstrational version only. It was developed by the EUMETSAT CF, with precursor detection thresholds devised by KCL. Since then, EUMETSAT has conducted a large series of technical meetings and activities to support users involved in operational services for wildfire and gas flare monitoring. The feedback and outcomes have identified a series of major requests to improve the usability of this product.
As such, EUMETSAT has developed the new baseline Collection 3, covering: 1) a comprehensive revision of the product format, with a significant improvement in its user friendliness, and 2) much enhanced day-time detection accuracy, bringing the algorithm towards operational scientific maturity. In this presentation, EUMETSAT will share the overall status of the OFRaP-CS3 algorithm baseline Collection 3, the performance & validation status, and the lessons learned (strengths & weaknesses). The latest validation results will show the maturity of the product and its readiness to ensure the continuity of the MODIS Terra time series for early morning / evening active fire detection. Finally, the combination of Sentinel-3 with geostationary MTG/FCI will be illustrated over Africa and Europe.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K1)

Presentation: Enhancing Satellite-Based Wildfire Detection Latency Through On-Board Data Processing: First results from the FOREST-2 Thermal Infrared Imaging Nanosatellite

Authors: Max Bereczky, Fabian Schöttl, Julian Franquinet, Martin Ickerott, Jayendra Chorapalli
Affiliations: OroraTech GmbH
Wildfires pose severe risks globally, necessitating rapid detection and response systems. Early wildfire detection based on current Earth observation satellites is mainly limited by sensor resolution and sensitivity, revisit time and data downlink latency, i.e. the time required for the sensor data to be available for advanced data analysis and fire detection algorithms on the ground. The recently launched nanosatellite, FOREST-2, equipped with a purpose-built thermal infrared imager for wildfire detection with a mid-wave infrared ground sampling distance of 200 m, aims to minimize the detection latency by employing advanced on-orbit processing for onboard data calibration, georeferencing and fire detection. The aim of this contribution is to provide an overview of the deployed system, a report on the already achieved fire detection latency improvements in an operational setting and an outlook to the launch of the first global constellation of wildfire detection satellites in early 2025. FOREST-2 incorporates a state-of-the-art Graphics Processing Unit (GPU) that enables on-orbit processing of thermal imaging data using rule-based and machine learning algorithms. Acquired thermal data is directly georeferenced and radiometrically calibrated using performance optimized onboard processing techniques. Fire detection algorithms are then executed to detect any observable active fires in the data. As only the location and some fire-specific information is of interest for early wildfire detection, the amount of data that must be downlinked is hereby effectively reduced from several tens of megabytes to a few bytes per fire. After reception of the data on the ground, no further extensive data processing is required, and alarms can directly be raised. The deployment and operational testing of FOREST-2 for on-orbit fire detection have demonstrated the efficacy of on-board data processing for rapid wildfire detection. 
The full system has been deployed globally in an operational setting since summer 2024 and performed on-orbit fire detection during more than one thousand data acquisition operations. Numerous wildfires have been detected within less than 4 minutes after data acquisition and have been downlinked via X-band downlink and a global network of ground stations. Notification latencies of less than 10 minutes have successfully been achieved. As no significant further processing is required on the ground, this rapid notification capability in combination with the high sensor resolution marks a substantial improvement over existing low-earth orbit wildfire detection systems.
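The data reduction described above, from tens of megabytes of imagery to a few bytes per fire, can be illustrated with a compact binary record. The field layout below (scaled int32 lat/lon, uint16 fire radiative power, uint32 timestamp) is entirely hypothetical; the actual FOREST-2 downlink format is not public:

```python
import struct

# Hypothetical 14-byte downlink record (big-endian), for illustration only:
# lat/lon as int32 scaled by 1e5 (~1 m precision), fire radiative power
# as uint16 in units of 0.1 MW, acquisition time as uint32 epoch seconds.
FIRE_RECORD = ">iiHI"

def pack_detection(lat_deg, lon_deg, frp_mw, epoch_s):
    """Pack one active-fire detection into a compact downlink record."""
    return struct.pack(FIRE_RECORD,
                       int(round(lat_deg * 1e5)),
                       int(round(lon_deg * 1e5)),
                       int(round(frp_mw * 10)),
                       epoch_s)

def unpack_detection(blob):
    """Recover the detection fields after reception on the ground."""
    lat_i, lon_i, frp_i, epoch_s = struct.unpack(FIRE_RECORD, blob)
    return lat_i / 1e5, lon_i / 1e5, frp_i / 10, epoch_s
```

With a layout like this, each detection costs 14 bytes, so even hundreds of fires per acquisition fit into a few kilobytes, and alarms can be raised directly on reception without further processing.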
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.61/1.62)

Session: A.05.11 How to Ensure Accuracy and Stability of Sea Level Measurements? Calibration, Validation and Verification for Current and Future Reference Altimeter Missions. - PART 1

This insight session will delve into the topic of validating sea level measurements from Sentinel-6 and future reference altimeter missions with a focus on the stability assessment. We aim to bring together experts with innovative ideas on in-situ and cross-comparison cal/val approaches, those from climate applications who can provide insights into the necessary uncertainty and long-term stability for robust and useful data, and experts on the instruments and data processing of the current mission.

The session will be split into three parts, each introduced by invited speaker presentations, followed by an extended open discussion.

First, understanding needs for future missions. By exploring observational needs to answer critical scientific questions, we aim to identify priorities for the next-generation altimetry reference mission. In this session we will consider the role of a reference mission within an altimetry constellation in providing robust information for climate studies at local, regional and global scales. Here, we will also consider the challenges of and opportunities provided by closure experiments that match observed sea level rise to independently assessed steric and ocean mass change datasets.

Second, establishing calibration, validation and verification approaches. We aim to identify and outline the validation systems required to ensure the accuracy and stability of sea level measurements at global, regional and high-resolution scales. This will consider whether the current network is sufficient to verify stability requirements for next generation altimetry missions, and/or what new approaches are needed. Here we will consider in-situ observational systems, including those that meet CEOS-FRM (fiducial reference measurement) standards and the potential of more novel approaches. The scope includes optimising the use of tide gauges and floating buoys, and exploiting networks of transponders and corner cubes, as well as exploring the potential of novel approaches such as the use of autonomous surface vehicles.

Third, the role of a reference mission for the altimetry constellation. Here, we consider how and why an altimetry mission is designated as “the reference mission” for all other altimeters. The aim of the discussion will be to define the role of a reference mission to support other altimeters, and what criteria must be met for a mission to have this designated status.

Overview:

The session, comprised of two 90-minute parts, is scheduled for Thursday morning, June 26, 2025. The first part will take place from 8:30 to 10:00 am, and the second from 11:00 to 12:30 pm.
- The first part focuses on Establishing sea level stability needs for future altimetry missions
- The second focuses on Establishing calibration, validation and verification approaches

Session Schedule:


Introduction of the session (part 1 and part 2)


  • Alejandro Egido - ESA

Part 1 : Establishing sea level stability needs for future altimetry missions


Closing the sea level budget


  • Anny Cazenave - CNRS/LEGOS

Detecting and attributing the signal in sea level


  • Ben Hamlington - NASA

Closing the energy budget and estimating the ocean heat uptake and the Earth energy imbalance


  • Benoit Meyssignac - CNRS/LEGOS

The role of a reference mission in the altimetry constellation


  • Estelle Obligis - EUMETSAT

The current mean sea level stability uncertainty budget of the reference missions


  • Victor Quiet - CLS

How to improve the mean sea level stability uncertainty budget for Sentinel-6 next generation?


  • Michaël Ablain - Magellium
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall G1)

Session: D.02.07 Large Language Model Agents and Applications for Earth Observation

This session will focus on the applications of Large Language Models (LLMs) in Earth Observation (EO) and related domains, as well as the creation of LLM-based agents. The session will aim to cover best practices to exploit small, openly available and accessible, pre-trained LLMs across various use cases in EO, Climate Science, and other relevant Earth Sciences. Some of these use cases for the EO and related domains include:

- Bibliographic aid of EO scientific literature, such as summarization of articles, structuring unstructured text, or question-answering on complex topics
- Content creation for understanding of EO science by the wider public
- Development support for EO software, such as code generation or analysis.

Topics for the session include, but are not limited to:
- Useful applications that leverage the knowledge within open-source LLMs for EO and related sciences.
- Digital assistant development and integration: LLM-powered assistants allowing users to interact with them in natural language for data retrieval, reasoning from user-provided information, insights from large datasets, etc. Accessibility of digital assistants for EO: making data more accessible to scientists, policymakers, educators, and the public. Bridging the gap between complex data and user understanding. Integration of an assistant into third party applications.
- Relevant techniques to create agents without requiring large amounts of data, and training, such as: In-Context Learning; Instruction Fine-Tuned-only models; Retrieval Augmented Generation; or Inference-time/Prompt Engineering like Chain-of-Thought prompting and Reflection.
- Parallel research including practices for: Creation of Synthetic Data, and Developing Evaluation Tools for different use-cases.
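The Retrieval Augmented Generation technique listed above can be sketched in a few lines. This is a toy illustration only: the corpus, the bag-of-words "embedding", and the cosine scoring are stand-ins for a vector store with learned embeddings.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG): score documents
# against a query, keep the best matches, and build a grounded prompt for an
# LLM. All data here is illustrative.
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: lowercase term counts (no learned model)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    """Assemble a prompt that grounds the LLM in retrieved context."""
    context = "\n".join(retrieve(query, corpus, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Sentinel-5P TROPOMI measures tropospheric NO2 columns daily.",
    "Sentinel-2 provides optical imagery at 10 m spatial resolution.",
    "ERA5 is a global atmospheric reanalysis dataset.",
]
prompt = build_prompt("Which mission observes NO2?", corpus)
```

A real system would swap `embed` for a sentence-embedding model and `corpus` for a vector database, but the retrieve-then-ground flow is the same.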

Thursday 26 June 08:30 - 10:00 (Hall G1)

Presentation: Large Visual Language Model Agents and Composite AI for Accessible EO-based Scenario Analysis

Authors: Yi Xiong, Cornelis Valk
Affiliations: NEO BV
End users working on climate adaptation struggle to find the right answers to their challenges in time. How effective is adding trees along this road to reduce heat? Which species is suitable for this location given the climate in 30 years? Where is drought expected in my nature reserve in the coming weeks? Which nature-based solution is most suitable to reduce drought here in the longer term? Some of their questions can be answered purely from scientific publications; others require data on local land cover, other EO-derived information, or climate projections. Often, finding the best answer requires combining various data sources. We present a composite AI system for intuitive, fast access to the best knowledge and data in selected domains, supporting ad-hoc scenario formulation and spatial analysis. The user interacts with a Large Visual Language Model as central orchestrator: an AI agent that interacts with the user and with other AI agents (leveraging LangChain and LangGraph) to formulate scenarios to analyse, collects relevant insights from texts and figures in scientific publications using Retrieval-Augmented Generation, and calls geoprocessing APIs to access and analyse spatial data. Results are presented in a user interface that includes a chat window and a map for displaying the results of spatial analyses. A proposal for developing the system to TRL6 has been submitted in response to ESA AO/1-12531/24/I-DT. We show the versatility and expandability of the system, opening access to insights derived from Copernicus data to a broader audience. We report on results achieved with the system for a use case involving scenario analysis for nature-based solutions related to water management in an urban environment.
By streamlining the integration of scientific knowledge, EO data, and climate projections, the system demonstrates significant potential to enhance decision-making processes for climate adaptation, supporting more effective and sustainable solutions tailored to complex local challenges.
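The orchestrator pattern described above can be sketched minimally. This is a hedged stand-in: the real system uses an LVLM with LangChain/LangGraph, whereas here the "agent" is a simple keyword router and the tools are stubs; all names are hypothetical.

```python
# Toy sketch of a central orchestrator routing user questions to specialist
# tools (literature retrieval vs. spatial analysis). Stub functions stand in
# for RAG over publications and geoprocessing API calls.

def literature_tool(question):
    # Stand-in for Retrieval-Augmented Generation over publications.
    return f"[literature] evidence relevant to: {question}"

def geoprocessing_tool(question):
    # Stand-in for a geoprocessing API call over spatial data.
    return f"[map] spatial analysis for: {question}"

def orchestrator(question):
    """Route the question to the tool its wording suggests."""
    spatial_cues = ("where", "map", "location", "area")
    if any(cue in question.lower() for cue in spatial_cues):
        return geoprocessing_tool(question)
    return literature_tool(question)

print(orchestrator("Where is drought expected in my nature reserve?"))
print(orchestrator("How effective is adding trees to reduce heat?"))
```

In the actual system the routing decision is made by the language model itself rather than keyword rules, but the dispatch structure is analogous.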

Thursday 26 June 08:30 - 10:00 (Hall G1)

Presentation: Bridging Text and SAR Images with Multimodal Vision-Language and Foundation Models

Authors: Solène Debuysere, Dr Nathan Letheule, Dr Nicolas Trouvé, Elise Colin
Affiliations: ESRIN, ONERA, DTIS, Université Paris Saclay, ONERA, DEMR, Université Paris Saclay
Synthetic Aperture Radar (SAR) systems use active microwave sensors to emit and capture electromagnetic backscatter signals, forming images fundamentally differently from optical systems, which rely on reflected sunlight. Thus, SAR images offer distinct insights that complement those provided by optical imagery. Moreover, this active sensing capability allows SAR to acquire images under all weather conditions and during both day and night, making it an invaluable tool for various monitoring applications. These include change detection, soil moisture monitoring, urban monitoring, deforestation and biomass assessment, ice and glacier mapping, flood monitoring, etc. The richness and complexity of the information in SAR imagery therefore justifies the use of deep learning technologies, which have demonstrated performance gains in many imaging areas. However, the unique statistical properties of SAR imagery (speckle noise, high dynamic range, and geometric distortions) present significant challenges for traditional vision models primarily designed for optical images. Moreover, the lack of open-source, well-labeled, or easily exploitable SAR text-image datasets is a barrier to the use of existing deep learning and foundation models. In that context, our study aims to exploit open-source pre-trained Vision-Language Models (VLMs) to bridge the gap between complex SAR data and user understanding. By combining SAR's robust sensing capabilities with the analytical power of pre-trained models, we want to use natural language as a flexible prediction space to enable generalization and transfer. Recently, [Trouvé et al. 2024] demonstrated that large language models can be useful in the generation of simulated radar images. This work demonstrates the feasibility of generating radar-style images by conditioning the Stable Diffusion model [Rombach et al., 2022] on textual prompts.
This approach, rooted in pre-trained architectures, is promising for simulating radar data with realistic characteristics, even if generating large-scale radar images with appropriate conditioning remains a major challenge, as we discussed in [Debuysère et al., 2024]. In this study, we want to highlight the use of one of the essential components of the Stable Diffusion model: the foundational architecture CLIP (Contrastive Language-Image Pretraining) [Radford et al. 2021]. CLIP enables a unified embedding space for images and text and has revolutionized the integration of visual and textual data across various domains. CLIP is based on a combination of two main networks:
- An image encoder based on a vision network (e.g. ResNet or ViT, Vision Transformer), which transforms an image into a vector (embedding) in a fixed-dimension space.
- A text encoder based on a transformer (such as GPT or a similar model), which transforms a text (sentence or description) into a vector in the same embedding space.
These two networks are trained jointly in a contrastive manner: for each image, CLIP is trained to bring its embedding closer to that of the text that describes it correctly, while moving the embeddings away from incorrect texts. This method relies on a huge training corpus made up of image-text pairs from the Internet (for example, an image accompanied by its descriptive caption). The use of the CLIP model in radar imagery could offer numerous applications: for instance, addressing the lack of semantic annotations in SAR databases by automatically generating descriptions or automating image labeling; enabling intuitive multimodal search in SAR archives through text-to-image retrieval, such as querying "find an image of flooding in a tropical forest"; or enriching semantic segmentation models by leveraging CLIP embeddings to detect specific structures or terrains, such as buildings, roads, or agricultural areas.
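The shared embedding space described above can be illustrated with a toy example: hand-made 3-D vectors stand in for encoder outputs, and the correct caption for an image is recovered as the nearest text embedding by cosine similarity. A sketch only; real CLIP embeddings are learned, high-dimensional, and produced by the ViT/ResNet and transformer encoders.

```python
# Toy illustration of CLIP's shared embedding space: matched image-text pairs
# end up close in the space, so caption retrieval reduces to a nearest-
# neighbour search by cosine similarity. All vectors are hand-crafted.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hand-crafted embeddings standing in for encoder outputs.
image_embeddings = {
    "sar_flood_scene": [0.9, 0.1, 0.0],
    "sar_urban_scene": [0.1, 0.9, 0.1],
}
text_embeddings = {
    "flooding in a tropical forest": [0.8, 0.2, 0.1],
    "dense urban area with buildings": [0.2, 0.8, 0.0],
}

def best_caption(image_name):
    """Return the caption whose embedding is closest to the image's."""
    img = image_embeddings[image_name]
    return max(text_embeddings, key=lambda t: cosine(img, text_embeddings[t]))

print(best_caption("sar_flood_scene"))  # the flooding caption wins
```

Contrastive training is what makes this retrieval work: it pulls each image embedding toward its correct caption and pushes it away from the others.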
In this study, we focus specifically on CLIP's potential for generating simulated SAR imagery. We propose to use CLIP in two key ways:
- Validation of prompt-to-image consistency: Using CLIP's latent space, we plan to verify that the user's textual prompt aligns with the embeddings of the generated image. While this approach is somewhat biased (given that models like Stable Diffusion are trained using CLIP or similar frameworks to optimize alignment between text and images), it provides an initial measure of how well the generated image reflects the prompt.
- Evaluation through a train-test framework: We propose creating a dataset of training and testing image-text pairs. The model will be trained on this dataset, and later used to generate images based on the test prompts. By comparing the embeddings of the generated images to those of the true test images (ground truth) in CLIP's latent space, we can assess how closely the generated images resemble their true counterparts in terms of semantic content.
In conclusion, this study aims to advance the use of CLIP for SAR applications by demonstrating its potential in the generation of simulated radar images. By establishing a link between text and SAR imagery, we aim to open up new possibilities for exploiting foundation models in the radar domain.
References:
- Trouve, N., Letheule, N., Lévêque, O., Rami, I., & Colin, E. (2024, April). SAR image synthesis using text-conditioned pre-trained generative AI models. In EUSAR 2024; 15th European Conference on Synthetic Aperture Radar (pp. 1387-1392). VDE.
- Debuysère, S., Trouvé, N., Letheule, N., Colin, E., & Lévêque, O. (2024, October). Synthesizing SAR Images with Generative AI: Expanding to Large-Scale Imagery. In RADAR 2024.
- Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., ... & Sutskever, I. (2021, July). Learning transferable visual models from natural language supervision. In International conference on machine learning (pp. 8748-8763). PMLR.
- Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 10684-10695).

Thursday 26 June 08:30 - 10:00 (Hall G1)

Presentation: DIVA – A climate chatbot companion leveraging Destination Earth

Authors: Dr Rochelle Schneider, Madleen Bultez, Christophe Taillandier, Xavier Brucker
Affiliations: ESA, Mews Partners, Mews Labs
Climate change is an increasingly critical topic, garnering significant attention worldwide. Communicating climate issues effectively to a broad audience is essential, yet accessing and analysing complex environmental data remains a challenge for non-experts. Journalists, for instance, often struggle to find the right data and fully grasp its potential, especially in newsrooms without dedicated data journalists capable of generating graphical representations from environmental datasets. The “DIVA” demonstrator, a fully open source chatbot developed within the framework of the Destination Earth (DestinE) project, aims to address these challenges. DIVA leverages pre-trained Large Language Models (LLMs) to enable natural language interactions, allowing non-experts to access complex climate data effortlessly and generate downloadable graphical illustrations based on climate-related analyses. The DIVA demonstrator aims to empower users without technical expertise to explore and understand environmental data, thereby enhancing public awareness and engagement with climate science. It seeks to demonstrate how to customize an open-source LLM and improve its performance in recognizing temporal and spatial parameters within user queries. Additionally, DIVA illustrates the implementation of a "prompt-to-chart" LLM within the DestinE ecosystem, enabling the generation of graphical data representations directly from natural language prompts. It is worth noting that the goal of a DestinE “demonstrator” is to show the innovation potential of the platform; DIVA is not a fully operational service and has been designed to inspire future service providers and DestinE’s community. A strong effort is being made on the capitalization of the different technical bricks, as DIVA will be available in open source on DestinE’s platform. The DIVA demonstrator was developed with active participation from journalists, a representative group of non-expert users.
Their insights shaped DIVA's functionality to meet the real-world needs of those communicating climate change issues to the public. The DIVA chatbot employs a scope management functionality to assess whether a conversation falls within its domain, ensuring relevant and accurate responses while preventing it from addressing generic questions. To enhance safety and appropriateness, a function has also been developed to measure the "toxicity" of user prompts, enabling the chatbot to handle or avoid inappropriate content effectively. To improve the model's performance, several advanced prompt engineering techniques have been utilized. Automated tools and manual assessments are used to monitor the chatbot's performance over time, ensuring that each evolution meets the desired standards and effectively addresses user needs. Specifically, pre-prompting involves using custom prompts to guide the LLM in performing specific tasks, managing conversation history, and maintaining context throughout interactions. Additionally, hybrid strategies combine the capabilities of LLMs with Natural Language Processing (NLP) techniques and keyword recognition to enhance response accuracy and relevance. For model enhancement, fine-tuning has been studied to better understand user inputs regarding timeframes, locations, and specific climate variables. Both automatic and manual reinforcement learning methods are being explored to continually improve the chatbot's performance, with further developments anticipated by 2025. In terms of data preparation and graphical view plotting, Python scripts and open-source libraries are used to process data sourced from ERA5 (for data prior to 2024) and from the DestinE Climate Digital Twin (for data from 2024 onwards). This approach enables the chatbot to generate interactive graphical representations of climate data in response to user queries.
Pilot testing with journalists demonstrated that DIVA significantly reduces the time and effort required to access and interpret climate data. Users reported that the chatbot's ability to generate immediate graphical illustrations from natural language queries is particularly beneficial for confirming correlations between events or validating an intuition for instance. DIVA exemplifies the application of open-source LLMs in Earth Observation and climate science, aligning with the session's focus on leveraging LLMs for practical, impactful solutions. The project showcases how advanced language models can be employed without extensive proprietary data or training, emphasizing techniques like prompt engineering and hybrid NLP strategies. A notable innovation in this project is the development of advanced methods for interpreting temporal and spatial parameters within user prompts. DIVA effectively recognizes time expressions and location references, converting natural language inputs into precise start and end dates, and accurately mapping to specific geographical areas. This capability addresses a common challenge in LLMs—understanding and processing temporal and geospatial information in natural language queries. By refining how LLMs handle time and location data, DIVA paves the way for broader applications of LLMs across various domains, such as disaster management or personalised services. Building on these advancements, some opportunities are currently being explored to expand DIVA's scope into other fields and to showcase additional technical components of LLMs, including Agents, Retrieval-Augmented Generation (RAG), and GraphRAG methodologies. Incorporating these technologies aims to further enhance DIVA's capabilities, enabling it to handle more complex tasks and interact with external data sources more effectively.
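The temporal-parameter extraction described above (converting natural-language time references into explicit start and end dates) can be sketched with a toy parser. DIVA's actual pipeline combines an LLM with NLP techniques; this hedged illustration handles only two patterns with regular expressions.

```python
# Toy temporal parser: map simple time expressions in a prompt to explicit
# (start, end) dates, relative to a fixed reference date. The patterns and
# reference date are illustrative only.
import re
from datetime import date, timedelta

def parse_timeframe(prompt, today=date(2024, 6, 1)):
    """Return (start, end) dates for simple expressions, else None."""
    m = re.search(r"last (\d+) days", prompt.lower())
    if m:  # relative window, e.g. "last 30 days"
        n = int(m.group(1))
        return today - timedelta(days=n), today
    m = re.search(r"\bin (\d{4})\b", prompt.lower())
    if m:  # whole calendar year, e.g. "in 2023"
        year = int(m.group(1))
        return date(year, 1, 1), date(year, 12, 31)
    return None

print(parse_timeframe("Show rainfall anomalies over the last 30 days"))
print(parse_timeframe("Plot the mean temperature in France in 2023"))
```

A production system would also resolve seasons, month names, and ambiguous references from conversation context, which is where the LLM earns its keep.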

Thursday 26 June 08:30 - 10:00 (Hall G1)

Presentation: GAIA: A Global, Multimodal, Multiscale Vision-Language Dataset for Remote Sensing Image Analysis

Authors: Angelos Zavras, Dr Dimitrios Michail, Dr Xiao Xiang Zhu, Dr Begüm Demir, Dr Ioannis Papoutsis
Affiliations: Orionlab - National Technical University of Athens & National Observatory of Athens, Harokopio University of Athens, Technical University of Munich, Technical University of Berlin
The Earth’s orbit is teeming with a growing constellation of Earth Observation (EO) satellites, continuously producing an unprecedented volume of diverse and complex information about our planet. Notably, as of today, Europe’s Copernicus satellite constellation alone produces more than 25 terabytes of satellite data every day, which outstrips our capacity to extract meaningful information. Natural language offers an intuitive interface for interacting with and interpreting these images. Existing general-purpose Vision-Language Models (VLMs) are primarily trained on web-scraped, noisy image-text paired data, with limited exposure to RS-related samples, which often lack detailed in-domain image descriptions and focus mainly on attributes like date and location. Currently, the main focus in training high-quality RS-specific VLMs revolves around overcoming the scarcity of in-domain image-text paired data. Efforts span from leveraging existing limited-scale datasets to creating synthetic datasets for training or fine-tuning existing VLMs for RS. Our work stands out by exploiting web-scraped images along with their accompanying text from reputable RS-related sources to generate five synthetic yet scientifically grounded captions for each image, using well-structured prompts to exploit the advanced vision-language capabilities of ChatGPT-4o. We introduce GAIA, a Global-scale Archive of In-domain Annotated data for multi-scale, multi-sensor, multi-modal RS image captioning and retrieval. GAIA comprises 200k RS image-text pairs, covering a wide range of RS modalities and spatial resolutions, spanning the entire globe with equally distributed observations over the last 25 years.
GAIA was developed using a two-stage process: first, web-scraping images and their accompanying text from reputable RS-related sources; second, generating five synthetic, scientifically grounded captions for each image using well-structured prompts to exploit the advanced vision-language capabilities of ChatGPT-4o. For completeness, we release the automated processing framework we created to generate captions of RS images by leveraging the aforementioned web-crawled image-text data. Preliminary benchmarks, including fine-tuning CLIP, demonstrate the dataset's effectiveness in RS imagery classification and cross-modal retrieval tasks.

Thursday 26 June 08:30 - 10:00 (Hall G1)

Presentation: EVE: A Comprehensive Suite of LLMs for Earth Observation and Earth Sciences

Authors: Dr Àlex R. Atrio, Ms Vijayasri Iyer, Mr Marcello Politi, Dr Cristiano de Nobili, Dr Sébastien Bratières, Ms Elena Christodoulou, Mr Christopher Phillips, Dr Andreas Vlachos, Dr Nicolas Longépé, Antonio Lopez
Affiliations: Pi School, Translated, MHPC (SISSA/ICTP), Imperative Space, University of Cambridge, Φ-Lab, European Space Agency
Large Language Models (LLMs) have proven to be effective as general purpose tools to aid in a variety of tasks, due to their adaptability, ability to process large volumes of data, and capabilities in textual analysis and generation. However, when targeting specific domains, LLMs trained on general domain data require domain-specific knowledge to achieve state-of-the-art results, particularly in technical and scientific disciplines like Earth Observation (EO). This can be achieved by developing domain-specific LLMs, either by training from scratch with vast amounts of domain-specific data [1, 2], or by supervised/instruction fine-tuning of general domain LLMs [3, 4, 5]. Alternatively, a general domain LLM can be allowed to retrieve answers from a domain-specific external database. Inspired by this trend, we develop Earth Virtual Expert (EVE), a comprehensive suite of LLMs for Earth Observation and related Earth Sciences. With it, we also develop the relevant training and benchmarking data and strategies. These aim to support a wide range of EO use cases both for specialists and the wider public, including text generation, summarization, and question answering. EVE is created by further pre-training several sizes of open-source general domain LLMs on billions of tokens of curated high-quality scientific EO data sources. We then fine-tune instructed models with datasets we created and with authentic preference data. Finally, we integrate the chat models with an external curated database for Retrieval Augmented Generation (RAG). EVE, the resulting chat system, is designed to become a helpful assistant within EO, and can cater to a wide audience of users, both scientific specialists and the general public interested in any discipline related to EO, such as policymakers, journalists, or NGOs.
The target use cases include support for bibliographic exploration, assisting and informing policy decision-making, and enhancing EO approachability for a non-specialized public. For these use cases, a particular focus is placed during the design of the system on minimizing factuality-related hallucinations. Our contributions include:
1) Domain-Specific Models: Two sizes of domain-adapted models, both pre-trained on billions of EO-specific tokens and fine-tuned for chat instruction-based interaction in EO and related Earth Sciences.
2) Benchmarking Datasets: for EO instruction adherence, alignment, and evaluating model performance on hallucination mitigation, enabling robust validation and iteration.
3) Training Data: i) A curated corpus containing billions of cleaned and processed tokens specific to EO; ii) Instruction datasets designed for fine-tuning models on EO downstream tasks; iii) Authentic preference/alignment data.
4) Retrieval-Augmented Generation (RAG) System: A curated RAG database of EO-relevant documents, integrated with the chat models to facilitate accurate and contextually grounded responses.
5) Hallucination Mitigation Strategy: A fact-checking method to suppress factual errors generated by the RAG system.
6) Open-Source Codebase: The supporting code for data processing, model fine-tuning, and deployment of the RAG system, to ensure reproducibility and usability.
The adapted and instructed models, corresponding datasets and benchmarks will be released as open-source contributions to Earth Observation and the Earth Sciences through public repositories.
References:
[1] R. Taylor, M. Kardas, G. Cucurull, T. Scialom, A. Hartshorn, E. Saravia, A. Poulton, V. Kerkez, and R. Stojnic, "Galactica: A large language model for science." arXiv preprint, arXiv:2211.09085, 2022.
[2] B. Bhattacharjee, A. Trivedi, M. Muraoka, M. Ramasubramanian, T. Udagawa, I. Gurung, N. Pantha, et al., "INDUS: Effective and Efficient Language Models for Scientific Applications." arXiv preprint, arXiv:2405.10725, 2024.
[3] Z. Lin, C. Deng, L. Zhou, T. Zhang, Y. Xu, Y. Xu, Z. He, et al., "Geogalactica: A Scientific Large Language Model in Geoscience," arXiv preprint, arXiv:2401.00434, 2023.
[4] C. Deng, T. Zhang, Z. He, Q. Chen, Y. Shi, Y. Xu, L. Fu, et al., "K2: A Foundation Language Model for Geoscience Knowledge Understanding and Utilization," in Proceedings of the 17th ACM International Conference on Web Search and Data Mining, pp. 161–170, 2024.
[5] T. de Haan, Y.-S. Ting, T. Ghosal, T. D. Nguyen, A. Accomazzi, A. Wells, N. Ramachandra, R. Pan, and Z. Sun, "AstroMLab 3: Achieving GPT-4o Level Performance in Astronomy with a Specialized 8B-Parameter Large Language Model," arXiv preprint, arXiv:2411.09012, 2024.
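A fact-checking step for a RAG pipeline, as mentioned in the abstract above, can be sketched in miniature. The actual EVE method is not described here, so this is a hedged illustration: answer sentences whose content words rarely appear in the retrieved context are flagged as potentially unsupported.

```python
# Toy fact-checking sketch for RAG output: flag answer sentences with low
# word overlap against the retrieved context. Threshold and data are
# illustrative; real systems use entailment models or claim verification.
import re

def unsupported_sentences(answer, context, min_overlap=0.5):
    """Return answer sentences whose word overlap with the context is low."""
    context_words = set(re.findall(r"[a-z0-9]+", context.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = set(re.findall(r"[a-z0-9]+", sentence.lower()))
        if not words:
            continue
        overlap = len(words & context_words) / len(words)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged

context = "Sentinel-1 is a C-band SAR mission operated by ESA."
answer = ("Sentinel-1 is a C-band SAR mission operated by ESA. "
          "It was launched from the Moon in 1969.")
print(unsupported_sentences(answer, context))  # flags the second sentence
```

Lexical overlap is a crude proxy for support; it misses paraphrases and passes topically related hallucinations, which is why production fact-checkers rely on learned entailment or citation verification instead.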

Thursday 26 June 08:30 - 10:00 (Hall G1)

Presentation: Semantic Enrichment of Synthetic Earth Observation Data: Concept and Technical Implementation

Authors: Thorsten Hoeser, Marco Ottinger, Robin Spanier
Affiliations: German Aerospace Center (DLR)
With the advent of large language models (LLMs) and LLM-based agents, new opportunities arise for Earth observation (EO) data analysis. Central to leveraging the capabilities of LLMs is the development of semantic interfaces that enrich EO data, data products, and processing pipelines. This presentation explores the integration of semantic descriptions into synthetically generated remote sensing scenes, complementing conventional computer vision labels such as segmentation masks and bounding boxes. The goal is to introduce a semantic interface that facilitates machine learning-ready EO datasets and workflows. The applied synthetic data generation process employs an object-oriented approach, assembling individual object instances into coherent and meaningful remote sensing scenes. A key concept is the graph-based representation of scene elements and their interrelations, which are encoded as explicit property-graphs for each synthetic scene. These graphs not only guide the construction of the scene but also serve as the foundation for generating a detailed semantic description of it. The presentation will emphasize the methodology for deriving semantic descriptions from these property-graphs, including mechanisms for adjusting levels of detail to suit various applications. In addition to the scene-level description, the aggregated key-value pairs from the property graphs can be compiled into a comprehensive metadata database. The database’s schema is another way of communicating the property-graph’s structure and enables efficient querying and summarization of the entire dataset's characteristics. By leveraging this structured metadata, users, including LLM-agents, are able to perform queries and select specific samples based on desired metadata configurations. This capability enables the construction of tailored training datasets, supporting advanced experimental setups and facilitating more targeted model development and evaluation processes.
By formalizing a structured approach to semantic enrichment, this work enables LLMs to interpret detailed metadata associated with machine learning-ready EO datasets. Moreover, the proposed method supports advanced applications by providing rich semantic descriptions of remote sensing scenes, such as training models capable of describing EO scene content (image-to-text) and generating synthetic training pairs for text-to-image tasks.
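The property-graph idea described above can be sketched minimally: nodes carry typed key-value properties, edges encode relations, and a textual scene description is derived at an adjustable level of detail. Node names and attributes here are invented for illustration; the DLR pipeline encodes far richer properties.

```python
# Toy property-graph for a synthetic remote sensing scene, plus a function
# that derives a semantic description from it at two levels of detail.

scene_graph = {
    "nodes": {
        "building_1": {"type": "building", "roof": "flat"},
        "road_1": {"type": "road", "surface": "asphalt"},
        "tree_group_1": {"type": "vegetation", "species": "oak"},
    },
    "edges": [
        ("building_1", "adjacent_to", "road_1"),
        ("tree_group_1", "lines", "road_1"),
    ],
}

def describe(graph, detail="full"):
    """Derive a scene description from the property graph.
    detail='summary' lists object types only; 'full' adds relations."""
    types = [n["type"] for n in graph["nodes"].values()]
    text = "Scene contains: " + ", ".join(sorted(set(types))) + "."
    if detail == "full":
        for src, rel, dst in graph["edges"]:
            text += f" {src} {rel.replace('_', ' ')} {dst}."
    return text

print(describe(scene_graph, detail="summary"))
print(describe(scene_graph, detail="full"))
```

Flattening the same node properties into key-value records is what would populate the queryable metadata database the abstract mentions.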

Thursday 26 June 08:30 - 10:00 (Hall E2)

Session: D.01.08 4th DestinE User eXchange - Addressing Data and Service Needs

#zarr

Addressing Data and Service Needs

Maximising the impact of DestinE requires that its data products and services align with what users across science, policy, and industry need.

This session will explore the challenges and opportunities of accessing and using data available through DestinE, combining technical insights with real-world user developments. Participants will gain an overview of the different ways to access Digital Twin data and learn about data-oriented services. Attendees will also hear from users who have developed applications or contributed data to the DestinE system. The session will conclude with an open discussion on data formats and upcoming developments.

Introduction to the session by presenting DestinE Data offering


  • Danaële Puechmaille - EUMETSAT

How to access DestinE data? • HDA • Polytope • Platform Services


  • Michael Schick - EUMETSAT
  • Tiago Quintino - ECMWF
  • Inés Sanz Morere - ESA

Serve DestinE users with near data computing capabilities (EDGE services)


  • Miruna Stoicescu - EUMETSAT

AI4Clouds application demonstrator using DestinE


  • Fernando Iglesias - Predictia Intelligent Data Solutions SL

Visualizing data in DestinE


  • Barbara Borgia - ESA

A collaborative toolbox to build and share your digital twin components – Delta Twin


  • Claire Billant - Gael Systems

Moderated discussion:


  • Data formats challenges (netcdf, zarr etc.)
  • New developments
  • Data quality
  • Training data and ML models
  • Contribute to Data Portfolio

Thursday 26 June 08:30 - 10:00 (Hall K2)

Session: A.01.04 Advancing Air Quality Monitoring from Space - PART 1

Air pollution is a major concern throughout the world in both developed and developing countries, especially in urban areas, where over 50% of the world population lives. Swelling urban populations and increased volumes of motorized traffic in cities have resulted in severe air pollution affecting the surrounding environment and human health. The European Environment Agency (EEA) estimates that 40-50% of air pollution in the most important European cities is attributed to vehicular emissions.

The investigation of air pollution over megacities by means of satellite observations has recently become a central topic of interest within the air pollution community, especially thanks to the observing capabilities of Sentinel-5P in terms of spatial resolution.

Nonetheless, space-borne platforms alone cannot provide a full picture. To investigate the spatio-temporal variability of air pollution on local, regional, and global scales, new tools are being developed. In this context, the detection, tracking and understanding of pollutant transport on various spatial scales are of both local and global interest. Specifically, in rural and remote areas, where no ground-based monitoring network for air pollution is available, the use of satellite data can provide an estimation of the regional distribution of pollutants, in order to assess the impact of specific events (e.g., biomass burning or dust storm outbreaks).

Satellites observe air pollution in the troposphere, and its relation to surface concentrations must first be resolved for air quality monitoring applications. This session is dedicated to presenting new algorithms and approaches for the downscaling of air quality satellite observations, and to exploring novel assimilation methodologies that combine satellite retrievals with in-situ measurements and air quality modelling, considering all relevant satellite missions (e.g. Sentinel-5P), the future availability of hourly observations from Sentinel-4, and other future capabilities, e.g. Sentinel-5 and CO2M.

Thursday 26 June 08:30 - 10:00 (Hall K2)

Presentation: The ESA World Emission Project: Demonstrating the Capabilities and Relevance of a Service for the Regular Provision of Global to Local Scale Estimates of Atmospheric Pollutant Emissions Based on Satellite Observations

Authors: Grégoire Broquet, Philippe Ciais, Beatriz Revilla-Romero, Patricia Pérez-Ramirez, Audrey Fortems-Cheiney, Pramod Kumar, Didier Hauglustaine, Rimal Abeed, Frederic Chevallier, Xin Lin, Pierre Le Sann, Srivatsan Anand, Quentin Peyle, Clément Giron, Martin Van Damme, Bruno Franco, Lieven Clarisse, Antoine Honet, Steffen Beirle, Adrian Jost, Marc Guevara, Paula Castesana, Jonilda Kushta, Angelos Violaris, Colas Robert, Mélanie Juillard, Jean-Pierre Chang, Bo Zheng, Daniele Gasbarra, Antony Delavois
Affiliations: Laboratoire des Sciences du Climat et de l’Environnement, LSCE/IPSL, CEA-CNRS-UVSQ, Université Paris-Saclay, GMV, Remote Sensing & Geospatial Analytics Division, Science Partners, Kayrros, Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing (SQUARES), BLU–ULB Space Research Center, Université libre de Bruxelles (ULB), Max-Planck-Institute for Chemistry, Satellite remote sensing group, Barcelona Supercomputing Center, The Cyprus Institute, Climate and Atmosphere Research Center, Citepa, Institute of Environment and Ecology, Tsinghua Shenzhen International Graduate School, Tsinghua University, ESA, European Space Agency, ESRIN
The growing number of spaceborne sensors dedicated to the observation of the atmospheric composition opens broad perspectives for monitoring air pollutant and greenhouse gas (GHG) emissions at scales ranging from continents and large countries to European countries, administrative regions, large urban areas, and industrial sites or power plants. This monitoring relies on atmospheric transport and chemistry inverse modeling techniques, which are based on global to local atmospheric models embedded in statistical inversion frameworks, or on more direct satellite image processing techniques. The resulting top-down quantification and mapping of the emissions is expected to support the improvement, development and evaluation of the traditional bottom-up emission inventories based on activity data such as statistics on fuel use, and emission factors. Operational systems are currently being developed as complex systems coupled to global or national models (e.g., in Europe, within the framework of the Copernicus program) to provide high-resolution to national scale emission products with a good sectoral resolution, which could be directly handled by inventory agencies or within the framework of international treaties aimed at reducing emissions. However, new types of satellite observations of the atmospheric composition are regularly developed. A growing number of species can be monitored either individually or within joint analysis frameworks. The ensemble of methods developed to process the observations (in particular the satellite images) at the different spatial scales is constantly expanding with innovative approaches and tools being developed. 
This pushes for maintaining, alongside the long-term development and operation of complex systems, flexible portfolios of adaptive inversion systems designed for specific species and scales, and for using them to explore the potential of new observations and assess the relevance of the corresponding emission estimates to societal needs. Since 2022, global- to local-scale inversion approaches have been combined into a general service for the production and distribution of pollutant and GHG emission estimates as part of the ESA World Emission project (https://www.world-emission.com). World Emission aims to demonstrate the added value and readiness of “top-down” (inversion-based) emission inventories derived from Earth Observation for continents, countries, and regional or local emission hotspots. It brings together research institutes and private companies with expertise in satellite products and atmospheric inversions, as well as inventory agencies among other key project stakeholders, fostering close collaboration. Another objective is to assess synergies with other European projects dedicated to global and national monitoring of air pollutant and GHG emissions. This presentation will detail the overall framework of the World Emission service: the satellite observations that are used (with a core dataset from the TROPOMI instrument), the different inversion systems routinely operated (with regular updates) for the different pollutants (NOx, NH3, SO2, VOCs, CO) and GHGs (CH4 and CO2) at global, regional, and local scales since 2019, and the data flow and distribution, including the publicly accessible web-based visualization interface.
It will highlight several prominent scientific and technical achievements of the project, e.g., the extensive collections of NOx, NH3, SO2, VOCs and CH4 point source estimations, the analysis of the impact of the COVID-19 crisis on NOx emissions across the globe (at the scale of countries and provinces), the identification of large-scale adjustments to global inventories of CH4, SO2 and NH3 emissions, and key methodological improvements. It will illustrate the efforts to dynamically visualize the products, validate the overall set of emission estimates, and compare them to official emission inventories, providing prospects for both the improvement of the inversion products and their use to support emissions reporting.
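The statistical inversion frameworks mentioned above share a common core: a prior (bottom-up) emission estimate is updated with satellite observations through a transport operator. A minimal scalar sketch of that update, with invented numbers and a hypothetical `invert_emission` helper (not World Emission code), could look like:

```python
# Illustrative scalar Bayesian inversion: update a prior (bottom-up) emission
# estimate with a single satellite-derived column observation. All numbers
# and the helper name are invented; operational systems solve
# high-dimensional versions of the same update.

def invert_emission(x_prior, sigma_prior, y_obs, sigma_obs, h):
    """x_prior: prior emission rate (e.g. kt/yr from a bottom-up inventory);
    sigma_prior: its 1-sigma uncertainty; y_obs: observed column enhancement;
    sigma_obs: observation + transport-model error; h: modelled sensitivity
    of the column to the emission."""
    b = sigma_prior ** 2
    r = sigma_obs ** 2
    gain = b * h / (h * h * b + r)          # Kalman gain
    x_post = x_prior + gain * (y_obs - h * x_prior)
    sigma_post = (b - gain * h * b) ** 0.5  # posterior uncertainty
    return x_post, sigma_post

# Prior of 100 kt/yr +/- 40; the observation implies a stronger source.
x_post, sigma_post = invert_emission(x_prior=100.0, sigma_prior=40.0,
                                     y_obs=1.5, sigma_obs=0.2, h=0.01)
```

The posterior is pulled toward the observation-implied emission (y_obs/h), with the pull weighted by the relative prior and observation uncertainties; the posterior uncertainty is always smaller than the prior one.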
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K2)

Presentation: LEGO-4-AQ: An AQ Policy Support Service Based on the Synergistic Use of LEO, GEO, and In-Situ Monitoring

Authors: Tijl Verhoelst, Steven Compernolle, Jean-Christopher Lambert, Charlotte Vanpoucke, Frans Fierens, Astrid Müller, Hiroshi Tanimoto
Affiliations: Royal Belgian Institute for Space Aeronomy (BIRA-IASB), Belgian Interregional Environment Agency (IRCEL-CELINE), National Institute for Environmental Studies (NIES)
Air quality (AQ) in industrialized and urbanized regions is influenced by complex processes across varying temporal and spatial scales, including point-source emissions, local to intercontinental transport, and the formation of secondary pollutants in areas with heterogeneous land use. AQ regulation responsibilities are distributed among multiple levels of public authority: international, regional, and local. Effective policymaking for AQ regulation and sustainable energy consumption necessitates tailored monitoring of AQ and assessments of past and future regulatory impacts. Typically, governmental AQ monitoring relies on in-situ surface concentration measurements acquired at key locations, supplemented by numerical modelling that fills the geographical gaps between in-situ measurements using additional data from meteorology and bottom-up emission estimates. However, a constellation of new-generation low Earth orbit (LEO) and geostationary (GEO) satellite sounders is becoming available to enhance AQ monitoring across these relevant scales and fill geographical gaps with observations. While satellite observations can offer near-continuous coverage (subject to cloud cover), observations from space also pose substantial challenges that complicate uptake by policymakers. Specifically, monitoring local policies such as Low Emission Zones (LEZ) in several European cities necessitates downscaling of the spatial resolution currently available from satellites. Furthermore, the relationship between satellite-measured column amounts and the near-surface concentrations affecting health is complex. Integrating data from multiple instruments and platforms requires careful understanding and consideration of their mutual biases and differences in how they perceive the atmospheric composition. Lastly, data analyses and resulting information must be customized to meet policymakers' needs.
To facilitate the adoption of this new-generation satellite AQ data, a prototype service for synergistic LEO, GEO, and in-situ monitoring of AQ related compounds, called LEGO-4-AQ, has been developed in collaboration with (inter-)regional and international institutional stakeholders. The underlying data processing system addresses the aforementioned challenges by leveraging the strengths of each observing system in terms of geographical coverage, spatio-temporal resolution, and accuracy. Geostatistical techniques, like Regression Kriging, are applied to convert temporally aggregated and spatially oversampled satellite tropospheric column data into near-surface concentrations, calibrated to reference in-situ measurements traceable to international standards. Case studies focusing on Sentinel-5 Precursor (S5P) NO2 column data and on in-situ network measurements collected in densely populated regions of Belgium and Japan illustrate key policy-relevant spatio-temporal features and trends. Specifically, since the S5P launch in 2017 we see strong reductions in NO2 concentration levels over (sub)-urban areas (5-7%/year), but (hitherto) little difference between areas with or without a LEZ enforced by local authorities. These studies also highlight limitations within individual monitoring systems that warrant further investigation. Our roadmap for system and service development therefore includes ongoing research and development on the data processing system, alongside further tailoring of derived information for policymakers and stakeholders. After a successful prototyping phase, user uptake is now being enhanced through the integration of LEGO-4-AQ into TERRASCOPE, the Belgian Copernicus Collaborative Ground Segment, as part of the BELSPO-funded project CAELOSCOPE (2024-2027), presented elsewhere in this symposium (van Gent et al.).
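The Regression Kriging step described above (a regression trend from satellite columns to surface concentrations, plus kriged station residuals) can be sketched in miniature. This is a toy illustration with invented station data, an assumed exponential covariance model, and a naive linear solver, not the LEGO-4-AQ implementation:

```python
from math import exp

def fit_trend(cols, surf):
    # Ordinary least squares y = a + b*x on paired column/station values.
    n = len(cols)
    mx = sum(cols) / n
    my = sum(surf) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(cols, surf)) \
        / sum((x - mx) ** 2 for x in cols)
    return my - slope * mx, slope

def solve(A, rhs):
    # Naive Gaussian elimination with partial pivoting (fine for a few stations).
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(M[k][i]))
        M[i], M[p] = M[p], M[i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            M[k] = [mk - f * mi for mk, mi in zip(M[k], M[i])]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (M[i][n] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]
    return w

def cov(d, sill=1.0, rng=10.0):
    # Assumed exponential covariance model for the residual field.
    return sill * exp(-d / rng)

def krige_residual(x0, y0, pts, res):
    # Simple kriging of station residuals at location (x0, y0).
    dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    A = [[cov(dist(p, q)) for q in pts] for p in pts]
    rhs = [cov(dist(p, (x0, y0))) for p in pts]
    w = solve(A, rhs)
    return sum(wi * ri for wi, ri in zip(w, res))

# Toy data: satellite NO2 columns (arbitrary units) and surface NO2 at stations.
pts = [(0, 0), (5, 0), (0, 5), (5, 5)]
cols = [8.0, 6.0, 4.0, 2.0]
surf = [30.0, 24.0, 18.0, 10.0]
a, b = fit_trend(cols, surf)
res = [s - (a + b * c) for c, s in zip(cols, surf)]
# Surface estimate at an unsampled location with a known local column of 5.0:
est = a + b * 5.0 + krige_residual(2.5, 2.5, pts, res)
```

At a station location the kriged residual reproduces the station residual exactly, which is the exact-interpolation property that anchors the map to the reference in-situ measurements.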
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K2)

Presentation: Exploiting synergy of future MAIA and PLATiNO-4 ASI missions to observe atmospheric aerosols

Authors: Luigi Ansalone, Matteo Picchiani, Giovanni Rum, Vincenzo Pulcino, Francesco Longo, Simona Zoffoli, Roberto Luciani, Giovanni Paolo Blasone
Affiliations: Italian Space Agency
Monitoring atmospheric aerosols from space is crucial for understanding their spatial and temporal distribution, as well as their impact on climate, air quality and public health. In this context, while multispectral satellite data offer coarse but frequent observations, hyperspectral sensors capture a much finer spectral resolution, enabling detailed characterization of aerosols. In this framework, the Italian Space Agency (ASI) is developing the MAIA/PLATiNO-2 mission, based on an agreement with NASA, which is providing the Multi-Angle Imager for Aerosols (MAIA) payload to be flown on the ASI PLATiNO-2 satellite. The MAIA instrument includes a spectropolarimetric camera with 14 spectral bands (in the UV-SWIR spectral range) and a two-axis gimbal, which enables observations from multiple view angles. The MAIA/PLATiNO-2 mission will observe predetermined areas called Primary Target Areas (PTAs), selected around the world to include highly populated metropolitan areas, with a revisit of about 3-4 times per week. Moreover, Secondary Target Areas (STAs), with 1-3 observations per week, have been defined to extend the mission observations to additional areas of scientific interest. Target areas measure about 360 km (East-West) by 480 km (North-South). Thanks to the agreement between NASA and ASI, one PTA and two STAs will be observed over the Central, Northern and Southern regions of Italy, respectively. The MAIA standard mission L1 products will have a spatial resolution of 1 km and provide retrievals of Aerosol Optical Depth (AOD), including fractional AOD by shape, size, absorption and refractive index, which will be released as L2 AOD products. The latter will be combined with surface measurements and chemical transport model outputs to develop the L2 and L4 PM products on a 1-km spatial grid.
Such products will be daily maps of mass concentrations for PM10, PM2.5, and PM2.5 speciated mass concentrations (including nitrate, organic carbon, elemental carbon, sulfate, and dust). Ground measurements are provided by the national or regional environmental agencies or by ground instruments deployed by JPL. ASI is establishing an agreement with regional environmental agencies to provide surface data (including PM2.5 speciated mass concentrations) over the two Italian STAs, extending the production of the L2 and L4 products to these areas. In a parallel programme, ASI is developing a new “best-in-class” compact hyperspectral payload, identified as PLATiNO-4 and implemented by Leonardo S.p.A., which exploits the heritage of the PRISMA instrument, launched by ASI in 2019. The PLATiNO-4 imaging spectrometer is designed to measure the spectral signature across the 400-2500 nm wavelength range, with high spatial resolution (up to 20 m Ground Sampling Distance in SPOTLIGHT mode) and a high signal-to-noise ratio. ASI will explore the synergy between these two novel missions, which will be launched between 2025 and 2026, to unlock new approaches for aerosol monitoring. In the synergistic approach, MAIA multispectral data will provide the large-scale picture, in terms of AOD and PM observation and tracking, thanks to the high revisit time. PLATiNO-4 data, as well as PRISMA and additional hyperspectral data from other missions, can be exploited to refine the MAIA observations by offering detailed spectral signatures for composition information over specific regions of interest, to improve the spatial resolution of the synergy products, and to retrieve additional PM and aerosol species.
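The combination of L2 AOD with surface measurements to produce PM products can be illustrated with the simplest possible scheme: a conversion factor eta = PM2.5/AOD fitted on collocated ground data. This is a hedged stand-in for the actual MAIA L2/L4 algorithms (which also use chemical transport model output); all function names and values are invented:

```python
# Hedged sketch of one common simplification for AOD-to-PM conversion:
# scale AOD by a factor eta = PM2.5/AOD fitted on collocated ground
# measurements. Names and values are invented for illustration.

def calibrate_eta(aod_samples, pm_samples):
    # Least-squares slope through the origin: PM2.5 ~ eta * AOD.
    num = sum(a * p for a, p in zip(aod_samples, pm_samples))
    den = sum(a * a for a in aod_samples)
    return num / den

def pm25_from_aod(aod_map, eta):
    # Apply the calibrated factor to a (tiny) AOD grid.
    return [[eta * a for a in row] for row in aod_map]

# Collocated satellite AOD and ground PM2.5 (ug/m3) at three stations:
eta = calibrate_eta([0.10, 0.25, 0.40], [8.0, 21.0, 33.0])
grid = pm25_from_aod([[0.1, 0.2], [0.3, 0.5]], eta)
```

In practice eta varies with aerosol type, humidity and boundary-layer height, which is why speciated ground data and model output are folded into the real products.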
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K2)

Presentation: Quantification of 3D-radiative transfer effects for S5P-TROPOMI observations of NO2 and SO2 plumes from point source emissions

Authors: Thomas Wagner, Steffen Beirle, Christian Borger
Affiliations: MPI For Chemistry, now at: European Centre for Medium-Range Weather Forecasts (ECMWF)
Usually, horizontally homogeneous atmospheric properties are assumed for the analysis of satellite observations of atmospheric trace gases. While this simplification introduces only small to moderate errors in most applications, 3D radiative transfer effects can become relevant for observations of point source emissions. Such effects are especially important for satellite sensors with small ground pixel sizes, such as S5P-TROPOMI, GEMS or TEMPO, for which plumes from point source emissions, e.g. from power plants, have dimensions similar to the satellite ground pixels. Two different 3D effects are investigated in this study: first, the effects of horizontal light paths, in particular when light from regions outside the trace gas plume is scattered into the field of view (FOV) of the satellite; second, saturation effects, which can become important for SO2. We investigate these 3D effects with the 3D Monte-Carlo radiative transfer model TRACY-2. We simulate typical trace gas plumes from point sources of NOx and SO2 according to different atmospheric stability classes or taken from high-spatial-resolution measurements of the EnMAP satellite. We also include simulations taking into account the conversion of NO to NO2 in the downwind plumes. We find that 3D effects generally lead to a substantial underestimation of the true trace gas amount if standard 1D air mass factors are applied. For NO2 the underestimation is typically between about 10 and 40%, for SO2 between about 50 and 80%. The strength of the underestimation depends on the position along the plume propagation.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K2)

Presentation: The CitySatAir Project: Monitoring Urban Air Pollution With Satellite Data

Authors: Bas Mijling, Philipp Schneider, Paul Hamer, Isadora Jimenez, Pau Moreno
Affiliations: Royal Netherlands Meteorological Institute (KNMI), Norwegian Institute for Air Research (NILU), LOBELIA Earth S.L.
In many cities the population is exposed to elevated levels of air pollution. Often, the spatial distribution of local air quality throughout urban areas is not well known due to the sparseness of official monitoring networks, or due to the inherent limitations of urban air quality models. Satellite observations and emerging low-cost sensor technology have the potential to provide complementary information. An integrated interpretation, however, is not straightforward. The CitySatAir project (part of ESA’s EO Science for Society program) investigates how satellite data of atmospheric composition can be better exploited for monitoring and mapping urban air quality at scales relevant for human exposure. Focusing particularly on the nitrogen dioxide (NO₂) product provided by the TROPOMI instrument on Sentinel-5P, we investigate different approaches for combining these data with other information from models and air quality monitoring stations. We chose five contrasting study sites across Europe (Madrid, Oslo, Rotterdam, Bratislava, Warsaw), differing in size, pollution levels, dominant emission sources, and cloud cover. We developed a versatile urban dispersion model able to calculate both surface concentrations of NO₂ at street level and NO₂ column concentrations matching the TROPOMI observations. Urban emissions are described by proxies taken from open data, with emission factors updated periodically to best match the observations from either ground or space. Compared to the CAMS regional ensemble, local biases in forecasted air pollution are reduced considerably, especially if in-situ measurements are also spatially assimilated into the simulated concentration fields. The inclusion of TROPOMI data is particularly valuable for cities with limited in-situ monitoring capabilities.
The multi-annual reanalysis of hourly air pollution concentrations at street level provides a very rich data set, which demands special user-friendly tools for exploration and analysis. The data sets for the different cities are showcased in the interactive Lobelia Explore viewer. Lobelia Explore is based on a serverless architecture, which eliminates the need for on-demand data processing and reduces maintenance costs. The rendering of the maps, displaying charts, and aggregating data over user-defined areas is all done browser-side.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall K2)

Presentation: The Path to Sentinel-4 Operations: Products, Calibration and Validation, Monitoring, and Data Processing Systems

Authors: Rasmus Lindstrot, Sebastian Gimeno Garcia, Frank Ruethrich, Vinod Kumar, Myojeong Gu, Marcel Dobber, Jochen Grandell, Bojan Bojkov
Affiliations: Eumetsat, HAMTEC Consulting
EUMETSAT will operate the Copernicus Sentinel-4 imaging spectrometer, which is hosted on the Meteosat Third Generation - Sounder (MTG-S) satellite. The first satellite in this series is scheduled to launch in the second half of 2025. Developed by Airbus Defence and Space under an ESA contract, Sentinel-4 is designed to monitor atmospheric trace gases - such as ozone, nitrogen dioxide, sulfur dioxide, formaldehyde and glyoxal - as well as aerosol and cloud properties. It provides high spatial resolution and hourly coverage over Europe, which is vital for tracking atmospheric composition and serves as a key input to the Copernicus Atmosphere Monitoring Service (CAMS). This presentation will provide an overview of the Sentinel-4 instrument and its products, along with the latest updates on the status of the ground segment developments as well as the analysis of the instrument's calibration key data. We will also discuss the progress of EUMETSAT's data processing and monitoring facility, which is being prepared for commissioning and routine operations. This includes activities for the preparation of the calibration and validation (Cal/Val) of operational atmospheric chemistry products, performed centrally at EUMETSAT as well as with support from the scientific community.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall L3)

Session: D.04.05 From the Research Lab to a Global Map: Scalable and Sustainable EO Algorithm Development and Workflows

The advancement of Earth Observation (EO) technologies has enabled detailed monitoring and analysis of our planet on an unprecedented scale. However, scaling EO mapping algorithms and workflows from local to continental and global levels presents significant algorithmic and technical challenges. This session focuses on overcoming these barriers to create robust, efficient, and sustainable solutions for large-scale EO mapping efforts.
Key topics include:
• Algorithmic Scalability: Addressing challenges such as limited and spatially biased training data, ensuring generalizability across diverse regions and time periods, and optimizing algorithms for cloud-based processing.
• Scalability in Workflows: Enhancing the scalability of data processing workflows, with a focus on efficient data handling and resource optimization in cloud-based infrastructures.
• Sustainability: Incorporating innovative practices to reduce the environmental footprint of EO data processing.
A central theme of the session is the importance of considering scalability from the earliest stages of algorithm and workflow development. We welcome contributions that address these challenges, from foundational research into scalable algorithms to practical case studies demonstrating successful or ongoing large-scale EO mapping projects.
This session aims to bring together experts from machine learning, remote sensing, data science, and cloud computing to explore innovative methodologies that drive advancements in large-scale EO mapping. By addressing both scalability and sustainability, the session seeks to foster the creation of EO products that provide actionable insights for tackling global environmental challenges.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall L3)

Presentation: EvoNet: An Innovative Algorithm for Global Remote Sensing Mapping, Powering the Next Generation of Global Land Cover Products

Authors: Daniele Zanaga, Dr. Wanda De Keersmaecker, Dr. Yannis Kalfas, Dr. Joris Coddé, Dr. Mathilde De Vroey, Dr. Myroslava Lesiv, Steffen Fritz, Dr. Tim Ng, Dr. Carolien Toté, Victor Verhaert, Giorgia Milli, Luc Bertels, Dr. René Colditz, Ruben Van De Kerchove
Affiliations: VITO (Vlaamse Instelling voor Technologisch Onderzoek), IIASA (International Institute for Applied Systems Analysis), JRC (Joint Research Centre)
Global land cover mapping is essential for understanding and managing Earth's resources. Building upon the success of ESA’s WorldCover, we present the next generation of global land cover products, part of the Copernicus Land Cover and Forest Monitoring service (LCFM). The backbone of the land cover products is EvoNet, a novel algorithm that integrates the strengths of convolutional neural networks (CNNs) and pixel-based classifiers into a unified framework. This approach effectively addresses long-standing challenges in land cover classification, including spatial accuracy, generalization, and the integration of heterogeneous datasets, thereby advancing the state of the art in global mapping. EvoNet avoids the inefficiencies of conventional approaches that either rely on multiple regional models, requiring complex post-processing, or exclusively use CNNs or pixel classifiers, each of which has limitations. CNNs excel in generalization but struggle with fine spatial details, while pixel classifiers offer high spatial resolution but are prone to noise and overfitting. The core innovation of EvoNet lies in unifying these strengths with its dual architecture: a CNN-based spatial feature extractor and a multi-layer perceptron (MLP) pixel classifier. This architecture generates spatial features that provide contextual information, improving pixel-level predictions and enabling downstream applications across a wide range of remote sensing tasks. A key component supporting the EvoNet network performance is the Evotrain dataset, a newly developed global dataset designed as a generic starting point for a broad spectrum of remote sensing applications. Stratified sampling based on spatial heterogeneity, distance, and land cover class ensures a balanced representation of all classes and landscapes at a global scale. High-quality expert annotations were collected using a novel AI-assisted system.
These annotations include multi-label mixed classes, to provide accurate land cover fraction estimates at high resolution. The Evotrain dataset, together with the EvoNet architecture and the developed supporting Python ecosystem will be open-source and publicly released to foster collaboration and drive advancements across the remote sensing community. The EvoNet framework sets a new standard for global mapping, as demonstrated by the datasets produced under the LCFM service. Among these products, LCM-10, the new global annual land cover map, surpasses WorldCover in accuracy and operational efficiency while addressing its key algorithmic limitations. This talk will showcase EvoNet’s methodology, the development of the Evotrain dataset, and the resulting advancements in global land cover mapping, highlighting its transformative potential for remote sensing applications.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall L3)

Presentation: Scalable and Energy Efficient Compositing of Sentinel-2 Time Series

#zarr

Authors: Pablo d'Angelo, Paul Karlshöfer, Uta
Affiliations: German Aerospace Center
Introduction

As Earth observation data archives continue to grow thanks to long-term missions such as the Sentinels, scalable data processing is a key requirement for increasingly complex analysis workflows. At the same time, the increase in data and computing resources results in an increase in energy consumption and thus in the carbon footprint of data analysis. By extracting the visible bare surface of agricultural fields after harvesting and ploughing, multispectral observations of the soil surface can be obtained from Sentinel-2 time series data at 20 m resolution. A complete bare surface reflectance composite can only be obtained from a multi-year time series, which is acceptable due to the low dynamics of soil properties. In the CUP4SOIL project, several soil parameters such as soil organic carbon, pH and bulk density are estimated using digital soil modelling (DSM), and the bare surface reflectance composites provide additional information to the traditionally used DSM covariates. The SCMAP compositing process detects pixels with bare surfaces based on a spectral index and regionally varying thresholds. During compositing, robust statistical outlier detection is used to remove cloud, snow and haze pixels, and reflectance and statistical data are calculated for both bare and non-bare surfaces. Each pixel stack in the time series is processed independently, resulting in a massively parallel reduction operation with no spatial dependencies. This setting is typical of temporal compositing algorithms, which usually reduce along the time and spectral dimensions with little or no spatial influence. Many existing products depend on time series analysis of Sentinel data [1, 2]. Efficient computation decreases both the environmental footprint and the costs of processing, and is thus of prime interest. This requires both an efficient implementation of the algorithms and a compute platform that offers the required compute and data resources.
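The per-pixel temporal reduction just described can be sketched as follows. This is an illustrative Python stand-in for the C++ SCMAP kernel, with invented thresholds and data: observations far from the robust (median/MAD) estimate are rejected as cloud/snow/haze, and the remainder is split by a spectral index into bare and non-bare statistics.

```python
# Sketch of a per-pixel temporal compositing reduction: screen a pixel's
# time series for outliers with a median/MAD rule, split clear observations
# into bare and non-bare by a spectral index threshold, and reduce each
# group to a composite statistic. Thresholds and data are illustrative,
# not the SCMAP values.

def median(v):
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def composite_pixel(reflectance, index, bare_threshold=0.2, k=3.0):
    """reflectance: time series of one band for a single pixel;
    index: corresponding bare-surface spectral index values."""
    med = median(reflectance)
    mad = median([abs(r - med) for r in reflectance]) or 1e-9
    # Keep observations within k robust sigmas of the median (1.4826 scales
    # the MAD to a standard deviation under a normal assumption).
    clear = [(r, i) for r, i in zip(reflectance, index)
             if abs(r - med) / (1.4826 * mad) <= k]
    bare = [r for r, i in clear if i < bare_threshold]
    other = [r for r, i in clear if i >= bare_threshold]
    mean = lambda v: sum(v) / len(v) if v else None
    return {"bare": mean(bare), "non_bare": mean(other), "n_clear": len(clear)}

# A pixel stack where two observations (cloud, haze) are outliers:
refl = [0.11, 0.12, 0.65, 0.13, 0.12, 0.30]
index = [0.10, 0.15, 0.50, 0.12, 0.11, 0.45]
stats = composite_pixel(refl, index)
```

Because each pixel stack is reduced independently, this function is exactly the unit of work that can be distributed across OpenMP threads within a tile and across SLURM tasks between tiles.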
While the embarrassingly parallel nature of this task provides a high scalability potential, high efficiency can only be achieved by tailoring the algorithms to the performance characteristics of the employed hardware platform.

Method

The core SCMAP algorithm is implemented in a C++ application called from Python code responsible for product discovery and data format processing. The use of containers and the modular input interfaces allow the process to be easily adapted to different data archives and to run in cloud or HPC environments. The experiments are performed on the Terrabyte HPC platform of LRZ and DLR [3], which provides ~50 PB of GPFS storage and 271 CPU compute nodes with 40 cores and 1 TB of RAM each. These nodes are completely fanless machines, cooled with a highly efficient hot-water cooling system. Using the SCMAP application, we explore several implementations and optimisations on the Terrabyte compute platform. The algorithm allows for multiple levels of parallelization, as data dependencies are limited to the temporal and spectral axes. Spatially, neighbouring pixels are independent. Thus, at the SLURM task level, tiles of the Sentinel-2 tiling grid are processed independently, with OpenMP allowing parallel pixel computations within each task. We are investigating reordering the input data axes to improve cache coherence and align with data access patterns. Concurrent task execution on compute nodes is analysed to assess how memory allocation, task density and data request rates affect I/O complexity and file system load. The Sentinel-2 tiling grid results in spatial tiles of 100x100 km for a given date, and a standard Level 2A Sentinel-2 product stores each of the 10 bands used in separate image files. As each SLURM task processes one Sentinel-2 tile, and thus reads from 1000 to 10000 input files, parallel I/O and increasing the I/O chunk size were essential for high scalability of the process.
In addition, we compare the performance and decompression overheads of several common file formats (cloud-optimised GeoTIFF, JPEG 2000, Zarr). We further investigate the energy consumption of the compositing tasks and compare the energy efficiency of different processing and data storage setups.

Conclusions

With the current optimisations, a state-of-the-art bare surface reflectance composite for the whole of Europe can be computed from 500 TB of Sentinel-2 L2A input data in less than 12 hours using 25 CPU nodes on the Terrabyte HPC platform. The complete process, including scheduling, input data reading, compositing and output product formatting, operates with a sustained input data rate of ~110 GBit/s. Re-processing five yearly EU-wide Sentinel-2 bare surface composites in case of algorithmic updates thus reduces to an overnight batch job.

References:
1. https://land.copernicus.eu/en/products
2. https://esa-worldcover.org
3. https://docs.terrabyte.lrz.de
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall L3)

Presentation: Mapping Crops at Scale: Insights From Continental and Global Crop Mapping Initiatives

#cloud-native

Authors: Kristof Van Tricht, Christina Butsko, Jeroen Degerickx, Gabriel Tseng, Kasper Bonte, Jeroen Dries, Bert De Roo, Hannah Kerner, Laurent Tits
Affiliations: VITO, McGill University, Mila, Ai2, Arizona State University
Crop mapping has been a focus of the remote sensing community for many years. Ideally, for food security monitoring purposes, we would like to know what is being planted globally, preferably at the time of planting. Realistically, however, the community has had to adjust expectations to align with the current capabilities of remote sensing technologies. Agriculture is one of the most dynamic forms of land use, with agro-climatic conditions and local management practices creating unique agricultural activities in nearly every region. This diversity presents significant challenges for consistently mapping agricultural crops at large scales over multiple years. Consequently, creating reliable, large-scale crop maps requires careful planning, from setting appropriate requirements to deploying classification algorithms at scale, in a way that maximizes workflow generalizability. A robust approach to large-scale crop mapping involves key decisions such as which crops to map (or not to map), how to cope with seasonality, the collection, harmonization, and sampling of training data, selection of satellite and auxiliary data inputs and their preprocessing, computing and selecting classification features, choosing the appropriate algorithm, and building an efficient cloud-based inference pipeline. These elements ensure that the classification workflow is well suited to meet the specific requirements of agricultural diversity while still being feasible to operate at continental to global scales. In this presentation, we highlight valuable lessons learned by researchers engaged in making large-scale crop maps for two distinct products: the Copernicus multi-year High-Resolution Layer (HRL) Vegetated Land Cover Characteristics (VLCC) crop type layer, and the ESA WorldCereal global cropland and crop type maps. The workflows behind these products have many things in common but also exhibit notable differences.
We will discuss the synergies and divergences between these crop mapping pipelines, focusing on training data sources and algorithms, as well as the particularities of deploying both workflows in the cloud. For example, spatially and temporally distributed reference data in Europe allows for a powerful fully supervised end-to-end classification workflow based on transformers, while large spatial and temporal gaps at the global scale benefit from a self-supervised pretrained foundation model followed by a lightweight CatBoost classifier. Regardless of the approach, ensuring efficient deployment at scale is crucial at all stages of development. In conclusion, we will reflect on the key challenges and lessons learned from developing and deploying these crop mapping systems, emphasizing the importance of adaptability, careful selection of training data and algorithms, the need for cloud-native infrastructures, and the flexibility to refactor parts of the workflow along the way. By sharing our experiences, we hope to provide valuable perspectives for future endeavors in scaling Earth observation algorithms from regional research efforts to global applications, ultimately contributing to enhanced agricultural monitoring and food security initiatives.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall L3)

Presentation: A Comprehensive Monitoring Toolkit for Energy Consumption Measurement in Cloud-Based Earth Observation Big Data Processing

#pangeo

Authors: Adhitya Bhawiyuga, Serkan Girgin, Prof. Rolf de By, Raul Zurita-Milla
Affiliations: Faculty of Geo-Information Science and Earth Observation, University of Twente
The processing of Earth observation big data (EOBD) in distributed environments has increased significantly, driven by advances in satellite technology and the growing number of Earth observation missions. This massive influx of data presents unprecedented opportunities for environmental monitoring, climate change studies, and natural resource management, while simultaneously posing significant computational challenges. Cloud computing has emerged as an enabler for handling such EOBD, offering scalable computational resources, flexible storage solutions, and on-demand processing capabilities through platforms such as Google Earth Engine (GEE), AWS SageMaker, OpenEO, and Pangeo Cloud. While these cloud-based EOBD processing platforms offer varying levels of monitoring capabilities to help users understand their workflow execution, they primarily focus on traditional performance metrics. GEE provides basic performance insights focusing on task execution status, AWS SageMaker offers comprehensive resource utilization metrics through Amazon CloudWatch, and Pangeo Cloud implements the Dask profiler for real-time monitoring of cluster performance. However, a significant gap exists: none of these platforms incorporate energy consumption as a standard monitoring metric. This limitation becomes increasingly critical as the scientific community grows more concerned about the environmental impact of large-scale data processing operations. The absence of energy-related metrics from monitoring may prevent users from understanding the environmental impact associated with their EOBD processing workflows. This knowledge is particularly crucial in the Earth observation domain, where the balance between computational requirements and environmental impact directly aligns with the field's core mission of environmental protection.
Furthermore, recent green computing initiatives have emphasized the importance of sustainable IT infrastructure, yet the lack of standardized energy consumption metrics in EOBD processing platforms hinders researchers' ability to make informed decisions about computational resource usage. To address this gap, we propose a monitoring toolkit for understanding the energy consumption patterns in distributed EOBD processing. We develop an integrated approach that combines multi-level energy measurements: (1) hardware-level power data collected through RAPL for CPU and DRAM, IPMI for system-level metrics, and external power sensors for overall consumption; (2) software-level resource utilization metrics from the operating system including CPU usage, memory allocation, I/O operations, and network traffic; and (3) application-level profiling through integration with Dask's distributed processing framework. Our methodology employs power ratio modeling to correlate these measurements and estimate process-level energy consumption, enabling fine-grained energy profiling of EOBD workflows. The toolkit generates comprehensive monitoring reports that include energy consumption patterns, resource utilization correlations, and efficiency metrics, allowing users to make informed decisions about their processing strategies. By providing visibility into the energy consumption of computational workflows, this work contributes to the development of more sustainable EOBD processing practices. The toolkit enables users to better evaluate the true environmental cost of their computational workflows and optimize their processing strategies accordingly, supporting the broader goal of environmental protection through more energy-efficient earth observation data processing.
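The power-ratio modeling mentioned above can be illustrated with a minimal sketch: total measured energy (e.g. from RAPL counters) is apportioned to processes in proportion to their share of CPU time over the sampling interval. The function name and figures below are hypothetical, not the toolkit's actual API.

```python
# Illustrative power-ratio energy attribution: total package energy
# measured over an interval is split across processes by their share
# of CPU time in that interval. Names and numbers are illustrative.

def attribute_energy(total_energy_j, cpu_seconds_by_process):
    """Split measured energy (joules) across processes by CPU-time share."""
    total_cpu = sum(cpu_seconds_by_process.values())
    if total_cpu == 0:
        return {pid: 0.0 for pid in cpu_seconds_by_process}
    return {
        pid: total_energy_j * cpu / total_cpu
        for pid, cpu in cpu_seconds_by_process.items()
    }

# Example: 120 J measured while a Dask worker used 6 CPU-seconds
# and a background process used 2 CPU-seconds.
shares = attribute_energy(120.0, {"dask-worker": 6.0, "background": 2.0})
print(shares)  # {'dask-worker': 90.0, 'background': 30.0}
```

In the toolkit described above, such ratios would be combined with hardware-level readings (RAPL, IPMI, external sensors) rather than the single synthetic number used here.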
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall L3)

Presentation: Scaling EO Processing Task Scheduling With Compact Representations

Authors: Jesse Anttila, Sakari Väkevä, Mr Mikko Kervinen, Jenni Attila, Mr Yki Laine
Affiliations: Finnish Environment Institute (Syke)
The Sampi Workload Scheduler (SWS) is a highly scalable EO processing task scheduler developed at Syke starting from September 2024, with the goal of improving the scalability of Syke's EO processing platform Sampi to allow for hundreds of thousands of simultaneous tasks. Processing tasks are dynamically scheduled to executors in a cloud environment, with rescheduling of interrupted tasks to allow executors to be added and removed freely. SWS uses a highly efficient representation of large processing jobs based on a common structure of EO processing tasks, i.e. a graph of EO processors applied over a number of different inputs. In combination with dynamic provisioning of computation resources and OCI container support, SWS can run very large processing jobs quickly and efficiently with minimal resource overhead and simple integration with existing processing software. Processing jobs ("Workloads") are represented as a collection of parallel tasks all sharing the same general structure, represented as a graph of EO processors. Each task must have the same processing graph, but may have a unique input value such as a tile name or date. Each vertex of the task graph is an independently schedulable subtask, with the edges representing dependencies between processing steps. This representation allows arbitrarily large processing jobs to be represented in constant space, excluding the size of any input values, which are stored separately. The throughput of the scheduling system is primarily limited by the need to persist all changes to processing state in order to be able to recover a globally consistent state at any time. In practice this involves a minimum of two operations for each subtask scheduled and completed. Although the size of the state modified by each operation is minimal, batching and write-ahead logging are required to scale to a throughput of thousands of subtasks completed each second.
The state of all active subtasks is cached in memory to avoid the need to perform any reads from persistent storage, requiring around four bytes of memory for each active subtask. Each cloud VM instance can be used to run several executors simultaneously to maximize resource utilization. Each executor independently requests subtasks, transfers data, runs EO processing software, and reports subtask results. The executors use OCI containers to allow arbitrary processing software to be used without any modifications required to integrate with the scheduling system. The total number of executors and cloud VM instances is scaled according to the number of subtasks available, which helps avoid overprovisioning of compute resources by ensuring that sufficient work is available before scaling up. Underutilized instances can be deprovisioned freely, as any interrupted subtasks will be rescheduled to another executor on a different instance. A set of optimized EO processing software for common tasks in optical EO data processing, including resampling, reprojection, mosaicking and statistics extraction, has been created alongside SWS to efficiently utilize the greater scalability enabled by the scheduling system. In particular, the data fetching and resampling tool is an order of magnitude more efficient than existing solutions for Sentinel-2 and Landsat-8 provided in the SNAP toolkit, with additional features included for area-of-interest masking and cloud detection. These tools significantly reduce the amount of compute time spent on data management tasks, resulting in both speed and cost improvements in EO processing jobs that utilize them. SWS has been used to compute full mission time-series statistics for Sentinel-2 and Landsat-8/9 for a set of 4896 coastal and lake waterbodies for Water Framework Directive reporting in Finland using two water quality processors, C2RCC and Polymer. The total area is approximately 60 000 km², or 17 megapixels at 60m resolution.
The Landsat processing job included 8236 images and had a runtime of 5 hours, with a total of 6400 CPU-hours consumed. At slightly less than one CPU-hour consumed per processed image, the effective per-image processing cost was less than one cent. For the matching Sentinel-2 processing job, 5900 CPU-hours were consumed to process 37320 images, for an effective cost of less than one fifth of a cent per image. SWS improves on existing container-based scheduling systems like Argo Workflows by introducing the efficient shared graph representation for large processing jobs, which allows for greater scalability while preserving the flexibility of using containerized software. The primary drawback is the inability to represent dynamic processing graphs, preventing the use of loops or conditionals in the graph structure as all processing steps must be known ahead of time. While this is generally the case for large EO processing jobs, a more dynamic solution may be preferable for certain machine learning experiments or ad-hoc processing tasks. With a total memory footprint of less than a hundred megabytes, the scheduling system can be used for processing jobs of any size without any resource allocation scaling.
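The shared-graph representation described above can be sketched in a few lines: one processing graph (vertices = processors, edges = dependencies) is stored once, each parallel task contributes only its input value, and a subtask is addressed by a (task index, vertex) pair. All names here are illustrative, not SWS's actual API.

```python
# Minimal sketch of a shared-graph workload: the graph is stored once,
# tasks differ only by input value, and a subtask is the pair
# (task_index, vertex). Names are illustrative, not SWS's API.

graph = {                      # vertex -> list of prerequisite vertices
    "fetch": [],
    "resample": ["fetch"],
    "mosaic": ["resample"],
    "stats": ["mosaic"],
}
inputs = ["T34VFM", "T35VLG", "T35VMG"]   # e.g. one tile name per task

def ready_subtasks(done):
    """Return subtasks whose prerequisites are all complete;
    'done' is a set of (task_index, vertex) pairs."""
    return [
        (i, v)
        for i in range(len(inputs))
        for v, deps in graph.items()
        if (i, v) not in done and all((i, d) in done for d in deps)
    ]

# Initially only the root vertex of each task is schedulable:
print(ready_subtasks(set()))   # [(0, 'fetch'), (1, 'fetch'), (2, 'fetch')]
```

The point of the representation is visible here: the scheduler's state is the graph, the input list, and the `done` set, so memory grows with the number of *active* subtasks rather than with the total job size.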
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall N1/N2)

Session: E.02.01 Advanced SAR processing techniques for security and safety applications


This session aims to create a space for researchers and industry to exchange knowledge on advanced Synthetic Aperture Radar (SAR) processing techniques for safety and security applications. The initially identified advanced processing techniques to be discussed are the following:
• Inverse SAR (ISAR) processing algorithms;
• microDoppler;
• VideoSAR;
• Distributed SAR processing;
• data fusion with non-EO data;
• SAR polarimetry.
Results from ongoing ESA research activities in this domain will also be presented.

Convenors: Michela Corvino (ESA); Gordon Campbell (ESA); Thibault Taillade (ESA); Giuseppe Parrella (ESA)
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall N1/N2)

Presentation: Innovative ISAR processing techniques for security applications

Authors: Federica Pieralice, Dennis Dell'Ara, Davide Giudici, Elisa Giusti, Ajeet Kumar, Tatiana Martelli, Davide Pirrone, Filippo Britti, Chiara Pratola
Affiliations: e-GEOS, Aresys s.r.l, National Laboratory of Radar and Surveillance Systems (RaSS), CNIT
Due to the growing number of satellite constellations, high-resolution, wide-bandwidth SAR data are becoming more accessible, accompanied by greater operational flexibility. This has led to increased interest from a range of stakeholders in both civil and military applications. In the security domain, motion analysis aims at understanding key features and associated activities over specific areas of interest that might be associated with potential threats. Therefore, the monitoring of moving targets, with more or less complex motion types, is a key goal when dealing with security, defense and intelligence objectives. In particular, moving objects can either represent critical features or serve as valuable proxies for activity-related information. To this end, spaceborne Inverse SAR (ISAR) offers significant added value compared to traditional SAR-generated information. Because the SAR imaging mechanism relies on platform motion to focus the image under the assumption of a static observation scenario, moving objects appear defocused and misplaced in the image. ISAR techniques leverage the motion of the object to refocus the image, offering a solution to derive motion parameters and improve the resolution of SAR images, enabling more accurate detection and classification of moving objects. In the framework of the ESA project “EO4SECURITY - Innovative SAR processing methodologies for security applications” (contract Ref: AO/1-11558/23/I-DT), different sensor- and AI-based ISAR algorithms are presented by the project team led by e-GEOS with the participation of Aresys (IT) and CNIT (IT). The motivation behind the approach proposed by the project team arises from the need to enhance signal processing methods for ISAR imaging of moving targets in SAR data, leveraging the capabilities of modern SAR systems.
These advanced satellite systems provide higher resolutions and enable longer integration times, revealing limitations in current methods that need to be addressed. To tackle these challenges, we proposed innovative techniques for reconstructing ISAR images of moving targets. Our approach centers on the use of back-projection (BP) methods combined with tailored autofocus algorithms specifically designed for BP. These advancements enable the coherent integration of data over extended time intervals, resulting in ISAR images with enhanced quality and improved resolution. Furthermore, the project investigated polarimetric algorithms to improve the separation of sea clutter from moving targets. By employing Target Decomposition (TD) techniques and leveraging the distinct polarimetric behaviors of clutter and target echoes, the proposed methods effectively reduce the sea clutter return, significantly benefiting target detection. The proposed methodologies were validated using real satellite datasets, including COSMO-SkyMed (CSK) spotlight and stripmap modes, as well as COSMO-SkyMed Second Generation (CSG) single-polarization and full-polarization spotlight data. The results highlight the potential of these approaches to significantly improve ISAR imaging and detection performance. Interesting results have been achieved for both land and marine scenarios. For marine scenarios, ships in port areas and in the open sea were considered, while for land scenarios, moving trains and cranes were the focus. In both cases, significant improvements have been observed in image quality metrics. In addition to real data, simulated SAR data have also been considered in order to analyse the effectiveness, and possible limitations, of the proposed techniques in ad-hoc, well-controlled scenarios.
In addition, the project team's efforts have focused on advancing target motion estimation by adopting a Deep Learning approach, one of the most promising and high-performing techniques in artificial intelligence, known for achieving outstanding results across various domains and tasks, including computer vision and remote sensing. However, it is important to underline that Deep Learning requires an appropriate dataset for training neural networks to achieve the desired outcomes. In ISAR scenarios, the nature of the phenomena often limits the availability of real-world data, requiring ad-hoc approaches to address this problem. To overcome this limitation, a synthetic dataset was developed for training and evaluating the proposed AI model to estimate target motion from ISAR images. Specifically, the synthetic dataset was designed around a specific maritime use case. Simulated Single Look Complex (SLC) images with CSG mission parameters in Spotlight acquisition mode have been generated, using a tanker/cargo 3D ship model (.stl) as input to the simulator. Different target motion conditions (both translational and rotational velocity components) and geometries have been simulated, in addition to the background contribution, which is composed of noise and clutter. A total of 1800 SLC images have been generated and used to train, validate and test the neural network. Concerning the AI architecture, a Convolutional Neural Network (CNN) was developed to process the complex synthetic ISAR data and extract significant patterns related to target motion. Following training and validation, the AI model was further tested on both synthetic and real-world scenarios, demonstrating good performance in terms of error metrics.
The results and methodology highlight the potential of Deep Learning for ISAR target motion estimation, paving the way for future developments: these can include exploring a more generalized training dataset and investigating the integration of real and synthetic datasets to enhance performance and applicability.
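To give a feel for the autofocus step mentioned above, one classical parametric strategy (not necessarily the project's exact algorithm) searches for the quadratic phase correction that maximizes image contrast. A toy 1-D numpy sketch, with all parameters invented for illustration:

```python
import numpy as np

# Toy 1-D parametric autofocus by contrast maximization: a point target
# defocused by an uncompensated quadratic phase is refocused by searching
# for the quadratic correction that maximizes image contrast
# (std/mean of intensity). Purely illustrative; parameters are made up.

rng = np.random.default_rng(0)
n = 256
t = np.linspace(-0.5, 0.5, n)
true_alpha = 40.0                                # unknown phase-error rate
signal = np.exp(1j * np.pi * true_alpha * t**2)  # defocused point target
signal += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def contrast(alpha):
    # Compensate a candidate quadratic phase, focus via FFT, score contrast.
    img = np.abs(np.fft.fft(signal * np.exp(-1j * np.pi * alpha * t**2)))
    inten = img**2
    return inten.std() / inten.mean()

alphas = np.linspace(0.0, 80.0, 161)             # search grid, step 0.5
best = max(alphas, key=contrast)
print(best)   # close to true_alpha (40.0); energy collapses into one bin
```

When the candidate correction matches the true phase error, the target's energy collapses into a single image bin, which is exactly the sharpest (highest-contrast) image the search can find.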
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall N1/N2)

Presentation: Next Generation Processing Methods for Security Applications: Distributed Sensing

Authors: Nida Sakar, Pau Prats, André Barros, Marc Rodriguez-Cassola, Stefan Baumgartner, Federico Minati, Carmine Frascella
Affiliations: Microwaves and Radar Institute, German Aerospace Center (DLR), B-Open
This abstract introduces the study in the frame of the ESA-ESRIN contract “Next Generation Processing Methods for Security Applications: Distributed Sensing”. A distributed airborne/spaceborne SAR system consists of a set of monostatic and bistatic SARs. Many advantages derive from the bi-/multi-aperture reception, e.g., the potential of flexible observation geometries and multiple coherent parallel acquisition channels. In particular, distributed SAR systems offer the opportunity to improve revisit times, access range, image throughput, swath width, or resolution, making them suitable to overcome the challenges of next generation radar remote sensing. Moreover, these systems can improve performance in topography and deformation retrieval, moving target detection and three-dimensional (3D) imaging by effectively overcoming challenges such as temporal decorrelation and atmospheric interference. Due to their high cost, SAR missions were until recently realized mostly by national space agencies. Today, start-up companies of the so-called New Space initiative, such as Iceye in Finland, Capella Space, Umbra, PredaSAR and Xpress in the USA, or Synspective and iQPS in Japan, have taken their place in the SAR imaging market. The main goal of these companies is to provide high-resolution SAR data with a constellation of small and less expensive satellites, which can also be utilized for frequent and continuous monitoring of the planet, bistatic imaging, single-pass interferometry and tomography. On the other hand, the national space agencies are also adopting the distributed sensing concept in their current or future missions, such as the Chinese LuTan-1 mission, the European Space Agency Harmony mission, the German Aerospace Center (DLR) Tandem-L mission, the Italian Space Agency RODiO mission, the Netherlands Institute for Space Research (SRON) SwarmSAR mission, and the distributed geosynchronous SAR (GEO SAR).
The main objective of this study is to develop and operationalize advanced SAR processing algorithms for security applications, with a focus on angular diversity. The selected algorithms cover a wide range of security applications exploiting angular diversity, and the spaceborne and airborne SAR data considered for verification come from different providers (DLR's spaceborne TerraSAR-X and TanDEM-X sensors and airborne F-SAR sensor, Capella Space and ICEYE (via ESA's TPM programme)), providing a unique opportunity. Six use cases under four main research topics were selected for implementation during this study. The first topic, Ground/Maritime Moving Target Indication (GMTI/MMTI), uses SAR data in pursuit monostatic mode and in spotlight mode. The pursuit monostatic mode consists of two different SAR antennas mounted on a single platform or on separate platforms, with each antenna transmitting and receiving only its own signal. These two antennas can suppress clutter, i.e. stationary, non-moving objects, so that signals from low radar cross section (RCS) targets can be detected. If clutter suppression is not required, only two antennas or two independent antenna phase centres can be used to estimate the direction of arrival (DOA) angle and hence the geographical position of the detected targets. The number of independent receivers and the baseline between the antennas determine the accuracy of detection, DOA angle and velocity estimation. On the other hand, when a target moving in the azimuth direction is observed in a spotlight mode SAR image, it appears blurred in the focused image due to the Doppler rate mismatch. The filter bank approach can be used to produce sub-look images by filtering a certain band at the edges of the Doppler spectrum (angular diversity) to estimate the azimuth velocity. The lower resolution images can be used to calculate the displacement of the moving target and hence the azimuth velocity.
As with pursuit monostatic mode, the accuracy of the velocity estimate depends on the angular diversity of the track and also on the final azimuth resolution. The second topic, High-Spatial Resolution with Continuous Coverage, aims to improve the azimuth resolution of the SAR system without compromising range ambiguity suppression performance or continuous swath coverage. This approach proposes to image the same area with multiple SAR satellites operating at different squint angles, i.e. with different Doppler centroids. Such a concept can be realised with a constellation of active monostatic satellites with slightly different viewing angles, or with a set of bistatic units, where the transmitter and each receiver are separated by an along-track baseline. High-resolution image synthesis is fairly straightforward. After the calibration step (similar to interferometric processing), the Doppler spectra are coherently stitched together. Considering the current and planned SAR missions by national agencies (small constellations and companion missions) and commercial companies (constellations of small satellites), the operationalisation of this technique is very realistic. The third topic, RFI/Jamming Detection and Location, is based on a very simple, yet very effective and efficient approach. The RFI source is detected by calculating the coherence between the azimuth channels at the raw data level and selecting the pixels with coherence values above a certain threshold. Once the RFI contamination is localised, a notch filter is applied to suppress the RFI signal. It is intended to improve the resolution of this method during the study by applying superresolution techniques such as MUSIC and Capon. The last topic, Topography Estimation, exploits the SAR images to extract information outside of their mission objectives. The first approach computes the topographic height with very high-resolution SAR images in staring-spotlight or dwell mode.
As with azimuth velocity estimation using spotlight mode data, this technique takes an unwanted effect in the SAR image, i.e. the defocusing of the targets, and uses it for information extraction, i.e. height estimation. Typical SAR processing algorithms focus the raw data by considering a scene height. Targets with different heights appear in the image as defocused in azimuth. The defocusing effect becomes non-negligible in very high-resolution images, and can be used to estimate the height of the targets. The height estimation can be achieved already with a single SAR image, but additional (bistatic or monostatic) SAR data can improve the estimation accuracy by removing the atmospheric effect. The last use case is based on the sensitivity of along-track multistatic SAR constellations to topography in the presence of cross-track baselines. The main objective of such a system is to improve the system resolution by using frequency multiplexing, where all satellites simultaneously transmit pulses of the same duration, each covering a fraction of the total frequency band and receiving the full system bandwidth. In addition, such a constellation produces a large number of interferograms that can be used for topography estimation. Distributed sensing has been a research area attracting much attention for more than a decade, resulting in a large number of theoretically studied system concepts and processing/feature extraction algorithms. Recent changes in the space industry, favouring SAR constellations of small and cheap satellites or including small satellites as companions to existing/future high-performance SAR systems, provide an opportunity to finally apply these theoretical concepts and algorithms. This study provides an excellent opportunity to discover the benefits of the acquisition angular diversity provided by bistatic and multistatic spaceborne satellite systems.
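The filter-bank idea described above can be sketched in a toy form: the two edge sub-looks (early/late Doppler bands) see an azimuth-moving target at slightly different positions, and cross-correlating the sub-look images yields the displacement, which maps to azimuth velocity. The sub-looks below are simulated directly as shifted point responses; all values are illustrative, not from any mission.

```python
import numpy as np

# Toy sub-look displacement estimation: two sub-look images of a moving
# target differ by an azimuth shift; an FFT cross-correlation peak
# recovers that shift. Values are illustrative, not real SAR data.

n = 512
az = np.arange(n)

def target_image(center):
    return np.exp(-0.5 * ((az - center) / 3.0) ** 2)  # blurred point target

early_look = target_image(250.0)   # sub-look from the early Doppler band
late_look = target_image(262.0)    # target shifted 12 pixels in azimuth

# Circular cross-correlation via FFT; the peak index gives the shift.
corr = np.fft.ifft(np.fft.fft(early_look) * np.conj(np.fft.fft(late_look))).real
shift = int(np.argmax(corr))
if shift > n // 2:
    shift -= n                     # wrap to a signed displacement
print(shift)   # -12 pixels; velocity = shift * pixel spacing / sub-look time gap
```

In a real system the two sub-looks come from band-pass filtering the Doppler spectrum of the focused data, and the conversion from pixels to metres per second uses the azimuth pixel spacing and the time separation of the sub-look centroids.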
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall N1/N2)

Presentation: Micro-Doppler from very high-resolution SAR: critical infrastructure monitoring and applications in maritime and terrestrial domains.

Authors: Mr Aleksanteri Vattulainen, Finlay Rollo, Francesco Vecchioli, Ilaria Nasso, Massimo Zavagli, Fabrizio Santi, Pietro Milillo, Enrico Tubaldi, Malcom Macdonald, Mario Costantini, Debora Pastina, Professor Carmine Clemente, Michela Corvino
Affiliations: University of Strathclyde, B-Open Solutions, Sapienza University of Rome, University of Houston, ESA
Advances in space technologies over recent decades have laid the foundations for a new era of space-based sensing exploiting Synthetic Aperture Radar (SAR) imaging. While spaceborne SARs in the past were used principally for Defence and Earth Science (e.g. ERS-1 and -2, Envisat, Sentinel-1), the ability to deploy constellations of micro-satellites possessing antennae with significant gain and steering capabilities has transformed the SAR landscape. The opportunity to create novel SAR products has attracted the interest of space agencies, governments, and the private sector, with innovative micro-satellite constellations already deployed by companies such as ICEYE, Capella Space, and Umbra, generating data that are not only commercially valuable but also scientifically unique. Amongst the multiple applications for these novel sensors, the detection and analysis of the micro-Doppler effect in SAR has emerged as a focal point of contemporary research. This promises to have a transformative impact across various sectors including automatic target recognition, surveillance, and infrastructure monitoring. The ability to accurately extract micro-Doppler information from SAR is therefore highly relevant, but presents unique challenges requiring sophisticated signal processing techniques for accurate detection, extraction, and characterisation. The unique characteristics of the aforementioned SAR sensors, such as long dwell time and fine spatial resolution, alleviate some of these challenges, and make the extraction of micro-Doppler information from a wide variety of targets feasible, paving the way to several novel downstream applications.
For example: the ability to extract additional target features – such as the vibrational modes of a vehicle or vessel caused by its engine – would enhance automatic target recognition capabilities; and being able to quantify the natural harmonics of critical infrastructure with SAR would provide an early warning or continuous monitoring tool that enables novel commercial services and products, providing cost-effective wide area surveying and rapid response damage mapping after natural disasters. In this talk we will present the latest advances in the field of micro-Doppler information extraction from spaceborne SAR, its applications, and expected future developments. Following a review of the phenomenology we will introduce novel processing solutions to detect, extract, and exploit micro-Doppler in spaceborne SAR. Results and impact in specific application domains will be reviewed, such as maritime surveillance, micro-motion enabled detection of ground targets, and critical infrastructure monitoring. In the maritime domain, the results obtained from real measurements of cooperative targets show that it is possible to measure vibrational features of maritime targets, such as the engine RPM, useful for target recognition as well as law enforcement. In the case of infrastructure monitoring, we will present results from buildings and bridges, showing validation of the ability to measure kinematic characteristics of infrastructure from a single-pass SAR image. Finally, the problem of detecting targets whose micro-motion can be a discriminant for either detection or operational assessment will be addressed. For example, we will demonstrate a method for detecting a vibrating target in a scene, thus revealing the presence of human activities, such as illegal mining/drilling or unusual industrial operations.
The talk will conclude with an overview of open challenges from a practical and operational point of view, with the aim of stimulating further interest around this topic.
Acknowledgments: This work has been funded under the project EO4SECURITY, Innovative SAR processing methodologies for security applications - Topic B2: Micro-Doppler Processing, ESA Contract No. 4000142272/23/I-DT. This work was in part performed at the University of Houston under a contract with the NASA Commercial Small-sat Data Acquisition Program (QKWEF8XLMTT3). Thanks to Airbus Defence and Space GmbH, Capella Space, Umbra Lab and Spire Global for the access to the datasets.
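The micro-Doppler phenomenology behind this talk has a simple signal model: a vibrating scatterer sinusoidally modulates the echo phase, which spreads energy into spectral lines spaced at the vibration frequency. The sketch below demonstrates that line structure on a synthetic signal; all parameters are invented for illustration and are unrelated to the datasets used in the talk.

```python
import numpy as np

# Toy micro-Doppler signature of a vibrating scatterer: sinusoidal phase
# modulation exp(j*beta*sin(2*pi*f_vib*t)) produces spectral lines at
# multiples of f_vib (Bessel-function sidebands). Parameters illustrative.

fs = 1000.0                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)    # 1 s of data -> 1 Hz frequency bins
f_vib = 30.0                     # vibration frequency (e.g. engine-induced)
beta = 2.0                       # modulation index
sig = np.exp(1j * beta * np.sin(2 * np.pi * f_vib * t))

spec = np.abs(np.fft.fft(sig))
freqs = np.fft.fftfreq(len(t), 1 / fs)

# The strongest line away from the carrier sits at +/- f_vib:
ordered = freqs[np.argsort(spec)[::-1]]      # frequencies by line strength
sideband = ordered[np.abs(ordered) > 1.0][0]
print(abs(sideband))   # 30.0
```

Real micro-Doppler extraction works on far weaker modulations embedded in clutter, typically via time-frequency analysis rather than a single FFT, but the sideband structure it looks for is the one shown here.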
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall N1/N2)

Presentation: Advanced SAR Processing with ICEYE's Dwell Mode: Enhancing Situational Awareness for Defense and Security in war zones

Authors: Quentin Gillet
Affiliations: ICEYE
ICEYE, a global leader in SAR satellite operations, continuously pushes the boundaries of Earth observation technology to provide actionable intelligence for critical applications. This presentation focuses on ICEYE's innovative Dwell mode and its significant impact on defense and security operations, particularly in the context of the ongoing conflict in Ukraine. Dwell mode represents a paradigm shift in SAR capabilities, allowing ICEYE's satellites to maintain a persistent focus on a specific area of interest for an extended period. This unique capability enables the collection of significantly more data, resulting in unprecedented clarity and detail, even in challenging environmental conditions such as cloud cover or darkness. ICEYE has continuously sought innovative ways to push the boundaries of SAR technology and deliver timely, actionable intelligence. This drive led to the development of Dwell mode, a revolutionary approach to SAR data acquisition that significantly enhances situational awareness. Instead of capturing a traditional "snapshot" of a target, Dwell mode allows ICEYE's satellites to "stare" at a specific area for an extended period, typically 25 seconds compared to the 10-second illumination of a standard Spot image. This extended integration time allows for the collection of significantly richer data, resulting in three unique product variants: Amplitude, Colour, and Video. First, the Amplitude product provides a high-fidelity, low-noise, and exceptionally sharp image, surpassing the visual quality of traditional Spot imagery. Second, the Colour product leverages the multiple "looks" captured during the extended dwell time to assign distinct colors to different objects within the scene. This colorization technique highlights man-made structures such as vehicles, armor, and buildings, making them stand out against the natural environment. 
Finally, the Video product transforms the stack of individual images captured during the dwell period into a dynamic video sequence. This allows analysts to observe movement, determine velocity, and discern the heading and speed of objects within the area of interest. Concrete examples from Ukraine will illustrate the real-world impact of Dwell mode. In collaboration with international partners, ICEYE has utilized Dwell mode to assess damage to critical infrastructure such as bridges and energy facilities, and support humanitarian aid efforts by identifying safe routes and areas of need. This timely information has proven invaluable for decision-makers in rapidly evolving situations, enabling more effective responses and mitigating risks. ICEYE's Dwell mode exemplifies the company's commitment to innovation and its dedication to providing cutting-edge technology for the benefit of global security and stability. This presentation will highlight the key advantages of Dwell mode and its potential to revolutionize how SAR data is utilized for defense and security applications worldwide. Keywords: SAR, Dwell mode, Defense, Security, Ukraine, Situational Awareness, Earth Observation, Change Detection, Critical Infrastructure, Humanitarian Aid
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall N1/N2)

Presentation: ADVANCED ISAR PROCESSING APPLIED TO VHR SAR DATA FOR SECURITY APPLICATIONS

Authors: Massimo Zavagli, Ilaria Nasso, Fabrizio Santi, Debora Pastina, Francesco Vecchioli, Federico Minati, Mario Costantini, Laura Parra Garcia, Professor Carmine Clemente
Affiliations: B-Open Solutions, La Sapienza, University of Rome, University of Strathclyde
Satellite SAR systems, providing all-weather, day-and-night imaging capabilities, are indispensable for Earth observation. In particular, X-band satellite SAR systems, such as Capella Space, ICEYE, COSMO-SkyMed, TerraSAR-X and PAZ, offer unprecedented spatial resolutions below 1 meter and frequent revisit times, broadening their potential for operational security and monitoring applications. This work focuses on information extraction from spaceborne high-resolution radar sensors based on Inverse Synthetic Aperture Radar (ISAR) techniques. By addressing motion-induced effects in SAR imaging, these techniques unlock new possibilities for surveillance and security applications across maritime, inland water, and land domains. In fact, standard SAR imaging results in defocused and misplaced representations of moving targets [1], necessitating advanced processing techniques to correct these distortions. ISAR methods address these limitations by employing motion compensation and autofocus algorithms to refocus targets and estimate their motion parameters ([2], [3], [4]). SAR imaging of moving targets presents unique challenges [1]:
• Defocusing Effects: movement causes smearing and loss of sharpness due to target velocity along azimuth and acceleration along the slant range direction.
• Target Misplacement: range velocity components induce shifts in target positions along azimuth.
• Low Signal-to-Clutter Ratio: these factors reduce target contrast and affect detectability, recognition, and tracking.
To overcome these challenges, this work leverages the following ISAR techniques:
• Parametric Autofocus Algorithms, through image contrast maximization, to remove defocusing effects due to azimuthal chirp rate mismatch, and to assess motion parameters ([6], [7], [8], [13]).
• Non-Parametric Autofocus Algorithms, such as Phase Gradient Algorithms (PGA), to remove defocusing terms, even higher than second order, from the Doppler phase ([9], [10]).
• Cross-Range Scaling refocusing, to mitigate defocusing effects due to rotational motions of the target ([11], [12]).
• Azimuthal Sub-Aperture Analysis, to identify the acquisition time intervals in which the rotation velocity vector is constant, so that cross-range scaling refocusing can be applied.
In this work we also analyze how these methodologies are adaptable to the specific requirements of various domains, from maritime environments with high target-to-clutter contrast to land scenarios with smaller targets and complex clutter conditions. To illustrate the surveillance potential of using ISAR processing for very high spatial resolution (VHR) SAR data, the following use cases were identified:
• Maritime Surveillance, to support an effective capability to monitor marine traffic in the open sea and in critical sea areas, such as the neighbourhood of ports and, for fishery control, marine protected areas.
• Land Domain Awareness, for detection of anomalous activities near borders, critical infrastructure, and sensitive areas.
• Inland Waterways Monitoring, to support the supervision of river traffic, lakes, and canals for vessel detection and activity tracking.
• Infrastructure Monitoring, with a focus on wind turbines, to estimate their rotation velocity and dimensions for facilitating efficient operations and maintenance.
• Airport Monitoring, to verify the potential surveillance and monitoring capabilities based on VHR SAR data in airport environments.
Tests have been conducted on a large dataset of VHR SAR data from the satellite missions TerraSAR-X, PAZ, ICEYE and Capella Space. The algorithms developed for the maritime surveillance and inland waterways monitoring use cases have been validated by comparing the results of the vessels' motion parameter estimation with “truths” obtained from Automatic Identification System (AIS) data.
The achieved results are very promising and suggest expanding the use of satellite VHR SAR missions with advanced ISAR processing techniques for monitoring and surveillance applications, with the goal of operational use in future services.
ACKNOWLEDGEMENTS
This work has been funded under the project EO4SECURITY, Innovative SAR processing methodologies for security applications - Topic B1: Inverse SAR processing to enhance the capabilities to characterize targets/features of interest, ESA Contract No. 4000142270/23/I-DT. SAR data used for this work were obtained through the ESA TPM facility and the open data programs of Capella Space. The authors express gratitude to Capella Space and Airbus for their availability to provide SAR data, either for free or at special conditions, for further test cases to be analyzed within the mentioned project, and to Spire Global for their availability to provide AIS data to support validation activities.
REFERENCES
[1] R. K. Raney, “Synthetic aperture imaging radar and moving targets,” IEEE Transactions on Aerospace and Electronic Systems, vol. AES-7, no. 3, pp. 499-505, May 1971.
[2] A. Testa, E. Morando, D. Pastina, M. Zavagli, F. Santi, C. Pratola, M. Corvino, “Velocity Estimation of Maritime Targets in Spaceborne Single-Channel SAR Images: Methods and Performance Assessment,” International Radar Symposium, Berlin, Germany, 24-26 May 2023, pp. 1-10.
[3] A. Testa, D. Pastina, M. Zavagli, F. Santi, C. Pratola, M. Corvino, “Exploitation of single-channel space-borne SAR data for ship targets imaging and motion parameters estimation,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023), Rhodes Island, Greece, 4-10 Jun. 2023.
[4] M. Zavagli, D. Pastina, A. Testa, et al., “Inverse SAR Processing for Maritime Awareness,” IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 2023, pp. 5668-5671.
[5] M. Martorella, D. Pastina, F. Berizzi, P. Lombardo, “Spaceborne Radar Imaging of Maritime Moving Targets with the COSMO-SkyMed SAR System,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 7, Jul. 2014, pp. 2797-2810.
[6] M. Martorella, F. Berizzi, B. Haywood, “A Contrast Maximisation Based Technique for 2-D ISAR Autofocusing,” IEE Proceedings - Radar, Sonar and Navigation, vol. 152, no. 4, Aug. 2005, pp. 253-262.
[7] L. Xi, L. Guosui, J. Ni, “Autofocusing of ISAR images based on entropy minimization,” IEEE Transactions on Aerospace and Electronic Systems, vol. 35, no. 4, pp. 1240-1252, Oct. 1999.
[8] R. K. Raney, et al., “Precision SAR processing using chirp scaling,” IEEE Transactions on Geoscience and Remote Sensing, vol. 32, no. 4, pp. 786-799, Jul. 1994.
[9] P. Eichel, C. Jakowatz Jr., “Phase gradient algorithm as an optimal estimator of the phase derivative,” Optics Letters, vol. 14, no. 20, 1989, pp. 1101-1103.
[10] D. E. Wahl, P. H. Eichel, D. C. Ghiglia, C. V. Jakowatz Jr., “Phase gradient autofocus - a robust tool for high resolution SAR phase correction,” IEEE Transactions on Aerospace and Electronic Systems, vol. 30, no. 3, 1994, pp. 827-835.
[11] D. Pastina, “Rotation motion estimation for high resolution ISAR and hybrid SAR/ISAR target imaging,” 2008 IEEE Radar Conference, Rome, Italy, May 2008.
[12] D. Pastina, C. Spina, “Slope-based frame selection and scaling technique for ship ISAR imaging,” IET Signal Processing, vol. 3, no. 3, Sep. 2008, pp. 265-276.
[13] D. Pastina, F. Turin, “Exploitation of the COSMO-SkyMed SAR System for GMTI Applications,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 3, Mar. 2015, Art. no. 6919998, pp. 966-979.
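The parametric autofocus via image contrast maximization described in the abstract ([6]) can be sketched as follows. This is a minimal illustration, not the project's implementation: it assumes a range-compressed phase history with a quadratic azimuth phase error (chirp-rate mismatch), and the function names and search grid are illustrative.

```python
import numpy as np

def image_contrast(img):
    """Image contrast = std of intensity / mean of intensity (as in [6])."""
    intensity = np.abs(img) ** 2
    return np.std(intensity) / np.mean(intensity)

def autofocus_contrast_max(raw, alphas):
    """Parametric autofocus sketch: compensate a trial quadratic phase error
    along slow time, azimuth-compress, and keep the trial value that
    maximizes image contrast."""
    n_az = raw.shape[0]
    t = np.linspace(-0.5, 0.5, n_az)[:, None]  # normalized slow-time axis
    best_alpha, best_c, best_img = None, -np.inf, None
    for a in alphas:
        compensated = raw * np.exp(-1j * np.pi * a * t**2)
        img = np.fft.fftshift(np.fft.fft(compensated, axis=0), axes=0)
        c = image_contrast(img)
        if c > best_c:
            best_alpha, best_c, best_img = a, c, img
    return best_alpha, best_img
```

When the trial value matches the true chirp-rate mismatch, the residual quadratic phase cancels, the target focuses to a sharp peak, and the contrast metric is maximized.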
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall N1/N2)

Presentation: Advancing Video SAR Processing for Enhanced Target Characterization and Operational Capability

Authors: Temesgen Gebrie Yitayew, Geir Engen, Dr. Yngvar Larsen, Michela Corvino
Affiliations: NORCE, ESA
The development of Video Synthetic Aperture Radar (Video-SAR) represents a significant advancement in remote sensing, enabling the generation of video-like outputs from long dwell-time SAR acquisitions. This technology involves splitting the total aperture into multiple overlapping sub-apertures, each producing an image frame. These sequential frames, akin to traditional video, offer continuous temporal coverage, allowing dynamic observation of changes in the scene. The key to Video-SAR is that the synthetic apertures of consecutive frames overlap, because the aperture time required for the desired azimuth resolution is much longer than the frame duration. Our work focuses on developing an efficient, scalable Video-SAR processor that capitalizes on this redundancy to deliver high-resolution, continuous monitoring capabilities. A key innovation in our processor lies in the use of advanced signal processing techniques to accurately characterize moving targets within the scene. By analyzing sequential frames, we extract critical parameters such as target velocity, trajectory, and motion vectors. Phase correlation and Doppler processing methods are employed to detect and correct motion-induced artifacts, ensuring that the retrieved target motion reflects true dynamics rather than processing anomalies. Change detection algorithms compare successive frames to isolate moving targets against stationary backgrounds, utilizing coherence loss as an indicator of dynamic activity. This approach is particularly effective for applications such as maritime surveillance, where tracking ships' movements with high precision is essential. Furthermore, our research elaborates on a comprehensive roadmap to transition Video-SAR from experimental validation to operational deployment. We address challenges such as computational efficiency, real-time processing requirements, and integration with existing SAR data streams.
Our processor has been tested using datasets from space-borne SAR missions, including TerraSAR-X (staring spotlight mode), ICEYE, and Umbra (dwell modes). These tests validate our approach, demonstrating the system's capability to maintain high spatial and temporal resolution while accurately mapping dynamic targets. The results of our efforts reveal significant potential for Video-SAR in security applications, environmental monitoring, and infrastructure management. By providing continuous, detailed insights into target behavior over time, our system enhances situational awareness capabilities. The outcomes of this research, including performance metrics and case studies, will be presented at the Symposium, contributing to the broader discourse on next-generation SAR technologies and their operationalization.
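The sub-aperture splitting at the heart of Video-SAR can be sketched as below. This is a simplified illustration, not the processor described above: it assumes a range-focused azimuth phase history, and the names and parameters are placeholders.

```python
import numpy as np

def subaperture_frames(phase_history, frame_len, hop):
    """Split the full azimuth aperture into overlapping sub-apertures and
    azimuth-compress (FFT) each into one video frame. Overlap arises
    because frame_len (aperture samples for the desired resolution) is
    much larger than hop (samples between frame starts)."""
    n_az = phase_history.shape[0]
    frames = []
    for start in range(0, n_az - frame_len + 1, hop):
        sub = phase_history[start:start + frame_len]
        frames.append(np.fft.fftshift(np.fft.fft(sub, axis=0), axes=0))
    return np.stack(frames)  # shape: (n_frames, frame_len, n_range)
```

Because consecutive sub-apertures share most of their samples, successive frames are highly redundant, which is exactly the property exploited for change detection and target tracking between frames.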
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.85/1.86)

Session: A.09.06 Advances in Permafrost - PART 1

Covering about a quarter of the Northern Hemisphere's land surface, permafrost is one of the dominant landscape-forming features surrounding the Arctic Ocean. It is also present at higher altitudes in mountainous areas and in Antarctica. Permafrost is an Essential Climate Variable within the Global Climate Observing System, and is associated with climate tipping points.

Permafrost is a sub-ground phenomenon that cannot be directly observed from space, yet satellite observations have a major potential to support local, regional and circumpolar monitoring of this key aspect of the climate system. This session will showcase some of the more recent key achievements in circumpolar and mountain permafrost monitoring including methods/algorithms, science and applications.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: InSAR Svalbard Ground Motion Service: Pilot Products and Development Plan

Authors: Line Rouyet, Marie Bredal, Lotte Wendt, Daniel Stødle, Heidi Hindberg, Tom Rune Lauknes, Jelte von Oostveen, Yngvar Larsen, Gökhan Aslan, John Dehls, Anja Sundal, Dag Anders Moldestad
Affiliations: NORCE Norwegian Research Centre, Geological Survey of Norway (NGU), Norwegian Space Agency (NoSA)
Since 2018, InSAR Norway (https://insar.ngu.no), the Norwegian Ground Motion Service (GMS), has been disseminating Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) displacement maps and time series for mainland Norway. At the continental scale, the European Ground Motion Service (EGMS) focuses on mapping ground displacement across the land areas of the Copernicus member states. However, the Svalbard archipelago is currently not covered by InSAR Norway and EGMS. The environmental conditions in Svalbard differ significantly from mainland Norway, posing distinct technical challenges. To address these, alternative InSAR processing strategies are required to account for short snow-free seasons, continuous permafrost conditions that lead to nonlinear displacement patterns, and patchy coherent areas, spatially separated by decorrelated wet or fast-moving surfaces, large fjords, and glaciers. In addition, the user needs in Svalbard differ from those on the mainland. Svalbard is experiencing rapid environmental modifications due to amplified climate change in the Arctic. An InSAR Svalbard GMS should serve the specific needs of management agencies and local stakeholders (e.g., permafrost-related hazard assessment and monitoring, land-use planning and building stability management) and various disciplines represented in the Svalbard scientific community (e.g., permafrost, glaciology, and polar climate research). These needs must be considered when designing relevant product types and visualisation solutions. The InSAR Svalbard Development Project (2023–2025) funded by the Norwegian Space Agency (NoSA) (Post 74) and the Geological Survey of Norway (NGU) aims to lay the groundwork for a future operational InSAR GMS in Svalbard (www.ngu.no/en/geological-mapping/insar-svalbard). In 2023, the focus was placed on mapping user requirements and agreeing on a product development plan.
The outcomes of this phase were published in the report “InSAR Svalbard: User requirements, Technical Considerations and Product Development Plan” (https://hdl.handle.net/11250/3125660). In 2024, these user requirements and technical considerations were incorporated in the work to refine the processing settings and generate InSAR pilot products. The pilot products were processed in three study areas on Spitsbergen, the largest island in the Svalbard archipelago: around Longyearbyen, Ny-Ålesund and Svea. InSAR processing was conducted using a Small Baseline Subset (SBAS) algorithm. Two different types of products are currently available: 1) Seasonal displacement time series within the snow-free periods, documenting thaw subsidence and frost heave in flat terrain (strandflats, valley bottoms and mountain plateaus), as well as creeping permafrost processes on mountain slopes. 2) Interannual displacement time series between the snow-free periods, currently only focusing on the flat lowland areas and documenting long-term thaw subsidence related to permafrost degradation or uplift due to ice aggregation. In fall 2024, the InSAR Svalbard pilot products were shared with a reference user group (comprising 29 people, in addition to the project partners). The data are provided in the NORCE NLive visualisation portal. Feedback on the products and the visualisation tool was collected during a user workshop in November 2024 and through an online feedback form. In 2025, this feedback will be further analysed and used to refine the final products. Work is also underway to automate and streamline the processing chain, enabling the upscaling of the products to larger regions and facilitating consistent updates in the future. A first public release of the InSAR Svalbard products is planned for late 2025. This contribution will explain the properties, value, and limitations of the InSAR Svalbard products. 
We will demonstrate the NLive functionalities for exploring and interpreting the data, and provide recommendations for using the products for both operational and scientific purposes. Finally, we will outline the plan for future developments of the operational GMS in Svalbard, with the long-term goal of extending similar services to other Arctic regions.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Towards multi-decadal permafrost modelling based on the pan-Arctic AVHRR LST dataset

Authors: Sonia Dupuis, Sebastian Westermann, Frank Göttsche, Stefan Wunderle
Affiliations: Oeschger Centre for Climate Change Research, University of Bern, University of Oslo, Karlsruhe Institute of Technology
Satellite-derived land surface temperature (LST), snow cover, and land cover data can be used as input variables for permafrost models. LST can be used as an indicator of the thermal state of the ground and has, in the last decade, been increasingly used in Arctic research and permafrost modelling. The CryoGrid community model, a ground thermal model, has successfully utilized these datasets to model circumpolar ground temperatures at various depths and active layer thickness. Within the frame of the ESA Climate Change Initiative (CCI) Permafrost project, Moderate Resolution Imaging Spectroradiometer (MODIS) LST data, land cover data, and ERA5 reanalysis data were used to produce permafrost maps of the Northern Hemisphere. LST datasets based on MODIS are the most frequently used thanks to their medium spatial resolution (~1 km) and extended data coverage (more than 20 years), which meet most users’ requirements. A drawback is that MODIS LST products have only been available since 2001, which prevents differentiating multi-decadal climate trends from decadal-scale climate oscillations. To address this limitation, a new pan-Arctic LST dataset has been developed using EUMETSAT’s Advanced Very High-Resolution Radiometer (AVHRR) Fundamental Data Record (FDR). The pan-Arctic AVHRR LST product covers the period from 1981 to 2021 and has a spatial resolution of approximately 4 km. It incorporates snow cover information derived from fractional snow cover and snow water equivalent data, allowing for accurate emissivity and temperature retrievals over snow and ice. The newly developed pan-Arctic AVHRR LST dataset has been integrated into the CryoGrid community model, enabling modelling of the thermal state of the ground across various regions of the pan-Arctic for the past four decades.
Additionally, to enhance resolution and comparability, the AVHRR LST dataset is downscaled to match the ~1 km resolution of MODIS data using a super-resolution algorithm based on deep anisotropic diffusion. This approach allows the comparison between permafrost modelling outputs obtained from LST datasets with different spatial resolutions and the identification of areas that are sensitive to changes in spatial resolution and benefit from a higher resolution. Finally, the pan-Arctic AVHRR LST dataset facilitates a detailed investigation into land surface temperature dynamics and the response of the permafrost over the past four decades, providing valuable insights into long-term permafrost extent and changes in selected Arctic areas.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Quantifying Boreal Forests' Impact on Permafrost: Toward a Global Approach

Authors: Simone Maria Stuenzi, Dr. Frederieke Miesner, Sebastian Westermann, Prof. Dr. Moritz Langer
Affiliations: Humboldt-Universität Zu Berlin, Department of Geography, Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Vrije Universiteit Amsterdam, Department of Earth Sciences, Faculty of Sciences, University of Oslo, Department of Geosciences
Boreal forests, which cover approximately 23% of the continuous Arctic permafrost, play a crucial role in stabilizing permafrost by regulating heat and water fluxes between the vegetation, the ground, and the atmosphere. However, climate change and associated shifts in precipitation regimes, permafrost conditions, forest composition, and density pose significant threats to this tightly coupled system, potentially disrupting key ecosystem functions. We investigate how forest cover influences the thermal and hydrological conditions of permafrost using the numerical land surface model CryoGrid, equipped with a multilayer canopy module. This model framework provides a comprehensive parameterization of fluxes from the ground, through the canopy, up to the roughness sublayer, enabling the representation of diverse canopy structures and the simulation of their impact on vertical heat and moisture transfer. Simulations and measurements at forested continuous permafrost sites in eastern Siberia have shown that forest canopies significantly cool the ground and result in shallower active layers by providing shading, altering snow cover dynamics, and limiting turbulent fluxes. Building on this physical process understanding, we aim to present the first circum-Arctic assessment of boreal forests' insulative effects on continuous permafrost, as such processes are often overlooked in larger-scale permafrost modeling efforts. Based on a diverse set of data, we classify forested continuous permafrost regions according to key characteristics such as MODIS-derived plant function type, leaf area index and Köppen Climate Zones. For each cluster, we simulate a range of land cover conditions to assess their impact on permafrost thermohydrology, focusing on active layer depths, ground temperatures, and plant-accessible water. We aim to enhance our understanding of feedback mechanisms and their impact on permafrost dynamics at a broader scale. 
Finally, we explore how and where projected forest changes and climate zone shifts may amplify or mitigate permafrost thaw, emphasizing the importance of integrating these insights into large-scale permafrost mapping and monitoring efforts.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Permafrost monitoring from space – what have we learned so far?

Authors: Annett Bartsch, Tazio Strozzi, Ingmar Nitze
Affiliations: b.geos, Gamma Remote Sensing, Alfred Wegener Institute for Polar and Marine Research
Direct monitoring of permafrost from space is limited, but land surface features and dynamics can be linked to belowground conditions. We will provide an update of our summary (Bartsch et al. 2023) regarding remote sensing techniques which have been successfully applied and have proved most promising in linking Earth observation data with permafrost conditions. We will show recent progress from the ESA CCI+ Permafrost project and address research gaps in the discussion. Space-observable parameters related to ground temperature, specifically land surface temperature and surface status, have been shown to be applicable. Trend patterns can be observed which are in line with in situ observations. Northern Hemisphere ground temperature changes from year to year, for example, correlate with sea ice concentrations. Abrupt thaw has been identified as a climate tipping point with regional impact. Ground temperature time series therefore need to be complemented by monitoring of above-surface features that reflect abrupt thaw, such as lakes and drained lake basins, retrogressive thaw slumps, coastal erosion and ground subsidence. The utility of satellite data has been proven at local to regional scale in all cases, but circumpolar implementation is still lacking. General drying (water surface loss) can already be observed across the Arctic using indices from medium-resolution observations. Regional studies also indicate an increase in thaw slump activity over the past 20 years, and various efforts are being made to quantify these changes across the pan-Arctic. Recent advances have been made with, e.g., the Copernicus Sentinel missions, InSAR techniques and machine learning. However, observations at much higher spatial resolution are required for detailed analysis of landscape processes in heterogeneous permafrost-affected environments.
The International Permafrost Association (IPA) Standing Committee on Rock Glacier Inventories and Kinematics (RGIK) has developed guidelines for inventorying the kinematic attribute of rock glaciers at the regional scale using satellite radar interferometry. Rock glacier inventories including the kinematic attribute have been compiled in a dozen areas worldwide using a consensus-based method by multiple operators. The dataset is available as a geopackage for conducting similar inventories in other areas or as training data for the development of automated inventory procedures. Rock Glacier Velocity (RGV) was added in 2022 as an associated quantity of the Essential Climate Variable (ECV) Permafrost. Steps towards a standardized and consistent approach for RGV generation using satellite radar interferometry are currently being taken using Sentinel-1 data. However, a higher spatial and temporal resolution should be considered for future missions. Bartsch, A., Strozzi, T., and Nitze, I.: Permafrost Monitoring from Space, Surveys in Geophysics, 2023. https://link.springer.com/article/10.1007/s10712-023-09770-3
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Assessing Environmental Hazards of Drilling Mud Sumps in the Mackenzie Delta, Canada

Authors: Sofia Antonova, Annett Bartsch, Matthias Siewert, Johanna Scheer, Justine Ramage, Moritz Langer
Affiliations: Alfred Wegener Institute, b.geos GmbH, Umeå University, Nordregio, Vrije Universiteit Amsterdam
The Arctic has been and continues to be exposed to numerous industrial activities and pollution. Climate change, which is accelerating the thawing of the permafrost, is increasingly destabilizing numerous industrial contaminated sites across the Arctic permafrost region, posing a risk of uncontrolled mobilization and spread of pollutants. This poses significant risks to the health and livelihoods of the communities and wildlife, as they depend on intact ecosystem functions in the Arctic. We examine environmental hazards posed by oil and gas exploration activities in the Mackenzie Delta, Canada. Our focus is on drilling mud sumps, which are constructed or natural pits that collect a mixture of ground material (drilling cuts) and drilling fluids. The stability of drilling mud sumps in permafrost regions is strongly impacted by increasing temperatures leading to thermokarst processes, and changing hydrological conditions. We identified over 200 mud sumps in the region and classified their risks based on geospatial data coming from satellite observations and community knowledge. Key risk factors include isotropic subsidence, thermokarsting, and ice-wedge degradation due to permafrost thaw, coastal erosion along the Beaufort Sea, shore erosion and intermittent flooding in river-proximate areas, thermal changes in fire scars, and shifting vegetation patterns. Our study integrates fire history data, land cover maps, historical aerial imagery, high-resolution satellite and radar data, including interferometric analyses. We combine these datasets with geospatial data collected through a participatory mapping survey locating critical wildlife habitats, drinking water sources, and areas of hunting, fishing, and berry picking. As a result, we produce a risk map to support local communities in assessing and managing the hazards of sump instability, helping safeguard Arctic ecosystems and traditional livelihoods.
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall E1)

Session: A.09.05 Using Earth Observation to assess change and dynamics of the Greenland Ice Sheet

Mass loss from the Greenland Ice Sheet has accelerated over recent decades. Currently, about 60% of the mass loss is attributed to runoff of meltwater, and the rest attributed to dynamic mass loss through ice flow into the ocean. The meltwater runoff from the ice sheet has increased significantly over the last 20 years, and models project the volume of runoff will continue to increase. Runoff, and ice-sheet mass loss generally, have significant implications for sea-level rise, freshwater fluxes, ocean circulation and both terrestrial and marine ecosystems.

Recent advances in satellite Earth observation, in-situ measurements and numerical modelling enable a more accurate and integrated view of the ice sheet. These enhanced observations allow an improved understanding of the feedbacks between processes occurring within the ice sheet (i.e. meltwater hydrology links the ice-sheet surface to the base, and leads to feedbacks on ice velocity and terminus melting), as well as complex interactions with the atmosphere and ocean.

This session will highlight recent advances in our understanding of the dynamics of the Greenland Ice Sheet and their wider implications including:
- Interactions between the atmosphere and ice-sheet surface: surface mass balance, firn evolution, supraglacial hydrology and ponding.
- Quantifying interactions at the base of the ice sheet: basal friction, geothermal heat flux, subglacial hydrology and lakes.
- Impact of the ocean on tidewater glaciers and iceberg calving.
- Integrated assessment of hydrology and implication on freshwater flux.
- Assessing feedbacks between hydrology and ice flow.
- Evaluating the impact of ice-sheet change on ecosystems and wider Earth system.

Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall E1)

Presentation: A Decade of Winter Supraglacial Lake Drainage Using High Temporal Resolution C-Band SAR Observations of Northeast Greenland

Authors: Connor Dean, Randall Scharien, Ian Willis, Kali McDougall
Affiliations: University Of Victoria, Scott Polar Research Institute
Supraglacial lakes on the Greenland Ice Sheet can rapidly drain through hydrofracture, impacting ice dynamics. Drainage events also occur during winter, causing similar responses as during the melt season. Synthetic aperture radar (SAR) is effective for monitoring during winter, when snow and ice lid coverage of meltwater and lack of sunlight render optical sensors (e.g., Landsat, Sentinel-2) ineffective. We combine C-band SAR data from Sentinel-1 and the RADARSAT Constellation Mission (RCM) to detect winter (September to May) drainage events in Northeast Greenland from 2014 to 2023. Seasonal lake locations and extents are initially identified during late summer using optical Landsat data, after which horizontal-vertical (HV) backscatter from incidence-angle-normalized SAR images is tracked through winter at very high temporal resolution, up to sub-daily. Lake drainage events are differentiated from sensor noise by large, sustained increases in winter-period backscatter. ICESat-2 altimetry and ArcticDEM-derived elevation data are used to validate drainage events and estimate lake volume loss to the subglacial system. Over the 10-year period, there were 88 drainage events involving 54 lakes across the 27,000 km² area. Winters show significant interannual variability in the number of lake drainage events, ranging from a maximum of 18 to a minimum of 4. The frequency of these events decreases as winter progresses, with the majority occurring between September and November. We analyze interannual trends in lake drainage events in relation to surface melt conditions during the melt season, derived from passive microwave observations and surface air temperatures, as well as their impact on ice velocity. When observed at high temporal resolution, the precise order of drainage events was documented, revealing cases of cascading drainage events over multiple days, or closely timed (<24 hours) coupled drainage events involving six lakes.
We find that approximately half of all winter drainage events occur through these mechanisms. Overall, our approach enables a detailed analysis of the timing and prevalence of winter drainage events with unprecedented precision, improving our understanding of meltwater inputs to subglacial systems and their impacts on ice sheet dynamics, as well as the succession of cascading or coupled drainage events.
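The detection criterion described above, a large, sustained increase in winter backscatter, can be sketched as a simple change-point test. This is an illustrative sketch only: the window length and jump threshold below are placeholder values, not the study's calibrated settings.

```python
import numpy as np

def detect_drainage(backscatter_db, window=5, jump_db=4.0):
    """Flag a candidate drainage event when the mean HV backscatter (dB)
    over the `window` acquisitions after a date exceeds the mean over the
    `window` acquisitions before it by more than `jump_db`. Requiring a
    sustained step (rather than a single high sample) separates drainage
    events from speckle and sensor noise. Returns the index of the first
    detected event, or None."""
    s = np.asarray(backscatter_db, dtype=float)
    for i in range(window, len(s) - window):
        before = s[i - window:i].mean()
        after = s[i:i + window].mean()
        if after - before > jump_db:
            return i
    return None
```

The physical basis is that an ice-covered lake surface is radar-smooth (low backscatter), while the collapsed, fractured basin left after drainage scatters strongly, so a drained lake shows a step increase that persists for the rest of the winter.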
Add to Google Calendar

Thursday 26 June 08:30 - 10:00 (Hall E1)

Presentation: Investigation of Ice Features with Unknown Glaciological Origin in the Ablation Zone of Southwest Greenland

Authors: Sara-Patricia Schlenk, Dr. Georg Fischer, Matteo Pardini, Irena Hajnsek
Affiliations: German Aerospace Center (DLR) – Microwaves and Radar Institute, Friedrich-Alexander-University (FAU) – Institute of Microwaves and Photonics, ETH Zurich – Institute of Environmental Engineering
In recent years, remote sensing methods have become an essential tool for monitoring polar regions. Microwaves are able to penetrate dry snow and ice, rendering Synthetic Aperture Radar (SAR) measurements sensitive to the subsurface structure of glaciers and ice sheets, particularly at lower frequencies [1]. In the ablation zone of southwest Greenland, SAR data have revealed ice features of unknown glaciological origin. These ice features, here referred to as radar-dark and radar-bright features, initially distinguished by different backscatter characteristics, were investigated using multi-modal SAR data collected during an airborne campaign with DLR’s F-SAR system. A multi-modal approach was essential to combine complementary insights, allowing the integration of advanced SAR techniques such as polarimetry [2], interferometry [3], tomography [4,5], temporal analysis, and modeling to characterize surface and subsurface processes of these features and explore their origins. Radar-dark features exhibit low backscatter, dominated by scattering at the surface. They typically appear as round or oval shapes, ranging from 100 to 500 m and frequently form interconnected spatial patterns. Conversely, radar-bright features demonstrate high backscatter, predominantly originating from the subsurface, which results in pronounced spatial contrasts between both features. The use of SAR polarimetry has revealed key differences in scattering mechanisms. Radar-dark features were observed to exhibit low entropy and mean alpha angles, which is consistent with the dominance of surface scattering. In contrast, radar-bright features demonstrated higher entropy and alpha angles, indicating the presence of more complex scattering processes with significant volume scattering. 
The analysis of interferometric SAR data provided insights into the phase center heights, confirming the shallow radar penetration observed in radar-dark features and the deeper penetration evident in radar-bright features. Tomographic reconstructions provided a three-dimensional perspective, revealing that very weak subsurface structures were present in radar-dark features, while pronounced subsurface structures were present in radar-bright features. Further analysis demonstrated that radar-bright features consistently contain a subsurface scattering layer situated between 15 and 50 m below the surface. Scattering models, validated against observed coherence values, supported these findings. For radar-bright features, a two-component model corresponding to the surface and subsurface scattering components was suitable, while radar-dark features required a three-component model to account for surface dominance and weak subsurface contributions. The temporal analysis of ALOS-2 data revealed that radar-dark features are relatively stable over time, despite significant surface melt rates [6], moving with glacier flow. The absence of correlation between these features and surface topography or roughness indicates a subsurface origin. It is hypothesized that these features result from the formation of a weathering crust caused by the infiltration of meltwater and subsequent refreezing, which creates a porous layer at the near-surface level [7]. The presence of residual liquid water within this crust, when it is in a frozen state, is likely to attenuate radar signals, thereby reducing both penetration and subsurface scattering. The spatial distribution of radar-dark features may also be influenced by the presence of former supraglacial lakes, which may affect the location of weathering crust formation. In contrast, radar-bright features are characterized by frozen ice without liquid water contents. 
These regions are presumed to be areas of bare ice, where the absence of a weathering crust serves to minimize signal attenuation and enhance penetration into the subsurface. Overall, this study emphasizes the importance of multi-modal SAR data in the characterization of ice features within the ablation zone of southwest Greenland. By integrating multiple SAR techniques, we were able to identify fundamental differences in scattering mechanisms, surface and subsurface processes, and temporal changes. Nevertheless, to confirm the proposed hypothesis regarding the formation of radar-dark and radar-bright features, additional ground measurements and field observations are required. This further validation would serve to refine these findings, thereby enhancing their applicability to glacier modeling.
References
[1] Rignot, E., et al. (2001). Penetration depth of interferometric synthetic-aperture radar signals in snow and ice. Geophysical Research Letters, 28(18).
[2] Cloude, S. (2010). Polarisation: Applications in Remote Sensing (1st ed.). Oxford: Oxford University Press.
[3] Zebker, H. A., & Hoen, E. W. (2000). Penetration depths inferred from interferometric volume decorrelation observed over the Greenland ice sheet. IEEE Transactions on Geoscience and Remote Sensing, 38(6), 2571–2583.
[4] Tebaldini, S., Nagler, T., Rott, H., & Heilig, A. (2016). Imaging the internal structure of an alpine glacier via L-band airborne SAR tomography. IEEE Transactions on Geoscience and Remote Sensing, 54(12), 7197–7209.
[5] Banda, F., Dall, J., & Tebaldini, S. (2016). Single and multipolarimetric P-band SAR tomography of subsurface ice structure. IEEE Transactions on Geoscience and Remote Sensing, 54(5), 2832–2845.
[6] van de Wal, R., et al. (2015). Self-regulation of ice flow varies across the ablation area in south-west Greenland. The Cryosphere, 9(2), 603–611.
[7] Cooper, M. G., et al. (2018). Meltwater storage in low-density near-surface bare ice in the Greenland ice sheet ablation zone. The Cryosphere, 12, 955–970.
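The entropy/alpha discrimination described above follows the Cloude–Pottier decomposition [2]. As a hedged sketch (an illustrative coherency matrix, not campaign data), the entropy and mean alpha angle can be computed from the eigenstructure of a 3×3 polarimetric coherency matrix:

```python
import numpy as np

def entropy_alpha(T):
    """Cloude-Pottier H/alpha parameters from a 3x3 Hermitian coherency matrix T."""
    eigvals, eigvecs = np.linalg.eigh(T)              # eigenvalues in ascending order
    eigvals = np.clip(eigvals.real, 1e-12, None)      # guard against round-off negatives
    p = eigvals / eigvals.sum()                       # pseudo-probabilities
    H = -np.sum(p * np.log(p)) / np.log(3)            # entropy, normalised to [0, 1]
    alphas = np.degrees(np.arccos(np.abs(eigvecs[0, :])))  # alpha_i per eigenvector
    return H, float(np.sum(p * alphas))               # entropy and mean alpha (deg)

# One dominant eigenvalue: a surface-scattering-like response, qualitatively
# matching the low-entropy, low-alpha signature of the radar-dark features.
T_surface = np.diag([0.9, 0.05, 0.05]).astype(complex)
H, alpha = entropy_alpha(T_surface)
print(f"H = {H:.2f}, mean alpha = {alpha:.1f} deg")
```

Radar-bright (volume-scattering) responses would correspond to more evenly distributed eigenvalues, pushing the entropy towards 1.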

Thursday 26 June 08:30 - 10:00 (Hall E1)

Presentation: Modeling Ice Sheets with Precision: EE12 CryoRad’s Role in Refining Thermal and Basal Observations

Authors: Stef Lhermitte, Marion Leduc-Leballeur, Catherine Ritz, Pierre Zeiger, Aurelien Quiquet, Nanna Bjørnholt Karlsson, Anne Munck Solgaard, Synne Høyer Svendsen, Dr Giovanni Macelloni, Dr Marco Brogioni, Dr Ghislain Picard, Kenneth Jezek, Matthias Drusch
Affiliations: KU Leuven, CNR-IFAC, Institut des Géosciences de l'Environnement (IGE), CNRS-IPSL, GEUS, Ohio State University (OSU), Earth and Mission Science Division, European Space Agency (ESA)
The Earth Explorer 12 CryoRad mission candidate aims to address critical uncertainties in ice sheet modeling by providing direct observations of englacial temperatures and basal thermal states, essential parameters for understanding ice flow dynamics and predicting future sea-level rise. Current ice sheet models rely heavily on modeled englacial temperature profiles and geothermal heat flux estimates, which introduce significant uncertainty due to limited observations. CryoRad’s ultra-wideband radiometry (0.4–2 GHz) is designed to retrieve vertical ice temperature profiles from the surface to the bedrock, offering improved initialization of ice flow models, enhanced basal melt estimates, and refined ground heat flux distributions. This study assesses the sensitivity of ice flow model parameters to englacial temperature and basal conditions, identifying the accuracy and spatial resolution required for CryoRad to deliver meaningful improvements to future modeling efforts. Using the Parallel Ice Sheet Model (PISM) for Greenland and GRISLI for Antarctica, we explore how ice flow dynamics respond to variations in temperature profiles, basal thermal states, and associated uncertainties. Scenarios were constructed based on observed temperature profiles, in-situ measurements, and ancillary datasets, as well as one-dimensional and three-dimensional modeling outputs. Results indicate that englacial temperature observations with sufficient vertical resolution would significantly constrain basal sliding and deformation rates, reducing uncertainty in mass balance projections. Moreover, they highlight the potential of CryoRad data to constrain thermal gradients and basal conditions, critical for understanding ice flow near ice divides and ice shelf margins. Overall, this study defines the observational requirements (accuracy and resolution) for CryoRad to maximize its impact on ice sheet and ice shelf modeling.
By addressing these requirements, CryoRad has the potential to transform our understanding of ice dynamics and improve projections of future sea-level rise.

Thursday 26 June 08:30 - 10:00 (Hall E1)

Presentation: Quantifying Biases in Greenland Iceberg Areas Retrieved From Sentinel-1 Extra Wide Swath Data

Authors: Henrik Fisser, David Sutherland, Anthony P.
Affiliations: UiT The Arctic University Of Norway, University of Oregon
Icebergs, along with the glacier termini from which they originate, are the interface between tidewater glaciers, fjords, and the ocean. To advance our understanding of changes and dynamics in iceberg calving, we propose integrating iceberg observations in Greenlandic fjords with glacier observations. However, this requires systematic, frequent, and validated retrievals of iceberg areas. Previous studies have primarily utilized optical data to quantify iceberg areas in Greenlandic fjords. In this study, we analyze several hundred co-located Sentinel-1 Extra Wide Swath (EW) acquisitions and Sentinel-2 acquisitions. We aim to quantify biases in Sentinel-1-based iceberg area distributions and cumulative iceberg areas with reference to iceberg area distributions retrieved from Sentinel-2 data. The ultimate goal is to leverage Sentinel-1 EW data independently over broader spatial and temporal scales in the future. The study covers a variety of glacial fjords in Greenland with data acquired during summer months from 2016 through 2024, at low sea ice concentrations. We retrieve iceberg areas from Sentinel-1 data using a recently developed iceberg area model based on a gradient boosting algorithm. The model reduces errors in individual iceberg areas retrieved from Sentinel-1 data by more than 50%. For the co-located Sentinel-2 acquisitions, we use a reflectance threshold to delineate icebergs and sea ice. Icebergs in the Sentinel-2 data are distinguished from sea ice floes using a random forest classifier trained on the separability in the blue band and on geometrical properties. Co-located SAR-optical iceberg area distributions enable us to quantify spatially and temporally varying biases in Sentinel-1 iceberg areas. Preliminary results indicate a strong statistical agreement between the area distributions; however, the areas of the largest icebergs tend to be underestimated in Sentinel-1-derived iceberg area distributions.
Additionally, cumulative Sentinel-1 iceberg areas are strongly underestimated. The agreement between the Sentinel-1 and Sentinel-2 iceberg areas varies with the total ice concentration in the fjord, wind speed, and the incidence angle. Moreover, iceberg area distributions are known to transition naturally from power-law shapes close to the terminus to log-normal shapes further away. We analyze the area distributions by shape and compare the agreement between Sentinel-1 and Sentinel-2. The systematically quantified biases in Sentinel-1-based iceberg areas will facilitate more consistent observations of iceberg areas across Greenlandic fjords in summer months. Thereby, the presented study highlights opportunities for integrating SAR-derived iceberg areas with glacier observations in the future.
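The distribution comparison described above can be illustrated with a minimal sketch (synthetic, hypothetical iceberg areas, not the study's data): compare quantiles of co-located SAR- and optical-derived area distributions and the relative bias of the cumulative area.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical iceberg areas (m^2): a log-normal population as reference
# ("Sentinel-2"), and a copy whose largest icebergs are underestimated
# ("Sentinel-1"), mimicking the qualitative bias reported above.
areas_s2 = rng.lognormal(mean=7.0, sigma=1.2, size=5000)
threshold = np.quantile(areas_s2, 0.95)
areas_s1 = areas_s2 * np.where(areas_s2 > threshold, 0.7, 1.0)

# Relative bias in cumulative iceberg area
cum_bias = (areas_s1.sum() - areas_s2.sum()) / areas_s2.sum()

# Quantile-quantile comparison: ratios near 1 mean agreement at that quantile
qs = np.linspace(0.1, 0.99, 10)
ratio = np.quantile(areas_s1, qs) / np.quantile(areas_s2, qs)

print(f"cumulative-area bias: {cum_bias:+.1%}")
print(f"upper-tail quantile ratio: {ratio[-1]:.2f}")
```

In this toy setup the bulk of the distribution agrees (ratios near 1) while the upper tail and the cumulative area are biased low, which is the pattern the abstract describes for Sentinel-1.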

Thursday 26 June 08:30 - 10:00 (Hall E1)

Presentation: Changes in ice velocity on the Greenland Ice Sheet from Sentinel-1 offset tracking

Authors: Mae Evans, Anna Hogg, Trystan Surawy-Stepney, Stuart King, Thomas Slater
Affiliations: University Of Leeds, University of Edinburgh, Northumbria University
Monitoring ice velocity is vital for understanding ice sheet dynamics, refining models, and improving future sea level rise projections critical for mitigation and adaptation strategies. Since the 1990s, Greenland has experienced a pronounced negative mass balance and has been the largest ice sheet contributor to global sea level rise, with nearly half the mass loss driven by increased discharge from marine-terminating glaciers. Dynamic losses are particularly significant in regions such as Northwest and Southeast Greenland, where numerous fast-flowing glaciers dominate mass loss. Advances in Earth observation have enhanced our ability to observe changes in ice dynamics over multiple timescales, capturing velocity variations on daily, seasonal, and interannual scales. Widespread changes in ice velocity across the Greenland Ice Sheet have been documented in recent decades, with notable glacier retreats and accelerations in the Southeast in the early 2000s, and more recent sustained accelerations in the Northwest and West since 2000. In this study, we apply offset tracking to Sentinel-1 SAR imagery to monitor ice velocity over Greenland. The Sentinel-1 mission has revolutionised the monitoring of ice velocity on the Greenland Ice Sheet by providing consistent, high-resolution observations with 6- and 12-day repeat cycles. This frequent revisit interval offers a unique opportunity to capture rapid changes in glacier flow, regardless of cloud cover or polar night, and has provided unprecedented insights into evolving glacier behaviour and ice sheet dynamics. The monitoring provided by Sentinel-1 is essential for understanding the nuanced behaviour of individual glaciers and their cumulative impact on Greenland Ice Sheet dynamics. We present an updated view of speed variability across marine-terminating glaciers in Southeast and Northwest Greenland. Our analysis highlights notable periods of acceleration and deceleration over the past decade.
By integrating auxiliary datasets such as bed topography, ocean temperatures, and atmospheric conditions, we explore the key forcing mechanisms driving ice velocity changes, and we investigate the complex interactions that govern mass loss from Greenland’s marine-terminating glaciers to enhance our understanding of the processes involved.
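The offset-tracking principle used in studies like this can be sketched in a toy form (not the operational Sentinel-1 processing chain): the displacement between two co-registered image patches is estimated from the peak of their cross-correlation.

```python
import numpy as np

def track_offset(patch1, patch2):
    """Estimate the (row, col) shift of patch2 relative to patch1 via
    FFT-based circular cross-correlation (integer-pixel precision)."""
    p1 = patch1 - patch1.mean()
    p2 = patch2 - patch2.mean()
    xcorr = np.fft.ifft2(np.fft.fft2(p1).conj() * np.fft.fft2(p2)).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap offsets larger than half the patch size into negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xcorr.shape))

# Synthetic speckle-like amplitude image and a copy shifted by (3, 5) pixels
rng = np.random.default_rng(1)
img = rng.rayleigh(size=(64, 64))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
print(track_offset(img, shifted))
```

Operational offset tracking works on overlapping patch grids and refines the correlation peak to sub-pixel precision; this sketch only recovers the integer shift of one patch pair.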

Thursday 26 June 08:30 - 10:00 (Room 1.31/1.32)

Session: A.05.02 Earth observation for Climate Services - PART 1

Earth Observation has considerable potential to inform and facilitate climate service development globally. Long-term, sustainable observations of Essential Climate Variables are the cornerstone of our ability to monitor climate change and climate risks.
These services can in turn be used to support implementation of the Paris Agreement, nationally and internationally set goals (such as the European Green Deal and Nationally Determined Contributions), and Monitoring, Reporting and Verification (MRV) of adaptation and mitigation efforts.

However, in order to truly support decision making, this wealth of information must be supplemented by initiatives serving user needs, providing for example, sectoral or regional scale information in a timely, trusted and transparent fashion.

Climate services form the critical link between climate information and decision-making communities of practice. Development of such services must be underpinned by the co-production, with both the end-user and data provider communities, of user requirements, research and data needs, and intermediate products.

These services fundamentally add value to EO information, such that it can be used for decision making. EO-based service development may include (but is not limited to):
• Sustained and sustainable data provision, quality assurance and accessibility
• Research and development on existing data products to produce value added services either directly for decision makers, or for intermediaries in the climate services value chain
• Scoping and requirements gathering for new observational products, where there is a user need
• Stakeholder engagement at all levels of the value chain, as part of a co-production process
• Cross-disciplinary collaboration and research, including integration of economic or social information
• Progress towards policy goals such as implementation of the Paris Agreement, NDCs and MRV systems

This session seeks submissions related to all aspects of climate service provision and development.

Thursday 26 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: Advancing Climate Monitoring with Satellite Data: The Role of the Copernicus Climate Change ECV Programme

Authors: Joaquín Muñoz-Sabater, Joao Martins, Kevin Lossouarn, Julien Nicolas, Francesca Guglielmo, Carlo Buontempo
Affiliations: ECMWF
The Copernicus Climate Change Service (C3S) plays a central role in the European Union’s efforts to monitor, adapt to and mitigate climate change. Over the course of recent years, the satellite-based Essential Climate Variables (ECVs) component of C3S has established a strong foundation, providing Climate Data Records of ECVs that are crucial for understanding climate trends and impacts based on observations. Currently, C3S provides access to 22 of the 55 Global Climate Observing System (GCOS) ECVs, reflecting its ongoing success in data provision. Moving forward, the service aims to expand its scope to meet evolving needs, with future initiatives focused on enhancing data accessibility, availability and usability. At the same time, C3S will maintain its commitment to deliver high-quality services while leveraging advancements in satellite technology and deepening collaborations with key agencies like ESA and EUMETSAT. Upcoming developments will be organised in six thematic domains: Atmospheric Physics, Atmospheric Composition, Oceans and Sea Ice, Cryosphere, Land Hydrology, and Land Biosphere. In the near term, C3S will introduce six new ECVs and associated quantities, while expanding existing operational services with additional data products. Following this effort, further ECVs will be incorporated, with their transition to operations fully supported by C3S. While some of the 55 GCOS ECVs are not observable from satellite platforms, reanalysis provides a valuable alternative for monitoring these variables. In cases where both observations and reanalysis information are available, an ensemble analysis approach for these variables could be anticipated. Improved tools, including an enhanced Common Data Store and upgraded Evaluation and Quality Control functions, will streamline access to validated, user-driven datasets. 
Looking ahead, C3S envisions its role as a leader in ECV services, delivering not only data services but actionable climate indicators linked to each ECV quantity. This approach will ensure timely, relevant insights to support informed decision-making for climate change adaptation and mitigation efforts.

Thursday 26 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: A Policy-Driven Science and Technology Service for EO: Prototyping an Urban Climate Adaptation Use Case

Authors: Dr Mark Dowell, Dr Candan Eylül Kilsedar, Dr Michele Melchiorri, Dominik Weckmüller, Dr Christelle Vancutsem
Affiliations: European Commission- Joint Research Centre, Arcadia SIT for the Joint Research Centre (European Commission), Engineering Ingegneria Informatica S.p.A. for the Joint Research Centre (European Commission)
The increasing availability of Earth observation (EO) data presents unprecedented opportunities for evidence-based policymaking, yet a critical "last mile" gap persists between data availability and its practical application in environmental and climate policy and decision-making. To bridge this gap, we propose a Policy-Driven Science and Technology Service for EO that systematically connects policy needs with tailored EO services through semantic similarity approaches and advanced analysis tools. The European Commission (EC) Knowledge Centre on Earth Observation (KCEO) identifies the requirements of EC policymakers through Deep Dive assessments on specific policy areas (Biodiversity, Urban Climate Adaptation, Compliance Assurance) and a survey, providing a comprehensive overview of policy needs and EO product use across EC Directorates-General. Analysis of these requirements and relevant international policy frameworks and treaties will lay out priority policy needs and corresponding indicators. This analysis employs AI tools to complement and support thematic human know-how. 
Through the three Deep Dive assessments, key policy priority needs and requirements emerged on the management of the built environment:
• Monitoring of urban green spaces and advancement of urban greening for climate adaptation, for which the EU Biodiversity Strategy for 2030 and EU Nature Restoration Law define the needs and roadmap;
• adaptation to the urban heat island effect, which receives policy support through the European Regional Development Fund, Cohesion Policy 2021–2027, and EU Climate Adaptation Strategy;
• water management for climate adaptation in urban areas, relevant to the Cohesion Policy 2021–2027, EU Climate Adaptation Strategy, and Water Framework and Floods Directives;
• adaptation to coastal flooding, eutrophication, marine heat waves, and erosion and geomorphological change, supported by the Maritime Spatial Planning Directive;
• compact urban development and climate adaptation in Africa, and measuring and mapping urban vulnerability to make cities resilient in EU partner countries, which receive policy support from the European Fund for Sustainable Development Plus and Global Gateway Strategy;
• monitoring building thermal performance and solar potential, supported by the Energy Performance of Buildings Directive's requirements for zero-emission buildings, the Renewable Energy Directive's 42.5% renewable energy target, and in alignment with global climate action through the EU Covenant of Mayors for Climate & Energy (CoM) initiative and the Paris Agreement's goals for building sector decarbonization;
• providing a city-level GHG monitoring service, essential for supporting local authorities' climate action through the CoM and meeting reporting requirements under the Global Methane Pledge, while addressing current limitations in Baseline Emission Inventories and the need for standardized, transparent measurements at urban scales that align with the Paris Agreement's transparency framework.
In the urban domain, the European Commission's Joint Research Centre (JRC) has long-standing expertise as co-lead of the Group on Earth Observations (GEO) Human Planet Initiative. With the Global Human Settlement Layer, the JRC produces global geospatial datasets on the built environment and population, as well as their attributes. Our first use case development will use requirements from the Deep Dive assessments, ongoing survey, and AI-supported analysis of global policy requirements to define indicators that can be addressed with the global geospatial datasets available through the GEO Human Planet Initiative and Copernicus Services. Many of the EU policy areas identified through the work of the KCEO align with corresponding international policy frameworks. EU policies are designed in response to commitments made under global treaties and conventions. This alignment highlights the interlinkages and dependencies between EU and global policy requirements for EO products and services, emphasizing the importance of a cohesive approach to data utilization. Our AI approach builds on KCEO’s previous successes in identifying EO-relevant legislation from EUR-Lex. By employing a Large Language Model-powered workflow, we attribute relevant Copernicus products to legislation and extract relevant references (at the Article level). We will further develop this approach to identify and prioritize policy needs, creating a matrix linking EO-derived global datasets and indicators to specific EC policies and international treaty and convention requirements. The proposed service will provide a blueprint for a Policy-Driven Science Service for EO, outlining the structure, methodologies, and processes necessary to translate scientific EO data into actionable policy insights. It will also support Member States in fulfilling their environmental reporting obligations, focusing on emerging requirements.
Our work contributes to the objectives of the European Strategy for Data, EuroGEO, and the GEO Post-2025 Strategy and will help to enhance the EU's capacity to address complex environmental and societal challenges through data-driven decision-making.
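The policy-to-product matching described in this abstract relies on semantic similarity. As a simplified, hypothetical sketch (toy policy snippets and product names; the actual KCEO workflow uses LLM-powered tooling), TF-IDF cosine similarity can stand in for embedding-based matching:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Simple TF-IDF vectors (term -> weight) built over a shared corpus."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    n = len(docs)
    return [{t: tf * (math.log((1 + n) / (1 + df[t])) + 1)
             for t, tf in Counter(toks).items()} for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical policy needs and EO product descriptions (illustrative only)
policies = [
    "monitoring urban green spaces for climate adaptation",
    "adaptation to the urban heat island effect",
    "city-level greenhouse gas monitoring service",
]
products = [
    "land surface temperature urban heat product",
    "vegetation cover and green spaces dataset",
    "atmospheric greenhouse gas concentration dataset",
]

vecs = tfidf_vectors(policies + products)
pol_vecs, prod_vecs = vecs[:len(policies)], vecs[len(policies):]
matches = [max(range(len(products)), key=lambda j: cosine(pv, prod_vecs[j]))
           for pv in pol_vecs]
for policy, m in zip(policies, matches):
    print(f"{policy!r} -> {products[m]!r}")
```

In practice the matching would run over embeddings of legislation text (e.g. from EUR-Lex) and product metadata rather than bag-of-words vectors, but the ranking-by-similarity structure is the same.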

Thursday 26 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: Evaluating the Fitness-for-Purpose of Essential Climate Variables: Lessons Learned and Future Directions

Authors: Ruben Urraca, Fabrizio Cappucci, Christian Lanconelli, Dr Nadine Gobron
Affiliations: European Commission, Joint Research Centre
The Global Climate Observing System (GCOS) provides the reference climate monitoring framework by defining 55 Essential Climate Variables (ECVs) that critically contribute to the characterization of Earth’s climate. Over the last decade, Europe has established a climate monitoring capacity through the development of the Copernicus Climate Change Service (C3S), Copernicus Atmosphere Monitoring Service (CAMS), ESA’s Climate Change Initiative (ESA CCI) and EUMETSAT’s Satellite Application Facility on Climate Monitoring (CM SAF). As a result, around 34 ECVs are currently available from Earth Observation products. ECV requirements are clearly defined by GCOS. Closing the loop by evaluating the fitness-for-purpose of ECV products is needed to ensure accurate climate monitoring and safe uptake of climate information in the policy domain. The Joint Research Centre has developed a methodology to evaluate the fitness-for-purpose of ECV products and tested it with four ECVs: snow cover [1], snow albedo [2], surface shortwave radiation [3], and aerosols [4]. The metric used to measure ECV fitness is stability, defined by GCOS as the maximum acceptable drift in ECV products. Our studies targeted Copernicus services (C3S and CAMS), but products from other European (CM SAF and ESA CCI) and international agencies (NASA, NOAA) were also evaluated. The products analyzed covered satellite ECVs, reanalysis ECVs and climate models. First, stability is assessed based on the temporal change of the bias between products and long-term, spatially representative ground stations. The trend in bias is compared against the GCOS stability requirement. The methodology described in [5] was used to filter spatially representative stations, i.e., sites where point measurements are representative of the ECV product pixel. Secondly, the trends of all products were inter-compared at the global scale, evaluating their spatial, temporal and seasonal consistency.
This contribution summarizes the key issues found affecting the stability of each product. Multi-sensor satellite products frequently contain spurious jumps in the transition between sensors (e.g., SPOT-PROBA-Sentinel-3/OLCI or (A)ATSR-Sentinel-3/SLSTR). Individual sensors can be affected by radiometric or orbital drifts (e.g., AVHRR), while additional spurious changes can be introduced by the cloud or snow masks. Some reanalysis variables have spurious jumps related to temporal changes in the type and quantity of observations assimilated. Pure climate models (CMIP6) are free from spurious jumps due to the lack of data assimilation, but this also hinders the accurate representation of climate trends. All products include some degree of modeling, thus being affected by model simplifications that prevent the representation of some physical processes (e.g., assumptions on aerosol composition or surface reflectance). The presentation also describes how fitness-for-purpose results drive climate services’ evolution, as well as the challenges encountered. Looking to the future, we discuss the sources of the most common issues leading to a lack of stability. For instance, missions designed for other purposes, e.g., meteorology, are currently being used for climate monitoring, while stability requirements may not always have been sufficiently prioritized by the space segment. Reanalyses appear as the most mature solution to harmonize multi-sensor observations and deliver temporally stable ECVs. Climate services need to go beyond ECVs to meet new user and policy needs. We also briefly discuss how fitness-for-purpose methodologies must be adapted, through uncertainty propagation, to quantify the fitness of all product pixels. This would allow propagating ECV product uncertainties to policy indicators, ensuring a safe uptake of EO data in applications where fitness is critical, such as compliance assurance or attribution of losses and damage to climate change.
References
[1] Urraca R, Gobron N. Temporal stability of long-term satellite and reanalysis products to monitor snow cover trends. The Cryosphere. 2023;17(2):1023–52.
[2] Urraca R, Lanconelli C, Cappucci F, Gobron N. Assessing the fitness of satellite albedo products for monitoring snow albedo trends. IEEE Transactions on Geoscience and Remote Sensing. 2023;61:1–7.
[3] Urraca R, Trentmann J, Pfeifroth U, Gobron N. Can satellite products monitor solar brightening in Europe? Remote Sensing of Environment. 2024;315:114472.
[4] Urraca R, Cappucci F, Lanconelli C, Gobron N. Causes of disagreement in global aerosol trends from satellites, models and reanalyses. Remote Sensing of Environment (to be submitted).
[5] Urraca R, Lanconelli C, Gobron N. Impact of the spatio-temporal mismatch between satellite and in situ measurements on validations of surface solar radiation. Journal of Geophysical Research: Atmospheres. 2024;129(10):e2024JD041007.
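The stability metric used in this work can be sketched in simplified form (synthetic bias series and a hypothetical drift threshold; GCOS expresses stability as a maximum acceptable drift, often quoted per decade): fit a linear trend to the product-minus-station bias time series and compare the resulting drift with the requirement.

```python
import numpy as np

def stability_check(years, bias, max_drift_per_decade):
    """Fit a linear trend to a bias time series and compare the drift
    (slope * 10 years) against a stability requirement."""
    slope, _ = np.polyfit(years, bias, deg=1)  # bias trend per year
    drift = slope * 10.0                       # drift per decade
    return drift, abs(drift) <= max_drift_per_decade

# Synthetic monthly bias series with a small artificial drift of 0.03/yr
years = np.arange(2000, 2020, 1 / 12)
rng = np.random.default_rng(2)
bias = 0.5 + 0.03 * (years - 2000) + rng.normal(0.0, 0.1, years.size)

drift, ok = stability_check(years, bias, max_drift_per_decade=0.2)
print(f"drift: {drift:+.2f} per decade, within requirement: {ok}")
```

A real assessment would additionally test the significance of the fitted trend and restrict the comparison to spatially representative stations, as done in [5].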

Thursday 26 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: From Satellite Data to Climate Solutions: EUMETSAT’s role in advancing Climate Services

Authors: Christine Traeger-Chatterjee, Joerg Schulz, Rob Roebeling, Frank Kaspar, Joerg Trentmann, Anke Tetzlaff, Christian Grams, Elizabeth Good, Stijn Vermoote, Antonio Vecoli
Affiliations: Eumetsat, Deutscher Wetterdienst, MeteoSwiss, UK MetOffice, European Centre for Medium Range Weather Forecasting, Meteorological and Environmental Earth Observation S.r.l.
Satellite data now cover several decades and are therefore becoming increasingly relevant for climate monitoring. For more than two decades EUMETSAT has been collaborating with other space agencies across the globe to extend and systematically reprocess their long-term archives of satellite observations (referred to as level-1 observations) and prepare them for use in climate re-analysis and for monitoring changes in the Earth's climate system. The use of long-term and high-quality time-series of satellite observations has led to a significant improvement of recent re-analyses, such as the fifth generation ECMWF atmospheric reanalysis (ERA-5). In addition, these time-series are the basis for a range of retrieved geophysical variables, many of them contributing to the set of GCOS Essential Climate Variables (ECVs) and being used in IPCC Assessment Reports. EUMETSAT aims for continuity of these time-series by planning re-processing activities for present and future missions at an early stage. This includes continuous engagement with users to understand their current and anticipated future needs and to consider them in the planning of future Climate Data Records. Unlocking the full potential of satellite-based climate data is not a trivial task. It very often requires new ways of thinking, building new or re-considering existing workflows. This is a challenge for both satellite data experts and experts in climate services. Hence, EUMETSAT, including its Satellite Application Facilities, collaborates with users to integrate satellite-based climate data into new and existing workflows to turn the vast amount of data into valuable information. We will present a range of use cases that have arisen in cooperation with national weather and climate services, such as the use of satellite data:
- To complement ground-based measurements, e.g. to improve gridded products of surface radiation and sunshine duration.
- To develop national solar atlases or interactive platforms for the assessment of PV potential (e.g. the PV GIS system of the European Commission or the Swiss "solardach" portal).
- To contribute to the evaluation and monitoring of solar energy production trends.
- To estimate evaporation in hydrological models.
- To contribute to the monitoring of extreme temperatures, droughts and urban heat islands.
The presentation will provide an overview of EUMETSAT's role in global satellite re-processing activities, highlight the relevance of satellite climate data records for re-analysis and showcase the direct usage of satellite climate data records in operational applications.

Thursday 26 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: The Copernicus Climate Change Service for Monitoring the Cryosphere

Authors: Thomas Nagler, Jan Wuite, Dr. Gabriele Schwaizer, Mag. Markus Hetzenecker, Mag. Ursula Fasching, Mag. Tanja Rahim, Michael Zemp, Frank Paul, Dr. Ines Dussaillant, Dr. Jaqueline Bannwart, Dr. Kari Loujus, Dr. Miriam Kosmale, Pinja Venäläinen, Dr. Sebastian Bjerregaard Simonsen, Dr. Natalia Havelund Andersen, Prof. Andrew Shepherd, Dr. Tom Slater, Dr. Ines Otosaka, Dr. Athul Kaitheri, Dr. Joaquin Muñoz Sabater
Affiliations: ENVEO, University of Zurich, Finnish Meteorological Institute, Technical University of Denmark, University of Northumbria, European Centre for Medium-Range Weather Forecasts
The Copernicus Climate Change Service (C3S) for the Cryosphere monitors key climate variables of the land cryosphere using Earth Observation data in combination with field observations and provides coherent data records and authoritative information on their past and current states to the public. The service continues the work of the previous C3S Service on Land Hydrology and Cryosphere, focusing on the Essential Climate Variables (ECVs) Glaciers and Ice Sheets, while adding Seasonal Snow as a new service component. The C3S Cryosphere Service builds on methods developed within ESA's Climate Change Initiative (CCI) for processing and analysing extended satellite data time series. The Glaciers Service provides the ECV parameters glacier area, surface elevation change and mass change. Glacier outlines are produced by means of a semi-automatic procedure using high-resolution optical satellite data such as Sentinel-2 and Landsat. The glacier mass change product provides globally gridded annual glacier mass changes derived by integrating in-situ observations and spaceborne geodetic data from the WGMS Fluctuations of Glaciers database. The service also supports the integration of glacier surface elevation products from different satellite data. With the generated glacier outlines, the Glaciers Service also contributes to the Randolph Glacier Inventory (RGI). The Ice Sheets Service delivers a homogenized portfolio of gridded products, including ice velocity and surface elevation change, covering Greenland and Antarctica. Annual ice velocity products are generated from time series of Sentinel-1 SAR data from 2015 onwards. Monthly surface elevation change products are derived from altimeter data of multiple satellite missions from 1992 onwards. These datasets enable the detection and analysis of ice-sheet-wide volume and velocity changes in response to atmospheric warming.
The Snow Service provides products on two key parameters characterizing the seasonal snowpack: snow cover extent (SCE) and snow water equivalent (SWE). The SCE product is a global daily dataset offering fractional snow cover, with a homogenized time series beginning in 1982. This time series integrates data from multiple medium-resolution optical sensors, including AVHRR (1982-2000), MODIS (2000-2022) and Sentinel-3 SLSTR (2022 onwards). To ensure consistency across the time series, data from different sensors are used over common observation periods. The SWE product is generated daily for the Northern Hemisphere (excluding mountainous regions) and is derived through the assimilation of satellite passive microwave data with in-situ snow observations. The datasets enable studies of the spatial and temporal variations of snow mass and snow cover patterns over the past 40 years. The Climate Data Records (CDRs) generated within the Cryosphere Service are quality-checked and distributed through the Copernicus Climate Data Store (CDS). The CDRs are analysed by the service team to identify changes in cryosphere components in response to climate warming. These findings will also contribute to the annual European State of the Climate report. In this presentation we will introduce the C3S Cryosphere Service, which started in October 2024, and show examples of the different ECV parameters.

Thursday 26 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: EUMETSAT’s Contribution Towards Generating Uncertainty Characterised Fundamental Climate Data Records


Authors: Jörg Schulz, Viju John, Timo Hanschmann, Carlos Horn, Oliver Sus, Jaap Onderwaater, Rob Roebeling
Affiliations: EUMETSAT
Climate change is currently one of the main threats our planet is facing. Observations play a pivotal role in underpinning the science to understand the climate system and in monitoring its changes, including extreme events, which have adverse effects on human lives. Information generated from measurements by Earth observation satellites contributes significantly to the development of this understanding and to the continuous monitoring of ongoing climate change and its impacts. However, the meaningful use of data from these satellites requires them to be long-term, spatially and temporally homogeneous, and uncertainty characterised. The process of preparing satellite data for climate studies is tedious and has only recently been recognised as a fundamental first step in preparing records of Essential Climate Variables (ECVs) from these data. During the last decade EUMETSAT has generated several Fundamental Climate Data Records (FCDRs) consisting of measurements from instruments operating from microwave to visible frequencies. These measurements are not only from satellites operated by EUMETSAT but also from satellites operated by other agencies such as NOAA and CMA. Scientific advances for the data generation have been made through several EU research projects such as ERA-CLIM, FIDUCEO and GAIA-CLIM. The FIDUCEO project was pivotal for developing a framework for characterising uncertainties of Earth Observation data. The principles developed in the project have been adapted and extended by EUMETSAT by including other sensors and by consolidating longer time series. This presentation outlines the basic principles of FCDR generation, illustrated through a few examples. The basic steps of FCDR generation comprise quality control of the raw data, including quality indicators, and recalibration of the raw data to produce physical quantities such as radiances or reflectances. Throughout these steps, uncertainty characterisation and harmonisation of a suite of instruments are performed.
Finally, outputs are generated in user-friendly formats, e.g. NetCDF4 and/or Zarr, adhering to community best practices for metadata. The presentation illustrates these principles with two examples: one on the creation of a harmonised time series of microwave humidity sounder data, and the other on the creation of FCDRs from geostationary satellite infrared and visible range measurements. The resulting FCDRs are used to create data records of ECVs, for example by the EUMETSAT Satellite Application Facilities (SAFs), and enable uncertainty propagation into the derived ECV data records. EUMETSAT data records support international research activities in the World Climate Research Programme (WCRP) and national and international climate services such as the Copernicus Climate Change Service, particularly global reanalysis. Examples of the use of FCDRs to improve the quality of CDRs will be presented as well.

Thursday 26 June 08:30 - 10:00 (Room 1.15/1.16)

Session: A.07.04 Irrigation monitoring through Earth Observation (EO) data - PART 1

Irrigation is the primary source of anthropogenic water use, far exceeding domestic and industrial withdrawals. Despite the essential role played by irrigation in ensuring food production and its impacts on the management of water resources, reliable and explicit data about irrigation dynamics (i.e., timing, extent and amounts of water used) are generally lacking worldwide. Remote sensing technology has recently proven to be an essential tool to detect irrigation occurrence in space and time and to estimate the amounts of water used, especially in light of the latest high-resolution retrievals.
This session welcomes contributions presenting innovative approaches leveraging Earth Observation (EO) data, possibly combined with modeling approaches or ground-based measurements, for monitoring irrigation and assessing the associated impacts. Topics of interest include but are not limited to:
- exploitation of EO data for irrigation detection;
- use of EO data for quantifying irrigation water use;
- data assimilation techniques to improve irrigation schemes;
- assessment of the impacts of irrigation on the water cycle;
- management of irrigation using hydrological modeling combined with satellite data;
- estimates of irrigation water requirements leveraging satellite data;
- development of strategies based on remotely sensed data for improving irrigation efficiency.

Thursday 26 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Leveraging Frequency Analysis of SMOS Soil Moisture Retrievals to Detect Irrigated Areas in the Contiguous United States

Authors: Christian Massari, PhD Sara Modanesi, PhD Zdenko Heyvaert, Louise Busschaert, Martina Natali, Gabrielle De Lannoy
Affiliations: Research Institute for Geo‐Hydrological Protection, National Research Council, Department of Earth and Environmental Sciences, KU Leuven
Despite being one of the most significant human interventions in the terrestrial water cycle, the distribution and extent of irrigation remain highly uncertain. This knowledge is crucial across a wide range of research areas, from closing the water cycle to understanding land-atmosphere interactions. Obtaining reliable global data on where irrigation is applied is particularly challenging, with important implications for the accuracy of land surface model simulations. These models often rely on static maps derived from subnational irrigation inventories, which fail to capture the dynamic and heterogeneous nature of irrigation practices. Earth-observing satellites provide a unique ability to monitor key processes closely linked to irrigation, such as soil moisture, land cover, and vegetation activity, with near-global coverage and high temporal frequency. Over the past decade, there has been growing interest in using satellite data to assess the extent, frequency, and intensity of irrigation. For example, some pioneering studies (1-3) have utilized coarse-scale passive and active microwave soil moisture retrievals to detect irrigation. These studies compared the soil moisture distribution from model datasets that do not account for irrigation with various satellite-based soil moisture products. If the satellite data accurately capture the irrigation signal, one would expect to observe wetter soil conditions in areas where irrigation is applied. However, despite promising results in certain regions, the spatial mismatch between model outputs and satellite data, along with confounding factors such as topography, vegetation, frozen soils, and Radio Frequency Interference (RFI), introduces significant uncertainties across much of the study area. In this study, we used differences in the frequency content of soil moisture spectra, obtained through wavelet transformation, to identify irrigated areas in the Contiguous United States (CONUS).
We compared the power spectra of a land surface model without irrigation activation to satellite-based soil moisture data from the Soil Moisture and Ocean Salinity (SMOS) mission. Our results show that: (i) irrigation significantly alters the dominant spectral component of soil moisture, with climate-dependent variations, and (ii) the method demonstrates moderate to good capability for detecting irrigated areas over CONUS. Specifically, we compared the binary classification of irrigated vs. non-irrigated areas to the Landsat-derived Global Rainfed and Irrigated Cropland product (LGRIP30) at 30-meter resolution, regridded to a 25 km longitude-latitude grid. The Area Under the Curve (AUC), a standard metric derived from the Receiver Operating Characteristic (ROC) curve, was found to be 0.7, indicating moderate to good discrimination ability. Looking ahead, with future L-band radar missions such as ROSE-L and NISAR, this technique could be effectively applied to detect small-scale irrigated fields, offering valuable insights for precision agriculture applications.
1. Kumar, S.V.; Peters-Lidard, C.D.; Santanello, J.A.; Reichle, R.H.; Draper, C.S.; Koster, R.D. Evaluating the utility of satellite soil moisture retrievals over irrigated areas and the ability of land data assimilation methods to correct for unmodelled processes. Hydrol. Earth Syst. Sci. 2015, 19, 4463–4478.
2. Lawston, P.M.; Santanello, J.A.; Kumar, S.V. Irrigation signals detected from SMAP soil moisture retrievals. Geophys. Res. Lett. 2017, 44, 11860–11867.
3. Zaussinger, F.; Dorigo, W.; Gruber, A.; Tarpanelli, A.; Filippucci, P.; Brocca, L. Estimating irrigation water use over the contiguous United States by combining satellite and reanalysis soil moisture data. Hydrol. Earth Syst. Sci. 2019, 23, 897–923.
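For readers unfamiliar with the AUC metric used in this abstract: it is the probability that a randomly chosen irrigated pixel receives a higher detection score than a randomly chosen non-irrigated one, so 0.5 is chance level and the reported 0.7 indicates moderate skill. A minimal, purely illustrative sketch (not the authors' code; all names are hypothetical) using the pairwise-ranking identity:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney pairwise-ranking identity:
    fraction of (irrigated, non-irrigated) pixel pairs in which
    the irrigated pixel gets the higher score (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # irrigated reference pixels
    neg = [s for s, y in zip(scores, labels) if y == 0]  # non-irrigated reference pixels
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two irrigated and two rainfed pixels.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

In practice one would compute this over a gridded detection map against a reference product such as LGRIP30, e.g. with scikit-learn's `roc_auc_score`.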

Thursday 26 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Advancing irrigation water use monitoring through satellite-based estimation

Authors: Amali A. Amali, Timothy Foster, Angela Harris
Affiliations: Department of Geography, School of Environment, Education and Development, The University of Manchester, Department of Mechanical, Aerospace and Civil Engineering, The University of Manchester
Globally, rising irrigation demands, coupled with increasing water supply volatility due to climate change, are motivating policymakers and water managers to explore solutions to improve the sustainability of water use in the agricultural sector. A key challenge for sustainable agricultural water management is a lack of reliable and complete data on where, when, and how much water is abstracted for irrigation. Traditional in-situ irrigation water use (IWU) monitoring methods face significant challenges, including high costs, limited scalability, and socio-political resistance. The growth in quantity and resolution of satellite imagery, along with advancements in processing methods such as machine learning, offers a potential solution for monitoring agricultural water use cost-effectively at scale. However, at present, only limited research has evaluated the reliability and accuracy of satellite-based estimates of agricultural water use against data on real-world agricultural water abstractions. Here, we evaluate the accuracy of satellite-derived evapotranspiration (ET) models for field-scale IWU estimation by comparing these estimates to ground-reported irrigation abstraction records in Kansas, a globally significant agricultural region overlying the water-stressed High Plains Aquifer in the central United States. Using data from the OpenET project and regional precipitation datasets, we derive IWU estimates for about 7,000 irrigated fields between 2013 and 2023. This represents the most extensive field-scale analysis of its kind, serving as a critical testbed to evaluate ET model performance over a large spatiotemporal scale. Our results reveal substantial variability among satellite-derived IWU models, with model performance differing spatially and temporally.
While, at the annual scale, the mean estimated IWU (274 mm) was close to the observed value (307 mm), we found significant errors in satellite-based estimates of IWU when compared with observations of abstraction in specific years and field locations. Notably, we show that some models tend to consistently overestimate IWU while others consistently underestimate it, highlighting the potential for improving satellite-based estimates of agricultural water use through local/regional refinement of model selection and calibration. Indeed, when models were selected a priori based on performance metrics such as RMSE and slope, we observed substantial improvements in prediction accuracy, reducing errors by 10–15% and increasing alignment with observed irrigation values. We demonstrate the potential of satellite-based IWU estimation as a scalable tool for water resource management while highlighting persistent challenges in accuracy and spatial heterogeneity. By systematically validating and optimising these models, we aim to bridge the gap between remote sensing technologies and practical water management applications. Our findings have practical implications for policymakers, water managers, and stakeholders, supporting evidence-based decisions to enhance agricultural water use. Enhanced accuracy in satellite-derived IWU estimates can inform more effective water allocation, mitigate groundwater over-extraction, and foster resilience in the face of growing demand and climate variability.

Thursday 26 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Long-term, satellite-based irrigation water use estimates to foster climate studies: The CCI-AWU project

Authors: Dr Jacopo Dari, Luca Brocca, Sara Modanesi, Christian Massari, Renato Morbidelli, Carla Saltalippi, Pierre Laluet, Pia Langhans, Wouter Dorigo, Louise Busschaert, Zdenko Heyvaert, Gabrielle De Lannoy, Davide Danilo Chiarelli, Nikolas Galli, Maria Cristina Rulli, Clement Albergel
Affiliations: University Of Perugia, CNR-IRPI, TU Wien, KU Leuven, Politecnico di Milano, European Space Agency Climate Office
The exploitation of water by humans for various purposes has always been a sign of progress, accompanying and often marking the evolution of civilizations. Humans use water for agricultural, industrial, and civil purposes. All these uses contribute to ensuring high living standards, but in quantitative terms the volumes of water destined for agricultural purposes far exceed the withdrawals for industrial and civil purposes. Hence, Anthropogenic Water Use (AWU) essentially consists of agricultural water use, namely water applied for irrigation. Explicit, consistent, and reliable information on irrigation water use is generally lacking worldwide, but satellite observations offer the opportunity to remotely monitor irrigation dynamics. The Climate Change Initiative (CCI) AWU project (https://climate.esa.int/en/projects/anthropogenic-water-use/), funded by the European Space Agency (ESA) and conceived as a precursor activity to test the feasibility of a proper AWU Essential Climate Variable (ECV), aims at developing long-term estimates of irrigation water use, using satellite-based soil moisture (SM) data and modelling with satellite-based constraints. To do this, three well-established irrigation quantification methods (satellite SM-based inversion, satellite SM-based delta, modeling) are tested by leveraging coarse-resolution remote sensing data. This contribution aims at showing and comparing outcomes of the three approaches over the CONtiguous United States (CONUS). Reference data on irrigation water use from the Farm and Ranch Irrigation Surveys (FRIS) produced by the National Agricultural Statistics Service (NASS) of the United States Department of Agriculture (USDA) for the years 2013 and 2018 have been collected for validation purposes. In addition, results obtained by ingesting the developed AWU datasets into the WATNEEDS agro-hydrological model to reproduce observed water management practices will be shown as a use case.
The final aim of the CCI-AWU project is the development and validation of long-term, satellite-based irrigation water use estimates to be exploited as an input variable into modeling platforms serving climate studies.

Thursday 26 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Irrigation volumes monitoring by assimilating satellite data of land surface temperature and soil moisture into an energy-water balance model

Authors: Chiara Corbari, Nicola Paciolla, Justin Sheffield, Kamal Labbassi, Youssef Houali, Sven Berendsen, Zoltan Szantoi
Affiliations: Politecnico Di Milano, University of Southampton, Chouaib Doukkali University, European Space Agency
The agricultural sector is the biggest and least efficient water user, accounting for around 80% of total water use in the Mediterranean region, which is already strongly impacted by climate change, with prolonged drought periods imposing limitations on irrigation water availability. The objective of this study was to develop a procedure for monitoring irrigation water use in different agricultural districts in the Mediterranean region. The analysis is based on the FEST-EWB model, which continuously computes both soil moisture (SM) and evapotranspiration from the coupled energy and water balances. The model has been calibrated and validated over non-irrigated areas against land surface temperature (LST) from downscaled Sentinel-3 and/or MODIS data at 30 m and modeled evapotranspiration from MOD16, GLEAM and FAO WaPOR. The model has been run using past meteorological forcings (ECMWF ERA5-Land or ground networks) and vegetation data from Sentinel-2 as input. The actual irrigation volumes have then been estimated with the calibrated model implementing three different irrigation strategies: the FAO approach based on SM crop stress thresholds (Allen et al., 1998), and the separate and joint assimilation of satellite LST and SM (1 km SMAP-Sentinel-1, Sentinel-1) to update the modeled fluxes and estimate the irrigation volumes. Overall, the results suggest that the yearly total irrigation volumes modeled with the FAO approach agree well with the observed water allocations; similar outcomes are obtained when satellite LST is assimilated, while lower accuracy was found when SM data were used.
This research was developed within the ESA AFRI-SMART project (“EO-Africa multi-scale smart agricultural water management”), in the framework of the EO AFRICA National Incubators, and the TERESA project (“Drought Sensing for Water and Food Security by Integrating Earth Observation and Agro-Hydrological Model”), in the framework of the ESA “Sentinel Users Preparation (SUP) initiative: Applications preparedness with stakeholder and end-users participation”.

Thursday 26 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Copernicus Earth-Observation data for Irrigation Monitoring in Australia

Authors: Oscar Belfiore, Carlo De Michele, Francesco Vuolo, Qotada Alali, Guido D'Urso
Affiliations: Dept. of Agricultural Sciences - University of Naples Federico II, Ariespace srl, Institute of Geomatics, University of Natural Resources and Life Sciences (BOKU), the COALA team
The integration of advanced Earth Observation (EO) technology provides useful and pioneering solutions for the sustainable use of water resources in agriculture. The COALA project (COpernicus Applications and services for Low impact agriculture in Australia) developed a Copernicus-based information system to assist in optimal irrigation water management in Australia [1]. Funded by the EU's Horizon 2020 research and innovation programme, the COALA project demonstrated how the DIAS (Data and Information Access Services) cloud platforms provide key information to farmers, water managers, and basin authorities. Defining the Irrigation Water Requirement (IWR) as the net depth of water (mm d-1) required to fully satisfy the water requirement of a given crop by irrigation, after the natural provision of water via the effective precipitation (Pn), the spatially distributed IWR in COALA was computed as IWR = ETc - Pn. The crop evapotranspiration (ETc, mm d-1) was estimated by exploiting the full capabilities of the Sentinel-2 (S2) platforms in terms of geometric, temporal and spectral resolution. In particular, the modified combination equation for evapotranspiration in COALA incorporated the shortwave infrared data from S2 to represent the crop water status and modulate the resistance terms [2]. We present results from the COALA validation campaigns, comparing the IWR with the irrigation water volumes measured at farm and district levels in the Murrumbidgee Valley (New South Wales). We also map actual irrigated areas based on a classification algorithm and dense time series of vegetation indices [3].
[1] COALA project, https://www.coalaproject.eu/
[2] D’Urso, G., Bolognesi, S. F., Kustas, W. P., Knipper, K. R., Anderson, M. C., Alsina, M. M., ... & Belfiore, O. R.: Determining evapotranspiration by using combination equation models with Sentinel-2 data and comparison with thermal-based energy balance in a California irrigated vineyard. Remote Sensing, 13(18), 3720; https://doi.org/10.3390/rs13183720 (2021).
[3] Falanga Bolognesi, S., Pasolli, E., Belfiore, O. R., De Michele, C., D’Urso, G.: Harmonized Landsat 8 and Sentinel-2 Time Series Data to Detect Irrigated Areas: An Application in Southern Italy. Remote Sensing, 12(8), 1275; https://doi.org/10.3390/rs12081275 (2020).
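The per-pixel IWR relation given in this abstract (IWR = ETc - Pn) can be sketched as follows. This is an illustrative sketch only, not COALA code; the function and variable names are hypothetical, and the flooring at zero (no negative requirement when effective rainfall exceeds crop evapotranspiration) is our assumption:

```python
def irrigation_water_requirement(etc_mm_day, pn_mm_day):
    """Net irrigation water requirement (mm/day) per pixel:
    crop evapotranspiration (ETc) minus effective precipitation (Pn),
    floored at zero (assumption: rainfall surplus implies no requirement)."""
    return [max(etc - pn, 0.0) for etc, pn in zip(etc_mm_day, pn_mm_day)]

# Toy example: three pixels; the rain-fed pixel needs no irrigation.
print(irrigation_water_requirement([5.0, 3.0, 4.5], [1.5, 4.5, 0.0]))
# [3.5, 0.0, 4.5]
```

In an operational setting ETc and Pn would be gridded rasters derived from S2-based evapotranspiration modelling and precipitation data, and the same element-wise subtraction would be applied per pixel.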

Thursday 26 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Advancing Global Long-Term Irrigation Water Use Estimates With an Improved SM Delta Method

Authors: Pia Langhans, Pierre Laluet, Wouter Dorigo
Affiliations: TU Wien
Accurate long-term irrigation water use (IWU) estimates are essential to better account for anthropogenic activities in climate model predictions. However, no global long-term IWU dataset is currently available, and uncertainties in irrigation data and modeling present significant challenges. Remote sensing observations, particularly soil moisture (SM) and evapotranspiration (ET) products, are promising for monitoring long-term IWU globally. Within the scope of the European Space Agency Climate Change Initiative Anthropogenic Water Use (ESA CCI AWU) project, we developed an improved version of the SM Delta method to produce monthly long-term (2003-2022) IWU datasets. SM Delta computes IWU from the differences between satellite-based SM and ET data and their modeled counterparts, where the models do not parameterise irrigation; these differences are assumed to be caused by irrigation. For the SM component, we computed the difference between the ESA CCI SM ACTIVE, PASSIVE, and COMBINED products on the one hand and ECMWF ReAnalysis v5 Land (ERA5-Land) surface SM on the other. For the ET component, we used two satellite-derived ET products, the operational Simplified Surface Energy Balance (SSEBop) model and FLUXCOM, in combination with Noah-MultiParameterization (Noah-MP) ET. An ensemble of six different IWU datasets was produced and validated over two highly irrigated regions: the Ebro Basin in Spain (~86,000 km²) and the Murray-Darling Basin in Australia (~1,000,000 km²). The six IWU datasets captured the irrigation dynamics well, showing a significant improvement over the previous version of SM Delta, which used only SM data and tended to underestimate irrigation. When compared to in-situ IWU data measured at four irrigation districts in the Ebro Basin, the datasets achieved a mean Pearson correlation coefficient of 0.54, an RMSD of 29.96 mm/month, and a bias of -19.46 mm/month.
Validation in the Murray-Darling Basin at four irrigation districts yielded a Pearson correlation coefficient of 0.29, a bias of -7.07 mm/month, and an RMSD of 21.01 mm/month. The enhanced performance, particularly in reducing bias, significantly improved the accuracy of IWU retrievals. These new estimates of large-scale, long-term irrigation are an important step towards better accounting for the impacts of irrigation in Earth system modeling.
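The validation scores quoted in this abstract (Pearson correlation, RMSD, bias) are standard metrics for paired estimate/observation series. A minimal illustrative sketch, not the authors' code and with hypothetical names; bias is assumed here to be mean(estimate - observation), consistent with negative values indicating underestimation:

```python
import math

def validation_scores(est, obs):
    """Pearson r, RMSD and bias between estimated and observed
    irrigation water use series (same units, e.g. mm/month)."""
    n = len(est)
    me, mo = sum(est) / n, sum(obs) / n
    cov = sum((e - me) * (o - mo) for e, o in zip(est, obs))
    var_e = sum((e - me) ** 2 for e in est)
    var_o = sum((o - mo) ** 2 for o in obs)
    r = cov / math.sqrt(var_e * var_o)          # Pearson correlation
    rmsd = math.sqrt(sum((e - o) ** 2 for e, o in zip(est, obs)) / n)
    bias = me - mo                               # negative = underestimation
    return r, rmsd, bias

# Toy example: estimates that track the observations but run 5 mm/month low.
r, rmsd, bias = validation_scores([10.0, 20.0, 30.0], [15.0, 25.0, 35.0])
print(r, rmsd, bias)  # 1.0 5.0 -5.0
```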

Thursday 26 June 09:00 - 09:20 (EO Arena)

Demo: D.02.27 DEMO - SCANEO, an AI-powered web tool for smart labeling of satellite data training datasets

This demonstration will showcase SCANEO, an AI-powered web tool designed to streamline the labeling of satellite imagery datasets for training AI models. Developed specifically for Earth Observation (EO) applications, SCANEO leverages active learning techniques to accelerate and improve the accuracy of labeling processes—one of the key bottlenecks in building deep learning models with EO data.
During the session, participants will observe the full workflow of using SCANEO: from selecting areas of interest in satellite imagery to generating high-quality labeled datasets for tasks such as semantic segmentation and object detection. The demonstration will highlight how SCANEO’s active learning loop enables iterative improvement, reducing labeling errors and optimizing dataset quality.
Additionally, the session will explore how labeled datasets can be integrated into the Earth Observation Training Data Lab (EOTDL) platform for efficient dataset management, collaboration, and sharing.
This demonstration targets technical professionals and researchers interested in enhancing their data labeling workflows and training pipelines for satellite image analysis. The session will be led by experts from EarthPulse, with prior experience delivering hands-on EO training at ESA events.

Speakers:


  • Juan B. Pedro - CTO at EarthPulse
  • Fran Martín Rivas - Product Manager at EarthPulse

Thursday 26 June 09:22 - 09:42 (EO Arena)

Demo: D.01.17 DEMO - Digital Twin Analytics for What-If Scenario Comparison: Flood Use Case

This demonstration highlights the power of digital twin analytics in simulating and comparing flood-related what-if scenarios. By integrating Earth Observation (EO) data and hydrological models, digital twins provide an interactive environment to assess flood risks, mitigation strategies, and policy impacts.

Participants will explore how digital twins can simulate flood events under different conditions, such as climate variations and land-use changes. The demonstration will showcase real-world applications, illustrating how decision-makers can analyze the potential effects of heavy rainfall or river overflows on urban and rural areas.

Through dynamic scenario comparison, users will gain insights into how preventive measures can reduce flood impacts. The session will also emphasize how EO-derived datasets, including satellite-based precipitation and soil moisture data, enhance model accuracy and provide reliable forecasting capabilities.

By leveraging digital twin technology for flood risk assessment, policymakers, urban planners, and disaster management agencies can make informed decisions, improve preparedness, and enhance climate resilience in vulnerable regions.

Speakers:


  • Stefano Marra - CGI
  • Alessandro Marin - CGI

Thursday 26 June 09:45 - 10:05 (EO Arena)

Demo: C.06.18 DEMO - KARIOS Training

The scope of this session is to present the latest version of the KARIOS tool and all existing technical materials.

• KARIOS Documentation, https://zenodo.org/records/10598329
• KARIOS S/W, https://github.com/telespazio-tim/karios
• Landing Page, https://telespazio-tim.github.io/karios/

In addition, this training focuses on:
- Geolocation Assessment procedure and KARIOS Tool Configuration
- Accuracy analysis report (KARIOS Outputs)
- Processing Use cases (live)

The processing use cases demonstrate the versatility of the KARIOS tool and its ability to process various data types (radar/optical, low to high spatial resolution data).

Speakers:


  • Sébastien Saunier - Telespazio
  • Sultan Kocaman - Telespazio


Thursday 26 June 10:00 - 11:30 (Plenary - Hall D)

Session: Small Satellite revolution in Earth Observation: What does it mean for industry and users?

The use of miniaturized COTS technology, combined with rapid development cycles and the serialization of space manufacturing, has impacted the GEO telecommunication business through the emergence of LEO constellations. Concurrently, the continuous reduction in launch costs and the increasing capability of small satellites have facilitated the development of Earth Observation constellations in the US, China, and Europe. This session provides an opportunity to hear from key European stakeholders involved in these changes to understand how the industry is adapting and what opportunities are arising to offer new or improved services to users.

This session will be accessible via live captioning at the following link: HERE

Due to limitations in the app, this is clickable only from the web version of the programme

Speakers:


  • Simonetta Cheli - Director of Earth Observation Programmes, ESA
  • Dominique Gillieron - Head of Projects Department, ESA
  • Miguel Angel Palacios - Head of Astrobus Product Policy, ADS
  • Florian Deconinck - Vice President, Growth, Open Cosmos
  • Elisa Carcaillon - Business Development Director, LoftOrbital
  • Raffaella Luglini - Chief Sustainability Officer, Leonardo
  • Benoît Mathieu - CEO, OHB Sweden
  • Charles Galland - Policy Manager, ASD Eurospace
  • Martin Langer - CEO, Ororatech
  • Steven Allen - Senior Sales Mgr Government Solutions Europe, Iceye
  • Milena Lerario - CEO, e-GEOS
  • Pierre Alain Bosc - Head of Sales, ADS CI
  • Herve Hamy - CEO, Qairbon
Add to Google Calendar

Thursday 26 June 10:00 - 10:45 (Nexus Agora)

Session: F.05.07 Women Trailblazers Round Tables - Session 3 - Industry Day

“Women trailblazers: the present and the future of ...” is a series of round tables focused on specific disciplines of Earth observation: remote sensing, science, engineering, policy making, entrepreneurship, etc. It recognises the significant contributions of women, from senior figures to promising young professionals, whilst promoting female role models in our community.
The session will bring together prominent figures from diverse organisations, academia, industry, and associations to engage in a focused dialogue on collaborative strategies to address climate change and promote sustainable development. The main objective is to inspire, and to discuss the current status and future developments of Earth observation data and technologies in addressing climate change and promoting sustainable development.

Speakers:


  • Mariella Graziano - Executive Director, International Strategy and Business Development, Space Science, Exploration, and Transportation, GMV
  • Mireia Colina-Fatjó - Business Development Senior Manager at INDRA
  • Odile Hembise Fanton d’Andon - CEO and Co-Founder, ACRI
  • Monica Roca - Director / CEO, ISARDSAT

Add to Google Calendar

Thursday 26 June 10:00 - 11:30 (ESA Agora)

Session: F.04.30 Leveraging Earth Observation data to help rescuing the SDGs

The importance of data as a foundation for effective development policies is increasingly emphasised. The 2030 Agenda for Sustainable Development underscores the critical need for data-driven and evidence-based approaches to achieve its ambitious goals and targets. However, significant disparities in data availability and accessibility persist, particularly between data-rich and data-poor countries, impeding the ability of many nations to track progress and address challenges effectively.

Earth Observation and geospatial information have been recognised as transformative tools to bridge these gaps. Since the adoption of the 2030 Agenda in 2015, EO has been highlighted as a game-changer in providing the actionable data necessary to monitor, implement, and report on sustainable development goals (SDGs). When combined with traditional statistical data and enhanced by emerging technologies like big data analytics, EO offers unprecedented opportunities to improve the tracking of many aspects of sustainable development.

At the 2023 High-Level Political Forum on Sustainable Development Goals, the UN Secretary-General issued a call for a global rescue plan for the SDGs, acknowledging that progress on the majority of indicators remains alarmingly off-track. Persistent data gaps, particularly in low-income and vulnerable regions, continue to hamper efforts to assess progress and inform evidence-based policies. These gaps highlight the urgent need for innovative solutions and robust data systems to address inequalities, strengthen monitoring efforts, and foster accountability.

This Agora session will provide a platform to address these critical issues, bringing together senior representatives from space agencies, UN bodies, National Statistical Offices, the geospatial community, and other key stakeholders. Participants will review the progress made in integrating EO into SDG processes, share perspectives on achievements and challenges, and identify the opportunities that lie ahead for harnessing EO to fill data gaps and enhance national monitoring and reporting systems.

The session will highlight EO’s potential to strengthen sustainable development efforts by improving data quality, accessibility, and cost-effectiveness. By showcasing success stories and fostering dialogue among diverse stakeholders, the session aims to raise awareness of EO's transformative capabilities and promote collaboration to accelerate progress toward the 2030 milestone.

The forum will emphasize the importance of building global partnerships and ensuring that EO-driven solutions are accessible and scalable for all countries, particularly those most in need. Ultimately, this session seeks to reinforce the role of EO as an essential enabler of the data revolution for sustainable development, supporting more equitable, transparent, and effective pathways to achieving the SDGs.

Session 1: Global Perspectives on EO Integration in the 2030 Agenda


  • Panel Moderators: Steven Ramage (CEOS) and Marc Paganini (ESA)

Panel Discussions | Reflections and reactions from key global stakeholders on experiences and future prospects:


  • Andreas Brink - Joint Research Centre, European Commission
  • Dilek Fraisl - International Institute for Applied Systems Analysis, IIASA
  • Lorenzo De Simone - UN Food and Agriculture Organization, FAO
  • Mary Smyth - Central Statistics Office Ireland, CSO; IAEG-SDGs WGGI
  • Britta Ricker - Utrecht University, Copernicus Institute of Sustainable Development

Session 2: European Perspectives on EO Integration in the 2030 Agenda


  • Panel Moderators: Giuseppe Ottavianelli (ESA) and Francesca Piatto (EARSC)

Panel Discussion | Reflections and reactions from European institutions and European NSOs:


  • Usue Donezar - European Environment Agency, EEA
  • Marta Nagy-Rothengass - EUROSTAT, European Commission
  • Mary Smyth - Central Statistics Office Ireland, CSO; IAEG-SDGs WGGI
  • Alexandra Wegscheider-Pichler - Statistics Austria
Add to Google Calendar

Thursday 26 June 10:00 - 10:45 (Frontiers Agora)

Session: F.01.04 Joint ESA-GRSS initiatives for the exploitation of Earth Observation data

The IEEE Geoscience and Remote Sensing Society (GRSS) aims to address global remote sensing information challenges by partnering broadly with other organizations and sectors; to be a trusted source of educational services and resources for geoscience and remote sensing; to provide opportunities for career and professional development; and to inspire technical communities to advance remote sensing, inform public policy, and expand our knowledge of the Earth. The session will include contributions showcasing innovative educational and knowledge-transfer initiatives performed by GRSS in cooperation with (or using products by) ESA, such as:

- Quantum Computing for Earth Observation Working Group, operated by GRSS in collaboration with ESA Phi Lab, to foster the application of quantum computing to Earth Observation data (F. Mauro and A. Sebastianelli);
- CONAE/ESA/GRSS Spring School on SAR polarimetry and interferometry for land applications, jointly operated by GRSS within its educational activities for Latin America and the PUMAS initiative by ESA (F. Sarti);
- “Earth at Risk” image contest designed by the GRSS REACT Technical Committee to engage the younger community and stress the uniqueness of EO data to highlight the challenges facing our planet (Irena Hajnsek);
- GRSS Student Grand Challenges, providing unique opportunities for young researchers and students’ teams to engage with real world problems in UAV and Cubesat hardware, as well as related software, design and deployment (M. Herrera-Giménez).

A discussion about these and other activities at LPS, under the “Agora” format, will create a unique opportunity for diverse communities, united by a common interest in exploiting Earth observation data, to meet, network and discuss.

Speakers:


  • F. Mauro
  • A. Sebastianelli
  • F. Sarti
  • Irena Hajnsek
  • M. Herrera-Giménez
Add to Google Calendar

Thursday 26 June 10:07 - 10:27 (EO Arena)

Demo: D.04.22 DEMO - Transforming EO Research into On-Demand Cloud Services with APEx Algorithm Services

The APEx Algorithm Services are designed to help EO research and development projects transition their innovative workflows, algorithms, and toolboxes into scalable, reusable, and cloud-based on-demand services. While many projects produce valuable research outcomes, transforming them into operational services that can be widely accessed and utilized can be complex. APEx addresses this challenge by providing a structured approach to optimizing, packaging, and deploying EO algorithms, ensuring compliance with FAIR data principles and facilitating long-term accessibility. By leveraging the APEx guidelines and support, projects can create sustainable, interoperable solutions that support the broader EO community.

This demonstration will showcase how APEx enables projects to ensure long-term access and reuse of their results. Participants will gain insights into the different pathways APEx offers, including:

• Refactoring source code into openEO process graphs
• Packaging algorithms as OGC Application Packages
• Integrating services into the APEx Algorithm Services Catalogue
• Deploying solutions on existing EO processing platforms
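
As a rough illustration of the first pathway, an openEO workflow is expressed as a JSON process graph: named nodes with a process_id and arguments, linked by from_node references, with exactly one top-level node flagged as the result. The sketch below hand-writes such a graph in Python; the collection id, extents, and band names are placeholders, not an actual APEx service definition.

```python
import json

# A minimal openEO process graph (placeholder collection and bands):
# load a collection, then reduce the time dimension with a mean.
process_graph = {
    "load1": {
        "process_id": "load_collection",
        "arguments": {
            "id": "SENTINEL2_L2A",          # placeholder collection id
            "spatial_extent": {"west": 5.0, "south": 51.0,
                               "east": 5.1, "north": 51.1},
            "temporal_extent": ["2024-06-01", "2024-06-30"],
            "bands": ["B04", "B08"],
        },
    },
    "reduce1": {
        "process_id": "reduce_dimension",
        "arguments": {
            "data": {"from_node": "load1"},
            "dimension": "t",
            "reducer": {"process_graph": {
                "mean1": {
                    "process_id": "mean",
                    "arguments": {"data": {"from_parameter": "data"}},
                    "result": True,
                }
            }},
        },
        "result": True,  # exactly one top-level node carries the result flag
    },
}

# Serialise the graph as it would be submitted to an openEO backend.
payload = json.dumps({"process": {"process_graph": process_graph}})
result_nodes = [k for k, v in process_graph.items() if v.get("result")]
print(result_nodes)  # → ['reduce1']
```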

Additionally, the session will highlight how APEx tools, such as the APEx Algorithm Services and Data Catalogue, support projects in preserving and sharing their outputs—ensuring that results can be reused and built upon by others.

This session is particularly relevant for EO projects looking to optimize, sustain, and integrate their EO algorithms within an existing on-demand processing platform. It also caters to users seeking a curated list of ready-to-use EO services and platform providers interested in learning about best practices and guidelines for ensuring the seamless integration of upcoming projects into a standardized processing ecosystem.

Speakers:


  • Bram Janssen - VITO
Add to Google Calendar

Thursday 26 June 10:30 - 10:50 (EO Arena)

Demo: D.03.20 DEMO - Cubes & Clouds 2.0 – A Massive Open Online Course for Cloud Native Open Data Sciences in Earth Observation

#pangeo #stac #cloud-native

The Cubes & Clouds 2.0 online course offers vital training in cloud-native open data sciences for Earth Observation (EO). In this 20-minute demonstration, participants will gain insights into the course structure and content, which includes data cubes, cloud platforms, and open science principles. The session will highlight hands-on exercises utilizing Copernicus data, accessed through the SpatioTemporal Asset Catalog (STAC), and showcase the openEO API and Pangeo software stack for defining EO workflows.

Attendees will also learn about the final collaborative project, where participants contribute to a community snow cover map, applying EO cloud computing and open science practices. This demonstration is ideal for Earth Science students, researchers, and Data Scientists looking to enhance their skills in modern EO methods and cloud platforms. Join us to explore how Cubes & Clouds equips learners with the tools to confidently conduct EO research and share their work in a FAIR manner.
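
The STAC access mentioned above rests on a simple data model: a STAC Item is a GeoJSON Feature carrying an id, a datetime property, and a set of assets. The sketch below builds a minimal Item by hand and filters it by date, without any catalog client; the id and asset href are invented for illustration.

```python
import json
from datetime import datetime, timezone

# Minimal STAC Item (hypothetical id and asset href), following the core
# STAC Item structure: a GeoJSON Feature with properties and assets.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "S2_example_scene",                     # invented identifier
    "geometry": {"type": "Point", "coordinates": [11.4, 47.3]},
    "bbox": [11.4, 47.3, 11.4, 47.3],
    "properties": {
        "datetime": datetime(2024, 6, 15, tzinfo=timezone.utc).isoformat()
    },
    "assets": {
        "red": {"href": "https://example.com/B04.tif",  # placeholder URL
                "type": "image/tiff; application=geotiff"}
    },
    "links": [],
}

# A catalog search then reduces to filtering items, e.g. by acquisition time
# (ISO-8601 strings compare correctly in lexicographic order).
def in_range(itm, start, end):
    t = itm["properties"]["datetime"]
    return start <= t <= end

print(in_range(item, "2024-06-01", "2024-06-30"))  # → True
```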

Speakers:


  • Dolezalova Tyna - EOX IT Services GmbH
  • Claus Michele - Eurac Research
  • Zvolenský Juraj - Eurac Research
Add to Google Calendar

Thursday 26 June 10:45 - 11:30 (Frontiers Agora)

Session: E.01.06 Copernicus World Heritage Hub

EUSPA has been entrusted by the European Commission to implement the Copernicus World Heritage Hub, which covers both cultural and natural heritage. The hub shall serve as a single access point to Copernicus data and services relevant for World Heritage, including climate, coastal, biodiversity, land, atmosphere, security and emergency services, with a focus on Europe's cultural and natural heritage (of the 218 natural heritage sites worldwide, Europe hosts over 60).
The hub shall facilitate data access for new and existing users and support policy implementation, such as the Biodiversity Strategy and the Nature Restoration Law. While coordinated by EUSPA, the hub aims to promote learning across all Copernicus services and to facilitate the identification of thematic needs and gaps in engagement with the Copernicus Entrusted Entities.
The World Heritage Hub shall supply a large amount of free and open data to monitor cultural and natural heritage, facilitate informed decision-making, and support the protection and monitoring of cultural heritage sites in crisis and conflict situations and during natural disasters. In addition, as EUSPA is in charge of Copernicus user uptake, the hub should include specific use cases and the implementation of pilots with cultural and natural heritage users, showcasing the benefits of Copernicus and leveraging the hub's data and infrastructure.

Speakers:


  • Andreas Brink - DG JRC
  • Andrea Taramelli - ISPRA
  • Anastasia Anastasiou - Geosystems Hellas
  • Katia Schörle - CNRS
  • Benjamin Ducke - German Archaeological Institute
  • Delphine Deryng - ECMWF
  • Florent Michenot - Centralesupelec
  • Denis Bruckert - Satcen
Add to Google Calendar

Thursday 26 June 10:52 - 11:12 (EO Arena)

Demo: F.01.15 DEMO - Introducing DEA: The Art of Datatelling

This demonstration will provide a comprehensive introduction to DEA, a powerful service for Data Visualization and Interactive Storytelling. DEA is a content creation platform that allows users to create interactive stories without writing code. The ambition of this tool is to make data understandable for non-expert users, making it easy to build engaging and appealing narratives.
Data storytelling and visualization are game-changers for the community, democratizing the accessibility of climate, earth observation, and statistical data to the citizens and policy makers to make them aware of climate change impact.

Data-driven insights, especially for science communication, can indeed be conveyed more effectively to broader audiences through an interactive storytelling experience.
DEA can be leveraged to create compelling narratives that can be shared with the community and used to foster collaboration and knowledge dissemination.
The demonstration will address best practices for structuring stories to enhance their impact, and thus how to use data to make stories more engaging, starting from use cases already published on the service.
Participants will also learn how to create a story from scratch with DEA, exploiting the spatiotemporal datasets available to all users and integrating them with user assets, base layers, 3D photogrammetry, terrains, text, and plots. Insights on how to make a story more attractive using the features of the service will be provided as well.

Speakers:


  • Arturo Montieri
  • Cristina Arcari
Add to Google Calendar

Thursday 26 June 11:15 - 11:35 (EO Arena)

Demo: D.03.27 DEMO - openEO by TiTiler: Demonstrating Fast Open Science Processing for Dynamic Earth Observation Visualization

#stac

This demonstration aims to highlight our streamlined implementation of openEO by TiTiler, known as titiler-openEO (https://github.com/sentinel-hub/titiler-openeo), which has been developed through a collaborative effort between Sinergise and Development Seed.

In contrast to conventional openEO implementations that often involve extensive datacube processing and asynchronous workflows, titiler-openEO is designed to emphasize synchronous processing and dynamic visualization of raster data. We believe this approach will enhance the user experience and efficiency in handling raster datasets.

The session will highlight the key innovations of our approach:
- Synchronous Processing: Real-time execution of process graphs for immediate visualization
- ImageData-Focused Model: Simplified data model optimized for raster visualization
- Fast, Lightweight Architecture: Built on TiTiler and FastAPI without additional middleware
- Streamlined Deployment: Easily deployable for quick prototyping and visualization
- Early Data Reduction: Intelligent data reduction techniques to minimize processing overhead

We will demonstrate practical applications directly integrated in the Copernicus Data Space Ecosystem using the new catalog of Sentinels data, showing how titiler-openEO can transform complex Earth Observation workflows into lightweight, interactive visualizations. Attendees will see how this implementation complements existing openEO backends for common visualization needs.

This demonstration is particularly relevant for users wanting to quickly prototype and validate algorithms without the overhead of a complex processing backend setup. We'll show how titiler-openEO can be integrated with existing EO platforms and STAC catalogs to provide immediate visual feedback for data analysis.
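
The synchronous-processing idea, resolving a process graph immediately on request rather than scheduling a batch job, can be illustrated with a toy evaluator. This is not the titiler-openeo implementation, only a minimal sketch over made-up arithmetic processes:

```python
# Toy synchronous evaluator for an openEO-style process graph.
# A sketch of the idea that a graph of nodes ("process_id", "arguments")
# can be resolved on the fly, following from_node links to dependencies.

def evaluate(graph):
    """Resolve the node flagged with result=True, recursively."""
    processes = {
        "add": lambda args: args["x"] + args["y"],
        "multiply": lambda args: args["x"] * args["y"],
    }
    cache = {}  # memoise nodes so shared dependencies run once

    def resolve(value):
        if isinstance(value, dict) and "from_node" in value:
            return run(value["from_node"])
        return value

    def run(name):
        if name not in cache:
            node = graph[name]
            args = {k: resolve(v) for k, v in node["arguments"].items()}
            cache[name] = processes[node["process_id"]](args)
        return cache[name]

    result_node = next(k for k, v in graph.items() if v.get("result"))
    return run(result_node)

graph = {
    "a": {"process_id": "add", "arguments": {"x": 2, "y": 3}},
    "b": {"process_id": "multiply",
          "arguments": {"x": {"from_node": "a"}, "y": 10},
          "result": True},
}
print(evaluate(graph))  # → 50
```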

Speakers:


  • Emmanuel Mathot - DevelopmentSeed
  • Vincent Sarago - DevelopmentSeed
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 0.49/0.50)

Session: A.11.01 Earth Energy Imbalance and Radiative Forcing

Satellite EO is the only way to fulfil the need for direct measurements and monitoring of the Earth’s Radiation Budget (ERB) with the accuracy and coverage required for climate action on timescales commensurate with anthropogenic change. The ERB comprises the quantification of the incoming radiation from the Sun and the outgoing reflected shortwave and emitted longwave radiation. This Essential Climate Variable (ECV) is the primary forcing of the climate system and is therefore a fundamental quantity to monitor in order to understand the Earth’s climate and its variability.

A number of current and future missions in ESA’s Earth Explorer (EE), Earth Watch (EW) and meteorological programmes, and from international partner agencies, have been designed to measure all or part of the Earth Energy Imbalance components, and to study and bolster our ability to model radiative forcing, notably the role played by clouds and aerosols. The promise of, e.g., EE6 EarthCARE (with JAXA), EE9 FORUM and MetOp-SG/IASI-NG, the prospect of EW TRUTHS and the EE12 candidate ECO mission, as well as international partners’ missions like CERES, Libera, CLARREO-Pathfinder and PREFIRE, shapes a comprehensive scene of critical ERB data, unprecedented in their spatio-temporal coverage, accuracy and complementarity.

This session invites presentations on:
- observations of components of the Earth Radiation Budget,
- observations advancing our understanding of radiative forcing processes,
- retrieval algorithms and methods for uncertainty quantification,
- their utilisation in climate modelling and as actionable information for climate decision-making.

The objective of the session is to bring together the ERB observations from individual missions and the climate communities, to maximise exchanges and synergistic benefits, reviewing the current limitations in Earth radiation system modelling and the opportunities offered by current and future missions.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: How Does El Niño - Southern Oscillation Drive the Inter-Annual Variability of the Earth's Outgoing Longwave Radiation?

Authors: Martina Taddia, Federico Fabiano, Stefano della Fera, Elisa Castelli, Bianca Maria Dinelli
Affiliations: University Of Bologna (UNIBO)-Department of Physics and Astronomy (DIFA), National Italian Research Council (CNR)-Institute of Atmospheric and Climate Sciences (ISAC), National Italian Research Council (CNR)-Institute of Applied Physics (IFAC)
The thermal radiation emitted by the Earth-atmosphere system in the longwave, also known as the outgoing longwave radiation (OLR), is one of the components of the top-of-atmosphere (TOA) Earth energy budget (EEB). Many satellite missions have been dedicated to monitoring the broadband and the spectrally resolved OLR from space, contributing to improving our knowledge of the surface and atmospheric physical processes that drive its variability. Past and present-day OLR measurements also constitute a fundamental benchmark to assess the reliability of climate models. Since long-term stable records of OLR observations are still not long enough to accurately quantify the radiative response to external forcings (e.g. the increase in greenhouse gas concentrations), many studies have focused on its interannual variability [1]. In this context, the El Niño–Southern Oscillation (ENSO) is the most important variability mode of the planet on the interannual time scale. It causes sea surface temperature (SST) changes across the tropical Pacific Ocean and alters the atmospheric circulation, with implications also for remote regions through teleconnections. The OLR is strongly affected by ENSO-induced variability, with its time series showing positive or negative anomalies in response to El Niño or La Niña phases, respectively. In this study, the natural climate variability of the TOA OLR driven by ENSO in clear-sky conditions is examined. In the first part, the OLR variability induced by ENSO as simulated by 14 climate models participating in the Coupled Model Intercomparison Project, phase 6 (CMIP6) [2] is assessed. Clear-sky OLR outputs from the Coupled and Atmospheric Model Intercomparison Projects (CMIP and AMIP, respectively) are evaluated against the corresponding observations from the Clouds and the Earth’s Radiant Energy System (CERES) [3] instrument.
Their ability to reproduce the phase, amplitude and magnitude of the radiative response to ENSO activity is addressed by performing temporal correlations and lagged linear regressions between OLR anomalies and the Niño 3.4 index. The second part of the work focuses on spectrally resolved measurements of the OLR, with the aim of developing a diagnostic for climate models. The features of the OLR variability induced by ENSO have been further investigated by means of a climatology of clear-sky OLR spectral fluxes derived from Infrared Atmospheric Sounding Interferometer (IASI) measurements [4] covering the 645–2300 cm⁻¹ spectral range. As for the broadband fluxes, correlations and lagged regressions have been performed, but for each spectral channel, isolating the wavenumbers that contribute the most to the OLR response. Finally, the role of water vapour, surface and atmospheric temperature, the three main variables shaping the clear-sky OLR flux, has been considered through a spectral kernel analysis. The TOA OLR flux response to water vapour, lapse rate, the Planck effect and surface temperature driven by ENSO is calculated. References: [1] Ceppi, Paulo, and Stephan Fueglistaler. "The El Niño–Southern Oscillation pattern effect." Geophysical Research Letters 48.21 (2021): e2021GL095261. [2] Eyring, Veronika, et al. "Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization." Geoscientific Model Development 9.5 (2016): 1937-1958. [3] Loeb, Norman G., et al. "Toward a consistent definition between satellite and model clear-sky radiative fluxes." Journal of Climate 33.1 (2020): 61-75. [4] Whitburn, Simon, et al. "Spectrally resolved fluxes from IASI data: Retrieval algorithm for clear-sky measurements." Journal of Climate 33.16 (2020): 6971-6988.
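
As a hedged illustration of the lagged-regression diagnostic described in the abstract, one can shift one series against the other and fit a slope at each lag. The series below are synthetic stand-ins for the Niño 3.4 index and OLR anomalies, not the study's data.

```python
import numpy as np

# Sketch of a lagged linear regression between an index and anomalies.
# Synthetic data: "olr" lags "nino34" by 3 time steps, plus noise.
rng = np.random.default_rng(0)
n = 240                                  # e.g. 20 years of monthly data
nino34 = rng.standard_normal(n)
olr = np.roll(nino34, 3) * 2.0 + 0.1 * rng.standard_normal(n)

def lagged_slope(x, y, lag):
    """Regress y(t) on x(t - lag); positive lag means x leads y."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.polyfit(x, y, 1)[0]        # slope of the least-squares fit

slopes = {lag: lagged_slope(nino34, olr, lag) for lag in range(-6, 7)}
best = max(slopes, key=lambda k: abs(slopes[k]))
print(best)  # → 3, recovering the built-in 3-step lag
```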
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Diurnal Cycles, Annual Cycles and Shortwave Anisotropy - Sampling the Earth's Energy Imbalance With the Earth Climate Observatory (ECO) Constellation

Authors: Thomas Hocking, Björn Linder, Thorsten Mauritsen, Linda Megner
Affiliations: Department of Meteorology, Stockholm University, Bolin Centre, Stockholm University
The Earth’s energy imbalance (EEI), i.e. the difference between incoming solar radiation and outgoing reflected and emitted radiation, is the one quantity that ultimately controls the evolution of our climate system. Despite its importance, the exact magnitude of the energy imbalance is not well known, and because it is a small net difference of about 1 Wm⁻² between two large fluxes (approximately 340 Wm⁻²), it is difficult to measure directly. There has recently been a renewed interest in using wide-field-of-view radiometers on board satellites to measure the outgoing radiation and hence deduce the global annual mean energy imbalance, for example as part of the EE12 candidate Earth Climate Observatory (ECO) mission. In general, it is important to consider the diurnal and annual cycles of the EEI and their relation to the implicit measurement cycles introduced by the choice of satellite orbit, to avoid potential systematic effects as a result of biased sampling. Another potential issue with wide-field-of-view radiometers that has been the source of some concern is the effect of anisotropic radiation, particularly anisotropic surface reflection of incoming sunlight. A wide-field-of-view radiometer does not distinguish the direction of incoming radiation, and results from an earlier study that considered a single day indicated that shortwave anisotropy could lead to substantial systematic biases of 1.6 Wm⁻² in the global mean. Here we investigate how to sample with a limited number of satellite orbits, in order to correctly determine the global annual mean imbalance. Using observational and model data, we have investigated the importance of the local and global diurnal cycles, as they are observed by a satellite, in the determination of the EEI. We also compare results for isotropic and anisotropic shortwave reflection. We simulate satellites in polar, sun-synchronous and precessing orbits, as well as constellations of these types of satellite orbits. 
We present the results of ongoing work concerning different orbits, and how they affect the estimated global annual mean EEI, with a particular focus on the shortwave component.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Investigating the relationship between the radiative response of the Earth and the pattern of surface warming with satellite observations of the global energy budget

Authors: Benoit Meyssignac, Dr Christopher Merchant, Michaël Ablain, Alejandro Blazquez, Archie Cable, Sarah Connors, Richard Cornes, Darren Ghent, Damien Desbruyères, Pia Englyst, Sebastien Fourest, Robin Fraudeau, Jacob Hoyer, Gilles Larnicol, Dr Michael Mayer, Colin Morice, Karen Veal, Abigail Waring, Jonathan Win, William Llovel
Affiliations: LEGOS, CNES, University of Reading, Magellium, National Oceanography Center, ESA, University of Leicester, LOPS, DMI, University of Vienna, Met Office
The climate feedback parameter λ determines the magnitude of the Earth’s radiative response to a change in global mean surface temperature. The more negative λ is, the more stabilising the feedback of the Earth’s radiative response and the less sensitive the Earth’s climate response to atmospheric greenhouse gas concentrations. Climate models and observations from past decades suggest that distinct patterns of sea surface warming lead to different Earth radiative responses, meaning that λ varies with the sea surface temperature (SST) warming pattern. Climate model simulations forced with observations of historical sea surface temperature show a mean λ that is consistent with observations over the last decades. However, they fail to reproduce the observed λ variations since 1970. Here, using up-to-date satellite observations of radiative forcing, Earth energy imbalance and surface temperature from the ESA Climate Space MOTECUSOMA project, we revisit the global energy budget. We estimate the global energy budget over successive 25-year windows and regress EEI − RF against TAS to derive a time series of λ over the last 20 years with improved uncertainties. We compare λ variations with different proxies of the changing tropical SST pattern (including the Pacific Decadal Oscillation index and the warmest 30% of the tropical SSTs, SST#), both in climate model simulations and in observations, to determine the causes of the changing λ. Preliminary results suggest that λ variations are linked to changes in the Hadley-Walker circulation rather than the traditional East-West gradients. We propose an observational framework to investigate this hypothesis further.
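
The regression sketched in the abstract, estimating λ as the slope of (EEI − RF) against the surface temperature anomaly TAS, can be written in a few lines of NumPy. The numbers below are synthetic and purely illustrative; they are not the MOTECUSOMA data or results.

```python
import numpy as np

# Sketch: estimate the climate feedback parameter lambda as the slope of
# (N - F) against T, with N the Earth energy imbalance, F the radiative
# forcing and T the surface temperature anomaly. All values are synthetic.
rng = np.random.default_rng(42)
years = np.arange(2000, 2025)
T = 0.02 * (years - 2000) + 0.03 * rng.standard_normal(years.size)  # K
F = 0.04 * (years - 2000)                      # W m^-2, assumed forcing
true_lambda = -1.3                             # W m^-2 K^-1, assumed value
N = F + true_lambda * T + 0.02 * rng.standard_normal(years.size)

lam, intercept = np.polyfit(T, N - F, 1)       # slope = lambda estimate
print(lam)
```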
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Changes in Earth’s Energy Imbalance Since 2000 As Observed by CERES

Authors: Norman Loeb, Mohan Shankar, William Smith, Jr., Lusheng Liang, Seiji Kato, David Doelling, Paul Stackhouse, Katie
Affiliations: Nasa Langley Research Center
Earth’s radiation budget (ERB) describes how solar radiant energy and terrestrial thermal infrared radiation are distributed within the climate system. The balance between absorbed solar and outgoing longwave radiation at the top-of-atmosphere determines whether Earth heats up or cools down. Because the processes impacting Earth’s climate operate on interannual to multi-decadal (and beyond) timescales, understanding current and future climate requires a long observational climate data record of the ERB. The Clouds and the Earth’s Radiant Energy System (CERES) instruments provide the longest continuous global record of the ERB available. CERES has shown that Earth’s energy imbalance has doubled since 2000, implying an acceleration of warming. Because CERES data products incorporate MODIS and VIIRS imager data along with meteorological information from reanalysis, it is possible to perform attribution studies to better understand what atmospheric and surface properties are behind the ERB changes. This presentation will review recent changes in the EEI and discuss the challenges associated with maintaining a long-term ERB climate data record.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Total Solar Irradiance and Terrestrial Outgoing Longwave Radiation as observed with the SI-Traceable Compact and Light-Weight Radiometer (CLARA) onboard NorSat-1

Authors: Margit Haberreiter, Dr Wolfgang Finsterle, Jean-Philippe Montillet, Prof Nigel Fox, Bo Andersen, Alex Beattie, Martin Mostad, Ivar Spydevold, Pål Brekke
Affiliations: PMOD/WRC, NPL, University of Oslo, UTIAS SLF, Statsat, Norwegian Space Agency
The energy budget of the Earth is governed by the balance between the energy entering and leaving the Earth system. On the incoming side is the Total Solar Irradiance (TSI), the solar radiation reaching the top of the atmosphere (ToA). On the outgoing side there is the outgoing shortwave radiation (OSR), i.e. the directly reflected solar radiation, as well as the outgoing longwave radiation (OLR), the thermal radiation of the Earth’s surface and atmosphere. The OSR and OLR together make up the total outgoing radiation (TOR), the spatially and spectrally integrated emission at the ToA. CLARA onboard NorSat-1 is the first SI-traceable radiometer in space that measures both TSI and spatially resolved OLR, and as such serves as an in-orbit demonstration of the SI-traceable measurement of the components of the Earth Radiation Budget. We present the available TSI and OLR data products as measured with CLARA and their validation against other available datasets. Furthermore, we elaborate on potential synergies with other missions, such as TRUTHS, as part of the TRUTHS mission Accompanying Consolidation Towards Operations study.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: An investigation on causes of the detected surface solar radiation brightening in Europe using satellite data

Authors: Anke Tetzlaff
Affiliations: Meteoswiss, Meteoswiss, Meteoswiss, ETH Zürich
Linda Schilliger (1,2), Anke Tetzlaff (1), Quentin Bourgeois (1), Lucas Ferreira Correa (2), Martin Wild (2); 1) Federal Office for Meteorology and Climatology (MeteoSwiss), Zurich, Switzerland; 2) Institute for Atmospheric and Climate Science, ETH Zurich, Zurich, Switzerland. Surface solar radiation is fundamental for terrestrial life. It provides warmth to make our planet habitable, and drives atmospheric circulation, the hydrological cycle and photosynthesis. Europe has experienced an increase in surface solar radiation, termed "brightening", since the 1980s. This study investigates the causative factors behind this brightening. A novel algorithm from the EUMETSAT Satellite Application Facility on Climate Monitoring (CM SAF) provides the unique opportunity to simulate surface solar radiation under various atmospheric conditions for clouds (clear-sky or all-sky), aerosol optical depth (time-varying or climatological averages) and water vapour content (with or without its direct influence on surface solar radiation). Through a multiple linear regression approach, the study attributes brightening trends to changes in these atmospheric parameters. Analysing 61 locations distributed across Europe from 1983 to 2020, aerosols emerge as the key driver during 1983-2002, with Southern Europe and high elevations showing subdued effects (0-1%/decade) versus more pronounced impacts in Northern and Eastern Europe (2-6%/decade). Cloud effects exhibit spatial variability, inducing a negative effect on surface solar radiation (-3 to -2%/decade) at most investigated locations in the same period. In the period 2001-2020, aerosol effects are much smaller, while cloud effects dominate the observed brightening (2-5%/decade). This study therefore finds a substantial decrease in the cloud radiative effect over Europe in the first two decades of the 21st century. Water vapour exerts negligible influence in both sub-periods.
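
As an illustrative sketch of the attribution method (not the CM SAF algorithm or the study's data), a multiple linear regression of surface-solar-radiation anomalies on aerosol, cloud and water-vapour predictors can be solved with ordinary least squares:

```python
import numpy as np

# Sketch of trend attribution by multiple linear regression: surface
# solar radiation (SSR) anomalies regressed on aerosol, cloud and water
# vapour predictors. All series and coefficients are synthetic.
rng = np.random.default_rng(1)
n = 38 * 12                                   # monthly samples, 1983-2020
aerosol = rng.standard_normal(n)
cloud = rng.standard_normal(n)
water_vapour = rng.standard_normal(n)

# Synthetic "truth": strong aerosol and cloud effects, weak water vapour.
ssr = (3.0 * aerosol - 2.0 * cloud + 0.1 * water_vapour
       + 0.5 * rng.standard_normal(n))        # W m^-2

# Ordinary least squares with an intercept column.
X = np.column_stack([aerosol, cloud, water_vapour, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, ssr, rcond=None)
print(coef[:3])  # recovered coefficients, close to (3.0, -2.0, 0.1)
```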
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.61/1.62)

Session: A.05.11 How to Ensure Accuracy and Stability of Sea Level Measurements? Calibration, Validation and Verification for Current and Future Reference Altimeter Missions. - PART 2

This insight session will delve into the topic of validating sea level measurements from Sentinel-6 and future reference altimeter missions with a focus on the stability assessment. We aim to bring together experts with innovative ideas on in-situ and cross-comparison cal/val approaches, those from climate applications who can provide insights into the necessary uncertainty and long-term stability for robust and useful data, and experts on the instruments and data processing of the current mission.

The session will be split into three parts, each introduced by invited speaker presentations, followed by an extended open discussion.

First, understanding needs for future missions. By exploring observational needs to answer critical scientific questions, we aim to identify priorities for the next-generation altimetry reference mission. In this session we will consider the role of a reference mission within an altimetry constellation in providing robust information for climate studies at local, regional and global scales. Here, we will also consider the challenges of and opportunities provided by closure experiments that match observed sea level rise to independently assessed steric and ocean mass change datasets.

Second, establishing calibration, validation and verification approaches. We aim to identify and outline the validation systems required to ensure the accuracy and stability of sea level measurements at global, regional and high-resolution scales. This will consider whether the current network is sufficient to verify stability requirements for next generation altimetry missions, and/or what new approaches are needed. Here we will consider in-situ observational systems, including those that meet CEOS-FRM (fiducial reference measurement) standards and the potential of more novel approaches. The scope includes optimising the use of tide gauges and floating buoys, and exploiting networks of transponders and corner cubes, as well as exploring the potential of novel approaches such as the use of autonomous surface vehicles.

Third, the role of a reference mission for the altimetry constellation. Here, we consider how and why an altimetry mission is designated as “the reference mission” for all other altimeters. The aim of the discussion will be to define the role of a reference mission to support other altimeters, and what criteria must be met for a mission to have this designated status.

Overview:

The session, comprising two 90-minute parts, is scheduled for Thursday morning, June 26, 2025. The first part will take place from 8:30 to 10:00 am, and the second from 11:00 to 12:30 pm.
- The first part focuses on establishing sea level stability needs for future altimetry missions.
- The second focuses on establishing calibration, validation and verification approaches.

Part 2 : Establishing calibration, validation and verification approaches


General introduction of calibration, validation and verification approaches in terms of metrology


  • Emma Woolliams - NPL

Validation with Transponders and corner cubes


  • Stelios Mertikas - U. Crete

Capability of global tide gauge network to assess drift in sea level measurements


  • Steve Nerem - U. Colorado

Capability of cross-comparison methods between altimeter satellites to assess mean sea level drift


  • Noemie Lalau - Magellium

Multimission crossover analysis


  • Denise Dettmering - DGFI

Novel approaches to validation: autonomous surface vehicles


  • Anahita Lavarack - Oshen Sail
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.15/1.16)

Session: A.07.04 Irrigation monitoring through Earth Observation (EO) data - PART 2

Irrigation is the primary source of anthropogenic water use, far exceeding domestic and industrial uses. Despite the essential role played by irrigation in ensuring food production, and its impacts on the management of water resources, reliable and explicit data about irrigation dynamics (i.e., timing, extent and amounts of water used) are generally lacking worldwide. Remote sensing technology has recently proven to be an essential tool for detecting irrigation occurrence in space and time and for estimating the amounts of water used, especially in light of the latest high-resolution retrievals.
This session welcomes contributions presenting innovative approaches that leverage Earth Observation (EO) data, possibly combined with modeling approaches or ground-based measurements, for monitoring irrigation and assessing the associated impacts. Topics of interest include but are not limited to:
- exploitation of EO data for irrigation detection;
- use of EO data for quantifying irrigation water use;
- data assimilation techniques to improve irrigation schemes;
- assessment of the impacts of irrigation on the water cycle;
- management of irrigation using hydrological modeling combined with satellite data;
- estimates of irrigation water requirements leveraging satellite data;
- development of strategies based on remotely sensed data for improving irrigation efficiency.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Operational Water Accounting Using Satellite Data: The IRRISAT® Procedure for Mapping Irrigated Areas and Irrigation Water Requirements in Campania Region, Italy

Authors: Camilla Della Monica, Qotada Alali, Salvatore Falanga Bolognesi, Giuseppe Castaldi, Amedeo D'Antonio, Carlo De Michele
Affiliations: Ariespace Srl, Regione Campania - D.G. Politiche Agricole Alimentari e Forestali
Irrigation water accounting is crucial for managing water resources and ensuring sustainable food production. In compliance with the European Commission's Water Framework Directive (WFD), Italy's Ministry of Agricultural Policies (MIPAAF, now MASAF) issued guidelines on July 31, 2015, for quantifying irrigation volumes and estimation methodologies, approved at the State-Regions Conference on August 3, 2016. According to these guidelines, water used for irrigation is estimated by equating it to irrigation water requirements, with regional governments responsible for conducting and validating these estimations. Irrisat® is the operational procedure employed for this task in the Campania Region territory (South Italy). It was developed to support irrigation water accounting by accurately mapping irrigated areas and estimating irrigation water requirements. This study covers the irrigation seasons 2021, 2022 and 2023, encompassing the whole regional territory (about 13 thousand square kilometres), with an irrigated area of about 140 thousand hectares, including districts served by public collective distribution networks and private groundwater wells. The procedure involves two main steps. Firstly, irrigated areas were classified using a machine-learning algorithm based on Random Forest. This algorithm has been demonstrated to be robust for use in operational applications. The approach is primarily focused on classifying irrigated herbaceous and tree crop species, and it was further extended to detect greenhouses. The model was trained with ground truth data collected separately for each year (around 15,000 ground-truth points in total), utilising the time series of specific spectral vegetation indices derived from Sentinel-2 satellite images. A balanced training and validation dataset (70% training and 30% validation) with a random sampling strategy stratified by class was applied. 
The validation was performed through confusion-matrix analysis, resulting in an overall accuracy of over 90% for all three years (91.0% for 2021, 92.6% for 2022, and 92.3% for 2023). Secondly, the irrigation water requirements were estimated with the IRRISAT® method (based on a one-step Penman-Monteith approach for evapotranspiration), which has been extensively validated in different agronomic and environmental conditions. This method incorporates canopy parameters derived from satellite acquisitions, such as the leaf area index (extracted using an artificial neural network, ANN) and surface albedo, with fixed crop resistance values. Daily agro-meteorological data derived from ERA5 complemented these satellite-based surface parameters, whose values were linearly interpolated between two consecutive satellite acquisition dates. The leaf area index was also used to derive effective precipitation, enabling daily calculations of irrigation water requirements. Finally, the irrigation volume was determined by considering the irrigated areas identified in the previous step. This approach provides operational, cost-effective, robust and reliable estimation of the total irrigation volumes at the regional scale and a cross-comparison with allocated water resources in the implementation of the Regional Irrigation Plan. The study has also highlighted areas of higher demand where improvements in irrigation infrastructures might be considered.
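For readers unfamiliar with confusion-matrix validation, the overall-accuracy figures quoted above are the trace of the matrix divided by the total sample count. A minimal sketch with illustrative counts (not the study's data):

```python
# Overall accuracy from a confusion matrix.
# Rows = reference classes, columns = predicted classes (illustrative counts).
labels = ["irrigated herbaceous", "irrigated tree crops", "non-irrigated"]
cm = [
    [450, 20, 30],
    [15, 380, 25],
    [10, 18, 552],
]

total = sum(sum(row) for row in cm)
correct = sum(cm[i][i] for i in range(len(cm)))  # diagonal = agreements
overall_accuracy = correct / total
print(f"{overall_accuracy:.1%}")
```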
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Linking Copernicus-derived estimates of smallholder paddy flooding practices to agricultural credit in the Senegal River Valley

Authors: Janet Mutuku, Mr Pierre C. Sibiry Traore, Miss Glorie Metsa Wowo, Miss Celeste Tchapmi Nono Nghotchouang, Mr Kalidou Aliou Ball, Dr. Kidia K Gelaye
Affiliations: ICRISAT, Manobi Africa PLC
In the Senegal River Valley, where smallholder farmers rely on controlled flooding to cultivate rice, predicting paddy production hinges on accurate monitoring and forecasting of flooding patterns. Key determinants of timely flooding include early access to credit, which enables land preparation, input purchases and other production activities, as well as agricultural practice and performance in previous seasons. Earth observation can potentially reduce the cost of monitoring stakeholders’ compliance with recommended agronomic and financial best practices, which could help de-risk investments in local agricultural value chains, mitigate dependence on expensive food imports and enhance food sovereignty. This study therefore has three objectives. First, it aims to evaluate Sentinel-2's ability to monitor the progression of flooding across time and space in terms of farmers’ response to the availability of credit and to the timing of its disbursement. Second, it examines the contribution of Sentinel-2 and Sentinel-1 data fusion in high-cloud environments such as rainy-season rice cropping. Third, it investigates how the combination of satellite observations and mathematical models can help improve the in-season and across-season predictability of the total planted area. Preliminary results generated over the 2019-2024 period reveal that a significant fraction of producers flood and plant too late, exposing themselves to yield losses at the end of the season. Satellite imagery collected during the 2023 dry hot season further suggests that different flooding behaviors are discernible, linked to the endowment levels of individual farmers; models based on plot-level measurements capture the relation between satellite estimates of flooding dates and final yield. 
We also demonstrate that nine satellite observations at the start of a cropping cycle may suffice to achieve a root mean squared error (RMSE) of approximately 4% and a mean absolute error (MAE) of 2% on the total planted area forecast. This information can help predict the performance of agricultural credit in a given season, and the likely volume of agricultural credit mobilized for the next cropping cycle.
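As a hedged illustration of the error metrics quoted above (with invented numbers, not the study's data), RMSE and MAE relative to the mean observed planted area can be computed as:

```python
# Sketch: RMSE and MAE of a planted-area forecast, expressed as a
# percentage of the mean observed area (illustrative values).
import math

observed = [1000.0, 1200.0, 900.0, 1100.0]   # ha, per scheme/season
forecast = [1040.0, 1180.0, 930.0, 1060.0]

errors = [f - o for f, o in zip(forecast, observed)]
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
mae = sum(abs(e) for e in errors) / len(errors)
mean_obs = sum(observed) / len(observed)
print(f"RMSE {rmse / mean_obs:.1%}, MAE {mae / mean_obs:.1%}")
```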
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Mapping of Irrigation Practices With Current and Future Sentinel Satellites

Authors: Radoslaw Guzinski, Hector Nieto, Benjamin Mary, Cecile Kittel, Michael Munk
Affiliations: DHI A/S, ICA-CSIC
Irrigated agriculture is by far the largest consumer of fresh-water resources, globally responsible for around 70% of fresh-water withdrawals. At the same time, the spatial and temporal characteristics of irrigation practices are poorly mapped in many parts of the globe. This is particularly evident in Africa, even in countries implementing or planning to implement water-licensing schemes, where the locations of many irrigation districts are unknown. In Europe too, where countries generally have a better overview of irrigated areas, the timing and amount of irrigation and the exact location of irrigated fields are largely unknown. Actual evapotranspiration, which can be reliably derived from thermal and shortwave optical satellite observations, is a direct proxy of irrigation water use. However, there is currently a lack of thermal satellite missions with high spatio-temporal resolution, which may be partly responsible for the absence of widely available, operational irrigation maps. In this work, we assess the utility of daily, field-scale actual evapotranspiration maps, derived by fusing the thermal sensor on board the Sentinel-3 satellites with the shortwave optical camera on board the Sentinel-2 satellites, to produce monthly maps of irrigated fields and seasonal estimates of the volume of water consumed by irrigation. To evaluate the improvements that can be expected in the future, with the planned launch of the Copernicus Land Surface Temperature Monitoring (LSTM) mission and other thermal satellites with high spatio-temporal resolution, we incorporate ECOSTRESS thermal observations into the modelling methodology. Three methods for mapping irrigated areas are evaluated. 
Two are based on thresholding the ratio of actual to potential evapotranspiration (a proxy for root-zone soil moisture deficit), derived at local and regional scales, in order to separate the increase in evapotranspiration due to rain (largely occurring at regional scales) from that due to irrigation (largely occurring at local scales). The third method uses a simple water-balance model of root-zone soil moisture driven mainly by rainfall and actual evapotranspiration maps. A temporal mismatch between the soil moisture and evapotranspiration signals (i.e. low root-zone soil moisture with high evapotranspiration) indicates an input of water into the soil in addition to precipitation. We also evaluate three methods to estimate seasonal irrigation water volume. The first is based on comparing evapotranspiration sums from irrigated and rainfed agricultural parcels located in the same region. The second relies on the difference between evapotranspiration sums produced with satellite data and with a distributed hydrological model driven by meteorological forcings. The last method makes use of the simple soil-moisture model and the estimated volume of water added during detected irrigation events. The methods are evaluated in irrigation schemes located in Europe, due to in-situ data availability, and the utility of the maps is demonstrated in European irrigation districts as well as in Burkina Faso, Botswana and South Africa. The study was performed in the scope of ESA’s EO MAJI and MULTIWATER projects.
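A minimal sketch of the third mapping method, under the simplifying assumption of a single-bucket root-zone store with illustrative daily inputs (not the study's model or data): days where the rainfall-only bucket is depleted yet observed evapotranspiration stays high are flagged as likely irrigation.

```python
# Sketch: rainfall-driven bucket model of root-zone soil moisture; a
# temporal mismatch (empty modelled bucket, high observed ET) flags a
# water input beyond precipitation. All numbers are illustrative.
capacity = 100.0   # mm, root-zone storage
sm = 20.0          # mm, initial soil moisture
rain = [0, 0, 5, 0, 0, 0, 0, 12, 0, 0]   # mm/day
et =   [6, 6, 6, 7, 7, 7, 7, 6,  7, 7]   # mm/day, satellite-derived actual ET

flags = []
for p, e in zip(rain, et):
    sm = min(capacity, sm + p) - e   # add rain (capped), remove observed ET
    sm = max(sm, 0.0)
    # High ET despite an empty modelled bucket => likely irrigation input.
    flags.append(sm <= 0.0 and e > 5.0)

print([day for day, f in enumerate(flags) if f])
```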
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Assessing Energy Use and Greenhouse Gas Emissions in Irrigation across Europe and Central Asia

Authors: Silvan Ragettli, Dr Oleksandr Mialyk, Dr Ranu Sinha, Dr Poolad
Affiliations: hydrosolutions Gmbh
Effective and sustainable irrigation in the Europe and Central Asia (ECA) region hinges on prioritizing energy efficiency to improve water management, reduce operational costs, and minimize environmental impact. The region faces increasing energy demands for irrigation due to low system-level efficiencies and outdated infrastructure. Understanding energy consumption and its associated greenhouse gas (GHG) emissions is critical for identifying inefficiencies and transitioning toward sustainable irrigation practices. While previous studies have quantified energy use and GHG emissions associated with groundwater pumping, there is a notable gap concerning surface water pumping for irrigation in the ECA region, despite it being the predominant method. This gap largely stems from the complexities involved in assessing surface water systems, which require detailed spatial data on canal networks, intake locations, and elevation profiles to accurately estimate pumping requirements. This study leverages Earth Observation (EO) data to provide a comprehensive, country-level assessment of energy use and GHG emissions in irrigation systems across 21 ECA countries.

Methodology

The study quantifies irrigation energy use by integrating geospatial datasets and new high-resolution EO-derived products, providing a novel analysis of both the volume of irrigation water pumped and the required pumping heights. By examining key variables, including the source of irrigation water (surface or groundwater), on-farm irrigation techniques, and the specific demands of lift irrigation, this research offers a comprehensive assessment of energy consumption in irrigation practices. The proportions of surface water and groundwater utilized for irrigation are estimated from the global inventory established by Siebert et al. (2010). For groundwater irrigation, energy use estimates rely on the foundational work of Qin et al. (2024). In contrast, surface water irrigation energy use is evaluated through a multi-step methodology:
1. Identification of Intake Points: utilizing OpenStreetMap (OSM) canal and river vector data, intake points where canals intersect with rivers are identified.
2. Mapping Irrigated Areas: remote-sensing-based irrigated area maps, such as the WorldCereal 10 m 2021 product, are employed to delineate irrigated regions.
3. Defining Canal Command Areas: by applying proximity filters in conjunction with the HydroSHEDS v1 digital elevation model (DEM), the irrigated areas served by each canal intake are determined. This approach ensures that irrigated areas are correctly identified as being downstream of their respective main canals.
4. Calculating Pumping Heights: the elevation difference between intake points and the corresponding canal outlets serving the irrigated areas is calculated using the DEM.
5. Determining Irrigation Water Volumes: actual crop irrigation water is estimated using irrigated and rainfed crop area maps together with monthly composites of 30 m evapotranspiration maps from the Landsat Collection 2 Provisional Actual Evapotranspiration Science Product. Irrigation efficiencies are assessed by comparing actual blue-water consumption with available data on water abstraction volumes.
6. Estimating Energy Requirements and GHG Emissions: energy requirements for lift irrigation are calculated from pumping heights, irrigation water volumes, pump efficiency, friction losses, and conveyance distances; the associated greenhouse gas emissions are evaluated by relating energy consumption to each country's energy mix and applying country-specific emission factors.

Irrigation volumes are derived by assessing water scarcity conditions, which include blue-water deficits. Remote sensing measurements of actual crop water consumption and deficits provide critical insights into blue-water use. GHG emissions are calculated by linking energy demand estimates to country-specific energy mixes and emission factors, allowing for spatially explicit insights into the environmental impact of irrigation practices.

Results

Preliminary results highlight the critical role of lift irrigation in the ECA region, where significant energy demands are driven by multi-stage pumping systems. In Ukraine and Uzbekistan, for example, the analysis reveals the substantial energy requirements of extensive Soviet-era irrigation infrastructure that relies on pumped surface water. These systems, some of the largest in the ECA region, often operate with pumping heights exceeding 100 meters. Results indicate that lift irrigation systems account for 60% of Ukraine’s total permanently irrigated area (4.2 million hectares, excluding incidental irrigation). In Uzbekistan, the main lift irrigation systems account for about 26% of the total irrigation water used, consuming 2.3 TWh of energy and resulting in 1.1 MtCO₂ emissions. Given the low irrigation efficiencies of approximately 30%, these findings underscore the urgent need for modernization to improve energy efficiency and reduce costs in the face of evolving climate conditions and persistent economic challenges.

Conclusion

This study utilizes Earth Observation (EO) data to quantify energy use and greenhouse gas (GHG) emissions in irrigation systems, providing actionable insights into inefficiencies and opportunities for sustainable transitions. By addressing both physical and economic water scarcity conditions, this analysis supports informed decision-making for irrigation development across the ECA region. The findings underscore the critical role of energy-efficient practices, such as modernizing lift irrigation systems, in enhancing the sustainability of irrigated agriculture and mitigating the environmental impacts of water use. 
This research offers a scalable framework for integrating EO data into agricultural planning, promoting data-driven strategies for sustainable water and energy management in the ECA region and beyond.
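At its core, the lift-irrigation energy calculation in step 6 reduces to the hydraulic energy E = ρ·g·H·V/η, scaled by a grid emission factor. A sketch with illustrative parameter values (the pump efficiency, emission factor and volumes below are assumptions, not the study's figures):

```python
# Sketch: hydraulic energy for lift irrigation and associated GHG emissions.
RHO_G = 1000.0 * 9.81   # water density (kg/m^3) x gravitational acceleration (m/s^2)

def pumping_energy_kwh(volume_m3, lift_m, pump_efficiency=0.6):
    """Energy to lift a water volume by a given head, for an assumed pump efficiency."""
    joules = RHO_G * volume_m3 * lift_m / pump_efficiency
    return joules / 3.6e6   # J -> kWh

volume = 5_000_000.0    # m^3 per season, illustrative
lift = 100.0            # m, multi-stage pumping height
energy = pumping_energy_kwh(volume, lift)
emission_factor = 0.4   # kgCO2/kWh, depends on the national energy mix
print(f"{energy / 1e6:.2f} GWh, {energy * emission_factor / 1e6:.1f} ktCO2")
```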
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: More crop per drop: optimizing water productivity from field-scale to global scale

Authors: Florian Werner, Ageeth de Haan, Ilse de Leede, Matteo Ziliani, Kristiina Byckling, Rim Sleimi, Albert Abelló, Roula Bachour, Jonna van Opstal, Wim Bastiaanssen
Affiliations: Hydrosat, Hydrosat
Managing precious water resources responsibly and protecting crops from adverse growing conditions in the face of climate change are key challenges for sustainable and robust agriculture everywhere in the world. Hydrosat leverages thermal infrared (TIR) satellite technology to help governments preserve water while increasing farmers’ crop yields. TIR remote sensing measures land surface temperature (LST) to solve the surface energy balance and directly track the water cycle in crops and soil. Important agronomic parameters, for example soil water potential or moisture, water consumption (i.e., evapotranspiration, ET), crop stress, and biomass accumulation, can thus be monitored fully remotely over arbitrarily large areas. Coupled to a soil water balance model based on meteorological data, remote sensing algorithms also estimate the amount of irrigation water applied by farmers and generate irrigation recommendations that optimize water productivity, i.e., maximize crop yield while minimizing irrigation water consumption. Hydrosat is launching a thermal infrared satellite constellation to provide daily, global, high-resolution land surface temperature measurements targeting a spatial resolution of 50 m. Hydrosat’s first satellite was launched successfully in August 2024 and is returning its first operational data. IrriWatch is Hydrosat’s irrigation management decision support system, which allows growers to track the water demand and growth progress of their crops down to individual 10x10 m² pixels in near-real-time. With governments becoming more conscious about conserving their water reserves, Hydrosat has launched multiple studies to apply its algorithms over large irrigation districts and potentially even whole nations. Water productivity management requires accurate measurement of both the amount of irrigation water consumed and the crop yield. 
We present recent feasibility studies and validation experiments comparing output from the IrriWatch algorithms to ground truth data obtained from in-field measurements and airborne sensors, with a particular focus on expanding the scale to operational global monitoring. There is clear demand from farmers to increase spatial resolution as much as possible, but this poses a huge computational burden for large-scale monitoring. We apply the Surface Energy Balance Algorithm for Land (SEBAL) to LST imagery from satellites and airborne sensors at multiple ground sampling distances of 3, 10, 50, and 100 m, achieved by resampling the raw imagery. We find that field-level errors in LST and actual ET increase substantially for pixel sizes larger than 10x10 m². We attribute this effect mainly to a smoothing of the hot tail of the surface temperature distribution, which is critical for accurate estimation of the sensible heat flux. Thermal sharpening can overcome some limitations of current TIR datasets, but cannot reconstruct the extremes of the temperature distribution, which are absent from Landsat-8/9 images due to their insufficient native resolution. Our results further imply that reducing spatial resolution for large-scale analysis is not a suitable approach. We observe excellent agreement of actual ET, dry matter production, and crop yield for multiple crop types from field- to basin-scale. The IrriWatch models typically also provide realistic estimates of root zone soil moisture and applied water, but these results are highly sensitive to the accuracy of precipitation data and the soil water retention curve. By advancing high-resolution thermal infrared remote sensing, Hydrosat bridges field-scale precision with global reach, driving sustainable water productivity management and maximizing the impact of every drop to help feed a growing world.
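The hot-tail smoothing effect described above can be illustrated with a toy one-dimensional scene (synthetic values, not the study's data): block-averaging to a coarser grid lowers the scene maximum that energy-balance models key on.

```python
# Sketch: block-averaging an LST transect suppresses the hot tail of the
# temperature distribution that SEBAL-type models use for the sensible
# heat flux. Temperatures in kelvin, illustrative.
fine = [300.0] * 16
fine[5] = 330.0   # one hot, dry pixel
fine[6] = 325.0

def resample(values, block):
    """Average non-overlapping blocks of `block` fine pixels into one coarse pixel."""
    return [sum(values[i:i + block]) / block for i in range(0, len(values), block)]

coarse = resample(fine, 4)
print(max(fine), max(coarse))   # the coarse maximum is noticeably cooler
```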
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Optimizing Date Palm Water Productivity with Remote Sensing and FAO WaPOR Data

Authors: Almutaz Mohammed, Steve Goss, Issam Nofal, Luis Pereira, Professor Amer Marei, Livia Peiser
Affiliations: Food And Agriculture Organization Of The United Nations (fao), Al-Quds University
Date palm, a vital crop in arid and semi-arid regions, provides substantial economic and cultural value while sustaining rural livelihoods. However, its cultivation faces challenges from environmental and socio-economic constraints, including water scarcity, salinity, and varying agronomic practices. In a collaborative effort, FAO's Land and Water Division (NSL) and Investment Center (CFI) Teams, alongside funding from the European Bank for Reconstruction and Development (EBRD), have brought together a national team comprising experts from the Ministry of Agriculture and a leading university. This partnership aims to harness the power of remote sensing and FAO's WaPOR database to enhance water productivity and address key challenges in date palm farming systems. The investigation analyzed 112 date palm farms during the 2022/2023 season, integrating farm-level data—tree age, irrigation type, soil and water salinity, tree cover, farm size, and tree density—with Earth Observation analytics. Utilizing WaPOR-derived actual evapotranspiration (AET), the analysis revealed a stronger AET-yield correlation in young date palms compared to mature trees. To facilitate the analysis of AET across tree age groups, a supervised Random Forest classifier was employed to classify the date palm crop map into three age classes: young, intermediate, and mature. This classification leveraged NDVI time series spanning 2000–2023, analyzed through annual mean and variance metrics combined with tree age data. The results, enriched with yield data from field surveys, enable the quantification of water productivity and AET dynamics for each age class, providing actionable insights into optimizing water resource management and improving agricultural productivity. 
This impactful collaboration demonstrates the potential of integrating EO data, machine learning, and field-level insights to deliver scalable solutions for sustainable water management and agricultural development in arid regions, driving progress toward global development goals. The findings further inform the analysis of economic water productivity dynamics and support the identification of key policy and investment priorities.
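As a hedged sketch of the feature-engineering step described above (NDVI values invented, not the project's data), the annual mean and variance metrics fed to the age-class Random Forest could be derived as:

```python
# Sketch: annual mean and variance of an NDVI time series as per-year
# features for tree-age classification. NDVI samples are illustrative.
from statistics import mean, pvariance

ndvi_by_year = {
    2021: [0.25, 0.30, 0.42, 0.55, 0.60, 0.48],
    2022: [0.28, 0.33, 0.45, 0.58, 0.62, 0.50],
}

# One (mean, variance) feature pair per year, rounded for display.
features = {y: (round(mean(v), 3), round(pvariance(v), 4)) for y, v in ndvi_by_year.items()}
print(features)
```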
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall L3)

Session: D.04.02 Best practices for execution of algorithms and workflows across federated cloud environments

EO algorithms and workflows require execution across different but federated EO cloud platforms and services. This session focuses on showcasing the state of the art of best-practices, standards, implementations, and technological solutions that enable execution of algorithms and workflows across different cloud-based environments.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall L3)

Presentation: ZerOGCProcesses: Leveraging Git, CWL, and OGC Standards for Dynamic Workflow Deployment of Geospatial Applications

Authors: Christophe Triquet, Vincent Gaudissart, Yasmine Boulfani, Benjamin Giraudon
Affiliations: Cs Group
The deployment of workflows, especially in geospatial processing, is a recurring challenge that complicates the transition from research to production. Scientists often design algorithms in controlled environments, typically on local machines, prioritizing experimentation and accuracy. However, these processes frequently fail to meet the technical and scalability requirements of production systems. This gap complicates the collaboration between scientists and integrators, who must adapt these workflows into operational, scalable, and efficient services. To address these challenges, CS Group is developing an automated deployment solution, zerogcprocesses, that accelerates the publication of algorithms. Current state-of-the-art tools and standards form the basis of this solution. Git, the widely used distributed version control system, provides the foundation for collaborative code sharing, version control, and seamless integration into deployment pipelines. In the geospatial domain, the OGC API - Processes standard, defined by the Open Geospatial Consortium (OGC), enables standardized interactions with geospatial processes for numerous compatible clients. This ensures interoperability and scalability across applications. Additionally, the Common Workflow Language (CWL), an open standard designed for describing workflows in reproducible environments, provides a structured way to define and execute processes. Initially popular in bioinformatics, CWL is increasingly relevant in geospatial applications and serves as the basis of the OGC Application Package standard. CS Group’s solution integrates these technologies to automate the deployment process. By leveraging OGC Application Packages stored in Git repositories, zerogcprocesses dynamically generates fully compliant OGC API - Processes services, with automatic translation of inputs and outputs. 
This automation minimizes manual intervention, allowing scientists to focus on innovation while integrators deploy scalable and robust applications. A key feature of this solution is its ability to execute scientific processes on Cloud deployments but also on isolated infrastructures, such as high-performance computing (HPC) infrastructures, which are often inaccessible from Cloud environments due to security constraints. The system also preserves the integrity of scientific containers, executing them in their original form without requiring modification or instrumentation. Dynamic operations are a core feature, enabling users to test and deploy processes on demand. The solution supports CWL standards, including Directed Acyclic Graph (DAG) workflows and resource allocation for CPU, GPU, and memory. This flexibility ensures workflows can scale efficiently, handle complex patterns like map-reduce, and execute on various platforms, including Kubernetes, HPC (via Slurm, a job scheduler for high-performance computing environments), and local environments. Automation is a central theme in the solution. Process execution results are automatically cataloged, archived, and linked to their lineage, providing traceability from input data to the final output. Each user has a personal catalog where workflows are organized into collections, enabling detailed monitoring and reproducibility. Additionally, the system simplifies container management, automatically converting Docker images to formats like Singularity, an alternative containerization solution, for compatibility with HPC environments. With the goal of automation, there are also plans to automate the creation of standardized Docker images to ease deployments. Additionally, efforts are ongoing to develop a tool to assist in creating CWL files. The zerogcprocesses system aligns closely with the needs of both scientists and integrators, providing a straightforward path from algorithm development to production. 
By combining Git, CWL, and OGC standards, CS Group’s platform offers a practical and innovative approach to process deployment. It bridges the gap between research and production, enabling faster, more reliable, and scalable publication of algorithms. This tool, which maintains simplicity, efficiency, and interoperability, has the potential to transform how scientific workflows are deployed, making advanced geospatial applications accessible and reproducible at scale. This project is part of the CLUSSTER initiative, co-funded by BPI-France, which aims to strengthen the French and European cloud industry by providing a platform for artificial intelligence, high-performance computing, and quantum computing.
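To make the standardized interaction described above concrete, a client submits a job by POSTing a JSON execution request to an OGC API - Processes endpoint. The sketch below only shows the payload shape defined by OGC API - Processes Part 1; the input names are invented for illustration, and the services generated by zerogcprocesses may declare different inputs in their process descriptions.

```python
import json

# Hypothetical sketch of the body a client POSTs to
# /processes/{processId}/execution on an OGC API - Processes server.
def build_execution_request(inputs: dict, response: str = "document") -> str:
    payload = {
        "inputs": inputs,      # keyed by the input ids from the process description
        "response": response,  # "document" or "raw", per OGC API - Processes Part 1
    }
    return json.dumps(payload)

body = build_execution_request(
    {"aoi": "POLYGON((5 51, 6 51, 6 52, 5 51))", "start": "2024-01-01"}
)
# A real submission would then be an HTTP POST of `body` with
# Content-Type: application/json to the service endpoint.
```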

Thursday 26 June 11:30 - 13:00 (Hall L3)

Presentation: Bridging Research and Operations with Cloud-Agnostic EO Application Packages

Authors: Fabrice Brito, Pedro Goncalves, Fabrizio Pacini
Affiliations: Terradue Srl
The Open Geospatial Consortium (OGC) Application Package best practice has redefined how Earth Observation (EO) applications are developed, packaged, and deployed. Designed for interoperability, portability, and scalability, the Application Package enables EO applications to integrate with diverse platforms and infrastructures, defining a strategy for scaling EO research into operational services. While development often begins with codebases in Python, Java, or other languages hosted across GitHub repositories, the real challenge emerges when transitioning from localized experimentation to scalable, operational workflows. This becomes even more apparent when a project is tied to grants or partnerships that support a specific cloud provider, highlighting the risk of platform dependency or of obtaining different results from alternative backends. The Best Practices for Earth Observation Application Packaging were defined by the Open Geospatial Consortium (OGC) and the EO Exploitation Platform Common Architecture (EOEPCA), spearheaded by the European Space Agency (ESA), to address this challenge by enabling EO algorithms to be expressed in a cloud-agnostic manner, decoupled from specific backends, languages, or service delivery models. Using the Common Workflow Language (CWL) open standard, the package acts as a blueprint for describing EO applications, detailing inputs, outputs, dependencies, and execution logic. By encapsulating these workflows within container images that include all required libraries, runtime environments, and configurations, developers can ensure their applications are portable, reproducible, and scalable across federated platforms with different service delivery models. This approach supports multiple execution scenarios, making the Application Package adaptable to a variety of operational contexts. Locally, it facilitates development and testing by providing a consistent runtime environment on a developer's machine. 
Within Kubernetes clusters, it allows for scalable execution of workflows across distributed nodes, ensuring high availability and performance. As a service, the Application Package enables deployment through OGC API Processes, allowing applications to be consumed programmatically via web APIs. This versatility ensures that EO applications can meet the demands of research, pre-operational demonstration, and full-scale operational deployment, regardless of the underlying infrastructure. The Geohazards Exploitation Platform (GEP) exemplifies this approach by hosting over 25 EO data processing services tailored to geohazard monitoring, including terrain motion analysis, landslide detection, and critical infrastructure monitoring. These services range from systematic workflows, such as generating interferometric deformation maps, to event-triggered processing for rapid response scenarios like earthquake damage assessments. GEP leverages CWL and containerized algorithms to ensure consistent execution across its federated cloud environment, enabling scalability and reproducibility. With over 3,200 registered users, including researchers, private companies, and public authorities, GEP supports a variety of data-driven applications, from data screening and area monitoring to the integration of multi-temporal data for long-term risk assessment. This platform demonstrates how cloud-agnostic workflows can enhance operational readiness and foster collaboration among diverse stakeholders. Similarly, the Urban Exploitation Platform (Urban TEP) enables sustainable urban management through ready-to-use geospatial analytics, leveraging cloud-agnostic workflows to integrate data from multiple sources. In the Copernicus Latin America and Caribbean (LAC) Platform, Application Packages ensure scalability in EO data ingestion, processing, and delivery to support disaster risk reduction and sustainable development. 
The Disaster Charter Processing Environment, developed for the International Charter 'Space and Major Disasters,' also benefits from this framework, enabling rapid deployment of EO algorithms for multi-mission data ingestion and geo-information extraction. The IRIDE Data Lake, part of one of the largest European space programmes currently in development, uses this approach to manage its service marketplace, ensuring seamless integration and processing across platforms. This presentation will explore best practices for packaging and executing algorithms across federated cloud environments and the added value brought by the OGC Application Package. By examining real-world examples, we will show how this framework enables scalable, interoperable, and cloud-agnostic solutions that are fully decoupled from specific backends, programming languages, or service delivery models. We will discuss how researchers, developers, service providers, and EO platform operators can leverage these practices to transition their algorithms from research prototypes to operational workflows while ensuring flexibility and avoiding platform dependency. This session aims to inspire the EO community to adopt these best practices, unlocking the full potential of federated cloud-based EO ecosystems.
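Since CWL documents can be authored in JSON as well as YAML, the blueprint role described above can be sketched in a few lines. The tool below (a GDAL conversion wrapped as a CWL CommandLineTool with a Docker hint) is purely illustrative: the command, container image tag, and bindings are assumptions, and a full EO Application Package would additionally wrap tools like this in a CWL Workflow.

```python
import json

# Illustrative only: a minimal CWL CommandLineTool expressed as JSON.
tool = {
    "cwlVersion": "v1.2",
    "class": "CommandLineTool",
    "baseCommand": ["gdal_translate"],
    # container hint makes the tool portable across backends
    "hints": {"DockerRequirement": {"dockerPull": "ghcr.io/osgeo/gdal:alpine-small-latest"}},
    "inputs": {
        "src": {"type": "File", "inputBinding": {"position": 1}},
        "dst_name": {"type": "string", "inputBinding": {"position": 2}},
    },
    "outputs": {
        "converted": {"type": "File", "outputBinding": {"glob": "$(inputs.dst_name)"}},
    },
}
cwl_json = json.dumps(tool, indent=2)  # ready to save alongside the codebase
```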

Thursday 26 June 11:30 - 13:00 (Hall L3)

Presentation: Facilitating the Transition to Scalable, Cloud-Based EO Services with APEx

Authors: Bram Janssen, Jeroen Dries, Evelyn Stynen, Pedro Goncalves, Martin Böttcher, Hannes Neuschmidt
Affiliations: VITO, Terradue Srl, Brockmann Consult GmbH
One of the primary objectives of APEx is to facilitate the transition of Earth Observation R&D results into reusable, scalable, cloud-based on-demand services. Currently, R&D projects generate a variety of results including, for example, EO-derived value-added products (raster or vector files, typically level-3 and level-4), workflows and algorithms, or conventional (open-source) desktop software solutions. The APEx algorithm support services focus on transforming and optimising (i.e., propagating) resulting algorithms, applications, workflows and toolboxes and ensuring their compliance with FAIR data principles. Transitioning workflows and algorithms to a FAIR-compliant, cloud-based environment requires the adoption of cloud service concepts and implementations. Therefore, APEx envisioned a solution for workflows and algorithms that would make them available in a public cloud (platform) in co-location with the key EO data archives as on-demand services using standardised web service APIs. Users can then trigger these web services with their user-specific parametrisation to produce results for an area and time frame of interest. Developing on-demand services, however, can be a complex task, often depending on the complexity of a workflow/algorithm and its initial implementation. APEx intends to facilitate the transition to cloud-based services by providing guidelines, best practices, and tools that support EO R&D projects in overcoming these challenges. The support provided by APEx involves various subcomponents, and their relevance is assessed on a case-by-case basis. 
The individual service algorithm propagation comprises:

  • Refactoring source code to an openEO process graph
  • Packaging of source code to an OGC Application Package
  • Onboarding and integration of a service into the APEx Algorithm Catalogue
  • Cloudification of desktop toolboxes through a tailored approach
  • Technical performance optimisation, both on the source code or service level
  • Deployment and hosting of a service in an existing EO platform
  • Onboarding of a service commercial offering onto the ESA Network of Resources
  • Preserving and ensuring long-term accessibility via the ESA Project Results Repository (PRR)

The main goal of the APEx algorithm services is to guide projects in defining their pathway and providing support throughout the various steps. This is accomplished through an initial analysis of the project's needs, resulting in a tailored pathway that assists in the transformation from source code to on-demand EO services. By simplifying this transition, APEx empowers projects to unlock the full potential of their research and contribute to the broader EO ecosystem.
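To give a flavour of the "refactoring source code to an openEO process graph" step, here is a minimal process graph expressed as plain JSON data (shown as a Python dict): nodes reference each other via "from_node", and exactly one node is flagged as the result. The collection id, extents, and node names are illustrative assumptions; actual onboarding would target the processes offered by the hosting backend.

```python
# Minimal, illustrative openEO process graph: NDVI over a small area.
process_graph = {
    "load1": {
        "process_id": "load_collection",
        "arguments": {
            "id": "SENTINEL2_L2A",
            "spatial_extent": {"west": 5.0, "south": 51.0, "east": 5.1, "north": 51.1},
            "temporal_extent": ["2024-06-01", "2024-06-30"],
            "bands": ["B04", "B08"],
        },
    },
    "ndvi1": {
        "process_id": "ndvi",
        "arguments": {"data": {"from_node": "load1"}},  # consumes the loaded cube
    },
    "save1": {
        "process_id": "save_result",
        "arguments": {"data": {"from_node": "ndvi1"}, "format": "GTiff"},
        "result": True,  # marks the graph's single result node
    },
}
result_nodes = [k for k, v in process_graph.items() if v.get("result")]
```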

Thursday 26 June 11:30 - 13:00 (Hall L3)

Presentation: Merging Sentinel Hub and openEO - the good, the bad and the ugly

Authors: Grega Milcinski, Dr. Matej Perse, Daniel Thiex, Jeroen Dries, Vincent Sarago
Affiliations: Sinergise, VITO, Development Seed
In the era we live in, the need for common standards and patterns for EO workflow execution is clear. With almost unlimited availability of satellite data at our fingertips, most of it freely accessible, and several operational data processing platforms in existence, the basic challenges of making use of sensor data have been solved. Yet the overall uptake of applications based on EO is still significantly lower than one would expect for a resource as valuable as remote sensing - systematic and objective monitoring of our planet. Where it typically breaks down is the extraction of relevant information from the data. There are plentiful R&D projects out there, running and finished, demonstrating what could be done. Yet the vast majority of the results are neither repeatable nor reusable. There are two main underlying reasons for that: 1) a significant part of the work is at an early prototype level, and therefore not actually useful for operational use, and 2) processing workflows are often complicated and tailored to a specific setup, making them cumbersome to transfer into a different environment. The former is not something technology can help with - it is a matter of funding policies. The latter, however, can be addressed with common standards and patterns, and a proactive effort to translate workflows to them. There are two rather related initiatives happening in the community right now - OGC EO Application Packages, and openEO. With the latter actively evolving and, importantly, having operational and widely used services (CDSE, Terrascope, and others), it presented a clear choice for us at Sinergise when we were deciding in which direction to push the Sentinel Hub platform. The process was, not unexpectedly, challenging - and still is. With Sentinel Hub being primarily used as a synchronous processing engine, the time it takes to do something matters. 
This goes both for the actual processing (encoding a request, execution, etc.) and for the remote sensing scientist scripting the algorithm. Often a user will open Copernicus Browser and type in a short JavaScript-powered script, which is then executed in half a second, almost in real time. How do we translate such an experience to openEO? We will be sharing our findings and experience gathered in the process of implementing support for openEO within Sentinel Hub. Some of the challenges we were able to solve, with the support of the openEO guardians. Some remain and would require further evolution of the standard/pattern. We hope that this discussion will spark some ideas on how to make EO science more reusable and repeatable.

Thursday 26 June 11:30 - 13:00 (Hall L3)

Presentation: From notebooks to EO application packages with xcube and xcengine

Authors: Pontus Lurcock, Tejas Morbagal Harish, Alicja Balfanz
Affiliations: Brockmann Consult
Software development and deployment in Earth Observation, as in many scientific and computational fields, generally takes place in one of two modes: batch or interactive. An interactive environment, such as a Jupyter notebook, is conducive to rapid prototyping and exploratory development and therefore particularly attractive to users without a strong background in software development, who can test out and iterate on their work with minimal delay and effort. Conversely, a batch (non-interactive) environment, such as the increasingly popular Earth Observation Application Package framework advanced by the OGC, offers strong advantages in reproducibility, automation, scalability, and integration into larger workflows and systems, albeit with higher demands on the developer. A common challenge along the Earth Observation Data Value Chain is to take an algorithm or workflow initially developed by an expert in an interactive environment and transform it into a module suitable for deployment in a batch environment and reuse as a reproducible, self-contained, modular workflow component. Frequently, a considerable amount of skilled manual effort by a software engineer is required to perform this transformation. Here we present xcengine, a new addition to the xcube ecosystem which helps to ease and automate the process of turning Jupyter scientific notebooks into modular, standardized EO Application Packages based on Python scripts and predefined environments encapsulated into Docker container images. Inputs are analysed automatically from the notebook and presented as Application Package parameters. The generated packages can be deployed to an Application Package platform and integrated into OGC API – Processes Web Service workflows. Moreover, the container images can also be used as stand-alone xcube server components, exposing the output of the notebooks via a selection of standardized APIs. 
The xcengine tools share some goals with existing tools such as papermill (which allows parameterization and batch execution of Jupyter notebooks) and repo2docker (which creates Docker container images from GitHub software repositories). We take these capabilities further, for instance by automatically analysing required input parameters and by generating full-fledged EO application packages. xcengine assists with code reusability and reproducibility, thus supporting the Agency’s mission to foster Open Science in its R&D activities, and helps to bridge the gap between algorithm development and deployment at scale on cloud infrastructure. More specifically, xcengine will allow researchers of the DeepESDL platform to conveniently publish their workflows and experiments to the Open Science Catalog developed by ESA’s EarthCODE initiative. From there, others may find and execute the workflow either on DeepESDL or any other environment supporting EO application packages. The tool leverages existing, popular technologies like Jupyter and the Python scientific stack, which is the most common choice among EO data users. In this way, it helps to further increase user uptake and reuse of already existing code. Importantly, all tools involved, and their dependencies, are open source, build on open standards, and may also be used on a user’s own infrastructure, avoiding any risk of vendor or platform lock-in. Our approach thus offers an attractive choice for researchers committed to Open Science who want to ensure reproducibility of their work, independent of any third-party provider or expert.
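One way to automate the input analysis described above - this is a hedged sketch of the general idea, not xcengine's actual implementation - is to read top-level assignments from a notebook cell tagged "parameters" (the papermill convention) and expose them as package parameters:

```python
import ast
import json

def extract_parameters(notebook_json: str) -> dict:
    """Collect literal assignments from code cells tagged 'parameters'."""
    nb = json.loads(notebook_json)
    params = {}
    for cell in nb.get("cells", []):
        if cell.get("cell_type") != "code":
            continue
        if "parameters" not in cell.get("metadata", {}).get("tags", []):
            continue
        tree = ast.parse("".join(cell["source"]))
        for node in tree.body:
            if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
                params[node.targets[0].id] = ast.literal_eval(node.value)
    return params

# Tiny hand-built notebook fragment (the .ipynb format is itself JSON).
nb_text = json.dumps({"cells": [{
    "cell_type": "code",
    "metadata": {"tags": ["parameters"]},
    "source": ["bbox = [5.0, 51.0, 5.1, 51.1]\n", "start_date = '2024-06-01'\n"],
}]})
params = extract_parameters(nb_text)
# params -> {'bbox': [5.0, 51.0, 5.1, 51.1], 'start_date': '2024-06-01'}
```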

Thursday 26 June 11:30 - 13:00 (Hall L3)

Presentation: Standardizing Earth Observation Workflows With Nextflow and Nf-core

Authors: Felix Kummer, Dr. rer. nat David Frantz, Fabian Lehmann, Dr. Katarzyna Ewa Lewińska, Patrick Hostert, Prof. Dr. rer. nat. Ulf Leser
Affiliations: Humboldt-Universität zu Berlin, Trier University, University of Wisconsin-Madison
Scientific workflows have become a popular tool for analyzing Earth Observation data. The design and implementation of such workflows can vary widely depending on the technology and hardware infrastructure used. Scientific workflows can be as simple as a Bash script executing a pipeline of dependent analysis steps on a single machine, but scientists can also exploit sophisticated scientific workflow languages to define workflows with complex data dependencies that integrate multiple geographically distributed compute sites. Numerous approaches lie between, each with varying levels of application and infrastructure specificity. This abundance of possible systems leads to frequently incompatible, unscalable, and unreproducible implementations of scientific workflows for processing Earth Observation data. Motivated by the need for rapid results, implementations of scientific workflows are often ‘quick and dirty’, focused on one-time processing of large quantities of data. However, designing for only one specific setup limits reusability beyond the intended application. Standardization, understood as an agreement on a set of rules, is a well-known strategy to ensure homogeneity and compatibility of solutions within a community. In the context of Earth Observation data analysis, such a standard could govern the choice and use of workflow languages. Standardization rules, when formulated properly and accepted and implemented widely, can increase the availability of harmonized, reusable, understandable, and reproducible workflows. Furthermore, a standardized approach alleviates the challenge of navigating the vast array of available solutions, allowing researchers to quickly assemble new workflows from existing standardized components. However, achieving a universally accepted standard requires broad community efforts and active contributions. One established standard for scientific workflows is nf-core, which is based on Nextflow. 
Although it has gained substantial attention in other scientific domains, nf-core remains largely unknown within the Earth Observation community. Nextflow is a scientific workflow management system that offers a specification language for scientific workflows alongside an execution engine. It is domain-agnostic, open-source, free to use, adheres to FAIR principles, and is used by over 10,000 researchers worldwide. Nextflow's execution engine supports over 15 execution environments, including local execution on a single server, various cloud providers (e.g., AWS, Azure, or Google Cloud), and resource managers for cluster and HPC infrastructures (e.g., Kubernetes or SLURM). Containerized task execution is enabled through the integration with various container runtimes such as Docker, Singularity, and Podman. The specification language offers great flexibility, supports implicit parallelism, and is independent of the specific computational infrastructure. To rein in this flexibility, the nf-core standard for scientific workflows imposes a strict set of rules, representing a collection of best practices, covering every aspect of a workflow's specification, configuration and documentation. While specification- and documentation-related rules target the pre-execution state of workflows, configuration rules focus on controlling the actual execution. The nf-core standard for developing scientific workflows emerged in bioinformatics. As a result, certain elements of the standard are specific to this domain. However, the nf-core community is currently expanding into other domains, including Earth Observation. To support this expansion and foster standardization of scientific workflows within the Earth Observation community, we present our involvement in the nf-core and Nextflow ecosystem. To demonstrate the applicability of nf-core to Earth Observation, we present a workflow for detecting rangeland degradation that fully adheres to the nf-core standard. 
The workflow is based on dense time series of Landsat and Sentinel-2 acquisitions and comprises:
- Preprocessing Level-1 data into Level-2 Analysis Ready Data, incorporating atmospheric, topographic, and BRDF corrections, as well as detecting and masking clouds, cloud shadows, and low-quality pixels.
- Performing Spectral Mixture Analysis on Level-2 data to quantify ground cover fractions in each available scene.
- Interpolating pixel-specific time series and deriving Land Surface Phenology metrics.
- Calculating long-term trends and generating maps that show changes in vegetation components.

The core processing steps are complemented by auxiliary processes, such as restricting the analysis area and generating visualizations. All processes are executed as invocations of various modules within the Framework for Operational Radiometric Correction for Environmental monitoring (FORCE). We utilize FORCE's data cube concept, which applies a common tiling scheme to each satellite acquisition, to divide these steps into atomic tasks, with each task processing one FORCE tile at a time. This allows Nextflow to adapt task parallelization to different hardware setups, such as distributing a single processing step across multiple compute nodes. Furthermore, with standardized task design, each processing step is individually configurable, limited to a single responsibility, and reusable in other pipelines. The standardization is further adopted across entire sub-workflows, configuration files, test profiles, and extensive documentation. Importantly, our configuration files, adhering to nf-core guidelines, strictly separate hardware- and application-specific parameters from the workflow's specification. Consequently, the workflow can be seamlessly executed with different input data and execution environments by either passing an alternative configuration file to the execution engine or selecting a predefined profile. 
To further integrate our workflow into the nf-core ecosystem, we implemented a plugin for nf-core's testing framework that automatically compares GeoTIFF files with reference data. This plugin enables workflow developers to easily create tests at both the workflow and task levels for any such component that generates GeoTIFF files. By releasing our standardized nf-core workflow, we extend the Nextflow and nf-core ecosystem to new modalities and establish a foundation for a potential community-wide standard in the development of scientific Earth Observation workflows. As the next step, we propose integrating more Earth Observation workflows into nf-core. While developing nf-core workflows requires considerable effort, it results in reusable, scalable, portable, and well-documented workflows and their components. Furthermore, the increased availability of best-practice components accelerates the process of creating new workflows. By expanding the repository of Earth Observation workflows within nf-core, the Earth Observation community will benefit from a robust, accessible, and standardized approach to data analysis. This will empower researchers to deploy high-quality workflows and foster collaboration across diverse research domains, driving innovation and improving the reproducibility of scientific results.
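The tile-atomic task design described above can be sketched outside Nextflow as well: once each task touches exactly one tile, any executor can parallelise the map step freely. The tile naming scheme and the per-tile stand-in function below are invented for illustration; in the real workflow, each task invokes a FORCE module on one tile.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for one FORCE module invocation on one tile of the common grid.
def process_tile(tile_id: str) -> str:
    return f"{tile_id}: done"

# Four tiles of a hypothetical X/Y tiling scheme.
tiles = [f"X{x:04d}_Y{y:04d}" for x in range(2) for y in range(2)]

# Because each task is atomic (one tile, one responsibility), the map step
# parallelises trivially; Nextflow applies the same idea across compute nodes.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_tile, tiles))
```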

Thursday 26 June 11:30 - 13:00 (Room 1.14)

Session: C.04.01 - MetOp-SG mission

The status of development of ESA missions will be outlined in four sessions of 90 minutes each (together making up a full day).
Participants will have the unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and the status of mission development activities will be presented together with industrial and science partners.

MetOp-SG mission


EPS-SG/MetOp-SG overview


  • Giordano Bruni – EUMETSAT

MetOp-SG instruments overview #1 (Satellites, 3MI, SCA, RO)


  • Luca Salghetti – ESA

MetOp-SG instruments overview #2 (microwave radiometers: MWS, MWI, ICI)


  • Luca Salghetti – ESA

METimage instrument


  • Isabel Zerfowski – DLR

IASI-NG instrument


  • Adrien Deschamps – CNES

Sentinel-5: the atmospheric quality mission on MetOp-SG


  • Stefano Mattia – ESA

Thursday 26 June 11:30 - 13:00 (Room 1.34)

Session: D.05.01 Showcasing EO Data and Information Services

The objective of this session is to introduce end-users to various EO data and information services, with the goal of increasing user engagement. These services, which can be accessed through online portals, provide significant support to EO user communities worldwide.

In our digitally driven society, the visual appeal of a website plays a crucial role in facilitating user navigation, information retrieval and data discovery. These portals can offer valuable information for end-users, including:
- Wiki-style repositories containing comprehensive and up-to-date information presented in an easily digestible format suitable for a diverse audience.
- Real-life stories from end-users illustrating how their use of data has contributed to society.
- Real-time dashboards providing insights into data availability and statistics.
- Updates on maintenance information affecting data availability and relevant news.
- 3D visualizations showcasing popular EO datasets.
- User feedback tools for collecting ideas for future enhancements to the portal.

Thursday 26 June 11:30 - 13:00 (Room 1.34)

Presentation: The CCI Open Data Portal: Evolution and future plans after 10 years of operations

#zarr #kerchunk

Authors: Alison Waterfall, Emily Anderson, Rhys Evans, Ellie Fisher, Philip Kershaw, Diane Knappett, Federica Moscato, Matthew Paice, Eduardo Pechorro, David Poulter, William Tucker, Daniel Westwood, Antony Wilson
Affiliations: Centre for Environmental Data Analysis, RALSpace, STFC, European Space Agency
The CCI Open Data Portal has been developed as part of the European Space Agency (ESA) Climate Change Initiative (CCI) programme, to provide a central point of access to the wealth of data produced across the CCI programme. It is an open-access portal for data discovery, which supports faceted search and multiple download routes for all the key CCI datasets and can be accessed at https://climate.esa.int/data. The CCI Open Data Portal has been operating since 2015, and during this time the project has gone through several evolutions in terms of the technologies used and the challenges faced by the portal. In this presentation we will describe the current CCI portal, its future plans and the lessons learnt from 10 years of operations. Since its inception in 2015, the CCI Open Data Portal has provided access to nearly 600 datasets. It consists of a front-end access route for data discovery, comprising: a CCI dashboard, which shows at a glance the breadth of CCI products available and can be drilled into to select the appropriate datasets; and a faceted search option, which allows users to search for data over a wider range of characteristics. These are supported at the back end by a range of services provided by the Centre for Environmental Data Analysis (CEDA), which includes the data storage and archival, catalogue and search services, and download servers supporting multiple access routes (FTP, HTTP, OPeNDAP, OGC WMS and WCS). Direct access to the discovery metadata is also publicly available and can be used by downstream tools to build other interfaces on top of these components, e.g. the CCI Toolbox uses the search and OPeNDAP access services to include direct access to data. 
A key challenge in the operation of the CCI Open Data Portal comes from the heterogeneity of the different datasets that are produced across the Climate Change Initiative programme, with different scientific areas and different user communities all having differing needs in terms of the format and types of data produced. To this end, the work of the CCI Open Data Portal also includes maintaining the CCI data standards. These standards aim to provide a common format for the data but necessarily still leave considerable breadth in the types of data produced. This creates challenges in providing harmonised search and access services, and solutions have been developed to ensure that every dataset can still be fully integrated into our faceted search services. Technologically, the CCI Open Data Portal currently combines search and data cataloguing using OpenSearch with a data serving capacity using Nginx and THREDDS, and utilises containers and Kubernetes to provide a scalable data service. These are currently hosted on the academic JASMIN infrastructure in the UK, but for the future, we are exploring a hybrid model whereby some of the functionality will be moved or duplicated to an external cloud provider for increased resilience, whilst still retaining the flexibility and cost benefits of primarily hosting data on a local infrastructure. Over the 10 years of operations of the CCI Open Data Portal, one key evolution relates to the ways in which people prefer to access data. Whilst the original data products are mostly in NetCDF, which is still a popular format, there is an increasing need to provide data in cloud-ready formats. Over the last few years, work has been carried out in conjunction with the CCI Toolbox to provide cloud-ready versions of many of the datasets, formatted as Zarr and Kerchunk, giving more performant access to the data for cloud-based activities. 
In the current phase of the CCI Open Data Portal, it is also planned to integrate some of the CCI datasets into other data ecosystems, thereby increasing the reach of the CCI data products and making them accessible to a wider audience. These products will also be made accessible for users accessing the data via the Open Data Portal.
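To illustrate the Kerchunk approach mentioned above: a reference file maps each Zarr chunk key to a byte range inside the original NetCDF, so cloud clients get Zarr-style access without duplicating the archive. The URL, byte offsets, and the "sst" variable below are invented; this is a sketch of version 1 of the reference specification, not an actual CCI product.

```python
import json

# Sketch of a Kerchunk reference file (reference spec version 1).
refs = {
    "version": 1,
    "refs": {
        # inline Zarr metadata, stored as JSON strings
        ".zgroup": json.dumps({"zarr_format": 2}),
        "sst/.zarray": json.dumps({
            "shape": [180, 360], "chunks": [90, 180], "dtype": "<f4",
            "compressor": None, "fill_value": None, "filters": None,
            "order": "C", "zarr_format": 2,
        }),
        # chunk (0, 0) is served directly from a byte range of the NetCDF file
        "sst/0.0": ["https://example.org/cci/sst.nc", 54321, 64800],
    },
}
reference_file = json.dumps(refs, indent=2)  # what a client would open via fsspec
```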

Thursday 26 June 11:30 - 13:00 (Room 1.34)

Presentation: Sentinel Online: Copernicus technical portal

Authors: Chris Mortimore, Petar Uzunov
Affiliations: Airbus, Wiser Tech
Sentinel Online (https://sentinels.copernicus.eu) is the official web portal providing comprehensive information on the Sentinel satellite missions, a key component of the European Union's Copernicus Programme. Developed and maintained by the European Space Agency (ESA), the platform serves as a central hub for technical documentation, mission status updates, and data access related to the Sentinel fleet. These satellites deliver critical Earth observation data for environmental monitoring, climate research, and emergency response. In addition to mission details, Sentinel Online provides important information on events related to data distribution and announcements on major new releases. The portal also includes SentiVista, a tool that offers visual examples of satellite data and showcases success stories illustrating how Copernicus data is applied in various fields. Furthermore, SentiWiki, structured like Wikipedia, allows users to explore and contribute to mission-related knowledge, while SentiBot, an AI-powered assistant, helps users efficiently search for and retrieve relevant information. Designed for accessibility, Sentinel Online is a key resource for all users of Copernicus data, from researchers and policymakers to industry professionals and the general public. With over 300,000 visits per year, the platform facilitates data access and promotes real-world applications, ensuring transparent and efficient dissemination of Sentinel mission data to support global efforts in sustainable development and disaster management.

Thursday 26 June 11:30 - 13:00 (Room 1.34)

Presentation: The Power of Storytelling in Science: Engaging EO Data Discovery

Authors: Pawel Filipkowski, Daniel Biskupski
Affiliations: Eversis
Earth Observation (EO) data has the potential to address global challenges, from monitoring climate change to supporting sustainable development. However, making this data meaningful and accessible to end-users remains a significant challenge, especially in a digital age where attention spans are short and social media compete for engagement. In this presentation, we will explore how digital storytelling can make EO data more accessible and engaging. By combining clear narratives with practical tools, we aim to make complex information easier to understand and use. The audience will learn how storytelling, combined with graphic design and web traffic analytics, can turn static information into compelling journeys that inspire action. Behind these innovations lies a real challenge: bridging the gap between vast datasets and solving real-world problems. From scientists to policymakers, different audiences require tools and features tailored to their needs. Over the years, Eversis has gathered feedback from users and studied their behaviours to develop expertise in designing user-friendly and effective digital platforms. Our work with the European Space Agency (ESA) focuses on creating solutions that improve how users discover, navigate, and utilise EO data. By analysing website traffic and redefining the user journey, we have developed solutions such as digital storytelling and interactive visualisations to enhance user engagement. Our collaboration with ESA has helped improve platforms like Earth Online, which offer enhanced ways for users to explore EO data and understand its impact. Through storytelling-driven design, we have boosted user engagement and increased data accessibility. While we are proud of these achievements, there is still much to learn and improve. This presentation will share key insights and lessons learned along the way. This session will provide practical ideas on how storytelling and thoughtful design can help make EO data more accessible. 
We hope to inspire others to think creatively about how they present data and to improve their own platforms.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.34)

Presentation: Data Sharing Infrastructures to Bring EO-Powered Intelligence to a Wider Audience

#stac

Authors: Liz Scott
Affiliations: Satellite Applications Catapult
The UK’s National Cyber Physical Infrastructure Programme brings together government, industry and its network of Catapults to share best practice on data sharing and the minimisation of data silos. Cross-sector digital twin demonstration projects such as the Climate Resilience Demonstrator (CReDo) from Connected Places Catapult are already planning their production phases, but the inclusion of data streams from Earth Observation (EO) sources remains outside of such projects. Despite recent advancements in the interoperability and reusability of EO data thanks to cataloguing technologies such as STAC, access to EO data continues to be a blocker to its wider adoption. For the data science and analysis needed to address many global environmental challenges, it must be possible for a wider audience of scientists, analysts and policymakers to get easier access to a wider range of data sources that already exist, and this requires multiple stakeholders to collaborate on the data engineering to make it happen. The Earth Observation Datahub is a UK-built data infrastructure project at the forefront of digital transformation for the use of UK academia, government and industry. Research from the project’s End User and Stakeholder Forum showed that even for those working in the EO sector there were barriers to wider usage, due to disparate data sources and processing capability a long way from the data. Furthermore, for data scientists and engineers on the periphery of the EO sector, the learning curve to EO data exploitation has been too great for many to make a start. The EO Datahub has been built to address these problems and create a federated ecosystem of sources – both commercial and open – processing pipelines and end-user-ready applications to enable the beginnings of a wider ecosystem of EO usage.
By utilising STAC metadata, open-source software components and containerised processing, the system presents the potential for data sources to be more easily discovered and for derived data products to be created. Both can then be exploited using coding tools such as a custom Python toolkit, as well as no-code/low-code user interfaces. The containerisation of data processing allows data processing jobs to scale. With all code open source, the components have been designed to be portable should there be a requirement in the future to scale further, including across multiple public cloud offerings. With such a federated system, incomers from outside the EO sector gain a foot in the door to creating systems that exploit satellite data sources without the substantial overhead of data management and an entire end-to-end processing chain. Data product development for commercial applications can potentially become a step easier, and feeding near-real-time EO data into digital twins that currently have no input from space becomes a possibility. This presentation will explain the architecture used and the data flow from incoming data streams through the hub platform to the applications, with a discussion of the end-user applications being trialled in the pilot phase and the potential for future advancements that can feed cross-sector digital twins.
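The discovery workflow that STAC metadata enables can be illustrated with a short sketch. This is not the EO Datahub's actual Python toolkit (whose API is not described in the abstract); it is a self-contained illustration of how STAC-style items, each carrying a `bbox` and a `properties.datetime` field, can be filtered spatially and temporally. The item ids and coordinates are invented for the example.

```python
from datetime import datetime

def bbox_intersects(a, b):
    """True if two [min_lon, min_lat, max_lon, max_lat] boxes overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def search_items(items, bbox=None, start=None, end=None):
    """Filter a list of STAC-style item dicts by bounding box and time range."""
    results = []
    for item in items:
        when = datetime.fromisoformat(item["properties"]["datetime"])
        if bbox and not bbox_intersects(item["bbox"], bbox):
            continue
        if start and when < start:
            continue
        if end and when > end:
            continue
        results.append(item)
    return results

# Two toy items in STAC layout: an id, a bbox, and a datetime property.
catalog = [
    {"id": "S2A_T30UXB_20240101", "bbox": [-3.0, 51.0, -1.5, 52.0],
     "properties": {"datetime": "2024-01-01T11:00:00"}},
    {"id": "S2A_T30UXB_20240615", "bbox": [-3.0, 51.0, -1.5, 52.0],
     "properties": {"datetime": "2024-06-15T11:05:00"}},
]

hits = search_items(catalog, bbox=[-2.5, 51.2, -2.0, 51.8],
                    start=datetime(2024, 6, 1), end=datetime(2024, 6, 30))
print([i["id"] for i in hits])  # only the June acquisition matches
```

In a real deployment the same query shape would be sent to a STAC API endpoint rather than evaluated over an in-memory list, but the filtering semantics are the same.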
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.34)

Presentation: Space Girls

Authors: Mrs Bogdana Tsonevska
Affiliations: ESA
Innovation in space-related fields relies on data from space, and diversity is crucial for driving innovation. However, only 25% of STEM students today are female, largely due to stereotypes. Challenging these biases is essential to achieving gender balance and fostering a more inclusive industry. Space Girls is an initiative dedicated to promoting diversity in STEM by inspiring and encouraging girls to pursue studies and careers in space-related fields. Acknowledging the gender gap in science, technology, engineering, and mathematics (STEM), Space Girls provides educational resources and engaging content designed to spark curiosity and empower young women to explore opportunities in the space sector. A key aspect of the initiative is showcasing women working in the space sector as role models, inspiring future generations to follow in their footsteps. Through interactive workshops, success stories, and collaborations with leading space organizations, Space Girls highlights the achievements of female professionals, demonstrating that a career in space is attainable for everyone. The platform serves as a valuable resource for students, educators, and parents, offering insights into space careers, scholarships, and networking opportunities. By advocating for gender diversity and providing real-life examples of women excelling in the field, Space Girls contributes to a more balanced and innovative future in space exploration and STEM. Ensuring equal opportunities in the space sector is essential for building a society founded on respect, scientific advancement, and inclusivity.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.34)

Presentation: Dunia: an all-in-one processing and dissemination platform for EO data over Africa

Authors: Johannes Schmid
Affiliations: GeoVille Information Systems and Data Processing GmbH
Dunia stands as a comprehensive platform designed for Africans, welcoming both beginners and experts in Earth observation. Focused exclusively on the African continent, it empowers users to discover, build, and exchange valuable geospatial insights across Africa. Those interested can start with a free trial, diving into a web map browser and an innovative streaming solution that highlights the immense potential of Earth observation data. As users engage with the platform, they find a dynamic development environment where they can create tailored solutions for large-scale data processing. This capability fosters creativity and innovation, allowing individuals to transform data into impactful applications. At the same time, Dunia encourages collaboration within the African Earth observation community, offering a vibrant marketplace for data and applications. Discover, build and exchange for yourself at https://dunia.esa.int.

Streaming
Dunia’s innovative streaming solution revolutionizes the compression and access of raster data, such as Sentinel mission archives, optimizing it for low-bandwidth networks. By achieving up to tenfold compression with minimal quality loss (around 1%), this approach transforms satellite imagery into bitstream formats suitable for seamless viewing in standard browsers on various devices, leveraging existing digital infrastructure in Africa. The core methodology employs state-of-the-art video codecs to create a streamlined data access system, combining API integration, HTML5 video widgets, and mobile web applications. This allows for efficient dissemination of large datasets, harnessing the advancements seen in platforms like YouTube and Netflix. The system uses a block-based encoding strategy, facilitating high compression through inter-frame differences while retaining the ability to decode frames individually or in sequence. Dunia's user-friendly web application offers an interactive map viewer, enabling users to explore time-series imagery easily and convert data into georeferenced GeoTIFFs for GIS analysis. The responsive design ensures compatibility across mobile and desktop platforms. Rigorous quality assessments confirm that the compressed data maintains fidelity, achieving an average pixel value deviation of less than 0.4%, thus significantly reducing archive sizes while enhancing accessibility. For more details, visit https://dunia.esa.int/streaming-landing.

EO Marketplace
Dunia's marketplace is a dynamic platform designed for buying and selling Earth observation-based products and applications, benefiting individuals and startups as well as established institutes and enterprises. By fostering a collaborative environment, Dunia supports innovation and growth within the geospatial industry, minimizing redundant products and applications while increasing the potential for efficient method advancements. Currently, Dunia is developing a cutting-edge billing system leveraging Africa's fintech landscape to facilitate seamless cross-country transactions. This initiative aims to simplify collections and disbursements, enabling users to transact effortlessly across borders. As a result, the platform is positioned to become a leading hub for geospatial commerce, bridging gaps between data and service providers and consumers while fostering sustainable growth in the African space industry. Discover the marketplace for yourself at https://dunia.esa.int/marketplace.

Trainings and Collaborations
The user-friendly Dunia platform meets all customer needs by providing a wide range of datasets and robust software tools on a powerful infrastructure. This makes it the ideal choice for webinars and training events aimed at educating users about Earth observation data. For instance, the platform showcases real-life applications, such as flood monitoring, highlighting its practical use in addressing critical challenges and enhancing the understanding of geospatial data. Capacity building and mentorship are significantly strengthened through comprehensive collaborations with the African Union Commission, GMES & Africa, national space agencies, and other leading institutes in the space industry. In just one year, Dunia has attracted nearly 1,000 users. With plans for more training and outreach events focused on real-life examples, the name “Dunia,” which means “World” in Swahili, is set to gain recognition and expand its audience even further. See Dunia’s upcoming trainings at https://wiki.dunia.esa.int/getting-started/training.
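The inter-frame difference idea behind the streaming solution can be sketched in miniature. This is not Dunia's actual codec (which uses state-of-the-art video codecs); it is a toy illustration, assuming flat integer "frames", of why storing only the pixels that change between acquisitions compresses slowly evolving imagery while still allowing any frame to be reconstructed.

```python
def encode(frames):
    """Inter-frame encoding: keep the first frame, then only pixel deltas."""
    key = frames[0]
    diffs = []
    for prev, cur in zip(frames, frames[1:]):
        # Store only the positions whose value changed since the last frame.
        diffs.append({i: c for i, (p, c) in enumerate(zip(prev, cur)) if p != c})
    return key, diffs

def decode(key, diffs, n):
    """Reconstruct frame n by replaying deltas onto the key frame."""
    frame = list(key)
    for d in diffs[:n]:
        for i, v in d.items():
            frame[i] = v
    return frame

# Three 6-pixel "frames"; only a few pixels change between acquisitions,
# so each diff stores far less data than a full frame.
frames = [
    [10, 10, 10, 20, 20, 20],
    [10, 11, 10, 20, 20, 20],
    [10, 11, 10, 20, 25, 20],
]
key, diffs = encode(frames)
print(decode(key, diffs, 2))  # reconstructs the last frame exactly
```

Real video codecs add motion compensation, quantisation and entropy coding on top of this, which is where lossy-but-small compression ratios like the tenfold figure quoted above come from.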
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 0.11/0.12)

Session: E.05.02 Opportunities in the Earth Observation Market: A Focus on GHG Monitoring

Driven by evolving climate regulations – such as the EU ETS and CBAM – there is a growing demand for accurate, timely and transparent GHG monitoring, especially for CO₂ and methane. While the objectives of the policies are already established, their implementation raises practical questions. In this context, several private Earth Observation (EO) companies are proactively developing services that anticipate compliance needs, even before EO data is formally integrated into regulation.

This invited session explores the commercial opportunities emerging from the intersection of EO technology and climate policy. It will feature a brief institutional framing of the regulatory landscape, followed by a panel discussion with leading EO companies. The commercial players will share their approaches and challenges in addressing global and sector-specific user needs.

Moderators:


  • Albin Lacroix - ESA

Speakers:


  • Yasjka Meijer - ESA expert
  • Dan Wicks - Managing Director UK, GHGSat
  • Julian Akani Guery - Methane Lead Scientist, Kayrros
  • Keely Roth - Lead Hyperspectral Scientist, Planet
  • Hervé Hamy - Cofounder & President, QAIrbon
  • Koen Meilink - Business Manager Environment and Sustainability, S&T
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall L1/L2)

Session: B.01.03 Transforming Global Analysis Ready Earth Observation Data into Actionable Information to Drive Local Climate and Environmental Actions via Co-creation - PART 2

Earth Observation (EO) from satellites plays a key role in observing changes in climate and environment globally and locally (e.g. climate change, environmental changes and hazards, biodiversity, humanitarian crises, etc.). Currently, a wide variety of key climate and environmental variables (e.g. temperature, precipitation, greenhouse gases, air pollutants, surface properties, land use changes, fires, forests, snow/ice, ocean colour, biomass, etc.) are collected and provided as Analysis-Ready Data (ARD) to users. While satellite-based ARD has global utility by nature, it is not necessarily fully tailored to particular local issues, challenges and needs.
Ultimately, EO data should be turned into equitable and actionable solutions for addressing local environmental and societal challenges and delivered where they are needed most. In order to fully unlock the potential of global ARD, the ARD should be further transformed into Action-Ready Information (ARI) with greater information granularity via co-creation with local collaborators and stakeholders. For example, key climate and environmental variables are also being collected locally using various observation platforms on the ground, on the water and in the air. Such data can be used for regular evaluation and validation of satellite EO, but also to create customized, value-added ARI tailored to local needs. Identifying the potential gaps between ARD and local needs is also critically important, and those gaps need to be mitigated and ultimately closed. In this regard, the involvement of local stakeholders in the process is extremely important. Despite the proven power of community science, local projects often struggle to remain self-sustaining due to financial challenges. In addition to research, we need economically viable and/or politically feasible approaches and solutions to lead us to the local goals. Via co-creation, using ARI to address local challenges should also contribute to addressing global climate and environmental issues.

This session highlights, but is not limited to, EO (in the broader sense) data applications for addressing local climate and environmental issues/challenges and local mitigation actions, and discusses the gaps and ways to enhance the use of EO to address them with local communities. This session invites not only satellite EO data providers and their users at various stages/levels, but also broader participants: for example, people who collect/analyze EO data and/or use it locally, engineers/companies who develop EO technologies and wish to scale up, and researchers/practitioners who seek potential synergy/combined use of global satellite EO and local EO. We also invite broader stakeholders from the policy side and private sectors who would like to discuss potential approaches to using EO for their climate mitigation monitoring and sustainable development, from a social science, policy or business perspective, to obtain maximum return for local communities. With input from many unique local cases, we expect to synthesize the input and co-create global knowledge to guide us towards a sustainable future by further enhancing our ability to monitor and address the global-but-local challenges that we face.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Integration of geospatial tools in land-use planning for the REDD+ process in the Republic of Congo.

Authors: Joseph Mangouende, Denis Sonwa, Roger Mambata, Kendie Kenmoe
Affiliations: World Resources Institute
Efficient and sustainable management of territorial resources, whether economic, social or environmental, requires land use planning, which is central to a country's overall development process. Land use planning in the Republic of Congo is partly defined by the Sustainable Land Use Program (PUDT). The PUDT contains mechanisms for the sustainable management of land uses and allocations, drawing on agricultural, mining, peatland and forest data, as part of the REDD+ (Reducing Emissions from Deforestation and Degradation) dynamic in the Republic of Congo. The PUDT intends to produce planning tools such as the National Land Allocation Plan and the National Land Use Plan for better management of the Congolese territory. For these planning tools to be effective and efficient, the use of geospatial tools such as remote sensing and Geographic Information Systems (GIS) is essential. The overall objective is to set up a dynamic database management system for the land use and land cover maps produced in the Republic of Congo. Achieving this specifically involves identifying the various geospatial data available within the framework of the PUDT, building the capacity of the actors responsible for the various mandates, and encouraging the sharing of geospatial data. The methodology is based on setting up a land-use planning portal that will provide access to all geospatial data on land use in the Republic of Congo. This online portal will be based on WRI's “MapBuilder” model, which couples contextual vector layers (e.g. on mines, hydrocarbons, forest concessions and agricultural concessions) with high-resolution satellite data such as ESA's Sentinel-1 and Sentinel-2 to track the dynamics of different landscapes as part of the REDD+ process, which is an integral part of the Sustainable Land Use Program.
The results produced through this land use portal will include:
  • The sharing of cartographic analyses of overlapping land uses;
  • Land use maps based on the classification of Sentinel-1 and Sentinel-2 satellite images, to identify available and unavailable land uses;
  • Shared statistics on areas dedicated to REDD+ projects, extracted from Sentinel-1 and Sentinel-2 satellite data;
  • An overview of the different land uses in the Republic of Congo;
  • Guidance for decisions on land-use conflicts.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Atmospheric Water Harvesting (AWH) Optimization from Satellite: AI on Meteosat Third Generation (MTG) Data to Detect Optimal AWH Conditions Accessible by UAVs in Non-Precipitating Clouds

Authors: Dario Cappelli, Professor Fabio Del Frate
Affiliations: Tor Vergata, University of Rome
Water scarcity is a major global challenge, with 2.3 billion people living in water-stressed countries and the potential for 700 million people to be displaced by 2030 due to water scarcity. The increasing effects of climate change and overpopulation are exacerbating the problem, particularly in arid and remote regions (Tashtoush et al., 2023). Atmospheric Water Harvesting (AWH) represents a promising strategy to address water scarcity, especially in arid regions where traditional water resources are limited. Recent developments in satellite technology and weather forecasting offer unprecedented opportunities to optimize AWH techniques. This research proposes an innovative approach based on the integration of satellite data from the EUMETSAT Meteosat Third Generation (MTG) mission with weather forecasts, in order to determine intra-cloud (IC) areas with optimal AWH conditions accessible by Unmanned Aerial Vehicles (UAVs) in non-precipitating clouds. For this first phase of the research, a Neural Network (NN) was implemented to determine cloud height from MTG radiances, which are distributed with a temporal resolution of 10 minutes. Target data were provided by the Sentinel-5P CLOUD product, downloaded through its Application Programming Interface (API). A filtering criterion based on European regulations on the use of UAVs was applied, limiting operational altitudes to less than 120 metres above ground level. Subsequently, the cloud data were integrated with the nowcasting forecast service from the European Centre for Medium-Range Weather Forecasts (ECMWF) to obtain a detailed view of atmospheric conditions: e.g. humidity, temperature, wind speed, etc. The outcome was further refined by keeping only relative humidity (RH) values above 60%. This threshold is based on recent studies demonstrating the ability of MXene aerogel materials to capture water vapor under comparable humidity conditions (Zhou et al., 2023).
Statistical analyses allowed us to evaluate the correlation between RH and the other IC features. Furthermore, to visualize cloud properties, the Cloud Phase RGB image obtained by combining specific MTG radiances was used, providing a detailed assessment of cloud characteristics and their relevance for AWH. It was useful to view it in conjunction with the Day Microphysics RGB and IR10.8 images, also provided by EUMETSAT. The integrated use of satellite and meteorological data has proven to be an effective approach to identifying optimal areas and periods for AWH, especially in arid climate contexts: IC areas with higher water yield potential were identified and categorized. This research paves the way for further studies aimed at improving the accuracy and effectiveness of AWH technologies. The next phase of the project involves the development of a comprehensive dataset that will use in-situ data collected by UAVs to directly measure the amount of water retrieved from IC regions, with the help of NNs trained on MTG data. Soon-to-be-released data from ESA EarthCARE and the MTG Infrared Sounder will also help make the dataset more robust. Furthermore, this research addresses several specific goals of the UN 2030 Agenda for Sustainable Development, particularly Sustainable Development Goal 6 (SDG 6), which aims to “ensure availability and sustainable management of water and sanitation for all”.
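The two filtering criteria described in the abstract (operational altitude under 120 m above ground level per EU UAV regulations, and relative humidity above 60% per the MXene aerogel studies) amount to a simple selection step over candidate cloud cells. The record structure and field names below are hypothetical illustrations, not the authors' actual data model.

```python
def awh_candidates(cells, max_alt_m=120.0, min_rh=60.0):
    """Keep cells reachable by UAV (EU altitude limit) whose relative
    humidity exceeds the water-capture threshold reported for MXene aerogels."""
    return [c for c in cells
            if c["alt_agl_m"] < max_alt_m and c["rh_pct"] > min_rh]

# Hypothetical cloud-cell records: altitude above ground level (m) and
# relative humidity (%), as might be derived from MTG radiances + ECMWF data.
cells = [
    {"id": "A", "alt_agl_m": 95.0,  "rh_pct": 72.0},   # reachable and humid
    {"id": "B", "alt_agl_m": 180.0, "rh_pct": 85.0},   # too high for a UAV
    {"id": "C", "alt_agl_m": 60.0,  "rh_pct": 48.0},   # too dry to harvest
]
print([c["id"] for c in awh_candidates(cells)])  # only "A" passes both filters
```

Keeping the thresholds as parameters makes it easy to adapt the selection if regulations or material capture thresholds change.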
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Tailored Digital Architecture for Empowering Diverse Audiences in Protecting the Congo Basin Ecosystem Using Early Warning Systems and National Data

Authors: Kenmoe Kendie, Denis Sonwa, Roger Mambeta, Joseph Mangouende
Affiliations: World Resources Institute Africa
Keywords: Congo Basin, Early Warning Systems, Digital Architecture, National Forest Atlas, Ecosystem protection, Climate change

The Congo Basin, one of the world’s most vital ecosystems, faces significant threats from deforestation, biodiversity loss, and climate change. As a global carbon sink, the region plays a critical role in regulating the Earth’s climate by storing vast amounts of carbon in its forests and peatlands. However, increasing deforestation and land-use change in the Congo Basin release stored carbon into the atmosphere, exacerbating global warming and disrupting local and regional weather patterns. Effective conservation requires tools that not only provide actionable information but also engage diverse stakeholders, including policymakers, local communities, researchers, and civil society organizations. This study presents a tailored digital architecture designed to cater to different audience types by leveraging Early Warning Systems (EWS) and national data to enhance ecosystem protection efforts. The architecture integrates geospatial data, satellite imagery (Sentinel-1 & 2, Landsat 8 & 9) and machine learning to deliver near-real-time alerts on deforestation and forest degradation, alongside monthly very high-resolution imagery. The early warning capabilities, combined with land allocation datasets (logging concessions, community forests, mining permits, protected areas, etc.) produced at the national and departmental levels, ensure the relevance and accuracy of the information provided. The system incorporates user-friendly dashboards, interactive maps, mobile applications, and data-sharing platforms to meet the specific needs and capacities of each target audience. Alerts are accessed by a range of stakeholders, including government agencies, conservation organizations, and local communities, through quarterly newsletter reports or a structured mechanism designed for rapid response.
Once alerts are issued, the system facilitates immediate action by government agencies and other stakeholders, often leading to coordinated field assessments, policy interventions, or community-led conservation efforts. Key case studies from Congo Basin countries illustrate how the “Forest Atlas” architecture (the Forest Atlas is managed and updated by the Ministry of Forests with the support of the World Resources Institute) facilitates informed decision-making, encourages collaboration and strengthens community-led conservation initiatives. By integrating various data sources contained in Global Forest Watch, the forest atlases provide a comprehensive view of forest health and change at the national level. In addition, the architecture is supported by tools such as Forest Watcher, which allows users to monitor forest disturbances directly in the field, even without an internet connection. This combination of global information and local action enables stakeholders to respond quickly to threats, bridging the gap between high-level monitoring and on-the-ground conservation efforts. It also contributes to global efforts to combat climate change by bolstering resilience and ensuring that ecosystems like the Congo Basin continue to play their vital role in climate regulation and biodiversity preservation.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall L1/L2)

Presentation: What role can remote sensing play in co-creation processes for climate and environmental stewardship?

Authors: Ana Carolina Pires Pereira, Neriane Nascimento Da Hora, Daniel Andrade Maciel, Marie-Paule Bonnet, Stéphanie Nasuti, Frédéric Frappart, Renaud Hostache
Affiliations: Centre for Sustainable Development/University Of Brasília, ESPACE-DEV, Univ Montpellier, IRD, Univ Antilles, Univ Guyane, Univ Réunion, ISPA, INRAE, Bordeaux Sciences Agro, Earth Observation and Geoinformatics Division, National Institute for Space Research (INPE), SAPOPEMA
The Amazon region is the scene of various pressures that create a scenario of polycrisis. Environmental and climate changes are observed through the increased frequency of extreme hydrological events (droughts and floods) and heatwaves. The large-scale predatory development model multiplies anthropogenic pressures, such as the exploitation of bauxite, overfishing, and illegal activities like gold extraction and deforestation. As a consequence, impacts are observed both in the ecosystem, which may reach a tipping point, and among local populations, whose ways of life are closely tied to the nature that surrounds them. Fishing, for example, may become an economically risky activity in the case of drought, and agricultural activities are threatened by fluctuations in water availability. In this scenario, remote sensing and in situ data are used to detect these disturbances and indicate, for example, a variation in the flooded area and the duration of floods in recent decades, as well as a significant increase in lake temperatures during the 2023 drought. At the same time, a wide range of monitoring strategies is emerging, which we call independent monitoring, in which communities position themselves as key actors in the production of information based on their knowledge of the territory and ways of life. However, while the population shows interest in adopting new technologies, there is still a gap in understanding how these tools, and more specifically remote sensing, can be incorporated to strengthen territorial struggles, with the goal of co-creating data that can be appropriated by communities based on their needs. Therefore, the aim of this work is to analyze the integration of remote sensing as a tool for independent monitoring in the floodplains of Santarém, located in the Lower Amazon.
So, the question we pose is: "How can remote sensing and the co-construction of spatial data contribute to strengthening local climate and environmental actions?". To achieve our objective, the methodology of this work is a mixed-methods approach based on three main pillars: (i) the identification, in the literature, of independent monitoring actions in the Amazon region and the correlation of these strategies with potential spatial products; (ii) the analysis of spatial data and quantitative data obtained from in situ measurements; and (iii) the analysis of qualitative data obtained during fieldwork, particularly through participatory observation (immersion in the community's daily routine, accompanying them in their activities) during the implementation of independent monitoring strategies, conducting semi-structured interviews, and engaging in foresight exercises to define desirable scenarios for territorial defense and political advocacy. Finally, it is emphasized that since this work primarily falls within the scope of a thesis project started in 2024, the results obtained so far are limited to findings related to the first pillar of the methodology. In this regard, three main areas of independent monitoring have been identified: (i) hydrological (extreme events, water quality, and bank erosion); (ii) large-scale enterprises, particularly those related to mining and deforestation; and (iii) land use and land occupation. Through remote sensing, these areas can be studied using optical data (e.g., Sentinel-2 and Landsat), radar data (Sentinel-1, SWOT), and altimetric data (Sentinel-3, SWOT). Concerning the other two pillars, we can currently identify some expected outcomes. 
Thus, since this research lies at the intersection of remote sensing and social sciences, as well as at the boundary between academic and non-academic knowledge, this study has the potential to propose a methodological innovation by fostering dialogue between academic and non-academic knowledge and placing the interdisciplinary interface at the service of bottom-up data production. Furthermore, from a political point of view, the analysis of information produced by integrating remote sensing with independent observation, along with local populations, their representatives, and local governance institutions (e.g., municipal and state environmental agencies in Pará), will enable the identification of action strategies for ecological transition and promote the introduction of appropriate public policies. In this way, it is important to note that this research points to two main practical implications: solving a concrete and urgent problem and the political advocacy of social actors from the Lower Amazon.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Advancing Sustainability: Earth Observation Pathways and analysis-Ready Data for SDG Indicator 6.3.2 Water Quality Reporting

Authors: Carmen Cillero Castro, Dr. Harriet Wilson, Dr. Stuart Warner, Dr. Erini Politi, Yago Alonso, Professor Stefan Simis, Dr. Kerstin Seltzer, Professor Evangelos Spyrakos
Affiliations: 3edata. Environmental Engineering. S.L., University of Stirling, Global Environment Monitoring Unit, United Nations Environment Programme (UNEP), Brockmann Consult GmbH, Plymouth Marine Laboratory
We developed an indicator based solely on satellite EO for SDG Indicator 6.3.2 Level 2 reporting, designed for global scalability and tested it locally in a transboundary pilot at Lake Tanganyika. We present part of the results of the UNEP-World Water Quality Alliance (WWQA) seed-funded project, “Earth Observation Pathway for SDG Indicator 6.3.2” focusing on the exploitation of Earth Observation Analysis Ready Data (EO ARD) to address environmental challenges related to ambient water quality. This study addresses the global challenge faced by many (particularly low-income) countries, where sparse in situ water quality measurements hinder effective assessment. By leveraging the evolving capabilities of the Copernicus Land Monitoring Service (CLMS), we assessed water quality over an extended period, established baseline conditions, and introduced a novel spatial component to SDG reporting. The indicator is offered on a tailor-made online dashboard, which was co-designed with interested users (such as SDG National Focal Points) and addresses two types of user personas: those who are interested in simply extracting the SDG Indicator 6.3.2 Level 2 reporting information (the EO indicator) and those who need to dive deeper into EO water quality datasets and examine spatial and temporal dynamics using interactive maps and time series. The methodology integrates three water quality parameters (chlorophyll-a, turbidity and lake surface water temperature) into a composite indicator, complementary to SDG Indicator 6.3.2 Level 1 parameters, derived from in situ data. The EO indicator methodology classifies waterbody status and provides threshold values based on baseline conditions. We used the EO ARD from Copernicus Land Monitoring Service (CLMS) Lake Water Quality (LWQ) early release (Calimnos v2.1 L3 test dataset (July 2024) of Lake Tanganyika produced by Plymouth Marine Laboratory) and the Lake Surface Water Temperature (LSWT) products for the calculation of the indicator. 
We used the 2002-2012 dataset as the baseline period and for defining the threshold values, while the 2017-2019 dataset was used to calculate the indicator for the SDG Indicator 6.3.2 Level 2 reporting. The composite indicator outputs a result per pixel and reporting period that ranges from 0 (low quality) to 100 (excellent quality). These values are categorised into five descriptive classes to analyse spatial distributions. A binary classification of “good” or “not good” water quality, similar to the SDG Indicator 6.3.2 Level 1 methodology, can also be derived from the EO indicator and made available on the dedicated dashboard. Results for Lake Tanganyika for the reporting period 2017-2019 revealed a stable southern region with values around 70–80, consistent with baseline conditions. However, the northern part of the lake showed lower water quality, with values ranging from 70 to below 50, and greater deviation from the baseline. Proximity to populated areas appeared to be linked to reduced water quality. This approach highlights the potential of EO data to complement and advance current SDG reporting methodologies. Integrating EO Analysis Ready Data into SDG reporting introduces a unique spatial perspective, enables global reporting with consistent methodologies, and extends the temporal scope of current reporting frameworks. Operational datasets, such as the CLMS data used here, are pivotal to ensure continuity, consistency and robustness.
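The per-pixel scoring and classification described above can be illustrated with a short sketch. The five equal-width class bins and the 60-point binary cutoff below are illustrative assumptions only; the project derives its actual thresholds from baseline conditions per waterbody.

```python
# Illustrative sketch: map a 0-100 composite water quality score to five
# descriptive classes and a binary good / not-good flag. The bin edges and
# the binary cutoff are assumptions for illustration, not the project's
# actual baseline-derived thresholds.

def classify_score(score: float) -> tuple[str, str]:
    """Return (descriptive class, binary status) for a composite score."""
    if not 0 <= score <= 100:
        raise ValueError("composite score must lie in [0, 100]")
    classes = [(20, "bad"), (40, "poor"), (60, "moderate"),
               (80, "good"), (100, "excellent")]  # assumed equal-width bins
    label = next(name for edge, name in classes if score <= edge)
    binary = "good" if score >= 60 else "not good"  # assumed cutoff
    return label, binary
```

For example, a pixel scoring in the 70–80 range reported for the southern lake would fall into the "good" class under these assumed bins.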
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall L1/L2)

Presentation: Satellite-Based ML Solutions for Methane Transparency in Finance

Authors: Daria Stepanova, Emre Dogan, Vasily Yablokov, Errico Armandillo, David Vilaseca
Affiliations: Airmo SARL
Visibility into supply chain emissions, particularly Scope 3 for financed emissions, remains a critical challenge for financial stakeholders, complicating risk management and sustainability reporting. Data providers and financial industry end-users have shown strong interest in satellite-based monitoring with precise ownership attribution to address these blind spots. AIRMO SARL leverages advanced Earth Observation (EO) technologies, including data from Copernicus, MethaneSAT, and Carbon Mapper, along with its own data integrated with proprietary machine learning algorithms to deliver precise, real-time emissions insights. Currently monitoring 100 assets, AIRMO aims to scale to 1,000,000, enabling comprehensive supply chain tracking. This paper presents results of machine learning integration for methane plume detection. Using Sentinel-2 and MethaneAir imagery, AIRMO developed a custom ML model trained with over 41,000 synthetic plumes generated through simulation tools like the Weather Research and Forecasting Large-Eddy Simulation. These plumes simulate varying methane emission scenarios, enhancing the model's robustness in detecting methane anomalies. The current ML pipeline includes preprocessing of satellite data, filtering based on Scene Classification Layers, and tile segmentation for optimal model input. Initial model predictions demonstrate promising capabilities, which will be presented in the paper. Real-time methane incident monitoring is increasingly prioritized due to emerging regulations worldwide. While the financial sector has made significant net-zero commitments, unreported methane emissions and the limitations of self-reported data from portfolio companies remain significant obstacles. AIRMO’s satellite-based solutions bridge these gaps by providing accurate, real-time insights, enhancing transparency, mitigating risks, and ensuring compliance with tightening regulatory standards.
This paper specifically focuses on AIRMO’s machine learning algorithms and the product offering tailored to the financial industry, utilizing public satellite data to create actionable insights.
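Two of the pipeline steps mentioned above, filtering based on a Scene Classification Layer (SCL) and tile segmentation for model input, can be sketched as follows. The class codes, tile size, and function names are assumptions for illustration, not AIRMO's actual implementation.

```python
import numpy as np

# Hypothetical sketch of two preprocessing steps: masking cloud-classified
# pixels via a Scene Classification Layer (SCL), then cutting a scene into
# fixed-size tiles for model input. Class codes and tile size are assumed.

CLOUD_CLASSES = {8, 9, 10}  # assumed SCL codes for cloud / cirrus

def scl_mask(scene: np.ndarray, scl: np.ndarray) -> np.ndarray:
    """Set cloud-classified pixels to NaN so they are ignored downstream."""
    masked = scene.astype(float)
    masked[np.isin(scl, list(CLOUD_CLASSES))] = np.nan
    return masked

def tile(scene: np.ndarray, size: int = 128) -> list[np.ndarray]:
    """Split a 2-D scene into non-overlapping size x size tiles,
    dropping incomplete tiles at the edges."""
    h, w = scene.shape
    return [scene[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]
```

Masking before tiling ensures that tiles dominated by clouds can be discarded cheaply, e.g. by checking the NaN fraction per tile.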
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.31/1.32)

Session: A.05.02 Earth observation for Climate Services - PART 2

Earth Observation has considerable potential to inform and facilitate climate service development globally. Long term, sustainable observations of Essential Climate Variables are the cornerstone of our ability to monitor climate change and climate risks.
These services can in turn be used to support implementation of the Paris Agreement, nationally and internationally set goals (such as the European Green Deal, Nationally Determined Contributions, etc.), and Monitoring, Reporting and Verification of adaptation and mitigation efforts.

However, in order to truly support decision making, this wealth of information must be supplemented by initiatives serving user needs, providing for example, sectoral or regional scale information in a timely, trusted and transparent fashion.

Climate services form the critical link between climate information and decision-making communities of practice. Development of such services must be underpinned by the co-production, with both the end-user and data provider communities, of user requirements, research and data needs, and intermediate products.

These services fundamentally add value to EO information, such that it can be used for decision making. EO-based service development may include (but is not limited to):
• Sustained and sustainable data provision, quality assurance and accessibility
• Research and development on existing data products to produce value added services either directly for decision makers, or for intermediaries in the climate services value chain
• Scoping and requirements gathering for new observational products, where there is a user need
• Stakeholder engagement at all levels of the value chain, as part of a co-production process
• Cross-disciplinary collaboration and research, including integration of economic or social information
• Progress towards policy goals such as implementation of the Paris Agreement, NDCs and MRV systems

This session seeks submissions related to all aspects of climate service provision and development.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.31/1.32)

Presentation: Sustainable Intensification of Agriculture in West Africa – How Earth Observation Can Support Successful Implementation

Authors: Dr. Frank Thonfeld, Jonas Meier, Pierre C. Sibiry Traore, Alhassan Lansah Abdulai, Dr. Sidy Tounkara, Dr. Laure Tall, Celeste Tchapmi Nono Nghotchouang, Janet Mumo Mutuku, Khadidiatou Faye, Stefanie Steinbach, Dr. Valerie Graw, Charles Lamoussa Sanou, Kwame Hackman, Belko Abdoul Aziz Diallo, Niklas Heiss, Verena Huber García, Dr Ursula Gessner
Affiliations: German Aerospace Center (DLR), International Crops Research for the Semi-Arid Tropics (ICRISAT), Manobi Africa PLC, Savanna Agricultural Research Institute (SARI), Initiative Prospective Agricole et Rurale (IPAR), Ruhr University Bochum (RUB), West African Science Service Centre on Climate Change and Adapted Land Use (WASCAL)
West Africa faces multiple major global challenges of the 21st century, such as population growth and climate change. Changing rainfall patterns in space and time and rising temperatures, often experienced as an unreliable rainy season onset or severe flooding, pose huge economic risks to traditional small-scale farming systems. At the same time, an increasing population demands more food, which puts arable land under pressure and often results in agricultural expansion. In the long run, this leads to depleted soils and decreasing productivity. Sustainable intensification (SI), defined as increased production and productivity without adverse environmental effects and without additional land consumption, can support achieving the Sustainable Development Goals (SDG) when properly implemented. The work presented here is closely linked to the Nationally Determined Contributions (NDC) on climate action of Senegal and Ghana that help achieve the Paris Agreement. Many of these NDCs are related to SDG 2 ‘zero hunger’ (e.g., sustainable agricultural practices for resilience and resource protection) and SDG 15 ‘life on land’ (e.g., restore degraded land). Within the COINS project (Co-developing innovations for sustainable land management in West African smallholder farming systems), we assess which SI measures work for whom under which socio-economic conditions. In two different environmental, economic, social and cultural settings, we identify mechanisms by which SI can be successfully implemented. The two study sites are the Senegal River Valley in northern Senegal, and northern Ghana. In Senegal we work on the system of rice intensification (SRI) in an area that is dominated by irrigated rice cropping and that shows huge gradients, with intensive agriculture in the West and less developed small-scale farming in the East. The gradient also exists in economic and social terms, for example in access to labor, machinery and markets.
In Ghana, the study site covers mainly rain-fed agriculture with higher economic risks resulting from environmental factors, e.g. unreliable rains. Here, the focus is on risk reduction and soil fertility management. In this presentation, we demonstrate how Earth Observation (EO) can provide information needed by several stakeholders to facilitate the implementation of SI. The examples cover, among others, the delineation of field boundaries, the assessment of which fields are in use, long-term dynamics of agricultural activity, and the identification of management practices (e.g. flooding/planting/harvesting in Senegal or crop rotation in Ghana). The final step in the COINS approach is advising. Integrated geodata-enabled platforms take up EO-based information to link farmers and their agricultural practices with insurance and financing, hence reducing financial risks and supporting food production. Extending these functionalities to other stakeholders can help to implement SI and monitor activities on the ground in adequate time. Well-coordinated interaction between all stakeholders along the value chain and proper monitoring are key pillars of sustainable land management in sub-Saharan Africa that can benefit from digital technologies such as EO.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.31/1.32)

Presentation: Revisiting the ESA CCI Land Cover/C3S map series (1992–2022) LULC classification and change algorithms for climate modelling and carbon budget applications

Authors: Celine Lamarche, Grit Kirches, Ralf Quast, Carsten Brockmann, Catherine Ottlé, Philippe Peylin, Andrew Hartley, Stephen Sitch, Clément Albergel, Pierre Defourny
Affiliations: UCLouvain-Geomatics (Belgium), Brockmann Consult GmbH, Laboratoire des Sciences du Climat et de l'Environnement, Met Office, University of Exeter, ESA ECSAT
Information on land use and land cover change (LULCC) is essential for understanding and mitigating climate change, particularly within the framework of the Paris Agreement objective to limit global temperature rise to below 1.5°C. The European Space Agency (ESA) Climate Change Initiative (CCI) Land Cover (LC) project, operationalized within the Copernicus Climate Change Service (C3S) since 2016, addresses the Global Climate Observing System (GCOS) requirements for the LC Essential Climate Variable (ECV). The project has produced consistent global LC maps at 300 m resolution annually from 1992 to 2022, classified into more than 22 classes using the Food and Agriculture Organization (FAO) Land Cover Classification System (LCCS). These maps monitor LC transitions across Intergovernmental Panel on Climate Change (IPCC) Agriculture, Forestry, and Other Land Use (AFOLU) categories and support climate modeling and carbon budget assessments. Derived from five satellite missions (MERIS, AVHRR, SPOT-VGT, PROBA-V, and Sentinel-3 OLCI/SLSTR), the dataset applies a trajectory-based classification and LULCC detection methodology to ensure temporal and spatial consistency between years. Capturing 13 key transition types, including deforestation, cropland expansion, and urbanization, the dataset achieves an overall accuracy of 70.77 ± 0.28% for 2016–2022. However, significant uncertainties remain, particularly in cropland and deforestation trends in regions like the Amazon, where inconsistencies between datasets (e.g., the History Database of the Global Environment (HYDE), FAO, and Land_Cover_cci) highlight challenges stemming from varying resolutions and definitions. Here, we present an improved processing chain for LULCC detection that integrates Sentinel-2, Sentinel-1, and Landsat data, alongside higher-resolution reference datasets, to improve spatial and temporal precision, especially over the pan-tropical area.
This approach transitions from detecting 1 km LULCC hotspots to estimating LCC proportions within 300 m pixels using data with 10–30 m spatial resolution, enabling more precise detection of deforestation, plantation dynamics, and cropland expansion. The integration of synthetic aperture radar (SAR) with optical observations facilitates the detection of small-scale changes in highly cloudy regions, such as the Democratic Republic of the Congo (DRC). Accuracy improvements are monitored using a dedicated validation database and methodology in line with the Committee on Earth Observation Satellites Land Product Validation (CEOS-LPV) guidelines. The reprocessed LC map series (1992-2022) at higher spatial resolution will play a critical role in supporting global climate models and the global carbon budget. It will also contribute to key international initiatives such as biodiversity monitoring led by the Organisation for Economic Co-operation and Development (OECD) and the achievement of land degradation neutrality targets under the United Nations Convention to Combat Desertification (UNCCD).
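The move from hotspot detection to sub-pixel change proportions can be illustrated with a minimal sketch: aggregating a fine-resolution binary change map into per-pixel change fractions on a coarser grid. The 10×10 block (e.g., 30 m pixels inside a 300 m cell) and the function below are illustrative assumptions, not the CCI processing chain itself.

```python
import numpy as np

# Illustrative sketch: estimate the proportion of changed fine-resolution
# pixels (e.g., 30 m) inside each coarse pixel (e.g., 300 m) by block
# averaging a binary change map. Block size is an assumption here.

def change_fraction(binary_change: np.ndarray, block: int = 10) -> np.ndarray:
    """Return the fraction of changed fine pixels within each coarse pixel."""
    h, w = binary_change.shape
    if h % block or w % block:
        raise ValueError("map dimensions must be multiples of the block size")
    return (binary_change
            .reshape(h // block, block, w // block, block)
            .mean(axis=(1, 3)))
```

A coarse pixel in which half the fine pixels changed would thus carry a proportion of 0.5 rather than a simple changed/unchanged flag, which is the essence of reporting LCC proportions within 300 m pixels.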
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.31/1.32)

Presentation: Trends and connections across the Antarctic cryosphere

Authors: Prof. Andrew Shepherd, Dana Floricioui, Dr Thomas Nagler, Martin
Affiliations: CPOM
Satellite observations have transformed our understanding of the Antarctic cryosphere. The continent holds the vast majority of Earth’s fresh water, and blankets swathes of the Southern Hemisphere in ice. Reductions in the thickness and extent of floating ice shelves have disturbed inland ice, triggering retreat, acceleration and drawdown of marine-terminating glaciers. The Antarctic Ice Sheets (AIS) Climate Change Initiative (CCI) produces reliable, long-term climate data records of Antarctic change using satellite observations, supporting the scientific community with critical data records. The project utilizes data from ESA and other satellites since the 1990s, focusing on key parameters including surface elevation change (SEC), ice velocity (IV), grounding line location (GLL), and gravimetric mass balance (GMB). Our efforts contribute to understanding historical and current Antarctic changes, validating climate models, and improving climate projections. Satellite observations provide essential insights into climate dynamics, underscoring the need for coordinated, long-term data preservation. Ice sheet data are crucial for monitoring global sea level rise and for refining models of ice flow, basal hydrology, and fast-flowing ice streams, key uncertainties in predicting future sea-level change. These models, requiring high-resolution and long-term datasets, are increasingly integrated with climate models to study ice sheet evolution and feedback mechanisms. Users of AIS CCI data span diverse fields, including ice sheet modelers, remote sensing scientists, surface mass balance modelers, and climate modelers. Additionally, stakeholders such as policymakers and industry use these data for applications like infrastructure planning and resource exploration. Despite this broad interest, the scientific community faces challenges in accessing and standardizing satellite-derived datasets, highlighting the importance of coordinated monitoring programs and shared resources.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.31/1.32)

Presentation: 25 Years Of Sustained Generation Of Satellite Climate Data Records By CM SAF

Authors: Rainer Hollmann, Dr. Marc Schröder, Nathalie Selbach
Affiliations: Deutscher Wetterdienst
In recent decades, climate variability and change have caused impacts on natural and human systems on all continents. Observations are needed to understand and document these impacts and their causes. Such observations are increasingly based on remote sensing data from satellites, which offer continuous coverage on a global scale. Only long-term and consistent observations of the Earth system allow us to quantify climate variability and change and their impacts on the natural and human dimension. For more than 25 years, the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Satellite Application Facility on Climate Monitoring (CM SAF, https://www.cmsaf.eu) has been developing capabilities for a sustained generation and provision of Climate Data Records (CDRs) derived primarily from operational meteorological satellites. The product portfolio of the CM SAF comprises long time series of Essential Climate Variables (ECVs) related to the energy and water cycles as defined by the Global Climate Observing System (GCOS). Currently available CM SAF CDRs include, among others, precipitation, radiative fluxes at the surface and top of the atmosphere, cloud products, upper tropospheric humidity, surface albedo and temperature, as well as latent heat flux/evapotranspiration. The data are freely available from the CM SAF Web User Interface (https://wui.cmsaf.eu), and a subset also through the Copernicus Climate Data Store. The most recent CDR versions often cover the WMO reference period 1991-2020. Several of the CM SAF CDRs even have a temporal coverage of more than 40 years (e.g. CLARA, the global CDR based on AVHRR data). In order to serve applications with stronger timeliness requirements, CM SAF also produces so-called Interim Climate Data Records (ICDRs), which are typically released within a few days after the observations. These ICDRs are then suitable for a climate anomaly monitoring service needed by Member States.
All products are well-documented, carefully validated and have been externally reviewed prior to product release. A short introduction to CM SAF including its support to climate services and an overview of available and upcoming CDRs and ICDRs from CM SAF will be presented.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.31/1.32)

Presentation: The Role of EO for Climate Resilience: A Focus on the Mediterranean Region

Authors: Mr. Gabriele Redigonda, James Francis, Mr. Lucas Bersegol, Ms. Shadi Rochard, Ms. Laura Corbett
Affiliations: European Space Policy Institute
For the first time in 2023, the Intergovernmental Panel on Climate Change (IPCC) addressed the specific impact of our changing climate on the Mediterranean Basin. According to the IPCC, the Mediterranean region faces particularly acute regional climate-induced risks, surpassing the global average and accentuated by the area’s heightened vulnerability. EO and related space-based technologies play a significant role in addressing the challenges of climate resilience, as evidenced by the ongoing implementation of over one hundred projects already using space solutions in Mediterranean countries. However, it remains unclear whether these projects actually directly address the specific challenges identified in the climate policies of these countries. This study, undertaken by ESPI, goes beyond providing insights into current space-based solutions for climate resilience in the region: it also analyses potential synergies and misalignments between space-based climate projects and policy priorities, and identifies gaps and untapped potential for these solutions. The methodology involved the in-depth analysis and cross-referencing of national and sectoral mitigation and adaptation strategies addressing climate-related concerns from 27 countries in the Mediterranean Basin, complemented by security and disaster risk management policies. Furthermore, over one hundred projects and solutions using space-based technology and data, the majority focusing on EO, were identified and evaluated based on the ways in which they tackle climate-specific use cases in the region. The study concluded that there was a pronounced discrepancy in sub-regional EO applications targeting climate change, most evident in the Middle East and North African regions. Secondly, several local regions, such as the Balkans, were shown to have poor awareness of the value of space combined with a low level of implementation of space assets.
Finally, to address the identified challenges in aligning policy priorities and projects across the Mediterranean region as a whole, the study advocates best practices that support space actors and policy stakeholders in integrating EO further within regional policy strategies and in fostering greater climate security initiatives at varying levels across the region. These approaches include, but are not limited to, increased awareness among politicians of the benefits of space for addressing climate resilience, support from NGOs and international organisations to facilitate the adoption of EO solutions, greater bilateral and multilateral coordination on the uptake of these space-based technologies, and the encouragement of transnational and regional uptake of space solutions. The lessons learned from these examples and the regional focus could support greater implementation of space in climate policies and alignment of the space and climate nexus globally.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.31/1.32)

Presentation: From Hurdles to Toeholds: Strengthening Conditions and Structures for Evidence-based Policy

Authors: Alexandra Bell, Doris Klein, Dr. Sarah Schönbrodt-Stitt, Michael Thiel, Hannes Taubenböck, Dr. Prof. Stefan Dech
Affiliations: German Aerospace Center (DLR), Space Research Division, German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), University of Würzburg, Institute of Geography and Geology, Department of Remote Sensing, University of Würzburg, Institute of Geography and Geology, Department of Global Urbanization and Remote Sensing
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall E1)

Session: A.09.09 Arctic and Antarctic Sea Ice in the Earth system: Advancing Research with Remote Sensing, In-Situ Observations, and Modeling - PART 1

Sea ice plays a crucial role in the global climate system through its strong effect on albedo, ocean and atmospheric circulation, and its feedbacks with ice sheets and permafrost.

Remote sensing of sea ice has been the cornerstone of Arctic and Antarctic sea ice research for over 50 years. These long-term, large-scale, and stable time series of sea ice parameters provide the baseline for a deeper understanding of the ongoing dramatic changes in both hemispheres. This knowledge is further diversified and enhanced by new and upcoming satellite missions (e.g., ICESat-2, SWOT, CIMR, CRISTAL, ROSE-L) that provide insights into detailed processes such as snow depth changes, meltpond drainage, and sea ice ridging, as well as support operational forecasting and monitoring applications. They also advance our understanding of the relevance of sea ice for atmospheric, oceanic, and ecological processes, e.g., Arctic cloud formation or the timing of ice algae blooms.

Sea ice parameters are observed over a large wavelength spectrum and derived from many different sensors including microwave and infrared radiometers, visible observations, radar imagers, and lidar or radar altimeters. Combining, merging, and jointly analyzing products from different satellite sensors and scales represents the next powerful step in advancing our knowledge of the fast-changing sea ice covers.

A key challenge remains in bridging scales and spheres between Earth Observation (EO) datasets, climate modeling, and in-situ datasets. New methodological advances such as data-driven modeling and physics-informed artificial intelligence, may be well-suited to address this challenge.

This session addresses all aspects of sea ice, including the current status and needs in enhancing EO methodologies, and the use of EO products for evaluating polar climate model simulations and for data assimilation in Numerical Weather Prediction models. Airborne and in-situ observation campaigns are critical to evaluate, calibrate, and develop satellite retrievals and we invite submissions on these aspects, too. Submissions on solutions for addressing up- and downscaling challenges on different temporal and spatial scales and between different data types are also encouraged.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall E1)

Presentation: Integration of polarimetric L- and C-band synthetic aperture radar data of late winter Arctic sea ice: first results from the airborne CryoSAR mission

Authors: Randall Scharien, Richard Kelly, Christian Haas, Neil Brubacher, Andrew Arreak, Trevor Bell, Brody McKeown
Affiliations: University Of Victoria, University of Waterloo, Alfred Wegener Institute, SmartICE
The IceBird program is a long-term airborne observation campaign to observe changes of sea ice thickness, roughness, snow cover, and melt ponds in the Arctic. The program is led by the Alfred Wegener Institute (AWI), using AWI’s Polar 5 and Polar 6 aircraft. As part of the IceBird Canada 2024 (CAN 24) winter program, the main instruments used by the IceBird program were complemented by the addition of a dual-frequency Ku- and L-band synthetic aperture radar (SAR) system called CryoSAR. Here we describe the first results from the successful operation of the polarimetric L-band SAR from CryoSAR, focusing on data collected coincident to satellite polarimetric C-band SAR data from the RADARSAT Constellation Mission, along with detailed in situ, multi-scale, snow and sea ice property information. These data were collected at the Mittimatalik super-site, adjacent to Baffin Bay in the Canadian Arctic, during late winter period conditions in April 2024 as part of the international project Sikunnguaq: meaning, in Inuktitut, one of the principal Inuit languages of Canada, “the likeness or image of ice, as on maps”. The main goal of this paper is to evaluate the potential for enhanced sea ice information through the integration of SAR data collected at L- and C-band frequencies. The IceBird sensor suite, combined with co-located snow and sea ice macro-scale property surveys (thickness and roughness distributions), and micro-physical surveys (snow grain property, dielectric constant, and radar-scale roughness), provides the basis for a detailed examination of the frequency-dependent, signature controlling, properties of the snow-covered sea ice volume for spatially distributed sites across the study area. In our examination, emphasis is placed on the potential utility of conventional, dual-polarisation, SAR modes, compared to experimental quad-polarisation modes, for enhanced sea ice information retrievals. 
Insights will enable the preparation of retrieval algorithms that integrate current and forthcoming SAR missions, such as Sentinel-1 and the Copernicus Radar Observation System for Europe in L-band (ROSE-L), expected in 2028.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall E1)

Presentation: High-resolution pan-Arctic sea ice motion from the RADARSAT Constellation Mission

Authors: Alexander Komarov, Mathieu Plante, Jean-François Lemieux, Stephen Howell, Mike Brady
Affiliations: Meteorological Research Division, Environment and Climate Change Canada, Meteorological Research Division, Environment and Climate Change Canada, Climate Research Division, Environment and Climate Change Canada
Arctic sea ice motion is recognized as an Essential Climate Variable by the World Meteorological Organization. Sea ice motion can be accurately derived from sequential synthetic aperture radar (SAR) images, but until recently SAR imagery was available from only a single satellite, so deriving pan-Arctic sea ice motion without additional sensors (i.e., Sentinel-1) was not possible. With the launch of the Canadian three-satellite RADARSAT Constellation Mission (RCM), the spatiotemporal coverage of SAR data over the Arctic has drastically increased. This opened new opportunities for mapping pan-Arctic sea ice motion at high spatial resolution (< 5 km). Pan-Arctic sea ice motion has recently been derived by applying Environment and Climate Change Canada’s (ECCC) Automated Sea Ice Tracking System (ASITS) (Komarov and Barber, 2014) to sequential RCM and Sentinel-1 HH images resampled at 200 m resolution. While the resulting sea ice motion has been found useful for many applications, such as quantifying sea ice fluxes, the derived sea ice deformations often suffered from prominent checkerboard-pattern noise. We overcame this challenge by deriving pan-Arctic sea ice motion (May 2022 to present) from RCM images resampled at a high resolution of 80 m using ECCC ASITS. As of November 2024, ice motion from more than 730,000 RCM image pairs (with time differences between 2 hours and 3.5 days) has been derived. In our processing chain we utilized both the HH and HV channels of RCM imagery independently to further maximize the spatial resolution of the output ice motion vector field. The selection process for image control points in ASITS was configured such that the spacing between neighboring output vectors for a given channel (HH or HV) was as low as 1.28 km. After combining the HH and HV outputs, the spacing between neighboring vectors often falls below 1 km.
Validation of the high-resolution sea ice motion products against International Arctic Buoy Programme data over a full annual cycle (2023) indicated very good agreement between SAR and buoy ice displacements, with an overall correlation of 0.998 for both horizontal and vertical components and a root-mean-square error (RMSE) of 1.07 km. The ice motion vectors provided by HV had higher confidence (i.e., greater cross-correlation coefficients) than those derived from HH. Furthermore, sea ice motion vectors derived from HH and HV combined were in better agreement with buoys (i.e., lower RMSE) than those derived from HH alone throughout the year. The excellent added value of HV is explained by the very low noise floor of RCM HV imagery, particularly in the ScanSAR Low Noise (SCLN) beam mode. Our high-spatiotemporal-resolution RCM ice motion system can be run in near-real time (e.g., twice a day) or for a specified period of time, depending on the intended application. High-quality sea ice deformations (free of the checkerboard-pattern noise) are further derived from the individual ice motion products. We have also introduced a daily aggregated pan-Arctic gridded product at 2 km resolution that combines individual RCM ice motion products derived within ±1.5-day or ±3.5-day windows with respect to the valid date.
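The cross-correlation matching that underlies SAR feature-tracking systems like ASITS can be sketched in a few lines (a minimal, illustrative NumPy version, not ECCC's implementation; real systems add multi-resolution matching, filtering, and quality control):

```python
import numpy as np

def patch_offset(ref, search):
    """Estimate the (row, col) shift mapping `ref` onto `search`
    via FFT-based cross-correlation (patches of equal size)."""
    ref0 = ref - ref.mean()
    srch0 = search - search.mean()
    # circular cross-correlation computed in the Fourier domain
    cc = np.fft.ifft2(np.fft.fft2(srch0) * np.conj(np.fft.fft2(ref0))).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # wrap offsets larger than half the patch size to negative shifts
    offs = [int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, cc.shape)]
    return tuple(offs)

# synthetic example: an ice "feature" displaced by (3, -2) pixels
rng = np.random.default_rng(0)
img1 = rng.random((64, 64))
img2 = np.roll(img1, shift=(3, -2), axis=(0, 1))
dy, dx = patch_offset(img1, img2)
# displacement in metres = pixel offset * pixel spacing (80 m here)
```

The recovered pixel offset, multiplied by the pixel spacing and divided by the image-pair time difference, gives an ice drift vector at that control point.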
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall E1)

Presentation: Comparison of Optical and Altimetry Floe/Lead Classification Using Co-Located Sentinel-3 OLCI/SRAL Data

Authors: Laura Orgambide, Flora Weissgerber, Sara Fleury, Frederic Champagnat, Carlos Yanez
Affiliations: DTIS, ONERA, Université Paris Saclay, Centre National d’Etudes Spatiales (CNES), LEGOS, Université de Toulouse, CNES, CNRS, IRD, UPS
Submarine and satellite records from 1958 to 2018 reveal a significant decline in sea ice thickness [5]. In the context of polar and global change, estimating sea ice thickness becomes increasingly crucial to accurately quantify ocean-atmosphere energy exchange in climate models. Altimeter sensors, such as SIRAL/CryoSat-2 and SRAL/Sentinel-3, allow sea ice thickness to be derived from ice elevation and sea elevation measurements [7]. The precision of the estimation relies on the classification of the altimeter measurement into valid floe or lead surfaces [1] [2]. This classification is based on the analysis of the echo's waveform [8] [3], which depends on sea ice roughness, snow cover and other features included in the measurement footprint [6]. However, surface variability leads to classification ambiguities. To improve the precision of sea ice thickness products, we propose a method to refine this classification by sharpening the selection of valid floe/lead measurements used in the estimation. The Sentinel-3 mission represents a unique opportunity to study coincident sea ice surface and thickness data from the co-located SRAL altimeter and OLCI multispectral imager at high spatial and temporal resolutions. Using these synchronized data, we aim to produce a dataset that combines classified OLCI images with SRAL waveforms. This dataset can be used to derive new robust criteria for the SRAL floe/lead classification. For this purpose, we develop a classification method for OLCI images. To classify OLCI images, a manually labeled dataset is built by sampling 108 evenly distributed, labeled, cloud-free subset images from March to May 2022 over Svalbard. The samples correspond to subsets of 100 × 100 pixels of the OLCI red band centered on the location of a SRAL measurement. Given the variability of the intensity of both the floe and the lead classes, we use a homogeneous region containing the SRAL measurement to make the classification decision.
This segment is generated by the Segment Anything Model (SAM) [4] algorithm using the SRAL measurement as a prompt in a zero-shot learning framework. First results of the SAM segments led us to extend the floe class to brash ice and pack ice classes, since the scale and the shape of these three types of objects differ significantly. Therefore, the OLCI surface classification we developed is composed of four labels: lead, floe (well-shaped ice floe), brash ice and pack ice. These labeled segments will allow us to define criteria on shape (SAM predicted IoU, continuity, etc.) and pixel intensity (mean, variance, etc.) to discriminate each class. A similar dataset is built from March to May 2023 in order to validate the method. Extended to the Arctic, this method could be used to define better criteria to select valid floe and lead waveforms and increase the number of sea-ice classes, such as separating floes from brash ice. It could even be used to create an extensive dataset to train a deep-learning waveform classifier. Beyond the scope of altimetry, this sea ice surface classification method could be used to study sea ice at the floe level, especially during spring periods when altimetry-based estimation of sea ice thickness is limited.
References
[1] M. Bocquet, S. Fleury, F. Piras, E. Rinne, H. Sallila, F. Garnier, and F. Rémy. Arctic sea ice radar freeboard retrieval from the European Remote Sensing satellite (ERS-2) using altimetry: toward sea ice thickness observation from 1995 to 2021. The Cryosphere, 17(7):3013–3039, 2023.
[2] W. Chen, M. Tsamados, R. Willatt, S. Takao, D. Brockley, C. de Rijke-Thomas, A. Francis, T. Johnson, J. Landy, I. R. Lawrence, et al. Co-located OLCI optical imagery and SAR altimetry from Sentinel-3 for enhanced Arctic spring sea ice surface classification. Frontiers in Remote Sensing, 5:1401653, 2024.
[3] K. Guerreiro, S. Fleury, E. Zakharova, A. Kouraev, F. Rémy, and P. Maisongrande. Comparison of CryoSat-2 and Envisat radar freeboard over Arctic sea ice: toward an improved Envisat freeboard retrieval. The Cryosphere, 11(5):2059–2073, 2017.
[4] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al. Segment Anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015–4026, 2023.
[5] R. Kwok. Arctic sea ice thickness, volume, and multiyear ice coverage: losses and coupled variability (1958–2018). Environmental Research Letters, 13(10):105005, 2018.
[6] J. C. Landy, M. Tsamados, and R. K. Scharien. A facet-based numerical model for simulating SAR altimeter echoes from heterogeneous sea ice surfaces. IEEE Transactions on Geoscience and Remote Sensing, 57(7):4164–4180, 2019.
[7] S. Laxon. Sea ice altimeter processing scheme at the EODC. International Journal of Remote Sensing, 15(4):915–924, 1994.
[8] N. R. Peacock and S. W. Laxon. Sea surface height determination in the Arctic Ocean from ERS altimetry. Journal of Geophysical Research: Oceans, 109(C7), 2004.
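The kind of per-segment shape and intensity criteria this abstract describes could look like the following sketch (the thresholds, decision order, and class rules here are hypothetical placeholders for illustration, not values derived in this study):

```python
import numpy as np

def classify_segment(pixels, predicted_iou):
    """Label one homogeneous segment from simple intensity/shape
    criteria. `pixels` are red-band reflectances of the segment;
    `predicted_iou` is a SAM-style segment quality score.
    All thresholds below are illustrative placeholders."""
    if predicted_iou < 0.5:
        return "pack ice"   # poorly delineated, large-scale surface
    if pixels.mean() < 0.2:
        return "lead"       # open water / thin ice is dark in the red band
    if pixels.var() > 0.01:
        return "brash ice"  # fragmented, heterogeneous ice
    return "floe"           # bright, homogeneous, well-shaped floe

lead_label = classify_segment(np.full(400, 0.05), predicted_iou=0.9)
floe_label = classify_segment(np.full(400, 0.85), predicted_iou=0.9)
```

In the study itself such criteria would be learned from the labeled segment dataset rather than hand-set.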
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall E1)

Presentation: Synergy of Sentinel-1, AMSR-2, and ICESat-2 for Arctic sea ice roughness retrieval

Authors: Anton Korosov, Dr Tore Wulf, Matilde Brandt Kreiner, Dr Jorgen Buus-Hinkler, Dr Sinead Farrell, Dr Kyle Duncan, Dr Jack Christopher Landy, Dr Andreas Stockholm
Affiliations: Nansen Environmental and Remote Sensing Center, Danish Meteorological Institute, The University of Maryland, The Arctic University of Norway, Danish Technological University
Sea ice roughness (SIR) refers to the unevenness of the ice surface, including features such as ridges, hummocks, and deformed ice. It influences sea ice dynamics, energy exchange, and interactions with the atmosphere and ocean. Sea ice roughness affects atmospheric and ocean drag forces (Tsamados et al., 2014; Martin et al., 2016), the accuracy of remote sensing measurements (Landy et al., 2020), and the formation of melt ponds (Fuchs et al., 2024). SIR can be estimated from radar altimeter data, where the shape and width of the radar echo waveform provide information about surface roughness, with broader or more diffuse waveforms indicating rougher surfaces (Kurtz et al., 2014; Landy et al., 2020). SIR can be estimated from laser altimeter data by detecting individual ridges and characterizing ridge sail height and density (Farrell et al., 2020). SIR can also be calculated from Multi-Angle Imaging SpectroRadiometer (MISR) imagery acquired in low sun-angle conditions (Mosadegh et al., 2022). However, to our knowledge, SIR is yet to be routinely derived from a combination of synthetic aperture radar (SAR) and passive microwave (PMW) imagery on the pan-Arctic scale. We present a deep learning-based algorithm for deriving SIR from SAR and PMW data. The algorithm uses the deep convolutional neural network U-net (Ronneberger et al., 2015). As input, it takes data from Sentinel-1 SAR extra-wide swath (EW) mode and the Advanced Microwave Scanning Radiometer 2 (AMSR2). For the targets, we experimented with footprint roughness from CryoSat-2, along-track variability of ice freeboard from CryoSat-2, and sea ice ridging intensity from ICESat-2 (Duncan & Farrell, 2022). We trained and validated the U-net with data from January 2020 and tested it with data from January 2021. We ran a set of sensitivity experiments to evaluate the impact of the U-net architecture, training parameters, importance of the input data, and pre-processing of the targets.
The major challenge in training a U-net on along-track altimetry data is the low coverage and small number of matchups. We developed a novel method to extrapolate the altimetry data and improve the U-net training procedure that can be used for other applications. Our method involves computing texture features (Haralick et al., 1973) from Sentinel-1 SAR imagery, training a linear regression model to derive SIR from the texture features (TF), segmenting the texture features, and labeling the segments overlapping with the target along-track ice roughness values. The U-net is then trained on the TF-derived roughness and fine-tuned on the labeled segments. We found that SIR from CryoSat-2 data has low spatial resolution and can be derived from SAR and PMW data only with low accuracy. However, our algorithm can derive the ICESat-2 ridging intensity on the pan-Arctic scale from S1 and AMSR2 data with high precision (r = 0.87, RMSE = 0.0013). The maps of S1+AMSR2-derived sea ice roughness have high spatial resolution and depict many large- and small-scale features of the Arctic sea ice: highly deformed multi-year ice (MYI) near the Canadian Archipelago and especially the northern coast of Greenland; a narrow tail of rubble MYI in the Beaufort Sea; smooth first-year ice in the eastern part of the Arctic; ice hummocks near islands and coastal features; etc. The new algorithm will be applied to all S1 and AMSR2 data in the framework of the Copernicus Marine Service project. We will use the derived SIR product for assimilation in the next-generation sea ice model (neXtSIM; Olason et al., 2022) to improve the parametrization of the spatially variable drag coefficient.
References
Duncan, K., & Farrell, S. L. (2022). Determining variability in Arctic sea ice pressure ridge topography with ICESat-2. Geophysical Research Letters, 49, e2022GL100272. https://doi.org/10.1029/2022GL100272
Farrell, S. L., Duncan, K., Buckley, E. M., Richter-Menge, J., & Li, R. (2020). Mapping sea ice surface topography in high fidelity with ICESat-2. Geophysical Research Letters, e2020GL090708.
Fuchs, N., von Albedyll, L., Birnbaum, G., Linhardt, F., Oppelt, N., & Haas, C. (2024). Sea ice melt pond bathymetry reconstructed from aerial photographs using photogrammetry: a new method applied to MOSAiC data. The Cryosphere, 18, 2991–3015. https://doi.org/10.5194/tc-18-2991-2024
Haralick, R. M., Shanmugam, K., & Dinstein, I. (1973). Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 3, 610–621.
Kurtz, N. T., Galin, N., & Studinger, M. (2014). An improved CryoSat-2 sea ice freeboard retrieval algorithm through the use of waveform fitting. The Cryosphere, 8, 1217–1237.
Landy, J. C., Petty, A. A., Tsamados, M., & Stroeve, J. C. (2020). Sea ice roughness overlooked as a key source of uncertainty in CryoSat-2 ice freeboard retrievals. Geophysical Research Letters, e2019GL086487.
Martin, T., Tsamados, M., Schroeder, D., & Feltham, D. L. (2016). The impact of variable sea ice roughness on changes in Arctic Ocean surface stress: A model study. Journal of Geophysical Research: Oceans, 121, 1931–1952. doi:10.1002/2015JC011186
Mosadegh, E., & Nolin, A. W. (2022). A new data processing system for generating sea ice surface roughness products from the Multi-Angle Imaging SpectroRadiometer (MISR) imagery. Remote Sensing, 14, 4979. https://doi.org/10.3390/rs14194979
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 9351, 234–241. https://arxiv.org/abs/1505.04597
Tsamados, M., Feltham, D. L., Schroeder, D., Flocco, D., Farrell, S. L., Kurtz, N., Laxon, S. W., & Bacon, S. (2014). Impact of variable atmospheric and oceanic form drag on simulations of Arctic sea ice. Journal of Physical Oceanography, 44, 1329–1353. https://doi.org/10.1175/JPO-D-13-0215.1
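The regression step of the extrapolation scheme — fitting texture features to sparse along-track roughness, then predicting a dense target field — can be sketched with synthetic data (real inputs would be Haralick GLCM statistics from Sentinel-1 imagery; the feature values and weights below are illustrative):

```python
import numpy as np

# Synthetic stand-in: 500 pixels, 4 texture features (TF) each.
# In practice the TF would be Haralick statistics from SAR GLCMs.
rng = np.random.default_rng(1)
tf = rng.random((500, 4))
true_w = np.array([0.5, -0.2, 0.1, 0.3])          # hidden relationship
roughness = tf @ true_w + 0.01 * rng.standard_normal(500)

# least-squares fit using only the sparse altimetry matchups
# (here: the first 100 samples stand in for along-track points)
w, *_ = np.linalg.lstsq(tf[:100], roughness[:100], rcond=None)

# dense prediction, usable as an initial training target for the U-net
pred = tf @ w
```

The dense TF-derived roughness then serves for pre-training, with fine-tuning restricted to segments that overlap real ICESat-2 tracks.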
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall E1)

Presentation: Multi-Scale In Situ- and Airborne-Derived First Year Sea Ice Geophysical Properties: Implications for Multi-Frequency Radar Backscatter

Authors: Neil Brubacher, Randall Scharien, Christian Haas, Andrew Arreak, Professor Trevor Bell, Brody Mckeown
Affiliations: University of Victoria, Alfred Wegener Institute, SmartICE
Understanding the spatial distributions of sea ice thickness (SIT) and roughness (SIR), as well as the properties of their overlying snowpacks, is important for modelling mechanical and thermal energy exchanges at polar ocean-atmosphere interfaces. In particular, snowpack properties such as depth, temperature, salinity, and density are important for constraining satellite-based retrievals of these sea ice parameters, while also influencing their evolution. Alongside the development of geophysical retrieval models for variables like snow depth, SIT, and SIR from remote sensing, studies characterizing the spatial distributions, variability, and relationships between these variables are essential for enhancing physical understanding and constraining errors. Datasets integrating spatially distributed in situ and airborne measurements of snow and sea ice properties that enable such characterizations are relatively limited. As part of the project Sikunnguaq, meaning in Inuktitut "the likeness or image of ice, as on maps", a field campaign comprising research and community partners from Canada, Germany, and the U.K. was conducted on landfast first-year sea ice (FYI) adjacent to the community of Mittimatalik in the Canadian Arctic in April 2024. As part of the project's objectives to develop sea ice information retrievals from satellite synthetic aperture radar (SAR), a suite of airborne and satellite SAR data was collected coincident with the field campaign. This study will focus on analyses of relationships between two scales of spatially distributed geophysical snow and sea ice properties. The relatively larger, or geometric, scale, consisting of transect measurements and airborne swath acquisitions, includes co-located snow depth, total snow plus sea ice thickness, and surface topography observations.
The smaller, micro-scale, consisting of snow pit and ice core samples, includes detailed properties in the vertical dimension, such as temperature, salinity, density, snow grain size/specific surface area (SSA), and dielectric properties. Measurements were carried out at eight discrete sites, capturing a diversity of snow depth, ice thickness, and ice and surface roughness conditions. This study will examine intra- and inter-site distributions of snow and sea ice properties, length scales of variability, and covariance characteristics of snow depth, sea ice thickness, and surface roughness transect data. In addition, the measurements will be used as input to a forward microwave radiative transfer model (i.e., SMRT) for sensitivity analyses supporting future geophysical parameter retrieval work integrating airborne and spaceborne SAR observations. Overall, this research represents a multi-scale approach to characterizing the spatial distributions of, and relationships between, important snow and sea ice properties and provides a further step towards bridging in situ scale geophysical characterizations with satellite-scale sea ice thickness and roughness retrievals.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall E1)

Presentation: Operational Fiducial Reference Measurements over Sea Ice in support of Sentinel-3 validation (ESA St3TART-FO project)

Authors: Henriette Skourup, Renée Mie Fredensborg Hansen, Sara Fleury, Claire Miller, Christian Haas, Emma Woolliams, Sajedeh Behnia, Tommy Erni, Frederic Vivier, Antonio Lourenco, Robert Ricker, Rolf-Ole Rydeng Jenssen, Jean-Christophe Poisson, Thomas Krumpen, Eva Le Merle, Mahmoud El Hajj, Filomena Catapano, Pierre Féménias
Affiliations: DTU Space, LEGOS, Noveltis, Alfred Wegener Institute (AWI), Helmholtz Centre for Polar and Marine Research, National Physical Laboratory (NPL), LOCEAN, NORCE, Vortex.io, ESRIN
During the ESA/EU Sentinel-3 Topography mission Assessment through Reference Techniques (St3TART) project we identified several reference measurements to be Sentinel-3 FRM compliant. However, due to the snow layer that usually covers the ice, two coincident measurands among the following are needed to validate the radar freeboard measured by Sentinel-3: snow depth, ice freeboard, total freeboard, total ice thickness, sea ice thickness and ice draft, with particular emphasis on coincident observations of snow depth. Thus, we need to combine different platforms and measurement techniques to fully validate Sentinel-3 STM hydro-cryo thematic products over sea ice. Here we will present the activities to be conducted within the framework of the St3TART follow-on (St3TART-FO) project (2024-2028), which forms the basis for the operational validation network for Sentinel-3 Fiducial Reference Measurement (FRM) provision. For the Arctic we have identified Svalbard as the most suitable location for such efforts, as the archipelago is relatively accessible from Europe and offers deployment resources thanks to the presence of the University Centre in Svalbard (UNIS) and several international research centres allowing access to logistics and collaborations. However, as the reachable areas in Svalbard include only first-year ice and not multi-year ice, we have identified the Beaufort Sea as a supplementary opportunity site. The Beaufort Sea, although a good location with respect to the presence of different sea ice types, is difficult to access and relatively expensive compared to the Svalbard option. Our main activities are based on a comprehensive approach to observe and measure ice and snow in key Arctic regions.
These activities include: 1) coincident observations of total thickness, total freeboard, and snow depth along dedicated Sentinel-3 ground tracks from regular AWI IceBird airborne campaigns in the Beaufort Sea, Svalbard, and Fram Strait; 2) deployment of an Ice Profiling Sonar (IPS) on an existing mooring in Storfjorden, Svalbard, strategically positioned below a cross-over point between Sentinel-3A and Sentinel-3B; 3) integration of a LiDAR (Vortex.io) on a snow radar drone developed by NORCE during campaigns in Svalbard; and 4) deployment of two autonomous drifting ice thickness buoys (LOCEAN Ice-T) with integrated miniature radars for precise snow depth measurements. Beyond the collection of measurements, we will present key tasks within the project, i.e., best practices and procedures for data flow, uncertainty estimates, and comparisons to Sentinel-3 STM products over sea ice. We further call for collaborative efforts on third-party reference measurements through dedicated workshops and Announcements of Opportunity (AO) to ensure a consistent network of FRMs throughout the Arctic and Antarctic.
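For context, the hydrostatic-equilibrium conversion that makes such coincident measurands sufficient can be sketched as follows (a simplified illustration assuming the radar freeboard equals the ice freeboard, i.e. full radar penetration of the snow; the densities are typical assumed values, not project constants):

```python
def sea_ice_thickness(radar_freeboard_m, snow_depth_m,
                      rho_water=1024.0, rho_ice=917.0, rho_snow=300.0):
    """Sea ice thickness (m) from radar freeboard and snow depth under
    hydrostatic equilibrium, assuming radar freeboard equals ice
    freeboard. Densities in kg/m^3 are typical assumed values."""
    return ((rho_water * radar_freeboard_m + rho_snow * snow_depth_m)
            / (rho_water - rho_ice))

thickness = sea_ice_thickness(0.10, 0.20)  # 10 cm freeboard, 20 cm snow
```

Because the conversion multiplies freeboard by roughly a factor of ten, small errors in snow depth or freeboard propagate strongly into thickness, which is why coincident snow depth FRMs are emphasized.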
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.85/1.86)

Session: A.09.06 Advances in Permafrost - PART 2

Covering more than 25% of the Earth's surface, permafrost is one of the dominant landscape-forming features surrounding the Arctic Ocean. It is also present at higher altitudes of mountainous areas and in Antarctica. Permafrost is an Essential Climate Variable within the Global Climate Observing System, and is associated with climate tipping points.

Permafrost is a sub-ground phenomenon that cannot be directly observed from space, yet satellite observations have major potential to support local, regional and circumpolar monitoring of this key aspect of the climate system. This session will showcase some of the more recent key achievements in circumpolar and mountain permafrost monitoring, including methods/algorithms, science and applications.

Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Spatial patterns of retrogressive thaw slumps in West Siberia

Authors: Nina Nesterova, Marina Leibman, Ingmar Nitze, Ilya Tarasevich, Guido Grosse
Affiliations: Alfred Wegener Institute, University of Potsdam, Earth Cryosphere Institute, Lomonosov Moscow State University
Retrogressive thaw slumps (RTSs) are spectacular and dynamic landforms resulting from thawing ice-rich permafrost or the melting of massive ground ice, often found in sloped terrain. These landforms occur across the Arctic, subarctic, and high-mountain permafrost areas. In recent decades, the size and number of RTSs have been reported to increase significantly due to the impacts of climate warming, which accelerates permafrost thaw in many regions. The West Siberian Arctic, underlain by continuous permafrost, has a particular abundance of RTSs. This prevalence is largely attributed to the widespread massive ground ice, which often occurs close to the surface, creating favorable conditions for RTS development. Research on RTSs in the north of West Siberia has been predominantly conducted on a local scale, relying on fieldwork at several key sites. While large-scale studies have adopted automated mapping techniques using remote sensing data, these approaches face significant limitations in the region. The moderate spatial resolution of many remote sensing datasets (e.g., 30 m for Landsat) is insufficient for detecting smaller RTSs, and the partial coverage of the West Siberian Arctic with openly available high-resolution imagery hampers comprehensive RTS studies in this area. Furthermore, the lack of high-resolution ground truth data, frequent false positives, and challenges in interpreting the complex spatial patterns of overlapping or nested RTSs add to the difficulties of automated mapping. As a result, a full understanding of RTS phenomena in West Siberia remains elusive so far, hindered by both the logistical challenges of conducting extensive fieldwork in remote locations and the uncertainties of automated techniques.
To bridge this gap, we analyzed a state-of-the-art high-quality RTS inventory for West Siberia (Nesterova et al., in prep.), created by manual mapping using multi-source, multi-year high-resolution satellite base maps from ESRI, Google Earth, and Yandex Maps. This inventory comprises over 6000 RTS points and reveals important spatial trends, including clustering patterns. Based on this dataset, we conducted the first detailed analysis of modern RTS characteristics in West Siberia, synthesizing multiple environmental parameters such as elevation, land cover, water bodies, and climate data. This study provides new insights into the characteristics and distribution of RTSs, contributing to a deeper understanding of their behavior in this Arctic region.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Monitoring the Essential Climate Variable (ECV) Quantity Rock Glacier Velocity (RGV) Using InSAR: Steps Towards a Standardized and Consistent Approach

Authors: Sebastian Buchelt, Philipp Bernhard, Yan Hu, Mengze Li, Line Rouyet, Tazio Strozzi, Zhangyu Sun, Lotte Wendt
Affiliations: Department of Remote Sensing, Institute of Geography and Geology, University of Wuerzburg, GAMMA Remote Sensing AG, Department of Geosciences, University of Fribourg (UNIFR), Department of Earth and Environmental Sciences, The Chinese University of Hong Kong, NORCE Norwegian Research Centre AS
Mountain permafrost is currently undergoing strong changes due to the changing climate. One key indicator for mountain permafrost is the occurrence of rock glaciers. They are formed by permafrost creep and have a distinct morphology, with a front, lateral margins, and often a surface of ridges and furrows, making them ideal for remote sensing observation. While absolute movement rates of rock glaciers depend on various factors (e.g., topographic, structural and climatic), relative changes in rock glacier velocity can be primarily attributed to the thermal and hydrological state of the permafrost body. Hence, relative changes in movement are regarded as the most indicative short- to medium-term response of rock glaciers to environmental changes and are thus considered an important indicator of mountain permafrost conditions in general. Therefore, Rock Glacier Velocity (RGV) was added in 2022 as an associated quantity of the Essential Climate Variable (ECV) Permafrost. Along with that, the Rock Glacier Inventories and Kinematics (RGIK) Standing Committee of the International Permafrost Association (IPA) developed guidelines specifying how to document RGV. RGV is defined as a “time series of annualized surface velocity values expressed in m/y and measured/computed on a rock glacier unit or a part of it”. Such time series can be derived from various techniques, both in situ and remote sensing based. Several recent studies have shown the successful application of Interferometric Synthetic Aperture Radar (InSAR) for inventorying rock glaciers and quantifying their displacement. Specifically, Sentinel-1 C-band SAR is increasingly used due to its free and open data policy. So far, however, these studies use different approaches and software to derive RGV from InSAR.
To assure standardized ECV generation in the future, there is a strong need to assess inter-operator consistency and to identify crucial parameters in InSAR processing and RGV generation that require consistent handling across different software and users. To address these questions, several working groups calculate RGV time series for three landforms in Switzerland, Italy and France using their individual methodological approaches in different InSAR software (e.g. GAMMA, SNAP + PyRate, NORCE GSAR, ASF HyP3). The results of the different InSAR approaches are compared as well as analyzed together with RGV products from GNSS in situ measurements and airborne photogrammetry. The results show that the handling of phase unwrapping artifacts and the spatial aggregation (i.e., selection of the moving area) are amongst the most important parameters affecting the consistency and accuracy of the resulting RGV values. Other processing parameters, such as the multilooking window size, the criteria to select suitable interferograms and the length of the seasonal observation window, might also influence the results. We will present a detailed analysis of the current state of our comparison and give insights into the best practice guidelines currently under development.
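The annualization step in the RGV definition can be sketched as follows (a minimal illustration; the downslope projection via a single cosine factor is a simplification, and the RGIK guidelines define the full procedure and observation windows):

```python
import numpy as np

def annualized_velocity(dates, disp_m, cos_angle):
    """Annualized surface velocity (m/yr) from a line-of-sight (LOS)
    displacement time series, projected to the downslope direction.
    `cos_angle` is the cosine of the angle between the LOS and the
    downslope direction (a simplifying assumption)."""
    days = (dates[-1] - dates[0]) / np.timedelta64(1, "D")
    return (disp_m[-1] - disp_m[0]) / cos_angle / days * 365.25

# one year of motion: 5 cm LOS displacement, LOS at 60 deg to the slope
dates = np.array(["2023-08-01", "2024-08-01"], dtype="datetime64[D]")
v = annualized_velocity(dates, np.array([0.0, 0.05]), cos_angle=0.5)
```

Because the projection factor enters as a divisor, its handling (like the moving-area selection discussed above) directly affects inter-operator consistency.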
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Reactivated post-fire thaw subsidence after stabilization: Revealed by over a decade of ALOS series InSAR timeseries analysis

Authors: Kazuki Yanagiya, Professor Masato Furuya, Go Iwahana, Takeo Tadono
Affiliations: Japan Aerospace Exploration Agency, Hokkaido University, University of Alaska Fairbanks
Wildfires in permafrost regions can trigger rapid thaw subsidence by causing vegetation loss, soil moisture changes, and reduced surface albedo. The frequency and extent of wildfires in the Arctic region have increased in recent years, while the recovery of burned vegetation and subsurface peatlands requires significant time. Consequently, wildfires in permafrost regions have long-term impacts on the global climate through carbon emissions. Interferometric Synthetic Aperture Radar (InSAR) and its time-series analysis enable millimeter-scale precision measurements of ground deformation, allowing for spatially extensive monitoring of thaw subsidence. Long-term InSAR analyses of burned areas can elucidate the transition from rapid post-fire thawing to stabilization and recovery processes. These insights contribute to improved estimates of carbon emissions from wildfire events, enhancing our understanding of their impacts on global climate systems. This study focuses on the 2014 wildfire scar near Batagay in northeastern Siberia. Our previous study (Yanagiya and Furuya, 2020) used SBAS time-series analysis with ALOS-2 Stripmap 10 m mode data to detect interannual ground subsidence of up to approximately 30 cm following the fire. In this study, we analyzed ALOS/PALSAR data to investigate pre-fire displacement under stable ground surface conditions, as well as recent ALOS-2/PALSAR-2 data to examine post-fire ground deformation after the stabilization of the rapid subsidence. The interannual ground subsidence within the fire scar had stabilized in most areas. A time series of Sentinel-2 data showed that post-fire NDVI began a recovery trend in 2019. This trend corresponded with the stabilization of interannual ground subsidence in the fire scar. However, annual ground subsidence was detected again in some areas during the summer of 2023 despite NDVI showing the same recovery trend. This suggests ongoing localized ground ice melt even 10 years after the fire.
Moreover, seasonal thaw subsidence and frost heave of up to 7 cm have continued within the scar. Additionally, we utilized data from ascending (Stripmap 10 m mode) and descending (Stripmap 3 m mode) ALOS-2 orbits to perform a 2.5D decomposition of ground displacement into quasi-up-down and quasi-east-west components. Quasi-east-west displacements showed slope-aligned movements of up to approximately 6 cm over three years, detected both within and outside the fire scar. In contrast, quasi-up-down displacements showed significant subsidence signals of up to 12 cm, primarily confirmed within the areas where annual subsidence has reactivated. ALOS-4, the successor to ALOS-2, was successfully launched in July 2024. Operating in the same orbital path as ALOS-2 with consistent observation modes, ALOS-4 enables InSAR analysis between datasets from both satellites. Following calibration and validation, we will process ALOS-4 data together with ALOS-2 data to perform high-temporal-resolution, long-term InSAR time-series analyses, providing deeper insights into post-fire thaw processes.
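The 2.5D decomposition amounts to solving a small linear system per pixel from the ascending and descending line-of-sight (LOS) displacements (a minimal sketch; the geometry below is illustrative, not the exact ALOS-2 viewing configuration, and north-south motion is neglected as in any 2.5D approach):

```python
import numpy as np

def decompose_25d(d_asc, d_desc, los_asc, los_desc):
    """Recover quasi-east-west and quasi-up-down displacement from
    ascending/descending LOS displacements. `los_*` hold the
    (east, up) components of each LOS unit vector."""
    A = np.array([los_asc, los_desc])      # 2x2 design matrix
    return np.linalg.solve(A, np.array([d_asc, d_desc]))

# illustrative right-looking geometry at ~34 deg incidence
inc = np.deg2rad(34.0)
los_asc = (-np.sin(inc), np.cos(inc))
los_desc = (np.sin(inc), np.cos(inc))

# forward-model a known motion, then invert it
true_ew_up = np.array([0.05, -0.02])       # 5 cm eastward, 2 cm subsidence
d_asc = float(np.dot(los_asc, true_ew_up))
d_desc = float(np.dot(los_desc, true_ew_up))
est = decompose_25d(d_asc, d_desc, los_asc, los_desc)
```

Combining both geometries this way is what separates the slope-aligned (quasi-east-west) motion from the subsidence (quasi-up-down) signal described above.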
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Mapping the response of alpine permafrost to a decade-long human disturbance in the northeastern Tibetan Plateau using satellite remote sensing

Authors: Shuping Zhang, Dr. Ji Chen, Prof. Zheng Duan, Lijun Huo, Xinyang Li, Chengying Wu, Prof. Hucai Zhang, Qi Feng
Affiliations: NWIPB, Chinese Academy Of Sciences, State Key Laboratory of Frozen Soil Engineering, Northwest Institute of Eco-Environment and Resources, Chinese Academy of Sciences, Department of Physical Geography and Ecosystem Science, Lund University, Institute for Ecological Research and Pollution Control of Plateau Lakes, Yunnan University, Key Laboratory of Eco-hydrology of Inland River Basin, Chinese Academy of Sciences
Alpine permafrost, a critical source of freshwater for many rivers around the globe, plays an important role in the global hydrological cycle. However, alpine permafrost is highly vulnerable to both climate change and human disturbance, such as mining. Therefore, understanding how alpine permafrost responds to human disturbance in the context of a changing climate is essential for ensuring the sustainability of water resources in these fragile environments. This study investigates the response of alpine permafrost to a decade-long human disturbance in the Muri area located in the headwaters of the Datong River, a major tributary of the Yellow River, on the northeastern Tibetan Plateau. The Muri area, characterized by extensive permafrost, has experienced intensive mining activities followed by replantation efforts over the past two decades. To assess the impacts of these disturbances, we combined SRTM DEM, MODIS NDVI, and Landsat land surface temperature (LST) data to downscale the daily MODIS LST product using a machine learning method. Using NDVI as an indicator of human disturbance, LST changes over mine features (such as the mine pit) were analyzed at high spatiotemporal resolution alongside human disturbances such as mining and replantation at different phases. Furthermore, the annual active layer thickness (ALT) of the permafrost in the Muri area was derived from the downscaled LST data according to the Stefan model for the period 2000-2023, and validated with in-situ measurements. The response of permafrost ALT to human disturbance in the Muri area was analyzed at spatiotemporal scale and the underlying mechanism was described. The findings of our study offer valuable insights into the response of alpine permafrost to human disturbance in the context of climate change and support better preservation of alpine permafrost in similar areas.
Furthermore, the downscaled LST and derived ALT data generated in this study will serve as an important inventory for further alpine permafrost studies in the region.
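The Stefan-model step can be illustrated compactly: ALT scales with the square root of accumulated thawing degree-days from the (downscaled) LST series, multiplied by an edaphic factor that bundles soil thermal properties. A minimal sketch, with a purely illustrative factor value, the study's calibrated parameters are not given in the abstract:

```python
import numpy as np

def stefan_alt(daily_lst_c, edaphic_factor=0.03):
    """Active layer thickness (m) from a 1-D array of daily LST (degC),
    via ALT = E * sqrt(DDT). DDT is the sum of daily temperatures above
    0 degC over the thaw season; E = 0.03 is an assumed placeholder."""
    ddt = np.sum(np.clip(daily_lst_c, 0.0, None))  # thawing degree-days
    return edaphic_factor * np.sqrt(ddt)

# Example: a 100-day thaw season at a constant 9 degC gives DDT = 900,
# so ALT = 0.03 * sqrt(900) = 0.9 m.
alt = stefan_alt(np.full(100, 9.0))
```

In practice E would be calibrated per pixel or per land cover class against the in-situ ALT measurements mentioned above.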

Thursday 26 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: HIGH-SALINITY LIQUID WATER AS A SOURCE OF UNCERTAINTY IN BEDFAST LAKE ICE MAPPING

Authors: Helena Bergstedt, Benjamin M. Jones, Mikhail Kanevskiy, Andrew Parsekian, Rustam Khairullin, Annett Bartsch, Chiara Gruber
Affiliations: b.geos, Austrian Polar Research Institute, Institute of Northern Engineering, University of Alaska Fairbanks, University of Wyoming
Arctic permafrost landscapes are characterized by numerous shallow lakes which are of ecological, climatic, and cultural significance. Understanding the depth of Arctic lakes is crucial for hydrologic and climatic studies, biogeochemical processes, and resource management. Previous assessments estimate that more than 70% of lakes on the Arctic Coastal Plain (ACP) in northern Alaska develop bedfast ice, with ice freezing to the bottom of the lakebed during the winter. Synthetic Aperture Radar (SAR) has previously been used to distinguish between bedfast and floating ice lakes on a regional and circumpolar scale. The remote sensing-based method has allowed for the characterization of the two primary lake ice regimes across broad spatial scales as well as conducting more detailed lake-specific studies. However, high-salinity liquid water present below ice has been found to be a source of uncertainty in SAR lake ice mapping efforts, leading to misclassification of floating lake ice as bedfast lake ice, possibly introducing an overestimation of bedfast or shallow lake areas in local, regional, and pan-Arctic data sets. Moderate-to-high-salinity water can be caused by ocean water intrusions through storm surges for near-coastal lakes, through hydrologic connectivity with the ocean, because of evaporative enrichment, and through the thaw of saline permafrost soils that are a relic of past high sea level stands. To investigate the possible magnitude of errors introduced into bedfast lake ice data sets, an extensive field campaign was conducted in May 2024, sampling more than 100 lakes along an 800 km snowmachine expedition on the ACP. Lakes were sampled for depth, ice thickness, and conductivity of the water remaining below the ice. Sentinel-1 SAR imagery (Interferometric Wide swath and Extra Wide swath mode) acquired during the field campaign was then tested against our field observations to better understand the relationship between salinity and SAR backscatter.
Time series of Sentinel-1 acquisitions over the frozen season (December-May) were analysed to better understand the temporal dynamics of bedfast and floating lake ice, and of lake ice over moderate-to-high-salinity water, during the winter months. Here, we aim to propose solutions to the uncertainty caused by residual moderate-to-high-salinity water in large-scale lake ice mapping efforts. Resolving misclassification issues due to the presence of high-salinity liquid water below lake ice has implications for permafrost and carbon cycling studies, climate and hydrological modelling, ecological studies and habitat mapping, and winter overland travel route planning for local communities.
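The SAR-based separation of bedfast and floating ice rests on a backscatter contrast: ice frozen to the lakebed returns low backscatter (no ice-water interface), while floating ice returns high backscatter. The saline-water cases described above violate this contrast by depressing backscatter under floating ice. A minimal thresholding sketch, where the -18 dB cut-off is an assumed placeholder rather than the study's calibrated value:

```python
import numpy as np

def classify_lake_ice(sigma0_db, threshold_db=-18.0):
    """Label lake ice pixels from Sentinel-1 backscatter (dB):
    1 = floating ice (bright: ice-water interface present),
    0 = bedfast ice (dark: ice frozen to the lakebed).
    High-salinity liquid water below the ice can push floating-ice
    pixels below the threshold, producing the misclassification the
    abstract discusses."""
    return (np.asarray(sigma0_db) > threshold_db).astype(np.uint8)

# Three example pixels: dark, bright, and a borderline-dark pixel that
# could be either bedfast ice or floating ice over saline water.
labels = classify_lake_ice([-22.5, -12.0, -19.1])
```

In a time-series setting the same threshold would be applied per acquisition to track when each lake transitions to bedfast conditions through the winter.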

Thursday 26 June 11:30 - 13:00 (Hall M1/M2)

Session: F.03.03 Insights into commercial EO data programmes of ESA and other national and international entities

The session will discuss the various institutional programmes for EO data purchase, addressing operational and R&D needs and contributing to the consolidation of Europe’s EO industry. This session will present the different market actors, illustrate data purchase programmes of ESA and other institutions, and showcase activities performed with commercial satellite data. The demand for EO will be presented by institutions that have established data purchase contracts for specific purposes. Commercial suppliers of Earth Observation data will give testimony on the benefits of working with ESA and other national, European and international entities. The companies will present their solutions in the context of data delivery to institutional clients and explain how these contracts and collaborations contribute to their success.

Speakers:


  • The ESA co-chairs
  • Anneleen Oyen - Dutch Space Office
  • Jappe Jongejan - Dutch Space Office
  • Anna Brand - Ororatech
  • Geosat
  • GHGSAT
  • NASA
  • Martin Lenk - Bundesamt für Kartographie und Geodäsie (BKG)
  • UK Government Digital Service, Department for Science, Innovation and Technology

Thursday 26 June 11:30 - 13:00 (Hall E2)

Session: D.01.08 4th DestinE User eXchange - Listening to the Users: Turning Feedback into Functionality

Listening to the Users: Turning Feedback into Functionality

Users must be at the heart of any data project and DestinE is no exception. This session will open with insights from ongoing studies and user surveys, setting the stage for a broader discussion on how user input shapes the evolution of DestinE.

The session will also explore the many ways in which DestinE is becoming more user-oriented through practical tools, co-design approaches, and improvements in usability and access to information. It will conclude with a forward-looking perspective on upcoming innovations and developments.

Attractiveness of DestinE - A study


  • Christophe Taillandier - Mews Partners

First results of the DestinE survey


  • Alexis Longuet - Serco

Result of the German DestinE Survey


  • Andreas Preusser - German Space Agency (DLR)

Co-Designing Services with End-Users


  • Malik Terfous - Armines Paris PSL

Advancing Destine’s Excellence by Quality Measures | DestinE Platform Operational Quality Framework Service | Designing a Quality Control Framework Concept for DestinE


  • Claudia Vitolo - ESA
  • André Obregón - ECMWF

What's coming to users?


  • Jörn Hoffmann - ECMWF
  • Michael Schick - EUMETSAT
  • Franka Kunz - ESA
  • Charalampos Tsitlakidis - EC COM

Closing of the Event


  • Kathrin Hintze - ESA
  • Charalampos Tsitlakidis - EC COM

Thursday 26 June 11:30 - 13:00 (Room 0.96/0.97)

Session: F.04.11 Earth Observation for Environmental Compliance: Enhancing Monitoring, Guidance, and Enforcement - PART 1

Environmental crime is a rapidly growing global issue, increasing by 5-7% annually. The broad spectrum of environmental crime includes illegal waste management, logging, and water abstraction, as well as pollution, habitat destruction, and wildlife trafficking.

As environmental crime often involves transnational criminal organizations, international cooperation is needed to dismantle the networks that perpetrate it. The European Union's new environmental crime directive aims to bolster criminal law enforcement against the most severe environmental offenses, as part of the European Green Deal.

Effectively combatting environmental crime hinges on robust evidence. Earth Observation technology can support monitoring, inspection, and evidence gathering, thus enhancing environmental crime investigations. However, challenges related to data privacy, quality, availability, and legal admissibility must be overcome to fully realize the potential of Earth observation in the fight against environmental crime.

This session will:
• Identify and evaluate EO-based methods to help detect and characterize environmental crimes and their impacts.
• Explore geospatial and open-source intelligence in multidisciplinary evidence collection, including the role of citizen science.
• Discuss the effective integration of EO into environmental compliance assurance and law enforcement.
• Analyse practitioner needs for new sensor data, processing tools, analytical methods, and operational modes.
• Foster dialogue among policymakers, researchers, and practitioners.
• Inform the development of a roadmap for wider EO adoption in environmental crime investigations through ESA-JRC collaboration.

Thursday 26 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Empowering Environmental Enforcement with Real-Time Hyperspectral Intelligence

Authors: Michal Shimoni
Affiliations: Kuva Space
Hyperspectral imaging is a powerful tool for advancing Earth Observation (EO) in environmental compliance by delivering detailed spectral data that surpasses traditional imaging techniques. Unlike conventional multispectral sensors, hyperspectral imaging captures in-depth physical and biochemical information, enabling precise identification of materials, pollutants, and changes in environmental conditions—making it invaluable for monitoring adherence to environmental regulations. Kuva Space, a Finnish space company, is developing the Hyperfield, a constellation of 100 hyperspectral microsatellites designed to deliver hyperspectral imagery of any location on Earth at visible-to-near infrared (VIS-NIR, 450-950nm) and visible-to-shortwave infrared (VIS-SWIR, 450-2500nm) wavelengths. Selected by the European Space Agency to be the sole provider of hyperspectral data for the Copernicus program, Kuva Space plans to collect daily hyperspectral data across Europe by 2027, with coverage increasing to three times daily by 2030. The satellites will continuously gather data without the need for manual tasking over lands and seas, while intra-calibration with Sentinel satellites will ensure high temporal resolution and robust radiometric and orthorectification accuracies. The innovative technology behind Kuva Space’s satellites and hyperspectral payloads enables flexible data acquisition modes, including selectable spectral wavelengths and tunable signal-to-noise ratios (SNR). By integrating AI-powered automated analysis, this capability enables the company to efficiently and affordably deliver Earth observation products, supporting global monitoring efforts and assessing compliance with environmental guidelines or identifying evidence of non-compliance. 
For land monitoring, Hyperfield facilitates the detection of subtle shifts in vegetation health, water quality, and soil composition, which can indicate environmental degradation or illegal activities such as unauthorized land use or pollutant discharges. Its rich spectral data can precisely pinpoint pollution sources, differentiate between organic and inorganic contaminants, and identify specific chemical compounds, allowing authorities to accurately trace the origins of pollutants. In forest management, hyperspectral analysis goes beyond detecting logging activities; it can differentiate plantations from native forests and evaluate their effects on soil health, biodiversity, and ecosystems. These detailed, actionable insights into the environmental impact of industrial operations, agricultural practices, and urban development have the potential to drive the creation of targeted policies and best practices. Such data-driven approaches aim to mitigate ecological damage while supporting sustainable economic and industrial growth. Kuva Space has designed its new generation satellites to provide enforcement agencies and real-time responders with advanced tools and reliable evidence to detect non-compliance with environmental laws. By integrating cutting-edge GPUs (NVIDIA Jetson AGX Orin) into Hyperfield’s second-generation satellites, Kuva Space enables onboard AI-based calibration and processing. Specifically, the company will be able to align spectral bands, generate hyperspectral data cubes, detect cloud-covered scenes, and compress data to maximize the volume of information transmitted to ground stations. The integration of AI-powered analytics directly in orbit will automate the detection of potential violations, reducing the time and resources required for compliance checks while generating real-time alerts. 
Additionally, Kuva Space plans to integrate into Hyperfield’s second-generation payload high-resolution RGB imagers, AIS receivers, and Sat-to-Sat and Sat-to-IoT mobile communications to accelerate real-time delivery of analytic products to end-users via secure APIs. This capability is especially critical for security services, exemplified by the detection and tracking of suspicious vessels involved in illegal dumping. For instance, if a vessel lacks an AIS signal, an alert is triggered, prompting the collection of spectral data. Its location is immediately sent via text to marine law enforcement. Once the vessel reactivates its AIS system when nearing port, it can be automatically rediscovered and tracked by Hyperfield analytics, with real-time updates of location, imagery, and AIS data sent securely to enforcement agencies via Sat-to-Sat communication. Kuva Space will also offer on-demand tasking capabilities to support disaster management, security, defense, and climate change monitoring services. Hyperfield’s versatile imaging technology includes four acquisition modes: optimized scanning, stereo imaging from off-nadir angles, enhanced-resolution imaging through micro-movements, and extended "staring" at specific areas to meet the detailed imaging requirements of security services applications. To conclude, the Hyperfield constellation represents a transformative approach to environmental compliance, providing enhanced monitoring, data-driven policy support, and efficient enforcement tools to protect ecosystems and promote sustainable practices on a global scale.

Thursday 26 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Enhancing Compliance Assurance through Earth Observation: A Policy-Oriented Approach

Authors: Christelle Vancutsem, Meriam Lahsaini, Kilsedar Kilsedar, Rosana Grecchi, Mark Dowell, Pieter Simon Alexander Beck, Peter
Affiliations: Joint Research Center - European Commission, Arcadia SIT for the Joint Research Centre (European Commission)
The European Commission's Knowledge Centre on Earth Observation (KCEO) has conducted a comprehensive Deep Dive assessment examining how Earth Observation (EO) can enhance compliance assurance, a critical domain given the EC's central role in enforcing EU laws (Treaty on the Functioning of the European Union (TFEU)). This assessment evaluated EO's triple role in assuring compliance through 1. promotion (providing free and open data, fostering awareness and transparency, and educating stakeholders about the environmental impact), 2. implementation monitoring (enabling broad-scale surveillance, detecting and monitoring areas at risk of non-compliance, ensuring prompt detection of any deviations), and 3. enforcement (supplying evidence for legal interventions). The assessment combined two complementary perspectives. First, it examined the state of the art of established EO applications across major policy domains, including:
• Agriculture: The Common Agricultural Policy has long relied on EO to verify area-based aid (dedicated to agriculture and rural development), with satellite monitoring constituting nearly 80% of farm checks across the EU.
• Environmental Monitoring: The Commission uses geospatial intelligence to investigate complaints regarding the deterioration of habitats protected under the Nature Directives, assess compliance risks, and inform enforcement actions.
• Maritime Surveillance: Multiple EU agencies leverage EO data and Copernicus Services to provide integrated satellite and vessel tracking data to national authorities. These capabilities enable detection of illegal fishing activities and oil spills in near real-time, and support critical functions including fisheries control and environmental protection across European waters.
• Security: EU agencies support EU external action and security through the Copernicus Security Service Component, providing geospatial intelligence that enables the detection of illegal activities in maritime security, critical infrastructure monitoring, and crisis management.
• Air quality: The Copernicus Atmosphere Monitoring Service supports reporting and analysis, offering products and services that describe past, current, and future air pollution fields.

For each domain, the analysis examined EU policy needs and requirements, EO utilization maturity, and recent developments. The assessment also explores emerging policy areas where EO shows potential to support compliance. Second, the assessment investigated specific use cases in collaboration with four Commission policy Directorates-General (ENER, CLIMA, GROW, ENV), related to various EU policy areas where EO has the potential to support compliance. These include:
• Monitoring Building Thermal Performance and Solar Potential in the context of the Covenant of Mayors (CoM) initiative and energy-related directives.
• Provision of a GHG monitoring service at the city level in relation to the CoM and related Air Quality Directives.
• Monitoring CH4 and CO2 emissions through the fossil fuel lifecycle in the context of the new regulation on reducing methane emissions.
• Mapping and monitoring critical raw material mining sites under the Critical Raw Materials (CRM) Act.
• Supporting water quality monitoring and control across multiple directives (including the EU Water Framework Directive (WFD) and Marine Strategy Framework Directive (MSFD)).

For each use case, fitness-for-purpose assessments, with respect to stated requirements, helped identify gaps and provided specific recommendations for the future evolution required for the Copernicus and Research Programmes.
Finally, we examined cross-cutting challenges, associated with compliance assurance, including assessing data accuracy and veracity, traceability and reproducibility of methods, as well as the legal constraints related to the admissibility of EO data in court proceedings. Our findings provide strategic recommendations for enhancing EO's contribution to compliance assurance. This work aims to inform the future trajectory of the Copernicus program and related EU research initiatives while supporting evidence-based compliance monitoring and enforcement across the EU.

Thursday 26 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: EO as evidence for Environmental Crimes legal proceedings

Authors: Wouter Justus Veening
Affiliations: Institute for Environmental Security
The Amazon basin is currently enduring a severe drought caused by a strong El Niño event, exacerbated by climate change. This has led to rivers drying up, fisheries collapsing, wildfires engulfing entire cities, and critical infrastructure failing. These conditions provide a glimpse into a future of extreme climate events and offer critical lessons for adaptive and mitigative actions to safeguard the Amazon's biodiversity, carbon storage, and support for tens of millions of people. However, the crisis is not solely a result of climate phenomena but also reflects a long-standing erosion of the Amazon's resilience due to widespread agricultural and extractive activities, often linked to organized crime. The Amazon rainforest operates through intricate feedback loops that sustain its climate. Trees release water vapor and spores, forming clouds and producing rain, which supports the forest’s moisture-dependent ecosystems. Deforestation and forest degradation disrupt this cycle, decreasing atmospheric moisture, drying soils, and making the region increasingly vulnerable to fires and droughts. The rainforest risks transitioning to a savannah-like state, which would release an estimated 80 billion tons of carbon by the end of the century, cause significant biodiversity loss, and destabilize global weather systems. The Amazon’s tipping point thresholds, including global warming above 2°C, reduced rainfall, and increased dry seasons, highlight its precarious state. If crossed, these thresholds could trigger a cascade of ecological collapses globally, including disruptions to oceanic and polar systems. Human activities, including mining, agriculture, and logging, both legal and illegal, have significantly weakened the Amazon's ability to withstand and recover from drought. The Guiana Shield, a region critical to the Amazon's hydrological and climatic functions, exemplifies this degradation. 
Despite being a biodiversity hotspot and a major carbon and freshwater reservoir, the Guiana Shield is heavily impacted by gold mining, which accounts for over 90% of forest loss in the region. Artisanal and small-scale gold mining, often mechanized and large-scale, clears forests, erodes soils, and pollutes ecosystems with mercury. These activities not only devastate the environment but also lead to social harms, including mercury poisoning, human trafficking, and violent displacement of indigenous communities. This pattern of exploitation, compounded by organized crime, weakens the Amazon’s resilience and accelerates its decline. The degradation of the Amazon has already shifted it from being a carbon sink to a net carbon source, intensifying global warming and local drought conditions. Moreover, the destruction of indigenous territories and their expulsion from ancestral lands amounts to a form of slow genocide, undermining cultural and ecological resilience. These factors reinforce the cycle of deforestation and degradation, creating a feedback loop of environmental and social collapse. Addressing these interconnected crises requires urgent and holistic action. International legal frameworks, such as the prosecution of ecocide by the International Criminal Court, must be strengthened to hold accountable those responsible for environmental destruction. Scientific evidence, exploiting satellite observations and field verification, shall be used to document and combat ecocide effectively. The renewed EU Environmental Crime Directive can provide additional policy support, while case studies focusing on mining and export-oriented agriculture in the Amazon can serve as illustrative examples to inform legal and policy interventions.

Thursday 26 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: A Combined EO/OSINT Approach for Effective Detection and Enforcement Against Environmental Crimes

Authors: Valerio Botteghelli, Filippo Britti, Donatella Giampaolo, Adriano Benedetti Michelangeli, Bernd Schulte, Alix Leboulanger, Anne-Lynn Dudenhoefer, Gerhard Backfried, Manuel Roveri
Affiliations: e-GEOS, GAF AG, JANES, HENSOLDT Analytics, DHIRIA
The impacts of criminal environmental and illegal trafficking activities on Earth are growing significantly. Earth Observation and contextual analysis can provide a key asset to monitor and report environmental crimes, and to work towards holding those responsible accountable. Earth Observation, with its ability to reliably document remote areas over long periods of time, from historical analysis to near-real-time observations, provides a way to overcome many of these challenges, while contextual analysis of social media and security events allows a synoptic view of the whole scenario in which these crimes take place. In this context the EO4SEC project was initiated, with e-GEOS leading a consortium consisting of GAF, HENSOLDT Analytics and JANES. The project focuses on developing new methodologies to combat environmental crimes. Our approach aims to improve investigators' analysis and support enforcement authorities by leveraging the combined use of multiple data sources, integrating EO and Open-Source Intelligence (OSINT). This integration helps identify relevant indicators that characterize illicit activities at various stages (anticipation, detection, monitoring). The project team tested this approach through two use cases to assess how integrated EO and non-EO services can support this methodology, leveraging the latest innovative technologies and methods (including AI methods for enhanced feature detection and classification, automatic change detection, data mining, combination of EO with OSINT, movement datasets and context information, and enhanced analytics and modelling tools). The goal was to design and develop pre-operational information services that meet the investigative needs of end users.
GAF contributed to the EO4SEC project by applying a stepwise approach to detect, localize, monitor and follow up environmental crime induced by alluvial gold miners in protected indigenous areas in Brazil by means of different satellite sources. The first step was to detect changes in the canopy using radar satellite data (Sentinel-1) at a bi-weekly interval and to classify the changes as "related to alluvial mining" and "located in indigenous area" across a larger surface of the Tapajos River basin and some direct neighbourhoods. In a second step, public optical satellite data (monthly mosaics from Planet and Sentinel-2) were used to validate the changes in the radar signal and to confirm the alluvial mining classification. The third step involved collecting and cross-referencing OSINT from media, police seizures, and other sources, conducted by JANES and HENSOLDT. OSINT analysis helped pinpoint areas of specific relevance or urgency. The fourth step involved cross-validating these findings with high-resolution (<0.5 m) satellite imagery before, during, and after identified events, leading to the discovery of a likely information leak concerning a planned raid on an alluvial mining site, which had enabled the miners to conceal their equipment in advance. In both use cases JANES and HENSOLDT Analytics provided extensive analysis of contextual background through traditional and social media data sources, as well as local counterparts. Data on security-related events such as arrests and seizures were incorporated, helping to create a comprehensive picture of the situation and offering complementary insights to the EO analysis. Regarding the Mekong region use case, e-GEOS realized a satellite-based analysis with the goal of identifying and monitoring human activity related to relevant areas of interest and indicated locations suspected of drug production. The scenario was dense and active, with a rich combination of factors to consider.
Through Earth Observation, it was possible to characterize the temporal development of physical structures and changes over a large timescale, while also monitoring specific activities around key locations on a finer, smaller timescale. Meanwhile, HENSOLDT Analytics employed innovative analysis of social media (e.g., Facebook, Telegram groups) to track keyword mentions, providing crucial insights in response to evolving cybercrime threats and further enriching the understanding of the overall situation. Moreover, the company DHIRIA developed a methodology for secure data transmission based on homomorphic encryption, reviewing encryption methods applied to remotely sensed data processing, a new capability which will be further explored in future projects.
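The bi-weekly radar change detection in the first step of the stepwise approach can be sketched as a log-ratio test between consecutive Sentinel-1 backscatter composites, flagging canopy loss where backscatter drops sharply. The 3 dB threshold below is an assumed placeholder, not the project's calibrated value:

```python
import numpy as np

def canopy_change(sigma0_before_db, sigma0_after_db, drop_db=3.0):
    """Flag pixels whose backscatter (dB) dropped by more than drop_db
    between two temporally averaged acquisitions. In dB units the
    log-ratio of two backscatter values is simply their difference."""
    ratio_db = np.asarray(sigma0_after_db) - np.asarray(sigma0_before_db)
    return ratio_db < -drop_db  # True = candidate canopy-loss pixel

# Two example pixels: one with a 5 dB drop (flagged), one nearly stable.
flags = canopy_change([-7.0, -8.0], [-12.0, -9.0])
```

Flagged pixels would then be intersected with indigenous-area boundaries and mining-signature criteria before the optical validation step.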

Thursday 26 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Adoptability of international Land cover and land use standards for Earth Observation in Ghana

Authors: Elisha Njomaba, Ph.D. Filippo Sarvia, Fatima Mushtaq, Foster Mensah, Prof. Peter Surovy, Matieu
Affiliations: Czech University of Life Sciences, Food and Agriculture Organization of the United Nations, Food and Agriculture Organization of the United Nations, Centre for Remote Sensing and Geographic Information Services, Czech University of Life Sciences, Food and Agriculture Organization of the United Nations
Innovation, defined as the development or use of new ideas and behaviors, drives economic growth and underpins sustainable development across all sectors. In the geographic information systems (GIS) domain, standards are essential for ensuring the performance, conformity, and safety of new products and processes. They also facilitate the integration of diverse geographic information, enabling effective policymaking, decision-making, and collaborative geospatial information management across organizations and governments. The ISO 19144 series, particularly the Land Cover Classification System (LCCS), offers significant benefits for land monitoring and management. These standards provide a foundation for collaboration and play a critical role in tracking progress toward the Sustainable Development Goals (SDGs). Despite these advantages, the adoption of standardized land cover approaches in many regions, including Ghana and Sub-Saharan Africa, remains limited due to challenges such as inconsistent methodologies and lack of harmonized classification systems. This research investigates the potential for adopting land cover and land use standards in Ghana and the broader context of West Africa, Sub-Saharan Africa, and globally. Surveys are administered to key stakeholders involved in land use and land cover data preparation and use to identify limitations and challenges. The study employs tools such as the Land Cover Classification System (LCCSv3), Land Characterization Software (LCHS), and Collect Earth Online (CEO), alongside high-resolution Google Earth imagery and pre-existing land cover datasets for Ghana. In collaboration with national partners, including the Environmental Protection Agency (EPA) and the Centre for Remote Sensing and Geographic Information Services (CERSGIS), 14 distinct land cover classes are identified, leading to the development of a standardized national legend. 
Temporal mosaics of Earth observation data (Sentinel-2 and Sentinel-1 imagery) were acquired for the entire study region, prioritizing periods with minimal cloud cover. Both the rainy and dry seasons were considered in the imagery selection process. The dry season, defined as November to March, was represented using data from November 2022 to March 2023. The rainy season, which spans April to October, was assessed using imagery from April 2023 to October 2023. However, the months of October, April, May, and June were excluded due to heavy rainfall during these periods in Ghana, which results in cloudy images. Vegetation indices such as the Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), Normalized Difference Water Index (NDWI), and Normalized Difference Built-Up Index (NDBI) were calculated using the most relevant spectral bands from Sentinel-2 imagery to enhance the identification and differentiation of land cover classes. In addition, texture variables, including contrast, variance, and correlation, were computed for both Sentinel-1 and Sentinel-2 imagery to capture spatial patterns and structural details in the land cover. These variables provided a comprehensive dataset combining spectral, spatial, and structural information, which was then used to train a Random Forest classification algorithm for the national land cover mapping. The model achieved an initial overall accuracy of 0.85, which, while satisfactory, necessitates further refinement to resolve challenges in distinguishing mixed classes. This national land cover product, developed in alignment with ISO standards, is expected to play a critical role in Ghana's efforts to monitor land use, inform sustainable land management practices, and support national actions aligned with the Sustainable Development Goals (SDGs).
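The spectral indices named above follow their standard definitions; the band choices (Sentinel-2 B3 green, B4 red, B8 NIR, B11 SWIR) and the SAVI constant L = 0.5 are conventional assumptions, since the abstract does not list them explicitly:

```python
import numpy as np

def spectral_indices(b3, b4, b8, b11, L=0.5):
    """Standard index formulas from Sentinel-2 surface reflectance.
    Assumed band mapping: B3 = green, B4 = red, B8 = NIR, B11 = SWIR."""
    ndvi = (b8 - b4) / (b8 + b4)
    savi = (1.0 + L) * (b8 - b4) / (b8 + b4 + L)   # soil-adjusted NDVI
    ndwi = (b3 - b8) / (b3 + b8)                   # water index
    ndbi = (b11 - b8) / (b11 + b8)                 # built-up index
    return ndvi, savi, ndwi, ndbi

# Illustrative reflectances for a vegetated pixel: strong NIR, low red.
ndvi, savi, ndwi, ndbi = spectral_indices(0.08, 0.05, 0.40, 0.20)
```

Per-pixel index rasters, stacked with the Sentinel-1/Sentinel-2 texture measures, would then form the feature bands used to train the Random Forest classifier.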

Thursday 26 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: IMPEL Geospatial Intelligence for Environmental Compliance Assurance (GIECA) project

Authors: Federico Filipponi, Laura Calcagni
Affiliations: National Research Council (CNR), Institute for Environmental Protection and Research (ISPRA)
In recent decades there has been growing awareness that geospatial technology can monitor, inspect and assess the environment, producing the information regulators need to support investigations of eco-criminal acts and violations of environmental laws. The European Network for the Implementation and Enforcement of Environmental Law (IMPEL) ‘Geospatial Intelligence for Environmental Compliance Assurance’ (GIECA) project aims to increase the capabilities of environmental agencies in the use of new technologies, by contributing to the information needs related to illegal activities affecting environmental matrices. From the environmental inspectors’ perspective, actions that would significantly promote and improve agencies’ capacity to use geospatial intelligence to investigate and ascertain eco-criminal acts across the EU are: i) identifying effective methodological approaches; ii) raising the awareness of prosecutors and judges about appropriate and reliable products available from geospatial intelligence; iii) performing requirements analysis to identify necessary information, usable in the courts, that can be generated by geospatial intelligence; iv) improving the capability of environmental agencies and competent authorities to use such analysis techniques. The first project phase aimed at reporting effective methodological approaches that successfully used geospatial intelligence to investigate and ascertain environmental damage, based on the use of earth observation and geostatistical analysis. The activities included queries on databases of sentences and surveys disseminated to European prosecutors, judges, national competent authorities, and other national entities involved in the assessment of environmental violations (e.g. environmental agencies and investigators) to identify real cases of environmental legal judgments or administrative procedures in which geospatial intelligence approaches have been used to produce evidence and support the assessment and ascertainment of environmental damage under the Environmental Liability Directive. The second project phase extended real-case reporting to environmental crimes under the Environmental Crime Directive. It brings together the EUFJE, ENPE and EnviCrimeNet networks in the discussion of technical and juridical requirements for the use of evidence produced with geospatial intelligence in the courts, with the aim of fostering institutional use and legal application of technologies for environmental analysis and making regulations more efficient. Results helped to identify key challenges, particularly recommendations to support the development of guidelines and the implementation of new regulations, thereby improving environmental compliance assurance and governance. Results also provided a baseline of needs to set up actions aimed at building technical and procedural capacity in producing a posteriori evidence of environmental crimes and of damage caused by environmental incidents, violations and eco-criminal acts affecting various environmental matrices, such as water and biodiversity.

Thursday 26 June 11:30 - 13:00 (Room 0.94/0.95)

Session: C.01.02 Innovative UAV Applications for Earth Observation - PART 2

Unoccupied Aerial Vehicles (UAVs) are flexible and efficient acquisition systems that can fill an important gap among spaceborne, airborne, and ground-based measurements. UAVs provide very high-resolution data acquisitions, even over inaccessible areas, and have demonstrated their ability to support various environmental and urban applications.

We invite contributions addressing the most recent innovations in the use of UAVs for Earth Observation and environmental and urban monitoring, with a focus on:

-Data acquisition for Earth Observation and atmospheric research
-Synergies and data fusion between UAVs and spaceborne, airborne, and ground-based measurements
-Real-time processing and analysis of UAV-acquired data
-Applications including but not limited to:
  -Agriculture and precision farming
  -Forestry and forest monitoring & inventory
  -Urban monitoring and urban green management
  -Disaster management
  -Conservation management
  -Monitoring of critical infrastructure (e.g., roads, coastal protection)
-UAVs in support of Earth Observation campaigns
-Transferable methods for environmental and infrastructure monitoring that can be applied by various actors (e.g., foresters, farmers, technicians of public administrations)

By focusing on innovative UAV applications and transferable methodologies, we aim to showcase the potential of UAV technology in advancing Earth Observation, to help develop future satellite missions and to advance environmental monitoring practices.

Thursday 26 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Mapping Small-Scale Arctic Vegetation Composition in Bjørndalen, Svalbard, Using UAV and Sentinel-2 Data

Authors: Elio Rauth, Antonio Castañeda, Ronja Seitz, Assoc. Prof. Dr. Larissa Beumer, Assoc. Prof. Dr. Simone Lang, Dr. Martin Wegmann, Dr. Doris Klein, Univ.-Prof. Dr. Stefan Dech, Prof. Dr. Tobias Ullmann, Dr. Mirjana Bevanda
Affiliations: Department of Remote Sensing, Institute of Geography and Geology, Julius-Maximilians-Universität Würzburg, The University Centre in Svalbard, Earth Observation Center, German Aerospace Center (DLR)
Arctic tundra environments are defined by fine-scale heterogeneity in land cover patterns that change over time. Monitoring these dynamics requires very high-resolution spatial data across large regions. While satellite remote sensing provides temporal continuity, its spatial resolution, typically in the range of several meters, is insufficient to resolve the fine-scale vegetation patterns characteristic of tundra landscapes. In contrast, uncrewed aerial vehicles (UAVs) offer centimetre-level spatial resolution, making them highly effective for detailed land cover mapping in the Arctic. UAVs can cover larger areas than traditional fieldwork methods while still offering greater detail than satellite imagery. However, integrating UAV data with satellite imagery is essential to upscale local observations for regional analysis, yet research on these methods remains limited in Arctic tundra contexts. This study explores how UAV and satellite data can be combined to classify land cover at spatial resolutions ranging from 10 cm to 10 m. We used a VTOL (Vertical Take-off and Landing) UAV (Trinity Pro by Quantum Systems) to map vegetation in the Bjørndalen valley, Svalbard, Norway. Multispectral and thermal data were collected with the MicaSense Altum-PT sensor, while LiDAR data were acquired using the Qube 240 sensor. To complement the remote sensing data, we collected in-situ observations from 52 vegetation plots, including plant community composition and soil moisture measurements. As the vegetation in Svalbard is linked to the availability of water and protection from harsh winter weather, we defined land cover types ranging from dry ridge vegetation to wet moss-dominated plant communities and wetlands. From the UAV multispectral data, we derived several spectral indices and generated topographic variables from a LiDAR-based digital terrain model. 
Using these data, we trained a machine learning model to produce a high-resolution land cover map of the study area, validated against the in-situ observations. The model successfully identified small patches of Dryas octopetala and Cassiope tetragona on ridges and distinguished between various moss tundra and wetland types, which are critical grazing areas for reindeer. To expand the analysis to a larger region in Svalbard, we ran upscaling approaches with a machine learning model using Sentinel-2 MSI data. This model was then applied to a full Sentinel-2 scene to create a regional land cover map. Validation with auxiliary datasets confirmed that the model accurately captured fine-scale variations in the dominant vegetation types across Svalbard. This study demonstrates the potential of integrating UAV and satellite data for Arctic land cover mapping, offering a scalable approach to monitor vegetation dynamics at multiple spatial resolutions.
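Topographic variables of the kind derived here, such as slope from a LiDAR digital terrain model, can be sketched with a simple finite-difference gradient. This is an illustrative stand-in under assumed inputs, not the study's actual terrain workflow, and the function name is hypothetical:

```python
import numpy as np

# Slope in degrees from a gridded DTM (elevations in metres).
# Uses numpy's central-difference gradient; cell_size is the grid
# spacing in metres (same in both directions).
def slope_degrees(dtm, cell_size):
    dz_dy, dz_dx = np.gradient(dtm, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```

On a uniform ramp rising one metre per one-metre cell, this returns 45° everywhere, which is a quick sanity check for the units.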

Thursday 26 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Global Mapping Models – Are We Getting Them Right?

Authors: Karen Joyce, Joan Li, Amy-Nicole Peters
Affiliations: James Cook University, Nguma-bada campus, GeoNadir Pty Ltd
Satellite-derived products are indispensable for scientists and environmental managers making data-driven decisions. But what happens when these data are inaccurate? Calibrating and validating global habitat and land cover products is often expensive, time-consuming, and geographically constrained by field teams’ reach. Consequently, some products lack accuracy metrics, while others rely on metrics that remain unverified. In this project, we assessed the accuracy of widely used global mapping products, including the Allen Coral Atlas (5 m GSD); Global Land Cover with Fine Classification System (GLC_FCS30, 30 m GSD); Esri’s annual LULC (10 m GSD); and MODIS Land Cover Type (500 m GSD). These products represent a range of spatial resolutions and applications, providing a valuable opportunity to examine their performance in diverse environments. Using a large library of validation labels (n = 168,610) derived from 1,217 drone mapping datasets collected globally (available via GeoNadir), we uncovered significant variability in overall accuracies. Class-specific accuracies were as low as 30% in some regions, highlighting critical gaps in these global products. Beyond validation, we explored the spatial patterns of inaccuracy and quantified the potential costs of misclassification for downstream products and assessments. For instance, a sand habitat misclassified within a coral reef could undervalue its biodiversity and ecosystem services, misinforming conservation priorities. Similarly, terrestrial inaccuracies may distort land-use planning or ecosystem service evaluations. While our original data labels were manually created by a team of scientists, we have since integrated geospatial AI tools, such as the Segment Anything Model (SAM) and Grounding DINO, to rapidly assess drone data. 
These tools enable us to efficiently segment and annotate high-resolution drone imagery, dramatically accelerating the label creation process and enhancing our ability to scale drone data to satellite-derived products. Moreover, by combining RGB imagery with derived digital surface models (DSM) and digital terrain models (DTM), we provide robust predictors for scaling to satellite data, even in the absence of multispectral or hyperspectral UAV sensors. Although our validation results raise concerns about reliance on current global mapping models, they also highlight the transformative potential of integrating drone data into future iterations of these products. We aim to build collaborations with researchers, environmental managers, and product developers interested in improving global mapping products and scaling drone data to broader satellite observations. By leveraging drone-based validation data and geospatial AI tools, our work offers a pathway to create more accurate and actionable datasets for Earth Observation.
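Overall and class-specific (producer's) accuracies of the kind reported above come from comparing reference labels with product labels. A minimal stdlib-only sketch (function and variable names are illustrative, not from the project):

```python
from collections import defaultdict

# Overall accuracy = correct / total.
# Producer's accuracy per class = correct within that reference class
# divided by the reference count of that class.
def accuracy_report(reference, predicted):
    total = correct = 0
    per_class = defaultdict(lambda: [0, 0])   # class -> [hits, reference count]
    for ref, pred in zip(reference, predicted):
        total += 1
        per_class[ref][1] += 1
        if ref == pred:
            correct += 1
            per_class[ref][0] += 1
    overall = correct / total
    producers = {c: hits / n for c, (hits, n) in per_class.items()}
    return overall, producers
```

A product can score well overall while a single class (e.g. a rare habitat) scores poorly, which is why the per-class view matters for the gaps described above.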

Thursday 26 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Precision aquaculture drone mapping of seaweed cultivation in Indonesia

Authors: Laurent Barillé, Professor Nurjannah Nurdin, Simon Oiry, PhD student Agus Aris, Dr Evangelos Alevizos
Affiliations: Nantes Université, Hasanudin University
Seaweed aquaculture has expanded rapidly among coastal communities in Indonesia due to its relatively simple farming process, low capital costs and short production cycles. Indonesia is the second largest seaweed producer in the world after China, and the largest producer of red seaweeds, notably the eucheumatoid group, which comprises the two genera Kappaphycus and Eucheuma. These species are mainly cultivated for their carrageenan content, used as a gelling agent in the food industry. However, crop failure is prevalent due to disease and pest outbreaks, a growing threat to seaweed aquaculture worldwide. A major threat is the “ice-ice” disease, which causes structural and physiological damage, including plant whitening. This study tested the capability of drone imagery consisting of five multispectral bands to analyse growth and carrageenan content at very high spatial resolution using machine learning. This approach allowed us to quantify seaweed biometry and biochemistry at single cultivation line and plot scales. Drone images were obtained using a DJI Phantom 4 with a Real-Time Kinematic Differential GPS (RTK D-GPS). The multispectral bands are monochrome sensors with a spectral range including blue (450 +/- 16 nm), green (560 +/- 16 nm), red (650 +/- 16 nm), red edge (730 +/- 16 nm) and near-infrared (840 +/- 26 nm) wavelengths. The camera system also includes an incident sunlight sensor. An important perspective of drone remote sensing for Kappaphycus aquaculture is capturing potential temporal changes in colour as an early warning of crop conditions such as ice-ice outbreaks or epiphyte infection. We analysed the spectral reflectance in the visible and near-infrared to detect changes in seaweed colour from healthy to depigmented. A multispectral sensor could still detect these colour variations. 
By addressing these issues, farmers can make data-driven decisions about when to harvest or adjust cultivation practices, and thus reduce the economic and environmental impact of their operations. Flexible drone acquisitions with high spatio-temporal resolution open the way for Kappaphycus precision aquaculture.

Thursday 26 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: UAV-based sensors measuring sun-induced chlorophyll fluorescence to support ESA’s Earth Explorer mission FLEX

Authors: Juliane Bendig, Sofia Choza Farias, Saja Salattna, Bastian Siegmann, Armagan Elibol, Jim Buffat, Giulia Tagliabue, Micol Rossini, Doctor Sergio Cogliati, Roberto Colombo, Tommaso Julitta, Andreas Burkart, Uwe Rascher
Affiliations: Forschungszentrum Jülich GmbH, University of Milano-Bicocca, JB Hyperspectral Devices GmbH
With the launch of ESA’s FLuorescence EXplorer (FLEX) mission approaching in 2026, ground and airborne validation concepts are under development. Using Unoccupied Aerial Vehicles (UAVs) to support satellite validation campaigns enables characterising the spatial (and temporal) variability of sun-induced chlorophyll fluorescence (SIF) over different plant functional types. UAV observations can complement longer time series from stationary instruments by providing spatial characteristics while offering relatively simple deployment. This contribution introduces two sensor types dedicated to measuring SIF from UAVs. The first, a hyperspectral point sensor, is the lightweight version of the field spectrometer system FloX, called AirFloX (4 kg). A comparable system named FROG is operated at the University of Milano-Bicocca with a different software solution. The second sensor type is the multispectral camera array SIFcam (2 kg). Both systems are mounted as a joint payload on an industrial UAV (Alta-X, Freefly Systems Inc., USA) with 34.7 kg take-off weight. AirFloX consists of two modules, FLUO (650-800 nm, 0.3 nm full width at half maximum, FWHM) and FULL (400-950 nm, 1.5 nm FWHM), allowing visible and near-infrared reflectance retrieval and SIF observations across the entire emission spectrum. AirFloX captures upwelling radiance of vegetation with a 25° field of view and downwelling solar irradiance with a 180° field of view. SIFcam is a filter-based multi-camera array with two spectral channels (757.9 and 760.7 nm, <1.2 nm FWHM), allowing for SIF retrieval in the O2A band at 760 nm. Depending on the camera settings, it delivers imagery with up to 0.02 m pixel size when flown at 25 m above ground. Data acquisition protocols were tested during field campaigns over agricultural fields in the growing seasons of 2023 and 2024. 
The goal was to investigate synergies between the detailed spectral information available through AirFloX and the very detailed spatial, but spectrally reduced, information provided by SIFcam. The data processing chain for AirFloX has been established based on the existing approaches for FloX, combined with previous experience from UAV-based prototypes regarding measurement footprint estimation. For SIFcam, the common image mosaicking workflow used in structure-from-motion software has been optimised by controlling each step of orthomosaic generation, including e.g. pixel-level statistics. A first accuracy assessment revealed good agreement of SIFcam-retrieved SIF in the O2A band with ground and airborne observations, with a low bias. The characterisation of the uncertainty budgets of AirFloX and SIFcam is under development. We discuss the first results obtained and challenges encountered, as well as how UAV-based SIF observations can bring synergies to the FLEX validation scheme.
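A two-channel O2A retrieval of the kind a sensor like SIFcam enables is commonly done with the standard Fraunhofer Line Discrimination (sFLD) estimator. The sketch below is a generic illustration under the usual sFLD assumptions (reflectance and fluorescence treated as equal at the two closely spaced channels), not the authors' retrieval code; the function name is illustrative:

```python
# Standard FLD (sFLD): the radiance model L = r*E + F is solved at two
# channels, one outside (e.g. 757.9 nm) and one inside (e.g. 760.7 nm)
# the O2A absorption band, assuming reflectance r and fluorescence F
# are the same at both wavelengths.
def sfld(e_out, l_out, e_in, l_in):
    """e = downwelling irradiance, l = upwelling radiance."""
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)
```

Because solar irradiance drops sharply inside the absorption band while the fluorescence emission does not, the two equations become well conditioned and F can be isolated.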

Thursday 26 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Scaling SIF: From Point Observations to the Entire Field Using UAV-Based SIF and Multispectral Data

Authors: Dong Liang, Dainius Masiliūnas, Prof.dr.ir. Lammert Kooistra, Dr. Na Wang, Dr. Carlos Camino
Affiliations: Laboratory of Geo-Information Science and Remote Sensing, Wageningen University and Research, Joint Research Centre, European Commission, School of Agriculture, Food and Ecosystem Sciences, University of Melbourne
The growing global population, particularly in densely populated countries, necessitates improving crop yields to ensure long-term food security. While climate change impacts yields at a macro level, maximum potential yield is fundamentally determined by photosynthesis, a key physiological process in plants. Sun-induced chlorophyll fluorescence (SIF), a key indicator for monitoring crop photosynthesis, can be derived through remote sensing. Field-level SIF maps are highly valuable for crop monitoring in precision agriculture, yet current methods face significant limitations. Tower-based spectrometers lack flexibility for field-scale crop management, UAV-borne imaging sensors are impractical for small fields due to their weight, and airborne imaging spectrometers are prohibitively expensive for daily field management. UAV-mounted point spectrometers, such as FluorSpec, have the potential to generate field-level SIF maps more economically and efficiently. However, further research is needed to identify the most accurate method for achieving this. This study aims to identify a robust method for scaling SIF from point observations to the entire field using vegetation indices and plant traits derived from multispectral data, and to evaluate the importance of these predictive features in predicting SIF. Four potential methods were considered: spatial interpolation, machine learning, deep learning and radiative transfer modelling. However, due to limited UAV-based point measurements, deep learning was excluded, and the lack of ground SIF measurements rendered radiative transfer modelling infeasible. Consequently, this study compares four methods: area-to-point (ATP) ordinary kriging, ATP kriging with external drift, machine learning and a hybrid approach combining machine learning and ATP kriging with external drift. As the footprint of each FluorSpec SIF measurement is a homogeneous circular area on the ground, ATP kriging was applied instead of basic point-to-point kriging. 
ATP kriging interpolates from block-based inputs to point-based predictions by calculating block-block and block-point semivariances based on a point-point variogram model. ATP ordinary kriging uses FluorSpec SIF measurements and an estimated variogram to produce a spatially continuous SIF map. ATP kriging with external drift enhances the kriging method by incorporating auxiliary data. In addition to point-based SIF measurements, it uses a multi-band dataset that includes multispectral data, broadband vegetation indices, leaf area index (LAI), and chlorophyll content (Cab). These auxiliary inputs represent the deterministic component of spatial variation, improving prediction accuracy. The machine learning approach uses multispectral bands, broadband vegetation indices, LAI, and Cab as inputs to predict SIF, employing random forest regression to address collinearity among predictors. Predictive feature importance is assessed through permutation importance analysis to ensure robust importance rankings. The hybrid approach integrates machine learning and ATP kriging with external drift, using machine learning-derived SIF maps alongside all the inputs of ATP kriging with external drift to improve prediction accuracy. Leave-One-Out Cross-Validation (LOOCV) was used to validate the estimated SIF maps by calculating the Root Mean Square Error (RMSE) for each method. In each iteration of LOOCV, one FluorSpec SIF data point is left out, and the SIF for the entire field is predicted using the remaining data as training. The mean of the estimated SIF within the footprint of the left-out observation is then extracted and compared with the measurement to determine the validation metrics. The average of the validation metrics across all iterations indicates the performance of each model. Two multispectral datasets with different band configurations were used in this research as the auxiliary data to predict SIF and assess the robustness of these methods. 
The first dataset was acquired in 2018 over two days using a UAV-mounted Parrot Sequoia+ multispectral camera in a sugar beet field and a potato field. The Sequoia camera has green, red, red-edge and near-infrared bands. The second dataset was obtained in 2019 over two days using a UAV-mounted Rikola hyperspectral imager in a sugar beet field. The Rikola camera was configured with 16 narrow spectral bands at an FWHM of 13–17 nm within a spectral range of 515–870 nm. Broadband vegetation indices related to structural traits, red-edge channels and chlorophyll content were used in this research. Besides, LAI and Cab were retrieved as auxiliary data from the Rikola dataset using the lookup-table (LUT)-based inversion of the PROSAIL model, which combines the PROSPECT-D and 4SAILH radiative transfer models. For the Sequoia dataset, RMSE values for ATP ordinary kriging and machine learning were 0.636 and 0.156, respectively, in the sugar beet field, and 0.527 and 0.089 in the potato field. For the Rikola dataset, RMSE values were 0.343 for ordinary kriging and 0.109 for machine learning in the sugar beet field. Permutation importance analysis revealed that, in the sugar beet field, the most important predictive features for SIF prediction from the Sequoia dataset were Green SR, WDVI, GNDVI, the Green band, and NDVI. In the potato field, the most influential features were the Red Edge index, the Red band, SR, the Red Edge band, and NDVI. For the Rikola dataset in the sugar beet field, the 740 nm band (Red Edge) and SAVI were found to contribute the most to SIF prediction. ATP ordinary kriging is unlikely to yield useful results for monitoring crops at the field scale due to sparse FluorSpec observations, but the other methods are promising. Many of these broadband vegetation indices with high feature importance, such as GNDVI, WDVI, SR and Green SR, involve combinations of NIR with Red or Green bands. 
Spectral indices with ratios between NIR and Red or Green bands are particularly significant for accurately predicting SIF, as they are sensitive to chlorophyll content and structural traits. These indices also play a crucial role in estimating key parameters like Cab and LAI, as they capture variations in both leaf and canopy characteristics. This highlights why indices integrating NIR with Red or Green bands are more influential in the SIF prediction process. However, the retrieved Cab and LAI provided limited added value to the machine learning model with just multispectral bands and vegetation indices as its input, as both were strongly correlated with these multispectral indices. The other two approaches, ATP kriging with external drift and the hybrid approach, are still being evaluated. The performance of ATP kriging with external drift is expected to exceed that of the machine learning model, as it leverages the spatial trend and external information from multispectral data. The hybrid approach could combine the strengths of kriging with external drift and machine learning by incorporating the spatial trend of FluorSpec SIF measurements, the external information from multispectral data and the reliable predicted SIF to enhance predictions. This combination is expected to outperform the other three methods. This research provides a robust and innovative method for generating high-resolution field-scale SIF maps using affordable UAV-mounted sensors. It offers a cost-effective solution for monitoring crop photosynthesis in precision agriculture, providing data support to improve crop management, optimize yields, and support sustainable agricultural practices.
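The leave-one-out cross-validation scheme used to compare these methods can be sketched as follows. An ordinary-least-squares model stands in for the random forest regressor, and all names are illustrative, not the study's code:

```python
import numpy as np

# Leave-one-out cross-validation: hold out one SIF observation, fit on
# the rest, predict the held-out point, and accumulate the errors into
# a single RMSE for the method under test.
def loocv_rmse(X, y, fit, predict):
    errors = []
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        model = fit(X[train], y[train])
        errors.append(predict(model, X[i:i + 1])[0] - y[i])
    return float(np.sqrt(np.mean(np.square(errors))))

# Ordinary least squares with an intercept, standing in for the
# random forest regressor used in the study.
def fit_ols(X, y):
    return np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]

def predict_ols(weights, X):
    return np.c_[np.ones(len(X)), X] @ weights
```

On exactly linear data the held-out predictions are perfect and the LOOCV RMSE is near zero; real SIF data yields the non-zero RMSEs reported above.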

Thursday 26 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Recent advances on UAV- and mobile-mapping based SAR imaging and repeat-pass interferometry/tomography with examples at L-/S- and Ku-band

Authors: Othmar Frey, Charles Werner, Rafael Caduff, Silvan Leinss, Tobias Batt
Affiliations: Gamma Remote Sensing, ETH Zurich
In recent years, we have developed compact and lightweight DInSAR end-to-end systems at L-band and S-band. The compact Gamma SAR systems are suitable for car-borne, UAV-borne, airborne, and potentially future high-altitude-pseudo-satellite (HAPS)-borne SAR data acquisitions. The end-to-end system includes (1) a compact and lightweight FMCW SAR system (L-band: up to 200 MHz bandwidth, S-band: up to 400 MHz bandwidth), (2) a compact and lightweight GNSS-aided INS navigation system, (3) SAR focusing/processing software, and (4) interferometric processing software. The FMCW-SAR system architecture features two alternating transmit channels and up to four concurrent receive channels, supporting interferometric, polarimetric-interferometric, or single-pass multi-baseline acquisitions at one polarization. The compact design of the SAR system and the flexibility of the SAR processing software allow it to be used on various platforms, including agile platforms (UAVs, cars) and platforms with limited payload capacity (UAVs, HAPS). As a result, a wide range of SAR data acquisition scenarios can be supported. Particularly worth mentioning are applications that require short repeat-pass intervals and/or a flexible acquisition geometry: terrestrial, UAV-borne, airborne, or HAPS-based SAR acquisitions can provide the quasi-geostationary repeat-pass observation scheme at short time intervals required to map fast-moving landslides, to retrieve snow parameters with timely updates, to frequently map changes in the distribution of water vapor in the troposphere, or to support emergency/disaster response. 
Then, the compact system also permits UAV-based high-resolution tomographic imaging of 3-D vegetation structure, which is one of the target observables of upcoming spaceborne SAR missions such as the ESA Earth Explorer mission BIOMASS, and as defined in the Surface Topography and Vegetation (STV) earth observation strategy recommended by NASA's 2017-2027 decadal survey. SAR tomography for 3-D vegetation structure retrieval is among the next-generation measurement/earth observation techniques discussed for spaceborne implementation. To establish the detailed scientific requirements and to provide the experimental support and processing algorithms for this purpose, tailored experiments are needed to demonstrate feasibility and to provide high-quality validation data. In this contribution, we report on the current status of repeat-pass DInSAR and SAR tomography demonstrations conducted in 2023 and 2024 using L-, S-, or Ku-band Gamma SAR systems mounted on various moving platforms such as quad-copter UAVs and cars: starting with the Gamma L-band SAR system, we have successfully performed ground motion / slope stability measurements at different sites using a car or a helicopter UAV, and we have also performed repeat-pass tomographic SAR data acquisitions. In 2024, the first car-borne repeat-pass acquisitions with the 400 MHz S-band FMCW SAR were performed to retrieve surface displacements at a fast-moving landslide. In addition, we have gained experience at several measurement sites with a dual-frequency car-borne repeat-pass DInSAR setup, including both an L-band and a Ku-band SAR system acquiring data simultaneously. We revisited the Brinzauls landslide in Switzerland several times and acquired high-resolution repeat-pass DInSAR data from the car at Ku-band and L-band. This active landslide consists of different sections with varying kinematic behavior and different land cover. 
More recently, we deployed the compact Gamma L-band SAR system on quad-copter drones (Freefly Alta-X and Harris HX8) and acquired interferometric data from repeated overflights, as well as a set of tomographic acquisitions over a forested area. The UAV systems include real-time kinematic (RTK) GNSS-aided flight control to maintain a narrow flight tube around the planned trajectory. They also include INS/GNSS-fused post-processed-kinematic (PPK) navigation for accurate knowledge of the sensor trajectories, using a Honeywell HGuide n500 GNSS-aided INS system together with a local GNSS reference station. Following a time-domain back-projection approach as introduced in our previous work on airborne SAR tomography, we focus the SAR data on a 2-D or 3-D image reconstruction grid. We provide an overview of our drone- and vehicle-based repeat-pass DInSAR/SAR tomography demonstrations carried out recently. In addition, similar repeat-pass SAR data acquisitions on manned airborne platforms are planned for Q1/Q2 2025, the results of which will also be included, if available by June 2025. We also discuss future use cases for the compact Gamma L-/S-band SAR system, such as deployment on a HAPS.
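The time-domain back-projection focusing mentioned in the abstract can be illustrated with a minimal sketch: a point target is simulated along a linear track, and each candidate grid position coherently sums the phase-corrected echoes. All numbers (wavelength, geometry, platform track) are invented for illustration, and range compression/interpolation is reduced to a single complex sample per pulse.

```python
import numpy as np

# Simulate a single point target observed from a linear track, then
# focus with time-domain back-projection (TDBP). Illustrative values only.
wavelength = 0.24                                    # ~L-band (m)
positions = np.stack([np.linspace(-50, 50, 201),     # along-track x (m)
                      np.zeros(201),                 # cross-track y (m)
                      np.full(201, 100.0)], axis=1)  # altitude z (m)
target = np.array([0.0, 200.0, 0.0])

# Range-compressed echo per pulse: unit amplitude carrying the
# two-way propagation phase of the point target.
ranges = np.linalg.norm(positions - target, axis=1)
echoes = np.exp(-1j * 4 * np.pi * ranges / wavelength)

def backproject(grid_points):
    """For each grid point, compensate the two-way phase and sum
    coherently over all pulses (range interpolation omitted)."""
    img = np.empty(len(grid_points))
    for k, p in enumerate(grid_points):
        r = np.linalg.norm(positions - p, axis=1)
        img[k] = np.abs(np.sum(echoes * np.exp(1j * 4 * np.pi * r / wavelength)))
    return img

# 1-D cut through the scene in the cross-track direction: the response
# peaks at the true target position y = 200 m.
ys = np.linspace(190.0, 210.0, 81)
grid = np.stack([np.zeros_like(ys), ys, np.zeros_like(ys)], axis=1)
profile = backproject(grid)
print(ys[np.argmax(profile)])
```

The same coherent summation generalizes directly to a 3-D voxel grid, which is how tomographic profiles of vegetation volumes are reconstructed from multiple passes.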
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall K2)

Session: A.01.04 Advancing Air Quality Monitoring from Space - PART 2

Air pollution is a major concern throughout the world, in both developed and developing countries, and especially in urban areas, where over 50% of the world's population lives. Swelling urban populations and an increased volume of motorized traffic in cities have resulted in severe air pollution affecting the surrounding environment and human health. The European Environment Agency (EEA) estimates that 40-50% of air pollution in the most important European cities is attributed to vehicular emissions.

The investigation of air pollution over megacities by means of satellite observations has recently become a central topic of interest within the air pollution community, especially thanks to the observing capabilities of Sentinel-5P in terms of spatial resolution.

Nonetheless, space-borne platforms alone cannot provide a full picture. To investigate the spatio-temporal variability of air pollution on local, regional, and global scales, new tools are being developed. In this context, the detection, tracking, and understanding of pollutant transport on various spatial scales are of both local and global interest. Specifically, in rural and remote areas, where no ground-based air pollution monitoring network is available, satellite data can provide an estimate of the regional distribution of pollutants, in order to assess the impact of specific events (e.g., biomass burning or dust storm outbreaks).

Satellites observe air pollution in the troposphere, and the relation of these observations to surface concentrations must first be established for air quality monitoring applications. This session is dedicated to presenting new algorithms and approaches for the downscaling of air quality satellite observations and to exploring novel assimilation methodologies to combine satellite retrievals with in-situ measurements and air quality modelling, considering all relevant satellite missions (e.g. Sentinel-5P), the future availability of hourly observations from Sentinel-4, and other future capabilities, e.g. Sentinel-5 and CO2M.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall K2)

Presentation: Downscaling Sentinel-5P NO₂ Data Using Gradient-Boosted Trees for High-Resolution Urban Air Quality Mapping

Authors: Mr Alessandro Austoni, Mr Francesco Asaro, Dr Alessandra Feliciotti, Dr Emanuele Strano, Dr Mattia Marconcini
Affiliations: Mindearth s.r.l.
Mapping nitrogen dioxide (NO₂) concentrations with high spatial resolution is essential for addressing urban air quality challenges. NO₂, primarily emitted by vehicles, industrial activities, and residential heating, is a major contributor to respiratory and cardiovascular diseases and has significant environmental impacts. While Sentinel-5P (S5P) TROPOMI offers unprecedented global daily NO₂ observations, its nominal resolution of 5.5 × 3.5 km is insufficient for capturing the spatial heterogeneity of urban pollution, where localized hotspots often result from high traffic volumes, industrial emissions, or unique urban morphology. To address this limitation, a machine learning-based methodology was developed to downscale S5P data to a resolution of 100 m, enabling detailed daily NO₂ mapping. The approach integrates satellite data with ancillary datasets to produce refined concentration maps over the Copenhagen area, a complex urban environment characterized by a mixture of dense residential neighborhoods, industrial zones, green spaces, and major traffic corridors, making it an ideal case study for assessing the model's predictions. These diverse features posed a challenge for the model, demonstrating its ability to capture fine-scale spatial variability in NO₂ concentrations. The methodology relies on CatBoost, a gradient-boosting tree algorithm that is specifically designed for handling both categorical and continuous features and excels at capturing non-linear relationships in the data. The model takes as input the S5P TROPOMI NO₂ column densities and cloud cover observations, leveraging their global daily coverage and sensitivity to NO₂ concentrations. To overcome the coarse spatial resolution of the S5P observations, the model integrates a wide array of additional inputs, including meteorological, topographical, and anthropogenic data.
Key meteorological variables from the ERA5 reanalysis dataset, such as surface temperature, solar radiation, total precipitation, and Planetary Boundary Layer Height (PBLH), together with wind speed taken from the Global Wind Atlas, provide critical information about the atmospheric conditions influencing pollutant dispersion. Land use patterns are captured using CORINE Land Cover data and elevation using NASADEM, while slope is taken from the Hydrography90m dataset, helping to account for the effects of terrain on the movement and accumulation of pollutants. Population density, derived from DLR’s World Settlement Footprint (WSF) population layer, serves as a proxy for human activity intensity, closely correlated with NO₂ emissions from transportation and heating. High-resolution wind fields from the Global Wind Atlas further refine the modeling of pollutant transport, ensuring that local meteorological variations are adequately represented. Ground-level NO₂ measurements, used as targets for the model predictions, were obtained from the Danish Environmental Protection Agency network through AirBase. However, the scarcity and sparse distribution of monitoring stations in Denmark posed a challenge. To address this, the model was trained using data from Germany and Poland, regions with similar climatic conditions, ensuring a sufficient number of ground stations while maintaining transferability to the Copenhagen region. By combining these diverse datasets, the model effectively links satellite-derived NO₂ columns to near-surface concentrations at a fine spatial scale, overcoming the limitations of existing monitoring networks and satellite products. Validation was performed with a k-fold cross-validation methodology and tested over held-out regions in Germany and Poland that were not used for training and validation.
The downscaled maps revealed detailed pollution patterns, including localized hotspots near major roads and industrial areas, which were obscured at the original S5P resolution. The ability to generate daily maps further enabled the analysis of temporal variations in NO₂ levels, reflecting changes driven by weather conditions and traffic patterns. A key aspect of this methodology is its integration of Explainable AI (XAI) techniques, specifically SHapley Additive exPlanations (SHAP), to provide transparency into the model’s predictions. By quantifying the contribution of each input variable, SHAP enables a better understanding of the factors driving NO₂ distribution. This interpretability is crucial for informing evidence-based policymaking, allowing stakeholders to target the most impactful factors contributing to pollution in specific areas. The results from Copenhagen demonstrate the transformative potential of this approach for urban air quality management. The high-resolution NO₂ maps provide actionable insights for identifying pollution hotspots, assessing the effectiveness of existing measures such as low-emission zones and traffic regulations, and designing targeted interventions. By bridging the gap between global satellite observations and urban-scale monitoring needs, this methodology enables cities to adopt data-driven strategies for improving air quality and public health. The scalability and adaptability of the model make it a powerful tool for other cities worldwide, particularly those with limited ground-based monitoring networks. The ability to generate high-resolution NO₂ maps using readily available satellite and ancillary datasets offers a powerful and cost-effective tool for urban air quality monitoring. This approach addresses a critical gap in current air quality assessment capabilities, providing actionable insights for improving public health and fostering sustainable urban environments.
The integration of S5P data with advanced machine learning techniques exemplifies the potential of Earth observation systems to transform how cities understand and manage their atmospheric challenges, offering a cost-effective solution for generating high-resolution air quality maps, supporting science, policy, and urban sustainability initiatives.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall K2)

Presentation: Advancing CO Emission Estimates for Steel Plants Using TROPOMI Observations

Authors: Tobias Borsdorff, Arthur Bronstring, Dr Peter Sterk, Dr. Thomas Plewa, Dr. Jochen Landgraf
Affiliations: SRON Netherlands Institute for Space Research, Delft University of Technology, Institut für Umweltphysik University of Heidelberg
The Tropospheric Monitoring Instrument (TROPOMI) aboard the Sentinel-5P satellite provides global daily observations of carbon monoxide (CO) total column concentrations from the shortwave infrared (SWIR) at a high spatial resolution of 7 × 5.5 km². Leveraging this detailed data, we developed the Automated Plume Estimation (APE) framework, designed to detect pollution events and estimate emissions from wildfires and industrial sources. We applied APE to study emissions from Ukrainian steel plants, highlighting the significant industrial impacts of the Russian invasion. Satellite data revealed visible destruction of major facilities, with emissions from key plants such as Azovstal and Metinvest Ilyich in Mariupol dropping from 5.0 kg/s to negligible levels post-invasion. Similarly, emissions from ArcelorMittal Kryvyi Rih and Kametstal decreased by 85% and 67%, respectively, reflecting the broader disruption to industrial activity caused by the conflict. To further enhance APE, we developed a novel approach that estimates emissions without relying on plume detection, addressing biases introduced by enhancement thresholding. This improved method employs the divergence technique on individual TROPOMI overpasses, analyzing frequency distributions to provide more robust emission estimates. Moving beyond Ukraine, we expanded our study to evaluate emissions from over 100 steel plants worldwide. In this global analysis, we tested advanced AI-based plume detection methods alongside our plume-independent framework, pushing the boundaries of industrial CO monitoring using TROPOMI's capabilities. Our findings not only underscore the utility of TROPOMI for monitoring industrial emissions but also demonstrate the adaptability of APE in diverse applications. By integrating advanced methodologies, we aim to refine emission estimates further and support global efforts to track and mitigate industrial pollution effectively.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall K2)

Presentation: EO-based Downscaling for Urban-Scale Air Quality Applications

Authors: Philipp Schneider, Shobitha Shetty, Paul David Hamer, Kerstin Stebel, Arve Kylling, Amirhossein Hassani, Terje Koren Berntsen, Henrik Grythe, Susana Lopez-Aparicio, Islen Vallejo, Torleif Weydahl, Miha Markelj
Affiliations: NILU, University of Oslo
Earth Observation satellites provide a wealth of information about the environment, including air quality. While the spatial resolution of the relevant instruments has improved considerably in recent years, direct monitoring of within-city patterns remains challenging at spatial resolutions of typically several kilometers. Here we present two complementary approaches for increasing the spatial resolution of space-based air quality products and thus increasing their relevance for urban-scale exposure assessment and similar societal applications. The first approach, partially developed within the European Space Agency-funded CitySatAir project, focuses on downscaling the S5P/TROPOMI nitrogen dioxide (NO2) product with the help of the high-resolution city-scale chemical transport model EPISODE. Using the ratio of surface concentration and tropospheric column information obtained from the model through applying the TROPOMI averaging kernel, we first derive TROPOMI-based surface concentration fields at the original satellite footprint. This surface concentration field is then downscaled using geostatistical area-to-point kriging to a spatial resolution ranging from 500 m to 1000 m, while using the spatial patterns from EPISODE as a covariate. The results represent a mass-conservative spatial redistribution of the satellite-derived surface NO2 values. The second approach focuses on downscaling the Copernicus Atmosphere Monitoring Service (CAMS) regional forecast for PM2.5 to a spatial resolution of 1 km by 1 km for all of Europe by training a machine learning model on a set of predictor variables, including among others satellite-based Aerosol Optical Depth. We find that this approach, which we entitled S-MESH (Satellite and ML-based Estimation of Surface air quality at High resolution) (Shetty et al., 2025), improves upon the accuracy of the CAMS forecast across the entire concentration range when assessed against a holdout set of air quality monitoring stations.
For concentration levels above 30 µg/m³, S-MESH also outperforms the CAMS regional reanalysis while producing a product with higher spatial resolution. Through this work, we aim to demonstrate the potential of downscaling techniques for increasing the relevance of EO-based air quality products for city-scale studies and to highlight the importance of combining EO data with models for societal applications. References: Shetty, S., Hamer, P. D., Stebel, K., Kylling, A., Hassani, A., Berntsen, T. K., & Schneider, P. (2025). Daily high-resolution surface PM2.5 estimation over Europe by ML-based downscaling of the CAMS regional forecast. Environmental Research, 264, 120363.
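The first approach's two steps, a footprint-scale surface estimate from the model's surface-to-column ratio followed by a mass-conservative fine-scale redistribution, can be sketched as follows. All values are invented, and a simple pattern-weight redistribution stands in for the geostatistical area-to-point kriging actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# One TROPOMI footprint covering a 5 x 5 block of fine model cells.
# Hypothetical EPISODE-like model fields: surface NO2 (ug/m3) and
# tropospheric column (molec/cm2), assumed averaging-kernel-weighted.
model_surface = rng.uniform(5, 40, (5, 5))
model_column = rng.uniform(4e15, 1.2e16, (5, 5))
sat_column = 8.0e15          # observed TROPOMI column for this footprint

# Step 1: footprint-scale surface estimate from the model's
# surface-to-column ratio.
ratio = model_surface.mean() / model_column.mean()
sat_surface = sat_column * ratio

# Step 2: mass-conservative redistribution over the fine cells using
# the model's spatial pattern (a stand-in for area-to-point kriging).
weights = model_surface / model_surface.mean()
downscaled = sat_surface * weights

# The footprint mean is preserved: the redistribution is mass-conservative.
print(np.isclose(downscaled.mean(), sat_surface))
```

Because the weights average to one over the footprint, the footprint-mean surface concentration is preserved exactly, which is the mass-conservation property the abstract highlights.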
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall K2)

Presentation: Boosting Air Quality Downscaling with Extreme-Value-Sensitive Strategies

Authors: Maria Dekavalla, Chrysovalantis Tsiakos, Georgios Tsimiklis, Angelos Amditis
Affiliations: Institute Of Communication And Computer Systems
Mining industries must monitor air pollutant emissions to comply with EU directives, protect public health and the environment, and support broader sustainability goals. This involves using direct on-site measurements, drones, and mobile units. With their broad coverage and pollutant-specific detection capabilities, spaceborne sensors provide a complementary tool for tracking emissions over large areas, ensuring regulatory compliance, and improving environmental management. However, spaceborne estimates represent only the total atmospheric column and may not provide comprehensive information about surface concentrations. Therefore, data fusion from multiple sources (ground stations, satellites, and chemical transport models, CTMs) based on machine learning (ML) methods has seen significant advancements in estimating ground-level air pollutant concentrations. ML methods like Random Forests (RF), support vector regression, feed-forward neural networks, and eXtreme Gradient Boosting (XGBoost) are widely used to predict high-resolution ground-level air pollutant concentrations by using data from ground-based measurements, assimilated records (e.g., meteorological variables), and satellite observations (e.g., Sentinel-5P TROPOMI). To enable end-to-end predictions, these downscaling models establish relationships between inputs (e.g., NO₂ or CO column density) and outputs (e.g., ground-based measurements of air pollutant concentrations). These ML-based downscaling methods usually outperform classical spatial interpolation and statistical regression methods; however, they struggle to estimate extreme events, since the representative values of such events lie in the tails of, or even outside, the training distribution. It is therefore important to identify ML models and training strategies capable of addressing the highly imbalanced data distribution and extending the magnitude of their prediction range to capture hotspots of extreme air pollutant emissions.
The EU-funded TERRAVISION project addresses this gap by introducing a novel framework that utilises ensemble techniques combining the strengths of multiple ML models with training strategies for effectively handling imbalanced datasets. Techniques such as oversampling the high-concentration samples or incorporating appropriate loss functions that penalise prediction errors for extreme values can help alleviate the challenges posed by the imbalanced distribution. In addition to these strategies, evaluation metrics like the Geometric Mean (GM) and the Squared Error Relevance Area (SERA) should be employed to accurately measure how well the ML models capture extreme values and to obtain a complete understanding of their downscaling capabilities. This framework incorporates an ablation study involving variations of ML models, oversampling techniques, cost-sensitive learning where data points are weighted according to the rarity of their target values, loss functions that perform asymmetric optimisation and highlight extreme values, and evaluation metrics, in order to assess the individual contribution of each component. The models were trained on benchmark datasets, including ground-based measurements from the European Environment Agency (EEA) air quality monitoring station network, Sentinel-5P tropospheric vertical column density values, and modelled meteorological data obtained from the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 Land (ERA5-Land) model, to estimate near-surface concentrations of air pollutants at 1 km spatial resolution. The trained models were benchmarked against state-of-the-art ML models for downscaling air pollutant concentrations. Evaluations were performed on the entire test dataset as well as on three disjoint subsets corresponding to specific ranges of near-surface concentration values.
Extensive experiments verify the superior performance of the proposed strategies towards addressing the underestimation of extreme ground-level concentrations on air quality downscaling tasks. By incorporating these improvements into our modelling framework, we can overcome existing limitations and improve the accuracy and reliability of air quality predictions. This will ultimately benefit environmental monitoring and decision-making processes.
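As a concrete illustration of why extreme-value-sensitive metrics matter, the sketch below implements a simplified SERA-style score (squared error integrated over increasing relevance thresholds) and compares two hypothetical predictors: one accurate on common values but clipping extremes, one noisier overall but faithful at high concentrations. The relevance function and all data are invented for illustration, not taken from the project.

```python
import numpy as np

def relevance(y, y_ref):
    """Toy relevance function: ramps from 0 for common values to 1 at
    and above a high-concentration reference value y_ref."""
    return np.clip(y / y_ref, 0.0, 1.0) ** 2

def sera_like(y_true, y_pred, y_ref, n_thresholds=101):
    """Simplified SERA-style score: sum of squared errors over samples
    whose relevance exceeds each threshold, integrated over thresholds.
    Lower is better; errors on extreme values dominate the score."""
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    phi = relevance(y_true, y_ref)
    se = (y_true - y_pred) ** 2
    ser = np.array([se[phi >= t].sum() for t in thresholds])
    # trapezoidal integration over the threshold axis
    return float(np.sum((ser[1:] + ser[:-1]) / 2 * np.diff(thresholds)))

rng = np.random.default_rng(0)
# Skewed "concentrations": mostly background, a few extreme events.
y = np.concatenate([rng.uniform(5, 20, 950), rng.uniform(60, 90, 50)])
pred_flat = np.where(y > 50, 40.0, y) + rng.normal(0, 1, 1000)  # clips extremes
pred_tail = y + rng.normal(0, 3, 1000)                          # tracks extremes

print(sera_like(y, pred_flat, 60) > sera_like(y, pred_tail, 60))
```

A plain RMSE over the whole test set would favour the first predictor; the relevance-weighted score correctly penalises its systematic underestimation of the extremes.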
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall K2)

Presentation: Impact of wind fluctuations on the performance of the divergence method: How steady is the state?

Authors: Steffen Beirle, Thomas Wagner
Affiliations: MPI Chemistry
The divergence, i.e. the spatial derivative of the horizontal flux, makes it possible to identify point sources through their strong local gradients and to quantify emissions from satellite measurements of atmospheric pollutants or greenhouse gases such as NO2 or CH4. A central assumption of this approach is that steady state holds; that is, spatio-temporal changes of emissions, chemical conditions, or wind fields are not accounted for. Deviations from steady state can therefore be expected to affect, and probably bias, the emission estimates. Here we investigate quantitatively to what extent deviations from steady state affect the results of the divergence method. In particular, we quantify the spatial and temporal variability of wind fields and relate them to NOx emission estimates for selected power plants based on individual TROPOMI orbits. The goal is to provide a measure of "steadiness" that could be used to mask out unfavourable conditions. With this filter, the remaining emission estimates are expected to have lower uncertainties. Other methods for emission estimates that are also based on the steady-state assumption, such as the calculation of cross-sectional fluxes, will probably benefit from this as well.
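The principle behind the divergence method can be demonstrated on a synthetic steady-state plume: for a spatially uniform wind (u, 0) the divergence of the horizontal flux reduces to u·∂c/∂x, and integrating it over a domain containing the source recovers the source strength. The numbers below are invented, and the wind is taken as perfectly steady, i.e. exactly the assumption whose violation the abstract examines.

```python
import numpy as np

# Synthetic steady-state plume from a point source in a uniform wind.
Q, u = 50.0, 5.0               # source strength (kg/s), wind speed (m/s)
dx = 100.0                     # grid spacing (m)
x = np.arange(1, 400) * dx     # downwind distance from the source (m)
y = np.arange(-150, 151) * dx  # cross-wind coordinate (m)
X, Y = np.meshgrid(x, y)

# Gaussian cross-wind spread growing downwind; c is a column density.
sigma = 200.0 + 0.15 * X
c = Q / (u * np.sqrt(2 * np.pi) * sigma) * np.exp(-Y**2 / (2 * sigma**2))

# Embed the plume in a larger domain with clean air upwind of the source.
field = np.zeros((301, 500))
field[:, 101:] = c             # the source sits near column index 100

# Divergence of the horizontal flux for wind (u, 0): div F = u * dc/dx.
# Summing it over the whole domain recovers (most of) the emission Q.
div = u * np.gradient(field, dx, axis=1)
estimate = div.sum() * dx * dx
print(round(estimate, 1))      # close to the true Q = 50 kg/s
```

With a fluctuating wind field the instantaneous flux no longer balances the source, which is precisely why the abstract proposes filtering overpasses by a measure of steadiness.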
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall K2)

Presentation: Does heterogeneous surface affect the aerosol and surface retrievals from satellite observations?

Authors: PhD Masahiro Momoi, Vincent Leroy, Pavel Litvinov, Siyao Zhai, Sebastian Schunke, Zhen Liu, Marta Luffarelli, Yves Govaerts, Oleg Dubovik, Philippe Goryl
Affiliations: GRASP SAS, Rayference SRL, CNRS/University of Lille, ESA
Global information on aerosol and surface properties derived from satellite observations is essential for Earth climate studies. Although recent advances in the fields of satellite remote sensing data acquisition and processing have led to significant improvements in the retrieval of aerosol properties, discrepancies still exist among the various satellite data sets and between observation and simulation. All aerosol retrieval methods from space-borne observations rely on the so-called one-dimensional (1D) atmospheric approximation whereby the atmosphere is represented as a series of horizontally infinite homogeneous layers above a uniform flat surface. The 1D approach results in neglecting all sorts of 3D scene features, such as: - Surface reflectance heterogeneity and terrain topological structure; - Atmospheric inhomogeneity due to the presence of broken clouds, horizontal variation of aerosol properties and gas-to-particle transformation, and different aerosol emission sources. Recently, in the framework of the '3D Radiative Digital Twin Earth to support multi-angular observation exploitation: studies of Surface heterogeneity effects' (3DREAMS) project supported by ESA, we have investigated the effects of surface reflectance heterogeneity and terrain topological structure on the aerosol and surface retrievals. This study has been carried out through the numerical simulation of simple uniform reference and heterogeneous cases of various complexity and realism. Satellite images corresponding to current and future European missions have been simulated over these scenes with a 3D RT model Eradiate for different atmospheric compositions. The GRASP algorithm is used to retrieve the aerosol and surface properties from the synthetic images.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall F1)

Session: F.04.07 Earth Observation for Tracking Global Sustainability and Biodiversity Targets - PART 2

The 2030 Agenda for Sustainable Development and the Kunming-Montreal Global Biodiversity Framework (GBF) set ambitious targets for biodiversity conservation, ecosystem restoration, and sustainable development. Achieving these goals requires timely, reliable, and spatially explicit data to track progress and inform decision-making. Earth Observation (EO) plays a critical role in monitoring environmental indicators, supporting national reporting, and guiding effective conservation and restoration efforts.
EO provides unparalleled capabilities to support operational monitoring of SDG indicators, helping countries integrate geospatial data into National Statistical Systems to track development policies and sustainability progress. The integration of EO with socio-economic data is essential for delivering high-quality, timely, and actionable information to measure progress on SDG targets. However, challenges remain in EO data accessibility, standardization, and operational integration to ensure that national and global reporting frameworks effectively benefit from EO-based solutions.
In the context of biodiversity, EO is key to supporting national monitoring and reporting on the GBF indicators. EO is also an essential tool for ecosystem conservation and restoration, supporting the identification, mapping, and management of priority areas for protection and rehabilitation. The ability to assess biodiversity at multiple scales, from protected areas to entire landscapes, provides an opportunity to bridge the gap between scientific advancements and national reporting needs. There is a growing demand for accessible and standardized EO-based indicators to support conservation efforts and assess the effectiveness of protected area management. Monitoring ecosystem conditions, connectivity, and resilience is crucial to tracking progress toward restoration targets (e.g., GBF Target 2 and EU Nature Restoration Law). The integration of EO with in-situ data (e.g., bioacoustics, eDNA, LTER) further enhances conservation planning and adaptive management strategies.
This session will explore the latest EO-based approaches for tracking SDG indicators, biodiversity targets, and ecosystem conservation and restoration efforts.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall F1)

Presentation: Towards EO-Informed Global Indicators for Effective Monitoring of Inland Water Protection, Status and Connectivity.

Authors: Lucy Bastin, Mr Riyad Rahman, Mr Jean-Francois Pekel
Affiliations: Aston Centre for Artificial Intelligence Research and Application, Aston University, European Commission, Joint Research Centre (JRC)
The 2030 Agenda for Sustainable Development and the Kunming-Montreal Global Biodiversity Framework (KM-GBF) collectively aim to enhance the ecological integrity of a range of ecosystems, including inland waters. Inland water ecosystems such as lakes, rivers, streams, marshes, and peatlands provide essential services such as water provision, food security, and cultural values, and are highly threatened: since 1970, monitored populations of freshwater species have declined by an average of 85 percent (WWF 2022). The KM-GBF includes three targets that specifically address spatial planning for, restoration of, and protection of inland waters, with a goal of conserving at least 30 percent of inland waters globally by 2030. To benchmark, evaluate and track progress towards these targets, clear, transparent and consistent indicators are required at global scale. However, in order to adequately protect ecosystem function, these indicators must capture not only the areal protection of inland water, but also the extent of protection of hydrological connectivity. This holistic approach supports biological diversity and ecosystem services locally and downstream, both in the aquatic realm and in protected riparian zones and associated landscapes. Estimates of inland water protection can be derived using global datasets such as the World Database on Protected Areas (WDPA) and the World Database on Other Effective Area-based Conservation Measures (WD-OECM). Using this approach, disparate studies have estimated what initially appear to be very consistent global baselines of current protection for inland water ecosystems by protected areas, at around 16-17 percent (Abell et al., 2016; Bastin et al., 2019; Confluvio Consulting, 2024 - reported in Moberg et al., 2024).
However, the representations of wetland and river networks used by these studies differ, and each study identifies important variations and gaps in protection, especially when its results are disaggregated by biome, country and inland water type. Each study also captures a different, and important, aspect of inland water protection. Abell et al. considered the HydroSHEDS database of river reaches and watersheds, exploiting the network nature of this dataset to derive important metrics of upstream protection and concluding that effective integrated protection for rivers is in most cases much lower than can be understood from a simple areal analysis. Bastin et al. computed areal estimates of protection, but using a 30-year time series from the Global Surface Water Explorer (GSWE - Pekel et al., 2016), which permitted the evaluation of surface water change inside and outside protected areas over time. This study also identified important variations in trends of open surface water change which imply differing levels of protection effectiveness when compared against each national baseline. Confluvio Consulting, in work commissioned by The Nature Conservancy, computed areal estimates for a single year (2023) based on the Global Lakes and Wetlands Database Version 2 (Lehner et al. 2024), which is informed by GSWE and HydroSHEDS, among a range of other datasets, thus representing a very complete coverage of inland water for a snapshot in time. The research we present combines the strengths of the three approaches by incorporating the consideration of change within connected catchments into a network-based analysis of protection. By developing a transparent and reproducible workflow, we propose an indicator methodology which can form part of the spatial planning framework delineated in Moberg et al. (2024).
Our approach combines the strengths of connectivity-based approaches with the multi-temporal change analysis that is possible when using EO-derived products such as GSWE, in order to compute meaningful, consistent global metrics for the protection of inland water and water connectivity, and additionally to support the evaluation of protection effectiveness by integrating temporal loss and change into the analysis of upstream and local protection. The workflow is globally consistent, to allow straightforward reporting by institutions with limited resources, but is adaptable, allowing for downscaling to regional, subnational, and national levels. It allows easy disaggregation to track changes across different biomes, such as rivers, lakes, and artificial wetlands, in order to identify protection gaps and ecosystem-specific pressures. By establishing a robust methodology for measuring and tracking status and protection trends in inland water ecosystems, and relating them to the observed change, this methodology can facilitate reporting of progress towards global biodiversity targets, and help integrate considerations of connectivity and protection effectiveness into conservation planning and monitoring. Abell et al. 2016. Looking Beyond the Fenceline: Addressing Protection Gaps for the World’s Rivers. https://doi.org/10.1111/conl.12312 Bastin, L. et al. 2019. Inland surface waters in protected areas globally: Current coverage and 30-year trends. https://doi.org/10.1111/conl.12312 Lehner, B. et al. 2024. Mapping the world’s inland surface waters: an update to the Global Lakes and Wetlands Database (GLWD v2), https://doi.org/10.5194/essd-2024-204 (preprint). Moberg, T. et al. 2024. Designing and managing protected and conserved areas to support inland water ecosystems and biodiversity. https://portals.iucn.org/library/node/51774. Pekel, J.-F. et al. 2016. High-resolution mapping of global surface water and its long-term changes. https://doi.org/10.1038/nature20584 WWF. (2022).
Living Planet Report 2022 – Building a nature positive society. Gland: WWF. https://wwflpr.awsassets.panda.org/downloads/lpr_2022_full_report.pdf
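A network-aware protection indicator of the kind discussed in this abstract can be sketched on a toy river network: each reach carries a local catchment area and a protection flag, and the indicator for a reach is the protected share of everything draining to it. The network topology and all figures below are invented for illustration.

```python
# Toy river network: each reach has a local catchment area (km2), a
# protection flag, and a downstream reach (None at the basin outlet).
reaches = {
    "A1": {"area": 10, "protected": True,  "downstream": "B"},
    "A2": {"area": 30, "protected": False, "downstream": "B"},
    "B":  {"area": 20, "protected": True,  "downstream": "C"},
    "C":  {"area": 40, "protected": False, "downstream": None},
}

def upstream_stats(reach, net):
    """Return (total area, protected area) draining to `reach`,
    including the reach's own local catchment."""
    area = net[reach]["area"]
    prot = area if net[reach]["protected"] else 0
    for up in (r for r, v in net.items() if v["downstream"] == reach):
        a, p = upstream_stats(up, net)
        area += a
        prot += p
    return area, prot

# Upstream protection share per reach: a purely areal analysis would
# report 30% for the whole basin, while reach-level values differ widely.
shares = {r: upstream_stats(r, reaches)[1] / upstream_stats(r, reaches)[0]
          for r in reaches}
print(shares)  # A1: 1.0, A2: 0.0, B: 0.5, C: 0.3
```

The same traversal over a real reach network (e.g. HydroSHEDS) is what makes integrated protection estimates diverge from simple areal ones, as Abell et al. found for rivers globally.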
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall F1)

Presentation: Status of Operational EO-based Forest Monitoring Programs in the Amazon: Advancing Conservation through Regional Cooperation

Authors: Dr Ana Sebastian Lopez, Dr Nancy Lozano Gracia, Dr Mariana Conte Grand, Dr Caio Cesar de Araujo Barbosa
Affiliations: World Bank Group
The Amazon, spanning eight countries and one territory, and covering approximately 5.5 million square km, plays a crucial role in global climate regulation and biodiversity conservation. The ongoing project "Innovating through better information, monitoring, and financing instruments", part of the World Bank’s Amazonia Viva Initiative, aims to support Amazonian countries in identifying challenges and opportunities for using improved data and information systems. It also aims to foster dialogue on the most effective way to develop strong regional coordination for curbing deforestation through improved management and regional conservation policies. Through extensive desktop research, the project has analyzed the operational EO-based deforestation and forest degradation monitoring efforts across seven Amazonian countries. In addition to characterizing the existing monitoring programs, this review identified the critical technical, operational, and institutional challenges these programs face today. Results revealed both common strengths and significant heterogeneity in capabilities and resources. Most Amazon countries are committed to high-quality deforestation monitoring and are interested in continued improvement. The use of open high-resolution data from the Sentinel constellations and free access to some VHR data through specific data-sharing programs with satellite data providers fuel this evolution. Machine learning approaches facilitating and enhancing data processing and interpretation are progressively being adopted, although traditional remote sensing relying largely on visual interpretation is still in use. Very high-resolution data and specific software tools contribute to reducing the cost of obtaining ground truth for calibration and validation purposes. Regional organizations, such as the Amazon Cooperation Treaty Organization (OTCA), support the monitoring programs' evolution by facilitating knowledge exchange across the region. 
Civil society organizations complement ongoing governmental monitoring efforts in the Amazon. Conducted in parallel to public programs, civil society initiatives ensure independent monitoring at different scales. MapBiomas, a well-established land use and land cover mapping program in the Amazon, monitors forest loss and deforestation alerts on a national and regional scale. Programs such as the Global Land Analysis and Discovery (GLAD) and the Radar for Detecting Deforestation (RADD) – both led by academia – provide deforestation alerts for the pan-tropical area in near-real-time. The Amazon Network of Georeferenced Socio-Environmental Information (RAISG) produces socio-environmental data and information on Amazonia, developed through protocols common to all the region's countries. In parallel, at the local scale, indigenous and other communities conduct monitoring from the ground level, occasionally using geospatial technologies and observing their lands with drones. However, despite this progress and the number of stakeholders involved, operational deforestation monitoring programs in the Amazon region still face severe challenges. From the technical point of view, persistent cloud coverage in the region makes it necessary to move beyond traditional Landsat-based monitoring towards a multi-source approach (SAR, optical, lidar) and/or sensors with short revisit cycles (e.g., Sentinel data). This is a more complex approach that not all countries can implement. Timeliness remains another critical obstacle: deforestation monitoring programs operate on annual or even bi-annual (Ecuador) data cycles, which means deforestation data is updated on the corresponding portals at most once every twelve months, and often less frequently. Additionally, even though most public programs have developed a public geospatial tool, full data transparency has not yet been achieved across the region. 
While deforestation data can be visualized on all these portals, most of them have low usability, and data can often not be queried or downloaded in GIS formats. Moreover, most regional public deforestation monitoring systems fail to comply with basic metadata standards. Finally, despite studies suggesting that a significant portion of forest loss in the Amazon is attributable to degradation rather than deforestation (Matricardi et al., 2020), Amazon countries have yet to achieve full, operational monitoring of forest degradation. Notably, attempts to monitor degradation have focused solely on forest cover disturbances, overlooking critical changes in forest ecological integrity, which also constitute degradation. In the region, there are also critical operational barriers hindering conservation efforts. One of these barriers is the lack of (seamless) integration between local and national monitoring data and systems. However, it is expected that, in the near future, LEO constellations providing broad internet coverage and new observation platforms (e.g., HAPS) will enable field teams to efficiently upload ground observations into the national systems while accessing national-level data, creating an effective two-way flow of information that will improve monitoring and law enforcement operations. As a result, monitoring programs will benefit from strengthened operational frameworks and capacity, ultimately leading to more effective forest monitoring across the region. Last but not least, several obstacles detected in this review are institutional in character, among them understaffing and high staff turnover, which lead to discontinued capacity-building efforts. A critical institutional challenge for all the programs is ensuring their financial sustainability, which remains fragile. 
Monitoring initiatives, primarily dependent on aid funds, are inherently vulnerable to budgetary fluctuations and changes in political will. The region has witnessed direct threats to program sustainability through unstable budgets and priorities. The sustainability of the entire monitoring process depends on the government's will to create and sustain an institutional infrastructure in which agencies combating deforestation from different angles have clear mandates, distinct and complementary responsibilities, and a shielded, continued budget. While each country has its unique particularities, challenges, and priorities, the Amazon functions as a single ecological unit that spans multiple national borders. For this reason, we believe that closing these gaps through an integrated regional approach to monitoring would lead to harmonized methodologies, amplified technology adoption, insightful cross-border analyses, shared best practices, better policies, and economies of scale. By addressing topics such as coordination among Amazon countries, transboundary conservation, and enhanced cooperation, the recently signed Belém Declaration (August 2023) establishes the necessary framework to build this regional view. This formal regional monitoring approach would empower countries to better understand ecosystem changes, identify priorities, and implement international agreements. It would enable cross-border analyses that are impossible from national data alone. Partnerships embracing Indigenous knowledge would also fortify local engagement and policy impact, and collaboration with civil society monitoring programs would help consolidate and strengthen the deforestation data. Today, no system generates periodic, comprehensive, and standardized added-value geospatial environmental data at the regional scale in the Amazon. 
In particular, no system provides data specifically aimed at supporting regional collaboration in the context of deforestation and forest degradation monitoring. Should stakeholders require official, regionally standardized data and comparable added-value information to underpin regional-scale anti-deforestation policy design, or to inform joint strategies to meet external sustainability demands (e.g., the EUDR or Amazon Bonds initiatives), there is a gap: a requirement that is currently not met by any system or tool and that will require specific developments. Realizing this requires political will, data-sharing protocols, and capacity investments. Enhanced regional cooperation is urgently needed to harmonize approaches, integrate diverse data sources, build capacity, and strengthen monitoring policy impact. Further work is ongoing, particularly in collaboration with the concerned national and regional stakeholders (multilateral meetings and workshops), to gather and consolidate the stakeholders’ requirements and shape the preferred solution for building this regional view of deforestation and forest degradation dynamics in the Amazon. With concerted effort, a regional monitoring network can empower nations to protect the irreplaceable Amazon biome. The Spanish Government is funding this project under the SFLAC Trust Fund.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall F1)

Presentation: Advancing GHG Emission Monitoring from Forest Fires: A Tier 3 Copernicus-Based Approach for the Mediterranean Region

Authors: Chiara Aquino, Alessandro D'Anca, Jishnu Jeevan, Maria Vincenza Chiriacò, Sergio Noce, Manuela
Affiliations: CMCC Foundation, Euro-Mediterranean Centre on Climate Change
The Mediterranean region is increasingly vulnerable to wildfires due to rising temperatures and prolonged droughts, both exacerbated by climate change. These fires devastate forest ecosystems and contribute significantly to greenhouse gas (GHG) emissions, thereby amplifying global warming. Accurately quantifying GHG emissions from forest fires is a critical component of national inventory reports, which are updated annually under the United Nations Framework Convention on Climate Change (UNFCCC). These reports are essential for tracking progress toward Sustainable Development Goal (SDG) 13: Climate Action, particularly Eurostat Indicator 13_10, which monitors past GHG emission trends, including those from Land Use, Land Use Change, and Forestry (LULUCF). As far as forest fires are concerned, current reporting methods for GHG emissions rely on Tier 1 guidelines of the Intergovernmental Panel on Climate Change (IPCC, 2006). However, these approaches often rely on field inventory data and lack explicit spatial reference and timeliness, with data lags of up to two years. The SDG-Eyes project (https://sdgs-eyes.eu/) is contributing to advancing multiple SDGs, including the improved assessment and monitoring of Eurostat Indicator 13_10. To achieve this, the project is developing an innovative Tier 3 methodology that integrates Earth Observation data into the IPCC empirical model to enhance the accuracy of GHG emission estimation. In particular, we propose the addition of a novel burning efficiency model calibrated to fire severity, which computes the proportion of biomass combusted for each vegetation class, enabling the use of spatially dynamic combustion factors. The workflow is modularly designed to incorporate components from the Copernicus Emergency and Land Monitoring Services, facilitating the development of a comprehensive Tier 3 methodology. 
This approach overcomes the limitations of Tier 1 methods by delivering reproducible results beyond traditional field measurements and supporting the annual tracking of the SDG indicator. Use of Copernicus data also ensures that the methodology can be adapted for application in other Euro-Mediterranean countries. The outcome is a freely accessible tool developed for Italy, the pilot area, which generates detailed maps for every forest fire event. Its development has been enriched by contributions from key stakeholders, including the Italian Forestry Corps (CUFAA), the Italian Institute for Environmental Protection and Research (ISPRA), and the Italian National Institute of Statistics (ISTAT). It provides GHG emission estimates across multiple forest classes and administrative levels. Designed to be user-friendly, the tool supports geospatial analysis and integration into national emission inventories, offering a valuable resource for statistical officers and research entities working toward SDG monitoring for effective climate action.
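The methodology described above builds on the IPCC (2006) fire emission equation, L_fire = A · M_B · C_f · G_ef · 10⁻³, with the combustion factor C_f made spatially dynamic. The sketch below shows how a severity-dependent C_f could enter that equation; the linear severity-to-C_f mapping and all numerical values are illustrative assumptions, not the project's calibrated Tier 3 model.

```python
# Hedged sketch of the IPCC (2006) fire emission equation
#   L_fire = A * M_B * C_f * G_ef * 1e-3
# with a spatially dynamic combustion factor C_f derived from a fire
# severity index. The severity model and numbers are assumptions.
import numpy as np

def fire_emissions(area_ha, fuel_t_ha, severity, gef_g_kg):
    """Per-pixel GHG emissions in tonnes.

    area_ha   : burned area per pixel (ha)
    fuel_t_ha : available fuel mass M_B (tonnes dry matter / ha)
    severity  : fire severity index in [0, 1] (e.g. a scaled dNBR)
    gef_g_kg  : emission factor G_ef (g of gas per kg of dry matter burnt)
    """
    c_f = 0.2 + 0.6 * severity      # assumed linear severity-to-C_f model
    return area_ha * fuel_t_ha * c_f * gef_g_kg * 1e-3

# Toy 2x2 burned scene (30 m pixels ~ 0.09 ha), with a CO2 emission
# factor of roughly 1569 g/kg for forest biomass
sev = np.array([[0.2, 0.9], [0.5, 1.0]])
em = fire_emissions(area_ha=0.09, fuel_t_ha=120.0, severity=sev,
                    gef_g_kg=1569.0)
print(em.sum())   # total CO2 over the scene, in tonnes
```

In a Tier 1 setting C_f would be a single tabulated constant per vegetation class; making it a per-pixel function of EO-derived fire severity is what turns the same equation into a spatially explicit estimate.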
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall F1)

Presentation: Addressing Bottlenecks in the use of Biodiversity Monitoring Data to enhance GBF Reporting

Authors: Arne Langlet
Affiliations: University Of Vienna
The Kunming-Montreal Global Biodiversity Framework (GBF) has set ambitious goals to guide global efforts in biodiversity conservation, requiring countries to monitor and report progress toward achieving these targets. Central to this reporting process is the use of indicators such as Essential Biodiversity Variables (EBVs) and Essential Ocean Variables (EOVs). Simultaneously, advances in Earth Observation (EO) technologies and Satellite Remote Sensing (SRS) present unprecedented opportunities to enhance the monitoring and reporting of these indicators across diverse dimensions of biodiversity, encompassing terrestrial, freshwater, coastal, and marine ecosystems. Despite these advancements, significant challenges persist in integrating EO, SRS, and local monitoring data into national reporting frameworks and decision-making processes. Challenges also center on how satellite observations can be made accessible to non-specialists for use in reporting. The MARCO-BOLO project aims to address these challenges by pursuing two main avenues. On one hand, it focuses on developing technological products related to eDNA usage, connecting monitoring data from various ecosystems, linking remote sensing data to EBVs and EOVs, and modeling and mapping biodiversity habitats and ecosystem status. On the other hand, it works to identify and address critical bottlenecks in the effective use of biodiversity monitoring data. To do so, the project collaborates closely with stakeholders to make biodiversity monitoring data more directly usable and applicable for policy and management purposes. This presentation discusses several key challenges, such as inconsistent data standards leading to poor interoperability between datasets, insufficient metadata quality, fragmented use of global repositories, and underutilization of essential variables. 
It demonstrates how the project aims to overcome these obstacles by delivering practical tools for users and better connecting scientific advancements with national reporting requirements. We showcase how a community-of-practice approach facilitates social learning, on the one hand ensuring the development of ready-to-use products that address stakeholders’ key requirements, and on the other hand fostering better understanding of, and engagement with, satellite- and non-satellite-based biodiversity monitoring data among stakeholders, including analysts, consultants, policymakers, and other non-specialists in satellite data.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall F1)

Presentation: The 'Climate and biodiversity oases' program: advancing climate-adaptive small wetland restoration in France through innovative action research

Authors: Dr. Fanny Mallard, Dr. Félix
Affiliations: French Society For Nature Protection (SNPN)
Small wetlands, particularly ponds, are recognized as biodiversity hotspots and effective Nature-based Solutions (NbS) in the face of climate change. These ecosystems provide essential ecological services, including carbon sequestration, water regulation, and habitat provision for numerous species, many of which are highly vulnerable to environmental changes. However, the state of ponds in France is alarming: 90% have disappeared over the past century, and more than half of those remaining are degraded. Despite their small surface area (<1% of the national territory), these ecosystems harbor approximately 20% of protected species, underscoring their disproportionate importance to biodiversity conservation. In response to these challenges, the French Society for Nature Protection (SNPN) launched the "Climate and biodiversity oases" program in 2023, an action-research initiative to protect, restore, and create networks of small wetlands across France. This program aligns with global frameworks such as the Kunming-Montreal Global Biodiversity Framework (GBF) and the EU Biodiversity Strategy for 2030, addressing Targets 2 and 3 of the GBF to restore degraded ecosystems and conserve at least 30% of key habitats by 2030.
Program objectives
The program is structured around three key objectives:
1. Participatory monitoring and knowledge development. The first objective focuses on building a participatory national observatory to collect long-term data on pond conditions through standardized protocols. Leveraging citizen science, the program engages local communities, researchers, and stakeholders to monitor ecological trends and enhance understanding of these ecosystems’ dynamics. A dedicated online platform facilitates data collection and dissemination, integrating traditional field observations with innovative approaches such as drone imagery to provide comprehensive ecosystem assessments.
2. Modeling and prioritization for climate-resilient networks. The second objective emphasizes the development of a methodology to model the evolution of pond conditions and identify high-priority areas for restoration and network creation. Spatial modeling integrates climate projections, land-use dynamics, and ecological connectivity to map critical areas for intervention.
3. On-the-ground conservation actions. The third objective operationalizes these strategies through direct conservation actions. In collaboration with local stakeholders, the program aims to restore or create at least 30 wetlands annually in high-priority areas. Field-based interventions are monitored using cutting-edge tools, including drone imagery, to evaluate restoration success and guide adaptive management.
Scaling impact
A national action plan for ponds, developed for the Ministry of Ecology, aims to establish a unified strategy for the protection and restoration of small wetlands across France. Building on the methodologies of the 'Climate and Biodiversity Oases' program, this initiative scales localized actions to a national level, enhancing stakeholder coordination, prioritizing critical habitats, and aligning with international biodiversity and climate commitments to ensure long-term ecological resilience.
Addressing conference themes
The program demonstrates how localized conservation efforts operationalize global targets by focusing on small wetlands, an often underrepresented habitat in large-scale conservation strategies. The integration of Earth Observation (EO) technologies is a cornerstone of the program, enabling precise mapping, monitoring, and management of wetland networks. EO-based tools support:
• Priority setting: advanced spatial analyses, coupled with climate-informed methodologies, identify areas of high conservation value for restoration and network creation.
• Monitoring effectiveness: combining EO with participatory science and innovative methods, such as drone imagery, to evaluate conservation outcomes and ensure adaptive management.
• Policy support: providing a replicable framework for integrating EO technologies into national systems, aligning with global biodiversity and climate targets, and fostering cross-sectoral collaboration.
By leveraging EO, participatory science, and climate-informed strategies, the 'Climate and Biodiversity Oases' program provides a replicable model for conservation initiatives. Its presentation at the Living Planet Symposium will detail methodologies, innovations, and outcomes, inspiring dialogue and collaboration to address the twin crises of climate change and biodiversity loss.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall F1)

Presentation: Co-designing Earth Observation Solutions for Ecosystems Conservation: Lessons from the PEOPLE-ECCO Project

Authors: Andy Dean, Marcos Kavlin, Michael Munk, Mads Christensen, Gyde Krueger, Matthes Rieke, Wietske Bijker, Jasper Van doninck, Louise Willemen
Affiliations: Hatfield, DHI, 52°North, University of Twente | Faculty of Geo-information Science and Earth Observation (ITC)
The PEOPLE-ECCO project (Enhancing Ecosystems Conservation through Earth Observation Solutions, Capacity Development and Co-design) is funded by ESA under the Earth Observation Science for Society (EO4Society) programme. The project addresses critical needs identified by Civil Society Organizations (CSOs) and Non-Governmental Organizations (NGOs) striving to improve evidence-based ecosystem conservation. PEOPLE-ECCO aims to develop and demonstrate innovative Earth Observation (EO)-integrated methods and tools to 1) monitor protected areas' conditions and management effectiveness, and 2) identify high-priority areas to be protected. Through this, the project will support international and European policies such as the Convention on Biological Diversity (CBD) and the Kunming-Montreal Global Biodiversity Framework (GBF). Central to PEOPLE-ECCO is a user-centred, co-development approach. Six user organisations - African Parks, the Bulgarian Society for the Protection of Birds, the Lebanon Reforestation Initiative, IUCN Vietnam, the Prince Edward Island Watershed Alliance and Reef Check Malaysia - are actively involved as Early Adopters throughout the project’s implementation. These Early Adopters are distributed over four continents, and the areas they jointly manage cover a range of terrestrial and aquatic ecosystems, ensuring that a wide range of user perspectives is represented in the project. Furthermore, PEOPLE-ECCO engages with the wider community of conservation practitioners to collect, analyse and consolidate the user requirements regarding tools and capacity development to be addressed by the project. These inputs guide the tool development, ensure that real-world needs are addressed, and foster EO capacity within conservation CSOs/NGOs, enabling them to integrate these EO-based tools and methodologies into their operational practices. 
In this presentation, we share our key experiences and lessons learned from the user requirements consolidation and co-design processes undertaken during the PEOPLE-ECCO project, which commenced in October 2024. These included numerous bilateral online meetings, an online survey, and in-person group meetings. Emphasis is placed on the collaborative strategies employed, the challenges encountered, and the insights gained in bridging EO technology with on-the-ground conservation needs and local knowledge.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall N1/N2)

Session: C.03.02 Advances in the theory and methodology of SAR interferometry and SAR polarimetry - PART 1

In 2003, ESA organised the first POLINSAR workshop, gathering the new community to present the findings of ESA-funded studies on SAR polarimetry and polarimetric interferometry applications and to prepare recommendations for future research. Today, the POLINSAR workshop is an established platform where scientists meet and exchange their recent research advances, supporting new mission proposals and dedicated airborne campaigns that increase our knowledge. Applications of polarimetry and interferometry, separately or combined, range over a variety of domains, for example the biosphere (forest height, forest structure, forest disturbance, crop development stage classification over agricultural crops, etc.), hydrosphere (soil moisture), cryosphere (snow depth, ice structure), geosphere (enhanced deformation identification), urban areas (scatterer identification, building reconstruction), and many more.

We welcome contributions on topics including, but not restricted to:
• New advances in Polarimetric SAR Interferometry: methods and applications
• Multi-baseline and TomoSAR: methods and applications
• Differential Polarimetric SAR Interferometry: methods and applications
• Airborne Campaigns for polarimetric and interferometric SAR
• Future mission concepts related to polarimetry and interferometry
• Recent advancements using AI for SAR mission concepts, methods and applications.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall N1/N2)

Presentation: A Model-based Radar Vegetation Index for Crop Monitoring with Sentinel-1 SAR Data

Authors: Lucio Mascolo, Juan M Lopez-Sanchez, Shane Cloude
Affiliations: University of Alicante, University of Valencia, AELc
Vegetation Indices (VIs) derived from satellite images are widely employed to monitor agricultural crops, as they allow a simple and physical description of the canopy cover. In optical remote sensing, VIs are computed by combining reflectance measurements at different spectral bands that are sensitive to different biophysical properties of the plants, e.g., chlorophyll content, leaf internal structure, and so on. In the microwave domain, there exist VIs which exploit the sensitivity of polarimetric Synthetic Aperture Radar (PolSAR) measurements to the structural (size, shape, orientation) and dielectric (related to water content and material composition) properties of the plants to provide a simple physical description of the scattering from the canopy. Such Radar Vegetation Indices (RVIs) are based on the combination of PolSAR observables (e.g., backscattered intensities at polarimetric channels, outputs of polarimetric decompositions) according to the specific polarimetric mode of the available data. The first RVI was introduced for quad-pol SAR measurements in [1], while others were proposed later for dual-pol and compact-pol data (see [2] for a comprehensive review of RVIs). In the last decade, the Sentinel-1 (S1) mission, operated by the European Space Agency, fostered the use of coherent dual-pol SAR data (where only one column of the scattering matrix is measured for each pixel) to observe agricultural crops. Although these data have a limited information content compared to quad-pol, they are acquired with a short revisit time (6 days when two satellites are employed) and are freely accessible globally. Thus, they are very suitable for operational crop monitoring. Accordingly, RVIs have also been defined for dual-pol S1 data, based either on the degree of polarization or on the ratio between the backscattered intensities, e.g. [3]. 
In this work, we apply the model-based decomposition that we proposed in [4] for dual-pol SAR data to define a novel RVI, adapted for S1, which is linked to the physics of the scattering from the canopy. This decomposition is available in the SNAP software (since version 11). According to [4], the Stokes vector of the backscattered wave is decomposed into two terms: 1) a partially polarized wave, associated with volume scattering from the vegetation layer, modeled as a random cloud of dipoles; and 2) a completely polarized wave, parametrized by the magnitude and phase of the complex polarization ratio (denoted with delta and alpha, respectively), which describes the contribution from the underlying soil. Accordingly, we define the new RVI as the fraction of (random) volume power. Furthermore, we use both the RVI and alpha (representing the angular separation between the transmitted Stokes vector and the polarized term on the Poincaré sphere) to define a plane, the RVI-alpha plane, which allows a novel physically based characterization of the polarization state backscattered from crop canopies. While the RVI describes the contribution of randomly oriented canopy elements (leaves, panicles, etc.), alpha is linked to the physics of wave propagation through the volume layer. The latter may distort the polarization behavior of the polarized term, causing differential extinction and leading to a further depolarization of the received wave. This situation, characterized by increasing values of alpha, is well identified in the RVI-alpha plane. This characterization is not possible with many dual-pol RVIs present in the literature, as they are not based on physical models or assume that the polarized soil contribution is always copolarized with the transmitted polarization. The proposed method is applied to dense time series of dual-pol VV-VH S1 images collected over different crop types from 2017 to 2021. References: [1] Y. 
Kim and J. J. van Zyl, “A time-series approach to estimate soil moisture using polarimetric radar data,” IEEE Trans. Geosci. Remote Sens., vol. 47, pp. 2519–2527, 2009. [2] D. Mandal, A. Bhattacharya, and Y. S. Rao, Radar Remote Sensing for Crop Biophysical Parameter Estimation. Springer, 2021. [3] D. Mandal, V. Kumar, D. Ratha, S. Dey, A. Bhattacharya, J.M. Lopez-Sanchez, H. McNairn, Y.S. Rao, “Dual polarimetric radar vegetation index for crop growth monitoring using Sentinel-1 SAR data”, Remote Sens. Environ., vol. 247, art. nr. 111954, 2020. [4] L. Mascolo, S.R. Cloude, J.M. Lopez-Sanchez, "Model-based decomposition of dual-pol SAR data: application to Sentinel-1", IEEE Trans. Geosci. Remote Sens., vol. 60, art. nr. 5220119, 2022.
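A simplified numerical sketch of the dual-pol Stokes quantities underlying this kind of approach is given below. It is not the decomposition of [4]: as an assumption, the unpolarized power fraction (one minus the degree of polarization) stands in for the volume power fraction, and alpha is taken as half the Poincaré-sphere angle between the polarized component and the transmitted-polarization axis.

```python
# Simplified sketch (assumption-laden, not the decomposition of [4]):
# Stokes-vector statistics of a dual-pol (VV, VH) pixel, a degree-of-
# polarization-based volume fraction, and a Poincare-sphere angle alpha.
import numpy as np

def dualpol_rvi_alpha(svv, svh):
    """svv, svh: complex backscatter samples (multilooked set per pixel)."""
    # Stokes vector of the received wave in the (VV, VH) basis
    s0 = np.mean(np.abs(svv)**2 + np.abs(svh)**2)
    s1 = np.mean(np.abs(svv)**2 - np.abs(svh)**2)
    s2 = np.mean(2.0 * (svv * np.conj(svh)).real)
    s3 = np.mean(-2.0 * (svv * np.conj(svh)).imag)

    dop = np.sqrt(s1**2 + s2**2 + s3**2) / s0   # degree of polarization
    rvi = 1.0 - dop                             # assumed volume fraction
    # half the angle between the polarized part and the (1, 0, 0) axis
    alpha = 0.5 * np.arccos(s1 / (dop * s0))
    return rvi, np.degrees(alpha)

rng = np.random.default_rng(0)
n = 4096
# Toy pixel: deterministic co-pol soil return plus weak random canopy signal
svv = 1.0 + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
svh = 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
rvi, alpha = dualpol_rvi_alpha(svv, svh)
print(rvi, alpha)
```

For a pure deterministic return the degree of polarization approaches 1 and the sketch's volume fraction approaches 0; adding more random canopy power lowers the degree of polarization and raises it, mirroring the qualitative behaviour the RVI-alpha plane is built on.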
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall N1/N2)

Presentation: Seasonal and Long-Term Vegetation Effects on Sentinel-1 Range Coregistration Shifts

Authors: Giorgio Gomba, Francesco de Zan
Affiliations: German Aerospace Center (DLR), delta phi remote sensing GmbH
In the SAR4Tectonics project, targeted at measuring ground deformation in tectonically active zones, we processed, with interferometric techniques, over 1500 stacks of Sentinel-1 SLCs, averaging 160 images per stack. During the InSAR processing, coregistration was performed between the stack reference image, located at the center of the time series, and all other images. This was achieved using patch-based incoherent cross-correlation for the range direction and the Enhanced Spectral Diversity technique for the azimuth direction. Several corrections were applied beforehand, including instrument timing correction, tropospheric and ionospheric path delays, and solid Earth tides. Tectonic plate motion was afterwards also removed from the coregistration shifts using GNSS data. The remaining shifts should primarily reflect residuals from the correction process, orbit-related errors, and random noise. Unexpectedly, the residual range coregistration shifts exhibit systematic trends and yearly oscillations in forested regions, while non-vegetated areas show the expected zero-mean, uncorrelated, and flat residuals. These signals are consistent between ascending and descending stacks, with the trends always directed towards the satellite (shortening the radar range). Over vegetated temperate regions, trends average approximately 5 cm/year, increasing to around 10 cm/year in equatorial and tropical regions. Oscillations have amplitudes ranging from 10 to 80 cm, with the largest values in tropical and temperate areas, and minimal oscillations (<5 cm) in equatorial regions. The oscillation peaks (minimum delay) occur consistently within climatic zones: around June in temperate regions, September in tropical areas, and January in the Southern Hemisphere. The correlation of these signals with vegetation cover and climatic zones rules out significant correction errors, such as those linked to the ionospheric effects of the growing solar cycle or uncompensated tectonic motion. 
Instead, the signals suggest that the radar delay is influenced by seasonal vegetation dynamics and longer-term multiyear trends, possibly linked to variations in plant water content, as well as slow plant growth over time. These signals contrast with results from persistent and distributed scatterers (PS/DS), which primarily capture well-known phenomena such as ground deformation from tectonic motion or subsidence due to groundwater extraction. However, PS/DS are generally absent over vegetated areas, meaning no PS/DS measurements are available to confirm the coregistration residual signals observed in these regions. To our knowledge, this phenomenon represents a novel discovery. While the exact physical mechanisms driving these trends are still unclear, these findings suggest that radar delays could be used in an innovative way to monitor vegetation growth and inter-annual variability, which could also provide insights into plant health or drought stress.
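The trend-plus-annual-oscillation behaviour described above can be illustrated with a least-squares decomposition of a shift time series. This is a hedged sketch on synthetic data mimicking the reported magnitudes (a ~5 cm/yr trend with a ~20 cm seasonal cycle), not the SAR4Tectonics processing chain.

```python
# Hedged illustration: decompose a residual range-shift time series into
# a linear trend plus an annual oscillation by ordinary least squares.
import numpy as np

def fit_trend_seasonal(t_years, shifts):
    """Fit shift(t) = a + b*t + c*cos(2*pi*t) + d*sin(2*pi*t).

    Returns (trend b, in shift units per year, and the oscillation
    amplitude sqrt(c^2 + d^2))."""
    w = 2.0 * np.pi * t_years
    A = np.column_stack([np.ones_like(t_years), t_years,
                         np.cos(w), np.sin(w)])
    a, b, c, d = np.linalg.lstsq(A, shifts, rcond=None)[0]
    return b, np.hypot(c, d)

# Synthetic 6-day sampling over 5 years; shifts in cm with 2 cm noise
t = np.arange(0.0, 5.0, 6.0 / 365.25)
rng = np.random.default_rng(1)
obs = 5.0 * t + 20.0 * np.sin(2.0 * np.pi * t) + rng.normal(0.0, 2.0, t.size)
trend, amp = fit_trend_seasonal(t, obs)
print(trend, amp)   # recovers roughly 5 cm/yr and 20 cm amplitude
```

The phase of the fitted cosine/sine pair (atan2(d, c)) would additionally give the timing of the oscillation peak, which is how the June/September/January peaks mentioned above could be estimated per climatic zone.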
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall N1/N2)

Presentation: Dense building deformation monitoring from InSAR based on LiDAR height measurement.

Authors: Flora Weissgerber
Affiliations: DTIS, ONERA, Université Paris-Saclay
Creating digital twins of cities is essential for detecting urban heat islands or assessing flood risk. By incorporating the temporal axis, changes such as urban sprawl or urban renovation can be encapsulated in the digital twins. At smaller scales, digital twins can also be used to follow the deformations of buildings caused by terrain movement due to subsidence or underground works. These deformation measurements can be extracted from an InSAR temporal stack together with the building height [Weissgerber2017]. Moreover, InSAR acquisitions allow monitoring large areas with a bi-monthly update. However, the interpretation of the InSAR phase may be hindered by geometrical effects, such as layover and shadow, which are very common in dense urban environments. On the other hand, a 3D description of buildings can be extracted from LiDAR point clouds thanks to their fine height resolution and the large density of measured points. However, LiDAR acquisitions have a small footprint, requiring a large number of flights to monitor an entire city and leading to infrequent updates. In this work, our goal is to use LiDAR point clouds as a reference that can help disentangle the InSAR information, in order to extract information about building deformation. To do so, two steps are needed: 1. The geometrical alignment between the LiDAR point clouds and the SAR image, to get a LiDAR reference height for each pixel of the studied buildings. This includes the selection of the LiDAR points that correspond to scatterers visible in the SAR image. 2. The analysis of the InSAR phase temporal stack to detect deformation. The first part of this analysis is to separate the contribution of the elevation in the InSAR phase from the contribution of the deformation. This is done by converting the reference altitude of each pixel found in step one into a reference phase for each InSAR acquisition.
However, a phase difference due to the discrepancy between the phase scattering center height and the LiDAR geometry can be detected, since it is stable through time, unlike the deformation pattern, which changes with the acquisition. Then, for InSAR pairs where deformation has been detected, the dense deformation pattern can be analyzed to detect non-uniform deformation across the building façade. This study is based on a temporal stack of 96 images acquired by TerraSAR-X/TanDEM-X between 2007 and 2012, with the reference image chosen as that of 24/01/2009. The LiDAR reference is the LiDAR HD acquired by IGN in 2023. Given the large time gap between the TerraSAR-X/TanDEM-X acquisitions and the LiDAR HD one, we selected buildings with no structural changes between the two sources of data. Mainly, we focus on three buildings previously studied in [Weissgerber2017]: the Mirabeau Tower, the Cristal Tower and the Keller Tower. The footprint of each building is also assumed to be known; it is collected from the BD TOPO, also produced by IGN. To align the LiDAR reference and the SAR images geometrically, we use the co-registration algorithm described in [Weissgerber2022], which projects each LiDAR point into the reference SAR image. Given the size of the LiDAR point cloud, we project only the LiDAR points that are classified as buildings within the building footprint. This removes ground and vegetation points, as well as artifacts, that are labeled as such in the LiDAR point cloud. To select the LiDAR points that correspond to visible scatterers in the SAR image, the points are projected into the azimuth/range plane, without considering their height difference. Then, the visibility criteria are computed separately for each azimuth by assuming that the building façades are opaque. If the roof is visible, the points are considered visible if their height is above the shadow cast by the points with a smaller range than theirs.
If the roof is not visible, only the points having the smallest range are considered visible. Even after selecting the LiDAR points visible in the SAR acquisition, not all SAR pixels have a height reference, due to the sparsity of the LiDAR point cloud. To fill the gaps in the façade, the height of the LiDAR points is linearly regressed between the ground and the top of the building. To increase the robustness of the linear regression, we use all the points on the same side of the building, a side being defined by a segment in the shapefile of the building footprint. Once this reference height is computed, it is converted into a reference phase using the InSAR acquisition metadata. It can then be subtracted from the measured InSAR phase to analyze the phase residue. The phase residue can be due either to deformation or to a discrepancy between this reference phase and the scatterers' phase center. To compensate for the latter, the reference height is updated by estimating the height offset from the conversion of the InSAR phase residue to height, for acquisitions close to the reference image and acquisitions in the same month but different years. This choice is motivated by the results of [Weissgerber2017], which showed that in this dataset the deformations were mainly due to thermal expansion and thus were very small between acquisitions one year apart, for which temperatures are close. Having compensated the reference height for the difference between the LiDAR height and the phase center height, the observed deformation patterns are mostly linear with the façade elevation, consistent with deformation due to thermal expansion. A more in-depth analysis has to be carried out to detect small phase anomalies that could indicate non-homogeneous thermal deformation or deformation of another origin.
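The height-to-phase conversion used here is, to first order, a multiplication by the vertical wavenumber derived from the acquisition metadata. A minimal sketch of the conversion and of the phase-residue computation (the kz and height values are hypothetical, for illustration only):

```python
import numpy as np

def height_to_reference_phase(h_ref_m, kz_rad_per_m):
    """First-order conversion of a reference height into an
    interferometric reference phase: phi_ref = kz * h."""
    return kz_rad_per_m * h_ref_m

def phase_residue(measured_phase, h_ref_m, kz_rad_per_m):
    """Subtract the height-induced phase and rewrap to (-pi, pi]; the
    residue contains deformation and phase-center discrepancies."""
    residue = measured_phase - height_to_reference_phase(h_ref_m, kz_rad_per_m)
    return np.angle(np.exp(1j * residue))

# Hypothetical example: kz = 0.05 rad/m, LiDAR reference height 40 m,
# measured InSAR phase 2.3 rad
res = phase_residue(2.3, 40.0, 0.05)
```

In the study, a residue that is stable through time would point to a phase-center discrepancy, whereas a residue varying between acquisitions would point to deformation.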
Moreover, the difference between the LiDAR height and the phase center height could also be analyzed, either to understand the scattering mechanisms of these building façades or to update the 3D representation of the buildings with details captured only in the InSAR phase. References: [Weissgerber2017] Flora Weissgerber, Elise Colin-Koeniguer, Jean-Marie Nicolas and Nicolas Trouvé, “3D Monitoring of Buildings Using TerraSAR-X InSAR, DInSAR and PolSAR Capacities”, Remote Sens. 2017, 9(10), 1010; doi:10.3390/rs9101010. [Weissgerber2022] Flora Weissgerber, Laurane Charrier, Cyril Thomas, Jean-Marie Nicolas and Emmanuel Trouvé (2022), “LabSAR, a one-GCP coregistration tool for SAR–InSAR local analysis in high-mountain regions”, Front. Remote Sens. 3:935137. doi: 10.3389/frsen.2022.935137.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall N1/N2)

Presentation: Vertical Imaging of Semitransparent Media With a Single Baseline

Authors: Pau Prats-Iraola, Matteo Nannini, Matteo Pardini, Luísa Dallmann, Kostas Papathanassiou
Affiliations: DLR
This contribution presents a new technique to retrieve low resolution vertical reflectivity profiles of semitransparent media by means of single-baseline synthetic aperture radar (SAR) interferometric acquisitions. The approach explores the dependency of the vertical wavenumber on the wavelength across the system bandwidth. By performing an inversion as a function of the vertical wavenumber, a low-resolution vertical tomogram can be obtained. The penetration of microwaves into natural, semi-transparent media leads to different scattering interactions at different locations within the volume. The reconstruction of this 3-D scattering process allows the determination of structural parameters that are relevant for a wide range of scientific and commercial questions/applications. In particular, synthetic aperture radar (SAR) is very attractive for these applications due to its high spatial resolution and wide coverage. This has resulted in the development over the last decades of a number of 3-D reconstruction approaches and algorithms including polarimetric SAR tomography, polarimetric SAR interferometry, coherence-based tomography or polarimetric SAR holography, to name the most relevant. This contribution introduces a novel concept, named sublook SAR tomography (SBT), to retrieve the vertical reflectivity profile of semi-transparent media by exploiting a single interferometric pair of SAR images. Imaging in the vertical dimension is achieved by exploiting the dependency of the vertical wavenumber on the wavelength across the system bandwidth. 
For conventional configurations, the resulting tomogram has a reduced resolution, but the technique could still be exploited to provide inputs to other imaging or inversion methods with better resolution, or it could be a way to retrieve low-resolution tomograms in scenarios where the aforementioned techniques, which require a larger number of acquisitions, are not feasible, e.g., for space missions to icy planets/moons or with ground-based radars. When considering the tomographic inversion problem, one notes that the steering vector, which encodes the sensitivity to the height of the scatterer(s), depends on the wavelength. This fact can be exploited with one single interferogram as follows. By dividing the available range bandwidth into different sublooks for both the primary and secondary images, one can retrieve a set of interferograms at different wavelengths. This new set of interferograms can be exploited to generate a tomogram, similarly to coherence-based or correlation tomography. The resolution in the vertical dimension is worse than in conventional tomography, the degradation factor being roughly the ratio between the central frequency and the range bandwidth. The approach was suggested for the first time in [1]. That contribution included the mathematical framework of the technique and the validation of the approach using point-cloud simulations. The present contribution will show results with real data acquired by both airborne and spaceborne sensors. The airborne data were acquired by DLR’s F-SAR system, and results will be shown over distributed scatterers (forests and ice). The spaceborne data were acquired by TerraSAR-X and TanDEM-X over urban scenarios. In both cases a comparison with conventional tomography will also be shown. [1] P. Prats-Iraola, M. Nannini, M. Pardini, K. Papathanassiou, “Sublook SAR Tomography: A New Single-Baseline Technique for the Vertical Imaging of Semitransparent Media,” Proceedings of EUSAR 2024.
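The sublook idea can be illustrated with a toy example: for a single noiseless point scatterer, the sublook interferogram phases scale with the sublook centre frequency (hence with the vertical wavenumber), and beamforming over candidate heights recovers the scatterer's height. This is a sketch only, with hypothetical system parameters, not the authors' implementation:

```python
import numpy as np

# Hypothetical system parameters (illustrative only)
f0, B, n_sub = 1.27e9, 50e6, 16      # centre frequency, bandwidth, sublooks
kz0 = 0.10                           # vertical wavenumber at f0 (rad/m)
f_sub = f0 + B * (np.arange(n_sub) / (n_sub - 1) - 0.5)
kz = kz0 * f_sub / f0                # kz scales with sublook frequency

# Sublook interferogram phasors for a single scatterer at z_true
z_true = 15.0
data = np.exp(1j * kz * z_true)

# Beamforming: matched filter over candidate heights
z_grid = np.linspace(-50.0, 50.0, 1001)
steering = np.exp(1j * np.outer(kz, z_grid))       # (n_sub, n_z)
tomogram = np.abs(steering.conj().T @ data) / n_sub
z_hat = z_grid[np.argmax(tomogram)]
```

With these numbers the kz spread across sublooks is small (ratio B/f0), so the vertical response is very broad, matching the abstract's point that the vertical resolution degrades roughly by the ratio of central frequency to range bandwidth; the peak location is nonetheless at the true height.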
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall N1/N2)

Presentation: SKP Decomposition for BIOMASS Ground Phase Retrieval

Authors: Francesco Banda, Prof. Stefano Tebaldini
Affiliations: Aresys, Politecnico di Milano
BIOMASS is the seventh ESA Earth Explorer mission, conceived to provide the scientific community with a better understanding of global forest status, starting from early 2025 [1]. BIOMASS will be the first spaceborne Synthetic Aperture Radar (SAR) featuring a fully-polarimetric repeat-pass P-band SAR, with interferometric and tomographic capabilities. The choice of P-band is motivated by the sensitivity to the woody elements of trees, while polarimetry, interferometry and tomography allow retrieving forest structure and changes over time. Coverage is almost global (except for North America and Europe), with a three-day repeat cycle to guarantee coherence between passes. The mission phases foresee a Commissioning Phase of a few months, followed by about 17 months of a seven-pass Tomographic (TOM) phase, with the remainder of the five-year operation in a dual-baseline Interferometric (INT) phase. P-band also opens up new science applications: mapping of topography under vegetation, and of desert and ice 3D structure. As a preliminary step to forest product generation, fine interferometric phase calibration of the data stacks must be performed on top of the single-acquisition calibration carried out by the Level-1 (L1) processor [2]. Interferometric calibration processing takes as input finely coregistered data, accounting both for the known geometry and for residual shifts adaptively estimated from the data stack itself. The interferometric calibration strategy pursued mainly consists of three steps: slow-varying ionosphere correction through split-spectrum; fast-varying ionosphere and orbit correction based on multi-squint processing; and estimation of residual low-pass phase disturbance and ground phase by SKP decomposition. The full revised interferometric calibration processing is discussed in the companion presentation [3].
In this work we present a focus on SKP phase calibration, the last step of the interferometric calibration processing, devised to retrieve the phases corresponding to ground contributions and residual low-pass disturbances not recovered by the previous calibration modules. The tool underlying this calibration step is the Sum of Kronecker Products (SKP) decomposition, first presented in [4] to separate ground and forest scattering starting from multi-baseline polarimetric-interferometric correlations. We review here the current baseline approach described in [5], which employs all the available baselines and polarimetric correlations to compute the SKP decomposition, a choice that turned out to be sub-optimal due to the noise introduced by large baselines and the low cross-polarimetric correlations of BIOMASS. We then compare it with new alternative approaches that perform additional filtering of the estimated calibration screens and/or perform a block-based decomposition resorting to a smaller baseline subset. We demonstrate that a proper baseline selection and filtering strategy improves the tomographic structure reconstruction performance, as well as the radiometry, which is a proxy for the estimation of biomass-related quantities [1]. Validation is carried out over airborne campaign datasets reprocessed to mimic BIOMASS, considering different types of scenarios, i.e., mainly tropical and boreal forest. Phase screens are also simulated, accounting for realistic residual disturbances expected at this level of the BIOMASS processing chain. Part of the validation is also dedicated to the ice scenario [6], which is a BIOMASS secondary objective. In this case the objective is to assess to what extent the SKP decomposition can separate the ice/air interface mechanism from that of the underlying ice volume. For both forest and ice scenarios, the retrieved ground phases prove to be a good proxy for the generation of tomography, forest products and Digital Terrain Models (DTMs).
References [1] Francesco Banda, Davide Giudici, Thuy Le Toan, Mauro Mariotti d’Alessandro, Kostas Papathanassiou, Shaun Quegan, Guido Riembauer, Klaus Scipal, Maciej Soja, Stefano Tebaldini, Lars Ulander, and Ludovic Villard, “The BIOMASS Level 2 Prototype Processor: Design and Experimental Results of Above-Ground Biomass Estimation,” Remote Sensing, vol. 12, no. 6, 2020. [2] P. Prats-Iraola, K. Papathanassiou, J. S. Kim, M. Rodriguez-Cassola, D. D’Aria, R. Piantanida, A. Valentino, D. Giudici, M. Jaeger, R. Scheiber, M. Pinheiro, N. Yague-Martinez, M. Nannini, V. Gracheva, T. Guardabrazo, and A. Moreira, “The BIOMASS Ground Processor Prototype: An Overview,” in EUSAR 2018; 12th European Conference on Synthetic Aperture Radar, 2018. [3] Francesco Banda, Francesco Salvaterra, Naomi Petrushevsky, Stefano Tebaldini, Muriel Pinheiro, Bjorn Rommen, “BIOMASS Interferometric Processing”, ESA Living Planet Symposium 2025 [4] Tebaldini, S. (2009). Algebraic synthesis of forest scenarios from multibaseline PolInSAR data. IEEE Transactions on Geoscience and Remote Sensing, 47(12), 4132-4142. [5] Banda, F., Mancon, S., d’Alessandro, M. M., Tebaldini, S., Giudici, D., Pinheiro, M., & Scipal, K. (2023, July). Biomass Interferometric Calibration Processor Design. In IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium (pp. 7785-7788). IEEE. [6] Banda, F., & Tebaldini, S. (2015). Texture-free absolute DEM retrieval from opposite-side multibaseline InSAR data. IEEE Geoscience and Remote Sensing Letters, 13(1), 43-47.
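The SKP decomposition underlying this calibration step can be computed with the rearrangement-plus-SVD construction introduced in [4], a nearest-Kronecker-product factorisation in the style of Van Loan and Pitsianis. A minimal sketch with generic sizes (p polarimetric channels, q acquisitions; this is not the operational BIOMASS processor code):

```python
import numpy as np

def rearrange(W, p, q):
    """Stack each (q x q) block W_ij of the (p*q x p*q) matrix W as a
    row, so W = sum_k C_k (x) R_k becomes a low-rank matrix."""
    rows = [W[i*q:(i+1)*q, j*q:(j+1)*q].reshape(-1)
            for i in range(p) for j in range(p)]
    return np.array(rows)

def skp_decompose(W, p, q, terms=2):
    """Sum-of-Kronecker-products decomposition W ~ sum_k C_k (x) R_k,
    e.g. ground and volume contributions for terms=2."""
    U, s, Vh = np.linalg.svd(rearrange(W, p, q))
    C = [(np.sqrt(s[k]) * U[:, k]).reshape(p, p) for k in range(terms)]
    R = [(np.sqrt(s[k]) * Vh[k, :]).reshape(q, q) for k in range(terms)]
    return C, R

# Synthetic check: build W from two known Kronecker terms and recover it
C1, R1 = np.array([[2.0, 1.0], [1.0, 2.0]]), np.eye(3)
C2, R2 = np.array([[1.0, 0.2], [0.2, 0.5]]), np.full((3, 3), 0.5) + np.eye(3)
W = np.kron(C1, R1) + np.kron(C2, R2)
C, R = skp_decompose(W, p=2, q=3, terms=2)
W_rec = sum(np.kron(c, r) for c, r in zip(C, R))
```

In the BIOMASS context the recovered structure matrices carry the interferometric phases of the separated mechanisms; the abstract's point is that which baselines and polarimetric correlations enter W strongly affects the quality of this separation.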
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall N1/N2)

Presentation: Enhancing Change Detection with Polarimetric SAR: A Multivariate Visualization Framework for Time-Series Analysis

Authors: Elise Colin
Affiliations: ESRIN, ONERA, DTIS
Synthetic aperture radar (SAR) imagery provides exceptional capabilities for large-scale, long-term change detection, especially when leveraging multi-polarimetric time-series data (POLTimeSAR). Its ability to capture surface scattering mechanisms under varying environmental and anthropogenic conditions is unparalleled, making it useful for maritime surveillance, urban monitoring, disaster response, etc. In this work, we extend the REACTIV algorithm, originally developed by Colin and Nicolas (2020), to incorporate a polarimetric framework. REACTIV was designed for time-series analysis of SAR amplitude profiles, focusing on detecting temporal changes through the coefficient of variation (CV). It proposes a visualization scheme based on three key metrics: hue, encoding the temporal position of the most significant change; intensity, representing the maximum amplitude of the backscatter; and saturation, which captures temporal variability using the CV. Although this method is effective for single-channel data, it struggles to capture variations across polarimetric channels, particularly when certain changes are more pronounced in one polarization relative to others. To overcome this limitation, we employ the recently developed multivariate coefficients of variation (MCV), introduced by Colin and Ossikovski (2024). This approach generalizes the concept of CV to multi-dimensional datasets, such as polarimetric SAR data, by incorporating covariance matrices of amplitude or intensity. A pivotal development within this framework is the finding that all MCVs can be grouped into two distinct families, derived from generalized means applied to the eigenvalues of these covariance matrices. This formalism establishes two theoretical bounds for the MCV: a maximum bound and a minimum bound, each corresponding to distinct change detection capabilities.
The maximum bound of the MCV is sensitive to all types of changes, making it an excellent tool for identifying abrupt or longer-term changes in the scattering behavior across polarimetric channels. This property proves particularly valuable in applications such as urban monitoring, where significant structural changes or construction activities induce strong radar responses. Conversely, the minimum bound focuses on detecting changes associated with asynchronous dynamics between polarizations. This enables the identification of subtle differences in scattering mechanisms, such as those observed in agricultural practices or surface processes where the behavior varies between polarimetric channels. This distinction enhances the understanding of scattering dynamics and increases the sensitivity of change detection. The proposed polarimetric framework is validated using both short-term and long-term SAR datasets. Airborne time-series data acquired with the SETHI sensor from ONERA provide high-resolution insights into dynamic scattering processes over short periods, while satellite datasets, including C-band Sentinel-1 (dual polarization) and L-band SAOCOM, demonstrate the scalability of the approach to long-term observations. These diverse datasets allow us to explore scenarios ranging from maritime surveillance (e.g., ship detection and iceberg tracking) to environmental monitoring (e.g., glacier dynamics and vegetation), and the detection of anthropogenic activities such as mining and urban expansion. We propose an enhanced visualization strategy, in which the hue no longer solely represents the temporal location of changes but encodes the orientation of the dominant polarimetric mechanism. This refinement allows for better differentiation of scattering types in complex environments.
Moreover, we propose using the maximum bound MCV as the intensity value to ensure that significant temporal variations are effectively captured and visualized in low-backscatter regions, such as seas or oil slicks, and regions with subtle scattering changes, such as vegetated areas. The results highlight several key contributions: - The generalization of CV to multivariate datasets through MCV, enabling comprehensive analysis of polarimetric time series. - The ability to use maximum and minimum bounds of MCV for targeted change detection, enhancing sensitivity to general and polarization-specific dynamics. - An improved visualization approach that integrates polarimetric mechanisms for a new interpretation of SAR time series. This work establishes a foundation for incorporating polarimetric information into time-series analysis, offering a robust framework for addressing the limitations of single-channel methods. By systematically reviewing scenarios where polarimetry improves the interpretation of observed dynamics, we demonstrate the transformative potential of this approach across a range of Earth observation applications, including maritime surveillance, glacier monitoring, RFI detection, urban analysis, and vegetation monitoring. These advancements underscore the critical role of polarimetry in dynamic Earth processes. [1] Colin, E., & Nicolas, J. M. (2020). Change detection based on the coefficient of variation in SAR time-series of urban areas. Remote Sensing, 12(13), 2089. [2] Colin, E., & Ossikovski, R. (2024). Towards a Unified Formalism of Multivariate Coefficients of Variation: Application to the Analysis of Polarimetric Speckle Time Series. Journal of the Indian Society of Remote Sensing, 1-12.
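The single-channel REACTIV triplet described above (hue encoding the time of the strongest return, saturation the temporal CV, value the maximum amplitude) can be sketched per pixel as follows. This is an illustrative sketch on a synthetic amplitude stack, not the authors' released implementation:

```python
import numpy as np

def reactiv_hsv(stack):
    """REACTIV-style HSV triplet for a SAR amplitude time series
    (after Colin & Nicolas, 2020). stack: (T, H, W) amplitudes.
    hue -> temporal position of the maximum amplitude, in [0, 1]
    sat -> coefficient of variation (std / mean)
    val -> maximum amplitude"""
    T = stack.shape[0]
    hue = np.argmax(stack, axis=0) / (T - 1)
    mean = stack.mean(axis=0)
    sat = stack.std(axis=0) / np.maximum(mean, 1e-12)
    val = stack.max(axis=0)
    return hue, sat, val

# Synthetic stack: a stable pixel and a pixel with a change at epoch 7
stack = np.ones((10, 1, 2))
stack[7, 0, 1] = 5.0
hue, sat, val = reactiv_hsv(stack)
```

A stable pixel gets zero saturation (grey in the composite), while the changed pixel gets a high CV and a hue encoding when the change occurred; the polarimetric extension in this work replaces the scalar CV with the MCV bounds.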
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Room 0.14)

Session: B.03.10 Early Career Scientist Careers and Networking Event

This session is an opportunity for Early Career Scientists (ECS) working in the climate, environment and sustainability remote sensing community to meet and build connections across organisations. Composed of an invited panel representing many diverse sectors, this session will showcase some of the career paths a PhD in remote sensing can lead to. You will hear about a range of EO career pathways and have the opportunity to discuss issues unique to the ECS experience. Following lightning presentations from the panelists and a Q&A with the audience, we will encourage conversations and mingling over a free lunch.

This session is co-organised by the ESA Actionable Climate Information Section and UK Centre for Satellite Data in Environmental Science (SENSE) Centre for Doctoral Training and has been co-developed with ECS from the SENSE Centre.

Speakers:


  • Doris Klein - Scientific Advisor, German Remote Sensing Data Centre (DFD)
  • Ralph Cordey - Earth Observation Business Development Manager at Airbus Defence
  • Maureen Wanzala - WCRP Secretariat - At the WCRP Secretariat, Maureen’s responsibilities will include supporting two of WCRP’s Core Projects, ESMO and RIfS, as well as the EPESC and Digital Earths Lighthouse Activities
  • Helène Chefner
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall F2)

Session: A.02.04 Advances in Monitoring and Management of Forest Ecosystems - PART 2

About 4 billion hectares, or 31 percent of the global land surface, is covered by forests according to the latest FAO Forest Resource Assessment, FRA 2020. Forests are a valuable and irreplaceable ecosystem, playing an important role from an ecological, social, and economic perspective. Monitoring of forests is essential for better understanding, protecting, and sustainably managing this valuable ecosystem.

Information needs vary and include forest area, forest types, forest structure, disturbances, health, and biomass. An unprecedented wealth of observations by optical and microwave sensors, active and passive, from low to high resolution, allows new ways to monitor forests. Emphasis will be put on advances in detecting forest disturbances, forest health monitoring, species identification, support to sustainable forest management, estimating forest biomass, and forest carbon accounting. This session will showcase some of the more recent key achievements, including methods/algorithms, science and applications.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall F2)

Presentation: If a tree is “Protected”, is it? Using satellite-borne LiDAR to understand efficacy of protection status in West African Protected Areas

Authors: Abigail Barenblitt, Dr. Lola Fatoyinbo, Dr. Atticus Stovall, Dr. Laura Duncanson, Veronika Leitold, Dr. Mengyu
Affiliations: NASA/UMD
Forest integrity in West Africa faces increasing threats from human activities, such as urbanization, oil exploration, and cutting. Globally, protected areas (PAs) have had positive impacts on habitat protection and carbon storage, and are viewed as critical for preserving hotspots of biodiversity. However, results from a global study of biomass in PAs indicate that while the geographic coverage of PAs in Africa is comparable to other continents, biomass densities within African PAs were lower than on other continents. PAs in Africa experience high rates of disturbance from agriculture and urbanization. More research is needed to understand not only the structural makeup of PAs in this region, but also whether West African PAs are more structurally diverse than their counterfactuals. To this end, this study will use the Global Ecosystem Dynamics Investigation (GEDI) lidar instrument to measure forest structure and structural diversity across West Africa. GEDI is a space-borne lidar instrument aboard the International Space Station that is capable of measuring the height and complexity of the vegetation canopy. GEDI data will be processed using the Multi-Mission Algorithm and Analysis Platform (MAAP), a platform that combines data, algorithms and computational abilities for “the processing and sharing of data related to NASA’s GEDI mission”. We will compare forest structure metrics such as Foliage Height Diversity (FHD), Canopy Cover, and Top of Canopy height (RH98), as measured by GEDI, between PAs and unprotected counterfactuals. We will also derive structural diversity indices, such as Height Evenness and Beta Diversity, to create an in-depth understanding of structural diversity within PAs. Finally, PA structural diversity will be examined in conjunction with PA governance type, as enforcement is a known factor in PA success. As countries expand the coverage of PAs to achieve the UN’s 30x30 goal, identifying metrics for tracking PA efficacy is critical for the establishment of new PAs.
To this end, we will use GEDI to examine 1) how structurally diverse PAs are in comparison to unprotected counterfactuals and 2) if areas of higher structural diversity in West Africa are indicative of higher AGB and carbon storage. Here, we present preliminary results detailing how Foliage Height Diversity (FHD) varies by country, governance type, and landcover type across West Africa.
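Foliage Height Diversity, the central metric here, is conventionally computed as a Shannon diversity index over the vertical foliage (plant area index) profile. A minimal sketch with hypothetical profile values (illustrative only, not the operational GEDI L2B algorithm):

```python
import numpy as np

def foliage_height_diversity(pai_profile):
    """FHD = -sum(p_i * ln p_i), where p_i is the proportion of plant
    area in vertical bin i (a MacArthur-style Shannon index)."""
    p = np.asarray(pai_profile, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

# A uniform profile maximises FHD; a single occupied bin gives FHD = 0
fhd_uniform = foliage_height_diversity([1.0, 1.0, 1.0, 1.0])
fhd_single = foliage_height_diversity([0.0, 3.2, 0.0, 0.0])
```

Higher FHD thus indicates foliage spread more evenly through the canopy layers, which is why it serves as a structural-diversity indicator when comparing PAs with their counterfactuals.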
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall F2)

Presentation: Advancing Tropical Forest Monitoring: Predictive Deforestation Models and Explainable AI for Disturbance Driver Classification

Authors: Laura Elena Cue La Rosa, Diego Marcos, Bart Slagter, Johannes Reiche
Affiliations: Wageningen University & Research, Inria, University of Montpellier
The free availability of optical and radar satellite imagery has transformed our ability to monitor tropical forests, offering near real-time information on deforestation and disturbance events. Monitoring tools like RADD (Reiche et al., 2021) and GLAD (Hansen et al., 2016) alerts provide valuable insights into forest disturbances using radar and optical satellite imagery. These systems are invaluable for law enforcement but lack the ability to identify the specific drivers of deforestation or predict risks before disturbances occur. In this work, we present two research projects aimed at advancing tropical forest monitoring: forecasting the risk of deforestation and improving the interpretability of model decision-making in classifying disturbance drivers. Our first research line focuses on a collaborative effort with Forest Foresight to predict the risk of deforestation in tropical regions using artificial intelligence. By integrating alerts from the RADD and GLAD (Reiche et al., 2024) systems, the initiative predicts deforestation risks up to six months in advance. In this work, we employed a ResUnet (Diakogiannis et al., 2020) to predict deforestation by combining static and dynamic variables, including historical deforestation patterns, proximity to infrastructure and roads, and fire alerts. The study covers several regions in South America, Africa and Asia, which experience different deforestation pressures such as logging, agriculture, and mining. The results show F0.5 scores ranging from 40% to 75%, from July 2022 to May 2023. Performance metrics reveal a seasonal trend where detection scores decrease during the dry season. This trend corresponds to increased deforestation intensity during these months, as forest-clearing activities tend to rise when weather conditions are more favorable for operations. 
The results reflect the model's ability to capture temporal variations, while also highlighting its limitations due to region-specific challenges, likely influenced by seasonal data quality and deforestation dynamics. The second study focuses on evaluating several eXplainable Artificial Intelligence (XAI) methods applied to the classification of forest disturbance drivers using high spatiotemporal resolution Sentinel-1 and Sentinel-2 data. RADD forest disturbance alerts were classified into five categories: smallholder agriculture, road development, selective logging, mining, and others. The effectiveness of these methods was assessed in the context of early fusion and feature-level fusion, two of the most common data fusion techniques used in remote sensing. Our findings suggest that Guided Grad-CAM (GuidedGradCam) (Selvaraju et al., 2017) is the most effective method for the target application, focusing sharply on particular regions likely representing the target drivers' activities while attributing zero scores to other classes. Additionally, by thoroughly examining the significance of variables, the impact of cloud cover, and the explanations offered by both fusion models for co-located classes, we shed light on the underlying reasoning behind each model's performance and decision-making processes. Integrating both studies, predicting deforestation risk and classifying drivers, is a promising direction for future research. By combining these approaches, we aim to create a unified system that not only predicts where deforestation is likely to occur but also identifies the underlying causes. References Diakogiannis, F.I., Waldner, F., Caccetta, P., Wu, C., 2020. Resunet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS Journal of Photogrammetry and Remote Sensing 162, 94–114. Hansen, M.C., Krylov, A., Tyukavina, A., Potapov, P.V., Turubanova, S., Zutta, B., Ifo, S., Margono, B., Stolle, F., Moore, R., 2016. 
Humid tropical forest disturbance alerts using landsat data. Environmental Research Letters 11, 034008. Reiche, J., Balling, J., Pickens, A.H., Masolele, R.N., Berger, A., Weisse, M.J., Mannarino, D., Gou, Y., Slagter, B., Donchyts, G., et al., 2024. Integrating satellite-based forest disturbance alerts improves detection timeliness and confidence. Environmental Research Letters 19. Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N.E., Odongo-Braun, C., Vollrath, A., Weisse, M.J., Stolle, F., Pickens, A., et al., 2021. Forest disturbance alerts for the congo basin using sentinel-1. Environmental Research Letters 16. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D., 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization, in: Proceedings of the IEEE international conference on computer vision, pp. 618–626.
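The F0.5 scores reported above come from the F-beta family, which for beta < 1 weights precision more heavily than recall (false alarms are costlier than misses for a risk-prediction product). A minimal sketch on toy binary labels (illustrative only; the study's evaluation pipeline is not described in this detail):

```python
import numpy as np

def fbeta(y_true, y_pred, beta=0.5):
    """F-beta score for binary maps:
    F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy example: 3 true deforestation pixels, 3 predicted, 2 overlapping
score = fbeta([1, 1, 0, 0, 1], [1, 0, 0, 1, 1], beta=0.5)
```

With beta = 0.5 the precision term dominates the harmonic combination, so over-alerting drags the score down faster than missed detections do.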

Thursday 26 June 11:30 - 13:00 (Hall F2)

Presentation: Mapping the Global Drivers of Forest Loss at 1 km Resolution Using Deep Learning

Authors: Michelle Sims, Radost Stanimirova, Anton Raichuk, Jessica Richter, Forrest Follett, James MacCarthy, Kristine Lister, Christopher Randle, Lindsey Sloat, Elena Esipova, Jaelah Jupiter, Charlotte Stanton, Dan Morris, Christy Slay, Maxim Neumann, Drew Purves, Nancy Harris
Affiliations: World Resources Institute, Google DeepMind, The Sustainability Consortium, Google Research
Forests are in decline worldwide due to human activities such as agricultural expansion, urbanization, and mineral extraction. Temporary loss of forest cover due to causes such as wildfire and forest management is important to distinguish from permanent land use conversion due to the differing ecological and climate impacts of these disturbances and for the purposes of developing effective policies and management strategies. Existing global maps of the drivers of forest loss [1] that are widely used are not spatially or thematically detailed enough for decision makers at local-to-regional scales, such as governments, land managers, or companies. Using publicly available satellite observations (Landsat, Sentinel-2) and ancillary biophysical and population data, we developed a 1 km resolution, global map of the dominant drivers of forest cover loss from 2001 to 2022 with seven classes: permanent agriculture (e.g., commodity crops or pasture), hard commodities (e.g., mining), shifting cultivation, forest management (e.g., logging or wood fiber plantations), settlements and infrastructure, wildfire, and other natural disturbances. We interpreted nearly 7,000 reference samples to train a single world-wide customized residual convolutional neural network model (ResNet) that classifies the driver of forest cover loss from the Global Forest Change product [2] with an overall accuracy of 90%. Our results show that permanent agriculture was the leading driver of forest cover loss globally, representing 35% of loss from 2001 to 2022. The drivers of forest cover loss vary by region, with the leading driver identified as forest management in Europe, permanent agriculture across the tropics, and wildfire in Russia, the Asian mainland, North America, and Oceania. 
Our results enable assessment of forest disturbance dynamics from local to global scales and can support the development of forest conservation and management policies, alert relevant industry and government bodies to the risks of carbon emissions and biodiversity loss within their supply chains and jurisdictions, and track progress towards global goals to end deforestation. This is an operational product that will be updated annually to support forest monitoring efforts. References: [1] Curtis, P.G., C.M. Slay, N.L. Harris, A. Tyukavina, and M.C. Hansen. 2018. “Classifying Drivers of Global Forest Loss.” Science 361 (6407): 1108–11. doi:10.1126/science.aau3445. [2] Hansen, M.C., P.V. Potapov, R. Moore, M. Hancher, S.A. Turubanova, A. Tyukavina, D. Thau, et al. 2013. “High-Resolution Global Maps of 21st-Century Forest Cover Change.” Science 342 (6160): 850–53. doi:10.1126/science.1244693.
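Regional statistics like the 35% share of permanent agriculture reported above reduce to an area-weighted tally over the classified 1 km raster. This sketch uses hypothetical class codes (the product's actual encoding is not given in the abstract):

```python
import numpy as np

# Hypothetical integer codes for the seven driver classes in the abstract.
CLASSES = {
    1: "permanent agriculture", 2: "hard commodities", 3: "shifting cultivation",
    4: "forest management", 5: "settlements and infrastructure",
    6: "wildfire", 7: "other natural disturbances",
}

def driver_shares(driver_map, loss_area):
    """Area-weighted share of each forest-loss driver.

    driver_map: integer class raster (one dominant driver per 1 km cell).
    loss_area:  forest-loss area per cell (e.g. ha), same shape.
    """
    total = loss_area.sum()
    return {name: float(loss_area[driver_map == code].sum() / total)
            for code, name in CLASSES.items()}

# Toy 2x3 raster with equal loss area per cell: agriculture dominates.
dmap = np.array([[1, 1, 6], [4, 1, 6]])
area = np.ones_like(dmap, dtype=float)
shares = driver_shares(dmap, area)
```

Weighting by per-cell loss area rather than cell count matters because forest-loss area varies strongly between 1 km cells.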

Thursday 26 June 11:30 - 13:00 (Hall F2)

Presentation: Evaluating the Impact of Deep Pre-trained Models for Tree Species Classification on Forest Inventory

Authors: Takayuki Ishikawa, Dr. Carmelo Bonannella, Marc Rußwurm
Affiliations: Wageningen University, OpenGeoHub Foundation
Mapping forests accurately is vital to mitigating climate change and providing essential ecosystem services. Here, National Forest Inventories (NFIs) serve as the primary source of forest information, where tree species distribution information is crucial for applications such as carbon storage estimation and biodiversity assessments. However, curating and creating these inventories is labour-intensive, as it requires on-site campaigns where forestry experts identify the tree species and store their location in a common database. Here, remote sensing approaches offer new opportunities to improve or update NFIs frequently and on a large scale when they are paired with effective machine learning detection models. Concretely, taking satellite images over multiple years to create a satellite image time series (SITS) has proven effective, as the change of reflectance of the forest canopy throughout the season can be used to distinguish different tree species. Current machine learning models rely on random forest classifiers paired with a hand-designed set of features, such as statistics on the reflectance of the forest canopy and harmonic features based on phenology throughout the year. The advancement of artificial intelligence (AI) models through deep neural networks offers a complementary strategy to classify tree species through deep features learned from annotated data in an end-to-end fashion. However, effective deep features through AI models are limited by their demand for large, annotated datasets, which are expensive to collect due to the large effort involved in annotating NFIs. Recently, new pre-trained deep learning models, also known as foundation models, have emerged that can leverage unlabeled data through self-supervised pre-training. For instance, when pre-training with masked auto-encoding, a deep neural network is first trained to reconstruct an obscured (masked) input time series, which does not require ground annotations. 
This yields learned deep features that are optimized to be effective for time series reconstruction. Crucially, these pre-trained features can then be used in a second fine-tuning step on labelled data to classify the individual tree species. In this work, we test deep features from the Presto (Tseng et al., 2023) pre-trained time series foundation model on tree species classification with data from the Dutch NFI. The dataset includes a total of 1,480 pure species plots (pixels). A plot was classified as "pure" if more than 80% of its basal area consisted of a single species. These plots were grouped into seven species groups with unbalanced sample counts. Using Dutch NFI data with tree species information, we extracted time-series data from Sentinel-1 GRD and Sentinel-2 Level 1C at a 10-meter resolution for the period between January and December 2020 from Google Earth Engine. We employed Presto, a transformer-based, pixel-level, multi-temporal model pre-trained with masked auto-encoding on a global dataset of 21.5 million pixel time-series samples, each at a resolution of 10 m per pixel. This model was compared against the current state-of-the-art tree species classification approach by Francini et al. (2024), which uses a Random Forest classifier on harmonic features. The Random Forest model incorporates additional vegetation indices from Sentinel-2 and computes seasonal medoids and harmonic predictors for classification. In our experiments, we directly compared the features from Francini et al. (2024) with the Presto-derived deep features using the same RF classifier and found that the deep features outperformed the hand-crafted harmonic features by a substantial margin. We further found that additionally pre-training the Presto model on forest time series from a newly collected unlabeled dataset of forests in the Netherlands improved the accuracy further by calibrating the deep features to the patterns visible in central Europe. 
In conclusion, our experiments demonstrate the potential of fine-tuning and calibrating pre-trained deep learning foundation models in remote sensing for tree species classification to update national forest inventories in a large-scale, cost-efficient way. This automated classification approach can complement the existing labour-intensive process of updating national forest inventories by making effective use of openly available satellite data and publicly available pre-trained deep learning models designed to solve multiple remote sensing tasks simultaneously. Tseng, G., Cartuyvels, R., Zvonkov, I., Purohit, M., Rolnick, D., & Kerner, H. (2023). Lightweight, pre-trained transformers for remote sensing timeseries. arXiv preprint arXiv:2304.14065. Francini, S., Schelhaas, M. J., Vangi, E., Lerink, B. J., Nabuurs, G. J., McRoberts, R. E., & Chirici, G. (2024). Forest species mapping and area proportion estimation combining Sentinel-2 harmonic predictors and national forest inventory data. International Journal of Applied Earth Observation and Geoinformation, 131, 103935.
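The 80% basal-area "pure plot" rule used to build the 1,480-plot dataset can be sketched as a small helper; the function name and the toy basal-area values are illustrative, not taken from the Dutch NFI.

```python
def pure_species(basal_areas, threshold=0.8):
    """Return the dominant species of a plot if its basal-area share
    exceeds the threshold (the 80% 'pure plot' rule in the abstract),
    else None (a mixed plot, excluded from the training dataset).

    basal_areas: mapping of species name -> basal area (e.g. m^2/ha).
    """
    total = sum(basal_areas.values())
    if total <= 0:
        return None
    species, ba = max(basal_areas.items(), key=lambda kv: kv[1])
    return species if ba / total > threshold else None

# Toy plots: the first is >80% oak by basal area, the second is mixed.
plot_a = {"oak": 9.0, "beech": 1.0}   # oak share 0.9 -> pure
plot_b = {"oak": 6.0, "beech": 4.0}   # oak share 0.6 -> mixed
```

Selecting only pure plots keeps the pixel-level labels unambiguous at Sentinel resolution, at the cost of discarding mixed stands.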

Thursday 26 June 11:30 - 13:00 (Hall F2)

Presentation: High-Resolution Forest Mapping With TanDEM-X Interferometric SAR Data and Self-Supervised Learning

Authors: Jose-Luis Bueso-Bello, Benjamin Chauvel, Daniel Carcereri, Pietro Milillo, Ronny Hänsch, Dr. Paola Rizzoli
Affiliations: German Aerospace Center (DLR), ENSTA Bretagne, University of Houston
Forests, covering around 31% of the total Earth’s land surface, are a valuable and irreplaceable ecosystem, which play a key role in controlling climate change and in reducing the concentration of carbon dioxide in the atmosphere. Natural hazards and human activities, such as illegal deforestation, impact forest health and can lead to forest degradation and loss. Therefore, a reliable assessment and monitoring of forests is essential for a better understanding of how to protect and sustainably manage this valuable ecosystem. In this context, remote sensing approaches solely based on optical data may suffer from the presence of clouds. Synthetic aperture radar (SAR) systems represent an attractive solution thanks to their capability of acquiring data almost independently from weather and daylight conditions. Moreover, in the last few years deep learning (DL) approaches have started to significantly impact spaceborne SAR applications. In particular, Convolutional Neural Networks (CNNs) trained in a fully-supervised way have shown a great potential for the extraction of informative patterns from SAR images in various application fields. Regarding the specific case of forest mapping using TanDEM-X and DL, preliminary works propose DL models based on a U-Net architecture, trained in a fully-supervised manner using mid-resolution input data varying from 12 m up to 50 m for local and large-scale products [1, 2]. When moving to finer resolutions or specific application domains, the lack of reliable reference datasets motivates the investigation of self-supervised learning (SSL) approaches. The objective of this study is to investigate the potential of SSL methods for forest mapping with TanDEM-X InSAR data processed down to 6 m resolution and to benchmark them with respect to a traditional fully-supervised learning (FSL) approach. To do so, we first select as test region the entire state of Pennsylvania, USA, where a reference forest map from 2010 at 1 m resolution is available. 
The large extension of such a reference map allows for the setup of a baseline DL model derived through FSL, which represents an ideal-case scenario. As in previous works [2], we rely on a U-Net model for the fully-supervised investigations, using as main input features the SAR backscatter, the interferometric coherence, the volume correlation factor and, for describing the different TanDEM-X acquisition geometries, the local incidence angle and the height of ambiguity (HoA). All of them are processed at a resolution of 6 m by applying ɸ-Net, a state-of-the-art residual DL model for InSAR parameter estimation and denoising [3]. To evaluate the performance of the different DL methods, we define a test dataset representative of the possible TanDEM-X acquisition geometries (whole HoA range). Clearly, in order to avoid information leakage, the test images are never used for any learning task. For the implementation of the SSL approaches, we consider a classic convolutional autoencoder (CAE) as DL architecture and we investigate two pretext tasks: a standard identity reconstruction and a masked CAE. In a follow-on step, a supervised learning phase is necessary to perform the downstream task of forest mapping. As DL architecture for this last task, we rely on the same U-Net as for FSL but, in this case, the encoder part is initialized with the weights from the encoder part of the CAE trained in an SSL manner. Finally, to assess the impact of the SSL pre-training on the downstream task, as well as to find a compromise between the final performance and the amount of reference data required to reach it, we investigate different scenarios based on: a) SSL pretext task: identity or masked CAE; b) U-Net trainability after transferring the weights from the CAE: only decoder, or both encoder and decoder; c) amount of labeled data: instead of using 100% of the labeled data as in the ideal FSL scenario, we select 22%, 8%, and 1.5% of these labeled data. 
In our study, we evaluate and compare the performance of all these different SSL DL approaches with respect to the FSL case over the defined test dataset. Looking at the case where very few labeled data are available (a real-case scenario with just 1.5% of the available labeled data), we observe an enhancement in the classification of the TanDEM-X images when using SSL with a masked CAE as pretext task. This case shows a very competitive performance, which confirms the meaningfulness of the features learned by the pretext task. To demonstrate these observations, we focus our investigations on the Amazon rainforest in Brazil, where few input samples are available as reference. They were acquired by the National Center for Airborne Laser Mapping between 2012 and 2018 as small-footprint LiDAR surveys at 1 m resolution and with an area extent varying from 1 to 10 km2. First, we train a CAE using a masked CAE as SSL pretext task and using TanDEM-X images representative of the different acquisition geometries. Afterwards, we transfer the weights to the U-Net and we train both encoder and decoder with the reference LiDAR footprints. Finally, to assess the performance of the proposed SSL DL approach over this challenging area and to compare its performance with respect to an FSL model, we classify more than 500 TanDEM-X images acquired in 2019 and 2020 over the South-East region of the Amazon rainforest using both the FSL model and the proposed SSL + downstream task approach. We intercompare the generated forest/non-forest maps based on TanDEM-X InSAR images with a forest map derived from the ESA Climate Change Initiative (CCI) High Resolution Land Cover map at 10 m resolution. The improved classification when using SSL represents a powerful tool to address the challenges posed by limited reference data in forest mapping applications at high resolution with TanDEM-X InSAR data. [1] A. Mazza, F. Sica, P. Rizzoli, and G. 
Scarpa, “TanDEM-X Forest Mapping Using Convolutional Neural Networks”, Remote Sensing, vol. 11, no. 24, 2019. [2] J.L. Bueso-Bello, D. Carcereri, M. Martone, C. Gonzalez, P. Posovszky, and P. Rizzoli, “Deep Learning for Mapping Tropical Forests with TanDEM-X Bistatic InSAR Data”, Remote Sensing, vol. 14, no. 16, 2022. [3] F. Sica, G. Gobbi, P. Rizzoli and L. Bruzzone, “ɸ-Net: Deep Residual Learning for InSAR Parameters Estimation”, IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 5, pp. 3917-3941, 2021.
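The masked-CAE pretext task described above can be sketched as building (corrupted input, reconstruction target, mask) triples without any labels; a CAE would then be trained to minimize reconstruction error on the masked pixels. Patch size, mask ratio, and the zero-fill convention below are illustrative assumptions, not the study's exact settings.

```python
import numpy as np

def make_masked_pretext_pair(image, mask_ratio=0.5, patch=4, seed=0):
    """Build (corrupted, target, mask) for a masked-autoencoder pretext
    task: randomly chosen square patches are zeroed, and the network is
    trained to reconstruct the original values at the masked locations.
    No ground annotations are required, which is the point of SSL."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    mask = np.zeros_like(image, dtype=bool)
    n_patches = (h // patch) * (w // patch)
    chosen = rng.choice(n_patches, size=int(mask_ratio * n_patches),
                        replace=False)
    for idx in chosen:
        r, c = divmod(idx, w // patch)
        mask[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = True
    corrupted = np.where(mask, 0.0, image)
    return corrupted, image, mask

# Toy 8x8 "image" split into four 4x4 patches, half of them masked.
img = np.arange(64, dtype=float).reshape(8, 8)
x, y, m = make_masked_pretext_pair(img, mask_ratio=0.5, patch=4)
```

After pre-training, only the CAE encoder weights are carried over to initialize the U-Net encoder for the supervised forest-mapping downstream task.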

Thursday 26 June 11:30 - 13:00 (Hall F2)

Presentation: A Scalable Method to Detect Cover Trees in Agroforestry Area

Authors: Yaqing Gou, Kostantinos Bampanikos, Geert Koster, Etse Lossou, Xi Zhu, Mila Luleva
Affiliations: Rabobank
Transitioning from monoculture agriculture to agroforestry provides great benefits to the ecosystem and to local farmers by encouraging carbon sequestration and enhancing biodiversity and the sustainability of farms. Identifying cover trees is the crucial first step in monitoring the development of an agroforestry plot, and is the basis for many agroforestry projects. In this study, we propose a scalable method to detect cover trees in coffee farms in six agroforestry areas in Tanzania, Colombia and Kenya. To ensure the scalability of the method for large-scale application, we first explored the accuracy that can be achieved using medium- to high-resolution datasets. We limited the use of very high-resolution data (e.g., sub-meter resolution imagery) and LiDAR in model training and feature extraction. We adopted a deep learning model from Meta’s Segment Anything (referred to as the base model). The features are extracted from the spectral bands and texture information of PlanetScope imagery, backscatter from the Sentinel-1 and ALOS PALSAR radar sensors, and tree height information from the recently released tree height product from Meta. The training data for the tree and non-tree classes are derived from a combination of the Hansen tree cover product, the ESA CCI land cover maps, and the JAXA forest/non-forest map. Secondly, we further explored the model’s capability in detecting cover trees shorter than 5 m. When the open-access global datasets (described in step 1) were used to train the base model, the derived model was limited by the tree definition used by the training dataset, which is normally 5 m. We compared the base model, trained only on open-access global datasets, with an advanced model whose training dataset additionally includes trees between 2 m and 5 m. The training data for these smaller trees were generated from a LiDAR-based tree cover product at 1 m resolution. The validation for both models was conducted at two levels: at the pixel level and at the plot level. 
At the pixel level, the model-derived cover tree map was compared with a 1 m tree cover product generated from LiDAR data. We also validated the models’ accuracy by altering the tree height thresholds used to define trees at 2 m, 3 m, 4 m, and 5 m. At the plot level, the average cover tree ratio per plot will be compared with in-situ data. Last but not least, we compared the model’s accuracy with a baseline model trained with a random forest classifier. Preliminary results indicate that the base model has a detection accuracy of 0.75 at the pixel level in all tested AOIs. When we targeted smaller trees, the detection accuracy dropped gradually to 0.7 when the 2 m threshold was used. This study opens up new opportunities to monitor agroforestry plots at a global scale in a scalable manner.
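The height-threshold sweep in the pixel-level validation can be sketched as re-deriving the reference tree mask from the LiDAR height raster at each threshold and scoring the predicted mask against it. The function name and toy arrays are illustrative assumptions.

```python
import numpy as np

def detection_accuracy_by_height(pred_tree, height_m, thresholds=(2, 3, 4, 5)):
    """Pixel-level agreement of a predicted tree mask with reference
    masks derived from a LiDAR canopy-height raster at several height
    thresholds (the 2-5 m sweep described in the abstract).

    pred_tree: boolean raster of predicted cover-tree pixels.
    height_m:  LiDAR canopy height in metres, same shape.
    """
    out = {}
    for t in thresholds:
        ref = height_m >= t                   # reference mask at this tree definition
        out[t] = float((pred_tree == ref).mean())
    return out

# Toy 2x3 scene: heights in metres and a model prediction.
heights = np.array([[0.0, 2.5, 6.0], [1.0, 3.5, 4.5]])
pred = np.array([[False, False, True], [False, True, True]])
acc = detection_accuracy_by_height(pred, heights)
```

Accuracy falling as the threshold drops toward 2 m mirrors the reported behaviour: short trees are the hardest to separate from background at 3 m-class imagery resolution.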

Thursday 26 June 11:30 - 13:00 (Hall K1)

Session: B.04.06 Fire Detection and Monitoring from Earth Observation Data for Rapid Disaster Response - PART 2

Fueled by a changing climate, wildfires are becoming more frequent and destructive every year. Remote sensing data from a variety of sources can help to detect and monitor fire outbreaks. This session aims to show capabilities of remote sensing (public satellites, cubesat constellations, UAVs, observation planes) in combination with recent advances in AI and data processing. The session welcomes contributions including but not limited to:
- Novel algorithms for fire detection
- Fire spread modeling
- Burned area mapping
- New data sources such as upcoming missions
- Benchmark or training datasets
- Multi-modal data for fire monitoring
- On-orbit processing
This session brings together research, policy and industry in fire preparedness and response informed by remote sensing data. It provides a platform for a fruitful exchange about the current state of technology, best practices as well as upcoming opportunities for novel techniques in this area.

Thursday 26 June 11:30 - 13:00 (Hall K1)

Presentation: Burnt Area Monitoring using Graph Convolutional Networks based on multi-sensor satellite data

Authors: Michael Nolde, Moritz Rösch, Tabea Wilke, Jorge Ignacio Faúndez Pinilla, Paula Aguirre, Torsten Riedlinger, Hannes Taubenböck
Affiliations: German Aerospace Center, Corporación Nacional Forestal, Pontificia Universidad Católica de Chile, Julius-Maximilians-Universität Würzburg
Recent catastrophic wildfire seasons, e.g. in Greece 2023, Canada 2023, and Chile 2023/2024, underscore the critical need for rapid and accurate wildfire data to facilitate emergency response, assess environmental damage, and keep the public informed. Although satellite-based thermal anomaly data is accessible in near real-time (NRT), accurately mapping the areas affected by fires from NRT imagery remains a significant challenge. The proposed approach combines a superpixel segmentation algorithm with both rule-based and deep learning classification techniques to reliably identify burnt areas (BA) in NRT. This method is compatible with a range of optical sensors, from medium to high resolution, and integrates data from diverse sources to continuously refine the detection of burnt areas as active fires unfold. The region of Central Chile, which endured tremendous wildfire events in early 2024, was used as a testing region. An NRT product (DLRBAv2NRT) based on Sentinel-3 OLCI was generated, together with a refined non-time-critical product (DLRBAv2NTC). Both products were tested against established global BA products (Copernicus CGLBA31nrt and NASA MCD64A1v061). The DLRBAv2NRT achieved the highest accuracies, outperforming the DLRBAv2NTC product by 5%, the CGLBA31nrt product by 9% and the MCD64A1v061 product by 10% IoU. The DLRBAv2NRT showed the highest sensitivity in detecting BA (Recall: 0.78), while MCD64A1v061 produced a high number of false negatives (Recall: 0.63). A third variant (DLRBAv2NTCfusion), incorporating results from multiple mid- and high-resolution sensors, was generated for the Valparaíso/Chile focus region. The results were intercompared with local ground truth data provided by the Chilean Corporación Nacional Forestal (CONAF), yielding an IoU of 0.75. This reference data was prepared based on Sentinel-2 satellite data using the differential NBR index, and validated with information from the field. 
The proposed mapping procedure demonstrates a fully-automated, flexible approach to deriving burnt area delineations from satellite data in NRT with high accuracy. This allows for high-frequency monitoring of burnt areas in NRT on a global scale.
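The differential NBR index used for the reference data, and the IoU metric used throughout the comparison, are both compact enough to sketch directly. The 0.27 burned/unburned cutoff below is a hypothetical threshold for illustration; operational products tune it per scene with field information.

```python
import numpy as np

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differential Normalized Burn Ratio: NBR_pre - NBR_post, with
    NBR = (NIR - SWIR) / (NIR + SWIR). Burned pixels (NIR drops,
    SWIR rises after fire) show a positive dNBR."""
    nbr_pre = (nir_pre - swir_pre) / (nir_pre + swir_pre)
    nbr_post = (nir_post - swir_post) / (nir_post + swir_post)
    return nbr_pre - nbr_post

def iou(pred, ref):
    """Intersection over Union between two boolean burn masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return float(inter / union) if union else 1.0

# Toy 2x2 scene: only pixel (0, 0) burns between acquisitions.
nir_pre  = np.array([[0.4, 0.4], [0.4, 0.4]])
swir_pre = np.array([[0.1, 0.1], [0.1, 0.1]])
nir_post  = np.array([[0.1, 0.4], [0.4, 0.4]])
swir_post = np.array([[0.4, 0.1], [0.1, 0.1]])
burned = dnbr(nir_pre, swir_pre, nir_post, swir_post) > 0.27  # hypothetical cutoff
```

Scoring `burned` against a reference mask with `iou` reproduces the kind of comparison reported for DLRBAv2NTCfusion against the CONAF ground truth.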

Thursday 26 June 11:30 - 13:00 (Hall K1)

Presentation: Causal-inspired Graph Neural Networks for Wildfire Forecasting

Authors: Shan Zhao, Ioannis Prapas, Dr. Zhitong Xiong, Ilektra Karasante, Dr Ioannis Papoutsis, Dr Xiao Xiang Zhu
Affiliations: Technical University of Munich, National Observatory of Athens, Munich Center for Machine Learning
Predicting wildfires is a challenging task due to the inherent stochasticity of ignition and the complex interplay of environmental, climatic, and human factors. While machine learning has become a powerful tool for modeling the relationships between environmental factors and fire events, most existing approaches are correlation-based and face some inherent limitations [1]. For instance, such prediction models often inherit biases from training data and struggle with transferability, making their performance vulnerable to shifts in a changing climate. To support critical decision-making, it is essential to improve the reliability of prediction models. Causality provides a promising framework for explicitly analyzing the interactions between factors. It can also improve the robustness of prediction models [2]. However, its integration into deep learning methods remains underexplored. Furthermore, its application in real-world disaster management is still limited. Recent work [3] has demonstrated the effectiveness of using a causality-based adjacency matrix to capture the synergistic effect of wildfire drivers. Building on this, we propose a causal-inspired Graph Neural Network (GNN) to forecast burned area patterns. In our task, we aim to predict future burned areas based on a variety of inputs, including local environmental variables (2-meter temperature, vapor pressure deficit, and NDVI), ocean-climate indices (the Arctic Oscillation, North Atlantic Oscillation, Niño 3.4 Index, and Southern Oscillation Index), and geographical coordinates (longitude and latitude). Our framework combines causal discovery methods with GNNs by constructing edge connectivity in a data-driven way, guided by causal knowledge. To capture the stochastic nature of fire events, we incorporate a generative module for sampling adjacency matrices that simulate the dynamic relationships between nodes. 
Then, to estimate causal effects from a set of predictors, we introduce a positional-conditional graph pooling layer, inspired by the backdoor adjustment criterion [4]. This approach enables us to directly estimate causal outcomes from observational data by carefully selecting conditional variables. We conducted comprehensive experiments on the Mediterranean area from the SeasFire cube [5] to assess the model’s robustness. Results demonstrate that our model outperforms baseline approaches by reducing the standard deviation of AUROC scores over extended forecasting horizons and improving AUROC under geographical distribution shifts (from the Middle East to European countries). Compared to fully connected and correlation-based graph models, the causally informed graph structure has proven to be more resilient to input perturbations and geographical variability, particularly for longer forecasting horizons. Our model also uncovers several insights into the drivers of wildfire events. For instance, it highlights the lagged effects of Oceanic Climate Index variables on local fire patterns and reveals the significant causal influence of local environmental factors and global teleconnections. Additionally, SHAP analysis identifies short-term local precipitation as a critical factor. This suggests that Mediterranean wildfires are largely drought-driven. In the future, we plan to expand our study to a global scale and integrate a broader range of variables into the analysis. With these efforts, we aim to contribute to a more comprehensive modeling and understanding of the underlying drivers of wildfires. References [1] Jakob Runge, Sebastian Bathiany, Erik Bollt, Gustau Camps-Valls, Dim Coumou, Ethan Deyle, Clark Glymour, Marlene Kretschmer, Miguel D Mahecha, Munoz-Mari, Jordi et al., “Inferring causation from time series in earth system sciences,” Nature Communications, vol. 10, no. 1, pp. 2553, 2019. 
[2] Fernando Iglesias-Suarez, Pierre Gentine, Breixo Solino-Fernandez, Tom Beucler, Michael Pritchard, Jakob Runge, and Veronika Eyring, “Causally-informed deep learning to improve climate models and projections,” Journal of Geophysical Research: Atmospheres, vol. 129, e2023JD039202, 2024. [3] Zhao, S., Prapas, I., Karasante, I., Xiong, Z., Papoutsis, I., Camps-Valls, G., & Zhu, X. X. (2024). Causal Graph Neural Networks for Wildfire Danger Prediction. arXiv preprint arXiv:2403.08414. [4] Tobias Tesch, Stefan Kollet, and Jochen Garcke, “Causal deep learning models for studying the earth system,” Geoscientific Model Development, vol. 16, no. 8, pp. 2149–2166, 2023. [5] Lazaro Alonso, Fabian Gans, Ilektra Karasante, Akanksha Ahuja, Ioannis Prapas, Spyros Kondylatos, Ioannis Papoutsis, Eleanna Panagiotou, Dimitrios Mihail, Felix Cremer, Ulrich Weber, and Nuno Carvalhais. SeasFire cube: A global dataset for seasonal fire modeling in the earth system.
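The core idea of restricting message passing to causally discovered edges can be sketched as one graph-convolution step in NumPy. In the paper the adjacency matrix comes from causal discovery plus a generative sampling module; here it is a hand-specified stand-in, and the three nodes and identity weight matrix are illustrative assumptions.

```python
import numpy as np

def gnn_layer(H, A_causal, W):
    """One graph-convolution step H' = ReLU(D^-1 A H W), with edge
    connectivity restricted to a causal adjacency matrix (self-loops
    added so each node keeps its own features)."""
    A = A_causal + np.eye(A_causal.shape[0])   # add self-loops
    deg = A.sum(axis=1, keepdims=True)         # row degrees for mean aggregation
    return np.maximum((A / deg) @ H @ W, 0.0)

# 3 driver nodes (e.g. temperature, VPD, NDVI), 2 features each.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
# Hypothetical causal links: node 0 -> node 1 -> node 2.
A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
W = np.eye(2)
H1 = gnn_layer(H, A, W)
```

Compared with a fully connected or correlation-thresholded adjacency, zeroing non-causal edges is what the abstract credits for the model's robustness to perturbations and geographical shift.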

Thursday 26 June 11:30 - 13:00 (Hall K1)

Presentation: Global, Real-Time Fire Spread Modelling with Machine Learning

Authors: Danica Rovó, Johanna Wahbe, Dominik Laux, Dr. Lukas Liesenhoff, Agustin Martina, Dr. Martin Langer, Julia Gottfriedsen
Affiliations: OroraTech GmbH, Technical University of Munich
Climate change is making mega-fires more frequent and severe [1], threatening public safety, economies, and ecosystems globally. Early detection and response are essential to prevent the escalation of these natural disasters. Remote sensing-based fire monitoring systems, such as OroraTech’s Wildfire Solution, aid in this effort by delivering timely data on fire locations. However, beyond monitoring, we need predictive tools that forecast wildfire spread faster than real-time. These tools can help bridge gaps between detections, and inform first-responders with analyses under different conditions. With these goals in mind, we will use OroraTech’s groundbreaking global and frequent fire perimeter data to develop fast and improved fire propagation models. For modelling fire behavior several approaches exist. Physical models simulate the complex physical and chemical processes in a fire, but are too resource-intensive for real-time operational use. Empirical and simulation-based models, although promising, are also computationally complex and rely heavily on accurate input data, affecting their accuracy and transferability. Semi-empirical models, like the Rothermel model [4], achieve efficiency by integrating statistical analysis with physical concepts, and are therefore more practicable for real-time decision-making tools [2]. However, they often need expert-crafted input data and manual adjustments to apply to landscapes or fire regimes beyond their development context [3]. AI-based propagation models address this issue by effectively learning from data. What’s more, they can capture yet unknown patterns influencing fire dynamics, stemming from complex fire weather that we still don’t fully understand, and variance caused by climate change. Despite this potential, their adoption in research and practice is still limited. 
The development of a spatio-temporal fire propagation model requires a global, quality dataset containing ignition points, ongoing fire perimeters of high spatio-temporal resolution, and final burned area. This requires extensive and continuous data collection during the crisis which so far has not been feasible on a global scale. OroraTech’s mission is to deliver quality and timely fire data using thermal and optical satellite imagery and AI-driven processing. Its Wildfire Solution provides live and historical incident reports going back 5 years globally, with overpasses from over 20 satellites down to only minutes apart. These reports include Level 3 Aggregated Active Fire Data, a temporal fire perimeter product. This product is curated by aggregating and clustering thermal anomalies, filtering out persistent heat sources, signal reflections and false positives, and assigning a rule-based confidence value to clusters. It also includes a high-resolution segmentation of the final burned area based on optical imagery. These products enable us to curate a global, validated fire propagation dataset with high resolution in both spatial and temporal dimensions, that we will then use to develop various data-driven fire spread models. Furthermore, OroraTech provides Fire Spread, a forecasting service based on the Rothermel model [4] as a strong baseline for ML experiments. This model is recognized worldwide and was optimized in close collaboration with expert stakeholders for better accuracy. The model uses domain information such as weather, landscape and fuel from third-party services, as well as initial fire perimeter from OroraTech as input. First, we will explore statistical baselines such as Logistic Regression and Random Forest. One can do this by formulating the task of fire spread as an array of binary decisions - is there a fire burning in this location at this time? 
Although these methods have been tried in the past without much success [6], it is interesting to see how far these baselines can go with significantly more, high-quality data. Next, we will delve into some of the most promising Deep Learning-based architectures. Convolutional Neural Networks (CNNs) have long been used for image-oriented tasks, due to their ability to capture spatial information. Similarly to above, one can divide an area into a grid and treat fire spread as an image segmentation problem. Several approaches have employed a U-Net to this end, but have focused on burnt areas rather than ongoing fire growth [6]. To include the temporal component, we will employ a 3D U-Net, with time as its third dimension. Recent research indicates Graph Neural Networks (GNNs) as a promising alternative. GraphCast, Google DeepMind’s operational weather forecast model [5], has been a major breakthrough in this domain. As weather dynamics have many parallels with fire behaviour, we want to investigate a GNN’s performance given our novel dataset. To summarize, this work addresses the need for fast, accurate, and global fire spread modelling for real-time operational use. Rather than rely on theoretical frameworks that suffer from transferability and scalability issues, we focus on data-driven methods that can learn from actual wildfire behaviour. Previously, machine learning models for fire spread have been bottlenecked by the availability of high-quality ground truth data of temporal fire progression at scale. Our Wildfire Solution is an excellent base to close this gap, with fire perimeter data from many satellites down to only minutes apart. Using this unique data source with machine learning methods, we can uncover unknown and shifting patterns in fire dynamics in light of climate change, and aim to improve our ability to respond quickly to prevent devastating fires. References [1] Calum X. Cunningham, Grant J. Williamson, and David M. J. S. Bowman. 
Increasing frequency and intensity of the most extreme wildfires on earth. Nature Ecology & Evolution, 2024 [2] Cardil, A., Monedero, S., Schag, G., de-Miguel, S., Tapia, M., Stoof, C. R., … Ramirez, J. (2021). Fire behavior modeling for operational decision-making. Current Opinion in Environmental Science & Health, 23, 100291. doi:10.1016/j.coesh.2021.100291 [3] Rösch, M., Nolde, M., Ullmann, T., & Riedlinger, T. (2024). Data-Driven Wildfire Spread Modeling of European Wildfires Using a Spatiotemporal Graph Neural Network. Fire, 7(6), 207. https://doi.org/10.3390/fire7060207 [4] R. C. Rothermel. A mathematical model for predicting fire spread in wildland fuels, volume 115. Inter- mountain Forest & Range Experiment Station, Forest Service, 1972. [5] Lam, R., Sanchez-Gonzalez, A., Willson, M., Wirnsberger, P., Fortunato, M., Alet, F., … Battaglia, P. (2023). Learning skillful medium-range global weather forecasting. Science, 382(6677), 1416–1421. doi:10.1126/science.adi2336 [6] Huot, F., Hu, R. L., Goyal, N., Sankar, T., Ihme, M., & Chen, Y.-F. (2022). Next Day Wildfire Spread: A Machine Learning Dataset to Predict Wildfire Spreading From Remote-Sensing Data. IEEE Transactions on Geoscience and Remote Sensing, 60, 1–13. doi:10.1109/TGRS.2022.3192974
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall K1)

Presentation: Integration of SAR and Optical Data for Burned Area Mapping Using Generative Artificial Intelligence

Authors: Donato Amitrano, Luca Cicala, Cesario Vincenzo Angelino, Marco De
Affiliations: Italian Aerospace Research Centre
Climate change is creating favourable conditions for wildfires due to increasing global temperatures and long-lasting periods of drought. In 2023, the US was affected by more than 55,000 wildfires, which burned more than 2,600,000 acres of vegetation. The Copernicus Atmosphere Monitoring Service (CAMS) estimated 2,170 megatonnes of carbon emissions from wildfires worldwide, 22% produced in Canada. According to the UN, the phenomenon will become more frequent and intense, with a global increase in extreme fires of up to 30% by 2050. Satellite remote sensing is a key asset in wildfire monitoring due to its ability to acquire data over large areas of the Earth's surface with short revisit times and in different portions of the electromagnetic spectrum. This offers different possibilities for the detection of burned areas. As an example, using synthetic aperture radar (SAR) sensors, the energy backscattered from forest canopies is determined by the interaction between the incident electromagnetic wave and the diverse objects composing the target. Depending on the wavelength, and therefore on its penetration capability, the main contributions come from structures like leaves and twigs (X-, C-band) or branches and trunks (L-band). In the case of wildfires, the use of short wavelengths results in a drop in the backscattered energy due to the lack of volumetric scattering contributions normally caused by the presence of leaves. However, this phenomenology is not always verified because i) the exposure of soils previously covered by vegetation layers makes the SAR response sensitive to the effects of the dielectric constant and, therefore, dependent on precipitation, and ii) the absence of leaves can cause double-bounce scattering phenomena originating from branches. Both circumstances can lead to an increase in the backscattered energy.
Such an ambiguous behaviour of the SAR response has made the passive sensor domain the most exploited in forest fire studies, thanks to the close relation between infrared frequencies and burned surfaces. In fact, healthy vegetation shows a very high reflectance in the near-infrared (NIR) and has a low response in the short-wave infrared (SWIR) portion of the electromagnetic spectrum. Conversely, burned surfaces typically show the opposite behaviour. This phenomenology, although extremely robust, is not always exploitable due to the presence of clouds, which can be a barrier especially in tropical and high-latitude regions. Recent advances in artificial intelligence are introducing new tools for the joint exploitation of SAR and multispectral data, which are evidently complementary. In particular, generative adversarial networks (GANs) are rapidly gaining the attention of the remote sensing community thanks to their ability to transform digital images by generating new data ideally indistinguishable from the original input. In remote sensing, this concept has recently been exploited to transform SAR images into optical ones, with promising results. In most of the literature, the objective is to produce data to compensate for the presence of clouds in native optical acquisitions. In this work, we propose exploiting GANs to translate a coherent SAR bi-temporal composition into a spectral index, i.e., the normalized burn ratio (NBR). Generated data, in combination with native multispectral data, are then used to set up a multi-sensor change-detection environment for burned area mapping. The main novelties introduced with respect to the literature concern the input and target products of the GAN translation processing.
As for the input, unlike the literature, in which deep learning is generally applied to detected (amplitude) products, we propose using the SAR interferometric coherence as one of the layers contributing to the information process. The coherence is an indicator of target stability derived from the phase of the SAR signal, which can be useful to assess changes in case of an ambiguous amplitude response. Concerning the translation objective, the suggestion is to target a spectral index. To the best of our knowledge this choice is new, as the typical suggestion in the literature is to set visible images or a composition of selected multispectral bands as the translation target. Experiments were carried out on a dataset extracted from the Copernicus Emergency Management database, with image pairs acquired with a time span of 12 days around the event. The available data (30 fire events) were divided into 20 events for training (for a total of 16499 patches following data augmentation) and 10 for testing (for a total of 1233 patches). The total area involved in the test dataset is 352 kha, of which 131 kha were affected by wildfires. The assessment of the detection revealed that the proposed methodology can outperform the literature in all the classification quality parameters, reaching a burned area mapping accuracy close to 90% with negligible false alarms, thus introducing a new robust solution for the integration of synthetic aperture radar and multispectral data. The proposed methodology constitutes a new source for the generation of cloud-free multispectral data, whose applicability can potentially be extended beyond fire detection, as the input and the target of the domain translation can be tailored as a function of the problem to be tackled.
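For reference, the normalized burn ratio targeted by the translation is computed from NIR and SWIR reflectances. A minimal sketch follows; the band values and the dNBR "burned" threshold are invented for illustration, not taken from this paper:

```python
# Sketch: normalized burn ratio (NBR) and its bi-temporal difference (dNBR).
# Reflectance values and the 0.27 threshold are illustrative only.

def nbr(nir, swir):
    """NBR = (NIR - SWIR) / (NIR + SWIR); high for healthy vegetation."""
    return (nir - swir) / (nir + swir)

# Pre-fire: strong NIR response, weak SWIR (healthy vegetation).
pre = nbr(nir=0.50, swir=0.20)   # ~0.43
# Post-fire: the spectral behaviour inverts for burned surfaces.
post = nbr(nir=0.25, swir=0.35)  # ~-0.17

dnbr = pre - post                # large positive dNBR suggests burning
burned = dnbr > 0.27             # hypothetical severity threshold
print(round(pre, 3), round(post, 3), round(dnbr, 3), burned)
```

In the proposed pipeline the GAN would generate such an NBR layer from the SAR bi-temporal composition, to be differenced against a native multispectral acquisition.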
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall K1)

Presentation: Multi-Task Learning Diffusion Models for Wildfire Monitoring

Authors: Eric Brune, Yifang Ban
Affiliations: KTH Royal Institute Of Technology
Wildfire progression monitoring through burned area mapping is an important task during firefighting efforts as well as for post-fire analysis. High-resolution imagers producing open data, such as the Sentinel-2 MultiSpectral Imager (MSI), are well suited to this task, achieving high delineation accuracy; however, such satellites have revisit times of several days. Several lower-resolution satellites currently in orbit provide daily global coverage. To overcome this trade-off between high spatial and high temporal resolution, we propose to develop a multi-task diffusion model that conditions on data from Sentinel-3's Sea and Land Surface Temperature Radiometer (SLSTR), the Visible Infrared Imaging Radiometer Suite (VIIRS), and the Moderate Resolution Imaging Spectroradiometer (MODIS) to provide high-resolution burned-area maps. Our goal is to upsample the lower-resolution satellite imagery to the higher resolution of Sentinel-2, while simultaneously performing burned area segmentation. By integrating data from multiple satellite sources, we expect to improve both the spatial and temporal resolution of wildfire-perimeter segmentation. Burned area labels will come from a combination of sources, including the National Burned Area Composite (NBAC) and the European Forest Fire Information System (EFFIS). We will collect data from 2017 to 2023 (and possibly 2024). The model architecture is based on a conditional denoising diffusion probabilistic model (DDPM), which conditions on the low-resolution input images to generate high-resolution outputs. It features a shared encoder, pre-trained on the Major TOM dataset, that learns common representations from the input data, and separate decoders for the super-resolution and segmentation tasks. This design allows the model to optimize both tasks simultaneously while benefiting from shared features. For the super-resolution task, various perceptual losses such as VGG and LPIPS will be explored.
For the segmentation task, cross-entropy loss will be compared with dice loss and focal loss. During training, the model is trained end-to-end, allowing the super-resolution and segmentation tasks to benefit from shared learned features. We expect the simultaneous learning of both tasks to improve overall performance, as the tasks are complementary: the super-resolution component enhances the spatial detail available to the segmentation task, while accurate segmentation can guide the super-resolution process by focusing on regions of interest. Data augmentation techniques such as rotation, scaling, and flipping are applied to increase the diversity of training samples and prevent overfitting. A subset of the data, i.e. wildfire events from the latest year in the dataset, is reserved for validation and testing to evaluate model performance objectively. Early experiments conducted using MODIS as the conditioning input have shown promising results. We therefore expect that expanding the multi-task diffusion model to include data from Sentinel-3 SLSTR and VIIRS will produce accurate predictions, both in terms of visual quality and quantitative metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). The simultaneous segmentation output is anticipated to accurately delineate burned areas, improving metrics such as Intersection over Union (IoU) and F1-score compared to models trained on single tasks. By integrating data from Sentinel-3 SLSTR, VIIRS, and MODIS, we expect to capitalize on the strengths of each sensor. The combination should provide richer spectral as well as spatial information, aiding both tasks. We anticipate that the multi-task learning approach will lead to better generalization on unseen data, as the shared representation learned by the model should capture underlying patterns related to both image super-resolution and semantic segmentation.
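The dice loss mentioned above scores the overlap between a predicted mask and the ground truth. A minimal sketch, with invented toy masks rather than the authors' implementation:

```python
# Sketch: dice loss for binary segmentation, 1 - 2|A∩B| / (|A| + |B|).
# The flat toy masks below are invented for illustration.

def dice_loss(pred, target, eps=1e-6):
    """pred: soft probabilities in [0, 1]; target: binary labels."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * intersection + eps) / (total + eps)

perfect  = dice_loss([1.0, 0.0, 1.0, 0.0], [1, 0, 1, 0])  # full overlap
disjoint = dice_loss([1.0, 0.0, 0.0, 0.0], [0, 0, 0, 1])  # no overlap
print(round(perfect, 3), round(disjoint, 3))
```

Unlike per-pixel cross-entropy, the loss is computed over the whole mask, which makes it less sensitive to the class imbalance typical of burned-area maps, where unburned pixels dominate.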
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G1)

Session: D.02.06 Foundation Models for Earth Observation: Current solutions with less labelled data to improve environment monitoring and future perspectives to revolutionize geospatial data discovery and utilization - PART 1

This session will delve into cutting-edge developments in Earth Observation (EO) Foundation Models (FM). It will discuss current solutions, future perspectives and the implications of these models for the EO community. The session will also explore new possibilities enabled by multi-modal models, zero-shot applications, self-supervised learning, big data and large pre-trained models, examining their potential to transform EO data analysis and applications.

Topics:
- Sensor independence: FMs can process data from various sensors, including multi-/hyper-spectral, SAR, LiDAR, Very High Resolution (VHR) satellite data and more, enabling holistic analysis of Earth's dynamics.
- Benchmarking and Evaluating FMs: Establishing standardised evaluation metrics and fair benchmarks to assess the performance and capabilities of FMs in processing EO data, ensuring reliability and efficiency.
- Multimodality: FMs can adeptly handle diverse data modalities such as text, video and imagery, offering new approaches to EO data analysis and interpretation without requiring extensive labelled datasets which are rarely available in environmental applications (e.g., land, forestry, agriculture, water/ice or atmospheric phenomena that can be monitored with EO data).
- Fine-tuning FMs and Self-Supervised Learning (SSL) for downstream tasks, with an emphasis on environment monitoring applications currently under-represented in EO benchmarks, such as biophysical variable estimation or early warning/anomaly detection in satellite image time-series.
- Big data: Over the past few decades, the availability of EO data has increased, providing unprecedented coverage of the Earth’s surface and atmosphere. Modern Earth System Models (ESMs), which operate at high resolutions in both space and time to simulate the evolution of Earth system components and predict the future state of the climate, estimate air pollution, and more, generate petabytes of data per simulated day. Data output and storage have already become a major bottleneck for high-resolution climate modeling. To address these challenges, approaches combining data engineering, AI, and information theory have shown great promise for various downstream applications. This session invites contributions on computational methodologies for engineering embedding representations to compress, index, tokenize, and fuse geospatial data. By focusing on topics such as embedding techniques, vector databases for data exploration, cross-modal alignment, and embedding compression, this session will provide insights into how these technologies can be applied to enhance data accessibility, sharing, and analysis in EO and ESM applications. These embeddings may facilitate efficient data transmission, data exploration/search, and cross-modal data alignment and reconstruction, such as converting vision to text or deriving surface reflectance from SAR data.
- Implications of FMs for the Community: Understanding the potential societal, environmental and economic impacts of implementing FMs in EO applications, fostering informed decision-making and resource management.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G1)

Presentation: SenCLIP: Enhancing zero-shot land-use mapping for Sentinel-2 with ground-level text prompting

Authors: Pallavi Jain, Dino Ienco, Roberto Interdonato, Tristan Berchoux, Dr. Diego Marcos
Affiliations: INRIA, Mediterranean Agronomic Institute of Montpellier - CIHEAM-IAMM, INRAE, Cirad, UMR TETIS, Univ. of Montpellier
Over the past decade, advancements in deep learning have significantly improved the accuracy and reliability of land-use/land-cover (LULC) prediction from remote sensing data [11]. Traditional methods, however, often depend on large, labeled datasets for supervised learning, which requires considerable effort to compile. Moreover, these approaches are constrained to a fixed set of predefined LULC classes. The reliance on class labels guides the learning process toward optimizing model parameters for minimizing errors on known classes [12]. While this approach results in high performance for these specific classes, it limits the model's ability to generalize to unseen classes without further training or fine-tuning. This lack of flexibility makes these models less suitable for dynamic, real-world LULC mapping tasks, where new, previously unobserved classes may emerge. Pre-trained vision-language models (VLMs) like CLIP [1] have demonstrated remarkable zero-shot classification capabilities across a wide range of domains. These models utilize free-form textual prompts to generalize well to new tasks, leveraging their ability to learn from diverse data sources. This versatility makes VLMs highly efficient for various applications, including image classification and natural language processing. However, when applied to satellite imagery, their performance is often limited due to the underrepresentation of such data in the training sets, which predominantly consist of ground-level images. This disparity hinders the model's ability to capture the unique features and semantic information inherent in satellite images, such as land use, vegetation, and urban infrastructure. Current prompting methods for satellite imagery tend to use generic phrases like "a satellite image of...," which fail to leverage the rich, domain-specific information that satellite images provide. 
These limitations hinder the model's ability to perform zero-shot LULC classification effectively and reduce its capacity to handle diverse spatial variations across different regions. To address these challenges, we propose SenCLIP, an approach that adapts CLIP's powerful vision-language representations to Sentinel-2 satellite imagery. SenCLIP leverages a large-scale dataset of Sentinel-2 images paired with geotagged ground-level photos from the Land Use/Cover Area frame Survey (LUCAS) dataset [3], a European Union initiative that monitors land use and cover across Europe. The LUCAS dataset comprises nearly one million ground-level images enriched with detailed metadata, including geolocation, vegetation types, and land-cover classifications. By aligning satellite imagery with this rich ground-level context, SenCLIP effectively bridges the semantic gap between the two modalities, enabling it to capture complex relationships in LULC data. Our approach eliminates the need for labeled satellite datasets by utilizing textual descriptions from diverse sources, enabling robust zero-shot classification across a wide range of landscapes and regions. Method SenCLIP uses contrastive learning to transfer ground-level representations to Sentinel-2 satellite imagery. Unlike prior approaches like Sat2Cap [2], which focus on high-resolution data, SenCLIP targets Sentinel-2’s medium-resolution (10m) data. SenCLIP’s architecture bridges the semantic gap between modalities through three stages: pre-training, prompt selection, and zero-shot inference. Training Dataset: The LUCAS dataset [3] provides geotagged ground-level images from 235,000 European locations. Each site includes four directional images (north, east, west, south), resulting in 900,000 total images. Corresponding Sentinel-2 imagery is retrieved via the Planetary Computer API [4], filtered for cloud cover <20%, and preprocessed into 100×100 pixel RGB scenes. 
Satellite Image Representation: SenCLIP uses two image encoders: a frozen CLIP encoder to produce fixed embeddings for ground-level images and a trainable encoder fine-tuned on Sentinel-2 data to generate satellite embeddings. A projection head, a lightweight neural network, refines these embeddings to align them with ground-level representations. Four directional ground-level images are pooled into a unified embedding using attention pooling, which emphasizes the most informative views. Satellite embeddings are trained to align with these pooled embeddings using InfoNCE contrastive loss [8], which optimizes for both similarity and dissimilarity across modalities. Prompt Selection and Zero-Shot Inference: Using GPT-3.5 [13], a state-of-the-art large language model, we generated descriptive prompts tailored to each LULC class, incorporating both aerial and ground-level perspectives. To optimize classification performance, we assess the quality of these prompts based on their relevance to the target class, prioritizing those with higher scores for inference. For zero-shot classification, Sentinel-2 image embeddings are compared with text embeddings of the prompts using cosine similarity, enabling the identification of the most probable LULC class without requiring labeled training data. Training and Implementation: SenCLIP was fine-tuned with two backbones: ResNet50 (RN50) and ViT-B/32. For RN50, all layers were fine-tuned, while for ViT-B/32, only the final transformer block, linear layer, and projection head were trained. Results: SenCLIP was evaluated against state-of-the-art VLMs like RemoteCLIP [5], SkyCLIP [6], and GeoRSCLIP [7] on the EuroSAT [9] and BigEarthNet [10] datasets. Key findings include: On the EuroSAT dataset, SenCLIP achieves the highest accuracy across all prompt types. For the RN50 backbone, the model achieves a top accuracy of 57.95% with ground-level prompts, outperforming CLIP by more than 25%. 
With the ViT-B/32 backbone, SenCLIP achieves 66.91% accuracy with ground-level prompts, a significant increase over CLIP's 51.66%. Additionally, SenCLIP leads with aerial prompts, achieving 57.78% accuracy with RN50 and 70.78% with ViT-B/32, showcasing its ability to effectively integrate both aerial and ground-level information. For the BigEarthNet multilabel dataset, SenCLIP again outperforms all other models. With the RN50 backbone, SenCLIP achieves the highest accuracy of 34.80% with ground-level prompts, surpassing CLIP by 10.71%. On ViT-B/32, the model reaches 37.40% with ground-level prompts, outperforming GeoRSCLIP by 5.30%. These results demonstrate SenCLIP's effectiveness in leveraging ground-level context for improved performance on Sentinel-2 classification tasks. We also analysed the influence of prompt selection on model performance, showing that selecting 2-5 good prompts yields the best results, particularly with the ViT backbone; conversely, when the worst-scoring prompts are chosen, performance drops significantly, by 20-30%. Conclusion: SenCLIP demonstrates the potential of cross-view fine-tuning to bridge the gap between satellite and ground-level imagery. By aligning Sentinel-2 representations with CLIP embeddings of geotagged ground-level photos from the LUCAS dataset, it enables flexible, zero-shot LULC classification without relying on predefined class names or labeled satellite data. This approach offers a significant leap in remote sensing by accommodating diverse prompting styles. Attention pooling and tailored prompts further enhance SenCLIP's ability to capture the semantic richness of both aerial and ground-level views, establishing a new standard for zero-shot LULC mapping.
References
[1] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
[2] Aayush Dhakal, Adeel Ahmad, Subash Khanal, Srikumar Sastry, and Nathan Jacobs. Sat2Cap: Mapping fine-grained textual descriptions from satellite images. arXiv preprint arXiv:2307.15904, 2023.
[3] Raphaël d'Andrimont, Momchil Yordanov, Laura Martinez-Sanchez, Beatrice Eiselt, Alessandra Palmieri, Paolo Dominici, Javier Gallego, Hannes Isaak Reuter, Christian Joebges, Guido Lemoine, et al. Harmonised LUCAS in-situ land cover and use database for field surveys from 2006 to 2018 in the European Union. Scientific Data, 7(1):352, 2020.
[4] Microsoft Open Source, Matt McFarland, Rob Emanuele, Dan Morris, and Tom Augspurger. microsoft/planetarycomputer: October 2022, Oct 2022.
[5] Fan Liu, Delong Chen, Zhangqingyun Guan, Xiaocong Zhou, Jiale Zhu, and Jun Zhou. RemoteCLIP: A vision language foundation model for remote sensing. arXiv preprint arXiv:2306.11029, 2023.
[6] Zhecheng Wang, Rajanie Prabha, Tianyuan Huang, Jiajun Wu, and Ram Rajagopal. SkyScript: A large and semantically diverse vision-language dataset for remote sensing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 5805–5813, 2024.
[7] Zilun Zhang, Tiancheng Zhao, Yulong Guo, and Jianwei Yin. RS5M: A large scale vision-language dataset for remote sensing vision-language foundation model. arXiv preprint arXiv:2306.11300, 2023.
[8] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
[9] Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(7):2217–2226, 2019.
[10] Gencer Sumbul, Jian Kang, Tristan Kreuziger, Filipe Marcelino, Hugo Costa, Pedro Benevides, Mario Caetano, and Begum Demir. BigEarthNet dataset with a new class-nomenclature for remote sensing image understanding. arXiv preprint arXiv:2001.06372, 2020.
[11] Xiao Xiang Zhu, Devis Tuia, Lichao Mou, Gui-Song Xia, Liangpei Zhang, Feng Xu, and Friedrich Fraundorfer. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine, 5(4):8–36, 2017.
[12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
[13] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
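The zero-shot inference step described in the abstract, comparing a satellite image embedding against per-class prompt embeddings via cosine similarity, can be sketched as follows. The toy vectors and class names are invented for illustration; real embeddings would come from the CLIP text and image encoders.

```python
# Sketch: zero-shot LULC classification by cosine similarity between an
# image embedding and per-class text-prompt embeddings. All vectors are
# toy values; real ones come from the (fine-tuned) CLIP encoders.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

prompt_embeddings = {          # one averaged text embedding per LULC class
    "forest":   [0.8, 0.1, 0.1],
    "cropland": [0.1, 0.9, 0.0],
    "urban":    [0.1, 0.1, 0.9],
}
image_embedding = [0.7, 0.2, 0.1]  # hypothetical Sentinel-2 patch embedding

predicted = max(prompt_embeddings,
                key=lambda cls: cosine(image_embedding, prompt_embeddings[cls]))
print(predicted)  # -> forest
```

Because the class set is defined only by the text prompts, adding a new land-cover class requires nothing more than writing a new prompt, which is the flexibility the abstract emphasizes.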
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G1)

Presentation: Prithvi WxC: A Foundation Model for Weather and Climate

Authors: Manil Maskey, Sujit Roy, Johannes Schmude, Ankur Kumar, Rajat Shinde, Vishal Gaur, Amy Lin, Tsengdar Lee, Rahul Ramachandran
Affiliations: NASA, IBM Research, University of Alabama in Huntsville
Deep learning is transforming weather applications by providing accurate forecasts with lower computational costs compared to numerical weather prediction (NWP). Unlike physics-based methods, deep learning models learn high-level features through probability distributions to capture relationships among multiple variables. We present Prithvi WxC, a transformer-based foundation model for weather applications, trained on NASA's MERRA-2 dataset, comprising 160 global atmospheric variables from 1980 to the present with high spatial and temporal resolution. Prithvi WxC integrates concepts from recent transformer architectures to manage regional and global dependencies and efficiently handle longer token sequences, allowing incorporation of off-grid measurements during fine-tuning. The model's performance was validated across dynamic atmospheric processes, including convection, turbulence, radiative transfer, and mass conservation. Experiments introduced anomalies using MERRA-2 data, and results—such as temperature, wind patterns, and humidity—were compared with observations and a reference NWP model (WRF). Prithvi WxC demonstrated precise simulations of synoptic and mesoscale weather phenomena and adhered to fundamental physical principles. Downstream tasks, including gravity wave flux parameterization, hurricane estimation, and dataset downscaling, further validated the model's versatility. Preliminary results indicate its robust capability to capture atmospheric dynamics and respond to perturbations in line with theoretical predictions, positioning it as a significant advancement in computational weather forecasting and climate research.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G1)

Presentation: A Unified Foundation Model for Multi-Sensor Earth Observation: Breaking Representation Barriers across Remote Sensing Sensors

Authors: Gencer Sumbul, Devis Tuia
Affiliations: École Polytechnique Fédérale de Lausanne (EPFL)
Numerous airborne and spaceborne sensors produce massive amounts of Earth observation (EO) data, amounting to tens of terabytes daily. Unlike natural images, EO data encompasses a vast range of wavelengths, from visible light to infrared and microwaves. Optical sensors deliver high-resolution images primarily in the visible and infrared spectra, providing detailed semantic information, but are constrained by weather conditions and daylight. In contrast, synthetic aperture radar (SAR) operates in the microwave spectrum, enabling day-and-night imaging regardless of weather. Thermal sensors measure surface-emitted energy, useful for tasks like estimating surface temperatures. Together, the unique strengths of these sensors support continuous, all-weather Earth monitoring, facilitating applications in areas like agriculture, climate science, hydrology, and urban planning, while contributing to achieving the UN’s sustainable development goals. Over the past decades, deep learning has significantly advanced the automatic and data-driven understanding of EO images. Foundation models, either multi-spectral (e.g., SatMAE, SpectralGPT, S2MAE, SkySense) or RGB data-based (e.g., ScaleMAE, CrossScaleMAE, SatLas) mirror the success observed in the fields of NLP (e.g., CLIP) and computer vision (e.g., MAE), by learning transferable EO image representations across multiple tasks. These advances significantly reduce the demand for dataset-specific customization, which is often resource-intensive and can lead to fragmented, isolated efforts. However, a significant challenge persists: the absence of unified image representations for processing EO data across multiple data sources. This difficulty arises from the intrinsic heterogeneity of EO imagery sensors, which differ in spectral range, radiometric resolution, and spatial resolution. As a consequence, previous solutions tend to rely on sensor-specific models, which are limited. 
Although multi-sensor models have been proposed to address this gap (e.g., CROMA and SkySense), they typically require separate sensor-specific backbones, increasing computational complexity. Moreover, adding new sensors to a pre-trained model necessitates architectural modifications and further backbones, limiting scalability to arbitrary EO imagery sensors. Finally, models trained on fixed sensor combinations develop biases, limiting their adaptability to new, unseen sensors. To overcome these limitations, we introduce a sensor-agnostic foundation model. For the first time, we unify representations across various EO sensors, enabling downstream tasks with a single model and supporting zero-shot transfer to new sensors. Our approach efficiently handles heterogeneous sensors by projecting their data into a shared spectrum-aware space. This concept leverages the fact that all EO imagery sensors, despite their differences, capture subsets of the electromagnetic spectrum with well-defined physical characteristics. Each sensor's bands are projected into this spectrum-aware space using wavelength-specific functions. This shared representation eliminates the need for separate models for each sensor and allows our model to generalize to new sensors by interpolating learned projection functions for unseen wavelengths. To learn unified representations, the model is pretrained using a self-supervised transformer architecture, learning through mirrored mix-up of paired multi-sensor data on the BigEarthNet-S1, BigEarthNet-S2, fMoW-RGB and fMoW-S2 datasets. This involves reconstructing randomly masked regions within the spectrum-aware space, which enhances synergy between data sources and reduces sensor-specific biases. Our pre-training strategy scales efficiently to large datasets, fostering robust, generalizable representations.
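The spectrum-aware projection idea can be illustrated with a toy sinusoidal wavelength encoding (our own illustrative construction, not the model's actual learned projection functions): each band is mapped to a feature vector from its central wavelength, so an unseen sensor's band lands smoothly between its spectral neighbours.

```python
# Sketch: encoding a sensor band by its central wavelength with sinusoidal
# features, so bands from any sensor live in one shared, spectrum-aware
# space. Frequencies and dimensionality are arbitrary illustrative choices.
import math

FREQS = [0.5, 1.0, 2.0, 4.0]  # cycles per micrometre (illustrative)

def wavelength_encoding(wavelength_um):
    """Map a band's central wavelength (micrometres) to a feature vector."""
    features = []
    for f in FREQS:
        features.append(math.sin(f * wavelength_um))
        features.append(math.cos(f * wavelength_um))
    return features

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

s2_red = wavelength_encoding(0.665)  # Sentinel-2 B4 (red)
s2_nir = wavelength_encoding(0.842)  # Sentinel-2 B8 (NIR)
unseen = wavelength_encoding(0.750)  # band of a sensor never seen in training

# The unseen band's encoding sits between its spectral neighbours, which is
# what allows a model to interpolate to sensors absent from pretraining.
print(dist(unseen, s2_red) < dist(s2_red, s2_nir))  # True
print(dist(unseen, s2_nir) < dist(s2_red, s2_nir))  # True
```

A learned projection would replace these fixed sinusoids, but the key property is the same: the encoding varies smoothly with wavelength, so unseen bands receive meaningful representations.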
Experiments on eight diverse datasets demonstrate the model's excellent performance in both single- and multi-modal tasks, as well as its ability to generalize to new, unseen sensors during inference. To assess our model's representation ability across different sensors, tasks, and scenarios, we conducted multispectral, radar, and RGB experiments for multi-label single/multi-modal and multi-class single-modal classification, where the sensors of the downstream tasks were seen during pretraining. In addition, we conducted zero-shot sensor transfer experiments on semantic segmentation for crop-type mapping, where the corresponding sensor was not seen during pretraining. Our model with a ViT-L backbone provides 1% higher mAP on BigEarthNet-S2 finetuning, while achieving 1.5% higher mAP for linear probing on multi-modal inputs of the BigEarthNet-S1 and BigEarthNet-S2 datasets compared to the previous state-of-the-art model CROMA (which was specifically pretrained with SAR-optical image pairs). Compared to existing methods, our model also achieves the highest linear probing and finetuning accuracies on downstream transfer for scene classification on the EuroSAT dataset, while providing the highest kNN classification accuracies on the WHU-RS19 and UCMerced datasets with multi-scale inputs. Our model's zero-shot sensor transfer on the SICKLE dataset (which includes Landsat-8 images with thermal infrared bands) for crop-type mapping shows its generalization ability towards unseen sensors by surpassing fully supervised models with linear probing. The effectiveness of our model relies on its capability to process diverse sensor inputs without requiring sensor-specific pretraining or backbones, and to generalize exceptionally well to new sensors and downstream tasks not present during pretraining, while retaining high pretraining efficiency with reduced computational costs, using only 500K images and as few as 300 epochs with ViT-B and ViT-L backbones.
Our model marks a major leap forward in EO data processing, establishing significant improvements in scalability, efficiency, and versatility in EO research and applications. By unifying the interpretation of multi-sensor EO imagery through a single foundation model, it harnesses the complementary strengths of diverse sensors while eliminating the need for separate, sensor-specific models. Future efforts will focus on extending our model to the temporal dimension and exploring a broader range of downstream tasks, advancing the development of unified, physics-driven foundation models for EO.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G1)

Presentation: Prithvi-EO-2.0: A Versatile Multi-Temporal Foundation Model for Earth Observation Applications

Authors: Manil Maskey, Sujit Roy, Daniela Szwarcman, Paolo Fraccaro, Benedikt Blumenstiel, Christopher Phillips, Bianca Zadrozny, Þorsteinn Elí Gíslason, Rocco Sedona, Gabriele Cavallaro, Pontus Olofsson, Rahul Ramachandran
Affiliations: NASA, IBM Research, University of Alabama in Huntsville, University of Iceland, Jülich Supercomputing Centre
Significant advancements in adaptable and reusable artificial intelligence (AI) models are revolutionizing Earth science and remote sensing. Foundation models, pre-trained on extensive unlabeled datasets through self-supervision, can be fine-tuned for diverse downstream tasks using minimal labeled data. These generalist models, leveraging multi-sensor data, are increasingly valuable for Earth observation applications. We introduce Prithvi-EO-2.0, a transformer-based geospatial foundation model pre-trained on over seven years of multispectral satellite imagery from the Harmonized Landsat Sentinel-2 (HLS) global dataset. The training process utilized approximately 4.2 million samples, ensuring representation across Land Use and Land Cover (LULC) classes and over 800 ecoregions to enhance landscape diversity. This dataset, three times larger than prior versions, significantly improves global coverage and diversity. The Prithvi-EO-2.0 models stand out for several reasons. First, a comparison between Prithvi-EO-2.0-300M and Prithvi-EO-1.0-100M, which share the same architecture but differ in pretraining datasets, demonstrates the value of the expanded global dataset, resulting in a 3% improvement in the overall GEO-Bench score, with even greater gains in specific tasks. Second, larger Prithvi-EO-2.0 models consistently outperformed smaller versions, highlighting the benefits of increased model size. Third, models incorporating temporal and location embeddings showed better performance overall, even though not all GEO-Bench datasets include this information. Lastly, Prithvi-EO-2.0 excelled in high-resolution tasks, despite being pretrained solely on 30m resolution data. We also tested our model on further downstream tasks, namely burn scar and burn intensity mapping, biomass estimation, crop classification, landslide detection, LULC mapping, and estimation of Gross Primary Productivity (GPP). Overall, Prithvi-EO-2.0 showed improved performance with respect to the baselines.
The model and workflows are publicly available as open-source contributions to advance Earth sciences through Hugging Face.
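To illustrate the temporal and location embeddings mentioned above, here is a hedged sketch (our own simplification; the embedding dimension, sinusoidal form and normalizations are illustrative assumptions, not the actual Prithvi-EO-2.0 implementation): metadata scalars are encoded sinusoidally and added to the patch tokens.

```python
import numpy as np

# Hedged sketch of temporal and location embeddings: sinusoidal
# encodings of acquisition date and latitude/longitude are added
# to patch tokens. All dimensions and scalings here are illustrative.

EMBED_DIM = 16

def sincos(values: np.ndarray, dim: int) -> np.ndarray:
    """Standard sinusoidal encoding of scalars into `dim` features."""
    freqs = 1.0 / (10000 ** (np.arange(dim // 2) / (dim // 2)))
    angles = np.outer(values, freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def add_metadata_embeddings(tokens, day_of_year, lat, lon):
    """tokens: (n_tokens, EMBED_DIM) for one acquisition."""
    temporal = sincos(np.array([day_of_year / 365.0]), EMBED_DIM)[0]
    location = (sincos(np.array([lat / 90.0]), EMBED_DIM)[0]
                + sincos(np.array([lon / 180.0]), EMBED_DIM)[0]) / 2
    return tokens + temporal + location

tokens = np.zeros((4, EMBED_DIM))
out = add_metadata_embeddings(tokens, day_of_year=172, lat=48.2, lon=16.4)
assert out.shape == (4, EMBED_DIM)
```

Because the metadata offset is shared across a scene's tokens, the model can exploit it when present but still works when a benchmark dataset lacks date or location information (the offset can simply be zeroed).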
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G1)

Presentation: Foundation Models for Climate and Society

Authors: Theodor J. L. Forgaard, Jarle H. Reksten, Anders U. Waldeland, Are Jensen, David Arthurs, Amund Frogner Borge, Vasile Craciunescu, Tore Wulf, Michael Kampffmeyer, Arnt B. Salberg
Affiliations: Norwegian Computing Center, UiT - The Arctic University of Norway, Polar View, Norwegian Water Resources and Energy Directorate, Romanian National Meteorological Administration, Danish Meteorological Institute
To leverage Earth observation (EO) data for large-scale analysis, automatic methods are a prerequisite. Since 2012, deep learning (DL) models have brought about a revolutionary change in the analysis of image data and are currently considered state-of-the-art for a broad spectrum of EO tasks. However, a bottleneck with supervised DL models is that they often require a vast amount of labelled data to be trained, and the research community has therefore started to explore alternatives to supervised learning. In recent years, foundation models (FMs) have signified a change of thinking in computer vision. FMs are trained on a vast volume of unlabeled data and can identify complex patterns due to their large-scale learning capabilities. Typically, an additional head or decoder (a small network) is added to the FM, which is trained and adapted to various use-cases by means of a small amount of labelled data. FMs have also started to be explored for EO applications; however, current EO-based FMs are limited in terms of handling different modalities with large differences in resolution. Modern FMs are often based on transformers and are trained using self-supervised learning (SSL). There are several SSL schemes in place, including masked autoencoders (MAE), where we mask part of the input data and train the model to predict the masked data. This is not useful by itself, but the model learns a compressed representation of the data, which can be leveraged in downstream applications. This potentially makes the FM more useful than models trained on a limited set of labeled data. The Norwegian Computing Center and UiT – The Arctic University of Norway, in collaboration with user partners the Romanian National Meteorological Administration, the Danish Meteorological Institute, Polar View and the Norwegian Water Resources and Energy Directorate, are developing a multi-modal FM for the ESA Philab project FM4CS.
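The MAE scheme described above (mask part of the input, train the model to predict the masked part) can be sketched as follows; this is a minimal illustration with a stand-in predictor, not the FM4CS model, and the mask ratio and token sizes are assumptions:

```python
import numpy as np

# Minimal sketch of the masked-autoencoder (MAE) pretraining signal:
# the image is split into patch tokens, a random subset is masked,
# and the model is trained to reconstruct only the masked patches.
# A mean-of-visible "model" stands in for the transformer so the
# example stays self-contained.

rng = np.random.default_rng(42)

def random_mask(n_tokens: int, mask_ratio: float = 0.75) -> np.ndarray:
    """Boolean mask over tokens: True = hidden from the encoder."""
    n_mask = int(round(n_tokens * mask_ratio))
    mask = np.zeros(n_tokens, dtype=bool)
    mask[rng.choice(n_tokens, size=n_mask, replace=False)] = True
    return mask

def mae_loss(tokens: np.ndarray, mask: np.ndarray) -> float:
    """MSE on masked tokens only, predicting each from the visible mean."""
    prediction = tokens[~mask].mean(axis=0)   # stand-in for the decoder output
    return float(((tokens[mask] - prediction) ** 2).mean())

tokens = rng.random((196, 64))   # e.g. 14x14 patches, 64-dim each
mask = random_mask(len(tokens))
loss = mae_loss(tokens, mask)
```

Computing the loss only on masked tokens is what forces the encoder to build a compressed, predictive representation of the visible context, which is then reused by the downstream decoder heads.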
The FM is designed to process data from the satellites Sentinel-1 SAR, Sentinel-2 and Sentinel-3 OLCI and SLSTR. The FM is based on vision transformers (ViT), utilizing the same principle as the USat approach to handle the different resolutions between the modalities. The training of the FM is based on the MAE approach, and to ensure that the SSL works efficiently, we have developed a smart sampling scheme that provides relevant and diverse training data. In addition to SSL, we have also created a learning task of regressing climate variables from the ERA5 dataset. To train the FM, over 20 TB of Sentinel data was collected and processed using the LUMI supercomputer. The multi-modal FM is demonstrated on the following use-cases: mapping of snow, flood zone mapping, mapping and monitoring of sea ice, iceberg detection, early drought warning and mapping of wetlands. The resolutions of the target use-case products are vastly different, e.g. for snow mapping we aim for a ground sampling distance (GSD) of 250m whereas for flood zone mapping we aim for a GSD of 10m. We have therefore trained two versions of the FM: one aiming for high-resolution products with GSD between 10 and 60m, and one aiming for low-resolution products with GSD above 100m. The downstream tasks are implemented using the open-source framework TerraTorch, which is a flexible fine-tuning framework for geospatial FMs. TerraTorch supports common fine-tuning tasks such as image segmentation and pixel-wise regression along with a selection of task-specific decoder heads. The FM has currently been evaluated with promising results on flood zone mapping and wetland mapping, and the development of the use-cases will continue in the first half of 2025.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G1)

Presentation: Masked Token Reconstruction for Multi-Modal Earth Observation

Authors: Thomas Brunschwiler, Dr. Johannes Jakubik, Felix Yang, Benedikt Blumenstiel, Stefano Maurogiovanni, Erik Scheurer, Dr. Rocco Sedona, Prof. Gabriele Cavallaro
Affiliations: IBM Research - Europe, Forschungszentrum Jülich
Remote sensing data plays a critical role in environmental assessments on regional to global scales. Combining multi-modal satellite observations (e.g., optical and radar imagery), higher-level data products (e.g., land-cover maps), and meta-data (e.g., geolocation) is expected to significantly enhance the quality of these assessments. However, deep learning models have faced challenges in effectively leveraging such diverse datasets within a single architecture due to issues such as spatio-temporal misalignment between modalities, partial unavailability of data, and modality-specific modeling requirements. To address these challenges, we leverage state-of-the-art advancements in multi-modal foundation models from the natural image domain and adapt them to the domain of Earth observation. We curated a global-scale dataset comprising over 9 million spatio-temporally aligned samples from Sentinel-1 and Sentinel-2, augmented with higher-level data products like Digital Elevation Maps, Land-Use and Land-Cover datasets, Canopy Height models, as well as meta-data like textual descriptions and geographic coordinates. The training process builds on a two-stage approach: First, we tokenize each modality using encoder-decoder architectures combined with finite scalar quantization and diffusion. Second, the tokens from these modalities are aligned through the masked token reconstruction task to learn correlations within and across modalities. In this presentation, we first analyze tokenizer performance, comparing vector quantization with finite scalar quantization, alongside the effect of adding diffusion-based approaches. Second, we demonstrate the model capabilities in any-to-any generation, highlighting artefacts and performance in cross-modal reconstruction. Third, we showcase the fine-tuning performance on various multi-modal tasks, such as flood mapping, and benchmark its performance against late-fusion approaches to underline the benefits of masked token reconstruction.
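As a hedged illustration of the finite scalar quantization used by the tokenizers described above, the sketch below (level counts and latent dimensionality are our own illustrative choices, not the authors' configuration) bounds each latent dimension, rounds it to a small number of levels, and packs the per-dimension integers into a single discrete token id:

```python
import numpy as np

# Sketch of finite scalar quantization (FSQ): each latent dimension
# is squashed with tanh, scaled to its level range, and rounded.
# The rounded integers index an implicit codebook of prod(LEVELS)
# tokens without any learned codebook vectors.

LEVELS = np.array([8, 5, 5, 5])   # implicit codebook size = 8*5*5*5 = 1000

def fsq(z: np.ndarray):
    """Quantize latents z of shape (..., 4); return (z_q, token_ids)."""
    half = (LEVELS - 1) / 2.0
    codes = np.round(np.tanh(z) * half + half).astype(int)  # ints in [0, L_i - 1]
    z_q = codes / half - 1.0                                # quantized values in [-1, 1]
    strides = np.concatenate(([1], np.cumprod(LEVELS[:-1])))
    token_ids = (codes * strides).sum(axis=-1)              # mixed-radix packing
    return z_q, token_ids

z = np.random.default_rng(1).normal(size=(2, 3, 4))   # e.g. 2x3 patches, 4-dim latents
z_q, ids = fsq(z)
assert ids.min() >= 0 and ids.max() < LEVELS.prod()
```

Compared with vector quantization, this avoids codebook-collapse issues because the "codebook" is just the fixed grid of rounded values, which is one motivation for comparing the two schemes.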
Our results position our modelling framework as a key advancement in enabling multi-modal Earth observation, bridging the gap between diverse data modalities for enhanced environmental monitoring and assessments. This work is partially funded by the Phi-Lab of the European Space Agency (ESA, FAST-EO, contract No. 4000143501/23/I-DT), by the European Union (Horizon Europe, Embed2Scale, contract No. 101131841) and the Swiss State Secretariat for Education (SERI).
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G2)

Session: F.04.16 Sustainable Blue Economy

The size of the global ocean economy is growing faster than the general economy, with its global gross value added projected to triple by 2030 with respect to 2010. Activity sectors are multiple, covering well-established activities such as fisheries, aquaculture and shipping, as well as more emerging fields such as tourism, renewable energy production from wind, wave, salinity-gradient, tidal, thermal and biomass sources, marine biotechnology, and mining of seabed mineral resources. While a flourishing ocean economy depends on sustainable and healthy oceans, exploitation of marine resources has in recent decades placed immense and unsustainable pressure on the marine environment, putting ocean health, and by the same token the marine economy itself, under considerable threat. At European and international levels, a number of directives, policies and action plans have been put in place to help balance the maritime economy with protection of the marine environment. Satellite data, by providing continuous, global and repetitive measurements of many key parameters of the physical and biogeochemical marine environment, including high-resolution mapping of key marine habitats, are particularly suited to support the development and monitoring of new Blue Economy activities, to detect any significant changes induced by the start of new activities over a given area, to support blue economy operators in their daily operations, and to report on the environmental impact of their activity in the frame of existing environmental regulations.
This session welcomes contributions investigating how remote sensing, potentially used in synergy with other information (e.g. in-situ measurements, model outputs), can be used to support the sustainable development of the blue economy sector, in line with current international and European policies.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G2)

Presentation: Mapping Coastal Aquaculture Facilities and Optimizing Shellfish Operations Using Earth Observation Methods

Authors: Andrey Kurekin, Victor Vicente, Dr Steve Groom, Anton Salgado, Arlene Avillanosa
Affiliations: Plymouth Marine Laboratory, The Spanish National Research Council, College of Fisheries and Aquatic Sciences, Western Philippines University
With the depletion of fish stocks and the reduction of global fisheries production, aquaculture plays a crucial role in providing an alternative food source. With the unprecedented expansion of aquaculture around the world, there is a critical need to monitor its progress, to ensure the sustainability of farming activities and to avoid over-exploitation of marine resources. In the project “Earth Observations for Sustainable Aquaculture”, funded by the European Space Agency, we explore the use of remote sensing technologies to address two key challenges affecting aquaculture sustainability: the conflicting multi-use of the coastal environment, and unreported and unregulated aquaculture development. Coastal areas are subject to several competing interests, such as in the Galician rias (Spain), where maritime traffic, recreation, tourism and nature conservation compete for space with small-scale fisheries, shellfish extraction from sandbanks and massive mussel aquaculture. Such conflicts can be mitigated by optimizing shellfish operations, for example by concentrating farming where mussels grow best while regulating the maximum farming intensity an area can sustain in terms of natural primary production (NPP). This can concentrate production in fewer areas and, consequently, allocate more space to other coastal activities. Since shellfish growing in farms depend on natural NPP, overexploited areas can reduce the NPP available to the local ecosystem. Monitoring of Chl-a concentration can provide this information and be used for estimating NPP and the regional carrying capacity. Satellites can cover vast areas for prolonged periods and provide consistent Chl-a measurements for estimating NPP. In this study we use the Sentinel-2 MSI sensor for this purpose. Although it was not initially designed for ocean applications, studies have demonstrated its use in inland and coastal waters, where spatial resolution becomes a more important factor.
Sentinel-2 MSI allows observations between individual rafts, enabling further investigation of their impact on the ambient phytoplankton and providing the information needed by farmers. Mapping the locations of aquaculture facilities is important for optimal exploitation, minimising environmental impact and preventing unreported and unregulated use of marine resources. While these assessments are commonly documented in regions like Europe, other regions, such as East Asia, face challenges in monitoring aquaculture facilities. Consequently, information on aquaculture development and its environment is limited, and sustainability in these areas could potentially be at risk. In this study we apply remote sensing methods to map coastal aquaculture structures in an automated way. This builds on our previous success in mapping aquaculture in coastal waters using both multi-spectral optical and microwave synthetic aperture radar (SAR) EO data. Using the Sentinel-1 SAR and Sentinel-2 MSI sensors we could successfully detect and map aquaculture objects as small as 5 meters across, but the classification of these objects was limited by spatial resolution. We utilise very-high-resolution (VHR) remote sensing images with spatial resolution better than 4 meters to improve the accuracy of aquaculture maps and classify the detected objects into different categories. For processing VHR data we applied state-of-the-art machine learning methods based on convolutional neural networks (CNNs), such as YOLO (You Only Look Once) version 8.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G2)

Presentation: SEADOTs – Socio-ecological ocean management applications using Digital Ocean Twins

Authors: Piotr Zaborowski, Ute Brönner, Dr Rita Tatiana Vasconcellos L. d’Oliveira Bouman, Dr Ragnar Arnason, Dr Dorothy Jane Dankel, Nenad Gligoric, Dr Harald Hasler-Sheetal, Babis Ipektsidis, Sigrid Keiser, Dr Susa Niiranen, Johannes Pein, Dr Juan Carlos Rocha Gordo, Alondra Sofia Rodriguez, Joanna Staneva, Kristin Sørheim, Dr Lara Veylit
Affiliations: SINTEF OCEAN AS, HASKOLI ISLANDS, ZENTRIX LAB, ICES, NETCOMPANY-INTRASOFT SA, GEOMAR, STOCKHOLMS UNIVERSITET, HELMHOLTZ-ZENTRUM HEREON GMBH, Open Geospatial Consortium Europe
SEADOTs (Social-Ecological Ocean Management Applications using Digital Ocean Twins) is a research and innovation action funded under the Mission Restore our Ocean and Waters by 2030, which started in September 2024. SEADOTs works on the integration of socio-ecological models into the EU Digital Twin Ocean (DTO) to advance holistic, just, and sustainable ocean management by bringing a predictive component for social-ecological aspects into comprehensive digital ocean twins (DOTs). DOTs, as extensions of the DTO, will combine digital representations of the ocean with human activities in the ocean, merging socio-ecological and socio-economic data with ocean data and Earth observation within a geophysical and Socio-Ecological System (SES) framework. An SES is an integrated framework that combines social and ecological factors to understand how human societies function and interact with their environments. Long-term EO and marine observations, combined with multi-actor models, can provide detailed insights into environmental changes and their socio-economic impacts. This combination allows policymakers to make informed decisions by visualizing policies' effects on the environment and society. SEADOTs supports data acquisition and provides spatially explicit social-ecological data by merging socio-centric models with digital twin representations of specific oceanic regions. Applications in the Norwegian North Sea, the Southern North Sea, and the Baltic Sea address current challenges and developments in these areas. The current crises, including climate change, the difficulty of food production, and the loss of cheap energy sources, require a rethink and innovation in producing energy and food. Partly due to the EU Green Deal, and as reflected in the UN Ocean Decade, the ocean has become a key area of focus for climate-friendly economic activities. The scale of the planned projects necessitates increasingly sophisticated digital planning, known as digital twins.
To this end, bespoke model routines are being developed that calculate the physical and biogeochemical dynamics of food production through aquaculture in offshore wind farms over long periods, to the nearest meter and minute. The overarching goal of SEADOTs is to provide stakeholders and the public with a tool that supports planning and calculates the consequences of combined energy and food production in a specific pilot case. The collected data will be more comprehensive and include geospatial attributes that social-domain data typically lack. The 'what-if' scenarios will focus on harmonizing sustainable fisheries management with marine protected zones, assessing the impact of offshore wind projects on a small island community, and examining the economic advantages and implications of shared marine spaces. These applications simulate the intricate interactions between human activities and marine ecosystems to facilitate and inform political decision-making, marine spatial planning, and adaptive management. This integrated approach ensures that policies address human and ecological needs more effectively and sustainably. SEADOTs' ambition is to help safeguard ocean ecosystems, promote sustainable resource use, and enhance social and economic well-being. The project will leverage developments from ongoing Mission and Green Deal projects and demonstrate Ocean Management Applications with Digital Ocean Twins on the EU Digital Twin Ocean infrastructure and on distributed platforms for socio-ecological, socio-economic, and political decision-makers. For that purpose, SEADOTs collaborates with sibling and ongoing DTO projects, builds stakeholder capacity, and ensures data interoperability with geospatial ocean data in suitable repositories beyond the project period.
The SEADOTs consortium is built on scientific and technical excellence, including social science, socio-economics, standardization, twin development, marine spatial planning, marine policy, gaming, and the UN Decade Ocean Best Practices. We will also work closely with the ICES Working Group WGSOCIAL and the ICES Human Dimension Steering Group (HUDISG) to develop and improve social and economic indicators and describe alternative futures and management options for marine social-ecological systems.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G2)

Presentation: Satellite Ocean Observing needs for the coming decade – From foundational science to the Blue Economy

Authors: Laura Lorenzoni, Kelsey Bisson, Woody Turner, Nima Pahlevan, Keith Gaddis
Affiliations: NASA
The ocean economy is valued at around 1.5 trillion US dollars per year, and ocean resources provide food, recreation and economic growth. Millions of jobs are linked to fisheries, and around 80% of global trade by volume is carried by sea. This is our Blue Economy, which relies on the sustainable use of our ocean and its ecosystems, and on maintaining the ocean’s health. Understanding how our ocean changes is critical, and on a planet that is more than 70% water, space-based Earth observations are fundamental to understanding Earth’s rapidly changing climate and its impact on aquatic ecosystems and the services they provide. The vantage point of space offers a unique and unparalleled perspective of Earth, and the means to observe, monitor, and assess changing aquatic landscapes. Over the coming decade exciting new hyperspectral, lidar, radar, and altimetry missions are planned, each with the capability to advance aquatic research by capturing synoptic and seasonal ecosystem variation, as well as observing rapid or transient changes related to open ocean, coastal and inland water events. Great potential also exists in combining data from different missions, complementing observables and taking ocean observations to new ground. The interdisciplinarity of the data will enable observation of the Earth system and offer new opportunities to study the open ocean and global coastal and inland water systems with active and passive sensors at different spatial and temporal scales, developing new data products that will support the blue economy more sustainably. This presentation will focus on NASA’s current and future priorities in ocean biology, biodiversity, and biogeochemistry, and how these measurements serve different aspects of the blue economy. It will introduce new and upcoming missions, discuss the potential of cross-mission collaboration and combined observations as a constellation, and address barriers that exist in supporting the blue economy sector.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G2)

Presentation: Harnessing Earth Observation for the Blue Economy in the Atlantic Region

Authors: Filipe Brandao, Christine Sams, Rory Scarrot, Javier Urien, Paul Holthus, Stefano Ferretti
Affiliations: GMV, National Oceanography Centre, University College Cork, SpaceSur, World Ocean Council, European Space Agency
Atlantic-facing EU countries are implementing robust marine planning and management frameworks, including the Maritime Spatial Planning (MSP) Directive, the Marine Strategy Framework Directive (MSFD), Integrated Maritime Policy (IMP), and various Blue Economy measures, to address the growing challenges in marine environmental management. The vast geographical expanse of the Atlantic underscores the need for innovative solutions to effectively manage this critical maritime region. In this context, the European Space Agency’s (ESA) initiatives to coordinate, evaluate, and demonstrate how Earth Observation (EO) can support the goals and requirements of EU marine policy are particularly timely and relevant. One such initiative is ESA’s Atlantic Regional Initiative, which seeks to foster engagement with Atlantic stakeholders and develop strategies for mid- to long-term growth across various maritime sectors. Under this framework, GMV, in collaboration with the National Oceanography Centre, University College Cork, World Ocean Council, and SpaceSur, formed a multidisciplinary consortium to implement ESA’s Blue Economy: Innovation Clusters, Atlantic Natural Resources Management, and Maritime Spatial Planning project. This project leveraged the consortium’s expertise in Earth Observation (EO) and Information and Communications Technology (ICT) to design and implement targeted services addressing user and stakeholder needs across key marine-related domains. These services tackled crucial topics, including:
• Flood and Coastal Erosion Risk Management: Mapping intertidal topography using SAR imagery.
• Marine Renewable Energy: Employing GNSS interferometric reflectometry to remotely sense sea state.
• Marine Pollution Management: Detecting marine plastics and spills using optical and SAR imagery.
These services exemplified the immense potential of EO-based solutions in addressing complex maritime challenges.
By engaging stakeholders, the project showcased opportunities in the space domain, fostering recognition of EO's role in maritime innovation and encouraging further exploration beyond the project’s scope. The development of sustainable services within expanding innovation ecosystems was a cornerstone of the project, with Technology Innovation Clusters serving as catalysts for applications development across the Atlantic region. In line with fostering long-term growth, the project also focused on future road-mapping to enhance the uptake of EO data in the Blue Economy sector. Activities included the creation of a roadmap for building an Atlantic Cluster of Clusters, aimed at fostering ideas, innovations, and novel EO applications to support maritime markets, communities, and stakeholders. This roadmap provides a framework for leveraging EO to harness the full maritime potential of the Atlantic region. Additionally, the consortium prepared a report on Maritime Spatial Planning (MSP), focusing on transboundary and ecosystem-scale data integration. The report examined challenges and opportunities in aligning diverse data sources to maximize synergies and enable sustainable MSP practices. By offering insights into the effective use of data for MSP, the report supports efforts to promote a sustainable and resilient Blue Economy. Through these activities, the project underscores the transformative potential of EO technologies in advancing marine planning and management while driving sustainable growth in the Blue Economy sector.
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G2)

Presentation: Enabling Sustainable Fisheries Management Through Earth Observation Technologies

Authors: Pedro Ribeiro, Nuno Grosso, Miguel Santos, Ana Ferreira
Affiliations: Deimos, Instituto Português do Mar e da Atmosfera, Centro de Ciências do Mar - Universidade Algarve, GEO Blue Planet
Fisheries represent a critical sector for exploiting marine resources, significantly influencing marine ecosystems, economic development, and global food security. This highly dynamic and complex activity operates within an evolving ecosystem, requiring diverse data inputs to inform management, regulatory authorities, operators, and consumers. However, challenges such as overfishing, the impacts of climate change, and illegal, unreported, and unregulated (IUU) fishing continue to hinder effective fisheries management, often due to gaps or poor-quality data on fish catches, fishing locations, and their respective environmental conditions. Such challenges are further intensified by the global climate and biodiversity crisis. Dependence on overexploited fish stocks, coupled with rising sea temperatures and ocean acidification, increases risks to marine ecosystems and the economic stability of the fishing industry. This not only threatens the natural marine ecosystem but also the viability of the fishing sector, thus posing a risk for countries where it plays a key role in both the national economy and food security, such as small island developing states (SIDS). Addressing these challenges requires robust tools to enhance decision-making processes and promote sustainable practices. Earth Observation (EO) technologies can potentially be transformative enablers in fisheries management. EO data, such as sea surface temperature (SST), chlorophyll-a concentration, and the distribution of ocean thermal fronts, can support spatial mapping of fishing grounds and provide environmental information critical to stock assessment and sustainability. When integrated with tools like the Automatic Identification System (AIS) and Vessel Monitoring System (VMS), EO provides insights into fishing pressure, vessel behaviour, and interactions between fishing activity and environmental parameters.
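As a toy illustration of how SST and chlorophyll-a fields can be combined to flag thermal fronts and candidate fishing areas (a deliberately simplified sketch, not the operational CFA/PFA algorithms; the grid spacing, threshold and median-chlorophyll rule are our own assumptions):

```python
import numpy as np

# Illustrative sketch: detect thermal fronts as pixels where the
# horizontal SST gradient exceeds a threshold, then restrict to the
# low-chlorophyll side as a crude stand-in for a front-adjacent
# potential-fishing-area rule. All parameters are assumptions.

def front_mask(sst: np.ndarray, dx_km: float = 1.0,
               thresh_c_per_km: float = 0.1) -> np.ndarray:
    """True where the SST gradient magnitude exceeds the threshold."""
    gy, gx = np.gradient(sst, dx_km)
    return np.hypot(gx, gy) > thresh_c_per_km

def potential_fishing_area(sst: np.ndarray, chl: np.ndarray, **kw) -> np.ndarray:
    """Front pixels restricted to the below-median chlorophyll side."""
    return front_mask(sst, **kw) & (chl < np.median(chl))

# Synthetic scene: a sharp east-west temperature step with a
# west-to-east chlorophyll ramp.
sst = np.where(np.arange(100)[None, :] < 50, 18.0, 21.0) * np.ones((100, 1))
chl = np.linspace(0.1, 1.0, 100)[None, :] * np.ones((100, 1))
fronts = front_mask(sst)
pfa = potential_fishing_area(sst, chl)
```

In an operational setting the threshold, buffer geometry and chlorophyll criterion would be tuned per region and species, which is precisely the kind of refinement the services described here iterate on.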
In the past decade, Deimos and IPMA have collaborated to develop EO-based solutions for sustainable fisheries management. These initiatives range from conceptual demonstrations to operational services, offering advanced capabilities to monitor and manage marine resources effectively. This started with a pilot application developed within the E-SHAPE project, a unique initiative that brought together providers of EO-based services to develop services in close cooperation with key users. Its main goal was to strengthen knowledge of the fish supply chain by developing an operational EO service to monitor the dynamics of vessels operating in Northeast Atlantic waters (Chapela et al., 2022; Gaspar et al., 2022). Using AIS data from the 2012-2018 period recorded within the Portuguese EEZ, the Fishing Activity Indicators (FAI) application was implemented to characterize and quantify the fishing pressure of these fleets on marine ecosystems, and to investigate the relationship of fishermen's behaviour with environmental parameters. The service generates tracks of particular fishing trips, from individual vessels or fleets of interest, identifying each position as fishing or not fishing. It also provides a fishing footprint, by aggregating time spent in fishing activities over a geographical grid. Building on this foundation, the NextOcean project launched in 2021 to co-design six operational, commercially oriented EO-based services for sustainable fisheries and aquaculture (Ribeiro et al., 2023; Lages et al., 2023). Among these was FAI, but also Characterization of Fishing Areas (CFA), which identifies potential fish aggregation zones using EO-derived environmental data. SST data is used by the algorithm to identify ocean thermal fronts, typically associated with high biomass productivity, making them ideal feeding grounds for pelagic fish. Potential fishing areas (PFA) for tuna are also identified employing an algorithm developed by Santos et al.
(2006), corresponding to a buffer defined in relation to ocean fronts, always on the side with the lowest chlorophyll-a concentration. NextOcean has evolved from a research project to a Digital Marketplace, implemented as a commercial platform that provides a set of proven services for fleet monitoring, identification of potential fishing grounds and detection of IUU fishing, supporting activities from different actors in the fisheries sector. Both FAI and CFA are now operational in commercial environments, with ongoing refinements to address specific user requirements. While CFA was initially developed to provide forecasts, an historical analysis mode was also implemented to answer one of such needs. Further evolution is also being planned in new research projects, exploring the potential to expand the range of its applicability for these families of services. For instance, FAI holds significant potential to support the establishment of Marine Protected Areas, contributing to global environmental conservation efforts. Similarly, CFA continues to evolve, incorporating new research findings to enhance its utility for fisheries management and policy-making. We plan to showcase different practical use cases of such services, which have been fostering their evolution. Through these advancements, EO technologies are playing a pivotal role in fostering a sustainable blue economy, aligning with the overarching goals of LPS25: transitioning from observation to actionable climate and environmental sustainability solutions. References: Chapela, M., Ribeiro, P., Mendes, G., S., Silva, R., Henriques, V., Gaspar, P., Grosso, N., Campos, A. (2022, May 23-27). Powering the data revolution for transformational change in the supply chain management. [Paper presentation]. Living Planet Symposium, Bonn, Germany. Gaspar, P., Chapela, M., Silva, R., Mendes, G., Cordeiro, D., Grosso, N., ... & Campos, A. (2022). 
Spatial characterization of pelagic fisheries in the Northeast Atlantic: The e-shape pilot “Monitoring Fishing Activity”. Trends in Maritime Technology and Engineering, 581-585. Lages, L. F., Catarino, N., Gomes, E., Toh, P., Reis-Marques, C., Mohr, M., ... & Schmidt, G. (2023). Solutions for the commercialization challenges of Horizon Europe and earth observation consortia: co-creation, innovation, decision-making, tech-transfer, and sustainability actions. Electronic Commerce Research, 23(3), 1621-1663. Ribeiro, P., Valente, A., Chapela, M., Ponte, A., Perez, D., Grosso, N., Miller, P., Calmont, H., Toh, P., Ashby, D., Santos, A. M., Ferreira, Ana., Sutton, M., Cauzac, J., Giraud, S., Lucas, M. (2023). NextOcean: Earth Observation services for sustainable fisheries and aquaculture. In: CONGRESSO BRASILEIRO DE AQUICULTURA E BIOLOGIA AQUÁTICA, 10., 2023, Florianópolis. Florianópolis: Aquabio, 2023. Santos, AMP, Fiuza, AFG, Laurs, RM. 2006. Influence of SST on catches of swordfish and tuna in the Portuguese domestic longline fishery. International Journal of Remote Sensing, 27.15: 3131-3152, doi: 10.1080/01431160600567811
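A minimal sketch of the front-and-buffer idea behind the PFA product, on a toy 1-D transect (all grids and thresholds here are invented for illustration; this is not the operational Santos et al. algorithm):

```python
import numpy as np

# Toy 1-D transect: SST (deg C) and chlorophyll-a (mg/m^3) along longitude.
sst = np.array([20.0, 20.1, 20.2, 22.5, 22.6, 22.7, 22.8])
chl = np.array([0.9, 0.8, 0.85, 0.3, 0.25, 0.2, 0.22])

# 1) Detect a thermal front where the SST gradient is strongest and exceeds
#    a (hypothetical) front threshold.
grad = np.abs(np.diff(sst))
front_idx = int(np.argmax(grad))      # strongest gradient between two cells
assert grad[front_idx] > 1.0          # hypothetical threshold (deg C per cell)

# 2) Build a buffer on the side of the front with the LOWER chlorophyll-a,
#    following the idea that tuna PFAs sit on the oligotrophic side.
left_chl = chl[:front_idx + 1].mean()
right_chl = chl[front_idx + 1:].mean()
buffer_cells = 2                      # hypothetical buffer width
if right_chl < left_chl:
    pfa = range(front_idx + 1, min(front_idx + 1 + buffer_cells, len(sst)))
else:
    pfa = range(max(front_idx + 1 - buffer_cells, 0), front_idx + 1)

print("front between cells", front_idx, "and", front_idx + 1)
print("PFA cells:", list(pfa))
```

On real data the same two steps would operate on 2-D SST and chlorophyll-a grids, with front detection and buffer geometry defined per the published algorithm.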
Add to Google Calendar

Thursday 26 June 11:30 - 13:00 (Hall G2)

Presentation: On-going ESA activity in support of a Sustainable Blue Economy: the DIOMEDEO, EO4SA, DEEPBLUE and BLUERISM projects

Authors: Lasse Petersson, Dr Marie-Helene RIO, Dr. Roberto Sabia, Kim Knauer, Pedro Ribeiro, Vera Gastal
Affiliations: ESA, EoAP, DEIMOS, NERSC, CLS
Add to Google Calendar

Thursday 26 June 11:37 - 11:57 (EO Arena)

Demo: F.05.12 DEMO - What are the top reasons for investing into Copernicus? - Explore them through the new interactive presentation

Copernicus is a true European success story. It is used by public authorities, businesses, and citizens alike, in Europe and around the world. However, long-term investments are crucial to maintaining the most advanced Earth observation system in the world. Hence, ESA has developed a new presentation summarising the top reasons for investing in Copernicus.
The presentation allows the audience to browse a rich portfolio of user stories and highlights Copernicus' past achievements, stimulating reflection on how Copernicus is benefiting citizens, the environment and economic growth. It highlights Copernicus' contribution to European policy making and how Copernicus is strengthening Europe's resilience and international action against climate change.
Testimonials from industry, researchers, public authorities, and civil society are at the core of the presentation, offering compelling evidence to decision makers regarding the importance of Copernicus.
The booth will provide the audience with the opportunity to contribute their own stories and findings.

Speakers:


  • Hendrik Hamacher - ESA
Add to Google Calendar

Thursday 26 June 12:00 - 12:20 (EO Arena)

Demo: D.04.18 DEMO - InSAR processing with OpenEO in CDSE

The demo session will showcase how Sentinel-1 InSAR processing can be efficiently performed using OpenEO within the Copernicus Data Space Ecosystem (CDSE). This session will guide participants through a cloud-based workflow for accessing, processing, and analyzing Sentinel-1 SLC data, demonstrating how OpenEO can streamline large-scale InSAR applications.
The session will begin with an introduction to OpenEO’s capabilities and its integration with CDSE, followed by an overview of Sentinel-1 SLC data and its structure. It will demonstrate how to access the Sentinel-1 SLC burst catalogue, apply filters based on user-defined parameters such as time range, region of interest, and other criteria, visualize the footprints of queried bursts, and generate an associated Sentinel-1 acquisition calendar. It will be shown how this information can help users to identify the most suitable Sentinel-1 acquisition geometry for their specific use case. The demo will then illustrate how to compute and visualize the perpendicular baseline over all the Sentinel-1 data acquired with the selected acquisition geometry, aiding in the selection of optimal InSAR pairs based on specific processing requirements. Following this, participants will learn how to generate a stack of coregistered Sentinel-1 SLC images and execute key InSAR processing steps, including computing time-series of InSAR coherence for the selected pairs and generating a stack of interferograms, which form the basis for further analysis of surface deformation or other geophysical phenomena.
This demo will provide valuable insights into leveraging cloud-based Earth Observation services for SAR data processing, enabling more efficient, modular and scalable InSAR applications in research and operational scenarios.
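As a rough, standalone illustration of the pair-selection logic described above (the dates and baselines below are invented, and the demo itself performs this through the openEO API in CDSE rather than with this code):

```python
import itertools
from datetime import date

# Hypothetical Sentinel-1 acquisitions: (date, perpendicular baseline in
# metres relative to a common reference orbit).
acqs = [
    (date(2024, 1, 3), 0.0),
    (date(2024, 1, 15), 45.0),
    (date(2024, 1, 27), -110.0),
    (date(2024, 2, 8), 30.0),
]

MAX_BPERP = 100.0   # metres: limit geometric decorrelation
MAX_BTEMP = 24      # days: limit temporal decorrelation

def good_pairs(acquisitions):
    """Return candidate InSAR pairs whose perpendicular and temporal
    baselines both fall within the chosen thresholds."""
    pairs = []
    for (d1, b1), (d2, b2) in itertools.combinations(acquisitions, 2):
        b_perp = abs(b2 - b1)
        b_temp = abs((d2 - d1).days)
        if b_perp <= MAX_BPERP and b_temp <= MAX_BTEMP:
            pairs.append((d1, d2, b_perp, b_temp))
    return pairs

for d1, d2, bp, bt in good_pairs(acqs):
    print(f"{d1} - {d2}: Bperp={bp:.0f} m, Btemp={bt} d")
```

The thresholds would in practice depend on the application (e.g. coherence time series vs. deformation mapping) and on terrain and land cover.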

Speakers:


  • Mattia Callegari - Eurac research
  • Michele Claus - Eurac research
  • Jeroen Dries - VITO
Add to Google Calendar

Thursday 26 June 12:22 - 12:42 (EO Arena)

Demo: D.01.12 DEMO - Accessing Earth Observation Data through the HIGHWAY Service

The HIGHWAY service, developed within the Agency, provides seamless access to Analysis-Ready Cloud-Optimised (ARCO) Earth Observation data (SMOS, CryoSat, EarthCARE, SWARM, Proba-V, and Aeolus). This demonstration will guide users through the data discovery and retrieval capabilities, showcasing how HIGHWAY simplifies access to EO datasets for researchers, decision-makers, and Digital Twins.
Participants will be introduced to the HIGHWAY data services, including Catalogue, Data Access, and Advanced Data Access. Together, these services allow users to search, discover and access EO data in native and ARCO formats. As part of the demonstration, we will introduce participants to the ARCO format, explaining its structure, its benefits, and how it enhances efficiency in cloud-based EO data processing.
The session will highlight how users can interact with the data both through the website and via APIs.
Furthermore, the demonstration will cover authentication mechanisms and security protocols ensuring controlled and efficient data access. Through real-time examples, attendees will gain hands-on experience in navigating the HIGHWAY platform, optimizing data queries, and integrating the service with their digital twin and EO analysis workflows.
This session aims to enhance users' understanding of HIGHWAY's capabilities, ensuring they can efficiently access, process, and analyse EO data to support scientific and operational applications within the DestinE ecosystem.

Speakers:


  • Henry de Waziers - HIGHWAY service manager
Add to Google Calendar

Thursday 26 June 12:45 - 13:05 (EO Arena)

Demo: D.04.31 DEMO - NoR Updates and Road Map - session 4

This session will showcase the ESA Network of Resources initiative current status, major updates, and road map.

Speaker:


  • Francesco Barchetta - Starion for ESA
Add to Google Calendar

Thursday 26 June 13:00 - 13:45 (Frontiers Agora)

Session: C.01.19 Boosting industrial competitiveness with standardisation paving the way to EO Constellation

This Agora will address the challenges and opportunities of scaling up recurring industrial products (e.g. spacecraft platform equipment with common EO payload interfaces) in the sweet spot of mid-size (100 to 500 kg) satellites, which have great potential for future EO constellations. The objective is to promote the competitiveness of the European supply chain and of European mid-size satellite integrators for both the institutional and commercial markets, by fostering faster and more affordable integration of new EO payloads with recurring mid-size spacecraft platforms that are as payload-agnostic as possible.

This Agora will invite integrators of mid-size satellites to present their expertise and their future needs for boosting industrial competitiveness and growing European EO constellations.

Speakers:


  • Miguel Angel Palacios Lazaro - ADS
  • Valerio di Tana - Argotec
  • Ann-Theres Schulz - OHB
  • Florian Deconinck - Open Cosmos
  • Ornella Bombaci - Thales Alenia Space
Add to Google Calendar

Thursday 26 June 13:00 - 13:45 (ESA Agora)

Session: D.06.04 European strategy towards future interoperability

The European strategy towards interoperability is at the heart of advancing Earth Observation (EO) and geospatial capabilities, with far-reaching implications for industry, research, and society. This Agora session will bring together a diverse panel of experts, including representatives from the European Space Agency (ESA), the European Commission, leading organizations such as the Open Geospatial Consortium (OGC) and the Committee on Earth Observation Satellites (CEOS), and representatives from EU dataspace communities. Together, they will discuss the vision and the challenges of fostering interoperability as an open science and open innovation framework that empowers researchers and businesses alike.

This session will explore the pivotal role of open standards in achieving interoperability, across emerging technologies in the EO and geospatial domain, such as edge computing in space, AI inference and training, High Performance Computing (HPC) and Quantum computing. Panellists will highlight key European and international standardization initiatives, showcasing how these improve open science and industry competitiveness in digital innovation. Discussions will focus on how standards are not only technical specifications, but strategic tools to promote industry growth and support sustainable development.

As European countries seek to strengthen their global position as leaders in Earth Observation, this session aims to develop and promote a common vision for a European strategy that champions standardization as a means to accelerate scientific advancement and commercial opportunity for key emerging technologies in the EO domain. This Agora represents a collaborative step towards shaping policies that will solidify Europe's leadership in EO and geospatial innovation.

Speakers:


  • Francesca Piatto - EARSC
  • Peter Strobl - EC-JRC
  • Ingo Simonis - OGC
  • Catherine Akinyi - KappaZeta
  • Damiano Guerrucci - ESA
Add to Google Calendar

Thursday 26 June 13:00 - 13:45 (Nexus Agora)

Session: E.01.13 One Health and Earth Observation

Empowering users to address health resilience challenges with advanced technology is nowadays increasingly possible thanks to the availability of Earth Observation (EO) satellites and data, which offer remarkable opportunities for the health sector, particularly in disease surveillance, risk assessment, and supporting evidence-based decision-making. Despite its potential, the technical complexity of EO data often acts as a barrier for non-expert users, such as public health practitioners and humanitarian workers, limiting its widespread application.

This panel discussion is designed to explore the pivotal role of EO-based online platforms tailored to the needs of non-experts, democratizing access to sophisticated EO data and insights, enabling public health officials, NGOs, and researchers to incorporate EO data seamlessly into their operations without requiring geospatial expertise.

Through intuitive interfaces and actionable outputs, these platforms facilitate the execution of advanced EO-based models, empowering users to tackle urgent health challenges such as monitoring disease outbreaks, assessing environmental health risks, and planning targeted interventions. By integrating EO data into their workflows, users can enhance their capacity to deliver timely and effective responses, especially in vulnerable and underserved communities.

This agora will draw on insights from the EO4Health activities at ESA and feature a panel of experts in EO data visualization, processing, and epidemiology. The discussions aim to highlight best practices and foster collaboration between the EO and public health communities. In doing so, the session also seeks to help non-expert users and NGOs leverage digital platforms effectively to bridge the gap between cutting-edge technology and pressing health needs. Ultimately, the goal is to inspire future innovations and identify unresolved challenges to ensure that EO-based, epidemiologically relevant data becomes user-ready and accessible to all, empowering users to make a meaningful impact on global health resilience.

Speakers:


  • ESA introduction to EO4Health by Stefano Ferretti and Francesco Barchetta
  • Nandini Menon - Deputy Director, Nansen Environmental Research Centre (India)
  • Carla Ippoliti - Statistics and GIS Department at Italian National Zooprofilactic Institute (IZS)
  • Caroline PERRIN - Executive Director, Geneva Digital Health Hub (gdhub)
Add to Google Calendar

Thursday 26 June 13:07 - 13:27 (EO Arena)

Demo: D.04.32 DEMO - KForge: enable close-to-real-time EO for all - from a demonstrator to a scalable European capability

#cloud-native

European institutions are calling for a "big bang" in space strategy. The focus is on boosting the competitiveness of industry, fostering dual-use innovation and leveraging commercial capabilities, and KForge is a concrete step forward. It is the backbone of the ESA Close-to-Real-Time Ship Detection Platform demonstrator. KForge is a secure, cloud-native PaaS solution that aims to radically simplify EO data processing. It allows mission operators to land data directly from ground station networks into a pre-configured cloud environment, ready for near real-time analytics across all application domains, from environmental and scientific to security and defence.

KForge contributes to the effort to lower the technical and economic barriers to EO, enables broader access to data and accelerates use case development. From climate monitoring to disaster response and situational awareness, access to cost-optimised, timely data is critical. Designed with sovereignty and cost-efficiency in mind, the platform is built to scale beyond its demonstrator role. Future deployments will support institutional missions meeting European sovereign cloud environment requirements, offering a robust and modular processing infrastructure fit for New Space and legacy missions alike.

KForge is a practical enabler of Europe’s strategic autonomy, demonstrating how commercial innovation can empower institutional goals while democratising the benefits of EO.

Speakers:


  • Romain Poly - KSAT
Add to Google Calendar

Thursday 26 June 13:30 - 13:50 (EO Arena)

Demo: C.06.15 DEMO - InSAR Time Series Benchmark Dataset Creation by a new Open-Source Package (AlignSAR)

#zarr

The demonstration will proceed as follows:

(1) Introduce the AlignSAR project:
The AlignSAR package is a new tool for creating SAR signatures. It is open-source software that can provide datacubes with InSAR time series signatures. The primary objectives of AlignSAR are: (1) to provide a full and FAIR-guided InSAR time series datacube; and (2) to containerise the entire workflow so that it is easily accessible to the SAR community. The utility of such datasets for ML applications is evaluated using the example of deformation change detection, recognizing spatial and temporal changes in InSAR signals.

(2) Discuss the implementation of the solution:
The AlignSAR package is demonstrated on one use case, Campi Flegrei, a volcanic area in Italy. The main workflow is separated into three stages: (a) downloading and processing interferograms using LiCSBAS (LiC Small Baseline Subset); (b) spatial and temporal SAR signature extraction and datacube production; and (c) detecting deformation changes in the generated datacubes using LiCSAlert. The AlignSAR package uses the LiCSBAS and LiCSAlert tools to generate interferograms and identify anomalies in time series signatures. Moreover, additional extensions are discussed that utilize the capabilities of these tools to achieve the project's goals.

(3) Audience questions (Q&A)

We conclude that the AlignSAR package presented here is an extension of the previous version, which focused on basic SAR signature extraction. Together, these tools provide a comprehensive and consistent procedure for creating SAR datasets in standard formats such as Zarr. The resulting datasets can be used for various ML applications created by end users, such as change detection tasks or land use classification. All developed tools and sample datasets are available in the AlignSAR GitHub repository (https://github.com/alignsar/alignsar).
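The change-detection step can be illustrated with a minimal, LiCSAlert-style residual test on a synthetic displacement series (all numbers below are invented; the actual LiCSAlert algorithm, which works on independent-component time courses, is considerably more sophisticated):

```python
import numpy as np

# Synthetic InSAR displacement time series (mm): steady subsidence plus a
# deterministic "noise" term, with an accelerated deformation episode
# injected from epoch 30 onward. All values are illustrative.
t = np.arange(40, dtype=float)
disp = -0.5 * t + 0.2 * np.sin(1.7 * t)
disp[30:] -= 2.0 * (t[30:] - t[29])   # injected change in deformation rate

# Fit a linear model to the baseline period (epochs 0-29), then flag epochs
# whose residual exceeds 5x the baseline scatter.
coef = np.polyfit(t[:30], disp[:30], deg=1)
resid = disp - np.polyval(coef, t)
sigma = resid[:30].std()
flags = np.abs(resid) > 5 * sigma

print("first flagged epoch:", int(np.argmax(flags)))
```

The same fit-baseline-and-flag-residuals pattern generalises to per-pixel or per-component time series in a datacube.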

Speakers:


  • Milan Lazecky - University of Leeds
  • Zachary Kiernan - Starion Italia S.p.A
Add to Google Calendar

Thursday 26 June 13:52 - 14:12 (EO Arena)

Demo: D.03.19 DEMO - QField in Practice

QField is an open-source application for mobile devices and a solution for seamless fieldwork. It is compatible with all Android and iOS devices and available on both Google Play and the App Store. In this demonstration, we will walk through a typical workflow for setting up your fieldwork, from the office to the field. We will outline the benefits of using QFieldCloud and what other options users have to customise their field experience according to their needs.

Speakers:


  • Ms. Berit Mohr - Opengis
  • Mr. Marco Bernasocchi - Opengis
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.49/0.50)

Session: A.06.03 4D Ionosphere – where we stand and where we are going

This session wants to ignite a discussion in the community about where we stand and where we are going in the context of the ESA "4D Ionosphere" initiative (ESA’s Solid & Magnetic Earth Science Cluster) which aims to advance the use of satellite data, particularly from the Swarm mission, along with additional ground-based and other space assets and space missions, to contribute to the development of advanced dynamic models of Earth’s ionosphere, and its interactions with the magnetosphere, the lower atmosphere, and the other components of the Earth system.
The session is open to all scientists and initiatives that can help move the boundary forward and that will help define the next phase, following a community approach and fostering networking and collaborative research.
The discussion can start from five research projects, kicked-off at the beginning of 2024, belonging to the ESA "4D Ionosphere" initiative. They jointly address different aspects of the ionosphere and, in particular, four main scientific challenges: 1) enhance the characterisation of the quiet ionosphere (QUID-REGIS); 2) enhance the observation and modelling of dynamic processes in the ionosphere such as irregularities, dynamics, and predictive capabilities – space weather (VIP-Dynamic); 3) enhance knowledge of the ionosphere – upper atmosphere/thermosphere coupling with major focus on Joule Heating (JOIN); 4) improve the way to observe and model the ionosphere – magnetosphere coupling (FBURST). In addition, a fifth activity explores the potential of deriving ocean circulation information from geo-magnetic Swarm data (SfOD).

Speakers:


  • Elisabetta Iorfida - ESA
  • Carsten Schmidt - German Aerospace Centre, DLR
  • Wojciech Miloch - University of Oslo
  • Eelco Doornbos - Royal Netherlands Meteorological Institute, KNMI
  • Anita Aikio - University of Oulu
  • Stephan Buchert - Swedish Institute of Space Physics, IRF
  • Chris Finlay - Technical University of Denmark, DTU
  • Anja Strømme - ESA
  • Diego Fernandez Prieto - ESA
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.61/1.62)

Session: A.05.10 The ESA-NASA Arctic Methane Permafrost Challenge (AMPAC) community past and future review

What are the current methane emissions in the Arctic and what are the sources and sinks? Will the fraction of the carbon permafrost feedback increase in the future? Why do we see rapid changes in permafrost not followed by a commensurate net rise in methane emissions? What can we learn from the increasing number of satellite observations over the Arctic high latitudes? The ESA-NASA Arctic Methane Permafrost Challenge (AMPAC), a transatlantic partnership, was established in 2019 to help answer some of these questions.
The primary objectives of this invited agora are:
• With the wide AMPAC community, review and consolidate the progress made in the AMPAC project, highlighting achievements from both sides of the Atlantic.
• Collaborate and plan the creation of a detailed community paper that encapsulates the findings, methodologies, and implications of AMPAC, to provide a valuable resource for future research and applications.
• Plan next actions, and extensions.

The proposed session will be structured around key thematic areas, with each segment designed to facilitate comprehensive discussions and collaborative efforts. The session will include:
• Opening Remarks: Introduction and objectives of the session.
• AMPAC Progress Review: Presentations on the current status of various components of AMPAC, highlighting achievements and remaining challenges.
• Interactive Workshops: Breakout sessions focusing on specific aspects of AMPAC, promoting in-depth discussions and problem-solving.
• Plenary Discussion: A collective review of the breakout session outcomes, synthesising insights and identifying action points.
• Community Paper Drafting: Collaborative drafting of the community paper, assigning roles and responsibilities, and setting timelines for completion.
• Closing Remarks: Summarising the session achievements and outlining the next steps to ensure the successful finalisation of AMPAC and the community paper.

Agenda


Introduction to AMPAC


  • Edward Malina - ESA

Science Talks


Where is all the Arctic methane?


  • Charles Miller - NASA/JPL/Caltech

Assessing Seasonal Wetness Variability Across the Arctic Using In-Situ and Satellite-Based Soil Moisture Datasets


  • Sree Ram Radha Krishnan - b.geos GmbH

Environmental drivers constraining the seasonal variability of satellite-observed methane at Northern high latitudes


  • Ella Kivimäki - FMI

Advancing and Assessing Satellite Observations of Methane in the Arctic – Overview of MethaneCAMP Project Results and Outlook for Future Work


  • Johanna Tamminen - FMI

Future Arctic Campaigns


  • Cyril Crevoisier - CNRS/LMD

Round Table Keynote | Why was AMPAC needed, what is the problem?


Panellists


  • Annett Bartsch - b.Geos
  • Johanna Tamminen - FMI
  • Chip Miller - NASA/JPL/Caltech
  • Ben Poulter - Spark Climate Solutions

Open Discussion with attendees


Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G1)

Session: D.02.06 Foundation Models for Earth Observation: Current solutions with less labelled data to improve environment monitoring and future perspectives to revolutionize geospatial data discovery and utilization - PART 2

This session will delve into cutting-edge developments in Earth Observation (EO) Foundation Models (FM). It will discuss current solutions, future perspectives and the implications of these models for the EO community. The session will also explore new possibilities enabled by multi-modal models, zero-shot applications, self-supervised learning, big data and large pre-trained models, examining their potential to transform EO data analysis and applications.

Topics:
- Sensor independence: FMs can process data from various sensors, including multi-/hyper-spectral, SAR, LiDAR, Very High Resolution (VHR) satellite data and more, enabling a holistic analysis of Earth's dynamics.
- Benchmarking and Evaluating FMs: Establishing standardised evaluation metrics and fair benchmarks to assess the performance and capabilities of FMs in processing EO data, ensuring reliability and efficiency.
- Multimodality: FMs can adeptly handle diverse data modalities such as text, video and imagery, offering new approaches to EO data analysis and interpretation without requiring extensive labelled datasets which are rarely available in environmental applications (e.g., land, forestry, agriculture, water/ice or atmospheric phenomena that can be monitored with EO data).
- Fine-tuning FMs and Self-Supervised Learning (SSL) for downstream tasks with an emphasis on environment monitoring applications currently under-represented in EO benchmarks, such as biophysical variable estimation or early warnings/ anomaly detection in satellite image time-series.
- Big data: Over the past few decades, the availability of EO data has increased, providing unprecedented coverage of the Earth’s surface and atmosphere. Modern Earth System Models (ESMs), which operate at high resolutions in both space and time to simulate the evolution of Earth system components and predict the future state of the climate, estimate air pollution, and more, generate petabytes of data per simulated day. Data output and storage have already become a major bottleneck for high-resolution climate modeling. To address these challenges, approaches combining data engineering, AI, and information theory have shown great promise for various downstream applications. This session invites contributions on computational methodologies for engineering embedding representations to compress, index, tokenize, and fuse geospatial data. By focusing on topics such as embedding techniques, vector databases for data exploration, cross-modal alignment, and embedding compression, this session will provide insights into how these technologies can be applied to enhance data accessibility, sharing, and analysis in EO and ESM applications. These embeddings may facilitate efficient data transmission, data exploration/search, and cross-modal data alignment and reconstruction, such as converting vision to text or deriving surface reflectance from SAR data.
- Implications of FMs for the Community: Understanding the potential societal, environmental and economic impacts of implementing FMs in EO applications, fostering informed decision-making and resource management
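As a toy illustration of the embedding-compression topic above, a truncated SVD can shrink a low-rank embedding matrix substantially (sizes, rank and data here are all hypothetical, chosen only to make the effect visible):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "EO embeddings": 1000 patches x 256 dims, but with only 8 effective
# degrees of freedom (model embeddings often have low intrinsic rank).
latent = rng.normal(size=(1000, 8))
mixing = rng.normal(size=(8, 256))
X = latent @ mixing

# Compress with a rank-8 truncated SVD: store U*S (1000x8) and Vt (8x256)
# instead of the full 1000x256 matrix.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 8
X_hat = (U[:, :k] * S[:k]) @ Vt[:k]

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
ratio = X.size / (U[:, :k].size + S[:k].size + Vt[:k].size)
print(f"relative reconstruction error: {rel_err:.2e}")
print(f"storage ratio: {ratio:.1f}x")
```

Real embedding-compression pipelines would typically learn the projection (or use quantisation) rather than recompute an SVD per batch, but the storage/fidelity trade-off is the same.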
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G1)

Presentation: From Edge to Insights: Transforming Earth Observation with Lightweight Foundation Models and Embeddings-as-a-Service

Authors: Tanja Van Achteren, Bart Beusen, Wouter Dierckx, Marian-Daniel Iordache, Xenia Ivashkovych, Lisa Landuyt, Stefan Livens, Andreas Luyts, Dirk Nuyts, Nick
Affiliations: Vito
Earth Observation (EO) missions face significant challenges in managing the exponential growth of data volumes. From limited downlink bandwidth to the complexities of real-time analytics and storage, conventional approaches are increasingly inadequate. To address these issues, we propose an integrated ecosystem that leverages lightweight, customized foundation models to optimize workflows from edge computing to embeddings-as-a-service in the ground segment. This approach enables streamlined operations while supporting real-time applications and interactive analytics through a shared foundational framework. Our methodology integrates edge-based data compression, onboard inferencing, and efficient ground processing, all utilizing shared or tailored foundational models. The compressed embeddings extend their utility beyond transmission by serving as inputs for downstream applications. In the ground segment, precomputed onboard embeddings enable few-shot learning for tasks such as search and retrieval, classification, segmentation, and object detection. This approach minimizes the need for extensive storage and computational resources, accelerating data transfer and analysis. Furthermore, users can upload lightweight models to the platform for direct onboard inferencing, allowing real-time prioritization of operations and rapid deployment during crises. A key innovation is the integration of embeddings-as-a-service within EO portals. This democratizes access to EO model development by enabling users to interactively refine and train models through feedback and iterative learning. Such a setup also facilitates near real-time applications, particularly in crisis management scenarios. For example, embeddings can be directly used for change detection and feature prioritization, while dedicated onboard models can infer critical information to guide disaster response.

The ability to swiftly train models for specific features of interest, coupled with onboard processing of compressed embeddings, ensures timely and actionable insights during emergencies. The foundational framework is being developed through a series of recently completed and ongoing projects, steadily extending the ecosystem in terms of new model architectures, data types, and pipeline integration:
• CORSA Foundation Model [1, 2]: Delivers high compression ratios (up to 100:1) on hyperspectral and multispectral data, preserving image quality with minimal degradation. This capability reduces downlink costs and enables broader observation area coverage by reallocating bandwidth efficiently.
• MOVIQ [3] and SmartConnect [4]: Demonstrate the utility of CORSA-based compression for hyperspectral data to alleviate downlink bottlenecks. These projects also use compressed embeddings as backbones for lightweight models applied to land cover, change detection, and object detection.
• Geo.Informed [5]: Leverages Sentinel-1 and Sentinel-2 data for environmental applications using machine and deep learning. It includes an interactive web viewer and annotation capability, enabling user feedback and iterative learning.
• GRB Digital Flanders [6]: Supports the maintenance of a comprehensive digital twin of Flanders by leveraging a dedicated foundation model trained on 25 cm multispectral airborne imagery. It achieves rapid change detection and land cover monitoring through interactive few-shot learning techniques.
• Flanders AI research program [7]: Builds a fast-access multitemporal embeddings vector database of Flanders to accelerate the development of a wide range of geospatial applications supporting environmental policymakers.
• SafePlace Project [8]: Develops a customized foundation model for very high-resolution satellite imagery to support flood management and building damage assessment, leveraging precomputed embeddings for rapid before/after analysis.
• Flows Project [9]: Utilizes foundational models across different modalities and platforms to create the most accurate and complete situational overviews. This includes satellite-based optical/SAR data fusion and onboard analytics solutions that enable drones to autonomously detect flooded areas and potential victims in real time.

This integrated ecosystem exemplifies the transformative potential of lightweight foundational models in EO. By addressing data transmission, storage, and real-time analytics holistically, it accelerates decision-making, particularly in time-sensitive scenarios such as disaster management. These advancements align with our vision for sustainable and innovative EO solutions, showcasing a future where data bottlenecks are overcome, operational workflows are streamlined, and EO insights are democratized for all stakeholders.

Keywords: Earth Observation, Foundation Models, Hyperspectral Compression, Edge Computing, Few-Shot Learning, Embeddings-as-a-Service, Crisis Management.

References:
1. B. Beusen, A. Luyts, X. Ivashkovych, and T. Van Achteren, Lightweight and Efficient: A Family of Multimodal Earth Observation Foundation Models. IGARSS 2024.
2. N. Witvrouwen, X. Ivashkovych, B. Beusen, A. Luyts, and T. Van Achteren, Implementation of CORSA on Nvidia Jetson Hardware. 9th International Workshop on On-Board Payload Data Compression, 2024. doi: 10.5281/zenodo.13863200.
3. J. Wouters, et al., MOVIQ: an end-to-end approach to enable high-resolution hyperspectral vision intelligence on board satellites. 4S Symposium 2024.
4. SmartConnect - https://connectivity.esa.int/news/european-space-agency-introduces-industrydriven-partnership-resilient-connectivity-during-disasters
5. L. Landuyt, X. Ivashkovych, and T. Van Achteren, Consistent Surface Water Monitoring by Fusing Sentinel-1 and -2 Through Convolutional Neural Networks. 2024. doi: 10.1109/IGARSS53475.2024.10640419.
6. GRB Digital Flanders - https://blog.vito.be/remotesensing/ai-grb
7. Flanders AI research program - https://www.flandersairesearch.be/en
8. SafePlace - https://connectivity.esa.int/news/esa-enhance-spaceenabled-crisis-management-safeplace
9. Flows - https://blog.vito.be/remotesensing/flood-response-flows
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G1)

Presentation: FAST-EO: Transforming Earth Observation Through Multi-Modal Foundation Models

Authors: Rıdvan Salih Kuzu, Prof. Gabriele Cavallaro, Dr. Thomas Brunschwiler, Jakub Nalepa, Corneliu Octavian Dumitru, Antony Zappacosta, Daniela Espinoza Molina, Romeo Kienzler, Dr. Johannes Jakubik, M.Sc. Benedikt Blumenstiel, Paolo Fraccaro, Felix Yang, Rocco Sedona, Erik Scheurer, Stefano Maurogiovanni, Agata Wijata, Daniel Marek, Jakub Sadel, Lukasz Tulczyjew, Nikolaos Dionelis, Nicolas Longépé
Affiliations: German Aerospace Center (DLR), Forschungszentrum Jülich, IBM Research, KP Labs, ESA Φ-lab
FAST-EO (Fostering Advancements in Foundation Models via Unsupervised and Self-Supervised Learning for Downstream Tasks in Earth Observation) is an ESA Phi-Lab funded project that aims to develop advanced Foundation Models (FMs) tailored to the unique demands of Earth Observation (EO). The project addresses critical environmental challenges through a transformative approach to data integration, analysis, and scalable modeling solutions [1]. These FMs are designed to leverage the diverse and complex data streams characteristic of EO, enabling comprehensive insights into Earth's dynamic systems. At the core of FAST-EO is the 4M4EO model, an extension of the "Massively Multimodal Masked Modeling" framework [2], which integrates diverse EO data sources—optical and SAR, as well as metadata and textual data—into a unified representation space. This cohesive architecture supports robust capabilities for zero-shot learning, fine-tuning, and generative tasks, making it applicable to a broad spectrum of EO applications. The model builds upon advancements in multi-modal architectures and incorporates temporal mechanisms, enabling effective processing and analysis of time-series data, which is vital for capturing dynamic environmental changes [3]. The effectiveness of FAST-EO's advancements is demonstrated through its application to several high-impact use cases. These include flood and wildfire monitoring, providing critical insights for mitigating climate disaster impacts; methane leak detection, supporting efforts to curb greenhouse gas emissions; and forest biomass observation, contributing to carbon management and reforestation initiatives. Additional applications include soil property estimation, which enhances precision agriculture practices; land cover change detection, offering systematic monitoring of urbanization and ecological transitions; and mining expansion assessments, which evaluate the environmental and agricultural implications of land-use changes. 
These diverse use cases highlight the adaptability and versatility of FAST-EO's models in addressing pressing societal and environmental challenges. A key component of FAST-EO’s progress is its use of the petascale modular supercomputer JUWELS Booster [4], along with benchmarking and preparation for JUPITER - Europe’s upcoming first exascale supercomputer [5]. This computational infrastructure enables the efficient training of large-scale, multi-modal AI foundational models by providing the capacity to handle extensive datasets with high speed and precision. The combined capabilities of JUPITER and JUWELS Booster help address the scale and complexity challenges inherent in EO data, aiming for high performance and scalability while maintaining computational efficiency. FAST-EO’s integration of advanced AI methodologies and multi-modal data processing underscores the transformative potential of Foundation Models in EO. By bridging the gap between state-of-the-art computational techniques and real-world environmental applications, FAST-EO sets a new standard for resource management and decision-making. This project not only advances the role of AI and remote sensing in tackling global challenges but also supports a future of enhanced sustainability, resilience, and informed environmental stewardship. Acknowledgements FAST-EO (Fostering Advancements in Foundation Models via Unsupervised and Self-supervised Learning for Downstream Tasks in Earth Observation) project is funded by the European Space Agency (ESA) Phi-Lab under the contract No. 4000143501/23/I-DT. References [1] Zappacosta, A., Kuzu, R. S., Dumitru, C. O., Molina, D. E., Brunschwiler, T., Kienzler, R., Jakubik, J., Blumenstiel, B., Cavallaro, G., Kesselheim, S., Sedona, R., Wijata, A., Tulczyjew, L., Marek, D., & Nalepa, J. (2024, May 7–10). Democratizing foundation models for Earth Observation applications. ESA-ECMWF ML4ESOP Workshop, ESA-ESRIN, Frascati, Italy. 
[2] Mizrahi, D., Bachmann, R., Kar, O., Yeo, T., Gao, M., Dehghan, A., & Zamir, A. (2024). 4m: Massively multimodal masked modeling. Advances in Neural Information Processing Systems, 36. [3] Jakubik, J., Roy, S., Phillips, C. E., Fraccaro, P., Godwin, D., Zadrozny, B., Szwarcman, D., Gomes, C., Nyirjesy, G., Edwards, B., Kimura, D., Simumba, N., Chu, L., Mukkavilli, S. K., Lambhate, D., Das, K., Bangalore, R., Oliveira, D., Muszynski, M., ... Ramachandran, R. (2023). Foundation models for generalist geospatial artificial intelligence. arXiv. https://arxiv.org/abs/2310.18660 [4] Jülich Supercomputing Centre. (2021). JUWELS Cluster and Booster: Exascale Pathfinder with Modular Supercomputing Architecture at Juelich Supercomputing Centre. Journal of large-scale research facilities, 7, A183. http://dx.doi.org/10.17815/jlsrf-7-183 [5] Jülich Supercomputing Centre, "JUPITER - Exascale for Europe", https://www.fz-juelich.de/en/ias/jsc/jupiter
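The masked-modeling objective at the heart of the 4M framework can be illustrated with a minimal, dependency-free sketch. This is a generic masking step, not the 4M4EO implementation; the patch-token toy input and all names are illustrative. The model sees only the visible tokens and is trained to reconstruct the hidden ones.

```python
import random

def mask_tokens(tokens, mask_ratio=0.75, seed=0):
    """Hide a random subset of input tokens; in masked-modeling
    pretraining the model reconstructs only the hidden ones."""
    rng = random.Random(seed)
    n_mask = int(len(tokens) * mask_ratio)
    masked_idx = set(rng.sample(range(len(tokens)), n_mask))
    visible = [t for i, t in enumerate(tokens) if i not in masked_idx]
    hidden = [t for i, t in enumerate(tokens) if i in masked_idx]
    return visible, hidden

# 16 image-patch tokens with a 75% mask ratio: 4 tokens remain visible.
visible, hidden = mask_tokens([f"patch_{i}" for i in range(16)])
```

With a high mask ratio the reconstruction task stays hard enough that the encoder must learn cross-modal structure rather than copy nearby pixels.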
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G1)

Presentation: Spectral Invariant Contrastive Learning for Hyperspectral Data

Authors: Xiangyu Zhao, Dr. Zhitong Xiong, Prof. Dr. Xiao Xiang Zhu
Affiliations: Technical University of Munich, Munich Center for Machine Learning
Foundation models have demonstrated great performance in the remote sensing community (Zhu et al., 2024). Current models, such as DOFA (Xiong et al., 2024) and SkySense (Guo et al., 2024), incorporate various data modalities for RGB, multispectral, and hyperspectral imagery. However, hyperspectral foundation models have been consistently underexplored. Unlike RGB or multispectral data, which typically have standardized band configurations across datasets, hyperspectral data present unique challenges due to the difference between available datasets for pretraining and downstream applications: large-scale spaceborne hyperspectral data (e.g., EnMAP) and small-scale airborne datasets (e.g., Pavia U, WHU-Hi, Toulouse). Based on these available datasets, developing robust hyperspectral models presents significant challenges due to variations in spectral resolution across different sensors. Such differences make it hard to transfer the pretrained weights to downstream tasks. Currently, most existing hyperspectral foundation models, such as (Wang et al., 2024; Braham et al., 2024), employ masked reconstruction to pretrain the model. However, transferring models pretrained on low spectral resolution spaceborne data to downstream tasks using high spectral resolution airborne data poses challenges, as the models would struggle to extract spectral features effectively due to differences in spectral resolution and coverage. Such challenges limit the transferability of hyperspectral foundation models to many real downstream applications utilizing various hyperspectral data sources. To address this challenge, we develop a spectral invariant contrastive learning method in our work. Specifically, employing the SimCLR (Chen et al., 2020) method, we use 11,483 EnMAP patches for pretraining (Fuchs and Demir, 2023) and then test our models on benchmark datasets for land cover classification, including the Pavia U, WHU-Hi, and Toulouse datasets (Zhong et al., 2020; Thoreau et al., 2024). 
To enhance the performance, we incorporate spectral similarity and spatial proximity to weigh negative pairs, which yields better feature representations. Moreover, we further extend our methods to multispectral data, where we use cross-modality contrastive learning between paired hyperspectral and multispectral data and learn general feature representation from them. Compared to other pretraining methods, the proposed methodology improves the classification accuracy when transferring pretrained models from spaceborne to airborne datasets. Furthermore, our approach presents an effective strategy to train multimodal foundation models, especially beneficial when abundant multispectral data, e.g. Sentinel-2, is available, but hyperspectral data, e.g. EnMAP, is limited. This work makes a contribution towards hyperspectral foundation models of better robustness and generalizability across various sensors and different data modalities. References Braham, N.A.A., Albrecht, C.M., Mairal, J., Chanussot, J., Wang, Y., Zhu,X.X., 2024. Spectralearth: Training hyperspectral foundation models at scale. arXiv preprint arXiv:2408.08447 . Chen, T., Kornblith, S., Norouzi, M., Hinton, G., 2020. A simple framework for contrastive learning of visual representations, in: International conference on machine learning, PMLR. pp. 1597–1607. Fuchs, M.H.P., Demir, B., 2023. Hyspecnet-11k: A large-scale hyperspectral dataset for benchmarking learning-based hyperspectral image compression methods, in: IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium, IEEE. pp. 1779–1782. Guo, X., Lao, J., Dang, B., Zhang, Y., Yu, L., Ru, L., Zhong, L., Huang, Z., Wu, K., Hu, D., et al., 2024. Skysense: A multi-modal remote sensing foundation model towards universal interpretation for earth observation imagery, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 27672–27683. Thoreau, R., Risser, L., Achard, V., Berthelot, B., Briottet, X., 2024. 
Toulouse hyperspectral data set: A benchmark data set to assess semisupervised spectral representation learning and pixel-wise classification techniques. ISPRS Journal of Photogrammetry and Remote Sensing 212, 323–337. Wang, D., Hu, M., Jin, Y., Miao, Y., Yang, J., Xu, Y., Qin, X., Ma, J., Sun, L., Li, C., et al., 2024. Hypersigma: Hyperspectral intelligence comprehension foundation model. arXiv preprint arXiv:2406.11519 . Xiong, Z., Wang, Y., Zhang, F., Stewart, A.J., Hanna, J., Borth, D., Papoutsis, I., Le Saux, B., Camps-Valls, G., Zhu, X.X., 2024. Neural plasticity-inspired foundation model for observing the earth crossing modalities. arXiv e-prints , arXiv–2403. Zhong, Y., Hu, X., Luo, C., Wang, X., Zhao, J., Zhang, L., 2020. Whu-hi: Uav-borne hyperspectral with high spatial resolution (h2) benchmark datasets and classifier for precise crop identification based on deep convolutional neural network with crf. Remote Sensing of Environment 250,112012. Zhu, X.X., Xiong, Z., Wang, Y., Stewart, A.J., Heidler, K., Wang, Y., Yuan, Z., Dujardin, T., Xu, Q., Shi, Y., 2024. On the foundations of earth and climate foundation models. arXiv preprint arXiv:2405.04285 .
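For readers unfamiliar with the underlying objective: SimCLR optimizes the NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss, which pulls two augmented views of the same patch together in embedding space and pushes all other patches away. Below is a dependency-free sketch of the standard loss; the authors' spectral-similarity and spatial-proximity weighting of negative pairs is deliberately omitted, so this is the generic formulation, not their code.

```python
import math

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of paired embeddings: view i in z1 and
    view i in z2 are positives; every other embedding is a negative."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cos = lambda a, b: dot(a, b) / math.sqrt(dot(a, a) * dot(b, b))
    z = z1 + z2                       # 2N embeddings; i and i+N are positives
    n2 = len(z)
    loss = 0.0
    for i in range(n2):
        j = (i + len(z1)) % n2        # index of i's positive partner
        denom = sum(math.exp(cos(z[i], z[k]) / temperature)
                    for k in range(n2) if k != i)
        loss -= math.log(math.exp(cos(z[i], z[j]) / temperature) / denom)
    return loss / n2

# Aligned view pairs give a lower loss than mismatched ones:
aligned = nt_xent([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
mismatched = nt_xent([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
```

Weighting the negative terms by spectral or spatial affinity, as the abstract describes, amounts to scaling the summands in `denom`.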
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G1)

Presentation: What Does it Take to Deploy Foundation Models in an Operational Context? The WorldCereal Crop Mapping Case Study

Authors: Christina Butsko, Kristof Van Tricht, Gabriel Tseng, Giorgia Milli, Hannah Kerner, David Rolnick, Ruben Cartuyvels, Inbal Becker Reshef, Zoltan Szantoi
Affiliations: VITO, Mila, McGill University, Ai2, Arizona State University, KU Leuven, University of Strasbourg, Department of Geographical Sciences, University of Maryland, European Space Agency, Department of Geography & Environmental Studies, Stellenbosch University
Foundation models, with their ability to generalize across diverse tasks, hold significant promise for advancing Earth Observation (EO) applications. However, in the literature, these models are often developed and evaluated against benchmarks that do not fully capture the practical challenges and requirements of deploying them in operational settings. This gap highlights the need to bridge theoretical advancements with real-world applicability in EO. Here we present a deep dive into the integration of the Pretrained Remote Sensing Transformer (Presto) model within the ESA WorldCereal system, highlighting enhancements in efficiency, accuracy, and computational resource utilization for large-scale cropland and crop type mapping. Presto is a transformer-based model pre-trained on EO pixel time series from sensors including Sentinel-1 and Sentinel-2, using a self-supervised masked autoencoding approach. This enables it to handle missing data, diverse inputs, and limited labeled training data, enhancing its adaptability and generalization across spatial and temporal domains. We chose Presto because of its unique lightweight characteristics, openness, ease of integration into applications, and availability of pretrained weights learned from millions of unlabeled examples. We focus on the modifications and technical adaptations required to optimize Presto for operational use in WorldCereal’s agricultural mapping tasks. One key adaptation was additional self-supervised training of Presto on WorldCereal's reference data to adapt it to WorldCereal's input data distribution. This step addressed several significant differences between WorldCereal’s data format and the initial Presto pre-training data, e.g., different topographical and meteorological data sources and different handling of cloudy composites in optical imagery. 
We further refined Presto through lightweight fine-tuning with hierarchical land cover and crop type labels, ensuring robust performance across spatially heterogeneous and temporally dynamic environments. We introduced innovative adaptations such as temporal augmentations, “focus time” selection to guide attention to a specific season, and masking strategies during fine-tuning to bolster temporal generalization and robustness in data-sparse regions. Experimental results demonstrate Presto’s efficacy in extracting high-quality embeddings for downstream classification tasks. When paired with downstream CatBoost classifiers, Presto consistently outperformed traditional feature engineering pipelines, achieving superior spatial and temporal generalization across unseen years and regions. Operational deployment within WorldCereal was facilitated by the lightweight nature of Presto, enabling scalable inference on global datasets. While custom finetuning of Presto is also feasible, the high quality and generalizability of the generic embeddings enabled us to use a frozen Presto encoder together with a lightweight downstream classifier. This approach ensures flexibility for users, who can process monthly or 10-day inputs, adapt class definitions to local contexts, and retrain downstream models with minimal computational resources. This ensures adaptability for diverse agricultural systems while maintaining the scientific rigor required for high-accuracy crop mapping. This study underscores the transformative potential of foundation models like Presto in EO workflows and demonstrates how iterative co-development between AI researchers, EO experts, and end users drives innovation. The WorldCereal-Presto integration provides a replicable blueprint for advancing EO-driven agricultural monitoring, bridging the gap between foundation model research and operational impact.
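The frozen-encoder pattern described above can be sketched in a few lines. Here a hand-written statistics function stands in for the frozen Presto encoder, and a nearest-centroid rule stands in for the CatBoost classifier; all names and the toy NDVI-like series are illustrative, not the WorldCereal implementation.

```python
import math

def frozen_encoder(series):
    """Stand-in embedding: simple statistics of a 1-D pixel time series
    (the real system would call the frozen Presto encoder here)."""
    n = len(series)
    mean = sum(series) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    return (mean, std, max(series) - min(series))

def fit_centroids(series_by_class):
    """'Lightweight downstream classifier': one centroid per class label."""
    cents = {}
    for label, series_list in series_by_class.items():
        embs = [frozen_encoder(s) for s in series_list]
        cents[label] = tuple(sum(d) / len(embs) for d in zip(*embs))
    return cents

def predict(cents, series):
    emb = frozen_encoder(series)
    return min(cents, key=lambda lab: math.dist(cents[lab], emb))

# Toy training data: seasonal crop signal vs. flat bare-soil signal.
centroids = fit_centroids({
    "cropland": [[0.1, 0.5, 0.8, 0.4], [0.2, 0.6, 0.9, 0.3]],
    "bare":     [[0.2, 0.2, 0.2, 0.2], [0.3, 0.3, 0.3, 0.3]],
})
```

Because only `fit_centroids` is retrained, users can adapt class definitions to local contexts without touching the (expensive) encoder, which is the design point the abstract makes.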
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G1)

Presentation: Deployable Foundation Models for Earth Observation

Authors: Gabriel Tseng, Anthony Fuller, Ruben Cartuyvels, Ivan Zvonkov, Marlena Reil, Mirali Purohit, Henry Herzog, Favyen Bastani, Patrick Beukema, Evan Shelhamer, David Rolnick, Hannah Kerner
Affiliations: McGill University, Allen Institute for Artificial Intelligence (Ai2), Mila -- Quebec AI Institute, Arizona State University, KU Leuven, Carleton University, University of Maryland, College Park, University of British Columbia
Collecting labels is a persistent problem in remote sensing mapping efforts, making significant spatial and temporal gaps common. Algorithms that exhibit a degree of spatial and temporal generalization are therefore of high interest to the community. The machine learning community has investigated self-supervised learning algorithms - models that can learn without labels - as one solution to build generalizable representations. While self-supervised algorithms are increasingly ubiquitous in traditional ML domains, their deployment in remote sensing contexts remains challenging. One reason is that different remote sensing projects use very different sensor inputs to produce maps. For example, SoilGrids includes 8 groups of covariates including land cover classes and weather, processed as pixel time series stretching back 15 years. On the other hand, the Skylight system for marine vessel identification uses single-timestep VIIRS and Sentinel-2 imagery and Sentinel-1 time series. To be useful in diverse contexts, pre-training algorithms used for “remote sensing foundation models” must ingest inputs with a variety of timesteps, spatial dimensions, and remote sensing products. In this presentation, we discuss the development of two remote sensing foundation models: Presto (the Pretrained Remote Sensing Transformer) and its successor model. These models target general utility by learning from a wide range of sensors, and can ingest a wide range of spatial and temporal dimensions of data. We discuss (i) lessons learned from adapting self-supervised learning algorithms to remote sensing, particularly to learn both large-scale and small-scale features from remote sensing data, (ii) steps we took to ensure the models could ingest a wide range of inputs and shapes, ranging from pixel time series to single-timestep imagery, and (iii) steps we took to sample a globally representative dataset. 
In addition, we highlight the importance of learning from deployed projects when motivating the design of pre-trained remote sensing foundation models. Both of these foundation models are actively deployed to make maps at scale. We discuss the process of deploying these models, and the lessons learned working alongside partners to execute these deployments.
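One way to let a single model ingest both pixel time series and single-timestep imagery, in the spirit described above, is to flatten every input into a sequence of tokens tagged with their timestep and band. The sketch below is hypothetical and is not Presto's actual tokenization; the band names are examples.

```python
def tokenize_pixel_series(series, band_names):
    """Flatten a (timesteps x bands) pixel time series into tagged tokens,
    so inputs with different temporal lengths and sensor sets share one
    sequence interface."""
    return [{"t": t, "band": b, "value": v}
            for t, obs in enumerate(series)
            for b, v in zip(band_names, obs)]

# A 3-timestep, 2-band SAR series and a single-timestep, 3-band optical
# pixel both become flat token sequences the same model can consume:
ts_tokens = tokenize_pixel_series([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]], ["VV", "VH"])
img_tokens = tokenize_pixel_series([[0.1, 0.2, 0.3]], ["B2", "B3", "B4"])
```

The tags (`t`, `band`) play the role that positional and sensor encodings play in a transformer: the sequence length can vary freely because position is carried per token rather than implied by shape.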
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G1)

Presentation: Leveraging Neural Compression for Earth Observation

Authors: M.Sc. Isabelle Wittmann, Dr. Johannes Jakubik, M.Sc. Carlos Gomes, M.Sc. Benedikt Blumenstiel, Dr. Thomas Brunschwiler
Affiliations: IBM Research
The exponential growth of satellite data marks a new era in Earth observation (EO) and enables a better understanding of our planet, with applications such as crop mapping and the detection of natural hazards. However, the sheer volume of this data poses challenges for transmission, storage, and accessibility, ultimately limiting its usability. Image compression offers ways to efficiently store and transfer data. In recent years, data-driven neural compression approaches have demonstrated improved performance in compressing images compared to handcrafted algorithms (e.g. JPEG). Our work builds on that progress and focuses on the application of neural compression specifically to satellite images. We explore adaptations of neural compression models that leverage EO-specific characteristics, including location and timestamp information. This metadata may offer potential to tailor and improve compression performance. Our research reveals fundamental differences in the input pixel distribution and entropy of satellite images compared to standard natural image datasets. Interestingly, EO data reveals a substantially lower entropy compared to ImageNet samples. We demonstrate that applying neural compression to EO data improves compression performance within a few hours of training, requiring lower bit rates and fewer parameters than models trained on natural images. We further investigate the effect of ingesting encoded metadata information on neural compression techniques. Our results suggest that neural models extract enough image features to make additional spatial and metadata inputs redundant. Finally, we compare specialized neural compression models, trained on specific seasons or geolocations, with a general neural EO compressor trained on the entire EO data set. Our results indicate that although the specialized models are learning on different input distributions, general neural EO compressors are still beneficial in many cases. 
In low-entropy, strongly skewed distribution scenarios, specialized models outperform general neural EO compressors. These results underline that the specific nature of EO data may benefit from individual processing for certain parts of the data, while the majority can be compressed with a general neural EO compressor. Overall, we demonstrate superior performance of neural compressors relative to classical methods. Further, our findings suggest that tailoring neural compression to EO data partly requires specialization, as significant differences in input distribution and entropy can enable more specialized compression. We show that the extent to which neural compression models benefit from dataset diversity versus specialization is an essential trade-off, which requires further research from the EO domain. While tailoring neural compression to EO is generally challenging, we show that low-entropy image samples allow for lightweight, specialized compressors, which can be very helpful for different EO scenes such as oceans, deserts or forests. This work is partially funded by the European Union (Horizon Europe, Embed2Scale, contract No. 101131841) and the Swiss State Secretariat for Education (SERI). C. Gomes and T. Brunschwiler, “Neural Embedding Compression for Efficient Multi-Task Earth Observation Modelling,” IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 8268-8273, doi: 10.1109/IGARSS53475.2024.10642535.
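The entropy comparison the abstract draws can be made concrete with first-order Shannon entropy over a pixel histogram, a standard lower-bound proxy for how compressible a scene is. The pixel samples below are made up for illustration; real measurements would use actual EO and ImageNet imagery.

```python
import math
from collections import Counter

def shannon_entropy_bits(pixels):
    """Shannon entropy (bits/pixel) of a pixel sample's histogram."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A near-constant ocean-like patch vs. a maximally varied 8-bit patch:
ocean = [12] * 200 + [13] * 56          # two grey levels, heavily skewed
varied = list(range(256))               # every grey level exactly once
```

`shannon_entropy_bits(varied)` is 8 bits/pixel (incompressible at first order), while the skewed ocean-like sample falls below 1 bit/pixel, which is the kind of gap that makes lightweight specialized compressors attractive for oceans, deserts, or forests.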
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.14)

Session: D.05.03 Towards Modernized Copernicus Data: Enabling Interoperability through EOPF Principles and Advanced Data Access Strategies

#zarr #stac #cloud-native

As demand for high-accuracy Copernicus data products grows, modernizing and re-engineering existing processors is essential. The Sentinel data processors, developed over a decade ago, require upgrades to remain viable for the next 15 years. A key focus of this modernization is enhancing data access through cloud optimization, interoperability, and scalability, ensuring seamless integration with new technologies.

A major development in this transition is the adoption of cloud-native data formats like Zarr, which significantly improve data handling, storage, and access. This shift supports the increasing volume and complexity of data from current and future missions. The Earth Observation Processing Framework (EOPF) plays a crucial role in enabling these advancements, providing a scalable and flexible environment for efficiently processing large datasets.
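The core idea behind Zarr's cloud optimization is that an array is split into fixed-size chunks, each stored as an independent object, so a reader fetches only the chunks it needs rather than a monolithic file. As a minimal sketch of the addressing scheme (Zarr v2 names chunk objects with dot-separated grid indices; the array and chunk sizes below are illustrative):

```python
def chunk_key(index, chunks):
    """Map an element's N-D index to the grid key of the chunk holding it."""
    return ".".join(str(i // c) for i, c in zip(index, chunks))

# For a 10980 x 10980 Sentinel-2-tile-sized array stored with 1024 x 1024
# chunks, reading pixel (5000, 300) requires fetching a single small object:
key = chunk_key((5000, 300), (1024, 1024))
```

This per-chunk independence is what makes Zarr stores efficient over HTTP range requests to object storage, in contrast to formats that must be read front to back.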

This insight session will provide updates on the latest status of EOPF project components, as well as the future of the Copernicus data product format, with a strong focus on Zarr and its practical applications. Experts will showcase how these innovations enhance data accessibility and usability, ensuring that Copernicus remains at the forefront of Earth observation. The session will also highlight EOPF’s role in streamlining data workflows, fostering collaboration among stakeholders, and advancing next-generation EO solutions.

Presentations and speakers:


EOPF – Enabling Cloud-Efficient and Interoperable EO Data Processing


  • Carine QUANG - CS Group

Re-engineering of Sentinel-2 & Sentinel-3 Processors


  • Naceur Meskini - ACRI ST

Sentinel-1 Analysis Ready Data: Products, Algorithms, and Processor


  • Federico Minati - B-Open

Inside EOPF Sentinel Zarr Samples: An Initial Operational Snapshot


  • Christoph Reimer - EODC

Bridging the Gap: GeoZarr and STAC Integration for Multidimensional Geospatial Data Workflows


  • Emmanuel Mathot and Brianna Pagán - Development Seed

EOPF in Highway Project


  • Simone Mantovani - MEEO
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall L1/L2)

Session: F.01.02 Raising awareness on climate change and environmental degradation using effective communication and education

The world today confronts an unprecedented need for effective communication and education to tackle the planetary crisis, particularly climate change and environmental degradation. The disinformation surrounding climate change takes the form of misleading claims that downplay the severity of the issue and question the integrity of the scientific consensus. This spread of false information can confuse the public, create unwarranted scepticism, and stall political momentum necessary for addressing the crisis. To address this issue, people from multidisciplinary backgrounds must unite to promote accurate, evidence-based information and engage in transparent discussions. By translating complex scientific data into accessible, relatable information, this effort can begin to bridge the gap between the science and the public, to empower individuals, influence policy decisions, and drive collective action by highlighting the urgency of the issues and offering clear pathways for change.

Previous efforts at climate communication have often failed to reach the masses, relying heavily on context-free facts, statistics, and complex science, making it inaccessible and less relatable to the average person. In this context, an average person retains only 5-10% of information if it's purely statistical, but retention soars to 65-70% when information is conveyed through storytelling. This underscores the transformative power of storytelling in effective climate communication, capable of shifting culture, touching hearts, and changing minds. We respond more profoundly to stories that resonate with us on a personal level, linking individual experiences to global challenges and thus rendering the abstract and often distant phenomena of climate change more tangible and immediate. The Earth Observation (EO) community is uniquely positioned in this sense, not only because of the breathtaking visuals of our planet and the excitement of satellite launches but also due to the scope of its measurements, which span global, regional, and local scales.

We invite climate and social scientists, engineers, artists, journalists, communicators, storytellers, activists, and policymakers to submit multidisciplinary abstracts for this session that:

  • Showcase best practices, case studies and demonstrations of storytelling that use Earth Observation (EO) measurements, data, visualisations, and applications.

This session aims to nurture, support, and expand a community both inside and outside of Earth Observation, committed to science storytelling. It also seeks to address the severe lack of funding in projects by potentially introducing dedicated communication and storytelling work packages.

Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Telling Climate Stories with Data

Authors: Philip Eales, Andrew Wayne, Annika van Lengen, Patrick Mast, Carsten Brockmann, Sophie Hebden, Paul
Affiliations: Planetary Visions Limited
Article 12 of the Paris Agreement emphasises the importance of promoting public understanding, education and participation in addressing climate change. Education is also a mandatory activity in the ESA Convention, which all member states are expected to support. With these points in mind, ESA’s Climate Office has over the last few years initiated a range of science communication activities to promote the ESA Climate Change Initiative (CCI), and climate change awareness more broadly, to the public, policy makers and students. These activities combine data visualisation and computer graphics techniques with extremely high visual quality and a narrative approach to science communication. Storytelling is the main means of communicating with which the human brain has developed, and narrative cognition is thought to be the default mode of human thought and memory. It is recognised that narratives (stories) are easier to comprehend, and non-expert audiences find them more engaging, than the logical-scientific approach that scientists often use to communicate with each other, and more effective than the “information deficit” approach often used to communicate with the public. We will demonstrate the practical application of a narrative approach across a range of media from simple graphics and image sequences to animation and long-form multimedia storytelling. The CCI is developing key datasets, based on the best available Earth Observation (EO) technologies and best methodological processing algorithms, for use in understanding changes to the Earth’s climate. Twenty-seven essential climate variables (ECVs) are being developed and made available to climate modelers, with data stretching back in some cases more than forty years. The ECVs include variables such as sea surface temperature, atmospheric greenhouse gases, and land cover type. 
As well as producing clear and easy-to-understand climate data visualisations for print, broadcast, social media and exhibition use in the form of 2D maps, 3D computer graphics and linear animations, an interactive web app has been developed to present the very long time-sequences of ECV data on an interactive virtual globe. This allows users to explore the climate data at their own pace, compare related climate variables, and discover for themselves patterns, relationships, climate events and trends. The data globe is accompanied by a series of multimedia stories that are richly illustrated with photos, videos, satellite images and diagrams, and rooted in human experience. The stories present background information and context for the Earth observation data and show how they are relevant to daily life and newsworthy events. Data are linked through components of the Earth system, such as the carbon cycle and the water cycle, and to the challenges facing society due to climate change. In both the interactive data viewer and the stand-alone linear animations care is taken to follow best practice for scientific data visualisation and science communication. In the animations, visualisations of the global climate data products are supplemented by computer graphic representations of microscopic processes, such as aerosols seeding cloud formation, and with conceptual illustrations of, for example, the carbon budget and the volume of ice Earth loses each year. The time-sequence maps are made available outside the web app for use on custom display hardware such as touch-tables and spherical displays. ESA and the UK Space Agency have used the material in this way in their own exhibition spaces and in public events such as the annual UN climate summits. The animations are published on ESA’s website and social media channels and made available for use by broadcasters. 
Future work will look at tighter integration between the data viewer and the storytelling, and adapting the app, stories and animations to the needs of outreach partners such as museums, science centres and exhibition spaces.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall L1/L2)

Presentation: The Art of Datatelling: Create Your Interactive Stories With Data

Authors: Arturo Montieri, Cristina Arcari
Affiliations: Alia Space Systems
Over the past years, a significant amount of scientific data has been collected, allowing a comprehensive understanding of the planet's changes and monitoring of its well-being. This data is very often expressed in a format that is hard to read or understand, making it difficult to highlight its importance for non-expert users. Data becomes more relevant to the community when visualized in an attractive and impressive way. Effective science storytelling is crucial for raising awareness about climate change impacts and extreme weather among the public, policymakers, and journalists. Stories emerge as a powerful tool in this context, transforming complex datasets into engaging narratives that influence understanding and choices. This form of expression of creative talent aimed at communicating science and data in a simple way could be called "Datatelling". We have developed an innovative web-based content creation tool called DEA, which allows users to craft interactive stories using comprehensive data hosted on the DestinE Platform. It integrates diverse data sources, including satellite observations, re-analysis datasets, and real-time forecasts, provided as time series covering about one hundred years, which can be visualized on a map. The datasets available by default come from the DestinE Climate Change Adaptation Digital Twin (DT), the Copernicus Climate Change Service, the Copernicus Atmosphere Monitoring Service (CAMS), ECMWF, EUMETSAT and the Copernicus Marine Service (CMEMS). DEA also offers engaging climate data visualization, enabling users to support climate change narratives with qualitative anomaly plots, such as climate spirals, stripes, and anomaly bars, at global or local scales, facilitating the tracking of temperature trends over time. Users create interactive stories combining these datasets with textual information, images, video and plots, without writing code. Furthermore, users can upload their own assets to enrich their stories. 
These assets can be 3D models, vector data or satellite images optimized for the cloud. Leveraging this platform, users can keenly share climate adaptation and resilience insights, fostering open discussions and proactive responses to worsening climate impacts.

Thursday 26 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Destination Earth Climate Game

Authors: Dr Rochelle Schneider, Kylian Berrebi, Tabatha Sauvaget, Jean-Philippe Audoin
Affiliations: ESA, Atos
Training and raising awareness about climate change can significantly benefit from the principles of gaming. By offering hands-on engagement, games enhance memorization and allow players to embody real-world situations and stakes. DestinE Climate Game is a game based on facts and on scientifically observed or simulated data. Its primary goal is to raise awareness of climate change, the associated environmental degradation, and good practices for adapting to or overcoming its impacts. Its second goal is to demonstrate what can be achieved with the climate data and platform provided by Destination Earth, the flagship initiative of the European Commission to develop a highly accurate digital model of the Earth (a digital twin of the Earth) to model, monitor and simulate natural phenomena, hazards, and the related human activities. The game leverages storytelling to deliver a powerful educational tool, making players aware of the stakes, the potential impact, and the levers they can activate to reduce risks and improve outcomes. It is designed to cater to a large audience, ranging from children to adults, including pupils, students, and young adults. By approaching climate change through real-scene gamification, the game aims to raise awareness while reducing the anxiety associated with climate change impacts, contextualizing the player's immediate environment in a playful manner and making the topic more concrete and actionable. The game alleviates this anxiety by providing players with levers to act, thus demonstrating that everyone can have a positive impact on their environment, even on a small scale. It proposes a timeline from now to 2100, throughout which the player learns about nature and climate change, either in their local environment or in specific environments that are particularly impacted by climate change. 
The players must decide on actions to reduce global temperature rise, environmental degradation, ice melt, and sea level rise. Once the timeline has run, a summary of the actions taken shows the players the different impacts of their actions, positive or negative. Players are then invited to play again, to better understand and memorize, and to improve their final outcome. The game takes advantage of high-quality Earth Observation datasets, including Copernicus Land Cover, Open Topography, Copernicus Climate Change, IPCC scenarios, DestinE Digital Twin data, and the Köppen-Geiger climate classification. This blend of gaming elements, visual representation, and real scientific data creates a rich experience grounded in factual information. The use of these science-based datasets ensures that the game provides reliable information while remaining accessible to a large audience. Through this game, Atos has achieved a multidisciplinary approach that integrates science, modelling, technology, and storytelling to create a comprehensive educational tool. By empowering individuals with knowledge and actionable steps, Atos aims to foster proactive engagement with climate issues. The elements of the game help to humanize scientific data, making it more relatable and impactful for players. The goal of this presentation is to detail the game design process, explain the expected educational results and highlight the challenges still to come.

Thursday 26 June 14:00 - 15:30 (Hall L1/L2)

Presentation: From Fact to Act – Raising Awareness for Impact Through Communications for ESA’s Climate Change Initiative

Authors: Sarah Otto, Georg Lahme, Jana Schmülling, Nina Höhler, Charlotte Flint, Dr. Steffen Kothe, Rainer Hollmann, Achim Tack, Annika van Lengen, Carsten Brockmann, Tonio Fincke, Philip Eales, Paul Fisher, Ravi Kapur, Federica Moscato, Jürgen
Affiliations: Klenk & Hoursch AG, Deutscher Wetterdienst, Ubilabs, Brockmann Consult GmbH, Planetary Visions Limited, European Space Agency, Imperative Space, Centre for Environmental Data Analysis
The accelerating environmental crisis resulting from climate change calls for transformative communication strategies that both inform and inspire action. ESA’s Climate Change Initiative (ESA CCI) exemplifies how storytelling, rooted in robust scientific data, can bridge the gap between complex climate science and diverse audiences. By building trust, sparking curiosity and driving engagement, ESA’s CCI Knowledge Exchange Project (ESA CCI KE) empowers individuals and stakeholders to understand, internalise, plan and take climate action. The evolving climate communications landscape requires innovative strategies. By extending the reach of CCI's findings and stories across four different stakeholder groups and communication platforms, we are driving a broader climate dialogue and fostering more effective action. Extracting the newsworthiness of scientific findings and packaging them for diverse audiences is critical to ensuring sustainable awareness and engagement. The CCI KE project is a major building block within the CCI programme. It has developed over the past five years and recently entered a new delivery phase. The core activities are the operation, maintenance and development of the CCI KE products (Open Data Portal, Toolbox, Climate from Space app and website), which provide the facts and basics for communication activities. In the new phase, significant emphasis is given to communication, partnerships, venture building and training. Scientific soundness and fact-based communication using Earth Observation (EO) based CCI data and results are central to the project. 
In our contribution to the session, we will present best practices and case studies using the KE products that show how science storytelling has improved climate communication:
• by breaking down complex datasets into accessible formats tailored to specific audiences,
• by transforming scientific data into content and creating powerful narratives that make facts tangible and understandable through three core principles – personalise, localise and emotionalise,
• by using EO data visualisations to engage, inform and inspire diverse audiences across multiple communication channels.
Join us to explore how storytelling can turn data into a catalyst for climate action.

Thursday 26 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Visualising Climate Data in a Newsroom Environment

Authors: Erwan Rivault, Jana Tauschinski
Affiliations: BBC News, Financial Times
Communicating the reality of the climate crisis is a key challenge of our time. Although we increasingly see its impacts playing out across the world, repetition, reader fatigue and data complexity can result in audience disengagement. One way to overcome this is finding new ways to tell the stories of climate change, through creative visualisation that leads people to new insights. Climate data is complex. To the non-researcher it can be inaccessible: hard to find and in unfamiliar formats. Within it, however, lie some of the most important and impactful stories. Extracting these stories and communicating them in a way that not only informs but also captivates and engages readers to act is key. How can visual and data journalism contribute to this element of science communication, helping to bridge the gap between research and lay audiences? How do we keep audiences engaged when new records are being broken almost monthly? This talk will explore how creative data visualisation can transform the way we communicate climate change. Drawing on our experience as journalists at the BBC and the Financial Times, we will illustrate how innovative approaches to visual storytelling can help sustain public engagement. We’ll begin by examining some of the common practices in climate data visualisation, and what makes a good graphic for news. While effective, these traditional data visualisations can lose their impact over time. We will then introduce more novel approaches that present data in creative formats, uncovering new insights and sparking renewed interest from disengaged readers. The climate data visualisations we are crafting are now at the heart of our reporting. We will share the creative process and development involved in making them. 
For example, the Joy Division-inspired ridgeline chart created for the report confirming 2023 as the world's hottest year on record (https://www.bbc.co.uk/news/science-environment-67861954), and the monthly necklace chart published in the reporting of 12 consecutive monthly temperature records (https://www.ft.com/content/8e83e197-3c1f-4914-af21-04ca07a89ca1). Attendees will leave this session with a deeper understanding of how creative data visualisation techniques can invigorate climate storytelling. Whether you are a researcher, a journalist, a climate advocate, or a communicator, this talk aims to inspire you to rethink the way you visualise the data that’s relevant to your field. By embracing new formats in the newsroom, we hope to keep audiences engaged and keep climate change front and center in the public discourse during this critical decade.

Thursday 26 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Engaging young students with ocean colour through storytelling

Authors: Professor Vanda Brotas, Professor Steve Groom
Affiliations: Faculty of Sciences University of Lisbon, Plymouth Marine Laboratory
Satellite Earth Observation (EO) is an indispensable tool for studying our planet from a global perspective at high temporal and spatial resolution; it is essential for understanding key biogeochemical cycles and finding solutions for a sustainable planet. The expanding use of EO to research, develop and apply techniques addressing social, economic and environmental issues requires new generations of experts, engineers and scientists. Therefore, seeding interest and curiosity in EO among youngsters is an important endeavour for scientists and agencies. The question is: how to accomplish this task? Here, we present an example of storytelling in which fantasy is used to introduce a key essential climate variable, ocean colour, inspired by results from the ESA Climate Change Initiative: initially in book form, followed by structured school visits with gamification, and finally performances of a theatre version of the story. As part of a series of illustrated books aimed at children from 8 to 11 years old, on eutrophication, ecosystem conservation and environmental degradation, we wrote “The Girl who could see different colours in the sea”. The book was issued by a renowned publishing company and shortly afterwards included in the Portuguese National Reading Plan. With the support of an EU Twinning project (PORTWIMS), an English translation was produced and an online version made freely available at https://portwims.org/getmedia/b22b2a42-d784-4a97-aaf3-b41494b57b77/TheGirlWhoCouldSeeDifferentColoursInTheSea.pdf. The Portuguese version was also sponsored by the national institution Ciência Viva, which is strongly committed to science communication, mainly to children and young students. The plot introduces children to the notion that satellites, hundreds of kilometres from the Earth, may see what is invisible to the human eye. 
A mysterious girl with special powers to see the invisible (the coloured patches in the sea due to phytoplankton cells) is the main character, and as her adventures unfold, the reader learns several key (though obviously simplified) notions: what phytoplankton is, the relation between phytoplankton concentration and fisheries, the competition between scientific teams, the need to validate satellite imagery with in situ data, and the importance of asking questions and understanding the challenges society faces. The story has been presented at over 10 schools, where we introduced gamification by distributing coloured A4 sheets to the children, simulating the gradient in water colour from coast to open ocean from the first to the last row of the class. Furthermore, the story was adapted by a theatre company and performed in several locations around Portugal. At some of these events, at the end of the show, we asked the audience questions to ascertain their understanding of the science involved, creating another opportunity for sharing information about EO and ESA and their role in trying to achieve a sustainable planet. This presentation gives an overview of lessons learned and ideas for best practice in education and communication for school children.

Thursday 26 June 14:00 - 15:30 (Room 0.11/0.12)

Session: C.01.13 Quantum Sensors: Next-Generation Tools for Earth Observation

This session will delve into the advancements in quantum sensing technologies, encompassing accelerometers, magnetometers, microwave receivers, and more, highlighting their impact on future space-based Earth observation missions. Attendees will gain insights into:

Current Developments:
Presentations on the latest advancements in quantum sensor technology, including breakthroughs in sensitivity, precision, and miniaturization, tailored for space applications.

Use-Case Evaluations:
Detailed evaluations of various use cases where quantum sensors demonstrate potential to enhance Earth observation missions.

Unique Challenges:
A glance at unique challenges faced in the development and deployment of quantum sensors for space missions. This includes technical hurdles, environmental considerations, and integration with existing space technologies.

Future Directions:
A look at roadmaps for future developments in quantum sensing technology such as planned collaborative projects and the next steps required to fully realize the potential of quantum sensors in Earth observation missions.

Interactive Discussions:
Opportunities for Q&A and interactive discussions with leading experts in the field.

This session is ideal for researchers, engineers, and professionals involved in space technology, Earth observation, and quantum sensing, offering an overview of how quantum advancements are poised to evolve instrumentation for Earth observation.

Thursday 26 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Qualification of a Quantum Diamond Vector Magnetometer for Earth Observation

Authors: David Roy-Guay, M. Vincent Halde, Ms. Kayla Johnson, Dr Lilian Childress, M. Andrew Lowther, M. Olivier Bernard, M. Romain Ruhlmann, M. Nicolas Fleury, Hubert Dubé, M. Benjamin Dupuis, M. Gabriel St-Hilaire, M. Frédérik Coulombe, M. Adrian Solyom
Affiliations: SBQuantum, McGill University
SBQuantum has developed a diamond-based quantum magnetometer utilizing nitrogen-vacancy (NV) centers in diamond to achieve high-accuracy vector magnetometry for Earth Observation applications. This magnetometer combines quantum sensing techniques with a compact, space-ready design, addressing critical challenges in sensitivity, accuracy, and operational stability. Unlike traditional magnetometers such as fluxgate or optically pumped atomic magnetometers (OPMs), this technology intrinsically measures vector magnetic fields with sub-nanotesla precision, offering significant improvements in size, weight, and power (SWaP) for spaceborne platforms. At the heart of the magnetometer is a diamond containing NV centers, whose spin-dependent optical properties allow for magnetic field detection via optically detected magnetic resonance (ODMR). The NV centers' ability to operate across a wide range of environmental conditions, including radiation and thermal extremes, is a key advantage for deployment in low-Earth orbit (LEO). The sensor operates in pulsed ODMR mode, which eliminates linewidth broadening due to optical pumping, enhancing sensitivity and enabling dynamic bias field modulation for precise calibration. The magnetometer uses a novel three-tone microwave excitation scheme, optimized for the NV centers' hyperfine structure, to ensure robust signal contrast and maintain high sensitivity across vector components. The design also incorporates advanced noise-reduction techniques, including balanced photodetection, to suppress laser and electronic noise. These features enable the device to achieve a noise floor of under 1 nT/√Hz at 1Hz, a critical design parameter for high-resolution geomagnetic field mapping. The sensing head combines high density NV ensemble diamond material, a 3D microwave resonator and compact optical components, including a compact green laser diode, photodiodes and collection optics. 
The sensor’s 225 cm³ volume, <200 g mass and 4 W power consumption are achieved through integration of low-power microwave and optical systems, adhering to stringent satellite payload requirements. The system’s control unit includes a radiation-hardened FPGA for signal processing and real-time calibration. Microwave delivery is managed via a resonator-tuned circuit, ensuring homogeneous field application across the diamond sample. Thermal effects are compensated in situ using distributed temperature sensors. Extensive environmental testing validated the magnetometer’s operational readiness for space deployment. We summarize the tests, including:
1. Thermal Cycling and Vacuum: The system remained operational across -30°C to 50°C and under vacuum conditions, demonstrating resilience to the temperatures and pressures typical of LEO environments.
2. Radiation Resistance: Proton and gamma radiation tests confirmed that the diamond and associated electronics maintain functionality after cumulative doses exceeding mission requirements, with negligible degradation in magnetic sensitivity or coherence properties.
3. Mechanical Resilience: The sensor withstood sinusoidal vibrations, mechanical shocks, and steady-state accelerations based on MIL-STD-810H standards, verifying its structural integrity and stability for launch conditions.
Functional testing conducted at NASA Goddard Space Flight Center and the Ottawa geomagnetic observatory confirmed the magnetometer’s high sensitivity and precision. A short-term accuracy of 6 nT across axes was measured, and orthogonality errors were minimal, attributed to the intrinsic symmetry of the diamond lattice. The system demonstrated vector sensitivity sufficient for geomagnetic mapping applications, meeting target specifications even within the magnetic environment of an optical bench including star trackers. 
The magnetometer’s performance and compact design make it highly suitable for deployment on CubeSats and small satellite platforms, providing a scalable solution for spaceborne geomagnetic observations. Its ability to measure vector magnetic fields with sub-nanotesla resolution addresses key requirements for applications such as geomagnetic modeling to enhance the World Magnetic Model (WMM) and in the future, supporting geomagnetic navigation. Space Weather Monitoring and Satellite Operations to support precise attitude control and magnetic noise compensation are foreseen. The project contributes to operationalizing advanced quantum vector magnetometry for Earth Observation and establishes a framework for scalable, high-performance magnetic field sensing in space. Scheduled for CubeSat deployment in 2025 under the MagQuest program, this work demonstrates the potential of quantum technologies to bridge the gap between laboratory advancements and real-world applications, driving innovation in geomagnetic science and Earth Observation.
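As a toy illustration of the vector reconstruction principle the abstract describes (not SBQuantum's flight processing chain; the axis geometry and the signed-projection assumption are ours), the field vector can be recovered from the ODMR splittings measured along the four NV symmetry axes:

```python
import numpy as np

# Gyromagnetic ratio of the NV electron spin (standard value, ~28.024 GHz/T).
GAMMA_NV = 28.024e9  # Hz/T

# The four NV symmetry axes of the diamond lattice ([111] family), normalized.
NV_AXES = np.array([
    [1, 1, 1],
    [1, -1, -1],
    [-1, 1, -1],
    [-1, -1, 1],
]) / np.sqrt(3)

def field_from_splittings(splittings_hz):
    """Recover the vector field B from the ODMR splittings on each NV axis.

    Each splitting is Δf = 2·γ·(B·n̂); assuming the sign of each projection
    is known (e.g. from a bias field), B follows from a least-squares fit
    over the four axes.
    """
    projections = np.asarray(splittings_hz) / (2 * GAMMA_NV)  # B·n̂ per axis, in T
    B, *_ = np.linalg.lstsq(NV_AXES, projections, rcond=None)
    return B

# Forward-simulate an Earth-like ~50 µT field, then invert it.
B_true = np.array([20e-6, 30e-6, 35e-6])  # tesla
splittings = 2 * GAMMA_NV * NV_AXES @ B_true
B_est = field_from_splittings(splittings)
print(B_est)  # recovers B_true to numerical precision
```

Because the four axes over-determine the three field components, the least-squares fit also provides a basic self-consistency check; the fixed lattice geometry is one reason the small orthogonality errors quoted above are intrinsic to the diamond.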

Thursday 26 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: ESA's perspective on quantum sensing for Earth Observation

Authors: Arnaud Heliere, Olivier Carraz, Julian Hofmann
Affiliations: ESA EOP-FMO

Thursday 26 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Towards nonlinear quantum interferometry enhanced Earth observation

Authors: Marta Gilaberte Basset, Carlos Melo Luna, Valerio Flavio Gili
Affiliations: Fraunhofer Institute for Applied Optics and Precision Engineering IOF
Quantum imaging and spectroscopy, particularly through Quantum Imaging with Undetected Light (QIUL) techniques [1,2], present potential benefits for Earth observation missions such as Aeolus, EarthCARE, and Merlin. These methods can significantly enhance the performance of these missions by addressing critical limitations in current sensor technologies. QIUL, in particular, offers the potential to reduce the payload requirements for infrared (IR) and ultraviolet (UV) sensors, including reductions in volume, weight, power consumption, and critical operational components such as cryostats used in IR detection. QIUL is based on a nonlinear interferometric principle, utilizing a nonlinear crystal pumped by a laser to generate down-converted light through spontaneous parametric down-conversion (SPDC). This process produces pairs of photons—referred to as signal and idler—that are correlated across multiple degrees of freedom, including frequency, momentum, and position. By employing non-degenerate SPDC, the wavelengths of the signal and idler photons can be engineered to differ; for example, the idler photon may reside in the IR spectral range, while the signal photon is in the visible spectrum. Within the interferometer, the signal and idler beams are directed along separate optical paths. The idler beam interacts with the target sample, encoding spatial, spectral, and phase information, while the signal beam is directed to a visible-wavelength detector for image acquisition. This configuration enables the retrieval of the sample information in spectral ranges that are traditionally difficult to detect [3], such as the IR, while leveraging the superior efficiency, lower noise, and broader availability of visible-spectrum detectors. This study represents the first comprehensive evaluation of various quantum imaging techniques for Earth observation, offering a thorough overview of the associated challenges and potential benefits. 
Specifically, it investigates the advantages of Quantum Imaging with Undetected Light (QIUL) and its applicability to Earth observation missions. Simulations are employed to assess system performance under atmospheric conditions, including turbulence and scattering, and to quantify photon flux losses in realistic operational scenarios. The results of this analysis address the performance of quantum imaging compared to traditional systems, focusing on its potential to reduce system complexity and operational constraints. [1] J. Fuenzalida et al., “Nonlinear interferometry: A new approach for imaging and sensing”. In: Adv. Quantum Technol. 7 (2024), 2300353 [2] G. Barreto Lemos et al., “Quantum imaging and metrology with undetected photons: tutorial”. In: J. Opt. Soc. Am. B 39, 2200-2228 (2022) [3] A. Karim et al., “Infrared detectors: Advances, challenges and new technologies”. In: IOP Conf. Series: Materials Science and Engineering 51 (2013) 012001
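The non-degenerate SPDC wavelength engineering described above follows directly from energy conservation in the down-conversion process; a minimal sketch (the wavelengths are illustrative choices, not mission values):

```python
# Energy conservation in SPDC: 1/λ_pump = 1/λ_signal + 1/λ_idler.
def idler_wavelength_nm(pump_nm, signal_nm):
    """Idler wavelength for a given pump and signal wavelength (all in nm)."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

# Example: a 532 nm pump paired with an 810 nm visible signal photon puts
# the idler at ~1550 nm in the IR, so the idler can probe the sample while
# only the visible photon needs to be detected.
print(round(idler_wavelength_nm(532, 810), 1))  # ~1550 nm
```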

Thursday 26 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Earth Observation Using Rydberg Sensing

Authors: C. Stuart Adams, Gianluca Allinson, Mark Bason, Peter Huggard, Kevin Weatherill
Affiliations: RAL Space, Durham University
Highly excited atoms, often known as Rydberg atoms, display extreme sensitivity to external electromagnetic (EM) fields, including the electric fields associated with black-body radiation (BBR). In addition to their use in quantum computing, Rydberg atoms can be configured for sensing applications. Such sensors make use of non-destructive read-out of Rydberg states, which is implemented optically [1], including fluorescence detection of low-lying Rydberg states [2]. A range of different Rydberg sensing modalities have been developed, from microwave standards and imaging in the terahertz regime to sensitive digital communications techniques in radio and microwave bands. For Earth observation, a sensor that detects specific channels of incoherent radiation is needed - a Rydberg Receiver (RR). In this work, we compare RRs to classical radiometers. Because RRs are based on the response of an atom to an oscillating electric field, they are conceptually different from a classical antenna. Thus, concepts such as effective aperture must be translated into the equivalents of a new modality that uses the language of electric fields rather than RF power. In addition, one must convert scene or antenna temperatures to electric field sensitivities. This conversion draws out a natural comparison to low-noise amplifiers (LNAs). A Rydberg atom could be considered an LNA with a highly tunable ‘carrier frequency’ sensitivity range but low instantaneous bandwidth – typically around 10 MHz. We discuss current sensitivities/noise temperatures compared to non-cryogenic, off-the-shelf LNAs. We conclude by identifying the most promising operating frequencies of RRs, discussing the necessary developmental steps, and the likely implications of relevant noise sources. [1] - Mohapatra, A. K., T. R. Jackson, and C. S. Adams. 
“Coherent Optical Detection of Highly Excited Rydberg States Using Electromagnetically Induced Transparency.” Physical review letters 98.11 (2007): 113003. [2] - Downes, Lucy A., et al. “Full-field terahertz imaging at kilohertz frame rates using atomic vapor.” Physical Review X 10.1 (2020): 011027.
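The temperature-to-field conversion mentioned above can be sketched as follows, assuming an ideal isotropic aperture A_eff = λ²/(4π) and Rayleigh-Jeans noise power P = k_B·T·B. The 100 K scene and 10 GHz carrier are illustrative choices, though the 10 MHz bandwidth matches the instantaneous bandwidth quoted in the abstract:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
ETA_0 = 376.73       # impedance of free space, ohm

def field_for_antenna_temp(t_ant_k, bandwidth_hz, freq_hz):
    """RMS electric field equivalent to an antenna temperature t_ant_k,
    for an ideal isotropic aperture A_eff = λ²/(4π)."""
    lam = 3e8 / freq_hz
    a_eff = lam**2 / (4 * math.pi)
    power = K_B * t_ant_k * bandwidth_hz   # Rayleigh-Jeans noise power, W
    s = power / a_eff                      # incident power density, W/m²
    return math.sqrt(s * ETA_0)            # E_rms from S = E²/η₀, V/m

# Example: a 100 K scene in a 10 MHz instantaneous bandwidth at 10 GHz
# corresponds to a field of roughly a quarter of a millivolt per metre.
print(field_for_antenna_temp(100, 10e6, 10e9))
```

This is the kind of translation the abstract refers to: a radiometer specification in kelvin maps onto a field sensitivity in V/m that a Rydberg Receiver must resolve.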

Thursday 26 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Future Satellite Gravimetry: The Quantum Leap?

Authors: Philipp Zingerle, Prof. Dr. Roland Pail, Dr. Thomas Gruber
Affiliations: Technical University Of Munich
Quantum sensors such as cold-atom interferometers (CAI) are among the most promising novel sensor technologies in the context of future satellite gravimetry, to replace or be hybridized with classical electrostatic accelerometers. The main advantage of CAI sensors is their close-to-flat error spectrum over the whole spectral range, with the potential to provide significantly better and more long-term stable acceleration measurements at long wavelengths, as well as a stable scale. Quantum accelerometers can either be used to measure non-gravitational accelerations in satellite-to-satellite tracking mission concepts (low-low and high-low), or gravity differences over very short baselines in the frame of a gradiometer concept. Based on the results of several science studies on the future potential of quantum gravimetry, in this presentation we will give a high-level summary of the main conclusions. Independently of the mission concept, it has to be noted that currently instrument errors are not the dominant contributor to the total error budget: errors resulting from temporal aliasing are larger by a factor of 10 to 1000. Therefore, in order to be able to exploit the benefits of quantum sensors (and other novel sensor technologies) at all, it is important to focus in parallel on strategies to reduce the temporal aliasing problem, which might be achieved by improving the measurement geometry and the related space-time sampling of the target signal by means of enhanced multi-pair constellations, or by advanced parameterization strategies. From the viewpoint of instrument error contributions, for satellite-to-satellite tracking missions it turns out that significant progress in inter-satellite ranging technology is also required to fully exploit the benefits of quantum/hybrid accelerometers. 
Regarding gravity gradiometry, a very high sensor performance of 10⁻¹⁴ to 10⁻¹⁵ m/s²/√Hz is required to achieve reasonable sensitivity to the temporal gravity field. The main limitation turns out to be the necessity to co-measure the attitude with very high precision, not only to exploit the high accelerometer performance, but also to separate linear and rotational acceleration effects. Only a full 3D-gradiometer set-up will enable us to tackle this problem at all. Based on the results of numerous extensive numerical closed-loop simulation experiments, in this contribution we quantify the individual error sources contributing to the achievable gravity field performance. We identify and summarize in a systematic way the benefits, prospects and limitations of quantum sensors for future gravity and mass transport monitoring from space, in order to set the current expectations of this new technology on solid ground.
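To put the quoted accelerometer performance in gradiometric terms: differencing two accelerometers with uncorrelated noise a_n over a baseline L yields a gradient noise of √2·a_n/L. A back-of-envelope sketch (the 0.5 m baseline is our illustrative assumption, not a figure from the studies):

```python
import math

def gradient_noise(accel_noise, baseline_m):
    """Gradient noise spectral density from differencing two accelerometers
    with uncorrelated noise accel_noise (m/s²/√Hz) over baseline_m metres:
    sqrt(2) * a_n / L, in s⁻²/√Hz."""
    return math.sqrt(2) * accel_noise / baseline_m

# An accelerometer at the quoted 1e-14 m/s²/√Hz class, on an assumed
# 0.5 m baseline:
print(gradient_noise(1e-14, 0.5))  # ≈ 2.8e-14 s⁻²/√Hz
```

The short baseline is why such extreme accelerometer performance is needed, and why attitude errors (which couple in through the same differencing) become the limiting factor noted above.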

Thursday 26 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: From lab to in-orbit demonstration: Journey of quantum diamond-based magnetic field sensor development

Authors: Yarne Beerden, Simon Wilmots, Prof. Dr. Anna Ermakova, Prof. Dr. Milos Nesladek, Dr. Jaroslav Hruby
Affiliations: Institute for Material Research (IMO), Hasselt University, IMOMEC division, IMEC, BIRA-IASB
Magnetic field measurements from low Earth orbit are important for understanding Earth’s magnetosphere, monitoring of space weather, and studying geophysical processes. These measurements provide insights into solar-terrestrial interactions and help mitigate risks to both orbital infrastructure and terrestrial systems. However, classical magnetic sensors face limitations in sensitivity, long-term stability, and integrability due to size, weight, and power (SWaP) constraints of space missions. As with the onset of small platforms (i.e. cubesats), these challenges become increasingly apparent. With quantum sensing technologies emerging as a transformative solution for future Earth Observation missions. One particularly promising platform is quantum magnetic field sensors based on nitrogen-vacancy (NV) centers in diamond. Their unique electronic and spin properties enable scalar and vector magnetometry with high theoretical sensitivity, robustness, radiation hardness, and potential for miniaturization make them an ideal candidate for space applications. NV-based sensors combine the advantages of high performance and integrability, offering an alternative to overcome the limitations of classical sensors. Despite these benefits, quantum sensors still face challenges such as the need for further power consumption optimization, noise suppression, and integration into spacecraft systems. The OSCAR project series presents miniaturized and fully integrated diamond magnetometry solutions for space applications, transforming them from lab-scale prototypes to operational in-orbit demonstrations. The project consists of an interdisciplinary team of students, achieving several significant milestones in the technology development. This includes the successful deployment of the OSCAR-QUBE mission aboard the International Space Station (ISS). 
This 1U (10x10x10 cm³) device, with a sensitivity of 300 nT/√Hz and <40 Hz measurement rate, collected magnetic field data over ten months, mapping Earth’s total magnetic field, verified using the CHAOS-7 model. Building on the lessons learned of this mission, the OSCAR-QUBE+ sensor reduced the volume to ~0.4U (8.2x7.8x6.4 cm³), improved sensitivity to 30 nT/√Hz, and reduced power consumption, tested aboard the Ariane 6 inaugural flight. The latest iteration, OSCAR-PINQ, aims to further refine SWaP parameters, targeting 0.2U form factor while introducing advanced concepts and operational modes to increase the sensing performance down to 100pT/√Hz. The OSCAR team focuses on integrating strategies to enhance sensing performance while minimizing resource consumption. This includes advancements in readout methods, noise management, and efficient data downlink protocols to optimize bandwidth. NV centers enable both scalar and vector magnetometry, with novel operational modes further enabling increased precision and sensitivity. Alongside technological and scientific innovation, the OSCAR team focuses on personal growth of students, by offering hands-on experience by participating in competitions and initiatives like ESA ‘Orbit Your Thesis!’ and the ESA Academy experiment programme, providing students with the resources and framework to develop quantum sensing technology, demonstrating the potential of involvement of students in research. Lessons learned from in-orbit operation of the technology have contributed to novel strategies for noise suppression, and the optimization of sensor performance under space conditions. We present laboratory performance benchmarking and operational concepts, together with the most recent developments of OSCAR-PINQ iteration. We show the sensor’s specifications, improvements and calibrations, demonstrating the precision required to monitor Earth’s magnetic field dynamics and geophysical phenomena. 
We conclude with the overall status and a projection of the next steps of the OSCAR journey, and its potential for use in multi-sensor platforms or constellations for future space missions.
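As a rough sanity check on the sensitivity figures quoted above, the RMS noise of a white-noise-limited magnetometer grows with the square root of the measurement bandwidth, σ = S·√B. A minimal sketch; the 1 Hz bandwidth and the white-noise assumption are illustrative, not taken from the abstract:

```python
import math

def rms_noise(sensitivity_t_per_sqrt_hz: float, bandwidth_hz: float) -> float:
    """RMS magnetic noise for a white-noise-limited magnetometer.

    The sensitivity is the noise spectral density (T/sqrt(Hz));
    integrating white noise over a measurement bandwidth B gives
    sigma = sensitivity * sqrt(B).
    """
    return sensitivity_t_per_sqrt_hz * math.sqrt(bandwidth_hz)

# Sensitivities quoted for the three OSCAR iterations, at an assumed 1 Hz bandwidth:
for name, sens in [("OSCAR-QUBE", 300e-9),
                   ("OSCAR-QUBE+", 30e-9),
                   ("OSCAR-PINQ target", 100e-12)]:
    print(f"{name}: {rms_noise(sens, 1.0) * 1e9:.3f} nT RMS at 1 Hz")
```

Doubling the bandwidth raises the noise floor by √2, which is why the abstract's emphasis on readout and noise management matters for the targeted pT-level performance.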

Thursday 26 June 14:00 - 15:30 (Hall F1)

Session: D.02.03 Approaches, accuracies, applications: next generation large-area land change monitoring - PART 1

Land cover data are used by various user groups to better understand current landscapes and the human impact on them. Besides being a core information layer for a variety of scientific and administrative tasks, land cover and change data help organizations, stakeholders, politicians, and individual users to assess urban growth and trends in forest change, assist in land management and planning, track wetland losses and potential impacts from sea level rise, prioritize areas for conservation efforts, and assess the impact of climate change on socio-economic and socio-ecological systems as well as ecosystem services.

In recent years, several global and continental land cover and change datasets have been developed at increasingly high resolutions. These datasets, built from higher-spatial-resolution data (e.g. Sentinel-1 and -2, Landsat, and in some cases very high-resolution imagery), can significantly improve the characterization of the Earth’s land surface dynamics and provide land cover and change assessments at finer spatial scales and with greater thematic detail. The newly available data, in combination with more powerful computing infrastructure and advances in deep learning/artificial intelligence techniques, allow for on-demand, adaptable land cover monitoring and applications.

The session provides a forum for the different players in this field to present the latest approaches, including advanced methods for land cover change analysis (incl. AI) and near-real-time land cover monitoring, new thematic details moving towards land use change monitoring, and evolving large-area programs and services and how they support different users at global to national levels.

Thursday 26 June 14:00 - 15:30 (Hall F1)

Presentation: High-Resolution Land Cover Maps: Assessment Methods, Challenges, and Prospects

Authors: Qiongjie Xu, Dr. Vasil Yordanov, Prof. Maria Antonia Brovelli
Affiliations: Politecnico di Milano, Department of Civil and Environmental Engineering, GEOLab
Land cover (LC) maps are vital for environmental monitoring, urban planning, biodiversity conservation, and climate change research. These maps provide detailed spatial and thematic information, enabling researchers and policymakers to make informed decisions. The rapid increase in available high-resolution land cover (HRLC) datasets, facilitated by recent advancements in satellite data acquisition, cloud-based processing, and deep learning, has greatly enhanced the accessibility and production of HRLC maps. The reliability of these maps depends heavily on their quality and evaluation, which demands robust validation practices. Despite advancements in the technologies used to produce HRLC maps, comparing and assessing global land cover products still presents persistent challenges, arising from discrepancies in spatial resolution, classification legends, and validation protocols. These challenges undermine the reliability and comparability of global HRLC datasets. This study critically discusses these issues by providing a comprehensive review of HRLC maps published in recent years, with the main focus on evaluation methods, encompassing the production process, applications, validation techniques, and future directions for HRLC development. The research begins by introducing HRLC maps, detailing their definition, significance, and key attributes, such as spatial resolution, thematic detail, and temporal coverage, which highlight HRLCs' essential role in addressing global and regional environmental challenges. It then explores the technologies and data sources used to produce HRLCs, highlighting advancements in remote sensing, satellite platforms, and data processing techniques. Despite these developments, limitations in data availability, data quality, and preprocessing persist, underscoring the importance of robust validation practices.
The diverse applications of HRLCs are then examined, illustrating their importance across domains such as urban development, agricultural management, natural resource conservation, and disaster response. These applications demonstrate the pressing need for accurate and reliable validation to ensure HRLCs can fulfil their intended purpose. Validation challenges are also explored, including discrepancies in coordinate reference systems (CRS), variations in spatial resolution, and inconsistencies in classification legends, all of which can hinder the comparability and usability of HRLCs in global-scale analyses. For example, a mismatch between the classification schemes of different LC datasets limits their ability to be efficiently cross-validated and integrated into a modelling pipeline. Additionally, the classification schemes used in different HRLCs are typically developed to meet the specific objectives or application contexts of each study, resulting in variations in the categories and definitions applied. Harmonizing these schemes often requires subjective decisions, which can influence the accuracy and consistency of the final maps. As a result, this process is inherently challenging and may introduce further errors. Consequently, resolving these validation and harmonization challenges is essential for enhancing the reliability of HRLC datasets, particularly in global and interdisciplinary applications. Next, the methodologies and metrics commonly applied for HRLC validation are investigated, with sampling-based accuracy assessments being the most widely used. These evaluations often rely on ground-truth data and various sampling schemes to derive confusion matrices, from which metrics such as overall accuracy (OA), producer's accuracy (PA), user's accuracy (UA), and the Kappa coefficient are calculated. Additionally, comparisons with similar LC products are occasionally provided, offering further reference points for HRLC utility.
Despite the extensive use of such methods, they still face challenges, necessitating more robust and scalable validation practices. For example, the assignment of class labels to ground-truth or reference data for validation is typically performed manually and is prone to subjective interpretation. This subjectivity can result in inconsistencies and thematic errors, as the labelling process may differ based on an individual's perception of land cover classes, the specific research context, or the quality and resolution of the reference data. These inconsistencies can compromise the reliability of validation outcomes and introduce uncertainty into HRLC map accuracy assessments. Finally, critical areas for improvement in validation methods and technologies are identified. Key recommendations include adopting multi-source data fusion approaches, developing standardized validation protocols, and integrating AI-driven methods to address current limitations and streamline validation workflows. By providing a detailed analysis of HRLC production and validation processes, this review serves as a comprehensive resource for researchers and practitioners, offering valuable insights into HRLC validation practices and highlighting opportunities for future advancements in the field.
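The confusion-matrix metrics named above (OA, PA, UA, Kappa) can be computed directly from the matrix. A minimal NumPy sketch with an illustrative two-class matrix; the rows-as-reference convention is an assumption, not fixed by the abstract:

```python
import numpy as np

def accuracy_metrics(cm: np.ndarray):
    """Standard accuracy metrics from a confusion matrix.

    cm[i, j] = number of reference samples of class i mapped as class j
    (rows = reference, columns = map).
    """
    n = cm.sum()
    oa = np.trace(cm) / n                                  # overall accuracy
    pa = np.diag(cm) / cm.sum(axis=1)                      # producer's accuracy
    ua = np.diag(cm) / cm.sum(axis=0)                      # user's accuracy
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, pa, ua, kappa

# Illustrative 100-sample, two-class assessment:
cm = np.array([[50, 5],
               [10, 35]])
oa, pa, ua, kappa = accuracy_metrics(cm)
print(f"OA={oa:.2f}, PA={pa.round(2)}, UA={ua.round(2)}, kappa={kappa:.2f}")
```

Producer's accuracy (omission errors) and user's accuracy (commission errors) generally differ per class, which is why overall accuracy alone is insufficient for comparing HRLC products.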

Thursday 26 June 14:00 - 15:30 (Hall F1)

Presentation: Framework for Annual Global Land Use and Land Cover Map Fusion at 30-Meter Resolution (2000–2023)

Authors: Rolf Simoes, Tomislav Hengl, Martijn Witjes, Maulana Wibisana
Affiliations: OpenGeoHub Foundation
Land use and land cover maps are essential tools for environmental studies and land monitoring. Currently, several global land use and land cover maps exist, each with different resolutions, legend systems, and methodologies. Many of these maps focus on detecting specific classes, such as the Global Land-Cover (GLC-FCS30D) [1], Global Mangrove Watch (GMW) [2], Global Surface Water (GLAD_GSW) [3], Global Cropland Expansion (GLAD_GCE) [4], and Global Pasture Watch (GPW) [5]. The fusion of these maps represents an opportunity to produce more reliable, robust, and detailed global maps. This paper presents a framework for land use and land cover map fusion, developed to harmonize and integrate data from diverse global, national, and regional sources. The framework is designed to address challenges associated with varied classification systems, data gaps, and spatial and temporal inconsistencies while leveraging the potential of multiple datasets. As a case study, the framework was applied to produce annual global land use and land cover maps for the period 2000 to 2023 at a resolution of 30 meters, combining existing maps using data fusion methods. Machine learning algorithms such as CatBoost [6] and Random Forest [7] were used. The results were compared with traditional data fusion methods, such as the Dempster-Shafer evidence theory [8]. The process involved harmonizing land cover classes from reference points using the GLanCE [9] level 2 legend as the primary classification system. In addition to the GLanCE legend classes, the harmonization process included additional classes, such as cultivated fields and pastures, mangroves, and tundra. These reference points were obtained from other sources [5, 10–12]. Some of these classes represent critical ecosystems that are often under-represented in global datasets, such as mangroves. 
The harmonized classes included the following: Water, Permanent snow and ice, Developed/urban, Bare soil, Rock outcrops, Shifting sand, Deciduous forest, Evergreen forest, Mixed forest, Shrubland, Natural/semi-natural grassland, Cropland, Moss/lichen, Cultivated grasslands/pastures, Wetlands, Tundra, and Mangroves. This harmonization ensured that the final fused maps comprehensively captured diverse types of land cover. The categorical and numerical data available in the GLC-FCS30D, GMW, GLAD_GSW, GLAD_GCE, and GPW maps were used as features to train the models. For Random Forest, categorical variables were converted into indicator variables, one for each class. The models were applied for each year, producing categorical and probabilistic maps, which were organized into global mosaics. The final products included global land use and land cover maps, probability maps, and uncertainty maps, all with a spatial resolution of 30 meters. To validate the models, cross-validation was used, achieving a global accuracy exceeding 80%. The maps produced are available in cloud-optimized formats and can be accessed through open catalogues, promoting reuse by researchers and policymakers. The framework offers a robust and adaptable solution for land use and land cover data fusion, with direct applications. Its pipeline was designed to ensure the reproducibility of global map fusion. Its modular architecture allows for the inclusion of new input maps or reference datasets without requiring a complete revision of the process. By integrating data from different sources, resolutions, and classification systems, the method seeks to take advantage of the strengths, details, and quality of each map. References [1] Xiao Zhang et al. “GLC_FCS30D: the first global 30 m land-cover dynamics monitoring product with a fine classification system for the period from 1985 to 2022 generated using dense-time-series Landsat imagery and the continuous change-detection method”.
In: Earth System Science Data 16.3 (2024), pp. 1353–1381. [2] Pete Bunting et al. “Global Mangrove Extent Change 1996–2020: Global Mangrove Watch Version 3.0”. In: Remote Sensing 14.15 (2022), p. 3657. [3] Amy H. Pickens et al. “Mapping and sampling to characterize global inland water dynamics from 1999 to 2018 with full Landsat time-series”. In: Remote Sensing of Environment 243 (2020), p. 111792. [4] Peter Potapov et al. “Global maps of cropland extent and change show accelerated cropland expansion in the twenty-first century”. In: Nature Food 3 (2022), pp. 19–28. [5] Leandro Parente et al. “Mapping global grassland dynamics 2000–2022 at 30m spatial resolution using spatiotemporal Machine Learning”. In: Research Square (2024). [6] Liudmila Prokhorenkova et al. “CatBoost: unbiased boosting with categorical features”. In: Advances in neural information processing systems 31 (2018). [7] Leo Breiman. “Random forests”. In: Machine learning 45 (2001), pp. 5–32. [8] Bingjie Li et al. “An improved global land cover mapping in 2015 with 30 m resolution (GLC-2015) based on a multi-source product fusion approach”. In: Earth System Science Data Discussions 2022 (2022), pp. 1–35. [9] Mark A Friedl et al. “Medium spatial resolution mapping of global land cover and land cover change across multiple decades from Landsat”. In: Frontiers in Remote Sensing 3 (2022), p. 894571. [10] Radost Stanimirova et al. “A global land cover training dataset from 1984 to 2020”. In: Scientific Data 10.879 (2023). [11] Alexander M. Tait et al. Dynamic World training dataset for global land use and land cover categorization of satellite imagery. PANGAEA, 2021. [12] Tania L. Maxwell et al. “Global mangrove soil organic carbon stocks dataset at 30 m resolution for the year 2020 based on spatiotemporal predictive machine learning”. In: Data in Brief 50 (2023), p. 109621.
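For singleton (Bayesian) mass functions, the Dempster-Shafer combination used as a comparison baseline above reduces to a normalized element-wise product of the per-source evidence. A minimal sketch with made-up per-map class probabilities, not values from the study:

```python
import numpy as np

def dempster_combine(masses: np.ndarray) -> np.ndarray:
    """Combine per-source class evidence with Dempster's rule.

    masses: (n_sources, n_classes) array of basic probability
    assignments restricted to singleton classes. For singletons,
    Dempster's rule reduces to a normalized element-wise product.
    """
    combined = np.prod(masses, axis=0)
    k = combined.sum()            # 1 - conflict between the sources
    if k == 0:
        raise ValueError("total conflict between sources")
    return combined / k

# Three input maps voting on {water, forest, cropland} for one pixel;
# the weights loosely stand in for each product's per-class reliability:
m = np.array([
    [0.7, 0.2, 0.1],   # map A
    [0.6, 0.3, 0.1],   # map B
    [0.2, 0.5, 0.3],   # map C
])
fused = dempster_combine(m)
print(fused.round(3), "->", fused.argmax())
```

The agreement of maps A and B outweighs map C's dissent here; a machine-learning fusion such as the Random Forest described in the abstract instead learns these per-source reliabilities from the harmonized reference points.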

Thursday 26 June 14:00 - 15:30 (Hall F1)

Presentation: Geo-Wiki Evolution: a Set of Innovative AI-driven Annotation Tools for Land Change Monitoring

Authors: Myroslava Lesiv, Daniele Zanaga, Monica Shishodia, Darius Couchard, Dr Linda See, Dr Ian McCallum, Andrea Lupi, Baudouin Desclée, Ruben Van De Kerchove, Steffen Fritz
Affiliations: IIASA, VITO, Joint Research Centre, European Commission
Land cover derived from remotely sensed Earth Observation (EO) products is an important input to different global, regional, and national scale applications, including resource assessments and economic land use models. During the last decade, several global land cover datasets have been created, but comparison studies have shown that there are large spatial discrepancies between these products. One of the reasons for these discrepancies is the lack of sufficient in-situ data for their development. To address this issue, a platform called Geo-Wiki was developed in 2010. The Geo-Wiki platform was designed to increase the amount of in-situ land cover data available for training and validation, and to contribute data for the development of hybrid global land cover maps, which provide more accurate land cover information than any current individual products. The domain of EO-based land change monitoring is advancing, in parallel with innovations in the Geo-Wiki platform. More specifically, we have developed a new state-of-the-art annotation system for reference data collection using AI-driven guidance for land cover annotation. This new user-friendly interface represents a significant advance in the data collection capabilities of Geo-Wiki, leveraging AI to streamline the annotation workflow, minimize bias and errors in interpretation, and promote consistency across annotators. The interface also provides all available information in one location (i.e., very high-resolution imagery, street level views, Sentinel-1 and -2 yearly median composites, NDVI time-series, etc.), helping annotators make more accurate decisions during the labelling process of different EO products. A key novelty of the new annotation system is the generation of high-resolution segments obtained through a combination of machine learning algorithms. These segments can be quickly selected and tagged, replacing the daunting requirement of a slow and detailed manual delineation over large areas.
The annotators then validate these segments using the available information provided within the toolbox, choosing which land cover description or category best fits each segment. The AI-assisted algorithms are then re-run, allowing the annotators' input to serve as improved in-situ training data and thereby progressively increasing the algorithm’s accuracy. For the first time, we will present our new approaches and tools for land change data collection. We will also provide examples of data collected using these new tools in the framework of the Copernicus Global Land Cover and Tropical Forest Mapping and Monitoring service (LCFM) and the Horizon Europe EvoLand project.

Thursday 26 June 14:00 - 15:30 (Hall F1)

Presentation: Enhancing Global Land Cover and Tropical Forest Monitoring: The Copernicus LCFM Service

Authors: Andreas Brink, Dr. Ruben Van De Kerchove, Benjamin Argyle-Ross, Katja Berger, Luc Bertels, Joris Coddé, Wanda De Keersmaecker, Mathilde De Vroey, Fabian Enßle, Pilar Endara, Steffen Fritz, Xavier Garcia Ramos, Anna Grabatin-Homolka, Martin Gritsch, Martin Herold, Gabriel Jaffrain, Andreea Julea, Yannis Kalfas, Max Kampen, Andreas Langner, Juan-Carlos Laso Bayas, Armin Leitner, Dr Myroslava Lesiv, Antoine Masse, Grega Milcinski, Oscar Muñoz, Tim Ng, Camille Pinet, Michael Riffler, Christian Smit, André Stumpf, Adrian Di Paolo, Carolien TOTE, Victor Verhaert, Stephanie Wegscheider, Daniele Zanaga, Frederic Achard
Affiliations: VITO, GeoVille Information Systems and Data Processing GmbH, Helmholtz GFZ German Research Centre for Geosciences, GAF AG, International Institute for Applied Systems Analysis (IIASA), Telespazio Ibérica, IGN FI, Sinergise Solutions GmbH, Joint Research Centre, European Commission
Monitoring global land cover and tropical forests is critical for addressing global challenges such as deforestation, biodiversity loss, and climate change. These challenges have driven the demand for high-quality, timely, and freely accessible land cover data to inform policies and effectively monitor progress towards the Sustainable Development Goal targets. In response to this need, the Copernicus Global Land Cover and Tropical Forest Mapping and Monitoring Service (LCFM) has been launched as a new initiative under the Copernicus Land Monitoring Service. LCFM focuses on high-resolution mapping and monitoring of global land cover and tropical forests. Led by a consortium headed by VITO, and in collaboration with the European Commission’s Joint Research Centre, the service builds upon previous projects like the 100 m Copernicus Global Land Cover layers (Buchhorn et al., 2020), ESA’s WorldCover (Zanaga et al., 2022), and the REDDCopernicus project (GAF et al., 2021). The LCFM service not only aims to extend the time series of the 100 m Copernicus Global Land Cover layers but also to enhance the existing service by providing a high resolution (10 m), dynamic global land cover service. In fact, LCFM will provide frequent, sub-annual land surface categories and features and consolidate these into global annual land cover maps and tropical forest monitoring products at 10 m resolution. This unprecedented level of detail will facilitate better decision-making and more effective monitoring of environmental changes. Importantly, within the project's lifetime, user requirements are being assessed and consolidated. In addition, new user engagement activities will be carried out to enhance understanding and gather additional requirements to ensure the LCFM outputs are fit-for-purpose. Launched in 2024, the LCFM service plans to release its first public dataset, covering the year 2020, by mid-2025. 
Subsequently, the service will extend its scope, producing similar products for the years 2021 through 2026, ensuring a comprehensive, up-to-date resource for global and tropical forest monitoring. In this presentation, we will introduce the LCFM service and showcase the first data products. Additionally, we will emphasize the key advancements it offers and explore its potential to support downstream applications and policies.

Thursday 26 June 14:00 - 15:30 (Hall F1)

Presentation: Comparative validation of recent 10-30 m-resolution global land cover maps

Authors: Panpan Xu, Prof Martin Herold, Dr Sarah Carter, Nandika Tsendbazar, Dr Fred Stolle
Affiliations: Wageningen University, Deutsches GeoForschungsZentrum, World Resources Institute
Accurate and high-resolution land cover (LC) information is vital for addressing contemporary environmental challenges. With the advancement of satellite data acquisition, cloud-based processing, and deep learning technology, high-resolution Global Land Cover (GLC) map production has become increasingly feasible. With a growing number of available GLC maps, a comprehensive evaluation and comparison is necessary to assess their accuracy and suitability for diverse uses; particularly since several maps are not provided with statistically robust accuracy assessments or sufficient detail on the validation process. Building on the study of Xu et al. (2024), here we present a comparative validation of recent 10-30 m GLC maps, namely ESRI Land Use/Land Cover (LULC), ESA WorldCover, Google and World Resources Institute (WRI)’s Dynamic World, and GLAD GLC map, examining their spatial detail representation and thematic accuracy at global, continental, and national (for 47 countries) levels. Since high-resolution map validation is impacted by reference data uncertainty owing to geolocation and labelling errors, five validation approaches dealing with reference data uncertainty were evaluated. Of the considered approaches, validation using the sample label supplemented by majority label within the neighborhood is found to produce more reasonable accuracy estimates compared to the overly optimistic approach of using any label within the neighborhood and the overly pessimistic approach of direct comparison between the map and reference labels. The four GLC maps varied in accuracy across different LC classes, continents, and countries. Overall global accuracies of the maps ranged between 73.4% ± 0.7% (95% confidence interval) and 83.8% ± 0.4% with WorldCover achieving the highest accuracy followed by GLAD GLC. The maps' spatial detail representation was assessed at various homogeneity levels within a 3 × 3 kernel. 
Although all four are considered high-resolution maps, this study reveals that ESRI LULC and Dynamic World have less spatial detail than WorldCover and GLAD GLC. All maps have lower accuracies in heterogeneous landscapes and in some countries such as Mozambique, Tanzania and Nigeria. To select the most suitable product, users should consider both the map's accuracy over the area of interest and the spatial detail appropriate for their application. For future high-resolution GLC mapping, producers are encouraged to adopt standardized LC class definitions to ensure comparability across maps. Additionally, the spatial detail and accuracy of GLC maps in heterogeneous landscapes and over some countries are the key features that should be improved in future versions of the maps. Anticipating the availability of more high-resolution GLC products in the future, independent validation efforts at regional and national levels should be strengthened to enhance the utility of GLC maps at these scales. Furthermore, to improve the utility of GLC maps for long-term monitoring, more effort is required for the validation and area estimation of LC change. Our ongoing work in Uganda to assess LC change using existing GLC maps will serve as an example of further utilizing GLC maps for LC change applications. Index Terms— land cover validation, comparison of high-resolution maps, global land cover. Related literature: Panpan Xu, Nandin-Erdene Tsendbazar, Martin Herold, Sytze de Bruin, Myke Koopmans, Tanya Birch, Sarah Carter, Steffen Fritz, Myroslava Lesiv, Elise Mazur, Amy Pickens, Peter Potapov, Fred Stolle, Alexandra Tyukavina, Ruben Van De Kerchove, Daniele Zanaga, “Comparative validation of recent 10 m-resolution global land cover maps”, Remote Sensing of Environment, Volume 311, 2024, https://doi.org/10.1016/j.rse.2024.114316.
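The neighborhood-majority validation approach favored above can be sketched as follows; the 3 × 3 window and the exact agreement rule here are a simplified illustration of the idea, not the precise protocol of Xu et al. (2024):

```python
from collections import Counter

import numpy as np

def neighborhood_majority(ref: np.ndarray, row: int, col: int, k: int = 1):
    """Majority class label in a (2k+1)x(2k+1) window of a reference map."""
    win = ref[max(row - k, 0):row + k + 1, max(col - k, 0):col + k + 1]
    return Counter(win.ravel().tolist()).most_common(1)[0][0]

def agree(map_label, sample_label, majority_label) -> bool:
    """A map label counts as correct if it matches the sample's own label
    or the majority label of the sample's neighborhood (the middle ground
    between the optimistic any-label and pessimistic direct-match rules)."""
    return map_label in (sample_label, majority_label)

# Toy reference map: the centre sample is labelled 2, but five of the
# nine surrounding pixels are class 1.
ref = np.array([[1, 1, 2],
                [1, 2, 2],
                [1, 1, 2]])
maj = neighborhood_majority(ref, 1, 1)
print(maj, agree(1, int(ref[1, 1]), maj))
```

A map reporting class 1 at the centre pixel is accepted under this rule, absorbing plausible geolocation error without accepting arbitrary nearby labels.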

Thursday 26 June 14:00 - 15:30 (Hall F1)

Presentation: Attributing direct drivers to pantropical deforestation alerts

Authors: MSc. Bart Slagter, dr. Laura Cue La Rosa, Anika Berger, Sarah Carter, Dr. Marielos Peña-Claros, Dr. Martin Herold, Dr. Johannes Reiche
Affiliations: Wageningen University, World Resources Institute, GeoForschungsZentrum
Advancements in openly available optical and radar satellite imagery have enabled rapid and detailed information about tropical forest changes around the world. Near real-time deforestation alert systems, such as the RADD and GLAD alerts (Hansen et al., 2016; Pickens et al., 2020; Reiche et al., 2021), are now widely adopted to quickly inform on new forest disturbances. These alert systems provide a valuable tool for rapid law enforcement and increased transparency in forest activities. However, current monitoring methods do not allow different disturbance types or direct drivers to be distinguished, which is crucial information for making deforestation alerts more actionable and useful for uptake by different stakeholders (Finer et al., 2018; Weisse et al., 2019). A rapid identification of direct drivers can enable alert prioritization for better law enforcement against illegal forest activities. In addition, driver attributions may lead to more precise estimations of the ecological impacts and carbon emissions associated with specific disturbances. Furthermore, across the pantropical region, driver attributions can help to generically distinguish human-induced from natural forest disturbance, or deforestation from forest degradation. Here, as an extension to deforestation alert systems, we present near real-time direct driver attributions for forest disturbances. Our method is based on a deep learning model inferring information from post-disturbance Sentinel-1 and Sentinel-2 satellite imagery, leveraging the complementary qualities of radar and optical imagery. To demonstrate, we produced monthly driver attributions for all GLAD/RADD integrated alerts (Reiche et al., 2024) between 2020 and 2024 across all tropical forests. The driver attributions will be continuously mapped and openly distributed via Global Forest Watch, as an extension to the integrated RADD/GLAD alerts.
For this study, an extensive training dataset of labelled forest disturbance alerts was collected for the Amazon, Congo and Southeast Asian rainforests, based on visual interpretation of Planet imagery. A convolutional neural network was trained with post-disturbance Sentinel-1 and Sentinel-2 imagery to classify alerts as large-scale agriculture, small-scale agriculture, road development, selective logging, mining, flooding, wildfire or other. The model and classification scheme were specifically designed to generalize across all tropical forest biomes, ensuring a robust performance for diverse regional forest disturbance patterns. The results present a classification accuracy of 0.89 Macro-F1 score, demonstrating strong capabilities to distinguish direct drivers of disturbances in near real-time. Most confusion was observed between selective logging and small-scale natural disturbances, and between wildfires and large-scale agricultural clearings with controlled fire use. After filtering out driver attributions with a low classification confidence, a substantial amount of these common errors could be removed. Also, grouping specific driver classes together can enable higher-accuracy maps of, for example, human-induced versus natural disturbance, or deforestation versus forest degradation. References - Finer, B. M., Novoa, S., Weisse, M. J., Petersen, R., Mascaro, J., Souto, T., Stearns, F., & Martinez, R. G. (2018). Combating deforestation: From satellite to intervention. Science, 360(6395), 1303–1305. https://doi.org/10.1126/science.aat1203 - Hansen, M. C., Krylov, A., Tyukavina, A., Potapov, P. V., Turubanova, S., Zutta, B., Ifo, S., Margono, B., Stolle, F., & Moore, R. (2016). Humid tropical forest disturbance alerts using Landsat data. Environmental Research Letters, 11(3). https://doi.org/10.1088/1748-9326/11/3/034008 - Pickens, A. H., Hansen, M. C., Adusei, B., & Potapov, P. (2020). Sentinel-2 Forest Loss Alert. Global Land Analysis and Discovery (GLAD). 
www.globalforestwatch.org - Reiche, J., Balling, J., Pickens, A. H., Masolele, R., Carter, S., Berger, A., Gou, Y., Donchyts, G., Slagter, B., Mannarino, D., & Weisse, M. J. (2024). Integrating satellite-based forest disturbance alerts improves detection timeliness and confidence. Environmental Research Letters. - Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N. E., Odongo-Braun, C., Vollrath, A., Weisse, M. J., Stolle, F., Pickens, A., Donchyts, G., Clinton, N., Gorelick, N., & Herold, M. (2021). Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters, 16(2). https://doi.org/10.1088/1748-9326/abd0a8 - Weisse, M. J., Noguerón, R., Eduardo, R., Vicencio, V., Arturo, D., & Soto, C. (2019). Use of Near-Real-Time Deforestation Alerts: a Case Study From Peru. https://files.wri.org/d8/s3fs-public/use-near-real-time-deforestation-alerts.pdf
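The confidence filtering and class grouping described above can be sketched as a post-processing step on the classifier's output probabilities. The threshold value and the human-induced grouping below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

CLASSES = ["large-scale agriculture", "small-scale agriculture", "road development",
           "selective logging", "mining", "flooding", "wildfire", "other"]
# Hypothetical coarse grouping for illustration only:
HUMAN_INDUCED = {0, 1, 2, 3, 4}

def attribute(probs: np.ndarray, threshold: float = 0.6):
    """Assign a driver per alert, masking low-confidence predictions.

    probs: (n_alerts, n_classes) class probabilities from the classifier.
    Returns the driver index per alert (-1 where the top probability falls
    below the confidence threshold) and a coarse human-induced flag.
    """
    conf = probs.max(axis=1)
    driver = probs.argmax(axis=1)
    driver[conf < threshold] = -1                    # filtered out
    human = np.isin(driver, list(HUMAN_INDUCED))
    return driver, human

probs = np.array([[0.80, 0.05, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02],  # confident
                  [0.30, 0.25, 0.10, 0.10, 0.05, 0.05, 0.10, 0.05]]) # ambiguous
driver, human = attribute(probs)
print(driver, human)
```

Masking the ambiguous second alert mirrors how the abstract removes a substantial share of the logging/natural and wildfire/agriculture confusions, while the grouping yields higher-accuracy human-vs-natural maps.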

Thursday 26 June 14:00 - 15:30 (Hall K2)

Session: A.01.04 Advancing Air Quality Monitoring from Space - PART 3

Air pollution is a major concern throughout the world in both developed and developing countries, especially in urban areas, where over 50% of the world population lives. Swelling urban populations and increased volumes of motorized traffic in cities have resulted in severe air pollution affecting the surrounding environment and human health. The European Environment Agency (EEA) estimates that 40-50% of air pollution in the most important European cities is attributed to vehicular emissions.

The investigation of air pollution over megacities by means of satellite observations has recently become a central topic of interest within the air pollution community, especially thanks to the observing capabilities of Sentinel-5P in terms of spatial resolution.

Nonetheless, space-borne platforms alone cannot provide a full picture. To investigate the spatio-temporal variability of air pollution on local, regional, and global scales, new tools are being developed. In this context, the detection, tracking, and understanding of pollutant transport on various spatial scales are of both local and global interest. Specifically, in rural and remote areas, where no ground-based air pollution monitoring network is available, satellite data can provide an estimate of the regional distribution of pollutants, in order to assess the impact of specific events (e.g., biomass burning or dust storm outbreaks).

Satellites observe air pollution in the troposphere, and its relation to surface concentrations must first be resolved for air quality monitoring applications. This session is dedicated to presenting new algorithms and approaches for the downscaling of air quality satellite observations and to exploring novel assimilation methodologies that combine satellite retrievals with in-situ measurements and air quality modelling, considering all relevant satellite missions (e.g. Sentinel-5P), the future availability of hourly observations from Sentinel-4, and other future capabilities such as Sentinel-5 and CO2M.

Thursday 26 June 14:00 - 15:30 (Hall K2)

Presentation: Using GEOS-Chem vertical profiles for an improved IASI-NH3 product

Authors: Martin Van Damme, Dr. Lieven Clarisse, Ruijun Dang, Daniel J. Jacob, Bruno Franco, Jenny Stavrakou, Jean-François Müller, Cathy Clerbaux, Professor Pierre Coheur
Affiliations: Université libre de Bruxelles (ULB), Brussels Laboratory of the Universe (BLU-ULB), Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing (SQUARES), Royal Belgian Institute for Space Aeronomy, John A. Paulson School of Engineering and Applied Sciences, Harvard University, LATMOS/IPSL, Sorbonne Université, UVSQ, CNRS
The discovery over a decade ago that satellites were able to measure atmospheric ammonia (NH3) led to major advances in our understanding of its emission sources, transport, and atmospheric pathways. The Infrared Atmospheric Sounding Interferometer (IASI) onboard the Metop series of satellites has been especially impactful and currently holds over a 15-year record of measurements, allowing for a detailed assessment of local sources and long-term trends. Since the first global distribution acquired in 2008, IASI-NH3 data products have been continuously improved, with progressively better accuracy and consistency, and are now widely used by the scientific community. Satellite NH3 measurements provide total columns of NH3, as there is insufficient information in the observed spectrum to derive vertical profile information. The inverse methods applied therefore rely on an estimate of the shape of the vertical profile, which has been shown to lead to potentially large errors in the retrieved columns. While previous retrieval versions already allowed flexibility in the assumptions on the vertical distribution, the absence of averaging kernels (AVKs) has until recently hampered model comparison and assimilation. Here we briefly describe version 4 of the retrieval framework called ANNI, which introduces the calculation of AVKs. We show how the reprocessing of the IASI-NH3 dataset using vertical profile shapes from the GEOS-Chem chemistry transport model provides much improved distributions in all seasons, with a strong reduction of land-sea discontinuities. It also leads to a major improvement of the IASI-NH3 nighttime distributions, pointing to a misrepresentation of the NH3 vertical profile in the current baseline ANNI retrieval. Based on these results, we present an improved parametrization of the vertical profile shapes to be considered for the IASI baseline product.
A better parameterization of how NH3 is vertically distributed, and how this distribution evolves over the course of the day, is crucial to maximizing the potential of upcoming geostationary sounding missions, such as IRS.
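The role the averaging kernels play in model comparison can be sketched with the standard linear smoothing relation, x_smoothed = x_a + A(x_model − x_a), which expresses what the instrument would "see" of a model state. The profile, a priori and kernel values below are entirely synthetic and for illustration only; real AVKs are distributed with the retrieval product.

```python
import numpy as np

# Illustrative 5-layer NH3 partial-column profile (arbitrary units);
# all values are made up for demonstration.
x_model = np.array([3.0, 2.0, 1.0, 0.4, 0.1])    # model (e.g. CTM) profile
x_apriori = np.array([2.5, 1.8, 1.2, 0.5, 0.2])  # retrieval a priori profile

# Hypothetical averaging-kernel matrix A (rows: retrieval levels).
# A real AVK comes with the satellite product; this one is synthetic.
A = np.array([
    [0.6, 0.2, 0.1, 0.0, 0.0],
    [0.2, 0.5, 0.2, 0.1, 0.0],
    [0.1, 0.2, 0.4, 0.2, 0.1],
    [0.0, 0.1, 0.2, 0.3, 0.1],
    [0.0, 0.0, 0.1, 0.1, 0.2],
])

# Standard linear smoothing: the model state as the retrieval would see it.
x_smoothed = x_apriori + A @ (x_model - x_apriori)

# Compare columns (sums of partial columns) rather than raw profiles.
print("model column:   ", x_model.sum())
print("smoothed column:", x_smoothed.sum())
```

Comparing the smoothed model column to the retrieved column, rather than the raw model column, removes the retrieval's vertical-sensitivity effects from the comparison.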

Thursday 26 June 14:00 - 15:30 (Hall K2)

Presentation: Leveraging Deep Learning for Super-Resolution of GOME-2 Atmospheric Data Using TROPOMI Observations

Authors: Riccardo Ratta, Simone Mantovani, Maximilien Houël, Sebastiano Fabio Schifano, Federico Fierli
Affiliations: Università Degli Studi Di Ferrara, Meteorological and Environmental Earth Observation, SISTEMA GmbH, European Organisation for the Exploitation of Meteorological Satellites
The Global Ozone Monitoring Experiment 2 (GOME-2) and the TROPOspheric Monitoring Instrument (TROPOMI) are two significant satellite-based instruments dedicated to monitoring Earth’s atmosphere. GOME-2, part of the MetOp platform, has been operational since 2006, and was originally developed to monitor the ozone layer in the atmosphere. However, its onboard spectrometer can also detect pollutant gases, including NO₂, which we will use as an initial example for this study. Unfortunately, GOME-2’s spatial resolution is very coarse; a single data point covers an area of approximately 40 km x 80 km, which provides a broad view of atmospheric composition but limits its effectiveness in capturing fine-scale variations over cities and other areas of human activity. Would it be useful to employ high-resolution data from TROPOMI (3.5 km x 5.5 km) to super-resolve low-resolution GOME-2 observations dating back to 2006? What effects, not only on image geometry, but also on the radiometric values, would this process involve? And lastly, what does this experiment teach us about the future of space observations? Does this enable, in the future, the use of even higher resolution instruments to “go back in time” and super-resolve past low-resolution data, providing a better understanding of changes in Earth’s atmosphere during decades of intense human activity? This study investigates whether TROPOMI high-resolution data can be utilized to super-resolve GOME-2 observations, potentially yielding insights into atmospheric changes dating back to 2006. We explore the implications of this process on spatial and radiometric accuracy and consider its broader significance for the future of satellite observations. To develop a model to answer our initial questions, we propose a novel approach with deep learning to better leverage the abundance of available training data.
We used a combination of Residual Dense Blocks (RDBs), which state-of-the-art studies have shown to outperform similar Convolutional Neural Networks (CNNs) and Generative Neural Networks (GNNs). Then, to effectively train our model, we addressed challenges such as the significant resolution disparity between GOME-2 and TROPOMI (approximately a factor of 10), which requires operating in a significantly larger pixel space, increasing the memory required for training, and the problem of missing data in atmospheric acquisitions, mainly due to cloud cover. Finally, we trained the model on one year of data (2023) over 10 selected locations and evaluated its performance using the Pandonia Global Network (PGN), showing an improvement not only in the reconstruction of fine details but also better agreement with the absolute reported NO₂ value compared with the original data. We are currently working on expanding the dataset, both temporally and spatially, to test the limits of our approach and see if it is possible to further improve the model output. Another active research area is the extension of the approach to other common trace gases in the model. We hope to enhance the utility of this approach for broader applications in atmospheric science and to highlight the potential of leveraging deep learning super-resolution for atmospheric data.
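For context, the footprint figures quoted above can be turned into a quick back-of-envelope calculation of the disparity the super-resolution model must bridge; the footprints are the approximate values given in the abstract.

```python
# Back-of-envelope check of the GOME-2 vs TROPOMI resolution disparity.
gome2_km = (40.0, 80.0)   # approximate GOME-2 footprint (across x along track)
tropomi_km = (3.5, 5.5)   # approximate TROPOMI footprint

# Linear upscaling factors along each footprint dimension.
linear_factors = (gome2_km[0] / tropomi_km[0], gome2_km[1] / tropomi_km[1])

# Area ratio: how many TROPOMI pixels fit inside one GOME-2 pixel.
area_factor = (gome2_km[0] * gome2_km[1]) / (tropomi_km[0] * tropomi_km[1])

print(f"linear factors: {linear_factors[0]:.1f} x {linear_factors[1]:.1f}")
print(f"one GOME-2 pixel covers ~{area_factor:.0f} TROPOMI pixels")
```

The linear disparity is roughly the "factor of 10" mentioned above (about 11x and 15x per axis), which in area terms means each GOME-2 data point spans on the order of 160 TROPOMI pixels.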

Thursday 26 June 14:00 - 15:30 (Hall K2)

Presentation: Assimilation of SO2 retrievals in the global CAMS system

Authors: Antje Inness, Melanie Ades, Dr Richard Engelen, Johannes Flemming, Vincent Huijnen, Christopher Kelly, Samuel Remy, Roberto Ribas
Affiliations: ECMWF, KNMI, HYGEOS
The Copernicus Atmosphere Monitoring Service (CAMS), operated by the European Centre for Medium-Range Weather Forecasts on behalf of the European Commission, provides daily analyses and 5-day forecasts of atmospheric composition, including forecasts of volcanic sulphur dioxide (SO2). Volcanoes can cause serious disruptions for society, not just for people living near them but also further afield when ash and SO2 from highly explosive eruptions reach the upper troposphere or stratosphere and are then transported over vast distances by the prevailing winds. Ash, SO2 and SO4 are a serious concern for the aviation industry, reducing visibility and, in severe cases, leading to engine failure or permanent damage to aircraft engines. SO2 is also released into the atmosphere from anthropogenic activities, such as fossil fuel burning or smelting. It can have adverse health effects and is the principal precursor for sulphate aerosols. CAMS routinely assimilates near-real-time (NRT) SO2 satellite data of volcanic eruptions provided by TROPOMI, GOME-2 and IASI in its global forecasting system to provide the initial conditions for the subsequent 5-day forecasts. In this talk we explore the use of anthropogenic SO2 satellite data in the CAMS system. First, we compare anthropogenic SO2 data from the operational NRT TROPOMI Covariance-Based Retrieval Algorithm (COBRA) with the CAMS anthropogenic SO2 fields, and in a second step we explore the assimilation of the TROPOMI COBRA data in the CAMS global system. Finally, we assess the impact of the SO2 assimilation on the model’s sulphate aerosols and particulate matter (PM2.5) when TROPOMI anthropogenic SO2 as well as IASI and TROPOMI volcanic SO2 data are assimilated. The volcanic TROPOMI SO2 data are also retrieved with the COBRA algorithm, and we make use of the layer height information provided in NRT.
The IASI data used in this study come from the Brescia SO2 retrieval which combines measured brightness temperature differences between baseline channels and channels in the SO2 absorption 𝜈3 band to give SO2 columns for a total column range from 0.5 to 5000 DU and information about the plume altitude.

Thursday 26 June 14:00 - 15:30 (Hall K2)

Presentation: Integration of Earth Observation into the UK Met Office Air Quality Forecasting System: Initial Results

Authors: Dr Richard Pope
Affiliations: School of Earth and Environment, University of Leeds, National Centre for Earth Observation, University of Leeds
Poor air quality (AQ) is one of the largest environmental stresses on human health. In the UK, poor AQ results in 28,000-36,000 premature deaths per year and annual socioeconomic costs of ~£20 billion. To help address this, the UK Met Office (UKMO) provides critical national daily AQ forecasts of key pollutants (e.g. ozone (O₃), nitrogen dioxide (NO₂) and aerosols) to provide the public and government bodies (e.g. the Department of Environment, Food and Rural Affairs - Defra) with prior warning of hazardous AQ events. To evaluate the skill of their regional forecast model (AQUM – Air Quality in the Unified Model), and to bias-correct the forecasts, the UKMO use AQ measurements from the UK Automated Urban and Rural Network (AURN) of surface sites. The AURN observations are used in the “Statistical Post Processing of Observations (SPPO)” step to correct the forecasts (known as “hybrid-forecasts”) before release. However, sparse surface monitoring sites are often unrepresentative of widespread pollution. Satellite AQ data provide a powerful resource to help address this issue, with daily UK spatial coverage and detection of pollution hotspots and incoming pollutant plumes from sources overseas. The project described here (AIRSAT) aims to integrate key satellite AQ data into the UKMO’s SPPO framework to improve these “hybrid-forecasts”, thus benefiting the downstream users of this service. AIRSAT will also undertake a detailed assessment of which satellite AQ products are suitable for UK monitoring and AQUM evaluation. Currently, we have focused on tropospheric column NO₂ (TCNO₂) from the TROPOspheric Monitoring Instrument (TropOMI – Sentinel 5P), lower tropospheric column O₃ (LTCO₃, surface – 450 hPa) from the Infrared Atmospheric Sounding Interferometer (IASI – MetOp-A), total column ammonia (TCNH₃) from the Cross-track Infrared Sounder (CrIS – JPSS-1) and total column carbon monoxide (TCCO) from the Measurements Of Pollution In The Troposphere (MOPITT – Terra).
Future satellite missions of interest include the Meteosat Third Generation (MTG-S) geostationary satellite, due for launch in 2025, which has the Infrared Sounder (IRS) and UV-Vis sounder (Sentinel-4) to provide hourly retrievals of key gaseous pollutants like NO₂ and O₃, along with the Flexible Combined Imager (FCI) on the companion MTG-I satellite to provide aerosol properties. Here, we will present the first results of the AIRSAT project, comparing AQUM hindcast simulations (2017-2019) of key air pollutants with a range of satellite products to quantify the existing model biases and determine a suite of suitable satellite products for the SPPO approach. To ensure robust, direct like-for-like comparisons, the model pollutant fields have been sampled to the satellite level-2 swath data (i.e. nearest model grid box used and model sampled within 3 hours of the overpass time) and the satellite averaging kernels (AKs – a measure of the satellite’s vertical sensitivity to retrieve the target pollutant) applied to the model profile. Overall, we found the model successfully captures the spatial and seasonal variability for TCNO₂ and TCNH₃. However, low biases in the summer over London and England for TCNO₂ and TCNH₃ (from diffuse agricultural sources), respectively, highlight potential limitations in the CAMS-REG (Copernicus Atmosphere Monitoring Service – Regional Emissions) emission inventory used within the model. Additionally, satellite-retrieved LTCO₃ and TCCO show shallow homogeneous gradients from space, which the model accurately captures. We also present a detailed evaluation of the Rutherford Appleton Laboratory (RAL Space) Infrared and Microwave Sounding (IMS) scheme to retrieve UK-scale pollution hotspots of TCNH₃. In an initial comparison of data (2018-2019) retrieved from CrIS, the IMS TCNH₃ broad-scale spatial distribution is consistent with that of AQUM, and therefore the pattern of agricultural emissions in the CAMS-REG emission inventory.
Here, we also undertook the first robust evaluation of the IMS TCNH₃ against surface column measurements of NH₃ from the Network for the Detection of Atmospheric Composition Change (NDACC). We found that IMS TCNH₃ has a partial positive bias, which was accounted for in the AQUM-CrIS TCNH₃ comparisons. Finally, we investigate the skill of the model in simulating hazardous AQ episodes (24th June – 7th July 2018) in comparison to surface observations (i.e. AURN) and assess whether polar-orbiting satellites (one daytime overpass per day) can provide sufficient information to monitor and track UK-scale pollution events. We found that the TCNO₂, LTCO₃ and TCNH₃ products have sufficient daily spatial coverage during the pollution episode, though this is dependent on cloud cover, to help resolve the temporal variability (i.e. time-series) of sub-UK regions. For instance, in Southeast England, there is similar temporal variability (R² = 0.64) between the surface site and satellite NO₂ time-series, and the model consistently underestimates the absolute NO₂ levels relative to both observational datasets. However, between the 8th and 10th June 2018, the model predicts large increases in surface and column NO₂ (likely from overseas sources) which are not seen in the observations. This is a good example of where the satellite data can complement the surface data to bias-correct the model at a particular forecast cycle, which can then be used to initialise the next forecast, thus improving the service for downstream users. Overall, integrating these satellite AQ datasets into the AQUM forecasts will not only provide more robust hybrid-forecasts for the public, but will also provide further insight into UK pollution levels, whether simulated/observed AQ meets regulatory standards, and whether updates to policy legislation are necessary.
While CAMS provides regional reanalyses of key air pollutants that assimilate satellite data, national models like AQUM are required for more focused, user-specific requirements. Incorporating a full data assimilation system, as CAMS does, is less practical for national meteorological agencies, so simpler and cheaper approaches, like that demonstrated here, potentially provide a more attractive alternative for other agencies providing national AQ forecasts.
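The like-for-like sampling described above (nearest model grid box, model time within 3 hours of the satellite overpass) can be sketched as follows; the grid, times and field are toy values, not AQUM output, and real swath matching would use proper geolocation.

```python
import numpy as np

def colocate(model_lat, model_lon, model_times, model_field,
             obs_lat, obs_lon, obs_time, max_dt_hours=3.0):
    """Sample a model field at the grid box nearest to a satellite pixel,
    at the model time closest to (and within max_dt_hours of) the overpass.
    Purely illustrative."""
    i = np.abs(model_lat - obs_lat).argmin()   # nearest latitude index
    j = np.abs(model_lon - obs_lon).argmin()   # nearest longitude index
    dt = np.abs(model_times - obs_time)
    t = dt.argmin()
    if dt[t] > max_dt_hours:
        return None                            # no model time close enough
    return model_field[t, i, j]

# Toy 1x1-degree UK-ish grid with 3-hourly output (hours since midnight).
lats = np.arange(50.0, 60.0, 1.0)
lons = np.arange(-8.0, 2.0, 1.0)
times = np.arange(0.0, 24.0, 3.0)
field = np.random.default_rng(0).random((times.size, lats.size, lons.size))

# A hypothetical early-afternoon overpass over London-ish coordinates.
value = colocate(lats, lons, times, field,
                 obs_lat=51.5, obs_lon=-0.1, obs_time=13.2)
print(value)
```

The averaging-kernel application mentioned in the abstract would then be applied to the co-located model profile before comparing with the level-2 column.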

Thursday 26 June 14:00 - 15:30 (Hall K2)

Presentation: USING PHYSICAL MODELS AND MACHINE LEARNING TO MONITOR AIR QUALITY WITH PRISMA HYPERSPECTRAL DATA

Authors: Davide De Santis, Marco Di Giacomo, Sarathchandrakumar T. Sasidharan, Fabio Del Frate, Gabriele Curci, Ana C. Amarillo, Francesca Barnaba, Luca Di Liberto, Ferdinando Pasqualini, Alessandro Bracci, Cristiana Bassani, Silvia Scifoni, Stefano Casadio, Alessandra Cofano, Massimo Cardaci, Daniele Latini, Giorgio Licciardi
Affiliations: Department of Civil Engineering and Computer Science Engineering, “Tor Vergata” University of Rome, Department of Civil, Construction and Environmental Engineering, Sapienza University of Rome, Department of Physical and Chemical Sciences, Università degli Studi dell’Aquila, Center of Excellence in Telesensing of Environment and Model Prediction of Severe events (CETEMPS), Università degli Studi dell’Aquila, National Research Council - Institute of Atmospheric Sciences and Climate, CNR-ISAC, Rome, National Research Council - Institute of Atmospheric Pollution Research, CNR-IIA, Monterotondo, Rome, Serco Italia S.p.A., Frascati, Rome, GEO-K S.r.l. 00133 Rome, Agenzia Spaziale Italiana (ASI), Viale del Politecnico snc, 00133 Rome
This work reports the activities developed within the PRIMARY (PRISMA for Monitoring Air Quality) project, which addressed the pressing issue of air pollution and its significant impact on human health by developing cutting-edge methods to evaluate air quality using hyperspectral satellite data from the PRISMA mission. The activities focused on characterizing particulate matter (PM) in the atmosphere at the urban scale, with an emphasis on distinguishing between natural and anthropogenic aerosols and analyzing their chemical compositions. Such advancements aim to deepen our understanding of how pollutants influence ecosystems and public health. A core part of the work was the application of machine learning, particularly neural networks, trained on synthetic PRISMA-like datasets generated through advanced atmospheric chemical and physical models. Extracting key features from PRISMA hyperspectral imagery was a critical step, reducing the dimensionality of the data while preserving vital spectral details, thereby improving computational efficiency and accuracy in the machine learning models. For the compression of PRISMA data, two approaches were explored: Principal Component Analysis (PCA) and Autoencoder (AE) neural networks. Overall, both methods demonstrated strong performance in compressing and reconstructing spectral information. However, it was observed that when the radiance values exceed a certain threshold, the AE model is more effective in accurately reconstructing the spectral data. The creation of synthetic PRISMA-like datasets began with aerosol profile data derived from the Copernicus Atmosphere Monitoring Service (CAMS), which provides freely accessible, global air quality data. As a preliminary step, CAMS data were compared to the SPARTAN global network of ground-based PM measurements, ensuring their reliability.
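The PCA branch of the compression step can be illustrated with a small synthetic example. The spectra below are toy data, not PRISMA radiances, and the band grid and component count are assumptions chosen so a few components capture most of the variance.

```python
import numpy as np

# Toy "hyperspectral" spectra: 200 pixels x 60 bands built from three smooth
# spectral shapes plus weak noise, so the data are near rank-3.
rng = np.random.default_rng(42)
bands = np.linspace(400, 2500, 60)                 # wavelengths in nm, illustrative
basis = np.stack([np.sin(bands / 300.0),
                  np.exp(-bands / 1000.0),
                  np.linspace(0, 1, bands.size)])
spectra = rng.random((200, 3)) @ basis + 0.01 * rng.standard_normal((200, 60))

# PCA via SVD of the mean-centred data matrix.
mean = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)

k = 3                                              # number of retained components
scores = (spectra - mean) @ Vt[:k].T               # compressed representation
reconstructed = scores @ Vt[:k] + mean             # decompressed spectra

rmse = np.sqrt(np.mean((spectra - reconstructed) ** 2))
print(f"compressed 60 bands -> {k} components, reconstruction RMSE = {rmse:.4f}")
```

An autoencoder plays the same compress/reconstruct role but learns a nonlinear mapping, which is consistent with the abstract's observation that it handles high-radiance spectra better than the linear PCA projection.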
While CAMS data generally aligned well with SPARTAN observations, an overestimation of organic matter (OM) was identified and corrected using a factor of 0.35 [1]. This adjustment was crucial to ensure the synthetic training dataset closely represented real-world conditions. The FlexAOD post-processing tool [2] was then used to calculate aerosol optical properties, such as extinction coefficients, single-scattering albedo, phase functions, and optical depth, based on CAMS chemical composition data. These properties served as essential inputs for the radiative transfer model used to simulate satellite-observed reflectance. The next step in creating the synthetic dataset involved simulating top-of-atmosphere (TOA) radiance using the updated libRadtran radiative transfer model [3]. Accurate surface reflectance data was critical for this simulation, as aerosols interact with the Earth's surface characteristics. Reflectance information from the Harmonized Landsat Sentinel-2 (HLS) project was classified into 10 representative land cover types, including vegetation, urban areas, soil, and water, using k-means clustering. This classification preserved key spectral features, particularly in the visible range, which are essential for accurate aerosol modeling. The selected surface classes provided a balance between computational efficiency and realistic representation of urban environments, ensuring that the synthetic data reflected actual observational scenarios. The machine learning models developed within the PRIMARY project were designed to estimate aerosol characteristics from PRISMA-like data. These models, based on Multi-Layer Perceptron (MLP) neural networks, used the synthetic datasets as training input. The models predicted key aerosol properties, including total PM concentrations and chemical speciation such as sulfate, black carbon, sea salt, organic carbon, dust, and aerosol optical depth at multiple wavelengths (440 nm, 550 nm, 670 nm, 870 nm). 
First applications of these models to real PRISMA data demonstrated their capability to identify spatial variability in PM concentrations across urban environments. However, certain limitations, such as an overestimation of dust concentrations, underscored the need for further refinement of the processing algorithms. Nevertheless, considering a PRISMA measurement uncertainty that, according to the documentation, can be as high as 5 percent, the error propagated into the PRIMARY product, assuming a Gaussian-type random error distribution applied to the PRISMA spectrum, is estimated at about 3 percent. To validate these satellite-derived aerosol estimates, dedicated observational campaigns focusing on capturing a broad spectrum of air quality conditions were conducted in Rome and Milan. The Rome campaign, from October to November 2022, covered seasonal transitions, including the onset of the heating season, which introduced variability in pollutant levels. Measurements included active remote sensing instruments, such as lidar and ceilometers for vertical profiling of aerosol layers, AERONET photometers for columnar aerosol content, and in-situ particle characterization using the AEROLAB mobile laboratory. Additionally, ancillary meteorological and gas concentration data relevant to air quality assessments were collected. ARPA Lazio, a key stakeholder, collaborated actively during this campaign, which leveraged facilities at the CIRAS atmospheric observatories (CNR-ISAC) in Rome. The Milan campaign, running from January to autumn 2023, extended these efforts to Northern Italy, utilizing a similar set of instruments and methodologies. Observations were conducted at three locations, including Milano Bicocca, Milano Pascal (co-located with ARPA Lombardia air quality stations), and Milano Linate. The campaign captured data across various urban environments, allowing comparisons between different regions.
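The Gaussian error-propagation estimate quoted above can be reproduced in spirit with a small Monte Carlo sketch. The retrieval here is a hypothetical weighted band combination, not the PRIMARY neural network, so the printed percentage only illustrates the procedure and will not match the project's ~3% figure (with a simple band average, uncorrelated 5% noise largely averages out).

```python
import numpy as np

rng = np.random.default_rng(0)

n_bands = 60
spectrum = 1.0 + 0.5 * rng.random(n_bands)   # synthetic radiance spectrum

# Hypothetical linear "retrieval": a normalized weighted band combination,
# standing in for the trained model purely for this demonstration.
weights = rng.random(n_bands)
weights /= weights.sum()

baseline = weights @ spectrum                # unperturbed product value

# Perturb the spectrum with 5% Gaussian radiometric noise, many times,
# and propagate each perturbed spectrum through the stand-in retrieval.
n_trials = 5000
noisy = spectrum * (1.0 + 0.05 * rng.standard_normal((n_trials, n_bands)))
products = noisy @ weights

relative_error = products.std() / baseline
print(f"propagated relative error: {relative_error:.1%}")
```

Replacing the stand-in `retrieve` step with the actual trained model is what yields a product-specific figure like the ~3 percent reported in the abstract.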
Analyses of collected data focused on specific PM metrics, such as total particle numbers and black carbon concentrations, which are central to the revised European Air Quality Directive. Comparisons between Rome and Milan revealed distinct differences in aerosol composition and seasonal variability, offering insights into the representativeness of satellite-derived PM estimates. An innovative addition to the validation framework was the use of drone-based observations to collect high-resolution vertical profiles of particulate matter. A Tarot T960 drone has been equipped with a custom payload that included particulate sensors such as OPC (optical particle counter), a micro-aethalometer (MA200), and DISCmini. Designed using the SolidWorks software, the payload optimized weight distribution for stable flight while maximizing the drone’s operational altitude and duration. These drone flights provided detailed vertical aerosol distributions up to 120 meters, complementing satellite and ground-based observations and strengthening the validation process. In addition, a Python-based tool was developed to facilitate validation and operational planning. This tool was designed to predict past and future satellite overpasses for PRISMA and other Earth observation missions, such as Sentinel-5P. It also enabled co-location of satellite and ground-based measurements (e.g. AERONET), enhancing the integration of datasets for more robust air quality assessments. This tool was instrumental in aligning satellite data with ground-based campaigns, ensuring consistent and comprehensive validation efforts. Overall, the PRIMARY project demonstrated a robust framework for advancing satellite-based air quality monitoring, combining state-of-the-art hyperspectral observations with machine learning and innovative validation techniques. These efforts significantly enhance our ability to monitor and understand the complex dynamics of urban air pollution. 
The PRIMARY project has been co-funded by the Italian Space Agency (ASI – “Tor Vergata” University of Rome Agreement n. 2022-3 U.0); the project is part of the ASI’s program “PRISMA Scienza”.

References
[1] Amarillo, A.C., Curci, G., De Santis, D., Bassani, C., Barnaba, F., Rémy, S., Di Liberto, L., Oxford, C.R., Windwer, E., Del Frate, F. (2024), Validation of aerosol chemical composition and optical properties provided by Copernicus Atmosphere Monitoring Service (CAMS) using ground-based global data, Atmospheric Environment, 334, 120683, ISSN 1352-2310, https://doi.org/10.1016/j.atmosenv.2024.120683.
[2] Curci, G., Hogrefe, C., Bianconi, R., Im, U., Balzarini, A., Baro, R., Brunner, D., Forkel, R., Giordano, L., Hirtl, M., Honzak, L., Jimenez-Guerrero, P., Knote, C., Langer, M., Makar, P.A., Pirovano, G., Perez, J.L., San Jose, R., Syrakov, D., Tuccella, P., Werhahn, J., Wolke, R., Zabkar, R., Zhang, J., Galmarini, S. (2015), Uncertainties of simulated aerosol optical properties induced by assumptions on aerosol physical and chemical properties: an AQMEII-2 perspective, Atmos. Environ., 115, 541-552, doi:10.1016/j.atmosenv.2014.09.009.
[3] Emde, C., et al., “The libRadtran software package for radiative transfer calculations (version 2.0.1),” Geoscientific Model Development, vol. 9, no. 5, pp. 1647–1672, May 2016, doi:10.5194/gmd-9-1647-2016.

Thursday 26 June 14:00 - 15:30 (Hall K2)

Presentation: Machine Learning-Driven Air Quality Forecasting Using Satellite Data: A Case Study Across Four European Cities

Authors: Giacomo Blanco, Mr. Luca Barco, Lorenzo Innocenti, Claudio Rossi
Affiliations: Links Foundation
Air pollution remains one of the most critical challenges globally, posing severe threats to human health and environmental stability. Pollutants such as PM2.5, ozone (O3), nitrogen dioxide (NO2), and sulfur dioxide (SO2) are known to pose serious risks, particularly to respiratory and cardiovascular systems. These concerns are further amplified by ongoing urbanization and industrialization, which contribute to worsening air quality worldwide. To mitigate these impacts, the development of accurate and accessible predictive tools for air quality monitoring is essential, enabling informed decision-making by citizens, policymakers, and urban planners alike. In recent years, advancements in remote sensing technologies have opened new possibilities for air quality monitoring. Satellite-based data, particularly from missions like Sentinel-5P, provide a cost-effective and comprehensive method for monitoring pollutant levels across vast regions. This study presents a novel machine-learning approach leveraging several remote sensing data sources to predict the future concentrations of five key air pollutants: PM10, PM2.5, O3, NO2 and SO2. To support this research, we first constructed a dataset covering the period between 2018 and 2023. This dataset combines daily Sentinel-5P imagery, weather data, a Digital Elevation Model, and the Copernicus Urban Atlas with observations from ground-based monitoring stations installed in Milan, Zagreb, Lisbon and Budapest. Central to the methodology is a transformer-based architecture specifically designed for this regression task. This architecture processes satellite imagery in combination with weather and morphological information to forecast the concentrations of the five pollutants considered in this study over a seven-day horizon. The proposed approach utilizes an encoder-decoder architecture.
The encoder processes historical observations from the past seven days, transforming them into a latent representation that encapsulates the current atmospheric and environmental conditions of the given area. This latent representation serves as a summary of the recent state of air quality dynamics for a given area. The decoder, on the other hand, incorporates future weather forecasts into the prediction process. By leveraging a cross-attention mechanism, the decoder combines the weather forecast with the latent representation produced by the encoder, enabling the model to adjust its pollution forecasts in the given area based on the expected meteorological conditions for each future day. The proposed framework consists of training the described machine-learning model on remote sensing sources to predict pollutant concentrations measured by ground stations, enabling it to learn the relationship between satellite, weather and urban morphological data and air pollution; the trained model is then used to predict pollutant concentrations across the whole urban area. The model achieves an average percentage error of approximately 30% across all pollutants when tested with measurements from air quality stations in the four cities. A notable strength of this system is its independence from the physical presence of monitoring stations once the training process is completed, as it can forecast pollutant concentrations even in urban areas with limited or no infrastructure. The proposed model is embedded within an automated pipeline designed for the air quality forecasting process. Each day, the pipeline automatically retrieves the most recent remote sensing data, weather observations, and meteorological forecasts. These datasets are preprocessed and then fed into the predictive system. Once the air quality forecasts are generated, they are uploaded to a data lake.
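The cross-attention step described above, in which decoder queries built from the weather forecast attend over the encoder's latent summary of the past seven days, can be sketched in a few lines of NumPy; dimensions and inputs are illustrative, not the model's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries from one stream attend
    to keys/values from another (here: forecast days over encoded history)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

d_model = 16
rng = np.random.default_rng(1)

# Encoder output: latent summary of the past 7 days of observations.
history_latent = rng.standard_normal((7, d_model))

# Decoder queries: one embedding per future day's weather forecast.
forecast_queries = rng.standard_normal((7, d_model))

out = cross_attention(forecast_queries, history_latent, history_latent)
print(out.shape)   # one attended vector per forecast day
```

In the full model these attended vectors would pass through further decoder layers and a regression head to produce the per-day pollutant concentrations.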
End-users can access the forecasts through a user-friendly, web-based dashboard, which provides intuitive visualizations and insights. This integration of data acquisition, processing, prediction, and dissemination ensures accessible air quality information for a wide range of stakeholders, from policymakers to the public. This forecasting solution is currently operational as part of UP2030 (Urban Planning and Design Ready for 2030), a project supported by the European Union's Horizon Europe research and innovation program. UP2030 aims to guide cities in adopting socio-technical transitions that align with climate neutrality goals. By integrating the air quality forecasting service into urban planning workflows, the project provides a tool for mitigating pollution and supporting sustainable urban development. This work was funded by the Horizon Europe project UP2030 (grant agreement n.101096405).

Thursday 26 June 14:00 - 15:30 (Room 0.96/0.97)

Session: F.04.11 Earth Observation for Environmental Compliance: Enhancing Monitoring, Guidance, and Enforcement - PART 2

Environmental crime is a rapidly growing global issue, increasing by 5-7% annually. The broad spectrum of environmental crime includes illegal waste management, logging, water abstraction, pollution, habitat destruction, and wildlife trafficking.

As environmental crime often involves transnational criminal organizations, international cooperation is needed to dismantle the networks that perpetrate it. The European Union's new environmental crime directive aims to bolster criminal law enforcement against the most severe environmental offenses, as part of the European Green Deal.

Effectively combatting environmental crime hinges on robust evidence. Earth Observation technology can support monitoring, inspection, and evidence gathering, thus enhancing environmental crime investigations. However, challenges related to data privacy, quality, availability, and legal admissibility must be overcome to fully realize the potential of Earth observation in the fight against environmental crime.

This session will:
• Identify and evaluate EO-based methods to help detect and characterize environmental crimes and their impacts.
• Explore geospatial and open-source intelligence in multidisciplinary evidence collection, including the role of citizen science.
• Discuss the effective integration of EO into environmental compliance assurance and law enforcement.
• Analyse practitioner needs for new sensor data, processing tools, analytical methods, and operational modes.
• Foster dialogue among policymakers, researchers, and practitioners.
• Inform the development of a roadmap for wider EO adoption in environmental crime investigations through ESA-JRC collaboration.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Integrating AI in Legal Analysis of Satellite Imagery: A Focused Approach Using Transformer Models to Guide Classification Accuracy

Authors: Seonaid Rapach, Dr Annalisa Riccardi, Dr Rhonda
Affiliations: University Of Strathclyde
Earth Observation (EO) imagery is an invaluable data source in a variety of humanitarian applications. Its adoption has been demonstrated in international courts, including the International Criminal Court (ICC) and the International Court of Justice (ICJ). In these proceedings, the most widely adopted method for analysing satellite imagery is visual interpretation, which typically involves object detection, such as identifying roadblocks [1], and classification, such as distinguishing and counting internally displaced people (IDP) settlements [2] and bombarded buildings [3]. However, these methodologies rely solely on manual classification by interpreters [4], which is costly and time-consuming. With the growing awareness and rapid development of artificial intelligence (AI), every sector is motivated to assess possible applications, and the law is no different. In legal services, AI has already demonstrated its value in tasks such as analysing and drafting legal documents [5]. However, discussions about the use of AI in judicial decision-making and law enforcement remain a subject of intense debate [6]. Beyond these applications, there is also a clear opportunity for AI to aid in the processing and analysis of satellite imagery, along with other forms of spatial and imagery data. Integrating machine learning (ML) and deep learning (DL) models within legal methodological structures could improve the methodology, from reducing potential human errors to reducing processing time, ultimately allowing the analysis to be scaled up temporally and spatially. Currently, AI is used in a wide range of humanitarian applications, but there is justifiable concern over the prevalence of misclassifications from AI models. In international criminal courts, where admissible evidence must meet exacting standards, even modest false-positive rates can overestimate the presence or nature of key features, potentially rendering evidence inadmissible or unconvincing. 
Furthermore, limitations and concerns about AI need to be articulated and examined, including problems arising from the ‘black box’ nature of AI decision-making, which must be thoroughly examined to address the specific needs of legal systems. Therefore, this research aims to outline the potential scope for integrating AI models for processing and analysing satellite imagery. More specifically, the research will outline how legal practitioners in international courts could adopt ‘transformer’ models to improve scene/object classification methods, without compromising the integrity of the evidence. Transformer models are particularly important for this application because ‘interventions’ can be applied to various model components, ensuring a robust but adaptable system [7]. This approach allows practitioners to apply a single pre-trained model, but tailor it to suit their needs. Emphasis is placed on minimising either the rate of false positives or false negatives through targeted interventions, ensuring reliable and credible classifications. We will demonstrate this process through a case study in which we use a pre-trained transformer model for pixel classification of forest disturbances [8], as if tasked with detecting instances of illegal deforestation. This scenario parallels the analytical challenges faced in human rights litigation, such as identifying instances of environmental destruction or land use violations from satellite imagery. Adopting a pre-trained model enables practitioners to utilise well-developed resources without requiring extra resources to train the model themselves, whilst also ensuring that this framework is transparent and reproducible. The methodology adapts the process of Li et al. [9], who performed targeted interventions on the most influential heads of a large language model (LLM) to improve the ‘truthfulness’ of the model. 
We adapt this to the object classification of disturbed forests in Copernicus Sentinel-2 satellite imagery, using the vision transformer model developed by Schiller et al. [8]. This process enables evidence providers to adopt pre-trained models, whilst steering the behaviour of the model to minimise the instances of false-positive classifications of forest disturbances. False-positive classifications can undermine the integrity of the evidence, potentially undermining the litigation itself. Therefore, this process is essential for extracting expansive information on illegal deforestation while also ensuring that the evidence is robust and sound. In addition to the technical aspects, we will outline the implications of integrating this methodology from a legal and social perspective, including the rules of admissibility, ethics and accountability. This process provides a framework for integrating AI into legal workflows, offering a scalable and defensible approach to evidence analysis while meeting high evidentiary standards. Bibliography: [1] Prosecutor v. William Samoei Ruto and Joshua Arap Sang (ICC Transcript) ICC-01/09-01/11-T-109-ENG (9 April 2014). [2] Sufi and Elmi v. the United Kingdom (Judgment) no. 8319/07 and 11449/07 (ECtHR 28 November 2011). [3] Prosecutor v. Bosco Ntaganda (ICC Transcript) ICC-01/04-02/06-T-176-Red2-ENG (12 December 2016). [4] J. A. Quinn, M. M. Nyham, C. Navarro, D. Coluccia, L. Bromley and M. Luengo-Oroz, “Humanitarian applications of machine learning with remote-sensing data: review and case study in refugee settlement mapping,” Phil. Trans. R. Soc., vol. 376, no. 20170363, 2018. [5] I. Atrey, “Revolutionising the Legal Industry: The Intersection of Artificial Intelligence and Law,” International Journal of Law Management & Humanities, vol. 6, no. 3, pp. 1075-1089, 2023. [6] I. Taylor, “Justice by Algorithm: The Limits of AI in Criminal Sentencing,” Criminal Justice Ethics, vol. 42, no. 3, pp. 193-213, 2023. 
[7] J. Beal, E. Kim, E. Tzeng, D. H. Park, A. Zhai and D. Kislyuk, “Vision transformers in domain adaptation and domain generalization: a study of robustness,” 2020. [Online]. [8] C. Schiller, J. Költzow, S. Schwarz, F. Schiefer and F. E. Fassnacht, “Forest disturbance detection in Central Europe using transformers and Sentinel-2 time series,” Remote Sensing of Environment, vol. 315, no. 114475, 2024. [9] K. Li, O. Patel, F. Viegas, H. Pfister and M. Wattenberg, Inference-Time Intervention: Eliciting Truthful Answers from a Language Model, 2024.
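To make the intervention idea concrete, the following toy sketch (our own illustration, not the authors' implementation and not the model of [8]) shows how an inference-time shift of selected head activations, in the spirit of Li et al. [9], can bias a classifier towards fewer positive calls. The weights, the probe direction, and the data are all invented.

```python
import numpy as np

# Toy illustration of an inference-time intervention: per-head
# activations are shifted along a probe direction before the decision,
# making positive ("disturbed forest") calls more conservative.
# Weights, probe, and inputs are invented for this sketch.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))      # 4 hypothetical attention heads, 8-dim input
probe = np.ones(4)                   # direction associated with the positive class

def classify(x, alpha=0.0):
    h = W @ x                        # per-head scalar activations
    h = h - alpha * probe            # intervention: steer selected heads
    return bool(h.sum() > 0.0)       # True -> "disturbed forest"

x = rng.standard_normal(8)
baseline = classify(x)               # unmodified model
steered = classify(x, alpha=5.0)     # intervention suppresses false positives
```

Raising `alpha` trades false positives for false negatives; in the evidentiary setting discussed above, that trade-off would be tuned on validation imagery.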
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Leveraging Geospatial Data for Environmental Compliance Professionals: a Prototype for EU-Protected Forest Habitats

#stac

Authors: Corentin Bolyn, Kenji Ose, Giovanni Caudullo, Carlos Camino, Rubén Valbuena, Jörgen Wallerman, Pieter S A Beck
Affiliations: European Commission, Joint Research Centre (JRC), Swedish University of Agricultural Sciences (SLU)
Environmental compliance assurance is key to upholding environmental laws that protect the natural resources society depends on. Compliance assurance comprises promotion, monitoring, and enforcement, and each of these components can benefit from geospatial intelligence. Geospatial information can promote compliance by helping convey the importance of environmental laws and where they apply. Situational awareness derived from combining spatial information and legal expertise can allow inspectors to assess where compliance may be at risk and deploy resources for on-the-ground interventions more efficiently. And, when necessary, geospatial intelligence can help demonstrate breaches of environmental law. The volume of geospatial data is growing thanks to greater sharing of in situ data and maps. It is of course also growing due to new remote sensing data that provides increasingly detailed and up-to-date information on the environment, complementing legacy remote sensing data. Whether they come from remote sensing programmes, mapping agencies, or monitoring programmes, geospatial data are often sectorial. For those responsible for assuring compliance, data from other sectors can often be hard to access, let alone integrate into their workflows to generate geospatial intelligence. Here we show how geospatial data can be combined to support environmental compliance assurance, using the European Union’s Habitats Directive as an example. Among other things, the Directive aims to prevent the deterioration of protected habitats listed in its Annex I within designated Natura 2000 sites. Ensuring compliance with the relevant provisions of the Directive requires that potential threats to protected habitats are effectively identified, monitored and assessed. In protected forest habitats, logging can be considered a hazard that increases the risk of their deterioration, because it affects the specific structure and functions necessary to maintain the habitat or associated species. 
Priority natural forest habitats listed in the Habitats Directive are particularly vulnerable as they are at risk of disappearing. In contrast, non-forest habitats, such as peatlands and pastures, typically have a negligible risk of being damaged by logging, and may even be threatened by tree encroachment. We combined authoritative maps of the distribution of protected forest habitats with Earth Observation-based data on tree cover loss into a prototype tool to monitor forest habitats for logging activity. The prototype processes geospatial datasets to produce information that is then made available through a user-friendly web interface to aid interpretation by compliance professionals. The tool starts by identifying hazards, which are patches of protected habitats where tree cover has been lost. The web interface then helps the user explore and assess the hazards in their area of interest. First, it offers the possibility to refine the definition of hazards by filtering tree cover loss events based on:
• The area of tree cover loss, both in absolute terms and relative to the size of the habitat patches;
• The Annex I habitat type where the loss occurred;
• The time period during which the loss took place.
This filtering allows users to narrow the scope of the analysis to the types of forest loss they consider of greatest concern. For example, a user could focus on recent large-scale clear-cutting within a specific priority habitat type. This ability to define and refine threats based on different criteria provides a more nuanced and targeted approach to assessing compliance risks. Once the user has set these criteria, they can then investigate the detected hazards in two complementary ways:
• The Regional Assessment: This component summarizes hazard information for the entire study area with graphs. The graphs are interactive and allow users to move seamlessly to the Local Assessment component for more detailed investigation of specific hazards;
• The Local Assessment: This component allows users to visualise and analyse the identified hazards in a map viewer together with various Earth Observation layers. Users can explore the spatial distribution of tree cover loss events, examine their characteristics and assess their potential impact on protected habitats.
The assessments would allow compliance professionals to identify and prioritize areas for further investigation; in this case, for example, compliance would be checked against the Natura 2000 site’s conservation objectives, legal provisions, and the actual situation on the ground. The prototype can easily be updated to integrate new remote sensing data through the SpatioTemporal Asset Catalogs (STAC) standard. This opens perspectives to incorporate near real-time satellite data into the tool. It also makes it possible to incorporate information derived from airborne LiDAR campaigns, which are particularly valuable for our example as they provide a level of reliability in assessing tree cover change that is hard to obtain through other means. Our prototype shows how geospatial intelligence can be made more accessible to end users such as forest managers or environmental compliance professionals. It consolidates geospatial datasets and existing information into a single web interface, facilitating the collection of evidence for risk assessment. It serves as a powerful analytical tool for experts, while providing user-friendly access to information for those without specialist geoscience skills. This bridge between expert-generated evidence and management needs is becoming increasingly important to assure environmental compliance.
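The hazard-filtering step described above can be sketched in a few lines. This is our own minimal illustration, not the prototype's actual data model; the field names, habitat codes, and thresholds are invented.

```python
from datetime import date

# Invented tree-cover-loss events: loss area, habitat patch area,
# Annex I habitat code, and date of the loss event.
events = [
    {"loss_ha": 12.0, "patch_ha": 40.0, "habitat": "9010", "date": date(2023, 5, 1)},
    {"loss_ha": 0.2,  "patch_ha": 40.0, "habitat": "9010", "date": date(2023, 6, 1)},
    {"loss_ha": 5.0,  "patch_ha": 10.0, "habitat": "91D0", "date": date(2019, 1, 1)},
]

def filter_hazards(events, min_ha, min_frac, habitats, start, end):
    """Narrow loss events by absolute/relative area, habitat type, and period."""
    return [
        e for e in events
        if e["loss_ha"] >= min_ha
        and e["loss_ha"] / e["patch_ha"] >= min_frac
        and e["habitat"] in habitats
        and start <= e["date"] <= end
    ]

# Example: recent, large, relatively extensive loss in habitat type 9010.
hits = filter_hazards(events, 1.0, 0.1, {"9010"}, date(2022, 1, 1), date(2024, 1, 1))
```

Only the first event passes all four criteria, mirroring how a user narrows the analysis to the loss events of greatest concern before moving to the map-based assessments.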
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Legal Guarantees and Jurisprudential Frameworks for Satellite Geolocation in Criminal Proceedings: Towards an International Protocol

Authors: Faustino Gudin
Affiliations: Eufje
The integration of satellite geolocation technology in criminal proceedings marks a significant evolution in evidence collection, suspect tracking, and crime prevention. However, its use raises profound legal challenges concerning the protection of fundamental rights, including privacy, due process, and the presumption of innocence. This proposal explores the complex interplay between satellite geolocation and criminal justice systems, emphasizing the role of jurisprudence and the necessity for international regulation. The reliance on satellite geolocation data introduces critical issues regarding the admissibility and reliability of such evidence. Questions arise about the accuracy of geolocation data, particularly in cases involving potential errors, intentional manipulation, or technological limitations. Furthermore, the use of this technology often involves the mass collection of data, raising concerns about blanket surveillance and the disproportionate impact on individuals’ privacy rights. These issues are compounded by the lack of uniform rules across jurisdictions, creating a fragmented legal landscape that undermines both justice and accountability. A key component of this study involves analyzing the jurisprudence of international courts, particularly the European Court of Human Rights (ECtHR). Landmark cases such as Uzun v. Germany have addressed the compatibility of GPS tracking with Article 8 of the European Convention on Human Rights, setting essential precedents on proportionality and necessity. Similarly, the Inter-American Court of Human Rights has highlighted the risks of technology-driven infringements on privacy, emphasizing the need for judicial oversight and procedural guarantees. These rulings provide valuable insights but also reveal the absence of a comprehensive, universally accepted framework governing the use of geolocation data in criminal proceedings. 
National courts offer further perspective but also illustrate significant divergence. For instance, the United States Supreme Court, in United States v. Jones, underscored the importance of Fourth Amendment protections against unreasonable searches, particularly in the context of GPS tracking. Conversely, some jurisdictions adopt less stringent safeguards, allowing geolocation evidence with minimal judicial scrutiny. These disparities point to the necessity of harmonized standards that respect fundamental rights while recognizing the utility of satellite technology in modern law enforcement. To address these challenges, this proposal advocates for the creation of an International Protocol on the Use of Satellite Geolocation in Criminal Proceedings. The Protocol would provide a clear legal framework to guide the collection, storage, and use of geolocation data. Key provisions would include the establishment of judicial authorization as a prerequisite for geolocation tracking, standardized procedures for ensuring the integrity of data, and mechanisms to challenge the misuse of evidence. Independent oversight bodies would be tasked with monitoring compliance, ensuring that technological advances do not outpace the protections afforded by the rule of law. Importantly, the Protocol would emphasize proportionality and necessity as foundational principles, requiring that any use of geolocation data must be justified by a legitimate aim and accompanied by safeguards against overreach. Special attention would be given to cross-border cases, where discrepancies between legal systems can complicate the admissibility of evidence. By promoting international cooperation and consistency, the Protocol aims to strike a balance between leveraging satellite technology for effective criminal justice and upholding fundamental human rights. 
This initiative is particularly relevant for the European Space Agency, which is uniquely positioned to drive the responsible development and use of satellite technologies. By grounding its recommendations in established jurisprudence and a comparative legal approach, this study seeks to provide a robust foundation for legal reforms that align with technological advancements, ensuring that geolocation technology serves as a tool for justice rather than a source of injustice.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Opportunities and Obstacles When Using Satellite Detection Models to Detect Environmental Crime

Authors: Raymond Purdy, Professor Raymond Harris, Dr Matthias Wagner
Affiliations: Air & Space Evidence, Panopterra
Weaknesses in conventional mechanisms for implementing and enforcing environmental laws, coupled with pressures to develop smarter and more resource-efficient regulatory and policing approaches, suggest that there are real opportunities for greater use of satellite technologies to detect and provide intelligence on environmental crime. But whilst there have been dynamic changes in satellite technologies and artificial intelligence techniques in recent years, this has surprisingly not yet translated into widespread adoption of such data by those who police environmental crime. Generally, the success of introducing new forms of technology in the environmental law enforcement sector relies upon establishing a confidence base amongst potential users. Precedents are often required as further evidence of effectiveness, reliability and cost. We will provide a focussed example of where satellite technologies have been successfully used to identify waste crime. This type of environmental crime, where waste can be illegally dumped, buried, burnt, or falsely described, can be highly problematic for law enforcement authorities because it often falls outside of regulatory processes and is carried out by criminals with the intention of avoiding detection. We will draw on research we have undertaken in projects funded by ODINE, the European Commission (LIFE), ESA, the New Zealand Government, Horizon Europe and Innovate UK between 2017-2025. This environmental crime detection work has been tested and validated in many countries, often with ground truthing and feedback from regulators and police, meaning that we can present evidence of detection performance, reliability, and cost. We will use Greece and New Zealand as our two main examples. We will also discuss the data form and profiling that law enforcement users wanted, so it could be useful in real-life intelligence-led law enforcement. 
We will present a number of different techniques which we have tested using different types of satellite data. The first technique used a semi-automated model using several years of Sentinel-2 data to create a data cube for each target area. The second technique used a deep learning approach which allows us to train the model on Sentinel-2 data. The third technique used thermal infrared data from the VIIRS instrument to identify hotspots and find sites where waste is being burnt. The result of these three techniques is a list of locations of potential waste sites. We then used very high spatial resolution Maxar satellite data to examine in detail the locations that had been identified, to see if these were unlawful waste sites and to classify our certainty of criminality. A final technique used GHGSat data to identify methane emissions at licensed waste sites, to see if certain wastes were being misdescribed and deposited in places where the operator did not have consent. This work is highly innovative and we have not presented most of these findings anywhere else to date. Satellite EO and AI have enormous potential for tackling environmental crime, but frustratingly they are not being implemented by law enforcement users to the extent they should be. The obstacles affecting take-up of satellite EO detection models in tackling environmental crime are not well explored, but it is important to extract key learning points to change practice. The practical first-hand experience we have gathered in multiple projects enables us to reach unique perspectives on issues such as funding constraints, engagement with tech companies and access to data and solutions, legal evidence, organisational culture, and the fear of finding out (FOFO) about environmental crime and the resource implications of finding much more criminality and having to act. 
We will propose actions to improve the take-up of satellite EO tools for tackling environmental crime in the future.
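The authors' detection models are not published in this abstract, but the first, data-cube technique can be illustrated with a hedged sketch: build a per-pixel temporal baseline from a stack of Sentinel-2-like composites, then shortlist pixels whose latest observation deviates sharply, for later inspection in very high resolution imagery. All array shapes and values below are synthetic.

```python
import numpy as np

# Synthetic "data cube" of 24 monthly composites over a 10x10 pixel area,
# standing in for a per-site Sentinel-2 index stack (time, y, x).
rng = np.random.default_rng(1)
cube = rng.normal(0.3, 0.02, size=(24, 10, 10))
cube[-1, 4, 7] = 0.8                     # an abrupt surface change (e.g. dumping)

history, latest = cube[:-1], cube[-1]
# Per-pixel z-score of the latest composite against its own history.
z = (latest - history.mean(axis=0)) / history.std(axis=0)
candidates = np.argwhere(np.abs(z) > 5)  # shortlist for VHR follow-up
```

The shortlist, not the anomaly score itself, is what would feed the subsequent manual review of very high resolution imagery described above.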
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Satellite and Underwater in-situ Data Processing and Integration for Environmental Monitoring and Crimes Detection

Authors: Maria Angelucci, Michele Nati
Affiliations: WSense S.r.l.
Satellite observation is effective at detecting the oceans’ surface properties. However, a holistic approach requires combining Earth Observation (EO) with in-situ underwater parameter sensing capabilities. The Copernicus programme supplies free and open access to data and products to users. In addition to the data produced by the Sentinel satellites, Copernicus users have access to data produced by other satellite missions (Contributing Missions). To date, in-situ data is collected with technologies such as buoys or Argo floats, which mainly operate offshore, are unable to continuously monitor the water column, and require long development cycles to integrate new sensors; their data collection is also mostly delay-tolerant, as the objective of this observation is to build models for predicting future climate trends. Quoting ESA's new strategy for Earth observation: "[...] The combination of multi-sensor satellite, airborne and in-situ observations, combined with enhanced modelling, are an important element in the development of capabilities for all overarching themes such as the water cycle, carbon cycle and ecosystem health [...]". WSense is a deep tech SME born as a spin-off of La Sapienza University (Rome), specialized in ocean data and backed by large European ocean tech VC funds, bringing to market since 2017 innovative solutions that deploy and exploit patented wireless underwater networks and the IoUT (Internet of Underwater Things). WSense enables the sustainable and effective collection of in-situ marine parameters and their continuous, real-time transmission to users. WSense underwater networking solutions are deployed in several international offshore campaigns and can be used both in inland waters and at near- and offshore sites. Vendor-agnostic sensors are integrated, providing continuous real-time monitoring. 
The Internet of Underwater Things (IoUT) provides solutions for continuous in-situ monitoring, real-time data transmission for early warning, and the capability to cross-validate and optimize data acquisitions by applying tip-and-cue techniques. Leveraging its cutting-edge IoUT technologies, WSense’s roadmap includes developing a toolbox managing the Big Data generated from the integration of real-time in-situ underwater observations with wide-spatial-scale Earth Observation satellite data. The toolbox can be used to jointly monitor specific areas of interest that might be subject to environmental crimes, with improved frequency of observation and feature classification capabilities. Derived value-added products and information layers, along with the operational status of the underwater infrastructure elements, are accessible via a web-based platform that can be managed remotely to navigate parameter trends and statistics, as well as to set threshold-based alarms and manage underwater sensors. Multiple use cases for environmental crime detection and monitoring benefit from the combination of underwater in-situ and satellite data, such as:
• thermal pollution at sea/rivers: e.g. industrial plants can be large contributors to thermal pollution, which occurs when the temperature of a natural body of water suddenly changes;
• oil spills from offshore platforms: with in-situ data allowing discrimination with respect to low wind areas, natural seepages, algal blooms, seagrass, or other factors;
• chemical pollution: i.e. Hazardous and Noxious Substances (HNS).
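A threshold-based alarm of the kind mentioned above could, in its simplest form, compare each reading against a rolling baseline. This is our own minimal sketch for the thermal pollution use case; the window length, threshold, and readings are invented.

```python
from collections import deque

def make_alarm(window=6, max_delta=1.5):
    """Alert when a reading jumps more than max_delta degC from the rolling mean."""
    baseline = deque(maxlen=window)
    def check(temp_c):
        alert = bool(baseline) and abs(temp_c - sum(baseline) / len(baseline)) > max_delta
        baseline.append(temp_c)
        return alert
    return check

# Invented water temperature stream: a sudden jump simulating thermal discharge.
check = make_alarm()
readings = [14.1, 14.2, 14.0, 14.3, 17.9, 18.0]
alerts = [check(t) for t in readings]
```

In a deployed system the alert would be the real-time early warning delivered through the web platform, with the threshold and window set per site by the user.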
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Operational applications of Earth Observation data supporting environmental controls and inspections: experiences from the Regional Environmental Protection Agency (ARPA) of Lombardy, Italy

Authors: Dario Bellingeri, Martina Grifoni, Alessandro Loda, Dario Lombardi, Alessandro Menin, Vito Sacchetti
Affiliations: ARPA Lombardia
ARPA Lombardia is the Regional Environmental Protection Agency of the Region of Lombardy (the largest Italian region in terms of population and number of industrial activities). This contribution will present the operational approaches developed using Earth Observation (EO) data at different scales, applied within the Agency's institutional control and monitoring activities, together with the experience developed in international projects linked to the session's theme, such as Horizon H2020 PERIVALLON (Protecting the EuRopean territory from organized enVironmentAl crime through inteLLigent threat detectiON tools), IMPEL WODA (Water Over-abstraction and illegal abstraction Detection and Assessment) and IMPEL GIECA (Geospatial Intelligence for Environmental Compliance Assurance). In 2020 ARPA Lombardia established the Regional Earth Observation Center (CREO), with the aim of developing innovative projects for the application of Geospatial Intelligence and Earth Observation technologies at an operational level. EO activities developed in ARPA are characterized by a wide variety of thematic areas and application scales, and are based on different technologies and platforms: drones, aerial surveys and satellite data, often in an integrated form. In particular, the growing availability of satellite constellations with greater spatial resolution has allowed the development of operational applications at the local scale, linked not only to environmental monitoring but also to supporting controls and environmental compliance assessments. The “SAVAGER” Project (Advanced Surveillance for Waste Management), financed by the Lombardy Region, aimed at implementing operational methodologies based on EO data for the early fight against illegal waste management in the regional territory. 
The analysis of high-resolution satellite images, together with other environmental datasets, allows a surveillance approach for the remote identification of sites with possible illicit waste management and potential environmental risk. One of the development components of SAVAGER was experimentation with the use of Artificial Intelligence (AI) to optimize the photointerpretation process through automatic recognition of critical sites in very high resolution images (these technical evolutions are currently being developed in the framework of the Horizon H2020 PERIVALLON project). The information obtained in this workflow is used to plan and support subsequent control and inspection activities, also in collaboration with other local authorities; in many cases the use of drones is fully operational to support field inspections and to perform volumetric estimates. The experience gained in the SAVAGER regional project, today fully operational, has demonstrated the effectiveness of the large-scale surveillance approach based on photointerpretation and automatic pre-processing of very high resolution satellite images to identify local situations of potential offenses or non-compliances related to waste management. Furthermore, high resolution satellite data itself can also provide an effective contribution to the planning and execution of the subsequent field inspections. As part of its institutional control activities at production plants, ARPA Lombardia carries out inspections and controls aimed at verifying compliance and/or conformity with the provisions contained in the relevant environmental authorizations; EO data and geospatial intelligence techniques are in some cases a valuable input to this process. In specific cases, typically in the field of illegal waste management or the improper use of an area (e.g. quarries, remediation sites, etc.), the application of EO and GI techniques is sometimes requested of ARPA by public prosecutors and Law Enforcement Agencies; in these cases, the main requests relate to the temporal evolution of the illicit activity, the search for evidence or clues of repeated illegality, the type of waste or material, and the severity of the offence (for example, in terms of waste volume). For this kind of information and evaluation to be fully usable in subsequent judicial procedures, the main requirements relate to the need for secure and certified input data (e.g., the precise acquisition date of the EO imagery) and for standard, reliable procedures and methodologies (e.g. for volumetric estimations and accuracy assessment).
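Drone-based volumetric estimates of the kind mentioned above typically integrate the height difference between a post-event surface model and a reference terrain model over the pixel area. The following is a hedged sketch under invented assumptions (grid size, resolution, and pile geometry are all made up), not ARPA's certified procedure.

```python
import numpy as np

# Invented example: a drone-derived DSM minus a reference DTM, integrated
# over the ground sampling area, approximates the volume of deposited material.
cell_area_m2 = 0.25          # e.g. 0.5 m ground sampling distance
dtm = np.zeros((100, 100))   # reference terrain (flat, for illustration)
dsm = np.zeros((100, 100))
dsm[40:60, 40:60] = 2.0      # a hypothetical 20x20-cell waste pile, 2 m high

# Clip negative differences so excavations don't cancel deposited volume.
volume_m3 = np.clip(dsm - dtm, 0, None).sum() * cell_area_m2
```

For judicial use, as the abstract notes, such an estimate would additionally need certified input data and a standardized accuracy assessment.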
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall N1/N2)

Session: C.03.02 Advances in the theory and methodology of SAR Interferometry and SAR polarimetry - PART 2

In 2003, ESA organised the first POLINSAR workshop, gathering the new community to present the findings of ESA-funded studies of SAR Polarimetry and Polarimetric Interferometry applications, and to prepare recommendations for future research work. Today the POLINSAR workshop is an established platform where scientists meet and exchange their recent research advances to support new mission proposals or dedicated airborne campaigns that increase our knowledge. The applications of polarimetry and interferometry, separately or combined, range over a variety of domains, for example the biosphere (forest height, forest structure, forest disturbance, crop development stage classification over agricultural crops, etc.), hydrosphere (soil moisture), cryosphere (snow depth, ice structure), geosphere (enhanced deformation identification), urban areas (scatterer identification, building reconstruction) and many more.

We welcome contributions on, but not restricted to:
• New advances in Polarimetric SAR Interferometry: methods and applications
• Multi-baseline and TomoSAR: methods and applications
• Differential Polarimetric SAR Interferometry: methods and applications
• Airborne Campaigns for polarimetric and interferometric SAR
• Future mission concepts related to polarimetry and interferometry
• Recent advancement using AI for SAR mission concept, methods and applications.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Performance of High-Resolution methods for estimating Digital Terrain Models over forested areas in the context of the BIOMASS mission

Authors: Laurent Ferro-Famil, Francesco Salvaterra, Francesco Banda, Stefano Tebaldini
Affiliations: ISAE-SUPAERO & CESBIO, Politecnico di Milano, ARESYS
The ESA BIOMASS mission is based on a single-platform repeat-pass interferometric P-band SAR, operated in a simple stripmap mode and acquiring global fully polarimetric data in a predefined orbit scenario. The BIOMASS interferometric operation mode foresees flying the satellite in a drifting repeat-pass orbit of 3 days, with spatial baselines defined to enable multi-baseline interferometric processing of the data. Besides supporting the generation of primary mission products, i.e. above-ground biomass and forest height, the interferometric acquisitions can also be used to estimate the ground topography. Estimating a DTM over forested areas requires separating the response of the ground from that of the overlying volume, and accurately estimating the elevation from which it originates. PolTomoSAR, i.e. the combination of tomographic imaging with polarimetric diversity, is a natural solution to this problem, as it combines spatial discrimination in the vertical direction with the use of polarimetric diversity, which improves the separation of sources [1-3]. Nevertheless, the small bandwidth of the signals measured by the BIOMASS sensor limits the vertical resolution of the multi-image reconstruction to coarse values in the tomographic mode and to extremely coarse ones in the dual-baseline case. The resulting lack of accuracy, and the limited contrast between the polarimetric responses of the ground and of the canopy, may seriously affect the performance of DTM retrieval using classical tomographic imaging techniques or phase centre estimation. Tomographic imaging using linear filtering approaches, as well as scattering centre estimation from multi-baseline InSAR phases, is closely related to a Fourier operator which transforms data from a wavenumber space (InSAR correlations) to the spatial domain [4].
The Fourier transform, which is the basic operation involved in Beamformer or Capon tomographic methods, is a low-complexity and robust linear estimator whose resolution is determined by the extent of the analysed signal in the dual domain, i.e. by the range resolution and maximum baseline in the BIOMASS case. High-Resolution (HR) spectral analysis techniques represent an alternative to Fourier-based approaches, characterized by significantly improved resolution, at the cost of higher computational complexity and of decomposing a reflectivity profile into simple components. Whereas Fourier-based techniques discriminate sources from reconstructed intensity profiles, HR techniques separate sources by isolating the low-dimensionality subspace containing their response. Ground and canopy responses are associated with low-rank models whose simplicity guarantees both identifiability and robustness [5,6]. In particular, the volume component is not meant to actually represent the forest canopy, but rather to absorb terms which could interfere with the ground response. HR spectral analysis techniques can be applied at different levels in the DTM estimation chain, i.e. using the SLC images of a stack (direct implementation), or after specific coherent volume-ground separation techniques such as SKP or polarimetric optimization [1][7]. This contribution evaluates the performance of HR methods for DTM estimation in the context of the BIOMASS mission, and addresses the following points:
- selection of low-rank models for representing the ground and volume components, and of HR spectral analysis approaches, in the specific BIOMASS case (coarse resolution);
- evaluation of their performance and complexity in both tomographic and dual-baseline Pol-InSAR configurations;
- performance improvement related to the use of polarimetric diversity;
- assessment of the robustness with respect to residual phase screens;
- association with the aforementioned SKP and polarimetric optimization methods.
The approach performance is evaluated using data acquired in the frame of ESA’s TropiSAR, AfriSAR and TomoSense campaigns, using both basic statistical indicators and advanced geophysical approaches such as those presented in [8].
[1] M. Mariotti D’Alessandro and S. Tebaldini, "Digital Terrain Model Retrieval in Tropical Forests Through P-Band SAR Tomography," IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 9, pp. 6774-6781, Sept. 2019, doi: 10.1109/TGRS.2019.2908517.
[2] Y. Huang, L. Ferro-Famil and A. Reigber, "Under-Foliage Object Imaging Using SAR Tomography and Polarimetric Spectral Estimators," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 6, pp. 2213-2225, June 2012.
[3] Y. Huang and L. Ferro-Famil, "3-D Characterization of Urban Areas Using High-Resolution Polarimetric SAR Tomographic Techniques and a Minimal Number of Acquisitions," IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 11, pp. 9086-9103, Nov. 2021.
[4] F. Gini and F. Lombardini, "Multibaseline cross-track SAR interferometry: a signal processing perspective," IEEE Aerospace and Electronic Systems Magazine, vol. 20, pp. 71-93, 2005.
[5] Y. Huang, Q. Zhang, and L. Ferro-Famil, "Forest Height Estimation Using a Single-Pass Airborne L-Band Polarimetric and Interferometric SAR System and Tomographic Techniques," Remote Sensing, vol. 13, no. 3, p. 487, Jan. 2021.
[6] P.-A. Bou, L. Ferro-Famil, F. Brigui and Y. Huang, "Tropical forest characterisation using parametric SAR tomography at P band and low-dimensional models," IEEE Geoscience and Remote Sensing Letters.
[7] S. Tebaldini, "Algebraic Synthesis of Forest Scenarios From Multibaseline PolInSAR Data," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 12, pp. 4132-4142, Dec. 2009.
[8] M. El Hage et al., "Multicriteria Accuracy Assessment of Digital Elevation Models (DEMs) Produced by Airborne P-Band Polarimetric SAR Tomography in Tropical Rainforests," Remote Sensing, vol. 14, no. 17, p. 4173, Aug. 2022.
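The resolution contrast between Fourier-based estimators described in the abstract can be sketched numerically. The following is a minimal toy simulation, not the BIOMASS processor: all numbers (vertical wavenumbers, scatterer heights, powers) are invented, and a two-scatterer scene stands in for the ground and canopy layers.

```python
import numpy as np

def steering(kz, z):
    # steering vector of the stack for a scatterer at elevation z (kz in rad/m)
    return np.exp(1j * kz * z)

rng = np.random.default_rng(0)
kz = np.linspace(0.0, 0.14, 8)            # vertical wavenumbers, invented values
z_grid = np.linspace(-10.0, 60.0, 141)

# 200 looks of a ground scatterer at 0 m plus a weaker canopy scatterer at 30 m
L = 200
s0 = rng.standard_normal(L) + 1j * rng.standard_normal(L)
s1 = rng.standard_normal(L) + 1j * rng.standard_normal(L)
y = steering(kz, 0.0)[:, None] * s0 + 0.5 * steering(kz, 30.0)[:, None] * s1
R = y @ y.conj().T / L                    # multi-baseline sample covariance

# Fourier (Beamformer) and Capon vertical spectra
P_bf = np.array([np.real(steering(kz, z).conj() @ R @ steering(kz, z)) for z in z_grid])
R_l = R + 1e-3 * np.real(np.trace(R)) / len(kz) * np.eye(len(kz))  # diagonal loading
Ri = np.linalg.inv(R_l)
P_cp = np.array([1.0 / np.real(steering(kz, z).conj() @ Ri @ steering(kz, z)) for z in z_grid])

print("Beamformer peak (m):", z_grid[np.argmax(P_bf)])
print("Capon peak (m):     ", z_grid[np.argmax(P_cp)])
```

With this 0.14 rad/m wavenumber span, the Fourier resolution (about 2π/0.14 ≈ 45 m) cannot separate the two layers, which is exactly the coarse-resolution regime the abstract motivates HR methods for.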
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Multi-Frequency Multi-Baseline Fully Polarimetric Forest Height Inversion. Perspectives of BIOMASS/TanDEM-X Data Fusion.

Authors: Roman Guliaev, Matteo Pardini, Konstantinos Papathanassiou, Francesca Ticconi
Affiliations: German Aerospace Center (DLR), Microwaves and Radar Institute, European Space Agency ESA-ESTEC
Today, an increasing number of Earth observation Synthetic Aperture Radar (SAR) satellites, owned by both governmental and commercial parties, are operating in orbit, acquiring microwave backscatter data across a wide microwave spectrum (from cm to dm wavelengths). Beyond the diversity in frequency bands, SAR missions employ different polarisations, and follow various spatial and temporal baseline setups that define the angular and temporal diversity of their measurements. More and more missions are today capable of interferometric measurements while the next generation of missions will allow the implementation of tomographic techniques [1]. In this context, multi-modal SAR missions such as ESA’s BIOMASS [2] (P-band) and DLR’s TanDEM-X [3] (X-band) inspire new synergistic applications of their advanced capabilities. Moreover, the future ESA ROSE-L mission, NASA-ISRO’s NISAR mission, JAXA’s ALOS-2 and ALOS-4 missions, and DLR’s bistatic TanDEM-L mission further motivate the integration of L-band into this study. Our research explores the potential of combining P-, L-, and X-band data using experimental airborne datasets and available satellite data. In particular, this work investigates the potential of combining multi-frequency multi-modal SAR measurements / acquisitions for forest height inversion. Forests provide a natural scenario where frequency bands complement each other. The P-band, with its high penetration capability through dense vegetation, is more suitable for estimating ground elevation (topography). Other frequency bands (e.g., L-band, X-band) can be used to derive InSAR coherence and phase, to be employed in single-baseline forest height inversion. InSAR coherence and phase are inherently sensitive to the vertical reflectivity profile, which is composed of contributions from both the ground and the volume scatterers [4]. 
This study further examines the use of vertical reflectivity profiles estimated through tomographic reconstruction at one frequency band and evaluates their applicability across other frequency bands. The research focuses on determining the limitations of this cross-band approximation and the necessity of specific parameterization for different frequency bands. Our results show a notable stability of forest height inversion when data from several frequency bands are employed. However, the availability of non-zero across-track interferometric baselines becomes essential for the proposed method. Additionally, phase calibration is critical for addressing distortions caused by the ionosphere, which is especially relevant for achieving accurate results at P-band. The synergy between P- and L-band is demonstrated using data from airborne campaigns such as AFRISAR 2016 and GABONX 2023 [5], while X-band is included through bistatic TanDEM-X data, providing a multi-frequency perspective for forest height inversion. In the spaceborne context, our method establishes a framework towards TanDEM-X/BIOMASS data fusion.
REFERENCES
[1] Moreira, A., Prats-Iraola, P., Younis, M., Krieger, G., Hajnsek, I. and Papathanassiou, K.P., "A tutorial on synthetic aperture radar," IEEE Geoscience and Remote Sensing Magazine, vol. 1, no. 1, pp. 6-43, 2013.
[2] Quegan, S. et al., "The European Space Agency BIOMASS mission: Measuring forest above-ground biomass from space," Remote Sensing of Environment, 227, 44-60, 2019.
[3] Krieger, G., Moreira, A., Fiedler, H., Hajnsek, I., Werner, M., Younis, M. and Zink, M., "TanDEM-X: A satellite formation for high-resolution SAR interferometry," IEEE Transactions on Geoscience and Remote Sensing, 45(11), pp. 3317-3341, 2007.
[4] Guliaev, R., Kim, J.S., Pardini, M. and Papathanassiou, K.P., "On the Use of Tomographically Derived Reflectivity Profiles for Pol-InSAR Forest Height Inversion in the Context of the BIOMASS Mission," IEEE Transactions on Geoscience and Remote Sensing, 2024.
[5] I. Hajnsek, M. Pardini, et al., "Technical assistance for the development of airborne SAR and geophysical measurements during the AfriSAR campaign," final technical report, 2016.
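The single-baseline inversion step mentioned above (forest height from InSAR coherence, given an assumed vertical reflectivity profile) can be illustrated with a toy look-up scheme. This is a sketch only: the exponential profile shape, the attenuation value and the vertical wavenumber are invented, not the tomographically derived profiles of the study.

```python
import numpy as np

def volume_coherence(h, kz, sigma=0.06):
    # complex InSAR coherence of an exponential vertical reflectivity profile
    # of height h (a common volume-layer assumption; sigma is invented)
    z = np.linspace(0.0, h, 200)
    f = np.exp(2.0 * sigma * z)
    return np.sum(f * np.exp(1j * kz * z)) / np.sum(f)

def invert_height(gamma_obs, kz, h_grid=None):
    # look-up inversion: pick the model height whose predicted coherence
    # is closest to the observed complex coherence
    if h_grid is None:
        h_grid = np.linspace(1.0, 60.0, 300)
    model = np.array([volume_coherence(h, kz) for h in h_grid])
    return h_grid[np.argmin(np.abs(model - gamma_obs))]

kz = 0.10                        # rad/m, invented vertical wavenumber
gamma = volume_coherence(25.0, kz)
print(invert_height(gamma, kz))  # recovers ≈ 25 m in this noise-free toy case
```

A non-zero kz is what gives the coherence its height sensitivity, which is why the abstract stresses the availability of non-zero across-track baselines.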
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Forest Change Characterization by means of BIOMASS Pol-InSAR Measurements

Authors: Noelia Romero-Puig, Matteo Pardini, Konstantinos Papathanassiou
Affiliations: German Aerospace Center (DLR)
Polarimetric SAR Interferometry (Pol-InSAR) [1] techniques have proven highly effective for reconstructing various forest structure parameters, such as tree height and stand density. This capability has been demonstrated across different frequency ranges, including data at high- (e.g., X-band [2]) and lower-frequency bands (e.g., L-band [3]). However, despite these initial attempts, a qualitative and quantitative assessment of forest change by means of Pol-InSAR is still missing. The main challenge remains solving the ambiguities arising from the superposition of structural and dielectric changes, as well as characterizing the different change signatures. At the same time, ESA’s BIOMASS mission is the first spaceborne mission that will provide systematic Pol-InSAR measurements over the majority of forested areas worldwide. The rather limited spatial resolution of BIOMASS is counterbalanced by the penetration capability at P-band when combined with multi-baseline Pol-InSAR acquisitions. In this context, this contribution introduces a framework for analyzing forest change using Pol-InSAR measurements at P-band. The approach starts with the decomposition of the forest canopy response into ground and volume components following a two-layer model interpretation [4]. Then, a polarimetric analysis is performed over these individual, simplified, separated components. Unlike previous works, which primarily focus on evaluating the polarization states associated with the maximum and minimum change [5], this analysis extends to the full set of possible polarization states. The proposed methodology is applied and validated over tropical forest sites located in Gabon. Airborne campaign data acquired by DLR’s F-SAR sensor from two campaigns, AfriSAR performed in 2016 and GabonX performed in 2023, are employed.
Together with reference lidar data for validation, these datasets offer one of the first opportunities to analyze changes over a tropical forest scenario exploiting P-band data. The polarimetric analysis of the separated ground and volume components that form the forest canopy reveals two important insights. First, evaluating the individual components yields more detailed information about the forest structure than observing the full polarimetric signature. At P-band, the ground component, which is strongly polarized, dominates the SAR signal. In contrast, the volume component is relatively weak, homogeneous, and stable. Second, the analysis across the full range of polarization states over the normalized ground and volume components enables the identification of the dominant scattering mechanism. This finding suggests that dielectric changes (i.e., changes only in backscattered intensity) can be distinguished from structural changes. Furthermore, variations in the dominant scattering mechanisms, particularly those associated with the ground response, can potentially be linked to different types of structural changes, such as logging versus growth scenarios. The work presented in this contribution is part of the ongoing activities within the BioTomEx ESA Study, and it will be further developed and consolidated as part of the products provided by the upcoming ESA BIOMASS mission.
[1] K. P. Papathanassiou and S. R. Cloude, "Single-baseline polarimetric SAR interferometry," IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 11, pp. 2352-2363, Nov. 2001, DOI: 10.1109/36.964971.
[2] C. Choi, M. Pardini, M. Heym and K. P. Papathanassiou, "Improving Forest Height-To-Biomass Allometry With Structure Information: A Tandem-X Study," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 10415-10427, 2021.
[3] M. Tello, V. Cazcarra-Bes, M. Pardini and K. Papathanassiou, "Forest Structure Characterization from SAR Tomography at L-Band," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 10, pp. 3402-3414, Oct. 2018, DOI: 10.1109/JSTARS.2018.2859050.
[4] S. R. Cloude and K. P. Papathanassiou, "Three-stage inversion process for polarimetric SAR interferometry," IEE Proceedings - Radar, Sonar and Navigation, vol. 150, no. 3, pp. 125-134, June 2003, DOI: 10.1049/ip-rsn:20030449.
[5] A. Alonso-González, C. López-Martínez, K. P. Papathanassiou and I. Hajnsek, "Polarimetric SAR Time Series Change Analysis Over Agricultural Areas," IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 10, pp. 7317-7330, Oct. 2020, DOI: 10.1109/TGRS.2020.2981929.
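The idea of scanning the full set of polarization states, rather than only the max/min change states, can be illustrated with generalized eigenvalues of two coherency matrices. This is a synthetic sketch (random matrices stand in for real ground/volume components): the backscatter ratio r(w) = wᴴT₂w / wᴴT₁w over all states w has its extrema at the generalized eigenvalues of (T₂, T₁), and a purely dielectric change leaves that ratio constant across all states.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_coherency():
    # random positive-definite 3x3 Hermitian matrix standing in for a
    # (ground or volume) polarimetric coherency matrix
    X = rng.standard_normal((3, 8)) + 1j * rng.standard_normal((3, 8))
    return X @ X.conj().T / 8.0

T1 = random_coherency()      # component at date 1
T2 = 2.0 * T1                # purely dielectric change: intensity doubles

# extrema (and full spread) of r(w) = generalized eigenvalues of (T2, T1)
lams = np.sort(np.linalg.eigvals(np.linalg.solve(T1, T2)).real)
print(lams)                  # all ≈ 2 -> intensity-only change, no structural change
```

A structural change would instead alter the shape of T₂ relative to T₁ and spread the eigenvalues apart, which is the kind of signature separation the abstract describes.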
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall N1/N2)

Presentation: A Quality Criterion for Phase Linking Estimates in InSAR Time Series

Authors: Magnus Heimpel, Prof. Dr. Irena Hajnsek, Dr. Othmar Frey
Affiliations: Institute of Environmental Engineering, ETH Zurich, Microwaves and Radar Institute, German Aerospace Center (DLR), Gamma Remote Sensing AG
In the analysis of interferometric synthetic aperture radar (InSAR) time series, phase linking (PL) algorithms [1] are used to estimate the interferometric phases of distributed scatterers (DS) in the presence of decorrelation. Over the years, a variety of methods have been developed, including SBAS [2], PTA [3], SqueeSAR [4], CAESAR [5], EMI [6], and sequential algorithms for large-scale datasets [7, 8]. Cao et al. [9] introduced a unifying mathematical framework in which these methods are characterized by their specific choices of weights in the phase optimization process. In this work, we propose an alternative framework based on the observation that the coherence matrix is of rank one only in the absence of decorrelation. For a given reference datum, a rank-one coherence matrix $C$ can be uniquely represented by a vector $x$ containing the interferometric phases through the relation $C=x\cdot x^H$. From this perspective, phase linking is reinterpreted as the task of inferring a phase time series from the rank-one matrix that most closely approximates the coherence matrix, according to a specified metric. The choice of metric determines the phase linking method - for instance, eigendecomposition-based phase linking corresponds to the metric induced by the Frobenius norm. This new perspective facilitates the development of a novel quality criterion for assessing the reliability of phase estimates produced by these methods. Conventional phase linking quality criteria typically quantify the difference between the phases of the original coherence matrix and those of the derived estimates. We extend these criteria by integrating insights into the condition of the underlying optimization problem. Specifically, given a phase vector as the result, a secondary result can be generated by reapplying phase linking to the original coherence matrix under the constraint that the secondary solution must be orthogonal to the first. 
The optimization is deemed well-conditioned if the distances from the coherence matrix to the two rank-one matrices corresponding to the primary and secondary results differ significantly, reflecting a stable and robust solution. For eigendecomposition-based phase linking, a simplified version is given by the relative difference between the largest and second-largest eigenvalues. The proposed criterion is validated using a TerraSAR-X dataset acquired over the Swiss Alps, demonstrating its effectiveness in assessing the reliability of phase linking results.
References
[1] D. H. T. Minh and S. Tebaldini, "Interferometric phase linking: Algorithm, application, and perspective," IEEE Geoscience and Remote Sensing Magazine, vol. 11, no. 3, pp. 46–62, Sep. 2023, doi: 10.1109/mgrs.2023.3300974.
[2] P. Berardino, G. Fornaro, R. Lanari, and E. Sansosti, "A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms," IEEE Transactions on Geoscience and Remote Sensing, vol. 40, no. 11, pp. 2375–2383, Nov. 2002, doi: 10.1109/tgrs.2002.803792.
[3] A. Monti-Guarnieri and S. Tebaldini, "On the exploitation of target statistics for SAR interferometry applications," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3436–3443, Nov. 2008, doi: 10.1109/tgrs.2008.2001756.
[4] A. Ferretti, A. Fumagalli, F. Novali, C. Prati, F. Rocca, and A. Rucci, "A new algorithm for processing interferometric data-stacks: SqueeSAR," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 9, pp. 3460–3470, Sep. 2011, doi: 10.1109/tgrs.2011.2124465.
[5] G. Fornaro, S. Verde, D. Reale, and A. Pauciullo, "CAESAR: An approach based on covariance matrix decomposition to improve multibaseline–multitemporal interferometric SAR processing," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 4, pp. 2050–2065, Apr. 2015, doi: 10.1109/tgrs.2014.2352853.
[6] H. Ansari, F. De Zan, and R. Bamler, "Efficient phase estimation for interferogram stacks," IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 7, pp. 4109–4125, Jul. 2018, doi: 10.1109/tgrs.2018.2826045.
[7] H. Ansari, F. De Zan, and R. Bamler, "Sequential estimator: Toward efficient InSAR time series analysis," IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 10, pp. 5637–5652, Oct. 2017, doi: 10.1109/tgrs.2017.2711037.
[8] D. Ho Tong Minh and Y.-N. Ngo, "Compressed SAR interferometry in the big data era," Remote Sensing, vol. 14, no. 2, p. 390, Jan. 2022, doi: 10.3390/rs14020390.
[9] N. Cao, H. Lee, and H. C. Jung, "Mathematical framework for phase-triangulation algorithms in distributed-scatterer interferometry," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 9, pp. 1838–1842, Sep. 2015, doi: 10.1109/lgrs.2015.2430752.
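Eigendecomposition-based phase linking and the simplified eigenvalue-gap criterion described above can be sketched on a synthetic, noise-free coherence model (this is an illustration of the general technique, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10                                     # images in the stack

# synthetic "true" phase time series, referenced to the first image
phi_true = np.cumsum(rng.normal(0.0, 0.3, N))
phi_true -= phi_true[0]

# model coherence matrix: exponential temporal decorrelation times the phases,
# i.e. C = x x^H only in the fully coherent limit (gamma -> all ones)
gamma = 0.8 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
C = gamma * np.exp(1j * np.subtract.outer(phi_true, phi_true))

# EVD-based phase linking: phases of the leading eigenvector of C
w, V = np.linalg.eigh(C)                   # eigenvalues in ascending order
v = V[:, -1]
phi_hat = np.angle(v * np.conj(v[0]))      # re-reference to image 0

# simplified quality criterion: relative gap of the two largest eigenvalues
quality = (w[-1] - w[-2]) / w[-1]
print("max residual:", np.max(np.abs(np.angle(np.exp(1j * (phi_hat - phi_true))))))
print("quality:", quality)
```

In this decorrelation model the leading eigenvector carries the phases exactly; with sample coherence matrices the residuals grow and the eigenvalue gap shrinks, which is what the criterion is meant to flag.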
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall N1/N2)

Presentation: Three-dimensional displacement field retrieval through the SBAS-DInSAR analysis of mid-inclination orbits Capella Space SAR data: first results on the Campi Flegrei caldera (Italy)

Authors: Michele Manunta, Dr Federica Cotugno, Dr Manuela Bonano, Dr Francesco Casu, Dr Victor Cazcarra-Bes, Dr Gordon Farquharson, Dr Riccardo Lanari, Dr Nestor Yague-Martinez, Dr Craig Stringham
Affiliations: CNR-IREA, Capella Space Corporation, Università di Napoli Federico II
Differential Synthetic Aperture Radar (SAR) Interferometry (DInSAR) is a powerful geodetic “tool” that allows us to measure surface displacements with centimetric to millimetric accuracy. Traditionally, the spaceborne DInSAR technique has primarily been used to measure line-of-sight (LOS) displacement, a combination of vertical and horizontal components. Moreover, we remark that most of the current SAR satellites operate in sun-synchronous orbits (SSO), which means that they repeat (nearly) the same orbit at the same local time. This orbital configuration offers several advantages, including a consistent illumination geometry and relatively stable environmental conditions among successive temporal passes. On the other hand, SSOs have limitations when it comes to measuring the North-South component of the investigated deformation. Indeed, the LOS direction of a sun-synchronous SAR satellite is primarily oriented towards the East-West direction (the orbit has a quasi-polar direction), making it sensitive to vertical and East-West displacements. In addition, although some North-South sensitivity exists, it is often masked out by other factors like decorrelation phenomena, atmospheric noise, and phase unwrapping errors. As a result, extracting reliable North-South displacement information from SSO-based DInSAR data can be challenging, especially for slow-moving deformation processes. Conversely, employing satellites working in a mid-inclination orbit (MIO) would allow us to regain sensitivity with respect to the North-South component of the displacement field, thus providing valuable insights into various geophysical processes. Indeed, when we focus on mid- to low-latitude regions, MIOs may offer an effective solution for retrieving the three-dimensional behavior of the displacement signals. Unfortunately, MIO configurations are unsuitable for global coverage Earth Observation applications and have not been largely exploited until now. 
However, the advent of small satellite constellations in Low Earth Orbit (LEO) paves the way for new satellite configurations and potential applications. Based on the above discussion, some key advantages of mid-inclination orbits for DInSAR can be summarized as follows:
• Enhanced North-South sensitivity: MIOs provide a more favorable geometry for measuring North-South displacements, especially in regions with significant North-South deformation components, such as tectonic plate boundaries, volcanic areas, landslides, and slope instabilities.
• Complementarity to SSO-based DInSAR exploitation: by combining DInSAR measurements obtained by mid-inclination and sun-synchronous satellites, it is possible to achieve a more complete picture of the ongoing three-dimensional deformation phenomena.
In this work, we present the first results achieved by processing three different DInSAR datasets generated from SAR data collected experimentally by Capella Space over the Campi Flegrei caldera (Italy), where the bradyseism phenomena, which restarted in 2005, are still ongoing. Capella Space is currently developing interferometric capabilities. The Capella-14 satellite, at an orbit inclination of 45 degrees, experimentally acquired a repeat ground track (RGT) orbit in June 2024 with a repeat cycle of approximately 3 days [1]. In particular, during the July-August 2024 period, Capella-14 acquired three Stripmap SAR datasets in ascending and descending orbits, and right- and left-looking directions, using complementary satellite heading angles over the Campi Flegrei caldera. The observation geometries of the three processed datasets have the following characteristics:
• Ascending, left-looking: 15 images, 59.5° satellite heading angle.
• Ascending, right-looking: 17 images, 71.15° satellite heading angle.
• Descending, left-looking: 15 images, 121.02° satellite heading angle.
The swath width was experimentally increased to 10 km in order to maximize the coverage over the region of interest. The three available SAR datasets were processed by applying the Small BAseline Subset (SBAS) DInSAR approach to generate the corresponding deformation time series. More specifically, once the SLC images were co-registered with respect to the reference geometry, 45, 51, and 84 multilook interferograms were generated. The interferometric phase stacks were then unwrapped with the Extended Minimum Cost Flow (EMCF) algorithm, and the deformation time series were finally retrieved. Following the geocoding procedure, the deformation time series and the corresponding deformation velocity maps were combined to retrieve the Vertical, East-West, and North-South components of the Campi Flegrei caldera deformation field between July and August 2024. To our knowledge, the preliminary results of the use case described in this work represent the first application that fully retrieves the three-dimensional deformation field from multi-angle/multi-temporal DInSAR data, thus demonstrating the feasibility of MIO satellite configurations for these purposes. In this context, the recent availability of MIO constellations, like the forthcoming NIMBUS SAR component of the Italian IRIDE constellation, will undoubtedly play a key role soon.
[1] Nestor Yague-Martinez, Victor Cazcarra-Bes, Gordon Farquharson, Craig Stringham, Shaunak De, "Capella Space SAR Interferometry Demonstration: Updates and First Evaluation of Time Series," Living Planet Symposium 2025, ESA, Vienna, Austria.
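The combination of the three viewing geometries into a 3D displacement estimate reduces to a small linear inversion: each geometry measures the projection of the (East, North, Up) displacement onto its LOS unit vector. The sketch below uses one assumed sign convention for heading/look side, an invented incidence angle, and invented displacement values; only the heading angles are taken from the abstract.

```python
import numpy as np

def los_sensitivity(inc_deg, heading_deg, right_looking=True):
    # LOS unit-vector sensitivity to (East, North, Up) displacement.
    # Sign conventions for heading and look side are assumed here and
    # may differ from the actual processor's definitions.
    inc = np.deg2rad(inc_deg)
    head = np.deg2rad(heading_deg)
    s = 1.0 if right_looking else -1.0
    return np.array([-s * np.sin(inc) * np.cos(head),
                      s * np.sin(inc) * np.sin(head),
                      np.cos(inc)])

# three geometries loosely modeled on the Capella-14 stacks (invented 35° incidence)
A = np.vstack([los_sensitivity(35.0, 59.5,  right_looking=False),
               los_sensitivity(35.0, 71.15, right_looking=True),
               los_sensitivity(35.0, 121.02, right_looking=False)])

d_true = np.array([0.01, -0.02, 0.05])        # E, N, U displacement (m), invented
d_los = A @ d_true                            # the three "measured" LOS projections
d_hat, *_ = np.linalg.lstsq(A, d_los, rcond=None)
print(d_hat)                                  # ≈ [0.01, -0.02, 0.05]
```

With sun-synchronous geometries only, the North rows of A would be nearly degenerate; the mid-inclination headings are what make the 3×3 system well-conditioned for the North-South component.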
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall N1/N2)

Presentation: L-band Tomography with SAOCOM Data: Phase Calibration and Canopy Height Estimation

Authors: Naomi Petrushevsky, Francesco Banda, Prof. Stefano Tebaldini, Andrea Monti-Guarnieri
Affiliations: Politecnico di Milano, Aresys
Monitoring tree height and spatial distribution is a critical task for understanding and managing forest ecosystems and their role in the global carbon cycle. Remote sensing technologies enable large-scale, consistent, and non-invasive measurements across diverse and inaccessible landscapes. Specifically, low-frequency (L-band and P-band) Synthetic Aperture Radar (SAR) has proven to be an efficient tool for forest monitoring, as it can provide seamless forest height maps over extended areas, independently of weather conditions. By exploiting several acquisitions from slightly different angles, one can reconstruct the vertical profile of the forest, a technique known as SAR Tomography (TomoSAR). Indeed, many airborne campaigns have been conducted to perform such measurements [1], and the soon-to-be-launched BIOMASS mission (P-band) was specifically designed to unlock repeated and global forest monitoring [2]. In this work we demonstrate successful tomographic processing of spaceborne L-band SAR data, capable of producing accurate Canopy Height Maps (CHM). The data investigated consist of nine SAOCOM A/B acquisitions with eight-day gaps and significant dispersion in vertical baselines. While the mission was not originally designed for tomography, the limited orbital control allowed the identification of a suitable stack. Preliminary results were presented in [3], while this paper presents improved performance obtained with a carefully tuned calibration procedure. A successful application of tomography requires that the stack of data is properly phase calibrated, i.e., any disturbances inflicted by atmospheric effects or orbital errors are corrected. In the proposed processing, we eliminate spurious phase contributions by estimating the equivalent scattering center using Phase Linking (PL), a classic approach in SAR interferometry [4].
Traditionally, the method is applied to all the possible interferograms in the stack, gaining robustness from the redundant measurements. However, we demonstrate that in the forest scenario such an approach is sub-optimal, since the basic assumption of phase triangularity is lost for large vertical baselines. Thus, we introduce an innovative adaptation of the calibration procedure, accounting for the peculiarity of the observed scene. Validation and testing of the entire processing scheme were performed by comparing the final CHM with a reference dataset, the Global Forest Canopy Height 2019, obtained from a combination of LiDAR and Landsat optical images and openly published [5]. The agreement between the two products is evident, with some differences that can be explained by the physical capabilities of each sensor. The achievement is of interest for the development of tomographic processors for BIOMASS and other future low-frequency SAR missions.
[1] Tebaldini, Stefano, et al., "TomoSense: A unique 3D dataset over temperate forest combining multi-frequency mono- and bi-static tomographic SAR with terrestrial, UAV and airborne lidar, and in-situ forest census," Remote Sensing of Environment 290 (2023): 113532.
[2] Quegan, Shaun, et al., "The European Space Agency BIOMASS mission: Measuring forest above-ground biomass from space," Remote Sensing of Environment 227 (2019): 44-60.
[3] Banda, Francesco, et al., "Spaceborne L-Band Forest TomoSAR: A First Case Study," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, IEEE, 2024.
[4] Tebaldini, Stefano, et al., "Phase calibration of airborne tomographic SAR data via phase center double localization," IEEE Transactions on Geoscience and Remote Sensing 54.3 (2015): 1775-1792.
[5] Potapov, Peter, et al., "Mapping global forest canopy height through integration of GEDI and Landsat data," Remote Sensing of Environment 253 (2021): 112165.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.34)

Session: F.03.01 Commercial EO missions data for marine and land applications - Part 1

EO data plays a crucial role in marine and land applications, providing vital information for environmental monitoring, biodiversity, disaster management, hydrology, urban planning, agriculture and forestry, oceanography and coastal management, and sea-ice monitoring. By leveraging commercial EO data, these applications can enhance decision-making processes, support sustainable development and help monitor the implementation of EU policies.
This session has been structured into three thematic parts (marine, land, and multidomain) to better highlight the diverse applications of commercial Earth Observation (EO) data. The parts will feature presentations from data and satellite owners, with a particular focus on commercial data providers, illustrating the complementary nature of commercial EO data with other satellite missions (ESA and non-ESA), including the Sentinel missions. The three parts (1, 2 and 3) also aim to exchange experiences on applications powered by commercial EO data, and the presentations will illustrate the commercial data offer, imaging capabilities and use cases.

Thursday 26 June 14:00 - 15:30 (Room 1.34)

Presentation: Revolutionizing Land Monitoring with Hyperfield’s Innovations and Insights

Authors: Michal Shimoni
Affiliations: Kuva Space
Kuva Space, a Finnish space company, is developing the Hyperfield, a constellation of 100 hyperspectral microsatellites designed to deliver hyperspectral imagery of any location on Earth at visible-to-near infrared (VIS-NIR, 450-950nm) and visible-to-shortwave infrared (VIS-SWIR, 450-2500nm) wavelengths. Selected by the European Space Agency to be the sole provider of hyperspectral data for the Copernicus program, Kuva Space plans to collect daily hyperspectral data across Europe by 2027, with coverage increasing to three times daily by 2030. The satellites will continuously gather data without the need for manual tasking, while intra-calibration with Sentinel satellites will ensure high temporal resolution and robust radiometric and orthorectification accuracies. The innovative technology behind Kuva Space’s satellites and hyperspectral payloads enables flexible data acquisition modes, including selectable spectral wavelengths and tunable signal-to-noise ratios (SNR). This capability, combined with AI-powered automated analysis, will allow the company to deliver Earth observation products efficiently and affordably, meeting global needs for agricultural insights, land mapping, and water quality assessments. By deploying its extensive constellation, Kuva Space aims to address gaps in seasonal yield forecasting caused by limited spectral resolution and inconsistent time series data. The constellation’s high spectral resolution will enhance crop identification, including for minor crops, and enable more detailed assessments of in-field variability. Additionally, the rich biochemical information captured in hyperspectral data will allow for a more accurate evaluation of crop growth stages and health conditions. 
Starting in 2025, the satellites’ hyperspectral imaging with a spatial resolution of 12 meters will be complemented by high-resolution RGB cameras to detect sub-pixel variations, better identify less common crop species, and distinguish fields based on agricultural practices. Beyond advancing spectral-based change detection and land cover identification, the Hyperfield constellation will significantly enhance inland water quality monitoring by more accurately identifying pollutants, their sources, and specific species. The high spectral resolution data will deepen understanding of pollutant dynamics, predict the impact of both organic and inorganic pollutants, as well as algae blooms, and monitor the eutrophication process within water bodies and their surrounding ecosystems. Through Hyperfield’s advanced capabilities, European and global stakeholders will gain improved tools for assessing biomass volume and carbon content. This will support compliance with the latest Common Agricultural Policies (CAP) and the Water Framework Directive while also helping to tackle climate change challenges and enhance food and water security.

Thursday 26 June 14:00 - 15:30 (Room 1.34)

Presentation: WorldView Legion - Near Real Time Access and Delivery of VHR Satellite Imagery for Land Applications

Authors: Silvester Fischer
Affiliations: European Space Imaging
Subject: Abstract for LPS2025 – F.03.02 Commercial EO Missions Data for Land Applications. WorldView Legion - Near Real Time Access and Delivery of VHR Satellite Imagery for Land Applications. European Space Imaging (EUSI) is the European distributor and provider of VHR satellite imagery from the Maxar WorldView satellite constellation. EUSI operates its own ground station in cooperation with the German Aerospace Center, offering customized tasking solutions and giving customers direct access to WorldView-1, WorldView-2, WorldView-3, GeoEye-1 and the new WorldView Legion constellation. For many years, EUSI has contributed to the ESA Copernicus and ESA TPM programmes, supporting the Copernicus Services, universities and research initiatives with the provision of VHR satellite imagery. The first part of our presentation will give a comprehensive overview of the product portfolio offered by EUSI and its local tasking capabilities, with a special focus on the new WorldView Legion constellation. In the second part, EUSI will present specific use cases demonstrating the use of VHR satellite imagery in several land applications, and in particular the impact of the new WorldView Legion constellation. The first use case will focus on the provision of large data coverages. The EEA offers several data products through the Copernicus land service, providing information about different categories of land use and land cover. As a basis for the creation of these products, pan-European data coverages are to be provided within a three-year time frame based on VHR satellite imagery. Within that scope, EUSI's contribution to the provision of large coverages will be demonstrated, especially the progress made possible by adding the new WorldView Legion constellation. The second use case will focus on EUSI's contribution to supporting disaster management with on-demand data.
For many years, EUSI has contributed VHR imagery to the Copernicus Emergency Service. Through its local ground station, the fastest possible access, collection and delivery of VHR satellite imagery has been made available. Different use cases will be demonstrated, showing in detail EUSI's capabilities in responding to emergency activations and allowing imagery collection and delivery in near real time at spatial resolutions of up to 30 cm, taking into consideration the new WorldView Legion constellation.

Thursday 26 June 14:00 - 15:30 (Room 1.34)

Presentation: OroraTech's Contribution to Environmental Monitoring and Data-Driven Decision Making

Authors: Kathrin Umstädter, Pia Feurstein, Josephine Wong, Christian Mollière, Julia Gottfriedsen
Affiliations: Ororatech Gmbh
Introduction Land Surface Temperature (LST) data plays a pivotal role in assessing the health of ecosystems, tracking climate change, and managing natural resources. By identifying thermal anomalies, it helps detect and monitor extreme weather events, droughts, and heatwaves, which are becoming more frequent due to global warming. LST data also supports the study of land cover changes, such as deforestation and desertification, by providing insights into the thermal properties of various surfaces. These capabilities allow policymakers to develop targeted mitigation strategies and improve resilience against climate-related hazards. OroraTech, a German intelligence-as-a-service company, focuses on addressing critical challenges such as wildfire detection and monitoring, climate change monitoring, and sustainable resource management through its advanced constellation of thermal infrared satellites. Technology The company has launched two operational sensors so far: FOREST-1 and FOREST-2. The former served as a prototype, while the latter, the first commercially used sensor, already delivers reliable data to customers. These sensors are equipped with a Mid-Wave Infrared (MWIR) and two Long-Wave Infrared (LWIR) bands. To improve the revisit rate and address the problem of cloud cover, OroraTech will expand its constellation to eight satellites in 2025, achieving a 12-hour revisit cycle. By 2028, the system is expected to include up to 100 sensors in low Earth orbit, providing a 30-minute revisit rate globally. Starting in 2025, OroraTech's constellation will monitor the diurnal temperature cycle, providing critical insights into day and night temperature variations. The satellites’ onboard processing capabilities allow real-time data analysis, enabling quicker responses to detected anomalies. With a swath of 410 km, the satellites can cover vast areas during the overpasses. 
Their ground sampling distance (GSD) of 200 meters (a super-resolution of 80 meters is in development) allows for the timely detection of significant land surface temperature changes. This capability complements larger-scale, public missions by filling observational gaps between overpasses of satellites like Trishna and LSTM, especially in the afternoon hours, where no other data is available, delivering more comprehensive and actionable thermal data. Additionally, it can provide complementary data to other missions that offer higher resolutions but cover smaller areas. Use Cases OroraTech's LST data is already delivered to German researchers and authorities (POSTER project with University of Freiburg and data delivery to the German Space Agency at DLR) and is used for environmental monitoring and planning. By identifying temperature variations across diverse landscapes and inland water bodies, LST data helps model the dynamics of cold air flow and accumulation in valleys, urban environments, and open fields. These simulations are crucial for understanding nighttime cooling patterns of whole regions. OroraTech's 200m-resolution LST data, with its ability to cover vast areas simultaneously, is a valuable resource for observing large urban settlements. While the spatial detail is somewhat limited, the extensive coverage makes it ideal for identifying regional temperature patterns and trends, such as the urban heat island (UHI) effect across entire cities or metropolitan regions. This broad perspective helps climate adaptation managers assess the impact of urbanization on local climates and develop strategies to mitigate heat stress. With the future sub-daily revisit rate of OroraTech's constellation (twice a day in 2025) the added value will even increase, especially for day/night monitoring of cities and detecting cooling effects.

Thursday 26 June 14:00 - 15:30 (Room 1.34)

Presentation: constellr HiVE thermal satellite constellation - High Resolution Land Surface Temperature for urban monitoring

Authors: Daniel Spengler, Mohammadl Iranmanesh, Dr. Lina Hollender, Dr. Tobias Leismann, Dr. Birgit Dress, Hannah Kofler, Beate Tempel, Christian Mittermaier
Affiliations: constellr GmbH, constellr SA
The relevance of thermal remote sensing satellite data has become increasingly recognised for environmental monitoring in recent years. One key application is the provision of insights for tackling urban heat. Urban planners often face the challenge of working with limited resources, making it crucial to precisely identify where interventions are needed. High-resolution thermal remote sensing data can provide crucial information to public authorities and urban planners, helping them make their urban areas more climate resilient. The use of thermal remote sensing data has been widely recognized as a contributor to large-scale environmental monitoring, which is why Land Surface Temperature (LST) was defined as one of 54 Essential Climate Variables (ECVs) by the Global Climate Observing System (GCOS). In urban contexts, Urban Heat Island (UHI) effects can be observed and monitored. The problem with current thermal satellite data is that they offer either very high to high temporal resolution (<1 day) at low spatial resolution (>1,000 m), from geostationary or sun-synchronous satellites, or low temporal resolution (>1 week) at moderate spatial resolution (30-100 m). Such data are therefore only of limited suitability when regular monitoring of small-scale environmental factors is required. constellr is developing a constellation of state-of-the-art high-resolution visible, near-infrared and thermal infrared satellites to improve the monitoring of land surface temperature (LST). Launches of two satellites of the HiVE (High-precision Versatile Ecosphere monitoring mission) constellation are planned for 2025, with the first launch of Skybee1 in January 2025 and Skybee2 towards mid-2025. The constellation comprises micro-satellites in the 100 kg class and flies in a sun-synchronous orbit at an altitude of approximately 550 kilometers.
With a remarkable 1-day global temporal resolution (using five satellites from 2027), 30-meter native spatial resolution, and 4-band multispectral TIR capabilities from 8-12 µm enabling a temperature accuracy of up to 1.5 K, the Skybees of HiVE are uniquely equipped to provide accurate and timely data for environmental monitoring. Leveraging its proprietary data, imagery from public missions, and strong remote-sensing expertise in data fusion, harmonisation and analytics, constellr offers timely and highly scalable solutions. HiVE's secondary optical VNIR payload provides 10 spectral bands (similar to Sentinel-2) from 400-1000 nm and a spatial resolution of up to 10 m, enabling the use of super-resolution techniques for thermal sharpening. The HiVE satellites are the first step into a next generation of thermal satellite imagery for environmental monitoring. In the context of the upcoming public missions TRISHNA, LSTM and SBG, constellr HiVE data, with its similar band set-up, will offer a highly valuable precursor dataset and a complementary data source for different use cases in urban, agricultural, forestry and water monitoring. We will present constellr's HiVE mission concept, its status and the first images acquired during the commissioning phase. Regular operations are planned from mid-2025, after a successful commissioning phase. In addition, we will present the added value of HiVE data for urban monitoring, using simulated HiVE data based on airborne campaigns as well as harmonised time series of constellr's LSTfusion data. LSTfusion harmonises and assimilates data from multiple public thermal missions and applies constellr's proprietary LST retrieval algorithm, leading to high spatial sharpness and reliable LST accuracy at a 30 m spatial resolution.
The thematic analysis of the available time series will be applied to selected urban areas to address climate change adaptation topics, in particular the identification of cool and hot areas during heat events for monitoring human thermal comfort, and the analysis of the status and health of urban green areas. The data will open up new opportunities for policy and decision-making, will support the derivation of SDG indicators, and will support European Green Deal monitoring of urban areas and their surroundings.
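The thermal sharpening mentioned above (using the finer VNIR bands to sharpen coarser LST) can be illustrated with a minimal TsHARP-style sketch: regress LST against NDVI at the coarse thermal resolution, predict at the fine VNIR resolution, and add back the coarse-scale residuals. This is a generic published approach shown under simplifying assumptions (NDVI aggregates linearly, aligned grids); it is not constellr's proprietary algorithm, and all array names are hypothetical.

```python
import numpy as np


def tsharp_sharpen(lst_coarse, ndvi_coarse, ndvi_fine, scale):
    """TsHARP-style sharpening sketch.

    Fit LST ~ a + b * NDVI at the coarse (thermal) resolution, predict at
    the fine (VNIR) resolution, then restore the coarse-scale residuals so
    that block means of the sharpened LST match the coarse LST exactly
    (assuming ndvi_coarse is the block mean of ndvi_fine).
    """
    # 1) Linear fit at the coarse resolution (np.polyfit returns [slope, intercept])
    b, a = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    # 2) Predict LST from the fine-resolution NDVI
    lst_fine_pred = a + b * ndvi_fine
    # 3) Residual correction: add back what the regression misses
    resid = lst_coarse - (a + b * ndvi_coarse)
    resid_fine = np.kron(resid, np.ones((scale, scale)))  # nearest-neighbour upsample
    return lst_fine_pred + resid_fine
```

The residual step is the important design choice: it guarantees the sharpened product stays radiometrically consistent with the original thermal measurement at its native scale.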

Thursday 26 June 14:00 - 15:30 (Room 1.34)

Presentation: Enhancing Copernicus Land Services with High-Frequency Complementary Constellations

Authors: Alix De Beusscher, Jean-Emmanuel
Affiliations: Aerospacelab
Copernicus Land Monitoring Services (CLMS) rely on Sentinel-2’s high-quality data to deliver critical insights for agriculture, forestry, urban planning, and environmental management. However, the service’s ability to meet evolving needs, such as frequent monitoring for the Common Agricultural Policy (CAP) or real-time environmental assessments, is constrained by the five-day revisit time of Sentinel-2. To address these limitations, Aerospacelab is developing a complementary commercial constellation that aims to bridge temporal gaps while aligning with Copernicus standards. To ensure seamless integration, Aerospacelab is collaborating with the Optical Mission Performance Centre (OPT-MPC) in the framework of the Copernicus Contributing Missions to validate the data quality and compatibility of the constellation with Sentinel-2. This assessment covers radiometric and geometric performance, ensuring interoperability within Copernicus workflows. By adhering to Copernicus standards, the complementary data strengthens the reliability and utility of CLMS outputs, enabling richer insights and greater user confidence in operational applications. Currently, we are exploring several data demonstration activities that are expected to conclude before the conference. These include work on a Sentinel-2 interoperability report, which evaluates the alignment of Aerospacelab constellation data with Sentinel-2 standards. Additionally, Aerospacelab is conducting a detailed study of Belgium's Very High Resolution (VHR) coverage, offering a proof of concept for systematic monitoring across Europe with a complementary high-frequency dataset. These demonstrations will provide tangible evidence of how the Aerospacelab constellation can effectively integrate into Copernicus workflows and enhance land monitoring capabilities.
This presentation will demonstrate how the Aerospacelab constellation is foreseen to enhance the functionality of Copernicus Land Services by addressing specific gaps in temporal coverage and by unlocking new land monitoring applications. It will explore the potential for creating a more comprehensive and frequent monitoring framework, with an emphasis on applications critical to EU policy implementation. By showcasing the synergies between Sentinel-2 and this commercial constellation, the presentation will underscore the importance of public-private cooperation in advancing Copernicus Land Services. It highlights a path forward where commercial EO missions not only complement Copernicus capabilities but also expand the programme's potential to address emerging land monitoring challenges effectively.

Thursday 26 June 14:00 - 15:30 (Room 1.34)

Presentation: Third Party Missions and Copernicus Contributing Missions

Authors: Claire D'Oiron, Patryk
Affiliations: Airbus
Airbus plays a critical role in supporting ESA's CCM and TPM programmes by delivering very high-resolution Earth Observation (EO) data to enable rapid crisis response and disaster management, and by supporting the scientific community in its research and development activities. This ensures timely access to vital geospatial information, often within minutes of service requests, during emergencies such as floods, fires, and earthquakes, enabling precise mapping and decision support. Leveraging Airbus's advanced rapid-response satellite capabilities strengthens emergency efforts, aiding ESA in its mission to address global crises effectively.

Thursday 26 June 14:00 - 15:30 (Hall E2)

Session: D.01.06 Destination Earth Checkpoint 2025

Destination Earth (DestinE) is a flagship initiative by the European Commission to create a highly accurate digital twin of the Earth. It enables precise modelling, monitoring, and simulation of natural phenomena and human impacts, supporting effective adaptation and mitigation strategies. DestinE represents a breakthrough in accuracy, detail, and interactivity, advancing computing and science. It is a key component of the EU’s Green Deal and Digital Strategy.
The Commission has entrusted the European Centre for Medium-Range Weather Forecasts (ECMWF), the European Space Agency (ESA), and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) with the implementation of DestinE.

In line with the organisation of the EU Digital Europe Work Programme, DestinE is structured into short, dynamic phases. The development and availability of the first digital twins, the operation of the data lake, and access through the DestinE platform are crucial milestones of the entire system, laying the foundation for its broader objectives. These milestones underline the necessity of continuous improvement during operations and the strategic implementation of new elements to enhance system capabilities. This system enhancement is driven by annual system milestones that actively involve stakeholders and users, focusing on collecting feedback and deriving recommendations for potential consolidation and evolution of DestinE.

The recent opening of the DestinE Platform for user registrations in October marks a crucial step forward, allowing for the onboarding of users and services and enabling the collection of user feedback based on the operational use of DestinE services and applications.

This session is the 2025 checkpoint with the goal of leveraging user feedback gathered during operations. It will involve the three entrusted entities (ESA, ECMWF, and EUMETSAT) and industrial teams responsible for different DestinE components, ensuring full transparency with DG-CNECT. Feedback previously collected will be discussed with checkpoint participants to generate recommendations for the continued evolution of DestinE services.

Thursday 26 June 14:00 - 15:30 (Room 1.14)

Session: C.04.01 - AWS, EPS-Sterna and AEOLUS-2 missions

The status of development of ESA missions will be outlined in four sessions of 1.5 hours each (equivalent to a full day).
Participants will have the unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and the status of mission development activities will be presented together with industrial and science partners.

AWS, EPS-Sterna and AEOLUS-2 missions


AWS In-Orbit results


  • Ville Kangas – ESA

AWS impact on NWP


  • Adam Dybbroe – SMHI
  • Philippe Chambon – Meteo-France

EPS-Sterna Programme overview and status


  • Alessio Canestri – EUMETSAT

Importance of wind measurements for NWP


  • Ad Stoffelen – KNMI

Objective and benefits of an operational DWL Mission


  • Thomas Flament – EUMETSAT

Aeolus-2: From Aeolus to the first wind lidar operational Satellite


  • Massimiliano Porciani – ESA

Thursday 26 June 14:00 - 15:30 (Room 1.31/1.32)

Session: C.02.09 Preparing for the FLuorescence EXplorer (FLEX) mission - PART 1

The FLEX mission, selected by ESA as the 8th Earth Explorer, aims at providing an improved understanding of the functioning and photosynthetic efficiency of vegetation from space. FLEX will be the first satellite mission to facilitate quasi-direct observation of the cause-effect relationships between plant health, photosynthesis and vegetation functioning. FLEX will provide repeated observations along the phenological cycle at a spatial scale supporting agricultural and forestry management units. In this session, the status of the implementation of the FLEX mission will be presented, including preparation activities towards the operational phase. Presentations will focus on the instrument performances, the algorithm developments, the calibration and validation activities, and selected results demonstrating the expected impact of the FLEX products.

Thursday 26 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: FLEX L2 Processor Prototype: scientific validation protocol and first results

Authors: Théo Paccoud, Doctor Jorge Vicent Servera, Pekka Kolmonen, Doctor Neus Sabater, Sergio Cogliati, Pietro Chierichetti, Doctor Christiaan van der Tol, Gwennaël Matot, Ornela Nanushi, Doctor Antonio Ruiz Verdú, Doctor Claudia Isola, Doctor Marco Celesti, Doctor Marin Tudoroiu
Affiliations: Magellium, Finnish Meteorological Institute, University of Milano-Bicocca, University of Twente, ESA/ESRIN, ESA/ESTEC, University of Valencia
The FLEX mission is ESA’s 8th Earth Explorer mission and will provide global maps of vegetation Sun-Induced Fluorescence (SIF), which can be converted into an indicator of photosynthetic activity. The development of an operational processing facility for FLEX data at Level-1C and Level-2 is ongoing in the Data Innovation and Science Cluster (DISC) project. Among other activities in the DISC project, we are implementing a Level-2 Processor Prototype (L2PP) to develop and test state-of-the-art algorithms in a preoperational data processing chain environment. The L2PP consists of four independent modules to process input Level-1B data into higher-level biophysical and photosynthesis Level-2 products. The L2PP starts with the L1C module, whose main objective is to geometrically co-register FLEX (FLORIS) and Sentinel-3 (OLCI and SLSTR) data in the same geometrical reference. The goals of the L2A module are (1) to characterize the atmospheric conditions (aerosols, water vapor, clouds) using both FLEX and Sentinel-3 data, and (2) to retrieve surface apparent reflectance and at-surface solar irradiance from FLEX L1C products. The L2B module disentangles the SIF emitted by vegetation from the reflected radiance using the L2A apparent reflectance and solar irradiance as an input. Finally, the L2C module uses surface reflectance (from FLORIS & OLCI), SIF, and Land Surface Temperature (from SLSTR), to retrieve key biophysical and photosynthetic variables for a correct interpretation of the fluorescence signal. In order to test and evaluate the performance of L2PP algorithms against mission requirements, a validation plan relying on simulated data has been developed. Realistic simulations from the FLEX end-to-end mission performance simulator (E2ES) are used to assess the accuracy of the Level-2 algorithms, as it enables a precise comparison between a set of known reference data and the retrieved FLEX products. 
This validation framework makes it possible to assess whether the L2PP meets the mission requirements, to identify existing scientific challenges and to establish future improvements. In this presentation we will show the current status of the L2PP validation within the FLEX E2ES framework at milestone 3 (May 2025). In particular, we will: (1) introduce the core L2PP algorithms and high-level architecture, (2) describe the validation protocol that uses E2ES test datasets to assess the accuracy of the retrieval algorithms, (3) give an overview of the validation results over the E2ES scenarios against mission requirements, and (4) present the main ongoing improvements. With this work, we expect to provide the audience with insight into the FLEX L2 algorithms, their status, scientific performance and challenges.
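One of the L2A outputs described above, at-surface apparent reflectance, follows a standard remote-sensing convention that can be sketched in a few lines. This is illustrative only (a common textbook definition, not the operational FLEX L2A algorithm), and the function name is hypothetical.

```python
import numpy as np


def apparent_reflectance(radiance, irradiance):
    """Apparent reflectance under the common convention
        rho = pi * L / E,
    with L the upwelling radiance and E the total at-surface irradiance
    (direct + diffuse) on a horizontal plane, in consistent units.
    Illustrative sketch only, not the operational FLEX L2A processor.
    """
    return np.pi * np.asarray(radiance) / np.asarray(irradiance)
```

The factor pi converts radiance (per steradian) into an equivalent Lambertian exitance, which is why a perfectly white Lambertian surface yields rho = 1.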

Thursday 26 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: The Atmospheric Correction Processor of the FLEX Sentinel-3 Tandem Space Mission

Authors: Pekka Kolmonen, Neus Sabater, Timo H. Virtanen, Dr. Sergio Cogliati, Pietro Cierichetti, Théo Paccoud, Jorge Vicent, Gwennaël Matot, Christiaan van der Tol, Lucie Tixier, Marin Tudoroiu
Affiliations: Finnish Meteorological Institute, University of Milano Bicocca, Magellium Artal Group, University of Valencia, University of Twente, ESA ESRIN
The European Space Agency’s (ESA) Fluorescence Explorer (FLEX) mission [1] is designed to operate in tandem with the Sentinel-3 mission, taking advantage of the combined capabilities of both platforms’ instruments. On board Sentinel-3, the OLCI multispectral instrument and the SLSTR dual-view conical scanning broadband sensor are exploited to characterize key atmospheric components, including aerosols and water vapor, and to facilitate cloud detection and screening [2]. FLEX, on the other hand, is equipped with the FLORIS high-resolution spectrometer, covering wavelengths from 500 to 780 nm with a spectral resolution ranging from 0.3 to 3 nm and a spectral sampling interval between 0.1–2 nm [2]. Such a high spectral resolution allows the solar-induced chlorophyll fluorescence (SIF) signal emitted by vegetation to be disentangled, providing insight into photosynthetic activity. The accurate estimation of the SIF signal within the 650-780 nm range is a primary FLEX mission objective. This estimation is challenging, as the SIF signal – an in vivo proxy for photosynthesis – comprises only a small fraction (∼4%) of the top-of-atmosphere radiance in the red and far-red regions. Therefore, for an accurate spectrally-resolved SIF retrieval, the atmospheric correction process plays a crucial role, as inaccuracies in this step can propagate into errors in the final SIF estimates [3]. This presentation outlines the design and implementation of FLEX’s Level-2 atmospheric correction processor, developed as part of the Level 2 Prototype Processor (L2PP) for the ESA FLEX-DISC project (2024-2030). The FLEX-DISC initiative (reference 4000144004/24/I-DT) contributes to the FLEX mission ground segment preparation activities, focusing after launch on supporting FLEX’s commissioning and product validation phases. In addition, to assess the current performance, the results of the L2PP atmospheric correction are shown.
These evaluated results are the retrieved atmospheric parameters (aerosols, water vapour, cloud mask) and the at-surface apparent reflectance and solar irradiance, which are further used for the retrieval of the SIF. The atmospheric correction is presented here for chosen scenes from the FLEX End-To-End Mission Performance Simulator (FLEX-E) project, which have been specifically designed to mimic FLEX instrument capabilities and are well suited for testing the correction performance of the atmospheric module. The scenes, created with the FLEX observation model, contain various atmospheric conditions (aerosols, clouds) over a realistic surface in the Catalonia region, Spain, with an optional elevation model.

[1] European Space Agency. FLEX - Fluorescence Explorer mission, 2024. URL https://www.esa.int/Applications/Observing_the_Earth/FutureEO/FLEX. Accessed: 2024-10-29.
[2] European Space Agency. FLEX: Fluorescence Explorer mission – report for mission selection. ESA SP-1330/2, European Space Agency (ESA), 2008. URL https://esamultimedia.esa.int/docs/EarthObservation/SP1330-2_FLEX.pdf. Accessed: 2024-10-29.
[3] Neus Sabater, Pekka Kolmonen, Shari Van Wittenberghe, Antti Arola, and José Moreno. Challenges in the atmospheric characterization for the retrieval of spectrally resolved fluorescence and PRI region dynamics from space. Remote Sensing of Environment, 254:112226, 2021.
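The difficulty of isolating a SIF signal that is only ~4% of the top-of-atmosphere radiance is classically illustrated by the Fraunhofer Line Discrimination (FLD) principle, which exploits deep absorption lines such as the telluric O2 bands. The sketch below is the standard single-band FLD solution, a didactic baseline only; the operational FLEX retrieval is a more sophisticated spectral-fitting approach, and the variable names here are generic.

```python
import numpy as np


def sfld_fluorescence(e_out, l_out, e_in, l_in):
    """Standard (single-band) FLD estimate of fluorescence.

    Assumes reflectance r and fluorescence F are constant across the
    absorption line, so the measured radiance is L = r * E + F both
    inside ('in') and outside ('out') the line (the pi factor is
    absorbed into r).  Solving the resulting 2x2 linear system gives:
        F = (e_out * l_in - e_in * l_out) / (e_out - e_in)
    """
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)
```

Because e_in is strongly depressed inside the absorption line while F is not, the fluorescence "fills in" the line; the FLD formula quantifies exactly that in-filling.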

Thursday 26 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: FLEX-E: The End-To-End Mission Performance Simulator for ESA’s FLEX photosynthesis mission

Authors: Antonio Ruiz-Verdú, Dr. Claudia Isola, Prof. José Moreno, Dr. Carolina Tenjo, Antonio Falcão, Ana Amelia Martín, Aymeric Hale, Fernando Martinho, Adrián Jacinto, Dr. Marco Celesti, Dr. Matthias Drusch
Affiliations: Image Processing Laboratory (IPL) - University of Valencia, European Space Agency – ESTEC, DEIMOS Engenharia, S.A., GMV Aerospace and Defence S.A.U.
FLEX-E, the FLEX Level-2 end-to-end mission performance Simulator for ESA’s FLEX Earth Explorer-8, is a key tool to demonstrate the feasibility of the whole FLEX mission concept and the baseline for the mission Ground Segment. It allows the simulation of a wide diversity of ground, atmospheric and observation scenarios, representative of the expected conditions that the FLEX – Sentinel-3 tandem mission will face. It also permits testing the impact on the L2 scientific products of the constraints imposed by the technical solutions adopted for the FLEX instrument/platform. FLEX-E has a modular architecture, with several modules integrated within the OpenSF generic simulation framework. The orbit, attitude and observation geometry needed for the scene generation are provided by two Geometry Modules (GM and S3G), for the FLORIS and Sentinel-3 spectrometers, respectively. With that information, the Scene Generation Module (SGM) generates Top of the Atmosphere (TOA) radiance hypercubes, simulating the interaction of the incoming solar radiation with vegetation and soils, and its propagation up to TOA, with two coupled radiative transfer codes: SCOPE and MODTRAN. The TOA radiances are ingested by the FLORIS Instrument Performance Simulator module (FIPS), developed in a parallel project, which models the conversion of TOA radiances to digital numbers, including all the instrument spatial, spectral and radiometric effects and errors. Then, the Ground Processor Prototype (GPP) generates the L1b data products from the raw data, by implementing the calibration and correction of systematic errors. For the Sentinel-3 data flow, the Instrument+L1 Processing Module (S3M) models a simplified behaviour of the S3 sensors (OLCI and SLSTR), both in the spatial and spectral domain, and the L1 Ground Processing for the generation of Level-1 products.
Finally, the Level-2 Retrieval Module (L2RM), also developed in a parallel project, ingests the L1b inputs from GPP and S3M and implements all the retrieval algorithms for the FLEX L2 products, including Top-of-canopy reflectance, Sun-Induced Chlorophyll Fluorescence (SICF) and high-level photosynthesis products. The ultimate objective of FLEX-E is to perform the Mission Performance Assessment Report (MPAR), by evaluating the FLEX Mission Requirements (MR). This is achieved through two modules, L1 PAM and L2 PEM, which compare L1b and L2 outputs with reference data from FIPS and SGM, respectively. For FLEX Mission Critical Design Review (M-CDR), performed in 2025, FLEX-E V4 was used first to generate a comprehensive Test Dataset of synthetic and realistic scenes (TDS-5) and then to produce a complete L1 and L2 MPAR, which served as the key element to evaluate the mission status. The main results are presented.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: The FLEX Instrument Performance Simulator and Ground Prototype Processor: development status

Authors: Marc Bouvet, Stephan Adriaensen, Enrico Carta, Emanuela De Luca, Vittorio Digilio, Antonio Falcao, Riccardo Ferrara, Luca Galli, Aymeric Halé, Stefano Mica, Hugo Monchatre, Jaime Parra Damborenea, Dr Sindy Sterckx, Davide Tiriticco
Affiliations: ESA/ESTEC, DEIMOS, VITO, Leonardo Spa, DEIMOS, EXPRIVIA, Thales Alenia Space
The FLEX Instrument Performance Simulator (FIPS) is a software tool allowing the simulation of synthetic raw data representative of the FLEX instrument's radiometric, spectral and geometric performance. The FIPS simulates the optical performance of the instrument telescope and the two spectrometers. Particular emphasis was put on simulating the straylight performance of the instrument. The full acquisition chain from detectors to onboard data generation is also simulated, allowing the generation of instrument source packets. The Ground Prototype Processor (GPP) processes both synthetic and real instrument Earth Observation data, from the instrument source packet up to the Level 1B user product. The processing includes dark signal removal, smearing correction, non-linearity correction, straylight correction, absolute radiometric calibration and flat field equalization. The resulting Level 1B product includes geolocated top-of-atmosphere radiances, associated data quality information and uncertainty estimates. In addition, meteorological data and instrument characteristics required for further processing of the data to Level 2 are included. The GPP is also designed to process data from the instrument whilst operating in various calibration modes. This functionality will enable in-flight characterization and calibration of the instrument. The instrument radiometric calibration will be performed in-flight using a sun diffuser. It will be further monitored and validated using regular observations of the moon and deep convective clouds. The non-linearity of the instrument detector chain will be characterized on ground and then verified in-flight using natural targets at various levels of signal and associated instrument integration times. The in-flight spectral characterization will be based on measurements of atmospheric absorption features as well as solar absorption lines observed on the onboard sun diffuser.
The absolute geometric performance will be monitored and corrected for through spatial feature matching of nominal EO data with a database of georeferenced high spatial resolution (30 m) images. The spatial co-registration between the high resolution and low spectral resolution spectrometers will be ensured by spatial feature matching between the two spectrometers data.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: The FLuorescence EXplorer (FLEX) Mission: Science Objectives and Background

Authors: Jose Moreno
Affiliations: University of Valencia
Carbon exchanges between the surface and the atmosphere mediated by plant photosynthesis are a key component of the terrestrial carbon balance. Vegetation photosynthesis - the mechanism for atmospheric carbon uptake and assimilation by terrestrial vegetation - is highly variable according to environmental factors, particularly when plants are exposed to variable stress conditions, and such variability is enhanced under climate change and human pressure. A better quantitative estimation of the actual carbon assimilation by plants is needed to improve the accuracy of current terrestrial carbon models and to improve the predictability of such models towards future climate scenarios. While Earth Observation techniques have been used extensively to model and quantify terrestrial vegetation dynamics, by determining the structure and functioning of plants, a direct estimate of actual photosynthetic activity by vegetation is only possible by quantifying not only the light actually absorbed by chlorophyll but also how the absorbed light is internally used by the plants. Only a fraction of such absorbed light is actually used for photosynthesis, and this fraction is highly variable in space and time as a function of environmental conditions and regulation factors. Measuring chlorophyll fluorescence together with the total absorbed light, in addition to other key plant variables, provides a unique opportunity to quantify vegetation photosynthesis and its spatial and temporal dynamics through global satellite observations. The FLuorescence EXplorer (FLEX) mission was selected in 2015 by the European Space Agency (ESA) as the 8th Earth Explorer within the Living Planet Programme, with the key scientific objective of quantitative global mapping of the actual photosynthetic activity of terrestrial ecosystems, at a spatial resolution adequate to resolve land surface processes associated with vegetation dynamics.
FLEX also provides quantitative physiological indicators to account for vegetation health status and environmental stress conditions, to better constrain global terrestrial carbon models. FLEX measurements include not only chlorophyll fluorescence spectral emission, but also vegetation temperature and estimates of regulated energy dissipation, which is highly variable and drives the relation between fluorescence and photosynthesis. In addition, FLEX has to acquire all the necessary information to determine vegetation conditions to properly interpret the photosynthesis variability, and all the information needed for appropriate cloud screening, compensation for atmospheric effects and proper analysis of the measured signals. The selected spatial resolution of 300 m is driven by the need to resolve land processes at spatial scales relevant for the identification and tracking of stress effects on terrestrial vegetation, covering several annual cycles with global observations. The optimal observation time around 10:00 is driven by the diurnal cycle of photosynthetic processes. The targeted uncertainty for photosynthesis, as derived from instantaneous measurements, ranges from about 5% in unregulated conditions up to 30% in cases of highly variable regulated energy dissipation under high-stress conditions, in line with model requirements and improving current capabilities provided by other techniques. The availability of validated ready-to-use high-level science products will allow extensive scientific usage of FLEX data, along with a high potential for derived applications. The easy availability and accessibility through an open exploitation platform will offer large versatility in FLEX exploitation approaches.
The ultimate goal in the scientific exploitation of FLEX science products is the development of improved land models able to ingest the new type of information provided by FLEX and to assimilate it into global dynamical models, improving their predictive capabilities in terms of vegetation impacts under future climate scenarios.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: FLEX Development Status – Satellite Ready for Test Campaign

Authors: Ralf Bock, Joao Pereira Do Carmo, Jinesh Ramachandran, Frank de Bruin, Matteo Taccola, Marco Celesti
Affiliations: ESA-ESTEC
FLEX (FLuorescence EXplorer) is the 8th Earth Explorer mission currently being developed by ESA, with the objective to perform quantitative measurements of solar-induced vegetation fluorescence. It will advance our understanding of the functioning of the photosynthetic machinery and the actual health of terrestrial vegetation. The fluorescence signal measured from space is so faint that additional information is required for an accurate retrieval and interpretation of the vegetation fluorescence emissions. Hence the FLEX satellite will fly in convoy with a Sentinel-3 satellite for close temporal coregistration with its OLCI and SLSTR measurements. The FLEX project development started in 2016 and a major milestone has now been achieved by completing the FLORIS instrument and satellite platform assembly, integration and testing. By mid-2025 the full satellite-level Integration and Testing will start. Also, the Ground Segment development and Launcher activities are progressing as planned. An overview of the development progress will be provided, including an outlook on the future project activities to get ready for launch. Finally, the roadmap for the Announcement of Opportunity for the FLEX scientific Cal/Val activities will be presented.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall F2)

Session: A.02.04 Advances in Monitoring and Management of Forest Ecosystems - PART 3

About 4 billion hectares, or 31 percent of the global land surface, are covered by forests according to the latest FAO Forest Resource Assessment (FRA 2020). Forests are a valuable and irreplaceable ecosystem, which play an important role from an ecological, social, and economic perspective. Monitoring of forests is essential to better understand, protect and sustainably manage this valuable ecosystem.

Information needs vary and include forest area, forest types, forest structure, disturbances, health, and biomass. An unprecedented wealth of observations by optical and microwave sensors, active and passive, from low to high resolution allows new ways to monitor forests. Emphasis will be put on advances in detecting forest disturbances, forest health monitoring, species identification, support to sustainable forest management, estimation of forest biomass and forest carbon accounting. This session will showcase some of the most recent key achievements including methods/algorithms, science and applications.

Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall F2)

Presentation: GEO-TREES: a global network of high-accuracy ground data to support satellite derived biomass mapping

Authors: Jerome Chave, Stuart Davies, Alvaro Duque, Lauren Krizel, Ayala Loizel, Oliver Phillips, Camille Piponiot-Laroche, Lillian Rodriguez, Beatriz Schwantes Marimon, Klaus Scipal, Irie Casimir Zo Bi
Affiliations: ESA, Université Toulouse 3 – Paul Sabatier, Universidad Nacional de Colombia, University of Leeds, Université de Montpellier, University of the Philippines, Universidade do Estado de Mato Grosso, Institut National Polytechnique Félix Houphouët-Boigny, Smithsonian Tropical Research Institute
Space agencies have made enormous investments in Earth Observation missions to map forest biomass across continents to support climate science and carbon markets. While many Earth Observation missions and programmes aim to estimate forest biomass from space, their calibration and validation is critical. Ultimately, trust in biomass maps requires accurate ground data. Supporting ground measurements and the people who make them is thus mission-critical for mapping and tracking Earth’s forests. The GEO-TREES initiative aims to set up a global Forest Biomass Reference System of long-term forest inventory sites, complemented by airborne and terrestrial lidar scans. The Forest Biomass Reference Measurement System will represent forests around the world, with strong priority placed on the tropics. To ensure the highest possible quality standards and a sustainable future, the system will be established through collaboration with existing international networks (e.g. ForestGEO, ForestPlots, TMFO) and the engagement of local field experts. Each BRM site includes 10 x 1 ha plots of tree-by-tree measurements of diameter at breast height and tree species, 10 km2 of airborne lidar scans and 3 x 1 ha of terrestrial lidar scans, following the requirements of the Committee on Earth Observation Satellites for validating biomass observations (https://lpvs.gsfc.nasa.gov/AGB/AGB_home.html). The collected data follow a common standard for high-quality data acquisition and transparent measurement protocols. Based on the collected data, accurately geolocated estimates of forest biomass and height at 0.25 ha resolution will be produced for each site and shared openly and free of charge. GEO-TREES aims to establish at least ~100 high-intensity BRM sites, to represent the main environmental and anthropogenic dimensions over which forests occur globally, and to achieve greater sampling intensity in the critical tropics with an additional ~200 lower-cost, highly-distributed sites.
The status of the first committed sites can be accessed at https://geo-trees.org/project/. This contribution will present the project, the status of the first sites and plans.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall F2)

Presentation: Evolution of the Climate Change Initiative Biomass Datasets Towards the Characterization of Terrestrial Carbon Dynamics

Authors: Maurizio Santoro, Oliver Cartus, Samuel Favrichon, Shaun Quegan, Richard Lucas, Heather Kay, Arnan Araza, Martin Herold, Jérôme Chave, Nicolas Labriére, Prof Heiko Balzter, Dr Nezha Acil, Ake Rosenqvist, Takeo Tadono, Kazufumi Kobayashi, Josef Kellndorfer, Dmitry Schepaschenko, Alexandre Bouvet, Frank Martin Seifert
Affiliations: Gamma Remote Sensing, University of Sheffield, Aberystwyth University, Wageningen University & Research, GFZ German Research Centre for Geosciences, Laboratoire Evolution et Diversité Biologique, University of Leicester, soloEO, Japan Aerospace Exploration Agency, Remote Technology Center of Japan, Earth Big Data, International Institute of Applied Systems Analysis, CESBIO, ESA ESRIN
The Climate Change Initiative Biomass dataset responds to the request by the climate and policy community for spatially explicit, timely and global information on above-ground biomass, or AGB, and associated carbon stocks in woody vegetation. The estimates should furthermore be of high quality, with bias and uncertainties (as compared to independent reference data) as small as practicable. In practice, remote sensing is not able to measure the carbon stored in vegetation, and estimates of AGB from satellite data are therefore the result of a data-model integration, which can suffer from a whole range of issues. As predictors, images of the backscattered intensity from C- and L-band Synthetic Aperture Radar (SAR) were chosen. These show weak to moderate sensitivity to AGB; the use of multi-temporal observations can reduce the uncertainty but not overcome the fact that the noise in the data is comparable to the sensitivity of the signal to AGB already at low levels of AGB. Spaceborne LiDAR contributes information on the height and density of the forests, which is used as a constraint in the AGB retrieval model. This approach bypasses the lack of a sufficiently dense dataset of field measurements to train the retrieval model. The model itself is physically based, which allows full control of the estimation and the possibility to reduce systematic errors. Nonetheless, the lack of AGB reference data implies that several approximations and generalizations are needed to maintain the estimation within plausible ranges of AGB, resulting in locally erroneous estimates. Our presentation will review these aspects (Santoro et al., 2024) and offer a perspective on possible improvements on either the data or the modelling side. The version of the CCI Biomass dataset to be presented consists of annual global maps of AGB and AGB changes for the years 2007-2010 and 2015-2022, with a pixel size of 1 hectare.
The time interval represents periods of availability of global C- and L-band datasets from ALOS PALSAR, Envisat ASAR, ALOS-2 PALSAR-2 and Sentinel-1. Each individual map correctly reproduces the spatial distribution of AGB at large scale; however, the maps are subject to significant biases and are characterized by an uncertainty mostly between 25% and 60% of the estimated values at the pixel level. Bias and precision improve at the cost of spatial detail in data products obtained by spatial averaging at kilometric resolution. As a consequence, AGB changes are unreliable at the hectare scale, with the exception of fast losses. Sufficient confidence can instead be attributed to slow processes when considering spatially averaged map products. Our presentation will review the strengths and limitations of the CCI Biomass dataset and its evolution in time, including independent accuracy assessments and the consideration of errors in national and global uptake. We also aim to provide indications on future developments arising from new and near-future satellite missions (e.g. ALOS-4, NISAR, BIOMASS) or from the integration of other satellite data streams (e.g., from active and passive microwave sensors operating at coarse spatial resolution).
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall F2)

Presentation: Forest Height to Biomass Allometries: Applying Machine Learning to Estimate Forest Structure on a Multi-Mission-Multi-Scale approach

Authors: Benedikt Hartweg, Islam Mansour, Kostas Papathanassiou, Lukas Lehnert
Affiliations: Ludwig-Maximilians-Universität München, Faculty of Geosciences, Department of Geography, German Aerospace Center e.V. (DLR), Wessling, Germany
The role of the quantity and dynamics of forest biomass in our planetary climate system is widely known, and obtaining detailed knowledge of it is indispensable. Given the necessity for such data, several approaches have been developed in the past to estimate biomass at vastly different scales and accuracies. While the biomass estimation of a single tree can already be complex - as it is a function of wood density, trunk and branch volume - the biomass estimation of a whole stand is a far more challenging task. Especially at highly complex tropical forest sites, biomass estimation at stand scale remains a challenge. Current approaches are often field inventory (FI) based, with the resulting obvious limitations: they are very restricted in scale, with low temporal resolution and often limited comparability/reproducibility. Yet, they provide the absolute foundation for every kind of modelling or remote sensing (RS) based biomass estimation approach. With the cornucopia of RS sensors operational now and in the near future, we have the chance to realistically approach this task for the first time and monitor forests at wide scales and at relevant repetition times. Amongst the most promising approaches is the use of univariate or multivariate allometric functions for estimating above- and below-ground biomass using different driving variables. The highest accuracies were achieved in FI studies, where the diameter at breast height (DBH) of the tree stems was used in a simple power-law model, incorporating a shifting factor alpha and an allometric scaling exponent beta. A key challenge now remains in the transfer of this highly accurate but extremely locally scaled knowledge to a vastly different scale - ecosystem-wide RS products on biomass pools and changes. For this, a far more aggregated variable is needed as the driver.
Forest height has proven a challenging yet promising driver, as it can be operationally mapped at nearly global scale using existing platforms and technologies. Several studies analysing forest height-to-biomass allometric models have yielded promising results, but also showed inherent limitations, which arise from the purely statistical nature of the approaches and incoherent methods for transferring FI data to plot aggregates. To improve on this issue, we would like to introduce the following framework, in which we preserve the simplicity of the allometric equation but expand the alpha and beta parameters with RS and machine-learning-driven applications. We attribute alpha to changes in the density of the forest plot, and beta to growing conditions and the overall forest type. Yet research has shown that beta is very stable across forest ecosystems, with values ranging between 1.6 and 1.8, and only differs significantly for boreal forests. Thus, with this study, we want to focus on estimating the alpha parameter at high resolution and thereby create localised allometries. This is also in line with our recent work (in review), where we analysed scale dependencies of the mentioned allometric models and concluded that knowledge of the forest structure is essential. Preliminary results show the potential of our approach. For this, we used ESA Sentinel-2 datasets, NASA GEDI waveforms and FI plot data to estimate the residuals of the allometric equation at the resolution-cell level using a simple random forest model, and then applied these localised allometries to a LiDAR point-cloud-derived canopy height map. This way, we obtained an above-ground biomass (AGB) product of the Bavarian State Forest for the years 2017 and 2023, which we in turn compared to the FI datasets. In a next step, we want to generalise the creation of the allometric equations from FI data at plot level and apply our approach to the whole of Bavaria and to vastly different ecosystems.
The upcoming ESA BIOMASS mission, with its one-of-a-kind forest height product and repetition cycles, will play a key role in driving our approach in future adaptations. Ultimately, our framework represents a significant step forward in meeting the need for accurate and scalable biomass estimation, which will provide critical insights for carbon cycle modelling and climate change mitigation and adaptation strategies.
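The height-to-biomass allometry described in this abstract can be sketched numerically. The following Python fragment is an illustration only: all plot heights and biomass values are invented, and the abstract's random forest step for localising alpha is replaced here by a simple mean over per-plot alpha values.

```python
import numpy as np

def agb_from_height(height_m, alpha, beta=1.7):
    """Allometric model AGB = alpha * H**beta.

    beta is reported as stable (~1.6-1.8) across most non-boreal forest
    ecosystems; alpha varies with stand density and is the quantity the
    abstract proposes to estimate locally.
    """
    return alpha * np.asarray(height_m, dtype=float) ** beta

# Hypothetical calibration data: canopy heights (m) and field-inventory
# AGB (Mg/ha) at three plots.
plot_height = np.array([18.0, 24.0, 31.0])
plot_agb = np.array([95.0, 160.0, 250.0])

# Per-plot alpha obtained by inverting the allometry at each plot; in the
# abstract these residuals would be modelled with a random forest on
# Sentinel-2 / GEDI predictors, here we simply average them.
alpha_local = plot_agb / plot_height ** 1.7
alpha_hat = alpha_local.mean()

# Apply the localised allometry to new canopy-height pixels.
agb_map = agb_from_height(np.array([20.0, 28.0]), alpha_hat)
```

This keeps the two-parameter simplicity of the allometric equation while letting alpha vary with the local data, which is the core idea of the framework.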
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall F2)

Presentation: Increasing the precision of forest biomass estimates with space-based biomass maps: a cross-country comparison of inferential techniques

Authors: Dr. Natalia Málaga, Dr. Chad Babcock, Dr. Neha Hunka, Dr. Daniela Requena Suarez, Martin Herold, Hercilo Sancho Carlos Odorico, Muri Gonçalves Soares, Larlyn Faith C. Aggabao, Ricardo de la Cruz Paiva, Alexs Arana Olivos, Jorge Luis Carranza Castañeda
Affiliations: Helmholtz Center Potsdam GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing and Geoinformatics, University of Minnesota Forest Resources Department, Department of Geographical Sciences, University of Maryland, Measurement, Reporting, and Verification Unit, National Sustainable Development Fund (FNDS), Forest Management Bureau, Department of Environment and Natural Resources, Servicio Nacional Forestal y de Fauna Silvestre, Ministerio de Desarrollo Agrario y Riego (SERFOR)
In recent years, the availability of space-based biomass information (e.g. maps) has grown substantially, offering new opportunities to enhance forest biomass monitoring. However, its uptake into national monitoring and reporting systems for forest-related climate mitigation purposes remains limited. National Forest Inventories (NFI) continue to be the primary source for estimating emission factors; however, many countries face challenges in completing or updating their inventories due to financial and technical constraints, which may affect the timeliness and accuracy of their climate reporting. Integrating space-based biomass information with NFI data has shown potential for increasing the precision of (sub)national biomass estimates, moving from traditional NFI-based approaches to model-assisted and model-based techniques. These inferential techniques are inherently different in their assumptions and implementation. Design-based and model-assisted inference relies heavily on probability samples (e.g., NFIs). In model-assisted approaches, auxiliary information (e.g., maps) is incorporated by establishing a statistical relationship between the auxiliary data and the NFI data, improving estimation precision. Geostatistical model-based methods, in contrast, are less reliant on probability sampling. Instead, they depend on distributional assumptions applied to the posited model to estimate uncertainty. To date, no comprehensive comparison of these inferential approaches has been undertaken to evaluate their respective opportunities and challenges based on country-specific needs and considerations, particularly within the tropics, where NFI data are scarcer, forests face acute deforestation and degradation, and land-use change emissions are therefore the greatest.
In this context, open-access space-based biomass data, including those produced through ESA's Climate Change Initiative (CCI) and NASA's Global Ecosystem Dynamics Investigation (GEDI) LiDAR mission, can offer a pathway to enhance the precision of (sub)national estimates while improving the understanding of the spatial distribution of biomass. The Intergovernmental Panel on Climate Change (IPCC) encourages parties to reduce uncertainties in their Greenhouse Gas Inventories (GHGI) as much as practical. Enhancing the precision of estimates not only aligns with IPCC-recommended good practices but can also help reduce the costs of NFIs by decreasing the number of field plots required to achieve a desired level of precision in biomass estimation. This is particularly beneficial for countries facing challenges completing or updating their NFIs. In this study, we explore combining traditional NFI data with biomass map products to improve forest carbon estimation precision. We compare approaches from design-based, model-assisted, and model-based inferential paradigms. We implement each of the candidate approaches across diverse ecological settings within the tropics while using an array of NFI sampling designs and characteristics. We base our analyses on ground-based biomass information from the NFIs of Mexico, Peru, Mozambique, and the Philippines. We also explore the use of several different space-based biomass products, including ESA CCI, NASA GEDI, and NASA Carbon Monitoring System (CMS) forest biomass map products. We assess the applicability and limitations of each approach under different sampling designs and ecological conditions. Despite advancements in remote sensing technologies, their adoption remains limited due to a lack of examples that address country-specific needs, standardized guidance, and user-targeted lessons learned.
This research aims to enhance national biomass estimation precision by integrating space-based biomass maps with traditional NFI data, providing a detailed comparison of inferential methods and their applicability in diverse tropical ecosystems. It further highlights the potential benefits and challenges of such inferential methods and provides guidance on selecting appropriate approaches based on user-specific needs. The study concludes with actionable recommendations by the Global Forest Observations Initiative (GFOI), supporting national biomass estimation through the use of space-based biomass maps.
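The model-assisted idea described in this abstract, using a wall-to-wall map as auxiliary information alongside NFI plots, can be illustrated with a simple difference estimator on synthetic data. This is a sketch under invented numbers, not the study's implementation, and it stands in for only one of the several inferential approaches the study compares.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data: a wall-to-wall biomass map (Mg/ha) over N pixels,
# and a probability sample of n NFI plots with field-measured AGB.
map_pixels = rng.gamma(shape=4.0, scale=30.0, size=100_000)
idx = rng.choice(map_pixels.size, size=200, replace=False)
map_at_plots = map_pixels[idx]
field_agb = map_at_plots + rng.normal(0.0, 15.0, size=idx.size)

# Design-based (plot-only) mean and its standard error.
mean_srs = field_agb.mean()
se_srs = field_agb.std(ddof=1) / np.sqrt(field_agb.size)

# Model-assisted difference estimator: map mean plus the mean
# field-minus-map residual at the sampled plots.
residuals = field_agb - map_at_plots
mean_ma = map_pixels.mean() + residuals.mean()
se_ma = residuals.std(ddof=1) / np.sqrt(residuals.size)

# Because the map absorbs most of the spatial variance, the residual
# variance (and hence se_ma) is much smaller than se_srs.
```

The precision gain is exactly why combining maps with NFI data can reduce the number of field plots needed for a target uncertainty, as the abstract argues.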
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall F2)

Presentation: A Framework for Biomass Estimation: Leveraging Sentinel-1 C-Band Radar and Open-Access Datasets

Authors: Lakshmi Baburajan, Dr Subash Yeggina
Affiliations: Pixxel
Forests, spanning approximately 31% of Earth's terrestrial surface, play a crucial role in global ecological processes and act as major carbon sinks, mitigating the effects of increasing pollution and greenhouse gas emissions. Recent research highlights forests as one of the most effective natural systems for carbon sequestration, storing significant amounts of atmospheric carbon. Above-ground biomass (AGB) is a key variable in this process, as it quantifies the carbon stored in living vegetation, including stems, branches, and foliage. Remote sensing technologies such as LiDAR, synthetic aperture radar (SAR), and multispectral satellite imagery have significantly improved biomass assessment. However, most biomass estimates are limited to smaller areas due to the reliance on extensive field measurements needed to train models, with remote sensing datasets typically serving as inputs. These models are challenging to scale to other regions or to reuse temporally, as field data collection over large areas is resource-intensive and time-consuming. Recent advancements in machine learning (ML) and deep learning (DL) have further increased the demand for extensive training datasets across wider spatial extents, making the process even more cumbersome. Additionally, conducting field measurements at regular time intervals is impractical, limiting the ability to generate temporally consistent, high-resolution global biomass maps. Studies have shown that LiDAR and L-band SAR are the most commonly used remote sensing technologies for biomass estimation. LiDAR is a reliable method for measuring forest height and estimating biomass with high accuracy, but its high costs for data acquisition and processing limit its feasibility for large-scale use. L-band SAR, known for its ability to penetrate forest canopies, is sensitive to both forest structure and height, making it effective for biomass estimation.
However, it saturates for biomass values around 300-350 Mg/ha and is also limited in terms of accessibility. The global ESA Biomass Climate Change Initiative (Biomass_cci) product was developed using the L-band dataset, with the most recent data available only up to 2021. Similarly, openly available LiDAR and field data are tied to specific timestamps, making their outputs temporally constrained. Notably, the canopy height measurements acquired by the Global Ecosystem Dynamics Investigation (GEDI) instrument aboard the International Space Station (ISS) were used to derive a biomass product at 1 km for the entire globe for one timestamp. While this dataset offers valuable insights, its coarse resolution limits its ability to capture fine-scale spatial variability, and the temporal span constrains its utility in monitoring dynamic changes in carbon stocks. These GEDI-derived biomass estimates have aided the understanding of global biomass and its spatial variability. They also reveal that the majority of biomass values across the globe vary between 0 and 200 Mg/ha. In this study we propose a framework that harnesses the potential of open Sentinel-1 C-band data along with other openly available auxiliary datasets, such as vegetation indices from Sentinel-2, forest type, and maximum biomass, to generate yearly global AGB estimates. The available SAR dataset, the Sentinel-1 C-band radar, is notable for its global coverage, high spatial resolution of 10 m, and a 14-day temporal frequency, making it particularly suitable for estimating biomass in the 0-200 Mg/ha range. The well-established Water Cloud Model, a semi-empirical approach that does not require any training datasets, is chosen to derive biomass estimates from radar backscatter values. Further, the viability of this approach will be tested at various spatial resolutions from 100 to 1000 m for different forest types to conclude on the best possible product that could be formulated.
The analysis is focused on locations in the north-western United States where LiDAR-based biomass estimates at 30 m resolution were available through the NASA Carbon Monitoring System (CMS) program. Four patches were chosen, two of which are in the Nez Perce-Clearwater National Forest and the remaining two in the Helena-Lewis and Clark National Forest and the Idaho Panhandle National Forest, respectively. These patches, classified as temperate forests, encompassed a range of biomass values varying from 1 to 370 Mg/ha. Biomass is estimated using Sentinel-1 VH polarization at 1 km resolution and validated against both GEDI and LiDAR datasets. The analysis revealed the highest accuracy for the forest patch in the Helena-Lewis and Clark National Forest, where the model, when validated against GEDI data, yielded an RMSE of 22.06 Mg/ha and a correlation coefficient (R) of 0.41. When validated using LiDAR data, an RMSE of 24.08 Mg/ha and an improved R value of 0.63 were obtained. In contrast, the poorest results were observed for the forest patch in the Idaho Panhandle National Forest. Validation against GEDI data produced an RMSE of 56.4 Mg/ha and an R value of 0.11, while validation against the LiDAR dataset resulted in an RMSE of 58.5 Mg/ha and an R value of 0.10. The analysis showed that higher biomass values cannot be estimated, as the signal saturates around 150 Mg/ha. Additionally, it revealed that forest density plays a critical role in estimation accuracy. Denser forests exhibited poorer biomass estimates, highlighting the limitations of C-band radar in penetrating thick forest canopies. Conversely, the most accurate estimates were observed in forests with moderate sparsity, where the radar signal effectively interacted with vegetation structure.
These findings serve as preliminary conclusions and will guide the next phase of research, which will focus on the role of forest density in biomass estimation accuracy and further explore the conditions under which C-band radar performs optimally.
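The semi-empirical Water Cloud Model used in this abstract can be sketched in a few lines. This is an illustrative toy, assuming a common forest-biomass form of the model; the parameter values below (canopy backscatter, ground backscatter, attenuation coefficient) are hypothetical placeholders, not values fitted in the study:

```python
import numpy as np

# Hypothetical WCM parameters (illustrative only; a real study would fit
# these per forest type and polarization from reference biomass data).
SIGMA_VEG = 0.09     # asymptotic backscatter of dense vegetation (linear units)
SIGMA_GROUND = 0.02  # backscatter of bare ground (linear units)
BETA = 0.008         # canopy attenuation coefficient per Mg/ha

def wcm_forward(agb):
    """Forward Water Cloud Model: backscatter from biomass V (Mg/ha)."""
    att = np.exp(-BETA * np.asarray(agb, dtype=float))
    return SIGMA_VEG * (1.0 - att) + SIGMA_GROUND * att

def wcm_invert(sigma0):
    """Invert the WCM for biomass; estimates saturate as sigma0 -> SIGMA_VEG,
    mirroring the ~150 Mg/ha saturation reported in the abstract."""
    s = np.clip(np.asarray(sigma0, dtype=float), SIGMA_GROUND, SIGMA_VEG * 0.999)
    return -np.log((SIGMA_VEG - s) / (SIGMA_VEG - SIGMA_GROUND)) / BETA
```

The inversion is a closed-form rearrangement of the forward model, which is why no training dataset is needed once the three parameters are set.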
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall F2)

Presentation: Improved mapping of canopy height and aboveground biomass in Congo Basin using a deep learning framework with Sentinel-1/-2 and GEDI data integration

Authors: Liang Wan, Aurélien de Truchis, Martin Schwartz, David Purnell, Ibrahim Fayad, Philippe Ciais
Affiliations: Laboratoire des Sciences du Climat et de l’Environnement, LSCE/IPSL, CEA-CNRS-UVSQ, Université Paris Saclay, 91191 Gif-sur-Yvette, France, Kayrros SAS, 75009 Paris, France
Mapping high-resolution canopy height (CH) and aboveground biomass (AGB) across the Congo Basin's tropical forests is crucial for advancing our understanding of global carbon dynamics and supporting climate mitigation strategies. Despite advancements in field surveys and the development of airborne and satellite-based models and data, the lack of consistent, accurate, and fine-scale CH and AGB maps remains a challenge for effective monitoring. This issue is particularly pronounced in complex tropical ecosystems, where detailed data are critical to capturing changes such as forest loss and regrowth. Moreover, signal saturation and mixed responses in satellite optical and LiDAR data further underscore the urgent need for improved mapping techniques to support reliable tropical forest monitoring and conservation. To address these challenges, this study developed a robust deep learning framework to estimate CH and AGB at 10 m resolution for 2021/2022 in the Congo Basin using Sentinel-1/-2 and Global Ecosystem Dynamics Investigation (GEDI) data, coupled with extensive reference datasets. The framework incorporates a rigorous sample selection strategy to ensure the representativeness of training datasets and validates CH estimations against multi-scale observations from GEDI, airborne laser scanning (ALS), and field measurements. To produce precise AGB maps, we evaluated approximately 10 allometric models from field surveys and published studies to identify the optimal CH-AGB relationship. Additionally, we applied XGBoost and SHAP methodologies to evaluate the contributions of CH and auxiliary data to AGB mapping. The resulting high-resolution CH map achieved improved accuracy, validated against GEDI (MAE = 6.12 m, Bias = 2.17 m) and ALS (MAE = 4.56 m, Bias = -1.55 m). Compared to existing products, our CH map provides greater spatial detail, particularly in tall forests (> 30 m).
The derived AGB maps, based on the optimal allometric model, showed good agreement with field measurements (MAE = 87.89 Mg/ha, Bias = -4.27 Mg/ha), outperforming existing datasets, such as GlobBiomass, ESA CCI, NCEO Africa, and GEDI L4B AGB. Future work will focus on annual updates to CH and AGB maps from 2019 onwards and expanding the model framework to cover the entire African region. This study enhances our understanding of tropical forest structure in the Congo Basin and provides a scalable methodological framework for mapping CH and AGB in other tropical regions, contributing to improved tropical forest carbon monitoring and sustainable forest management practices.
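A typical CH-AGB allometric candidate of the kind evaluated above is a power law, AGB = a * CH^b, which can be fitted by ordinary least squares in log-log space. The sketch below is a generic illustration with synthetic data, not one of the ~10 models actually assessed in the study:

```python
import numpy as np

def fit_power_law(ch, agb):
    """Fit AGB = a * CH**b by least squares in log-log space.
    Assumes strictly positive canopy heights and biomass values."""
    ch = np.asarray(ch, dtype=float)
    agb = np.asarray(agb, dtype=float)
    # log(AGB) = log(a) + b * log(CH) is linear, so a degree-1 polyfit suffices.
    b, log_a = np.polyfit(np.log(ch), np.log(agb), 1)
    return np.exp(log_a), b

def predict_agb(ch, a, b):
    """Apply the fitted allometric model to canopy heights (m)."""
    return a * np.asarray(ch, dtype=float) ** b
```

Fitting in log space weights relative rather than absolute errors, a common choice for allometry where AGB spans orders of magnitude.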
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall M1/M2)

Session: E.01.05 FutureEO - Open Call for EO Innovation and PECS/AM/NMS RPA Call overview

The Open Call has been running since 2017 and serves as a mechanism within the FutureEO-1 Programme to conduct highly innovative research and development projects under openly proposed themes. It aims to support innovative activities that respond to opportunities linked to the exploitation of ESA, Third Party, and other European (including national and private sector) Earth Observation data, as well as related capabilities and infrastructures. This session will explore the Open Call as a procurement mechanism, showcasing funded projects and discussing ways to enhance its framework. Key topics include Open Call figures, successful project examples, proposal submission guidelines, and feedback for process improvement in preparation for future calls.

Moderators:


  • Michela Corvino - ESA
  • Jolanda Patruno - ESA
  • Gordon Campbell - ESA

Speakers:


  • Bart Gheysens - ESA
  • Karoli Kahn - Kappazeta
  • Julia Marushchak - KPLabs
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall K1)

Session: C.06.05 CEOS Analysis Ready Data (CEOS-ARD)

CEOS Analysis Ready Data (CEOS-ARD) are satellite data that have been processed to a minimum set of requirements and organized into a form that allows for immediate analysis with minimum additional user effort to support time series analysis and data interoperability. This session will provide insights on the CEOS-ARD [https://ceos.org/ard] Product Family Specifications and standards development as well as review current CEOS-ARD compliant satellite data products and those which are in development. Currently there are four optical and four radar CEOS-ARD Product Family Specifications for the following geophysical variables: Surface Reflectance, Surface Temperature, Aquatic Reflectance, Nighttime Lights Surface Radiance, Normalised Radar Backscatter, Polarimetric Radar, Ocean Radar Backscatter, and Geocoded Single-Look Complex. In addition, specifications are under development for Interferometric Synthetic-Aperture Radar (InSAR) and Light Detection and Ranging (LiDAR).
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall K1)

Presentation: Evolution of the CEOS-ARD Optical Product Family Specifications

#stac

Authors: Christopher Barnes, Dr. Ferran Gascon, Matthew Steventon, Ake Rosenqvist, Peter Strobl, Andreia Siqueira, Dr. Jonathon Ross, Takeo Tadono
Affiliations: KBR contractor to the U.S. Geological Survey (USGS), European Space Agency (ESA), Symbios Communications, solo Earth Observation (soloEO), Japan Aerospace Exploration Agency (JAXA), European Commission, Geoscience Australia
The CEOS Land Surface Imaging Virtual Constellation (LSI-VC) has over 20 members representing 12 government agencies and has served as the forum for developing the CEOS Analysis Ready Data (ARD) initiative since 2016. In 2017, LSI-VC defined CEOS-ARD Product Family Specification (PFS) optical metadata requirements for Surface Reflectance and Surface Temperature that reduced the barrier to successful utilization of space-based data to improve understanding of natural and human-induced changes to the Earth’s system. This resulted in CEOS-ARD compliant datasets becoming some of the most popular types of satellite-derived optical products generated by CEOS agencies (e.g., USGS Landsat Collection 2, Copernicus Sentinel-2 Collection 1, the German Aerospace Center) and commercial data providers (e.g., Catalyst/PCI, Sinergise). Since 2022, LSI-VC has led the definition of two new optical PFSs (i.e., Aquatic Reflectance and Nighttime Lights Surface Radiance) and four Synthetic Aperture Radar (SAR) PFSs (i.e., Normalised Radar Backscatter, Polarimetric Radar, Ocean Radar Backscatter, and Geocoded Single-Look Complex), signifying the recognised importance of providing satellite Earth observation data in a format that allows for immediate analysis. As of December 2024, eleven data providers have successfully achieved CEOS-ARD compliance, with a further 12 organizations either in peer review or under development for future endorsement. However, this has engendered a need for transparency, version control, and (most importantly) a method to facilitate consistency across the different PFSs and alignment with SpatioTemporal Asset Catalogs (STAC). Thus, all future PFS development will be migrated into a CEOS-ARD GitHub repository. This will facilitate broader input from the user community, which is critical for the optical specification to meet real-world user needs and to ensure broader data provider adoption. 
CEOS agencies have concurred that now is the time, with the increased traceability and version control offered by GitHub, to parameterise the CEOS-ARD specifications and introduce an inherent consistency across all optical and SAR PFS requirements, while benefiting from active user feedback. In this presentation, we will share the status of the optical PFS transition to GitHub, as well as a set of implementation practices/guidelines and a governance framework that will broaden the portfolio of CEOS-ARD compliant products so they can become easily discoverable, accessible, and publicly used.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall K1)

Presentation: The CEOS ARD for Aquatic Reflectance – Evolving From Inland and Near-Coastal Waters to Include Oceans.

Authors: Arnold Dekker, Hayley Evers-King, Dr Barbara Bulgarelli, Dr Daniela Gurlin, Dr Peter Gege, Nicole Pinnel, Carsten Brockmann, Professor Maycira Costa, Dr Peter Stroble, Dr Tushar Shukla, Matthew Steventon, Harvey Jones
Affiliations: CSIRO, EUMETSAT, DLR, University of Victoria, European Commission, Joint Research Centre (JRC), Brockmann Consult, Independent Consultant, Space Applications Centre, ISRO, SYMBIOS
CEOS Analysis Ready Data (ARD) are satellite data that have been processed to a minimum set of requirements to allow for immediate analysis with less additional user effort, and interoperability both through time and with other datasets. Product Family Specifications (PFS) are the core component of the CEOS ARD concept; they describe what is needed for each product and are available for download and use here: https://ceos.org/ard/ . In 2021 CEOS and GEO AquaWatch developed Version 1 of the Aquatic Reflectance ARD PFS focused on inland and near-coastal waters, as the use of satellite imagery and associated products for aquatic applications is developing rapidly. On 23 February 2022, CEOS’s Land Surface Imaging Virtual Constellation (LSI-VC) endorsed the new Version 1 of the ARD PFS for Aquatic Reflectance (AR) products. Version 1 is available here: https://ceos.org/news/ard-pfs-aquatic-reflectance/ Currently, four products from Sentinel-2, Landsat, EnMAP and Oceansat-3 are under assessment for Aquatic Reflectance ARD compliance. Rapid developments in inland and near-coastal EO capabilities, and the desire to integrate all water bodies into one CEOS ARD Aquatic Reflectance PFS, have led to a new collaboration to develop Version 2 of the Aquatic Reflectance ARD PFS. As part of the collaborative Version 2 framework, an initial step established that an integrated CEOS ARD AR covering both inland and near-coastal waters (the focus of Version 1) and ocean waters was needed. Thus, in 2024 a new committee was established with representatives from inland, coastal and ocean Earth observation, as well as members of the CEOS ARD Oversight Committee. This committee defined the tasks needed to re-evaluate some of the PFS requirements established in Version 1 of the CEOS ARD AR. Version 1 followed the template for Land Surface ARD and focused on finer spatial resolution satellite imagery. 
The addition of ocean colour radiometry (OCR) coarser spatial resolution EO data from sensors such as MODIS, VIIRS, OCM-2 and -3, Sentinel-3 and PACE, among others, including their multi-sensor products (e.g. the Copernicus marine service and OC-CCI), added a new range of pixel sizes to this AR PFS. Concurrently there is a broadening use of higher resolution sensors for ocean applications (e.g. Sentinel-2) and expected increases in resolution of ocean-related missions (e.g. Sentinel-3 next generation). Thus, the distinctions between inland and near-coastal versus ocean applications, and between what constitutes medium- or fine-resolution sensors, become increasingly blurred. Additional metadata fields (as compared to the Land Surface PFS) had to be inserted or modified, such as: land and water atmospheric adjacency effect correction; directional atmospheric scattering (accommodating that global inland waters occur at altitudes from -430 m to +6500 m); ice mask; floating vegetation / surface mask; optically deep or optically shallow water assessment; sun glint and sky glint correction. For Version 1 the team needed to assess the threshold and goal levels (previously called targets). This took considerable time, as it was essential to ensure that each proposed metadata field meets the state of the art, so that any interested ARD provider has access to methods and algorithms to meet the requirement. In a fast-developing application domain such as inland and near-coastal waters EO, significant advances over the past two years required revisiting and assessing many of these metadata and processing requirements. OCR additionally has a much longer history of providing fully operational ocean information data using coarser spatial resolution but higher radiometric resolution EO data. 
As an example, for sun glint flagging and correction, coarse spatial resolution OCR data can apply statistical methods for each pixel, as the wave trains and facets are averaged over the pixel. However, finer spatial resolution resolves individual wave facets, requiring correction on a per-pixel basis. A similar line of reasoning exists for identifying foam, algal scums, floating macrophytes etc. Other differences include the handling of atmospheric adjacency effects (AE), which become more significant in satellite images of inland water bodies, since these are surrounded by land. Notably, AE can range from positive to negative values depending on the spectral contrast between land and water. Land reflectance is generally higher than that of water (e.g., in the near-infrared for most land cover types, and throughout the spectrum for highly reflecting land covers such as snow, white sand, dry vegetation and concrete), but in some cases the land can be less reflective than water, as in the blue for highly turbid water and green vegetation. OCR applications usually filter out (using bathymetry models) pixel areas where bottom reflectance might contribute to the surface water-leaving signal. With fine spatial resolution, coastal and inland EO scenarios need to cover both optically deep (lakes, reservoirs, estuaries, rivers etc.) and optically shallow waters (e.g. seagrass or macro-algae beds, coral reefs etc.). A guiding principle for this work is that the PFS sets requirements on what shall be included in the processing, without referring to any specific methodology for how this should be achieved. This is left to the discretion of the data provider, who can define the best methodology for their data and implement the state of the art as it continuously evolves. At LPS 2025 we intend to present the final Version 2 of the CEOS ARD AR that integrates inland, near-coastal, coastal and ocean water reflectance applications.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall K1)

Presentation: Development of CEOS Analysis-Ready Data Specifications for Synthetic Aperture Radar

Authors: Ake Rosenqvist, Francois Charbonneau, Dr David Small, Clement Albinet, Takeo Tadono, Antonio Valentino, Bruce Chapman, Zheng-Shu Zhou, Matthew Garthwaite, John Truckenbrodt, Francesco De Zan, Howard Zebker, Dr. Matthew Steventon, Andreia Siqueira, Medhavy Thenkappan, Christopher Barnes, Peter Strobl
Affiliations: solo Earth Observation (soloEO), Japan Aerospace Exploration Agency (JAXA), Natural Resources Canada, University of Zürich, ESA ESRIN, NASA Jet propulsion Lab, CSIRO, DLR, Delta-Phi, Stanford University, Symbios, Geoscience Australia, USGS, EC Joint Research Centre
CEOS Analysis-Ready Data (CEOS-ARD) is a joint effort by the Committee on Earth Observation Satellites (CEOS) to streamline data flows and enable interoperable products between sensors and data providers. Ultimately, the goal is to broaden the Earth Observation user community by providing data products that require significantly less expert knowledge for analysis. This last point is perhaps particularly relevant for Synthetic Aperture Radar (SAR). Its potential to contribute unique information to today’s great environmental challenges is significant, yet the SAR user community has remained small and expert-oriented even after 30 years of operational SAR missions. CEOS-ARD is an opportunity to bridge that gap. In a coordinated effort by the CEOS Land Surface Imaging Virtual Constellation (LSI-VC) and the CEOS Working Group on Calibration and Validation (WGCV) SAR Subgroup, four SAR-specific specifications have been developed. These have been merged into one single unified "CEOS-ARD for SAR" Product Family Specification (PFS). These product families are generally presented in a common Earth coordinate system (e.g. UTM, geographical coordinates, etc.), rather than in radar slant range coordinates, facilitating use by non-radar-specialists.

  • Normalised Radar Backscatter (NRB). The NRB product has Radiometric Terrain Correction (RTC) applied and is provided in the terrain-flattened gamma-0 backscatter convention. It is the most common SAR product and is expected to be useful for, in particular, non-expert users.
  • Polarimetric Radar (POL). The POL product format is an extension of the NRB format, required in order to better support Level-1 SLC polarimetric data, including full-polarimetric modes (RADARSAT-2, RCM, ALOS-2, SAOCOM and future missions), and hybrid or linear dual-polarimetric modes (i.e., the Compact Polarimetric mode available on RCM, SAOCOM and the upcoming NISAR mission). The POL product can be defined at two processing levels:
      ◦ The normalised covariance matrix (CovMat) representation, which preserves the inter-channel polarimetric phase(s) and maximizes the available information for users.
      ◦ Polarimetric Radar Decomposition (PRD) products, derived from coherent or incoherent polarimetric decomposition techniques. The selection of decomposition product(s) to be offered (e.g., Freeman-Durden, van Zyl, Cloude-Pottier, Yamaguchi, et al.) is at the discretion of each data provider.
  • Ocean Radar Backscatter (ORB). The ORB product is a simplified version of the NRB product, intended for ocean applications. As such, ORB products are projected on the geoid and are provided in the Sigma-Nought backscatter convention. Radiometric Terrain Correction (RTC) is neither required nor applied.
  • Geocoded Single-Look Complex (GSLC). The GSLC product describes the complex radar reflectivity on the surface with all propagation phases removed, so that the amplitude and phase values represent properties of the surface and not the instrument. Similar to the other products, GSLC data are presented in a ground-based coordinate system rather than in radar slant range coordinates, to facilitate use by non-radar-specialists.

Under development:

  • Interferometric Radar (INSAR). The CEOS-ARD interferometric product specification covers a suite of three products generated by InSAR processing of (at least) two images captured over the same geographic area with near-identical viewing geometry:
      ◦ Wrapped interferogram: image of differential phase signals between two SLC images.
      ◦ Unwrapped interferogram: image of differential phase signals where the wrapped fringes are summed ("unwrapped") to give a continuous phase signal across the image.
      ◦ Interferometric coherence: image of phase coherence between the two images.
  • Multi-Source Backscatter (MSB). The MSB product specification is conceived to accommodate composite backscatter products, generated from short time windows of data from (potentially different) SAR sensors and with arbitrary observation geometries. These products further improve "analysis readiness", supporting wide-area time series analysis.

The NRB, POL, ORB and GSLC specifications have been endorsed by CEOS LSI-VC and can be accessed on the CEOS ARD website (ceos.org/ard). The INSAR and MSB product specifications are, at the time of writing (Nov. 2024), under development. There has been significant interest in CEOS-ARD from space agencies and public and private data providers. CEOS-ARD Normalised Radar Backscatter products are currently produced operationally by Sentinel Hub for Digital Earth Africa (Sentinel-1); by JAXA’s Earth Observation Research Center (JERS-1, ALOS PALSAR and ALOS-2 PALSAR-2 global mosaic data); by CSIRO (NovaSAR-1), ISRO (RISAT-1A/EOS-04) and NASA/JPL/ASF (OPERA Sentinel-1); and by the commercial provider Catalyst (Sentinel-1). CEOS-ARD SAR products are also being considered by, amongst others, Natural Resources Canada (RCM), CONAE (SAOCOM-1) and ESA (Sentinel-1, Envisat ASAR, ERS). This presentation aims to provide an overview of the available CEOS-ARD SAR specifications and the ongoing development of further SAR product specifications.
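The backscatter conventions named above differ in their normalisation area. On flat terrain the ellipsoid-based relation between sigma-nought and gamma-nought is a one-line cosine correction, sketched below; true terrain-flattened gamma-0, as required for NRB, additionally needs a DEM and per-pixel illuminated-area computation, which is not shown:

```python
import numpy as np

def sigma0_to_gamma0_ellipsoid(sigma0_lin, incidence_deg):
    """Ellipsoid-based gamma-0 from sigma-0 (linear power units).
    This is the flat-terrain approximation only; terrain-flattened
    gamma-0 (RTC) requires a DEM and is deliberately omitted here."""
    return sigma0_lin / np.cos(np.radians(incidence_deg))

def to_db(linear):
    """Convert linear backscatter power to decibels."""
    return 10.0 * np.log10(linear)
```

The division by cos(incidence) grows the correction toward far range, which is one reason gamma-0 products are flatter across the swath than sigma-0 products.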
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall K1)

Presentation: The Future of CEOS Analysis Ready Data (CEOS-ARD)

Authors: Dr. Ferran Gascon, Dr. Matthew Steventon, Dr. Jonathon Ross
Affiliations: ESA (European Space Agency), Symbios, Geoscience Australia
CEOS Analysis Ready Data (CEOS-ARD) caused a paradigm shift in the Earth observation community. Recognising the trends of data volume and availability, CEOS set out to further democratise Earth observation by making data more transparent and easy to use, while also setting a basis for improved interoperability. CEOS-ARD mandates levels of pre-processing, metadata, and documentation that allow users to use and buy data with confidence. It sets a critical benchmark for data providers and has encouraged a more rigorous approach to documentation. For data procurement, it serves as a helpful baseline and level of pre-qualification, streamlining evaluation processes. Naturally, as users become more versed in EO and dataset offerings continue to grow, CEOS-ARD must evolve to reflect new needs and expectations. In particular, demands for increased interoperability are necessitating renewed looks at the bounds of what can be called ‘analysis-ready’. CEOS recognises a need for a future iteration of CEOS-ARD that introduces higher requirements for data ‘quality’, including for metadata, measurand uncertainties, and the introduction of tolerances and bounds for corrections, as well as a framework for ongoing monitoring of these qualities. This list is not exhaustive and CEOS recognises the need to work closely with the community to understand the requirements for future evolution of CEOS-ARD. We plan an interactive presentation that will serve as an opportunity to gather feedback on the current state of CEOS-ARD and to consult the community on what they value in the current iteration of CEOS-ARD and what is needed in future evolutions. We will call on the community to help us understand the headline priorities for the future of CEOS-ARD and to lay the basis for a consultation paper that will kick-start a new phase in CEOS-ARD. More information on the CEOS-ARD Framework and available datasets can be found on our website at ceos.org/ard.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall K1)

Presentation: Development of Analysis Ready Data Products for European Space Agency Synthetic Aperture Radar Missions

#zarr #stac #cog

Authors: Clement Albinet, Davide Castelletti, Fabiano Costantini, Mario Costantini, Francesco De Zan, Jonas Eberle, Paco Lopez Dekker, Juan M Lopez-Sanchez, Federico Minati, Muriel Pinheiro, Sabrina Pinori, Dr David Small, Francesco Trillo, John Truckenbrodt, Antonio Valentino, Anna Wendleder, Marco Wolsza
Affiliations: ESA, Telespazio VEGA, B-Open Solutions s.r.l, Delta phi remote sensing GmbH, German Aerospace Center (DLR), Delft University of Technology, Universidad de Alicante, Serco, University of Zürich, STARION, Friedrich Schiller University Jena
The current family of Synthetic Aperture Radar (SAR) products from Sentinel-1 and TerraSAR-X contains primarily Level-1 Single Look Complex (SLC) and Ground Range Detected (GRD) data types [1][2][3], which inherited their definitions from the European SAR satellite missions ERS-1/2 and ENVISAT [4]. These products have proven to be reliable, high-quality data sources over the years. In particular, users largely benefit from the open and free data policy of the Copernicus programme (European Space Agency (ESA), European Commission). This has led to Sentinel-1 products being routinely used in several operational applications and to a substantial growth of the user base of SAR data in general. However, the rapid increase of data volume is presenting a challenge to many users who aim to exploit this wealth of information but lack the processing resources needed to convert these Level-1 products into interoperable geoinformation. Cloud solutions offer opportunities for accelerated data exploitation but require new strategies of data management and provision. As a consequence, the term Analysis Ready Data (ARD) was coined, and several activities have indicated the potential for extending the Earth Observation product family with such ARD products. With the aim to standardize different categories of ARD, the Committee on Earth Observation Satellites (CEOS) has set up the CEOS Analysis Ready Data (CEOS-ARD) initiative. Within this context, Analysis Ready Data were defined as: »satellite data that have been processed to a minimum set of requirements and organized into a form that allows immediate analysis with a minimum of additional user effort and interoperability both through time and with other datasets.« A variety of SAR product specifications are currently being defined to provide guidelines on how best to process and organize data to serve as many use cases as possible with the respective products [5]. 
In this context, ESA and DLR decided to collaborate in order to define a family of SAR ARD products for Sentinel-1, TerraSAR-X, ROSE-L, ERS-1/2 and ENVISAT, potentially to be extended to other SAR missions. These products should be calibrated the same way (Radiometric Terrain Correction (RTC) [6]), denoised, projected and geolocated in order to allow immediate analysis by the users. The same gridding / tiling system (Military Grid Reference System (MGRS)) and the same Digital Elevation Model (Copernicus DEM) shall be used in order to allow interoperability with Earth Observation data from different missions. The use of Cloud Optimised GeoTIFF (COG) raster files or the Zarr format, VRT files and STAC metadata will enable efficient exploitation of these datasets in cloud-computing environments by allowing optimizations for cloud storage, enabling concurrent processing and selective data access. Finally, by using permissive open-source code and libraries to generate these new products, the processors will represent a considerable step toward Open Science. The current status of ARD product development for different ESA missions (Sentinel-1, ROSE-L, ERS-1/2, ENVISAT) and DLR missions (TerraSAR-X), along with processing experience, will be presented, together with the plans for future missions like Sentinel-1 NG and BIOMASS. References: [1] ESA, “Sentinel-1 Product Specification”, version 3.9, 2021. https://sentinel.esa.int/documents/247904/1877131/Sentinel-1-Product-Specification-18052021.pdf/c2f9d58d-217f-e21d-548d-97a2cbd71e2b?t=1621347421421. [2] Airbus, “TerraSAR-X Image Product Guide”, issue 2.3, March 2015. https://www.intelligence-airbusds.com/files/pmedia/public/r459_9_20171004_tsxx-airbusds-ma-0009_tsx-productguide_i2.01.pdf. [3] https://earth.esa.int/eogateway/instruments/sar-ers/products-information. [4] ESA, “ENVISAT-1 Products Specifications Volume 8: ASAR Products Specifications”, issue 4, Ref: PO-RS-MDA-GS-2009, 20 January 2012. 
https://earth.esa.int/eogateway/documents/20142/37627/Envisat-products-specifications-VOLUME-8-ASAR-PRODUCTS-SPECIFICATION.pdf/1fd5a0be-1634-06cc-9a1e-249874a6e3aa. [5] CEOS, “Analysis Ready Data for Land: Normalized Radar Backscatter”, version 5.5, 2021. https://ceos.org/ard/files/PFS/NRB/v5.5/CARD4L-PFS_NRB_v5.5.pdf. [6] Small, D. (2011). “Flattening Gamma: Radiometric Terrain Correction for SAR Imagery”. IEEE Transactions on Geoscience and Remote Sensing, 49, 3081-3093. https://doi.org/10.1109/TGRS.2011.2120616
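The STAC metadata mentioned in the abstract can be illustrated with a minimal Item skeleton, sketched here as a Python dict. Every identifier, path and field value below is invented for illustration; a real CEOS-ARD item would carry a GeoJSON footprint, processing metadata, and per-asset roles as defined by the STAC specification:

```python
# Minimal, hypothetical STAC Item for a single ARD backscatter tile.
# All identifiers and asset paths below are invented for illustration.
stac_item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-s1-nrb-tile",          # hypothetical item id
    "geometry": None,                      # a real item carries a GeoJSON footprint
    "bbox": None,
    "properties": {"datetime": "2024-01-01T00:00:00Z"},
    "assets": {
        "vh": {
            # Cloud Optimised GeoTIFF asset; the media type string below is the
            # conventional COG profile used in STAC catalogs.
            "href": "vh-gamma0.tif",
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
        }
    },
    "links": [],
}
```

It is this small, searchable metadata layer, combined with COG's internal tiling, that enables the selective, concurrent cloud access the abstract describes.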
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall K1)

Presentation: Surface Reflectance Quality Consistency For CEOS ARD

Authors: Medhavy Thankappan, Josh Sixsmith, Mr Simon Oliver
Affiliations: Geoscience Australia
The CEOS-ARD Surface Reflectance Product Family Specification (PFS) provides a strong basis for interoperability. However, its non-prescriptive nature tolerates different approaches to the derivation of the ARD surface reflectance quantities. CEOS organisations’ own efforts, such as NASA’s Harmonized Landsat and Sentinel-2 (HLS) and ESA’s Sen2Like, require the reprocessing of surface reflectance to produce consistent quantities. This reprocessing highlights the need to improve the compatibility of surface reflectance across providers. In 2024 an expert panel was convened to commence drafting guidance summarising the essential steps, model parameters, and tolerances required to achieve consistent surface reflectance quantities from different satellite sensors. The guidance and concept emphasise the importance of radiometric calibration, atmospheric correction, cloud and shadow masking, adjacency correction, topographic correction, and validation against reference data. The harmonisation steps include geometric correction, BRDF correction, and precise geolocation and co-registration of images. Homogenisation steps involve spectral band adjustment, harmonisation of spatial resolution, and polarisation correction. The guidance describes an idealised target surface reflectance quantity/measurand that, if achieved, would enable interoperability at the measurement level. The concept acknowledges differences in spectral response and aims to provide users with a virtual spectrum of measurements that are sensor-agnostic and consistent in the quantity they represent. Similarities in sensor spectral response then permit further homogenisation steps to align with a reference sensor through spectral band adjustment and other processes. 
The concept described serves as a reference for organisations reprocessing Earth observation data collections and contributing to a virtual constellation of interoperable surface reflectance measurements, aiming to optimise results from multi-provider data integration and analysis.
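The spectral band adjustment step mentioned above is often implemented as a per-band linear transform toward the reference sensor. The sketch below assumes that simple linear form; the coefficient values are hypothetical placeholders, since real coefficients are derived from hyperspectral reference spectra convolved with both sensors' spectral response functions:

```python
import numpy as np

# Hypothetical per-band (slope, offset) spectral band adjustment coefficients.
# Real values come from hyperspectral reference spectra, not from this sketch.
SBAF = {
    "red": (0.982, 0.001),
    "nir": (1.015, -0.002),
}

def adjust_band(reflectance, band):
    """Apply a linear spectral band adjustment toward the reference sensor:
    rho_ref ~= slope * rho_source + offset."""
    slope, offset = SBAF[band]
    return slope * np.asarray(reflectance, dtype=float) + offset
```

A linear per-band model is the common first-order choice because residual spectral-response differences between similar sensors are small and smooth.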
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.15/1.16)

Session: A.07.02 Quantification of Water Resources and Their Evolution Under Anthropogenic Pressure - PART 1

As global populations expand and develop, the demand for water intensifies, placing increasing strain on water resources. Additionally, climate change exacerbates hydrometeorological extremes and introduces significant uncertainty regarding the short- to mid-term availability of water for both communities and ecosystems. Having an accurate and timely overview of available fresh water is therefore essential for its rational and sustainable management. This session aims to emphasize how Earth observations contribute to quantifying different continental water storages—including, but not limited to, groundwater, soil moisture, lakes, artificial reservoirs, seasonal snow and glaciers—and to monitoring and predicting their spatial and temporal evolution, in particular in poorly gauged areas.

We invite contributions that address these quantitative aspects, both in terms of water storages and fluxes from basin to global scales. Works focusing on the identification and characterization of hotspots of unsustainable water exploitation over time and space are particularly welcome. Studies exploring direct human impacts on water availability, such as water abstraction for irrigation, decreased recharge due to deforestation, or the effects of dams on downstream areas, are highly encouraged. Additionally, research focusing on attributing observed variations in water storage to natural variability, climate change, or more direct human impacts will be of great interest. There is no restriction on the type of satellite missions or sensors used in the submissions, but approaches based on multi-mission analyses will be favoured.


Thursday 26 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Global Land Water Storage data set: development of a global GRACE/-FO assimilation data set and applications in water and biodiversity risk assessment

Authors: Prof. Dr. Jürgen Kusche, Helena Gerdener, Isabel Meza
Affiliations: University of Bonn, WWF Germany
Data from the GRACE missions have contributed significantly to our understanding of the water cycle, e.g. by revealing how water resources respond to climate variability, anthropogenic climate change and water use. But the coarse spatial resolution, heterogeneous data quality, and an 11-month gap between the missions render key applications difficult. In particular, isolating groundwater storage changes from GRACE/-FO total water storage anomaly (TWSA) data remains challenging. In recent years, data assimilation schemes were designed to merge GRACE/-FO data with model simulations, and to downscale and partition TWSA maps. At the University of Bonn we have developed an assimilation scheme to integrate GRACE level-2 or -3 data with the WaterGAP model (Müller Schmied et al., 2021), while accounting for spatially anisotropic errors. With the Global Land Water Storage data set (Gerdener et al., 2023, GLWS2.0, DOI 10.1594/PANGAEA.954742) we provide global assimilation results for TWSA, and separately for groundwater, soil moisture and surface water storage changes, on 0.5° monthly grids to the community. The GLWS2.0 groundwater data set derived from GRACE/-FO has been integrated into the latest update of the WWF Risk Filter Suite (RFS) (https://riskfilter.org/). The RFS offers free, online and spatially explicit tools that enable corporates and financial institutions to assess and act on nature-related risks, namely water and biodiversity risks, to strengthen resilience. In this presentation, we will report on the development of the GLWS2.0 data set, in particular its groundwater component, and its impact on risk assessment.

Thursday 26 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Monitoring Water Level Changes Using D-InSAR in the Tominé Reservoir, Colombia

Authors: Dr.-Ing Francescopaolo Sica, Maria Paula Bustos Moreno, Prof. Michael Schmitt
Affiliations: University of the Bundeswehr Munich
Monitoring inland water bodies, such as lakes and reservoirs, is critical for sustaining ecosystems, supporting agricultural production, and fulfilling economic development needs. Traditional monitoring methods often rely on ground-based stations, such as water gauges, which provide accurate and continuous measurements. However, these methods are expensive to maintain and are not available for all water bodies, particularly in remote or underdeveloped areas. Remote sensing techniques have emerged as cost-effective alternatives, offering increased spatial coverage and frequent revisits. Among these, satellite radar altimetry has proven to be a valuable tool for measuring water level changes. However, radar altimetry is limited to specific nadir tracks along the satellite's orbit, restricting its applicability to a subset of lakes and reservoirs. For a broader range of water bodies, alternative techniques are necessary to bridge these observational gaps. Differential Interferometric Synthetic Aperture Radar (D-InSAR) represents a promising approach to enhancing and complementing radar altimetry for monitoring water level changes. D-InSAR exploits phase differences between multiple Synthetic Aperture Radar (SAR) images acquired from nearly identical positions but at different times. These phase differences encode the displacement of the observed surface along the radar's Line-of-Sight (LOS). Despite its potential, the application of D-InSAR to water bodies presents challenges. Water surfaces typically reflect the radar signal away from the sensor, resulting in low backscatter and minimal coherence. Consequently, D-InSAR analysis is often restricted to the borders of water bodies, particularly in areas covered by vegetation. Vegetation along the water body's edges creates favorable conditions for D-InSAR analysis due to the double-bounce effect, where the radar signal reflects off the water surface and then the vegetation. 
Previous research has demonstrated the utility of D-InSAR in wetland environments, where the double-bounce effect is prevalent [1][2][3]. Other studies have also explored its use in open water bodies, such as lakes, but with limited success. The primary limitation arises from low temporal coherence, which restricts reliable phase measurements. Most existing studies have used wrapped phase data to detect water level changes, with changes constrained to within the radar’s wavelength. While this approach has been useful for small-scale changes, it does not accommodate larger displacements, thereby limiting its broader applicability. In this study, we introduce a novel approach that leverages the unwrapped phase of D-InSAR to monitor water level changes in the Tominé Reservoir, Colombia. By analyzing the unwrapped phase, we can detect larger vertical displacements of water levels that were previously undetectable with wrapped phase analysis. This advancement significantly enhances the applicability of D-InSAR for monitoring water bodies with more dynamic water level changes. Our analysis utilized 114 interferograms generated from 40 Sentinel-1 Level-1 Single Look Complex (SLC) images in descending orbit mode, spanning January 2023 to June 2024. Interferograms were processed using the ISCE software, employing a three-consecutive-pair configuration to enhance temporal coherence. The unwrapped phase and cumulative vertical displacements were analyzed with MintPy, focusing on areas with coherence values above 0.4 to ensure data reliability. To validate our findings, we compared the D-InSAR-derived water level changes with daily in-situ measurements from a gauged station within the Tominé Reservoir. The results demonstrated a significant correlation, particularly during periods of declining water levels. These findings underscore the reliability and accuracy of our approach. [1] Alsdorf, D. E., Melack, J. M., Dunne, T., Mertes, L. A., Hess, L. L., & Smith, L. C. (2000). 
Interferometric radar measurements of water level changes on the Amazon flood plain. Nature, 404(6774), 174-177. [2] Mohammadimanesh, F., Salehi, B., Mahdianpari, M., Brisco, B., & Motagh, M. (2018). Wetland water level monitoring using interferometric synthetic aperture radar (InSAR): A review. Canadian journal of remote sensing, 44(4), 247-262. [3] Simard, M., Yunjun, Z., Jones, C., & Oliver Cabrera, T. (2021). InSAR Phase Unwrapping Error Correction for Rapid Repeat Measurements of Water Level Change in Wetlands. [4] Aminjafari, S., Frappart, F., Papa, F., Brown, I., & Jaramillo, F. (2024). Enhancing the temporal resolution of water levels from altimetry using D-InSAR: A case study of 10 Swedish Lakes. Science of Remote Sensing, 10, 100162. [5] Palomino‐Ángel, S., Vázquez, R. F., Hampel, H., Anaya, J. A., Mosquera, P. V., Lyon, S. W., & Jaramillo, F. (2022). Retrieval of simultaneous water‐level changes in small lakes with InSAR. Geophysical Research Letters, 49(2), e2021GL095950.
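The core conversion described above, from unwrapped interferometric phase to vertical water level change masked by coherence, can be sketched as follows. This is an illustrative sketch, not the authors' processing chain: the wavelength and incidence angle are assumed nominal Sentinel-1 values, the 0.4 coherence threshold follows the abstract, and purely vertical motion of the water surface is assumed.

```python
import numpy as np

# Assumed nominal values (not from the authors' configuration)
WAVELENGTH_M = 0.0555              # Sentinel-1 C-band radar wavelength (m)
INCIDENCE_RAD = np.deg2rad(39.0)   # assumed local incidence angle

def phase_to_vertical(unwrapped_phase, coherence, coh_threshold=0.4):
    """Convert unwrapped D-InSAR phase (rad) to vertical displacement (m),
    masking pixels below a coherence threshold."""
    # Line-of-sight displacement: one phase cycle corresponds to half a wavelength
    d_los = -WAVELENGTH_M / (4.0 * np.pi) * unwrapped_phase
    # Project LOS displacement to the vertical, assuming purely vertical motion
    d_vert = d_los / np.cos(INCIDENCE_RAD)
    return np.where(coherence >= coh_threshold, d_vert, np.nan)
```

Because the phase is unwrapped, displacements larger than one wavelength are preserved, which is what enables the larger water level changes discussed above to be detected.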

Thursday 26 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Advances in global reservoir stock monitoring using remote sensing data

Authors: Dawa Derksen, Santiago Pena Luque, Nicolas Gasnier, Laetitia Lalla, Benjamin
Affiliations: CNES
In the current climate, water resource management is a key issue. Artificial reservoirs such as dams are often used by managers who have access to precise information. However, this information remains highly localized and cannot always be used to quickly estimate water stocks on a larger scale, such as a region, department or entire country. Remote sensing data can provide relevant information for monitoring water stocks. In this context, CNES has improved several processing methods to extract area dynamics from Sentinel-1 and -2 images, and to exploit SWOT altimetry data as well as bathymetric laws from a Digital Elevation Model (DEM). To estimate the volume of a reservoir, an essential measurement is the actual bathymetry of the water body. As this information is rarely available, the DEM4Water chain has been developed to estimate a bathymetric law for dam reservoirs. Based on a DEM and the location and shape of the water body, the chain determines Elevation-Area-Volume laws linking water surface elevation (E), water surface area (A) and volume (V). The chain automatically detects the position of the dam, using a gradient map of the DEM over the area surrounding the reservoir, as well as the dam toe elevation. This map allows us to initialize the algorithm automatically for each dam; we then determine the elevation contour lines and area information. Unlike conventional bathymetry estimation methods such as GloBathy [1], DEM4Water exploits the richness provided by dry bathymetry. The method yields absolute volume laws with less than 10% error on 57% of 100 instrumented reservoirs, and less than 20% error on 80% of them. Current versions of DEM4Water are available online with training materials (https://github.com/CNES/dem4water), enabling a new era of global reservoir monitoring combined with the SWOT altimeter or satellite imagers (optical, radar).
Concerning reservoir area monitoring, optical data has proven highly effective for the detection of water bodies. However, in some regions, persistent cloud cover frequently makes optical data unusable. The Sentinel-1 satellites carry synthetic aperture radar (SAR) C-band sensors that allow them to see through clouds and acquire images both day and night. However, segmentation of water bodies in SAR imagery presents significant challenges, primarily due to the similar backscatter patterns found across different land cover types. Hence, we designed a Deep Learning model aimed at identifying water surfaces in SAR images, with the flexibility to incorporate additional modalities. We used data from the S1S2-Water dataset created by DLR [2]. The authors selected 65 Sentinel-2 tiles and curated the corresponding Sentinel-1, Sentinel-2 and Copernicus DEM data, ensuring that the temporal alignment of the Sentinel data was as close as possible. To create the water masks for radar images, the authors first roughly identified water bodies based on optical images, then manually refined them, which reduced the misclassifications that can occur when segmenting water bodies in SAR images directly. First, we trained a U-Net to segment water pixels based on Sentinel-1 radar images only. A U-Net with a ResNet50 encoder and a BCE/Dice loss gave the best results. We tested our model on data in Brazil to produce time series of water surface area for several lakes in Ceará and Pernambuco, and compared our results with in-situ data from the sites (water level and bathymetry). Then, with the same architecture, we experimented with early fusion of Sentinel-1 and Copernicus DEM information. We tested our model on data in France using the ground truth from the ALCD dataset [3]. The metrics were better than those obtained with Surfwater [4], especially when slope information was used.
The slope information allows removal of false positives in mountain areas, where sharp edges can be misclassified as water from Sentinel-1 information alone. With the new method, radar water area estimations were less prone to underestimation thanks to the training on optically labelled images. Furthermore, the approach has proven to be more robust against wind effects on water backscatter, yielding more precise area time series in our water monitoring system. Finally, a novel approach is proposed for deriving dynamic 3D models of lakes (dry bathymetry) and their flood basins from Surface Water and Ocean Topography (SWOT) point cloud data. Supported by recent advances in Implicit Neural Representations (INR) and Physics-Inspired Machine Learning (PIML), we leverage physical biases regarding the geometric and temporal behaviour of water and ground surfaces to learn a continuous implicit 4D model of the lake and its surrounding area. Our experimental results show that despite high levels of noise in the point cloud, particularly in ground areas, our proposed model is able to 1. extract a static Digital Elevation Model of the dry bathymetry, 2. estimate the evolution of water height, surface and volume, as well as their relations to each other, and 3. generate 2D water masks with no prior label information. We take inspiration from Bayesian Neural Networks and implement a probabilistic framework wherein the model outputs are provided with uncertainty estimations that take into account the level of noise in the SWOT point cloud data. [1] Khazaei, B., Read, L.K., Casali, M. et al. GLOBathy, the global lakes bathymetry dataset. Sci Data 9, 36 (2022). https://doi.org/10.1038/s41597-022-01132-9 [2] Wieland, M., Fichtner, F., Martinis, S., Groth, S., Krullikowski, C., Plank, S., Motagh, M. (2023). S1S2-Water: A global dataset for semantic segmentation of water bodies from Sentinel-1 and Sentinel-2 satellite images.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, doi: 10.1109/JSTARS.2023.3333969. [3] PENA LUQUE Santiago, STUDER Mathias, MAXANT Jerome, LEDAUPHIN Thomas, NICOLAS Gael, & YESOU Herve. (2019). CNES ALCD Open water masks (1.2) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.4657020 [4] Santiago Peña-Luque, Sylvain Ferrant, Mauricio Cordeiro, Thomas Ledauphin, Jerome Maxant, et al.. Sentinel-1&2 Multitemporal Water Surface Detection Accuracies, Evaluated at Regional and Reservoirs Level. Remote Sensing, 2021, 13 (16), pp.3279. ⟨10.3390/rs13163279⟩. ⟨hal-04784757⟩
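The Elevation-Area-Volume principle behind DEM4Water's use of dry bathymetry can be illustrated with a minimal sketch: for a candidate water surface elevation E, the area A is the total area of DEM cells below E, and the volume V is the integral of water depth over those cells. This is an illustration of the principle only, not the DEM4Water code; the regular grid and uniform cell area are assumptions.

```python
import numpy as np

def eav_law(dem, cell_area_m2, levels):
    """Compute an Elevation-Area-Volume law from a dry-bathymetry DEM.

    dem          : 2D array of ground elevations (m) inside the reservoir
    cell_area_m2 : area of one DEM cell (m^2), assumed uniform
    levels       : iterable of candidate water surface elevations (m)
    """
    areas, volumes = [], []
    for z in levels:
        flooded = dem < z                       # cells under water at level z
        areas.append(flooded.sum() * cell_area_m2)
        # Depth integrated over flooded cells gives the stored volume
        volumes.append(((z - dem)[flooded]).sum() * cell_area_m2)
    return np.array(areas), np.array(volumes)
```

Evaluating this law over a range of levels yields the E-A and E-V curves that can then be combined with water extent or water level observations to estimate stocks.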

Thursday 26 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Global Operational Quantification of Available Water Resources

Authors: Philippe Mourot, Christophe Fatras, Loic Faucqueur, Nicolas Taburet, Nadine Gobron
Affiliations: CLS - Collecte Localisation Satellites, European Commission – Joint Research Centre
Having access to the latest information available is key for local and central governments to better manage their resources and monitor the environment. The European Copernicus Program delivers reliable, up-to-date information about our planet and its changing climate. This valuable data supports decision-makers, businesses, and citizens in shaping environmental policies and taking informed action. The Global Land component of the Copernicus Land Service (1) (CGLOPS) is part of the European flagship program on Earth Observation. Since 2013, CGLOPS has ensured systematic global monitoring of the Earth’s land surface. In an era marked by environmental change, these metrics offer insights into sustainable water resource management. Focusing on the hydrosphere and cryosphere, Lot 2 of the CGLOPS project delivers operational data on water bodies, snow, and ice cover. These products support critical applications, including food security, public health, climate research, and sustainable water management. On the processing side, approximately 200,000 Sentinel-2 images are processed monthly to map permanent and ephemeral water bodies. The output data enables monitoring of the maximum and minimum water surface extents and seasonal dynamics. Global updates are provided monthly at a spatial resolution of 100 meters, covering the period from 2020 to the present. Water levels for all lakes, reservoirs, and rivers have also been available since 1992. The data is delivered in near real time, derived from all spaceborne radar altimeters, ensuring timely insights for various applications. Several enhancements are being explored for potential integration into future operational phases of CGLOPS. First, a robust methodology combining Sentinel-1 and Sentinel-2 data for water extent mapping aims to provide complementary insights, especially for areas obscured by cloud cover.
The integration of SWOT nadir altimeter products into water level monitoring is planned, significantly expanding coverage to include over 4,000 stations. Additionally, the inclusion of KaRIn swath data from SWOT is under evaluation, offering the potential for even more comprehensive monitoring capabilities. Lastly, leveraging expertise gained from the ESA Climate Change Initiative Lakes (2) project on Lake Storage Change, the operational production of water volume variation is now feasible. This innovation would combine water extent and water level data to deliver global assessments of water storage in lakes and reservoirs. Finally, a range of use cases will be presented to highlight the practical applications of these products, focusing on flood management, irrigation planning, hydroelectric power generation, and other essential aspects of land and water resource management. References: (1) Copernicus Land Monitoring Service (CLMS) - https://land.copernicus.eu/en (2) ESA Climate Change Initiative – Lakes - https://climate.esa.int/en/projects/lakes/

Thursday 26 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Modelling Small Lakes Volume Variations from Satellite Imagery and Altimetry

Authors: Pierre Fabry, PhD Mathilde De Fleury, PhD Benjamin TARDY, PhD Mickaël SAVINAUD
Affiliations: CS-SOPRASTERIA
Water resource management is an important issue in the context of climate change. Large man-made reservoirs are often monitored by managing entities with dedicated sensors. However, this is not the case for many natural lakes, especially small ones and those in remote areas such as steep mountains, even though these are critical resources. To assess changes in surface water resources more thoroughly, it is essential to quantify water volume variability. Remote sensing is the most affordable technology to address this need, but the challenge is twofold: first, the altimetric spatial sampling implies that the waterbody must be at least 300 m wide along the satellite ground track intersection; second, the lack of validation data requires the use of other remote sensing data for verification. These are the main obstacles to the operational monitoring of such water bodies. In this work, we propose to use information from combined optical and altimetry missions (de Fleury et al., 2022, Girard et al., 2024, Fabry et al., 2024) to estimate volumes or volume variations for small water bodies (around 10 hectares), and to verify our results over water bodies in France using an independent, high-resolution DEM-based approach. Water surfaces are derived from Sentinel-2 MSI water masks from SurfWater (Peña-Luque, S. et al., 2021), freely available on the hydroweb-next portal (*). They are co-dated and collocated with Sentinel-3 SRAL and CryoSat-2 UF-SAR / Delay-Doppler land products over relatively small lakes. The principle relies on matching surface and height measurements as (S, Z) pairs. These are then sorted and used to compute individual volume layers dVk, which can be stacked to build a complete or partial V(Z) model. In practice, while the method is well known, it requires adaptation and a more in-depth analysis to be effective for small water bodies.
Pixel classification errors, such as mixed pixels affecting the contours of water bodies, require post-processing of the water surface time series. This involves applying appropriate filtering to reduce inaccuracies (step 1). The confidence in the altimetric measurement relies on the intersection width of the water body relative to the along-track resolution of the altimeter (~300 m for UF-SAR products). The actual intersection varies from pass to pass as it depends on orbit variations; it differs between ascending and descending passes and, overall, it is strongly influenced by the morphology of the water body itself. We propose in this study a confidence index for each water body overflight. Moreover, the record selected to provide the height measurement is now checked to be physically consistent with its neighbours (step 2). In addition to these precautions, to ensure the quality of the measurements, we applied a classical outlier rejection filter to the water height time series (step 3). After processing the data as described in the previous steps, we estimate a surface-height relation, a hypsometric curve, using the series of selected pairs for the lake of interest at different lake filling rates, sorted by increasing lake extent: {(Sk, Zk)} for k in [0, K]. In this process, it is important to carefully consider cases where too many pairs compete to cover a given lake level. To address this, a consistency check is run on the group of pairs, involving among other things the above-mentioned confidence index on water height values (step 4), so that inconsistent pairs are discarded. The (S, Z) point cloud goes through an iterative sequence of polynomial regression followed by outlier rejection, to provide the S(Z) and Z(S) models with statistical confidence metrics (step 5).
In the end, the volume variation between two consecutive water levels is estimated (step 6) from a simple formula based on the S(Z) model and the assumption of an average linear slope between two consecutive surface values (the current volume layer): dVk = (Zk - Zk-1) * (S(Zk) + S(Zk-1)) / 2. Thus, knowing the reference height value, which corresponds to the lowest height situation (initial condition), and based on a step-by-step integration from this value, we estimate the water body volume variation. This approach has been applied to several lakes in France across heterogeneous cases, including a variety of shapes and sizes. Results show the potential of this method to monitor small water bodies by combining data sources, especially since the method is supported by validations. The S(Z) and V(Z) models are compared with similar models derived from a DEM-based model issued by the DEM4Water tool and the RGEALTI multi-sensor DEM from IGN France. We have also included lakes monitored by gauges in the case studies to support this study. This promising method and its results are accompanied by a critical analysis, particularly regarding errors and uncertainties. This work is a step forward in developing a robust and reliable method for modelling lake water volumes (absolute or variation) using only satellite remote sensing data (optical imagery and radar altimetry). We will draw directions for the next steps in our conclusions and perspectives. References: (*) https://www.theia-land.fr/typeofproduct/hydroweb-next/ de Fleury M., Kergoat L., and Grippa M.: Hydrological regime of Sahelian small waterbodies from combined Sentinel-2 MSI and Sentinel-3 Synthetic Aperture Radar Altimeter data, Hydrol. Earth Syst. Sci., 27, 2189–2204, https://doi.org/10.5194/hess-27-2189-2023, 2023.
Peña-Luque S., Ferrant S., Cordeiro M.C.R., Ledauphin T., Maxant J., Martinez J.-M.: Sentinel-1&2 Multitemporal Water Surface Detection Accuracies, Evaluated at Regional and Reservoirs Level. Remote Sens., 13, 3279. https://doi.org/10.3390/rs13163279, 2021. Fabry P., de Fleury M., Tardy B., Nicolas G., Savinaud M.: Feasibility of Monitoring Lakes Volume Variations from Imagery & Altimetry, poster at the 30 Years of Progress in Radar Altimetry Symposium, Montpellier, 2024. Girard F., Kergoat L., Nikiema H., Wubda M., Yonaba R., Fowé T., Touré A.A., Maïnassara I., de Fleury M., and Grippa M.: Comparison of methods to derive the height-area relationship of shallow lakes in West Africa using remote sensing, ESS Open Archive, preprint under review, https://doi.org/10.22541/essoar.171033146.63952188/v1, 2024.
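The step-by-step integration of volume layers (step 6) can be sketched as follows, using the trapezoid formula dVk = (Zk - Zk-1) * (S(Zk) + S(Zk-1)) / 2 from the abstract. This is illustrative only, not the authors' code; the S(Z) model is assumed to be already fitted.

```python
import numpy as np

def volume_variation(z_levels, s_of_z):
    """Cumulative volume variation above the lowest observed level.

    z_levels : increasing water surface elevations Zk (m)
    s_of_z   : callable S(Z) returning surface area (m^2) at elevation z
    """
    z = np.asarray(z_levels, dtype=float)
    s = np.array([s_of_z(zi) for zi in z])
    # Volume of each layer between consecutive levels (trapezoid rule)
    dv = np.diff(z) * (s[1:] + s[:-1]) / 2.0
    # Step-by-step integration from the reference (lowest) level
    return np.concatenate([[0.0], np.cumsum(dv)])
```

The first entry is zero by construction (the initial condition at the lowest height); each subsequent entry stacks one volume layer dVk onto the previous total.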

Thursday 26 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Improved monitoring of Arctic lakes within the ESA CCI Lakes

Authors: Dr. Eirik Malnes, Dr. Hannah Vickers, Dr. Daniel Trosten, Dr. Marion Bocquet, Robert Ricker
Affiliations: NORCE
Lakes are sensitive indicators of environmental and climatic changes. Monitoring their response to changing meteorological conditions is crucial for our understanding of global freshwater resources. There is a need to monitor essential climate variables such as Lake Water Level, Lake Water Extent and Lake Ice Cover/Thickness over extended periods in order to understand the complex variability of lakes on a global scale. Within ESA's CCI Lakes initiative, NORCE is developing radar-based methods to improve the monitoring of Arctic lakes above 60°N. These lakes are poorly covered by the current CCI Lakes baseline dataset due to the initial implementation of optical sensors, which are challenged by ice, cloud and light conditions at high latitudes. Synthetic Aperture Radar may be used to monitor the areal extent of Arctic lakes. We demonstrate that time series with frequent image acquisitions using Sentinel-1 allow us to monitor the extent with up to daily frequency at high latitudes. Complete time records reaching back to 2004 can be reconstructed using Envisat ASAR, Radarsat-2 and Sentinel-1. In this project we have developed a fully automatic AI-based method for lake extent classification. A U-Net architecture has been selected for the purpose, and the training data was generated by a statistical method (an unsupervised K-means classifier) and manually inspected for a large number of lakes and instances. Within the CCI Lakes project the U-Net method will be validated and a time series will be produced for a large number of Arctic lakes. We will use complementary observations from Sentinel-2, in-situ data and data from radar altimeters to validate the Sentinel-1 U-Net-based lake water extent estimates, as well as evaluate its performance relative to the K-means clustering algorithm. The close correlation between lake water level and lake water extent for any lake can be utilized to analyse the quality of the Sentinel-1 estimates with high accuracy.
The project will also explore new methods for Arctic lake monitoring using radar altimeters, including CryoSat-2, Sentinel-3 SRAL and the recently launched SWOT, which allows for two-dimensional mapping of water levels by radar interferometry. Additionally, we will study alternative sensors such as the ICESat-2 altimeter and L-band SAR sensors (ALOS-2, NISAR) to assess their suitability for monitoring lake extent, water levels and lake ice conditions/thickness.
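The unsupervised labelling step mentioned above, a two-class K-means on SAR backscatter where the darker cluster is taken as open water, can be sketched as a minimal pure-NumPy 1D K-means. This is an illustration of the principle, not the project's implementation; the percentile-based initialisation is an assumption.

```python
import numpy as np

def kmeans_water_mask(backscatter_db, n_iter=20):
    """Two-class K-means on SAR backscatter (dB); returns a boolean
    water mask where the low-backscatter cluster is labelled water."""
    x = backscatter_db.ravel().astype(float)
    # Initialise the two cluster centres at the 10th/90th percentiles (assumed)
    centres = np.percentile(x, [10, 90])
    for _ in range(n_iter):
        # Assign each pixel to its nearest cluster centre
        labels = np.abs(x[:, None] - centres[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centres[k] = x[labels == k].mean()
    # Water surfaces scatter the signal away from the sensor, so water
    # is the low-backscatter (darker) cluster
    water_cluster = int(np.argmin(centres))
    return (labels == water_cluster).reshape(backscatter_db.shape)
```

Masks produced this way would then be manually inspected before being used as U-Net training labels, as described in the abstract.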

Thursday 26 June 14:00 - 15:30 (Hall E1)

Session: A.09.09 Arctic and Antarctic Sea Ice in the Earth system: Advancing Research with Remote Sensing, In-Situ Observations, and Modeling - PART 2

Sea ice plays a crucial role in the global climate system through its strong effect on albedo, ocean and atmospheric circulation, and its feedbacks with ice sheets and permafrost.

Remote sensing of sea ice has been the cornerstone of Arctic and Antarctic sea ice research for over 50 years. These long-term, large-scale, and stable time series of sea ice parameters provide the baseline for a deeper understanding of the ongoing dramatic changes in both hemispheres. This knowledge is further diversified and enhanced by new and upcoming satellite missions (e.g., ICESat-2, SWOT, CIMR, CRISTAL, ROSE-L) that provide insights into detailed processes such as snow depth changes, meltpond drainage, and sea ice ridging, as well as support operational forecasting and monitoring applications. They also advance our understanding of the relevance of sea ice for atmospheric, oceanic, and ecological processes, e.g., Arctic cloud formation or the timing of ice algae blooms.

Sea ice parameters are observed over a large wavelength spectrum and derived from many different sensors including microwave and infrared radiometers, visible observations, radar imagers, and lidar or radar altimeters. Combining, merging, and jointly analyzing products from different satellite sensors and scales represents the next powerful step in advancing our knowledge of the fast-changing sea ice covers.

A key challenge remains in bridging scales and spheres between Earth Observation (EO) datasets, climate modeling, and in-situ datasets. New methodological advances, such as data-driven modeling and physics-informed artificial intelligence, may be well-suited to address this challenge.

This session addresses all aspects of sea ice, including the current status and needs in enhancing EO methodologies, and the use of EO products for evaluating polar climate model simulations and for data assimilation in Numerical Weather Prediction models. Airborne and in-situ observation campaigns are critical to evaluate, calibrate, and develop satellite retrievals and we invite submissions on these aspects, too. Submissions on solutions for addressing up- and downscaling challenges on different temporal and spatial scales and between different data types are also encouraged.

Thursday 26 June 14:00 - 15:30 (Hall E1)

Presentation: Intercomparison of sea-ice thickness datasets: Results of the ESA SIN’XS project

Authors: Valentin Ludwig, Caroline Ribere, Sara Fleury, Christian Haas, Vincent Boulenger, Stefan Hendricks, Javier Pastor, Jaoudat Sabalbal, Stephan Paul, Michel TSAMADOS, Mahmoud El Hajj, Dr. Jérôme Bouffard, Alessandro Di Bella, Michele Scagliola
Affiliations: AWI, NOVELTIS, LEGOS, UCL, ESA
Sea-ice thickness is a crucial parameter for various scientific disciplines, including climate science, oceanography and ecosystem processes. It plays a vital role in regulating exchanges of heat, moisture and momentum between the polar oceans and the atmosphere, influencing ocean currents, and affecting local cloud cover and precipitation. Satellite-based sea-ice thickness products date back to 1994. Their number has surged since 2010 with the advent of the CryoSat-2, ICESat, ICESat-2 and SMOS satellites. The suite of sea-ice thickness products is complemented by numerical models and reanalyses. Due to the large uncertainties of existing sea-ice and snow thickness products, there is no consensus about trends of Arctic and Antarctic sea-ice thickness (Mallett et al., 2021), and most information about ice thickness changes is still regional and comes from moored upward-looking sonar (ULS; Krishfield et al., 2014; Hansen et al., 2015), airborne (Richter-Menge and Farrell, 2013; Haas et al., 2010), or in-situ measurements (e.g., Haas et al., 2017; Nicolaus et al., 2021). In addition, it is still difficult to use the data for model improvement, assimilation, and numerical weather prediction (e.g., Schröder et al., 2019), or for operational purposes related to, e.g., Arctic shipping or offshore operations (see, e.g., the Kepler project, https://kepler-polar.eu). In light of this situation, the international sea-ice research and data product user community has suggested carrying out coordinated intercomparison projects for both sea-ice and snow thickness products.
The ESA-funded project Sea Ice Thickness Intercomparison Exercise (SIN’XS), led by NOVELTIS in collaboration with AWI, LEGOS, and UCL, aims primarily at (i) defining agreed assessment methods, metrics and public domain protocols for evaluating and inter-comparing sea-ice thickness products and (ii) collecting sea-ice and snow thickness products from different Earth Observation sensors, reanalyses, models and reference data from airborne campaigns and marine observatories. In SIN’XS, we integrated over 30 of these products in a harmonised format to facilitate comparison. A robust intercomparison framework has enabled the creation of a sea-ice thickness dataset inter-comparison protocol and a reconciled sea-ice thickness estimate with rigorous uncertainty quantification. We developed an open, online Assessment Platform in which data can be interactively selected and displayed, and statistical analyses can be performed on the fly. All data are available for download. A Python library has been set up which converts measured quantities like radar freeboard or sea-ice draft to sea-ice or snow thickness. Gaussian error propagation is used to quantify the uncertainties of the conversion. This makes it possible to study the sensitivity of sea-ice thickness retrievals to the input parameters. Beyond technical advancements, the project actively federates the polar science community, fostering collaboration through workshops and publications. Project outcomes include a detailed white paper to be published to share results and guide future research, and strategic recommendations supporting future missions. In our presentation, we will give an overview of the major achievements of SIN’XS: First, we will show comparisons of large-scale datasets and quantify how close they are to each other, as well as their evaluations against reference data. Second, we will show how varying the input parameters for sea-ice thickness retrievals impacts the outcome.
Lastly, we will present a reconciled sea-ice thickness estimate based on a combination of the products which are available in the SIN’XS database.
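The kind of conversion described in the abstract can be sketched in a few lines: the hydrostatic freeboard-to-thickness relation with first-order Gaussian error propagation. This is an illustrative example with nominal density values, not the actual SIN'XS Python library.

```python
import math

# Hydrostatic equilibrium relates ice freeboard f_i and snow depth h_s to
# ice thickness h_i:  rho_i*h_i + rho_s*h_s = rho_w*(h_i - f_i)
#   => h_i = (rho_w*f_i + rho_s*h_s) / (rho_w - rho_i)
def freeboard_to_thickness(f_i, h_s, rho_w=1024.0, rho_i=917.0, rho_s=300.0):
    """Sea-ice thickness (m) from ice freeboard and snow depth (m).
    Density values (kg/m^3) are illustrative defaults."""
    return (rho_w * f_i + rho_s * h_s) / (rho_w - rho_i)

# First-order Gaussian error propagation:
#   sigma_h^2 = sum_i (dh/dx_i)^2 * sigma_i^2
def thickness_uncertainty(f_i, h_s, s_f, s_hs, s_rhoi,
                          rho_w=1024.0, rho_i=917.0, rho_s=300.0):
    d = rho_w - rho_i
    dh_df = rho_w / d                              # sensitivity to freeboard
    dh_dhs = rho_s / d                             # sensitivity to snow depth
    dh_drhoi = (rho_w * f_i + rho_s * h_s) / d**2  # sensitivity to ice density
    return math.sqrt((dh_df * s_f)**2 + (dh_dhs * s_hs)**2
                     + (dh_drhoi * s_rhoi)**2)

h = freeboard_to_thickness(0.3, 0.2)   # 30 cm freeboard, 20 cm snow
s = thickness_uncertainty(0.3, 0.2, s_f=0.03, s_hs=0.05, s_rhoi=10.0)
```

The large sensitivity factors (rho_w divided by the small density difference rho_w - rho_i) illustrate why freeboard-based thickness retrievals are so sensitive to the input parameters, which is exactly the sensitivity the SIN'XS library is designed to quantify.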
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall E1)

Presentation: Exploiting Altimetry and SAR Synergies: Extrapolating ICESat-2 Freeboard Tracks to Local 2D Maps

Authors: Dr. Karl Kortum, Gunnar Spreen, Suman Singha
Affiliations: German Aerospace Center (DLR), University of Bremen, Danish Meteorological Institute (DMI)
Satellite altimetry and Synthetic Aperture Radar (SAR) measurements provide both important and to a large degree complementary information about the Arctic sea ice cover. We are currently developing a synergistic approach to expand laser altimetry-derived ice freeboard measurements to SAR scenes. The scope and frequency of SAR acquisitions are complemented by the accuracy and detail of altimetry data products. Whilst both instruments provide valuable data on their own, their synergies have not yet been completely leveraged. However, there is an ongoing effort to relate the measurements for increased insight and possible novel retrieval methods. The ATLAS laser altimeter on ICESat-2 can provide sea ice freeboard measurements along three parallel tracks separated by about 3 km, i.e. there are large gaps between the tracks and the overall measurement coverage of the Arctic Ocean is sparse. Sentinel-1 C-band SAR provides 400 km wide scenes of sea ice backscatter with 40 m resolution in Extra Wide (EW) swath mode. The backscatter is related to the ice topography and ice type, but the relationship is complex. As part of an ongoing effort to combine these two data sources, we have constructed a method to extrapolate ICESat-2-derived freeboard measurements to the scale and scope of Sentinel-1 scenes, even when the measurements are up to a day apart. We extrapolate the ICESat-2 measurements on a per-scene basis, so that freeboard maps are constructed for individual Sentinel-1 EW scenes. The extrapolated freeboard is validated against near-coincident ICESat-2 freeboard tracks not used for the extrapolation. At 100 m resolution, these extrapolated freeboard maps show errors of around 10 cm. At 400 m resolution, the errors drop to around 6 cm. Such freeboard extrapolations can fill a knowledge gap of medium to high-resolution, high-frequency Arctic sea ice topography maps and could be converted to sea ice thickness if snow and density information are added.
In the future, they might provide relevant information for ice-atmosphere coupling and thermodynamic and dynamic sea ice development. In the planned presentation we aim to showcase the developed method and reach out to the scientific community to discuss the uses of these freeboard maps. With new SAR (NISAR, ROSE-L) and altimetry (SWOT, CRISTAL) missions recently launched and planned for the near future, synergistic uses of these two instrument types will remain highly relevant to the polar research community for the foreseeable future.
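The resolution-dependent errors reported above (about 10 cm at 100 m, 6 cm at 400 m) can be illustrated with a minimal validation sketch: block-averaging both the extrapolated map and the reference to a coarser grid before computing the RMSE. This is an illustrative example on synthetic data, not the authors' actual pipeline; note that with purely uncorrelated pixel noise the error would shrink faster with averaging than the 10 cm to 6 cm figures suggest, hinting at spatially correlated errors in the real maps.

```python
import numpy as np

rng = np.random.default_rng(1)

def rmse_at_resolution(truth, estimate, block):
    """RMSE after block-averaging both fields by `block` x `block` pixels
    (illustrative validation sketch, not the authors' method)."""
    n = truth.shape[0] // block * block
    t = truth[:n, :n].reshape(n // block, block, n // block, block).mean(axis=(1, 3))
    e = estimate[:n, :n].reshape(n // block, block, n // block, block).mean(axis=(1, 3))
    return float(np.sqrt(np.mean((t - e) ** 2)))

# Synthetic example: a freeboard field plus uncorrelated retrieval noise.
truth = rng.uniform(0.0, 0.6, size=(400, 400))             # freeboard in metres
estimate = truth + rng.normal(0.0, 0.10, size=(400, 400))  # 10 cm pixel error
rmse_fine = rmse_at_resolution(truth, estimate, block=1)   # ~0.10 m
rmse_coarse = rmse_at_resolution(truth, estimate, block=4) # noise averages down
```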
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall E1)

Presentation: Investigating snow sinks on level sea ice: A case study in the western Arctic

Authors: Ioanna Merkouriadi, Arttu Jutila, Glen E. Liston, Andreas Preußer, Melinda A. Webster
Affiliations: Finnish Meteorological Institute, Colorado State University, Alfred Wegener Institute, German Space Agency at DLR, University of Washington
Snow and sea-ice interactions play a critical role in the Arctic sea-ice system, influencing mass balance and thermodynamic processes. This study investigates snow sinks on level sea ice in the western Arctic, focusing on two key mechanisms: snow-ice formation and sub-parcel snow mass redistribution. Using SnowModel-LG, a state-of-the-art Lagrangian snow evolution model, coupled with the HIGHTSI thermodynamic sea ice model, we developed the integrated SMLG_HS modelling framework. This novel system enables detailed simulations of snow depth, snow density, snow-ice formation, and sea-ice thickness. The study evaluates the SMLG_HS outputs against high-resolution airborne snow and sea-ice observations from NASA Operation IceBridge (OIB) and the Alfred Wegener Institute’s (AWI) IceBird campaigns. The evaluation reveals that excluding snow redistribution processes leads to significant biases: snow depth on level sea ice is overestimated, resulting in underestimated level sea-ice thickness and overestimated snow-ice thickness. A modelling sensitivity experiment suggests that a 40 % reduction in modelled snow depth on level sea ice is required for realistic simulations of snow and sea-ice thicknesses in the western Arctic in April 2017. Additionally, analysis of more than 100 airborne surveys from 2009 to 2019 indicates a linear relationship between the fraction of snow volume on level sea ice and the fraction of level ice along sea-ice transects, demonstrating the effect of snow mass redistribution. This emphasizes the critical role of snow redistribution processes in shaping snow and sea-ice interactions. The findings underscore the importance of including snow sinks caused by snow-ice formation and redistribution in snow evolution models. Accounting for these processes is essential for improving sea-ice thickness retrievals based on satellite altimetry data, which are sensitive to snow depth and density distributions.
This study highlights that uneven snow distributions influence not only the thermodynamic growth of sea ice but also its optical and mechanical properties, with broader implications for climate modelling and polar ecosystem studies. By demonstrating the necessity of detailed snow process modelling, this research advances our understanding of Arctic snow and sea-ice dynamics and provides a framework for enhancing the accuracy of remote sensing and coupled climate models. Future work should focus on expanding datasets and observations to capture spatial and temporal variability across the Arctic, particularly under rapidly changing climate conditions.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall E1)

Presentation: Novel pan-Arctic high resolution sea ice products from 10 years of Sentinel-1 Synthetic Aperture Radar and AMSR2 microwave radiometer data

Authors: Matilde Brandt Kreiner, Tore Wulf, Jørgen Buus-Hinkler, Suman Singha, Anton Korosov, Frode Dinessen, Athanasios Athanasiadis, Ninna Juul Ligaard
Affiliations: Danish Meteorological Institute (DMI), Nansen Environmental Remote Sensing Center (NERSC), Norwegian Meteorological Institute (MET)
A growing and diverse user group accessing wider parts of the Arctic due to sea ice retreat caused by climate change, as well as rising geopolitical tensions in the Arctic, calls for more effective ways of monitoring Arctic sea ice for safety and planning purposes. Sea ice models, as well as climate models, are moving towards higher spatial resolution and need more detailed pan-Arctic ice information products with associated uncertainties, and for longer time series, for assimilation purposes and verification of forecasts and hindcasts. In recent years, deep-learning-based vision methodologies have emerged with promising results for SAR-based sea ice information retrievals. The DMI-ASIP deep learning model is a Convolutional Neural Network (CNN) that fuses Sentinel-1 SAR imagery (Extra Wide swath and IW mode) with ~80 m pixel spacing and AMSR2 passive microwave radiometer (PMR) data; these sensors have complementary capabilities, helping to resolve the ambiguities that can occur in SAR imagery, e.g. at high wind speeds. Utilizing data from satellite sensors with different capabilities allows sea ice products to be independent of daylight and more robust to different weather conditions and seasonal variation. The DMI-ASIP model has been trained with a vast dataset of regional ice charts produced at the DMI Greenland Ice Service and the Canadian Ice Service (CIS) that contain information about sea ice concentration, stage-of-development and floe size classes. The training dataset was produced under the ESA AI4Arctic project. While the deep learning model has been trained on ice charts covering the Greenland and Canadian parts of the Arctic, the algorithm has been modified to run for the pan-Arctic region, to which it generalizes well. The DMI-ASIP model has been used for producing a novel, high resolution Arctic sea ice information level-3 (L3) dataset (daily mosaics) that covers the full 10-year Sentinel-1 era from 2014 to 2024.
The L3 products provide sea ice concentration, stage-of-development and floe size information with associated uncertainties, on a 0.5 km polar-stereographic grid. A corresponding level-4 (L4) product is based on the L3 products' sea ice concentration information and associated uncertainty, 'gap-filled' with EUMETSAT OSI SAF sea ice concentration products, and delivered on a 1.0 km spatial grid. In addition to their usefulness as input to forecast (and hindcast) models, these unique multivariate and high resolution sea ice datasets also serve sea ice studies, e.g. of the ice edge area and the Marginal Ice Zone. The new sea ice products have been developed by a CMEMS service evolution project involving the Danish Meteorological Institute (DMI), Nansen Environmental Remote Sensing Centre (NERSC) and the Norwegian Meteorological Institute (MET) from the CMEMS Sea Ice Thematic Assembly Center (SI TAC), and the products, both the 10-year multiyear datasets and the corresponding real-time products that extend the datasets going forward, are available through the Copernicus Marine Service product catalogue (released November 2024). Recent development efforts have made the DMI-ASIP model generalizable to SAR data from the Radarsat Constellation Mission, which will be included in the production of the real-time L3 and L4 products in 2025.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall E1)

Presentation: Linear Emulation of Observed and Modelled Arctic Sea-Ice Sensitivity and Loss

Authors: Quentin Rauschenbach, Andreas Wernecke, Dirk Notz
Affiliations: University of Hamburg
We introduce and analyse a linear emulator to describe the evolution of Arctic sea-ice area (SIA). The development of this emulator is motivated by the large spread of climate models in describing the decline of Arctic sea-ice area with its profound implications for global climate dynamics, ecosystems, and human activities. In the past, linear predictions from observational records have sometimes been used to circumvent these known climate-model shortcomings. However, these predictions lack robust uncertainty estimates, which we now can derive from our novel linear emulator. The emulator is developed to generate synthetic ensembles of SIA time series from observational or modelled inputs based on learning the structure of SIA variability from analyses of the sixth phase of the Coupled Model Intercomparison Project (CMIP6) multimodel ensemble and the MPI Grand Ensemble (MPI-GE). We find that due to the limited memory of the modelled sea-ice system, ensemble members evolve randomly around the system's background slope that is specific to each model. This randomness reflects the impact of internal variability, which affects both climate model simulations and extrapolated predictions. Thus, the emulator is constructed such that it combines a deterministic linear trend, representing anthropogenic forcing, with stochastic noise reflecting natural variability. Using a Monte Carlo framework, this setup enables the creation of probability density functions for metrics such as SIA sensitivity to cumulative CO₂ emissions and predictions of the timing of an ice-free Arctic from observational records. Testing the emulator against the MPI-GE using noise from a first-order autoregressive model (AR(1)) demonstrated its ability to replicate the modelled SIA internal variability and produce realistic uncertainty bounds for sensitivity and extrapolated timing of ice-free Arctic conditions. 
Here, extrapolated timing refers to the time at which linear predictions from the observational period are as likely to be ice-free as not. Predictions of first ice-free conditions are more challenging, in particular because they are more sensitive to parameter estimation and non-stationary variations of sensitivity and natural variability. Applying the emulator to the 1979-2022 satellite record, this study provides, for the first time, probability distributions for linear predictions of Arctic sea ice loss that account for record length, autocorrelation and the possibility of non-Gaussian distributed uncertainties. Notably, estimates for ice-free conditions are asymmetrically distributed, with extrapolated timing at about 3600 Gt (3180 Gt, 4070 Gt) (median and 95% confidence interval) of total cumulative anthropogenic CO₂ emissions (year 2046 (2037, 2057) under a medium forcing scenario) and first ice-free conditions for 3400 Gt (2960 Gt, 3700 Gt), about three years earlier. These observation-based predictions are fully within the large range of CMIP6 projections but have substantially smaller uncertainties. Combining several satellite products with the help of the emulator, we estimate the sensitivity of sea ice per amount of anthropogenic CO₂ emissions during the satellite era to be -2.31 m²/tCO₂ (-2.84 m²/tCO₂, -1.84 m²/tCO₂), which is slightly weaker but more constrained than previously estimated. The impact of internal variability on the observed sensitivity is responsible for most of these uncertainties (about ±0.44 m²/tCO₂), with the smaller remainder originating from differences in data products. By bridging the gap between linear observational models and complex climate models, the emulator can act as a summary of our knowledge about SIA variability and changes in the Arctic system over more than four decades. It provides a valuable tool for advancing Arctic sea ice predictions and informing climate policy.
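The structure of the emulator, a deterministic linear trend plus AR(1) noise evaluated in a Monte Carlo framework, can be sketched as follows. All parameter values here are illustrative placeholders, not the fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def emulate_sia(n_members, n_years, sia0, trend, phi, sigma):
    """Synthetic SIA ensemble (million km^2): a deterministic linear trend
    representing anthropogenic forcing plus AR(1) noise representing internal
    variability. Parameters are illustrative, not the study's fitted values."""
    sia = np.empty((n_members, n_years))
    noise = np.zeros(n_members)
    for t in range(n_years):
        # AR(1) update: x_t = phi * x_{t-1} + eps_t
        noise = phi * noise + rng.normal(0.0, sigma, n_members)
        sia[:, t] = sia0 + trend * t + noise
    return sia

# Monte Carlo estimate of the "extrapolated timing": the first year in which
# members are as likely to be ice-free (SIA < 1 million km^2) as not.
ens = emulate_sia(n_members=5000, n_years=80, sia0=6.0, trend=-0.07,
                  phi=0.5, sigma=0.4)
p_ice_free = (ens < 1.0).mean(axis=0)       # probability of ice-free per year
timing = int(np.argmax(p_ice_free >= 0.5))  # first year with P >= 0.5
```

Because each member wanders randomly around the background slope, the ensemble spread of the crossing year directly yields the uncertainty bounds discussed above; replacing the synthetic trend and AR(1) parameters with values estimated from the satellite record would reproduce the paper's workflow in outline.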
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall E1)

Presentation: Passive Microwave Data Assimilation in ECMWF’s Coupled Earth System Over Arctic and Antarctic With Focus on Sea Ice

Authors: Stephen English, Christoforos Tsamalis, Dr Alan Geer, Dr Niels Bormann, Philip Browne, Patricia de Rosnay, Filomena Catapano
Affiliations: European Centre For Medium-Range Weather Forecasts, European Space Agency
Assimilation of microwave radiances over sea ice surfaces has been limited by poor knowledge of the surface emission and reflection characteristics, as well as by limitations in knowledge of the sea ice concentration (SIC). Until recently, only channels with minimal surface sensitivity have been assimilated, using dynamic emissivity retrieval or emissivity atlases. At ECMWF, a new technique has been developed using a combination of machine learning and data assimilation to simultaneously estimate an empirical sea ice surface emissivity model along with the SIC, based on microwave imager radiances from a year-long training dataset. From ECMWF cycle 49r1, which went operational in November 2024, this surface emissivity model has been used within the atmospheric 4D-Var data assimilation to allow assimilation of strongly surface-sensitive channels from 10 to 89 GHz from microwave imagers. As well as influencing the atmosphere, this provides SIC retrievals at observation locations that can in turn be used in the sea ice analysis of ECMWF’s coupled data assimilation system in the future, therefore allowing the consistent use of up-to-date level 1 satellite information across different Earth System components. A particularly difficult aspect of the sea ice emissivity problem is that the temperature profile, microstructural details and optical characteristics of sea ice and snow are poorly known, so the model takes as input three empirical variables which are intended to represent these unknown controlling factors for the sea ice emissivity. During the assimilation, these variables are retrieved from the observations, simultaneously with the SIC, using auxiliary observation space control variables. Initially developed for AMSR2, this novel hybrid/physical method is being extended to cover a wider range of sensors and frequencies, and to potentially include more physics describing relevant aspects of the radiative transfer in the snow and ice layers.
In particular, efforts are underway under the Data Assimilation and Numerical Testing for Copernicus Expansion Missions (DANTEX) project to extend the approach to lower frequencies down to L-band, while preparing for the assimilation of measurements from the upcoming Copernicus Imaging Microwave Radiometer (CIMR) mission, currently set for launch by ESA in 2029. L-band observations also permit estimation of sea ice thickness (SIT), mainly for thin sea ice, complementing information on thick sea ice from altimeters such as the Copernicus Polar Ice and Snow Topography Altimeter (CRISTAL). Furthermore, the use of information about snow from CIMR-like available observations will be investigated. We will present plans for how this additional information can be exploited in ECMWF’s coupled assimilation system, building on the framework developed for SIC estimation and assimilation.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.94/0.95)

Session: C.01.01 High Altitude Pseudo Satellites in Earth Observation

High altitude platforms, or pseudo-satellites (HAPS), are uncrewed platforms positioned about 20 km above the Earth's surface in the low stratosphere where they take advantage of weak stratospheric winds and solar energy. The ability to reside in a fixed location over extended periods (several months) enables applications requiring persistent observation and enhanced communication/navigation at local or regional scale.
The Earth observation community has shown interest in the exploitation of HAPS ranging from farming, urban planning, air quality, greenhouse gas and sea ice monitoring to disaster response, fire monitoring, security and maritime surveillance.

The session aims at bringing together scientists, industry and other stakeholders to discuss:
- Recent developments
- Scientific applications to better understand our environment
- Applications and services that combine space borne, airborne and HAPS assets
- Role of HAPS for the development of future satellite missions and satellite cal/val
- Demonstration campaigns

The session is soliciting presentations demonstrating current and future HAPS capabilities, showcasing HAPS as an element of the #FutureEO Programme.

HAPS, as part of the #FutureEO Programme, may play an important role in the near future as they have the capability to serve as a testbed for the development of future satellite missions. Recent test flights have shown the feasibility of reaching and staying in the stratosphere, and the rapid development is supported by new technologies. This underlines the importance of bringing together different stakeholders to discuss recent developments and perspectives.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: New Concept of Earth Observation from the Stratosphere Using HAPS Equipped with Onboard Imaging Sensors

Authors: Chiara Giaquinto, PhD Candidate Eleonora Riccio, Francesco Tufano
Affiliations: CIRA (Italian Aerospace Research Centre)
The contamination of the environment with toxic substances is a topic of critical importance today. It can be related to various factors and manifest in multiple ways, for example through the different chemical transformation capacities of plants. Among environmental contaminants, Potentially Toxic Elements (PTEs) such as lead, zinc, chromium, and cadmium are commonly identified. These elements are primarily associated with industrial applications and agricultural practices, including the use of fertilizers. Additionally, Polycyclic Aromatic Hydrocarbons (PAHs), a group of organic pollutants, are introduced into the environment through activities such as biomass combustion, vehicle emissions, industrial processes, and oil spills. All these substances can damage the environment and pose a risk to the health of animal and plant organisms, as well as humans, since their danger grows with their capacity to accumulate. Therefore, the scientific evaluation of the concentration of these toxic substances in living organisms constitutes an essential component in assessing local pollution. One way to estimate the contamination level is through the study of bioindicators, such as plants, which act as natural "sensors", since they respond with visible and potentially specific symptoms to the presence of one or more stressors. Today, one of the methods for studying bioindicators to determine the presence of environmental issues involves the use of non-invasive techniques. In particular, remote sensing techniques make it possible to analyse the spectral response of vegetation, especially through fluorescence, using instruments called fluorimeters. In addition, infrared thermography is a non-destructive diagnostic technique that allows the determination of the surface temperature of a body based on the infrared radiation it emits. A further issue to monitor concerns phenomena related to climate change, such as landslides, floods and earthquakes.
Tracking this type of phenomenon requires persistent monitoring to detect the event, follow the crisis and support rescue operations. It is therefore important in these cases to have real-time monitoring, along with data processing and communication systems on board. To obtain reliable environmental health indices and monitor areas impacted by climate change-related phenomena, specific sensors should be employed. These include multispectral and hyperspectral cameras, visible and thermal sensors, and radar systems, as some instruments are particularly sensitive to cloud presence. For environmental monitoring applications, vegetation is observed using multispectral, hyperspectral, and thermal sensors. These make it possible to increase the informational content of the images. For instance, the combined use of thermal and multispectral sensors enables the classification of materials (vegetation, soil, asphalt, roofs, etc.) and the assessment of their condition. Phenomena related to climate change can be monitored using visible, thermal, and radar sensors. The visible sensor ensures high spatial resolution, while radar enables the observation of areas of interest even when cloud cover obstructs passive sensors. Thermal sensors, on the other hand, ensure observability during remediation phases; they make it possible to assess the most fragile areas of the territory. In the field of environmental crime surveillance, Earth observation from HAPS (High Altitude Pseudo-Satellite) platforms offers the significant advantage of persistence, meaning the capability to continuously monitor terrestrial or marine geographic areas. This type of monitoring makes it possible to supervise sensitive areas and detect illegal dumping, fires (deliberate or not), and so on. Visible and thermal sensors are typically used to detect dangerous events.
A HAPS with adequate flight capabilities could also enable hyperspectral push-broom sensors to contribute to improving the recognition of the aforementioned events. The data fusion of visible, thermal, and hyperspectral images will certainly enhance the performance of a multiplatform surveillance system. Flight trajectories confined within an area of interest and mission coordination with other platforms of the same type can ensure continuous observability of the region of interest and the proper movement of push-broom sensors for their data acquisition. Live processing of the data acquired by the sensors is, in this case, more important than ever. The promptness of the alert is essential for this application; therefore, onboard artificial intelligence algorithms oriented toward monitoring must continuously process the data. In the event of a detected threat, it is necessary to send an alert accompanied by ancillary information such as location and type. To support environmental contamination control, the monitoring of climate-change-related phenomena, and environmental crime surveillance, the demand for long-endurance unmanned stratospheric platforms for Earth observation and telecommunications, namely HAPS, has surged in recent years. This need arises from their role as a viable supplementary option to satellites and Remotely Piloted Aircraft Systems (RPAS). HAPS can serve different scenarios such as broadcast/multicast HDTV signals, high-speed wireless access, navigation and position location systems, intelligent transportation systems, surveillance, remote sensing, traffic and environmental monitoring, and emergency communications. Typically operating at altitudes of around 18-20 km, well above commercial air traffic, these platforms benefit from relatively stable weather conditions and consistent temperatures, and wind speeds there are generally lower than at other altitudes.
These factors drive the interest in this segment of the stratosphere. The interest in HAPS also stems from their ability, as mentioned, to sustain their position for an extended duration over a specific area of interest, with typical endurance of several months, maximizing surveillance capability. Specifically, positioned at an altitude of 20 km, these platforms can observe a ground area with a diameter of approximately 70 km using optical sensors, and a 140 km range using radar. Moreover, the optical sensors that can be used achieve a fine GSD (ground sample distance): visible cameras can persistently reach up to 10 cm. In addition, flying at this altitude offers greater proximity to the Earth's surface, resulting in images with higher spatial resolution compared to satellite images. HAPS exploit the best features of both terrestrial and satellite communication systems in many ways, e.g., low propagation delays and hence stronger signals compared to satellites. The paper proposes an integration of the HAPS with imaging sensors that meets the proposed mission requirements and specifications for environmental monitoring. Being easily operable during take-off and landing, the platform can be used for various types of missions, equipped with appropriate payloads such as thermal sensors, visible spectrum sensors, radar and, if necessary, a data processing system for initial real-time analyses. The ability to perform data analysis directly on board a stratospheric platform represents a significant step forward in efficiency and information management. This approach enables real-time data processing and the transmission of only relevant information, such as the detection of environmental anomalies, thereby significantly reducing the volume of data transmitted to the ground.
Thanks to the capability to store and classify data based on specific requirements and characteristics, the onboard system can intelligently filter and organize information. This allows direct interaction with the onboard computer, enabling targeted requests to retrieve only the data of interest and eliminating the need to download and analyse large amounts of unnecessary data. The key advantages of this solution include: reduction of the data transmission load to the ground station, optimizing network resources and energy consumption; improved analysis efficiency, through access to pre-filtered and categorised information; and increased responsiveness to operational needs, as it becomes possible to quickly access the most relevant data for specific situations.
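The footprint figures quoted above (a roughly 70 km optical coverage diameter from 20 km altitude) follow from simple viewing geometry. The sketch below assumes a minimum usable elevation angle of about 30 degrees, which is an illustrative assumption rather than a figure stated in the abstract:

```python
import math

def coverage_diameter_km(altitude_km, min_elevation_deg):
    """Ground footprint diameter for a platform at `altitude_km`, assuming
    targets can be usefully imaged down to `min_elevation_deg` above the
    local horizon (flat-Earth approximation, adequate at these scales)."""
    radius = altitude_km / math.tan(math.radians(min_elevation_deg))
    return 2.0 * radius

# A 20 km platform with an assumed ~30 deg minimum elevation sees a footprint
# of roughly 70 km across, consistent with the figure quoted above.
d = coverage_diameter_km(20.0, 30.0)
```

Lowering the assumed minimum elevation enlarges the footprint rapidly (the radar range of 140 km quoted above implies viewing much closer to the horizon), at the cost of longer slant ranges and coarser effective ground resolution.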
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: StratObserver

Authors: Elisabeth Langlois, Joe Cotti
Affiliations: Airbus
Strat-Observer, developed by Airbus, is a new generation of Earth observation services based on eco-friendly High-Altitude Platform Stations (HAPS) such as the Zephyr and stratospheric balloons. Operating at altitudes of approximately 20 km, these platforms offer high-resolution optical imagery in the 15 cm class, providing continuous and persistent monitoring over large areas. Strat-Observer is platform agnostic, meaning it can be integrated into a Zephyr, a balloon, or any other HAPS platform as soon as it becomes ready to fly. Currently, Strat-Observer serves a variety of critical use cases, including civil security, military surveillance, maritime surveillance, disaster management, environmental protection, and more. The system's payload includes state-of-the-art Earth observation sensors and an Automatic Identification System (AIS) sensor for maritime use cases. So far, Airbus has reached a record 64 days of uninterrupted flight and 1500+ hours of stratospheric EO capture time. Strat-Observer is on track for its first European demonstration flight in 2025, featuring its advanced Earth observation payload. This milestone will mark the beginning of a new era in persistent, high-resolution monitoring. Building on this, the project plans to integrate an infrared sensor capable of delivering imagery with a resolution of up to 70 cm. This addition will address new use cases that require night-time operation or the monitoring of thermal signatures. Strat-Observer is ready to revolutionize the field of Earth observation by providing a unique, complementary solution to existing EO satellites. Its combination of high-altitude persistence, precision, and flexibility makes it an invaluable tool for addressing a wide range of challenges, from defense and civil security to environmental protection and disaster response.
By integrating advanced payloads, including an upcoming infrared sensor, Strat-Observer ensures that critical information is delivered where and when it is needed most. Strat-Observer not only improves current Earth observation capabilities but also paves the way for future innovations. With its eco-friendly design and platform-agnostic approach, Strat-Observer is set to play a vital role in ensuring a safer, more secure, and better-informed future for Europe and beyond. The Strat-Observer framework marks the beginning of a new era in bridging the gap between space and terrestrial systems, providing real-time Earth observation. Its scalability and rapid payload CAL/VAL enable agile and precise deployments. By linking with ground segments, it provides a testing platform for near-space technologies before launch, accelerating space innovation.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: The STRATELEC (STRatéole-2 ATmospheric ELECtricity) Project

Authors: Eric Defer, Serge Soula, Sébastien Celestin, Yanis Hazem, François Trompier, Ivana Kolmašová, Ondrej Santolík, Radek Lán, Jean Jacques Berthelier, Elena Seran, Michel Godefroy, Albert Hertzog, Stéphanie Venel
Affiliations: LAERO, LPC2E, IRSN-LDRI, Institute of Atmospheric Physics, Czech Academy of Sciences, LATMOS, LMD, CNES
About 45 lightning flashes occur per second around the Earth, with a predominant distribution over the continents and along the inter-tropical band. While different types of Transient Luminous Events (TLEs) induced by lightning flashes can be produced above thunderstorms, Terrestrial Gamma Ray Flashes (TGFs) are bursts of high-energy photons originating from the Earth’s atmosphere in association with thunderstorm activity, with a great majority of TGFs occurring in the inter-tropical region. In addition to those radiation bursts, another type of high-energy emission, so-called gamma ray glows, has been observed inside thunderstorms, corresponding to significant enhancements of background radiation that last for more than a few seconds. All these connected phenomena remain to be documented, both remotely and in situ. Balloon-borne missions offer the required in-situ, close-range, high-altitude measurements of the ambient electrostatic field, conductivity, TGF radiation and lightning occurrence for a better understanding and modeling of these complex phenomena and of their effects on the Earth's atmosphere and the global atmospheric electrical circuit.
The STRATELEC (STRatéole-2 ATmospheric ELECtricity) project (Defer et al., 2022), funded by CNES, aims at deploying within the Stratéole-2 framework (Hertzog and Plougonven, 2020) new atmospheric electricity instrumentation on several stratospheric balloons to: i) Document the electrical state of the atmosphere and the production of high-energy radiation through in-situ and remote sensing measurements, to reach better understanding and better modeling capabilities of the processes occurring during thunderstorms; ii) Identify state-of-the-art and emerging technologies to populate the STRATELEC instrumentation package with new sensors, with a view to their operation on stratospheric balloons, high altitude aircraft and even low-level drones, and eventually to propose new balloon and/or space mission concepts; iii) Contribute additional scientific returns to any space mission dedicated to lightning detection (e.g. MTG-LI, GOES-GLM) and, more generally, to the study of convection in the Tropics and of electrodynamic couplings in the terrestrial atmosphere-ionosphere-magnetosphere system. We will first recall the scientific objectives of the STRATELEC project. Then we will provide an update on the different scientific and technical activities, including the development and testing of STRATELEC instruments. Finally, we will discuss the way forward for the upcoming and final Stratéole-2 campaign, as well as some initial thoughts on future balloon campaigns. References: Hertzog A., and R. Plougonven (2020), Stratéole-2 : des ballons longue durée pour étudier la tropopause tropicale, La Météorologie, n° 108, février 2020. Defer, E., et al. (2022), An Overview of the STRATELEC (STRatéole-2 ATmospheric ELECtricity) Project, 25th ESA Symposium on European Rocket and Balloon Programmes and Related Research, 1-5 May 2022, Biarritz, France.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Prospects of the µHAPS Miniaturised Stratospheric Platform for Remote Sensing and Environmental Monitoring in Light of Recent Flight Demonstrations

Authors: Salvo Marcuccio, Matteo Gemignani, Irene Marsili, Alessandro Filippeschi, Marco Gannetti, Andrea Arcarisi
Affiliations: Università di Pisa
The Space Systems Laboratory of the Department of Civil and Industrial Engineering at the University of Pisa (UniPi) has been active in the field of stratospheric flight platform design, construction and operation since 2018. A total of 34 autonomous near-space missions were carried out in 2019-24, many of them in support of scientific investigations and technology testing by academic and industrial entities. Development efforts are now focused on the advancement of µHAPS, a small platform intended to support a payload of up to 15 kg in the stratosphere for several days. The platform relies on the use of low-cost latex meteorological balloons for lift and a number of mission-enabling subsystems specifically developed at UniPi, including long range telemetry and telecommand, photovoltaic power generation, and guided payload re-entry and recovery. In addition to potential uses for technology demonstration and validation in the near-space environment (e.g., solar cells, sensors) and for science (e.g., astronomy, physics of the atmosphere), µHAPS is particularly well suited for Earth Observation. Optical multispectral sensors, or possibly a small synthetic aperture radar, derived from current microsatellite technology can be operated from the platform, thanks to the availability of substantial electric power onboard (25 W to the payload). Two critical functions of the µHAPS platform have recently been successfully demonstrated in flight: azimuth pointing of the payload gondola and long-duration persistence in the stratosphere. Neither of these functions is normally provided by standard weather balloon flight trains; they are otherwise found only on large zero- or super-pressure balloon platforms, with costs and complexity orders of magnitude larger. Attitude stabilization allows optical payloads to point in the desired direction, high-gain antennas to remain in view of a ground station, the solar array to remain pointed toward the Sun, etc.
Long-term stay (three to five days) within a fixed-range area around a point of interest is achieved through altitude control maneuvers guided by a dedicated trajectory controller based on reinforcement learning. In this work, we present the results of our recent flight activities, including successful demonstration of both technologies in real missions, and discuss the expected performance and potential use cases of the µHAPS platform for remote sensing and environmental monitoring applications.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Development of a Light Stratospheric Balloon Platform for Earth Observation, Science and more

Authors: Bence Goczan
Affiliations: StratoSpace
High altitude pseudo-satellites (HAPS) can be a great addition to ongoing and future Earth Observation missions. HAPS can provide high-resolution data of an area of interest in a fast or frequent manner while their operation cost can be kept low. The advantage of HAPS lies in their flexibility: the development time and cost are just a fraction of those of a satellite, which makes it possible to use the latest and ever-developing technology on board. This can be utilized not only as a test-bed for new Earth Observation devices, but also to conduct missions in its own right. Currently, the biggest challenges for HAPS are longevity and station-keeping, especially with high altitude balloons. At StratoSpace, we are developing a lightweight balloon platform that is able to conduct long-duration missions with trajectory control. The goal is to provide a reliable balloon service with short preparation time for smaller payloads. Our platform will be best suited for Earth Observation missions and for supporting ground operations in emergency situations, such as forest fires. The development is planned in two phases: first enhancing longevity, then achieving station-keeping. We are currently in the first phase, in which we work with small zero-pressure balloons to conduct long-duration test flights and improve our navigation and power systems. These flights will provide a test-bed for the trajectory control and station-keeping technology currently under development. In the second phase, special double-envelope super-pressure balloons will be used to perfect the on-board navigation systems and to develop an autonomous ground support system - both hardware and software - to control balloons individually or multiple balloons in a swarm. To achieve precise navigation in the stratosphere, AI will be used to improve the resolution of weather data and combine it with current measurements and historical data.
To control the balloon's trajectory, continuous communication is needed between the balloon and mission control, which can only be achieved by over-the-horizon radio communication. For this purpose we plan to use already established communication systems - such as LoRaWAN and traditional sat-com providers - and to explore newly emerging solutions. We have already launched a zero-pressure balloon, the first in Hungary. For this flight, the balloon envelope was designed and manufactured by our team, and the balloon carried our demonstration multispectral Earth Observation camera. With this current platform we can already provide a basic service to host experiments for third parties, or conduct basic Earth Observation missions with our own demonstration payload. In the meantime, we are looking for payload developers, both in the field of Earth Observation and communication systems, who would like to collaborate with us to build Europe's next-generation advanced light stratospheric balloon system.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Overview of the German Aerospace Center's high altitude platform HAP-alpha and its payloads

Authors: Florian Nikodem, Jörg Brauchle, Veronika Gstaiger, Andreas Bierig
Affiliations: German Aerospace Center e.V.
Since 2018, the ongoing high altitude platform project “HAP-alpha” of the German Aerospace Center (DLR) has combined the domains of aeronautics and space in a single research and engineering project. 16 DLR institutes and facilities, under the lead of the Institute of Flight Systems, combine their expertise to realize the HAP-alpha high altitude platform. With this project DLR pursues the following four main research goals in terms of solar-powered, unmanned high altitude platforms: 1) The development of novel system concepts and technologies for the realization of robust and cost-efficient high altitude solar platforms. 2) Development and testing of high-performance, lightweight sensor systems for Earth observation. 3) Development of operational strategies and mission scenarios to demonstrate the performance of high altitude solar platforms. 4) Demonstration and flight testing of novel technologies, processes and sensor systems under real environmental conditions. All of these goals lead to the design and build of a solar-powered high altitude technology demonstrator, called HAP-alpha. With its wingspan of 27 meters, its weight of 138 kilograms and a payload capacity of 5 kilograms, HAP-alpha is designed as a fully functional high altitude platform able to reach the lower stratosphere at around 20 kilometers. At the time of the ESA Living Planet Symposium, HAP-alpha is ready for its flight readiness review and its maiden flight at low altitude soon after. The presentation gives an overview of the current project status, just a few weeks before the maiden flight, of the HAP-alpha system including the ground station, and of the payload systems MACS-HAP and HAPSAR. MACS-HAP is a high-resolution optical camera system with a ground resolution of 15 cm at 20 kilometers altitude and integrated AI-based image analysis processes.
HAPSAR is an ultra-lightweight, high-performance synthetic aperture radar with a ground resolution of 50 cm x 50 cm at 20 kilometers altitude. Both payloads are developed within the HAP-alpha project alongside the platform and combine extreme lightweight design (both are below 5 kilograms) with high performance under lower stratospheric conditions. The presentation closes with a discussion of the potential of a solar electric high altitude platform such as HAP-alpha, and what it offers for lower-stratosphere scientific experiments.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.85/1.86)

Session: A.09.10 Interactions and feedback between Ice, Land and Ocean in Polar Regions

Subglacial conditions are a contributing factor to ice sheet dynamics as bedrock properties control basal gliding. However, knowledge of the Solid Earth-Ice (and Ocean) interface comes with large uncertainties. This applies to the margins of the ice sheets, where margin topography is critical for accurate estimates of current Antarctic ice discharge, while bathymetry under ice shelves, bed topography along major outlet glaciers, and geology and subglacial hydrology near the coast are critical factors for predicting the future of the Antarctic Ice Sheet. Additionally, constraining bathymetry under ice shelves is critical to determine their response to ocean forcing, including the incursion of warmer Circumpolar Deep Water.
Bedrock properties also affect the interior of the ice-sheet and here geothermal heat flow is a key parameter, underdetermined both in Greenland and Antarctica. This relates to the thermal and mechanical structure of the solid earth, which exerts a primary control on the response of the polar regions to ice mass changes. Glacial Isostatic Adjustment of the bedrock beneath the ice sheets affects the bed slope of the ice sheet and the grounding line of marine-terminating outlet glaciers.
Satellite data are essential in monitoring the current conditions and for understanding the feedbacks in order to predict the future evolution of the Polar Regions.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Development of a GNSS-R Module in SMRT for Cryosphere Studies

Authors: Justin Murfitt, Yusof Ghiasi, Claude Duguay, Ghislain Picard, Pierre Zeiger, Martin Wearing
Affiliations: H2O Geomatics, Institut des Géosciences de l’Environnement-Université Grenoble Alpes, European Space Agency
The cryosphere, including lake ice, sea ice, and ice sheets, has been underexplored in GNSS-R (Global Navigation Satellite System Reflectometry) modelling, despite its critical role in Earth's climate system. Current GNSS-R models primarily focus on land and ocean applications, with limited consideration of the unique multilayered structures of snow and ice. This project addresses these limitations by developing a GNSS-R module within the Snow Microwave Radiative Transfer (SMRT) model, which provides an ideal framework for cryosphere applications. Unlike standalone GNSS-R models, SMRT enables user-defined multilayer snow and ice configurations with minimal modifications to its existing structure, allowing seamless integration of GNSS-R alongside passive microwave, imaging SAR, and radar altimetry simulations. This synergy leverages the validated components of SMRT, ensuring high accuracy and reliability while addressing the lack of comprehensive cryosphere-focused GNSS-R models. By accommodating multilayer structures, incorporating surface roughness, and addressing ice-water transitions, this module offers a significant advancement over the state-of-the-art GNSS-R models for cryospheric studies. This presentation will provide an overview of this new ESA-sponsored activity and early results. The module under development will be validated using specific tracks from existing missions such as CYGNSS, TechDemoSat-1, Spire, and SMAP-R. To ensure that the module can be used by the broader scientific community and to test the full capabilities of the module within SMRT, it will be validated across different cryosphere environments, including sea ice, ice sheets, and lake ice. Locations for validation are selected based on the availability of in situ data (snow depth, microstructure conditions, medium temperatures, and ice thickness) for parameterizing SMRT. 
Preliminary modelling results for lake ice highlight GNSS-R’s ability to characterize lake surface conditions with high sensitivity to surface roughness and ice-water transitions. Preliminary simulations for conventional GNSS-R measurements reveal that open water reflectivity varies significantly (up to 30 dB) due to changes in surface roughness, while lake ice reflectivity remains stable (<5 dB) and more consistent. By enabling accurate simulations of GNSS-R products for snow and ice, this project will support global climate observation initiatives and prepare for the exploitation of measurements from future satellite missions, such as ESA’s HydroGNSS. The developed module will be released as open-source software, fostering collaboration and innovation in cryosphere science and Earth system monitoring.
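The reflectivity contrast behind those preliminary results can be illustrated with a minimal Fresnel sketch at nadir. This is a hand-rolled illustration, not code from the SMRT module; the L-band permittivity values are representative assumptions:

```python
import numpy as np

# Nadir power reflectivity of a smooth half-space at L-band (~1.5 GHz).
# Permittivities below are representative assumptions, not SMRT values.
EPS = {"fresh water": 80 - 15j, "lake ice": 3.2 - 0.001j}

def nadir_reflectivity_db(eps: complex) -> float:
    """Fresnel power reflectivity |r|^2 at normal incidence, in dB."""
    n = np.sqrt(eps)              # complex refractive index
    r = (1 - n) / (1 + n)         # Fresnel amplitude coefficient
    return 10 * np.log10(abs(r) ** 2)

for name, eps in EPS.items():
    print(f"{name:12s}: {nadir_reflectivity_db(eps):6.1f} dB")
```

Even for smooth interfaces the two media differ by roughly 9 dB; the much larger (up to 30 dB) variability reported for open water comes from surface roughness, which this sketch deliberately ignores.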
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Optimal sea-ice concentration estimation from atmospheric assimilation of surface-sensitive radiances from microwave imagers

Authors: Dr. Sebastien Massart
Affiliations: European Centre for Medium-Range Weather Forecasts (ECMWF)
Data assimilation effectively combines remote sensing and in-situ observations with numerical models to optimize environmental data. For example, it is commonly used to estimate sea-ice concentration (SIC) by integrating various observational sources with ocean models. These SIC estimates serve as boundary conditions in Numerical Weather Prediction (NWP) systems. However, NWP systems have their own assimilation processes and may incorporate data from the same onboard instruments used indirectly in third-party SIC retrievals, leading to sub-optimality. To address this, we propose an innovative approach that maximizes the value of observations by jointly estimating atmospheric and surface states through the assimilation of surface-sensitive radiances from microwave imagers over sea-ice. This concept inspired the sea-ice developments of Geer (2024), and combines machine learning and data assimilation, allowing us to link the observed radiance to the surface state through a radiative transfer model, thereby enabling SIC analysis. We offer two variants of SIC analysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS). The first involves extending the data assimilation control vector with observation-space variables at observation locations. The second approach extends the control vector with a model-space field, which is interpolated at observation locations. Notably, the observation-space approach incorporates a background value defined in observation space using information from the 10V channel. This enables us to assimilate radiances from instruments such as the Advanced Microwave Scanning Radiometer 2 (AMSR2) and the GPM Microwave Imager (GMI). In contrast, the model-space variant does not use the 10V channel-based background information, enabling the use of imagers that lack this channel; for instance, the Special Sensor Microwave Imager/Sounder (SSMIS) can be employed.
We will present the SIC estimations from the two approaches and how they differ from the third-party retrievals in monitoring sea-ice. We will also present how these SIC estimations impact the weather forecast. Lastly, we will explore how the estimated sea-ice information can be transmitted to the ocean when coupled to the atmosphere.
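The idea of extending the control vector so that SIC is analysed jointly with the atmosphere can be caricatured as a toy two-variable linear analysis. All numbers below are invented, and the linear operator is a stand-in for the radiative transfer model used in the IFS:

```python
import numpy as np

# Toy control vector x = [column water vapour (kg m-2), sea-ice concentration].
x_b = np.array([10.0, 0.5])              # background state (invented)
B = np.diag([4.0, 0.04])                 # background-error covariance

# Linearised observation operator for two imager channels: TB = H @ x + const.
# Coefficients are invented stand-ins for a radiative transfer model.
H = np.array([[1.5,  80.0],
              [0.5, 120.0]])
const = np.array([150.0, 100.0])
R = np.diag([1.0, 1.0])                  # observation-error covariance
y = np.array([200.0, 170.0])             # observed brightness temperatures (K)

# Best linear unbiased analysis: x_a = x_b + K (y - H x_b - const),
# with gain K = B H^T (H B H^T + R)^-1.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b - const)
print(f"analysed SIC: {x_a[1]:.3f}")
```

Because both channels are sensitive to SIC and to the atmospheric variable, the joint analysis updates them together rather than treating a separately retrieved SIC as independent data, which is the sub-optimality the abstract describes.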
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Antarctic Subglacial Lake Systems Underlain By Contrasting Geothermal Heat Flux Provinces

Authors: Fausto Ferraccioli, Jonathan Ford, Pietro Latorraca, Egidio Armadillo, Joerg Ebbing, Graeme Eagles, Karsten Gohl, Rene Forsberg, Ben Mather, Javier Fullea, Chris Green, Massimo Verdoya
Affiliations: National Institute of Oceanography and Applied Geophysics, University of Genoa, DISTAV, University of Kiel, Alfred Wegener Institute Helmholtz-Center for Polar and Marine Research, Technical University of Denmark, The University of Sydney, Universidad Complutense de Madrid, University of Leeds
Geothermal heat flux (GHF) is ill-constrained in Antarctica compared to other continents. This hampers interdisciplinary efforts to study solid earth influences on Antarctic subglacial hydrology and ice sheet dynamics. It also limits our understanding of the interplays between spatially variable subglacial crust and lithosphere properties, contrasting tectonic provinces and Antarctic thermal characteristics. Here we analyse a new aeromagnetic anomaly compilation for Antarctica conformed with SWARM satellite magnetic data using a variety of spectral approaches to estimate Curie depths as a proxy for GHF heterogeneity. We contextualise our Curie depth results using several independent geophysical constraints on crustal and lithosphere thickness derived from seismological, airborne gravity and satellite gravity modelling, and heat production estimates from gravity and geology. We also examine the relation between GHF and effective elastic thickness derived from topography and gravity. We find elevated GHF associated with parts of the thinned and weak crust and lithosphere of the West Antarctic Rift System (WARS), particularly beneath the rapidly changing Thwaites and Pine Island sectors of the West Antarctic Ice Sheet and along the edge of the Marie Byrd Land block. Our results imply greater variability in GHF beneath the central part of the WARS compared to previous estimates. This translates into a greater degree of heterogeneity in thermal basal boundary conditions for the different clusters of active subglacial lakes underlying the ice streams flowing into the Ross and Amundsen Sea embayments. In East Antarctica, we also retrieve elevated GHF associated with some of the active lake clusters under the Byrd glacier catchment, inland of the Transantarctic Mountains. Both the crust and the lithosphere are much thicker here compared to the WARS and GHF anomalies are likely to arise from highly radiogenic Precambrian basement terranes. 
Relatively lower GHF values are retrieved for the active and static lakes of the northern Wilkes Subglacial Basin (NWSB). There is little consensus on crustal thickness and the tectonic origin of the NWSB, given that gravity models predict thin crust whereas seismology suggests thick crust. Regardless, aeromagnetic signatures over the NWSB resemble those over formerly adjacent, highly radiogenic Precambrian terranes in southeastern Australia. The key question therefore is whether spectral approaches have sufficient resolution to resolve radiogenic terranes and associated GHF anomalies here. Both the Dome C and Dome A lake districts appear to have elevated GHF, with implications for hydrology and for ongoing efforts to retrieve > 1.5 Myr paleoclimate records via drilling.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Antarctic RINGS: Probing Coastal Margins to Unveil the Past, Present, and Future of Ice Sheets

Authors: Kenichi Matsuoka, Dr. Tom Jordan, Geir Moholdt, Dr. Felicity McCormack, Prof. Kirsty Tinto, Prof. Cui Xiangbin, Dr. Fausto Ferraccioli, Prof. Rene Forsberg
Affiliations: Norwegian Polar Institute, British Antarctic Survey, Monash University, Columbia University, Polar Research Institute of China, The National Institute of Oceanography and Applied Geophysics, Technical University of Denmark
RINGS is an emerging and urgently needed international collaborative effort to conduct airborne geophysics and other field surveys of the Antarctic Ice Sheet margin, and related research using field data, satellite data and models, to better estimate the Antarctic contribution to global sea-level rise, both now and in the future. According to the latest IPCC report, the Antarctic Ice Sheet accounts for nearly 50% of the ensemble uncertainty in projections of net sea-level rise by 2100, regardless of emission scenarios. This uncertainty is largely due to our poor understanding of the interconnected, complex system - particularly with respect to tipping points - of the Antarctic ice margin, where the ice sheet meets the ocean. The first international RINGS workshop in June 2022 identified scientific priorities along the primary ring at the current grounding line, the seaward ring across ice shelves and ice rises, and the landward ring a few tens of kilometres inland from the current grounding line, where the future grounding line would be located. To further develop the scientific rationale for the high-priority science topics, RINGS has analysed a range of published data, such as the Bedmap3 FAIR database, IBCSO version 2, modelled surface mass balance reconciliation, and results from ISMIP6. Here, we present an overview of these new results to clarify the scientific rationale for further bed topography surveys in the ice sheet margin.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Daily, global freshwater influx measurements to estimate water input to Arctic Ocean

Authors: Zsofia Kugler, Anna Podkowa, Dr. Robert Brakenridge G, Dr. Son V Nghiem
Affiliations: Budapest University Of Technology And Economics, Warsaw University of Technology, University of Colorado, Boulder, NASA, Jet Propulsion Laboratory
Climate change is impacting our everyday life and is altering the magnitude and frequency of extreme weather-related events such as precipitation, floods, and drought. For this reason there is a great demand to study these phenomena in a changing environment at a large, global spatial scale. In the past, we developed a protocol for river gauge measurements using satellite passive microwave radiometer (PMR) data and applied it over multiple river watersheds (Brakenridge, 2023; Kugler, 2024). Exploiting the high sensitivity of microwave emission to the presence of water, we use low-frequency L-band (1-2 GHz) passive microwave radiometry to monitor rivers and reservoirs and to compare across different microwave frequencies and polarization configurations. We successfully applied the methodology to both ESA Soil Moisture and Ocean Salinity (SMOS) and NASA Soil Moisture Active Passive (SMAP) sensor data, reaching high correlation (r²: 0.6-0.8) with in-situ discharge measurements over various river basins in different climates. Our robust and innovative PMR approach for measuring river flow on a daily, global scale includes both major and medium-sized Arctic rivers. We have tested our orbital gauge approach in different environments and over several climate zones, including both the highly vegetated tropics and the high-latitude Arctic, with good correlation to ground measurements (Podkowa et al., 2023). We were thereby able to describe trends in the timing of ice cover along numerous rivers feeding the Arctic Ocean. In this study we aim to better understand river-sea interaction and measure how continental freshwater influx variations impact sea salinity using available Copernicus products. We will investigate spatial patterns of freshwater plumes drained into the Arctic Ocean over different years and seasons. This will fill the gap of missing in-situ measurements in the harsh Arctic region.
Moreover, it delivers an innovative ESA satellite-based observation technique for monitoring river-sea interaction as a crucial climate change indicator over high latitudes. References: Kugler, Z., S. V. Nghiem, and G. R. Brakenridge (2024), SMAP Passive Microwave Radiometer for Global River Flow Monitoring, IEEE Transactions on Geoscience and Remote Sensing. Brakenridge, G. R., S. V. Nghiem, and Z. Kugler (2023), Passive microwave radiometry at different frequency bands for river discharge retrievals, Earth and Space Science, 10(8), e2023EA002859. Podkowa, A., Z. Kugler, G. R. Brakenridge, and S. V. Nghiem (2023), Ice Freeze-up and Break-up in Arctic Rivers Observed with Satellite L-band Passive Microwave Data from 2010 to 2020, Water Resources Research.
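The orbital-gauge principle, in which a "wet" measurement pixel cools relative to a nearby dry calibration pixel as the water surface grows, can be sketched on synthetic data. This is an illustrative toy with invented numbers, not the published SMOS/SMAP processing:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily in-situ discharge for one season (m^3/s) - invented data.
t = np.arange(120)
discharge = 800 + 600 * np.exp(-((t - 60) / 20.0) ** 2)   # a flood wave

# Brightness temperatures (K): the measurement pixel cools as its water
# fraction rises; the nearby dry calibration pixel does not.
tb_dry = 270 + rng.normal(0, 1.0, t.size)
water_fraction = 0.05 + 0.10 * (discharge - discharge.min()) / np.ptp(discharge)
tb_wet = (1 - water_fraction) * 270 + water_fraction * 160 \
         + rng.normal(0, 1.0, t.size)

# Dry/wet brightness temperature ratio (a simple variant of a
# calibration/measurement pixel ratio): rises with water extent.
signal = tb_dry / tb_wet

r = np.corrcoef(signal, discharge)[0, 1]
print(f"Pearson r = {r:.2f}, r^2 = {r*r:.2f}")
```

With these invented parameters the ratio tracks the flood wave closely; real retrievals face mixed pixels, vegetation, and ice cover, which is why the published correlations sit around r² of 0.6-0.8 rather than near 1.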
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Investigating Changes in Lake Ice Cover and Timing Across the Northern Hemisphere Using the ESA CCI Lakes Dataset

Authors: Justin Murfitt, Samuel Johnston, Claude Duguay
Affiliations: H2O Geomatics
Lake ice plays an important role in the northern hemisphere. The presence and absence of ice cover can impact local precipitation patterns through lake effect snow events and contribute to the local energy balance. Additionally, lake ice cover (LIC) has socioeconomic implications, providing transportation routes for communities, enabling the establishment of ice roads, and supporting recreation such as snowmobiling and ice fishing. Due to its broad importance, ice cover has been identified as a thematic product of lakes as an essential climate variable (ECV) by the Global Climate Observation System (GCOS), meaning that LIC can provide us with an understanding of how the Earth’s systems are changing with climate change. Up until the 1980s, there was a significant number of in situ observations of lake ice; however, this number has declined over the past four decades. Remote sensing provides an alternative to these ground measurements, and with lengthening records of optical imagery from sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS), there are now 20+ years of satellite observations available for LIC analysis. This study aims to leverage these observations through the LIC product available through the ESA CCI+ Lakes project. The ice cover product is produced using a random forest model that is trained on MODIS top-of-the-atmosphere data. Observations will be assessed for 1500+ lakes across the globe to monitor how ice cover has changed between 2000 and 2023, with a focus on the maximum ice cover extent and key phenology dates (freeze onset and water clear of ice). Additionally, the CCI product will be compared to existing ice cover products such as the Interactive Multisensor Snow and Ice Mapping System (IMS) to ensure consistency, providing support for the use of the data for long-term climate monitoring.
Trends in the ice cover variables will also be evaluated in connection with changes in air temperature and large-scale atmospheric and oceanic oscillations (i.e. teleconnections). This study will demonstrate an application of the CCI Lakes dataset and provide context for recent changes in global lake ice patterns.
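The random-forest classification step can be sketched on invented two-class reflectance data. The actual CCI classifier is trained on real MODIS top-of-the-atmosphere bands; the feature values and class separability below are assumptions for illustration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented training samples standing in for MODIS top-of-atmosphere
# reflectances: ice pixels are bright, open-water pixels are dark.
n = 1000
ice   = rng.normal([0.60, 0.50, 0.40], 0.08, (n, 3))   # bands ~[red, nir, swir]
water = rng.normal([0.08, 0.05, 0.02], 0.03, (n, 3))
X = np.vstack([ice, water])
y = np.array([1] * n + [0] * n)                        # 1 = ice, 0 = open water

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```

On cleanly separated synthetic classes the forest is near-perfect; the hard part in practice is cloud contamination and thin-ice spectra, which is exactly why cross-comparison against products such as IMS matters.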
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G2)

Session: B.02.09 The Role of Spaceborne Imaging Spectroscopy and Drone-based Calibration Data for Integrated Freshwater and Coastal Monitoring

Exponential growth in EO data volumes, advancements in drone technology, and the variety of data accessible via cloud-based platforms, present new opportunities to develop novel integrated systems that leverage these capabilities. In particular, the advancement in new generation spaceborne imaging spectrometers, such as PRISMA, ENMAP, DESIS, EMIT and the upcoming missions such as ESA’s CHIME and NASA’s SBG, can significantly improve applications such as water quality monitoring, especially when also combined with near real-time, in-situ water quality data streams, drone-based measurements and water quality forecasting tools.
This session will bring together water quality remote sensing scientists, modellers and data analytics experts, to showcase and discuss approaches for use of various types of remote sensing data, including imaging spectroscopy and drone imagery, for development of a fully integrated ‘ground-to-space’ data integration system, that support the production of ‘decision-ready’ information for water managers and communities that are dealing with increasing challenges in inland and coastal water quality world-wide.
The goal of the session will be to focus on the benefits and challenges of integrating multiple sources of data (e.g., different Earth observation (EO) sources such as optical/radar, combining in-situ and/or drone measurements with EO datasets, or EO with modelling), rather than focusing on only one EO data source or one approach to produce actionable water quality products.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G2)

Presentation: Advancing Satellite Calibration and Validation (Cal/Val) with Drone-Based Aquatic Measurements

Authors: Sindy Sterckx, Liesbeth De Keukelaere, Ils Reusen, Robrecht Moelans, Agnieszka Bialek, Niall Origo
Affiliations: Vito, NPL
Drones have demonstrated their effectiveness in local environmental monitoring and are now recognized for their potential to advance Satellite Calibration and Validation (Cal/Val). Unlike traditional in-situ data collection from fixed platforms such as AERONET or HYPERNETS, drones offer the unique ability to assess spatial variability surrounding fixed stations, enhancing the representativeness of point measurements for satellite pixel resolution. Additionally, drones can conduct transects from the shoreline to the open sea, providing critical data for validating atmospheric correction algorithms, such as adjacency correction algorithms. This comprehensive approach improves the accuracy and reliability of satellite Cal/Val processes. The Cal/Val of satellite products requires high-quality in situ measurements, referred to as Fiducial Reference Measurements (FRM). Achieving FRM status for drone-based data over aquatic environments presents challenges, including establishing traceability to the International System of Units (SI) and providing robust uncertainty budgets for all steps of the measurement process. It involves rigorous pre-flight and in-flight calibration, careful flight planning to minimize glint effects during data acquisition, and accurately processing data from digital numbers to water reflectance. To move toward FRM data, the FRM4DRONES AQUATIC project seeks to enhance the reliability and accuracy of drone-based measurements over water through a community-based and collective effort. The project’s goal is to take the first steps toward establishing FRM by bringing together the community involved in drone-based water Cal/Val to collect and discuss best practices. These efforts will pave the way for the integration of drones into satellite validation frameworks, ensuring that measurements are trusted and fit for purpose. 
At the conference, we will present the outcomes of these community consultations, highlight progress in identifying challenges and opportunities, and outline a roadmap for achieving FRM status for drone-based aquatic measurements. By advancing traceability, uncertainty quantification, and protocol standardization, this initiative aims to establish drones as a complementary and trusted tool for validating and enhancing satellite-derived aquatic products.
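The uncertainty budgets mentioned above are commonly built by combining independent per-step uncertainty components in quadrature, following the GUM approach. The sketch below is illustrative only; the component names and values are hypothetical and are not taken from the FRM4DRONES project.

```python
import math

def combined_uncertainty(components):
    """Combine independent relative uncertainty components (in %) in quadrature,
    as in GUM-style uncertainty budgets used for FRM qualification."""
    return math.sqrt(sum(u ** 2 for u in components))

# Hypothetical per-step relative uncertainties (%) for a drone water-reflectance
# retrieval: radiometric calibration, residual glint, spatial heterogeneity.
budget = {"calibration": 2.0, "glint": 3.0, "spatial": 1.5}
u_total = combined_uncertainty(budget.values())
print(f"combined relative uncertainty: {u_total:.2f}%")  # 3.91%
```

Note that quadrature addition is only valid when the components are uncorrelated; correlated terms (e.g. calibration errors shared across flights) require the full covariance treatment.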
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G2)

Presentation: Life CYANOBLOOM – developing a holistic cyanobacteria warning system using hyperspectral and multi-spectral measurement techniques

Authors: Carole Lebreton, Kerstin Stelzer, Jorge Garcia, Jorrit Scholze, Dr Annelies Hommersom, MSc. Semhar Ghezehegn, Petra Philipson, Susanne Thulin, Álvaro Fanjul, Alfredo Llorente, Milena Lazic Tejeda, Juan Antonio Gascón
Affiliations: Brockmann Consult GmbH, Aclilma, Anbiotek, Brockmann Geomatics, Sistemas Genomicos, Water Insight
Cyanobacteria are microscopic organisms found naturally in all types of water. In nutrient-rich environments (high in phosphorus and nitrogen), cyanobacteria can multiply rapidly, creating blooms that can produce toxins (cyanotoxins), which affect water quality and can be harmful to humans, animals, and the environment. Meanwhile, EO has proven suitable for the detection of high-biomass algal blooms as well as cyanobacteria blooms, and water quality services based on satellite data, including automatically generated alert systems, have been established to serve environmental agencies. Although cyanobacteria can be detected by dedicated satellite sensors to a certain extent, it is not possible to estimate whether an algal bloom has turned into a toxic, and therefore harmful, algal bloom (HAB). Within the LIFE CYANOBLOOM project, the detection of cyanobacteria shall be improved by using hyperspectral data (from satellite and in situ). In addition, the potential of the detected species to produce toxins shall be investigated by genetic analysis. With this evolution, a risk management system for the early identification of harmful algal blooms will be built. The spatial information will be generated from multi-sensor satellite data, mainly from the PlanetScope SuperDove, Sentinel-2 MSI and EnMAP sensors. High-frequency measurements from a WISPstation will provide hyperspectral and detailed point information at sensitive locations, and, when a bloom is detected, genetic analysis will be applied to determine whether toxin genes are present. With this setup, an integrated system of measurements that trigger actions will be developed. The CYANOBLOOM method is developed and tested at four pilot sites in Spain, Sweden and the Netherlands.
However, basins used for drinking water (the two pilot sites in Spain) and lakes or lake areas used for recreational purposes (the pilot sites in Sweden and the Netherlands) tend not to be very large and can sometimes be at the edge of what can be monitored with the Sentinel satellites. Very High Resolution (VHR) sensors will enable the detection of small water bodies, and hyperspectral data (EnMAP) are included especially to trace differences between blooms. The current drawback of hyperspectral data is that acquisitions are not very frequent and that retrieval algorithms still need to be further developed. The WISPstation measurements therefore play a critical role in algorithm development, providing valuable hyperspectral water-leaving reflectance measurements. During summer 2024, satellite data acquisitions, WISPstation measurements and in situ sampling campaigns were carried out, and they will continue in 2025. We demonstrate here the set-up of the different measurements, the validation of the satellite data with WISPstation measurements, and the concept of the early warning system using the different measurement techniques.
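As a rough illustration of how a "system of measurements that trigger actions" can work, the sketch below maps pigment estimates for one site or pixel to an action tier. The thresholds, function and tier names are hypothetical; the operational CYANOBLOOM triggers will be derived from satellite, WISPstation and genetic measurements, not from these values.

```python
# Hypothetical alert thresholds (assumed values, not project specifications).
CHL_ALERT_UG_L = 12.0  # chlorophyll-a alert level in ug/L
PC_ALERT_UG_L = 5.0    # phycocyanin alert level in ug/L

def alert_level(chl_a: float, phycocyanin: float) -> str:
    """Map pigment concentration estimates for one site/pixel to an action tier."""
    if chl_a >= CHL_ALERT_UG_L and phycocyanin >= PC_ALERT_UG_L:
        return "trigger-genetic-sampling"  # bloom likely cyanobacterial
    if chl_a >= CHL_ALERT_UG_L:
        return "watch"  # high biomass, bloom type unconfirmed
    return "ok"

print(alert_level(20.0, 8.0))  # trigger-genetic-sampling
```

The point of the phycocyanin check is that chlorophyll-a alone flags biomass, not bloom type; only pigments specific to cyanobacteria (and ultimately the genetic analysis) justify escalation.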
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G2)

Presentation: Evaluation of EnMAP water-leaving reflectance products from three atmospheric correction methods by validation with hyperspectral and multispectral in situ measurements from inland and coastal waters around the world

Authors: Prof. Dr. Astrid Bracher, Avotramalala Najoro Randrianalisoa, Dr. Mariana A. Soppa, Prof. Dr. Sabine Chabrillat, Dr. Daniel Scheffler, Dr. François Steinmetz, Dr. Quinten Vanhellemont, Dr. Vittorio Brando, Dr. Mariano Bresciani, Dr. Simone Colelli, Dr. Ana Dogliotti, Dr. David Doxaran, Dr. Peter Gege, Dr. Claudia Giardino, Francesca Ortenzio, Kevin Ruddick, Dr. Thomas Schroeder, Dieter Vansteenwegen, Dr. Maxmillian Langheinrich, Emiliano Carmona, Dr. Martin Bachmann, Dr. Miguel Pato
Affiliations: Alfred-Wegener-Institute Helmholtz Center For Polar And Marine Research (awi), Institute of Environmental Physics University Bremen, GFZ German Research Centre for Geosciences, Leibniz University Hannover, Institute of Soil Science, HYGEOS, Natural Environments, Royal Belgian Institute of Natural Sciences, Consiglio Nazionale delle Ricerche, Istituto di Scienze Marine (ISMAR-CNR), Consiglio Nazionale delle Ricerche, Istituto per il Rilevamento Elettromagnetico dell'Ambiente (IREA-CNR), Instituto de Astronomía y Física del Espacio (IAFE) CONICET/UBA, Laboratoire d’Océanographie de Villefranche, CNRS-Sorbonne Université, German Aerospace Center (DLR), Remote Sensing Technology Institute, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Environment, Flanders Marine Institute, Earth Observation Center (EOC), German Aerospace Center (DLR)
The German hyperspectral satellite mission Environmental Mapping and Analysis Program (EnMAP) is designed to monitor and characterize the Earth's environment at high spatial resolution. By delivering information in many spectral bands on the radiation reflected from inland and coastal waters, EnMAP is envisioned to improve our knowledge of these ecosystems’ habitats, biodiversity, water quality and more. In order to extract this information from satellite images, the water-leaving reflectance must be derived from the radiance measured at the satellite sensor. Notably, the water-leaving radiance usually constitutes no more than 10% of the total radiance captured by the sensor. The remaining 90% of the signal primarily comprises atmospheric radiance and surface reflection, which must be robustly removed to avoid significant uncertainties in the water products. Recently, atmospheric correction processors have been extended to handle hyperspectral satellite data. In this work, we assess the quality of, and compare, the EnMAP water reflectance products generated by three atmospheric correction (AC) processors: (i) the physics-based Module Inversion and Processing System (MIP, EOMAP GmbH), which is used by the EnMAP ground segment to produce the L2A water product; (ii) the “polynomial based algorithm applied to MERIS” (Polymer, Hygeos), a spectral matching algorithm in which atmospheric and aquatic signals are fitted simultaneously using the full available VNIR spectrum; the EnMAP-Polymer data are processed with the EnMAP Processing Tool (EnPT) using its wrapper module ACwater within the EnMAP-Box QGIS plugin; and (iii) the “Atmospheric Correction for OLI” (Acolite, RBINS), which is based on the Dark Spectrum Fitting algorithm, in which multiple dark targets in the image are chosen to construct a dark spectrum.
This representative dark spectrum is then used to estimate the atmospheric path reflectance according to the best-fitting aerosol model. We evaluate the EnMAP normalized water-leaving reflectance products derived from these three atmospheric correction methods, covering EnMAP acquisitions from July 2022 to November 2024, by validation with multispectral and hyperspectral in situ measurements from international autonomous stations and extensive field campaigns at 25 sites spanning inland, coastal and open ocean waters. The quality control of the match-ups follows the validation protocol and match-up statistics recommended by EUMETSAT for aquatic color remote sensing. The results show good agreement with the in situ measurements, whether hyperspectral (35 match-ups) or multispectral (40 match-ups). Overall, EnMAP-MIP results show better performance against the in situ multispectral data (MdAPE = 18.36%, RMSE = 0.008), with less bias (β = -5.81) and less error (ϵ = 19.70%), than against the in situ hyperspectral data (MdAPE = 21.79%, RMSE = 0.01, β = -12.38, ϵ = 25.69%). This difference may be caused by the higher number of match-ups. Further, results from the Polymer and Acolite atmospheric corrections are intercompared with EnMAP-MIP by evaluating their performance on match-ups grouped by optical water type, covering very turbid, highly absorbing, clear and productive waters. Further evaluation might also benefit from cross-comparing the EnMAP reflectance products with other satellite missions such as DESIS, PRISMA, and S2-MSI. The in situ data from various projects and study sites equipped with autonomous hyper- and multispectral radiometers play a crucial role in this study. Funding agencies are encouraged to continue supporting the advancement of this technology and the expansion of the instrument network, taking into consideration the dynamic characteristics of most aquatic ecosystems and the upcoming hyperspectral missions.
Ultimately, this study confirms the great potential of EnMAP hyperspectral data for aquatic studies and improving the knowledge on inland and coastal waters.
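For readers unfamiliar with the match-up statistics quoted above, the sketch below computes plain-space approximations of MdAPE, RMSE and bias for paired in situ and satellite reflectances, on toy numbers. Note that the EUMETSAT protocol defines the bias (β) and error (ϵ) metrics in log space; this simplified version is illustrative only and is not the authors' implementation.

```python
import numpy as np

def matchup_stats(insitu, sat):
    """Simple match-up statistics for paired in situ / satellite reflectances.

    Plain-space sketch; the EUMETSAT ocean colour protocol computes its bias
    and error metrics in log space, so values here are only approximations.
    """
    insitu, sat = np.asarray(insitu, float), np.asarray(sat, float)
    diff = sat - insitu
    mdape = float(np.median(np.abs(diff / insitu)) * 100)  # median absolute % error
    rmse = float(np.sqrt(np.mean(diff ** 2)))              # root mean square error
    bias = float(np.mean(diff))                            # mean signed difference
    return {"MdAPE_pct": mdape, "RMSE": rmse, "bias": bias}

# Toy reflectance match-ups (sr^-1), not real EnMAP or in situ data.
stats = matchup_stats([0.010, 0.020, 0.015], [0.011, 0.018, 0.016])
print(stats)
```

In practice each match-up would first pass spatial homogeneity and time-difference screening before entering these statistics.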
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G2)

Presentation: Advancing Coastal Water Quality Monitoring: Integrating In Situ Spectral Libraries, Deep Learning, and Multi-Decadal Satellite Observations

Authors: Dr Kesav Unnithan, Dr Nagur Cherukuru, Dr Yiqing Guo, Dr Eric Lehmann, Dr Dave Cole
Affiliations: CSIRO Environment, CSIRO Data 61, CSIRO Data 61
Coastal water quality is increasingly affected by climatic shifts and anthropogenic pressures, posing challenges for sustainable marine ecosystem management. Traditionally, in situ aquatic sensors have provided detailed bio-geo-physical insights into marine systems at localized scales. These measurements include water temperature, salinity, Total Suspended Solids (TSS), Dissolved Organic Carbon (DOC), Chlorophyll-a (Chl-a) concentrations, and hyperspectral absorption spectra of detritus (ad), phytoplankton (aph), and Colored Dissolved Organic Matter (CDOM - ay), alongside backscattering profiles (bbp) of particulate matter. However, capturing large-scale spatiotemporal variations with in situ data alone is constrained by cost, logistics, and coverage limitations. This study leverages in situ measurements collected between 2003 and 2023 across Australia to bridge this gap and develop advanced monitoring tools for coastal water quality. We collated 264 comprehensive spectral datasets from field campaigns across particulate-, CDOM-, and phytoplankton-dominated offshore regions. This spectral library was used to train a novel deep learning framework, Deep Learning for aquatic Remote Sensing (DL-RS), to map complex bio-optical relationships in diverse marine environments. We then apply the inverted DL-RS model to MODIS Aqua imagery spanning 2002 to 2023 to understand broader spatiotemporal dynamics. The model’s inversion results were rigorously evaluated using inherent error analysis, predictive performance assessments, and optical closure comparisons. These evaluations revealed that DL-RS-derived outputs achieved high reliability across tropical, subtropical, and temperate coastal waters despite significant variability in their bio-optical properties. Uncertainty estimates confirmed the robustness of the model in tracking key parameters such as TSS, DOC, and Chl-a concentrations.
Notably, the study identified regions characterized by high net primary productivity and substantial changes in euphotic depths, highlighting the dynamic nature of coastal ecosystems. By mapping these patterns across vast temporal and spatial scales, the approach provides critical insights into the impacts of climate variability and anthropogenic activities on marine environments. The integration of deep learning with satellite remote sensing in this study offers a transformative pathway for understanding and monitoring coastal water quality. The decision-ready datasets produced can support marine estate managers and policymakers by enabling informed responses to water quality challenges. This framework advances coastal monitoring capabilities continentally and provides a scalable approach for applying satellite and machine learning-based water quality assessments in other regions globally. The findings underscore the potential of combining in situ spectral libraries with multi-decadal satellite observations and deep learning to achieve accurate, large-scale, and temporally resolved coastal water quality monitoring.
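The train-on-a-spectral-library, then-invert-to-imagery workflow described above can be caricatured as follows. Ordinary least squares stands in for the DL-RS deep network, and all data are synthetic; this is a sketch of the workflow under those assumptions, not of the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a spectral library: 264 samples x 6 bands -> proxy Chl-a.
X = rng.uniform(0.0, 0.05, size=(264, 6))          # water reflectance bands
true_w = rng.normal(size=6)                        # unknown "bio-optical" mapping
y = X @ true_w + rng.normal(scale=1e-4, size=264)  # proxy Chl-a signal plus noise

# "Train" on the library (least squares in place of the deep network).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Invert" to an image: apply the fitted model pixel-wise to a reflectance cube.
cube = rng.uniform(0.0, 0.05, size=(10, 10, 6))    # toy MODIS-like scene
chl_map = (cube.reshape(-1, 6) @ w).reshape(10, 10)
print(chl_map.shape)  # (10, 10)
```

The same pixel-wise application step carries over unchanged when the fitted model is a neural network; only the training and uncertainty machinery differ.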
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G2)

Presentation: Benefits of a Science and Applications Traceability Matrix (SATM) for Water Quality Integrating Measurements from In-Situ and Earth Observation Sensors, as Well as Including Predictive Water Quality Modelling

Authors: Arnold Dekker, Dr Isabel Fratter, Dr Tarun Saunders, Dr Joshua Pease, Dr Duy Nguyen, Dr Klaus Joehnk, Dr Dar Tisham, Dr Andrew Prata, Dr Hannelie Botha, Dr Tim Malthus, Janet Anstee, Dr Tim Bolton, Gemma Kerrisk, Dr Timon Sotiropoulos, Dr Rob Woodcock, Flora Kerblat, Zandria Farrell
Affiliations: CSIRO Space & Astronomy, CNES, CSIRO Environment, CSIRO Manufacturing, CSIRO CESRE
Over three billion people worldwide are at risk of disease because the health of their freshwater ecosystems is unknown (UN). CSIRO’s AquaWatch Australia Program (AquaWatch) aims to develop a world-class integrated water quality monitoring and forecasting system for Australia and beyond, providing actionable information on both inland and coastal water ecosystems. To achieve this overall aim, AquaWatch has three overarching goals supporting ‘decision-ready’ water quality data from local to continental scales, providing positive impacts for (1) human health and wellbeing, (2) ecosystem health and (3) industrial applications. Robust science is central to the successful development of AquaWatch. To that end, we have been developing an innovative Science and Applications Traceability Matrix (SATM) inspired by the Science Traceability Matrices (STM) used for satellite sensors. In this expanded form, the SATM translates end-user requirements into physics/measurement-based criteria, allowing traceability to system- and sub-system-level requirements (including in situ, EO and forecasting), with appropriate margins. The SATM provides AquaWatch with a structure that links the end-user requirements and science objectives with the specific observational requirements, system capabilities and applications. It provides systems engineers with the requirements to design the system, highlights the effects of any descoping or losses of elements, and facilitates identification of any resulting degradation to the science and applications. The summary below highlights the water quality (WQ) parameters directly measurable from space and the additional desirable WQ parameters that are not directly measurable from space. Predictive WQ modelling is highly relevant for managers of aquatic ecosystems, as it allows them to anticipate a deterioration or improvement of water quality.
However, the performance of these models is a function of their design principles as well as their parameterisation with measurements (be they laboratory, in situ or EO measurements). To design a successful SATM, we have identified four different conceptual models for each investigation to flow up and down between:
– Science or Applications Model (what water quality information is needed);
– Measurement/Experiment Model (how do we measure or predict this, and to what required uncertainty?);
– Instrument Model (how well is the instrument/sensor capable of meeting these requirements?);
– Predictive Water Quality Model (if possible, predict water quality up to seven days in advance).
To understand the applications model, five interdisciplinary end-user workshops were held involving a wide range of end users. They delivered a set of highly diverse criteria and water quality requirements, covering science, engineering, in situ measurements, modelling, regulatory, confidence, data infrastructure, and societal and environmental accounting. For example, for Visible-Near Infrared (VIS-NIR) Earth Observation (the focus of this presentation), variables were identified relating to the presence of phytoplankton and cyanobacteria; they included: Chlorophyll-a, Phycocyanin and Phycoerythrin; species/genus differentiation: Blue-green Algae (inland & species level), Dinoflagellates (coastal waters), Phytoplankton Functional Types (PFT), Peridinin (=dinoflagellates); Total Cell counts (phytoplankton abundance); and Biovolume (may be used with species/types for a HAB index). Composite indices (based on two or more variables) were also used by many water management authorities and included the Trophic State Index, Harmful Algal Bloom Index, TRIX Trophic Index for Coastal Seas, Trophic State using the Nutrient Colour Paradigm, and the Floating Macro-algal Index.
Other identified water quality variables were, in the visibility and sediments group: Total Suspended Matter, Secchi Disk Transparency, Vertical Attenuation Kd, Turbidity and the Forel-Ule scale (water colour). In the carbon-related group: Coloured Dissolved Organic Matter (CDOM) and Dissolved Organic Carbon. In the habitat group: Water Column Depth (Bathymetry); Floating and Submerged Aquatic Vegetation Types; Benthic & Coral Reef Habitat; Water-related Ecosystems and Land-use. Additional variables that are mostly derived using thermal, passive or active microwave satellite sensors, or in situ sensors, included: Temperature, Dissolved Oxygen, Water Surface Height, Water Surface Velocity, and pH. Finally, a set of other measurements requiring further R&D focused on miniaturization and automation included: Salinity/Conductivity; Total Phosphorus; Total Inorganic Nitrogen (Nitrate-N as surrogate); Methylisoborneol (MIB); Geosmin; Microplastics; Metals (heavy and other); Organic Micropollutants (pharmaceuticals, antibiotics, endocrine disruptors, insecticides, herbicides); and Pathogens (E. coli, cholera, other waterborne pathogens, etc.). Examples of SATM traceability can be found in CSIRO and JPL (2024) for Potentially Harmful Algal Blooms, Invasive Aquatic Vegetation and Coral Reefs. Developing an integrated AquaWatch Australia SATM provided more functionality than initially intended, ranging from the intended traceability to an inventory of all activities and measurables, a maturity assessment, and gap identification. We also use this SATM as a prioritisation tool to define the AquaWatch Program and as input to an AquaWatch System Requirements Document.
During this presentation, we will highlight one of these aspects: the traceability of end-user requirements for measuring chlorophyll in inland waters to:
  • In situ instruments (physical and optical)
  • In situ spectroradiometers
  • Earth Observation sensors: multi-, supra- and hyperspectral
  • Predictive modelling
We also consider the inverse, from sensors and modelling back to end-user requirements:
  • Trace (or assess) the performance of existing or promised sensors (in situ to EO) against end-user requirements
  • Determine the required performance of desired new in situ or EO instruments (including trade-offs between spatial, spectral, radiometric and temporal resolution)
  • Determine/advise on optimal (cost-effective) validation, regarding instrumentation and method, with a focus on uncertainty
An essential tool for the necessary calculations covering the Science or Applications Model, the Measurement/Experiment Model and the Instrument Model is an end-to-end simulator. Within AquaWatch, a prototype simulator was developed (Matthews et al., 2023). We will discuss and present results using end-to-end simulation tools for:
  • a forward radiative-transfer-based simulation tool;
  • an in-house instrument sensor simulation tool;
  • atmospheric correction and algorithms as the inversion tool (empirical, semi-analytical, ML/DL/AI, etc.).
The SATM approach has delivered benefits beyond its initial scope, offering enhanced traceability, inventorying, and prioritization capabilities. We therefore recommend using such a highly integrated SATM across the entire breadth of monitoring, assessing and predictive modelling of inland and coastal water quality. Publications: Matthews, M.W.; Dekker, A.; Price, I.; Drayson, N.; Pease, J.; Antoine, D.; Anstee, J.; Sharp, R.; Woodgate, W.; Phinn, S.; et al. (2023) Demonstration of a Modular Prototype End-to-End Simulator for Aquatic Remote Sensing Applications. Sensors 2023, 23, 7824. https://doi.org/10.3390/s23187824.
CSIRO and Jet Propulsion Laboratory/California Institute of Technology (2024) AquaSat-1 feasibility study report: Toward actionable information on water quality and aquatic ecosystems from space. CSIRO, Canberra, ACT, Australia.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall G2)

Presentation: Biogeochemical and hyperspectral optical properties of European coastal and open ocean waters: an unprecedented dataset for the calibration and validation of multi-sensor ocean color satellite products

Authors: David Doxaran
Affiliations: LOV UMR 7093 Sorbonne Universite
Co-authors: David Doxaran, Alexandre Corizzi, Denis Pailler, Emmanuel Boss, Vittorio Brando, Victor Martinez

Three HYPERNETS autonomous stations (https://hypernets.eu) are operated in French coastal waters to accurately measure the hyperspectral water-leaving reflectance signal and to multiply match-ups with multi-sensor ocean color satellite data. Using quality-controlled HYPERNETS field measurements, the validity of various atmospheric and glint correction processors is assessed; the results are representative of various optical water types, ranging from moderately to highly turbid media and from oligotrophic to eutrophic water masses. In addition, hyperspectral apparent and inherent optical properties, together with biogeochemical properties of a wide range of European coastal waters, have been sampled onboard the TARA research vessel (https://fondationtaraocean.org/en/home/) as part of the Tara-Europa expedition and the HyperBOOST research project funded by the European Space Agency (ESA). After these two years of intensive measurements along European coastal waters, the TARA vessel also sampled the open waters of the Mediterranean Sea during an oceanographic campaign in August 2024, from Greece to the south of France. This campaign marked the first deployment of a HYPERNETS station featuring the HYPSTAR, a new European-developed autonomous hyperspectral sensor (https://hypernets.eu), alongside numerous other continuous water quality measurements. From August 8 to 18, 240 measurements were collected under optimal conditions: calm seas, clear skies, and non-turbid waters. These data were compared with satellite observations from the OLCI, VIIRS and MODIS missions, and particularly from the National Aeronautics and Space Administration’s (NASA) first hyperspectral mission, PACE (https://pace.gsfc.nasa.gov), the Plankton, Aerosol, Cloud, ocean Ecosystem satellite mission.
The results of these data match-ups are remarkable, showing an exceptional correlation between the HYPSTAR measurements and the satellite data and highlighting the quality of both the HYPSTAR sensor and the PACE satellite products. This first deployment phase of the HYPERNETS system, initially designed for coastal studies, paves the way for its adaptation to offshore applications, with promising prospects for global ocean monitoring. The combined HYPERNETS and HyperBOOST datasets offer great opportunities to assess the validity of atmospheric corrections applied to satellite data, and also to improve the calibration of inversion algorithms used to retrieve concentrations of key biogeochemical products, based on optical closure. The hyperspectral observation of both coastal and open-ocean environments will undoubtedly open new avenues for understanding ocean (e.g., phytoplankton) dynamics, particularly in response to climate change and anthropogenic effects. The combined use of new satellite sensors and next-generation in situ measurements, within the framework of international cooperation between major space agencies, will make this possible. We will highlight the first key results combining this more than two-year-long expedition aboard the TARA vessel with satellite ocean observations.
Add to Google Calendar

Thursday 26 June 14:00 - 15:30 (Hall L3)

Session: D.06.05 Addressing Data Processing Challenges in EO Digital Framework: Scaling Computational Resources

#cloud-native

With the ever-growing volume of Earth observation (EO) data, ensuring efficient storage, processing, and accessibility has become an ongoing challenge. The anticipated rapid increase in EO data further emphasizes the need for advanced technologies capable of providing scalable computational infrastructure to support this growth.

The current challenge lies in processing this vast amount of EO data efficiently. Computationally intensive tasks, such as those driven by artificial intelligence (AI) and machine learning (ML), alongside image processing applications, place significant demands on existing solutions. These challenges are further compounded by the need for sustainable approaches to manage increasing computational workloads.

This session aims to address these challenges in the context of ESA's current and emerging computational infrastructure. Discussions will focus on the use of diverse computational solutions, including High-Performance Computing (HPC) systems, cloud-based platforms, and hybrid models adopted across the industry. These discussions will encompass ESA's first HPC system, SpaceHPC, and explore how these technologies address the challenges above. While these systems offer substantial processing power and flexibility, the continued growth of data inflow necessitates further advancements in supporting computational infrastructure to maintain efficiency and scalability.

A key consideration will be how these developments can align with sustainability goals, focusing on reducing CO₂ emissions and adopting environmentally responsible practices. Guest speakers from industry will share insights into these topics, highlighting both the challenges and opportunities posed by evolving data processing needs.

Moderators:


  • Peter Gabas - ESA

Presentations and speakers:


SpaceHPC - ESA’s Supercomputing Infrastructure


  • Peter Gabas - ESA

Unifying HPC and Cloud Systems: A Cloud-Native Approach for Infrastructure Integration


  • Vasileios Baousis - ECMWF

Industrial Perspective on the High-Performance Computing and Quantum Computing Opportunities for EOF Processing, Operations, and Archiving


  • Mark Chang - Capgemini

terrabyte: A "Cloud-Like" HPC System for Addressing Earth Observation Challenges


  • Friedl Peter - German Aerospace Center
  • CINECA
  • European HPC Center
Add to Google Calendar

Thursday 26 June 14:15 - 14:35 (EO Arena)

Demo: C.02.22 DEMO - ESA MAAP for EO Science Missions: A Cloud-Based Collaborative Environment for Data Access, Processing, and Algorithm Development

ESA's Multi-Mission Algorithm and Analysis Platform (ESA MAAP) is a cloud-based, collaborative environment that simplifies data discovery, access, algorithm development, and processing for Earth Explorer and TPM missions, while also addressing mission-specific needs. Initially developed for the BIOMASS, GEDI, and NISAR missions in partnership with NASA, ESA MAAP has since expanded to support the EarthCARE mission and will soon support the FLEX mission. This demo will showcase the individual components of the ESA MAAP and highlight use cases born from its collaborative capabilities, focusing on the BIOMASS and EarthCARE missions.

Instructors:


  • Saskia Brose
  • Roberto Alacevich
Add to Google Calendar

Thursday 26 June 14:37 - 14:57 (EO Arena)

Demo: E.03.04 DEMO - GMV Prodigi: Cloud-Native EO Data Processing as a Service – Global Launch on AWS Marketplace

#cloud-native

We propose a demonstration session at the Living Planet Symposium 2025 to show the worldwide launch of GMV Prodigi®, an innovative Ground Segment as a Service (GSaaS) solution available on the AWS Marketplace. Developed under the ESA InCubed program, GMV Prodigi is a fully cloud-based framework running on AWS Cloud, providing scalable, efficient, and cost-effective Earth Observation (EO) data processing.
This solution is the result of a strategic alliance between AWS and GMV, combining GMV’s expertise in EO ground segment solutions with AWS’s cloud infrastructure and advanced computing capabilities. GMV Prodigi enables users to process EO data directly on AWS Cloud without requiring data movement, ensuring security, flexibility, and high performance for satellite operators, EO service providers, and the scientific community.
The session will feature a live demonstration, highlighting:
1. Seamless EO data processing directly on AWS Cloud – executing real-time workflows.
2. Scalability & automation – adapting to different missions, constellations, and user needs.
3. Cost and resource optimization – accelerating time-to-market with AWS-powered efficiency.
As the official global launch event, the Living Planet Symposium provides a unique opportunity for the EO community to explore this state-of-the-art cloud-native solution, designed to revolutionize EO data exploitation through the power of AWS cloud computing.


Speakers:


  • Jorge Pacios Martinez – GMV Prodigi Product Owner
  • Vital Teresa – Ground Segment Business Manager
Add to Google Calendar

Thursday 26 June 15:00 - 15:20 (EO Arena)

Demo: C.03.22 DEMO - Technical websites for the Copernicus Sentinel missions

The Sentinel Web Information Service consists of three websites dedicated to providing technical information related to the Copernicus Sentinel missions:

- https://sentinels.copernicus.eu/
- https://sentiwiki.copernicus.eu/web/sentiwiki
- https://sentivista.copernicus.eu/

This session will showcase the three websites and what they offer.

Speaker:


  • Chris Mortimore - Airbus
Add to Google Calendar

Thursday 26 June 15:22 - 15:42 (EO Arena)

Demo: F.01.16 DEMO - Education & Professional Development Platform - Session 2

The Education & Professional Development (EPD) Platform of the Space Generation Advisory Council (SGAC) serves as a comprehensive resource for students and young professionals aiming to embark on or advance their careers in the space sector. Recognizing the challenges posed by the vastness and complexity of space-related disciplines, the EPD Platform offers tailored initiatives to bridge knowledge gaps and foster professional growth.

One of the cornerstone initiatives is the SpaceGen Academy, an e-learning platform that provides accessible and high-quality educational content on various space topics. This academy ensures that members can acquire foundational and advanced knowledge at their own pace, irrespective of their geographical location.

Complementing the academy is the Mentoring Committee, which facilitates personalized guidance by pairing members with experienced mentors in the industry. This mentorship program is designed to offer insights, advice, and support, thereby enhancing the mentees' professional trajectories.

The Career Development Platform is another pivotal component, offering a curated list of job postings, internships, and other career opportunities worldwide. This platform acts as a bridge between employers seeking fresh talent and SGAC members ready to contribute their skills and passion to the space sector.

To stimulate innovation and practical application of knowledge, the ACHIEVED Competition encourages members to engage in original and inventive mission designs. This competition not only fosters creativity but also provides participants with real-world challenges that hone their problem-solving skills.

For those seeking structured training, the ACHIEVED Academy offers courses in Space Systems Engineering, equipping members with the technical expertise required to excel in the industry.

Through these initiatives, the EPD Platform exemplifies SGAC's commitment to nurturing the next generation of space professionals, ensuring they are well-prepared to meet the evolving demands of the global space community.

Speakers:


  • Nikol Koleva - Executive Director, SGAC
  • Tatiana Komorná - Operations Officer, SGAC
  • Marcos Rojas - Education & Professional Development (EPD) Coordinator, SGAC
  • Antonino Salmeri - SGAC
Add to Google Calendar

Thursday 26 June 15:30 - 16:15 (ESA Agora)

Session: E.01.07 Scaling EO Information Services to bridge the Last Mile with End-Users

Despite the proven value of EO data and services across several use-cases in different sectors, scaling their uptake and integration into decision-making and operational end-user processes still suffers from several impeding factors. These include the fragmentation of the EO service and platform landscape, heterogeneity in user customization needs and fitness-for-purpose gaps in operational contexts, along with the limitations in scaling, funding mechanisms and innovative business and partnership models.

The session will discuss these challenges, linking them to digital innovation and collaboration opportunities to accelerate EO-integrated solution adoption and impact. It will explore how digital innovation, cross-sector collaboration and new digitally enabled business and partnership models can address these barriers, blurring the line between EO and non-EO capabilities and reaching a long tail of end-users.

Chairs:


  • Zaynab Guerraou - ESA
  • Salvatore Pinto - ESA

Speakers:


  • David Fernandes - Head of Geospatial Unit at EDP

  • Annekatrien Debien - Principal & Head of Brussels Office at Novaspace

  • Peter Becker - Technical Director, Imagery Information Systems and Workflows at ESRI

  • Gopal Erinjippurath - Founder and CTO at Sust Global

  • Karen Joyce - Co-founder and Product Lead at GeoNadir

Add to Google Calendar

Thursday 26 June 15:30 - 16:15 (Frontiers Agora)

Session: C.01.15 NEOMI: are you ready to become a new Lead Investigator for future EO space missions?

NEOMI (New Earth Observation Mission Ideas) is an ESA Future EO Programme element that has the objective to explore disruptive blue-sky mission ideas and increase their scientific maturity while training a new generation of diverse Lead Investigators on “how to work with ESA”. NEOMI is all about:

- How do you translate your scientific idea into a potential future space mission?
- How do you develop a mission concept and create a mission description document capturing your vision?
- How do you develop formal mission requirements suitable for use by spacecraft and ground segment development teams?
- What do you need to do to propose a new mission to ESA for its potential future implementation?
- How do you connect with ESA to take advantage of all the resources and experience available to grow and mature your ideas?

In this Agora we will introduce the second cycle of this programme element and explain how you can become a successful proposing team or one of our next Lead Investigators. The ESA NEOMI team will be there for an open discussion, together with previous NEOMI Lead Investigators, to answer all your questions and help you prepare for the exciting opportunities coming your way in the next NEOMI call. Join us at this Agora to discover how to get on board with ESA NEOMI!

Speakers:


  • Craig Donlon - ESA
  • Bernardo Carnicero - ESA
  • Susan Steele-Dunne - TU Delft
  • Christopher Kyba - Ruhr-Universität Bochum
  • Rosemary Willatt - UCL
Add to Google Calendar

Thursday 26 June 15:45 - 16:05 (EO Arena)

Demo: F.05.13 DEMO - Interactive storytelling of Copernicus4Regions user stories with DEA

Copernicus4Regions has been a true success in highlighting the use and benefits of Copernicus data by local and regional authorities across various regions in Europe. It is a joint initiative of the European Commission, the European Space Agency, and the Network of European Regions Using Space Technologies (NEREUS). The booklet featuring 99 user stories has become an inspiring “classic” within the community, and an increasing number of stories can be browsed through the project’s web pages. However, current technologies enable more advanced storytelling and enhanced communication.
This demonstration displays a new way of navigating Copernicus4Regions user stories using DEA, the DestinE storytelling service. It enables local and regional authorities to combine their own datasets – such as Copernicus Sentinel data featured in the Copernicus4Regions user stories – with the extensive DestinE catalogue to project future trends. The DEA platform offers a participatory and immersive experience by allowing local and regional authorities to personalise their user stories with images and videos, all without writing a single line of code.
This new way of storytelling enables local and regional authorities to interactively display the benefits of Copernicus data for citizens, making these user stories more engaging for interested readers. It serves as a powerful asset for the political dialogue and a vital tool in advocating for the continued support of Copernicus.

Speakers:


  • Hendrik Hamacher - ESA
Add to Google Calendar

Thursday 26 June 16:07 - 16:27 (EO Arena)

Demo: B.01.08 DEMO - EO Analytics for SDG Indicators monitoring: End hunger and sustainably use marine resources

This demonstration showcases how Earth Observation (EO) analytics within the Insula platform support SDG monitoring, specifically for Zero Hunger (SDG 2) and Life Below Water (SDG 14). It highlights the use of EO-derived indicators to track agricultural productivity, food security, and marine resource sustainability.

Speakers:


  • Roberto Di Rienzo - CGI
  • Gaetano Pace - CGI
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G2)

Session: F.04.15 Resilient Coasts: Adaptive Strategies for Sustainable Ocean Management in a Changing Climate

As climate change continues transforming our global environment, coastal regions and ocean ecosystems are at the forefront of these impacts.
This session focuses on the critical need for adaptive, innovative approaches to manage and protect our coastal and marine environments against climate change pressures. As the planet faces unprecedented environmental stressors - rising sea levels, coastal erosion, rising temperatures - coastal regions and ocean ecosystems are particularly vulnerable. Emphasizing resilience, the session will highlight how adaptive management practices - grounded in the latest science and driven by community engagement - can ensure the long-term sustainability of our coastal environments and resources.

Key discussions will include integrating climate projections into maritime spatial planning, developing flexible policies that can adjust to evolving environmental conditions, and leveraging technology for real-time monitoring and responsive action. Case studies from around the world will illustrate how coastal communities and stakeholders are implementing these adaptive strategies to secure their ecosystems' sustainability.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G2)

Presentation: Surface current from AIS and drifters in coastal marine environment

Authors: Gwenaele Jan, Alexey Mironov, Lucas
Affiliations: Eodyn
The pressure on the marine environment demands action to better understand marine environments. Influenced by currents, waves, tides, atmospheric fields, and heat exchanges, these dynamics are crucial for the marine environment. Coastal regions pose a specific observational challenge due to interactions between offshore and nearshore processes, requiring high-resolution measurements (meters to kilometers). Satellites such as Sentinel-1 and Sentinel-2 provide key data on surface currents and waves, and SWOT maps high-resolution dynamic topography to capture intricate coastal features. Combining satellite and in situ data enhances understanding of this environment. This study introduces results from MELODI miniature drifting buoys (developed by eOdyn) and surface currents derived from AIS (Automatic Identification System) data, as complementary data to existing monitoring systems, aiming to contribute to more comprehensive and effective observation frameworks for coastal and marine ecosystems. These technologies enrich satellite observations and provide critical information for navigation, coastal risk management and ecological assessments in complex coastal zones. In this framework, we focus on surface current results from validation campaigns in the North Atlantic, presenting the drifters' performance under varying environmental conditions. Specifically, we present results from one in situ experiment conducted in the English Channel (October 2024), supported by Cedre (Centre of expertise in accidental water pollution), which involved multiple MELODI buoy trajectories in an application-oriented context dedicated to forecasting pollutant drift during an operational exercise. The study compares observed currents with AIS-derived currents and with reference fields (e.g. Copernicus Marine Environment Monitoring Service, CNES, ESA projects).
By providing near-real-time data and enabling the reanalysis of past events, these technologies (AIS and miniaturized drifters) offer the capacity to capture coastal dynamics at high resolution. They complement spatial data and work in synergy with satellite observations. This approach, through a rather original set of observations, supports the development of forecast models and decision-making, joining the effort for sustainable management of marine environments.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G2)

Presentation: Translating EO-based Sea Level Rise Knowledge Into an Actionable World Atlas of Future Coastal Flood Hazards

Authors: Claire Dufau, Frédérique Blanc, Pierre-Yves Remy, Olivier Lauret, Rémi De Dianous, Alice Connault
Affiliations: Collecte Localisation Satellites (CLS)
Among the several effects of climate change, rising ocean levels and the occurrence of extreme meteorological events will inevitably result in coastal flooding episodes, temporarily or permanently threatening low-lying coastal zones, where actions to mitigate and adapt are increasingly urgent. Engaging stakeholders towards local actions first requires an assessment of the potential impacts and its dissemination to multi-sector communities. Since 1993, satellite altimetry has provided sea level observations over all the oceans to the international research community. Global mean sea level, with its regional variations, has become a major climate change indicator. Rising to the challenge of translating this EO-based climate indicator into actionable information, an interactive web atlas of future coastal flood hazards has been developed in the framework of an ESA BASS project. The CORISCLIM atlas combines several satellite systems (satellite altimetry, optical and radar imagery) with global ocean models and climate change scenarios in a coastal hazard estimation method applied along all the world's coastlines. The solution addresses the needs of (1) coastal authorities, taking action to protect coastal communities' lives and properties; (2) coastal engineering firms and consultancies, proposing local in-depth risk assessment analyses; (3) development banks and private investors, funding climate resilience actions; (4) climate insurance firms, evaluating future impacts on their clients' assets; and (5) environmental associations and organizations (NGOs), supporting climate change awareness and actions in favour of adaptation. This presentation will showcase the CORISCLIM atlas, its EO-based method and how it was embraced by the various user communities during the demonstration phase.
The presentation will also illustrate how very high resolution satellite imagery can be used in modelling studies for climate coastal risk assessment over several coastal zones in France and the USA.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G2)

Presentation: SCOast-DT: creating and operating space-based coastal zone digital twins

Authors: Vincent Lonjou, Pierre-marie Brunet, Christian Hummer, Dimitri Lallement, Simon Baillarin, Dr Jacqueline Lemoigne-stewart, Laura J. Rogers, Benjamin Smith, Jeff Walter
Affiliations: CNES, NASA
Currently, more than two billion people live in or near coastal zones at the ocean-land interface, with almost a billion more living in adjacent low-lying coastal areas. These areas and populations are at risk from increasingly severe storms and longer-term sea level rise, resulting in coastal erosion, water pollution, urban and agricultural inundation and ecosystem degradation. Earth Science Digital Twins, that is, combinations of data, models, and AI/ML technologies that simulate Earth system processes and enable short- and long-term forecasts, provide understanding and actionable information to reduce risks to humans, infrastructure and ecosystems. Satellite and in situ data are critical components of a Coastal Zone Digital Twin (CZDT), providing timely and spatially relevant input data for models and serving as a check or validation of the digital twin's performance. The aim of this paper is to present a CZDT concept being developed as part of the Satellite Climate Observatory with joint participation from NASA, NOAA and CNES, and to discuss initial use cases, satellite data, modelling and advanced technology components. We will take a closer look at the strategy for creating the foundation of the digital twin: the digital replica. The use of very high-resolution stereoscopic data makes it possible to describe the area under study in three dimensions. Processing, largely based on AI, is then implemented to automatically obtain a classification adapted to each zone and use case. By isolating the terrain in the digital surface model, vegetation, buildings and human infrastructure become visible. These are critical for assessing risk and related adaptation strategies. Finally, scenarios for the use of digital twins, including climate modelling and projections, are detailed for our different study areas and topics: shoreline evolution, marine submersion in New Caledonia, and flood risk and ecosystem evolution in the Nokoué coastal lake and Chesapeake Bay.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G2)

Presentation: Advancing Coastal Monitoring: Satellite Solutions for Erosion and Shoreline Management Across Europe – The ESA Coastal Erosion Project

Authors: Dr. Valentin Pillet, Manon Tranchand-Besset, Olivier Regniers, Georgia Kalousi, Sorin Constantin, Kerstin Stelzer, Paulo Baptista, Jörg Haarpaintner, Virginie Lafon, Aurélie Dehouck
Affiliations: i-Sea, Terra Spatium SA, Terrasigna, Brockmann consult, Centre for Environmental and Marine Studies (CESAM), University of Aveiro, Norwegian Research Centre (NORCE)
Coastal areas, experiencing rapid urbanization and population growth, are among the most dynamically changing regions on Earth. These zones face increasing risks from storms and rising sea levels, posing significant challenges for sustainable shoreline management. Traditional coastal monitoring, while accurate, is costly and time-consuming, often covering limited areas or requiring substantial processing time, which does not always meet the expectations of authorities requiring up-to-date and actionable information. Technologies such as DGPS, UAV photogrammetry, and LIDAR have been deployed, but struggle to provide the frequency and scale required for effective large-scale monitoring at a reasonable cost for coastal managers. To address these limitations, the Space for Shore consortium, under the European Space Agency's Coastal Erosion Project and led by i-Sea, leverages satellite imagery to provide a cost-effective and wide-reaching solution. The project developed and deployed prototype monitoring tools along more than 5000 km of coastline, covering diverse European coastal environments. By using 25 years of data from the Copernicus program and other satellite missions, including very high resolution data, this approach captures and analyzes long-term erosion patterns and episodic erosion events across a range of coastal dynamics. Addressing the non-linear and regionally varied nature of coastal erosion, Space for Shore uses high-resolution optical and SAR satellite imagery to generate key indicators like waterlines, beach widths, dune positions and vegetation line, but also bathymetry. This data is used to map changes with tailored temporal frequencies, from monthly to annual, to match local environmental rhythms, assessing both gradual erosion trends and immediate impacts from events like storms or fires. The study spans regions across France, Germany, Greece, Romania, Portugal, and Norway (including Svalbard Archipelago). 
The key findings indicate significant coastal retreat across various locations. In the Danube Delta, up to 330 meters of shoreline have been lost over the past 30 years. France's Cotentin Coast has experienced rapid dune erosion following Storm Eleanor. Greece’s Evia Island has also suffered erosion after recent wildfires, while the Svalbard Archipelago shows marked glacier front retreat likely linked to climate change. Additionally, popular tourist destinations like Portugal’s coastline and Germany’s Sylt Island have seen shorelines recede by several tens of meters in recent years. The project aims to provide an end-user-focused, validated set of tools and data for European coastal managers, ultimately supporting evidence-based planning and a multi-criteria classification of vulnerability to coastal erosion. The collaboration between remote sensing experts and over 60 stakeholders from government, academia, and industry emphasizes the initiative’s practical value in ongoing coastal management efforts.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G2)

Presentation: Monitoring Interannual Evolution of Normandy Intertidal Areas Using Spaceborne Imagery: A Decade of Observations

Authors: Simon Déchamps, Edward Salameh, Erwin Bergsma, Frédéric Frappart, Ernesto Tonatiuh Mendoza, Imen Turki, Tatiana Goulas, Julien Deloffre, Sophie Le Bot, Stéphane Costa, Benoit Laignel
Affiliations: Univ Rouen Normandie, Univ Caen Normandie, CNRS, M2C, UMR 6143, CNES, ISPA, UMR 1391 INRAE/Bordeaux Science Agro, Normandie Univ, UNICAEN, CNRS, IDEES
Intertidal flats, acting as a natural buffer between sea and land environments, protect against coastal hazards such as storm surges. These dynamic environments are marked by significant topographic variations driven by major processes and stressors including tides, waves, freshwater inflow, storms, human activities, and climate change and its associated pressures (rising sea level, increasing storm frequency and intensity), all of which influence ongoing sediment redistribution. This emphasizes the growing need for accurate and up-to-date intertidal topographic maps. Spaceborne remote sensing offers an efficient tool for observing these evolving landscapes, enabling regular monitoring. As part of the SCO EO4Intertopo project, this study investigates changes in the intertidal topography of the Normandy coast over the past ten years using two complementary methods for constructing digital elevation models (DEMs) from satellite imagery: the waterline method and the water occurrence method. The main difference between the two lies in the extraction of waterlines. The waterline method requires extracting waterlines from a series of images, levelling them using sea level model outputs and then interpolating to generate a DEM. In contrast, the water occurrence method retains information at the pixel level by constructing a water occurrence map, which is then levelled to produce the final DEM. Annual DEMs were generated using Sentinel-2 data from 2015 to 2024, supplemented by Sentinel-1. The waterline method provides good results on open, straight beaches but shows limitations when applied to areas with more complex structures. In contrast, the water occurrence method performs better in estuarine regions and bays. By combining the two methods, it is possible to reconstruct topography with a horizontal resolution of 10 meters.
The comparison with airborne LiDAR-derived DEMs showed Mean Absolute Errors (MAE) between 0.25 m and 0.49 m and Root Mean Squared Errors (RMSE) between 0.41 m and 0.70 m. The computation of DEM of Difference (DoD) maps enabled the quantification of annual volumetric morpho-sedimentary changes (erosion, accretion, stable areas) at regional scale along the Normandy coastline. Channel migrations, beach slopes, and landform kinematics were also detected and analyzed against hydrodynamic forcings and substrate types.
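The accuracy metrics and DoD maps described above amount to a few array operations; a minimal sketch, assuming the satellite-derived and LiDAR DEMs are co-registered NumPy arrays (the array names, the 0.2 m stability threshold, and the toy values are illustrative, not from the study):

```python
import numpy as np

def dem_accuracy(dem_sat, dem_lidar):
    """Mean Absolute Error and Root Mean Squared Error between two DEMs."""
    diff = dem_sat - dem_lidar
    mae = np.nanmean(np.abs(diff))
    rmse = np.sqrt(np.nanmean(diff ** 2))
    return mae, rmse

def dem_of_difference(dem_t2, dem_t1, threshold=0.2):
    """DoD map: positive values mark accretion, negative values erosion.
    Elevation changes smaller than `threshold` (m) are treated as stable."""
    dod = dem_t2 - dem_t1
    stable = np.abs(dod) < threshold
    return np.where(stable, 0.0, dod)
```

Summing the positive and negative parts of the DoD map, multiplied by the pixel area, would then give the volumetric accretion and erosion budgets per year.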
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K2)

Session: A.01.07 Tracking and classifying aerosols through advances in observation and modelling

Atmospheric aerosols are a major influence on weather and health that can vary rapidly in time, space, shape, size, and even chemical composition. These variations cannot be comprehensively constrained by any single observation. The assumptions and approximations used to overcome this limitation are described as “aerosol type”. Aerosols thus present diverse challenges to the environmental science community. Reliable weather forecasting relies on the ability to differentiate cloud from plumes of dust or smoke. Model evaluation often requires discrimination of aerosols by chemical properties when only optical data from satellites are available. Air quality estimation requires knowledge of the vertical distribution of particles and humidity. Rapid response to wildfires and volcanic eruptions requires near-real-time data at high resolution. The next era of Earth-observing satellites and data assimilation systems provides new opportunities to constrain the behaviour of aerosols by synergistically combining satellite datasets, applying physical constraints, exploiting underutilised observations, or applying new methodologies such as machine learning. This session invites presentations on methods to identify, track, and classify aerosols throughout their life cycle in order to understand aerosols' influence on air quality, the energy budget, fire, and more. The aim is to bring together the disparate aerosol research communities to exchange knowledge and methodologies to overcome these challenges.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K2)

Presentation: Aerosol Profiles with Pandora Sky Measurements

Authors: Marie Stoeckhardt, Dr. Alexander Cede, Dr. Elena Spinei, PhD Masahiro Momoi, PhD Marcos Herreras-Giralda, Dr. Oleg Dubovik, Dr. Axel Kreuter
Affiliations: Medical University Of Innsbruck, University Of Innsbruck, LuftBlick Earth Observation Technologies, NASA Goddard Space Flight Center, GRASP SAS, University of Lille
Influencing the radiation budget of the atmosphere and air quality, aerosol particles are crucial in climate change research and public health. Aerosols can have a wide range of optical and microphysical properties with high temporal and spatial variability, which poses a challenge to characterizing them comprehensively. The aerosol vertical distribution is relevant for several reasons: as a priori information for satellite retrievals, for constraining the impact on radiative processes and cloud formation, and for particle concentration at the surface. Lidar instruments are well-established tools for retrieving aerosol profiles using active optical remote sensing techniques (Comerón et al. 2017). Aerosol profiles can also be measured with passive ground-based remote sensing instruments. The Pandora spectrometer system routinely performs direct sun and sky measurements within the Pandonia Global Network (PGN). Trace gas profiles of NO2, HCHO and H2O are already retrieved based on so-called Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) observations, i.e. spectral sky radiance measurements at various elevation angles (at a constant azimuth angle), by applying a spectral fitting method (Bösch et al. 2018). Aerosol profiles are more complex to retrieve because their impact on the sky radiance has no spectral signature and depends on their optical properties, which may vary widely between different types. Here we report on first results from Pandora aerosol profile retrievals. In this study we analyze two approaches. First, we use a parametrized approach based on relative sky radiance and retrieved absolute slant column densities of the oxygen dimer (O2O2) (Spinei et al. 2015). AOD per layer is calculated based on comparison to look-up tables of a pure Rayleigh atmosphere and scaled by total AOD. This approach would facilitate integration as a near-real-time PGN product.
Second, we apply the optimal estimation technique of a modified version of the GRASP algorithm (Dubovik et al. 2021), fitting the measurements of sky radiance and slant columns to a radiative transfer model. We analyze one year of PGN data from Thessaloniki, Greece, using the first of these aerosol profile retrieval approaches. Our results demonstrate the potential of Pandora and GRASP for increasing the spatial coverage of aerosol profile data from ground-based monitoring networks and for advancing the development of aerosol observation techniques.

References:
- Bösch, T.; Rozanov, V.; Richter, A.; Peters, E.; Rozanov, A.; Wittrock, F. et al. (2018): BOREAS – a new MAX-DOAS profile retrieval algorithm for aerosols and trace gases. Atmos. Meas. Tech. 11 (12), 6833–6859. DOI: 10.5194/amt-11-6833-2018.
- Comerón, A.; Muñoz-Porcar, C.; Rocadenbosch, F.; Rodríguez-Gómez, A.; Sicard, M. (2017): Current Research in Lidar Technology Used for the Remote Sensing of Atmospheric Aerosols. Sensors 17 (6). DOI: 10.3390/s17061450.
- Dubovik, O.; Fuertes, D.; Litvinov, P.; Lopatin, A.; Lapyonok, T.; Doubovik, I. et al. (2021): A Comprehensive Description of Multi-Term LSM for Applying Multiple a Priori Constraints in Problems of Atmospheric Remote Sensing: GRASP Algorithm, Concept, and Applications. Front. Remote Sens. 2, 706851. DOI: 10.3389/frsen.2021.706851.
- Spinei, E.; Cede, A.; Herman, J.; Mount, G. H.; Eloranta, E.; Morley, B. et al. (2015): Ground-based direct-sun DOAS and airborne MAX-DOAS measurements of the collision-induced oxygen complex, O2O2, absorption with significant pressure and temperature differences. Atmos. Meas. Tech. 8 (2), 793–809. DOI: 10.5194/amt-8-793-2015.
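The final scaling step of the first approach, distributing a retrieved total AOD over altitude layers according to relative weights from the look-up-table comparison, can be sketched in a few lines; the function name, the weight input and the toy values are purely illustrative, not the PGN implementation:

```python
import numpy as np

def scale_layer_aod(layer_weights, total_aod):
    """Distribute a total AOD over altitude layers.

    `layer_weights`: relative per-layer extinction signal, e.g. inferred
    from comparison against Rayleigh-atmosphere look-up tables
    (illustrative input, not the actual LUT output format).
    Returns per-layer AOD values that sum to `total_aod`.
    """
    w = np.asarray(layer_weights, dtype=float)
    w = w / w.sum()          # normalize weights to unit sum
    return w * total_aod     # per-layer AOD
```

The key property is conservation: however the weights are derived, the layer AODs always sum back to the independently retrieved total AOD.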
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K2)

Presentation: Satellite Aerosol Composition Retrieval from a Combination of three different Instruments

Authors: Ulrike Stöffelmair, Dr. Thomas Popp, Dr. Marco Vountas, Prof. Dr. Hartmut Bösch
Affiliations: German Aerospace Center (DLR), University of Bremen, Institute of Environmental Physics (IUP)
The effect of aerosols on climate depends not only on the global aerosol distribution but also on its composition. Consequently, there is a need to retrieve aerosol composition from satellite measurements. As these cannot be comprehensively constrained by any single instrument, we combine data from three different satellite-based instruments for a synergistic aerosol retrieval. We make use of measurements with different observation characteristics: different spectral ranges (UV, VIS, thermal IR) and different viewing geometries (nadir, oblique). The instruments included are: the dual-view instrument SLSTR (Sea and Land Surface Temperature Radiometer) on board Sentinel-3A and 3B; the Infrared Atmospheric Sounding Interferometer (IASI) and the Global Ozone Monitoring Experiment-2 (GOME-2), both on board Metop-A/B/C. Data are averaged to a common 40x80 km² grid, temporally aligned within a 60-minute window and cloud masked. The retrieval is developed based on the results of a previous simulation-based study on the information content of this combination of instruments. This study focuses on retrieving the total Aerosol Optical Depth (AOD) and the AOD of relevant aerosol components from these satellite measurements using an Optimal Estimation framework. The retrieval uses the radiative transfer model SCIATRAN to simulate radiances at the top of the atmosphere. The parameters determined in the retrieval are seven albedo values at different wavelengths, the soil temperature, the AOD and scaling factors for 15 different aerosol components. As a priori values for the retrieval parameters we use climatological data: for the albedo values, the GOME-2 surface LER database containing the Lambertian-equivalent reflectivity (LER); for the AOD and aerosol composition, climatological MERRA-2 reanalysis data. The information content study showed that up to 22 parameters can be retrieved, depending on the aerosol amount and the surface conditions.
This shows that it is possible to determine the different parameters along with their uncertainties. The use of this combination of instruments therefore holds the potential to determine the aerosol composition and thus constrain the climate impact of aerosols more precisely. Based on this information content study, the paper will show initial results of the retrieval and discuss its capabilities and limitations. It will show how the retrieval has been optimized, e.g. to ensure convergence of its iterative fitting routines within practical limits on the number of iterations.
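An Optimal Estimation retrieval of the kind described above iterates a standard Gauss-Newton update that balances measurement fit against the a priori state; a minimal sketch of one such step, with the forward model, Jacobian and all matrix values purely illustrative (this is the generic textbook form, not the SCIATRAN-based implementation):

```python
import numpy as np

def oe_update(x, y, forward, jacobian, x_a, S_a, S_e):
    """One Gauss-Newton step of optimal estimation.

    x       : current state vector
    y       : measurement vector
    forward : forward model F(x) simulating measurements
    jacobian: function returning K = dF/dx at x
    x_a, S_a: a priori state and its covariance
    S_e     : measurement-error covariance
    """
    K = jacobian(x)
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    # Posterior covariance gives the retrieved parameter uncertainties.
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)
    x_new = x_a + S_hat @ K.T @ Se_inv @ (y - forward(x) + K @ (x - x_a))
    return x_new, S_hat
```

For a linear forward model this converges in a single step; in a real retrieval the step is iterated until a convergence criterion is met, within a practical limit on the number of iterations.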
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K2)

Presentation: Polarimetric Remote Sensing of atmospheric aerosols: First results from the SPEXone instrument on the PACE mission

Authors: Otto Hasekamp, Guangliang Fu, Bastiaan van Diedenhoven, Neranga Hannadige, Zihao Yuan, Raul Laasner, Laura van der Schaaf, Richard van Hees, Jochen Landgraf, Martijn Smit, Jeroen Rietjens
Affiliations: NWO-I/SRON
On February 8, 2024 the NASA Plankton, Aerosol, Cloud & ocean Ecosystem (PACE) mission was launched with the SPEXone Multi-Angle Polarimeter on board. SPEXone is designed to deliver unprecedented information on aerosol properties, such as size, shape, absorption (Single Scattering Albedo), amount (Aerosol Optical Depth, number concentration), and complex refractive index. From the complex refractive index, size and shape, chemical composition can be derived in terms of volume fractions of the main aerosol components. The launch of PACE ends a 10-year gap in the availability of space-based multi-angle polarimeter data, which are essential to understand and quantify the role of aerosols and clouds in climate change. In this contribution, we present the first year of aerosol data from SPEXone. As we will show, the first version of SPEXone aerosol data already shows very good agreement with ground-based AERONET observations. The presentation includes a global view on aerosol composition in terms of volume fractions of Dust, Sea Salt, Black Carbon, Organic Carbon, fine-mode inorganics (Sulphate, Nitrate), and aerosol water. SPEXone shows expected patterns of high fractions of sulphates/nitrates over industrial regions and cities in Asia and North and South America, Black Carbon over biomass burning regions in Africa and North America, Dust over desert regions and as outflow over the ocean, and hydrated sea salt over the open ocean. Finally, we discuss the capability of SPEXone to provide a Cloud Condensation Nuclei (CCN) product from the retrieved aerosol properties (number concentration, size distribution, water content).
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K2)

Presentation: Multi-satellite Procedure to Classify Aerosols into Feasible Types

Authors: Pekka Kolmonen, Tero Mielonen, Antti Lipponen, Elizabeth Andrews, Andrew Sawyer, Antti Arola
Affiliations: Finnish Meteorological Institute, NOAA Global Monitoring Laboratory, NASA Goddard Space Flight Center, Finnish Meteorological Institute
Aerosol types are crucial in climate studies because different aerosols have varying properties and effects on the Earth's energy balance, atmospheric chemistry, and climate systems. Accurate climate models require detailed aerosol type data to simulate these diverse effects. Misrepresenting aerosol properties can lead to errors in climate predictions. By studying aerosol types, one can better understand and predict how aerosols influence climate dynamics, regional weather patterns, and global temperature trends. One way to evaluate the aerosol types in climate studies is to compare them to global aerosol types determined from satellite data. In the presented work, we have combined data from various satellite aerosol products to obtain the best available global knowledge about the three aerosol properties significant for aerosol type classification. These properties are: (1) the Aerosol Optical Depth (AOD), a proxy for the amount of particles in the air; (2) the Ångström exponent, a proxy for the aerosol particle size distribution; and (3) the Absorbing Aerosol Index (AAI), a qualitative proxy for the aerosol absorbing/scattering property. The quantitative choice for the last property would be the Single Scattering Albedo (SSA), but no reliable SSA product is readily available. The AAI serves our purposes quite well, since classification between absorbing and scattering aerosols does not necessarily require quantification. In addition to the three main aerosol properties, the shape and elevation of mineral dust are considered by employing a dust AOD (dAOD) product. At this point, we aim to produce the aerosol classification on a monthly L3 grid. The AOD and Ångström exponent were obtained from the L2 aerosol product of the Finnish Meteorological Institute.
The product is retrieved using ESA Sentinel-3 SLSTR data, significantly enhanced using a neural network post-correction method, and then aggregated to the monthly L3 grid. The AAI was obtained from the OMI OMAERO monthly L3 product. The employed dAOD product was the monthly L3 C3S IASI dust ensemble or the ULB IASI product. Besides the aerosol parameters, a land fraction, derived from the SLSTR L1B land/ocean flagging, was added to the classification to better distinguish land- and ocean-based aerosol types from each other. The classification was done using hierarchical clustering, where aerosols are first divided into two main types, and the division is then continued until a desired number of types is reached. A blind clustering did not, however, produce feasible results. For this reason, a subjective pre-selection was carried out before applying the clustering algorithm. The pre-selection includes division between low/high AOD values and between low/high Ångström exponent values. We will present the method and results of the aerosol classification. The resulting aerosol types are compared to the results from other aerosol classification attempts. The results and future developments are discussed.
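The two-stage procedure described above (a subjective pre-selection on low/high AOD and Ångström values, followed by hierarchical clustering within each subset) could be sketched roughly as below. The thresholds, the synthetic grid-cell data, and the number of clusters per subset are illustrative placeholders, not the values used in this work:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Synthetic monthly L3 grid cells: columns = AOD, Angstrom exponent, AAI,
# land fraction. Illustrative only; the real inputs come from the SLSTR,
# OMAERO and IASI products described in the abstract.
cells = np.column_stack([
    rng.uniform(0.0, 1.5, 500),   # AOD
    rng.uniform(-0.5, 2.5, 500),  # Angstrom exponent
    rng.normal(0.5, 1.0, 500),    # AAI
    rng.uniform(0.0, 1.0, 500),   # land fraction
])

# Subjective pre-selection on low/high AOD and low/high Angstrom values,
# applied before clustering (a blind clustering alone was not found feasible).
AOD_SPLIT, ANG_SPLIT = 0.3, 1.0   # hypothetical thresholds

labels = np.zeros(len(cells), dtype=int)
next_label = 1
for high_aod in (False, True):
    for high_ang in (False, True):
        mask = ((cells[:, 0] > AOD_SPLIT) == high_aod) & \
               ((cells[:, 1] > ANG_SPLIT) == high_ang)
        if mask.sum() < 2:
            continue
        # Hierarchical (Ward) clustering within the pre-selected subset,
        # cut into two sub-types; deeper cuts would give more types.
        Z = linkage(cells[mask], method="ward")
        sub = fcluster(Z, t=2, criterion="maxclust")
        labels[mask] = next_label + sub - 1
        next_label += 2

print(np.unique(labels))  # up to 8 aerosol types: 4 subsets x 2 clusters each
```

Cutting each subset's dendrogram more deeply (larger `t` in `fcluster`) continues the division until the desired number of types is reached.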
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K2)

Presentation: Study of Major Events Disturbing the Stratosphere: the 2019-2020 Australian Wildfires and the 2022 Hunga Eruption

Authors: Clair Duchamp, Bernard Legras, Pasquale Sellitto, Aurélien Podglajen
Affiliations: Laboratoire de Météorologie Dynamique (LMD-IPSL), CNRS, Sorbonne Université, ENS-PSL, École Polytechnique, Université Paris Est Créteil, Université de Paris-Cité, CNRS, Laboratoire Interuniversitaire des Systèmes Atmosphériques (LISA-IPSL)
High-explosivity volcanic eruptions and extreme-intensity fires (megafires) can inject pollutants into the upper troposphere and stratosphere (UTS), generating persistent disturbances of its composition and of the stratospheric aerosol layer, affecting the radiative balance and the climate system on a global scale. We focus on the last two events to have generated major disturbances: the Australian wildfires (AF) between late December 2019 and early January 2020, and the Hunga submarine eruption in January 2022. The AF caused the formation of long-lived smoke vortices in the stratosphere, transporting gases and aerosols to very high altitudes of up to 35 km (Khaykin et al., 2020). The Hunga eruption is the biggest stratospheric disturbance since the Pinatubo eruption, and is exceptional in that its plume reached 58 km altitude (Carr et al., 2022), with a massive injection of water vapour representing a 10% increase of the stratospheric content (Millàn et al., 2022). We take advantage of the large number and diversity of spaceborne instruments at our disposal to analyze the properties of the stratospheric plumes. MLS measures an increase in ozone in the heart of the main vortex of the AF, while water vapour remains globally constant on average; this suggests that dilution and mixing with outside air could have a limited impact on those tendencies. Measurements by GEOs, CALIOP, MLS and IASI highlight the rapid conversion of sulfur dioxide into sulfate aerosols a few days after the Hunga eruption. CALIOP, MLS and OMPS LP show that the main vortex approaches the equator in April 2020 and that the Hunga plume extends across the Southern Hemisphere to the South Pole from early 2023. The size distribution parameters of sulfate aerosols retrieved by SAGE III — with a mode width of 1.25 and an effective radius of 0.4 μm — differ from the significantly smaller background aerosols and from those measured during recent stratospheric eruptions.
We compare AOD measurements from instruments measuring extinction (SAGE III, OMPS LP, OSIRIS) with an extinction retrieval method applied to CALIOP data, and find that the special conditions of the Hunga plume, with unusual size distributions of aerosols, lead to a large dispersion of results. Using the same method, we find mean Lidar Ratio values between 40 and 60 sr following the Hunga eruption.
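As a rough illustration of the two quantities compared above, stratospheric AOD is the vertical integral of an extinction profile, and the lidar ratio links extinction to the backscatter that a lidar such as CALIOP measures. The Gaussian aerosol layer below is synthetic, not Hunga data:

```python
import numpy as np

# Synthetic stratospheric aerosol layer: Gaussian extinction profile (km^-1)
z_km = np.linspace(15.0, 35.0, 201)                              # altitude grid (km)
extinction = 1e-3 * np.exp(-0.5 * ((z_km - 24.0) / 2.0) ** 2)

# AOD = vertical integral of extinction (trapezoidal rule)
aod = np.sum(0.5 * (extinction[1:] + extinction[:-1]) * np.diff(z_km))

# Lidar ratio S = extinction / backscatter (sr). Values of 40-60 sr were
# found after the Hunga eruption; assuming a value such as S = 50 sr lets
# extinction be recovered from a backscatter profile (km^-1 sr^-1).
S = 50.0
backscatter = extinction / S
recovered = S * backscatter    # equals the original extinction profile

print(round(aod, 4))
```

An unusual size distribution changes the lidar ratio itself, which is why an assumed `S` (or one retrieved independently) drives the dispersion between extinction-based and backscatter-based AOD estimates.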
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K2)

Presentation: Reconstructing and tracking partially cloud-obscured dust plumes in the Bodélé Depression

Authors: Franz Kanngießer, Prof. Stephanie Fiedler
Affiliations: GEOMAR Helmholtz Centre for Ocean Research, Faculty of Mathematics and Natural Sciences, Kiel University, Institute of Environmental Physics, University of Heidelberg
North Africa is Earth’s largest dust source. Since station observations in North Africa are sparse, satellite observations provide an invaluable tool for studying desert dust plumes. Passive instruments onboard geostationary satellites provide observations covering a large spatial extent with high temporal resolution. However, dust plumes observed by such instruments, like the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) on Meteosat Second Generation or the Flexible Combined Imager (FCI) on Meteosat Third Generation – Imager (MTG-I), are frequently (partially) obscured by clouds. Thus, their full extent remains unknown. In this work, partially obscured dust plumes as detected by SEVIRI are reconstructed using a machine learning-based inpainting algorithm. Specifically, we train the algorithm on dust aerosol optical depth data from reanalysis and operational numerical forecasts and subsequently apply the algorithm to dust fields extracted from SEVIRI images. Spatial patterns from dust plume reconstructions across North Africa were found to be consistent with the ensemble of operational dust forecasts provided by the WMO Dust Regional Center Barcelona (cf. Kanngießer and Fiedler, 2024). Having thus established the reconstruction technique for our application, we can now study the spatio-temporal evolution of dust events for the Bodélé Depression, which is regarded as North Africa’s most active dust source region. To do so, we first reconstruct dust plumes from day-time SEVIRI images with a temporal resolution of 15 minutes, centred around the Bodélé Depression, at spatial resolutions of 0.1° by 0.1° and 0.05° by 0.05°. From these reconstructions, we infer the distribution of transport direction and transport speed by tracking the centre of mass of each dust plume over time.
For consistency checks, we further compare dust plume speed and direction with horizontal line-of-sight wind speeds from Aeolus overpasses obtained throughout the Aeolus mission. Our analysis shows that the low image resolution of 0.1° by 0.1° inhibits the detection of dust plume speeds below 10 m s-1, whereas comparison with in-situ observations from weather stations close to the Bodélé Depression indicates that dust events dominate at wind speeds of around 8 m s-1. Higher-resolution SEVIRI images (0.05° by 0.05°) can capture lower dust plume speeds. This indicates the added value of the FCI instrument, which has a higher spatial resolution than SEVIRI. Thus, we expect to detect both lower values and a finer resolution of dust plume speeds with FCI than is possible with SEVIRI. Furthermore, our reconstructions constitute, to our knowledge, the first dataset of fully restored, i.e. no longer cloud-obscured, dust plumes across the Bodélé Depression and its surroundings. Reference: Kanngießer, F., & Fiedler, S. (2024). “Seeing” beneath the clouds—Machine-learning-based reconstruction of North African dust plumes. AGU Advances, 5, e2023AV001042. https://doi.org/10.1029/2023AV001042
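The centre-of-mass tracking step could be sketched as below. The AOD-weighted centre of mass of each reconstructed field is followed between consecutive 15-minute images, and its displacement gives speed and direction. The flat-Earth displacement approximation, the grid values, and the function names are simplifying assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def centre_of_mass(daod, lats, lons):
    """AOD-weighted centre of mass of a dust field (degrees)."""
    w = np.nan_to_num(daod)                      # daod has shape (nlat, nlon)
    lat_c = (w.sum(axis=1) @ lats) / w.sum()
    lon_c = (w.sum(axis=0) @ lons) / w.sum()
    return lat_c, lon_c

def plume_motion(field_t0, field_t1, lats, lons, dt_s=900.0):
    """Speed (m/s) and bearing (deg, 0 = northward) of the centre-of-mass
    displacement between two fields dt_s seconds apart (15 min for SEVIRI)."""
    la0, lo0 = centre_of_mass(field_t0, lats, lons)
    la1, lo1 = centre_of_mass(field_t1, lats, lons)
    # Small-displacement flat-Earth approximation (fine near the Bodele, ~17N)
    m_per_deg = 111_000.0
    dy = (la1 - la0) * m_per_deg
    dx = (lo1 - lo0) * m_per_deg * np.cos(np.deg2rad(0.5 * (la0 + la1)))
    speed = np.hypot(dx, dy) / dt_s
    bearing = np.rad2deg(np.arctan2(dx, dy)) % 360.0
    return speed, bearing

# Example: a synthetic plume displaced 0.1 deg westward in 15 minutes
lats = np.arange(16.0, 18.0, 0.05)
lons = np.arange(17.0, 19.0, 0.05)
LA, LO = np.meshgrid(lats, lons, indexing="ij")
plume = lambda lon0: np.exp(-(((LA - 17.0) / 0.2) ** 2 + ((LO - lon0) / 0.2) ** 2))
speed, bearing = plume_motion(plume(18.0), plume(17.9), lats, lons)
print(round(speed, 1), round(bearing))  # roughly 11.8 m/s, heading ~270 (westward)
```

On a 0.1° grid and a 15-minute interval, one grid cell of displacement already corresponds to roughly 12 m/s, which illustrates why slower plumes need the finer 0.05° resolution.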
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.96/0.97)

Session: C.03.10 Copernicus CO2M Mission: Operational Concept, Product and Service Development

As part of the Copernicus Programme, the European Commission and the European Space Agency (ESA) are expanding the Copernicus Space Component and are implementing satellite remote measurements for anthropogenic CO2 (and CH4) emission monitoring. In support of well-informed policy decisions and for assessing the effectiveness of strategies for CO2 (and CH4) emission reduction, uncertainties associated with current anthropogenic emission estimates at national and regional scales need to be improved. Satellite measurements of atmospheric CO2 (and CH4), complemented by in-situ measurements and bottom-up inventories, will enable, by using advanced (inverse) modelling capabilities, the transparent and consistent quantitative assessment of CO2 (and CH4) emissions and their trends at the scale of megacities, regions, countries, and at global scale.
In this session, we will provide more details on the operational concept for the CO2M Mission, which is anticipated to consist of a constellation of three satellites, each with at least a 250 km swath and capable of exploiting two operational measurement modes. An overview of the processing for the greenhouse gas (CO2 and CH4), NO2, aerosol and cloud products will be provided. These products will feed into the Copernicus Greenhouse Gas Monitoring and Verification Support capacity under development by ECMWF, whose components will also be described together with its related supporting projects.

Presentations and speakers:


CO2M Operational Data Acquisition Modes


  • Mauro Caleno & Terry Bastirmaci - ESA

Status of the performance of the CO2M instruments: CO2I, CLIM & MAP


  • Gregory Bazalgette Courreges-Lacoste & Hana Ouslimani - ESA

The CO2M product processing system – product portfolio


  • Ruediger Lang - EUMETSAT

The CO2M GHG data processors and product


  • Hartmut Boesch - IUP Bremen

The CO2M NO2 product for plume detection


  • Benjamin Leune - KNMI

Cloud and aerosol processing for CO2M


  • Pepe Phillips - EUMETSAT

CO2M preparations for product validation


  • Mahesh Sha - BIRA

CO2M: changing Europe’s capacity to monitor greenhouse gas emissions


  • Richard Engelen - ECMWF
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.31/1.32)

Session: C.02.09 Preparing for the FLuorescence EXplorer (FLEX) mission - PART 2

The FLEX mission, selected by ESA as the 8th Earth Explorer, aims at providing an improved understanding of the functioning and photosynthetic efficiency of vegetation from space. FLEX will be the first satellite mission to facilitate quasi-direct observation of the cause-effect relationships between plant health, photosynthesis and vegetation functioning. FLEX will provide repeated observations along the phenological cycle at a spatial scale supporting agricultural and forestry management units. In this session, the status of the implementation of the FLEX mission will be presented, including preparation activities towards the operational phase. Presentations will focus on the instrument performances, the algorithm developments, the calibration and validation activities, and selected results demonstrating the expected impact of the FLEX products.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Airborne campaign activities in the framework of the FLEX-ITA project

Authors: Karolina Sakowska, Sergio Cogliati, Lorenzo Genesio, Federico Carotenuto, Franco Miglietta, Mariano Bresciani, Claudia Giardino, Lodovica Panizza, Loris Vescovo, Damiano Gianelle, Patrick Rademske, Bastian Siegmann, Uwe Rascher, Jan Hanuš, Karel Holouš, Denise Cinellu, Giulia Tagliabue, Pietro Chierichetti, Roberto Colombo, Cinzia Panigada, Micol Rossini, Sara Venafra
Affiliations: University of Milano-Bicocca, Italian National Research Council - Institute of BioEconomy (CNR-IBE), Italian National Research Council - Institute for Electromagnetic Sensing of the Environment (CNR-IREA), Edmund Mach Foundation, Forschungszentrum, CzechGlobe, Italian Space Agency
This contribution outlines the ongoing activities of the project “FLEX Inland water and Terrestrial Airborne measurements and scientific exploitation” (FLEX-ITA), funded by the Italian Space Agency (ASI) and developed in collaboration between the University of Milano-Bicocca, CNR-IBE, CNR-IREA and FEM. The project aims to establish a network of experts in airborne and field campaigns to support the ESA FLEX mission. The two-year project includes a range of activities with the following key objectives: (i) deploy the airborne imaging spectrometer IBIS (Specim, Spectral Imaging Ltd., Finland) to acquire high-quality data for mapping Solar-Induced Fluorescence (SIF); (ii) develop algorithms similar to the FLEX mission processor for retrieving spectrally resolved SIF from airborne imagery; (iii) advance methods for Calibration/Validation (Cal/Val) of the FLEX mission through integrated airborne and ground-based measurements; (iv) promote innovative approaches to SIF-based indicators for the detection and monitoring of water stress in agriculture; and (v) exploit FLEX-like measurements to improve phytoplankton characterization and evaluation in inland water environments. To achieve the project objectives, airborne campaigns are scheduled for the first and second years, supported by in-situ spectral, chemical/physical and physiological measurements. These efforts aim to build expertise in data acquisition, airborne sensor operation, processing methods, analysis, modelling and interpretation of SIF data. Within this framework, the FLEX-ITA 2024 campaign was designed to acquire data through two distinct campaigns: (i) the “land” campaign, focused on the early detection of water stress in agricultural crops, and (ii) the “water” campaign, aimed at characterizing and quantifying aquatic phytoplankton in lakes.
The campaigns involved simultaneous and coordinated data acquisitions using two high-resolution airborne imaging spectrometers, IBIS and HyPlant, which were co-installed on the same airborne platform (i.e., a Cessna 208B Grand Caravan). This effort was made possible through a partnership between the FLEX-ITA project, CzechGlobe, and Forschungszentrum Jülich. The HyPlant airborne imaging spectrometer consists of the high-resolution FLUO module and the VIS-SWIR DUAL module. The FLUO module operates in the red and near-infrared wavelengths (650–800 nm) with high spectral resolution (FWHM = 0.2 nm). The DUAL module acquires imaging spectroscopy data across the 400–2500 nm range, providing key information for deriving biochemical and structural plant properties relevant for interpreting SIF. The technical specifications of the IBIS sensor are similar to those of the FLUO module. Although the nominal characteristics of IBIS and FLUO are comparable, HyPlant is recognized as the first airborne sensor developed explicitly for SIF mapping. Since its introduction in 2012, it has been widely used in international campaigns related to the FLEX mission. In contrast, IBIS is a newer commercial instrument whose full capabilities are still being explored. The "land" campaign was conducted in Rispescia (Grosseto, Italy) from June 15 to July 7, 2024. Multiple flight lines were successfully acquired on June 22, 28, and 30, contributing to the diverse project objectives. These included multi-temporal flights on different days and times of day, varying flight altitudes, and multi-angular observations. The irrigation manipulation experiment was conducted in the experimental area considering three agricultural crops (i.e., sunflower, Sudan grass, and lentils), each divided into well-irrigated and non-irrigated plots.
During the experiment, field spectroscopy measurements were continuously collected using three FLOX spectrometers (JB-Hyperspectral, Germany) to measure SIF and canopy reflectance at various plots. Physiological measurements were also acquired throughout the campaign to characterize and monitor crop conditions. The "water" campaign covered the southwestern region of Lake Garda (Italy), where the CNR-IREA permanent station is located. On July 4th, an airborne survey was conducted along seven flight lines, repeated at two different times of the day and at varying flight altitudes. Concurrently, an intensive field campaign was carried out with three distinct boats, encompassing a wide range of optical, biological, and physicochemical measurements, alongside water sampling at multiple depths across various sites. Laboratory analyses were performed to quantify chlorophyll-a, secondary pigments, coloured dissolved organic matter, and total suspended particles. Apparent optical properties, such as remote sensing reflectance, were measured using spectroradiometers including ROX, SR-3500, WISP-3, and HyperOCR, while phytoplankton fluorescence and backscattering were evaluated with specialized instruments. Additional in situ measurements, such as turbidity, dissolved oxygen, and other physicochemical parameters, were recorded with multiparameter probes and complemented by atmospheric observations using Microtops II and CIMEL photometers. This contribution presents preliminary results, focusing on the evaluation of the quality and reliability of IBIS airborne imagery, as well as advancements in SIF retrieval algorithms. Special attention will be given to the novel capability of estimating spectrally resolved SIF from airborne imagery, as foreseen for the FLEX mission. SIF retrievals will be quantitatively compared with ground-based measurements from FLOX spectrometers, complemented by assessments of the spatial and temporal consistency of the SIF maps produced.
Finally, the potential contribution of these results in the framework of Cal/Val activities of the FLEX mission will be discussed.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: ESA's FLEX Data Innovation and Science Cluster (DISC)

Authors: Jorge Vicent Servera, Prof. Roberto Colombo, Gwennaël Matot, Lucie Tixier, Béatrice Berthelot, Théo Paccoud, Doctor Neus Sabater, Pekka Kolmonen, Sergio Cogliati, Pietro Chierechetti, Christiaan van der Tol, Marin Tudoroiu
Affiliations: Magellium, University of Milano-Bicocca, Finnish Meteorological Institute, University of Twente - ITC, ESA/ESRIN
FLEX (FLuorescence EXplorer) is ESA's 8th Earth Explorer mission. The mission aims to provide insight into vegetation's photosynthetic activity by characterizing its full energy balance (i.e., incoming and reflected radiation, canopy temperature, and fluorescence emission). Flying with Sentinel-3, FLEX will provide advanced biophysical products to understand photosynthetic activity, with potential applications in stress detection, food productivity, and the carbon cycle. In the last 4 years, several activities have been carried out spanning the development of FLEX core Level-1B (L1B) and Level-2 (L2) mission products, their validation with simulated data generated by an end-to-end mission performance simulator (E2ES), FLEX-related field campaigns, and processing of in-situ data. In March last year, we started the activities for the FLEX Data Innovation and Science Cluster (DISC) ESA/ESRIN project. The FLEX DISC project aims to gather a team of scientists and technical experts in various domains related to the FLEX mission to ensure efficient mission operations, provide the best data quality, and involve potential FLEX data users through outreach activities. The activities to be carried out by the FLEX DISC encompass the following: (1) consolidating the prototype of the L2 processor and industrializing it into an Instrument Processing Facility (IPF), (2) developing a collaborative platform that allows bringing FLEX users and algorithms to data, (3) developing tools to monitor the quality of the FLEX data products (Level-0 to L2), (4) designing and implementing a calibration/validation plan for the commissioning phase and regular operations, (5) monitoring the FLEX data quality and maintaining calibration/validation, and (6) implementing evolutions of the processing algorithms to ensure state-of-the-art data products. The goal of this presentation is to give an overview of the FLEX DISC scientific activities and their current status at milestone 3 (May 2025).
In particular, we will: (1) give an overview of the project objectives, (2) describe the FLEX Level-1C and Level-2 data processing and products with preliminary validation results, and (3) inform the community about the Collaborative Platform design and objectives. With this work, we expect to contribute to the presentation of FLEX operationalisation activities, with a focus on the scientific activities inside the DISC project.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Insights into the FLEX Validation Plan: Objectives and Overall Approach

Authors: Roberto Colombo, Béatrice Berthelot, Alireza Mahmoodi, Gwennaël Matot, Doctor Jorge Vicent Servera, Ornela Nanushi, Lucie Tixier, Doctor Neus Sabater, Pekka Kolmonen, Timo Virtanen, Astrid Zimmermann, Agnieszka Bialek, Pieter de Vis, Micol Rossini, Giulia Tagliabue, Pietro Chierichetti, Sergio Cogliati, Juliane Bendig, Bastian Siegmann, Uwe Rascher, Egor Prikaziuk, Doctor Christiaan van der Tol, Dr Darren Ghent, Rocio Barrio Guillo, Dr Tommaso Julitta, Marco Celesti, Matthias Drusch, Dirk Schüttemeyer, Marin Tudoroiu
Affiliations: University of Milano Bicocca, Magellium, Finnish Meteorological Institute, National Physical Laboratory, Forschungszentrum Jülich GmbH, University of Twente, University of Leicester, JB Hyperspectral Devices, European Space Agency, European Space Agency
This contribution provides an overview of the FLuorescence EXplorer (FLEX) strategy for the calibration and validation (cal/val) of the satellite’s operational Level 1 and Level 2 science data products and it is framed as part of the FLEX Data Innovation and Science Cluster (DISC) project. The FLEX mission, developed by the European Space Agency (ESA) as its 8th Earth Explorer, will provide global maps of vegetation fluorescence as an indicator of photosynthetic activity together with the necessary parameters for deriving the amount of carbon assimilated by plants. FLEX is currently planned to be launched in 2027. The FLEX on-board instrument, the FLuORescence Imaging Spectrometer (FLORIS) sensor, will acquire data in the 500 - 780 nm spectral range with a spectral resolution between 0.3 nm (High Resolution, HR) and 1.8 nm (Low Resolution, LR). The spectral sampling interval will be from 0.1 nm in the oxygen absorption bands (748-769 nm and 686–697 nm) up to 2 nm outside the atmospheric absorption bands. The FLEX satellite will deliver data at a spatial resolution of 300 meters, with observations scheduled around 10:00 local time. It will have a swath width of 150 km and a repeat cycle of 27 days. FLEX will operate in tandem with Sentinel-3, working synchronously with the Sentinel-3 Camera 4 (nadir-looking), which has a 14-degree field of view. The FLEX operational products to be validated include L1B data that are radiometrically calibrated, spectrally characterised, and geo-referenced onto the Earth surface; L1C top-of-atmosphere (TOA) radiances, spatially resampled and projected into the FLORIS-HR focal plane geometry; L2 data of surface apparent reflectance, fluorescence emission in the 670-780 nm spectral region, as well as vegetation biophysical variables and advanced photosynthesis-related products. 
The validation of FLEX products takes into account that both the FLORIS products and the ground truth measurements have inherent uncertainties and variances due to several factors. These include intrinsic instrument accuracy affected by random and systematic noise, the retrieval approach, natural variability of target properties at different spatial scales at the time of acquisition, environmental conditions during ground truth measurements, and finally error propagation in retrieval algorithms or radiative transfer models. Furthermore, a critical aspect of FLEX product validation is ensuring the spatial representativeness of the ground data relative to the satellite’s spatial resolution. In this contribution, we present the comprehensive strategy planned for the validation of FLEX operational products. This strategy ensures traceability to the mission requirements and guarantees that all parameters relevant to the operational products have been adequately validated. The mission requirements are highly challenging, and they cover the end-to-end Earth observation system, including high-level requirements, mission operations, data product development and processing, data distribution and data archiving. Dedicated tools and resources are being developed to support the validation and quality assessment processes. An interactive portal (i.e. the FLEX Collaborative Platform, CP) will be developed and is expected to provide important validation support functionalities. The FLEX CP will provide analysis tools to enable assessment of quality indicators from specific products and address any specialized data processing requirements. The presentation will outline all aspects of FLEX product validation, including validation approaches and procedures, site typology and their specific requirements, sampling and transfer functions, uncertainty budget and validation metrics, as well as prototypes of the cal/val tools.
Risks associated with the validation of the FLEX products have been identified, possible mitigation scenarios have been outlined and back-up solutions proposed. Finally, the validation timeline during pre-launch preparation, commissioning, and routine operations will be detailed.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: SpaFLEX: A Comprehensive Approach to Calibrating and Validating FLEX Level-2 Products

Authors: MaPilar Cendrero-Mateo, Marcos Jiménez, Ricardo Díaz-Delgado, Juanjo Peón, Adrián Moncholí-Estornell, Pedro J. Gómez-Giráldez, Shari Van Wittenberghe, David Aragonés, Félix Muñoz, Laura Carretero, Jorge Alonso, Oscar Gutiérrez, Ana López-Ballesteros, Álvaro Doblas-Rodrigo, Amelia Montoro-Rodríguez, Javier Gorroño
Affiliations: Image Processing Laboratory, University Of Valencia, National Institute of Aerospace Technology, Doñana Biological Station, Spanish National Research Council, ISDEFE, Agri-Food Research and Technology Centre of Aragon, Instituto Técnico Agronómico Provincial, 02007 Albacete, Institute of Water and Environmental Engineering, Universitat Politècnica de València
Sun-induced chlorophyll fluorescence (SIF) and ground-reflected radiance of vegetation will be key operational Level-2 products provided by the upcoming European Space Agency (ESA) FLuorescence EXplorer-Sentinel-3 (FLEX-S3) mission. The reliability of these products needs to be evaluated through a validation process comparing satellite products with fiducial reference data from the Earth’s surface. In order to meet ESA’s uncertainty requirements for Level-2 products, the SpaFLEX project, funded by the Spanish Ministry of Science and Innovation, aims to develop and implement a comprehensive Calibration and Validation (Cal/Val) plan for the FLEX-S3 mission in Spain. This plan will define the Spanish Cal/Val test sites, ground, UAV and airborne instrument characterization, fiducial measurements, sampling protocols, and uncertainty budgets for Level-2 products. A cornerstone of this strategy is the detailed intercomparison and characterization of ground (ASD FieldSpec, FloX, Piccolo Suntracker), UAV (Piccolo, Cubert), and airborne (CFL) instruments. This characterization encompasses indoor and outdoor repeatability and reproducibility procedures for radiometric, spectral, and non-linearity measurement performance. Indoor protocols are based on instrument intercomparison against an integrating sphere and gas lamps at the Spanish National Institute for Aerospace Technology (INTA) facilities. For outdoor procedures, the intercomparison relies on a very homogeneous vegetation/agricultural field, such as festuca or alfalfa fields. In parallel, the project is examining how uncertainties propagate through the reflectance and SIF retrieval algorithms into the final FLEX products. This involves analyzing the relationships and correlations between the uncertainties in input quantities, such as radiance and irradiance, and how these uncertainties propagate through the retrieval processes.
The propagation of uncertainties considers the error correlation within the different contributions and spectral bands. The uncertainty estimate combines a Monte Carlo method and the law of propagation of uncertainties (first-order derivative method). To assess spatial and temporal variability in different ecosystems, the SpaFLEX project has established three Spanish Cal/Val sites: agricultural, forest, and woodland/shrubland. These sites, which differ in vegetation heterogeneity, provide a framework for evaluating the SpaFLEX Cal/Val protocols. As part of the Spanish Cal/Val plan, a field campaign was conducted in 2024 at a holm oak forest site in Sarrion (Teruel). Sentinel-2 data and spatial heterogeneity metrics (Moran's index and semivariograms) were used to optimize the sampling strategy. Ground measurements, including leaf- and canopy-level reflected radiance and fluorescence (FLOX and UAV), were collected based on the optimized strategy. These measurements were upscaled to a 300 x 300 m FLEX pixel using a scaling factor derived from leaf area distribution data. The ultimate goal is to achieve a minimum fluorescence of 2 mW m⁻² sr⁻¹ nm⁻¹ (10% uncertainty) and a relative reflectance change of <30% (500-650 nm) over a 300 x 300 m FLEX pixel to meet ESA uncertainty requirements for Level-2 products. The methodologies developed within the SpaFLEX project will not only contribute to the successful implementation of the FLEX mission but also have broader implications for future Earth observation missions. By addressing the critical aspects of instrument characterization, retrieval algorithm uncertainty, and spatial and temporal variability, SpaFLEX will pave the way for more accurate and reliable remote sensing of terrestrial ecosystems.
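The two propagation approaches mentioned above (a Monte Carlo method and the first-order law of propagation of uncertainties) can be contrasted on a toy quantity such as apparent reflectance derived from radiance and irradiance. All numbers below are illustrative assumptions, not SpaFLEX values, and the inputs are taken as uncorrelated for simplicity:

```python
import numpy as np

# Toy quantity: apparent reflectance R = pi * L / E (illustrative values only)
L, u_L = 50.0, 0.5     # upwelling radiance and its standard uncertainty
E, u_E = 400.0, 4.0    # downwelling irradiance and its standard uncertainty

def reflectance(L, E):
    return np.pi * L / E

# First-order law of propagation of uncertainties (GUM-style), assuming
# L and E are uncorrelated: u_R^2 = (dR/dL)^2 u_L^2 + (dR/dE)^2 u_E^2
dR_dL = np.pi / E
dR_dE = -np.pi * L / E**2
u_R_analytic = np.hypot(dR_dL * u_L, dR_dE * u_E)

# Monte Carlo propagation under the same assumptions: sample the inputs,
# push them through the retrieval, and take the spread of the output.
rng = np.random.default_rng(42)
n = 200_000
samples = reflectance(rng.normal(L, u_L, n), rng.normal(E, u_E, n))
u_R_mc = samples.std()

print(u_R_analytic, u_R_mc)  # the two estimates agree closely for this mild nonlinearity
```

For strongly nonlinear retrievals, or correlated inputs and spectral bands, the Monte Carlo estimate and the first-order formula can diverge, which is one reason for carrying both through the analysis.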
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: An integrated network of FloX systems.

Authors: Tommaso Julitta, Mitchell Kennedy, Andreas Burkart, Paul Naethe, Dr. Agnieszka Bialek, Pieter De Vis, Astrid Zimmermann, Thomas Storm, Laura Mihai, Javier Pacheco-Labrador, Dirk Schuettemeyer, Marin Tudoroiu, Dr. Marco Celesti
Affiliations: JB Hyperspectral Devices GmbH, National Physical Laboratory, Brockmann Consult, INFLPR, CETAL, CSIC, ESA
The calibration and validation (cal/val) of the upcoming ESA Fluorescence Explorer (FLEX) mission and its derived products are crucial yet complex tasks. In addition to vicarious calibration of the sensors, the validation process will depend on integrating various ground-based data gathered from multiple sensors. To capture the diversity of fluorescence emitted by terrestrial ecosystems, a network of instruments spanning a broad range of plant functional types is necessary to validate satellite measurements against ground observations. Within the currently ongoing ESA DEFLOX project, we are establishing an integrated network designed to host and automatically process data from FloX instruments. The FloX (Fluorescence BoX) is capable of simultaneous acquisition of upwelling and downwelling radiance and features a high-performance spectrometer (FWHM: 0.3 nm, SSI 0.15, SNR 1000), enabling continuous measurements of solar-induced chlorophyll fluorescence (SIF). Additionally, the continuous monitoring of spectral downwelling and upwelling radiance with a secondary VIS-NIR spectrometer allows the calculation of reflectance and various spectral indices. Although the FloX instrument is standardized for operation and data processing, there is currently no official FloX network to ensure the long-term acquisition of high-quality data, facilitate data exchange, or handle maintenance activities such as periodic calibration. Through the ESA-funded DEFLOX-CCN5 project, we propose a comprehensive solution to address these needs. The project aims to define and implement a network structure that covers the following key aspects: • Development of a portable calibration device for periodic field calibration. • Creation of an automated data processing pipeline with uncertainty tracking. • Establishment of a database and user interface for data access. 
This contribution provides an overview of these ongoing activities and presents preliminary results within the framework of FLEX validation.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: FLEX Level-2 fluorescence retrieval processor and adaptation to ground-based measurements

Authors: Pietro Chierichetti, Sergio Cogliati, Alberto Maiocchi, Doctor Neus Sabater, Pekka Kolmonen, Jorge Vicent-Servera, Théo Paccoud, Gwennaël Matot, Lucie Tixier, Giulia Tagliabue, Micol Rossini, Roberto Colombo, Jose Moreno, Marin Tudoroiu
Affiliations: University of Milano-Bicocca, Finnish Meteorological Institute, Magellium Artal Group, University of Valencia, European Space Agency, ESA-ESRIN
This contribution presents the design and implementation of the FLEX Level-2 Solar-Induced Fluorescence (SIF) retrieval processor for the FLuORescence Imaging Spectrometer (FLORIS, [1]) aboard ESA's 8th Earth Explorer, the FLuorescence EXplorer (FLEX) mission [2]. The algorithm has been developed as part of the Level-2 Prototype Processor (L2PP) in the framework of the FLEX-DISC (Data Innovation and Science Cluster) project. The DISC contributes to the FLEX mission preparation activities, focusing on the improvement of algorithms and user products, the development of operational processing chains and validation tools, in-flight validation, routine operations and performance monitoring. The L2PP is structured in four successive modules: i) geometric/radiometric co-registration, ii) atmospheric characterization, iii) SIF retrieval, iv) biophysical retrieval. The SIF retrieval module is dedicated to quantifying spectrally resolved SIF between 670-780 nm after the atmospheric characterization and inversion of Top-Of-Atmosphere spectral radiance to surface Apparent Reflectance (R*) performed in the previous module. The L2PP SIF retrieval processor uses the R* spectrum (and its uncertainty) to disentangle the SIF and R spectra. The approach is based on the probabilistic framework of Optimal Estimation, where the inverse problem is solved using Bayes' theorem, considering probability densities and assuming Gaussian distributions for the uncertainties. The method predicts the apparent reflectance by iteratively optimizing the state vector x, based on a cost function that incorporates goodness-of-fit and prior-information criteria. In particular, the inverse method employs the data-uncertainty and a-priori parameter variance-covariance matrices: the first enables different wavebands to be weighted differently, while the second helps to regularize the inverse problem by providing covariance between parameters. 
Uncertainty from the previous processing modules is taken as input and consistently propagated within the SIF retrieval. Currently, the L2PP SIF retrieval is undergoing refinement and testing using synthetic scenes produced by the FLEX End-To-End Mission Performance Simulator (FLEX-E) project, specifically designed to simulate the FLEX instrument. The results of the L2PP SIF retrieval will be presented, focusing on the evaluated fluorescence parameters: SIF at the oxygen-A and oxygen-B bands (O2-A, O2-B), the intensity of the red and far-red peaks, the positions of the SIF peaks and the spectrally integrated SIF (SIFint). Building on this, an adapted version of the SIF retrieval algorithm dedicated to ground-based FloX measurements (JB Hyperspectral Devices) is being developed and tested as part of the FLEX-DISC Calibration/Validation (Cal/Val) activities. The algorithm is based on the methodology implemented in the FLEX L2PP, with the intent of having a consistent retrieval approach between satellite and ground-based observations. An early version dedicated to tower-based measurements was developed as part of the ESA DEFLOX campaign project in collaboration with the Finnish Meteorological Institute. This version includes an atmospheric correction for oxygen transmittance prior to the estimation of SIF. More recently, this version has been adapted to process close-range spectral measurements acquired within a few meters of the target vegetation, where atmospheric compensation is unnecessary. The results of the fluorescence retrieval algorithm for ground-based time series of FloX measurements collected over different crop and forest targets are presented. In the absence of reference measurements such as those provided by FLEX-E in the satellite-based study, the algorithm performance assessment is carried out through cross-comparison with alternative estimation techniques and evaluation of result consistency.
[1] P. Coppo, A. Taiti, L. Pettinato, M. Francois, M. Taccola, and M. Drusch, ‘Fluorescence Imaging Spectrometer (FLORIS) for ESA FLEX Mission’, Remote Sensing, vol. 9, no. 7, Art. no. 7, Jul. 2017, doi: 10.3390/rs9070649. [2] M. Drusch et al., ‘The FLuorescence EXplorer Mission Concept—ESA’s Earth Explorer 8’, IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 3, pp. 1273–1284, Mar. 2017, doi: 10.1109/TGRS.2016.2621820.
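The Optimal Estimation step described above can be sketched with a toy linear inverse problem using the standard Rodgers-style cost function. This is not the L2PP implementation: the wavelength grid, Jacobian, two-parameter state (a reflectance-like term plus a SIF-like additive offset) and all values are hypothetical.

```python
import numpy as np

# Minimal sketch of a Bayesian / Optimal Estimation retrieval:
# MAP solution for a linear forward model y = K x with Gaussian
# measurement noise and a Gaussian prior. Values are illustrative.
rng = np.random.default_rng(1)

wl = np.linspace(670.0, 780.0, 12)                   # wavelengths (nm)
# Toy forward model: a spectrally varying reflectance term + a flat
# SIF-like additive offset.
K = np.column_stack([np.sin(wl / 30.0), np.ones_like(wl)])  # Jacobian
x_true = np.array([0.4, 0.02])        # [reflectance amplitude, offset]
x_a = np.zeros(2)                     # a-priori state
S_a = np.diag([1.0, 1.0])             # a-priori covariance (regularizer)
S_e = (0.002 ** 2) * np.eye(len(wl))  # measurement error covariance

y = K @ x_true + rng.normal(scale=0.002, size=len(wl))

# Closed-form MAP estimate minimizing
# J(x) = (y - Kx)^T S_e^-1 (y - Kx) + (x - x_a)^T S_a^-1 (x - x_a)
S_e_inv = np.linalg.inv(S_e)
S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))  # posterior cov
x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)            # retrieved state

print("retrieved state:", x_hat, "posterior 1-sigma:", np.sqrt(np.diag(S_hat)))
```

S_e weights wavebands individually and S_a regularizes the problem, exactly the two roles the abstract assigns to the data-uncertainty and a-priori covariance matrices; for a non-linear forward model the same update is applied iteratively (Gauss-Newton) rather than in closed form.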
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall M1/M2)

Session: B.02.01 Earth System Governance & Sustainability Frameworks

The world today confronts unprecedented needs for Earth system governance. The global commons framework is the closest existing approach aimed at governing the biophysical systems on Earth upon which the world collectively depends. A newly proposed planetary commons framework updates this approach and introduces a paradigm shift in Earth system governance by recognizing Earth System Tipping Points (ESTPs) and other biophysical systems as our shared commons, urging a move towards a more decentralized & polycentric governance architecture that transcends geopolitical and geographical boundaries. This architecture would distribute responsibility, authority, and accountability across multiple scales and institutions, including at the regional scale of the tipping element.

The world also faces an unprecedented need for a navigational tool capable of guiding us through the Anthropocene and the 21st century as we move away from the stable conditions of the Holocene, driven by the pressures of economic, social, and political forces of humanity. A sustainability framework tailored for the Anthropocene must therefore acknowledge that people and nature are entwined within integrated socio-ecological systems, and guide us towards an ecologically safe and socially just operating space for humanity; a transformation of our societies that brings us back within planetary boundaries, whilst ensuring the social needs of all beings are met, leaving no human or non-human animal behind. Being guided by Planetary Boundaries, while advocating social justice for all, defines a narrow, safe, and just corridor in which we can all thrive. This is our ultimate goal.

However, effective earth system governance is unthinkable without essential remote sensing infrastructure that provides scientific measurements, monitoring capabilities, resilience detection systems & early warning signals. These architectures and sustainability frameworks are an important foundation for guiding Earth Observation (EO) gap analysis, prioritizing Earth Observation (EO) research & applications, and bringing together the global community around a shared vision for 2050, as outlined in the ESA Systems-of-Systems Reference Architecture Blueprint.

This session aims to bring together the global Earth Observation (EO) Community under the framework of sustainability science narratives. It will focus on integrating multiple disciplines and communities to participate in the process of learning, interest formation/positioning, coalition building, and strategic planning. Its primary objective is to explore how the global Earth Observation (EO) community can develop the essential remote sensing infrastructure needed to support the governance of Earth System Tipping Points (ESTPs), other biophysical systems, and support sustainability frameworks (e.g. Planetary Boundaries & Doughnut Economics) as we navigate the Anthropocene and 21st century.

We call for multidisciplinary abstracts on sustainability science, systems thinking, earth system governance, post-growth economics models, planetary commons & boundaries, and the application of remote sensing technologies in these domains.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Bridging Space, Knowledge, and Justice: Rethinking Sustainability in the Anthropocene

Authors: Sahba El-Shawa
Affiliations: Scuola Superiore IUSS Pavia
This research introduces space epistemology as a framework for leveraging space-derived knowledge - both experiential and intellectual - to address the critical challenges of sustainability in the Anthropocene. The concept of sustainability is critically examined in this work, as defined through economic, environmental, and social dimensions. While the Sustainable Development Goals (SDGs) have been instrumental in providing a shared language for global sustainability efforts, this research highlights their limitations, particularly in addressing deeper systemic issues like epistemic injustice and the overemphasis on economic growth. Epistemic injustice, a concept central to this work, refers to the systemic undervaluation of marginalized knowledge systems and perspectives, as well as the inaccessibility of knowledge. In the context of sustainability, this injustice manifests in the exclusion of Indigenous knowledge and alternative worldviews from mainstream governance frameworks. By linking epistemic injustice to the sustainability discourse, the research underscores the necessity of an inclusive approach that integrates diverse ways of knowing. This is particularly relevant in the context of global challenges like climate change and biodiversity loss, which require solutions that transcend geopolitical boundaries and incorporate localized, contextual knowledge. This framework further critiques sustainable development as a concept, noting its tendency to prioritize economic development, often at the expense of social and environmental sustainability. Instead, it advocates for reimagining sustainability as a paradigm rooted in equity and justice - one that values the interconnectedness of human and non-human systems and prioritizes the well-being of both. 
While acknowledging the value of the SDGs in communicating sustainability goals, this work calls for a critical reassessment of their application, urging a shift toward frameworks that address systemic inequities and foster resilience across socio-ecological systems. Emerging technologies are explored as tools for democratizing access to knowledge and enabling more equitable participation in sustainability practices. Specifically, this research employs case studies to investigate the potential of Virtual Reality (VR) simulations of the Overview Effect and Artificial Intelligence for Earth Observation (EO), in order to highlight their capacity to bridge knowledge gaps and inform policy frameworks. The research also critically examines the dual-use nature of space technologies, highlighting their potential for both societal benefit and harm. By aligning these technologies with social and environmental goals, this framework advocates for a reorientation of the space industry to prioritize equity and sustainability. Ultimately, this work proposes a comprehensive framework for integrating space epistemology into Earth system governance. It emphasizes the importance of interdisciplinary collaboration and systems thinking in addressing the crises of environmental degradation and social inequity. By harnessing the transformative potential of space-derived knowledge and rooting sustainability practices in ethical and inclusive principles, this research charts a path toward a future that centers around social and environmental justice.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Advancing Earth System Governance through the Human-Forest Nexus: Spatiotemporal Patterns and Innovations for Planetary Commons

Authors: Emanuele Massaro
Affiliations: European Commission Joint Research Centre
Forests, covering 31% of the Earth’s land surface, are critical to planetary health, offering ecosystem services such as carbon sequestration and biodiversity conservation, and socio-economic benefits to over 1.6 billion people living near forested areas. However, these socio-environmental systems face increasing pressures from population growth, deforestation, and land-use changes. To address these challenges, innovative frameworks that recognize forests as part of the planetary commons are essential. Such frameworks must embrace decentralized, polycentric governance architectures and integrate Earth Observation (EO) systems to monitor Earth System Tipping Points (ESTPs) and ensure resilience. This study provides novel insights into the human-forest nexus over 45 years (1975–2020), introducing robust metrics and a technical foundation for linking sustainability frameworks like Planetary Boundaries and Doughnut Economics to actionable governance strategies. Our analysis uses cutting-edge geospatial datasets, including the HILDA+ dataset for annual land-use changes and the Global Human Settlement Layer for population data, to evaluate long-term interactions between human populations and forests. By developing three metrics—Forest Area per Capita (FAC), Forest Proximate People (FPP), and the novel Forest Human Nexus (FHN)—we provide a comprehensive assessment of spatial and temporal dynamics in the human-forest relationship. These metrics address key gaps in understanding how the proximity, accessibility, and distribution of forest resources evolve globally. The FAC metric quantifies forest area availability relative to population size, FPP identifies populations living within proximity to forests, and the FHN integrates spatial proximity, forest accessibility, and population density into a single socio-ecological index. 
Technical Contributions and Findings

Global Trends in Forest Area per Capita (FAC): FAC declined by nearly 50% globally between 1975 and 2020, with severe reductions in tropical regions such as Africa and Southeast Asia, driven by rapid deforestation and population growth. These declines highlight ecological stress zones requiring urgent policy interventions. By contrast, Europe showed positive trends, driven by declining population densities and large-scale reforestation initiatives. These regional differences illustrate the complex interplay of demographic pressures and land-use policies.

Temporal Patterns of Forest Proximate People (FPP): While the absolute number of people living near forests increased globally, normalized FPP (accounting for population growth) displayed varying regional trajectories. Europe showed significant increases in normalized FPP due to integrated urban planning and reforestation efforts, while tropical regions, particularly in Africa, saw decreases due to deforestation and urbanization. This divergence underscores the role of effective governance in maintaining socio-ecological proximity.

Forest Human Nexus (FHN): The FHN indicator provides a comprehensive measure of human-forest interactions, integrating proximity, forest area per capita, and population density. Analysis revealed contrasting regional trends: Southern Europe and parts of Asia showed increasing FHN, reflecting successful conservation policies, while Southeast Asia and Middle Africa exhibited significant declines due to deforestation and land conversion. Positive trends in regions such as Southern Europe suggest the potential for policy-driven recovery, while negative trends in tropical regions highlight the urgency for targeted governance interventions.

Impact of Methodological Innovation: The study utilizes a moving-window approach to calculate these metrics at a 50 x 50 km resolution, combining high-resolution datasets to provide spatially explicit insights. The integration of population and land-use data with EO tools offers new opportunities to quantify and monitor socio-ecological dynamics, aligning with the ESA Systems-of-Systems Reference Architecture Blueprint for EO systems.

Innovation and Policy Relevance: This work represents a paradigm shift in Earth system governance by linking forests to the broader concept of planetary commons and operationalizing the governance of ESTPs. The introduction of the FHN indicator as a novel metric addresses limitations in existing approaches, providing actionable insights for policymakers and conservationists. By identifying areas of ecological stress and resilience, this study enables the design of targeted interventions, such as reforestation programs, sustainable land-use practices, and equitable resource allocation.

Application to Sustainability Science and Earth Observation: The findings demonstrate how EO technologies can support sustainability science by enabling precise monitoring of forest dynamics, quantifying resilience thresholds, and offering early-warning systems for environmental crises. The integration of EO with sustainability frameworks like Planetary Boundaries facilitates the operationalization of concepts such as ecological safety and social justice. This approach aligns with the session’s goals by bridging disciplines and promoting partnerships across governance, science, and technology.

Multidisciplinary Integration and Societal Impact: The study emphasizes the importance of collaboration across disciplines, highlighting how geospatial analysis, governance strategies, and sustainability science can converge to address the challenges of the Anthropocene. By leveraging EO systems and innovative metrics, this research offers tools for policymakers, conservationists, and EO communities to collaboratively navigate socio-ecological challenges. The study underscores the need for decentralized governance models that transcend geopolitical boundaries, ensuring equitable access to forest resources and fostering resilience at regional and global scales.

Conclusion: By advancing innovative metrics, rigorous validation, and practical relevance, this study provides a foundational contribution to Earth system governance. The spatiotemporal analysis of the human-forest nexus underscores the critical role of forests within the planetary commons, offering actionable insights for balancing ecological sustainability and socio-economic equity. This work supports the session’s objectives by demonstrating how EO systems and sustainability frameworks can guide transformative actions, ensuring resilience and equity in the Anthropocene.
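The moving-window idea behind a metric such as FAC can be sketched as follows. This is a synthetic illustration, not the HILDA+/GHSL pipeline: the grids, window size, and units are hypothetical, with a small cell window standing in for the 50 x 50 km window.

```python
import numpy as np

def window_sum(a, win):
    """Sum over each win x win moving window (valid positions only),
    computed efficiently with 2-D cumulative sums."""
    c = np.cumsum(np.cumsum(a, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # prepend a zero row and column
    return c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]

rng = np.random.default_rng(2)
forest_frac = rng.uniform(0.0, 1.0, size=(60, 60))  # forest fraction per cell
population = rng.uniform(1.0, 50.0, size=(60, 60))  # persons per cell

win = 5  # e.g. a 5 x 5-cell window standing in for 50 x 50 km
# FAC per window: total forest area divided by total population
fac = window_sum(forest_frac, win) / window_sum(population, win)
print("FAC grid shape:", fac.shape, "range:", fac.min(), fac.max())
```

The same windowed sums could feed proximity-style metrics (e.g. population within a distance of forest), which is the flavour of computation the FPP and FHN indicators require at scale.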
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Adaptive Risk Management for the Governance of Earth System Tipping Points

Authors: Tristram Walsh, Professor Michael Obersteiner
Affiliations: Environmental Change Institute, University Of Oxford, International Institute for Applied Systems Analysis
Given the current level and rate of global warming, and the expected progression of climate policies around the world, it is increasingly likely that the planet will initially overshoot the long-term temperature goal (LTTG) established in the Paris Agreement, and consequently enter a new regime of heightened Earth system risk. Notably, research is identifying with increasing confidence that several elements of the Earth system are capable of undergoing irreversible abrupt changes if certain tipping thresholds are crossed. While ~20 years ago the thresholds for these “large scale discontinuities” were understood to transition from moderate to high likelihood broadly around 5–6°C, understanding of these systems has been steadily evolving; the estimated thresholds have steadily decreased over time and some are now understood to potentially lie at around 1.5–2°C. Therefore, overshooting the LTTG, even temporarily, may induce far more significant and qualitatively different risks to society and ecosystems than previously anticipated. These risks are both under-researched and inadequately represented in scenario planning, and indicate that shifts are required in decision-making under uncertainty and in modes of Earth system governance. Indeed, it can be shown that achieving the Paris Agreement’s LTTG is necessary but not sufficient for constraining the risk associated with Earth system tipping points. As such, the LTTG and associated net zero targets should be reframed as an instrumental sub-goal, with additional governance targets and mechanisms required to fulfil the more general precautionary principle and ultimate risk-limiting intent of the Convention. The proposal of more holistic Earth System Governance may be able to address these gaps by comprehensively integrating the complexities and non-linearities of the Earth system into decision-making for the 21st century. 
However, to realise this, a significant extension of Earth observation networks, coupled with specialised Earth system models capable of representing tipping dynamics, may be required; such digital twins could enable learning over time about system stability and provide early warning signals for transitions, both of which could feed into novel risk management frameworks capable of more directly steering the planet away from these tipping risks. Towards this end, we construct adaptive risk management policies and model the extent to which they may be theoretically more capable of robustly navigating the Earth system away from tipping points than existing net zero pathways focused predominantly on global temperature. These policies are aligned with a recently published agenda for the governance of Earth system tipping points, in that they are global, preventative, anticipatory, and adaptive. In particular, mitigation actions are prescribed at the global level; actions are oriented around preventing tipping as far as possible; uncertainty of tipping elements is incorporated ex-ante by using information that may be derivable from Earth observations and digital twin models; and decisions reflexively respond to early warning signals and variability in the Earth system, for example by proactively course-correcting using rapid response interventions such as solar radiation management (SRM). Crucially, since it is unlikely that full information about the temperature and timescales of tipping elements will be available until well after the associated thresholds have been crossed (by which time it will be too late), we explore the extent to which decisions made using these adaptive policies are robust against incorrect or incomplete knowledge. 
We anticipate a trade-off emerging between (i) the precision and reliability of information about the proximity of a tipping element to its threshold, and (ii) the speed and flexibility demanded of global mitigation pathways to respond. Inadequate operational deployment of Earth observation and modelling pipelines to provide reliable early warning indexes for tipping elements may therefore mandate larger and less predictable mitigation course corrections to prevent passing thresholds, which may then induce dependence on interventions such as SRM techniques. However, the research, deployment, and monitoring of the effects of SRM would itself also benefit from enhancements in Earth observation, illustrating just one of the many strategic win-wins of developing a stronger global infrastructure of observations coupled into digital twins. Such systems could enable us to anticipate more of the risks of climate overshoot, design portfolios of mitigation interventions more precisely tailored to the ultimate goal of climate risk management, navigate through these risks towards safe landing climates using high-quality quasi-real-time indices, and ultimately mediate governance of Earth system tipping points throughout and beyond the 21st century.
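One widely used class of early-warning signals for the tipping dynamics discussed above is rising lag-1 autocorrelation from "critical slowing down". The sketch below demonstrates it on a toy AR(1) process whose restoring rate weakens over time; it is an illustration of the generic technique only, not the authors' model.

```python
import numpy as np

# Toy early-warning-signal demo: as a system approaches a tipping point,
# its recovery from perturbations slows, which shows up as increasing
# lag-1 autocorrelation (AC1) in a rolling window. The AR(1) process
# below is synthetic; phi plays the role of a weakening restoring rate.
rng = np.random.default_rng(3)

n = 2000
phi = np.linspace(0.2, 0.95, n)   # memory rises as the threshold nears
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal(scale=0.1)

def lag1_autocorr(w):
    """Lag-1 autocorrelation of a window of the time series."""
    return np.corrcoef(w[:-1], w[1:])[0, 1]

win = 200  # rolling-window length
ac = np.array([lag1_autocorr(x[i - win:i]) for i in range(win, n)])

# The indicator trends upward as the simulated tipping point is approached
print("early AC1:", ac[:100].mean(), "late AC1:", ac[-100:].mean())
```

Operational pipelines of this kind (applied to observed or digital-twin state variables, often alongside rolling variance) are one concrete form the "reliable early warning indexes" discussed above could take.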
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Earth Observation Science Strategy Foundation Study: Main Achievement and Results

Authors: Johnny A. Johannessen, Stephen Briggs, Jon Styles, Andy Shaw, George Dyke, Han Dolman, Peter Thorne, Ana Bastos, Fabrizia Buongiorno, David Crisp, Christine Gommenginger, Catherine Downy, Alain Hauchecorne, Martin Herold, Anna Hogg, Josie Mahony, Jose Moreno, Isabelle Panet, Lasse H. Pettersson, Gerardo Lopez Saldana, Karina von Schuckmann, Bob Su, Joanna Tamminen
Affiliations: Nansen Center, Steeple, Assimila, Symbios, NIOZ, MU, BGC-Jena, INGV, JPL-NASA, NOC, IPSL, GFZ-Potsdam, UL, UV, IPGP, MOI, UT-ITC, FMI
In this presentation we will highlight the main findings and results derived from the recently completed ESA funded Earth Observation (EO) Science Strategy Foundation study. The key goal of the study has been to research and document priority EO scientific needs, and their role in addressing important gaps in our knowledge of the Earth system, backed up by two supporting specific objectives targeting traceability exercises. This conceptual approach was implemented to identify: (i) geophysical information and observational gaps associated with selected EO Science Questions (SQs); and (ii) how each question is linked to societal benefits and contributes to international policies and agendas. The study provided foundational support to the development of ESA’s 2024 EO Science Strategy. A comprehensive database of current and planned satellite missions and instruments, with linkages via geophysical observables to the science questions, has been produced, coupled with a structured method and tools to formulate, characterize, visualize and assess the SQs. The method is adaptive and can be re-used to add future pressing science questions and/or technological advances. Capitalizing on the involvement of an international expert team of Earth system scientists, two user consultation meetings each attended by over 200 scientists, and a complementary survey, a total of 23 SQs have been identified and down-selected from initially nearly 100 candidate questions. In so doing, and with an outlook towards the next 5-10 years, four criteria were applied, notably: (i) Novel / discovery science; (ii) Policy relevance and benefits; (iii) Scope to reduce critical knowledge gaps; (iv) Potential to fill critical observation gaps through innovation in space technology. Altogether, the down-selection ensured a manageable number of science questions while still covering the whole domain of Earth system sciences and pressing research priorities. 
Note that the approach was interdisciplinary, and science at the interface of geophysical domains (e.g. atmosphere, land, cryosphere, ocean, solid Earth) was highlighted. In summary, the study has made novel contributions to an objective methodology for establishing metrics to rank key Earth system research questions against identified criteria, such as innovation, policy relevance, and societal benefits. It supports an approach to future EO programme development that will ensure its societal relevance and that will appeal, in its innovation, to the scientific and technical communities.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Earth System Governance as a guiding framework for the European Space Agency (ESA) Earth Observation Reference Architecture – A response to the new paradigm of Earth System Stewardship

Authors: Mr Steven George, Craig Donlon, Bernardo Carnicero Dominguez
Affiliations: European Space Agency (ESA)
A new paradigm of Earth System Stewardship is taking shape that calls for a radical reassessment of how we understand and structure our economic, political, technological and social systems, with the understanding that people and nature are entwined within socio-ecological systems. It underscores a shared reality—that our collective well-being relies on the governance of our planetary commons (i.e., Earth System Tipping Points (ESTPs), Planetary Boundaries, and other biophysical systems) that maintain the stability, resilience, and life-support functions of the Earth system. It is in response to this unprecedented need that the European Space Agency (ESA) has included Earth System Governance in its Earth Observation (EO) Systems-of-Systems Reference Architecture (Blueprint), as a guiding framework and shared vision for 2040+. This strategic inclusion responds to the immediate governance challenge of preventing the transgression of tipping thresholds, with the long-term vision of fostering the necessary remote sensing infrastructure (i.e., Acquisition, Dissemination and Intelligence) for the monitoring of our planetary commons. In this manner, appropriate satellite solutions and priorities can be pioneered to establish the measurement evidence base that underpins the effective governance and policy decisions essential to protect Earth’s complex and self-regulating systems that have maintained the stable conditions of the Holocene. Furthermore, this new paradigm offers new opportunities for studying institutional & structural dynamics, relationships and functional behaviour within the proposed polycentric and decentralised governance architecture—which is particularly significant considering ESA’s strategic and unique global position and its extensive expertise in Earth Observation. 
Through pooling the financial and intellectual resources of its member states, and maintaining relationships with the international climate network, including Earth system modelers, research communities, policy stakeholders, and monitoring services, ESA can undertake programmes and activities far beyond the scope of any single nation. The inclusion of this new paradigm invites an interdisciplinary, whole-Earth approach into the ESA Earth Observation Programme, forming part of a broader vision to nurture community, awareness, momentum, and support for effective Earth System Stewardship & Governance.

Thursday 26 June 16:15 - 17:45 (Room 1.14)

Session: C.04.01 - Utilisation of meteorological satellite data

The status of development of ESA missions will be outlined in four sessions of 1 hour 30 minutes each (together equivalent to a full day).
Participants will have the unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and the status of mission development activities will be presented together with industrial and science partners.

Utilisation of meteorological satellite data


Potential impact from future EO mission data in Numerical Weather Prediction and arising science challenges


  • Stephen English – ECMWF

Advancing Satellite Data Assimilation for high-impact weather forecasting at GeoSphere Austria


  • Adhithiyan Neduncheran – GeoSphere Austria

Constellation of mini satellites with IR sounders


  • Adrien Deschamps – CNES

Extracting 3D Wind Profiles from Hyperspectral Infra-red Sounder Missions, Status of the Activities at EUMETSAT


  • Regis Borde – EUMETSAT

More than just Lightning: Meteors, space debris and explosions detected by the MTG Lightning Imager


  • Pierre Kokou – ESA

Thursday 26 June 16:15 - 17:45 (Hall F2)

Session: A.02.04 Advances in Monitoring and Management of Forest Ecosystems - PART 4

About 4 billion hectares, or 31 percent of the global land surface, is covered by forests according to the latest FAO Forest Resource Assessment (FRA 2020). Forests are a valuable and irreplaceable ecosystem, playing an important role from an ecological, social, and economic perspective. Monitoring of forests is essential for a better understanding of how to protect and sustainably manage this valuable ecosystem.

Information needs vary and include forest area, forest types, forest structure, disturbances, health, and biomass. An unprecedented wealth of observations by optical and microwave sensors, active and passive, from low to high resolution allows new ways to monitor forests. Emphasis will be put on advances in detecting forest disturbances, forest health monitoring, species identification, support to sustainable forest management, estimating forest biomass and forest carbon accounting. This session will showcase some of the more recent key achievements, including methods/algorithms, science and applications.


Thursday 26 June 16:15 - 17:45 (Hall F2)

Presentation: Global annual aboveground biomass carbon maps from SMOS-IC L-band vegetation optical depth

Authors: Xiaojun Li, Philippe Ciais, Dr. Frédéric Frappart, Dr. Lei Fan, Clément Albergel, Dr. Klaus Scipal, Dr. Yuqing Liu, Dr. Yi Zheng, Dr. Jean-Pierre Wigneron
Affiliations: INRAE, Bordeaux Sciences Agro, UMR 1391 ISPA, Laboratoire des Sciences du Climat et de l’Environnement, UMR 1572 CEA-CNRS-5 UVSQ, Chongqing Jinfo Mountain Karst Ecosystem National Observation and Research Station, School of Geographical Sciences, Southwest University, European Space Agency Climate Office, ECSAT, European Space Agency (ESA)
The Soil Moisture and Ocean Salinity (SMOS) mission, launched in 2009 with the first L-band satellite radiometer, has provided key information on global surface soil moisture (SM) and vegetation water content, which are essential for monitoring the Earth's water and carbon cycles. SMOS-IC version 2 is the latest algorithm for retrieving land surface SM and L-band vegetation optical depth (L-VOD) from SMOS observations, designed to minimize reliance on ancillary information. Benefiting from this, since its release, SMOS-IC L-VOD has been identified as a useful indicator for estimating global aboveground biomass carbon (AGC) dynamics. Through a space-for-time approach linking annual L-VOD gradients to the spatial gradients of existing static AGC maps, the annual time series of SMOS-IC L-VOD has served as the foundation for several studies assessing the impact of climate and anthropogenic activities on global AGC change. However, the sensitivity of L-VOD to vegetation water content raises concerns about its direct use for accurately inferring AGC changes, and the inconsistent calibration methods currently used for converting L-VOD to AGC further hinder the unification of AGC estimates. To address these issues, we first corrected the L-VOD time series for the effects of vegetation moisture content and then implemented a systematic global-scale calibration, resulting in a global data set of annual AGC stocks from 2010 to the present at a 25 km resolution. The accuracy assessment demonstrated that the SMOS-IC L-VOD-derived AGC presents better spatial and temporal consistency with national forest inventory data and forest disturbance events compared to other mainstream satellite products. Based on these data, we found that the biomass carbon in northern ecosystems (>30°N) decreased at a rate of −0.31 PgC yr⁻¹ (range: −0.44 to −0.23) during 2016-2023, challenging the prevailing view that this region has consistently acted as a biomass carbon sink.
Finally, we illustrate how this global product is contributing to a better understanding of the response of global vegetation carbon stocks to climate change.
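The space-for-time idea described in the abstract can be caricatured in a few lines: calibrate the spatial L-VOD-to-AGC relation on a static benchmark map, then apply the fitted relation through time. The numbers below are invented and the linear fit is a deliberate simplification (the actual SMOS-IC processing uses a non-linear relation plus the moisture correction discussed above):

```python
# Illustrative space-for-time sketch (hypothetical numbers, not SMOS-IC code):
# calibrate a spatial L-VOD -> AGC relation on a static benchmark AGC map,
# then apply the fitted relation to annual L-VOD to obtain annual AGC.

lvod_static = [0.2, 0.4, 0.6, 0.8, 1.0]       # mean L-VOD per pixel (-)
agc_static = [20.0, 45.0, 68.0, 93.0, 115.0]  # benchmark AGC map (MgC/ha)

# Ordinary least-squares fit AGC = a + b * L-VOD over space.
n = len(lvod_static)
mx = sum(lvod_static) / n
my = sum(agc_static) / n
b = sum((x - mx) * (y - my) for x, y in zip(lvod_static, agc_static)) / \
    sum((x - mx) ** 2 for x in lvod_static)
a = my - b * mx

# Apply the spatial calibration through time for one pixel.
lvod_annual = {2010: 0.55, 2015: 0.58, 2020: 0.50}
agc_annual = {yr: a + b * v for yr, v in lvod_annual.items()}
print(agc_annual)
```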

Thursday 26 June 16:15 - 17:45 (Hall F2)

Presentation: Forest Carbon Monitoring Toolbox Expansion with UNet-family Deep Learning Models

Authors: Oleg Antropov, Jukka Miettinen, Zsofia Koma, Johannes Breidenbach
Affiliations: VTT Technical Research Centre of Finland, Norwegian Institute of Bioeconomy Research (NIBIO)
The European Space Agency (ESA) Forest Carbon Monitoring project (FCM; https://www.forestcarbonplatform.org/) is developing a toolset for forest biomass and carbon monitoring, aiming to meet the requirements of different forestry stakeholder groups. The project aims to provide means for forestry stakeholders to respond to increasing carbon monitoring and reporting requirements, ranging from private company estates to Europe-wide monitoring. The tools have been developed to take advantage of a combination of satellite data with fusion of optical and radar datasets. The toolbox includes multivariate empirical and physical approaches for forest variable prediction. These data can be used as input for process-based ecosystem modelling to provide biomass and carbon flux predictions and future forecasts. The latest inclusions in the toolbox enable the utilization of UNet-family deep learning models for forest variable prediction [1] in areas with limited field sample plots available [2]. Here we first provide an overview of the tools available in the FCM toolbox, with examples of output results and accuracy from various demonstration areas. We then focus on the newly developed UNet tools leveraging various EO data modalities (Sentinel-2, Sentinel-1, PALSAR-2 and their combinations) and earlier pretrained backbone models for feature extraction [3,4]. One of the key practical scenarios we explore is the pretraining of the UNet+ model with high-quality wall-to-wall reference data, followed by the model transfer to a target geographical area. Two different model transfer approaches are demonstrated depending on the type of field reference data, in particular utilizing forest plot information and forest plot statistics. The effectiveness of the UNet model transfer approach is highlighted by comparing the results with more traditional approaches for EO-based forest inventory, including a benchmark kNN method.
Explored UNet architectures include the vanilla UNet as well as an improved UNet incorporating an attention module, and a pretrained ResNet18 encoder paired with a UNet-style decoder. With respect to model transfer assessment and the upscaling potential of the explored model transfer approaches, we compare our results with end-to-end training using forest plots and ALS-based feature maps. The model pretraining is performed in Finland and Catalonia using wall-to-wall ALS-based datasets, while target application areas are located in Norway (for geographic transfer) and Catalonia, Spain (for temporal transfer). The first results demonstrate the robustness of this approach, with a clear improvement in accuracy and reduced saturation effects in high-volume forests. The spectral-spatial approach with model transfer shows a clear improvement in prediction accuracy compared to end-to-end training with forest plots, and also allows good results to be obtained with a significantly reduced number of plots when a pretrained model is available. References: [1] Ge, S.; Antropov, O.; Häme, T.; McRoberts, R.E.; Miettinen, J. Deep Learning Model Transfer in Forest Mapping Using Multi-Source Satellite SAR and Optical Images. Remote Sens. 2023, 15, 5152. [2] S. Ge, H. Gu, W. Su, J. Praks and O. Antropov, "Improved Semisupervised UNet Deep Learning Model for Forest Height Mapping With Satellite SAR and Optical Data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 5776-5787, 2022. [3] R. S. Kuzu, O. Antropov, M. Molinier, C. O. Dumitru, S. Saha and X. X. Zhu, "Forest Disturbance Detection via Self-Supervised and Transfer Learning With Sentinel-1&2 Images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 4751-4767, 2024. [4] O. Antropov et al., "Semi-Supervised Deep Learning Representations in Earth Observation Based Forest Management," IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 2023, pp. 650-653.

Thursday 26 June 16:15 - 17:45 (Hall F2)

Presentation: Improving EU Forest Monitoring with Carbon Budget Modeling and Satellite-Based Deep Learning

Authors: Samuele Capobianco, Matteo Piccardo, Gonzalo Oton Azofra, Alessandro Cescatti, Mirco Migliavacca
Affiliations: Joint Research Centre Consultant (JRC), European Commission, Joint Research Centre (JRC)
Accurate monitoring of forest carbon stocks and fluxes is critical for achieving the EU’s 2050 climate neutrality target. Forest ecosystems, as vital carbon sinks, play a pivotal role in mitigating climate change, while nature-based solutions like carbon farming are essential for enhancing carbon sequestration. However, implementing these strategies requires reliable systems for monitoring carbon budgets with high spatial and temporal resolution. This study presents an integrated approach combining Earth Observation (EO) technologies with advanced modeling techniques to enhance the Monitoring, Reporting, and Verification (MRV) of forest carbon removals. The bottom-up approach incorporates the Carbon Budget Model EU-CBM-HAT (Blujdea et al. 2022) with vegetation growth curves derived from high-resolution multitemporal airborne LiDAR data. The top-down approach utilizes a deep neural network trained on satellite radar, optical data, soil characteristics, topographic features, and climate variables to estimate biomass changes in undisturbed areas. This neural network is trained using various reference datasets, such as National Forest Inventories (NFIs) and biomass maps where available, and employs semi-supervised techniques to improve performance. The methodology was tested in selected European regions, demonstrating significant improvements in the accuracy and reliability of forest carbon monitoring. We compare results obtained from different data sources and training methodologies, highlighting the benefits of integrating various data types and modeling approaches for improved predictions. Feature importance analysis reveals key variables driving biomass stock changes, particularly highlighting the high importance of forest type, satellite data, and climate variables in predicting stock dynamics. This analysis enhances both the interpretability and robustness of the models, providing valuable insights into the factors influencing carbon sequestration. 
This dual modeling approach supports carbon farming practices by delivering effective MRV systems aligned with the EU’s climate neutrality goals, contributing to broader global climate mitigation efforts.

Thursday 26 June 16:15 - 17:45 (Hall F2)

Presentation: Characterising Recent Aboveground Forest Biomass Dynamics Detected in High-Resolution Satellite-Based Global Maps

Authors: Dr Nezha Acil, Richard Lucas, Maurizio Santoro, Heather Kay, Prof Heiko Balzter
Affiliations: University of Leicester (NCEO), Aberystwyth University, GAMMA Remote Sensing
Forests play a pivotal role in the global carbon balance, capturing atmospheric carbon through photosynthesis, storing it in above- and below-ground biomass, and releasing it back through respiration, disturbances and decomposition. In the context of climate change, measuring the global aboveground biomass (AGB) of woody vegetation has been the centre of attention for quantifying the contribution of forests to the global carbon budget. Despite remarkable advances in mapping AGB globally, with enhanced retrieval algorithms and finer levels of detail, accurately measuring its variation over time has remained challenging due to multiple constraints. These include large uncertainties in the modelled AGB estimates (due to biases and random errors), spatiotemporal inconsistencies across the measuring spaceborne sensors and instruments (e.g. GEDI, ICESat-2, PALSAR on ALOS-1 and ALOS-2) and the paucity of longitudinal ground-truth data for calibration and validation (i.e. repeated airborne LiDAR campaigns, national forest inventories and multi-census permanent plots). Understanding AGB dynamics across the world's forests, and how they are related to confounding factors, is essential for quantifying global carbon changes reliably. Using Google Earth Engine, we explore the annual AGB maps between 2015 and 2021 produced by the European Space Agency (ESA) Climate Change Initiative (CCI) (Santoro and Cartus, 2024) to assess AGB dynamics across the world, accounting for sources of uncertainty and relationships with environmental covariates. Specifically, we quantify the direction and magnitude of AGB changes using different approaches (stock differencing and trend analysis), stratifying by forest classes and change types (e.g. fire, harvest), and correlating with error-inducing environmental gradients (e.g. topography, moisture).
We further check the agreement of the detected changes with those retrieved using a newly developed framework characterising AGB dynamics from independent optical-based land cover data. This framework combines the Food and Agriculture Organization (FAO) Land Cover Classification System (LCCS) with the Driver-Pressure-State-Impact-Response (DPSIR) framework to provide evidence of pressures leading to change and associated impacts, including on AGB states and dynamics. We illustrate with case studies from different regions, notably temperate forests in the United Kingdom, Mediterranean woodlands in Australia and mangroves in South-East Asia. By uncovering the global patterns of remotely sensed AGB changes and quantifying their linkages with related covariates, our results provide fundamental insights into the quality of space-based AGB maps for monitoring forest carbon dynamics, guiding policy making and supporting sustainable forest management.
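The two change-quantification approaches named in this abstract, stock differencing and trend analysis, can be sketched for a single pixel's annual AGB series. The values below are made up purely for illustration and are not taken from the CCI maps:

```python
# Sketch of the two change metrics named in the abstract, for one pixel's
# annual AGB series (invented values, Mg/ha).

years = [2015, 2016, 2017, 2018, 2019, 2020, 2021]
agb = [182.0, 184.5, 181.0, 179.5, 176.0, 177.5, 173.0]

# 1) Stock differencing: last epoch minus first epoch.
stock_change = agb[-1] - agb[0]

# 2) Trend analysis: ordinary least-squares slope in Mg/ha per year.
n = len(years)
mean_t = sum(years) / n
mean_y = sum(agb) / n
slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(years, agb)) / \
        sum((t - mean_t) ** 2 for t in years)

print(f"stock difference: {stock_change:+.1f} Mg/ha")
print(f"OLS trend: {slope:+.2f} Mg/ha per year")
```

The two metrics can disagree when the series is noisy or non-monotonic, which is one reason the abstract compares both.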

Thursday 26 June 16:15 - 17:45 (Hall F2)

Presentation: Mapping Land Use Following Deforestation Across the Pan-Tropics With Sentinel Data

Authors: Robert Masolele, Dr. Johannes Reiche, Camilo Zamora, Dr. Diego Marcos, Dr. Marc Rußwurm, Dr. Liz Goldman, Ate Poortinga, Dr. Niki De Sy, Clément Bourgoin, Dr. Aurélie Shapiro, Prof. Dr. Martin Herold
Affiliations: Wageningen University, Helmholtz GFZ German Research Centre for Geosciences, Remote Sensing and Geoinformatics Section, Inria, World Resources Institute, Spatial Informatics Group, LLC, European Commission, Joint Research Centre, Food and Agriculture Organization of the United Nations, Mixed Pixel
Tropical forests are biodiversity hotspots, providing critical ecosystem services that sustain millions of plant and animal species. However, these forests are increasingly threatened by human activities through the expansion of commodity-driven land uses such as soy, oil palm, rubber, cocoa, coffee, corn, avocado, logging, and pasture (Masolele et al., 2022, 2024). While significant efforts have been made to monitor deforestation using satellite imagery, most initiatives stop at detecting forest loss without tracking the land use that follows (Hansen et al., 2013). Understanding post-deforestation land use is crucial for addressing deforestation's root causes and mitigating its impacts (Masolele et al., 2022, 2024). Currently, there is no global monitoring system capable of providing annual, spatially detailed updates on the land use that follows deforestation. Existing datasets and methods frequently lack the spatial, thematic, and temporal resolution necessary to accurately map post-deforestation land uses (Curtis et al., 2018), limiting their utility for rapid policy response and regulatory compliance, such as the European Union's Deforestation Regulation (EUDR) (European Commission, 2024). This gap poses challenges for ensuring EUDR compliance, limiting the capacity to detect and mitigate deforestation linked to commodity production. Here, we present the first high-resolution (10 m) maps of land use following deforestation covering the entire pan-tropics. We utilize an extensive reference database containing 23 different land use types (including soy, oil palm, rubber, cocoa, coffee, corn, logging, avocado, mining, cashew, sugar, rice, and pasture), and employ Sentinel-1 and Sentinel-2 data combined with deep learning algorithms to map land use following tropical deforestation from 2001 to 2023 with an F1-score of 83%.
Our approach incorporates location encodings and environmental variables, such as elevation, temperature, and precipitation, to enhance the model's ability to distinguish various land uses across diverse geographies. In general, our results show increased deforestation resulting from the expansion of key commodities: cocoa in Liberia, Cameroon, Ivory Coast, Ghana, Ecuador, Peru and Papua New Guinea; oil palm in Indonesia and Malaysia; rubber in Malaysia, Thailand, Laos and Indonesia; coffee in Central America (Guatemala, Nicaragua, Costa Rica), Peru, Ethiopia, Colombia and Vietnam; soy in Brazil; pasture in Paraguay, Bolivia, Mexico and Brazil; cashew in Cambodia, Tanzania, Mozambique and Benin; and logging in Suriname, Guyana, Papua New Guinea, Equatorial Guinea, Gabon, Republic of Congo, and Cameroon. This work directly supports the European Union's Deforestation Regulation (EUDR), aimed at curbing the EU market's contribution to global deforestation (European Commission, 2024). Our research offers crucial insights for monitoring land use following deforestation, aiding environmental conservation initiatives and advancing carbon neutrality goals by providing detailed, high-resolution maps of the land use that follows deforestation events. References: Robert N. Masolele, Diego Marcos, Veronique De Sy, Itohan-Osa Abu, Jan Verbesselt, Johannes Reiche and Martin Herold (2024). Mapping the diversity of land uses following deforestation across Africa. Sci Rep 14, 1681. https://doi.org/10.1038/s41598-024-52138-9. Robert N. Masolele, Veronique De Sy, Diego Marcos, Jan Verbesselt, Fabian Gieseke, Kalkidan Ayele Mulatu, Yitebitu Moges, Heiru Sebrala, Christopher Martius, and Martin Herold (2022). Using high-resolution imagery and deep learning to classify land-use following deforestation: a case study in Ethiopia. GISci. Remote Sens. 59(1), 1446–1472. https://doi.org/10.1080/15481603.2022.2115619. Hansen, M. C., P. V. Potapov, R. Moore, M. Hancher, S. A. Turubanova, A. Tyukavina, D. Thau, S. V. Stehman, S. J. Goetz, T. R. Loveland, A. Kommareddy, A. Egorov, L. Chini, C. O. Justice, and J. R. Townshend (2013). High-Resolution Global Maps of 21st-Century Forest Cover Change. Science 342(6160), 850–853. https://www.science.org/doi/10.1126/science.1244693. Curtis, P. G., Slay, C. M., Harris, N. L., Tyukavina, A. & Hansen, M. C. (2018). Classifying drivers of global forest loss. Science 361(6407), 1108–1111. https://www.science.org/doi/10.1126/science.aau3445. European Commission (2024). Proposal for a Regulation of the European Parliament and of the Council on the making available on the Union market as well as export from the Union of certain commodities and products associated with deforestation and forest degradation and repealing Regulation (EU) No 995/2010. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021PC0706

Thursday 26 June 16:15 - 17:45 (Hall F2)

Presentation: Integrating Sentinel, GEDI, and ICESat-2 Data for Improved Mapping of Canopy Structure Across Europe.

Authors: Wanda De Keersmaecker, Luc Bertels, Alba Viana-Soto, Cornelius Senf, Ruben Van De Kerchove
Affiliations: VITO, TUM
Forests cover about 39% of the European land area. They provide a wide range of ecosystem services, including biodiversity preservation, provision of timber and fiber products, prevention of soil erosion, and play an important role in the carbon cycle and climate regulation. Yet, altered natural disturbances due to climate change, in combination with increasing wood demands, are putting pressure on Europe's forests. These complex forest changes require the mapping and monitoring of their state and condition. Forest structure is a key variable to monitor the state of the forest as it has manifold impacts on forest functioning and forest ecosystem services. For example, forest structure impacts sub-canopy forest microclimates and has an effect on habitat quality by providing shelter, foraging, and nesting sites. It therefore enables the characterization of habitats, their quality, and the distribution of species. In addition, spatially explicit information on canopy structure supports biomass monitoring, enables the parameterization of forest simulation, growth and yield models, and supports carbon accounting. Within the framework of the Horizon Europe project ForestPaths, we mapped three key forest structural parameters: canopy height, canopy cover, and foliage height diversity. The maps were produced at 10 m resolution across Europe using Copernicus Sentinel-1 and Sentinel-2 data for the year 2020. For each of the structural parameters, a multi-quantile CatBoost regression model was trained using GEDI data. Canopy height models were additionally trained using ICESat-2 datasets, and the combination of both spaceborne LiDAR datasets. The accuracy of the height maps was evaluated and compared with state-of-the-art canopy height maps using an extensive dataset that was built using publicly and freely available ALS data.
ALS data were collected over patches sampled across areas of potential forest cover in Europe, resulting in a dataset of about 3500 patches, each having a spatial resolution of 10 m and a size of 640 by 640 m. From these ALS data, a set of vegetation structure metrics was extracted relating to vegetation height, cover, and the vertical vegetation profile. Alongside the vegetation structure metrics, information on the acquisition date and point density has been provided where possible to further filter the dataset. The model comparison shows that the integration of ICESat-2 and GEDI data improved the mapping of forest structure, particularly in northern regions where GEDI observations are lacking. Moreover, comparing the maps with state-of-the-art canopy height maps indicates an improvement in accuracy over Europe. Both the canopy height maps and the ALS-based validation dataset will be made publicly available. In this presentation, we will show the canopy structure modeling approach, compare the spaceborne LiDAR datasets for canopy height mapping, and compare our forest structure map and the state-of-the-art datasets against the European ALS data.
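The multi-quantile regression mentioned in this abstract optimises a pinball (quantile) loss at each quantile level. Below is a minimal stdlib sketch of that loss with invented canopy-height numbers; it is not the project's code or data, and it does not use CatBoost itself:

```python
# Pinball (quantile) loss: the per-quantile objective behind multi-quantile
# regression models such as those described above. Toy numbers only.

def pinball_loss(y_true, y_pred, q):
    """Mean quantile loss for quantile level q in (0, 1)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        err = t - p
        total += q * err if err >= 0 else (q - 1) * err
    return total / len(y_true)

# Hypothetical GEDI canopy heights (m) for five footprints, scored against
# a constant 20 m prediction at three quantile levels.
heights = [12.0, 18.0, 25.0, 30.0, 22.0]
pred = [20.0] * len(heights)
for q in (0.1, 0.5, 0.9):
    print(f"q={q}: loss={pinball_loss(heights, pred, q):.3f}")
```

The asymmetry of the loss (under-predictions weighted by q, over-predictions by 1 − q) is what lets one model produce lower, median and upper canopy-height estimates per pixel.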

Thursday 26 June 16:15 - 17:45 (Hall E2)

Session: D.01.05 ESA DTE Digital Twin Components: Progress, achievements and next steps

The ESA Digital Twin Earth (ESA DTE) programme aims to ensure that the latest Earth observation satellite capabilities play a major role in the design, implementation and future evolution of DestinE, the flagship initiative of the European Commission to develop a highly accurate digital twin of the Earth to monitor and simulate natural phenomena, hazards and the related human activities.

This session will highlight the progress and achievements of ESA’s Lead Development Actions on EO-based Digital Twin Components (DTCs), which each focus on a particular element of the Earth system: Agriculture, Forests, Hydrology and hydro-hazards, Ice Sheets and regional/global impacts, Coastal processes and extremes. Presentations and discussion in this session will focus on:
- Assessing goals and progress of DTC development activities.
- Challenges and opportunities in the use of EO data for digital twins.
- Use of DestinE services and implementation of DTCs in the DestinE system.
- Advances in science, R&D and innovative methods (AI, modelling and technology) contributing to digital twin capabilities.
- Engagement with users and stakeholders and development of use cases.
- Recommendations for future developments.

Speakers:


  • Martin Wearing – ESA
  • Ingo Schoolmann – OHB Digital Services GmbH
  • William Harcourt – University of Aberdeen
  • Ana Oliveira – CoLAB +Atlantic
  • Lucia Luzietti – e-GEOS
  • Salvatore Stramondo – Istituto Nazionale di Geofisica e Vulcanologia
  • Fabien Maussion – University of Bristol
  • Matti Mõttus – VTT Technical Research Centre of Finland
  • Sebastian B. Simonsen – DTU Space
  • Milena Napiorkowska – Argans Ltd
  • Gohar Ghazaryan – Leibniz Centre for Agricultural Landscape Research
  • Sander Marcel Maria Rouwette – Thales Alenia Space Luxembourg
  • Adrien Paris – Hydro Matters
  • Luca Brocca – Research Institute for Geo-Hydrological Protection, National Research Council, Italy



Thursday 26 June 16:15 - 17:45 (Room 0.14)

Session: A.01.13 Networking Session for Early Career Scientists Focused on EarthCARE

We propose an invited networking session specifically aimed at engaging emerging researchers with a focus on EarthCARE.
The primary objectives of the networking session are to:
1) Facilitate connections among early career and senior scientists working on EarthCARE-related research.
2) Encourage the sharing of innovative ideas and approaches in the study of clouds, aerosols, and radiation interactions.
3) Provide a platform for early career researchers to discuss their work and receive feedback from peers and experts in the field.
4) Promote interdisciplinary collaboration and foster a sense of community among the next generation of Earth observation scientists focused on EarthCARE.
The proposed networking session will include the following elements:
Welcome and Introduction (10 minutes): Introduction by a senior ESA representative or a leading scientist in the field, outlining the objectives of the session and the importance of EarthCARE.
Senior Scientist Presentations (20 minutes): Presentations providing insights into current challenges and future directions in EarthCARE research.
Short Talks (1 minute each): A series of brief presentations by early career scientists, highlighting their research findings and methodologies.
Networking Break (30 minutes): An informal session, potentially with refreshments, allowing participants to mingle, discuss ideas, and establish connections.
This session is aimed at early career scientists, including PhD students, postdoctoral researchers, and young professionals who are actively engaged in EarthCARE-related research. We anticipate participation from a diverse group of individuals from various institutions and countries, fostering a rich exchange of perspectives and experiences.

Session agenda


Welcome


Brief discussion on aims of session


  • ESA Representative

Panel Discussion about ongoing EarthCARE projects and development


  • Senior Panellists representing key aspects of EarthCARE development and science

Speed Networking, mixture of senior scientists and early career


Group breakout sessions


Topic-based discussion (e.g., aerosols, clouds, ACI, radiation, assimilation)


Wrap-up Discussion and Summary



Thursday 26 June 16:15 - 17:45 (Hall E1)

Session: A.09.09 Arctic and Antarctic Sea Ice in the Earth system: Advancing Research with Remote Sensing, In-Situ Observations, and Modeling - PART 3

Sea ice plays a crucial role in the global climate system through its strong effect on albedo, ocean and atmospheric circulation, and its feedbacks with ice sheets and permafrost.

Remote sensing of sea ice has been the cornerstone of Arctic and Antarctic sea ice research for over 50 years. These long-term, large-scale, and stable time series of sea ice parameters provide the baseline for a deeper understanding of the ongoing dramatic changes in both hemispheres. This knowledge is further diversified and enhanced by new and upcoming satellite missions (e.g., ICESat-2, SWOT, CIMR, CRISTAL, ROSE-L) that provide insights into detailed processes such as snow depth changes, meltpond drainage, and sea ice ridging, as well as support operational forecasting and monitoring applications. They also advance our understanding of the relevance of sea ice for atmospheric, oceanic, and ecological processes, e.g., Arctic cloud formation or the timing of ice algae blooms.

Sea ice parameters are observed over a large wavelength spectrum and derived from many different sensors including microwave and infrared radiometers, visible observations, radar imagers, and lidar or radar altimeters. Combining, merging, and jointly analyzing products from different satellite sensors and scales represents the next powerful step in advancing our knowledge of the fast-changing sea ice covers.

A key challenge remains in bridging scales and spheres between Earth Observation (EO) datasets, climate modeling, and in-situ datasets. New methodological advances, such as data-driven modeling and physics-informed artificial intelligence, may be well-suited to address this challenge.

This session addresses all aspects of sea ice, including the current status and needs in enhancing EO methodologies, and the use of EO products for evaluating polar climate model simulations and for data assimilation in Numerical Weather Prediction models. Airborne and in-situ observation campaigns are critical to evaluate, calibrate, and develop satellite retrievals and we invite submissions on these aspects, too. Submissions on solutions for addressing up- and downscaling challenges on different temporal and spatial scales and between different data types are also encouraged.

Thursday 26 June 16:15 - 17:45 (Hall E1)

Presentation: Assessing the capability of Passive Microwaves for retrieving Sea Ice Thickness and Volume on a long-term basis over the Antarctic sea ice

Authors: Clement Soriot, Julienne Stroeve, Catherine Prigent, Carlos Jimenez
Affiliations: Centre for Earth Observation Science, University of Manitoba, Alfred Wegener Institute, University of Bremen, National Snow and Ice Data Center, Cooperative Institute for Research in Environmental Sciences, CNRS, Observatoire de Paris, Estellus
Since the beginning of space-based sea ice observations in 1979, the extent (SIE) and area (SIA) of both Arctic and Antarctic sea ice have been determined daily using passive microwave observations. The Arctic SIE decline shows a continuous rate of about -13%/decade at its annual minimum in September, between 1979 and 2023 (Roach and Meier, 2024). In contrast, Antarctic sea ice extent increased until 2016, followed by a rapid decline (Purich and Doddridge, 2023). Since the advent of satellite altimetry, sea ice thickness (SIT) and volume (SIV) have been retrieved with sparse temporal and spatial coverage (Laxon et al. 2003, Kwok 2018). Recent studies have made progress in improving SIT retrieval (e.g. Landy et al. 2022 extended SIT retrieval to the summer season, and Bocquet et al. 2024 used the overlap periods of radar altimetry missions to go back in time and retrieve 30 years of SIT and SIV), but retrieval still remains challenging for Antarctic sea ice. A recent study (Soriot et al. 2024) provided long-term series of SIT and SIV over the Arctic by combining the climate record of passive microwave observations with a statistical inversion (Soriot et al. 2023) trained on lidar altimetry (ICESat-2). With significantly different layering in the Antarctic snow and ice pack compared to the Arctic, this method may have limitations over Antarctic sea ice. This presentation will explore the capability of the method over Antarctic sea ice. We will present comparisons with microwave altimetry products and model results. References: Bocquet, Marion, et al. "Arctic and Antarctic sea ice thickness and volume changes from observations between 1994 and 2023." Journal of Geophysical Research: Oceans 129.11 (2024): e2023JC020848. Kwok, Ron. "Arctic sea ice thickness, volume, and multiyear ice coverage: losses and coupled variability (1958–2018)." Environmental Research Letters 13.10 (2018): 105005. Landy, Jack C., et al. "A year-round satellite sea-ice thickness record from CryoSat-2." 
Nature 609.7927 (2022): 517-522. Laxon, Seymour, Neil Peacock, and Doug Smith. "High interannual variability of sea ice thickness in the Arctic region." Nature 425.6961 (2003): 947-950. Purich, Ariaan, and Edward W. Doddridge. "Record low Antarctic sea ice coverage indicates a new sea ice state." Communications Earth & Environment 4.1 (2023): 314. Roach, Lettie A., and Walter N. Meier. "Sea ice in 2023." Nature Reviews Earth & Environment 5.4 (2024): 235-237. Soriot, Clement, et al. "Winter arctic sea ice volume decline: uncertainties reduced using passive microwave-based sea ice thickness." Scientific Reports 14.1 (2024): 21000. Soriot, Clément, et al. "Arctic sea ice thickness estimation from passive microwave satellite observations between 1.4 and 36 GHz." Earth and Space Science 10.2 (2023): e2022EA002542.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall E1)

Presentation: The WIVERN Earth Explorer 11 Candidate Mission: Applications for Snow and Sea Ice

Authors: Christian Melsheimer, Gunnar Spreen, Janna Rückert, Marcus Huntemann, Maryam Pourshamsi, Alessandro Battaglia, Abdulhameed Yunusa
Affiliations: Institute of Environmental Physics, University of Bremen, European Space Agency (ESA), Department of Environment, Land and Infrastructure Engineering (DIATI), Politecnico di Torino, Indra
The WInd VElocity Radar Nephoscope (WIVERN) mission concept is one of the two candidate missions competing for selection as the Earth Explorer 11 mission within the European Space Agency's Future EO programme. The primary objective of WIVERN is to measure, for the first time globally, in-cloud winds by exploiting the Doppler shift produced in the electromagnetic signal transmitted by a 94 GHz radar and backscattered by cloud and precipitation particles. Since the radar views the atmosphere at an oblique angle of about 42° (off-nadir), the radial velocity of the scatterers is dominated by the horizontal component, which is the main quantity being measured. However, especially in low-humidity regions like the Arctic and Antarctic, the WIVERN measurements will also contain novel information on the surface. Thanks to an innovative radiometric mode, in addition to the radar backscatter, WIVERN will simultaneously provide microwave radiometer measurements at an unprecedentedly fine resolution of the order of only 1 km. The smallest footprint size of currently operational satellite radiometers at similar frequency is about 5 km. Observations from current microwave radiometers at 89 GHz, e.g. from AMSR2, allow retrieval of sea ice concentration, and WIVERN would complement them at 94 GHz. In order to explore what additional information on snow and sea ice in polar regions WIVERN can provide, we have performed model simulations of microwave emission and backscatter by sea ice covered by a snow layer, using the Snow Microwave Radiative Transfer (SMRT) model, which includes a module for sea ice. First, we have carried out sensitivity tests to find out which snow and sea ice parameters are most relevant in influencing the microwave scattering and emission behaviour for the WIVERN configuration (94 GHz, incidence angle of 42°). These include sea ice of different types, with a snow layer up to 40 cm thick, with different composition and layering. 
Both the normalised radar backscattering cross section (NRCS) at HH and VV polarisation and the brightness temperature at H and V polarisation have been simulated. The results indicate that the major influence comes from the snow and not from the ice below. Both the thickness of the snow layer on top of the sea ice (usually called snow depth) and the properties of the snow layer are critical. At the rather high microwave frequency of 94 GHz used by WIVERN, scattering by the snow grains is the dominating process. However, the temperature profile of the sea ice and snow, the roughness of the snow surface, and possibly the underlying ice type also have an influence. To investigate this further, we have next carried out extensive simulations in which the four relevant parameters, i.e., snow depth, snow density, snow correlation length (which is a measure of snow grain size), and snow surface roughness, were independently varied over a wide range of values. In addition, we have used (1) two different sea ice types, first-year ice (FYI, ice less than a year old) and multiyear ice (MYI, ice that has survived at least one melting season); (2) two configurations of the snow, a uniform layer of fine snow (new snow), or a more complex snow pack of three different layers (older snow); (3) two different temperature profiles of sea ice and snow, either isothermal throughout and near the melting point, common for early summer, or with a strong gradient from the bottom near the melting point to an air temperature 30 K colder. These three "binary" switches were combined in all eight possible ways. From the simulation results, the emission or backscatter by snow and ice for a given distribution of the four relevant parameters can be derived by computing weighted means over the parameters; likewise, the dependence on a single parameter can easily be isolated. 
The results show that the ice type has virtually no influence on the signal (i.e., brightness temperatures and NRCS) when there is more than a few centimetres of snow. In contrast, the structure of the snow layer, and to some extent the snow depth, are the major influencing factors. The snow correlation length, the snow density, and the snow type (single layer or multiple layers) strongly influence the signal. In most configurations, there is a strong dependence on snow depth only up to about 10 cm, with the exception of the NRCS for new, fine snow, which is sensitive to snow depth up to about 40 cm. Furthermore, the effects of the temperature profile and the snow surface roughness are not negligible. The difference between the brightness temperatures at V and H polarisation is of the order of about 10 K for almost all configurations. This is relevant, as a standard retrieval algorithm for sea ice concentration builds exactly on this difference. In addition to the microwave backscatter and emission analysis, the WIVERN sampling pattern has been considered as well. While the spatial resolution of a single measurement on the ground is very fine, about 1 km, there are significant gaps between one scan and the next. Especially at the centre of the swath, the distance from one scan to the next is more than 30 km. However, WIVERN has a forward and backward scan, and polar regions are covered several times per day. We demonstrate what data density can be expected from WIVERN surface measurements for polar regions. In summary, retrieval of sea ice concentration is possible with WIVERN, within individual footprints that are smaller than those of current instruments, but spatially not fully continuous. In addition, WIVERN measurements have the potential to provide information on the type and structure of the snow layer on the sea ice. As the structure of the snow layer correlates with the sea ice type, the snow can be seen as a proxy for the sea ice type.
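The weighted-mean step described above can be sketched as follows; the snow-depth grid, the simulated brightness temperatures, and the assumed snow-depth distribution are all hypothetical placeholders, not actual SMRT output:

```python
import numpy as np

# Sketch of deriving the expected signal for a given parameter distribution:
# given brightness temperatures simulated on a grid of snow depths, the
# expected value is the probability-weighted mean over the grid.
# All numbers below are illustrative only.
snow_depth_cm = np.array([0, 5, 10, 20, 30, 40])           # simulation grid
tb_v_sim = np.array([250.0, 232.0, 221.0, 216.0, 214.0, 213.0])  # TB(V), K

# Assumed (hypothetical) snow-depth probability weights, normalised to 1
weights = np.array([0.05, 0.20, 0.30, 0.25, 0.15, 0.05])
weights = weights / weights.sum()

# Expectation of TB(V) over the snow-depth distribution
tb_expected = np.sum(weights * tb_v_sim)
print(f"Expected TB(V): {tb_expected:.2f} K")  # → Expected TB(V): 221.95 K
```

The same weighting extends to several parameters at once by summing over a multi-dimensional simulation grid with a joint weight array.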
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall E1)

Presentation: “Dynamic Ice Map”: Combining High-Resolution Sea Ice Type Classification With Sea Ice Drift Forecast

Authors: Martin Bathmann, Dmitrii Murashkin, Christine Eis, Jonathan Bahlmann, Bernhard Schmitz, Karl Kortum, Anja Frost, Jakob Bünger, Stefan Wiehle, Gunnar Spreen
Affiliations: German Aerospace Center (DLR), University of Bremen, Drift+Noise Polar Services
To assist navigation through ice-covered waters, Synthetic Aperture Radar (SAR) Earth observation data is often used due to its independence from daylight and weather. Currently, SAR images are acquired and automatically delivered onboard the research vessel Polarstern during campaigns in the Arctic. However, the surrounding sea ice has drifted by the time of delivery, and the SAR scenes have to be shifted manually to collocate them with the current ice situation around the ship. This hinders the direct use of SAR scenes for route planning. We aim to enable automatic route planning through drifting sea ice. The sea ice drift forecast data provided by the Copernicus Marine Environment Monitoring Service [1, 2] is applied in a value-adding process for this operational application. We use SAR-based sea ice type classification, derived with machine learning from Sentinel-1 or TerraSAR-X SAR data, as static ice type maps [3, 4]. Then we polygonise the ice map and obtain the corner points of the polygons for data reduction. To enable route planning in a dynamic environment, the static ice map is advected along Lagrangian trajectories, using sea ice drift forecast model data, e.g. TOPAZ [6] or neXtSIM [5]. We provide hourly slices of advected corner points. The result is a spatio-temporal ice map ready for routing (dynamic ice map) that bridges scales from 160 metre to 6 kilometre resolution. In further processing, speed-optimized route suggestions based on the obtained dynamic ice map are calculated. The routes are then sent to ship bridges. At the current stage of development, routes are optimized for speed, where reduced sailing velocities are assumed in thicker ice situations. Bathymetry is also considered, i.e. navigating through shallow waters is prohibited. We tested our routing support with this new data product on RV Polarstern during a two-week period of the ArcWatch-II expedition to the Central Arctic in early autumn 2024. 
The calculated routes were used to derive decisions for navigation, together with the ship's ice radar and spaceborne SAR images. The test case has shown that especially longer routes can benefit from the dynamic ice map. We will present the new algorithm used for the creation of dynamic ice maps. Moreover, an evaluation of the sea ice drift forecast models used, together with a detailed quality assessment of the dynamic sea ice maps, will be provided. References [1] European Union - Copernicus Marine Environment Monitoring Service: neXtSIM-F: Arctic Ocean Sea Ice Analysis and Forecast, https://doi.org/10.48670/moi-00004, 2020. [2] European Union - Copernicus Marine Environment Monitoring Service: TOPAZ5: Arctic Ocean Physics Analysis and Forecast, https://doi.org/10.48670/moi-00001, 2015. [3] Kortum, K., S. Singha, and G. Spreen, Robust Multiseasonal Ice Classification From High-Resolution X-Band SAR: IEEE Transactions on Geoscience and Remote Sensing, 60, 1–12, http://dx.doi.org/10.1109/TGRS.2022.3144731, 2022. [4] Murashkin, D., and A. Frost, Arctic Sea Ice Mapping Using Sentinel-1 SAR Scenes with a Convolutional Neural Network, 5660–63, http://dx.doi.org/10.1109/IGARSS47720.2021.9553206, 2021. [5] Rampal, P., Bouillon, S., Ólason, E. and Morlighem, M.: neXtSIM: a new Lagrangian sea ice model, The Cryosphere, 10, 1055–1073, https://doi.org/10.5194/tc-10-1055-2016. [6] Sakov, P., Counillon, F., Bertino, L., Lisæter, K. A., Oke, P. R. and Korablev, A.: TOPAZ4: an ocean-sea ice data assimilation system for the North Atlantic and Arctic, Ocean Science, 8, 633–656, https://doi.org/10.5194/os-8-633-2012.
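The advection of polygon corner points along Lagrangian trajectories described above can be illustrated with a minimal sketch. The constant-in-space drift vector and forward-Euler stepping are simplifying assumptions for illustration; the operational system interpolates TOPAZ/neXtSIM forecast fields in space and time:

```python
import numpy as np

# Hypothetical sketch (not the DLR implementation) of advecting polygonised
# ice-map corner points with a drift forecast, producing hourly slices.
def advect_corners(corners_m, drift_ms, hours):
    """Return hourly slices of corner positions, shape (hours+1, N, 2)."""
    dt = 3600.0  # one-hour time step, seconds
    pos = corners_m.astype(float).copy()
    slices = [pos.copy()]
    for _ in range(hours):
        pos = pos + drift_ms * dt  # forward-Euler step along the trajectory
        slices.append(pos.copy())
    return np.stack(slices)

# Triangle of corner points (metres) and a uniform drift of 0.1/0.05 m/s
corners = np.array([[0.0, 0.0], [1000.0, 0.0], [1000.0, 1000.0]])
drift = np.array([0.1, 0.05])
hourly = advect_corners(corners, drift, hours=24)
print(hourly[-1][0])  # first corner after 24 h → [8640. 4320.]
```

Keeping only corner points (rather than full rasters) makes the hourly slices compact enough to transmit to a ship bridge.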
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall E1)

Presentation: A new global multisource sea ice concentration composite - Advances in sea surface temperature, ice surface temperature and sea ice concentration consistency

Authors: Ida Olsen, Pia Englyst, Jacob Høyer, Thomas Lavergne, Alexander Hayward, Ioanna Karagali
Affiliations: National Center for Climate Research (NCKF), Danish Meteorological Institute (DMI), Research and Development Department, Norwegian Meteorological Institute
Sea ice concentration is an essential climate variable, serving as one of the most evident indicators of climate change. Various climate data records for sea ice concentration exist, and they provide an essential tool for monitoring and evaluating long-term climate trends in sea ice coverage. The accuracy and spatial resolution of existing sea ice concentration climate data records vary depending on the satellite sensors and frequency subsets used in the retrieval algorithms. Typically, these records rely on data from a single sensor to maintain long-term consistency, although this approach fails to exploit the full potential of improved satellite sensors. At the Danish Meteorological Institute (DMI), we have developed a new global multi-source sea ice concentration composite by integrating existing climate data records from the Global Sea Ice Concentration climate data record (SMMR/SSMI/SSMIS) release 3 (OSI-450-a), the Global Sea Ice Concentration climate data record (AMSR), release 3 (OSI-458) and the High(er) Resolution Sea Ice Concentration Climate Data Record Version 3 (SSM/I and SSMIS) (SICCI-HR-SIC), along with sea ice chart information. This composite aims to provide the most reliable sea ice concentration from 1982 to 2024, and uses various filtering methods to ensure long-term consistency. The sea ice concentration fields are currently being implemented for the second generation of the Copernicus Arctic Regional Reanalysis (CARRA) system as well as the new global, combined sea and sea-ice surface temperature climate data record for the Copernicus Climate Change Service (C3S). Although consistency with sea surface temperatures (SST) for ice-covered regions is already an integrated part of the dataset, recent work highlights the need to focus on consistency with ice surface temperatures (IST) as well, since certain regions have anomalously warm IST for high sea ice concentrations and anomalously cold SST for ice-free regions. 
This presentation will offer an overview of the multi-source sea ice concentration composite (DMI-MSC), outlining key aspects of the methodology. Additionally, it will compare the results with other climate data records including OSI-450-a, OSI-458 and the NOAA/NSIDC Climate Data Record of Passive Microwave Sea Ice Concentration, Version 4. Furthermore, we highlight steps taken to ensure consistency with SST and outline future efforts aimed at examining consistency with IST.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall E1)

Presentation: Quantifying Pan-Antarctic Slush and Snow-Ice Formation: Implications for Sea Ice Mass Balance and Ku-Band Altimetric Observations

Authors: Connor Nelson, Julienne Stroeve, Lu Zhou, Anton Komarov, Lanqing Huang, Michel Tsamados
Affiliations: Centre for Polar Observation and Modelling, Department of Earth Sciences, University College London, Centre for Earth Observation Science, Clayton H. Riddell Faculty of Environment, Earth and Resources, University of Manitoba, National Snow and Ice Data Center, University of Colorado Boulder, Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Institute for Marine and Atmospheric Research, Department of Physics, Utrecht University, Institute of Environmental Engineering, Swiss Federal Institute of Technology in Zurich (ETH)
Flooding of the snowpack on sea ice is a common phenomenon in the Antarctic, driven by the combination of high snowfall rates and relatively thin sea ice. When the weight of the snow forces the ice surface to be depressed below sea level, seawater infiltrates the snowpack, creating a slush layer. This layer can refreeze, forming snow-ice, which can contribute significantly to the total ice thickness. Snow-ice formation plays an important role in the Antarctic sea ice mass balance, including compensating for basal ice loss and maintaining the ice cover in regions with high ocean heat fluxes. However, the process of snow-ice formation introduces complexities to the interpretation of remote sensing data, impeding the accurate retrieval of key sea ice parameters, such as thickness, at large scales. Despite its importance, the spatial and temporal variability of snow-ice formation remains poorly constrained on a pan-Antarctic scale. This study aims to address this gap by (1) quantifying slush and snow-ice formation across the Southern Ocean, (2) assessing its role in the sea ice mass balance, and (3) evaluating its impact on altimeter-based retrievals of snow depth and sea ice thickness. We employ a Lagrangian framework to track individual virtual ice parcels using satellite-derived sea ice drift vectors. The evolution of each floe and its overlying snowpack is simulated throughout its lifetime using the physics-based SNOWPACK model, driven by meteorological inputs from ERA5 reanalysis and oceanic data from the Global Ice-Ocean Modeling and Assimilation System (GIOMAS). This approach provides high-resolution simulations of snowpack dynamics, including flooding, slush formation, and snow-ice development. 
By delivering daily spatial distributions of these formations across the Southern Ocean, we enhance our understanding of their contributions to the regional and pan-Antarctic sea ice mass balance, helping to address knowledge gaps in snow-to-ice conversion and its influence on ice thickness variability and resilience. The presence of liquid water and salt in the snowpack significantly alters its dielectric properties, introducing biases in satellite-derived products when these effects are not accounted for. In particular, snow flooding shifts the dominant scattering horizon in Ku-band altimetry, leading to errors in snow depth and sea ice freeboard estimates that propagate into inaccuracies in derived ice thickness and volume. By integrating SNOWPACK results into the Snow Microwave Radiative Transfer Model (SMRT), we simulate satellite-based observations of the surface. This allows us to quantitatively explain the origins of these biases and propose solutions to mitigate them. This work is particularly relevant to the upcoming mission CRISTAL, which will feature a dual-frequency altimeter, including Ku-band, designed to measure and monitor sea ice thickness and snow depth in both the Arctic and Southern Oceans. Our findings provide insights to improve the interpretation and accuracy of data from these missions, ultimately enhancing polar climate monitoring efforts.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall E1)

Presentation: Assessing the Antarctic Sea Ice Mass Balance by Multi-Sensor Remote Sensing and Modelling

Authors: Robert Ricker, Marion Bocquet, Thomas Lavergne, Emily Down, Signe Aaboe, Lara Ferrighi, Fabien Collas, Michel Tsamados, Julienne Stroeve, Anton Komarov, Gunnar Spreen, Dirk Notz, Stefan Kern, Andreas Wernecke, Alexander Haumann, Sara Fleury, Martin Wearing
Affiliations: NORCE Norwegian Research Centre, Norwegian Meteorological Institute, University College London, University of Manitoba, University of Bremen, University of Hamburg, Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, CNRS-LEGOS Centre National de la Recherche Scientifique - Laboratoire d'Etudes en Geophysique et Oceanographie Spatiales, European Space Agency, ESRIN
Antarctic sea ice is crucial for the regional climate of the Southern Ocean and has significant impacts on the global climate system. It stabilizes Antarctic ice shelves by protecting them from the open ocean. In contrast to the Arctic, Antarctic sea ice is predominantly first-year ice, with multi-year ice mostly limited to the western Weddell Sea. While Arctic sea ice has declined dramatically over the past four decades, Antarctic sea ice extent showed little change or slight growth until recently. Record lows in Antarctic sea ice extent in 2016, 2023 and 2024 require further research, including investigating how those events are reflected in thickness changes and understanding the potential drivers and consequences. Accurate assessment of Antarctic sea ice mass, including its snow cover and snow ice, is essential for understanding freshwater fluxes, ocean circulation, and improving climate models. However, retrieving this information via satellite observations is challenging due to the complex Antarctic snowpack and sea ice dynamics, which introduce significant uncertainties in products derived from active and passive remote sensing measurements. As part of the ESA-funded Sea Ice Mass Balance Assessment: Southern Ocean (SO-SIMBA) project, we aim to develop a comprehensive Antarctic sea ice reference dataset. This dataset will integrate information on thickness, volume, drift, and mass of snow and sea ice from multiple satellite missions such as Envisat, CryoSat-2, Sentinel-3, ICESat-2, SMOS and AMSR2, as well as other missions. Additionally, data from the recent Surface Water and Ocean Topography (SWOT) mission will be used in selected test areas for calibration and validation. Advanced merging and optimal interpolation techniques will be developed to maximize data accuracy and coverage. 
For example, we will derive the first Antarctic snow depth product combining active and passive microwave observations, while a drift-aware algorithm will address uncertainties caused by ice motion. Additionally, snow-ice thickness and radar penetration depth (main scattering horizon) will be estimated using model simulations. The validation of those parameters will include available in-situ, airborne and mooring-based measurements in the Southern Ocean to the maximum extent possible. These datasets will allow for studying the dynamic and rapidly changing nature of sea ice across different regions in the Southern Ocean. A key objective of SO-SIMBA is to ensure consistent uncertainty estimation across all datasets, providing robust uncertainty metrics. This consistency is critical for supporting applications, such as model evaluation and data assimilation. We will exploit our novel data products by conducting a scientific study on the interactions between the sea ice, snow, and the underlying ocean. In particular, we will compute freshwater fluxes associated with the formation, transport, and melting of sea ice and snow. For this purpose, we will combine sea ice and snow volume estimates with an estimate of sea ice and snow divergence derived from sea ice drift data to obtain a full sea ice and snow mass balance. We will then investigate their impact on the upper ocean layer and potential consequences for the ocean circulation and ocean salinity. Here, we present initial results from SO-SIMBA, highlighting its potential for advancing our understanding of Antarctic sea ice and ocean processes.
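The volume-budget step of combining volume estimates with drift-derived divergence, as outlined above, can be illustrated schematically. The grid spacing, drift, and volume fields below are synthetic placeholders, and the centred-difference divergence is one simple choice among several:

```python
import numpy as np

# Illustrative sketch of the dynamic term in a sea ice volume budget:
# the divergence of the volume flux, div(V*u), estimated with centred
# finite differences on a regular grid. Fields are synthetic placeholders.
dx = dy = 25e3  # hypothetical 25 km grid spacing, metres
ny, nx = 50, 50
y, x = np.mgrid[0:ny, 0:nx]
u = 0.1 * np.sin(2 * np.pi * x / nx)    # eastward drift, m/s
v = 0.05 * np.cos(2 * np.pi * y / ny)   # northward drift, m/s
volume = np.full((ny, nx), 1.5)          # ice volume per unit area, m

# div(V u) = d(V u)/dx + d(V v)/dy, via numpy's centred gradients
flux_div = (np.gradient(volume * u, dx, axis=1)
            + np.gradient(volume * v, dy, axis=0))

# Dynamic contribution to the volume tendency, integrated over one day
dyn_tendency_per_day = -flux_div * 86400.0  # m of ice per day
print(dyn_tendency_per_day.shape)
```

Subtracting this dynamic term from the observed volume change isolates the thermodynamic (growth/melt) contribution, which in turn yields the freshwater flux.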
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.49/0.50)

Session: B.03.07 Land-atmosphere interactions: finding solutions for land-based mitigation and adaptation

Human activities leave a clear fingerprint on terrestrial ecosystems and the climate system. Changes in land cover, land use, and land management can have a profound effect by altering fluxes (carbon, water, energy, and momentum) between the land and the atmosphere, thereby affecting climate both locally and globally through biogeochemical and biophysical mechanisms. These changes may often involve trade-offs between different ecosystem services, such as soil protection versus timber harvesting, or agricultural expansion versus carbon sequestration. Trade-offs can sometimes lead to conflicts between climate and conservation goals, such as rapid carbon sequestration from fast-growing monocultures at the expense of slow-growing natural forests rich in biodiversity.

Satellite remote sensing technology allows us to monitor the state of the land and its vegetation with variables such as leaf area index (LAI) and above-ground biomass (AGB), but also to evaluate corresponding fluxes through the use of proxies such as land surface temperature (LST) and sun-induced chlorophyll fluorescence (SIF). Through innovative methodological approaches, these various sources of complementary satellite datastreams can be combined to provide insights about the spatio-temporal variations of terrestrial ecosystem functional properties and how these interact with the atmosphere. Such methods can further inform on both the potential positive and negative consequences of changing the properties of the land surface, notably with the aim of establishing nature-based solutions (NBS) to the combined climate and biodiversity crises.

This session focuses on how satellite remote sensing can help us better understand and quantify land-atmosphere interactions. We welcome studies exploring this theme at any spatial scale (from local to global). We aim to highlight studies that can inform adoption of appropriate land-based strategies for climate mitigation, adaptation, and ecosystem restoration. Additionally, we seek to explore how the combination of climate adaptation and biodiversity actions can reinforce or weaken each other, and how the combined effect of adaptation and biodiversity actions differs from the sum of individual actions.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Scenarios of Future Urban Street Green Space and Associated Heat Reductions in Global Cities Using Remote Sensing and Modelling

Authors: Steffen Lohrey, Dr. Giacomo Falchetta
Affiliations: Technische Universität Berlin, International Institute for Applied Systems Analysis (IIASA), Euro-Mediterranean Center on Climate Change (CMCC)
Street Green Space (SGS) – street trees, bushes, plants, grass – is well known to moderate urban temperature through shade and evaporative cooling. Yet most existing studies have focused on highly resolved local case studies, leaving open the question of how much the cooling efficiency of street-level vegetation varies across different urban forms and climatic zones. Existing global studies have used satellite-observed green cover as visible from space, together with land surface temperature. This places significant caveats on the accuracy of the findings, since land surface temperature is not the same as air temperature, and air temperature may be the more appropriate metric for the human-scale impact of heat. Here we address this question using modelled air temperature: we estimate monthly cooling efficiency coefficients, heterogeneous across local climate zones and cities, for around 150 cities around the world, based on urban-scale air temperature from the UrbClim urban climate model (De Ridder et al. 2015) and street-level canopy cover estimates. For assessing street green space, we use a machine learning model fed with Sentinel-2 imagery to predict local tree canopy coverage density for any coordinate along roads within these cities, measured by the Green View Index (GVI) of street green space (Seiferling et al. 2017). We then develop future scenarios of urban green space. Here we consider city characteristics, observed growth dynamics of street green space, cities' climate zones, the urban forms present within the cities, and socio-economic properties. We identify "frontrunner" cities that show high amounts of street green space within their category to create potential scenarios of urban street green space. 
These scenarios of future street green space are then combined with the estimated cooling efficiencies and urban climate model outputs forced with future SSP-RCP climate change scenarios to assess the potential future cooling benefit of expanding street green space. Scientifically, this approach bridges detailed case studies and existing global studies based on land surface temperature. It informs both climate mitigation and adaptation efforts. Policy-wise, the insights gained here will be useful for informing about the additional cooling potential of street greening within realistic bounds in light of climate change. This can help inform global-level assessments of energy requirements for urban cooling, as well as regional- and city-level assessments of the cooling benefits of street greening, particularly where detailed case studies do not exist.
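In its simplest form, the cooling-efficiency estimation described above amounts to fitting a temperature-versus-GVI slope per local climate zone. A minimal sketch with synthetic data follows; the real analysis uses UrbClim air temperature and Sentinel-2-derived GVI, and the numbers here are illustrative:

```python
import numpy as np

# Hypothetical sketch: the cooling-efficiency coefficient is the slope of
# (modelled) air temperature against the Green View Index, fitted per
# local climate zone. Both series below are synthetic placeholders.
rng = np.random.default_rng(0)
gvi = rng.uniform(0, 40, 200)                         # street green share, %
air_t = 30.0 - 0.05 * gvi + rng.normal(0, 0.3, 200)   # synthetic response, °C

# Ordinary least-squares linear fit: slope = cooling efficiency (K per % GVI)
slope, intercept = np.polyfit(gvi, air_t, 1)
print(f"Cooling efficiency: {slope:.3f} K per % GVI")
```

Repeating this fit per local climate zone and month yields the heterogeneous coefficients that are then multiplied with scenario GVI increments to estimate cooling.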
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Tracking Short-Term Changes in Subsurface Water Storage Using a Novel Satellite-Based Time-Series of Normalized Far-Red Solar-Induced Fluorescence

Authors: David Herrera, Prof. Dr. Uwe Rascher, Dr. Alexandre Belleflamme, Dr. Bastian Siegmann
Affiliations: Research Center Jülich - IBG-2, Research Center Jülich - IBG-3
Passive optical satellite products have long been able to trace drought effects. While mapping drought effects is important, enabling drought mitigation and truly understanding land-atmosphere interactions requires variables that reflect the actual status of plants and their reaction to water scarcity. One such variable is Solar-Induced Chlorophyll Fluorescence (SIF), which is emitted directly from the photosynthetic apparatus (Drusch et al. 2017). Under abiotic stress, increased dissipation of energy by non-photochemical quenching (NPQ) decreases the fluorescence yield, which can be measured as SIF (Berger et al. 2022, Damm et al. 2018). The detection of this flux is therefore vital for the early detection of drought stress. Top-of-canopy SIF data have been globally available at daily temporal resolution from the imaging spectrometer TROPOMI aboard Sentinel-5P since 2018 (Guanter et al. 2021, Köhler et al. 2018). These data, however, are influenced by canopy structure and illumination conditions. Downscaling SIF from canopy to leaf level by calculating the fluorescence quantum efficiency at leaf level (ΦF) is thus vital to detect changes that are directly linked to photosynthetic activity (Dechant et al. 2020). This can be done using equation (1), which accounts for the incoming photosynthetically active radiation (PAR), the fraction of absorbed PAR (fAPAR), and the fluorescence escape probability (fesc):

ΦF = π · SIF743,canopy / (fAPAR · PAR · fesc)   (1)

Studies have shown that this downscaling method can be applied to spaceborne data using reflectance indices such as NIRv (NDVI · NIR; Badgley et al. 2017), which serves as a proxy for the product of fAPAR and fesc (Dechant et al. 2020, Liu et al. 2023). 
The downscaling and normalization of canopy SIF can be realized by combining the TROPOSIF canopy SIF product at 743 nm and the reflectance data used to calculate NIRv (both provided by ESA) with the MODIS PAR product: ΦF = π * SIF743,canopy / (NIRv * PAR)   (2). This study presents a new multi-year dataset (2018-2023) providing daily information about downscaled and normalized SIF (ΦF) at 0.05° spatial resolution for the whole of Germany. The main goal was to investigate the potential of ΦF to track short-term changes in root-zone soil moisture. For this reason, the ΦF dataset was compared to daily spatial subsurface water storage (sss) simulations generated by the PARFLOW/CLM hydrological model developed in the frame of the ADAPTER project (https://adapter-projekt.org/) (Belleflamme et al., 2023). The sss and sss anomaly products are available for different soil depths ranging from 2 cm to 5 m. For this study, we selected the product providing information at 17 cm soil depth because it best reflects the soil depth range from which plants can take up water. Both datasets (ΦF, sss) were processed in Google Earth Engine (GEE), and subsets were made for the two classes (agricultural land and forest) to allow for a more detailed comparative analysis of ΦF and sss anomaly for these two dominant ecosystem types in Germany. The data were then aggregated spatially for each day and smoothed with a two-day rolling average. The time series was divided into periods of prolonged negative sss anomaly (longer than 30 days), which correspond to periods in which plants are not sufficiently supplied with water. We then compared the daily ΦF and sss anomaly data of those periods by calculating cross-correlation coefficients for different time lags. We additionally checked whether the selected periods correspond to watch/warning drought periods identified with the Combined Drought Indicator (CDI) calculated by the European Drought Observatory (European Commission, Joint Research Centre (JRC), 2020). 
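The downscaling in equation (2) and the lagged comparison against sss anomaly can be sketched in a few lines. The following is a minimal illustration with synthetic numbers, not the study's code or data: the series values, the 0.45/480 constants, and the built-in two-day response are all invented for demonstration.

```python
import numpy as np

# Synthetic daily series (illustrative values only)
rng = np.random.default_rng(0)
n = 120
sss_anom = -0.5 + 0.1 * np.sin(np.arange(n) / 10) + 0.02 * rng.standard_normal(n)

# Toy canopy SIF signal that responds to sss with a two-day delay
sif743 = 1.2 + 0.3 * np.roll(sss_anom, 2)
nirv = np.full(n, 0.45)   # NIRv = NDVI * NIR reflectance (proxy for fAPAR * fesc)
par = np.full(n, 480.0)   # incoming PAR (e.g. from MODIS)

# Equation (2): downscaled, normalized fluorescence quantum efficiency
phi_f = np.pi * sif743 / (nirv * par)

def lagged_xcorr(x, y, max_lag):
    """Pearson correlation of x[t] against y[t - lag] for lag = 0..max_lag."""
    return {lag: np.corrcoef(x[lag:], y[:y.size - lag if lag else y.size])[0, 1]
            for lag in range(max_lag + 1)}

corr = lagged_xcorr(phi_f, sss_anom, max_lag=5)
best_lag = max(corr, key=corr.get)   # lag with the highest cross-correlation
```

Because the toy SIF signal lags sss by two days, `best_lag` comes out as 2 here, mirroring the two-day lagged response reported below for the real data.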
The results show that ΦF has the highest cross-correlations with sss at a lagged response of two days for agricultural areas and forests, while vegetation indices (NIRv and NDVI) and SIF at canopy level show no correlation at all. Our findings demonstrate that ΦF has the potential to detect insufficient plant water availability, represented in this study by periods of negative sss anomaly values, and can thus be used for the early detection of drought stress in agricultural and forest ecosystems. The comparison of the capabilities of ΦF and SIF measured at canopy scale to track short-term changes in plant-available water illustrates that a proper downscaling and normalization of canopy SIF is essential if SIF satellite measurements are to be used for the early detection of drought events. Although our newly generated ΦF time series provides daily information for the whole of Germany, the spatial resolution of 3.5 x 5 km determined by the TROPOMI SIF743 product is relatively low. Future satellites, like ESA’s Earth Explorer FLEX due to launch in 2026, will provide SIF data with much higher spatial resolution. These datasets could be combined with our ΦF product to develop spatially more detailed approaches for the early detection of droughts at regional scale. Besides this, we are currently producing a ΦF product for Europe to extend our analysis to areas that are more severely affected by drought events (e.g. Spain). In this context, the Soil Water Index (SWI) product from the Copernicus Global Land Service will be used as reference for the ΦF-based detection of drought events for different plant functional types.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Temporal shifts in forest age structure susceptibility to disturbances

Authors: Dr. Simon Besnard, Dr. Alba Viana Soto, Katja Kowalski, Dr. Viola H.A. Heinrich, Dr. Cornelius Senf
Affiliations: Helmholtz Center Potsdam GFZ German Research Centre for Geosciences, Earth Observation for Ecosystem Management, School of Life Sciences, Technical University of Munich, School of Geographical Sciences, University of Bristol
Forest disturbances play a significant role in shaping European forests' age structure, thus influencing their capacity to serve as carbon sinks in the future. In this study, we utilized Landsat-based disturbance data, the Global Age Mapping Integration v2.1 (GAMIv2.1), and ESA-CCI biomass v5 products to couple past disturbances (planned [i.e. harvest] and unplanned [i.e. bark beetle, wind and fire]) with forest age classes (0–40, 41–80, 81–120, >120 years) and aboveground biomass classes (0–30, 31–60, 61–90, >90 MgC ha−1). This approach enabled us to analyze temporal shifts in age structure and carbon stock susceptibility to disturbances. Our results revealed a clear stratification of disturbance susceptibility across age classes, with planned disturbances predominantly affecting mid-aged forests and unplanned disturbances showing higher variability in their impact over time. Planned disturbances were predominantly associated with high-biomass (>90 MgC ha−1) but mid-aged forests (41–80 years), reflecting stable forest management practices, including selective harvesting of commercially viable stands and preventive interventions targeting high-risk forests affected by bark beetle outbreaks. These disturbances exhibited the highest probabilities (up to 90%) and carbon loss risks, with mid-aged forests experiencing losses exceeding 60 MgC ha−1. In contrast, unplanned disturbances exhibited higher variability, increasingly impacting older forests (81–120 years) in recent years, while younger forests (0–40 years) experienced a decline in disturbance frequency. Despite temporal shifts, unplanned disturbances consistently targeted high-biomass forests, contributing approximately 80% of the total unplanned disturbance area. A country-level analysis highlights notable regional variation. 
In Germany, unplanned disturbances have transitioned from younger forests (41–80 years) to more mature forests (81–120 years), potentially driven by increasing bark beetle activity targeting older trees. In Sweden, planned disturbances predominantly target mid-aged forests (81–120 years) and high-biomass forests (>90 MgC ha−1) but exhibit higher temporal variability across age and biomass classes compared to Germany. Our findings emphasize the interplay between forest management practices, climate-driven disturbance regimes, and ecological factors in shaping forest susceptibility and carbon dynamics. This study highlights the critical role of age and biomass in driving disturbance processes, showing that while age-class dynamics exhibit significant temporal variability, biomass-class patterns remain more stable. This decoupling likely reflects the influence of forest type, growth rates, and management practices, where biomass stabilizes in older forests, reducing its variability despite dynamic changes in age-class vulnerability. These insights underscore the need for adaptive forest management to mitigate the growing impacts of climate change and natural disturbances.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Potential tree cover under current and future climate scenarios

Authors: Caspar Roebroek, Luca Caporaso, Gregory Duveiller, Edouard Davin, Sonia Seneviratne, Alessandro Cescatti
Affiliations: European Commission - Joint Research Center, ETH Zürich, National Research Council, Max Planck Institute for Biogeochemistry, Wyss Academy for Nature, University of Bern, Oeschger Centre for Climate Change Research
Forests play a major role in the state and the future of the global climate. The IPCC’s pathways to limit global warming to below 1.5 °C with limited overshoot rely, in part, on forest-based climate change mitigation strategies, which include a significant expansion of forest cover. Simultaneously, forests are increasingly at risk of becoming a net source of carbon under the pressure of climate change through increased frequency and severity of natural disturbances such as windthrows, pests, and forest fires. For the design of forest-based policies aiming at climate neutrality, insight into the (bio)physical limits of forests under transient environmental conditions is required. These limits are captured in the concept of potential natural vegetation. Several attempts have been made to estimate potential tree cover, both through modelling exercises and statistical approaches. Models are constrained by their computational cost and complexity and therefore operate with a trade-off between spatial scale, resolution and the representation of the processes involved. Disturbances in particular are complex to include due to their stochastic nature, which leads to large uncertainties in projections of tree cover in future (climate) scenarios. Statistical approaches, on the other hand, rely on the space-for-time concept. Building on previous statistical approaches, our study uses a machine learning framework incorporating high-resolution data on tree cover, climate, soil, and topography to estimate global potential tree cover at a 1 km resolution. Subsequently, we adjust the estimates by accounting for the natural disturbance regime, described as the tree cover carrying capacity. In currently intact forests, our tree cover carrying capacity estimates align closely with observed tree cover, yielding an R2 of 0.83. We estimate that, under current climate conditions, the Earth could host a canopy cover of 50.1 million km2, which is about 55% higher than current levels. 
Future climate projections suggest a potential increase to 51.3 million km2, but with a reduction of canopy cover in the tropical belt and an increase in the boreal zones. These estimates can be used to adapt forest management practices and guide afforestation/reforestation programs, although local social and ecological feasibility needs to be considered.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Mapping the effect of tree cover and heterogeneity on clouds over Africa

Authors: Di Xie, Luca Caporaso, Markus Reichstein, Deyu Zhong, Dr Gregory Duveiller
Affiliations: Max Planck Institute for Biogeochemistry, State Key Laboratory of Hydroscience and Engineering, Department of Hydraulic Engineering, Tsinghua University, European Commission, Joint Research Centre, National Research Council of Italy, Institute of BioEconomy
Vegetation plays a pivotal role in regulating climate and sustaining the hydrological cycle. Preserving and expanding tree cover is an essential avenue for both mitigating climate change and adapting to its consequences. Both the overall quantity and the spatial distribution of trees are known to influence surface and atmospheric processes. While the direct effects of vegetation on surface properties are well-documented, the indirect biophysical impacts of trees on cloud formation—especially those from trees outside forested areas—are comparatively less explored, with the role of tree distribution often neglected. In this study, we employed a very high-resolution map of individual trees along with geostationary satellite data to examine how tree cover, including its absolute coverage and spatial configuration, influences cloud formation during both day and night across all of Africa. Our findings reveal distinct patterns: daytime cloud cover increases with higher tree cover over tropical rainforests and arid steppes, while it decreases over tropical savannahs. At night, a stronger negative relationship between tree cover and cloud formation emerges during the dry season, especially in the high-elevation regions of southern Africa. Mechanistically, spatial variations in cloud formation are linked to daytime sensible heating in water-sufficient regions, whereas moisture availability becomes the dominant factor in water-limited areas. At night, cloud effects are more closely tied to land surface temperature differences induced by tree cover, likely due to water condensation on cooler surfaces. Additionally, incorporating both average tree cover and heterogeneity reveals a more detailed picture: in tropical savannahs, the cloud enhancement effect increases by 55.2% when heterogeneity is considered alongside tree cover, compared to tree cover alone. Similarly, in arid steppes, this increase is 12.4%. 
However, in tropical rainforests, greater heterogeneity appears to amplify the reduction in cloud cover caused by declining tree cover. This shift in heterogeneity’s influence on cloud cover, depending on the average tree cover, offers valuable insights for understanding the effects of land-use changes from both afforestation and deforestation. It is not only the number of trees that matters, but also their spatial configuration. This data-driven analysis enhances the understanding of vegetation-cloud interactions, which remain uncertain and underrepresented in Earth system models, and provides valuable insights for planning and implementing future tree restoration projects in Africa.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Enhanced structural diversity increases forest resilience and may compensate climate-driven ecosystem declines.

Authors: Dr. Mark Pickering, Dr. Agata Elia, Dr. Marco Girardello, Dr. Gonzalo Oton, Dr. Matteo Piccardo, Dr. Guido Ceccherini, Dr. Giovanni Forzieri, Dr. Mirco Migliavacca, Alessandro Cescatti
Affiliations: Joint Research Centre Consultant (JRC), ESA-ESRIN, European Commission, Joint Research Centre (JRC), Department of Civil and Environmental Engineering, University of Florence
Ecosystem resilience represents the capacity of an ecosystem to withstand and recover from external perturbations, an increasingly important property for ecosystem function in an era of escalating climate extremes and anthropogenic pressures. Whilst recent studies have related forest resilience to natural factors such as climate, knowledge gaps remain regarding the link between forest diversity and resilience, particularly at large scale. This is important because forest diversity can be controlled by silvicultural management practices and, therefore, may be used to design climate-smart management plans that increase resilience. This study uses large-scale remote sensing data to directly link recent developments in forest diversity mapping and resilience. In particular, we assess the sensitivity of forest resilience to forest structural diversity (FSD) in temperate and Mediterranean Europe. Two commonly used resilience indicators, based on MODIS kNDVI (kernel Normalized Difference Vegetation Index) data acquired at high spatial and temporal resolution, are employed: the lag-1 temporal autocorrelation, relating to ecosystem memory, and the standard deviation, relating to ecosystem stability. Forest diversity is expressed in terms of horizontal, vertical, and combined horizontal and vertical structural heterogeneity metrics derived from GEDI (LiDAR, on board the International Space Station) acquisitions. A Random Forest model was used to isolate these FSD-resilience relationships from confounding environmental factors, enabling a deeper understanding of the local and Europe-wide relationships and identifying which forest management practices most effectively promote resilience. Results show a positive relationship between resilience and FSD in the majority of the European forests investigated. Diversity in canopy complexity is a better predictor of resilience than the spatial variability of canopy height. 
This emergent relationship is explored as an adaptation measure to preserve resilience under warming scenarios. The findings suggest that forest management promoting heterogeneity, specifically canopy complexity, may have the potential to compensate for declines in resilience associated with future warming.
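The two resilience indicators named in the abstract (lag-1 temporal autocorrelation and standard deviation) are standard time-series statistics. The sketch below illustrates them on synthetic series; the AR(1) coefficient, series length, and function name are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def resilience_indicators(series):
    """Lag-1 temporal autocorrelation (ecosystem memory) and standard
    deviation (ecosystem stability) of a detrended series such as kNDVI.
    Minimal sketch only, not the study's code."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    ac1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)  # lag-1 autocorrelation
    return ac1, x.std()

# A high-memory AR(1) series recovers slowly from perturbations and shows
# higher lag-1 autocorrelation than white noise (illustrative values only):
rng = np.random.default_rng(1)
ar = np.zeros(500)
for i in range(1, ar.size):
    ar[i] = 0.8 * ar[i - 1] + rng.standard_normal()
ac_memory, sd_memory = resilience_indicators(ar)
ac_noise, sd_noise = resilience_indicators(rng.standard_normal(500))
```

Higher lag-1 autocorrelation indicates slower recovery from perturbations, i.e. lower resilience, which is why the indicator is interpreted as "ecosystem memory" above.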
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.11/0.12)

Session: C.06.08 Innovations in Calibration and Validation for New and Upcoming Earth Observation Atmospheric Missions

The Calibration and Validation (Cal/Val) of atmospheric missions is essential for ensuring the accuracy and reliability of satellite data used in environmental monitoring and climate research. This session concentrates on the latest developments and methodologies in Cal/Val activities for both current and upcoming atmospheric missions, including ALTIUS, AOS, EarthCARE, PACE, PREFIRE, TEMPO, TANGO, CO2M, Sentinel-4, and Sentinel-5.

The session focuses on various aspects of Cal/Val, such as pre-launch and post-launch activities, innovative calibration techniques, the establishment of ground-based validation networks, and the convergence on common practices. Emphasis is placed on the importance of sensor intercalibration, the use of vicarious calibration targets, and the collaboration between missions to achieve high-quality data. Additionally, the session will cover the role of airborne campaigns in providing critical validation data and the assimilation of data in Numerical Weather Prediction (NWP) modeling for quality monitoring and validation to enhance the accuracy of satellite-derived products.

By providing a comprehensive overview of state-of-the-art Cal/Val activities, the session aims to foster discussions on best practices and future directions in the field, ultimately contributing to the advancement of atmospheric science and the improvement of atmospheric satellite data quality.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Best Practice Protocol for the Validation of Aerosol, Cloud, and Precipitation Profiles (ACPPV)

Authors: Dr Eleni Marinou, and the ACPPV consortium
Affiliations: IAASARS, National Observatory of Athens, 56 institutes, universities, and space agencies
Aerosols, clouds, precipitation, and the processes governing their interactions are the grand challenges for current climate science, of the highest priority for the climate science-policy interface, and of great relevance for both Working Groups I and III of the upcoming 7th IPCC cycle. Satellite missions such as CALIPSO and CloudSat have demonstrated the value of aerosol and cloud profiling techniques in understanding the processes governing aerosol-cloud-radiation interactions. The EarthCARE mission will ensure the continuity of these efforts and further advance space-borne lidar and cloud radar profiling synergies. Following EarthCARE, NASA's Atmosphere Observing System (AOS) will further shed light on the unknown links between aerosols, clouds, atmospheric convection, and precipitation. The geophysical validation of spaceborne high-resolution profilers for aerosols, clouds, and precipitation presents unique challenges. The need for a common practice, capturing lessons learned from earlier missions, was identified, and its implementation is tracked under the CEOS Working Group Calibration and Validation action item CV-22-01. As a response, an international consortium of 86 scientists converged on the Best Practice Protocol for the Validation of Aerosol, Cloud, and Precipitation Profiles (ACPPV). The ACPPV convergence process aimed at the optimization of Calibration and Validation (Cal/Val) techniques in terms of instrumentation, sampling strategies and scenarios, and intercomparison methodologies. To this end, the scientific communities involved in past missions reviewed lessons learned and identified areas where convergence on similar approaches is beneficial. 
The approaches and recommendations cover correlative site and instrument selection, data processing and quality control, campaign criteria, configurations, scenarios, collocation methods, suggestions on issues concerning scene representativeness, and intercomparison methodologies, including the handling of wavelength differences. In addition, for increased statistical relevance of the intercomparison with ground sites, guidance and recommendations for inter-calibration between networks are included, to achieve a “network of networks” that compensates for the profilers' sparse overpasses per site and avoids biases. Moreover, guidance on the statistical validation through intercomparison between satellite-based remote sensing observations, and on the near-real-time validation through monitoring in an NWP data assimilation system, is included. Finally, existing gaps in our Cal/Val knowledge are summarized. Given the complexity and diversity of geophysical scenarios and retrievals of aerosol, cloud, and precipitation regimes, the ACPPV document is aimed at knowledge exchange and conveying lessons learned, rather than at defining the single, strict protocols that have been agreed upon in some other domains with fewer degrees of freedom. Here the protocol will be presented and discussed. The ACPPV consortium: Lead Authors: Amiridis, V.1, Cecil, D., J.2, Koopman, R.3, Marinou, E.1, Moisseev, D.4, Tackett, J.5, Gross, S.6, Baars, H.7, Redemann, J.8, Marenco, F.9, Baldini, L.10, Tanelli, S.11, Fielding, M.12, Janisková, M.12, O’Connor, E.13, Fjæraa, A., M.14. 
Contributing Authors: Paschou, P.1,15, Voudouri, K., A.1,15, Hostetler, C.5, Ferrare, R.5, Burton, S.5, Schuster, G.5, Kato, S.5, Winker, D.5, Shook, M.5, Bley, S.7, Haarig, M.7, Floutsi, A., A.7, Wandinger, U.7, Trapon, D.7, Pfitzenmaier, L.16, Papagianopoulos, N.17, Mona, L.17, Posselt, D.11, Mason, S.12, Rennie, M.12, Benedetti, A.12, Hogan, R.12,18, Sogacheva, L.13, Balis, D.15, Michailidis, K.15, van Zadelhoff, G., J.19, Nowottnick, E.20, Yorks, J.20, Mroz, K.21, Donovan, D.19, L’Ecuyer, T.22, Okamoto, H.23, Sato, K.23, Henderson, D., S.24, Nishizawa, T.25, Barker, H.26, Cole, J.26, Qu, Z.26, Clerbaux, N.27, Nakajima, T.28, Chase, R.29, Wolff, D.30, Landulfo, E.31, Kirstetter, P., E.32, Mather, J.33, Ohigashi, T.34, Ryder, C.18, Tzallas, V.35, Tsikoudi, I.1,36, Tsekeri, A.1, Tsichla, M.1,37, Koutsoupi, I.1,36, Tanaka, T.38, Kubota, T.38, Siomos, N.39, Tomiyama, N.40, Takahashi, N.41, Hiroaki, H.42, Suzuki, K.43, Mace, J.44, Prakash, G.45, McLean, W.46, Borderies, M.47, Mangla, R.48, Escribano, J.49, Moradi, I.50,51, Zhang, J.52, Rubin, J.53, Ikuta, Y.54, and Kollias, P.55,56. 
Affiliations: 1 Institute of Astronomy, Astrophysics, Space Applications & Remote Sensing (IAASARS), National Observatory of Athens (NOA), 15236 Athens, Greece 2 NASA Marshall Space Flight Center, Earth Science Branch, 320 Sparkman Dr NW, Huntsville, AL 35805, United States 3 European Space Research and Technology, European Space Agency (ESA/ESTEC), Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands 4 Institute for Atmospheric and Earth System Research, Faculty of Science, University of Helsinki, Finland 5 NASA Langley Research Center, Hampton, United States 6 Institut für Physik der Atmosphäre, Deutsches Zentrum für Luft- und Raumfahrt (DLR), Weßling 82234, Germany 7 Leibniz Institute for Tropospheric Research (TROPOS), Permoserstraße 15, 04318 Leipzig, Germany 8 School of Meteorology, University of Oklahoma, Norman, OK, USA 9 The Cyprus Institute, 20 Konstantinou Kavafi Street, 2121 Nicosia, Cyprus 10 National Research Council, Insitute of Atmospheric Science and Climate (CNR-ISAC), Via Fosso del Cavaliere, 100, 00133, Roma, Italy 11 Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, United States 12 European Centre for Medium-Range Weather Forecasts (ECMWF), Reading, UK 13 Finnish Meteorological Institute, Climate Research Programme, Erik Palménin aukio 1, FI-00560, Helsinki, Finland 14 Climate and Research Institute NILU, P.O. Box 100, 2027 Kjeller, Norway 15 Laboratory of Atmospheric Physics, Physics Department, Aristotle University of Thessaloniki, University campus, 54124 Thessaloniki, Greece 16 University of Cologne, Institute of Geophysics and Meteorology, Pohligstraße 3, 50969 Cologne, Germany 17 Istituto di Metodologie per l’Analisi Ambientale (IMAA), Consiglio Nazionale delle Ricerche (CNR), C.da S. 
Loja, 85050 Tito (PZ), Italy 18 Department of Meteorology, University of Reading, Reading, UK 19 Royal Netherlands Meteorological Institute (KNMI), De Bilt, the Netherlands 20 NASA Goddard Space Flight Center, Mail Code 612, MD 20771 Greenbelt, United States 21 National Centre for Earth Observation, University of Leicester, Leicester, UK 22 Department of Atmospheric and Oceanic Sciences, Cooperative Institute for Meteorological Satellite Studies, University of Wisconsin-Madison, Madison, WI 23 Research Institute for Applied Mechanics, Kyushu University, Fukuoka, 816-8580, Japan 24 Space Science and Engineering Center, University of Wisconsin-Madison, Madison, WI, USA 25 Earth System Division, National Institute for Environmental Studies, Tsukuba, 305-8506, Japan 26 Environment and Climate Change Canada, Toronto, ON, Canada 27 Royal Meteorological Institute of Belgium, Brussels 28 Tokai University, Department of Human and Information Science, School of Information Science & Technology 4-1-1 Kitakaname Hiratsuka, Kanagawa 259-1292, Japan 29 Cooperative Institute for Research in Atmosphere (CIRA), Colorado State University, Fort Collins, Colorado 30 Mesoscale Atmospheric Processes Lab, NASA Wallops Flight Facility, 34200 Fulton Street, N159/E220, VA 23337 Wallops Island, United States 31 Instituto de Pesquisas Energéticas e Nucleares, Cidade Universitária, São Paulo, Brazil 32 Hydrometeorology and Remote Sensing Laboratory, University of Oklahoma, Norman, Oklahoma, USA 33 Pacific Northwest National Laboratory, Richland, Washington 34 National Research Institute for Earth Science and Disaster Resilience (NIED), Tsukuba, Ibaraki, Japan 35 European Space Agency, ESA-ESRIN, Via Galileo Galilei, 1, 00044, Frascati RM, Italy 36 Department of Physics, National and Kapodistrian University of Athens, 157 84 Zografou, Athens, Greece 37 Environmental Chemical Processes Laboratory, Department of Chemistry, University of Crete, Greece 38 Earth Observation Research Center, Japan 
Aerospace Exploration Agency (JAXA), Tsukuba, Ibaraki 305-8505, Japan 39 Meteorologisches Institut, Ludwig-Maximilians-Universität München, Germany 40 Japan Aerospace Exploration Agency (JAXA), 2 Chome-1-1, Sengen, Tsukuba, Ibaraki 305-8505, Japan 41 Institute for Space-Earth Environmental Research, Nagoya University; Nagoya 464-8601, Japan 42 Radio Research Institute, National Institute of Information and Communications Technology, Koganei, Tokyo 184-8795, Japan 43 Atmosphere and Ocean Research Institute, The University of Tokyo, Chiba, 2778564, Japan 44 University of Utah, Department of Atmospheric Sciences, Salt Lake City, United States 45 Oak Ridge National Laboratory: Oak Ridge, TN, US 46 University of Westminster, London, UK 47 Université de Toulouse, Météo-France, CNRS, Toulouse, France 48 Centre for Remote Imaging Sensing and Processing. National University of Singapore 49 Barcelona Supercomputing Center (BSC), Barcelona, Spain 50 Earth System Science Interdisciplinary Center, University of Maryland, College Park, College Park, Maryland 51 NASA Global Modelling and Assimilation Office, Greenbelt, Maryland 52 University of North Dakota, Department of Atmospheric Sciences 53 U.S. Naval Research Laboratory, Washington, D.C., 20375, USA 54 Metrological Research Institute, Japan Meteorological Agency, Tsukuba, Japan 55 Stony Brook University, Division of Atmospheric Sciences, Stony Brook, New York, United States 56 Brookhaven National Laboratory, Department of Environmental and Climate Science, Upton, New York, United States
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: PERCUSION’s contribution to EarthCARE validation; HALO measurements with EarthCARE-like payload over the tropics and mid to high latitudes

Authors: Silke Groß, Florian Ewald, Martin Wirth, Julia Windmiller, Bjorn Stevens, Konstantin Krüger, Lea Volkmer, Bernhard Mayer, Anna Luebke, Manfred Wendisch
Affiliations: Deutsches Zentrum für Luft- und Raumfahrt (DLR), Max-Planck-Institut für Meteorologie, Ludwig-Maximilians-Universität München, Universität Leipzig
For the first time, the ESA/JAXA EarthCARE mission combines a high spectral resolution lidar (HSRL) and a cloud profiling radar (CPR) with Doppler capability on a single spaceborne platform. These active remote sensing measurements are complemented by passive observations from a multispectral imager and a broadband radiometer. With this unique combination, EarthCARE will be the most complex satellite mission to perform aerosol, cloud, and precipitation measurements. To successfully use the expected advanced dataset from these remote sensing instruments for science applications, a careful and comprehensive validation of the data and of the disseminated products is required, addressing issues of both the instruments and the data processing. To prepare for the validation of EarthCARE measurements, an EarthCARE-like payload was implemented on board the High Altitude and Long Range Research Aircraft (HALO) by German research institutes and universities. This payload was designed more than 10 years ago with the deliberate purpose of validating future EarthCARE observations. It includes active remote sensing instruments such as an HSRL and a cloud radar with Doppler capability. Furthermore, as on EarthCARE, the active instrumentation is accompanied by passive remote sensing instruments, such as a hyperspectral imager, a thermal infrared imager, and broadband radiometers to measure solar and thermal-infrared upward and downward irradiances. This payload was first flown on HALO in 2013 during the NARVAL experiment, serving as a flying cloud observatory. Since then, it has frequently been deployed for science applications, including preparation for bridging the gap from the A-Train to EarthCARE by performing a large number of underpasses of CALIPSO and CloudSat since 2013. To make full use of the EarthCARE-like data on HALO and to further prepare for EarthCARE validation, independent algorithms (similar to some deployed for EarthCARE) were developed or revised. 
The lessons learned from all the EarthCARE preparation efforts over recent years finally led to the proposal of a combined science and validation campaign: PERCUSION (Persistent EarthCARE underflight studies of the ITCZ and organized convection). EarthCARE was launched in May 2024, and the full set of EarthCARE instruments was collecting measurements from 11 August 2024. Already on this first day with all EarthCARE instruments running in nominal mode, the first validation underflight of the PERCUSION campaign could be performed. PERCUSION consisted of three main parts to address different target scenes for validation. First, measurement flights out of Cape Verde (Sal) took place to address different aerosol particles, shallow to deep clouds, and strong convection. The second part was conducted out of Barbados to sample different cloud structures and different atmospheric aerosol loadings. Finally, a third part was performed out of Oberpfaffenhofen, Germany, which offered additional validation measurements for mid- and high-latitude situations, cirrus clouds and, to a minor extent, continental aerosol particle properties. The flights were designed to combine science objectives and validation, meaning the location of the research flights was determined by the EarthCARE track of the specific day. This offered the opportunity to validate EarthCARE and also placed EarthCARE at the center of the science applications. The capability of HALO to fly high (up to 15 km) and long (more than 9 hours) allowed the location of the underpass to be chosen so as to sample various atmospheric conditions for EarthCARE validation. Overall, we performed 33 direct underpasses for the validation of the different EarthCARE products under various meteorological conditions: 10 underpasses during the measurements out of Cape Verde, 11 during the measurements out of Barbados, and 12 for the flights out of Oberpfaffenhofen. 
In addition to direct underpasses, the need for cross-calibration was also considered. During the Cape Verde period, the HALO measurements were frequently coordinated with airborne remote sensing and in-situ measurements onboard the French ATR and with in-situ measurements onboard the Romanian King Air deployed by Norwegian institutes. In addition, overflights of ground stations in Mindelo, Barbados, Germany (Lindenberg, Leipzig, Jülich, Munich) and Greece (Antikythera) were performed, as well as of shipborne measurements onboard the German research vessel METEOR. In this presentation we will give an overview of the PERCUSION campaign and of how our measurements are and can be used for EarthCARE validation. We want to foster discussion and promote the data for further analysis.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: First EarthCARE Cal/Val Results for the Ground-based ACTRIS Supersite at Mindelo, Cabo Verde

Authors: Henriette Gebauer, Dr. Holger Baars, Leonard König, Dr. Athena Augusta Floutsi, Dr. Moritz Haarig, Dr. Annett Skupin, Dr. Ronny Engelmann, Felix Fritzsch, Tom Gaudek, Benedikt Gast, Luis Neves, David Donovan, Dr. Ulla Wandinger
Affiliations: Tropos - Leibniz Institute For Tropospheric Research, Instituto Nacional de Meteorologia e Geografia, Royal Netherlands Meteorological Institute (KNMI)
As part of the German Initiative for the Validation of EarthCARE (GIVE), the Leibniz Institute for Tropospheric Research (TROPOS) is intensively involved in the EarthCARE (Wehr et al., 2023) Cal/Val activities, especially with respect to two instruments: the ATmospheric LIDar (ATLID) and the multispectral imager (MSI). The continuous operation of ground-based multiwavelength Raman lidar systems (PollyXT) at different stations around the globe in the framework of PollyNET (Baars et al., 2016) provides valuable information on aerosol conditions on a global scale, and these ground-based lidar measurements can be utilized for Cal/Val purposes. One of these measurement sites is located at Mindelo, on the Cabo Verde Islands in the eastern tropical Atlantic, and includes the PollyXT lidar, a Doppler cloud radar (94 GHz until September 2024, 35 GHz afterwards), an AERONET sun photometer, a Doppler wind lidar, a microwave radiometer and a radiation station. The measurement site belongs to the European Aerosol, Clouds and Trace Gases Research Infrastructure (ACTRIS) and is particularly valuable for the EarthCARE Cal/Val activities, as it enables synergistic reference measurements of aerosols and clouds in addition to radiation measurements. In summer 2024, several ground-based, shipborne and airborne campaigns took place at the islands of Cabo Verde under the umbrella of the Organized Convection and EarthCARE Studies over the Tropical Atlantic (ORCESTRA) campaign (orcestra-campaign.org). In this context, numerous reference measurements for the EarthCARE Cal/Val activities were obtained. The various mobile observations were complemented by our stationary ground-based measurement site at Mindelo, represented by the CLoud and Aerosol Remote sensing for EarThcare (CLARINET) sub-campaign of ORCESTRA. Throughout the campaign, the station at Mindelo was staffed by TROPOS scientists to ensure the continuous operation of the automated instruments.
The aerosol conditions at Mindelo are typically characterized by the presence of geometrically and optically thick lofted layers of Saharan dust between June and September and a mixture of dust and biomass-burning aerosol between November and March. Cloud occurrence in the planetary boundary layer (PBL) below 2 km is very common, and cloud formation at the top of the dust layer can sometimes be observed. During CLARINET, low-level clouds were often present. Short cloud gaps or completely cloud-free conditions around the EarthCARE overpass were observed on several days, e.g., on 20, 21, 29 and 30 August 2024, allowing for a comparison of the ATLID measurements with those of PollyXT. Favorable aerosol conditions with Saharan dust up to 6 km height were present on these days. In this presentation, first Cal/Val results from the Mindelo site will be shown. In particular, the focus will be on a case study for L1 and L2a products of ATLID (Eisinger et al., 2024) and, potentially, on L2b products obtained by synergistic use of the ground-based and spaceborne instruments. For the L1 comparison, the ground-based lidar measurements are converted into ATLID-compatible Rayleigh, Mie and cross-polar signals using the ATLID L1 simulator tool (https://gitlab.com/KNMI-OSS/satellite-data-research-tools/cardinal-campaign-tools). Initial results look promising, but further investigations are needed, especially with respect to the direct comparison of the aerosol optical properties (backscatter and extinction coefficients, and depolarization ratio at 355 nm, which are measured with both ATLID and PollyXT) as soon as the L2a products are released to the Cal/Val teams.

References:
Baars, H., Kanitz, T., Engelmann, R., Althausen, D., Heese, B., Komppula, M., Preißler, J., Tesche, M., Ansmann, A., Wandinger, U., Lim, J.-H., Ahn, J. Y., Stachlewska, I. S., Amiridis, V., Marinou, E., Seifert, P., Hofer, J., Skupin, A., Schneider, F., Bohlmann, S., Foth, A., Bley, S., Pfüller, A., Giannakaki, E., Lihavainen, H., Viisanen, Y., Hooda, R. K., Pereira, S. N., Bortoli, D., Wagner, F., Mattis, I., Janicka, L., Markowicz, K. M., Achtert, P., Artaxo, P., Pauliquevis, T., Souza, R. A. F., Sharma, V. P., van Zyl, P. G., Beukes, J. P., Sun, J., Rohwer, E. G., Deng, R., Mamouri, R.-E., and Zamorano, F.: An overview of the first decade of PollyNET: an emerging network of automated Raman-polarization lidars for continuous aerosol profiling, Atmos. Chem. Phys., 16, 5111–5137, https://doi.org/10.5194/acp-16-5111-2016, 2016.
Eisinger, M., Marnas, F., Wallace, K., Kubota, T., Tomiyama, N., Ohno, Y., Tanaka, T., Tomita, E., Wehr, T., and Bernaerts, D.: The EarthCARE mission: science data processing chain overview, Atmos. Meas. Tech., 17, 839–862, https://doi.org/10.5194/amt-17-839-2024, 2024.
Wehr, T., Kubota, T., Tzeremes, G., Wallace, K., Nakatsuka, H., Ohno, Y., Koopman, R., Rusli, S., Kikuchi, M., Eisinger, M., Tanaka, T., Taga, M., Deghaye, P., Tomita, E., and Bernaerts, D.: The EarthCARE mission – science and system overview, Atmos. Meas. Tech., 16, 3581–3608, https://doi.org/10.5194/amt-16-3581-2023, 2023.
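The L1 comparison described above ultimately reduces to bringing ground-based and spaceborne profiles onto a common vertical grid and quantifying their differences. Below is a minimal, illustrative sketch of that regrid-and-compare step; the profiles, grids and the 5% bias are entirely hypothetical, and the actual conversion to ATLID Rayleigh, Mie and cross-polar signals is done with the KNMI simulator tool linked above:

```python
import numpy as np

def compare_profiles(z_ground, beta_ground, z_sat, beta_sat):
    """Interpolate a satellite backscatter profile onto the ground-based
    lidar's height grid and return the mean relative difference."""
    beta_sat_i = np.interp(z_ground, z_sat, beta_sat)
    valid = (beta_ground > 0) & (beta_sat_i > 0)
    rel_diff = (beta_sat_i[valid] - beta_ground[valid]) / beta_ground[valid]
    return rel_diff.mean()

# Hypothetical dust-layer profiles (height in km, backscatter in arbitrary units)
z_g = np.linspace(0, 6, 121)
beta_g = 1.0 + 0.5 * np.exp(-((z_g - 4.0) / 0.8) ** 2)   # lofted dust layer
z_s = np.linspace(0, 6, 61)                               # coarser satellite grid
beta_s = np.interp(z_s, z_g, beta_g) * 1.05               # assumed 5% high bias

print(f"mean relative difference: {compare_profiles(z_g, beta_g, z_s, beta_s):.3f}")
```

In practice both profiles would additionally be averaged over the collocation window and screened for clouds before differencing.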
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: MICMICS: A Multi-Mission Solution for Level-1 Radiometric Calibration Monitoring and Beyond

Authors: Ali Mousivand, Sébastien Wagner, Vincent Debaecker, Rakshith Shanbhag, Dr. Alessandro Burini, Tim Hewison, Mounir Lekouara
Affiliations: Eumetsat, Telespazio France, Telespazio Germany
Accurate satellite measurements rely on rigorous radiometric performance monitoring to ensure the reliability and precision of data collection. EUMETSAT achieves this through continuous monitoring of operational missions via vicarious calibration and inter-calibration systems. As EUMETSAT's mission portfolio and Level 1 products expand, the need for more calibration and inter-calibration products becomes increasingly important. The growing number of missions, channels to monitor, and evolving mission requirements demand more advanced calibration solutions. To address these challenges, EUMETSAT developed the Mission Integrated Calibration Monitoring and Inter-Calibration System (MICMICS), a versatile tool designed to handle multiple missions. Rather than developing separate tools for each mission, MICMICS integrates various vicarious calibration and inter-calibration algorithms to independently monitor and analyze radiometric performance. MICMICS covers the radiometric calibration needs for all channels of EUMETSAT imaging instruments, in both the solar-reflective and thermal infrared parts of the spectrum. It also processes third-party reference instruments to support inter-calibration and enhance the system's calibration accuracy. By performing systematic comparisons with reference instruments, MICMICS generates GSICS inter-calibration products through methods such as Simultaneous Nadir Overpasses and double-differencing of vicarious calibration results. Although originally developed for calibration monitoring, recent anomalies with MTG-I1 FCI have revealed that MICMICS can also play a vital role in the in-flight calibration chain. This insight expands its potential applications, positioning MICMICS as an essential tool for ensuring the sustained calibration of satellite instruments during active missions.
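The double-differencing mentioned above cancels the common reference, so an inter-sensor bias can be estimated even when the two monitored sensors are never directly collocated with each other. A minimal sketch of the idea, using hypothetical brightness temperatures rather than an actual GSICS product chain:

```python
import numpy as np

def double_difference(mon_a, ref_a, mon_b, ref_b):
    """GSICS-style double difference: each monitored sensor is first
    differenced against collocated reference observations, then the two
    single differences are subtracted, cancelling the reference bias."""
    sd_a = np.mean(np.asarray(mon_a) - np.asarray(ref_a))
    sd_b = np.mean(np.asarray(mon_b) - np.asarray(ref_b))
    return sd_a - sd_b

# Hypothetical brightness temperatures (K) at simultaneous nadir overpasses
ref = np.array([285.0, 290.0, 295.0])
sensor_a = ref + 0.8      # sensor A reads 0.8 K warm of the reference
sensor_b = ref - 0.3      # sensor B reads 0.3 K cold of the reference
print(round(double_difference(sensor_a, ref, sensor_b, ref), 6))  # -> 1.1
```

Any bias in the reference instrument appears in both single differences with the same sign and drops out of the result, which is what makes a single well-characterized reference usable across a whole fleet.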
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: TROPOMI in-Flight Calibration

Authors: Erwin Loots, Edward van Amelrooy, Mirna van Hoek, Antje Ludewig, Emiel van der Plas, Nico Rozemeijer
Affiliations: KNMI, TriOpSys B.V.
The Sentinel-5 Precursor (S5P) mission is part of the Copernicus Earth observation programme of the European Union. The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload on S5P. With its push-broom imaging system with spatial sampling down to 5.5 km x 3.5 km, daily global coverage is achieved for trace gases and aerosols important for air quality, climate forcing, and the ozone layer. TROPOMI contains four spectrometers with spectral bands in the ultraviolet (UV), the visible (UVIS), the near-infrared (NIR), and the shortwave infrared (SWIR) wavelengths. This wavelength range allows for observation of key atmospheric constituents such as ozone (O3), nitrogen dioxide (NO2), carbon monoxide (CO), sulfur dioxide (SO2), methane (CH4) and formaldehyde (CH2O), as well as aerosols and clouds. The instrument measures the radiance on the day side of each orbit and, once a day, the irradiance via a dedicated solar port. The long-term in-flight optical degradation of the TROPOMI instrument can be determined by combining the irradiance measurements of the daily and the weekly solar diffuser. This multiplicative degradation can be separated into several components: diffuser degradation, common optical degradation, UV spectral ageing, and nondeterministic daily residuals. A first complication in the degradation computation is the increasing solar variability observed in recent years since the solar minimum. We will show how to separate this effect from the instrument-stemming residuals by comparison with existing long-term solar line indices. A second complication consists of long-term changes in instrument straylight behavior. The straylight characteristics of the TROPOMI instrument were determined during the on-ground calibration campaign in 2015. The straylight in the in-flight (ir-)radiance measurements is continuously monitored using shielded detector regions.
We will re-assess the 2015 calibration campaign and discuss the relative growth of straylight as observed from the in-flight monitoring. Special attention must be paid to detector regions that receive a relatively low signal. We will show the dangers of overcorrection originating in on-ground calibration features, as well as the remediation measures, which involve a robust in-flight dynamic straylight correction algorithm. Thus, only through a carefully combined analysis of on-ground calibration, in-flight (ir-)radiance measurements and external solar line indices is it possible to disentangle the separate additive and multiplicative signal components that allow for both accurate straylight correction algorithms and degradation correction algorithms.
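The separation of diffuser degradation from common optical degradation exploits the fact that the daily diffuser accumulates solar exposure much faster than the weekly one, so the ratio of the two irradiance records cancels both the solar signal and the shared optical path. A toy multiplicative model of this idea (illustrative parameter values only, not the actual TROPOMI degradation algorithm):

```python
import numpy as np

# Hypothetical degradation model: each diffuser ages with accumulated solar
# exposure, d(t) = exp(-k * exposure(t)); the common optical path degrades
# with time, c(t) = exp(-m * t). Both multiply the true solar signal.
k, m = 2e-4, 5e-5
t = np.arange(0, 1000.0)             # days since launch
exp_daily, exp_weekly = t, t / 7.0   # daily diffuser sees ~7x the exposure

daily = np.exp(-k * exp_daily) * np.exp(-m * t)    # daily-diffuser irradiance
weekly = np.exp(-k * exp_weekly) * np.exp(-m * t)  # weekly-diffuser irradiance

# Solar activity and common-path degradation cancel in the ratio, leaving
# only the differential diffuser ageing, from which k can be fitted.
ratio = daily / weekly                        # = exp(-k * (6/7) * t)
k_fit = -np.polyfit(t, np.log(ratio), 1)[0] / (6.0 / 7.0)
print(f"recovered k = {k_fit:.2e}")           # -> recovered k = 2.00e-04
```

With k recovered, the common optical degradation follows by dividing either diffuser record by its modelled diffuser ageing; the real analysis additionally has to handle solar variability and the daily residuals per spectral channel.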
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.11/0.12)

Presentation: Demonstration of an Integrated approach for the Validation and exploitation of Atmospheric missions

Authors: Alexandru Dandocsi, Marcos Herreras-Giralda, Benjamin Torres, Axel Kreuter, Livio Belegante, David Fuertes, Manuel Lanzinger, Arthur Lehner, Verena Lanzinger, Doina Nicolae, Oleg Dubovik, Angelika Dehn, Paolo Castracane, Stefano Casadio
Affiliations: INOE, GRASP-SAS, University of Lille, LuftBlick, Cloudflight, ESA/ESRIN
DIVA (Demonstration of an Integrated approach for the Validation and exploitation of Atmospheric missions) is a Python Jupyter-notebook-based platform aiming to collect, handle, and exploit atmospheric data from ground-based remote sensing instruments, and to build new synergies beyond the operational products. DIVA demonstrates the capability and versatility of such a system to integrate ground-based (lidar, ceilometer, sun/lunar photometer and spectrometer), satellite and model data, stand-alone and synergetic algorithms for advanced data products, using combined algorithms from different platforms and sensors, as well as innovative data mining and data visualization tools. As such, the DIVA platform represents a community-based scientific virtual environment to develop and exploit new algorithms and products. The DIVA pilot and project aim to homogenize the lidar, ceilometer, photometer, and spectrometer data, explore the appropriate data formats, point out possible caveats that should be taken into account, and suggest improvements towards the homogenization of the data infrastructures. Users of DIVA take advantage of the GRASP (Generalized Retrieval of Aerosol and Surface Properties) and NATALI (Neural network Aerosol Typing Algorithm based on LIdar data) algorithms to further exploit aerosol-related products from ground-based datasets. The latest developments of new and improved synergy products and algorithms include the development of GRASP-AUR, improvement of the spectrometer retrieval, and improvement of QA/QC procedures for lidar retrievals. GRASP-AUR improves the GRASP-AOD characterisation of aerosol fine- and coarse-mode extinction from photometer measurements by including aureole measurements for the coarse mode. The latest results show improved correlation of the GRASP algorithm with the AERONET inversion and a reduction of the RMSE.
Furthermore, improvements to the trace-gas profile algorithms and to QA/QC procedures for lidar measurements have been exploited and further developed using the DIVA platform, in response to user recommendations for ESA atmospheric activities regarding ground-based aerosol retrievals. Moreover, future plans for the DIVA platform include the development of Cal/Val procedures for the EarthCARE ATLID and CBR instruments, as well as further developments of the synergy algorithms for lidar and photometer measurements. DIVA serves as the open-source platform for the comparison of satellite-borne aerosol and cloud measurements from EarthCARE with the ground-based ACTRIS-related algorithms and products. This contribution will highlight and demonstrate the main advantages of using an ESA-supported, open-source, Python-based platform for the development and improvement of new and existing atmosphere-related algorithms and products using both satellite-borne and ground-based instruments. Acknowledgements: This project is supported by the European Space Agency Contract No. 4000121773 DIVA; the European Commission under the Horizon 2020 Research and Innovation Framework Programme, through the ATMO-ACCESS Integrating Activity under grant agreement No 101008004; the Core Program within the National Research Development and Innovation Plan 2022-2027, carried out with the support of MCID, project no. PN 23 05; and Program 1 - Development of the national research development system, Subprogram 1.2 - Institutional performance - Projects to finance the excellent RDI, Contract no. 18PFE/30.12.2021. We acknowledge the support of the ACTRIS ERIC for the long-term sustainability of the ground-based infrastructure providing data to this study.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L1/L2)

Session: C.05.02 TRUTHS – Setting the gold standard reference for climate applications and satellite inter-calibration

The Traceable Radiometry Underpinning Terrestrial- and Helio-Studies (TRUTHS) mission will be a ‘metrology laboratory in space’, setting a ‘gold standard’ reference at unprecedented radiometric accuracy (goal 0.3%, k=2) in the UV-VIS-SWIR domain. TRUTHS is explicitly designed to re-calibrate itself in orbit directly against a primary standard of the international system of units (SI).

Carrying a cryogenic solar absolute radiometer and a hyperspectral imaging spectrometer, as well as a novel onboard calibration system, TRUTHS will improve by up to an order of magnitude our ability to estimate the spatially and spectrally resolved Earth Solar Reflected Radiation Budget, through direct measurements of incoming and outgoing energy and through partnership with other missions. This exceptional accuracy is required to shorten the time-to-detect for climate trends and to provide accurate, timely inputs about the Earth system to policy-makers and climate actions.

TRUTHS will effectively establish fiducial EO reference data in space, whose SI-traceability can be extended to other sensors through in-flight cross-calibration. This will be achieved directly via simultaneous observations and indirectly via vicarious calibration targets (e.g. CEOS reference sites), improving the services of other missions that deliver Essential Climate Variable products. As a traveling standard and by transfer of SI-traceability, TRUTHS will contribute to harmonising contemporary and historical multi-platform climate data records.

The focus of this session is on the preparatory scientific and user applications developments, reviewing current limitations and opportunities offered with TRUTHS for EO metrology from space, as a climate benchmark ESRB data record, in climate modelling, and for satellite inter-calibration.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L1/L2)

Presentation: A multidisciplinary Mission Simulator Tool for the optimisation of TRUTHS spacecraft Observation and Calibration of the Earth, Moon and Sun activities

Authors: Maria Carrillo Barrenechea, Luca Corpaccioli, Hazel Wood, Isabel Moore
Affiliations: Airbus Defence and Space Ltd
The Traceable Radiometry Underpinning Terrestrial- and Helio-Studies (TRUTHS) mission is an ESA mission with Airbus UK as prime contractor. It was proposed by the UK National Physical Laboratory (NPL) through ESA’s Earth Observation Earth Watch programme. By providing climate and calibration data with a goal radiometric performance an order of magnitude beyond the current state of the art, TRUTHS will be key to supporting more accurate adaptation and mitigation techniques towards the Net Zero objective of the United Nations, improving climate change predictions. TRUTHS will be a space-based agile observer performing different Earth, Sun and Moon observations and calibrations. To undertake these measurements, the spacecraft will perform different slew manoeuvres to point the required scientific payload towards each body of interest. These manoeuvres need to be planned in compliance with the spacecraft's physical limitations, while ideally maximising the scientific observation time. This planning presents a mathematical challenge, both in terms of feasibility, due to the constraints of the different subsystems, and in terms of optimisation of the other drivers of the mission. The Airbus Mission Simulator Tool is a multidisciplinary optimiser able to find optimal manoeuvres for the Sun and Moon observation and calibration activities while assessing 1) constraints arising from star-tracker blinding and from the thermal, power and payload subsystems, and 2) mission requirements and objectives, such as Earth observation availability and the communication or data-downlink windows. For Moon observations, it also provides information on the lunar phases and librations that the spacecraft will be able to observe during its mission lifetime, and on the time impact these will have on Earth observation.
The tool’s findings will allow the scientific team to easily identify the days on which the body observations would be preferable, considering the rest of the mission drivers, and to check compliance with the requirements. In this paper, the working principle of the tool is presented alongside an assessment of the requirements, objectives and challenges of the mission. By exploiting the single degree of freedom that the problem presents, the tool finds optimal solutions for the manoeuvres, confirming the feasibility of all the activities. Future capabilities and features of the Mission Simulator Tool are also discussed, which will allow TRUTHS to maximise the scientific data provided over its whole mission lifetime.
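The single-degree-of-freedom structure mentioned above can be illustrated with a one-dimensional scan: evaluate the one free parameter over its range, discard infeasible values, and keep the feasible value that maximises the objective. The free parameter, constraint intervals and objective below are purely hypothetical, not the Airbus tool's actual models:

```python
import numpy as np

# Toy sketch of a single-degree-of-freedom search: scan the one free
# parameter (here a hypothetical slew start time within an orbit, minutes)
# and keep the feasible value that maximises an observation objective.

def feasible(t_start):
    """Hypothetical subsystem constraints, e.g. a star-tracker blinding
    interval and a power window, expressed as simple bounds (minutes)."""
    blinding = 20.0 <= t_start <= 35.0
    power_ok = t_start <= 80.0
    return (not blinding) and power_ok

def eo_availability(t_start):
    """Hypothetical objective: Earth-observation time preserved before the
    calibration slew (the later the slew, the more EO time retained)."""
    return t_start

candidates = [t for t in np.linspace(0.0, 95.0, 96) if feasible(t)]
best = max(candidates, key=eo_availability)
print(f"best slew start: {best:.0f} min")   # -> best slew start: 80 min
```

Because there is only one degree of freedom, an exhaustive scan like this is cheap and guarantees the feasible optimum on the grid; the real tool layers many more constraints (thermal, payload, downlink windows) onto the same structure.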
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) – A SITSat to support the climate emergency

Authors: Thomas August, Pr Nigel Fox, Pr John Remedios, Dr Sam Hunt, Paul Green, Dr Maddie Stedman
Affiliations: ESA, NPL, NCEO
INTRODUCTION The international metrology community, through the establishment of the international system of units (SI), has achieved measurement temporal stability and global consistency over a century-long period. This has been accomplished through the implementation of three key principles (traceability, uncertainty analysis, and comparison), demanding an evidence-based approach to ensure unequivocal confidence in information derived from measurements. Climate observations from space have similar measurement goals. Long-term records of satellite-instrument observations and derived environmental parameters must have irrefutable credibility and stability at a level that justifiably permits the evaluation of climatic trends and a robust response to the climate emergency. To achieve this, satellite observations should follow the same metrology approach as the wider terrestrial community, and this vision helped lead to the development of the Committee on Earth Observation Satellites (CEOS) QA4EO (Quality Assurance for Earth Observation) principle and associated guideline framework [1], which embodies the requirement to establish and demonstrate traceability to international standards. However, given the challenges associated with the harshness of the space environment and of launch, evidencing traceability at the uncertainty levels needed for demanding applications like climate is rarely achieved [2]. This paper describes an example of a new class of satellite-based instruments called SI-Traceable Satellites (SITSats) [2], which provide the opportunity for a paradigm shift in the levels of uncertainty, and consequential confidence, in the SI-traceability chain that can be achieved at the location of operation, i.e. in space. TRUTHS Traceable Radiometry Underpinning Terrestrial- and Helio-Studies (TRUTHS) is a SITSat.
It is a hyperspectral imaging spectrometer satellite mission explicitly designed to become a ‘gold standard’ reference for observing the state of the Earth’s climate in the short-wave spectral domain, in support of the climate emergency [3, 4]. TRUTHS is an ESA Earth Watch mission developed with a large scientific consortium and an industrial consortium, led by NPL and Airbus UK respectively, with contributions from Switzerland, the Czech Republic, Greece, Romania and Spain. The mission was conceived at NPL in the UK more than 20 years ago in response to challenges highlighted through bodies such as CEOS to address the observational needs of GCOS and satellite interoperability. This led to the initial ‘calibration focus’ of the mission and the vision of creating an in-orbit SI-traceable reference, or ‘metrology/standards laboratory in space’. The mission has a target launch date of 2030 and a minimum operational lifetime of 5 years, with a goal of 8 years. As the climate emergency started to emerge as a global priority, the value of TRUTHS’ unprecedented observational capabilities across the whole short-wave spectral domain raised the needs of climate to an explicit priority. The results of the 2007 US decadal survey helped to frame the most demanding observational objectives for TRUTHS towards addressing the radiation balance and the climate sensitivity resulting from feedbacks (e.g. cloud, albedo, solar variability), as well as nearer-term support for land-use change and carbon-cycle studies. It also initiated the long-standing partnership with the US sister mission CLARREO and the CLARREO Pathfinder mission [5]. The TRUTHS mission is currently in Phase B2 with a planned launch in 2030. PAYLOAD The main instrument of TRUTHS is a hyperspectral imaging spectrometer (HIS), with continuous spectral sampling over the short-wave (320 nm to 2400 nm), capable of 50 m nadir ground resolution over the full globe.
The HIS not only measures spectrally resolved (2-6 nm bandwidth) Earth-reflected radiance but also incoming spectral irradiance from the Sun and from the Moon. The novel on-board calibration system (OBCS) seeks to enable all these observations to be made with an unprecedented target uncertainty of ~0.3% (k=2) across the entire spectrum. At the heart of the OBCS is the Cryogenic Solar Absolute Radiometer (CSAR). Operating at temperatures below -200 °C, this instrument, in common with similar instruments on the ground, provides the direct link to SI. It also provides daily measurements of the total integrated energy reaching the Earth from the Sun with an uncertainty goal of 0.02% (k=2) [4]. The functionality, SI-traceability and design-predicted uncertainty achieved through use of the OBCS will be presented. Although the mission is able to robustly re-establish traceability in flight, it is still essential for it to undergo detailed characterisation and comparison before launch with facilities able to deliver comparable uncertainties. A facility of this performance is itself state-of-the-art, even for non-space applications, and has been built at NPL in a modular manner. This Spectroscopically Tuneable Absolute Radiometric calibration and characterisation Optical Ground Support Equipment (STAR c&c OGSE) uses a spectrally tuneable CW laser to provide a known spectral radiance to an optical instrument. The strategy to achieve and evidence full end-to-end traceability of the mission, pre-flight and in-orbit, will also be described. REFERENCE CALIBRATION The orbital track of TRUTHS has been selected to maximise opportunities to cross the paths of other satellites. This enables improved cross-calibration resulting from near-simultaneity of observation of the same target. The methodologies adopted by TRUTHS to transfer its calibration to other sensors, and the associated uncertainty, will be discussed.
SUMMARY The high accuracy of TRUTHS, together with its spectral and spatial resolution, facilitates a new epoch in how the Earth is observed, delivering data not constrained to a single discipline but deliberately specified so that it can be configured to support applications in and at the boundaries of land, ocean and atmosphere, meeting the exacting needs of climate. Encompassing its own ‘metrology laboratory in space’, TRUTHS instruments are regularly calibrated in flight against a primary SI standard. This ensures that, unlike for other satellite instruments, TRUTHS’ ability to detect long-term changes and trends will not be constrained by instrumental performance (e.g. drifts and biases), but rather by the size of the trend above the background of natural variability. In this way it helps the observational specifications of the Global Climate Observing System (GCOS) [2, 6] to be achieved, as well as offering the prospect of testing and constraining the forecasts of climate models in as short a time as possible. TRUTHS will establish a fiducial reference data set [7] of incoming and outgoing solar radiation which will:
• provide, directly and through reference calibration, an SI-traceable observational benchmark of the radiation state of the planet in the short-wave, from which change can be detected in as short a time as possible by virtue of its low uncertainty;
• transform the radiometric performance and functionality of current, future (and some heritage) EO systems to meet the specific needs of climate, by providing an SI-traceable, high-accuracy reference calibration in orbit;
• deliver data of sufficient quality and flexibility to test and improve the retrieval of solar-reflective Essential Climate Variables (ECVs), particularly those related to the carbon cycle on land and ocean;
• provide a robust anchor to support understanding of the impact of solar variability on climate and atmosphere in the near and medium term;
• serve as an enabler for the growth of next-generation ‘micro-satellites’ by providing a reference calibration for sensors too small for robust calibration systems of their own.
REFERENCES
1. https://qa4eo.org/
2. CEOS and GSICS report on SI-traceable space-based climate observing system, https://doi.org/10.47120/npl.9319
3. https://www.esa.int/Applications/Observing_the_Earth/TRUTHS
4. N. Fox & P. Green, Remote Sens., vol. 12, no. 15, p. 2400, Jul. 2020, doi: 10.3390/rs12152400.
5. https://clarreo-pathfinder.larc.nasa.gov/
6. https://gcos.wmo.int/en/essential-climate-variables
7. Goryl, P.; Fox, N.; Donlon, C.; Castracane, P. Fiducial Reference Measurements (FRMs): What Are They? Remote Sens. 2023, 15, 5017. https://doi.org/10.3390/rs15205017
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Supporting the TRUTHS Climate Mission Development With an End-to-End Metrology Simulator

Authors: Nicole Yaghnam, Sam Hunt, Dr. Nigel Fox
Affiliations: National Physical Laboratory
Long-term climate data records, which are used to underpin climate mitigation and adaptation policy, are built on high-quality Earth Observation (EO) data. Despite advancements in the development of these data records, their achievable uncertainty remains limited – reducing their utility to observe subtle changes in climate parameters over background natural variability. To address this limitation, it has been identified that new, lower-uncertainty instrumentation is required. To this end, a new generation of SI-traceable satellites (SITSats) are under development, which will provide confidence to climate data through robust in-flight calibration, enabling traceability to the SI in-orbit for the very first time. The ESA Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) mission is an example of a SITSat currently under development. Scheduled to launch in 2030, TRUTHS will provide benchmark high-resolution hyperspectral measurements (320 to 2400 nm) of Earth-reflected solar radiance with unprecedented low uncertainties, targeting 0.3% (k=2). The key to establishing SI-traceability in-orbit is TRUTHS’s on-board calibration system (OBCS), which resembles those found in metrology laboratories on the ground. This includes a cryogenic solar absolute radiometer (CSAR), which acts as an SI primary reference standard for optical radiation through electrical anchoring to the volt. In order to verify the mission can achieve this target calibration accuracy, an “End-to-End Metrological Simulator” (E2EMS) is under development to extensively simulate the TRUTHS payload performance. This uses a metrological approach to model the acquisition of TRUTHS L1b products and ensure that the uncertainty requirements are met. The E2EMS contains an instrument model that generates mock L0 measurements followed by the radiometric calibration up to L1b. The TRUTHS uncertainty budget is propagated through to L1b using the Monte Carlo method. 
For the implementation of uncertainty evaluation in the mission operational ground processor, the outputs of the E2EMS Monte Carlo processing will be used to form uncertainty look-up tables (LUTs). To summarise, the TRUTHS E2EMS will verify the measurement capabilities of the satellite design against mission requirements. The objective of this presentation is to outline the E2EMS and its recent results.
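The Monte Carlo propagation step described above can be sketched as follows: perturb the calibration parameters according to their assumed uncertainties, push each draw through the measurement function, and take the spread of the resulting L1b values as the propagated standard uncertainty. The measurement function and the numbers below are hypothetical, not the TRUTHS processor:

```python
import numpy as np

rng = np.random.default_rng(0)

def calibrate(counts, gain, dark):
    """Hypothetical L0 -> L1b radiometric calibration: subtract the dark
    signal and divide by the instrument gain."""
    return (counts - dark) / gain

# Nominal calibration parameters with assumed standard uncertainties
gain, u_gain = 2.0, 0.002      # 0.1% relative uncertainty on gain
dark, u_dark = 50.0, 0.5       # counts

n_mc, counts = 20000, 1050.0
# Draw perturbed calibration parameters and propagate each draw through
# the measurement function (the GUM Supplement 1 Monte Carlo approach)
radiances = calibrate(counts,
                      rng.normal(gain, u_gain, n_mc),
                      rng.normal(dark, u_dark, n_mc))

l1b = calibrate(counts, gain, dark)          # nominal L1b value
u_l1b = radiances.std(ddof=1)                # propagated standard uncertainty
print(f"L1b = {l1b:.1f}, u = {u_l1b:.3f} ({100 * u_l1b / l1b:.2f}%)")
```

Repeating this per channel and per scene configuration yields exactly the kind of per-condition uncertainty values that can then be tabulated into the look-up tables mentioned above.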
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Climate Benchmark Using Highly Accurate Hyperspectral Measurements

Authors: Xu Liu
Affiliations: NASA Langley Research Center
The upcoming missions such as NASA’s CLARREO Pathfinder (CPF) and ESA’s TRUTHS will provide unprecedentedly accurate top-of-atmosphere (TOA) hyperspectral radiance measurements from 350 nm to ~2500 nm. To fully exploit the information content of these measurements, efficient and accurate radiative transfer models (forward models) and retrieval algorithms are needed. In this presentation, we will describe a Principal Component-based radiative transfer model (PCRTM) and a novel climate benchmarking algorithm designed for hyperspectral remote sensors. The PCRTM can simulate a TOA radiance or reflectance spectrum with very good accuracy relative to a reference line-by-line (LBL) radiative transfer model (RTM). It is orders of magnitude faster than the reference LBL RTM. The PCRTM model has been developed for hyperspectral sensors such as AIRS, CrIS, IASI, NAST-I, SHIS, CPF, OMI, TEMPO, EMIS, and SBG with wavelengths ranging from far IR to UV. An optimal estimation (OE) retrieval algorithm using the PCRTM as the forward model has been developed for numerous hyperspectral remote sensors. For example, a Single Field-of-view Sounder Atmosphere Product (SiFSAP) algorithm has been used to generate atmospheric temperature, water vapor, trace gases, clouds, and surface products from the CrIS Level 1 data at NASA Goddard Earth Science Data and Information Science Center (GES DISC) since 2023. The novel benchmarking algorithm uses a spectral fingerprinting method which can process Level 1 data from different hyperspectral remote sensors using consistent radiative kernels. These radiative kernels and associated reference radiance spectra contain the spectral information of the Earth’s atmosphere and the surface. They are derived from the PCRTM-based OE retrieval algorithm mentioned above. The climate benchmarking is done by applying a linear spectral fingerprinting method to the remotely sensed Level 1 data.
This method ensures radiometric closure between the retrieved atmospheric and surface products and the observed hyperspectral radiance spectra, and it produces high-quality climate data records (CDRs). Additionally, the spectral fingerprinting method reduces the time needed to generate CDRs by more than three orders of magnitude. This makes it easy to reprocess CDRs once the Level-1 data from different satellites have been improved via either re-calibrations or inter-satellite calibrations. We will demonstrate the spectral fingerprinting method using more than 20 years of Aqua AIRS, SNPP CrIS, and NOAA20 CrIS radiance spectra.
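The linear spectral fingerprinting step can be sketched as a least-squares fit of a radiance-anomaly spectrum onto a set of radiative kernels. The kernel shapes, channel grid and parameter anomalies below are invented for illustration; in practice the kernels would come from an RTM such as PCRTM.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan = 200                                   # spectral channels

# Illustrative radiative kernels dL/dx (normally derived from an RTM)
wn = np.linspace(650.0, 2750.0, n_chan)        # wavenumber grid, cm^-1
K = np.column_stack([
    np.exp(-((wn - 700.0) / 80.0) ** 2),       # temperature-like kernel
    np.exp(-((wn - 1500.0) / 150.0) ** 2),     # water-vapour-like kernel
    np.exp(-((wn - 2350.0) / 60.0) ** 2),      # CO2-like kernel
])

# Simulated radiance anomaly: kernels times parameter anomalies plus noise
x_true = np.array([0.8, -0.3, 0.5])
delta_l = K @ x_true + rng.normal(0.0, 0.01, n_chan)

# Linear fingerprinting: least-squares fit of the anomaly onto the kernels
x_hat, *_ = np.linalg.lstsq(K, delta_l, rcond=None)
```

Because the fit is linear, reprocessing after a Level-1 recalibration only requires re-solving this system with the updated anomalies, which is what makes CDR regeneration so fast.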
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Solar Irradiance Variability, its Uncertainty and Impact on the Earth Atmosphere

Authors: Margit Haberreiter, Dr. Stergios Misios, Dr. Nigel Fox, Prof. Richard Bantges, Dr. Wolfgang Finsterle
Affiliations: PMOD/WRC, Academy of Athens, National Physical Laboratory, Imperial College
Solar irradiance varies on time scales from minutes and days to months, decades, centuries and beyond. To fully understand the effects of solar irradiance variations on the Earth's atmosphere, high-precision measurements as well as robust irradiance reconstruction models are required to fill the gaps when no measurements are available. Here, we present the latest work on the uncertainty in atmospheric heating fluxes and the responses of the thermal and chemical structure of the upper atmosphere. We present the solar irradiance input uncertainties as determined within the framework of the TRUTHS mission Accompanying Consolidation Towards Operations Study (TACOS). Our atmospheric and climate model simulations allow us to estimate uncertainties in the atmospheric state for solar spectra of different activity levels. The results are compared to the modelling results using the CMIP6- and the forthcoming CMIP7-recommended solar spectra. In addition, we will also report on further related solar irradiance studies within TACOS.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Cross-calibration performance with the SI-traceable Satellite (SITSat) TRUTHS: A case study for S2 MSI

Authors: Madeline Stedman, Samuel Hunt, Pieter de Vis, Javier Gorroño, Professor Nigel Fox
Affiliations: NPL, University of Valencia
Remote sensing plays a crucial role in monitoring and understanding the Earth's climate, providing essential data for policy-making, climate modelling, and climate change mitigation strategies. GCOS has defined a set of Essential Climate Variables (ECVs) that critically contribute to the characterization of the Earth’s climate. Achieving comprehensive monitoring of these ECVs necessitates the use of data from the many different available sources. The current Earth Observing system comprises long-established space agency satellites, which offer extended time series, and an increasing number of commercial satellites that deliver higher-resolution data both spatially and temporally, the latter through larger constellations. Ensuring this observing system is integrated and interoperable requires, as a minimum, harmonised and consistent data, meaning that any observational biases between sensors are understood and correctable. This is particularly important for climate applications such as change detection in ECV datasets, which often demand long time series and comprehensive temporal and spatial sampling. Typically, this is achieved by on-orbit calibration against common reference sites and/or other satellites. The next generation of satellites, where high-accuracy on-board SI-traceability is embedded into the design, so-called SITSats, aims to support this by becoming “gold standard” calibration references. This includes the ESA TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio- Studies) mission, which will make hyperspectral observations from the visible to the short-wave infrared with a target uncertainty of 0.3 % (k = 2). The primary technique that will be used for in-flight cross-calibration of EO sensors against TRUTHS is comparison of measurements within satellite-to-satellite match-ups, where sensors observe the same location at the same time (to within a defined threshold).
This is also commonly referred to as a simultaneous nadir overpass (SNO). To date, uncertainty budgets associated with intercalibration have been dominated by the uncertainty of the reference sensor. However, the improved level of uncertainty that will be achieved by TRUTHS, and other SITSats, means cross-calibration performance will no longer be limited by the uncertainty of the reference measurements themselves. Instead, the limiting factor will shift to the challenges associated with the differences between the sensor observations during match-ups. In optical observations these can include mismatches in spectral and spatial sampling, viewing geometry (changes in surface reflectance, polarization), and timing (changes in illumination angle, atmosphere). Such effects introduce a comparison mismatch which adds further uncertainty to the comparison beyond the inherent uncertainties of the measurements. Therefore, several match-up processing steps are necessary to compensate for this comparison mismatch and make the sensor data more comparable, including convolving the reference spectra to match the spectral response function of the sensor under test, and sampling the data spatially such that the two sensors observe the same field of view. Presented here is an uncertainty analysis evaluating the performance achievable for cross-comparison with a SITSat like TRUTHS. This is demonstrated for the comparison of the multispectral sensor Sentinel-2 MSI with TRUTHS using simulated imagery for representative cal/val scenes. Observations made by high-resolution multispectral imagers like Sentinel-2 underpin the derivation of several ECVs. Results from the analysis highlight the importance of characterisation of the sensors (e.g. spectral response function, geolocation) both pre-flight and on-orbit for the performance of intercalibration against references such as SITSats.
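One of the match-up processing steps mentioned above, reducing a hyperspectral reference spectrum to a band of the sensor under test, can be sketched as an SRF-weighted average. The spectrum and the Gaussian SRF are hypothetical stand-ins, not TRUTHS or MSI characterisations.

```python
import numpy as np

# Hyperspectral reference spectrum (toy: radiance linear in wavelength, nm)
wl = np.arange(400.0, 1000.0, 1.0)
radiance = 100.0 + 0.05 * (wl - 400.0)

def srf(wl, centre=665.0, fwhm=30.0):
    """Hypothetical Gaussian spectral response for one multispectral band."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wl - centre) / sigma) ** 2)

# Band-equivalent radiance: SRF-weighted average on the uniform 1 nm grid
w = srf(wl)
band_radiance = (w * radiance).sum() / w.sum()
```

For this linear spectrum and symmetric SRF the result reduces to the spectrum value at the band centre, 113.25 in these units; for real, structured spectra the SRF weighting is what removes the spectral part of the comparison mismatch.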
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.94/0.95)

Session: C.01.11 Airborne and Ground-based Instrument Demonstrators

Airborne and ground-based remote sensing instrument demonstrators are important tools to demonstrate new instrument concepts, to develop and verify the retrieval of geophysical parameters, and to make reference measurements for the calibration and validation of Earth observation missions.
This session aims to present ongoing and completed developments of airborne and ground-based instrument demonstrators.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Mountain-top CubeSat Demonstrator for Urban Air Quality Monitoring: A Leap Towards High-Resolution NO₂ Mapping in the Alpine Valley of Innsbruck

Authors: Stefanie Morhenn, Daniel Santana Díaz, Manuel Roca, Christoph Waldauf, Martin Tiefengraber, Dr. Alexander Cede, Dr. Frederik Tack, Dr. Michel Van Roozendael, Univ.-Prof. DI Dr. Thomas Karl
Affiliations: LuftBlick OG, Belgian Institute for Space Aeronomy, Department of Atmospheric and Cryospheric Sciences
Current air quality (AQ) satellite missions deliver data with a spatial resolution of the order of square kilometers and a temporal resolution of hourly to daily data collection, depending on the satellite orbit. They are capable of resolving the spatial and temporal variability of trace gases such as nitrogen dioxide (NO₂) over regions of the world without sources, e.g. oceans, but they fail to capture the variability over most land masses, especially urban and complex (e.g. mountainous) terrains. Complementary ground-based AQ measurement instruments offer a higher temporal resolution, but generally only capture a limited air mass and therefore provide limited spatial information, unless there are many instruments distributed on a small grid. In this project, we approach the resolution limitations of AQ monitoring in urban mountainous regions by using a ground-based instrument spatially resolved at sub-hectometer scale. In the first stage of the project, the stand-alone instrument was developed, calibrated and installed on a mountain top, where it measured in downward-looking mode for more than a year. Due to unexpected hardware-related changes between laboratory and field, we faced several challenges in the usability of the laboratory calibration. This required the development of a comprehensive in-field calibration in order to improve the usability of the measurement data. We show our approach and results of the laboratory and field calibrations, as well as the lessons learned from the challenges of operating a self-sufficient instrument on a mountain-top. Finally, we present the modifications and improvements needed for a future version of the instrument, and demonstrate our results of NO₂ distribution maps of the urban terrain of Innsbruck, with the ultimate goal of continuously monitoring pollution sources, NO₂ patterns and their dynamics within the Alpine valley.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: 3D wind measurement with a wind lidar including a quad-Mach-Zehnder interferometer developed for on-board measurement

Authors: David Michel, Dr Laurent Bizet, Mr Thibault Boulant, Mr Matthieu Valla, Mr Amaury Capmas-Pernet, Dr Nicolas Cezard, Dr Frédérique Dambreville, Dr Florence De La Barrière, Dr Yann Ferrec, Mr Josué Gauthier, Dr Olivier Gazzano, Mr Didier Goular, Mr Julien Houy, Dr Laurent Lombard, Dr Jean-François Mariscal, Dr Christian Musso, Dr Philippe Perrault, Mr Christophe Planchat, Dr Pierre Pichon, Mr Jonathan Pouillaude, Dr Nicolas Rouanet
Affiliations: Onera, LATMOS
On-board 3D wind measurement at all altitudes has numerous aeronautical applications (Gust Load Alleviation, HAPS, etc.) and space applications (wind measurement with Aeolus, calibration/validation of Aeolus data). This 3D wind measurement is particularly interesting in turbulent wind for several applications such as weather and climate forecasting, planning and safety of aircraft during their flight, transport of aerosols and pollution, monitoring of weather conditions in the case of disasters, wind power generation, forest fires and volcanic plume movement (see session A.01.09). The instrument developed for this type of measurement is the direct-detection UV lidar, which sends a laser pulse into the atmosphere and determines, with a spectral analyzer, the wind projected on the axis of the lidar from the Doppler shift induced by the particles (low altitudes) and molecules (high altitudes) of the atmosphere. To measure the radial wind, the quad Mach-Zehnder (QMZ) interferometer is, to our knowledge, the best compromise between precision and robustness [D. Bruneau and J. Pelon, Atmos. Meas. Tech. 14, 4375 (2021)]. Additionally, such an analyzer can also be used to determine particle backscatter coefficients and extinction coefficients, and can be extended with a dedicated channel for aerosol and cloud polarization analysis. This is why, at ONERA, we are developing an all-altitude wind lidar solution based on a QMZ analyzer. The 3D wind is then reconstructed by addressing the lidar axis in several directions and using an algorithm (C. Musso et al., session A.01.09) to recover the 3D wind components from the measured wind projections. This instrument includes several solutions to obtain a vibration-robust version of the different lidar components (analyzer, laser, transmission/reception, scanning system and overall instrument).
The QMZ interferometer is a two-wave interferometer that provides four signals of the two-wave interference pattern, in phase quadrature, used to determine the frequency shift of the backscattered light and derive the radial wind speed. The advantages of the QMZ interferometer, compared to other solutions, are as follows: (1) it is not sensitive to the frequency drift of the laser source, (2) it is not sensitive to the shape of the backscattered spectrum, (3) it gives a small statistical error equal to 2.35 (εvr)ISA, where (εvr)ISA is the error obtained for an ideal analyzer, (4) it can include a field-compensation design which allows a wide angle of incident field and facilitates adaptation to an extended wide-beam system, and (5) it uses mono-detectors which do not truncate the collected signal (compared to marginal imaging systems). To be used on board, two interferometer architectures robust to vibrations are being developed at ONERA: (1) the first based on commercial components and (2) the second monolithic, made by bonding all the optical components together. The first version is cheaper and easier to study in depth, while the second is more rugged. To obtain architectures insensitive to angular misalignment, both are composed of a single beam splitter and two retroreflective optics. An innovative calibration procedure was developed to determine the exact contrast and phase difference between the four outputs based on Lissajous curves. The two architectures, their simulated performances and the first experimental results will be presented. In addition to the spectral analyzer, the wind lidar includes several components that must be made compact and ruggedized to be usable on aircraft. The solid-state UV lasers typically used are very sensitive to vibrations (especially the laser cavity), and their use on board generally requires a large amount of metal to make them insensitive to vibrations, leading to very heavy and expensive solutions.
To resolve this problem, we are developing a solution based on a fiber laser, which has the advantage of being, in the long term, lighter and more robust to vibrations. The architecture of the system used to address/focus the laser in the probed region and collect the backscattered light from this region is generally designed in a bistatic configuration, where the optical axes of the two systems are set by different optics. However, the transmission and reception must share the same axis, which poses a problem under vibration for long-distance measurements due to the large lever arm of the two systems. To avoid this problem, we have developed a new monostatic configuration close to that commonly used for heterodyne lidars. To address the beam in different directions, we designed a static system comprising several duplicated monostatic transmit/receive instruments. A time-multiplexing method was developed to use a shared spectral analyzer to process all axes. The addressed angles were optimized using the 3D wind reconstruction algorithm presented in (C. Musso et al., session A.01.09). The design of all these components will be presented. The project 101101974 – UP Wing is supported by the Clean Aviation Joint Undertaking and its members. Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the Clean Aviation Joint Undertaking. Neither the European Union nor the granting authority can be held responsible for them.
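The quadrature phase retrieval at the heart of the QMZ analyzer can be sketched under standard assumptions: four outputs offset by 90 degrees in phase, and an interferometric phase proportional to the Doppler shift through the optical path difference. The wavelength, OPD and contrast values are illustrative, not ONERA's instrument parameters.

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
WAVELENGTH = 355e-9      # UV lidar wavelength, m (illustrative)
OPD = 0.25               # interferometer optical path difference, m (illustrative)

def qmz_signals(v_radial, contrast=0.8, intensity=1.0):
    """Four QMZ outputs in phase quadrature for a given radial wind speed."""
    doppler = 2.0 * v_radial / WAVELENGTH          # Doppler shift, Hz
    phase = 2.0 * np.pi * doppler * OPD / C        # interferometric phase, rad
    return np.array([intensity * (1.0 + contrast * np.cos(phase + k * np.pi / 2))
                     for k in range(4)])

def radial_wind(s):
    """Recover the phase from the quadrature signals, then the wind speed."""
    phase = np.arctan2(s[3] - s[1], s[0] - s[2])   # differencing cancels intensity
    doppler = phase * C / (2.0 * np.pi * OPD)
    return doppler * WAVELENGTH / 2.0

v = radial_wind(qmz_signals(12.0))
```

Differencing opposite outputs cancels the common intensity term, which is why the retrieval is insensitive to the backscattered spectrum shape; with these illustrative numbers the unambiguous radial-wind range is about ±c·λ/(4·OPD) ≈ ±106 m/s.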
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: CRISTALair Development and Testing for Dual-Frequency Altimetry Applications

Authors: Ferran Gibert, Albert Garcia-Mondéjar, Clara Colet, Ester Vendrell, Adriano Meta, Filippo Spaeziali, Henriette Skourup, Sebastian Bjerregaard Simonsen, Paolo Cipollini, Michele Scagliola, Valeria Gracheva
Affiliations: isardSAT, Catalonia, MetaSensing, Italy, Technical University of Denmark, RHEA / ESA-ESRIN, ESA-ESTEC
To ensure the success of operational radar altimeters, proof-of-concept instruments that can be tested on airplanes play a critical role. These instruments enable the validation of technologies, algorithms, and measurement approaches in a controlled yet real-world environment before their deployment in space. CRISTALair succeeds ASIRAS (Airborne SAR/Interferometric Radar System), which operated in both the Arctic and Antarctic from 2004 to 2019, and KAREN, a Ka-band radar altimeter with enhanced sensitivity to surface and snow layer properties. The main advancement in CRISTALair lies in its ability to acquire data simultaneously in Ku- and Ka-band, with interferometric capabilities on both, elevating the Science Readiness Level (SRL) of dual-band algorithms/processing. Beyond the dual-band radar, CRISTALair will integrate an airborne laser scanner, a colour-infrared camera, and ancillary equipment to ensure precise positioning and attitude of the interferometer. Additionally, man-made external reflectors will facilitate the performance evaluation of the instrument. The development of CRISTALair started in March 2023, and it is currently in the testing phase. The first testing flight campaign is planned for the first quarter of 2025, where the full capabilities of the measurements will be evaluated. This presentation will show the results of the dual-frequency altimeter over different natural targets. The performances will be compared against the requirements to evaluate the instrument compliance in preparation for the functional flight campaign planned for Autumn 2025.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Development of a New Airborne SAR System with Single-Pass Tomography Capabilities

Authors: Maximilian Eitel, Dr. Martin Ruß, Dr. Francescopaolo Sica, Prof. Michael Schmitt
Affiliations: University of the Bundeswehr Munich
The Earth Observation Lab at the University of the Bundeswehr Munich, in collaboration with Gamma Remote Sensing GmbH and AID GmbH, is developing an innovative airborne Synthetic Aperture Radar (SAR) system operating in the S-band. NISAR, scheduled for launch in the first quarter of 2025, exemplifies the growing focus on leveraging the S-band's unique balance between penetration depth and resolution, demonstrating its suitability for environmental and infrastructure monitoring. With our airborne system, we aim to complement this growing field of research and application. A unique feature of the new system will be its capability to acquire data for single-pass SAR tomography. Operating at a central frequency of 3.2 GHz with a 400 MHz bandwidth, the system is designed to achieve a spatial resolution down to 0.5 m in both range and cross-range. Its architecture includes a multi-antenna configuration, with four receiving antennas and one transmitter strategically mounted on the aircraft's wings and belly. Individual PODs integrate all critical components, including radar electronics, antennas, and a centralized navigation unit, into a compact, transferable structure. Lightweight booms extending from the platform ensure sufficient baselines for tomographic processing while maintaining aerodynamic stability. The booms also provide the necessary stiffness to ensure precise positioning of the flat antennas' phase centers. An Inertial Navigation System (INS) and a Global Navigation Satellite System (GNSS) receiver are employed to provide synchronized positional data for all antennas of each POD. This centralized setup reduces hardware complexity and ensures accurate baseline measurements, a key factor for advanced SAR processing and the generation of volumetric data in a single flight. This approach facilitates deployment on multiple platforms, such as the Groppo Trail and Zlin Savage ultralight aircraft, simplifying integration while enabling operational versatility.
Additionally, the POD design allows for straightforward maintenance and future system upgrades. This contribution will provide a comprehensive overview of the system's concept and design, highlighting the modular POD approach, innovative antenna configuration, and its potential for multi-platform adaptability. Technical specifications and the anticipated advantages of the design will be detailed, along with updates on project progress and initial test outcomes.
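The single-pass tomographic principle, several receive phase centres sampling a vertical wavenumber aperture in one flight, can be sketched with simple Fourier beamforming over candidate heights. The baselines, range and geometry below are illustrative assumptions, not the actual system design.

```python
import numpy as np

WAVELENGTH = 0.0937      # S-band at 3.2 GHz, m
SLANT_RANGE = 500.0      # range to the resolution cell, m (illustrative)
INC = np.deg2rad(45.0)   # incidence angle (illustrative)

# Cross-track baselines of the four receive antennas (m, illustrative)
baselines = np.array([0.0, 1.0, 2.0, 3.0])

# Vertical wavenumber per baseline (single-pass, one-way receive phase)
kz = 2.0 * np.pi * baselines / (WAVELENGTH * SLANT_RANGE * np.sin(INC))

# Simulated multi-antenna response of a single scatterer at 12 m height
h_true = 12.0
y = np.exp(1j * kz * h_true)

# Tomographic focusing: steer over candidate heights and find the peak
z = np.linspace(-10.0, 20.0, 601)
steering = np.exp(-1j * np.outer(z, kz))      # conjugate steering vectors
power = np.abs(steering @ y) ** 2
z_peak = z[np.argmax(power)]
```

With only four phase centres the vertical resolution and ambiguity height are set by the baseline span and spacing, which is why sufficient, stiff boom-mounted baselines matter for the reconstruction.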
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Adventures in polarimetry: First Light from the Compact Multi-Angle Polarimeter, C-MAP

Authors: Laura Horton, Dr Joshua Vande Hey, David Spilling, Anna Gialitaki, Tim York, Jean-Bernard Riti, Gautham Padmanaban, Charlotte Morrison, Steven Lloyd, Niel Humpage, Piyal Samara-Ratna, Jose Busquets Enriquez de Navarra
Affiliations: University Of Leicester, Thales Alenia Space UK, National Observatory of Athens
More comprehensive observations of aerosol optical and microphysical properties can lead to a better understanding of (1) direct and indirect effects of aerosols in the Earth-atmosphere system, and (2) air quality, with implications for human health. Multi-spectral, multi-angle (MAP) measurements of reflected light intensity and polarization are the state of the art in aerosol remote sensing. Indeed, for aerosol property retrievals, the information content provided by MAP observations is unparalleled when compared to observations of total light intensity only [1]. As a result, the last few decades have seen substantial developments in passive polarimetric remote sensing of aerosols, mainly in the form of spaceborne missions [2]. In contrast to satellite measurements, airborne polarimetric observations are not as frequently utilized. Nevertheless, these observations could provide aerosol information at higher spatial resolution compared to satellites, while at the same time acting as a testbed for new developments, or providing validation for future satellite missions. Airborne polarimetric measurements require a low-noise, highly accurate FPA (Focal Plane Array) design to optimize the retrieval of information, but also need to be compact and lightweight. Improving the design of these instruments and using the retrievals in conjunction with other measurements, such as the polarimetric CIMEL photometer [3], can lead to higher spatial and temporal resolution (e.g., on a city scale) aerosol retrievals. Here we present the first data produced from the compact airborne MAP instrument (C-MAP) with lightweight optics and compact FPA. C-MAP was developed by Thales Alenia Space UK in collaboration with the University of Leicester, under a UK Space Agency Centre for Earth Observation Instrumentation funded activity, and it is a compact airborne version of the MAP sensor that will fly on board the upcoming CO2M mission [4].
The first flight took place in March 2024 above two UK AERONET sites [5] giving the first glimpse into C-MAP’s capabilities. Recently acquired data from lab-based calibration, ground deployment and a flight campaign were used to analyse and demonstrate C-MAP performance. These data are being processed using the GRASP retrieval algorithm [6] to produce both aerosol and surface measurements which can further be validated with ground measurements. References: [1] Mishchenko, M.I., and L.D. Travis, 1997: Satellite retrieval of aerosol properties over the ocean using polarization as well as intensity of reflected sunlight. J. Geophys. Res., 102, 16989-17013. [2] Dubovik, O., Li, Z., Mishchenko, M. I., Tanré, D., Karol, Y., Bojkov, B., Cairns, B., Diner, D. J., Espinosa, W. R., Goloub, P., Gu, X., Hasekamp, O., Hong, J., Hou, W., Knobelspiesse, K. D., Landgraf, J., Li, L., Litvinov, P., Liu, Y., & Lopatin, A., 2019: Polarimetric remote sensing of atmospheric aerosols: Instruments, methodologies, results, and perspectives. Journal of Quantitative Spectroscopy and Radiative Transfer. 224, 474–511. [3] Fedarenka, A., Dubovik, O., Philippe Goloub, Li, Z., Lapyonok, T., Litvinov, P., Barel, L., Gonzalez, L., Podvin, T., & Didier Crozel., 2016: Utilization of AERONET polarimetric measurements for improving retrieval of aerosol microphysics: GSFC, Beijing and Dakar data analysis. Journal of Quantitative Spectroscopy & Radiative Transfer, 179, 72–97. [4] Spilling, D., Thales, A., 2021: The Multi Angle Polarimeter (MAP) on board ESA’s Copernicus Carbon Dioxide Monitoring mission (CO2M). ICSO 2020. [5] Holben B, Eck T, Slutsker I, Tanré D, Buis J, Setzer A, et al., 1998: Aeronet—A Federated Instrument Network and Data Archive for Aerosol Characterization. Remote Sens Environ. 66, 1–16.
[6] Dubovik, O., Lapyonok, T., Litvinov, P., Herman, M., Fuertes, D., Ducos, F., Torres, B., Yevgeny Derimian, Huang, X., Lopatin, A., Anatoli Chaikovsky, Aspetsberger, M., & Federspiel, C., 2014: GRASP: a versatile algorithm for characterizing the atmosphere.
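The polarimetric observable that gives MAP measurements their extra information content, alongside intensity, is the per-angle degree of linear polarization (DoLP) formed from the Stokes components. The angles and Stokes values below are invented purely for illustration.

```python
import numpy as np

# Toy Stokes components (I, Q, U) at several viewing angles (degrees)
angles = np.array([-60.0, -40.0, -20.0, 0.0, 20.0, 40.0, 60.0])
stokes_i = np.array([0.200, 0.180, 0.160, 0.150, 0.160, 0.180, 0.200])
stokes_q = np.array([0.020, 0.012, 0.005, 0.001, 0.004, 0.011, 0.019])
stokes_u = np.array([0.003, 0.002, 0.001, 0.000, 0.001, 0.002, 0.003])

# Degree of linear polarization per viewing angle, the added MAP observable
dolp = np.sqrt(stokes_q**2 + stokes_u**2) / stokes_i
```

It is the angular shape of curves like this, together with the intensity, that retrieval schemes such as GRASP fit to constrain aerosol microphysics.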
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: An overview of collaborative cal/val and field campaigns, and community science activities enabled through the ESA-NASA Joint Program Planning Group

Authors: Dr Malcolm Davidson
Affiliations: ESA
The US National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA) created a Joint Program Planning Group (JPPG) in 2010 to enhance coordination between NASA and ESA on current and future space Earth Observation missions. One of the three sub-groups of the JPPG is dedicated to collaboration in field measurement campaigns, mission and product cal/val, and more recent collaborative EO community science projects. Since 2010 the JPPG has initiated or informed numerous airborne field campaigns to help develop and document the scientific objectives, develop geophysical retrieval algorithms and provide calibration and/or validation for present and/or future satellites to be operated by NASA, ESA, and its partners. The activities address an underlying need to demonstrate unambiguously that space-based measurements, which are typically based on engineering measurements by the detectors (e.g. photons), are sensitive to and can be used to reliably retrieve the geophysical and/or biogeochemical parameters of interest across the Earth and validate mission design. Such campaigns have included subjects as diverse as atmospheric trace gas composition over the western US, solar-induced fluorescence over the eastern United States, wind profiles over the North Atlantic, vegetation canopy profiles in Gabon, and sea ice and ice sheet properties in the Arctic and Antarctic. The collaborative field campaign and cal/val activities have supported the use of surface-based, airborne, and/or space-based observations to develop precursor data sets and support both pre- and post-launch calibration/validation and retrieval algorithm development for space-based satellite missions measuring our Earth system. The generation of consistent, inclusive, community-based assessments of Earth system change through integrated analyses of these different data sets is also a critically important process in the challenge of documenting Earth system change.
To assist in this process the JPPG has supported collaborative community efforts including three installments of the Ice Mass Balance Intercomparison Experiment (IMBIE; two completed, one ongoing), the NASA-ESA Snow on Sea Ice Intercomparison (NESOSI), and the Arctic Methane and Permafrost Challenge (AMPAC). In this talk a review of JPPG activities and their results, as well as current plans for future collaborations including campaigns, will be provided.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall N1/N2)

Session: C.03.02 Advances in the theory and methodology of SAR Interferometry and SAR polarimetry - PART 3

In 2003, ESA organised the first POLINSAR workshop, gathering the new community to present the findings of ESA-funded studies of SAR Polarimetry and Polarimetric Interferometry applications and to prepare recommendations for future research work. Today the POLINSAR workshop is an established platform where scientists meet and exchange their recent research advances, supporting new mission proposals or dedicated airborne campaigns to increase our knowledge. The applications of polarimetry and interferometry, separately or combined, span a variety of domains, for example the biosphere (forest height, forest structure, forest disturbance, crop development stage classification over agricultural crops, etc.), hydrosphere (soil moisture), cryosphere (snow depth, ice structure), geosphere (enhanced deformation identification), urban areas (scatterer identification, building reconstruction) and many more applications.

We welcome contributions on, but not restricted to:
• New advances in Polarimetric SAR Interferometry: methods and applications
• Multi-baseline and TomoSAR: methods and applications
• Differential Polarimetric SAR Interferometry: methods and applications
• Airborne Campaigns for polarimetric and interferometric SAR
• Future mission concepts related to polarimetry and interferometry
• Recent advancements using AI for SAR mission concepts, methods and applications.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall N1/N2)

Presentation: An evaluation of the Pol-InSAR forest height estimation performance with dual-pol Sentinel-1 data

Authors: Samuele De Petris, Matteo Pardini, Enrico Borgogno-Mondino, Pau Prats, Dr. Kostas Papathanassiou
Affiliations: University Of Turin, German Aerospace Center (DLR)
Forest height is one of the most important parameters in forestry. Absolute height measurements and/or their distribution within a stand provide information on stand development, timber production potential, disturbance regime and, depending on spatial and temporal resolution, the presence of selective logging. By means of allometric relations, forest height maps can be used to estimate biomass and the associated carbon flux components, contributing to climate change models and supporting sustainable forest management. Remote sensing techniques can provide large-scale and accurate forest height maps with high spatial and temporal resolution, overcoming limitations of traditional field surveys in dense or inaccessible forests. In particular, (model-based) inversions of Polarimetric Interferometric Synthetic Aperture Radar (Pol-InSAR) measurements have become an established and well-understood approach, demonstrated and validated at different frequencies, from X- down to P-band, for a variety of forest and terrain conditions [1]-[6]. This study investigates the potential of a new Pol-InSAR model for forest height inversion applied to dual-pol Sentinel-1A/B acquisitions, focusing on the temperate forest site of Traunstein in the south of Germany. First, a theoretical sensitivity analysis of the proposed model was performed. Subsequently, the model was tested against lidar data on a real interferometric pair acquired in winter 2022. Unlike previous studies [6]-[8], the analysis here relies on image pairs with suitable (yet still sub-optimal) interferometric sensitivity induced by a large enough across-track baseline, a 6-day revisit time, and significant coherence levels. To bypass the need for (atmospheric) phase calibration and estimation of the underlying topography, the maximum interferometric phase difference between optimum polarizations [1], [9] has been considered for the inversion using a Random-Volume-over-Ground formulation.
Under the assumptions of absence of ground scattering in one of the optimum polarizations and scalar temporal decorrelation (e.g. wind-induced), the estimation of forest height requires only a parameterization of the vertical reflectivity profile underlying the phase difference. This inversion concept has been applied to the selected data, and the obtained forest heights have been validated against lidar data acquired close in time to the Sentinel-1A/B data. Despite the sub-optimal interferometric configuration and a larger variance induced by the presence of temporal decorrelation, preliminary results show the availability of a sensitive observation space even with C-band repeat-pass data, leading to forest height estimates with limited bias depending on the vertical reflectivity profile parameterization, especially over the tallest stands in the site. Final experimental results will be presented at the Symposium, and the factors affecting the performance, as well as the validity of the model assumptions, will be discussed in detail. [1] K. P. Papathanassiou and S. R. Cloude, "Single-baseline polarimetric SAR interferometry," IEEE Trans. Geosci. Remote Sens., vol. 39, no. 11, pp. 2352-2363, Nov. 2001. [2] I. Hajnsek, F. Kugler, S. Lee and K. Papathanassiou, "Tropical forest parameter estimation by means of Pol-InSAR: The INDREX II campaign," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 2, pp. 481-493, Feb. 2009. [2] S. K. Lee, F. Kugler, K. P. Papathanassiou and I. Hajnsek, "Quantification of temporal decorrelation effects at L-band for polarimetric SAR interferometry applications," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 6, no. 3, pp. 1351-1367, 2013. [3] M. J. Soja, H. Persson and L. M. H. Ulander, "Estimation of forest height and canopy density from a single InSAR correlation coefficient," IEEE Geosci. Remote Sens. Lett., pp. 646-650, Dec. 2015. [4] F. Garestier, P. C. Dubois-Fernandez, and I.
Champion, “Forest height inversion using high-resolution P-band pol-InSAR data,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 11, pp. 3544-3559, Nov. 2008. [5] H. J. Persson et al., "Experiences from Large-Scale Forest Mapping of Sweden Using TanDEM-X Data," Remote Sens., vol. 9, no. 12, 2017. [6] M. Lavalle, C. Telli, N. Pierdicca, U. Khati, O. Cartus and J. Kellndorfer, "Model-Based Retrieval of Forest Parameters From Sentinel-1 Coherence and Backscatter Time Series," IEEE Geoscience and Remote Sensing Letters, vol. 20, pp. 1-5, 2023, Art no. 4001305, doi: 10.1109/LGRS.2023.3239825. [7] De Petris, S., Cuozzo, G., Notarnicola, C., & Borgogno-Mondino, E. (2022, June). Forest Height Estimation Using Sentinel-1 Interferometry. A Phase Unwrapping-Free Method Based on Least Squares Adjustment. In Italian Conference on Geomatics and Geospatial Technologies (pp. 251-262). Cham: Springer International Publishing. [8] Ge, S., Su, W., Gu, H., Rauste, Y., Praks, J., & Antropov, O. (2022). Improved LSTM model for boreal forest height mapping using Sentinel-1 time series. Remote Sensing, 14(21), 5560. [9] S. R. Cloude and K. P. Papathanassiou, "Polarimetric SAR interferometry," IEEE Trans. Geosci. Remote Sens., vol. 36, no. 5, pp. 1551-1565, Sep. 1998.
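As a rough numerical illustration of the inversion concept described above (a sketch only, not the authors' implementation; the grid bounds, profile parameterization and sample counts are assumptions), the phase of the volume-only coherence for a parameterized vertical reflectivity profile can be tabulated and matched against the observed phase difference between optimum polarizations:

```python
import numpy as np

def volume_phase(height, kz, sigma=0.0):
    """Phase (rad) of the volume-only coherence for an exponential
    vertical reflectivity profile exp(sigma * z), 0 <= z <= height."""
    z = np.linspace(0.0, height, 256)
    w = np.exp(sigma * z)                              # profile weighting
    coh = np.sum(w * np.exp(1j * kz * z)) / np.sum(w)  # normalized coherence
    return np.angle(coh)

def invert_height(dphi, kz, sigma=0.0, h_max=40.0):
    """Grid search for the forest height whose modeled volume phase
    matches the observed optimum-polarization phase difference dphi."""
    heights = np.linspace(0.1, h_max, 400)
    phases = np.array([volume_phase(h, kz, sigma) for h in heights])
    return heights[np.argmin(np.abs(phases - dphi))]
```

For a uniform profile (sigma = 0) the volume phase centre sits at half the canopy height, so dphi is approximately kz * height / 2; a different profile parameterization shifts the phase centre and hence the estimated height, which is the bias mechanism the abstract refers to.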
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall N1/N2)

Presentation: Investigations on a Two-Look ScanSAR Mode for Along-Track Deformation Measurements with ROSE-L

Authors: Simon Trumpf, Dr.-Ing. Pau Prats Iraola, Prof. Dr.-Ing. Alberto Moreira
Affiliations: German Aerospace Center (DLR)
In the frame of a study recently conducted at DLR, the general applicability of a two-look ScanSAR technique for along-track deformation retrieval with the ROSE-L mission has been evaluated. In a next step, further aspects of this approach are investigated. These investigations are conducted in two parts: first, an analytical evaluation is performed; then, the results are validated using end-to-end simulations. The two-look ScanSAR approach consists of increasing the overlap between the bursts so that every point on the ground is covered twice under different Doppler centroids. This is achieved in conventional ScanSAR by reducing the burst time, which, consequently, leads to a loss of azimuth resolution. With the ROSE-L system, however, two-look ScanSAR can be achieved by increasing the processed azimuth bandwidth of the bursts. This is possible since the ROSE-L system uses multiple azimuth channels and therefore has access to an effective azimuth bandwidth a factor of five larger than the system PRF. Using a second pass over the same area, two interferograms of the same target area can be computed, referred to as two looks. The phase of each interferogram is directly related to the deformation in the corresponding line of sight. The along-track deformation is then retrieved from the phase of the differential interferogram of the two looks. The technique of computing a differential interferogram between interferograms obtained at different Dopplers is commonly known as spectral diversity or multi-aperture InSAR (MAI). The main error contributions considered are thermal noise and azimuth ambiguities, but the analysis can also be extended to include ionospheric disturbances. It has also been shown in [1] that coherent azimuth ambiguities can introduce biases into the phase of an interferogram if certain conditions concerning the power and coherence of the backscattered signals of the main target and the ambiguity are met in a scene.
The results of the two-look ScanSAR technique can thus also be affected by those biases. Furthermore, the system concept of ROSE-L includes the use of a large antenna with multiple sub-apertures in azimuth and range, which enables the use of beam-forming techniques, for example Scan-on-Receive (SCORE). To verify the analytical results for several test cases, an end-to-end simulation has been implemented. Here, an artificial scene with known deformations is generated and two acquisitions matching certain coherence requirements are derived from this scene. In the next step, the characteristics of the ROSE-L system, such as PRF, antenna characteristics and the ScanSAR acquisition mode, are introduced into the simulated signals in such a way that coherent azimuth ambiguities are added to the main signal. Then, a simplified processing chain is used to retrieve the along-track deformation. Lastly, the retrieved and inserted deformations are compared and the resulting measurement errors are evaluated. The contribution will present the current status of the investigations, both in terms of theoretical derivations and results obtained with the end-to-end simulator. [1] M. Villano and G. Krieger, “Impact of azimuth ambiguities on interferometric performance,” IEEE Geosci. Remote Sens. Lett., vol. 9, no. 5, pp. 896-900, Sep. 2012, doi: 10.1109/lgrs.2012.2187271.
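The spectral-diversity/MAI retrieval at the core of the technique can be sketched in a few lines; this is a generic textbook-style illustration, not the DLR simulator, and the names (dfdc for the Doppler-centroid separation of the two looks, v for the effective along-track velocity) are assumptions:

```python
import numpy as np

def along_track_shift(ifg_fwd, ifg_bwd, dfdc, v):
    """Estimate along-track displacement (m) from two complex
    interferograms of the same area formed at different Doppler
    centroids (forward- and backward-looking 'looks')."""
    dd = ifg_fwd * np.conj(ifg_bwd)      # double-difference interferogram
    dphi = np.angle(np.mean(dd))         # multilooked differential phase
    # spectral diversity: dphi = 2*pi * dfdc * (d_az / v)
    return v * dphi / (2.0 * np.pi * dfdc)
```

The larger the Doppler-centroid separation between the two looks, the higher the phase-to-displacement sensitivity, which is why the increased processed azimuth bandwidth of ROSE-L is attractive for this mode.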
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall N1/N2)

Presentation: BIOMASS Fast Varying Ionosphere and Orbit Correction Through Multi-Squint Interferometry

Authors: Francesco Salvaterra, Francesco Banda, Naomi Petrushevsky, Stefano Tebaldini
Affiliations: Politecnico Di Milano, DEIB, aresys
The ESA BIOMASS mission, set to launch in 2025, will be the first spaceborne SAR mission operating at P-band. The mission will provide critical data for forest monitoring, producing above-ground biomass (AGB) maps, forest height measurements, and forest disturbance detection [1]. P-band was selected for its penetration capabilities and its sensitivity to the larger parts of the trees, which improve AGB retrieval even in dense tropical forests and, combined with the BIOMASS 3-day revisit time, help reduce temporal decorrelation, critical in forest scenarios. The use of P-band presents unique challenges, particularly its interaction with the ionosphere, which causes phase advance and range delay in the received echoes [2]. For an ionosphere presenting irregularities in the electron density within the synthetic aperture (i.e. fast variations and linear trends), these effects can lead to defocusing and shifts in the azimuth direction, while causing only minimal range effects in the case of BIOMASS [2]. In the interferometric calibration chain, the stack is first processed to remove slow ionospheric phases using a split-spectrum algorithm as in [3]; the images are then processed by the fast ionosphere and orbit error module to eliminate the effects of fast ionospheric variations and orbital errors. The final step of the chain is the Sum of Kronecker Products (SKP) calibration [4], yielding a stack ready for Level-2 processing. In this work, we present the part of the BIOMASS interferometric calibration chain responsible for correcting fast ionosphere effects in the data stack, which, for the tomographic mission phase, consists of seven acquisitions with varying baselines (up to 90% of the critical baseline). The fast ionosphere and orbit error module aims to 1) detect the presence of a fast-varying ionospheric signal, 2) estimate the ionospheric height and the orbit error, and 3) correct the slave image, accounting for adaptive coregistration.
The module relies on a multi-squint interferometry processing technique: by analyzing the interferometric phase difference between acquisitions focused at different squint angles, the error contributions from the ionosphere, topography residuals and orbit uncertainty can be separated [5]. The idea behind multi-squint calibration is that disturbances affecting the signal at different heights can be separated by comparing different squint angles: while the ionosphere, due to its height, shows a dependence on the squint angle, topography is invariant to it, causing a signature which can be identified and separated. The fast ionosphere correction module described in this abstract is a conceptual evolution of the one described in [6]. Rather than conducting an ionospheric tomographic reconstruction, it derives the correction parameters through a 2D spectral analysis of the multi-squint interferometric phase, while also incorporating the estimation of the orbital error. All correction parameters produced by the algorithm are calculated relative to a master image from the stack, selected as the one presenting the least ionospheric effects or as the central image of the stack, to reduce wavenumber-shift decorrelation [7]. The ionospheric correction is ultimately performed by applying the estimated phase screen to the images defocused at the detected ionospheric height. To maximize the estimation accuracy, which is affected by wavenumber-shift decorrelation, the module currently processes the pairs of images with the shortest relative baselines. Further analyses are being conducted to combine all interferometric pairs in the estimation, to improve robustness in the case of a corrupted image in the stack. The module was tested on airborne data reprocessed to simulate BIOMASS acquisitions and corrupted with different levels of ionospheric phase screens, providing good results even for a very fast ionosphere, while not affecting the quality of the data in its absence.
An overview of the BIOMASS interferometric calibration processor is given in the companion presentation [8]. References: [1] Banda, F.; Giudici, D.; Le Toan, T.; Mariotti d’Alessandro, M.; Papathanassiou, K.; Quegan, S.; Riembauer, G.; Scipal, K.; Soja, M.; Tebaldini, S.; et al. The BIOMASS Level 2 Prototype Processor: Design and Experimental Results of Above-Ground Biomass Estimation. Remote Sens. 2020, 12, 985. [2] Rogers, N.; Quegan, S.; Kim, J. S.; Papathanassiou, K. (2014). Impacts of ionospheric scintillation on the BIOMASS P-band satellite SAR. IEEE Transactions on Geoscience and Remote Sensing, 52, 1856-1868, doi: 10.1109/TGRS.2013.2255880. [3] Petrushevsky, N., Banda, F., Monti-Guarnieri, A., Thibeault, M., Gonzalez, J. P. C., & Giudici, D. (2024, April). Operational Ionospheric Correction for SAOCOM Interferometry. In EUSAR 2024; 15th European Conference on Synthetic Aperture Radar (pp. 1221-1226). VDE. [4] Francesco Banda, Mauro Mariotti d’Alessandro, and Stefano Tebaldini, “Ground and Volume Decomposition as a Proxy for AGB from P-Band SAR Data,” Remote Sensing, vol. 12, no. 2, 2020. [5] S. Tebaldini, A. M. Guarnieri and F. Rocca, "Recovering time and space varying phase screens through SAR multi-squint differential interferometry," EUSAR 2012; 9th European Conference on Synthetic Aperture Radar, Nuremberg, Germany, 2012, pp. 16-19. [6] S. Tebaldini, M. M. d'Alessandro, J. S. Kim and K. Papathanassiou, "Ionosphere vertical profiling from biomass multi-squint InSAR," 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 2017, pp. 5866-5869, doi: 10.1109/IGARSS.2017.8128343. [7] F. Gatelli, A. Monti Guarnieri, F. Parizzi, P. Pasquali, C. Prati and F. Rocca, "The wavenumber shift in SAR interferometry," IEEE Transactions on Geoscience and Remote Sensing, vol. 32, no. 4, pp. 855-865, July 1994, doi: 10.1109/36.298013. [8] F. Banda, F. Salvaterra, N. Petrushevsky, S. Tebaldini, M. Pinheiro, B.
Rommen, “BIOMASS Interferometric Processing”, Living Planet Symposium 2025.
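The split-spectrum step cited above as [3] relies on the standard decomposition of the interferometric phase into a non-dispersive term proportional to frequency and a dispersive (ionospheric) term inversely proportional to it. A minimal sketch (the sub-band frequencies and phases in the usage example are illustrative, and real data would require unwrapped sub-band phases):

```python
def split_spectrum_iono(phi_low, phi_high, f_low, f_high, f0):
    """Dispersive (ionospheric) phase at the carrier f0, from the
    unwrapped interferometric phases of a low and a high range sub-band.
    Assumes phi(f) = a*f (geometry/deformation) + b/f (ionosphere)."""
    b = f_low * f_high * (f_high * phi_low - f_low * phi_high) \
        / (f_high**2 - f_low**2)
    return b / f0
```

Solving the two sub-band equations for b eliminates the non-dispersive term a, so the returned value isolates the ionospheric contribution at the carrier.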
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall N1/N2)

Presentation: TWO-LOOK SCANSAR MODE CONFIGURATION WITH ROSE-L: TOWARD AN EFFECTIVE AND EASILY IMPLEMENTABLE SOLUTION

Authors: Stefano Perna, Francesco Longo, Simona Zoffoli, Riccardo Lanari
Affiliations: Università degli Studi di Napoli “Parthenope”, Istituto per il Rilevamento Elettromagnetico dell'Ambiente (IREA), CNR, Italian Space Agency (ASI)
The European Space Agency (ESA) has been managing the development of a Synthetic Aperture Radar (SAR) system, named ROSE-L (which stands for Radar Observation System for Europe at L-band), as part of the Copernicus Expansion Programme. In order to guarantee high-resolution and wide-swath (HRWS) capabilities, ROSE-L will simultaneously employ different advanced acquisition techniques, such as ScanSAR, Scan On Receive (SCORE) and Displaced Phase Center (DPC) [1]-[2]. This work is focused on the ScanSAR mode of ROSE-L, which was originally designed to guarantee an effective one-look configuration. In particular, following the findings presented in [3]-[5], we show that the capabilities of the ROSE-L system can be significantly augmented by tailoring this system to a high-gain two-look configuration. The solution that we propose in this contribution is based on the simple idea presented in [3]-[5], where it is shown that an effective two-look ScanSAR configuration can be achieved with the ROSE-L system:
- without modifying the original mission parameters, such as azimuth resolution and range swath, to quote only those directly related to the above-mentioned HRWS capabilities;
- with a negligible impact on the realization of the already designed radar antenna;
- while retaining compliance with the main system specifications originally enforced for the one-look mode configuration.
Specifically, following the main rationale of the solutions presented in [3]-[5], we propose to properly shape the azimuth beam illuminated by the ROSE-L radar antenna by taking advantage of the originally designed antenna layout, which consists of five azimuth panels. To enable the DPC acquisition mode, these panels operate as a single five-element array in the transmission (TX) mode, and as five separate antennas in the receiving (RX) mode.
To also enable ROSE-L for an effective two-look ScanSAR mode, we propose to modify, with respect to the original design, the input excitations of the five elements of the TX array. Moreover, to make these modifications even easier to implement and more power-efficient, we propose to act only on the phases of these input excitations. In this contribution, we extend the conceptual study presented in [3]-[5], where only the constraints relevant to the near-range burst of the ScanSAR configuration were considered. More specifically, to move toward an effective and easily implementable solution, we propose a phase-only control of the input excitations of the five panels of the TX ROSE-L antenna array, capable of guaranteeing compliance with the constraints enforced for all the ScanSAR bursts (characterized by different system parameters) planned for the mission. [1] M. Zimmermanns and C. Roemer, “Copernicus HPCM: ROSE-L SAR Instrument and Performance Overview,” in EUSAR 2022; 14th European Conference on Synthetic Aperture Radar, Leipzig, Germany, 2022, pp. 1-6. [2] M. Davidson and R. Furnell, “ROSE-L: Copernicus L-Band SAR Mission,” in Proceedings of the 2021 International Geoscience and Remote Sensing Symposium, IGARSS 2021, Brussels, Belgium, 2021, pp. 872-873. [3] S. Perna, F. Longo, S. Zoffoli, M. Davidson, L. Iannini and R. Lanari, “Advanced ScanSAR Capabilities Enabled by Optimized Antenna Array Azimuth Radiation Patterns: the ROSE-L Case Study,” in Proceedings of the 2022 International Geoscience and Remote Sensing Symposium, IGARSS 2022, Kuala Lumpur, Malaysia, pp. 7401-7404, 2022. [4] S. Perna, F. Longo, S. Zoffoli, M. Davidson, L. Iannini and R. Lanari, “A conceptual performance study on a two-look ScanSAR mode configuration for the forthcoming ROSE-L mission,” in IEEE Trans. Geosci. Remote Sens., vol. 62, pp. 1-18, 2024, Art no. 5201618, doi: 10.1109/TGRS.2023.3344537. [5] S. Perna, F. Longo, S. Zoffoli, M. Davidson, L. Iannini and R.
Lanari, "Shaping the ROSE-L Antenna Beam to Enable a High Gain Two-Look ScanSAR Mode Configuration," in Proceedings of the 2024 International Geoscience and Remote Sensing Symposium, IGARSS 2024, Athens, Greece, 2024, pp. 6655-6658.
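For context, the effect of phase-only excitation control on a five-panel linear array can be illustrated with a generic array-factor computation; this is not the actual ROSE-L excitation law, and the panel spacing, wavelength and spoiling coefficient below are placeholders:

```python
import numpy as np

def array_factor(phases, d, lam, theta):
    """Azimuth array factor of a uniform linear array of len(phases)
    panels with spacing d (m), controlled only via per-panel phases (rad)."""
    k = 2.0 * np.pi / lam
    n = np.arange(len(phases)) - (len(phases) - 1) / 2.0  # panel positions
    return np.exp(1j * (np.outer(np.sin(theta), k * d * n) + phases)).sum(axis=1)

theta = np.linspace(-0.2, 0.2, 401)  # azimuth angle (rad)
# uniform phases: full-gain pencil beam at broadside
uniform = np.abs(array_factor(np.zeros(5), 3.0, 0.24, theta))
# quadratic phase "spoiling": broadens the TX beam at the cost of peak gain
spoiled = np.abs(array_factor(0.8 * (np.arange(5) - 2.0) ** 2, 3.0, 0.24, theta))
```

With uniform excitations the pattern peaks at broadside with magnitude equal to the number of panels; a quadratic phase taper trades peak gain for beamwidth, which is the kind of trade-off a phase-only design must balance against the burst-dependent ScanSAR constraints.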
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall N1/N2)

Presentation: DANI-NET: A Deep and Physics-Driven AI Framework for Repeat-Pass InSAR Change Detection

Authors: Giovanni Costa, Andrea Virgilio Monti Guarnieri, Alessandro Parizzi, Paola Rizzoli
Affiliations: Politecnico di Milano, Microwaves and Radar Institute, German Aerospace Center (DLR), TRE ALTAMIRA s.r.l.
Repeat-pass Interferometric SAR (InSAR) is widely used in a variety of application scenarios, such as terrain displacement and subsidence monitoring or measuring the state of infrastructures. In this context, the development of effective algorithms to detect temporal and spatial changes in the radar targets becomes of paramount importance. Typically, state-of-the-art methods only return the spatial and/or temporal locations of the occurred changes, without any information about their causes. We present a novel change detection method able to infer not only whether and when a target has changed, but also the reason why a change is detected, defining the concepts of definitive and temporary changes: respectively, abrupt changes of the radar targets, and changes due to transient natural phenomena (e.g., atmospheric phenomena). This is done by jointly exploiting, for the first time, four radar amplitude images and the corresponding six interferometric coherences computed at different temporal baselines. The method, based on a fully convolutional neural network (CNN), is called DANI-NET (Deep Analysis for Non-stable InSAR targets Network), and casts spatio-temporal change detection as a semantic segmentation problem, defining four different temporal change classes. We rely on fully synthetic training and testing datasets, developed following a robust statistical derivation, which allows for a full understanding of the network outcomes, making them interpretable. The model can be functionally divided into the following blocks: the input block, the hidden blocks, and the output block. The input block takes as input the amplitude-coherence tensor (the four amplitude images and the corresponding six coherence maps) and performs a 2-D convolution operation by simultaneously applying a 3-D kernel across all the input channels. The stack is spatially padded before each convolution to preserve the input shape.
Two convolutional layers, consisting of 1x1 kernels, are sequentially applied to mix the channels and gradually increase the number of feature maps up to 128. Each convolutional layer is followed by a batch normalization operation, which scales the data to be approximately characterized by zero mean and unit variance. Finally, a rectified linear unit (ReLU) activation function is applied. Afterward, the five hidden blocks consist of 2-D convolutional layers with 128 feature maps and 3x3 kernels, followed by batch normalization and ReLU activation. The number of hidden layers is defined through hyperparameter tuning. The rationale behind this simple design is the need to clearly link the input to the output of the model. The convolutional layers are effectively banks of filters that extract spatial (here, spatio-temporal) patterns useful to enhance the correlation between the input and the output of the model. This idea fits well with the proposed synthetic dataset: the huge number of generated cases indirectly offers the opportunity to avoid more complex layers, preserving the physics behind the data as much as possible while still providing competitive performance, also in terms of computational load. Furthermore, given the algorithm's task and the fully synthetic dataset model, we choose to preserve the geometric resolution at each layer by avoiding pooling operations. The output block consists of two convolutional layers with 1x1 kernels aimed at finally decreasing the number of features. In particular, the first one is followed by the classical batch normalization and a dropout operation for regularization purposes. Finally, a softmax activation function is applied in order to estimate the categorical probability distribution over the modeled temporal change classes for each pixel.
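The described layer sequence can be sketched in PyTorch as follows; this is an illustrative reconstruction, not the authors' code, and the intermediate channel counts (64) and dropout rate are assumptions:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, k):
    # padded conv (preserves spatial resolution) + batch norm + ReLU
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, padding=k // 2),
                         nn.BatchNorm2d(c_out), nn.ReLU())

class DaniNetSketch(nn.Module):
    """Sketch of the described design: 1x1 input convs ramping to 128
    features, five 3x3 hidden blocks without pooling, and 1x1 output
    convs with dropout and a per-pixel softmax over four change classes."""
    def __init__(self, in_ch=10, n_classes=4):  # 4 amplitudes + 6 coherences
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 64, 1), conv_block(64, 128, 1),  # input block
            *[conv_block(128, 128, 3) for _ in range(5)],      # hidden blocks
            conv_block(128, 64, 1), nn.Dropout2d(0.2),         # output block
            nn.Conv2d(64, n_classes, 1), nn.Softmax(dim=1))
    def forward(self, x):
        return self.net(x)
```

Because no pooling is used, the output keeps the input's spatial resolution, with one four-class probability vector per pixel.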
The performance analysis, conducted on an additional independent synthetic dataset, reveals that the proposed CNN architecture, DANI-NET, outperforms U-Net, the state of the art for semantic segmentation problems. Furthermore, an additional comparison between DANI-NET and Permutational Change Detection (PCD), a statistical non-parametric coherent change detection approach, reveals the robustness of the proposed method with respect to the number of looks L, i.e., the number of statistically homogeneous pixels over which the coherence matrix has been estimated, as well as the different levels of interpretability of the outputs. The first achievement is due to the inference being based on both InSAR information sources, i.e., amplitude and coherence. The second is due to the introduction of a metric called recurrence when DANI-NET is applied to large InSAR datasets, scanning them four acquisitions at a time to fit the input of the network. In fact, this metric allows one to infer not only what, when, and where a target has changed, but also to classify the observed change according to its phenomenology, i.e., temporary or definitive. Finally, a SHAP analysis was conducted for the different classes of changes and as a function of the physical parameters involved, revealing great consistency between the importance of the input features for obtaining the corresponding output and the expectations from a purely theoretical point of view. This was possible because of the completeness and robustness of the proposed statistical modelling for the dataset generation. In order to validate the method, it was applied to a Sentinel-1 dataset acquired in Iceland during the 2023-2024 Sundhnúkur eruptions. The comparison of the derived lava extent for each event with the map provided by ECHO (the European Commission’s Directorate-General for European Civil Protection and Humanitarian Aid Operations) reveals a very high consistency between the two.
A second application, to a TanDEM-X dataset acquired over an active open-pit mining site in Australia, confirms the potential of the proposed method in a very dynamic environment, where the inference about the phenomenology of the change clearly improves the level of information that can be extracted compared to any classical change detection analysis. In light of these findings, DANI-NET represents an extremely valuable starting point for a large variety of change detection applications, such as the monitoring of deforestation, land cover/land use changes, or extreme events. Further developments will be needed in the future to fully exploit the potential of this approach, by extending the model to other frequencies and scenarios, enhancing it to provide information on land cover/land use change, and fine-tuning it on real datasets describing specific cases.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L3)

Session: D.04.03 Enabling Machine Learning Operations on Cloud Platforms

Training, deploying and maintaining novel AI machine learning models to achieve reliable and efficient operations is a complex task for scientists and service developers. This session focuses on the state of the art of cloud-based solutions supporting development and operations of ML models in a DevOps framework.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L3)

Presentation: EO Workflows with ML Operators: Operationalizing ML application with Geo Engine’s processing engine

Authors: Dr. Christian Beilschmidt, Dr. Johannes Drönner, Michael Mattig, Prof. Bernhard Seeger
Affiliations: Geo Engine GmbH, University of Marburg
In today's data-driven world, machine learning has become an indispensable tool for extracting valuable insights from geospatial data. However, the transition from first experimental prototypes to operationalized services often presents significant obstacles. These challenges mainly include data harmonization, time series processing, and the deployment of complex machine learning models. Geo Engine is a cloud-ready geospatial analysis platform that provides easy access to geospatial time series data from various sources to perform analytical processing. Users perform analyses in Geo Engine by defining processing workflows. Access is provided via OGC interfaces, browser-based interactive user interfaces, and a Python package that enables users to work with Geo Engine in Jupyter notebooks. It can be deployed in different cloud environments, e.g., EO-Lab or CREODIAS. Geo Engine offers a streamlined approach for developing and deploying machine learning applications using satellite imagery. This workflow encompasses the following key steps: data ingestion and preparation; model training and optimization; model deployment and operationalization; and application of the analytics to new data. This talk first presents this approach by creating a machine learning application using Sentinel-2 satellite imagery and a Random Forest classifier. The resulting model is deployed on the Geo Engine platform. First, we show the training process by ingesting data from Geo Engine workflows to train the model in a Jupyter notebook. Then, we transform the model into the ONNX exchange format and register it as a new Geo Engine operator. This operator can be used seamlessly in a workflow to perform an analysis on new satellite data in the time series. Furthermore, we demonstrate how to operationalize existing machine learning models and apply them to time series of satellite images.
As an example, we will import a model that creates a cloud mask based on convolutional neural networks into Geo Engine and use it on new scenes. We generate cloud-free scenes on the fly using the imported model as part of a workflow. An additional benefit besides the operationalization of the model is that using Geo Engine workflows enables integration into more complex processing chains. We show this by aggregating the produced cloud-free time series into monthly cloud-free mosaics.
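The train-then-deploy pattern described above can be illustrated with a minimal scikit-learn sketch; the synthetic "pixel spectra", class labels and band values are invented for illustration, and this is not the Geo Engine API:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: rows are pixels, columns are four
# Sentinel-2-like band reflectances (values are made up for the demo).
rng = np.random.default_rng(0)
n = 500
water = rng.normal([0.05, 0.06, 0.04, 0.02], 0.01, (n, 4))  # low NIR
veg = rng.normal([0.04, 0.07, 0.05, 0.40], 0.02, (n, 4))    # high NIR
X = np.vstack([water, veg])
y = np.repeat([0, 1], n)  # 0 = water, 1 = vegetation

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# Classify new vegetation-like pixels
pred = clf.predict(rng.normal([0.04, 0.07, 0.05, 0.40], 0.02, (10, 4)))
```

From here, the ONNX step mentioned in the abstract would use an exporter such as skl2onnx's convert_sklearn; the registration of the resulting model as a Geo Engine operator is platform-specific and not shown.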
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L3)

Presentation: Operationalizing MLOps in the Geohazards Exploitation Platform (GEP)

#stac #cloud-native

Authors: Simone Vaccari, Herve Caumont, Parham Membari, Fabrizio Pacini
Affiliations: Terradue Srl
The Geohazards Exploitation Platform (GEP) is a cloud-based Earth Observation (EO) data processing platform developed and operated by Terradue to support geohazard monitoring, terrain motion analysis, and critical infrastructure assessment. It serves a diverse user base of over 3,200 researchers, public authorities, and industry professionals, providing access to EO data archives, advanced processing services, and analytical tools. These services range from systematic data processing workflows, such as generating interferometric deformation maps, to event-triggered processing for rapid response scenarios like earthquake damage assessments. They support a variety of data-driven applications, from data screening and area monitoring to the integration of multi-temporal data for long-term risk assessment. Expanding the portfolio of services that leverage artificial intelligence (AI) and machine learning (ML) is a key objective for GEP to meet the growing demands of its users. However, the complexity of training, deploying, and maintaining ML models at scale posed significant challenges. These include managing large and diverse EO datasets, ensuring reproducibility, and maintaining model performance over time in dynamic operational environments. Addressing these obstacles was essential for unlocking the full potential of AI in geospatial applications, and for opening GEP to an enlarged set of data processing services, users and stakeholders. As part of an ESA-funded initiative targeted at expanding the use of AI and ML, GEP has recently embedded Machine Learning Operations (MLOps) capabilities to address these challenges. This encompasses use cases for developing scalable workflows and operating the resulting ML models in geospatial applications. With its EO data repositories and cloud-based processing environment, GEP now supports the full lifecycle of ML operations, including data discovery, preparation, model development, deployment, monitoring, and re-training.
By embedding MLOps principles into the platform, GEP provides a comprehensive solution for automating and scaling AI-driven geospatial analyses. These capabilities have been designed to ensure reproducibility, improve operational efficiency, and support dynamic adaptation to real-world conditions. This presentation will focus on the practical implementation and use of these MLOps enhancements within GEP. The cloud-native architecture of GEP ensures compatibility with modern DevOps frameworks, providing scalable and interoperable solutions for geohazard assessment and disaster response. We will show how the platform provides advancements like automated pipelines for data preparation and training, real-time monitoring tools for identifying performance issues such as data drift, and SpatioTemporal Asset Catalogs (STAC) compliant cataloging of datasets and models to streamline access and management. We will present technical insights from integrating MLOps into GEP, highlighting challenges and solutions developed to meet the specific needs of EO applications. Operational examples will illustrate how these capabilities are used to address user needs effectively and, by automating and standardizing ML workflows, how GEP empowers scientists and service developers to deploy reliable AI-driven models while reducing the complexity of cloud-based system operations. This session will provide attendees with a comprehensive understanding of how MLOps enhances cloud-based EO ecosystems, demonstrating its potential to enable innovative and sustainable geospatial solutions.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall L3)

Presentation: AIOPEN – Platform and Framework for developing and exploiting AI/ML Models

#cloud-native

Authors: Leslie Gale, Bernard Valentin
Affiliations: Space Applications Services
AI/ML is a transversal skill set that is being applied to EO application development. While AI developers and the EO science and service developer communities are now engaged in developing models offering new possibilities with predictive capabilities, the disruptive nature of AI/ML has also impacted platforms designed to facilitate conventional analytic algorithm development and exploitation. Although current AI/ML development environments have come a long way, they still face several shortcomings that hinder their effectiveness and accessibility. The AI/ML ecosystem is fragmented, with multiple frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and tools often lacking seamless interoperability, and support for creating integrated applications is lacking. Furthermore, reproducibility of experiments is undermined by inconsistencies in environment setups, dependency versions, and data pipelines, while the burden of managing large datasets weighs on users, making collaboration, scientific rigour, publishing, and exploitation of models cumbersome. AIOPEN provides state-of-the-art, end-to-end AI model development lifecycle support, tackling and solving interoperability as far as is possible using existing technologies. AIOPEN is a significant step towards harnessing the power of AI/ML technologies for the advancement of Earth Observation data analyses. To do so, AIOPEN utilizes cutting-edge technology and a cloud-native approach to address challenges in big data management, access, processing, and visualization. The paper will present the work performed in an ESA-funded project to extend the Automated Service Builder (ASB), an infrastructure- and application-agnostic framework for building EO processing platforms developed by Space Applications Services.
We discuss problems encountered and solutions created, showing how an existing EO data processing platform, EOPEN, developed using ASB and hosted in the ONDA Data and Information Access Service (DIAS), uses extensions developed for ASB to fully support AI/ML developers. Besides the ASB framework (https://asb.spaceapplications.com), other frameworks, services and tools are integrated, including from ESA's EOEPCA (https://eoepca.org) and AI4DTE (https://eo4society.esa.int/projects/ai4dte-software-stack/) projects, OGC services, the tracking server MLflow (https://mlflow.org) and the inference server MLServer (https://www.seldon.io/solutions/seldon-mlserver). AIOPEN is a robust platform providing collaboration services, allowing seamless model and data sharing, and efficient search across local and remote catalogues. It facilitates the hosting and sharing of models and training data, the training of AI models, integration into new applications via standard interfaces, and effective management and tracking of AI assets, offering scientists and industry professionals public services that bring together processing and data access capabilities. AIOPEN enables end-users (scientists and industry professionals) to leverage the vast amounts of EO data available and unlock valuable insights. Through community engagement activities, AIOPEN fosters collaboration and gathers valuable feedback. To demonstrate and evaluate the AIOPEN capabilities, two use cases have been implemented: 1) Forest Cover Monitoring, making use of a standard, well-established deep learning architecture, the U-Net, for semantic image segmentation, since forest segmentation (at a single time point) is a binary semantic segmentation task. 2) Urban Change Detection, using a Transformer architecture with EO data and deep neural networks to detect urban changes on the Earth's surface and construct a digital twin of the Earth's urban changes.
The paper will conclude with a discussion of the evaluation performed by independent users and presentation of ideas for future work.
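The forest-cover use case above treats segmentation as a binary task; a minimal sketch of the intersection-over-union score commonly used to evaluate such models (illustrative only, not AIOPEN code):

```python
def iou(pred, truth):
    """Intersection-over-union for two binary masks given as flat 0/1 lists."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0  # two empty masks agree perfectly

# toy 6-pixel masks: 2 pixels overlap, 4 pixels in the union -> IoU = 0.5
pred = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
score = iou(pred, truth)
```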

Thursday 26 June 16:15 - 17:45 (Hall L3)

Presentation: SPAI as a framework for developing applications based on satellite data and artificial intelligence

Authors: Fran Martín Rivas, Juan B. Pedro
Affiliations: Earthpulse
The development of Earth Observation (EO) applications, especially powered by Artificial Intelligence (AI), is essential for addressing global challenges such as climate change, resource management, and disaster response. However, the process is often hindered by technical barriers, high operational costs, and long development cycles. Additionally, the lack of awareness and understanding of EO data and its potential creates further obstacles to its adoption in the market. A solution that not only reduces these limitations but also evangelizes the value of EO data can significantly accelerate innovation, foster its integration across various industries, and democratize its use and potential to create a positive impact on people’s lives. Existing EO platforms and tools for AI application development often require extensive expertise, substantial computational resources, and complex infrastructure. These challenges limit the scalability and accessibility of EO solutions for broader audiences, including non-experts and industries seeking to leverage EO data for decision-making. There is an urgent need for an integrated platform that not only simplifies EO application development but also promotes understanding and accessibility, breaking down barriers to unlock the full potential of EO data across diverse sectors. SPAI, short for Satellite Processing Application Interface, is an innovative framework that significantly accelerates the development of AI4EO applications by enabling an end-to-end workflow. SPAI covers the entire process, from accessing and managing satellite data to deploying fully operational applications in the cloud. It streamlines key steps such as data access, algorithm application, model training, and operations, while handling infrastructure, hardware resources, and cloud deployment. By abstracting the complexity associated with EO data and AI, SPAI allows developers to focus on generating value for users rather than managing technical challenges. 
The framework empowers users with seamless integration of AI models and satellite data, enabling efficient extraction of actionable insights. Its accessibility and cost-effectiveness democratize the use of EO data, fostering broader adoption and facilitating impactful solutions to global challenges such as climate change adaptation and digital transformation. SPAI caters to a diverse range of users by offering multiple interaction options designed to make satellite data and AI accessible to everyone. For developers and data analysts, SPAI provides the SPAI SDK, which includes a Python library, a Command Line Interface (CLI), and an API, enabling advanced programmatic workflows and seamless integration into existing tools and processes. For non-developers, SPAI features the Hub, a marketplace of ready-to-use solutions, and the Builder, an intuitive graphical interface that allows users to create satellite analytics by chaining code blocks without requiring programming expertise. Additionally, SPAI Chat, a Natural Language Processing (NLP)-powered chatbot, facilitates interaction with satellite data, AI tools, and the SPAI SDK. By combining these interfaces, SPAI ensures that both technical and non-technical users can easily access and leverage EO data, breaking down barriers and enabling actionable insights for a wider audience. From addressing long-standing challenges such as effective cloud masking, which has been a persistent issue in optical EO data, to innovative applications like road segmentation for monitoring construction progress or mapping climate events such as floods, SPAI leverages the power of AI to unlock new possibilities. This is made possible through SPAI’s unified access to a broad range of data sources, including all Sentinel missions, Copernicus services, and numerous other datasets. 
Current capabilities include advanced processing tools for the Copernicus missions and Copernicus Services, and AI-powered solutions for road segmentation, cloud masking, and flood masks. The platform's cloud infrastructure management and resource handling abstract away the need to provision and scale computing and storage resources, as well as the deployment of applications such as web user interfaces, APIs or recurrent processing scripts, allowing users to focus solely on their problem and solution. Looking forward, SPAI is poised to expand even further, enhancing its ability to provide comprehensive, actionable insights for a variety of industries and research fields. All the capabilities of SPAI would be meaningless without users benefiting from them and perceiving the value of these technologies. A key example is Abertis, a global leader in road infrastructure management, which has leveraged SPAI to develop a comprehensive solution for landslide risk assessment and early warning systems in Chile and Brazil. Through SPAI, Abertis accessed and processed diverse satellite datasets, such as Sentinel-1, Sentinel-2, digital elevation models (DEM), geological data, and meteorological information, directly on SPAI's cloud infrastructure. The platform handles all computational demands, data storage, and processing workflows, significantly reducing operational complexity. In a matter of minutes, Abertis was able to create and deploy business logic and interaction layers, complete with dashboards and APIs. The result is a robust risk-profiling and early warning system that enables proactive slope management, safeguarding both infrastructure and lives. Idrica, a global leader in the digitalization of the water sector, is currently leveraging SPAI to develop a groundbreaking solution for assessing leakage risks in water pipeline networks.
SPAI provides seamless access to essential datasets—such as Sentinel-1, Sentinel-2, soil moisture, and ground motion data—while its cloud infrastructure manages all computational and storage needs. Idrica leveraged SPAI to quickly train Deep Learning models and deploy actionable insights. The platform’s capabilities allowed the development and deployment of interactive dashboards and APIs for pipeline monitoring in record time, facilitating efficient resource use and driving sustainability. This innovative solution aims to provide a disruptive approach to identifying and mitigating leakage risks, offering actionable insights that enhance pipeline management and water conservation efforts. These are just a few examples of what can be achieved with SPAI. The platform’s versatile capabilities extend far beyond these applications, and countless other use cases can be explored and developed to meet specific needs and challenges. In conclusion, SPAI is a groundbreaking framework that redefines how AI-powered Earth Observation (EO) applications are developed and deployed in the cloud, with the goal of empowering the AI4EO community to create actionable and valuable insights that make a positive impact on businesses and individuals. The platform addresses a critical need in today’s data-driven world: the ability to transform complex satellite data into practical solutions that support informed decision-making, improve resilience, and drive progress in key sectors. SPAI’s integrated, end-to-end approach—from data access and algorithm implementation to model training and cloud deployment—simplifies the development process, ensuring that EO technology is accessible to a wider range of users. By enabling the creation of user-centric applications and deploying ready-to-use solutions in minutes rather than weeks, SPAI helps organizations swiftly adapt and respond to challenges, fostering innovation and supporting sustainable growth. 
Use cases such as Abertis for landslide risk management and Idrica for water leak detection, demonstrate the potential of SPAI to deliver real-world benefits, reducing development time by up to 90% and cutting operational costs by up to 40%. SPAI is not just a tool; it’s a catalyst for change, transforming the potential of EO data into tangible, positive outcomes for people and the planet. Furthermore, SPAI is building an open and collaborative community where users can connect, share knowledge, and contribute to its growth. Anyone interested can join the conversation on Discord (https://discord.gg/PxPKwwm4Kb) and learn more by visiting the SPAI website https://spai.earthpulse.ai/.

Thursday 26 June 16:15 - 17:45 (Hall L3)

Presentation: SharingHub: A Geospatial Ecosystem for Collaborative Machine Learning and Assets Management

#stac

Authors: Clément Guichard, Olivier Koko, Vincent Gaudissart, Brice Mora
Affiliations: CS Group
SharingHub is a comprehensive machine learning (ML) development ecosystem designed to empower collaboration, enhance productivity, and ensure secure management of artificial intelligence models and datasets. Inspired by platforms like Hugging Face, SharingHub offers similar collaborative features but caters specifically to the unique needs of geospatial data and AI scientists. Unlike in traditional ML domains, geospatial data are located in space and time, characteristics that are also reflected in the models themselves; for these reasons, traditional ML ecosystems lack some capabilities needed for the spatial domain. One of SharingHub's key strengths is its web portal, which facilitates the discovery, browsing, and download of ML models and datasets. Acting as a central repository for these resources, it simplifies sharing and collaboration among data scientists, researchers, and organizations, fostering innovation. The service is engineered with interoperability in mind, supporting industry standards such as Open Geospatial Consortium (OGC) standards and the SpatioTemporal Asset Catalog (STAC). Built on top of GitLab, SharingHub leverages GitLab's powerful version control, access management, and collaborative features. However, while GitLab excels in traditional software development, it lacks ergonomics tailored to ML workflows and geospatial domain needs. SharingHub bridges this gap by extending GitLab with a dedicated web portal designed specifically for AI researchers. The SharingHub ecosystem integrates with popular, well-adopted tools from the ML community, such as MLflow, Hugging Face datasets, and Data Version Control (DVC). This ensures smooth integration with various ecosystems, enabling users to work with familiar tools and frameworks while benefiting from SharingHub's enhanced capabilities. Through its integration with MLflow, SharingHub offers experiment tracking and model distribution for GitLab projects.
Additionally, its DVC integration adds scalable, versioned data storage, which is essential for managing the large datasets commonly used in ML and geospatial projects. Together, MLflow and DVC streamline the end-to-end workflow of model and data management, allowing teams to focus on delivering insights rather than managing infrastructure. SharingHub also integrates with JupyterHub, enabling interactive exploration and experimentation with models and datasets. This functionality closes the gap between prototyping and production by allowing data scientists to test, validate, and refine their work in an interactive environment, enhancing both productivity and model quality. Furthermore, one of our objectives is to accelerate project initiation through the use of preconfigured, standardized templates for common ML project setups. These templates significantly reduce the time required to launch new projects, enhance reproducibility, and ensure adherence to industry best practices, which is particularly valuable for teams seeking consistency and efficiency across multiple projects. Finally, the integration with GitLab provides fine-grained Single Sign-On (SSO) access control, enabling centralized security and allowing teams to securely manage their large-scale datasets and sensitive models. As a member of the Earth Observation Exploitation Platform Common Architecture (EOEPCA) consortium, SharingHub serves as a core component of one of the European Space Agency (ESA) Building Blocks, the MLOps Building Block. Its geospatial capabilities, such as support for OGC standards and STAC, set it apart from other ML hubs like Hugging Face. SharingHub is uniquely positioned as a geospatial-focused ML initiative. In essence, SharingHub is more than just a platform for managing models and datasets.
It is a comprehensive solution that extends a GitLab instance with specialized tools designed for the geospatial and ML communities. By combining Git-based version control with specialized ML tools, SharingHub creates a unique ecosystem that supports the entire ML lifecycle, including collaboration, versioning, peer review, and model management, making it an essential solution for modern geospatial-oriented MLOps and promoting a culture of collaboration, efficiency, and continuous improvement for the ML ecosystem. As part of the EOEPCA consortium, the project is open source: you can deploy your own SharingHub, targeting your own GitLab instance, and try it out. Links: - SharingHub main repository: https://github.com/csgroup-oss/sharinghub - EOEPCA MLOps Building Block: https://eoepca.readthedocs.io/projects/mlops/
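SharingHub's STAC support means models and datasets are described as catalogued, spatially aware items. A minimal sketch of such metadata, using only core STAC 1.0.0 Item fields (the `ml:task` property and the asset layout are illustrative placeholders, not SharingHub's actual schema):

```python
import json

def model_stac_item(model_id, bbox, trained_at, weights_href):
    """Build a minimal STAC Item describing an ML model's spatial footprint.

    Core STAC 1.0.0 fields only; 'ml:task' is a made-up illustrative property.
    """
    west, south, east, north = bbox
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": model_id,
        "bbox": list(bbox),
        "geometry": {
            "type": "Polygon",
            "coordinates": [[
                [west, south], [east, south], [east, north],
                [west, north], [west, south],
            ]],
        },
        "properties": {"datetime": trained_at, "ml:task": "segmentation"},
        "links": [],
        "assets": {
            "weights": {
                "href": weights_href,
                "type": "application/octet-stream",
            },
        },
    }

# hypothetical model id and storage location, for illustration only
item = model_stac_item("flood-unet-v1", (5.0, 45.0, 11.0, 47.0),
                       "2024-01-01T00:00:00Z", "s3://bucket/flood-unet-v1.pt")
```

Because the item is plain GeoJSON, it can be committed to the Git repository alongside the model and indexed by any STAC-aware search tool.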

Thursday 26 June 16:15 - 17:45 (Hall L3)

Presentation: Empowering AI-driven Earth Observation solutions on CREODIAS

Authors: Dr. Jędrzej S. Bojanowski, Marcin Kluczek, Jan Szypulski, Mikołaj Czerkawski, Jan Musiał
Affiliations: CloudFerro, Φ-lab, European Space Agency
The rapid development of Earth observation (EO) technologies has led to an unprecedented accumulation of spatial data, creating both opportunities and challenges for researchers, industry and public administration. EO data platforms, such as CREODIAS, have become powerful ecosystems, offering vast computational resources, including GPU capabilities, seamlessly integrated with multi-petabyte EO data repositories. This infrastructure provides the basis for processing and analysing large spatial datasets, a critical requirement for modern AI-based solutions. The availability of AI-ready spatial data in CREODIAS is a significant step forward, enabling researchers and developers to focus on model development and application rather than data preparation and management. One of the most exciting developments in the field is the ability to train multimodal foundation models using the extensive datasets and computing power available on platforms such as CREODIAS. Capable of understanding and generating insights across different data modalities, these models represent a new frontier in the application of artificial intelligence to EO data analysis. CREODIAS takes a pioneering step by providing embeddings computed by different models. This innovation will, for instance, allow users to search the data catalogue using natural language and perform image-to-image similarity searches. In addition, CREODIAS is positioning itself as a hub for research projects focusing on embedding evaluation and comparison. This initiative will foster collaboration and innovation in the EO community, driving the development of more accurate and efficient artificial intelligence models for spatial data analysis. Foundation models trained on EO data will enable the development of low-cost end-user solutions, such as large-scale image classification. In addition to specialised AI geo-models, the integration of large language models (LLMs) is becoming more common on EO data platforms.
CloudFerro's Sherlock, for example, offers LLM as a service, providing users with AI-based assistance in navigating documentation, tools and APIs. This integration improves the user experience and accelerates the deployment of advanced EO data analytics techniques. This presentation will provide an overview of the current state of the CREODIAS platform in terms of the use of AI models and development opportunities. It will demonstrate how researchers and developers can use these resources to create their own AI models tailored to specific EO applications.
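The image-to-image similarity search enabled by precomputed embeddings reduces to nearest-neighbour lookup in embedding space. A minimal sketch with toy three-dimensional vectors (production systems use approximate-nearest-neighbour indexes over much higher-dimensional embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar(query, catalogue):
    """Return catalogue scene ids ranked by cosine similarity to the query."""
    return sorted(catalogue, key=lambda sid: cosine(query, catalogue[sid]),
                  reverse=True)

# toy catalogue: scene id -> embedding (real embeddings have hundreds of dims)
catalogue = {
    "scene-a": [1.0, 0.0, 0.0],
    "scene-b": [0.9, 0.1, 0.0],
    "scene-c": [0.0, 1.0, 0.0],
}
ranking = most_similar([1.0, 0.05, 0.0], catalogue)
```

Natural-language catalogue search works the same way, with the query embedding produced by a text encoder instead of an image encoder.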

Thursday 26 June 16:15 - 17:45 (Room 1.15/1.16)

Session: A.07.02 Quantification of Water Resources and Their Evolution Under Anthropogenic Pressure - PART 2

As global populations expand and develop, the demand for water intensifies, placing increasing strain on water resources. Additionally, climate change exacerbates hydrometeorological extremes and introduces significant uncertainty regarding the short- to mid-term availability of water for both communities and ecosystems. Having an accurate and timely overview of available fresh water is therefore essential for its rational and sustainable management. This session aims to emphasize how Earth observations contribute to quantifying different continental water storages—including, but not limited to, groundwater, soil moisture, lakes, artificial reservoirs, seasonal snow and glaciers—and to monitoring and predicting their spatial and temporal evolution, in particular in poorly gauged areas.

We invite contributions that address these quantitative aspects, both in terms of water storages and fluxes from basin to global scales. Works focusing on the identification and characterization of hotspots of unsustainable water exploitation over time and space are particularly welcome. Studies exploring direct human impacts on water availability, such as water abstraction for irrigation, decreased recharge due to deforestation, or the effects of dams on downstream areas, are highly encouraged. Additionally, research focusing on attributing observed variations in water storage to natural variability, climate change, or more direct human impacts will be of great interest. There is no restriction on the type of satellite missions or sensors used in the submissions, but approaches based on multi-mission analyses will be favoured.

Thursday 26 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Northern Hemisphere Snowpacks, River Discharge and Settlement Dynamics - A Causal Analysis Using Passive Microwave SWE Data

Authors: Samuel Schilling, Dr. Andreas Dietz, Prof. Dr. Claudia Kuenzer
Affiliations: German Remote Sensing Data Center of the DLR (German Aerospace Center), Institute for Geography and Geology, University of Wuerzburg
Besides serving as the main freshwater resource for about 20% of the world's human population, snow is also a crucial economic and environmental factor. However, in the context of climate change, snow reliability is diminishing due to declining snowpacks across the Northern Hemisphere and increasing variability in snow cover and melt patterns. Ecosystems and economic sectors dependent on snow, such as agriculture and hydropower, must adapt to these shifts and rely on robust data for informed decision-making. Our previous research, however, has shown that although the snowpack itself is thoroughly researched, the connection between snow and water availability, particularly on a large scale, remains underexplored. To address this gap, we performed a causal inference analysis across the Northern Hemisphere from 1980 to 2022, linking snow to river discharge while accounting for six additional influencing variables. The chosen causal inference approach (PCMCI) consists of two steps: causal discovery and causal effect estimation. In the causal discovery phase, a graph connecting the various variables is derived through a multitude of conditional independence tests. The causal effect estimation then fits a random forest regressor model to each time lag examined, allowing us to determine the effect of an intervention on the system, known as the causal effect. Using a moving window design, we evaluated the change in the causal effect of snowmelt on river discharge over the period from 1980 to 2022. This causal effect trend was then compared to other trends, such as trends in river discharge itself, SWE, rainfall, and glaciers. Additionally, we incorporated the World Settlement Footprint (WSF) as a proxy for population dynamics to gain insight into the potentially evolving freshwater demand within the examined basins. At the Living Planet Symposium, we will present our methodology and results, showing the effect snowmelt has on river discharge.
Additionally, we will focus on similarity patterns among the examined regions, identifying areas where water stress could be linked to shifts in snowmelt dynamics. Finally, we compare the observed causal trends in snowmelt's impact on river discharge with regional settlement trends to highlight areas where snow-driven reductions in river discharge coincide with possible increased water demand, suggesting potential intensification of water stress due to combined hydrological and societal factors.
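As an illustration of the per-lag effect estimation step described above (the study fits random forest regressors within the PCMCI framework; this toy stand-in uses an ordinary least-squares slope at a single lag on made-up series):

```python
def lag_coeff(x, y, lag):
    """OLS slope of y_t regressed on x_{t-lag}: a toy stand-in for the
    per-time-lag effect estimation performed by PCMCI-style pipelines."""
    xs, ys = x[:len(x) - lag], y[lag:]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

# synthetic monthly series: discharge responds to snowmelt one step later
snowmelt = [3.0, 7.0, 2.0, 8.0, 1.0, 6.0, 4.0, 5.0]
discharge = [0.0] + [0.5 * s + 10.0 for s in snowmelt[:-1]]
effect = lag_coeff(snowmelt, discharge, 1)  # recovers the 0.5 coupling
```

The causal-discovery step would first confirm, via conditional independence tests, that the snowmelt-to-discharge link is not explained away by the other influencing variables before such a coefficient is interpreted causally.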

Thursday 26 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Lake-climate interactions across sub-Sahelian Africa

Authors: Anna Joelle Greife, Marina Amadori, Monica Pinardi, Claudia Giardino, Mariano Bresciani, Rossana Caroni, Laura Carrea, Stefan Simis, Jean-François Crétaux, Francesco Fava
Affiliations: CNR-IREA, University of Reading, University of Milan, Plymouth Marine Laboratory, CNES
Inland water bodies are complex systems responding to climate and human pressures. Lakes (both natural and reservoirs) in particular are hotspots of biodiversity and providers of a range of ecosystem services relevant to many human communities. As such, lakes in the sub-Sahelian region are vital for the people who depend on them. Under increasing climate and anthropogenic pressures, many sub-Sahelian African lakes have become endangered systems. This study examines lake-climate interactions in 137 sub-Sahelian lakes, many previously unstudied, using satellite datasets to extract lake-specific variables and global models to retrieve meteorological variables. Five parameters of interest were extracted from the European Space Agency's project Lakes_cci (Climate Change Initiative) v2.1.0: Lake Surface Water Temperature (LSWT), chlorophyll-a concentration (CHLA), turbidity (TURB), Lake Water Leaving Reflectance (LWLR) and Lake Water Level (LWL), covering an observation period from 1992 to 2022. In addition, water color, as the dominant wavelength, is derived from daily spatially averaged LWLR using the hue angle algorithm by Van der Woerd and Wernand (2018), supplemented with coefficients for Sentinel-3 by Ye and Sun (2022). Atmospheric forcing is obtained from the ERA5-Land reanalysis model. We consider the parameters air temperature at 2 m a.s.l., shortwave downward solar radiation, dew point temperature at 2 m a.s.l. (td), wind speed (calculated from u and v components), air pressure, and relative humidity, computed from td. Precipitation data are instead obtained from the regional datasets TAMSAT (v3.1) and CHIRPS (v2.0). For each variable we evaluate the climatological year to assess its average seasonal evolution and the standard deviation derived from 20 years of observations. The computation of the climatology is performed depending on the statistical properties and temporal availability of the variables involved in the multivariate analysis.
A Principal Component Analysis (PCA) is performed across all climatic variables to gather a standardized description of the seasons and dominant weather features at each lake; each evaluation is performed for each individual lake. The hydrological cycle (represented by rainfall and relative humidity) is the most common climatic driver (37%), followed by radiative energy (26%, described by shortwave radiation) and wind (10%). However, we find that lakes in the same climatic region may still show discrepancies in the seasonal evolution of the hydrological cycle and in the main climatic drivers. LSWT closely aligns with climate drivers in 96% of cases, while water level responds to rainfall with 1-3 month lags, longer in artificial reservoirs. In 60% of lakes, CHLA peaks during the dry season and TURB during the wet season, with both contributing to seasonal color shifts. Yellow is the dominant color in 54% of lakes, mostly located in agricultural areas with high population density, followed by green (29%, cropland and tree cover, medium population density) and blue (9%, tree-covered uninhabited regions, except for East African oligotrophic lakes). By exploiting open-access datasets, we classify lakes based on their sensitivity to different climatic and anthropogenic forcings. Remote sensing provides a cost-effective approach to conducting water quality monitoring at large spatial scales. Results are in good agreement with the existing literature for many of the well-studied lakes, such that our continental-scale analogy maps are expected to support future research in many uncharted African lakes.
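The hue-angle idea behind the water color classification can be sketched as follows. This is illustrative only: the published algorithm derives CIE tristimulus values from sensor-specific band coefficients (tabulated in the papers cited above), which are omitted here, and the toy chromaticities are invented.

```python
import math

WHITE = (1.0 / 3.0, 1.0 / 3.0)  # equal-energy white point in CIE xy space

def hue_angle(X, Y, Z):
    """Hue angle (degrees) from CIE tristimulus values: the angle of the
    chromaticity point (x, y) around the white point. In hue-angle schemes,
    blue waters sit at large angles and yellow-brown waters at small ones,
    and the angle maps to a dominant wavelength along the spectral locus."""
    total = X + Y + Z
    x, y = X / total, Y / total
    a = math.degrees(math.atan2(y - WHITE[1], x - WHITE[0]))
    return a % 360.0  # fold into [0, 360)

# toy chromaticities: a "blue" point (low x, low y) vs a "yellow" one (high x)
blue = hue_angle(0.20, 0.30, 0.50)
yellow = hue_angle(0.45, 0.40, 0.15)
```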

Thursday 26 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Intensive irrigation buffers groundwater declines in key European breadbasket

Authors: Manuela Girotto, Grace Carlson, Christian Massari, Marco Rotiroti, Tullia Bonomi, Elisabetta Preziosi, Destinee Whitaker
Affiliations: University Of California Berkeley, Research Institute for Geo-Hydrological Protection, National Research Council CNR, Department of Earth and Environmental Sciences, University of Milano-Bicocca, Water Research Institute, National Research Council CNR, Montelibretti, Spelman College
The increase in irrigated area has altered the hydrologic cycle in agricultural basins, resulting in changes to river outflow, net groundwater storage and groundwater flow patterns, as well as changes to precipitation and evaporation. The northern Italian Plain has among the highest percentages of irrigated land in the European Union, supporting more than 35% of agricultural production. Widespread drought in 2022 exposed its vulnerability to climate change and dwindling snowmelt in the Alps, which supplies most of the irrigation water to the Plain. Here, we examine spatiotemporal variations in groundwater storage observed by the Gravity Recovery and Climate Experiment satellites and more than 1000 groundwater wells from 2002-2022. We find that, in the unconfined aquifers of the Italian Po Plain, extensive irrigation activities have inadvertently become a source of large-scale, reliable managed aquifer recharge. We show that long-term groundwater storage trends in the Po Plain, from both coarse-resolution satellite-based measurements and point-source groundwater level observations, are related to large-scale wet and dry epochs. Using a GRACE/GRACE-FO mass balance approach, we find that from 2015-2022 the rate of groundwater storage decline more than doubled as compared to the 2002-2022 period. Using a uniquely dense network of groundwater well observations, we find that non-irrigated areas show more dramatic water loss during dry epochs than intensively irrigated areas, where irrigation-induced recharge is substantial. High correlations between yearly groundwater storage change in irrigated regions and peak snow water equivalent in the adjacent Alps point to the importance of snowmelt-supplied irrigation water. A low-snow future for the Italian Alps may spell a long-term decline in the irrigation water supply, driving changes to the irrigation system through technological improvements or the proportion of groundwater-supplied irrigation water.
This, in turn, may change net groundwater storage either through decreases in irrigation-induced recharge or increases in groundwater pumping. Important changes to the irrigation system in northern Italy need to be made to ensure sustainable use of water resources in a dryer and hotter future. However, taking advantage of the permeable mountain-front aquifers of the Po Plain to expand managed aquifer recharge in wet years will help sustain expected increases in groundwater pumping during dry years. Not only can this help ensure longevity of the food production system in northern Italy, but it will also help combat negative impacts associated with groundwater overdraft and help sustain groundwater-dependent ecosystems through drought. This regional case study of the Po Plain exemplifies the interconnectedness of our human and hydrological systems, highlighting the importance of assessing climate and anthropogenic forces changing water resource availability on seasonal and long timescales.
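The GRACE mass balance approach mentioned above can be sketched as a simple component subtraction: the groundwater storage anomaly is what remains of the total water storage anomaly once the other stores are removed. This is an illustrative simplification with made-up numbers, not the study's actual processing chain.

```python
def groundwater_anomaly(tws, soil_moisture, snow, surface_water):
    """Disaggregate total water storage anomalies (all series in cm of
    equivalent water height, on the same time steps) into a groundwater
    component: dGW = dTWS - dSM - dSWE - dSW."""
    return [t - sm - sn - sw
            for t, sm, sn, sw in zip(tws, soil_moisture, snow, surface_water)]

tws = [2.0, 1.0, -1.5]   # satellite-derived total water storage anomaly
sm  = [0.5, 0.3, -0.4]   # modelled soil moisture anomaly
swe = [1.0, 0.2, -0.1]   # snow water equivalent anomaly
sw  = [0.2, 0.1, -0.2]   # surface water anomaly
gw = groundwater_anomaly(tws, sm, swe, sw)
```

In practice the non-groundwater components come from land surface models and ancillary observations, and their uncertainties propagate directly into the groundwater estimate.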
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: 40-year water volume changes of West African lakes and reservoirs derived from SWOT and optical imagery

Authors: Félix Girard, Laurent Kergoat, Joao Marcelo Silva Paiva, Abdelmouhaimen Sarhane, Mathilde de Fleury, Nicolas Taburet, Manuela Grippa
Affiliations: Géosciences Environnement Toulouse, Collecte Localisation Satellites
In West Africa, lakes and reservoirs are associated with essential uses such as irrigation, fisheries, and human and livestock consumption, but their dynamics remain poorly known at the regional scale. As only a few of these water bodies have been included in global studies, more efforts are needed to monitor lake volume changes in the region. Remote sensing is an essential tool in poorly gauged areas such as West Africa. The Surface Water and Ocean Topography (SWOT) mission, launched in December 2022, provides new opportunities for large-scale monitoring of lake volume changes in this region, as its wide-swath radar altimeter covers more than 90% of the inland areas with a sub-monthly revisit. In this work, we first show that SWOT water surface elevation estimates are robust enough to capture the seasonal water level dynamics of the lakes in the study region, with an accuracy and precision of about 0.10 m and 0.20 m, respectively. Second, we use a multi-sensor approach combining SWOT elevation estimates and deep-learning-derived (Convolutional Neural Network, CNN) Landsat/Sentinel-2 water surface areas to generate hypsometric curves for more than 3000 West African lakes included in the SWOT Prior Lake Database. These hypsometric curves are then associated with 40 years of water surface area estimates derived from the Landsat archive to produce long-term time series of lake water volume change. The 40-year water volume change time series represents the first quantification of water storage and its temporal evolution at the regional scale, and allows for the assessment of the impact of the large number of recently constructed dams, particularly in Burkina Faso. Finally, following de Fleury et al. (2023), evaporation-induced water losses are calculated and linked to the water level changes to provide large-scale information on the hydrological functioning of the study lakes, including water withdrawals for irrigation, which are unknown at the regional scale.
The results of this work, once linked to the precipitation variations of the past decades, will also improve our understanding of the paradoxical increase in surface water in the Sahel since the 1970s, despite the severe drought that the region has experienced.
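The hypsometric-curve step described above can be illustrated with a minimal sketch: fit an area-elevation relation from paired SWOT water levels and imagery-derived surface areas, then integrate it between two water levels to obtain a storage change. The fitting choice (a low-order polynomial) and the numbers are illustrative, not the authors' actual processing:

```python
import numpy as np

def fit_hypsometry(elevations, areas, deg=2):
    """Fit a polynomial area-elevation (hypsometric) curve A(h) from paired
    SWOT water-surface elevations (m) and imagery-derived areas (km^2)."""
    return np.polynomial.Polynomial.fit(elevations, areas, deg)

def volume_change(hypso, h0, h1, n=200):
    """Integrate A(h) between two water levels with the trapezoidal rule
    to estimate a storage change (km^2 * m)."""
    h = np.linspace(h0, h1, n)
    a = hypso(h)
    return float(np.sum((a[1:] + a[:-1]) * np.diff(h) / 2.0))

# Illustrative synthetic lake whose area grows linearly with water level
elev = np.array([300.0, 301.0, 302.0, 303.0])
area = np.array([10.0, 12.0, 14.0, 16.0])
hypso = fit_hypsometry(elev, area, deg=1)
dV = volume_change(hypso, 300.0, 303.0)  # 39 km^2*m for this toy lake
```

Once such a curve exists for a lake, any historical surface-area estimate (here, 40 years of Landsat) can be converted to a level and hence to a volume change.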
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: A Probabilistic Approach to Characterizing Drought Using Satellite Gravimetry

Authors: Peyman Saemian, Dr.-Ing. Mohammad J. Tourian, Dr.-Ing. Omid Elmi, Nico Sneeuw, Prof. Dr. Amir AghaKouchak
Affiliations: Institute Of Geodesy, University Of Stuttgart, Germany, Department of Civil and Environmental Engineering, University of California, Irvine, Environment and Health, United Nations University Institute for Water
In recent years, the Gravity Recovery and Climate Experiment (GRACE) satellite mission, along with its successor GRACE Follow-On (GRACE-FO), has proven to be an indispensable resource for assessing drought through measurements of Total Water Storage Anomaly (TWSA). However, existing methods often neglect the uncertainties in TWSA data arising from GRACE orbit design, background modeling assumptions, and inherent measurement errors. To address this, we propose an approach that explicitly accounts for these uncertainties: the Probabilistic Storage-Based Drought Index (PSDI). This method utilizes Monte Carlo simulations to generate realistic realizations of the stochastic TWSA time series, capturing a range of possible drought scenarios. These ensembles enable a probabilistic classification of drought severity, assigning probabilities to each drought category instead of determining a single category per epoch. We compared PSDI with the deterministic Storage-Based Drought Index (SDI) across major global basins. The results reveal that the deterministic method frequently overestimates the severity of storage-based droughts. Additionally, we evaluated PSDI’s performance during various hydrological events across regions including the United States, Europe, the Middle East, Southern Africa, South America, and Australia. PSDI consistently provided a more reliable and nuanced characterization of drought conditions in all cases than conventional deterministic indices. By offering a probabilistic perspective, PSDI enhances the realism of TWSA-based drought assessments, making it better suited for adaptive strategies and practical risk management.
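The Monte Carlo classification idea can be sketched as follows: draw realizations of one epoch's standardized TWSA from its error distribution and report the fraction falling in each severity class. The Gaussian error model and the category thresholds below are illustrative assumptions, not the PSDI paper's exact definitions:

```python
import numpy as np

def drought_probabilities(twsa_mean, twsa_std, n_draws=10000, rng=None):
    """Monte Carlo classification of one epoch's standardized TWSA anomaly.
    Draws realizations from N(mean, std) and returns the probability of each
    severity class instead of a single deterministic label. The class
    thresholds (-0.8, -1.3, -1.6) are illustrative, not PSDI's values."""
    rng = np.random.default_rng(rng)
    draws = rng.normal(twsa_mean, twsa_std, n_draws)
    return {
        "no drought": float(np.mean(draws > -0.8)),
        "moderate": float(np.mean((draws <= -0.8) & (draws > -1.3))),
        "severe": float(np.mean((draws <= -1.3) & (draws > -1.6))),
        "extreme": float(np.mean(draws <= -1.6)),
    }

# An epoch whose deterministic index would read "moderate" gets spread
# over neighbouring classes once TWSA uncertainty is propagated
probs = drought_probabilities(-1.0, 0.3, rng=0)
```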
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Integrating Earth observation and artificial intelligence for enhanced snowmelt-runoff modelling to improve operational streamflow estimation in glaciated catchments of Nepal

Authors: Anil Khanal, Dr. Dibas Shrestha, Prof. Dr. Vishnu Prasad Pandey, Piotr Stępnicki
Affiliations: Blue Water Intelligence (BWI), Central Department of Hydrology and Meteorology, Tribhuvan University, Center for Water Resource Studies, Institute of Engineering, Tribhuvan University
Accurate snowmelt-runoff modeling in Nepal's glaciated catchments is vital for sustainable hydropower generation and disaster management. The Himalayan region is characterized by complex topography, extreme climatic variations, and significant contributions of snow and glacier melt to river flows. Scarce in-situ data and the influence of temperature and precipitation lapse rates pose significant challenges to snowmelt modeling. To address these challenges, BWI, a French basin digitization firm, is collaborating with Tribhuvan University's Central Department of Hydrology and Meteorology to integrate Earth Observation (EO) data with artificial intelligence (AI) techniques within a semi-distributed hydrological model for operational streamflow forecasting. Advancements in AI-based data fusion methods enable us to integrate diverse EO datasets—including snow cover, snow extent, snow depth, and glacier extent from optical, microwave, and radar sensors—with meteorological and topographic data. We employ Convolutional Neural Networks (CNNs) for extracting spatial features from satellite imagery; Random Forest (RF) algorithms for snow cover classification; and Long Short-Term Memory (LSTM) networks to capture the temporal dynamics of rainfall and snowmelt contributing to streamflow. By training these models with historical snow and hydrological datasets, we aim to establish non-linear relationships between climatic variables and runoff, thereby enhancing the predictive capability of our snowmelt model. The AI-enhanced inputs are incorporated into BWI’s operational semi-distributed hydrological model, improving predictions of streamflow in real time by accounting for the spatial and temporal variability of snowmelt processes. We validate the model's performance using observed hydrological data to demonstrate improved accuracy in simulating streamflow. 
This approach offers a novel and efficient framework for operational hydrological forecasting, providing accurate, real-time streamflow predictions in glaciated catchments. By combining EO data processing with AI/ML techniques, this study aims to advance the capacity for sustainable water resource management in the Himalayas, supporting efforts to address the challenges posed by accelerated glacial melt and changing hydrological cycles.
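As a concrete illustration of the lapse-rate and snowmelt components that such a semi-distributed model must represent, here is a simplified degree-day melt sketch over elevation bands. The degree-day factor, lapse rate and band values are illustrative, not the operational model's calibration:

```python
import numpy as np

def degree_day_melt(t_station, station_elev, band_elevs, snow_swe,
                    lapse_rate=-0.0065, ddf=4.0, t_melt=0.0):
    """Simplified degree-day snowmelt over elevation bands. Air temperature
    is extrapolated from a station with a linear lapse rate (degC/m), and
    daily melt (mm) = DDF * max(T - T_melt, 0), capped by the available
    snow water equivalent. All parameter values are illustrative."""
    t_band = t_station + lapse_rate * (np.asarray(band_elevs, dtype=float)
                                       - station_elev)
    potential = ddf * np.maximum(t_band - t_melt, 0.0)
    melt = np.minimum(potential, snow_swe)
    return melt, snow_swe - melt

# Three elevation bands on a warm day at a 1000 m valley station:
# only the lowest band sits above the melt threshold
melt, swe_left = degree_day_melt(10.0, 1000.0, [1500.0, 3000.0, 4500.0],
                                 snow_swe=np.array([50.0, 50.0, 50.0]))
```

In the hybrid setup described above, EO-derived snow states and AI-estimated relationships would replace fixed parameters like these.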
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G1)

Session: D.02.06 Foundation Models for Earth Observation: Current solutions with less labelled data to improve environment monitoring and future perspectives to revolutionize geospatial data discovery and utilization - PART 3

This session will delve into cutting-edge developments in Earth Observation (EO) Foundation Models (FM). It will discuss current solutions, future perspectives and the implications of these models for the EO community. The session will also explore new possibilities enabled by multi-modal models, zero-shot applications, self-supervised learning, big data and large pre-trained models, examining their potential to transform EO data analysis and applications.

Topics:
- Sensor independence: FMs can process data from various sensors, including multi-/hyper-spectral, SAR, LiDAR, Very High Resolution (VHR) satellite data and more, enabling holistic analysis of Earth's dynamics.
- Benchmarking and Evaluating FMs: Establishing standardised evaluation metrics and fair benchmarks to assess the performance and capabilities of FMs in processing EO data, ensuring reliability and efficiency.
- Multimodality: FMs can adeptly handle diverse data modalities such as text, video and imagery, offering new approaches to EO data analysis and interpretation without requiring extensive labelled datasets which are rarely available in environmental applications (e.g., land, forestry, agriculture, water/ice or atmospheric phenomena that can be monitored with EO data).
- Fine-tuning FMs and Self-Supervised Learning (SSL) for downstream tasks with an emphasis on environment monitoring applications currently under-represented in EO benchmarks, such as biophysical variable estimation or early warnings/ anomaly detection in satellite image time-series.
- Big data: Over the past few decades, the availability of EO data has increased, providing unprecedented coverage of the Earth’s surface and atmosphere. Modern Earth System Models (ESMs), which operate at high resolutions in both space and time to simulate the evolution of Earth system components and predict the future state of the climate, estimate air pollution, and more, generate petabytes of data per simulated day. Data output and storage have already become a major bottleneck for high-resolution climate modeling. To address these challenges, approaches combining data engineering, AI, and information theory have shown great promise for various downstream applications. This session invites contributions on computational methodologies for engineering embedding representations to compress, index, tokenize, and fuse geospatial data. By focusing on topics such as embedding techniques, vector databases for data exploration, cross-modal alignment, and embedding compression, this session will provide insights into how these technologies can be applied to enhance data accessibility, sharing, and analysis in EO and ESM applications. These embeddings may facilitate efficient data transmission, data exploration/search, and cross-modal data alignment and reconstruction, such as converting vision to text or deriving surface reflectance from SAR data.
- Implications of FMs for the Community: Understanding the potential societal, environmental and economic impacts of implementing FMs in EO applications, fostering informed decision-making and resource management.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G1)

Presentation: Panopticon: Towards Sensor-Agnostic Models For Earth Observation

Authors: Ando Shah, Leonard Waldmann (co-first authors), Nils Lehmann, Adam J Stewart, Yi Wang, Stefan Bauer, John Chuang, Xiao Xiang Zhu
Affiliations: University of California Berkeley, Technische Universität München
Foundation models for earth observation (EO) have shown great potential for solving various downstream tasks with high data efficiency. However, most existing models are tailored to specific satellite sensors, which significantly limits their flexibility. The primary challenge lies in handling the wide variability in spectral and spatial characteristics across different satellite sensors, with channel counts ranging from 3 to over 200 and ground sampling distances (GSD) varying by two orders of magnitude. A flexible, sensor-agnostic model would enable unified processing of earth observation data across multiple satellites, allowing for more robust monitoring systems and reducing the complexity of maintaining multiple sensor-specific models. In this work, we propose Panopticon, a highly flexible sensor-agnostic model and self-supervised pre-training framework, capable of building representations from any combination of optical multispectral, hyperspectral and synthetic aperture radar (SAR) sensor channels of any ground sampling distance. Our training approach leverages a newly compiled dataset of paired earth observation modalities, where we sample multiple "snapshots" from different geolocations that vary in their GSDs, modalities, spectral characteristics, and processing levels. The dataset comprises globally diverse earth observation data from seven satellite sensors, spanning multispectral, SAR, and hyperspectral modalities with ground sampling distances ranging from 0.3 to 100 meters. Our approach builds on the DINOv2 framework by treating images from different sensors of varying spectral and spatial characteristics of the same geo-location as spectral views of the same object. During training, we apply DINOv2 multi-scale spatial augmentations to learn different GSDs, alongside a novel set of spectral augmentations to learn diverse channel configurations, in order to achieve spectral and scale invariance.
Our spectral augmentation strategy includes generating arbitrary multispectral channels from hyperspectral data through spectral convolution, enabling the model to learn channel configurations beyond those present in multispectral datasets while facilitating cross-sensor learning and generalization to unseen sensors. The model handles varying numbers of input channels through cross-attention with a learnable query, incorporating learnable channel embeddings for SAR modes and embeddings derived from spectral response functions for optical channels. Our systematic evaluation shows that model performance remains stable under progressive spectral subsampling and spatial resolution reduction, confirming the architecture's inherent spectral and scale invariance. The model achieves competitive performance on classification and segmentation tasks across multispectral, hyperspectral and SAR datasets, demonstrating robust generalization across sensors, GSDs, and channel configurations. Notably, Panopticon maintains performance when processing data from previously unseen satellite sensors and efficiently fuses multimodal data across different sensor products. These contributions advance the emerging class of any-sensor models that can flexibly work across the majority of earth observation satellites, potentially revolutionizing how we process and analyze multi-sensor satellite data while significantly reducing the need for sensor-specific model development and training.
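The channel-aggregation mechanism described above, cross-attention with a learnable query, can be sketched in a few lines of numpy: because attention pools over the channel axis, the output embedding has the same size whether 3 or 200 channels are supplied. The weights here are random stand-ins, not Panopticon's trained parameters, and the channel embeddings are reduced to generic per-channel tokens:

```python
import numpy as np

def channel_cross_attention(channel_tokens, query, wk, wv):
    """Pool a variable number of per-channel tokens into one fixed-size
    embedding via single-head cross-attention with a learnable query.
    A simplified numpy sketch of the mechanism, not the trained model."""
    k = channel_tokens @ wk                     # (C, d)
    v = channel_tokens @ wv                     # (C, d)
    scores = query @ k.T / np.sqrt(k.shape[1])  # (1, C)
    attn = np.exp(scores - scores.max())
    attn = attn / attn.sum()                    # softmax over channels
    return attn @ v                             # (1, d), independent of C

rng = np.random.default_rng(0)
d = 8
query = rng.normal(size=(1, d))                 # the learnable query
wk, wv = rng.normal(size=(d, d)), rng.normal(size=(d, d))
out3 = channel_cross_attention(rng.normal(size=(3, d)), query, wk, wv)
out200 = channel_cross_attention(rng.normal(size=(200, d)), query, wk, wv)
```

The fixed output shape is what lets one backbone accept an RGB triplet and a 200-band hyperspectral cube alike.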
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G1)

Presentation: PANGAEA: A GLOBAL AND INCLUSIVE BENCHMARK FOR GEOSPATIAL FOUNDATION MODELS

Authors: Valerio Marsocci, Yuru Jia, Georges Le Bellier, David Kerekes, Liang Zeng, Sebastian Hafner, Sebastian Gerard, Eric Brune, Ritu Yadav, Ali Shibli, Heng Fang, Yifang Ban, Maarten Vergauwen, Nicolas Audebert, Andrea Nascetti
Affiliations: KTH - Royal Institute of Technology, KU Leuven, University of Glasgow, CNAM - Conservatoire National des Arts et Métiers, IGN - Institut National de l'Information Géographique et Forestière
Geospatial Foundation Models (GFMs) have emerged as a promising technology for extracting representations from Earth observation (EO) data, with applications spanning urban planning, agriculture, disaster response, and more. These models, trained on large-scale EO datasets, aim to extract fundamental geospatial features that can be applied to a range of downstream tasks. However, the rapid development of GFMs has outpaced the establishment of a robust and standardized evaluation methodology, hindering the ability to effectively assess their real-world applicability. Existing works often evaluate on suboptimal downstream datasets and tasks that are either too easy (e.g., EuroSAT, which reports >99% accuracy) or too narrow, limiting the usefulness of the evaluations for assessing the real-world applicability of GFMs. These benchmarks overlook critical factors, such as the diversity of sensor types, spatial resolutions, temporal dynamics, and geographical representation, limiting their capacity to evaluate the true potential of GFMs. For instance, the PhilEO Bench [1], though a valuable contribution, evaluates models only through the lens of Sentinel-2 data, limiting the breadth of the conclusions that can be drawn from this benchmark. In comparison, GEO-Bench [2] and SustainBench [3] include data from multiple sensors; however, they cover only land observations, ignore marine environments, consider only uni-modal datasets, and focus on unitemporal applications. Finally, FoMo-Bench [4] and Crop Type Bench [5] restrict their scope to forest and agriculture environments, respectively. To address this gap, we propose PANGAEA (https://github.com/yurujaja/pangaea-bench), a comprehensive and globally inclusive benchmark for GFMs. PANGAEA is designed to evaluate models across a diverse set of tasks, including marine debris segmentation, building damage mapping, and change detection.
It spans multiple domains—urban, agricultural, marine, and forest environments—while incorporating datasets with diverse temporal configurations (uni-, bi-, and multi-temporal), spatial resolutions (1.5m to 30m/pixel), and sensor modalities (natural color, multispectral, and synthetic aperture radar). We conducted a comprehensive survey of current geospatial foundation models (GFMs) to better understand their landscape and capabilities. A representative set of models was selected for evaluation on our benchmark, guided by criteria emphasizing methodological diversity, impact, and reproducibility. This included approaches such as multi-modal contrastive learning [6], masked image modeling (MIM) [7], and supervised training [8], with a focus on models recognized in leading venues and supported by open-source code and pre-trained weights. Our analysis highlights key factors influencing GFM performance, including the characteristics of pre-training datasets, such as sensor spectral and spatial resolutions. Models trained on datasets with richer spectral and spatial detail excel in tasks requiring those features. However, GFMs do not consistently outperform task-specific supervised models, particularly when there is a mismatch between pre-training and downstream data characteristics (e.g., RGB vs. multispectral). Similar trends are observed across applications such as time-series analysis [9], crop monitoring [5], and canopy height prediction [10]. Additionally, while fine-tuning can enhance performance in some scenarios, its benefits depend on the model architecture and the nature of the task. To promote transparency and collaboration, we provide a fully open-source evaluation framework, enabling reproducibility and fostering trust in GFM evaluations. This benchmark is designed to be modular and extensible, accommodating new datasets, tasks, and models as the field evolves. 
Our contributions include a robust and easy-to-use codebase for standardized benchmarking, a diverse evaluation protocol covering a wide range of datasets and sensor types, and insights into GFM performance gaps. These efforts aim to advance understanding of GFM generalization, task specialization, and fine-tuning strategies while supporting the development of future models tailored to real-world applications.
References:
[1] Casper Fibaek, et al. Evaluating geo-spatial foundation models, 2024. arXiv preprint: https://arxiv.org/pdf/2401.04464
[2] Alexandre Lacoste, et al. Geo-bench: Toward foundation models for earth monitoring, 2023. arXiv preprint: https://arxiv.org/pdf/2306.03831
[3] Christopher Yeh, et al. Sustainbench: Benchmarks for monitoring the sustainable development goals with machine learning, 2021. arXiv preprint: https://arxiv.org/pdf/2111.04724
[4] Nikolaos Ioannis Bountos, et al. Fomo-bench: a multi-modal, multi-scale and multi-task forest monitoring benchmark for remote sensing foundation models, 2023. arXiv preprint: https://arxiv.org/pdf/2312.10114
[5] Yi-Chia Chang, et al. On the generalizability of foundation models for crop type mapping. arXiv, abs/2409.09451, 2024.
[6] Anthony Fuller, Koreen Millard, and James R. Green. Croma: Remote sensing representations with contrastive radar-optical masked autoencoders, 2023.
[7] Colorado J. Reed, et al. Scale-mae: A scale-aware masked autoencoder for multiscale geospatial representation learning, 2023.
[8] Favyen Bastani, et al. Satlas: A large-scale, multi-task dataset for remote sensing image understanding. arXiv preprint arXiv:2211.15660, 2022.
[9] Zongzhe Xu, et al. Specialized foundation models struggle to beat supervised baselines, 2024.
[10] Esther Rolf, et al. Contrasting local and global modeling with machine learning and satellite data: A case study estimating tree canopy height in african savannas, 2024.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G1)

Presentation: Self-Supervised Learning (SSL) for InSAR-based Structural Monitoring: Exploring Correlations between Anomalies and Key Building Attributes in Time-Series

Authors: Dr. Ridvan Kuzu, Dr. Corneliu Octavian Dumitru, Leonardo Bagaglini, Filippo Santarelli
Affiliations: DLR, e-GEOS
The stability of urban infrastructure is a critical factor in ensuring resilience against risks posed by aging structures, seismic activity, and climate-induced stressors. This study investigates the potential of Self-Supervised Learning (SSL) in InSAR-based structural monitoring, emphasizing the relationships between detected anomalies and key building attributes such as age, type, and number of floors. By leveraging Sentinel-1 Persistent Scatterer Interferometry (PSI) data, this research captures time-series structural displacements to establish a robust framework for anomaly detection and analysis. The study classifies anomalies into five categories—trend, step, noise, activity, and seasonal—reflecting distinct structural behaviors over time. Advanced SSL models, including Long Short-Term Memory (LSTM)-based architectures and time-series foundation models, are employed to generate embeddings from InSAR-derived time-series data. This approach reduces reliance on labeled data, making it particularly suited for large-scale monitoring of diverse urban areas where annotations are scarce or inconsistent. By learning meaningful representations from the data itself, these models enable the detection of subtle patterns indicative of structural anomalies. A core aspect of the research is the exploration of correlations between anomaly types and building-specific attributes. For example, the study identifies whether older buildings exhibit higher trends of structural displacement or if certain structural types are more prone to seasonal variations. These insights are invaluable for urban planners and policymakers, offering a data-driven basis for prioritizing inspections, maintenance, and retrofitting efforts. The proposed methodology demonstrates the scalability and generalizability of SSL in Earth observation, enabling the monitoring of urban infrastructure at regional or national scales. 
This framework also supports the development of early-warning systems for infrastructure resilience by identifying potential vulnerabilities before they escalate into critical failures. By integrating advanced time-series analysis with SSL, this study contributes to the broader adoption of AI-driven methods in geospatial monitoring. Future work will focus on expanding the dataset, improving model interpretability, and refining the detection of anomaly correlations to enhance the applicability of InSAR-based monitoring systems in diverse urban environments.
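As a much simpler point of comparison to the SSL embeddings described above, the "trend" and "seasonal" anomaly families can already be separated on a single PSI displacement series with a least-squares fit of a linear trend plus an annual sinusoid. This rule-based baseline is an illustrative sketch, not the authors' method:

```python
import numpy as np

def decompose_displacement(t_years, disp_mm):
    """Least-squares fit of a linear trend plus an annual sinusoid to a PSI
    displacement series; the fitted terms give a crude separation of
    'trend' vs 'seasonal' behaviour, with the residual scatter standing
    in for 'noise'. A baseline sketch, not the SSL approach."""
    A = np.column_stack([np.ones_like(t_years), t_years,
                         np.sin(2 * np.pi * t_years),
                         np.cos(2 * np.pi * t_years)])
    coef, *_ = np.linalg.lstsq(A, disp_mm, rcond=None)
    trend_mm_per_yr = float(coef[1])
    seasonal_amp_mm = float(np.hypot(coef[2], coef[3]))
    noise_mm = float((disp_mm - A @ coef).std())
    return trend_mm_per_yr, seasonal_amp_mm, noise_mm

t = np.arange(0.0, 4.0, 1.0 / 12.0)              # 4 years, monthly samples
disp = -3.0 * t + 2.0 * np.sin(2 * np.pi * t)    # subsidence + seasonal term
trend, amp, noise = decompose_displacement(t, disp)
```

Correlating such per-building components with attributes like age or floor count is the kind of analysis the SSL embeddings generalize and scale up.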
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G1)

Presentation: Spectral Super-Resolution for Greenhouse Gas Detection

Authors: Ruben Gonzalez, Benedikt Blumenstiel, Ranjini Bangalore, Nassim Ait Ali Braham, Dr. Johannes Jakubik, Dr. Paolo Fraccaro, Dr Conrad Albrecht, Thomas Brunschwiler
Affiliations: IBM Research Europe, IBM Research India, German Aerospace Center (DLR), IBM Research Europe
Hyperspectral imaging offers significant potential for greenhouse gas (GHG) monitoring due to its ability to capture detailed spectral information across broad wavelength ranges. This coverage includes key wavelength regions of the absorption spectra to detect gases such as carbon dioxide (CO2), methane (CH4), and nitrogen dioxide (NO2). Today, the spatial and temporal coverage of hyperspectral missions such as EnMAP, PRISMA, and EMIT is limited. In contrast, multispectral remote sensing imagery generated by Sentinel-2 is available with global coverage on a weekly basis, but lacks the spectral granularity and specific coverage in the critical absorption regions of these gases. Our research overcomes these limitations by reconstructing hyperspectral data from multispectral inputs. We developed and pre-trained a spectral transformer model to capture spectral dependencies and subsequently fine-tuned it on multispectral data for spectral reconstruction. The model is based on a self-supervised masked autoencoder [1] and pre-trained with the majority of spectral channels randomly masked. To achieve this, each channel is individually tokenized, and a wavelength-based positional embedding is added. The pre-training task is to reconstruct the masked channels from the unmasked input, simulating the downstream application of spectral super-resolution. During pre-training, we either apply spatial-spectral attention between all tokens or spectral attention only. The latter approach reduces the computational cost, which for self-attention scales quadratically with the token count. We used a large-scale hyperspectral dataset of over 538k EnMAP samples [2] for pre-training and constructed a dataset of 11k aligned Sentinel-2 and EnMAP images for the fine-tuning. During the fine-tuning, the model performs spectral super-resolution and predicts hyperspectral data from the Sentinel-2 input.
The study evaluates the utility of the reconstructed hyperspectral data in two GHG downstream tasks, comparing its performance against original hyperspectral and multispectral data. For these tasks, we aligned the EnMAP-Sentinel-2 dataset with CH4 and NO2 measurements from Sentinel-5P. The results indicate that the reconstructed data improves GHG prediction accuracy compared to models based on only multispectral data, suggesting its potential to bridge the trade-off between spectral resolution and spatio-temporal coverage. This approach offers insights into leveraging the complementary strengths of hyperspectral and multispectral imaging systems for enhanced atmospheric monitoring. References: [1] He, K., Chen, X., Xie, S., Li, Y., Dollár, P., & Girshick, R. (2022). Masked Autoencoders Are Scalable Vision Learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. [2] Braham, N. A. A., Albrecht, C. M., Mairal, J., Chanussot, J., Wang, Y., & Zhu, X. X. (2024). SpectralEarth: Training Hyperspectral Foundation Models at Scale. arXiv preprint arXiv:2408.08447.
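Two ingredients of the pre-training recipe described above, a wavelength-keyed positional embedding and random channel masking, can be sketched as follows. The embedding dimension, frequency schedule and mask ratio are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def wavelength_embedding(wavelength_nm, dim=16):
    """Sinusoidal positional embedding keyed on a channel's centre
    wavelength (nm) rather than its sequence index, so channels from
    different sensors share one embedding space."""
    freqs = 1.0 / (10000.0 ** (np.arange(dim // 2) / (dim // 2)))
    angles = wavelength_nm * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

def mask_channels(n_channels, mask_ratio=0.75, rng=None):
    """Randomly hide most spectral channels; the pre-training task is to
    reconstruct the hidden ones from those kept (True = masked)."""
    rng = np.random.default_rng(rng)
    n_keep = max(1, int(round(n_channels * (1.0 - mask_ratio))))
    keep = rng.choice(n_channels, size=n_keep, replace=False)
    mask = np.ones(n_channels, dtype=bool)
    mask[keep] = False
    return mask

emb = wavelength_embedding(842.0)                 # e.g. a Sentinel-2 NIR band
mask = mask_channels(224, mask_ratio=0.75, rng=0) # hyperspectral channel mask
```

Keying the embedding on wavelength is what lets a model pre-trained on EnMAP channels later accept Sentinel-2 bands at fine-tuning time.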
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G1)

Presentation: Towards a foundation model of land surface dynamics

Authors: Vitus Benson, Sebastian Hoffmann, Markus Zehner, Lazaro Alonso, Dr Nuno Carvalhais, Christian Reimers, Claire Robin, Dr. Mélanie Weynants, Prof Markus Reichstein
Affiliations: Max Planck Institute For Biogeochemistry, ELLIS Unit Jena, ETH Zürich
Every day, public satellite missions collect terabytes of data describing spectral characteristics of our Earth’s surface. These satellite observations are the only measurement data describing the state of the terrestrial biosphere and hydrological cycle on a global scale. Conventional Earth system models use such data to evaluate outputs from their land components describing local to global dynamics of carbon, water and energy cycles. Here, we adopt a novel viewpoint: what if we could learn a representation of land surface states and their dynamics directly from the Earth observation data? Within the Horizon Europe project WeatherGenerator, we are developing a foundation model of the land surface. This foundation model assimilates many different types of available Earth observation data into a unified latent representation, describing the land surface state. Moreover, the foundation model dynamically evolves the land surface state through time conditioned on climatic conditions. More specifically, the foundation model is a large deep neural network that is trained in a self-supervised manner on a broad range of satellite remote sensing data. Its temporal evolution is achieved through building upon techniques from generative AI, but always forced by meteorological variables just like a conventional terrestrial biosphere model. To connect the abstract latent land surface state with concrete biophysical and biogeochemical quantities, different decoders are fine-tuned on smaller task-specific datasets, e.g. in-situ observations of carbon, water and energy fluxes. Through the foundation model, we obtain access to a unified embedding of the land surface. For each grid cell, this land surface state conceptually entails information about spatial heterogeneity, diurnal temporal dynamics and between-sensor relationships. This enables applications based on diagnostic estimation of key environmental variables and prognostic modeling over medium-range to sub-seasonal time scales.
More specifically, it supports efforts to upscale and forecast ecosystem fluxes and related environmental mapping activities, as well as impact early warning at high spatial resolution such as drought impacts on vegetation or heat wave characteristics at local scales. Still, the development of the foundation model is a path through many obstacles. The consideration of various modalities such as optical high and medium-resolution satellite missions, SAR and microwave sensors, alongside in-situ data and remote missions on the ISS poses challenges for a generic and scalable data and modeling pipeline. We present first considerations towards overcoming these and developing the foundation model of the land surface. In addition, we discuss how the scientific community can engage with the foundation model through a user-friendly API, such that many applications can adopt and test our land surface embedding.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall G1)

Presentation: Unlocking Geospatial Intelligence: A Novel SSL Approach for Sentinel-2 Satellite Image Representation

Authors: Dr. Nikolaos Dionelis, Riccardo Musto, Giancarlo Paoletti, Jente Bosmans, Simone Sarti, Fabio Di Matteo, Giacomo Cascarano, Dr. Casper Fibaek, Dr. Nicolas Longepe
Affiliations: European Space Agency (ESA), Φ-lab, Leonardo Labs (Leonardo S.p.A.), e-GEOS S.p.A.
Recent advancements in Artificial Intelligence (AI) and Self-Supervised Learning (SSL) have revolutionized large-scale Computer Vision models, enabling exceptional performance on downstream tasks with minimal labeled data. These models are first pre-trained on vast unlabeled datasets and later fine-tuned for specific downstream applications. In the domain of Earth Observation (EO), SSL and Foundation Models hold significant potential due to the extensive and continuously growing volumes of unlabeled satellite data generated by organizations like the European Space Agency (ESA), the National Aeronautics and Space Administration (NASA), and the Japan Aerospace eXploration Agency (JAXA) [1, 2]. For example, the Copernicus Sentinel-2 constellation alone produces approximately 1.6 TB of compressed data daily, offering an ideal environment for advancing unlabeled data-driven model development. Moreover, the scarcity of large-scale labeled datasets and the technical challenges of correctly annotating the vast volumes of data collected by ESA pose significant barriers to achieving high accuracy and top performance in many critical downstream tasks derived from EO data. These tasks often require extensive labeling at medium or large scales to be effective. Furthermore, the Earth’s dynamic and ever-changing nature adds complexity, as labels must account for temporal variations. Labels tied to a specific geographic region at a particular moment are insufficient to capture the evolving characteristics of the environment. Consequently, SSL and EO Foundation Models offer a powerful solution to these challenges. By pre-training these Foundation Models on extensive unlabeled EO data, only a small amount of labeled data is required for subsequent supervised fine-tuning, in order to specialize the models to perform specific downstream tasks. Generally, this approach reduces the need for labeled data to approximately 20% of what is traditionally required for each task [2, 3]. 
This enables the development of general-purpose Foundation Models capable of solving a diverse range of tasks relevant to the EO community, such as land cover classification, building density estimation, and road segmentation [1]. In this work, we develop and train our recently proposed EO Foundation Model, PhilEO Version 1.0 [1], using the MajorTOM dataset [4]. Specifically, the model is trained on the Sentinel-2 L2A data of MajorTOM, referred to as Core-S2L2A, which comprises 23 TB of data. This represents a significant scaling-up of the pre-training data, from less than 1 TB in the PhilEO Globe dataset [5] to 23 TB in MajorTOM Core-S2L2A [4]. The training is conducted on the Leonardo S.p.A. Davinci-1 supercomputer, leveraging PyTorch and its Distributed Data Parallel (DDP) framework to enable simultaneous training across multiple nodes and Graphics Processing Units (GPUs). The model is trained using a combination of reconstruction and auxiliary task losses. The architecture employed is a modified U-Net, consisting of:
- Encoder: extracts latent representations from input images via convolutional blocks, gradually reducing spatial resolution while increasing feature dimensionality. It outputs embeddings, intermediate feature maps, and task-specific predictions, such as coordinates, cloud coverage, building presence and landcover classes.
- Decoder: reconstructs input images from the embeddings using upsampling and convolutional layers.
For the SSL pre-training task, two augmented views of the same input image are generated and fed to the model. The encoder outputs embeddings and auxiliary task predictions, while the decoder only outputs reconstructed images. A composite loss function is then computed and backpropagated in order to optimize multiple objectives:
- Reconstruction Loss: measures the pixel-wise MSE between the input images and their reconstructions.
- Cosine Similarity Loss: encourages consistency between the embeddings of augmented views of the same image, promoting invariant representations.
- Auxiliary Losses: the task-specific predictions are used to compute multiple losses, such as MSE for latitude and longitude predictions, or cross-entropy for cloud class and landcover class predictions.
We believe that predicting relevant EO properties such as geographic coordinates, cloud types, building presence, landcover classes, and climate-zone categories ensures that the model generalizes well across tasks and learns domain-specific features. In addition to the MajorTOM Core-S2L2A dataset, we also train our model on two subsets of MajorTOM, FastTOM and TinyTOM, of approximately 1 TB each. This allows us to assess model performance before scaling up to the full 23 TB of MajorTOM. Experiments performed within the PhilEO Bench [1], specifically on the three downstream tasks of land cover semantic segmentation, building density regression, and road segmentation, demonstrate the efficacy of our proposed SSL algorithm. Moreover, scaling up the pre-training data from less than 1 TB to 23 TB improves the performance of the model. Hence, we pre-train on the MajorTOM dataset [4] and then evaluate on the PhilEO Bench, fine-tuning on the specific downstream tasks. We conduct n-shot experiments for the three downstream tasks, similar to [1] and [3]. As an example, the performance of our model on the building density estimation task, in the n=100 n-shot scenario, is an MSE of 0.00578 for PhilEO TinyTOM. Finally, we are already working on multiple enhancements for this project.
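The composite pre-training objective described above can be sketched in a few lines. The following is a minimal, illustrative NumPy version under assumed tensor shapes and unit loss weights; it is not the authors' PyTorch/DDP implementation, and all names (`composite_loss`, the `w_*` weights) are hypothetical.

```python
import numpy as np

def composite_loss(x, x_rec, emb_a, emb_b, coord_pred, coord_true,
                   w_rec=1.0, w_sim=1.0, w_aux=1.0):
    # Reconstruction loss: pixel-wise MSE between input and reconstruction.
    l_rec = np.mean((x - x_rec) ** 2)
    # Cosine similarity loss: 1 - cos(emb_a, emb_b) between the embeddings
    # of the two augmented views; zero when the views agree perfectly.
    cos = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    l_sim = 1.0 - cos
    # Auxiliary loss: MSE on the latitude/longitude head (the categorical
    # heads, e.g. cloud or landcover class, would use cross-entropy instead).
    l_aux = np.mean((coord_pred - coord_true) ** 2)
    # Weighted sum of the objectives.
    return w_rec * l_rec + w_sim * l_sim + w_aux * l_aux
```

In the actual training loop each term would be a differentiable tensor, so that the combined scalar can be backpropagated through both encoder and decoder.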
In particular:
1) Implementation of the UPerNet [6] decoder to take advantage of multi-level features;
2) Implementation of State-of-the-Art (SoTA) architectures in the U-Net model, such as Vision Transformers (ViT) [7, 9, 10];
3) Confidence quantification and assessment for evaluating EO Foundation Models, on both classification and regression tasks [8]; and
4) Development of the evolution of the PhilEO Bench, namely PhilEO Bench++, which will include (1), (2), and (3), as well as co-located SAR Sentinel-1 data.
References:
[1] C. Fibaek, L. Camilleri, A. Luyts, N. Dionelis, and B. Le Saux, “PhilEO Bench: Evaluating Geo-Spatial Foundation Models,” in Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), 2024.
[2] J. Jakubik, S. Roy, C. Phillips, P. Fraccaro, et al., “Foundation Models for Generalist Geospatial Artificial Intelligence,” arXiv:2310.18660, 2023.
[3] N. Dionelis, C. Fibaek, L. Camilleri, A. Luyts, J. Bosmans, and B. Le Saux, “Evaluating and Benchmarking Foundation Models for Earth Observation and Geospatial AI,” arXiv:2406.18295, 2024.
[4] A. Francis and M. Czerkawski, “Major TOM: Expandable Datasets for Earth Observation,” in Proceedings of IGARSS, 2024.
[5] B. Le Saux, C. Fibaek, L. Camilleri, A. Luyts, N. Dionelis, G. Cascarano, L. Bagaglini, and G. Pasquali, “The PhilEO Geospatial Foundation Model Suite,” European Geosciences Union (EGU), 2024.
[6] T. Xiao, Y. Liu, B. Zhou, Y. Jiang, and J. Sun, “Unified Perceptual Parsing for Scene Understanding,” in Proceedings of the European Conference on Computer Vision (ECCV), pp. 418-434, 2018.
[7] A. Dosovitskiy, L. Beyer, et al., “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale,” in Proceedings of the International Conference on Learning Representations (ICLR), 2021.
[8] N. Dionelis and N. Longepe, “Fine-Tuning Foundation Models with Confidence Assessment for Enhanced Semantic Segmentation,” IEEE Geoscience and Remote Sensing Letters, DOI: 10.1109/LGRS.2024.3504293, 2024. Online: http://ieeexplore.ieee.org/document/10759697
[9] M. Noman, M. Naseer, et al., “Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
[10] F. Bastani, P. Wolters, et al., “SatlasPretrain: A Large-Scale Dataset for Remote Sensing Image Understanding,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 16772-16782, 2023.

Thursday 26 June 16:15 - 17:45 (Room 1.85/1.86)

Session: C.02.13 ESA's Harmony mission

Harmony is the 10th ESA Earth Explorer mission. Earth Explorers are the backbone of the science and research element of ESA’s Living Planet Programme, providing an important contribution to the global endeavor of understanding the Earth system. The Harmony mission, which is currently progressing into Phase B2, is dedicated to the observation and quantification of small-scale motion and deformation (velocity-gradient) fields, primarily at the air-sea interface (winds, waves, and surface currents, including measurements over extreme events), of the solid Earth (tectonic strain), and in the cryosphere (glacier flows and surface height changes).

Harmony aims to provide an integrated view of the dynamic processes that shape the Earth’s surface.
The Harmony space segment consists of a pair of satellites that will fly in convoy with one of the Copernicus Sentinel-1 satellites. These tandem receive-only satellites will passively pick up the same radar signals from Sentinel-1 from different vantage points. Over the ocean, the signals will be Doppler processed, so that surface velocity vector components are obtained along different lines of sight. This will offer the capability to observe, for the first time, velocity vectors directly from space. In addition, both tandem satellites will include an optical TIR instrument with multi-view capability for SST and cloud motion measurements that are co-located and contemporaneous with the SAR observations.
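As a back-of-the-envelope illustration of the Doppler principle, the classical monostatic relation v = λ·f_D/2 links a Doppler shift to a line-of-sight surface velocity. Harmony's bistatic geometry modifies the exact scaling, so the sketch below is only indicative; the function name and the default C-band wavelength are assumptions.

```python
def los_velocity_from_doppler(doppler_hz, wavelength_m=0.0555):
    """Monostatic line-of-sight velocity from a Doppler shift:
    v = lambda * f_D / 2. The ~5.55 cm default is Sentinel-1's
    C-band wavelength."""
    return wavelength_m * doppler_hz / 2.0

# A 36 Hz Doppler anomaly corresponds to roughly 1 m/s along the line of sight.
print(los_velocity_from_doppler(36.0))
```

Combining such line-of-sight components from the different Harmony and Sentinel-1 viewing directions is what yields the surface velocity vector.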
Over land, Harmony will provide data to measure small shifts in the shape of the land surface, such as those resulting from earthquakes and volcanic activity, and will therefore contribute to risk monitoring. It will also provide new information to study the 3D deformation and flow dynamics of glaciers at the rapidly changing marginal zones of the ice sheets, for a better understanding of the impact of ice mass loss on sea-level rise.

The Earth System cannot be understood or modelled without adequately accounting for small-scale processes. Indeed, the parameterisation of unresolved, sub-grid physical processes in global or regional models remains one of the main sources of uncertainty in climate projections, in particular with respect to air-sea coupling, the cryosphere and clouds. Hence, it remains essential to rely on high-quality observations to sample and identify small-scale processes, and to help emulate and calibrate advanced parameterisations of the small unresolved scales. High-resolution observations of the Earth System will thus play an increasingly central role in the next generations of fully coupled Earth System Models (or Digital Twins of Earth).

The session will highlight the latest developments covering overall mission development status, mission science and foreseen exploitation of the mission higher-level products.

Thursday 26 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Status of the Harmony Mission on End-to-End Simulations, Products and Processing Algorithms for Land Applications

Authors: Pau Prats-Iraola, Marc Rodriguez-Cassola, Irena Hajnsek, Andy Hooper, Eric Rignot, Andreas Kääb, Juliet Biggs, Francesco De Zan, Ramon Brcic, Thomas Nagler, Andrea Pulella, Georg Fischer, Simon Trumpf, Andreas Benedikter, Dominik Richter, Marcus Bachmann, Dr. Ir. Paco Lopez-Dekker, Björn Rommen
Affiliations: DLR, University of Leeds, University of California Irvine, University of Oslo, University of Bristol, Delta Phi Remote Sensing GmbH, ENVEO GmbH, Delft University of Technology, ESA/ESTEC
This contribution will present the current status of the science study of the Harmony mission for land applications (solid Earth and land ice). In particular, focus will be placed on the latest developments of the Harmony End-to-End Performance Simulator for land (HEEPS/Terra) and of the dedicated L2 and L3 processing algorithms. Shortly before Harmony’s system CDR, scheduled to take place by Q1 2027, it shall be demonstrated that Harmony has reached scientific readiness level 6 (SRL-6). For this reason, it is necessary to address several scientific aspects of the mission, namely the consolidation of the L2/L3 processing algorithms, the improvement of the end-to-end simulator, which shall also include a first version of the full processing chain, and the execution of a performance campaign demonstrating the feasibility of the mission objectives. All these activities will be conducted in the frame of the aforementioned science study. A first version of the HEEPS/Terra simulator exists, which was developed during Phases 0 and A of the mission (see, e.g., [1]) and was used to demonstrate SRL-5 for land applications in preparation for the User Consultation Meeting at ESA, which took place in July 2022. Some of the upgrades that will be implemented in the current context are: the improvement of the forward model for land ice, which needs to consider the bistatic nature of the measurement in terms of the penetration depth of the signal through snow and firn; the exact reverse image formation of extreme bistatic geometries; the inclusion of more accurate system and instrument models, with bistatic synchronization being a critical aspect; and the integration of a first version of the L1/L2/L3 processors, to name a few. Concerning the processing algorithms, and as with the end-to-end simulator, a first version of the algorithm theoretical baseline documents (ATBDs) is available for the different products.
These algorithms shall be consolidated prior to the system CDR. Furthermore, they will be used to implement a first version of the L2/L3 processors for land applications, which are briefly described in the following:
- L2 processor for DEM products: the single-pass interferometric processor for the generation of the Harmony DEMs. It will include the steps of coregistration, spectral filtering, phase unwrapping and geocoding. A critical aspect is the consideration of the extreme bistatic geometry and the non-zero-Doppler output geometry, which will require the modification of some of the processing steps. This processor will be used for both land ice areas (mass balance estimation) and volcanoes. For land ice applications, a dedicated step will take care of estimating and removing the penetration bias.
- L2 and L3 processors for TOC (Topography Change) products: these processors compute the difference between two DEM products, whereby the L3 processor will perform the mosaicking of different tracks, including bundle adjustment and the calibration of residual trends.
- L2 processor for TDV (3-D velocity map) products: the PSI processor responsible for the retrieval of the deformation time series, exploiting the Harmony and Sentinel-1 image stacks. The goal is the retrieval of the 3-D surface deformation in order to derive the tectonic strain. The current approach is based on the exploitation of both point-like targets and distributed targets [2]. A novel aspect that will be included in the chain is the consideration of the ionosphere as part of the PSI processing [3].
- L3 processor for TDV products: this processor will perform the 3-D inversion from the individual line-of-sight measurements.
The presentation will address the above aspects in detail, reporting on the status of the study at the time the conference takes place.
References:
[1] P. Prats-Iraola, A. Pulella, A. Benedikter, A. Hooper, J. Biggs, A. Kääb, et al., “Performance Analysis of the Harmony Mission for Land Applications: Results from the Phase A Study,” FRINGE 2023.
[2] A. Ferretti, A. Fumagalli, F. Novali, C. Prati, F. Rocca, and A. Rucci, “A New Algorithm for Processing Interferometric Data-Stacks: SqueeSAR,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 9, pp. 3460-3470, Sept. 2011, doi: 10.1109/TGRS.2011.2124465.
[3] V. D. Navarro Sanchez, G. Gomba, F. De Zan, and K. Kretschmer, “Compensation of Ionospheric Effects for InSAR Stacks by Means of a Split-Spectrum Method,” in 13th European Conference on Synthetic Aperture Radar, EUSAR 2021, pp. 898-901, VDE Verlag GmbH.

Thursday 26 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Emulation of the calibration, validation, and tuning of the L2 retrieval of ocean products using Harmony’s Scientific Workbench

Authors: Dr. Ir. Paco Lopez-Dekker, Philip Conroy, Bertrand Chapron, Jordi Isern-Fontanet, Claudia Pasquero, Alexandre Payez, Ad Stoffelen, Marcel Kleinherenbrink, Andreas Theodisiou, Bjorn Rommen
Affiliations: Delft University Of Technology, Ifremer, Institut de Ciències del Mar, KNMI, ESA, University of Milano Bicocca
One of the main goals of Harmony, ESA’s 10th Earth Explorer mission, is to provide a new observational window to study small-scale processes on both sides of the air-sea interface. With a target launch date in the last quarter of 2029, the Scientific Readiness Level (SRL) needs to advance quickly in order to reach SRL 6 in the second half of 2026. Flying in formation with one of the Sentinel-1 satellites, Harmony will form a multistatic radar system, providing line-of-sight-diverse observations of the surface roughness, Doppler, and intensity spectra, from which O(1 km) surface stress and surface current vectors as well as enhanced estimates of the directional wave spectra can be derived. In addition, each satellite will carry a multi-view Thermal-Infrared system to provide simultaneous observations of Sea Surface Temperature and/or cloud-top height and motion vectors. The measurement principles exploited by Harmony are well established. However, algorithms capable of handling the combination of high product resolution (at least one order of magnitude finer than current scatterometers, for example) and the novel multistatic observation geometry to retrieve the intended L2 products still need to be consolidated and validated. One major challenge that this consolidation effort faces is that, as in many other cases, theoretical forward models, while sufficient to understand the measurement and reliably predict retrieval performances, are insufficient for inversion. Instead, the inversion needs to rely on empirical or semi-empirical geophysical model functions, which need to be derived from observations. A complementary challenge is that these observations do not yet exist and cannot realistically be obtained at the scales of interest before the mission.
This implies that training or tuning of the retrieval algorithms will be part of the mission, which in turn means that reaching the required scientific maturity involves consolidating and validating this training process, which can be seen as part of an extended calibration and validation effort. Over the last few years, we have developed a detailed open-source Scientific Workbench (SWB) to simulate Harmony’s observations. The SWB implements a multistatic extension of the Radar Imaging Model (RIM) and Doppler RIM (DopRIM). Within the limitations of a largely theoretical model framework, RIM and DopRIM capture most of the complexities that modulate the radar observables. Using as inputs the outputs of coupled ocean-atmosphere models or high-resolution Large Eddy Simulations of the marine boundary layer, together with long-wave spectra generated by a wave model (SWAN or WAVEWATCH III), it produces Harmony’s spatially averaged ocean L1 products, which are fed to our L2 retrieval prototype to generate the final geophysical products. Aside from a continuous effort to refine and validate the forward models included in the SWB, current efforts aim at implementing a workflow to ingest semi-operational high-resolution joint ocean-atmosphere model outputs over areas covered by Sentinel-1. This will generate time series of synthetic Harmony observations under varying environmental conditions, including synthetic Sentinel-1 data that will be used to validate and tune the SWB against actual Sentinel-1 observations. After this tuning step, the synthetic data will be used to rehearse and validate the training or tuning of the retrieval algorithms. The presentation will provide an overview of the SWB, including the main principles behind RIM and DopRIM, provide a statistical comparison of the SWB outputs with Sentinel-1 observations, and discuss the first steps towards the demonstration of the retrieval tuning process.

Thursday 26 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Harmony: Mission Overview, Science Products and Roadmap to SRL-6

Authors: Dr. Ir. Paco Lopez-Dekker, Juliet Biggs, Bertrand Chapron, Andy Hooper, Ákos Horváth, Jordi Isern-Fontanet, Andreas Kääb, Gianluigi Liberti, Anton Korosov, Caroline Muller, Claudia Pasquero, Pau Prats-Iraola, Eric Rignot, Ad Stoffelen, Riccardo Lanari, Julienne Stroeve, Björn Rommen
Affiliations: Delft University of Technology, University of Bristol, IFREMER, University of Leeds, University of Hamburg, Institut de Ciències del Mar, University of Oslo, NERSC, CNR-ISMAR, Institute of Science and Technology Austria (ISTA), University of Milano Bicocca, DLR, University of California Irvine, KNMI, CNR-IREA, University of Manitoba, ESA/ESTEC
With the implementation of the Harmony mission steadily advancing towards a planned launch in the last quarter of 2029, the next science milestone is reaching Scientific Readiness Level (SRL) 6 by the end of 2026 in the three main domains of the mission. This contribution will first present a current overview of the mission from a science perspective, starting with a summary of the main mission objectives, followed by a discussion of the mission phases:
- Close-formation phases, implementing a single-pass cross-track SAR interferometer which will allow the generation of dense time series of digital elevation models to study dynamic processes over glaciers and volcanoes, and enabling experimental wide-swath ocean topography (WSOA).
- Stereo phases, optimized for the observation of surface stress vectors and surface currents over oceans, and for the retrieval of 3-D deformations exploiting repeat-pass InSAR over land and in the cryosphere.
Then, an overview of the main intended L2 science products will be provided, including key elements of the product specification (e.g. spatial resolution) and the expected product performance, with the most up-to-date assessment of their geophysical uncertainty. The remainder of the contribution will discuss the path to SRL-6, touching on the main elements required to achieve the required maturity and how these map to the different science-related activities. One of the key elements is the development of a suite of high-fidelity Harmony End-to-End Performance Simulators (HEEPS-SAR, HEEPS-TIR, HEEPS-Terra) to produce the synthetic data needed to test the prototype L2 processors.
These high-fidelity E2E tools will be complemented with higher-level mission emulation tools, needed to produce realistic data sets to, for example, develop and test the multi-directional InSAR processing chain, or to emulate the calibration and validation of the data-driven retrieval algorithms for the ocean-related products. The second central element is the consolidation of the retrieval algorithms, which will be complemented with user preparation activities, campaigns, etc.

Thursday 26 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: HARMONY the Earth Explorer mission to measure changes in Earth’s surface and monitor ocean surface conditions

Authors: Pedro Jose Jurado Lozano, Mrs Florence Heliere, Mr Bjorn Rommen
Affiliations: European Space Agency
Harmony is ESA’s 10th Earth Explorer mission in the Earth Observation Envelope Programme (EOEP). The Harmony mission is primarily devoted to the observation and quantification of small-scale motion and deformation (velocity-gradient) fields covering the ocean surface, glaciers and ice sheets, the solid Earth and clouds. Harmony is therefore a multi-purpose mission addressing key science questions in different domains, enabling data-driven representations of processes in Earth System models, primarily at the air-sea interface, of the solid Earth, and in the cryosphere [1]. The retrieval of kilometre- and sub-kilometre-scale motion vectors requires concurrent observations of their components. Moreover, Harmony's observation concept will offer unique measurements over timescales varying between tens of milliseconds (for measuring ocean currents) and years (for measuring solid Earth motion), covering a variety of surfaces:
1. Over land, Harmony will provide data to estimate small shifts in the shape of the land surface, such as those leading to and resulting from earthquakes and volcanic activity, thereby contributing to the assessment of geohazards over geologically active areas. It will also provide new information to study the 3D deformation and flow dynamics of glaciers at the rapidly changing marginal zones of the ice sheets, for a better understanding of the contribution of ice mass loss to sea-level rise.
2. Over the ocean, Harmony will deliver data to improve our understanding of the upper ocean and of interactions between the lower atmosphere and the ocean surface by providing, for the first time, large-scale simultaneous measurements of wind, waves, surface currents, sea surface temperature differences and cloud motion vectors. The combination of these measurements will allow better understanding and interpretation of upper-ocean small-scale processes and will yield an unprecedented view of the marine atmospheric boundary layer.
In Harmony, this is achieved by a mission concept that exploits passive C-band measurements, using the radar instrument on board Sentinel-1 as a transmitter of opportunity, complemented by thermal-infrared cloud-motion and sea-surface-temperature information. By doing this from different viewing angles, the information content in the backscattered signals is multiplied: motion vectors can be observed in three dimensions, whereas backscatter can be observed in two polarisations and from multiple vantage points. The Harmony space segment consists of two identical satellites orbiting in loose formation with a Copernicus Sentinel-1 radar satellite. Each Harmony satellite carries a payload composed of two instruments:
1. A receive-only C-band Synthetic Aperture Radar (SAR), designed to receive the backscattered signal transmitted by Sentinel-1 and to measure surface velocities from the phase shifts in these signals. This configuration will offer a unique multi-angle viewing geometry by combining a Sentinel-1 radar satellite with two additional bistatic receivers.
2. A multi-view Thermal Infra-Red (TIR) optical instrument with five cameras, complementing the SAR observations with Sea Surface Temperature and Cloud Motion Vector observations.
The Harmony ground segment comprises:
1. A single Payload Data Ground Segment, responsible for the collection, processing, and distribution of instrument data. The Harmony Payload Data Ground Segment will be interfaced with that of Sentinel-1 to exploit the Sentinel-1 data in the Harmony data processing chain.
2. A single Flight Operation Segment, responsible for spacecraft monitoring and control. The Harmony Flight Operation Segment will be coordinated with that of Sentinel-1 to align Harmony operations with those of Sentinel-1.
The Harmony mission is designed to launch both satellites in a single dual-launch configuration on Vega-C by the end of the decade.
This will ensure sufficient overlap with the Sentinel-1 First Generation satellites and secure a 5-year mission lifetime. The Harmony orbit configuration will alternate between two formation configurations to address the different science goals:
1. For most of the mission, the satellites will fly in the so-called Stereo phase, with one Harmony satellite flying ahead of Sentinel-1 and one trailing it. The distances from Sentinel-1 to both Harmony satellites will be on the order of 350 km, maximising the angular diversity between their observations.
2. For certain periods, the mission will reconfigure to the so-called XTI phase. During this time, the two Harmony satellites will fly in a close-formation configuration about 350 km ahead of or behind Sentinel-1. This configuration is optimised for single-pass across-track interferometric observations, from which surface height time series, and therefore height changes, can be derived. The XTI phase periods will be defined to maximize the quality of observations of the slow topography changes occurring between XTI observations.
The presentation will cover the mission objectives, the observational concept, the SAR and TIR instruments and their development status, as well as some initial cal/val concepts and related challenges.

Thursday 26 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Potential Land Applications of the ESA Earth Explorer 10 Bi-static SAR Mission Harmony

Authors: Andreas Kääb, Juliet Biggs, Andrew Hooper, Pau Prats-Iraola, Eric Rignot, Bernhard Rabus, Andreas Benedikter, Helmut Rott, Thomas Nagler, Bjorn Rommen, Paco Lopez-Dekker
Affiliations: Department of Geosciences, University of Oslo, School of Earth Sciences, University of Bristol, COMET, University of Leeds, Microwaves and Radar Institute, DLR, Department of Earth System Sciences, University of California, School of Engineering Science, Simon Fraser University, ENVEO Environmental Earth Observation IT, ESA-ESTEC, Department of Geoscience and Remote Sensing, Delft University of Technology
The Earth Explorer 10 mission Harmony of the European Space Agency (ESA), scheduled for launch around 2029–2030, consists of two passive C-band synthetic-aperture-radar companion satellites flying in a flexible constellation with one Sentinel-1 radar satellite as an illuminator. Sentinel-1 will serve as transmitter and receiver of radar waves, while the two Harmonys will serve as bistatic receivers without the ability to transmit. During the first and last year of the 5-year mission, the two Harmony satellites will fly in a cross-track interferometric constellation, similar to that of TanDEM-X, about 350 km ahead of or behind the assigned Sentinel-1. This constellation will provide 12-day repeat DEMs over, among other regions, most active volcanic, land-ice and permafrost areas. Harmony’s repeat DEMs will serve to better understand cycles of topographic growth, mass transport and collapse at actively erupting volcanoes, and to improve forecasts of the associated geohazards. Over glaciers, the DEMs will support investigating the dynamics of large-magnitude short-term elevation changes, such as those from glacier surges, and will yield a 5-year global glacier mass balance estimate by differencing the DEM stacks of the first and last year of the mission. Most relevant for glaciers, Harmony’s repeat DEMs will be complemented by synchronous lateral terrain displacements from the well-established offset-tracking method. Over permafrost areas, the repeat DEMs will mainly be suitable for large permafrost-related landslides such as thaw slumps. In between the cross-track interferometry phases, one of the Harmony satellites will be moved to the opposite side of Sentinel-1 to form a symmetric bistatic “stereo” constellation with ±~350 km along-track baselines.
In this phase, the mission will provide the opportunity for radar interferometry along three lines of sight, or up to six when combining ascending and descending acquisitions, enabling the measurement of three-dimensional surface motion, for instance related to tectonic strain, volcanic deformation, the submergence and emergence components of ice flow, or the three-dimensional deformation of permafrost surfaces and slow landslides. Such measurements would, for the first time, be available for large areas and are anticipated to provide a number of novel insights into tectonics and into the dynamics and mass balance of a range of mass-movement processes. As a key example, interferometric satellite SAR missions have so far been sensitive only to displacements in roughly the E-W direction: they can measure strain rates for less than half of the tectonic areas on Earth and therefore provide incomplete imaging of earthquakes in certain orientations. Harmony’s diversity of lines of sight will enable measuring the N-S component of deformation and hence complete the global map of tectonic strain and improve global estimates of seismic hazard.
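The gain from line-of-sight diversity can be illustrated with a toy least-squares inversion: given three or more well-separated line-of-sight (LOS) unit vectors and the displacement projected onto each, the full 3-D (E, N, Up) motion is recoverable. The geometry below is invented for illustration and does not represent Harmony's actual viewing angles.

```python
import numpy as np

def invert_3d_displacement(los_vectors, los_displacements):
    """Least-squares inversion of LOS displacements into a 3-D
    surface-motion vector (E, N, Up); the principle behind combining
    multiple viewing geometries."""
    A = np.asarray(los_vectors)        # (m, 3) unit LOS vectors
    d = np.asarray(los_displacements)  # (m,) projections of motion onto LOS
    # With >= 3 well-separated lines of sight, A has full column rank and
    # the 3-D motion is recovered by ordinary least squares.
    m, *_ = np.linalg.lstsq(A, d, rcond=None)
    return m

# Example: a purely northward motion of 0.05 m seen from three synthetic
# lines of sight, one of which has N-S sensitivity.
los = np.array([[0.6, 0.0, 0.8], [-0.6, 0.0, 0.8], [0.0, 0.6, 0.8]])
truth = np.array([0.0, 0.05, 0.0])
d = los @ truth
print(invert_3d_displacement(los, d))  # recovers ~[0, 0.05, 0]
```

With only near-E-W-looking geometries (as for current single-satellite InSAR), the N-S column of the design matrix is nearly zero and the inversion becomes ill-conditioned, which is exactly the gap Harmony's extra lines of sight close.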

Thursday 26 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Stereo 3D cloud-motion and height retrievals with ESA's Harmony mission: the latest status

Authors: Dr. Alexandre Payez, Dr. Ir. Paco Lopez-Dekker, Ad Stoffelen, Björn Rommen
Affiliations: KNMI, Delft University of Technology, ESA/ESTEC
A better understanding of cloud processes is crucially needed to obtain better climate predictions of a hotter planet Earth and to reduce long-standing model biases originating from parametrisations. Indeed, despite the small scales at which cloud processes take place, they affect larger scales, and the overall impact of clouds on the Earth's radiation budget remains a major source of uncertainty, not least because of a lack of resolving observational data. Over the ocean, the now-selected ESA Earth Explorer 10 mission Harmony, which will fly in formation with Sentinel-1, aims at providing much-needed high-resolution (sub-km) observations of physical processes taking place in the marine atmospheric boundary layer. Complementing their main SAR receive-only payload, each of the two identical satellites of this tandem mission will also carry a thermal infrared (TIR) payload consisting of 5 side-looking cameras, with views from fore to aft, which will be used to observe not only the ocean surface but also the clouds. In a nutshell, the idea is that, when flying about two minutes apart in stereo formation, simultaneous and congruent TIR observations from the two Harmony satellites will provide valuable information about 3D cloud motions (cloud motion vectors, CMVs) and positions, with the parallax helping to retrieve cloud-edge heights. While the first objective of the Harmony cloud products is to retrieve macroscopic cloud motions from the 10 views obtained over the ~6 minutes it takes for the two satellites to fly past a given scene, it is also well recognised that the intermediate motions between image pairs contain valuable information, which could be exploited to learn more about cloud processes.
To support the development of retrieval algorithms, sophisticated end-to-end simulators were developed during the selection of the mission; these can be used to generate highly realistic synthetic observations of cloud-resolving large-eddy simulations. In particular, approaches involving optical-flow algorithms from computer vision were introduced towards the end of Phase A and proved promising for retrieving the cloud products. This talk will present the activities since Phase A and give the latest status of the cloud retrievals with Harmony, which will continue to be developed in the coming years.
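The stereo retrievals rely on far more sophisticated optical-flow processing, but the underlying idea of estimating a bulk displacement between two co-registered TIR frames acquired ~2 minutes apart can be sketched with simple phase correlation. The function below is an illustrative stand-in, not part of the Harmony processor; all names and values are assumptions.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer (dy, dx) displacement of img_b relative to img_a
    via phase correlation: the inverse FFT of the normalised cross-power
    spectrum peaks at the displacement."""
    cross = np.conj(np.fft.fft2(img_a)) * np.fft.fft2(img_b)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image size wrap around to negative displacements
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return int(dy), int(dx)
```

A displacement in pixels would convert to a cloud-motion vector by multiplying by the ground sampling distance and dividing by the time separation of the two frames (e.g. v = shift_px * gsd_m / dt_s, with hypothetical values).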

Thursday 26 June 16:15 - 17:45 (Hall F1)

Session: D.02.03 Approaches, accuracies, applications: next generation large-area land change monitoring - PART 2

Land cover data are used by various user groups to better understand current landscapes and the human impact on them. Besides being a core information layer for a variety of scientific and administrative tasks, land cover and change data help organizations, stakeholders, politicians and individual users to assess urban growth and trends in forest change, assist in land management and planning, track wetland losses and potential impacts of sea level rise, prioritize areas for conservation efforts, and assess the impact of climate change on socio-economic and socio-ecological systems as well as on ecosystem services.

In recent years, several global and continental land cover and change datasets have been developed at increasingly high resolutions. These datasets, built from higher-spatial-resolution data (e.g. Sentinel-1 and Sentinel-2, Landsat, but also very high-resolution imagery), can significantly improve the characterization of the Earth's land surface dynamics and provide land cover and change assessments at finer spatial scales and with greater thematic detail. The newly available data, in combination with more powerful computing infrastructure and advances in deep learning and artificial intelligence, allow for on-demand, adaptable land cover monitoring and applications.

The session provides a forum for the different players in this field to present the latest approaches, including advanced methods for land cover change analysis (incl. AI) and near-real-time land cover monitoring, new thematic detail moving towards land use change monitoring, and evolving large-area programs and services and how they support different users at global to national levels.


Thursday 26 June 16:15 - 17:45 (Hall F1)

Presentation: Monitoring deforestation in the Brazilian Amazon: transitioning from visual interpretation to satellite image time series analysis

Authors: Dr Isabel Escada, Dr Anielli Souza, Dr Ana Paula Dal'Asta, Dr Ana Rorato, Felipe Souza, Felipe Carlos, Dr Rolf Simoes, Gilberto Camara
Affiliations: National Institute For Space Research (inpe), Brazil, Menino Software Crafter, Open Geo Hub Foundation
Since 1988, the Brazilian National Institute for Space Research (INPE) has mapped deforestation in the Brazilian Amazon using satellite images through the PRODES system [1]. PRODES maps have an estimated 93% accuracy [2]. The system relies on visual interpretation of single-date images, which requires substantial human resources and takes a long time. To improve this situation, researchers have proposed machine learning approaches [3-4]. Despite promising results, none of the published papers achieved the PRODES accuracy. This study introduces an automated version of PRODES to measure deforestation in the Brazilian Amazon. Our study area is the state of Rondonia, which covers 237,576 km². According to PRODES, almost half of the original forest has been removed, and most of the remaining natural areas in the region are part of protected areas. The work relies on a “time-first, space-later” approach that takes image time series as the first step in analysing remote sensing data. Time series classification produces a matrix of probability values for each class. In the “space-later” part of the method, a Bayesian smoothing algorithm improves these classification results by considering each pixel's spatial neighbourhood, so that the resulting map combines spatial and temporal information [5]. The data set consists of Sentinel-2 ARD images for 2021 and 2022. Using all ten spectral bands, we produced a regular data cube with a 16-day interval and 23 instances per year. A critical part of the work was selecting training samples. When performing visual interpretation, photo interpreters rely on their experience and knowledge. Similarly, a semi-automatic mapping process should be based on careful selection and analysis of training samples to ensure that machine learning-based tools achieve a discriminative capacity similar to that of photo interpreters. We built a set of event-based training samples, where “event” refers to occurrences associated with changes in land cover.
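The "space-later" smoothing step can be illustrated with a simplified sketch: average the class logits over each pixel's spatial neighbourhood, renormalise, and re-take the most likely class. This is a schematic stand-in for the Bayesian smoother of [5] implemented in sits, not its actual algorithm; window size and variable names are assumptions.

```python
import numpy as np

def smooth_probs(probs, win=3, eps=1e-6):
    """Average class logits over a win x win spatial window, renormalise,
    and return the most likely class per pixel plus the smoothed cube.
    `probs` is a (rows, cols, n_classes) array of class probabilities."""
    logits = np.log(np.clip(probs, eps, 1.0 - eps))
    pad = win // 2
    padded = np.pad(logits, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    acc = np.zeros_like(logits)
    for dy in range(win):                       # sum the win*win shifted views
        for dx in range(win):
            acc += padded[dy:dy + probs.shape[0], dx:dx + probs.shape[1], :]
    acc /= win * win
    smoothed = np.exp(acc)
    smoothed /= smoothed.sum(axis=-1, keepdims=True)
    return smoothed.argmax(axis=-1), smoothed
```

An isolated low-confidence pixel surrounded by confidently classified neighbours is pulled towards the neighbourhood consensus, which is the effect the abstract attributes to the smoothing step.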
For example, deforestation is a process marked by different events. First, part of the forest area is removed by logging; then, poachers use surface fire to degrade the forest and make it easier to remove large trees [6]. The process continues with the manual or truck-driven removal of large trees, possibly followed by further fires to clean the area [7]. We associate these events with different kinds of classes to be detected by time series classification. Three types of events were considered: (a) deforestation measured by breaks in the time series, (b) natural classes with seasonal variation, and (c) stable natural classes. An example of the first case is a time series that begins with the response of a stable forest cover and is interrupted by a signal related to a forest fire. In the second case, we have seasonally flooded wetlands, whose signals have predictable seasonal patterns. Seasonally variable classes are often confused with deforestation areas when working with single-date images. Sample collection requires multiple iterations of collection, selection, and analysis. These iterations used self-organising maps (SOM) [8] and active learning [9]. SOM helps screen out noisy individual samples and produces clean clusters for each class. Active learning is an iterative strategy for optimising the collection of training samples. We started with 1,770 training samples. At each round, we analysed the data using SOM to remove outliers; we then classified the area and computed uncertainty maps to define critical areas for new sample collection. We trained a model at each iteration, produced uncertainty maps, and selected and added samples to the training set. This procedure was repeated until we obtained a final set of 6,007 training samples with low noise and high validity. The samples were used to classify Rondonia using a random forest algorithm applied to multi-dimensional time series.
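The uncertainty-driven sample selection at the heart of the active-learning rounds can be sketched as margin-based uncertainty sampling. This is a generic illustration of the technique; the study's actual uncertainty criterion may differ, and the function name is an assumption.

```python
import numpy as np

def select_uncertain(probs, k):
    """Margin-based uncertainty sampling: return the indices of the k pool
    samples whose top-two class probabilities are closest together, i.e.
    where the classifier is least confident."""
    sorted_p = np.sort(probs, axis=1)           # per-sample probs, ascending
    margin = sorted_p[:, -1] - sorted_p[:, -2]  # best minus second-best
    return np.argsort(margin)[:k]

# One active-learning round would then be: train a model, predict class
# probabilities on the unlabelled pool, call select_uncertain(), and send
# the selected samples for labelling before retraining.
```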
The critical forest change classes of clear-cuts with bare soil, burned areas, and clear-cuts with remaining vegetation had F1 scores of 89%, 91% and 83%. When these classes were combined to produce forest/non-forest maps, we obtained overall accuracies of 98.8% for 2021 and 94% for 2022, as measured by comparison with PRODES. This is the first time Earth observation data analysis has reached the accuracy obtained by PRODES in an operational setting. This work was done using the R open source software “sits”, which is available on CRAN and at https://github.com/e-sensing/sits, supported by an online book at https://e-sensing.github.io/sitsbook/. The samples and scripts used in the work are available at https://github.com/e-sensing/rondonia20LMR.
References
[1] Yosio Shimabukuro et al. “The Brazilian Amazon Monitoring Program: PRODES and DETER Projects”. In: Global Forest Monitoring from Earth Observation. Boca Raton, FL: CRC Press, 2013, p. 354.
[2] Luis Maurano et al. “Spatial patterns of deforestation and accuracy estimates of PRODES maps for the Brazilian Amazon (in Portuguese)”. In: Ciência Florestal 29 (2019), pp. 1763–1775. ISSN: 0103-9954, 1980-5098.
[3] Mabel Adarme et al. “Evaluation of Deep Learning Techniques for Deforestation Detection in the Brazilian Amazon and Cerrado Biomes From Remote Sensing Imagery”. In: Remote Sensing 12.6 (2020), p. 910.
[4] Fabien H. Wagner et al. “Mapping Tropical Forest Cover and Deforestation with Planet NICFI Satellite Images and Deep Learning in Mato Grosso State (Brazil) from 2015 to 2021”. In: Remote Sensing 15.2 (2023), p. 521. ISSN: 2072-4292.
[5] Rolf Simoes et al. “Satellite Image Time Series Analysis for Big Earth Observation Data”. In: Remote Sensing 13.13 (2021), p. 2428.
[6] Daniel Nepstad et al. “Large-Scale Impoverishment of Amazonian Forests by Logging and Fire”. In: Nature 398.6727 (1999), pp. 505–508. ISSN: 0028-0836, 1476-4687.
[7] Jeffrey Gerwing. “Degradation of Forests through Logging and Fire in the Eastern Brazilian Amazon”. In: Forest Ecology and Management 157.1 (2002), pp. 131–141. ISSN: 0378-1127.
[8] Lorena Santos et al. “Quality Control and Class Noise Reduction of Satellite Image Time Series”. In: ISPRS Journal of Photogrammetry and Remote Sensing 177 (2021), pp. 75–88. ISSN: 0924-2716.
[9] Robert Monarch. Human-in-the-Loop Machine Learning. Shelter Island, NY: Manning Publications, 2021.

Thursday 26 June 16:15 - 17:45 (Hall F1)

Presentation: ESA Ulysses Project and the Use of Soil Sealing Products and Indicators in the Mediterranean Region

Authors: Dr. Daniela Iasillo, Daniela Drimaco, Dr. Paola Di Lauro, PhD Vito De Pasquale, Mrs Teresa Fazio, PhD Walter De Simone, PhD Valentina Santarsiero, PhD Luca Congedo, PhD Michele Munafò, Mr. Lorenzo Stamenkovic, Loic Faucqueur, Mr. Olivier Arino
Affiliations: Planetek Italia s.r.l., ISPRA, CLS - Collect Localisation Satellites, ESA ESRIN
The ESA Ulysses project addresses strategic challenges related to environmental monitoring and sustainable development in the Mediterranean region, with a particular focus on soil sealing. Soil sealing, the process whereby natural soil surfaces are covered by impermeable materials such as asphalt and concrete, has become a significant environmental problem in many Mediterranean countries. It is associated with soil loss, defined as a change from non-artificial to artificial soil cover, and contributes to various environmental problems, such as an increased risk of flooding, urban heat islands, reduced biodiversity, and the exacerbation of climate change impacts. In this context, the exploitation of soil sealing products and indicators developed under ESA's initiatives plays a crucial role in supporting the assessment, monitoring, and management of soil sealing in Mediterranean coastal areas. The Ulysses project, funded by the European Space Agency (ESA), is part of a broader effort to harness advanced Earth observation (EO) technologies to better understand and manage environmental issues in the Mediterranean. The main objective of the initiative is to improve the monitoring of soil sealing dynamics, urbanization, and their associated environmental impacts within the Mediterranean basin, helping regional and local authorities develop and implement effective land management policies that mitigate the negative effects of soil sealing. The Ulysses project develops a methodology to monitor the degree of soil sealing using Copernicus Sentinel-2 images, focusing on Mediterranean coastal areas (within 20 km of the coast) at a spatial resolution of 10 meters. In a second stage of the project, the degree of sealing will also be calculated at the national scale for four countries: Italy, France, Spain and Greece, considering the entire territorial extent of each.
Using satellite imagery, geospatial technologies, and sophisticated analytical tools, the project generates valuable insights into soil sealing patterns and trends over time. It provides crucial data on the extent of impervious surfaces and their growth, allowing stakeholders to better understand the local and regional impacts of soil sealing on ecosystems, human settlements, and climate conditions. The overall objective is to enable more informed decision-making processes regarding sustainable land use, urban planning and environmental conservation in Mediterranean areas. Soil sealing is a driving factor in environmental degradation; with increasing urbanization and infrastructure development, vast areas of natural land are being converted into impermeable surfaces. To address this problem, the Ulysses project generates a series of products and indicators to help monitor and quantify the impact of soil sealing. These products are designed for use by a range of stakeholders, including government agencies, municipalities, urban planners and environmental organizations. The indicators developed offer detailed insights into soil sealing dynamics. The "Soil Seal Degree" indicator provides detailed data on the level of impermeability in a given area, quantifying the percentage of land surface that has been transformed into impermeable surfaces due to urbanization or infrastructure development. This information is vital for understanding the extent of soil sealing in specific regions and for tracking its changes over time. Additionally, products that provide "Percent Waterproofing at the Regional Level" offer insights into the broader scale of soil sealing in Mediterranean regions, quantifying impermeable surfaces within larger geographic areas. This helps evaluate regional impacts and identify trends that may affect urban planning, flood management, and environmental policies. 
In coastal areas, which are particularly vulnerable to soil sealing, products measuring the "percentage of sealing in coastal areas" allow a detailed assessment of the impact of urbanization and infrastructure development on these sensitive areas. These indicators focus on specific buffers around coastal areas, highlighting the extent of sealing and enabling the monitoring of ecosystems. The project also provides data on "soil sealing rates at national level", offering an aggregated view of soil sealing trends in the analyzed countries. These data help national policymakers and authorities make informed decisions on urban development and environmental conservation. The project also analyses soil sealing trends across the whole of Italy, France, Spain and Greece, within a broader framework of analyzing the dynamics of urban and natural areas at different levels, using indicators to assess the characteristics and trends of soil sealing and the resulting urban growth and landscape evolution, and providing assessments of the impact of sealing. The use of the project's output products and indicators is essential for measuring the effects of rapid urbanization, infrastructure expansion and climate change on natural resources. Local municipalities and urban planners can use these products to guide informed land-use decisions. By understanding the extent and distribution of soil sealing, they can implement urban planning regulations that protect natural areas, encourage green infrastructure and reduce the proliferation of impervious surfaces. These measures help create urban environments that are more resilient to climate change and environmental degradation. In the context of environmental conservation and biodiversity protection, soil sealing can lead to the loss of valuable habitats and ecosystems. Using these indicators, conservation organizations can identify areas in need of protection and work to restore habitats affected by urbanization.
These tools are also useful for tracking changes in biodiversity, enabling informed efforts to conserve and restore ecosystems in the region. The indicators, therefore, can be integrated with a quantitative analysis of some ecosystem services and with the use of ecological quality proxies that can be analyzed before and after the implementation of the intervention, in order to verify its status, which should be unchanged or improved. The principle of invariance or improvement should be applied to all indicators, without a hierarchical structure and without compensation between different indicators. Soil sealing also increases the vulnerability of urban areas to natural disasters, particularly flooding. By monitoring soil sealing dynamics and correlating them with disaster risks, authorities can develop targeted disaster risk reduction strategies. These strategies can include the creation of sustainable drainage systems, the expansion of green spaces, and stricter regulations on impermeable surfaces in flood-prone areas. Additionally, national and international policymakers can use the data from the Ulysses project to craft regulations that promote sustainable land use practices. By understanding trends in soil sealing, they can establish targets for reducing impervious surfaces, promote green infrastructure, and incentivize the rehabilitation of sealed areas. Furthermore, the data generated by the Ulysses project can help raise public awareness about the environmental consequences of soil sealing. Through outreach programs and public information campaigns, local communities can be engaged in efforts to reduce soil sealing and adopt sustainable practices. The Ulysses project is not an isolated effort; it integrates with other environmental monitoring initiatives in the Mediterranean region, creating synergies between different datasets and approaches. 
By combining soil sealing data with other environmental indicators, such as air quality, water management, and land use change, the project provides a more holistic understanding of environmental dynamics. This integrated approach enhances the ability of stakeholders to address multiple environmental challenges simultaneously, leading to more effective management and conservation efforts. Additionally, the project contributes to the development of climate resilience strategies by identifying areas where soil sealing may worsen the impacts of climate change, such as increased temperatures and water scarcity. By integrating soil sealing data with climate models, authorities can better prepare for future challenges and ensure that cities and communities are resilient to the changing climate. In conclusion, the ESA Ulysses Project plays a vital role in monitoring and managing soil sealing in the Mediterranean region. By providing essential data and products, the project helps mitigate the environmental impact of impervious surfaces and supports sustainable land use, urban planning, and disaster risk management. The exploitation of soil sealing products and indicators in the Mediterranean area is a crucial step toward protecting coastal ecosystems, promoting sustainable development, and enhancing climate resilience. As urbanization and climate change continue to place pressure on the region, the Ulysses Project offers indispensable tools for building a more sustainable and resilient Mediterranean future.

Thursday 26 June 16:15 - 17:45 (Hall F1)

Presentation: Automated and Continuous Change Detection for Situational Awareness

Authors: Lauri Häme, Juho Häme, Joni Norppa, Tuomas Häme
Affiliations: Terramonitor
We have developed an automated system which comprises the entire chain for continuous detection and identification of rapid changes in the environment. The steps of the system are:

1. Selection of a time series of satellite data from the area of interest for a user-defined time period and maximum cloud percentage, as given in the image metadata. We have so far used Sentinel-2 Level-2 reflectance imagery, but the method is applicable to other image types.
2. Splitting the raw image data into square tile images based on the Tile Map Service (TMS) specification.
3. Classification of rapid changes between successive image pairs using time-series change detection. The approach calculates the pixel-wise minimum change magnitude over a set of pairwise change comparisons: instead of comparing one image before the change with one image after it, we cross-compare, for example, two images before the change with two images after it. A quality score for each change is calculated using the random forest algorithm; in the initial phase, the model was trained with limited data on rapid changes.
4. Vectorization of the detected changes.
5. Automatic visualization of each detected change using before and after images of the area. The quality score computed for the change is displayed in the user interface, and the user presses a thumbs-up or thumbs-down button to mark the detection as correct or incorrect. When the system is first applied, it outputs a high number of false positives, caused in particular by clouds or by changes in land cover classes that are not of interest for the chosen task. All classification assessment results are stored in a database and used for further training of the model, which lowers the quality score of false classifications. The user can set the quality-score threshold to the level that best serves their analysis task. We have collected a database of approximately 5,000 training observations on rapid changes during several years of applying the system in Finland and Sweden.
6. Integration of additional data with the changes; for example, tree height based on airborne laser scanning data and main tree species information from national forest inventory maps have been implemented.
7. Transfer of the change polygons into a GIS system, where the shape of each change can also be analyzed.

As a result, the detected changes are shown as a GIS layer together with the attribute data. The system has been used operationally, among other things, for mapping clear cuts and for identifying points near power line corridors with a high probability of damage to power lines from falling trees. The satellite image analysis results have also been combined with a tree map from airborne laser scanner data to provide tree height information. The application areas range from forestry and land cover to construction and security.
References
Häme, T., Sirro, L., Kilpi, J., Seitsonen, L., Andersson, K. & Melkas, T., 29 May 2020, In: Remote Sensing. 12, 11, 1751.
Pitkänen, T. P. (Corresponding Author), Sirro, L., Häme, L., Häme, T., Törmä, M. & Kangas, A., Apr 2020, In: International Journal of Applied Earth Observation and Geoinformation. 86, 9 p., 102011.
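The cross-comparison in step 3, taking the pixel-wise minimum change magnitude over several before/after image pairs, can be sketched as below. This is an illustration of the stated principle only; the operational system adds a random-forest quality score on top, and the function name is an assumption.

```python
import numpy as np

def min_change_magnitude(before, after):
    """Cross-compare every (before, after) image pair and keep the smallest
    per-pixel change magnitude. A transient artefact (cloud, shadow) present
    in only one image yields a small magnitude in at least one pair and is
    suppressed; a persistent change survives in all pairs.
    `before` and `after` are lists of (rows, cols, bands) arrays."""
    mags = [np.linalg.norm(a - b, axis=-1) for b in before for a in after]
    return np.min(mags, axis=0)
```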

Thursday 26 June 16:15 - 17:45 (Hall F1)

Presentation: EU CROP MAP 2022: A 10-M Resolution Crop Type Map for the European Union and Ukraine

Authors: Babak Ghassemi, Emma Izquierdo-Verdiguier, Astrid Verhegghen, Momchil Yordanov, Guido Lemoine, Álvaro Moreno Martínez, Davide De Marchi, Marijn van der Velde, Francesco Vuolo, Raphaël
Affiliations: Institute of Geomatics, University of Natural Resources and Life Sciences (BOKU), Joint Research Centre (JRC), European Commission, SEIDOR Consulting S.L., Image Processing Laboratory (IPL), Universitat de València, Catedrático A. Escardino,
Approximately 42 per cent of the European Union (EU) land area is devoted to agriculture, supporting the EU in being the world's largest exporter of agri-food products. Policies are being developed to promote sustainable crop production, which requires up-to-date and accurate maps to provide baseline information for assessment and monitoring. Crop type maps facilitate the monitoring of crop growth, health, and yield estimation, and allow for informed decisions on the timing of planting, fertilization, and pest management. Additionally, Land Use Land Cover (LULC) information, in combination with factors such as soil quality and hydrological conditions, can be used to assess the suitability of land for specific crops or agricultural practices. LULC maps also support agroecosystem models that simulate crop growth and assess the effects of various land management practices, and they contribute to analyzing historical and future land-use changes and evaluating their impact on crop production. This work introduces an open-access 10-metre resolution crop type map for the 27 EU countries (EU-27) and Ukraine with 19 specific crop types for 2022, updating the previous EU CROP MAP 2018. The map was produced from Earth observation data and in-situ data from Eurostat's Land Use and Coverage Area Frame Survey (LUCAS) 2022, using a comprehensive methodology that combines 134,684 LUCAS Copernicus field-surveyed polygons, Sentinel-1 and -2 satellite imagery, land surface temperature, and a digital elevation model. A Random Forest machine learning approach led to the development of two classification models: the Primary Map Generation model, which produces the main map, and the Gap-Fill model, which addresses cloud-contaminated gaps. The resulting EU-27 LULC map has an overall accuracy of 79.3% for seven major land cover classes and 70.7% for all 19 crop types.
The producer’s accuracy for several key crop classes shows remarkable results: Soft Wheat (85.5%), Maize (90.5%), Potato (70.9%), Sugar Beet (85.4%), Sunflower (78.0%) and Rape and Turnip Rape (68.3%). The trained model was used to infer the LULC map for Ukraine in 2022, demonstrating its robustness even in regions without labelled samples for model training by validating its results using third-party independent data. This comprehensive LULC mapping not only provides a benchmark for agricultural monitoring but also opens up opportunities for diverse applications, from climate change modelling to precision farming and policy formulation.
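The overall and producer's accuracies quoted above follow the standard confusion-matrix definitions, which can be computed as in this generic sketch (not the authors' validation code; the row/column convention is an assumption):

```python
import numpy as np

def accuracy_metrics(conf):
    """Overall accuracy and per-class producer's accuracy from a confusion
    matrix whose rows are reference (true) classes and whose columns are
    mapped (predicted) classes."""
    overall = np.trace(conf) / conf.sum()
    producers = np.diag(conf) / conf.sum(axis=1)  # a.k.a. per-class recall
    return overall, producers
```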

Thursday 26 June 16:15 - 17:45 (Hall F1)

Presentation: Large-scale monitoring of cropland trees through individual tree tracking

Authors: Dr. Florian Reiner, Dr. Dimitri Gominski, Prof. Dr. Rasmus Fensholt, Dr. Martin Brandt
Affiliations: University of Copenhagen
Scattered trees on croplands, such as in agroforestry systems, provide a wide range of ecological and socio-economic benefits and play a crucial role in food security and climate resilience in the Global South. However, these non-forest trees are increasingly threatened by climate change and rising human pressures. The dynamics of agroforestry systems and national cropland tree stocks are largely unknown, as no robust monitoring systems currently exist to remotely detect single field trees and track changes at national scales. To address this gap, we present a framework to track cropland trees at the single-tree level across multiple years, using a combination of very high resolution satellite imagery, deep learning, and object-based change classification. The approach matches annual tree centre predictions to detect changes, such as individual tree losses from logging or tree mortality events. The framework is designed for PlanetScope nano-satellite imagery, whose combined high spatial and temporal resolution offers unprecedented opportunities for detailed tree monitoring. We demonstrate the framework by applying it to a national-scale case study of cropland trees in Tanzania from 2018 to 2022. Furthermore, we extend the framework to sub-continental scale by tracking 0.6 billion large farmland trees in India over a period of 12 years, using additional RapidEye imagery to extend the analysis back to 2010. We show that around 11 ± 2% of the large trees (about 96 m² crown size) mapped in 2010/2011 had disappeared by 2018. Moreover, during the period 2018–2022, more than 5 million large farmland trees (about 67 m² crown size) vanished, due partly to altered cultivation practices, where trees within fields are perceived as detrimental to crop yields.
These observations are particularly unsettling given the current emphasis on agroforestry as a pivotal natural climate solution, playing a crucial role in both climate change adaptation and mitigation strategies, in addition to being important for supporting agricultural livelihoods and improving biodiversity.
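The matching of annual tree-centre predictions can be illustrated with a simple nearest-neighbour rule: a tree mapped in one year with no detection within a tolerance radius the following year is flagged as a loss. This is a hypothetical simplification of the paper's object-based change classification; the function name and distance threshold are assumptions.

```python
import numpy as np

def lost_trees(centres_t0, centres_t1, max_dist=3.0):
    """Flag year-0 tree centres with no year-1 detection within `max_dist`
    (same units as the coordinates) as losses. Brute-force nearest
    neighbour; fine for a sketch, a KD-tree would be preferred at scale."""
    if len(centres_t1) == 0:
        return np.ones(len(centres_t0), dtype=bool)
    d = np.linalg.norm(centres_t0[:, None, :] - centres_t1[None, :, :], axis=-1)
    return d.min(axis=1) > max_dist
```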

Thursday 26 June 16:15 - 17:45 (Hall F1)

Presentation: BSRLC+: The first annual 30-m land cover maps with detailed crop types and peatlands in the Baltic Sea region over two decades (2000 – 2022)

Authors: Dr Vu-Dong Pham, Farina de Waard, Fabian Thiel, Dr. Bernd Bobertz, Christina Hellman, Duc-Viet Nguyen, Felix Beer, Dr. Marcel Schwieder, Jörg Hartleib, JProf. Dr. David Frantz, Dr Sebastian van der Linden
Affiliations: Institute of Geography and Geology, University of Greifswald, Partner in the Greifswald Mire Centre, Interdisciplinary Centre for Baltic Sea Region Research (IFZO), University of Greifswald, Thünen Institute of Farm Economics, Geography Department, Humboldt-Universität zu Berlin, Geoinformatics – Spatial Data Science, Trier University
Land use and land cover (LULC) information is critical for assessing dynamic processes on the Earth’s surface and understanding human-related activities. In Europe, the generation of LULC products at large spatial scales, such as national or continental levels, and over decadal periods has been a priority for monitoring rapid regional changes. In this context, LULC maps such as the CORINE Land Cover (CLC) provided by the Copernicus Land Monitoring Service have been pioneering efforts to deliver long-term, reliable, and thematically detailed LULC datasets for the whole of Europe. Initiated in 1990, these maps have been consistently updated every six years since 2000. Following the classification framework of CLC products, recent advancements have focused on producing more refined LULC maps, characterized by higher spatial resolution or more frequent temporal updates. Despite these advancements, CLC and similar LULC products have limited class catalogs, which hampers their suitability for monitoring the dynamics of specific crop types or peatland vegetation. For instance, while the CLC product contains 44 thematic classes, there is no finer differentiation of agricultural lands, which account for around 42% of Europe’s total land area. In this study, we introduce the Baltic Sea Region Land Cover Plus (BSRLC+), a multi-temporal annual land cover dataset that spans two decades (2000–2022) and covers the entire Baltic Sea region in northern Europe. This dataset provides detailed maps with a spatial resolution of 30 m and a thematic detail of eighteen classes, including eight general land cover types, eight major crop types and grasslands, and three classes for wetlands to further describe peat bog-related land use. Representing the first homogenized multi-temporal annual dataset for this region, the BSRLC+ addresses critical gaps in existing LULC products, including limited detail on crop rotation patterns and peat bog exploitation.
To develop these maps, we employed our recently developed Temporal Encoding method alongside data augmentation techniques (Pham et al., 2024a, DOI: 10.1016/j.jag.2024.103867). This approach leverages the transferability of crop type mapping using Landsat and Sentinel-2 data: it was designed to address the challenge of limited historical crop reference data by characterizing crop phenological patterns from recent years and adapting them to the sparser temporal information available from satellite data in earlier periods. The BSRLC+ maps were rigorously validated and assessed using independent field survey data from the Land Use/Cover Area Frame Survey (LUCAS) and expert annotations derived from high-resolution imagery. The validation process and results are comprehensively documented in a peer-reviewed scientific journal (Pham et al., 2024b, DOI: 10.1038/s41597-024-04062-w). All map data are publicly available, and an interactive web application for BSRLC+ is scheduled for release by the time of LPS2025.

Thursday 26 June 16:15 - 17:45 (Room 1.34)

Session: F.03.01 Commercial EO missions data for marine and land applications - Part 2

EO data plays a crucial role in marine and land applications, providing vital information for environmental monitoring, biodiversity, disaster management, hydrology, urban planning, agriculture and forestry, oceanography and coastal management, and sea-ice monitoring. By leveraging commercial EO data, these applications can enhance decision-making processes, support sustainable development and help monitor the implementation of EU policies.
This session has been structured into three thematic parts (marine, land, and multi-domain) to better highlight the diverse applications of commercial Earth Observation (EO) data. Each part will feature presentations from data and satellite owners, with a particular focus on commercial data providers, illustrating the complementary nature of commercial EO data with other satellite missions (ESA and non-ESA), including the Sentinel missions. The three sessions (Parts 1, 2 and 3) also aim to exchange experiences on applications powered by commercial EO data, and the presentations will illustrate the commercial data offer, imaging capabilities and use cases.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.34)

Presentation: Improving Maritime Situational Awareness in the Arctic with the COSMO-SkyMed Constellation

Authors: Mr Maurizio Demarte, Mr Roberto Nardini, Mr Giuseppe Aquino, Mrs Donatella Giampaolo, Mrs Federica Pieralice, Mr Filippo Britti, Adriano Benedetti Michelangeli, Mr Vittorio Gentile
Affiliations: e-Geos, Istituto Idrografico della Marina
The Arctic region presents a complex and evolving scenario that has recently drawn significant attention due to escalating geopolitical tensions and emerging environmental, political, and economic dynamics. In the near future, climate change, coupled with substantial investments in the region, will likely render the Arctic more accessible. The opening of new shipping routes for commercial vessels may revolutionize global trade patterns. Furthermore, the Arctic's strategic position between major powers, along with its abundant resources—such as minerals, oil and gas, food, and freshwater—combined with low electricity costs, positions it as a critical area of interest. However, these opportunities also bring challenges. The "High North, Low Tension" approach historically pursued by Arctic nations will become increasingly important in maintaining stability, not only within the region but also for Europe, as it seeks to manage the interplay between regional development and global security concerns. The HIGH NORTH Program is the Arctic research program of the Hydrographic Institute of the Italian Navy, focusing on the study of the Arctic environment, hydrographic surveying, and correlated climate-change impacts on the Arctic region. Since 2017, the missions have explored northern maritime latitudes during the summer seasons, conducting in-situ surveys and measurements to assess water and ice status and quality. The crew of the NATO-owned vessel Alliance is experienced in operating in very difficult environments, especially at high latitudes, where the shape of the ice pack changes quickly and floating sea ice can be a danger to navigation. 
Since 2018, e-GEOS has officially collaborated with the Italian Navy Hydrographic Institute (IIM), as an industry partner within the IIM HIGH NORTH Program, to jointly develop and test new services and methodologies to support Arctic navigation under the project ARNACOSKY (ARctic NAvigation with COsmo-SKYmed). In this role, e-GEOS provides value-added information products to support the Alliance crew in defining cruising routes. Specifically, the delivered value-added products were: - Ice Concentration map: the product can be generated from a single radar acquisition and is expressed as the percentage of ice cover. The ice concentration is estimated by applying a threshold to discriminate open water from sea ice, and then counting the neighbouring ice pixels in a moving window around each pixel. Sea ice concentration is measured on a scale from 0% (open water) to 100% (full ice cover), classified into five equal classes. The final spatial resolution depends on both the original SAR image resolution and the window used for calculating the ice percentage. A land mask is applied to discriminate between bright pixels of ice and land. - Ice Drift Maps: the product is generated from a pair of radar images acquired at a short time interval, up to a few hours, which allows similar features to be tracked between the images using a cross-correlation algorithm and the spatial movement of objects (i.e. sea ice blocks/icebergs) to be calculated. Given the time between the acquisitions and the measured shift, a vector field of drift intensity and direction can be extracted, which may not be spatially uniform due to differing wind and current fields. Generated products are delivered in GeoPDF format, which can be opened with any GIS (geographic information system) software. 
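For illustration, the threshold-and-moving-window ice concentration step described above can be sketched in a few lines of Python; the grid values, threshold and window size below are illustrative assumptions, not the operational e-GEOS parameters:

```python
# Hedged sketch of the ice concentration computation: threshold a backscatter
# grid into ice/water, then express each pixel's concentration as the percent
# of ice pixels in a surrounding moving window. Illustrative values only.

def ice_concentration(backscatter, threshold, window=1):
    """Percent ice cover per pixel from neighbours in a (2*window+1)^2 box."""
    rows, cols = len(backscatter), len(backscatter[0])
    ice = [[1 if v >= threshold else 0 for v in row] for row in backscatter]
    out = []
    for r in range(rows):
        out_row = []
        for c in range(cols):
            total = count = 0
            for dr in range(-window, window + 1):
                for dc in range(-window, window + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        total += 1
                        count += ice[rr][cc]
            out_row.append(100.0 * count / total)  # 0% open water .. 100% full ice
        out.append(out_row)
    return out

def concentration_class(pct):
    """Bin a 0-100% concentration into five equal classes (1..5)."""
    return min(4, int(pct // 20)) + 1
```

A larger window trades spatial resolution for a smoother concentration estimate, which matches the note above that the final product resolution depends on both the SAR image resolution and the window size.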
Both products and the associated services were obtained by processing Synthetic Aperture Radar data from the spaceborne missions ESA Copernicus Sentinel-1 (C-band SAR) and, thanks to the involvement of the Italian Space Agency, COSMO-SkyMed (X-band SAR). It is worth noting that, for the latter mission, the data exploitation activities also required accurate satellite collection planning according to the vessel positions, which were shared daily by the Alliance crew with the e-GEOS operations team and forecast with respect to the expected local passage times of the orbiting platforms. Thanks to the geographical position of the area of interest relative to the orbit configuration of the selected constellations, very frequent observations could be provided, improving the temporal resolution of situational awareness. Indeed, given the rapidly changing nature of the Arctic scenario, observation frequency and the information age of the final products are crucial elements of navigation support in such environments. To achieve this, close collaboration between the IIM and the e-GEOS technical team was essential. This partnership ensured the proper scheduling of COSMO-SkyMed First and Second Generation SAR acquisitions, which were planned daily based on the shared coordinates and the evolving sea ice scenario. Along with the operational task of navigation support, the collaboration also foresaw an additional task addressing ice classification. To test and improve classification capabilities and ice-pack characterization, collected ground ice samples were analyzed jointly with COSMO-SkyMed Second Generation dual- and quad-polarimetric SAR data. We anticipate that these services will play a crucial role in supporting increased navigation in the Arctic, enabling safer and more efficient operations while mitigating the associated risks.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.34)

Presentation: Using Copernicus Contributing Missions for Daily High Resolution Monitoring of Icebergs

Authors: Keld Qvistgaard
Affiliations: Danish Meteorological Institute
Copernicus Marine Services has a long history of monitoring icebergs in the Greenland area using Copernicus satellites, in support of shipping safety and the provision of iceberg products. The Sentinel-1 satellites are the workhorse of the setup; however, daily high-resolution coverage (defined as resolution < 20 m) cannot be obtained without additional SAR satellites. This presentation will provide insights into how Sentinel-1 and the Copernicus Contributing Missions COSMO-SkyMed, TerraSAR-X and PAZ are combined into a constellation providing daily high-resolution Near Real Time SAR coverage for iceberg monitoring in selected regions, for example the Disko Island area in West Greenland. The satellites in the constellation have different repeat cycles and capabilities, so advanced satellite planning and simulation tools are used to derive satellite swath plans for programming the daily high-resolution coverage. This presentation will also describe the necessity of daily high-resolution SAR data for iceberg monitoring in support of risk management for ships in bergy waters. Icebergs are by definition (WMO ice nomenclature) of land origin and have a waterline length larger than 15 m. SAR data of lower resolution do not “see” the smaller icebergs, which are also a major hazard for local shipping. The presentation will also include examples of the decay and short-term variability of icebergs. The derived iceberg constellation takes advantage of C- and X-band in HH polarization only. VV data for detection of icebergs in the open sea have significant limitations in regions with strong winds and rough sea states. Direct HH and VV comparisons from past studies will demonstrate the limitations but also provide guidance, from a Greenland perspective, on the potential use of VV data for sea ice and iceberg monitoring. 
Thinning sea ice due to climate change and future projections for shipping suggest that more ships will enter the sea ice pack and will consequently require information about hazards and obstacles inside the sea ice edge. The potential of future SAR satellite missions to address this aspect will be discussed.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.34)

Presentation: Marine and Maritime Applications for the MDA CHORUS Mission

Authors: Dr. Jayanti Sharma, Dr. Ron Caves
Affiliations: MDA Space
MDA CHORUS is a next-generation dual-frequency commercial SAR mission providing continuity for RADARSAT-2 users and enhanced capabilities to address emerging needs. It comprises a C-band and an X-band satellite (the latter trailing by 1 hour) in a mid-inclination orbit (53.5°), to provide improved coverage between ±62.5° latitude. The CHORUS mission detailed design is nearing completion. The CHORUS-C spacecraft is in the build phase with a planned mid-2026 launch. CHORUS offers a wide range of imaging modes, fast tasking, and near-real-time downlink and delivery, making it well-suited for marine and maritime applications. Background: RADARSAT-2 has been a Copernicus Contributing Mission since 2011. During this time, RADARSAT-2 has acquired approximately 34,000 images in support of Copernicus, largely for maritime and sea ice monitoring. RADARSAT-2 has also supported the Copernicus Security Services and Emergency Management Services, including emergency activations in response to floods, storms and wildfires. The quality of RADARSAT-2 products and the reliability and operational responsiveness of the RADARSAT-2 system have been key reasons for the success of RADARSAT-2 as a Copernicus Contributing Mission. Building upon the substantial heritage of the RADARSAT program, MDA Space is developing CHORUS, which will provide continuity for current users of RADARSAT-2, including Copernicus and other European customers, and better address emerging needs of the geointelligence market. CHORUS leverages the best features of RADARSAT-2 and the RADARSAT Constellation Mission (RCM), and enhances space and ground segment capabilities to meet the needs of existing and new users. The combination of two spacecraft introduces new capabilities not available with RADARSAT-2, including higher-resolution imagery and cross-cueing. 
CHORUS will operate in a novel mid-inclination orbit, providing increased coverage at mid-to-low latitudes, with daily access to over 97% of the area between ±62.5° latitude. Imaging Modes: MDA CHORUS comprises a two-satellite constellation: a C-band SAR (CHORUS-C) and an X-band SAR (CHORUS-X). CHORUS-C is designed and built by MDA Space. CHORUS-X is supplied by a leading provider of commercially available X-band SAR for the exclusive use of MDA. CHORUS-C can image over a 700 km accessible swath, either right- or left-looking, for 20 minutes per orbit. CHORUS-C supports dual-aperture on receive with the antenna split fore and aft, permitting high-resolution wide-swath imaging and radial velocity estimation of vessels and ocean surface currents. Imaging modes include: • Vessel Detection: 5 modes optimized for vessel detection with simultaneous imaging and downlink, Minimum Detectable Vessel Length of 15 m to 65 m. • ScanSAR: coarser-resolution (20 m – 100 m) wide swaths (250 km – 700 km) designed for marine applications such as oil spill detection, mid-latitude sea ice/iceberg monitoring, and ocean wind/wave/current applications. Available with both single and dual azimuth looks. • Stripmap: modes are available at 8 m, 5 m and 3 m resolution (50 km to 175 km swath) at multiple beam positions across the entire 700 km accessible swath. • Spotlight: 3 m x 1 m imaging is supported across the entire accessible swath. Single, dual and compact polarization will be available for all CHORUS-C modes except the high-incidence vessel detection modes, which are only available in single polarization. CHORUS-X will offer left- and right-look imaging for 3 minutes per orbit. CHORUS-X offers a sub-metre resolution Spotlight mode in addition to Stripmap and ScanSAR (TOPS) modes. Maritime and Marine Applications: MDA CHORUS is designed to support Near Real Time (NRT) applications with high reliability of tasking, acquisition, downlink, processing, image quality and delivery. 
CHORUS-C will be capable of being tasked within an hour of image acquisition (fast tasking) as well as delivering image products within thirty minutes of downlink (NRT). This includes the ability to simultaneously image and downlink. CHORUS tasking is designed to make frequent use of left/right slews to better respond to customer orders. CHORUS will provide a 15-minute Vessel Detection Service when in contact with any of its global network of ground stations. A global AIS service will be integrated for NRT dark-target detection. Operating the C-band and X-band sensors together with cloud-enabled ground stations will enable tipping and cueing operations. As the leading spacecraft, CHORUS-C can image broad areas of interest at moderate resolution. These images are downlinked and processed in near real time, and the resulting image products are used to identify targets or areas of interest (AOIs) to be imaged at a higher resolution. These targets or AOIs are then tasked to the trailing CHORUS-X spacecraft. With a 1-hour ground-track separation between the two spacecraft, the resulting fine-resolution imagery provides high-value data in a timely manner. CHORUS cross-cueing could be used for maritime surveillance, oil spill detection, iceberg tracking and monitoring, and land intelligence applications. MDA CHORUS will also carry a Vessel Detection On-board Processing (VDOP) proof of concept to demonstrate NRT on-orbit SAR processing, vessel detection, and delivery of vessel detection reports and image chips. This technology has the potential to further improve responsiveness for maritime customers. Complementarity of CHORUS and Other SAR Sensors: MDA CHORUS data can be integrated with imagery from other SAR sensors to provide a more complete marine and maritime picture. 
With its mid-inclination, non-sun-synchronous orbit, the local time at nadir will vary, decreasing by about 20 minutes per day and providing imagery outside the typical dawn/dusk orbits of most SAR missions. Combining CHORUS data with that of other missions such as Sentinel-1, RCM and RADARSAT-2 increases revisit of a given AOI. While Sentinel-1 typically acquires data over a fixed swath width and resolution, CHORUS and RADARSAT-2 offer a variety of imaging modes that can be selected for finer resolution/smaller swaths or coarser resolution/wider swaths than Sentinel-1, depending on the use case. CHORUS-C also includes multi-azimuth-channel and along-track interferometric capabilities that may be of interest as users prepare for these features on Sentinel-1 Next Generation (NG).
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.34)

Presentation: Japetus constellation & Earth Observation Platform: commercial EO data for marine applications

Authors: Carole BELOT, Philippe MATEU, Adrien PORDOY, Tanguy DUVAL, Juliette AUMONIER, Frédéric TROMEUR, Olivier MAES
Affiliations: Prométhée Earth Intelligence
Prométhée Earth Intelligence is the first French NewSpace operator of a constellation of Earth Observation nanosatellites, with the goal of democratizing the use of satellite imagery for a safer and more sustainable world. Founded in January 2020, Prométhée Earth Intelligence has received support from the French government, the French Public Investment Bank (BPI), and the French National Centre for Space Studies (CNES) as part of the France Relance program. A member of the French Aerospace Industries Association (GIFAS), the company has established partnerships with over 50 French and European companies. Its first satellite, ProtoMéthée, was launched on November 11, 2023, from Vandenberg Air Force Base, California. Prométhée Earth Intelligence addresses key challenges in crisis management and sustainable development by providing strategic and environmental intelligence services. The range of applications offered by Prométhée Earth Intelligence is virtually limitless, including maritime surveillance, critical infrastructure monitoring, ecosystem protection, management of water resources, wildlife and flora conservation, combating deforestation and illegal fishing, and natural disaster prevention. Prométhée Earth Intelligence’s high value-added services rely on proprietary data from its own satellites, with ProtoMéthée currently in orbit, to be followed in 2026 by the Japetus demonstrator. By 2027, the Japetus constellation of 20 nanosatellites will be fully operational, providing high-revisit and high-reactivity data acquisition capabilities. Prométhée Earth Intelligence plans to provide L1B (top-of-atmosphere radiance in sensor geometry), L1C (top-of-atmosphere reflectance in cartographic geometry) and L2A (atmospherically corrected surface reflectance in cartographic geometry) products. 
The data will be available through the proprietary Earth Observation Platform (EOP®), which enhances raw data by merging it with complementary information sources to build solutions for operational services. This agile platform acts as a single, aggregative access point linked to a cloud-based digital processing system. It is designed to facilitate EO access for a broader user community. Having joined the Copernicus Contributing Missions in 2023, Prométhée Earth Intelligence has developed imaging capabilities and EO use cases to meet Copernicus Services needs, in particular for marine applications: • Vessel detection and classification: imagery produced today by the ProtoMéthée satellite enables vessel detection. In addition, the Japetus constellation satellites will allow vessel identification and boat tracking with intraday revisit. These capabilities are essential for surveying possible dark vessels practicing illicit activities such as human trafficking or IUU (Illegal, Unreported and Unregulated) fishing. These vessels stop transmitting their location when about to commit violations. Detecting their location and activities will help in better preparing maritime operations. Detecting unauthorized entry into an exclusive economic zone or a protected area, or verifying and addressing suspicious activities, will also be possible. Additional correlation with Automatic Identification System (AIS) transmissions can be envisaged to confirm or refute emission tampering intended to evade monitoring. Surveillance of activities in port will also be targeted. • Pollution detection: the images produced by Prométhée Earth Intelligence satellites will enable the detection of marine pollution and the monitoring of its progression through revisit imaging. The constellation can be used to assess the risks such pollution poses to the marine environment and blue economy of the affected Exclusive Economic Zones (EEZ). 
Prométhée Earth Intelligence will be able to detect the spread of objects or clusters harmful to human activities or the environment. The enhanced constellation GSD (very high resolution) will allow the identification of the observed pollution, including debris characterization, hydrocarbon pollution, and green tides. By cooperating with the relevant authorities, it will be possible to mitigate related damage. The tracking and mapping of the destruction caused will also be achievable thanks to Japetus capabilities. • Bathymetry of shallow coastal environments: the Japetus constellation will deliver L1C products that can be processed by Copernicus Services. In addition, L2A bathymetric products may also be developed directly by Prométhée Earth Intelligence, with capabilities similar to those of Sentinel-2, to complete the Copernicus Services catalogue. This option is still under study and cannot yet be considered part of Prométhée Earth Intelligence’s roadmap. These data will extend the analysis of underwater topography from 0 to 10 m depth, for better monitoring of the marine environment. Such applications serve different interests, such as safety in marine and submarine navigation, monitoring of infrastructure works, or measurement of water-level evolution in rivers, lakes, and quarries. • Iceberg detection: Prométhée Earth Intelligence satellites will be able to monitor icebergs. These blocks can impact navigation or influence waves (by dampening them), and can serve as a climate indicator. The Japetus constellation satellites will make it possible to track ice blocks smaller than 10 m by correlating with GHOM (Geography, Hydrography, Oceanography and Meteorology) data. Because iceberg size and shape change over time, efforts focus on reconstructing the movement history of an iceberg by understanding its typical patterns with ocean currents. 
These use cases will be validated through data demonstration activities in collaboration with the related Copernicus Services over specific AOIs in the frame of the Copernicus Contributing Mission contract. Moreover, the ProtoMéthée satellite and Japetus constellation have been designed to be fully compatible with Sentinel-2, also offering a wide range of opportunities for complementary use of ESA missions. For example, models of shoreline retreat can be constructed by combining Prométhée Earth Intelligence’s bathymetric products with the Sentinel-2 historical archive. We would be happy to further present and discuss these marine applications in an oral presentation or a poster session.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.34)

Presentation: Shallow Water Bathymetry Using the High-Resolution Dragonette Satellite Constellation

Authors: Ellie Jones, Chad Bryant, Ana-Maria
Affiliations: Wyvern Inc
Accurate bathymetric mapping in shallow environments is critical for understanding coastal dynamics, managing marine ecosystems, and supporting navigation safety. Visible to near-infrared (VNIR) imaging spectroscopy is uniquely suited to understanding shallow-water ecosystems and water properties. High-resolution imaging spectroscopy from space presents a cost-effective and efficient method for large-scale collection of shallow-water bathymetry data compared to traditional vessel-based collection methods. Bathymetry using VNIR imaging spectroscopy leverages the interaction of sunlight with the water column and underlying substrate. Incident sunlight enters the water column, where it undergoes absorption and scattering processes. As the light reaches the seafloor, it is partially reflected back through the water column and exits the surface as upwelling radiance, carrying information about water depth and benthic cover. VNIR wavelengths, especially shorter wavelengths, are particularly sensitive to these interactions due to their penetration capabilities in optically clear waters. Wyvern’s Dragonette satellite constellation comprises three 6U cubesats in polar orbits, each equipped with a visible to near-infrared line-scan hyperspectral imager. These imagers capture a 20-kilometer swath across 23 or 31 spectral bands within the wavelength range of 445 to 900 nm. The sensors provide a spectral resolution of 3% of each band’s central wavelength and achieve a spatial resolution of 5.3 meters at nadir. The constellation offers an average revisit interval of 2.1 days at the equator. This study explores the use of high-resolution VNIR hyperspectral imagery from Wyvern’s Dragonette constellation to derive benthic properties and depth in optically shallow water. Ground-truth bathymetric data, acquired by the University of Hawai’i Undersea Research Laboratory in 2016 with a spatial resolution of 5 meters, serves as the training dataset. 
The processing chain is explored, including the application of radiative transfer models and glint correction. Machine learning methods are applied to the processed hyperspectral imagery to predict depth and benthic cover. Performance is compared against the ground truth and other multispectral and hyperspectral satellites (including Sentinel-2 and PRISMA).
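For context on the depth-retrieval step, a classic empirical baseline against which such machine-learning approaches are often judged is the Stumpf log-ratio method, in which depth is regressed linearly against the ratio of log-transformed blue and green reflectances. The stdlib-only sketch below uses synthetic reflectance/depth pairs; it is not the authors' pipeline, and the constant n and all sample values are assumptions for illustration:

```python
import math

# Hedged sketch: Stumpf log-ratio bathymetry, a simple empirical baseline
# (not the machine-learning method of the study). Sample values are synthetic.

def log_ratio(r_blue, r_green, n=1000.0):
    """Log-ratio index ln(n*R_blue) / ln(n*R_green); n keeps both logs positive."""
    return math.log(n * r_blue) / math.log(n * r_green)

def fit_linear(xs, ys):
    """Ordinary least squares for y = m*x + c."""
    npts = len(xs)
    mx, my = sum(xs) / npts, sum(ys) / npts
    sxx = sum((x - mx) ** 2 for x in xs)
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return m, my - m * mx

# Synthetic calibration pairs: (blue, green) surface reflectance vs. sonar depth (m).
samples = [((0.12, 0.10), 2.0), ((0.08, 0.09), 5.0),
           ((0.05, 0.08), 9.0), ((0.03, 0.07), 14.0)]
m, c = fit_linear([log_ratio(b, g) for (b, g), _ in samples],
                  [d for _, d in samples])

def predict_depth(r_blue, r_green):
    """Depth estimate from the fitted linear model."""
    return m * log_ratio(r_blue, r_green) + c
```

Blue light is absorbed less than green with increasing depth in clear water, so the fitted slope is negative: a lower blue/green log-ratio maps to deeper water.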
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.34)

Presentation: ICEYE: A Leading Commercial SAR Data Provider Driving Innovation and Collaboration in the Copernicus Ecosystem

Authors: Quentin Gillet, Przemyslaw Radzik
Affiliations: ICEYE
Founded in 2014, Finnish New Space pioneer ICEYE empowers others to make informed decisions based on reliable Earth observation data by owning and operating the world's largest SAR satellite constellation. As both a Third Party Mission (TPM) and a Copernicus Contributing Mission (CCM), ICEYE provides valuable SAR data and expertise to support a wide range of applications, from environmental monitoring and disaster response to maritime security. This presentation will highlight ICEYE's unique contributions to the Copernicus program and its commitment to fostering global cooperation and policy support. ICEYE's large constellation of SAR satellites offers unparalleled coverage and responsiveness, delivering timely and reliable data even in adverse conditions. This data complements the existing Copernicus infrastructure, filling critical gaps in all-weather and day-and-night observation capabilities, and enhancing the program's overall capacity to monitor dynamic situations. Through its participation in the TPM program, ICEYE makes its high-quality SAR data accessible to a broader user community, including researchers, scientists, and universities across Europe. This fosters scientific exploration, drives innovation in Earth observation applications, and supports the development of new algorithms and processing techniques for SAR data analysis. Furthermore, as a CCM, ICEYE's data directly supports various Copernicus services, such as contributing to the monitoring of sea ice extent in the Arctic through the Copernicus Climate Change Service (C3S) and aiding in rapid damage assessment following natural disasters with the Copernicus Emergency Management Service (CEMS). This strengthens the program's capacity to monitor environmental changes, respond to natural catastrophes (NATCAT), enhance maritime safety, and address other critical challenges facing European citizens and policymakers. 
This presentation will showcase specific examples of how ICEYE's SAR data is being utilized within the Copernicus framework to address critical challenges such as climate change, natural disasters, and maritime security. It will also explore the policy implications of commercial data providers like ICEYE playing an increasingly important role in global Earth observation initiatives. By fostering collaboration and data sharing, ICEYE contributes to a more robust and effective Copernicus program, ultimately benefiting society and the environment. Keywords: SAR, Copernicus, Contributing Mission, Third Party Mission, Global Cooperation, Policy Support, Commercial Data Provider, Earth Observation, Sustainability, Innovation
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K1)

Session: C.01.07 Discrete Global Grid Systems (DGGS)

A Discrete Global Grid System (DGGS) is a spatial reference system that uses a hierarchy of global tessellations to partition the surface of the Earth into grid cells. This session will provide a forum for the discussion of requirements for DGGS implementations on EO data and relevant use cases. DGGS can act as a unified framework for EO data integration, multisource data fusion and cloud computing on a global scale. Therefore, DGGS can play a key role in the development of new products within the EO downstream sector, which must incorporate interoperability best practices, automatization, systemization, visualization, and on-the-fly processing through integrated web-services.
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K1)

Presentation: A DGGS (Discrete Global Grid System) Datacube Demonstrator For Sentinel-2 Data

Authors: Nicolas Vila, Georgia Doxani, Silvia Enache, Ferran Gascon, Rosalinda Morrone, Valentina Boccia
Affiliations: CS Group, Serco Italia SpA, European Space Agency (ESA, ESRIN), Starion Group
The increasing availability of remote sensing data from various satellite platforms presents significant challenges due to the complexity and heterogeneity of the images and sensors. This growing complexity makes it difficult for users and scientists to search, access, and utilize Earth Observation (EO) imagery quickly and efficiently. Additionally, transforming EO imagery into useful information, such as ready-to-use aggregated indices across various scales, remains a challenge. To address these issues, advanced tools are needed to manage and analyze large volumes of time-series data. Discrete Global Grid Systems (DGGS) offer a unified framework for integrating EO data, combining multisource data, and utilizing cloud computing on a global scale. The global scale of geospatial data requires dividing the Earth into regions, often using local coordinate reference systems, to address inconsistencies and distortions in traditional map projections. Indexing is essential for efficiently querying remote sensing data, and DGGS provide a solution by subdividing the Earth's surface into equal-size regions, enabling more effective geospatial data indexing. By aggregating cells and using hierarchical properties, the size of the indexed data, such as cloud masks, can be significantly reduced without losing information. DGGS indexing also allows faster and more accurate querying compared to traditional approaches. This facilitates the retrieval of products based on precise information corresponding to a region of interest, while most data-provider APIs only implement global metrics, such as the cloud percentage for an entire product. The demonstrator of this project aims to implement a DGGS proof of concept using one year of archived Copernicus Sentinel-2 Analysis-Ready Data (ARD) products over Belgium, showcasing the benefits of a DGGS-oriented solution. 
It builds upon a previous study that evaluated various DGGS grids for Sentinel-2 ARD products, including H3, rHEALPix, and ISEA grids. Leveraging the results and experiments from that study, this project adopts the H3 DGGS to index and query imagery data and to compute cloud masks and other remote sensing indices associated with Sentinel-2 products. The generic design enables extension to other DGGS variants and practical use cases. The demonstrator integrates existing technologies and software libraries with newly developed tools to adapt EO data to DGGS grids, from cloud storage to data visualization. This approach addresses issues with the current MGRS grid, which causes 33% data redundancy. Besides Sentinel-2, datasets from other sources, such as Landsat or the Copernicus DEM, will be integrated to demonstrate interoperability. The outcomes of the demonstrator will contribute significantly to the evolution of EO data storage, cataloging, and analysis, aligning with ongoing developments in the DGGS domain. DGGS represent a transformative paradigm for managing geospatial data, enabling efficient data access, querying, and visualization. However, the adoption of DGGS introduces significant challenges, requiring many existing tools to be adapted or redesigned to handle data aligned on non-regular grids effectively. By demonstrating practical DGGS use cases, this work contributes to the advancement of EO data management and analysis, fostering partnerships and cooperation in the DGGS community.
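The hierarchical aggregation idea mentioned above (collapsing sibling cells that carry the same value, so an indexed layer such as a cloud mask shrinks without losing information) can be illustrated with a toy hierarchy. Real H3 cells have seven children and their own 64-bit indices; the four-children-per-parent tuple ids below are an assumption for illustration only, not the H3 API:

```python
# Hedged sketch of hierarchical cell compaction for a DGGS-indexed mask.
# Cell ids are tuples of child indices (a toy stand-in for real H3 ids).

def compact(cells, children_per_parent=4):
    """Losslessly shrink a cell->value mapping by repeatedly replacing any
    complete set of same-valued sibling cells with their parent cell."""
    cells = dict(cells)
    changed = True
    while changed:
        changed = False
        siblings = {}
        for cid, value in cells.items():
            if cid:  # the root cell () has no parent
                siblings.setdefault(cid[:-1], []).append((cid[-1], value))
        for parent, kids in siblings.items():
            # Collapse only when all children exist and agree on the value.
            if len(kids) == children_per_parent and len({v for _, v in kids}) == 1:
                for idx, _ in kids:
                    del cells[parent + (idx,)]
                cells[parent] = kids[0][1]
                changed = True
    return cells
```

A uniformly cloudy region two levels deep collapses to a single coarse cell, while any mixed set of siblings is left at full resolution, which is the property that lets cloud masks shrink without information loss.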
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K1)

Presentation: Hexahedral Projections For Future Global Gridding Solutions

Authors: Peter Strobl, Aleksandar Dimitrijevic
Affiliations: European Commission, Joint Research Centre (JRC), University of Niš, Faculty of Electronic Engineering
One of the most pressing issues in Earth Observation (EO) for the coming years is the search for suitable geometric representations which facilitate global multiresolution storage and analysis of geospatial data. While Discrete Global Grid Systems (DGGS) are obvious candidates in this race, the selection of a particular implementation and the comparison with other options, be they DGGS or not, often lack objective criteria and a systematic and quantitative evaluation approach. In our presentation we will provide a cross-walk through the main types and implementations of global grids currently in use, seeking to clarify terminology and classification. Based on this we distill a series of criteria according to which global grid systems can be compared and ranked, illustrating the special role of DGGS. We will then focus on quadrilateral cell geometries (aka ‘rasters’) as the least disruptive yet most versatile option for the EO domain given its particular needs. The key methodology for linking quadrilateral cell geometries to global gridding is the class of so-called ‘hexahedral projections’, which utilise a cube as the base model to convert spherical (or, better, ellipsoidal) reference systems to congruent rasters. This class of projections offers a vast number of potential solutions for global gridding in general and DGGS in particular. Based on the results from a recent comprehensive study we will discuss why hexahedral projections are strong candidates for use in DGGS, outline the key criteria for selecting a specific projection, and explain how to evaluate the choice. Hexahedral projections have been extensively developed over the past five decades, resulting in a wide variety of approaches tailored to different applications. This diversity encompasses variations in precision, distortion characteristics, and computational efficiency, making them versatile tools for global gridding.
The study incorporates an evaluation and ranking of hexahedral projections based on hierarchical criteria, emphasizing their strengths and trade-offs in distortion metrics, transformation accuracy, and suitability for specific tasks. Strategies such as graticule rotation, which aligns discontinuities and regions of greater distortion with non-critical areas, are identified as effective methods for further optimizing projections for DGGS applications. This work provides a comprehensive foundation for selecting hexahedral projections, offering practical insights for their integration into global gridding frameworks and highlighting their role in advancing DGGS methodologies. Based on concrete examples from our study, we will explain why and how hexahedral projections, through careful selection and optimisation, can indeed expand the choice of candidates for global gridding, matching or outperforming many of the solutions discussed so far. For this we will combine and expand the ranking criteria to include aspects such as uniqueness, completeness, multiresolution nesting capabilities and storage efficiency. Showcasing some examples, we will also demonstrate how this technology might help bridge the gap between traditional 2D raster concepts and DGGS-style solutions, combining advantages of both worlds. With our presentation we aim to set the stage for an open, objective, and systematic discussion about future global gridding solutions for observational geospatial data. This search should not be limited to traditional approaches nor restricted to implementations copied for reasons of convenience from other domains. We wish to highlight that this contribution might also fit in session D.01.02: Technological Innovations for a Digital Twin of the Earth System.
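As a minimal illustration of the idea behind hexahedral projections, the sketch below maps a point on the unit sphere onto one of the six faces of its circumscribed cube by pure gnomonic projection. This is only the simplest member of the class (it is not equal-area and has notable distortion toward face corners); the projections evaluated in the study apply further transformations to the face coordinates:

```python
import math

def cube_face_project(lat_deg, lon_deg):
    """Gnomonic projection of a spherical point onto the circumscribed
    cube. Returns (face, u, v) with face in 0..5 and u, v in [-1, 1]."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Cartesian coordinates on the unit sphere.
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    ax, ay, az = abs(x), abs(y), abs(z)
    # The dominant axis selects the face; dividing by it projects
    # the point along the radius onto that face plane.
    if ax >= ay and ax >= az:
        face = 0 if x > 0 else 1          # +X / -X equatorial faces
        u, v = y / ax, z / ax
    elif ay >= ax and ay >= az:
        face = 2 if y > 0 else 3          # +Y / -Y equatorial faces
        u, v = x / ay, z / ay
    else:
        face = 4 if z > 0 else 5          # +Z / -Z polar faces
        u, v = x / az, y / az
    return face, u, v
```

Each face then carries a congruent raster; equal-area or low-distortion variants replace the plain division by per-face coordinate transforms.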
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K1)

Presentation: Interoperable Global and Local Indexing of Earth Observation Data Referenced to ISEA Discrete Global Grid Systems for Efficient Storage, Processing and Transmission

Authors: Jérôme Jacovella-St-Louis, Marta Padilla Ruiz, Luís Moreira de Sousa, Perry Peterson
Affiliations: Ecere Corporation, University of Calgary, University of Lisbon
The Icosahedral Snyder Equal-Area (ISEA) projection defined by Snyder (1992) has been used for constructing several Discrete Global Grid Hierarchies (DGGH), such as the ISEA3H (aperture 3 hexagonal) grid of Sahr, White, and Kimerling (2003). However, the implementation of these grids and interoperability between them has been limited by factors such as difficulties developers face understanding Snyder (1992) sufficiently well to implement it in software, technical errors in subsequently published papers introducing further confusion, different interpretations as to how an Earth model should be mapped to the ISEA projection in terms of the correspondence between geodetic latitude and latitude on the authalic sphere for which the projection is defined, as well as the possibility of orienting the grids differently. We present clarifications on key concepts of the ISEA projection, such as the relationships between the icosahedron, its circumsphere, its inner sphere and the authalic sphere. This presentation provides further details specific to ISEA, since the projection was only one of a class of polyhedral globe projections defined by Snyder (1992). We explain the importance of properly mapping geodetic latitude to authalic latitude in order for ISEA grids to be truly equal-area. We propose an interoperable orientation for ISEA grids offsetting the projection 0.05° West relative to the common orientation symmetrical about the equator proposed by Sahr, White, and Kimerling (2003), making it possible to maintain a single vertex on land when the authalic latitude conversion is performed. We also mention contributions by the lead author to the PROJ software library enabling support for the inverse projection, which now makes the ISEA projection usable in popular software such as GDAL and QGIS.
We also define a Coordinate Reference System (CRS) derived from translating, rotating, shearing, and scaling the ISEA planar projection to a 5x6 Cartesian space where a 1x1 space corresponds to each of the ten rhombuses formed by two icosahedron triangles joined at their base, facilitating a number of operations such as indexing. The use of hexagonal grids in particular poses a number of additional challenges related to indexing. Although multiple approaches to indexing hexagonal grids have been proposed in the past, most have disadvantages: long textual identifiers that are not ideal for transmitting large lists of zone identifiers, difficulty mapping identifiers to compact integer types for internal processing, and a cost of resolving an identifier to a particular zone and its associated properties that increases with grid depth. The relatively simple approach we propose leverages the dual relationship between the ISEA3H and ISEA9R (aperture 9 rhombic) DGGHs to define a global indexing scheme resulting in compact textual identifiers with a corresponding 64-bit integer up to a very detailed refinement level. In conjunction with this global indexing, we define a deterministic ordering of sub-zones (zones of a finer refinement level at least partially contained within a coarser parent zone) based on scanlines for both ISEA3H and ISEA9R, which allows for internal addressing within a single zone. This sub-zone ordering enables efficient storage, processing and transmission of Earth Observation data quantized and referenced to ISEA grids using the data formats and API defined by the candidate OGC API – Discrete Global Grid Systems standard (Purss and Jacovella-St-Louis, 2025).

References:
John P. Snyder. An equal-area map projection for polyhedral globes. Cartographica: The International Journal for Geographic Information and Geovisualization, 29(1):10–21, 1992. URL https://doi.org/10.3138/27H7-8K88-4882-1752.
Kevin Sahr, Denis White, and A. Jon Kimerling. Geodesic discrete global grid systems. Cartography and Geographic Information Science, 30(2):121–134, 2003. URL https://doi.org/10.1559/152304003100011090.
Matthew Brian John Purss and Jérôme Jacovella-St-Louis. OGC API – Discrete Global Grid Systems – Part 1: Core (draft). Standard, Open Geospatial Consortium, 2025. URL https://docs.ogc.org/DRAFTS/21-038.html.
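The geodetic-to-authalic latitude mapping discussed above can be sketched with the standard truncated series found in map-projection references, here assuming the WGS84 eccentricity and truncating the series at e^6:

```python
import math

# WGS84 first eccentricity squared (assumed Earth model).
E2 = 0.00669437999014

def authalic_latitude(phi_deg, e2=E2):
    """Convert geodetic latitude (degrees) to authalic latitude
    (degrees) using the standard trigonometric series."""
    phi = math.radians(phi_deg)
    e4, e6 = e2 * e2, e2 ** 3
    xi = (phi
          - (e2 / 3 + 31 * e4 / 180 + 59 * e6 / 560) * math.sin(2 * phi)
          + (17 * e4 / 360 + 61 * e6 / 1260) * math.sin(4 * phi)
          - (383 * e6 / 45360) * math.sin(6 * phi))
    return math.degrees(xi)
```

At 45° the authalic latitude is about 0.128° smaller than the geodetic latitude; skipping this conversion shifts cell boundaries by roughly that amount and is what breaks the equal-area property of an ISEA grid applied to the ellipsoid.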
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K1)

Presentation: Introducing the OGC API – Discrete Global Grid Systems Standard for Enhanced Geospatial Data Interoperability

#zarr

Authors: Matthew Brian John Purss, Jérôme Jacovella-St-Louis, Alexander Kmoch, Wai Tik Chan, Peter Strobl
Affiliations: Pangea Innovations Pty Ltd, Ecere Corporation, Landscape Geoinformatics Lab, Institute of Ecology and Earth Sciences, University of Tartu, European Commission Joint Research Centre
The advent of the OGC API – Discrete Global Grid Systems (DGGS) Part 1: Core Standard marks a significant evolution in geospatial data handling, promising to streamline the integration and retrieval of spatial data through an innovative, standardized API framework. This candidate standard is designed to facilitate the efficient retrieval of geospatial data, organized according to a Discrete Global Grid Reference System (DGGRS), tailored for specific areas, times, and resolutions. It emerges as a robust solution aimed at overcoming the complexities traditionally associated with projected coordinate reference systems. A DGGS represents the Earth through hierarchical sequences of tessellations, offering global coverage with progressively finer spatial or spatiotemporal refinement levels. This well-defined hierarchical structuring allows each data sample to be precisely allocated within a DGGRS zone that reflects the location, size, and precision of the observed phenomenon. This simplifies the aggregation and analysis of spatial data, enhancing capabilities for detailed statistical analysis and other computational operations. Rooted in the principles outlined in OGC Abstract Specification Topic 21, the OGC API – DGGS candidate Standard introduces a comprehensive framework for accessing data organized via DGGRS. This API is not merely a repository access point but a dynamic interface that supports complex querying and indexing functionalities integral to modern geospatial data systems. The standard specifies mechanisms for querying lists of DGGRS zones, thus allowing users to seamlessly locate data across vast datasets or identify data that corresponds to specific queries. This is achieved through the integration of HTTP query parameters combined with advanced filtering capabilities offered by the OGC Common Query Language (CQL2). Moreover, the candidate standard advocates for multiple data encoding strategies, accommodating a variety of data types and formats. 
It supports the retrieval of DGGS data using the widely adopted JSON encoding formats, with additional requirements classes enabling raster or vector data indexed to DGGRS zones. Additionally, it provides compact binary representations for both zone data and zone lists in UBJSON and Zarr, enhancing data transmission efficiency and processing speed. Traditional indexed geospatial data formats are also supported for interoperability. The OGC API – DGGS candidate standard also includes an informative annex providing a JSON schema that describes a DGGRS, coupled with practical examples of DGGRS definitions. This annex serves as a valuable resource for developers and system architects aiming to implement the standard, offering guidance and examples that demonstrate the versatility and applicability of the DGGS approach. By defining a uniform standard for DGGS APIs, this initiative paves the way for a new era of geospatial data exchange and indexing. It addresses the growing challenges of managing massive geospatial datasets in today's digital age, promising enhanced interoperability, precision, and efficiency in geospatial data services. As the candidate Standard moves along the OGC standardization process and becomes more widely implemented in geospatial software tools, OGC API – DGGS is poised to become a cornerstone of geospatial science and industry, fostering a more interconnected and accessible digital Earth.
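A zone-list query of the kind described can be sketched by assembling HTTP query parameters with a CQL2 text filter. The endpoint path and parameter names below follow the draft specification as summarized here and are assumptions to be verified against a live implementation:

```python
from urllib.parse import urlencode

def zone_query_url(base, dggrs_id, bbox, zone_level, cql2_filter=None):
    """Build a zone-list request URL for a hypothetical OGC API - DGGS
    endpoint (path and parameter names are assumptions from the draft)."""
    params = {
        "bbox": ",".join(str(c) for c in bbox),  # area of interest
        "zone-level": zone_level,                # refinement level
        "f": "json",                             # response format
    }
    if cql2_filter:
        # CQL2 text filter, e.g. restricting by a zone data property.
        params["filter"] = cql2_filter
        params["filter-lang"] = "cql2-text"
    return f"{base}/dggs/{dggrs_id}/zones?{urlencode(params)}"

url = zone_query_url("https://example.org/ogcapi", "ISEA3H",
                     (4.3, 50.6, 4.5, 50.9), 9, "cloudCover < 10")
```

The server would answer with a list of matching DGGRS zone identifiers, which the client can then dereference individually for zone data.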
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K1)

Presentation: XDGGS: Integrating Xarray with Discrete Global Grid Systems for Scalable EO Data Analysis

#pangeo #cloud-native

Authors: Justus Magine, Benoit Bovy, Jean-Marc Delouis, Anne Fouilloux, Lionel Zawadzki, Alejandro Coca-Castro, Ryan Abernathey, Peter Strobl, Daniel Loos, Wai Tik Chan, Alexander Kmoch, Tina Odaka
Affiliations: LOPS - Laboratoire d'Oceanographie Physique et Spatiale UMR 6523 CNRS-IFREMER-IRD-Univ.Brest-IUEM, Georode, Simula, CNES, The Alan Turing Institute, Earthmover PBC, European Commission Joint Research Centre, Max Planck Institute for Biogeochemistry, University of Tartu, Institute of Ecology and Earth Sciences, Landscape Geoinformatics Lab
DGGS offers a systematic method for dividing the Earth's surface into equally sized, uniquely identifiable cells, enabling efficient data analysis at global scales. The XDGGS library integrates DGGS with the xarray framework, allowing users to work seamlessly with data mapped onto DGGS cells. Through XDGGS, users can select, visualise, and analyse data within a DGGS framework, utilising the hierarchy and numeric IDs of the cells for operations like up-/downsampling, neighbourhood search, and data co-location. The library also supports the computation of geographic coordinates for DGGS cell centres and boundaries, facilitating integration with traditional Geographic Information Systems (GIS). By providing a scalable and systematic approach to geospatial data analysis, XDGGS enhances the ability to work with large, multi-dimensional datasets in diverse scientific domains. It offers robust solutions for tasks such as data fusion, interpolation, and visualisation at global scales. This presentation highlights the potential of XDGGS for Earth Observation (EO) applications by:

  • Simplifying Access to DGGS Workflows: embedding DGGS functionality within xarray objects lowers the barrier for adopting DGGS frameworks, fostering broader adoption across disciplines.
  • Enabling Scalable Analysis: with xarray's support for Dask, XDGGS facilitates scalable processing of massive EO datasets on DGGS, making it ideal for cloud-native environments and large-scale scientific workflows.
  • Cross-Disciplinary Applications: through the Pangeo ecosystem, XDGGS promotes interoperability across scientific domains, offering use cases in global environmental monitoring, EO datasets, and data fusion with bio-geospatial datasets.
  • Streamlining Integration and Visualization: combining xarray's user-friendly API with DGGS, XDGGS enables the rapid development of reproducible workflows, advanced visualizations, and real-time data interaction.
The presentation will include a demonstration of XDGGS applied to real-world EO datasets, showcasing its efficiency in handling complex global-scale analyses. This integration of xarray with DGGS provides a powerful tool for the EO community, empowering researchers and developers to tackle today's pressing environmental and societal challenges with innovative, scalable, and reproducible solutions.
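The up-/downsampling via cell hierarchy and numeric IDs mentioned above can be illustrated in a few lines of pure Python. This is a toy aperture-4 grid with plain integer cell IDs, not xdggs itself; in xdggs the equivalent operation is delegated to the underlying DGGS backend (such as H3 or HEALPix) and applied across xarray/Dask chunks:

```python
from collections import defaultdict

def upsample_mean(cell_values, aperture=4):
    """Aggregate child-cell values to their parent cells by averaging,
    using integer-ID arithmetic only (parent = child // aperture)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for cell, value in cell_values.items():
        parent = cell // aperture   # hierarchy encoded in the ID
        sums[parent] += value
        counts[parent] += 1
    return {p: sums[p] / counts[p] for p in sums}

# Five fine cells aggregate to two coarser cells:
fine = {0: 1.0, 1: 3.0, 2: 5.0, 3: 7.0, 4: 10.0}
coarse = upsample_mean(fine)
```

Because the parent relationship is pure integer arithmetic on cell IDs, the operation parallelises trivially over chunks, which is what makes the approach attractive for Dask-backed global datasets.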
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Hall K1)

Presentation: Highly Scalable Discrete Global Grid Systems Based on Quaternary Triangular Mesh and Parallel Computing

#zarr

Authors: Davide Consoli, Daniel Loos, Luís Moreira de Sousa, Tomislav Hengl
Affiliations: OpenGeoHub Foundation, Max Planck Institute for Biogeochemistry, Instituto Superior Técnico
Discrete global grid systems (DGGS) can be used to efficiently store and access rasterized Earth observation data, including satellite images and derived products. One of the main advantages lies in the fact that, compared to data stored in standard projections like the Universal Transverse Mercator (UTM) or WGS84, DGGS minimize area distortions and avoid data replication in regions near the poles. For large-scale datasets, including the Sentinel-2 collection, this translates into a potential saving of petabytes of data storage [1]. In addition, using uniform cell sizes for tessellation facilitates analysis, derivation of spatial statistics, and application of spatial filters [2]. Finally, when implemented using hierarchical structures, they can perform operations such as point querying in logarithmic complexity, improving scalability at higher spatial resolutions. Despite their potential, DGGS are still not widely adopted in the geoscience remote sensing community. One of the main bottlenecks is that most of the data currently produced by the community are stored as geo-referenced images, typically in formats like GeoTIFF. Furthermore, most software used by scientists and developers does not yet support DGGS data. To enable a transition to DGGS within the community, it is essential to have libraries that include fast I/O methods, allowing reciprocal conversion between rasters in standard projections and DGGS data structures. We propose a strategy based on DGGS that can effectively perform I/O operations from and to standard raster formats. Using a triangular tessellation in a hierarchical structure with aperture 4, derived from the geodesic subdivision of an icosahedron, each node of the 20 quad-trees is uniquely associated with an integer sequential index.
Each sequential index can be translated into a hierarchical index represented as a vector whose size corresponds to the node level, storing integers from 0 to 3, with the exception of the second level, which identifies one of the 20 quad-trees, and the first level, which holds only the root index 0. The area non-uniformity of this tessellation, measured as the standard deviation of the cell areas normalized by their mean, saturates around 0.086 for a high number of subdivisions. Similar approaches, like the Quaternary Triangular Mesh (QTM), have already been proposed in the literature [3] and implemented for large-scale applications [4]. One of the main novel aspects of our work lies in its performance and scalability. Targeting a highly parallel implementation, each sequential index is associated with an independent process that can easily communicate with its parent and child processes. By simply converting its sequential index to the hierarchical one, adding or removing one element from the vector (depending on the target), and converting the result back to a sequential index, each process can communicate with the related processes using, for instance, the Message Passing Interface (MPI). This results in a process topology composed of interconnected nested spheres that can be used to process geospatial data at different spatial resolutions in parallel. In addition, querying operations can be performed with exponential parallel efficiency and logarithmic complexity, as for standard quad-trees. This last characteristic allows associating a substantial number of pixel locations from input raster files with the DGG leaf cells in which they fall in feasible computational times. After this operation, the associated pixel indices can be aggregated to a higher level of the DGGS depending on the chunking size of the original images.
The nodes and processes associated with the selected level will be in charge of reading the required chunks of the input files, associating the pixels of interest with each leaf, and aggregating them in case multiple pixels are associated with a single leaf. These pixel chunks can be used for processing and then converted back to raster files. The best writing performance will be achieved with file formats that allow parallel writing of data chunks, such as Zarr. Finally, relying on a meshing approach, the framework can include elevation information directly in the meshed structure, enabling the use of the DGGS in applications such as hydrology modeling, electromagnetic scattering and Earth digital twins. [1] Bauer-Marschallinger, B., & Falkner, K. (2023). Wasting petabytes: A survey of the Sentinel-2 UTM tiling grid and its spatial overhead. ISPRS Journal of Photogrammetry and Remote Sensing, 202, 682-690. [2] Kmoch, A., Vasilyev, I., Virro, H., & Uuemaa, E. (2022). Area and shape distortions in open-source discrete global grid systems. Big Earth Data, 6(3), 256-275. [3] Dutton, G. (1989, April). Planetary modelling via hierarchical tessellation. In Proc. Auto-Carto (Vol. 9, pp. 462-471). [4] Raposo, P. (2022, November). Implementing the QTM discrete global grid system (DGGS). https://doi.org/10.5281/zenodo.7415011
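The sequential-to-hierarchical index conversion described above can be sketched in a few lines of Python. This is an illustrative reading of the scheme (path[0] is the root 0, path[1] selects one of the 20 quad-trees, subsequent entries are quaternary digits); the exact layout in the actual implementation may differ:

```python
def to_hierarchical(seq, level):
    """Convert a sequential node index at a given level into the
    hierarchical path [0, tree, d1, ..., d_{level-2}], with tree in
    0..19 and each quaternary digit d_i in 0..3."""
    digits = []
    for _ in range(level - 2):
        digits.append(seq % 4)
        seq //= 4
    return [0, seq] + digits[::-1]  # remaining seq is the tree id

def to_sequential(path):
    """Inverse conversion: hierarchical path back to sequential index."""
    seq = path[1]                   # tree id, 0..19
    for d in path[2:]:
        seq = seq * 4 + d
    return seq
```

Parent/child communication then reduces to truncating or extending the path by one element and converting back to a sequential index, exactly the MPI addressing pattern the abstract describes.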
Add to Google Calendar

Thursday 26 June 16:15 - 17:45 (Room 1.61/1.62)

Session: A.08.15 The ESA Ocean Science Cluster

Within the broader scope of scientific exploitation of satellite assets, the European Space Agency (ESA) is funding a critical mass of R&D activities aimed at enhancing the observation and understanding of the Oceans from space.
The ESA Ocean Science Cluster consists of a portfolio of several research opportunities and networking actions promoting synergistic research and fostering European and international scientific collaboration.
About 40 projects currently belong to the Ocean Science Cluster, further grouped into six main sub-cluster topics, namely: Ocean Health, Ocean Extremes, Coastal Ocean including Land-Sea interactions, Ocean Carbon, Upper-ocean Dynamics including Air-Sea interactions, and the Ocean’s role in the Earth and Climate System.
This Agora will showcase mini-talks highlighting outcomes of a sub-selection of the Cluster projects, with special emphasis on those projects at the intersection of the sub-cluster domains.
Specific attention will also be devoted to the mapping and adherence of the Ocean Cluster grand challenges to the ocean Science Questions recently published in the ESA EO Science Strategy document.
Lastly, through interactive brainstorming with the audience, plans and ambitions of the ESA Ocean Agenda 2026+ will be shared and discussed.

Presentations and speakers:


The Ocean Science Cluster and mapping to the ESA EO Science Strategy


  • Roberto Sabia and Marie-Helene Rio

Six mini-talks showcasing highlights of representative projects:


  • Ocean Heat Content - Benoit Meyssignac
  • SCOPE - Gemma Kulk
  • Ocean Health - Marie-Helene Rio
  • EOatSEE - Pedro Ribeiro
  • Medicane - Giulia Panegrossi
  • 4DMED - Bruno Buongiorno Nardelli

Unveiling of the Ocean Cluster Agenda 2026+ and related open discussion


Add to Google Calendar

Thursday 26 June 16:30 - 16:50 (EO Arena)

Demo: D.04.25 DEMO - Codeless EO data analysis with openEO, leveraging the cloud resources of openEO platform straight from your web browser

#stac

This demo aims at giving a general introduction to the core concepts of openEO and connecting it with a live demo using the openEO Web Editor to highlight the generation of workflows based on the openEO user-defined process (UDP) concept without any coding skills. The demo will run on openEO Platform and illustrate the ease with which anyone can create workflows for analyzing EO data without having to manage data or write scalable, parallelized and optimized code. The demo will be hosted by Alexander Jacob from Eurac Research and Matthias Mohr from Matthias Mohr - Softwareentwicklung.

Demo Content & Agenda

1.) Introduction & Overview
a.) Introduction to the openEO API: functionalities and benefits
b.) Data cubes concepts and documentation review
2.) Transitioning to Cloud Processing
a.) Challenges and advantages of moving from local processing to cloud environments
b.) Overview of cloud providers (VITO Terrascope, EODC, SentinelHub) and their integration with openEO Platform & CDSE
c.) Key concepts of FAIR (Findable, Accessible, Interoperable, Reusable) principles implemented by openEO
d.) STAC: how the SpatioTemporal Asset Catalog allows interoperability

Live Demo with openEO
1.) Accessing and using the openEO Web Editor
2.) Discovering and accessing EO datasets and processes
3.) Generating workflows using the openEO Web Editor
4.) Processing workflows
5.) Managing and checking the status of submitted jobs
6.) Visualizing results
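For readers unfamiliar with openEO process graphs (the structure that a user-defined process wraps with parameters), a minimal example is sketched below. The collection name, extents and bands are placeholders for illustration, not part of the demo itself:

```python
import json

# Minimal openEO process graph: load a collection and save the result.
# Each node names a process; "from_node" wires node outputs together,
# and "result": True marks the graph's final node.
process_graph = {
    "load1": {
        "process_id": "load_collection",
        "arguments": {
            "id": "SENTINEL2_L2A",
            "spatial_extent": {"west": 11.2, "south": 46.4,
                               "east": 11.5, "north": 46.6},
            "temporal_extent": ["2024-06-01", "2024-06-30"],
            "bands": ["B04", "B08"],
        },
    },
    "save1": {
        "process_id": "save_result",
        "arguments": {"data": {"from_node": "load1"}, "format": "GTiff"},
        "result": True,
    },
}

# The Web Editor builds exactly this kind of JSON graphically.
graph_json = json.dumps(process_graph, indent=2)
```

Publishing such a graph as a UDP with, say, the extents as parameters is what lets non-programmers re-run the workflow over new areas from the Web Editor.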

Speakers:


  • Alexander Jacob - EURAC
  • Matthias Mohr
Add to Google Calendar

Thursday 26 June 16:52 - 17:12 (EO Arena)

Demo: A.07.11 DEMO - Quality Assurance for Soil Moisture (QA4SM) - A centralized soil moisture validation and inter-comparison platform

In this focused 20-minute presentation, we demonstrate QA4SM, a centralized cloud-based platform for validating satellite soil moisture data. You’ll discover how QA4SM streamlines comparisons of leading satellite products against fiducial reference measurements, supporting both producers and users of soil moisture data. By utilizing data from missions such as SMOS, SMAP, ASCAT, and Sentinel-1, as well as reference data from the International Soil Moisture Network and reanalysis models like ERA5 and GLDAS-Noah, QA4SM delivers comprehensive and reproducible validation results.
This demonstration focuses on the platform’s intuitive interface, showing how to select datasets, configure validation parameters, and interpret the resulting metrics and visualizations. Version 3 brings new capabilities for seasonal and monthly analyses, along with stability metrics on an annual scale, offering deeper insights into the temporal dynamics of soil moisture datasets. Optional filtering and anomaly detection robustly refine dataset validation results. In the subsequent Q&A, we answer any remaining questions and give you the opportunity to contribute to the future development of QA4SM by sharing your perspective on and needs for soil moisture validation and inter-comparison.
By the end, participants will have a clear understanding of QA4SM’s key features and benefits, ready to seamlessly integrate this service into their satellite data validation workflows.
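The kind of metrics such a validation reports can be illustrated with a small pure-Python helper computing bias, unbiased RMSD and Pearson correlation between a satellite series and a reference series. This is a simplified sketch (equal-length, gap-free series assumed); QA4SM's actual implementation additionally handles temporal matching, masking and scaling:

```python
import math

def validation_metrics(sat, ref):
    """Return (bias, ubRMSD, Pearson R) for two equal-length series."""
    n = len(sat)
    ms = sum(sat) / n
    mr = sum(ref) / n
    bias = ms - mr
    # Anomalies (mean removed) for the unbiased RMSD and correlation.
    cov = sum((s - ms) * (r - mr) for s, r in zip(sat, ref)) / n
    var_s = sum((s - ms) ** 2 for s in sat) / n
    var_r = sum((r - mr) ** 2 for r in ref) / n
    ubrmsd = math.sqrt(sum(((s - ms) - (r - mr)) ** 2
                           for s, r in zip(sat, ref)) / n)
    pearson = cov / math.sqrt(var_s * var_r)
    return bias, ubrmsd, pearson
```

A series that is a constant offset of the reference, for instance, shows a non-zero bias but a near-zero ubRMSD and a correlation of 1, which is why the metrics are read together rather than individually.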

Speaker:


  • Daniel Aberer - TU Wien, Department of Geodesy and Geoinformation
Add to Google Calendar

Thursday 26 June 17:15 - 17:35 (EO Arena)

Demo: D.04.30 DEMO - UP42 platform - simplified and accelerated access to geospatial data

The goal of the demonstration is to present the key features and offerings of the UP42 platform that can help accelerate the implementation of both scientific and commercial projects. When dealing with complex multi-sensor projects and conducting scientific research, it is crucial to combine proven data sources and tools that help extract relevant information and answer the questions posed. The UP42 platform simplifies the process of exploring, accessing and ordering the right data.
UP42 is a cloud-based platform that provides a comprehensive, easy and quick access to commercial and open source geospatial data from many sources, as well as processing capabilities.
It aims to simplify the integration of geospatial data into applications and help users derive insights from this data efficiently.

Key features and offerings of the UP42 platform:
1. Data Marketplace: users can explore a variety of geospatial datasets, including satellite imagery, aerial imagery, elevation models and other products.
2. Console UI: all of the integrated archive data collections can be accessed and ordered directly from our archive Catalog. Users can also task a satellite from the Console, if their project requires acquisition over an area of interest in a specific period of time.
3. Processing Workflows: the platform allows users to take advantage of custom workflows to process geospatial data. This includes tools like image pre-processing, object detection or classification.
4. APIs and SDKs: UP42 provides RESTful APIs and Python software development kits (SDKs) to facilitate the integration of its services into applications. This enables developers to access data and processing capabilities programmatically.
5. Integration with GIS software: The platform is also integrated with Esri’s ArcGIS Pro and QGIS software via special add-ons, which allows users to access UP42 data repository and build workflows based upon it.

Speakers:


  • Klaudia Bielińska - Senior Partnership Manager, UP42
Add to Google Calendar

Thursday 26 June 17:37 - 17:57 (EO Arena)

Demo: D.01.18 DEMO - DestinE Platform Demo

The demo will illustrate the DestinE Platform, showcasing its features, capabilities, and functionalities. It will provide an insight into how the platform operates and its services. The focus will be on:
1. Platform Introduction
2. Registration and Access
3. Onboarding Process
4. Support
Additionally, services such as Data Access, Data Visualisation, User Workflows and Edge Services will be illustrated.

Instructor:


  • Andrea Pensa - Serco
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: E.02.01 - POSTER - Advanced SAR processing techniques for security and safety applications


This session aims at creating a space for researchers and industry to exchange knowledge on advanced Synthetic Aperture Radar (SAR) processing techniques for safety and security applications. The initial identified advanced processing techniques to be discussed are the following:
• Inverse SAR (ISAR) processing algorithms;
• microDoppler;
• VideoSAR;
• Distributed SAR processing;
• data fusion with non-EO data;
• SAR polarimetry.
Results from ESA ongoing research activities in this domain will be also presented.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: SAR Advanced Techniques Exploiting Angular Diversity: Height Estimation with Spotlight Data and Moving Target Indication in Pursuit Monostatic Mode

Authors: Federico Minati, Stefan Baumgartner, Carmine Frascella, Marc Rodriguez, Nida Sakar, Andre Silva, Francesco Vecchioli, Mario Costantini, Pau Pratz
Affiliations: B-Open Solutions, German Aerospace Center (DLR)
This talk focuses on two particularly interesting topics included in the ESA-ESRIN framework contract “Next Generation Processing Methods for Security Applications: Distributed Sensing” (NGSECAPP), which aims to design new algorithms and products exploiting high-resolution SAR data acquired in monostatic, bistatic or multistatic modes, with a particular focus on angular diversity between acquired images: • Topography estimation with Spotlight Data; • Ground/Maritime Moving Target Indication (GMTI/MMTI) in Pursuit Monostatic Mode. The development of synthetic aperture radar (SAR) technology has significantly improved satellite remote sensing, enabling detailed and reliable Earth observation (EO) capabilities. SAR systems are essential for a wide range of applications, such as environmental and geological risk monitoring, disaster management, vegetation and agricultural assessment, and security and intelligence. Unlike optical systems, SAR operates independently of weather conditions and daylight, providing consistent and reliable data acquisition. This capability makes SAR particularly valuable for analysing areas that are difficult to assess with traditional imaging methods, including remote or less-accessible regions: by capturing detailed information on structures, terrain, and land use, SAR allows for comprehensive monitoring of nearly any location on Earth. Moreover, SAR sensors have interferometric capabilities: both the amplitude and the phase of the received signal carry valuable information that can be used in different applications. The continuously increasing number of available SAR satellite constellations, operated both by institutional entities and by private companies, could for the first time make it possible to exploit multi-satellite SAR acquisition mechanisms operationally, enabling new functions and new detection, imaging and monitoring capabilities (e.g., bistatic ISAR can enable the imaging of stealth targets).
These satellites usually have limited swath-width capabilities, but this limitation is compensated by the flexibility of using different orbit inclinations and by shorter revisit times, which can be reduced even to sub-daily time scales. Furthermore, the high spatial resolution offered by these systems makes them very attractive to the market, and there is clear potential for developing and implementing new applications. In this context of rising interest in distributed sensing, the NGSECAPP project aims to develop algorithms for security applications, focusing especially on angular diversity. As regards the “Topography Estimation with Spotlight Data” topic: in SAR processing, the generation of Level 1 Single Look Complex (SLC) data from raw acquired data involves a focusing step that typically assumes a single average height offset, with respect to the WGS84 ellipsoid, over wide areas. The use of an estimated average height, which generally differs from the actual pointwise heights, leads to a defocusing effect along the azimuth direction at the positions of the image targets. The magnitude of this defocusing effect depends primarily on the height offset of the target relative to the average height used in the focusing, the incidence angle, the tropospheric delay offset, and the acquisition mode. While the consequences of this assumption are negligible in many acquisition modes (such as STRIPMAP), this side effect becomes significant in very high-resolution modes, characterised by very long synthetic aperture times. Acquisition modes such as TerraSAR-X Staring Spotlight, Capella Space Spotlight, and ICEYE DWELL Fine Mode, which achieve very high azimuth resolution, are particularly affected by this azimuth defocusing effect. This unintended azimuth defocusing can, however, provide valuable information, as it can be used to estimate the height of targets.
After identifying the targets, the image is refocused using different height offsets, and the amplitude of the refocused images at each target's position is evaluated as a function of these offsets. Specifically, for each target, the height offset that maximizes this function provides a good estimate of the target’s height. As illustrated above, this approach allows height estimation using only a single acquisition. However, in this scenario the results are relative rather than absolute, due to an unknown offset caused by the fact that the exact tropospheric delay at the time of acquisition is unknown. This uncertainty also limits the accuracy of the height estimates. To overcome this limitation and improve estimation accuracy, a solution is to use two SAR acquisitions over the same area with non-negligible angular diversity. By leveraging common targets in both images, a linear system can be formulated using radargrammetry and autofocus equations (two radargrammetric equations and two refocusing equations) with three unknowns: the target height and the two tropospheric delays of the two SAR images. This system first estimates the true tropospheric delay for both acquisitions and then, using the corrected delay values, determines the absolute heights of the targets. This method significantly enhances the precision of height estimation, typically reducing errors to just a few centimetres. As regards the second topic, “Ground/Maritime Moving Target Indication (GMTI/MMTI) in Pursuit Monostatic Mode”: the Pursuit Monostatic mode is a technique that utilizes two SAR antennas, which can be mounted on a single platform or on separate platforms, each transmitting and receiving its own signals. For ground applications, this mode allows effective suppression of the land clutter (i.e., stationary, non-moving objects), enabling the detection of targets with low radar cross-section (RCS), such as small vehicles.
If clutter suppression is not necessary, the two antennas can still estimate the Direction of Arrival (DOA) angle, allowing accurate determination of the targets' geographical positions. This mode is implemented by acquiring two STRIPMAP images with interferometric incidence angles and a temporal baseline typically in the range of a few seconds to minutes, i.e., longer than that of bistatic data, whose temporal baseline is of the order of milliseconds. The larger temporal baseline enables the detection of a wider range of moving targets, including maritime targets such as vessels as well as ground-based vehicles. Additionally, the high resolution of the images significantly enhances the estimation of the speed and direction of these targets. The use of interferometric incidence angles and a small temporal baseline ensures that targets appear consistent across both images, thanks to the nearly identical perspectives of the two acquisitions and the minimization of geometric distortions. The main advantages are therefore:
• efficient land clutter suppression, enabling the detection of low-RCS targets hidden by background noise;
• accurate estimation of the geographical positions, velocities, and moving directions of detected targets;
• improved ability to predict future positions of targets based on their velocity and direction (particularly beneficial for monitoring ships over open seas, where movements are relatively slow and predictable over several days).
In comparison to a single SAR satellite system, a pursuit monostatic system retains a similar revisit time (of the order of days), but the added capability to predict vehicle tracks provides a significant operational advantage. This is especially useful for applications like land and ocean border control, surveillance of offshore infrastructure, and critical asset monitoring. The algorithms introduced here will be presented in detail, together with experimental results on specific datasets.
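The refocus-and-search idea behind the topography estimation can be sketched as follows. This is a toy simulation, not the NGSECAPP processor: the quadratic residual-phase model, the constant `k`, and the grid-search resolution are illustrative assumptions only.

```python
import numpy as np

# Toy model (assumption): an uncompensated height offset dh leaves a residual
# quadratic phase k*dh*t^2 across the synthetic aperture, which defocuses the
# target in azimuth. Refocusing with trial offsets and keeping the one that
# maximizes the focused amplitude recovers dh.

def focused_amplitude(signal, t, k, dh_trial):
    """Amplitude of the target after compensating a trial height offset."""
    return np.abs(np.sum(signal * np.exp(-1j * k * dh_trial * t**2)))

def estimate_height_offset(signal, t, k, trial_offsets):
    """Grid search: return the trial offset that best refocuses the target."""
    amps = [focused_amplitude(signal, t, k, dh) for dh in trial_offsets]
    return trial_offsets[int(np.argmax(amps))]

# Simulate a point target defocused by a 40 m height offset.
t = np.linspace(-0.5, 0.5, 1024)   # slow time across the aperture [s]
k = 50.0                           # illustrative phase-error constant
dh_true = 40.0
echo = np.exp(1j * k * dh_true * t**2)

trials = np.arange(-100.0, 100.5, 0.5)
dh_est = estimate_height_offset(echo, t, k, trials)
print(dh_est)  # close to 40.0
```

With two acquisitions, the same amplitude-maximisation equations would be combined with the radargrammetric ones into the linear system described above; the single-image search shown here only yields relative heights.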
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Safe Bridge Project: an Example for the Geomatic Monitoring of Bridges

Authors: Nicole Dore, Antonio Monteleone, Luca Benenati, Cristiano Lanni
Affiliations: NAIS S.r.l., ANAS S.p.A., Gruppo FS Italiane, Department Technology Innovation & Digital Spoke
Safe Bridge is a project financed by the Italian Space Agency (ASI, Agenzia Spaziale Italiana) in the frame of the ASI I4DP Market programme, aimed at promoting the development of the national downstream sector through on-field demonstrations of value-added applications and services for the market. The end user of the project is ANAS, the national road authority responsible for the planning, construction, maintenance, and management of the Italian road and motorway network. Looking at the Italian context, in May 2020 the General Assembly of the Superior Council of Public Works, with the aim of rationalising the interventions necessary to guarantee infrastructure safety, approved the Guidelines for the safety and monitoring of all existing bridges at national level. Their maintenance and monitoring are strictly linked to their organisation into classes of attention, assigned on the basis of parameters of danger, vulnerability and exposure to different types of risk. The Guidelines proposed by the Ministero delle Infrastrutture e dei Trasporti (MiT, Ministry of Infrastructure and Transport) illustrate a procedure for the safety management of existing bridges, in order to prevent inadequate levels of damage and make the risk acceptable; the procedure is characterised by a multilevel approach (from Level 0 to Level 5). The periodic inspections and monitoring of an infrastructure therefore have the purpose of evaluating its condition with reference to its suitability, assessing both structural safety and any hazards in the surrounding conditions. It is because of this importance given to the surrounding territory that one of the Safe Bridge services makes use of SAR data. All the Safe Bridge services are distributed via a web platform, intended to be the centre of monitoring and control.
Focusing on the SAR application, a key aspect of Safe Bridge involves leveraging interferometric SAR analyses to support the systematic completion of Level 1 inspection forms (Level 1 Bridge Inspection Sheet – Landslide and Hydraulic Phenomena) for bridges, specifically addressing landslide and hydraulic phenomena. These forms are a mandatory requirement for site managers, who must periodically evaluate the activity state of both recognised and potential landslides. Recognised landslides are those catalogued in existing landslide inventories, while potential landslides are areas not yet classified but showing indications of movement. A central parameter is the "State of Activity", used to quantify the activity level (Pa) of recognised and potential landslides. For recognised landslides, a statistical analysis is performed on the Persistent Scatterers (PS) located within the inventoried landslide boundaries. The velocity and displacement trends of these PS are systematically analysed to classify the landslide's activity state based on its movement dynamics. For potential landslides, spatial clustering algorithms are applied to moving PS outside inventoried areas to detect coherent groups of displacement. These clusters are further evaluated by analysing their velocity along the line of maximum slope, enabling the identification and classification of emerging landslide zones. For a more in-depth reading of the PS information, in addition to the time series available for each PS, the 3D GIS functionality implemented in the web platform, provided with the viaducts' point clouds, makes it possible to examine the third dimension (elevation) and thus to perform a first-level evaluation of the structural element where the satellite recorded the movement.
Beyond the application of SAR satellite data, Safe Bridge also involves the use of other technologies that complement the information derived from satellites, such as UAV systems and GNSS. To further support the end users in their daily work and speed up the process of damage annotation and recording, Safe Bridge also offers a service based on the automatic recognition of surface defects visible in RGB images acquired during UAV campaigns. This data-driven support is provided by Machine Learning and Artificial Intelligence solutions that allow real-time segmentation of the photographs directly on the web platform. This section of the web platform also allows users to refine the defects automatically recognised by the system, with the aim of minimising the time spent by specialised personnel on defect recognition and recording, today still performed manually. During the training phase of the above-mentioned neural network, which lasted over a year, a huge dataset of images was processed (segmented and labelled), whose surface defects were catalogued using the defect taxonomy (abacus) provided by the MiT Guidelines. Completing the range of services provided by Safe Bridge, the monitoring of the infrastructures is also entrusted to GNSS receivers installed directly on elements of the viaducts to record movements exceeding preset thresholds. Information is acquired from the GPS and Galileo constellations, and data reception supports both daily (slow) and real-time (fast) movement estimation, taking advantage of the Variometric technique. The pilot sites where the services are deployed are three viaducts located in Southern Italy, in the Calabria, Basilicata and Sicily regions, each with its specific structural criticality due to the peculiarities of the territory. The Safe Bridge working group is made up of NAIS S.r.l.
(contractor), Sapienza University of Rome, ENAV S.p.A. and ALTA S.r.l.
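As a toy illustration of the activity-state assessment described above, the sketch below labels an inventoried landslide from the PS mean LOS velocities falling inside its boundary. This is not the Safe Bridge implementation: the 2 mm/yr stability threshold, the moving-fraction rule and the two-class scheme are assumptions for the example (the MiT Guidelines define the actual classes).

```python
import numpy as np

# Hypothetical classifier: a landslide is 'active' if a large enough
# fraction of its PS move faster than a stability threshold.

def activity_state(ps_velocities_mm_yr, moving_fraction=0.5, v_thresh=2.0):
    """Label a landslide from the PS LOS velocities inside its boundary."""
    v = np.abs(np.asarray(ps_velocities_mm_yr, dtype=float))
    frac_moving = np.mean(v > v_thresh)
    return "active" if frac_moving >= moving_fraction else "dormant"

# PS inside two inventoried landslide polygons (mm/yr along the line of sight).
print(activity_state([-0.4, 0.8, -1.1, 0.2]))          # dormant
print(activity_state([-6.2, -4.9, -7.5, -0.5, -5.8]))  # active
```

The same statistic could then feed the Pa activity-level parameter on the Level 1 inspection form; the potential-landslide branch would first cluster the moving PS spatially before applying such a rule per cluster.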
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Enhancing maritime and inland waterway situational awareness in the new space era: advanced ISAR approaches with very high resolution SAR data

Authors: Ilaria Nasso, Fabrizio Santi, Debora Pastina, Massimo Zavagli, Francesco Vecchioli, Federico Minati, Mario Costantini, Laura Parra Garcia, Professor Carmine Clemente, Michela
Affiliations: Sapienza University of Rome, B-Open Solutions, University of Strathclyde, ESA
While spaceborne remote sensing has been well established for decades, recent advancements in spaceborne SAR technology have brought space-based sensing into a new era. The advent of competitive private actors such as Capella Space, Umbra and ICEYE is a real game-changer, giving rise to satellite imagery with unprecedented temporal and spatial resolutions. Such constellations of small satellites, capable of operating with very wide bandwidths and advanced operating modes, complement well-established institutional missions such as COSMO-SkyMed and TerraSAR-X, significantly increasing the volume of very high resolution (VHR) SAR products and improving the achievable temporal resolution. Notably, the antennas mounted on most of the micro-satellite constellations provide exceptional scanning agility, even making it possible to steer the beam onto the same point on the ground for extremely long dwells (up to ~25 s). These factors, along with easier access to ever larger data volumes, pave the way toward new applications of satellite data, including surveillance tasks beyond classical remote sensing. In this framework, maritime surveillance appears as one of the most promising fields. Maritime situational awareness (MSA) is crucial for the security of every nation with access to the sea. Likewise, rivers, lakes, artificial waterways and canals are critical areas, and the monitoring of human activities in these environments is of vital importance for ecological, economic and security reasons. Ships of very different classes, varying from commercial cargo ships to fishing and cruise vessels, sail in coastal areas, offshore, and on inland waterways, and need to be monitored to identify threats and track misbehaviour. Situational awareness in such environments can be challenging using terrestrial sensors (e.g.
coastal radars have a limited range, while optical sensors can suffer from bad weather or occlusion effects), whereas current spaceborne SAR technology can offer imagery of photograph-like quality, independently of the weather and with sufficiently fine temporal resolution. Nevertheless, a well-known issue when observing moving targets, such as ships, in SAR images is that the assumption of a stationary scene during image focusing causes blurring and mis-location of the imaged ships. The latter issue can be quite problematic in inland waterway scenarios, where vessels are often subject to 'rules of the road' stricter than in the open sea and, moreover, the shift can easily cause the target to fall into the surrounding land clutter, making it harder to observe. Augmenting SAR with Inverse SAR (ISAR) techniques can unlock the full potential of the new SAR sensors for MSA. Ship refocusing can be obtained by properly adapting ISAR principles to the SAR framework, significantly improving the capability to recognize fine details of the targets. Moreover, the availability of VHR SAR data combined with more advanced ISAR processing approaches enables further feature layers to be extracted from the images. In particular, the estimation of the full ship dynamics (translation and rotation) is strategic information. While the estimation of the ship's translational velocity allows its behaviour to be tracked, retrieving the rotational motion makes it possible to scale the images in the uniform range–cross-range plane, enabling measurement of the ship's size and thus reinforcing ATR procedures. Finally, relocating the ship to its actual position would strongly benefit a clear representation of the maritime picture. It is worth noting that multiple receiving channels would simplify the above-mentioned tasks: for example, ship relocation could be obtained straightforwardly via along-track interferometry.
Nevertheless, most current spaceborne SARs are equipped with a single receiving channel, and therefore more sophisticated signal processing solutions are needed to circumvent this limitation. The talk will describe the full processing chain that has been developed to achieve these goals. It receives as input Level-1A single-channel SAR images and performs, for each detected ship: i) refocusing, ii) target kinematic estimation, iii) cross-range scaling, iv) relocation. Examples of results will be shown using real SAR data from different satellite classes (i.e., 'small satellite constellations' such as Capella Space and 'large systems' such as TerraSAR-X), composed of targets belonging to different classes (e.g., cargo ships, fishing boats, etc.) and undergoing different motion conditions. Both surveillance in the open sea and inland navigation will be considered. Such rich datasets, often corroborated by Automatic Identification System (AIS) messages to validate the achieved results, allowed us to test the developed approaches and to assess their performance over a wide range of operative conditions of interest. Finally, future research directions will be outlined, with a specific focus on the exploitation of very long dwell data collections. Adaptations and extensions of the techniques developed to handle SAR images obtained with such an unprecedented acquisition mode will be introduced, also showing promising preliminary results and discussing the potential and the challenges to be faced for their full exploitation in MSA applications. ACKNOWLEDGEMENTS: This work has been funded under the project EO4SECURITY, Innovative SAR processing methodologies for security applications - Topic B1: Inverse SAR processing to enhance the capabilities to characterize targets/features of interest, ESA Contract No. 4000142270/23/I-DT. Thanks to Airbus Defence, Capella Space, Umbra Lab, Iceye and Spire Global for access to the datasets.
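The cross-range scaling step can be illustrated with a short sketch. It assumes the rotation rate `omega` has already been estimated by the kinematic-estimation stage; the wavelength, rotation rate and Doppler extent below are illustrative numbers, not values from the datasets mentioned in the abstract.

```python
import numpy as np

# Once the ship's rotation rate omega is known, each Doppler frequency f_d in
# the refocused ISAR image maps to a cross-range position
#   x = lambda * f_d / (2 * omega),
# so the ship can be measured in metres rather than hertz.

def doppler_to_cross_range(f_doppler_hz, wavelength_m, omega_rad_s):
    """Map Doppler frequency [Hz] to cross-range position [m]."""
    return wavelength_m * np.asarray(f_doppler_hz, dtype=float) / (2.0 * omega_rad_s)

# X-band example: lambda = 3.1 cm, ship yawing at 0.01 rad/s (assumed values).
wavelength = 0.031
omega = 0.01
f_min, f_max = -35.0, 45.0  # Doppler extent of the ship signature [Hz]
extent = (doppler_to_cross_range(f_max, wavelength, omega)
          - doppler_to_cross_range(f_min, wavelength, omega))
print(round(float(extent), 1))  # ship cross-range extent: 124.0 m
```

This is why the rotational-motion estimate matters for ATR: without `omega` the cross-range axis stays in hertz and the ship's physical size cannot be read off the image.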
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: A Python-Based RadarSimPy Library for Antenna Calibration and Simulation of Corner Reflectors for SAR Applications

Authors: Can Atalay, Prof. Dr. Umut Güneş Sefercik, Assoc. Prof. Dr. Hakan Akçın
Affiliations: Zonguldak Bülent Ecevit University, Gebze Technical University
Advanced Synthetic Aperture Radar (SAR) technologies have become more compact as satellite platforms and their electronics have decreased in size; consequently, the number of SAR satellites in orbit is on the rise. Corner reflectors play a crucial role in spatial applications of SAR satellite imagery. The production of antennas with the correct size and geometry for the sensor bandwidth is both costly and time-consuming. In such cases, it is necessary to calculate the radar cross section (RCS) and to simulate it visually, in order to identify necessary modifications to the antenna structure and to assess the intensity of the antenna response in the SAR image. Given the high cost of commercial antenna calibration and simulation software, both RCS calculations and simulations can instead be carried out using the Python library RadarSimPy. Sensor data from current SAR satellites including Sentinel-1A, NISAR, and TanDEM-X were utilized in this study. The unique parameters of each satellite were incorporated into the design of nine corner reflector antennas using computer-aided design (CAD) software. These antennas were then calibrated and simulated to analyze their intensities in SAR satellite images.
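Before running a full electromagnetic simulation, a reflector can be sized with the textbook closed-form expression for the peak RCS of a triangular trihedral, sigma = 4*pi*a^4 / (3*lambda^2), with a the inner edge length. The sketch below uses that formula directly (it is not a RadarSimPy call, and the 1 m edge and C-band wavelength are example values):

```python
import numpy as np

def trihedral_peak_rcs(edge_m, wavelength_m):
    """Peak RCS [m^2] of a triangular trihedral with inner edge length edge_m."""
    return 4.0 * np.pi * edge_m**4 / (3.0 * wavelength_m**2)

def to_dbsm(rcs_m2):
    """RCS in dB relative to 1 m^2 (dBsm)."""
    return 10.0 * np.log10(rcs_m2)

# C-band example (lambda ≈ 5.55 cm, close to Sentinel-1), 1 m inner edge:
rcs = trihedral_peak_rcs(1.0, 0.0555)
print(round(float(to_dbsm(rcs)), 1))  # ≈ 31.3 dBsm
```

Because the RCS scales with the fourth power of the edge length and inversely with the square of the wavelength, the same design step must be repeated per sensor band, which is why the study designs a separate reflector for each satellite's parameters.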
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Atlantic Pathfinder Project: Integrating emerging EO & AI technologies for improved Maritime Surveillance

Authors: Tim McCarthy, Professor Paul Townsend, Mr Paul Kiernan, Professor Carmine Clemente, Dr Aidan Magee, Mr Daire Walsh, Dr Cleitus Antony, Dr Abhinav Gautam, Mr Aleksanteri Vattulainen, Mr Jerek Dobrzanski
Affiliations: Maynooth University, Tyndall National Institute, Skytek Ireland, University of Strathclyde
Maritime zones are increasingly vulnerable to both conventional and non-conventional threats, including the proliferation of missile and drone technologies, the illicit flow of contraband (weapons, drugs) and the growing threat to critical maritime infrastructure (UNIDIR, 2024). Maritime surveillance can be used to monitor various marine environments and activities and covers a number of functions: defence, customs, border control, fisheries control, safety, security and pollution prevention in shipping, environmental protection, and general law enforcement. A number of terms have evolved over recent years to describe some of the new challenges facing government organisations responsible for monitoring our coastal and maritime waters. Hybrid Threat, in a maritime context, refers to an action conducted by state or non-state actors whose goal is to undermine or harm a state or institution through various means, including disruption of services (energy, communications), cyberattacks, etc., whilst remaining below the threshold of formal warfare (EU DIS, 2024). Dark Ship is a term used to describe a vessel operating with its Automatic Identification System (AIS) – a transponder system – turned off. Typically, Dark Ships operate unmonitored – a strategy used to circumvent vessel tracking and support tactical-level activities such as sanctions avoidance, illegal trade, human trafficking, and other actions that do not comply with maritime law. It is against this background of increasing hybrid threats and dark vessels in the maritime domain that three core partners – Skytek, Maynooth University (MU) and Tyndall National Institute (TNI), in close collaboration with the University of Strathclyde (UoS) – came together to research, develop, test and assess the fusion of emerging Earth Observation and underwater sensing technologies within a Common Operational Picture (COP) architecture.
These tests were carried out using shipping vessels in Galway, on the west coast of Ireland (Sept–Nov 2024). The collaborative research team will present the integrated results of these maritime EO sensing technologies. Distributed Acoustic Sensing (DAS): led by Tyndall National Institute, DAS exploits Rayleigh backscattering in optical fibres within underwater cables to provide continuous, spatially resolved acoustic detection over kilometres of fibre, achieving metre-scale resolution and a frequency range from sub-hertz to kilohertz. This innovative technology enables the detection of ship signatures and environmental sounds, enhancing maritime situational awareness. Tyndall will present methods for processing DAS data and demonstrate its integration with complementary sensing systems, including SAR and hydrophones, to enhance the detection of Dark Ships and support comprehensive maritime surveillance. Synthetic Aperture Radar (SAR): MU will report on the impact of sea state on the detection of vessels using satellite SAR imagery in North Atlantic waters, focusing on vessels that have a low-amplitude signal in the image, such as smaller vessels that may be more difficult to detect in background sea-state clutter. As part of this work, a unique collection of annotated labels will be generated incorporating AIS and sea-state conditions. These labels will include geospatial context (proximity to coastal landing points, active fishing grounds, etc.) and will be used to train and evaluate both traditional and CNN architectures. Results from Deep Learning models using drone optical (RGB & IR) data to detect man-overboard and small Search & Rescue related objects (life-rings, life-jackets, etc.) will also be reported.
Micro-Doppler information from SAR: the University of Strathclyde (UoS) will present how advanced maritime target characterisation exploiting the micro-Doppler effect in SAR has the potential to enhance target discrimination capabilities, especially when combined with additional sensors such as DAS. UoS will discuss the potential of innovative processing solutions exploiting micro-Doppler and acoustic signatures for Dark Vessel detection in SAR images. Common Operational Picture (COP): a scalable COP is required to ingest the various EO data streams (SAR, DAS, micro-Doppler) from the three EO technologies above. Skytek will present the results of collating, fusing and analysing these streams together with AIS and RF data, in order to provide a maritime situational awareness picture that supports informed real-time decision-making. References: EU DIS (2024). Hybrid Threats. https://defence-industry-space.ec.europa.eu/eu-defence-industry/hybrid-threats_en ; UNIDIR (2024). United Nations Institute for Disarmament Research (UNIDIR). Securing the Seas: A Comprehensive Assessment of Global Maritime Security. https://unidir.org/event/securing-the-seas-a-comprehensive-assessment-of-global-maritime-security/
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: MIcro-Doppler InfrAstructure Stability Assessment using SAR (MIDAS) - Advanced SAR processing techniques

Authors: Michela Corvino, Olimpia Masci, Giovanni Nico, Vito Antonio Vacca, Leticia Perez-Sienes, Adrian Focsa, Andrei Anghel, Iolanda Maggio
Affiliations: Starion Italia S.p.A., ESA, DIAN S.r.l., National Research Council of Italy, Institute for Applied Mathematics, ANAS Spa, Struttura Territoriale Basilicata, National University of Science and Technology POLITEHNICA
Earth Observation (EO) Synthetic Aperture Radar (SAR) imaging, unlike optical (visible/infrared) technology, offers ways to remotely observe areas of interest in all visibility and weather conditions. Moreover, observing objects of interest actively (by transmitting EM waves) and at substantially different frequencies from the optical regime allows additional means to enhance detection, recognition and parameter estimation. For these reasons, SAR technology has proliferated greatly over the last decades and is now becoming more pervasive thanks to a more intensive use of space platforms. Nevertheless, the full exploitation of radar systems, even when specifically designed as SAR systems, goes beyond traditional processing techniques, which have intrinsic limitations when dealing with moving objects in the scene. Vibrometry is the sub-field of SAR image processing responsible for determining a target's oscillatory parameters. Monitoring the vibrational behaviour of targets may reveal faults and/or may help in classifying the objects contained in SAR images. In general, a non-stationary target produces two main types of artifacts in a SAR image. On one hand, the radial motion (line-of-sight component) shifts the Doppler Centroid (DC), so the target appears at a fictitious azimuth position. On the other hand, the along-track velocity component of the target modifies the azimuth chirp rate, so the target appears defocused in the final SAR image. This simplistic reasoning assumes constant velocity, which is not the case for vibrating targets. Nevertheless, the assumption gains validity when considering sub-aperture processing of the SAR images, which translates into a lower exposure time.
The main objective of the MIDAS project is to investigate specific and advanced SAR processing techniques that can extract additional information from extended targets, in particular micro-Doppler (m-D) processing. As a matter of fact, the employment of micro-Doppler methods has been limited in the civilian domain by processing complexity and a lack of industrial experience, but the reliable and extensive baseline of new SAR missions with high ground resolution and revisit frequency provides a credible foundation for their development. This work will show how innovative micro-Doppler techniques can be developed and validated, and which activities are to be carried out to allow their future industrialisation and operationalisation. The project is aimed at developing robust m-D estimation algorithms, whose reliability will be demonstrated by validating their results against real-world use cases and operational scenarios and providing detailed and convincing test reports. In the ongoing MIDAS project, a workflow for m-D infrastructure monitoring has been designed, summarised as follows. A complex SAR image is used as input for the generation of a vibration map. From the original SAR image, we generate several azimuth sub-aperture images with partially overlapping Doppler spectra. When monitoring large areas, the main concern is the computational cost. Hence, a fast Correlation-based Doppler Estimator (CDE) is applied using a two-dimensional sliding window on each sub-aperture image, which leads to a series of Doppler Centroid (DC) images. In a DC image, each pixel provides the instantaneous frequency of the imaged target in the corresponding sub-aperture. By generating a time series of DC images, we estimate the vibration waveforms of the imaged targets. Finally, a Fourier transform applied along the sub-aperture dimension provides an estimate of the vibration spectrum for each pixel.
The vibration spectra usually contain a few discrete vibration frequencies. Therefore, to find the pixels that contain a vibrating target, we compute a sparsity measure (i.e., entropy/spectral concentration) of the spectrum, a sparse spectrum being an indication of a possible vibration. The vibration frequencies are extracted by peak picking on the amplitude vibration spectrum. The considered use cases address the monitoring of critical infrastructures such as bridges, water channels and water towers in the Basilicata region (Italy), using StripMap COSMO-SkyMed and Spotlight ICEYE datasets. Testing and validation of the proposed methodology rely on in-situ ground-based real aperture Ku-band radar measurements. Real-aperture radar (RAR) data are acquired using antennas with different radiation patterns, so as to obtain different spatial resolutions and spatial coverages of the target. Raw data are focused in range in order to obtain range profiles useful for identifying the different targets on the monitored structure. Range profiles are stacked in time and interferometrically processed to get the time series of displacements and the corresponding amplitude spectrum of each target at a given range distance. The azimuth and elevation angles of the RAR acquisition are set so as to be aligned with the LoS of the spaceborne SAR acquisitions. For each monitored structure, the ground-based radar measurements are geolocated, and the spectral analysis is summarised by providing the geographic coordinates of oscillating targets, the azimuth and elevation angles, and the frequencies of the spectral peaks, which are compared with the frequencies estimated using spaceborne sensors.
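The last stages of the workflow above (spectrum of a pixel's DC time series, entropy as the sparsity measure, peak picking) can be sketched as follows. This is a minimal illustration with simulated data, not the MIDAS code: the sub-aperture rate, series length and vibration frequency are assumed values.

```python
import numpy as np

# One pixel's Doppler-centroid (DC) value across N sub-apertures forms a
# time series; its spectrum reveals the target's vibration frequencies.

def vibration_spectrum(dc_series, subaperture_rate_hz):
    """One-sided amplitude spectrum of a pixel's DC time series."""
    spec = np.abs(np.fft.rfft(dc_series - np.mean(dc_series)))
    freqs = np.fft.rfftfreq(len(dc_series), d=1.0 / subaperture_rate_hz)
    return freqs, spec

def spectral_entropy(spec):
    """Low entropy -> sparse spectrum -> likely a discrete vibration."""
    p = spec / np.sum(spec)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Simulated pixel: 64 sub-apertures at 100 Hz, target vibrating at 12.5 Hz.
rate = 100.0
t = np.arange(64) / rate
rng = np.random.default_rng(0)
dc = 3.0 * np.sin(2 * np.pi * 12.5 * t) + 0.1 * rng.standard_normal(64)

freqs, spec = vibration_spectrum(dc, rate)
print(freqs[np.argmax(spec)])  # dominant peak near 12.5 Hz
```

In the full workflow this would run per pixel of the DC image stack, with the entropy used to mask out pixels whose spectra are too flat to contain a vibration before peak picking.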
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: EVIDERI - EO as Evidence in “Crimes Against Humanity” Investigation Process

Authors: Giulia Tessari, Felicitas Bellert, Kristin Fleischer, Mirko Lapi, Alix Leboulanger, Marcello Moretti, Paolo Pasquali, Michele Rubino
Affiliations: sarmap SA, IABG, JANES
The last century witnessed horrific crimes committed during various conflicts, many violating international law. The International Criminal Court (ICC) is responsible for investigating, prosecuting, and trying individuals accused of severe crimes, including genocide, crimes against humanity, war crimes, and aggression. Effective investigations require solid evidence. The EVIDERI project, funded by ESA, aims to demonstrate the importance of integrating Earth Observation (EO) data with Open-Source Intelligence (OSINT) to enhance the value of the information extracted and to utilise it as legal evidence in international crime investigations. Based on the ICC Requests for Information (RFIs), the EVIDERI team conducted case studies to develop workflows for creating evidentiary products. The approach used diverse data types, including Earth observation data such as optical and Synthetic Aperture Radar (SAR) imagery, combined with contextual information such as event monitoring and conflict timelines. The project involved rigorous analysis, validation, and qualification of the collected data. Both medium- and high-resolution EO data were analysed. OSINT information on specific events of interest helped narrow the time interval for analysing EO data. Meaningful products were provided to effectively address the core questions of the RFIs. Two case studies will be presented, focusing on creating a common structure for EO and non-EO data and identifying appropriate solutions to integrate OSINT and GEOINT, enabling link analysis across diverse data sources using common keys such as dates, locations, names, event descriptions and metadata.
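The key-based link analysis mentioned above can be pictured with a small sketch: EO-derived products and OSINT reports are joined through shared keys, here (date, location). This is an illustration only, not the EVIDERI data model, and all record fields below are hypothetical examples.

```python
from datetime import date

# Hypothetical records: one list of EO-derived products, one of OSINT reports.
eo_products = [
    {"date": date(2023, 5, 2), "location": "Site A", "source": "SAR change map"},
    {"date": date(2023, 5, 7), "location": "Site B", "source": "optical image"},
]
osint_reports = [
    {"date": date(2023, 5, 2), "location": "Site A", "event": "reported incident"},
]

def link_by_keys(eo, osint):
    """Return (EO product, OSINT report) pairs sharing date and location."""
    index = {(r["date"], r["location"]): r for r in osint}
    return [(p, index[(p["date"], p["location"])])
            for p in eo if (p["date"], p["location"]) in index]

links = link_by_keys(eo_products, osint_reports)
print(len(links))  # 1 linked pair
```

In practice the join keys would be fuzzier (date windows, spatial buffers, name matching), but the principle of a common key structure across EO and non-EO sources is the same.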
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Multi-sensor data fusion for maritime traffic monitoring from space

Authors: Giorgio Pasquali, Filippo Daffina', Carl Torbjorn Stahl, Chiara Pratola
Affiliations: e-GEOS
Geopolitical instabilities around the world and ongoing conflicts inevitably impact the transport of goods by sea; the safety of shipping is threatened by piracy, terrorist attacks and competition for access to maritime resources. General instability forces a change in canonical shipping patterns, favouring safer, albeit longer and more expensive, routes. Today, more than ever before, the safety of navigation on a global scale is supported by a more effective use of satellite assets dedicated to remote sensing: by the availability of new sensors characterised by high spatial and temporal resolution, by the birth of new families of satellite sensors dedicated to the detection of RF (Radio Frequency) activities, and by an increasingly disruptive development of innovative Deep Learning algorithms [3] that maximise the capacity to exploit the available data. The huge amount of heterogeneous data acquired through the use of different sensors, however, requires new Geospatial-Big-Data-Analytics architectures capable of harmonising the different data, enabling the execution of data-fusion techniques that allow the tracking of vessels of interest using all available sensors and systems (e.g. AIS, SAR and optical satellite imagery, RF detection and vessel databases). The e-GEOS team, in the context of the ESA EO4Security initiative, enhanced its maritime monitoring capabilities by combining existing techniques and developing innovative Deep Learning techniques [4] used to detect and profile vessels or human activity at sea from the processing and fusion of a wide range of information sources: optical satellite sensors (possibly also with night-time acquisition capability), SAR satellite sensors, RF-capable satellites, AIS systems and vessel databases.
Maritime Domain Awareness requires the most complete knowledge of the maritime picture, in terms of the most complete profile for each vessel operating in the monitored area [5], regardless of the adopted sensors and their technical characteristics. The application of the data-fusion process allows for a complete and integrated view of each observed vessel, relying on the information transmitted through the AIS system (e.g. IMO code, MMSI, ship name, size, flag and declared destination), validating it with information extracted from satellite surveys, extended with information related to activity and RF signature (in terms of the number and operating frequency of navigation radars) and in turn integrated with the vessel database. The results of this process are useful for highlighting the presence of dark vessels (vessels that do not transmit AIS), anomalous behaviour, deviations from routine behaviour (maritime patterns of life) and activities at sea to be further investigated. In particular, the need to exploit RF signals to characterise dark vessels, potentially involved in illicit activities, has recently become increasingly evident. To this end, by building a database of RF signatures, it is possible to correlate on-board radars with the type of vessel, so as to characterise detections from satellite data even in the absence of AIS data transmission. This step forward in satellite monitoring for maritime security is crucial, since until now the real limitation of monitoring has been the ability to classify the type of vessel involved in illicit activities, which in almost all cases does not transmit an AIS signal. The developed algorithms were assessed, in the context of the ESA EO4Security initiative, during a real operational activity performed by the European Fisheries Control Agency (EFCA) in the Adriatic Sea for monitoring fishing activities over a Marine Protected Area (MPA).
References:
1. European Commission, Policies, Climate, environment and energy, web page https://international-partnerships.ec.europa.eu/policies/climate-environment-and-energy/oceans_en, last accessed 2024/05/27.
2. Review of Maritime Transport, United Nations Conference on Trade and Development (UNCTAD), Geneva (2023).
3. G. Soldi et al., "Space-Based Global Maritime Surveillance. Part II: Artificial Intelligence and Data Fusion Techniques," IEEE Aerospace and Electronic Systems Magazine, vol. 36, no. 9, pp. 30-42, Sept. 2021, doi: 10.1109/MAES.2021.3070884.
4. G. Bottini, M. Corsi, F. Daffinà, S. Tilia, T. Stahl, "Custom state-of-the-art CNN algorithm for ship detection and segmentation," 1st Maritime Situational Awareness Workshop, MSAW 2019, Lerici (2019).
5. F. Daffinà et al., "Aggregated risk assessment from multi-source data fusion," 1st Maritime Situational Awareness Workshop, MSAW 2019, Lerici (2019).
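The core of the AIS/detection fusion step described above is a spatio-temporal association: a satellite detection is matched to the nearest AIS report within distance and time gates, and unmatched detections become dark-vessel candidates. A minimal sketch (the `correlate` helper, gate values and report fields are illustrative assumptions, not the e-GEOS system):

```python
import math
from datetime import datetime, timedelta

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def correlate(detection, ais_reports, max_km=1.0, max_dt=timedelta(minutes=10)):
    """Return the nearest AIS report within the space/time gates,
    or None -- i.e. the detection is a candidate dark vessel."""
    def dist(a):
        return haversine_km(detection["lat"], detection["lon"], a["lat"], a["lon"])
    candidates = [a for a in ais_reports
                  if abs(a["time"] - detection["time"]) <= max_dt and dist(a) <= max_km]
    return min(candidates, key=dist, default=None)

# Invented example: two AIS reports, one detection close to the first.
ais = [
    {"mmsi": "111111111", "lat": 35.000, "lon": 18.000, "time": datetime(2024, 5, 1, 12, 0)},
    {"mmsi": "222222222", "lat": 35.500, "lon": 18.500, "time": datetime(2024, 5, 1, 12, 3)},
]
detection = {"lat": 35.001, "lon": 18.001, "time": datetime(2024, 5, 1, 12, 4)}
match = correlate(detection, ais)   # matched -> not a dark vessel
```

An operational system would additionally fuse RF signatures and vessel databases onto the matched (or unmatched) tracks, as the abstract describes.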
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: A.09.06 - POSTER - Advances in Permafrost

Covering more than 25% of the Earth's surface, permafrost is one of the dominant landscape-forming features surrounding the Arctic Ocean. It is also present at higher altitudes in mountainous areas and in Antarctica. Permafrost is an Essential Climate Variable within the Global Climate Observing System, and is associated with climate tipping points.

Permafrost is a sub-ground phenomenon that cannot be directly observed from space, yet satellite observations have a major potential to support local, regional and circumpolar monitoring of this key aspect of the climate system. This session will showcase some of the more recent key achievements in circumpolar and mountain permafrost monitoring including methods/algorithms, science and applications.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Permafrost and glacier observations in response to atmosphere circulation in the Arctic

Authors: Ingo Sasgen, Grit Steinhöfel, Caroline Kasprzyk, Heidrun Matthes, Sebastian Westermann, Julia Boike, Dr Guido Grosse
Affiliations: Division of Geosciences, Glaciology Section, Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Institute of Geography, University of Augsburg, Divisions of Biosciences and Geosciences, Marine Biogeosciences / Marine Geochemistry Section, Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Institute for Geology, Mineralogy and Geophysics, Ruhr-Universität Bochum, Division of Climate Sciences, Atmosphere Physics Section, Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Department of Geosciences, Section of Physical geography and Hydrology, University of Oslo, Division of Geosciences, Permafrost Research Section, Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Geography Department, Humboldt-Universität zu Berlin, Institute of Geosciences, University of Potsdam
The Arctic is undergoing rapid changes driven by global warming and atmospheric circulation patterns, affecting both glaciers and permafrost systems. Here we present our study that integrates glacier mass balance data from GRACE/GRACE-FO (2002–2023), permafrost active layer thickness from ESA’s Climate Change Initiative (CCI, 2003–2019), and field measurements from the CALM network (1990–2023) to examine these impacts. Significant covariations are observed among these datasets, with asynchronous trends in neighboring regions and synchronous patterns in areas antipodal to the North Pole. Dominant atmospheric circulation modes explain ~75% of pan-Arctic variability during 2002–2022. These findings emphasize the importance of accounting for atmospheric drivers, especially during extreme events, in projecting permafrost-related impacts in a warming Arctic.
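The fraction of variability explained by dominant circulation modes is typically obtained from an EOF/principal-component decomposition of the anomaly field. A minimal sketch on synthetic data (illustrative only, not the study's pipeline):

```python
import numpy as np

# Toy anomaly matrix X: rows = years, columns = grid points.
rng = np.random.default_rng(42)
t = np.arange(20)
mode = np.sin(2 * np.pi * t / 10)      # one dominant "circulation mode"
pattern = rng.normal(size=50)          # its spatial loading pattern
X = np.outer(mode, pattern) + 0.3 * rng.normal(size=(20, 50))
X -= X.mean(axis=0)                    # remove the time mean -> anomalies

# EOF analysis via SVD; squared singular values give variance per mode.
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(f"leading mode explains {explained[0]:.0%} of the variance")
```

In the real study the explained-variance sum over the dominant modes (rather than a single mode) reaches the ~75% figure quoted above.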
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Interannual InSAR subsidence and heave patterns in the permafrost landscape of Svalbard

Authors: Lotte Wendt, Line Rouyet, Marie Bredal, Hanne H. Christiansen, Andrew Hodson, Anatoly Sinitsyn, Heidi Hindberg, Tom Rune Lauknes, Sebastian Westermann
Affiliations: NORCE Norwegian Research Centre AS, Geological Survey of Norway, Arctic Geophysics Department, The University Centre in Svalbard (UNIS), Arctic Geology Department, The University Centre in Svalbard (UNIS), SINTEF AS, Department of Geosciences, University of Oslo
Permafrost is sensitive to ongoing climate change, making it important to monitor due to its role in the global carbon cycle, its influence on ecosystem stability, and the integrity of Arctic infrastructure. Surface subsidence and heave in permafrost landscapes can serve as an indicator of changes in ground ice content, reflecting processes such as ground ice loss or aggradation within the active layer and uppermost permafrost. Long-term surface settlement may indicate degradation of permafrost. Spaceborne Interferometric Synthetic Aperture Radar (InSAR) provides a powerful tool for detecting and quantifying small surface displacements over large areas, independent of solar illumination and cloud conditions. However, studies applying InSAR for monitoring long-term surface displacements remain scarce in many permafrost regions. This study investigates interannual surface displacement derived from InSAR in the areas surrounding Ny-Ålesund and Longyearbyen in western Svalbard, a region underlain by continuous permafrost. The study areas have experienced strong warming in recent decades and in-situ measurements display a deepening of the active layer, which indicates thawing of the uppermost permafrost. To generate interannual time series of surface displacements spanning from 2018 to 2023, Sentinel-1 imagery was processed using the Small Baseline Subset (SBAS) technique on highly coherent interannual interferograms between consecutive snow-free seasons (temporal baselines between 340–390 days). The data were processed in both ascending and descending geometries. Our InSAR results display spatial and temporal patterns of long-term surface subsidence and heave, with subtle velocity trends (approx. 1–5 mm/yr) in most of the lowland areas. Stronger velocity trends (approx. 10–20 mm/yr) are observed over specific landforms, such as some pingos and ice-cored moraines.
We compare the InSAR-derived displacement rates with in-situ measurements of ground ice characteristics from the uppermost permafrost. Preliminary results suggest that interannual subsidence is more pronounced in areas with an ice-rich uppermost permafrost. The presented findings will provide new insights into the dynamics of Arctic permafrost landscapes. The study advances the application of InSAR in permafrost research and contributes to our understanding of surface displacements and ground ice dynamics under ongoing climate change in Svalbard.
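The interferogram pair selection described above (consecutive snow-free seasons, temporal baselines of 340–390 days) can be expressed compactly; the acquisition dates below are invented for illustration:

```python
from datetime import date
from itertools import combinations

# One illustrative Sentinel-1 acquisition per snow-free season.
acquisitions = [date(2018, 8, 1), date(2019, 7, 28), date(2020, 8, 10),
                date(2021, 8, 3), date(2022, 7, 30), date(2023, 8, 6)]

# Keep only interannual pairs whose temporal baseline is 340-390 days,
# i.e. consecutive snow-free seasons; multi-year pairs are rejected.
pairs = [(a, b) for a, b in combinations(acquisitions, 2)
         if 340 <= (b - a).days <= 390]
```

With these dates, exactly the five consecutive-season pairs survive the baseline filter.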
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Creating a pan-arctic Retrogressive Thaw Slump dataset with harmonized Sentinel-2 data and deep learning methods

Authors: Jonas Küpper, Tobias Hölzer, Todd Nicholson, Luigi Marini, Lucas von Chamier, Ingmar Nitze, Anna Liljedahl, Guido Grosse
Affiliations: Alfred-Wegener-Institut for Polar and Marine Research, Computing and Data Centre, Alfred-Wegener-Institut for Polar and Marine Research, Permafrost Research Section, University of Postdam, Institute of Geosciences, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Woodwell Climate Research Center
In a rapidly changing permafrost environment driven by climate change and anthropogenic disturbances, tracking geomorphological dynamics is a crucial task, not only to provide hazard monitoring, but also to evaluate climatological feedback processes. Yet, the impact of rapid permafrost disturbances on the Earth system is still uncertain, making the availability of reliable, long-term data a very important building block for understanding the interconnections and feedbacks between several environmental subsystems. Specifically, Retrogressive Thaw Slumps (RTS) are a major mass-wasting phenomenon and a rapid disturbance in ground-ice-rich permafrost landscapes. They can mobilize large quantities of formerly frozen ground and consequently sediment, carbon, and nutrients. Once initiated, they can grow and develop into broader erosion disturbances. Over years and decades they can undergo polycyclic behaviour of initialization, growth, stabilization, and re-activation. The spatial distribution and temporal dynamics of RTS are so far generally poorly quantified on a pan-arctic scale, except for some regions covered by more intensive research. Multiple methods and data are used to map permafrost disturbances like RTS, including in-situ mapping. However, due to the remoteness and reduced accessibility, Earth observation data is the primary source of RTS inventories. While RTS mapping is also done manually, utilizing expert knowledge and high-resolution remote sensing imagery, machine learning techniques are increasingly used to segment permafrost features from satellite images. However, due to the requirement to process large amounts of data and the reduced availability of suitable image data, especially in the high latitudes, these datasets still often lack the temporal and spatial coverage needed to derive insights related to recent global environmental changes.
Current advancements in artificial intelligence-based inference methods make feature segmentation much more feasible and efficient, so activities mapping RTS from high-resolution PlanetScope images with deep-learning methods, such as the DARTS dataset, already cover large RTS-affected regions. Nevertheless, a full pan-arctic coverage over multiple time steps is still lacking. To expand the existing body of RTS inventories, we use deep learning methods to detect these permafrost features in Sentinel-2 imagery and create a multi-year dataset of detected thaw slumps in the circumpolar arctic. The comparison with existing manually labelled and automatically derived high-resolution thaw slump inventories provides a quantifiable verification to estimate uncertainties. This is crucial for evaluating Sentinel-2 as a high-resolution dataset with favourable properties in terms of data availability and processing requirements compared to commercial and access-restricted VHR imagery. Our new dataset can underpin downstream tasks to extend RTS classification, understand trigger mechanisms, and improve vulnerability mapping. Also, time series of RTS disturbance data may be used for temporal and spatial correlation with climate reanalysis and atmospheric datasets for large-scale climate change impact modelling and feedback evaluation over the permafrost domain. Additionally, the open architecture of the processing pipeline can be used to implement near-real-time monitoring services based on the Sentinel-2 data release stream for public access, as provided, for example, by the Permafrost Discovery Gateway (PDG) web portal of the Arctic Data Center. We present the dataset as well as the ongoing work and current key downstream results of the RTS segmentation.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Representation of canopy effects in global-scale monitoring of permafrost

Authors: Dr. Frederieke Miesner, Dr. Simone M. Stuenzi, Sebastian Westermann, Annett Bartsch
Affiliations: Department of Geosciences, University of Oslo, Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Department of Geography, Humboldt-Universität zu Berlin, b.geos
Permafrost ground temperature and thaw depth are monitored as Essential Climate Variables (ECV) but cannot be directly quantified from space-borne sensors. The ESA Permafrost_CCI project uses a computationally efficient transient heat conduction scheme in the CryoGrid community model to estimate these ECV globally, based on remotely sensed data sets of Land Surface Temperature and landcover, as well as ERA-5 reanalysis. With multiple ensembles per 1km grid cell, the model statistically accounts for sub-pixel variations in snow depth and ground stratigraphies. Vegetation effects are only included implicitly through parameterized influence of vegetation on snow depth, organic layers in the soil stratigraphy, and soil hydrology. About a quarter of the continuous permafrost region is forested, with canopies strongly modifying permafrost thermal and hydrological dynamics. In summer, canopies reflect and absorb up to 90% of incoming radiation, cooling the ground beneath. Seasonal snow accumulation is also influenced by canopy characteristics, such as whether the dominant vegetation is deciduous or evergreen. In summary, forest canopies lead to reduced temperature amplitudes and forest loss has been linked to increased active layer depths. These effects are not explicitly represented in the Permafrost_CCI ensemble approach. This study investigates the extent to which the ensemble method can capture vegetation effects by comparing Permafrost_CCI simulations against detailed point-scale modeling using a coupled multilayer vegetation-permafrost model. The multilayer model explicitly parameterizes surface energy fluxes, including roughness sublayer effects, allowing it to simulate canopy structure impacts on vertical heat and moisture transfer. We hypothesize that the CCI approach effectively simulates mean annual ground temperatures in forested areas but overlooks seasonal variations that critically influence active layer depths, particularly in ice-rich, forested regions. 
By bridging ensemble and detailed point-scale modeling, this work enhances the understanding of subpixel land cover variability in permafrost modeling, exploring the need for integrating canopy effects into large-scale mapping efforts.
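The transient heat conduction scheme at the core of such ECV modelling can be illustrated with a single 1D explicit finite-difference step (a sketch with assumed grid spacing and a typical soil diffusivity, not the CryoGrid implementation, which also handles phase change, snow and stratigraphy):

```python
import numpy as np

def step_ground_temperature(T, dz, dt, kappa, T_surface):
    """One explicit finite-difference step of 1D heat conduction
    dT/dt = kappa * d2T/dz2, with a prescribed surface temperature
    (e.g. remotely sensed LST) and a zero-flux lower boundary."""
    T = T.copy()
    T[0] = T_surface
    lam = kappa * dt / dz**2
    assert lam <= 0.5, "explicit scheme stability (CFL) condition violated"
    T[1:-1] += lam * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]          # zero-flux bottom boundary
    return T

# 10 m soil column, 0.1 m cells, kappa ~ 1e-6 m^2/s (typical soil value).
dz, dt, kappa = 0.1, 3600.0, 1e-6
T = np.full(101, -5.0)     # ground initially at -5 degC everywhere
for hour in range(24 * 30):            # one month with a 0 degC surface
    T = step_ground_temperature(T, dz, dt, kappa, T_surface=0.0)
```

After a month the near-surface cells have warmed toward the surface temperature while the bottom of the column is still essentially unchanged, illustrating how surface forcing propagates slowly to the depths at which GTD is reported.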
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Estimating Soil Organic Carbon Mobilisation From Retrogressive Thaw Slumps With Multimodal Earth Observation Data

Authors: Kathrin Maier, Ingmar Nitze, Guido Grosse, Philipp Bernhard, Irena Hajnsek
Affiliations: ETH Zurich, Alfred-Wegener Institute Helmholtz Centre for Polar and Marine Research, University of Potsdam, Gamma Remote Sensing, Microwaves and Radar Institute, German Aerospace Centre (DLR)
Retrogressive Thaw Slumps (RTS) are slope failures triggered by abrupt permafrost thaw, occurring in ground-ice rich regions in the Arctic and on the Qinghai-Tibet Plateau. Climate warming has intensified RTS activity, manifesting itself in growing numbers and sizes of RTSs, faster retreat rates, and increased mass wasting. Accelerated RTS activity affects ecosystems and hydrology by altering ground thermal regimes, sediment and geochemical fluxes, landscape dynamics as well as carbon cycles. Soil organic carbon (SOC) that has been stored in the frozen ground is mobilised during RTS growth, partly deposited in the scar zone or transported downstream through the hydrological systems, and potentially becomes available for mineralisation. To date, measurements and empirical data on large spatial scales are lacking for RTS mass wasting quantification. The increasing availability of large-scale optical RTS inventories and geophysical permafrost datasets, facilitated by advanced deep learning techniques, are an opportunity to address this challenge. By integrating optical and Synthetic Aperture Radar (SAR)-based Earth Observation data, we developed a method to quantify RTS mass wasting through estimations of material and ground ice loss, as well as associated SOC mobilisation. Optical imagery alone allows for the quantification of changes in the extent of RTSs by distinguishing the bare ground within the scar zone from the vegetation in the undisturbed tundra or formerly active, stabilised parts. This leaves, however, ambiguity in inferring directly to RTS activity, as parts of the bare scar zone may be ancient and have not been active during the investigation period. In contrast, time series analysis of Digital Elevation Models (DEMs) allow us to directly observe RTS activity in terms of area and volume change affected by thaw and the magnitude of material erosion. 
While DEM data covering the high latitudes is scarce and often lacks sufficient temporal and spatial resolution, optical data offers higher spatial resolution and more frequent observations. Combining these Earth Observation datasets would enhance monitoring capabilities by leveraging the strengths of both data types. We utilised single-pass Interferometric SAR (InSAR) processing of observations from the German TanDEM-X mission to generate Digital Elevation Models (DEMs) with a 10 m spatial resolution and approximately 2 m height accuracy covering key permafrost areas. We produced differential DEMs (dDEMs) from DEM pairs over three time steps during the last decade (2010/11, 2016/17, and 2020/21). RTS annotations based on high-resolution optical imagery and deep learning from the DARTS dataset [1] define the affected thaw area, enabling material loss calculations based on negative elevation change in the dDEMs. We derive allometric scaling parameters that allow us to estimate volume change from optical RTS area change. By incorporating datasets and modelling approaches for ground ice content, active layer thickness, and SOC content, we estimated the total ground ice lost and SOC mobilised due to RTS activity in the permafrost regions of interest during the study period. We present empirical data of decadal RTS mass wasting in key permafrost regions that can be used to improve and validate large-scale modelling efforts, contributing to our understanding of permafrost carbon cycle dynamics and the broader impacts of climate change on the Arctic and permafrost regions. [1] Nitze, I., Heidler, K., Nesterova, N., Küpper, J., Schütt, E., Hölzer, T., Barth, S., Lara, M., Liljedahl, A., & Grosse, G. (2024). DARTS: Multi-year database of AI-detected retrogressive thaw slumps in the circum-arctic permafrost region. California Digital Library (CDL). https://doi.org/10.31223/x5740z (preprint).
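Allometric area-volume scaling of this kind is commonly fitted as a power law V = c * A^b by linear regression in log-log space; a sketch with invented numbers (not the study's data or coefficients):

```python
import numpy as np

# Hypothetical RTS observations: planimetric area change (m^2) and
# dDEM-derived volume loss (m^3) for slumps covered by both datasets.
area = np.array([1200., 5400., 9800., 23000., 51000.])
volume = np.array([1.9e3, 1.3e4, 2.8e4, 8.1e4, 2.3e5])

# Fit V = c * A^b: a straight line in log-log space.
b, log_c = np.polyfit(np.log(area), np.log(volume), 1)
c = np.exp(log_c)

def predict_volume(area_change):
    """Volume-loss estimate from optical area change alone."""
    return c * area_change**b

print(f"V ~ {c:.3g} * A^{b:.2f}")
```

Once fitted against the dDEM subset, such a relation lets the much more frequent optical area-change observations stand in for volume (and hence SOC mobilisation) estimates.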
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Yedoma-alas landscape elevation changes using Sentinel-1 SAR Interferometry and their drivers based on detailed geomorphological analysis and landcover changes using field data and high-resolution optical imagery, Bykovsky Peninsula, Laptev Sea region

Authors: Aleksandra Veremeeva, Tazio Strozzi, Frank Guenther, Alexander Kizyakov, Cornelia Inauen, Nina Jones, Mikhail Kanevskiy, Anne Morgenstern, Ingmar Nitze, Anfisa Pismeniuk, Elizaveta Rivkina, Guido Grosse
Affiliations: The Alfred Wegener Institute, Helmholtz Centre For Polar And Marine Research, Gamma Remote Sensing, Neubrandenburg University of Applied Sciences, Lomonosov Moscow State University, Institute of Northern Engineering, University of Alaska Fairbanks, University of Oslo, Institute of Physicochemical and Biological Problems in Soil Science RAS
Yedoma landscapes formed by late Pleistocene ice-rich Yedoma Ice Complex deposits cover vast lowlands of NE Siberia, Canada and Alaska. These areas are among the most vulnerable permafrost regions in a warming climate. Widespread thermokarst at the end of the late Pleistocene and beginning of the Holocene led to significant transformation of the initial Yedoma plains, which resulted in the formation of numerous thaw lakes and subsequently drained thermokarst lake basins (alases). There is a general lack of knowledge of modern Yedoma-alas landscape elevation changes due to thaw processes and their dependence on geomorphological conditions and other characteristics at the local scale. The aims of our study are (1) to determine recent Yedoma-alas landscape elevation changes using in-situ measurements and Sentinel-1 satellite synthetic aperture radar interferometry (InSAR) and (2) to investigate the local-scale geomorphological patterns and drivers of elevation changes and spatial differences based on field and remote sensing data. We used a series of Sentinel-1B images from 2016 to 2021 on the Bykovsky Peninsula (NE Siberia) to compute mean surface elevation change rates and average summer values. We also conducted an analysis of landcover changes based on historical and modern high-resolution optical remote sensing imagery and machine learning-based classification of robust trends of multi-spectral indices derived from 2000-2022 Landsat data, as well as Sentinel-2 data (2020-2023) for the mean spring (May, June) NDWI. We detected 21 landform types characterized by different drainage conditions, permafrost characteristics, microtopography, soils, and vegetation cover: eight types of Yedoma uplands and slopes, ten types of alases, thermoerosion valleys, pingos, and sand beaches.
Based on an analysis of the morphology, polygonal relief characteristics and their changes using time series of historical and modern high-resolution RS data, we detected the stage of ice-wedge polygon network development for each relief type in order to interpret elevation changes. Seasonal subsidence based on summer average values is up to 3 cm on Yedoma uplands. Maximum values were detected for poorly drained, low-elevation alas relief types (3.5–4.7 cm), caused by higher wetness due to snow melt or high precipitation. A thaw subsidence trend rather than heaving prevailed on all relief types. Subsidence rate values for Yedoma upland slopes (>5°) depend on slope exposition due to the right-looking geometry of the Sentinel-1 satellite: western-exposition slopes show higher deformation values due to line-of-sight sensitivity (1D measurements of 3D changes), with average summer value differences of up to 1.4 cm and rate differences of up to 0.4 mm. Yedoma upland relief types with slopes of less than 5° and a prevailing ice-wedge degradation stage are characterized by subsidence rates of about 1 cm/year, with the highest values (1.1±0.4 cm/year) identified on slopes with initial formation of thermokarst mounds (baydzherakhs). Drained alas relief types (with slope angles up to 3–5°) with an ice-wedge degradation stage have subsidence rates of 0.5–1 cm/year. Low-elevation alas relief types with an ice-wedge growth stage are not expected to be affected by thaw subsidence, but show subsidence rates of 0.7–1 cm/year. Landsat-based NDVI analysis (2000–2022) shows an increasing trend for all relief types, with the highest values in low-elevation alas relief types, which also contradicts the high subsidence rates there.
Detailed geomorphological and landcover change analysis based on field data, the Landsat trend study, and the analysis of optical high-resolution time-series imagery allows us to detect the current status of landscapes and permafrost conditions, which helps to interpret the local spatial patterns of elevation changes obtained with InSAR.
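The aspect dependence of the slope measurements stems from projecting 3D surface motion onto the 1D radar line of sight; a minimal sketch (illustrative geometry values and a standard ENU-to-LOS convention, not the study's processing):

```python
import numpy as np

def los_displacement(d_enu, incidence_deg, heading_deg):
    """Project an (east, north, up) displacement vector onto the line of
    sight of a right-looking SAR (positive = motion toward the satellite).
    heading_deg is the platform heading, clockwise from north."""
    inc, head = np.radians(incidence_deg), np.radians(heading_deg)
    u_los = np.array([-np.sin(inc) * np.cos(head),   # east component
                      np.sin(inc) * np.sin(head),    # north component
                      np.cos(inc)])                  # up component
    return float(np.dot(d_enu, u_los))

# 1 cm of downslope motion on 10-degree west- and east-facing slopes, seen by
# a descending pass (heading ~190 deg, incidence ~39 deg; assumed values).
slope = np.radians(10)
west_facing = np.array([-np.cos(slope), 0.0, -np.sin(slope)]) * 0.01
east_facing = np.array([np.cos(slope), 0.0, -np.sin(slope)]) * 0.01
los_west = los_displacement(west_facing, 39, 190)
los_east = los_displacement(east_facing, 39, 190)
```

With this geometry the west-facing slope yields the larger absolute LOS value, consistent with the aspect dependence reported above.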
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Evolution and Variability of Post-Wildfire Permafrost Deformation in Northern Canada Under Climate Change Investigated by Sentinel-1 InSAR

Authors: Zetao Cao, Professor Masato Furuya
Affiliations: Graduate School of Science, Hokkaido University, Faculty of Science, Hokkaido University, Arctic Research Center, Hokkaido University
In circum-arctic permafrost regions, wildfires are natural disturbances that affect permafrost dynamics, and permafrost can demonstrate resilience to such disturbances under stable climatic conditions. However, the combination of more frequent and severe wildfire regimes and rapid climate warming profoundly threatens permafrost landscapes. The interplay between these drivers raises critical questions: How does permafrost respond to the compounded impacts of wildfires and extreme heat anomalies? To what extent does post-wildfire ground deformation evolve? How do environmental factors mediate permafrost resilience to fire disturbances? By performing Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) time-series analysis, this study investigates post-wildfire permafrost ground deformation across fire scars in the Northwest Territories and Yukon, northern Canada. These regions are underlain by continuous permafrost and exhibit diverse climatic and ecological conditions, but have experienced evident climate change in recent years, thus offering a unique opportunity to explore spatially variable deformation patterns and evolution. Our findings reveal that wildfires universally induce ground subsidence as a result of post-fire permafrost degradation, with the magnitude, spatial patterns, and temporal evolution of deformation varying significantly across regions. In the lower Mackenzie Valley, Northwest Territories, a 2019 needle-leaf forest fire led to pronounced subsidence, with subsidence rates accelerating from 20 mm/year to 60 mm/year along the line of sight (LOS) between 2020 and 2023. This acceleration is attributed to a combination of rapid post-2019 warming and extreme heat anomalies in 2023, indicating the weakened resilience of permafrost to compounding disturbances.
Similarly, in the taiga needle-leaf forest region near Rorey Lake, Northwest Territories, a 2017 wildfire caused sustained permafrost subsidence, reaching up to 40 mm in thawing seasons from 2018 to 2024. The subsidence exhibited no signs of deceleration under continued warming, while uplift signals in burned lowlands during early freezing seasons were amplified due to frost heave. Conversely, burned mountain ridges exhibited limited deformation, likely due to thinner soil layers and lower ground ice content. In contrast, several burned areas in the north Yukon region displayed distinct permafrost deformation dynamics. In this region, grassland and shrubland ecosystems dominated by lichen and moss experienced lower burn severity due to lower biomass. Furthermore, rapid recovery of grasses mitigated albedo effects within a year, resulting in subsidence rates that peaked at 20 mm in the first post-fire year and then rapidly declined to below 10 mm in subsequent thawing seasons, despite the relatively high ground ice content in this region. This study highlights the increasingly severe threat to permafrost in circum-Arctic regions from the combined impacts of wildfires and extreme heat anomalies. Simultaneously, it demonstrates the effects of environmental factors, such as vegetation type, topography, surface geology, and ground ice content, in shaping post-wildfire permafrost deformation patterns. These insights are essential for improving models of permafrost response under future climate and disturbance scenarios, aiding in the development of targeted adaptation and mitigation strategies.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: ESA CCI+ Permafrost - Validation using international and national permafrost monitoring networks

Authors: Birgit Heim, Mareike Wieczoreck, Anna Irrgang, Cécile Pellet, Reynald Delaloye, Dr Guido Grosse, Antonie Haas, Kirsten Elger, Sebastian Westermann, Line Rouyet, Dr Frank Martin Seifert, Tazio Strozzi, Annett Bartsch
Affiliations: Alfred Wegener Institut Helmholtz Center for Polar and Marine Research (AWI), PERMOS; University of Fribourg, GFZ German Research Centre for Geosciences, University of Oslo, Department of Geosciences, NORCE Norwegian Research Centre, European Space Agency (ESA), GAMMA Remote Sensing, B.GEOS
The ESA Permafrost Climate Change Initiative (CCI+) (Permafrost_cci, phase I: 2018-2021; phase II: 2022-2025) produces Essential Climate Variable (ECV) time series for permafrost. These include maps at annual resolution of permafrost Ground Temperature per Depth (GTD) [°C] at different depths down to 10 m, Active Layer Thickness (ALT) [cm] and permafrost probability (Permafrost FRaction, PFR) [%]. Version 3 covers 1997–2019, while Version 4 is updated and extended to the year 2021. According to the CEOS Quality Assurance framework for Earth Observation (QA4EO) and the ESA CCI guidelines, validation must be independent of the product generation process. Suitable reference data need to adhere to established protocols, community-endorsed management practices, and open publication standards. Consistent with these principles, validation is conducted with in situ data from the WMO/GCOS Global Terrestrial Network for Permafrost (GTN-P) - managed by the International Permafrost Association (IPA) - which meet QA4EO criteria for measurement accuracy, data collection, and open publication practices. Within the GTN-P/IPA framework, the Thermal State of Permafrost Monitoring (TSP) program is responsible for global permafrost temperature monitoring through borehole temperature depth profiles as well as shallow ground temperature profiles, while the Circumpolar Active Layer Monitoring (CALM) program provides global monitoring of Active Layer Thickness (ALT). To enhance the comprehensiveness of reference datasets, Permafrost_cci integrates additional in situ data from national monitoring programs and individual Principal Investigators (PIs). To address the specific challenges of mountainous regions, the GTN-P/PERMOS mountain permafrost monitoring program in Switzerland uses its high-level monitoring data to validate the 1 km grid cell Permafrost_cci ECV products in mountainous regions.
The Permafrost_cci validation team compiled extensive in situ observations across the Northern Hemisphere to create reference datasets for Mean Annual Ground Temperature (MAGT) and Active Layer Thickness (ALT). The creation of a new MAGT reference dataset involved the harmonization and standardization of ground temperature measurements spanning a broad range of temporal frequencies and depths (from shallow depth profiles to depths of 20 m). Although the observational time series from the communities included partially standardized datasets, additional processing was necessary, including error-checking, depth standardization, and harmonization of metadata. These harmonised data collections are published in the PANGAEA data repository. The circum-Arctic MAGT data publication represents a consistent ground temperature dataset with standardized measurement depths, encompassing shallow temperature observations that can also be used for applications in the climate and land surface model communities. Quality assessments of the Permafrost_cci products are conducted through detailed point-wise comparisons, aligning locations, depths, and equivalent years. Standard statistical metrics, including bias, absolute error, relative percentage error, and root mean square error (RMSE), are used to evaluate accuracy and performance. In addition, Permafrost_cci employs innovative approaches, including comparisons of GTD products with EO-derived Freeze-Thaw to Temperature (FT2T) products (b.geos) and leveraging EO-derived rock glacier inventories (PERMOS) for validating mountain permafrost areas. The rock glacier inventories were developed by the ESA Data User Element (DUE) GlobPermafrost team in 2016 and have been expanded in Permafrost_cci phase I and, worldwide in 12 mountain regions, in phase II.
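The point-wise validation metrics named above (bias, absolute error, RMSE) can be sketched in a few lines; this is an illustrative snippet, not the Permafrost_cci validation code, and the function name is our own:

```python
import numpy as np

def validation_metrics(modelled, observed):
    """Point-wise comparison of modelled vs. in situ values at
    matched locations, depths and years (e.g. MAGT in degrees C)."""
    diff = np.asarray(modelled, float) - np.asarray(observed, float)
    return {
        "bias": float(diff.mean()),                  # mean error
        "mae": float(np.abs(diff).mean()),           # mean absolute error
        "rmse": float(np.sqrt((diff ** 2).mean())),  # root mean square error
    }
```

For example, modelled MAGT values of [-2.0, -1.0] °C against observed [-1.5, -1.5] °C give zero bias but an RMSE of 0.5 °C, illustrating why bias alone is not a sufficient accuracy measure.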
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Investigating Features of a Permafrost Landscape With Multi-Frequency Airborne SAR Tomography

Authors: Paloma Saporta, Dr Alberto Alonso González, Matteo Pardini, Irena Hajnsek
Affiliations: German Aerospace Center (DLR), ETH Zurich, Universitat Politècnica de Catalunya
Permafrost landscapes in the Arctic are changing due to rapidly rising temperatures in the context of climate change. Monitoring permafrost landscapes is difficult because these zones are remote and affected by polar night, and also because the permafrost itself lies under a vegetation cover in summer and snow in winter. Synthetic Aperture Radar (SAR) remote sensing is therefore particularly interesting for monitoring permafrost characteristics. SAR Tomography (TomoSAR) is an established technique for reconstructing the vertical reflectivity profile at a given pixel in the scene, which corresponds to the distribution of scatterers in the vertical direction at this point, as seen by the radar. TomoSAR has previously been used to investigate scattering profiles in forest [1], agricultural field [2] and ice [3] scenarios, but the technique has never been applied to analyze the vertical profiles of permafrost landscapes. In this study, we use the airborne SAR dataset collected by the German Aerospace Center (DLR) during a campaign in Canada extending over two seasons: summer 2018 and winter 2019 [4]. We focus in particular on the test site located at Trail Valley Creek (68.7 °N, 133.5 °W), which is in the continuous permafrost zone in the Canadian low Arctic. The site features gently rolling hills covered with tundra vegetation [5]. DLR's airborne SAR sensor (F-SAR) was operated at several frequencies (X-, C- and L-band), on several across-track baselines and in fully polarimetric mode, which makes the dataset suitable for a TomoSAR analysis. In summer, the vegetation-covered ground is thawed to a depth of several decimetres, while in winter, the ground is completely frozen and covered with several decimetres of snow. This last property is particularly interesting for SAR, as the radar waves are expected to penetrate to a certain extent into the frozen soil, thereby modifying the vertical reflectivity profile.
Consequently, we will focus on the winter acquisitions to investigate different features of the landscape with TomoSAR profiles, and use the summer acquisitions as a reference in this new and challenging scenario. Up-to-date results of the tomographic analysis and the penetration capability over a selected permafrost region will be presented at the ESA Living Planet Symposium.
[1] V. Cazcarra-Bes, M. Pardini, M. Tello, K. P. Papathanassiou, "Comparison of Tomographic SAR Reflectivity Reconstruction Algorithms for Forest Applications at L-band", IEEE Transactions on Geoscience and Remote Sensing, 58(1), 147-164, 2020
[2] H. Joerg, M. Pardini, I. Hajnsek, K. P. Papathanassiou, "3-D Scattering Characterization of Agricultural Crops at C-Band Using SAR Tomography", IEEE Transactions on Geoscience and Remote Sensing, 56(7), 3976-3989, 2018
[3] F. Banda, J. Dall, S. Tebaldini, "Single and Multipolarimetric P-Band SAR Tomography of Subsurface Ice Structure", IEEE Transactions on Geoscience and Remote Sensing, 54(5), 2832-2845, 2016
[4] I. Hajnsek, H. Joerg, R. Horn, M. Keller, D. Gesswein, M. Jaeger, R. Scheiber, P. Bernhard, S. Zwieback, "DLR Airborne SAR Campaign on Permafrost Soils and Boreal Forests in the Canadian Northwest Territories, Yukon and Saskatchewan: PermASAR", POLINSAR 2019; 9th International Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry, 2019
[5] I. Grünberg, E. J. Wilcox, S. Zwieback, P. Marsh, and J. Boike, "Linking tundra vegetation, snow, soil temperature, and permafrost", Biogeosciences, 17(16), 4261-4279, 2020
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: 3D geospatial mapping of Arctic permafrost carbon

Authors: Friedrich Röseler, Prof. Dr. Guido Große, Prof. Dr. Gerard Heuvelink
Affiliations: Wageningen University & Research (WUR), World Soil Information (ISRIC), Alfred Wegener Institute (AWI)
Soils are the largest terrestrial carbon pool in the global carbon cycle. Although the northern circumpolar permafrost region makes up only 15% of the global soil area, it contains up to half of the global soil carbon pool and twice as much carbon as is currently in the atmosphere. At the same time, the Arctic is warming two to four times faster than the global average, causing widespread permafrost thaw. This exposes substantial amounts of soil organic carbon (SOC), accumulated and contained in the permafrost over millennia, to microbial decomposition. The resulting enhanced greenhouse gas emissions create a potentially strong climate feedback, the so-called Arctic permafrost carbon feedback, that could further amplify global warming. Permafrost soils are known for their high lateral and vertical variability, creating challenges for quantifying this carbon pool. Considerable uncertainties persist in mapping Arctic permafrost carbon, particularly at greater depths. Here, we investigated how 3D digital soil mapping (DSM) compares to conventional 2.5D DSM, both in terms of predicted SOC density over depth and in terms of prediction accuracy. We predicted SOC density down to 3 m depth using random forest models based on covariates from a digital elevation model and climate data. First, we harmonized a large number of soil observations (740 soil profiles and 9,000 samples) in a new Circum-Arctic Soil Permafrost Region (CASPeR) reference database. CASPeR was then used to train and evaluate the 2.5D and 3D DSM models using leave-profile-out cross-validation. We evaluated the models using the Nash-Sutcliffe model efficiency coefficient (MEC) and RMSE. Moreover, we developed a Python package to enable scalable spatial prediction of SOC density and other soil properties across the Arctic permafrost region, and predicted SOC density at 30 m spatial resolution for three sub-regions (780,000 km2 in total).
As a next step, we plan to investigate vulnerabilities of soil carbon to potential mobilization. To that end, we will perform a geospatial analysis of the relationship between our modeled carbon storage and both remote sensing-derived permafrost disturbance regimes over the 20-year period 2003-2022 and permafrost ground temperatures, which are both products of the ESA CCI Permafrost project. Results from our modeling indicate that the two approaches perform similarly, with 3D DSM showing no significant improvement over 2.5D DSM. Evaluation also showed that our model explains about half of the variation in the upper 1 m of soil, with decreasing accuracy in deeper soil layers. We attribute this to the limited predictive power of the covariates currently included and the lack of reference data in deeper soil layers. Nevertheless, 3D DSM offers advantages, with evidence that depth is an important predictor of SOC. Future studies should investigate additional covariates and harmonize more reference data, including for deeper soil layers.
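The 2.5D-vs-3D comparison described above hinges on leave-profile-out cross-validation scored with MEC and RMSE. A minimal sketch of that evaluation loop follows, using synthetic data and illustrative covariates rather than the CASPeR database or the authors' pipeline; MEC is computed here as the Nash-Sutcliffe efficiency, which for point-wise comparison equals scikit-learn's r2_score against observations:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import LeaveOneGroupOut

# Synthetic stand-in for a profile database: 30 profiles, 5 depths each.
rng = np.random.default_rng(0)
n_profiles, per_profile = 30, 5
profile_id = np.repeat(np.arange(n_profiles), per_profile)
depth = np.tile(np.linspace(0.1, 3.0, per_profile), n_profiles)          # m
elevation = np.repeat(rng.uniform(0.0, 500.0, n_profiles), per_profile)
temperature = np.repeat(rng.uniform(-12.0, 0.0, n_profiles), per_profile)
# Hypothetical SOC density declining with depth, plus observation noise.
soc = (40.0 * np.exp(-depth) + 0.02 * elevation - 0.5 * temperature
       + rng.normal(0.0, 2.0, depth.size))

# 3D DSM: depth enters the model as an ordinary covariate.
X = np.column_stack([depth, elevation, temperature])

# Leave-profile-out cross-validation: all samples of one profile are
# held out together, mimicking prediction at an unvisited location.
pred = np.empty_like(soc)
for train, test in LeaveOneGroupOut().split(X, soc, groups=profile_id):
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X[train], soc[train])
    pred[test] = rf.predict(X[test])

mec = r2_score(soc, pred)                      # Nash-Sutcliffe efficiency
rmse = float(np.sqrt(mean_squared_error(soc, pred)))
```

Grouping the folds by profile rather than by sample is what prevents vertically correlated samples from the same borehole leaking between training and test sets.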
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Permafrost Surface Deformation During and Following A Climate Extreme from Sentinel-1 InSAR

Authors: Simon Zwieback, Jie Chen
Affiliations: Geophysical Institute, University Of Alaska Fairbanks
Northwestern Alaska experienced extremely warm temperatures and deep snow in 2018-2019, during which deep thaw and, in the discontinuous permafrost zone, talik formation were observed in situ. Did this extreme event induce sustained permafrost degradation on regional scales? During previous extremes, local observations revealed sustained thaw on subdecadal time scales, stimulated by positive feedback processes such as subsidence-promoted wetting and snow accumulation, which in turn reinforce thaw. However, we lack regional-scale observations to determine the occurrence of, and landscape controls on, such sustained thaw. Here, we use Sentinel-1 and ALOS-2 InSAR to test to what extent sustained thaw followed the 2018-2019 climate extreme. Our study area near the Iñupiat town of Kivalina is close to the southern boundary of the continuous permafrost zone and contains a range of ecotypes and surficial geologies. The rationale of our approach is that at locations where the upper permafrost is ice rich, sustained thaw induces subsidence over multiple years, which we measure using InSAR. To identify locations where the upper permafrost contains substantial amounts of excess ice, we estimated the late-season subsidence in the extreme summer of 2019. The idea is that as the thaw front penetrated into the (previous years') permafrost toward the end of that summer, the melting of excess ice would have induced subsidence in ice-rich locations. Our InSAR observations show substantial but short-lived subsidence over ice-rich ecotypes. During the climate extreme, ~5 cm of net subsidence was observed on average, predominantly at the end of the extremely warm summer of 2019. However, this seemingly did not trigger sustained thaw. Net post-extreme subsidence was rare in the years following the extreme. The vast majority of ice-rich locations remained stable or showed up to ~2 cm of uplift, depending on ecotype.
The observations suggest that deep thaw during the extreme summer did not trigger strong positive feedbacks that fueled sustained permafrost degradation on a landscape scale. Meteorological observations and thermal modeling suggest that the inferred resilience of the landscape was bolstered by cool summers and low-snow winters that followed the climate extreme. Toward a more comprehensive characterization of surface dynamics following the extreme, we are mapping changes in ice wedge troughs and in albedo, greenness and wetness. InSAR and complementary remote sensing observations provide valuable constraints on permafrost dynamics following climate extremes. In Northwestern Alaska, the inferred resilience across ecotypes can be used to test and parameterize models and to constrain the climate extreme's legacy on the carbon, energy and water cycles.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Integrating Remote Sensing Observations to Quantify Volume Rates of Thaw-Induced Coastal Erosion in the Arctic

Authors: Steven Palmer
Affiliations: University Of Exeter
Permafrost underlies ~25% of the Northern Hemisphere land surface and more than 50% of the ice-free land north of 60°N (Olefeldt et al., 2016). Arctic amplification of climate warming has led to a mean air temperature increase of ~5 °C over the past century – several multiples of the global average – and this rapid warming is driving significant landscape changes (Turetsky et al., 2020). As permafrost thaws, active layer thickness increases and soils are destabilised by gravity and rainfall, leading to a range of local impacts including coastal erosion and subsidence of built infrastructure. This thaw is making permafrost soil organic carbon susceptible to microbial breakdown, though future rates and quantities of these carbon emissions are highly uncertain (Schuur et al., 2015). Arctic warming has also led to a dramatic decline in Arctic sea ice extent, and larger stretches of open water allow greater fetch, increasing wave energy at Arctic coastlines. At the same time, warming has led to a reduction in both landfast sea ice and ground ice, increasing the vulnerability of coastlines to erosion. These changes amplify the impacts of storm surge and wave action, increasing coastal erosion, flooding, and infrastructure damage (Nielsen et al., 2022). In this project, ESA Sentinel-1 and -2 observations are combined with time series of elevation changes from the University of Minnesota Polar Geospatial Center's ArcticDEM (Morin et al., 2016) to reveal patterns of erosion and to quantify volume rates of change along a range of Arctic coastlines. This approach allows mechanisms of erosion in coastal zones to be identified, including thermokarst features such as retrogressive thaw slumps and active layer detachment slides. These new observations are compared with in situ observations of active layer thickness changes and borehole temperature records to better understand the spatio-temporal variability in ground-ice thaw.
This work paves the way for systematic analyses of the climatic drivers of coastal erosion and other impacts of permafrost thaw across the Arctic.
References
Morin, P., Porter, C., Cloutier, M., Howat, I., Noh, M. J., Willis, M., ... & Peterman, K. (2016, April). ArcticDEM; a publically available, high resolution elevation model of the Arctic. In EGU General Assembly Conference Abstracts (pp. EPSC2016-8396).
Nielsen, D. M., Pieper, P., Barkhordarian, A., Overduin, P., Ilyina, T., Brovkin, V., ... & Dobrynin, M. (2022). Increase in Arctic coastal erosion and its sensitivity to warming in the twenty-first century. Nature Climate Change, 12(3), 263-270.
Noh, M. J., & Howat, I. M. (2015). Automated stereo-photogrammetric DEM generation at high latitudes: Surface Extraction with TIN-based Search-space Minimization (SETSM) validation and demonstration over glaciated regions. GIScience & Remote Sensing, 52(2), 198-217.
Olefeldt, D., Goswami, S., Grosse, G., Hayes, D., Hugelius, G., Kuhry, P., ... & Turetsky, M. R. (2016). Circumpolar distribution and carbon storage of thermokarst landscapes. Nature Communications, 7(1), 13043.
Schuur, E. A., McGuire, A. D., Schädel, C., Grosse, G., Harden, J. W., Hayes, D. J., ... & Vonk, J. E. (2015). Climate change and the permafrost carbon feedback. Nature, 520(7546), 171-179.
Turetsky, M. R., Abbott, B. W., Jones, M. C., Anthony, K. W., Olefeldt, D., Schuur, E. A., ... & McGuire, A. D. (2020). Carbon release through abrupt permafrost thaw. Nature Geoscience, 13(2), 138-143.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: AI4Sen2Cor: an AI-Based Approach for Integrating Geospatial Detection in Copernicus Sentinel-2 Products

Authors: Francesco Cristiano Pignatale, Satish Madhogaria, Davinder P. Singh, Claudio Mammone, Bodo Werner, Patrick Griffiths
Affiliations: Telespazio Germany GmbH, ESA-ESRIN
In remote sensing (RS) and Earth observation (EO) studies, combinations of spectral properties have been extensively used to characterize and classify observed scenes, with spectral indices returning important information on their properties, nature, and status. In recent times, Artificial Intelligence (AI) and Machine Learning have seen an unprecedented rise in these fields and constitute an invaluable additional resource for classification and environmental monitoring. The general concept of scene classification also includes the detection of artificial objects and the effects of human activities on the observed scene. Large datasets (e.g. the Copernicus Sentinel-2 Mission, hereafter 'S2', L1C and L2A products) are publicly available and provide an important resource for monitoring natural and anthropogenic phenomena. Consequently, providing software capable of rapid processing and correct feature extraction and interpretation is a challenging task, especially considering the associated data resolution (e.g. 10/20/60 m for S2). In this work we present AI4Sen2Cor, an AI study based on the official S2-L2A processor, Sen2Cor. Sen2Cor provides a Scene Classification (SCL) and atmospherically corrected surface reflectance products together with a series of quality parameters. The current Sen2Cor SCL includes pixel-based classification of vegetated/non-vegetated areas, water, cloud probabilities, thin cirrus, snow, and cast and cloud shadows, together with quality information. The project's goal is to realize a reliable framework that integrates the current Sen2Cor features and S2 processing baseline with AI-based Geospatial Intelligence capabilities, to include multi-spectral information for object and feature detection, their temporal variation, and associated statistics.
The project followed these main areas of development: (i) preparation of a curated dataset, including optimization and augmentation; (ii) thorough analysis, testing and selection of available AI-based algorithms for feature segmentation (SG; forest, water) and object detection (OD; e.g. boats, islands, bridges, docks, and flying airplanes); (iii) development of an orchestrator and auxiliary functionalities for Sen2Cor integration and for handling the newly produced output; (iv) development of a new Jupyter-based analysis tool. DeepLabv3 (for SG) and YOLOv5 and YOLOv8 (for OD) were chosen as final models after a series of experiments and performance tests. Dedicated functionalities and a configuration file allow the selection and handling of the newly implemented modules and output, which integrate seamlessly within Sen2Cor. Results include georeferenced maps and masks of detected features and objects, and a series of metadata. A dedicated in-house tool has been developed to analyse the temporal variation in each scene. This tool produces a series of heat maps and histograms, giving the user the necessary means for interpreting the results obtained by AI4Sen2Cor. Our results show that geospatial intelligence activities (SG and OD) in single and multiple scenes, using S2 EO data, are feasible and can return important information on scenes' natural and anthropogenic variations. Furthermore, AI-based functionalities can be seamlessly integrated in the current L2A processor. CPU- and GPU-based computations are available as selectable options, allowing processing on a range of hardware configurations. The produced datasets and models will soon be available on the Earth Observation Training Data Lab (EOTDL) platform. The authors wish to thank the European Space Agency for funding this project via the programme "Future EO-1 EO Science for Society" as part of the Open Call (Activity Line 3: Artificial Intelligence for EO).
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Accelerated lowland thermokarst development revealed by drone photogrammetric surveys in the Stordalen mire, Abisko, Sweden

Authors: Maxime Thomas, Thomas Moenaert, Julien Radoux, Baptiste Delhez, Éléonore du Bois d’Aische, Maëlle Villani, Catherine Hirst, Erik Lundin, François Jonard, Sébastien Lambot, Kristof Van Oost, Veerle Vanacker, Matthias Siewert, Carl Magnus Mörth, Michael Palace, Ruth K. Varner, Franklin Sullivan, Christina Herrick, Sophie Opfergelt
Affiliations: Earth and Life Institute, Université catholique de Louvain, Department of Earth Sciences, Durham University, Abisko Scientific Research Station, Swedish Polar Research Secretariat, Earth Observation and Ecosystem Modelling Laboratory, Université de Liège, Department of Ecology and Environmental Science, Umeå University, Department of Geological Sciences, Stockholm University, Department of Earth Sciences and Institute for the Study of Earth, Oceans and Space, University of New Hampshire
One of the responses of permafrost terrains to climate change manifests locally as physical degradation, ground collapse and subsidence, known as thermokarst development. Some thermokarst features, such as retrogressive thaw slumps or lake area losses, can be monitored by satellite remote sensing, yet this remains a challenge due to the highly dynamic nature and sometimes fine scale of these disturbances. For lowland landscapes, where thermokarst development leads to the formation of ponds or wetlands, the study of degradation is even more complicated, since landscape deformations occur on the order of a few tens of centimeters per year, both horizontally and vertically. Yet monitoring this type of thermokarst landscape is crucial, since greenhouse gas emissions from these landscapes depend directly on soil moisture conditions and microtopography. Here, we studied the rate of permafrost degradation in the form of lowland thermokarst using 10-cm resolution UAV-derived RGB orthomosaics and digital surface models over a time series from 2014 to 2022 in Stordalen, near Abisko, Sweden. It emerges that information on topography is crucial for obtaining a model of reasonable quality, i.e., it increases the overall accuracy of the model from 41% to 77%. We show that degradation has accelerated significantly in recent years, with a decrease in intact permafrost area of 0.9-1.1%/a for 2019-2021, compared to 1970-2000 (~0.2%/a) and 2000-2014 (~0.04%/a). This physical degradation of permafrost leads to an increase in soil moisture, resulting in a decrease in organic carbon stability and a projected increase in methane emissions in the area. Similar studies across the Arctic also tend to show accelerating degradation in recent years: the loss of intact permafrost is expected to continue, with a non-linear decline, most likely at a higher rate than today.
Interferometric synthetic aperture radar (InSAR) satellite technology appears very promising for extending these analyses to larger areas, as it can detect vertical land surface motion with millimeter precision, albeit at a spatial resolution of several meters. Future work will need to examine the match between the degradation rates calculated using UAV-derived RGB orthomosaics and digital surface models and those calculated using InSAR.
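For reference, the %/a figures quoted above follow from the change in intact permafrost area over an observation interval. A minimal sketch of that calculation (our own illustrative function, not the authors' processing chain):

```python
def degradation_rate(intact_start, intact_end, years):
    """Mean annual loss of intact permafrost area, in %/a, expressed
    relative to the intact area at the start of the interval.
    Areas may be in any consistent unit (m2, ha, pixels)."""
    loss = intact_start - intact_end
    return 100.0 * loss / (intact_start * years)
```

For example, an intact area shrinking from 100 to 98 units over two years yields 1.0 %/a, within the 0.9-1.1 %/a range reported for 2019-2021.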
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Mapping Patterned Soils in Svalbard Using Satellite Imagery and Deep Learning: Advancing Periglacial Geomorphology Analysis

Authors: Radu Irimia, PhD Ionut Sandric
Affiliations: University of Bucharest
Patterned ground, a distinctive geomorphological feature found in periglacial and polar regions, represents the remarkable ability of natural processes to create visually striking and systematically organized soil patterns. These formations, which include polygons, stripes, circles, and nets, emerge through repeated freeze-thaw cycles that act on the soil over time. The dynamics of these cycles drive a range of cryogenic processes, such as soil contraction during freezing, frost heave due to the upward movement of ice and soil, and the formation of ice lenses that segregate particles and stones into ordered structures. These processes occur under specific environmental conditions, including subzero temperatures, seasonal thawing, and the presence of water within the soil matrix, making patterned ground a hallmark of cold-climate landscapes. Understanding and mapping patterned ground are vital for multiple scientific and practical reasons. These formations offer insights into ecosystem dynamics by influencing vegetation distribution, soil moisture regimes, and microhabitats. They also play a critical role in soil stability, as the underlying processes can impact slope stability, erosion rates, and sediment transport. Moreover, the spatial patterns of these features serve as indicators of past and present climatic conditions, providing a window into the evolution of landscapes under extreme environmental stressors. The urgency to study patterned ground has increased in recent years due to accelerating climate change. Rising temperatures and changes in precipitation regimes are altering freeze-thaw cycles, potentially disrupting the delicate balance that sustains these formations. Such changes can cascade through ecosystems, impacting biodiversity, water availability, and soil carbon storage. Additionally, patterned ground plays a role in the permafrost-carbon feedback loop, where thawing permafrost releases greenhouse gases, amplifying global warming.
This study leverages high-resolution satellite imagery and orthoimages to map patterned ground in Svalbard, employing advanced deep learning techniques for precise geomorphological analysis. A U-Net convolutional neural network (CNN), optimized for semantic segmentation, was adapted to handle the challenges of smaller datasets typical in remote sensing studies. Additionally, the integration of the ResNet architecture introduced residual learning, enhancing model depth and accuracy while mitigating performance degradation in deep networks. Over 700 polygons were created manually, distributed across high-resolution satellite imagery and orthoimagery, representing a diverse range of patterned ground features in Svalbard. These samples were selected to capture the spatial and temporal variability inherent in patterned soils, ensuring a robust dataset for training and validating the machine learning models. The imagery spanned different resolutions, ranging from sub-meter to several meters per pixel, and included both recent and archived datasets to reflect the temporal distribution of features influenced by freeze-thaw cycles and environmental changes. The heterogeneity of the dataset was critical to achieving high accuracy in detecting and classifying patterned ground. The samples were drawn from various geomorphological settings, including areas with differing soil compositions, vegetation cover, and climatic influences. This diversity ensured the model's ability to generalize across a wide array of conditions and minimized the risk of overfitting to specific patterns or imaging conditions. Additionally, the inclusion of orthoimagery provided detailed spatial context and precise georeferencing, which complemented the high-resolution satellite data. These datasets captured subtle variations in soil texture, structure, and patterns that are often overlooked in coarser-resolution imagery. 
The combination of spatial heterogeneity and temporal distribution enriched the dataset, enhancing its representativeness and improving the model’s capability to identify complex patterned ground features across the broader landscape. By leveraging such a diverse dataset, the study was able to fine-tune the U-Net convolutional neural network (CNN) and achieve over 80% accuracy in both training and validation. This high-performance model enabled the large-scale detection and classification of polygonal patterned ground, offering new insights into the spatial distribution and variability of these features. The results underscore the potential of machine learning to transform geomorphological research, providing scalable tools for analysing periglacial landscapes. By highlighting the link between patterned ground dynamics and environmental change, this study contributes to a better understanding of the geomorphological and ecological processes shaping polar regions. As global temperatures rise, monitoring these features becomes increasingly critical for assessing their role in broader environmental transformations. Acknowledgement This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101086386, EO-PERSIST - A Cloud-Based Remote Sensing Data System for Promoting Research and Socioeconomic Studies In Arctic Environments (https://www.eo-persist.eu).
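Segmentation accuracies such as the 80%+ figure above are typically reported per pixel; as an illustrative sketch (not the study's evaluation code), overall pixel accuracy and the commonly paired intersection-over-union (IoU) for a binary mask can be computed as:

```python
import numpy as np

def pixel_accuracy_and_iou(pred_mask, true_mask):
    """Overall pixel accuracy and intersection-over-union for a binary
    segmentation mask (1 = patterned ground, 0 = background)."""
    pred = np.asarray(pred_mask, bool)
    true = np.asarray(true_mask, bool)
    accuracy = float((pred == true).mean())
    union = np.logical_or(pred, true).sum()
    iou = float(np.logical_and(pred, true).sum() / union) if union else 1.0
    return accuracy, iou
```

IoU is worth reporting alongside accuracy because, when patterned-ground pixels are a small fraction of a scene, a model can score high pixel accuracy while delineating the polygons poorly.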
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: ESA CCI Permafrost time series maps as Essential Climate Variable (ECV) products primarily derived from satellite measurements

Authors: Antonie Haas, Birgit Heim, Annett Bartsch, Andreas Walter, Mareike Wiezcorek, Guido Grosse, Tazio Strozzi, Sebastian Westermann, Dr Frank Martin Seifert
Affiliations: Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, b.geos, Earth and Environmental Science Institute, Gamma Remote Sensing, University of Oslo, ESA - ESRIN
We present the visualization of permafrost-related map products in the framework of the ESA DUE GlobPermafrost (2016-2018) and the ESA CCI+ Permafrost projects, phases I and II (2018-2021, 2023-2026). During the ESA DUE GlobPermafrost project, a comprehensive range of remote sensing products was processed by the project team: North-South transects in the Northern Hemisphere with trends in Landsat multispectral indices (Tasseled Cap Brightness, Greenness, Wetness; NDVI), Arctic land cover (e.g. shrub height, vegetation composition), lake ice grounding, InSAR-based land surface deformation, and rock glacier velocities. The main products were global permafrost Essential Climate Variables (ECVs) from a spatially distributed permafrost model forced by Land Surface Temperature and Snow Water Equivalent products. The ECVs include permafrost extent and probability (PEX), mean annual ground temperature (MAGT), and active layer thickness (ALT) per pixel. In the ESA CCI+ Permafrost project, a significant extension was incorporated into the products: a time series from 1997 to 2021, spanning more than twenty years, comprising the CCI+ Permafrost circum-Arctic model output for MAGT from the surface down to a depth of 10 m, PEX, and ALT. All data products are available at a yearly resolution, together with calculated averages of MAGT, PEX and ALT over the time series. To make the products visible, we established WebGIS projects within AWI's data workflow framework O2A (from Observation to Analysis and Archive). This modular, scalable, and highly automated spatial data infrastructure (SDI) has been developed and operated at AWI for over a decade, undergoing continuous improvement and providing map services for GIS clients and portals. The FAIR principles were implemented in order to meet the growing demand for discoverable and accessible research data and metadata.
The ESA Permafrost WebGIS products were designed using GIS software and have been published as Web Map Services (WMS), an internationally standardised Open Geospatial Consortium (OGC) format, using GIS server technology. Additionally, project-specific visualisations of raster and vector data products, adapted to the products' specific spatial scales and resolutions, have been developed. In addition to the data products derived from remote sensing, the locations of the WMO GCOS ground-monitoring networks belonging to the permafrost community, which are managed by the International Permafrost Association (IPA) and form part of the Global Terrestrial Network for Permafrost (GTN-P), were incorporated as a feature layer and are updated on an ongoing basis. All data products have been registered with Digital Object Identifiers (DOIs) and archived in the PANGAEA or ESA CEDA data archives.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Mapping Arctic Permafrost Polygons Through Integration of the Segment Anything Model with High-Resolution UAV Imagery and Volunteered Geographic Information

Authors: Marlin M. Mueller, Steffen Dietenberger, Markus Adam, Dr.-Ing. Clémence Dubois, Dr. Josefine Lenz, Soraya Kaiser, Dr. Moritz Langer, Oliver Fritz, Sabrina Marx, Pauline Walz, apl. Prof. Dr. Christian Thiel
Affiliations: Institute of Data Science, German Aerospace Center (DLR), Permafrost Research Section, Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Department of Earth Sciences, Vrije Universiteit Amsterdam, Heidelberg Institute for Geoinformation Technology
The accelerating degradation of Arctic permafrost necessitates efficient and scalable monitoring approaches that can accurately detect and quantify landscape changes, particularly in vulnerable ice-wedge polygon networks. This study presents a methodological framework that leverages the power of Meta's Segment Anything Model (SAM) – a foundation model trained on billions of image segments – in conjunction with high-resolution unoccupied aerial vehicle (UAV) imagery and Volunteered Geographic Information (VGI) for automated permafrost feature detection. SAM offers zero-shot transfer capabilities and prompt-based interaction, which significantly reduce the complexity of implementing sophisticated deep learning approaches and also enable non-experts to quickly assess data quality. Unlike traditional deep learning models that require extensive training data and parameter tuning, SAM can generalize to new objects and scenes with minimal human intervention. This makes it particularly well-suited for analyzing complex permafrost landscapes, where the diversity and subtle variations in features can pose challenges for traditional image analysis techniques. We developed a workflow that integrates SAM's capabilities with UAV-derived orthomosaics, focusing particularly on ice-wedge polygon delineation. Our methodology incorporates automated tiling strategies to handle large-scale imagery, optimized prompting mechanisms to guide SAM's segmentation process, and easy-to-apply rule-based post-processing designed specifically for extracting and refining permafrost landscape features. The high-resolution imagery used in this study was acquired through a collaborative effort with students from Moose Kerr School in Aklavik, Canada. Working alongside researchers, the students participated in a UAV survey campaign mapping a region characterized by diverse permafrost features using DJI Mini 2 drones.
This hands-on experience provided the students with valuable insights into remote sensing technologies and their applications in environmental monitoring. To further enhance the robustness of our approach, we incorporated an additional source of human input in the form of VGI. This VGI data, collected through a series of "mapathons" involving mostly school and university students, was used to generate a reference dataset of ice-wedge polygon centroids. This served a dual purpose: 1) it provided an independent validation dataset for evaluating the accuracy of SAM-derived polygon delineations, and 2) it revealed subtle polygon structures that may be omitted by automated methods, highlighting the complementary nature of human interpretation and AI-powered analysis. This VGI data collection involved diverse groups of participants, including university-level geography students and school students (grades 7-12), demonstrating the potential for engaging a wider public in permafrost research. Our analysis demonstrates the effectiveness of unsupervised SAM in accurately detecting and delineating ice-wedge polygons from high-resolution UAV imagery, extending the capabilities of previously presented methods. Compared to manually digitized reference polygons, the SAM approach achieved an overall area-based accuracy of 80.8%. This indicates a high level of agreement between the automatically generated polygon boundaries and the manually digitized data. Further evaluation using the Intersection over Union (IoU) metric revealed a median IoU value of 0.71 and an F1-score of 78.3%, signifying a substantial overlap between the predicted and reference polygons. Our findings demonstrate SAM's potential for significantly streamlining the permafrost monitoring workflow while maintaining high accuracy in feature detection.
The framework's ability to generalize across different landscape types, combined with its intuitive implementation, presents new opportunities for expanding monitoring efforts across the Arctic region. A key innovation of our approach lies in its minimal requirements for training data and reduced complexity in deployment, making it particularly suitable for widespread adoption in community-driven permafrost monitoring programs. This research contributes to advancing permafrost science by providing a more accessible and scalable approach to detecting and quantifying landscape changes, crucial for understanding Arctic system dynamics and climate change impacts.
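The IoU and F1 figures reported above can be reproduced from boolean segmentation masks with a few lines of NumPy; a minimal sketch on toy masks, not the study's data:

```python
import numpy as np

def iou_and_f1(pred, ref):
    """Intersection over Union and F1-score for two boolean masks,
    computed from pixel counts."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    iou = inter / union if union else 0.0
    tp = inter                                  # true positive pixels
    fp = np.logical_and(pred, ~ref).sum()       # predicted but not in reference
    fn = np.logical_and(~pred, ref).sum()       # in reference but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return iou, f1

# Two overlapping 6x6 squares on a 10x10 grid: 25 shared pixels
pred = np.zeros((10, 10), dtype=bool); pred[2:8, 2:8] = True
ref = np.zeros((10, 10), dtype=bool); ref[3:9, 3:9] = True
iou, f1 = iou_and_f1(pred, ref)
```

In the study these scores would be aggregated per polygon (e.g. the median IoU of 0.71) rather than over a single mask.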

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone F-G)

Poster: Using a Diffusion Model for Enhancing a Panarctic Permafrost Dataset acquired by Sentinel-2 and Landsat.

Authors: Lucas Von Chamier, Dr. Ingmar Nitze, Jonas Küpper, Dr. Erik
Affiliations: Alfred Wegener Institut
Data from the Sentinel-2 and Landsat missions are extremely valuable resources for research into large-scale land-cover changes in the permafrost region, due to the scale they cover in time and space and, importantly, their free accessibility. However, both missions' datasets are limited in terms of spatial resolution in the visible and NIR bands: 10 m for Sentinel-2 and 30 m for Landsat. This means small-scale surface changes cannot be investigated using these satellites, and studying them routinely requires commercial data with a higher resolution. In this study, we create a panarctic dataset of the permafrost region at a much higher resolution, using low-resolution data from Sentinel-2 and Landsat and converting these datasets into the domain of the 3.125 m resolution PlanetScope data. We achieve this by training an adapted diffusion model on patches from both datasets, sampled from the entire region. While previous studies have shown that upsampling of remote-sensing data is possible, few studies have done this using real datasets from both the high- and low-resolution domains; instead, they have often focused on mimicking the low-resolution data by modelling the degradation from high to low resolution. Though such methods can produce visually appealing results, it is difficult to replicate the exact colour distribution and aberrations present in actual low-resolution datasets, so they are prone to introducing bias. The approach chosen here avoids this potential bias, since the model is explicitly trained to convert data between two real datasets. The patches for the training dataset were assembled with the highest possible overlap in space and time, to create matched image pairs from the different data resolutions. To our knowledge, no studies have attempted using diffusion models trained on real data for the permafrost region at the panarctic scale.
We show that the diffusion model used in this study is superior to other upsampling methods, such as GAN models, both in terms of the visual quality of the predictions and in the downstream task of segmenting retrogressive thaw slumps (RTS). RTS are dynamic surface changes in permafrost that can span scales from single to hundreds of metres and thus, at the smaller scales, require a higher resolution to be accurately detected. The upsampling techniques developed in this study, using at least two real panarctic datasets, could provide an avenue to produce such high-quality remote-sensing data with a lower requirement for the use of commercial satellites, thus democratising the research of the permafrost region.
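Pairing low- and high-resolution patches hinges on mapping a Sentinel-2 pixel window (10 m) onto the corresponding PlanetScope window (3.125 m). A minimal sketch of this index arithmetic, assuming both rasters share origin and orientation (a simplification of the actual co-registration procedure):

```python
def highres_window(lr_row, lr_col, lr_patch, lr_pix=10.0, hr_pix=3.125):
    """Map a low-resolution patch (top-left pixel index plus size in
    pixels) to the pixel window covering the same ground extent in the
    high-resolution grid. Assumes both rasters have the same origin
    and orientation."""
    scale = lr_pix / hr_pix            # 3.2 high-res pixels per low-res pixel
    r0 = int(round(lr_row * scale))
    c0 = int(round(lr_col * scale))
    size = int(round(lr_patch * scale))
    return r0, c0, size

# A 64x64 Sentinel-2 patch (640 m on a side) corresponds to a
# roughly 205-pixel PlanetScope window:
r0, c0, size = highres_window(100, 200, 64)
```

In practice the affine georeferencing of each scene would drive this mapping; the fixed scale factor here only illustrates the 10 m to 3.125 m relationship.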

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: D.02.07 - POSTER - Large Language Model Agents and Applications for Earth Observation

This session will focus on the applications of Large Language Models (LLMs) in Earth Observation (EO) and related domains, as well as the creation of LLM-based agents. The session will aim to cover best practices to exploit small, openly available and accessible, pre-trained LLMs across various use cases in EO, Climate Science, and other relevant Earth Sciences. Some of these use cases for the EO and related domains include:

- Bibliographic aid of EO scientific literature, such as summarization of articles, structuring unstructured text, or question-answering on complex topics
- Content creation for understanding of EO science by the wider public
- Development support on software for EO, such as code generation or analysis.

Topics for the session include, but are not limited to:
- Useful applications that leverage the knowledge within open-source LLMs for EO and related sciences.
- Digital assistant development and integration: LLM-powered assistants allowing users to interact in natural language for data retrieval, reasoning from user-provided information, insights from large datasets, etc.
- Accessibility of digital assistants for EO: making data more accessible to scientists, policymakers, educators, and the public; bridging the gap between complex data and user understanding.
- Integration of an assistant into third-party applications.
- Relevant techniques to create agents without requiring large amounts of data, and training, such as: In-Context Learning; Instruction Fine-Tuned-only models; Retrieval Augmented Generation; or Inference-time/Prompt Engineering like Chain-of-Thought prompting and Reflection.
- Parallel research including practices for: Creation of Synthetic Data, and Developing Evaluation Tools for different use-cases.
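At the core of Retrieval Augmented Generation, one of the techniques listed above, is a nearest-neighbour search over text embeddings; a minimal sketch with toy vectors standing in for embedding-model outputs:

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=3):
    """Return indices of the k document embeddings most similar to the
    query embedding, ranked by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                       # cosine similarity per document
    return np.argsort(sims)[::-1][:k]  # highest similarity first

# Toy 4-dimensional "embeddings"; real systems use a pre-trained
# embedding model and hundreds of dimensions.
docs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
idx = top_k(np.array([1.0, 0.0, 0.0, 0.0]), docs, k=2)
```

The retrieved chunks are then placed in the LLM's context so that its answer is grounded in the documents rather than in its parametric memory.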

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Generative Artificial Intelligence assistant demonstrator for EO Portal.

Authors: Ms Hayret Abdula Keary, Mr Killian Perriot, Mr Antoine Lorentz, Mr Georgios Gkyzis, Sylvain Tanguy
Affiliations: Thales, ESRIN
Introduction: The aim of this project is to prepare a demonstrator for an AI EO technical support assistant. The demonstrator shall be adapted to ESA documentation, EO technical websites, and questions frequently asked by users through the ticketing system. To do so we use several sources of data:
- Earth Online: https://earth.esa.int/eogateway
- EoPortal: https://www.eoportal.org/
- Swarmhandbook: https://swarmhandbook.earth.esa.int/
- The TellUS database
Once all these data sources are recovered, analyzed, filtered, and cleaned, we implement the AI assistant, using Retrieval Augmented Generation (RAG) and Large Language Models (LLMs) to enable users to get all the information they need.

Preparation of the database for RAG: About the websites: we coded a website crawler, a custom-built Python application that efficiently scrapes and extracts data from websites. Each website has its own particularities:
- From Earth Online, we extracted 64k PDF documents. After filtering out redundant as well as non-useful documents, we kept 14% of this database for the RAG database. We also drew a sample of 30 webpages and analyzed them manually; our observation was that all pages were informative, standalone, and relevant to the RAG system.
- From EoPortal, our web crawler visited 2767 webpages. As for the HTML content of Earth Online, we sampled about twenty satellite-mission and other-space-activity webpages and found that these webpages were relevant for RAG, informative, and of high quality.
- From Swarmhandbook, we extracted 12 webpages. Each description webpage contains a short description of the product, how to access the data, and a table with the output variables. Those data seem quite interesting but also hard to integrate into a RAG pipeline: this kind of non-verbose, very structured data is not well suited for retrieval using dense embedding models.
They could be retrieved instead with an LLM agent using function calling.

About the TellUS database: TellUS is a database containing tickets divided into 8 categories and 57 subcategories. To better understand the database, we summarized each subcategory into 3 or 4 key points. We adopted a semi-automatic approach: for each subcategory, we randomly sampled 50 tickets and prompted LLaMA-3-70B to extract key points; then, we curated the extracted key points by manually checking them against 10 tickets. As the goal of this project is to develop an informative assistant, we decided to exclude tickets that do not contain standalone information. In the end, we focused our analysis on 4 specific subcategories: Data Products, Online Documentation, Software & Tools, and Support Service Requests. These encompass a total of 6.8k tickets, approximately 13% of the original number of tickets. Even after this operation, we still encountered tickets that are not relevant for RAG purposes. To identify tickets that provide guidelines or factual information, we trained a classifier. For this purpose, we labeled 500 tickets by reading each comment and assigning one of two labels: RAG for tickets containing factual information or guidelines, and NO_RAG for those that do not. We then trained a logistic regression on embeddings from Alibaba-NLP/gte-Qwen2-7B-instruct, using 250 tickets for training and 250 for evaluation.
We obtained the following results:
- Precision (the proportion of true positive predictions among all positive predictions, indicating how accurate the model is when it predicts a positive outcome): RAG 0.88, NO_RAG 0.92
- Recall (the proportion of actual positive instances correctly predicted as positive, indicating how well the model detects all instances of a class): RAG 0.87, NO_RAG 0.92
These results are satisfactory considering the small amount of data used. We applied our trained classifier and retained approximately 1.9k tickets (56%). At this stage, the tickets still contain a lot of noise that could hinder the retrieval of useful information. We are also interested in anonymizing the tickets. To this purpose, we propose to extract a set of questions and answers from the informative comments, using Llama3-70B. As an example, from a ticket containing many exchanges regarding the creation of an EO SSO account and personal information, the LLM proposes: "Question": "How can I create an EO SSO Account?" "Answer": "An SSO account can be created by following the procedure available at https://earth.esa.int/web/guest/general-registration." The useful information is correctly extracted and does not contain any personal information. We then aimed to evaluate the quality of anonymization resulting from LLM reformulation and assess whether further improvements could be made to the anonymization rate. To this end, we designed a manual pipeline with the objective of crafting regular expression rules that detect personal information.
Specifically, our approach involved creating four distinct rules:
- Identification number detection using the regex (?i)id ['\d]
- Account detection using the regex (?i)account '
- Email detection (excluding whitelisted emails)
- Name/pseudonym detection
We then manually annotated 46% of the reformulations, corresponding to approximately 1.1k examples. Our ground truth was established as the union of human annotations (manual labeling) and rule-based detections using the four rules outlined above. We achieved a global anonymization rate of 98%.

Implementation of the demonstrator: RAG: The core idea is to furnish the LLM with relevant, factual information to ground its responses. To achieve this, the LLM can interact with a search tool, issuing a query when necessary (as exemplified in TellUS Database Analysis > Category Filtering). The search tool then retrieves the top-k most relevant text chunks from a database. The vector database contains pre-downloaded chunks of webpages from EoPortal and Earth Online, as well as ticket reformulations from TellUS. Each chunk is vectorized using an embedding model, and the resulting embeddings are stored in the database. When the search tool is invoked, it embeds the query as well, enabling a similarity search to retrieve the top-k most relevant documents matching the query.

Software: The demonstrator is composed of various components, designed to deploy an OpenAI-like enhanced API that can handle tool calls efficiently. To ensure seamless integration with the broader ecosystem, our implementation is compatible with the standard OpenAI API. Specifically, we adapted and exposed the /chat/completions stream using FastAPI, and leveraged Docker and DevContainer for streamlined, clean deployment. The system consists of several key components, each designed to address specific functionalities:
1. A security stack ensures safe login and SSL encryption, implemented using Keycloak, OAuth2 Proxy, and Nginx with appropriate configurations.
2. The UI component, built with Streamlit, displays conversations with the assistant, allowing users to select the model they wish to use, view conversation length, and render tool calls and responses.
3. RAG_API is a FastAPI server responsible for resolving tool calls. When the LLM wants to use the search tool, RAG_API uses the query provided by the LLM and interacts with the Qdrant vector store to retrieve the top-k most relevant documents matching the query.
4. vLLM_wrapper is another FastAPI server that parses tool calls. As the vLLM project had not yet implemented this functionality for the Llama 3.1 models, we developed a custom solution: the wrapper transfers chat/completion requests to a vLLM instance, processes the returned stream to identify any tool calls, and then parses them if present.

Hardware: We collaborated with CloudFerro and utilized their infrastructure to host our system. We deployed all components except the vLLM instances on a single virtual machine (VM) with 8 virtual CPUs (vCPUs) and 32 GB of RAM. The VM consumed approximately 40 GB of storage out of the available 120 GB. CloudFerro provided a public IP address for this machine, enabling external access. For LLM inference tasks, we leveraged multiple GPUs:
- A node with two NVIDIA A6000 GPUs was used to deploy Llama 3.1 8B in FP16 (100 GB storage disk).
- A node with four NVIDIA A6000 GPUs was used to deploy Llama 3.1 70B in FP8 (500 GB storage disk).
- A single NVIDIA A100 GPU was used to run the intfloat/e5-mistral-7b-instruct embedding model (100 GB storage disk).
Initially, we encountered NCCL timeout issues using nodes with two A100 (40 GB) GPUs, prompting us to switch to nodes equipped with two A6000 GPUs instead.

Models: We deployed Llama 3.1 8B and Llama 3.1 70B.
Benchmark: To assess our system's performance, we employed a benchmark comprising 34 questions with diverse proposed answers. Each question was presented to the RAG system along with the correct and incorrect answers, requiring the model to select the right response. To investigate the effects of various parameters on the system's accuracy, we conducted an experiment in which we varied three key settings:
- Chunk size of the documents
- Number of documents retrieved
- Choice of LLM
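The first two regex rules are quoted verbatim in the abstract; a sketch of how such rules might be combined into a personal-information detector (the email pattern and whitelist below are illustrative additions, not the project's exact rules):

```python
import re

# The first two patterns follow the rules quoted in the abstract; the
# email pattern is a generic illustration.
RULES = {
    "identifier": re.compile(r"(?i)id ['\d]"),
    "account": re.compile(r"(?i)account '"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def flag_personal_info(text, whitelist=("support@example.org",)):
    """Return the names of the rules that fire on `text`, skipping
    whitelisted email addresses."""
    hits = []
    for name, pattern in RULES.items():
        for m in pattern.finditer(text):
            if name == "email" and m.group(0) in whitelist:
                continue  # whitelisted addresses are not personal info
            hits.append(name)
    return hits

hits = flag_personal_info("My account 'jdoe' has id 12345, mail jdoe@mail.com")
```

In the described pipeline, such rule-based detections were unioned with manual annotations to form the ground truth for measuring the anonymization rate.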

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Geolingual Studies: Integrating linguistics and digital humanities with Earth observation to assess the physical in combination with the social dimensions of cities

Authors: Richard Lemoine-Rodriguez, Hannes Taubenböck, Prof. Carolin Biewer
Affiliations: Geolingual Studies team, University of Würzburg, German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), Institute for Geography and Geology, University of Würzburg, Department of English and American Studies, University of Würzburg
Over the past decades, Earth observation has significantly advanced our ability to represent the physical dimension of cities. The growing availability of freely accessible spatial datasets has enabled researchers to describe and monitor the material evolution of urban areas both at the local and the global scale. These efforts include detailed representations of the two- and three-dimensional characteristics of urban environments, such as mapping built-up footprints, estimating building volumes and heights, quantifying urban green infrastructure, and analyzing these features using geospatial statistics and landscape metrics. This progress has greatly enhanced our understanding of urban form and its possible impact on urban environments. Nevertheless, in current Earth observation research the assessment of cities still hardly ever includes the representation of the citizens who experience and produce the urban environment. With its global reach and vast volume of information, social media data offers a valuable resource for bridging this gap. In online social networks, users often attach geolocations to the contents they share – whether these are texts, images, videos, or audio files – connecting their posts to specific places such as parks, public or private spaces, shops, or offices. This geospatial information enriches the data, rendering diverse analyses and applications possible. It allows us to characterize neighborhoods based on thematic content, to explore the online representation of different social groups, to examine how topics discussed correlate with demographic patterns, or even to identify building functions through media content. The multifaceted content and rich metadata of geolocated social media make this data source a promising resource for multidisciplinary research, enhancing our ability to understand the physical and social complexities of urban areas. 
Here, we present Geolingual Studies (GLS), a research framework that integrates diverse datasets—such as Earth observation data and geolocated text data—by using AI combined with methodologies from linguistics, remote sensing, and digital humanities. GLS aims to explore and understand the interrelation between physical space and socially constructed space as place. This approach leverages natural language processing (NLP) techniques from digital humanities to extract and classify information from large-scale text datasets using AI, while linguistic methods are used to analyze in detail the topics, perceptions and attitudes expressed by social groups and individuals from urban areas. Remote sensing techniques complement these analyses by combining results from social media analyses with Earth observation products to evaluate spatial patterns and temporal trends. We will demonstrate the potential of GLS with two case studies that examine the relationship between the built environment and social media content in two megacities – New York and Mexico City. In both studies, we applied rigorous preprocessing to the social media data, filtering out bots, automated content that was generated by third-party apps, and posts containing only symbols, URLs or emojis. To ensure spatial accuracy for integrating social media data with Earth observation datasets, we included only posts geolocated at GPS coordinates, points of interest (POIs), or the neighborhood level. Moreover, AI was applied to extract information from social media data and classify topics discussed by using pretrained language models. Firstly, we used geolocated Twitter data posted from New York’s metropolitan area and the local climate zones (LCZ) classification scheme derived from remote sensing to assess the connection between topics discussed on social media and different urban morphologies. 
Our findings reveal that the topics discussed on social media vary significantly across LCZs, with factors such as urban morphology, intra-urban location, and the functional characteristics of the built environment exerting an influence. Secondly, we integrated geolocated Tweets, weather station air temperature data, MODIS land surface temperature (LST) and the LCZ classification to explore the relationship between heat-related social media content, urban temperatures, and urban morphologies in Mexico City. Our analysis shows an exponential increase in the frequency of heat-related Tweets as air temperatures rose. Additionally, the proportion of heat-related Tweets was notably higher in LCZs with elevated LST. Through these studies, we will demonstrate how the GLS framework enables us to assess relationships between citizens and the urban built environment. Our findings highlight the value of this approach in integrating the often-overlooked social dimension into Earth observation research. While it is essential to acknowledge the limitations of social media datasets – such as their lack of representativeness for entire urban populations, the varying proportion of geolocated posts, and the presence of bots – our two studies demonstrate that a robust preprocessing framework can enhance data reliability. Furthermore, integrating social media data with Earth observation products provides a valuable means to validate and contextualize findings, ensuring the plausibility and credibility of insights derived from these datasets. The steps presented here will be the basis for us to further analyze a wide range of phenomena in social media data with the help of linguistic methodologies, as we can retrieve emotional reactions, narrative structures and ideological standpoints and may find evidence of specific language use and linguistic diversity in individual neighbourhoods and of the existence of social networks within and across the city.
Furthermore, these analyses can be enriched with additional spatial datasets (e.g., land cover data) and social indicators (e.g., census data), offering the possibility to assess a broad spectrum of changes and challenges in urban societies. Beyond the presented examples with Twitter data, other social media platforms offer comparable potential, providing scholars with a range of options depending on the researchers’ objectives and data needs. While challenges in data access must be acknowledged, the effort to address these barriers is well justified by the unique opportunities these datasets provide – rendering possible the integration of physical and socially constructed spaces for a more holistic understanding of urban areas to inform decision-making.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Enhancing Earth Observation Accessibility with AI-Driven Natural Language Interfaces

#stac

Authors: Sergey Sukhanov, Enrique Fernandez, Dr. Ivan Tankoyeu, Hector Lopez, Revan Rangotis, Federico Fioretti
Affiliations: Ai Superior Gmbh, Serco
Earth observation (EO) data is a cornerstone for scientific research, environmental monitoring, and decision-making, yet its accessibility remains a significant challenge for non-expert users. Sentinel satellite missions, operated under the Copernicus program, generate vast amounts of geospatial data. However, effectively querying this data requires domain expertise and familiarity with complex query languages like OData and STAC. This barrier limits the utilization of EO data by a broader audience, including policymakers, educators, and researchers in non-technical fields. To address this challenge, we present a generative AI-driven natural language interface designed to bridge the gap between user-friendly interaction and the technical intricacies of EO data discovery. Developed as part of the Collaborative Data Hub Software evolution, the proposed system leverages advancements in natural language processing (NLP) and geospatial technologies to transform user queries into structured formats compatible with Data Hub Software (DHS) ecosystems, such as the GAEL Store Service (GSS) and Copernicus Space Interface (COPSI). Users can input queries in plain language, such as “Find cloud-free Sentinel-2 images of the Amazon rainforest from July 2024,” and receive actionable results seamlessly. The system's architecture is modular, comprising a tool selector, an extractor powered by large language models (LLMs), a geospatial lookup service, and a validation and conversion pipeline to ensure queries adhere to OData and STAC standards. The design also incorporates fuzzy geolocation capabilities and dialogue-based feedback mechanisms to refine ambiguous inputs iteratively. Deployed in a containerized environment on OVHCloud, the system supports scalable, real-time query processing. The solution addresses key challenges, including handling ambiguous or incomplete user inputs, maintaining compatibility with DHS standards, and scaling to support diverse user demands. 
Initial results demonstrate significant improvements in query accuracy and usability, enabling a wider audience to access and utilize EO data effectively. In our presentation, we explore the need for such a system, detailing the technical challenges, system architecture, and implementation strategies. We conclude with insights from early deployment phases and a roadmap for future enhancements.
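The final conversion step of such a pipeline might look like the following sketch, which turns fields extracted by the LLM into a STAC API /search request body. The collection id, bounding box, and the CQL2-style cloud-cover filter are illustrative assumptions, not the deployed system's exact output:

```python
def to_stac_search(collection, bbox, start, end, max_cloud=None):
    """Build a STAC API /search request body from fields extracted
    out of a natural-language query."""
    body = {
        "collections": [collection],
        "bbox": bbox,                       # [min_lon, min_lat, max_lon, max_lat]
        "datetime": f"{start}/{end}",       # ISO 8601 interval
        "limit": 50,
    }
    if max_cloud is not None:
        # CQL2 JSON filter on the common 'eo:cloud_cover' property
        body["filter"] = {
            "op": "<=",
            "args": [{"property": "eo:cloud_cover"}, max_cloud],
        }
    return body

# Fields as the extractor might produce for "Find cloud-free Sentinel-2
# images of the Amazon rainforest from July 2024":
payload = to_stac_search(
    "SENTINEL-2", [-73.9, -9.8, -50.0, 5.2],
    "2024-07-01T00:00:00Z", "2024-07-31T23:59:59Z", max_cloud=5,
)
```

The validation pipeline described above would then check such a body against the STAC (or equivalent OData) schema before dispatching it to the catalogue.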

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Leveraging Remote Sensing, Crowd-sourced Species Observations and Wikipedia with Vision Language Models for Habitat Classification

Authors: Valérie Zermatten, PhD Javiera Castillo-Navarro, PhD Diego Marcos, Prof Devis Tuia
Affiliations: ECEO, EPFL, INRIA, University of Montpellier, Conservatoire national des arts et métiers
The accurate classification of habitats is essential for effective biodiversity conservation. Numerous studies have shown that crowd-sourced species observations and remote sensing images are meaningful predictors for assessing large-scale biodiversity and ecological status. Our study proposes to enrich a model for habitat mapping trained on remote sensing data with information about the presence of species observed therein. As a representation of species-specific information, we use textual habitat descriptions from Wikipedia articles. By leveraging the multi-modal integration capacities of vision-language models (VLMs), our goal is to harness the habitat preferences of species – such as land cover, climatic conditions, or soil properties, as described on Wikipedia – to integrate this domain-specific knowledge into the model and evaluate its habitat classification performance. We collect high-resolution aerial images with RGB bands from swisstopo, the Swiss Federal Office of Topography, and species observations from the Global Biodiversity Information Facility (GBIF) over our study area, which covers the entirety of Switzerland. First, species observations undergo a careful filtering procedure, including the removal of observations with high geo-location uncertainty, erroneous or incomplete metadata (e.g. missing species identification), species without a Wikipedia article, or species observed very rarely (<10 observations in Switzerland). Then, each aerial image is associated with a list of species geolocated within its footprint. For each species, sentences that describe relevant habitats and environmental conditions are selectively extracted from the corresponding Wikipedia articles. Thus, each aerial image is associated with a set of Wikipedia sentences describing the ecological preferences of the observed species. The collected dataset comprises 91'800 aerial images associated with one or a few species out of 2'745 different species.
We design a loss function to train VLMs using our collected dataset. Our loss encourages the model to find Wikipedia sentences that best match the images from the set of all sentences associated with the images. We use this loss to fine-tune existing remote sensing VLMs such as RemoteCLIP, SkyCLIP or GeoRSCLIP. We evaluate our model in the task of ecosystem classification in zero-shot and few-shot settings, following the habitat definitions from the European Nature Information System (EUNIS), a widely used reference framework for European habitat types. Our results show that our method helps in understanding remote sensing images in a more ecologically meaningful manner.
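The sentence-matching objective described above resembles a CLIP-style contrastive loss, where each image embedding should score highest against its own associated Wikipedia-sentence embedding. A minimal NumPy sketch of that idea (the function name, dimensions and embeddings are illustrative, not the authors' actual loss):

```python
import numpy as np

def info_nce_loss(image_emb, text_emb, temperature=0.07):
    """CLIP-style contrastive loss: each image should match its own
    sentence embedding more strongly than any other in the batch."""
    # L2-normalise embeddings so dot products are cosine similarities
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (batch, batch) similarity matrix
    # row-wise softmax; the diagonal holds the correct image-sentence pairs
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.mean(np.log(np.diag(probs)))

rng = np.random.default_rng(0)
imgs = rng.normal(size=(4, 16))                  # 4 toy aerial-image embeddings
txts = imgs + 0.01 * rng.normal(size=(4, 16))    # near-matching sentence embeddings
loss = info_nce_loss(imgs, txts)
print(float(loss))
```

Matched pairs should yield a much lower loss than a shuffled pairing, which is what drives the fine-tuning of VLMs such as RemoteCLIP or GeoRSCLIP.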

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: An Artificial Intelligence Assistant for Complex Earth Observation Tasks

Authors: Sylvain Lobry, Simon Grüning, Laurynas Lopata, Yuxing Chen, Gaetan Petit
Affiliations: LIPADE, Université Paris Cité, askEarth
In this work, we propose a conversational assistant to answer complex Earth Observation (EO) queries. With the recent developments of generative artificial intelligence, and in particular around Vision Language Models (VLMs), several works aim at facilitating the extraction of information from EO data [1, 2] through tasks such as Visual Question Answering [3]. However, it is known that directly applying Large Language Models (LLMs) to EO data does not yield good results [4, 5]. We propose a method which decomposes natural language requests into primitive operations via code-based reasoning through an LLM. This code can utilize existing EO methods through an application programming interface (API), with the signatures of the API functions passed as context to the LLM. This solution allows us to leverage existing Earth Observation methods and the associated research. Furthermore, with this approach, the task of the LLM is only to generate a textual output. Our Interpretable, Code-based assistant for Earth Observation (IC-EO) has several unique advantages over current solutions. Compared to current black-box methods [3], IC-EO is interpretable, transparent, extensible and can answer a wide range of queries. IC-EO is developed in the frame of an ESA project (Open Call for Proposals for EO Innovation). In this presentation, we show the different steps that need to be undertaken to develop such an assistant. First, we present the methodology for selecting use cases (based on potential users' needs, expressed in interviews), and detail the two selected use cases: damage assessment after natural disaster events, and agriculture. The selected use cases drive the selection of EO tools (solving tasks such as classification, segmentation, ...), which are converted to an API. We propose a template to define such an API.
We finally present the infrastructure required for the assistant, addressing the following challenges: performing the inference of the LLM, and running the generated code together with the methods called through the API. The proposed IC-EO assistant is evaluated at three levels. First, the generated code (the output of the LLM) is evaluated. We also evaluate the different functions of the API independently. Finally, an evaluation is performed on the answer to the query. This evaluation mechanism leverages the interpretable aspect of the code-based generation. [1] Zhang, W., Cai, M., Zhang, T., Zhuang, Y., & Mao, X. (2024). Earthgpt: A universal multi-modal large language model for multi-sensor image comprehension in remote sensing domain. IEEE Transactions on Geoscience and Remote Sensing. [2] Yuan, Z., Xiong, Z., Mou, L., & Zhu, X. X. (2024). Chatearthnet: A global-scale, high-quality image-text dataset for remote sensing. arXiv preprint arXiv:2402.11325. [3] Lobry, S., & Tuia, D. (2024). Visual question answering on remote sensing images. In Advances in Machine Learning and Image Analysis for GeoAI (pp. 237-254). Elsevier. [4] Zhang, C., & Wang, S. (2024). Good at captioning, bad at counting: Benchmarking gpt-4v on earth observation data. arXiv preprint arXiv:2401.17600. [5] Gruening, S., Berger, D., & Petit, G. (2024, October). Benchmarking Large Language Models for Earth Observation. In Proceedings of SPAICE2024: The First Joint European Space Agency/IAA Conference on AI in and for Space (pp. 532-536).
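Passing API function signatures as context to an LLM, as the abstract describes, can be sketched with the standard-library `inspect` module. The two EO functions below are hypothetical placeholders, not the actual IC-EO API:

```python
import inspect

# Hypothetical EO API functions -- names and parameters are illustrative only.
def flood_extent(aoi: str, date: str) -> dict:
    """Return a flood-extent mask for an area of interest on a given date."""

def building_damage(aoi: str, pre_date: str, post_date: str) -> dict:
    """Classify building damage between a pre- and post-event acquisition."""

def build_llm_context(functions):
    """Render each API function as a plain-text signature plus docstring,
    suitable for inclusion in an LLM prompt."""
    blocks = []
    for fn in functions:
        sig = inspect.signature(fn)
        blocks.append(f'def {fn.__name__}{sig}:\n    """{inspect.getdoc(fn)}"""')
    return "\n\n".join(blocks)

context = build_llm_context([flood_extent, building_damage])
print(context)
```

The LLM then only has to emit code calling these documented functions, which keeps the generated program interpretable and auditable.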

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Visual Foundation Model as Pseudo-Annotators for Remote Sensing Visual Question Answering

Authors: Christel Chappuis, Sylvain Lobry, Prof Devis Tuia
Affiliations: EPFL, Université Paris Cité
The task of Remote Sensing Visual Question Answering (RSVQA) [1] aims at widening the use of Earth Observation (EO) images by enabling potential users to interact with them using natural language. Indeed, a considerable fraction of EO images remain unused because of the technical skills necessary to extract useful, specific information from these images. In practical terms, RSVQA methods predict an answer from two inputs: an image and a question. So far, most methods are opaque, i.e. they produce answers without any information that would help understand their reasoning. Any interpretation as to what led the model to the predicted answer is difficult. Down the line, this lack of interpretability could make it harder for users to trust the model's outcomes. In this work, our objective is to develop a more transparent model for RSVQA. To do so, we take inspiration from the work on Prompt-RSVQA [2] and multi-task Prompt-RSVQA [3], where the image is first described in textual format using visual pseudo-annotator models. This "visual context" is then fed to a language model along with the question, which allows the language model to attend to both question and context equally. Thanks to the visual context, the information extracted from the image is clearly understandable to humans, but this semantic bottleneck, i.e. the conversion of the context into a textual form, limits the information that can effectively be used by the language model to predict an answer. In this work, we propose to exploit the full richness of the visual detection performed by the visual pseudo-annotators, and to relax the semantic bottleneck by using vector representations as visual tokens (instead of text-based image descriptions) in a multi-modal vision language model. Additionally, as pseudo-annotators we want to leverage the recent development of EO vision foundation models, and in particular their multiple downstream heads.
Therefore, instead of working with a language-only model, we use a multi-modal vision language model: VisualBert [4], which takes as inputs both textual and visual tokens, and learns a joint representation from them with a self-attention mechanism. For the visual tokens, we consider the downstream heads of the openly available foundation model MTP [5] as pseudo-annotators. More precisely, these explicit visual predictions are representations of detected objects, segmentation and classification, leading to several complementary pseudo-annotators. In addition to the tokens issued from the foundation model, we also add a visual representation of the entire image through a ResNet [6] model (pretrained on ImageNet). For the textual tokens, we simply use the tokenized words of the questions. We consider two RSVQA datasets with high-resolution images: RSVQA HR [1] and RSIVQA [7]. We perform two main experiments. First, we want to assess the benefits of further pre-training VisualBert in the remote sensing domain, against its fine-tuning with the task of RSVQA. Essentially, four options are compared: 1) training the classification head for RSVQA, 2) fine-tuning the VisualBert encoder while training the classification head, 3) pre-training VisualBert with the Masked Language Modeling task followed by training the classification head, and 4) pre-training VisualBert followed by its fine-tuning while training the classification head. For the pre-training, the textual tokens come from a description of the remote sensing images from both datasets. Considering only the training images, we create such descriptions using the labels activated by the pseudo-annotators. Second, to better understand the contribution of each pseudo-annotator to the overall performance, we carry out an extensive evaluation of the visual inputs. This evaluation is conducted at inference time by masking part of the visual tokens.
Furthermore, we inspect the attention weights of the transformer-based architecture, which indicate the most attended positions, and thus the most useful information. We compare these attention weights for different groups of samples. Based on these experiments, fine-tuning the weights of the VisualBert encoder on the remote sensing domain appears necessary to obtain good performance. We also observe the importance of object tokens from the pseudo-annotators, while the model relies less on the two visual tokens for classification and image features. [1]: S. Lobry et al., “RSVQA: Visual Question Answering for Remote Sensing Data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 12, pp. 8555–8566, 2020. [2]: C. Chappuis et al., “Prompt-RSVQA: Prompting Visual Context to a Language Model for Remote Sensing Visual Question Answering,” Proceedings of CVPRW 2022, pp. 1372–1381. [3]: C. Chappuis et al., “Multi-task prompt-RSVQA to explicitly count objects on aerial images,” in Workshop on Machine Vision for Earth Observation (MVEO) at the 34th British Machine Vision Conference (BMVC), 2023. [4]: L.H. Li et al., “VisualBERT: A Simple and Performant Baseline for Vision and Language,” 2019, ArXiv, abs/1908.03557. [5]: D. Wang et al., “MTP: Advancing Remote Sensing Foundation Model via Multitask Pretraining,” IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing, vol. 17, pp. 11632–11654, 2024. [6]: K. He et al., “Deep Residual Learning for Image Recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. [7]: X. Zheng et al., “Mutual Attention Inception Network for Remote Sensing Visual Question Answering,” IEEE Transactions on Geoscience and Remote Sensing, pp. 1–14, 2021.
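The inference-time ablation described above (masking the visual tokens of one pseudo-annotator group) can be sketched in a few lines of NumPy. The group labels and dimensions below are illustrative, not the paper's actual token layout:

```python
import numpy as np

def mask_token_group(tokens, groups, group_to_mask):
    """Zero out the visual tokens of one pseudo-annotator group,
    mimicking the inference-time masking ablation."""
    masked = tokens.copy()
    masked[groups == group_to_mask] = 0.0
    return masked

rng = np.random.default_rng(1)
tokens = rng.normal(size=(6, 8))   # 6 toy visual tokens of dimension 8
# hypothetical groups: object detection, segmentation, classification, image
groups = np.array(["obj", "obj", "seg", "seg", "cls", "img"])
ablated = mask_token_group(tokens, groups, "obj")
print(ablated[:2].sum())   # → 0.0: the two "obj" tokens were zeroed
```

Re-running inference on the ablated tokens and comparing accuracy against the full set reveals how much each pseudo-annotator contributes.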

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: B.03.07 - POSTER - Land-atmosphere interactions: finding solutions for land-based mitigation and adaptation

Human activities leave a clear fingerprint on terrestrial ecosystems and the climate system. Changes in land cover, in land use and land management can have a profound effect by altering fluxes (carbon, water, energy, and momentum) between the land and the atmosphere, thereby affecting climate both locally and globally through biogeochemical and biophysical mechanisms. These changes may often involve trade-offs between different ecosystem services, such as soil protection versus timber harvesting, or agricultural expansion versus carbon sequestration. Trade-offs can sometimes lead to conflicts between climate and conservation goals, such as rapid carbon sequestration from fast-growing monocultures at the expense of slow-growing natural forests rich in biodiversity.

Satellite remote sensing technology allows us to monitor the state of the land and its vegetation with variables such as leaf area index (LAI) and above-ground biomass (AGB), but also to evaluate corresponding fluxes through the use of proxies such as land surface temperature (LST) and sun-induced chlorophyll fluorescence (SIF). Through innovative methodological approaches, these various sources of complementary satellite datastreams can be combined to provide insights about the spatio-temporal variations of terrestrial ecosystem functional properties and how these interact with the atmosphere. Such methods can further inform on both the potential positive and negative consequences of changing the properties of the land surface, notably with the aim of establishing nature-based solutions (NBS) to the combined climate and biodiversity crises.

This session focuses on how satellite remote sensing can help us better understand and quantify land-atmosphere interactions. We welcome studies exploring this theme at any spatial scale (from local to global). We aim to highlight studies that can inform adoption of appropriate land-based strategies for climate mitigation, adaptation, and ecosystem restoration. Additionally, we seek to explore how the combination of climate adaptation and biodiversity actions can reinforce or weaken each other, and how the combined effect of adaptation and biodiversity actions differs from the sum of individual actions.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Drought monitoring in orchard meadows: An integrative approach combining geophysics and remote sensing

Authors: Julia Rieder, Sebastian Buchelt, apl. Prof. Dr. Christof Kneisel, Prof. Dr. Tobias Ullmann
Affiliations: Department of Remote Sensing, Institute of Geography and Geology, Julius-Maximilians-University Würzburg, Department of Physical Geography, Institute of Geography and Geology, Julius-Maximilians-University Würzburg
Traditional orchards, which are an integral part of the Central European cultural landscape, have so far been underrepresented in research on climate change-induced drought stress, although it is known that recent droughts in Central Europe have severely damaged not only forest ecosystems and agricultural productivity but also orchards. This is critical, as orchards provide valuable and important ecosystem services, including the promotion of biodiversity, soil conservation and microclimate regulation, which are thought to sustain increased resilience to ongoing climatic changes. Moreover, many orchard meadows have disappeared in the last decades despite their importance for ecosystem services. For instance, orchards in Bavaria, Germany, have decreased by 70 % within 60 years due to economic marginalization and conversion to other land uses. Hence, there is a pressing need to better understand the effects of droughts on orchards, to establish means for monitoring, and to develop guidelines for effective protection measures in the future. In order to better understand the drought vulnerability of orchard meadows and to develop adaptation strategies, this study implements an innovative multi-method approach combining geophysical, remote sensing and in-situ data to assess the drought stress of apple and plum trees during the vegetation period. For this purpose, several sensors were installed in spring 2023 at an experimental test site in northern Bavaria, Germany, to measure soil volumetric water content and matric potential pF, as well as soil and air temperature at different depths/heights along a slope. In addition to these point measurements, a 2D geophysical monitoring was set up along a 36- and a 72-meter profile, where repeated measurements are made using electrical resistivity tomography (ERT).
ERT allows investigating apparent resistivities in the subsurface, whose variability over time is mainly driven by the variability in soil moisture and soil temperature. The soil moisture time series is then compared to the remote sensing-based NDVI time series calculated from Sentinel-2 and MODIS data to assess the influence of decreasing soil moisture on the vitality status of the trees over time. The ERT results show seasonal variations in soil moisture, with the upper soil layers drying out significantly in summer. This aligns well with the in-situ soil moisture observations. The NDVI trend corresponds to this decrease in soil moisture with a small but noticeable temporal delay, suggesting a direct link between drought and reduced vegetation health. High-resolution LiDAR data was collected to map vegetation structures and to verify the position and probable extent of root zones. Several low-resistivity anomalies in the ERT data could be spatially aligned with these root zones. These areas were found to be generally wetter than the surrounding area. The integration of ERT and remote sensing provides a comprehensive methodology for assessing the impact of drought on orchards, offering spatially detailed insights into moisture dynamics, spatial variations and drought-related vegetation stress. This multi-method approach has shown potential to capture the impacts of climate change on these valuable but vulnerable landscapes. Continued monitoring and expanded application of this approach is critical for the development of sustainable management strategies to protect orchards from increasing drought stress.
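The NDVI time series compared against the ERT soil moisture above follows the standard normalised-difference formula. A minimal sketch, with toy reflectance values standing in for Sentinel-2 bands B8 (NIR) and B4 (red):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index from NIR and red reflectance.
    eps guards against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy reflectances: a healthy canopy pixel and a stressed one
print(ndvi([0.40, 0.35], [0.05, 0.20]))
```

Declining NDVI over a season, lagging the ERT-observed soil moisture drawdown, is the drought signal the study tracks.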

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: The potential impact of Land Use Change on biophysical processes during the diurnal cycle: study case evaluating potential transitions of natural vegetation to agricultural land for different areas of the African continent

Authors: Daniel E. Pabon-Moreno, Robert N. Masolele, Alessandro Cescatti, Gregory Duveiller
Affiliations: Max Planck Institute For Biogeochemistry, Wageningen University, European Commission, Joint Research Centre
The transformation of ecosystems by human activity not only has an impact on the amount of carbon stored or released to the atmosphere, but also on the biophysical properties of the surface, which may further influence the climate system at local and regional scale. Changes in land use can also affect the temporal dynamics of biophysical variables at sub-daily scales, impacts that have generally been understudied. In parallel, new environmental and climate policies are dealing with land use change and its impacts. For instance, as part of the European Green Deal initiative under the regulation on deforestation-free products, the commerce of commodities related to deforestation and land degradation will not be allowed in the European market. The key goal of the regulation is to incentivize commodity crop producing countries to transition to more sustainable and deforestation-free agricultural practices. In this context, quantifying the potential impact of land-use change on biophysical variables at high temporal resolution is needed to document the mitigation potential of these land-based actions. This can serve as a tool when designing transition strategies that contribute to mitigating the effects of global warming and climate change at local and regional scale. Thanks to Earth Observation from geostationary platforms, scientists are able to track ecosystem dynamics and climate interactions at high temporal resolution, opening the door to a better understanding of ecosystem processes. In the present study, we aim to quantify the potential impact of land use change on biophysical variables such as land surface temperature and albedo at a sub-daily temporal scale, for different areas of the African continent where crops and main commodities such as cacao, oil palm and rubber are present.
For this purpose, we use climatic variables from the SEVIRI/MSG EUMETSAT Satellite Application Facility on Land Surface Analysis, in combination with high-resolution classification maps for different crops and commodities. The space-for-time substitution technique is used to quantify the potential changes in biophysical variables when transitioning from one vegetation type to another. We expect our results to help better understand the potential climate implications of restoration activities or crop rotation for the local and regional climate regime. We also expect the results to support the design of better land transition strategies that improve the sustainability and productivity of crops while supporting climate-smart land management practices.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Using CRSWIR Index for Forest Health Monitoring and Climate Impact Assessment

Authors: Delphine Nobileau, Fabien Goge, Antony Priem
Affiliations: Capgemini
The increasing impact of extreme weather events on forests requires accurate and robust monitoring techniques. This study investigates the potential of using Earth observation data to monitor forest health and assess the impacts of climate variability, with a focus on drought-induced stress. The study is based on Sentinel-2 imagery, supplemented by ERA5 meteorological data, to analyze the state of French forests through the CRSWIR index. This index, derived from the Sentinel-2 SWIR and NIR bands, serves as a proxy for forest cover water content and evapotranspiration, key indicators of forest stress, and is used to model the dynamic responses of forests to environmental stressors over the period 2018 to 2023. The CRSWIR index is defined as: CRSWIR = (NIR + SWIR1/SWIR2) / (NIR + SWIR1/SWIR2 − (SWIR1 − SWIR2)/NIR), where NIR, SWIR1 and SWIR2 correspond to the reflectance values of the Sentinel-2 near-infrared and shortwave infrared bands, respectively. Higher CRSWIR values indicate higher water content of the forest canopy, which is inversely proportional to forest stress levels. Climatic data from ERA5, including temperature, precipitation, and soil moisture at three depth layers, were used to calculate STI (Standardized Temperature Index) and SPI (Standardized Precipitation Index) values to assess extreme climatic conditions. The STI and SPI levels were calculated using historical baselines from 1990 to 2020, ensuring robust standardization for detecting climate anomalies: STI = (T − Tm)/σT and SPI = (P − Pm)/σP, where T and P represent monthly temperature and precipitation (the latter in mm), Tm and Pm are their long-term averages (1990–2020), and σT and σP are the corresponding standard deviations. These standardized indices capture deviations from historical climate norms, allowing for the detection of extreme events.
To analyze the temporal dynamics of forest health, CRSWIR multitemporal data were evaluated on healthy, degraded, and clear-cut plots. Anomalies were detected by examining deviations from historical CRSWIR averages. The results show that CRSWIR effectively identifies short-term stress events and long-term deterioration trends. Stable plots showed consistent CRSWIR values, while degraded plots showed significant fluctuations correlated with STI and SPI anomalies. Clear-cuts showed abrupt declines of CRSWIR, confirming its usefulness in detecting vegetation removal. However, distinguishing between degraded and clear-cut plots remains challenging due to the overlap of ΔCRSWIR thresholds. Integrating GEDI LiDAR data provided an independent reference, revealing strong correlations between CRSWIR and structural parameters such as canopy height and biomass (R² > 0.85). This validation highlighted the reliability of CRSWIR as an indicator of forest health. In addition, lagged multitemporal features (e.g., CRSWIR with a delay of up to six months) improved the model's ability to predict long-term trends, offering insights into the time lag between climate events and forest responses. This methodology has been applied to several French regions to assess its scalability and robustness. Comparative analyses with existing models, such as the Fordead approach, have highlighted the strengths of CRSWIR in capturing subtle degradation signals. However, Fordead outperformed CRSWIR in detecting clear-cuts, suggesting that hybrid methods combining CRSWIR and complementary indices could improve operational forest monitoring systems. A Random Forest regression model was implemented to predict CRSWIR values and assess the predictive ability of climate variables.
This model was trained using input variables such as climate indices (STI, SPI), soil moisture (measured at depths of 0 to 7 cm, 7 to 28 cm and 28 to 100 cm) and topographic features (elevation and slope). Model performance was evaluated using measures such as the coefficient of determination (R²), root mean square error (RMSE), and feature importance analysis. For a one-month prediction horizon, the model obtained R² = 0.91 and RMSE = 0.052; for longer time horizons (up to three months), performance decreased due to increased temporal variability, with R² falling to 0.81 and RMSE increasing to 0.097. Feature importance analysis confirmed the dominant roles of CRSWIR and temperature anomalies, while soil moisture provided critical additional context. Integrating remote sensing and climate data provides a comprehensive framework for understanding forest-climate interactions. While challenges remain, such as sensitivity to forest composition and environmental variability, this study demonstrates the potential of CRSWIR for operational monitoring of forest health. By advancing predictive capabilities and refining classification thresholds, this research contributes to developing sustainable forest management and climate change adaptation strategies, addressing critical environmental challenges in a rapidly changing world.
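The STI and SPI defined in the abstract are z-scores against a fixed 1990-2020 baseline. A minimal sketch of that computation (note: the operational SPI typically fits a gamma distribution to precipitation before standardising; this uses the simple standardized form stated in the abstract, and all numbers are toy values):

```python
import numpy as np

def standardized_index(values, baseline):
    """STI/SPI-style anomaly: (x - baseline mean) / baseline std."""
    mu = np.mean(baseline)
    sigma = np.std(baseline, ddof=1)
    return (np.asarray(values, dtype=float) - mu) / sigma

# Toy baseline: six historical June temperature means (degrees C)
baseline_temp = np.array([14.0, 15.2, 13.8, 14.9, 15.1, 14.5])
sti = standardized_index([17.0], baseline_temp)   # an anomalously hot June
print(sti)
```

Values above roughly +2 (or below -2) flag the extreme climatic conditions the study correlates with CRSWIR anomalies.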

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Global monitoring of LAI, FAPAR and FCover from Sentinel-3

Authors: Aleixandre Verger, Jorge Sánchez-Zapero, Dr Marie Weiss, Enrique Martínez-Sánchez, Adrià Descals, Dr Fernando Camacho, Fred Baret, Roselyne Lacaze, Roxane Van der Goten, Michal Moroz
Affiliations: CSIC, CREAF, EOLAB, INRAE, HYGEOS, VITO
The Copernicus Land Monitoring Service (CLMS) continuously provides a set of bio-geophysical variables describing the dynamics of vegetation, the energy budget, the water cycle and the cryosphere. The service ensures near-real-time production and delivery of consistent long-term time series of global bio-geophysical variables. The CLMS portfolio includes the leaf area index (LAI), the fraction of PAR absorbed by vegetation (FAPAR) and the cover fraction of vegetation (FCover) products, which are derived every 10 days at 300 m resolution. The products are delivered with associated uncertainties and quality indicators at the CLMS portal (https://land.copernicus.eu). The Collection 300m of LAI, FAPAR and FCover products is available from 2014 with PROBA-V and from 2020 with Sentinel-3 OLCI. Satellite observations of surface reflectance are first transformed into biophysical variables using machine learning techniques. The instantaneous estimates are then composited every 10 days using a specific temporal smoothing filter which allows near-real-time estimation. This paper focuses on the retrieval algorithm used to generate the CLMS Collection 300m LAI, FAPAR and FCover products, and on the validation results: the comparison with other existing satellite products, the direct evaluation against ground measurements, and the temporal consistency of the PROBA-V and Sentinel-3 time series.
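The 10-day compositing step can be illustrated with a Gaussian-weighted temporal mean over noisy daily estimates. This is a simplified stand-in for the CLMS smoothing filter (the actual algorithm is more sophisticated); window, sigma and the toy LAI series are all illustrative:

```python
import numpy as np

def dekadal_composite(days, values, window=15, sigma=5.0):
    """Composite noisy daily estimates into 10-day (dekadal) values
    using a Gaussian-weighted mean centred on each dekad."""
    days = np.asarray(days, dtype=float)
    values = np.asarray(values, dtype=float)
    centres = np.arange(days.min(), days.max() + 1, 10)
    out = []
    for c in centres:
        w = np.exp(-0.5 * ((days - c) / sigma) ** 2)
        w[np.abs(days - c) > window] = 0.0   # truncate the temporal window
        out.append(np.sum(w * values) / np.sum(w))
    return centres, np.array(out)

# Toy daily LAI: a slow seasonal rise plus observation noise
days = np.arange(0, 60)
lai = 2.0 + 0.5 * np.sin(days / 30.0) + 0.05 * np.random.default_rng(0).normal(size=60)
centres, composite = dekadal_composite(days, lai)
print(len(composite))
```

The weighting suppresses day-to-day retrieval noise while preserving the seasonal trajectory, which is what enables a stable near-real-time product.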

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: The Agricultural Land Abandonment and Climate Change Impacts on the Water, Energy and Vegetation Carbon Cycles in the Mediterranean Region - ESA X-ECV GLANCE Project

Authors: Christian Massari, Oscar Baez Villanueva, Maria Antonia Brovelli, Luca Ciabatta, Marco Donnini, Wouter Dorigo, Pierre Laluet, Massimo Melillo, Diego Miralles, Estela Nadal, Daniele Oxoli, Nicola Paciolla, Mauro Rossi, Sergio Vicente-Serrano, Angelica Tarpanelli, Pedro Torralbo Muñoz, Ruxandra Zotta, Clement Albergel, Chiara Corbari
Affiliations: Research Institute for Geo‐Hydrological Protection, National Research Council, Hydro‐Climate Extremes Lab, Ghent University, Department of Civil and Environmental Engineering, Politecnico di Milano, Department of Geodesy and Geoinformation, Technische Universität Wien, Instituto Pirenaico de Ecología, Consejo Superior de Investigaciones Científicas (IPE-CSIC), European Space Agency (ESA) ECSAT
In recent decades, the Mediterranean region (MR) has emerged as a critical hotspot for both climate change (Giorgi, 2006; Cramer et al., 2018; IPCC, 2023) and land abandonment (Herrando et al., 2016; Lasanta et al., 2017; Jiménez-Olivencia et al., 2021). In Europe alone, approximately 120 million hectares of land have been abandoned, primarily due to socio-economic factors such as rural migration to areas offering better economic opportunities, a trend particularly pronounced in the Mediterranean. Following World War II, traditional agro-pastoral activities in the Mediterranean began to decline, driven by significant socio-economic shifts and the subsequent economic boom. As a result, rural areas and grazing lands in most of the Mediterranean mountain areas in the northern part of the basin have experienced widespread natural reforestation (Mottet et al., 2006). This, along with a significant rise in atmospheric evaporative demand due to climate change, has triggered complex and poorly understood effects on the carbon, water, and energy cycles that will be studied by GLANCE. The main goal of GLANCE is to leverage ESA CCI ECVs to quantify the effects of climate change and land abandonment on forest carbon dynamics and the water and energy cycles in the Mediterranean Region. GLANCE aims to answer the following research questions: 1. How much, where and what type of Mediterranean forest has expanded in the past couple of decades? 2. How has forest expansion due to land abandonment and climate change impacted the water, energy and carbon cycles, especially during major drought events? 3. What is the sensitivity of runoff, groundwater recharge, evaporation, net and gross primary productivity, and water use efficiency to changes in forest cover under land abandonment and climate change? GLANCE will achieve its objectives through: 1.
Utilizing past and ongoing ESA CCI and other ESA initiatives within an integrated framework to analyse long-term observations (at least 30 years) of forest carbon and water–energy cycle dynamics in the MR. 2. Employing tools, data and methods to accurately assess the uncertainties associated with each ESA CCI dataset and derived products, utilizing (i) collected benchmark data from the GLANCE database, (ii) error propagation techniques (e.g., Monte Carlo approaches) and Triple Collocation techniques, and (iii) cross-comparison among the existing observational products and reanalyses. 3. Employing state-of-the-art trend analysis, drought indices, and climate elasticity to uncover the impacts of land use changes on forest carbon and water cycle dynamics over the past three decades in the MR. In this contribution we provide an overview of the project and the main results achieved. Giorgi, F., 2006. Climate change hot-spots. Geophysical Research Letters 33, 2006GL025734. https://doi.org/10.1029/2006GL025734 Cramer, W., Guiot, J., Fader, M., Garrabou, J., Gattuso, J.-P., Iglesias, A., Lange, M.A., Lionello, P., Llasat, M.C., Paz, S., Peñuelas, J., Snoussi, M., Toreti, A., Tsimplis, M.N., Xoplaki, E., 2018. Climate change and interconnected risks to sustainable development in the Mediterranean. Nature Clim Change 8, 972–980. https://doi.org/10.1038/s41558-018-0299-2 Intergovernmental Panel on Climate Change (IPCC), 2023. Climate Change 2021 – The Physical Science Basis: Working Group I Contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, 1st ed. Cambridge University Press. https://doi.org/10.1017/9781009157896 Lasanta, T., Arnáez, J., Pascual, N., Ruiz-Flaño, P., Errea, M.P., Lana-Renault, N., 2017. Space–time process and drivers of land abandonment in Europe. CATENA 149, 810–823.
https://doi.org/10.1016/j.catena.2016.02.024 Herrando, S., Brotons, L., Anton, M., Páramo, F., Villero, D., Titeux, N., Quesada, J., Stefanescu, C., 2016. Assessing impacts of land abandonment on Mediterranean biodiversity using indicators based on bird and butterfly monitoring data. Envir. Conserv. 43, 69–78. https://doi.org/10.1017/S0376892915000260 Jiménez-Olivencia, Y., Ibáñez-Jiménez, Á., Porcel-Rodríguez, L., Zimmerer, K., 2021. Land use change dynamics in Euro-mediterranean mountain regions: Driving forces and consequences for the landscape. Land Use Policy 109, 105721. https://doi.org/10.1016/j.landusepol.2021.105721 Mottet, A., Ladet, S., Coqué, N., Gibon, A., 2006. Agricultural land-use change and its drivers in mountain landscapes: A case study in the Pyrenees. Agriculture, Ecosystems & Environment 114, 296–310. https://doi.org/10.1016/j.agee.2005.11.017
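The Triple Collocation technique mentioned in objective 2 estimates each dataset's error variance from the covariances of three collocated products with independent errors. A minimal NumPy sketch of the classical covariance-based formulation, on synthetic data with known noise levels (all numbers are illustrative):

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Classical triple collocation: estimate the error variance of each of
    three collocated datasets, assuming independent, zero-mean errors."""
    C = np.cov(np.vstack([x, y, z]))       # 3x3 sample covariance matrix
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex, ey, ez

rng = np.random.default_rng(42)
n = 200_000
truth = rng.normal(size=n)                 # unobserved "true" signal
x = truth + 0.1 * rng.normal(size=n)       # product 1: error variance 0.01
y = truth + 0.2 * rng.normal(size=n)       # product 2: error variance 0.04
z = truth + 0.3 * rng.normal(size=n)       # product 3: error variance 0.09
ex, ey, ez = triple_collocation_errors(x, y, z)
print(round(ex, 3), round(ey, 3), round(ez, 3))
```

The recovered error variances approximate the prescribed noise levels, which is how TC characterises ESA CCI products against each other without ground truth.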

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: GEOV2-AVHRR: global leaf area index time series from 1981 to 2022. Responses of vegetation to climate change

Authors: Aleixandre Verger, Marie Weiss, Peng Zhao, Thérèse Barroso, Fred Baret
Affiliations: CSIC, CREAF, INRAE, Institute of Mountain Hazards and Environment, Chinese Academy of Sciences, CNES
Long-term global monitoring of terrestrial vegetation from satellite Earth observation is critical for climate and Earth science applications. Leaf area index (LAI), defined as half the total developed area of green leaves per unit of horizontal ground area, has been recognized as an Essential Climate Variable for its key role in land-atmosphere interactions. Variations in magnitude and shifts in the timing of LAI are key indicators of climate change. However, success in such applications relies strongly on the quality of the satellite LAI time series. This paper describes the recently generated GEOV2-AVHRR global LAI time series, spanning July 1981 to December 2022 and derived from NOAA and Metop-2 AVHRR data. GEOV2-AVHRR products are available at the Theia portal at 0.05° and 0.5° spatial resolution and 10-day frequency. The product capitalizes on the efforts undertaken to pre-process the AVHRR temporal series, resulting in the Long Term Data Record, and on recent improvements in the processing of biophysical variables, resulting in Version 2 of the Copernicus Land Monitoring Service (CLMS) Collection 1 km LAI product (GEOV2-CLMS). The comparison between GEOV2-AVHRR and GEOV2-CLMS LAI products shows that 94% of land pixels are within ±0.5 LAI. Similarly, ~90% of land pixels agree within ±0.5 LAI with the GIMMS LAI4g and GLASS AVHRR-derived products for the common period 1982-2016. Comparison with ground measurements showed that GEOV2-AVHRR outperforms other AVHRR products in terms of accuracy (0.96 LAI). The sign of the temporal trends of the GEOV2-AVHRR, GIMMS and GLASS LAI products agrees in 85% of land pixels, with ~80% greening (~50% significant at p<0.05) and ~20% browning (~5% significant) as evaluated over the period 1982-2016, albeit with important discrepancies in the spatial distribution of the significant (p<0.05) positive and negative trends.
Further comparisons and evaluations of long-term trends from the AVHRR LAI time series against variations in climate drivers (warming, heatwaves and drought), at both global and regional scales, will be presented in this paper.
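The pixel-agreement statistics quoted above (e.g. 94% of land pixels within ±0.5 LAI) reduce to a simple per-pixel computation; a sketch with illustrative names, not the authors' processing chain:

```python
import numpy as np

def fraction_within(a, b, tol=0.5):
    """Fraction of jointly valid pixels where two LAI maps agree within
    +/- tol; NaN marks invalid or missing pixels in either product."""
    valid = np.isfinite(a) & np.isfinite(b)
    return float(np.mean(np.abs(a[valid] - b[valid]) <= tol))
```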

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: C.06.08 - POSTER - Innovations in Calibration and Validation for New and Upcoming Earth Observation Atmospheric Missions

The Calibration and Validation (Cal/Val) of atmospheric missions is essential for ensuring the accuracy and reliability of satellite data used in environmental monitoring and climate research. This session concentrates on the latest developments and methodologies in Cal/Val activities for both current and upcoming atmospheric missions, including ALTIUS, AOS, EarthCARE, PACE, PREFIRE, TEMPO, TANGO, CO2M, Sentinel-4, and Sentinel-5.

The session focuses on various aspects of Cal/Val, such as pre-launch and post-launch activities, innovative calibration techniques, the establishment of ground-based validation networks, and the convergence on common practices. Emphasis is placed on the importance of sensor intercalibration, the use of vicarious calibration targets, and the collaboration between missions to achieve high-quality data. Additionally, the session will cover the role of airborne campaigns in providing critical validation data and the assimilation of data in Numerical Weather Prediction (NWP) modeling for quality monitoring and validation to enhance the accuracy of satellite-derived products.

By providing a comprehensive overview of state-of-the-art Cal/Val activities, the session aims to foster discussions on best practices and future directions in the field, ultimately contributing to the advancement of atmospheric science and the improvement of atmospheric satellite data quality.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Assessing Instrument Performance through On-Site Field Calibration: Insights from the CINDI-3 Campaign

Authors: Hanna Rohringer, Stefanie Morhenn, Dr. Ragi Ambika Rajagopalan, Manuel Roca, Martin Tiefengraber, Dr. Alexander Cede
Affiliations: LuftBlick Earth Observation Technologies, Medical University Innsbruck, SciGlob
The validation of satellite-based atmospheric measurements is an important step in the assessment of atmospheric composition and air quality. Ground-based remote sensing instruments, such as MAX-DOAS systems, are widely used as high-quality references for satellite measurements. In order to intercompare the performance and reliability of different reference instruments, the third Cabauw Intercomparison of UV-Vis DOAS instruments (CINDI-3) took place at the Cabauw Experimental Site for Atmospheric Research (CESAR), managed by the Royal Netherlands Meteorological Institute (KNMI), in summer 2024. In this campaign, 60 UV/Vis instruments were installed at the site and measured on a pre-arranged common schedule for 19 days. The main objective of the campaign was to intercompare a range of trace gas products, to understand the overall instrument performance and data product reliability, and to contextualize the results through community effort. We want to further understand the instruments' performance and reliability in a calibration context and hence aim to characterize some of the instruments' electronic and optical properties. Given that some instruments had never undergone a comprehensive radiometric or stray light calibration in the laboratory, we performed a set of on-site field calibrations of the UV/Vis instruments participating in the campaign. For this, we set up a provisional calibration laboratory and used a mobile FEL lamp to take measurements at different exposure times covering the instruments' spectral range. Furthermore, we conducted stray light measurements using a set of three cut-off filters in combination with the FEL lamp. This field calibration exercise was carried out for the first time within the CINDI campaigns. In total, we calibrated 23 instruments (16 different instrument types) and analyzed the results in a comparative manner.
In this study, we present our preliminary results, including dark current, signal-to-noise ratio, photon-to-electron gain, detector non-linearity, pixel response non-uniformity, and stray light behavior. Further, we discuss the challenges we faced during the calibration process and the lessons learned for future field calibration campaigns.
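As one example of the properties listed above, detector non-linearity can be probed from an exposure-time ramp under a stable lamp: measured counts are fit against exposure time, and relative residuals from the straight line indicate non-linear response. A minimal sketch under that assumption (not the campaign's actual processing):

```python
import numpy as np

def nonlinearity_residuals(exposure_s, counts):
    """Relative deviation of measured counts from a straight-line fit
    against exposure time; with a stable light source, nonzero residuals
    are a simple proxy for detector non-linearity."""
    exposure_s = np.asarray(exposure_s, dtype=float)
    counts = np.asarray(counts, dtype=float)
    slope, intercept = np.polyfit(exposure_s, counts, 1)
    fit = slope * exposure_s + intercept
    return (counts - fit) / fit
```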

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: EarthCARE Quality Control within the DISC Framework

Authors: Raul Diez Garcia, Marli Cika, Pranjal Khawas, Timon Hummel
Affiliations: Telespazio UK Ltd, European Space Agency
The EarthCARE (Earth Clouds, Aerosols and Radiation Explorer) mission, a collaborative effort between ESA and JAXA, was launched on May 29th, 2024, to advance our understanding of cloud-aerosol interactions and their effects on Earth's radiation budget. Equipped with state-of-the-art instruments, including an Atmospheric Lidar (ATLID), a Cloud Profiling Radar (CPR), a Multispectral Imager (MSI), and a Broadband Radiometer (BBR), EarthCARE is designed to provide high-resolution data on cloud dynamics, aerosol properties, and radiation processes. Since its launch, the EarthCARE mission has successfully completed its commissioning phase, which involved extensive system checks and calibration to ensure full operational readiness. The mission is on track to commence its scientific operations by late 2024. With its cutting-edge instruments, EarthCARE is expected to significantly contribute to advancements in climate modelling and weather forecasting by providing unprecedented data on cloud-aerosol-radiation interactions. The EarthCARE Data Innovation and Science Cluster (DISC) supports the mission by developing data processing algorithms, validating scientific products, providing user support, and ensuring high-quality data. Among its responsibilities, the EarthCARE DISC plays a critical role by ensuring the quality, consistency, and completeness of data through systematic quality control and monitoring. The DISC develops and maintains data processing tools, conducts anomaly investigations, and provides long-term performance assessments. In this work, the latest results from the systematic quality control activities conducted by the DISC will be presented. This will include an in-depth review of the quality control baseline, detailing the key checks and criteria applied to monitor data consistency, completeness, and compliance with mission requirements. 
Comprehensive statistics will also be provided on mission data unavailabilities and degraded periods, highlighting their causes, durations, and potential impacts on scientific outputs. Finally, drawing on nearly a year of data collected since launch, the first long-term assessments will be presented. These findings offer valuable insight into the effectiveness of the quality control framework and its role in maintaining the high standards of EarthCARE's observational data.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: The NO2 camera: New Capacity for the Monitoring of the Urban NO2 Field with High Spatio-Temporal Resolution in Support of Sentinel-4

Authors: Pierre Gramme, Dr Cedric Busschots, Dr Stefano Casadio, Dr Emmanuel Dekemper
Affiliations: Royal Belgian Institute For Space Aeronomy, SERCO Italia SpA
Networks of air quality stations and dedicated research projects have unveiled the high dynamics of NO2 abundance in urban environments, in both the spatial and temporal dimensions. On top of the well-known morning and afternoon peaks, large differences in concentration can be observed when comparing one neighborhood to another, or even between adjacent streets. However, state-of-the-art air quality instruments face significant limitations in capturing these phenomena effectively. Air quality stations are generally under-deployed, and existing satellites provide temporal and spatial resolutions that are too coarse to adequately monitor the urban NO2 field. With the launch of Sentinel-4, the diurnal variability will be captured at the continental scale, offering much larger possibilities of comparison with ground-based instruments and models. Networks of remote sensing instruments, such as the Pandonia Global Network or the NDACC MAX-DOAS instruments, are crucial for satellite NO2 product validation. Their operational character and high accuracy deliver valuable information on the tropospheric NO2 vertical column with regular temporal sampling, in several azimuths, but averaged over large distances. The limited number of sampled directions prevents capturing the spatial features of the urban NO2 field. Comparison with adjacent satellite pixels also remains difficult, as they are not always adequately sampled. We have developed a new type of atmospheric remote sensing instrument which puts the focus on high spatio-temporal acquisitions. The NO2 camera probes the NO2 field by taking hyperspectral images of 20°x20° scenes in the spectral region between 435 and 455 nm. A DOAS analysis performed in every pixel delivers a 2D array of NO2 differential slant column densities (dSCDs) sampled at 0.04° in both azimuth and elevation.
In 2024, the instrument was tested in Rome for three weeks in March and during the CINDI-3 campaign in Cabauw, the Netherlands. With this paper, we would like to stress the added value of the spectral imaging technique. We will show how strong but localized NO2 sources can be uniquely observed. We will also compare the NO2 camera product with more conventional remote sensing instruments and show how it can be used to validate TROPOMI data and, in the future, the Sentinel-4 hourly NO2 product.
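At its core, the per-pixel DOAS retrieval of dSCDs described above is a linear least-squares fit of the measured optical depth against absorber cross sections plus a low-order closure polynomial for broadband scattering. A simplified sketch (function names and the single-absorber setup are illustrative, not the instrument's actual retrieval code):

```python
import numpy as np

def doas_dscds(wl, intensity, reference, cross_sections, poly_order=3):
    """DOAS fit sketch: model tau = ln(reference/intensity) as a linear
    combination of absorber cross sections (coefficients = dSCDs) plus a
    polynomial in normalized wavelength, solved by least squares."""
    tau = np.log(reference / intensity)
    w = (wl - wl.mean()) / (wl.max() - wl.min())  # normalize for conditioning
    X = np.column_stack(list(cross_sections) +
                        [w**k for k in range(poly_order + 1)])
    scale = np.linalg.norm(X, axis=0)             # balance column magnitudes
    coeffs, *_ = np.linalg.lstsq(X / scale, tau, rcond=None)
    coeffs = coeffs / scale
    return coeffs[:len(cross_sections)]           # dSCDs of the absorbers
```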

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Leveraging Ground-Based, In-Situ, and Airborne Campaign Data to Validate Satellite Retrievals of Aerosol Water Uptake.

Authors: Jasper Mens, Bastiaan van Diedenhoven, Otto Hasekamp
Affiliations: SRON Netherlands Institute for Space Research
Aerosols affect climate in two main ways: directly, through scattering and absorption of solar radiation, and indirectly, through affecting cloud formation and cloud properties. Combined, these effects result in a net cooling, partially offsetting the warming caused by greenhouse gases (GHGs). While there is consensus about the existence of this cooling effect, its magnitude remains uncertain. This is problematic: as anthropogenic aerosol emissions decrease, we expect the cooling effect to diminish, leading to an enhanced warming effect from GHGs in the near future. Therefore, the accuracy of future warming projections strongly depends on our understanding of aerosols. In particular, the hygroscopicity (i.e., efficiency of water uptake) of aerosols is a poorly understood property, yet highly influential on both cloud droplet nucleation capacity and light scattering. Typically, aerosols are complex mixtures of particles which are themselves mixtures of various materials, some hydrophilic and others hydrophobic. The distribution of these species, both within and among aerosol particles, is a key factor determining the effect of water uptake. As a result, different model treatments of aerosol composition produce widely varying radiative forcing estimates, which strongly contributes to the uncertainty on the aerosol cooling term. Global hygroscopicity data are crucial to inform model choices and thereby improve forcing estimates, but the available data are sparse and limited. Our goal is to address this gap by compiling polarimetric satellite observations from POLDER-PARASOL (2006-2010) and SPEXone-PACE (2024 onwards) into the first ever global climatology of aerosol hygroscopicity. We use the RemoTAP algorithm to retrieve the aerosol refractive index, among other properties, from the multi-angle polarimeters.
Through comparison of the retrieved refractive index to the known refractive indices of the dry material and of water, a volume fraction of water in the aerosol is derived. Given the novelty of our approach, validation of the retrievals is of particular importance. First, we compare the satellite retrievals to ground-based measurements such as refractive indices derived from AERONET observations. Furthermore, to validate our method of deriving the water fraction from the refractive index in general, we compare the retrievals to aerosol water fractions derived from ground-based in-situ nephelometer measurements of particle growth in response to changes in humidity, combined with ambient relative humidity measurements. Another fundamental pillar of this validation is the PACE-PAX campaign, conducted in September 2024 with the express purpose of validating PACE and EarthCARE results. The campaign involved a high-altitude aircraft serving as a direct satellite proxy, which carried the SPEX airborne instrument. In addition, a low-altitude aircraft gathering in-situ measurements, two boats, and a glider participated in the campaign. Observation targets included AERONET stations and satellite overpasses, providing ample opportunities for intercomparison between high-altitude aircraft observations and ground-based, aircraft in-situ, and satellite measurements. We discuss our validation strategies and results, demonstrating the accuracy and limitations of our approach.
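The derivation of a water volume fraction from the retrieved refractive index can be sketched under a simple linear volume-mixing assumption for the real refractive index (the authors' actual mixing rule may differ; names are illustrative):

```python
def water_volume_fraction(n_retrieved, n_dry, n_water=1.33):
    """Water volume fraction f under a linear volume-mixing rule:
        n_retrieved = f * n_water + (1 - f) * n_dry
    Assumes the retrieved real refractive index lies between the dry
    and water endpoints."""
    return (n_dry - n_retrieved) / (n_dry - n_water)
```

For example, a retrieved index of 1.43 for dry material at 1.53 would correspond, under this rule, to a water volume fraction of 0.5.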

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: The Characterization and Correction of stray light in the MAMAP2D-Light Instrument: an Airborne Imaging Remote Sounder for greenhouse gases.

Authors: Oke Huhs, Konstantin Gerilowski, Dr. Sven Krautwurst, Dr. Jakob Borchard, Dr. Heinrich Bovensmann, Prof. John Philip Burrows, Hartmut Boesch
Affiliations: University of Bremen, Institute of Environmental Physics
The CO2M mission, currently under development, has the goal of operationally monitoring the most important greenhouse gases, carbon dioxide (CO₂) and methane (CH₄), from space. In parallel, the airborne demonstrator CAMAP (CO₂ And Methane Airborne maPper), a high-performance imaging spectrometer, is being developed and assembled at the IUP in Bremen under ESA contract. The CAMAP instrument aims to replicate the spectral specifications of the CO2M mission in three spectral channels: NIR (760 nm), SWIR-1 (1.6 µm), and SWIR-2 (2 µm). This will support the development of retrieval algorithms and facilitate the interpretation and validation of Level 2 (dry-air mole fraction maps) data products. CAMAP builds on the push-broom spectrometer MAMAP-2D (Methane Airborne MAPper 2D), which comprises a NIR and a SWIR-1 channel and is scheduled to be flown in 2025. Since 2021, the smaller MAMAP2D-Light instrument, which consists of a SWIR-1 channel only, has been flown successfully, among others as part of the CoMet 2.0 Arctic mission aboard HALO (High Altitude and Long-Range Research Aircraft) in Canada in summer 2022 and aboard a motor glider (Dimona HK 36) during the BBCMap campaign in Queensland, Australia in summer 2023. The optical design of the single-channel MAMAP2D-Light closely resembles the individual channels of both MAMAP-2D and CAMAP, allowing insights gained from calibration measurements to be applied to all three instruments. Following the CoMet 2.0 mission, stray light characterization measurements were performed on MAMAP2D-Light; these require high accuracy at signal levels orders of magnitude lower than the typical dynamic range of operation of the detector used. These measurements have contributed to improving the design of all three instruments and enabled a post-flight stray light correction for the CoMet 2.0 MAMAP2D-Light campaign data set.
This work will present the stray light characterization measurements and address issues related to detector effects and the monochromatic light source used. An approach for managing these error signals and correcting out-of-band stray light, using synthetic as well as real measured data, will be presented, and the impact on the data from Level 0 (binary spectra) to Level 4 (emission fluxes) will be examined.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: The Boundary-layer Air Quality-analysis Using Network of INstruments (BAQUNIN) Supersite: achievements and perspectives

Authors: Stefano Casadio, Anna Maria Iannarelli, Annalisa Di Bernardino, Monica Campanelli, Giampietro Casasanta, Henri Dièmoz, Cristiana Bassani, Gabriele Mevi, Anna Maria Siani, Massimo Cardaci, Tatiana Di Iorio, Stefano Decesari, Enrico Cadau, Angelika Dehn, Philippe Goryl
Affiliations: Serco Italia, Sapienza University of Rome, CNR-ISAC Tor Vergata, ARPA Valle d’Aosta, CNR-IIA, Monterotondo, ENEA Casaccia, CNR-ISAC Bologna, ESA/ESRIN
The Boundary-layer Air Quality-analysis Using Network of Instruments (BAQUNIN) project has been promoted by the European Space Agency to establish an experimental research infrastructure for the validation of present and future satellite atmospheric products and the in-depth investigation of the planetary and urban boundary layers; since 2016, the supersite has been collecting pollutant concentrations and meteorological parameters. Currently, BAQUNIN consists of three sites located in the city center of Rome (Italy) and in the neighboring semi-rural and rural areas. To the best of our knowledge, BAQUNIN is one of the first supersites in the world to involve several passive and active ground-based instruments installed in multiple observatories managed by different research institutions, in a highly polluted megacity affected by coastal weather regimes. Currently, the three observatories composing the BAQUNIN supersite are:
• URBAN: Atmospheric Physics Laboratory (APL) of Sapienza University, Rome downtown
• SEMI-RURAL: CNR-ISAC Tor Vergata
• RURAL: CNR-IIA Montelibretti
In addition, scientific exploitation of atmospheric data is carried out in collaboration with a number of “federated” Italian observatories, located in maritime (ENEA, Lampedusa), alpine (ARPA Valle d’Aosta), Po Valley (CNR-ISAC Bologna), and Arctic (ENEA, Thule) environments. In this framework, fruitful collaborations and knowledge exchanges are promoted. The BAQUNIN-APL observatory is equipped to host ground-based instruments for inter-comparison/inter-calibration campaigns in a challenging urban environment. Thanks to its unique characteristics, APL has been officially selected as the ACTRIS-Italia inter-calibration/validation facility. During the last three years, the BAQUNIN-APL instrumental suite has been upgraded to include a) an EM27/SUN FTIR (part of the COCCON network), b) a Doppler wind lidar (0-200 m), c) a Clarity Node (in-situ air quality), and d) the NO2 camera (BIRA-IASB, 2D imaging spectroscopy).
In this contribution, the main characteristics of BAQUNIN supersite are described, providing information about the complex instrumental suite operations/maintenance and about the produced data, along with a discussion on future perspectives in view of Cal/Val activities related to forthcoming atmospheric composition satellite missions (e.g. Sentinel-4, Sentinel-5, etc.). The project adopts a policy of free sharing of its validated dataset with the community; for more details visit https://www.baqunin.eu website.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Evaluation of EarthCARE Level 2 Product Uncertainties – AOD Uncertainties in the Arctic

Authors: Kerstin Stebel, Ann Mari Fjæraa, Espen Sollum, Tove Svendby
Affiliations: NILU
Validation studies mainly focus on the measured values themselves. The validation of the associated uncertainties is less common, but equally important. Uncertainty is a vital component of any data record, as it provides the context with which to understand the quality of the data. It is essential information for data assimilation and for comparison with other measurements, and much effort has been devoted to this topic within the framework of the ESA Climate Change Initiative. Within the Norwegian initiative for EarthCARE Validation of Aerosol uncertainties and Radiation products in the Arctic (NEVAR), one of the key tasks is dedicated to the evaluation of EarthCARE uncertainties. Here, we first summarize ESA’s EarthCARE L2 product uncertainty estimates, utilizing information from sources such as the Atmos. Meas. Tech. special issue, the Product Definition Documents, and information obtained from the algorithm teams. Furthermore, we present a summary of a literature review on the evaluation of uncertainties in Earth Observation products. An initial study of uncertainties in the EarthCARE L2 datasets provided for the Cal/Val rehearsal will be extended to include Level 2a and two-sensor Level 2b products, which are scheduled for release in March 2025 (2nd EarthCARE Validation Workshop). We examine the various L2 uncertainty records for validity, identify missing or anomalous information, and provide exemplary statistics characterizing EarthCARE product uncertainties. A suitable approach for evaluating AOD uncertainties has been developed in the framework of the Aerosol_cci projects. This methodology will be utilized to evaluate EarthCARE AOD uncertainties in the Arctic, using AERONET Level 1.5 data as the reference dataset. Potential extensions to include additional approaches or geophysical parameters in the Arctic will also be explored.
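A common check of the kind developed in Aerosol_cci is to normalize satellite-minus-reference differences by the combined reported uncertainty: if the reported uncertainties are realistic, the normalized differences scatter like a standard normal distribution (spread close to 1). A minimal sketch with illustrative names, not the project's actual code:

```python
import numpy as np

def normalized_differences(sat, sat_unc, ref, ref_unc):
    """Satellite-minus-reference differences divided by the combined
    reported uncertainty; a spread well above 1 suggests uncertainties
    are underestimated, well below 1 that they are overestimated."""
    sat, sat_unc, ref, ref_unc = map(np.asarray, (sat, sat_unc, ref, ref_unc))
    return (sat - ref) / np.sqrt(sat_unc**2 + ref_unc**2)
```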

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Activities of the Aerosol Clouds and Trace gases European Research Infrastructure (ACTRIS) Expert Group on Satellite Cal/Val

Authors: Dimitris Balis, Stelios Kazadzis, Vassilis Amiridis, Arnoud Apituley, Holger Baars, Alexandru Dandocsi, Jean-Philippe Putaud, Kristina Höhler, Gianluigi Liberti, Franco Marenco, Eleni Marinou, Doina Nicolae, Nikolaos Papagiannopoulos, Lucas Pfitzenmaier, Alejandro Rodriguez-Gomez, Kerstin Stebel, Michael Sicard, Tijl Verhoelst, Ulla Wandinger, Robert Wegener
Affiliations: Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki, PMOD World Radiation Center, Davos, Switzerland, National Observatory of Athens (IAASARS/NOA), Royal Netherlands Meteorological Institute, Leibniz Institute for Tropospheric Research, National Institute of R&D for Optoelectronics, European Commission, Joint Research Centre, Karlsruhe Institute of Technology, Institute of Meteorology and Climate Research, Institute of Marine Sciences, Italian National Research Council, Climate and Atmosphere Research Center, The Cyprus Institute, Consiglio Nazionale delle Ricerche-Istituto di Metodologie per l’Analisi Ambientale, University of Cologne, Institute of Geophysics and Meteorology, CommSensLab, Dep. of Signal Theory and Communications, UPC, Norwegian Institute for Air Research, Laboratoire de l'Atmosphère et des Cyclones, Univ. de la Réunion, Royal Belgian Institute for Space Aeronomy, Forschungszentrum Jülich GmbH
The Aerosol, Clouds and Trace Gases Research Infrastructure (ACTRIS) is the pan-European research infrastructure (RI) producing high-quality data and information on short-lived atmospheric constituents and on the processes leading to the variability of these constituents in natural and controlled atmospheres. ACTRIS officially became the 33rd European Research Infrastructure Consortium (ERIC) on 25 April 2023. ACTRIS enables free access to high-class long-term atmospheric data and data products on short-lived atmospheric constituents and clouds relevant to climate and air pollution. It offers access to world-class facilities providing unique research environments and expertise, promoting cutting-edge science and international collaborations. One of the main objectives of ACTRIS is to contribute the necessary observations to complement Earth observations from space, providing unique ground-truthing of remote sensing information collected by current and future satellite missions. ACTRIS plays a pivotal role in this calibration and validation (cal/val) process with a network of around 50 data-harmonized ground stations, equipped with state-of-the-art remote sensing instruments such as lidars, sun photometers and radars, essential for comparing satellite data with ground measurements in diverse environments. In 2024, the Expert Group on Satellite Cal/Val (EG-Sat) was established in ACTRIS as an advisory body which provides recommendations to the ACTRIS Director General and to the ACTRIS community regarding matters related to the acquisition and provision of data and services in support of Cal/Val for satellite atmospheric missions, including scientific expertise. EG-Sat interfaces with the space agencies (ESA, EUMETSAT) and Copernicus regarding the design, planning and implementation of ACTRIS activities in support of Cal/Val for satellite atmospheric missions, and defines (or supports) the collaboration framework.
Although recently established, EG-Sat has already been involved in coordinating ACTRIS and EarthCARE correlative observations, establishing links with EUMETSAT for operational validation of satellite products, and exploring the use of research infrastructure access for ensuring the long-term validation of satellite products. This presentation shows examples from dedicated Cal/Val campaigns and pilot studies organized in the framework of the ATMO-ACCESS project, as well as plans for activities towards current and future satellite atmospheric missions.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: The EarthCARE mission validation activity at the Lampedusa Climate Observatory in the framework of the EC-Valmed.it project

Authors: Daniela Meloni, Fabrizio Anello, Virginia Ciardini, Lorenzo De Silvestri, Tatiana Di Iorio, Alcide di Sarra, Paolo Grigioni, Francesco Monteleone, Giandomenico Pace, Salvatore Piacentino, Claudio Scarchilli, Damiano Sferlazzo, Pamela Trisolino
Affiliations: Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA)
The ENEA Climate Observatory at Lampedusa island (https://www.lampedusa.enea.it) participates in the validation activities of the EarthCARE (EC) mission Commissioning phase in the framework of the EC-Valmed.it project, coordinated by CNR-ISAC and funded by the Italian Space Agency (ASI), and as part of a national network of Italian atmospheric observatories in the Central Mediterranean providing high-quality correlative data for EarthCARE L2 product validation in response to an ESA Announcement of Opportunity. The island is far from the mainland, flat, and with a small surface area (about 20 km²), and thus has negligible orographic impact. The Climate Observatory is composed of three observation sites located a relatively short distance from each other: the Atmospheric Observatory (AO, 35.52° N, 12.63° E, 45 m amsl), established in 1997 on the north-eastern part of the island to study atmospheric composition and the surface radiation budget; the Oceanographic Observatory (OO, 35.49° N, 12.47° E, 8 m amsl), located about 5 km south-west of the island and operational since 2015 to study air-sea interaction; and the Terrestrial Ecosystem Observatory (EO, 35.53° N, 12.54° E, 119 m amsl), set up in 2023 in the western part to monitor carbon dioxide air-vegetation exchanges. The integrated observatory includes a large set of instruments for remote sensing of aerosol and cloud columnar and vertically resolved properties, belonging to international networks, and is in the process of entering the Aerosol, Clouds and Trace Gases (ACTRIS) European Research Infrastructure. The instruments involved in the EC validation activity are: a Raman-Mie-Rayleigh lidar, a Metek MIRA-35c cloud Doppler radar, a Thies Clima disdrometer, a Lufft CHM 15k ceilometer, a Cimel solar/lunar photometer, a Schereder all-sky camera, and an RPG HATPRO microwave radiometer. The instruments are regularly maintained and calibrated to guarantee the best possible quality of the dataset.
In preparation for the validation activity, analyses have been carried out on the spatio-temporal representativeness of the Lampedusa measurements, based on the comparison of the long-term measurements available at the AO with model and reanalysis data over a large area of the Central Mediterranean, and taking advantage of the availability of similar measurements at the AO and EO (about 8 km apart), which allows the short-range variability of some atmospheric variables to be captured. In addition, the uncertainty of the ground-based measurements to be compared with the EC products has been assessed. The present work addresses the comparison of L2a products derived from the MSI, ATLID, and CPR EC instruments with ground-based correlative measurements collected during EC overpasses starting from 28 July 2024, during both daytime and nighttime. The examined variables are the spectral aerosol optical depth; the aerosol extinction and backscattering coefficient profiles; cloud base and top; Doppler velocity; liquid, ice, and rain water path; liquid water content; effective radius; and hydrometeor classification. The criteria for the choice of the spatio-temporal grid of satellite-based data and the quantitative comparison of a selection of atmospheric variables are presented and discussed.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: The ESA atmospheric Validation Data Centre (EVDC): Applications for EarthCARE

Authors: Paolo Castracane, Angelika Dehn, Jarek Dobrzanski, Ann Mari Fjaeraa, Alastair McKinstry
Affiliations: STARION c/o ESA/ESRIN, ESA/ESRIN, Skytek, NILU, ICHEC
The ESA Atmospheric Validation Data Centre (EVDC, https://evdc.esa.int/) is the official ESA repository for correlative data in the atmospheric domain, including ground-based, aircraft, balloon, and Fiducial Reference Measurements (FRMs) calibration and validation (Cal/Val) data. It offers an online information system providing end-to-end support for Cal/Val activities, from the orbit prediction and overpass tool for campaign planning to the management of the Generic Earth Observation Metadata Standard (GEOMS-related tools) and the provision of DOIs for datasets. The platform also provides access to satellite data for specific missions, in particular Sentinel-5P, Aeolus, ENVISAT/MIPAS, and EarthCARE. Access to the EarthCARE data products from EVDC is provided via integration with the OADS dissemination systems (OADS credentials are required to download pre-operational data into the EVDC platform). As a result of working with the EarthCARE Validation Team, a selection of processing tools has been integrated into the EVDC platform. The new processing system offers the capability to process data using arbitrary processing modules, components, and code. In addition to the tools already available, such as HARP, a new suite of tools dedicated to colocation activities (satellite and validation data), as well as a number of dedicated Cal/Val applications (e.g. the Community Intercomparison Suite (CIS) with an integrated EarthCARE data reader, the MSI Tool, CLM (Lidar Tool), and the CPR suborbital-to-orbital transformer (Radar Tool)), have been made available through a new workflow-oriented tool that allows users to visually build, test, and share data processing graphs composed of the available processing tools and arbitrary processing scripts.
As part of the continued effort to make the EVDC platform more accessible to the growing user community, webinars and training courses have been organised and video tutorials, explaining how to work with the tools, have been made available through the EVDC Web Platform.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: First results of the European activities for the EarthCARE validation in the framework of ACTRIS/ATMO-ACCESS

Authors: Holger Baars, Eleni Marinou, Lucia Mona, Christina Anna Papanikolaou, Ewan O’Connor, Lukas Pfitzenmaier, Stephanie Rusli, Rob Koopman, Ann Mari Fjæraa, Felipe Toledo-Bittner, Nathan Feuillard, Doina Nicolae
Affiliations: Leibniz Institute For Tropospheric Research (TROPOS), IAASARS, National Observatory of Athens, Istituto di Metodologie per l’Analisi Ambientale, Consiglio Nazionale delle Ricerche, Finnish Meteorological Institute, European Space Agency, ESA-ESTEC, Climate and Research Institute NILU, National Institute of Research and Development for Optoelectronics, Institute of Geophysics and Meteorology, University of Cologne, Laboratoire Atmospheres et Observations Spatiales, LATMOS
The Earth Cloud Aerosol and Radiation Explorer (EarthCARE) is a satellite mission implemented by the European Space Agency (ESA) in cooperation with the Japan Aerospace Exploration Agency (JAXA) to measure vertical profiles of aerosol, cloud, and precipitation properties together with radiative fluxes. This Earth Explorer mission was launched in May 2024 and is performing observations of the atmosphere using a high spectral resolution lidar (ATLID), a Doppler cloud radar (CPR), a multi-spectral imager (MSI), and a broadband radiometer (BBR) [1]. It can thus be considered a space-based equivalent of an ACTRIS aerosol and cloud remote sensing observatory. The Aerosol, Clouds and Trace Gases Research Infrastructure (ACTRIS) is the pan-European research infrastructure producing high-quality data and information on short-lived atmospheric constituents and on the processes leading to the variability of these constituents in natural and controlled atmospheres. It includes high-quality aerosol and cloud remote sensing facilities. Many stations of the European Lidar Network EARLINET [2] and the European cloud profiling network Cloudnet [3] are part of ACTRIS. ATMO-ACCESS is a pilot project of the European Commission addressing the need for sustainable solutions, based on the principles of open access, for efficient and effective access provision suited to distributed atmospheric research infrastructures such as ACTRIS. In the framework of the ATMO-ACCESS pilot activities for internal stakeholders, a dedicated project was initiated to support the validation of the ESA EarthCARE data products. To meet the extensive validation needs of these novel space-borne observations, a consortium of about 50 participating observatories was formed, together with the ACTRIS central facilities, i.e., the ACTRIS Topical Centres CARS and CCRES and the ACTRIS Data Centre Units CLU and ARES.
Complex QA/QC and central processing provide high-quality reference data for the EarthCARE validation. For this reason, not only observational platforms (OP, fixed stations) and mobile platforms (MP) but also ACTRIS central facilities (namely the topical centres and data centre units for the specific components of aerosol remote sensing, ARS, and cloud remote sensing, CRS) were part of this initiative. Starting in June 2023, preparation activities took place, including an intense rehearsal campaign in the autumn of the same year to test the foreseen workflow. Afterwards, intense analysis and optimization took place before the real Cal/Val campaign started with the EarthCARE readiness for correlative measurements in July 2024. In parallel to these preparations, intercalibration activities with ESA reference lidars and MPLNet [4] have been performed. Naturally, due to the geographic distribution, many cloud observations have been collected in the northern and central parts of Europe, while aerosol remote sensing observations are mainly available in the Mediterranean and at the outside-Europe ACTRIS stations in the tropical Atlantic and Central Asia. Mobile facilities also participated, such as the ACTRIS mobile aerosol and cloud remote sensing facilities, as well as uncrewed aerial vehicles (UAVs) for in-situ aerosol collection. Most of the collected correlative data is delivered directly to the ESA Validation Data Centre (EVDC) in near-real-time. There, the EarthCARE Level 1 simulator tools are foreseen to be operated so that a direct Level 1 validation is possible. At the time of writing this was not yet implemented, but it is expected in the near future. In this presentation, we will show first validation results, covering both case-study analyses and statistical approaches. The products to be validated include Level 1 (L1) data, but also Level 2 (L2) data, which is intended to be released to the validation teams in winter 2024/2025.
First test cases show that the ATLID simulator tool works well for denoised ground-based profiles of optical aerosol and cloud properties, with good agreement between the EarthCARE L1 dataset and the ground-based reference. However, at the time of writing, the particle depolarization ratio, calculated by dividing the ATLID Mie cross-polar by the Mie co-polar signal, appears too low and needs refinement. Further attempts using the L1 simulators, but also signal ratios to, e.g., retrieve the scattering ratio, will be made for long-term L1 validation and, where applicable, extended to the first L2 products. Initial results indicate that comparing CPR reflectivity with ground-based data from the ACTRIS cloud remote sensing systems is useful and effective. New attempts to enhance the existing technique for radar reflectivity monitoring [5,6] by applying statistical filtering to the data are foreseen, to make the data selection more reliable for validation. Furthermore, initial statistical comparisons of Doppler velocity from space and from the ground are planned and will be presented if available by the time of the symposium. A dedicated presentation by Pfitzenmaier et al., entitled “First Results of EarthCARE Cloud Profiling Radar Cal/Val activities using the ACTRIS ground-based cloud radar network”, will provide a more in-depth analysis of the CPR validation. Acknowledgements: This project is supported by the European Commission under the Horizon 2020 Research and Innovation Framework Programme, through the ATMO-ACCESS Integrating Activity under grant agreement No 101008004. We also acknowledge all contributing persons and their institutions in the framework of the ATMO-ACCESS pilot for EarthCARE Cal/Val and the AECARE (ACTRIS for EarthCARE L2 product evaluation) project.
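The two lidar quantities mentioned here reduce to simple signal ratios: the (volume) depolarization ratio is the cross-polar over the co-polar signal, and the scattering ratio is the total over the molecular backscatter. A minimal numpy sketch, with hypothetical calibrated signal arrays as inputs (not the actual ATLID or ground-based processing chain):

```python
import numpy as np

def depol_ratio(mie_cross, mie_co):
    """Depolarization ratio: cross-polar divided by co-polar signal."""
    return np.asarray(mie_cross) / np.asarray(mie_co)

def scattering_ratio(total_backscatter, molecular_backscatter):
    """Scattering ratio R = beta_total / beta_molecular; R = 1 in a
    purely molecular (particle-free) atmosphere."""
    return np.asarray(total_backscatter) / np.asarray(molecular_backscatter)
```

A ratio biased low, as reported for the first ATLID comparisons, would show up directly in such a profile-by-profile comparison against the ground-based reference.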
References: [1] Wehr, T., Kubota, T., Tzeremes, G., Wallace, K., Nakatsuka, H., Ohno, Y., Koopman, R., Rusli, S., Kikuchi, M., Eisinger, M., Tanaka, T., Taga, M., Deghaye, P., Tomita, E., and Bernaerts, D.: The EarthCARE Mission – Science and System Overview, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2022-1476, 2023. [2] Pappalardo, G., Amodeo, A., Apituley, A., Comeron, A., Freudenthaler, V., Linné, H., Ansmann, A., Bösenberg, J., D'Amico, G., Mattis, I., Mona, L., Wandinger, U., Amiridis, V., Alados-Arboledas, L., Nicolae, D., and Wiegner, M.: EARLINET: towards an advanced sustainable European aerosol lidar network, Atmos. Meas. Tech., 7, 2389–2409, https://doi.org/10.5194/amt-7-2389-2014, 2014. [3] Illingworth, A. J., et al. (2007). Cloudnet: Continuous Evaluation of Cloud Profiles in Seven Operational Models Using Ground-Based Observations. Bulletin of the American Meteorological Society, 88(6), 883-898. https://doi.org/10.1175/BAMS-88-6-883 [4] Welton, E.J., J. R. Campbell, J. D. Spinhirne, and V. S. Scott, 2001. Global monitoring of clouds and aerosols using a network of micro-pulse lidar systems, Proc. SPIE, 4153, 151-158. [5] Protat, A., and Coauthors, 2009: Assessment of Cloudsat Reflectivity Measurements and Ice Cloud Properties Using Ground-Based and Airborne Cloud Radar Observations. J. Atmos. Oceanic Technol., 26, 1717–1741, https://doi.org/10.1175/2009JTECHA1246.1. [6] Kollias, P., Puigdomènech Treserras, B., and Protat, A.: Calibration of the 2007–2017 record of Atmospheric Radiation Measurements cloud radar observations using CloudSat, Atmos. Meas. Tech., 12, 4949–4964, https://doi.org/10.5194/amt-12-4949-2019, 2019.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Validation of EarthCARE Level 2 Data Products in the WegenerNet 3D Open-Air Laboratory

Authors: Andreas Kvas, Esmail Ghaemi, Gottfried Kirchengast, Ulrich Foelsche, Jürgen Fuchsberger
Affiliations: Wegener Center for Climate and Global Change, University of Graz, Institute of Physics, University of Graz
The WegenerNet 3D Open-Air Laboratory for Climate Change Research (“WEGN3D Open-Air Lab”) in southeastern Austria provides a unique setup for the validation and evaluation of atmospheric remote sensing missions, such as ESA’s latest Earth Explorer EarthCARE. The instrumentation of the WEGN3D Open-Air Lab comprises a polarimetric X-band Doppler weather radar, a microwave radiometer for vertical profiling of temperature, humidity, and cloud liquid water, an infrared cloud structure radiometer, and a water-vapor-mapping GNSS station network. This highly synergistic measurement infrastructure enables cross-evaluation, calibration and quality control, ensuring the generation of robust correlative measurements and the production of reliable validation data products. Moreover, it is integrated with the high-density WegenerNet hydrometeorological ground station network, which has previously contributed to the validation of different GPM/PMM precipitation products (including IMERG and GPM-DPR estimates). With ground-based observations of cloud base height, melting layer base and top heights, liquid water content, precipitation parameters, rates and classification, the WEGN3D Open-Air Lab contributes to the validation of EarthCARE cloud and precipitation data products as part of the EarthCARE Validation Team. The facility has been operational since 2021, providing a consistent set of reference data during the commissioning phase of EarthCARE as well as for further calibration and validation activities throughout the mission lifetime. Here we present the post-launch validation activities undertaken in the WEGN3D Open-Air Lab within the framework of the WEGN4CARE project, one of the EarthCARE AO Cal/Val projects. We showcase the quality assurance measures employed for the ground-based instrumentation, with a particular focus on cross-instrument evaluation and calibration. 
We further outline the validation strategy and methodology, and show the first validation results for Level 2a cloud and precipitation data products derived from the EarthCARE Cloud Profiling Radar (CPR) and Atmospheric Lidar (ATLID). This includes a direct comparison of satellite data with our ground-based reference data, as well as comparisons between satellite statistics and the long-term statistics of ground-based observation records, given the limited number of direct overpasses recorded to date.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Ground-Based Observations for the EarthCARE Commissioning Cal/Val Campaign in Ottawa (ECALOT) and W-band HiSRAMS, AERI, FIRR-2, FINESSE and FIRMOS Experiment on Remote Sensing (WHAFFFERS)

Authors: Zen Mariani, Yi Huang, Zhipeng Qu, Natalia Bliankinshtein, Adrian Loftus, Helen Brindley, Robert Crawford, Stephen Holden, Robert Reed, Lei Liu, Benjamin Riot-Bretêcher, Eve Bigras, Cuong Nguyen, Hilke Oetjen, Dirk Schuettemeyer, Robert Koopman, Jonas von Bismarck
Affiliations: Meteorological Research Division, Environment and Climate Change Canada, Government of Canada, Department of Physics, McGill University, National Research Council Aerospace Division, National Research Council Canada, Mesoscale Atmospheric Processes Lab, NASA Goddard Space Flight Center, National Centre for Earth Observation, Department of Physics, Imperial College London, European Space Agency
Two ground-based and airborne campaigns were conducted in Canada during the commissioning phase of the EarthCARE mission and the pre-launch phase of the FORUM mission. Environment and Climate Change Canada (ECCC) deployed a suite of in-situ and remote sensing ground-based instruments at the National Research Council (NRC) Ottawa airport site during the EarthCARE Commissioning Cal/Val Campaign in Ottawa (ECALOT; fall and winter 2024/25) and the W-band HiSRAMS, AERI, FIRR-2, FINESSE and FIRMOS Experiment on Remote Sensing (WHAFFFERS; winter 2025). A second site was operated by McGill University at the Gault Nature Reserve, located ~200 km east of the Ottawa airport, to provide ground-based observations at a second location underneath the aircraft flights. The unique suite of instruments at these sites included microwave radiometers, scanning Doppler lidars, a new Vaisala water vapour profiling differential absorption lidar (DIAL), a Micro Rain Radar (MRR), NASA's ACHIEVE mobile W-band radar, surface meteorology instruments, and infrared radiometers such as the Atmospheric Emitted Radiance Interferometer (AERI). The sites were fully automated, collected observations 24/7, and provided detailed observations from the surface to above the boundary layer. The observations were used to provide insights into the local and synoptic meteorological conditions during each NRC cal/val flight track, to conduct cal/val of the aircraft remote sensing observations, and to provide a fixed-point dataset for cal/val of the EarthCARE mission. The suite of co-located far-infrared observations conducted at the sites was complemented by water vapour measurements from the radiometers and the DIAL, enabling radiance closure experiments, retrieval assessments, and characterization of water vapour's impact in the far infrared.
This presentation will highlight how the ground-based sites at the NRC Ottawa airport and the McGill Gault Nature Reserve supported cal/val activities for both satellite missions. Information about the ground-based instrumentation, observing methods and scan strategies, automated operation, and dataset applications for satellite cal/val will be discussed. Initial results and case studies of interest to both satellite missions will also be presented.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: EarthCARE Cloud Profiling Radar Validation using ACTRIS ground-based Cloud Radar Network

Authors: Lukas Pfitzenmaier, Nathan Feuillard, Felipe Toledo-Bittner, Martial Haeffelin, Ewan O'Connor
Affiliations: University Of Cologne, Laboratoire Atmospheres et Observations Spatiales, Institut Pierre Simon Laplace, Finnish Meteorological Institute
The Earth Cloud Aerosol and Radiation Explorer (EarthCARE) is a satellite mission implemented by the European Space Agency (ESA) in cooperation with the Japan Aerospace Exploration Agency (JAXA) to measure vertical profiles of aerosol, cloud, and precipitation properties together with radiative fluxes. The Earth Explorer mission was launched in May 2024. Since then, EarthCARE has been performing observations of the atmosphere using a high spectral resolution lidar (ATLID), a cloud profiling radar (CPR) with Doppler capability, a multi-spectral imager (MSI), and a broadband radiometer (BBR) [1]. With this instrumentation, EarthCARE can be considered a space-based equivalent of an ACTRIS aerosol and cloud remote sensing observatory. The Aerosol, Clouds and Trace Gases Research Infrastructure (ACTRIS) [2] is the pan-European research infrastructure producing high-quality data and information on aerosols, clouds, and trace gases in the atmosphere, to study and understand the processes involved. Many European cloud radar sites are part of the ACTRIS cloud remote sensing network and run the cloud target classification algorithm Cloudnet [3, 4]. ACTRIS, as part of the ATMO-ACCESS pilot project, supports the validation of ESA's EarthCARE data and products. The ACTRIS Centre for Cloud Remote Sensing (CCRES) has established QA/QC methodologies and procedures to control and monitor the stability of the instrumentation. CCRES thereby guarantees that a homogeneous dataset is available at each site and, consequently, that the resulting Cloudnet classification and other products are homogeneous and comparable. The ATMO-ACCESS pilot project made it possible for most of the collected, QA/QC-checked correlative data to be delivered directly to the EVDC in near-real-time; for more information, see the abstract by Baars et al.
“First results of the European activities for the EarthCARE validation in the framework of ACTRIS/ATMO-ACCESS”. This work presents the first validation results focusing on Cloud Profiling Radar measurements. The emphasis is on the statistical comparison of EarthCARE CPR and ground-based radar data [5, 6]. The novelty of this work lies in its application to EarthCARE CPR reflectivity data and in extending the methods to Doppler velocity measurements. Furthermore, we improved the existing method using statistical techniques to ensure the comparability of the data points per height bin. This enables us to compare statistically similar data from the CPR and the ground for a specified period. These comparisons yield reflectivity offsets between the CPR and each ground-based radar in the network, which can be used to monitor the sensors' stability over time. A similar approach is used for the Doppler velocity data. In addition to validating actual measured data, one can also validate the statistics of the synthetic CPR data calculated by the orbital-radar tool [7]. Evaluating such tools is important, too, because they can be used to develop the Cal/Val methodology. Acknowledgements: We acknowledge funding by the German Initiative for EarthCARE Validation (GIVE), funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK, grants 50EE2403A, C-G), the Centre National d'Etudes Spatiales (CNES), France, and all contributing persons and their institutions in the framework of the ATMO-ACCESS pilot for EarthCARE Cal/Val and the AECARE (ACTRIS for EarthCARE L2 product evaluation) project. References: [1] Wehr, T., Kubota, T., Tzeremes, G., Wallace, K., Nakatsuka, H., Ohno, Y., Koopman, R., Rusli, S., Kikuchi, M., Eisinger, M., Tanaka, T., Taga, M., Deghaye, P., Tomita, E., and Bernaerts, D.: The EarthCARE Mission – Science and System Overview, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2022-1476, 2023.
[2] Laj, P., and Coauthors, 2024: Aerosol, Clouds and Trace Gases Research Infrastructure (ACTRIS): The European Research Infrastructure Supporting Atmospheric Science. Bull. Amer. Meteor. Soc., 105, E1098–E1136, https://doi.org/10.1175/BAMS-D-23-0064.1. [3] Illingworth, A. J., et al. (2007). Cloudnet: Continuous Evaluation of Cloud Profiles in Seven Operational Models Using Ground-Based Observations. Bulletin of the American Meteorological Society, 88(6), 883-898. https://doi.org/10.1175/BAMS-88-6-883 [4] Tukiainen, Simo, O'Connor, Ewan, and Korpinen, Anniina (2020). CloudnetPy: A Python package for processing cloud remote sensing data. Journal of Open Source Software, 5(53), 2123, https://doi.org/10.21105/joss.02123 [5] Protat, A., and Coauthors, 2009: Assessment of Cloudsat Reflectivity Measurements and Ice Cloud Properties Using Ground-Based and Airborne Cloud Radar Observations. J. Atmos. Oceanic Technol., 26, 1717–1741, https://doi.org/10.1175/2009JTECHA1246.1. [6] Kollias, P., Puigdomènech Treserras, B., and Protat, A.: Calibration of the 2007–2017 record of Atmospheric Radiation Measurements cloud radar observations using CloudSat, Atmos. Meas. Tech., 12, 4949–4964, https://doi.org/10.5194/amt-12-4949-2019, 2019. [7] Pfitzenmaier, L., Kollias, P., Risse, N., Schirmacher, I., Puigdomenech Treserras, B., and Lamer, K.: Orbital-Radar v1.0.0: A tool to transform suborbital radar observations to synthetic EarthCARE cloud radar data, Geosci. Model Dev. Discuss. [preprint], https://doi.org/10.5194/gmd-2024-129, in review, 2024.
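The per-height-bin offset statistics described in this abstract can be sketched as follows. The MAD-based outlier rejection is an illustrative stand-in for the statistical filtering mentioned above, and the array shapes and thresholds are hypothetical:

```python
import numpy as np

def reflectivity_offset_per_bin(z_cpr, z_ground, min_samples=10, k_mad=3.0):
    """Mean reflectivity offset (dB) per height bin between spaceborne and
    ground-based radar, after discarding outliers via median absolute
    deviation (MAD). z_cpr, z_ground: 2-D arrays of reflectivity in dBZ,
    shape (profile, height_bin), NaN where there is no echo."""
    diff = z_cpr - z_ground                      # per-sample offset in dB
    offsets = np.full(diff.shape[1], np.nan)
    for i in range(diff.shape[1]):
        d = diff[:, i]
        d = d[np.isfinite(d)]
        if d.size < min_samples:
            continue                             # too few colocated echoes
        med = np.median(d)
        mad = np.median(np.abs(d - med))
        keep = np.abs(d - med) <= k_mad * max(mad, 1e-6)
        offsets[i] = d[keep].mean()
    return offsets
```

Tracking these per-bin offsets over time is what allows the ground network to monitor the stability of both the CPR and each ground-based radar.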
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Calibration and Validation strategies for operational products of the CO2 Monitoring (CO2M) mission

Authors: Bernd Sierk, Rüdiger Lang, Josef Gasteiger, Bernd Husemann, Catherine Hayer, Antoine Lacan, Pepe Phillips, Paola Colagrande, Leonid Butenko, Maurizio De Bartolomei, Helmut Bauch, Thomas Honig, Hannah Clarke, Bojan Bojkov, Fabrizio Di Loreto, Thierry Marbach, Cosimo Putignano, Vincenzo Santacesaria, Sruthy Sasi, Eduardo Valido Cabrera
Affiliations: Eumetsat
The European Space Agency (ESA) and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), on behalf of the European Commission, are developing the Carbon Dioxide Monitoring (CO2M) mission to complement the Copernicus Space Component with a capacity for monitoring anthropogenic carbon dioxide (CO2) emissions. The multi-satellite constellation and CO2M's ground segment will provide global high-quality observations of column-averaged CO2 and methane (in terms of the mole fractions XCO2 and XCH4). These observations, complemented by in-situ measurements and bottom-up inventories, will enable quantitative assessment of CO2 emissions and their trends at the scale of mega-cities, countries, and the globe, using advanced (inverse) modelling capabilities. The presentation will provide an overview of the operational processing system developments for CO2M, with a focus on pre- and post-launch calibration and validation strategies. Regarding Level-1 products, we address the planned on-ground calibration activities and highlight their relation to specific instrument aspects. We will also report on the development of a “bi-directional instrument model” (BIM) at EUMETSAT, which functions as an experimental end-to-end simulator as well as a Level-1 processor. One objective of the BIM is to analyse specific instrument effects (e.g. from detector artefacts) and to develop empirical correction models from both on-ground characterization and in-flight calibration measurements. We will showcase simulations demonstrating the capability of the system to improve the performance of correction algorithms during commissioning. Regarding Level-2 products, we will present the CO2M operational processing system developments at EUMETSAT, including first results from the innovative three-algorithm GHG (XCO2, XCH4) retrieval approach.
Based on simulations of realistic orbits for a constellation of three platforms, we will show how the measurements from the three instruments on board CO2M (the push-broom grating spectrometer CO2I/NO2I, the Multi-Angle Polarimeter (MAP), and the Cloud Imager (CLIM)) are combined into one “hyper-instrument” processing system. Finally, we will present an update on the planning of product commissioning and monitoring throughout the operations phase. The continuous and timely provision of ground-based reference data from all relevant ground networks (including TCCON, COCCON, Pandonia, NDACC, and AERONET) will play a key role in all activities concerning product validation and monitoring. We will summarize the status of, and the way forward regarding, future product validation activities.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: Preparation for air quality and climate validation activities for Sentinel-4 and Sentinel-5 over urban areas

Authors: Dimitris Balis, MariLiza Koukouli, Alkiviadis Bais, Katerina Garane, Annalisa Di Bernardino, Anna Maria Siani, Monica Campanelli, Cristiana Bassani, Stefano Casadio, Massimo Cardasi, Anca Nemuc, Alexandru Dandocsi, Stefan Marius Nicolae
Affiliations: Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki, University Sapienza, CNR-ISAC, CNR-IIA, SERCO, Remote Sensing Department, National Institute of Research and Development for Optoelectronics (INOE), Faculty of Electronics, Telecommunications and Information Technology, University of Science and Technology Politehnica
Air quality is a major global concern: recent studies show that more people die each year from air pollution than from malaria and HIV combined. Accurate information on air quality levels has traditionally depended on local monitoring stations alone, a practice with obvious shortcomings, such as the geographical sparsity of the monitoring networks, delays in reporting, and the diverse measurement techniques used, among others. In the Sentinel era, state-of-the-art satellite observations of atmospheric composition provide unprecedented high spatiotemporal coverage, down to the urban scale, for key air quality and climate-related species such as NOx, SO2, aerosols, clouds, O3, CO, and CH4, along with collocated ground-based measurements of high temporal resolution. The main aim of this study is to demonstrate the capability for the consolidated validation of the major air pollutants, clouds, and greenhouse gases provided by Sentinel-4 and Sentinel-5 using: the well-established ground-based infrastructure of the Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki, Greece (LAP/AUTh); the Boundary-layer Air Quality-analysis Using Network of INstruments Super-Site (BAQUNIN) suite of instruments operated at three different sites, an urban component (Atmospheric Physics Laboratory, or APL, of Sapienza University, Rome downtown), a semi-rural component (CNR-ISAC, Tor Vergata), and a rural component (CNR-IIA, Montelibretti); and the National Institute of R&D for Optoelectronics (INOE) remote sensing instrumentation at a peri-urban site 6 km south of Bucharest, Romania.
These three well-established monitoring stations, with their complex and diverse atmospheric conditions, including high variability in pollution sources, microclimate effects, and urban heat islands, will provide an excellent validation super-set of all major pollutants sensed by Sentinel-4 and Sentinel-5 affecting air quality and climate change. Deploying ground-based instruments across diverse urban areas enhances the representativeness of calibration datasets, enabling satellites to account for local variations in air quality, greenhouse gases, and aerosols. Furthermore, urban validation sites help improve satellite algorithms for densely populated regions, where accurate observational data is critical for public health and policy decisions. All three monitoring stations host instruments belonging to major global ground-based EO monitoring networks, such as EARLINET, COCCON, PGN, NDACC, EUBREWNET, AERONET, and CLOUDNET. This work is presented within the scope of the project “Validation activities of air quality for Sentinel-4 and Sentinel-5 over urban areas” (VALERIA), approved under the joint EUMETSAT/ESA “Announcement of Opportunity for Calibration and Validation Activities for Sentinel-4 and Sentinel-5” and part of the S45VT consortium.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A)

Poster: TEDS: Toolbox for End-to-enD Simulations for the TANGO mission

Authors: Raul Laasner, Benjamin Leune, Edward van Amelrooy, Mirna van Hoek, Antje Ludewig, Tobias Borsdorff, Ryan Cooney, Paul Tol, Pepijn Veefkind, Jochen Landgraf
Affiliations: SRON Netherlands Institute for Space Research, Royal Netherlands Meteorological Institute (KNMI)
The upcoming Twin ANthropogenic Greenhouse gas Observers (TANGO) mission comprises two CubeSat satellites to monitor CO2 and CH4 emissions from point sources. It is an ESA SCOUT mission, to be launched in 2028, that will help verify the carbon emission goals set by the Paris Agreement. Using an attitude control system, the agile spacecraft target observations at emitting facilities and other point sources by measuring reflected sunlight. The satellites fly in a loose formation with a time difference of less than one minute. In order to assess the compliance of the hardware design with the mission requirements, we have developed a toolbox for performing end-to-end (E2E) simulations covering all aspects of the mission data flow. The TEDS toolbox, freely available under an open-source license, has a modular design consisting of a geometry module (satellite ephemeris and attitude), a scene generation module (atmospheric composition and VIS/SWIR radiation scene), an instrument module (radiation scene to L1A data product), an L1A-to-L1B processor, an L1B-to-L2 processor, and an L2-to-L4 processor. Of those, the L1A-to-L1B and L1B-to-L2 processors comprise the operational processor, which is therefore a subset of the complete TEDS process chain. Furthermore, the TEDS modules are designed in a generic manner, making them adaptable to other Earth observation missions (currently the two TANGO instruments are supported). The software modules are designed to be used in an E2E software pipeline with file-based interfaces between the modules. Having well-defined interfaces between the modules makes TEDS suitable for developing and testing different algorithms to model the satellite performance, including at Level 4. Each module can be run independently and be a component of various closed-loop simulations. To demonstrate the effectiveness of the TEDS toolbox, we focus on the instrument module, L1A-to-L1B, and L1B-to-L2 process chain and present results for it.
Designed to be consistent with each other, the instrument module generates raw detector images by modeling the physical processes of the instrument, whereas the L1A-L1B processor applies various calibration steps to convert detector images into calibrated spectra. The instrument module is conceptually the inverse of the L1A-L1B processor, with an equivalent process for each calibration function of L1A-L1B. Both modules make use of the calibration key data (CKD), which is either measured or simulated. We demonstrate how, by perturbing the CKD between different runs, we can estimate the sensitivity of L1B and L2 data to uncertainties in the detector temperature, dark current, stray light kernels, and other instrument parameters, and how the E2E analysis can help guide the instrument design and choose an optimal calibration strategy.
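The perturbation idea can be sketched in a few lines: run a toy forward model (instrument module) and its inverse (L1A-L1B calibration), once with matching CKD and once with a biased CKD, and compare the L1B products. The function names and the simple linear model below are illustrative stand-ins, not the actual TEDS API.

```python
import numpy as np

rng = np.random.default_rng(42)

def instrument_module(radiance, dark_current, gain):
    """Toy forward model: scene radiance -> raw detector counts."""
    return gain * radiance + dark_current

def l1a_to_l1b(counts, dark_current_ckd, gain_ckd):
    """Toy calibration: raw counts -> calibrated spectra (inverse of the forward model)."""
    return (counts - dark_current_ckd) / gain_ckd

truth = rng.uniform(50.0, 100.0, size=1024)  # "true" scene radiance
dark, gain = 5.0, 2.0                        # true instrument parameters

counts = instrument_module(truth, dark, gain)

# Nominal run: the calibration CKD matches the instrument exactly.
l1b_nominal = l1a_to_l1b(counts, dark, gain)

# Perturbed run: the dark-current CKD is biased by 2%.
l1b_perturbed = l1a_to_l1b(counts, dark * 1.02, gain)

# Sensitivity of the L1B product to that CKD uncertainty.
max_error = np.max(np.abs(l1b_perturbed - l1b_nominal))
print(f"max L1B error from 2% dark-current bias: {max_error:.4f}")
```

In the real toolbox each arrow would be a module with a file-based interface, so the same comparison can be repeated for stray light kernels, detector temperature, or any other CKD entry, and propagated further to L2.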
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: B.01.03 - POSTER - Transforming Global Analysis Ready Earth Observation Data into Actionable Information to Drive Local Climate and Environmental Actions via Co-creation

Earth Observation (EO) from satellites plays a key role in observing changes in climate and environment globally and locally (e.g. climate change, environmental changes and hazards, biodiversity, humanitarian crises). Currently, a wide variety of key climate and environmental variables (e.g. temperature, precipitation, greenhouse gases, air pollutants, surface properties, land use changes, fires, forests, snow/ice, ocean color, biomass) have been collected, and they are provided as Analysis-Ready Data (ARD) to users. While satellite-based ARD has global utility by nature, it is not necessarily fully tailored to particular local issues, challenges and needs.
Ultimately, EO data should be turned into equitable and actionable solutions for addressing local environmental and societal challenges and delivered where they are needed most. In order to fully unlock the potential of the global ARD data, the ARD should be further transformed into Action-Ready Information (ARI) with greater information granularity via co-creation with local collaborators and stakeholders. For example, the key climate and environmental variables are also being collected locally using various observation platforms on the ground, on the water and in the air. Such data can be utilized for regular evaluation and validation of satellite EO, but also to further create customized/tailored value-added ARI for local needs. Identifying the potential gaps between ARD and local needs is also critically important, and the gaps need to be mitigated and ultimately closed. In this regard, the involvement of local stakeholders in the process is extremely important. Despite the proven power of community science, local projects often struggle to remain self-sustaining due to financial challenges. In addition to research, we need economically viable and/or politically feasible approaches and solutions to lead us to the local goals. Via co-creation, using ARI to address local challenges should also contribute to addressing global climate and environmental issues.

This session highlights, but is not limited to, EO (in a broader sense) data applications for addressing local climate and environmental issues, challenges and local mitigation actions, and discusses the gaps and ways to enhance the use of EO to address them with locals. This session invites not only satellite EO data providers and their users at various stages/levels, but also broader participants, for example, people who collect/analyze EO and/or use it locally, engineers/companies who develop EO technologies and wish to scale up, and researchers/practitioners who seek potential synergy/combined use of global satellite EO and local EO. We also invite broader stakeholders from the policy side and private sectors who would like to discuss potential approaches to use EO for their climate mitigation monitoring and sustainable developments from a social science and/or policy or business perspective to obtain maximum return for locals. With input from many unique local cases, we expect to synthesize the input and co-create global knowledge to guide us towards a sustainable future by further enhancing our ability to monitor and address the global-but-local challenges that we face.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Accessing actionable Sargassum information through the SAMTool web interface

Authors: Marc Lucas, Laurine Meunier, Olivier Lauret, Vera
Affiliations: CLS
The mass sargassum landings that have been occurring yearly in the Caribbean since 2011 present local authorities with significant challenges. Indeed, responding to a sargassum landing requires the mobilization of heavy equipment to quickly remove the algae from the beaches, as the decomposition of algae once ashore adversely impacts the coastal waters and local population health. Fortunately, sargassum mats can be detected by satellite, so it is possible for local communities to get advance warning of potential beachings. The challenge then becomes to provide information in the way that is most adapted to local users. To this end, since 2018, CLS has been engaging with local partners to help co-design its SAMTool sargassum tracking web interface. More recently, through the ESA-SAVE project, CLS has been co-designing an interface module to enable users to easily access critical information such as mat areas and volumes. In this paper, we will present the engagement approach that has been used and highlight the way in which user feedback is translated into specifications to improve the SAMTool web interface.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Contrasted Cross-border Agricultural Patterns and Trends in the Irrigated Senegal River Valley since 2016

Authors: Jonas Meier, Dr. Frank Thonfeld, Niklas Heiss, Dr. Veren Huber García, Pierre C. Sibiry Traore, Celeste Tchapmi Nono Nghotchouang, Janet Mano Mutuku, Khadidiatou Faye, Sidy Tounkara, Laure Tall, Stefanie Steinbach, Valerie Graw, Ursula Gessner
Affiliations: German Aerospace Center (DLR), International Crops Research for the Semi-Arid Tropics (ICRISAT), Manobi Africa PLC, Initiative Prospective Agricole et Rurale (IPAR), Ruhr University Bochum (RUB)
Food production in West Africa is closely linked to two major challenges of the 21st century: population growth and climate variability related to climate change. By 2050, West Africa’s population is expected to reach 1.2 billion, increasing food demand and changing diets through urbanization, thus posing considerable challenges to food production systems. Key strategies towards sufficient food supply and the Sustainable Development Goals (SDGs) are sustainable intensification of agriculture (increasing yields without additional land consumption and without adverse effects on climate change) and mitigation and adaptation strategies considering local socio-economic conditions. In the Senegal River Valley, irrigation allows for up to three cropping cycles per year. However, environmental factors such as untimely and excessive rains and reservoir releases, as well as heterogeneity and volatility in socio-economic conditions – e.g., market access and prices – lead to reduced efficiencies. The Senegal River marks the border between Mauritania and Senegal and represents a clear separation between two distinct cropping systems and coping strategies. While the Mauritanian side is characterized by fast-growing large-scale farming systems that are less susceptible to socio-economic fluctuations, smaller farm sizes and a huge subsistence-to-intensification gradient dominate on the Senegalese side. This work is embedded in the COINS project (Co-developing innovation for sustainable land management in West African smallholder farming systems), where we assess, together with all relevant local stakeholders, which sustainable intensification practices suit whom under which socio-economic conditions. Therefore, we analyze annual agricultural patterns and trends inside the Senegal River Valley for nearly a decade (2016-today), hence covering the COVID years.
We used a random forest classifier to delineate agricultural fields based on Sentinel-1 and Sentinel-2 time series data, trained with samples from mono-temporal high-resolution imagery. Results clearly show contrasting cross-border spatial and temporal dynamics in cropland, especially during the COVID-19 pandemic, marked by inter-regional travel restrictions affecting the agricultural labor force. Over the period, a substantial increase of the agricultural area in Mauritania contrasts with a limited expansion in Senegal. There are considerable variations in the annual cropped area in Senegal, with significant areas not put into cultivation, notably towards the upper Valley in the East. The study shows the high potential of remote sensing to monitor the agricultural area and to detect abandoned and uncultivated parts at an early stage. In the future, this potential could be leveraged to predict planted area across seasons and guide decision-making towards optimized cropland use and a sustainable increase in productivity.
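The classification step described above can be sketched as follows, with synthetic per-pixel time-series features standing in for the real Sentinel-1/2 stacks (the seasonal-amplitude signal, pixel counts, and noise levels are all illustrative assumptions, not the study's data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_timesteps = 500, 24

# Synthetic per-pixel NDVI-like time series: irrigated cropland shows strong
# seasonal amplitude, while other land cover stays roughly flat.
cropland_signal = 0.3 + 0.4 * np.abs(np.sin(np.linspace(0, 3 * np.pi, n_timesteps)))
other_signal = np.full(n_timesteps, 0.25)

X = np.vstack([
    cropland_signal + rng.normal(0, 0.05, (n_pixels // 2, n_timesteps)),
    other_signal + rng.normal(0, 0.05, (n_pixels // 2, n_timesteps)),
])
y = np.repeat([1, 0], n_pixels // 2)  # 1 = cropland, 0 = other

# Random forest trained on the time-series features, one feature per date.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
accuracy = clf.score(X, y)
```

In the real workflow each row would be a pixel's multi-date backscatter/reflectance vector and the labels would come from the mono-temporal high-resolution reference imagery; the classifier is then applied year by year to map annual cropped area.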
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Moroccan locust – Potential future outbreaks. Learning from past and present environmental conditions based on EO, geospatial and modelling datasets.

Authors: Igor Klein, Ram Devkota, Nurgul Raissova, Christina Eisfelder, Juliane Huth
Affiliations: German Aerospace Center (DLR) German Remote Sensing Data Center (DFD), Institute of Plant Biology and Biotechnology (IPBB)
Transboundary pests such as the Moroccan locust represent a significant concern, with regular local to regional outbreaks within its huge natural habitat across the European, African and Asian continents. The outbreaks are intensified by factors such as climate change and land use practices that create a favorable environment for these agricultural pests. Transboundary pests such as Moroccan locusts pose a direct threat to food security and livelihoods by devouring vegetation, including food and industrial crops and pastures. Moreover, the crisis response, often involving hazardous broad-spectrum pesticides, can lead to considerable environmental, biodiversity, and human health repercussions. The challenge of preventing and controlling locust outbreaks early on, particularly through better forecasting and effective monitoring, is well recognized but not yet met. In this context, this work presents how relevant parameters can be monitored effectively using remote sensing data and methods to improve forecasting and monitoring across the large habitat territory of the Moroccan locust. Improvements and efforts are geared towards safeguarding food security, livelihoods, and the environment, with a specific focus on the hot spots of Moroccan locust outbreaks where these challenges are most pressing. This work includes modelling of historical conditions and the present situation, and also takes future climate scenarios into account to better understand how this locust pest will be distributed under changing climate conditions. We apply the open-source Ecological Niche Model R software package (ENMTML) to a species-specific selection of environmental variables. The variable selection and preprocessing included a harmonization to enable comparable results for three time periods (past: 1970-2000, present: 2000-2021, future: 2021-2040).
Environmental variables are derived from different Earth observation (EO) and geospatial sources, including satellite-based meteorological datasets, vegetation and soil moisture conditions, soil properties and geomorphological parameters. First results indicate a northward shift, i.e. towards higher latitudes, of suitable habitat conditions for Moroccan locust breeding. Furthermore, the core hotspot areas are also changing in terms of intensity and geographical distribution. We present the modelling results in different habitat suitability categories and show how intensity and geographical distribution change over seven decades. Furthermore, a detailed statistical analysis is done to identify the main driving factors for Moroccan locust outbreaks. Besides suitable environmental conditions, current land management is the most important driver. This contextual knowledge and the monitoring of land cover changes in the light of pest management provide new possibilities for preventive measures and improvements in pest outbreak forecasting.
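The core idea of comparing habitat suitability between time periods can be illustrated with a drastically simplified climate-envelope model (the study itself uses the ENMTML R package, not this code; the occurrence records, variable choices and grid values below are all invented for illustration): a cell is "suitable" if every environmental variable falls within the range observed at known breeding sites.

```python
import numpy as np

# Synthetic occurrence records: (mean temperature in °C, soil moisture index).
occurrences = np.array([[18.0, 0.30], [21.0, 0.45], [19.5, 0.35], [22.0, 0.40]])
env_min, env_max = occurrences.min(axis=0), occurrences.max(axis=0)

def suitability(grid):
    """Binary envelope: suitable iff inside the observed range on all variables."""
    return np.all((grid >= env_min) & (grid <= env_max), axis=-1)

# The same 3 grid cells under "past" and "future" conditions: warming pushes
# cell 0 out of the envelope and brings the cooler cell 2 into it.
past = np.array([[18.5, 0.32], [25.0, 0.20], [15.0, 0.38]])
future = np.array([[23.5, 0.32], [26.0, 0.15], [18.2, 0.38]])

print("past suitable:  ", suitability(past))
print("future suitable:", suitability(future))
```

Real niche models replace the hard envelope with statistical or machine-learning response curves, but the period-to-period comparison of suitability maps works the same way.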
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Scientific climate activism: an open platform based on a scientific and reproducible workflow for mapping Surface Urban Heat Islands through open data and technology hacking

Authors: Meñemo Baihua, Federico Baldo, Letizia Caroscio, Christopher Ceresi, Matteo Francobaldi, Chiara Richiardi, Carlo Zanetti
Affiliations: University of Earth, Department of Computer Science and Engineering, University of Bologna, Department of Civil, Chemical, Environmental and Materials Engineering, University of Bologna, Laboratory Biodiversity and Ecosystems, Division Anthropic and Climate Change Impacts, ENEA, Department of Life Sciences and Systems Biology, University of Torino
The project is a climate activism initiative designed to address the urgent and escalating issue of urban heat islands (UHIs) through cutting-edge technological innovation and extensive community collaboration. The primary objective is to develop a dynamic, collaborative platform based on open data, which will enable citizens, scientists, policymakers, and urban planners to work together seamlessly in mapping, monitoring, and mitigating the adverse effects of UHIs. This platform will empower users by providing an array of tools and resources to collect urban temperature data using an integrated approach that combines temperature sensors, satellite data, and direct community contributions. By leveraging advanced mapping and data analysis technologies, the platform will generate Surface Urban Heat Island (SUHI) maps. The workflow was constructed based on the extensive scientific literature on SUHI detection through remote sensing techniques (Chang et al., 2023; Pappalardo et al., 2023; Kim & Brown, 2021), which calculates SUHI as the difference between the observed temperature over the urban area under consideration and the expected temperature measured over the surrounding rural areas. All the steps, from pre-processing to visualisation, are automated through a script first written in the R programming environment (R Core Team, 2023) and exploiting exclusively open data. The user is only required to enter the city of interest. First, the administrative boundaries, urbanised areas and rural areas are downloaded from OpenStreetMap (OpenStreetMap contributors, 2024) through specific queries. This yields a reconstructed land-use map that can be considered up to date, from which the urbanised and rural areas are identified in order to measure their reference temperatures.
Next, the Landsat 8-9 Collection 2 Level 2 (L8-9 C2 L2) satellite images are downloaded directly through the United States Geological Survey Application Programming Interface (USGS API), providing the observed Land Surface Temperature (LST) data, to which appropriate rescaling and conversion to degrees Celsius are applied. Then a Digital Elevation Model is downloaded, at a resolution consistent with the LST, i.e. 30 m, from a portal that makes various open-source DEMs available at the highest resolution available for the requested area (Open Topo Data, 2024). Finally, thermal anomalies are calculated as the difference between urban and rural LST per homogeneous altitudinal band, thus avoiding introducing bias due to different altitudes and over- or underestimating the intensity of the phenomenon (Zargari et al., 2024). At each step, a visualisation of the data is returned so that any errors or inconsistencies can be checked. The last step is the application of geostatistical analysis to describe the spatial distribution of LST, highlight the presence of hot- and cool-spots, and identify SUHI (Zargari et al., 2024). These maps will allow users to pinpoint the areas most severely impacted by heat islands and gain a comprehensive understanding of the factors that contribute to their formation and persistence. The script will be implemented in a user-friendly platform and the code made open and collaborative, so that other scientists and activists can contribute to its maintenance and continued improvement, adding other functionalities, such as summary statistics, or integrating other data, such as other satellite imagery or even socio-economic data, to move from biophysical information to risk maps. A crucial goal of the project is to elevate public awareness regarding the significance of UHIs and the pressing need to take proactive measures to mitigate their effects.
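The rescaling and per-altitude-band anomaly steps can be sketched as follows. The scale factor and offset are the published Landsat Collection 2 Level-2 surface temperature (ST_B10) coefficients; the pixel values, elevations and urban mask are synthetic placeholders (the project's own script is in R, so this Python version is only a sketch of the logic):

```python
import numpy as np

SCALE, OFFSET = 0.00341802, 149.0  # Landsat C2 L2 ST_B10: DN -> Kelvin

dn = np.array([44000, 45000, 46000, 44500, 45500])  # raw ST_B10 digital numbers
elevation = np.array([120, 120, 300, 300, 300])      # DEM values, metres
urban = np.array([True, False, True, False, False])  # from the OSM land-use map

lst_c = dn * SCALE + OFFSET - 273.15  # rescale to Kelvin, convert to Celsius

# Thermal anomaly per homogeneous altitudinal band: urban LST minus the rural
# mean of the same band, so altitude differences do not bias the SUHI estimate.
for band in np.unique(elevation):
    in_band = elevation == band
    rural_mean = lst_c[in_band & ~urban].mean()
    anomaly = lst_c[in_band & urban] - rural_mean
    print(f"band {band} m: SUHI anomaly {anomaly} degC")
```

A real run would apply the same arithmetic to full rasters, with the altitudinal bands cut from the DEM at a chosen interval rather than taken from exact elevation values.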
Furthermore, the project aspires to foster transparency and enhance the accessibility of urban climate data, ensuring that the information gathered is readily available. This open access will facilitate informed decision-making and strategic planning aimed at designing and creating more resilient, healthier urban environments. By integrating advanced technology, open data, and active community engagement, the project aims to offer a free and easy tool to open scientific knowledge and to promote the awareness that drives the creation of urban spaces that are cooler, greener, and more sustainable for everyone. This comprehensive approach not only addresses immediate concerns but also paves the way for long-term environmental and social benefits, contributing to the overall well-being and liveability of cities.
References:
Chang, Y., Xiao, J., Li, X., & Weng, Q., 2023. Monitoring diurnal dynamics of surface urban heat island for urban agglomerations using ECOSTRESS land surface temperature observations. Sustain. Cities Soc., 98, 104833. https://doi.org/10.1016/J.SCS.2023.104833
Kim, S. W., & Brown, R. D., 2021. Urban heat island (UHI) intensity and magnitude estimations: A systematic literature review. Sci. Total Environ., 779, 146389. https://doi.org/10.1016/J.SCITOTENV.2021.146389
Landsat 8-9 OLI (Operational Land Imager) and TIRS (Thermal Infrared Sensor) Collection 2 Level-2 Science Product, courtesy of the U.S. Geological Survey. DOI: 10.5066/P9OGBGM6
OpenStreetMap contributors, 2024. OpenStreetMap. OpenStreetMap Foundation. Available as open data under the Open Data Commons Open Database License (ODbL) at openstreetmap.org.
Open Topo Data, 2024. Open Topo Data Version 1.9.0. https://www.opentopodata.org/
Pappalardo, E. S., Zanetti, C., & Todeschi, V., 2023. Mapping urban heat islands and heat-related risk during heat waves from a climate justice perspective: A case study in the municipality of Padua (Italy) for inclusive adaptation policies. Landsc. Urban Plan., 238. https://doi.org/10.1016/j.landurbplan.2023.104831
R Core Team, 2023. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
Zargari, M., Mofidi, A., Entezari, A., 2024. Climatic comparison of surface urban heat island using satellite remote sensing in Tehran and suburbs. Sci Rep 14, 643. https://doi.org/10.1038/s41598-023-50757-2
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Co-Creating Climate-Resilient Urban Ecosystems: The ClimRes Project

Authors: Dr Claudio Pisa, Mrs Marica Antonacci, Dr Stamatia Rizou, Vasileios Baousis, Mr Iasonas Sotiropoulos, Mr Sotiris Aspragkathos
Affiliations: ECMWF, SingularLogic
Europe has witnessed a significant rise in the frequency and severity of extreme weather events, with the broader Mediterranean region in particular showcasing the devastating consequences of climate-related disruptions. Unseasonably warm temperatures during the winter months are increasingly becoming a pattern, disrupting traditional weather patterns and ecosystems. ClimRes aims to foster a ‘Leadership for Climate Resilient Buildings’ by addressing the identification and systematic categorisation of buildings’ vulnerabilities and assessing their impact on the buildings' ecosystem, considering the interlinkages within the urban context. This approach actively involves local stakeholders and leverages data from different sources, such as Copernicus services, IoT and other open and/or city-level data, and takes into account hazard warnings and weather forecasts. In this regard, a liaison with the ongoing Destination Earth initiative will allow the project to explore the possible exploitation of extreme weather forecasts and of future climate models. The project aims to deliver vulnerability assessment and impact evaluation methodologies, along with an inventory hub of measures for building materials and design against climate risks, as well as decision support tools, to aid stakeholders in planning effective interventions and addressing vulnerabilities, targeting three levels of decision making: strategic, tactical and operational. ClimRes solutions will be tested in large-scale pilots in Spain, Greece, Italy and Slovenia, evaluating their efficiency against heatwaves, extreme flooding, fires and earthquakes, and in one multi-hazard replication multiplier pilot in France. The experiences and lessons learnt from the extensive pilot evaluation will inform the replication roadmap of the project as well as a capacity programme that will train the next frontier leaders for climate-resilient buildings.
Overall, the ClimRes Leadership will provide valuable insights and guidance for building owners, policymakers, and stakeholders involved in enhancing climate resilience and promoting sustainable development. This will be achieved through the co-creation, development, deployment and demonstration of highly cost-effective, replicable solutions at TRL 6-8. This presentation summarises the ClimRes project’s concept, showcasing the actionable information, tools, and solutions it is delivering, with an emphasis on co-creation processes and the active involvement of local contributors, communities and stakeholders.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Examining Urban Expansion in Abeokuta Through the Lens of its Economic Development Cluster: A Geospatial Approach Utilising Earth Observation Data, Random Forest and Spatial Entropy Analysis

Authors: Oluwafemi Odunsi, Jun.Prof. Dr. Andreas Rienow
Affiliations: Ruhr University Bochum
Land change detection studies usually provide valuable insights into the spatial and temporal dynamics of urbanisation and urban growth. In recent times, open Earth Observation data and tools have provided the opportunity to understand these dynamics at global, regional and local scales, especially in countries of the Global South where these resources are limited for empirical evidence in developing their urban areas. Considering the current urban developmental changes in countries like Nigeria, especially in its secondary cities, this study analysed the urban expansion of Abeokuta within the framework of an economic development cluster initiative pivoted on the city. It addressed the trend, pattern and direction of expansion to provide baseline information on targeted supports and interventions for the viability of the initiative. Theoretically, previous studies have analysed urban expansion using concentric zone theory, which typically considers a single city centre. This is contrary to what obtains in contemporary cities like Abeokuta, as critiqued by the multiple nuclei theory (MNT). We therefore considered the effects of three major centres in Abeokuta based on the MNT. Google Earth Engine was used for remote sensing data collection and analysis. Data were acquired and preprocessed from Landsat 7 ETM+, Landsat 8 OLI/TIRS and Landsat 9 OLI-2/TIRS-2 sensors for 2003, 2013 and 2023, respectively. Supervised image classification through a random forest algorithm was used to analyse land use and land cover change (LULCC) in Abeokuta between these years. Batty’s spatial entropy was used to estimate urban expansion in concentric ring zones around three major city centres: the King's Palace, the Old Governor's Office and the New Governor's Office. The LULCC findings revealed a decline of 43.3% in dense vegetation and a 29.3% increase in both built-up areas and bare land within two decades (2003-2023).
The model performance metrics yielded overall accuracies of 74.05%, 82.16% and 89.24%, with Kappa coefficients of 0.67, 0.74 and 0.85 for 2003, 2013 and 2023, respectively. The results of Batty’s spatial entropy analysis revealed that urban expansion is marginally higher around the Old Governor's Office (∆HB norm = 0.13) than the King's Palace (∆HB norm = 0.11), which is higher than the New Governor's Office (∆HB norm = 0.07). This study established that observing multiple growth points using concentric ring zoning, rather than the city centre alone, provides a better theoretical understanding of the urban expansion of contemporary polycentric cities.
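The spatial entropy measure used above can be sketched as follows: Batty's entropy over concentric ring zones rewards growth spread in proportion to zone area and penalises concentration. The zone areas, built-up shares and the H/H_max normalization below are illustrative assumptions, not the study's data:

```python
import numpy as np

def batty_entropy(built_up, zone_area):
    """Normalized Batty (1974) spatial entropy over concentric ring zones."""
    p = built_up / built_up.sum()          # share of built-up land per ring
    h = np.sum(p * np.log(zone_area / p))  # Batty's spatial entropy
    return h / np.log(zone_area.sum())     # H / H_max, in [0, 1]

# Five concentric rings around a centre: area grows outwards.
zone_area = np.array([1.0, 3.0, 5.0, 7.0, 9.0])    # km^2
compact   = np.array([0.9, 0.5, 0.2, 0.05, 0.01])  # growth hugging the centre
sprawl    = np.array([0.3, 0.4, 0.5, 0.45, 0.4])   # growth dispersed outwards

print("compact:", batty_entropy(compact, zone_area))  # lower
print("sprawl: ", batty_entropy(sprawl, zone_area))   # higher
```

Comparing the normalized entropy between two dates around each of the three centres gives the ∆HB norm change values reported in the abstract.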
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: D.05.03 - POSTER - Towards Modernized Copernicus Data: Enabling Interoperability through EOPF Principles and Advanced Data Access Strategies

#zarr #cloud-native

As demand for high-accuracy Copernicus data products grows, modernizing and re-engineering existing processors is essential. The Sentinel data processors, developed over a decade ago, require upgrades to remain viable for the next 15 years. A key focus of this modernization is enhancing data access through cloud optimization, interoperability, and scalability, ensuring seamless integration with new technologies.

A major development in this transition is the adoption of cloud-native data formats like Zarr, which significantly improve data handling, storage, and access. This shift supports the increasing volume and complexity of data from current and future missions. The Earth Observation Processing Framework (EOPF) plays a crucial role in enabling these advancements, providing a scalable and flexible environment for efficiently processing large datasets.

This insight session will provide updates on the latest status of EOPF project components, as well as the future of the Copernicus data product format, with a strong focus on Zarr and its practical applications. Experts will showcase how these innovations enhance data accessibility and usability, ensuring that Copernicus remains at the forefront of Earth observation. The session will also highlight EOPF’s role in streamlining data workflows, fostering collaboration among stakeholders, and advancing next-generation EO solutions.
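The data-access benefit of a chunked, cloud-native layout such as Zarr can be illustrated with a small calculation: a windowed read only has to fetch the chunks it intersects, instead of the whole scene. The tile and chunk sizes below are illustrative assumptions, not actual EOPF product parameters:

```python
import math

def chunks_for_window(row0, row1, col0, col1, chunk=512):
    """Return the set of (chunk_row, chunk_col) indices a pixel window touches."""
    return {
        (r, c)
        for r in range(row0 // chunk, math.ceil(row1 / chunk))
        for c in range(col0 // chunk, math.ceil(col1 / chunk))
    }

scene = 10980  # pixels per side, e.g. a Sentinel-2 tile at 10 m resolution
chunk = 512
total_chunks = math.ceil(scene / chunk) ** 2

# A 1000 x 1000 pixel area of interest in the middle of the tile.
needed = chunks_for_window(5000, 6000, 5000, 6000, chunk)
print(f"fetch {len(needed)} of {total_chunks} chunks "
      f"({100 * len(needed) / total_chunks:.1f}% of the scene)")
```

With a monolithic file the whole scene would have to be downloaded first; with chunked storage the same request maps to a handful of HTTP range reads, which is what makes the format scale in cloud workflows.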
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: CoperniCUBE: Advanced DEM Timeseries Analysis Using Airbus CopDEM

Authors: Peter Baumann, Henning Schrader, Ernest Fahrland, Ciro
Affiliations: Rasdaman GmbH, Constructor University
Digital Elevation Models represent a core dataset within the geospatial domain. Common global datasets such as SRTM, WorldDEM/Copernicus DEM or WorldDEM Neo contain height information for a fixed and well-defined acquisition timeframe but are usually not stored in a centralized database and not available for ad-hoc timeseries analysis and fusion. Any analysis of the difference between global DEMs requires their procurement first, followed by the analysis task. CoperniCUBE (https://copernicube.eu), the Copernicus datacube frontend, offers a high-level service combining critical innovations: full spatio-temporal datacube support, AI integration, location-transparent federation and distributed fusion, green computing, and more. In contrast to the original Copernicus DEM variants EEA-10, GLO-30 and GLO-90, which are provided as multi-date merge file bundles, DEMs from true timeseries in CoperniCUBE, queryable through the OGC/ISO WCPS geo datacube query language, are separated by native acquisition date. To showcase this, a high-resolution DEM timeseries of Airbus DEM data has been set up. In our presentation we will walk live through this demonstrator, highlighting the benefits of ad-hoc low-coding analytics for decision making. Use cases shown are Amazon deforestation and European surface mining. Functionally, DEM analysis is performed interactively via a standard Web browser with server-side processing of 3D change analysis and DSM/DTM difference analysis. The spatial extent of the differences as well as a statistical summary of the volumetric difference are visualized in the browser.
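An ad-hoc server-side DEM difference of the kind described above can be expressed as a WCPS query. The coverage names, axis labels and service URL below are hypothetical placeholders, not the actual CoperniCUBE identifiers; the sketch only shows the shape of such a query and how it would be submitted:

```python
WCPS_ENDPOINT = "https://example.org/rasdaman/ows"  # hypothetical service URL

def dem_difference_query(cov_new, cov_old, bbox):
    """Build a WCPS query subtracting two DEM coverages over a bounding box."""
    e0, e1, n0, n1 = bbox
    subset = f"[E({e0}:{e1}), N({n0}:{n1})]"
    return (
        f"for $a in ({cov_new}), $b in ({cov_old}) "
        f'return encode($a{subset} - $b{subset}, "image/tiff")'
    )

query = dem_difference_query("DEM_2023", "DEM_2016",
                             (500000, 510000, 5200000, 5210000))
print(query)

# Submitting it would be a single HTTP request against the WCPS endpoint, e.g.:
# requests.post(WCPS_ENDPOINT, data={"service": "WCS", "version": "2.0.1",
#                                    "request": "ProcessCoverages", "query": query})
```

The subtraction, subsetting and encoding all happen server-side, so only the resulting difference raster travels to the client — the "ad-hoc low-coding analytics" the demonstrator highlights.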
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Leveraging Digital Innovation for Green Solutions: the Fast Track Applications Project

Authors: Martin Jones, Dr Stephen Emsley, Anne-Laure Beck, Holly Baxter, Cameron
Affiliations: Argans Ltd
The rapid transition to sustainable development because of climate change and related safety/security threats demands innovative approaches to harnessing Earth Observation (EO) technologies for “Intelligence”. ESA Fast Track Applications is an initiative to provide stakeholders with expeditious access to EO-derived information, insight, and geospatial intelligence (GEOINT) via scalable EO-based applications swiftly tailored to address environmental, social, and economic needs, leveraging state-of-the-art digital tools and user-centric design principles. It is an information design process under temporal constraints (a few weeks), intended to empower decision-makers to respond to pressing environmental issues efficiently. The conventional model of R&T project procurement involves months of preparation to assemble an ITT, submit and assess proposals, and negotiate contracts before work even begins, followed by at least 12 months before delivery, usually longer, when customers/users, whether scientists or decision-makers, might need a more immediate response, even if the error bars are larger, as long as an assessment of the information’s accuracy is provided. Fast Track Applications adopts an adaptive, flexible, Agile approach, using a pool of experts covering the six Copernicus domains to identify, within a few days, the suitable algorithms to be tailored. We have designed a standardized software development framework called the “Common Application Template”. By enabling customization and reuse of core functionalities across various domains, the common application template technology significantly reduces the time and resources required to create impactful solutions while ensuring consistency and scalability. Specialized EO-based tools, called Domain Application Products (DAPs), have been implemented to address specific challenges such as land cover classification, air quality monitoring, and water resource management.
Examples include crop management and efficiency, ecosystem accounting, and support to natural disaster response (e.g. Kenya floods). It can be applied to many projects within Block 4 (EO Science for Society) of ESA's FutureEO programme, i.e. science exploitation, applications, industrial competitiveness support, regional initiatives, and even digital innovation thanks to the availability of technological building blocks (TBBs). The Fast Track Applications project exemplifies how digital innovation, combined with EO capabilities, can drive impactful green solutions. By fostering capacity building, ensuring broad accessibility, and enabling collaborative development, the initiative ensures that these applications can be scaled and adapted to a wide variety of geographic and thematic contexts, fostering sustainable and impactful outcomes.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: From Space to Summits: Utilizing Copernicus Sentinel-2 Data for Condition Aware Alpine Tours

Authors: Michael Engel
Affiliations: Technical University of Munich (TUM), Zentraler Hochschulsport München (ZHS), DAV Coburg
The most important aspects of planning alpine tours are high-resolution maps and up-to-date information about current conditions. While high-resolution maps are readily available from land surveying offices, obtaining real-time condition information has been more complex. However, with the advent of satellite imagery, particularly Sentinel-2, and especially the easy access to it via the Copernicus Data Space Ecosystem, mountaineers now have a powerful tool at their disposal. This study explores how Sentinel-2 imagery can be effectively used for mountaineering, focusing on different seasons and conditions. We further showcase how such a tool may be implemented as a simple web application for condition-aware tour planning. The study mainly focuses on three seasons: high season, late season, and ski touring season. Each season presents unique challenges and requires specific imagery interpretation techniques. During the high season, distinguishing between rocks and glare ice is crucial. Sentinel-2 imagery, with its different wavelength bands, provides valuable insights. Short-wave infrared (SWIR) bands are particularly useful for this purpose, as they enhance the contrast between rocks and glare ice, making it easier for mountaineers to distinguish them, assess the corresponding conditions and navigate safely. In the late season, the focus shifts to the first snowfall and changing vegetation conditions. Shadow-corrected imagery and thresholded indices are used to extract snow coverage and depict the results on high-resolution maps. True-color imagery is also employed to assess vegetation conditions, such as the presence of fiery grass or colorful forests, which can indicate the onset of autumn. For the ski touring season, the first application is to assess whether a slope is rocky or not. Time series information, however, is crucial to assess snow melt events and stable weather conditions.
This information is vital for avalanche safety considerations and determining the suitability of climbing on ridges, which can be slippery when covered by fresh snow. Sentinel-2's frequent revisit time allows for the monitoring of these conditions over time. The study's findings have been implemented to create a practical application for mountaineers. The application combines true color images, which are easily interpretable by most people, even those with no experience in remote sensing, with SWIR bands to highlight snow and ice features. Additionally, a high-resolution map with extracted snow and ice cover is included. This application aims to enable condition-aware tour planning within the range of the Coburger Hütte, located near the Zugspitze. By providing accurate and up-to-date information, the application helps mountaineers make informed decisions and plan their tours more safely and effectively. The study demonstrates the potential of Sentinel-2 imagery for enhancing mountaineering safety and planning. By focusing on different seasons and utilizing various wavelength bands, the study provides valuable insights into the interpretation and application of satellite imagery for alpine tours. By leveraging the capabilities of satellite technology, mountaineers can now access detailed and current condition information. The resulting application, available on the website of the Coburger Hütte, serves as a practical tool for mountaineers, combining scientific data with user-friendly features to support safe and informed tour planning.
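The abstract mentions thresholded indices for extracting snow cover but does not name a specific one; a common choice for Sentinel-2 is the Normalised Difference Snow Index (NDSI), computed from the green band (B3) and a SWIR band (B11). The sketch below illustrates the idea under those assumptions; the 0.4 cut-off is a frequently used starting value, not the authors' exact method:

```python
import numpy as np

def ndsi_snow_mask(green, swir, threshold=0.4):
    """Normalised Difference Snow Index from Sentinel-2 reflectances
    (green band B3 and SWIR band B11); pixels above the threshold are
    flagged as likely snow/ice."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    ndsi = (green - swir) / np.maximum(green + swir, 1e-6)
    return ndsi, ndsi > threshold

# Toy 2x2 scene: left column bright snow, right column rock/vegetation.
green = np.array([[0.8, 0.1],
                  [0.7, 0.2]])
swir = np.array([[0.1, 0.3],
                 [0.15, 0.25]])
ndsi, mask = ndsi_snow_mask(green, swir)
```

A web application could render such a mask on top of a true-color image; in practice the threshold would be tuned per scene and season.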

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: SentinelHD

Authors: Claire D'Oiron
Affiliations: Airbus
Airbus Basemaps are expertly curated reference layers designed to meet diverse geospatial needs. They offer highly accurate, color-consistent, and nearly cloud-free imagery, making them essential for mapping, analysis, and decision-making. With minimized haze, reduced seasonal differences, and seamless image transitions, the Airbus Basemap provides a reliable foundation for global and regional projects. A key addition to the Basemap is the Sentinel-2 Layer, created through an innovative technical workflow that begins with the automatic selection of images with minimal cloud cover. Patented dehazing technology performs atmospheric correction, removing haze, cloud veils, and shadows to create a seamless dataset aligned with the target date. After processing is completed, the images are blended into a global mosaic with consistent geometry and color, and resolution is then enhanced by a proprietary super-resolution algorithm, transforming Sentinel-2 data from 10 m to 5 m HD imagery. Finally, true-color rendering ensures the mosaic is visually cohesive and accurate for various applications. This comprehensive process makes the Sentinel-2 HD Global Layer a high-quality, cloud-free, and color-consistent product. Its precision and clarity make it an essential tool for large-scale projects requiring reliable and seamless satellite imagery.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Advancing Global Land Cover Monitoring: Innovations in High-Resolution Mapping with the Copernicus Data Space Ecosystem

#stac #cog

Authors: Joris Coddé, Victor Verhaert, Adrian di Paolo, Max Kampen, Dorothy Reno, Dr. Yannis Kalfas, Wanda De Keersmaecker, Carolien Toté, Mathilde De Vroey, Luc Bertels, Dr. Tim Ng, Daniele Zanaga, Dr. Hans Vanrompay, Jeroen Dries, Dennis Clarijs, Dr. Ruben Van De Kerchove
Affiliations: VITO, Sinergise
Land cover mapping is crucial for comprehending and managing the Earth's dynamic environment. The Copernicus Global Land Cover and Tropical Forest Mapping and Monitoring service (LCFM), part of the Copernicus Land Monitoring Service (CLMS), addresses the need for high-resolution, dynamic global land cover information. By leveraging the capabilities of the Copernicus Data Space Ecosystem (CDSE), LCFM aims to deliver frequent, sub-annual land surface categories and land surface features. These are consolidated into global annual land cover maps and tropical forest monitoring products at 10 m resolution. This high level of detail will facilitate better decision-making and more effective monitoring of environmental changes. LCFM is powered by several essential components, all available from and operating within CDSE. The starting point is the extensive collection of Sentinel-1 and Sentinel-2 data available on EODATA, providing frequent, high-resolution imagery crucial for accurate land cover mapping. A significant challenge faced by the LCFM service is the processing of the extensive global archive of Sentinel-1 and Sentinel-2 data into multiple land cover products with varying temporal resolutions. To overcome this, the service employs processing workflows that operate close to the data, utilizing the openEO Processing system, as well as multiple cloud providers. The results are written directly to CloudFerro’s S3 storage. Additionally, LCFM offers access to its products through a dedicated viewer set up by Sinergise, with plans to incorporate this functionality into the CDSE browser for enhanced usability. Notably, LCFM is the first service to utilize the CDSE Cloud Infrastructure across both CloudFerro and Open Telekom Cloud, enhancing computational resources and scalability. The associated openEO workflows function on both clouds. Furthermore, the workflows read raw satellite data and output products directly, generating (e.g.) 
multiple resolutions at Sentinel-2 tile level in the form of single-band Cloud Optimized GeoTIFF (COG) files. Additionally, the workflows produce gdalinfo statistics and STAC metadata, which facilitate online quality assurance and enable seamless integration and retrieval of products through a STAC API. This allows further processing by openEO, among others. As a result, the project has driven a paradigm shift in openEO's processing approach—from a traditional single-output model, where workflows produce a single data cube with multiple bands, to a multi-output (multi-head) model that generates multiple files in parallel. This transformation greatly improves the efficiency of the overall workflows and keeps computing costs manageable. This presentation will illustrate how LCFM stands as a flagship project to showcase the potential of generating state-of-the-art global maps using European infrastructure. It will highlight the resulting products, how they are served to and usable by users, as well as how the underlying architecture and workflows are leveraged to generate these products. By continuously driving improvements in openEO, effective use of European cloud infrastructure, and other components, LCFM has significantly enhanced cost efficiency and scalability. These advancements position European cloud services as challengers to global cloud providers, marking a significant step forward in sustainable environmental monitoring and data processing capabilities.
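To illustrate the STAC metadata side of such a multi-output workflow, the sketch below assembles a minimal STAC 1.0.0 Item describing one single-band COG output. All identifiers, hrefs, and property values are hypothetical; the actual LCFM schema and tooling will differ:

```python
import json
from datetime import datetime, timezone

def make_stac_item(tile_id, band, cog_href, bbox, acquired):
    """Assemble a minimal STAC 1.0.0 Item describing one single-band
    Cloud Optimized GeoTIFF; bbox is (west, south, east, north)."""
    west, south, east, north = bbox
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": f"{tile_id}_{band}",
        "bbox": list(bbox),
        "geometry": {
            "type": "Polygon",
            "coordinates": [[[west, south], [east, south], [east, north],
                             [west, north], [west, south]]],
        },
        "properties": {"datetime": acquired.isoformat()},
        "assets": {
            band: {
                "href": cog_href,
                "type": "image/tiff; application=geotiff; "
                        "profile=cloud-optimized",
                "roles": ["data"],
            }
        },
    }

# Hypothetical single-band output for one Sentinel-2 tile.
item = make_stac_item(
    "S2_31UFS_2024", "land-cover",
    "s3://example-bucket/S2_31UFS_2024_land-cover.tif",
    (4.0, 50.0, 5.0, 51.0),
    datetime(2024, 1, 1, tzinfo=timezone.utc),
)
```

Such JSON documents are what a STAC API indexes, which is what makes the products retrievable by openEO and other clients downstream.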

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: D.02.03 - POSTER - Approaches, accuracies, applications: next generation large-area land change monitoring

Land cover data are used by various user groups to better understand current landscapes and the human impact on them. Besides being a core information layer for a variety of scientific and administrative tasks, land cover and change data help organizations, stakeholders, politicians and individual users to assess urban growth and trends in forest change, assist in land management and land planning, track wetland losses and potential impacts from sea level rise, prioritize areas for conservation efforts, assess the impact of climate change on socio-economic and socio-ecological systems as well as ecosystem services, etc.

During recent years, several global and continental land cover and change datasets have been developed at increasingly higher resolutions. These datasets, based on higher-spatial-resolution data (e.g. Sentinel-1 and -2, Landsat, but also very high-resolution data), can significantly improve the characterization of the Earth’s land surface dynamics and provide land cover and change assessments at a finer scale and with greater thematic detail. The newly available data, in combination with more powerful computing infrastructure and advances in deep learning/artificial intelligence techniques, allow for on-demand, adaptable land cover monitoring and applications.

The session will provide a forum for the different players in this field to present the latest approaches, including advanced methods for land cover change analysis (incl. AI) and near-real-time land cover monitoring, new thematic details moving towards land use change monitoring, and evolving large-area programs and services and how they support different users at global to national levels.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: NexOS: a Control Framework for a Next Generation National Mapping Agency

Authors: Lizzie Dobromylska
Affiliations: Ordnance Survey
At over 230 years old, Ordnance Survey (OS) have a long history as Great Britain’s National Mapping Agency (NMA). From first using traditional mapping methods, OS have developed over the years to today’s multi-faceted approach of utilising field surveyors, global navigation satellite systems, remote sensing and a range of advanced geographical information systems (GIS) tools and software to map all 243,241 square kilometres of Great Britain. Continuous advancements in technologies, data ecosystems and user requirements have produced a continuously changing geospatial industry, with NMAs like OS adapting to these shifts. In recent years, the emergence of new advanced technologies in particular – such as Artificial Intelligence and Machine Learning – combined with enhanced methods of data capture, is reshaping both the geospatial and non-geospatial industries. These markets are affected because data can now be used, fused with other datasets and interacted with beyond the traditional GIS methods. Because of this, organisations may require considerable changes to align their data with the range of technologies available in today’s markets. With this has come the recognition that data are inconsistent across different sources, as the importance of data management, interoperability and operational services is highlighted by the developments in different data and technology types. OS are leveraging these new advancements to reimagine a next generation NMA. The project, called NexOS, explores the potential capabilities of a modern NMA by imagining how an NMA may change in response to the new data and technologies available. One goal of NexOS is to develop a novel ground control framework which aims to address the problem of fusing data from different data and technology sources.
The control framework will be demonstrated through a series of pipelines featuring the creation of Ground Control Points (GCPs), addressing known issues of data inconsistency across multiple data sources, including the orthorectification of Earth observation data relative to other data sources. If successful, the framework could resolve differences in positional accuracy across multiple datasets, including point cloud, aerial and satellite data, reducing data uncertainty and enhancing interoperability. OS aim to use Earth observation data to showcase the value of the ground control framework, demonstrating how datasets may be fused together to offer new information and insights into real-world problems. Example work could fuse datasets to support sustainable monitoring, mitigation and adaptation efforts, which have become an emerging application of Earth observation data following increasing global interest in sustainability in recent years.
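As a simple illustration of how such a framework might quantify positional inconsistency, the sketch below computes the root-mean-square positional error of a dataset against surveyed GCP coordinates, a standard accuracy measure. The coordinates are invented and the helper is not part of NexOS:

```python
import math

def positional_rmse(gcps, observed):
    """Root-mean-square positional error between surveyed ground control
    points and the matching coordinates observed in a dataset, both given
    as (easting, northing) pairs in metres."""
    squared = [(gx - ox) ** 2 + (gy - oy) ** 2
               for (gx, gy), (ox, oy) in zip(gcps, observed)]
    return math.sqrt(sum(squared) / len(squared))

# Invented example: the dataset is offset 3 m east and 4 m north
# at every control point, giving a 5 m RMSE.
gcps = [(1000.0, 2000.0), (1500.0, 2500.0)]
observed = [(1003.0, 2004.0), (1503.0, 2504.0)]
rmse = positional_rmse(gcps, observed)
```

Comparing such per-dataset errors against common GCPs is one way to decide which datasets need re-orthorectification before fusion.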

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: HabitAlp2.0: Updating Maps of Protected Areas in the Alps With AI-Based Remote Sensing Methods

Authors: Daniel Kulmer, Msc. Harald Kristen, Dr. Manuela Hirschmugl
Affiliations: Joanneum Research, University of Graz, Dept. of Geography and Regional Science
The monitoring of protected areas in the Alps as part of the INTERREG project “HabitAlp” was carried out 20 years ago using a common nomenclature and methods based on the visual interpretation of airborne stereo-images. This extensive monitoring effort covered multiple protected areas across the Alpine region, providing crucial data for understanding landscape dynamics and human impacts. Regularly repeating this process with newly updated data provides information on changes in the landscape caused by natural processes such as mass movements, floods, avalanches, wind and fire, and also reveals long-term trends caused by changes in management or by climate change. However, these reassessments are cost-intensive and take a long time to complete. Manual aerial image analysis is a demanding task that requires a high level of concentration. The last mapping in 2013 required the assignment of over 30,000 polygons to one of 113 possible HabitAlp classes and the definition of various other parameters. The training period for an interpreter can take four to six weeks to ensure a high-quality and precise workflow (Hoffert and Anfang, 2006). With the manual interpretation method currently used, an average area of 0.52 km² of the total 154 km² is mapped per day, although this depends heavily on the type of habitat (Bollmann, 2021). The time required for mapping could be strongly reduced through automation. Leveraging advances in computing infrastructure and deep learning, newly developed methods for processing spatial data enable the previous classification approach to be revised and automated. We propose a scalable, near-real-time monitoring approach utilizing state-of-the-art remote sensing data, including Sentinel-2 time series and airborne LiDAR, along with traditional very high-resolution airborne imagery.
This multi-sensor approach enables finer-scale land cover change detection with improved thematic detail. The change detection model is trained using the existing HabitAlp maps of the Gesäuse National Park, which are available for 1954, 2003 and 2013. Previous studies have shown that a U-Net architecture provides good results for land use and land cover mapping, but it may be of limited use for change detection because the architecture lacks explicit mechanisms to compare and detect differences between temporal states (Mustafić et al., 2024). In this study, we evaluate transformer-based geo-foundation models (GeoFMs), which excel at capturing complex spatial relationships through their attention mechanisms and transfer learning capabilities from large remote sensing datasets. Additionally, specially tailored Siamese change detection networks will be tested, as their ability to learn similarity metrics between temporal image pairs makes them particularly suitable for detecting subtle landscape changes. To ensure reliability, explainable AI will be integrated: knowledge graphs will track changes and feed back into the AI, while physics-based constraints provide a frame that limits the AI results. The methodology will be validated using new data from 2024 to map the changes. In an iterative process, polygons are then reviewed, and incorrect changes are used to further improve the automated classification process in order to enhance the reliability of the model. One challenge is to make the results of the different methodological approaches comparable to existing data from the past. We therefore collaborate closely with the national park administration to validate the results, with a special focus on the explainability of the model output. This feedback loop ensures the system's applicability for various stakeholders, from protected area managers to policy makers.
Upon successful validation, the developed workflow can be transferred to other protected areas within the HabitAlp network, which currently encompasses four national parks across the Alpine region. The work in this study has only just started, so we cannot share any results yet. In June, the presentation will summarize the construction of the AI processor, show first results and describe how difficulties and challenges were overcome during development. References: Bollmann, E., 2021. Schlussbericht zum Projekt „Luftbildinterpretation CC-HABITALP nach dem Kartierschlüssel HIK-CD für ein Gebiet von ca. 26 km² im südlichen Teil des Nationalpark Gesäuse (NPG)“. Hoffert, H., Anfang, C., 2006. Digitale CIR-Luftbildkartierung im Nationalpark Gesäuse. Nussdorf. Mustafić, S., Gutjahr, K., Miletich, P., Perko, R., 2024. Artificial Intelligence for Land Use and Land Cover Mapping in Austria, in: IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, IEEE, Athens, Greece, pp. 7138–7141. https://doi.org/10.1109/IGARSS53475.2024.10642647
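The Siamese idea mentioned in the abstract, applying identical (shared) weights to both acquisition dates and comparing the resulting features, can be sketched minimally as follows. The fixed weight matrix stands in for a learned embedding and the threshold is arbitrary; this is an illustration of the concept, not the project's model:

```python
import numpy as np

# Placeholder for learned shared weights: 4 spectral bands -> 8 features.
# In a trained Siamese network these would come from optimisation.
W = np.full((4, 8), 0.25)

def embed(img):
    """Apply the shared embedding pixel-wise; img has shape (H, W, 4)."""
    return img @ W

def change_map(img_t0, img_t1, threshold=1.0):
    """Siamese-style change score: the *same* weights embed both dates,
    then an L1 distance in feature space is thresholded to a mask."""
    scores = np.abs(embed(img_t0) - embed(img_t1)).mean(axis=-1)
    return scores, scores > threshold

# Toy pair of 2x2 images where one pixel changes strongly between dates.
t0 = np.zeros((2, 2, 4))
t1 = np.zeros((2, 2, 4))
t1[0, 0] = 5.0
scores, mask = change_map(t0, t1)
```

Because the two branches share weights, the distance reflects genuine change between dates rather than differences between two independently learned feature spaces.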

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Improving the temporal consistency and accuracy of land cover fraction mapping using Vision Transformer

Authors: Qin Xu, Dr. Dainius Masiliunas, Nandika Tsendbazar
Affiliations: Wageningen University
Land cover maps are essential for sustainable environmental management and have gained significance due to advancements in spatial and temporal resolution. These maps efficiently describe the Earth's surface, but traditional classifications often oversimplify reality by only identifying the dominant land cover class in each pixel. This discrete classification can mask the presence of multiple land cover classes in the same area, leading to biases, especially at coarser resolutions. To address these limitations, mapping land cover fractions offers a more nuanced approach, representing the proportion of various classes within each pixel. The Copernicus Global Land Service 100 m Land Cover product (CGLS-LC100) is an example, providing high-resolution global maps with fractional information across 12 major land cover classes since 2015. The availability of high-resolution Sentinel-2 data further enhances land cover monitoring capabilities. Machine learning algorithms, particularly Random Forest, have become increasingly popular in land cover fraction mapping, improving accuracy by managing model complexity and feature selection. Additionally, emerging methods based on deep learning, like the Vision Transformer (ViT), show promise in analyzing satellite imagery by recognizing broader patterns and relationships, suggesting a shift in how land cover change detection may be approached. However, Transformer-based models often use only spatial or only temporal embeddings, and integrating both temporal and spatial information in a vision transformer requires further development. This study aims to explore deep learning methods, specifically the Vision Transformer (ViT) model, for land cover fraction mapping with a focus on enhancing temporal consistency and accuracy.
To achieve this objective, the research investigates three key areas: the comparative effectiveness of the Vision Transformer (ViT) as a regression method for monitoring land cover against the Random Forest (RF) regression approach; the impact of neighborhood spatial information on the performance of the Vision Transformer; and the role of temporal information in influencing the functionality of the transformer model. In this study, the Copernicus Global Land Service Land Cover 100 (CGLS-LC100) reference data was used. It consists of 12,274 training sites in Africa, each 100 x 100 meters, divided into 100 sub-pixels of 10 x 10 meters, labeled by experts using multiple satellites. For model training, ground truth data from 2015 to 2018 were split into 85% training and 15% testing sets. Classes were merged into seven categories. Validation data include 4,894 sample sites from 2015 to 2019 in Africa. Specifically for this study, the data were aggregated to 20 meters to be consistent with the Sentinel-2 imagery. As input data for the models, we used Sentinel-2 imagery, which allows for the construction of monthly and yearly time series thanks to its five-day global revisit time. We downloaded patches of Sentinel-2 images measuring 300 x 300 meters, centered around the reference sample sites, covering the period from 2015 to 2018 for training samples, and 2015 to 2019 for validation samples. Sentinel-2 Level-2A data were downloaded through the Microsoft Planetary Computer. Of the 13 available spectral bands, bands 1, 9, and 10 were excluded due to their lower resolution of 60 meters. Additionally, the built-in scene classification layer (SCL) was downloaded for cloud masking. The selected bands were aggregated to produce input data at a 20 x 20-meter resolution. Monthly and yearly composites were created by taking the median value, which helps mitigate the impact of any small, unfiltered clouds.
Due to the 300 by 300 meter size of the downloaded Sentinel-2 imagery, individual images measure only 15 by 15 pixels, resulting in numerous NA values. Not all locations possess sufficient data spanning the 42 months, due to issues such as missing values (NA), cloud masking, or other factors. To address these gaps, missing months were filled with the average of the preceding and subsequent months. Vision Transformer (ViT) and Random Forest models were trained on these final normalized composites, with the yearly composites generated using a weighted sum of the monthly data, prioritizing the months closest to Africa's growing season depending on the hemisphere. The Random Forest (RF) model processes one response variable at a time, using techniques like binary relevance for separate land class modeling. RF regression does not handle whole time series patterns directly, so derived statistics and vegetation indices like NDVI, NDMI, and NBR are used as features. The models are trained with these features using land cover fractions from 2015 to 2018, generating predictions at a 20 m pixel resolution. Pixel values reflect the percentage of each land cover class, ensuring all fractions sum to 100%. After creating monthly fractional land cover maps with RF, image differencing detects changes by subtracting the current month's map from the previous one. For the Vision Transformer, satellite imagery is divided into 1x1 pixel patches, treated as individual tokens, and projected into an embedded feature space. Positional and temporal information is encoded to maintain spatial and temporal context. Each token corresponds to the input channels of the ten spectral bands, resulting in a concatenation of dynamic time steps, spectral channels, and positional embeddings. The encoder includes multi-headed self-attention and feed-forward layers that process these inputs before directing them to task-specific heads.
The AdamW optimizer is used for training, with attention to loss tracking. Predictions display the land cover fractions for each time step. This method accounts for the timing and location of changes. Models predict land cover fractions on a monthly or yearly basis, validated against actual changes. Monthly predictions are aggregated into yearly data with weighting factors that prioritize the growing season, where spectral differences are clearer. The RF and Vision Transformer models are evaluated using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for each class and overall. Overall Accuracy (OA), Producer’s Accuracy (PA), and User’s Accuracy (UA) are derived from a sub-pixel confusion matrix (SCM), and the Jaccard similarity coefficient evaluates the overlap between predicted and observed classes. The current performance of the model is below expectations due to the limited training data utilized; specifically, only a subset was employed for training purposes. Utilizing the entire dataset would likely lead to significant improvements in model performance. This study aims to analyze the attention mechanism utilized in the vision transformer model, which processes pixel time series by incorporating spatial context from surrounding pixels, and to explore how the integration of both spatial and temporal context can enhance pixel-based predictions. Such an analysis could provide valuable insights into the mechanisms underlying prediction accuracy and inform future advancements in the field. The results of the analysis will be presented in the poster.
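The compositing and gap-filling steps described for the input time series can be sketched as follows; the array shapes and toy values are illustrative, not the study's actual data:

```python
import numpy as np

def monthly_median_composite(observations):
    """Median over all observations of one month, ignoring cloud-masked
    values (NaN); observations has shape (n_obs, height, width)."""
    return np.nanmedian(observations, axis=0)

def fill_missing_months(series):
    """Fill a fully missing month with the mean of the preceding and
    following months, as described in the abstract; series has shape
    (n_months, height, width)."""
    filled = series.copy()
    for m in range(1, len(series) - 1):
        if np.isnan(filled[m]).all():
            filled[m] = 0.5 * (filled[m - 1] + filled[m + 1])
    return filled

# Toy single-pixel examples.
comp = monthly_median_composite(np.array([[[0.1]], [[np.nan]], [[0.3]]]))
series = np.array([[[0.2]], [[np.nan]], [[0.4]]])  # month 1 fully masked
filled = fill_missing_months(series)
```

The median is robust to small, unfiltered clouds, and the neighbor-average fill keeps every location at the full 42-month length the models expect.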

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Synthetic Training Data in support of Large-Scale Automated Land Monitoring

Authors: Andreas Walli, Dr. Michael Riffler
Affiliations: Geoville
Recent advancements in deep-learning-based classification and regression have underscored the immense potential of modern machine learning to advance the capacities of Earth Observation (EO) downstream service applications. These advancements have enabled the retrieval of desired information with higher accuracy than ever before, in support of monitoring environmental issues or climate change impacts and respective adaptation activities. The success of deep learning models in unlocking the full potential of EO data relies heavily on the availability of high-quality and extensive training data. Yet the creation of a representative and sufficiently large set of training labels is a process that is both time-consuming and costly. This challenge is further exacerbated by regional variability and by the lack, or restricted scope, of appropriate reference information. Although the creation of synthetic training data is now a fast-growing activity, most available solutions focus on particular object detection problems (e.g., planes, cars, buildings, etc.) in the very-high-resolution imagery domain. We focus on boosting large-scale land cover mapping and monitoring activities by producing and using synthetic training data in combination with Copernicus high-resolution (i.e., Sentinel-2) and very-high-resolution (i.e., commercial data providers, ESA Copernicus Space Component Data Access datasets VHR_IMAGE_20YY) imagery. Key innovations include the ability to generate synthetic training datasets with an exact match between image features and labels, thereby improving the accuracy and efficiency of large-scale land cover mapping and monitoring, especially in regions with limited reference data. As part of Proof-of-Concept (PoC) studies, we developed and tested the synthetic data creation process to support large-scale mapping activities in the high-resolution and very-high-resolution imagery domains.
Thus, we implemented a pipeline of multiple Generative Adversarial Networks (GANs), trained on specified EO input data (RGB and NIR spectral channels and NDVI) in combination with additional reference data (land cover, building footprints), to produce diverse and realistic sample datasets. With the simulated datasets, an exact one-to-one relationship between reference labels and EO spectral data is available, which was further used to train an AI model (i.e., a U-Net) to predict various parameters (e.g., building densities, land cover) for test regions around the globe. Overall, the PoC studies revealed that the generated synthetic datasets appear realistic and allow the training of models with a performance close to that of models trained on real datasets. It turned out that the most realistic datasets were generated based on the idea of “style transfer”, i.e., the datasets were generated by transferring features from the style of one region to another region. For example, we generated satellite imagery of rural China by transferring imagery of mountain areas in South America to the style of Asian rural areas. We further found that a mixed training strategy (combining a large synthetic dataset with fine-tuning on a small real dataset) was the most cost-effective. Synthetic data offers major benefits in EO by addressing challenges like data scarcity, class imbalances, and the high cost of collecting and labeling reference data. It can fill gaps where real data is unavailable, improve dataset diversity, balance class distributions, and enhance model performance. Synthetic data allows for automated, accurate labeling, reduces manual work, and helps create rare scenarios for anomaly detection. It is faster and more cost-effective than real data collection, and can be used for benchmarking, domain adaptation, and stress-testing models.
The long-term goal is to integrate synthetic training data for the creation of Copernicus Land Monitoring Services (CLMS) products at continental (e.g., CLMS Pan-European High-Resolution Layers and Priority Area Monitoring components) and global (Copernicus Emergency Management Service Exposure Mapping Components) scales, allowing for increased accuracy, efficiency, and overall degree of automatization of large-scale land cover mapping activities.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Context Matters - How Climate-aware Neural Networks Improve Satellite-based Land Cover Classification

Authors: Johannes Leonhardt, Jürgen Gall, Ribana Roscher
Affiliations: University Of Bonn, Lamarr Institute for Machine Learning and Artificial Intelligence, Forschungszentrum Jülich GmbH
Climatic conditions play an important role in shaping the Earth's surface, as they are a key driver of landscape and ecosystem variability. Accordingly, climatic factors also determine the distribution and appearance of different land cover classes, such as urban areas, forests, grasslands, and wetlands [1]. For instance, while warmer, wetter climates support dense forests, arid regions are usually dominated by barren land or shrubs. Despite this apparent connection, most land cover classification models do not incorporate climatic information, and thus disregard critical contextual information by focusing solely on imagery. In our work, we therefore explore different ways in which climatic data can be utilized to improve land cover classification. In particular, we first discuss the direct incorporation of climatic data into neural networks for land cover classification in a multi-modal setup. We show that climate-aware models outperform traditional imagery-only models on several benchmark datasets [2]. Second, we tackle the task of climate-conditional editing of satellite images. Our model, ClimSat, is a diffusion autoencoder which is able to realistically simulate the effect of varying climatic conditions on satellite imagery. The model can ultimately be employed in the context of land cover classification by augmenting geographically limited datasets, which leads to a significant performance improvement of the resulting land cover classification models [3].

I. Climate-conditional land cover classification

To train climate-aware land cover classification models, we first enrich existing satellite image datasets with auxiliary climate data, such as temperature or precipitation statistics, using the satellite images' georeferencing information. We then compare two methods of incorporating these additional data into neural networks, learned embeddings and conditional batch normalization [4], against a climate-agnostic ResNet18 baseline.
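Conditional batch normalization predicts the normalization layer's scale and shift from the conditioning signal, here the climate descriptor. A minimal stdlib sketch of the idea, with made-up toy weights and climate values (not the paper's trained parameters):

```python
import math

def conditional_batch_norm(feats, climate, W_gamma, W_beta, eps=1e-5):
    """Normalize each feature channel over the batch, then scale/shift with
    parameters predicted linearly from each sample's climate vector."""
    n, c = len(feats), len(feats[0])
    # per-channel batch statistics
    means = [sum(f[j] for f in feats) / n for j in range(c)]
    vars_ = [sum((f[j] - means[j]) ** 2 for f in feats) / n for j in range(c)]
    out = []
    for f, clim in zip(feats, climate):
        # gamma, beta are linear functions of the climate descriptor
        gamma = [1.0 + sum(w * x for w, x in zip(W_gamma[j], clim)) for j in range(c)]
        beta = [sum(w * x for w, x in zip(W_beta[j], clim)) for j in range(c)]
        out.append([gamma[j] * (f[j] - means[j]) / math.sqrt(vars_[j] + eps) + beta[j]
                    for j in range(c)])
    return out

# batch of 3 samples, 2 feature channels, 2 climate variables (e.g. temp, precip)
feats = [[1.0, 5.0], [2.0, 6.0], [3.0, 7.0]]
climate = [[0.5, 0.1], [0.2, 0.3], [0.9, 0.4]]
W_gamma = [[0.1, 0.0], [0.0, 0.1]]   # toy "learned" weights
W_beta = [[0.2, 0.0], [0.0, 0.2]]
out = conditional_batch_norm(feats, climate, W_gamma, W_beta)
```

With all conditioning weights set to zero this reduces to plain batch normalization, which is why it slots into an existing backbone without changing its architecture.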
In our experiments, we find that especially conditional batch normalization improves the network's overall performance, not only in terms of accuracy, but also generalizability and training efficiency. This holds not only for the purely supervised case, but also when fine-tuning a model that has been pre-trained in a self-supervised manner and when performing the classification on learned representations of the images. We further show that our climate-aware models do not succumb to the pitfall of shortcut learning, which has been brought up in the context of conditional batch normalization [5].

II. Climate-conditional image editing

Climate-conditional image editing describes the task of manipulating images in such a way that the effect of certain climatic data, which differ from the actual conditions associated with the images, is simulated. This way, climate-conditional editing enables the disentanglement of the effect of the climatic context from the images' content. To tackle this task, we present ClimSat, a multi-conditional diffusion autoencoder [6] based on climatic and land cover data. Image editing is performed by varying the conditional climate data between the encoding step and the decoding step, while keeping land cover constant. In our experiments, we show that ClimSat is able to produce images of high quality, which are faithful to the simulated conditions, while maintaining the original images' contents. We find that ClimSat outperforms a mono-conditional variant of itself, as well as other types of generative models, with respect to these criteria. Finally, we showcase ClimSat's potential for land cover classification: we use our model to augment regionally limited datasets and show that training on the augmented datasets leads to more accurate and geographically generalizable land cover classifiers.
In summary, two different ways of incorporating climatic data into land cover classification pipelines are showcased: directly, or indirectly by using a generative model to augment the training dataset. In both cases, we were able to improve the performance of the resulting land cover classifiers, showing that climatic data is an under-utilized modality in the context of land cover classification. In future work, we seek to expand both approaches to land cover classification from satellite image time series and to conduct a formal analysis of the specific effects of individual climate variables.

References

[1] Sleeter, B.M., Loveland, T., Domke, G., Herold, N., Wickham, J., & Wood, N. (2018). Land Cover and Land-Use Change. Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment, Volume II (pp. 202–231). Washington DC: U.S. Global Change Research Program.
[2] Leonhardt, J., Drees, L., Gall, J., & Roscher, R. (2023). Leveraging Bioclimatic Context for Supervised and Self-Supervised Land Cover Classification. DAGM German Conference on Pattern Recognition (pp. 227–242). Cham: Springer Nature Switzerland.
[3] Leonhardt, J., Gall, J., & Roscher, R. (2024). ClimSat - A Diffusion Autoencoder Model for Climate-conditional Satellite Image Editing. Under review.
[4] De Vries, H., Strub, F., Mary, J., Larochelle, H., Pietquin, O., & Courville, A. C. (2017). Modulating early visual processing by language. Advances in Neural Information Processing Systems, 30.
[5] Sheth, I., Rahman, A. A., Havaei, M., & Kahou, S. E. (2022). Pitfalls of conditional batch normalization for contextual multi-modal learning. I Can't Believe It's Not Better Workshop at NeurIPS.
[6] Preechakul, K., Chatthee, N., Wizadwongsa, S., & Suwajanakorn, S. (2022). Diffusion autoencoders: Toward a meaningful and decodable representation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10619–10629).
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Deep Learning-Based Multi-Task Approach for Agricultural Field Extraction

Authors: Mr Ghaith Amin, Mr Thomas Oberlin, Ms Valerie Demarez
Affiliations: University Toulouse III Paul Sabatier, University of Toulouse, CNES/CNRS/IRD, CESBIO, MEOSS, 39 Chem. des Ramassiers, 31770, Colomiers, France, Fédération ENAC ISAE-SUPAERO ONERA, Université de Toulouse, 10 Avenue Marc Pélegrin, 31055 Toulouse, France
Abstract: The delineation of agricultural fields is crucial for modern precision agriculture, enabling critical applications such as crop monitoring, yield estimation, and irrigation management. Traditional methods for field delineation often rely on edge-based or region-based segmentation algorithms, which face challenges such as false boundaries and suboptimal segmentation outcomes due to sensitivity to noise, arbitrary parameter settings (Chen et al., 2015), and difficulty in adapting to complex agricultural systems. While recent advances in deep learning have shown promise, most existing approaches are constrained by their reliance on very high-resolution commercial satellite imagery. Other approaches depend on Satellite Image Time Series (SITS), which necessitate high preprocessing demands and suffer from limited scalability. Our study addresses these challenges by leveraging Sentinel-2 Level-3A data (Hagolle et al., 2018), which provide high temporal and spatial consistency at a 10-meter resolution through monthly cloud-free composites. To our knowledge, this is the first study to apply such data for agricultural field extraction, offering a cost-effective and accessible alternative to traditional data sources. This research introduces an operational, deep learning-based multi-task approach to agricultural field extraction, using the ResUNet-a d7 (Diakogiannis et al., 2020) architecture and Sentinel-2 Level-3A cloud-free satellite data. The ResUNet-a d7 architecture, a fully convolutional neural network, integrates multi-task learning to predict field extents, boundaries, and distance maps, effectively addressing challenges posed by diverse agricultural landscapes, particularly those with small, irregular fields. The model's ability to generalize across temporal and spatial variations was rigorously tested, demonstrating its effectiveness for large-scale operational agricultural monitoring. 
Unlike other single-task models, the ResUNet-a d7 incorporates conditioned multi-task outputs, which refine predictions by integrating the distance map and boundary mask into the final segmentation. To support this experiment, a comprehensive dataset was created, encompassing 14 geographically diverse sites across France, representing various climatic zones, soil types, and agricultural practices. This dataset includes Sentinel-2 Level-3A images from 2021 and 2022, providing two acquisitions per site and comprising more than 123,000 image patches. Each image patch is paired with its corresponding ground truth annotations, which include detailed representations of agricultural field extents, boundaries, and a normalized Euclidean distance map. The distance map encodes the distance of each pixel within a field to the nearest boundary pixel, providing additional spatial context for model training. These paired Sentinel-2 patches and annotations form the foundation for supervised learning, enabling robust model training and validation. This large-scale dataset will soon be made openly accessible to the research community to encourage further innovation in agricultural field monitoring. The approach was evaluated using both pixel-based and object-based metrics, demonstrating strong performance with an average weighted F1 score of 91.7% and an Intersection over Union (IoU) of 83.8% across all test sites. To assess its generalization capability, a spatio-temporal transferability experiment was conducted. Despite geographic and temporal variations, the model achieved a high weighted F1 score of 90.4% and 94.5% on two separate sites. These results validate the model's zero-shot generalization ability, highlighting its potential for operational applications without requiring additional retraining. A key innovation of this study is the post-processing method based on Gaussian Mixture Models (GMM) (Liang et al., 2022). 
Unlike supervised methods requiring manual parameter tuning, the GMM approach is fully unsupervised, making it particularly suitable for operational workflows. This post-processing step refines the separation of adjacent fields by leveraging the extent and boundary masks generated by the model. It transitions the outputs from semantic segmentation to instance segmentation, uniquely identifying each field, a critical step for real-world applications. In conclusion, this research presents a novel, scalable approach to agricultural field delineation by combining the ResUNet-a d7 multi-task learning architecture with Sentinel-2 Level-3A data and an unsupervised post-processing method. By addressing key challenges in field delineation and offering a cost-effective solution, this study paves the way for advancements in precision agriculture, enabling timely decision-making and sustainable management of agricultural resources. With its demonstrated adaptability across diverse geographic and temporal conditions, this approach shows strong potential for operational deployment in large-scale agricultural monitoring. Furthermore, by making the dataset openly available, this research fosters collaboration and innovation within the scientific community.

References:
- Chen, B., Qiu, F., Wu, B., Du, H., 2015. Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty. Remote Sens. 7, 5980–6004. https://doi.org/10.3390/rs70505980
- Hagolle, O., Morin, D., & Kadiri, M. (2018). ATBD: Detailed Processing Model for the Weighted Average Synthesis Processor (WASP) for Sentinel-2 (1.4). Zenodo. https://doi.org/10.5281/zenodo.1401360
- Diakogiannis, F.I., Waldner, F., Caccetta, P., Wu, C., 2020. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS J. Photogramm. Remote Sens. 162, 94–114. https://doi.org/10.1016/j.isprsjprs.2020.01.013
- Liang, C., Wang, W., Miao, J., & Yang, Y. (2022). GMMSeg: Gaussian mixture based generative semantic segmentation models. Advances in Neural Information Processing Systems, 35, 31360–31375.
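The normalized Euclidean distance map used as one of the multi-task targets above can be illustrated with a brute-force stdlib sketch (the grid and masks are toy examples, and a production pipeline would use a proper distance transform rather than this O(n²)-per-pixel loop):

```python
import math

def distance_map(extent, boundary):
    """For each pixel inside a field, the Euclidean distance to the nearest
    boundary pixel, normalized to [0, 1] by the map's maximum value."""
    h, w = len(extent), len(extent[0])
    bpix = [(i, j) for i in range(h) for j in range(w) if boundary[i][j]]
    dist = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if extent[i][j]:
                dist[i][j] = min(math.hypot(i - bi, j - bj) for bi, bj in bpix)
    m = max(max(row) for row in dist) or 1.0
    return [[d / m for d in row] for row in dist]

# 5x5 toy field: a ring of boundary pixels around a 3x3 interior
extent = [[1] * 5 for _ in range(5)]
boundary = [[1 if i in (0, 4) or j in (0, 4) else 0 for j in range(5)]
            for i in range(5)]
dm = distance_map(extent, boundary)
print(dm[2][2])  # centre pixel is farthest from the boundary -> 1.0
```

Pixels on the boundary get value 0 and field centres approach 1, which gives the network the extra spatial context described in the abstract.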
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Leveraging geospatial metadata to enhance large scale land cover mapping: A case study using European LUCAS Data

Authors: Babak Ghassemi, Cassio Dantas, Raffaele Gaetano, Emma Izquierdo-Verdiguier, Omid Ghorbanzadeh, Francesco Vuolo, Dino Ienco
Affiliations: INRAE, Inria, UMR TETIS, University of Montpellier, CIRAD, Inria, UMR TETIS, University of Montpellier, Institute of Geomatics, University of Natural Resources and Life Sciences, Vienna (BOKU University)
Land Use and Land Cover (LULC) maps derived from Earth Observation (EO) data are essential tools for developing sustainable land and resource management practices. They provide insights into how landscapes change over time due to human activity and natural processes. Predictions indicate that nearly ten billion people will reside on this planet by the year 2050. Thus, effectively monitoring farming landscapes and crop yield becomes crucial (Bose et al., 2016; Luo et al., 2013). To this end, precise and timely updated LULC maps play an important role, supporting precision agriculture and assisting in managing the spatial dynamics of food production. Recent advancements in EO technology enable us to observe Earth's surface with unprecedented spatial, spectral, and temporal resolution, facilitating the detection of subtle land changes across large areas (Cheng et al., 2023). This capability not only aids in mapping crop types and yields but also has significant applications in disaster management, urban planning, and environmental protection. As a result, LULC maps are indispensable for both global and local monitoring and decision-making (Zhao et al., 2023). The combination of open-access EO data and advanced machine learning (ML) techniques has enabled the development of precise, large-scale LULC maps (Gomes et al., 2020), leading to the creation of numerous LULC products on a continental or global scale with spatial resolutions ranging from 10 meters to 1 kilometer. EO data sources leveraged to derive these products include Sentinel, Landsat, MODIS, and AVHRR (Ghassemi et al., 2024). Developments in ML and deep learning (DL) have improved LULC classification, especially by processing large amounts of EO data. Traditional ML algorithms, such as Random Forests (RF), Support Vector Machines (SVM), and K-Nearest Neighbors (KNN), are commonly adopted for large-scale LULC mapping (Amini et al., 2022; Gislason et al., 2006; Huang and Townshend, 2002; Yuh et al., 2023; Mustafa Abdullah and Mohsin Abdulazeez, 2021). These methods significantly enhance classification accuracy and decrease processing time compared to manual interpretation (Zhao et al., 2023). Recently, DL approaches have further advanced the way EO data are analyzed and interpreted, uncovering patterns and trends that enhance the accuracy and efficiency of LULC classification (Xie et al., 2022). Multilayer Perceptrons (MLP) (Sawada et al., 2020; Zhang et al., 2021), Convolutional Neural Networks (CNN) (Du et al., 2019; Saralioglu and Gungor, 2022), Recurrent Neural Networks (RNN) and, more recently, Transformers are among the leading DL architectures currently exploited to derive effective LULC maps. However, despite the high-quality performance of these approaches in analyzing and understanding remote sensing imagery, they generally neglect geospatial metadata, such as coordinates, which represent a distinct dimension inherent to remote sensing data (Rolf et al., 2024). The integration of geospatial metadata is essential for enhancing the scalability and accuracy of remote sensing-based analyses at regional (Bellet et al., 2023), continental (Rußwurm et al., 2023), and global scales (Mai et al., 2023). Despite its importance, this information remains underutilized in tasks requiring geospatial awareness, such as large-scale land cover mapping. Recent studies are beginning to explore how geospatial metadata can be leveraged alongside imagery data to improve tasks where geospatial awareness is advantageous. Examples include species distribution modeling (Rußwurm et al., 2023) and scene classification (Mai et al., 2023; Bourcier et al., 2025), proposing strategies for encoding this information.
While most recent approaches (Rußwurm et al., 2023; Mai et al., 2023; Bourcier et al., 2025) have focused on using geospatial metadata to pretrain deep learning models independently of the specific downstream task, only (Bellet et al., 2023) investigates the use of geographical coordinates as inputs for land cover mapping at a regional scale. The achieved results reveal that incorporating such geospatial metadata clearly enhances the mapping accuracy of the model. To advance the integration of location metadata into DL frameworks, specifically for large-scale land cover mapping, we propose a method that explicitly incorporates both fine-grained and coarse-grained spatial information. This enhancement allows the DL model to take advantage of the geospatial location information associated with the input data. Specifically, the proposed model integrates fine-grained information, such as latitude and longitude coordinates, through a learnable positional encoding. Additionally, it employs a binary ecoregion partition (Mediterranean vs. non-Mediterranean) to include coarse-level spatial information, aiming to differentiate between shared and specific geographical patterns. As the backbone of our framework, we adopt an MLP architecture, which offers two key advantages: (i) it provides a lightweight model with computational requirements comparable to standard ML approaches commonly employed for large-scale analysis, such as RF, and (ii) it allows for an unbiased assessment of multi-scale spatial information management, free from the complexities and potential biases of over-parametrized models. To evaluate our proposed framework, we utilize the publicly available analysis-ready data provided by (Ghassemi et al., 2024), which encompasses various locations across the European continent. This benchmark serves as an ideal case study to test the capability of a land cover mapping framework to perform large-scale spatial inference. 
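The combination of fine-grained coordinates and a coarse binary ecoregion flag can be illustrated as follows. The abstract describes a learnable positional encoding; as a hedged stand-in, this sketch uses a fixed multi-frequency sin/cos encoding, and the rectangular Mediterranean test is purely illustrative, not the paper's actual ecoregion partition:

```python
import math

def encode_location(lat, lon, n_freq=4):
    """Multi-frequency sin/cos encoding of geographic coordinates
    (a fixed-basis analogue of a learnable positional encoding)."""
    lat_r, lon_r = math.radians(lat), math.radians(lon)
    feats = []
    for k in range(n_freq):
        f = 2.0 ** k
        feats += [math.sin(f * lat_r), math.cos(f * lat_r),
                  math.sin(f * lon_r), math.cos(f * lon_r)]
    return feats

def coarse_ecoregion(lat, lon):
    """Toy stand-in for the binary Mediterranean / non-Mediterranean partition."""
    return 1.0 if 30.0 <= lat <= 46.0 and -10.0 <= lon <= 37.0 else 0.0

# Montpellier: 4 frequencies * 4 components + 1 coarse flag = 17 features
vec = encode_location(43.6, 3.9) + [coarse_ecoregion(43.6, 3.9)]
print(len(vec))
```

The resulting vector is simply concatenated with the per-pixel spectral features before being fed to the MLP backbone, which is what lets the model separate shared from region-specific patterns.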
We compare the performance of our framework against a similar model that does not incorporate geospatial information. Additionally, in our analysis, we include standard ML approaches, such as RF and XGBoost, due to their widespread use in large-scale land cover mapping tasks. Our results clearly demonstrate that integrating either fine-grained or coarse-grained geospatial information enhances performance. Moreover, the best results are achieved by jointly leveraging both fine-grained and coarse-grained spatial information, underscoring the complementarity of these two information sources and the effectiveness of our empirical choices in integrating metadata related to geographical locations.

References

- Amini, S., Saber, M., Rabiei-Dastjerdi, H., Homayouni, S., 2022. Urban land use and land cover change analysis using random forest classification of Landsat time series. Remote Sensing. doi:10.3390/rs14112654.
- Bellet, V., Fauvel, M., Inglada, J., 2023. Land cover classification with Gaussian processes using spatio-spectro-temporal features. IEEE Transactions on Geoscience and Remote Sensing.
- Bose, P., Kasabov, N.K., Bruzzone, L., Hartono, R.N., 2016. Spiking neural networks for crop yield estimation based on spatiotemporal analysis of image time series. IEEE Transactions on Geoscience and Remote Sensing. doi:10.1109/TGRS.2016.2586602.
- Bourcier, J., Dashyan, G., Alahari, K., Chanussot, J., 2025. Learning representations of satellite images from metadata supervision, in: European Conference on Computer Vision, Springer.
- Huang, C., Davis, L.S., Townshend, J.R.G., 2002. An assessment of support vector machines for land cover classification. International Journal of Remote Sensing. doi:10.1080/01431160110040323.
- Cheng, X., Sun, Y., Zhang, W., Wang, Y., Cao, X., Wang, Y., 2023. Application of deep learning in multitemporal remote sensing image classification. Remote Sensing. doi:10.3390/rs15153859.
- Du, Z., Yang, J., Ou, C., Zhang, T., 2019. Smallholder crop area mapped with a semantic segmentation deep learning method. Remote Sensing. doi:10.3390/rs11070888.
- Garnot, V.S.F., Landrieu, L., Giordano, S., Chehata, N., 2020. Satellite image time series classification with pixel-set encoders and temporal self-attention, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
- Ghassemi, B., Izquierdo-Verdiguier, E., Verhegghen, A., Yordanov, M., Lemoine, G., Moreno Martínez, A., De Marchi, D., van der Velde, M., Vuolo, F., d'Andrimont, R., 2024. European Union crop map 2022: Earth observation's 10-meter dive into Europe's crop tapestry. Scientific Data. doi:10.1038/s41597-024-03884-y.
- Gislason, P.O., Benediktsson, J.A., Sveinsson, J.R., 2006. Random forests for land cover classification. Pattern Recognition Letters. doi:10.1016/j.patrec.2005.08.011.
- Gomes, V.C.F., Queiroz, G.R., Ferreira, K.R., 2020. An overview of platforms for big Earth observation data management and analysis. Remote Sensing. doi:10.3390/rs12081253.
- Luo, B., Yang, C., Chanussot, J., Zhang, L., 2013. Crop yield estimation based on unsupervised linear unmixing of multidate hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing. doi:10.1109/TGRS.2012.2198826.
- Mai, G., Xuan, Y., Zuo, W., He, Y., Song, J., Ermon, S., Janowicz, K., Lao, N., 2023. Sphere2Vec: A general-purpose location representation learning over a spherical surface for large-scale geospatial predictions. ISPRS Journal of Photogrammetry and Remote Sensing.
- Mustafa Abdullah, D., Mohsin Abdulazeez, A., 2021. Machine learning applications based on SVM classification: a review. Qubahan Academic Journal. doi:10.48161/qaj.v1n2a50.
- Rolf, E., Klemmer, K., Robinson, C., Kerner, H., 2024. Mission critical – satellite data is a distinct modality in machine learning. arXiv:2402.01444.
- Rußwurm, M., Klemmer, K., Rolf, E., Zbinden, R., Tuia, D., 2023. Geographic location encoding with spherical harmonics and sinusoidal representation networks. arXiv:2310.06743.
- Saralioglu, E., Gungor, O., 2022. Semantic segmentation of land cover from high resolution multispectral satellite images by spectral-spatial convolutional neural network. Geocarto International 37, 657–677. doi:10.1080/10106049.2020.1734871.
- Sawada, Y., Koike, T., Ikoma, E., Kitsuregawa, M., 2020. Monitoring and predicting agricultural droughts for a water-limited subcontinental region by integrating a land surface model and microwave remote sensing. IEEE Transactions on Geoscience and Remote Sensing 58, 14–33. doi:10.1109/TGRS.2019.2927342.
- Xie, C., Zhu, H., Fei, Y., 2022. Deep coordinate attention network for single image super-resolution. IET Image Processing 16, 273–284. doi:10.1049/ipr2.12364.
- Yuh, Y.G., Tracz, W., Matthews, H.D., Turner, S.E., 2023. Application of machine learning approaches for land cover monitoring in northern Cameroon. Ecological Informatics. doi:10.1016/j.ecoinf.2022.101955.
- Zhang, Z., Xu, W., Qin, Q., Long, Z., 2021. Downscaling solar-induced chlorophyll fluorescence based on convolutional neural network method to monitor agricultural drought. IEEE Transactions on Geoscience and Remote Sensing. doi:10.1109/TGRS.2020.2999371.
- Zhao, S., Tu, K., Ye, S., Tang, H., Hu, Y., Xie, C., 2023. Land use and land cover classification meets deep learning: A review. Sensors. doi:10.3390/s23218966.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Towards Continental-Scale Land Change Monitoring: Advancing Land Cover Segmentation with Multi-Resolution Data

Authors: Dr. Nicolò Taggio, Dr. Mauro Sylos Labini, Dr. George Benekos
Affiliations: Planetek Italia s.r.l., Planetek Hellas EPE
Land cover and land use (LU/LC) classification using remote sensing data presents significant challenges due to spectral variability, mixed pixels, seasonal changes, and the complexity of urban and natural environments. Coastal regions in Europe, rich in biodiversity, are particularly vulnerable to human activities, requiring detailed and reliable information for effective sustainable management. In 2020, Planetek Italia led an industrial consortium to develop for the European Environment Agency (EEA) the first Copernicus Land – very high resolution (VHR) Coastal Zone hotspot thematic mapping product for European coastal zones. This initiative provided specific LU/LC data to address environmental challenges, covering approximately 730,000 km² with a minimum mapping unit of 0.5 ha across 71 classes. However, the process was human-intensive, and updates were only possible every six years, limiting the ability to track rapid coastal changes. By reusing insights and data from the previous project, we have unlocked transformative improvements in our mapping capabilities. This approach enables us to significantly reduce human effort by leveraging the foundational work already completed, allowing for a more efficient workflow and better allocation of resources. Furthermore, we are now able to produce maps with much higher resolution, refining the mapping unit from 0.5 ha to 0.01 ha, which offers a much more detailed and granular view of land cover and land use patterns. Most importantly, this innovative approach allows us to generate updates annually instead of every six years, making it possible to capture and respond to rapid coastal modifications with unprecedented timeliness. This paradigm shift in how we approach LU/LC classification is achieved through the integration of advanced machine learning (ML) and deep learning (DL) techniques, including the U-Net architecture and the innovative geospatial foundation model Prithvi.
The results demonstrate the effectiveness of this approach: both U-Net and Prithvi achieved comparable overall accuracy (~70%) on Level 2 of the Coastal Zone classes, across 26 classes in 4 different nations. These technologies build on past work to deliver faster, higher-resolution, and more frequent results, ensuring that our mapping products meet the dynamic needs of coastal management and biodiversity conservation. Furthermore, we are actively working on hotspot change detection by utilizing the confidence information extracted from the DL methodologies. This allows us to pinpoint and analyze areas undergoing significant changes, providing critical insights into the evolution of coastal zones. Additionally, we are developing the first automatic European coastal map at Level 1 of the Coastal Zone classes (8 classes), aiming to establish a robust and scalable solution for consistent, high-quality LU/LC classification across Europe's coastal regions. These advancements mark a significant step toward more effective monitoring and sustainable management of Europe's diverse and dynamic coastal environments.
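Confidence-gated change detection of the kind described above can be sketched minimally: a pixel is flagged as a hotspot only when the predicted class differs between dates and both predictions are confident. The class names, confidence values, and threshold are illustrative, not from the actual product:

```python
def confident_changes(map_t1, map_t2, conf_t1, conf_t2, tau=0.8):
    """Flag a pixel as a change hotspot only when the class differs between the
    two dates AND both predictions exceed the confidence threshold tau."""
    h, w = len(map_t1), len(map_t1[0])
    return [[1 if map_t1[i][j] != map_t2[i][j]
             and conf_t1[i][j] >= tau and conf_t2[i][j] >= tau else 0
             for j in range(w)] for i in range(h)]

map_2023 = [["water", "urban"], ["forest", "forest"]]
map_2024 = [["water", "urban"], ["urban",  "forest"]]
conf_2023 = [[0.95, 0.90], [0.92, 0.60]]
conf_2024 = [[0.97, 0.88], [0.91, 0.55]]
hotspots = confident_changes(map_2023, map_2024, conf_2023, conf_2024)
print(hotspots)  # only the high-confidence forest -> urban pixel is flagged
```

Gating on confidence suppresses spurious "changes" that are really just uncertain classifications on one of the two dates.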
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Leveraging Sentinel-2 EO Data and AI for Land Use and Ecosystem Monitoring in the Eastern Mediterranean Coastal Region of the Middle East

Authors: Sahba El-Shawa, Peter Naylor, Domenico Barretta, Patrick Ebel, Chiara Maria Cocchiara
Affiliations: Scuola Superiore IUSS Pavia, European Space Agency
The Eastern Mediterranean Coastal Region (EMCR) in the Middle East is an important area for studying environmental and climate dynamics due to its distinct geography, diverse ecosystems, and vulnerability to both anthropogenic pressures and climate-related stressors. It is crucial to comprehend and track these impacts in order to enhance climate resilience and support environmental management. This project seeks to utilize Earth Observation (EO) data and Artificial Intelligence (AI) methods to monitor and analyze changes in terrestrial Essential Climate Variables (ECVs), primarily concentrating on land use and land cover (LULC) as well as vegetation health. The methodological framework created for this project can be replicated, providing a model for conducting similar studies in other environmentally vulnerable regions. This research contributes directly to the session’s focus on innovative approaches to large-area land cover change monitoring. By integrating advanced EO technologies with AI-driven analysis, the project showcases the value of combining high-resolution satellite imagery with machine learning to address pressing environmental challenges. Its emphasis on generating data-driven insights for sustainable development and climate resilience highlights the critical role of EO in supporting evidence-based decision-making for environmental management. The project makes use of Sentinel-2 data to create land use and land cover maps and calculate indices such as Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI). The use of NDVI is beneficial for evaluating the health of vegetation and identifying regions of stress or degradation, whereas NDWI provides valuable information on soil moisture levels, specifically in agricultural areas. These indicators play a key role in comprehending the effects of human activities and natural events on the area's ecosystems. 
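The two indices above are simple band ratios. A minimal sketch using illustrative Sentinel-2 surface-reflectance values (the NDWI shown is the McFeeters green/NIR formulation; the project's exact variant is not specified in the abstract):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR (B8) and red (B4)."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters) from green (B3) and NIR (B8)."""
    return (green - nir) / (green + nir) if (green + nir) else 0.0

# toy reflectance values
print(round(ndvi(0.45, 0.08), 2))  # dense vegetation -> high positive NDVI
print(round(ndwi(0.10, 0.45), 2))  # vegetated land -> negative NDWI
```

Both indices are bounded in [-1, 1], which makes them easy to threshold and compare consistently across the seasonal windows used in the analysis.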
By focusing on these parameters, the project identifies critical environmental trends, including urban expansion, shifts in agricultural practices, and ecosystem degradation. The study adopts a systematic approach to regional analysis by dividing the EMCR into neutral zones based on geographic and environmental criteria. These zones represent a diverse cross-section of the region's ecosystems and are analyzed using consistent seasonal windows to ensure comparability. The temporal scope of the analysis spans the years 2019 to 2024. This framework facilitates the detection of long-term environmental trends and offers a robust foundation for understanding the changing dynamics of the region. Machine learning techniques are central to the methodology, enabling efficient and accurate analysis of large datasets. Supervised classification techniques are used to create LULC maps, while automated change detection algorithms are used to detect important changes in land cover over a period of time. The use of AI-based methods not only improves the precision of the evaluation but also makes it possible to automate tasks, enabling the method to be scalable and adaptable for use in different regions. The project aims to provide useful insights into the environmental trends of the EMCR by examining and mapping alterations in land use, vegetation health, and soil moisture. These results have the potential to shape climate resilience strategies, land management practices, and sustainability efforts at a regional level.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Transfer Land Cover Maps Across Years: A Time Series-based Semantic Segmentation Approach

Authors: Dino Ienco, Christopher Jabea, PhD Cássio Fraga Dantas, PhD Roberto Interdonato, Prof. Flavie Cernesson, Eric Barbe, Nadia Guiffant, PhD Christian Weber
Affiliations: INRAE, INRIA, Univ. Montpellier, AgroParisTech, CNRS, CIRAD
In recent years we have seen an increasing availability of Satellite Image Time Series (SITS) data, mainly due to the launch of space programs associated with the release of high-frequency open-access imagery, such as the Copernicus Programme by the European Space Agency and its constellation of Sentinel satellites, or the Landsat collections delivered by NASA. This vast amount of data, combined with the development of increasingly advanced deep learning methods, has enabled the improvement of methodologies developed for challenging tasks such as Land Use/Land Cover (LULC) [1], [2] and Urban Fabric (UF) [3], [4] mapping. Mapping urban land [5] and urban dynamics [6] is fundamental for modelling urban growth, which in turn is crucial for understanding global environmental change and sustainable urban development. However, while it is well known that deep learning methodologies tend to be data-greedy, performing field campaigns on a regular basis in order to get up-to-date labels to retrain the underlying models is often unfeasible, for reasons related to economic costs, time availability, and operational constraints. In this context, the ability to produce up-to-date LULC maps of urban and peri-urban areas by exploiting models trained on labeled data from previous years' campaigns would support the associated tasks at virtually no data-collection cost. In this work, we evaluate the possibility of training SITS-based semantic segmentation approaches on densely annotated historical maps of urban areas for updating land cover information, focusing on a transfer scenario where a model is trained on 2015 data and then used on 2020 data (out-of-year scenario). We compare the obtained classification with that resulting from a model trained and tested on the year 2020 (in-year scenario).
We test four different methodologies, including a novel extension of the TSViT model [7], in which we enhance the original model with the Shifted Window mechanism from the Swin Transformer model [8], namely TSViT+SW. Experimental results on the urban area of Lyon (France) demonstrate that, while transferring a model trained on 2015 data to 2020 data generally results in degraded performance (i.e., compared to a model trained and tested on 2020 data), the proposed TSViT+SW obtains the best absolute performance, along with the lowest performance loss between the in-year and out-of-year scenarios. [1] R. Interdonato, D. Ienco, R. Gaetano, and K. Ose. Duplo: A dual viewpoint deep learning architecture for time series classification. ISPRS J. Photogramm. Remote Sensing, 149:91–104, 2019. [2] D. Ienco, R. Gaetano, and R. Interdonato. A contrastive semi-supervised deep learning framework for land cover classification of satellite time series with limited labels. Neurocomp., 567, 2024. [3] L. El Mendili, A. Puissant, M. Chougrad, and I. Sebari. Towards a multi-temporal deep learning approach for mapping urban fabric using Sentinel-2 images. Remote Sensing, 12(3), 2020. [4] R. Wenger, A. Puissant, J. Weber, L. Idoumghar, and G. Forestier. U-net feature fusion for multi-class semantic segmentation of urban fabrics from Sentinel-2 imagery: an application on Grand Est region, France. Int. J. of Remote Sensing, 43(6):1983–2011, 2022. [5] X. Liu, G. Hu, Y. Chen, X. Li, X. Xu, S. Li, F. Pei, and S. Wang. High-resolution multi-temporal mapping of global urban land using Landsat images based on the Google Earth Engine platform. Remote Sensing of Env., 209:227–239, 2018. [6] X. Li, Y. Zhou, Z. Zhu, L. Liang, B. Yu, and W. Cao. Mapping annual urban dynamics (1985–2015) using time series of Landsat data. Remote Sensing of Env., 216:674–683, 2018. [7] M. Tarasiou, E. Chavez, and S. Zafeiriou. ViTs for SITS: Vision transformers for satellite image time series. In IEEE/CVF CVPR, pages 10418–10428, 2023. [8] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In IEEE/CVF CVPR, pages 10012–10022, 2021.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Synthesis and Evaluation of Seamless, Large-Scale, Multispectral Satellite Images Using Conditioned Generative Models

Authors: Torben Dedring, Jun.-Prof. Dr. Andreas Rienow
Affiliations: Ruhr-University Bochum
Artificial intelligence (AI) and deep learning methods have become essential approaches for land use and land cover (LULC) classification, which is important in urban planning and regional management. While the extraction of LULC information from multispectral satellite images has been a well-studied part of past and present research, only a few studies have emerged on the recovery of spectral properties from LULC information. Estimates of the spectral characteristics of LULC categories could enrich LULC forecasting models by providing the information necessary to delineate vegetation indices or microclimatic parameters. In the field of geosciences, and especially remote sensing, Generative Adversarial Networks (GANs) are deployed for various applications, such as domain adaptation, change detection and super-resolution. However, GANs have also opened up the entirely new field of "Deep Fake Geography", a term coined by B. Zhao et al. (2021) to emphasize the impact of AI on the challenges of erroneous or simplified geospatial data inherent in geography. Even though several studies have used GANs on remote sensing data, only a few focus on the generation of synthetic satellite data, and the prospects of multispectral image generation have been neither widely investigated nor evaluated in depth, since impartial metrics have not been applied. Furthermore, the usage of LULC information as a conditional GAN input can serve as an essential parameter to receive tailored results, opening new fields of applicability for LULC forecasting models and in scenario-based urban planning. As previous studies have proven, generating realistic patches of satellite images with GANs is possible. However, generating small square image patches only offers a few application possibilities, which raises the need for image stitching methods.
This research aims to generate seamless, large-scale, synthetic multispectral satellite images that mimic those captured by Sentinel-2 (S2). Two different sets of input data were used to train the network. The first input data set uses information from the Urban Atlas, a LULC product with 27 land-use classes developed under the Copernicus Project, building footprints as vector files from OpenStreetMap and a no-data mask. We will refer to this training data set as Training Set A. In addition to the inputs used in Training Set A, Training Set B uses crop-type information from the European Union Crop-Type Map provided by the Joint Research Centre of the European Commission. The additional crop-type information is supposed to enhance the synthetic satellite image, assuming that it can assist the model in understanding agricultural land cover patterns. For the conditional generative adversarial network (CGAN) itself, we adapted the concept and the rough architecture from the Pix2Pix network, with a U-Net-based architecture for the generator and a PatchGAN architecture for the discriminator. We used a Euclidean distance-based patch-fusion method to stitch the synthetic multispectral image patches back into a larger image, using the administrative region of Bonn and Rhein-Sieg in Germany for validation. The approach generated a realistic-looking satellite image without noticeable seams between patch borders. We evaluate the resulting images based on several metrics, such as difference calculations, the spectral information divergence (SID), and the Fréchet inception distance (FID). The models reach mean SIDs as low as 0.026 for urban fabrics and forests and FIDs below 90 for bands B2 and B5, showing that the CGAN is capable of synthesizing distinct synthetic features matching those typical of the respective LULC categories and manages to mimic multispectral signatures.
The additional crop-type input data in Training Set B significantly improved performance, leading to better scores in nearly all metrics. We conclude that the approach can support scenario-oriented sustainable urban planning through the provision of synthetic spectral signatures.
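The spectral information divergence used in the evaluation treats two spectra as probability distributions and measures their symmetric relative entropy. A minimal numpy sketch of the metric (the band values and normalisation here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sid(x, y, eps=1e-12):
    """Spectral Information Divergence between two spectra.

    Each spectrum is normalised to a probability distribution and the
    symmetric relative entropy between the two distributions is returned.
    Identical spectra give 0; larger values mean greater spectral difference.
    """
    p = np.asarray(x, dtype=float) + eps
    q = np.asarray(y, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Illustrative per-band reflectance values (hypothetical):
real = [0.10, 0.15, 0.20, 0.40]
synthetic = [0.11, 0.14, 0.22, 0.38]
print(sid(real, real))       # identical spectra -> 0.0
print(sid(real, synthetic))  # small positive divergence
```

Because the spectra are normalised before comparison, SID is sensitive to the *shape* of the spectral signature rather than its overall brightness, which makes it a natural fit for checking whether synthetic LULC categories mimic real multispectral signatures.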

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: AutoML Land Cover classification in the Kavango–Zambezi Transfrontier Conservation Area in Google Earth Engine

Authors: Nuno Cesar De Sa, Mr Alexander Merdian-Tarko, Mr Anton Hiller, Mrs. Brit Reichelt-Zolho, Mrs Netsai Bollmann, Dr. Mitra Baratchi
Affiliations: World Wide Fund for Nature Germany (WWF-DE), Naturschutzbund Deutschland e.V. (NABU), UNICEF Deutschland, Leiden Institute of Advanced Computer Science (LIACS)
The Kavango-Zambezi Transfrontier Conservation Area (KAZA TFCA) was established in 2011 by Angola, Botswana, Namibia, Zambia and Zimbabwe to protect and conserve the unique biodiversity in the region. One of the key challenges in the area is the shifting cultivation practices encroaching on the protected areas and the migratory paths of the megafauna. As part of the KAZA Impact Monitoring working group, the World Wide Fund for Nature (WWF) was tasked with developing Earth Observation (EO) land cover mapping solutions to assist in the management of this unique conservation area. Our research focused on specific regions where WWF is developing a BMZ Bengo Global Programme project to introduce sustainable agriculture practices to minimize the impact of shifting cultivation in the protected areas. This study reports on our approach to developing an automated machine learning (AutoML) prototype framework based on Google Earth Engine (GEE) for land cover mapping in these regions. This framework aims not just to improve model performance but to be easily transferable to other use cases and ultimately to facilitate access to high-performance machine learning pipelines for users with limited technical expertise. Our model leverages GEE to generate meaningful features based on Sentinel-2 data alongside auxiliary datasets (e.g. digital elevation models) and optimizes model selection alongside feature and hyperparameter tuning, tailored to the offerings available in non-commercial GEE. We tested our modelling framework against default settings and consistently identified an improvement of 2 to 5% in overall accuracy for all study areas. This creates enhanced opportunities to monitor land cover change in the regions of interest. Future directions include enhancing the framework's interpretability and incorporating active learning modules to facilitate the inclusion of local partners in model development.
By leveraging GEE’s scalability and AutoML’s efficiency, our framework offers a replicable approach for land cover mapping in the KAZA TFCA and enables decision makers on the ground to identify and map land cover at larger scales in time and space. Our work showcases how access to large-scale computational solutions for EO can enable users with restricted funds, such as WWF, to develop targeted solutions that address the direct needs of conservationists on the ground.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Reviving wetlands: unveiling restoration dynamics using satellite images time series (SCO EO4Wetlands project)

Authors: Teodolina Lopez, Rollin Gimenez, Emma Bousquet, Raphaël Antoine, Cyrille Fauchard
Affiliations: Cerema, Direction Territoriale Occitanie, Satellite Team, Cerema, Direction Territoriale Normandie-Centre, Research team ENDSUM, CNES
Wetlands are among the most critical ecosystems on Earth because of their major role in climate change, biodiversity and hydrology. Depoldering experiments at the Living Lab Hedwige-Prosperpolder (LLHPP, Antwerp, Belgium) resulted in the partial destruction of the site in an attempt to restore a wetland. The EO4Wetlands project, which was awarded the Space for Climate Observatory (CNES) 2022 label, gives Cerema and its partners the opportunity to develop a wetland monitoring tool based on remote sensing and in situ data. Satellite Image Time Series (SITS) provide a unique opportunity to monitor changes in surface temperature, humidity and vegetation during and after this restoration. Specifically, the project includes visible, radar and thermal infrared imagery and derived products, with Sentinel-1 and -2 providing high revisit frequency, while MODIS and Landsat provide great temporal depth. However, the local temperate oceanic climate is prone to cloud cover, resulting in irregular optical SITS. In this study, the recently developed Time-Weighted Dynamic Time Warping, designed to handle irregular SITS for vegetation mapping, was combined with superpixel generation and unsupervised hierarchical clustering to perform an analysis of vegetation change on the LLHPP and the surrounding wetland. This method identified and monitored various classes of vegetation associated with different phenologies. A floodplain covered with pioneer vegetation was thus revealed on the polders following the dike breach in 2022. In situ data complement the SITS with various measurements of surface and subsurface physical parameters of the Saeftinghe soil and the Scheldt water. The wetland monitoring tool, which is a digital twin of the study area, integrates and visualises all SITS and derived indicators as well as in situ data.
All this information makes it possible to monitor the evolution of the vegetation over time and space and to assess the water content of the soil. In addition, the temporal depth of the SITS (since 2016 for the Sentinels, since the 1990s for Landsat and since 2000 for MODIS) will enable a precise historical study to be carried out, taking into account the past and future impact of climate change on wetlands.
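Time-weighted DTW adds a time penalty to the classic dynamic time warping cost, so that observations far apart in the season are harder to match even if their values are similar. A minimal pure-Python sketch of the idea (the logistic weighting parameters and toy NDVI-like series are illustrative assumptions; the project uses the published TWDTW method combined with superpixels and clustering, not this code):

```python
import math

def twdtw(x, y, tx, ty, alpha=0.1, beta=50.0):
    """Time-Weighted Dynamic Time Warping.

    Classic DTW on value series x and y, where the matching cost of a
    pair (i, j) is the absolute value difference plus a logistic weight
    that grows with the time gap (in days) between the two observations.
    """
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dt = abs(tx[i - 1] - ty[j - 1])
            w = 1.0 / (1.0 + math.exp(-alpha * (dt - beta)))  # logistic time weight
            cost = abs(x[i - 1] - y[j - 1]) + w
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Toy vegetation-index series sampled on different (irregular) dates:
a, ta = [0.2, 0.5, 0.8, 0.4], [0, 30, 60, 90]
b, tb = [0.2, 0.6, 0.7, 0.3], [0, 25, 65, 95]
print(twdtw(a, b, ta, tb))
```

The time weight is what makes the method robust to the irregular sampling caused by cloud cover: two phenological curves are compared as curves over the calendar, not merely as value sequences.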

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: CLMS Protected Areas: towards an enhanced LULC product using a semiautomatic approach, super-resolved S2 time series, and artificial intelligence.

Authors: Javier Becerra, José Manuel Álvarez-Martínez, Borja Jiménez-Alfaro, Carlos De Waissage, Justine Hugé, Noemi Marsico, Dimitri Papadakis, Alberto Martín, Adrián Sujar-Cost, Ana Sousa
Affiliations: CENTRO DE OBSERVACIÓN Y TELEDETECCIÓN ESPACIAL SAU, BIODIVERSITY RESEARCH INSTITUTE (UNIVERSITY OF OVIEDO-CSIC-P.ASTURIAS), COLLECTE LOCALISATION SATELLITES (CLS), EVENFLOW, EUROPEAN ENVIRONMENT AGENCY (EEA)
LULC monitoring is key to understanding biophysical variables and their link with anthropogenic impacts in the context of global change. Since the 1990s, CORINE Land Cover has continuously shown the evolution of the European land surface every 6 years. In addition, the Copernicus Land Monitoring Service (CLMS) portfolio provides a comprehensive set of ready-to-use LULC multiannual products. Within the last decade, CLMS has developed Priority Area Monitoring layers, which are up-to-date LULC products focused on different key areas: urban, riparian, protected and coastal spaces. Traditionally, LULC information has been manually derived by expert photo-interpreters over a satellite image. This pipeline has limitations that affect quality: (i) coarse satellite spatial resolution, (ii) reliance on a single acquisition date, (iii) bias from different operators (impact on comparability) and (iv) cost-effectiveness. This study aims to build an automatic workflow whose output is fine enough that manual effort is reduced and restricted to a few operational processes (e.g. roads, railways and canals continuity). The objective is to generate L4 LULC information for more than 24,000 N2K sites and a surface area of more than 700,000 sq km across Europe. Since several classes cannot be retrieved in a fully automated workflow, we propose a hierarchical approach based on both automatic and manual procedures. The information feeding the AI models is labeled according to a redefinition of target classes feasible for automation and filtered by analysing the spectro-phenological signal of each class against the EO data predictors (a time series of super-resolved S2 imagery). The models were generated at the intersection of European biogeographical regions and countries to guarantee (i) correct adjustment to model variability and (ii) crosswalks with EUNIS information for subsequent habitat mapping.
Predictions for each Natura 2000 site were followed by post-processing pipelines to convert the information to a ready-to-use vector file (e.g. translation of building footprints into a N2K-like 1110 polygon) and to avoid topological errors. First results in several landscapes show an accuracy higher than 90% for all classes at level 1 (urban, cropland, forests, grassland, heathland, sparse areas, wetland and water), whilst other classes at levels 2, 3 and 4 (vineyards, olive trees, burnt areas, coniferous, broadleaved trees) are also reaching those numbers. The use of ARD super-resolved Sentinel-2 imagery and models focused on time-series information improves the results by (i) reducing noise, (ii) capturing elements unseen in the original imagery (e.g. small roads, individual houses and even small and linear trees) and, more importantly, (iii) giving sufficient spatial detail to derive ready-to-use vector information, key to reducing the manual effort. On the one hand, specific issues arise because of the time series-based approach, such as confusion with areas that show different classes through the year (e.g. flat areas in Poland with snow cover, water surface and grassland depending on the month). On the other hand, classes that clearly depend on spectro-phenological behavior, such as croplands, are better captured. The results were also robust with low-quality S2SR time series thanks to the model's internal functioning. These results suggest that our solution is reproducible, allowing comparability between years with no bias from manual procedures. This product, via crosswalks between PA LULC and European EUNIS habitats at levels 3 and 4, gives the necessary information to design a correct stratification of in-situ surveys across Europe and, hence, the generation of future habitat mapping.
CLMS Protected Areas also provides valuable support for European legislation, including Natura 2000 network and Nature Restoration Law requirements, facilitating evidence-based conservation and restoration efforts across protected areas.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Automated Land Use/Land Cover Refinement for Multispectral Satellite Imagery

Authors: Michael Mommert, Hamed Hemati
Affiliations: Stuttgart University Of Applied Sciences
Reliable methods for land-use/land-cover segmentation are necessary to efficiently monitor changes on the face of the planet. State-of-the-art segmentation results can be achieved by combining deep learning methods with multispectral satellite imagery. While a significant body of previous research focuses on improving the underlying model architectures, datasets are oftentimes considered static. Nevertheless, the quality of the results relies heavily on accurate and up-to-date training data. The high cost of labeling images often necessitates the reuse of outdated labels, which can lead to significant inaccuracies due to environmental changes. For example, the plowing of a field or drying of grass will significantly alter the appearance of the labeled classes in the image, potentially confusing a model in its training and impeding the subsequent identification of these classes. While it is correct that a plowed field is still a field, a deep learning model might be confused by the fact that some areas showing exposed soil are considered fields, while others are not; the refinement of the labels based on image data simply improves the agreement of the labels with the image data, thereby guiding the learning process. This effect is relevant to most datasets that combine satellite imagery with land-use/land-cover labels that were not generated for the exact same point in time. Even short temporal differences on the order of days between the observation timestamp of a satellite image and the validity timestamp of the corresponding label information may introduce label discrepancies due to natural and human-caused effects such as those described above. This work introduces a data-driven approach to refine erroneous pixel-wise labels in multispectral satellite imagery for the task of land-use/land-cover segmentation. Our method leverages satellite image data to correct the corresponding class labels for the effects described above.
Namely, we compute the Normalized Difference Vegetation Index (NDVI) for each satellite image and compare it to the corresponding land-use/land-cover label masks; the labels of image areas that are supposed to be covered by the classes "cropland" or "grassland" are altered to the class "bare/sparse vegetation" if the NDVI is below a certain threshold, thereby assuming that the area has recently lost its vegetation due to some unknown process. An updated label dataset is generated with this method, which is used in the subsequent training and evaluation of a deep learning model. We evaluate our approach based on the ben-ge dataset (Mommert et al. 2021), which combines Sentinel-2 satellite imagery from the BigEarthNet dataset (Sumbul et al. 2019), primarily acquired in 2017 and 2018, with ESA WorldCover land-use/land-cover labels generated for the year 2021, leading to significant discrepancies in some dataset samples - foremost in areas that are utilized agriculturally. Out of 11 classes present in the ESA WorldCover schema, 8 are present in the ben-ge dataset, 3 of which ("shrubland", "bare/sparse vegetation" and "herbaceous wetland") cover less than 1% of all pixels in the dataset. As our model, we use a U-Net (Ronneberger et al. 2015) backbone, which we train to predict land-use/land-cover segmentation masks from the satellite imagery. To evaluate our method, we compare F1 scores from an ensemble of three independent training runs (we report mean and standard deviation across the three runs) based on the original ben-ge labels and an ensemble based on the refined labels; both datasets use the same train/validation/test split. Experimental results show that the removal of erroneous labels significantly improves the overall segmentation performance, and for some classes in particular.
Test dataset macro F1-scores (unweighted mean F1 across all classes) increase from 59% ± 1% (model trained and evaluated on original ben-ge labels) to 65% ± 1% (model trained and evaluated on refined labels), while micro F1-scores (ignoring classes altogether) and weighted F1-scores (weighted by support) increase marginally (91.5% ± 0.4% to 92.1% ± 0.3%, and 91.4% ± 0.4% to 91.8% ± 0.3%, respectively). More interestingly, the F1-scores for the "bare/sparse vegetation" class increase from 34% ± 6% to 94.3% ± 0.1%, enabling a significantly more confident segmentation for this class; this improvement is largely driven by the much higher support of this class (0.017% of original ben-ge labels and 4.5% of refined labels belong to this class). While the identification of the "grassland" class improves (93.5% ± 0.3% to 94.2% ± 0.3%), we see a drop in performance for the "cropland" class (76% ± 1% to 61% ± 2%), which we attribute to the much lower support of this class after the refinement (7.3% of all pixels before refinement compared to 4.2% after). An analysis of the confusion matrices based on the original ben-ge and refined datasets shows that confusion between different classes is significantly reduced. A qualitative analysis of the model output supports our quantitative evaluation results: the model shows a much better agreement between the visual appearance of satellite images and the corresponding labels. Furthermore, the model displays much more confidence in the identification of "bare/sparse vegetation" areas, which represents one of the smallest classes in the original ben-ge labels. The re-labeling of "bare/sparse vegetation" areas now allows for a highly confident identification of such areas, which is not possible based on the original labels. Our proposed method enables the identification and correction of mislabeled areas in land-use/land-cover datasets, particularly in highly dynamic land cover types like croplands and grasslands.
The method does not require additional data and uses only minimal resources to improve the results of the trained deep learning model. Utilizing other band indices, this method may be extended to correct other effects in order to provide more reliable land-use/land-cover monitoring.
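The relabelling rule described above (vegetation classes whose pixels show low NDVI become "bare/sparse vegetation") can be sketched as follows. This is a minimal numpy sketch; the class codes, band arrays and threshold are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

# Hypothetical ESA WorldCover-style class codes, used only for illustration.
CROPLAND, GRASSLAND, BARE_SPARSE = 40, 30, 60

def refine_labels(red, nir, labels, ndvi_threshold=0.3):
    """Relabel cropland/grassland pixels as bare/sparse vegetation
    wherever the NDVI computed from the image falls below a threshold."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    vegetated = np.isin(labels, (CROPLAND, GRASSLAND))
    refined = labels.copy()
    refined[vegetated & (ndvi < ndvi_threshold)] = BARE_SPARSE
    return refined

# 2x2 toy patch: the top-left cropland pixel has low NDVI and is relabelled.
red = np.array([[0.30, 0.05], [0.05, 0.05]])
nir = np.array([[0.32, 0.60], [0.60, 0.60]])
labels = np.array([[CROPLAND, CROPLAND], [GRASSLAND, BARE_SPARSE]])
print(refine_labels(red, nir, labels))
```

The refinement only ever moves labels *towards* the image evidence, so pixels whose labels already agree with the observed NDVI are left untouched.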

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Assessment of Biases in Crop Detection in Large-Scale EO Landcover Products in Sub-Saharan Africa

Authors: E. Sophia Klaussner, Stefan Lang
Affiliations: University of Salzburg
Agricultural monitoring is crucial in assessing, monitoring, and mitigating food insecurity in Sub-Saharan Africa. Due to the low prevalence of high-quality monitoring data, humanitarian practitioners have increasingly turned towards crop maps derived from landcover and landuse products to assess and monitor agricultural production. Common monitoring systems such as the GEOGLAM Crop Monitors, ASAP and the GLAM System are all based on landcover-derived crop maps. Recent studies, however, have found that landcover maps in Sub-Saharan Africa seemingly still struggle with correctly identifying and delineating cropland (Kerner et al., 2024). This persists even in light of increasing resolution: Kerner et al. (2024) showed that increasing resolution does not significantly increase the performance of the landcover products. Our study therefore aims at understanding the underlying issues in cropland detection in Sub-Saharan Africa. We hereby aim to create a better understanding of why large-scale landuse products underperform in Sub-Saharan Africa, and hope that our findings can directly benefit users and producers of landuse and landchange models. We will be analysing eleven commonly used land cover products, all produced at either continental or global scale, namely:

- Digital Earth Africa Cropland Extent
- Dynamic World
- Esri LULC
- ESA WorldCover
- ESA CCI Africa
- Global Food Security-support Analysis Data
- Nabil et al.
- Global Land Cover and Land Use Change
- Copernicus Land Cover
- ESA GlobCover
- ASAP Crop Mask

Methodology: To do this we plan to leverage different evaluation methodologies. We will firstly use the methodology developed by Petutschnig et al. (2024), who developed a framework to assess the data adequacy of geospatial datasets. Using this framework, we qualitatively evaluate all of the previously mentioned landcover datasets for strengths and weaknesses, and report on these.
Once that is undertaken, we leverage a statistically rigorous validation dataset (Kerner et al., 2024) to assess which classes cropland is commonly confused with, as well as what cropland commonly gets misclassified as. To understand patterns of confusion, we will evaluate whether this differs depending on Agro-Ecological Zones or country of evaluation. We will then use Suresh and Guttag's (2021) categories of bias to evaluate which common biases are found within each of the landcover products, as well as which are common to all products combined. Expected results: To the best of our knowledge, we will provide the first large-scale, comprehensive, qualitative assessment of eleven landcover products. Furthermore, we will provide data producers and users with an overview of common biases and confusion patterns within the products to increase the explainability and understanding of the products. We furthermore hope that this will inform producers of landcover products about common biases in the data, thereby allowing them to potentially avoid those. Citations: Kerner, H., Nakalembe, C., Yang, A., Zvonkov, I., McWeeny, R., Tseng, G., & Becker-Reshef, I. (2024). How accurate are existing land cover maps for agriculture in Sub-Saharan Africa? Scientific Data, 11(1), 486. https://doi.org/10.1038/s41597-024-03306-z Petutschnig, L., Clemen, T., Klaußner, E. S., Clemen, U., & Lang, S. (2024). Evaluating Geospatial Data Adequacy for Integrated Risk Assessments: A Malaria Risk Use Case. ISPRS International Journal of Geo-Information, 13(2), 33. https://doi.org/10.3390/ijgi13020033 Suresh, H., & Guttag, J. (2021). A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–9. https://doi.org/10.1145/3465416.3483305

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: EvoLand - Evolution of the Copernicus Land Monitoring Service Portfolio by Integrating Novel EO Data and Latest Machine Learning Algorithms for Continuous Monitoring of Land Surface Status and Dynamics.

Authors: Wai-Tim Ng, Dr. Ruben Van De Kerchove, Dr. Linda Moser, Dr. Fabian Berndt, Dr. Nicolas Granier, Philip Harwood, Konstantinos Kanellos
Affiliations: VITO, GAF AG, Collecte Localisation Satellites (CLS), Evenflow
The Horizon Europe EvoLand - Evolution of the Copernicus Land Service Portfolio - project aims to enhance the Copernicus Land Monitoring Service (CLMS) by advancing innovative methods, algorithms, and prototypes for monitoring land use/land cover dynamics and land surface characteristics at high spatial and temporal resolutions. EvoLand targets the development of eleven next-generation CLMS product candidates across Forest, Agriculture, Water, Urban and General land cover domains, leveraging cutting-edge methods such as data fusion, machine learning and continuous monitoring alongside novel Earth Observation (EO) and in-situ data sources. The project emphasizes aligning its prototype services with policy, data, and infrastructure requirements by engaging closely with Entrusted Entities and key Copernicus Land stakeholders and users. These requirements guide all methodological developments, ensuring relevance and impact. The methods include i) the integration of relevant novel EO datasets; ii) the acquisition of fit-for-purpose in-situ and training data; as well as the development of algorithms for iii) Weakly Supervised Learning; iv) improved spatial, temporal and thematic resolution; v) continuous monitoring (i.e. change detection and continuous land cover mapping) and vi) biomass mapping. EvoLand designs, tests, and implements algorithms on open-source, modular, and scalable platforms, using representative test sites both within Europe and globally. In the demonstration phase, these candidate services were deployed over larger regions, addressing critical thematic areas. To ensure continuous improvement, the candidate services are systematically assessed for their innovation potential, policy relevance, technical excellence, and operational readiness. EvoLand also proposes a strategy for transitioning these services to operational use. The project’s ultimate goal is to support Entrusted Entities by delivering research-driven, tangible advancements to the CLMS. 
This includes enhancing the information content, quality, and timeliness of services, thereby enabling evidence-based decision-making and demonstrating the potential for the ongoing evolution of the CLMS. In this presentation, we will showcase the CLMS prototype developments achieved during the first phase of the EvoLand project, along with an outlook for the second phase. This will include an overview of the methodological advancements achieved in EvoLand and their connection to the development of proposed CLMS prototypes which include Continuous Forest monitoring, Forest Disturbances mapping and Biomass mapping, Cover Crop Type mapping, Grassland and Cropland GPP, Small Landscape Features mapping, Improved Water Bodies mapping, Continuous Imperviousness monitoring, Automated Land Use mapping of Urban Dynamics, Continuous Mapping of Land Surface Characteristics and on-Demand Land Cover mapping.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Enhancing the Spatial Representativeness of the European-wide LUCAS Dataset for Machine Learning-Based Land Cover Status and Change Mapping

Authors: Lukáš Brodský, PhD. Martin Landa, Ing. Tomáš Bouček, PhD. Ondřej Pešek, Prof. Lena Halounová
Affiliations: Charles University, Department of Applied Geoinformatics and Cartography, Czech Technical University in Prague, Department of Geomatics, Faculty of Civil Engineering
The study of land use and land cover (LULC) is highly important for effectively managing natural resources and mitigating global challenges such as climate change, urban planning, agricultural strategies, and disaster risk reduction. The accuracy of LULC mapping by machine learning (ML) models depends upon the quality and representativeness of the input data. Despite extensive research on classification models, pre-processing, and nomenclatures, very little attention is devoted to the representativeness of training datasets, particularly for widely used in-situ datasets like LUCAS (Land Use and Coverage Area Frame Survey). LUCAS, which Eurostat manages, is a comprehensive pan-European dataset that is extensively utilized in agricultural and land cover mapping. However, LUCAS's spatial positional accuracy and suitability for high-resolution remote sensing applications (e.g., Sentinel-2 with 10–20 m resolution) remain underexplored. The LUCAS observation unit, defined as a 1.5-meter and 20-meter radius circle, is confronted with several challenges, including GPS positional inaccuracies, geometric discrepancies in remote sensing imagery, the position of the surveyor in the field, and local land cover heterogeneity. These factors introduce uncertainties that can propagate to the final mapping products, thereby compromising the performance of machine learning models. Pflugmacher et al. (2019) classified European-wide land cover using the LUCAS dataset. They achieved the highest accuracy of 75.1% using Landsat-8 data spanning three years (2014–2016) for Level-1 classification (12 classes), with minor modifications to the LUCAS input data. Positional representativeness, however, was not evaluated. Witjes et al. (2022) reported an 83% F1-score on the LUCAS dataset for Level-1 classification (5 classes), training on a set of Landsat images from 2000 to 2019.
This study assesses the spatial geometrical representativeness of LUCAS points for land cover classification, with the aim of enhancing their alignment with high-resolution datasets. We used LUCAS points from the ST_LUCAS system (https://geoforall.fsv.cvut.cz/st_lucas), which provides harmonized and space-time aggregated LUCAS data through an OGC-compliant interface. We accessed LUCAS in a fully automated processing pipeline via the ST_LUCAS Python API. The presented algorithm identifies inconsistencies between LUCAS point classes and OpenStreetMap (OSM), using OSM as a reference. The point geometries are adjusted by relocating them to nearby areas within a buffer zone where consistent land cover classes are observed. The effect of this modification on the accuracy of machine learning (ML) classification is subsequently analyzed. Case studies conducted in the Czech Republic and Ireland illustrate a notable enhancement in classification performance. The Level-1 land cover nomenclature was designed with 11 classes: artificial roofed and non-roofed built-up, cropland, permanent crops, broadleaved woodland, coniferous woodland, shrubland, grassland, bare, water, glacier, and wetlands. A Random Forest classifier was run using multitemporal Sentinel-2 cloud-free median mosaics. In the case of the Czech Republic, the F1 scores increased from 74.6% (training) and 75.4% (testing) to 87.4% and 87.8%, respectively. In Ireland's use case, the F1 scores increased from 64.9% (training) and 64.0% (testing) to 88.3% and 89.1%, respectively. These findings emphasize the critical role of training data representativeness for LULC status and change mapping accuracy.
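The relocation step described above can be sketched as follows. The grid-search strategy, function names, and parameter values are illustrative assumptions for demonstration, not the ST_LUCAS implementation:

```python
from dataclasses import dataclass
import math

@dataclass
class Point:
    x: float
    y: float

def relocate_point(pt, declared_class, reference_class_at, buffer_m=100.0, step_m=10.0):
    """Keep pt if its declared LUCAS class matches the reference (e.g. OSM)
    land cover at its location; otherwise move it to the nearest location
    inside the buffer where the classes agree (None if none exists)."""
    if reference_class_at(pt.x, pt.y) == declared_class:
        return pt
    candidates = []
    n = int(buffer_m // step_m)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            dx, dy = i * step_m, j * step_m
            d = math.hypot(dx, dy)
            if 0 < d <= buffer_m:  # skip the point itself, stay in buffer
                cand = Point(pt.x + dx, pt.y + dy)
                if reference_class_at(cand.x, cand.y) == declared_class:
                    candidates.append((d, cand))
    if not candidates:
        return None
    return min(candidates, key=lambda c: c[0])[1]
```

In practice the reference classes would come from rasterized OSM data and the adjusted points would feed the ML training set.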

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Scalable OOD Detection for Geospatial Deep Learning Models

Authors: Burak Ekim, Girmaw Abebe Tadesse, Caleb Robinson, Gilles Quentin Hacheme, Michael Schmitt, Rahul Dodhia, Juan Lavista Ferres
Affiliations: University of the Bundeswehr Munich, Microsoft AI for Good Research Lab
Robustness is essential for deploying deep learning models in Earth Observation (EO), where distribution shifts frequently degrade model performance, especially in low-data settings. Out-of-distribution (OOD) detection plays a crucial role in identifying inputs that fall outside the training distribution (in-distribution, or ID), ensuring model reliability in scenarios where data variations challenge model performance and helping to better understand where models might fail. Real-world settings, where large-scale geospatial models are deployed on vast amounts of data, are critical for maximizing a model’s potential to provide actionable insights and drive impactful decision-making. This makes OOD detection even more essential in these contexts, as it promotes the robustness and trustworthiness of geospatial models for large-scale deployment. Existing OOD detection methods often require access to OOD data or computationally intensive adjustments that compromise the model’s primary task performance, limiting their applicability in real-world scenarios. To address these challenges, we introduce TARDIS (Test-time Addressing of Distribution Shifts at Scale), a post-hoc OOD detection method designed for large-scale and production-level geospatial applications. TARDIS operates without prior knowledge of test-time distributions or computationally expensive adjustments, making it suitable for handling near-distribution shifts, such as those arising from variations in satellite data over time, geography, or environmental conditions. We refer to samples with unknown distributions encountered during inference as WILD samples. Using a pre-trained model and known in-distribution data, TARDIS disentangles WILD samples into surrogate ID and surrogate OOD labels. It begins by extracting internal activations from the pre-trained model for both ID and WILD samples.
These activations are clustered using k-means, with surrogate labels assigned based on the proportion of ID samples in each cluster: clusters predominantly composed of ID samples are labeled as surrogate ID, while others are labeled as surrogate OOD. The underlying assumption is that samples with similar distributions lie closer in the activation space than those from dissimilar distributions. Effectively segmenting the activation space is critical, as the distributions of WILD samples are unknown during deployment. To assign surrogate labels effectively, TARDIS relies on two key parameters: the number of clusters (k) to partition the activation space, and the ID fraction threshold (T), which determines whether a cluster is labeled as surrogate ID or surrogate OOD. Clusters with an ID fraction above T are assigned as surrogate ID, and those below T are assigned as surrogate OOD. Choosing appropriate values for k and T is critical, as they directly affect the accuracy of OOD detection. We use the EuroSAT and xBD datasets as a diverse testbed for evaluating our method. EuroSAT represents regional imagery at medium spatial resolution, focusing on patch-level land-cover classification, while xBD offers global imagery at very high spatial resolution, addressing pixel-level building detection. The differences in acquisition times, sensor parameters, processing levels, and task dimensions make these datasets complementary for assessing TARDIS across varied geospatial challenges. To introduce controlled distribution shifts, we reformulate the train, validation, and test splits of EuroSAT and xBD so that the model, when trained on the modified training set and evaluated on the adjusted test set, encounters shifts similar to those seen in geospatial model deployments. We categorize these shifts as covariate shift and semantic shift. Covariate shift occurs when the input data distribution changes between training and testing. 
This can introduce selection bias, as covariates in the training set may still appear in the test set to some extent, potentially skewing robustness evaluation. To address this, we also introduce semantic shift experiments where a class is held out during testing, creating an unseen semantic class. For our experiments, covariate shifts include spatial variance (varying spatial locations), temporal variance (varying time of data collection), and environmental variance (varying land cover or disaster types). For semantic shifts in the EuroSAT dataset, we train the model on nine out of ten classes and test on the holdout class, repeating this process so that each class serves as the holdout class once. We define 17 experimental setups with varying distribution shift types. TARDIS achieved performance comparable to the theoretical upper limit in 13 of the setups, demonstrating that it can successfully detect artificially introduced distribution shifts on benchmark datasets. We then deploy TARDIS at scale on the Fields of the World (FTW) dataset. FTW is a geographically diverse dataset designed for agricultural field segmentation, covering 24 regions across Europe, Africa, Asia, and South America. The dataset consists of multi-date, multi-spectral Sentinel-2 satellite patches paired with three-class semantic segmentation masks (field, field boundary, and background). The task involves segmenting these classes using a pair of Sentinel-2 images, one for the planting season and one for the harvesting season, as input. We take the training set of FTW as the ID set, then globally sample the WILD set from the Sentinel-2 catalog. Running TARDIS on these two sets, we disentangle WILD samples into ID and OOD, effectively flagging inference inputs that are OOD. This way, TARDIS can identify where the model produces strong or weak predictions, offering significant potential to improve performance through fine-tuning on challenging patches or guiding strategic data curation efforts.
Further evaluation shows that TARDIS can process the entire continent of Africa and assign distribution scores to each input image within 15 days on a single CPU. Distributed on GPUs, this process could theoretically be completed within a few hours, highlighting its suitability for large-scale geospatial deployments. We demonstrate TARDIS’s ability to uncover the strengths and weaknesses of geospatial deep learning models in real-world scenarios, taking a step toward answering the critical question: where and why does a particular model fail? This capability moves us closer to building more robust and trustworthy models for large-scale deployment. By addressing key limitations of prior methods and focusing on practical deployment scenarios, TARDIS offers a scalable and efficient solution for EO, enhancing the transparency and trustworthiness of geospatial models in production environments.
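The surrogate-labelling step described in the abstract can be illustrated with a small sketch. The tiny deterministic k-means, the toy 2-D "activations", and the parameter values are assumptions for demonstration; the actual TARDIS pipeline clusters high-dimensional model activations:

```python
import numpy as np

def kmeans(X, k=2, iters=20):
    """Tiny deterministic Lloyd's algorithm with farthest-point initialisation."""
    X = np.asarray(X, dtype=float)
    centers = [X[0]]
    while len(centers) < k:
        # next center = point farthest from all current centers
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def surrogate_labels(act_id, act_wild, k=2, T=0.5):
    """Cluster ID + WILD activations jointly; label each WILD sample by the
    fraction of known-ID members in its cluster (surrogate-ID if > T)."""
    X = np.vstack([act_id, act_wild]).astype(float)
    is_id = np.arange(len(X)) < len(act_id)
    cluster = kmeans(X, k)
    out = []
    for c in cluster[len(act_id):]:
        id_frac = is_id[cluster == c].mean()
        out.append("surrogate-ID" if id_frac > T else "surrogate-OOD")
    return out
```

The choice of k and T mirrors the two key parameters named in the abstract; clusters dominated by known-ID activations yield surrogate-ID labels for the WILD samples they contain.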

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Detecting vegetation changes via semi-supervised deep learning from space

Authors: Max Gaber, Dr. Dimitri Gominski, Dr. Florian Reiner, Prof. Dr. Rasmus Fensholt, Dr. Martin
Affiliations: University of Copenhagen
Accurate, high-resolution monitoring of tree growth and removal at the local scale is key to evaluating the success of reforestation projects. Traditional methods often lack the accuracy and scalability required for effective analysis and can be labor-intensive and time-consuming, limiting their effectiveness in large or remote areas. In addition, the dynamic nature of ecosystems requires continuous and detailed monitoring to capture subtle changes over time, which is difficult to achieve with traditional methods. Overcoming these limitations is essential to improving the management, outcomes, and continuity of funding for ecological restoration efforts. Satellite remote sensing technologies have greatly advanced the monitoring of vegetation change. However, high-resolution methods in particular face challenges such as image misalignment, varying illumination, and variability in image quality due to atmospheric conditions and sensor limitations. In addition, the lack of labeled training data for specific regions hinders the development of accurate models. These challenges highlight the need for models that are robust to variations in image quality and efficient in their use of labeled data. To address these challenges, we apply semi-supervised contrastive learning to a time series of high-resolution (3 m) PlanetScope images. Contrastive learning exploits unlabeled image pairs and appropriate priors to learn high-level semantics, in this case tree cover change. Using the concept of Seasonal Contrast, we develop a model based on MoCo v3 to capture temporal dynamics, assuming that changes are more likely to occur over longer time spans than between temporally close images, and using space as a proxy for time when creating negatives. We supervise the model with sparse LiDAR-derived canopy height maps (CHMs) from single time steps to ensure that the model is sensitive only to tree cover changes.
Furthermore, we use an adversarial approach to account for changes in image quality or climatic ground conditions and to remove time-correlated information from the embeddings. We evaluate the performance of the model on bi-temporal CHMs. By aiming to improve the accuracy and reliability of vegetation change detection, our approach has the potential to significantly enhance the monitoring and management of tree plantation and payment for ecosystem services projects.
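The contrastive objective underlying this idea can be sketched with a minimal InfoNCE-style loss: the embedding of a temporally close patch acts as the positive, while patches from other locations act as negatives. This NumPy sketch is illustrative, not the authors' MoCo v3 implementation, and the temperature value is an assumption:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor.
    anchor, positive: (d,) unit vectors; negatives: (n, d) unit vectors.
    Low loss when anchor is similar to positive and dissimilar to negatives."""
    pos = np.dot(anchor, positive) / temperature
    neg = negatives @ anchor / temperature
    logits = np.concatenate([[pos], neg])
    # cross-entropy with the positive as the target class (index 0)
    return -pos + np.log(np.sum(np.exp(logits)))
```

In the Seasonal Contrast setting, the positive would be the embedding of the same location at a nearby date, and negatives would be sampled from other locations (space as a proxy for time).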

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Creating harmonized Landsat 7 and 8 data for tracking LULC change

Authors: Galen Richardson, Dr. Anders Knudby
Affiliations: University of Ottawa, Natural Resources Canada
The Landsat archive can be used to track long-term (e.g. >50 years) changes in land surface variables, and is often used in crafting policy decisions about environmental conservation and anthropogenic development. Most change detection studies based on the Landsat archive use data collected after the launch of the Thematic Mapper (TM) sensor on Landsat 4 (1982), which provided observations at 30 m spatial resolution with six or more spectral bands. Some of the sensors used in the Landsat mission are considered broadly equivalent, such as the TM and Enhanced Thematic Mapper Plus (ETM+) sensors, and the Operational Land Imager (OLI) and OLI-2 sensors. However, substantial differences in atmospheric correction algorithms, radiometric resolution, and spectral response functions suggest that cross-sensor harmonization should be applied when using data from the OLI/OLI-2 sensors together with TM/ETM+ data. Previous Landsat harmonization efforts have focused on creating regional ETM+ and OLI linear regression models, calibrated with numerous overlapping images captured within a one-day gap. However, such studies have published different ETM+/OLI harmonization functions for different parts of the world, potentially due to the different land covers present. Therefore, harmonization functions trained in one part of the world can be suboptimal when used in a different study region. To facilitate effective land use land cover (LULC) change detection mapping using the majority of the Landsat archive, we created the Landsat ETM+ OLI Harmonization Script (LEOHS) in Google Earth Engine (GEE) to create ETM+/OLI harmonization functions over user-defined regions. LEOHS uses Collection-2 Landsat Tier 1 imagery in GEE and Theil-Sen regressions to create harmonization functions from ETM+ and OLI pixels observed within +/- one day. LEOHS was compared against models produced by Roy et al.
(2016) and Perez & Vitale (2023) over their respective study regions (Continental USA, Mediterranean Basin), and yielded harmonization functions with higher R² and lower Root Mean Squared Difference (RMSD) values when compared to an independent validation dataset. These results indicate that LEOHS provided better harmonization functions than previous studies. To illustrate the benefits of LEOHS, we conducted a LULC case study in Menorca, Spain. We trained a Random Forest (RF) classifier on a 2020 OLI image and 10,000 samples of three landcover classes (high vegetation, low vegetation, and urban). The trained RF model was used to predict LULC in a 1990 TM image that was harmonized using LEOHS, and a non-harmonized image. The maps generated from the harmonized and non-harmonized images had overall accuracies of 84% and 72% respectively, illustrating the substantial gains in regional LULC studies from using harmonized imagery produced by LEOHS. Landsat ETM+/OLI harmonization is essential for unbiased time series analysis using the majority of the Landsat collection. LEOHS can create ETM+/OLI harmonization functions for both regional and global applications. Once released on GitHub, LEOHS will substantially benefit the LULC community by enabling researchers to create large datasets of harmonized imagery to track long-term environmental changes.
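The core harmonization fit can be illustrated with a plain Theil-Sen estimator: fit OLI ≈ slope × ETM+ + intercept from near-coincident pixel pairs, then apply the function to ETM+ reflectances. This NumPy sketch stands in for the GEE implementation and is not the LEOHS code itself:

```python
import numpy as np

def theil_sen(x, y):
    """Theil-Sen regression: slope = median of all pairwise slopes,
    intercept = median residual. Robust to outlier pixel pairs."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    i, j = np.triu_indices(len(x), k=1)          # all unique pairs
    valid = x[j] != x[i]                          # avoid division by zero
    slopes = (y[j][valid] - y[i][valid]) / (x[j][valid] - x[i][valid])
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return slope, intercept

def harmonize(etm_band, slope, intercept):
    """Apply a fitted harmonization function to ETM+ reflectance values."""
    return slope * etm_band + intercept
```

A per-band fit like this, computed from pixels observed within +/- one day, yields one harmonization function per spectral band and region.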

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: EarthMosaics – A Fully Automated Self-Serve Harmonized Data Fusion Solution for Large-Area Land Change Monitoring

Authors: William Parkinson, Rob Irwin, Christopher Rampersad
Affiliations: Earthdaily Analytics
Mosaic systems in Earth Observation are a common tool for land use and land cover mapping. As more datasets have been produced with various spectral bands and resolutions, the need to harmonize and compile these mosaics has become increasingly important. As data volumes increase, the capacity to deliver useful insights at a higher temporal frequency drives a growing need to automate these complex processing tasks, reducing human intervention to enable scalability. Various open and commercial solutions exist to bring multi-temporal data together from single sensors, such as the Mosaic Hub for Sentinel-2 [1], Planet’s basemaps [2], and long-term satellite data records such as those maintained by the Government of Canada [3]. EarthMosaics is a fully automated, self-serve solution that harmonizes multi-source satellite imagery to enable efficient and scalable land change monitoring. While various commercial and open services provide large-scale mosaic datasets for land cover and land use change, few systems are architected to support a diversity of sensors such as Landsat-8, Landsat-9, and Sentinel-2 unless they can draw from curated data sources such as the harmonized Landsat and Sentinel-2 data prepared by USGS. This leaves a gap in providing analysts and scientists with timely data that can be used and tailored for applications as the need arises. The second gap is customization of how the mosaic system selects pixels. For some use cases, users may prefer a specific type of data-selection bias that enables their application. For example, burn scar mapping after major fire events requires different considerations than peak-of-season land cover mapping. These selection requirements involve more than just the right time of interest (TOI); they use an index or metric to weight the relevant data for selection.
Multi-source mosaics introduce several challenges that can accumulate into artifacts, whether through data quality, processing-system differences, or wavelength mismatches. EarthMosaics manages these multi-source challenges directly by ensuring consistent radiometric and geometric alignment, applying custom cloud-masking algorithms, and exposing configurable construction parameters. EarthMosaics also resamples and aligns disparate resolutions to produce a common harmonized mosaic structure that is consistent and analysis-ready across large spatial and temporal extents. EDA’s EarthMosaics system has use cases in forest change, monitoring agricultural trends, disaster response, climate change studies, and mining. This is enabled by various ways to select data, including high or low Normalized Difference Vegetation Index (NDVI), high or low Normalized Burn Ratio (NBR), and “best available measurement”. Each option can be selected for the use case at hand; for example, when assessing the impact of fires users may choose NBR, whereas a land cover activity may use NDVI. Recently, EDA has been expanding its EarthMosaics system to support other open data sources, such as the Environmental Mapping and Analysis Program (EnMAP) from DLR. This hyperspectral data, which can be highly useful for mineral exploration, is being added to EDA’s mining and exploration offering, Marigold. All of this is created and made available through EDA’s data platform and EO ecosystem, Earth Data Store. This flexible system, supported by the open data policies of ESA, USGS, and DLR, packages data for exploitation at scale to rapidly impact research, policymaking, and decision-making. This talk will address harmonizing data on the fly, the unique challenges of hyperspectral mosaicking, and a future in which more hybrid open-data and commercial solutions will be available.
EarthMosaics can serve as a foundation for large-area research programs where high-quality image mosaics are needed to understand and quantify change.
[1] https://s2gm.land.copernicus.eu/mosaic-hub
[2] https://www.planet.com/products/basemap/
[3] https://natural-resources.canada.ca/maps-tools-and-publications/satellite-imagery-elevation-data-and-air-photos/long-term-satellite-data-records-ltsdr/10935
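The index-driven pixel selection described in this abstract can be sketched as a per-pixel argmax composite over a time stack: for each pixel, keep the observation from the date whose score (e.g. NDVI for land cover work, or NBR-based scores for burn mapping) is highest. Array shapes and the scoring choice below are illustrative assumptions:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index; small epsilon avoids 0/0."""
    return (nir - red) / (nir + red + 1e-9)

def composite(stack, scores):
    """Best-pixel composite.
    stack: (t, h, w, bands) time stack of images.
    scores: (t, h, w) per-observation selection metric (e.g. NDVI).
    Returns an (h, w, bands) mosaic taking, per pixel, the date of max score."""
    best = np.nanargmax(scores, axis=0)                      # (h, w) date index
    h, w = best.shape
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return stack[best, ii, jj]                               # (h, w, bands)
```

Swapping the `scores` array (high NDVI, low NDVI, high NBR, ...) changes the selection bias without touching the compositing logic, mirroring the configurable options the abstract describes.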

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Multisatellite Time Series Change Detection for Near-Real-Time Monitoring of Deforestation in the Brazilian Amazonia

Authors: Mr Felipe Souza, Dr Rolf Simoes, Mr Felipe Carlos, Dr. Johannes Reiche, Dr Karine Ferreira, Gilberto Camara
Affiliations: National Institute for Space Research (INPE), Brazil, OpenGeoHub Foundation, Wageningen University, Menino Software Crafter
The Brazilian Amazon rainforest has experienced significant deforestation; current estimates indicate that close to 20% of the original natural cover has been cut. A critical component of government actions to control forest cuts is reliance on technologies for near-real-time deforestation monitoring. When law enforcement agencies use such data effectively, they can act immediately, before large-scale forest removal happens [1]. Since 2004, Brazil's National Institute for Space Research (INPE) has operated DETER, a satellite-based monitoring tool that provides rapid alerts on changes in forest cover in Amazonia [2]. DETER delivers timely information to combat illegal deforestation and forest degradation. Currently, DETER relies on visual interpretation of data from the WFI instrument on the CBERS-4, CBERS-4A and Amazonia-1 satellites, a multispectral sensor with an 850 km swath and 64-meter resolution. Images from WFI enable DETER to categorise detected changes as clear-cut deforestation, forest degradation, or selective logging, providing detailed insights into the nature of forest disturbances. As an alternative to visual interpretation, researchers have investigated using Sentinel-1 time series for detecting tropical forest disturbances [3-5]. Doblas et al. [6] compared methods for near-real-time monitoring using Sentinel-1 and optical data from Sentinel-2 and Landsat satellites. They concluded that combining optical with SAR time series produces the best results; Reiche et al. reached a similar conclusion [7]. The best results presented by Doblas et al. [6] combine outputs from different systems. Our work improves on the current state of the art with a multi-satellite change detection method that ensures consistency between optical and SAR time series: instead of using data from different systems, both SAR and optical time series are combined in a single multivariate Bayesian algorithm.
This approach has not yet been investigated for near-real-time change detection in tropical forests. We combine CBERS-4 and CBERS-4A WFI and Sentinel-1 satellite data using the Bayesian approach proposed by Reiche et al. [3]. The algorithm calculates the conditional probabilities of the Forest and Non-Forest classes for each time series, assuming a Gaussian distribution in both cases. The method re-estimates the change probabilities for each time series as new data come in. A threshold of two standard deviations determines whether a change in forest cover has happened. We combined a time series of Sentinel-1 VH polarisation with EVI data from CBERS-4 and CBERS-4A WFI. We tested the method in an area in the state of Rondônia using data cubes with eight-day intervals. We grouped the most relevant detected pixels into the main DETER deforestation classes: burn scars, degradation, clear-cut, and vegetation clearance. We compared our results with the DETER polygons from 2023. Our method detects newly deforested areas before DETER, with a minimum lead of 16 days for clear-cuts and a maximum of 90 days for burned areas and degraded forests. Regarding area, we detected about the same amount for clear-cuts, burn scars, and vegetation clearance. For forest degradation, our method identified about half the area mapped by DETER. These results are better than the best outcomes measured in the comparative study by Doblas et al. [6]. In addition to the four main classes discussed above, DETER detects selective logging. However, the combined optical-SAR data could not achieve an acceptable level of discrimination in this case. The main reason is that DETER relies on expert knowledge to identify fine textural variations in the optical images. In conclusion, our results compare favourably with those from DETER and show that multi-satellite automatic event detection is a viable alternative to visual interpretation.
The method reached satisfactory results for the four main DETER classes. To improve the detection of selective logging, we plan to include texture measures as proposed by Balling et al. [8]. Our work's results show the advantage of consistently combining optical and SAR data in the same detection algorithm. We expect to implement a fully operational solution to replace the current DETER and improve the Brazilian government's capacity to act against illegal deforestation. The results and code of this work are available on GitHub: https://github.com/OldLipe/Deforestation-Detection.
References
[1] Juliano Assunção, Clarissa Gandour, and Romero Rocha. “DETER-ing Deforestation in the Amazon: Environmental Monitoring and Law Enforcement”. In: American Economic Journal: Applied Economics 15.2 (2023), pp. 125–156. ISSN: 19457782.
[2] Yosio Shimabukuro et al. “The Brazilian Amazon Monitoring Program: PRODES and DETER Projects”. In: Global Forest Monitoring from Earth Observation. Boca Raton, FL: CRC Press, 2013, p. 354.
[3] Johannes Reiche et al. “Forest Disturbance Alerts for the Congo Basin Using Sentinel-1”. In: Environmental Research Letters 16.2 (2021), p. 024005. ISSN: 17489326.
[4] Dirk Hoekman et al. “Wide-Area Near-Real-Time Monitoring of Tropical Forest Degradation and Deforestation Using Sentinel-1”. In: Remote Sensing 12.19 (2020), p. 3263. ISSN: 2072-4292.
[5] Juan Doblas et al. “DETER-R: An Operational Near-Real Time Tropical Forest Disturbance Warning System Based on Sentinel-1 Time Series Analysis”. In: Remote Sensing 14.15 (2022), p. 3658. ISSN: 2072-4292.
[6] Juan Doblas et al. “Inter-Comparison of Optical and SAR-based Forest Disturbance Warning Systems in the Amazon Shows the Potential of Combined SAR-optical Monitoring”. In: International Journal of Remote Sensing 44.1 (2023), pp. 59–77. ISSN: 0143-1161.
[7] Johannes Reiche et al. “Integrating Satellite-Based Forest Disturbance Alerts Improves Detection Timeliness and Confidence”. In: Environmental Research Letters 19.5 (2024), p. 054011. ISSN: 1748-9326.
[8] Johannes Balling, Martin Herold, and Johannes Reiche. “How Textural Features Can Improve SAR-based Tropical Forest Disturbance Mapping”. In: International Journal of Applied Earth Observation and Geoinformation 124 (2023), p. 103492. ISSN: 1569-8432.
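As a rough illustration of the Bayesian updating at the heart of the method described in this abstract: each new observation yields a conditional non-forest probability from Gaussian class likelihoods, and the change probability is updated with Bayes' rule as data arrive. The Gaussian class parameters used below are invented for demonstration, not INPE's calibrated values:

```python
import math

def norm_pdf(x, mu, sigma):
    """Gaussian probability density function."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_nonforest(obs, mu_f, sd_f, mu_nf, sd_nf):
    """P(non-forest | observation) from Gaussian Forest / Non-Forest likelihoods."""
    lf = norm_pdf(obs, mu_f, sd_f)
    lnf = norm_pdf(obs, mu_nf, sd_nf)
    return lnf / (lf + lnf)

def update_posterior(prior, p_nf):
    """Bayes' rule: update the change (non-forest) probability with new evidence."""
    return prior * p_nf / (prior * p_nf + (1 - prior) * (1 - p_nf))
```

In the multivariate setting of the abstract, one such conditional probability would be computed per time series (e.g. Sentinel-1 VH and WFI EVI) and combined before the update; a pixel is flagged once the posterior stays above a confirmation threshold.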

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Copernicus Temporal Spectrum for ESA’s GTIF initiative: Multitemporal vegetation change dynamics across all Sentinel-2 observations

Authors: Dirk Tiede, Martin Sudmanns, Hannah Augustin, Thomas Strasser, Luke McQuade, Matthias Laher, Steffen Reichel, Markus Kerschbaumer, Andrea Baraldi
Affiliations: University of Salzburg, Department of Geoinformatics, Spatial Services GmbH
Introduction Big EO data, such as those provided by the European Copernicus programme, offer a great opportunity for highly frequent global monitoring of the environment. Challenges exist not only in processing big multitemporal data [1] but also in communicating results in a meaningful and useful manner, especially to non-Earth-Observation (EO) experts. The publicly funded, open, and free Copernicus satellite datasets are invaluable for monitoring the environment. The availability of continuous data supports informed decision-making on current global challenges such as climate change and biodiversity loss. Information derived from these data increases the visibility of green-transition measures, which are urgently needed for a sustainable society, as well as of their local impacts and mitigations. ESA’s Green Transition Information Factory (GTIF) initiative, especially the first GTIF demonstrator for Austria (GTIF-AT, https://gtif.esa.int), showcases domains and tools that support the green transition. Within the current GTIF-AT implementation, many important topics for supporting and monitoring the green transition are made available to different users in an easy-to-grasp manner. However, the information currently provided is mostly temporally static and focuses on very specific domains, datasets and analysis periods. Copernicus Temporal Spectrum for GTIF-AT GTIF-AT lacks a dynamic, integrated view of surface changes over time based on Copernicus Sentinel-2 image time series. We propose a productively usable change detection and change monitoring view that is applicable to different application areas and supplements other static GTIF-AT layers with temporal enrichment. Sentinel-2 provides new coverage of Austria at least every 5 days, offering a unique view of dynamic changes that can enrich existing GTIF-AT thematic domains or be integrated with them.
Our project [2] brings all Sentinel-2 observations since 2015 into a meaningful, interpretable form: a multitemporal view of vegetation changes in Austria. The robust temporal information product can be combined with most of the existing topics in GTIF-AT as a multitemporal change base layer. Such a change detection layer adds the temporal component to augment existing or future layers for user interpretation. It can integrate with GTIF-AT domains, enabling applications such as soil-sealing identification and renewable-energy construction monitoring. This generic multi-temporal approach to identifying surface/vegetation changes can be combined with any applied thematic field. Methods Our approach uses big EO data analyses in a semantic EO data cube [3],[4]. A semantic EO data cube uses semantic enrichment to count the pixel-based percentage of vegetation / non-vegetation observations over time, using all Sentinel-2 images in a user-defined analysis period (e.g. years or seasons). Unlike index-based approaches using only NDVI, no thresholds need to be defined, since the semantic classes (here: spectral categories) automatically reflect cloud-like, bare-soil-like, vegetation-like and water-like categories. Such a knowledge-based semantic enrichment approach requires no training samples and less energy than other machine learning/deep learning approaches. Furthermore, the transferability of the approach to all Sentinel-2 data worldwide has been demonstrated (operationalized within an ESA InCubed project; see also https://app.color33.io). We communicate results using a single-layer multi-temporal representation, where RGB colour coding represents selected time periods and changes. This colour-coded visualisation condenses terabytes of multi-temporal information into a single, comprehensible layer.
While this approach is backed by established geovisualisation techniques, we extend it to unveil temporal processes and dynamics hidden in big EO data. The resulting layer can be used in a very simple way: it serves as an interpretable basemap that is integrated in the GTIF-AT platform as a background layer, or as a user-defined layer for specific time periods. Conclusion The single-layer representation is an approach to communicating multi-temporal analyses to a variety of users, who will support the developments through feedback workshops and test use cases in a “user in the loop” approach that fosters the uptake of such a multitemporal generic change layer. The multi-temporal vegetation-change layer will be tested for integration in user-specific workflows and for combination with data and information from different domains. Our approach clearly indicates where and when changes happened and provides information on change intensities, adding temporal aspects to specific topics. This differs from the static base maps currently used in GIS-based decision support systems, where mono-temporal information usually serves as a background layer (e.g. static maps or aerial/satellite image mosaics with unclear acquisition dates). This cross-domain layer can serve multiple purposes at the same time. The application areas include but are not limited to monitoring green spaces, energy-related land use changes, forest change, environmental and soil protection, and nature conservation. We will present upscaled results for Austria at LPS 25, showcasing the integration of all Sentinel-2 observations available to date.
References
[1] Sudmanns, M., Tiede, D., Lang, S., Bergstedt, H., Trost, G., Augustin, H., Baraldi, A., Blaschke, T., 2020. Big Earth data: disruptive changes in Earth observation data management and analysis? International Journal of Digital Earth 13, 832–850.
https://doi.org/10.1080/17538947.2019.1585976
[2] FFG, ‘GTIME: GTIF-AT – Copernicus Temporal Spectrum: Multitemporal Vegetation Change Dynamics across all Sentinel-2 observations.’ Accessed: Nov. 24, 2024. [Online]. Available: https://projekte.ffg.at/projekt/5129575
[3] Augustin, H., Sudmanns, M., Tiede, D., Lang, S., Baraldi, A., 2019. Semantic Earth Observation Data Cubes. Data 4, 102. https://doi.org/10.3390/data4030102
[4] Sudmanns, M., Augustin, H., van der Meer, L., Baraldi, A., Tiede, D., 2021. The Austrian Semantic EO Data Cube Infrastructure. Remote Sensing 13, 4807. https://doi.org/10.3390/rs13234807
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Three-Dimensional High-Resolution Urban Growth Monitoring using SAR Data

Authors: Michael Recla, Michael Schmitt
Affiliations: University Of The Bundeswehr Munich
Synthetic Aperture Radar (SAR) imaging has historically lagged behind optical photogrammetry in the operational mapping of urban areas, primarily due to limitations such as phase mixture in layover regions in SAR interferometry and the high data requirements of advanced techniques like SAR tomography. Recent advancements, however, have transformed this landscape. The integration of radargrammetric processing with artificial neural networks has led to the development of SAR2Height [1], an innovative AI-based framework enabling 3D reconstruction of urban areas from single or very few SAR intensity images. This talk highlights how recent innovations in AI-driven SAR data processing have opened new possibilities for large-scale urban monitoring. We utilize an enhanced SAR2Height architecture that leverages self-supervised learning and incorporates critical acquisition parameters. This approach not only generates surface models but also performs building segmentation in a multi-task framework, enabling a comprehensive 3D understanding of urban environments. Using data from the TerraSAR-X archive, we demonstrate a scalable and efficient workflow for global urban monitoring. StripMap-mode imagery proves particularly effective, offering an optimal balance between spatial resolution and coverage. The methodology is applied to time-series data from diverse cities, including Kigali (Rwanda), New Delhi (India), Zhengzhou (China), Paris (France), Moscow (Russia), and Tokyo (Japan). Despite being trained primarily on Western-style urban datasets (due to the availability of height and building footprint labels), the model exhibits exceptional generalization across varying urban morphologies, capturing growth patterns and structural diversity with remarkable accuracy. Thanks to its active sensing principle, SAR emerges as a highly controllable and well-calibrated measurement source, making it particularly advantageous for data-driven deep learning approaches. 
In contrast, electro-optical satellite data, which are heavily influenced by weather conditions and solar angles, face clear limitations and typically require significantly larger training datasets to achieve comparable results. Such generalizability is particularly beneficial for regions in the Global South or countries with restrictive governments, where most kinds of geodata are usually inaccessible. Even in a global context, however, a consistent and comprehensive map of building footprints along with their corresponding heights remains unavailable, despite significant efforts by major players such as Google and Microsoft to address this gap. The results underline SAR2Height’s operational potential for monitoring urban growth and its applicability across various domains, including disaster response, infrastructure planning, and environmental studies. Moreover, its ability to utilize archived SAR data enables cost-effective and retrospective analyses, making it a powerful tool for understanding and managing urbanization on a global scale.
[1] Recla, M. & Schmitt, M., 2024. The SAR2Height framework for urban height map reconstruction from single SAR intensity images. ISPRS Journal of Photogrammetry and Remote Sensing, 211, pp.104-120.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: Advances in global land cover research: Hybrid change detection, time series postprocessing, and spatiotemporal deep learning

Authors: Dr. Dainius Masiliūnas, Dr Nandin-Erdene Tsendbazar, Dr Lili Xu, Liam Adam, Rob Burger, Lennart Murk, Sven-Arne Quist, August Slomp, Qin Xu, PhD Diego Marcos, Dr Laura Cue La Rosa, Martin Herold
Affiliations: Wageningen University & Research, GFZ, Central China Normal University, Inria
Land cover is an essential variable that needs to be monitored in order to make informed decisions about its management. Of particular interest for decision makers and scientists are not only abrupt transitions between land cover classes, but also gradual change, especially degradation and restoration processes. Global land cover change monitoring is particularly challenging due to the amount of required data and the differences between land cover in different parts of the world. The traditional approach to land cover change mapping is to train a Random Forest classifier on reference data obtained using visual interpretation of fine-resolution optical imagery, then use moderate resolution satellite imagery to predict land cover for every time step of interest. Differences between the time steps of the predicted land cover maps are often used as indicators of land cover change. However, these indicators are highly unreliable, as the errors in each land cover map compound when a difference is calculated, resulting in a significant overestimation of land cover change. It is common to attempt to reduce this overestimation by making use of unsupervised change detection algorithms. However, these algorithms are typically applied on satellite imagery, where a change may not actually indicate a change in terms of land cover class definitions, leading to poor change detection accuracy for these applications, typically with an F1 score around 0.3. Moreover, this approach cannot detect gradual change, as it expresses only shifts in the dominant land cover class in each pixel of the change map. In this work, we present an overview of recent research on techniques to improve global land cover change detection at the Laboratory of Geoinformation Science and Remote Sensing of Wageningen University. 
These techniques can be broadly classified into three categories: 1) hybrid unsupervised-supervised change detection models, 2) postprocessing of predicted land cover maps, and 3) improvements to the base land cover prediction models. Hybrid change detection models are a combination of unsupervised change detection algorithms and supervised models. In the hybrid framework, one or more unsupervised change detection algorithms are used as preliminary indicators of change. These indicators, sometimes together with satellite imagery, are then used as features to predict change using a supervised model. The supervised model then acts as a type of smart filter, retaining only changes with significant evidence that they correspond to land cover change of interest. In the work of Xu et al. (2022), the best tested single unsupervised change detection model (BFAST Lite) was outperformed by a hybrid BFAST-Random Forest model, achieving an improvement of the F1 score from 0.35 to 0.62. A follow-up study by Adam et al. (2024) confirmed that the hybrid approach outperforms a purely supervised Random Forest model, which achieved an F1 score of 0.45. Another study by Quist et al. (2023) showed that hybrid approaches are not always effective, as combining multiple different unsupervised change detection models using a Random Forest model failed to improve upon the performance of the individual unsupervised models. Therefore, hybrid approaches are promising for improving abrupt change detection accuracy; however, they cannot detect gradual land cover change. Detecting gradual land cover change requires time series of continuous land cover, such as probabilities or fractions. Like discrete land cover classes, fractions can be obtained by training regression models on a fraction dataset (Masiliūnas et al., 2021), or probabilities can be obtained from a discrete classification model (Brown et al., 2022).
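The hybrid "smart filter" idea can be sketched as follows, with synthetic stand-ins for the unsupervised change indicator (e.g. a BFAST Lite break magnitude) and the reference change labels; all values and thresholds here are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400

# Synthetic per-pixel features: an unsupervised change indicator plus a
# spectral difference; real inputs would come from BFAST Lite and imagery.
break_magnitude = rng.normal(0, 1, n)
spectral_delta = rng.normal(0, 1, n)
# Reference label: change of interest only where both lines of evidence agree.
y = ((break_magnitude > 1.0) & (spectral_delta > 0)).astype(int)
X = np.column_stack([break_magnitude, spectral_delta])

# The supervised model acts as a smart filter on the raw change indicators,
# keeping only candidates with enough combined evidence.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
flagged = clf.predict(X)
```

In practice the classifier is trained on reference change data and evaluated on held-out samples rather than the training pixels shown here.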
These predictions of continuous land cover typically result in a noisy time series, especially if the model does not take time into account. This noise can be reduced by postprocessing the time series. Burger et al. (2022) tested a Markov chain approach and found it to improve both yearly land cover maps (by 1.1 percentage points RMSE on average) and land cover change estimates (by 6 percentage points RMSE on average). A follow-up study by Masiliūnas et al. (2025a) has shown, similarly to the discrete class case in Adam et al. (2024), that pure Random Forest models are unable to generalise learned temporal patterns, with results rarely better than without any postprocessing applied, and worse than a simple linear smoother. A more complex approach was attempted in Masiliūnas et al. (2025b), where an unsupervised change detection algorithm was applied on time series of continuous land cover, and its fitted model was used as a de-noised land cover time series. This approach reduced the MAE of change by as much as half (2 percentage points), although at the cost of a slight increase in yearly map MAE. The use of unsupervised change detection algorithms as postprocessing techniques remains a promising research direction for detecting gradual changes in land cover with higher accuracy. However, an issue with postprocessing techniques is that they are limited to working on the output of base predictors, which are themselves not error-free. Therefore, postprocessing cannot solve base predictor misclassification errors, which affect the correctness of the entire time series of land cover. Improvements to the base predictor models would result in both improved yearly land cover maps and better detected land cover change. Even though Random Forest is the most popular model for predicting land cover, it does not intrinsically handle the time dimension, nor spatial context.
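As one illustration of this kind of postprocessing, the sketch below applies a simple forward-pass Markov smoother to a per-pixel probability time series (an assumed "sticky" transition prior; this is a generic sketch, not the exact method of Burger et al.):

```python
import numpy as np

def markov_smooth(probs, stay=0.9):
    """Forward-pass Markov smoothing of a per-pixel land cover
    probability time series (illustrative).

    probs: (T, K) yearly class probabilities from a base predictor.
    stay:  assumed prior probability that the class persists between years.
    """
    T, K = probs.shape
    M = np.full((K, K), (1 - stay) / (K - 1))   # sticky transition prior
    np.fill_diagonal(M, stay)
    out = np.empty_like(probs)
    belief = probs[0] / probs[0].sum()
    out[0] = belief
    for t in range(1, T):
        predicted = M.T @ belief                 # propagate last year's belief
        belief = predicted * probs[t]            # combine with this year's evidence
        belief /= belief.sum()
        out[t] = belief
    return out

# A single noisy year in an otherwise stable forest pixel gets damped:
noisy = np.array([[0.9, 0.1], [0.4, 0.6], [0.9, 0.1], [0.9, 0.1]])
smoothed = markov_smooth(noisy)
```

The second year's forest probability is pulled back above 0.5 by the surrounding stable years, which is the noise-damping behaviour the postprocessing step is after.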
These limitations are often worked around by extracting temporal features from time series and spatial features from the neighbourhood of the pixel to be classified. Using a deep learning model that handles the space and time dimensions by design has the potential to further improve the accuracy of land cover and its change. Slomp et al. (2024) investigated the use of LSTM models for both land cover fraction mapping and as a postprocessing method. The results showed that while an LSTM yields lower RMSE (by 1.3 percentage points) for land cover fraction maps compared to Random Forest models, its performance as a postprocessing technique was no better than a Markov chain. In addition, deep learning models have stricter requirements for data regularity, which means that the LSTM could only learn from time series and not single observations of land cover. When this additional training data is used, Random Forest models outperform LSTMs. Research is currently underway to examine the performance of 2D Convolutional Neural Networks (CNNs), 3D CNNs (Murk et al. 2025) and Vision Transformer models (Xu et al. 2025) on land cover mapping and change detection tasks. In summary, hybrid unsupervised-supervised change detection methods can help to greatly increase the accuracy of detected change for discrete land cover classification cases. For improving change detection in time series of continuous land cover, which is important for gradual change detection, predictions of supervised models can be postprocessed using Markov chain or unsupervised change detection algorithms. There is great potential to improve both land cover and its change predictions by using spatiotemporal deep learning models; however, more research is needed to determine which models and architectures can outperform Random Forest models in this task.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N-O)

Poster: The impact of map accuracy on area estimation with remote sensing data within the design-based inference framework

Authors: Sergii Skakun
Affiliations: University of Maryland
One of the core applications of satellite-derived classification maps is area estimation. Regardless of the algorithms used, maps will always contain errors stemming from imperfect input and training/calibration data, incomplete data coverage, and spectral and/or temporal confusion between classes. Because of omission and commission errors, the direct use of maps for calculating areas, the pixel counting estimator, will lead to biases in area estimation. To address this problem, the remote sensing research and application communities have developed a framework to derive unbiased area estimates. One recommended approach is a sampling-based approach that incorporates probability sampling of reference data and a design-based inference framework within which estimation of the target variables, such as accuracy and area, is performed (Olofsson et al., 2014). Satellite-derived land cover land use (LCLU) maps are used for stratification in the sampling design to improve the precision of the estimator (e.g., compared to a simple random sampling estimator). In this case, the quality of the map, i.e. how accurate it is, will not impact the bias of the estimator because the bias depends on the sampling design and the choice of estimator. The quality of the map, however, will impact the efficiency of stratification and, consequently, the precision of the estimator. Let A and B be the two maps that are used for stratification within the sampling design to estimate the LCLU area y. If the costs associated with generating maps A and B are not considered, then the efficiency of stratification can be defined as follows (Gallego, 2007): η(A,B)=(var(yB)×nB) / (var(yA)×nA), where η(A,B) is the relative efficiency of maps B and A, nB and nA are the sample size used for area estimation from maps B and A, respectively, and var(yB) and var(yA) are estimated variances of the area estimates from maps B and A, respectively. 
If the sample size is estimated with the objective of minimizing the target variance, such that var(yB)=var(yA), then η(A,B)=nB/nA; if the sample size is fixed, i.e. nB=nA=n, then η(A,B)=var(yB)/var(yA). In both scenarios, the hypothesis is that a more accurate map A will lead to a higher precision (var(yA)<var(yB)), a smaller sample size (nA<nB) to reach the target precision, or both. Consequently, map A will be considered more efficient than map B for estimating the LCLU area y, as it will require fewer resources (e.g., reduced costs associated with the collection and/or annotation of the reference sample units). This study aims to provide a quantitative assessment of the impact of map accuracies on area estimation within the design-based inference framework. The relative bias of the pixel counting estimator is expressed using class-specific producer's accuracy (PA) and user's accuracy (UA), and is shown to be PA/UA - 1. Therefore, estimating this bias, i.e., how much pixel counting under- or over-estimates the area, requires precise estimates of PA and UA. If estimates of PA and UA are biased, for example by using an opportunistic non-probability sample that does not necessarily represent the population, area estimates will deviate from the true values. Therefore, rigorous map accuracy assessment using good practices (Olofsson et al., 2014; Stehman & Foody, 2019; Strahler et al., 2006) is critical for estimating the map performance metrics (PA and UA), as it provides unbiased estimates of the elements of the confusion matrix and, subsequently, of the areas of the target classes. The practice of estimating PA and UA from a non-probability sample with the aim of correcting areas derived by the pixel counting estimator should be discouraged, because it might lead to biased results. The relative efficiency of stratification for area estimation is also expressed using these class-specific accuracies.
Furthermore, for the case of binary classification, the elements of the confusion matrix, as well as the sample size and the variance of the area estimator, are expressed using PA and UA. When two maps are compared, the relative efficiency can be used as a criterion to quantify the performance of the maps for area estimation. Essentially, the relative efficiency depends on the class-specific PA and UA and the true area proportion (f), and therefore map producers should report these within the confusion matrix as suggested by good practices (Olofsson et al., 2014; Stehman & Foody, 2019; Strahler et al., 2006). Numerical simulations will demonstrate how relative efficiency depends on area estimation objectives, target area proportion, and the map's performance metrics (PA and UA). The impact is non-linear, and the contributions of PA, UA, and f are not equal. For example, when the target class is minor (f≪0.5), PA has a larger impact than UA: improvements in PA will lead to larger improvements in efficiency. The reason is that the major class, with its larger weight, will have a larger impact on the efficiency through its UA, which depends on the minor class PA. That is why in situations like this, map producers should prioritize PA over UA to improve efficiency. Furthermore, a special buffer stratum can be constructed (Olofsson et al., 2020) under the assumption that omission errors occur in the vicinity of a minor class (e.g., deforestation). The impact of such an approach is two-fold: PA can potentially be increased, and the weight of the stratum corresponding to the major class is decreased. If f>0.5, the impact of PA and UA is reversed, and priority should be given to UA. In general, there are multiple solutions in the PA/UA space, though constrained, to reach the objectives, e.g., in terms of relative efficiency, sample size, and target variance.
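A small worked example of the two quantities above, under assumed accuracies and variances (all numbers hypothetical):

```python
def pixel_counting_relative_bias(pa, ua):
    """Relative bias of the pixel counting estimator: PA/UA - 1."""
    return pa / ua - 1

def relative_efficiency(var_b, n_b, var_a, n_a):
    """eta(A,B) = (var(yB) * nB) / (var(yA) * nA); values above 1 mean
    map A is the more efficient stratification."""
    return (var_b * n_b) / (var_a * n_a)

# A map with PA > UA over-estimates the class area via pixel counting:
bias = pixel_counting_relative_bias(pa=0.9, ua=0.75)   # +20% relative bias
# With a fixed sample size (nA == nB), eta reduces to a variance ratio:
eta = relative_efficiency(var_b=4.0e-5, n_b=500, var_a=2.5e-5, n_a=500)
```

Here map A would be 1.6 times as efficient as map B: to match map A's precision, stratifying on map B would need 1.6 times the sample.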
Overall, this study offers map producers a criterion that can be used to benchmark algorithm performance for map generation when area estimation is the primary objective of the classification maps.

References:
Gallego, F. J. (2007, November). Review of the main remote sensing methods for crop area estimates. In Workshop Proceedings: Remote Sensing Support to Crop Yield Forecast and Area Estimates (pp. 65-70).
Olofsson, P., Foody, G. M., Herold, M., Stehman, S. V., Woodcock, C. E., & Wulder, M. A. (2014). Good practices for estimating area and assessing accuracy of land change. Remote Sensing of Environment, 148, 42-57.
Olofsson, P., Arévalo, P., Espejo, A. B., Green, C., Lindquist, E., McRoberts, R. E., & Sanz, M. J. (2020). Mitigating the effects of omission errors on area and area change estimates. Remote Sensing of Environment, 236, 111492.
Stehman, S. V., & Foody, G. M. (2019). Key issues in rigorous accuracy assessment of land cover products. Remote Sensing of Environment, 231, 111199.
Strahler, A. H., Boschetti, L., Foody, G. M., Friedl, M. A., Hansen, M. C., Herold, M., ... & Woodcock, C. E. (2006). Global land cover validation: Recommendations for evaluation and accuracy assessment of global land cover maps. European Communities, Luxembourg, 51(4), 1-60.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: B.04.06 - POSTER - Fire Detection and Monitoring from Earth Observation Data for Rapid Disaster Response

Fueled by a changing climate, wildfires are becoming more frequent and destructive every year. Remote sensing data from a variety of sources can help to detect and monitor fire outbreaks. This session aims to showcase the capabilities of remote sensing (public satellites, cubesat constellations, UAVs, observation planes) in combination with recent advances in AI and data processing. The session welcomes contributions including, but not limited to:
- Novel algorithms for fire detection
- Fire spread modeling
- Burned area mapping
- New data sources such as upcoming missions
- Benchmark or training datasets
- Multi-modal data for fire monitoring
- On-orbit processing
This session brings together research, policy and industry in fire preparedness and response informed by remote sensing data. It provides a platform for a fruitful exchange about the current state of technology, best practices as well as upcoming opportunities for novel techniques in this area.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Active Fire Detection in Geostationary Satellite Images

Authors: Ines Kamoun, Charles Hessel, Thibaud Ehret, Carlo de Franchis
Affiliations: Kayrros SAS, Université Paris-Saclay, ENS Paris-Saclay, CNRS, Centre Borelli, Ecole Polytechnique Fédérale de Lausanne
Wildfires are becoming more severe, larger in scale, and start earlier than the typical fire season. Early detection and continuous monitoring are more important than ever. In this context, remote sensing plays an essential role in providing large-scale data with low latency, allowing for the fast detection and monitoring of wildfires. To complement existing solutions using low-Earth orbit satellites such as MODIS, VIIRS, and Sentinel-3, we propose a method for active fire detection using the latest European geostationary satellite, Meteosat Third Generation (MTG). Eumetsat's Meteosat Third Generation satellite, about to become operational at the time of writing, is placed at longitude zero and acquires images every 10 minutes over the full disk at a resolution of 1 km/pixel at nadir. Although the spatial resolution is coarser, the acquisition rate (or temporal resolution) is significantly higher than the typical 12 hours of low-Earth orbiting satellites. This also represents a great improvement over the 3 km/pixel resolution of the Meteosat Second Generation (MSG) satellite. In addition, images are available to users with a very low latency: within a few minutes of acquisition. We propose an algorithm to detect active fires using the dense time series provided by geostationary satellites such as MTG. Indeed, exploiting the temporal aspect has the potential to greatly aid the detection of small fires, thus increasing the number of fires detected compared to most methods that work on a per-image basis. For example, the study by [Roberts and Wooster 2014] showed that exploiting the temporal aspect allowed an increase in the number of detected fires by up to 80% at the peak of the fire diurnal cycle, compared to the method currently implemented for the MSG satellite and distributed by Eumetsat. The work of [Filizzola et al.
2016] also showed that time series of images from the MSG satellite can be used to detect small fires, by making the most of past images acquired under the same conditions. The approach we propose builds on the detection of anomalous events in time series, using background modeling. This method is not restricted to MTG and can be applied to other geostationary satellites, such as MSG, GOES-R, and Himawari.

References:
[Filizzola et al. 2016]: Filizzola, Carolina, et al. "RST-FIRES, an exportable algorithm for early-fire detection and monitoring: Description, implementation, and field validation in the case of the MSG-SEVIRI sensor." Remote Sensing of Environment 186 (2016): 196-216.
[Roberts and Wooster 2014]: Roberts, G. and Wooster, M. (2014). Development of a multi-temporal Kalman filter approach to geostationary active fire detection & fire radiative power (FRP) estimation. Remote Sensing of Environment, 152:392–412.
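The background-modeling idea can be sketched for a single pixel's time series (a robust z-score sketch with an assumed window length and threshold, not the authors' exact model):

```python
import numpy as np

def fire_anomaly(series, window=48, k=4.0):
    """Flag anomalies against a per-pixel background model.

    series: 1D brightness temperatures for one pixel over time
            (48 samples = 8 hours at MTG's 10-minute cadence).
    window: number of past observations forming the background.
    k:      detection threshold in robust standard deviations.
    """
    series = np.asarray(series, dtype=float)
    flags = np.zeros(series.shape, dtype=bool)
    for t in range(window, len(series)):
        bg = series[t - window:t]
        med = np.median(bg)
        mad = np.median(np.abs(bg - med)) or 1e-6   # robust spread, avoid /0
        flags[t] = (series[t] - med) / (1.4826 * mad) > k
    return flags

# Stable background around 290 K with a sudden fire-like jump:
rng = np.random.default_rng(1)
ts = 290 + rng.normal(0, 0.5, 60)
ts[55] += 15
flags = fire_anomaly(ts, window=48)
```

A real detector would also model the diurnal cycle and use images acquired under comparable conditions, as in the RST-FIRES approach cited above.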
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Machine Learning for Wildfire Detection from Space Using Forest-2 Satellite's Medium Wave Infrared Sensor

Authors: Matthias Rötzer, PhD Lukas Liesenhoff, Max Bereczky, Martin Ickerott, Julia
Affiliations: OroraTech, Technical University Denmark
Aim

The increasing intensity of wildfires requires advanced detection systems to mitigate their devastating impacts. This research focuses on the development and implementation of a new machine learning based wildfire detection system using the Forest-2 satellite's medium wave infrared (MWIR) sensor. This study explores the application of state-of-the-art deep learning models, particularly semantic segmentation approaches, for identifying wildfire hotspots in the noisy, highly imbalanced datasets generated by Forest-2's MWIR sensor. Our goal is to provide a wildfire detection algorithm capable of operating in a resource-constrained environment that is robust, adaptable to diverse conditions, and explainable, making it a reliable tool for disaster response and environmental monitoring.

Methods

The methodology employed in this research spans a spectrum of machine learning techniques, from simple to complex, to address the challenges posed by noisy data and rare event detection. Initial methods included basic thresholding techniques, which were then expanded to more sophisticated models such as Random Forest and eXtreme Gradient Boosting (XGBoost), and later deep learning. Given the small size of the objects of interest (wildfire hotspots) relative to the satellite's large swath, the chosen deep learning architectures were required to support dense prediction capabilities. These dense predictions are essential to capture fine-grained details and resolve small-scale anomalies that might otherwise be overlooked in broader-scale segmentation outputs.

Results

The SegFormer encoder achieved the highest F1 score for the positive class (0.775 ± 0.01), surpassing EfficientNet (0.762 ± 0.01), MobileNet (0.704 ± 0.01), and ResNet50 (0.646 ± 0.01). Additionally, SegFormer achieved the highest IoU score (0.65), indicating its superior ability to accurately delineate fire shapes, a critical requirement for effective wildfire monitoring.
In comparison, the threshold-based detection system achieved a precision of 0.85 but suffered from significantly lower recall (0.10) and IoU (0.10). Tree-based methods, such as Random Forest and XGBoost, performed better than the threshold-based approach, with F1 scores of 0.16 and 0.17, respectively, but still fell far short of the performance of deep learning models. One important factor is that deep learning models are not inherently pixel-based, which can limit their ability to detect very small features, such as small-scale or low-intensity fires. This issue is noticeable in shallower architectures like ResNet50, which are less robust at capturing small-scale anomalies compared to more advanced models like SegFormer. Despite these limitations, the ability of deep learning models to generalize and detect features more reliably makes them a significant improvement over traditional threshold-based classifiers as well as pixel-based classifiers such as Random Forest.

Conclusion

The study successfully demonstrates the feasibility and effectiveness of ML-based approaches in improving detection accuracy and response times compared to traditional methods. The integration of advanced deep learning architectures has shown substantial performance gains, paving the way for more reliable and efficient wildfire monitoring systems. By combining state-of-the-art deep learning architectures with tailored preprocessing, we have established a pipeline capable of identifying small-scale fire anomalies with promising accuracy. Preliminary experiments highlight the benefits of self-supervised pretraining for noisy, homogeneous data, offering a foundation for further optimization. Finally, the efficiency and performance of smaller models show ample potential for on-board fire detection, paving the way to significantly reduced latency with highly performant algorithms.
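For reference, the two headline metrics above can be computed from binary fire masks as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def f1_and_iou(pred, truth):
    """F1 score and IoU of the positive (fire) class for binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()    # correctly flagged fire pixels
    fp = np.logical_and(pred, ~truth).sum()   # false alarms
    fn = np.logical_and(~pred, truth).sum()   # missed fire pixels
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return f1, iou

# Toy 2x3 masks: tp=2, fp=1, fn=1
truth = np.array([[0, 1, 1], [0, 1, 0]])
pred  = np.array([[0, 1, 0], [0, 1, 1]])
f1, iou = f1_and_iou(pred, truth)
```

Note that F1 is always at least as large as IoU for the same masks, which is consistent with the reported pairs (e.g. 0.775 F1 vs 0.65 IoU for SegFormer).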
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Development and Optimization of a Machine Learning Fire Risk Prediction Model Using Multimodal Geospatial and Climate Data in Cyprus

Authors: Maria Prodromou, Mrs Stella Girtsou, Mr Panagiotis Vasiliou, Mr George Leventis, Mr Alexis Apostolakis, Dr Marios Tzouvaras, Dr Christodoulos Mettas, Dr Giorgos Giannopoulos, Dr Haris Kontoes, Prof. Diofantos Hadjimitsis
Affiliations: ERATOSTHENES Centre of Excellence, Cyprus University of Technology, National Observatory of Athens, National Technical University of Athens, Athena Research Center
Wildfire detection is a major issue for authorities. There are various causes of fire events, the most common being human influence. A fire risk prediction model based on the analysis of geo-environmental and climate data is important for early warning and fire management. This work used a multimodal dataset to predict fire risk on the island of Cyprus. The island features a predominantly mountainous terrain and a distinctly Mediterranean climate, characterized by dry and hot summers that typically extend from May to October. During the fire season, temperatures range from 30°C to 40°C, increasing the risk of ignition. The input factors were selected based on their potential correlation with the unique characteristics of the area investigated, the historical fire events, and the availability of relevant data. The dataset includes anthropogenic, environmental, meteorological, topographic, and fire-related features acquired from 2001 until 2023. Specifically, the anthropogenic features include road density and picnic and camping sites, while the topographical features include slope, elevation, and aspect. Regarding the environmental factors, the land cover, vegetation indices, and the forest-agriculture interface were considered. Additionally, the meteorological data retrieved concern temperature, wind speed and direction, and precipitation. For this study, the Random Forest classifier was implemented to predict the fire risk. Before applying the Random Forest algorithm, several pre-processing steps were applied to the dataset. Specifically, the dataset was cleaned to handle the missing values (NaN). Also, since some features had different scales, which can affect the performance of algorithms, all the features were normalized to a common scale. Additionally, the RandomizedSearchCV approach was used for hyperparameter tuning, and GroupKFold cross-validation was applied to ensure the robustness of the model evaluation.
For the accuracy assessment of the model, various metrics, including precision, recall, F1-score, and AUC, were employed during model tuning to identify the hyperparameter combination that yields the highest scores. Given the significant class imbalance, with the no-fire class being substantially larger than the fire class, we mainly focused on recall values during model selection. We evaluated our top-performing models using two months of data representative of real-world fire and non-fire distributions. The models achieved balanced recall rates exceeding 70% for both classes. These results are promising for such an endeavor. Fire predictions delivered through continuous monitoring may serve as a commercial service. Future steps include a comparative analysis of different models (e.g. Decision Trees, Support Vector Machines, and Artificial Neural Networks) for reliable forest fire prediction. The authors acknowledge the 'EXCELSIOR': ERATOSTHENES: Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project (www.excelsior2020.eu). The 'EXCELSIOR' project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 857510, from the Government of the Republic of Cyprus through the Directorate General for the European Programmes, Coordination and Development, and from the Cyprus University of Technology.
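The tuning setup described above (RandomizedSearchCV with GroupKFold and a recall-oriented score) can be sketched on synthetic data; the feature table, group construction, and parameter grid are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, RandomizedSearchCV

rng = np.random.default_rng(0)

# Hypothetical feature table: rows are pixel-days; groups could be, e.g.,
# fire events or years, so one group never spans both train and test folds.
X = rng.normal(size=(300, 5))        # e.g. temperature, wind, slope, NDVI, ...
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)   # synthetic fire label
groups = rng.integers(0, 10, size=300)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": [50, 100], "max_depth": [5, 10, None]},
    n_iter=4,
    scoring="recall",                # recall prioritised, as in the study
    cv=GroupKFold(n_splits=3),
    random_state=0,
)
search.fit(X, y, groups=groups)      # groups are forwarded to GroupKFold
```

GroupKFold prevents leakage between correlated observations of the same event, which is why the cross-validated recall is a more honest estimate than a plain random split.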
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Remote Sensing and Machine Learning for Burned Area Anomaly Detection in Madagascar: Applications of Isolation Forest and SHAP Analysis

Authors: Shrijana Poudel, Dr Robert J. Parker, Prof Heiko Balzter, Tristan Quaife, Dr Douglas Kelley
Affiliations: National Centre for Earth Observation - University of Leicester, National Centre for Earth Observation - University of Reading, UK Centre for Ecology & Hydrology
Fires significantly impact global ecosystems by disrupting vegetation and the carbon cycle. Recent advancements in remote sensing techniques have enhanced fire detection capabilities. Applying machine learning to fire prediction, along with a better understanding of feature contributions, can further optimise the use of remote sensing data, assisting in fire management and monitoring. In this study, we aim to identify Burned Area (BA) anomalies in Madagascar using the Isolation Forest (IF) algorithm and ESA FireCCI (2001-2020) satellite observations. Madagascar has unique biodiversity, complex topography and a variable climate. However, its ecosystems face severe threats from fires, deforestation and human activities, making it crucial to understand the impact of fire disturbances and subsequent vegetation recovery. IF isolates anomalies more efficiently than distance- or density-based methods by constructing trees that require fewer partitions for anomalies than for normal instances. Explainable AI (SHAP) was also used to estimate the contribution of each feature to IF’s prediction, allowing us to incorporate information on the time and location of fires. By calculating the total BA over the 2001-2020 period, we found the highest fire activity, with a cumulative BA of over 5×10⁹ m², in central-eastern and western Madagascar. This fire distribution is primarily attributed to Madagascar’s tropical maritime climate and anthropogenic activities. The results from our anomaly detection using SHAP analysis found that a high number of these BA anomalies were primarily linked to large BA values, but temporal and geographical factors also influenced whether a point was an anomalous disturbance. 
IF detected a high number (>20) of anomalies in northern Madagascar, where BA values were relatively low, indicating deviations from seasonal fire patterns and potentially weak but significant disturbances. In the Mahajanga and Toliara regions of Madagascar, BA anomalies were driven by the magnitude of the BA itself, with SHAP values of >-4 and >-2.5 indicating strong anomalies. In contrast, geographic location was the most significant factor in the Antsiranana region (SHAP value >-2.5), indicating atypical fire events in this area. Future research will incorporate additional variables to further understand underlying patterns in BA anomalies in these regions. Ultimately, we will assess the vegetation response in areas with high anomalies and determine how the driver of the disturbance affects that response. This study will additionally be extended to other tropical regions to assess fire impacts across different ecosystems. By utilizing tools like satellite observation and the IF algorithm, policymakers can anticipate the long-term effects of climate change and human activity on tropical forests, guiding sustainable land use, conservation, and climate adaptation efforts in vulnerable regions.
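The core idea behind IF, that anomalies need fewer random partitions to isolate than normal points, can be illustrated with a toy single-feature sketch. This is stdlib-only illustration, not the study's implementation (which uses the full multi-feature Isolation Forest), and the BA values below are invented.

```python
import random

def path_length(data, x, rng, depth=0, max_depth=10):
    """Recursively isolate x with random split points; points far from
    the bulk of the data need fewer splits on average."""
    if len(data) <= 1 or depth >= max_depth:
        return depth
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)
    # Keep only the values on the same side of the split as x.
    side = [v for v in data if (v < split) == (x < split)]
    return path_length(side, x, rng, depth + 1, max_depth)

def anomaly_depth(data, x, n_trees=200, seed=42):
    """Average isolation depth over many random trees; lower = more anomalous."""
    rng = random.Random(seed)
    return sum(path_length(data, x, rng) for _ in range(n_trees)) / n_trees

# Hypothetical monthly BA values (arbitrary units); 50.0 is the anomaly.
data = [10, 11, 12, 10.5, 11.5, 12.5, 11.2, 10.8, 50.0]
```

The outlier isolates in roughly one split on average, while inliers require several, which is exactly the signal IF exploits.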
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: The Canadian Fire Spread Dataset: Detecting Fire Spread and Overwintering Fires in Peatland Ecosystems

Authors: Quinn Barber, Dr. Piyush Jain, Dr. Matthew G. Hethcoat, Nicholas Pontone, Professor Sophie Wilkinson, Koreen Millard
Affiliations: Carleton University, Canadian Forest Service, Simon Fraser University
Canada experienced a record-breaking wildfire season in 2023, with a total area burned of almost 15 million hectares, over seven times the historical annual average and far exceeding the previous record area burned. Early-season drought, exacerbated by below-average snowpack, set the stage for the record-breaking wildfire season. In many places drought persisted throughout the year and may have contributed to a high frequency of deep-burning peatland fires, which can sustain combustion at low temperatures, smouldering through the winter before flaring up in the spring. These ‘overwinter holdover fires’ contributed to an early start to the subsequent 2024 wildfire season, which saw the second-most area burned since 1995, causing many community evacuations and widespread air quality impacts. This anomalous early season fire activity may represent a disruption of the traditional fire regime of the Canadian boreal forest, normally governed by an annual cycle of snowfall. Peatland wildfires are often characterized by low fire intensity, burning at temperatures as low as 300°C, and have historically been difficult to monitor with satellite-based remote sensing. However, there have been significant recent improvements in sensor resolution and sensitivity, and widespread availability of high-resolution, high-sensitivity satellite thermal observations provides an opportunity to quantify the recent elevated wildfire activity in Canadian peatlands. Here we present an updated Canadian Fire Spread Dataset (CFSDS), a new fire progression dataset of Canadian wildfires over 500 ha from 2002 to 2024 (Sci. Data; DOI:10.1038/s41597-024-03436-4). This event-based daily fire progression dataset is built on high-precision wildfire boundaries and active fire detections from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS). 
Thermal anomalies are interpolated to daily wildfire progression maps, which are used to calculate daily fire spread and to assign environmental covariates including daily weather, derived weather metrics (e.g. drought), forest fuels characteristics, topography, and anthropogenic influences. Fire footprints are identified using a suite of satellite imagery such as the Landsat program satellites and Sentinel-2. This publicly available, research-oriented dataset encompasses over 6,000 fires over 500 ha, and is valuable for improving knowledge of large-scale wildfire spread or for modelling wildfire in specific settings such as peatlands or high-frequency reburns. Analysis of the updated CFSDS shows that compared to a normal year, wildfires in 2024 were approximately twice as likely to occur in peatlands and three times as likely to occur in poor fens. Peatland wildfires also began much earlier than normal in 2024, with an unprecedented number occurring before May 1. We also present a comparison of CFSDS fire spread rates to climatic drought and remotely-sensed terrain wetness from Sentinel-2 and Landsat multispectral imagery. This reveals a high degree of spatial heterogeneity in the critical drought thresholds at which fire spread becomes more or less likely. Although peatlands have historically been thought to be fire resistant, this evidence suggests that fire resistance is highly dependent on local conditions and water table depth, supporting previous research that has shown terrain wetness indices are correlated with fire occurrence probability. This raises concerns that climate change-driven drought may herald earlier fire activity and peatland carbon loss, as was observed in 2024.
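The step from daily progression maps to daily fire spread can be sketched as a simple difference of cumulative footprints. This is an illustrative stdlib sketch only; the dates and burned-area totals are invented and the CFSDS workflow itself interpolates actual MODIS/VIIRS detections.

```python
from datetime import date

# Hypothetical cumulative burned-area totals (ha) for one fire event,
# one value per daily progression map.
progression = {
    date(2024, 5, 10): 600.0,
    date(2024, 5, 11): 1450.0,
    date(2024, 5, 12): 2100.0,
}

def daily_spread(progression):
    """Daily fire spread = growth of the cumulative footprint between
    consecutive progression maps (ha/day)."""
    days = sorted(progression)
    return {d2: progression[d2] - progression[d1]
            for d1, d2 in zip(days, days[1:])}

spread = daily_spread(progression)
```

Environmental covariates (weather, fuels, topography) would then be joined to each day's newly burned area.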
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: A Low-Latency Solution for Burnt Area Delineation Through Space Computing: A Kayrros and Loft Orbital Collaboration

Authors: Charles Hessel, Luc Mercereau, Jean Loubet, Aurélien de Truchis, Carlo de Franchis, Simon Lamy, Julien Camus, Elsa Carcaillon, Thomas Leonard
Affiliations: Kayrros SAS, Loft Orbital Technologies
The increasing frequency and severity of wildfires have created an urgent need for advanced systems capable of delivering timely and accurate data to support crisis management. Kayrros and Loft Orbital have developed an innovative approach combining inter-satellite links (ISL) and onboard space computing to significantly reduce the latency of detecting and mapping burnt areas. The system leverages Loft Orbital’s space infrastructure combining sensing and computing capabilities, coupled with Kayrros’ expertise in satellite-based burnt area detection algorithms. This solution was first evaluated in a controlled sandbox environment before being demonstrated in space, eventually supporting French firefighting teams on the ground with real-time wildfire footprint updates. The methodology is implemented using Loft’s YAM-6 satellite. By downlinking processed contours of burnt areas directly via ISL, the system ensures rapid delivery of actionable data, bypassing conventional delays in data transmission via ground stations and data processing on the ground. Kayrros’ capability to integrate public and commercial low-Earth orbit (LEO) and geostationary orbit (GEO) satellite data further extends its applicability, allowing for large-scale monitoring of both private and public forests. This study demonstrates the potential of the combined technologies to address operational challenges in wildfire management, such as the delays associated with traditional data acquisition and processing pipelines. The near-real-time delivery of burnt area maps facilitates quicker decision-making and resource allocation during active wildfires, while also improving post-event analyses for assessing fire impact. The scalability of the proposed system makes it particularly suited for diverse geographies and vegetation types. Its validation with field data from French firefighters underscores its reliability and operational readiness at a global scale. 
By integrating advanced computational techniques with a modular satellite platform, this work provides a scientific foundation for further research and development in satellite-based wildfire monitoring systems. The results suggest a significant step forward in leveraging space-based technologies to support global wildfire crisis management and mitigation efforts.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Firefighting with data pipelines: an innovative algorithm for fire detection from space

Authors: Henrique Moura, Dr. Phil Reiter
Affiliations: imec IDLab/UAntwerp
Wildfires harm ecosystems, accelerate climate change, and cause significant economic losses. Early detection of wildfires can mitigate these environmental, financial, and health impacts. Satellites play a crucial role in quick detection, providing broader, continuous monitoring compared to traditional methods like ground-based sensors and aerial patrols, which are labor-intensive and limited in range. Satellites like NASA’s MODIS and ESA’s Sentinel-2 enable early detection by monitoring large areas, operating continuously day and night. Deep learning is nowadays particularly effective in image segmentation tasks, and can thus be used to identify thermal anomalies in large-scale satellite images. We argue that satellite-based deep learning detection models are important tools for wildfire detection, especially if they can run on a satellite-embedded computer: this can significantly reduce latency, since less data need to be transmitted to the ground station. However, the success of these models is challenged by the imbalanced nature of satellite data, where fire spots cover a small fraction of an image relative to the background. We propose a workflow pipeline for wildfire detection that can process a whole Sentinel-2 L2A image at 20 m resolution. Our best model obtains an F1 score of 0.9459 when detecting wildfire, on a dataset with less than 0.5% positive cases. Results show that using a pipeline improves the model’s runtime and performance. Specialized (and fast) stages skip parts of the image that cannot catch fire. In particular, an all-water detector is deployed to identify patches that contain only water. This model obtains high accuracy (0.9930) and high recall (0.9976). We also evaluated the importance of the bands used in the satellite image for wildfire detection, which shows that infrared bands are important since they have greater thermal sensitivity and can propagate through smoke. 
This combination of techniques holds promise for advancing wildfire monitoring, enabling authorities to act swiftly in reducing wildfire impacts and protecting at-risk regions and populations.
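The staged-pipeline idea, where a cheap filter skips patches that cannot burn before the expensive segmentation model runs, can be sketched as follows. This is an illustrative stdlib sketch only; the string-labelled patches and the stand-in detector are hypothetical, whereas the actual system runs learned models on Sentinel-2 bands.

```python
def is_all_water(patch):
    """Cheap first-stage filter: a patch whose pixels are all water
    cannot contain fire, so the expensive model is skipped."""
    return all(px == "water" for row in patch for px in row)

def detect_fire(patch):
    """Stand-in for the expensive segmentation model."""
    return any(px == "fire" for row in patch for px in row)

def pipeline(patches):
    """Run the cheap filter first; invoke the detector only when needed."""
    results, skipped = [], 0
    for patch in patches:
        if is_all_water(patch):
            skipped += 1
            results.append(False)
        else:
            results.append(detect_fire(patch))
    return results, skipped

patches = [
    [["water", "water"], ["water", "water"]],  # skipped by the filter
    [["land", "fire"], ["land", "land"]],      # detected
    [["land", "land"], ["land", "land"]],      # no fire
]
results, skipped = pipeline(patches)
```

On a satellite-embedded computer, every skipped patch is compute and downlink saved, which is where the runtime gain reported above comes from.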
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: CopernicusLAC: Comprehensive Wildfire Management Through Earth Observation in Latin America and the Caribbean

Authors: Rubén Ramo, PhD Mauro Arcorace, Mr. Adrián Vicioso, Fabrizio Pacini, Mr. Bryan Cantal, PhD Caterina Peris, PhD Alberto Lorenzo
Affiliations: Indra, Terradue
Wildfires profoundly impact global ecosystems and human societies, influencing atmospheric composition, vegetation dynamics, soil erosion, water runoff, and societal values such as resources and assets. Annually, over 5 million square kilometers (an area roughly equivalent to half the size of Europe) are affected by wildfires. Monitoring burned areas is essential to understanding their environmental and societal effects, enabling effective wildfire management through three key stages: prevention, response, and post-fire recovery. Prevention focuses on identifying and mitigating fire risks through data-driven assessments that incorporate environmental and meteorological variables. Response involves the real-time detection and monitoring of fires, utilizing Earth Observation (EO) systems to map burned areas, pinpoint fire locations, and track their spread. Finally, post-fire recovery emphasizes assessing fire severity, monitoring ecosystem regeneration, analyzing biodiversity changes, and implementing land management strategies to minimize future risks and support ecosystem restoration. The Copernicus Centre for Latin America and the Caribbean (CopernicusLAC), located in Panama, is a regional hub dedicated to enhancing resilience in a region highly vulnerable to natural disasters and climate-induced geological and meteorological threats. By leveraging EO data, CopernicusLAC aims to strengthen disaster risk reduction through the development of tailored geospatial services for risk mapping, disaster monitoring, and recovery planning. The project's core objectives include promoting the use of open Copernicus data and services, building regional capacity to process and apply EO data, and fostering collaboration between local stakeholders and the EO community. These efforts not only address wildfire monitoring but also support broader disaster management strategies, ultimately contributing to sustainable and informed decision-making across the region. 
The CopernicusLAC program is driving the development of 12 tailored geospatial services addressing diverse natural disasters across the region. These include applications for flood preparedness, vegetation and drought monitoring, landslide detection, and urban characterization, among others. Among these efforts, wildfire management stands out with comprehensive tools that cover the entire spectrum of fire-related processes, integrating fire danger estimation, early detection, burnt area mapping, severity assessment, and post-fire regeneration monitoring. These capabilities highlight the critical role of Earth Observation in understanding and addressing fire impacts in a region increasingly affected by climate change. Regarding wildfire risk analysis, a specific tool for fire danger calculation has been developed within CopernicusLAC. The Fire Danger Mapping (FDM) Tool offers a customizable service that allows users to assess fire danger levels, adapting the model to specific local conditions. The tool uses static variables, such as land cover, vegetation height, and historical fire probabilities, combined with dynamic data like the Fire Weather Index (FWI) retrieved from the Global Wildfire Information System (GWIS). These inputs are standardized, reclassified, and integrated using a weighted average algorithm. By defining an Area of Interest (AOI), a date (historical or forecasted), and custom weights for the input variables, users can tailor the tool to reflect the local environmental and meteorological characteristics of their region. This flexibility enhances the accuracy of the resulting fire danger index, ensuring that the assessment is not only regionally relevant but also sensitive to local variations. The Burned Area Mapping (BAM) Tool operates in two complementary stages to support wildfire detection and monitoring. First, users receive real-time information on active fire hotspots detected using Sentinel-3 thermal data. 
This early detection capability provides a near-instantaneous view of fire activity, allowing for rapid situational awareness and initial response. When a significant density of hotspots is detected, the system automatically activates the burned area mapping module. This module employs an adapted version of the FireCCISFD20 algorithm, replacing the pre-fire image with a multitemporal composite to mitigate the effects of persistent cloud cover. Additional adjustments have been made to adapt the processing chain to the service execution framework and the configuration of the servers. Leveraging Sentinel-2 imagery and spectral indices (NBR2 and MIRBI), the algorithm delineates burned areas and evaluates the severity using the Relativized Burn Ratio. By focusing computational resources only on areas flagged as significant, this systematic approach enables efficient mapping of fire-affected zones, offering users geospatial products to assess wildfire impacts and guide decision-making. The Fire Recovery Mapping (FRM) Tool is activated automatically once active hotspots have ceased, signaling the end of a wildfire event. This service monitors vegetation regeneration using spectral data, evaluating the Normalized Difference Vegetation Index (NDVI) on a monthly basis and comparing it against pre-fire conditions. Users receive two key outputs: a temporal series of NDVI composites and a recovery layer indicating cumulative regeneration progress. These products enable the assessment of ecosystem recovery, identification of areas with slower regeneration rates, and support for decision-making in ecological restoration efforts. The integrated approach developed by CopernicusLAC demonstrates the transformative potential of Earth Observation tools in wildfire management, addressing prevention, real-time monitoring, and post-fire recovery. 
By tailoring services like the Fire Danger Mapping, Burned Area and Severity Mapping, and Fire Recovery Mapping tools to the specific needs of the Latin America and Caribbean region, the program enhances the capacity for data-driven decision-making and risk management. These innovations not only support regional resilience against the growing impacts of climate change but also provide a scalable open model for global disaster management efforts. Future work will focus on refining these tools and exploring their application in broader contexts, further advancing the role of geospatial technologies in sustainable development and environmental conservation.
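The FDM Tool's combination step, a weighted average over standardized and reclassified input layers, can be sketched as follows. This is a stdlib illustration only; the layer names, pixel values, and weights are hypothetical, not the tool's actual configuration.

```python
def fire_danger_index(layers, weights):
    """Combine reclassified input layers (values already scaled to 0-1)
    into a per-pixel danger index via a weighted average."""
    names = list(layers)
    total_w = sum(weights[n] for n in names)
    n_px = len(layers[names[0]])
    return [sum(weights[n] * layers[n][i] for n in names) / total_w
            for i in range(n_px)]

# Two pixels of an AOI; each layer has been reclassified to a 0-1 danger scale.
layers = {
    "land_cover":   [0.2, 0.8],
    "fwi":          [0.5, 0.9],   # dynamic input, e.g. from GWIS
    "fire_history": [0.1, 0.7],
}
# User-defined weights, e.g. emphasizing the Fire Weather Index.
weights = {"land_cover": 1.0, "fwi": 2.0, "fire_history": 1.0}

fdi = fire_danger_index(layers, weights)
```

Because the weights are user-supplied, the same combination code adapts to local conditions simply by changing the configuration, which is the flexibility the abstract describes.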
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: In-Orbit Deep Learning for Nighttime Cloud Detection Tested on a Prototype FlatSat

Authors: Alexis Apostolakis, Mr Simon Vellas, Vasileios Botsos, Christos Chronis, Souzana Touloumtzi, George Tsimiklis, Angelos Amditis, Professor Ioannis Papoutsis
Affiliations: National Technical University Of Athens, Institute of Communication and Computer Systems
Microsatellites are playing an increasingly prominent role in remote sensing services for safety and security. The typical dataflow in such services usually involves downloading sensor data to ground stations where processing is performed to produce Analysis Ready Data (ARD). However, to meet the need for near real-time response required by those applications, edge computing should be implemented, allowing ARD to be generated directly on the satellite and transmitted immediately via inter-satellite communication. This approach enables critical information to reach key users more quickly. One of the major challenges in this direction is developing processing chains and machine learning pipelines that minimize processing power consumption while maintaining the efficiency to produce high-quality ARD promptly. This task involves selecting suitable methods, whether classical algorithms or machine learning models, and architectures with lower complexity to reduce computational needs. Another key challenge is ensuring that software designed for the satellite's limited resources is thoroughly validated before deployment in orbit. One solution is to test the software on a FlatSat platform, i.e. a modular, ground-based satellite that emulates real on-board conditions. Our goal in this work is to address the two key challenges mentioned above by: (1) developing a deep-learning-based processing pipeline optimized for on-board production of ARD and (2) validating the developed software using a FlatSat platform. Among the various ARD processing challenges, we focus on nighttime cloud detection, which, although crucial for remote sensing applications, has been relatively underexplored in the current literature. On-board cloud detection: Clouds obscure a significant portion of the electromagnetic spectrum, degrading or rendering images unusable. 
Classification algorithms, ranging from simple threshold-based methods to advanced deep learning segmentation, are prone to errors when processing imagery with undetected clouds. Therefore, it is crucial to segment clouds before performing any further data processing. During the day, cloud detection typically relies on RGB and NIR spectra; however, these bands become ineffective at night due to the absence of sunlight reflectance. In contrast, long-wave infrared radiation (LWIR) enables cloud detection even at night, as cloud tops are generally colder than the Earth's surface. Annotated samples with cloud data from newly deployed satellites are typically limited in number. This is one of the reasons why the chosen methodology involves supervised semantic segmentation combined with transfer learning. This approach leverages data from more abundant, comparable inputs, such as spectral bands from satellite sensors like VIIRS and Landsat, for training. In addition, lightweight U-Net variants, including EfficientNet- and MobileNet-based architectures, are being explored to minimize processing requirements. Transfer learning is subsequently applied to fine-tune these models. Self-supervised learning approaches, implemented through architectures such as Masked Autoencoders (MAE) and Vision Transformers (ViT), are an alternative direction to address the scarcity of manually annotated cloud samples. However, these algorithms are significantly more computationally demanding, necessitating the application of knowledge distillation techniques to reduce their complexity. Testing on the FlatSat: To ensure that the developed data processing algorithms will be capable of running efficiently onboard a microsatellite, a lab-model CubeSat testbed is being assembled, allowing us to fully verify and validate software performance on a system under realistic space conditions. 
Our FlatSat platform is primarily based on openly available, open-source CubeSat architectures and includes both real hardware (actual physical components) and virtual emulated hardware (software simulations or virtual representations of hardware), depending on the specific testing needs or objectives. Specifically, we focus on testing a deep learning pipeline for onboard cloud detection. Thus, the primary FlatSat hardware component utilized is an embedded GPU payload processor with similar processing capabilities to the one on board the actual satellite. The FlatSat will be designed to replicate realistic in-flight conditions for payload power and interface configurations, ensuring accurate emulation of the space environment and allowing for thorough validation of the payload software under conditions matching those of the final operational environment. To reduce unnecessary complexity, non-critical subsystems in our testing will be simplified or virtualized, ensuring fidelity is preserved in critical areas while optimizing for efficiency. The above methodologies are being applied in a real-world scenario to enhance the capabilities of the recently deployed FOREST microsatellite constellation, designed for wildfire monitoring and early warning. These methodologies aim to equip the constellation with advanced onboard processing, enabling more efficient and accurate onboard cloud detection and, more generally, onboard data analysis integrating deep learning techniques.
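The physical principle behind nighttime LWIR cloud detection, cloud tops being colder than the surface, can be illustrated with a simple brightness-temperature threshold. This is only a heuristic sketch that the deep learning models described above replace; the 15 K margin, surface temperature, and pixel values are invented for illustration.

```python
def cloud_mask(bt_kelvin, surface_bt, delta=15.0):
    """Flag a pixel as cloud when its LWIR brightness temperature falls
    well below the expected surface temperature (cloud tops are colder
    than the Earth's surface at night)."""
    return [[bt < surface_bt - delta for bt in row] for row in bt_kelvin]

# Hypothetical 2x2 patch of LWIR brightness temperatures (K).
bt = [[290.0, 268.0],
      [285.0, 250.0]]
mask = cloud_mask(bt, surface_bt=288.0)
```

Threshold rules like this break down over cold terrain or temperature inversions, which is one motivation for the learned segmentation approach in the abstract.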
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Wildfires Meet Multitask Learning: Enhancing Burned Area Delineation With Land Cover Classification

Authors: Edoardo Arnaudo, Luca Barco, Claudio Rossi
Affiliations: Fondazione LINKS, Politecnico di Torino
The increasing frequency and severity of wildfires pose significant challenges for environmental monitoring and emergency response. Accurate delineation of burned areas from satellite imagery is crucial for assessing fire damage, supporting decision-making, and understanding long-term environmental impacts. Traditional approaches using binary segmentation models often struggle with accuracy and robustness, particularly when dealing with limited training data and class imbalances, which are inherently present in the context of wildfires. In this work, we address these challenges through two main contributions. First, we develop a comprehensive dataset specifically designed for multitask learning in wildfire contexts. The dataset combines Sentinel-2 satellite imagery with Copernicus Emergency Management Service (EMS) activations and ESA WorldCover data, encompassing 171 fire events primarily concentrated in Europe, as well as events from Australia and the Americas. For each Area of Interest (AoI), we provide four essential components: Sentinel-2 satellite imagery across 12 spectral bands with varying resolutions (10-60 meters), burned area annotations derived from EMS, land cover maps from ESA WorldCover featuring 11 distinct classes (including trees, shrubland, grassland, built-up areas, sparse vegetation, and water bodies), and computed cloud masks derived from CloudSen12. The dataset underwent thorough manual validation to ensure consistency, considering a time frame of up to 30 days following reported event dates to maximize the availability of cloud-free images. Each image was sampled and rasterized at 10 m resolution, with lower resolution bands upscaled using nearest neighbor interpolation. The final dataset comprises 433 samples spanning from 2017 to early 2023, with dedicated portions for training (129 events), validation (15 events), and testing (27 events) to ensure proper evaluation. 
Second, we introduce a lightweight, albeit effective, multitask learning framework that leverages land cover classification as an auxiliary task to enhance burned area segmentation. Our approach utilizes a shared encoder-decoder architecture with two classification heads, enabling the model to learn complementary features from both tasks simultaneously. The framework employs a standard segmentation loss in binary and multi-class variants for the respective tasks, with gradients from both objectives jointly propagating back to update the model parameters. To ensure a comprehensive evaluation, we experiment with three different architectural configurations: UPerNet with a ResNet-50 encoder as a fully convolutional network, UPerNet with a Vision Transformer encoder (ViT-S) as a hybrid configuration, and SegFormer B3 as an end-to-end transformer model. We test both with and without domain-specific pretrained weights to assess the framework's effectiveness under different initialization conditions. For pretrained models, we utilize weights from SSL4EO-S12 for the ResNet and ViT variants, while SegFormer uses ImageNet pretrained weights. We train on 512×512-pixel crops with a batch size of 32, using the AdamW optimizer with a learning rate of 1e-4 for 30 epochs. For inference, we implement a sequential sampling strategy with overlapping tiles and smooth blending using splines to reconstruct the original inputs without introducing border artifacts. We evaluate performance using the macro-averaged F1 score and Intersection over Union (IoU). Results demonstrate significant improvements in both accuracy and stability compared to traditional single-task approaches. In configurations without pretrained weights, the multitask approach achieved an average improvement of +3.85 in F1 score and +5.71 in IoU, with notably reduced standard deviations (-3.51 for F1 score and -4.88 for IoU). 
When training from scratch, the SegFormer architecture demonstrates particular robustness, achieving a 90.94% F1 score and 83.38% IoU even without pretraining. When using pretrained weights, while the performance gap between single- and multitask narrows, the multitask approach still maintains superior results with average improvements of +0.73 in F1 score and +1.21 in IoU. The UPerNet model with ResNet-50 encoder achieved the best performance in the multitask setting with pretrained weights, reaching an F1 score of 91.86% and an IoU of 84.94%. A further manual validation was also carried out on two selected EMS activations (i.e., EMSR-674 and EMSR-675) to verify the computed performance. For the EMSR-674 activation, the best performing model achieves an F1 score of 94.50%, showing a high degree of agreement between human annotation and automated output. In the case of EMSR-675, the results were even higher in terms of accuracy, with an F1 score of 96.57%. While the results are comparable with a manual output, the real added value of the tool comes from its processing times. In fact, the inference times for the two activations were 13.92 seconds and 221.4 seconds respectively, covering a surface area of more than 20,000 hectares. Considering the computational overhead of our multitask approach, the increase in training time is only marginal (10-30 minutes per epoch) and the number of additional parameters is negligible due to the shared encoder-decoder architecture. During inference, the auxiliary head is removed, eliminating any computational overhead in operational deployment. This work demonstrates that efficient extraction of semantically rich features remains pivotal even for specific downstream tasks such as burned area delineation. 
Future research directions could investigate multiple heterogeneous tasks simultaneously to further enhance model robustness, and explore more computationally demanding approaches such as large-scale self-supervised learning to generate better pretrained solutions. Another potential application involves applying this approach to other types of natural disasters, such as flood or landslide delineation. This work was carried out in the context of the projects OVERWATCH (Horizon EU, Grant ID. 101082320) and UNICORN (HEU Grant ID. 101180172), focusing on emergency management during natural disasters.
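The overlapping-tile inference described in this abstract requires tile offsets that cover the full image without gaps, with the last tile clamped to the border. A minimal 1-D sketch of that offset computation (the tile size, overlap, and extent below are illustrative, and the actual system additionally blends overlaps with splines):

```python
def tile_coords(size, tile, overlap):
    """Top-left offsets for overlapping tiles covering a 1-D extent,
    clamping the last tile so it ends exactly at the border."""
    step = tile - overlap
    coords, pos = [], 0
    while True:
        if pos + tile >= size:
            coords.append(max(size - tile, 0))  # clamp final tile
            break
        coords.append(pos)
        pos += step
    return coords

# A 1280-px extent tiled with 512-px crops and 128-px overlap.
xs = tile_coords(size=1280, tile=512, overlap=128)
```

Applying the same function to both axes yields the 2-D tiling grid; predictions in the overlap zones are then blended to avoid border artifacts.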
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Operational Delivery of MODIS and VIIRS Products in support of EFFIS

Authors: Dr Antoine Masse, Dr Conrad Bielski, Timo Ryyppö, Kevin Storm, Camille Pinet, Gunter Zeug, Jakob Lobensteiner, Panu Lahtinen, Sebastien Labarre, Jesús San-Miguel-Ayanz, Pieralberto Maianti, Roberto Boca, Duarte Oom, Elena Roglia, Alfredo Branco
Affiliations: Ignfi, Riscognition GmbH, Finnish Meteorological Institute, GEOFIT, European Commission, Joint Research Centre (JRC), External consultant for the European Commission, ARCADIA SIT s.r.l
The increasing frequency and severity of wildfires, exacerbated by climate change, pose significant threats to ecosystems, communities, and economies. To address this challenge, the European Union's Copernicus program offers a suite of services to monitor and mitigate wildfire risks. The European Forest Fire Information System (EFFIS) is a key component of the Copernicus Emergency Management Service (CEMS), providing essential information on fire danger, active fires, and post-fire damage assessment. By leveraging Earth observations from satellite measurements and advanced modelling techniques, EFFIS delivers timely and essential monitoring of wildfire events across Europe, North Africa, and the Middle East. This abstract presents our role and the ongoing technical developments and services provided to the JRC in support of the EFFIS services. Our consortium is led by IGNFI in collaboration with the Finnish Meteorological Institute (FMI), Riscognition GmbH and GEOFIT. The consortium is responsible for the fast delivery of thermal anomalies based on the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS) instruments. MODIS measurements are acquired by two satellites, Terra and Aqua, and VIIRS measurements likewise by two, Suomi-NPP and NOAA-20. Several satellite ground station antennas are used to provide near real-time (NRT) access to the acquisitions in order to fulfil the contractual obligation of delivering products covering the EFFIS area of interest (AOI) in less than three hours after acquisition. The thermal anomaly (TA) information is used to detect and identify active fires as quickly as possible. Water bodies, rural areas, and well-known hot spots, like factories and greenhouses, can be filtered out from the TA detections to prevent false alerts of forest fires. For completeness, the consortium also provides MODIS and VIIRS optical imagery for analysis purposes and the mapping of burnt areas.
The daily output is composited imagery, updated every time new data becomes available. Each pixel of the composited image is updated if the location is cloud-free and the sun zenith angle is more favourable than that of the previous observation. This produces a daily cloud-free image with remaining clouds masked. Once per day, four mosaics are generated, one for each satellite and sensor combination, from all the satellite overpasses, providing the latest and best observations for each day of the year. To achieve the expected high production quality, the consortium must closely monitor the production. This is achieved through daily automated and manual assessments and controls. The different products are analysed at the output of the automated production and evaluated to determine whether anomalies come from the consortium's processing or from source-data issues. Furthermore, a real-time dashboard, accessible online to the entire project team, provides hourly updates on the status of the different products produced and delivered. Wildfire monitoring is time-critical, and therefore one of the challenges the consortium will be addressing is delivering the products even faster. Our goal is to improve the delivery by 30 minutes without sacrificing product quality. Furthermore, the consortium is testing the inclusion of other satellite measurements to support EFFIS, including data from the Sentinel-3 mission. The testing will help understand what challenges and opportunities exist to integrate this authoritative dataset into future services.
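The per-pixel best-observation compositing described above can be sketched roughly as follows. This is an illustrative simplification, not the consortium's production code, and the selection rule implemented here (preferring a lower sun zenith angle, i.e. the sun higher in the sky) is an assumption; the operational criterion may differ:

```python
import numpy as np

def update_composite(comp_val, comp_sza, new_val, new_sza, new_cloud):
    """Update a running daily composite pixel-by-pixel.

    A stored pixel is replaced when the new observation is cloud-free and
    its sun zenith angle (SZA) beats the stored one. Lower SZA is treated
    as better here (an assumption, not the documented rule)."""
    comp_val = np.asarray(comp_val, dtype=float)
    comp_sza = np.asarray(comp_sza, dtype=float)
    new_cloud = np.asarray(new_cloud, dtype=bool)
    better = np.logical_and(~new_cloud, np.asarray(new_sza) < comp_sza)
    return np.where(better, new_val, comp_val), np.where(better, new_sza, comp_sza)
```

Running this update for every overpass of the day yields the daily cloud-free composite.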

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Methodology for burned areas delimitation and fire severity assessment using Sentinel-2 data: A case study of forest fires in Spain (2018–2023).

Authors: Rafael Llorens, José Antonio Sobrino, Cristina Fernández, José María Fernández-Alonso, José Antonio Vega
Affiliations: University Of Valencia, Centro de Investigación Forestal de Lourizán
The aim of this study was to develop a methodology for burned area estimation and fire severity assessment for forest fires that occurred in Spain between 2018 and 2022. The methodology leveraged Sentinel-2 spectral indices as input data, taking advantage of the spectral bands in the near-infrared (NIR) and short-wave infrared (SWIR) regions, which enable high differentiation between burned and unburned areas as well as between varying degrees of fire severity. All possible combinations of Sentinel-2 bands applied to a spectral normalized difference index (SP) were evaluated, alongside widely used burn indices in remote sensing, including the Burned Area Index (BAI), the Burned Area Index for Sentinel-2 (BAIS2), the Mid-Infrared Burn Index (MIRBI), the Normalized Burn Ratio (NBR), the Relativized Burn Ratio (RBR), and the Relative differenced Normalized Burn Ratio (RdNBR). To minimize misclassification between burned areas and other land cover types, the study also utilized the Sentinel-2 Global Land Cover (S2GLC 2017) product and computed temporal differences between pre-fire and post-fire imagery for each spectral index (dSP). The results were validated against independent datasets: for burned area detection, the Emergency Mapping Service (EMS) and the Galicia forest service; and for fire severity, field plots classified into severity categories as per Ruiz-Gallardo et al. (2004) (null, low, moderate, and high severity). The statistical analysis indicated that the dNBR2 index, derived from Sentinel-2 bands B11 and B12, achieved the highest accuracy for burned area delineation, with commission and omission errors of 7% and 3%, respectively.
For fire severity assessment, the combination of BAIS2, NBR, and the modified Normalized Burn Ratio (NBR2, derived from Sentinel-2 bands B7 and B12) yielded the most accurate results in areas with low, mixed, and full vegetation cover, achieving kappa statistics, F1-scores, and balanced accuracy values of 0.87, 0.86, and 0.92, respectively. The methodology developed in this study enables the generation of precise maps of burned areas and fire severity in Spain, contributing to the enhancement of national forest fire statistics.
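The exhaustive screening of band combinations described above can be sketched as follows. This is an illustrative outline, not the study's code: every ordered band pair is fed through the generic normalized difference index, and a pre/post difference (e.g. dNBR2 from B11 and B12, as in the abstract) is then derived from the chosen index:

```python
import itertools
import numpy as np

def nd_index(a, b):
    """Generic spectral normalized difference index: (a - b) / (a + b)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (a - b) / (a + b)

def all_pair_indices(bands):
    """Normalized difference index for every ordered pair of bands,
    mirroring the exhaustive band-combination evaluation."""
    return {f"nd_{i}_{j}": nd_index(bands[i], bands[j])
            for i, j in itertools.permutations(bands, 2)}
```

A temporal difference index (dSP) is then simply `nd_pre - nd_post` for the selected band pair.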

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Efficiently sampling diverse wildfire spread predictions from conditional diffusion models

Authors: Sebastian Gerard, Josephine Sullivan
Affiliations: KTH Royal Institute of Technology
We train a conditional diffusion model on the WildfireSpreadTS dataset to generate next-day wildfire spread predictions. While we only train on single observations of how each wildfire spreads per day, we want to generate a diverse set of possible future spread scenarios to enable better planning. To compute this diverse set of predictions efficiently, we want to transfer our experience with computer vision data that, at a high level, resembles the fire spread data. Wildfire spread predictions based on satellite observations are inherently affected by aleatoric uncertainty, due to the limitations in spatial and temporal resolution of the measurement instruments. The standard way to deal with such uncertainties in deep learning approaches is to produce pixel-wise probabilities indicating whether a pixel will be on fire at the next time step. This approach, however, does not allow users to draw any conclusions about how the probabilities of different pixels relate to each other. If the fire is predicted to spread east and west with probability 50% each, this could mean that it will go A) exclusively either east or west, or B) either not spread at all or move both east and west at the same time. Both interpretations are consistent with the 50% estimates for the two pixels. Unlike discriminative models, generative models are able to capture such differences by learning to produce different outputs for the same input. We therefore train a diffusion model to generate diverse wildfire spread predictions, conditioned on the current wildfire position and auxiliary variables. At the time of writing, we have trained a diffusion model to produce wildfire spread predictions using the WildfireSpreadTS dataset. When we average these diverse predictions, they are competitive with a corresponding non-diverse model, as measured by the average precision (AP) metric.
By the time of the symposium, we aim to transfer a method from our ongoing research on computer vision data to efficiently sample a diverse set of wildfire spread predictions. The key here is that we do not need to follow the model's learned distribution, which could require many samples to generate certain scenarios; instead, we purposefully modify the sampling procedure to more quickly produce a wider range of future scenarios. The goal of this modification is to quickly provide firefighters with a diverse set of likely scenarios that they can use to inform their planning decisions. Evaluating the results of such a method is difficult without a ground truth set of possible future outcomes for each input, so we mostly rely on heuristic and qualitative evaluation. However, we believe that these methods are key to future deep-learning-based solutions for wildfire spread prediction, with better models being enabled by the use of simulated data for pretraining and the ongoing growth of real-world data.
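The east/west example in the abstract can be made concrete with a toy calculation (purely illustrative, not from the paper): two very different sets of equally likely spread scenarios produce identical pixel-wise probability maps, which is exactly the information a pixel-wise model cannot recover but sampled scenarios can:

```python
import numpy as np

# Two hypothetical sets of equally likely next-day spread scenarios over two
# pixels (east, west). Each row is one scenario; 1 = pixel on fire.
scenarios_a = np.array([[1, 0],
                        [0, 1]])  # A: exclusively east OR exclusively west
scenarios_b = np.array([[1, 1],
                        [0, 0]])  # B: both directions at once OR no spread

# Averaging over scenarios gives the pixel-wise fire probabilities.
marginals_a = scenarios_a.mean(axis=0)  # 50% per pixel
marginals_b = scenarios_b.mean(axis=0)  # also 50% per pixel
```

Both cases yield the same 50%/50% map, so only a generative model that outputs whole scenarios can distinguish them.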

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Burned area mapping: a new approach using Sentinel-2 time series

Authors: Claudia Collu, Dario Simonetti, Francesco Dessì, Marco Casu, Costantino Pala, Maria Teresa Melis
Affiliations: Dept. of Chemical and Geological Sciences, University of Cagliari, Dept. of Civil, Constructional and Environmental Engineering, Sapienza University of Rome, European Commission - Joint Research Centre
Mapping of burned areas (BAs) plays a crucial role in understanding fire dynamics, assessing ecosystem impacts, and supporting sustainable land management strategies in the face of increasing fire frequency and intensity. Current methods rely heavily on remote sensing satellite datasets, including coarse and moderate resolution imagery. Currently available BA products at both global and regional scales, such as MODIS MCD64A1 (500 m), FireCCI50 (250 m), and Copernicus C3SBA10 (300 m), offer broad coverage, albeit with coarser spatial resolutions suited to large-scale monitoring. Regional products, such as the European Forest Fire Information System (EFFIS) and FireCCISFD11, focus on specific areas, such as Europe or sub-Saharan Africa, using higher resolution data (10–30 m) to provide more precise and localized BA mapping. In this study, we propose a new method to detect regional BAs in natural Mediterranean environments with a spatial resolution of 0.1 ha, using multispectral time series acquired by Sentinel-2 during 2020, in a sample area corresponding to Sardinia Island (Italy). The methodology involved: (i) detection of burned areas in 2020, using existing regional datasets acquired through GPS surveys and photointerpretation of Sentinel-2 imagery, resulting in the identification of 32.8 km² of burned areas across the natural and semi-natural areas of the Sardinian territory; (ii) evaluation of the most suitable spectral indices and parameters to distinguish burned and unburned areas, together with their optimal thresholds; (iii) filtering of the Sentinel-2 time series in Google Earth Engine (GEE) using the optimal thresholds to detect potential burned pixels; (iv) classification of burned and unburned areas using a Random Forest algorithm with the training dataset built in the first step; (v) post-processing, concerning BA polygon refinement to ensure a continuous and coherent representation of the burned areas.
The validation of this BA detection method was performed using the Dice coefficient (DC), commission error (CE) and omission error (OE), revealing a DC value of up to 90%, outperforming the EFFIS Burnt Area product, which showed a DC of 66%. These results demonstrate that the new method is particularly effective at detecting small BAs, which are often under-represented in existing products. However, it is important to note that many of the observed errors, particularly commission errors, are due to well-known disturbances, including missing data, cloud cover, landscape heterogeneity, and, most notably, agricultural practices in semi-natural grasslands. These factors often complicate accurate detection and classification of burned areas in these environments.
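For reference, the three validation measures used above are standard functions of the confusion counts; a minimal sketch (not the study's validation code) is:

```python
import numpy as np

def ba_accuracy(pred, ref):
    """Dice coefficient, commission error and omission error for binary
    burned-area maps (1 = burned)."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    return {
        "dice": float(2 * tp / (2 * tp + fp + fn)),
        "commission": float(fp / (tp + fp)),  # mapped burned, unburned in reference
        "omission": float(fn / (tp + fn)),    # burned in reference, missed by the map
    }
```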

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Development of Thermal Infrared and 3D Based Fire Severity Products for Innovative Insurance Market Sector Applications

Authors: Stephanie Battiston, Justine Batisse, Mathilde Mauger-Vauglin, Mounia El Baz, Pascale Ferrage
Affiliations: ICube-SERTIT, Descartes Underwriting, CNES
Parametric insurance models pay out when a predefined event occurs, as measured by specified parameters, and are based on different data sources including Earth Observation (EO) imagery. In situ data are often available after a disaster event but remain incomplete regarding spatial coverage and thematic information, whereas EO space imagery provides an overview of a disaster's extent, impact and severity. Consequently, in situ data are mainly used to calibrate the model and validate EO-based results. The partnership between the new-generation global corporate insurer Descartes Underwriting and ICube-SERTIT, a service platform of the University of Strasbourg specialised in EO, led to the development of an initial fire severity product. It is based on the calculation of a set of indices from Sentinel-2 data, such as the Normalised Burn Ratio (NBR), the Normalised Difference Vegetation Index (NDVI) and the Burn Area Index (BAI). This product is fully operational and integrated into Descartes' insurance workflow but presents some limitations regarding burnt area delineation and fire severity estimation. ICube-SERTIT and Descartes are now working together on the SEVERIT3D project, within the CNES Ambition Aval program, to develop enhanced fire severity products using thermal infrared and very high-resolution 3D data. The goal is to improve the initial product and provide additional information answering Descartes' needs. The combination of thermal and 3D severity indices would allow the detection of indirectly fire-affected vegetation in the areas surrounding burnt areas, enhance fire extent detection in sparse woodlands, and detect affected built structures (e.g. wooden buildings). While awaiting thermal data from the future CNES/ISRO Trishna mission, the generation of the thermal severity index is being tested using NASA/USGS Landsat-8 and NASA ECOSTRESS data, as well as the new ConstellR LST30 and HiVE products.
These data, combining high spatial resolution and high revisit frequency, will highlight thermal anomalies and support the assessment of vegetation water stress. The future CNES/ADS constellation CO3D will provide very high-resolution stereoscopic imagery to be processed to calculate pre- and post-event elevation differences. For now, very high-resolution stereoscopic CNES/ADS Pleiades and Maxar WorldView data are used to improve the detection of affected vegetation and built structures. Moreover, the potential lack of archive pre-event stereoscopic data is covered by the 3D-GloBFP database, which provides worldwide coverage of building footprints and heights. These thermal and 3D severity indices, based on the state of the art, are developed and demonstrated in several areas of interest (AoIs), defined according to the availability of archive and newly tasked data. The first AoIs are located in California, where many fires occurred during summer 2024, affecting vegetation and urban infrastructure. The Bridge and Airport Fires, situated on the periphery of Los Angeles, as well as the Borel Fire, located south of the Sierra Nevada, constitute the first demonstrator's case studies. Additional cases will be defined at a later stage for the final demonstrator. Once validated, the automated process will be implemented in SERTIT's ExtractEO (EEO) pipeline. The availability of these new fire severity products should allow Descartes Underwriting to improve and expand EO-based insurance products across forestry, vineyards and built environments.
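The pre/post elevation differencing mentioned above reduces, at its core, to subtracting two surface models and flagging significant height losses. A minimal sketch (the 2 m threshold is an illustrative assumption, not a project parameter):

```python
import numpy as np

def elevation_drop_mask(pre_dsm, post_dsm, min_drop=2.0):
    """Flag pixels whose surface height dropped by more than `min_drop`
    metres between pre- and post-event digital surface models (e.g. consumed
    canopy or collapsed structures). Threshold is illustrative only."""
    diff = np.asarray(post_dsm, dtype=float) - np.asarray(pre_dsm, dtype=float)
    return diff, diff < -min_drop
```

In practice such a mask would be combined with spectral and thermal severity indices rather than used alone.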

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone R)

Poster: Detection, evaluation and monitoring of wildfires based on the Copernicus Satellite Data in the Southwest of Romania

Authors: Andreea-Florentina Marin, Andi Mihai Lazăr
Affiliations: University Of Bucharest
Wildfires represent one of the most common disasters, with significant ecological, economic, and social impacts. According to the Forest Fire 2023 report published by the Joint Research Centre (JRC), Europe, the Middle East and Africa have experienced some of their worst forest fires since 2000. In recent decades, the frequency and intensity of wildfires in countries like Greece, Italy, Spain and Portugal, as well as parts of Central and Eastern Europe, have notably increased. Romania tends to follow this trend of increasingly large forest fires (according to the European Forest Fire Information System, EFFIS), with statistics showing a spike in 2017. The year 2024 was no exception: according to the National Forestry Authority Romsilva, 367 small and large wildfires were recorded in the first half of 2024, compared to 48 in the period from January to June 2023. Detecting the extent and intensity of wildfires is important and useful for fire risk management, ensuring the rapid management of the disaster during and after the event. Reliable information is key for the authorities and intervention teams to act quickly and to take adequate and prompt measures to protect the population, infrastructure and the natural environment. Such events can be monitored using data from Earth observation satellites, which provide important information about the affected areas before, during and after the wildfires. The main purpose of this study is to show the usefulness of Earth observation data in the rapid mapping of disasters such as wildfires. In this case, Copernicus satellite data, namely Sentinel-2, were used. Based on the Sentinel-2 images, a series of color composite images were processed (Natural Color, False Color and False Color Urban) and, based on the 13 available spectral bands, a series of spectral indices such as NDVI (Normalized Difference Vegetation Index) and NBR (Normalized Burn Ratio) were calculated.
In addition, a burned area detection procedure was applied to detect recently burned areas at a large scale. For this study, a series of large and small scale events from 2024 within the limits of Mehedinti county were evaluated. Mehedinti county is located in the southwest of Romania. Although Romania's climate is not as similar to that of Spain or Portugal, being a transitional temperate continental climate, the southwest of the country has sub-Mediterranean influences, with hot and dry summers and mild and rainy winters, making the area suitable for some Mediterranean vegetation species with a high risk of fire. On the other hand, according to the authorities, most wildfires are caused by human activities, such as the uncontrolled burning of pastures in spring and the burning of stubble after the harvesting of crops, directly adjacent to forests. The selected area is also of particular importance from the point of view of biodiversity: it contains part of the Domogled-Valea Cernei National Park, as well as the Mehedinti Plateau Geopark, the Iron Gates Natural Park and other smaller protected areas included in the Natura 2000 network. The high revisit rate of the Copernicus Sentinel-2 satellites and their very good quality imagery products proved to be a very important resource for wildfire monitoring. As a result, the size of the wildfires within the area of interest was precisely determined, and the data enabled the quantification of the negative effects on the mapped areas. The color composite images, especially False Color (Urban), enhanced the visualisation of the active fires, while the calculated indices demonstrated good performance for the delineation of the wildfire events and their aftermath.
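The NBR-based delineation used in studies like this one can be sketched as follows. This is an illustrative outline, not the authors' workflow; the 0.27 dNBR threshold is an often-cited indicative value, not one calibrated for this study area:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio, e.g. from Sentinel-2 B8 (NIR) and B12 (SWIR)."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir)

def burned_mask(nbr_pre, nbr_post, min_dnbr=0.27):
    """Burned where the pre-to-post NBR drop (dNBR) exceeds a threshold.
    The default is an indicative literature value, not a calibrated one."""
    return (np.asarray(nbr_pre) - np.asarray(nbr_post)) > min_dnbr
```

NDVI follows the same normalized-difference form, with the red band in place of the SWIR band.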

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: D.04.05 - POSTER - From the Research Lab to a Global Map: Scalable and Sustainable EO Algorithm Development and Workflows

The advancement of Earth Observation (EO) technologies has enabled detailed monitoring and analysis of our planet on an unprecedented scale. However, scaling EO mapping algorithms and workflows from local to continental and global levels presents significant algorithmic and technical challenges. This session focuses on overcoming these barriers to create robust, efficient, and sustainable solutions for large-scale EO mapping efforts.
Key topics include:
• Algorithmic Scalability: Addressing challenges such as limited and spatially biased training data, ensuring generalizability across diverse regions and time periods, and optimizing algorithms for cloud-based processing.
• Scalability in Workflows: Enhancing the scalability of data processing workflows, with a focus on efficient data handling and resource optimization in cloud-based infrastructures.
• Sustainability: Incorporating innovative practices to reduce the environmental footprint of EO data processing.
A central theme of the session is the importance of considering scalability from the earliest stages of algorithm and workflow development. We welcome contributions that address these challenges, from foundational research into scalable algorithms to practical case studies demonstrating successful or ongoing large-scale EO mapping projects.
This session aims to bring together experts from machine learning, remote sensing, data science, and cloud computing to explore innovative methodologies that drive advancements in large-scale EO mapping. By addressing both scalability and sustainability, the session seeks to foster the creation of EO products that provide actionable insights for tackling global environmental challenges.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Cloud-native Near-Real-Time Image Land-Cover Segmentation Data Pipeline

#zarr #stac #cloud-native

Authors: Tobias Hölzer, Jonas Küpper, Todd Nicholson, Luigi Marini, Lucas von Chamier, Ingmar Nitze, Anna Liljedahl, Guido Grosse
Affiliations: Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Permafrost Research Section, University of Potsdam, Institute of Geosciences, Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Computing and Data Centre, National Center for Supercomputing Applications, University of Illinois, Woodwell Climate Research Center
Neural network-based image segmentation is increasingly used in remote sensing and Earth observation, often combining multiple data sources such as digital elevation models and optical satellite imagery. Freely and publicly available satellite imagery such as Sentinel-2 offers resolutions of up to 10 m per pixel and repeat cycles of a few days. Such resolutions result in terabyte- and even petabyte-scale datasets. In the context of machine learning and feature segmentation, this amount of data poses specific challenges. Running a full segmentation pipeline, including adding auxiliary data, data preprocessing, segmentation, and post-processing, places high demands on processing infrastructure, such as storage, network bandwidth, CPU and GPU resources. In previous work, we created an automated segmentation pipeline to map retrogressive thaw slumps (RTS), a widespread mass-wasting feature in permafrost regions, similar to landslides, over multiple years using Sentinel-2 and PlanetScope imagery. This pipeline was sufficient for regional analysis and served its purpose for creating the DARTS dataset. However, it lacks the optimization necessary for scaling the workflow to the circumarctic scale. Thus, our main goal is to scale our processing throughput from the regional to the pan-arctic scale, and potentially to high frequency. To address this challenge, we combine multiple state-of-the-art technologies that rely on proven computational concepts, such as Ray, which eases distributed computing. Hence, we focused on clean, explainable code and the use of existing solutions instead of re-inventing the wheel. For efficient caching of auxiliary data downloaded on demand via the STAC protocol, we used zarr datacubes instead of plain raster formats such as GeoTiff. Here we used emerging libraries like odc-geo, which provides an easy-to-use and fast API for defining grids and reprojections of tiles with its GeoBox model.
This approach led to the development of a cloud-native pipeline that can efficiently process a single 10,000 x 10,000 pixel tile in mere seconds. Using GPU resources, even for preprocessing steps such as calculating derived data from digital elevation models, further sped up our processing pipeline. By applying this concept, we successfully built a scalable pipeline for segmenting thaw slumps across the circumarctic permafrost region, which can run on multiple platforms, from a local machine to cloud computing and HPC systems.
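The grid/tiling idea underlying such a pipeline can be illustrated with a few lines of plain Python. This is a simplified stand-in for the tiling functionality that libraries like odc-geo provide (their GeoBox model also carries CRS and resolution, which are omitted here):

```python
def tile_grid(xmin, ymin, xmax, ymax, tile_size):
    """Split a bounding box (in map units) into aligned tile bounding boxes
    (xmin, ymin, xmax, ymax), clipping the last row/column at the edges."""
    tiles = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            tiles.append((x, y, min(x + tile_size, xmax), min(y + tile_size, ymax)))
            x += tile_size
        y += tile_size
    return tiles
```

Each tile can then be processed independently and cached, e.g. as one chunk of a zarr datacube, which is what makes distributing the work with a framework like Ray straightforward.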

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: color33 – a cloud service for automated semantic enrichment of Sentinel-2 data

Authors: Martin Sudmanns, Matthias Laher, Steffen Reichel, Markus Kerschbaumer, Dr Andrea Baraldi, Dr Dirk Tiede
Affiliations: Paris Lodron University Salzburg, Spatial Services GmbH
Enormous amounts of data are produced by Earth observation satellites such as those of the Copernicus Sentinel-2 missions. Every day each Sentinel-2 satellite collects >1.5 TB of data, and 120 million Sentinel-2 scenes are already available for download. Cloud-based systems such as Google Earth Engine, Microsoft Planetary Computer, and openEO on the Copernicus Data Space Ecosystem aim to tackle problems regarding data access and the provision of processing capabilities. Users' interest in this type of data is evident, as Sentinel-2 downloads account for 50% of all Sentinel downloads. Yet users still struggle to convert this data into information in a reliable and scalable way. Existing approaches are often application- or sensor-specific, not transferable to other geographic areas or applications, or require many user-generated samples. Pixel-based spectral signatures of reflectance values and derived index values (e.g. NDVI) generally cannot be associated with land cover in a 1:1 relationship without additional information or interpretation. This is largely because values can correspond to several desired land cover definitions depending on state or condition (e.g. deciduous forest in winter), or may not be related to land cover at all (e.g. clouds). Further, to derive vegetation categories, users are required to set thresholds depending on the season or area of interest. There are relatively few approaches for consistently producing meaningful information from calibrated reflectance values, a step also called semantic enrichment. color33 is the first worldwide-working cloud service for fully automated semantic enrichment of any Sentinel-2 scene. Based on the SIAM software [1], it implements a physical-model- and knowledge-based decision tree to categorize the spectral signature into a known and stable set of categories (color naming) without the need for training samples or additional parameters.
The target categories are application-independent, generic, sensor-independent and are the basis for fully automatic image interpretation and change detection. From a user’s perspective, having the categories shifts the starting point for analysis from reflectance values to known, stable spectral categories. Therefore, the burden of handling multispectral reflectance values is removed because users are able to use spectral categories with semantics in their analysis. For example, change detection can be expressed as category changes within vegetation categories or from vegetation to bare soil categories. As a physical-model and knowledge-based decision tree, color33 can be used without any parameters and does not require custom adjustments. color33 was designed as a cloud service with scalability in mind and provides parallelization and multi-user support accelerating performance and making it usable in a variety of applications. Hence, as a fast algorithm without training and deployed on scalable cloud resources, it contributes not only to a better user experience but also reduces the environmental footprint. color33 (https://color33.io) provides a scalable, worldwide applicable semantic enrichment on-demand for any Sentinel-2 image without additional parameters or training samples. Several applications can be supported from domains such as agriculture, wildfire, or forest monitoring. color33 was funded through an ESA inCubed project (SIAMaaS) and is available for testing through the ESA network of resources. [1] A. Baraldi, M. L. Humber, D. Tiede, and S. Lang, ‘GEO-CEOS stage 4 validation of the Satellite Image Automatic Mapper lightweight computer program for ESA Earth observation level 2 product generation – Part 2: Validation’, Cogent Geoscience, vol. 4, no. 1, p. 1467254, Jan. 2018, doi: 10.1080/23312041.2018.1467254.
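To give a flavour of what a knowledge-based spectral decision tree does, here is a toy sketch for a single pixel. The category names and thresholds below are illustrative inventions for this example only; they are not the rules implemented by SIAM or color33:

```python
def spectral_category(blue, green, red, nir, swir):
    """Toy knowledge-based decision tree assigning a coarse spectral category
    to one pixel's surface reflectances (0-1). Thresholds are illustrative
    assumptions, not the SIAM/color33 rules."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    if min(blue, green, red) > 0.3:          # uniformly bright across visible bands
        return "cloud_or_bright_surface"
    if ndvi > 0.4:                           # strong red-edge vegetation signal
        return "vegetation"
    if nir < 0.1 and swir < 0.1:             # strong absorption in NIR and SWIR
        return "water_or_shadow"
    return "bare_soil_or_built_up"
```

The point is that such rules are fixed and physically motivated, so the same input signature always maps to the same stable category without training samples.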

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Scaling Earth Observation Workflows with openEO: Managing Large-Scale Processing Efficiently

#stac #cloud-native

Authors: Vincent Verelst, Ir. Victor Verhaert, Dr. Ir. Stefaan Lippens, Ir. Jeroen Dries, Dr. Hans Vanrompay
Affiliations: VITO Remote Sensing
The openEO API offers a cloud-native interface for seamless access to and processing of Earth Observation (EO) data. By providing a standardized and user-friendly platform, openEO simplifies the integration of diverse EO datasets and facilitates efficient workflows in the cloud. Its integration with the Copernicus Data Space Ecosystem (CDSE) ensures users can process and analyze EO data at scale. This makes openEO an indispensable tool for researchers and practitioners working with satellite data in fields like agriculture, land monitoring, and environmental management. However, the nature of EO data and analysis presents significant challenges for computational workflows. Tasks such as processing vast spatial extents or executing complex algorithms often involve datasets too large, or operations too resource-intensive, to handle within the constraints of a single batch job. These limitations can slow progress, increase costs, and create inefficiencies in EO projects. Addressing these challenges requires robust solutions that can manage large-scale workflows, optimize resource usage, and ensure cost-effectiveness. This demonstration highlights how openEO can help overcome these challenges. By offering advanced job management capabilities, automated processing features, and enhanced metadata integration, openEO enables users to scale up their EO workflows while maintaining efficiency and cost control. These features are particularly beneficial for demanding computational tasks, such as continental land cover mapping or time-series analysis.
Key features of the openEO job management system:
1. Automated spatial and temporal splitting. Large-scale EO tasks often exceed the resource limits of a single batch job. To address this, openEO automatically divides these massive jobs into smaller, more manageable sub-jobs. This splitting ensures that each sub-job fits within resource constraints while maintaining the integrity of the overall workflow.
2. Comprehensive job tracking. Managing large workflows often requires monitoring multiple parallel processes. openEO's MultiBackendJobManager includes a robust tracking feature that monitors the status of all jobs in real time, providing detailed insights into processing progress, memory usage, and monetary costs (credits).
3. Direct storage in cloud-native formats. A critical aspect of openEO's scalability lies in its ability to store outputs directly in cloud-native formats (GeoTIFF) on project-specific storage solutions (S3). By bypassing intermediary steps and writing processed data directly to these locations, openEO significantly reduces overhead in data handling.
4. Customizable memory usage settings. Cost optimization is a key consideration in cloud-based EO workflows. openEO addresses this by allowing users to customize memory allocation settings for each task. By fine-tuning memory usage, users can allocate just enough resources to ensure high performance without incurring unnecessary expenses.
5. Automatic STAC metadata generation. Metadata plays a crucial role in making processed EO data discoverable, reusable, and interoperable. openEO automatically generates STAC-compliant metadata for all workflow outputs. This ensures that processed data can be easily cataloged and shared, simplifying collaboration across teams and projects.
Conclusion: the integration of advanced job management features and cloud-native tools positions openEO as a critical platform for scalable EO analytics. Its ability to generate STAC-compliant metadata ensures processed data remains reusable, accessible, and aligned with FAIR data principles, fostering collaboration across the EO community. Successful real-world applications include projects like WorldCereal [1], LCFM (Land Cover and Forest Monitoring) [2], and WEED [3], demonstrating openEO's capacity to handle even the most computationally intensive workflows efficiently.
As the demand for global and regional Earth monitoring grows, openEO provides the tools necessary to meet these challenges, making it an indispensable platform for researchers, policymakers, and practitioners alike. [1] https://esa-worldcereal.org/en [2] https://land.copernicus.eu/en [3] https://esa-worldecosystems.org/en
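The automated spatial splitting described above can be illustrated with a minimal, hypothetical sketch (plain Python, not openEO's internal implementation): a large bounding box is tiled into sub-extents, each small enough for a single batch job.

```python
# Illustrative sketch of spatial job splitting: tile a large bounding
# box (degrees) into sub-extents of at most tile_size x tile_size, each
# of which could be submitted as one batch sub-job. The tile size and
# extents are hypothetical.

def split_extent(west, south, east, north, tile_size):
    """Return a list of sub-extent dicts covering the full bounding box."""
    tiles = []
    y = south
    while y < north:
        x = west
        while x < east:
            tiles.append({
                "west": x,
                "south": y,
                "east": min(x + tile_size, east),
                "north": min(y + tile_size, north),
            })
            x += tile_size
        y += tile_size
    return tiles

# A 10 x 4 degree area split into 2-degree tiles -> 5 x 2 = 10 sub-jobs.
subjobs = split_extent(0.0, 40.0, 10.0, 44.0, tile_size=2.0)
print(len(subjobs))  # 10
```

In the real system, each sub-extent would become one batch job whose status the job manager then tracks.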
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: EO Africa - Continental Demonstrator LUISA: Lessons Learned from Scaling HANPP for Local to Continental Scales.

Authors: Wai-Tim Ng, Dr. Hans Vanrompay, Vincent Verelst, Dr. Egor Prikaziuk, Dr. Michael Schlund, Dr. Michael T. Marshall, Dr. Rebecca Varney, Dr. Nina Raoult, Stephen Sitch, Linda Gotsmy, Dr. Sarah Matej, Dr. Karl-Heinz Erb, Marie Polanska, Pavel Vlach, Dr. Luboš Kučera
Affiliations: Vlaamse Instelling voor Technologisch Onderzoek, Naamloze vennootschap (VITO), ITC, University of Twente, University of Exeter, University of Natural Resources and Life Sciences (BOKU), GISAT
The ESA EO AFRICA – Continental Demonstrator "Land Use Intensity’s Potential, Vulnerability and Resilience for Sustainable Agriculture" (LUISA) project addresses the development and application of the Human Appropriated Net Primary Production (HANPP) as an essential indicator for evaluating human impacts on carbon stocks. Such impact assessments are essential for global climate mitigation policymaking. HANPP quantifies the degree to which human activities appropriate or modify net primary production (NPP), which is the biomass plants produce through photosynthesis and that serves as the foundation of the food chain. This indicator comprises two primary components: harvested biomass, representing the portion of NPP used by humans for purposes such as food, fiber, timber, and bioenergy; and NPP foregone, referring to potential biomass lost due to land-use changes, including deforestation, urbanization, and agricultural expansion. By calculating HANPP, scientists gain insights into how human activities alter ecosystems’ capacity to generate biomass, with significant implications for biodiversity, carbon cycling, and ecosystem services. High HANPP values indicate substantial human-induced ecosystem changes that can strain the environment and limit resources for other species. The LUISA project ensures that the monitoring of HANPP is scalable and operational using cloud-agnostic and open-source computational frameworks. This approach facilitates the transition from research prototypes to operational systems, employing modular workflows that adapt to different cloud environments. Proven methodologies reinforce algorithm robustness, and close collaboration with end-users ensures the resulting Earth Observation (EO) products are practical and widely applicable. 
By integrating advanced EO technologies, scalable computing, and algorithmic innovation, the project overcomes traditional barriers to mapping and monitoring biomass flows at continental scales, supporting informed decision-making and sustainable resource management. The project focuses on creating a remote sensing-driven framework for HANPP monitoring across Africa, using case studies in Mozambique, Senegal, Uganda, and Ethiopia. This involves optimizing NPP estimates at high (20m) spatial resolution with the fraction of absorbed photosynthetically active radiation and weather data derived from Sentinel-2 and ERA5-Land. The optimized estimates are used to calibrate outputs from a Dynamic Global Vegetation Model (DGVM) to estimate NPP across Africa. Automated methods are employed to calculate human influence indicators for land uses such as cropland, grazing land, forests, and built-up areas, and these HANPP estimates are evaluated across agroecosystems representing diverse African contexts. The project extends these efforts to scale HANPP monitoring across the African continent in both space and time. This includes pixel-level computation of actual and potential NPP using calibrated DGVM models, the mapping of human influence indicators across land-use types at the continental level, and the benchmarking of results against a new dataset, HANPPcube. By addressing scales from local to continental, LUISA advances HANPP monitoring as an operational tool for understanding human impacts on ecosystems and fostering sustainable development throughout Africa.
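The HANPP accounting described above reduces to a simple balance: HANPP equals harvested biomass plus NPP foregone, where NPP foregone is potential minus actual NPP. A minimal numeric sketch, using hypothetical per-pixel values (gC/m2/yr), not LUISA's actual code:

```python
# Hedged numeric sketch of the HANPP indicator defined in the abstract.
# All values below are made up for illustration.

def hanpp(npp_potential, npp_actual, harvested):
    """HANPP = harvested biomass + NPP foregone (potential - actual NPP)."""
    npp_foregone = npp_potential - npp_actual
    return harvested + npp_foregone

# Example: a cropland pixel where land use lowered NPP from 900 to 700
# gC/m2/yr, and 300 gC/m2/yr of the remaining NPP is harvested.
value = hanpp(npp_potential=900.0, npp_actual=700.0, harvested=300.0)
print(value)  # 500.0
```

A high value like this signals substantial human appropriation of the pixel's potential biomass production.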
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Improved Cloud Screening for Global Land Cover Classification Using Sentinel-2 Imagery

Authors: Ioannis Kalfas, Daniele Zanaga, Dr. Ruben Van De Kerchove, Joris Coddé, Wanda De Keersmaecker, Mathilde De Vroey
Affiliations: VITO
One of the major limitations in the use of optical remote sensing data is cloud contamination: clouds and aerosols, which obscure approximately two-thirds of the Earth's surface, pose a significant challenge to land cover monitoring. This study presents a cloud screening algorithm that addresses this issue, leveraging Sentinel-2 imagery and designed specifically for operational global land cover classification. Our approach aims to develop a land occlusion index at 60-meter resolution, quantifying the degree of occlusion for each location. This index will facilitate informed decisions on pixel masking for downstream applications, such as selecting the least cloudy pixel for compositing and enabling more accurate masking of occluded pixels prior to classification tasks. Cost-effectiveness and inference speed are also crucial, since the model should be optimized for minimal operational costs, making it suitable for deployment as an on-demand service and for scenarios including near real-time applications. The model architecture combines a MobileNet backbone with a UNET whose segmentation head incorporates an additional CNN module. This design enables the encoding of two main streams of information: 1) the Sentinel-2 bands, which are fed through the UNET to derive important spatial and spectral features; and 2) meteorological and topographic data, which are fed into the CNN module of the segmentation head and provide additional contextual information for robust cloud pattern identification across the globe. We trained the model on the CloudSEN12+ dataset, a comprehensive collection of over 50,000 Sentinel-2 image patches with diverse cloud scenes and expert-labeled annotations, covering various regions worldwide and providing a geographically diverse basis for training and testing cloud detection algorithms.
While our methodology demonstrates robustness in various tested locations across the globe, it is not without limitations. Overfitting issues tend to arise when the model becomes stuck at local minima during training, leading to over-predicting specific classes (e.g., predicting clouds instead of shadows). Careful hyperparameter tuning and class weighting are essential to maintain balanced performance. Augmentation techniques and logistic scaling of the Sentinel-2 bands have also proven invaluable. These methods help the model discern features closer to the edges of the data distributions, such as darker features like shadows and water, and lighter features like snow and thin/thick clouds. Overall, this study presents a promising approach to cloud screening for optical remote sensing data, and future work will focus on further refining the model and exploring its applications in operational land cover monitoring and other Earth observation tasks.
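The "logistic scaling of the Sentinel-2 bands" mentioned above is not specified in detail, so the transform below is a hedged illustration: a logistic squash with a hypothetical midpoint and slope, which stretches the dark end of the reflectance distribution (shadows, water) and compresses the bright tail (snow, thick cloud).

```python
import math

# Hypothetical logistic scaling of a reflectance value; midpoint and
# slope are illustrative assumptions, not the paper's actual parameters.

def logistic_scale(reflectance, midpoint=0.2, slope=15.0):
    return 1.0 / (1.0 + math.exp(-slope * (reflectance - midpoint)))

for r in (0.02, 0.2, 0.6):   # dark water/shadow, mid-range, bright cloud
    print(round(logistic_scale(r), 3))
```

The steep slope around the midpoint is what helps a model separate features near the edges of the data distribution, as the abstract describes.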
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: From Complex EO Data to Actionable Insights: CRISP and Insula’s Role in Sustainable Agriculture

#cloud-native

Authors: Alessandro Marin, Roberto Di Rienzo, Loris Copa, Marcelo Kaihara, Giaime Origgi
Affiliations: CGI, sarmap
The Consistent Rice Information for Sustainable Policy (CRISP) project, funded by the European Space Agency (ESA), aligns with Sustainable Development Goal (SDG) 2.4.1, aiming to enhance sustainable food production systems and resilient agricultural practices by 2030. Through advanced Earth Observation (EO) solutions, CRISP addresses critical agricultural indicators such as seasonal rice planted area, crop growing conditions, yield forecasting, and production at harvest. By integrating contributions from stakeholders like AfricaRice, GEOGLAM, GIZ, Syngenta Foundation, IFAD, WFP, and SRP, CRISP represents a collaborative and impactful initiative. To achieve its ambitious objectives, CRISP leverages Insula, CGI’s EO Platform-as-a-Service (PaaS), as a cornerstone for scalable EO data processing, integration, and analysis. Insula’s capabilities significantly simplify the challenges associated with handling large-scale EO datasets, ensuring cost-effective and efficient processing of Sentinel-1 and Sentinel-2 data. This platform enables CRISP to deploy advanced workflows and algorithms, previously tailored to specific user needs during the project's test phase, across five diverse test sites in South-East Asia, India, and Africa. A key innovation of Insula lies in its ability to provide an intuitive user interface (UI) tailored to decision-makers. This simplifies the complexity of EO data handling, making advanced geospatial analytics accessible to non-experts. Insula’s UI facilitates seamless access to high-quality agricultural intelligence while maintaining a focus on usability, enabling users to interact with, visualize, and analyze data products efficiently. This functionality is vital for CRISP’s mission to empower early adopters with actionable insights for sustainable agriculture. Additionally, Insula excels in its cloud-native architecture, designed to handle the high computational demands of large-scale EO processing. 
Its scalability ensures that the CRISP project can process massive datasets with consistent performance, accommodating the global scope of its objectives. By integrating EO best practices and leveraging multi-mission data sources, Insula helps CRISP deliver robust, reproducible, and scientifically validated results. Insula’s role extends beyond technical capabilities, fostering a collaborative ecosystem where Early Adopters actively engage with the platform. This hands-on involvement minimizes the risk of unmet expectations and facilitates the endorsement of EO-based services. The platform’s ability to harmonize diverse user requirements into streamlined workflows ensures that CRISP remains a user-centric initiative, delivering operational solutions aligned with the demands of sustainable agriculture. Through Insula, CRISP demonstrates how cutting-edge PaaS technology can transform complex EO data processing into a practical tool for achieving global agricultural sustainability. The project showcases the power of combining advanced analytics with user-oriented design to address large-scale challenges, ensuring a pathway to resilient and productive agricultural practices worldwide.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Bringing User Software to the Cloud to Scale up Earth Observation Data Processing – Demonstrating a Cloud-Implementation of the ThoMaS Software for Satellite Validation With In Situ Data

Authors: Anna-Lena Erdmann, Dr Hayley Evers-King, Claudiu Vasile Nagy, Juan Ignacio Gossn
Affiliations: EUMETSAT
The large volume of Earth Observation (EO) data is a limiting factor in data processing and information generation. The limitations are especially severe where computational resources on local machines and bandwidth for downloading large data volumes from data providers are restricted. Bringing EO data processing workflows to the cloud is one solution that enables users with computational and bandwidth restrictions to process EO data efficiently and generate value under such conditions. The serverless functions framework implemented in the context of Copernicus WEkEO offers a flexible way of bringing user software to the cloud. The serverless processing framework enables access to and processing of Copernicus data through the WEkEO Harmonized Data Access API, as well as third-party data through links to user-defined object storages. Access and processing are completed in the cloud infrastructure, delivering only the results to the users. This significantly reduces the downloaded data volume, as only the needed information finds its way to the user instead of the full raw data. The framework's parallel processing capabilities bring the potential to reduce processing time for large-scale processes. Through its federated computational setup, the WEkEO computational resources include distributed processing infrastructures (DPIs) which are located close to the data.

ThoMaS – a use case for demonstrating tangible user impact

WEkEO's serverless function framework is applied to the ThoMaS software to showcase the benefit of cloud-based processing for real-world users and applications. ThoMaS is a software package developed by EUMETSAT which collects Ocean Colour (OC) data from various remote sensors (Sentinel-3/OLCI, MODIS, VIIRS, PACE) as well as in situ sources (AERONET-OC, MOBY) and ancillary information from forecast/reanalysis models (from ECMWF) to validate satellite data with in-situ measurements of ocean parameters.

For many users who collect in situ OC measurements, data download speed and laptop resources are a challenge when comparing these measurements with large volumes of satellite data. Implementing the ThoMaS software on the cloud for the case of Sentinel-3/OLCI data reduces the data volume downloaded to the user's local system from up to 800 MB per scene and in-situ sample to only a few KB. Using the WEkEO serverless function framework, we demonstrate a use case of enabling users to scale EO data processing workflows while still using their specialized software. The presentation will explore lessons learned from the integration of the software in the cloud-based framework, and an evaluation of the user benefit. Testing the DPIs, an assessment of efficiency gains is undertaken to evaluate the effect of data-proximate processing in such a framework. Options for scaling these workflows are discussed in the user context and with regard to the design and implementation of the WEkEO service.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Synthetic Data to enable AI for the environment and beyond

Authors: Maya Pindeus, Felix Geremus, Dr. Lydia Abady
Affiliations: Another Earth
Over 100,000,000 gigabytes of Earth observation data are collected every day. These vast and rapidly growing amounts of data hold immense potential to address some of the most pressing environmental challenges of our time, from climate resilience to resource management. However, leveraging this unstructured data effectively and at scale is one of the largest challenges for AI in Earth Observation. Artificial Intelligence has the potential to transform Earth Observation data into actionable insights at scale, enabling enhanced environmental monitoring and decision-making. Yet the success of AI in Earth Observation hinges on one critical factor: high-quality training datasets. The Earth's surface is extraordinarily complex and diverse, presenting significant challenges for AI model development. Training datasets account for over 90% of an AI model's performance, making them the key success factor for AI. Despite the vast volumes of raw EO data, creating high-quality labeled datasets remains a costly, labor-intensive, and error-prone process. Key challenges facing AI training data for Earth Observation include 1) missing or inadequate data, especially for rare objects, rare events, or underrepresented regions like the Global South; 2) data bias, variance, and accuracy issues due to incomplete or imbalanced datasets; and 3) varying sensor types and the resulting domain gap. Synthetic data has the potential to address these challenges by generating unlimited amounts of high-resolution, pixel-perfect training data tailored to specific needs, eliminating the barriers for AI in Earth Observation. Synthetic data enables the creation of consistent, diverse, and accurately labeled datasets that mitigate bias and variance, ensuring better-performing AI models. Moreover, synthetic data solutions can address the domain-gap issue inherent to EO. Satellite imagery varies significantly between sensors, requiring retraining of models for each sensor type. 
Synthetic data bridges this gap by creating training datasets compatible across various sensor types, enabling seamless adaptability to new and diverse sources of imagery. This presentation explores the potential of synthetic data to revolutionize AI in Earth Observation, providing scalable solutions that overcome critical data challenges. By unlocking the full potential of AI, synthetic data promises to accelerate the $266 billion EO market toward a projected $700 billion by 2030, while driving transformative advancements in climate resilience and sustainable development.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Leveraging Insula for Advanced Eutrophication Monitoring in Albania and Tanzania

#cloud-native

Authors: Alessandro Marin, Roberto Di Rienzo, Paola Di Lauro, Giulio Ceriola
Affiliations: CGI, Planetek
The Eutrophication Monitoring (Eu-Mon) project contributes to Sustainable Development Goal (SDG) 14.1.1, which seeks to reduce marine pollution, including nutrient loads. Collaborating with stakeholders such as the University of Tanzania and the Resource Environmental Center (REC) Albania, the project targets ecologically significant test and pilot sites: Shengjin Bay and Durres Bay in Albania, as well as the Zanzibar Channel and Mafia-Rufiji Channel in Tanzania. These areas, impacted by nutrient loads from rivers, estuaries, and coastal activities, require advanced monitoring techniques to assess water quality and environmental health effectively. To meet these challenges, the project utilizes Insula, CGI's Earth Observation (EO) Platform-as-a-Service (PaaS), as a solution. Insula provides a robust framework for integrating EO data and simplifying the generation of key eutrophication indicators, such as Chlorophyll, Turbidity, and Water Transparency. The platform offers seamless access to Sentinel-3 OLCI Level 2 WFR data via its deployment within the Copernicus Data Space Ecosystem (CDSE), enabling rapid and efficient dataset retrieval. Insula's flexibility extends to the integration of custom processors tailored to project-specific needs, allowing for the precise extraction of environmental indicators from EO data. The platform's ability to perform large-scale processing campaigns has been pivotal for the Eu-Mon project. For example, it processed data spanning six years (2017–2022) over large geographical areas in Albania and Tanzania, completing more than 4,800 processing jobs. Utilizing managed Kubernetes solutions within CDSE, Insula dynamically scaled resources to handle intensive computational demands efficiently. This scalability ensures that even vast datasets are processed with reliability and speed, supporting long-term trend analysis critical for understanding eutrophication dynamics. 
Insula’s cloud-native architecture enhances its capacity to analyze extensive time series data, uncovering patterns and trends essential for informed decision-making. Its intuitive user interface empowers stakeholders to monitor and manage processing campaigns transparently, offering detailed insights into progress and outputs. By providing user-friendly tools for analyzing environmental conditions, Insula bridges the gap between advanced EO analytics and actionable policy-making. Through its deployment in the Eu-Mon project, Insula has demonstrated its transformative potential to support sustainable coastal ecosystem management in Albania and Tanzania. By enabling the generation of high-quality indicators that inform targeted policies and interventions, the platform contributes significantly to addressing global environmental challenges. Insula’s innovative approach highlights the critical role of EO technologies in achieving SDG 14.1.1 and advancing global efforts to mitigate the impacts of eutrophication on vulnerable marine ecosystems.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: Semi-supervised crop classification using auxiliary learning of biophysical variables

Authors: Samuel Bancroft, Anthony Cohn, Netta Cohen, Julia Chatterton, Andrew Challinor
Affiliations: University of Leeds, Leeds, Unilever, Safety and Environmental Assurance Centre (SEAC)
Accurate crop type classification is essential for agricultural monitoring and has significant economic and ecological implications. However, deep learning approaches to this task face limitations due to the scarcity of labelled remote sensing data. In this work, we introduce a semi-supervised learning framework that leverages an auxiliary task—crop trait regression—to improve the robustness and generalisability of crop classification models. By integrating biophysical variable estimation into the learning process, our approach enhances feature representation, capturing distinct crop characteristics and trait patterns through time. Our results indicate that our semi-supervised method outperforms fully supervised baselines, demonstrating superior classification accuracy and generalisation across diverse conditions. Additionally, we introduce EngScotCrop, a new large-scale, open-access annotated dataset of crop types and traits, combining multispectral and SAR data for England and Scotland, providing a benchmark for future research in remote sensing-based crop type classification. This work highlights the potential of auxiliary learning to address data scarcity challenges and advance remote sensing applications in agriculture.
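The auxiliary-learning setup described above implies a multi-task objective: a crop-classification loss plus a weighted crop-trait regression loss. The sketch below is a hedged illustration of that combination in plain Python; the weight and loss values are hypothetical, and the paper's actual loss formulation is not given in the abstract.

```python
# Hedged sketch of a multi-task objective for auxiliary learning:
# total = classification loss + aux_weight * trait-regression loss.
# aux_weight and the example values are illustrative assumptions.

def combined_loss(cls_loss, trait_loss, aux_weight=0.5):
    return cls_loss + aux_weight * trait_loss

# Labelled samples contribute both terms; in a semi-supervised setting,
# unlabelled samples can still contribute the auxiliary trait term alone.
total = combined_loss(cls_loss=0.75, trait_loss=0.5)
print(total)  # 1.0
```

The design intuition is that the trait-regression term shapes the shared feature space even where crop-type labels are scarce.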
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U-V)

Poster: From GeoTIFF to Zarr: Virtualizing a Petabyte-Scale SAR Datacube for Simple, Scalable, and Efficient Workflows

#zarr #cog #pangeo

Authors: Clay Harrison, Wolfgang Wagner, Christoph Reimer, Florian Roth, Bernhard Raml, Sebastian Hahn, Matthias Schramm
Affiliations: TU Wien, Research Unit Remote Sensing, Department of Geodesy and Geoinformation, EODC Earth Observation Data Centre for Water Resources Monitoring GmbH
The technological landscape of big geodata has rapidly evolved, with the "Pangeo stack" emerging as a leading paradigm. This ecosystem, built around data formats like Zarr and Cloud-Optimized GeoTIFF (COG), and processing tools like Xarray and Dask, enables more portable, scalable, reproducible, and FAIR workflows. However, adapting large preexisting data pools to these standards can be logistically challenging, especially when they are integral to numerous existing pipelines and services. Exemplifying this challenge is the petabyte-scale Sentinel-1 Backscatter Datacube, hosted at Vienna's Earth Observation Data Center (EODC) and crucial for global-scale SAR analysis, such as the Copernicus-run Global Flood Monitoring (GFM) and Copernicus Land Monitoring Service (CLMS). Current access and processing rely on custom Python packages, which require ongoing maintenance as both the Datacube and Python itself evolve. A conversion of the Datacube to Zarr format would reduce this maintenance burden by allowing direct use of Xarray for all data access, selection, and processing, and Dask for parallelization and scaling. It would also facilitate uptake of future advancements in the Pangeo ecosystem. However, the costs of duplicating the massive dataset or quickly rewriting existing pipelines to suit a new data format are prohibitive for the time being. We have successfully implemented a basic solution to this problem, using the Python package "fsspec" to create a Reference File System that indexes existing GeoTIFF files according to Zarr structure. The RFS exists as a single JSON file which tells Xarray how to access the Datacube as if it were in Zarr format, simplifying access without physical data conversion. Such a virtual Zarr archive allows a gradual transition of existing downstream pipelines to the new format while leaving upstream pipelines untouched. However, significant challenges remain in optimizing this approach for a petabyte-scale cube of raster files in swath format. 
Our ongoing work focuses on addressing these challenges, particularly in representing the cube's time dimension. We explore various strategies for time representation, considering trade-offs between precision, sparsity, and computational efficiency with respect to several workflows representative of access patterns. Additionally, we investigate the implications of preexisting blocked compression strategies of legacy TIFF files on the overall performance of our approach. This contribution will discuss our successes in implementing the basic fsspec solution, the challenges encountered in scaling to petabyte-level SAR data with irregular timestamps, and our progress in addressing these issues. We aim to contribute insights into adapting legacy data structures to modern, cloud-optimized analysis paradigms, potentially informing similar efforts with other large-scale geospatial datasets.
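The core of the Reference File System idea is a JSON mapping from Zarr chunk keys to [url, byte_offset, byte_length] triples pointing into the existing GeoTIFFs. The sketch below builds such a mapping in plain Python; the variable name, file names, offsets, and lengths are made up for illustration, and a real index would be generated by scanning the TIFF tiles (e.g. with fsspec/kerchunk tooling) rather than written by hand.

```python
import json

# Hypothetical reference mapping: Zarr metadata keys hold JSON documents,
# while chunk keys like "var/t.y.x" point to byte ranges inside existing
# GeoTIFF files, so no data is duplicated or rewritten.

def add_chunk(refs, var, t, y, x, url, offset, length):
    """Register one Zarr chunk as a byte range in an existing file."""
    refs[f"{var}/{t}.{y}.{x}"] = [url, offset, length]
    return refs

refs = {
    "sig0/.zarray": json.dumps({"shape": [2, 256, 256],
                                "chunks": [1, 256, 256]}),
}
add_chunk(refs, "sig0", 0, 0, 0, "s1_20200101.tif", 8192, 65536)
add_chunk(refs, "sig0", 1, 0, 0, "s1_20200113.tif", 8192, 65536)

print(sorted(refs))
```

Serialized to a single JSON file, such a mapping can be exposed through fsspec's "reference" filesystem so that Xarray reads the cube as if it were Zarr, which is the virtual-archive approach the abstract describes.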
Add to Google Calendar

Thursday 26 June 17:45 - 18:30 (ESA Agora)

Session: A.05.09 Presenting WGORC: the new Working Group on Observations for Researching Climate.

A new WCRP working group, WGORC, has recently been established under the guidance of the ESMO Scientific Steering Group to address observational data needs for specific climate research cases. This group will collaborate with observational communities, data providers, satellite agencies and other partners to address data provision bottlenecks, establish standards, connect various stakeholders, and improve observational data practices across WCRP. By improving dataset curation, identifying gaps, and proposing solutions, WGORC aims to advance Earth System research. This event will serve as an introduction to WGORC's mission and foster collaborative discussions to drive these objectives forward.

Speakers:


  • Claire Macintosh
  • Amy Docherty
  • Jörg Schultz
  • Chris Smith
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: C.02.09 - POSTER - Preparing for the FLuorescence EXplorer (FLEX) mission

The FLEX mission, selected by ESA as the 8th Earth Explorer, aims at providing an improved understanding of the functioning and photosynthetic efficiency of vegetation from space. FLEX will be the first satellite mission to facilitate quasi-direct observation of the cause-effect relationships between plant health, photosynthesis and vegetation functioning. FLEX will provide repeated observations along the phenological cycle at a spatial scale supporting agricultural and forestry management units. In this session, the status of the implementation of the FLEX mission will be presented, including preparation activities towards the operational phase. Presentations will focus on the instrument performances, the algorithm developments, the calibration and validation activities, and selected results demonstrating the expected impact of the FLEX products.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Imaging Spectroscopy of Scots Pine (Pinus sylvestris L.) Seedlings During a Long-Term Drought Stress and Recovery Experiment

Authors: Eva Neuwirthová, Dr. Zuzana Lhotáková, Dr. Jan Stejskal, Dr. Jaroslav Čepl, Dr. Miroslav Pikl, Dr. Adrian Moncholí-Estornell, Prof. Jana Albrechtová, Prof. Milan Lstibůrek, Dr. Shari Van Wittenberghe
Affiliations: Image Processing Laboratory, Laboratory of Earth Observation, University of Valencia, Department of Experimental Plant Biology, Faculty of Science, Charles University, Department of Forest Genetics and Physiology, Faculty of Forestry and Wood Sciences, Czech University of Life Sciences Prague, Global Change Research Centre, Academy of Sciences of the Czech Republic
Scots pine (Pinus sylvestris L.) is known for its low ecological requirements, wide distribution, and phenotypic plasticity under drought stress, making it ideal for studying physiological plasticity. As an isohydric species, Scots pine responds to drought with stomatal closure and reduced transpiration and photosynthesis, relying on carbon reserves. Given the species' genetically determined variability of photosynthetic performance, we expect other physiological parameters in seedlings to vary across populations during drought and recovery phases. Fluorescence energy emission can be monitored during drought stress to assess changes in photosynthetic activity and plant health. This trait can be detected non-destructively through optical properties, enabling real-time assessment of plant stress and recovery. These optical properties of foliage (reflectance and fluorescence) provide non-destructive insights into plant physiology and can be evaluated using imaging techniques in high-throughput phenotyping (HTP). While HTP is widely applied to crops, and occasionally to deciduous seedlings, conifers have received less attention, probably due to challenges related to leaf morphology. We monitored upland and lowland Scots pine ecotypes from three seed orchards in the Czech Republic. In a controlled greenhouse experiment, we induced drought stress in two-year-old seedlings and studied their response to water deficit and recovery during shoot development from February to August 2022. A high-throughput phenotyping facility with an automated irrigation system was used to assess leaf physiological traits over 164 days at multiple time points. The facility was equipped with imaging sensors for RGB, thermal, fluorescence, and hyperspectral scanning. Hyperspectral imaging (HSI) was performed using two cameras: one for the VNIR range (450-900 nm) and another for the SWIR range (900-1700 nm), providing reflectance at the individual seedling level. 
Physiological parameters analyzed in parallel included quantum yield (QY), non-photochemical quenching (NPQ), relative chlorophyll content (Rel_Chl), and needle temperature difference (Delta_T). We aim to estimate the increase in NPQ and decrease in QY in drought-treated plants, along with their subsequent recovery, using hyperspectral image data (450-900 nm), and to verify the segmentation of plant organs with a small projection area. Our study also aims to capture the dynamic response of photosynthetic traits during the early stress and recovery phases at the Scots pine population level. Understanding the dynamics of drought stress in fluorescence traits (NPQ, QY), linked to changes in the red edge of the hyperspectral data, has the potential to be scaled up to larger conifer forests. Furthermore, we aim to identify the optimal optical sensor for detecting physiological functional traits that respond to early drought stress among three Scots pine populations. Conifers play a crucial role in the sustainability of forest ecosystems, but the small projection area of their needles makes it challenging to process their optical properties for validating remotely acquired image data. Moreover, early detection of drought stress at the ecosystem level in conifers offers a valuable advantage for forest management, especially in addressing the ongoing dieback of forest ecosystems in Central Europe. Selection of seedlings based on functional physiological traits holds significant promise for identifying quality planting material at an early age, especially in view of the increasing problems caused by drought in the context of climate change.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Implementation of WAFER for the retrieval of sun-induced chlorophyll fluorescence from airborne spectroscopy data

Authors: Sergio Roy Martínez, Prof. Dr. Alexander Damm
Affiliations: University of Zurich
The Fluorescence Explorer mission is currently under development and will, after its launch in the near future, provide global maps of sun-induced chlorophyll fluorescence (SIF). SIF is complementary to existing vegetation information and allows assessments of vegetation cover and health. It has also been demonstrated that, in combination with ecosystem models, SIF allows the study of complex ecosystem processes including photosynthesis, primary productivity, and evapotranspiration. Such sophisticated applications require high accuracy of SIF retrievals with associated detailed uncertainty budgets. Since various factors affect SIF retrieval accuracy (e.g. instrumental characteristics, atmospheric effects, vegetation structure), cal/val activities intended to provide reliable uncertainty budgets are challenging. A particular problem is the need for a robust SIF retrieval method that can be applied scale-independently (i.e. from ground to satellite) and relies on as few assumptions as possible. A novel SIF retrieval method, known as WAFER (WAvelet decomposition FluorEscence Retrieval), was recently proposed to address this need. This method takes advantage of the different spectral characteristics of reflected radiance and SIF to disentangle the two components and to retrieve true reflectance. WAFER was developed and successfully tested using simulated data and measured in-situ data from the FloX instrument (JB Hyperspectral). In these top-of-canopy settings, atmospheric interference can be almost neglected since the distance from the surface to the sensor is small. For application to airborne or satellite data, WAFER's sensitivity to atmospheric effects and its consistency with in-situ retrievals need to be assessed. This contribution i) demonstrates the implementation of WAFER to retrieve SIF from airborne spectroscopy data, and ii) evaluates the sensitivity of retrieved SIF to atmospheric disturbances.
We first describe the methodology and apply WAFER to airborne HyPlant data acquired over an agricultural site in Oensingen, Switzerland. Retrieved SIF maps are compared to SIF maps provided by the state-of-the-art Spectral Fitting Method (SFM) and improved Fraunhofer Line Depth (iFLD) methods. Our analysis examines retrieval performance over non-fluorescent surfaces and different crop types, providing a complete and comprehensive comparison of the three methods. We report a general agreement across the methods, with mean differences of 0.27 mW m−2 nm−1 sr−1 between WAFER and SFM and 0.38 mW m−2 nm−1 sr−1 between WAFER and iFLD over the study area. We observe interesting across-track differences in retrieved SIF, indicating different sensitivities of the three methods to instrumental effects and observation geometry. We quantify the sensitivity of WAFER-based SIF retrievals to variables of the atmospheric correction (e.g. aerosol visibility, water vapor column, surface altitude and sensor height) and evaluate an empirical bare-soil correction to efficiently compensate for remaining deviations caused by artefacts of the sensor characterization and the description of the atmosphere.
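The scale-separation principle that a wavelet-based retrieval exploits can be illustrated with a toy Haar decomposition. This is a generic sketch, not the WAFER implementation: sharp absorption features in the reflected radiance concentrate in fine-scale detail coefficients, while a spectrally smooth signal such as SIF survives mainly in the coarse approximation.

```python
# Toy Haar wavelet decomposition (not the actual WAFER algorithm):
# sharp spectral features end up in the fine-scale detail coefficients,
# smooth components in the coarse approximation.

def haar_decompose(signal, levels):
    """Multi-level Haar transform; len(signal) must be divisible by 2**levels."""
    approx, details = list(signal), []
    for _ in range(levels):
        a = [(approx[2 * i] + approx[2 * i + 1]) / 2 for i in range(len(approx) // 2)]
        d = [(approx[2 * i] - approx[2 * i + 1]) / 2 for i in range(len(approx) // 2)]
        details.append(d)  # finest scale first
        approx = a
    return approx, details
```

A smooth (SIF-like) spectrum yields near-zero detail coefficients at all scales, whereas a narrow absorption line produces a large fine-scale detail whose depth scales with the reflected radiance; this contrast is what wavelet-based methods use to disentangle the two components.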

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Atmospheric Correction of INTA’s Airborne Chlorophyll Fluorescence Sensor (CFL) Supporting FLEX Cal/Val Campaigns

Authors: Laura Carretero, Eduardo de Miguel, Juanjo Peón, Neus Sabater, Marcos Jiménez
Affiliations: Instituto Nacional de Técnica Aeroespacial (INTA), Finnish Meteorological Institute (FMI)
In 2019, the Spanish National Institute for Aerospace Technology (INTA) acquired the high-resolution Chlorophyll Fluorescence sensor (CFL, Headwall) to join the European scientific community involved in the retrieval of solar-induced chlorophyll fluorescence (SIF) using remote sensing techniques. CFL is a pushbroom sensor with a field of view of 23.5° and 1600 across-track spatial pixels, collecting data from 670 nm to 780 nm in up to 2160 spectral pixels to ensure a spectral resolution better than 0.2 nm. The CFL processing chain is continuously updated to generate L1 (georeferenced at-sensor radiance) and L2 (georeferenced ground reflectance and top-of-canopy fluorescence) products. The SpaFLEX project, funded by the Spanish Ministry of Science and Innovation, aims to develop and implement the Spanish Calibration and Validation (Cal/Val) plan for the FLuorescence EXplorer (FLEX) mission. This plan will standardize Cal/Val protocols, establish a coordinated network of Cal/Val sites across Spanish territory and estimate uncertainty budgets for Level-2 products. In the framework of this project, an analysis of the CFL atmospheric correction has been performed in order to improve the Bottom Of Atmosphere (BOA) reflectance retrieval. The atmospheric correction of very high resolution spectrometers is a challenging task but essential to obtain accurate SIF measurements. The ground-based and airborne CalValFLEX campaign was carried out in July 2020 to evaluate the Spanish Cal/Val protocols for the future FLEX mission at an experimental agricultural site in Barrax, Spain. Airborne data were acquired with CFL, and simultaneous top-of-canopy measurements were collected in situ with an ASD FieldSpec 3 spectroradiometer. In this study, CFL data are atmospherically corrected and the accuracy of the BOA reflectance is assessed by comparing it against the ASD FieldSpec 3 data. Water vapor and aerosol optical depth were obtained in situ by a CIMEL CE-318 sun photometer.
Additionally, the inversion of the simulated at-sensor radiance for different scenarios is performed in order to evaluate the accuracy of the inversion model, and the sensitivity of the results to the spectral characterization is analysed. The atmospheric transfer functions required for the atmospheric correction are obtained using the libRadtran (library for Radiative transfer) and MODTRAN6 (MODerate resolution atmospheric TRANsmission) atmospheric radiative transfer codes (RTCs). Line-by-line and high-resolution band model (0.1 cm−1 in MODTRAN and 1 cm−1 in libRadtran) results are compared for both RTCs, and the uncertainty sources of the atmospheric correction are discussed.
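As a minimal illustration of the inversion step described above, a simplified Lambertian atmospheric-correction equation can be sketched as follows. This is a generic sketch, not INTA's processing chain; the variable names are illustrative, and the atmospheric terms (path radiance, upward transmittance, ground irradiance, spherical albedo) would come from the RTC runs.

```python
import math

def boa_reflectance(l_toa, l_path, t_up, e_ground, s_alb):
    """Invert at-sensor radiance to BOA reflectance (simplified Lambertian case).
    l_toa    : at-sensor radiance
    l_path   : atmospheric path radiance (from the RTC)
    t_up     : total upward transmittance
    e_ground : total irradiance at ground level
    s_alb    : spherical albedo of the atmosphere
    """
    a = math.pi * (l_toa - l_path) / (t_up * e_ground)
    # Account for multiple surface-atmosphere scattering via the spherical albedo
    return a / (1.0 + s_alb * a)
```

The round trip with the corresponding forward model is exact, which makes such an inversion easy to verify against RTC simulations.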

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Accounting for dynamic light absorption and emission properties due to regulated energy dissipation: a bottom-up spectral fitting strategy for early stress detection

Authors: Dra. Shari Van Wittenberghe, Dr. Adrián Moncholí-Estornell, Sara Pescador-Dionisio, Clara Garcia-Martinez, M. Pilar Cendrero-Mateo, Ana Belén Pascual-Venteo, Eva Neuwirthová, José Moreno
Affiliations: University Of Valencia
The detection of the actual photosynthetic performance of vegetation based on imaging spectroscopy relies on a better understanding and modelling of the dynamic adaptation of vegetation under scenarios of excessive energy. To find a balance between harvesting energy and protecting against its excess, all plants employ flexible thermal, or non-photochemical, energy dissipation mechanisms. To facilitate this regulated energy dissipation, plants are known to alter both the chemical and structural properties within the photosynthetic antenna on both the short and the long term. However, the spectral responses and changes occurring upon these adaptations remain poorly understood. Within the PHOTOFLUX project (ERC-2021-STG grant no. 101041768), the in vivo adjustments in both absorption and emission triggered under excessive energy are studied through proximal (imaging) spectroscopy techniques. The goal is to directly measure and model the dynamic light absorption and fluorescence emission properties which alter the leaf and canopy reflectance and fluorescence signals. For this we use a bottom-up approach, in which the photosynthetic pigments, and especially the xanthophylls, play a fundamental role and are the key object of study. Our experimental results from the leaf and canopy level show that plants alter their absorption and emission properties in a subtle way (on the order of 1-2%) which can be detected in the 500-800 nm range. Specific spectral features are consistently monitored, indicating an overall absorption shift towards lower-energy, or energy-dissipative, states during excessive energy exposure. They mark the transition of plants from an energy-unquenched to an energy-quenched state. These distinct features and shifts can be monitored in the so-called PRI region (500-600 nm) as well as in the red-edge region (690-750 nm). Moreover, we were able to observe two different quenched states.
A quick transition to a quenched state appears with only a low amount of excessive irradiance, even for healthy control plants. A second, slower and further red-shifted absorption transition appears with higher amounts of excessive irradiance, and becomes more pronounced when the vegetation is deprived of water or nutrients. Through the description and Gaussian-based modelling of the observed absorption patterns, a dynamic spectral endmember fitting strategy is proposed for the unmixing of the standard pigment and dynamic features. The application and results of this spectral unmixing strategy are shown as a required complement to the retrieved fluorescence information for the detection of early stress in the photosynthetic light reactions. In further preparation for the upcoming imaging spectroscopy FLEX (Fluorescence Explorer) mission, the proposed spectral unmixing strategy requires testing on real scenes and (mixed-pixel) scenarios, accounting for both the dynamic vegetation spectral behavior due to regulated heat dissipation and the different contributions of additional background variability.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: SpaFLEX: Evaluation of Systematic and Random Uncertainties in Sun-Induced Chlorophyll Fluorescence and Reflected Radiance Retrievals in the Framework of the Spanish FLEX Cal/Val Campaigns

Authors: Dr. Juanjo Peón, Dr. Marcos Jiménez, Dr. María Pilar Cendrero-Mateo, Óscar Gutiérrez de la Cámara, Dr. Adrián Moncholí-Estornell, Dr. Ricardo Díaz-Delgado, Javier Gorroño, Félix Muñoz, Laura Carretero, Dr. Pedro J. Gómez-Giráldez, David Aragonés, Dr. Shari Van Wittenberghe, Prof. José Moreno
Affiliations: National Institute of Aerospace Technology (INTA), University of Valencia (UV), Doñana Biological Station, Spanish National Research Council (EBD-CSIC), Universitat Politècnica de Valencia
Sun-induced chlorophyll fluorescence (SIF) and ground reflected radiance of vegetation will be key operational Level-2 products provided by the upcoming European Space Agency (ESA) Fluorescence Explorer-Sentinel 3 (FLEX-S3) mission. The reliability of these products needs to be evaluated through a validation process comparing satellite products with fiducial reference data from the Earth's surface. In order to meet ESA's uncertainty requirements for Level-2 products, the SpaFLEX project, funded by the Spanish Ministry of Science and Innovation, aims to develop and implement a comprehensive Calibration and Validation (Cal/Val) plan for the FLEX-S3 mission. This plan will define Cal/Val test sites, UAV and airborne instrument characterization, fiducial measurements, ground measurements and sampling protocols, and uncertainty budgets for Level-2 products. Following the metrological approach outlined in the Guide to the Expression of Uncertainty in Measurement (GUM), the uncertainties estimated in SpaFLEX are based on an uncertainty tree diagram, where the different sources of uncertainty are considered for a complete characterization of field, UAV, and airborne instrumentation. This characterization establishes a metrological reference through an unbroken chain of calibrations and indoor and outdoor intercomparisons. The uncertainty propagation to SIF and surface reflected radiance for the 300 m x 300 m area representing a FLEX pixel is performed using the Law of Propagation of Uncertainties and Monte Carlo methods. With the aim of improving the uncertainty assessment, this work presents a separate evaluation of the uncertainties associated with systematic and random effects, using the Punpy tool (Propagating Uncertainties with Python), which is part of the open-source CoMet Toolkit (Community Metrology Toolkit).
The systematic and random uncertainties in the spatial upscaling of field point measurements to the coarse pixel scale, including spatial representativeness errors across the different SpaFLEX test sites, still need to be addressed. Through repeatability measurements following the GUM approach, performed against an integrating sphere and gas lamps at the laboratory of the Spanish National Institute for Aerospace Technology (INTA), the uncertainties in radiance (L) and irradiance (E) measurements of the FloX and Piccolo instruments were propagated, considering the correlation between them. Additionally, we propagated both systematic and random components during sampling operations, using spectroradiometric data from two field campaigns carried out in Spain: one over a homogeneous Festuca plot in Las Tiesas, Barrax, in 2020, and another over the holm oak forest at the Sarrión Cal/Val test site in Teruel in 2024. To obtain the uncertainties in the SIF estimation, those associated with Level-1 L and E were propagated through the SpecFit SIF retrieval algorithm, using the Monte Carlo method, and considering the spectral autocorrelation of FloX and Piccolo bands and the correlation between E and L.
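The Monte Carlo propagation of correlated L and E uncertainties can be sketched generically. The sketch below substitutes a simple two-band FLD estimator for the actual SpecFit algorithm, and all names and numbers are illustrative assumptions:

```python
import math, random, statistics

def fld_sif(e_in, l_in, e_out, l_out):
    # Two-band Fraunhofer Line Discrimination estimator: solves
    # L = r*E/pi + SIF at a band inside and outside an absorption feature.
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

def propagate_mc(e_in, l_in, e_out, l_out, u_rel=0.02, rho=0.8, n=20000, seed=42):
    """Monte Carlo propagation of correlated, systematic E/L uncertainties.
    u_rel: relative standard uncertainty; rho: correlation between E and L."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        z_l = rho * z1 + math.sqrt(1 - rho ** 2) * z2  # L draw correlated with E draw
        # Systematic effect: the same perturbation applies to both bands
        e_i, e_o = e_in * (1 + u_rel * z1), e_out * (1 + u_rel * z1)
        l_i, l_o = l_in * (1 + u_rel * z_l), l_out * (1 + u_rel * z_l)
        samples.append(fld_sif(e_i, l_i, e_o, l_o))
    return statistics.mean(samples), statistics.stdev(samples)
```

Random (band-uncorrelated) effects would instead draw an independent perturbation per band; running the two variants separately is what allows systematic and random contributions to be reported independently.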

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Towards Satellite SIF Validation: SIF and Active Chlorophyll Fluorescence Measurements in Sodankylä, Finland

Authors: Marika Honkanen, Tommaso Julitta, Hannakaisa Lindqvist, Alasdair MacArthur, Neus Sabater, Tea Thum
Affiliations: Finnish Meteorological Institute, JB Hyperspectral Devices GmbH, Finnish Meteorological Institute, Laboratory for Earth Observation, Image Processing Laboratory, Dept. of Earth Physics and Thermodynamics, University of Valencia
Solar-induced chlorophyll fluorescence (SIF) is an optical signal emitted by plants during photosynthesis. It provides insights into plant photosynthetic activity and carbon uptake, making it a valuable tool for understanding ecosystem function and the global carbon cycle. Satellite instruments such as OCO-2 and Sentinel-5P TROPOMI can be used to retrieve SIF, and future missions such as FLEX are set to enhance these observations. However, satellite SIF products require rigorous ground-based validation to ensure accuracy and precision. At the Sodankylä Arctic Space Centre, we operate instruments that measure both SIF and active chlorophyll fluorescence (aChlF) in a boreal forest. Combining SIF and aChlF provides complementary insights, with SIF capturing canopy-level photosynthetic activity under natural conditions and aChlF offering detailed physiological information on plant stress and photosystem efficiency. The active observations offer insight into the leaf-level dynamics that cannot be accessed via the passive observations alone. Our tower-based Piccolo Doppio instrument began measuring SIF in April 2024, building on earlier data collected with the FloX instrument during the 2021 growing season. Needle-scale active ChlF measurements were initiated in the summer of 2023. Additionally, we conducted drone-based SIF measurements in 2020, and we have the capability to continue these measurements with the Piccolo Doppio instrument. We have conducted early comparisons between TROPOMI SIF and tower-based SIF measurements. Additionally, we have used drone-based measurements to examine the spectral extinction of SIF signals in the atmosphere and to study how SIF varies with vegetation type (such as pine forest versus birch forest) and across seemingly similar areas (Honkanen et al., 2024). These analyses provide valuable insights into the variability of SIF within a single satellite pixel.
Furthermore, we have compared different SIF retrieval algorithms for tower-based data and identified discrepancies in the springtime SIF retrievals, which calls for a more detailed algorithm intercomparison and may support preparations for satellite observations with the FLEX mission.
Reference: Honkanen, M., Heikkinen, P., MacArthur, A., Thum, T., Kivi, R., Lindqvist, H. (2024). UAV-Borne Measurements of Solar-Induced Chlorophyll Fluorescence (SIF) at a Boreal Site. In: Westerlund, T., Peña Queralta, J. (eds) New Developments and Environmental Applications of Drones. FinDrones 2023. Springer, Cham. https://doi.org/10.1007/978-3-031-44607-8_8

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Bayesian solar-induced fluorescence retrieval algorithm for remote sensing of vegetation (SIFFI)

Authors: Antti Kukkurainen, PhD Antti Lipponen, PhD Ville Kolehmainen, PhD Antti Arola, Neus Sabater
Affiliations: Finnish Meteorological Institute, University of Eastern Finland
Accurate retrieval of solar-induced chlorophyll fluorescence (SIF) from satellite observations is essential for advancing our understanding of the processes related to plant photosynthesis and for monitoring ecosystem health at a global scale. Many current satellite-derived SIF products focus primarily on specific narrow spectral regions, such as the solar Fraunhofer lines or oxygen absorption bands. However, accurate retrieval of the full SIF spectrum, spanning from 650 to 800 nm, remains a challenging task due to the complex interactions between the atmosphere and the surface of the Earth. Uncertainty in atmospheric state parameters, such as aerosol optical depth (AOD) and water vapor, can introduce a large bias in satellite-level SIF estimates when attempting to retrieve SIF outside the specific narrow absorption bands. Therefore, a retrieval algorithm that aims to retrieve the full SIF emission spectrum will depend on the accuracy achieved in the atmospheric correction step. To address these challenges, we have developed a novel SIF retrieval algorithm known as SIFFI. SIFFI is designed to retrieve the full SIF emission spectrum and surface reflectance without imposing parametric shape constraints on the retrieval, thus offering great flexibility. SIFFI can be applied at both top-of-canopy (TOC) and top-of-atmosphere (TOA) levels, broadening its utility across different measurement systems. To assess the retrieval accuracy of SIFFI, we used TOC measurements acquired by a fluorescence box (FloX) instrument as well as simulated TOA observations. The simulated observations were contaminated with realistic levels of instrument noise to mimic real-world conditions, while atmospheric parameters (e.g., atmospheric aerosols) were assumed known. The results demonstrated that SIFFI achieves high retrieval accuracy.
We also present the latest upgrade to the SIFFI algorithm, which aims to mitigate the effects of incorrect or inaccurate knowledge of the atmospheric state during the retrieval process. The technique builds on the Approximation Error method, which marginalizes over the poorly known auxiliary parameters in the forward model. The development of SIFFI represents a major step forward in the field of satellite-based SIF retrievals, providing a robust approach exploiting the full spectral information from the red and near-infrared regions.
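In its simplest scalar form, the marginalization idea can be sketched as follows. This is a generic illustration rather than the SIFFI implementation: a nuisance parameter theta (say, an atmospheric variable) enters an invented linear forward model, and its uncertainty is folded into the effective noise variance before a Gaussian inversion.

```python
def posterior(y, a, var_noise, prior_mean, prior_var):
    """Gaussian posterior mean/variance for y = a*x + e, e ~ N(0, var_noise)."""
    post_var = 1.0 / (a * a / var_noise + 1.0 / prior_var)
    post_mean = post_var * (a * y / var_noise + prior_mean / prior_var)
    return post_mean, post_var

def retrieve_with_approximation_error(y, a, b, var_noise,
                                      theta_mean, theta_var,
                                      prior_mean, prior_var):
    # Forward model y = a*x + b*theta + e, with theta poorly known.
    # Marginalizing theta: y - b*theta_mean = a*x + (e + b*(theta - theta_mean)),
    # so the effective noise variance grows by b**2 * theta_var.
    y_eff = y - b * theta_mean
    var_eff = var_noise + b * b * theta_var
    return posterior(y_eff, a, var_eff, prior_mean, prior_var)
```

Compared with fixing theta at its nominal value, the marginalized retrieval returns an honestly inflated posterior variance, so a biased nominal theta no longer produces overconfident estimates.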

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: SpaFLEX: Spatial Heterogeneity Analysis and Optimized Field Sampling Design Using Sentinel-2 Imagery

Authors: Pedro Gómez-Giráldez, David Aragonés, Dr. Marcos Jiménez, Mª Pilar Cendrero-Mateo, Shari Van Wittenberghe, Juan José Peón, Adrián Moncholí-Estornell, José Moreno, Ricardo Díaz-Delgado
Affiliations: Remote Sensing & GIS Lab. Doñana Biological Station (LAST-EBD) CSIC, National Institute of Aerospace Technology (INTA), Laboratory for Earth Observation, Image Processing Laboratory, University of Valencia
Sun-induced chlorophyll fluorescence (SIF) and ground reflected radiance of vegetation will be key operational Level 2 products provided by the upcoming European Space Agency (ESA) FLuorescence EXplorer-Sentinel-3 (FLEX-S3) mission. The reliability of these products needs to be assessed through a validation process that compares satellite products with fiducial reference data from the Earth's surface. In order to meet ESA's uncertainty requirements for Level 2 products, the SpaFLEX project, funded by the Spanish Ministry of Science and Innovation, aims to develop and implement a comprehensive Calibration and Validation (Cal/Val) plan for the FLEX-S3 mission in Spain. This plan will define Cal/Val test sites, ground, UAV and airborne instrument characterization, reference measurements, sampling protocols and uncertainty budgets for Level 2 products. In this paper we present the sampling strategy developed in the framework of the Spanish Cal/Val Plan. The SpaFLEX project has established three different sites: agricultural (Barrax, Central Spain), oak forest (Sarrión/Manzanera, Northeast Spain) and natural ecosystems (Doñana National Park, Southwest Spain). By analyzing these sites with different degrees of heterogeneity, we have defined a set of protocols to determine the optimal sampling strategy and upscaling approach to meet ESA's uncertainty requirements to characterize a 300 m x 300 m FLEX pixel.
To optimize the design of the sampling strategy, we propose a six-step protocol to determine the number and location of each Experimental Sampling Unit (ESU) using Sentinel-2 images: (1) regression analysis between different vegetation biophysical parameters, identifying the variable with the highest spatial variability and lowest correlation; (2) calculation of spatial autocorrelation using Moran's index and semivariograms; (3) spatial interpolation using machine learning models (SVM), with a semivariogram fit; (4) stratification of the selected variable into high- and low-heterogeneity classes along the gradient, taking into account the local mean; (5) random location of ESUs within each stratum, ensuring spatial representativeness and minimal autocorrelation; and (6) determination of the number of field measurements inside every ESU. Our results demonstrate that this protocol effectively identifies representative areas of heterogeneity in diverse landscapes, optimizing resource allocation for field campaigns and subsequent validation of FLEX products.
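Step (2) relies on the global Moran's index. A minimal sketch of that statistic (illustrative only; `w` is a user-supplied symmetric neighbour-weight matrix with `w[i][j] > 0` where pixels i and j are neighbours):

```python
def morans_i(values, w):
    """Global Moran's I spatial-autocorrelation statistic.
    values: list of pixel values; w: n x n neighbour-weight matrix."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in w)  # sum of all weights
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)
```

Positive values indicate spatial clustering (neighbouring pixels are alike), negative values a checkerboard pattern, and values near the expectation -1/(n-1) spatial randomness; semivariograms complement this single global number with a distance-resolved view.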

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Development of a Mobile, Automated Device For Laboratory Grade Calibration of Autonomous Field Spectrometer Systems In Situ

Authors: Mitchell Kennedy, Marc Krause, Dr Andreas Burkart, Laura Mihai, Dirk Schuettemeyer, Paul Naethe, Tommaso Julitta
Affiliations: JB Hyperspectral Devices, INFLPR CETAL, European Space Agency ESTEC
To ensure the accuracy and reliability of optical satellite data, ground-based field spectrometers are commonly employed as a reference. High-quality ground data are essential for identifying and correcting biases in the satellite imagery with respect to the ground-based data. The Fluorescence Box (FloX; JB Hyperspectral Devices, Düsseldorf, Germany) is used as a ground reference for calibration and validation of the upcoming FLEX satellite mission products. The FloX has an optical configuration spectrally similar to the satellite's and is capable of continuously monitoring solar-induced fluorescence (SIF). A network of these instruments has been established around the world in recent years, expanding to over 60 FloX units distributed across numerous global measurement sites. These sites span a broad spectrum of climate zones, biomes and plant types, suitable for the validation of satellite data products. However, at present a FloX requires an annual routine instrument calibration in the laboratory, which requires it to be removed from its installation. Thus, an alternative was sought that would enable a thorough calibration of the full optical path of the installed instrument in the field. In consequence, we present a novel deployable calibration device, the Mobile Master Reference (MMR). The MMR is proposed as part of the ESA-funded DEFLOX-CCN5 project, which aims to enable in-situ periodic calibration of all JB Hyperspectral automated field spectrometer systems. In concept, the MMR will be shipped to the different instrument sites. It will be equipped with an integrating sphere as well as a wavelength lamp in order to conduct fully automated radiometric and wavelength calibrations of the installed instrument, ensuring easy handling by the non-expert user.
For accurate calibration and reduced uncertainties, stray-light reduction, temperature and power stabilisation, fibre alignment and measurement repetitions are built into the MMR's design as part of a simple, fully automatic calibration procedure. Ongoing work aims to ensure the MMR is calibrated against national and international standards, using a defined protocol for the re-calibration of the spectrometers in the field. Field tests will quantify the capability of the MMR system and the overall uncertainty of the calibration device under real operating conditions. Using the MMR, downtime for JB Hyperspectral devices will be greatly reduced and uninterrupted data will be available throughout the network of instruments. Furthermore, the data quality of the ground reference will be substantially improved, with data products consistently traceable to the reference standard, allowing another level of validation for the FLEX satellite mission.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Leveraging Deep Learning for the Retrieval of Sun-Induced Fluorescence in the O2-A Absorption Band of Hyperspectral Imagery Acquired by the Spaceborne DESIS Sensor

Authors: Jim Buffat, Miguel Pato, Kevin Alonso, Stefan Auer, Emiliano Carmona, Stefan Maier, Rupert Müller, Patrick Rademske, Uwe Rascher, Hanno Scharr
Affiliations: Forschungszentrum Jülich, Institute of Advanced Simulations, IAS-8: Data Analytics and Machine Learning, German Aerospace Center (DLR), Earth Observation Center, Remote Sensing Technology Institute, Starion Group c/o European Space Agency (ESA), Forschungszentrum Jülich, Institute of Bio- and Geosciences, IBG-2: Plant Sciences
Efficiently monitoring the photosynthetic activity of plants across large areas is a challenging task that has been addressed over recent decades as part of an ongoing research interest in remote sensing of the environment and of agricultural areas. The closest measurable variable to photosynthetic activity is the emission of sun-induced chlorophyll fluorescence (SIF). While SIF data retrieved from spaceborne hyperspectral sensors with low spatial resolution (> 1 km pixel size) have been used for drought monitoring, yield prediction and the estimation of global gross primary productivity, these sensors provide only limited insight into the fine-scale dynamics of photosynthesis. This has led to the preparation of the ESA Earth Explorer FLEX, the first spaceborne satellite mission dedicated to SIF retrieval, which promises to address this gap by offering SIF estimates at a significantly higher spatial resolution (300 m). In this contribution, we summarize our results on a novel deep-learning-based methodology to retrieve SIF at 760 nm from hyperspectral imagery, which could benefit research in computationally efficient SIF retrieval and the validation of FLEX data. Our approach was developed and validated using high-quality data from the HyPlant FLUO sensor, the airborne demonstrator for FLEX's FLORIS instrument, which has a full width at half maximum (FWHM) of 0.25 nm. We show that the method can be adapted to data acquired by the spaceborne DESIS sensor (3.5 nm FWHM) onboard the International Space Station (ISS), providing for the first time spaceborne SIF estimates at 30 m. In a validation study conducted with a unique benchmark dataset consisting of HyPlant and DESIS at-sensor radiance acquired within less than 25 minutes of each other, we find r² = 0.60 and a mean absolute difference between the SIF products of the two sensors of 0.78 mW/nm/sr/m² at 740 nm. Furthermore, we find r² = 0.20 in a cross-comparison of DESIS SIF with OCO-3 estimates.
Our methodology involves training an encoder-decoder neural network to decompose the at-sensor radiance into surface and atmospheric parameters in the spectral window around the O2-A absorption band. Predicted parameters are evaluated in the observation space according to a novel label-free, reconstruction-based loss formulation. In the case of DESIS, we additionally introduce supervised regularizing loss terms to enhance the integration of existing L2A products. To perform radiative transfer simulations as part of the training process, we use a computationally efficient radiative transfer emulator derived from extensive hyperspectral radiative transfer simulations for both the HyPlant and DESIS sensor configurations. This approach contrasts with traditional spectral fitting methods in its adoption of a feature-based strategy, enabling faster inference times. Due to the exceptional performance of this approach on DESIS data, as well as its beneficial properties showcased on HyPlant data, our contribution has the potential to enhance research on computationally efficient SIF retrieval from FLEX data. Furthermore, its DESIS SIF product may provide a valuable high-resolution dataset for cross-comparison of FLEX SIF products in regions where spatial mixing impedes other validation approaches.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: UAV and CableCam platforms for FLEX products Cal/Val protocol development

Authors: Adrian Moncholi Estornell, Mª Pilar Cendrero-Mateo, Cynthia Chardi-Raga, Clara García-Martínez, Eva Neuwirthova, Ana Pascual-Venteo, Sara Pescador-Dionisio, Jose Moreno-Mendez, Shari Van Wittenberghe
Affiliations: Universitat de Valencia
Spatial heterogeneity is a significant challenge in accurately quantifying remote sensing products, especially for dynamic signals like solar-induced fluorescence (SIF). To effectively validate the SIF products that will be provided by ESA's Fluorescence Explorer (FLEX) mission, it is crucial to characterize the spatial variability of the signal within the foreseen 300 m x 300 m pixel resolution. Measurement of surface radiance is a critical step in accurately estimating SIF, a key indicator of photosynthetic activity and ecosystem functioning. However, the technical requirements for instruments capable of reliably capturing surface radiance introduce challenges for platform selection. These systems demand high spectral resolution, stability and precision, which can be difficult to achieve in dynamic environments. Traditional measurement platforms, such as ground-based or tower-mounted systems, often face limitations in covering large or heterogeneous areas. Their static nature and limited spatial coverage make them less effective for assessing complex ecosystems. Emerging technologies, such as unmanned aerial vehicles (UAVs) and cable systems, offer promising alternatives. These platforms facilitate flexible and efficient data collection, enabling high-resolution spatial mapping, targeted sampling, and continuous time-series observations. By overcoming the spatial and temporal constraints of traditional methods, UAVs and cable systems provide researchers with the tools needed to better understand the variability of SIF and its relationship to plant physiology. Consequently, these systems are becoming increasingly valuable for advancing remote sensing applications in vegetation monitoring.
In this work, we present a dual UAV-CableCam platform equipped with a high spectral resolution Piccolo Doppio dual spectrometer system, a MAIA-S2 multispectral camera, and a TeAx Thermal Capture Fusion camera, which can be triggered simultaneously according to a pre-defined protocol. This presentation will showcase the potential of the UAV and CableCam platforms to improve the interpretation of FLEX L2 products by providing ground truth data at a finer spatial scale, to develop robust Cal/Val protocols by defining optimal sampling strategies and data processing techniques, and to advance our understanding of SIF dynamics by investigating the relationship between SIF and other biophysical variables. By leveraging these innovative technologies, we can enhance the quality and reliability of FLEX-derived products and contribute to a deeper understanding of terrestrial ecosystems.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Measuring sun-induced fluorescence from ground-, UAV- and airborne platforms to understand the regulatory properties of photosynthesis and fluorescence across scales

Authors: Uwe Rascher, Jim Buffat, Andreas Burkart, Marco Celesti, Maria-Pilar Cendrero, Sofia Choza Farias, Sergio Cogliati, Roberto Colombo, Alexander Damm, Matthias Drusch, Ingo Ensminger, Lorenzo Genesio, Arthur Gessler, Jan Hanus, David Herrera, Tommi Julitta, Ireneus Kleppert, Oliver Knopf, Deepthi Konche, Julie Krämer, Vera Krieger, Ann-Kathrin Mahlein, Franco Miglietta, Jose Moreno, Onno Muller, Paul Näthe, Huayiue Peng, Juan Quiros, Patrick Rademske, Juan Romera Barrios, Micol Rossini, Saja Salattna, Dirk Schüttemeyer, Bastian Siegmann, Giulia Tagliabue, Marin Tudoroiu, Jochem Verrelst
Affiliations: Forschungszentrum Jülich, Institute of Bio- and Geosciences, IBG-2: Plant Sciences, German Space Agency at DLR, National Research Council – CNR, Institute of Bioeconomy – IBE, European Space Agency, ESTEC, University of Zurich, Department of Geography, JB Hyperspectral Devices GmbH, University of Milano-Bicocca, Remote Sensing of Environmental Dynamics Lab., Institute for Sugar Beet Research, University of Valencia, Laboratory of Earth Observation, University of Toronto, Department of Biology, Swiss Federal Research Institute WSL, Global Change Research Institute – CzechGlobe
Plant photosynthesis is perhaps one of the best understood biophysical and biochemical pathways, combining photosynthetic light absorption, the use of that energy in the light reactions and the biochemical fixation of atmospheric CO2. Decades of basic and applied research have produced a good mechanistic understanding of the underlying mechanisms and their regulatory properties, as well as a large number of methods to non-invasively quantify the dynamics of photosynthesis. Fluorescence techniques are widely used to directly quantify charge separation at photosystem II and to non-invasively quantify the efficiency of the photosynthetic light reactions. However, regulation of photosynthetic acclimation and adaptation takes place at the level of single leaves, and consequently most of our methods and knowledge on the dynamic nature of photosynthetic regulation are based on leaf-level studies. With FLEX and other satellite platforms delivering large-scale data on solar-induced fluorescence, it becomes imperative to develop a good mechanistic understanding of how larger-scale observations can be translated to the small-scale regulatory properties. Such a transfer should be based on good experimental evidence that can then be used in mechanistic models, which are the basis for the development of higher-level data products (Level 2 and above). In this presentation we summarize two important developments in this direction. First, we give an overview of the recent development of novel sensors, which help to bridge the gap between leaf-level regulation of photosynthesis and fluorescence emission and top-of-canopy SIF observations. We demonstrate how novel sensor concepts, including active fluorescence approaches and passive, SIF-based approaches in combination with radiative transfer modelling, are used to mechanistically link leaf-level measurements to UAV- and airborne-based SIF observations.
Secondly, we will exemplify recent campaigns in which this novel portfolio of sensors was used in the European context to further test and understand the potential of large-scale SIF measurements to serve as an early stress indicator and to better constrain vegetation carbon and water exchange. We executed various campaigns covering ecosystems in Germany, Switzerland, the Czech Republic, the Netherlands, Spain and Italy, all of which provided data on specific scientific objectives. In these campaigns we could (i) further constrain the accuracy and uncertainty of field, UAV and airborne SIF observations, which are relevant for the upcoming FLEX Cal/Val scheme, (ii) further develop the concepts of how SIF can be used as an early drought and heat stress indicator, and (iii) greatly extend the data basis on the natural variability of the SIF signal across different ecosystems, the diurnal and seasonal cycle, and the reaction to environmental extremes. With this presentation we will bring together the data of FLEX-related campaigns for our understanding of the regulatory properties of photosynthesis and fluorescence across scales. These data were used in scientific publications that are freely accessible for further exploitation. We present data concepts, which are currently being developed, to make these new campaign data openly available to the scientific community and to complement the ESA-based campaign data.

Three selected reviews on this general topic:
- Porcar-Castell A., Malenovský Z., Magney T., Van Wittenberghe S., Fernández-Marín B., Maignan F., Zhang Y., Maseyk K., Atherton J., Albert L.P., Robson T.M., Zhao F., Garcia-Plazaola J.-I., Ensminger I., Rajewicz P.A., Grebe S., Tikkanen M., Kellner J.R., Ihalainen J.A., Rascher U. & Logan B. (2021) Chlorophyll a fluorescence illuminates a path connecting plant molecular biology to Earth-system science. Nature Plants, 7, 998-1009, doi: 10.1038/s41477-021-00980-4.
- Machwitz M., Pieruschka R., Berger K., Schlerf M., Aasen H., Fahrner S., Jimenez-Berni J.A., Baret F. & Rascher U. (2021) Bridging the gap between remote sensing and plant phenotyping - challenges and opportunities for the next generation of sustainable agriculture. Frontiers in Plant Science, 12, article no. 749374, doi: 10.3389/fpls.2021.749374.
- Berger K., Machwitz M., Kycko M., Kefauver S.C., van Wittenberghe S., Gerhards M., Verrelst J., Atzberger C., van der Tol C., Damm A., Rascher U., Herrmann I., Sobejano Paz V., Fahrner S., Pieruschka P., Prikaziuk E., Buchaillot M.L., Halabuk A., Celesti M., Koren G., Gormus E.T., Rossini M., Förster M., Siegmann B., Abdelbaki A., Tagliabue G., Hank T., Aasen H., Garcia M., Pôças I., Bandopadhyay S., Sulis M., Tomelleri E., Rozenstein O., Filchev L., Stancile G. & Schlerf M. (2022) Multi-sensor synergies for crop stress detection and monitoring in the optical domain: a review. Remote Sensing of Environment, 280, article no. 113198, doi: 10.1016/j.rse.2022.113198.

Selected publications describing novel SIF sensors and the associated campaign activities:
- Naethe P., De Sanctis A., Burkart A., Campbell P.K.E., Colombo R., di Mauro B., Damm A., El-Madany T., Fava F., Gamon J., Huemmrich K.F., Migliavacca M., Rascher U., Rossini M., Schüttemeyer D., Tagliabue G., Zhang Y. & Julitta T. (2024) Towards a standardized, ground-based network of hyperspectral measurements: combining time series from autonomous field spectrometers with Sentinel-2. Remote Sensing of Environment, 303, article no. 114013, doi: 10.1016/j.rse.2024.114013.
- Siegmann B., Cendrero-Mateo M.P., Cogliati S., Damm A., Gamon J., Herrera D., Jedmowski C., Junker-Frohn L.V., Kraska T., Muller O., Rademske P., van der Tol C., Quiros-Vargas J., Yang P. & Rascher U. (2021) Downscaling of far-red solar-induced chlorophyll fluorescence of different crops from canopy to leaf level using a diurnal data set acquired by the airborne imaging spectrometer HyPlant. Remote Sensing of Environment, 264, article no. 112609, doi: 10.1016/j.rse.2021.112609.
- Kneer C., Burkart A., Bongartz J., Siegmann B., Bendig J., Jenal A. & Rascher U. (2023) A snapshot imaging system for the measurement of solar-induced chlorophyll fluorescence – addressing the challenges of high-performance spectral imaging. IEEE Sensors Journal, 23, 23255-23269, doi: 10.1109/JSEN.2023.3297054.
- Acebron K., Salvatori N., Alberti G., Muller O., Peressotti A., Rascher U. & Matsubara S. (2023) Elucidating the details of photosynthesis in chlorophyll-deficient soybean (Glycine max, L.) leaf. Journal of Photochemistry and Photobiology, 13, article no. 100152, doi: 10.1016/j.jpap.2022.100152.
- Wang N., Siegmann B., Rascher U., Clevers J.G.P.W., Muller O., Bartholomeus H., Bendig J., Masiliu D., Pude R. & Kooistra L. (2022) Comparison of a UAV- and an airborne-based system to acquire far-red sun-induced chlorophyll fluorescence measurements over structurally different crops. Agricultural and Forest Meteorology, 323, article no. 109081, doi: 10.1016/j.agrformet.2022.109081.
- Peng H., Cendrero-Mateo M.P., Bendig J., Siegmann B., Acebron K., Kneer C., Kataja K., Muller O. & Rascher U. (2022) HyScreen: A ground-based imaging system for high-resolution red and far-red solar-induced chlorophyll fluorescence. Sensors, 22, article no. 9443, doi: 10.3390/s22239443.
- Damm A., Cogliati S., Colombo R., Fritsche L., Genangeli A., Genesio L., Hanus J., Peressotti A., Rademske P., Rascher U., Schuettemeyer D., Siegmann B., Sturm J. & Miglietta F. (2022) Response times of remote sensing measured sun-induced chlorophyll fluorescence, surface temperature and vegetation indices to evolving soil water limitation in a crop canopy. Remote Sensing of Environment, 273, article no. 112957, doi: 10.1016/j.rse.2022.112957.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Bridging scales: tower, UAS and airborne-based validation of FLEX products

Authors: Giulia Tagliabue, Dr. Micol Rossini, Roberto Garzonio, Dr. Luigi Vignali, Dr. Cinzia Panigada, Sergio Cogliati, Pietro Chierichetti, Dr Juliane Bendig, Uwe Rascher, Dr Bastian Siegmann, Tommaso Julitta, Dr Andreas Burkart, Roberto Colombo
Affiliations: University of Milano-Bicocca, Forschungszentrum Jülich, JB Hyperspectral Devices
The upcoming Fluorescence Explorer (FLEX) mission, developed by the European Space Agency (ESA) as part of the Earth Explorer programme, will provide global maps of vegetation sun-induced chlorophyll fluorescence (SIF) across the entire emission spectrum with unprecedented spatial resolution. Calibration and validation (Cal/Val) of such products are key to ensuring high-quality SIF estimates for environmental monitoring and applications. Direct validation based on the comparison of satellite products with independent in-situ measurements is the most common approach to evaluate satellite products and understand their uncertainties. There is a growing network of point spectrometers installed on fixed towers that collect unattended, continuous, high spectral resolution measurements and generate consistent time series at fixed locations. Validation of satellite-based SIF products with in-situ SIF systems measuring at one or two points is facilitated in geographical areas characterised by large, homogeneous landscapes. This validation approach can be challenging in more complex and spatially heterogeneous landscapes, as in-situ sampling is typically performed at scales that are orders of magnitude smaller than the satellite sensor footprint. On the other hand, airborne systems capable of measuring SIF, such as the HyPlant and IBIS sensors (SPECIM Spectral Imaging Ltd, Finland), provide large spatial coverage and allow characterisation of the spatial heterogeneity within the satellite pixel. However, they have the drawbacks of high operational costs and low temporal resolution. In this context, unmanned aerial systems (UAS) are a lower-cost alternative that also have the potential to bridge the gap between field and satellite scale, and additionally enable the acquisition of high spatio-temporal resolution datasets.
In this contribution we address how novel UAS systems capable of measuring SIF, such as FROG, AirFloX and SIFcam, can complement tower and airborne systems, being valuable tools for supporting the Cal/Val of FLEX products. In this context we present FROG, a non-imaging spectroscopy system consisting of two point spectrometers (Ocean Insight, USA) connected to bifurcated fibre optics switching between the downwelling irradiance and the upwelling radiance. The two spectrometers complement each other, having different spectral ranges and resolutions for measuring both SIF and reflectance. The payload (4 kg) is mounted on a commercial hexacopter (Matrice 600 Pro, DJI, China). A dedicated graphical user interface allows the user to customize the set-up and monitor the acquisition during the flights. We implemented optimisation methods for guiding the acquisition schemes to maximize the spatial representativeness of the sampled points. The approach is based on statistical methods to determine the minimum number of sampling points given a maximum acceptable error and confidence level on the FLEX products, and on the use of optimisation algorithms to identify the optimal locations of the n sampling points so defined. The FROG system was first tested in an agricultural area in Grosseto (Italy) in the summer of 2023, covering five different crops at the FLEX overpass time. Additional flights will be performed in the spring of 2025 to further test the system and the spatial sampling strategies developed, and to cross-compare the results with the AirFloX and SIFcam systems. First results, challenges and implications for the Cal/Val of FLEX products will be discussed in this contribution.
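The statistical step mentioned above, determining the minimum number of sampling points for a given maximum acceptable error and confidence level, can be sketched with the standard normal-approximation sample-size formula n = (z·σ/E)². The numerical values below are illustrative assumptions, not figures from the FROG campaigns.

```python
from math import ceil
from statistics import NormalDist

def min_sampling_points(sigma, max_error, confidence=0.95):
    """Minimum number of sampling points so that the sample mean estimates
    the pixel mean within `max_error` at the given confidence level
    (normal approximation: n = (z * sigma / max_error) ** 2)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)  # two-sided quantile
    return ceil((z * sigma / max_error) ** 2)

# Hypothetical example: SIF spatial standard deviation of 0.4 mW/m2/sr/nm
# within a 300 m FLEX pixel, tolerated error of 0.1 at 95 % confidence.
print(min_sampling_points(sigma=0.4, max_error=0.1))  # 62
```

The optimal placement of those n points would then be a separate optimisation problem, as the abstract notes.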

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Validation Strategy of FLEX L2A Surface Reflectance and Irradiance using Autonomous Ground Reference Data

Authors: Astrid M. Zimmermann, Pieter de Vis, Agnieszka Bialek, Tommaso Julitta, Roberto Colombo
Affiliations: National Physical Laboratory, JB Hyperspectral Devices GmbH, University of Milano Bicocca
The upcoming FLEX (Fluorescence Explorer) mission, an ESA Earth Explorer, aims at characterising the full energy balance of vegetation by providing insight into photosynthetic activity. To develop a data quality control framework, the FLEX DISC (Fluorescence Explorer Data Innovation and Science Cluster) was set up as a consortium of scientists and technical experts in various domains related to the FLEX mission. One activity within FLEX DISC is to develop Cal/Val plans for all products, with the aim of validating the satellite's performance over time. Here, we will present first results for the validation strategy of the FLEX surface reflectance and irradiance bottom-of-atmosphere products, using novel autonomous ground reference data supplied by the HYPERNETS and FLOX networks. HYPERNETS is an automated network of hyperspectral radiometers providing multi-angular reflectance measurements for satellite validation over water (WATERHYPERNET) and land (LANDHYPERNET). The HYPSTAR®-XR instrument used for land sites consists of two sensors, covering the VNIR to SWIR range. The VNIR data are collected over a wavelength range of 380–1000 nm with 0.5 nm sampling and 3 nm resolution, while the SWIR sensor measures between 1000–1680 nm with 3 nm sampling and 10 nm resolution. The raw data are automatically transferred to a central server, processed in near real-time to reflectance and other variables, and made available for distribution through portals. FLOX (fluorescence box) is an instrument designed by JB Hyperspectral for continuous observation of sun-induced chlorophyll fluorescence. It provides measurements from 650–800 nm with 0.17 nm sampling and 0.3 nm resolution, and from 400–950 nm with 0.65 nm sampling and 1.5 nm resolution. FLOX has a dual field of view (FOV), allowing it to measure upwelling radiance (FOV of 25°) and downwelling irradiance (FOV of 180°). The data are planned to be distributed via the FLOX database (under development).
In this contribution, we will present the strategy to validate FLEX surface apparent reflectance and at-surface solar irradiance starting from ground data, using FLEX simulated data. The validation itself will be done using a comparison metric, where the difference between the FLEX products and the reference dataset should be smaller than the combined expanded uncertainty of both. When computing this difference, it is important that the measurands are as similar as possible; i.e. to compare the FLEX and reference datasets, they must be harmonised spatially, spectrally, angularly and temporally. In the spectral harmonisation, the dataset with the finer resolution will be convolved to the one with the coarser resolution, i.e. FLEX to HYPERNETS or FLOX to FLEX. For spatial harmonisation, high-resolution (metre-scale) commercial data will be analysed to find the least variable, hence best-suited, region of interest (ROI) and to quantify the spatial variability. The angular and temporal corrections will be done using a BRDF model and linear interpolation. Each of these steps will introduce additional uncertainties, which will be propagated together with the input uncertainties of FLEX, FLOX and HYPERNETS.
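The two central operations above, convolving the finer-resolution spectrum to the coarser bands and testing whether the difference stays within the combined expanded uncertainty, can be sketched as follows. The Gaussian spectral response function, the band centres, the placeholder spectrum and the uncertainty values are all illustrative assumptions, not FLEX or FLOX specifications.

```python
import numpy as np

def gaussian_srf(wl, center, fwhm):
    """Gaussian spectral response function sampled on the fine grid."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    srf = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return srf / srf.sum()

def convolve_to_coarser(wl_fine, spec_fine, centers, fwhm):
    """Convolve the finer-resolution spectrum to coarser bands."""
    return np.array([gaussian_srf(wl_fine, c, fwhm) @ spec_fine
                     for c in centers])

def consistent(val_a, val_b, u_a, u_b, k=2.0):
    """Comparison metric: difference smaller than the combined
    expanded uncertainty (coverage factor k)."""
    return abs(val_a - val_b) <= k * np.hypot(u_a, u_b)

wl = np.arange(650.0, 800.0, 0.17)       # FLOX-like fine sampling
spec = 0.4 + 0.01 * np.sin(wl / 5.0)     # placeholder spectrum
coarse = convolve_to_coarser(wl, spec, centers=np.array([680.0, 760.0]),
                             fwhm=3.0)   # hypothetical coarser bands
print(coarse, consistent(coarse[0], 0.40, u_a=0.005, u_b=0.004))
```

In practice the SRF of the coarser sensor, not a generic Gaussian, would be used, and the uncertainties would come from the propagated harmonisation budget.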

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Design of atmospheric look-up tables for operational FLEX data processing

Authors: Jorge Vicent Servera, Neus Sabater, Pekka Kolmonen, Marin Tudoroiu
Affiliations: Magellium, Finnish Meteorological Institute, ESA/ESRIN
The FLEX Earth Explorer mission aims at retrieving the chlorophyll Sun-Induced Fluorescence (SIF) emission to provide insight into the photosynthetic activity of terrestrial vegetation. This faint signal produces an infilling in the O2 bands that can be measured with high-resolution imaging spectrometers from space. However, retrieving SIF within stringent mission requirements implies developing an accurate atmospheric correction, for which atmospheric radiative transfer models (RTMs) play a critical role. Nevertheless, the pixel-wise execution of an RTM for operational data processing is unfeasible due to its slow runtime. Accordingly, a set of look-up tables (LUTs) of precomputed RTM simulations is needed. As the complexity of RTMs (e.g. MODTRAN) increases and accuracy requirements become more stringent, conventional LUT interpolation approaches encounter significant challenges in terms of data volume, LUT generation time, and data processing runtime. In this context, an optimized LUT design is of paramount importance to achieve highly accurate SIF retrievals while keeping runtime within operational constraints. In the framework of the FLEX DISC project, this work will systematically analyze the relationships between RTM input variables and output spectral data to determine the most efficient LUT design and interpolation strategy. Specifically, we will identify the relative importance of input variables in the RTM outputs through a global sensitivity analysis. After identifying the key input variables, we will deepen the analysis by studying the input-output relationships in RTM data at key wavelengths associated with SIF retrieval. This will help us to propose and evaluate alternative LUT interpolation strategies, including classical piece-wise linear interpolation, n-dimensional polynomial fitting, and state-of-the-art statistical regression methods known as emulators. 
These interpolation methods will be analyzed in terms of (1) runtime, (2) accuracy, and (3) LUT design implications (e.g. data volume). In addition, we will analyze the spectral sampling configuration of RTM simulations to achieve an efficient yet accurate LUT design. The work presented here provides details on the development activities of the FLEX Level-2 data processing chain within the DISC project. The atmospheric LUT design is not only relevant for the FLEX mission development; the strategies and lessons learned can also be applied to other imaging spectroscopy missions such as CHIME.
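As a point of reference for the interpolation strategies compared above, the classical piece-wise linear approach can be sketched as bilinear interpolation over a two-dimensional LUT. The grid axes (aerosol optical thickness and water vapour) and the analytic placeholder for the RTM output are illustrative assumptions; a real LUT would hold precomputed MODTRAN spectra over more dimensions.

```python
import numpy as np

# Hypothetical 2-D LUT: an atmospheric quantity at one wavelength,
# precomputed on a grid of aerosol optical thickness (AOT) and
# columnar water vapour (CWV) nodes.
aot_nodes = np.array([0.05, 0.1, 0.2, 0.4])
cwv_nodes = np.array([0.5, 1.0, 2.0, 4.0])
# Placeholder for RTM output (real values would come from MODTRAN runs).
lut = np.outer(1.0 + aot_nodes, np.exp(-0.1 * cwv_nodes))

def bilinear(lut, xs, ys, x, y):
    """Classical piece-wise linear (here bilinear) LUT interpolation."""
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    j = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * lut[i, j]
            + tx * (1 - ty) * lut[i + 1, j]
            + (1 - tx) * ty * lut[i, j + 1]
            + tx * ty * lut[i + 1, j + 1])

# Interpolation reproduces the LUT exactly at the grid nodes...
assert np.isclose(bilinear(lut, aot_nodes, cwv_nodes, 0.2, 2.0), lut[2, 2])
# ...and estimates values in between.
print(bilinear(lut, aot_nodes, cwv_nodes, 0.15, 1.5))
```

Emulators and polynomial fits trade this simple per-query cost against smaller LUTs and smoother interpolation error, which is the trade-off the abstract sets out to quantify.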

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: INTA's New Airborne Platform Equipped with the Chlorophyll Fluorescence Sensor (CFL) for SpaFLEX Cal/Val Campaigns

Authors: Felix Muñoz Sanchez, Óscar Gutiérrez de la Cámara, Marcos Jiménez, Laura Carretero, Juanjo Peón, Eduardo de Miguel
Affiliations: INTA
The calibration and validation (Cal/Val) of the FLEX (Fluorescence Explorer) mission, the eighth ESA (European Space Agency) Earth Explorer (EE8), will require the use of airborne systems capable of providing estimates of SIF (solar-induced fluorescence), an optical indicator of the photosynthetic state of terrestrial vegetation. In order to strengthen its remote sensing capabilities, INTA (National Institute of Aerospace Technology) is integrating the imaging spectrometer CFL (Chlorophyll Fluorescence Sensor) into an instrumented pod of one of its manned aerial platforms, specifically a Stemme S15, which is equipped with two cargo capsules. The general objective of the SpaFLEX project is to develop Spanish Cal/Val procedures for the FLEX mission. The main pillars underpinning this strategy are a reinforced focus on metrological practices, to ensure traceability along the full processing chain, and the need to provide uncertainty per measurand, ideally at pixel level. The new remote sensing system under development is proposed as a candidate to provide essential medium-scale airborne data over large areas of Cal/Val test sites with high efficiency. INTA has among its aerial platforms a light motor glider capable of carrying Earth observation sensors by means of under-wing pods, with a load capacity for instrumentation of 80 kg per pod. Numerous steps have been taken to convert the aircraft into a scientific data acquisition platform, with the main focus on the fluorescence sensor CFL. The design of a modular and versatile internal structure will allow the future installation of complementary operational and experimental observation systems. To enhance the versatility of the assembly, the covers protecting the optical window have customized cut-outs that enable the integration of multiple transparent glasses for different spectral ranges.
The most relevant features of the Stemme are its maximum service ceiling of 16,000 feet above mean sea level and an endurance of approximately 6 hours, with a range of 1,100 kilometers. Each wing has a hard-point station for attaching a pod, which allows the scientific instrumentation to be housed. In addition, at the rear of the fuselage mid-section there is a compartment in which supplementary systems can be installed. It has a wingspan of 18 m and a total length of 8.52 m, with a reciprocating engine mounted in the center of the fuselage, behind the cockpit, so that it does not interfere with the optics of the sensors. A manned platform was selected because it allows sharing the airspace with the rest of aviation and flying at higher altitudes, thus providing a competitive advantage over drones, even though the system has been configured so that it can easily be adapted to an RPAS (Remotely Piloted Aircraft System). The implementation was designed to comply with RTCA DO-160 (Environmental Conditions and Test Procedures for Airborne Equipment). The system has a centralized operation and control module within the same pod that meets the mentioned regulation. A validation phase with a series of tests has been planned: EMC (electromagnetic compatibility), vibrations and environmental conditions, within the typical flight envelope of a data acquisition campaign. The CFL is a hyperspectral radiometer designed to observe fluorescence emitted by vegetation. It uses a curved diffraction grating and a cooled sCMOS (scientific complementary metal-oxide-semiconductor) detector, which allows recording up to 2160 spectral bands in the range 670-780 nm with 0.2 nm FWHM (full width at half maximum). The nominal angular IFOV (instantaneous field of view) is 0.26 mrad and the FOV (field of view) is 23.5°, resulting in 1600 across-track pixels.
The electronic readout and control system enables the configuration of spectral and spatial binning with values of 1, 2 and 4, and the use of variable integration times, typically between 10 and 100 ms. The CFL sensor, initially conceived for operator-driven acquisition, was adapted to a stand-alone operating regime by developing ad hoc tools. Thanks to the INS/GNSS system embedded in the sensor itself and the Hyperspec® III software, data acquisition will be triggered when the platform enters a predefined polygon established during campaign design. In this way the pilot follows the flight plan, passing through the successive waypoints and tracking the plotted course in a stable manner and at constant speed while the data are acquired.
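The imaging geometry quoted for the CFL (IFOV 0.26 mrad, FOV 23.5°, 1600 across-track pixels) can be turned into ground sampling distance and swath width with simple trigonometry. The flight altitude below is a hypothetical value for illustration, not a figure from the SpaFLEX campaign design.

```python
from math import radians, tan

# CFL geometry as quoted in the abstract.
IFOV_RAD = 0.26e-3        # instantaneous field of view, rad
FOV_DEG = 23.5            # total field of view, deg
ALTITUDE_M = 3000.0       # hypothetical altitude above ground level

gsd = ALTITUDE_M * IFOV_RAD                       # ground sampling distance
swath = 2.0 * ALTITUDE_M * tan(radians(FOV_DEG / 2.0))
pixels = radians(FOV_DEG) / IFOV_RAD              # FOV / IFOV

print(f"GSD    ~ {gsd:.2f} m")    # ~0.78 m per pixel at 3000 m AGL
print(f"swath  ~ {swath:.0f} m")
print(f"pixels ~ {pixels:.0f}")   # ~1578, consistent with the 1600 quoted
```

The small-angle GSD formula is adequate here because the IFOV is far below a milliradian of curvature effect; the pixel count follows directly from FOV/IFOV and agrees with the detector format quoted in the abstract.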

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: C.01.13 - POSTER - Quantum Sensors: Next-Generation Tools for Earth Observation

This session will delve into the advancements in quantum sensing technologies, encompassing accelerometers, magnetometers, microwave receivers, and more, highlighting their impact on future space-based Earth observation missions. Attendees will gain insights into:

Current Developments:
Presentations on the latest advancements in quantum sensor technology, including breakthroughs in sensitivity, precision, and miniaturization, tailored for space applications.

Use-Case Evaluations:
Detailed evaluations of various use cases where quantum sensors demonstrate potential to enhance Earth observation missions.

Unique Challenges:
A glance at unique challenges faced in the development and deployment of quantum sensors for space missions. This includes technical hurdles, environmental considerations, and integration with existing space technologies.

Future Directions:
A look at roadmaps for future developments in quantum sensing technology such as planned collaborative projects and the next steps required to fully realize the potential of quantum sensors in Earth observation missions.

Interactive Discussions:
Opportunities for Q&A and interactive discussions with leading experts in the field.

This session is ideal for researchers, engineers, and professionals involved in space technology, Earth observation, and quantum sensing, offering an overview of how quantum advancements are poised to evolve instrumentation for Earth observation.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Development of geoscience applications of optical lattice clocks in Japan

Authors: Dr. Yoshiyuki Tanaka, Yoshiaki Tamura, Takaaki Jike, Yosuke Aoki, Ryuichi Ichikawa, Hiroshi Takiguchi, Hiromu Sakaue, Tomoaki Oyama, Aya Yamauchi, Ichiro Ushijima, Masao Takamoto, Hidetoshi Katori
Affiliations: The University of Tokyo, National Astronomical Observatory of Japan, National Institute of Information and Communications Technology, Japan Aerospace Exploration Agency, RIKEN
Optical clocks have been developed that are two to three orders of magnitude more accurate than the cesium clocks used to define the second in SI units. Among optical clocks, optical lattice clocks (OLCs) achieve an accuracy of 10^-18 within a few hours by observing many atoms simultaneously. In 2023, a landmark international intercomparison of transportable OLCs was performed using a European fiber link. The frequencies realized by the OLCs from Europe and Japan agreed with each other with smaller uncertainties than cesium clocks, which proved that OLCs could be a candidate for the redefinition of the second. This presentation will report the efforts and prospects for geoscience and future space applications of OLCs in Japan. It is known that atomic clocks placed at different positions can measure height differences between them (chronometric leveling). According to the theory of the gravitational redshift, a frequency difference of 10^-18 corresponds to a height change of approximately 1 cm. The OLCs developed by RIKEN and the University of Tokyo aim for high stability to detect cm-level dynamic crustal deformation. If fiber-linked OLCs can capture height changes of 1 cm or less within a few hours, OLCs can be used in combination with GNSS to detect slow fault slips with shorter durations that cannot be captured by GNSS alone. Unlike a stable continental plate, Japan's geological environment, which hosts many earthquakes, forms an ideal test bed. Toward such geoscientific applications, a fiber link of approximately 800 km in length connecting the Tokyo metropolitan area with the Mizusawa VLBI (Very Long Baseline Interferometry) Observatory in northeastern Japan has been constructed, and clock frequency comparisons are being made. A static height difference was determined by geodetic methods to an accuracy of at least 10 cm, and a detailed evaluation and comparison with the clock results is currently underway.
We are also attempting to detect dynamic gravitational potential changes due to the gradual uplift of northeastern Japan at a rate of 1-2 cm/yr, which was triggered by the 2011 M9 Tohoku earthquake, and differential tides with a maximum amplitude of 5 cm in terms of geoid height. In addition to these dynamic relativistic geodetic applications, we have begun to develop applications in VLBI. We replaced the hydrogen maser with an OLC at the Mizusawa VLBI observatory and successfully obtained fringes using a domestic VLBI network. However, ground-based VLBI cannot take full advantage of the higher stability of OLCs due to atmospheric noise. If OLCs are used for space VLBI in the future, it will open a new window for astronomy in the high-frequency range.
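The chronometric-levelling relation used in the abstract (a fractional frequency difference of 10^-18 corresponding to roughly 1 cm of height) follows from the gravitational redshift, df/f = g·dh/c², and can be checked with a few lines of arithmetic:

```python
# Chronometric levelling: a fractional frequency shift df/f between two
# clocks corresponds to a height difference dh = (df/f) * c^2 / g.
C = 299_792_458.0   # speed of light, m/s
G = 9.80665         # standard gravity, m/s^2

def height_from_fractional_shift(df_over_f):
    """Height difference implied by a fractional frequency difference."""
    return df_over_f * C**2 / G

# A 1e-18 fractional frequency difference resolves about 1 cm of height:
dh = height_from_fractional_shift(1e-18)
print(f"{dh * 100:.2f} cm")  # ~0.92 cm
```

This is the first-order weak-field approximation with a uniform gravity value; precise chronometric levelling would use the local gravity potential rather than standard g.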
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: C.06.05 - POSTER - CEOS Analysis Ready Data (CEOS-ARD)

CEOS Analysis Ready Data (CEOS-ARD) are satellite data that have been processed to a minimum set of requirements and organized into a form that allows immediate analysis with minimal additional user effort, supporting time-series analysis and data interoperability. This session will provide insights into the CEOS-ARD [https://ceos.org/ard] Product Family Specifications and standards development, as well as review current CEOS-ARD compliant satellite data products and those in development. Currently there are four optical and four radar CEOS-ARD Product Family Specifications, covering the following geophysical variables: Surface Reflectance, Surface Temperature, Aquatic Reflectance, Nighttime Lights Surface Radiance, Normalised Radar Backscatter, Polarimetric Radar, Ocean Radar Backscatter, and Geocoded Single-Look Complex. In addition, specifications are under development for Interferometric Synthetic Aperture Radar (InSAR) and Light Detection and Ranging (LiDAR).
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: NovaSAR-1 Analysis Ready Data (ARD): New S-band SAR ARD for Europe (and beyond)

Authors: Dr Frazer Christie, Dr Pasquale Iervolino, Dr Thomas Higginbottom, Dr Dominik Rains, Mr Thomas Harling
Affiliations: Airbus Defence & Space
Launched in 2018, NovaSAR-1 is a low-cost, low-weight technology demonstration mission and a unique source of S-band synthetic aperture radar (SAR) data for Europe. With an imaging frequency of 3.2 GHz (equivalent to a radar wavelength of 9.4 cm), a revisit time of approximately 16 days and a range of swath widths and imaging modes, it is well-suited for applications including (but not limited to) oil-spill detection, ice mapping, soil moisture estimation, disaster monitoring and woodland survey. Thanks to its dedicated maritime acquisition mode and onboard Automated Identification System (AIS), NovaSAR-1 is also ideal for maritime security and surveillance purposes. Aligning with the data format associated with the wider Airbus Radar Constellation (TerraSAR-X, TanDEM-X & PAZ), NovaSAR-1 Level-1 data are typically provided as Single Look Complex (SLC) or Multi-Look Detected (Ground Range Detected - GRD, ScanSAR Ground Range Detected - SCD, Slant Range Detected - SRD) imagery. Here, we introduce Airbus’ new NovaSAR-1 Level-2 ‘Analysis Ready Data’ (ARD) processor and associated image products, the latter of which are intended to reduce the technological and computational burden on global satellite data users through the provision of easy-to-access, easy-to-visualize and easy-to-exploit SAR data. Building upon the success of our Sentinel-1 ARD processor (https://hub.jncc.gov.uk/assets/d971e839-6f0c-4951-8b49-3abc14b78566), our Level-2 processing workflow is fully CEOS-ARD compliant, and is currently capable of outputting a suite of Normalised Radar Backscatter (NRB) and Ocean Radar Backscatter (ORB) products with high geometric and radiometric precision.
In the context of NovaSAR-1’s recent inclusion as an ESA Third Party Mission (TPM), our Level-2 ARD datasets are now available for use by the international SAR community, and are highly complementary to the planned – albeit geographically limited – S-band coverage due to be collected by the upcoming NASA/ISRO NISAR mission (launching ~Q1 2025). As the only openly available, in-orbit S-band SAR instrument capable of global-scale imaging, NovaSAR-1 also constitutes a much-needed bridge between C-band (e.g. Sentinel-1/NG) and longer wavelength sensors (e.g. ALOS, SAOCOM and the upcoming NISAR, ROSE-L and BIOMASS missions) for synergistic, multi-frequency SAR applications.
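As a quick sanity check on the sensor parameters quoted above, the 3.2 GHz imaging frequency converts to the stated ~9.4 cm S-band wavelength via lambda = c/f (the helper below is purely illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_cm(freq_hz: float) -> float:
    """Radar wavelength in centimetres for a given imaging frequency."""
    return C / freq_hz * 100

print(round(wavelength_cm(3.2e9), 1))  # S-band at 3.2 GHz -> ~9.4 cm
```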
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Vision-1 and NovaSAR-1 data fusion for quasi-Near-Real-Time applications

Authors: Dr Pasquale Iervolino, Dr Frazer Christie, Dr Thomas Higginbottom, Dominik Rains, Mr Thomas Harling, Mr James Warner
Affiliations: Airbus Defence and Space, Geospatial Business UK
The number of artificial satellites orbiting Earth has increased exponentially over the past 50 years, with approximately 6700 satellites currently in orbit. Of these, approximately 20% are past and ongoing Earth Observation (EO) and science missions, the majority of which can be partitioned into active (e.g. Synthetic Aperture Radar, SAR) and passive (e.g. multispectral) imaging systems. This explosion in SAR and multispectral imaging capability has enabled a plethora of new multi-EO-sensor-based opportunities (ranging from increased classification accuracies to shortened revisit times), underscoring the importance of synergistic satellite data use for improving our understanding of Earth from space. Here, we present a new fusion framework – based on Vision-1 and NovaSAR-1 imagery – for quasi-near-real-time applications. Launched in 2018, Vision-1 is a very high resolution optical imager capable of providing panchromatic and multispectral imagery with spatial resolutions of 0.87 m and 3.5 m, respectively. NovaSAR-1 is a low-cost, low-weight technology demonstration mission also launched in 2018, and a unique source of S-band SAR data for Europe. It is capable of operating in both conventional Stripmap and ScanSAR modes, offering spatial resolutions of 6 m and 20-30 m, respectively. Both missions are sponsored and supported by the United Kingdom Space Agency (UKSA), and constitute UK sovereign assets for monitoring and surveillance activities. Vision-1 and NovaSAR-1 have also recently been accredited as ESA Third Party Missions, and are now available for use by the international EO community. The standout feature of combined Vision-1–NovaSAR-1 operations is their uniquely phased orbits, which permit imagery from both sensors to be acquired only minutes apart. In this respect, temporal decorrelation is the lowest of any SAR/optical imaging constellation currently in orbit.
The fusion of Vision-1 and NovaSAR-1 data can consequently be exploited for a wide range of applications in which a minimal temporal gap between successive acquisitions is required. We will demonstrate a selection of such applications, including, for example, the accurate tracking of moving targets such as ships; the extraction of surface soil moisture in rural areas; and the detailed monitoring of land-use change.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Analysis-Ready Multi Source Backscatter (MSB) Data

Authors: Dr. David Small, Ake Rosenqvist, Clément Albinet
Affiliations: University Of Zürich, solo/EO, ESA
CEOS Analysis-Ready Data (CEOS-ARD) is a continuing effort by the Committee on Earth Observation Satellites (CEOS) to raise the level of Earth Observation (EO) data products, opening them to a broader user community by flattening the learning curve needed to analyse the datasets. In the case of SAR products, one way to do this has been to flatten the backscatter to remove distortions induced by the topography within the imaged area. Terrain-induced distortions are both geometric in nature, i.e. uncorrected shifts in the geographic coordinates of a landscape feature, and radiometric in nature: e.g. a mountain’s uncorrected foreslope is generally brighter than its backslope. That radiometric distortion is not principally caused (as is commonly misconstrued) by the local angle of incidence, but by a more complex non-duality between map coordinates and the radar’s native slant-range geometry. The geometric shifts are commonly corrected, with data ingested into GIS systems or, for example, Google Earth Engine. However, performing robust radiometric terrain correction (RTC) requires more processing, and its availability has long remained less common. The CEOS-ARD group has developed a set of product family specifications, of which four have been approved to date: Normalised Radar Backscatter (NRB), Polarimetric Radar (POL), Ocean Radar Backscatter (ORB), and Geocoded Single Look Complex (GSLC). The NRB product was the first one approved and has seen the most widespread application to date. The radiometric flattening it mandates is a necessary prerequisite for overlaying radar backscatter from one satellite with that of another that acquired its image of the area from a different vantage point: e.g. one at a steep nominal incident angle and another at a flat one. The flattening also enables more robust comparisons of backscatter maps with optical data acquired over the same area.
However, although the radiometric terrain corrections applied often succeed in significantly reducing the backscatter variations caused by local terrain, they are unable to correct for a further effect: the local resolution of each pixel within an RTC image remains distorted in regions with significant topography. The backslopes have exquisitely high resolution, while the foreslopes are stretched so far that their local resolution is far coarser in comparison. Here we describe and demonstrate how, by combining ascending and descending acquisitions over an area, one can even out those variations in local resolution. A time window is defined and all acquisitions from satellites with compatible wavelengths and polarisations are placed on a list of inputs. We generate local weighting factor matrices for each input image, whereby the weighting factor is the normalised reciprocal of the “local contributing area” from each image that was used to perform its radiometric normalisation. Finally, for the full time window, we calculate a single composite by applying each image’s weighting factor and summing all the results. It is important to realise that this time-window approach enables the production of seamless wide-area backscatter composites less constrained by the sensors’ swath widths. We demonstrate combined backscatter maps generated using well-calibrated European and Canadian sensors, enabling wide-area analysis with narrow time windows, unachievable with one mission’s data alone. A CEOS-ARD SAR product family specification for this type of composite backscatter product is, at the time of writing, under development. We provide an update on its status.
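The compositing step described above can be sketched as follows; this is a minimal illustration assuming each geocoded RTC image comes with a co-gridded "local contributing area" map, and the function name and array layout are hypothetical rather than the authors' implementation:

```python
import numpy as np

def msb_composite(images, local_areas):
    """Weighted multi-source backscatter composite.

    images      -- list of terrain-flattened (RTC) backscatter arrays, co-gridded
    local_areas -- matching "local contributing area" arrays from the
                   radiometric normalisation of each acquisition
    Pixels imaged at fine local resolution (small contributing area) receive
    high weight; stretched, coarse-resolution slope pixels receive low weight.
    """
    imgs = np.stack(images)
    w = 1.0 / np.stack(local_areas)    # reciprocal of local contributing area
    w /= w.sum(axis=0, keepdims=True)  # normalise weights per pixel
    return (w * imgs).sum(axis=0)      # weighted sum over all acquisitions

# Two overlapping acquisitions of one pixel: the finer-resolution view dominates.
print(msb_composite([np.array([[2.0]]), np.array([[4.0]])],
                    [np.array([[1.0]]), np.array([[3.0]])]))  # -> [[2.5]]
```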
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Analysis Ready Data for Copernicus SAR Missions: Products, Algorithms, and Prototype Processors

Authors: Federico Minati, Francesco de Zan, Paco Lopez Dekker, Juan M. Lopez-Sanchez, Francesco Trillo, Francesco Vecchioli, Mario Costantini, Clément Albinet, Muriel Pinheiro, Antonio Valentino
Affiliations: B-Open, Delta Phi Remote Sensing, Delft University of Technology, Universidad de Alicante, European Space Agency (ESA) ESRIN, STARION
The development of synthetic aperture radar (SAR) technology has significantly broadened the capabilities of satellite Earth observation (EO) for a wide range of applications, including environmental and geological risk monitoring, disaster management, vegetation and agricultural assessment, and security and intelligence. SAR sensors are active systems that operate in the microwave spectrum and can acquire consistent and reliable images of the Earth's surface regardless of weather conditions and daylight. Moreover, SAR sensors have interferometric capabilities, with the phase of the signal carrying valuable information for various applications. SAR product catalogues now encompass extensive datasets derived from both institutional and commercial missions, utilizing various wavelengths in the microwave spectrum, such as the C, L, and X bands, with different spatial resolution and coverage capabilities. Historical missions like ERS, JERS, and ENVISAT laid the foundation for modern SAR systems, while contemporary missions include institutional missions such as Sentinel-1, COSMO-SkyMed, TerraSAR-X, Radarsat, ALOS, and SAOCOM, and a few commercial missions based on constellations of micro-satellites. Among future satellite SAR missions, it is important to mention NISAR and the ESA missions ROSE-L and Sentinel-1 Next Generation. These efforts collectively enhance EO capabilities, enabling detailed insights at varying resolutions and wavelengths. Despite its potential, SAR data processing remains highly technical and complex, often requiring expertise in radar signal interpretation and data manipulation. For non-expert users, these challenges can lead to misinterpretations, ultimately discouraging the widespread use of SAR data in applications. To address these barriers, the concept of SAR Analysis Ready Data (ARD) was introduced. SAR ARD aims to simplify and standardize SAR data, i.e., to make it easier to process, interpret, and utilize.
By adhering to predefined standards, SAR ARD improves the quality, interoperability, and accessibility of SAR products, ensuring their utility for a broad user base. The Committee on Earth Observation Satellites (CEOS) has been instrumental in promoting SAR ARD, having identified four core specifications, with ongoing efforts to develop a specific standard for interferometric SAR (InSAR) products. Various international projects have already initiated the creation of SAR ARD products, paving the way for streamlined data integration and application development. In this context, the ESA project “Prototype Processor for ARD of Copernicus SAR Missions" aims to make a significant step forward in the development of standard and advanced ARD products. Its primary objective is to identify a set of potential SAR ARD products and define the algorithms required for their generation. During the project's first phase, multiple ARD products are analyzed, including:

  • Single Look Complex (SLC) products that are co-registered and geocoded
  • Polarimetric products for detailed target characterization
  • Interferometric products
  • Normalized radar backscatter and ocean radar backscatter, including enhanced versions with improved image quality
  • Multilooked co-registered phase products
  • Stripmap-equivalent coregistered SLC (CSLC) products

The analysis performed in the first phase includes the generation of a set of sample products (demonstrators) based on Sentinel-1 and L-band data from available satellite missions. These demonstration datasets are intended to cover a number of use cases closely related to ROSE-L primary objectives. Some of the products listed above align with existing CEOS SAR ARD specifications, while others are innovative and offer unique benefits tailored to end-user needs. The products are evaluated based on their accuracy, processing time, and usability, with particular emphasis on their relevance to ROSE-L, Sentinel-1, and other institutional missions.
An important aspect of the project effort is leveraging the substantial work already undertaken by ESA and other organizations, e.g. for NISAR, SAOCOM, and ALOS missions, toward defining SAR ARD standards. Given the diversity of SAR missions — both active and planned — it is critical to establish ARD products that are compatible across multiple SAR sensors. This would enable seamless integration of information from various data sources, leading to substantial advancements in EO applications and facilitating the development of innovative solutions. The outcomes of the project's first phase, which focus on product assessment and initial recommendations, are presented in this paper. The second phase will build upon these findings, developing detailed product specifications for a few selected ARD products. Subsequent steps will include design, development, testing, validation, and integration of software prototype processors. These processors will serve as the foundation for producing standardized SAR ARD products, ultimately bridging the gap between complex SAR data and their practical application across diverse domains.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Sentinel 2 Ground Segment Water Processor Sen2water (AquaticARD-compliant)

Authors: Carsten Brockmann, Martin Böttcher, Kerstin Stelzer, Louis Rivoire, Dr. Sebastien Clerc, Francesco Pignatale, Jerome Louis, Georgia Doxani, Enrico Cadau
Affiliations: Brockmann Consult GmbH, ACRI-ST, Telespazio Germany, Telespazio France, ESA ESRIN
The Sentinel-2 Level 2A product is the Analysis Ready Data (ARD) product in the Sentinel-2 product family. It provides surface reflectances over all land and water surfaces. However, due to the physics of the problem, a land atmospheric scheme such as the one included in the Sen2Cor atmospheric correction method is not optimal for atmospheric correction over water surfaces, and although the resulting surface reflectance spectra are reasonably good, they do not meet the uncertainty requirements of the Copernicus Services. Today, dedicated water atmospheric correction schemes produce aquatic reflectance data that allow a quantitative retrieval of water properties, such as turbidity, chlorophyll-a concentration or trophic state index. In order to meet the requirements of the Services, a second atmospheric correction scheme, called Sen2Water, has been implemented that provides aquatic reflectances for all water surfaces. The processing chain elements in Sen2Water are identical to those used by the Copernicus Land Monitoring Service and the Copernicus Marine Monitoring Service, namely the pixel identification IdePix, the atmospheric correction processors Acolite, C2RCC and Polymer, and merging procedures based on water optics. Since the beginning of Sen2Water development, compliance with the CEOS Aquatic Reflectance Product Family Specification (AR-PFS) has been fundamental. Coincidentally, the AR-PFS was revised during 2024, in parallel with the Sen2Water development. We therefore tried to follow the development as closely as possible in both directions, by contributing to the AR-PFS revision process as well as by updating our implementation according to the new specifications. In this presentation we will introduce the new Sentinel-2 Level 2A processor for aquatic reflectances, Sen2Water, and we will comment on the AR-PFS specification based on our experience.
The development of Sen2Water will be completed at the end of 2024, both as a building block for the Ground Segment and as a stand-alone processor, like Sen2Cor, to be made available to the general user community. The documentation includes an Algorithm Theoretical Basis Document (ATBD), a Verification and Validation Report and technical software documentation, and we will briefly present key results from the Verification and Validation Report. In 2026, the Sen2Water atmospheric correction scheme will be implemented as an integral part of the operational ground segment processor. In this implementation, the output of Sen2Water will be merged with the output of Sen2Cor. As the aquatic reflectance will become an additional data layer within the Level 2A product, ARD compliance for land as well as water is a key requirement.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Evolution of the CEOS-ARD Optical Product Family Specifications

#stac

Authors: Jonathon Ross, Christopher Barnes, Matthew Steventon, Ake Rosenqvist, Peter Strobel, Andreia Siqueira, Takeo Tadono
Affiliations: Geoscience Australia, KBR contractor to the USGS, Symbios Communications, solo Earth Observation (soloEO), Japan Aerospace Exploration Agency, European Commission
The CEOS Land Surface Imaging Virtual Constellation (LSI-VC) has over 20 members representing 12 government agencies and has served as the forum for developing the CEOS Analysis Ready Data (CEOS-ARD) initiative since 2016. In 2017, LSI-VC defined CEOS-ARD Product Family Specification (PFS) optical metadata requirements for Surface Reflectance and Surface Temperature that reduced the barrier to successful utilization of space-based data for improving understanding of natural and human-induced changes to the Earth system. This resulted in CEOS-ARD compliant datasets becoming some of the most popular types of satellite-derived optical products generated by CEOS agencies (e.g., USGS Landsat Collection 2, Copernicus Sentinel-2 Collection 1, the German Aerospace Center) and commercial data providers (e.g., Catalyst/PCI, Sinergise). Since 2022, LSI-VC has led the definition of two new optical PFSs (i.e., Aquatic Reflectance and Nighttime Lights Surface Radiance) and four Synthetic Aperture Radar (SAR) PFSs (i.e., Normalised Radar Backscatter, Polarimetric Radar, Ocean Radar Backscatter, and Geocoded Single-Look Complex), reflecting the recognized importance of providing satellite Earth observation data in a format that allows for immediate analysis. As of December 2024, eleven data providers have successfully achieved CEOS-ARD compliance, with a further 12 organizations either in peer review or under development for future endorsement. However, this has engendered a need for transparency, version control, and (most importantly) a method to facilitate consistency across the different PFSs and alignment with SpatioTemporal Asset Catalogs (STAC). Thus, all future PFS development will be migrated into a CEOS-ARD GitHub repository. This will facilitate broader input from the user community, which is critical for the optical specification to meet real-world user needs and to ensure broader data provider adoption.
CEOS agencies have concurred that now is the time, given the increased traceability and version control offered by GitHub, to parameterise the CEOS-ARD specifications and introduce inherent consistency across all optical and SAR PFS requirements while benefiting from active user feedback. In this presentation, we will share the status of the optical PFS transition to GitHub, as well as a set of implementation practices/guidelines and a governance framework that will broaden the portfolio of CEOS-ARD compliant products so they can become easily discoverable, accessible, and publicly used.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Sentinel-2 Level-2A ARD Cloud masking: Operational Performance, Foreseen Improvement and Perspectives

Authors: Jerome Louis, Avi Putri Pertiwi, Vincent Laduguie, Francesco C. Pignatale, Raquel De Los Reyes, Silvia Enache, Sebastien Clerc, Florian Poustomis, Rosalinda Morrone, Ferran Gascon, Valentina Boccia
Affiliations: Telespazio France, German Aerospace Centre, Telespazio Germany, CS Group, ACRI-ST, Starion, European Space Agency
The Copernicus Sentinel-2 mission is based on a constellation of two identical satellites in the same orbit, delivering high-resolution optical images for land monitoring, emergency response and security services. The satellites' multispectral imager provides a set of 13 spectral bands spanning from the visible and near infrared to the shortwave infrared, with resolutions of 10 m to 60 m and a swath width of 290 km. Following the operational implementation of processing baseline (PB) 04.00, the CEOS Working Group on Calibration and Validation (WGCV) established that ESA’s Sentinel-2 Level-2A Surface Reflectance (SR) products are CEOS-ARD compliant at the Threshold level. This compliance requires Per-Pixel Metadata including information on No Data, Saturation, Cloud, and Cloud Shadow. This task is performed by the Sentinel-2 Level-2A processor, whose main purpose is to correct Sentinel-2 Level-1C products (radiometrically corrected imagery) for the effects of the atmosphere in order to deliver a Level-2A surface reflectance product. The per-pixel metadata information is provided in the Cloud Screening and Scene Classification (SCL) map, provided as a side product together with the Aerosol Optical Thickness (AOT) and Water Vapour (WV) maps. The Cloud Screening and Scene Classification module runs prior to the atmospheric correction and provides a scene classification map divided into 11 classes. This map does not constitute a land cover classification map in a strict sense. Its main purpose is to be used internally in Sen2Cor’s atmospheric correction module to distinguish between cloudy, clear and water pixels. Two quality indicators are also provided: a Cloud and a Snow confidence map with values ranging from 0 to 100 (%). The Level-2A processor is based on Sen2Cor version 2.12, which was developed to support the recent Sentinel-2C unit, ensuring data-quality consistency with the Sentinel-2 Collection-1 reprocessing.
The presentation showcases updated validation results for the SCL cloud masking performance of the latest L2A processing baseline. These results encompass the reprocessed Collection-1 data and current operational products, using reference data generated through manual labelling of test images. It also describes the proposed approach to mitigating cloud omission in cases of very low-altitude clouds that are missed by the SCL parallax algorithm, which was originally designed to reduce false cloud detections over urban areas and bright surfaces but can lead to some cases of cloud omission. Preliminary findings indicate significant improvements in addressing omissions of very low-altitude clouds. However, the operational implementation of this solution will require additional adjustments for areas including large bright surfaces, either natural, such as dry salt lakes, or artificial, such as greenhouse areas like those in the Almería region. Finally, in the context of the increasing occurrence of large wildfires, the perspectives concerning the masking of haze and smoke emitted by those fires are addressed, with the objective of ensuring that smoke-affected pixels can be excluded from quantitative analyses in downstream applications.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: D.04.03 - POSTER - Enabling Machine Learning Operations on Cloud Platforms

Training, deploying and maintaining novel AI machine learning models to achieve reliable and efficient operations is a complex task for scientists and service developers. This session focuses on the state of the art of cloud-based solutions supporting development and operations of ML models in a DevOps framework.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Earth Observation-Driven Sustainable Energy Solutions in Nigeria with Sentinel imagery, Machine Learning and Cloud-based Platforms

Authors: Professor Kevin Tansey, Matthew Payne, Mohammed Ozigis, Bilal Majeed
Affiliations: University Of Leicester
This study presents an innovative approach to supporting sustainable energy production in Nigeria by integrating Earth Observation (EO) data and machine learning within a user-friendly cloud platform. The project is funded by the United Kingdom's InnovateUK programme and is carried out in collaboration with PyroGenesys, a company developing pyrolysis machines for biofuel and electricity generation. The University of Leicester, in partnership with CGI UK Ltd. and the agri-tech mechanisation company Hello Tractor, is developing EO-driven solutions to identify optimal locations for maximising crop harvest waste, alongside mechanisation services that can be offered in a cost-effective manner to ensure efficient feedstock management. This initiative is hosted on CGI's GeoData360 platform, a cloud-based EO ecosystem enabling seamless access to and analysis of geospatial data. PyroGenesys' pyrolysis technology utilises agricultural residues, such as cassava stems, as feedstock, offering a sustainable alternative to the traditional biomass burning practices prevalent in Nigeria. This technology provides clean energy solutions to rural communities while mitigating the environmental impact of open burning. To support this initiative, we have developed two key services on the GeoData360 platform:

1. Harvest Detection: Leveraging Sentinel-2 data and Google Earth Engine, this service provides time-series analysis of vegetation indices (Normalised Difference Vegetation Index, Normalised Difference Water Index, Soil Adjusted Vegetation Index, Normalised Burn Ratio) to assess the historical reliability of agricultural production within a specified area of interest. This information enables PyroGenesys to evaluate the long-term availability of agricultural residues for sustainable feedstock sourcing, ensuring consistent operation of their pyrolysis machines.

2. Crop Type Detection: This service employs Sentinel-1 and Sentinel-2 imagery in conjunction with machine learning algorithms to map crop types.
By exploiting the sensitivity of Sentinel-1 backscatter to crop biophysical parameters and the spectral information from Sentinel-2, this service enables accurate crop identification and estimation of potential biomass residue yields. Hello Tractor collect a wide range of farming mechanisation data that we have used to train the machine learning models. This information is crucial for determining the required agricultural area to support individual pyrolysis machines, ensuring efficient resource utilisation and optimized deployment strategies. GeoData360 is hosted on Amazon Web Services (AWS), providing a scalable and robust cloud infrastructure. The platform leverages AWS's capabilities to automatically adjust computing resources based on processing demands, ensuring efficient and responsive performance for users. By providing accessible and actionable EO-driven insights through this dynamic cloud environment, this project empowers PyroGenesys to strategically deploy their pyrolysis technology, maximising the socio-economic and environmental benefits for rural communities in Nigeria and potentially into other countries as well. These benefits include access to clean energy, improved agricultural practices, and enhanced livelihoods through income generation and local employment opportunities. This approach demonstrates the transformative potential of integrating EO and digital technologies within a cloud-based ecosystem to support sustainable development goals in developing countries. In the talk, the service will be demonstrated and case studies presented.
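The four vegetation indices used by the Harvest Detection service can be computed per pixel from surface-reflectance bands; the sketch below is a generic illustration (the band choices and the SAVI soil factor L = 0.5 are common conventions, not necessarily the project's actual configuration):

```python
import numpy as np

def vegetation_indices(nir, red, green, swir, L=0.5):
    """Per-pixel indices from surface-reflectance arrays (Sentinel-2 style
    bands: NIR=B8, red=B4, green=B3, SWIR=B12).
    Returns NDVI, NDWI, SAVI and NBR."""
    ndvi = (nir - red) / (nir + red)
    ndwi = (green - nir) / (green + nir)            # McFeeters formulation
    savi = (1 + L) * (nir - red) / (nir + red + L)  # soil-adjusted variant
    nbr = (nir - swir) / (nir + swir)
    return ndvi, ndwi, savi, nbr

# Healthy vegetation: high NIR, low red/SWIR -> high NDVI and NBR.
ndvi, ndwi, savi, nbr = vegetation_indices(
    nir=np.array([0.4]), red=np.array([0.05]),
    green=np.array([0.08]), swir=np.array([0.1]))
print(float(ndvi[0]))  # ~0.78
```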
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: DeployAI: Leveraging AI and Earth Observation for Environmental Applications

Authors: Mohanad Albughdadi, Vasileios Baousis, Antonis Troumpoukis, Vasileios Vatellis, Antonis Ganios, Marica Antonacci, Federico Fornari
Affiliations: European Centre for Medium Range Weather Forecasts, European Centre for Medium Range Weather Forecasts, NCSR-Demokritos, European Centre for Medium Range Weather Forecasts
DeployAI is an ambitious initiative that aims to build a fully operational AI-On-Demand Platform (AIoDP) to empower trustworthy, ethical, and transparent European AI solutions for industrial and public-sector applications. The DeployAI project leverages the insights of the preparatory Pre-PAI project, the AI4Europe core platform, and other contributions within the AI-On-Demand (AIoD) ecosystem, integrating cutting-edge technologies and practices to establish a comprehensive, sustainable, and impactful AIoD platform. This platform is designed to be a one-stop shop for AI tools, fostering the growth of the European AI innovation ecosystem. By adopting a two-layer approach, the platform focuses on technical excellence as well as stakeholder engagement, network expansion and long-term sustainability. Early efforts will focus on extending networks, organizing open calls, and fostering collaboration, with the long-term vision of establishing a dedicated business entity for platform management. As part of its mission, DeployAI strengthens connections to high-impact domains such as environment, energy, logistics, cybersecurity, and smart cities. It involves a wide range of stakeholders, including SMEs, industry, academia, and civil society, through a programme of open calls, and it takes up the challenge of trustworthiness with a trustworthiness assessment suite that adopts top-down and bottom-up assessment methods. This will help position Europe as a leader in ethical and reliable AI solutions. By focusing on environmental use cases, DeployAI supports the application of AI to Earth Observation and Earth modeling data to address critical challenges in climate monitoring, resource management, and sustainable development. These efforts ensure that the platform not only advances AI innovation but also contributes meaningfully to Europe's environmental resilience and sustainability goals.
In this talk we will present various AI solutions leveraging Earth Observation (EO) and Earth modelling data which have already been integrated into the AIoD platform. These solutions can be assembled into pipelines by drag and drop and run on AI runners or on a user's Kubernetes cluster. For instance, one available solution performs object detection and prompt-based segmentation on satellite images, enabling the identification of various objects. It supports a range of environmental applications: air and maritime traffic, pollution monitoring, transportation analysis, and assessment of green spaces in urban areas. Another available solution estimates the biophysical parameter Leaf Area Index from Sentinel-2 images. Sentinel-2 is a high-resolution multispectral satellite mission designed for land and vegetation monitoring, and such a solution provides vital information for crop monitoring and forest dynamics, as well as climate modeling applications. Furthermore, Sentinel-2 image enhancement through super-resolution techniques is under development and integration, enabling even more detailed analysis of environmental features. Similarly, super-resolution models using ERA-5 (the most complete reanalysis dataset yet made available by the European Centre for Medium-Range Weather Forecasts (ECMWF), providing hourly atmospheric, land, and ocean data) are under integration to support applications in climate research, weather prediction, and environmental modeling. These applications exemplify how DeployAI bridges cutting-edge AI with EO data to address environmental challenges; results will be presented during the symposium.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Artificial Intelligence Techniques for Geophysical Image Processing Onboard Satellites

Authors: Alessia Sbriglio, Professor Giovanni Battista Palmerini
Affiliations: Scuola di Ingegneria Aerospaziale, Università di Roma La Sapienza
In recent years, the application of object detection in satellite imagery has gained increasing relevance, thanks to significant advancements in Earth Observation (EO) technologies and the growing need to process data in real-time with high performance. Satellite platforms, particularly those in the Sentinel series, now provide high-resolution images that can be efficiently processed, opening new opportunities for remote sensing applications such as automated object detection, geo-referencing, and environmental monitoring. In this context, object detection algorithms, particularly deep learning models like YOLO, play a crucial role, enabling the rapid and accurate identification of features in satellite images. This study focuses on the implementation of the YOLOv8 model for identifying lakes in Italy, demonstrating its excellent performance in satellite imagery. Specifically, it was applied to the task of detecting and segmenting lakes in high-resolution Sentinel-2 images. The model was trained with different data configurations and settings, observing significant improvements in detection accuracy and consistency when trained for extended periods, ranging from 60 to 100 epochs. The results indicate that YOLOv8 is able to outline the boundaries of lakes, even under challenging conditions such as partial obstructions or varying environmental factors: this makes it particularly suitable for applications requiring real-time automated detection. The study also highlights the effectiveness of YOLOv8 in processing satellite imagery, both for the classification of individual lakes and for generic classification. The increased accuracy in detecting lakes, especially in complex scenarios, further demonstrates the robustness of the model, confirming that YOLOv8 is particularly well-suited for Earth Observation applications based on satellites. 
The model's ability to consistently handle images also demonstrates its potential for long-term monitoring applications in real-world settings. The results of this work contribute to a broader effort to integrate advanced object detection models, such as YOLO, into autonomous systems, particularly for Orbit Determination (OD). The automatic identification of geophysical features in satellite images can contribute to autonomously determining the satellite's position, providing a valuable backup to traditional OD methods that rely on ground tracking or GNSS solutions. This approach has significant potential for future space missions, especially those that may face challenges related to OD, such as lunar or Mars missions, where GNSS systems are unavailable and ground tracking is costly or impractical. Using satellite imagery also for orbit determination could reduce costs and increase the autonomy of space missions, while keeping the observation payload's original function unaltered and providing the expected data return.
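As a hedged illustration of how lake segmentation results like these are typically scored (not code from the study itself), intersection-over-union (IoU) between a predicted mask and a reference mask can be computed with numpy; the masks below are hypothetical:

```python
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union between two boolean segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter) / float(union) if union else 1.0

# Hypothetical 4x4 lake masks: prediction matches truth on 2 of 3 pixels.
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1] = True; truth[1, 2] = True   # 3 lake pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1] = True; pred[3, 3] = True     # 2 correct, 1 false positive
print(round(mask_iou(pred, truth), 2))     # → 0.5
```

IoU close to 1 means the predicted lake boundary closely follows the reference outline; values near 0.5, as here, indicate substantial disagreement.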
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: AI4QC Training Data Portal: AI-Ready Datasets for Quality Control in Earth Observation Data

Authors: Alice Baudhuin, Ana Kostovska, Pance Panov
Affiliations: Bias Variance Labs, d.o.o
The AI4QC (Artificial Intelligence for Quality Control) project, funded by the European Space Agency (ESA), has curated a comprehensive collection of datasets tailored for the automated detection and segmentation of clouds and novel anomalies in Earth Observation (EO) data. These anomalies include uncorrected radio frequency interference and diverse image artifacts, derived from Copernicus Sentinel-1 and Sentinel-2 missions. The project aims to address critical challenges in ensuring the quality of EO data, providing a robust foundation for AI applications in this domain. The AI4QC Training Data Portal, accessible at http://purl.org/ai4qc, serves as a centralized resource for exploring and accessing these datasets. It enables users to perform searches using predefined metadata fields and provides direct links to Zenodo, where raw data files and AI-ready datasets can be downloaded. These datasets have been processed and integrated into the AiTLAS toolbox—an open-source resource for AI in Earth Observation (AI4EO)—available at https://github.com/biasvariancelabs/aitlas. The AiTLAS toolbox simplifies the ingestion, visualization, and analysis of EO data for machine learning applications, making it a valuable tool for researchers and practitioners alike. Each dataset in the portal is enriched with semantically rich metadata, designed following established community ontologies and vocabularies, and available at https://github.com/biasvariancelabs/AI4QC. This metadata, structured using RDF (Resource Description Framework), adheres to the FAIR data principles—ensuring data is Findable, Accessible, Interoperable, and Reusable. The AI4QC Training Data Portal is designed to provide users with comprehensive tools for efficient dataset discovery, metadata exploration, and dataset adequacy assessment. Users can query datasets using various parameters, ensuring seamless access while adhering to stringent standards of data management and accessibility.
The platform’s conceptual design illustrates a well-defined data flow and interactions among its components, facilitating the management and utilization of AI-ready datasets. Key functional components include: (1) AI4QC metadata schema and creation service — defines and generates RDF triples that provide machine-readable descriptions of datasets; (2) Metadata backend — comprising an RDF triple store (using Apache Jena TDB) and a SPARQL endpoint, the backend ensures efficient storage and retrieval of metadata; (3) AI4QC data frontend interface — a user-friendly web platform built with NodeJS and ReactJS, enabling semantic searches, metadata exploration, and dataset downloads; and (4) External data repository — hosts datasets under a permissive, open-source license, ensuring long-term accessibility. Currently, the AI4QC catalog comprises 16 datasets, including 9 curated datasets prepared by the project team. These datasets are archived in the Zenodo repository, with an AI4QC community page at https://zenodo.org/communities/ai4qc/, ensuring long-term accessibility. This centralized resource not only streamlines dataset discovery and usage but also promotes the adoption of best practices in data management, fostering the development of more accurate and reliable AI-driven quality control systems in Earth Observation.
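As a minimal, hypothetical sketch of the portal's search-by-predefined-metadata-fields idea (field names and records are invented for illustration and are not the actual AI4QC schema):

```python
# Toy catalogue of dataset metadata records (values hypothetical).
CATALOG = [
    {"id": "s1-rfi-v1", "mission": "Sentinel-1", "task": "anomaly-detection",
     "download": "https://zenodo.org/communities/ai4qc/"},
    {"id": "s2-clouds-v2", "mission": "Sentinel-2", "task": "cloud-segmentation",
     "download": "https://zenodo.org/communities/ai4qc/"},
]

def search(catalog, **filters):
    """Return records whose metadata matches every given field exactly."""
    return [rec for rec in catalog
            if all(rec.get(k) == v for k, v in filters.items())]

hits = search(CATALOG, mission="Sentinel-2")
print([rec["id"] for rec in hits])  # → ['s2-clouds-v2']
```

In the real portal, the same idea is backed by an RDF triple store queried through a SPARQL endpoint rather than an in-memory list.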
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Unlocking ML and Foundation Models within openEO

Authors: Pratichhya Sharma, Ir. Jeroen Verstraelen, Ir. Victor Verhaert, Luyts Andreas, Van Tricht Kristof, Ir. Jeroen Dries, Dr. Hans Vanrompay
Affiliations: VITO Remote Sensing
The application of Machine Learning (ML) and Deep Learning (DL) in Remote Sensing has revolutionized how vast amounts of data from Earth Observation (EO) satellites are analyzed and utilized. Their transformative impact is especially notable in tasks such as scene classification, object detection, and segmentation, where the ability to discern complex patterns, spatial relationships, and subtle variations in data is essential. These advantages are crucial as EO data volumes continue to grow, driven by missions such as Sentinel-1, Sentinel-2, and other satellite programs. To make the application of ML models more accessible for EO practitioners, openEO has integrated key machine learning tools into its workflows. Among these is the Random Forest algorithm, a popular and widely used ML model for classification tasks. Known for its robustness and high accuracy, Random Forest operates by building an ensemble of decision trees, each trained on a subset of the data, and combining their predictions to deliver strong overall performance. With openEO, the integration of Random Forest streamlines the entire workflow. Users can now leverage this algorithm to classify EO data efficiently, without requiring extensive expertise in ML. While Random Forest is effective for many classification tasks, the demands of modern EO applications increasingly call for more sophisticated ML techniques that can handle large, diverse, and often unlabeled datasets. This is where the use of advanced foundation models becomes significant. Foundation models are pre-trained on massive datasets and can be fine-tuned for specific applications. These models go beyond traditional ML approaches by capturing complex, high-level features and providing scalable solutions for handling global EO datasets. A notable example is CORSA [1], a variational autoencoder designed specifically for hyperspectral and multispectral data. 
CORSA excels at compressing high-dimensional satellite data, such as those from the Sentinel-1, Sentinel-2, or PRISMA missions, while preserving the key information required for downstream tasks. Beyond compression, CORSA embeddings can be seamlessly integrated into AI workflows for tasks like land cover classification or change detection, and this adaptability also supports onboard implementations, enabling rapid disaster response. Another advanced model integrated into openEO workflows is PRESTO [2], a transformer-based model pre-trained on extensive satellite time series data. PRESTO has proven particularly effective in agricultural applications, such as global crop identification, as demonstrated in the ESA WorldCereal project. By leveraging the sequential and contextual learning capabilities of transformers, PRESTO can analyze temporal patterns in satellite imagery to identify crops with high accuracy. This capability is a game-changer for large-scale agricultural monitoring, enabling timely insights into crop health, yields, and patterns of land use. An exciting future direction for openEO involves the native support for embeddings, offering transformative opportunities to improve EO applications. Embeddings encode high-dimensional data into compact, low-dimensional representations, capturing nuanced patterns, spatial relationships, and temporal dynamics. This approach is particularly valuable for processing the complexity of EO datasets. By leveraging embeddings, it becomes possible to reduce computational costs and resource demands significantly, as models operate on compact representations rather than raw data. This not only accelerates data processing but also facilitates scalability for handling massive global datasets. Furthermore, embedding support would enable more efficient fine-tuning of pre-trained foundation models, further enhancing precision and adaptability in EO analysis.
In this presentation, we will explore how openEO facilitates the integration and scaling of these cutting-edge ML models to address the challenges and opportunities in remote sensing. Furthermore, the presentation will delve into the broader implications of these advancements for the EO community. By streamlining the integration of sophisticated ML models into accessible workflows, openEO lowers the barrier to entry for researchers and practitioners, enabling them to harness the power of ML and DL without requiring deep expertise in programming or algorithm development. [1] B. Beusen, A. Luyts, X. Ivashkovych and T. Van Achteren, "Lightweight and Efficient: A Family of Multimodal Earth Observation Foundation Models," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 2841-2846, doi: 10.1109/IGARSS53475.2024.10641132. [2] Tseng, G., Zvonkov, I., Purohit, M., Rolnick, D., Kerner, H.: Lightweight, Pre-Trained Transformers for Remote Sensing Timeseries. Preprint Available on arXiv:2304.14065 (2023)
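The Random Forest workflow described above runs server-side through openEO; as a local stand-in only, the same ensemble-of-trees idea can be sketched with scikit-learn on synthetic two-band pixel values (all numbers hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic per-pixel features: [red, NIR] reflectance for two classes.
water = rng.normal([0.05, 0.03], 0.01, size=(50, 2))   # low NIR
veg = rng.normal([0.06, 0.45], 0.03, size=(50, 2))     # high NIR
X = np.vstack([water, veg])
y = np.array([0] * 50 + [1] * 50)                      # 0 = water, 1 = vegetation

# Ensemble of decision trees, each trained on a bootstrap sample,
# whose votes are combined into the final classification.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify two unseen pixels: one water-like, one vegetation-like.
print(clf.predict([[0.05, 0.02], [0.07, 0.50]]))  # → [0 1]
```

In an openEO workflow the training and prediction happen in the backend's data cubes, so users never download the imagery; this local sketch only mirrors the classifier's behaviour.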
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: The Earth Observation Training Data Lab (EOTDL) - Addressing Training Data related needs in the Earth Observation community.

#stac

Authors: Juan B. Pedro
Affiliations: Earthpulse
The Earth Observation Training Data Lab (EOTDL), launched by the European Space Agency (ESA), addresses critical challenges in the development of AI for Earth Observation (EO) applications. A major barrier to leveraging AI in EO is the lack of accessible, high-quality training datasets. These datasets are costly and complex to create, requiring extensive manual labeling, expert input, and often in-situ validation, which limits innovation and hinders the growth of EO-based solutions. The EOTDL aims to tackle these challenges by providing an open, collaborative platform that offers a suite of tools for generating, curating, and utilizing AI-ready training datasets and pre-trained ML models. This platform includes a cloud-based repository with over 100 datasets spanning multiple EO applications, from computer vision tasks like classification and object detection to advanced parameter estimation and 3D analysis. In addition to the repository of training datasets, the platform also includes a repository of pre-trained machine learning models, which accelerates the development process for users by providing a starting point for various EO tasks. EOTDL facilitates seamless data access, supports multi-GPU training directly in the cloud through its cloud workspace, and provides users with tools to create and train models effectively. The platform's cloud workspace is equipped with GPU machines, enabling users to create datasets and train models with high computational efficiency. EOTDL also enables interoperability with third-party platforms, and supports the building of third-party applications on top of its infrastructure, enhancing the versatility of the platform. The EOTDL is built upon open-source foundations, with all code hosted on GitHub along with contributing guides and tutorials to foster community involvement. It promotes community engagement through collaborative tools, user contributions, and incentives.
Users can contribute by enhancing existing datasets or adding new ones, with rewards to encourage active participation. The platform includes a labeling tool with active learning capabilities, which simplifies and improves the process of labeling datasets. This community-driven approach ensures a growing, diverse repository that evolves to meet the needs of researchers, industry practitioners, and engineers, helping to unlock the full potential of AI for Earth Observation. Feature engineering is another key aspect facilitated by EOTDL, particularly through integration with openEO. This collaboration allows users to leverage openEO's standardized interfaces and powerful processing capabilities for feature extraction from Earth Observation data. By using openEO, users can perform complex geospatial analyses and transformations efficiently, thereby enhancing the quality and relevance of features used for training machine learning models. This integration not only supports reproducibility but also improves the accessibility of sophisticated feature engineering workflows for a wide range of EO applications. The EOTDL also places strong emphasis on dataset and model metadata management, utilizing the SpatioTemporal Asset Catalog (STAC) standard along with custom STAC extensions. This approach ensures that metadata is standardized, searchable, and compatible across various EO datasets and models. The use of custom STAC extensions allows EOTDL to accommodate specific requirements for EO datasets, such as quality metrics, labeling details, and data provenance. This metadata framework significantly enhances dataset discoverability, quality assurance, and ease of use for the community. EOTDL's flexible access mechanisms—via APIs, web interfaces, and Python libraries—make it accessible to a wide range of users, thus creating a powerful ecosystem for advancing EO capabilities. 
Future plans include the introduction of gamification features to further incentivize contributions and the exploration of commercialization opportunities through premium features, thereby expanding the scope and sustainability of the platform.
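As an illustrative sketch of the STAC-based metadata management described above (field values are hypothetical, and EOTDL's actual custom extensions may differ), a minimal STAC-style Item is just structured JSON:

```python
import json
from datetime import datetime, timezone

# Minimal STAC-style Item (values hypothetical); custom extensions can add
# fields such as quality metrics, labeling details and data provenance.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "sample-patch-0001",
    "geometry": {"type": "Point", "coordinates": [13.4, 52.5]},
    "bbox": [13.4, 52.5, 13.4, 52.5],
    "properties": {
        "datetime": datetime(2024, 1, 15, tzinfo=timezone.utc).isoformat(),
        "ql:quality_metric": 0.97,  # hypothetical custom-extension field
    },
    "links": [],
    "assets": {"data": {"href": "s3://bucket/patch.tif", "type": "image/tiff"}},
}

required = {"type", "stac_version", "id", "geometry", "bbox",
            "properties", "links", "assets"}
assert required <= item.keys()
print(json.dumps(item["properties"], indent=2))
```

Because every record shares this structure, catalogues built on STAC can be crawled, filtered and merged by generic tooling, which is what makes the datasets searchable across platforms.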
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: GEODES: a new CNES EO Data platform integrated in a federated public platform ecosystem (data, tools and services) with Data Terra Research Infrastructure

Authors: Vincent Martin, François JOCTEUR MONROZIER, Marjorie ROBERT, Benjamin HUSSON, Hugo FOURNIER, Olivier Melet, Yannick Tanguy, Guillaume EYNARD-BONTEMPS, Céline
Affiliations: CNES
It is crucial that long-term observation series, along with pioneering Earth observation (EO) data such as SWOT and the upcoming CO3D and Trishna missions, are taken up to meet the needs of scientific and research institutions, public policies, and the private sector. The specific characteristics of novel datasets and their potential applications increasingly call for federated access and data fusion services. The GEODES portal, previously designated as the Geo Data Hub during its beta phase, has been developed by the French space agency, CNES, with the objective of centralising the distribution of past, present and future CNES EO data. GEODES consolidates the national Copernicus Ground Segment services (previously provided by the PEPS platform), along with CNES EO mission data, as well as more advanced and scientific products within Data Terra, the French "Earth System" research infrastructure. Furthermore, GEODES offers a range of interactive tools designed to facilitate scientific and innovative processes through the provision of on-demand processing and prototyping services. GEODES contributes to the uptake of EO data in general and national EO data in particular through the provision of cross-cutting services, and it constitutes a significant input for the development of downstream services and digital twin initiatives. The portal employs high standards of interoperability to facilitate the collection of metadata by other catalogues and to enable their referencing in turn, as well as providing standard API access for its computing processes. GEODES has been constructed as the spatial data platform component of the national Gaia Data project, which federates data access, processing services and resource sharing for the scientific community within Data Terra. Furthermore, GEODES is closely integrated with European data access platforms and serves the Destination Earth ecosystem.
CNES is currently deploying the openEO APIs with the objective of becoming a data and services provider within this ecosystem. GEODES will capitalise on the expertise of CNES in data processing and the specifics of national sensors. In particular, the newly launched PLUTO initiative has the following objectives: to map CNES and EO community data processing tools; to share useful CNES-developed algorithms and tools; and to develop missing software bricks in order to foster the study of cross-cutting use cases. Furthermore, PLUTO bricks will be deployed within the GEODES Datalab, which is based on JupyterHub. Additionally, GEODES will implement on-demand processing developed by CNES and its partners, including AI-driven cloud processing, object detection, and super-resolution.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Leveraging Cutting-Edge Technologies in Big Data Platforms To Speed Up AI Development in EO

Authors: Simona Gargiulo, Alessandro Marin, Gaetano Pace, Cesare Rossi, Dianka Sileymane, Michal Kawulok, Maciej Ziaja, Rafal Zogala, Ms. Heike Bach, Nicolas Corti, Silke Migdall
Affiliations: CGI, Silesian University of Technology, KP Labs Sp. z o.o., VISTA Remote Sensing in Geosciences GmbH
Artificial Intelligence (AI) has become an essential tool in Earth Observation (EO), enhancing the capabilities to monitor, analyze, and understand environmental changes from space. Machine and deep learning (ML/DL) techniques play a pivotal role in processing vast amounts of data from satellites, aerial and ground-based systems, providing insights into a wide range of issues such as climate change, natural disasters, and environmental monitoring. As EO data from remote sensing platforms grow increasingly complex and voluminous, many organizations need to streamline their AI workflows, from development to production, while guaranteeing their reproducibility, scalability and portability. Kubeflow is an ecosystem of open-source projects designed to automate, simplify and standardize ML/DL workflows. Deployed to a Kubernetes cluster, regardless of the underlying cloud (private or public), it leverages the cluster's hardware capabilities (GPUs, etc.), scalability and portability. The platform supports a variety of tools for data and model version control, hyperparameter tuning, model optimization, real-time inference, and continuous integration, all while minimizing resource overhead. Kubeflow is integrated into Insula, an innovative EO platform operated by CGI Italy and designed to handle the processing of large-scale EO data efficiently, on which the Food-Security Explorer is built. Insula makes it possible to integrate new services and datasets driven by user needs thanks to its standard interfaces, resource scalability to optimize large-scale processing, and a robust monitoring system that provides real-time supervision to identify problems and make rapid adjustments for greater efficiency and accuracy. This paper will demonstrate Kubeflow's capabilities by presenting three showcases developed in the context of the INFER (artificial iNtelligence for Food sEcuRity) project, financed by the European Space Agency (ESA).
The first showcase develops the hyper transformer (HT) model, a unique approach to the task of pansharpening; the second derives the locations of plastic-covered fields (plasticulture) from hyperspectral data and uses these geographic pins to train a multispectral deep learning model that allows large-scale classification of fields under plastic; and the last aims at training a deep learning model for building segmentation using OpenStreetMap as ground truth.
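For readers unfamiliar with the pansharpening task addressed by the first showcase, a classical Brovey transform (a common baseline, not the project's hyper transformer model) shows what the operation computes: it injects the spatial detail of the panchromatic band into the multispectral bands.

```python
import numpy as np

def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Classical Brovey transform: rescale each (upsampled) multispectral
    band by the ratio of the panchromatic band to the band mean."""
    intensity = ms.mean(axis=0)                 # (H, W) pseudo-pan
    ratio = pan / np.maximum(intensity, 1e-6)   # guard against division by zero
    return ms * ratio                           # broadcast over the band axis

# Toy 2x2 scene: 3 flat multispectral bands plus a sharper pan band.
ms = np.array([[[0.2, 0.2], [0.2, 0.2]]] * 3)  # shape (3, 2, 2)
pan = np.array([[0.2, 0.4], [0.1, 0.2]])       # pan carries the spatial detail
sharp = brovey_pansharpen(ms, pan)
print(sharp[0])  # each band now follows the pan band's spatial pattern
```

Deep learning approaches like the HT model aim to do this injection with less spectral distortion than such ratio-based baselines.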
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: F.04.07 - POSTER - Earth Observation for Tracking Global Sustainability and Biodiversity Targets

The 2030 Agenda for Sustainable Development and the Kunming-Montreal Global Biodiversity Framework (GBF) set ambitious targets for biodiversity conservation, ecosystem restoration, and sustainable development. Achieving these goals requires timely, reliable, and spatially explicit data to track progress and inform decision-making. Earth Observation (EO) plays a critical role in monitoring environmental indicators, supporting national reporting, and guiding effective conservation and restoration efforts.
EO provides unparalleled capabilities to support operational monitoring of SDG indicators, helping countries integrate geospatial data into National Statistical Systems to track development policies and sustainability progress. The integration of EO with socio-economic data is essential for delivering high-quality, timely, and actionable information to measure progress on SDG targets. However, challenges remain in EO data accessibility, standardization, and operational integration to ensure that national and global reporting frameworks effectively benefit from EO-based solutions.
In the context of biodiversity, EO is key to supporting national monitoring and reporting on the GBF indicators. EO is also an essential tool for ecosystem conservation and restoration, supporting the identification, mapping, and management of priority areas for protection and rehabilitation. The ability to assess biodiversity at multiple scales, from protected areas to entire landscapes, provides an opportunity to bridge the gap between scientific advancements and national reporting needs. There is a growing demand for accessible and standardized EO-based indicators to support conservation efforts and assess the effectiveness of protected area management. Monitoring ecosystem conditions, connectivity, and resilience is crucial to tracking progress toward restoration targets (e.g., GBF Target 2 and EU Nature Restoration Law). The integration of EO with in-situ data (e.g., bioacoustics, eDNA, LTER) further enhances conservation planning and adaptive management strategies.
This session will explore the latest EO-based approaches for tracking SDG indicators, biodiversity targets, and ecosystem conservation and restoration efforts.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Assessing the potential of social media data to support remote sensing data in migration analysis – An explorative modelling approach for Ghana, 2015-2020

Authors: Johannes Mast, Dr. Marta Sapena, Dr. Hannes Taubenböck, Dr. Christian Geiß
Affiliations: German Aerospace Center (DLR), University of Würzburg, University of Bonn
Human migration in West Africa is a complex phenomenon driven by political, environmental, economic and social factors, among others. In Ghana, north-south migration has, for example, been linked to rainfall variability and soil degradation. In a previous study, remote sensing data and microdata from Ghana’s 2021 census were used to measure the migration effectiveness index (MEI). This index is the net to gross migration ratio and measures the ability to attract, avoid or balance migration in a way that is comparable between administrative units. The aforementioned study reached an R² of 52.5% using purely remote sensing data-driven factors. Including additional factors derived from census data increased the R² to 60%, a substantial improvement. Census data is a valuable resource, but it is often unavailable in many countries. Even when it is accessible, its limited frequency and lack of comparability across countries present significant challenges. Social media data could potentially reduce this gap with its capacity to match Earth Observation data in frequency and comparability. If geolocated social media data could serve as a substitute for some census variables, this would allow migration researchers to assess physical and socioeconomic factors at flexible time and spatial scales. This, in turn, could lead to a better understanding of the impacts of climate change, natural disasters, and policies. In this follow-up study, therefore, we examined whether the socioeconomic variables from the Ghana census can be substituted with geolocated social media data from Twitter (now X). From a dataset of 20,179,946 geolocated social media posts collected for Ghana over a 5-year period, we calculated the following statistics at district level: the total number of posts, the number of authors, the diversity of languages, and the trend in each district’s share of all posts over time.
The majority of posts (90.5%) were found to be geolocated at relatively low precision (the level of cities or coarser) and were excluded to prevent distortions between rural and urban districts. Our analysis was twofold: firstly, we assessed correlations between Twitter statistics and the socioeconomic metrics from the census. Secondly, we applied a modelling approach to predict the MEI for recent migration based on residence five years earlier. In this modelling approach, we included the same drivers that were previously identified in a stepwise multiple linear regression procedure and measured the improvement in explained variance (R²) gained by adding Twitter statistics. Results showed that Twitter statistics correlated with many of the census variables. Of the tested combinations of variables, more than half (51 of 77) were significant at the 95% level. To give some examples: particularly strong were the correlations between the proportion of the economically active population and post counts, author counts, and language diversity (r = 0.76, 0.75, and 0.67, respectively). Further, the proportion of the population employed in the agricultural sector was inversely related to post and author counts (r = -0.80 and -0.72, respectively), and the proportion of the population born outside the region was inversely related to the proportion of English posts (r = -0.39). Adding these Twitter statistics to the linear model leads to a marginal improvement of up to 0.5% adjusted R², compared to the 8.6% gained by including census variables. When excluding districts with low numbers of posts (typically less populated districts), the improvement is larger, with up to 4.4% improvement in R² gained by including the logarithmic number of authors per district. Our study shows that substituting census variables with social media statistics is promising but not straightforward.
The high correlations with census-derived variables suggest that social media statistics contain information related to socioeconomic situations, which can contribute to a better understanding of migration. However, this information does not seem to relate to migration effectiveness to the same degree as the census data, which is likely to remain a precious and valuable data source. While we do not find social media statistics to be a direct substitute for census data in modelling migration effectiveness, they are a promising data source in themselves, as we find them to be plausibly correlated with socioeconomic statistics such as the employment rate, the proportion of in-migrants, and the importance of the primary sector. Future research should investigate ways to address the limitations of social media data and develop targeted statistics to address specific knowledge gaps in migration research.
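The district-level correlation analysis described above can be sketched in a few lines of numpy; the synthetic data below are hypothetical and not the study's values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_districts = 100

# Synthetic district-level data: an underlying "urbanness" factor drives
# both Twitter post counts and the economically active share (hypothetical).
urbanness = rng.normal(size=n_districts)
log_posts = urbanness + rng.normal(scale=0.5, size=n_districts)
econ_active = 0.5 + 0.1 * urbanness + rng.normal(scale=0.05, size=n_districts)

def pearson_r(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation coefficient between two 1-D samples."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

r = pearson_r(log_posts, econ_active)
print(round(r, 2))  # strongly positive, as reported for post counts
```

The study's modelling step then asks how much such a correlate improves adjusted R² when added to a regression that already contains the remote-sensing drivers.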
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Woody fraction estimation in South-Central Ethiopia using spectral-temporal metrics: an approach for restoration monitoring

Authors: Alfred Kokas Aleeje, Dr. Katja Oehmichen, Dr. Nikolai Knapp, Dr. Martin Ehbrecht, Prof. Dr. Andreas Bolte
Affiliations: Thünen Institute of Forest Ecosystems, Department of Silviculture and Forest Ecology of the Temperate Zones, University of Göttingen
Accurate estimation of the spatiotemporal dynamics of forest cover during restoration is becoming critically necessary. Several studies evaluating restoration success cite monitoring, especially of the forest resource, as a key requirement for ensuring that adaptive measures are undertaken or that progress towards the anticipated forest state is known. This has largely been possible through the use of diverse spaceborne sensor products. However, with the multiple estimation approaches available, replication has always been difficult given the difficulty of accessing the software and data sources used in these studies. This becomes an even more urgent issue in countries where many restoration experts and the institutions mandated to implement and monitor restoration have limited access to the computing capacities required for the manipulation of such products, as is the case for Ethiopia. Under the Bonn Challenge of 2011, Ethiopia made the largest pledge: restoring 20 million hectares of degraded land by 2030. Because of this, there is a high demand to present evidence of progress towards this target, coupled with internal motivation to account for resources invested in the restoration projects. This has, however, proved challenging, as raised by many restoration actors in the country and repeatedly highlighted in published literature. In this study, we therefore aim to show that using spectral-temporal metrics (STMs) derived from bands of readily available Landsat products for spectral unmixing, implemented in Google Earth Engine (GEE), an open computing platform, provides relatively accurate results with fewer replicability challenges. The analysis aims to estimate woody fractions for the Oromia and Southern Nations, Nationalities, and Peoples’ regions from 1986 to 2023, a period of tremendous restoration efforts realized nationwide. 
We, therefore, aim to present the woody fraction change maps for 5-year intervals covering this range and discuss the likely drivers of such changes aside from the known restoration activities. Our findings may serve as a starting product for evaluating restoration success in the study region, and possibly inform corrective project management actions. The approach can also be adapted to estimate woody fractions for the rest of the country to provide evidence for international commitments.
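Spectral-temporal metrics of the kind the study computes in GEE can be illustrated offline with numpy: per-pixel statistics over a cloud-masked reflectance time stack. The specific metric set below (mean, standard deviation, percentiles) is a common choice and an assumption here, not necessarily the study's exact configuration.

```python
import numpy as np

def spectral_temporal_metrics(stack):
    """Per-pixel STMs from a reflectance time stack of shape
    (time, rows, cols); NaN marks cloud-masked observations."""
    return {
        "mean": np.nanmean(stack, axis=0),
        "std": np.nanstd(stack, axis=0),
        "p25": np.nanpercentile(stack, 25, axis=0),
        "median": np.nanpercentile(stack, 50, axis=0),
        "p75": np.nanpercentile(stack, 75, axis=0),
    }

# Toy stack: 6 observations of a 2x2 tile, one fully clouded scene masked out
stack = np.array([
    [[0.20, 0.30], [0.40, 0.50]],
    [[0.25, 0.35], [0.45, 0.55]],
    [[np.nan, np.nan], [np.nan, np.nan]],   # clouded scene
    [[0.22, 0.31], [0.41, 0.52]],
    [[0.19, 0.28], [0.39, 0.49]],
    [[0.21, 0.33], [0.44, 0.51]],
])
stms = spectral_temporal_metrics(stack)
```

In the study's setting these per-band metrics would feed the spectral unmixing that yields woody fraction estimates; here they are only computed, not unmixed.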
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Battling Pollution Threats of Africa’s Largest Freshwater Lake – a GDA Water Resource Use Case

Authors: Jorrit Scholze, Kerstin Stelzer
Affiliations: Brockmann Consult Gmbh
Lake Victoria is Africa’s largest freshwater lake by area and delivers valuable ecosystem services to local communities, including domestic use, industrial and agricultural water provision, waste disposal, and hydropower generation. It plays a prominent role in the landscape and can influence local meteorological and climate conditions. Lake Victoria is noted for a high diversity of fishes that are of economic, ecological, and scientific value. Nevertheless, the lake has already experienced deterioration due to unchecked or unregulated exploitation, invasive species, habitat degradation, pollution, and eutrophication. Predicted increases in climate variability and change pose an additional threat that requires special attention. To address these issues, ESA’s GDA programme collaborates with the World Bank and the Lake Victoria Basin Commission (LVBC), integrating Earth observation technologies into water resource management initiatives. Using EO data, water quality parameters such as turbidity and chlorophyll-a are monitored. These parameters are influenced by both climatic factors and anthropogenic activities such as agriculture, urbanization, and industrial pollution. By using EO data, it is possible to monitor these parameters over large spatial scales, providing a more comprehensive picture of water quality changes across the entire lake as well as in sub-basins and bays. Data products from various EO services and missions, including ESA's Climate Change Initiative (CCI) for Lakes, the Copernicus Global Land Service, and the CyanoAlert service, have been used for assessing the key parameters. The integration of these data sources enhances the accuracy of water quality monitoring and supports the identification of pollution hotspots and areas of concern. To better understand water quality dynamics in Lake Victoria, we focused on identifying and monitoring hotspots of lower water quality. 
These hotspots are areas where water quality is particularly poor, often marked by high chlorophyll concentrations, high turbidity, and harmful algal blooms. The Getis-Ord Gi* statistic was applied to detect these hotspots by evaluating spatial patterns and identifying areas with significantly high values. By assessing trends over time, we could track how these hotspots evolve, emerge, or disappear, offering insights into the areas most affected by pollution. Communicating results through hotspot indicators proved more effective and intuitive than relying solely on technical metrics like chlorophyll concentration or turbidity. These insights enable more targeted interventions, complementing strategies like the Lake-wide Inclusive Sanitation framework developed by the Lake Victoria Basin Commission. By merging advanced satellite data with local knowledge, this initiative fosters sustainable water management, enhancing the resilience of Lake Victoria’s ecosystem while supporting the livelihoods of communities that depend on its resources. The hotspot products were developed in close cooperation with the LVBC and the World Bank, considering the requirements expressed by the local users. In addition to monitoring water quality, the project has focused on understanding the drivers of water quality changes in Lake Victoria. The integration of land cover data from ESA’s WorldCover project, as well as population density data from the United Nations, has allowed for the identification of areas where human activity may be contributing to water quality degradation. For example, detailed land cover information helps identify land use changes such as urban expansion, agricultural intensification, and deforestation, which can impact water quality. Similarly, population density data can pinpoint regions with high human pressure, where water quality issues may be more acute due to pollution and overuse. 
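The Getis-Ord Gi* statistic mentioned above can be written out in a minimal, self-contained form. The toy chlorophyll grid, the binary Moore-neighbourhood weights, and the grid size are illustrative assumptions; the abstract does not specify the weighting scheme actually used.

```python
import numpy as np

def getis_ord_gi_star(values, neighbors):
    """Gi* z-scores for a set of cells; neighbors[i] lists the indices in
    cell i's neighbourhood, including i itself (binary weights)."""
    x = np.asarray(values, dtype=float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)
    z = np.empty(n)
    for i, nb in enumerate(neighbors):
        wi = len(nb)                        # sum of binary weights around cell i
        local = x[nb].sum()                 # local sum including cell i
        denom = s * np.sqrt((n * wi - wi ** 2) / (n - 1))
        z[i] = (local - xbar * wi) / denom
    return z

# Toy 5x5 chlorophyll grid with one polluted bay in the upper-left corner
side = 5
chl = np.zeros((side, side))
chl[:3, :3] = 10.0

def moore(i):
    """Indices of the Moore neighbourhood of flat index i (incl. i)."""
    r, c = divmod(i, side)
    return [rr * side + cc
            for rr in range(max(r - 1, 0), min(r + 2, side))
            for cc in range(max(c - 1, 0), min(c + 2, side))]

neighbors = [moore(i) for i in range(side * side)]
z = getis_ord_gi_star(chl.ravel(), neighbors)
```

Cells with z above roughly 1.96 are hotspots at the 95% level; tracking which cells cross that threshold over time gives the emerge/evolve/disappear dynamics the abstract describes.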
To further support decision-making, a browser-based data viewer was implemented for the LVBC, enabling seamless access to project results. This platform enables stakeholders, including policymakers, researchers, and local communities, to access and interpret EO data without requiring specialized software or technical expertise. The data viewer, hosted on-site on LVBC servers, provides an interactive and accessible interface for visualizing water quality data, including time-series analyses, generation of statistics, and comparisons between different datasets. By enabling users to analyze water quality trends and identify pollution hotspots, the viewer has become a tool for decision-making and resource management. It enhances the transparency and accessibility of data, facilitating better-informed decisions on water quality management and pollution control.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: How do we double the global area of mangroves under protection? Introducing the Decision Framework for Mangrove Conservation

Authors: Paula Vaz dos Santos, Dr. Pete Bunting, Dr Andy Hardy
Affiliations: Aberystwyth University
Mangroves are among the planet's most vibrant and productive ecosystems, providing essential ecosystem services ranging from food security for impoverished communities to serving as long-term carbon sinks crucial for climate change mitigation. However, human activities and natural processes pose significant threats to this biome, which has experienced consistently high deforestation rates since the 1980s. To change this trajectory, it is essential to identify the factors contributing to mangrove deforestation and to locate areas currently without protection and in need of conservation. Additionally, evaluating the available geospatial data is important for effectively mapping and monitoring these factors. Most published works consider only a limited number of factors related to mangrove conservation, which overlooks the complex pressures that mangroves face. Therefore, this study aimed to fill the research gap by generating pan-tropical maps identifying priority areas for mangrove conservation efforts. This was achieved by mapping regions vulnerable to human activities and natural threats, and by assessing the value of mangrove ecosystem services. To map these regions, we have developed the Decision Framework for Mangrove Conservation (DFMC). This model is a global-scale decision-making tool for conservation, employing products derived from remote sensing data (e.g., Global Mangrove Watch; GMW) to identify hotspots for mangrove conservation worldwide by generating prioritizations based on different combinations of geospatial datasets. The methodology involved the creation of a conceptual model to identify variables related to mangroves, ecosystem quality, health, and deforestation rates, assessing their impact and priority on the local ecosystem and, thus, mangrove conservation. Using the DFMC, a performance matrix was developed to standardize the scores assigned by expert researchers for each variable identified in the conceptual model. 
These scores were then applied using mixed integer linear programming (MILP) techniques to spatially identify the conservation areas. The resulting global model was applied to identify mangrove areas in each country that should be considered for protection and conservation by governments and NGOs. In doing so, we identify regions that are high priorities for global conservation efforts based on variables such as drivers of anthropogenic and natural deforestation, biomass, species diversity, fishing pressure, and ecosystem services. To achieve the Global Mangrove Alliance (GMA) goal of doubling the area of mangroves under protection, an additional 61,000 km² of mangroves needs to be protected. We have used the results of the DFMC system to identify and prioritize mangrove regions in order to achieve this goal effectively. We also identified additional applications of the DFMC, including the integration of future land cover change scenarios to simulate the impacts of deforestation and how these changes could influence the development of new mangrove conservation areas. Our research is vital for prioritizing mangrove conservation efforts by pinpointing critical areas at risk and tackling a significant gap in deforestation studies. Thus, the use of data-driven models and innovative strategies for mangrove conservation will empower governments and NGOs to make efficient decisions to protect mangroves for many future generations.
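The MILP selection step might look like the following sketch, which maximises a total conservation score under an area budget with binary include/exclude decisions. The scores, areas, and the 250 km² budget are invented for illustration, and scipy's generic MILP solver stands in for whatever solver the DFMC actually uses.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical candidate mangrove units: expert-derived score and area (km²)
scores = np.array([9.0, 7.5, 8.2, 4.1, 6.3])
areas = np.array([120.0, 80.0, 150.0, 40.0, 90.0])
budget = 250.0   # km² that can be added to the protected estate

res = milp(
    c=-scores,                                        # milp minimises, so negate
    constraints=LinearConstraint(areas[np.newaxis, :], 0, budget),
    integrality=np.ones_like(scores),                 # binary decision variables
    bounds=Bounds(0, 1),
)
selected = np.flatnonzero(res.x > 0.5)                # indices of chosen units
```

For these numbers the optimum takes units 0, 1, and 3 (240 km², total score 20.6); at the pan-tropical scale the same formulation simply carries many more variables and constraints.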
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Case Studies on Using EO Data for Ecosystem Restoration Monitoring in Support of the Nature Restoration Law

Authors: Petra Miletich, Mag. Dr. Manuela Hirschmugl, MSC Florian Lippl, Dr. Hanns Kirchmeir, MSc Larissa Posch, Dr. Stefan Schindler, DI Maria Stejskal-Tiefenbach, Mag. David Paternoster
Affiliations: Joanneum Research Gmbh, University of Graz, E.C.O. Institut für Ökologie, Environment Agency Austria
The degradation of ecosystems and the accompanying loss of biodiversity are occurring at an unprecedented rate, posing significant threats to critical ecological functions and services such as carbon sequestration, water regulation, and habitat provision. Recognizing these challenges, the EU Biodiversity Strategy for 2030 and the Nature Restoration Law (2024/1991) have set ambitious, legally binding targets, including the restoration of at least 30% of degraded terrestrial, wetland, and grassland ecosystems by 2030 (European Union, 2024). However, achieving these targets requires robust, scalable, and transparent monitoring systems that can provide timely, accurate, and spatially comprehensive data to assess restoration progress and ensure compliance. The case studies presented here focus on how EO technology can support the assessment of ecosystem restoration actions for three critical ecosystems: forests, wetlands and grassland-dominated cultural landscapes. These ecosystems have been selected because of their ecological importance, vulnerability to degradation, and important role in biodiversity conservation and carbon sequestration. We aim to provide actionable insights for policy makers, conservation agencies and local stakeholders to help them achieve restoration goals and improve ecosystem management. To effectively monitor ecosystem health and evaluate the progress of restoration efforts, we identified a series of key indicators for each ecosystem, many of which align with those outlined in the Nature Restoration Law. These indicators are specifically designed to support reporting requirements under the law and can be assessed using data derived from Earth Observation (EO) techniques.
Forest monitoring
For forests, the indicators include deadwood (lying and standing), forest connectivity, forest fragmentation, and structural variability. 
Forest connectivity, a key parameter highlighted in the Nature Restoration Law, is assessed using Sentinel-2 time series data, which also supports the analysis of forest fragmentation, disturbance detection, and species recognition (European Union, 2024). Additionally, airborne and spaceborne LiDAR data are utilized to derive vertical structure parameters, such as foliage height diversity (FHD), and to enable accurate deadwood mapping. This combination of data sources ensures a comprehensive evaluation of forest ecosystems, addressing both legislative requirements and ecological priorities. The analysis of forest connectivity and fragmentation was conducted using Copernicus Tree Cover Density (TCD) data across a range of time periods, with regional trends assessed in order to account for disturbances such as bark beetle infestations. These fragmentation estimates, based on tree-covered area rather than forest area, do not take into account the potentially positive effects of gaps and other forest openings known from ecosystem analysis. The results for large parts of Austria demonstrate that forest fragmentation remained stable at the regional scale between 2012 and 2018. However, local trends could be observed. In areas of strong bark beetle activity, fragmentation increased, which can be considered a degradation of the forest ecosystem. In other areas, we found a reduction in fragmentation, which could in future be considered a result of restoration efforts.
Bog monitoring
Restoration of bogs is particularly beneficial, as bogs can store huge amounts of carbon while providing habitat for many, often endangered, species. Key indicators for wetland restoration, as outlined in the Nature Restoration Law, include essential parameters for ensuring restoration success and monitoring ecosystem health, such as soil moisture and hydrological conditions, extent and connectivity of wetland areas, vegetation structure, and drainage channels (European Union, 2024). 
The assessment of vertical and horizontal vegetation structures using EO data provides insights into wetland habitat quality and changes over time. The case study assesses the status of shrub and tree vegetation in the “Pürgschacher” bog using multispectral airborne LiDAR. High vegetation such as shrubs and trees can indicate insufficiently high water tables and is thus a signal of a need for intervention or of the limited impact of restoration measures. The comparison with expensive field measurements confirmed the high accuracy of the data and its benefits for monitoring. Furthermore, the mapping and monitoring of drainage channels, including their extent and restoration status, enables the evaluation of hydrological restoration in degraded wetlands. In the “Lendspitz” wetland, priority was given to monitoring the extent and condition of drainage channels by airborne LiDAR. The mapping of these channels achieved sub-decimetre accuracy, with the results being validated by field-measured profiles.
Grassland monitoring
Key indicators for grassland restoration that should be monitored to ensure compliance with the Nature Restoration Law (NRL) and successful ecosystem recovery include High-Diversity Landscape Features, Grassland Use Intensity, the Grassland Butterfly Index, and Organic Carbon Stock in Mineral Soils (European Union, 2024). Grassland indicators that can be derived from EO data focus on grassland use intensity, assessing mowing events to distinguish between extensively and intensively managed grasslands, as well as the proportion of agricultural land with features such as hedges, single trees, tree groups, and similar elements that enhance habitat diversity and connectivity. We have evaluated the Copernicus HRL "Small Woody Features (SWF)" for its use as an indicator of structural elements in grassland-dominated landscapes in Austria. 
The combination of SWF data with local habitat information resulted in high spatial completeness in the mapping of structural landscape features, offering valuable insights into biodiversity-enhancing elements. Results showed that 21.27% of single trees and tree rows, 52.30% of shrubs, and 25.87% of hedgerows were depicted in the SWF layer. We combined this data with mowing event detection from Sentinel-2 time series on a 50x50 km grid to evaluate areas in need of restoration due to high use intensity. This grid can also serve as a basis for the generation of stepping stones to allow for species exchange and improved genetic diversity. Grassland mowing events were detected using NDVI time series analysis from Sentinel-2, which enabled the identification of abrupt reductions in vegetation. The detection of grassland mowing events has considerable potential for operational roll-out, although challenges such as cloud cover and spectral limitations necessitated the integration of additional spectral bands.
Relevance of results
Scientifically, the project is advancing restoration monitoring by developing scalable methodologies that address critical gaps in current practice. The integration of EO technologies, LiDAR, and in-situ data pushes the boundaries of restoration science and provides a replicable framework for monitoring ecosystems at scale. From a policy perspective, our developments directly support compliance with the EU Biodiversity Strategy and the Nature Directives by providing transparent, validated data aligned with legislative indicators. This data enables policy makers to assess restoration outcomes and make evidence-based decisions. Our results enable targeted interventions from various stakeholders, including national authorities, park managers, and conservation agencies. 
These interventions may include, for example, prioritising areas for wetland rewetting, assessing grassland use intensity, and planning measures to mitigate forest fragmentation.
References
European Union. (2024). Nature Restoration Law (NRL) Regulation 2024/1991. Official Journal of the European Union. Retrieved from [OJ_L_202401991_EN_TXT.pdf].
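Mowing-event detection from abrupt NDVI reductions, as described for the Sentinel-2 time series above, can be illustrated with a simple threshold rule. The 0.25 NDVI drop and the 0.5 pre-mowing minimum are assumed values for illustration, not the project's calibrated thresholds, and the dates/values are synthetic.

```python
def detect_mowing(dates, ndvi, drop=0.25, min_ndvi_before=0.5):
    """Flag observation dates where NDVI falls abruptly between consecutive
    cloud-free observations of a fully developed sward, a typical
    signature of a mowing event."""
    events = []
    for k in range(1, len(ndvi)):
        if ndvi[k - 1] >= min_ndvi_before and ndvi[k - 1] - ndvi[k] >= drop:
            events.append(dates[k])
    return events

# Synthetic single-pixel NDVI series over one growing season
dates = ["05-01", "05-11", "05-21", "06-05", "06-15", "07-01"]
ndvi = [0.62, 0.70, 0.74, 0.38, 0.55, 0.71]   # sharp drop after 05-21
mowing_dates = detect_mowing(dates, ndvi)      # ["06-05"]
```

Counting events per parcel over the season then separates extensively managed grassland (zero or one cut) from intensively managed grassland (several cuts), which is the use-intensity signal the study maps.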
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Leveraging Citizen Science and Deep Learning for Satellite-Based Monitoring of Seal Populations

Authors: Rachael Laidlaw, Mikolaj Czerkawski, Tilo Burghardt, Majid Mirmehdi
Affiliations: University of Bristol, ESA
The lack of sizeable and accessible datasets for marine-animal biodiversity monitoring from space highlights the need for novel approaches to data collection. We propose a method, implemented within a case study on the monitoring of seal populations, that utilises citizen-science photographs taken on the ground as a source of metadata in order to rapidly and effectively obtain relevant satellite imagery of areas of interest at the closest available time points to those with known observations of seals. This involves the computation of image embeddings using a pre-trained deep-learning model, which allows a given dataset to be queried through natural language. As a result, on-the-ground data can be filtered for particular types of seals or sizes of colonies, and the outputs can easily be inspected manually to verify suitability for the use case, before extracting the metadata to determine candidate location and time pairs and then mapping these to satellite imagery. One intended application of this is to make the most of the request limits for VHR data by reducing the likelihood of obtaining “empty” images. In our case study, we build upon previous work, which trained a seal detector for icy habitats, by developing one for sandy beaches too. We compare the performance of both of these in various environments to assess robustness and generalisability. Verifying that seal species have stable and healthy population levels is vital since numbers have historically declined due to anthropogenic influences. The specific interest in seal abundance and distribution stems from the fact that seals are top predators in Europe and can thus be used as an indicator reflecting the state of the marine ecosystem. It is also desirable to observe trends that may reveal certain behaviours, such as how male seals have been known to kill and feed on other seals. 
Furthermore, seals are important for tourism purposes, so knowing where they are allows conservationists to ensure they are thriving whilst visitors are able to enjoy sightings.
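The embed-and-query step can be sketched as below. Since the abstract does not name the pre-trained model, random vectors stand in for real image and text embeddings; only the cosine-similarity ranking logic, which maps a text query onto the best-matching photographs, is meant to be representative.

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two 2-D embedding arrays."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Placeholder embeddings standing in for a pre-trained vision-language model
rng = np.random.default_rng(0)
photo_embeddings = rng.normal(size=(4, 8))      # one row per citizen photo

# Simulate a text query (e.g. "large colony of seals on a sandy beach") whose
# embedding lands close to photo 2; in practice the model's text encoder
# produces this vector.
query_embedding = photo_embeddings[2] + 0.01 * rng.normal(size=8)

scores = cosine_sim(query_embedding[None, :], photo_embeddings)[0]
ranked = np.argsort(-scores)    # best-matching photos first; their metadata
                                # gives candidate (lat, lon, time) pairs
```

The top-ranked photographs are then inspected manually, and their timestamps and coordinates drive the satellite-imagery requests, which is how the method keeps "empty" VHR acquisitions to a minimum.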
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Space for Biodiversity: EO-Backed Auditing and Tracking of GBF Implementation

Authors: Chandra Prakash Jha
Affiliations: Fashion For Biodiversity S.r.l.
The Kunming-Montreal Global Biodiversity Framework (KM-GBF) sets an ambitious path for halting and reversing biodiversity loss by 2030. Its 23 action-oriented global targets and four long-term goals for 2050 emphasize the need for measurable, transformative actions supported by robust monitoring frameworks. National reporting on biodiversity indicators under the KM-GBF can be significantly enhanced through Fashion for Biodiversity's Triangular Audit Approach, which combines the European Space Agency's Copernicus programme, IoT sensors, and hyperspectral drones. This integrated approach enables precise monitoring of the GBF’s four long-term 2050 goals: maintaining ecosystem integrity, restoring degraded ecosystems, halting species extinction, and ensuring sustainable ecosystem services. It supports achieving the 23 ambitious 2030 targets, including protecting 30% of terrestrial and marine areas, restoring 30% of degraded ecosystems, and cutting pollution by 50%. Copernicus satellites, IoT sensors, and hyperspectral drones provide a robust, real-time tracking framework for informed decision-making. This paper highlights the Triangular Audit’s role in national compliance with KM-GBF targets, addressing biodiversity loss, pollution, restoration, and sustainable development while aligning with the 2050 goals.
Introduction to the Kunming-Montreal Global Biodiversity Framework
The KM-GBF aims to transform humanity's relationship with nature through targeted actions to reverse biodiversity loss. The framework is built around four long-term goals for 2050:
1. Goal A: Halt species extinction and maintain genetic diversity.
2. Goal B: Enhance ecosystem functions and services for sustainable development.
3. Goal C: Fair and equitable sharing of genetic resources and benefits.
4. Goal D: Increase financial resources and capacity-building for biodiversity conservation. 
By 2030, the framework outlines 23 global targets under three themes:
• Reducing threats to biodiversity
• Meeting people’s needs through sustainable use
• Tools and solutions for implementation
Achieving these requires data-intensive monitoring, reporting, and verification mechanisms, which the unique Triangular Audit Approach can provide.
The Triangular Audit Approach
Fashion for Biodiversity's Triangular Audit Approach, integrating the European Space Agency’s Copernicus Programme, IoT sensors, and hyperspectral drones, offers a comprehensive, data-driven solution for biodiversity assessment and reporting. It enables comprehensive biodiversity monitoring:
1. Copernicus Programme (Earth Observation). Role: provides large-scale, high-resolution satellite imagery and spatial data. Applications:
• Tracks ecosystem extent, fragmentation, and land-use changes.
• Maps protected areas and biodiversity corridors (Targets 1, 3).
• Monitors ecosystem services such as carbon storage and water regulation (Targets 11, 14).
2. IoT Sensors. Role: real-time, localized environmental data collection. Applications:
• Tracks pollution levels in water and soil (Target 7).
• Measures microclimatic variables and species activity (Target 4).
• Monitors agricultural practices for sustainability (Target 10).
3. Hyperspectral Drones. Role: offers precise spectral analysis of ecosystems. Applications:
• Detects contaminants in soil, water, and vegetation (Targets 7, 10).
• Monitors habitat quality and restoration outcomes (Targets 2, 8).
• Identifies invasive species and tracks vegetation health (Target 6).
Together, these components provide a multi-scale biodiversity monitoring framework capable of addressing the full spectrum of KM-GBF indicators.
Achieving KM-GBF Targets Through Triangular Auditing
The Triangular Audit supports various KM-GBF targets, offering precise insights to drive policy, conservation, and sustainable development efforts. 
Targets 1–5: Addressing Ecosystem Integrity
Target 1: Spatial Planning for Biodiversity. Copernicus Sentinel-2 tracks habitat extent and land-use changes, aiding biodiversity-inclusive spatial planning.
Target 2: Restore 30% of All Degraded Ecosystems. Hyperspectral drones monitor vegetation health, while Sentinel-2 maps restoration progress, targeting 30% degraded-ecosystem recovery by 2030.
Target 3: Conserve 30% of Land, Waters and Seas. Sentinel-1 and Sentinel-2 assess protected areas and biodiversity corridors, supporting area-based conservation.
Target 4: Halt Species Extinction, Protect Genetic Diversity, and Manage Human-Wildlife Conflicts. IoT sensors track species activity, while drones assess population health for stabilization and recovery.
Target 5: Ensure Sustainable, Safe and Legal Harvesting and Trade of Wild Species. Sentinel-3 monitors deforestation and wetlands, while IoT sensors assess localized stressors like pollution.
Targets 6–10: Managing Pressures on Biodiversity
Target 6: Reduce the Introduction of Invasive Alien Species by 50% and Minimize Their Impact. Hyperspectral drones detect invasive species, while Sentinel-2 tracks habitat disturbances.
Target 7: Reduce Pollution to Levels That Are Not Harmful to Biodiversity. IoT sensors monitor real-time pollution in water and soil, with drones detecting contamination hotspots.
Target 8: Minimize the Impacts of Climate Change on Biodiversity and Build Resilience. Sentinel-5P tracks greenhouse gases, while IoT and drones monitor restoration success, reducing climate-related biodiversity loss.
Target 9: Manage Wild Species Sustainably To Benefit People. Copernicus maps genetic diversity hotspots; IoT sensors track conditions supporting wild and cultivated species.
Target 10: Enhance Biodiversity and Sustainability in Agriculture, Aquaculture, Fisheries, and Forestry. IoT sensors monitor soil health; Sentinel-2 tracks agricultural patterns for biodiversity-friendly practices.
Targets 11–15: Ecosystem Benefits and Indigenous Rights
Target 11: Restore, Maintain and Enhance Nature’s Contributions to People. Sentinel-2 quantifies ecosystem services, while drones and IoT sensors assess delivery for sustainable restoration.
Target 12: Enhance Green Spaces and Urban Planning for Human Well-Being and Biodiversity. Sentinel-1 maps urban green spaces; IoT sensors track species presence to aid urban planning.
Target 13: Increase the Sharing of Benefits From Genetic Resources, Digital Sequence Information and Traditional Knowledge. Sentinel-2 identifies biocultural heritage areas; drones support Indigenous knowledge systems and equitable access.
Target 14: Integrate Biodiversity in Decision-Making at Every Level. Copernicus satellites monitor ecosystem services, ensuring fair access to water, food, and energy benefits for marginalized communities dependent on biodiversity.
Target 15: Businesses Assess, Disclose and Reduce Biodiversity-Related Risks and Negative Impacts. IoT sensors track raw materials, while drones assess ecosystem health in sourcing regions.
Targets 16–20:
Target 16: Enable Sustainable Consumption Choices To Reduce Waste and Overconsumption. Triangular audits provide biodiversity data for global governance and decision-making.
Target 17: Strengthen Biosafety and Distribute the Benefits of Biotechnology. IoT sensors monitor GMOs and biosafety risks to distribute biotechnology benefits equitably.
Target 18: Reduce Harmful Incentives by at Least $500 Billion per Year, and Scale Up Positive Incentives for Biodiversity. Copernicus and IoT systems assess Indigenous land management, promoting traditional stewardship.
Target 19: Mobilize $200 Billion per Year for Biodiversity From All Sources, Including $30 Billion Through International Finance. Spatial biodiversity data aligns funding with measurable outcomes.
Target 20: Strengthen Capacity-Building, Technology Transfer, and Scientific and Technical Cooperation for Biodiversity. IoT sensors empower communities with biodiversity insights for sustainable management.
Targets 21–23: Resource Mobilization and Monitoring
Target 21: Ensure That Knowledge Is Available and Accessible To Guide Biodiversity Action. Sentinel-2 links conservation investments to restoration outcomes.
Target 22: Ensure Participation in Decision-Making and Access to Justice and Information Related to Biodiversity for All. Sentinel-2 and IoT data ensure equitable distribution of biodiversity-derived benefits.
Target 23: Ensure Gender Equality and a Gender-Responsive Approach for Biodiversity Action. Triangular audits strengthen biodiversity monitoring with multi-scale indicators for adaptive management.
Alignment with Long-Term Goals for 2050
The Triangular Audit contributes directly to the KM-GBF’s 2050 Vision of “Living in Harmony with Nature”:
• Halt Species Extinction (Goal A): tracks species populations, genetic diversity, and habitat quality using IoT and hyperspectral data.
• Enhance Ecosystem Functions (Goal B): quantifies ecosystem services and monitors restoration outcomes using Copernicus and hyperspectral analysis.
• Fair Sharing of Resources (Goal C): facilitates equitable access to biodiversity data, empowering local and Indigenous communities in conservation efforts.
• Increase Financial Resources (Goal D): promotes innovative financing mechanisms like biodiversity credits and ecosystem service valuation.
Case study
In October 2023, Fashion For Biodiversity launched ChromEX, a pilot project in Rania, Kanpur, targeting chromium contamination. 
Satellite and drone data identified critical hotspots, enabling precise bioremediation with Brassica juncea, biochar, and bacterial consortia. By September 2024, chromium levels dropped 85%, improving water quality and agricultural productivity. This demonstrates scalable bioremediation for heavy metal pollution, supporting environmental and agricultural sustainability. The pilot aligns with KM-GBF targets:
• Target 2: Restores degraded land into thriving ecosystems.
• Target 7: Reduces harmful chromium pollution.
• Target 11: Enhances ecosystem resilience and productivity.
These efforts advance biodiversity conservation and ecosystem restoration globally.
Benefits of the Triangular Audit
Precision and Scalability: combines satellite data with drone and sensor insights for comprehensive monitoring.
Cost-Effectiveness: minimizes dependence on labor-intensive field surveys.
Transparency and Accountability: automates data integration for accurate and verifiable biodiversity reporting.
Cross-Sectoral Impact: aids public and private initiatives, including sustainable supply chain management.
Fashion for Biodiversity's Triangular Audit makes national biodiversity reporting under the KM-GBF easier and real-time. Using Copernicus Earth observation, IoT sensors, and hyperspectral drones, it provides actionable insights to address biodiversity loss, pollution, and degradation. This approach is helping to meet the 2030 and 2050 targets and empowering governments, businesses, and communities to sustainably protect biodiversity.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: Monitoring farmland habitat diversity with Copernicus data and products from national to European level

Authors: Stefan Erasmi, Dr. Marcel Schwieder, Christian Levers, Talie Musavi, Momtchil Iordanov, Matteo Marcantonio, Felix Lobert, Gideon Tetteh, Marijn van der Velde
Affiliations: Thünen Institute of Farm Economics, Thünen Institute of Biodiversity, European Commission, Joint Research Centre (JRC)
Agricultural expansion and intensification threaten natural habitats, but agricultural areas can still support critical biodiversity if managed carefully. The state and quality of agricultural land for biodiversity depend on factors such as land use intensity, composition, and configuration. As frequent biodiversity monitoring across large spatial extents is challenging (e.g., due to the high personnel and financial effort required, potential consistency issues, or low temporal resolution), mapping habitat quality as a proxy for farmland biodiversity is a promising approach to overcome these challenges. By quantifying farmland quality and diversity and their changes over time, conservation efforts can be targeted to protect, restore, and enhance farmland habitats with the aim of maintaining and enhancing biodiversity and ecosystem services. Earth Observation (EO) data can provide valuable insights into land use and cover change and can thus be an entry point for assessing and mapping farmland quality by providing spatio-temporally consistent information on agricultural areas at fine spatial scales. Nevertheless, knowledge is currently limited on how available EO data could be used to extract indicators that support consistent monitoring of farmland biodiversity. To advance the development of such indicators, a generalized workflow for a Farmland Habitat Biodiversity Indicator (FHBI) was proposed by the OECD (Bayr et al. 2023). In this workflow, data on agricultural land use and management are used to map farmland habitats. The FHBI builds on habitat quality scores assigned to individual land-use and management categories, based on existing monitoring data or expert knowledge, according to their potential value for supporting biodiversity. We implemented the FHBI concept for Germany based on country-wide wall-to-wall agricultural land use information from EO data for the period 2017-2023.
First, we derived pixel-level structural and functional crop diversity based on the frequency and duration of crop sequences and the share of cereal, leafy, summer, and winter crops (Stein and Steinmann, 2018; Jänicke et al. 2022). Second, we derived grassland use intensity as the number of mowing events detected from satellite time series. Third, we complemented these datasets with maps of small woody features and other perennial land use classes. Fourth, we assigned habitat quality scores ranging from one to five to each pixel in the combined agricultural land use (intensity) map. Finally, we calculated the area-weighted average of habitat quality values at an aggregation level of 100 ha hexagons for the entire open land across Germany. The results of our national pilot study show that the proposed indicator detects general patterns of farmland diversity, reflecting the distribution of major agricultural areas and soil-climate regions in Germany, and hence underline the potential of EO-based products for monitoring trends in farmland diversity at the national level. Recently, the Copernicus High Resolution Layers (HRL) on Vegetated Land Cover Characteristics (VLCC) were published, providing EU-wide wall-to-wall information on crops and grassland. This includes cropland and grassland distribution, but also management layers focusing, for example, on grassland mowing events. In combination with other HRL layers, e.g., the Small Woody Features layer, the Copernicus product suite allows the extension of the national FHBI approach and its implementation at EU level. Here we illustrate and explore preliminary results and discuss whether and how regional characteristics of agricultural land use could or need to be considered to improve an EU-wide assessment of farmland diversity. The FHBI approach has the potential to provide a robust framework to map large-scale spatial patterns and trends in farmland habitats at the European level.
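As a minimal sketch (not the study's actual code), the final aggregation step above, assigning a score of one to five to each pixel and averaging per 100 ha hexagon, can be illustrated as follows; the function name and the equal-pixel-area assumption are mine:

```python
import numpy as np

# Hypothetical illustration of the FHBI aggregation step: each pixel of the
# combined agricultural land-use (intensity) map carries a habitat quality
# score from 1 (low) to 5 (high); scores are averaged, weighted by pixel
# area, within each aggregation unit (100 ha hexagons in the study).

def aggregate_fhbi(scores, unit_ids, pixel_area_ha=0.01):
    """Area-weighted mean habitat quality score per aggregation unit.

    scores        : 1-D array of per-pixel quality scores (1..5), NaN = not open land
    unit_ids      : 1-D array mapping each pixel to a hexagon id
    pixel_area_ha : area of one pixel (a 10 m Sentinel-2 pixel is 0.01 ha)
    """
    result = {}
    for uid in np.unique(unit_ids):
        s = scores[unit_ids == uid]
        s = s[~np.isnan(s)]          # ignore pixels outside the open land mask
        if s.size:
            # with equal pixel areas, area weighting reduces to a plain mean
            result[int(uid)] = float((s * pixel_area_ha).sum()
                                     / (s.size * pixel_area_ha))
    return result

scores = np.array([1, 5, 3, np.nan, 4, 4])
units = np.array([0, 0, 0, 1, 1, 1])
print(aggregate_fhbi(scores, units))  # {0: 3.0, 1: 4.0}
```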
It further seems suitable to complement other MRV frameworks at national to global level, such as the Essential Biodiversity Variables, and political initiatives, such as Target 10 of the Kunming-Montreal Global Biodiversity Framework or the European Nature Restoration Law.

References:
Bayr, U., Cobourn, K., Dieker, P., Fjellstad, W.J., Herzog, F., & Lankoski, J. (2023). Guidelines for the development of an OECD farmland habitat biodiversity indicator.
Jänicke, C., Goddard, A., Stein, S., Steinmann, H.-H., Lakes, T., Nendel, C., & Müller, D. (2022). Field-level land-use data reveal heterogeneous crop sequences with distinct regional differences in Germany. European Journal of Agronomy, 141, 126632.
Stein, S., & Steinmann, H.-H. (2018). Identifying crop rotation practice by the typification of crop sequence patterns for arable farming systems – A case study from Central Europe. European Journal of Agronomy, 92, 30-40.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: F.04.15 - POSTER - Resilient Coasts: Adaptive Strategies for Sustainable Ocean Management in a Changing Climate

As climate change continues transforming our global environment, coastal regions and ocean ecosystems are at the forefront of these impacts.
This session focuses on the critical need for adaptive, innovative approaches to manage and protect our coastal and marine environments against climate change pressures. As the planet faces unprecedented environmental stressors - rising sea levels, coastal erosion, rising temperatures - coastal regions and ocean ecosystems are particularly vulnerable. By emphasizing resilience, the session will highlight how adaptive management practices - grounded in the latest science and driven by community engagement - can ensure the long-term sustainability of our coastal environments and resources.

Key discussions will include integrating climate projections into maritime spatial planning, developing flexible policies that can adjust to evolving environmental conditions, and leveraging technology for real-time monitoring and responsive action. Case studies from around the world will illustrate how coastal communities and stakeholders are implementing these adaptive strategies to secure their ecosystems' sustainability.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Harnessing Citizen Science for Coastal Resilience

Authors: Dilek Fraisl, Dr Linda See, Dr. Juan Carlos Laso Bayas, Steffen Fritz, Dr Ian McCallum
Affiliations: International Institute for Applied Systems Analysis (IIASA)
As climate change continues to transform our global environment, coastal regions and ocean ecosystems are among the most vulnerable, facing immediate and severe impacts. Rising sea levels, coastal erosion, and increasing ocean temperatures create immense pressure on these ecosystems, highlighting the urgent need for adaptive and innovative management approaches. In this context, citizen science, broadly defined as public participation in scientific research and knowledge creation, emerges as a powerful tool to enhance resilience and sustainability efforts. Citizen science has gained widespread recognition in environmental and ecological sciences in recent years, including in tackling marine litter (Fraisl et al., 2023). Beyond serving as a valuable data source, it engages communities in action, raises awareness, and fosters education on pressing environmental issues such as plastics. Initiatives such as Ocean Conservancy’s International Coastal Cleanup, the European Environment Agency’s Marine Litter Watch, and NOAA’s Marine Debris Monitoring and Assessment Project demonstrate how citizen science can generate valuable data while mobilizing millions worldwide. However, its integration into policy frameworks and official monitoring efforts by countries and international organizations remains largely untapped. This presentation will showcase a pioneering case study from Ghana, where citizen science data on marine plastic litter are being integrated into official statistics and both national and global monitoring efforts for the first time. These data are also being incorporated into Ghana’s Integrated Coastal and Marine Management Policy, which is currently under development. The Ghana case illustrates how adaptive management practices, driven by citizen science, community engagement and trusted partnerships involving diverse stakeholders, can enhance the resilience of coastal environments.
It further demonstrates how affected communities can play an active role in data ecosystems and decision-making processes. Additionally, the presentation will highlight findings from a follow-up feasibility study that explored combining drones, citizen science methodologies, and AI to generate litter density maps along Ghana’s coastline.

References:
Fraisl, D., See, L., Bowers, R. et al. The contributions of citizen science to SDG monitoring and reporting on marine plastics. Sustain Sci 18, 2629–2647 (2023). https://doi.org/10.1007/s11625-023-01402-4

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Advanced Dike Structure and Dike Vegetation Monitoring with UAV-based Deep Learning

Authors: Markus Adam, Max Marquardt, Marlin M. Mueller, Steffen Dietenberger, Julian Leiber, Clémence Dubois, Christian Thiel
Affiliations: German Aerospace Center (DLR), Institute of Data Science
This study presents an innovative approach for the monitoring and management of coastal protection infrastructures by using Unmanned Aerial Vehicle (UAV) technology coupled with advanced deep learning algorithms. Focusing on the North Sea coast of Germany, our research aims to develop a high-resolution monitoring system for the structure and vegetation of dikes and dike forelands, supporting coastal communities in the pressing challenges posed by coastal erosion and climate-change-related sea-level rise. Utilizing a DJI Mavic 3 Enterprise drone, we captured detailed RGB imagery (5 cm pixel size) of dike areas and foreland structures at a dike section along the German North Sea coast close to Dorum-Neufeld. The surveyed area is structurally diverse, containing the dike foreland with coastal protection structures (e.g., groynes), the dike itself, and the dike hinterland, which is used for agriculture. Vegetation cover types include salt marshes, pastures, reeds and trees. The data acquisition process involved two 3-hour flight missions at altitudes of 80 and 120 meters, covering an area of around 42 ha once in October 2023 and once in April 2024. 3D point clouds, gridded terrain information (elevation, slope) and orthomosaics were generated from the acquired images. Our methodology uses a deep learning approach, employing a U-Net algorithm to carry out a two-level hierarchical classification of the orthomosaics. At the first level, general land cover types (e.g., soil, vegetation, water) were classified. The areas classified as vegetation were then further classified into different vegetation types. The classification workflow of both levels consisted of three main stages: (1) exporting training data by randomly extracting patches of 256x256 pixels with 50% overlap from manually predefined training areas, (2) training the deep learning model using ResNet-34 as the backbone, and (3) applying the trained model for pixel-wise classification.
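The patch-export stage of the workflow above can be sketched as follows. This is an illustrative re-implementation under my own assumptions, not the authors' code: 256x256-pixel patches are cut from an orthomosaic with 50% overlap, i.e. a stride of 128 pixels.

```python
import numpy as np

# Minimal sketch of exporting U-Net training tiles from an orthomosaic:
# fixed-size patches with 50% overlap between neighbouring patches.

def extract_patches(image, patch=256, overlap=0.5):
    """Yield (row, col, patch) tiles covering `image` of shape (H, W, C)."""
    stride = int(patch * (1.0 - overlap))  # 50% overlap -> stride of 128 px
    h, w = image.shape[:2]
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            yield r, c, image[r:r + patch, c:c + patch]

# A dummy 512x512 RGB mosaic yields a 3x3 grid of overlapping patches.
mosaic = np.zeros((512, 512, 3), dtype=np.uint8)
tiles = list(extract_patches(mosaic))
print(len(tiles))  # 9
```

In practice the patches would be drawn only from the manually predefined training areas and paired with label masks before feeding the U-Net.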
We explored various input combinations, including RGB imagery alone and in conjunction with elevation and slope data, to enhance classification accuracy. In addition, we analyzed the temporal transferability of the methodology by applying the models trained on the 2023 orthomosaics to the 2024 orthomosaics. The results indicate high accuracy of the land cover and vegetation cover classifications, with overall accuracy (OA) values around 89% and 95%, respectively. Model transferability is high for land cover classification (OA: 88.73%) and moderate for vegetation cover classification (OA: 67.80%). This approach allows for fast and precise delineation of dike structures and vegetation patterns, offering insights into the health and stability of these coastal protection systems. The comparison of classifications from different time steps can quantify changes in land and vegetation cover, as well as reveal structurally weak spots in the infrastructure. By providing detailed, objective and up-to-date information on dike integrity and vegetation dynamics, our method can supplement manual dike inspections and therefore enable more targeted and efficient maintenance practices. This not only enhances the resilience of coastal communities against flooding and erosion, but also supports the preservation of unique coastal ecosystems.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Advancing Marine Ecosystem Monitoring: Chlorophyll-a Gradients for Sustainable Fisheries Management

Authors: Marine Bretagnon, Philippe Bryère, Jean-Noël Druon, Marco Clerici, Antoine Mangin
Affiliations: ACRI-ST, ACRI-ST, Site de Brest, quai de la douane, Joint Research Centre, European Commission
Phytoplankton is at the base of the marine food web, but only 10–20% of primary production is transferred to higher trophic levels. Beyond chlorophyll-a concentration, horizontal gradients of chlorophyll-a have been identified as key drivers of mesozooplankton development and overall marine food web dynamics. Consequently, detecting these gradients from satellite-derived ocean colour data is essential for habitat analyses, as a robust feeding proxy, and for effective management of sustainable fisheries. In 2021, the Joint Research Centre (JRC) published an algorithm specifically designed to detect productivity fronts relevant to fish production. This algorithm is operationally computed at a global scale, made publicly accessible through the JRC Data Catalogue, and distributed in real time to national institutes in Africa via the eStation software as part of the EU GMES & Africa Programme. Developed using data from the MODIS-Aqua sensor, the algorithm benefits from over 20 years of sensor data, enabling analyses of the temporal evolution of fish habitats and assessments of likely overfishing by spatially comparing effective and potential fishing yields. However, with MODIS-Aqua nearing the end of its operational life, the algorithm has been adapted to the Sentinel-3 OLCI (Ocean and Land Colour Imager) sensor to ensure continuity in fish production monitoring, including real-time applications and trend monitoring at global scale under the current effects of climate change. Since November 2024, chlorophyll-a gradient data have been freely available through the Copernicus Marine Service (https://data.marine.copernicus.eu/product/OCEANCOLOUR_GLO_BGC_L3_MY_009_103/), supporting global efforts to sustain marine ecosystems and fisheries. In this study, we will demonstrate how incorporating chlorophyll-a gradients into monitoring systems would allow a more integrated approach to studying ecosystem variability and resilience.
The chlorophyll-a gradient will be essential for making substantial progress in modeling the distribution of marine habitats, as an EO-derived variable that is a much better feeding proxy for the upper food chain than chlorophyll-a alone. It will also enable decision-makers to adapt fishing effort to local opportunities and to monitor increases or decreases in local fish production, which are key considerations for food security and the resilience of coastal communities.
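The core quantity, a horizontal chlorophyll-a gradient field, can be sketched as below. This is an illustration only, assuming a regular ~1 km grid and a log transform; the operational JRC front-detection algorithm is more elaborate than this.

```python
import numpy as np

# Illustrative sketch: horizontal gradient magnitude of chlorophyll-a on a
# regular grid. Gradients are taken here on log-transformed Chl-a, a common
# choice because Chl-a is approximately log-normally distributed (assumption
# of this sketch, not a statement about the JRC algorithm).

def chl_gradient_magnitude(chl, dx_km=1.0):
    """Gradient magnitude of log10(Chl-a) per km on a ~1 km grid."""
    logc = np.log10(chl)
    gy, gx = np.gradient(logc, dx_km)  # d/dy (rows), d/dx (cols)
    return np.hypot(gx, gy)

# A synthetic front: Chl-a jumps from 0.1 to 1.0 mg m-3 across a column.
chl = np.ones((4, 6)) * 0.1
chl[:, 3:] = 1.0
grad = chl_gradient_magnitude(chl)
print(grad.max())  # strongest gradient sits on the front
```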

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Contribution of satellite imagery in support of the Water Framework Directives for eutrophication assessment

Authors: Philippe Bryère, Marine Bretagnon, Antoine Mangin
Affiliations: ACRI-ST, Site de Brest, quai de la douane, ACRI-ST
Eutrophication is a process driven by the enrichment of water by nutrients, especially compounds of nitrogen and/or phosphorus. This leads to rapid growth and primary production of algae, increasing their overall biomass in water bodies. This alters the natural balance of organisms in the water and eventually leads to water quality degradation. This natural process occurs over long time scales, typically geological ones, but since the 20th century, rapid population growth, industrialisation, and intensive agriculture have accelerated eutrophication through excessive inputs of nutrients. This has a major impact on the balance of coastal communities and socio-economic activities (e.g., tourism, aquaculture, fishing). Approximately 50 years ago, European law began to consider eutrophication as a criterion for assessing the good ecological status (GES) of coastal and offshore water bodies. Eutrophication concerns resulted in political action in Europe, which then translated into programmes that are now implemented by regional conventions such as the Oslo-Paris Convention (OSPAR) for the Protection of the Northeast Atlantic, since 1972, or the Barcelona Convention (BarCOM) for the Mediterranean, since 1975. In Europe, action against eutrophication was initiated by the conventions and legislation mentioned above, followed over the past 20 years by far more comprehensive legislation such as the Water Framework Directive (WFD, 2000), which addresses all coastal waters up to one nautical mile, as well as groundwater, and the Marine Strategy Framework Directive (MSFD, 2008), which established a framework for marine environmental policy up to the two-hundred-nautical-mile limit of the European exclusive economic zone. Depending on the convention or directive considered, the complexity of the phenomenon is reflected in several definitions.
For the European Marine Strategy Framework Directive (MSFD), the evaluation of the "Eutrophication" criterion is based on eight criteria: nutrient and chlorophyll-a (Chla) concentrations in the water column, occurrence of toxic algae, transparency, dissolved oxygen concentration, abundance of opportunistic macroalgae and of macrophyte communities in benthic habitats, and benthic macrofauna. Up to now, assessments of eutrophication under the different directives have been based mainly on in situ measurements, with data from remote sensing and model outputs used for information purposes only. Currently, the Chla concentration, which is a proxy for phytoplankton biomass and therefore directly affected by nutrient enrichment, is one of the main criteria for the assessment of eutrophication in the different marine water framework directives. Since the 1980s, phytoplankton measurement networks have been set up in various countries (e.g., Ireland, USA, France) to monitor phytoplankton composition, Chla, dissolved oxygen, turbidity and nutrient concentrations at fixed points and at different depths. Related parameters such as temperature, pH, and salinity are also frequently measured. These networks, which are often very close to the coast, are occasionally supplemented by offshore fixed buoys and ship monitoring campaigns. Although these high-quality measurements may be sufficient to characterise coastal water bodies (i.e. for the WFD), they are expensive in terms of personnel and equipment, limited in spatial and temporal coverage, and not well developed in many countries. Chla, which is considered a direct effect, or primary symptom, of eutrophication, is one of the most common Earth Observation (EO) parameters measured by satellites. Thus, the integration of daily low-resolution (~1 km) satellite images from ocean colour sensors deployed by NASA (MODIS, VIIRS, ...) and ESA (OLCI, MSI), for areas where in situ data are insufficient or non-existent, is very useful for large-scale Chla monitoring. Moreover, dedicated algorithms (OC5 and NIR algorithms) for estimating Chla concentration in coastal waters that are optically difficult to characterise (because of, e.g., turbidity or yellow substance) now make it possible to estimate Chla in coastal waters with improved accuracy. This study shows how satellite EO has contributed in recent years to the assessment of the good ecological status (GES) of coastal waters for the eutrophication indicator within the framework of the directives cited above. The link with the Copernicus Marine Service (CMS) is also discussed.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Littosat, a satellite dashboard for monitoring coastal environment parameters for managers of coastal areas and marine protected areas

Authors: Marie Jagaille, Marc Lennon, Josselin Aval, Sébastien
Affiliations: Hytech-imaging
Littosat offers a dashboard for managers of coastal areas and marine protected areas, fed by new data from satellite images, and enabling the monitoring of coastal environment parameters (coastal vegetation, morphodynamics of shallow waters, turbidity and microalgae blooms) with little delay.

The first version of Littosat-viewer provides data on foreshore vegetation parameters, with:
- seasonal image mosaics on a regional scale at high and low tide, with a spatial resolution of 10 m;
- vegetation indices of the emergent foreshore and maps of changes (gain or loss of vegetation) between seasons, with a spatial resolution of 10 m.

Littosat-viewer is based on:
- a simple, intuitive and open-source web display interface, currently being deployed in Brittany (https://littosat.hytech-imaging.fr/littosat-bzh/);
- a fully automated seasonal data production infrastructure, which means that Littosat can be deployed across a region in just a few days. Deployment across the Channel Atlantic coast is currently underway, with delivery scheduled for 2025.

In the short term, it will also enable:
- the development of on-demand calculation services for the selection of coastal images according to tide and cloud cover criteria defined by users (API service under development in 2025);
- the development of on-demand calculation services for indicators of the state of health of foreshore vegetation on a seasonal basis (development of indicators under way in 2025).

The aim will be to export statistics on changes in vegetation health indicators (e.g. cover, density, etc.), which can be used for dashboards, alert services, reference system updates, etc. All the satellite products already available in Brittany are referenced in the GéoBretagne regional Spatial Data Infrastructure catalogue.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Integrated Coastal Flood Mapping: Leveraging Satellite Data and Numerical Modeling in the Senegal Estuary

Authors: E. Tonatiuh Mendoza, Edward Salameh, Imen Turki, Julien Deloffre, Benoit Laignel
Affiliations: Normandie Univ, UNIROUEN, UNICAEN,, CEREMA
Coastal riverine regions face increasing flood risks, primarily driven by extreme weather events, which result in loss of life and greater exposure to their impacts. This susceptibility is expected to grow as a result of the expansion of human communities and the effects of climate change. Studies foresee an increase in extreme flooding events across various regions, notably in Asia and Africa, further intensified by projected sea level rise. In West Africa, flood events have severely affected riverine human settlements, causing major health and economic repercussions. In particular, the Senegal River Estuary, located on the northern Senegalese coast, has experienced severe flooding events, but only a few have been documented. In 2003, heavy rains caused the Senegal River to overflow, resulting in widespread flooding on Saint-Louis Island, Sor Island and the sand spit known as the Langue de Barbarie. Accurate flood risk analysis is crucial, especially in coastal riverine areas of Africa, where socioeconomic growth is concentrated. Precise flood mapping helps mitigate risks and supports early warning systems. However, assessing floods in developing countries is challenging due to limited data. This can be addressed by using numerical modeling and Earth observation data. This study employs an integrated approach, combining remote sensing and numerical modeling techniques, to characterize flood-prone regions resulting from the combined effects of extreme river water elevations and long-term sea-level rise in the Senegal River Estuary. First, satellite imagery was used to classify land types through supervised machine-learning techniques. Next, four different scenarios of hydrodynamic conditions were assessed using numerical modeling and satellite data to provide a quantitative assessment of flooding. Finally, flood impact scenarios were overlaid with the land classification results to quantify the proportion of each land type affected by flooding.
The analysis shows that the building class is the most impacted, followed by vegetation and roads, in all scenarios. This study highlights the flood-affected areas at district level, offering relevant understanding for the development of effective adaptation strategies, disaster planning, aligning policies with scientific knowledge, and supporting adaptive governance in the Senegal River Estuary.
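The final overlay step, intersecting a flood scenario with the land classification to report the flooded share of each class, can be sketched as follows. This is a hedged illustration with hypothetical class codes, not the study's implementation:

```python
import numpy as np

# Sketch: overlay a binary flood mask (from a hydrodynamic scenario) with a
# supervised land-cover classification and report the flooded fraction of
# each class. Class codes below are purely illustrative.

def flooded_share(classes, flood_mask):
    """Fraction of each land-cover class lying inside the flooded area."""
    shares = {}
    for cls in np.unique(classes):
        total = np.count_nonzero(classes == cls)
        wet = np.count_nonzero((classes == cls) & flood_mask)
        shares[int(cls)] = float(wet / total)
    return shares

# 1 = buildings, 2 = vegetation, 3 = roads (hypothetical codes)
classes = np.array([[1, 1, 2],
                    [2, 3, 3]])
flood = np.array([[True, True, True],
                  [False, False, True]])
print(flooded_share(classes, flood))  # {1: 1.0, 2: 0.5, 3: 0.5}
```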

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: C.02.13 - POSTER - ESA's Harmony mission

Harmony is the 10th ESA Earth Explorer mission. Earth Explorers are the backbone of the science and research element of ESA’s Living Planet Programme, providing an important contribution to the global endeavour of understanding the Earth system. The Harmony mission, which is currently progressing into Phase B2, is dedicated to the observation and quantification of small-scale motion and deformation (velocity gradient) fields, primarily at the air-sea interface (winds, waves, and surface currents, including measurements over extreme events), of the solid Earth (tectonic strain), and in the cryosphere (glacier flows and surface height changes).

Harmony aims to provide an integrated view of the dynamic processes that shape the Earth’s surface.
The Harmony space segment consists of a pair of satellites that will fly in convoy with one of the Copernicus Sentinel-1 satellites. These tandem receive-only satellites will passively pick up the same radar signals from Sentinel-1 from different vantage points. Over the ocean, the signals will be Doppler-processed so that surface velocity vector components are obtained along different lines of sight. This will offer the capability, for the first time, to observe velocity vectors directly from space. In addition, both tandem satellites will include an optical TIR instrument with multi-view capability for SST and cloud motion measurements that are co-located and contemporaneous with the SAR observations.
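The geometric principle behind recovering a velocity vector from several line-of-sight components can be sketched numerically. This is a generic least-squares illustration under my own conventions (azimuth measured from north), not Harmony's actual retrieval chain:

```python
import numpy as np

# Sketch: each Doppler-derived measurement is the projection of the surface
# velocity vector onto a line of sight; two or more distinct viewing
# directions let the horizontal vector be recovered by least squares.
# Convention assumed here: v_los = u*sin(a) + v*cos(a), with a the line-of-
# sight azimuth from north, u eastward and v northward.

def invert_velocity(los_azimuths_deg, los_speeds):
    """Recover (u, v) from line-of-sight projections along given azimuths."""
    a = np.radians(los_azimuths_deg)
    A = np.column_stack([np.sin(a), np.cos(a)])  # projection matrix
    (u, v), *_ = np.linalg.lstsq(A, np.asarray(los_speeds), rcond=None)
    return float(u), float(v)

# A synthetic current of 1.0 m/s east and 0.5 m/s north, seen from two
# viewing azimuths 90 degrees apart.
truth = np.array([1.0, 0.5])
az = [30.0, 120.0]
obs = [np.sin(np.radians(t)) * truth[0] + np.cos(np.radians(t)) * truth[1]
       for t in az]
print(invert_velocity(az, obs))  # ~ (1.0, 0.5)
```

With more than two lines of sight the same least-squares formulation averages down the measurement noise.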
Over land, Harmony will provide data to measure small shifts in the shape of the land surface, such as those resulting from earthquakes and volcanic activity, and will therefore contribute to risk monitoring. It will also provide new information to study the 3D deformation and flow dynamics of glaciers at the rapidly changing marginal zones of the ice sheets, for a better understanding of the impact of ice mass loss on sea-level rise.

The Earth System cannot be understood or modelled without adequately accounting for small-scale processes. Indeed, the parameterisation of the unresolved, sub-grid physical processes in global or regional models remains one of the main sources of uncertainty in climate projections, in particular with respect to air-sea coupling, cryosphere and clouds. Hence, it remains essential to rely on high-quality observations to sample and identify small-scale processes, to help emulate and calibrate advanced parameterisations of the small unresolved scales. High-resolution observations of the Earth System will thus play an increasingly central role in next generations of fully coupled Earth System Models (or Digital Twins of Earth).

The session will highlight the latest developments covering overall mission development status, mission science and foreseen exploitation of the mission higher-level products.


Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Spaceborne Transmitter/Stationary Receiver: Bistatic SAR retrieval over the Girose glacier

Authors: Andrei Anghel, Remus Cacoveanu, Lucas Davaze, Helmut Rott, Thomas Nagler, Julia Kubanek, Björn Rommen
Affiliations: EOS Electronic Systems, POLITEHNICA Bucharest, Mediation Climat, ENVEO GmbH, ESA/ESTEC
In recent decades, global atmospheric and ocean warming has led to a widespread shrinkage of the cryosphere, with a loss of mass for ice caps and glaciers. With future warming, glaciers will continue to lose mass at a rate depending on our future emissions. The diminution of the volume of glaciers and ice sheets is contributing to changes in river runoff and accelerated sea level rise. These long-term changes of grounded ice masses can be directly estimated by measuring surface elevation changes over time. Currently, the set of well-observed glaciers with mass balance measurements constitutes only a small fraction of the global total (121 of the nearly 275 000 referenced glaciers in 2021/22). In the current work, we highlight the activities performed in the COBIS4Harmony (Bistatic SAR Retrieval Over la Girose Glacier in the La Meije Massif, La Grave, France) project. The overall objectives of the COBIS4Harmony activity are derived from the scientific objectives of the Harmony mission, which relies on the exploitation of geometric diversity in multistatic SAR data to retrieve geophysical information, in most cases geometric in nature (height, velocities, and displacements). Most of the measurements and retrieval concepts have been or can be demonstrated using monostatic SAR acquisitions. However, a general question is how bistatic geometry will affect these measurements. Although theoretical electromagnetic forward models are quite often intrinsically bistatic in nature, they are typically used and validated for a monostatic configuration. Experimental data sets needed for validation are, naturally, very limited for the bistatic case. 
The C-band Opportunistic Bistatic Interferometric SAR (COBIS) system offers a differential interferometry architecture that uses Sentinel-1 as a transmitter of opportunity and makes it possible to exploit the information available from more than one orbit and more than one subswath per orbit (relative to the monostatic case); in some cases it can observe targets that are not visible in the monostatic images. The COBIS architecture is able to measure line-of-sight (LOS) displacements with millimeter-level accuracy and, in single-pass interferometry configuration, relative heights with meter-level accuracy. The main purpose of the COBIS4Harmony activity is to acquire bistatic data over a moving glacier to better understand Harmony’s contribution to cryosphere research and to calibrate models for Harmony’s performance analysis. The real value of the activity comes from the “real” bistatic acquisition geometry provided by the COBIS architecture. At the symposium we aim to present the acquired datasets and in situ measurements, the processing flow used to generate the bistatic products, and the outcomes of data analysis for deriving the penetration-related elevation bias of InSAR-based DEMs from interferometric parameters and backscatter properties. The measurement campaign took place between February and August 2023, during which we acquired and processed 52 datasets from all visible orbits. At the Girose glacier there are three available relative orbits: 139 (the main one, corresponding to the monostatic acquisition), 37 and 66. The acquired raw datasets were processed using an updated version of the COBIS processor, whose workflow can be summarized as follows. The processing flow is composed of synchronization, selection of pulses, range compression, azimuth focusing, Sigma0 calibration, and single-pass interferogram generation and calibration. Synchronization between the satellite transmitter and the fixed ground receiver comprises timing, frequency, and position synchronization.
The bistatic synchronization software performs position synchronization and pulse compression using the direct signal received on the synchronization channel. The range-compressed data is focused on a user-defined grid using a time-domain back-projection algorithm. In the case of the Girose glacier, focusing uses a LIDAR-based DEM of the area acquired in 2021 (time-domain back-projection focusing with a DEM directly yields geocoded terrain-corrected (GTC) images). The generation of single-pass InSAR phase/DEM products requires a calibration step for two reasons: the baseline tilt (which cannot be exactly aligned with the local vertical direction) and possible fluctuations of the phase difference between the imaging channels. The phase calibration is performed using the external LIDAR-based DEM (which is also employed to aid the phase unwrapping process) by choosing a known stable periglacial area as the interferometric phase reference. The obtained bistatic products contain the following elements: SLC images (in arbitrary intensity units), Sigma0 images (using as reference area the resolution cell projected on the external DEM), single-pass InSAR images (coherence, phase and DEM) and auxiliary information computed on the available DEM (e.g., vertical interferometric wavenumber, incidence/backscatter angle maps, resolution cell maps). The Girose glacier (4.6 km2 in 2023) is a north-facing alpine glacier with altitudes ranging from 2830 to 3584 m (2021). Its geometry makes it possible to track the end-of-summer snowline altitude (representative of the equilibrium line altitude) by analysing optical Sentinel-2 A/B images. On average, it was at 3179 m in 2021, 3363 m in 2022 and 3333 m in 2023. Above this line, the vertical structure and spatial variability of the annual snow layer were explored with two snow pits and 11 probing measurements performed in February and April 2023.
Additionally, two boreholes were drilled to characterize the vertical structure of the underlying firn. The data analysis part includes backscatter simulations as well as forward computations and inversions for scattering phase centre and elevation bias estimation. The apparent glacier surface in interferometric DEMs is related to the position of the scattering phase centre in the snow and firn volume. Algorithms for forward and inverse computation of the scattering phase centre and elevation bias in a snow/firn medium are essentially based on the inversion of volumetric coherence and rely as inputs on interferometric and backscatter parameters. If the penetration depth is small compared to the interferometric height of ambiguity, the penetration-related elevation bias is approximately equal to the two-way power penetration depth. For large relative penetration depth, the elevation bias approaches one quarter of the ambiguity height. At C-band radar frequencies the penetration-related elevation bias in dry snow and firn of glaciers is on the order of several metres up to a few tens of metres, depending on dielectric and structural properties. The position of the scattering phase centre below the snow surface can be derived from the volumetric coherence, which is deduced from the measured total coherence by subtracting other contributions to coherence. The inversion algorithm for deriving the depth of the scattering phase centre from volumetric coherence is based on the assumption of a volume with uniform absorption and scattering properties. Standard InSAR DEM processing does not account for dielectric properties of a medium other than the atmosphere. This results in a horizontal and vertical shift of the apparent location of the scattering phase centre in the InSAR DEM, which needs to be taken into account when deducing the penetration-related elevation bias from the computed depth of the scattering phase centre.
Magnitude and sign of the shifts depend on the incidence/backscatter angle and on the permittivity. The local backscatter angle of the COBIS signal is about 80 degrees off the slope normal at the test field. This slanting observation geometry results in large differences between the actual location of the scattering phase centre and its apparent location in the InSAR DEM, as shown by the results obtained by inverting COBIS coherence data, in agreement with theoretical expectations. The outcomes of the bistatic data analysis will be presented at the symposium, along with insights into the scattering mechanisms at C-band frequencies for glacier snow and ice in bistatic geometry. A comparison of the COBIS4Harmony results with airborne data acquired during the SlimSAR4Harmony campaign will further investigate C-band penetration in snow and firn.
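The small- and large-penetration limits quoted above follow directly from the uniform-volume coherence model on which the inversion is based. The sketch below is illustrative only (the function names and the height-of-ambiguity value are assumptions, not part of the COBIS processor):

```python
import numpy as np

def uniform_volume_coherence(kz, d):
    """Volumetric coherence of a semi-infinite volume with uniform absorption
    and scattering properties.
    kz: vertical interferometric wavenumber [rad/m]
    d:  two-way power penetration depth [m]"""
    return 1.0 / (1.0 + 1j * kz * d)

def elevation_bias(kz, d):
    """Penetration-related elevation bias: depth of the apparent scattering
    phase centre, z = -arg(gamma) / kz = arctan(kz * d) / kz."""
    return -np.angle(uniform_volume_coherence(kz, d)) / kz

def invert_penetration_depth(gamma_vol, kz):
    """Invert |gamma_vol| for the two-way power penetration depth."""
    return np.sqrt(1.0 / np.abs(gamma_vol) ** 2 - 1.0) / kz

hoa = 50.0                # height of ambiguity [m], illustrative value
kz = 2 * np.pi / hoa
print(elevation_bias(kz, 0.5))   # shallow penetration: bias ~ d (close to 0.5 m)
print(elevation_bias(kz, 1e4))   # deep penetration: bias -> hoa/4 = 12.5 m
```

For kz*d much smaller than 1 the bias equals the two-way power penetration depth; for kz*d much larger than 1 the arctangent saturates at pi/2, giving one quarter of the 2*pi/kz ambiguity height, matching the two limits stated in the abstract.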

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Measuring topographic change after volcanic eruptions using multistatic SAR satellites: Simulations in preparation for ESA’s Harmony mission.

Authors: Odysseas Pappas, Juliet Biggs, Pau Prats-Iraola, Andrea Pulella, Adam Stinton, Alin Achim
Affiliations: University of Bristol, DLR, University of the West Indies
Harmony will revolutionise the way we measure the rapid topographic changes associated with volcanic eruptive activity. More than 800 million people across the world live within 100 km of a volcano, and monitoring is key to mitigating the threat of volcanic eruptions to human life. Maps of surface displacement and topographic change are vital for understanding the geometry and activity of underlying magma storage areas and the stability of steep volcanic edifices. Harmony will provide such high temporal-resolution views of topographic change at actively erupting volcanoes globally. This will improve the modelling and forecasting of volcanic dome growth, collapse, and the emplacement of volcanic flows, all of which can pose significant threats to nearby populations. As part of the Harmony science studies, we demonstrate the use of high-resolution bistatic interferometric data from TanDEM-X for the measurement of topographic change after recent eruptions at El Reventador, Ecuador and La Soufriere, St. Vincent and the Grenadines. We then simulate data at the lower, 20 m resolution of Harmony to gain insight into its capability to quantify topographic change. These case studies present variations in both local terrain and volcanic hazard that provide a broad picture of Harmony’s resolving capabilities and allow us to better understand and quantify the effects of topography on the resolution and accuracy of Harmony’s interferometric measurements. Our results demonstrate that Harmony's resolution can be sufficient to accurately resolve and measure topographic change such as the emplacement of lava flows, but may be challenged in areas of steep topography where unwrapping errors can occur. The experimental results highlight the effect of acquisition pass direction with respect to local topography, the challenges arising in areas of steep topography, and the importance of masking results based on estimates of precision and resolution.
In areas of steep terrain (such as most volcanoes), layover and shadow frequently occur, introducing artefacts and erroneous data into the generated interferograms, which in turn complicate the unwrapping process and introduce errors. Finally, we discuss some of the challenges, as well as the implications of the Harmony mission for the future of volcano monitoring.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone H)

Poster: Preliminary InSAR landslide applicability assessment for Harmony mission

Authors: MSc. Shaokun Guo, Dr. Jie Dong, Dr. Mingsheng Liao
Affiliations: State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, School of Remote Sensing and Information Engineering, Wuhan University
With its unique ability to provide day-and-night, high-spatial-resolution coverage, SAR has proven to be an effective tool in Earth observation (EO). Among common SAR systems, Sentinel-1 offers a stable global revisit and easy accessibility, making it fundamental to the research community. To meet the growing demand for monitoring energy and matter exchange in rapidly changing environments with unprecedentedly fine granularity, an extension of Sentinel-1, the Harmony mission, is planned. The Harmony mission will feature two passive companions, known as the Harmony satellites, in conjunction with an enhanced Sentinel-1 satellite, and the existing orbit design will be preserved. These satellites will fly together in two configurations: the "stereo-phase" mode for along-track interferometry and the cross-track interferometry (XTI) mode for acquiring topographic time series. In both configurations, the companions will be separated from the master satellite by some distance, so the scattered signals are received at a squint angle. Among the various applications of SAR, Differential InSAR (DInSAR) aims to determine the ground deformation field by cancelling out irrelevant phase components, isolating only the position-change-related part. Specifically, it compares the distance between a ground scatterer and the sensor at the corresponding centroid time across repeated observations. Currently, all spaceborne SAR satellites operate in Sun-synchronous orbit (SSO), which has a near-polar inclination to ensure consistent solar illumination and global coverage. As a result, they are insensitive to deformation in the north-south (meridional) direction, while east-west and vertical deformation can be captured. This issue is more pronounced in the early detection of landslides than in ground subsidence monitoring. Landslides are governed by gravity and tend to develop along the slope direction, invalidating the "vertical deformation" hypothesis. A true 3D deformation monitoring scheme is therefore necessary.
The upcoming Harmony mission may address this limitation by exploiting the relatively large squint angle of the companions, especially in stereo-phase mode, where the along-track distance reaches 350 (400) kilometers, to pioneer 3D deformation monitoring. InSAR landslide applicability, encompassing both geometric distortions and phase sensitivity to deformation, is an established concept. It focuses on the geometric aspects of imaging to determine how along-slope deformation is reflected in the interferometric phase. However, existing work adopts a monostatic configuration and has not been extended to the bistatic case. In this study, we focus on the practical challenges of landslide monitoring using the Harmony stereo-phase formation. The major contributions are as follows:
1. We extend the existing monostatic InSAR applicability theory to accommodate bistatic cases. SAR systems differentiate adjacent targets by their slant range and Doppler discrepancy, or the slant range and its derivative. Under far-field conditions, we simplify the analysis by neglecting centroid time variation within a local scope, allowing for a straightforward examination of the spatial gradient of range and centroid Doppler at each ground position. In this way, we can identify the "phase-sensitive direction" and define the "layover direction." Using an advanced DEM, we then calculate the local slope angle and aspect. By combining these intermediate data, we can generate conventional applicability products, such as layover, shadow, spatial resolution, and sensitivity.
2. We implement the workflow using established middleware to enhance efficiency. To derive applicability products, particularly the geometric distortion regions, we must solve the line-surface intersection problem multiple times at each ground position. In monostatic cases, this simplifies to a visibility check in map coordinates.
In bistatic scenarios, however, variations in the layover and shadow directions cannot be ignored, preventing the usual simplifications. To optimize performance, we adopt a CPU-GPU hybrid structure: spatial directions are calculated on the CPU and then passed to hardware-accelerated ray tracing for layover and double-journey shadow extraction. With our implementation, the applicability processing of a single burst takes less than 10 seconds.
3. We use real datasets to conduct a preliminary applicability assessment covering mountainous areas in China. Using historical ephemeris and imaging parameters from Sentinel-1, we synthesize thousands of burst-wise geometric metadata files. After filtering out those from flat regions, we derive burst-wise layover/shadow masks, spatial resolution, and landslide sensitivity products for ~40,000 bursts. From these results, we calculate key indicators for all stations, showing that:
1. Layover areas are nearly identical across all three stations, which stems from the largely preserved rotational symmetry of the fleet’s bistatic configuration.
2. At the companion stations, the double-journey shadow mechanism enlarges the affected region to about twice that of the monostatic case, resulting in segmented ground coverage and posing a challenge for coherent point network construction.
3. In mountainous regions, the three stations complement each other to enhance overall landslide monitoring sensitivity.
In conclusion, although not a primary design goal, InSAR landslide monitoring can still benefit significantly from the ongoing Harmony mission. Subtle deformations in northwestern China, where the mountains are oriented along an east-west direction, will be captured, supporting geohazard mitigation efforts in the face of increasing precipitation.
However, substantial challenges remain in various aspects, such as coherent scatterer network configuration, multi-station result synthesis, and atmospheric correction using spatial correlation, to name a few. These challenges present opportunities for the research community.
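The bistatic "phase-sensitive direction" discussed above can be illustrated with a short sketch: for a bistatic range R_tx + R_rx, the phase sensitivity to a ground displacement is its projection onto the sum (bisector) of the two target-to-antenna unit vectors. This is an illustrative sketch in a local east-north-up frame; the function names, the frame, and the numbers are assumptions, not the study's implementation.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def bistatic_phase_sensitivity(tx_pos, rx_pos, target, disp, wavelength):
    """Interferometric phase change [rad] caused by displacement 'disp'.
    The bistatic range is R_tx + R_rx, so the phase-sensitive direction is
    the sum of the target-to-transmitter and target-to-receiver unit
    vectors; a monostatic geometry (tx == rx) recovers -4*pi/lambda * u.d."""
    u_tx = unit(np.asarray(tx_pos, float) - np.asarray(target, float))
    u_rx = unit(np.asarray(rx_pos, float) - np.asarray(target, float))
    return -2.0 * np.pi / wavelength * np.dot(u_tx + u_rx, np.asarray(disp, float))

def downslope_unit_vector(slope_deg, aspect_deg):
    """Downslope unit vector in a local east-north-up frame from the
    DEM-derived slope angle and aspect (aspect clockwise from north)."""
    s, a = np.radians(slope_deg), np.radians(aspect_deg)
    return np.array([np.sin(a) * np.cos(s), np.cos(a) * np.cos(s), -np.sin(s)])

# 1 cm of downslope motion on a 30-degree, east-facing slope at C-band,
# with the transmitter overhead and a companion 350 km along-track (north):
d = 0.01 * downslope_unit_vector(30.0, 90.0)
phi = bistatic_phase_sensitivity([0, 0, 700e3], [0, 350e3, 700e3], [0, 0, 0], d, 0.056)
```

A per-pixel landslide-sensitivity product is then simply this projection of the local downslope direction, evaluated for each station.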

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: C.01.11 - POSTER - Airborne and Ground-based Instrument Demonstrators

Airborne and ground-based remote sensing instrument demonstrators are important tools to demonstrate new instrument concepts, to develop and verify the retrieval of geophysical parameters, and to make reference measurements for the calibration and validation of Earth observation missions.
This session aims to present ongoing and completed developments of airborne and ground-based instrument demonstrators.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: The ALADIN Airborne Demonstrator – 2-µm Doppler Wind Lidar Team: Aeolus Support from Mission Preparation and Validation to Re-processing

Authors: Christian Lemmerz, Dr. Oliver Lux, Dr. Benjamin Witschas, Dr. Stephan Rahm, Dr. Uwe Marksteiner, Dr. Alexander Geiß, Dr. Andreas Schäfler, Thorsten Fehr, Oliver Reitebuch
Affiliations: DLR - German Aerospace Center, Ludwig-Maximilians-University, ESA-ESTEC
The German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt e.V., DLR) executed four airborne validation campaigns during the first three years of ESA’s Earth Explorer mission Aeolus. The Atmospheric LAser Doppler INstrument (ALADIN) on board Aeolus was the first Doppler wind lidar in space, providing wind profile measurements on a global scale for almost five years after its launch in August 2018. One of the major mission goals, improving the quality of Numerical Weather Prediction (NWP) with its data, was achieved, backed by an extensive validation strategy. After three campaigns in Central Europe (2018, 2019) and the North Atlantic region around Iceland (2019), DLR carried out the Aeolus VAlidation Through Airborne LidaRs in the Tropics (AVATAR-T) campaign on Sal Island, Cape Verde, in September 2021 as part of the ESA-NASA Joint Aeolus Tropical Atlantic Campaign (JATAC). These efforts built on more than a decade of pre-launch ground and airborne campaigns aimed at supporting Aeolus operations and processor development. The campaigns deployed two lidar instruments: a scanning, high-accuracy heterodyne Doppler wind lidar (DWL) system operating at 2-µm wavelength, which served as a reference by providing wind vector profiles for the Aeolus validation, and the ALADIN Airborne Demonstrator (A2D). The A2D, as a prototype for the direct-detection DWL ALADIN, consists of a frequency-stabilized ultraviolet laser, a Cassegrain telescope and the same dual-channel optical and electronic receiver design. Like ALADIN, it measures wind speeds along the instrument’s line of sight by analysing particulate (Mie) and molecular (Rayleigh) backscatter signals. Operating these two lidar instruments in parallel on board the DLR Falcon research aircraft in a downward-looking configuration allowed detailed exploration of ALADIN’s wind measurement technology under various atmospheric backscatter conditions.
Over the course of the four post-launch airborne validation campaigns, the team completed 190 flight hours, covering 26,000 km along the Aeolus track during 31 coordinated underflights. Different operational states of the mission and different geographical regions were probed, including atmospheric conditions with high wind speeds and variability in and around the jet stream in the vicinity of Iceland, and the tropical dynamics of the Saharan air layer, the African Easterly Jet, the Subtropical Jet and the Inter-tropical Convergence Zone around Cape Verde. Thanks to the close similarity with the satellite instrument in design and measurement principle, the collocated A2D and 2-µm wind observations acquired during the campaigns yielded valuable insights for the optimization of the Aeolus wind retrieval and the related quality-control algorithms. For example, the A2D, unlike ALADIN, provided broad vertical and horizontal coverage of Mie winds within the Saharan air layer, while filtering out Rayleigh winds affected by Mie contamination through cross-talk. These findings supported the refinement of the Aeolus wind processor to enhance data coverage and accuracy already during the mission, and they continue to inform the evaluation of the reprocessed data quality after the mission’s end in July 2023. An overview is provided, ranging from mission-relevant results obtained in the pre-launch campaigns, through comparative wind observations of Aeolus and the DLR airborne DWLs, to the ongoing evaluation of the reprocessed data. With the follow-on generation of an Aeolus-type DWL being developed by ESA and EUMETSAT, similarly successful mission support based on a correspondingly upgraded second-generation airborne demonstrator can build on this legacy.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Analyzing Ku/Ka-band passive microwave observations for temperature, moisture and vegetation monitoring using airborne and ground observations

Authors: Richard de Jeu, Dr. Yoann Malbeteau, Univ.Ass.in Dipl.-Ing.in Ruxandra-Maria Zotta, Wouter Dorigo, Xiaoling Wu, Jeffrey Walker, Diego Miralles
Affiliations: Transmissivity, Planet Labs PBC, TU Wien, Climate and Environmental Remote Sensing, Monash University, Department of Civil Engineering, Ghent University, Hydro-Climate Extremes Lab
The development of land applications and products from passive microwave sensors has a long legacy. Microwave instruments with these bandwidths have flown on satellite platforms since the 1970s. The list of products derived from these satellites includes surface emissivity, surface temperature, soil moisture, vegetation opacity, snow derivatives and a series of Level-4 products linked to water and carbon fluxes. With the anticipated launch of the Copernicus Imaging Microwave Radiometer (CIMR), Ku-band (18.7 GHz) and Ka-band (36.5 GHz) measurements will become available at a spatial resolution of approximately 5 km. This unique data stream could feed a series of new and improved data products encompassing the land, ocean and atmosphere. However, ground and airborne observations at these frequency ranges over land are rare, even though such observations are a powerful tool for making a direct connection between actual observations on the ground and the satellite derivatives. Here we investigated the potential of these frequencies for deriving land Essential Climate Variables (ECVs) using airborne and ground measurements that have recently become available. A set of available ground and airborne Ka/Ku-band observations from recent years was collected, including data from i) the airborne Polarimetric K-band Scanning Radiometer (PKSR), which was used as part of the P-band Radiometer Inferred Soil Moisture project (PRISM19), ii) flights of NASA’s Advanced Microwave Precipitation Radiometer (AMPR), and iii) a new multifrequency ground-based radiometer measuring in similar frequency ranges. The microwave observations were collected at a variety of spatial resolutions, ranging from a few meters to several kilometers, with a range of incidence angles and over multiple landscapes. The study provides an overview of the capabilities of these frequencies and of the increased sensitivity to surface changes when moving to higher spatial resolutions.
Emphasis will also be placed on the importance of incidence angles for vegetation monitoring and on their sensitivity to soil moisture and temperature. Focusing on the airborne Ka-band data of the PRISM19 project, observations at a resolution of 10-15 m indicate a stronger sensitivity to vegetation changes than the satellite observations from the Advanced Microwave Scanning Radiometer (AMSR-2), and similar results are expected when exploring the two other sources of microwave data. We also noticed a significant increase in sensitivity for wet soils (soil moisture > 0.3 m3/m3), resulting in a 20 K drop in brightness temperatures, which was not observed at the satellite scale for this region. Overall, this study demonstrates the importance of ground observations and how they can help us further leverage the data from existing and future microwave satellite missions.
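How an emissivity change of wet versus dry soil translates into a brightness-temperature drop of this size can be illustrated with the standard zeroth-order tau-omega model widely used for passive microwave land retrievals. This is a hedged sketch: the parameter values and the wet/dry emissivity contrast below are illustrative assumptions, not campaign results.

```python
import numpy as np

def tau_omega_tb(ts, emissivity, tau, omega, inc_deg):
    """Zeroth-order tau-omega brightness temperature of a vegetated surface,
    with canopy and soil at the same physical temperature ts [K].
    tau: vegetation opacity at nadir; omega: single-scattering albedo."""
    gamma = np.exp(-tau / np.cos(np.radians(inc_deg)))   # canopy transmissivity
    soil = emissivity * gamma * ts                        # attenuated soil emission
    veg = (1.0 - omega) * (1.0 - gamma) * ts              # direct canopy emission
    veg_refl = (1.0 - omega) * (1.0 - gamma) * gamma * (1.0 - emissivity) * ts
    return soil + veg + veg_refl                          # canopy emission reflected by soil

# A soil-emissivity drop from 0.95 (dry) to 0.85 (wet) under light vegetation
# lowers Tb by roughly 20 K at a 55-degree incidence angle (illustrative):
print(tau_omega_tb(290.0, 0.95, 0.1, 0.05, 55.0) - tau_omega_tb(290.0, 0.85, 0.1, 0.05, 55.0))
```

With denser vegetation (larger tau) the canopy masks the soil signal, which is one reason higher-resolution airborne data over sparse scenes show stronger soil-moisture sensitivity than coarse satellite footprints.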

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: C-Band SAR Observations for Boreal Forest Monitoring: A Tower and Satellite Perspective

Authors: Shivam Rawat, Albert Monteith, Lars Ulander, Henrik Persson, Hjalmar Laudon, Ylva Sjöberg
Affiliations: Department of Forest Resource Management, Department of Space, Earth and Environment, Department of Forest Ecology and Management, Department of Ecology, Environment and Geosciences
Understanding boreal forest dynamics is critical for monitoring the global carbon and water cycles, as these forests play a significant role in regulating climate systems. Synthetic aperture radar (SAR) is widely used in remote sensing due to its cost-effectiveness, smaller antenna size, and the extensive heritage of ESA’s C-band SAR missions. However, C-band SAR observations have had limited success in estimating forest variables, such as biomass, water content, and sub-canopy soil moisture, due to challenges in signal penetration and temporal resolution. Previous ground-based radar studies, such as the ESA campaigns TropiScat and BorealScat, have acquired C-band time series, including tomographic observations. BorealScat remains the only study to have produced a C-band tomographic time series in a forest, despite challenges with data quality due to technical issues. Building on the advancements of BorealScat, this study leverages BorealScat-2, a unique tower-based radar system, to compare high-resolution C-band backscatter time series with Sentinel-1 C-band observations during 2024. BorealScat-2 improves significantly over previous C-band radar campaigns, including the past ESA BorealScat campaign. It is a 50 m tower located at the Svartberget Research site (64°15′N, 19°46′E) in northern Sweden, observing a pine-dominated boreal forest with other tree species such as spruce and birch. Its tomographic radar system features enhanced robustness against rain, and a custom-designed calibration reflector ensures high data quality and reduced data gaps. With a temporal resolution varying from 5 to 20 minutes, BorealScat-2 offers the first high-quality C-band tomographic time series of a boreal forest, capturing detailed vertical backscatter profiles and diurnal variations and making it possible to study water dynamics in the forest.
Uniquely, it is situated close to a 150 m ICOS (Integrated Carbon Observation System) mast that continuously measures greenhouse gas fluxes, meteorological variables, and soil conditions, allowing a better understanding of the interactions between radar backscatter signals and external parameters such as temperature, precipitation, and vapor pressure deficit (VPD). In contrast, Sentinel-1 provides global C-band coverage with a revisit time of 6 to 12 days, making it ideal for regional monitoring but limiting its ability to capture rapid forest dynamics. The study addresses two primary objectives:
1. Temporal and spatial comparisons: to analyze the alignment between BorealScat-2 and Sentinel-1 backscatter time series, focusing on temporal trends and spatial footprints during 2024.
2. Estimating Sentinel-1 backscatter during temporal gaps: to explore whether BorealScat-2’s dense time series, combined with meteorological and ICOS flux data, can mimic or estimate Sentinel-1 backscatter during temporal gaps, and to examine the feasibility of using BorealScat-2 to extend Sentinel-1 observations temporally by leveraging correlations between the datasets.
This highlights the complementary roles of tower-based and satellite radar systems in advancing forest monitoring. BorealScat-2 provides localized insights into diurnal and vertical forest dynamics, while Sentinel-1 offers valuable large-scale coverage. The ICOS data integration is expected not only to improve the interpretation of radar signals but also to advance our understanding of how boreal forests respond to environmental variability, such as droughts, storms, or seasonal transitions. Preliminary results demonstrate that BorealScat-2’s vertical profiles and high temporal resolution reveal backscatter dynamics that Sentinel-1’s sparse temporal coverage and integrated observations cannot capture.
Backscatter variations at C-band are smaller than at lower frequencies (e.g., L- and P-band) but show larger diurnal amplitudes, possibly reflecting a strong sensitivity to water content and transpiration. Since the failure of Sentinel-1B in 2021, the revisit interval has increased from 6 to 12 days, limiting the ability to capture fine-scale forest dynamics and highlighting the need for missions with shorter temporal baselines. Such advancements would make it possible to monitor daily changes, moisture levels, and interactions between the canopy and the ground on a global scale, supporting important applications like tracking carbon stocks and assessing forest health and vitality as the climate changes.

References
H. T. M. Dinh, J. E. D. Valet, L. Villard, F. Garestier, and L. M. H. Ulander, “TropiScat: Multi-temporal multi-polarimetric tomographic imaging of tropical forest,” 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 2012, pp. 7051–7054, doi: 10.1109/IGARSS.2012.6351947.
H. Laudon, E. M. Hasselquist, M. Peichl, K. Lindgren, R. Sponseller, F. Lidman, L. Kuglerová, N. J. Hasselquist, K. Bishop, M. B. Nilsson, and A. M. Ågren, “Northern landscapes in transition: Evidence, approach and ways forward using the Krycklan Catchment Study,” Hydrological Processes, vol. 35, pp. 1–15, 2021, doi: 10.1002/hyp.14170.
A. R. Monteith and L. M. H. Ulander, “Temporal survey of P- and L-band polarimetric backscatter in boreal forests,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, pp. 3564–3577, Oct. 2018, doi: 10.1109/JSTARS.2018.2814825.
L. M. H. Ulander and A. R. Monteith, “Time series of P- and L-band forest backscatter from BorealScat,” 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, Texas, USA, pp. 4310–4313, Jul. 2017, doi: 10.1109/IGARSS.2017.8127955.
A. R. Monteith and L. M. H. Ulander, “A tower-based radar study of temporal coherence of a boreal forest at P-, L-, and C-bands and linear cross polarization,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2022, Art no. 4402315, doi: 10.1109/TGRS.2021.3074098.
A. R. Monteith, L. M. H. Ulander, S. Steele-Dunne, P. Bennet, J. Westin, H. Persson, T. Leistner, H. Laudon, and J. E. S. Fransson, “A tower-based radar experiment for studying the effects of boreal forest tree-water relations,” SSRN, 2024. Available: https://ssrn.com/abstract=4991471, doi: 10.2139/ssrn.4991471.
M. Peichl, M. Nilsson, P. Smith, P. Marklund, G. De Simon, P. Löfvenius, R. Dignam, J. Holst, M. Mölder, T. Andersson, N. Kozii, E. Larmanou, M. Linderson, and M. Ottosson-Löfvenius, “ETC L2 ARCHIVE, Svartberget, 2019-01-01–2024-09-01, ICOS RI,” ICOS RI, 2024. Available: https://hdl.handle.net/11676/aji8KUPQYA4MeDyJBWRnRx0S.
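The second study objective (estimating Sentinel-1 backscatter during revisit gaps from the dense tower series by leveraging correlations) can be sketched as a simple collocated regression. This is purely illustrative: the linear model, function names and synthetic numbers are assumptions, not the study's method.

```python
import numpy as np

def fit_gap_fill_model(tower_db, s1_db):
    """Least-squares linear map from tower backscatter (dB), sampled at
    Sentinel-1 overpass times, to collocated Sentinel-1 backscatter (dB)."""
    A = np.column_stack([tower_db, np.ones_like(tower_db)])
    (slope, intercept), *_ = np.linalg.lstsq(A, s1_db, rcond=None)
    return slope, intercept

def estimate_s1(tower_db, slope, intercept):
    """Estimate Sentinel-1 backscatter during the 6-12 day revisit gaps
    from the (5-20 minute resolution) tower series."""
    return slope * np.asarray(tower_db) + intercept

# Synthetic collocated samples for illustration only:
tower = np.linspace(-12.0, -6.0, 24)
s1 = 0.8 * tower - 1.0
slope, intercept = fit_gap_fill_model(tower, s1)
```

In practice, meteorological and ICOS covariates (temperature, precipitation, VPD) would enter as additional columns of the design matrix.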

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: A GNSS-R in-land water level profiler from VLEO

Authors: Dr Manuel Martin-Neira, Marc Zimmermanns, Neil Boasman, Thibaud Gabillard
Affiliations: European Space Agency
GNSS Reflectometry (GNSS-R) is an appropriate tool for profiling the water level of lakes and rivers with high precision from space. It is also low-cost, as the GNSS-R payload can be accommodated on very small platforms, and it has the potential to provide short revisit times at global coverage. The key to the high precision resides in the fact that the surface of inland water bodies reflects GNSS signals coherently most of the time, allowing the use of precise carrier phase-based observations. The resulting instrument precision is typically at the millimetre level. Because the GNSS-R technique is based on forward scattering, the reflected signals offer high amplitude, and small antennas can be used to receive them. Another advantage of coherent GNSS-R observations is their capability to penetrate foliage down to the water surface. This paper will present the advantages of exploiting GNSS Reflectometry from Very Low Earth Orbit (VLEO). The most significant advantage is the enhanced received reflected power, which allows simpler antennas in the GNSS-R payload. The concept proposed is that of a fleet of M very small satellites, each capable of processing a number N of reflections. Different fleet-size scenarios will be considered, down to revisit times over a given land area of 1 hour. This is necessary, for example, to monitor and track the effect of heavy precipitation on rivers, water reservoirs and inundated terrain. Other important advantages are the use of a, so far, very empty orbital shell, as well as the lower cost and reduced contamination of launches into VLEO.
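The carrier-phase water-level retrieval mentioned above rests on the standard specular-reflection geometry: to first order (locally flat surface, plane waves, curvature and troposphere neglected), the reflected signal's excess path over the direct signal is delta_rho = 2 h sin(elevation) for a receiver at height h above the surface. A hedged sketch with illustrative names and numbers, not the paper's processor:

```python
import math

def water_level_height(excess_path_m, elevation_deg):
    """Receiver height above the reflecting surface from the excess path of
    the reflected signal: delta_rho = 2 * h * sin(elevation)."""
    return excess_path_m / (2.0 * math.sin(math.radians(elevation_deg)))

def height_precision(phase_noise_cycles, wavelength_m, elevation_deg):
    """Height precision from carrier-phase noise: the excess-path precision
    is phase_noise * wavelength, scaled by the same geometric factor."""
    return phase_noise_cycles * wavelength_m / (2.0 * math.sin(math.radians(elevation_deg)))

# GPS L1 wavelength ~19 cm; phase tracking at the 1/100-cycle level at a
# 60-degree elevation gives millimetre-level height precision (illustrative):
print(height_precision(0.01, 0.1903, 60.0))
```

The same scaling shows why coherent (phase-based) observations, rather than code delays, are what enable the millimetre-level instrument precision quoted above.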

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: International cooperation and transnational open access within EUFAR

Authors: Dr. Thomas Ruhtz, Prof. Dr. Bogdan Zagajewski, Dr. Paola Formenti, PhD Jan Hanus, PhD Lucie Homolova, Herve Roquet, PhD Fausto Ferraccioli, PhD Bart Deronde, PhD Magdalena Ardelean, PhD Francesco Cairo, PhD Ils Reusen, PhD Melanie Ghysels-Dubois, Oguz Ozkan
Affiliations: Freie Universität Berlin, University Warsaw, CNRS, Global Change Research Institute of the Czech Academy of Sciences - CzechGlobe, VITO, OGS, National Institute of Oceanography and Applied Geophysics, CNR, Institute of Atmospheric Sciences and Climate, INCAS, Center for Airborne Atmospheric & Topographic Research, ESF, European Science Foundation
In a time when significant global climatic and geopolitical changes are taking place, the development of European research infrastructure (RI), of standards for data acquisition, exchange and processing, and of highly qualified personnel is important. In particular, changes in the atmosphere directly affect the Earth’s surface (loss of biodiversity, agricultural health, forest health, soil quality, degradation of coastal zones, ...). Moreover, the exploration of the underlying land and water surfaces requires continuous methodological and technological advancement of airborne platforms and instruments. In addition, airborne observations are crucial for the development, validation, and augmentation of satellite-based systems, and for the technological maturation of instruments toward space missions. In an international context, it is critical for Europe to establish conditions for the coordinated development of airborne research facilities. Currently, many national research infrastructures and programmes are available, but the combined or shared utilisation of multiple European infrastructures remains relatively limited for the research community in Europe. Created in 2000 and supported by the EU Framework Programmes as an Integrating Activity, EUFAR (European Facility for Airborne Research) arose from the necessity to create a central network and access point for the airborne research community in Europe. EUFAR aims to support researchers by granting them access to the research infrastructures best suited to their needs when these are not accessible in their home countries. EUFAR also provides technical support and training in airborne research for the environmental sciences and geosciences, and enables the sharing of expertise and the harmonisation of research practices.
During its last funding programme (2014-2018), EUFAR coordinated transnational access to 17 state-of-the-art European instrumented aircraft and 3 remote-sensing instruments through 13 operators, who at that time were part of a 24-partner European consortium. The framework programme funding also supported networking and joint research activities focused on providing an enabling environment for, and promoting, airborne research. EUFAR developed significantly in terms of its activities, the size of its network and its budget, and reached maturity in coordination, networking activities, access provision and research development opportunities. This materialised in January 2018, when leading members of the project established EUFAR as an AISBL (international non-profit association under Belgian law) to continue collaborative activities beyond the framework of EU funding. The EUFAR website currently has more than 500 registered users and a citation database with more than 500 publications related to airborne facilities. One of its ongoing key objectives is to develop and implement a scheme of transnational Open Access, whereby researchers can use the facility most appropriate to their scientific objectives, regardless of their country of employment. Such a scheme covers both the sharing of facilities amongst EUFAR member organisations and the provision of access to external users and scientists. Transnational access (TNA) activities have been successful in stimulating airborne environmental research in countries without their own facilities, with many TNA projects producing one or more peer-reviewed publications describing their work. In total, over the course of the EUFAR FP7 and EUFAR2 projects, TNA activity provided coordinated access to fully funded flight hours on between 17 and 20 research aircraft, representing 933 flight hours in total for more than 400 users.
Airborne remote-sensing imaging spectrometers are used for cal/val of optical satellite missions measuring the Earth's atmosphere (clouds, aerosols, GHG) and surface (soil, vegetation, agriculture, geology, water, snow). In 2021, ESA, NASA-JPL and the University of Zurich organised an EU-wide campaign with the NASA-JPL-operated AVIRIS-NG instrument for validation of the spaceborne imaging spectrometer PRISMA (ASI) and in support of the upcoming spaceborne imaging spectrometer missions CHIME (Copernicus, ESA) and SBG (NASA). The Horizon 2020 Copernicus Calibration and Validation Solution (CCVS) project performed a survey of airborne campaigns in Europe and worldwide (CCVS Deliverable 2.5), compiling existing and planned campaigns conducted to validate or calibrate satellite mission requirements and/or satellite-based products. Campaign-based measurements can be considered event-based activities, e.g. data acquisition during a satellite overpass or a specific incident.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Highlights of the Nitrogen Cycle Airborne Measurements campaigns

Authors: Frederik Tack, Alexis Merlaud, Michel Van Roozendael, Lieven Clarisse, Lara Noppen, Martin Van Damme, Pierre Coheur, Thomas Ruhtz, Sabine Chabrillat, Robert Milewski, Helge Dämfling, Mary Langsdale, Callum Middleton, Hamed Shariatmadar, Simon Hook, Dirk Schuettemeyer
Affiliations: Royal Belgian Institute for Space Aeronomy (BIRA), Université libre de Bruxelles (ULB), Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing (SQUARES), Institute for Space Sciences, Freie Universität Berlin (FUB), German Research Center for Geosciences (GFZ), Earth Observation and Wildfire Research Group, King’s College London (KCL), National Aeronautics and Space Administration - Jet Propulsion Laboratory (NASA-JPL), European Space Agency (ESA-ESTEC)
The ESA NITROCAM (NITROgen Cycle Airborne Measurements) campaigns have been designed to serve as both an airborne demonstration case and a subsequent feasibility study for a potential future satellite mission combining measurements of nitrogen dioxide (NO₂) and ammonia (NH₃). Since pre-industrial times, reactive nitrogen emissions have increased fivefold due to the expansion of agriculture, industry, transportation, and domestic production. There is a pressing need to improve our understanding of the global nitrogen cycle, monitor its disruptions, and evaluate the resulting environmental and societal impacts. This can be achieved by measuring the key reactive nitrogen species, NH₃ and NO₂, at high spatial resolution (< 1 km). As part of NITROCAM, an airborne demonstrator has been developed, where NO₂ is measured using BIRA’s UV-VIS spectrometer SWING, and NH₃ is measured simultaneously using either the TELOPS HYPERCAM LW (GFZ) or the HyTES (JPL) instrument. Main objectives:
• Practical showcase of simultaneous detection of the key reactive nitrogen species, NH₃ and NO₂, at high spatial resolution (50-200 m) based on an airborne demonstrator.
• Feasibility study to simulate space observations and study of requirements on spectral and spatial resolution and the impact on sensitivity, detection limit, etc.
• Identification of a variety of NO₂ and NH₃ sources (traffic, industry, farming activities, water treatment, etc.) at different scales and under different geophysical conditions.
• Demonstration of the viability of an airborne simulator that could serve in subsequent phases of a dedicated satellite mission, focusing on the global nitrogen cycle at high spatial resolution (0.5-1 km).
The objectives have been achieved through the collection and analysis of (mainly) airborne remote sensing data over selected regions of interest (ROI) during three separate campaigns, i.e. 
a first campaign over Germany in 2020-2021, a second campaign over Northern Italy in summer 2022 and a third campaign over Northern Italy in 2023. The campaigns generated quality datasets that supported an analysis of the main objectives listed above. The first campaign took place between August 2020 and May 2021 over selected targets in the greater Berlin region and can be considered an exploratory campaign, focusing on testing flight plans and instrument settings and on setting up retrieval algorithms for the optimal acquisition and retrieval of the NO₂/NH₃ geophysical parameters over major sources. The second NITROCAM campaign took place in May and June 2022 over selected targets in the Po Valley, one of the major hotspots for NO₂ and NH₃ in Europe. Its main focus was a continued investigation of the potential and limitations of aircraft observations of NH₃ and NO₂ over a variety of sources, including weaker (agricultural) sources. The third NITROCAM campaign took place between May 2023 and July 2023, again over selected targets in the Po Valley, as part of the larger-scale ESA/NASA SwathSense campaign deploying the JPL HyTES LWIR airborne imaging spectrometer. Based on the analysis and results of the previous NITROCAM campaigns, the focus of the 2023 campaign was on agricultural NH₃ sources and further measurements of industrial NO₂/NH₃ point sources, including repeatedly overflying the same farm, which yielded an unprecedented dataset of snapshots of agricultural NH₃ emissions over a single animal farm, showing large variations in time. The presentation will focus on the highlights of the three campaigns and the lessons learned.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Airborne Demonstrator for Near-Real-Time Infrastructure Monitoring With On-Board Processing From Satellites

Authors: Felix Wenk, Daro Krummrich
Affiliations: OHB Digital Connect GmbH
In the scope of the Horizon-Europe-funded project IIMEO, short for "Instantaneous Infrastructure Monitoring by Earth Observation", we are developing and evaluating an airborne prototype system in order to eventually be able to monitor different types of infrastructure using a satellite constellation in low Earth orbit. Our society critically depends on functional infrastructure for communication, energy supply and, of course, transportation. Damage to or disruption of the means providing such infrastructure, such as data cables, pipelines, or roads and railways, impacts daily lives, is economically expensive and, in extreme cases, has long-lasting effects on supply chains. Such disruptions are frequently caused by events which are very hard to predict, e.g. natural disasters or extreme weather. Hence, to be able to promptly restore disrupted infrastructure, the corresponding infrastructure elements need to be monitored regularly and, critically, the monitoring results need to be available very shortly after a potentially damaging event has occurred. To enable this, the eventual goal is to monitor infrastructure scattered over a large geographic area in a cost-effective manner using a constellation of satellites, where most of the data is processed on board to avoid communication bottlenecks preventing timely delivery of results. Even leaving the design of the constellation aside, this is difficult in multiple ways. In the low-visibility conditions immediately after an event requiring the assessment of infrastructure elements such as railway tracks, the satellite must not be entirely blind, while still being able to provide the most accurate results under more favourable environmental conditions. Image processing is computationally expensive, so to avoid merely trading the communication bottleneck for a computation-time bottleneck, the satellite needs to be equipped with sufficient compute power. These difficulties are addressed in the IIMEO project. 
Multiple sensors, namely Synthetic Aperture Radar (SAR) and RGB cameras, are combined and their sensor data fused to make the system robust against bad visibility while still providing detailed results under favourable conditions. An on-board processing unit, including co-processors to accelerate computationally expensive operations such as image processing, is used to avoid the transfer of large image data to the ground. Instead, the data representing the monitoring results, such as the locations of obstacles on railway tracks, which is much smaller than the original image data, is transferred, reducing the time needed to deliver results to the operator of the infrastructure under observation. More concretely, the future satellite-based system is supposed to deliver monitoring results within an hour of being queried. If, for instance, railway tracks are to be monitored -- the pilot use case considered in IIMEO -- the railway infrastructure operator should be able to specify a geographic region of interest in a task order for the monitoring system and then receive information regarding obstructions on railway tracks inside the specified region within one hour after posting the task. The information itself includes the geo-location of the obstruction as well as its extent, and optionally an image of the scene at that location. In the IIMEO project we primarily address the sensing- and processing-related aspects of such a system by developing and evaluating an airborne demonstrator including the sensors, the on-board processing unit and an on-ground system, which communicates with the demonstrator's on-board software, performs some of the computations near the end of the data processing chain, and finally presents the monitoring results. While some components of the demonstrator will have to be changed when moving from the airborne demonstrator to an actual satellite, e.g. 
the camera could not remain the same simply due to the greatly different distance to the ground, other components are more immediately transferable. For the latter, in particular for the on-board processing software and hardware, we aim for TRL 6 by the end of the project. The demonstrator itself is built around OHB's "Condor" airplane, a motorized sailplane based on the Stemme S10. The sensing equipment is contained in two pods mounted below the plane's wings. The sensors are two RGB cameras, the first with an approximately nadir view of the ground, the second looking obliquely to the side of the plane in the same direction as a 35-GHz SAR. Crucially, the SAR and the obliquely looking camera share (most of) their fields of view, such that this camera and the SAR observe the same scene. During adverse weather conditions, the SAR can still be used to check the backscatter supposedly coming -- in our pilot use case -- from a railway track against what would be expected from an unobstructed railway track. The obstruction detection results from both the SAR and the RGB cameras are fused at the end, such that, if environmental conditions are not adverse, reported obstructions are both more reliable and more precise. To generate the inputs for this final fusion step, we are developing two chains of processing steps, one for SAR data and one for the RGB images. For SAR, prior to the data acquisition, a map of the railway tracks in the region of interest -- in our case OpenRailwayMap -- is used to pre-compute a set of so-called tiles on which the SAR is focused. Thus, SAR images are not formed for the whole theoretically visible swath, but only for the regions where railway tracks can reasonably be expected to show up based on the map data. The SAR image corresponding to each tile is then checked for obstructions on railway tracks. 
As of November 2024, the most promising approach to this is comparing statistics of the backscatter with statistics of the backscatter of a reference recording made from a similar viewing geometry at a time when the railway track was known to be unobstructed. The RGB images are processed by first geo-referencing them using the plane's pose at the time of image acquisition and an estimate of the ground plane relative to the camera, computed from two successively acquired, overlapping images. Again using information from OpenRailwayMap, the parts of the images which, even considering the inaccuracy of the geo-reference, cannot show railway track are masked out to save both data volume and computation time. The remaining part of an image is fed to an image classifier, which outputs an infrastructure (railway) detection mask computed using a pre-trained convolutional neural network (CNN). This, in turn, is compared with a railway mask computed from the map to find the locations in the image which are supposed to show track but do not, and thus indicate obstructions. To make this work, the geo-referencing of the image is improved by registering the detected railway tracks onto the railway map. All of this runs on on-board hardware built into the airplane. The SAR acquisition and image formation run on a computer which is part of the SAR acquisition system. The RGB image acquisition and geo-referencing run on two computers, one per camera. These acquisition computers are connected via Gigabit Ethernet to a central on-board processing unit, which executes the processing steps that are further removed from the concrete acquisition hardware, namely the obstruction detection on SAR images, the railway track detection in RGB images and the computation of potential obstructions using the rail reference map. 
This on-board processing unit will be qualified for operation in low Earth orbit and is equipped with an Intel Myriad co-processor to accelerate the computationally most expensive steps of the processing chain, particularly inference using CNNs. As of November 2024, the railway track detector has been ported to run on the on-board processing computer; judging from the current state, we expect to end the project with a hardware/software combination that could be installed on a satellite with minimal modification. Early results from individual steps of the processing chains, using data from preliminary flights with the wing pod containing the obliquely looking SAR+RGB combination, are promising: after registration using the railway map, the worst-case geo-referencing error observed so far is about half a track gauge, and the precision of the rail classification is greater than 85% on the test subset of the dataset we currently use to develop the rail classifier. There is no explicit railway detection step in the SAR processing chain; however, after considerable experimentation, tracks are now also visible in the images formed for the "tiles" mentioned above. A more detailed evaluation of the system’s performance will be carried out later in the project. Since the project is on-going, we expect to achieve considerable progress between the submission of this abstract and the Living Planet Symposium: flights over the demonstration area for our pilot use case, railway tracks near Nis in Serbia, are planned for early 2025. For these flights, we plan not only to observe railway tracks but also to place obstacles on the tracks, which are then supposed to be reported once the processing chains finish processing the sensor data, demonstrating end-to-end operation of the monitoring system, from tasking to delivery of results.
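At the core of the RGB processing chain described above is a simple mask comparison: pixels where the rail map expects track but the detector sees none become obstruction candidates. A minimal sketch of that step (function and array names are ours for illustration, not taken from IIMEO):

```python
import numpy as np

def obstruction_candidates(map_mask: np.ndarray, detected_mask: np.ndarray) -> np.ndarray:
    """Flag pixels where the rail map expects track but the CNN sees none.

    map_mask      -- boolean raster derived from the railway map (True = track expected)
    detected_mask -- boolean raster from the railway-detection CNN (True = track seen)
    """
    return map_mask & ~detected_mask

# Toy 1-D example: track expected in cells 2..5, detector misses cells 3..4
expected = np.array([False, False, True, True, True, True, False])
detected = np.array([False, False, True, False, False, True, False])
print(obstruction_candidates(expected, detected))  # True only at the two missed cells
```

In the real chain this comparison only becomes meaningful after the geo-referencing has been refined by registering the detections onto the map, which the abstract describes as a separate step.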
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: ULID: an Unconnected L-band Interferometer Demonstrator

Authors: Dr Eric Anterrieu, Dr François Cabot
Affiliations: CNRS, CNES
The Soil Moisture and Ocean Salinity (SMOS) satellite has provided, for the very first time, systematic passive L-band (1420 – 1427 MHz) measurements from space with a spatial resolution of ~40 km. Within the frame of the studies conducted by CESBIO and CNES for the next generations of high-resolution L-band imaging radiometers based on aperture synthesis, the concept of unconnected interferometry plays a key role in the roadmap because it is a solution for improving the spatial resolution. Before deploying such a mission, many issues have to be solved. This is why, in early 2017, CNES initiated a proof-of-concept study for the Unconnected L-band Interferometer Demonstrator (ULID), which aims at demonstrating the capability to operate unconnected interferometry in orbit. This preliminary study was followed in 2019 by a feasibility study (phase A) at CNES, with the support of Airbus Defence and Space and Hemeria. The ULID system relies on a small constellation of three identical nano-satellites flying in close-range formation (the distance between the satellites varies slightly, from 35 to 45 m, along their orbit) and transmitting to the ground the signals acquired by similar detectors on board all satellites, to allow synchronization and correlation computation. Among the major issues this mission addresses, the most important ones are: 1. Autonomous control of the formation flying: the orbits selected for the ULID constellation rely on very small variations in eccentricity and inclination. This allows multiple satellites to move around the reference satellite, describing an ellipse over one orbital period. Carefully selected parameters keep the constellation stable and also guarantee minimal separation in case of differential drift. Furthermore, in-orbit experiments at close range are very limited and clearly require demonstration, especially with limited maneuver capacity. 
This is a key aspect of the ULID mission, with potential uses well beyond L-band interferometry. 2. Acquisition of the signals in a way that makes them usable: sampling and quantization will be performed at a very high level for ULID, so that trade-offs for future missions can be made consistently, after in-depth assessment of the achievable performance. 3. On-ground processing for the synchronization of the recorded signals (a 3 mm baseline knowledge translates into a 10 psec timing accuracy), but also for radio-frequency interference filtering, sub-band decomposition to minimize spatial decorrelation and enhance frequency coverage, and time-delay integration for radiometric accuracy improvement: all of these techniques can be assessed using ULID data.
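The quoted timing requirement follows directly from the baseline knowledge: a 3 mm path difference corresponds to the time light takes to travel 3 mm. A one-line check:

```python
c = 299_792_458.0           # speed of light, m/s
baseline_knowledge = 3e-3   # 3 mm baseline knowledge requirement, m
timing = baseline_knowledge / c
print(f"{timing * 1e12:.1f} ps")  # → 10.0 ps
```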
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Exploiting constellations in VLEO to enhance optical payloads

Authors: Ms Astrid Barzaghi, Mr Berthyl Duesmann
Affiliations: European Space Agency (ESTEC)
Among Earth Observation satellites, optical payloads are of primary importance in areas such as land or maritime monitoring, emergency management, security and climate change. A mission such as Sentinel-2 (S2), the European Space Agency's wide-swath, high-resolution, multi-spectral imaging mission, falls into this category. In the context of the evolving and more stringent requirements imposed by the fast-developing space industry, it is worth considering how a Sentinel-2 type of mission would benefit not only from being turned into a VLEO constellation but also from using the already available infrastructure. A Vega multi-spacecraft launcher system can launch about 12 mini-satellites of the Deimos 1 size class at the same time. The proposed orbit for this constellation is a 10-day / 153-orbit Sun-synchronous orbit, which corresponds to an altitude of about 470 km. This type of orbit offers a colocation with each of the S2 satellites once per day, equally spread over the same orbit; at least once every 10 days a perfect colocation is expected, which in turn provides an opportunity for good calibration in the absence of clouds. The expected swath is about half that of S2, so two spacecraft are needed for global coverage. By launching 12, 10 can achieve global coverage every 2 days and 2 can fly as spares. This already highlights two of the most important features of the proposed constellation: a significantly faster revisit time (S2 currently has a revisit time of 5 days), which is key in emergency management, and the possibility of guaranteeing redundancy. Each element of the constellation can count on about 500 s of ground station visibility to download the data, which can be sent sequentially to a single ground station, further reducing mission costs. 
In conclusion, the proposed constellation has the potential to provide S2-like products of high quality thanks to frequent cross-calibrations, with a higher revisit frequency, fault tolerance thanks to the distributed system, and the benefits of the VLEO environment.
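The quoted altitude can be sanity-checked from the repeat cycle alone using Kepler's third law (a two-body estimate; the actual Sun-synchronous design also accounts for J2 perturbations, which the abstract does not detail):

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0  # equatorial radius, m

# 153 orbits in 10 days -> orbital period in seconds
period = 10 * 86_400 / 153

# Kepler's third law: a^3 = MU * (T / (2*pi))^2
a = (MU * (period / (2 * math.pi)) ** 2) ** (1 / 3)
print(f"altitude ≈ {(a - R_EARTH) / 1000:.0f} km")  # ≈ 476 km, consistent with ~470 km
```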
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: The combined water vapor and high spectral resolution aerosol lidar WALES: an airborne demonstration and validation platform for current and future spaceborne lidars

Authors: Martin Wirth, Silke Groß, Andreas Fix
Affiliations: DLR
Several years ago, DLR led a proposal in response to an ESA Earth Explorer call for a spaceborne water vapor DIAL called WALES (WAter vapor Lidar Experiment in Space). At that time, the technology had not yet reached the required TRL and the proposed four-wavelength measurement concept had not been verified to really provide the envisioned benefits. After phase A, WALES was not selected, but DLR decided to build an airborne demonstrator with the goal of showing that a suitably powerful laser source at 935 nm can be successfully realized and that the multi-wavelength instrument concept is adequate to provide high-accuracy humidity profiling for most of the relevant atmospheric situations. Since the proposed optical parametric oscillator (OPO) based transmitter technology simultaneously generates light pulses not only at the wavelengths needed to retrieve water vapor (around 935 nm), but also at 1064 nm and 532 nm, this offers the possibility to realize a combined H2O and high spectral resolution (HSRL) aerosol lidar, and the WALES airborne demonstrator was designed as such right from the beginning. For the H2O-DIAL, light pulses at four wavelengths are emitted in the 935 nm absorption band of water vapor, one off-line and three on-line. For aerosol characterization, the depolarization is measured by cross-polarized channels at 1064 nm and 532 nm. At 532 nm, the system provides an iodine-filter based Rayleigh channel which allows for measurements of the extinction coefficient using the HSRL method. Since 2008, the WALES airborne demonstrator has been deployed during several large-scale field campaigns, first on DLR's Falcon F20 aircraft and later on the German research aircraft HALO (based on a Gulfstream G550). During these campaigns, data from all meteorological scenarios from the tropics to the high Arctic could be sampled, providing a rich dataset as a basis for impact studies or end-to-end simulations. 
The aerosol measurements laid the basis for a widely used aerosol classification scheme based on the three intensive properties depolarisation ratio, lidar ratio and colour ratio (SAMUM, EUCAARI). The water vapour measurements opened new perspectives for the study of ice cloud formation (HALO-TECHNO, ML-CIRRUS, CIRRUS-HL, HALO-(AC)3) and elucidated the role of lofted moist layers for atmospheric stability and its link to low cloud formation (NARVAL, EUREC4A). They also provided key input to the analysis of large-scale dynamic processes (T-PARC, NAWDEX, POLSTRACC, WISE). The system further served as a transfer standard to compare different humidity profiling techniques (LUAMI). The latest activities centred on the preparation and then the actual validation of Level 1 and 2 products of ESA's EarthCARE mission. During the PERCUSION campaign, as many as 33 underflights of EarthCARE from the tropics to the Arctic were successfully performed, providing a very rich dataset for in-depth validation of the scientific products of the mission, not only regarding the ATLID instrument, but also with respect to synergistic products with radar data. The presentation will outline the system layout of the WALES airborne demonstrator and present selected highlights from the latest validation activities.
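The abstract does not spell out the retrieval itself; for context, the standard narrowband DIAL equation from which the water vapour number density is obtained reads (with P_on and P_off the backscatter signals at the on-line and off-line wavelengths and Δσ the differential absorption cross-section):

```latex
N(r) = \frac{1}{2\,\Delta\sigma}\,\frac{d}{dr}\,
       \ln\!\left(\frac{P_\mathrm{off}(r)}{P_\mathrm{on}(r)}\right),
\qquad \Delta\sigma = \sigma_\mathrm{on} - \sigma_\mathrm{off}
```

Emitting three on-line wavelengths of different absorption strength, as WALES does, extends the dynamic range of this retrieval from the moist boundary layer to the dry upper troposphere.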
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Flying Laboratory of Imaging Systems – Aircraft Infrastructure to Support Spaceborne Imaging Spectroscopy Missions

Authors: Jan Hanuš, Tomáš Hanousek, Daniel Kopkáně, Miroslav Pikl, Tomáš Fabiánek, Lucie Homolová
Affiliations: Global Change Research Institute (CzechGlobe), Department of Geography, Masaryk University
Airborne imaging spectroscopy continues to play an important role in the development and calibration/validation of spaceborne imaging spectroscopy. Until recently, it was the only data source for simulating future hyperspectral satellite missions. Although experimental hyperspectral satellites are already providing vast amounts of data, and drones with hyperspectral systems are increasingly being used, aircraft remain viable and flexible platforms for targeted hyperspectral campaigns, allowing the integration of multiple sensors to explore future sensor synergies. Here we present one of the few fully operational airborne research platforms for Earth observation and ecosystem research in the European research area. The Flying Laboratory of Imaging Systems (FLIS) is operated by the Global Change Research Institute of the Czech Academy of Sciences (CzechGlobe). The system consists of three commercial imaging spectroradiometers and a laser scanner that acquire data simultaneously. One spectroradiometer covers the visible and near-infrared part of the spectrum with selectable spectral resolution (CASI-1500), and another covers the shortwave infrared part (SASI-600) with fixed spectral resolution. Together these two provide full spectral data between 380-2450 nm, mainly for the assessment of biochemical properties of vegetation, soil and water. This setup will be replaced by a new spectroradiometer, SAVI, which covers the full range between 400 and 2500 nm with a single detector and a single optical path. The third spectroradiometer covers the thermal longwave infrared part of the electromagnetic spectrum (TASI-600) and allows mapping of surface emissivity and temperature properties. The fourth instrument on board is the Riegl full-waveform laser scanning system, which provides data on landscape topography and the 3D structure of objects. 
In addition to the four primary sensors, the aircraft is certified to carry the HyPlant instrument, an airborne demonstrator for the upcoming ESA FLuorescence EXplorer (FLEX) satellite mission. The FLIS infrastructure currently supports the Horizon Europe projects Acquarius and AgroServ. It can also be used by the international imaging spectroscopy community through open access to the CzechGlobe research infrastructure.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: Innovative VLEO Satellite Concept for Very High-Resolution Urban Monitoring

Authors: Jean-Marc Delvit, Gwendoline Blanchet, Christophe Latry, Nicolas Bataille, Narjiss Boufracha, Véronique Pascal
Affiliations: CNES
The recent devastating floods in Valencia have shown how vulnerable urban areas are to climate change. Better monitoring of the Earth, and of cities in particular, is crucial for improving resilience to climate change. In 2023/2024, CNES proposed a new concept of VLEO satellites dedicated to the quarterly monitoring of the world's largest cities, typically cities with more than 50,000 inhabitants. Regular acquisitions every three months make it possible to follow the natural, seasonal evolution of vegetation, but also urban development, which can evolve fairly quickly depending on the area. The concept is based on VLEO satellites orbiting between 200 km and 400 km above the Earth's surface, with an optical payload producing extremely high resolution (EHR) images with a spatial resolution below 20 cm at controlled cost. High-resolution imagery is required to observe very small details in towns. Moreover, the output image stream will feed digital twins, which will necessarily be more faithful to the real landscape and more helpful for modelling it. The considered spectral bands are the traditional red, green and blue channels, which are sufficient to feed a large number of civil applications whilst keeping the focal plane relatively simple. Because of the very low altitude, the traditional satellite design needs to be completely rethought:
- the shape and the propulsion of the satellite, to counteract atmospheric drag and higher gravitational forces;
- the protection of objects and equipment with adequate coatings, to limit atomic oxygen erosion;
- operations with short communication links but a better link budget.
The results of this preliminary study will be presented. Different orbiting altitudes have been considered, because the amplitude of the physical constraints at these altitudes and the capacity to acquire all the cities do not evolve linearly with altitude. 
The quarterly revisit of large cities between -60° and 60° latitude requires an efficient acquisition system to preserve image quality over a large range of radiances, whilst minimizing the size of the primary optical mirror to reduce the satellite cost. The trade-offs made in the dimensioning of the instrument, the satellite and the system will be described.
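To illustrate why atmospheric drag dominates the design at these altitudes, here is an order-of-magnitude estimate of the drag force on a VLEO platform. The density and drag-coefficient values are rough textbook assumptions (density at these altitudes varies by more than an order of magnitude with solar activity), not figures from the study:

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def drag_force(alt_m, area_m2, cd=2.2, rho=2e-11):
    """Drag force F = 0.5 * rho * v^2 * Cd * A for a circular orbit.

    rho defaults to a rough atmospheric density near 300 km (kg/m^3).
    """
    v_sq = MU / (R_EARTH + alt_m)  # circular orbital speed squared
    return 0.5 * rho * v_sq * cd * area_m2

# ~1 m^2 frontal area at 300 km: drag on the order of a millinewton,
# which must be continuously compensated by propulsion.
print(f"{drag_force(300e3, 1.0) * 1e3:.2f} mN")  # → 1.31 mN
```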
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: C.01.07 - POSTER - Discrete Global Grid Systems (DGGS)

A Discrete Global Grid System (DGGS) is a spatial reference system that uses a hierarchy of global tessellations to partition the surface of the Earth into grid cells. This session will provide a forum for the discussion of requirements for DGGS implementations on EO data and relevant use cases. DGGS can act as a unified framework for EO data integration, multisource data fusion and cloud computing on a global scale. Therefore, DGGS can play a key role in the development of new products within the EO downstream sector, which must incorporate interoperability best practices, automatization, systemization, visualization, and on-the-fly processing through integrated web-services.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Towards DGGS native data cubes with DGGS.jl

Authors: Daniel Loos, Davide Consoli, Fabian Gans, Gregory Duveiller
Affiliations: Max Planck Institute for Biogeochemistry, OpenGeoHub Foundation
Discrete Global Grid Systems (DGGS) have emerged as a transformative approach to minimizing spatial distortions in geospatial data processing. DGGS are not only used for geocoding but also offer a highly efficient data structure capable of reducing storage requirements by up to 33% compared to the current Sentinel-2 UTM tiling grid. The performance of operations on DGGS native data cubes is intrinsically linked to the cell index, which plays a crucial role in data management and retrieval. Most DGGS implementations utilize a hierarchical one-dimensional index to name and sort cells, optimizing them for parent-child queries. This structure is particularly beneficial for operations such as upscaling and downscaling, which are essential for integrating data with varying spatial resolutions. However, many real-world applications, such as visualization or convolutions, require efficient handling of distant neighbor queries based on spatial distances. These applications often rely on bounding boxes or moving windows, which are not optimally supported by traditional DGGS implementations. In response to these challenges, we introduce DGGS.jl, a Julia package specifically developed to create and utilize DGGS native data cubes optimized for neighbor queries. Our package employs the DGGRID Q2DI index to store data on a hexagonal ISEA4H grid, enabling compact and efficient data cube arrays. We have implemented methods to seamlessly convert raster data between geographic and Q2DI coordinates, access neighbor disks around a given cell, and visualize these data on a global scale. To demonstrate the practical application of DGGS.jl, we conducted a study fitting a local linear regression model to describe land surface temperature as a function of plant functional type cover fractions. Our analysis compared the linear models fitted using DGGS disks with those derived from traditional bounding boxes in a geographic coordinate system. 
In conclusion, DGGS.jl represents a significant advancement in geospatial data processing, offering a robust solution for applications dominated by spatial queries. By leveraging the unique capabilities of DGGS, researchers and practitioners can achieve more efficient data storage, retrieval, and analysis, paving the way for innovative applications in various scientific domains.
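The index trade-off described above can be illustrated with a toy aperture-4 hierarchical index (an illustrative Python sketch only; the Q2DI index actually used by DGGS.jl is a two-dimensional scheme and works differently):

```python
# Toy one-dimensional hierarchical DGGS index, aperture 4 (ISEA4H-style).
APERTURE = 4

def parent(idx):
    # Dropping the last base-4 digit moves one level up the hierarchy.
    return idx // APERTURE

def children(idx):
    # Appending one base-4 digit enumerates a cell's children.
    return [idx * APERTURE + k for k in range(APERTURE)]

cell = 54
assert all(parent(c) == cell for c in children(cell))

# All descendants of a cell occupy one contiguous index range, which is why
# hierarchical parent-child queries (up-/downscaling) are cheap ...
grandchildren = [g for c in children(cell) for g in children(c)]
print(grandchildren == list(range(cell * 16, (cell + 1) * 16)))  # True

# ... whereas spatially adjacent cells can land at distant indices, which is
# what motivates neighbor-optimized layouts like the Q2DI-based data cubes.
```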
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Advancing Cross-Jurisdictional Marine-Terrestrial Data Integration Through Discrete Global Grid Systems

Authors: Dr Matthew Purss
Affiliations: Pangaea Innovations Pty Ltd
The integration of geospatial data across the land-sea interface presents unique challenges that have historically required complex spatial transformations and data resampling, often compromising data integrity and introducing additional uncertainties into the data. This paper presents groundbreaking results from the Open Geospatial Consortium's Federated Marine Spatial Data Infrastructure (FMSDI) Pilot, specifically focusing on novel approaches to data integration at the land-sea interface using Discrete Global Grid Systems (DGGS). Our research demonstrates the successful implementation of TerraNexus, a semantically enabled 3D/4D DGGS platform, to achieve seamless data interoperability across the intertidal zone without requiring traditional spatial transformation of terrestrial or marine data. This innovation represents a significant advancement in marine spatial data infrastructure, addressing long-standing challenges in cross-jurisdictional data integration while maintaining data integrity and provenance. The platform serves as a reference implementation of multiple OGC standards, including the Abstract Specification Topic 21 and the OGC API DGGS. Our approach leverages a data-agnostic architecture that enables demonstration across diverse regions, particularly in UK and USA territories, without requiring customization of data or software infrastructure. This versatility marks a departure from traditional methods that often necessitate arbitrary choices of spatial context and repeated data transformations. Key innovations of our implementation include:
1. A layered standards approach incorporating OGC, ISO and W3C semantic standards (PROV-O, GeoDCAT, OGC APIs, JSON-LD)
2. Implementation of scalable semantic enablement through the OGC Building Blocks framework
3. Comprehensive workflow provenance capture supporting trust establishment and metadata context propagation
4. Integration with Integrity-Provenance-Trust (I-P-T) policy building blocks
The research demonstrates that DGGS technologies can effectively address the challenges of integrating multiple datasets with varying spatial resolutions, formats, and coordinate reference systems. Our findings show that this approach eliminates the need for wholesale resampling and transformation of data while preserving original data integrity. The platform's semantic enablement facilitates highly efficient data discovery, search, and selection across both structural and jurisdictional boundaries. Our results indicate significant improvements in data integration efficiency, reduction in computational processing overhead, and enhanced maintenance of data provenance compared to traditional approaches. We successfully showcase the platform's capability to handle complex spatial queries across the intertidal zone while maintaining data integrity and supporting semantic and AI-driven investigations. This research contributes to the broader field of marine spatial data infrastructure by providing a scalable, sustainable solution for cross-jurisdictional data integration. The findings have significant implications for marine spatial planning, coastal zone management, and environmental monitoring applications. These findings provide a foundation for future implementations of DGGS-based solutions in marine spatial data infrastructure and cross-jurisdictional data integration scenarios. Our work also advances the standardisation of DGGS technologies within the OGC community and provides valuable insights for future developments in marine data infrastructure. This research represents a significant step forward in addressing the complex challenges of marine-terrestrial data integration, offering a robust, scalable solution that maintains data integrity while supporting advanced analytical capabilities.
The findings have immediate practical applications for organisations working across marine and terrestrial domains, while also contributing to the broader development of international standards and best practices in spatial data infrastructure.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Convolution in Discrete Global Grid Systems (DGGS): Example in Healpix

Authors: Justus Magin, Jean-Marc Delouis, Tina Odaka
Affiliations: LOPS - Laboratoire d'Oceanographie Physique et Spatiale UMR 6523 CNRS-IFREMER-IRD-Univ.Brest-IUEM
Motivation: Discrete Global Grid Systems (DGGS) offer a strong spatial reference framework by dividing the Earth's surface into hierarchical grid cells. This structure enables seamless integration of Earth Observation (EO) data on a global scale. DGGS are especially useful for applications like combining multi-source data, conducting global-scale analytics, and enabling real-time processing in cloud environments, all of which depend on spatial consistency. An operation integral to these tasks is convolution, which is pivotal in signal and image processing. Convolution plays a critical role in spatial filtering and feature extraction, and is central to machine learning applications such as convolutional neural networks (CNNs). In EO, CNNs are extensively employed for tasks like object detection, classification, and segmentation. However, applying convolution on DGGS poses challenges. Standard convolution operations, which are effective for traditional image-based data (e.g., 2D regular grids of pixels), face difficulties when adapted to DGGS. Image convolution cannot be directly applied because DGGS data cannot generally be represented as 2D regular grids without introducing discontinuities. HEALpix, a widely used DGGS, illustrates the challenges of applying convolution in this context. Ensuring connectivity between cells and preserving spatial relationships during convolutions is complex. Proposed solutions, such as spherical harmonics for performing convolution in the frequency domain, offer potential pathways but introduce limitations. These include computational overhead, inefficiency for small kernel sizes, and difficulties in supporting localized subdomains. These constraints make processing EO data on a DGGS less straightforward. Developing alternative approaches that balance computational efficiency, spatial accuracy, and adaptability across scales is critical to overcoming these hurdles.
Proposal: To address these challenges, we propose a novel framework for performing convolution directly in the spatial domain on DGGS, demonstrated with HEALpix. This approach represents convolution as a sparse matrix multiplication, with kernel weights computed based on the geospatial relationships between neighboring grid cells. The framework includes:

  • Sparse Convolution Matrix: computation of kernel weights through neighbor analysis in HEALpix, preserving spatial relationships.

  • Efficient Padding for Subdomains: padding operations that enable convolution of HEALpix subdomains without domain shrinkage across successive operations.

This approach eliminates the overhead of frequency-domain transformations and enables accurate, scalable convolution on DGGS. By unlocking new possibilities for EO data processing, the framework supports the development of innovative downstream products and aligns with the demands of next-generation EO missions.
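A minimal sketch of the core idea, convolution as a sparse matrix product over a neighbor graph, might look as follows. A toy ring graph stands in for the real HEALpix neighbor structure, and the weights and function names are illustrative, not those of the authors' framework:

```python
# Toy stand-in: spatial-domain convolution as a sparse matrix-vector product.
def build_sparse_conv(neighbors, center_w=0.5):
    """Rows of a sparse convolution matrix: cell -> [(cell, weight), ...]."""
    rows = {}
    for cell, nbrs in neighbors.items():
        w = (1.0 - center_w) / len(nbrs)  # spread remaining weight evenly
        rows[cell] = [(cell, center_w)] + [(n, w) for n in nbrs]
    return rows

def apply_conv(rows, field):
    # Sparse matrix-vector product: each output cell sums only its neighbors.
    return {cell: sum(w * field[col] for col, w in entries)
            for cell, entries in rows.items()}

# Six cells on a ring, each with two neighbors (periodic padding).
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
field = {i: float(i == 0) for i in range(6)}   # unit impulse at cell 0
smooth = apply_conv(build_sparse_conv(neighbors), field)
print(smooth[0], smooth[1], smooth[3])  # 0.5 0.25 0.0
```

Because each row stores only a cell's few neighbors, the cost scales with the number of cells times the kernel size rather than with a global spherical transform, which is the efficiency argument made above.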
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Advancing indexing systems and tooling on ISEA-based hexagonal DGGS: The new Z3 and Z7 indices in DGGRID

Authors: Kevin Sahr, Michael Jendryke, Wai Tik Chan, Evelyn Uuemaa, Alexander Kmoch
Affiliations: Southern Oregon University, GeoInsight GmbH, Landscape Geoinformatics Lab, Institute of Ecology and Earth Sciences, University of Tartu
Discrete Global Grid Systems (DGGS) are increasingly relevant for improving the way spatial data is managed, analyzed, and visualized. DGGS partition the surface of the Earth into hierarchical, discrete and regularly arranged cells based on the Icosahedral Snyder Equal Area (ISEA) projection, each uniquely identified by a specific index and equal in area. This approach eliminates the common geographic distortions associated with traditional map projections and provides a framework for more efficient and precise spatial data management. Among the tools developed to utilize and explore DGGS, DGGRID stands out as a robust open-source software package specifically designed to generate and manage various DGGS implementations (see e.g. https://discreteglobal.wpengine.com/dggrid-users/). DGGRID supports several DGGS types, each corresponding to different grid orientations and cell division approaches, such as hexagons (H), diamonds/rhombi (D/R), or triangles (T). Prominent examples of hexagonal, equal-area grid systems are ISEA3H, ISEA4H, and ISEA7H, each offering unique characteristics (such as the granularity between resolutions) suited to various geospatial tasks. DGGRID offers various systems for assigning unique addresses to DGGS cells. Most of these are per-resolution addressing systems, where cell addresses are unique only within a single resolution. Integer cell indexes are often optimal for many use cases, and DGGRID provides a default integer index (SEQNUM) that sequentially numbers cells within a resolution. Hierarchical integer indexes assign unique addresses across resolutions, preserve spatial locality in memory, and support hierarchical algorithms. DGGRID has long included the ZORDER hierarchical indexing system for ISEA4H but did not have similar hierarchical systems for indexing ISEA3H or ISEA7H.
During the last year, two new hierarchical indexing systems have been designed and added to DGGRID in collaboration: the ISEA3H grid now employs the new Z3 indexing system, which is optimized for a higher degree of granularity. Here, aperture 3 (i.e. the refinement ratio) means that the area of one parent cell equals the total area of its three children. The Z3 system provides a robust framework to index a hexagonal grid of aperture 3 for the ISEA projection. Analogously, the Z7 hierarchical z-ordering scheme and indexing system has been added for the aperture-7 hexagonal grid ISEA7H. These developments make DGGRID fully compatible with implementations of the OGC DGGS API and with multi-resolution DGGS-based computing in Xarray-XDGGS. DGGRID facilitates the generation and use of these DGGS grids, data binning, and flexible transformation between various DGGS indexing systems, including ZORDER, Z3, Z7, Q2DI, SEQNUM, etc., as well as point and boundary coordinates in traditional geographic coordinate systems. By providing this interoperability, DGGRID ensures that users can transition smoothly between different spatial data representations, thereby enhancing the utility and applicability of DGGS technologies in real-world scenarios. As an open-source tool, DGGRID encourages innovation and collaboration within and beyond the geospatial community. Developers and researchers can contribute to its development, adapting and extending the software to meet emerging needs and integrating new functionalities as required by advancements in geospatial technologies and applications. This collaborative approach helps ensure that DGGRID remains at the cutting edge of DGGS research and application, providing a reliable and adaptable tool for exploring and utilizing these powerful geospatial systems. In conclusion, DGGRID is a valuable tool for anyone involved in the field of DGGS and geospatial analysis.
Its support for multiple DGGS standards and indexing systems, combined with its open-source nature, makes it an excellent tool for both research and practical applications in diverse fields that rely on precise and efficient spatial data management. As DGGS technologies continue to evolve, we believe that tools like DGGRID and its increasingly modular core library dglib will play a pivotal role in shaping the future of geospatial data handling, making them more accessible, understandable, and usable across various scientific, commercial, and governmental sectors. - https://github.com/sahrk/DGGRID - https://github.com/sahrk/DGGRID/releases - https://anaconda.org/conda-forge/dggrid
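The granularity difference between apertures 3, 4 and 7 can be quantified with the standard cell-count relation for icosahedron-based hexagonal grids, where resolution r holds 10·a^r + 2 cells (12 of them pentagons). The snippet below is an illustrative sketch, not DGGRID code; exact counts for DGGRID's grids are given in its documentation:

```python
# Cell counts for icosahedral hexagonal grids: an aperture-a grid at
# resolution r has 10 * a**r + 2 cells (12 of which are pentagons).
def cell_count(aperture, resolution):
    return 10 * aperture ** resolution + 2

for a, name in [(3, "ISEA3H/Z3"), (4, "ISEA4H/ZORDER"), (7, "ISEA7H/Z7")]:
    counts = [cell_count(a, r) for r in range(4)]
    print(f"{name}: {counts}")
# Aperture 3 refines most gently (x3 cells per level) and aperture 7 most
# coarsely (x7 per level) -- the granularity trade-off mentioned above.
```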
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: Advancing Earth Observation Analytics: Leveraging HEALpix, rHEALpix, and DGGS for Spherical Data Representation, Power Spectrum Analysis, and Multiscale Insights

Authors: Lionel Zawadzki, Jean-Marc Delouis, Justus Magin, Tina Odaka
Affiliations: LOPS - Laboratoire d'Oceanographie Physique et Spatiale UMR 6523 CNRS-IFREMER-IRD-Univ.Brest-IUEM, CNES
One common challenge in transforming observational data is converting swaths or points into a grid format. Selecting an appropriate gridding system is crucial, especially for spherical data such as those from Earth observations. HEALpix, established by cosmologists two decades ago, facilitates the creation of equal-area grids with iso-latitude pixels. This presentation explores how the features of HEALpix, or its variant rHEALpix, enable consistent and efficient metric computation, an essential first step in evaluating the observed process. By statistical metrics we mean quantities such as the power spectrum or multiscale measures. It is crucial to understand that performing such statistical computations on a tangent-plane projection can lead to inaccuracies, particularly when comparing high and low latitudes, whereas a grid accounting for the Earth's spherical geometry avoids this issue. We show improved efficiency when using an xDGGS approach that considers data located on a sphere instead of projecting them onto a tangent plane. We devised metrics grounded in statistical analysis to assess the benefits of DGGS grids in characterizing sea surface patterns detected by the SWOT interferometric altimeter. This involves demonstrating the use of the iso-latitude property, initially exploited for spherical harmonics, to enhance either the local power spectrum or a multiscale analysis. Additionally, we introduce the application of scattering covariance, implemented on the HEALpix grid using the FOSCAT library, within the multiscale analysis framework. This comprehensive study on a spherical domain highlights that these novel statistical methods are essential for assessing the non-Gaussian characteristics required for investigating non-linear physical phenomena like turbulence, which cannot be completely described by the power spectrum alone. In addition, we demonstrate techniques for discerning data textures in SWOT datasets.
Moreover, we demonstrate that by storing a single spline knot per grid cell instead of per pixel, the typical parameter count is maintained. We describe how this novel data storage approach accounts for subpixel variations and aids in converting observations into gridded data. This is especially relevant for Earth observation studies, where many processes exhibit a decreasing power-law power spectrum, effectively modeled by spline bases. This technique allows for a reduced cell count compared to traditional representations while retaining information, thanks to the spline bases' ability to manage subpixel data. We illustrate its application in converting the SWOT swath to the DGGS grid and evaluate this interpolation using the previously defined metrics. These capabilities could be incorporated into frameworks like XDGGS for Earth observation data analysis, facilitating extensive spherical analyses.
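The equal-area, iso-latitude construction these methods rely on is easy to verify numerically: HEALpix tiles the sphere with 12 base pixels, each subdivided into nside × nside equal-area cells. A minimal plain-Python check (no healpy dependency):

```python
import math

# HEALpix pixel count and area: 12 base pixels, each split nside x nside,
# so every pixel at a given nside covers exactly 4*pi / npix steradians.
def npix(nside):
    return 12 * nside * nside

def pixel_area_sr(nside):
    return 4 * math.pi / npix(nside)

for nside in (1, 256, 1024):
    side = math.degrees(math.sqrt(pixel_area_sr(nside))) * 60  # arcmin
    print(f"nside={nside}: {npix(nside)} equal-area pixels, "
          f"~{side:.2f} arcmin across")
```

This uniform pixel area is what makes power-spectrum and multiscale statistics directly comparable between high and low latitudes, in contrast to tangent-plane projections.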
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone S)

Poster: The Grid Blueprint: Developing DGGS Infrastructure for Data Storage, Interoperability and AI Integration

Authors: Michael Jendryke, Peter Strobl, João Carlos dos Santos Manuel, Ana Gago Silva, Ernest Fahrland, Gino Caspari
Affiliations: GeoInsight, European Commission Joint Research Centre, Max Planck Institut
The unprecedented growth of geospatial data has transformed how we understand and interact with the Earth. From Earth Observation (EO) satellites to sensor networks, geospatial data has unlocked opportunities for advanced analytics, artificial intelligence (AI), and digital twins. However, the explosion of heterogeneous datasets poses significant challenges in storage, interoperability, and accessibility. Digital Earth infrastructures must now go beyond traditional geospatial tools and methods to truly integrate and harness this data. This is where Discrete Global Grid Systems (DGGS) come in. A DGGS provides a standard, hierarchical representation of the Earth's surface through tessellation into discrete grid cells of various, hierarchically ordered sizes, making it inherently scalable, seamless, and resolution-independent. This methodology addresses challenges that data cubes based on classical ‘flat’ grids – currently one of the most popular methods for geospatial data storage and analysis – cannot fully overcome. DGGS infrastructure effectively manages structured and unstructured EO and other geospatial data, standardizing diverse sources into analysis-ready datasets and enabling seamless interoperability across platforms. This abstract outlines the role of DGGS as a unifying data infrastructure that meets the growing demands of geospatial storage, AI, and interoperability. As the foundation of modern geospatial infrastructure, DGGS excels at reducing storage requirements through direct tessellation of the ellipsoid without overlap. DGGS tessellation, e.g. with hexagonal grids, is 13% more compact than square grids (pixels), and overlap is reduced by up to 33% compared to UTM grids, according to an ESA feasibility study.
By representing geospatial data in discrete cell sizes, DGGS can efficiently encode multi-resolution datasets and optimize storage footprints compared to traditional raster or vector formats; each cell can be addressed and queried by an ID through a very powerful single-index look-up. Additionally, the hierarchical nature of DGGS inherently supports multi-scale analysis, making it an ideal fit for AI applications that require efficient, preprocessed, and standardized data inputs. DGGS as a superset of data cubes: While data cubes specialize in structured data workflows, such as time-series analysis of EO imagery, DGGS accommodates the diversity of modern geospatial datasets. DGGS grids can integrate EO imagery, point clouds, vector data, and non-geospatial datasets, breaking silos and enabling truly interoperable geospatial systems. Unlike data cubes, which often require predefined spatial and temporal resolutions, DGGS can dynamically adapt to varying resolutions and coverage, enabling more flexible and efficient analyses. This adaptability supports the integration of heterogeneous datasets from EO satellites, UAVs, ground-based sensors, and other sources, creating a unified and standardized data structure. Interoperability and the Future of AI: One of the most pressing challenges in the geospatial domain is interoperability – both in terms of datasets and workflows. The upcoming Open Geospatial Consortium (OGC) DGGS API standard will provide a global framework for interoperable DGGS implementation, enabling seamless integration of geospatial data across platforms. This development is particularly relevant for AI applications that require large-scale, standardized, and well-structured datasets. By leveraging DGGS, AI models can focus on extracting actionable insights rather than dealing with the complexities of data preprocessing.
Additionally, DGGS enables cross-domain applications, such as integrating EO data with socioeconomic or environmental datasets, empowering AI-driven insights for policy, business, and environmental decision-making. In this study GeoInsight (an ESA BIC incubatee), in collaboration with Airbus, explores DGGS as the foundational infrastructure for the next generation of geospatial data solutions. By leveraging DGGS, we aim to address the critical bottlenecks in geospatial data storage and analysis, ensuring scalability, efficiency, and interoperability. We showcase digital elevation models, population data, Points of Interest and environmental data transformed into a hexagonal DGGS data structure covering Areas of Interest in Europe. As key stakeholders in the geospatial domain, Airbus and ESA’s expertise in EO data acquisition and analysis provides an invaluable opportunity to advance DGGS-based infrastructures. Demonstrating DGGS through pilot projects and use cases will illustrate its transformative potential in addressing the challenges of today’s data-rich but insight-poor geospatial landscape. In conclusion, DGGS is the Holy Grail of digital geography, promising a unified, standardized, and scalable approach to managing geospatial data. As a superset of data cubes, DGGS integrates heterogeneous datasets, reduces storage requirements, enhances interoperability, and optimizes AI workflows. By combining the expertise of GeoInsight and Airbus, we are paving the way for the next evolution of geospatial infrastructure, bridging the gap between data acquisition and actionable intelligence. This vision aligns with the goals of the ESA initiative on DGGS, emphasizing innovation, collaboration, and the practical applications of cutting-edge methodologies in Earth Observation.
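The compactness claim above can be illustrated with a simple isoperimetric comparison. Note the 13% figure in the abstract refers to sampling compactness reported in the cited ESA feasibility study; the sketch below shows only the related perimeter-per-area advantage of hexagons, a smaller but kindred effect:

```python
import math

# For the same cell area, a regular hexagon has a shorter perimeter than a
# square (it is closer to the ideal circle), so hexagonal cells tile more
# compactly and have more uniform center-to-neighbor distances.
def perimeter_for_area(shape, area):
    if shape == "square":
        return 4 * math.sqrt(area)
    if shape == "hexagon":              # area = (3*sqrt(3)/2) * s^2
        s = math.sqrt(2 * area / (3 * math.sqrt(3)))
        return 6 * s
    raise ValueError(shape)

sq = perimeter_for_area("square", 1.0)
hx = perimeter_for_area("hexagon", 1.0)
print(f"square: {sq:.3f}, hexagon: {hx:.3f}, "
      f"hexagon perimeter is {100 * (1 - hx / sq):.1f}% shorter")
```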
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: F.01.02 - POSTER - Raising awareness on climate change and environmental degradation using effective communication and education

The world today confronts an unprecedented need for effective communication and education to tackle the planetary crisis, particularly climate change and environmental degradation. The disinformation surrounding climate change takes the form of misleading claims that downplay the severity of the issue and question the integrity of the scientific consensus. This spread of false information can confuse the public, create unwarranted scepticism, and stall political momentum necessary for addressing the crisis. To address this issue, people from multidisciplinary backgrounds must unite to promote accurate, evidence-based information and engage in transparent discussions. By translating complex scientific data into accessible, relatable information, this effort can begin to bridge the gap between the science and the public, to empower individuals, influence policy decisions, and drive collective action by highlighting the urgency of the issues and offering clear pathways for change.

Previous efforts at climate communication have often failed to reach the masses, relying heavily on context-free facts, statistics, and complex science, making it inaccessible and less relatable to the average person. In this context, an average person retains only 5-10% of information if it's purely statistical, but retention soars to 65-70% when information is conveyed through storytelling. This underscores the transformative power of storytelling in effective climate communication, capable of shifting culture, touching hearts, and changing minds. We respond more profoundly to stories that resonate with us on a personal level, linking individual experiences to global challenges and thus rendering the abstract and often distant phenomena of climate change more tangible and immediate. The Earth Observation (EO) community is uniquely positioned in this sense, not only because of the breathtaking visuals of our planet and the excitement of satellite launches but also due to the scope of its measurements, which span global, regional, and local scales.

We invite climate and social scientists, engineers, artists, journalists, communicators, storytellers, activists, and policymakers to submit multidisciplinary abstracts for this session that:

  • Showcase best practices, case studies and demonstrations of storytelling that use Earth Observation (EO) measurements, data, visualisations, and applications.

This session aims to nurture, support, and expand a community both inside and outside of Earth Observation, committed to science storytelling. It also seeks to address the severe lack of funding in projects by potentially introducing dedicated communication and storytelling work packages.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: An Innovative Master Seminar on Water in North America: Remote Sensing and Literary Studies in Dialog

Authors: Caroline Rosenthal, Christiane Schmullius
Affiliations: University Jena
Water is not only a basic material resource but also has numerous symbolic, narrative, and mythological functions in North American culture. Freshwater is a contested, dwindling resource that is referred to in current discourses as “the new gold”. The great challenges of our time with regard to water pollution, droughts, floods, and melting ice can only be solved in a dialog between the natural sciences and the humanities. This poster presents an innovative seminar that aims to contribute to the water topic and to provide students with a transdisciplinary view of various bodies of water and reservoirs, using different methods and approaches. The seminar is aimed at Master students of remote sensing as well as Master students and teacher candidates of English/American studies. The seminar topics focus on clouds and rain, glaciers and ice, rivers and lakes, and wetlands and soil moisture. The students learn about remote sensing techniques for these various water-related monitoring tasks as well as about literary-theoretical approaches (such as the Blue Humanities). This transdisciplinary concept requires the EO students to present their technical knowledge in a manner understandable to the literary students and, vice versa, asks that theoretical concepts be translated into a technically accessible, transferable language. In addition to short lectures by students from both disciplines on the respective water themes, we will analyze how these themes are represented in exemplary literary texts, because narratives are needed to make water resources understandable. The core of this interdisciplinary seminar is a joint excursion to the ESA Living Planet Symposium 2025, where participants must jointly write a response to a conference presentation of their choice. This response should combine remote sensing and cultural-science perspectives and can take the form of a poster, text, comic, etc.
Coursework in both subjects consists of presentations and the joint response to a talk at the ESA Living Planet Symposium.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: SARflix Movies of our changing planet: Opening Eyes with Sentinel-1 SAR time-series animations

Authors: Josef Kellndorfer, Leandra Sethares, Oliver Cartus, Marcelo Villa
Affiliations: Earth Big Data, Gamma Remote Sensing AG, Quansight
For more than a decade, the Sentinel-1 Synthetic Aperture Radar (SAR) satellites have collected global Earth observation data made freely available by the European Space Agency. A key feature of SAR is its active remote sensing technology, transmitting and receiving electromagnetic waves in the microwave spectrum, which in the case of the Sentinel-1 satellites is at 5.6 cm wavelength (C-band). At SAR wavelengths, the signal experiences almost no attenuation from atmospheric conditions, except for extremely dense frontal rain clouds. This affords near all-time, all-season, day-and-night observations under constant imaging conditions. Thus, the measurement of energy scattered back to the sensor (“the SAR backscatter”) is foremost a function of the scattering characteristics of the ground, with main sensitivities to the moisture and structural composition of the SAR-illuminated areas. With the consistency of repeat-pass observations at 6- to 12-day intervals and a high spatial resolution of 10-20 m, ESA's Sentinel-1 satellite constellation is the first mission of its kind to provide open access to planetary-scale monitoring capability at combined high spatial AND temporal resolution. In orbit since 2014, the mission gives the remote sensing community access to hundreds of repeat observations offering unprecedented visual insight into global change, from the effects of climate change and other human-induced environmental degradation to mitigation efforts. This talk introduces SARflix, a publicly accessible website (sarflix.earthbigdata.com) where time series animations of Sentinel-1 data frames are collected over numerous areas across Earth and can be visualized online. New image frames can be ordered and movies generated within minutes, harnessing cloud-based SAR data processing with precise geometric and radiometric calibration of individual image scenes into high-quality time series image stacks.
The animation and conversion of these data stacks into standard MPEG-4 SARflix movies is a powerful communication and educational tool that can showcase nearly any spot on the planet and its decadal change. During the talk we will showcase several powerful SARflix examples: (1) the collapse of the Brunt Ice Shelf in Antarctica, with major breakups occurring in 2021, 2023, and 2024; (2) the recession of the Thwaites Glacier in Antarctica, with observations of growing cavities; (3) the damming of the Nile river since 2020 at the Grand Ethiopian Renaissance Dam and the creation of one of the largest new storage reservoirs in Ethiopia; (4) the rapid outflow and drainage of the Kakhovka reservoir after the collapse of the Kakhovka dam in Ukraine in June 2023; and (5) monitoring of the rapid destruction of primary tropical rain forest in the Madre de Dios region of Peru by illegal gold mining, which leaves behind a devastated landscape and poisoned river waters. The talk will also include a brief introduction to requesting the generation of movies that have not yet been produced but are of user interest.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Amplifying Awareness and Engaging Audiences: Communicating Climate Change and Environmental Challenges Through the Down to Earth Podcast by the IEEE GRSS

Authors: Stephanie Tumampos, Dr. Keely Roth, Nicole Bedford, Mailyne Briggs
Affiliations: Technical University of Munich, Planet, IEEE Geoscience and Remote Sensing Society, Nicole Bedford Films, Kilam Media Inc.
Down to Earth: A Podcast for Geoscientists by Geoscientists (DTE), produced by the IEEE Geoscience and Remote Sensing Society (GRSS), serves as a platform to disseminate knowledge and foster dialogue, addressing critical topics in geoscience and Earth observation while exemplifying the power of audio storytelling in communicating science concepts. Conceived in 2020 with its first season debuting in 2021, the podcast began by highlighting women geoscientists, their work, and their mentorship of others. The podcast has since grown into a prominent voice in the geoscience and remote sensing community. It offers engaging conversations that appeal both to technical experts and to a broader audience interested in the topics. Across six seasons to date, the podcast has tackled a diverse range of themes, with two seasons specifically focused on climate change. Other seasons have explored climate-adjacent subjects including open science, real-world geoscience applications, tools and methods, and artificial intelligence (AI). In this podcast, complex scientific ideas and topics are explored in an accessible and engaging manner. Through the audio storytelling format, the podcast “visualizes” jargon, terminologies and concepts through the conversational exchange between the host and guest speakers. This visualization gives listeners the opportunity to grasp intricate topics, as discussions are driven by probing questions and well-articulated responses from the guest speakers, who are often leading scientists and/or experts in their fields. Through this, the podcast ensures that each episode maintains a good balance between scientific rigor and approachability. This creates an appeal to audiences ranging from students and early-career researchers to professionals, enthusiasts and the greater public. 
The ability to highlight and disseminate climate change information is further enhanced by marketing materials designed not only to increase listenership, but to share important scientific information through the combination of compelling, eye-catching visuals and clips about specific scientific concepts pulled from guest interviews. For example, in season 3, listeners begin most episodes with a guest description and binaural soundscape of the environment that will be explored. The episode then reveals the climate challenge plaguing the chosen environment, and the guest then dives into the ways they and their fellow scientists are using remote sensing to address this challenge. Complementing these episodes, the marketing graphics used bold, graphic-novel-style imagery of dystopian-looking versions of the environments discussed in the episodes. The graphics also centered the scientists as symbols of positive change amidst these environments, evoking hope. When combined with the podcast, the images convey both the urgency of climate challenges and the hope and scientific tenacity required to tackle them. In season 5, we further enhanced the visuals by working from the marketing premise that environmental imagery captures more social media user engagement. Using clips from the episodes, we built video reels that showcased gorgeous climate imagery such as luscious forests, ice-filled seascapes, and mangrove biodiversity while conveying core scientific information on greenhouse gases, ship navigation, and carbon capture statistics. These reels used clips from the episodes, which allowed the content to stand on its own, while also encouraging listeners to learn more through the full episodes. The aim of the DTE podcast, aside from ensuring that it aligns with the organization’s mission to promote the advancement and dissemination of knowledge and techniques, is to foster a sense of community among those in the geoscience and remote sensing field. 
By featuring a diverse array of voices from different regions, backgrounds and areas of expertise, the podcast underscores the inclusive nature of the field and encourages interdisciplinary collaboration and innovation to address climate change as well as some of the world’s most pressing challenges.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Empowering Arctic Communities Towards Marine Pollution-Control Governance: Role of Machine Learning and Citizen Science

Authors: Victor Lion, Arnab Muhuri, Natascha Oppelt, Apostolos Papakonstantinou, Christine Liang, Barbara Jóźwiak, Adam Nawrot, Élise Lépy, Thora Herrmann
Affiliations: Kiel University, Department of Geography, SciDrones PC, Greece & Cyprus University of Technology, Helmholtz-Centre for Environmental Research - UFZ, forScience Foundation, University of Oulu, Faculty of Humanities
The Arctic is facing unprecedented threats from climate change and pollution from different anthropogenic sources. The ICEBERG project (Innovative Community Engagement for Building Effective Resilience and Arctic Ocean Pollution-Control Governance in the Context of Climate Change) is a multidisciplinary initiative funded by the European Union, which aims to assess types, sources, distributions, and impacts of pollution on both ecosystems and coastal communities across the European Arctic. Focusing on subregional case studies in West Svalbard, South Greenland, and North Iceland, ICEBERG is developing community-led strategies to enhance resilience and control pollution. The project addresses pollution sources, including macro-, micro-, and nanoplastics, ship emissions, sewage, persistent organic pollutants, and heavy metals, affecting land, sea, and air. Our team from the Earth Observation and Modelling (EOM) group at Kiel University (CAU) has a pivotal role in the project. One of our primary objectives is to implement a time-lapse camera network to monitor marine litter along Arctic beaches. Using data from this network, we aim to develop machine learning algorithms that will automate the detection and classification of beach litter, improving our understanding of the types, sizes, and seasonal dynamics of accumulated litter. The images will contribute to an interactive data-sharing platform, which local residents in all three study sites also use to monitor environmental pollution. In the initial phase of ICEBERG, we installed one camera system at an uninhabited site in West Svalbard. This camera aims to provide data on the accumulation of marine litter over a period of one year, remaining on site without maintenance throughout the polar winter. 
This site is therefore considered separately from Iceland and Greenland, where consultation meetings with local communities took place in a later phase of ICEBERG in order to introduce the project and jointly explore opportunities for citizen science collaboration. By incorporating a citizen science approach, we are actively collaborating with local and Indigenous stakeholders and organizations in Iceland and Greenland who assist in the installation and maintenance of cameras. Additionally, the ICEBERG project aims to map and monitor the spatiotemporal trends of marine litter in the specified areas by utilizing the high temporal mapping capabilities of small drones along with machine learning algorithms. The combination of camera-based results and drone-based mapping will create a comprehensive and advanced approach for mapping at various spatial and temporal scales. Through partnerships with high school teachers and students, we are also engaging young people to increase awareness of the ongoing pollution challenges and explore actionable measures for mitigation and adaptation, allowing communities to actively contribute to the process of identifying pollution sources, monitoring coastal litter, and developing meaningful interventions. The consultation process in Iceland and Greenland culminated in the installation of five time-lapse cameras at selected coastal locations in Iceland, with additional installations planned also in Greenland. Additionally, first drone surveys and training sessions were conducted in Iceland and Greenland, respectively. In the long term, ICEBERG will implement this citizen science model to promote environmental awareness and build local capacity in pollution monitoring. Workshops with teachers and students in Iceland and Greenland are integral to this goal. We will showcase our innovative approach to Arctic pollution monitoring, focusing on the critical role of community engagement and co-created solutions. 
By developing technology-assisted tools and fostering local collaborations, ICEBERG exemplifies a sustainable, inclusive approach to addressing environmental challenges in fragile Arctic ecosystems. We will highlight the project’s focus on leveraging citizen science for Arctic resilience and governance, showcase the preliminary time-lapse time series from Svalbard, and discuss the benefits and challenges of community engagement in Arctic environmental monitoring. We aim to foster a collaborative framework that encourages international cooperation and drives effective, scalable pollution-control solutions for climate-sensitive Arctic regions. Through this symposium, we hope to highlight the potential of citizen science in tackling large-scale environmental issues and emphasize the importance of cooperative, multi-stakeholder frameworks for effective governance in the context of climate change.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Beyond Gendered Stereotypes: Combining Technical Rigor and Emotional Engagement in Climate Action

Authors: Elise Colin
Affiliations: Esrin, ONERA, DTIS
In the context of the fight against climate change, effective communication and education play a central role. These challenges require a combination of two essential dimensions: scientific and technical rigor, which provides a solid understanding of the phenomena, and emotional sensitivity or "care," which generates the attention and engagement needed to take action. However, these two components are often treated as irreconcilable opposites—not only in how they are taught or communicated but also in how they are perceived through gendered lenses. In practice, stereotypes persist: men often feel more confident in adopting a scientific and technical viewpoint, while women, frequently less assured in these areas, develop a strong sense of care and emotional commitment. However, attributing these qualities to specific genders limits our ability to make the most of their potential. I believe it is essential to move beyond this gendered vision to better integrate rigor and care, reconcile them, and fully embrace their enriching synergy. With 20 years of experience as a researcher, teacher, and communicator, I will share insights on connecting these two dimensions, drawing on several personal initiatives. • In my teaching: At the French engineering school CentraleSupélec, I have implemented innovative pedagogical formats where students present their scientific work by integrating an artistic dimension of their choice, such as music, poetry, cinema, or theater. This approach bridges the demands of scientific rigor with creative openness, allowing students to express their sensitivity while approaching science through a humanistic lens. The outcomes often exceed my expectations, with student projects demonstrating solid technical understanding, the ability to connect with their audience on an emotional level, and a much higher level of engagement and personal investment. 
• As a communicator: Through events such as my TedX talk and art-science exhibitions, I have explored how storytelling and visual metaphors can transform complex scientific concepts into accessible and engaging narratives. These initiatives show that linking abstract scientific data to emotional stakes captivates broader audiences and fosters deeper, more lasting responses. • Through writing: My latest book presents an original fictional narrative that intertwines scientific observations with reflections on humanity. The story is a correspondence between an astronaut orbiting Earth and her sister, a researcher specializing in radar imaging. The astronaut shares her observations of the planet's beauty and fragility. At the same time, her sister delves into her scientific research, including the subtleties of radar waves and the personal challenges she faces in a demanding field. This fictional dialogue offers a dual perspective: rigorous scientific exploration enriched by emotional and human considerations. It demonstrates how storytelling and sensitivity can render the abstract tangible, the technical accessible, and the global personal. These experiences highlight the importance of not opposing rigor and care but integrating them harmoniously. This integration is essential to overcoming the climate crisis and mobilizing audiences inclusively and sustainably. Moving beyond the gendered labeling of these qualities is a critical step: it is not about denying that biases exist but recognizing that these dimensions are not exclusive to any gender and can mutually enrich each other. Through this presentation, I aim to engage with the community on innovative practices that connect science and sensitivity. I aim to discuss how storytelling, interdisciplinary collaboration, and inclusive education can transform how we communicate about Earth observation and climate change. 
By adopting a de-gendered and integrated approach, we can better employ the complementary strengths of rigor and care to inspire meaningful collective action.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: EDUKEO – A scrollytelling approach to showcase the value of EO data across diverse Earth science domains

Authors: Isadora Jiménez, Christel Michel, Clara Costa, Mariangela Cataldo, Connor Henney, Francesco Sarti, Laia Romero
Affiliations: Lobelia Earth, ESA
The accessibility of scientific knowledge to the general public is becoming increasingly important, as understanding contemporary scientific issues is essential for informed citizenship, political awareness, and safeguarding against misinformation. EDUKEO is an online platform aiming to support ESA in maximizing the outreach of its scientific results beyond the scientific community. The target audience is the general public, with a special focus on specialized media, policy- and decision-makers, as well as young generations and students. This initiative is part of the EO Science for Society programme element within the FutureEO programme. The platform serves as an educational tool, allowing users to explore various topics through the scrollytelling methodology, which provides an interactive and engaging experience. The content is structured into web-based stories that highlight the value of Earth Observation data across diverse Earth Science domains, including oceans, land, ice, atmosphere, and solid earth. These stories are categorized into three levels of complexity, ranging from basic to advanced, based on the depth and engagement required for each subject. The most advanced stories feature multiple datasets, sophisticated visuals, animations, and interactions, incorporating Lobelia's Globe Story Engine and visual materials crafted by Planetary Vision, framing them within the broader context of climate change and societal impact. This approach helps readers grasp the significance of this scientific research and its impact on our society. At its current stage, the project has successfully implemented its first story: Mysteries of Antarctica: The Future of Virtual Exploration. This advanced-level story highlights the importance of the Digital Twin Antarctica, developed in collaboration with the University of Edinburgh, showcasing the value of digital twin technology for exploring and understanding this remote region, which is particularly susceptible to climate change. 
Upcoming stories will delve into diverse topics, including the Digital Twin Hydrology, food security and crop mapping, coastal erosion, and solid earth processes. Each story offers a fresh perspective on scientific research, highlighting how ESA's technological tools enhance our preparedness and resilience in addressing the challenges of the 21st century.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Groundbreaking Science Discoveries and Successes enabled by ESA Earth Observation Satellites

Authors: Dr. Maurice Borgeaud, Prof. Jonathan Bamber, Prof. Anny Cazenave, Prof. Yann Kerr, Prof. Michaela Hegglin, Dr. Marta Marcos, Dr. Christian Massari, Prof. Johanna Tamminen, Prof. Chris Rapley, Dr. Jonas L’Haridon, Ms Courtney Allison, Dr. Emmanouil Detsis, Mrs Maria Piera Padula, Mr. Henri
Affiliations:
In preparation for the 2025 ESA Council at Ministerial Level (CM-25), and in order to make the science benefits of studying the Earth by satellite more visible, examples of “groundbreaking” science enabled by ESA Earth Observation (EO) satellites have been assembled in an attractive and easily readable brochure. Based on the mission of the European Space Sciences Committee (ESSC), to give European space scientists an independent voice in the space arena (https://www.essc.esf.org/), such a document provides a powerful means for the European science community to make the case for uplifting science funding across the ESA programme, particularly for the Earth Observation programme. This plea was already delivered in the ESSC statement made at the ESA 2023 Space Summit (https://www.essc.esf.org/esa-space-summit-2023-statement/), during which it was well received. The target audience of the document is high-level decision-makers, and it is written in simple words understandable by the public. Taking note that ESA is already very active in communicating EO results (https://www.esa.int/Applications/Observing_the_Earth), the aim is to produce a simple and easy-to-understand document that can convincingly demonstrate the huge science and societal benefits brought by ESA EO satellites. The document was prepared by the Earth Science Panel of the European Space Sciences Committee with ESA support and provides 12 examples clearly identifying the discoveries enabled by EO satellites. Most examples are based on ESA missions (ERS-1, ERS-2, ENVISAT, the Earth Explorers) and the European Commission's Copernicus programme (the Sentinels), but other sources of data, from European national missions and NASA, are also used. The 12 examples contained in the brochure are divided into 3 cases for each of the 4 main thematic domains of the Earth sciences: atmosphere, ocean, land, and polar regions. 
The criteria used to define a “groundbreaking science discovery” include a clear “elevator pitch”, the degree of astonishment, and the jump in knowledge that the scientific discovery produced. Moreover, successes associated with the benefits of Earth observation for societal challenges (food, water, energy, climate) were also considered. From an initial list containing more than 120 examples, the Earth Science Panel reduced the list to the 12 prime examples using the above-mentioned criteria, so as to obtain a collection of striking discoveries enabled by Earth observation satellites across the four thematic domains retained (e.g. sea level rise, land deformation, Antarctic and Greenland mass balance, and worldwide global daily air pollution). The presentation will describe how the document was conceived, the selection process used to arrive at the 12 examples, and the satellite data used. Special attention will also be given to the process of converting scientific results published in highly ranked journals into the easily understandable text and graphics that make up the core of the document. The approach proposed for this document is based on simple explanations describing science breakthroughs and illustrating key societal challenges, together with powerful graphics generated in collaboration between the scientists and a team of professional graphic designers. This new perspective could act as a template for future promotion of space agency scientific excellence and value.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: CCI Knowledge Exchange Training and Competitions

Authors: Ravi Kapur
Affiliations: Imperative Space
The ESA Climate Change Initiative Knowledge Exchange activity provides a comprehensive range of communications, data access, educational and training resources and services to support diverse audiences and end-users. As part of this, a new set of structured Training activities has been initiated, covering a wide range of ECVs and CCI data. This session will provide an introduction to the types of training available, the underpinning aims and methodologies, and the forthcoming training and education opportunities for a wide spectrum of age ranges and levels of prior knowledge. The CCI Training activities will include: In-person Seminars: these sessions will be run at public events (including LPS) with a focus on supporting subject specialists, early career scientists and others with a good level of technical awareness and prior experience of working with related data. Online Training Sessions: these synchronous training sessions will create an opportunity for wider audiences to engage with the CCI products and data, with sessions and Jupyter notebooks archived for future access. Competitions: a range of far-reaching and ambitious competitions will connect with wider educational audiences. The session will also create a forum to gather feedback on additional training needs and interests.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Diversifying the talent pipeline - engaging the next generation of earth observation specialists

Authors: Heather Selley, Leam Howe, Rebecca Wilks, Emily Dowd, Bryony Freer, Calum Hoad, Morag Fotheringham, Samuel Bancroft, Hannah Picton, Anna Hogg, Prof Kathy
Affiliations: University Of Leeds
Earth Observation (EO) and space science hold sparkling appeal, but recruiting the next generation and illuminating the pathways into the world of EO remain major challenges. The Centre for Doctoral Training in Satellite Data in Environmental Science (SENSE CDT) has won several grants over the past 5 years, including from the UK’s Natural Environment Research Council (NERC), aimed at diversifying the talent pipeline. Gender, race, disability, nationality, ethnicity, career stage and socioeconomic background can all hinder participation and engagement in climate and EO science. By sharing various initiatives, often spearheaded by our students, we hope to offer insights into developing effective outreach, communication, and engagement. Clear communication and strategic planning are key to engaging the next, diverse generation of climate scientists and activists. We will present our work mapping the multitude of pathways from school to becoming an EO specialist working in the environmental sector, and the insights this holds for the wider field of EO, within both industry and academia. Further, we discuss the key challenges illuminated through this work, such as how to effectively track impact in this space, how best to engage students in a field that has historically lacked diversity, and how to create longevity from limited resources. Finally, we present several success stories, including insights from the SatSchool outreach initiative.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Communicating the Unimaginable: Uncertainty, Storytelling, and Climate Tipping Points

Authors: Kuat Abeshev
Affiliations: Technical University Of Munich
Climate tipping points—critical thresholds in Earth’s systems that, once crossed, lead to irreversible changes—are among the most complex phenomena to communicate. These include the Amazon rainforest, which risks transforming into a savannah; the Greenland and Antarctic ice sheets, whose melting could drive sea levels catastrophically higher; and the Atlantic Meridional Overturning Circulation (AMOC), the ocean current system that regulates global weather patterns, which could collapse altogether. These systems operate on timescales and at distances that make them feel remote and abstract to the general public. Yet their failure would bring profound consequences to ecosystems, economies, and daily life everywhere. This disconnect underscores the complexity of the task. Tipping points challenge not only public understanding but also our collective imagination and emotional capacity to confront their implications. How do we communicate these hyperobjects—Timothy Morton’s term for phenomena so vast in time and space that they defy direct perception? And how do we convey their urgency while avoiding disempowering audiences with catastrophic narratives or overwhelming detail? Storytelling offers a way forward. Humans are wired for stories; they help us process complexity, connect emotionally, and make meaning. Podcasts, as a narrative medium, provide an intimate platform to bridge the gap between abstract science and personal relevance. This proposed podcast explores how storytelling can make climate tipping points relatable and engaging without oversimplifying their complexity. The podcast will address several critical challenges. For instance, uncertainty is often viewed as a failure of knowledge rather than an essential part of science. Tipping points are inherently unpredictable, and that uncertainty can foster paralysis. 
However, uncertainty can also be a call to caution and action, as we routinely act in the face of uncertainty in our daily lives—whether it’s trusting a weather forecast, taking precautions against illness, or navigating personal relationships. The podcast aims to reframe uncertainty as a crucial aspect of responsible decision-making, helping listeners understand that not knowing everything doesn’t mean knowing nothing. The podcast will also delve into the difficulty of connecting tipping points to lived experience. Systems like the polar ice sheets or the AMOC may feel distant and intangible, but they are deeply linked to human well-being—from rising seas flooding coastal cities to droughts and extreme weather reshaping agriculture and water security. By anchoring these vast systems in stories, the podcast aims to make their relevance clear and their stakes immediate. Moreover, the podcast will prioritize inclusivity, amplifying voices from the Global South and marginalized communities—those already experiencing the impacts of climate change. These narratives provide both warnings and inspiration, showing how people and ecosystems are responding, resisting, and adapting. Through these stories, listeners will not only gain insight into the uneven realities of climate impacts but also develop a sense of solidarity and collective responsibility. Finally, the podcast recognizes the emotional burden of engaging with this subject. Confronting tipping points can evoke grief, fear, and a sense of helplessness. Yet these emotions also reflect profound care for the planet and its future. Drawing on thinkers like Joanna Macy (Active Hope), the podcast will approach these feelings not as barriers but as conversation openers and opportunities for growth, resilience, and action. By blending science, storytelling, and emotional resonance, this podcast offers an innovative model for tackling one of the greatest communication challenges of our time. 
It aims to empower listeners with knowledge, connect them emotionally to the stakes, and inspire collective action. As storytelling has shaped human understanding for millennia, it may also hold the key to navigating the complexity of our future.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Leveraging Lunar and Cis-Lunar Remote Sensing for Climate Change Awareness: Engaging STEM Education Through Storytelling

Authors: Roman Hiby, Claudia Lindner, Jun.-Prof. Dr. Andreas Rienow
Affiliations: Ruhr-University Bochum
In an era of escalating planetary crises, fostering awareness of climate change and environmental degradation demands innovative communication strategies that bridge scientific complexity and public understanding. Our project harnesses the interdisciplinary power of remote sensing—combining Earth Observation with lunar and cis-lunar data—to inspire and educate the next generation, whilst addressing the growing need for public education about the advantages and potential of remote sensing in our daily lives. This approach integrates storytelling with advanced tools such as augmented reality (AR) and interactive worksheets, creating an immersive experience for pupils and educators alike. By linking lunar exploration to pressing Earth issues, we translate remote sensing data into relatable narratives, tapping into the universal fascination with space exploration. For instance, the "Overview Effect", often experienced by astronauts, evokes a profound sense of global interconnectedness and environmental responsibility and its recreation in a classroom setting is one of the goals of the resources. Our project capitalises on the current global interest in lunar exploration to introduce remote sensing principles and environmental awareness into schools. By using data from missions like NASA's Lunar Reconnaissance Orbiter and the Deep Space Climate Observatory (DSCOVR), we create a compelling narrative linking lunar exploration to Earth Observation and climate science. The talk will present four educational modules, rooted in STEM principles and linked to AR, guiding pupils through lunar and cis-lunar topics. The first module explores theories about the Moon's formation and its geological composition in comparison to Earth using maps derived from lunar satellite imagery and in-situ data. 
This is followed by a module that raises awareness of location factors for a permanent, self-sustaining Moon base by having pupils analyse its most important parameters in comparison to Earth. The third module teaches pupils about the effects and impacts of gravity in the Earth-Moon system over the last 4.5 billion years and into the future. Additionally, a fourth module focuses on the Earth-Moon system by utilising data from NASA's DSCOVR, analysing both celestial bodies from the Sun-Earth Lagrange Point L1 to provide a continuous global sunlit perspective of Earth. By leveraging pupils' natural curiosity about space, this initiative builds a bridge between abstract climate science and tangible, relatable storytelling. Our presentation highlights the transformative potential of remote sensing as a tool for both fostering methodical competencies and climate communication. It will explain and emphasise the relevance of lunar and cis-lunar topics with the help of satellite-based observation for teachers, pupils and education in general with regard to climate change on Earth and to the interconnectivity of both celestial bodies.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Empowering Secondary Education with Earth Observation: Teaching Climate Change and Disaster Management through Satellite Data

Authors: Henryk Hodam, Jun.-Prof. Dr. Andreas Rienow, Claudia Lindner, Fabian Meyer-Heß, Heli Lätt, Lars Tum, Liisi Jacobson
Affiliations: Ruhr-Universität Bochum, Tartu Ülikool
Schools have a unique opportunity to leverage the power of Earth observation data to enhance environmental education and foster critical thinking skills. The Copernicus Program, one of the world’s leading Earth observation initiatives, provides satellite imagery offering insights into the atmosphere, oceans, land, and climate systems. These data open up new possibilities for integrating real-world scientific applications into classroom learning, enabling students to better understand and address global challenges like climate change and disaster management. Two initiatives, the CDEC project conducted by Ruhr University Bochum and the Copernicus4Schools project, a collaboration between Tartu University and Ruhr University Bochum, are dedicated to harnessing Earth observation data to empower schools with free, multilingual educational resources available in German, English, Estonian, and French. Designed for students aged 14 and over, the materials provide background knowledge through learning videos, text-based tutorials, and illustrations, which are then applied in hands-on exercises using a variety of platforms, including Augmented Reality apps, WebGIS applications, and Jupyter Notebooks. These resources enable students and teachers to explore real-world applications of Earth observation, addressing topics such as atmospheric monitoring, sustainable urban development, climate change, and disaster management. The teaching materials provide opportunities to engage with topics such as radar data analysis for surface monitoring, Sentinel-3’s thermal imaging for measuring Earth’s temperature, and the identification of climate risks like heatwaves and wildfires as well as the surveillance of water bodies and methane emissions. 
These resources not only foster critical thinking and scientific inquiry but also highlight the importance of satellite data in addressing pressing environmental challenges, which are increasingly evident in the daily lives of students as events like severe floods and wildfires become more frequent. This presentation will showcase selected learning materials for geography lessons, emphasizing their potential to enrich classroom activities. Furthermore, it will explore how schools can integrate these resources in an interdisciplinary manner with interactive tools and Jupyter Notebooks in physics and computer science education.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: F.04.11 - POSTER - Earth Observation for Environmental Compliance: Enhancing Monitoring, Guidance, and Enforcement

Environmental crime is a rapidly growing global issue, increasing by 5-7% annually. The broad spectrum of environmental crime includes illegal waste management, logging, water abstraction, pollution, habitat destruction, and wildlife trafficking.

As environmental crime often involves transnational criminal organizations, international cooperation is needed to dismantle the networks that perpetrate it. The European Union's new environmental crime directive aims to bolster criminal law enforcement against the most severe environmental offenses, as part of the European Green Deal.

Effectively combatting environmental crime hinges on robust evidence. Earth Observation technology can support monitoring, inspection, and evidence gathering, thus enhancing environmental crime investigations. However, challenges related to data privacy, quality, availability, and legal admissibility must be overcome to fully realize the potential of Earth observation in the fight against environmental crime.

This session will:
• Identify and evaluate EO-based methods to help detect and characterize environmental crimes and their impacts.
• Explore geospatial and open-source intelligence in multidisciplinary evidence collection, including the role of citizen science.
• Discuss the effective integration of EO into environmental compliance assurance and law enforcement.
• Analyse practitioner needs for new sensor data, processing tools, analytical methods, and operational modes.
• Foster dialogue among policymakers, researchers, and practitioners.
• Inform the development of a roadmap for wider EO adoption in environmental crime investigations through ESA-JRC collaboration.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Monitoring Remote Marine Protected Areas Using Vessel-based Data

Authors: Daniel Kachelriess, Tammy Davies, Ana P. B. Carneiro, Daniel Mitchel, Johnny Briggs, Daniel Steadman, Tyler Clavelle, Adel Heenan, Andrea Sánchez-Tapia, Oliver Yates, Olivia
Affiliations: BirdLife International, High Seas Alliance
Marine Protected Areas (MPAs) are a primary tool for protecting biodiversity and have been shown to have social, economic and ecological benefits when they are well designed, implemented, and monitored. However, many MPAs remain unimplemented and un-monitored, meaning that they have not yet been put into place on the water and exist only as “paper parks”, which undermines the benefits they are able to provide. The lack of implementation for MPAs even within national waters is concerning, especially given the imminent opportunity for establishing sites in international waters (>200nm from shore, also known as “the high seas”), where implementation is expected to be considerably more challenging. A new Agreement on the conservation and sustainable use of marine biological diversity in areas beyond national jurisdiction (ABNJ) establishes a mechanism for creating area-based management tools, including MPAs. With global momentum for this Agreement to enter into force in 2025, there is a need to identify feasible management and monitoring strategies that align with data availability and site accessibility. This is critical to ensure that area-based management of the high seas, characterized by logistical challenges and data limitations, can deliver meaningful conservation outcomes. Remote sensing data offers an opportunity for the effective management of large and remote MPAs, as it is becoming increasingly accessible. Since nearly all human activities on the high seas are associated with vessels, these activities can therefore be monitored via remote vessel monitoring. Monitoring human pressures through remote sensing offers a cost-effective solution for long-term monitoring of remote areas, particularly when other data sources are not readily available. 
We outline a framework to evaluate the human pressures within and around an MPA using remote vessel-based data that can be used to support area-based management and monitoring, with particular relevance to sites on the high seas. We illustrate the framework and demonstrate the utility of this approach using remote vessel data representing shipping and fisheries activities for a high seas MPA in the North Atlantic - the North Atlantic Current and Evlanov Sea basin (NACES) MPA. This presentation will outline an analysis of remote vessel data structured within a simple framework and illustrate the resultant time, space and political dimensions of human activities that can be used to support area-based management.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Flood Simulation with Earth Observation - a Powerful Policy Support Instrument in Climate Change Adaptation and Response Planning Context

Authors: Mariana Damova, Dr Stanko Stankov, Hristo Hristov, Dr Emil Stoyanov, Hermand Pessek
Affiliations: Mozaika
The current climate change conditions, causing increasing water-related challenges, call for urgent measures to adapt to and mitigate the consequences of floods and droughts at local, regional and global level. In Europe alone we observe a considerably unbalanced distribution of water quantities throughout the year. For nine months the water balances are below the necessary minimum, and then in October, November and December of last year there were 158 notifications of serious floods. The recent flood event in Spain has also shown the urgency of planning and preparatory measures to cope with climate change through adaptation and response. We will present a Web-based Flood Simulation component that helps municipalities and policy makers in their duties by providing capabilities to play out what-if scenarios of flood events of different intensity, visualizing them on interactive flood maps, and displaying calculations of the potential damages incurred in terms of infrastructure, population affected, agricultural produce, etc. Thus, it serves as a hands-on instrument for analytics, planning and the design of measures. The Flood Simulator makes use of Earth observation data and the Copernicus DEM (Digital Elevation Model), and is part of ISME-HYDRO® – the intelligent e-infrastructure of Mozaika for comprehensive water resources management, based on EO and AI. It is also integrated into the DestinE platform as part of one of its first use cases – UrbanSquare – demonstrating federated integration between two platforms, and enables global reach. UrbanSquare addresses the fact that urban areas are increasingly vulnerable to a myriad of climate risks, from scorching heatwaves and the intensifying urban heat island effect to devastating flooding and coastal erosion. The use case confronts these multifaceted threats by aiming to understand and mitigate the impacts of heat waves, storms, air pollution, flooding, and infrastructure disruptions.
We discuss and show how the presented Flood Simulator can be beneficial for both designing climate change adaptation measures and planning response in eventual hazardous situations.
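The abstract does not describe the simulator's algorithm; purely as an illustration of the what-if idea, a minimal "bathtub" inundation sketch over a DEM grid might look as follows (all names, the 30 m cell size and the synthetic DEM are assumptions; a real simulator would add hydraulic connectivity, flow routing and damage models):

```python
import numpy as np

def bathtub_flood(dem, water_level):
    """Boolean mask of cells inundated at the given water level.

    Simplest possible 'bathtub' model: every cell whose elevation lies
    below the water level is flooded (no connectivity or routing check).
    """
    return np.asarray(dem, dtype=float) < water_level

def affected_area_km2(flood_mask, cell_size_m=30.0):
    """Turn a flooded-cell count into an area in square kilometres."""
    return flood_mask.sum() * (cell_size_m ** 2) / 1e6

# Tiny synthetic DEM (elevations in metres) standing in for Copernicus DEM.
dem = np.array([
    [5.0, 4.0, 3.0],
    [4.0, 2.0, 1.0],
    [3.0, 1.0, 0.5],
])

# What-if scenario: water level rises to 2.5 m.
mask = bathtub_flood(dem, water_level=2.5)
area = affected_area_km2(mask)  # 4 flooded cells at 30 m resolution
```

Running several water levels through such a function and overlaying the masks on infrastructure and population layers is what turns a raw DEM into the kind of damage calculation the abstract describes.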
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Mapping the risk of human-wildlife conflict at the landscape scale in Mozambique

Authors: Gordon Campbell, Doug McNeil, Iain Cameron, Clare
Affiliations: ESA
Human-wildlife conflict (HWC) is an ever-increasing challenge in environmental conservation and protection, in Africa and globally, for governments and organisations, including national park management. This is driven by increased pressure on the land from human population growth and the conversion of natural habitat to agricultural land. The companies EOLAS Insight and Omanos Analytics have partnered with the Peace Parks Foundation (PPF) to develop tools that use satellite-derived data to better understand and monitor HWC in Africa. This European Space Agency (ESA) funded project will develop a data portal for managing human-wildlife conflict at landscape scale, focused on elephant populations in the Limpopo and Banhine National Parks of Mozambique, and the wildlife corridor that is being established between these parks. At the highest level, this project aims to understand, and map, the risk factors that lead to HWC in the region. Our approach maps the distribution of human activities and elephant populations in the landscape and looks for places where these distributions may interact. These interactions can result in co-existence or conflict, with the availability of resources in the landscape being an important deciding factor. When resources such as food and water become scarce, the risk of conflict increases, as elephant populations are more likely to raid crops and seek water sources that may also be used by human populations. This is complicated by a general lack of accurate data on the distribution and movement of elephants across the region. Current data collection methods provide an annual wildlife census from helicopter and the tracking of individual elephants over time. This provides snapshots of population movement within the landscape but little insight into seasonal land occupation and interaction with human lands.
Here, we will provide updates on the development progress and results to date, including: 1) How the user requirement interviews with staff drawn from across the Peace Parks Foundation have influenced the design of the data layers and project portal. 2) The creation of landscape-level population and natural resource layers from a range of satellite-derived data sources. This includes a new machine-learning driven approach to the detection of small-scale cropland parcels that are typically missed from existing landcover maps. 3) Progress with using machine learning models to conduct elephant censuses from VHR satellite imagery. 4) The development of a seasonal elephant population distribution model that combines the incomplete existing monitoring data with natural resource maps to predict likely seasonal elephant population distributions across the region.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: A Data-Driven Approach for Detecting Suspicious Vessel Behavior Involved in Bilge Dumping Using SAR, AIS Data and Meteorological Data

Authors: Marco Folegani, Noemi Fazzini, Federico Tarozzi
Affiliations: Meeo, Department of History, Cultures, and Civilizations, Alma Mater Studiorum University of Bologna
Within the context of the growing global maritime trade, it is crucial to address the potential impacts of this trend on environmental protection. In particular, the illegal practice known as bilge dumping, named after the discharge of oily bilge water and defined in MARPOL Annex I as water contaminated with oil from leaks or maintenance in machinery spaces [1], poses a significant threat to the health of marine ecosystems. Reports from European authorities and private agencies highlight the significant environmental impact of bilge dumping. Yet, despite international obligations under the UNCLOS Convention and the MARPOL 73/78 Protocol [1], these illegal practices persist, driven by convenience and a disregard for compliance. This underscores the urgent need for decisive actions to challenge the prevailing culture of impunity in maritime law. In response, emerging fields such as Earth Observation (EO) and open-source intelligence, enabled by the increasing availability of data, present promising opportunities for detecting and monitoring these hidden illicit activities, ensuring better environmental compliance. To this end, the objective of this research is to develop a model capable of operating on a spatio-temporal level to identify suspicious behaviors of ships deliberately polluting the sea. This research therefore examines the suspicious behavior of tanker and cargo ships in the Gulf of Mexico and along the U.S. East Coast. The primary objective is to create an interoperable framework for coastal authorities to identify the vessels most likely involved in bilge dumping activities. To achieve this, the proposed solution integrates and analyzes various data sources to pinpoint vessels exhibiting suspicious activity.
The system uses a refitted version of the Deep Learning (DL) model for near real-time oil spill detection from Sentinel-1 SAR imagery, used within the EU-funded Iliad project to build a Digital Twin of the Ocean (DTO) for the automatic detection of vessels [4]. Input data consist of 10 m resolution dual-polarization Sentinel-1 images acquired with the Interferometric Wide Swath (IW) Ground Range Detected mode at high resolution (GRD-HD). Images are automatically pre-processed in their VV polarization using customized functions based on SNAPpy. Further, the system utilizes vessel traffic data, specifically historical Automatic Identification System (AIS) data, obtained from the Marine Cadastre database and collected by the U.S. Coast Guard. These data, transmitted via an onboard navigation safety device, provide information on the location and characteristics of vessels operating in U.S. and international waters [2]. The data are then run through statistical models and unsupervised classification algorithms to identify the ships considered suspicious. Finally, the model examines weather data related to wind intensity, direction and temperature to further refine the search for suspicious vessels. The analysis process, conducted through the integration of various software tools such as QGIS and SNAP, and supported by data management via Python scripts, led to the identification of three main types of anomalies: positional anomalies, anomalies in speed and trajectory, and anomalies related to temporal and meteorological conditions. These outliers were evaluated in relation to the most commercially trafficked routes for the type of vessel under consideration [3], with particular attention to conditions that could influence vessel behavior and potentially obscure illegal activities. Moreover, the integration of satellite data with AIS data enables the identification of dark vessels: ships that do not transmit position signals.
This is achieved by validating the absence of signals through a comparison of AIS data with SAR imagery, which can be acquired both day and night and under all weather conditions. At the current state of research, the absence of open ground truth databases for verifying detected anomalies highlights opportunities for further refinement. However, the model's scalability and its ability to enhance performance based on the quality and quantity of the data provided make it a promising tool for future applications, including combating illegal activities at sea. In conclusion, the model's ability to detect suspicious behavior offers a valuable resource for professionals and authorities dedicated to ocean monitoring and protection. This can enhance the ability to make more informed decisions and implement more precise and effective actions to address illicit activities. References: [1] EMSA, Addressing Illegal Discharges in the Marine Environment, published 22.10.2013, updated 24.07.2018, European Maritime Safety Agency, p. 2, p. 40. [2] Marine Cadastre, Vessel traffic data, n.d., U.S. Department of Commerce, National Oceanic and Atmospheric Administration, available at: https://hub.marinecadastre.gov/pages/vesseltraffic. [3] U.S. Department of Commerce, Shipping Fairways, Lanes, and Zones for U.S. Waters, n.d., available at: https://catalog.data.gov/dataset/shipping-fairways-lanes-and-zones-for-us-waters1. [4] Outmani, S., Spanoudaki, K., Fazzini, N., Quarta, M. L., Vettorello, L., Folegani, M. and Kampanis, N., A near real-time oil spill detection and forecasting Ocean Twin, ESA Environmental Crimes Workshop 2024, Frascati, Italy, 11–12 June 2024, available at: https://www.conftool.pro/envcrimes2024/index.php/Outmani-A_near_real-time_oil_spill_detection_and_forecasting_Ocean_Twin-125.pdf?page=downloadPaper&filename=Outmani-A_near_real-time_oil_spill_detection_and_forecasting_Ocean_Twin-125.pdf&form_id=125
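The statistical models and unsupervised algorithms behind the anomaly search are not published in the abstract; as a hedged sketch of just one anomaly type, speed anomalies, a per-vessel z-score rule on AIS speed over ground could look like this (the function name, threshold and data are illustrative assumptions, not the project's actual model):

```python
import numpy as np

def speed_anomalies(sog_knots, z_thresh=3.0):
    """Flag speed-over-ground samples that deviate strongly from a
    vessel's own typical behaviour, using a simple z-score rule.

    A sudden near-stop in open water, for example, is one of the cues
    that can warrant a closer look at possible discharge activity.
    """
    sog = np.asarray(sog_knots, dtype=float)
    mu, sigma = sog.mean(), sog.std()
    if sigma == 0.0:  # perfectly steady track: nothing to flag
        return np.zeros_like(sog, dtype=bool)
    return np.abs(sog - mu) / sigma > z_thresh

# A transit at a steady ~12 kn with one unexplained near-stop.
track = [12.0] * 15 + [0.5]
flags = speed_anomalies(track)  # only the final sample is flagged
```

In a real pipeline such flags would be only one input; the abstract combines them with positional, temporal and meteorological criteria before a vessel is labelled suspicious.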
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: What is the Potential of Satellite Technology in Monitoring Future Marine Protected Areas in the High Seas?

Authors: Klaudija Cremers, Alexandra Oliveira Pinto, Dr. Julien
Affiliations: Institute for Sustainable Development and International Relations (IDDRI)
In 2023, States concluded an agreement on the conservation and sustainable use of marine biological diversity in areas beyond national jurisdiction (BBNJ), and there is now global momentum for the treaty to enter into force in 2025. This treaty establishes a mechanism for creating area-based management tools (ABMTs), including marine protected areas (MPAs), in areas beyond national jurisdiction (ABNJ). This mechanism was missing until now and will facilitate the implementation of the 30x30 target adopted under the Convention on Biological Diversity’s 2023 Kunming-Montreal Global Biodiversity Framework. The overarching question regarding the implementation and enforcement of high seas MPAs is how to ensure effective monitoring, control, and surveillance (MCS) of human activities. Over the last decade, there has been a rapid increase and spread of innovative MCS technologies, driven by falling prices and open access to satellite data, and greater investment in artificial intelligence, big data solutions, cloud computing, and skilled human resources. These technological tools, provided by companies and non-profit organizations, can bring significant added value to the implementation of management plans for future high seas MPAs, by optimizing resource allocation and providing near real-time insights into suspected illegal activities at sea. However, technology alone will not be enough. Supporting policy and technical measures—such as addressing capacity gaps, fostering cooperation for maritime patrols, strengthening port State controls, reforming national judicial systems, and ensuring effective information sharing—will be essential to operationalize technology and ensure the effective management of future high seas MPAs. This presentation will address the conditions under which satellite technology can help with the monitoring and enforcement of human activities in high seas MPAs and will discuss concrete applications on specific sites.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Remote Sensing in the Practice of Environmental Inspectorates: Retrospective and Proactive Monitoring

Authors: Markéta Potůčková, Kateřina Tučková, Jan Kolomazník, Jakub Dvořák, Martin Marko, Lucie Červená, Jakub Lysák, Eva Štefanová, Lucie Kupková
Affiliations: Charles University, Faculty of Science, Department of Applied Geoinformatics and Cartography, Gisat, TRL SPACE, Czech Environmental Inspectorate
The mission of the Czech Environmental Inspectorate is to supervise compliance with environmental legislation. While remote sensing could effectively complement existing monitoring activities, its use remains limited. The presented study is an example of close collaboration between academia, the private sector and representatives of the Inspectorate. It aims to propose and demonstrate the possibilities of using remote sensing at different scales - spaceborne, airborne, and UAV - to support the Inspectorate's surveillance activities related to environmental protection needs, such as waste management, forest, nature, or water protection. In the first, ongoing phase of the collaboration, the possibilities of using remote sensing technologies for defined user scenarios of surveillance activities have been analysed. Scenarios include monitoring illegal waste management, illegal construction activities, and activities to support nature protection, focusing on impacts and activities that violate regulations or harm the environment. The demonstration and implementation phases shall focus on integrating remote sensing methods as supporting components within the selected scenarios. Monitoring will take the following forms: 1) Ad-hoc retrospective activity reporting using satellite remote sensing. The aim is to develop a method for the retrospective (ex-post) detection of (potentially illegal) activities that have already taken place in a given location and for which a documentary record of their extent and progress needs to be provided on an incentive basis. The basic data source is archived, mostly freely available, high temporal resolution satellite data - SAR (Sentinel-1) and optical (Sentinel-2, Landsat-8/9). The methodology is based on automated detection of change maxima of spectral normalised difference or biophysical indices and radar signals.
2) Proactive monitoring using satellite-based remote sensing supported by ad hoc UAV/proactive observations. In this case, it is the early and automated detection of potentially undesirable activity at locations of interest. The methodology involves the sequential processing of newly acquired high temporal resolution satellite imagery - SAR (Sentinel-1) and optical (Sentinel-2, Landsat-8/9, Planet) - and possibly hyperspectral imagery acquired by the TROLL satellite (see also the next paragraph) or ad hoc proactive UAV imaging. 3) Proactive monitoring with satellite hyperspectral data. From the beginning of 2025, the TROLL hyperspectral satellite from TRL Space will provide data from 32 narrow spectral bands between 442 and 884 nm and from a panchromatic band with a spatial resolution of 4.75 m. The study will present the first data from this satellite and its potential use for the various surveillance activities of the Inspectorate. As part of the project, data from UAVs equipped with multi- or hyperspectral sensors will also be collected during the satellite acquisitions, and the potential of the HS satellite data will be compared with geometrically and radiometrically corrected UAV data. 4) Automated detailed proactive monitoring in the field of nature conservation - vegetation monitoring at the individual level using UAV data. This type of activity involves the mapping of individual plants (selected protected species) using UAV data and machine and deep learning methods. A pilot study on retrospective monitoring of surface changes using satellite data is currently underway to identify suitable methodologies for supporting the Inspectorate's existing monitoring efforts. The first phase focuses on analyzing the applicability of remote sensing for specific surveillance scenarios, including illegal waste disposal, construction activities, and impacts on vegetation.
Subsequent phases will involve developing and implementing methodologies for both retrospective and proactive monitoring. Initial results received a favorable response from Inspectorate staff due to the possibility of historical analysis and detection of the areal extent of partial changes. The aim of this contribution is to present examples of the above-mentioned types of monitoring and, by providing examples of good practice, to promote the use of remote sensing in the control activities of governmental environmental agencies/inspectorates in other countries. The contribution is supported by the Technology Agency of the Czech Republic, project no. SS07010417 "Earth observation technologies for the surveillance activities of the Czech Environmental Inspectorate".
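As a toy sketch of the first monitoring form, retrospective detection of change maxima in normalised-difference indices, the core computation might be (illustrative only; the actual methodology also uses biophysical indices and Sentinel-1 radar signals):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised difference vegetation index, guarded against /0."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def change_maximum(index_stack):
    """For a (time, y, x) stack of index values, return the per-pixel
    magnitude and time step of the largest change between consecutive
    acquisitions - the 'change maximum' used to date an activity."""
    diffs = np.abs(np.diff(index_stack, axis=0))
    return diffs.max(axis=0), diffs.argmax(axis=0)

# One-pixel toy time series: vegetation cleared between epochs 1 and 2.
stack = np.array([[[0.80]], [[0.78]], [[0.20]], [[0.22]]])
magnitude, step = change_maximum(stack)  # 0.58 at step index 1
```

Thresholding the returned magnitude map and reading off the step index is what yields the documentary record of where and roughly when a (potentially illegal) activity occurred.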
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Earth Observation data for Environmental Monitoring and Maritime Situational Awareness in the Black Sea

Authors: Marius Budileanu, Ionuț Șerban, Vasile Craciunescu, Sorin Constantin, Michela Corvino
Affiliations: Terrasigna, ESA
In recent years, the Black Sea has become one of the most important navigation areas in the world. Given the general context of the Black Sea area, navigation safety and the risk of polluting accidents have led to the need for better monitoring of maritime traffic. A new, innovative platform for data processing, integration, and visualization for situational awareness in the Black Sea will be presented. The main objective of the platform is to semi-automatically detect ships in the area of interest and provide a brief characterization of these vessels (e.g. length, bearing). The platform benefits from the SAR data provided by the Copernicus Sentinel-1 mission, which allows the extraction of information concerning maritime traffic in all weather conditions. Optical images (such as Sentinel-2 data), together with other SAR-derived products, are also taken into account to minimize the gap between Sentinel-1 revisits. Automatic identification system (AIS) data is used for correlation with targets obtained from Earth Observation (EO) to derive different types of information, such as a vessel's speed over ground (SOG), course over ground (COG) or its maritime mobile service identity (MMSI). The correlation module is also used to detect anomalies in ships' navigation, such as out-of-path trajectories or a switched-off AIS broadcaster. All the above-mentioned modules operate in a cloud platform - EO4BSP - that integrates state-of-the-art technologies with open access based on OGC-compliant standards and a user-friendly web interface.
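The correlation module itself is not described in detail; a minimal nearest-neighbour matcher between SAR ship detections and AIS positions, with unmatched detections flagged as potential dark vessels, can be sketched as follows (the function name, projected metre coordinates and the 500 m gate are assumptions, not the platform's actual logic):

```python
import numpy as np

def correlate_detections(sar_xy, ais_xy, max_dist_m=500.0):
    """Match each SAR ship detection to its nearest AIS position.

    Coordinates are (x, y) pairs in a projected CRS (metres); a
    detection with no AIS counterpart within `max_dist_m` gets -1,
    marking a potential 'dark' vessel. Assumes `ais_xy` is non-empty.
    """
    ais = np.asarray(ais_xy, dtype=float)
    matches = []
    for det in np.asarray(sar_xy, dtype=float):
        dists = np.linalg.norm(ais - det, axis=1)
        nearest = int(dists.argmin())
        matches.append(nearest if dists[nearest] <= max_dist_m else -1)
    return matches

# Two SAR detections, one AIS track: the second detection is 'dark'.
matches = correlate_detections([(0, 0), (10_000, 10_000)], [(100, 50)])
```

A production module would additionally gate the match on acquisition time and interpolate the AIS track to the SAR overpass, which is where SOG and COG enter the comparison.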
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Illegal dumping detection with high resolution satellite imagery

Authors: Vincent Poulain, Sébastien Le Corre, Stéphanie Artigues, Eric Lasserre
Affiliations: Thales Services Numériques, SGEvT, University Toulouse 2, CNES
Illegal dumping of waste in the countryside is an environmental problem because of the pollution it causes, and also an economic issue for the local authorities concerned. The problem is of interest to the authorities, law enforcement agencies and also waste management professionals. Satellite imagery, covering vast territories with frequent revisits, is particularly well-suited to the detection of illegal dumps, with the aim of detecting them as early as possible and planning their evacuation before they take on major proportions. In that context, the French Space Agency (CNES) initiated the development of an illegal dump detector, built by Thales Services Numériques and enhanced by a contextual and environmental analysis led by SGEvT. An analysis of the feasibility of detecting landfills on French territory showed that sub-meter images were required (unlike larger-scale landfills in Africa or South America, for example, for which Sentinel-2 can be suitable). The approach adopted for detection was to train a convolutional neural network using Pleiades images (70 cm resolution). To achieve this, the following steps were carried out iteratively: - pixel-level annotation of Pleiades images (based on open maps of signaled illegal dumping) - training of a U-Net semantic segmentation model - analysis of results, and in particular of the types of false positives (to prioritize subsequent relevant annotations). A total of 2711 patches of 256x256 pixels were used for training, of which 488 contained at least one illegal deposit. The test database contains 78 high-quality annotated patches, acquired over an area for which a 5 cm resolution orthophoto taken the same year was available. The raw performance obtained without post-processing is 97% recall (97% of landfills detected), while 20% of the detections are indeed landfills (precision).
The low precision is explained by confusion with landscape elements visually close to landfills (mineral surfaces in natural landscapes, or deposits of agricultural plant matter, for example). This performance, with very high recall despite low precision, nevertheless makes the detector a useful decision-making tool for users, thanks in particular to the post-processing functions SGEvT implemented: - Parameterizable filtering of non-credible predictions (reliability, distance from road network, etc.) to produce a realistic list of credible predictions to be reported and verified, including assistance in the compromise between precision, recall and minimum detection threshold, which can vary according to the zones or priorities selected (environment, health, water, etc.). - Calculation and addition of contextual information by exploiting exogenous data (cadastral parcels, etc.) and environmental data (proximity of water or various types of buildings, etc.) in order to centralize information, homogenize available data and put it online in the form of data visualizations. - Production of a configurable flow of alerts, with a status report showing the stakes in the surrounding area, the nature and surface of the pollution detected, its volume where possible, and additional contextual elements on location (commune, parcel, protected site). - Prioritization of interventions and support for dump removal operations. This service has been deployed for a number of different applications: the Prefecture of a French department to assist the Gendarmerie in its investigations, the Departmental Council of another department to produce a territorial diagnosis and a reliable reference system for comparing the state of illegal dumping, and a demonstrator for monitoring major events in 5 cities. Thales and SGEvT are continuing to develop this service as part of the France2030 program.
It consists of iteratively improving the detection algorithm, in particular by diversifying the source images and their resolution (Spot6/7, Pléiades Néo, Sentinel-2, CO3D, etc.) which enables a higher revisit frequency and an expansion of the service capabilities.
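The parameterizable filtering of non-credible predictions can be sketched as a simple rule over per-detection attributes (the dictionary keys, thresholds and function name below are hypothetical; the deployed filters also use environmental context such as proximity to water or protected sites):

```python
def filter_detections(detections, min_score=0.5, max_road_dist_m=200.0):
    """Keep only credible dump predictions: confident enough, and close
    enough to the road network (dumps are usually truck-accessible).

    `detections` is a list of dicts with hypothetical keys
    'score' (model confidence, 0-1) and 'road_dist_m'.
    """
    return [
        d for d in detections
        if d["score"] >= min_score and d["road_dist_m"] <= max_road_dist_m
    ]

# Three raw predictions: the low-confidence and far-from-road ones are dropped.
raw = [
    {"score": 0.9, "road_dist_m": 50.0},   # kept
    {"score": 0.3, "road_dist_m": 20.0},   # too uncertain
    {"score": 0.8, "road_dist_m": 900.0},  # implausibly remote
]
credible = filter_detections(raw)
```

Thresholds like these are exactly the knobs the abstract describes for trading recall against precision per zone or priority.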
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Monitoring Inland Water Quality in Poland Using Python and Sentinel-2 Satellite Imagery

Authors: Klaudia Kościuk, Lukasz Firek, Franciszek Batko, Piotr Krajewski, MSc Jakub Staszel, PhD Michał Lupa
Affiliations: AGH University of Krakow
Water quality in inland bodies of water is a critical environmental issue in Poland, primarily driven by rapid urbanization, industrial activities, and inadequate wastewater treatment infrastructure, especially in rural areas. These challenges have led to significant water pollution, resulting in detrimental effects on ecosystems, human health, and overall water resources. Traditional methods of water quality monitoring, which rely heavily on manual sampling and laboratory analysis, are not only labor-intensive but also costly and time-consuming. This is particularly problematic in rural and remote areas, where access to the necessary infrastructure is limited. In response to these challenges, our project proposes a novel, cost-effective solution for real-time monitoring of inland water quality in Poland using advanced satellite imagery and Python-based data analysis. This project is led by AGH STARS (AGH Spatial Technology and Remote Sensing), a scientific club at the AGH University of Krakow's AGH Space Technology Center. The primary goal is to develop an innovative geoportal that enables the efficient monitoring and analysis of water quality across Poland using Sentinel-2 satellite data. This approach leverages remote sensing technologies, cloud computing, and open-source tools to create a scalable system that is accessible to both researchers and the public. By utilizing satellite data, we aim to overcome the financial and logistical constraints of traditional monitoring systems, providing continuous data coverage over large geographic areas without the need for costly fieldwork. The impetus for this project arises from the growing concerns over water pollution in Poland. Urban and industrial development has significantly increased the levels of pollutants discharged into rivers, lakes, and coastal waters.
This pollution not only affects water quality but also leads to eutrophication, resulting in "oxygen deserts" where aquatic life cannot survive. The project seeks to address these challenges by utilizing the capabilities of satellite-based remote sensing. Sentinel-2 satellites, operated by the European Space Agency, provide high-resolution multispectral imagery that is well-suited for monitoring water quality. By analyzing this data, we can derive critical water quality indicators, such as chlorophyll concentration, turbidity, and the presence of harmful algal blooms. This approach not only reduces costs but also allows for the continuous monitoring of large and remote areas, making it an ideal solution for Poland’s diverse landscape. Our project is built on the integration of various technologies and platforms to create a robust system for water quality assessment. The workflow begins with the acquisition of the latest Sentinel-2 imagery through the CREODIAS platform, which provides access to a vast repository of Earth Observation data. Data processing is performed on Virtual Machines provided by CREODIAS, which offer enhanced computational power for handling large datasets. Once the real-time satellite data is obtained, we process it using Python libraries such as Rasterio and GDAL. The processed data, including the calculated indices, is then stored in an Amazon S3 bucket, a scalable cloud storage service that ensures efficient data management and retrieval. To maintain metadata and facilitate easy access to the results, we utilize a PostgreSQL database. For the visualization component, we deploy Web Map Services using MapServer. The WMS outputs are further enhanced using Mapbender, a framework for creating interactive web maps. This enables users to visualize water quality data dynamically, providing an intuitive interface for exploring spatial patterns and trends over time.
The final output is a user-friendly geoportal that displays water quality data in an accessible and visually engaging manner. By making this data publicly available, we aim to raise awareness of the state of inland waters in Poland and encourage collective action to improve water management practices. Additionally, the geoportal serves as a valuable tool for public engagement. By making water quality data easily accessible, the portal can increase awareness of environmental challenges and encourage community involvement in water conservation efforts. The visual representation of water quality data helps to translate complex scientific information into a format that is understandable by non-specialists, thereby fostering greater public concern and action. Looking ahead, we plan to expand the capabilities of the geoportal by incorporating additional data sources, such as Landsat imagery for water temperature monitoring and in-situ measurements, to improve the accuracy and reliability of water quality assessments. We also aim to enhance the analytical capabilities of the system by integrating machine learning algorithms for image super-resolution. Overall, the project demonstrates the potential of remote sensing technologies in addressing pressing environmental challenges. By providing real-time and actionable data, we hope to contribute to better water management practices in Poland and beyond. Our approach serves as a model that can be adapted for other regions facing similar challenges, promoting the use of innovative technologies for environmental conservation. This abstract, with a focus on innovation, technical robustness, and societal relevance, is well-aligned with the objectives of the LPS25 conference. We are confident that our project will contribute valuable insights into the application of remote sensing for sustainable environmental management and decision-making.
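The per-pixel index computation at the heart of the workflow above can be illustrated with a short sketch. It computes the Normalized Difference Chlorophyll Index (NDCI) from Sentinel-2 band 4 (665 nm) and band 5 (705 nm) reflectances with NumPy; the choice of NDCI and these bands is an illustrative assumption, not necessarily the exact index computed by the geoportal, and in practice the arrays would be read from Sentinel-2 rasters with Rasterio or GDAL:

```python
import numpy as np

def ndci(b4_red: np.ndarray, b5_red_edge: np.ndarray) -> np.ndarray:
    """Normalized Difference Chlorophyll Index from Sentinel-2 reflectances.

    b4_red: band 4 (665 nm) surface reflectance.
    b5_red_edge: band 5 (705 nm) surface reflectance.
    Higher values indicate higher chlorophyll-a concentration.
    """
    b4 = b4_red.astype(float)
    b5 = b5_red_edge.astype(float)
    denom = b5 + b4
    # Guard against division by zero over no-data pixels.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom > 0, (b5 - b4) / denom, np.nan)

# Toy 2x2 reflectance patches; real data would come from rasterio.open(...).read().
b4 = np.array([[0.02, 0.04], [0.05, 0.00]])
b5 = np.array([[0.03, 0.04], [0.02, 0.00]])
print(ndci(b4, b5))
```

The resulting array could then be written back to a GeoTIFF and served through the WMS layer, as the abstract describes.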

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Spatial- and temporal data enrichment by multi-sensor approaches and data fusion technologies: policy driven earth observation approaches

Authors: Dr. Thomas Heege, Karin Schenk, Dr. Christoph Deller, Philipp Bauer
Affiliations: EOMAP
Guidelines
The EU Water Framework Directive (WFD), established in 2000, aims to protect and improve the quality of water bodies across the EU. It requires member states to achieve “good status” in terms of both the chemical and ecological quality of water bodies. The directive outlines obligations for member states to implement River Basin Management Plans and Programs of Measures to monitor, protect, and restore water bodies, to prevent any deterioration of their quality. Likewise, the Marine Strategy Framework Directive (MSFD) requires member states to develop marine strategies to protect and restore the marine environment and to reduce and prevent pollution. Both the WFD and the MSFD require monitoring the ecological status of water bodies based on biological, chemical, and hydro-morphological elements, including factors such as phytoplankton, algal blooms, macrophyte growth, oxygen levels, and nutrients. The directives’ goals are to maintain and improve the ecological state of all marine ecosystems as well as inland water bodies. This also includes preventing harmful events such as algal blooms and oxygen depletion, which can lead to fish kills and other ecological problems. The EU Bathing Water Directive (BWD) introduces water quality monitoring duties for EU member states at public bathing sites. Apart from the detection of pathogenic germs, the detection of harmful algal blooms and changes in water quality is of importance, since authorities are supposed to provide timely warnings to ensure public health.
Challenges in Current Monitoring Approaches
Despite the clear objectives of the WFD, the MSFD and the BWD, existing monitoring practices often face limitations. Conventional monitoring technologies, such as sampling at fixed stations and infrequent sampling intervals, can fail to detect rapid and localized ecological changes.
Monitoring networks are often limited in terms of spatial coverage, especially in large river systems or smaller water bodies, and samples may only be taken a few times a year or in monthly intervals during the bathing season. Additionally, ecological and environmental degradation often starts in specific locations, which may not coincide with existing sampling points. These gaps in monitoring can lead to missing critical early warnings about water quality deterioration.
Impact of Earth Observation and Data Fusion Methods
We will present three use cases – recently developed and executed with German authorities in the states of Bavaria, Mecklenburg-Vorpommern and Rhineland-Palatinate – all entrusted with tasks referring to EU guideline reporting. These examples demonstrate how remote sensing technologies and data fusion approaches close spatial and temporal gaps created by current monitoring practices under the WFD, the MSFD and the BWD, and how data fusion techniques can be used to enhance the information level and allow for application-related modifications of current practices. As far as the WFD goes, multi-sensor remote sensing based on Sentinel-2 MSI, Sentinel-3 OLCI, Planet SuperDoves and Landsat OLI and TIRS data enables the synoptic monitoring of large inland areas over short time intervals. Thanks to technological advancements over the past decade, very short revisit times (1-8 days) and high spatial resolutions (3-10 m) are now standard. Particularly in lakes and lake ensembles, the trophic state and its changes — central parameters for assessing ecological health — can be reliably monitored through optical sensors via parameters such as Chlorophyll-a (a key indicator of algae presence), water clarity, organic absorption, and turbidity. The technologies allow regular monitoring of large water body ensembles to identify risk areas for ecological degradation and detect threats early.
Integrating Earth Observation (EO) methods for trophic state assessment enables comprehensive identification of at-risk areas. This allows for the adjustment of monitoring programs with regional, selective time intervals, periods, and sampling locations according to the WFD’s objectives. Faster and more complete detection of critical situations enables better management of affected areas, contributing to a reduced risk of ecological degradation. Another advantage of synoptic EO data is the cost-effective integration of information on smaller streams and lakes, which currently fall outside monitoring programs. The fusion of multi-sensor RS data with in-situ data and hydrological information yields a complementary dense data network and tremendously enhances the amount of data available for steering management measures. Likewise, the MSFD can be supported. It is built on thematic “descriptors” such as “biodiversity”, “habitats” or “eutrophication”, elements that describe specific aspects of the marine environment and make them measurable. Data for the example of "eutrophication" will be presented, focusing on a coastal region of the Baltic Sea. To measure and assess eutrophication, data on several key parameters and indicators is needed, such as nutrient concentrations, Chlorophyll concentration, Secchi depth, dissolved oxygen levels, trophic status indicators, and the frequency, duration, and intensity of algal blooms. For this marine environment, L3 and L4 data from the Copernicus Marine Service (CMEMS) is used as an important source of this kind of information and is enriched by multi-sensor remote sensing water quality data and in situ data. The data is merged into risk indicators displayed on heat maps highlighting regions in which environmental conditions change rapidly or deteriorate. This information in turn allows for adapting management and measurement plans.
Very high-resolution satellite data recorded by Planet SuperDoves, and the subsequent retrieval of the water quality parameters Chlorophyll, Secchi depth and a harmful algal bloom indicator, are key to supporting the BWD in terms of near real time detection of harmful algal blooms caused by toxic Cyanobacteria. We will present examples in which in situ data sets of four in situ samples per season are densified by earth observation data, resulting in > 100 data points per season – including additional information on the spatial distribution of algal blooms. A user-friendly web-based tool allows for a synoptic overview of larger areas, automated integration of warning and alert systems, a dynamic interface for data and information transport, and an inter-departmental network ready to convey information further downstream to lake owners and the public, ensuring timely protection against health risks posed by harmful algal blooms.
Conclusion
The adoption of multi-sensor remote sensing technologies can massively enhance the number of data points, in terms of both spatial and temporal information, for implementation of the WFD, the MSFD and the BWD. The presented application-specific and customized approaches provide a more comprehensive and timely way to detect ecological issues in water bodies, closing gaps in spatial and temporal monitoring, and enabling early intervention. They support the directives’ goals by densifying data, enhancing the amount of information and thus the ecosystem understanding, and actively help to steer measures to better fulfill the directives’ intentions. Data fusion tools can help visualize and understand improvements or deteriorations of the environmental and ecological status, and additionally offer valuable tools for understanding and managing the impacts of extreme weather events and climate change.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: The potential of Sentinel-1 to monitor fine-scale natural and logging-related tropical forest disturbance patterns and associated carbon loss

Authors: Anne-Juul Welsink, Dr. Marielos Peña-Claros, Dr. Martin Herold, Dr. Johannes Reiche
Affiliations: Wageningen University, GeoForschungszentrum
Tropical forest degradation has a severe impact on biomass stocks and biodiversity. However, degradation from fine-scale disturbances such as logging and natural tree mortality is not accurately captured in state-of-the-art satellite-based monitoring systems (Welsink et al., 2023). Our work on improved fine-scale disturbance detection using Sentinel-1 radar shadow opens up opportunities to study patterns of logging and natural disturbance that could previously not be analysed at large scales and over time. We used a physical-based method to detect disturbances based on changes in backscatter resulting from radar shadow and/or layover. Mapped detections were validated using multitemporal, spatially explicit drone-based reference datasets in Barro Colorado Island nature reserve and five logging concessions in the Congo Basin. For disturbances larger than 300 m2, we reach detection rates above 80% for both natural and logging-related disturbances. We validated several alternative detection systems in our reference areas for comparison, namely the JRC’s Tropical Moist Forest product (Vancutsem et al., 2021), the Global Forest Watch’s Integrated alerts (Reiche et al., 2024), and the Sentinel-1 radar-based study by Carstairs et al. (2022). Our physical-based detection method reaches significantly higher detection rates, lower false detection rates and more timely monitoring compared to the operational alternatives. While Carstairs et al. (2022) reach a similar detection rate, their false detection rate is more than double for natural disturbances of almost all sizes, and at least 10 percentage points higher for logging-related disturbances. Furthermore, our method requires a shorter period to confirm a disturbance (4 months as opposed to 10), which is an added benefit for monitoring purposes that require timely data.
Such improved detection opens up opportunities to study natural and logging-related tropical forest disturbance patterns and associated emissions. Inspired by previous ground-based research with a limited geographical scope, we analyse logging patterns inside concessions, such as the adjacency, connectivity and patch size of intensively logged areas, and relate this to influencing factors such as distance to logging roads, topography and the presence of riparian areas and peatlands. Furthermore, spatially explicit reference data on harvest volumes from logging and associated carbon emissions allow us to assess to what extent integration of our fine-scale disturbance detection system with auxiliary (satellite-based) data streams can be used as a basis to monitor local carbon loss from logging. We find strong linear relationships between our estimated local carbon losses and ground data on emissions at the level of annual cutting blocks, as well as with reported harvest volumes at the concession level. Our work supports the enforcement of policies aimed at curbing forest degradation, fosters management and certification in logging concessions, and provides opportunities for improved forest monitoring in both natural and logged landscapes.
Key references
Carstairs, H., Mitchard, E.T., McNicol, I., Aquino, C., Chezeaux, E., Ebanega, M.O., Dikongo, A.M., Disney, M., 2022. Sentinel-1 shadows used to quantify canopy loss from selective logging in Gabon. Remote Sensing 14, 4233.
Reiche, J., Balling, J., Pickens, A.H., Masolele, R.N., Berger, A., Weisse, M.J., Mannarino, D., Gou, Y., Slagter, B., Donchyts, G., et al., 2024. Integrating satellite-based forest disturbance alerts improves detection timeliness and confidence. Environmental Research Letters 19, 054011.
Vancutsem, C., Achard, F., Pekel, J.F., Vieilledent, G., Carboni, S., Simonetti, D., Gallego, J., Aragao, L.E., Nasi, R., 2021. Long-term (1990–2019) monitoring of forest cover changes in the humid tropics. Science Advances 7, eabe1603.
Welsink, A.J., Reiche, J., De Sy, V., Carter, S., Slagter, B., Suarez, D.R., Batros, B., Peña-Claros, M., Herold, M., 2023. Towards the use of satellite-based tropical forest disturbance alerts to assess selective logging intensities. Environmental Research Letters 18, 054023.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P)

Poster: Leveraging Earth Observation for Strengthened Environmental Regulation

Authors: Peter Spruyt, Cristina Rosales
Affiliations: Joint Research Centre, Seidor
As the urgency to protect our natural environment intensifies, the enforcement of environmental regulations has become a pivotal concern for governments and organizations worldwide. In response to this growing need, Earth Observation (EO) technologies have emerged as a powerful ally, providing comprehensive and objective data that is revolutionizing the way we monitor and manage environmental compliance. Among the various EO initiatives, the Copernicus Emergency Management Service (CEMS) stands as a trusted source of support, offering systematic environmental monitoring to reinforce regulatory actions. This session will explore the practical uses of CEMS's On-demand Mapping service for environmental compliance. With a history of over 900 activations, the service has established itself as an effective tool for disaster risk reduction and has demonstrated the value of EO technologies in supporting legal frameworks. We will showcase case studies that exemplify the importance of the environmental monitoring tasks performed by CEMS. We will examine cases where the service has played a key role in monitoring land use changes, evaluating the conditions of protected areas, and detecting unusual environmental events. These examples will highlight how CEMS's mapping services, utilizing established EO methods, provide vital data for compliance verification and regulatory actions. The session will also present CEMS's extended aerial capabilities, which include the use of mapping drones and aircraft to enhance satellite imagery. This dimension of the service augments the resolution of satellite imagery, providing high-definition data that is crucial for in-depth analysis and precise environmental oversight. Furthermore, the session will address the practical challenges of using EO technology for legal and regulatory purposes. We will discuss issues related to data privacy and how EO data can be used as evidence in legal contexts.
Recognizing the complexity of EO data interpretation, we will also stress the importance of equipping legal professionals with the necessary training to understand and make informed decisions based on this data. The session will present the clear benefits EO offers for environmental management. We will talk about how EO can become part of regular monitoring and contribute to strategic environmental planning, establishing its role as a key resource for regulators.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: A.01.07 - POSTER - Tracking and classifying aerosols through advances in observation and modelling

Atmospheric aerosols are a major influence on weather and health that can vary rapidly in time, space, shape, size, and even chemical composition. These variations cannot be comprehensively constrained by any single observation. The assumptions and approximations used to overcome this limitation are described as “aerosol type”. Aerosols thus present diverse challenges to the environmental science community. Reliable weather forecasting relies on the ability to differentiate cloud from plumes of dust or smoke. Model evaluation often requires discrimination of aerosols by chemical properties when only optical data from satellites are available. Air quality estimation requires knowledge of the vertical distribution of particles and humidity. Rapid response to wildfires and volcanic eruptions requires near real-time data at high resolution. The next era of Earth observing satellites and data assimilation systems provides new opportunities to constrain the behaviour of aerosols by synergistically combining satellite datasets, applying physical constraints, exploiting underutilised observations, or applying new methodologies such as machine learning. This session invites presentations on methods to identify, track, and classify aerosols throughout their life-cycle in order to understand aerosols’ influence on air quality, the energy budget, fire, and more. The aim is to bring together the disparate aerosol research communities to exchange knowledge and methodologies to overcome these challenges.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Studying the Impact of WIVERN Assimilation on Dust Transport Modelling

Authors: Areti-Panagiota Bantouna, Thanasis Georgiou, Dr Francesco Manconi, Dr Emmanouil Flaounas, Dr Alessandro Battaglia, Dr Vassilis Amiridis, Dr Eleni Marinou
Affiliations: National Observatory Of Athens, National and Kapodistrian University of Athens, Polytechnic University of Turin, Institute for Atmospheric and Climate Science, ETH Zurich
WIVERN (WInd VElocity Radar Nephoscope) is one of two candidate mission concepts for the Earth Explorer 11, currently in phase A development under the European Space Agency’s FutureEO Programme, with a planned launch in 2035-2036. WIVERN envisions a polar-orbiting satellite equipped with a dual-polarization Doppler 94 GHz radar that scans conically with an 800 km swath. WIVERN would deliver near real-time, vertically resolved measurements of in-cloud Horizontal Line of Sight (HLoS) winds and micro- and macrophysical properties of cloud and precipitation systems. These observations are expected to significantly improve Numerical Weather Prediction (NWP) and climate models, while also contributing to climate records of clouds and precipitation. As WIVERN would improve the modeled wind fields in the troposphere, it is also expected to advance the forecasted aerosol transport. In order to assess the impact of WIVERN data assimilation on dust transport modelling, we perform a sensitivity study focusing on a recent cyclonic event in the Mediterranean where a significant dust presence was observed. Specifically, the study is an Observing System Simulation Experiment (OSSE) focusing on the case study of the Mediterranean cyclone Ianos of September 2020. The assimilation system used for this study consists of the Weather Research and Forecasting with Chemistry (WRF-CHEM) model, alongside the Data Assimilation Research Testbed (DART), which is an Ensemble Adjustment Kalman Filter implementation. A simplified forward operator is implemented to simulate HLoS winds from the model state in cloudy regions, avoiding the need for a full cloud radar simulator. An ensemble of forecasts is used to generate the reference fields for the OSSE, by selecting the ensemble member that most closely matches Ianos’ reference track.
Synthetic observations generated from this reference forecast are assimilated into another ensemble, with differently perturbed initial and boundary conditions. In addition to synthetic WIVERN observations generated from the reference forecast, synthetic observations of common types (e.g. radiosondes) are used to evaluate whether WIVERN has an impact beyond the existing observing system. In addition, synthetic Aeolus-2 observations are used to investigate the combined impact of the two missions on dust transport forecasting. The authors acknowledge the support of the following projects: the PANGEA4CalVal project funded by the European Union (Grant Agreement 101079201) and the AIRSENSE project funded by ESA (Contract No. 4000142902/23/I-NS).
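The simplified forward operator described above amounts, in essence, to projecting the model's (u, v) wind onto the instrument's horizontal line of sight at each observed point. A minimal sketch (the azimuth and sign conventions, and the masking to cloudy points, are assumptions; the study's actual operator and its DART implementation may differ):

```python
import numpy as np

def hlos_wind(u, v, azimuth_deg, cloudy_mask=None):
    """Project model (u, v) wind components onto the horizontal line of sight.

    azimuth_deg: radar look azimuth, measured clockwise from north (assumed
    convention). cloudy_mask: optional boolean array; HLoS winds are only
    observable in cloud, so non-cloudy points are returned as NaN.
    """
    az = np.deg2rad(np.asarray(azimuth_deg, dtype=float))
    hlos = np.asarray(u, dtype=float) * np.sin(az) + np.asarray(v, dtype=float) * np.cos(az)
    if cloudy_mask is not None:
        hlos = np.where(cloudy_mask, hlos, np.nan)
    return hlos

# A 10 m/s westerly viewed looking due east projects fully onto the line of sight.
print(hlos_wind(10.0, 0.0, 90.0))
```

Applied to a reference ("nature run") forecast and perturbed with instrument noise, such projections yield the synthetic WIVERN observations that are then assimilated.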

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Detailed characterization of volcanic plumes: height estimation using near simultaneous acquisitions from Pléiades and Spot-6

Authors: Céline Bonnetain, Pierre Briole, Marcello de Michele, Daniel Raucoules, David Youssefi, Jean-Marc Delvit
Affiliations: CNES, BRGM, ENS
Obtaining an accurate estimate of the height of volcanic plumes is a key factor in managing the hazard they represent and anticipating their impacts on health, aviation, and climate. Estimating the height of these structures composed of gases and particles is essential for predicting the dispersion of volcanic ash and deepening our understanding of volcanic systems. Current approaches, based on remote sensing, allow for the estimation of the maximum height of plumes (VCTH), but they represent an average height value (e.g., Pailot-Bonnétat et al., 2020). This study proposes an innovative methodology exploiting data from the Pléiades and Spot-6 satellites, push-broom imagers, to reconstruct a map of the heights and horizontal velocities of the surface of a volcanic eruption column at high resolution. It is based on case studies from archived eruptions of Etna (March 24, 2021) and Calbuco (April 24-25, 2015), sub-plinian explosive eruptions with moderate to strong Volcanic Explosivity Index (VEI) values (between 2 and 4). The method relies on satellite photogrammetry, which has already been used to generate Digital Elevation Models (DEMs) (e.g., May, S. & Latry C., 2009; Bagnardi et al., 2016). By applying the principle of parallax, which associates increased height with a more pronounced stereoscopic effect, the heights of volcanic clouds are estimated. Pléiades and Spot-6, with their near-simultaneous imaging (sub-second between bands of the same acquisition), help minimize the effects of plume velocity and improve image correlation accuracy (de Michele et al., 2014, 2019). The low angular difference (B/H) is sufficient to reconstruct a plume elevation model (PEM). Preliminary results show height estimates that can be put into perspective with field data. Combined with mass flux estimates for particles and gases, as well as atmospheric parameters (wind speed and direction), this method could refine models for volcanic ash dispersion (e.g., Sellitto et al., 2021).
Furthermore, an accurate reconstruction of plume height could provide insights into the pressurization conditions of the magma chamber (Hreinsdóttir et al., 2014). This methodology could be applied to the CO3D constellation soon to be launched by CNES (Lebègue et al., 2020), whose synchronous stereoscopy capability will eliminate the need to estimate the cloud’s velocity to determine its height, thereby providing more precise and robust estimations.
Keywords: Volcanic plumes, Pléiades, Spot, satellite photogrammetry, plume elevation model (PEM), sub-plinian eruptions
References
Bagnardi, M., González, P. J., & Hooper, A. (2016). High-resolution digital elevation model from tri-stereo Pleiades-1 satellite imagery for lava flow volume estimates at Fogo Volcano. Geophysical Research Letters, 43(12), 6267-6275.
Hreinsdóttir, S., Sigmundsson, F., Roberts, M. J., Björnsson, H., Grapenthin, R., Arason, P., ... & Óladóttir, B. A. (2014). Volcanic plume height correlated with magma-pressure change at Grímsvötn Volcano, Iceland. Nature Geoscience, 7(3), 214-218.
Lebègue, L., Cazala-Hourcade, E., Languille, F., Artigues, S., & Melet, O. (2020). CO3D, a worldwide one-meter accuracy DEM for 2025. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 43, 299-304.
de Michele, M., Raucoules, D., & Arason, Þ. (2016). Volcanic Plume Elevation Model and its velocity derived from Landsat 8. Remote Sensing of Environment, 176, 219-224.
de Michele, M., Raucoules, D., Corradini, S., Merucci, L., Salerno, G., Sellitto, P., & Carboni, E. (2019). Volcanic cloud top height estimation using the plume elevation model procedure applied to orthorectified Landsat 8 data. Test case: 26 October 2013 Mt. Etna eruption. Remote Sensing, 11(7), 785.
Pailot-Bonnétat, S., Harris, A. J., Calvari, S., De Michele, M., & Gurioli, L. (2020). Plume height time-series retrieval using shadow in single spatial resolution satellite images. Remote Sensing, 12(23), 3951.
May, S., & Latry, C. (2009, July). Digital elevation model computation with SPOT 5 Panchromatic and Multispectral images using low stereoscopic angle and geometric model refinement. In 2009 IEEE International Geoscience and Remote Sensing Symposium (Vol. 4, pp. IV-442). IEEE.
Sellitto, P., Salerno, G., Corradini, S., Xueref-Remy, I., Riandet, A., Bellon, C., ... & Legras, B. (2023). Volcanic emissions, plume dispersion, and downwind radiative impacts following Mount Etna series of eruptions of February 21–26, 2021. Journal of Geophysical Research: Atmospheres, 128(6), e2021JD035974.
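To first order, the parallax principle this abstract relies on converts the apparent offset of a plume feature between the two near-simultaneous views into a height via the base-to-height ratio. A simplified sketch (it neglects residual plume motion between acquisitions, which the sub-second band-to-band timing minimizes, and assumes the offset is measured on orthorectified images):

```python
def plume_height_m(parallax_m: float, b_over_h: float) -> float:
    """Height above the reference surface implied by stereoscopic parallax.

    parallax_m: apparent horizontal offset of the feature between the two
    views, in metres. b_over_h: base-to-height ratio of the two lines of sight.
    """
    if b_over_h <= 0:
        raise ValueError("base-to-height ratio must be positive")
    return parallax_m / b_over_h

# A 50 m apparent offset under B/H = 0.01 corresponds to a ~5000 m high feature.
print(plume_height_m(50.0, 0.01))
```

This also makes plain why a low B/H still works: a high plume produces a measurable offset even for a small angular separation, at the cost of greater sensitivity to correlation noise.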

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: An Aerosol Retrieval Algorithm Regarding the Aerosol Polarization of Both Coarse and Fine Mode

Authors: Weiyuan Yao, Prof Shi Qiu, Shunjing Yu, Zhaoyan Liu, Yuming Dong, Ziyang Li
Affiliations: Aerospace Information Research Institute, Aerospace DFH satellite Co. ,LTD.
Due to human and industrial activities, aerosols in the atmosphere of urban agglomerations are dominated by the fine mode. Polarization detection is an important way to map fine-mode aerosol in urban industrial areas. However, algorithms in the literature for aerosol retrieval from polarized observations often neglect the contribution of coarse-mode aerosol, which introduces errors in the results. Thus, a method for aerosol optical parameter retrieval is proposed that accounts for the polarization of mixed coarse- and fine-mode aerosols. First, considering the depolarization effect of the coarse-mode aerosol, the polarization characteristics of different mixtures of coarse- and fine-mode aerosols are analyzed, and a corresponding polarization look-up table is established. Based on that, combining the total aerosol optical depth and the fine-mode aerosol optical depth obtained from the scalar and polarized observations, respectively, the aerosol model is optimized and the atmospheric polarized reflectance of the fine-mode aerosol is modified. By iteration, high-precision results for the fine-mode aerosol optical depth and fine mode fraction are finally retrieved. The algorithm is then applied to simulated DPC data of the Terrestrial Ecosystem Carbon Monitoring Satellite (CM-1) for model accuracy assessment, and further validated with measured data from this sensor. Compared with methods that ignore the contribution of coarse-mode polarization, the proposed algorithm effectively improves the accuracy of aerosol retrieval. In addition, for DPC data of CM-1 over the Beijing area, the retrieval results of aerosol optical parameters show an improved root mean square error (RMSE) of the fine-mode aerosol optical depth from 0.142 to 0.112, and of the fine mode fraction from 0.196 to 0.156.
The above results demonstrate the reliability of the algorithm, which can support the commercialized operation and product optimization of aerosol retrieval from China’s multi-channel polarization sensors.
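As a minimal illustration of one of the two quantities the iteration converges on, the fine mode fraction is simply the fine-mode share of the total aerosol optical depth (a sketch of the definition only; the look-up-table optimization and polarized-reflectance correction described above are not reproduced here):

```python
def fine_mode_fraction(aod_fine: float, aod_total: float) -> float:
    """Fine mode fraction (FMF): fine-mode AOD divided by total AOD."""
    if aod_total <= 0:
        raise ValueError("total AOD must be positive")
    if not 0.0 <= aod_fine <= aod_total:
        raise ValueError("fine-mode AOD must lie between 0 and the total AOD")
    return aod_fine / aod_total

# E.g. a fine-mode AOD of 0.3 within a total AOD of 0.5 gives FMF = 0.6.
print(fine_mode_fraction(0.3, 0.5))
```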

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Vertical distribution of type-discriminated aerosol concentrations derived from multiwavelength lidar airborne observations

Authors: Fazzal Qayyum, Dr Juan Cuesta, Dr Abou Bakr Merdji, Anton Lopatin, Oleg Dubovik, Dr Richard Ferrare, Dr Sharon P Burton
Affiliations: Univ Paris Est Creteil and Université Paris Cité, CNRS, LISA, F-94010 Créteil, France, GRASP SAS, Lezennes, 59260, France, Laboratoire d’Optique Atmospherique, UMR 8518, Villeneuve d’Ascq, 59650, France, NASA Langley Research Center, Hampton, VA, USA
Atmospheric aerosols are solid and liquid particles of different sizes in suspension in the Earth’s atmosphere. They interact directly with shortwave and longwave radiation emitted respectively by the sun and the Earth by scattering and absorption (aerosol-radiation interaction). They also change cloud microphysical properties and act as condensation nuclei (aerosol-cloud interaction). Standard satellite observations from the space-borne lidar named cloud–aerosol lidar with orthogonal polarization (CALIOP) onboard cloud–aerosol lidar and infrared pathfinder satellite observation (CALIPSO) satellite consist of attenuated backscatter profiles at 532 and 1064 nm and depolarization at 532 nm. These measurements provide qualitative information on a single dominant aerosol type and limited capabilities to derive profiles of the abundance of aerosols. To overcome these limitations and to provide new insights into the interactions between aerosols, clouds, convective processes and precipitation, the future Atmosphere Observing System (AOS) mission comprised of contributions from NASA, CNES, Japanese Aerospace Exploration Agency (JAXA), Italian Space Agency (ASI) and Canadian Space Agency (CSA) is planned in the coming years. In the framework of AOS, we have developed an innovative retrieval approach specifically for multiwavelength lidar observations to derive the aerosol vertical distribution by distinguishing the fraction of distinct types (Smoke, Continental, Oceanic, Dust and Urban polluted) simultaneously present in the atmospheric column and we have previously tested it with synthetic measurements of the mission. In our present work, we will implement our retrieval approach on the airborne measurements acquired from the second-generation NASA Langley Research Center (LaRC), High Spectral Resolution Lidar-2 (HSRL-2) lidar. 
The first tests will be conducted to quantitatively discriminate the vertical profiles of all possible aerosol types (i.e., Smoke, Continental, Oceanic, Dust and Urban Polluted) using the already developed retrieval approach. The approach will also provide an opportunity to retrieve bulk optical and microphysical properties of aerosols. In addition, we will perform aerosol retrievals using single and multiple adjacent lidar profiles to increase the robustness of the approach.
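The idea of distinguishing type fractions in each range bin can be illustrated as a small linear unmixing problem: each observed vector is modelled as a signature matrix times a vector of per-type fractions. Everything below is illustrative; the signature values, observable set and unconstrained solver are assumptions, not the retrieval developed for AOS.

```python
import numpy as np

# Hypothetical per-type optical "signatures" at the observables a
# multiwavelength lidar provides (e.g. backscatter at 532/1064 nm and
# depolarization at 532 nm).  Columns are aerosol types; the numbers
# are illustrative only, not the AOS kernels.
SIGNATURES = np.array([
    [1.00, 0.80, 0.50],   # backscatter 532 nm
    [0.60, 0.90, 0.40],   # backscatter 1064 nm
    [0.05, 0.30, 0.02],   # depolarization 532 nm
])  # shape (n_observables, n_types): smoke, dust, oceanic

def unmix_bin(obs):
    """Least-squares type fractions for one range bin, clipped to >= 0
    and renormalized.  A real retrieval would use a constrained solver
    (e.g. non-negative least squares) and measurement error weighting."""
    x, *_ = np.linalg.lstsq(SIGNATURES, np.asarray(obs, float), rcond=None)
    x = np.clip(x, 0.0, None)
    s = x.sum()
    return x / s if s > 0 else x
```

Applying `unmix_bin` to every range bin of a profile yields the vertical distribution of type fractions, which is the quantity the abstract's approach retrieves (with a far richer forward model).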

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Application of sigma-FORUM to the retrieval of aerosol properties from interferometric acquisitions

Authors: Michele Martinazzo, Tiziano Maestri, Giuliano Liuzzi, Guido Masiello, Lorenzo Cassini, Carmine Serio, Erika Brattich, Andrea Faggi
Affiliations: University of Bologna, Department of Physics and Astronomy, University of Basilicata, Department of Engineering, University 'La Sapienza', Department of Civil, Building and Environmental Engineering
Aerosol particles interact with electromagnetic radiation both in the short-wave and in the long-wave spectral region. Aerosol direct radiative effects, such as scattering and absorption, have important consequences for the energy balance of the planet. At the current state of the art, significant uncertainties remain in aerosol distribution, vertical profile, interactions and microphysical properties, and thus in their radiative forcing. Continued research on aerosol–Earth system interaction is crucial for refining our capability to predict aerosol impacts on local and global scales. Aerosol detection and characterization at infrared wavelengths has received great attention in recent years. This is due to the complementary information carried by this part of the electromagnetic spectrum with respect to the short-wave region, and its sensitivity to different particle sizes (Vandenbussche et al., 2013). Moreover, unlike visible-spectrum retrievals, longwave techniques can be applied both during the day and at night, allowing for continuous monitoring of aerosol evolution. In this work, we present an identification and inversion strategy based on the fast radiative transfer code σ-FORUM (Masiello et al., 2024). σ-FORUM is a model tailored to fast computation of the Earth’s emission spectra in all-sky conditions. The model implements scaling methodologies to account for the presence of scattering layers in the simulated scene (e.g. aerosols) and efficiently computes analytical Jacobians with respect to particle microphysical properties as well as the atmospheric state. A retrieval routine based on σ-FORUM is applied to derive aerosol properties from Infrared Atmospheric Sounding Interferometer (IASI) measurements, with a particular focus on a case study concerning the transport of desert aerosol over the city of Bologna (Italy) during a heat-wave event.
Along with IASI acquisitions, the same inversion strategy is also tested on synthetic ground-based interferometric spectra derived from the atmospheric and aerosol parameters retrieved from IASI. Differences in identification capabilities, sensitivities and effectiveness between the satellite and ground-based approaches are studied. Moreover, possible synergies between satellite products and different ground-based measurements are also explored. This study is preparatory for the deployment of a ground-based commercial interferometer (Bruker EM-27), which is being purchased by the Department of Physics and Astronomy of the University of Bologna as part of the PNRR multi Risk sciEnce for resilienT commUnities undeR a changiNg climate (RETURN) project supported by Next Generation EU.
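The inversion strategy described above (a fast forward model whose analytical Jacobians drive an iterative fit of aerosol and atmospheric parameters) can be sketched as a generic Gauss-Newton update. The linear toy forward model below is a placeholder, not σ-FORUM, and a real retrieval would also weight the fit by measurement and prior covariances.

```python
import numpy as np

# Toy linear forward model y = K_TRUE @ x standing in for the radiative
# transfer code; sigma-FORUM computes the Jacobian K analytically for
# its own (much larger) state vector.
K_TRUE = np.array([[2.0, 0.5],
                   [0.3, 1.5],
                   [1.0, 1.0]])

def forward(x):
    """Simulated measurement vector for state x (placeholder model)."""
    return K_TRUE @ x

def jacobian(x):
    """Jacobian dF/dx; constant here because the toy model is linear."""
    return K_TRUE

def gauss_newton(y, x0, n_iter=5):
    """Iteratively update the state to minimize ||y - F(x)||."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        K = jacobian(x)
        r = y - forward(x)            # measurement residual
        x = x + np.linalg.lstsq(K, r, rcond=None)[0]
    return x
```

For the linear placeholder the first iteration already reaches the solution; with a non-linear forward model like σ-FORUM the same loop converges over several iterations.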

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Towards space-borne lidar data assimilation for atmospheric composition and NWP

Authors: Thanasis Georgiou, Dr Athanasios Tsikerdekis, Dr Alexandra Tsekeri, Dr Anna Gialitaki, Areti Bantouna, Thanos Kourantos, Dr Dimitris Melas, Dr Vassilis Amiridis
Affiliations: National Observatory Of Athens, Royal Netherlands Meteorological Institute (KNMI), Aristotle University of Thessaloniki
Atmospheric aerosols, including desert dust and sea salt, are important components of the Earth’s climate system and affect both the environment and human activity. Through transport, deposition, radiative interactions and acting as cloud condensation nuclei, among other processes, desert dust has a significant impact on weather and ecosystems. However, operational forecasting systems focus on assimilating aerosol optical depth (AOD), which is a columnar quantity and does not constrain the vertical distribution of aerosols. This study investigates the additional impact of assimilating vertical profiles from satellite-borne lidars. The assimilation system used for this study is based on WRF-Chem and DART, an Ensemble Adjustment Kalman Filter (EAKF) implementation. The necessary observation operators to enable backscatter coefficient, extinction coefficient and depolarization ratio assimilation are developed for DART, and assimilation experiments are conducted to study the impact of these observations on atmospheric composition forecasts. In addition, we investigate the impact of atmospheric composition assimilation on NWP through two lenses: the first concerns the radiative impacts of the improved aerosol fields, while the second relates to the possible assimilation of wind information by studying the covariance between the wind and aerosol fields. The team acknowledges support from the AIRSENSE project funded by ESA (Contract No. 4000142902/23/I-NS).
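The Ensemble Adjustment Kalman Filter at the core of DART can be illustrated for a single scalar observation: the observation-space ensemble is shifted and shrunk to the posterior mean and variance, and the resulting increments are regressed onto each state variable. This is a toy sketch of that two-step idea, not the DART implementation; localization and inflation are omitted.

```python
import numpy as np

def eakf_update(ens_state, ens_obs, obs, obs_var):
    """Scalar-observation EAKF increment regressed onto the state.

    ens_state: (n_vars, n_members) prior state ensemble
    ens_obs:   (n_members,) prior observation-space ensemble H(x)
    obs, obs_var: observed value and its error variance
    """
    yb = ens_obs.mean()
    pv = ens_obs.var(ddof=1)                       # prior obs variance
    post_var = 1.0 / (1.0 / pv + 1.0 / obs_var)    # Gaussian product
    post_mean = post_var * (yb / pv + obs / obs_var)
    shrink = np.sqrt(post_var / pv)
    # deterministic obs-space increments: shift mean, shrink spread
    dy = shrink * (ens_obs - yb) + post_mean - ens_obs
    # regress observation increments onto each state variable
    cov = ((ens_state - ens_state.mean(axis=1, keepdims=True))
           @ (ens_obs - yb)) / (len(ens_obs) - 1)
    return ens_state + np.outer(cov / pv, dy)
```

Assimilating a lidar backscatter profile amounts to applying this update once per observed range bin, with the regression step spreading the information to aerosol (and, via covariances, wind) fields.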

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: F.04.16 - POSTER - Sustainable Blue Economy

The global ocean economy is growing faster than the general economy, with global gross value added projected to triple by 2030 relative to 2010. The sectors of activity are multiple, ranging from well-established activities such as fisheries, aquaculture and shipping to emerging fields such as tourism, renewable energy production (from wind, wave, salinity-gradient, tidal, thermal and biomass sources), marine biotechnology and mining of seabed mineral resources. While a flourishing ocean economy depends on sustainable and healthy oceans, exploitation of marine resources has in recent decades put considerable, unsustainable pressure on the marine environment, placing Ocean Health, and by the same token the marine economy itself, under serious threat. At European and international levels, a number of directives, policies and action plans have been put in place to help balance the maritime economy with protection of the marine environment. Satellite data, by providing continuous, global and repetitive measurements of many key parameters of the physical and biogeochemical marine environment, including high-resolution mapping of key marine habitats, are particularly suited to support the development and monitoring of new Blue Economy activities, detect significant changes induced by the start of new activities over a given area, support blue economy operators in their daily operations, and report on the environmental impact of their activities in the frame of existing environmental regulations.
This session welcomes contributions investigating how remote sensing, potentially used in synergy with other information (e.g. in-situ measurements, model outputs), can support the sustainable development of the blue economy sector, in line with current international and European policies.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Earth Observation-Driven Solutions for Sustainable Tourism and Leisure activities in the Blue Economy

Authors: Vera Gastal, Dr. Roberto Sabia, Marie-Hélène Rio, Marc Lucas, Sina Starmans, Stefan Mühlbauer, Pier Braig
Affiliations: CLS, ESA, EOMAP, NovaSpace
Leisure and tourism activities are major drivers of activity in marine and coastal areas and represent a significant contribution to local and national economies. Through the Earth Observation for a Sustainable Blue Economy and BLUERISM projects, ESA has committed to supporting the development of a sustainable Blue Economy. BLUERISM aims at building, with coastal tourism and leisure actors, solutions dedicated to promoting a long-term balance of economic benefits, social inclusion and environmental sustainability. The concentration of leisure, tourism and other economic activities in coastal areas poses serious environmental challenges that are exacerbated by the impacts of climate change. International and European policies address the issue by providing a clear framework that prioritizes the needs and rights of local communities and stakeholders. The definition of a sustainable Blue Economy is thus intimately linked to the monitoring of the coastal environment, whether to assess the impact of activities on the ecosystem, to anticipate the impacts of marine and coastal threats, or to foster optimized and resilient leisure and tourism activities. Although many initiatives and products already exist to support the marine and coastal economies, the challenge is how the information can be shared with and tailored to actors from the sector. BLUERISM proposes several solutions to bridge this gap between scientific and operational actors, addressing four different thematic areas. Based on Earth Observation data, in-situ data and models, these innovative services are co-designed with a community of early adopters. Their needs and priorities are placed at the heart of the solutions to encourage further uptake. CLS and EOMAP have structured four use cases, each combining satellite-based data and innovative technologies such as in-situ camera feeds, crowdsourcing or smartphone applications and alerts.
They are dedicated to monitoring the impact of plastic pollution in high touristic areas, to anticipate and mitigate the consequences of economic and environmental marine threats such as sargassum or marine mucilage, and to enhance planning and safety of leisure activities such as snorkeling. In this poster, we will present the approach developed through BLUERISM to identify the strategic criteria and needs of the tourism and leisure communities, and pinpoint how the satellite-based solutions pave the way for a sustainable and resilient marine and coastal tourism.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: DEEPBLUEx2 - Sustainable Blue Economy in the Fisheries Sector

Authors: Pedro Ribeiro, Nuno Grosso, Amos Barkai, Claudia Moreira, Fabienne Chapelain
Affiliations: Deimos, OLSPS, BA.AI-BESSE
The DEEPBLUEx2 project aims to deliver innovative Earth Observation (EO)-based solutions to support sustainable fisheries operations and monitor their environmental impact. Over the course of two years, the project will co-develop innovative use cases across various Atlantic regions, enhancing Early Adopters' understanding of environmental factors that influence fisheries activities. DEEPBLUEx2 adopts a user-driven co-design approach, ensuring that solutions are tailored to the operational needs of a range of Early Adopters with different profiles: industrial operators, small-scale fishermen, public regulators, NGOs and the maritime insurance sector. Four innovative use cases will be developed to address key challenges faced by these stakeholders, over a variety of geographical settings. Fishing vessels often face inefficiencies in route planning, increasing operational costs, carbon emissions, and safety risks. DEEPBLUEx2's Route Optimization use case focuses on developing a data-driven information bulletin that leverages EO data, AIS tracking, and near real-time analytics to guide captains to optimal fishing locations efficiently. By minimizing time at sea and exposure to adverse weather, the solution will improve operational efficiency, enhance safety, and reduce carbon emissions, aligning with European directives for sustainable fisheries. Small-scale fishers are exposed to incidents and accidents when they operate in hazardous conditions with limited access to risk indicators. The Safety at Sea use case addresses this challenge through short-term weather forecasts, tailored risk indicators, and route planning interfaces accessible via smartphones. Weather and other EO near-real-time (NRT) and forecast data will be integrated into a mobile-based e-logbook solution to minimize incident risk while maximizing operational efficiency. Additionally, an incident prediction model will provide spatialized risk indicators based on vessel characteristics and environmental conditions.
These tools will support both risk mitigation and emergency response, ensuring safer fishing practices for small-scale operators. Another use case will focus on Reducing the Impact of Bycatch Overexploitation. Bycatch remains a critical challenge for sustainable fisheries, contributing to overfishing and ecosystem degradation. DEEPBLUEx2 will develop a predictive bycatch model that identifies hotspot areas using historical fishing statistics and satellite-derived environmental variables. By integrating this model with AIS-based fishing activity data, the solution will allow assessments of the sustainability of fishing operations with different profiles. These insights will empower fishery managers and regulators to proactively monitor and mitigate bycatch impacts, fostering more sustainable practices and compliance with regulatory frameworks. The high-risk nature of fishing operations can result in prohibitive insurance costs, leaving many fisheries directly exposed to risk. The Parametric Insurance for Fisheries use case will combine EO-based historical weather data with vessel characteristics and incident records to develop predictive models that quantify risk, defining environmental safety thresholds for different types of fishing operations. By evaluating the frequency and severity of hazardous conditions, insurance premiums can be calculated, enabling insurers to offer a new product focused on protecting fishermen from loss of revenue due to weather. DEEPBLUEx2 is funded by the European Space Agency’s EO for a Sustainable Blue Economy initiative under the PEOPLE (Pioneer Earth Observation Applications for the Environment) program. The project is led by Deimos, leveraging its extensive experience in EO-based solutions and integrated systems, and combines expertise across the fisheries value chain, with OLSPS, providers of the Olrac e-logbook solution, and BA.AI-BESSE, France's largest insurance consultancy for fisheries.
By delivering practical, vertically integrated solutions co-designed with stakeholders, DEEPBLUEx2 aims to enhance operational efficiency, safety, and sustainability in the sector, ultimately supporting a more resilient and sustainable Blue Economy.
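The parametric pricing logic sketched in the abstract (evaluate the historical frequency of hazardous-threshold exceedances, then derive a premium) can be written as a simple burn-rate calculation. The threshold, payout and loading values below are hypothetical, not project figures.

```python
import numpy as np

def parametric_premium(daily_hazard, threshold, payout_per_day, loading=1.3):
    """Toy burn-rate pricing: premium = expected annual payout x loading.

    daily_hazard: historical daily hazard index (e.g. significant wave
    height).  The threshold, per-day payout and loading factor are
    illustrative placeholders, not values from DEEPBLUEx2.
    """
    daily_hazard = np.asarray(daily_hazard, float)
    # empirical probability that a day exceeds the safety threshold
    p_exceed = (daily_hazard > threshold).mean()
    expected_annual_payout = p_exceed * 365.0 * payout_per_day
    return expected_annual_payout * loading
```

A real product would refine this with severity modelling, per-vessel thresholds and climate non-stationarity, but the core of parametric insurance is exactly this frequency-times-payout calculation.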

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Earth Observations for Sustainable Aquaculture (EO4SA)

Authors: Lasse H. Pettersson, Edson Silva, Antonio Bonaduse, Catherine Downy, Andrey Kurekin, Vincent Martinez-Vicente, Pepe Salgado
Affiliations: Nansen Environmental and Remote Sensing Center, Plymouth Marine Laboratory, Institute of Marine Research (IIM-CSIC)
The Earth Observations for Sustainable Aquaculture (EO4SA) project is a newly funded EO for Sustainable Blue Economy project under the ESA PEOPLE program. The main objective is to consolidate the requirements of the aquaculture industry and monitoring agencies and to demonstrate the information opportunities offered by EO and other data sources. The project will address the information and operational needs of aquaculture activities in coastal waters through four Use Cases, at several locations in Norway, Spain and the Philippines, related to: i. Prediction of salmon lice outbreaks in fish farms. ii. Prediction of toxic Harmful Algal Blooms (HABs) impacting shellfish farming. iii. Optimizing locations and reducing conflicting multi-use between aquaculture and tourism in coastal environments. iv. Mapping of aquaculture structures to avoid unreported and unregulated aquaculture development and exploitation of marine resources. The EO4SA project involves 11 early adopters related to the aquaculture industry, including farms, regulating authorities, financing bodies, and research. EO4SA will conclude with a roadmap for the implementation of EO-based products/services to support aquaculture stakeholders in their future operations and management. The EO4SA project started in December 2024; it is coordinated by the Nansen Environmental and Remote Sensing Center (NERSC) in Bergen, Norway and implemented in cooperation with Plymouth Marine Laboratory (PML), UK.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Global mapping of aquaculture infrastructure using AI and EO imagery

Authors: Catalin Patilea, Dr. Raul Scarlat, Naman Jain, Dr. Meghna Sengupta, Dr. Gopika Suresh
Affiliations: Marble Imaging AG
The expansion of global aquaculture highlights the necessity for efficient monitoring tools to assess its spatial distribution, environmental impact, and regulatory compliance. This study introduces a methodology for detecting and mapping aquaculture farms utilizing Artificial Intelligence (AI) applied to Sentinel-2 multispectral imagery. The primary focus is on using Sentinel-2 data to monitor aquaculture infrastructure. Very High-Resolution (VHR) optical imagery from available sources and, eventually, Marble Imaging satellites, together with publicly available Synthetic Aperture Radar (SAR) data, are planned to be integrated mainly for validation and comparative analysis to assess and optimise the performance of the Sentinel-2-based detections. We tested various convolutional neural networks (CNNs) trained exclusively on annotated Sentinel-2 imagery of aquaculture structures, including offshore cages, floating pens, rafts, and inland pond systems. The training dataset encompassed diverse geographic regions to ensure model generalization across various environmental conditions and aquaculture practices. We are also testing various combinations of spectral bands and indices to optimize the distinction between aquaculture installations and surrounding water bodies, ensuring better feature extraction from Sentinel-2 imagery. Applying the methodology to pilot sites representing a range of aquaculture systems worldwide, the AI models demonstrated the ability to detect aquaculture infrastructure using Sentinel-2's 10-meter spatial resolution data. While the medium resolution presents challenges in identifying smaller or densely clustered structures, significant installations were effectively mapped. To validate and assess the limitations of the Sentinel-2-based detections, VHR optical imagery and SAR data (e.g., Sentinel-1) were used for comparison.
These higher-resolution datasets provided detailed benchmarks, confirming the presence and exact locations of aquaculture facilities identified in the Sentinel-2 imagery. The comparative analysis revealed that, although some smaller-scale infrastructure was beyond the resolution of Sentinel-2, the AI models successfully identified larger aquaculture operations and spatial patterns indicative of aquaculture activity. The use of SAR data is particularly valuable in regions with frequent cloud cover, offering an alternative means to confirm aquaculture structures through their radar backscatter signatures. The results demonstrate that Sentinel-2 imagery, analyzed using AI techniques, provides an effective solution for broad-scale mapping of aquaculture infrastructure. The availability of Sentinel-2 data at no cost supports the scalability and accessibility of the approach, making it well-suited for global applications. By using VHR and SAR data primarily for validation purposes, the methodology achieves a balance between cost efficiency and result reliability. This study contributes to the development of practical monitoring tools essential for sustainable aquaculture management. Providing up-to-date and accurate spatial information on aquaculture distribution supports ecosystem management, resource allocation, and regulatory oversight. Future work will aim to refine the detection algorithms to improve sensitivity to smaller-scale aquaculture systems and explore the integration of temporal analysis using time-series Sentinel-2 data to monitor changes and trends over time. With the availability of Marble Imaging satellites starting in 2024, providing VNIR imagery at 80 cm and SWIR imagery at 6 m, our aquaculture mapping tool will be able to capture more of the detail needed for a sustainable blue economy.
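One common ingredient in separating aquaculture structures from surrounding water in Sentinel-2 imagery is a spectral water index of the kind tested in the band-combination experiments above. The sketch below uses McFeeters' NDWI from the green (B03) and NIR (B08) bands; the 0.2 threshold is illustrative, not a tuned value from this study.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI from Sentinel-2 B03 (green) and B08 (NIR) reflectance.

    Open water has high NDWI; rafts, cages and pond bunds depress it,
    which is one kind of feature a CNN can exploit.
    """
    green = np.asarray(green, float)
    nir = np.asarray(nir, float)
    return (green - nir) / (green + nir + 1e-9)  # epsilon avoids 0/0

def water_mask(green, nir, threshold=0.2):
    """Boolean open-water mask; threshold is illustrative only."""
    return ndwi(green, nir) > threshold
```

In practice such an index would be stacked with the raw bands as an extra input channel to the CNN rather than used as a stand-alone classifier.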

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: DIOMEDEO - Datasets for InnOvative Marine Energy Developments from EO

Authors: Kim Knauer, Rainer Ressl, Thomas Heege, Knut Hartmann, Eva Haas, Benjamin Lehner, Vincent Bonnin, Pablo Bonnin, Brendan Cahill, Renaud Lafortune, Rune Pilgaard Bloom, Ruth Bloom
Affiliations: EOMAP GmbH, DMEC, ORPC, Crestwing
Marine Renewable Energy (MRE) is increasingly recognized as a crucial component of global energy transition strategies, offering vast potential to reduce carbon emissions, diversify energy sources, and contribute to energy security. Despite its enormous theoretical potential, the development of MRE remains in its early stages compared to other renewable energy sectors like terrestrial wind and solar. This is primarily due to technological hurdles, regulatory challenges, and the complexity of marine environments. Key sectors within MRE include wave energy, tidal energy, salinity gradient power, offshore wind, and ocean thermal energy conversion (OTEC). Each of these sectors faces unique challenges but also offers significant opportunities for sustainable energy production. Given the escalating urgency to combat climate change and reduce fossil fuel dependency, enhancing MRE’s contribution to global energy supply is a strategic priority for many countries and international organizations. The DIOMEDEO project, led by EOMAP in collaboration with industry experts and Early Adopters, aims to support the MRE industry by integrating advanced Earth Observation (EO) technologies. DIOMEDEO directly addresses the EU’s objectives of supporting a sustainable Blue Economy through data-driven decision-making and regulatory compliance. By providing high-resolution geospatial data and advanced analysis, DIOMEDEO offers solutions tailored to different MRE sectors across a diverse set of pilot sites, including wave, tidal and river hydrokinetic energy installations, salinity gradient plants, and offshore wind farms, ensuring optimized site selection, reduced operational costs, and minimized environmental impacts.
DIOMEDEO’s potential for MRE technologies lies in its ability to deliver innovative, EO-based services that enhance operational efficiency, improve environmental monitoring, and support compliance with national and international regulatory frameworks such as the EU Taxonomy and Marine Strategy Framework Directive. By leveraging state-of-the-art EO technologies, including data from current and future Sentinel missions, and fostering collaboration with key industry players, the project aims to close critical data gaps and enable near real-time monitoring capabilities. The integration of EO-derived insights into Maritime Spatial Planning (MSP) and site selection processes will empower MRE operators to optimize resource utilization while ensuring sustainability. With its focus on scalability and replicability, the DIOMEDEO solution has a high potential for accelerating the adoption of MRE technologies across different maritime regions, contributing to global efforts toward a sustainable and resilient energy future. DIOMEDEO is funded by ESA through the “EO for Sustainable Blue Economy – Marine Renewable Energies EXPRO+” program and is running from 12/2024 to 12/2026.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: A.05.02 - POSTER - Earth observation for Climate Services

Earth Observation has considerable potential to inform and facilitate climate service development globally. Long-term, sustainable observations of Essential Climate Variables are the cornerstone of our ability to monitor climate change and climate risks.
These services can in turn be used to support implementation of the Paris Agreement, Nationally and Internationally set goals (such as the European Green Deal, Nationally Determined Contributions etc), and Monitoring, Reporting and Verification of adaptation and mitigation efforts.

However, in order to truly support decision making, this wealth of information must be supplemented by initiatives serving user needs, providing, for example, sectoral or regional-scale information in a timely, trusted and transparent fashion.

Climate services form the critical link between climate information and decision-making communities of practice. Development of such services must be underpinned by the co-production, with both the end-user and data provider communities, of user requirements, research and data needs, and intermediate products.

These services fundamentally add value to EO information, such that it can be used for decision making. EO-based service development may include (but is not limited to):
• Sustained and sustainable data provision, quality assurance and accessibility
• Research and development on existing data products to produce value added services either directly for decision makers, or for intermediaries in the climate services value chain
• Scoping and requirements gathering for new observational products, where there is a user need
• Stakeholder engagement at all levels of the value chain, as part of a co-production process
• Cross-disciplinary collaboration and research, including integration of economic or social information
• Progress towards policy goals such as implementation of the Paris Agreement, NDCs and MRV systems

This session seeks submissions related to all aspects of climate service provision and development.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Tracking rice residue management and fire emissions using multi-source EO

Authors: Nathan Torbick
Affiliations: MittiLabs
Climate-smart agriculture has emerged as a focal theme in sustainability, with a key goal of mitigating greenhouse gas (GHG) emissions. Rice residue management and burning emissions are one core area where the community is developing MRV precision and enhancements. This talk will focus on the use of multi-source EO for residue and fire quantification at rice cropland hot spots in Arkansas and NW India. Fused optical (Planet-HLS), multi-frequency SAR (C-, L-, X-band), and model products (SMAP, GFED, VIIRS, GloCARB) were harmonized into datacubes across landscape experiments measuring residue biomass, fire, burn type, and emissions. Scattering decomposition, physical retrievals, and time-series indices were used to describe field attributes and feed a deep learning framework. Outcomes include fire intensity, burn type (partial, full, pile), biomass and change, and a suite of accuracy measures. F1, recall/precision, and RMSE are around 0.87, 0.82, and 6%, respectively, for fire, burn type, and biomass using a blended SAR-optical ResUnet architecture. Strengths and limitations, as well as broader transferability toward operational MRV, will be shared.
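Accuracy measures like those quoted above (F1, recall/precision, and RMSE expressed as a percentage of the observed mean) can be computed from a validation set in a few lines. The sketch below is generic; the arrays passed in would be the study's validation labels and biomass values, which are not reproduced here.

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Binary F1 from 0/1 labels (e.g. burned / not burned pixels)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

def rmse_percent(y_true, y_pred):
    """RMSE as a percentage of the mean observed value (e.g. biomass)."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return 100.0 * np.sqrt(np.mean((y_true - y_pred) ** 2)) / y_true.mean()
```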

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Enhancing Climate Action: Testing the Capacity of Monitor-EO, a new toolbox for Monitoring and Evaluating Restoration Impacts in Carbon Offset Projects

Authors: Lorenzo De Simone, Vivian Ondieki, AnaPaula de la O
Affiliations: Agrifood Economics and Policy Division (ESA), Food and Agriculture Organization of the United Nations (FAO); Statistics Division (ESS), Food and Agriculture Organization of the United Nations (FAO)
Climate change adaptation and mitigation efforts require robust Monitoring, Reporting, and Verification (MRV) systems to evaluate the effectiveness of restoration activities and their contributions to Nationally Determined Contributions (NDCs). We tested the capacity of the Monitor-EO tool to measure impacts and infer causality in restoration efforts across 320 nature-based carbon offset projects from a global database. The analysis utilized key Earth Observation (EO) indicators (the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), and Land Surface Temperature (LST)) to assess ecosystem recovery and resilience. The study demonstrates Monitor-EO’s ability to detect significant ecological trends across diverse project scales and contexts. Long-term monitoring efforts (>10 years) and smaller-scale projects (<1,000 hectares) showed stronger and more consistent trends, emphasizing the importance of sustained and localized restoration efforts. However, findings such as misalignments between vegetation recovery (NDVI) and water retention (NDWI) highlight the need for multi-indicator approaches to address ecosystem trade-offs and refine policy design. The tool’s automated Difference-in-Differences (DiD) analysis and trend detection capabilities further support its application in evaluating restoration impacts with rigor and scalability. From a policy perspective, Monitor-EO bridges critical gaps in MRV systems by offering a standardized, replicable framework for integrating EO data into evidence-based climate policies. By aligning restoration interventions with measurable NDC objectives, it equips policymakers with actionable insights to enhance transparency, accountability, and scalability. Additionally, the findings underscore the importance of embedding EO technologies into policy frameworks to ensure restoration efforts achieve both ecological and socio-economic outcomes.
This research highlights the transformative potential of EO tools like Monitor-EO in supporting global climate policies and advancing the Paris Agreement goals. It sets a precedent for leveraging EO technologies to strengthen MRV systems, foster resilience, and enable evidence-driven climate action.
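The automated Difference-in-Differences analysis mentioned above rests on a simple estimator: the change in an indicator (for example mean NDVI) over treated project areas, minus the change over matched control areas. A minimal sketch with hypothetical indicator values:

```python
import numpy as np

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences on mean indicator values:
    (treated_post - treated_pre) - (control_post - control_pre).

    Each argument is a sequence of per-pixel or per-site indicator
    values (e.g. NDVI) for the pre- or post-intervention period.
    """
    return ((np.mean(treated_post) - np.mean(treated_pre))
            - (np.mean(control_post) - np.mean(control_pre)))
```

A positive estimate attributes the extra indicator gain to the restoration intervention, under the usual parallel-trends assumption; a full analysis would add standard errors and covariates.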

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Developments Towards ESA CCI Lakes Gap-Filled Temperature and Ice Cover Products: Lake Surface Water Temperature

Authors: Laura Carrea, Prof Christopher J. Merchant, Dr Shaerdan Shataer, Dr Niall McCarroll, Mr Samuel J. Johnston, Claude Duguay, Justin Murfitt, Dr Clement Albergel
Affiliations: University Of Reading, National Centre for Earth Observations, H2O Geomatics, University of Waterloo, European Space Agency Climate Office ECSAT
Lake surface water temperature (LSWT) is a thematic product of Lakes, identified by the Global Climate Observing System (GCOS) as an essential climate variable (ECV) for climate monitoring. Robust daily estimates of LSWT have a wide range of potential applications, including improvements to numerical weather prediction (NWP) models at regional scales, to capture day-to-day variation in global lake surface temperatures. As part of the European Space Agency’s (ESA) Climate Change Initiative (CCI) on Lakes, an LSWT product is being generated for 2,024 lakes, along with lake ice cover (LIC) and thickness, lake water level and extent, and lake water-leaving reflectance, for multivariate analyses of the response of lakes to global climate change. However, the number of LSWT observations derived from satellite at 1/120° resolution is affected by cloud cover. Additionally, the size of the satellite swath can further limit complete daily global coverage. These factors affect the daily volume of data and therefore the full coverage of the LSWT product needed for climate studies and weather forecasting. This work aims to explore and establish post-processing methodologies for producing consistent gap-filled LSWT and LIC products. A separate companion presentation covers details of the gap-filling approaches considered for the LIC product. For the reconstruction of LSWT, two different approaches are explored. One is traditionally used for reconstructing surface temperature and is based on empirical orthogonal functions with iterative optimisation based on cross-validation (DINEOF). The other is based on a machine learning algorithm, DINCAE (Data Interpolating Convolutional Auto-Encoder), which relies on a convolutional neural network architecture and is therefore suitable for modeling non-linear spatial and temporal relationships.
The two approaches are explored on a selection of 120 lakes, sampled from the full CCI Lakes dataset to be representative of the global distribution. Lakes were first clustered based on lake size, geographic location, spatial LSWT variation, lake color, volume, and persistence of missing observations, and the final selection sampled evenly from these clusters. To identify the most suitable methodology, artificial gaps are introduced into the LSWT observation maps for each of the 120 lakes. The augmented LSWT maps are optimised for each of the 120 lakes and verified manually. In addition, in situ lake temperature data, where available, are used to further consolidate the approach and its parameter settings. The latest developments towards a gap-filled LSWT product will be presented. The LSWT/LIC gap-filled product for the full set of 2,024 lakes is expected to be available to the climate and weather forecasting communities and other users starting in March 2026.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: The High Resolution monitoring of Water, Snow and Ice over Europe by the Copernicus Land Monitoring Service (CLMS HR-WSI)

Authors: Florence Marti, Zacharie Barrou-Dumont, Robin Buratti, Manuel Da Costa Cantante, Matthieu Denisselle, Anne-Sophie Dusart, Michaël Ablain, Joël Dorandeu, Simon Gascoin, Olivier Hagolle, Markus Hetzenecker, Gabriele Schwaizer, Andrea Scolamiero, Thomas Nagler, Cemal Melih Tanış, Golda Prakasam, Kari Luojus, Tanja Gasber, Christian Gruber, Adam Pasik, Joanna Przystawska, Lorenzo Solari, Guillaume Eynard Bontemps, Thérèse Barroso
Affiliations: Magellium, CESBIO (CNRS, IRD, CNES, Univ. P. Sabatier), ENVEO IT GmbH, Finnish Meteorological Institute, GeoVille GmbH, European Environment Agency EEA, Centre national d'études spatiales CNES
Since July 2020, under the delegation of the European Environment Agency (EEA), the Copernicus Land Monitoring Service (CLMS) has been operationally producing and disseminating High-Resolution Snow & Ice (HR-S&I) products in near-real time. To further enhance accuracy and meet the evolving needs of users, the product portfolio is being expanded within the High-Resolution Water, Snow & Ice (HR-WSI) project. The HR-WSI portfolio took over from the HR-S&I portfolio at the end of 2024 and now includes annual and multi-annual water extent products. Snow cover and lake ice cover have both been specified by the Global Climate Observing System as part of the 50 essential climate variables (ECVs) to be monitored via satellite remote sensing. These variables play a crucial role in the water cycle and surface energy fluxes, and serve as key indicators of global climate change. Understanding them is essential for various sectors, both public and private, as well as for scientific research in areas such as weather forecasting, hydrology, and water management. They are particularly important for assessing and managing natural hazards such as floods, avalanches, and river ice jams. The HR-WSI products provide dedicated information to support these applications. The HR-WSI portfolio offers pan-European coverage, using high-resolution satellite data derived from Sentinel-2 optical imagery and Sentinel-1 radar (SAR) acquisitions. The Fractional Snow Cover (FSC) product provides snow cover fraction data for each non-cloudy pixel. In mountainous regions, the SAR Wet Snow (SWS) product describes wet snow extent, while the Wet Dry Snow (WDS) product identifies snow surface states through optical observations. These products are available in high resolution (20 m or 60 m) and are delivered in near-real time (NRT), typically within a few hours of a satellite observation.
The daily cumulative Gap-filled Fractional Snow Cover (GFSC) product combines optical and radar data from the same day with previous days' data to provide a comprehensive snow cover map. Additionally, the HR-WSI portfolio includes an annual product that characterizes snow cover conditions, such as snow cover duration, onset, and melt-out dates throughout the hydrological year (Snow Phenology - SP). Ice occurrences on the European river and lake network are described by the new Water/Ice Cover (WIC) products, which rely on either Sentinel-1 or Sentinel-2 data. The CLMS HR-WSI service is being developed and operated by a consortium led by Magellium in partnership with CESBIO, ENVEO, FMI, and GeoVille, contracted by the EEA as the entrusted entity of DG-DEFIS (European Commission - Directorate-General for Defence Industry and Space). It builds on pre-existing algorithms and software developed by CESBIO with the support of CNES, FMI, ENVEO, and Magellium, integrated into an operational system hosted on the WEkEO DIAS European cloud infrastructure. HR-WSI monitoring products are freely available and will soon also cover the Sentinel-1 and Sentinel-2 archive from September 2016 onwards, as these observations are currently being processed by the HR-WSI production system. For more information, visit the CLMS portal at land.copernicus.eu. The presentation will focus on detailing this new component of the CLMS, with particular emphasis on snow products and their applications.
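The cumulative compositing idea behind a GFSC-style product (carry forward the most recent valid observation for each pixel) can be illustrated with a minimal sketch. The `max_age` window and the NaN-for-cloud convention are assumptions for illustration; the operational algorithm additionally merges optical and radar inputs:

```python
import numpy as np

def cumulative_gapfill(stack, max_age=7):
    """Daily cumulative gap filling: for each pixel on each day, keep the
    most recent valid (non-NaN) observation from the last `max_age` days.
    `stack` has shape (days, rows, cols); NaN marks clouds / no data."""
    days, rows, cols = stack.shape
    filled = np.full_like(stack, np.nan)
    last_val = np.full((rows, cols), np.nan)
    last_day = np.full((rows, cols), -(10 ** 9), dtype=float)
    for d in range(days):
        fresh = ~np.isnan(stack[d])
        last_val[fresh] = stack[d][fresh]     # remember newest observation
        last_day[fresh] = d
        out = last_val.copy()
        out[(d - last_day) > max_age] = np.nan  # too old: leave as a gap
        filled[d] = out
    return filled

# A cloudy 3-day toy series for a single-pixel 'tile':
stack = np.array([[[50.0]], [[np.nan]], [[np.nan]]])
filled = cumulative_gapfill(stack)   # cloudy days inherit day 0's value
```

Lowering `max_age` makes the composite stricter: with `max_age=1`, the third day would remain a gap instead of inheriting the two-day-old value.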

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Lake Ice Monitoring as a Climate Service: Integrating Earth Observation and Stakeholder Feedback

Authors: Kirsikka Heinilä, Dr.(Tech.) Sari Metsämäki, MSc Sakari Väkevä, MS Hanna Alasalmi, M.Sc.(Tech.) Timo Pyhälahti, M. Sc. Jesse Anttila, Dr. Kari Luojus, Gabriele Schwaizer
Affiliations: Finnish Environment Institute, Finnish Meteorological Institute, ENVEO IT GmbH
Monitoring lake ice cover is vital for understanding, protecting, and managing freshwater ecosystems, as well as for ensuring safety and transportation in regions with seasonal ice cover. Importantly, recognized as an Essential Climate Variable (ECV) due to its sensitivity to temperature fluctuations and trends, lake ice provides critical information for advancing climate services and understanding climate change impacts. Significant changes in lake ice dynamics have already been observed in northern latitudes, with even more pronounced impacts predicted as the climate warms. Earth Observation (EO) data provide a unique opportunity to monitor lake ice conditions at spatial and temporal scales previously inaccessible. The Copernicus programme delivers near-real-time satellite data that underpins both scientific research and practical applications. These products not only support studies of regional and global lake ice dynamics but also contribute to ECV datasets, which are foundational for advancing EO-based climate services. Starting in 2025, Copernicus will expand its medium-resolution (500 m) daily lake ice data from the Northern Hemisphere to include the Southern Hemisphere within its Copernicus Land Monitoring Service portfolio. This product, produced collaboratively by the cryosphere team comprising the Finnish Environment Institute (Syke), the Finnish Meteorological Institute (FMI), and ENVEO, is based on the ICEmod method developed by Syke. It exploits Sentinel-3 Sea and Land Surface Temperature Radiometer (SLSTR) data to assess lake ice extent while integrating cloud detection as part of the classification process.
Additionally, the high-resolution lake ice products under the Pan-European Copernicus Water, Snow, and Ice (HR-WSI) initiative are undergoing quality improvements, with Syke and the NORCE Norwegian Research Centre conducting extensive independent validation during spring 2025. Through the Horizon 2020 Arctic PASSION project, Syke has developed the Lake Ice Service, a comprehensive climate service that integrates satellite-based products, true-color imagery, and ground-based observations into user-friendly tools. Designed in collaboration with stakeholders (including local communities, researchers, and hydropower operators), the service addresses diverse user needs through co-production workshops, interviews, and feedback mechanisms. The increasing urgency of climate change and its profound impact on seasonal ice conditions drive both Indigenous communities and researchers to seek actionable lake ice data to adapt to and mitigate these rapidly changing environmental conditions. Key innovations include the Highlighting Interesting Phenomena (HISP) tool, which enables users to annotate satellite images with reports of unusual ice conditions, contributing to a shared database of local knowledge. A mobile application under development will also allow offline reporting, addressing the connectivity challenges of Arctic regions. By integrating state-of-the-art EO technologies with stakeholder-driven design, the Lake Ice Service exemplifies how EO-based climate services can bridge the gap between scientific knowledge and actionable insights for decision-making. This presentation will showcase the development of the Lake Ice Service and the Copernicus lake ice products, highlighting their role in supporting and advancing EO-based climate services.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Using ESA CCI ECVs to Provide Support for and Guide National and International Climate Action

Authors: Dr Richard Jones, Amy Doherty
Affiliations: Met Office
Earth Observation (EO) from satellites is now a mature science, entering its sixth decade for some variables. As such, it offers a wealth of knowledge for monitoring past and present climate. Monitoring from EO can inform policy in a number of ways: by providing evidence of trends in our changing climate; by providing support for and guiding national and international climate action; and by enabling monitoring of greenhouse gas emissions. Hazard monitoring and early warning of tipping points in the climate system and of High Impact Low Likelihood (HILL) events can also be provided by satellite observations, either directly or when integrated with other data sources such as in situ and ground-based observations and/or models. Monitoring of various parts of the Earth system, such as the biosphere, land use, air quality, or human actions, can also allow policy makers to understand the effects of various adaptation and mitigation measures. This talk will summarise the policy-relevant work being carried out within the European Space Agency (ESA) Climate Change Initiative (CCI), highlighting recent results from research carried out by the Climate Modelling User Group (CMUG). This will include the work in CMUG demonstrating and supporting the application of EO data to provide robust assessments of climate and Earth system changes, and science studies which utilise the latest ECV (Essential Climate Variable) datasets produced in the CCI. Many cutting-edge techniques are implemented in these studies, such as explainable AI (Artificial Intelligence), digital twins, and the latest observational and retrieval techniques, making observations and study outputs more reliable and trustworthy for guiding policy decisions. [Note that exactly which work from other CCI projects beyond CMUG will be highlighted in this talk will depend on to what extent it will be included in other contributions to this session.]

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Monitoring the health of the Mar Menor through Sentinel-2 for a decade

Authors: Sergio Heredia Carmona, Paola Barba, Mar Roca, Gabriel Navarro, Isabel Caballero
Affiliations: ICMAN, CSIC
Introduction
Lagoons, as transitional environments between land and sea, occupy 14% of the world's coasts (Barnes et al., 1980). Due to their shallow waters, morphology, trophic state, and the physicochemical processes of a semi-enclosed system, they are considered among the most productive habitats on Earth (UNESCO, 1981; Kennish et al., 2010). These areas play an important ecological role and their proper conservation is vital, as they host an important part of the world's biodiversity (De Wit et al., 2011). These environments are extremely sensitive and vulnerable, and the anthropic pressures to which they are subjected leave signs of deterioration, altering their ecological functioning and therefore their ecosystem services (Contreras et al., 2004). In this context, the Mar Menor, located in Murcia (Spain), is the largest hypersaline lagoon in Europe. Despite the importance recognised by its 10 environmental protection designations, it is an alarming case of ecological collapse, which we have monitored over a decade in this study. This severe degradation is mainly due to eutrophication driven by excess nutrients from agricultural practices in the surrounding area (Ruiz-Fernández et al., 2019) and by several extreme events, such as the Gota Fría that occurred in the second week of September 2019 (Caballero et al., 2019). A Python-based tool was developed with which we can download satellite images, correct for atmospheric influence, and generate water quality variables automatically and operationally. This tool, called Marine Observatory, allows us to perform extensive studies in space and time in any region of the world. In this case, the objective is to study the influence of the massive agriculture located in the vicinity of the Mar Menor on the lagoon.
Material and Methods
Data: For the study, we used in-situ chl-a and turbidity measurements performed by IMIDA (Murcian Institute for Agricultural and Food Research and Development) from June 6, 2017 to June 18, 2024 with a Sea-Bird Electronics SBE-19 equipped with three fluorimeters. These measurements were carried out 4 to 5 times, each time at 12 specific points. In addition, using the Marine Observatory, we processed images from the European Space Agency's Copernicus Sentinel-2A/B satellites from July 6, 2015 to October 16, 2024.
Processing: For the in-situ measurements, at each point we only considered cases where at least 100 measurements were taken between 0-1 m on the same day. We then eliminated outliers with the z-score method, removing values with z greater than 3, and finally selected the mean value. For the processing of the Sentinel-2 images, we first downloaded the images through the Copernicus API. Next, we corrected the influence of the atmosphere with the ACOLITE tool (Vanhellemont, Q., 2019). Once the images were corrected, we cropped them to the spatial extent of the study region and finally calculated chl-oc3 and turbidity-nechad-665nm.
Results
Once the data were processed, we compared in-situ and satellite values by means of two linear regressions: one for chl-a, with an R-squared of 0.526, and another for turbidity, with an R-squared of 0.247. The regressions show that satellite data can serve as a minimally reliable estimator of chlorophyll, but for turbidity other models must be sought. Subsequently, we generated a new satellite image where each pixel is the median value across all processed images; each pixel comprises all Sentinel-2 bands plus chl-oc3. Finally, this new image was fed into the K-means unsupervised learning model with K = 2.
In the resulting clustering, the pixels closest to the coast formed one group and the interior of the lagoon another. In the first group, a plume could be seen extending towards the interior of the lagoon, clearly indicating the influence of agricultural nutrients discharged into the lagoon.
Conclusion
These results demonstrate the power of the tool, which makes it possible to perform studies of high computational complexity in a simpler and more operational way.
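The processing chain described above (z-score outlier screening of the in-situ profiles, a per-pixel median composite, and K-means with K = 2) can be sketched as follows. The tiny two-band "images" and the minimal K-means implementation are illustrative stand-ins for the actual Marine Observatory pipeline:

```python
import numpy as np

def zscore_filter(values, z_max=3.0):
    """Drop values with |z| > z_max, then return the mean,
    as described for the in-situ profiles."""
    v = np.asarray(values, dtype=float)
    z = (v - v.mean()) / v.std()
    return v[np.abs(z) <= z_max].mean()

def kmeans2(pixels, n_iter=20):
    """Minimal K-means with K=2 on a (n_pixels, n_features) array,
    initialised deterministically at the feature-wise min and max."""
    centers = np.stack([pixels.min(axis=0), pixels.max(axis=0)])
    for _ in range(n_iter):
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels

# Median composite of three repeated 'images' (pixels x bands),
# then a two-cluster split (e.g. coastal plume vs. lagoon interior):
imgs = np.stack([np.array([[0.1, 0.2],
                           [0.9, 1.0],
                           [0.8, 1.1],
                           [0.1, 0.3]])] * 3)
composite = np.median(imgs, axis=0)
labels = kmeans2(composite)
```

Here pixels 0 and 3 (low values) end up in one cluster and pixels 1 and 2 (high values) in the other, mirroring the coast-versus-interior split reported in the abstract.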

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: ESA CCI Soil Moisture: making a 45-year climate data record fit for novel climate applications

Authors: Wouter Dorigo, Wolfgang Preimesberger, Pietro Stradiotti, Richard de Jeu, Dr Nemesio Rodriguez-Fernandez, Richard Kidd, Bethan Harris, Johanna Lems, Maud Formanek, Alexander Gruber, Oto Alves, Wolfgang Wagner, Dr. Clément Albergel
Affiliations: TU Wien, Transmissivity B.V., CESBIO, EODC GmbH, ESA ECSAT, UKCEH
ESA CCI Soil Moisture (SM) is a collection of long-term global Climate Data Records of surface soil moisture, derived from satellite observations in the microwave domain. The most recent and complete version, ESA CCI SM v9.1 COMBINED, merges observations from a total of 19 past and current satellite microwave radiometers and scatterometers into a harmonized record covering the period 1978-2024. Within the Copernicus Climate Change Service (C3S), the soil moisture data records are extended every ten days to provide input data for time-critical applications like drought monitoring. Over the past 15 years, ESA CCI SM data have been widely adopted by the scientific user community to gain new insights into the water, energy, and carbon cycles over land, understand land surface-atmosphere hydrological feedbacks, assess the impact of climate change on the occurrence of climatic extremes, and evaluate and improve model simulations. Nonetheless, with scientific progress in these disciplines and the boom in machine learning and artificial intelligence, user requirements have evolved beyond what is offered by the “traditional” ESA CCI SM products. In this presentation we focus on recent scientific developments intended to make ESA CCI SM analysis-ready for novel climate and hydrological applications. These include: i) filling spatial and temporal gaps, necessary to make the data fit for modern ML and AI applications; ii) providing estimates of root-zone soil moisture, to promote its use for ecohydrological applications; iii) making the dataset entirely independent of any model and ancillary data, so that it is complementary to any other dataset; and iv) increasing its spatial resolution (goal: 0.1°) and temporal resolution (sub-daily). The latter in particular is expected to allow new insights into land-atmosphere interactions and their broader impact on atmospheric circulations.
Preparing for these new scientific products entails, on the one hand, revisiting existing methods and addressing still-open questions and, on the other hand, exploring the potential of new methodologies, such as those offered by GNSS reflectometry. With these new developments, ESA CCI SM aims to fulfill current user needs and to be prepared for the integration of novel cornerstone missions like CIMR and MetOp Second Generation.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Mapping of Winter Catch Crop Types in Germany Using Sentinel-1 SAR Time Series

Authors: Shanmugapriya Selvaraj
Affiliations: Julius Kuehn Institute, Hochschule Geisenheim University
Catch crops are intermediate crops grown between two main crop growing cycles and act as soil cover throughout winter. Their adoption into cropping systems has significantly increased in recent years due to numerous benefits, especially their potential for carbon sequestration and for preventing nitrogen leaching during winter. In view of this potential for climate change mitigation, spatially explicit, regular and timely monitoring of catch crops is required as a basis for site-specific assessment and decision support for crop rotation practices. Satellite remote sensing enables the mapping of multiple crop types, including catch crops, by providing recurring, wide-area observations to monitor land use and changes. In Germany, catch crops are mostly grown as winter cover crops between July and February. During this period, the chance of obtaining usable optical data is rather limited due to high cloud coverage and low irradiance. Alternatively, radar sensors can acquire images regardless of the weather, at any time of day or night, and possess unique sensitivity to crop structure and dielectric constant. Hence, this study investigates the potential of Sentinel-1 (S-1) radar data to identify and classify different catch crop types. This research was conducted over the greater Brunswick region of Lower Saxony, Northern Germany. In-situ reference data on catch crops were gathered through multiple field campaigns each year (2021, 2022, and 2023) from August to January, yielding a total of 255, 243, and 488 samples, respectively. For each sampled field, data on catch crop types or seed mixes, growth stage, crop coverage, tillage practices, and field photographs were recorded. Additionally, non-catch crop fields of similar size ranges were identified annually using the Integrated Administration and Control System (IACS).
Radar parameters such as gamma0 VV and VH backscatter coefficients, the gamma0 VH/VV cross-polarization ratio, and DpRVI (Dual-pol Radar Vegetation Index) were generated from July to March, and temporal profiles of winter catch and main crops were interpreted. To evaluate classification performance and assess temporal transferability, 10-day median composites were calculated for each parameter across all years. Furthermore, we applied an automated temporal segmentation approach to each of the four parameter time series, defining 22 descriptive features (including phenological metrics) that capture the seasonal characteristics of specific catch crop types. The proposed classification framework employed a two-level RF model. In level-1, we distinguished catch and non-catch crops using S-1 time series data (VV, VH, VH/VV, and DpRVI) from July to December 2021, despite the catch crop life cycle extending through February. Comparative analysis showed that limiting the time series to this period yielded similar classification accuracy to the full cycle while significantly reducing computational demands. In level-2, we used the 22 class-specific descriptive features as input for classifying the specific catch crop types. To prevent spatial autocorrelation from inflating model performance estimates, we used a spatial 5-fold cross-validation approach. Training and testing data were split in a 70:30 ratio. To address class imbalances in minority crop classes in level-2, we applied the Geometric Synthetic Minority Over-sampling Technique (G-SMOTE) to the training data to enhance model performance for underrepresented crops. The temporal profiles extracted from the parameters (VV, VH, VH/VV, and DpRVI) revealed unique patterns for winter catch crops, emphasizing distinct differences in their phenological cycles compared to winter main crops.
These phenological contrasts facilitated effective differentiation between catch and non-catch crops in the level-1 classification, achieving a high overall accuracy (OA) of 94.9%. Feature importance analysis indicated that time steps in August, November, and December contributed most significantly to the model's performance. In the level-2 model, a catch crop mask generated in level-1 was used to isolate fields with catch crops for further classification. This stage focused on identifying six specific catch crop types (mustard, radish, mixture, phacelia, niger, and vetch) using a set of 22 crop-specific descriptive features. The model achieved an OA of 85.7%. Feature importance analysis highlighted DpRVI_sum, VH/VV_sum, VH/VV_mean, and Start of Season (SOS) as key contributors to the level-2 classification. Using the established two-level RF model from 2021, we assessed the model's temporal transferability on data from 2022 and 2023. In level-1 classification, distinguishing catch and non-catch crops, the model achieved an accuracy of 93% for 2022 and 88% for 2023. For level-2 classification, which identifies specific catch crop types, accuracy reached 81% for 2022 and 74% for 2023. However, certain catch crop types, such as radish, niger, and vetch, showed lower recall rates, ranging between 40% and 65% in 2022 and 2023. Despite these challenges, the model effectively identified the other catch crop types, underscoring SAR's ability to capture crop-specific structural characteristics and highlighting this approach as a promising SAR-based method for catch crop mapping. Based on our classification results, catch crop area was evaluated. In 2021, results showed an area share of 6.5%, which increased to 7.7% in 2022, followed by a decrease to 4.8% in 2023. The observed reduction in catch crop area in 2023 compared to 2022 aligns with significant reforms introduced under the EU Common Agricultural Policy (CAP) starting in January 2023.
The updated CAP policy implemented stricter requirements for subsidy eligibility, with a focus on eco-schemes and crop diversification. These changes may have led some farmers to shift their focus to other environmental practices or reduce catch crop planting to comply with the revised CAP conditions.
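The two-level cascade with spatial cross-validation can be sketched as below. A nearest-centroid classifier stands in for the Random Forests, the synthetic features and block layout are assumptions for illustration, and the G-SMOTE resampling step is omitted; only the cascade structure (level-1 mask feeding level-2) and the block-wise hold-out are the point:

```python
import numpy as np

class NearestCentroid:
    """Deliberately simple stand-in for the Random Forests of the abstract."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

def spatial_folds(block_ids, n_folds=5):
    """Spatial cross-validation: whole blocks of neighbouring fields are
    held out together, so nearby samples never span train and test."""
    blocks = np.unique(block_ids)
    for test_blocks in np.array_split(blocks, n_folds):
        test = np.isin(block_ids, test_blocks)
        yield ~test, test

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))             # stand-ins for VV, VH, VH/VV, DpRVI features
is_catch = (X[:, 0] > 0).astype(int)      # synthetic level-1 labels
crop_type = (X[:, 1] > 0).astype(int)     # synthetic level-2 labels
block_ids = np.repeat(np.arange(20), 10)  # 20 spatial blocks of 10 fields each

acc = []
for train, test in spatial_folds(block_ids):
    lvl1 = NearestCentroid().fit(X[train], is_catch[train])
    acc.append((lvl1.predict(X[test]) == is_catch[test]).mean())
    catch_train = train & (is_catch == 1)          # level-2 trains on catch crops only
    lvl2 = NearestCentroid().fit(X[catch_train], crop_type[catch_train])
    mask = lvl1.predict(X[test]) == 1              # level-1 catch-crop mask
    crop_pred = lvl2.predict(X[test][mask])        # crop types inside the mask
```

Holding out whole blocks rather than random samples is what prevents spatially autocorrelated neighbours from leaking between train and test, which would otherwise inflate the reported accuracies.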

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: A Seasonal Decomposition-Based Approach to Harmonize Copernicus Land Monitoring Service Vegetation Products for Climate Services

Authors: Fabrizio Cappucci, Ruben Urraca, Nadine Gobron
Affiliations: European Commission - Joint Research Centre
The Copernicus Land Monitoring Service (CLMS) provides valuable Earth Observation (EO) data for a wide range of applications, including land cover, land use, surface energy variables, and vegetation indicators, among others. Ensuring the long-term consistency and accuracy of these data is crucial to support terrestrial environmental applications and inform decision-making. Changes in satellites and calibration issues can compromise the stability of remote sensing data. This concern is particularly pertinent to CLMS vegetation products (FCOVER, FAPAR, LAI, NDVI, GPP, NPP), which are derived from a sequence of different satellites and sensors: SPOT-4 and SPOT-5 (Vegetation) from 1998 to 2014, PROBA-V (Vegetation) from 2014 to 2020, and Sentinel-3 (OLCI and SLSTR) from 2020 onwards. As each new sensor becomes operational, the data series often exhibits a discontinuity in values, which can be attributed to differences in sensor characteristics between the new and previous satellites [1]. These discontinuities pose a severe challenge to data stability and can make the data series unsuitable for climate studies, especially for trend and anomaly detection [2, 3]. To address this challenge, we have developed a seasonal-decomposition-based bias correction method to harmonize the CLMS vegetation products. Our approach applies corrections exclusively to the seasonal component of the dataset, preserving the physical characteristics of the observations, which are assumed to be captured in the trend and residual components. The bias-corrected series is then reconstructed by combining the corrected seasonal component with the trend and residual parts. By correcting the earlier two series (SPOT and PROBA-V) with respect to the latest one (derived from Sentinel-3), our method aims to rectify the discontinuities and provide a consistent and accurate data series.
The methodology was initially applied to the CLMS FAPAR dataset from 1998 to 2024, and its accuracy was confirmed by comparing it to the Ground-Based Observations for Validation (GBOV) data [4], demonstrating that the bias-corrected dataset meets the stability requirements set by the Global Climate Observing System (GCOS) [5]. The ultimate goal of this research is to provide a corrected and validated climatology series (relative to the period 2000-2020) that ensures the accuracy and integrity of climate-related studies. Our research underscores the importance of data harmonization in supporting the development of climate services. By enhancing the consistency and accuracy of CLMS vegetation products, we can strengthen the value chain of climate services and support informed decision-making. Our approach also has the potential to be applied to other EO datasets.
References
[1] B. Mota et al., “Cross-ECV consistency at global scale: LAI and FAPAR changes”. Remote Sensing of Environment, Volume 263, 2021, 112561, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2021.112561
[2] C. Cammalleri et al., “Harmonization of GEOV2 fAPAR time series through MODIS data for global drought monitoring”. International Journal of Applied Earth Observation and Geoinformation, Volume 80, 2019, Pages 1-12, ISSN 1569-8432, https://doi.org/10.1016/j.jag.2019.03.017
[3] M. Meroni et al., “Evaluating NDVI Data Continuity Between SPOT-VEGETATION and PROBA-V Missions for Operational Yield Forecasting in North African Countries”. IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 2, pp. 795-804, Feb. 2016, https://doi.org/10.1109/TGRS.2015.2466438
[4] GBOV (Ground-Based Observation for Validation): A Copernicus Service for Validation of Land Products. https://doi.org/10.1109/igarss46834.2022.9883162
[5] Zemp, Michael, et al. “GCOS 2022 implementation plan.” (2022): 85.
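The seasonal-only correction idea can be sketched in a few lines, assuming a simple additive decomposition (moving-average trend, per-month seasonal means); the synthetic "sensors", the decomposition details, and the correction by wholesale seasonal replacement are illustrative simplifications of the actual method:

```python
import numpy as np

def decompose(series, period=12):
    """Additive decomposition: trend (centred moving average), seasonal
    (mean of the detrended series per period index), residual."""
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    detrended = series - trend
    monthly = np.array([detrended[m::period].mean() for m in range(period)])
    seasonal = np.tile(monthly, len(series) // period + 1)[:len(series)]
    resid = detrended - seasonal
    return trend, seasonal, resid

def harmonize(old, ref, period=12):
    """Correct only the seasonal component of `old` to match the seasonal
    cycle of the reference sensor, then reconstruct the series."""
    t_old, _, r_old = decompose(old, period)
    _, s_ref, _ = decompose(ref, period)
    return t_old + s_ref + r_old

months = np.arange(120)
season = np.sin(2 * np.pi * months / 12)
ref = 0.5 + 0.2 * season      # e.g. a Sentinel-3-era FAPAR-like series
old = 0.5 + 0.1 * season      # earlier sensor with a damped seasonal cycle
corrected = harmonize(old, ref)
```

Because the trend and residual of the earlier series pass through untouched, any real interannual signal it carries is preserved; only its seasonal cycle is brought in line with the reference sensor.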

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Analysis of Climate Change Impacts on Jet Streams Using Atmospheric Motion Vector Climate Data Records

Authors: Alessio Lattanzio, Viju John, Marie Doutriaux Boucher, Jörg Schulz
Affiliations: EUMETSAT
The last IPCC report highlighted how global warming is affecting regional and global circulation patterns, causing changed weather patterns including extremes. A significant expression of the changed circulation is a change in Jet Stream strength and location. The scientific community is using climate models and reanalysis data (which provide outputs from models similar to climate models but constrained by past observations), finding that Jet Streams are getting stronger and are moving poleward in the northern and southern hemispheres. In the southern hemisphere the Jet Streams are closer to the pole, likely due to a slower recovery of the ozone hole. The objective of this presentation is to present the outcome of an ongoing observation-based analysis of Jet Streams employing Atmospheric Motion Vectors (AMVs) in addition to reanalysis data, to validate the findings in the literature, as there are differences between winds from reanalyses and those derived from satellite data. EUMETSAT holds an archive covering more than 40 years of geostationary satellite data, going back to the late 1970s. Data since the early 1980s have been recalibrated and reprocessed to derive Climate Data Records (CDRs) of geophysical parameters. One of these is the AMV data record derived from Meteosat imagery of the 0° longitude and Indian Ocean Data Coverage (IODC) missions. These have recently been exploited to produce a 38-year (1981-2019) and a 21-year (1998-2019) climate data record, respectively. The data records include winds retrieved from Meteosat First Generation and Meteosat Second Generation. In addition, measurements from the AVHRR instrument on board TIROS-N, NOAA-06 to 19, and Metop-A/B have also been exploited to generate an AMV data record covering the Arctic and Antarctica during 1979-2019. With respect to an analysis of Jet Streams, the wind vector estimates from instruments in geostationary and polar orbit are of comparable quality.
To facilitate analysis of the strength and location of Jet Streams, the AMVs, retrieved at irregular horizontal and vertical positions, have been mapped onto a regular latitude/longitude grid and onto regular vertical levels in the atmosphere. The mapped data are used together with reanalysis wind outputs from ERA5 to analyse the Jet Streams. Preliminary data analysis demonstrates that the strength and location of the retrieved jets are connected to the mode of the North Atlantic Oscillation (NAO): a negative NAO index indicates a weaker Jet Stream over the North Atlantic and northern Europe, with a smaller zonal wind than average, while a positive NAO indicates a stronger Jet Stream with a higher zonal wind. In addition, poleward shifts of the Jet Streams, together with an intensification in wind speed, have been observed.
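The mapping of irregularly located AMVs onto a regular latitude/longitude grid can be sketched as a binned average. The 2.5° cell size and the zonal-wind-only example are illustrative assumptions (the vertical binning onto pressure levels is analogous):

```python
import numpy as np

def grid_amv(lat, lon, u, dlat=2.5, dlon=2.5):
    """Bin irregular AMV zonal-wind samples onto a regular lat/lon grid
    by averaging all samples falling in each cell; empty cells are NaN."""
    lat_edges = np.arange(-90.0, 90.0 + dlat, dlat)
    lon_edges = np.arange(-180.0, 180.0 + dlon, dlon)
    count, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges])
    total, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges], weights=u)
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Three AMVs, two of them falling in the same 2.5-degree cell:
lat = np.array([45.1, 45.9, -30.0])
lon = np.array([10.2, 10.4, 100.0])
u = np.array([30.0, 34.0, 20.0])
grid = grid_amv(lat, lon, u)
```

The two mid-latitude samples are averaged into a single cell value, while cells with no AMVs stay NaN, so jet-strength statistics can then be taken along latitude bands of the regular grid.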

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Comparative Evaluation of Runoff Estimation Models Using Detailed DEMs: A Case Study in Polish Mining Areas

Authors: Yessica Natalia Ramirez-Yara, Agnieszka A Malinowska, Ryszard Hejmanowski
Affiliations: AGH University Of Kraków
Surface runoff is shaped by ecosystem dynamics and land use changes driven by human activities such as mining and agriculture. It occurs within a specific spatial and temporal context and plays a significant role in hydrological processes, arising from soil saturation during rainfall events, flooding of water bodies, and snowmelt. Mining operations, particularly excavation, stripping, and waste disposal, significantly alter terrain topography, soil structure, and vegetation cover, impacting infiltration capacity and surface runoff. Consequently, mining areas have become focal points for research due to their profound ecological impacts, including land subsidence, erosion, and altered hydrological patterns. The variability of land surface features, which change over time and across regions, must be considered as a critical factor for accurately assessing flooding risk, climate change impacts, ecological integrity, and land surface movements worldwide. This study aims to evaluate the incorporation of detailed terrain shapes, derived from digital elevation models (DEMs) representing a mining area in Poland, into runoff estimation. A comparative analysis is conducted on various runoff estimation approaches, including conceptual models (e.g., Rational Method), empirical methods (e.g., Curve Number, TANK), physically-based models (e.g., SWAT, GJ4R), and advanced machine learning and deep learning techniques (e.g., CNN, LSTM). The analysis assesses the efficiency of these models in representing runoff processes while preserving the multidimensional characteristics of the input data layers. Statistical techniques, such as goodness-of-fit tests and cross-validation, are applied to enhance model performance and to evaluate the impact of incorporating DEMs on accuracy and computation.
The comparative analysis highlights that models incorporating detailed terrain-shape data yield more accurate runoff estimations than those that do not, as they spatially allocate runoff variables within a framework that reflects real topographical structures. These insights emphasize the importance of terrain-shape data in improving runoff predictions across simple and complex modeling frameworks, particularly in environmentally sensitive mining landscapes. Understanding runoff dynamics in mining areas is crucial for effective water management and environmental restoration, as altered runoff behaviors can result in flooding, soil erosion, and water contamination, posing significant challenges for sustainable land management and ecosystem recovery.
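Of the empirical approaches compared in this abstract, the Curve Number method is compact enough to sketch. The following is a minimal illustration of the standard SCS-CN event-runoff equation, not the study's actual implementation; any CN value used with it would be illustrative.

```python
def scs_runoff_mm(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth Q (mm) from event rainfall P via the SCS Curve Number method.

    S is the potential maximum retention; initial abstraction Ia = ia_ratio * S.
    Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0.
    """
    if not 0 < cn <= 100:
        raise ValueError("CN must be in (0, 100]")
    s = 25400.0 / cn - 254.0          # retention (mm) on the SI scale
    ia = ia_ratio * s
    if p_mm <= ia:                    # all rainfall abstracted: no runoff
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

Terrain detail enters such models indirectly, e.g. through DEM-derived flow paths and the spatial distribution of CN values, which is where the study's comparison focuses.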

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: The LOng-LIved greenhouse gas PrOducts Performances (LOLIPOP) CCI+ project: satellite dataset and climate user case studies

Authors: Elisa Castelli, Dr. Bianca Maria Dinelli, Antonio Giovanni Bruno, Gabriele Brizzi, Massimo Cardaci, Martin Chipperfield, Lieven Clarisse, Cathy Clerbaux, Pierre Coheur, Martine De Maziere, Bart Dils, Sandip Dohmse, PhD Federico Fabiano, Marco Gai, Maya George, Jeremy Harrison, Michaela Hegglin, Alessandro Montanarini, Margherita Premuda, Piera Raspollini, Laura Saunders, Reinhold Spang, Gabriele Stiller, Massimo Valeri, Corinne Vigouroux, Kaley Walker, Simon Withburn
Affiliations: CNR-ISAC, Serco Italia s.p.a., CNR-IFAC, University of Toronto, KIT-IMK, NCEO, SQUARES-ULB, BIRA-IASB, LATMOS-SU, FZJ
For a complete understanding of the Earth’s climate, it is essential to understand the full budget of atmospheric gases that exhibit a large global warming potential or a strong impact on the ozone layer. ESA's Climate Change Initiative is already generating robust satellite-based time series for the greenhouse gases water vapour, carbon dioxide (CO2) and methane (CH4). However, several other long-lived greenhouse gases (OLLGHGs) require systematic observation, in particular nitrous oxide (N2O) and halogenated carbon compounds (CFCs, HFCs, HCFCs, PFCs). These gases have long atmospheric lifetimes, exhibit significant global warming potentials and contribute substantially to the uncertainty in radiative forcing estimates. Nitrous oxide and chlorine-containing OLLGHGs are also the main sources of anthropogenic ozone depletion and are regulated internationally under the 1987 UN Montreal Protocol. However, although these species are included by GCOS in the Essential Climate Variables (ECVs) list, user requirements are only given for N2O. Several satellite instruments provide information on the atmospheric abundance and distribution of the OLLGHGs. They can provide a valuable multi-mission resource for monitoring and understanding the role of OLLGHGs in the atmosphere. For the exploitation of such satellite data, in November 2023 ESA started the LOng-LIved greenhouse gas PrOducts Performances (LOLIPOP) CCI+ project. The final goal of the project is to determine whether the quality of the current set of satellite measurements is good enough for use in climate science and services. Here, we will show the results obtained so far in the project. Based on the user requirements, assessed at the beginning of the project through a literature review and a user survey, and on the inventory of available satellite limb and nadir OLLGHG datasets, we selected four molecular species (N2O, SF6, CFC-11, CFC-12) for the development of a harmonized satellite dataset.
Results on the quality of this dataset, assessed with the help of reference datasets, will be shown. In addition, to investigate the benefit of using satellite OLLGHG observations in end-user applications, three case studies have been devised: one dealing with the sensitivity of historical climate model simulations to the OLLGHG climatology, one studying the radiative forcing of OLLGHGs, and a last one on monitoring stratospheric chlorine levels and their impact on ozone recovery. Outcomes from these three studies will be presented.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: High-Resolution Land-Cover Maps for Climate Modeling

Authors: Lorenzo Bruzzone, Francesca Bovolo, Maria Antonia Brovelli, Marco Corsi, Pierre Defourny, Cristina Domingo, Paolo Gamba, Céline Lamarche, Maurizio Laterza, Khatereh Meshkini, Gabriele Moser, Catherine Ottlé, Gianmarco Perantoni, Lluís Pesquer, Philippe Peylin, Stephen Sitch
Affiliations: University of Trento, Fondazione Bruno Kessler, Politecnico di Milano, e-Geos, Université Catholique de Louvain, Centre de Recerca Ecologica i d'Aplicacions Forestals, University of Pavia, Planetek, University of Genoa, LSCE-IPSL, CEA-CNRS-Université Paris-Saclay, University of Exeter
1. Introduction and Motivation

The European Space Agency (ESA) Climate Change Initiative (CCI) [1] High Resolution Land Cover (HRLC) project addresses the critical need for spatially detailed time series of land-cover maps to support climate research. Land cover, a fundamental Essential Climate Variable (ECV), is integral to understanding processes such as deforestation, land degradation, and climate feedback mechanisms. The HRLC project [2] has developed the processing chain for delivering three key product lines: i) static HRLC10 maps at 10 m resolution for 2019, ii) historical HRLC30 maps at 30 m resolution (every 5 years since 1990), and iii) annual change detection maps HRLCC30 at 30 m resolution. These products have been generated for three climatically significant regions: a portion of the Amazon basin, part of the Sahel band in Africa and the northern high latitudes of Siberia. The products include extensive uncertainty metrics, such as class probabilities, second-best class likelihoods, and change uncertainties, which enhance their applicability in climate modeling by providing ways to appropriately weight the information coming from the HRLC products. This presentation summarizes the achievements of Phase 1 of the project, highlighting product strengths and their validation outcomes, and outlines ongoing Phase 2 activities, including the optimization of the processing chain based on deep learning architectures and the generation of new maps with coverage expanded to the full Amazon basin to meet the specific needs of the climate science community.

2. Land-Cover Maps Generated in Phase 1 and Related Impact

During Phase 1, the HRLC project developed two multisensor processing chains to generate land-cover maps: one for 10 m resolution maps using Sentinel-1 and Sentinel-2 data (for 2019) and another for historical 30 m maps using Landsat and ERS/ASAR SAR data (1990-2019).
These chains handle data from optical and SAR sources, each with specific pre-processing techniques tailored to the differences in data quality and availability between Sentinel and earlier missions. The 2019 maps use independent classifications of the optical and SAR data, based on Support Vector Machine and Random Forest methodologies respectively, which are then fused using consensus theory and Markov Random Field approaches. The historical maps rely on a cascade classification approach that ensures temporal consistency and mitigates error propagation across multiple years. To generate annual land-cover change maps, a third architecture has been used, focused on identifying and localizing land-cover changes over time using Landsat data. This approach addressed challenges posed by uneven data distribution by implementing a feature extraction module, time-series regularization, and an abrupt-change detection module. The entire processing pipeline has been designed to handle large volumes of optical and SAR data and is implemented using Python-based workflows and Docker containers, running on AWS cloud infrastructure, though it is flexible enough to be deployed on other cloud systems. Phase 1 successfully demonstrated the feasibility of high-resolution land-cover mapping at sub-continental scales. The HRLC products are supported by a comprehensive suite of uncertainty metrics. HRLC10 and HRLC30 products provide pixel-level probabilities for the first- and second-best classes, facilitating better understanding of classification confidence. Additionally, the HRLCC30 change detection maps include uncertainty metrics that highlight the reliability of detected changes on an annual basis. Validation efforts focused on both inter-comparison with external datasets and independent photointerpretation, underscoring the robustness of the products.
Moreover, the probabilistic framework used to generate uncertainty products enhances the capacity of climate researchers to integrate land cover data into their models with an understanding of the underlying uncertainties.

3. Advancements and Objectives for Phase 2

Building on the successes of Phase 1, Phase 2 (currently in progress) focuses on refining and expanding the HRLC product suite. The first step is the reprocessing of historical maps to enhance the temporal consistency and spatial harmonization of the existing HRLC10, HRLC30 and HRLCC30 products. This includes refining the decision fusion algorithms to minimize discrepancies across product timelines. Additionally, Phase 2 introduces methodological advancements aimed at improving classification and change detection reliability. Enhanced algorithms will leverage the increased temporal and spatial resolution of the Sentinel missions to deliver more detailed and consistent products. Uncertainty quantification remains a central focus, with improved probabilistic models providing users with more granular insights into classification and change detection reliability. Consequently, the whole processing chain is being reviewed and upgraded, aligning with current cutting-edge technologies such as advanced deep learning strategies. This upgrade will support the generation of new HRLC30 products for 2024 for all the historical regions considered in Phase 1. Notably, a large extension of the Amazonia area to the full Amazon basin is scheduled for Phase 2, aligning with the evolving requirements of climate science. Phase 2 therefore aims to provide a consistent time series of HRLC maps derived from multitemporal, multisensor, multisatellite and multiresolution data acquired by a heterogeneous set of Earth observation missions. The project also prioritizes user engagement, with workshops and webinars designed to gather feedback from the climate research community.
This ensures that the products continue to meet user needs and remain adaptable to emerging climate challenges. The ultimate aim is to deliver a comprehensive and reliable land-cover dataset that supports advanced climate modeling and decision-making. This presentation will highlight the achievements of Phase 1 and the ambitious objectives of Phase 2, underscoring the project’s role in addressing global climate challenges and showing how the ESA CCI HRLC project continues to set a high standard for land-cover mapping, providing indispensable tools for climate monitoring.

References

[1] ESA – European Space Agency: ESA Climate Change Initiative description, EOP-SEP/TN/0030-09/SP, Technical Note – 30 September 2009, 15 pp., 2009.
[2] L. Bruzzone et al., “ESA CCI High Resolution Land Cover: Methodology and EO Data Processing Chain,” ESA Living Planet Symposium, Bonn, Germany, 2022.

List of the CCI HRLC team members: A. Sorriso (UniPV), P.I. Osa (UniGE), M. Shovkati (UniGE), M. Zanetti (FBK), Q. Xu (PoliMi), V. Yordanov (PoliMi), C. Pratola (e-Geos), S. Tilla (e-Geos), D. Drimaco (Planetek), M. Carbone (Planetek), B. R. De Santis (Planetek), L. Olivera (LSCE).
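As a toy illustration of the decision-fusion step described in this abstract (consensus combination of the optical and SAR classifier outputs), the sketch below applies a logarithmic opinion pool to per-pixel class probabilities. The equal weighting and the function name are assumptions for illustration only; the project's actual fusion also involves Markov Random Field regularization, which is omitted here.

```python
import numpy as np

def consensus_fuse(p_opt, p_sar, w_opt=0.5):
    """Logarithmic opinion pool of two per-pixel class-probability maps.

    p_opt, p_sar: arrays of shape (n_pixels, n_classes), rows summing to 1
    (e.g. SVM posteriors from optical data, RF vote fractions from SAR).
    Returns fused probabilities (same shape) and argmax class labels.
    """
    p_opt = np.clip(np.asarray(p_opt, float), 1e-12, 1.0)
    p_sar = np.clip(np.asarray(p_sar, float), 1e-12, 1.0)
    # weighted geometric mean of the two opinions, renormalized per pixel
    fused = p_opt ** w_opt * p_sar ** (1.0 - w_opt)
    fused /= fused.sum(axis=1, keepdims=True)
    return fused, fused.argmax(axis=1)
```

The fused row maxima also give a natural per-pixel confidence, analogous to the first/second-best class probabilities shipped with the HRLC products.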

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Producing a Station-Satellite Blended Sunshine Duration Dataset for the UK

Authors: Josh Blannin, Dr Elizabeth Good
Affiliations: Met Office
Sunshine Duration (SD) is a key climate variable that has been measured at weather stations since the 1800s. This work aims to create a new blended UK sunshine duration dataset through the combination of station and satellite data. Utilising satellite data increases the measurement coverage, reduces uncertainties, and will enable the provision of daily gridded UK SD data for the first time. This study uses station SD data from the Met Office Integrated Data Archive System (MIDAS) Open and satellite-based SD from the Satellite Application Facility on Climate Monitoring (CM SAF) SARAH-3 dataset. Until 2003, UK sunshine duration was measured solely using Campbell-Stokes (CS) recorders, which focus solar radiation onto a strip of card and estimate the duration of sunshine from the length of the resulting burn. Since 2003, electronic Kipp & Zonen (KZ) sensors have been increasingly installed at weather stations across the UK. These record the duration for which direct solar radiation exceeds 120 W/m², defined as bright sunshine by the World Meteorological Organization. The SARAH-3 dataset contains sunshine duration from 1983 to present and is similarly derived from periods when Direct Normalised Irradiance ≥ 120 W/m². To allow for intercomparison, the station and satellite SD are spatially and temporally co-located. To remove any instability in the stations’ timeseries introduced by switching from Campbell-Stokes to Kipp & Zonen devices, a conversion is required. It is found that satellite SD more closely aligns with CS, and therefore a converted timeseries is produced for each station whereby KZ measurements are converted to CS equivalents using a linear model with coefficients fitted for each month. A comparison of the converted station and satellite data reveals multiple days with SD differences of > 5 hours. In winter months this is largely due to the satellite misidentifying snow as cloud, and therefore significantly underestimating the sunshine duration.
This is particularly noticeable during extreme cold events in the UK, such as the Big Freeze in 2010 and the Beast from the East in 2018. The opposite situation occurs in April 2003, when a Saharan dust event likely reduced the station-measured SD considerably and may also have affected the satellite SD retrievals. In addition, through a correlation analysis, six stations are found to have very poor correlation with the satellite data. Further investigation revealed that these stations have contained a one-day-out error in their records for decades, which has subsequently been corrected. These examples highlight the benefits of utilising station and satellite measurements to identify errors in both datasets. The blending of satellite and station data is a two-step process. Firstly, a linear regression is used to estimate station SD from satellite SD. A linear model is fitted for each calendar day (i.e., 365 models), producing complete fields of station SD estimates. Three different linear models are fitted to trial combinations of explanatory variables: satellite SD only, satellite SD & latitude, and satellite SD & latitude & satellite SD². The second step is to use the regression residuals from each day in a Gaussian Process (GP), so that the number of fitted GP models equals the number of days of data. The GP uses a kernel to define the covariance matrix of a multivariate Gaussian distribution over functions, and the residuals are fitted against the station latitudes and longitudes. The GP produces a complete field of residual estimates, which are added to the original linear model estimates to produce a final spatially complete field of station SD estimates for the UK. A 15-fold cross validation using only the linear models finds that all variable combinations have a similar root mean square error (RMSE: 1.4 hours), residual mean (µ: 0.2 hours), and residual standard deviation (σ: 1.4 hours).
A GP is trained on the simplest linear model (using satellite SD only) residuals, and the revised SD estimates show an overall improvement (RMSE: 1.2 hours, µ: 0.0 hours, σ: 1.2 hours). Local improvements can be much larger, e.g. several hours, particularly during snow events when SD from satellite measurements is underestimated.
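The two-step blending described above (a daily linear regression on satellite SD, then a Gaussian Process over the residuals in latitude/longitude) can be sketched for a single day as follows. This is a minimal plain-NumPy illustration with an assumed RBF kernel and illustrative hyperparameters, not the operational implementation described in the abstract.

```python
import numpy as np

def fit_linear(sat_sd, stn_sd):
    """Step 1: OLS fit stn_sd ~ a + b * sat_sd; returns (a, b)."""
    X = np.column_stack([np.ones_like(sat_sd), sat_sd])
    coef, *_ = np.linalg.lstsq(X, stn_sd, rcond=None)
    return coef

def gp_predict(train_xy, train_r, query_xy, length=1.0, noise=1e-6):
    """Step 2: zero-mean GP regression (RBF kernel) interpolating
    residuals r over station (lat, lon) positions."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(train_xy, train_xy) + noise * np.eye(len(train_xy))
    return k(query_xy, train_xy) @ np.linalg.solve(K, train_r)

def blend(sat_sd, stn_sd, stn_xy, grid_sat_sd, grid_xy):
    """Linear estimate from satellite SD plus GP-interpolated residuals."""
    a, b = fit_linear(sat_sd, stn_sd)
    resid = stn_sd - (a + b * sat_sd)
    return a + b * grid_sat_sd + gp_predict(stn_xy, resid, grid_xy)
```

In the full method this pair of fits is repeated per calendar day (365 linear models) and per day of data (one GP each), as described above.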

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Validation of Copernicus High Resolution Snow Products

Authors: Dr.(Tech.) Sari Metsämäki, Dr Eirik Malnes, MSc Sakari Väkevä, Dr. Hannah Vickers, Kirsikka Heinilä, M. Sc. Jesse Anttila
Affiliations: Finnish Environment Institute, NORCE Norwegian Research Centre
The Pan-European high-resolution Water, Snow, and Ice products, collectively known as HR-WSI, are provided under the Copernicus Land Monitoring Service (CLMS). The service provides various Sentinel-1 (S1) and Sentinel-2 (S2) derived products at high spatial resolutions, down to 10–20 m. The European Environment Agency (EEA) contracted the production of these datasets for 2024–2025. Because of planned methodological changes, the call also included external review and validation tasks as a separate part of the contract. A consortium formed by the Finnish Environment Institute (Syke) and NORCE Norwegian Research Centre was chosen to conduct this work. Here we present the independent validation results for the HR-WSI snow products, focusing particularly on Fractional Snow Cover, which underwent changes during the validation process and will therefore be revalidated in spring 2025. The validation is of fundamental importance for the Copernicus user community, given that these products may be exploited in several applications such as hydrological modeling, avalanche and flood warnings, the hydropower industry, and tourism. The Fractional Snow Cover product portfolio consists of three products: daily Fractional Snow Cover On Ground and on Top of Canopy (FSCOG and FSCTOC) and 7-day aggregated gap-filled FSC (GFSC). FSCTOC and FSCOG are based on optical S-2 observations, while GFSC also utilizes S-1 based snow information. The Snow Phenology (SP) product consists of Snow Cover Onset (SCO), Snow Cover Melt (SCM) and Snow Cover Duration (SCD), and is based on both S-1 and S-2 data. To account for regional differences and the diversity of European landscapes, topography, climate and snow regimes, 16 Sentinel-2 tiles were chosen in different parts of Europe. Validation is conducted for the hydrological year October 2020-September 2021. The validation benefits from Syke's and NORCE's extensive experience with remote sensing, in situ observations and models.
In the overall validation of FSCOG, synoptic weather station network data from the World Meteorological Organization (WMO) are exploited, including air temperature, ground status, and snow depth. As an important addition, in situ observations from Finland's snow course network and the Norwegian snow depth and profiling activities are used. For some variables, such as the wet/dry snow classification of the snowpack and snow phenology, comprehensive in situ validation datasets are not available for all regions. Therefore, a Europe-wide snow model for variables like liquid water content and snow cover was developed independently. These data were subsequently verified by comparing them with operational hydrological models such as the Norwegian snow model (seNorge) and Syke's operational hydrological forecasting system (WSFS). For FSCTOC, independent reference snow maps, also based on S-2 observations, are utilized. At times, validation of FSCOG and FSCTOC is performed using visual assessment of the EO products with expert judgment. Validation results are presented in several ways. The overall validation of FSCOG relies on snow depth observations made at weather stations: both FSCOG and snow depth are first classified into snow/non-snow information, and several statistical binary metrics such as Recall, F-score and False Alarm Rate are then calculated. In addition to this point-wise method, a pixel-to-pixel comparison is made for some of the tiles, using independent reference FSCTOC maps provided courtesy of ENVEO IT GmbH; this yields RMSE and bias. The validation of Snow Phenology and GFSC is based on comparison with the developed snow model: time series from the model and from the snow product are compared. Conclusions from the visual assessment are also presented, accompanied by explanatory figures.
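The point-wise binary validation described above can be illustrated with a short sketch computing the named metrics from snow/non-snow pairs. Note that "False Alarm Rate" is taken here as FP/(FP+TN) (probability of false detection); some communities instead use the false alarm ratio FP/(TP+FP), so this choice of definition is an assumption.

```python
def binary_metrics(pred_snow, obs_snow):
    """Recall, F-score and False Alarm Rate for snow/non-snow comparisons.

    pred_snow, obs_snow: equal-length sequences of booleans (True = snow),
    e.g. thresholded FSCOG versus thresholded station snow depth.
    """
    tp = sum(p and o for p, o in zip(pred_snow, obs_snow))
    fp = sum(p and not o for p, o in zip(pred_snow, obs_snow))
    fn = sum(o and not p for p, o in zip(pred_snow, obs_snow))
    tn = sum(not p and not o for p, o in zip(pred_snow, obs_snow))
    recall = tp / (tp + fn) if tp + fn else float("nan")
    precision = tp / (tp + fp) if tp + fp else float("nan")
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else float("nan"))
    far = fp / (fp + tn) if fp + tn else float("nan")
    return recall, f_score, far
```

The pixel-to-pixel FSC comparison would instead operate on fractional values directly, yielding RMSE and bias rather than binary scores.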

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Exploring the Interplay Between Marine Heatwaves and Atmospheric Circulation in the North Atlantic Using Observation Data and Climate Indicators.

Authors: Dr. Ana Oliveira, Beatriz Lopes, João Paixão, Fabíola Silva, Caio Fonteles, Manvel Khudinyan, Inês Girão, Rita Cunha, Paula Salge, Luísa Barros, Ana Oliveira, Célia
Affiliations: CoLAB +Atlantic, IPMA
Marine heatwaves (MHWs) are prolonged periods of anomalously warm ocean temperatures that can deeply impact marine ecosystems. Understanding the drivers and precursors of MHWs is critical for improving predictive capabilities and mitigating their impacts. This study investigates the role of key oceanic and atmospheric indicators in detecting and predicting MHW events in the North Atlantic, utilizing existing and newly developed Essential Ocean Variables (EOVs) and Essential Climate Variables (ECVs) within the framework of the Horizon Europe ObsSea4Clim project, which aims to support the Global Climate Observing System (GCOS) with novel indicators. The relationship between MHW occurrences and the large-scale climate mode North Atlantic Oscillation (NAO) has been investigated, as well as synoptic weather types and atmospheric circulation patterns. Oceanic variables, including sea surface temperature (SST), mixed-layer depth, and ocean currents, are combined with atmospheric drivers such as air-sea heat fluxes, wind speed, and atmospheric blocking events. Using a robust dataset derived from satellite observations, reanalysis products, and in situ measurements, we aim to identify precursors and thresholds that signal the occurrence of MHWs. Preliminary results indicate the importance of air-sea heat flux anomalies, particularly net, latent and sensible heat exchanges, in modulating the intensity and persistence of MHWs over the mid-latitudes. Additionally, specific synoptic weather types and phases of the NAO emerge as significant precursors, offering a basis for developing early-warning indicators of MHW probability. Future work will build upon these findings and propose an integrated framework that combines oceanic and atmospheric metrics to enhance real-time monitoring and the capability of predicting MHWs in the North Atlantic, with a key focus on the (probabilistic) seasonal time scale.
In addition, this work underscores the value of EOVs and ECVs in advancing the understanding of MHW dynamics and offers actionable insights for climate adaptation and marine resource management.
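As a minimal illustration of the event definition underlying such analyses, the sketch below flags runs of days on which SST exceeds a threshold for a minimum duration. A percentile-based threshold and a five-day minimum follow the widely used Hobday et al. convention; these are assumptions for illustration, not necessarily the definition adopted by the project.

```python
def detect_mhw(sst, threshold, min_days=5):
    """Return (start, end) index pairs (end exclusive) of runs where
    daily sst exceeds the daily threshold for at least min_days days.

    sst, threshold: equal-length sequences, e.g. SST and a seasonally
    varying 90th-percentile climatology.
    """
    events, start = [], None
    for t, (x, th) in enumerate(zip(sst, threshold)):
        if x > th and start is None:
            start = t                      # run begins
        elif x <= th and start is not None:
            if t - start >= min_days:      # run long enough to count
                events.append((start, t))
            start = None
    if start is not None and len(sst) - start >= min_days:
        events.append((start, len(sst)))   # run continues to the end
    return events
```

Candidate precursors (heat flux anomalies, NAO phase, weather types) can then be composited over the days preceding each detected event.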

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: From retracking to Mean Sea Level: impact of parameters estimation and correlations

Authors: Anna Mangilli, Thomas Moreau, Claire Maraldi
Affiliations: CLS, CNES
Satellite radar altimetry is one of the most powerful techniques for measuring Sea Surface Height (SSH) variations and plays a crucial role in building robust records of Essential Climate Variables, ultimately enabling accurate measurement of the Mean Sea Level (MSL) at global and regional scales and addressing important scientific questions for a rigorous assessment of climate change. The key step towards this goal resides in the accurate estimation of the geophysical parameters from the statistical analysis of the radar waveforms, that is, the retracking analysis. Major efforts have been made in the last decade to continuously improve the retracking solutions towards this goal. These efforts are challenging and increasingly critical because of the demanding precision and resolution requirements of the current reference mission, Sentinel-6 MF, and of upcoming missions such as S6NG and S3NG. To ensure the continuity, robustness and optimality of the parameter estimation, a retracking algorithm should account for a realistic waveform noise characterisation and for the real time evolution of instrumental properties, namely the Point Target Response (PTR); otherwise the parameter estimation can be affected, leading to sub-optimality and to biases that are difficult to correct. Correctly characterizing and quantifying the parameter uncertainties and correlations is important to robustly propagate the uncertainties in the estimation of the geophysical parameters and of the instrumental and geophysical corrections, as for instance the Sea State Bias (SSB), to the SSH, and ultimately the MSL. In this talk we will give a quick overview of the different ocean retracking solutions for the analysis of Low Resolution Mode (LRM) waveform data, currently implemented and in development, focusing on Sentinel-6 MF as it is the current reference mission.
We will detail why an accurate noise characterisation is important, particularly for S6: the pulse-to-pulse correlations linked to the S6 PRF configuration lead to a variation of the Effective Number of Looks (ENL), which, if not correctly accounted for, has a significant impact on the parameter estimation, on both the parameter values and their uncertainties. We will present novel retracking solutions that are optimal, correctly accounting for the noise and thus providing parameter uncertainties compatible with the Cramér-Rao theoretical bounds of minimum variance. We will present a Bayesian approach for the analysis of S6 data, showing how it can offer interesting insights to complement and improve current solutions, providing a robust and consistent estimation of the geophysical parameter uncertainties, characterizing the parameter correlations, and enabling the comparison of different waveform models. We will highlight the importance of optimality for correctly quantifying the parameter correlations, in particular the correlation between the epoch and the Significant Wave Height (SWH), and discuss the impact on the estimation of derived corrections such as the SSB. Finally, we will discuss the perspectives, namely possible improvements in the waveform modelling and the development of optimal statistical analysis of high-resolution SAR data from current and future radar altimetry missions.
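The ENL reduction caused by pulse-to-pulse correlation can be illustrated with the standard variance formula for the mean of correlated samples. The AR(1)-type correlation model below is an assumption for illustration; the actual S6 correlation structure depends on the PRF configuration and on-board processing.

```python
def effective_looks(n, rho):
    """Effective number of looks when averaging n pulses whose lag-k
    correlation is rho**k (an assumed AR(1)-type model).

    Var(mean) = (sigma^2 / n) * (1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho^k),
    so ENL = n / (1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho^k).
    With rho = 0 the pulses are independent and ENL == n.
    """
    corr_sum = sum((1.0 - k / n) * rho ** k for k in range(1, n))
    return n / (1.0 + 2.0 * corr_sum)
```

A retracker that assumes n independent looks when the true ENL is smaller will underestimate the waveform noise, and hence the parameter uncertainties, which is the effect discussed above.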

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Quantifying Livestock Diet Composition Using Earth Observation (EO) Data for Improved Estimation of Enteric Methane Emissions in Kenya

Authors: John Mutua, Dr. Alan Duncan, Dr Gary Watmough
Affiliations: University of Edinburgh, International Livestock Research Institute
Livestock directly contribute to greenhouse gas emissions, mainly through enteric fermentation and, to a lesser extent, manure management. Livestock feed composition plays a crucial role in diet quality and the resulting emissions from livestock. However, spatio-temporal variations in diet composition, particularly in tropical environments, remain underexplored. Furthermore, comprehensive livestock diet composition datasets, essential for accurate enteric methane emission estimation, are often lacking in developing countries. Existing datasets from global studies are highly generalised and are based on expert knowledge (Herrero et al., 2013). Other studies on livestock diet composition have often been done on a local basis (Goopy et al., 2018; Ndung’u et al., 2018; Onyango et al., 2018; Wilkes et al., 2020), with less attention on country-wide studies that can provide detailed assessment and evidence of the impact of livestock diet on greenhouse gas emissions at national or regional level. Both the global and the local studies offer, at best, incomplete representations of livestock diet composition, especially in regions with diverse and poorly documented livestock feeding practices. Where data are available, they carry uncertainties resulting from data collection challenges and the assumption of a constant annual distribution of diet composition (MacLeod et al., 2018). This gap in accurate data is particularly problematic in developing countries, where feed composition changes between periods of feed scarcity and plenty (Mutua et al., 2023), and this is seldom captured in existing datasets. In this study, we use freely available Earth Observation (EO) data to generate spatially and temporally explicit livestock diet composition and quality data. Our approach involves three key stages.
First, we identify the length of the growing period (start and end of wet and dry seasons) at a pixel level using a water balance model that incorporates climate and soil property data to calculate the ratio of actual to potential evapotranspiration (Jones & Thornton, 2009). Second, we determine the feed items available to livestock during these seasons through an extensive literature review and estimate their quantities using EO data, including land cover/cover fraction (Buchhorn et al., 2020) and above-ground dry matter productivity from the Copernicus programme (Copernicus, 2024), crop harvest area from the FEAST global data repository (ILRI, 2022), the length of the growing period, and crop-specific indices and parameters. Third, we calculate the proportion of each feed item in livestock diets at a pixel level to derive diet composition and quality data. Finally, the results are aggregated by livestock production systems specific to Kenya (Robinson et al., 2018), providing a detailed understanding of livestock diet composition patterns in Kenya. Results indicate that livestock diet composition varied between seasons and livestock production systems. Natural grass was the dominant diet component across all seasons and livestock production systems, with the highest proportion in the diet during the long wet season (41.5%). Concentrates formed the lowest proportion of the diet, ranging from 0.9 to 4.7% across seasons and livestock production systems. Diet quality also varied between seasons and livestock production systems but fell within a narrow range (55.4–62.3% dry matter; DM), which was greater than the default digestibility value of 55.0% set by the Intergovernmental Panel on Climate Change (IPCC) for livestock production systems in the region. Notably, mixed rainfed arid systems exhibited lower diet quality across all seasons. Livestock diet quality estimation using EO data demonstrated moderate to high accuracy (R² = 0.50–0.89; RMSE = 1.14–38.44% DM).
These findings provide a robust basis for incorporating spatially and temporally explicit diet composition and quality data into life cycle assessment (LCA) models to improve GHG emission estimates from the livestock sector. Enhanced accuracy of these models can inform national-level strategies for mitigating emissions and promoting sustainable livestock production.
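The third stage (deriving per-pixel diet composition and quality from estimated feed-item quantities) reduces to a simple weighted calculation, which can be sketched as below. The feed items, quantities and digestibility values are illustrative placeholders, not the study's actual dataset:

```python
import numpy as np

# Hypothetical per-pixel feed-item dry-matter quantities and their
# digestibility values (% DM); names and numbers are illustrative only.
feed_quantity = {
    "natural_grass": np.array([2.1, 0.8]),   # two example pixels
    "crop_residue":  np.array([1.0, 1.5]),
    "concentrate":   np.array([0.1, 0.05]),
}
digestibility = {"natural_grass": 58.0, "crop_residue": 52.0, "concentrate": 80.0}

def diet_composition(quantities):
    """Proportion of each feed item in the diet, per pixel."""
    total = sum(quantities.values())
    return {item: q / total for item, q in quantities.items()}

def diet_quality(quantities, digest):
    """Quantity-weighted mean digestibility (% DM), per pixel."""
    total = sum(quantities.values())
    return sum(q * digest[item] for item, q in quantities.items()) / total

props = diet_composition(feed_quantity)
quality = diet_quality(feed_quantity, digestibility)
```

Per pixel, the proportions sum to one and the quality value lies between the lowest and highest feed-item digestibility, which is the property the seasonal aggregation by production system relies on.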
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Copernicus Municipal Network Office: Information and exchange to promote the use of Earth Observation in German cities and municipalities

Authors: Johannes Schmidt, Christian Steffens, Sebastian Teuwsen, Andreas Müterthies, Wolfgang Beckröge
Affiliations: EurA AG, EFTAS GmbH, DDGI
The development of the Copernicus satellite system will significantly improve the availability and applicability of satellite Earth observation data. The Copernicus projects of the European Commission, ESA and DLR to date, as well as exchange events, show that the national public sector also has a great need for geoinformation, and that satellite-based Earth observation can make an important contribution to meeting this need in a cost-efficient manner. However, public institutions are often not yet able to make optimum use of Earth-observation-based information applications in their area of responsibility. This applies in particular to local applications at the lowest administrative level: the cities and municipalities. There are examples of the systematic use of operational satellite Earth observation data, but only in a few large and well-equipped cities. Most municipalities have not yet benefited from Copernicus, and Earth observation methods and products have yet to be widely adopted for many routine tasks. Based on this situation, the Federal Ministry for Digital and Transport Affairs and the German Space Agency at DLR established the national Copernicus integration measure. This aims to intensify the national use of Copernicus, among other things by setting up Copernicus network offices that actively promote exchange and networking between the remote sensing community and the user community. One of them is the Copernicus Municipal Network Office. Its task is to build up a community at the Copernicus/municipal interface, consisting of public authorities and administration as well as academia and companies. It is also investigating how satellite Earth observation, and in particular the data and services of the Copernicus programme, can contribute to supporting the municipal level (cities, districts, municipalities). The aim is also to identify obstacles and derive measures for improvement. 
The Municipal Network Office focuses on these topics when approaching municipalities and identifying needs: climate protection and climate adaptation, urban green and environmental protection, water management and monitoring, energy management, mobility and infrastructure, and civil security and disaster control. The members of the Copernicus Municipal Network Office actively approach municipalities and focus on their needs and questions. We look at what the actors at the municipal level need and try to match this with the possibilities of Copernicus. This involves promoting the data and services as well as providing information on existing projects, but it is also a matter of expectation management. This approach also applies to climate protection and climate adaptation: we ask cities and municipalities what information can help them and suggest relevant data and services from Copernicus. There are now also a number of projects at national and international level that serve as positive examples and could motivate imitation. Another important aspect is laws and regulations at local, regional, national and EU level. When it comes to planning, implementing and monitoring actions, there is often no way around the comprehensive and coherent data of Copernicus. We explain this and point to laws where Copernicus is already mentioned (e.g. the EU Nature Restoration Law, the EU LULUCF Regulation, the EU Climate Law). On the other hand, it is also about informing and engaging regional and national authorities that are involved in shaping the political and legal framework. We therefore act as an intermediary at the science-policy-industry interface when it comes to municipal applications of Copernicus data and services for climate action. The network office functions as a knowledge and dissemination broker, promoting the uptake of Earth observation and its acceptance as an important tool for planning, implementing and monitoring local actions. 
Background: The Copernicus Municipal Network Office is financed and supported by the Federal Ministry for Digital and Transport Affairs (BMDV) and the German Space Agency at DLR. It is operated by EurA AG, EFTAS GmbH and the German Umbrella Association for Geoinformation (DDGI).
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Developments Towards ESA CCI Lakes Gap-Filled Temperature and Ice Cover Products: Lake Ice Cover

Authors: Sam Johnston, Claude Duguay, Justin Murfitt, Laura Carrea, Chris Merchant, Shaerdan Shataer, Niall McCarroll, Dr. Clément Albergel
Affiliations: H2O Geomatics, University of Waterloo, University of Reading, National Centre for Earth Observations, European Space Agency Climate Office, ECSAT
Lake ice cover (LIC) is a thematic product of the Lakes essential climate variable (ECV) identified by the Global Climate Observing System (GCOS) for climate monitoring. Robust estimates of LIC are also required for improving weather forecasts in lake-rich regions where ice cover on lakes prevails for several weeks to months of the year. As part of the European Space Agency’s (ESA) Climate Change Initiative (CCI) on Lakes, a daily LIC product is being generated, along with other lake-related variables, for multivariate analyses of the response of lakes to global climate change. However, cloud cover and polar darkness (observations too dark to classify during winter at high latitudes) impact the quality of the LIC product needed for climate studies and weather forecasting (e.g., forecasts of air temperature and precipitation, such as the thermal moderation effect and lake-induced snowfall, in the vicinity of lakes). This activity aims to explore and establish post-processing methodologies for producing gap-filled lake surface water temperature (LSWT) and LIC products. A separate companion presentation covers details of the gap-filling approaches considered for the LSWT product. Regarding LIC, the gap-filling task is split into two cases: first, handling cloud obstructions, and second, handling polar darkness. A 3D Convolutional Neural Network (CNN) approach is tested for filling the cloud obstructions, selected due to its success over other deep learning models during preliminary experiments. Empirical and threshold methods are being tested to fill gaps due to polar darkness. Work to date has focused on the filling of cloud obstructions using the 3D CNN approach. Fifty-two ice-forming lakes were selected to represent the whole distribution; the selection criteria were based on lake size, geographic location, and whether lakes form full, partial or intermittent ice cover. 
Training samples were generated by introducing artificial cloud obstructions into cloud-free observations from the study lakes. During training, inputs consist of an artificially cloudy observation paired with one week of raw (cloudy) LIC before and after the obstructed sample. The model leverages this spatiotemporal context to predict the ice cover on the central obstructed date. Predictions are made for one lake at a time, with the approach designed to accommodate all lake sizes. Validation involves assessing performance on lakes outside the training set and comparing the model's predictions to other gap-filled LIC products. The gap-filled dataset will be beneficial to the climate and weather forecasting communities and is expected to be available to all users starting in March 2026.
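The training-sample construction described above (hiding part of a cloud-free central frame and pairing it with a week of context either side) can be sketched roughly as follows. The array shapes, the random pixel mask, and the masking fraction are illustrative assumptions, not the authors' actual pipeline, which uses realistic cloud obstructions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_sample(lic_stack, mask_frac=0.3):
    """Build one (input, target) pair for gap-fill training.

    lic_stack : (T, H, W) array of cloud-free ice-cover fractions for
    one lake, with the central date as the frame to reconstruct. An
    artificial 'cloud' hides mask_frac of the central frame's pixels
    (set to NaN); the frames before and after provide the
    spatio-temporal context a 3D CNN would learn from.
    """
    t_mid = lic_stack.shape[0] // 2
    target = lic_stack[t_mid].copy()
    x = lic_stack.copy()
    cloud = rng.random(target.shape) < mask_frac   # random obstruction mask
    x[t_mid][cloud] = np.nan                       # hide the 'cloudy' pixels
    return x, target, cloud

# 15 daily frames (one week either side of the central date), 16x16 pixels
stack = rng.random((15, 16, 16))
x, y, cloud = make_training_sample(stack)
```

The model is then trained to predict `y` from `x`, with the loss evaluated on the artificially obscured pixels, so the ground truth is known exactly.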
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: From ERA5 to ERA6: the Status of the Operational Reanalysis and the Next Generation of Reanalysis at ECMWF

Authors: Raluca Radu, Bill Bell, Hans Hersbach, John Hodkinson, Alison Cobb, Mikael Kaandorp, András Horányi, Joaquin Munoz Sabater, Julien Nicolas, Paul Poli, Dinand Schepers, Adrian Simmons, Cornel Soci, Martin Suttie, Carlo Buontempo
Affiliations: ECMWF
The ERA5 reanalysis dataset is a state-of-the-art atmospheric reanalysis of the global climate covering more than 80 years (1940-present), produced at the European Centre for Medium-Range Weather Forecasts (ECMWF) in the framework of the EU-funded Copernicus Climate Change Service (C3S), and it provides invaluable information about our past and present climate. ERA5 is the first reanalysis produced operationally as a service rather than as a research project: a comprehensive historical and near-real-time dataset providing a large number of essential climate variables (ECVs), made available through the Copernicus Climate Data Store under free and open data policies. ERA5 will in future be replaced by the next-generation reanalysis, ERA6, which will provide users with a higher-spatial-resolution dataset taking advantage of cutting-edge advancements for better accuracy: more and enhanced Earth observations, including reprocessed and rescued observations and new satellites, better forcings, improved numerical models, and one-way coupling of the atmosphere with the ocean. Furthermore, ERA6, production of which has recently begun, has been developed based on end-user feedback and requirements, with improved uncertainty quantification and better post-processing to enhance its applicability. In this context, the production plans for the forthcoming global atmospheric reanalysis ERA6 will be introduced.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: The Centre for Environmental Data Analysis (CEDA) and JASMIN: EO and Atmospheric data next to a fast parallel processing cluster.

#stac

Authors: Steve Donegan, Ed Williamson, Alison Waterfall, Fede Moscato
Affiliations: STFC CEDA
The Centre for Environmental Data Analysis (CEDA) provides access to over 23 PB of EO and atmospheric data from UK-funded research. CEDA’s data holdings range from surface weather station and airborne survey flight data to daily retrievals of satellite data, as well as climate model data. CEDA has over 35,000 users, who benefit from the ability to process and analyse these data using JASMIN, a world-class fast parallel processing cluster hosted by the Science and Technology Facilities Council (STFC), with 55,000 cores and 24 PB of dedicated user storage in Group Workspaces (GWSs). There are over 400 GWSs, and they provide a useful facility for CEDA and JASMIN users to share and develop data further, contributing to many national and international projects and datasets that will be placed in the CEDA archive. CEDA and JASMIN are based at STFC's Rutherford Appleton Laboratory (RAL) in Oxfordshire, UK, within the RAL Space division. CEDA is part of the UK Natural Environment Research Council (NERC) Environmental Data Service (EDS), providing FAIR (Findable, Accessible, Interoperable & Reusable) access to data and services. CEDA also provides the data archive component for the UK National Centre for Earth Observation (NCEO) and the UK National Centre for Atmospheric Science (NCAS). CEDA’s EO archive component alone includes data from the Sentinel, Landsat, Terra/Aqua and ENVISAT missions, in addition to data from the NERC ARF and DEFRA Sentinel ARD, as well as many other missions and research data outputs. It also hosts datasets from, and works closely with, international projects such as the ESA Climate Change Initiative (ESA CCI) and the Coupled Model Intercomparison Project (CMIP), and is a designated data centre/primary archive for the IPCC Data Distribution Centre. 
CEDA also provides a Data Hub Relay (DHR) for ESA as part of the international Copernicus data dissemination effort, with almost 20 TB flowing to and through the CEDA DHR daily. CEDA is one of the leading partners on the UK Earth Observation Data Hub (UK EODH), a high-profile, world-leading, UK-specific software infrastructure tying together the UK academic and commercial EO communities and easing data access for both. CEDA maintains many data streams across the EO and atmospheric disciplines, with incoming flows of 7-8 TB per day being typical for the Sentinel mirror archive and MODIS data streams alone. CEDA works closely with NCEO, NCAS and UK stakeholders to identify data streams of use to the community and actively engages to provide timely and reliable access to the data. Data are not only actively sourced and retrieved from providers such as EUMETSAT, Copernicus (via the CEDA DHR) and NASA/USGS, but also automatically pushed to us from sources such as the UK Meteorological Office (UKMO) and ground station retrievals, via a data arrivals service. CEDA provides many methods for users to find and access EO data, not least fast access within the JASMIN environment, which lets users work on the data directly with the fast parallel processing cluster. The Satellite Data Finder is a web tool that allows users to quickly find most CEDA EO datasets, accessible via a conventional GUI or an OpenSearch interface. CEDA is continually engaged in efforts to improve data search and access, not least the current development of a STAC catalogue to support the UK EODH. With developments such as this and the new Big Data paradigm, CEDA is giving much thought to how best to structure and support data formats that ease this transition, as well as how to allow the data to be accessed and processed with these technologies. 
CEDA has recently celebrated 30 years of continuous data centre operations supporting vital access for the UK EO and atmospheric science communities. We maintain a keen eye on emerging technologies that will impact our operations and interaction with users, not least support for, and research into, Net Zero technologies for data centres. CEDA is working closely with its STFC parent organisation to ensure that CEDA remains fit for the future and will meet its community and societal obligations.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone C)

Poster: Monitoring long term lake water level evolutions

Authors: Dr Gabriel Calassou, Dr Beatriz Calmettes, Nicolas
Affiliations: CLS
Lakes play a crucial role in retaining and storing water. In the context of the significant global environmental changes driven by human activity, the need to monitor water levels has increased substantially in recent years, and lake water level is defined as an Essential Climate Variable by GCOS. The water stored in lakes responds (directly and indirectly) to changes in precipitation and air temperature, as well as directly to human use of water resources. Fresh water is an increasingly pressured resource for population needs, as well as a societal risk for local populations. It is also a fundamental element for industry and agriculture, therefore becoming an economic and political stake. It is therefore essential to monitor variations in lake water levels over multi-year time scales to observe the influence of external pressures on lakes in the context of climate change, and thus improve climate models. Satellite altimetry is widely used to measure water height changes of lakes and reservoirs worldwide, along with their associated uncertainties. However, strategies and algorithms to calculate this variable are not straightforward, and sophisticated approaches have been developed. Although originally conceived to study open-ocean processes, radar altimeter satellites have nevertheless been shown to be of great interest for acquiring numerous useful measurements of decimetric to centimetric precision over lakes. With this technique, the lake water level is defined as the height of the reflecting surface, in metres above the geoid. It is observed by space radar altimeters that measure the time it takes radar pulses to reach the ground targets directly below the spacecraft (nadir position) and return. Several projects, such as Lakes_cci (https://climate.esa.int/en/projects/lakes/) and FDR4ALT, use altimetry data to reconstruct climate time series for a large number of lakes. 
For each of these lakes, several missions, including historic ones such as ERS-1, ERS-2 and Envisat, are used to retrieve these time series. Algorithms developed and data generated in this project are also being used in the Copernicus Climate Change Service - C3S (https://climate.copernicus.eu/). In those datasets, only lakes located along the satellite's ground tracks, or within a few kilometres of them, can be monitored with nadir altimetry, with a quality of measurement that depends not only on the size of the lake but also on its surface roughness and possible signals from other reflecting targets in its surroundings. Depending on the size of the lake, the satellite data may be gathered and averaged over very long distances. It is thus necessary to correct for the slope of the geoid (or, equivalently, the mean lake level profile). The “repeat track technique” is used to solve this problem: the geoid slope is recalculated for each satellite track and then averaged over a significant number of cycles. The result of this calculation is a mean vertical profile along each pass per lake, which serves as the geoid correction. Additionally, many of these lakes are observed by multiple missions and tracks, requiring a bias correction process. Furthermore, in the framework of the C3S project, a dataset containing exclusively lakes observed by a single operational mission/track was generated. For those lakes, characterised by short transect lengths, the variation of the geoid has a negligible impact. A simpler approach, using the input L2 altimetry products, can therefore be implemented and will be presented: measurements are extracted within lake contours (from the HydroLAKES dataset) and averaged for each transect. Particular attention was nevertheless required to set up editing criteria to remove possible outliers within the 20 Hz input data. 
Indeed, given the smaller size of this lake sample, the altimeter footprint also covers lake shores and possibly surrounding echogenic targets. Using this approach, water level time series have been generated for over 8000 lakes worldwide. The validation of the water level time series will be detailed in the presentation, with emphasis on comparisons to external datasets, either in-situ data or other altimetry-based products.
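The per-transect averaging with outlier editing might look like the minimal sketch below. The median-based editing rule and the example heights are illustrative stand-ins; the operational editing criteria are more elaborate:

```python
import numpy as np

def transect_level(heights, n_sigma=3.0):
    """Median-based outlier editing, then mean of 20 Hz heights on one
    lake transect. Returns the mean level and the number of retained
    measurements. A simplified stand-in for operational editing."""
    h = np.asarray(heights, dtype=float)
    med = np.median(h)
    mad = 1.4826 * np.median(np.abs(h - med))   # robust sigma estimate
    keep = np.abs(h - med) <= n_sigma * max(mad, 1e-6)
    return h[keep].mean(), int(keep.sum())

# 20 Hz heights (m above geoid) along a short transect, with two
# shore-contaminated outliers (values are invented for illustration)
obs = [364.12, 364.15, 364.13, 364.14, 371.90, 364.16, 355.00, 364.13]
level, n_used = transect_level(obs)
```

A robust (median/MAD) statistic is the natural choice here because shore and echogenic-target returns can be many metres off, which would badly bias a plain mean and standard deviation.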
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: A.11.01 - POSTER - Earth Energy Imbalance and Radiative Forcing

Satellite EO is the only way to fulfill the need for direct measurements and monitoring of the Earth’s Radiation Budget (ERB) with the accuracy and coverage required for climate action on timescales commensurate with anthropogenic change. The ERB comprises the quantification of the incoming radiation from the Sun and the outgoing reflected shortwave and emitted longwave radiation. This Essential Climate Variable (ECV) is the primary forcing of the climate system and is therefore a fundamental quantity to be monitored to understand the Earth’s climate and its variability.

A number of current and future missions in ESA’s Earth Explorer (EE), Earth Watch (EW) and meteorological programmes, and from international partner agencies, have been designed to measure all or part of the Earth Energy Imbalance components, to study and bolster our ability to model the radiative forcing, notably the role played by clouds and aerosols. The promise of, e.g., EE6 EarthCARE (with JAXA), EE9 FORUM, MetOp-SG/IASI-NG, the prospect of EW TRUTHS, the EE12 candidate ECO mission, as well as international partners’ missions such as CERES, Libera, CLARREO Pathfinder and PREFIRE, shapes a comprehensive scene of critical ERB data, unprecedented in their spatio-temporal coverage, accuracy and complementarity.

This session invites presentations on:
- observations of components of the Earth Radiation Budget,
- observations advancing our understanding of radiative forcing processes,
- retrieval algorithms and methods for uncertainty quantification,
- their utilisation in climate modelling and as actionable information for climate decision-making.

The objective of the session is to bring the ERB observations from individual missions and climate communities together, to maximise the exchanges and synergetic benefits, reviewing the current limitations in Earth radiation system modelling and the opportunities with the current and future missions.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) – A ‘gold standard’ imaging spectrometer in space for radiation imbalance and in support of the climate emergency

Authors: Dr Thomas August, Pr Nigel Fox, Andrea Marini, Pr John Remedios
Affiliations: ESA, NPL, NCEO
TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio-Studies) is an operational climate mission aiming to enhance, by up to an order of magnitude, our ability to estimate the Earth radiation budget, spectrally resolved to support attribution. Through direct SI-traceable measurements of incoming total and spectrally resolved solar irradiances and spatially resolved Earth-reflected radiances, it establishes ‘benchmarks’ against which change/trends can be detected in as short a time as possible. These fiducial reference datasets can be combined with data from other sensors and also serve as ‘gold standard’ references to anchor and upgrade the performance of other space sensors through in-orbit calibration. TRUTHS will become a founding member of a new class of satellites called SITSats (SI-Traceable Satellites), with payloads explicitly designed to achieve and evidence an in-orbit uncertainty at a level commensurate with the exacting goals of long-time-base climate studies. SITSats also facilitate interoperability and enhanced trust in the data from the Earth observation system as a whole, helping to provide observational evidence-based confidence in actions addressing the climate emergency. The unprecedented uncertainty of TRUTHS’ globally sampled hyperspectral data underpins many additional applications:
- Establishing an interoperable, harmonised Earth observing system incorporating agency and commercial satellites, large and small.
- Top- and bottom-of-atmosphere reflectances impacting the carbon cycle (e.g. land cover, ocean colour, vegetation, methane, etc., together with similar applications of other hyper/multi-spectral missions). Low uncertainty also facilitates improvements in retrieval algorithms.
- Transferring radiometric reference values to existing Cal/Val infrastructure (e.g. RadCalNet, Pseudo-Invariant Calibration sites, in-situ ocean colour reference observations, selected surface reflectance test sites (fluxnet, …), both nadir and multi-angular) and Moon observations.
The mission comprises an “agile” satellite capable of pointing at and imaging the Earth, Moon and Sun from a 90° polar orbit with the Hyperspectral Imaging Spectrometer (HIS). The HIS provides spectrally continuous observations from 320 to 2400 nm, with a spectral sampling between 2 and 6 nm and a spatial sampling of 50 m. The payload utilises a novel SI-traceable on-board calibration system (OBCS), comprising the Cryogenic Solar Absolute Radiometer (CSAR), able to realise SI traceability in space and also measure incoming solar radiation. Together with other optical elements, the OBCS links the HIS observations to the CSAR with a target expanded uncertainty of 0.3% (k=2) and mimics processes and performance previously only achievable within National Metrology Institutes (NMIs). TRUTHS is an ESA Earth Watch mission developed with large scientific and industrial consortia led by NPL and Airbus UK, respectively, with contributions from Switzerland, the Czech Republic, Greece, Romania and Spain. The mission was conceived at NPL in the UK in response to challenges highlighted through bodies such as CEOS to address the observational needs of GCOS and satellite interoperability. The mission has a target launch date of 2030 and a minimum operational lifetime of 5 years, with a goal of 8 years. Together with FORUM (ESA) and IASI-NG (CNES/EUMETSAT), it will provide spectrally resolved Earth radiance information from the UV to the far-infrared in the coming decade, and, in partnership with CLARREO Pathfinder (NASA) and CSRB (CMA), inaugurate a future constellation of SITSats.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Estimating variations in ocean heat content using space geodetic data to assess the global and regional Earth energy budget

Authors: Robin Fraudeau, Sebastien Fourest, Benoit Meyssignac, Alejandro Blazquez, Michael Ablain, Gilles Larnicol, Ramiro Ferrari, William Llovel, Damien Desbruyeres, Marco Restano, Roberto Sabia, Gerald Dibarboure
Affiliations: Magellium, CNES, LOPS, SERCO/ESRIN, ESA-ESRIN
The ocean is the primary reservoir storing the energy accumulated by the Earth from greenhouse gas emissions since the start of the industrial era, in the form of heat (~91%). An accurate estimation of the change in ocean heat content (OHC) is therefore crucial for understanding the Earth's energy imbalance (EEI) and monitoring climate change (Meyssignac et al., 2019). While in-situ measurements, such as temperature and salinity profiles from the ARGO network, provide essential OHC estimates (von Schuckmann et al., 2021, 2023), they are constrained by spatial and temporal coverage limitations: most in-situ data are confined to above 2000 m depth, with quasi-global coverage only from 2005 onwards. This study presents an improved methodology for estimating OHC by combining space altimetry and gravimetry measurements with in-situ data. Our approach leverages the strengths of different observation systems: space geodetic data provide high-frequency, global coverage, while in-situ measurements offer detailed vertical profiles of ocean temperature and salinity. By combining these data sources using the Integrated Expansion Efficiency of Heat (IEEH) coefficient, we achieve a more accurate and comprehensive OHC mapping constrained by steric sea level observations. Marti et al. (2022) already showed that the space geodetic approach can accurately assess the temporal variations of EEI over decadal and longer timescales, for which uncertainties of less than ±0.1 W·m⁻² ([5-95]% confidence level) are required (Meyssignac et al., 2023). Over 1993-2022, geodetic global OHC (GOHC) showed a significant positive trend of 0.75 W·m⁻² [0.61, 1.04] ([5-95]% CL). Comparative analyses of GOHC trends over the period 2005-2019 indicate that estimates based on in-situ temperature and salinity profiles range from 0.55 to 0.7 W·m⁻², while the space geodetic estimate was 0.9 W·m⁻² over the same period (Marti et al., 2024). 
In terms of EEI, comparisons with the CERES experiment have shown similar trend estimates of 0.50 ± 0.47 W·m⁻² ([5-95]% confidence level) per decade during the period 2005-2019. In this presentation, we present the latest results obtained over the 1993-2023 period, incorporating the most recent altimetry data products (from C3S), refining the gravimetry data processing methodology based on the work of Blazquez et al. (2018), and using the most recent temperature and salinity gridded products. We have also improved the IEEH coefficient calculation, allowing us to obtain more accurate estimates at regional scales. With these improvements, the GOHC trend derived from the space geodetic approach is now 0.65 W·m⁻² [0.52, 1.00] ([5-95]% CL), closer to the in-situ estimates. Comparisons with the CERES dataset showed a correlation of 0.5 at timescales above 3 years. We also present an assessment of OHC estimates at regional scales, with a focus on the North Atlantic Ocean. We compare the space geodetic estimates with in-situ data along the OVIDE section, highlighting a correlation close to 0.81. Furthermore, we present an original assessment and validation of geodetic OHC in the North Atlantic by combining estimates of Meridional Heat Transport (MHT) from basin-wide mooring arrays (RAPID and OSNAP), estimates of air-sea heat fluxes from atmospheric reanalysis data (ERA5), and OHC estimates from the space geodetic and in-situ products. These MHT estimates show good agreement with in-situ measurements from the RAPID array and the OSNAP section (correlation of 0.8), validating the capability of space geodetic observations (Meyssignac et al., 2024). While the space geodetic method offers significant improvements, challenges remain in accurately estimating OHC in certain regions, such as the Mediterranean Sea, due to unique circulation patterns and limited data availability. Further research is needed to refine our approach and address these challenges.
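The core of the space geodetic approach (steric sea level as the difference between altimetric total sea level and the gravimetry-derived ocean-mass component, converted to heat via an expansion-efficiency coefficient) can be sketched as below. All numbers, including the IEEH value, are purely illustrative, not the study's results:

```python
import numpy as np

# Illustrative global-mean time series (mm), not real data
altimetry_gmsl = np.array([0.0, 3.5, 7.1, 10.4])   # total sea level (altimetry)
barystatic     = np.array([0.0, 2.0, 4.1, 6.0])    # ocean-mass component (gravimetry)

# Expansion efficiency of heat: steric rise per unit of stored heat;
# this value is an illustrative placeholder, not the computed IEEH
ieeh_mm_per_zj = 0.12   # mm of steric sea level per ZJ (10^21 J)

steric = altimetry_gmsl - barystatic        # thermosteric component (mm)
ohc_change_zj = steric / ieeh_mm_per_zj     # ocean heat content change (ZJ)
```

In practice the IEEH coefficient varies regionally and is itself estimated from in-situ temperature and salinity profiles, which is why its improved calculation matters for the regional-scale estimates discussed above.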
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Climate Evolution from Spectral Radiance Trends Analysis: a Comparative Study of IASI and the EC-Earth Climate Model

Authors: Stefano Della Fera, Federico Fabiano, Piera Raspollini, Marco Ridolfi, Jost Von Hardenberg, Ugo Cortesi
Affiliations: Institute of Applied Physics, National Research Council (IFAC-CNR), Institute of Atmospheric Sciences and Climate, National Research Council (ISAC-CNR), National Institute of Optics, National Research Council (INO-CNR), Department of Environment, Land and Infrastructure Engineering, Politecnico di Torino
Monitoring the Earth's spectral infrared radiation at the Top Of the Atmosphere (TOA), also known as Outgoing Longwave Radiation (OLR), offers valuable insights for the assessment of the planet's energy budget and for studying the evolution of key climate variables, including atmospheric temperature, surface temperature, greenhouse gas concentrations and clouds. The analysis of distinct spectral signatures and their variations allows the Earth system's responses to both internal and external radiative forcings to be quantified. For this reason, statistics derived from observed spectrally resolved radiances represent a powerful alternative diagnostic tool for evaluating the performance of climate models. Stable hyperspectral observations from space-based sensors began in the early 2000s, with the Atmospheric Infrared Sounder (AIRS, 2002), the Infrared Atmospheric Sounding Interferometer (IASI, 2007), and the Cross-track Infrared Sounder (CrIS, 2011) measuring the so-called Mid-Infrared region (approximately 650 to 2500 cm⁻¹) of the Earth emission spectrum. The significance of such observations is underscored by substantial investments in, and the design of, new space missions aimed at acquiring additional spectrally resolved data, in both the Mid-Infrared (e.g., IASI-NG) and the Far-Infrared (e.g., FORUM, PREFIRE) spectral regions (approximately 100 to 650 cm⁻¹). In the framework of the EMM (Earth Moon Mars) project [1], funded by the NRRP (National Recovery and Resilience Plan), a suite of radiative transfer models, analytical tools and instrumental datasets is exploited for both meteorological and climate-related applications. As part of the project, this study builds a climatology of monthly average radiances from 12 years (2008-2019) of IASI Metop-A L1-C data and compares it to synthetic spectra simulated online using the EC-Earth climate model [2], which has been equipped with the radiative transfer model σ-IASI [3, 4]. 
To ensure an accurate comparison between observed and simulated radiances and to properly account for the cloud distribution within the large EC-Earth grid boxes, the model grid is decomposed into a set of subcolumns at each time step, matching the spatial resolution of the IASI Field of View. The comparison of observed and simulated spectrally resolved radiances under all-sky conditions over the period 2008-2019 reveals the main spectral biases of the model and underscores the role of clouds in shaping the OLR at TOA. In more detail, the climate model exhibits a warm bias in the core of the CO2 band, in the atmospheric window and in the water vapor band. Despite these biases, the trends in simulated spectrally resolved OLR generally show good agreement with IASI observations and reveal a compensation of spectral errors in the estimation of integrated flux trends. To interpret the spectral biases, a set of kernels (derivatives of radiance with respect to specific variables) is computed over a three-year period using the RTTOV model [5]. A kernel analysis is then employed to decompose, in a linear approximation, the contributions of the key climate variables to the radiance trends, enabling the quantification of forcing and feedback from the spectral radiance trends. This analysis provides a concise overview of the evolution of the climate system over the period examined.
REFERENCES
[1] EMM - Earth-Moon-Mars project - PNRR, Mission 4, Component 2, Investment 3.1, Project IR000038, CUP C53C22000870006.
[2] Döscher, Ralf, et al. "The EC-Earth3 Earth system model for the Coupled Model Intercomparison Project 6." Geoscientific Model Development Discussions 2021 (2021): 1-90.
[3] Serio, Carmine, et al. "Simultaneous retrieval from the full IASI spectrum of cloud and atmospheric parameters using the new all-sky forward model sigma-IASI/F2N: the first day-night infrared retrieval of the Antarctica ozone hole." Remote Sensing of Clouds and the Atmosphere XXVIII. Vol. 12730. SPIE, 2023.
[4] Della Fera, Stefano, et al. "On the use of Infrared Atmospheric Sounding Interferometer (IASI) spectrally resolved radiances to test the EC-Earth climate model (v3.3.3) in clear-sky conditions." Geoscientific Model Development 16.4 (2023): 1379-1394.
[5] Matricardi, Marco. The generation of RTTOV regression coefficients for IASI and AIRS using a new profile training set and a new line-by-line database. Vol. 564. Reading: ECMWF, 2008.
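The linear kernel decomposition described in the abstract can be illustrated numerically: the spectral radiance trend is approximated as the sum of each variable's kernel (the radiance derivative) multiplied by that variable's trend. The wavenumber grid, kernel shapes and trend values below are invented placeholders, not RTTOV or IASI quantities.

```python
import numpy as np

# Illustrative wavenumber grid (cm^-1) over part of the mid-infrared
nu = np.linspace(650.0, 1400.0, 200)

# Placeholder kernels dL/dx_i for three variables (invented shapes and
# units), standing in for the RTTOV-computed Jacobians of the abstract
kernels = {
    "surface_temperature": 0.8 * np.exp(-((nu - 1100.0) / 150.0) ** 2),
    "co2":                -0.5 * np.exp(-((nu - 667.0) / 30.0) ** 2),
    "water_vapour":       -0.3 * np.exp(-((nu - 1300.0) / 80.0) ** 2),
}

# Placeholder per-decade trends in the underlying variables
trends = {"surface_temperature": 0.2, "co2": 1.0, "water_vapour": 0.5}

# First-order (linear) reconstruction of the spectral radiance trend:
# dL(nu)/dt ~ sum_i K_i(nu) * dx_i/dt
radiance_trend = sum(kernels[name] * trends[name] for name in kernels)
print(radiance_trend.shape)  # (200,): one trend value per spectral point
```

Comparing such a reconstruction against the observed spectral trend is what allows the forcing and feedback contributions to be separated term by term.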
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Novel joint retrieval of all components of the surface radiation budget

Authors: Anke Tetzlaff
Affiliations: MeteoSwiss
During the Third Continuous Development and Operations Phase (CDOP-3) of EUMETSAT, the Satellite Application Facility (SAF) on Climate Monitoring (CM SAF) extended its product portfolio with a Thematic Climate Data Record (CDR) of Regional Land Fluxes based on two sensors of the Meteosat suite of geostationary satellites: the Meteosat Visible and InfraRed Imager (MVIRI) and the Spinning Enhanced Visible and InfraRed Imager (SEVIRI). The Regional Land Fluxes CDR provides various parameters depicting surface states and radiation fluxes, including the Surface Radiative Balance (SRB), the Cloud Fractional Cover (CFC), the Land Surface Temperature (LST), the Evapotranspiration (ET) and the Latent (LE) and Sensible (H) Heat Fluxes. The CDR is achieved by consolidating and unifying previously separate developments in CM SAF, LSA SAF and the EUMETSAT Secretariat, and running them in a joint retrieval using the Meteosat Fundamental Climate Data Record. This unique concept ensures consistency among the CDR parameters. We focus here on the SRB product of the Regional Land Flux CDR. All components of the SRB - including the Surface Incoming Solar radiation (SIS, or solar irradiance), the Surface Albedo (SAL), the Surface Outgoing Longwave radiation (SOL) and the Surface Downward Longwave radiation (SDL) - are jointly retrieved using the CM SAF software “GeoSatClim” over the period 1983-2020. The SRB data record covers the area up to 65°N/S and 65°W/E. The CDR consists of hourly, daily and monthly means with a spatial resolution of 0.05 degrees. In this presentation, we show the unique concept of the Land Flux SRB algorithm. The SRB product and its individual components are validated against BSRN, GEBA and ASRB ground-based stations and compared with other global SRB products such as ERA5-Land, ISCCP-FH and CM SAF CLARA. Overall, the SRB monthly mean absolute bias reaches the target accuracy of 15 W/m² with a decadal stability of less than 2 W/m²/decade.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Consistent Estimation of Surface Radiation Budget Components from MODIS Observations Using Artificial Intelligence

Authors: Jianglei Xu, Professor Shunlin Liang, Dr. Han Ma, Dr. Yongzhe Chen, Dr. Wenyuan Li
Affiliations: University of Hong Kong
The surface radiation budget (SRB) represents the balance between surface heating through incoming solar radiation and cooling through the emission of thermal radiation. Changes in the SRB lead to climatological responses at different scales, so accurate estimation of SRB components at regional and global scales is critical for understanding the Earth system's climate change. However, current satellite-derived SRB products, retrieved with multiple separate algorithms, have large uncertainty and poor consistency. We propose the SRB-constrained multi-task learning deep neural network (MTL-Net) to jointly estimate all of the global daily mean SRB components at 1 km resolution from Moderate Resolution Imaging Spectroradiometer (MODIS) observations. A combination of MODIS top-of-atmosphere (TOA) reflectance from bands 1–5, 7, and 19; thermal radiance from bands 27–29 and 31–34; sun-sensor observing geometry and elevation; and Global LAnd Surface Satellite (GLASS)-MODIS surface longwave radiation was used as the predictors to jointly retrieve all of the SRB components. Using globally distributed measurements, the validation results showed that the MODIS-derived SRB estimates have small uncertainties. The root-mean-square errors of the estimates were 28.95, 14.65, and 25.11 W m⁻² for downward, upward, and net shortwave radiation; 20.52, 14.83, and 17.44 W m⁻² for downward, upward, and net longwave radiation; and 24.15 W m⁻² for all-wave net radiation. The underestimations of downward and net shortwave radiation, as well as downward longwave radiation, in existing SRB products are well corrected; compared to other SRB products, the average uncertainty of the MODIS-derived estimates decreased by 3.63% for shortwave radiation and by 1.5% for longwave radiation.
Because of the SRB balance constraint, the MODIS-derived SRB components show improved consistency compared with the GLASS-MODIS SRB products, reducing the magnitude of the GLASS-MODIS SRB inconsistency by 25.37%. The MODIS-derived SRB estimates show good spatial consistency with the CERES and ERA5 products. Our analysis illustrates that different satellite diurnal observation frequencies are needed to estimate the daily mean SRB components. The number of MODIS diurnal observations from the twin satellites broadly meets the requirements for daily downward and upward longwave radiation estimation, but not for the other radiation components. We found that the thermal radiances from MODIS bands 32 and 34 make a significant contribution to the shortwave radiation estimation due to the shortwave radiative effect of stratocumulus, while TOA reflectance from MODIS bands 1, 4, 5, and 19 facilitates the estimation of longwave radiation because of the presence of stratus. Extensive validation shows that the proposed method is an effective scheme for retrieving the SRB components with high accuracy and consistency. We have generated this new set of MODIS-derived SRB component products, which will provide new observational evidence for the variability of the SRB over the past two decades.
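The abstract does not give the exact form of MTL-Net's loss function, but the role of an SRB balance constraint in multi-task learning can be sketched as a penalty that pushes the predicted net flux toward downward minus upward radiation. The function name, penalty weight and penalty form below are illustrative assumptions, not the published method.

```python
import numpy as np

def srb_constrained_loss(pred, target, lam=0.1):
    """Multi-task MSE plus a physical energy-balance penalty.

    `pred` and `target` map component names to arrays (W m^-2). The
    penalty pushes predictions toward net = down - up. This is only a
    sketch of the balance-constraint idea; the loss actually used by
    MTL-Net may differ.
    """
    mse = sum(np.mean((pred[k] - target[k]) ** 2) for k in target)
    balance = np.mean((pred["net"] - (pred["down"] - pred["up"])) ** 2)
    return mse + lam * balance

# Toy example: a prediction whose components violate the balance is
# penalised beyond its plain multi-task MSE
pred = {"down": np.array([200.0]), "up": np.array([40.0]), "net": np.array([150.0])}
target = {"down": np.array([200.0]), "up": np.array([40.0]), "net": np.array([160.0])}
print(srb_constrained_loss(pred, target))  # 100 (MSE) + 0.1 * 100 (balance) = 110.0
```

Coupling the tasks through such a term is what enforces mutual consistency among the jointly retrieved components.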
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: The Earth Climate Observatory space mission concept for the monitoring of the Earth Energy Imbalance

Authors: Steven Dewitte, Thorsten Mauritsen, Benoît Meyssignac, Thomas August, Luca Schifano, Lien Smeesters, Rémy Roca, Helen Brindley, Jacqueline Russell, Nicolas Clerbaux, Rainer Hollmann, Linda Megner, Margit Haberreiter, Joerg Gumbel, Jochem Marotzke, Jérôme Riedi, Aku Riihelä, Tim Trent, Manfred Wendisch
Affiliations: Royal Observatory of Belgium, Stockholm University, Laboratoire d'études en Géophysique et Océanographie Spatiales, European Space Agency, Vrije Universiteit Brussel, Imperial College London, Royal Meteorological Institute of Belgium, Deutscher Wetterdienst, Physical-Meteorological Observatory and World Radiation Center, Max Planck Institute for Meteorology, Lille University, Finnish Meteorological Institute, Earth Observation Science/National Centre for Earth Observation, School of Physics & Astronomy, University of Leicester, Leipzig University
Monitoring the Earth Energy Imbalance (EEI) is of prime importance for a predictive understanding of climate change. Furthermore, monitoring of the EEI gives an early indication of how well mankind is doing in implementing the Paris Climate Agreement. EEI is defined as the small difference between the incoming energy the Earth receives from the Sun and the outgoing energy lost by Earth to space. Both the incoming solar and the terrestrial outgoing energy are of the order of 340 W/m2 at the global annual mean level, while the EEI is of the order of 1 W/m2. The EEI accumulates in the Earth climate system, particularly in the oceans, due to their substantial heat capacity, and results in global temperature rise. Currently the best estimates of the absolute value of the EEI, and of its long term variation, are obtained from in situ observations, with a dominant contribution of the time derivative of the Ocean Heat Content (OHC). These in situ EEI observations can only be made over long time periods, typically a decade or longer. In contrast, with direct observations of the EEI from space, in principle the EEI can be measured at the annual mean time scale. However, the EEI is currently poorly measured from space, due to two fundamental challenges. The first fundamental challenge is that the EEI is the difference between two opposing terms of nearly equal amplitude. Currently, the incoming solar radiation and outgoing terrestrial radiation are measured with separate instruments, which means that their calibration errors are added and overwhelm the signal to be measured. The current error on the direct measurement of the EEI is of the order of 5 W/m2, significantly larger than the signal to be measured of the order of 1 W/m2. To make significant progress on this challenge, a differential measurement using identical intercalibrated instruments to measure both the incoming solar radiation and the outgoing terrestrial radiation is needed.
The second fundamental challenge is that the outgoing terrestrial radiation has a systematic diurnal cycle. Currently, the outgoing terrestrial radiation is sampled from the so-called morning and afternoon Sun-synchronous orbits, complemented by narrow band geostationary imagers. Recently the sampling from the morning orbit was abandoned. The sampling of the diurnal cycle can be improved, for example, by using two orthogonal 90° inclined orbits which give both global coverage and a statistical sampling of the full diurnal cycle at seasonal time scale. Other alternatives being investigated are orbits with 82 or 73-degree inclinations, which precess to obtain better sampling of the diurnal and annual cycles but require filling in the polar caps. A complementary Sun-synchronous orbit would provide synergies with several other systems. For understanding the radiative forcing – e.g. aerosol radiative forcing - and climate feedback – e.g. ice albedo feedback - mechanisms underlying changes in the EEI, and for climate model validation, it is necessary to separate the Total Outgoing Radiation (TOR) spectrally into the two components of the Earth Radiation Budget (ERB), namely the Reflected Solar radiation (RSR) and Outgoing Longwave Radiation (OLR) and to map them at relatively high spatial resolution. To intercalibrate the high-accuracy EEI measurements and the high-resolution ERB measurements, the ERB measurements need to be made with full angular coverage. This full angular coverage will also be beneficial for reducing the angular conversion error when converting high-resolution radiances to top-of-the-atmosphere high-resolution flux estimates. The Earth Climate Observatory (ECO) mission concept was recently selected by the European Space Agency as one of the 4 candidate Earth Explorer 12 missions that will be further studied in Phase 0 until mid-2026.
The current paper provides a broad overview of the ECO mission objectives, the mission requirements, and the key elements of a baseline mission concept. During Phase 0, the ECO mission concept will be further elaborated in two parallel industrial studies, which may or may not adopt or refine the elements of the baseline concept.
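The first fundamental challenge above can be quantified with elementary error propagation: independent calibration errors of two separate instruments add in quadrature when the measurements are differenced, whereas a differential measurement with identical intercalibrated instruments cancels the common-mode calibration error. The numerical values below are illustrative assumptions, chosen only to match the orders of magnitude quoted in the abstract.

```python
import math

signal_eei = 1.0     # W/m^2, order of magnitude of the EEI
sigma_in = 2.5       # assumed absolute calibration error, incoming-solar instrument
sigma_out = 2.5      # assumed absolute calibration error, outgoing-terrestrial instrument

# Separate instruments: independent calibration errors add in quadrature
# when the two measurements are differenced, swamping the ~1 W/m^2 signal
sigma_separate = math.hypot(sigma_in, sigma_out)

# Differential measurement with identical intercalibrated instruments:
# the common-mode calibration error cancels in the difference, leaving
# only a (much smaller, assumed) residual relative error per instrument
sigma_relative = 0.1
sigma_differential = math.sqrt(2.0) * sigma_relative

print(round(sigma_separate, 2), round(sigma_differential, 2))  # 3.54 0.14
```

Only in the second case does the measurement error fall below the ~1 W/m2 signal, which is why the differential concept is central to the mission.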
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: The Earth Climate Observatory space mission concept for innovative continuity in the monitoring of the Earth Outgoing Longwave Radiation

Authors: David Vannerom, Deniz Poyraz, Luca Schifano, Lien Smeesters, Thomas August, Steven Dewitte
Affiliations: Royal Observatory of Belgium, Vrije Universiteit Brussel, European Space Agency
Monitoring the Earth Energy Imbalance (EEI) is of prime importance for a predictive understanding of climate change. Furthermore, monitoring of the EEI gives an early indication of how well mankind is doing in implementing the Paris Climate Agreement. Currently the best estimates of the absolute value of the EEI, and of its long term variation, are obtained from in situ observations, with a dominant contribution of the time derivative of the Ocean Heat Content (OHC). These in situ EEI observations can only be made over long time periods, typically a decade or longer. In contrast, with direct observations of the EEI from space, in principle the EEI can be measured at the annual mean time scale. However, the EEI is currently poorly measured from space, due to two fundamental challenges. The first fundamental challenge is that the EEI is the difference between two opposing terms of nearly equal amplitude. Currently, the incoming solar radiation and outgoing terrestrial radiation are measured with separate instruments, which means that their calibration errors are added and overwhelm the signal to be measured. The current error on the direct measurement of the EEI is of the order of 5 W/m2, significantly larger than the signal to be measured of the order of 1 W/m2. To make significant progress on this challenge, a differential measurement using identical intercalibrated radiometers to measure both the incoming solar radiation and the outgoing terrestrial radiation is needed. The second fundamental challenge is that the outgoing terrestrial radiation has a systematic diurnal cycle. Currently, the outgoing terrestrial radiation is sampled from the so-called morning and afternoon Sun-synchronous orbits, complemented by narrow band geostationary imagers. Recently the sampling from the morning orbit was abandoned.
The sampling of the diurnal cycle can be improved, for example, by using two orthogonal 90° inclined orbits which give both global coverage and a statistical sampling of the full diurnal cycle at seasonal time scale. Other alternatives being investigated are orbits with 82 or 73-degree inclinations, which precess to obtain better sampling of the diurnal and annual cycles but require filling in the polar caps. A complementary Sun-synchronous orbit would provide synergies with several other systems. For understanding the radiative forcing – e.g. aerosol radiative forcing - and climate feedback – e.g. ice albedo feedback - mechanisms underlying changes in the EEI, and for climate model validation, it is necessary to separate the Total Outgoing Radiation (TOR) spectrally into the two components of the Earth Radiation Budget (ERB), namely the Reflected Solar radiation (RSR) and Outgoing Longwave Radiation (OLR) and to map them at relatively high spatial resolution. To intercalibrate the high-accuracy EEI measurements and the high-resolution ERB measurements, the ERB measurements need to be made with full angular coverage. This full angular coverage will also be beneficial for reducing the angular conversion error when converting high-resolution radiances to top-of-the-atmosphere high-resolution flux estimates. The state-of-the-art observation of the OLR is provided by the CERES scanning 3-channel broadband radiometer on the Sun-synchronous afternoon orbit satellites Aqua, Suomi NPP and NOAA 20. A single Earth Venture Continuity mission is foreseen with Libera on JPSS-4. We propose an innovative continuity of those OLR measurements by replacing the scanning broadband radiometer with multispectral wide field of view cameras. The wide field of view allows full angular coverage, providing the potential for a significant reduction of the dominant angular conversion error.
To realise this potential we propose to develop an innovative Deep Learning-based angular conversion method. The multispectral bands of the camera should make it possible to reconstruct the broadband OLR with state-of-the-art accuracy. The spatial resolution of the cameras should be sufficient to discriminate cloudy from clear-sky scenes. For the calibration of the cameras we propose an on-board shutter acting as a flat-plate blackbody for the offset determination, deep space views for the gain determination, and cross-calibration with the sun-earth radiometer for the final broadband calibration directly tied to the incoming solar radiation.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Space accelerometry to measure the Earth Energy Imbalance

Authors: Manuel Rodrigues, Nolwenn Portier, Bruno Christophe, Kristen Maquaire, Alexandre Couhert, Benoit Meyssignac
Affiliations: Onera, GET, LEGOS
The difference between the incoming solar radiation and the outgoing longwave radiation at the top of the atmosphere defines the Earth's radiative Energy Imbalance (EEI). Measuring the EEI is fundamental to estimating the effect of anthropogenic greenhouse gas (GHG) emissions on our climate system. Currently, the EEI is known with a stability of ~+/- 0.2 W/m²/decade and a precision of ~+/- 0.3 W/m²/month using space radiometry. However, the estimate of the time-mean EEI from radiometry is limited to an accuracy of ~+/- 2.5 W/m² because of limited in-flight absolute calibration. An accuracy of +/- 0.3 W/m² on the time-mean EEI can, however, be achieved with the heat inventory approach, which uses in-situ oceanic and geodetic (gravimetry and altimetry) measurements to estimate the ocean heat uptake. This level of precision remains insufficient to evaluate decadal-scale EEI variations induced by solar cycles, volcanic eruptions and variations in GHG emissions. An accelerometer space mission is an innovative solution which derives EEI estimates from a measurement of the Earth radiation pressure on spacecraft. This complementary method consists of measuring the acceleration induced by radiation pressure on a spherical satellite with a perfectly reflecting or absorbing surface. In this presentation we propose a mission concept using ONERA's electrostatic accelerometer technology flown on previous space geodesy and fundamental physics missions (CHAMP, GOCE, GRACE, GRACE-FO, MICROSCOPE and, soon, MAGIC) to estimate the radiation pressure from the Earth. We evaluate the potential performance of the mission and analyze whether it could meet current scientific needs.
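The measurement principle can be illustrated with the standard radiation-pressure formula: a sphere of cross-section A and mass m under a normally incident flux Φ experiences an acceleration a = (Φ/c)(A/m)(1+ρ), where ρ is the surface reflectivity. The satellite parameters below are placeholders for an order-of-magnitude estimate, not mission values.

```python
C = 299_792_458.0  # speed of light, m/s

def radiation_pressure_accel(flux_w_m2, area_m2, mass_kg, reflectivity=0.0):
    """Acceleration of a sphere under normal-incidence radiation pressure.

    a = (Phi / c) * (A / m) * (1 + rho), with rho = 0 for a perfectly
    absorbing surface and rho = 1 for a perfectly reflecting one.
    """
    return flux_w_m2 / C * area_m2 / mass_kg * (1.0 + reflectivity)

# Placeholder values (not mission parameters): ~240 W/m^2 of outgoing
# terrestrial radiation on a 1 m^2 cross-section, 100 kg absorbing sphere
a = radiation_pressure_accel(240.0, 1.0, 100.0)
print(f"{a:.1e} m/s^2")  # of order 1e-8 m/s^2
```

Accelerations at this level are within the demonstrated sensitivity range of electrostatic accelerometers, which is what makes the approach plausible.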
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: The Earth Climate Observatory space mission concept for innovative continuity in the monitoring of the Earth Reflected Solar Radiation

Authors: Deniz Poyraz, David Vannerom, Luca Schifano, Lien Smeesters, Thomas August, Steven Dewitte
Affiliations: Royal Observatory of Belgium, Vrije Universiteit Brussel, European Space Agency
Monitoring the Earth Energy Imbalance (EEI) is of prime importance for a predictive understanding of climate change. Furthermore, monitoring of the EEI gives an early indication of how well mankind is doing in implementing the Paris Climate Agreement. Currently the best estimates of the absolute value of the EEI, and of its long term variation, are obtained from in situ observations, with a dominant contribution of the time derivative of the Ocean Heat Content (OHC). These in situ EEI observations can only be made over long time periods, typically a decade or longer. In contrast, with direct observations of the EEI from space, in principle the EEI can be measured at the annual mean time scale. However, the EEI is currently poorly measured from space, due to two fundamental challenges. The first fundamental challenge is that the EEI is the difference between two opposing terms of nearly equal amplitude. Currently, the incoming solar radiation and outgoing terrestrial radiation are measured with separate instruments, which means that their calibration errors are added and overwhelm the signal to be measured. The current error on the direct measurement of the EEI is of the order of 5 W/m2, significantly larger than the signal to be measured of the order of 1 W/m2. To make significant progress on this challenge, a differential measurement using identical intercalibrated radiometers to measure both the incoming solar radiation and the outgoing terrestrial radiation is needed. The second fundamental challenge is that the outgoing terrestrial radiation has a systematic diurnal cycle. Currently, the outgoing terrestrial radiation is sampled from the so-called morning and afternoon Sun-synchronous orbits, complemented by narrow band geostationary imagers. Recently the sampling from the morning orbit was abandoned.
The sampling of the diurnal cycle can be improved, for example, by using two orthogonal 90° inclined orbits which give both global coverage and a statistical sampling of the full diurnal cycle at seasonal time scale. Other alternatives being investigated are orbits with 82 or 73-degree inclinations, which precess to obtain better sampling of the diurnal and annual cycles but require filling in the polar caps. A complementary Sun-synchronous orbit would provide synergies with several other systems. For understanding the radiative forcing – e.g. aerosol radiative forcing - and climate feedback – e.g. ice albedo feedback - mechanisms underlying changes in the EEI, and for climate model validation, it is necessary to separate the Total Outgoing Radiation (TOR) spectrally into the two components of the Earth Radiation Budget (ERB), namely the Reflected Solar radiation (RSR) and Outgoing Longwave Radiation (OLR) and to map them at relatively high spatial resolution. To intercalibrate the high-accuracy EEI measurements and the high-resolution ERB measurements, the ERB measurements need to be made with full angular coverage. This full angular coverage will also be beneficial for reducing the angular conversion error when converting high-resolution radiances to top-of-the-atmosphere high-resolution flux estimates. The state-of-the-art observation of the RSR is provided by the CERES scanning 3-channel broadband radiometer on the Sun-synchronous afternoon orbit satellites Aqua, Suomi NPP and NOAA 20. A single Earth Venture Continuity mission is foreseen with Libera on JPSS-4. We propose an innovative continuity of those RSR measurements by replacing the scanning broadband radiometer with multispectral wide field of view cameras. The wide field of view allows full angular coverage, providing the potential for a significant reduction of the dominant angular conversion error.
To realise this potential we propose to develop an innovative Deep Learning-based angular conversion method. The multispectral bands of the camera should make it possible to reconstruct the broadband RSR with state-of-the-art accuracy. The spatial resolution of the cameras should be sufficient to discriminate cloudy from clear-sky scenes. For the calibration of the cameras we propose an on-board shutter for the dark current determination, vicarious calibration for the gain determination, and cross-calibration with the sun-earth radiometer for the final broadband calibration directly tied to the incoming solar radiation.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Direct satellite measurements of the radiative forcing of long-lived halogenated gases

Authors: Simon Whitburn, Lieven Clarisse, Professor Keith Shine, Dr. Hélène De Longueville, Pierre Coheur, Dr. Cathy Clerbaux, Dr. Andy W. Delcloo
Affiliations: Université Libre de Bruxelles, BLU-ULB research Center, Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing (SQUARES), Royal Meteorological Institute (RMI), Atmospheric Composition, Measurements and Modelling (ACM2), University of Bristol, School of Chemistry, LATMOS/IPSL, Sorbonne Université, CNRS, Department of Meteorology, Brian Hoskins Building, University of Reading
Long-lived halogenated compounds are potent greenhouse gases. Following the Montreal Protocol, the concentrations of many of these substances have evolved rapidly in the atmosphere. Until now, their Instantaneous Radiative Efficiencies (IREs) - a crucial parameter when assessing the impact of a driver on the Earth’s climate - were evaluated from radiative transfer model calculations. Here, we derive clear-sky IREs at the top-of-the-atmosphere (TOA) for a series of halogenated gases based directly on long-term changes in the Earth’s spectrally resolved Outgoing Longwave Radiation (SR-OLR). The latter is derived from 15 years of measurements of the Infrared Atmospheric Sounding Interferometer (IASI) on board the suite of Metop satellites. These changes in the SR-OLR contain the spectral signatures of absorbing species with globally evolving atmospheric concentrations. To calculate the IREs, a Jacobian is fitted to the trend signature of each of the identified halogenated species and integrated. The result is then divided by the known change in concentration over the considered period (2008-2022). The great advantage of this method is that no computationally expensive radiative transfer model calculations or assumptions about the atmospheric state are required. In total, we derive clear-sky IREs at TOA for five long-lived halogenated species: CFC-11, CFC-12, SF6, HCFC-22 and HFC-134a. For each, a detailed uncertainty budget is also estimated, ranging from about 14% for CFC-12 to 39% for SF6. The derived IREs agree with values converted from the literature for all species within the estimated uncertainty range. With the increasing period covered by IASI and with the future launch of IASI-NG, the accuracy of the IASI-derived IREs is expected to improve over time for many gases, as the spectral signature associated with their change in concentration becomes more pronounced in the spectrum.
In addition, new signatures related to other halogenated compounds should appear more clearly (e.g. CF4, CCl4), allowing us to retrieve their IREs.
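The procedure described above (fit a Jacobian to the trend signature, integrate it, divide by the known concentration change) can be sketched on synthetic data. The Jacobian shape, noise level and concentration change below are invented for illustration and do not represent any of the gases studied.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = np.linspace(800.0, 1200.0, 400)   # wavenumber grid, cm^-1
dnu = nu[1] - nu[0]

# Hypothetical Jacobian shape for one halogenated gas (radiance change
# per unit concentration, arbitrary units)
jacobian = np.exp(-((nu - 1000.0) / 20.0) ** 2)

# Synthetic "observed" SR-OLR trend: the gas signature scaled by its
# (known) concentration change, plus measurement noise
delta_conc = 0.05
observed_trend = jacobian * delta_conc + rng.normal(0.0, 0.001, nu.size)

# Least-squares fit of the Jacobian amplitude to the trend signature
scale = np.dot(jacobian, observed_trend) / np.dot(jacobian, jacobian)

# Integrate the fitted spectrum, then divide by the concentration change
ire = (scale * jacobian).sum() * dnu / delta_conc
print(round(ire, 1))
```

The same single-gas fit generalises to fitting several species' Jacobians simultaneously when their trend signatures overlap spectrally.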
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: D.05.01 - POSTER - Showcasing EO Data and Information Services

The objective of this session is to introduce end-users to various EO data and information services, with the goal of increasing user engagement. These services, which can be accessed through online portals, provide significant support to EO user communities worldwide.

In our digitally driven society, the visual appeal of a website plays a crucial role in facilitating user navigation, information retrieval and data discovery. These portals can offer valuable information for end-users, including:
- Wiki-style repositories containing comprehensive and up-to-date information presented in an easily digestible format suitable for a diverse audience.
- Real-life stories from end-users illustrating how their use of data has contributed to society.
- Real-time dashboards providing insights into data availability and statistics.
- Updates on maintenance information affecting data availability and relevant news.
- 3D visualizations showcasing popular EO datasets.
- User feedback tools for collecting ideas for future enhancements to the portal.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Tarkka: A Comprehensive EO Service for Environmental Monitoring and Policy Support

Authors: Hanna Alasalmi, Mr Vesa Keto, Dr Jenni Attila, Dr Sampsa Koponen, Mr Mikko Kervinen, Ms Eeva Bruun, Mr Sakari Väkevä, Dr Kirsikka Heinilä, Mr Timo Pyhälahti, Jesse Anttila, Mr Eero Alkio, Mr Yki Laine
Affiliations: Finnish Environment Institute (Syke)
Earth Observation provides valuable information about the status of the environment, sources of pollution and biodiversity to support various EU, regional, and national policies and directives. These include the EU’s Integrated Maritime Policy, maritime spatial planning, the Water Framework and Marine Strategy Framework Directives, the Nature Restoration Law, and – on a regional level – the Helsinki Convention (HELCOM). Providing convenient access to Earth Observation (EO) materials, along with GIS, model, and in situ data, is essential for expanding the use of satellite data. Since 2017, the Finnish Environment Institute (Syke) has been offering the EO service Tarkka (tarkka.syke.fi), a web application that provides users with both satellite and in situ data and tools for visualization. Due to its popularity, a next generation of the service was launched in 2022. Tarkka is built on open-source solutions and leverages Syke’s open data offerings. The water quality data provided by Syke’s services cover the Baltic Sea region, especially Finnish coastal areas and lakes. Although cloud cover often obstructs useful observations, the amount of data available is still substantial. The development of Tarkka has focused on user experience, aiming to provide easier access to cloudless observations from a large amount of data. Tarkka includes a continuously expanding list of EO datasets and has also integrated in situ data from monitoring stations, automated stations, citizen observations, and ships (FerryBox), as well as modeled data and related GIS data. This information is presented as maps, image galleries, and regional data in statistical time series. To meet user needs, such as regional datasets for Water Framework Directive purposes, an advanced analysis tool has been developed to visualize the statistical datasets of EO and station sampling data. The analysis tools include various functionalities such as time series, histograms, pie charts, and more.
ESA has supported this activity by providing funding (under the EO4society programme) through the BalticAIMS project (www.syke.fi/projects/BalticAIMS, contract number 4000133565/20/I-NB). The goal of the first funding period (2021-2023) was to demonstrate an integrated data approach for essential processes of land and coastal water areas to better analyze and visualize their interactions. In fall 2024 the project was extended by 18 months with the goal of further improving the service and the data it provides. The presentation will demonstrate the latest tools and capabilities available in Tarkka for use cases related to monitoring the status of the environment for directives and Maritime Spatial Planning.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Sentinel-2 Global Mosaic (S2GM): Copernicus Service for On-Demand Global Cloud Free Mosaics

Authors: Jean-sébastien Carrière, Christophe Lerebourg, PhD Jan Peter Muller, PhD Rui Song, Professor Philip Lewis, Mr. Feng Yin, Frédéric Trastour, Nejia Ben Nasr, Yuliya Tarabalka, PhD Nadine Gobron
Affiliations: ACRI-ST, University College London, LuxCarta, European Commission Joint Research Centre
The Sentinel-2 Global Mosaic service (S2GM; https://s2gm.land.copernicus.eu) is a component of the Copernicus Global Land Service providing Sentinel-2 surface reflectance time series composites. It primarily seeks to support the sustainable management of natural resources by offering Analysis Ready Data (ARD) through the interactive Mosaic portal. Users can tailor mosaics to their specific requirements in terms of area of interest, compositing period, included bands, file format, and metadata content, and conveniently manage product requests via the S2GM portal. S2GM is funded by the European Commission and led by the Joint Research Centre under Framework Contract number 942551. S2GM has been in operation since 2018, providing composite scenes based on the best representative pixels. The three Sentinel-2 spatial resolutions are available over compositing periods ranging from one day to one year, on a global scale. In its first phase (2018-2022), S2GM provided temporal mosaics and time series. In the second phase, several new features were introduced, including RGB and NIR-RG mosaics, alternative cloud masks, an alternative Atmospheric Correction branch, and additional products such as Albedo, LAI, FAPAR and angular-corrected reflectance, as well as a basic mode designed for non-expert users. The ongoing evolution will continue to add new features, such as Burnt Areas, along with potential new products, including Water Quality and Vegetation products derived from SL2P. The most significant update within S2GM phase 2 is the implementation of the so-called SIAC branch: an alternative Level-2 processor (Sensor Independent Atmospheric Correction) providing per-pixel uncertainties. In addition to an alternative BRF product, the SIAC branch also allows for the derivation of angular-corrected mosaics, known as NBAR (Normalized Bottom of Atmosphere Reflectance).
Furthermore, this branch can be extended to produce Albedo products (both black- and white-sky Albedos), which can then be followed by the creation of vegetation mosaics, such as LAI and FAPAR derived from the TIP algorithm (Two-stream Inversion Package). An API is available, enabling users to place automated orders without accessing the web portal. Additionally, predefined mosaics, such as those for the EU27 countries and various REDD+ areas, are already processed and accessible on the web portal. Global mosaics are currently being processed and will soon be available for direct download, significantly reducing processing time for users. Data usage and data uptake are of prime importance to the service. At the time of writing this abstract, the S2GM service has over 3,000 registered users, with a steady rate of 100 to 150 new users per month. More than 43,000 orders have been produced and delivered, representing about 2,830 million km2: more than five times the full Earth surface (510 million km2). Operational users are now using S2GM on a regular basis for applications including forest and agricultural monitoring, as well as land use.
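The idea of per-pixel compositing can be illustrated with a minimal numpy sketch. This is not the S2GM "best representative pixel" algorithm itself, which is considerably more elaborate; a plain cloud-masked median over the time stack is assumed here purely for illustration:

```python
import numpy as np

def composite_best_pixel(stack, cloud_mask):
    """Per-pixel median composite over a time stack, ignoring cloudy observations.

    stack      : float array (time, band, y, x) of surface reflectance
    cloud_mask : bool array (time, y, x), True where the pixel is cloudy
    Returns a (band, y, x) mosaic; pixels cloudy on every date become NaN.
    """
    # Replace cloudy observations with NaN so they drop out of the statistic
    masked = np.where(cloud_mask[:, None, :, :], np.nan, stack)
    with np.errstate(all="ignore"):
        return np.nanmedian(masked, axis=0)
```

A compositing period then simply corresponds to the extent of the time axis: the longer the period, the more candidate observations per pixel and the higher the chance of a cloud-free mosaic value.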
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Are our satellite data fit for your purpose? – The C3S approach to delivering quality information to users

Authors: Andre Obregon, Dick Dee, Carlo Buontempo, João Martins, Christopher Goddard, Federico Serva, Chunxue Yang
Affiliations: ECMWF, Planet-A Consulting, CNR
The Copernicus Climate Change Service (C3S) provides reliable access to consistent and quality-assured information about climate change on global and regional scales to support adaptation and mitigation policies of the European Union and to increase public awareness of climate change and its impact on society. All C3S data products and most of its services are free and open to unrestricted use worldwide. A total of 23 Essential Climate Variables (ECVs) are currently covered by the satellite products distributed by C3S, spanning six thematic domains. A core objective for C3S is to simplify data discovery and access to quality-controlled data, to make it much easier to use those data effectively in a variety of applications and to ensure that all information provided is fit for purpose. The focus is on serving users who need climate information for policy development, environmental impact studies, business plans, public awareness, and many other purposes that extend beyond scientific analysis. Evaluation and quality control play a critical role, as they generate useful statements about the technical and scientific quality of the datasets available on the Climate Data Store (CDS), presented in a form that can be mapped directly to user requirements associated with specific application contexts. The ambition is to enable users to select, locate, access, and process climate data that are fit for purpose, simply by expressing their requirements in their own words. Achieving this goal requires an effective strategy for obtaining clear and realistic requirements from users, combined with an operational framework for conducting independent checks and quality assessments on all service elements. The C3S approach is focused on generating helpful information for users who need to determine whether a dataset is fit for purpose, given their use case and any specific concerns about data quality; this would be supported by natural language processing and AI tools.
This novel approach has the potential to change the way users will search, find and access climate data in the future.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: VirES for Aeolus: A Service for Advanced EO Data Access

Authors: Daniel Santillan, Martin Pačes, Ahmed Behairi, Giuseppe Parrella, Tommaso Parrinello
Affiliations: EOX IT Services GmbH, European Space Agency (esa)
The VirES (Virtual Workspace for Earth-observation Scientists) for Aeolus is a virtual research platform offering a highly interactive data manipulation and retrieval interface for ESA's Aeolus mission products. It includes tools for studying, in space and time, the various atmospheric parameters measured by the Aeolus satellite. By offering a complete workspace to discover, visualize, access and manipulate data, VirES for Aeolus is an example of an advanced data access service for Earth Observation (EO) data. The overall objective of such a platform is to simplify the access to and exploitation of Aeolus data, attracting not only Aeolus mission experts but also a wider scientific community. As part of the VirES for Aeolus workspace, two main applications are available. The first is a web client application (https://aeolus.services) that provides a highly interactive web interface for 3D visualization of vertical data curtains, interactive data plotting, and analysis. Through this, the initial entry barrier to browsing and navigating the data is lowered, inviting users to further explore the offered dataset and functionalities. Through time, area, and product selection, the whole Aeolus mission dataset (L1 and L2 products) can be easily explored to find and analyze, for example, data related to specific events of interest. Expanding on that is the Virtual Research Environment (https://vre.aeolus.services), where an online development environment is made available for data manipulation and large-scale data access. The VRE is based on JupyterLab and offers a pre-configured development environment in which a wide range of tools commonly used to work with the data (e.g. matplotlib, numpy, Pandas, gdal, …) are available to the users. Access to the data is based on the VirES Python client, which allows for very flexible, granular and large-scale access to the data in commonly used formats, such as netCDF, pandas DataFrame or xarray.
The interface to the data supports a token-based authentication approach, which is already pre-configured within the VRE but also allows simple machine-to-machine access from external environments. The initial definition of the service as well as the selection of tools was carried out in collaboration with several Aeolus mission experts, such as those from DLR and LMU. With their support, the equally important creation of a wide range of examples and documentation (https://notebooks.aeolus.services) was undertaken. The examples included in the documentation are managed as an open-source repository (https://github.com/ESA-VirES/Aeolus-notebooks). They can be directly executed and edited within the VRE, providing a quick starting point for any development effort. Submissions of additional examples by third parties are very welcome, growing the knowledge base for all users.
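Access via the VirES Python client typically follows the pattern sketched below. This is an illustrative sketch based on the viresclient package: the collection and field names are assumptions chosen for the example, and the call requires an access token to already be configured (as it is inside the VRE):

```python
def fetch_rayleigh_winds(start, end):
    """Download Aeolus L2B Rayleigh wind results for a time window (sketch).

    Assumes the viresclient package is installed and an access token is
    configured; the collection and field names below are illustrative.
    """
    from viresclient import AeolusRequest  # imported lazily: needs viresclient installed

    request = AeolusRequest()
    request.set_collection("ALD_U_N_2B")
    request.set_fields(rayleigh_wind_fields=[
        "rayleigh_wind_result_start_time",
        "rayleigh_wind_result_wind_velocity",
    ])
    data = request.get_between(start_time=start, end_time=end, filetype="nc")
    return data.as_xarray()  # pandas DataFrame and file output are also supported
```

Called as, for example, `fetch_rayleigh_winds("2020-04-10T00:00:00Z", "2020-04-10T06:00:00Z")`, this would return an xarray Dataset ready for analysis with the tools pre-installed in the VRE.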
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Innovative Data Integration for Crisis Management in Fragile Settings: Insights from the Insula Platform

Authors: Alessandro Marin, Mr Davide Giorgiutti, Intel Desk Lead Lynn Dudenhöfer
Affiliations: CGI, HENSOLDT Analytics
In the modern data-driven landscape, the ability to merge heterogeneous data sources such as Earth Observation (EO) data and Open-Source Intelligence (OSINT) offers unparalleled potential for actionable insights. Insula, a cutting-edge big data analytics platform, exemplifies this capability by enabling the integration of diverse datasets into a unified environment. In a pilot initiative led by the ESA GDA Fragility, Conflict, and Security (FCS) thematic activity, Insula was deployed for the Asian Development Bank (ADB) to monitor population movements near the Afghanistan-Tajikistan border following the Taliban's resurgence in 2021. This initiative highlighted the platform's capability to address complex humanitarian challenges through data harmonization, advanced analytics, and scalable infrastructure. The project integrated diverse datasets, each contributing critical perspectives. For instance, DLR’s settlement extent data mapped human geography and urbanization trends, while e-GEOS provided high-resolution imagery for monitoring checkpoint activity and infrastructure changes. Additionally, OSINT sources like Janes offered insights into regional security dynamics, and HENSOLDT Analytics enriched the data pool with geolocated social media content reflecting conflict-related events and food security challenges. Together, these data sources painted a multidimensional picture of the situation, enabling stakeholders to identify and analyze key patterns. Insula’s architecture enables seamless data integration while offering scalability and robust analytics, crucial for real-time crisis management. The platform transforms raw datasets into actionable insights, serving as the operational hub for accessing, harmonizing, and visualizing disparate data sources. During the pilot, agile methodologies facilitated iterative enhancements, allowing ADB stakeholders to refine outputs by incorporating new datasets and functionalities. 
For example, OSINT analyses spurred sprints dedicated to high-resolution EO monitoring at critical border points, illustrating the adaptive potential of Insula in addressing emergent needs. The ability to integrate EO and OSINT data not only enhanced situational awareness but also supported informed decision-making. This proactive approach allowed ADB to prioritize resources, define targeted interventions, and adapt policies in response to evolving challenges. Beyond humanitarian contexts, Insula’s flexible design and advanced data-processing capabilities demonstrate its potential for broader applications in areas such as disaster response, resource allocation, and risk assessment. The success of this initiative underscores the value of harmonizing heterogeneous datasets to address multifaceted global challenges. Insula’s comprehensive data integration framework fosters resilience and responsiveness, enabling timely, evidence-based actions. By bridging the gap between raw data and strategic decisions, Insula exemplifies how multidimensional analytics can transform complex scenarios into opportunities for impactful interventions. Looking forward, the platform’s adaptability and scalability position it as a critical tool for future projects aiming to address geopolitical, environmental, and developmental challenges worldwide.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Evaluation and Quality Control of Satellite ECVs at C3S – An Innovative Way of Conveying Quality and Fitness for Purpose Information

Authors: Joao Paulo Martins, Andre Obregon, Chris Goddard, Federico Serva, Chunxue Yang, Joaquín Muñoz-Sabater
Affiliations: ECMWF, CNR
The Copernicus Climate Change Service (C3S) is the European Union flagship programme in charge of providing authoritative and high-quality information about the climate of the past, present and future. Among the several datasets provided by the service are the satellite-based Essential Climate Variables (ECVs). A total of 23 ECVs are currently covered by the satellite products distributed by C3S, spanning six thematic domains: Atmospheric Composition, Atmospheric Physics, Cryosphere, Land Hydrology, Land Biosphere and Ocean and Sea Ice. Most of these products are regularly updated, both by extending the Climate Data Records in time to incorporate the most recently acquired measurements, and through the release of new product versions (leveraging results obtained in research-oriented projects like the ESA-CCI), which may include data from more sensors or use improved algorithms and/or ancillary data. All the products are carefully validated and reviewed and are released together with a set of comprehensive documentation. On top of that, C3S developed a new Evaluation and Quality Control (EQC) function, in which external independent evaluators provide information on Quality and Fitness for Purpose for all datasets distributed by C3S. The EQC function is organised around three axes: 1) Quality Assurance, where compliance with a set of predefined user requirements is verified (encompassing Data Management, Data Records, Metadata and Documentation); 2) Quality Assessments, a series of short topical articles in the form of Jupyter Notebooks, aimed at answering user questions about different quality aspects of the data (e.g. accuracy, completeness, spatial consistency, time series homogeneity, etc.) in the context of specific applications; and 3) Fitness for Purpose, which consists of a fact sheet about the product, listing key strengths and example applications, as well as known limitations.
The EQC function takes full advantage of the product documentation and peer-reviewed literature associated with a given product and aims to provide a high-level and user-friendly view of that information, with the ultimate goals of 1) informing users of the best dataset to use in the context of their applications and 2) helping users decide how well a given dataset fits the requirements of their applications. In this presentation we discuss practical examples of these EQC contributions, covering the different domains of the ECV programme. In this way, we aim to demonstrate how each of the three axes around which the new EQC function was conceived is useful for understanding basic features of the data; how they are already helping to improve specific technical and scientific aspects; and, in general, how this approach adds value to the datasets distributed by C3S.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: HEDAVI platforms to discover and operate heritage missions

Authors: Dr. Serge Riazanoff, Grégory Mazabraud, Guillaume Aurel, Mr. Kévin Gross, Giuseppe Parrella, Mirko Albani
Affiliations: VisioTerra, ESA/ESRIN
The HEDAVI (HEritage DAta VIsualisation) project, initiated in 2016, seeks to exploit data from heritage missions (i.e. missions whose operational phase has ended) designed by ESA, such as ERS-1, ERS-2 and Envisat, as well as third-party missions (TPM) from other space agencies, such as Landsat (NASA) or ALOS (JAXA). Most of the current Copernicus Sentinel missions capitalise on the know-how in the design of instruments from heritage missions. This is particularly the case in the field of radar, where Sentinel-1/C-SAR benefits from the experience of the ERS/SAR and then Envisat/ASAR instruments. Similarly, the Sentinel-3/OLCI instrument improves on the MERIS instrument of the Envisat mission. Data from heritage missions are also useful in an operational context. For example, they allow us to preserve the memory of landscapes, land use and events, documenting, for instance, the evolution of forest cover, the expansion of urban areas, marine pollution, floods, soil degradation, etc. To raise awareness among a wide audience, the “HEDAVI Discover” platform presents stories in a variety of themes (land, sea, polar, natural hazards, man-made disasters, environmental changes, rare events). Each story includes one or more views, called “hyperlook(s)”, which specify a set of image layers in which the visitor can navigate in 3D. Syntheses of hundreds of Envisat/ASAR/WSM radar products are also processed on-the-fly and, in particular, orthorectified using a global Digital Elevation Model (DEM). All these products can be viewed throughout the mission lifetime (10 years for Envisat) and at an interval corresponding to the cycle time (35 days for Envisat). Visualisation can be performed at all scales up to the native ground sampling distance (e.g. 150 m for ASAR/WSM). More experienced users use the “HEDAVI Explorer” platform, which allows them to define areas of interest; select, process and merge products; and share and export them.
Processing can be configured using user-friendly interfaces or by editing POF-ML (Processing On-the-Fly Macro-Language) scripts. “HEDAVI Explorer” thus allows several dates to be combined into a colour composition, or time series to be created. These can draw on different datasets and be rendered according to a style that goes as far as calculating indicators (vegetation, soil moisture, water colour...). The aggregation of these time series enables statistics to be calculated that reflect, for instance, changes in land use. Registered users can design their own stories and submit them to enrich the “HEDAVI Discover” database.
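The multi-date colour composition mentioned above can be sketched in a few lines of numpy (an illustrative stand-in, not the POF-ML implementation): mapping three acquisition dates of the same band to the R, G and B channels makes change between dates show up as colour, while stable areas appear grey.

```python
import numpy as np

def multidate_rgb(date1, date2, date3, lo=0.0, hi=1.0):
    """Map three acquisition dates of one band to the R, G and B channels.

    Each input is a 2-D array of the same shape; lo/hi give the display
    stretch. Output is a (y, x, 3) array clipped to [0, 1].
    """
    rgb = np.stack([date1, date2, date3], axis=-1).astype("float32")
    return np.clip((rgb - lo) / (hi - lo), 0.0, 1.0)
```

A pixel whose value is identical on all three dates gets equal R, G and B (grey), while, say, a flooded pixel dark only on the last date would take on a yellowish tint.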
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Quantitative Management of Water Bodies: A Space-Based Decision Support Tool for Sustainable Hydrology

Authors: Gael Nicolas, Benjamin Tardy, Jérôme Maxant, Nicolas Vila, Brice Mora
Affiliations: Cs Group - France, SERTIT
Water resources face increasing pressure due to climate change, societal demands, and geopolitical tensions. To address these challenges, real-time monitoring of water bodies' spatial distribution and stored volumes is critical for sustainable development and informed decision-making by public institutions. In this context, the FR2030 project, "Innovative products and services for public water policies: quantitative and qualitative water management," jointly supported by the French Ministry of Ecological Transition and the French space agency CNES, aims to enhance water management capabilities. The project leverages space-based data from Sentinel-1 and Sentinel-2 to monitor approximately 5,000 water bodies in mainland France and its overseas territories from 2017 to 2026, the project end date. The methodology integrates CNES's Surfwater and Dem4water processing chains, with innovations in data pre- and post-processing. Detection is based on supervised classification and machine learning, including Random Forest algorithms. The large-scale data processing is supported by an optimized cloud architecture, aligning with sustainable development goals through reduced resource consumption. Key outputs include time series of water surface area, volume, and fill rate, as well as occurrences of high and low water levels and of tidal and dam drawdowns, over the 2017–2024 period of processed data. Emphasis was placed on the qualitative assessment of the data and the incorporation of an uncertainty value, calculated for each surface, volume, and fill rate estimate using a regression model. Learning is based on deviations between estimates and reference data (in situ measurements and water masks from VHSR optical imagery). The model predicts the absolute relative error for each observation date, using a boosted gradient regression tree that incorporates the lakes' morphological and geographical characteristics, as well as deviations from a seasonal normal determined through time series modeling.
The project culminated in the development of an operational online decision-support service, providing 3D geospatial visualisation and customizable dashboards (e.g., water bodies represented in their local environment). The portal enables stakeholders to explore data at various spatial and temporal scales, apply multiple filters, and integrate external datasets for comprehensive analysis. Updated in near real time (tri-weekly), the service supports both rapid decision-making during crises and long-term public policy planning. The portal provides a news section informing users about the latest news and updates. A dedicated section of the portal provides further information on the service, particularly on the data sources and methods employed. While Sentinel data underpins the project's success, challenges remain in analyzing small water bodies or those lacking bathymetric data for volume calculations. Future integration of data from the Sentinel-3 and SWOT missions will overcome these limitations by enabling direct measurements of surface and volume variations through altimetry. The combined use of Sentinel and SWOT data promises to significantly advance hydrological monitoring and sustainable water management strategies. This project highlights the key role of space-based Earth observation in addressing water management challenges. This multisource service not only meets immediate user needs, captured through recurring interviews with a representative set of user communities, but also lays the groundwork for enhanced water governance and climate resilience through the integration of cutting-edge space technologies.
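The error-prediction idea, a boosted ensemble of regression trees trained on deviations from reference data, can be illustrated with a minimal from-scratch sketch. This uses synthetic data and depth-1 trees (stumps) only; the project's actual model, features and training data are of course more elaborate.

```python
import numpy as np

def fit_stump(X, r):
    """Best depth-1 regression tree (feature, threshold, left/right means) for residuals r."""
    best, best_err = None, np.inf
    n, d = X.shape
    for j in range(d):
        order = np.argsort(X[:, j])
        xs, rs = X[order, j], r[order]
        csum, csq = np.cumsum(rs), np.cumsum(rs ** 2)
        for i in range(1, n):
            if xs[i] == xs[i - 1]:
                continue
            # Sum of squared errors of both halves, via prefix sums
            left_sse = csq[i - 1] - csum[i - 1] ** 2 / i
            rsum = csum[-1] - csum[i - 1]
            right_sse = (csq[-1] - csq[i - 1]) - rsum ** 2 / (n - i)
            if left_sse + right_sse < best_err:
                best_err = left_sse + right_sse
                best = (j, (xs[i] + xs[i - 1]) / 2, csum[i - 1] / i, rsum / (n - i))
    return best

def predict_stump(stump, X):
    j, t, left, right = stump
    return np.where(X[:, j] <= t, left, right)

def gbt_fit(X, y, n_rounds=100, lr=0.1):
    """Gradient boosting with stumps and squared loss: each round fits the residuals."""
    base = y.mean()
    pred = np.full(len(y), base)
    stumps = []
    for _ in range(n_rounds):
        s = fit_stump(X, y - pred)
        pred = pred + lr * predict_stump(s, X)
        stumps.append(s)
    return base, lr, stumps

def gbt_predict(model, X):
    base, lr, stumps = model
    pred = np.full(len(X), base, dtype=float)
    for s in stumps:
        pred = pred + lr * predict_stump(s, X)
    return pred
```

Trained on features such as lake morphology and seasonal anomaly (here synthetic columns), `gbt_predict` returns an estimated absolute relative error per observation, which is the kind of quantity the service attaches to each surface, volume and fill-rate value.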
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: ORBIS: Earth Observation data service for NewSpace missions

#stac

Authors: Roman Bohovic, Jan Chytry
Affiliations: World from Space
In recent years, there has been a rapid expansion of satellite Earth Observation (EO) missions in the private sector through increasing affordability and accelerating adoption processes. The commercial EO market is expected to grow by 8% annually towards 2032. The NewSpace approach enabled cost and time savings through high failure tolerance, development agility and the use of off-the-shelf components, and led to serializing satellite production, including the EO instruments. The above trends are less noticeable in the midstream part of the value chain handling the immediate imaging results, i.e., calibration/validation (cal/val) activities, mission-to-use-case analysis, scalable data ingestion and low-level processing. At the time of writing this abstract, only five to six commercial companies were known to exclusively focus on these services while having a repeatable product. Nevertheless, neglecting these areas effectively decreases a mission's value. In response, World from Space (WFS) has been developing Orbis, a cloud-based service for complete optical satellite data management based on close provider-customer cooperation throughout the mission lifetime. The service is primarily focused on NewSpace missions and intends to offer an easily approachable, cost-effective and highly scalable solution. The first phase of the development spun off from the work on the Czech Ambitious Missions framework, which made it possible to collaboratively build knowledge and collect user needs. It then continued under a custom ESA project. The capabilities of the integrated Orbis prototype include both software and non-software (process-based/professional) services:
- Mission analysis: Early mission onboarding enables WFS to perform requirements and observation use-case analysis, consult on the mission design, and simulate data or performance.
- In-flight calibration/validation: It is possible to aid with the planning and assessment of analyses and perform experiments to adjust for imaging biases and errors, using appropriate reference data.
- Novel data ingestion and mission setup: Profiling the mission in the system and drawing from existing settings and heritage enables continuous API-based ingestion of data in various processing stages.
- Image (pre)processing: This includes in-house rectification of sensing errors; radiometric, geometric and atmospheric corrections; and higher-level processing operations to enhance usability and interoperability, possibly providing customers with ready-made EO intelligence.
- Quality assessment: Quality is thoroughly assessed during performance validation and on the processed imagery, and reported in the data products.
- Product distribution, storage and archiving: Human- or machine-friendly interfaces and several standardized formats.
- Data management services: An overview of the services and interaction with missions, instruments, data and third-party customers in a lightweight browser interface.
Several key properties were realized while implementing the Orbis prototype to tackle the challenge of a commercially viable EO data system. The system’s modular architecture is based on flexible and loosely coupled components, minimizing dependencies and promoting standardized data exchange. The system entities are hierarchically structured to include satellite constellations, missions, instruments, and acquisitions, together with their processing pipelines and catalogued results. Repeatability is secured by careful layering of the processing algorithms, creating a substrate for rapid prototyping, adjustments and re-contribution back to the system. To address the variable nature of EO data inputs, a rigid internal data model is defined, which is extensible by mission adapters. Distribution is based on the STAC standard.
For a NewSpace system, it is also important to balance quality and cost, i.e., high-end mission capabilities with best-effort processing for missions with lower-end equipment or constrained budgets. The platform can scale with mission data throughput and satellite capabilities, offering both premium and cost-effective EO product solutions. Lastly, the cloud-based model of Orbis means that its deployment, operations and upgrades can be fully managed by WFS, so that the mission is accompanied continuously throughout its lifetime. Orbis presents a technical response to the rapidly advancing EO domain in the NewSpace business, which requires (1) agnostic EO processing and data access expertise, (2) the ability to aid the technical conception of the customer, (3) quick use of modular entities, predefined interfaces and continuous deployment, and (4) flexible business models at various scales. In future development, it is planned to extend the services to advanced payload-related analyses, complete ground and in-orbit calibration/validation, deployment agnosticity, highly secure operation modes and on-board data processing. All these efforts make Orbis a relevant solution for contemporary and future space-based observation challenges.
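STAC-based distribution means that each catalogued acquisition is described by a small, standard JSON document. A minimal STAC Item might look like the sketch below; the identifiers, bounding box and asset URL are hypothetical examples, not Orbis's actual schema:

```python
import json
from datetime import datetime, timezone

def make_stac_item(item_id, bbox, acquired, asset_href):
    """Build a minimal STAC Item (version 1.0.0) for one acquisition."""
    west, south, east, north = bbox
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": item_id,
        "geometry": {  # footprint polygon matching the bounding box
            "type": "Polygon",
            "coordinates": [[[west, south], [east, south], [east, north],
                             [west, north], [west, south]]],
        },
        "bbox": list(bbox),
        "properties": {"datetime": acquired.strftime("%Y-%m-%dT%H:%M:%SZ")},
        "assets": {
            "data": {
                "href": asset_href,
                "type": "image/tiff; application=geotiff; profile=cloud-optimized",
            }
        },
        "links": [],
    }

# Hypothetical acquisition, serialized the way a STAC API would deliver it
item = make_stac_item(
    "demo-acq-001",
    (14.0, 49.5, 15.0, 50.5),
    datetime(2024, 6, 1, 10, 30, tzinfo=timezone.utc),
    "https://example.com/demo-acq-001.tif",
)
item_json = json.dumps(item, indent=2)
```

Because Items are plain JSON with a fixed vocabulary, any STAC-aware client can search and retrieve products without knowing anything mission-specific, which is exactly what makes the standard attractive for multi-mission distribution.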
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: FAO Essential Remote Sensing Data Product Portal for Agricultural Application Services

#stac #cog

Authors: Pengyu Hao, Zhongxin Chen, Karl Morteo, Ken Barron, Pedro MorenoIzquierdo, Valeria Scrilatti, Gianluca Franceschini, Battista Davide, Daniele Conversa, Carlo Cancellieri, Yohannis Bedane, Aya Elzahy, Ahmed AhmedHassan, Mohamed Megahed, Furkan Macit, Muhammad Asif, Noah Matovu
Affiliations: Food and Agriculture Organization of the United Nations
The increasing availability of satellite observations, combined with advancements in cloud-based computing and the improvement of models for calculating land surface parameters, has led to the development of a wide range of land surface products. However, these advancements have also introduced several challenges. First, because these products are generated by different teams, they often follow varying standards. For example, differences in definitions, algorithms, projections, tile systems, and scale factors can result in inconsistencies across datasets. Second, although accuracy assessments are typically conducted before data publication, significant disagreements often exist among products addressing the same topic. The lack of third-party evaluations further limits the usability of these satellite-derived products. Third, several platforms provide data visualization and processing functionalities, but a significant amount of high-quality data remains accessible only through data repositories. The absence of efficient search tools further restricts the practical application of these products. In this work, we propose a new data portal to address these limitations. First, we identified essential topics in the agricultural domain and selected global-level land surface data products derived from satellite observations. The products were primarily chosen based on whether they are consistent with FAO’s definitions; for data quality control, both accuracy assessments reported by the data providers and third-party evaluation reports from our data evaluation team are considered. Second, we are developing a data portal using SpatioTemporal Asset Catalogs (STAC) built on FAO’s geospatial data storage and service infrastructure (GIS Manager 2); all selected data are processed to a uniform tiling system and scale factors, and the file format is standardized as Cloud-Optimized GeoTIFF (COG).
This portal provides efficient data search and download functionalities, enabling users to access data for the geospatial extent and temporal range of their interest. Furthermore, case studies demonstrating data analysis applications are also included to promote the practical use of typical land surface products. The portal promotes high-quality satellite-derived land surface products and provides efficient toolkits for data users. It is currently available at https://data.review.fao.org/remote-sensing-portal, has been presented to FAO for feedback, and hosts data products on topics such as cropland mapping, leaf area index, net primary production, and land surface phenology. We are actively working on adding more datasets and improving the portal’s search and download functionalities, with the goal of formally launching it in 2025.
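Harmonizing scale factors is the kind of normalization this standardization implies: quantized integer rasters from different providers are converted to physical values with one convention. A minimal numpy sketch (the 0.0001 scale and -9999 fill value below are illustrative, not the portal's actual conventions):

```python
import numpy as np

def standardize_band(dn, scale_factor, nodata):
    """Convert quantized digital numbers to physical values.

    dn           : integer array as stored in the (cloud-optimized) GeoTIFF
    scale_factor : multiplier defined by the product (e.g. 0.0001 for reflectance)
    nodata       : fill value, mapped to NaN in the output
    """
    out = dn.astype("float32") * scale_factor
    out[dn == nodata] = np.nan
    return out
```

Applying the same conversion (and the same tiling) to every ingested product is what lets users combine datasets from different providers without tracking per-product conventions.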
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: CAELOSCOPE: added-value atmospheric products based on Sentinel-5P/TROPOMI measurements in Terrascope

Authors: Jeroen van Gent, Wenfu Sun, Isabelle De Smedt, Tijl Verhoelst, Steven Compernolle, Jonas Vlietinck, Nicolas Theys, Jonas Debosscher, Bavo Langerock, Mahesh Kumar Sha, Frederik Tack, Jurgen Everaerts, Bart Ooms, Michel Van Roozendael, Martine De Mazière, Jean-Christopher Lambert
Affiliations: BIRA-IASB, VITO
The Sentinel-5P/TROPOMI instrument is the first Copernicus sensor fully dedicated to measuring atmospheric composition. Since its launch in October 2017, it has provided excellent results that have led to numerous scientific papers on topics related to ozone, air quality and climate. Yet the potential use of TROPOMI data reaches beyond the direct scientific community. With support from ESA and the Belgian Science Policy Office (BELSPO), Belgium has established the Terrascope platform, hosted by VITO. This Belgian Copernicus Collaborative Ground Segment is a platform that enables easy access to and visualisation of Copernicus data for all societal sectors, the development and implementation of tailored or derived products based on Copernicus measurements, and the development of innovative tools. In recent years, BIRA-IASB has worked with VITO on implementing the first TROPOMI total column products in Terrascope, as well as a near-surface concentration product for nitrogen dioxide (NO₂). The BELSPO-funded CAELOSCOPE project (2024-2027) aims to take a further step towards (potential) Terrascope users by providing a range of new products related to near-surface concentrations of ozone and NO₂. This allows for direct comparison with in situ measurements and provides a more direct indicator of environmental health and air quality, and is therefore expected to be useful to user communities such as policy makers, the agricultural sector and health bodies. In addition, for the existing TROPOMI vertical column products already present in Terrascope, CAELOSCOPE will calculate multi-year trends, both at the local scale and as deviations of the local trend from the global mean value. As with all TROPOMI products in Terrascope, these data will be available for analysis through the Terrascope user environment and as Level-3 maps in the Terrascope Viewer.
In this way, users in the field of air quality and climate can quickly assess trace gas trends in their geospatial area of interest. This poster outlines the details of the new products to be implemented in Terrascope.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: ROCS: Extending Romania’s National Infrastructure within the European Collaborative Ground Segment

#zarr #stac #cog #cloud-native

Authors: Marian Neagul, Conf. Dr. Gabriel Iuhasz, Vasile Craciunescu, Prof. Dr. Florin Pop, Dr. Alina Radutu, Dr. George Suciu, Prof. Dr. Daniel
Affiliations: West University Of Timisoara
The ROCS project (2024-2027) aims to enhance Romania’s integration into the European Collaborative Ground Segment (COLGS). By expanding Romania's national infrastructure for the storage, processing, and dissemination of Earth Observation (EO) data, ROCS addresses critical challenges associated with the exponentially growing volume of satellite data from programs such as Copernicus. This initiative strengthens Romania’s role in the European space ecosystem and facilitates innovation and data-driven solutions to environmental, economic, and societal challenges. As one of the world’s most comprehensive EO programs, Copernicus generates daily data through its Sentinel satellites, complemented by in-situ observations and contributions from global missions such as Landsat. These datasets provide opportunities for monitoring Earth’s natural processes, but their massive volume creates data storage, access, and analysis challenges. Traditional data management systems are increasingly unable to meet the demands of real-time applications, multi-modal integration, and cross-domain collaboration. Initiatives such as the Copernicus Data Space Ecosystem (CDSE), Destination Earth (DestinE), and Digital Twin projects address these barriers through modern approaches that promote cloud-native architectures, open standards, and distributed processing. Adhering to these initiatives, ROCS leverages state-of-the-art solutions to enable efficient and scalable EO data exploitation. By building a federated infrastructure based on European collaborative efforts, the project seeks to bridge the gap between data availability and actionable insights for Romania’s specific needs.

Project Objectives and Scope
The main objective of ROCS is to establish a national EO data infrastructure that integrates seamlessly into the European Collaborative Ground Segment and supports Romania’s specific needs across multiple domains.
The project’s scope focuses on several key objectives. First, the analysis of operational requirements, through a broad assessment of operational needs in accordance with European and industry best practices; the underlying insight into requirements comes from engaging stakeholders across public institutions, academia and industry. Second, infrastructure and software design based on a federated, distributed architecture built on open-source tools, incorporating cloud-native formats and specifications such as STAC, COG, and Zarr, with environments optimized to support scalable data ingestion, indexing and processing. Third, the design and implementation of modular tools for both raster and vector data indexing, leveraging Open Geospatial Consortium (OGC) standards; the inclusion of an established, robust security framework with centralised authentication and authorisation mechanisms such as OIDC is also a core consideration. Finally, the rapid deployment of the platform (even in pre-operational phases) so that it can be validated in real-world case studies. To aid platform flexibility and operational efficiency, we will use containerised workspaces, which ensure secure data processing.

Technical and Scientific Contributions
The architecture of the ROCS platform comprises three distinct levels: Platform, Federation, and Application, ensuring a consistent ecosystem for EO data management and utilization. The platform level is responsible for data ingestion, storage and primary processing. It integrates existing EO datasets, such as Sentinel-1, -2, -3 and -5P, using cloud-native storage solutions such as MinIO. At the federation level, we aim to establish a network of distributed data centres with harmonised capabilities, enabling shared processing across institutions. We leverage Kubernetes for container orchestration and scheduling, ensuring high computational efficiency.
At the application level, we aim to provide tools for advanced analytics, visualization and development. End-users will be given access to intuitive dashboards facilitating real-time data access.

Advanced Processing and Accessibility
The project emphasizes the “bring the user to the data” paradigm by adopting cloud-native storage and processing frameworks. Data replication and indexing will occur periodically, ensuring the most up-to-date information is accessible. By leveraging Kubernetes and OGC APIs, the platform will enable dynamic task execution, facilitating access to complex EO analyses for users without advanced technical expertise.

Case Studies: Real-World Applications
ROCS’s functionality and potential impact will be demonstrated through five specific case studies, showcasing its utility and societal relevance:
Climate Change Adaptation: Integration with the RO-ADAPT platform will strengthen Romania’s capacity for climate resilience by providing EO-driven insights for vulnerable ecosystems. The use of Sentinel-2 and Sentinel-5P data will inform localized adaptation strategies aligned with international pledges and guidelines.
Agricultural Policy Compliance: In collaboration with the Ministry of Agriculture, ROCS will help automate crop classification processes, supporting compliance with new EU agricultural regulations. Sentinel-1 radar imagery and machine learning models may help identify issues associated with subsidy allocations.
Forestry Monitoring and Deforestation Control: The platform aims to support initiatives such as the EUDR regulation and national forest monitoring programs, providing tools to detect deforestation activities and monitor biodiversity using Sentinel-1 and Sentinel-2 data.
Education and Skills Development: By integrating Earth Observation data into university programs, ROCS will encourage the next generation of geospatial professionals.
Interactive tools like JupyterHub and Eclipse Che will enable hands-on learning, addressing gaps in EO data education and analytical skills.
High-Resolution Cloudless Mosaic: A seamless, cloud-free mosaic (or basemap) of the region will be generated using Sentinel-2 data. This foundational dataset will support applications including urban planning, disaster risk management, and environmental conservation.

Broader Impact and Relevance
The successful implementation of ROCS will position Romania as an important player within the European Collaborative Ground Segment. Beyond technical advancements, the project has significant societal implications:
Enhanced Policy Implementation: Supports national authorities in environmental monitoring, disaster response, and urban resilience planning, and aligns with European Green Deal objectives, supporting carbon neutrality and sustainable resource management.
Economic Growth: Catalyzes the development of geospatial services, opening new avenues for commercial applications in agriculture, forestry, and urban management.
Scientific Collaboration: Provides seamless access to EO data for researchers, fostering innovation and multidisciplinary studies.
Education and Awareness: Encourages data-driven learning and public engagement with EO technologies, ensuring long-term sustainability of knowledge transfer.

Alignment with the Living Planet Symposium Themes
The ROCS project follows the themes of the Living Planet Symposium by addressing the synergy between cutting-edge technology and real-world applications. By adopting cloud-native solutions and ensuring interoperability with European systems, ROCS demonstrates a commitment to advancing Earth science, bridging data with decision-making, and contributing to sustainability goals.

Conclusion
ROCS represents a transformative effort to advance Romania's EO data capabilities while enhancing its contributions to the European Collaborative Ground Segment.
Through its innovative architecture, real-world applications, and societal relevance, the project aligns seamlessly with the goals of the Living Planet Symposium. It highlights the potential of Earth observation to address critical challenges in climate resilience, agricultural sustainability, and environmental protection, fostering a future of informed decision-making and scientific excellence.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: A.01.04 - POSTER - Advancing Air Quality Monitoring from Space

Air pollution is a major concern throughout the world, in both developed and developing countries, especially in urban areas where over 50% of the world's population lives. Swelling urban populations and increased volumes of motorized traffic in cities have resulted in severe air pollution affecting the surrounding environment and human health. The European Environment Agency (EEA) estimates that 40-50% of air pollution in the most important European cities is attributable to vehicular emissions.

The investigation of air pollution over megacities by means of satellite observations has recently become a central topic of interest within the air pollution community, especially thanks to the observing capabilities of Sentinel-5P in terms of spatial resolution.

Nonetheless, space-borne platforms alone cannot provide a full picture. To investigate the spatio-temporal variability of air pollution on local, regional, and global scales, new tools are being developed. In this context, the detection, tracking and understanding of pollutant transport on various spatial scales are of both local and global interest. In rural and remote areas in particular, where no ground-based air pollution monitoring network is available, satellite data can provide an estimate of the regional distribution of pollutants, in order to assess the impact of specific events (e.g., biomass burning or dust storm outbreaks).

Satellites observe air pollution in the troposphere, and its relation to surface concentrations must first be resolved for air quality monitoring applications. This session is dedicated to presenting new algorithms and approaches for downscaling air quality satellite observations and to exploring novel assimilation methodologies that combine satellite retrievals with in-situ measurements and air quality modelling, considering all relevant satellite missions (e.g. Sentinel-5P), the future availability of hourly observations from Sentinel-4, and other future capabilities such as Sentinel-5 and CO2M.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Methane satellite detection in the landfill of Cordoba, Argentina

Authors: Lic. Cristhian Escobares, Lic. Silvana Palavecino, Ximena Porcasi, PhD Mariela Aguilera Sammaritano, PhD Fernanda García Ferreyra
Affiliations: CONAE - Instituto Gulich
In this study, we focus on the application of the wind rotation method for detecting methane emission sources using Sentinel-5P (TROPOMI), specifically the Level 2 CH4 version 2 product (S5P_L2_CH4_HiR.2). We focus on a known emitter: the landfill handling waste disposal for Córdoba and ten other cities in Argentina, a key site due to its potential for methane generation. Additionally, we conducted comparisons at two control sites with no significant methane emissions: the city centre of Córdoba, Argentina, and agricultural land in the Juárez Celman department, Córdoba, Argentina. These sites were selected to contrast detections in an urban environment and a rural area, where methane emissions are expected to be orders of magnitude lower than at the selected landfill. The analysis included an automated system for the qualitative detection of methane emissions at the preselected coordinates, incorporating the following steps: downloading TROPOMI data, preprocessing, wind rotation to the north, and generating the qualitative emission map. The only manual step involved obtaining wind direction data for the selected coordinates. Furthermore, TROPOMI’s temporal resolution was sacrificed to generate a single quarterly emission map at higher resolution, producing such maps for the four quarters of 2020. The results demonstrate that it is possible to detect methane emissions at the Córdoba sanitary landfill, and future work could potentially quantify these emissions. This approach offers a valuable tool for the environmental management of the landfill site, as it can easily be adapted and applied to other regions and contexts (by simply changing the source of wind direction data), thereby contributing to the improvement of greenhouse gas inventories and the mitigation of their impacts.
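The wind-rotation step can be expressed as a simple coordinate rotation: observations around the source are rotated so that the wind vector points north, which aligns all plumes along one axis before averaging. The sketch below is a generic illustration under assumed conventions (u, v as the vector toward which the air moves, in arbitrary consistent units), not the authors' exact implementation.

```python
import numpy as np

def rotate_to_north(x, y, u, v):
    """Rotate points (x, y) around the source so that the wind vector
    (u, v) points along +y (north). After rotation, plume pixels from
    different overpasses stack along the same downwind axis."""
    s = np.hypot(u, v)
    # This matrix maps (u, v) onto (0, s), i.e. wind onto north.
    R = np.array([[v, -u], [u, v]]) / s
    xr, yr = R @ np.vstack([x, y])
    return xr, yr

# Wind blowing toward the east: a point 1 km downwind (east of the source)
# ends up 1 km north of the source after rotation.
xr, yr = rotate_to_north(np.array([1.0]), np.array([0.0]), u=1.0, v=0.0)
print(round(xr[0], 6), round(yr[0], 6))  # 0.0 1.0
```

Averaging many rotated scenes then enhances the persistent downwind signal relative to noise, which is what makes the quarterly maps possible.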

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Assessing different Machine Learning techniques for extracting AOD and PM2.5 surface concentrations from satellite data over Europe

Authors: Panagiotis Fountoukidis, Dr Mariliza Koukouli, Professor Dimitrios Balis, Dr Diego Loyola
Affiliations: Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki, German Aerospace Center (DLR), Remote Sensing Technology Institute
Atmospheric aerosols are among the most dangerous pollutants in the atmosphere. They are responsible for several human health problems, such as respiratory issues, and they interact with incoming solar radiation, playing a crucial role in climate change. Monitoring aerosols in the atmosphere is therefore very important for the protection of both societies and the environment. In this work, we utilize the operational L1 (S5P_OFFL_L1B_RA and S5P_OFFL_L1B_IR) and L2 (S5P_NRTI_L2__O3, S5P_OFFL_L2__CLOUD and S5P_PAL__L2__TCWV) S5P/TROPOMI observations to extract AOD and PM2.5 surface concentrations over Europe via Machine Learning (ML) techniques. In addition, we use the CAMS ERA5 reanalysis data (boundary layer height, relative humidity and specific humidity). The S5P/TROPOMI and CAMS data are the features used to train the ML models, while AOD observations from AERONET and PM2.5 surface concentration measurements from the European Environment Agency (EEA) were used as the targets. Two different ML models, a Random Forest (RF) and a Neural Network (NN), were developed separately for each of the target parameters. The models were trained for several wavelengths in the UV and visible spectra, for the year 2021. The area of interest was Greece, where, for the AOD, three AERONET stations, two urban (Thessaloniki and Athens) and one rural (Finokalia), were used for the training, while for the PM2.5 surface concentrations we used four urban EEA stations in four of the largest Greek cities, namely Thessaloniki, Athens, Patra and Volos. In all cases, 80% of the data was used for training and the remaining 20% for testing. To evaluate the performance of the models, statistical metrics (MAE, MSE, IOA and R) were calculated. An evaluation of feature importance was also carried out to identify the best features for the models.
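The evaluation metrics named in the abstract (MAE, MSE, IOA and R) have standard definitions that can be sketched in a few lines of NumPy. The data arrays below are invented placeholders, and IOA is assumed to refer to Willmott's index of agreement, which is the common reading.

```python
import numpy as np

def mae(o, p):
    """Mean absolute error between observations o and predictions p."""
    return np.mean(np.abs(p - o))

def mse(o, p):
    """Mean squared error."""
    return np.mean((p - o) ** 2)

def ioa(o, p):
    """Willmott's index of agreement (1 = perfect agreement)."""
    om = o.mean()
    return 1.0 - np.sum((p - o) ** 2) / np.sum((np.abs(p - om) + np.abs(o - om)) ** 2)

def r(o, p):
    """Pearson correlation coefficient."""
    return np.corrcoef(o, p)[0, 1]

obs = np.array([0.10, 0.20, 0.30, 0.40])   # e.g. AERONET AOD (placeholder values)
pred = np.array([0.12, 0.18, 0.33, 0.39])  # model output (placeholder values)
print(mae(obs, pred), mse(obs, pred), ioa(obs, pred), r(obs, pred))
```

Computing all four on the held-out 20% test split gives a compact picture of both bias (MAE, MSE) and pattern agreement (IOA, R).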

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Advancing Air Quality Monitoring through Integrated Sentinel S5P and Ground Sensor Approaches

Authors: Dr. Ali Ahmad, Mr. Guillaume Dubrasquet-Duval, Mr. Thomas Olive, MR. Cyril Vidal, Mr. Nicolas Hussherr
Affiliations: Diginove
Keywords: air quality monitoring, satellite data, ground sensors, data fusion, high-resolution mapping

Air quality is a critical determinant of public health, environmental sustainability, and urban livability. The increasing concentration of air pollutants, including CO₂, NOₓ, and PM2.5, has been linked to severe respiratory and cardiovascular diseases, as well as broader ecological disruptions. Effective air quality monitoring is thus essential for understanding pollution dynamics, assessing population exposure, and implementing strategies to mitigate health risks and environmental damage [1, 2]. Traditional air quality monitoring methods rely on ground-based sensor networks, which provide accurate pollutant measurements but are often limited in spatial coverage [3]. This restricts their ability to capture fine-scale pollution variations across diverse geographic and urban settings. Satellite-based monitoring offers extensive regional and global coverage and is increasingly employed to track air quality. Sentinel-5P (S5P), part of the European Space Agency’s Copernicus program, is a leading tool in this domain, providing daily global observations of key air pollutants. However, its coarse spatial resolution, up to 5.5 km per pixel, limits its ability to identify localized pollution hotspots or assess human exposure at the neighborhood level [4]. This study addresses these limitations by integrating satellite monitoring with high-resolution ground-based sensor networks (25-meter resolution). As a first step, human activities contributing to pollution, such as traffic, industrial emissions and agricultural practices, will be identified and analyzed in correlation with Sentinel-5P data. This analysis will then be used to define the optimal locations for deploying ground-based sensors to monitor CO₂, NOₓ and PM2.5 concentrations with high accuracy, improving the ability to capture localized pollution gradients.
Subsequently, advanced data fusion techniques, including artificial intelligence (AI)-based models, will be employed to combine the ground sensor data with satellite observations. This fusion will be used to improve the spatial resolution of Sentinel-5P data, enabling air quality maps with a resolution approaching 25 meters per pixel. These high-resolution maps will facilitate detailed assessments of population exposure and a detailed understanding of pollution dynamics, leading to more accurate identification of high-risk areas. The outcomes of this research will provide policymakers, urban planners, and environmental regulators with actionable insights for managing air quality. This approach will enhance the ability to identify high-exposure areas, evaluate industrial compliance with environmental regulations, and implement targeted interventions, contributing to the development of zero-pollution cities. Furthermore, the approach provides a replicable framework for monitoring air quality and mitigating its impact on populations in other regions worldwide.
[1] Sokhi, Ranjeet S., et al. "Advances in air quality research – current and emerging challenges." Atmospheric Chemistry and Physics Discussions 2021 (2021): 1-133.
[2] Maggos, Thomas. "Advances in Air Quality Monitoring and Assessment." Applied Sciences 11.13 (2021): 5817.
[3] Jacquinot, Morgan, et al. "Spatial model for daily air quality high resolution estimation." Air Quality, Atmosphere & Health (2024): 1-10.
[4] Oxoli, Daniele, J. R. Cedeno Jimenez, and M. A. Brovelli. "Assessment of SENTINEL-5P performance for ground-level air quality monitoring: preparatory experiments over the COVID-19 lockdown period." The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 44 (2020): 111-116.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Time series of NOx point source emissions from one year of TROPOMI NO2 observations

Authors: PD Dr. Gerrit Kuhlmann, Erik Koene, Sandro Meier, Oscar Collado Lopez, Marc Guevara
Affiliations: Empa, Laboratory for Air Pollution/Environmental Technology, Remote Sensing Laboratories, University of Zurich, Barcelona Supercomputing Center
Nitrogen oxides (NOx = NO2 + NO) are important air pollutants emitted mainly by the combustion of fossil fuels. The Sentinel-5P/TROPOMI instrument provides daily maps of nitrogen dioxide (NO2) with sufficient resolution to resolve plumes from power plants and industrial facilities. Emissions from these point sources can be quantified using computationally inexpensive approaches such as the cross-sectional flux or the divergence method. We present time series of NOx emissions for more than 50 power plants and industrial facilities in Africa, Asia and Europe, retrieved from one year of TROPOMI NO2 observations. Emissions are quantified using the cross-sectional flux method and the divergence method, taking into account air mass factor correction, NO2 to NOx conversion and NOx lifetime. The methods are implemented in the Python library for data-driven emission quantification (ddeq) [1]. The dataset is used to evaluate the temporal variability of emissions, including days on which no emissions could be detected. We compute weekly and seasonal cycles by region and facility type. The measurement-based profiles are compared with profiles derived from economic statistics and with time profiles used by atmospheric transport models. Annual emissions are compared with bottom-up reports available from the CoCO2 and CORSO projects [e.g., 2]. Our study provides important data for monitoring emissions from satellite observations in preparation for the Copernicus CO2 Monitoring and Verification Support (CO2MVS) capacity. CO2MVS is expected to also assimilate NOx emission estimates, which can be used as a proxy for CO2 emissions, taking advantage of the higher temporal resolution of NO2 observations available from geostationary satellites such as GEMS, TEMPO and Sentinel-4.
References:
[1] Kuhlmann, G., Koene, E., Meier, S., Santaren, D., Broquet, G., Chevallier, F., Hakkarainen, J., Nurmela, J., Amorós, L., Tamminen, J., and Brunner, D.: The ddeq Python library for point source quantification from remote sensing images (version 1.0), Geosci. Model Dev., 17, 4773–4789, https://doi.org/10.5194/gmd-17-4773-2024, 2024.
[2] Guevara, M., Enciso, S., Tena, C., Jorba, O., Dellaert, S., Denier van der Gon, H., and Pérez García-Pando, C.: A global catalogue of CO2 emissions and co-emitted species from power plants, including high-resolution vertical and temporal profiles, Earth Syst. Sci. Data, 16, 337–373, https://doi.org/10.5194/essd-16-337-2024, 2024.
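The principle behind the cross-sectional flux method can be sketched in a few lines: background-subtracted NO2 columns are integrated along a transect perpendicular to the wind to obtain a line density, which multiplied by the wind speed gives the flux through that cross-section. This is a generic illustration with invented numbers, not the ddeq implementation (which additionally handles air mass factors, NO2-to-NOx conversion and lifetime decay).

```python
import numpy as np

# Hypothetical NO2 columns (mol/m^2) sampled every 1 km along a transect
# crossing the plume perpendicular to the wind; values are illustrative.
dx = 1000.0  # m between samples
columns = np.array([1.0, 1.0, 1.5, 3.0, 1.5, 1.0, 1.0]) * 1e-4
background = 1.0e-4  # mol/m^2, estimated from pixels outside the plume
wind_speed = 5.0     # m/s at the assumed plume height

# Line density (mol/m): background-subtracted columns integrated across the plume.
line_density = np.sum(columns - background) * dx

# Flux through the cross-section (mol/s).
flux = line_density * wind_speed
print(flux)  # ~1.5 mol/s for these illustrative numbers
```

Repeating this for transects at several downwind distances, and over many overpasses, yields the emission time series; a lifetime correction accounts for NOx lost to chemistry between source and transect.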

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Improvements in the Aerosol Layer Height retrievals from TROPOMI O2-A band measurements from surface albedo fitting and comparisons with EarthCARE aerosol extinction profiles.

Authors: Martin de Graaf, Dr Maarten Sneep, Dr Mark ter Linden, Dr Gijsbert Tilstra, Dr Pepijn Veefkind, Dr Dave Donovan, Ping Wang, Dr Gerd-Jan Zadelhoff
Affiliations: KNMI
Surface air quality monitoring from space is challenging due to the difficulty of separating signals from the ground and from the atmosphere. To relate satellite aerosol optical thickness (AOT) measurements (e.g. of smoke, dust or pollution) to surface concentrations of PM10 or PM2.5, it is essential to at least know the height of the aerosol layers. One such product with daily global coverage is the Aerosol Layer Height (ALH), from the Sentinel-5P/TROPOMI L2__AER_LH product. This product is based on an optimal estimation (OE) approach, fitting cloud-free measurements to synthetic reflectances in the strongest oxygen absorption band, provided by a neural network trained with high-resolution simulated reflectances. Since its release in 2019, the ALH has been continuously improved. The algorithm performed well over oceans but not over land surfaces, where the ALH was strongly biased towards the surface. The surface albedo is an important error source, especially over bright surfaces and for thin aerosol layers. To improve the retrieval over land, a surface albedo fitting routine was implemented, yielding greatly improved results. For the operational processor this required retraining the neural network, since the derivatives with respect to the fit parameters are needed in the optimal estimation routine. Therefore, the derivatives with respect to surface albedo at two wavelengths in the continuum (outside the O2-A band) were added to the algorithm's forward model. The result is improved ALH accuracy over land and a large increase in the number of successful retrievals, as quantified by comparisons with space-based aerosol profiles from CALIOP on CALIPSO. The current implementation improves retrievals over land, with about 1.5 times more converged results, and decreases land-ocean contrasts in the aerosol layer height retrievals.
The average difference with respect to CALIOP-weighted extinction heights decreased for selected cases from about −1.9 km to −0.9 km over land and from around −0.8 km to +0.1 km over ocean. The latest version of the S5P/TROPOMI ALH (version 2.8.0, including all these improvements and the surface fit) was released in November 2024. This version of the ALH can be used to relate TROPOMI AOT measurements to surface air quality parameters. The use of the TROPOMI ALH will be demonstrated using new comparisons with aerosol extinction profiles from the EarthCARE lidar ATLID. Validation against instruments such as ATLID is essential to complement the daily, global coverage provided by passive instruments such as TROPOMI.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Assessment of Sentinel-5P Data for Nitrogen Dioxide (NO₂) Concentrations in Selected Areas of European Coastal Zones

Authors: Bachelor's degree Oliwier Zając, Professor of the University Bogdan Zagajewski, Master's degree Patryk Grzybowski, PhD Marlena Kycko
Affiliations: Department of Geoinformatics, Cartography and Remote Sensing, Chair of Geomatics and Information Systems, Faculty of Geography and Regional Studies, University of Warsaw, Institute of Geophysics, Faculty of Physics, University of Warsaw
The research addresses the complex issue of assessing the spatial-temporal variability of nitrogen dioxide (NO₂) concentration in the European coastal zones. Since air pollution is an important matter related to human health and global warming, satellite monitoring of NO₂ is a key aspect in the analysis of greenhouse gas concentrations (Guo et al., 2022). The research was conducted on six research areas covering 100-kilometer buffers along the coastline (inland) of Poland (Baltic Sea), Romania (Black Sea), Italy and Spain (Mediterranean Sea), Portugal (Atlantic) and the Netherlands (North Sea). To compare pollution in the coastal zone to pollution in large cities, it was decided to add a 50 km buffer around the capital cities of selected countries. This study focused on Sentinel-5P data (covering a four-year research period from 01/01/2019 to 31/12/2022), Corine Land Cover maps (2018 data series) and ERA5-Land hourly data provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) at the 0.1° × 0.1° spatial resolution (Muñoz Sabater, 2019). Six meteorological variables were selected: 2m temperature, 2m dewpoint temperature, 10m U and V components of wind, total precipitation and surface solar radiation downwards. Statistical analyses were performed in the R programming environment. For geoprocessing tasks we used ArcGIS Pro and Quantum GIS. The highest NO₂ pollution was observed in the coastal zone of the Netherlands due to the high concentration of ports, industrial infrastructure and a dense road network. The most polluted month was February in 2019, when values exceeded 20 μmol/m² × 10⁵. In terms of pixels representing ground stations, the time series characteristics of NO₂ concentration showed a typical one-year fluctuation cycle (higher values during winter, lower during the summer). Furthermore, the winter period of 2021 was characterized by 74% higher NO₂ pollution than the summer period. 
In 2022 we observed an increase of 49% between these seasons, and 32% in 2020. We have developed maps showing the annual median of NO₂ concentration in each region of interest. Sentinel-5P data revealed higher NO₂ values in places with dense road networks and agglomerations of large cities. Interestingly, in the Tuscany region (Italy), we detected heavy pollution on two main highways from Florence to Pisa and Livorno. Similar trends were observed in the metropolitan areas of Rotterdam, Amsterdam, Rome and Barcelona. It was decided to apply spatial regression analysis to investigate statistically significant trends and changes in nitrogen dioxide pollution. Moreover, satellite data from 176 pixels, representing each station over four years (2019–2022), showed a high correlation (ρ = 0.75) with ground-level measurements. It was decided to aggregate values to monthly means and apply Spearman's rank correlation coefficient. The relationship between Sentinel-5P data and ground measurements for rural sites was ρ = 0.8. The lowest correlation was noted for urban stations (ρ = 0.68). Correlation analysis with meteorological components, such as temperature, solar radiation, wind speed, and precipitation, showed mostly negative relationships for monthly means. It was observed that for the most polluted areas of Spain, Portugal and the Netherlands, wind speed showed a statistically significant negative correlation with NO₂. In Romania, precipitation has been observed to have a similar effect on the self-purification of the atmosphere. The high spatial resolution of Sentinel-5P data allowed us to obtain information about the most polluted areas and land cover classes (Ialongo et al., 2020). The results showed that main emitters of NO₂ are anthropogenic areas, including: ports, construction sites, airports, industrial or commercial units (Corine Land Cover classes). Significant pollution increases in suburban areas of Warsaw, Gdansk, Bucharest and Rotterdam were noted. 
Short-term trend analysis showed that NO₂ concentration in southern Madrid has decreased significantly over the four years. Areas near capital cities were found to be more polluted than the offshore areas. The coastal zones of the Netherlands and Portugal were characterized by a decrease in NO₂ concentrations in 2020, which may be related to the limitation of road transport during the COVID-19 pandemic (Grzybowski et al., 2023). Visualisation methods (both charts and maps) were an important part of the research, and further results will be presented during the symposium. References: Guo, X., Zhang, Z., Cai, Z., Wang, L., Gu, Z., Xu, Y., Zhao, J., 2022. Analysis of the Spatial–Temporal Distribution Characteristics of NO2 and Their Influencing Factors in the Yangtze River Delta Based on Sentinel-5P Satellite Data. Atmosphere 13, 1923. https://doi.org/10.3390/atmos13111923. Muñoz Sabater, J., 2019. ERA5-Land hourly data from 1950 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). https://doi.org/10.24381/cds.e2161bac. Ialongo, I., Virta, H., Eskes, H., Hovila, J., Douros, J., 2020. Comparison of TROPOMI/Sentinel-5 Precursor NO2 observations with ground-based measurements in Helsinki. Atmospheric Measurement Techniques 13, 205–218. https://doi.org/10.5194/amt-13-205-2020. Grzybowski, P.T., Markowicz, K.M., Musiał, J.P., 2023. Estimations of the Ground-Level NO2 Concentrations Based on the Sentinel-5P NO2 Tropospheric Column Number Density Product. Remote Sensing 15, 378. https://doi.org/10.3390/rs15020378.
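The monthly-mean aggregation and Spearman rank correlation used above to compare Sentinel-5P pixels with ground stations can be sketched in a few lines. The series below are synthetic stand-ins, not the study's data, and the tie-free rank transform is a simplifying assumption:

```python
import numpy as np

def monthly_means(values, months):
    """Aggregate a series to its mean per month label."""
    labels = np.unique(months)
    return np.array([values[months == m].mean() for m in labels])

def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the ranks
    (assumes no tied values, which keeps the rank transform simple)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Synthetic "satellite" and "ground" series over four months
months = np.array([1, 1, 2, 2, 3, 3, 4, 4])
sat = np.array([10.0, 12.0, 6.0, 8.0, 3.0, 5.0, 7.0, 9.0])
gnd = np.array([9.0, 11.0, 7.0, 7.0, 2.0, 4.0, 8.0, 10.0])
rho = spearman_rho(monthly_means(sat, months), monthly_means(gnd, months))
```

Because Spearman's coefficient depends only on ranks, it is robust to the nonlinear scaling between column densities and surface concentrations, which is one reason it suits satellite-vs-ground comparisons.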
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Assessing the Impact of Crop Residue Burning on Air Quality Across the Indo-Gangetic Plains Using TROPOMI Satellite Observations and WRF-Chem Model Simulations

Authors: Steven Heer, Dr Dan Potts, Harjinder Sembhi, Dr Emma Ferranti, Dr Josh Vande Hey
Affiliations: School of Physics and Astronomy, University of Leicester, School of Engineering, University of Birmingham, Centre for Environmental Health and Sustainability, University of Leicester
Ambient air pollution exposure has emerged as the world’s foremost environmental health risk, estimated by the World Health Organisation to account for 4.2 million deaths in 2019. The Indo-Gangetic Plains (IGP), encompassing north and east India, along with parts of Bangladesh, Nepal and Pakistan, experiences increasingly frequent and severe pollution episodes, particularly during the post-monsoon winter months. This region has a population in excess of 400 million people and is home to 19 of the 20 most polluted cities globally, based on ambient PM₂.₅ concentrations as reported by the World Air Quality Report 2023. Pollution events in the IGP are largely driven by anthropogenic sources, including crop residue burning, residential and industrial emissions, and vehicular pollution; however, their individual contributions remain poorly quantified. Moreover, these pollution events are compounded by geographic and meteorological conditions that hinder pollutant dispersion. Identifying the primary sources and understanding the dynamics of these pollution episodes are critical for formulating effective policy measures to mitigate their health and environmental impacts. This study integrates satellite observations of carbon monoxide (CO) and nitrogen dioxide (NO₂) from the TROPOspheric Monitoring Instrument (TROPOMI) on board the Sentinel-5 Precursor satellite with high-resolution simulations from the Weather Research and Forecasting model coupled with chemistry (WRF-Chem). Trends in observed CO and NO₂ columns are analysed across the IGP for the period 2018–2023. We specifically focus on the impact of crop residue burning events, which are frequently implicated as the major driver of the region’s air quality degradation during the post-monsoon season. The analysis explores the temporal evolution of the frequency and duration of these burning events across the time series, with a focus on both inter- and intra-country variations. 
We compare the effectiveness of several global emission inventories to reconcile observed emissions during high pollution episodes, and further perform a WRF-Chem sensitivity analysis to determine the role different sectors play in the air quality degradation. This work aims to establish a methodological framework making use of the increasing accessibility of satellite-derived air quality data to investigate pollution dynamics. The findings provide critical insights for targeted interventions and policy recommendations to alleviate the severe pollution impacts in one of the world’s most affected regions.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: The impact of agricultural burning on air quality in Northern India: A 7-year assessment using TROPOMI Carbon Monoxide

Authors: Ivar Van Der Velde, Guido van der Werf, Ilse Aben
Affiliations: SRON Netherlands Institute For Space Research, Wageningen University
Agricultural residue burning in India, particularly in the Indo-Gangetic Plain, remains a significant contributor to post-monsoon air pollution, with far-reaching implications for public health. This study uses seven years of carbon monoxide (CO) data from the TROPOspheric Monitoring Instrument (TROPOMI) to analyze the spatiotemporal patterns of agricultural burning and its atmospheric impact across Northern India. We focus on the seasonal increase in CO concentrations primarily driven by crop residue burning in the states of Punjab and Haryana. The analysis integrates TROPOMI data with atmospheric transport modeling to improve the characterization of small-scale biomass burning using inverse emission estimation techniques. Traditional approaches for governmental fire monitoring and emission estimation often rely on satellite-based active fire detection and burned area data, but face limitations due to their coarse spatial resolution and the frequently hazy or cloudy conditions in this region. Additionally, the limited temporal coverage of existing satellite instruments, combined with the tendency for agricultural fires to be ignited later in the day, results in significant underestimation of fire activity. Our findings show that agricultural burning contributes more substantially to regional CO and aerosol burdens than previously estimated, with severe implications for urban air quality downwind. The situation may be exacerbated by unfavorable meteorological conditions, including stagnant winds and shallow boundary layers, which facilitate the accumulation of pollutants and prolong pollution episodes. These insights underscore the importance of satellite-based trace gas monitoring, which, when used alongside more traditional instruments and methodologies for fire detection, enhances the accuracy of emission estimates and helps guide policies aimed at reducing air pollution in India.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Monitoring Ground-Level Particulate Matter Concentrations From the Synergism of Space-Borne Measurements and Machine Learning Techniques

Authors: Dr Jean-Christophe Péré, Dr Johannes Staufer, Boris Gratadoux, Dr Juan Cuesta, Dr Gaëlle Dufour, Sylvain Tanguy, Laure Chaumat, Dr Laura Le Barbier
Affiliations: Thales, LISA - Laboratoire Inter-universitaire des Systèmes Atmosphériques, CNES - Centre national d’Etudes Spatiales
Comprehensive monitoring and mapping of ground-level air pollutants is essential for evaluating the population’s exposure and assessing the resulting health effects. Satellites provide a cost-effective way for area-wide pollution monitoring. However, air quality products from space-borne sensors provide only vertically integrated information on air pollutants in terms of optical properties and thus only partly reflect their concentrations at ground level. We present a novel approach, called FLORIA, to calculate the concentration of particulate matter with diameters less than 10 µm (PM10) at ground level using machine learning (ML) techniques. The developed tool is based on the CatBoost machine learning framework and cost-effectively emulates several years of high-resolution CHIMERE-WRF simulations. FLORIA is applicable to many space-borne sensors and ingests meteorological data and space-borne data of aerosol optical thickness (AOT) to provide maps of PM10 at ground level and at the spatial resolution of the satellite. First results using Sentinel-3 AOT data and meteorological fields from ERA-5 from September 2023 to August 2024 over France are encouraging. However, comparisons with in situ measurements from surface stations revealed an underestimation of the retrieved PM10 concentrations. This is probably attributable to the insufficient spatial coverage of a single satellite-derived dataset used as input. To overcome this difficulty, multi-sensor data fusion is used to build a more coherent and homogeneous representation of AOT over the region of interest. With a multitude of current sensors and the arrival of new missions dedicated to monitoring the chemical composition of the atmosphere (Sentinel-4, Sentinel-5, 3MI, CO2M/MAP, etc.), merging these datasets has the potential to optimize performance and improve coverage. We present results using AOT from Sentinel-3, MODIS as well as a combined Sentinel-3/MODIS product. 
The latter product has been derived using a technique developed by the Joint Research Centre (JRC), which uses a least-squares minimisation method based on Ångström's law describing the spectral behaviour of optical thickness. We discuss the performance of FLORIA based on these approaches to estimate surface-level PM10.
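A least-squares fit of Ångström's law, the core of the multi-sensor AOT fusion described above, can be illustrated as follows. The law τ(λ) = β·λ^(−α) becomes linear in log-log space, so a degree-1 polyfit recovers the Ångström exponent α; the wavelengths, AOD values and 550 nm reference below are illustrative assumptions, not the JRC implementation:

```python
import numpy as np

def fit_angstrom(wavelengths_nm, aod):
    """Least-squares fit of Angstrom's law  tau(lam) = beta * lam**(-alpha)
    in log-log space:  ln tau = ln beta - alpha * ln lam."""
    slope, intercept = np.polyfit(np.log(wavelengths_nm), np.log(aod), 1)
    return -slope, np.exp(intercept)  # (alpha, beta)

def merged_aod(wavelengths_nm, aod, target_nm=550.0):
    """Evaluate the fitted law at a common reference wavelength,
    merging AOD samples from different sensors/bands."""
    alpha, beta = fit_angstrom(wavelengths_nm, aod)
    return beta * target_nm ** (-alpha)

# Illustrative multi-band sample with exact Angstrom behaviour (alpha = 1.3)
lam = np.array([470.0, 550.0, 660.0, 870.0])
tau = 0.2 * (lam / 550.0) ** (-1.3)
```

Fitting in log space gives each band equal relative weight, which is the usual reason the spectral fusion is posed this way.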
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Estimation of pollutant emissions from remote sensing data and deep Learning

Authors: Gaëlle Dufour, Shen Liang, Sylvain Lobry, Maxim Eremenko, Adriana Coman, Arineh Cholakian, Guillaume Siour
Affiliations: Laboratoire Inter-universitaire des Systèmes Atmosphériques (LISA), Université Paris Cité, Université Paris-Est Créteil, Data Intelligence Institute of Paris (diiP) & LIPADE, Université Paris Cité, LIPADE, Université Paris Cité, LMD UMR CNRS 8539, ENS, École Polytechnique, Institut Pierre Simon Laplace (IPSL)
The 20th century’s rapid industrialization and urbanization led to a rise in anthropogenic pollutant emissions, making air pollution a major health risk. While developed countries addressed air quality issues starting in the 1990s, effective regulations in developing nations like China are more recent. Precise, high-resolution and long-term data on pollutant emissions are essential for accurately evaluating how changes in emissions affect atmospheric chemistry and climate, assessing the effectiveness of emission controls, and improving air quality management. However, current emission inventories rely heavily on national self-reporting and annual, aggregated statistics, which introduce uncertainties and do not allow rapid updates of inventories, limiting the efficiency of air quality forecasts. Advances in satellite remote sensing, including Sentinel-5P and the upcoming Sentinels 4 and 5, provide frequent and high-resolution data. Combined with data assimilation techniques, these satellite observations should allow for accurate daily-to-hourly emission estimates, addressing many limitations of traditional inventories. However, managing and analyzing the vast amount of satellite data poses its own challenges. To meet these needs, we are developing data-driven and physics-informed deep learning systems with the objective of enabling more precise, near-real-time emission estimations at high spatial and temporal resolutions. In particular, we model the problem as a pixel-based regression task and propose a method relying on UNet [1] and 4DVarNet [2]. We focus on nitrogen oxide (NOx) emissions, key to ozone and particulate matter formation, and the Chinese region, where recent regulations reduced emissions by over 20%. Based on the state-of-the-art CHIMERE chemistry-transport model, we use the UNet to emulate CHIMERE and embed it into the 4DVarNet for efficient inverse modeling. 
We also exploit unsupervised domain adaptation to address the data distribution shift between simulated and real datasets. Developments are carried out on synthetic observations, and the performance of the ML/DL models will be discussed. A first application to real S5P data will be presented. [1] Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18 (pp. 234-241). Springer International Publishing. [2] Beauchamp, M., Febvre, Q., Georgenthum, H., & Fablet, R. (2023). 4DVarNet-SSH: End-to-end learning of variational interpolation schemes for nadir and wide-swath satellite altimetry. Geoscientific Model Development, 16(8), 2119-2147.
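4DVarNet learns a variational inversion scheme end-to-end; as a minimal illustration of the variational cost it emulates, the sketch below minimizes the standard J(x) = ½(x−xb)ᵀB⁻¹(x−xb) + ½(Hx−y)ᵀR⁻¹(Hx−y) by plain gradient descent, with a small linear matrix standing in for the learned CHIMERE emulator. All matrices and sizes are toy assumptions, not the system in the abstract:

```python
import numpy as np

def var_inversion(xb, B, H, y, R, steps=500, lr=0.1):
    """Minimize the variational cost
        J(x) = 0.5*(x-xb)^T B^-1 (x-xb) + 0.5*(Hx-y)^T R^-1 (Hx-y)
    by gradient descent; grad J = B^-1 (x-xb) + H^T R^-1 (Hx-y)."""
    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
    x = xb.copy()
    for _ in range(steps):
        grad = Binv @ (x - xb) + H.T @ Rinv @ (H @ x - y)
        x -= lr * grad
    return x
```

For a linear H the minimizer is also available in closed form, x* = (B⁻¹ + HᵀR⁻¹H)⁻¹(B⁻¹xb + HᵀR⁻¹y), which is a convenient check on the iterative solver; a learned, nonlinear emulator has no such closed form, which is where the 4DVarNet approach earns its keep.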
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Advancing Satellite-Based NO₂ Monitoring With Machine Learning

Authors: Wenfu Sun, Dr. Frederik Tack, Lieven Clarisse, Dr. Michel Van Roozendael
Affiliations: Royal Belgian Institute For Space Aeronomy (BIRA‐IASB), Université Libre de Bruxelles (ULB)
Nitrogen dioxide (NO₂) is an important atmospheric pollutant that affects air quality, public health, and ecosystems. Satellite technology has advanced air quality monitoring by providing large-scale and long-term observations. In recent years, machine learning has emerged as a powerful tool to link surface NO₂ concentrations with different datasets, including satellite observations, to achieve efficient and accurate predictions. However, assessing the specific contributions of satellite data to different NO₂ level estimations requires further investigation. In addition, machine learning techniques have great potential to improve satellite retrievals, e.g., by producing high-resolution a-priori NO₂ profiles. In this presentation, we will demonstrate how machine learning can advance satellite-based air quality monitoring by addressing these two topics. Western Europe is used as the study area. The first part of the presentation focuses on estimating high-resolution surface NO₂ distributions (1 km resolution, daily) using the Boosting Ensemble Conformal Quantile Estimator (BEnCQE) (Sun et al., 2024). This model utilizes diverse datasets, including TROPOMI NO₂ tropospheric vertical column densities (TVCD), and achieves reliable performance as validated by European Environment Agency (EEA) surface observations (r = 0.80, R² = 0.64, RMSE = 8.08 µg/m³). By employing quantile regression, the BEnCQE model provides uncertainty estimates for each prediction and allows us to analyze the importance of features at different NO₂ level predictions. Our analysis shows that satellite observations contribute significantly to the estimation of general background NO₂ levels but seem to have less influence on high NO₂ concentrations. We speculate that this is due to the relatively coarse spatial resolution of current satellite data, which limits the ability to resolve localized NO₂ hotspots near emission sources. 
Furthermore, we find that noise in the TROPOMI NO₂ TVCD reduces its importance in the model estimation, causing the model to rely more on other features to maintain performance. These results highlight the need for satellite observations at higher resolution, such as the upcoming CO2M mission (2 km resolution), to improve monitoring of city-based air pollution at fine scales. The second part of the presentation addresses the challenge of generating high-resolution a-priori NO₂ profiles for satellite retrievals. Traditionally, chemistry transport models (CTMs) are used to generate such profiles, but these are computationally expensive and rely on high-resolution emission inventories. We developed Deep Atmospheric Chemistry NO₂ (DACNO₂), a convolutional neural network-based framework that simulates 3D daily NO₂ distributions from the surface to 5,000 meters at a spatial resolution of 2 km. The model has a flexible architecture that allows customized encoding paths to process different input datasets. Importantly, it employs a novel multi-constraint training strategy that combines coarse-resolution synthetic NO₂ data from the Copernicus Atmosphere Monitoring Service European air quality reanalyses (CAMS-EU, 10 km) with fine-scale EEA ground observations (2 km). This dual-constraint approach enhances the model's ability to capture detailed spatial gradients near emission hotspots while maintaining physical consistency at broader scales. Evaluation results demonstrate the ability of the model to produce reliable 3D NO₂ estimates as validated with ground observations (r = 0.81, R² = 0.64, RMSE = 5.10 µg/m³) and CAMS-EU synthetic NO₂ (r = 0.94, R² = 0.89, RMSE = 1.11 µg/m³). Case studies show that the high-resolution a-priori NO₂ profiles have improved sensitivity to emission hotspots and capture the detailed heterogeneous distribution of NO₂. 
The implementation of DACNO₂ is efficient, taking only minutes to compute daily averaged 3D NO₂ results at 2 km resolution using GPU acceleration. In addition, the model can be modified to provide a-priori profiles for a specific satellite overpass time. Overall, by bridging satellite observations with surface NO₂ estimation and providing an efficient method for generating high-resolution a-priori profiles, these studies highlight the promising potential of machine learning in satellite-based air quality monitoring. Reference: Sun, W., Tack, F., Clarisse, L., Schneider, R., Stavrakou, T., & Van Roozendael, M. (2024). Inferring surface NO₂ over Western Europe: A machine learning approach with uncertainty quantification. Journal of Geophysical Research: Atmospheres, 129, e2023JD040676. https://doi.org/10.1029/2023JD040676
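The conformal part of a conformal quantile estimator such as BEnCQE can be sketched independently of the underlying boosting model: raw lower/upper quantile predictions on a held-out calibration set yield conformity scores, whose (1−α)-quantile becomes a constant margin that widens future intervals to the desired coverage. Function names and data below are illustrative, not the published implementation:

```python
import numpy as np

def cqr_margin(lo_cal, hi_cal, y_cal, alpha=0.1):
    """Conformalized quantile regression calibration: the conformity score
    measures how far y falls outside [lo, hi]; the (1-alpha)-quantile of the
    scores (with finite-sample correction) becomes the widening margin."""
    scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)
    n = len(y_cal)
    k = int(np.ceil((n + 1) * (1 - alpha)))  # finite-sample rank
    return np.sort(scores)[min(k, n) - 1]

def cqr_interval(lo, hi, margin):
    """Apply the calibrated margin to a new raw quantile prediction."""
    return lo - margin, hi + margin
```

The margin can be negative, i.e. the procedure can also shrink over-wide raw intervals, which is part of what makes the resulting per-pixel uncertainty estimates informative.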
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Harmonized tropospheric NO2 column monitoring for LEO and GEO constellations

Authors: Sora Seo, Dr. Diego Loyola, Ronny Lutz, Klaus-Peter Heue, Pascal Hedelt, Dr. Vìctor Molina Garcìa, Pieter Valks, Jhoon Kim
Affiliations: German Aerospace Center (DLR), EUMETSAT, Yonsei University
Over the past few decades, the global distribution and trends of atmospheric NO₂ have been monitored from space by low-earth-orbiting (LEO) satellite instruments, such as the Global Ozone Monitoring Experiment (GOME), SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY), Ozone Monitoring Instrument (OMI), Global Ozone Monitoring Experiment-2 (GOME-2), and Tropospheric Monitoring Instrument (TROPOMI). These LEO measurements have significantly enhanced our understanding of global tropospheric NO₂ levels and their long-term trends, but provide observations only once per day at fixed local times. To address this limitation, a constellation of geostationary (GEO) satellite sensors is being deployed, including the Geostationary Environment Monitoring Spectrometer (GEMS) over Asia, Tropospheric Emissions: Monitoring of Pollution (TEMPO) over North America, and two Sentinel-4 (S4) missions over Europe. Although the GEO monitoring system has limited spatial coverage, it provides multiple observations per day, enabling detailed monitoring of diurnal variations in NO₂ due to changes in emissions and chemical reactions throughout the day. LEO (global daily monitoring) and GEO (regional hourly monitoring) thus each have distinct advantages and limitations, and by complementing each other and combining their strengths, a synergetic effect can be achieved in monitoring atmospheric composition, including NO₂. However, obtaining consistent data on total, stratospheric and tropospheric NO₂ columns from LEO and GEO satellite missions remains challenging since their current operational algorithms use different retrieval approaches (e.g. stratosphere-troposphere separation, cloud corrections) and auxiliary datasets. In this study, we aim to develop a harmonized retrieval system for tropospheric NO₂ columns applicable to both LEO and GEO satellite measurements. 
The DLR NO₂ retrieval algorithm for LEO and GEO constellations (NO2_LGC) consists of three key steps: (1) the spectral retrieval of total NO₂ slant columns, (2) the separation of slant columns into stratospheric and tropospheric contributions using the Copernicus Atmosphere Monitoring Service (CAMS) forecast model data, and (3) the conversion of tropospheric slant columns into vertical columns using air mass factors (AMFs). Based on Seo et al. (2024), a consistent retrieval approach is applied across current LEO (e.g. TROPOMI) and GEO (e.g. GEMS and TEMPO) observations, using the same forecast model, cloud corrections, and auxiliary data. This harmonized retrieval algorithm for the LEO+GEO constellation provides more consistent NO₂ data records from TROPOMI, GEMS and TEMPO and will be extended to future GEO and LEO missions including Sentinel-4 and Sentinel-5. In addition, we demonstrate the advantages of harmonized LEO and GEO NO2_LGC retrievals, particularly for analyzing NO₂ in regions with high pollution levels and downwind pollution outflows.
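In simplified scalar form, steps (2) and (3) of such a retrieval chain reduce to subtracting the stratospheric slant column (a model stratospheric vertical column times its AMF) and dividing the remainder by a tropospheric AMF. The sketch below uses arbitrary unit-free numbers and is a generic illustration, not the NO2_LGC code:

```python
def tropospheric_vcd(scd_total, vcd_strat, amf_strat, amf_trop):
    """Generic slant-to-vertical column conversion: remove the stratospheric
    slant column (stratospheric VCD, e.g. from a CAMS forecast, times its air
    mass factor), then divide by the tropospheric AMF."""
    scd_trop = scd_total - vcd_strat * amf_strat  # stratosphere-troposphere separation
    return scd_trop / amf_trop                    # SCD_trop -> VCD_trop
```

Harmonization across LEO and GEO sensors amounts to feeding this chain the same stratospheric model field and the same AMF inputs (surface albedo, a-priori profiles, cloud parameters) for every instrument.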
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: The Path to Sentinel-5 Operations: Products, Calibration and Validation, Monitoring, and Data Processing Systems

Authors: Philipp Köhler, Rasmus Lindstrot, Nan Hao, Christopher Diekmann, Yang Wang, Gabriele Poli, Catherine Hayer, Christopher Lee, Rosemary Munro, Bojan Bojkov
Affiliations: Eumetsat, HAMTEC Consulting
EUMETSAT will operate the Copernicus Sentinel-5 imaging spectrometer, which is hosted on the EUMETSAT Polar System - Second Generation (EPS-SG) A satellites. The first satellite in this series is scheduled to launch in the second half of 2025. Developed by Airbus Defence and Space under an ESA contract, Sentinel-5 is designed to monitor atmospheric trace gases - such as ozone, nitrogen dioxide, sulfur dioxide, formaldehyde, glyoxal, methane, and carbon monoxide - as well as aerosol and cloud properties. It provides high spatial resolution and near-daily global coverage, which is vital for tracking atmospheric composition and serves as a key input to the Copernicus Atmosphere Monitoring Service (CAMS). This presentation will provide an overview of the Sentinel-5 instrument and its products, along with the latest updates on the status of the ground segment developments as well as the preparation of offline processing systems for the analysis of in-orbit calibration measurements. We will also discuss the progress of EUMETSAT's data processing and monitoring facility, which is being prepared for commissioning and routine operations. This includes activities for the preparation of the calibration and validation (Cal/Val) of operational atmospheric chemistry products, performed centrally at EUMETSAT as well as with support from the scientific community.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Integration of satellite and land-based measurements for the characterization of aerosol in an urban and rural region

Authors: Valentina Terenzi, Patrizio Tratzi, Valerio Paolini, Cristiana Bassani
Affiliations: Italian National Research Council
According to the European Environment Agency, air quality in Europe is improving, but air pollution is still the largest environmental health risk, exceeding the European Union (EU) air quality standards in specific hotspots. Aerosol is generally defined as a mixture of solid particles or liquid droplets suspended in a gaseous medium, generally air. Exposure to aerosol has been associated with health effects such as premature death, heart disease, stroke, diabetes, chronic obstructive pulmonary disease, lung cancer and asthma. In addition, aerosols are also responsible for environmental and climate effects. Accurate knowledge of aerosol is necessary to evaluate its spatial-temporal variability and assess exposures and effects on the environment and human health. Air quality monitoring stations are widely employed in Italy. However, these in-situ stations are not enough to provide a detailed and wide characterization of the spatial distribution, especially outside urban conglomerates, as they are limited to specific locations. Earth Observation products overcome these limits, providing the full spatial distribution of aerosol columnar properties with multi-year temporal coverage through several datasets available from satellite sensors. The most widespread is the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Earth Observing System (EOS) Terra and Aqua satellites. In particular, the well-known aerosol optical depth (AOD) is a suitable parameter to investigate the spatial-temporal distribution of aerosols at different scales from space, and MODIS AOD is retrieved by applying independent algorithms to the data. The Multiangle Implementation of Atmospheric Correction (MAIAC) is the latest algorithm, providing products at 1 km spatial resolution. The validation of satellite products is a required step to ensure the robustness of the dataset that will then be used in subsequent studies. 
This validation is usually performed by matching up satellite data with ground-based AERONET (Aerosol Robotic Network) data. This study aims to detect aerosols in Rome and its surroundings for the first time by using simultaneous MAIAC and AERONET products. The daily MAIAC products are used to study the aerosol spatial-temporal variability in Rome and over an urban-rural transition zone without atmospheric monitoring stations except the Liberti Observatory, 40 km northeast of Rome. Daily AOD MAIAC data were validated against measurements from the AERONET station at the CNR-ISAC observatory, located in the southeastern part of the city. The validation aimed to identify potential geometric and temporal constraints. The study also examined the temporal and spatial variation of AOD in the urban-rural transition zone along the southern Tiber Valley, a region where anthropogenic, rural, and natural environments closely coexist. Temporal trends of AOD were mapped across Rome and its surrounding areas to provide a comprehensive view. Lastly, the Ångström exponent was analyzed across spatial and temporal scales to assess aerosol particle size distribution in the region. The validation of MAIAC products for the study region revealed that accuracy varies with the satellite's geometric configuration, with the magnitude of the influence depending on angle type. The influence of different angle combinations is being explored. Moreover, the validation showed a tendency of MAIAC to underestimate AOD values, particularly in autumn and winter months. The relation with solar radiation and climatic parameters will be further discussed. Regional maps and AOD-Ångström exponent (AOD/AE) plots were developed to highlight trends and differences between Rome and its surrounding suburban areas. These analyses identified the primary sources of particulate matter by region and season.
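The match-up logic behind such a validation, pairing each satellite overpass with ground observations inside a time window, can be sketched as follows. The ±30 min window is a common convention in AOD validation, not necessarily the one used in this study, and the data are synthetic:

```python
import numpy as np

def collocate(sat_times, sat_aod, gnd_times, gnd_aod, window_min=30.0):
    """Pair each satellite overpass (time in minutes, AOD averaged over the
    site) with the mean of ground observations within +/- window_min minutes;
    overpasses with no ground data in the window are dropped."""
    gnd_times = np.asarray(gnd_times, float)
    gnd_aod = np.asarray(gnd_aod, float)
    pairs = []
    for t, a in zip(sat_times, sat_aod):
        mask = np.abs(gnd_times - t) <= window_min
        if mask.any():
            pairs.append((a, float(gnd_aod[mask].mean())))
    return pairs
```

Averaging the satellite pixels spatially and the sun-photometer readings temporally trades the two sampling dimensions against each other, which is the usual rationale for this kind of spatio-temporal match-up.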
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Data fusion for advanced air quality monitoring in undersampled desert areas

Authors: Chrysoula Papathanasiou, Eleni Drakaki, Vasilis Amiridis, Chrysovalantis Tsiakos, Georgios Tsimiklis, Angelos Amditis
Affiliations: Institute Of Communication And Computer Systems (iccs), National Observatory of Athens
Global climate change seems to have a direct impact on desert ecosystems, as their structure, function and ecosystem services are expected to be strongly affected by rising temperature, decreasing rainfall and increasing atmospheric CO2. At the same time, the variable local meteorological conditions that often develop over desert ecosystems may sometimes result in “greener deserts”, as increased temperature drives higher evaporation rates over the sea, with the resulting precipitation falling onto dry land. As a result, desert-adapted species, including endemic plants and animals, become particularly vulnerable due to the loss of suitable habitat, and desert ecosystems overall are expected to be reshaped. Further to that, the effects of climate change on desert areas have a direct impact on neighboring areas as well, as dust travels thousands of kilometers, crossing countries and continents, carrying pollutants and eventually depositing particles far away from their origin. The Mediterranean basin is increasingly often affected by desert dust outbreaks from Northern Africa, with the Eastern Mediterranean Region (EMR) being typically affected by three distinct sand and dust sources in the Middle East, Northern Africa, and the Sahel. Such particulate matter (PM), combined with pollutants related to intense human activity, greatly deteriorates air quality over the wider area affected by all pollutant sources. Therefore, efficient monitoring of air quality and PM over desert areas becomes an issue of high priority, not only for the desert areas per se but also for neighboring and even more distant areas. Especially for specific parameters, such as temperature and relative humidity, ground-based observational networks are typically more reliable and can support efficient monitoring. 
However, desert areas are typically undersampled due to harsh local conditions that do not favor the operation of local networks; hence monitoring is performed, if at all, using satellite-derived datasets. Such datasets are inevitably associated with large biases, thus complicating performance testing and calibration of numerical models. Efficient monitoring of air quality in undersampled, hard-to-reach desert areas is further investigated in CiROCCO, an EU-funded research Project that aims to establish an integrated sensing system coupling cost-effective in-situ sensing nodes with data fusion, remote sensing and assimilation modelling techniques exploiting satellite products. More specifically, low-cost sensing nodes that monitor air quality have been designed and manufactured and will operate along with higher-quality mid-cost stations. In-situ monitoring will be complemented by datasets retrieved from on-site monitoring campaigns using portable Aerosol Optical Depth (AOD) sunphotometers. Moreover, reference-grade optical particle counters will allow for periodic recalibration of sensors, compensating for seasonal variations and temporal drifts in sensor performance. The collected data are fused with satellite data and are further calibrated. In terms of communication, data collection takes place using local Wireless Sensor Networks, while satellite communications are also exploited for data retrieval from edge devices. Within the Project, satellite modelling is implemented and assimilation algorithms, also incorporating datasets from in-situ sensors, are developed. Aerosol assimilation involves integrating observational data into atmospheric model forecasts of aerosol fields. 
The process developed specifically for the assimilation of desert dust field observations in the CiROCCO Project aims to enhance atmospheric models, leading to improved air quality monitoring and a deeper understanding of desert dust spatial-temporal distribution and environmental impacts. In particular, two downstream services that exploit data fusion between EO and in-situ measurements are being developed: one related to air quality forecasting focused on dust impacts on vulnerable areas, and one related to the establishment of a database of dust deposition fields, specifically designed for biochemical studies. The analysed datasets support the development of different services that address the needs of local communities across four Pilots: three desert areas (in Egypt, Spain and Serbia) and a Mediterranean Pilot site affected by frequent and sometimes severe Desert Dust Storms (DDS) (in Cyprus). Acknowledgements. This research work has been supported by the EU-funded programme CiROCCO under Grant Agreement No 101086497.
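The periodic recalibration of low-cost sensors against reference-grade counters mentioned above typically amounts to refitting a gain and offset by least squares. A minimal sketch, with synthetic readings and hypothetical function names (the CiROCCO calibration chain is more elaborate):

```python
import numpy as np

def fit_calibration(raw, reference):
    """Least-squares gain and offset so that gain*raw + offset best matches
    the co-located reference-grade instrument readings."""
    gain, offset = np.polyfit(raw, reference, 1)
    return gain, offset

def apply_calibration(raw, gain, offset):
    """Correct subsequent raw readings with the fitted coefficients."""
    return gain * np.asarray(raw, float) + offset
```

Refitting the pair (gain, offset) at each co-location campaign is what compensates for the seasonal variations and temporal drifts the abstract refers to.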

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: A Bayesian inversion for atmospheric ammonia emissions

Authors: Simeng Li, Dr. Enrico Dammers, Dr. Roy Wichink Kruit, Dr. Shelley van der Graaf, Dr. Jan Willem Erisman, Dr. Martijn Schaap
Affiliations: Institute of Environmental Sciences (CML), Universiteit Leiden, Air Quality and Emissions Research, Netherlands Organisation for Applied Scientific Research (TNO), Rijksinstituut voor Volksgezondheid en Milieu, RIVM
Ammonia (NH3) is an important component of total reactive nitrogen and the most important alkaline inorganic gas in the atmosphere. With the increase in fertilizer application due to industrial production and the intensification of agricultural production, large amounts of NH3 are emitted into the atmosphere at an increasing pace. These emissions pose a threat to both human and ecosystem health and significantly impact climate change. Although many studies have addressed the atmospheric nitrogen budget, large uncertainties still exist concerning the emissions, distribution and deposition of NH3. So-called top-down, or inversion, methods, in which information is extracted directly from measurement data, provide one way to better quantify and monitor emissions of atmospheric species and may help to reduce uncertainties in the nitrogen budget. In application, Bayesian optimization is flexible and can take into account errors in the prior information, the model, and the observations. In this study, we have conducted statistical inversions based on Bayes' theorem with satellite data and ground-based measurements, using the current high-resolution emission inventory CAMS-REG as the prior emission dataset and the state-of-the-art air quality model LOTOS-EUROS as the forward model. We have inverted ammonia emissions for multiple years and analyzed the emission trends. Furthermore, we have inverted the monthly time profile of ammonia emissions. In addition, we have analyzed the degrees of freedom for signal (DFS) to assess data utilization and efficiency in the inversions. Based on the inversion results, we identify the observational requirements and propose possible cost-effective monitoring network designs for determining local-scale ammonia emissions.
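The linear-Gaussian Bayesian update at the heart of such inversions can be sketched in a few lines (a generic textbook form with toy one-dimensional matrices; the study's actual CAMS-REG prior and LOTOS-EUROS forward model are replaced here by an abstract prior vector and observation operator):

```python
import numpy as np

def bayesian_update(x_prior, B, H, y, R):
    """Posterior mean/covariance and degrees of freedom for signal (DFS)
    for y = H x + noise, with prior covariance B and observation covariance R."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    A = K @ H                                      # averaging kernel
    x_post = x_prior + K @ (y - H @ x_prior)
    B_post = (np.eye(len(x_prior)) - A) @ B
    return x_post, B_post, np.trace(A)

# One emission grid cell, one observation at twice the prior, equal error
# variances: the posterior sits halfway between prior and observation.
x_post, B_post, dfs = bayesian_update(
    np.array([1.0]), np.array([[1.0]]), np.array([[1.0]]),
    np.array([2.0]), np.array([[1.0]]),
)
```

The trace of the averaging kernel is the DFS quantity the abstract uses to judge how much independent information the observations contribute per state element.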

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Leveraging Earth Observation for Urban Air Quality Monitoring: Case Study of Canton Sarajevo

Authors: Fabien Castel, Camille Lainé, Alexander Stepanov, Zelie Marcais, Raghavan Narayanan
Affiliations: Murmuration, EBRD
Urban air quality is a critical factor in ensuring the health, sustainability, and resilience of cities worldwide. Canton Sarajevo, Bosnia and Herzegovina, like many rapidly urbanizing centres, faces a growing air quality crisis that threatens public health, the environment, and its long-term sustainability. The city's air is often thick with pollutants, particularly during the winter months when emissions from traffic, heating systems, and industrial activities reach dangerous levels. With growing urbanization, the need for effective tools to monitor and improve air quality while assessing the impacts of infrastructure changes has become increasingly urgent. This study presents an innovative approach to leveraging Earth Observation (EO) technologies for urban air quality monitoring, using Sarajevo as a pilot. Over the last five years, the Canton has been implementing a series of actions to improve the quality and availability of public transport and, in turn, reduce air pollution. The project was conducted under the framework of the Global Development Assistance – Fast EO Co-Financing Facility (GDA FFF), a collaborative initiative by the European Space Agency (ESA) and key partners. The initiative is designed to rapidly demonstrate the utility of EO data in addressing socio-economic and environmental challenges in urban settings. In Sarajevo, the project aimed to evaluate the effects of new public transport infrastructure on air pollution and greenhouse gas emissions, providing actionable insights for sustainable urban development. To achieve this, the study utilized air quality data from the Copernicus Atmosphere Monitoring Service (CAMS) and local ground-based measurements, focusing on key pollutants such as nitrogen dioxide (NO₂), particulate matter (PM10 and PM2.5), carbon monoxide (CO), ozone (O₃) and sulphur dioxide (SO₂). 
Advanced machine learning models, including gradient boosting and random forests, were employed to downscale coarse-resolution satellite data to a fine spatial resolution of 1-km, providing highly detailed insights into air quality dynamics across the city. Auxiliary datasets, such as meteorological data, road network density, and population distribution, were integrated to enhance the predictive power and contextual relevance of the models. Key findings revealed complex interactions between urban development and air quality. Notably, short-term increases in particulate matter (PM) levels were linked to construction activities, while long-term trends in NO₂ levels suggested limited efficacy of new public transport infrastructure in reducing vehicle emissions, necessitating further interventions and broader datasets for causality validation. Seasonal and meteorological influences emerged as significant factors in the observed air quality patterns. The project also aimed to empower local stakeholders through a user-friendly, real-time dashboard that visualizes air quality data and correlates it with transport infrastructure changes. The dashboard offers dynamic insights into pollutant levels, trends, and potential hotspots, enabling urban planners and policymakers to make evidence-based decisions. By providing localized and actionable information, the dashboard serves as a critical tool for capacity-building among Sarajevo’s municipal authorities, fostering a culture of data-driven governance. One of the key takeaways from this project is the value of combining satellite data with ground-based measurements and auxiliary datasets. While EO products provide comprehensive spatial coverage and consistency, ground sensors offer localized accuracy, especially in areas with high pollution variability. By harmonizing these datasets, the project achieved a level of precision and reliability that would not be possible with either source alone. 
This hybrid approach also enabled the identification of data gaps and biases, such as the uneven distribution of ground sensors across different urban zones. Machine learning played a pivotal role in this project, enabling the extraction of actionable insights from complex and multidimensional datasets. Techniques such as gradient boosting and random forests were particularly effective in capturing non-linear relationships between pollutants and their driving factors. For example, variables such as wind speed, road density, and population distribution emerged as significant predictors of pollutant concentrations. The use of explainable AI techniques, such as SHAP (SHapley Additive exPlanations), further enhanced the interpretability of the models, allowing stakeholders to understand the relative importance of different factors in shaping air quality outcomes. Despite its achievements, the project faced several challenges that underscore the complexities of urban air quality monitoring. Limited availability of long-term and high-quality ground-based measurements in Sarajevo constrained the validation of satellite-derived models. Furthermore, the inherent variability in urban pollutant sources, from vehicular emissions to industrial activities and natural phenomena, necessitates more comprehensive datasets for robust causal analyses. Addressing these challenges requires continued investment in sensor networks, data integration platforms, and advanced modelling capabilities. The project underscores the value of the integration of EO data with advanced modelling techniques in addressing urban environmental challenges, offering scalable tools and methodologies potentially applicable to other cities and beyond. As urbanization accelerates and environmental challenges intensify, the role of EO technologies in supporting sustainable development will become increasingly vital. 
The Sarajevo case study exemplifies how innovative applications of EO data can empower cities to achieve their sustainability goals, paving the way for cleaner, healthier, and more resilient urban futures.
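The random-forest downscaling step described above can be illustrated with a minimal sketch (synthetic data and an assumed feature set; the project's actual predictors, tuning, and validation are not reproduced here): the model learns the relation between the coarse CAMS concentration plus auxiliary variables and fine-scale values, then predicts on the 1 km grid.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
cams_no2 = rng.uniform(10, 60, n)      # coarse-cell NO2 (ug/m3)
road_density = rng.uniform(0, 1, n)    # normalised auxiliary predictors
population = rng.uniform(0, 1, n)
wind_speed = rng.uniform(0.5, 8, n)

# Synthetic "truth": local enhancement over the coarse value, ventilated by wind.
fine_no2 = cams_no2 * (1 + 0.4 * road_density + 0.3 * population) / (1 + 0.1 * wind_speed)

X = np.column_stack([cams_no2, road_density, population, wind_speed])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, fine_no2)

# Predict at one unsampled 1 km cell (coarse NO2 = 40, dense roads, light wind).
cell = np.array([[40.0, 0.8, 0.6, 2.0]])
prediction = model.predict(cell)[0]
```

In practice a held-out station set, not the training data, would be used to judge skill, and explainability tools such as SHAP (as the abstract notes) can rank the predictors.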

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: High-Resolution Observations of NO2 and CO2 Emission Plumes From EnMAP Satellite Measurements

Authors: Christian Borger, Steffen Beirle, Andre Butz, Leonie Olivia Scheidweiler, Thomas Wagner
Affiliations: Max Planck Institute For Chemistry, Institute of Environmental Physics, Heidelberg University, Heidelberg Center for the Environment (HCE), Heidelberg University, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University
Accurate emission quantification from anthropogenic sources is essential for monitoring greenhouse gases and air pollution. Nitrogen oxides (NOx) and carbon dioxide (CO2) are particularly important due to their roles in atmospheric chemistry and global warming. While dedicated satellite observations enable independent monitoring on a global scale, their spatial resolution – typically several kilometres – remains insufficient for resolving localized emission sources like power plants. Additionally, high CO2 background levels weaken emission signals, making NO2 observations a widely used proxy for quantifying CO2 emissions. However, to date, no satellite mission has been capable of simultaneously detecting NO2 and CO2 in emission plumes. Although not specifically designed for trace gas monitoring, the Environmental Mapping and Analysis Program (EnMAP) satellite offers a unique opportunity for analyzing these pollutants, even with limited spectral resolution. In this study, we apply traditional retrieval techniques to EnMAP measurements and present the first-ever simultaneous satellite observations of NO2 and CO2 at unprecedented spatial resolution (i.e., a few tens of metres). These observations reveal emission plumes extending several kilometres downwind from their source. Using case studies from Riyadh (Saudi Arabia) and Highveld (South Africa), we showcase the feasibility of estimating CO2 and NOx emissions, analyzing plume chemistry, and deriving NOx/CO2 ratios indicative of power plant characteristics. Our findings highlight EnMAP's potential to complement traditional satellite missions, bridging gaps in emission monitoring and providing critical new insights into localized pollutant sources.
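The NOx/CO2 ratio estimation can be illustrated with a toy calculation (the `plume_ratio` helper and all pixel values are invented for the example, not the authors' retrieval): subtract backgrounds estimated from out-of-plume pixels, then regress the NO2 enhancements against the CO2 enhancements through the origin.

```python
import numpy as np

def plume_ratio(no2, co2, in_plume):
    """Slope of delta-NO2 vs delta-CO2 over plume pixels; backgrounds are the
    means of the out-of-plume pixels, slope by least squares through the origin."""
    d_no2 = no2[in_plume] - no2[~in_plume].mean()
    d_co2 = co2[in_plume] - co2[~in_plume].mean()
    return (d_no2 @ d_co2) / (d_co2 @ d_co2)

# Synthetic scene: three background pixels and three plume pixels whose
# NO2 enhancement is half the CO2 enhancement (arbitrary units).
co2 = np.array([400.0, 400.0, 400.0, 405.0, 410.0, 420.0])
no2 = np.array([1.0, 1.0, 1.0, 3.5, 6.0, 11.0])
in_plume = np.array([False, False, False, True, True, True])
ratio = plume_ratio(no2, co2, in_plume)
```

With noiseless data the recovered slope is exactly the built-in enhancement ratio; on real scenes the plume mask and background estimate dominate the uncertainty.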

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Evaluating High-Resolution Simulations of Atmospheric Composition in Rotterdam Using Satellite and Ground-Based Observations

Authors: Anja Raznjevic, Bart van Stratum, Maarten Krol
Affiliations: Wageningen University, Utrecht University
High-resolution atmospheric models, particularly large-eddy simulations (LES), are instrumental in advancing our understanding of atmospheric pollutant plumes. LES's ability to resolve turbulent wind fields, represent urban structures, and accurately model convective processes provides detailed insights into how pollutants disperse and undergo chemical transformations, especially near their sources. This capability is particularly valuable for studying plumes emanating from urban areas or plumes from large point sources such as industrial facilities. Moreover, LES enables detailed assessment of turbulent mixing effects on chemically active plumes, for example its influence on species lifetime. Utilization of LES capabilities to evaluate large-scale atmospheric transport models is one of the goals of the Carbon Atmospheric Tracer Research to Improve Numerical schemes and Evaluation (CATRINE) project, within which this study is embedded. This study presents a series of LES simulations of the atmospheric composition in the city of Rotterdam and the surrounding area. The simulations focus on plume dispersion for long-lived greenhouse gases (GHG) CO2 and CH4, as well as the more chemically active NOx (NO2 + NO) for which we used a condensed chemistry scheme. The study domain comprises the Rotterdam urban area, the port of Rotterdam (the largest seaport in Europe), Rotterdam Airport and the surrounding agricultural lands, representing a diverse range of both urban and point sources. Simulations were conducted for September 2, 2022, a day during the intensive RITA2022 campaign, during which both GHGs and meteorological conditions were measured in Rotterdam and the surrounding area. This day was chosen because it was characterized by clear sky conditions, which also enabled the use of satellite data from TROPOspheric Monitoring Instrument (TROPOMI) for evaluation of NO2 and CH4 distributions in LES. 
TROPOMI’s broad spatial coverage is particularly useful for evaluating LES's ability to reproduce regional-scale plumes that are transported by large-scale weather dynamics. Conversely, LES provides fine-scale dispersion details that satellite resolution fails to capture but that are visible in ground-based measurements. We therefore performed three simulations with increasing resolution (200 m, 100 m, 50 m) to study the impact of resolution on the simulated plumes. This study combines the benefits of satellite column measurements, used to validate the ability of LES to accurately simulate the spatial distribution of pollutants, with ground measurements from the RITA2022 campaign, used to validate fine-scale dispersion in urban areas and from point sources.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Evaluating CAMS European Air Quality Reanalysis in Lombardy, Italy: A Comparative Analysis With Ground-based Measurements

Authors: Lorenzo Amici, Lorenzo Gianquintieri, Prof Maria Antonia Brovelli, Enrico Gianluca Caiani
Affiliations: Politecnico Di Milano
Lombardy, located in the Italian Po Valley, is one of Europe's most populous and industrialized regions, making it one of the most polluted. The natural basin formed by the Alps and the Apennines traps pollutants, exacerbating air quality issues in this densely populated region. Air quality monitoring in Lombardy relies heavily on the network of monitoring stations established by the regional environmental protection agency (ARPA Lombardia). While this network serves as the primary source of data on atmospheric pollutant concentrations, its limitations include a restricted number of stations (83 across a 24,000 km² area) and an uneven distribution, with denser coverage in urban and industrialized areas compared to rural regions. This study therefore evaluates the performance of the Copernicus Atmospheric Monitoring Service (CAMS) air pollution data by comparing it against ground-based measurements and interpolated maps from ARPA. CAMS produces annual validated air quality reanalyses for the European domain at a spatial resolution of 0.1° (approximately 8x11 km in Italy). These reanalyses combine model-based air pollution concentrations with validated observations from the European Environment Agency (EEA) to provide pollutant concentrations in the lowest layers of the atmosphere. For the years considered in this study, the production was based on 9 air quality data assimilation systems across Europe. All the available air quality models are considered to compute a median ensemble, which on average yields better performance than the individual products, and was indeed chosen for the statistical comparison in this study. The analysis focuses on four key pollutants with well-established adverse health effects: nitrogen dioxide (NO₂), ozone (O₃), particulate matter with a diameter ≤10 µm (PM₁₀), and particulate matter with a diameter ≤2.5 µm (PM₂.₅). 
Data from 2018 and 2019 were used in two comparative approaches: a map-to-map comparison, aligning daily interpolated data obtained from ARPA with CAMS European validated reanalyses, and a station-to-map analysis, directly matching CAMS data with ARPA’s ground measurements. Statistical measures, including Pearson and Spearman correlation coefficients, Root Mean Square Error (RMSE), and Mean Bias Error (MBE), were employed to evaluate performance. The results from the map-to-map analysis indicate that CAMS data exhibits pollutant-specific performance. NO₂ had the strongest correlations (r = 0.6-0.9), while O₃ correlations were weaker (median r ~0.6, with a minimum below 0), with CAMS frequently underestimating concentrations in less populated regions. PM₁₀ and PM₂.₅ showed moderate correlations (r = 0.5-0.9), though CAMS tended to overestimate higher concentrations. A spatial analysis of the differences between the two datasets revealed that for NO₂, PM₁₀, and PM₂.₅, CAMS underestimated pollutant concentrations in the most populated areas of Lombardy, while ozone was underestimated across the whole study area. The station-to-map analysis yielded more promising results, with all pollutants demonstrating positive correlations (rNO₂=0.6-0.9, rO₃=0.9-1, rPM₁₀=0.7-1, rPM₂.₅=0.87-1). Performance was generally better in the Po Valley's flat, urbanized areas compared to the mountainous provinces, where correlations were weaker, probably due to the challenges of modeling complex meteorological and aerodynamic conditions. Indeed, for all considered pollutants, the majority of the ground stations with the lowest values of correlation are in the northern mountainous provinces. Further temporal aggregations revealed seasonal and hourly variations in performance. O₃ correlations improved during peak concentrations (in summer and during the afternoons), while NO₂ correlations declined during traffic peaks when the concentrations are higher. 
Notably, O₃ correlations were higher for elevated pollutant levels, whereas other pollutants showed stronger correlations at both low and high concentrations, with middle-range values performing worse. Additionally, CAMS showed a tendency to overestimate pollutant concentrations at low levels while underestimating them at higher levels, a pattern consistent across most pollutants. These biases raise concerns about the reliability of CAMS data for assessing extreme pollution events, which have the most significant health and environmental impacts. This study provides valuable insights into the accuracy and reliability of CAMS European air pollution reanalyses, particularly in highly polluted and geographically diverse regions like the one examined. Despite its limitations, CAMS demonstrates potential as a supplementary tool for air quality monitoring, especially in rural or unmonitored areas where ARPA’s network is sparse. However, CAMS’s tendency to underestimate extreme values and its weaker performance in heterogeneous terrains highlight areas needing improvement. Addressing these challenges could significantly enhance CAMS’s utility for policymaking, environmental management, and public health interventions.
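The comparison statistics used in this evaluation can be computed directly; a minimal sketch follows (ranks here are computed without tie handling, a simplification relative to standard Spearman implementations):

```python
import numpy as np

def validation_stats(model, obs):
    """Pearson r, Spearman rho, RMSE, and mean bias error (model minus obs)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)

    def rank(a):                      # simple ranks; assumes no ties
        return np.argsort(np.argsort(a)).astype(float)

    pearson = np.corrcoef(model, obs)[0, 1]
    spearman = np.corrcoef(rank(model), rank(obs))[0, 1]
    rmse = float(np.sqrt(np.mean((model - obs) ** 2)))
    mbe = float(np.mean(model - obs))
    return pearson, spearman, rmse, mbe
```

A positive MBE flags systematic overestimation by the model series, which is how the low-level overestimation and high-level underestimation noted above would show up when the statistics are stratified by concentration range.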

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: What Satellite Earth Observations have taught us about air emissions from the Canadian oil sands

Authors: Dr. Chris McLinden, Vitali Fioletov, Debora Griffin, Sumi Wren, Mark Shephard, Erik Lutsch
Affiliations: Environment And Climate Change Canada
The oil sands region in Alberta, Canada, is a significant source of air pollution, and its remote location complicates the use of conventional monitoring. Satellite Earth Observation (SEO) instruments such as OMI (Ozone Monitoring Instrument), TROPOMI (Tropospheric Monitoring Instrument), TEMPO (Tropospheric Emissions: Monitoring of Pollution) and others continue to be invaluable assets in helping us better understand emissions of air pollutants in the oil sands - how much, from where, and how these have evolved through many years of change in the region. This presentation will describe new methods developed to derive emissions of NOx, SO2, NH3, CO2 (and other species) that combine SEO, meteorological information, and conventional monitoring. Also presented will be important applications and insights, including the evolution of NOx and SO2 emissions over the last 20 years, spatially and diurnally resolved emissions, complications from wildfires, and examples where inconsistencies with reported emissions were identified.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Development of the NO₂ Product for the CO2M Mission

Authors: Benjamin Leune, Mark ter Linden, Pepijn Veefkind, Jos van Geffen, Maarten Sneep, Ruediger Lang
Affiliations: Royal Netherlands Meteorological Institute (KNMI), Delft University of Technology (TU Delft), EUMETSAT
The upcoming Copernicus Anthropogenic CO₂ Monitoring Mission (CO2M) aims to deliver high-resolution (2×2 km²) CO₂ data with exceptional accuracy and precision, utilizing three satellite platforms. The mission's primary objectives are to detect, identify, and monitor anthropogenic CO₂ emission hotspots—such as power plants and megacities—while tracking emission trends. These data will support the assessment of local and national emission reduction targets and the verification of national reports within the Paris Agreement's five-year global stocktake framework. Nitrogen dioxide (NO₂) measurements will play a crucial role in identifying and characterizing CO₂ emission plumes. As a co-emitted species during combustion processes, NO₂ serves as a reliable tracer for CO₂. Its low atmospheric background and minimal biospheric influence enhance the detectability of local tropospheric NO₂ enhancements, enabling more accurate CO₂ emission estimates. The CO2M mission’s onboard CO2I spectrometer includes a visible band designed to retrieve NO₂ tropospheric columns. This high-resolution NO₂ product is also expected to significantly benefit air quality applications, particularly in quantifying emission sources. To fully leverage the mission’s fivefold improvement in spatial resolution compared to the TROPOMI instrument, several advancements in the tropospheric NO₂ retrieval algorithm have been developed. These include: (1) anisotropic surface modeling at high spatial resolution using MODIS-like BRDF products for land, water, and snow; (2) cloud parameter retrievals using a scattering cloud model, co-registered cloud data from the onboard CLIM instrument, and measurements from the O₂-O₂ (and additionally O₂-A) absorption band; and (3) high-resolution NO₂ a-priori profiles derived from CAMS global model forecasts. 
Additional potential algorithm improvements, such as a-priori profile column closure, non-linear air-mass factor corrections, aerosol adjustments, and stratospheric bias corrections, are under development. A unique dataset from S5P/TROPOMI, featuring enhanced spatial sampling (2×2 km²) and co-located cloud information from NPP-VIIRS, is combined with synthetic datasets to create a test environment for developing the NO₂ algorithm.
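The core slant-to-vertical conversion behind any such NO₂ product can be sketched as follows (a generic DOAS-style two-step calculation with made-up numbers, not the CO2M operational algorithm):

```python
def tropospheric_vcd(scd_total, scd_strat, box_amfs, profile):
    """Tropospheric vertical column: subtract the stratospheric slant column,
    then divide by a tropospheric air-mass factor built from per-layer box
    AMFs weighted by the a-priori NO2 profile."""
    amf_trop = sum(w * x for w, x in zip(box_amfs, profile)) / sum(profile)
    return (scd_total - scd_strat) / amf_trop

# Four layers (surface first); most NO2 sits near the surface, where the
# measurement sensitivity (box AMF) is lowest. Units: molecules / cm^2.
vcd = tropospheric_vcd(
    scd_total=10e15, scd_strat=2e15,
    box_amfs=[0.4, 0.8, 1.2, 1.6], profile=[4, 3, 2, 1],
)
```

The profile-weighted AMF is where the high-resolution a-priori profiles from CAMS forecasts and the improved surface/cloud treatment described above enter the retrieval.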

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: AI-Enhanced Active Fire Detection: Bridging Sentinel-3 and Landsat 8/9 for High-Resolution LST Monitoring

Authors: Dr. Ali Ahmad, Mr. Thomas Olive, Mr. Guillaume Dubrasquet-Duval
Affiliations: Diginove
Keywords: active fire detection, high-resolution LST, Sentinel-3, Landsat 8/9, AI
Timely and accurate detection of active fires is essential for managing and mitigating the far-reaching impacts of wildfires, including ecological damage, air quality issues, and contributions to climate change. Satellites equipped with thermal sensors, which measure Land Surface Temperature (LST), provide a valuable tool for monitoring fire activity across large areas [1]. For example, Sentinel-3 offers daily updates with its frequent revisit schedule but struggles to detect smaller or early-stage fires due to its coarse spatial resolution (1 km). Conversely, Landsat 8/9 delivers much finer detail with its high spatial resolution (30 meters) but revisits the same location only once every 8 days, making it less effective for tracking the rapid temperature changes critical for timely fire detection. This creates a significant gap: no existing satellite system balances high spatial detail with frequent coverage. Ground-based sensors and drones, while highly precise, are limited by their range and cost, making them unsuitable for large-scale fire monitoring [2]. To address this gap, our research uses artificial intelligence (AI) to enhance Sentinel-3’s spatial resolution through super-resolution techniques [3]. We trained a deep learning model on high-resolution thermal data from Landsat 8/9, incorporating additional variables such as solar exposure derived from DEM slope, acquisition timing/season, and land-use indices such as NDVI and NDMI. By fine-tuning this model with Sentinel-3 data, we produced daily LST maps with a resolution of approximately 30 meters. This enhanced data enables more precise monitoring of smaller fires and early-stage outbreaks. Additionally, we developed a refined fire detection algorithm that combines thermal thresholds with spatial analysis of historical fire and climate events, improving the ability to distinguish actual fires from other heat sources. 
Beyond improving the detection of small-scale and early-stage fires, the generated high spatiotemporal LST resolution has broader applications, including monitoring solid waste fires, optimizing irrigation through water resource management, urban planning to address heat islands, and forest monitoring to identify thermal risks [1]. By integrating AI and satellite data, our study demonstrates the transformative potential of advanced remote sensing technologies, offering practical solutions for environmental monitoring where current satellite capabilities fall short. [1] Li, Zhao‐Liang, et al. "Satellite remote sensing of global land surface temperature: Definition, methods, products, and applications." Reviews of Geophysics 61.1 (2023). [2] Wooster, Martin J., et al. "Satellite remote sensing of active fires: History and current status, applications and future requirements." Remote Sensing of Environment 267 (2021): 112694. [3] Chen, Yuncheng, et al. "Spatiotemporal fusion network for land surface temperature based on a conditional variational autoencoder." IEEE Transactions on Geoscience and Remote Sensing 60 (2022): 1-13.
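A bare-bones version of a contextual thermal threshold test on a super-resolved LST tile might look as follows (illustrative only; the refined algorithm described above additionally uses historical fire and climate context, and the thresholds here are invented):

```python
import numpy as np

def detect_fires(lst, k=3.0, min_anomaly=8.0):
    """Flag pixels exceeding the scene background (median) by both k standard
    deviations and a fixed margin in kelvin; a crude contextual hot-spot test."""
    background = np.median(lst)
    hot = lst > background + k * np.std(lst)
    return hot & (lst > background + min_anomaly)

# A 5x5 super-resolved LST tile at ~300 K with one 340 K hot spot.
scene = np.full((5, 5), 300.0)
scene[2, 2] = 340.0
fire_mask = detect_fires(scene)
```

Operational algorithms typically estimate the background from a moving window around each candidate pixel rather than the whole scene, which is what makes the 30 m super-resolved maps valuable for small fires.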

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: NitroNet – A Deep-Learning NO₂ Tropospheric Profile Retrieval for the TROPOMI Satellite Instrument

Authors: Leon Kuhn, Steffen Beirle, Andrea Pozzer, Thomas Wagner
Affiliations: Max-Planck-Institute for Chemistry, Institute for Environmental Physics, University of Heidelberg
Nitrogen dioxide (NO₂) is an important air pollutant and has been widely recognized for its hazardous impact on human health. Tropospheric NO₂ is routinely monitored using satellite instruments (e.g. TROPOMI), in situ instruments, and other methods (e.g. spectroscopic measurements on the ground or on aircraft). However, satellite measurements only yield the vertically integrated NO₂ concentration ("vertical column density", VCD), in situ measurements are mostly deployed at the surface, and other methods which may resolve NO₂ vertically are operated only very sparsely. In consequence, it is currently impossible to obtain NO₂ profiles with continuous, dense spatial coverage from measurements alone. Regional chemistry and transport (RCT) simulations can be used to simulate NO₂ profiles where no observations are available, but they are computationally expensive and require meticulous parameter calibration to achieve acceptable agreement with observational reference data. We present "NitroNet", the first tropospheric NO₂ profile retrieval for the TROPOMI satellite instrument. The retrieval utilizes a feed-forward neural network trained on synthetic profiles from the RCT model WRF-Chem. NitroNet receives tropospheric NO₂ VCDs from TROPOMI and ancillary variables (meteorological, emissions, etc.) from the ERA5 reanalysis and the EDGARv5 emission inventory as input, from which it predicts the corresponding tropospheric NO₂ profiles. Here, the NO₂ VCD determines the profiles' magnitudes, while the ancillary variables describe their shapes. Prior filtering of the training profiles based on their agreement with, e.g., the TROPOMI observations is shown to be highly beneficial, leading to NitroNet achieving significantly better agreement with observational data than WRF-Chem. In other words, the neural network is prevented from fully reproducing the systematic errors of the data-generating RCT model. 
We validate NitroNet using a variety of observational data (satellite, in situ, and MAX-DOAS measurements), and showcase its ability to generalize to different seasons and geographical regions. By training on synthetic data as opposed to in situ observations, NitroNet overcomes two principal limitations of previous machine learning models. Firstly, the synthetic dataset contains full NO₂ profiles (as opposed to surface concentrations only), which NitroNet is trained to reproduce. Secondly, synthetic training data are not affected by instrument effects, such as cross sensitivities to other nitrogen species. These were estimated to range from +20 % to +100 % in previous instrument studies and would inevitably be reproduced by the neural network.
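The overall shape of such a retrieval can be illustrated with a tiny feed-forward network trained on synthetic profiles. This is a toy stand-in: the real NitroNet is trained on WRF-Chem output with far richer inputs, and the "boundary-layer" ancillary and exponential profile shapes below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_layers = 4, 16, 10   # "VCD" + 3 ancillaries in, 10-layer profile out

# Synthetic training set: profile magnitude set by the "VCD" (column 0),
# vertical shape controlled by a "boundary-layer" ancillary (column 1).
X = rng.uniform(0.0, 1.0, (256, n_in))
z = np.linspace(0.0, 1.0, n_layers)
Y = X[:, :1] * np.exp(-z / (0.2 + 0.6 * X[:, 1:2]))

W1 = rng.normal(0.0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, n_layers)); b2 = np.zeros(n_layers)

def forward(X):
    h = np.tanh(X @ W1 + b1)           # single hidden layer
    return h, h @ W2 + b2

loss0 = np.mean((forward(X)[1] - Y) ** 2)
lr = 0.05
for _ in range(2000):                  # plain full-batch gradient descent
    h, pred = forward(X)
    g = 2.0 * (pred - Y) / len(X)      # gradient of the MSE w.r.t. predictions
    gh = (g @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
loss = np.mean((forward(X)[1] - Y) ** 2)
```

The key design point mirrored here is the division of labour the abstract describes: one input carries the column magnitude, while the ancillaries shape the vertical distribution.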

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: Estimations of the NOx emissions and NO2 lifetime using TROPOMI NO2 observations over UK point sources and area sources.

Authors: Matthieu Pommier
Affiliations: Ricardo AEA Ltd
Quantification of air pollutant emissions is crucial to accurately model their concentrations. Among these pollutants, nitrogen oxides (NOx), consisting of nitrogen dioxide (NO2) and nitric oxide (NO), have adverse effects on health, agriculture, and natural ecosystems, both directly and through their role in the formation of secondary pollutants. Despite the progress made in the last decade in calculating NOx emissions, emission inventories remain uncertain, and European countries such as the UK continue to suffer from NO2 exceedances. Combining the wind rotation technique with the exponentially modified Gaussian fitting procedure, this study estimates NOx emissions and NO2 lifetimes from NO2 observations provided by TROPOMI. This work focuses on UK point sources and area sources (clusters of sources), such as industrial sites and conurbations, over several years, analysing seasonal effects on the estimates and discussing the limitations of this approach.
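The fitting step can be sketched with a noiseless demonstration: wind-rotated NO2 line densities follow an exponentially modified Gaussian (EMG) whose e-folding distance x0 encodes the lifetime. The EMG form below follows the commonly used downwind line-density model; all numbers are invented, and a coarse grid search stands in for a proper non-linear least-squares fit.

```python
import numpy as np
from math import erf, exp, sqrt

def emg(x, a, x0, sigma=5.0):
    """Exponentially modified Gaussian line density: a Gaussian source of
    width sigma convolved with exp(-x/x0) downwind decay (x in km)."""
    out = []
    for xi in x:
        expo = exp(sigma**2 / (2.0 * x0**2) - xi / x0)
        cdf = 0.5 * (1.0 + erf((xi / sigma - sigma / x0) / sqrt(2.0)))
        out.append(a / x0 * expo * cdf)
    return np.array(out)

x = np.arange(-50.0, 151.0, 5.0)       # along-wind distance (km)
y = emg(x, a=100.0, x0=30.0)           # synthetic noiseless line densities

# Brute-force least squares over amplitude a and e-folding distance x0.
a_fit, x0_fit = min(
    ((float(a), float(x0)) for a in range(80, 121) for x0 in range(20, 41)),
    key=lambda p: float(np.sum((emg(x, p[0], p[1]) - y) ** 2)),
)
wind = 18.0                            # assumed effective wind speed (km/h)
tau_hours = x0_fit / wind              # NO2 lifetime: tau = x0 / w
```

The fitted amplitude, divided by the lifetime (and scaled from NO2 to NOx), then yields the emission rate, which is where the seasonal dependence discussed above enters.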

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: The relationship between air pollution and urban heat islands: Analysis of the impact of NO₂ and O₃ on urban heat islands in European metropolises using Sentinel-5P and Sentinel-3 data

Authors: Msc Konrad Wróblewski, Maciej Jurzyk, PhD Szymon
Affiliations: Institute Of Geodesy And Cartography
Urban heat islands (UHIs) are one of the most visible effects of urbanization and climate change, consisting of increased temperatures in urban centers compared to suburban and rural areas. Factors contributing to this phenomenon include the intensification of the use of impervious surfaces, the reduction of green areas, and the interaction between air pollution and radiation processes in the atmosphere. In particular, high concentrations of nitrogen dioxide (NO₂) and ozone (O₃) can influence the UHI phenomenon by modifying the radiation balance and absorption of thermal energy. This study analyses the relationship between air pollution and the intensity of urban heat islands in selected European metropolises, using satellite data from Sentinel-5P and Sentinel-3 missions. Sentinel-5P data provide information on the spatial and temporal distribution of trace gases such as NO₂ and O₃, which play a key role in chemical processes in the atmosphere and affect the absorption of solar radiation. Sentinel-3, on the other hand, with its Sea and Land Surface Temperature Radiometer (SLSTR) sensor, allows the monitoring of land surface temperature (LST), which is the primary indicator of UHI intensity. The analysis was based on multi-year data for selected European cities such as Paris, Madrid, Berlin and Warsaw, taking into account different meteorological and seasonal conditions. The study used advanced statistical analysis and spatial modelling methods to determine the correlation between NO₂ and O₃ concentrations and the intensity of urban heat islands. The results showed that higher NO₂ concentrations in central urban areas correlate with higher LST values, suggesting that the absorption of shortwave radiation by this gas may enhance the UHI effect. On the other hand, ozone (O₃), an important component of photochemical smog, showed variable relationships with UHI depending on the season and sunshine level. 
During summer periods, increases in O₃ concentrations were associated with higher surface temperatures, indicating a synergistic effect of this gas and intense solar radiation on the intensification of urban heat islands. The study presented here provides new information on the relationship between air pollution and urban heat islands, highlighting the importance of atmospheric pollution monitoring in the context of urban adaptation to climate change. The use of Sentinel-5P and Sentinel-3 data has enabled a comprehensive spatial-temporal analysis, providing a sound basis for further environmental research and policies.
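The correlation analysis described in this abstract can be sketched as a per-pixel Pearson correlation between co-located NO₂ and LST values. The arrays below are synthetic stand-ins, not the study's Sentinel-5P/Sentinel-3 retrievals, and `pixelwise_correlation` is a hypothetical helper name:

```python
import numpy as np

def pixelwise_correlation(no2, lst):
    """Pearson correlation between co-located NO2 and LST samples.

    `no2` and `lst` are 1-D arrays of per-pixel values (e.g. monthly
    NO2 columns and SLSTR LST resampled to a common grid); NaNs mark
    pixels masked by clouds or quality flags.
    """
    valid = ~(np.isnan(no2) | np.isnan(lst))
    return np.corrcoef(no2[valid], lst[valid])[0, 1]

# Synthetic illustration: LST rises weakly with NO2, plus noise.
rng = np.random.default_rng(0)
no2 = rng.uniform(20, 120, 500)                  # hypothetical NO2 columns
lst = 290 + 0.05 * no2 + rng.normal(0, 1, 500)   # hypothetical LST in K
r = pixelwise_correlation(no2, lst)
```

In the study itself such coefficients would be computed per city and per season before the spatial modelling step.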
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone A-B)

Poster: SO2 Emissions Derived From TROPOMI Observations Using an Improved Flux-Divergence Method With Variable Lifetimes

Authors: Yutao Chen, Prof. Ronald J. van der A, Dr. Jieying Ding, Henk Eskes, Dr. Jason E. Williams, Nicolas Theys, Dr. Athanasios Tsikerdekis, Prof. Pieternel F. Levelt
Affiliations: Royal Netherlands Meteorological Institute (KNMI), Department of Geoscience & Remote Sensing, Delft University of Technology (TUD), Royal Belgian Institute for Space Aeronomy (BIRA-IASB), National Center for Atmospheric Research (NCAR)
The rapid development of the economy and the implementation of new environmental policies in developing regions have led to fast changes in regional SO2 emissions. We present a monthly SO2 emission inventory for India covering December 2018 to November 2023 based on the TROPOMI Level-2 COBRA SO2 dataset. We are now focusing on SO2 emissions in Africa. The inversion is based on an improved flux-divergence method and estimated local SO2 lifetimes, which include both chemical loss and dry deposition of SO2. The improvement of the divergence method is motivated by a spreading effect appearing in SO2 emission maps calculated using the classic divergence method. This spreading effect causes the emission signal from each SO2 point source to spread across surrounding grid cells, rather than being concentrated at the source location. To address this, we improved the divergence method by reallocating the divergence during its calculation to derive an emission map with a significant reduction in the spreading effect. To further improve usability, we have created a user-friendly version of this improved divergence method by incorporating an image deconvolution technique. This advancement enables users of the divergence method to efficiently enhance top-down emission maps without high computational costs. We believe this approach can be applied not only when deriving SO2 emissions but also for emissions of other species. We account for the variability of the tropospheric SO2 lifetime by calculating the lifetime for each grid cell and each month. Chemical loss via the hydroxyl radical and dry deposition are considered the main drivers of the local SO2 lifetime. This is the first effort to derive the local SO2 lifetime for application in the divergence method. The results show that applying the local SO2 lifetime improves the accuracy of SO2 emissions compared to calculations using a constant lifetime.
The variability in the SO2 lifetime is important to account for in estimating top-down SO2 emissions. Our derived averaged Indian SO2 emissions covering the recent 5 years are about 5.2 Tg/year with a monthly mean uncertainty of 40%, which is lower than the bottom-up emissions of 11.0 Tg/year from CAMS-GLOB-ANT v5.3. The total emissions from the 92 largest point source emissions are estimated to be 2.9 Tg/year, lower than the estimation of 5.2 Tg/year from the global SO2 catalog MSAQSO2LV4. The emissions of Vindhyachal, the point source showing the largest decrease, were reduced by 17%, which is about 43 Gg/year.
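The classic flux-divergence balance that underlies the method evaluates, per grid cell, E = ∇·(wind · VCD) + VCD/τ. A minimal numpy sketch of that baseline step (with hypothetical field names, not the authors' code) is:

```python
import numpy as np

def divergence_emissions(vcd, u, v, lifetime, dx):
    """Flux-divergence emission estimate per grid cell:
    E = div(wind * VCD) + VCD / lifetime.

    `vcd`: SO2 vertical column density field, `u`/`v`: wind components,
    `lifetime`: local SO2 lifetime (scalar or per-cell array), `dx`:
    grid spacing, all in mutually consistent units.
    """
    # Horizontal divergence of the flux field, plus the sink term.
    div = np.gradient(u * vcd, dx, axis=1) + np.gradient(v * vcd, dx, axis=0)
    return div + vcd / lifetime
```

The paper's improvement, reallocating the divergence (or deconvolving the resulting map) so each source's signal stays concentrated at its location, would operate on the output of such a step.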
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: B.02.01 - POSTER - Earth System Governance & Sustainability Frameworks

The world today confronts unprecedented needs for earth system governance. The global commons framework is the closest example of an existing approach with the aim of governing biophysical systems on Earth upon which the world collectively depends. A new proposed planetary commons framework updates and introduces a paradigm shift in Earth System Governance by recognizing Earth System Tipping Points (ESTPs) and other biophysical systems as our shared commons, urging a move towards a more decentralized and polycentric governance architecture that transcends geopolitical and geographical boundaries. This architecture would distribute responsibility, authority, and accountability across multiple scales and institutions, including at the regional scale of the tipping element.

The world also faces an unprecedented need for a navigational tool capable of guiding us through the Anthropocene and the 21st century as we move away from the stable conditions of the Holocene, driven by the pressures of economic, social, and political forces of humanity. A sustainability framework tailored for the Anthropocene must therefore acknowledge that people and nature are entwined within integrated socio-ecological systems, and guide us towards an ecologically safe and socially just operating space for humanity; a transformation of our societies that brings us back within planetary boundaries, whilst ensuring the social needs of all beings are met, leaving no human or non-human animal behind. Being guided by Planetary Boundaries, while advocating social justice for all, defines a narrow, safe, and just corridor in which we can all thrive. This is our ultimate goal.

However, effective earth system governance is unthinkable without essential remote sensing infrastructure that provides scientific measurements, monitoring capabilities, resilience detection systems & early warning signals. These architectures and sustainability frameworks are an important foundation for guiding Earth Observation (EO) gap analysis, prioritizing Earth Observation (EO) research & applications, and bringing together the global community around a shared vision for 2050, as outlined in the ESA Systems-of-Systems Reference Architecture Blueprint.

This session aims to bring together the global Earth Observation (EO) Community under the framework of sustainability science narratives. It will focus on integrating multiple disciplines and communities to participate in the process of learning, interest formation/positioning, coalition building, and strategic planning. Its primary objective is to explore how the global Earth Observation (EO) community can develop the essential remote sensing infrastructure needed to support the governance of Earth System Tipping Points (ESTPs), other biophysical systems, and support sustainability frameworks (e.g. Planetary Boundaries & Doughnut Economics) as we navigate the Anthropocene and 21st century.

We call for multidisciplinary abstracts on sustainability science, systems thinking, earth system governance, post-growth economics models, planetary commons & boundaries, and the application of remote sensing technologies in these domains.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone O)

Poster: A space-borne weighing machine to measure human-made materials in support of sustainability sciences

Authors: Dr. rer. nat David Frantz, Dr Franz Schug, Dominik Wiedenhofer, André Baumgart, Doris Virág, Sam Cooper, Camila Gómez-Medina, Fabian Lehmann, Thomas Udelhoven, Dr Sebastian van der Linden, Dr Patrick Hostert, Helmut Haberl
Affiliations: Trier University, University of Wisconsin, University of Natural Resources and Life Sciences, Vienna, University of Greifswald, Humboldt-Universität zu Berlin
During the last century, humanity has piled up over 1000 gigatons of long-lasting artefacts like roads and buildings globally – both in cities and in the rural landscape. This pervasive phenomenon locks in material and energy use, waste, and greenhouse gas emissions, and shapes the future use and patterns of our settlements and the resulting resource use. In the literature, the mass of these human-made artefacts is dubbed “manufactured capital”, “technomass”, “human-made mass”, “in-use stocks”, or “socioeconomic material stocks”. These stocks have become a major focus of sustainability science in the last decade, especially because this mass doubles every 20 years and has recently even outweighed the whole global plant biomass. Knowing where, and how much of which material, has accumulated – both at high spatial resolution and across large areas – is of pivotal importance for managing our future resource use more effectively and increasing the circularity of our economy. The question is, though: how can we actually map the mass of all those structures? We have developed an innovative workflow that acts as a “space-borne weighing machine” for countries. Our solution leverages the power of Copernicus Earth Observation data in combination with other GIS data streams, as well as advanced big data processing techniques, to eventually provide high-resolution insights into the material composition of all buildings and infrastructure of a given area of interest. We present a spatially detailed assessment of built structures for the whole contiguous United States at 10 m spatial resolution, quantifying the mass of 14 stock-building materials (concrete, steel, etc.) in eight building types (residential buildings, commercial buildings, mobile homes, etc.) and nine types of mobility infrastructures (roads, rails, subways, parking spaces, etc.).
We used Sentinel-1 and Sentinel-2 data, along with locally existing 3D building models and building footprints, to map the type, area, height, and volume of buildings using machine learning techniques. The areas of above- and belowground mobility infrastructures were mapped using OpenStreetMap data and imperviousness datasets from the National Land Cover Database. Eventually, an extensive compilation of material weight factors from the domain of industrial ecology was used to transform the volumes and areas of buildings and mobility infrastructures into mass. Our high-resolution maps reveal that built structures in the US amount to 127 Gt and have become 2.6 times heavier than all plant biomass across the country. We found that most inhabited areas are mass-dominated by buildings or infrastructure – even in the rural landscape. We further analyzed determinants of the material mass per capita and show that densely built settlements have substantially lower per-capita material stocks, while the highest intensities are found in sparsely populated regions due to ubiquitous infrastructures. Our high-resolution analysis indicates that out-migration aggravates already high intensities in rural areas as people leave while built structures remain – highlighting that quantifying the distribution of built-up mass at high resolution is an essential contribution to understanding the biophysical basis of societies, and to informing strategies to design more resource-efficient settlements and a sustainable circular economy.
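The final conversion step, turning mapped volumes and areas into mass via material intensity factors, can be sketched as below. The factor values and dimensions are hypothetical placeholders, not the study's industrial-ecology coefficients:

```python
def building_mass_t(volume_m3, intensity_t_per_m3):
    """Mass of one building: gross volume times a material intensity
    factor (tonnes of a given material per m3 of built volume)."""
    return volume_m3 * intensity_t_per_m3

def road_mass_t(area_m2, intensity_t_per_m2):
    """Mass of a road segment: surface area times an areal intensity
    factor (tonnes per m2, covering all pavement layers)."""
    return area_m2 * intensity_t_per_m2

# Hypothetical example: a 10 m x 20 m x 9 m residential building at
# 0.35 t of concrete per m3, plus 500 m2 of road at 0.5 t per m2.
total_t = building_mass_t(10 * 20 * 9, 0.35) + road_mass_t(500, 0.5)
```

In the actual workflow these factors vary by building type, infrastructure class, and material, and are summed over all 14 materials per 10 m cell.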
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: A.04.02 - POSTER - GHG monitoring for policy

In order to support the Paris Agreement and related mitigation efforts, it is increasingly recognised that greenhouse gas monitoring based on measurements can play a vital role. In 2024, WMO received approval to proceed with implementing a Global Greenhouse Gas Watch (G3W) to support national entities with measurement-based data to estimate emissions. UNEP's International Methane Emission Observatory and its Methane Alert and Response System (MARS) is another example of a new initiative to support policy and trigger mitigation. This session will, among others, attract submissions from activities supporting the development of GHG Monitoring and Verification Systems around the world, both to support policy and to engage users interested in data uptake. This session aims to bring together the people generating policy-relevant information with the actual policymakers.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Quantification of Methane Emissions from Overlapping Sources Using Sentinel 5P TROPOMI Satellite Data, CTM Inversion and Disaggregation Techniques

Authors: Zixuan Gao, Dr. Harjinder Sembhi, Dr. Joshua Vande Hey, Daniel Potts
Affiliations: Earth Observation Science, School Of Physics And Astronomy, University Of Leicester
To meet the targets of the Global Methane Pledge—to reduce global methane emissions by at least 30% from 2020 levels by 2030—it is crucial to understand emissions from key sources such as oil and gas, mines, landfills, and industrial facilities. Quantifying the scale and intensity of localized methane sources is essential for rapid mitigation, making them high-priority targets for emissions reduction. Satellite instruments like area flux mappers (e.g., GOSAT and TROPOMI) and point source imagers (e.g., GHGSat, PRISMA, Sentinel-2, and others) provide essential data for observing and distinguishing methane plumes from space. However, accurately quantifying these emissions requires disentangling individual sources, a task often complicated by overlapping methane plumes. This challenge is particularly pronounced in regions with dense emission sources, such as the Silesian basin in Poland—one of Europe's largest coal mining areas—where multiple mines in close proximity create difficulties in attributing emissions to specific sources. This study utilizes Sentinel-5P TROPOMI satellite data, the Integrated Mass Enhancement (IME) method, and the Weather Research and Forecasting (WRF) model to attribute methane emissions to individual sources. Initially, the WRF-LES model will be used to simulate different numbers of point sources, followed by applying a multi-objective heuristic optimization algorithm to estimate parameters for a 2D multisource Gaussian plume model, which serves as a foundation for separating the simulated methane plumes. The IME method and observational data will further evaluate the effectiveness of plume separation and the optimal conditions for its implementation. By integrating these methodologies, the study aims to develop a complete framework for separating overlapping methane plumes, improving the accuracy of emission source attribution in areas with compactly distributed emission sources.
Meanwhile, the validation will involve comparisons with existing literature results and using observational data to ensure the accuracy and robustness of the results. The final application of this framework to the Silesian coal-mining region will provide site-specific insights into methane emissions to support the development of targeted mitigation strategies.
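A 2D multisource Gaussian plume model of the kind used here as the separation baseline can be sketched as a superposition of single-source plumes. The linear dispersion law and all parameter values below are simplifying assumptions for illustration, not the study's configuration:

```python
import numpy as np

def plume(x, y, q, u, a=0.1):
    """Gaussian plume for one source at the origin, wind along +x.

    `q` is the emission rate, `u` the wind speed; the cross-wind
    spread sigma_y grows linearly downwind (hypothetical, simplified
    dispersion law). Concentration is zero upwind of the source.
    """
    sigma_y = a * np.maximum(x, 1e-6)
    c = q / (np.sqrt(2 * np.pi) * sigma_y * u) * np.exp(-y**2 / (2 * sigma_y**2))
    return np.where(x > 0, c, 0.0)

def multisource(x, y, sources, u):
    """Superpose plumes from several (q, x0, y0) point sources."""
    return sum(plume(x - x0, y - y0, q, u) for (q, x0, y0) in sources)
```

Separating overlapping plumes then amounts to fitting the per-source parameters (q, x0, y0) of this superposition to the observed enhancement field, which is what the multi-objective optimization step does.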
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: An Application of Wind-Rotation Methodology for Quantifying Methane Emission Inventories Using Sentinel-5P

Authors: Julian Akani-Guéry, Srivatsan Anand, Quentin Peyle, Alexis Groshenry, Clément Giron
Affiliations: Kayrros
Accurate quantification of methane emissions is critical for understanding and mitigating greenhouse gas impacts on climate. This study presents a novel application of wind-rotation methodology for estimating methane emission inventories at country and regional scales, using Sentinel-5P TROPOMI satellite data. Unlike traditional methods relying on analytical estimates or reported figures, this approach leverages satellite-based observations to provide independent and near-real-time measurements. As this application uses the TROPOMI sensor, it is only applicable to onshore methane sources. The wind-rotation method consists of rotating satellite observations according to wind direction over a specified period, then solving flux equations to obtain the average methane emissions over a highly localised area. Combining multiple satellite images makes it easier to distinguish the methane plume from the background. Thanks to its high temporal resolution and very good spectral sensitivity to methane, Sentinel-5P is ideally suited to this approach, enabling spatially resolved emission estimates. The uncertainties of this methodology were assessed through a sensitivity analysis, in order to evaluate the influence of each model parameter on the final quantification. The results demonstrate that this application enables the creation of comprehensive and near-real-time methane emission inventories, capable of detailing emissions down to the asset level, such as individual oil and gas fields or cities. As a top-down approach, it offers valuable insights that can be compared with public inventories, which are typically bottom-up. This comparison not only validates the accuracy of existing inventories but also identifies potential discrepancies, thereby enhancing the overall reliability and transparency of methane emission data.
By improving the temporal and spatial resolution of methane emission estimates, this methodology provides a robust tool for tracking progress toward emission reduction targets and informing evidence-based climate strategies.
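The core geometric step of the wind-rotation method, rotating each observation into a common wind-aligned frame so that plumes from different overpasses stack downwind when averaged, can be sketched as follows. The angle convention is a simplifying assumption for this sketch:

```python
import numpy as np

def rotate_to_wind(x, y, wind_dir_deg):
    """Rotate observation coordinates (relative to the source) so the
    wind blows along +x in the rotated frame.

    `wind_dir_deg` is the direction the wind blows toward, measured in
    degrees from the +x axis (a hypothetical convention; operational
    code would convert from meteorological wind direction).
    """
    theta = np.deg2rad(wind_dir_deg)
    xr = np.cos(theta) * x + np.sin(theta) * y
    yr = -np.sin(theta) * x + np.cos(theta) * y
    return xr, yr
```

After rotating each scene with its own wind direction, averaging the rotated columns aligns all plumes along +x, which is what allows the flux equations to be solved on the mean enhancement field.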
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Assessing sub-daily emission rate variability of methane super-emitters using multiple satellite platforms

Authors: Tobias A. De Jong, Joannes D. Maasakkers, Shubham Sharma, Itziar Irakulis Loitxate, Paul Tol, Manuel Montesino San Martin, Ilse Aben
Affiliations: SRON, Netherlands Institute for Space Research, International Methane Emission Observatory, United Nations Environment Program, Research Institute of Water and Environmental Engineering (IIAMA), Universitat Politècnica de València
Large methane super-emitters are strong contributors to global warming and a key target for mitigation efforts. Some of these emitters are persistent and stable, while others have strongly varying emission rates. They can even be intermittent, sometimes emitting hundreds of tonnes of methane in a few hours, after which emissions cease. TROPOMI on ESA's Sentinel-5P has daily global coverage, providing a good basis to detect both types of emissions. To support the UNEP IMEO Methane Alert and Response System (MARS), source locations of methane emissions need to be known with a higher precision than TROPOMI data can provide on its own. For persistent emitters, we have therefore compiled a list of over 200 global hot spots based on long-term TROPOMI analysis. These emission hotspots are then targeted using high-resolution instruments to identify emissions from, for example, oil/gas facilities, waste sites, and coal mines [1,2]. For TROPOMI-based detections of transient emissions, we pinpoint sources by combining with data from Sentinel-3’s SLSTR and the Joint Polar Satellite System’s VIIRS. While these imaging instruments have lower methane sensitivity, they provide global coverage at a resolution of ~500 m at least four times a day, including Suomi-NPP’s overpass approximately 4 minutes before Sentinel-5P’s. This enables pinpointing of large transient emitters [3,4]. In addition to spatially pinpointing sources, understanding their temporal variation is essential to characterize emitters and quantify their total emissions. While satellites are typically thought of as only providing sparse snapshots, we here present an approach to characterize variation of emission rates at a sub-daily scale using TROPOMI data. Due to its high sensitivity and large swath, TROPOMI can follow methane plumes tens to even hundreds of kilometers downwind.
To characterize varying emission rates, we use Lagrangian modelling, which enables us to extract information about the emission rate not only at the time of the TROPOMI overpass, but also for several hours preceding the overpass. We verify this approach by again comparing with data from SLSTR and VIIRS of time-varying emissions large enough to be imaged with those instruments: Their multiple overpasses per day provide calibration points for the timeline extracted from Lagrangian modelling. Finally, we apply this approach to characterize the variability of emission rates for both persistent and transient sources detected with TROPOMI, to improve our understanding of their total emissions. [1] Schuit, B. J., et al.: Automated detection and monitoring of methane super-emitters using satellite data. Atmos. Chem. Phys., 23, 9071-9098, https://doi.org/10.5194/acp-23-9071-2023, 2023. [2] Maasakkers, J.D., et al.: Using satellites to uncover large methane emissions from landfills. Science Advances, Vol 8, Issue 32, https://doi.org/10.1126/sciadv.abn9683, 2022 [3] Pandey, S., et al.: Daily detection and quantification of methane leaks using Sentinel-3: a tiered satellite observation approach with Sentinel-2 and Sentinel-5p. RSE Vol. 296, https://doi.org/10.1016/j.rse.2023.113716, 2023 [4] Jong, T. A. de, et al.: Daily global methane super-emitter detection and source identification with sub-daily tracking. preprint on Eartharxiv, https://doi.org/10.31223/X51M5T, 2024
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone B)

Poster: Let’s Investigate Methane 4 Climate Action

Authors: Sander Houweling, Prof. Thomas Röckmann, Dr. Jean-Daniel Paris, Dr. Torsten Sachs, Dr. Tuula Aalto, Dr. Emanuel Gloor, Prof. Hartmut Boesch, Dr. Hugo Denier van der Gon, Prof. Andreas Stohl, Dr. Rona Thompson, Dr. Marielle Saunois, Dr. Lena Höglund-Isaksson, Dr. Sergey Gromov, Dr. Ernest Koffi, Dr. Roxana Petrescu
Affiliations: Vrije Universiteit Amsterdam, SRON Netherlands Institute for Space Research, Institute for Marine and Atmospheric Research Utrecht, Laboratoire des Sciences du Climat et de l’Environnement, Deutsches GeoForschungsZentrum, Finnish Meteorological Institute, University of Leeds, University of Bremen, Institute of Environmental Physics, Netherlands Organisation for Applied Scientific Research TNO, University of Vienna, Norwegian Institute for Air Research NILU, International Institute for Applied Systems Analysis, Max-Planck-Institute for Chemistry, European Centre for Medium-Range Weather Forecasts
2025 started with the launch of the Horizon Europe project IM4CA to enhance the quantification and understanding of methane emissions and sinks. A consortium of 25 partners joins forces to investigate pressing questions about the evolution of atmospheric methane levels in recent decades, to reduce the uncertainty in future projections, and to design efficient solutions for monitoring and mitigating emissions in and outside of Europe. It will build new measurement and modelling infrastructure for improved monitoring of the progress toward short- and long-term emission reduction targets, with a prominent role for existing (TROPOMI, GHGSat, MethaneSAT, EnMAP) and upcoming (S5, CO2M, Tango) satellite missions for measuring atmospheric composition and land surface properties. The changing European methane emissions are an important focus of the project, which we track with the help of eastward extensions of the ICOS monitoring network in Poland and Romania. Intensive measurement campaigns in Romania are conducted combining surface, aircraft, and total column measurements to improve the accuracy of emission quantification techniques using satellite data. The worldwide applicability of these techniques will extend the impact of our campaigns far beyond European borders. Besides changing anthropogenic emissions, climate impacts on natural sources and sinks of methane are also an important focus of IM4CA. The four-year research program will initiate new measurement infrastructure in Congo to help characterize emissions from tropical wetlands in Africa. Campaigns will be conducted in Northern Scandinavia along a transect of disappearing permafrost to investigate impacts on vegetation and methane emissions using techniques that can be applied to high-resolution satellite instruments for circumpolar emission mapping. This presentation will provide an overview of the planned activities and goals of IM4CA.
The project offers a great opportunity to learn about methane in a cooperative spirit and to reach out and provide support to those who can turn knowledge about methane into climate action.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: A.07.04 - POSTER - Irrigation monitoring through Earth Observation (EO) data

Irrigation is the primary source of anthropogenic water use, far exceeding domestic and industrial withdrawals. Despite the essential role played by irrigation in ensuring food production, and its impacts on the management of water resources, reliable and explicit data about irrigation dynamics (i.e., timing, extent and amounts of water used) are generally lacking worldwide. Remote sensing technology has recently proven to be an essential tool to detect irrigation occurrence in space and time and to estimate amounts of water used, especially in light of the latest high-resolution retrievals.
This session welcomes contributions presenting innovative approaches leveraging Earth Observation (EO) data, possibly combined with modeling approaches or ground-based measurements, for monitoring irrigation and assessing the associated impacts. Topics of interest include but are not limited to:
- exploitation of EO data for irrigation detection;
- use of EO data for quantifying irrigation water use;
- data assimilation techniques to improve irrigation schemes;
- assessment of the impacts of irrigation on the water cycle;
- management of irrigation using hydrological modeling combined with satellite data;
- estimates of irrigation water requirements leveraging satellite data;
- development of strategies based on remotely sensed data for improving irrigation efficiency.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: EO Insights for Addressing Competing Water Demands and Advancing Drought Monitoring in Mexico's Irrigation Districts

Authors: Tomas Bartalos, Dr Jan Misurec, Lucie Dekanova, Sivasankar Arul
Affiliations: Gisat
This study exemplifies a transformative approach to addressing water resource management and agricultural resilience in Mexico's DR075 Río Fuerte irrigation district. The analysis integrates Earth Observation (EO) technology with innovative methodologies, including multi-year z-score analysis of the Normalized Multi-band Drought Index (NMDI) and unsupervised time-series clustering to approximate crop types and detect anomalies in canopy water content. By combining Sentinel-2 imagery, ancillary datasets, and rule-based classifications, the project delivers products that analyze drought severity, urban expansion, and unmanaged land within irrigation units. These tools are critical for decision-making under competing water demands from agriculture, urbanization, and industry. Unlike conventional EO applications, this work introduces a clustering-based strategy to stratify temporal growth profiles in the absence of detailed and current crop parcel data. This approach effectively circumvents the challenges posed by outdated parcel delineations and incomplete crop assignment records, enabling the creation of spatially consistent drought severity and productivity indicators. This methodology is adaptable to other regions facing similar data limitations, offering a novel pathway for operationalizing EO in water and land management. The project ensures methodological rigor through the use of atmospherically corrected Sentinel-2 data spanning 2016–2023. Vegetation indices, including NMDI and NDVI, were employed to calculate reference statistics and detect deviations in vegetation water content, expressed as standardized z-scores. Limitations, including the lack of in situ data and the reliance on clustering for crop identification, are acknowledged but do not detract from the validity of the outputs, which were aggregated to irrigation unit scales for policy relevance. 
Future enhancements could include integrating historical Landsat data for extended temporal analysis and refining clustering through targeted field surveys. The results have significant implications for both science and practice. Scientifically, the project advances the understanding of drought impacts on agricultural systems by providing pixel-level, seasonal, and annual drought severity metrics. These metrics inform long-term evaluations of drought hotspots and productivity trends, which are essential for adaptive water resource management. For applications, the project's EO-based products offer actionable insights for irrigation planning, urbanization impact assessments, and crop productivity monitoring. By linking outputs to irrigation units, the analysis enables stakeholders to prioritize interventions that balance urban and agricultural water demands. From a policy perspective, the project's outputs align with sustainable development goals related to water security, food production, and urban resilience. The spatially disaggregated results allow decision-makers to identify underperforming or high-risk areas, facilitating targeted water allocation and irrigation strategies. The collaboration between international organizations (such as ESA and the World Bank) and local institutions (like CONAGUA) under the GDA framework highlights the potential of EO to foster partnerships and operationalize services for sustainable resource management. In summary, this project represents an innovative and technically robust application of EO data to address critical challenges in water and agricultural management. Its relevance extends across scientific research, operational services, and policy frameworks, contributing to the broader goals of resilience and sustainability in water-scarce regions.
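The NMDI and multi-year z-score computation at the heart of the drought metrics can be sketched with Sentinel-2-like reflectance bands (near 860, 1640 and 2130 nm). The array shapes and values are hypothetical:

```python
import numpy as np

def nmdi(b860, b1640, b2130):
    """Normalized Multi-band Drought Index (Wang & Qu, 2007):
    NMDI = (R860 - (R1640 - R2130)) / (R860 + (R1640 - R2130))."""
    diff = b1640 - b2130
    return (b860 - diff) / (b860 + diff)

def zscore(current, history):
    """Standardize current-season values against the multi-year
    per-pixel mean and standard deviation (`history` is a stack of
    earlier seasons along axis 0; NaNs mark masked observations)."""
    mu = np.nanmean(history, axis=0)
    sd = np.nanstd(history, axis=0)
    return (current - mu) / sd
```

Strongly negative z-scores then flag pixels whose canopy water content sits well below the pixel's own multi-year reference, which is how the drought severity maps are derived before aggregation to irrigation units.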
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Satellite-Based Irrigation Mapping: Challenges and Insights from Austria

Authors: Luca Zappa, Sebastian Boeck, Isabella Greimeister-Pfeil, Thomas Rosmann, Heike Brielmann
Affiliations: Umweltbundesamt
Irrigation mapping, the process of distinguishing irrigated fields from rainfed agriculture, is fundamental for advancing our understanding and management of water resources. Accurate identification of irrigated areas provides a critical foundation for quantifying agricultural water use and assessing the impacts of climate change on crop production. Recent advances in remote sensing and machine learning techniques have enabled scalable, high-resolution mapping of irrigation, offering valuable insights into spatial and temporal irrigation patterns. This study investigates the potential of Sentinel-1 and Sentinel-2 satellite missions, part of the Copernicus Earth Observation program, for mapping irrigated fields across various Austrian regions characterized by small-scale agricultural structures. The humid continental climate of Austria, marked by frequent rainfall that keeps soil moisture levels high, presents significant challenges for irrigation detection. Analyses were conducted over a three-year period (2021–2023) in four regions of eastern Austria, selected for their extensive agricultural production. A diverse range of 23 crop types, including field crops (e.g., wheat, corn) and vegetables, were examined. A machine learning framework was developed and tested using Sentinel-1 and Sentinel-2 data as inputs. Ground-truth data on irrigation status (irrigated vs. rainfed) were obtained through surveys, orthophotos, and irrigation permits. Among the machine learning models evaluated, the Random Forest algorithm demonstrated the best performance, achieving an F-score of 0.71. Increasing the volume of training data significantly improved classification accuracy, whereas applying models trained in one temporal or spatial context to other periods or regions resulted in reduced performance. This study highlights the feasibility of satellite-based irrigation monitoring in regions with humid climates and fragmented agricultural landscapes. 
While machine learning provides robust tools for irrigation detection, the accuracy and reliability of these methods remain highly dependent on the quality and quantity of reference data.
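The F-score reported above (0.71) is the harmonic mean of precision and recall for the irrigated class. As a minimal illustration of how it is computed (the labels here are entirely synthetic, not the study's data):

```python
# Hedged sketch: F-score for an irrigated/rainfed classifier.
# Field labels and predictions below are invented for illustration.

def f_score(y_true, y_pred, positive=1):
    """F1 score: harmonic mean of precision and recall for one class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 1 = irrigated, 0 = rainfed (synthetic field-level labels)
truth = [1, 1, 1, 1, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 1, 0, 0, 0]
score = f_score(truth, preds)   # precision = recall = 0.75 here
```

Unlike overall accuracy, this metric is not inflated by the (typically much larger) rainfed majority class.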

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Towards Operational Detection of Irrigated Agricultural Plots Using Earth Observation Data

Authors: Mr Ghaith Amin, Mr Thomas Oberlin, Ms Valerie Demarez
Affiliations: University Toulouse III Paul Sabatier, University of Toulouse, CNES/CNRS/IRD, CESBIO, 18 avenue Edouard Belin, 31401 Toulouse, France, MEOSS, 39 Chem. des Ramassiers, 31770, Colomiers, France, Fédération ENAC ISAE-SUPAERO ONERA, Université de Toulouse, 10 Avenue Marc Pélegrin, 31055 Toulouse, France
The monitoring and management of irrigated agricultural plots are critical for optimizing water resources and ensuring sustainable agricultural practices. With increasing global food demands and the challenges posed by climate change, effective irrigation monitoring has become a priority for governments, agricultural stakeholders, and researchers. This study proposes a novel operational framework that combines Earth observation data with advanced machine learning methods to achieve accurate and timely detection of irrigated plots. Earth observation data provide a unique opportunity for large-scale irrigation monitoring. Among these, the Sentinel-1 SAR satellite stands out due to its high temporal resolution and sensitivity to surface soil moisture dynamics (Zribi et al., 2011), making it a powerful tool for assessing soil surface water content. Previous studies have utilized Sentinel-1 data for irrigation detection at plot and pixel scales using supervised learning approaches (Bazzi et al., 2019; Pageot et al., 2020). While these methods achieve high accuracy, their operational application faces significant challenges, including reliance on annual in-situ data for training supervised models, which limits scalability, and the absence of updated agricultural field boundaries in near-real time, which hinders the precision of monitoring systems. To overcome these challenges, this study introduces a comprehensive framework comprising two core components. The first involves near-real-time extraction of agricultural field boundaries, while the second focuses on developing a semi-supervised irrigation detection approach that reduces dependency on in-situ data for annual model updates. Together, these components establish a robust and scalable system for operational irrigation monitoring. Accurate delineation of agricultural plot boundaries is fundamental for effective irrigation monitoring.
To achieve this, we employ a deep learning-based multi-task architecture that precisely predicts field boundaries. This approach effectively handles fragmented and irregularly shaped fields, particularly in complex agricultural landscapes. By utilizing Sentinel-2 Level-3A cloud-free imagery (Hagolle et al., 2018), the model achieves high geometric accuracy and generates accurate plot delineations in near-real time for the current study year. This capability significantly enhances the operational utility of the framework, supporting timely decision-making for resource allocation and irrigation management. Moreover, linking the extracted boundaries to irrigation detection facilitates targeted analysis and reporting at both regional and national scales. To address the limitations of supervised learning methods, which rely heavily on labeled in-situ data for training, we are developing a semi-supervised approach. This method combines multi-source satellite data with change detection algorithms to identify irrigated plots without requiring extensive annual field measurements, thus improving scalability and reducing dependency on ground data. The framework will be tested and evaluated over multiple geographically diverse study sites, notably in Spain and France. These sites represent different climatic conditions, agricultural practices, and crop types, ensuring the robustness and generalizability of the proposed approach. For irrigation detection, the semi-supervised approach will be benchmarked against a supervised random forest (RF) model previously developed for Sentinel-1-based irrigation monitoring (Amin et al., 2024). This research presents a novel framework for the large-scale operational detection of irrigated agricultural plots using machine learning and Earth observation data. 
By combining near-real-time field boundary extraction with a semi-supervised irrigation detection approach, the framework addresses key limitations of existing methods, offering a scalable, accurate, and efficient solution for irrigation monitoring. This approach has the potential to replace traditional supervised methods, enhancing the applicability and robustness of irrigation detection systems. References : - Zribi, M., Chahbi, A., Shabou, M., Lili-Chabaane, Z., Duchemin, B., Baghdadi, N., Amri, R., and Chehbouni, A.: Soil surface moisture estimation over a semi-arid region using ENVISAT ASAR radar data for soil evaporation evaluation, Hydrol. Earth Syst. Sci. 15, 345–358, https://doi.org/10.5194/hess-15-345-2011, 2011. - Bazzi, H.; Baghdadi, N.; Ienco, D.; El Hajj, M.; Zribi, M.; Belhouchette, H.; Escorihuela, M.J.; Demarez, V. Mapping Irrigated Areas Using Sentinel-1 Time Series in Catalonia, Spain. Remote Sens. 2019, 11, 1836. https://doi.org/10.3390/rs11151836 - Pageot, Y.; Baup, F.; Inglada, J.; Baghdadi, N.; Demarez, V. Detection of Irrigated and Rainfed Crops in Temperate Areas Using Sentinel-1 and Sentinel-2 Time Series. Remote Sens. 2020, 12, 3044. https://doi.org/10.3390/rs12183044 - Olivier Hagolle, David Morin, & Mohamed Kadiri. (2018). ATBE: Detailed Processing Model for the Weighted Average Synthesis Processor (WASP) for Sentinel-2 (1.4). Zenodo. DOI:10.5281/zenodo.1401360 - G. Amin, N. Sfaksi, V. Thierion, J. Gilleron, T. Ferrero and V. Demarez, "Sentinel-1 Synthetic Aperture Radar Time Series for Irrigation Mapping," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 4250-4254, doi: 10.1109/IGARSS53475.2024.10640975.
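The change-detection idea underlying Sentinel-1 irrigation detection can be sketched as follows. This is an illustrative simplification with invented thresholds and synthetic values, not the authors' algorithm: irrigation wets the soil and raises the SAR backscatter, so a sharp rise in sigma0 on a date without rainfall is flagged as a candidate event.

```python
# Illustrative sketch (not the authors' method): flag candidate irrigation
# events in a Sentinel-1 backscatter (sigma0, dB) time series by change
# detection, suppressing dates where rainfall explains the moisture signal.
# The thresholds rise_db and rain_max are invented for illustration.

def detect_irrigation_events(sigma0_db, rain_mm, rise_db=1.5, rain_max=1.0):
    """Return time-step indices where backscatter rises sharply without rain."""
    events = []
    for i in range(1, len(sigma0_db)):
        rise = sigma0_db[i] - sigma0_db[i - 1]
        if rise >= rise_db and rain_mm[i] <= rain_max:
            events.append(i)
    return events

sigma0 = [-14.0, -13.8, -11.9, -12.1, -10.2, -10.4]   # dB, synthetic plot series
rain   = [0.0,    0.0,   0.0,   5.0,   0.0,   0.0]    # mm per acquisition date
events = detect_irrigation_events(sigma0, rain)       # -> [2, 4]
```

In a semi-supervised setting, such unlabeled event candidates can augment a small labeled set, which is the scalability argument made in the abstract.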

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Evaluating Satellite-Based Evapotranspiration Products (EEFlux, SenET, and SAR2ET) Using UAV-Derived Thermal and RGB Observations

Authors: Zeynep Kuru, Muhammed Raşit Çevikalp, Esra Erten, Nebiye Musaoglu (Geomatics Engineering)
Affiliations: Istanbul Technical University
Evapotranspiration (ET) is a fundamental hydrological process that refers to the transport of surface water into the atmosphere by evaporation and by plant transpiration. Its accurate estimation is critical for agricultural irrigation, water resources management, and climate change analysis. Even though analysis-ready data (ARD) ET products with high temporal resolution exist, their applicability to agriculture is limited by their spatial resolution. In this context, the Landsat-based ARD EEFlux product, providing 30 m resolution global ET data, stands out. However, these data are frequently limited by spatial gaps, cloud cover, and temporal resolution. To overcome these limitations, [1] recently proposed deriving field-scale ET data from Sentinel-1, trained with weak supervision owing to the non-overlapping acquisition times and resolutions of the EEFlux and Sentinel-1 data. In this study, this U-Net-based SAR2ET model will be evaluated alongside other satellite-based high-resolution ET products (EEFlux and SenET) using UAV acquisitions with RGB and thermal cameras as ground truth. This study specifically targets cotton fields that the SAR2ET model has not seen during either validation or testing, ensuring unbiased performance evaluation. The UAV acquisitions, comprising very high-resolution RGB and thermal images captured at an altitude of 200 meters, were collected on September 3, 2023, over 20 cotton fields in Aydın province, Turkey. These UAV acquisitions were first integrated with weather station data for ET calculation based on the METRIC (Mapping Evapotranspiration at High Resolution with Internalized Calibration) energy balance model. METRIC, which is widely used in Landsat-based ET studies, solves the surface energy balance equation to calculate ET.
Once the high-resolution ET product is obtained from the UAV acquisitions at 0.2 m resolution, it is resampled to 30 m and compared with three satellite-based ET products: i) SAR2ET, ii) ARD EEFlux, and iii) SenET [2]. The SenET approach sharpens the thermal band of Sentinel-3 using high-resolution Sentinel-2 bands to obtain a 20 m resolution ET product. Beyond acquisition-time and spatial-resolution differences, a key difference among the ET products lies in their calibration processes. Considering this, instead of a one-to-one pixel-wise comparison, the Kullback-Leibler (KL) divergence and the Chi-squared test, statistical methods for comparing expected and observed distributions, are first used to evaluate the four ET products once all are resampled to 30 m resolution. In this way, even if there are absolute shifts between ET products, we can evaluate their similarity in detecting the spatial distribution of irrigation. Quantitative comparisons among the produced ET data sets are then performed using the Pearson correlation and the unbiased root mean square error (uRMSE). Initial results highlight that, even though there is a large spatial resolution difference between the thermal acquisitions of Sentinel-3 and the UAV, the most similar ET products were SenET and the UAV-derived product. This may be related to the fact that EEFlux and SAR2ET (the latter trained almost globally on EEFlux) are unable to model localized conditions; notably, while the UAV acquisitions and the EEFlux (Landsat) pass occurred simultaneously, the Sentinel-3 acquisition was on the following day. At the next stage, the SAR2ET model will be retrained with Sentinel-2/-3 data and the final quantitative results will be shared. [1] S. Cetin, B. Ülker, E. Erten and R. G.
Cinbis, "SAR2ET: End-to-End SAR-Driven Multisource ET Imagery Estimation Over Croplands," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 14790-14805, 2024, doi: 10.1109/JSTARS.2024.3447033. [2] R. Guzinski, H. Nieto, “Evaluating the feasibility of using Sentinel-2 and Sentinel-3 satellites for high-resolution evapotranspiration estimations,” in Remote Sensing of Environment, 221,157-172, 2019, doi: 10.1016/j.rse.2018.11.019.
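The distribution-based comparison described above can be illustrated with a minimal KL-divergence computation on binned ET values. The histograms below are synthetic, not the study's data; the point is that products with an absolute offset but similar spatial patterns yield a small divergence:

```python
import math

# Sketch: compare two ET products by the KL divergence of their value
# histograms rather than pixel-wise differencing. Histograms are invented.

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) for two discrete distributions given as raw histograms."""
    p_sum, q_sum = sum(p), sum(q)
    return sum((pi / p_sum) * math.log((pi / p_sum + eps) / (qi / q_sum + eps))
               for pi, qi in zip(p, q) if pi > 0)

# Binned ET values (counts per mm/day bin) for two products over the same area
hist_a = [5, 20, 40, 25, 10]
hist_b = [8, 22, 35, 25, 10]
d = kl_divergence(hist_a, hist_b)           # small positive divergence
identical = kl_divergence(hist_a, hist_a)   # 0 for identical distributions
```

Note that KL divergence is asymmetric; symmetrized variants (e.g. averaging both directions) are sometimes preferred when neither product is treated as reference.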

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Harnessing Earth Observation for Accurate Early Forecasting of Irrigation Needs

Authors: Giuseppe Satalino, Francesco Mattia, Anna Balenzano, Francesco Lovergine, Davide Palmisano, Cinzia Albertini, Sergio Ruggieri, Pasquale Garofalo, Luca Musti, Michele Rinaldi, Vito Iacobellis, Andrea Gioia, Margherita Lombardo, Audrey Maria Noemi Martellotta, Donato Impedovo, Lucia Sarcinella, Davide Veneto, Vincenzo Gattulli, Luigi Nardella, Massimo Di Cataldo, Nicoletta Noviello, Rocchina Guarini, Patrizia Sacco, Maria Virelli, Deodato Tapete
Affiliations: Consiglio Nazionale delle Ricerche (CNR - IREA), Consiglio per la Ricerca in agricoltura e l’analisi dell’Economia Agraria (CREA-AA), Consiglio per la Ricerca in agricoltura e l’analisi dell’Economia Agraria (CREA-CI), Politecnico di Bari (DICATECh), Università degli Studi di Bari Aldo Moro (Dip. di Informatica), Consorzio per la Bonifica della Capitanata (CBC), Agenzia Spaziale Italiana (ASI)
Irrigation is used on 22% of cultivated land and supports 40% of global food production. In the near future, the demand for irrigation water is expected to rise significantly, particularly in regions where the effects of climate change will be more pronounced, such as the Mediterranean basin. To address this challenge, numerous strategies to mitigate the impacts of climate change are under study. These include changing crop varieties and agricultural techniques, as well as improving irrigation management. This study presents the architecture and performance of a Spatial Decision Support System (SDSS) designed to forecast irrigation needs in a semi-arid Mediterranean environment. The SDSS has been developed as part of the "EarTH Observation for the Early forecasT of Irrigation needS (THETIS)" project, which is supported by the Italian Space Agency (ASI). THETIS provides irrigation forecasts at a basin scale at three key stages identified by the project's Pilot User, i.e., the Reclamation Consortium of Capitanata (CBC). These stages are: i) early (at the end of the winter season); ii) the beginning of the summer season (immediately after sowing); and iii) during the summer season, which includes weekly updates. Early forecasts are crucial for assessing water availability and demand, enabling timely management decisions in case of shortages. On the other hand, periodic forecasts throughout the season can enhance the efficient supply and distribution of water to farmers and/or irrigation districts.
Methodology and Data sets
THETIS SDSS integrates the soil water balance at two spatial scales.
At the basin scale, it uses a hydrological model (HM), calibrated on the available daily streamflow data, for its ability to reliably reproduce soil moisture temporal dynamics; at the irrigation district scale, it uses a crop growth model (CGM), initialized by the HM, for its capability to better model the water dynamics at the local scale, accounting for factors like rain, irrigation, transpiration, evaporation, and drainage. A distinguishing characteristic of THETIS is its use of soil and vegetation information derived from Earth observation (EO) data, both Synthetic Aperture Radar (SAR) and optical. Additionally, artificial intelligence (AI) techniques are employed on observed and forecasted Copernicus Service meteorological and climatic data. A WebGIS interface that provides graphical access to the generated maps is also featured. The THETIS SDSS has been set up on the "Fortore" irrigation district in the Apulian Tavoliere (AT), Foggia, Italy. The district is managed by CBC and extends over 141 km2. In this area, ground data have been collected from 2020 to 2024. They include meteorological data, soil moisture, crop surveys, irrigation scheduling, and volumes. The SDSS outputs come from the CGM, based on AquaCrop [Steduto et al., 2009]. They consist of water demand maps (m3/ha) at the field scale. The first forecast is refined at the successive stages as the cropping season progresses. The CGM simulates crop development and forecasts evapotranspiration and irrigation needs based on meteorological forcing, hydrological and EO-derived information. Such input data come from three different modules: i) the EO processing module, which exploits SAR (i.e., Sentinel-1, COSMO-SkyMed, and SAOCOM), multispectral (Sentinel-2) and hyperspectral (PRISMA) data.
The derived thematic maps are: Surface Soil Moisture (SSM) at the field scale [Balenzano et al., 2021], irrigated/non-irrigated fields [Balenzano et al., 2022], tillage practices (ploughed or rolled soils) [Satalino et al., 2024], and vegetation indices. In particular, the tillage maps combined with historical data provide an early forecast (before the emergence of the plants) of the irrigated area in the incoming season. This first estimate is successively refined at the beginning of and during the season by irrigation maps (derived from SSM maps) and updated vegetation indices. In addition, tillage maps and vegetation indices were used to identify the dates of crop transplanting; ii) the HM module, which estimates the soil water content at the beginning of the growing period. This information is provided by the DREAM hydrological model [Iacobellis et al., 2024]; iii) the AI module, which is dedicated to the spatialization and downscaling of the seasonal forecasts of meteorological and climate data distributed by the Copernicus Climate Change Service (C3S) system. Parameters such as precipitation, temperature, and solar radiation are considered. The downscaling model is driven by a historical data set of meteorological data collected over the site since 1990 [Impedovo et al., 2024].
Implementation and Results
The SDSS was assessed on tomato crops in the AT study area by evaluating the results for each use case, focusing on the identified cultivated area and water consumption. The HM and CGM were calibrated in the study area using historical data. Ancillary data included tomato crop variety, soil texture, and irrigation technique (i.e., drip irrigation). Downscaled C3S meteorological data, maps of tillage change and irrigated fields, crop transplanting dates, and soil water content at the start of the simulation have been provided by the AI, EO, and HM modules.
For the first use case (on April 1), the CGM simulation uses the estimate of the irrigated area identified in March based on historical land use information. According to crop rotation practices, fields cultivated with winter wheat are statistically followed by a summer crop (mostly tomato) after two or three years, with percentages of 43% and 53%, respectively. Using this knowledge, an initial estimate of the extent of the irrigated area for the upcoming summer season has been derived. This estimate is refined in the second use case (on May 1), when cumulative tillage maps provide additional information on fields prepared for sowing. This update also provides the location of fields with information regarding soil characteristics, weather, and soil water content (SWC). In the third use case (from June 1 onward), the same procedure is iterated using updated tillage change and irrigation maps derived from SSM to refine the identification of sown fields. It is worth noting that from the first to the last use case, the meteorology becomes more accurate as the forecast transitions from long-term to medium- and then short-term. Moreover, the identification of additional tomato transplanting dates is inferred from the timing of tillage changes in May and June. For example, the analysis of the LAI trend for 2022 showed that 53% of tomato fields were transplanted around May 15, while the remaining fields were transplanted around June 15. Results indicate that the water consumption estimated by the THETIS SDSS using tillage change maps was, on average, 600 m³/ha. This is comparable to the measured value of 500 m³/ha, considering that additional water volumes from groundwater sources were likely used. Detailed results on the spatial and temporal distribution of irrigation demand at the various forecast stages will be discussed in the presentation.
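The daily soil water balance that a crop growth model such as AquaCrop steps forward can be sketched conceptually as follows. All capacities and fluxes here are invented; AquaCrop itself resolves considerably more processes (canopy development, stress responses, capillary rise):

```python
# Conceptual sketch of a daily root-zone soil water balance (mm).
# Values are invented; this is not the THETIS/AquaCrop implementation.

def water_balance(swc0, capacity, rain, irrigation, et):
    """Step soil water content forward; excess drains, content floors at 0."""
    swc, drainage = swc0, 0.0
    for r, irr, e in zip(rain, irrigation, et):
        swc = swc + r + irr - e
        if swc > capacity:             # excess percolates below the root zone
            drainage += swc - capacity
            swc = capacity
        swc = max(swc, 0.0)
    return swc, drainage

swc, drained = water_balance(
    swc0=120.0, capacity=150.0,
    rain=[0.0, 10.0, 0.0], irrigation=[30.0, 0.0, 0.0], et=[5.0, 5.0, 5.0])
```

Initializing `swc0` from the hydrological model and updating the inputs from EO products is, in essence, the module coupling the abstract describes.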
Acknowledgment: The project “EarTH Observation for the Early forecasT of Irrigation needS” (THETIS) is funded by ASI under the Agreement N. 2023-52-HH.0 in the framework of ASI’s program “Innovation for Downstream Preparation for Science” (I4DP_SCIENCE). References - Balenzano A., G. Satalino, F. P. Lovergine, A. D’Addabbo, D. Palmisano, R. Grassi, O. Ozalp, F. Mattia, D. Nafría García, V. Paredes Gómez (2022). Sentinel-1 and Sentinel-2 Data to Detect Irrigation Events: Riaza Irrigation District (Spain) Case Study. Water, 14 (19), art. no. 3046. https://doi.org/10.3390/w14193046. - Balenzano, A., F. Mattia, G. Satalino, F. P. Lovergine, D. Palmisano, et al. (2021). Sentinel-1 soil moisture at 1 km resolution: a validation study. Remote Sensing of Environment, 263, 112554, DOI: https://doi.org/10.1016/j.rse.2021.112554. - Iacobellis, V., A. Gioia, V. Totaro, M. Lombardo, A. B. Izzaddin, S. Manfreda, R. Zhuang, G. Satalino, A. Balenzano, C. Albertini, F. Mattia, F. Lovergine, D. Palmisano, M. Rinaldi, S. Ruggieri, P. Garofalo, D. Impedovo, N. Noviello, L. Nardella, M. Di Cataldo, R. Guarini, M. Virelli, P. Sacco, D. Tapete (2024). Advancing Sustainable Water Management in Southern Italy through Integrated Hydrological Modeling and Earth Observation. Computational Science and Its Applications – ICCSA 2024 Workshops. Hanoi, Vietnam, July 1–4, 2024, Conference Proceedings, Part V (Paper), ICCSA 2024 Workshops, LNCS 14819, pp. 217–229, https://doi.org/10.1007/978-3-031-65282-0_14. - Impedovo, D., V. Gattulli, L. Sarcinella, D. Veneto, G. Satalino, A. Balenzano, F. Lovergine, C. Albertini, D. Palmisano, F. Mattia, S. Ruggieri, P. Garofalo, M. Rinaldi, V. Iacobellis, A. Gioia, L. Nardella, M. Di Cataldo, N. Noviello, R. Guarini, P. Sacco, M. Virelli, D. Tapete (2024). A Novel Multi-Speed Convolutional Transformer Architecture for Meteorological Forecasting. IHTC - IEEE International Conference on Humanitarian Technologies, 27-30 November 2024, Bari, Italy. 
- Satalino, G., D. Palmisano, A. Balenzano, F. Lovergine, F. Mattia, F. Nutini, M. Boschetti, G. Verza, M. Rinaldi, S. Ruggieri, F. Ciavarella, C. Manganiello, V. P. Gómez, D. A. Nafría García (2024). Copernicus Sentinels for tillage change detection. IEEE 2024 International Geoscience and Remote Sensing Symposium, July 7–12, 2024, Athens, Greece, pp. 1249-1252, DOI: 10.1109/IGARSS53475.2024.10641126. - Steduto, P., T. C. Hsiao, D. Raes, and E. Fereres (2009). AquaCrop-The FAO crop model to simulate yield response to water: I. Concepts and underlying principles. Agronomy Journal, 101(3): 426-437.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Spatiotemporal Analysis of Irrigation Intensity and Related Impacts on Water Scarcity and Land Degradation in Iran: Insights from Multi-Sensor Earth Observation Data

Authors: Sigrid Roessner, Robert Behling, Mahmud Haghshenas Haghighi, Mahdi Motagh
Affiliations: GFZ Helmholtz Zentrum Potsdam, Leibniz University Hannover
Large parts of the world’s population are going to face conditions of increasing water scarcity and food insecurity. The higher food demand resulting from a growing global population requires an increase in food production, which can be met either by expanding the area under cultivation or by intensifying the use of already existing agricultural land. Iran is a prominent example of such conditions, since the country has been facing rapid population growth and has promoted the paradigm of achieving self-sufficiency in the production of main staples, such as wheat and rice. Thus, the country has significantly increased agricultural production during the last 30+ years, although large parts of Iran are unsuited or of limited suitability for agricultural purposes. This development has led to a high and steadily increasing water demand for irrigation and to a very unsustainable use of the available water resources. As a result, the agricultural sector is responsible for approx. 90% of today’s annual water consumption in Iran, with approximately 50% of the water coming from tapping underground aquifers. This excessive groundwater extraction has led to widespread land subsidence, the cause of extensive land degradation and damage to human infrastructure, such as buildings and lifelines. Although these problems have been known for a long time, their spatiotemporal specifics have not yet been analyzed in a comprehensive way on a national scale. However, such information is needed to develop region-specific strategies for a more sustainable use of the available water resources. In this study we combine a variety of globally available Earth Observation (EO) data including MODIS, Sentinel-1/2, GRACE/-FO and ERA5-Land to analyze the spatiotemporal development of vegetation growth as an indicator of irrigation intensity in its relation to water availability.
Moreover, we investigate the relationship between groundwater usage and land subsidence with the goal of performing impact assessment with a high degree of spatiotemporal detail. Vegetation growth, its temporal dynamics and trends are analyzed with satellite remote sensing data of different resolutions and periods, using MODIS time series data for long-term analyses on a national scale and Sentinel-2 time series data for regional analyses of higher spatiotemporal detail covering the last 5 years. In this context we have explored and developed multiple approaches aiming at the differentiation between irrigated and rainfed agriculture based on vegetation growth dynamics and meteorological water availability derived from ERA5-Land reanalysis climate data (i.e. total precipitation, potential evaporation, temperature, and a derived aridity index). On a national scale we analyze the spatiotemporal development of agricultural areas based on remotely sensed vegetation growth dynamics, changes in hydrological water storage (GRACE/-FO) and meteorological conditions (ERA5-Land). Despite increasing hydrometeorological water scarcity, Iran has experienced an agricultural expansion of approx. 27,000 km² (9%) between 1992 and 2019 and an intensification of cultivation within existing agricultural areas, indicated by significant positive vegetation trends within 28% of the existing croplands (i.e., approx. 48,000 km²). At the same time, we observe a significant decrease in total water storage based on the GRACE/-FO data which is not mirrored by a decrease in meteorological water input, clearly indicating an unsustainable use of groundwater, which is mainly consumed for agricultural irrigation. Besides agricultural land, natural vegetation is also affected by increasing water shortage, showing declining trends in vegetation growth and a reduction in land cover from sparse vegetation to barren land within 40,000 km².
In order to better understand the spatiotemporal effects of water scarcity in Iran, we analyze Sentinel-1 data to quantify the relationship between excessive groundwater extraction used for irrigation and land subsidence. Multi-temporal InSAR analysis between 2014 and 2020 indicates that about 56,000 km² (3.5% of Iran’s land) is impacted by land subsidence, with roughly 3,000 km² experiencing severe rates of more than 10 cm/year. While less than 4% of land subsidence occurs in built-up areas, more than two-thirds of the subsiding land is found in agricultural areas. Notably, almost 40% of land subsidence occurs in irrigated lands, as indicated by the Global Food Security Support Analysis Data. Furthermore, nearly half of the annual total groundwater loss, amounting to 840 million cubic meters, occurs in irrigated lands, highlighting the critical role of agriculture as the primary driving factor for groundwater depletion, water scarcity and land subsidence. The central part of the country is notably affected, containing two-thirds of the depleted aquifers, with locations subsiding at rates >35 cm/yr. By combining the findings from the different EO sources, we aim at an objective impact assessment of the unsustainable water use on a national scale, characterized by high spatiotemporal detail and thus allowing for multi-scale analysis with variable spatial reference units, such as watersheds and administrative boundaries. As the primary impact we consider the continuous lowering of the groundwater level in large parts of the country. We analyze the resulting subsurface mass loss for the period between 2003 and 2019 as reflected in variations of the Earth's gravity field recorded by the GRACE/-FO mission. We combine the coarse-resolution estimations from GRACE/-FO data with higher resolution estimations of groundwater depletion based on subsidence analysis.
Our results show that the estimated subsidence rates across the country are associated with approximately 1.7 billion cubic meters of groundwater drawn annually from confined and semiconfined aquifers. They provide detailed insights into the relationships between vegetation development across Iran, meteorological water availability, and groundwater depletion and related land subsidence, which are closely linked to irrigation intensity. The high spatial and temporal resolution of our EO-based results allows for a detailed reconstruction of land use development in relation to groundwater use and the resulting impacts. Hence, they can support the development of sustainable management strategies at different administrative levels. Since our methodology builds exclusively on globally available EO data, it is also applicable to other regions worldwide facing similar conditions and problems.
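The conversion from InSAR subsidence rates to an annual groundwater-loss volume can be sketched as a back-of-the-envelope integration over subsiding zones. This assumes, as a strong simplification, that the subsided volume over compacting aquifers approximately equals the water volume released; the study's actual estimation is more involved:

```python
# Back-of-the-envelope sketch (not the authors' method): convert subsidence
# rates over compacting aquifer zones into an annual volume, assuming
# subsided volume ~ groundwater volume released (ignores elastic storage).

def annual_groundwater_loss(cells):
    """cells: iterable of (subsidence_rate_m_per_yr, area_m2) -> m3/yr."""
    return sum(rate * area for rate, area in cells)

# Two synthetic subsiding zones: 10 cm/yr over 100 km2, 2 cm/yr over 500 km2
zones = [(0.10, 100e6), (0.02, 500e6)]
loss_m3 = annual_groundwater_loss(zones)   # cubic metres per year
```

Summing such terms over all mapped subsiding cells is what scales a per-pixel InSAR product up to the national figures quoted above.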

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: Near Real Time Irrigation Monitoring, a Multi-Sensor Approach, at National Scale With Framework Agnostic Algorithm.

Authors: Beatrice Gottardi, Gianluca Brunelli, Erminia De Grandis, Simona Gargiulo, Camilla Della Monica, Pietro Sciusco
Affiliations: CGI, Ariespace, Planetek
Monitoring irrigated fields in near real-time is crucial for sustainable water management, agricultural productivity, and food security. It enables efficient allocation of water resources, optimizes irrigation practices, and reduces wastage, especially in water-scarce regions. Real-time insights support policymakers in planning water distribution and coping with challenges like salinization and infrastructure failures. This expertise is essential for climate change adaptation, addressing shifting rainfall patterns, and managing extreme weather impacts. Satellite remote sensing has demonstrated remarkable effectiveness in irrigation diagnostics and supervision, particularly in the context of mapping and monitoring of cultivated and irrigated areas using optical or radar sensors. Leveraging satellite and ground data enhances accuracy, ensuring scalable solutions for sustainable water use and improved food security. The objective of this work is to provide a model capable of detecting irrigation events using Sentinel-1 and Sentinel-2 observations. In detail, the analysis of different optical vegetation indices, such as the NDVI (Normalized Difference Vegetation Index), is effective in detecting discrepancies in the spectral signature of crops and thus in identifying irrigation events. However, the sensitivity of optical data to weather conditions and cloud cover requires the use of SAR (synthetic aperture radar) data, which have been shown to be useful for efficient irrigation monitoring due to the direct correlation between the SAR backscattering coefficient and the water content of the soil. Additional filters related to precipitation interference and water balance estimates were integrated using ERA5 climate reanalysis. A tree-like approach was then developed to detect irrigation events, paying particular attention to different land covers: bare soil, orchards and herbaceous crops.
The processing chain was implemented on the Insula platform to generate a mapping of irrigation events on a national scale, ensuring continuous monitoring of the territory. The Insula platform allows access to the full Copernicus archive, enabling time series analysis, along with in situ measurements uploaded to the platform for data validation. The study area is the Italian territory; validation was performed over various representative regions. The results show that the proposed model detects 85% of the overall irrigation events. The final goal is to provide end users with a reproducible software tool to generate scalable actual-irrigation maps at different latitudes with a weekly update frequency.
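A minimal sketch of the kind of tree-like rule described above: combine an NDVI jump (Sentinel-2) with a backscatter rise (Sentinel-1), filtered by rainfall (ERA5). All thresholds and inputs here are illustrative, not the operational ones:

```python
# Illustrative decision rule (invented thresholds, not the operational chain):
# rainfall first explains away the moisture signal; otherwise a SAR
# backscatter jump, or failing that an NDVI greening jump, flags an event.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def irrigation_event(ndvi_prev, ndvi_now, sigma0_rise_db, rain_mm):
    if rain_mm > 2.0:              # rainfall can explain the wet-soil signal
        return False
    if sigma0_rise_db >= 1.5:      # SAR backscatter jump -> wetter soil
        return True
    return (ndvi_now - ndvi_prev) >= 0.08   # optical greening fallback

event = irrigation_event(
    ndvi_prev=ndvi(0.30, 0.20), ndvi_now=ndvi(0.55, 0.20),
    sigma0_rise_db=0.4, rain_mm=0.0)
```

Branching first on land cover (bare soil, orchards, herbaceous crops), as the abstract describes, would simply select different threshold sets per branch.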

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone M)

Poster: An Evaluation of the Impact of Seasonal Land Cover Change on Evapotranspiration Estimates at the Catchment Scale in the Upper Gundar River Basin, Tamil Nadu, India

Authors: Akash Senthilkumaran, Richard Kelly
Affiliations: University of Waterloo
With a drainage area of approximately 5,690 km², the Gundar River Basin (GRB) stands as one of the most important river basins in the state of Tamil Nadu, India. Encompassing five districts and home to nearly two million people, the basin represents a diverse and complex hydrological landscape, particularly in the context of remote sensing. Agriculture is widespread in the GRB, with a wide variety of crops grown, such as coconut, chilies, and sugarcane, with paddy being the predominant crop. The most notable features of the river basin include the extensive presence of the groundwater- and moisture-consuming invasive species Prosopis juliflora (prosopis) and thousands of small-scale, surface-level rainwater storage structures known as tanks. In addition to prosopis, the region is also home to vast swathes of vegetation-free barren land. Prosopis presents an interesting conundrum, as it is heavily relied upon by the local population for firewood and charcoal, which serve as important sources of income. However, its parasitic impact on groundwater, which reduces water availability for agriculture, is widely acknowledged by local farmers and the scientific community. Thus, the flow of water through these heterogeneous land cover units results in a complex water balance within the region. Focusing on the upper Gundar River Basin (the study area), a 2,640 km² section of this vast landmass, this study examines evapotranspiration (ET), a key hydrological variable known to account for 50–60% of the outflux in the water balance. Point-scale estimates of ET rely on empirical methods using meteorological parameters, as well as in-situ measurements using instruments such as lysimeters and eddy covariance towers. These methods are expensive, labor-intensive, time-consuming, and lack adequate spatial and temporal coverage, making them unsuitable for studying large landscapes.
As a reliable alternative, approaches utilizing satellite imagery as the primary forcing data have been developed and extensively tested across the world. This study investigates the application of two models from the family of Surface Energy Balance (SEB) models: SEBAL (Surface Energy Balance Algorithm for Land) and METRIC (Mapping Evapotranspiration at high Resolution with Internalized Calibration). Both models solve the surface energy balance equation, with the energy flux terms computed through modules that take satellite imagery as input. With land cover type (class) being one of the key influences on ET, this study aims to estimate actual evapotranspiration (AET) across user-defined land cover classes, including water bodies, active agricultural lands (comprising tilled and harvested lands), croplands (primarily irrigated areas), Prosopis juliflora (prosopis), barren land, and exposed soil within the study area. To achieve the study’s objectives, a four-step process is employed. First, land cover classifications are conducted for mid-summer and the Northeast monsoon seasons in the years 2006, 2014, and 2021, representing three different decades, followed by a seasonal change detection analysis within these years. It is important to note that the Northeast monsoon serves as the primary source of water for domestic, industrial and agricultural activities within the region. Second, novel estimates of AET are generated using SEBAL, implemented through the pySEBAL tool developed by IHE Delft, for the same years and seasons. Third, AET ranges across different land cover classes are estimated and visualized. An inter-model comparison is then performed, using land cover classes as the unit of comparison, against EEFlux, an AET product based on the METRIC algorithm, owing to the lack of in-situ validation data.
This comparison uses statistical metrics such as the correlation coefficient (r), root mean squared error (RMSE), and mean values. Finally, mean net water transport to the atmosphere for each land cover class is calculated by multiplying the mean AET value by the corresponding area enclosed by the land cover class. The classification was performed using Google Earth Engine, with Landsat imagery serving as the primary input. The ASTER Global Digital Elevation Model (ASTER GDEM) was used to mask hills and hillshade regions, while built-up areas were excluded using the ESA WorldCover 10m v100 product. Clouds and cloud shadows were masked using the Quality Assessment (QA) band. Training samples for classification were manually created for all images using a suite of tools, including Google Earth, photographs collected during a field visit, and various Landsat false color composites. Support Vector Machine (SVM), Classification and Regression Trees (CART), and Random Forest (RF) algorithms were implemented, with the final maps chosen from the RF algorithm due to its superior accuracy metrics compared to the other two methods. The data inputs for pySEBAL include Landsat imagery; daily and instantaneous weather data, drawn from the GLDAS Noah Land Surface Model L4 3-hourly 0.25 x 0.25-degree V2.1 product (GLDAS_NOAH025_3H) and in-situ weather observations; and a digital elevation model, in our case ASTER GDEM. A key attribute of pySEBAL is its automated selection of hot and cold anchor pixels using the Normalized Difference Vegetation Index (NDVI) and surface temperature (Ts), significantly simplifying anchor-pixel selection and eliminating manual selection, which is time-consuming and error-prone. Following its development, pySEBAL has been tested extensively in various semi-arid environments.
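The automated hot/cold anchor-pixel selection that pySEBAL performs from NDVI and Ts can be illustrated roughly as below; the percentile thresholds are placeholders chosen for illustration, not pySEBAL's actual criteria:

```python
import numpy as np

def select_anchor_pixels(ndvi, ts):
    """Illustrative automated anchor-pixel selection in the spirit of pySEBAL.
    Cold pixel candidates: well-vegetated and cool; hot pixel candidates:
    bare and hot. The 5th/95th percentile cut-offs are assumptions."""
    cold = (ndvi >= np.nanpercentile(ndvi, 95)) & (ts <= np.nanpercentile(ts, 5))
    hot = (ndvi <= np.nanpercentile(ndvi, 5)) & (ts >= np.nanpercentile(ts, 95))
    return cold, hot
```

Any pixel flagged in both masks would indicate inconsistent inputs; in a typical scene the two candidate sets are disjoint because NDVI and surface temperature are anti-correlated.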
The most notable pattern detected in the seasonal land cover analysis was an increase in cropped areas and a consequent decrease in non-cropped areas during the monsoon. A general pattern observed in the inter-model comparison was that classes containing vegetation (croplands, prosopis) exhibited higher AET means than other classes. Across all years, the average correlation coefficient (r) and RMSE for summer were 0.8 and 1.2 mm/day, respectively, while for the monsoon season the averages were 0.5 and 0.85 mm/day. Although the linear relationship between the novel SEBAL outputs and EEFlux weakens during the monsoon season, the two estimates lie closer in absolute terms. An examination of the energy balance components, including soil heat flux and net radiation reaching the surface, revealed a decline during the monsoon season compared to summer, further supporting the usefulness of the approach. When considering net water transport, which accounts for the combined effects of AET and the area covered by a land cover class, a decrease was observed during the monsoon seasons of 2006 and 2021 compared to summer, whereas an increase was reported in 2014. In regions that are both agriculturally intensive and water-scarce, with heavy dependence on monsoon rainfall, water budgeting is crucial for optimal resource management. By identifying zones exhibiting high rates of evapotranspiration, we believe that corresponding actions taken through policies and interventions can help conserve water for various purposes, specifically agriculture. This study could provide insights into managing tasks such as tank maintenance and the periodic clearance of tracts of prosopis, activities currently undertaken by the state government in collaboration with NGOs, which have significant economic implications for the region.
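The per-class comparison metrics and the net water transport calculation described in the abstract can be sketched in a few lines; this is an illustrative reconstruction, not the study's code, and the unit convention (mm/day over km², returning m³/day) is an assumption:

```python
import numpy as np

def compare_aet(sebal, eeflux):
    """Per-class comparison of two AET arrays (mm/day) using the metrics
    named in the abstract: correlation coefficient r, RMSE, and means."""
    mask = np.isfinite(sebal) & np.isfinite(eeflux)
    s, e = sebal[mask], eeflux[mask]
    r = np.corrcoef(s, e)[0, 1]
    rmse = np.sqrt(np.mean((s - e) ** 2))
    return r, rmse, s.mean(), e.mean()

def net_water_transport(mean_aet_mm_day, area_km2):
    """Mean net water transport to the atmosphere for one land cover class:
    mean AET (mm/day) times class area (km^2). 1 mm of water over 1 km^2
    is 1000 m^3, so the result is in m^3/day."""
    return mean_aet_mm_day * area_km2 * 1000.0
```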

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: C.05.02 - POSTER - TRUTHS – Setting the gold standard reference for climate applications and satellite inter-calibration

The Traceable Radiometry Underpinning Terrestrial - and Helio-Studies, TRUTHS, mission will be a ‘metrology laboratory in space’, setting a ‘gold standard’ reference at unprecedented radiometric accuracy (goal 0.3%, k=2) in the UV-VIS-SWIR domain. TRUTHS is explicitly designed to re-calibrate itself in orbit directly against a primary standard of the international system of units (SI).

Carrying a cryogenic solar absolute radiometer and a hyperspectral imaging spectrometer, as well as a novel onboard calibration system, TRUTHS will enhance our ability to estimate the spatially and spectrally resolved Earth Solar Reflected Radiation Budget by up to an order of magnitude, through direct measurements of incoming and outgoing energy and through partnership with other missions. This exceptional accuracy is required to shorten the time to detect trends and to provide accurate, timely inputs about the Earth system to policy-makers and climate actions.

TRUTHS will effectively establish fiducial EO reference data in space, whose SI-traceability can be extended to other sensors through in-flight cross-calibration. This will be achieved directly via simultaneous observations and indirectly via vicarious calibration targets (e.g. CEOS), improving the services of other missions that deliver Essential Climate Variable products. As a travelling standard transferring SI-traceability, TRUTHS will contribute to harmonising contemporary and historical multi-platform climate data records.

The focus of this session is on the preparatory scientific and user application developments, reviewing current limitations and the opportunities offered by TRUTHS for EO metrology from space, as a climate-benchmark ESRB data record, in climate modelling, and for satellite inter-calibration.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: The ESA TRUTHS mission: a golden standard in flight for climate action

Authors: Marline Claessens, PhD Alessandro Zuccaro Marchi, Dr Thomas August, Andrea Marini, James Yardley, PhD Wilfried Glastre, PhD Bart Paijmans, PhD Rene
Affiliations: ESA
The TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio- Studies) mission is an operational climate initiative within ESA’s Earth Watch programme aimed at significantly improving our ability to estimate the Earth's radiation budget through direct measurements. The mission's primary objective is to establish a reference baseline measurement of the planet's state, which will allow for climate model improvements and provide observational evidence of climate change. TRUTHS will measure incoming and outgoing energy from the Earth system with high, SI-traceable accuracy, enabling the detection of climate trends in the shortest possible time and allowing for enhanced decision-making. The mission involves an agile satellite of 2 metric tonnes equipped with a payload formed by a Hyperspectral Imaging Spectrometer (HIS) and a Cryogenic Solar Absolute Radiometer (CSAR), traceable to SI standards. The two instruments are connected via a complex on-board calibration system which enables the HIS to be spectrally and radiometrically calibrated against the CSAR measurements, and hence traceable to SI units. The payload is designed to provide accurate, continuously calibrated datasets of solar and lunar irradiance and Earth-reflected spectral radiance. The goal is to achieve an absolute radiometric accuracy of 0.3% across a spectrum of 320–2400 nm, with a spectral resolution ranging from 0.1 to 6 nm over the covered wavelength range. The satellite, developed by an industrial consortium led by Airbus UK, will be launched in 2031 into a polar low Earth orbit (non-SSO at 620 km) and will provide, over 5 years of operations (extendable to 8), global, continuously sampled observations over land and ocean with a spatial sampling distance of 100 m and 200 m respectively. A high-resolution sampling distance of 50 m is guaranteed during inter-calibration events with other satellites and over stable targets on Earth, e.g. PICS/RadCalNet.
The industrial work, currently in Phase B2, focuses on performance demonstration and verification of critical payload technologies, aiming to reach TRL 6 by PDR. Performance is evaluated through rigorous simulations at both the payload and product levels. Additionally, the calibration method, addressing rigorous uncertainty analysis and tracing, involves collaboration with the metrological institutions NPL (UK) and PMOD (CH). The Ground Segment will consist of a Flight Operations Segment and a Payload Data Ground Segment, both located in the UK. The mission's data will be accessible via ESA's common services and will follow the ESA open-access data policy. There is a continuous and close dialogue with the TRUTHS science team, convened in a Mission Advisory Group, and science studies have been carried out in parallel to industrial activities to ensure support for the mission objectives. TRUTHS is part of the ESA Earth Watch programme and is currently subscribed to by the UK together with the Czech Republic, Greece, Romania, Spain and Switzerland. Following a successful TRUTHS for Climate workshop held in 2024, a dialogue with institutional and national potential users of TRUTHS data is ongoing.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: The Traceable Radiometry Underpinning Terrestrial and Helio Studies mission (TRUTHS): enabling a new generation of SI traceable radiometric observations.

Authors: Luca Corpaccioli, Hazel Wood, Isabel Moore, David
Affiliations: Airbus
Climate change will be one of the defining issues for humanity in the coming decades, and tackling its effects will require reliable models that can predict future trends in our environment. A top priority is therefore to develop and maintain such models, allowing policy makers to direct resources in the right direction. Paramount to these climate models is the need to keep them fed with high-quality remote sensing data: the more accurate the data, the longer the time horizon of the predictions and the greater their reliability. For remote sensing data, absolute radiometric accuracy (ARA) is perhaps the most important, and most challenging, performance metric to obtain. In the past, radiometric observations have always been provided by a multitude of sensors on various satellites, all of which exhibit different biases and error sources. In fact, even seemingly identical flight models of the same instrument flying on different satellites have been shown to give ever-so-slightly different measurements when observing identical targets. There is now a strong need to tie such measurements together, as well as to improve the overall radiometric accuracy. The TRUTHS spacecraft (Traceable Radiometry Underpinning Terrestrial- and Helio- Studies), now in its implementation phase, will tackle both issues, with a goal of reaching a radiometric accuracy of 0.3%. This is enabled by the first of its two instruments: an SI-traceable Cryogenic Solar Absolute Radiometer (CSAR), which serves as the lynchpin against which to calibrate all observations made by the second instrument, a hyperspectral imaging spectrometer (HIS). Eventually, TRUTHS will form the first of a new generation of ‘SITSats’ (SI-traceable satellites), with the ability to bring together observations from a variety of different sensors.
This paper will offer an introduction to this complex mission, tasked with making Earth, solar and lunar observations, as well as performing collocated observations with other satellites to allow their sensors to be cross-calibrated. It will also give a description of the satellite as a whole, including an overview of its complex payload and the platform that supports it.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) – A ‘gold standard’ reference imaging spectrometer to support the climate emergency

Authors: Professor Nigel Fox, Dr Thomas August, Dr Paul Green, Mr Andre Marini, Professor John Remedios
Affiliations: National Physical Laboratory, ESA ESTEC, University of Leicester (NCEO)
Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) is a hyperspectral satellite mission explicitly designed to become a ‘gold standard’ reference for observing the state of the Earth’s climate in the short-wave domain in support of the climate emergency. The UK-led mission, under development within the ESA Earth Watch programme (https://www.esa.int/Applications/Observing_the_Earth/TRUTHS), was conceived at the UK national metrology institute, NPL, more than 20 years ago in response to challenges highlighted by the world’s space agencies, through bodies such as CEOS, in relation to interoperability and accuracy. This led to the initial ‘calibration focus’ of the mission and the vision of creating an in-orbit SI-traceable reference, a ‘metrology/standards laboratory in space’. Such SI-traceable satellites are now being called SITSats. As the climate emergency emerged as a global priority, the value of TRUTHS’ unprecedented observational capabilities across the whole short-wave spectral domain, in addition to the enhancement of other missions by reference calibration, led the needs of climate to become an explicit priority. The results of the 2007 US decadal survey helped to frame the most demanding observational objectives for TRUTHS towards addressing radiation balance and climate sensitivity resulting from feedbacks, e.g. cloud and albedo. It also initiated the long-standing partnership with the US sister mission CLARREO and the current CLARREO Pathfinder mission.
What will TRUTHS do? The high accuracy of TRUTHS, together with its spectral and spatial resolution, facilitates a new epoch in how the Earth is observed, delivering data not constrained to a single discipline but deliberately specified so that it can be configured to support applications in, and at the boundaries of, land, ocean and atmosphere to meet the exacting needs of climate.
Encompassing its own ‘metrology laboratory in space’, TRUTHS’ sensors are regularly calibrated in-flight against a primary SI standard. This ensures that, unlike other satellite sensors, TRUTHS’ ability to detect long-term changes and trends will not be constrained by sensor performance (e.g. drifts and biases), but only by the size of the trend above the background of natural variability. In this way it helps GCOS observational specifications to be achieved and offers the prospect of testing and constraining the forecasts of climate models in as short a time as possible. TRUTHS will establish a fiducial data set of incoming and outgoing solar radiation which will:
• Provide, directly and through reference calibration, an SI-traceable operational observational benchmark of the state of the planet’s short-wave incoming and reflected energy and its contribution to the radiative balance, including related forcings and feedbacks manifested within it. From this, human-induced climate trends can be detected in as short a timescale as possible, limited only by natural variability.
• Facilitate a transformation in the radiometric performance and functionality of current, future (and some heritage) Earth observing systems to meet the specific needs of climate, through an SI-traceable, high-accuracy reference calibration in orbit, ensuring robust coherence and interoperability. This is the foundation needed to establish an ‘integrated Earth observing system’ and associated Climate Data Records (CDRs).
• Deliver data of sufficient quality and flexibility to test and improve the retrieval of solar-reflective Essential Climate Variables (ECVs) (particularly the carbon cycle on land and ocean) and other operational applications and services.
• Provide a robust SI-traceable anchor to address the continued debate and uncertainty regarding the impact of solar radiation (spectral and total) on the atmosphere and consequently climate, in the near and medium term.
• Serve as an enabler for the growth of next-generation ‘micro-satellites’ by providing a reference calibration for sensors too small to carry robust calibration systems of their own.
Payload/observations: The main instrument of TRUTHS is a hyperspectral imaging spectrometer (HIS), with continuous spectral extent across the UV-visible-short-wave IR (320 nm to 2400 nm), capable of up to 50 m ground instantaneous field of view. The HIS observes climate-relevant processes related to the Earth’s atmosphere, oceans, land and cryosphere, as well as solar and lunar irradiance. The novel on-board calibration system seeks to enable all these observations to be made with a target uncertainty of ~0.3% (k=2) across the entire spectrum. At the heart of this on-board calibration system is the Cryogenic Solar Absolute Radiometer (CSAR), operating at temperatures below -200 °C. In common with similar instruments on the ground, it provides the direct link to SI. It also provides daily measurements of the total integrated energy reaching the Earth from the Sun with an uncertainty goal of 0.02% (k=2). The HIS will primarily observe the full Earth at nadir (pole to pole), the agile platform pointing to the Sun or Moon as it moves into the Earth’s shadow, to minimise observation gaps. On occasion, the platform will allow off-nadir pointing to match that of another sensor for simultaneous calibration and/or to characterise angular reflectance dependencies of the Earth’s surface. The orbit has been selected to be 90-degree precessing, non-sun-synchronous, with a repeat cycle of 61 days, although the swath allows at least monthly global coverage. Although adding complexity to the thermal control and power management, this orbit provides many opportunities to cross the paths of other satellites, enabling improved cross-calibration through simultaneity as well as diurnal sampling of the planet.
Applications: In addition to short-wave climate radiation benchmark applications, TRUTHS will play a strong role in support of climate action and net-zero ambitions. Its data will support the calibration and interoperability of GHG-monitoring satellites, land-use-change classification and natural sinks such as oceans and land vegetation. Although the observation cycle of TRUTHS is not optimised for time-critical applications such as agricultural monitoring, it supports these indirect applications by providing a high-accuracy reference to assess and improve retrieval algorithms and to improve and harmonise the performance of other sensors. Similarly, for the oceans, TRUTHS will have the capability to make GCOS-quality observations in both Case 1 and Case 2 waters without the need for post-launch system vicarious calibration, although not at the desired temporal frequency. It will, however, be able to complement the existing reference buoys, MOBY and BOUSSOLE, with calibrations to satellites made over different locations of the world’s oceans. In addition to its primary Top-of-Atmosphere Level 1 products, TRUTHS will also deliver a Level 2 global surface reflectance product with robust SI-traceable uncertainties.
Summary: This paper will provide an overview of the TRUTHS mission, starting with the metrological principle and evolution of the concept, through the science and operational drivers, an outline of the overall design, the route to flight, and the longer-term vision as a founding element of an integrated international climate observing system. Subsequent papers in the session will provide more specific details of the current design, anticipated performance and operational characteristics.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: What is a SITSat (SI-Traceable Satellite)?

Authors: Professor Nigel Fox, Dr Yolanda
Affiliations: National Physical Laboratory, NASA Langley Research Centre
An SI-Traceable Satellite (SITSat) is the name given to a new class of satellites expressly designed to achieve and evidence, in-flight, traceability to the international system of units (SI) with an uncertainty commensurate with the exacting needs of climate. For Earth-viewing optical (solar-reflective) imagers this requires an uncertainty at Top of the Atmosphere (ToA) in spectral radiance/reflectance of <~1% (k=2), with a goal of <0.5% (k=2). In addition to this unprecedented observational performance, SITSats can in general also provide an upgrade in the traceability and performance of other space instruments through reference calibration. In this way a SITSat provides a space-based version of a CEOS Fiducial Reference Measurement (CEOS-FRM) [1]. The number, range and criticality of applications of Earth-viewing optical sensors is increasing rapidly, not only from national/international space agencies but also through the launch of commercial constellations such as those of Planet, with the concept of Analysis Ready Data (ARD) reducing the skill needed to utilise the data. However, no one organisation can provide all the tools necessary, and the need for a coordinated, holistic Earth observing system has never been greater. Achieving this vision has led to international initiatives coordinated by bodies such as the Committee on Earth Observation Satellites (CEOS) and the Global Space-based Inter-Calibration System (GSICS) of WMO to establish strategies to facilitate interoperability and the understanding and removal of bias through post-launch calibration and validation. In parallel, the societal challenge resulting from climate change has been a major stimulus for significantly improved accuracy and trust in satellite data. Instrumental biases and uncertainty must be sufficiently small to minimise the multi-decadal timescales needed to detect small trends and attribute their cause, enabling them to become unequivocally accepted as evidence.
Current efforts to address the climate emergency similarly need trustworthy, comprehensive data in the near term to assess mitigation actions. The range of satellites launched to support these actions must be consistent and interoperable to avoid debate, confusion and, ultimately, excuses for inaction. In the longer term we need benchmarks of the state of the planet from which we can assess progress in as short a timescale as possible. Although there have been many advances in the pre-flight SI-traceable calibration of optical sensors in the last decade, unpredictable degradation in performance from both launch and the operational environment remains a major difficulty. Even with on-board calibration systems, uncertainties of less than a few percent are rarely achieved and maintained, and the evidential link to SI-traceability is weak. For many climate observations the target uncertainty needs to be improved ten-fold. However, this decade will hopefully see the launch of two missions providing spectrally resolved observations of the Earth at optical wavelengths, CLARREO Pathfinder on the International Space Station from NASA [2] and TRUTHS from ESA [3], to change this paradigm. Both payloads are explicitly designed to achieve uncertainties close to the ideal observing system, commensurate with the needs of climate, with robust SI-traceability evidenced in space. In this way they herald the start of the era of SITSats (SI-Traceable Satellites), and the requests of the international community [4,5,6] can start to be addressed. In this paper we will discuss the activities of the recently founded CEOS/GSICS task group on SITSats, established to provide clear definitions of what constitutes a SITSat and how to evidence it. In addition, we will explore a vision of what an operational SITSat-enabled global Earth observing system might look like and the steps needed to achieve it.
References
[1] Goryl, P.; Fox, N.; Donlon, C.; Castracane, P. Fiducial Reference Measurements (FRMs): What Are They? Remote Sens. 2023, 15, 5017. https://doi.org/10.3390/rs15205017
[2] https://clarreo-pathfinder.larc.nasa.gov/
[3] https://www.npl.co.uk/earth-observation/truths
[4] Strategy Towards an Architecture for Climate Monit... | E-Library (wmo.int)
[5] GCOS 200 ‘implementation plan’ (wmo.int)
[6] CEOS and GSICS report on SI-traceable space-based climate observing system, https://doi.org/10.47120/npl.9319

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone D)

Poster: Defining a TRUTHS inter-calibration strategy with a global end-to-end simulation

Authors: Javier Gorroño, Montserrat Piñol-Solé, Maddie Stedman, Nigel Fox, Luis Guanter, Thomas August, Thorsten Fehr
Affiliations: Universitat Politècnica de València, European Space Agency, National Physical Laboratory
The Traceable radiometry underpinning terrestrial and helio studies (TRUTHS) mission aims to achieve unprecedented accuracy for SI-traceable measurements of the Earth-reflected radiation and contribute to the development of a constellation of SI-traceable satellite (SITSat) missions. These accurate measurements will help reduce uncertainty in climate records and improve the calibration, directly or indirectly, of other Earth Observation optical missions. For the latter application, it is critical that the uncertainty in the calibration transfer is assessed and minimised where possible. This contribution presents the first prototype version of, and roadmap for, a TRUTHS end-to-end global inter-calibration simulator, which is expected to assess the uncertainty introduced during the calibration transfer for multiple scenarios and target satellites. The process of assessing the inter-calibration uncertainty begins with the definition of sensor-to-sensor match-ups through an orbital analysis. We define those match-ups based on common Earth areas observed by the different satellites within a certain time delay. Then, for each match-up, a top-of-atmosphere (TOA) radiance model is generated that considers the surface albedo, bidirectional reflectance, elevation, angular configuration and atmospheric parameterisation. Finally, we calculate the error between the target satellite and TRUTHS as a reference. The radiometric uncertainty is based on the combination of the different errors globally (e.g. a radiance curve fitting) that implicitly considers the interrelation of different error sources and match-ups. The first implementation takes the land acquisitions of TRUTHS against observations by the Copernicus Sentinel-2A satellite throughout the year. We have also restricted the sun zenith angle (SZA) to 60° to minimise solar angle and view azimuthal dispersion.
We have modelled the angular mismatch for both viewing differences and solar changes based on the overpass delays, and have studied different inter-calibration scenarios based on sets of temporal, angular, or cloud constraints. These first results show that, considering overpasses up to 15 minutes apart, low cloud probability, and a matching field of view (FoV) within 5°, we sample most land areas with a mean error <0.1% and a regression bias <0.5%. The temporal analysis of match-up opportunities indicates that these are concentrated within a month, twice a year. This scenario provides a good opportunity to monitor satellites twice yearly, which can be complemented with dedicated manoeuvres and/or dedicated processing of polar-region match-ups, which occur constantly throughout the year. The next steps of the implementation are to consider new sources of error, such as spectral mismatch, and to add new missions and scenarios. The TOA radiance model is undergoing further development, incorporating a more complete set of reanalysis products and finer hyperspectral modelling. Finally, these results will be validated with real data as close as possible to the TRUTHS scenario; we are considering the Sentinel-2, Landsat and EMIT missions to test the sensitivity of the simulated scenarios in a real context.
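A minimal sketch of the match-up screening step described above, assuming a hypothetical `Overpass` record and the thresholds quoted in the abstract (15-minute delay, viewing geometry within 5°, SZA capped at 60°); the cloud-probability threshold is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Overpass:
    time_s: float        # acquisition time over the common area, seconds
    view_zenith: float   # view zenith angle, degrees
    sun_zenith: float    # sun zenith angle, degrees
    cloud_prob: float    # cloud probability, 0..1

def is_matchup(ref: Overpass, target: Overpass,
               max_dt_s: float = 900.0, max_vza_diff: float = 5.0,
               max_sza: float = 60.0, max_cloud: float = 0.2) -> bool:
    """Accept a sensor-to-sensor match-up only when the temporal, angular
    and cloud constraints are all met (threshold values illustrative)."""
    return (abs(ref.time_s - target.time_s) <= max_dt_s
            and abs(ref.view_zenith - target.view_zenith) <= max_vza_diff
            and max(ref.sun_zenith, target.sun_zenith) <= max_sza
            and max(ref.cloud_prob, target.cloud_prob) <= max_cloud)
```

In a full simulator this filter would run over all candidate overlaps found by the orbital analysis before the TOA radiance model is evaluated for each surviving pair.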

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: C.03.02 - POSTER - Advances in the theory and methodology of SAR Interferometry and SAR Polarimetry

In 2003, ESA organised the first POLINSAR workshop, gathering the new community to present the findings of ESA-funded studies of SAR Polarimetry and Polarimetric Interferometry applications and to prepare recommendations for future research work. Today the POLINSAR workshop is an established platform where scientists meet and exchange their recent research advances, supporting new mission proposals or dedicated airborne campaigns to increase our knowledge. The applications of polarimetry and interferometry, separately or combined, range over a variety of domains, for example the biosphere (forest height, forest structure, forest disturbance, crop development stage classification over agricultural crops, etc.), hydrosphere (soil moisture), cryosphere (snow depth, ice structure), geosphere (enhanced deformation identification), urban areas (scatterer identification, building reconstruction) and many more.

We welcome contributions on, but not restricted to:
• New advances in Polarimetric SAR Interferometry: methods and applications
• Multi-baseline and TomoSAR: methods and applications
• Differential Polarimetric SAR Interferometry: methods and applications
• Airborne campaigns for polarimetric and interferometric SAR
• Future mission concepts related to polarimetry and interferometry
• Recent advances using AI for SAR mission concepts, methods and applications.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Time Series of Dual-Polarimetric SAR Measurements to Observe Liquefaction Surface Manifestations

Authors: Anna Verlanti, Ferdinando Nunziata, Nicola Angelo Famiglietti, Maurizio Migliaccio, Annamaria Vicari
Affiliations: Department of Engineering, Parthenope University Of Naples, Department of Engineering, Sapienza University of Rome, National Earthquake Observatory, National Institute of Geophysics and Volcanology, Institute of Marine Sciences, National Research Council, National Institute of Geophysics and Volcanology- Sezione Irpinia
Soil liquefaction is a phenomenon in which saturated soil temporarily loses its strength and behaves like a liquid, typically during an event like an earthquake. It occurs when ground shaking increases the water pressure between soil particles, decreasing the friction among them and causing the soil to lose its solid state [1]. Liquefaction can inflict massive damage on infrastructure, bridges and roads, posing severe risks to communities: when the soil loses its strength, the structures built on it may collapse, tilt, or become unstable [2]. Globally, 10-15% of Earth's land area is susceptible to liquefaction, mainly in earthquake-prone areas with loose, water-saturated soils. The hazard varies substantially by location, with the highest risks found along tectonic plate boundaries where seismic activity is frequent. Understanding and mitigating liquefaction risks is critical for the safety and stability of buildings and other structures in these regions. Soil liquefaction can be monitored using data collected from satellites, which provide global coverage for earthquake monitoring and enable precise tracking of ground movements. With high-resolution imaging radar technologies like SAR, satellites provide important information both during and after seismic events, supporting disaster management efforts and post-disaster recovery [3-4]. In this study, a methodology is used to identify areas affected by liquefaction using dual-polarimetric C-band Synthetic Aperture Radar (SAR) imagery. The processing chain consists of two steps: first, the areas involved in the liquefaction phenomenon are identified using a constant false alarm rate (CFAR) method applied to the SPAN metric, which is then ingested into a bi-temporal approach to discard spurious detections.
Second, the resulting masks are interpreted in terms of the physical scattering mechanisms using a parameter stemming from the eigendecomposition of the covariance matrix, namely the degree of polarization. The latter is evaluated over the co-seismic scenes and contrasted with the pre-seismic one to obtain coarse information on the time variability of the scattering mechanisms in the area affected by soil liquefaction. Since this phenomenon usually occurs suddenly and temporarily, after which the soil returns to a solid state, the proposed method is used to monitor the progression of the studied phenomenon. Experimental results, obtained by processing a time series of ascending and descending Sentinel-1 SAR scenes acquired during the 2023 Turkey-Syria earthquake, confirm the robustness of the proposed approach. References: [1] N. R. Council, Liquefaction of Soils During Earthquakes. Washington, DC: The National Academies Press, 1985. [2] S. B. Z., L. Goren, R. Toussaint, and E. Aharonov, “Drainage explains soil liquefaction beyond the earthquake near-field,” Nature Communications, vol. 14, 2023, Art. no. 5791. [3] A. Portillo and L. Moya, “Seismic risk regularization for urban changes due to earthquakes: A case of study of the 2023 turkey earthquake sequence,” Remote Sensing, vol. 15, 2023. [4] F. Dell’Acqua and P. Gamba, “Remote sensing and earthquake damage assessment: Experiences, limits, and perspectives,” Proceedings of the IEEE, vol. 100, no. 10, pp. 2876–2890, Oct. 2012.
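As an illustration of the quantities involved (a minimal sketch with toy numbers, not the authors' CFAR/bi-temporal processing chain), the SPAN and the degree of polarization both follow directly from a 2x2 dual-pol covariance matrix, the latter via its eigendecomposition:

```python
import numpy as np

def degree_of_polarization(C):
    """Degree of polarization from a 2x2 dual-pol covariance matrix:
    (l1 - l2) / (l1 + l2), with l1 >= l2 the (real) eigenvalues."""
    l = np.linalg.eigvalsh(C)          # ascending, real for Hermitian C
    return (l[1] - l[0]) / (l[1] + l[0])

def span(C):
    """SPAN (total backscattered power) = trace of the covariance matrix."""
    return np.real(np.trace(C))

# Toy dual-pol covariance (Hermitian, positive semi-definite)
C = np.array([[2.0, 0.5 + 0.2j],
              [0.5 - 0.2j, 1.0]])
dop = degree_of_polarization(C)
print(span(C))            # 3.0
print(0.0 <= dop <= 1.0)  # True: fully polarized at 1, depolarized at 0
```

A drop in the degree of polarization between pre- and co-seismic scenes would signal a change of scattering mechanism of the kind the abstract exploits.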
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: First Results on the Use of Deep Learning for Persistent Scatterers Detection

Authors: WEILI Tang, Simona Verde, Sergio Vitale, Giampaolo Ferraioli, Gilda Schirinzi, Vito Pascazio, Gianfranco Fornaro
Affiliations: University of Naples "Parthenope", CNR-IREA
Multipass/multitemporal Differential Interferometric SAR (MT-DInSAR) is a technique that has been hugely developed over the last decades, with many applications to risk monitoring, including landslide and infrastructure monitoring. With reference to these applications, which address the problem by processing the data at the highest spatial resolution, one of the problems is the discrimination of persistent scatterers (PS), for which the Parameters of Interest (PoI) are extracted (typically the residual topography and mean deformation velocity), from unreliable noisy pixels. This step is commonly referred to as PS detection. In the literature, methods founded upon the principles of classical radar detection theory, e.g. the Generalized Likelihood Ratio Test (GLRT) [1], have been proposed. The GLRT, which is framed in the context of SAR tomography, has been shown to outperform classical interferometric methods such as the one based on multitemporal interferometric coherence [2], thanks to its use of both the amplitude and the phase of the multitemporal signal. However, the need to keep the probability of wrongly detecting noise pixels as PS low leads to a loss of useful PS, due to the pixel-wise nature of the decision test. In this work we examine the potential of Deep Learning (DL) methods for PS detection, as an alternative to classical detection approaches able to overcome this loss. We chose U-Net as the primary network, leveraging an architecture known for high-precision, pixel-level image segmentation [3]. To generate labels for the network, we threshold the detection statistic at a value set by a fixed false alarm probability. We train the proposed U-Net with the classical segmentation loss, Binary Cross Entropy (BCE) [4]. Velocity and label images are fed into the U-Net, which outputs a prediction mask; this mask is then applied to the velocity image to extract the detected PS points.
We concentrate on the use of a DL approach for predicting the mask of detected PSs. Experiments have been carried out on COSMO-SkyMed data acquired in Stripmap mode over the Italian territory. One test site covers a hilly region in southern Italy (Lungro, Calabria region), chosen because of its complexity, resulting in low SAR data quality, which poses a critical challenge for PS detection. We used a small dataset to train the network, 560 samples in total. Despite the small number of samples, we achieved interesting results: both the training and validation losses were low. For the training evaluation we also achieved a good value of the Intersection over Union (IoU) [5], a typical parameter used for performance analysis. The accuracy (ACC) [5] likewise reached satisfactory values, demonstrating that U-Net-based PS detection is feasible. The masks predicted by U-Net showed a high degree of similarity with the labels. On one hand, the U-Net prediction removes many noise points, especially in highly decorrelated areas. On the other hand, over features typically characterized by the presence of PS, e.g. the built environment and non-vegetated areas, the detected PS density is generally higher than that achieved via the GLRT. To validate the generalization capabilities of the proposed network, we first trained and validated it using data from the Lungro study area and saved the trained model. When testing the network on data from another area (Salerno, Campania region), a comparison between the predicted results and those from traditional methods showed that the network detected more PS points while significantly reducing noise, demonstrating that the U-Net also exhibits remarkable flexibility.
To summarize, experimental results demonstrate that the proposed method achieves a high rejection of noisy pixels together with a significant increase in the number of reliable PS detections. The described methodology has been tested on X-band data. It has shown promising results that could be extended to SAR data collected by systems operating in other bands, e.g. C-band from Sentinel-1, or recent and new L-band missions such as SAOCOM and NISAR. Index Terms: Synthetic Aperture Radar (SAR), Multitemporal DInSAR, Deep Learning, Persistent Scatterers Detection. REFERENCES [1] A. De Maio, G. Fornaro, and A. Pauciullo, “Detection of Single Scatterers in Multidimensional SAR Imaging,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 7, pp. 2284–2297, Jul. 2009. [2] A. Ferretti, C. Prati, and F. Rocca, “Nonlinear Subsidence Rate Estimation Using Permanent Scatterers in Differential SAR Interferometry,” IEEE Trans. Geosci. Remote Sens., vol. 38, no. 5, pp. 2202–2212, Sep. 2000. [3] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18. Springer, 2015, pp. 234–241. [4] S. Jadon, “A survey of loss functions for semantic segmentation,” in 2020 IEEE conference on computational intelligence in bioinformatics and computational biology (CIBCB). IEEE, 2020, pp. 1–7. [5] R. A. Fisher, “The use of multiple measurements in taxonomic problems,” Annals of eugenics, vol. 7, no. 2, pp. 179–188, 1936.
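The two evaluation metrics named in the abstract, IoU and pixel accuracy for binary detection masks, can be sketched on a toy pair of masks (the arrays below are illustrative, not the authors' data):

```python
import numpy as np

def iou(pred, label):
    """Intersection over Union for binary masks."""
    inter = np.logical_and(pred, label).sum()
    union = np.logical_or(pred, label).sum()
    return inter / union if union else 1.0

def accuracy(pred, label):
    """Fraction of pixels classified correctly."""
    return (pred == label).mean()

pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)  # predicted PS mask
label = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)  # reference labels
print(iou(pred, label))       # 2 / 4 = 0.5
print(accuracy(pred, label))  # 4 / 6 ≈ 0.667
```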
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Forest Normalized Volume Profile estimation with the PolInSAR Two Layer Model

Authors: Alberto Alonso Gonzalez, Carlos López-Martínez
Affiliations: Technical University Of Catalonia (UPC), Insitut d’Estudis Espacials de Catalunya (IEEC)
Polarimetric Synthetic Aperture Radar (PolSAR) is able to extract additional information on the scatterers, related to their geometric and dielectric properties. When combined with interferometry (PolInSAR), sensitivity to the vertical distribution of scatterers is also achieved, which has already been used to extract physical information in forest scenarios [1]. Additionally, if several across-track baselines are available, SAR tomography may be applied, forming an additional synthetic aperture in the cross-range direction so that the vertical reflectivity profile can be estimated for every pixel. This technique has been applied to monitor forests [2] and agricultural fields [3]. The drawback of these techniques is the relatively large number of across-track baselines required to obtain a good estimation of the vertical reflectivity profile at each pixel, which complicates their application to space-borne missions. In natural scenarios a two-layer model may be applied, assuming that the radar response of vegetation can be separated into two layers: the ground and the vegetation on top. Recent studies have applied this concept to separate the two layers in terms of SAR tomography [4] and multibaseline PolInSAR [5]. The approach in [5] establishes a link between polarimetry and interferometry, and the polarimetric response of the two layers may be obtained by fixing the ground and volume interferometric coherences. In general, this inversion is ambiguous, and additional assumptions are required to obtain a unique solution. In this regard, the vertical reflectivity profile may be characterized by a limited number of parameters that can be extracted from the data. A physical uniform model is used in [1], parameterized by a height extent and an extinction parameter, resulting in an exponential profile.
Another approach is to use vertical reflectivity profile information from additional sources, as described in [6], where a LIDAR waveform or a mean normalized profile from SAR tomography is used. It is worth noting that this approach is very relevant for future SAR missions like ESA BIOMASS, where vertical reflectivity profiles could be acquired during the tomographic phase and applied later in the interferometric phase of the mission, where the number of available baselines is more limited. This work will analyze this concept in more detail by assuming and estimating a mean normalized vertical reflectivity profile in terms of the PolInSAR two-layer model. Thanks to the spatial variability in the image, the same normalized volume profile may be observed scaled to different heights. This fact may be used to perform both the separation of the ground and volume components and the estimation of the normalized profile shape for the volume component directly from the original data. Special focus will be given to the model assumptions, implications and limitations that might appear with a reduced number of across-track baselines, especially with a single baseline. The proposed technique will be evaluated with simulated data and with real data acquired during the AfriSAR 2016 campaign by the DLR F-SAR sensor at L- and P-band over tropical forests in Gabon. References [1] Cloude, S. R., & Papathanassiou, K. P. (2003). Three-stage inversion process for polarimetric SAR interferometry. IEE Proceedings-Radar, Sonar and Navigation, 150(3), 125-134. [2] V. Cazcarra-Bes, M. Pardini, M. Tello, K. P. Papathanassiou, “Comparison of Tomographic SAR Reflectivity Reconstruction Algorithms for Forest Applications at L-band” 2020-01, IEEE Transactions on Geoscience and Remote Sensing, 58(1), 147-164, 2020 [3] H. Joerg, M. Pardini, I. Hajnsek, K.P.
Papathanassiou, “3-D Scattering Characterization of Agricultural Crops at C-Band Using SAR Tomography”, IEEE Transactions on Geoscience and Remote Sensing, 56(7), 3976-3989, 2018 [4] Tebaldini, S. (2009). Algebraic synthesis of forest scenarios from multibaseline PolInSAR data. IEEE Transactions on Geoscience and Remote Sensing, 47(12), 4132-4142. [5] Alonso-Gonzalez, A., & Papathanassiou, K. P. (2018, June). Multibaseline two layer model PolInSAR ground and volume separation. In EUSAR 2018; 12th European Conference on Synthetic Aperture Radar (pp. 1-5). VDE. [6] Guliaev, R., Cazcarra-Bes, V., Pardini, M., & Papathanassiou, K. (2021). Forest height estimation by means of TanDEM-X InSAR and waveform lidar data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 3084-3094.
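The exponential profile of the uniform model in [1] leads to a closed-form volume-only coherence; a minimal numerical sketch (with assumed, purely illustrative values for canopy height, extinction, incidence angle and vertical wavenumber) shows the two effects PolInSAR exploits, coherence loss and an elevated phase centre:

```python
import numpy as np

def volume_coherence(h_v, sigma, theta, k_z, n=4096):
    """Complex volume-only coherence for an exponential vertical
    reflectivity profile f(z) = exp(2*sigma*z / cos(theta)) on [0, h_v]:
    gamma_v = sum f(z) exp(i k_z z) / sum f(z)  (discretized integral)."""
    z = np.linspace(0.0, h_v, n)
    f = np.exp(2.0 * sigma * z / np.cos(theta))
    return np.sum(f * np.exp(1j * k_z * z)) / np.sum(f)

# Assumed toy parameters: 20 m canopy, extinction 0.1 Np/m,
# 35 deg incidence, vertical wavenumber 0.1 rad/m
g = volume_coherence(h_v=20.0, sigma=0.1, theta=np.deg2rad(35.0), k_z=0.1)
print(abs(g) < 1.0)       # volume decorrelation: |gamma_v| below 1
print(np.angle(g) > 0.0)  # phase centre lifted above the ground
```

Fitting the observed complex coherences against such a parameterized profile is the essence of the inversion discussed above.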
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Estimating Sea-ice drift using deep-learning optical flow algorithm

Authors: Houda Hassini, Flora Weissgerber, Frederic
Affiliations: DTIS, ONERA, Université Paris-Saclay
Sea ice has a substantial climatic impact, affecting heat exchanges between the ocean and atmosphere, nutrient distribution, and marine ecosystem health. Significant changes in sea ice behavior have been noted in recent years. Research shows that the ice’s thickness and extent have diminished, with multi-year ice declining significantly and newly formed ice increasing [1]. This development has increased the overall fragility of the ice, making it more sensitive to extreme fluctuations, premature break-up, and rapid drifting of the pack ice [2]. The increased variability of sea ice conditions poses problems for human activities in the polar regions. It also hinders efforts to preserve polar ecosystems and complicates accurate modeling of ocean and climate systems. In light of these circumstances, accurately measuring Sea Ice Drift (SID) is a cornerstone for comprehending the new dynamics of sea ice. Precise SID measurements will enable us to identify trends, model the intricate interactions between ice, oceans, and the atmosphere, and inform about the environmental and economic implications, particularly regarding navigation at sea and the exploitation of resources. Synthetic Aperture Radar (SAR) represents a promising solution for SID measurement since it operates independently of weather and illumination conditions, offering high spatial and temporal resolution. Consequently, SAR images are particularly well suited to tracking sea ice movements on a fine scale with high temporal frequency. Indeed, emerging solutions derived from Sentinel-1 or RADARSAT-2 SAR images have been proposed, such as DTU-SID for the Arctic (10 km resolution) [3] or UTAS-SID for the Antarctic (kilometric resolution) [4].
While these products are useful for large-scale assessments, their coarse resolution limits the ability to accurately represent the intricate interactions within ice structures, such as fractures, compression zones, or local movements at the floe scale, often observable at a scale of hundreds of meters. Furthermore, these products use conventional maximum cross-correlation (MCC) techniques, which might not be appropriate for identifying small-scale and moving discontinuities. Therefore, higher-resolution SID products are required to give precise information about small-scale movements. In this context, optical flow methods stand out as a promising alternative to MCC methods. Unlike MCC, optical flow methods can estimate dense displacement fields at a sub-pixel level [5]. However, conventional optical flow methods often rely on naive assumptions, such as intensity conservation, which can limit their accuracy in the noisy or sparsely textured environments typical of SAR images. Deep learning-based optical flow techniques overcome this restriction by learning intricate representations, which greatly improves performance. These techniques have proven quite successful in motion estimation for natural image processing [6]. However, they have yet to be applied to SID estimation using SAR images, mainly due to the lack of a dataset at the necessary resolution. This work aims to propose a novel deep learning-based optical flow method for estimating SID that addresses two main challenges: 1. Lack of labels and data scarcity at the desired resolution for model training: labeled data for sea ice drift (SID) estimation at a fine resolution of 100 meters presents a significant challenge, primarily because deep learning approaches require SID labels at this resolution, which are not available.
We can develop an artificially annotated database by creating pairs of synthetic SAR images with diverse displacement scenarios. The dataset will include a variety of sea ice drift patterns and different SAR acquisition scenarios, carefully balancing the complexity and representativeness of the simulations with the volume of data needed for effective model training. However, simulated data may still be insufficient for training complex deep learning networks that generalize well to real-world situations. Adapted training strategies, such as semi-supervised learning or fine-tuning methods, will be employed to leverage unlabeled data. These strategies enable the pre-training of the network on synthetic data, followed by gradual adaptation to real data to minimize discrepancies between the distributions. 2. Multimodality for high temporal resolution (≤6h): To achieve SID measurement at a temporal resolution of less than 6 hours, exploiting multimodal data by combining SAR images (Sentinel-1) with optical images (Sentinel-2) is necessary. However, this approach introduces a first challenge linked to the heterogeneity of the modalities since the characteristics of the two types of images differ significantly (spatial and spectral resolution and type of signal). A multimodal network will be developed to meet this challenge, integrating specific encoders for each modality to extract relevant characteristics while harmonizing the representations in a shared space that enables the integration of radar and optical information. The second challenge is the quantity of data available for this task. The dependence of optical images on the lighting and weather conditions significantly limits the availability of SAR/optical image pairs. 
We propose a database enrichment model that generates SAR/optical pairs through a sensor change simulation. This approach enables the conversion of a SAR image into an equivalent optical image, mimicking what could have been captured by an optical sensor (or inversely), using optical or neural methods, allowing the creation of additional data to train the SID measurement model accurately. This work will be evaluated by benchmarking our SID estimation framework against existing products to validate the proposed methodology. This evaluation will consider the accuracy of SID retrieval, the reliability of uncertainty estimates, and the computational efficiency of the pipeline. References [1] R Kwok. Arctic sea ice thickness, volume, and multiyear ice coverage: losses and coupled variability (1958–2018). 13(10):105005. [2] Jack C. Landy, Geoffrey J. Dawson, Michel Tsamados, Mitchell Bushuk, Julienne C. Stroeve, Stephen E. L. Howell, Thomas Krumpen, David G. Babb, Alexander S. Komarov, Harry D. B. S. Heorton, H. Jakob Belter, and Yevgeny Aksenov. A year-round satellite sea-ice thickness record from CryoSat-2. 609(7927):517–522. [3] European Union-Copernicus Marine Service. Global ocean - high resolution SAR sea ice drift. [4] HEIL, PETRA and HYLAND, GLENN. Satellite-derived high-resolution sea-ice motion in the southern ocean, 2015-2019. [5] Zisis I. Petrou, Yang Xian, and YingLi Tian. Towards breaking the spatial resolution barriers: An optical flow and super-resolution approach for sea ice motion estimation. 138:164–175. [6] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. PWC-net: CNNs for optical flow using pyramid, warping, and cost volume. Version Number: 3.
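The maximum cross-correlation baseline the abstract contrasts against can be sketched as phase correlation (a Fourier-domain form of MCC) on a toy image pair; the random image and the (5, -3) pixel shift are purely illustrative:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation between two images via
    phase correlation: normalize the cross-power spectrum and find the
    correlation peak. This recovers only one rigid shift per window,
    unlike dense per-pixel optical flow."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:   # map wrapped indices to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))  # synthetic "drift"
print(phase_correlation_shift(shifted, img))  # (5, -3)
```

The single whole-pixel vector per window is exactly the limitation that motivates the dense, sub-pixel optical flow approach proposed above.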
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Deep-learning-based Wrap-count Segmentation Method for 2-D Phase Unwrapping of Large-scale Interseismic Interferograms

Authors: Mr Kun Jiang, Professor Wenbin Xu, Professor Andrew Hooper
Affiliations: School of Geosciences and Info-Physics, Central South University, COMET, School of Earth and Environment, University of Leeds
Phase unwrapping is one of the most important steps in InSAR data processing. However, it is often challenging to successfully unwrap low-coherence interferograms, leading to serious unwrapping errors. These errors are especially significant for large-scale slow deformation monitoring, such as interseismic deformation. Deep learning has provided a new route to reliable phase unwrapping. However, existing deep learning-based phase unwrapping methods are either combined with traditional algorithms (e.g. path-following integration, global optimization) or output unwrapped phases directly, resulting in long runtimes or phase incongruence. To unwrap such interferograms more intelligently and efficiently, we propose a wrap-count estimation method based on semantic segmentation networks to predict the wrap count of each pixel in SAR interferograms. After partitioning large-scale interseismic interferograms into overlapping patches, our method first uses the Segformer network to exploit features at different spatial scales and predict the wrap count of each patch accurately and robustly. The semantic segmentation network divides the wrap count into seven classes, where class 0 represents incoherent pixels, to automatically identify and mask decorrelated regions. A reliability metric is defined to evaluate the trustworthiness of the wrap-count predictions made by Segformer. Second, we use a DCNN to correct wrap-count prediction errors, or re-unwrap the patch using a traditional method, according to the reliability metric. Third, we mosaic the wrap-count patches, using the overlap between patches, to obtain the unwrapped result for the entire large-scale interferogram. To train the Segformer network adequately, we simulate more than 20000 data samples with different levels of decorrelation noise and include over 10000 selected interferograms from the LiCSAR platform to enrich the diversity of samples.
The results show that our method achieves a 31.4% RMSE reduction on the simulated interferograms compared with the classical minimum cost flow (MCF) method. The deep-learning-based method is further verified by applying it to the western Altyn Tagh Fault, where slow interseismic deformation, intense atmospheric delay, and decorrelation dominate the interferograms. We form 1295 triangular closure loops using the retained 729 unwrapped interferograms. The average error proportion (EP) of these loops is 0.81%, a 27.7% improvement in accuracy over MCF phase unwrapping (1.13%). Our method performs much better in the isolated patches separated by decorrelation noise in each interferogram. We also apply the method to the western section of the Haiyuan Fault. The RMS of 22 closure loops corresponding to interferograms with obvious unwrapping differences between methods shows that our method significantly reduces large-scale unwrapping errors. Therefore, the proposed deep-learning-based method improves the intelligence of big InSAR data unwrapping and will further promote applications in large-scale deformation mapping.
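The wrap-count formulation rests on the identity phi = psi + 2*pi*k, with psi the wrapped phase and k the integer wrap count the network predicts per pixel; a toy sketch (simulated ramp, not the authors' data) shows that correct wrap counts make unwrapping a per-pixel reconstruction:

```python
import numpy as np

# True (simulated) unwrapped phase ramp and its wrapped observation
phi = np.linspace(0.0, 12 * np.pi, 100)   # smooth deformation signal
psi = np.angle(np.exp(1j * phi))          # wrapped to (-pi, pi]

# The integer wrap count a segmentation network would be trained to predict
k = np.round((phi - psi) / (2 * np.pi)).astype(int)

# Given correct wrap counts, unwrapping reduces to per-pixel reconstruction
phi_rec = psi + 2 * np.pi * k
print(np.allclose(phi_rec, phi))  # True
```

Casting k as a class label per pixel is what turns unwrapping into the semantic segmentation problem described above.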
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Deformation Anomaly Detection Based on Dynamic InSAR Time Series

Authors: Yuqing Wang, Freek van Leijen, Ramon Hanssen
Affiliations: Delft University Of Technology
Urban resilience and decision making require the continuous monitoring of key safety parameters of buildings and infrastructure. The increasing availability of satellite synthetic aperture radar (SAR) acquisitions enables near real-time stability monitoring, especially in the built environment. Yet, standard methodologies for InSAR time series analysis are aimed at infrequent ‘batch’ processing, i.e., using all available acquisitions to estimate overarching displacement parameters that describe the behaviour of points over time. This approach is sub-optimal, as it hinders the early detection of anomalous behaviour, is computationally inefficient, and does not exploit the continuous flow of routine SAR updates. Monitoring, defined as the continuous update of previous results with the arrival of new acquisitions to detect possible changes in the dynamic behaviour of points, including subsequent decision making, is not inherently supported by such conventional InSAR time series analysis methodologies. Here we propose a novel approach for instantaneous parameter estimation and deformation anomaly detection considering the dynamic behaviour in the time series of InSAR point scatterers. The method employs a mathematical model for prediction and iteratively updates the model with new observations using recursive least-squares, thereby eliminating the need to store all previous observations. In terms of quality, the recursive solution performs similarly to the batch solution, while being more effective in capturing abrupt dynamic changes in behaviour. It also provides substantial improvements in computational efficiency. The dynamic model incorporates smoothness constraints on the displacement signal by defining two key parameters related to the instantaneous line of sight (LoS) acceleration, i.e., its standard deviation σ and its correlation length L.
The choice of a larger σ means that the object is expected to exhibit more ‘elasticity’ (less rigidity) and may show a greater range of variation in acceleration over a fixed time interval. Thus, choosing a small σ is equivalent to expecting rather smooth behaviour over time. A greater value of the correlation length L, for the same σ, is equivalent to expecting larger fluctuations in displacement and acceleration. These two parameters are selected based on prior knowledge, i.e., independently of the observational data, and are tuned for transparency and refutability. Using this approach, phase unwrapping becomes implicit. It is crucial to underline that the approach is not intended to provide the optimal representation of the data, i.e., its objective is not centred on ‘data fitting’. The anomaly detection in the recursive process employs a χ²-test to assess the stability of each arc between points over time. At each epoch, the predicted phase residuals are computed for multiple subsequent epochs, and observations from these epochs are incrementally integrated into the detection framework, thereby enhancing redundancy. The robustness of the anomaly detection is significantly improved by using both single and multiple update observations. The approach is applied and evaluated using Sentinel-1 SAR data covering Amsterdam, The Netherlands. The results indicate that incorporating multiple update observations improves the reliability of deformation anomaly detection. Additionally, the recursive process is shown to effectively detect deformation anomalies that might otherwise be missed, or falsely detected, in the batch process. The occurrence of a deformation anomaly at a given epoch is evaluated using multiple preceding observations, reducing the likelihood of missed detections and false alarms.
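The recursive least-squares idea, updating an estimate observation by observation without storing the history, can be sketched for a simple linear displacement model; this is a generic RLS measurement update under assumed toy numbers, not the authors' dynamic model with the σ and L constraints:

```python
import numpy as np

def rls_update(x, P, A, y, R):
    """One recursive least-squares step: update state x and covariance P
    with a new observation y = A @ x + noise (noise covariance R)."""
    S = A @ P @ A.T + R                # innovation covariance
    K = P @ A.T @ np.linalg.inv(S)     # gain
    x = x + K @ (y - A @ x)            # state update
    P = P - K @ A @ P                  # covariance update
    return x, P

# Fit displacement d(t) = v*t + c one epoch at a time (true v=2, c=1)
rng = np.random.default_rng(1)
x, P = np.zeros(2), np.eye(2) * 1e6    # vague prior on (v, c)
for t in np.linspace(0.0, 5.0, 60):
    A = np.array([[t, 1.0]])
    y = np.array([2.0 * t + 1.0 + 0.01 * rng.standard_normal()])
    x, P = rls_update(x, P, A, y, np.array([[0.01 ** 2]]))
print(np.round(x, 2))  # close to [2., 1.]
```

Because each update needs only the current (x, P) and the new observation, the storage and cost per epoch stay constant, which is the efficiency argument made in the abstract.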
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: A Novel Approach to Forest Height Estimation Using Gradient Boosting Technique and Pol-TomoSAR Data

Authors: Francesca Razzano, Wenyu Yang, Sergio Vitale, Giampaolo Ferraioli, Gilda Schirinzi, Silvia Liberata Ullo
Affiliations: University of Naples 'Parthenope', University of Sannio
Monitoring forest height is crucial for understanding the complex dynamics of forest ecosystems and their role in the global carbon cycle. Forests are among the most significant terrestrial carbon sinks, capturing large amounts of atmospheric carbon dioxide and thereby playing a central role in mitigating climate change. In addition to their function in carbon sequestration, forests provide a wide range of other ecological services, such as habitat for biodiversity, regulation of water cycles, and soil stabilization. Consequently, accurate and reliable monitoring of forest structure, including tree height, is essential for assessing forest health, productivity, and the overall carbon storage potential of these ecosystems. Traditionally, forest height has been measured through ground-based surveys, which, while highly accurate, are time-consuming, expensive, and logistically difficult to implement over large or remote areas. As forests often span vast and difficult-to-access regions, these traditional methods are geographically constrained, making it challenging to obtain comprehensive data at a global or even regional scale. This limitation has driven the development and adoption of remote sensing technologies, which offer a scalable and efficient means of collecting data across large areas. Yet despite significant advancements, challenges remain in achieving high accuracy and operational efficiency in forest height estimation across diverse forest types and terrains. SAR-based methods, including Polarimetric SAR Interferometry (PolInSAR) and Tomographic SAR (TomoSAR), have demonstrated their potential for reconstructing forest height profiles. PolInSAR utilizes phase coherence to infer canopy height, while TomoSAR reconstructs a 3D distribution of scatterers to achieve more detailed height estimations.
However, these techniques are often hindered by complex forest structures, signal noise, and the reliance on inversion models, which can lead to errors in challenging environments. Recent research has explored the integration of machine learning (ML) approaches with SAR and LiDAR data to overcome these limitations. Machine learning models, including generative adversarial networks (GANs), support vector machines (SVMs), and deep learning (DL) architectures, have shown promise in improving prediction accuracy. However, these methods are not without drawbacks. DL models, for instance, require extensive training datasets and significant computational resources, and are susceptible to overfitting when applied to heterogeneous forest environments. This necessitates the exploration of alternative ML algorithms that balance accuracy, efficiency, and scalability. The aim of this work is to introduce a novel methodology for forest height estimation that leverages the CatBoost machine learning algorithm in conjunction with Polarimetric Tomographic SAR (Pol-TomoSAR) data. CatBoost, a gradient boosting algorithm, is uniquely suited to handling heterogeneous datasets and complex features. Its innovative approach, which includes ordered boosting to mitigate overfitting and a highly efficient training process, makes it particularly advantageous for remote sensing applications. The main aim is to evaluate the effectiveness of CatBoost in predicting forest height by integrating Pol-TomoSAR-derived features with high-resolution LiDAR data for validation. The dataset used in this study is derived from the TropiSAR campaign, which provides fully polarimetric P-band SAR images of the Paracou region in French Guiana. This region, characterized by dense tropical forest canopies and relatively flat terrain, offers a challenging yet representative testbed for forest height estimation methodologies.
LiDAR measurements from the campaign are used as ground truth for validating the predictions. By combining the strengths of Pol-TomoSAR’s 3D reconstruction capabilities with CatBoost’s robust learning framework, the objective is to achieve accurate and computationally efficient forest height predictions. The methodology of the proposed study consists of several key steps. First, Pol-TomoSAR data undergo preprocessing to compute covariance matrices, which encapsulate polarimetric and interferometric features essential for height estimation. These features serve as inputs to the CatBoost algorithm, which is trained to predict canopy height models (CHMs) and digital terrain models (DTMs). Unlike traditional inversion models or deep learning frameworks, CatBoost operates with significantly reduced computational overhead, making it more feasible for large-scale applications. The model will be trained and validated using LiDAR-derived height values, allowing for a comprehensive assessment of its accuracy and robustness. The study goes beyond previous approaches by not only addressing the classification of forest height categories but also exploring regression tasks to predict continuous height values. This dual approach provides a more nuanced understanding of the algorithm’s performance and highlights its adaptability to different kinds of situations. The objective is to present results that indicate CatBoost outperforms traditional methods and deep learning architectures in terms of both accuracy and training efficiency. Furthermore, we aim to demonstrate that the model achieves lower root mean square error (RMSE) metrics and generates cleaner height maps with reduced noise and artifacts. Additionally, its ability to maintain accuracy across varying covariance matrix window sizes highlights its scalability and reliability. The implications of this work extend beyond the scientific community. 
Accurate forest height estimation is a cornerstone for carbon accounting, climate change mitigation, and biodiversity conservation. By offering a scalable and efficient methodology, our approach facilitates the operationalization of remote sensing technologies for forest monitoring on regional and global scales. This contributes to data-driven policymaking, supports international environmental agreements, and fosters collaborations between academia, industry, and governmental organizations.
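The covariance-matrix preprocessing step described above can be sketched in a few lines. The following is a minimal numpy illustration on simulated data; the stack dimensions, boxcar window, and real/imaginary feature flattening are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated multi-baseline SAR stack: N acquisitions of an H x W complex image.
N, H, W = 6, 32, 32
stack = rng.standard_normal((N, H, W)) + 1j * rng.standard_normal((N, H, W))

def covariance_features(stack, win=3):
    """Per-pixel N x N sample covariance matrix, estimated over a win x win
    boxcar window, flattened into a real-valued feature vector."""
    N, H, W = stack.shape
    pad = win // 2
    padded = np.pad(stack, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    feats = np.empty((H, W, 2 * N * N))
    for i in range(H):
        for j in range(W):
            patch = padded[:, i:i + win, j:j + win].reshape(N, -1)
            R = patch @ patch.conj().T / patch.shape[1]  # Hermitian N x N
            feats[i, j] = np.concatenate([R.real.ravel(), R.imag.ravel()])
    return feats

X = covariance_features(stack)        # (H, W, 2*N*N) feature cube
X_flat = X.reshape(-1, X.shape[-1])   # one row per range-azimuth pixel
print(X_flat.shape)                   # (1024, 72)
```

Each row would then be paired with a co-located LiDAR-derived height and passed to a gradient-boosting regressor such as CatBoost's `CatBoostRegressor` for training and validation.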
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Deep Learning-Based Phase Calibration of Airborne SAR Tomography

Authors: Roghayeh Zamani, Hossein Aghababaei, Giampaolo Ferraioli
Affiliations: University of Twente, Università degli Studi di Napoli Parthenope.
Synthetic Aperture Radar (SAR) tomography (TomoSAR) is a proven technique for capturing multi-dimensional insights into the Earth's surface. Its operation relies on the principle that the phase of the backscattered signal reflects the sensor-to-target distance. However, in airborne SAR applications, residual platform motion often introduces phase errors, leading to challenges like increased side-lobes or signal blurring, which hinder information retrieval [1]. In forested or snow-covered regions, additional decorrelation sources, such as temporal, geometric, and volumetric factors, further degrade signal-to-noise ratios (SNR). This makes phase calibration a critical step to mitigate these disruptions and ensure data accuracy. Numerous methods have been developed for phase calibration in tomographic SAR data. Techniques like ALGAE [2] use algebraic approaches to isolate ground contributions but struggle in dense forests. Autofocusing methods [1, 3], based on optimizing entropy or contrast in tomograms, are adaptable but computationally demanding with large datasets. Techniques like [4,5] consider baseline error models for motion errors to estimate phase error while requiring the phase triangularity assumption [4] to mitigate non-coherent scatterer effects. Nowadays, with the growing advancements in deep learning techniques and convolutional neural networks (CNNs) for remote sensing applications, we propose a novel CNN-based method for calibrating TomoSAR datasets. A key advantage of the proposed approach is its independence from coherent points, making it applicable to a broad range of environments, including forested and snowy regions. Moreover, this method operates within an unsupervised framework, eliminating the need for reference training data. The model is trained directly on the input TomoSAR data to predict phase errors using a carefully designed loss function that enforces a specific phase error model on the network's output. 
A critical aspect of this framework is its ability to bypass the need for generating training datasets comprising both distorted and clean TomoSAR data. In real-world scenarios, clean TomoSAR data are inaccessible, making traditional CNN-based methods that rely on such data impractical, or forcing them to depend on external sources of information to generate reference data. Instead, the proposed approach does not depend on external reference datasets or additional information, addressing concerns regarding the generalizability of deep learning models for phase calibration. Indeed, while state-of-the-art deep learning methods are typically trained on extensive datasets, their applicability can be constrained by discrepancies between training data and the actual application scenarios, such as differences in study areas, baseline distributions, look angles, and frequencies. In contrast, the proposed method offers a universally applicable framework for phase calibration across diverse TomoSAR datasets, irrespective of image characteristics, acquisition geometries, or study regions. The methodology is akin to unsupervised classification approaches, where the data itself is used to train a model capable of analyzing and calibrating the entire dataset. The proposed method was applied to the AfriSAR [6] dataset obtained from SETHI and compared against standard calibration techniques. Its performance was assessed through tomogram reconstruction along various range and azimuth profiles, with ground and canopy height models derived from LiDAR data overlaid onto the tomograms for validation. The results demonstrate the efficacy of the approach, underscoring its potential as a robust and user-friendly tool for phase calibration in TomoSAR data. Once trained on a universal set of tomographic images, the method can be readily deployed to streamline calibration processes. 
In particular, generating universal datasets from publicly available airborne SAR campaigns such as TropiSAR, UAVSAR, and others, and training the network on these datasets, makes it feasible to develop a ready-to-use tool for calibration applications. Experimental results, quality control measures, and evaluations will be presented at the conference. References [1] Aghababaee, H., G. Fornaro, and G. Schirinzi, Phase calibration based on phase derivative constrained optimization in multibaseline SAR tomography. IEEE Transactions on Geoscience and Remote Sensing, 2018. 56(11): p. 6779-6791. [2] Gatti, G., et al., ALGAE: A fast algebraic estimation of interferogram phase offsets in space-varying geometries. IEEE Transactions on Geoscience and Remote Sensing, 2010. 49(6): p. 2343-2353. [3] Pardini, M. and K. Papathanassiou, A two-step phase calibration method for tomographic applications with airborne SAR data. EUSAR 2014; 10th European Conference on Synthetic Aperture Radar, 2014. VDE. [4] Tebaldini, S., et al., Phase Calibration of Airborne Tomographic SAR Data via Phase Center Double Localization. IEEE Transactions on Geoscience and Remote Sensing, 2016. 54(3): p. 1775-1792. [5] Reigber, A., P. Prats, and J.J. Mallorqui, Refined estimation of time-varying baseline errors in airborne SAR interferometry. IEEE Geoscience and Remote Sensing Letters, 2006. 3(1): p. 145-149. [6] Casal, T., I. Hajnsek, M. Pardini, M. Jager, R. Horn, J. S. Kim, K. Papathanassiou, P. Dubois-Fernandez, X. Dupuis, and V. Wasik, Technical Assistance for the Development of Airborne SAR and Geophysical Measurements during the AfriSAR Experiment—Deliverable DD-4—Final Report, 2016.
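The core self-supervised idea is to score a candidate phase correction by how well it focuses the data, without any clean reference. A toy numpy sketch follows; the single-scatterer model, wavenumber values, and entropy criterion are illustrative stand-ins for the paper's CNN and loss function:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy tomographic model: N acquisitions observe one scatterer at elevation z0;
# kz are hypothetical vertical wavenumbers of the baseline stack.
N = 8
kz = np.linspace(0.0, 0.7, N)
z0 = 12.0
y_clean = np.exp(1j * kz * z0)              # ideal data vector
phase_err = rng.uniform(-1.5, 1.5, N)       # residual-motion phase errors
phase_err[0] = 0.0                          # reference channel
y = y_clean * np.exp(1j * phase_err)        # observed (miscalibrated) data

z_axis = np.linspace(0, 30, 301)
A = np.exp(1j * np.outer(kz, z_axis))       # beamforming steering matrix

def entropy(phi):
    """Shannon entropy of the beamforming tomogram after applying the
    candidate phase correction phi (lower = sharper = better focused)."""
    p = np.abs(A.conj().T @ (y * np.exp(-1j * phi))) ** 2
    p /= p.sum()
    return -np.sum(p * np.log(p + 1e-12))

# The true error vector focuses the tomogram better than no correction:
print(entropy(phase_err) < entropy(np.zeros(N)))   # True
```

In the proposed method a network predicts the correction, and a data-driven criterion of this kind, together with the assumed phase-error model, is enforced through the training loss rather than by direct search.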
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Can We Estimate Optical Vegetation Indices Using Sentinel-1 SAR Data and Machine Learning? A Case Study on Central European Temperate Forests

Authors: Daniel Paluba, Bertrand Le Saux, Francesco Sarti, Přemysl Štych
Affiliations: Department of Applied Geoinformatics and Cartography, Faculty of Science, Charles University, Φ-lab, Climate Action, Sustainability and Science Department (EOP-S), Earth Observation Programmes Directorate, European Space Agency (ESA/ESRIN), AI4EARTH, Earth Observation Programmes Directorate, European Space Agency (ESA/ESRIN)
Background: Multispectral Earth observation data has proven highly effective for monitoring forest ecosystems due to its relatively straightforward interpretation, long history of applications, and the availability of extensive and systematic historical archives dating back to the 1970s. Despite these significant advantages, its usability can be limited by factors such as cloud cover and atmospheric conditions. This issue is also prominent in temperate forest ecosystems in Europe, which are primarily located in mountainous regions. In these areas, cloud cover can create challenges in acquiring images regularly enough to timely detect phenological phases, disturbances and monitor their progression (e.g., forest fires, logging), or track subsequent recovery phases. In such cases, Synthetic Aperture Radar (SAR) data offer significant advantages: SAR can penetrate clouds and fog, is unaffected by sunlight, and enables image acquisition both day and night. This makes SAR data highly valuable for monitoring forest dynamics and changes. SAR is not only a powerful standalone data source but can also serve as complementary data to optical data, enriching the overall monitoring of forest ecosystems. Research objective: The main objective is to investigate whether SAR and ancillary data can provide enough information to accurately estimate vegetation indices (NDVI and EVI) and biophysical parameters (LAI and FAPAR) in both healthy and disturbed temperate forests in Czechia and across Central Europe. Methodology: First, using the Google Earth Engine tool to generate Multi-Modal Time Series datasets (MMTS-GEE), developed in Paluba et al. (2024), a paired multi-modal time series dataset including Sentinel-1 SAR, Sentinel-2 optical data, topographic and weather information was generated for healthy and disturbed coniferous forests and for broad-leaved forests in Czechia in 2021 (1800 validated areas in total). 
These time series were used to train and test (in a 70/30 ratio) regression algorithms to estimate optical-based vegetation metrics. The performance of fine-tuned machine learning (ML) algorithms was compared for optical-based vegetation metric estimation, specifically a traditional and widely used ML algorithm (Random Forest Regressor, RFR), a novel one with superior performance in recent studies (Extreme Gradient Boosting, XGB), and an Automatic Machine Learning (AutoML) approach, auto-sklearn. The inclusion of ancillary data (topographic and weather features) in the regression analysis was also evaluated. The results were evaluated statistically and through both time series and spatial analysis. The best-performing models were then used in a transferability test for the whole of Central Europe, covering around 1 million km². Here, the MMTS-GEE was tested for a wider area, where more than 7,000 Sentinel-1 and 5,000 Sentinel-2 image tiles were processed for 2021. Results and discussion: AutoML approaches, tested in this work, aim to simplify and lower the barriers to using ML while increasing efficiency by automating tasks such as algorithm selection, hyperparameter tuning and model optimization. However, fine-tuned traditional ML approaches (RFR and XGB) slightly outperformed AutoML ensemble models, achieving high accuracies (R² between 70-86%) and low errors, with XGB proving faster and more computationally efficient due to its early stopping mechanism. Extended AutoML optimization times (up to 12 hours) offered minimal or even negative performance gains beyond certain time thresholds (1, 10, and 60 minutes for EVI, for FAPAR and NDVI, and for LAI, respectively). Including the additional topographic and weather features improved the results by 3% and 1% on average in R², respectively. 
The SAR-based estimates demonstrated consistent seasonal patterns compared to optical-based metrics and were able to detect abrupt forest changes with sub-weekly temporal accuracy (within 4 days), providing up to 240 measurements per year. In comparison to Sentinel-2 data, this amounted to a 66-day temporal difference between pre- and post-disturbance events. Moreover, SAR-based estimations using the best-performing ML models over Central Europe achieved comparable results (R² between 54-79%) to those obtained in testing within Czechia, proving their transferability for large-scale modeling. Contributions to the field: This study demonstrates the potential of using SAR data, combined with ML, to estimate widely used optical vegetation indices and biophysical parameters with very high temporal resolution, offering year-round monitoring without the limitations of atmospheric effects. It provides a comprehensive analysis of the accuracy and transferability of the developed methods, which can benefit forest monitoring applications. The study also demonstrated more timely disturbance detection compared to optical-based vegetation metrics. Limitations and future work: The study was limited to the year 2021 and the Central European region. Extending the analysis to other geographic and climatic conditions, as well as incorporating data from upcoming SAR missions, could further improve the applicability and robustness of the presented approaches. References used in the abstract: Paluba, D., Le Saux, B., Sarti, F., Štych, P. (2024): Identification of optimal Sentinel-1 SAR polarimetric parameters for forest monitoring in Czechia. AUC Geographica 59(2), 1–15. DOI: 10.142/23361980.2024.18
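The evaluation protocol above (70/30 split, held-out R²) can be sketched as follows; the synthetic features and the least-squares model are stand-ins for the study's Sentinel-1 predictors and RFR/XGB/AutoML regressors:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: five predictors (think VV, VH, ratio, plus
# ancillary terms) and an NDVI-like target; purely illustrative.
n = 500
X = rng.normal(size=(n, 5))
ndvi = 0.4 + 0.2 * X[:, 0] - 0.1 * X[:, 1] + 0.05 * rng.normal(size=n)

# 70/30 train/test split, as in the study.
idx = rng.permutation(n)
tr, te = idx[:350], idx[350:]

# Ordinary least squares as a simple stand-in regressor.
Xb = np.c_[np.ones(n), X]
coef, *_ = np.linalg.lstsq(Xb[tr], ndvi[tr], rcond=None)
pred = Xb[te] @ coef

# Coefficient of determination (R²) on the held-out 30 %.
ss_res = np.sum((ndvi[te] - pred) ** 2)
ss_tot = np.sum((ndvi[te] - ndvi[te].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(r2 > 0.8)   # True: strong fit on this synthetic example
```

In the actual study the same held-out R² (together with error metrics and time-series/spatial analysis) is what the 70-86% and 54-79% figures refer to.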
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Polarimetric Measurements of SAR Data Products Using Reference Point Targets: Insights from Sentinel-1 and RADARSAT Constellation Mission

Authors: Kersten Schmidt, Jens Reimann, Sebastian Raab, Marco Schwerdt
Affiliations: Microwaves and Radar Institute, German Aerospace Center (DLR)
Polarimetric synthetic aperture radar (SAR) measurements are pivotal for enhancing Earth observation capabilities, particularly in applications requiring precise data, such as land cover classification, soil moisture estimation, and oceanographic studies. This study examines the polarimetric characteristics of two advanced C-band SAR missions: Sentinel-1 (S1) and the RADARSAT Constellation Mission (RCM), focusing on their calibration and channel balance using reference point targets. The study leverages active and passive reference point targets—trihedral corner reflectors and transponders—to evaluate the polarization characteristics for these missions. Sentinel-1 operates as a dual-polarization system, transmitting in either horizontal (H) or vertical (V) polarization and receiving both simultaneously, producing dual H (DH) or dual V (DV) SAR data products. Conversely, RCM employs a quad-polarization system able to acquire four polarization channels simultaneously (HH, HV, VH, VV). While RCM allows comprehensive polarization analysis covering all channel combinations within a single acquisition, Sentinel-1 requires several acquisitions with different transmission polarizations to enable similar evaluations. Precise calibration of polarimetric channels is crucial to ensure the accuracy and consistency of SAR data, achieved by analyzing the impulse response functions of reference targets derived from the SAR data. Key results of the study reveal low channel imbalances for both missions. Sentinel-1 exhibits mean imbalances below 0.2 dB, while RCM's imbalances average slightly higher but remain below 0.3 dB. Sentinel-1 evaluations incorporated data from diverse global calibration sites, including the DLR calibration field in Southern Germany, the corner reflector array in Surat, Australia, and the CSA transponder site in Montreal, Canada. 
RCM assessments, though focused on the CSA site, highlighted exceptional stability, with channel imbalance deviations below 0.1 dB for quad-polarization products. For Sentinel-1, dual-polarization channel deviations were similarly low (<0.2 dB), with slightly higher variations (up to 0.27 dB) observed when comparing polarization combinations from different acquisitions. The study’s methodological framework included leveraging longer time series acquisitions of both Sentinel-1 units (S1A and S1B) to analyze inter-channel consistency, compensating for the inherent limitations of the dual-polarization system. The inclusion of data from S1C, launched in December 2024, will further enrich the analysis. Transponders, designed and operated by the German Aerospace Center (DLR), played a central role in ensuring precise transmission and reception of SAR signals across polarization channels, facilitating detailed characterization of impulse responses. In conclusion, the polarimetric calibration results affirm the high accuracy of both SAR missions, with minimal polarization imbalances that are consistent across varying acquisition geometries and calibration targets. These findings not only demonstrate the stability and reliability of Sentinel-1 and RCM for operational applications but also provide a benchmark for future SAR mission designs. Future work aims to explore the influence of external factors on the channel imbalance, such as ionospheric effects or detected antenna mis-pointing. This research highlights the critical role of rigorous calibration using highly accurate reference targets to optimize the utility of polarimetric SAR systems for diverse scientific and operational purposes.
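At its core, the channel-balance evaluation compares the complex peak responses of a reference target across polarimetric channels. A minimal sketch with hypothetical peak values; the dB definition below is the standard amplitude ratio, not DLR's exact processing chain:

```python
import numpy as np

# Hypothetical complex peak responses of a trihedral corner reflector,
# extracted from the impulse response in the co-polar channels of
# several acquisitions (values are invented for illustration).
hh_peaks = np.array([1.02 + 0.01j, 0.99 - 0.02j, 1.01 + 0.00j])
vv_peaks = np.array([1.00 + 0.00j, 1.01 + 0.01j, 0.98 - 0.01j])

# Co-polar channel imbalance in dB per acquisition, then its mean magnitude.
imbalance_db = 20 * np.log10(np.abs(hh_peaks) / np.abs(vv_peaks))
mean_imb = np.abs(imbalance_db).mean()
print(mean_imb < 0.3)   # True: within the sub-0.3 dB level reported here
```

A trihedral reflector has a known, polarization-balanced co-polar response, which is why a residual HH/VV amplitude ratio measured over many acquisitions can be attributed to the instrument's channel imbalance.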
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: WAVETRAX: Water and Vegetation Tower Radar Experiments for Improved Climate Monitoring

Authors: Jaron Vandenbroucke, Emma Tronquo, Niko Verhoest, Diego Miralles, Lieven De Strycker, Liesbet van der Perre, Bert Cox, Chesney Buyle, Guus Leenders, Sébastien Lambot, Merlin Mareschal, Hans-Peter Marshall, Susan Steele-Dunne, Tobias Jonas, Hans Lievens
Affiliations: Hydro-Climate Extremes Lab (H-CEL), Ghent University, WaveCore-DRAMCO, KU Leuven, Earth and Life Institute, UCLouvain, Department of Geosciences, Boise State University, Department of Geoscience and Remote Sensing, TU Delft, WSL Institute for Snow and Avalanche Research SLF
Satellite data are indispensable for monitoring climate dynamics, yet current missions face challenges in capturing rapid Earth system processes. Moisture redistribution in the soil-plant-atmosphere continuum during transpiration, or structural changes in snow during rain-on-snow events or melt-freeze cycles unfold too quickly for satellites like Sentinel-1, which provide observations only every 2–6 days. Other processes, including dew formation or rainfall interception, can at best only be captured by a satellite observation at a single instant during the process evolution. New satellite missions are being proposed that aim to improve our understanding of these dynamic processes at the land surface, by acquiring active microwave observations at higher (e.g., sub-daily) temporal frequencies. However, ground-based studies are still needed to better understand the microwave responses to rapidly evolving environmental processes. The WAVETRAX (WAter and VEgetation Tower RAdar eXperiments) project aims to address this critical need by advancing our understanding of active microwave remote sensing responses to sub-daily land surface processes and land-atmosphere interactions. To achieve these objectives, WAVETRAX integrates sensor design, field experiments, and advanced modeling. Tower-based radar sensors operating in L-, C-, and Ku-bands will be deployed at three diverse sites: Fermes Universitaires de Louvain in Belgium, the Idaho Rocky Mountains in the USA, and Weissfluhjoch in Switzerland. These sensors will acquire sub-hourly backscatter time series, supported by in situ measurements of a large variety of soil, vegetation, and snow properties. These observations will help investigate microwave scattering signatures linked to processes such as soil moisture redistribution, dew formation, rainfall interception, and snow metamorphism. 
Subsequently, the project will investigate different pathways, including radiative transfer models, machine learning, and data assimilation techniques, to improve the retrieval of Essential Climate Variables (ECVs) such as surface and root-zone soil moisture, biomass (or vegetation optical depth as a proxy), snow water equivalent (SWE), and land evaporation, which may ultimately contribute to improved climate monitoring capability. Here, we provide an overview of the context, objectives, and methodology of the WAVETRAX project, which only recently started in March 2024. Over the coming years, as the project evolves, it is expected to produce a unique dataset comprising multi-frequency/polarization radar and in situ measurements, validated retrieval algorithms, and enhanced insights into microwave scattering mechanisms. Furthermore, WAVETRAX will provide observational ground-based support for potential applications of current and prospective satellite missions, such as Sentinel-1, ROSE-L, TSMM, HYDROTERRA, and SLAINTE, and may contribute to improved monitoring of soil, vegetation and snow properties, which can benefit water resource management, agriculture, and renewable energy production.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: DEEP LEARNING SEGMENTATION APPROACH FOR FOREST HEIGHT RETRIEVAL WITH MULTI-CHANNEL SAR DATA

Authors: Wenyu Yang, Sergio Vitale, Hossein Aghababaei, Giampaolo Ferraioli, Vito Pascazio, Gilda Schirinzi
Affiliations: Università degli Studi di Napoli "Parthenope", University of Twente
Deep learning has emerged as a transformative approach in big data analysis, gaining traction across various domains, including remote sensing, due to advancements in data availability and computational power. Neural networks, with their ability to hierarchically abstract complex patterns, serve as robust nonlinear models capable of representing intricate mathematical relationships [1]. Consequently, deep learning (DL) methods have become integral to remote sensing tasks [2], particularly Synthetic Aperture Radar (SAR) image processing [3]. Researchers have further explored DL’s potential to advance 3D Tomographic SAR (TomoSAR) reconstruction techniques. Early DL-based methodologies for TomoSAR applications, especially in urban settings [4], approached scatterer position estimation as a classification problem. In this paradigm, discrete elevation positions within a scene corresponded to classification labels, enabling scatterer location determination in range-azimuth cells. In polarimetric SAR tomography, the PolGAN method [5] reframed forest height estimation as an image pan-sharpening problem. By integrating PolInSAR and LiDAR inputs with adversarial feedback from PolGAN discriminators, this method achieved high-resolution forest height estimates with enhanced vertical accuracy. Inspired by these advancements, recent efforts have shifted toward leveraging DL for more robust height reconstruction. For instance, a pixel-wise Tomographic SAR Neural Network (TSNN) [6] demonstrated the adaptability of DL in height estimation tasks. Building on TSNN’s foundational concepts, this study introduces a more flexible and spatially adaptive framework: multiChannel SAR foresT height reconStruction Neural nETwork (CATSNET). Designed to process multi-channel data, CATSNET accommodates both multi-baseline (MB) and multi-polarization (MP) inputs. 
Unlike conventional pixel-wise methods that rely solely on local correlations between heights and covariance matrices, CATSNET employs a patch-based convolutional neural network (CNN) model. This enables the integration of spatial context and neighboring pixel information, significantly improving height estimation stability and reducing susceptibility to noise. CATSNET is tailored to address two primary objectives: 1. Leveraging patch-based features to enhance robustness over pixel-wise methods. 2. Ensuring generalization to diverse geographic regions. The inclusion of neighboring spatial information allows the model to better capture the inherent spatial correlation in ground and canopy height profiles, improving the accuracy and adaptability of height estimation across various regions. To achieve this, training patches are annotated with quantized LiDAR-derived heights, such as canopy height models (CHM) and digital terrain models (DTM), reframing the height estimation problem as a segmentation task. The CATSNET architecture is based on a U-Net structure, with a contracting encoder path to extract hierarchical features and an expansive decoder path to restore spatial resolution. The final output layer employs a 1×1 convolution to map 32-dimensional feature vectors into class predictions representing forest or ground heights. The proposed model was evaluated using data from the Paracou site, surveyed during the TropiSAR campaign over French Guiana’s forests in 2009 [7]. Small-footprint LiDAR data was used to construct the training dataset by extracting polarimetric and interferometric elements from the covariance matrix R for each range-azimuth pixel in the multi-channel SAR stack. Experimental results demonstrated that CATSNET outperformed existing methods in height reconstruction, closely aligning with LiDAR reference data while minimizing outliers. 
Its patch-based approach proved particularly effective, achieving superior accuracy in estimating both forest and ground heights. These findings establish CATSNET as a reliable, adaptable, and generalizable framework for SAR-based height reconstruction tasks. To date, experiments have been conducted using airborne SAR data, as spaceborne SAR systems typically have long repeat acquisition intervals, which can result in significant forest decorrelation. However, upcoming missions such as NASA’s NISAR, ESA’s Biomass, and DLR’s Tandem-L are expected to provide low-frequency data and improved acquisition timelines, mitigating temporal decorrelation issues. The proposed method presents a promising approach for retrieving forest information with high accuracy and efficiency, making it highly suitable for applications on a global scale. [1] Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016, http://www.deeplearningbook.org. [2] Xiao Xiang Zhu, Devis Tuia, Lichao Mou, Gui-Song Xia, Liangpei Zhang, Feng Xu, and Friedrich Fraundorfer, “Deep learning in remote sensing: A comprehensive review and list of resources,” IEEE Geoscience and Remote Sensing Magazine, vol. 5, no. 4, pp. 8–36, 2017. [3] Xiaoxiang Zhu, Sina Montazeri, Mohsin Ali, Yuansheng Hua, Yuanyuan Wang, Lichao Mou, Yilei Shi, Feng Xu, and Richard Bamler, “Deep learning meets SAR: Concepts, models, pitfalls, and perspectives,” IEEE Geoscience and Remote Sensing Magazine, 2021. [4] A. Budillon, A. C. Johnsy, G. Schirinzi, and S. Vitale, “SAR tomography based on deep learning,” Proceedings of IEEE International Geoscience and Remote Sensing Symposium, pp. 3625–3628, 2019. [5] Qi Zhang, Linlin Ge, Scott Hensley, Graciela Isabel Metternicht, Chang Liu, and Ruiheng Zhang, “PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 186, 2022. [6] Wenyu Yang, Sergio Vitale, Hossein Aghababaei, Giampaolo Ferraioli, Vito Pascazio, and Gilda Schirinzi, “A deep learning solution for height estimation on a forested area based on Pol-TomoSAR data,” IEEE Transactions on Geoscience and Remote Sensing, 2023. [7] Pascale C. Dubois-Fernandez, Thuy Le Toan, Sandrine Daniel, Hélène Oriot, Jérôme Chave, Lilian Blanc, Ludovic Villard, Malcolm W. J. Davidson, and Michel Petit, “The TropiSAR airborne campaign in French Guiana: Objectives, description, and observed temporal behavior of the backscatter signal,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 8, pp. 3228–3241, 2012.
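The relabelling step that turns height regression into segmentation can be sketched as follows; the 2 m bin width, patch size, and synthetic canopy height model are assumptions for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical LiDAR canopy height model for a small scene (metres).
chm = np.clip(rng.normal(25, 8, size=(64, 64)), 0, 60)

# Quantize continuous heights into class labels so that height estimation
# becomes a segmentation task, as described above (2 m bins assumed).
bin_width = 2.0
labels = np.floor(chm / bin_width).astype(np.int64)   # class id per pixel

def extract_patches(img, size=16, stride=16):
    """Non-overlapping label patches for training a patch-based CNN."""
    return np.stack([img[i:i + size, j:j + size]
                     for i in range(0, img.shape[0] - size + 1, stride)
                     for j in range(0, img.shape[1] - size + 1, stride)])

y_patches = extract_patches(labels)
print(y_patches.shape)   # (16, 16, 16): 16 patches of 16 x 16 class labels
```

Each label patch would be paired with the corresponding patch of covariance-matrix features, and the U-Net's per-pixel class predictions are mapped back to heights via the bin centres.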
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: A Quality-Driven Network for InSAR Time-Series Analysis of Coherent Scatterers

Authors: Wietske Brouwer, Dr Freek van Leijen, Prof Ramon Hanssen
Affiliations: Delft University of Technology
InSAR can be used as a geodetic technique from which displacements of (objects on) the earth’s surface can be derived. These displacements are estimated from the original double-difference phase, i.e., the spatial phase difference between two scatterers and the temporal phase difference relative to a reference epoch. Presenting the results solely for arcs provides the cleanest results. However, to enhance interpretability, displacement estimates are typically expressed per point, either point scatterers (PS) or distributed scatterers (DS), relative to a reference point. The simplest approach is a star network, where all points connect directly to the reference point. Yet this method often produces errors when the arc length increases, due to increased atmospheric phase delays, and any testing between arcs is impossible since there is no redundancy. Creating a network of interconnected arcs mitigates these errors and allows for detecting and adjusting erroneous estimates. Many InSAR algorithms use a Delaunay network, which is conveniently available, but maximizes the smallest angle of the triangles in the network, an irrelevant metric for InSAR. Moreover, it does not account for the physical, kinematic, or quality characteristics of scatterers. As a result, low-quality points may be connected, producing poor-quality arcs and allowing errors to propagate throughout the network. We developed a new method for network construction based on point quality. This approach builds on the method we developed to compute the time-dependent stochastic model for the phase observations of individual scatterers. By estimating point quality in advance and incorporating a distance-dependent component to address atmospheric variability, we assess the arc quality before we estimate the parameters per arc. This gives insight into which arcs have a higher potential for successful parameter estimation. We start by evaluating the highest-quality arcs within an area to form the initial network. 
For all points in this network, we estimate unknown parameters such as the cross-range component, thermal displacement, and displacement time series. Arcs are iteratively added until specific quality criteria are met. These criteria will be application-dependent. Finally, after the adjustment of the initial network, more points can be added to the network, again selecting the best connections based on arc quality. A key difference from conventional methods is that we incorporate arc quality beforehand rather than evaluating it retrospectively, once all parameters have been estimated. This prevents the removal of many arcs in post-estimation, which often weakens conventional networks. Additionally, since we adjust the network on the estimated height, thermal component, and displacement time series per epoch, different functional models can be applied to individual arcs, which is essential in areas with many different deformation phenomena, such as urban environments. Lastly, using error propagation, we compute the quality of the estimated parameters, resulting in more reliable estimates.
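The quality-driven selection can be sketched as follows: predict each arc's double-difference variance from its two point variances plus a distance-dependent atmospheric term, then grow the network from the best arcs. All numbers and the additive variance model below are illustrative, not the authors' stochastic model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical scatterer coordinates (metres) and per-point phase
# variances (rad^2), the latter estimated in advance from the data.
n_pts = 30
coords = rng.uniform(0, 5000, size=(n_pts, 2))
var_pt = rng.uniform(0.05, 0.8, size=n_pts)

def arc_variance(i, j, atmo_per_km=0.02):
    """Predicted double-difference variance of arc (i, j): the two point
    variances plus a distance-dependent atmospheric term (illustrative)."""
    d_km = np.linalg.norm(coords[i] - coords[j]) / 1000.0
    return var_pt[i] + var_pt[j] + atmo_per_km * d_km

# Rank all candidate arcs by predicted quality (best first) and grow the
# network greedily, instead of taking a Delaunay triangulation as given.
arcs = sorted((arc_variance(i, j), i, j)
              for i in range(n_pts) for j in range(i + 1, n_pts))
best_var, i, j = arcs[0]
print(best_var <= arcs[-1][0])   # True: arcs are processed best-first
```

Because the predicted arc variance is available before estimation, poor arcs are simply never added, rather than being estimated and then discarded in post-processing.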
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Assessing Slope Instabilities Related to Glacier Retreat and Associated Impacts on Alpine Infrastructure Using InSAR Technology

Authors: Zahra Dabiri, Daniel Hölbling, Elena Nafieva, Vanessa Streifeneder, Lorena Abad, Florian Albrecht
Affiliations: Department of Geoinformatics – Z_GIS, University of Salzburg, Spatial Services GmbH
The alpine infrastructure of trails and huts is a crucial asset for tourism in the Austrian Alps. However, it is increasingly affected by gravitational mass movements and slope instabilities, such as landslides, debris flows, and rockfalls. These hazards are often driven by glacier retreat and associated periglacial processes, including permafrost degradation. To address these challenges, Alpine associations require comprehensive and reliable information about these natural hazards to better manage their infrastructure. Over the past decade, Interferometric Synthetic Aperture Radar (InSAR) time-series techniques, such as Persistent Scatterer Interferometry (PSI) and the Small Baseline Subset (SBAS) approach, have become essential tools for analysing surface deformation and terrain instability. These techniques utilize Synthetic Aperture Radar (SAR) data to derive time-series information from the phase of Persistent Scatterers (PS). The primary objective of this study is to demonstrate the application of advanced InSAR time-series analysis for monitoring slope instabilities and surface deformation in high mountain areas associated with glacier retreat, and to gain a deeper understanding of spatio-temporal changes within glacier forelands. We utilized Sentinel-1 SAR data collected between 2015 and 2024, employing both ascending and descending orbits, to measure surface deformation near alpine infrastructure in selected areas in Salzburg and Tyrol, Austria. To process and analyse the data, we employed the Alaska Satellite Facility's (ASF) on-demand product processing system, based on the Hybrid Pluggable Processing Pipeline (HyP3), for searching, processing, and downloading Sentinel-1 time-series data. Additionally, we used the Miami INsar Time-series software in PYthon (MintPy) for cloud-based SBAS processing using an unwrapped interferogram stack derived from Sentinel-1 time-series data.
The initial results provide detailed insights into slope instabilities and surface deformation in landscapes affected by glacier retreat. Moreover, we investigated which factors and technical limitations influence the quality of the SBAS and PSI results. Information derived from InSAR provides valuable insights into ground stability and enhances our understanding of the impacts of climate-induced glacier retreat. Our findings demonstrate that advanced InSAR techniques can be applied for the continuous monitoring of alpine infrastructure, such as trails and huts. Such monitoring is essential for supporting sustainable development, promoting alpine tourism, and enhancing natural hazard assessments.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: A Novel Deep-Learning-Based Framework for InSAR Parameters Enhancement

Authors: Luca Dell'Amore, Giovanni Costa, Daniel Carcereri, Begüm Demir, Dr. Paola Rizzoli
Affiliations: German Aerospace Center (DLR), Politecnico di Milano, Technical University Berlin
Synthetic Aperture Radar (SAR) is nowadays a very powerful remote sensing tool at the global scale for the study of the geo-, cryo-, and biospheres. In particular, SAR Interferometry (InSAR) is widely used to perform complex tasks, such as the retrieval of the Earth's surface topography and the monitoring of its deformations. To this aim, the valuable information about the Earth's surface is encoded in the phase difference between two coherent SAR complex images, acquired from two slightly different positions or at different times [1, 2]. However, due to the SAR image formation process, several error sources may impact the quality of the interferogram and, by extension, most InSAR-related applications. In this framework, denoising strategies are required in order to generate reliable, high-quality InSAR products that preserve phase details, such as fringes and edges, while suppressing the intrinsic interferometric noise. An accurate estimation of the interferometric parameters, i.e. the InSAR phase and coherence, is therefore crucial for most InSAR-related applications, ranging from the generation of Digital Elevation Models (DEMs) to forest mapping and snow/ice parameter retrieval. Besides the standard approach, consisting of a boxcar filter, which performs a maximum likelihood estimation based on a moving average operation [3], more sophisticated algorithms, based first on non-local filters [4, 5] and then on convolutional neural networks (CNNs) [6, 7, 8], have been proposed in recent years in order to reduce the noise level while preserving the original spatial information. Nevertheless, significant artifacts in the interferometric phase and a persistent underestimation of the filtered coherence are typically observed [8] when compared to the boxcar filter, suggesting that the theoretical formulation of the problem is not fully correct.
The proposed work focuses on establishing a complete and generalized theoretical and statistical model to physically describe and simulate a noise-free interferogram, starting from knowledge of the InSAR acquisition geometry and of the underlying topography. Real InSAR data, acquired by the TanDEM-X mission with the same geometry as the simulated data, can then be used to generate a synthetic dataset according to the proposed statistical model, in which noisy interferograms are paired with their noise-free versions. Moreover, the proposed work investigates the potential of CNN-based architectures for the estimation of high-resolution, high-quality InSAR parameters, i.e. coherence and interferometric phase, using the proposed statistical model. In particular, a deep-learning (DL) model is trained in a fully supervised fashion by utilizing the noise-free synthetic interferograms as reference data. Finally, we present a detailed assessment of the reconstruction/denoising performance of the network by comparing it to state-of-the-art denoising filters, such as boxcar [3], InSAR-BM3D [5] and Φ-Net [8]. Preliminary results are promising and already show the added value of the proposed theoretical modifications with respect to the state-of-the-art literature.
References:
[1] Bamler, Richard, and Philipp Hartl. "Synthetic aperture radar interferometry." Inverse Problems 14.4 (1998): R1.
[2] Rosen, Paul A., et al. "Synthetic aperture radar interferometry." Proceedings of the IEEE 88.3 (2000): 333-382.
[3] Seymour, M. S., and I. G. Cumming. "Maximum likelihood estimation for SAR interferometry." Proceedings of IGARSS'94, IEEE International Geoscience and Remote Sensing Symposium. Vol. 4. IEEE, 1994.
[4] Deledalle, Charles-Alban, et al. "NL-SAR: A unified nonlocal framework for resolution-preserving (Pol)(In)SAR denoising." IEEE Transactions on Geoscience and Remote Sensing 53.4 (2014): 2021-2038.
[5] Sica, Francescopaolo, et al. "InSAR-BM3D: A nonlocal filter for SAR interferometric phase restoration." IEEE Transactions on Geoscience and Remote Sensing 56.6 (2018): 3456-3467.
[6] Zhang, Kai, et al. "Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising." IEEE Transactions on Image Processing 26.7 (2017): 3142-3155.
[7] Zhang, Kai, Wangmeng Zuo, and Lei Zhang. "FFDNet: Toward a fast and flexible solution for CNN-based image denoising." IEEE Transactions on Image Processing 27.9 (2018): 4608-4622.
[8] Sica, Francescopaolo, et al. "Φ-Net: Deep residual learning for InSAR parameters estimation." IEEE Transactions on Geoscience and Remote Sensing 59.5 (2020): 3917-3941.
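The boxcar baseline [3] against which the more sophisticated filters are compared is simply a moving-average maximum-likelihood estimate of phase and coherence over a local window. A minimal NumPy sketch; the window size, padding mode, and test data are illustrative choices, not those of the study:

```python
import numpy as np

def smooth(x, win):
    # 2-D moving average with edge padding (slow reference implementation)
    pad = win // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + win, j:j + win].mean()
    return out

def boxcar_insar(s1, s2, win=5):
    """Boxcar estimate of interferometric phase and coherence from two
    co-registered complex SAR images s1, s2 (win x win average)."""
    ifg = smooth(s1 * np.conj(s2), win)       # complex interferogram
    p1 = smooth(np.abs(s1) ** 2, win)          # local power, image 1
    p2 = smooth(np.abs(s2) ** 2, win)          # local power, image 2
    coherence = np.abs(ifg) / np.sqrt(p1 * p2)
    phase = np.angle(ifg)
    return phase, coherence

# Sanity check: a constant phase offset gives coherence 1 and that phase.
rng = np.random.default_rng(1)
s1 = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
phase, coh = boxcar_insar(s1, s1 * np.exp(-0.5j))
print(phase.round(2).max(), coh.round(2).min())  # → 0.5 1.0
```

The averaging trades spatial resolution for noise reduction, which is exactly the limitation the CNN-based estimators above aim to overcome.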
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Application of Sentinel-2, EnMAP and Sentinel-1 data in accurate crop classification: an analysis for the JECAM area in Poland

Authors: Msc Konrad Wróblewski, PhD Ewa Panek-Chwastyk, Prof Katarzyna Dabrowska-Zielinska, PhD Szymon
Affiliations:
The comparison of crop classification in the Polish JECAM area based on Sentinel-2, EnMAP and Sentinel-1 data is an important step in research on the application of satellite technology in precision agriculture. JECAM (Joint Experiment for Crop Assessment and Monitoring) test areas play a key role in crop monitoring and in the analysis of sustainable agricultural land management. Crop classification is one of the most important tools for assessing the dynamics and condition of agricultural landscapes, especially in the context of global challenges such as climate change, soil degradation and increasing demand for food. The aim of this study was to compare the effectiveness of crop classification based on three different satellite data sets: Sentinel-2, EnMAP and Sentinel-1. Sentinel-2 provides multispectral images with a spatial resolution of 10-20 m, depending on the spectral band, and covers 13 broad bands across the visible, near-infrared (NIR) and shortwave infrared (SWIR) regions of the electromagnetic spectrum. With spectral bandwidths ranging from 15 to 180 nm, these data are often used in land cover analysis and have the advantage of high image acquisition rates, allowing changes over time to be monitored. The revisit time is 5 days with the two Sentinel-2 satellites, enabling frequent monitoring of dynamic changes in the landscape. EnMAP, on the other hand, is a hyperspectral mission providing much finer spectral detail, with around 242 narrow bands spanning from 420 nm to 2450 nm, a spectral resolution of 6.5-10 nm and a revisit time of 27 days. While its spatial resolution is lower, at 30 meters per pixel, the rich spectral data allow the identification of subtle differences in vegetation and soil composition, making the data particularly useful for precise classification. Sentinel-1 offers data independent of weather and lighting conditions.
This makes it indispensable for monitoring crops in difficult conditions, such as cloud cover, which is a common problem in northern Europe. The study employed advanced machine learning techniques, including Random Forest algorithms, for data analysis. Both individual datasets and their combinations were evaluated to assess the potential advantages of their integration. The classification process utilized reference data obtained from ground-based phenological observations and field inventories conducted as part of the JECAM programme. Classification accuracy was evaluated using the Kappa index and the Overall Accuracy metric to assess model reliability and predictive accuracy. The analysis showed that Sentinel-2 data achieve high accuracy in classifying major crop groups such as cereals, legumes and root crops. However, their effectiveness decreases under conditions of intense cloud cover, as well as for crops with similar spectral characteristics. EnMAP proved particularly effective in distinguishing between crops that are difficult to identify with multispectral data, such as different varieties of legumes or specific types of speciality crops. Sentinel-1, with its ability to record data regardless of atmospheric conditions, enabled efficient mapping of vegetation structure and moisture content. Its usefulness was particularly evident during periods of frequent cloud cover and for areas with intense structural heterogeneity. In summary, Sentinel-2 multispectral data prove to be a valuable source for crop monitoring under typical weather conditions. EnMAP, with its unique spectral properties, offers invaluable capabilities for analysing more complex crop structures, while Sentinel-1, with its weather independence, is an essential tool in situations where optical data are unavailable.
The combination of these data sources produces the best results, highlighting the importance of their integration in precision agriculture research and sustainable land management monitoring. This study is an important contribution to the development of satellite technology in agricultural monitoring and environmental management, providing practical guidance for future activities within the JECAM and Copernicus programmes. The results can also be used to develop more precise agricultural management strategies in Poland and around the world, supporting a sustainable approach to food production in the face of global challenges.
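The two accuracy measures used in the study have closed forms over a confusion matrix: Overall Accuracy is the trace divided by the total, and Cohen's Kappa corrects the observed agreement for chance agreement. A sketch with an invented 3-class matrix (the numbers are purely illustrative, not the study's results):

```python
import numpy as np

def overall_accuracy(cm):
    # Fraction of samples on the diagonal (correctly classified)
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical confusion matrix for three crop groups
# (rows: reference cereals/legumes/root crops; columns: predicted)
cm = np.array([[50, 3, 2],
               [4, 40, 6],
               [1, 5, 39]])
print(round(overall_accuracy(cm), 3), round(cohens_kappa(cm), 3))
```

Kappa is lower than Overall Accuracy whenever some agreement could arise by chance, which is why the study reports both.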
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Advanced InSAR Long-Term Time Series with Flatsim service for Large-Scale Geosphere Applications

Authors: Philippe Durand, Claude Boniface, Sebastien Gaugain, Clément Schaettel, Eric Maesen, Marie-Pierre Doin, Erwan Pathier, Franck Thollard, Bertrand Lovery, Raphaël Grandin, Emilie Deschamps-Ostanciaux, Elisabeth Pointal
Affiliations: Cnes, Sopra Steria Group, ISTerre, IPGP
Since the launch of Sentinel-1 (S1) in 2014, the Copernicus programme has established and consistently maintained an open-access, free-of-charge distribution policy for Earth Observation satellite data. The huge influx of spatial imaging data necessitates an adaptation of the methods used for handling the data and associated derived products, as new tools and services must be developed to access, store, manage and use them, including by non-expert users. In France, the "FLATSIM" service (FormaTerre LArge-scale multi-Temporal Sentinel-1 InterferoMetry) aims at providing ground motion products to users through a massive S1 data processing chain using multi-temporal Interferometric Synthetic Aperture Radar (InSAR). The service, based on the New Small temporal and spatial Baseline (NSBAS) chain, was developed in the framework of the national data hub Data Terra FormaTerre, resulting from a partnership between the French Space Agency (CNES) and scientific teams from several French research institutions involved in the development of InSAR tools. FLATSIM is designed to facilitate Earth science research within the French FormaTerre scientific group and its collaborators, as well as the broader Data Terra Research Infrastructure community. Its primary focus is to support projects encompassing regions larger than 250,000 km² over the full S1 data archive (since October 2014). The FLATSIM service contributes to addressing several scientific topics, including seismology, tectonics, volcanology, and the hydrological cycle, among others. Two announcements of opportunity selected 18 projects over large areas in 2020 and 2022; a third will be launched in the first semester of 2025. CNES has implemented and parallelized the NSBAS chain on its High Performance Computing (HPC) infrastructure, taking advantage of the fact that CNES mirrors the entire Sentinel-1 SLC data archive from the European Space Agency (ESA).
Examination of several years of data for long-term time series purposes requires specific tools designed to assist Principal Investigators (PIs) in effectively analyzing the entire IW Sentinel-1 archive for their research projects, prior to processing thousands of interferograms. Incremental strategies were implemented to complement time series after a first run. When working with long-term time series, particular attention must be paid to phase biases to ensure accurate inversion of successive interferograms and displacement maps; long-term interferograms help mitigate these biases. Valuable results reveal slow motion along faults at the mm/year scale. Challenges for displacement maps over vegetated areas will be highlighted.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Advances in the Application of Artificial Intelligence for InSAR Data Analysis in Landslide Studies

Authors: Wilson Ramos Aragão Júnior, Roberto Quental Coutinho, Antonio Miguel Ruiz-Armenteros
Affiliations: Department of Civil and Environmental Engineering, Federal University Of Pernambuco, Department of Cartographic, Geodetic and Photogrammetry Engineering, University of Jaén, Centre for Advanced Studies in Earth Sciences, Energy and Environment (CEACTEMA), University of Jaén, Research Group RNM-282 Microgeodesia Jaén, University of Jaén
Landslides are among the most frequent and destructive natural disasters, causing widespread social, economic, and environmental damage. Triggered by natural factors such as heavy rainfall and earthquakes, as well as human activities like deforestation and unplanned urbanization, landslides pose significant risks to infrastructure and vulnerable communities worldwide. Their impacts extend beyond the immediate loss of human lives, encompassing infrastructure destruction, economic disruptions, and irreversible environmental damage that exacerbates issues like soil erosion. Monitoring and mitigating these hazards are critical to reducing their devastating effects and ensuring the safety and sustainability of affected communities. Synthetic Aperture Radar Interferometry (InSAR) has emerged as a vital technology for landslide monitoring, offering high-precision detection of ground displacements and valuable insights into mass movement dynamics. However, despite its many advantages, InSAR faces challenges such as temporal decorrelation in densely vegetated or rapidly changing areas, as well as the complexity of processing and interpreting the vast volumes of data it generates. These limitations highlight the need for innovative approaches to enhance the effectiveness of InSAR in landslide studies. Artificial Intelligence (AI) has emerged as a transformative tool in this context, offering advanced data processing and analytical capabilities to address many of InSAR's inherent limitations. By integrating AI with InSAR, it becomes possible to automate processes, improve accuracy, and gain a deeper understanding of landslide dynamics, ultimately advancing disaster prevention and mitigation strategies. This study presents a systematic review of the current state of the art in AI applications for InSAR data analysis in landslide monitoring. 
The research methodology followed the PRISMA guidelines for systematic reviews, ensuring a rigorous and transparent approach to identifying relevant scientific works. Comprehensive searches were conducted in databases such as Scopus using keywords like “InSAR,” “artificial intelligence,” and “landslide.” Articles were filtered to include peer-reviewed studies in English that directly addressed the integration of AI and InSAR for landslide monitoring. Through detailed analysis and selection, the study prioritized works that explored AI’s role in processing and analyzing InSAR data, with a focus on practical applications and advancements. The findings revealed several innovative applications of AI techniques to enhance InSAR-based landslide monitoring. One key area is the processing of large data volumes, where AI-driven platforms such as the AI Earth Cloud system have significantly reduced computational time, enabling faster analysis of extensive satellite datasets. Another crucial application is data modeling and prediction, where machine learning algorithms, including advanced deep learning techniques like MSFD-Net and ECA U-Net, have demonstrated remarkable improvements in identifying ground deformation patterns and trends indicative of potential landslides. Hybrid models such as CSDI further enhance predictive capabilities by integrating geotechnical, climatic, and environmental data, providing a more comprehensive understanding of landslide dynamics. AI also plays a pivotal role in multisensor data integration, enabling the fusion of data from diverse sources, such as optical imagery and radar data, to improve the accuracy and reliability of analyses. This multisensor approach broadens the scope of monitoring efforts, offering a more holistic assessment of landslide risks. Moreover, AI applications have enabled the automation of data analysis processes, significantly reducing the need for manual intervention and enhancing the efficiency of monitoring systems. 
These advancements not only streamline workflows but also facilitate real-time geohazard assessments, making AI an indispensable tool for landslide risk management. Despite these promising developments, challenges remain in the application of AI to InSAR data analysis. The development of more robust algorithms and improved techniques for data fusion is essential to address issues such as noise, decorrelation, and the detection of non-linear ground movements. Additionally, interdisciplinary collaboration among geotechnical engineers, computer scientists, and disaster management professionals is critical to implementing regional-scale early warning systems and maximizing the potential of AI-driven solutions. The integration of AI with InSAR represents a significant advancement in landslide monitoring, providing enhanced processing capabilities, predictive insights, and automation that collectively improve the identification of high-risk areas and the prediction of deformation trends. Future developments should focus on the creation of intelligent monitoring systems that combine InSAR data with information from in-situ sensors, numerical models, and AI-based analytics. These systems have the potential to revolutionize landslide monitoring by optimizing resources, enhancing disaster preparedness, and minimizing risks to infrastructure, communities, and the environment. This study underscores the transformative potential of AI in advancing geotechnical engineering practices and enhancing the resilience of infrastructure in the face of natural disasters. By synthesizing recent advancements and identifying key areas for future research, this work highlights the importance of adopting innovative technologies to address the growing challenges of landslide monitoring and risk mitigation.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Coherent lifetime estimation for Sentinel-1 InSAR

Authors: Andreas Piter, Dr. Mahmud Haghshenas Haghighi, Prof. Dr. Mahdi Motagh
Affiliations: Institute Of Photogrammetry And Geoinformation, Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences
Transport infrastructure, such as railways and highways, is susceptible to deformation due to natural and anthropogenic hazards, as well as the ageing of materials. Utilizing publicly available Sentinel-1 data with InSAR time series methods provides a cost-effective and accessible means of continuously monitoring these infrastructures for displacements. Existing InSAR time series methods only consider scatterers with continuously coherent backscattering, such that the signal is interpretable over the whole study period. However, the construction or demolition of transport infrastructure leads to changes in the backscattered signal in the SAR images. Consequently, such infrastructure cannot be observed with conventional methods and requires tailored methods to identify the coherent lifetime of the scatterer. In this contribution, we assess the specific requirements for coherent lifetime estimation for transport infrastructure monitoring with Sentinel-1 InSAR. We evaluate the performance of existing change detection methods, including both coherent and non-coherent approaches. The non-coherent technique addresses persistent scatterers (PS), which are expected to exhibit a significant amplitude change due to changes within the resolution cell. In contrast, the coherent methods seek to identify coherent time spans of distributed scatterers (DS). The coherent time span is identified from the coherence matrix, which is estimated from spatially adaptive or non-adaptive neighbourhoods and represents the temporal correlation of the signal. A block structure in the coherence matrix is presumed to indicate the scatterer's coherent lifetime. We demonstrate the efficacy of these techniques in an area west of Alicante, Spain. The study area, characterized by sparse vegetation, encompasses both newly constructed and demolished railway tracks and highways. We use a Sentinel-1 descending SAR stack of track 8 covering the period from 2014 to 2024.
The images are coregistered with GAMMA, and the displacement time series are derived using the open-source research software SARvey for InSAR time series analysis. We use high-resolution optical images to infer the time span in which the infrastructure was most likely demolished or newly constructed, in order to validate the results of the coherent lifetime estimation methods. Our findings suggest that the coherent approach outperforms the non-coherent method, because distributed scatterers (DS) are prevalent and coherence is more sensitive to changes than amplitude. However, changes in PS are obscured by bright backscattering from vehicles during construction, leading to a misalignment between the detected amplitude change and the actual onset of the infrastructure's coherent lifetime.
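The block-structure idea — finding the longest contiguous span of epochs over which a scatterer stays mutually coherent — can be illustrated with a toy brute-force scan over a synthetic coherence matrix. The matrix, threshold, and exhaustive search strategy are simplifying assumptions for illustration, not the methods evaluated in the study:

```python
import numpy as np

def coherent_lifetime(coh, thresh=0.4):
    """Return (start, end) epoch indices of the longest contiguous block
    whose mutual coherences all exceed `thresh` (simplified block finder)."""
    n = coh.shape[0]
    best = (0, 0)
    for i in range(n):
        for j in range(i, n):
            block = coh[i:j + 1, i:j + 1]
            if block.min() > thresh and (j - i) > (best[1] - best[0]):
                best = (i, j)
    return best

# Synthetic coherence matrix: a scatterer appears at epoch 3 of 8,
# e.g. a newly constructed piece of infrastructure.
n = 8
coh = np.full((n, n), 0.1)
coh[3:, 3:] = 0.8
np.fill_diagonal(coh, 1.0)
print(coherent_lifetime(coh))  # → (3, 7)
```

Real coherence matrices are noisy, so practical methods fit or test for block structure statistically rather than thresholding a minimum.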
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Monitoring Soil Freeze/Thaw Dynamics in Snow-Covered Agricultural Areas with L-band Polarimetric SAR

Authors: Ms Zeinab Akhavan, Richard Kelly, Dr Neda Rojhani
Affiliations: University of Waterloo
Tracking spatiotemporal variations in soil freeze/thaw dynamics is important for monitoring various cold region states (Jagdhuber et al., 2014). Hydrologically, soil freezing reduces surface water infiltration and soil hydraulic conductivity, increasing the risk of flooding and erosion during the spring melt season. For estimating snow accumulation using microwave remote sensing observations, the soil freeze/thaw state is also an important control on the background contribution to the backscattered signal. Synthetic Aperture Radar (SAR) is a promising tool for monitoring soil freeze/thaw states due to its sensitivity to changes in the dielectric constant, which are primarily influenced by soil moisture. Lower-frequency SAR, such as L-band, exhibits enhanced sensitivity to soil moisture and is more effective in detecting freeze/thaw states beneath snow cover than higher-frequency SAR systems (Zhou et al., 2022). To better understand the L-band SAR response in wintertime, an experiment was executed in southern Ontario in 2022-2023 to conduct in-situ measurements of geophysical parameters, such as soil moisture and soil freeze/thaw status. The measurements were taken simultaneously with quad-polarization L-band airborne SAR and C/X-band satellite SAR data collections. The field study site at Powassan, Ontario, Canada, was selected to monitor soil conditions from late October (snow-free season) through late March (prior to snowmelt). Wet snow acts as a barrier to microwave radiation, preventing SAR signals from reaching the soil beneath the snow layer. Therefore, soil and snow parameters, including moisture and temperature, were measured at several sites over the study area to coincide with airborne and satellite SAR observations. In addition, an automatic weather station was used to record meteorological data such as air temperature, ground surface temperature, and snow depth.
The goal of this study is to enhance our understanding of seasonal soil state changes beneath the snow layer in an agricultural region using lower-frequency SAR. To achieve this, we apply the Freeman-Durden (FD) polarimetric target decomposition to CryoSAR L-band fully polarimetric images to observe how the FD components respond to freeze/thaw cycles over the snow-covered agricultural landscape (Freeman & Durden, 1998; Kelly et al., 2024). According to weather station records, soil near-surface temperatures dropped to 0°C around mid-December and remained at that level until the end of the field season. Although air temperatures occasionally rose above freezing, the snow layer helped maintain soil temperatures around 0°C. The FD volume and surface scattering components were calculated from the polarimetric data between January 2023 and mid-March 2023 to characterize the L-band SAR response prior to the onset of the snowmelt season. In the L-band data, we expected dominant surface scattering from the snow-ground interface with a small contribution from volume scattering due to the snow layer or canopy coverage. However, while the surface roughness of the soil remained consistently smooth at the field site, we observed a variable volume scattering contribution relative to surface scattering in the FD decomposition. This indicates that the surface soil water volume likely had an impact on the volume scattering response. The rise in air temperature during mid-February likely increased the moisture content in the soil and snow as melt-refreeze layers formed. These complicated thermal patterns largely explain the volume and surface scattering variations observed between mid-January and mid-March and are important for snow mass retrievals using interferometric SAR observations.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Ground rebound caused by groundwater level change using MT-InSAR time-series

Authors: Seong-Cheon Park, Dr. Sang-Hoon Hong, Dr. Francesca Cigna
Affiliations: Pusan National University, National Research Council, Institute of Atmospheric Sciences and Climate (CNR-ISAC)
The groundwater level (GWL) is a primary indicator for estimating the groundwater characteristics of an aquifer and a crucial factor affecting surface deformation. A decline in GWL reduces pore pressure, leading to land subsidence. Conversely, a rise in GWL increases pore pressure, causing the aquifer to expand and resulting in surface uplift. Various factors, including natural recharge from seasonal rainfall, artificial fluid injections, and groundwater extraction, influence these GWL fluctuations. Surface deformation associated with GWL changes varies significantly depending on groundwater usage, land use, and geological settings. It can cause severe damage, particularly on soft ground composed of unconsolidated sediments. To periodically monitor surface deformation and mitigate damage, terrestrial geodetic surveys, such as Global Navigation Satellite System (GNSS) or leveling surveys, have been conducted. However, these methods are limited by low spatial resolution and high costs. Differential Interferometric Synthetic Aperture Radar (DInSAR) is a valuable technique for measuring high-spatial-resolution surface displacement maps over large regions with high precision. Time-series analysis using Multi-Temporal InSAR (MT-InSAR) facilitates the observation of long-term deformation caused by earthquakes, volcanic activity, and subsidence. In this study, we applied a small baseline subset (SBAS) algorithm, one of the MT-InSAR methods, to observe surface deformation in Gimhae City, South Korea. The developed area sits on soft ground composed of Quaternary unconsolidated sediments distributed as alluvial deposits. A continuous GWL rise was detected through daily GWL measurement data collected at the Gimhae Samjeong groundwater monitoring station from 1 January 2007 to 31 December 2021. After excluding seasonal fluctuations, a consistent increase in GWL was detected over approximately ten years.
We utilized four SAR datasets, comprising ascending ALOS PALSAR, descending COSMO-SkyMed, and descending and ascending Sentinel-1, to observe long-term time-series displacement and validate it against GWL changes. Using the Sentinel-1 datasets from January 2016 to October 2021, deformation velocity maps were generated in the horizontal and vertical directions, assuming no deformation in the north-south direction. We used hydrological data such as GWL measurements, a GWL contour map, and the groundwater flow direction to analyze the spatiotemporal surface deformation characteristics observed by MT-InSAR. Daily GWL data were used for correlation analysis between GWL changes and the time-series displacement. We applied linear regression and exponential function models to detect deformations and analyze the correlation between the independent observations. The results revealed a surface uplift phenomenon in the eastern part of the study area and a high correlation between deformation and GWL fluctuations. In the ALOS PALSAR observations, covering the period from 2007 to 2010, no specific deformation was observed and the GWL remained stable. However, significant uplift was detected in the COSMO-SkyMed data from 2013 to 2019, showing a maximum total uplift of 8.2 cm and a maximum uplift velocity of 1.5 cm/year, corresponding to the rise in GWL. The uplift regions were observed along the lowest GWL contour line and were bounded by two lineament structures. In the Sentinel-1 results from 2016 to 2021, slight uplift signals were detected, showing similar patterns in both datasets. The maximum deformation velocities were 0.47 cm/year and 0.54 cm/year for Sentinel-1 descending and ascending, respectively. The decomposed deformation map primarily shows an uplift pattern in the vertical direction, while the deformation velocity in the horizontal direction remained approximately zero.
Additionally, the correlation analysis indicates that the surface deformation was driven by GWL fluctuations, as shown by the high coefficient of determination (R²) between the two datasets. The comparison between ALOS PALSAR and GWL showed a relatively low R² (below 0.4), implying that GWL fluctuations did not affect the surface during this period. The COSMO-SkyMed dataset, however, shows the highest R² (over 0.8), indicating that groundwater fluctuations contributed significantly to ground deformation. Uplift due to rising GWL occurs gradually, following the non-linear pattern of groundwater recharge; accordingly, the correlation analysis for this period showed that an inverse exponential function model explains the deformation with higher correlation. During the Sentinel-1 observation period, which corresponds to the stabilization of the GWL, a high R² exceeding 0.75 was observed, suggesting that slight surface uplift still occurred due to the rising GWL. Based on the high coefficients of determination for COSMO-SkyMed and Sentinel-1, we calculated the time lag between GWL and deformation. The time lag ranged from 510 to 780 days for COSMO-SkyMed, while a relatively short time lag of 0 to 120 days was detected for Sentinel-1. These results suggest that COSMO-SkyMed reflects the long-term GWL rise, while Sentinel-1 is slightly affected by short-term seasonality. Our results reveal that surface uplift occurred in Gimhae City, South Korea, from 2013 to 2021, corresponding to GWL changes and showing spatiotemporal deformation features related to the characteristics of the aquifer. We successfully detected long-term ground deformation spanning more than a decade using MT-InSAR and multi-frequency SAR datasets. Applying MT-InSAR to future satellite missions, integrated with current ones, is expected to enable practical continuous monitoring of aquifer-related surface deformation.
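As a toy illustration of the time-lag analysis, the sketch below generates a synthetic, exponentially saturating GWL series and a lagged uplift series, then scans candidate lags and keeps the one maximizing R². All numbers are synthetic stand-ins, not the study's data:

```python
import numpy as np

t = np.arange(0, 2400, 12)                     # observation days (synthetic)
gwl = 1.0 - np.exp(-t / 600.0)                 # normalized GWL rise
true_lag = 120                                 # assumed lag for the demo [days]
uplift = 8.2 * (1.0 - np.exp(-np.clip(t - true_lag, 0, None) / 600.0))

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# For each candidate lag, shift the GWL series, fit a linear GWL -> uplift
# mapping, and score the fit; the best lag maximizes R^2
lags = np.arange(0, 800, 12)
scores = []
for lag in lags:
    shifted = np.interp(t - lag, t, gwl, left=0.0)
    coeffs = np.polyfit(shifted, uplift, 1)
    scores.append(r_squared(uplift, np.polyval(coeffs, shifted)))
best_lag = lags[int(np.argmax(scores))]
```

At the correct lag the shifted GWL and the uplift become exactly proportional, so R² peaks there; real series would add noise and a nonlinear (inverse exponential) response model as described in the abstract.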
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Enhanced MUSIC Algorithm for TomoSAR Reconstruction of Forested Areas

Authors: Karima Hadj-Rabah, Nabil Haddad, Alessandra Budillon, Gilda Schirinzi, Martin Sudmanns
Affiliations: Paris-Lodron Universität Salzburg, University of Naples Parthenope, Ecole Militaire Polytechnique
SAR remote sensing is a refined tool for characterizing vegetation and forest-covered surfaces. The products provided by current and upcoming sensors offer a large number of datasets for assessing the impact of human activities on the environment. In this context, forest height is a crucial parameter for monitoring and managing forest ecosystems, supporting a wide range of applications. With long-wavelength, fully polarized (FP) Tomographic SAR (TomoSAR) data, canopy and ground elevations can be estimated using FP spectral estimators. Among these approaches is MUSIC (MUltiple SIgnal Classification), which has relatively high super-resolution capability compared to other methods thanks to its noise/signal subspace separation property. Yet three main challenges arise: (1) a priori knowledge of the unknown number of scatterers is required; (2) the sample covariance matrix must be used, since the statistical covariance matrix is unknown, leading to (3) spatial resolution degradation due to the multi-looking operation involved. The aim of this research is to overcome these MUSIC limitations by introducing an enhanced version in which the Expectation Maximization (EM) method is used to accurately estimate the statistical covariance matrix, allowing the multi-look operation to be discarded. The suggested modification thereby turns the initial version into an iterative one, solving drawbacks (2) and (3). Since our main focus is estimating the ground and canopy elevations, the number of scatterers was set to two in order to avoid the parametric limitation (1). For comparison purposes, the conventional versions of both the MUSIC and EM methods were implemented as well. The proposed method was applied to a dataset collected during ESA's TropiSAR mission over tropical forests in French Guiana (South America), acquired by the ONERA SETHI airborne system.
The experimental results show the advantages of the enhanced MUSIC algorithm in terms of height accuracy and completeness using single-polarization L-band TomoSAR data. In addition, the three polarizations were exploited separately to discuss which channel is the most suitable for future applications of the proposed method. The findings were validated using qualitative and quantitative metrics against the Canopy Height Model and Digital Terrain Model derived from LiDAR measurements projected into the SAR geometry.
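For readers unfamiliar with the subspace idea behind MUSIC, the sketch below builds a toy tomographic problem with two scatterers (ground at 0 m, canopy at 25 m) and evaluates the MUSIC pseudospectrum from the noise subspace of a sample covariance matrix. The baselines, geometry, noise level and elevations are illustrative assumptions, and the number of scatterers is fixed to two as in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n_acq = 10
baselines = np.linspace(-30, 30, n_acq)                  # perpendicular baselines [m] (assumed)
wavelength, r0, theta = 0.24, 5000.0, np.radians(35.0)   # L-band; illustrative geometry

def steering(z):
    """TomoSAR steering vector for elevation z above the reference plane."""
    xi = 2.0 * baselines / (wavelength * r0 * np.sin(theta))   # spatial frequencies
    return np.exp(2j * np.pi * xi * z)

# Simulate looks with independent complex amplitudes per scatterer plus noise,
# so the signal covariance has rank 2
looks = []
for _ in range(30):
    amps = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    y = amps[0] * steering(0.0) + amps[1] * steering(25.0)
    y += 0.1 * (rng.standard_normal(n_acq) + 1j * rng.standard_normal(n_acq))
    looks.append(y)
R = np.mean([np.outer(y, y.conj()) for y in looks], axis=0)   # sample covariance

# MUSIC: eigendecompose and keep the noise subspace (K = 2 scatterers assumed)
K = 2
_, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
En = vecs[:, : n_acq - K]              # noise-subspace eigenvectors

def pseudospectrum(z):
    """Large where the steering vector is orthogonal to the noise subspace."""
    return 1.0 / np.linalg.norm(En.conj().T @ steering(z)) ** 2

# Peaks appear near the true elevations, not between them
p_ground, p_canopy, p_between = pseudospectrum(0.0), pseudospectrum(25.0), pseudospectrum(12.5)
```

The enhanced method in the abstract replaces this multi-look sample covariance with an EM-based estimate of the statistical covariance, which is what removes the resolution penalty of multi-looking.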
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone P-Q)

Poster: Network Formation Strategies In PSInSAR

Authors: Jakob Ahl, Dr Freek van Leijen, Prof Ramon Hanssen
Affiliations: The Technical University of Denmark, Delft University of Technology
PSInSAR (Persistent Scatterer Interferometric SAR) is a powerful tool for estimating surface displacements with high spatial resolution. Since its inception two decades ago, many variants have been proposed, following different approaches. Most are structured similarly, relying on four main steps: coherent pixel identification and selection, network formation, arc parameter estimation, and network integration. In this contribution we focus on the choices made in the first two steps, which are often critical to the success of the later ones. The literature offers a number of ways to identify coherent pixels, often referred to as PSs (Persistent Scatterers). The identification is based either on amplitude, such as Amplitude Dispersion (Ferretti, 2001) and Signal-to-Clutter Ratio (Adam, 2004), or on phase, e.g. Maximum Likelihood Estimation (Shanker, 2007) and Point Coherence Estimation (Costantini, 2024). The combination of their double-differenced phases, referred to as arcs, to build a spatial network has received less attention in the literature. For N PSs, the maximum possible number of arcs is on the order of N², which quickly becomes infeasible to process for larger areas. Often a Delaunay triangulation is used for its simplicity. Other networks have been investigated, for example redundant networks (van Leijen, 2014) and star networks (Kampes, 2006). However, these network methods are primarily mathematical constructs that do not account for the physics and noise sources of the problem we are trying to overcome, namely atmospheric delay differences and phase dispersion. As an alternative, we propose an empirical cost function based on the correlation between noise sources and known parameters, i.e. SAR amplitude and distances between points, to establish a combined PS selection and network formation strategy.
This strategy optimizes the likelihood that any selected PS can be connected to another PS through high-quality arcs, and that the ambiguities of the arcs in the resulting network can be correctly estimated. This is accomplished by selecting a single 'seed' point and growing the spatial network from it, testing connections to valid PSs and connecting them only if their predicted quality exceeds a threshold. The approach is implemented in TU Delft's in-house PSInSAR software, DePSI, and is demonstrated and validated on a TerraSAR-X dataset of Copenhagen. Finally, we compare this method to conventional approaches based on velocity estimates, temporal coherence, ensemble coherence, and local ensemble coherence.
References:
A. Ferretti, C. Prati and F. Rocca (2001). Permanent scatterers in SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 1, pp. 8-20, doi: 10.1109/36.898661.
Adam, N., Kampes, B. M., and Eineder, M. (2004). Development of a scientific persistent scatterer system: Modifications for mixed ERS/ENVISAT time series. In ENVISAT & ERS Symposium, Salzburg, Austria, 6–10 September 2004, page 9.
Shanker, P., and H. Zebker (2007). Persistent scatterer selection using maximum likelihood estimation. Geophys. Res. Lett., 34, L22301, doi: 10.1029/2007GL030806.
Costantini, M., Minati, F., Vecchioli, F., and Zavagli, M. (2024). Point Coherence Estimation (PCE) in SAR interferometry. EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13336, doi: 10.5194/egusphere-egu24-13336.
Van Leijen, F. (2014). Persistent Scatterer Interferometry based on geodetic estimation theory (Doctoral dissertation). Delft University of Technology.
Kampes, B. M. (2006). Radar Interferometry: Persistent Scatterer Technique. Springer, Dordrecht, The Netherlands.
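The seed-and-grow idea can be sketched in a few lines. The cost function, threshold and point attributes below are hypothetical stand-ins; the actual empirical cost function in DePSI is calibrated against noise correlations and differs from this toy:

```python
import math

def arc_quality(p, q, max_dist=1000.0):
    """Hypothetical arc quality: penalize long arcs (atmospheric delay
    differences grow with distance) and noisy points (phase dispersion)."""
    dist = math.hypot(p["x"] - q["x"], p["y"] - q["y"])
    if dist > max_dist:
        return 0.0
    noise = (p["amp_disp"] + q["amp_disp"]) / 2.0   # amplitude-dispersion proxy
    return (1.0 - dist / max_dist) * (1.0 - noise)

def grow_network(points, seed_idx=0, threshold=0.5):
    """Grow a network from a seed PS, connecting only high-quality arcs."""
    connected, arcs, frontier = {seed_idx}, [], [seed_idx]
    while frontier:
        i = frontier.pop()
        for j, q in enumerate(points):
            if j in connected:
                continue
            quality = arc_quality(points[i], q)
            if quality >= threshold:        # connect only if prediction passes
                connected.add(j)
                arcs.append((i, j, quality))
                frontier.append(j)
    return arcs

points = [
    {"x": 0, "y": 0, "amp_disp": 0.10},
    {"x": 200, "y": 0, "amp_disp": 0.15},
    {"x": 400, "y": 100, "amp_disp": 0.20},
    {"x": 3000, "y": 3000, "amp_disp": 0.10},   # isolated: too far to connect
]
arcs = grow_network(points)
```

Note how the third point is reached via an intermediate PS even though its direct arc to the seed is below the threshold, while the distant point stays unconnected, mirroring the rejection of low-quality arcs.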
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: D.05.02 - POSTER - Advancing Optimization, Resilience and Innovation Capabilities to Evolve ESA's Observation

This session offers a forward-looking exploration of key aspects of ESA Observation Framework (EOF) operations, focusing in particular on innovations in data acquisition, advanced processing, long-term archiving and the IT resources needed to manage complex scientific algorithms, larger data volumes, synergetic processing and evolving data formats. It emphasizes the adoption of novel technologies, highlighting the importance of operational efficiency and resilience in an environmentally sustainable ecosystem.

Discussions will focus on how these advancements can evolve the ESA Observation Framework operations for Copernicus and ESA Earth Explorers missions to meet user needs and cost-efficiency goals, including the need to implement a collaborative environment to enable the maximization of data exploitation.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: ProsEO - A Cloud Native Processing Framework for EO Data Processing

#cloud-native

Authors: Peter Friedl, Anett Gidofalvy, Maximilian Schwinger, Frederic Raison, Nicolas
Affiliations: German Aerospace Center (DLR e.V.)
The increasing complexity of upcoming Earth Observation (EO) research missions, particularly those within ESA’s Earth Explorers program, demands innovative and sustainable solutions for ground operations. These missions, characterized by higher data volumes, intricate processing algorithms, and the need for synergetic processing with data from Copernicus and international partners, impose significant challenges on IT infrastructure. In addition to the technical demands, there is a growing imperative to address sustainability by minimizing environmental impacts while meeting the user community’s expectations for collaborative and efficient data exploitation. We introduce ProsEO (Processing System for Earth Observation), a cloud-native processing system designed to respond to these challenges. Built on a microservices architecture, ProsEO provides a scalable and flexible solution for EO data processing across diverse cloud environments. Its advanced capabilities include intelligent dependency analysis between EO products and dynamic optimization of production workflows based on input data availability. By integrating resources from multiple cloud providers, ProsEO ensures efficient use of IT infrastructure while reducing duplication of resources, thereby contributing to sustainable ground operations. ProsEO exemplifies a shift towards environmentally conscious EO ground systems through its ability to streamline data workflows and maximize computational efficiency. Its modular design facilitates seamless integration of new missions and data sources, ensuring the long-term sustainability of ground operational frameworks. ProsEO is capable of meeting the requirements of both online mission data processing and major reprocessing campaigns. We will detail ProsEO’s technical architecture, highlighting its use of containerized microservices, orchestration technologies, and its ability to handle large-scale data dependencies.
Through real-world use cases, we will demonstrate how ProsEO optimizes data processing and exploitation workflows and reduces costs while addressing the increasing complexity of EO missions. We aim to foster discussion of sustainable solutions for EO ground systems, showcasing ProsEO and offering insights into the role of innovative technologies in shaping the future of ground frameworks for EO research missions.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Presentation of the LCA methodology used to estimate the Copernicus Ground Segment environmental footprint

Authors: Caroline Vateau, Patrick Burger
Affiliations: Capgemini
Introduction & Objectives: In the continuity of the CO2 footprint estimation performed in 2019 at ESA level, and in anticipation of the ongoing deployment of new satellites, Copernicus Ground Segment (CGS) Operations wanted to understand its environmental impacts, both today and as expected in 2035. Capgemini was contracted to conduct this analysis, using a normalized digital Life Cycle Assessment (LCA) methodology to evaluate the current environmental footprint of CGS operations, from data acquisition to the Data Access Hub, including cloud services, external network services and Copernicus end users. The methodology also enabled the establishment of prospective scenarios based on CGS operational forecasts. The session will cover three main topics: the applied methodology, its illustration in the case of CGS operations, and options for improving its accuracy. Explanation of the LCA methodology: To estimate the impacts, Capgemini adapted the Cloud Carbon Footprint methodology to match the Copernicus context and to comply with Life Cycle Assessment standards (ISO 14044, ESA LCA methodology). The session will detail this methodology and the scope covered by the study, as well as the benefits of centralizing the calculation, mainly to ensure a normalized approach. It will also highlight how the network of suppliers and Cloud Service Providers actively participated in the project by sharing operational data that enabled in-depth analysis. A benchmark of existing and emerging digital LCA standards, as well as the problems and prospects associated with the processing, storage and transport of data, will be discussed. Illustration of the methodology in the case of CGS operations: Only relative, high-level representations and charts with anonymized information will be shared to illustrate the outputs of the methodology on CGS operational use cases.
These include the identification of the main hot spots and their distribution between cloud services, external network services and Copernicus end users. The assumptions and drivers of the prospective scenarios will also be detailed, which will help to better understand the main parameters influencing the methodology. Finally, the session will highlight how the methodology was applied to analyze EOF end-user impacts, and the limits of this approach. Continuous improvement of the methodology in 2025: The third section will discuss the limits of the methodology and key options for improving its accuracy. Indeed, based on the results of the analysis, some methodological refinements could be foreseen to increase the accuracy of the estimation of current hot spots. For the end-user part, the necessity of including direct and indirect impacts for an end-to-end estimation will be highlighted, with an introduction to the methodologies that could be applied for this purpose. Conclusion: The session will conclude on the positive and active collaboration between CGS, their suppliers and Capgemini on this complex activity, as well as a key summary of the three main topics discussed.
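As a back-of-the-envelope illustration of the Cloud Carbon Footprint style of estimate, operational emissions can be approximated as compute energy, scaled by a data-centre overhead factor, times grid carbon intensity. All coefficients below are illustrative placeholders, not values from the study:

```python
# Simplified operational-emissions estimate in the Cloud Carbon Footprint spirit:
#   energy [kWh] = usage * power coefficient * PUE
#   emissions [kgCO2e] = energy * grid carbon intensity
vcpu_hours = 1_000_000        # total compute usage over the period (assumed)
watts_per_vcpu = 3.0          # average server power per vCPU (assumed coefficient)
pue = 1.4                     # data-centre power usage effectiveness (assumed)
grid_intensity = 0.23         # kgCO2e per kWh for the grid mix (assumed)

energy_kwh = vcpu_hours * watts_per_vcpu / 1000.0 * pue
emissions_kg = energy_kwh * grid_intensity
```

A full LCA additionally accounts for embodied (manufacturing) impacts, storage and network transfers, and end-user equipment, which is where the study's methodology extends beyond this sketch.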
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Optimising Ground Network Architecture for Future Earth Observation Programs

Authors: Børge Punsvik, Espen Sagen
Affiliations: KONGSBERG SATELLITE SERVICES AS
Kongsberg Satellite Services (KSAT) has conducted an in-depth analysis of the ground segment requirements necessary to support both current and future missions within the Copernicus program, focusing on support for the Copernicus Expansion Missions and the Next Generation missions. From the ground segment perspective, the Copernicus program is marked by a substantial increase in Earth observation data volume and the integration of advanced technologies for space-to-ground data acquisition, such as Ka-band, and the use of advanced modulation and coding schemes, including Serially Concatenated Convolutional Coding (SCCC), variable and adaptive coding and modulation (VCM/ACM), and file-based data transfer protocols (CFDP) with two-way communication. The analysis reveals that an extended, global ground segment provided as a service can effectively meet the Copernicus program’s stringent requirements and objectives. This approach offers an affordable and resilient solution through a systematic yet dynamic operational planning strategy. Key findings underscore the benefits of establishing a ground station network in the Southern Hemisphere to complement the existing Copernicus core ground stations located at Northern latitudes. This strategic expansion enables comprehensive global coverage and enhances the overall efficiency of data acquisition and processing. The study advocates the deployment of tri-band antennas equipped with X-band and Ka-band downlink capabilities. The integration of these antennas will significantly enhance the ground segment’s capacity to receive, process and distribute high-resolution data, thereby improving the overall quality and timeliness of the information provided to end users. A key recommendation from the analysis is the adoption of a Ground Station as a Service (GSaaS) model, hiding site-specific infrastructure and implementation from the mission operator.
By leveraging GSaaS, the Copernicus program can benefit from a scalable and cost-effective solution that adapts to the evolving needs of its missions. GSaaS enables the efficient utilization of ground station resources, ensuring that data acquisition and processing are optimized for maximum performance and reliability. The implementation of network-centric operations is essential for the successful deployment of the extended ground segment. Network-centric operations involve the seamless integration of various ground stations into a cohesive network, enabling efficient data sharing and collaboration. This approach ensures that data from multiple sources can be aggregated, processed, and distributed in real-time, enhancing the overall responsiveness and adaptability of the ground segment. By adopting network-centric operations, the Copernicus program can achieve greater operational efficiency and resiliency, ensuring that mission-critical data is always available when needed. To further enhance the ground segment’s capabilities, further evolution of APIs is recommended. APIs facilitate seamless communication between different systems and platforms, enabling efficient exchange of data and services. By incorporating APIs into the ground segment infrastructure, the Copernicus program can achieve greater interoperability and flexibility, allowing for the integration of new technologies and services as they become available. This approach ensures that the ground segment remains adaptable and future-proof, capable of meeting the evolving demands of Earth observation missions. Through the use of standardized ground segment interfaces and a virtualized ground segment architecture, KSAT proposes a ground segment that can incorporate existing ground station solutions with enhanced capabilities such as optical space-to-ground links, phased array antennas, and space relay solutions without compromising current capabilities and proficiency. 
An adaptive and dynamic approach to ground segment capabilities may ensure optimal utilization and cost-efficiency. A hybrid cloud solution combining ground station edge computing with traditional cloud networks is proposed to optimize data distribution solutions. This hybrid approach enables improved cost-efficiency and enhanced ground segment resilience by maximizing the performance and utilization of both specific ground stations and the terrestrial network. By leveraging the strengths of both edge and cloud computing, the ground network can ensure that data processing and distribution are optimized for both speed and reliability. Enhanced integration between space and ground segments is highlighted as essential for improving operational efficiency and resiliency. By enabling closer integration between the two segments by flexible planning utilization through APIs, the ground segment can ensure that data acquisition, processing, and distribution are optimized for maximum performance. This collaborative approach enables the seamless integration of new technologies and services. In conclusion, the insights gained from this analysis provide a strategic framework for optimizing the ground segment network to effectively support the Copernicus program's evolving needs, by leveraging solutions such as GSaaS, network-centric operations, API integration, and cloud-based solutions.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Advancing Sustainable Data Management in the Copernicus Ground Segment Data Dissemination operations

Authors: Gaia Cipolletta, Andrea Tesseri, Alexis Longuet
Affiliations: Serco
The Copernicus Ground Segment faces growing challenges in meeting the increasing demand for scalable, efficient, and resilient operations to handle the expanding volume of Earth Observation data and its dissemination to end users. This study presents recent advancements in optimizing the Sentinel data flow management chain, focusing on improving data ingestion, dissemination and archiving within a sustainable framework. At data source endpoints, dynamic user-activity-based filtering mechanisms are proposed to manage user requests, thereby reducing data retrieval loads and computational overhead. Additionally, an AI-driven approach to data archiving is introduced, leveraging utilization patterns for more efficient storage management. These innovations collectively aim to improve the overall efficiency and predictability of data dissemination operations and streamline data access. These innovations not only address user demands but also significantly enhance operational cost effectiveness and sustainability, aligning with the EU Green Deal’s environmental objectives within a user-centric data ecosystem.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Enhancing Resilience and Sustainability in Cloud-Based Sentinel Payload Data Processing: Atos’ Approach

Authors: Jean-Philippe Audoin
Affiliations: Atos France
Copernicus Ground Segment Operations components have been cloud-based for several years now, following a major change in hosting paradigm initiated and driven by ESA. This has led to a fully service-based overall architecture, with a high level of independence between the services (acquisition, payload data processing, archiving, dissemination) and between their parallel instances. Payload data processing requires high volumes of processing and storage resources. The timeliness and completeness requirements applied to this processing service are extremely demanding with respect to the robustness of the underlying system and its adaptiveness to an evolving load. Whereas a cloud IaaS architecture brings more agility and scalability, it also comes with less control over the hardware and resource virtualisation layers, especially when it comes to low-level redundancy and the mean time between hardware failures. Through architecture studies, experimentation, design and experience, Atos has implemented layers of redundancy and automated failure response to compensate for this reduced control over the lower layers. The design of the solution is mission-agnostic and can be applied to any of the Sentinel missions. The solution addresses redundancy:
• At IaaS level, through dynamic resource provisioning;
• At PaaS and SaaS level, leveraging prepackaged redundancy features;
• At software level, adapting software development to PaaS recovery mechanisms and implementing a microservices architecture;
• Combined with a DevOps approach, providing end-to-end awareness and end-to-end resilience.
Atos has confirmed through several months of operations that a layered approach separates contingency policies, thus simplifying design and clarifying operations and troubleshooting. The combination of these layers balances the redundancy effort between IT and software, achieving better availability than single cloud components while making the most of the availability of existing cloud components.
This complementary approach requires transversal skills, provided by a dedicated DevOps approach proposed by Atos, whereas software and IT skills are usually held by separate teams. Combined with cloud adaptability and elasticity, this solution also brings opportunities for automated optimisations leading to reduced resource consumption: the ability to adapt to service needs and to run operations that improve cost efficiency. These optimisations are not only about costs; they also result in a better environmental footprint. The goal of this poster is to detail the corresponding design process, explain the redundancy layers and highlight the resulting advantages.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Reuse of Copernicus Reference System for Earth Explorer missions

#stac #cloud-native

Authors: Espen Bjorntvedt, Alessandra Rech
Affiliations: ESA ESRIN, CS GROUP
This presentation describes the evolution of the Copernicus Reference System (COPRS), initially conceived as a mission-specific solution within the Copernicus program for the Sentinel-1, -2, and -3 missions, into a highly adaptable Generic Processing Orchestration System (GPOS) for ESA Earth Explorers and other scientific EO missions, ready to be adopted as the reference processing system within the new ESA EO Framework for Earth Observation Science Missions. Traditionally, within a mission-specific Payload Data Ground Segment (PDGS), the Processing Orchestration System, one of the core sub-systems of the downstream chain, was tailored to the requirements of specific missions, often resulting in tightly coupled designs. COPRS’ original design already deviated from this schema, adopting a generic, modular architecture that separated the underlying framework from mission-specific functions and data processors. By leveraging loosely coupled microservices orchestrated by a workflow manager, COPRS enabled seamless integration of additional capabilities without disrupting existing components. From its inception, COPRS was designed as cloud-native, integrating the scalability and efficiency of cloud environments. This foundation allows it to support diverse use cases, from the high-volume Sentinel missions to smaller-scale nanosatellite and demonstration missions. Its ability to dynamically scale processing nodes based on throughput needs provides both flexibility and cost-efficiency, addressing mission peaks and minimizing data production costs. Additionally, the system natively tackles cloud-specific constraints, such as shared data access, through innovative solutions tailored to meet the Sentinel missions' stringent performance and data volume demands. This robust starting point made COPRS a natural candidate for ESA’s vision of a Generic Processing Orchestration System for the new ESA EO Framework for Earth Observation Science Missions.
While the Earth Explorer missions present new challenges and requirements, the generic design of COPRS proved highly adaptable, allowing seamless integration of processors from the CryoSat-2, EarthCARE, and Swarm missions through specific configurations. The first version of GPOS, validated on these missions, is now available and ready to be operationalized. Built on top of Kubernetes, the system supports deployment on private or public clouds, ensuring platform independence. The integration of modern standards, such as standardized workflow languages and STAC catalogs, simplifies processor integration and data accessibility. Finally, as Free Open-Source Software, the system is ready to power future Earth observation missions while benefiting from community contributions and collaborative enhancements.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Scalable and Automated Cloud-Based Pipelines for Earth Observation: Enhancing the Hellenic Ground Segment Infrastructure and Collaborative Support Activities

#zarr #stac #cog #cloud-native

Authors: Thanassis Drivas, Fotis Balampanis, Iason Tsardanidis, Ioannis Mitsos, Charalampos Kontoes
Affiliations: National Observatory Of Athens
The rapid expansion of Earth Observation (EO) data necessitates robust, scalable solutions for storage, pre-processing, and advanced analytics. This presentation introduces a fully automated, end-to-end data pipeline leveraging cloud-native technologies to address these challenges, carried out in the context of supporting the DHR Network Evolution. The processing elements take into account the overall ESA Ground Segment Architecture and the traceability of the various processing steps it affords. Developed as part of the Copernicus Data Space Ecosystem, the pipeline integrates advanced orchestration frameworks, S3-compatible object storage, and cutting-edge Machine Learning (ML) algorithms to enable efficient processing of satellite data. Scalability is achieved through containerization and dynamic resource allocation, making the system adaptable to diverse analytical scales, ranging from localized to global assessments. The pipeline automates the generation of Analysis Ready Data (ARD), utilizing modern data formats such as Cloud-Optimized GeoTIFFs (COGs) and Zarr. Building on this foundation, sophisticated algorithms and state-of-the-art AI models are employed to develop advanced applications, including cloud-gap interpolation, grassland mowing detection, and crop classification. These applications unlock deeper insights from EO data, transforming it into actionable intelligence. Following the pre- and post-processing steps, SpatioTemporal Asset Catalogues (STAC) are used to ensure that EO data and derived products are accessible, interoperable, and usable by the broader scientific and operational community accessing the Ground Segment's facilities. Ingesting Level-2 and Level-3 products into a STAC catalogue not only supports the reproducibility of research but also fosters collaboration and accelerates innovation, transforming insights into validated services.
Overall, this study highlights how relay data hubs leveraging cloud infrastructure and AI scale up EO applications to address global challenges and support informed decision-making across diverse sectors and stakeholders such as environmental monitoring, energy, disaster response, and sustainable agriculture.
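To make the STAC ingestion step concrete, the sketch below assembles a minimal STAC Item by hand using only the standard library. Field names follow the STAC 1.0.0 Item specification; the product id, footprint and asset href are hypothetical:

```python
import json
from datetime import datetime, timezone

# Minimal STAC Item for a hypothetical Level-2 ARD product (illustrative values)
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "S2_ARD_20240601_T34SGH",                  # hypothetical product id
    "geometry": {"type": "Polygon", "coordinates": [[
        [23.0, 37.0], [24.0, 37.0], [24.0, 38.0], [23.0, 38.0], [23.0, 37.0]]]},
    "bbox": [23.0, 37.0, 24.0, 38.0],
    "properties": {
        # STAC requires an RFC 3339 datetime for each item
        "datetime": datetime(2024, 6, 1, 9, 30, tzinfo=timezone.utc).isoformat(),
    },
    "links": [],
    "assets": {
        "ndvi": {
            "href": "s3://ard-bucket/S2_ARD_20240601_T34SGH/ndvi.tif",  # hypothetical
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
            "roles": ["data"],
        }
    },
}
item_json = json.dumps(item)   # ready to be posted to a STAC API or written to a catalogue
```

In practice a library such as pystac would build and validate these items, but the JSON structure above is what ends up in the catalogue and what makes the COG assets discoverable.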
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: A generic processing framework developed by technology transfer across Earth Observation programmes

Authors: Richard Hofmeister, Knut Bernhardt, Faisal Rafi, Torben Keßler
Affiliations: Werum Software & Systems AG
A generic processing orchestration system for scientific Earth Observation missions has been developed in the frame of the ESA Earth Explorer programme and has been transformed into operational services in the Copernicus programme. The concepts and the software system were originally developed for a number of specific Earth Explorer missions and have matured into a multi-mission processing framework. A range of use cases, from classical systematic and on-demand processing facilities to archive services, can be addressed by the generic framework. The transfer of this technology to the operational Copernicus services has been achieved, and the software system now provides a common backend for a Long-Term Archive as well as a new Copernicus Production Service. Common challenges in the Copernicus and Earth Explorer programmes have been addressed, such as handling specific geospatial data formats, ground-segment anomalies, cyber-security aspects, the provision of common service interfaces, and the use of public-cloud infrastructure. The operation of the Copernicus payload data ground segment services in the public cloud, including service monitoring with informative dashboards and control procedures, can in turn serve as a role model for scientific missions that make use of the operationally proven concepts shared with the generic processing framework. Scalability of the orchestration system has turned out to be a key feature of modern ground segment elements and is used successfully for the data transfer actions in the Copernicus Long-Term Archive, the data access facilities in the Copernicus services, and the higher-level processing workflows in the BIOMASS mission. An effective implementation of scalable orchestration for these on-demand processing use cases saves energy and resources whenever possible.
The concepts of the generic processing framework and related technology transfers between the Earth Observation programmes are presented from an operator point of view and can inspire a dialogue between the agencies and the industrial partners in the community.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Optimizing Sentinels Mission Planning: A Unified Framework for Future Copernicus Operations

Authors: Ms. Laura Fernández, Mr. Francisco Javier Hernández, Ms. Irene Avellano, Ms. Almudena Domínguez, Mr. Carlos Valencia, Mr. Juan A. Tejo, Mr. Gian Piero Izzo, Ms. Inés Sanz-Morère
Affiliations: GMV Aerospace and Defence, S.A.U., Alia Space Systems s.r.l., European Space Agency, ESRIN
In the context of the Copernicus Space Component operations, the Sentinel-1 (S1) and Sentinel-2 (S2) Mission Planning operations rely on mission-specific planning systems (hereafter referred to as S1 MP and S2 MP, respectively). Although both systems have many strengths, there are opportunities for enhancement in several respects:
• From a design perspective, the two mission planning systems follow different architectures and operational approaches and were primarily designed for a systematic planning approach; they are not optimized to efficiently assess and adapt to changes in the observation or downlink scenario. Moreover, both systems were developed entirely from scratch, leading to overlapping implementations of the same functionality, even though not all features are mission-specific.
• From a user-experience perspective, the differing Graphical User Interfaces (GUIs) of the two systems pose challenges for mission planning operators when switching between missions, resulting in a less-than-optimal learning curve.
• Additionally, these systems rely on proprietary third-party software, which can lead to significant licensing fees and dependencies on vendor policies.
Improving these aspects can significantly reduce the time and costs associated with the development, validation, maintenance and operations of planning systems, particularly for future Copernicus Expansion and Next Generation missions. With this aim, ESA initiated the Sentinels Mission Planning Enhancement activity in July 2024.
The primary goal of this activity is to make Sentinels mission planning operations more efficient and organized, modernising the underlying technical solutions, optimising operational flexibility and efficiency, maximising operational synergies across missions, and establishing the basis for a harmonised framework into which mission planning functions for future missions, in particular the future Copernicus Expansion and Next Generation missions, can be plugged. GMV is leading the overall activity and is developing a generic Sentinels Mission Planning Framework (SMPF) to serve as a foundation for future missions' planning systems. Alia Space Systems contributes specific expertise gained during the years of S1 and S2 operations and the development of various mission planning ancillary tools. The SMPF system offers several strong points that improve the previously mentioned aspects:
• It offers common standard functionalities that can be extended to meet specific mission requirements, thereby reducing development time and validation costs.
• The framework is based on a new mission planning operational concept that is sufficiently generic to accommodate any mission need. This way, operational procedures are consistent across all missions.
• Its architecture combines elements of microservices and clean-architecture models, offering easy configuration to plug and play functionalities according to the mission being operated. At the same time, it is sufficiently generic to support future Copernicus missions, and it is flexible, scalable, and maintainable.
• The SMPF front end is developed as a web application, significantly simplifying the deployment process. This approach enables seamless accessibility, cross-platform compatibility, and reduced setup time. Moreover, it minimises infrastructure requirements, providing users with an efficient and user-friendly interface for planning and monitoring activities.
• The system exclusively utilizes state-of-the-art open-source third-party software, each component carefully selected as the optimal solution for the particular need.
As part of the activity, the current S1 MP and S2 MP systems are being re-implemented, integrating the required SMPF functionality and developing mission-specific functionality. Regarding S1 MP, one of the most time-consuming processes during planning is checking and solving constraints between different types of activities. This is particularly relevant when a high number of observation orders needs to be processed, as is typically the case with this system. This process is revisited and re-implemented using an innovative AI-based constraint solver, or optimizer. Regarding S2 MP, the system is developed with a strong focus on optimizing the on-board memory and instrument models. By addressing even the smallest details, the system ensures efficient resource utilization and guarantees maximum coverage of mission operational requirements. This paper presents the SMPF architecture, its functionalities, and the innovative open-source technologies it employs. Additionally, it introduces a new concept for operating with the SMPF and highlights the benefits that the AI-based constraint solver brings in terms of flexibility and efficiency to the planning process.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: The effectiveness of a PDGS and its implementation at Sentinel-2B Production Service

Authors: Cyrille Bouisson, Benjamin ESQUIS, Yannick
Affiliations: Cs Group
The PDGS (Payload Data Ground Segment) plays a critical role in the spatial data value chain. The challenges it faces in the coming years are vast and structured around several key areas, linked to technological advances, the increasing number of satellites, and the growing demand for data. These challenges require constant adaptation of infrastructures and management methods to ensure optimal performance in the face of rising demands. This presentation provides an overview of the progress made by CS GROUP regarding the PDGS deployed within the Sentinel-2B Production Service. It addresses several major challenges at the core of this evolution:
• Increase in Data Volumes: The exponential growth in data generated by satellites requires increasingly efficient data management and storage solutions. This involves optimizing resources and processes to handle the growing volumes of data effectively.
• Infrastructure Modernization: To meet increasing demands, it is crucial to modernize existing infrastructures. This includes upgrading equipment, integrating new technologies, and adapting systems to ensure better scalability and future-proofing.
• Fault Tolerance: The reliability of systems is vital to ensuring service continuity. Implementing fault-tolerant mechanisms and rapid recovery solutions is key to maintaining high availability and minimizing service interruptions.
• Reduced Downtime for Users: Minimizing downtime is crucial. This involves proactive fault management and quick interventions to ensure the data services remain available to users with as little disruption as possible.
• Reducing the Environmental Impact of Processing and Storage Infrastructures: Energy efficiency in data centers is becoming an imperative. Eco-friendly solutions are being implemented to reduce the carbon footprint and ensure sustainable resource management.
• New Markets and Uses: The development of the PDGS opens up new markets, particularly in Earth observation and spatial data management. It also enables the exploration of new applications that fully leverage the capabilities of the infrastructure.
The presentation defines key concepts such as efficiency in data management, the levers that influence efficiency, the means to measure it, the environmental impact of infrastructures, and fault-tolerant mechanisms. It also highlights the importance of the design phase and of continuous adaptation to new uses and market developments. This holistic approach ensures optimal data management while addressing current technological, economic, and environmental challenges.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Human-Centred AI for Cybersecurity in Earth Observation: Transparent and Reliable Systems for a Sustainable Future

Authors: Carmelo Ardito, Angela Lombardi, Domenico Lofù, Tommaso Di Noia, Eugenio Di Sciascio
Affiliations: Politecnico di Bari
The integration of Artificial Intelligence (AI) in Earth Observation (EO) systems is reshaping our capacity to monitor, analyze, and respond to the planet’s most pressing environmental challenges. By enabling more efficient data processing and enhanced decision-making, AI holds the potential to amplify the global value of EO data. However, alongside these opportunities arise significant risks, particularly in the domain of cybersecurity. As AI-driven systems become increasingly critical, they also represent potential vulnerabilities. Cyber-attacks exploiting these vulnerabilities can disrupt EO operations, compromise data integrity, and hinder environmental sustainability efforts. Addressing these challenges requires a novel approach to cybersecurity—one that integrates AI technology with human-centric design principles. Traditional AI systems often operate as "black boxes," providing outputs without offering clarity on the underlying decision-making processes. This lack of transparency can undermine user trust, increase the likelihood of errors, and reduce the effectiveness of cybersecurity interventions. The Human-Centred AI (HCAI) discipline addresses these issues by prioritizing transparency, explainability, and user collaboration [1]. It emphasizes the development of systems that make AI processes understandable to users through explanations presented in natural or visual language. These explanations are tailored to the user’s level of expertise, enabling stakeholders—whether cybersecurity analysts, EO operators, or decision-makers—to comprehend and evaluate AI-driven outputs. By making the underlying model behavior explicit, such systems promote safety, reliability, and trustworthiness. This not only strengthens the effectiveness of cybersecurity measures but also ensures that users are better equipped to respond to emerging threats. The implications of this approach extend beyond immediate operational benefits. 
Since 2020, when Ben Shneiderman coined the term, HCAI has been gaining momentum and has been adopted in the design of AI-based systems in many domains. However, to the best of our knowledge, its adoption in the field of cybersecurity remains largely unexplored. Here we propose to introduce HCAI as a framework to enhance cybersecurity in EO systems, with a specific focus on the pivotal task of malware attribution and classification. By safeguarding EO data and infrastructures, we contribute to the resilience of critical systems that underpin climate adaptation and mitigation efforts. For example, secure EO systems enable accurate environmental monitoring, disaster response coordination, and long-term sustainability planning. Conversely, breaches in these systems can lead to data tampering, supply chain inefficiencies, and misallocation of climate finance, all of which exacerbate environmental and social vulnerabilities. From a technical perspective, implementing HCAI principles in EO cybersecurity involves three key strategies:
1. Model Explainability: Ensuring that AI algorithms provide interpretable outputs. This can be achieved through methods such as feature attribution, decision trees, and saliency maps, which highlight the reasoning behind specific decisions.
2. Adaptive Explanations: Customizing explanations to align with the user's domain expertise and cognitive preferences. For instance, a cybersecurity analyst might benefit from technical details about algorithmic behavior, while an EO operator might require visual summaries or high-level insights.
3. Collaborative Interfaces: Designing interactive platforms that allow users to question, validate, and refine AI outputs. Such interfaces foster a sense of control and partnership, reducing the cognitive load associated with relying on automated systems.
These strategies not only enhance system usability but also address broader ethical concerns related to AI deployment.
By making AI systems accountable and interpretable, we align cybersecurity practices with Environmental, Social, and Corporate Governance (ESG) objectives. This alignment is critical in the EO domain, where the stakes are high, and the consequences of system failures are far-reaching. Moreover, adopting HCAI principles contributes to a more proactive and sustainable approach to cybersecurity. By embedding transparency and trust into system design, we reduce the risk of adversarial attacks, data leakage, and operational disruptions. This, in turn, supports the continuity of green operations and the effective management of environmental risks. Finally, this paper situates the HCAI framework within the broader context of cybersecurity and sustainability. It highlights the dual role of cybersecurity in protecting EO systems while advancing sustainability goals. By securing critical infrastructure and enabling reliable data sharing, cybersecurity becomes a key enabler of resilience, adaptation, and mitigation strategies. Furthermore, the integration of HCAI principles ensures that these efforts are inclusive, equitable, and aligned with the values of a sustainable future. The proposed approach bridges the gap between technical innovation and human needs, demonstrating how AI can be harnessed responsibly to address the interconnected challenges of cybersecurity and climate change. As EO systems continue to evolve, adopting HCAI principles will be essential to ensuring their safety, reliability, and trustworthiness—ultimately contributing to a more resilient and sustainable planet. [1] B. Shneiderman, "Human-centered artificial intelligence: Reliable, safe & trustworthy," International Journal of Human–Computer Interaction, vol. 36, pp. 495-504, 2020, DOI: 10.1080/10447318.2020.1741118
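To make the feature-attribution strategy above concrete, here is a minimal, model-agnostic sketch using permutation importance on a toy classifier: shuffling one feature at a time and measuring how much accuracy drops. The toy data, the `permutation_importance` helper, and the stand-in classifier are illustrative assumptions, not part of the authors' system.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic feature attribution: mean accuracy drop when a
    single feature column is shuffled, destroying its information."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)  # accuracy on intact data
    scores = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # in-place shuffle of column j
            drops.append(base - np.mean(predict(Xp) == y))
        scores.append(np.mean(drops))
    return np.array(scores)

# Toy "classifier" whose decision depends only on feature 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
model = lambda A: (A[:, 0] > 0).astype(int)
imp = permutation_importance(model, X, y)
print(imp.round(2))  # feature 0 should dominate
```

An analyst reading such scores can see which inputs actually drive a model's verdicts, which is the kind of transparency the HCAI framing calls for.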

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Manifest for AI and Automation in Sentinel-2 Data Processing Operations

Authors: Mr. Marco Merlonghi, Luca Evangelista, Roberta Bussadori, Roberto Balducci, Giampaolo Pejani, Luciano Rempicci
Affiliations: Capgemini
In the context of processing Sentinel-2 data from raw input to high-quality products, the integration of Artificial Intelligence (AI) and automation presents a transformative opportunity to significantly enhance operational efficiency. By leveraging AI and automation, various aspects of operations can be optimized, ultimately enabling an efficient, continuous, reliable and error-free provision of data production services.
- AI-Driven Performance Monitoring and System Optimization: One of the key areas where AI can improve operations is predictive analytics and system performance monitoring. By utilizing AI-driven predictive models, one can anticipate potential infrastructure issues before they arise. For instance, by analyzing historical performance data, machine learning algorithms can predict when certain components are likely to experience failures or slowdowns. This proactive approach allows for timely maintenance, reducing the risk of downtime and ensuring smoother operations. For example, an AI model might predict when specific servers are approaching maximum capacity or when cloud resources need to be scaled up to manage a peak in demand. Additionally, AI can be leveraged for anomaly detection, where machine learning models continuously monitor system performance in real time. These models can identify unusual behaviors, such as slowdowns in processing speeds, unexpected surges in data volume, or changes in processing times. Early detection allows operational teams to investigate potential issues before they escalate, ensuring that they do not affect the timely delivery of data.
- General Automation in Daily Operations: Beyond performance monitoring, automation plays a critical role in optimizing daily operations. AI and automation can streamline task management by automatically assigning tasks to team members based on factors like workload, system status, and priority. This intelligent resource allocation ensures that operations are conducted efficiently and without unnecessary delays. In addition, AI can be used to perform root-cause analysis in the event of system failures or delays. By examining historical system data, AI can identify patterns and correlations that may point to the underlying causes of issues. This allows the team to address problems faster and more effectively, reducing the impact on operations. Automation also plays a key role in dynamically scaling resources in a cloud environment. AI systems can monitor real-time processing demands and adjust resources, such as compute power and storage, accordingly. During periods of high demand, additional resources can be automatically allocated, while excess resources can be scaled back when demand subsides, ensuring optimal performance and cost efficiency. The examples mentioned are just some of the opportunities for automating tasks and operational processes, and AI-driven automation can be pushed to the point of eliminating most manual activities.
- Deliverables Customization and Service Reporting: AI can also enable the personalization of deliverables to meet specific needs. AI can help customize service reports, data formats, and processing details to match individual preferences. This tailored approach ensures that highly relevant and valuable deliverables are generated, improving satisfaction. Moreover, automation can streamline the document approval and review process. AI systems can route documents to the appropriate stakeholders, track progress through each review stage, and ensure timely approvals. The system can also flag discrepancies or errors in documents, helping to maintain the quality and accuracy of the deliverables. By optimizing these workflows, one can ensure that only the highest-quality documents are finalized and delivered in a timely manner.
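As a minimal illustration of the anomaly-detection idea described above, the sketch below flags points in a processing-time series whose deviation from a trailing window exceeds a z-score threshold. The window size, threshold, and synthetic data are illustrative assumptions; an operational system would use a trained ML model rather than this simple statistic.

```python
import numpy as np

def zscore_anomalies(series, window=24, threshold=3.0):
    """Flag points deviating from the trailing-window mean by more than
    `threshold` standard deviations -- a simple stand-in for ML-based
    anomaly detection on operational telemetry."""
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(series[t] - mu) > threshold * sigma:
            flags[t] = True
    return flags

# Hypothetical hourly product-generation times (minutes) with one slowdown.
rng = np.random.default_rng(0)
times = rng.normal(10.0, 0.5, 200)
times[150] = 20.0  # injected anomaly: a sudden processing slowdown
print(np.flatnonzero(zscore_anomalies(times)))
```

Flagged timestamps would then be routed to the operations team for investigation before they affect product delivery timeliness.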

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone V)

Poster: Designing the future world-scale storage for Space observation objects

Authors: Alessandro Di Felice, Olivier
Affiliations: Ovhcloud
The explosion of high-resolution data from Copernicus and other Earth Observation satellite missions is creating a critical need for innovative data storage and distribution strategies. As data volumes grow exponentially, current datacenters are struggling to keep pace, leading to delays and inefficiencies in data transmission and analysis. To address this challenge, OVHcloud proposes a new era of object storage, designed to enable efficient worldwide distribution of Exabyte-scale datasets while promoting sustainability and reducing our carbon footprint. Such innovations require intelligent, automated geo-distribution of objects through a single global endpoint, CDN-like delivery, and advanced analytics. Join us to explore the future of space object storage and learn how we are harnessing the power of innovation to overcome the challenges of the Exabyte era, while being mindful of our planet's well-being.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: B.02.09 - POSTER - The Role of Spaceborne Imaging Spectroscopy and Drone-based Calibration Data for Integrated Freshwater and Coastal Monitoring

Exponential growth in EO data volumes, advancements in drone technology, and the variety of data accessible via cloud-based platforms present new opportunities to develop novel integrated systems that leverage these capabilities. In particular, advances in new-generation spaceborne imaging spectrometers, such as PRISMA, EnMAP, DESIS and EMIT, and upcoming missions such as ESA's CHIME and NASA's SBG, can significantly improve applications such as water quality monitoring, especially when combined with near real-time in-situ water quality data streams, drone-based measurements and water quality forecasting tools.
This session will bring together water quality remote sensing scientists, modellers and data analytics experts to showcase and discuss approaches for using various types of remote sensing data, including imaging spectroscopy and drone imagery, to develop a fully integrated 'ground-to-space' data integration system that supports the production of 'decision-ready' information for water managers and communities dealing with increasing challenges in inland and coastal water quality worldwide.
The goal of the session is to focus on the benefits and challenges of integrating multiple sources of data (e.g., different Earth observation (EO) sources such as optical and radar, combining in-situ and/or drone measurements with EO datasets, or combining EO with modelling), rather than on a single EO data source or a single approach to producing actionable water quality products.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Validation of water-leaving reflectance from current hyperspectral space missions using automated HYPERNETS hyperspectral system in the turbid waters of Río de la Plata

Authors: Ana Ines Dogliotti, Estefanía Piegari, Lucas Rubinstein, Pablo Perna, Quinten Vanhellemont
Affiliations: Instituto de Astronomía y Física del Espacio (IAFE, CONICET/UBA), Instituto de Investigación e Ingeniería Ambiental (IIIA, CONICET/UNSAM), Laboratorio de Acústica y Electroacustica (LACEAC, FIUBA), Royal Belgian Institute of Natural Sciences (RBINS)
The high spatial and spectral resolution of current VNIR/SWIR imaging spectroscopy missions, like DESIS (DLR Earth Sensing Imaging Spectrometer), PRISMA (PRecursore IperSpettrale della Missione Applicativa), and EnMAP (Environmental Mapping and Analysis Program), offers a unique opportunity to characterize vegetation and biogeochemical processes across land and ocean, as well as to study inland waters. Hyperspectral data provide the capability to improve existing multi-spectral algorithms and to develop new products that support water quality monitoring, such as detecting phytoplankton bloom types and sun-induced fluorescence, and deriving optical water properties, suspended sediment type, bathymetry and bottom types. In order to retrieve accurate water products from hyperspectral missions, accurate performance of the atmospheric correction is essential. However, in coastal areas its performance is usually compromised by the optical complexity of the in-water and atmospheric environments. Therefore, validation of satellite-derived water reflectance across the whole spectrum in coastal waters using hyperspectral in situ data is essential. The automated HYPERNETS system, which uses the HYPSTAR (HYperspectral Pointable System for Terrestrial and Aquatic Radiometry) radiometer, collects data between 380 and 1020 nm with a FWHM of 3 nm on a routine basis, providing invaluable data for the validation of such hyperspectral satellite systems. One of these HYPERNETS systems was deployed at the end of a 1.1 km long jetty in the Río de la Plata waters, 60 km south of Buenos Aires (Argentina), and has been collecting data every 20 min since December 2021. The Río de la Plata is a large, shallow, funnel-shaped estuary with high concentrations of suspended particulate matter (100 to 500 g m−3, reaching 940 g m−3 in the maximum turbidity zone), making it a challenging and therefore ideal site to test atmospheric correction algorithm performance.
Moreover, this site is strategically located between a water intake and the active commercial harbour of La Plata city, where intense phytoplankton blooms (including blooms dominated by toxic cyanobacteria) have been recorded regularly since 2020, presenting human health risks and causing temporary problems at the water intake site. Given that the automatic system retrieves data with high temporal resolution (every 20 min) and in near-real time, it could be used to derive time series of useful in-water parameters (such as Chl-a and turbidity) to monitor water quality once the atmospheric correction and product algorithms have been validated. The performance of standard atmospheric correction schemes applied to PRISMA and EnMAP imagery is evaluated using simultaneous HYPSTAR data collected in the turbid waters of the Río de la Plata. The spectral shape of the water reflectance from the evaluated missions resembled that of the HYPSTAR in situ measurements, but it shows conspicuous spectral wiggles which should be addressed before applying second-derivative algorithms, for example to retrieve information such as phytoplankton pigments or groups. In general terms, the operational atmospheric correction for each mission tends to underestimate the in situ measurements, suggesting that further improvement of the atmospheric correction algorithms is needed.
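To illustrate why spectral wiggles matter for derivative-based retrievals, the sketch below applies a simple boxcar smoothing before taking a finite-difference second derivative of a synthetic spectrum. The smoothing width and the synthetic data are assumptions for the example; in practice a Savitzky-Golay filter is often preferred.

```python
import numpy as np

def second_derivative(wavelength, reflectance, smooth=5):
    """Finite-difference second derivative of a reflectance spectrum,
    after boxcar smoothing of width `smooth` so that small sensor
    'wiggles' do not dominate the derivative."""
    kernel = np.ones(smooth) / smooth
    r = np.convolve(reflectance, kernel, mode="same")
    d1 = np.gradient(r, wavelength)
    return np.gradient(d1, wavelength)

# Synthetic example: a smooth spectrum plus high-frequency wiggles.
wl = np.arange(400.0, 900.0, 3.0)           # nm, ~3 nm sampling like HYPSTAR
spectrum = np.exp(-((wl - 560.0) / 80.0) ** 2)
noisy = spectrum + 0.02 * np.sin(wl / 2.0)  # artificial spectral wiggles
d2_noisy = second_derivative(wl, noisy, smooth=1)   # no smoothing
d2_smooth = second_derivative(wl, noisy, smooth=9)  # wiggles suppressed
print(np.abs(d2_noisy).max(), np.abs(d2_smooth).max())
```

The unsmoothed second derivative is dominated by the wiggles, which is why such artefacts must be removed before pigment or group retrievals based on derivative spectroscopy.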

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Transforming Coastal Pollution Monitoring with AI and Drones

Authors: Enrique Fernández-Laguilhoat Sánchez-Biezma, Sergey Sukhanov, Dr. Ivan Tankoyeu
Affiliations: Flypix Ai Gmbh
1. Introduction: The Coastal Trash Problem
Coastal zones are among the most ecologically sensitive and economically vital areas on Earth, yet they are increasingly under threat from pollution, particularly from litter and trash. Marine debris, including plastics, metals, and other artificial waste, poses significant risks to marine biodiversity, disrupts ecosystems, and impacts the livelihoods of coastal communities. Addressing these challenges requires innovative solutions that can provide actionable insights into the sources, distribution, and composition of waste. While traditional monitoring methods often lack scalability and efficiency, advancements in drone technology and AI now offer new opportunities for systematic coastal monitoring. These technologies enable the detection, classification, and mapping of waste over large areas, providing critical data to support targeted cleanup efforts and inform sustainable waste management policies.
2. The FlyPix AI Platform
The FlyPix AI platform [2] is an end-to-end solution designed to address the growing challenge of coastal pollution. Leveraging drone imagery, the platform detects, classifies, and georeferences trash objects on orthophoto imagery, offering unparalleled precision and scalability. The system features two core AI models: a multi-class object detection model for identifying specific types of waste and a binary classification model that distinguishes artificial objects from natural backgrounds. Users can interact with the results via an intuitive dashboard, which integrates GIS-based visualization tools to highlight waste hotspots and spatial patterns. This platform not only provides insights into the type and quantity of waste but also offers advanced clustering capabilities to identify areas of high waste density. By transforming raw data into actionable intelligence, FlyPix AI empowers researchers, NGOs, and policymakers to tackle coastal pollution with precision and efficiency.
3. Description of the Dataset
The development of the FlyPix AI system was underpinned by a comprehensive dataset consisting of over 22,000 high-resolution (1.5 cm per pixel) drone images. These images were meticulously annotated to include 86,345 instances across 34 waste classes, such as plastic bags (12.23% of annotations), scrap metal (11.73%), and rare categories like glass bottles (0.09%). The dataset spans a diverse range of coastal environments, capturing variations in lighting, terrain, and vegetation. This diversity ensured robust model training capable of handling real-world complexities. The annotations covered a wide range of trash categories, from biodegradable materials to construction debris, providing a holistic understanding of coastal waste. This rich dataset formed the foundation for training highly accurate AI models and remains a valuable resource for advancing environmental monitoring solutions.
4. Training and Post-Processing
The multi-class detection model was trained using state-of-the-art machine learning techniques, with YOLO (You Only Look Once) [1] emerging as the best-performing framework. Training parameters included a learning rate of 0.03 with warmup, 12 epochs, and a batch size of 10. The training process incorporated stratified data splits to ensure balanced class representation, enabling accurate detection even for underrepresented categories. Post-processing involved techniques to enhance detection accuracy, including the filtering of outliers and radial clustering to group detected objects within a 5-meter radius. These steps ensured spatial relevance and reduced noise in the results. Additionally, the binary classification model was developed to distinguish artificial objects from natural backgrounds, complementing the multi-class model by providing a streamlined approach for identifying waste hotspots.
5. Results and Metrics Achieved
The models demonstrated exceptional performance during validation and field testing across diverse locations, including the Canary Islands, the Red Sea coast, and the Persian Gulf coast of the Arabian Peninsula. We employ classical metrics to measure prediction quality:
• Precision measures how many of the detected items were correct;
• Recall measures how many of the actual items were detected;
• F1 is the balance between the two, giving an overall effectiveness score.
The multi-class detection model achieved an average precision of 0.361, an average recall of 0.475, and an average F1 score of 0.420 across the 34 classes. These metrics reflect the system's capability to identify and classify trash in diverse coastal environments. Precision and recall varied across waste classes, with notable performance for classes such as "tire", which achieved high precision (67.3%) and recall (85.4%). The "jersey_barrier" class also performed well, with a precision of 71.2% and a recall of 90.7%, leading to a robust F1 score of 79.8%. These results demonstrate the model's effectiveness in detecting larger, more distinct objects. Other classes like "plastic_bag" exhibited moderate metrics, with a precision of 48.8% and a recall of 38.7%, highlighting the inherent challenges of detecting lightweight, scattered objects under complex conditions. Additionally, the binary classification model, which distinguishes artificial objects from natural backgrounds, achieved an impressive F1 score of 0.89, showcasing its effectiveness in streamlining the detection of waste in coastal areas. Beyond individual detections, the system's radial clustering performance was also evaluated to assess its ability to group detected objects into waste hotspots.
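The 5-metre radial grouping used in post-processing can be approximated with a simple greedy pass over the georeferenced detections. The exact algorithm used by the platform is not specified, so this sketch (coordinates in a projected metric CRS, assignment to the first nearby seed) is an assumption.

```python
import math

def radial_clusters(points, radius=5.0):
    """Greedy radial clustering: each detection joins the first cluster
    whose seed point lies within `radius` metres, otherwise it seeds a
    new cluster. Points are (x, y) in a metric (projected) CRS."""
    seeds, clusters = [], []
    for p in points:
        for i, s in enumerate(seeds):
            if math.dist(p, s) <= radius:
                clusters[i].append(p)
                break
        else:
            seeds.append(p)
            clusters.append([p])
    return clusters

# Hypothetical georeferenced detections (metres in a local projection).
detections = [(0, 0), (1, 2), (3, 1), (40, 40), (42, 39), (100, 5)]
groups = radial_clusters(detections, radius=5.0)
print([len(g) for g in groups])  # → [3, 2, 1]
```

Cluster sizes from such a pass are what the hotspot probabilities below are computed over: larger groups indicate high-density waste areas worth prioritizing for cleanup.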
The results demonstrate high probabilities of detecting clusters of significant size, particularly for common waste types like "tire" and "plastic_bag". For instance, the probability of finding a cluster of tires of size at least 3 reached 94.23%, and the "brick_holes" class reached a 99.82% probability of detection for groups of 10 or more objects, with similar results observed for other classes such as "wooden_panel" and "laminate". For smaller cluster sizes (e.g., 2 or 3), the detection probability was lower but improved consistently with cluster size. This clustering capability is crucial for identifying high-density waste areas, which can be targeted for cleanup and mitigation efforts. These results underscore the system's adaptability, precision, and transformative potential for coastal monitoring and waste management initiatives. The clustering functionality, in particular, enhances the platform's ability to provide actionable insights, aiding researchers and NGOs in identifying and prioritizing intervention areas.

6. Access for NGOs and Researchers
Recognizing the importance of collaboration in addressing coastal pollution, FlyPix AI offers free access to its platform for NGOs and the research community. This initiative aims to democratize access to cutting-edge AI tools, empowering stakeholders to use advanced detection and georeferencing capabilities in their environmental projects. Researchers can leverage the platform to reproduce the results, analyze spatial patterns, generate new datasets, assess the effectiveness of cleanup efforts, and produce data-driven insights for policy advocacy. NGOs can deploy the system in their operational areas to streamline waste management strategies and focus resources on critical hotspots.
By fostering a collaborative approach, FlyPix AI contributes to global efforts in coastal sustainability and underscores its commitment to creating impactful, accessible solutions for a cleaner planet.

References:
1. Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. "YOLOv4: Optimal Speed and Accuracy of Object Detection." arXiv, 2020.
2. FlyPix AI: https://flypix.ai/

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Technological Readiness Of Complementary Shipborne, Airborne and Citizen-Operated Hyperspectral Aquatic Reflectance Data Collection

Authors: Stefan Simis, Tom Jordan, Aser Mata, Olivier Burggraaff, Norbert Schmidt, Frans Snik
Affiliations: Plymouth Marine Laboratory, National Physical Laboratory, Leiden University, DDQ Pocket Science
In situ radiometry has provided substantial contributions to the validation of satellite imagery and development of novel optical water quality algorithms in the past decades. With rapidly advancing hyperspectral and high spatial resolution capabilities of spaceborne sensors, there is a growing need to characterise the performance of water-leaving reflectance and derived variables across spatial scales and in environments that were until now considered out of scope for remote sensing. Examples of such environments are small bays and lakes, upstream reaches of rivers, and coastal waters adjacent to shore. Major challenges in these environments include observing intertidal areas, access to private land, bottom reflectance and the effects of adjacent land on the remotely sensed radiance. As global observation systems have matured across ocean to inland water quality products, there also remain large geographic data gaps in public records of hyperspectral aquatic reflectance, resulting in relatively poor insight into product uncertainty. This is particularly problematic in optically complex waters, and limits confidence in using Earth observation records in water management, early warning of hazards, and mitigation of climate change impacts. Further improvements are then particularly needed in terms of capital and operational cost, a wider range of deployment platforms, and data interoperability to ensure broad and sustained access. Here, we provide an overview of our recent developments to improve spatiotemporal coverage of above-surface aquatic hyperspectral reflectance measurements, by developing these for moving ships, airborne drones, and smartphones equipped with hyperspectral optics. These complementary approaches provide observation capability from the microscale (lakes, rivers) to large optical gradients connecting coastal water bodies to open oceans. 
We comparatively address three technologies:
- The Solar Tracking Radiometry (So-Rad) platform, deployed autonomously on ships and rafts, with a focus on data post-processing for quality control and system interoperability.
- Ongoing validation of iSPEX 2, a low-cost device designed to transform smartphones into hyperspectral spectropolarimeters for use by non-experts. Our work focuses on characterizing measurement uncertainty and deriving robust optical indicators of water quality.
- Experiments with nadir hyperspectral radiometry from drones, particularly to observe land-water boundaries within the bounds of satellite pixels and to characterize atmospheric correction uncertainties.
These technologies have in common that they are open source, welcoming community collaboration. A further advantage is the provision of a data processing workflow and infrastructure for instant sharing of L0 to L2 records, with user-configurable quality control and data extraction methods following open data standards. We will provide an overview of current capability, planned improvements, and ongoing experiments in data collection.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Assessing Cyanobacteria Concentration with Machine Learning and Hyperspectral In-Situ Data: Implications for Remote Sensing

Authors: Jorge García Jiménez, Ana Ruescas, Julia Amorós, Katalin Blix
Affiliations: Brockmann Consult, Universitat de València, UiT The Arctic University of Norway
Cyanobacteria Harmful Algal Blooms (cHABs) represent a critical challenge to aquatic ecosystems, water quality, and human health. Monitoring these blooms increasingly relies on integrating satellite and in situ observations for early detection and quantification. Chlorophyll-a (Chl-a) and Phycocyanin (PC) are key biomarkers for cyanobacteria monitoring; Chl-a is ubiquitous among phytoplankton, while PC is specific to cyanobacteria. Despite advancements in satellite remote sensing, distinguishing cyanobacteria from other phytoplankton remains challenging, particularly in optically complex waters. Hyperspectral sensors offer enhanced bio-optical retrievals and phytoplankton functional type (PFT) identification but present challenges such as high data dimensionality and computational demands. This study leverages collocated radiometric and in situ spectrofluorometric data from two databases: diverse inland waters in Germany (SpecWa) and Lake Balaton in Hungary. These datasets span a wide range of optical and trophic conditions, providing a robust foundation for algorithm development. A machine learning (ML) model was developed using the full hyperspectral spectrum, with a novel focus on the loss function design. The Cyano:Chl-a ratio was included as a biological constraint within the loss function to guide the model and emphasize the relationship between the two variables. Explainability techniques, including permutation importance analysis, further validated the model’s adherence to bio-optical principles and interpretability. Results demonstrate that while individual parameters (e.g., cyanobacteria and Chl-a concentrations) were accurately retrieved, the Cyano:Chl-a ratio revealed substantial differences between constrained and unconstrained models. When the ML algorithm is left unconstrained, it may rely on spurious correlations or context-dependent environmental factors that are coincidentally present in the training data but do not hold across diverse conditions. 
This can lead to inconsistencies in the predicted relationship between cyanobacteria and Chlorophyll-a, particularly in optically complex waters or systems with different trophic states. By contrast, the constrained model provided more robust predictions aligned with ecological expectations. Compared to band-limited and semi-empirical methods, the hyperspectral ML approach, enhanced by the biologically constrained loss function, captured subtle bio-optical interactions and reduced biases associated with band selection. These findings highlight the potential of hyperspectral data, combined with domain-specific ML constraints, to advance cHAB monitoring in diverse and optically complex aquatic systems.
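A minimal sketch of such a biologically constrained loss, assuming a simple squared-error form with the Cyano:Chl-a ratio penalty weighted by a hypothetical coefficient `lam` (the authors' exact loss design is not specified here):

```python
import numpy as np

# Hedged sketch of a biologically constrained loss in the spirit described
# above: besides fitting cyanobacteria and Chl-a individually, the model is
# penalised when the predicted Cyano:Chl-a ratio drifts from the observed
# ratio. The weight `lam`, the epsilon guard, and the squared-error form
# are illustrative assumptions, not the authors' implementation.

def constrained_loss(pred_cyano, pred_chla, obs_cyano, obs_chla,
                     lam=1.0, eps=1e-6):
    mse_cyano = np.mean((pred_cyano - obs_cyano) ** 2)
    mse_chla = np.mean((pred_chla - obs_chla) ** 2)
    # Ratio term: ties the two retrievals together biologically.
    ratio_pred = pred_cyano / (pred_chla + eps)
    ratio_obs = obs_cyano / (obs_chla + eps)
    mse_ratio = np.mean((ratio_pred - ratio_obs) ** 2)
    return mse_cyano + mse_chla + lam * mse_ratio

# Illustrative concentrations (not the SpecWa / Lake Balaton data):
obs_c, obs_h = np.array([1.0, 2.0]), np.array([2.0, 4.0])
pred_c, pred_h = np.array([1.2, 1.8]), np.array([2.1, 4.2])
print(constrained_loss(pred_c, pred_h, obs_c, obs_h))
```

Setting `lam=0` recovers an unconstrained fit, which is the comparison the abstract draws.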

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Multi-scale Monitoring of Water Quality in a Phytoplankton Carrying, European River - a case study of the Moselle

Authors: Tobias Brehm, Marco Herrmann, Daniel Koch, Dr. Franziska Klotz, Dr. Julia Kleinteich, Christopher Nicholls, Dr. Thomas Hoffmann, Dr. Björn Baschek
Affiliations: German Federal Institute Of Hydrology
River management and, for example, the European Water Framework Directive require comprehensive monitoring of inland water bodies, yet traditional water quality assessment methods using probes and laboratory analyses are time-consuming, expensive, and fail to capture the large-scale dynamic nature of river systems. Remote sensing offers a promising additional data source, though river width constrains sensor selection and often necessitates integrating multiple data sources. In addition, this enables the investigation of small-scale effects in satellite images using additional sensors with higher resolution. The MeskalMon project (multi-scale monitoring in rivers using remote sensing and in-situ methods for the parameters chlorophyll and suspended matter) develops an innovative approach that combines in-situ measurements with remote sensing data from various platforms, including a hyperspectral sensor mounted on bridges, a spectrometer, a multispectral UAS sensor, and multispectral satellite imagery. The research focuses on characterizing the spatial variability and spectral interactions of chlorophyll-a and turbidity through a comprehensive monitoring strategy. Measurement campaigns on the river Moselle during 2022-2024 employed diverse sampling techniques, including longitudinal, cross-profile, and vertical measurements in the water column. By applying indices such as the Normalized Difference Chlorophyll Index (NDCI), we have facilitated comparability between different spatial resolutions, data acquisition methods, and platforms. Preliminary results show promising agreement between satellite, camera, spectrometer, and in-situ measurement methods. Our findings indicate that vertical mixing in the river Moselle is of lesser importance, which makes the remote sensing data representative despite a limited penetration depth that is influenced by many factors.
More decisive is the composition of different algae groups in the water and, for example, the detection of scum formation in the case of intensive cyanobacterial blooms. Analysis of multispectral satellite data (here Sentinel-2) shows good results for Chl-a and turbidity quantification. For example, validating different atmospheric corrections in combination with various algorithms for deriving chlorophyll-a from satellite data using in-situ measurements, we achieve determination coefficients of up to 0.79. Limitations arise if algae types vary or high Chl-a concentrations are accompanied by high turbidity. The analyses demonstrated the intricate optical interactions within aquatic environments, highlighting the challenges of accurately distinguishing and measuring water quality indicators through remote sensing techniques, showing advantages of hyperspectral methods. Our research revealed significant variations in the performance of chlorophyll-a algorithms and indices depending on the mix of algal groups present in the water. The current spectral bands available on the used UAS-sensor and the satellites proved insufficient for differentiating algal groups. However, the upcoming CHIME mission provides new opportunities for a more detailed analysis of aquatic ecosystems in the necessary spatial, spectral and temporal resolution and demonstrates the potential of advanced remote sensing technologies. This research provides a novel, integrated framework for remote sensing based water quality monitoring that overcomes some limitations of traditional monitoring methods. It represents a significant step towards more dynamic, comprehensive, and efficient environmental monitoring strategies, with future research poised to leverage emerging satellite technologies for more nuanced ecological insights.
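The NDCI used in this work has a simple closed form; the sketch below computes it from red-edge (~705 nm, Sentinel-2 band 5) and red (665 nm, band 4) reflectances, with illustrative values rather than Moselle measurements:

```python
import numpy as np

# Normalized Difference Chlorophyll Index (NDCI):
#   NDCI = (R_rededge - R_red) / (R_rededge + R_red)
# For Sentinel-2 this uses band 5 (~705 nm) and band 4 (665 nm).

def ndci(red_edge: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (red_edge - red) / (red_edge + red)

b4 = np.array([0.020, 0.015])  # example red reflectances
b5 = np.array([0.030, 0.012])  # example red-edge reflectances
print(ndci(b5, b4))  # higher values indicate more chlorophyll-a
```

The same index can be computed from any sensor offering comparable red and red-edge bands, which is what enables the cross-platform comparison described above.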

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Shedding light on biological monitoring in the Baltic Sea

Authors: Bronwyn Cahill, Anke Kremp, Christiane Hassenrück, Natalie Loick-Wilde, Jerome Kaiser, Eefke van der Lee, Thorger Brüning
Affiliations: Leibniz Institute For Baltic Sea Research Warnemuende, Federal Maritime and Hydrographic Agency
The Baltic Sea, with its strong salinity gradient, large areas of anoxic bottom water and intensive anthropogenic use, is characterised in large parts of its biosphere by low biodiversity, both naturally and due to anthropogenic pressures. Changing climate and increased frequency of extreme events exert further pressure on this delicate ecosystem, leading to changes in phenology of phytoplankton communities and mismatches in food web interactions, with unclear consequences for trophic transfer and uncertainty about its future stability. In response to this challenge, the Leibniz Institute for Baltic Sea Research Warnemünde and the Federal Maritime and Hydrographic Agency are developing an interdisciplinary research concept to enhance ecosystem monitoring in the western Baltic Sea. The concept builds on traditional biological monitoring techniques and established programmes and integrates hyperspectral in situ and remotely sensed observations with bio-optical water quality modelling, organismal data from eDNA, phytoplankton functional types, and lipid biomarkers for phytoplankton biomass for different ecological applications within the Baltic Sea. Our focus is on workflows which leverage reflectance-based approaches to develop indicators of change in phytoplankton biodiversity in response to climate change as well as anthropogenic influences (e.g., eutrophication, marine heatwaves) by empirically associating diagnostic reflectance features to the taxonomic and functional composition of phytoplankton assemblages. By including biogeochemical proxy records from past climate periods in our analysis, we connect across different temporal and spatial scales, and look to unravel drivers of past changes and how these may inform present and future changes. 
We present a case study in the western Pomeranian region of the Baltic Sea which prototypes a workflow integrating Copernicus Sentinel-2 water quality products, hyperspectral in situ optical measurements, and bio-optical water quality modelling. The aim is to establish a holistic ecosystem observing system which optimizes the use of existing data with new satellite data sources and provides a framework towards operationalising indicators for management directly relevant for implementing, e.g. the Marine Strategy Framework Directive (MSFD) and the HELCOM Baltic Sea Action Plan, thus significantly enhancing our capacity to rapidly detect changes in the state of phytoplankton communities, emerging invasive species and pathogens.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: An Evaluation of the ACOLITE Atmospheric Correction Algorithm at the Integrated Marine Observing System Lucinda Jetty Coastal Observatory

Authors: Andrew Prata, Robert Woodcock, Thomas Schroeder, Nagur Cherukuru, Janet Anstee, Matt Paget
Affiliations: CSIRO Environment, CSIRO Space and Astronomy, CSIRO Environment, CSIRO Environment
Satellite observations represent a powerful tool to assess water quality frequently and at a global scale. Algorithms developed for multi-spectral imaging sensors are often used to determine water quality indicators relating to ocean colour such as turbidity, chlorophyll concentrations, phytoplankton blooms and sediment plumes. However, the water-leaving signal required by these algorithms represents only a small percentage of the total signal measured by space-borne passive sensors. An important and necessary step to convert satellite-measured top-of-the-atmosphere reflectance to bottom-of-atmosphere (or surface) reflectance for aquatic applications is atmospheric correction. The goal of atmospheric correction is to account for and remove the atmospheric signal, which comprises scattering and absorption from atmospheric molecules, gases and aerosols. Here we evaluate the ACOLITE atmospheric correction algorithm using in situ radiometric measurements made at the Integrated Marine Observing System (IMOS) Lucinda Jetty Coastal Observatory (LJCO), Queensland, Australia. Using CSIRO’s Earth Analytics Science and Innovation (EASI) cloud computing platform, we have applied the ACOLITE atmospheric correction algorithm to the entire archive of Operational Land Imager (OLI; onboard Landsat-8 and -9) and Multispectral Instrument (MSI; onboard Sentinel-2A and -2B) measurements for all satellite overpasses intersecting LJCO. The in situ datasets used in our evaluation include the AERONET-OC SeaPRISM multispectral sun photometer and continuous measurements from the Satlantic HyperOCR hyperspectral ocean colour radiometer system. Overall, we find good agreement between the in situ data and the satellite measurements for the coastal aerosol, blue, green and red bands of OLI and MSI. 
The HyperOCR dataset generally resulted in better correlations to the satellite retrievals when compared to AERONET-OC, probably due to the less precise matchups in time afforded by the AERONET-OC data. Comparing correlations amongst the different bands, we find poorest performance (R ~0.70-0.80) for the coastal aerosol band (443 nm) and best performance (R>0.90) for the green (560 nm) band for both in situ datasets. Our results highlight the critical importance of continuous in situ radiometric measurements such as those routinely carried out at LJCO to increase match-up numbers. First results are presented.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Mapping Aquatic Vegetation in Lakes Using Drone and Satellite Imagery with Machine Learning Models

Authors: Dalia Grendaitė, Yahor Levachou, Lina Galinskaitė, Vaidotas Valskys
Affiliations: Vilnius University, Nature Research Centre
Aquatic vegetation, particularly macrophytes, serves as a critical indicator of freshwater ecosystem health, reflecting trophic conditions such as nutrient levels and long-term environmental changes. These plants influence littoral zone structure, respond to environmental shifts, and signal ecosystem changes due to anthropogenic pressures, including eutrophication and biodiversity loss. Traditional monitoring methods for macrophytes, though comprehensive, are resource-intensive, while remote sensing offers a more efficient alternative for timely and extensive ecosystem tracking. Remote sensing techniques leverage spectral characteristics to identify macrophyte types, detect invasive species, and monitor plant growth phases. Challenges include differentiating submerged macrophytes from phytoplankton or terrestrial vegetation, particularly in high-nutrient waters or along shorelines. Multitemporal and high-resolution data enhance species discrimination by capturing phenological spectral fingerprints. This study utilizes drone imagery and machine learning to develop models for identifying aquatic vegetation genera, exploring applications of both high-resolution (PlanetScope) and medium-resolution (Sentinel-2) satellite data for ecosystem monitoring. We studied five natural lakes in the Lithuanian section of the Nemunas River basin, varying in trophic state (mesotrophic to eutrophic) and size (2.62 ha to 941.43 ha). A Phantom 4 unmanned aerial vehicle equipped with a 12-megapixel camera was used to capture high-quality images. From these images, we identified six aquatic plant genera: Nuphar, Nymphaea, and Potamogeton (floating), and Phragmites, Schoenoplectus, and Typha (emergent). These were used to train machine learning algorithms, with separate models developed using PlanetScope and Sentinel-2 satellite data from April to September 2023. 
The largest interspecies differences were observed in the red-edge and near-infrared (NIR) spectral bands (740–865 nm), which were key in distinguishing species. Maximum vegetation reflectance occurred in August, while visible wavelengths were less effective for classification due to strong chlorophyll absorption. Emergent species (Phragmites and Schoenoplectus) exhibited the highest reflectances. Variable importance analysis for the random forest model reflected these findings, though the single NIR band in PlanetScope SuperDove satellites limits discrimination to mostly visible wavelengths. We evaluated multiple classification models, including XGBoost, support vector machines, random forest, neural networks, gradient boosting machines, and k-nearest neighbours, using accuracy, kappa, recall, and F1 metrics. The random forest algorithm demonstrated the best performance with Sentinel-2 data, while both random forest and k-nearest neighbours performed comparably well with high-resolution PlanetScope data. Due to its superior accuracy and interpretability across datasets, we selected the random forest classifier for further analysis. For both datasets, NIR reflectance across months was the most critical variable, followed by the green band. The models were applied to the full lake areas for 2023. Results using high-resolution PlanetScope data indicate that Phragmites, an invasive emergent genus, was the dominant emergent genus (up to 32% of lake area), while floating genera were dominated by either Nuphar or Potamogeton, depending on the lake (up to 14% and 21%, respectively). These models can be extended to other mid-latitude lakes with similar aquatic vegetation structures. The high-resolution data provided detailed spatial insights into genera distribution, offering valuable information for aquatic ecosystem health assessments and improved water resource management. 
Monitoring the spread of invasive species such as Phragmites is critical for preventing their ecological and economic impacts, and the developed models can guide targeted interventions and sustainable management strategies for lake ecosystems. Our study highlights the value of combining high-resolution drone and satellite imagery with machine learning to advance monitoring approaches for lake ecosystems. This integration demonstrates how multi-source data can inform effective management strategies for addressing ecological challenges such as invasive species and supporting sustainable freshwater resource use.
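Two of the evaluation metrics used in this study, overall accuracy and Cohen's kappa, can be sketched in a few lines; the labels below are illustrative, not the study's classifications:

```python
import numpy as np

# Hedged sketch of two classification metrics named above: overall
# accuracy and Cohen's kappa (agreement corrected for chance).

def accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def cohen_kappa(y_true, y_pred):
    """kappa = (p_o - p_e) / (1 - p_e)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.union1d(y_true, y_pred)
    p_o = np.mean(y_true == y_pred)  # observed agreement
    # Expected chance agreement from the marginal class frequencies.
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return float((p_o - p_e) / (1 - p_e))

# Illustrative labels (e.g. 0 = water, 1 = floating, 2 = emergent):
y_ref = [0, 0, 1, 1, 2, 2]
y_hat = [0, 0, 1, 2, 2, 2]
print(accuracy(y_ref, y_hat), cohen_kappa(y_ref, y_hat))
```

Kappa falls below accuracy whenever part of the agreement is attributable to chance, which is why both are reported when comparing classifiers.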

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Validation of atmospheric correction approaches of PACE imagery using ship-based radiometry across the coastal and open ocean Atlantic

Authors: Xavier Warren, Michael Cappola, Tom Jordan, Gavin Tilstone, Dr. Heidi
Affiliations:
The OCI instrument on NASA’s PACE satellite mission provides global ocean-color observations that continue the multi-spectral climate-quality record from SeaWiFS, MODIS and VIIRS into the hyperspectral regime. New hyperspectral field data are required over a variety of trophic states, from productive coastal waters to low-productivity ocean gyres, to validate spectral retrievals at visible to near-infrared wavelengths. Here, we present ship-based above-water radiometry collected in 2024-2025 in coastal Connecticut and Rhode Island ecosystems as part of the Connecticut Initiative on Environmental Research of Offshore Wind (CIEROW) project and aboard the Atlantic Meridional Transect (AMT31) cruise from Southampton, UK to Buenos Aires, Argentina. Ancillary optical data, including HPLC pigments and hyperspectral backscattering, were also collected. Remote sensing reflectance (Rrs) data from the solar-tracking Satlantic Hyperspectral Surface Acquisition System (HyperSAS) radiometer, spanning 350 to 800 nm, were processed with the HyperCP software using various configurations for handling sea-surface-reflected skylight. Satellite Rrs match-ups from the PACE mission were obtained for cloud-free and mask-free 3×3 pixel neighborhoods centered at the nominal in situ station coordinates, using a ±3 hour window around the time of satellite overpass. Vicariously calibrated Rrs and chlorophyll-a match-up data will be presented from green nearshore coastal water to blue open-ocean water. Match-ups will be conducted using the standard PACE atmospheric correction algorithm, an adaptation of NASA’s heritage algorithm, and potentially other available correction routines. Statistical approaches include Type II regression to obtain slope and intercept values between field and satellite data, unbiased percent difference, and mean percent difference for selected blue, green and red wavelengths, aggregated across all available visible/NIR wavelengths.
We will use ancillary data to discuss the environmental conditions leading to higher uncertainty in the match-ups.
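The match-up statistics named above can be sketched as follows, using one common ocean-colour formulation (the authors' exact definitions may differ): unbiased percent difference normalises by the mean of the paired values, while mean percent difference normalises by the in situ value.

```python
import numpy as np

# Hedged sketch of two match-up statistics. These are common forms found
# in ocean-colour validation work, assumed here for illustration.

def unbiased_percent_difference(sat, insitu):
    sat, insitu = np.asarray(sat, float), np.asarray(insitu, float)
    # Symmetric: normalised by the mean of satellite and in situ values.
    return float(np.mean(200.0 * np.abs(sat - insitu) / (sat + insitu)))

def mean_percent_difference(sat, insitu):
    sat, insitu = np.asarray(sat, float), np.asarray(insitu, float)
    # Signed: normalised by the in situ (reference) value.
    return float(np.mean(100.0 * (sat - insitu) / insitu))

sat_rrs = [0.0042, 0.0036]   # example satellite Rrs (sr^-1)
situ_rrs = [0.0040, 0.0040]  # example in situ Rrs (sr^-1)
print(unbiased_percent_difference(sat_rrs, situ_rrs))
print(mean_percent_difference(sat_rrs, situ_rrs))
```

The unsigned UPD summarises scatter, while the signed MPD exposes systematic bias, which is why both are typically reported per band.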

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: ACOLITE/RAdCor: A Generic Adjacency Correction Algorithm For Inland And Coastal Water Applications

Authors: Quinten Vanhellemont, Alexandre Castagna
Affiliations: RBINS, UGent
We present the Remote Sensing Adjacency Correction (RAdCor; Castagna and Vanhellemont, submitted) algorithm as implemented in the open-source ACOLITE processor. RAdCor is designed to explicitly correct adjacency effects in optical satellite imagery and to provide adjacency-corrected surface or top-of-atmosphere reflectance. The current implementation can process metre- and decametre-scale satellite sensors, in particular those on board the Sentinel-2, Landsat, and Planet SuperDove constellations. Adjacency effects are caused by atmospheric blurring of the surface signal and occur most noticeably for turbid atmospheres and in spectral regions with large spectral contrast. Over the length scale of the atmospheric blurring effect, a positive bias is observed over dark surfaces and a negative bias over bright surfaces. For example, in the near-infrared (NIR), water reflectance is typically very low compared to land reflectance; if adjacency effects are not treated, the elevated NIR reflectance causes issues for either the atmospheric correction or the parameter retrieval algorithms. While the effect is typically larger in the NIR bands, a spectrally variable bias is expected in all bands over water, with its sign and magnitude depending on the water properties and the properties of the surrounding surfaces. RAdCor processing consists of two main parts: (1) the TOA-Surface Dark Spectrum Fitting (TSDSF) algorithm, which estimates aerosol properties from the remote sensing imagery itself under the presence of adjacency effects, and (2) RAdCor proper, which performs the actual atmospheric correction step, including a deconvolution of the atmospheric blurring effect. The required convolution and deconvolution steps are performed in the frequency domain, greatly speeding up processing by avoiding iteration over image elements.
By default, the RAdCor processing is preceded by the TSDSF for estimating the aerosol properties, but can be configured to use ancillary inputs, or estimate the aerosol properties by optimisation to a known surface reference spectrum. This optimisation can be performed using an in situ measured reference, or other spectral assumptions such as zero NIR or SWIR reflectance. We present a validation of both TSDSF and RAdCor in Belgian waters, provide examples from around the globe, and detail some of the processor options now available in ACOLITE.
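As a hedged illustration of the frequency-domain approach described above (a toy 1-D sketch with an assumed kernel and regularisation constant, not the RAdCor implementation), blurring by a point spread function becomes a multiplication in Fourier space, so it can be undone by a regularised division:

```python
import numpy as np

# Toy 1-D "scene" with two bright spikes and a simple circular blur kernel.
# Kernel weights and the regularisation constant are illustrative choices.
signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.5, 0.0])
psf = np.array([0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2])

S, H = np.fft.fft(signal), np.fft.fft(psf)
blurred = np.real(np.fft.ifft(S * H))  # forward model: atmospheric blurring

# Deconvolution: regularised (Wiener-style) division in the frequency
# domain, avoiding any iteration over image elements.
eps = 1e-8
restored = np.real(
    np.fft.ifft(np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps))
)

print(np.allclose(restored, signal, atol=1e-3))
```

The regularisation term guards against division by near-zero frequency components; in 2-D imagery the same idea applies with 2-D FFTs, which is what makes the frequency-domain route fast.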

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: Performance assessment of the Aquaverse Atmospheric Correction Algorithm on HYPSO-1 Inland and Coastal Water Images

Authors: Dr Vishnu Perumthuruthil Suseelan, Dr Akash Ashapure, Dr Joseph Landon Garrett, Dr Sivert Bakken, Dr Tor Arne
Affiliations: Department of Engineering Cybernetic, NTNU, NASA Goddard Space Flight Center, Department of Engineering Cybernetic, NTNU, Department of Engineering Cybernetic, NTNU, Department of Engineering Cybernetic, NTNU
Remote sensing of inland and coastal waters is critical for monitoring water quality, detecting phytoplankton blooms, and providing early warning of Harmful Algal Blooms (HABs) to the fishing and aquaculture industries. For this reason, the Norwegian University of Science and Technology (NTNU) developed the Hyperspectral Small Satellite for Ocean Observation (HYPSO) mission. The HYPSO mission currently includes two CubeSats: HYPSO-1, launched into a sun-synchronous Low-Earth Orbit (LEO) in January 2022, and HYPSO-2 (in its commissioning phase), launched in August 2024. Both HYPSO-1 and HYPSO-2 are equipped with a Hyperspectral Imager (HSI) capable of measuring water-leaving radiance from 400 to 800 nm with a spectral resolution of ~5 nm. However, atmospheric correction (AC) remains challenging for these CubeSats, especially over optically complex inland and coastal waters, where existing AC algorithms yield high uncertainty in remote sensing reflectance (Rrs) retrievals. To improve these retrievals, we employ Aquaverse, a robust AC methodology centered on a Mixture Density Network (MDN) architecture, extensively validated with over 2,000 match-ups from multispectral sensors such as Landsat-8/9 (Operational Land Imager; OLI) and Sentinel-2 (Multispectral Instrument; MSI). It is designed to estimate Rrs with reduced uncertainty across diverse aquatic environments. Aquaverse leverages a coupled ocean-atmosphere radiative transfer model to simulate top-of-atmosphere reflectance (ρt) using globally representative in situ hyperspectral Rrs datasets under varied imaging geometries and atmospheric conditions. These simulations, combined with Rayleigh-corrected spectra (ρrc) and resampled using HYPSO-1’s spectral response functions, train an ensemble of MDN models to retrieve Rrs.
Aquaverse has demonstrated significant advancements in AC, achieving median residuals of ~16% in green bands and ~30% in blue bands, marking twofold improvements compared to standard processors. Building on this success, Aquaverse has been extended to hyperspectral sensors such as NASA’s Ocean Color Instrument (OCI) aboard the PACE mission and the Earth Surface Mineral Dust Source Investigation (EMIT). For HYPSO-1, Aquaverse has been specifically adapted to its Hyperspectral Imager, enabling accurate Rrs retrievals (400–800 nm, ~5 nm resolution). Validation will focus on concurrent HYPSO-1 captures over AERONET-OC sites, alongside intercomparisons with other ocean color sensors to assess accuracy and adaptability. Combined with complementary data from Unmanned Aerial Vehicles (UAVs), Autonomous Surface Vehicles (ASVs), Autonomous Underwater Vehicles (AUVs), and buoys, Aquaverse will enable a more comprehensive understanding of ocean ecosystem dynamics with improved spectral, spatial, and temporal resolutions.
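The spectral resampling step described above (band-averaging a fine-resolution Rrs spectrum with a sensor's spectral response functions) can be sketched as follows; the Gaussian SRFs, band centres, and toy spectrum are assumptions for illustration, not HYPSO-1's actual response functions.

```python
import numpy as np

# Fine wavelength grid and a toy Rrs spectrum peaking in the green.
wl = np.arange(400.0, 801.0, 1.0)  # nm
rrs = 0.002 + 0.001 * np.exp(-(((wl - 560.0) / 40.0) ** 2))

def band_average(wl, spectrum, center, fwhm):
    """Weight the spectrum by a Gaussian SRF and normalise the weights."""
    sigma = fwhm / 2.3548  # convert FWHM to Gaussian sigma
    srf = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return float(np.sum(srf * spectrum) / np.sum(srf))

# Resample to three illustrative ~5 nm-wide bands:
bands = [450.0, 560.0, 700.0]
print([round(band_average(wl, rrs, c, 5.0), 5) for c in bands])
```

In practice each measured SRF replaces the Gaussian, and the same weighting is applied to the simulated ρt and ρrc spectra so that training data match the sensor's bands.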

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone K)

Poster: The potential of EnMAP for enhancing water quality assessments in coastal waters

Authors: Avotramalala Najoro Randrianalisoa, Dr. Mariana Altenburg Soppa, Dr. Peter Gege, Thomas Schroeder, Prof. Astrid Bracher
Affiliations: Alfred Wegener Institute, Helmholtz Centre For Polar And Marine Research, Deutsches Zentrum für Luft- und Raumfahrt (DLR), Remote Sensing Technology Institute, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Environment, Institute of Environmental Physics, University of Bremen
Information on water quality in coastal environments is important for the protection of marine organisms, the well-being of marine ecosystems, and the health of people living in the area. Many coastal waters, however, face the challenges of pollution and eutrophication. Applying remote sensing techniques allows for monitoring water quality parameters at high spatial and temporal resolutions. The last three decades of satellite optical remote sensing for aquatic environments have focused on the exploitation of multispectral sensor data. Applications have been limited to the retrieval of bulk bio-optical parameters, such as concentrations of chlorophyll (Chl) and total suspended matter (TSM), as well as the absorption of coloured dissolved organic matter (CDOM). Recently launched hyperspectral satellite sensors, such as the Environmental Mapping and Analysis Program (EnMAP), allow the detection of specific in-water absorption features and are therefore expected to provide more accurate chlorophyll retrievals in coastal waters, in addition to a much better applicability over dark CDOM-rich waters due to improvements in the sensor’s sensitivity. In this work, we assess the potential of EnMAP for deriving water quality parameters, which are compared to results from the Multispectral Instrument (MSI) of Sentinel-2 (S2) and the Ocean and Land Colour Instrument (OLCI) of Sentinel-3 (S3). We are focusing on the area of the IMOS Lucinda Jetty Coastal Observatory (LJCO, 18.52 S, 146.39 E) in the coastal waters of the Great Barrier Reef World Heritage Area. LJCO is close to the estuary of the Herbert River and the Hinchinbrook Channel in the wet tropics of the central Great Barrier Reef with pronounced monsoonal dry and wet seasons. In this area, the sources of particulate and dissolved matter vary substantially during the tidal and seasonal cycles. Atmospheric corrections were applied to the satellite images before retrieving water quality parameters.
For EnMAP normalized water-leaving reflectance ([ρw]N), the operational L2A product based on the Modular Inversion Program (MIP) and retrievals of the ACWater module from the EnMAP-Box based on Polymer were used. To obtain [ρw]N from MSI, L1 data were processed with Polymer and the Case 2 Regional Coast Colour (C2RCC) processor. For OLCI [ρw]N, we used L1 data processed with Polymer and the L2 WFR product. We assessed the quality of all [ρw]N satellite products with in situ hyperspectral (Hyper-OCR) and multispectral SeaPRISM above-water radiometry at LJCO. The analysis includes nine EnMAP, Sentinel-2, and Sentinel-3 images acquired between June 2022 and April 2024. The matchup quality control follows the validation protocol and match-up statistics recommended by EUMETSAT. Among all [ρw]N products, EnMAP-MIP performed best and showed the closest agreement with in situ measurements (MdAPE=19.52%, β=-15.09, ϵ=21.89%). OLCI/S3 products performed less accurately, with S3 Polymer obtaining better results (MdAPE=34.21%, β=-42.09, ϵ=47.14%) than S3-L2 WFR (MdAPE=61.26%, β=-72.80, ϵ=85.62%). A comparison of matchup spectra confirms that the hyperspectral EnMAP [ρw]N data fill the spectral gaps of the multispectral sensors very well. Subsequently, the software Water Color Simulator (WASI7) was used to retrieve the in-water constituents (Chl, TSM, and CDOM) from the EnMAP, MSI, and OLCI atmospherically corrected data. The intercomparison of the in-water constituents retrieved with WASI7 was performed within a 3-by-3 pixel window around LJCO. Chl retrieved from EnMAP and MSI is within the range of in situ values. The results for TSM showed a lower concentration for EnMAP. For aCDOM, a lower concentration was observed from MSI compared to EnMAP. These results underscore the potential of EnMAP for enhancing water quality assessments and of the WASI software for water quality retrievals at LJCO.
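For reference, the MdAPE matchup statistic quoted above is the median of the absolute percentage differences between satellite and in situ [ρw]N. The sketch below uses invented matchup values; the bias (β) and error (ϵ) metrics follow protocol-specific definitions and are not reproduced here:

```python
import numpy as np

def mdape(satellite, in_situ):
    """Median Absolute Percentage Error (in %) between satellite and
    in situ matchup values. Inputs are illustrative, not study data."""
    sat = np.asarray(satellite, dtype=float)
    ref = np.asarray(in_situ, dtype=float)
    return 100.0 * np.median(np.abs(sat - ref) / np.abs(ref))

# Three hypothetical [rho_w]_N matchups (dimensionless reflectance)
score = mdape([0.010, 0.020, 0.028], [0.010, 0.025, 0.035])
```

Because it takes the median rather than the mean, this statistic is insensitive to a few badly flagged matchups, which is one reason it is favoured in ocean-colour validation protocols.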
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: D.04.02 - POSTER - Best practices for execution of algorithms and workflows across federated cloud environments

EO algorithms and workflows require execution across different but federated EO cloud platforms and services. This session focuses on showcasing the state of the art of best-practices, standards, implementations, and technological solutions that enable execution of algorithms and workflows across different cloud-based environments.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Empowering Your Community with Earth Observation Insights: An All-in-One Online Workspace Platform Solution

#stac

Authors: Stephan Meißl, Daniel Santillan, Josef Prenner, Stefan Achtsnit, Karolina Lehotska
Affiliations: EOX IT Services GmbH
Earth Observation (EO) projects offer immense potential for deriving valuable insights, but face challenges in data access, computing needs, and publishing results. This presentation introduces EOxHub Workspaces (https://eox.at/software-products/#managed-cloud-workspace), a comprehensive online workspace platform designed to empower users throughout their EO journey, as used for example in Euro Data Cube or EarthCODE. EOxHub Workspaces provide a complete suite of services for geospatial professionals and communities, offering an efficient, fully integrated environment where users can immediately begin working and producing insights. The platform's cloud integration ensures scalability based on user needs. Additionally, EOxHub Workspaces are supported by experts with years of EO-related project experience, offering custom solutions and services. Key features of EOxHub Workspaces include:
- Scientific development environment: facilitates data processing, process and resource management, and AI development.
- DevOps and agile life-cycle compatibility: supports development, testing, and production phases following the GitOps model.
- Community sharing: enables users to visualize, analyze, and share research insights derived from satellite data and products, as presented for example in the dashboards at https://race.esa.int, https://eodashboard.org, or https://gtif.esa.int.
- Data management and accessibility: offers multiple storage options, intuitive GUIs, STAC querying, and standardized data rendering.
- Experiment management: ensures reproducible workflow management with input data and configuration customization.
- Scalable processing: provides scalable processing power for demanding EO workflows.
- Resource management: supports multiple resource plans and fine-grained resource tracking and billing.
- Authentication & authorization management: offers role-based access, SSO, and user groups.
- Branding and customization: allows for customized landing pages and private user areas.

EOxHub's technology stack includes MLflow, Jupyter, Grafana, Dask, Argo, STAC, and OGC, ensuring compatibility and support for a wide range of EO workflows. EOxHub has been successfully implemented in various use cases, including Euro Data Cube (https://eurodatacube.com), Polar TEP community (https://polartep.hub.eox.at), EarthCODE (Earth Science Collaborative Open Development Environment; https://earthcode.esa.int), Cubes & Clouds MOOC (Massive Open Online Course; https://eo-college.org/courses/cubes-and-clouds/), individual workshops, and GTIF (Green Transition Information Factory; e.g. https://gtif-austria.info) projects. Join our presentation to learn how EOxHub Workspaces can empower your EO projects and unlock the full potential of satellite data. Let's discuss your use case, explore how EOxHub Workspaces can support your specific needs, and apply for pre-commercialization sponsoring via ESA’s Network of Resources (https://nor-discover.org).
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Benchmarking data access and processing performance of openEO backends: a reproducible approach

Authors: Dr. Serkan Girgin, Jay Gohil
Affiliations: University of Twente
openEO is a community-driven initiative supported by prominent research institutions, as well as cloud service providers and international agencies, that is designed to simplify the access to and processing of Earth Observation (EO) data. It offers a standardized interface that allows users to interact with multiple EO data providers, cloud services, and processing engines, facilitating the efficient retrieval, analysis, and visualization of EO data. This enables scalable and flexible processing workflows for a broad spectrum of applications. To achieve this, openEO defines an HTTP-based API that enables communication between cloud backends with large EO datasets and frontend applications in an interoperable manner by specifying how to manage users, discover available EO data and processes on backends, execute processes and user-defined functions on backends, and download results. Currently, there are different backends available that are based on various big EO data processing frameworks, such as Geotrellis, GRASS GIS/actinia, Google Earth Engine, OpenDataCube, Xarray, and Dask. There are also operational platforms that provide openEO services (i.e. openEO providers), such as Copernicus Data Space Ecosystem, EODC, and VITO. Recently, the openEO Platform has also been launched, providing federated access to various openEO services. Although openEO provides a standard mechanism to access the provided EO data and processing services, available data and processing capabilities differ between the service providers. Data availability is determined by the size of the data archives of the providers, whereas processing capabilities are largely influenced by the specific backend implementations deployed by the operators and their IT infrastructure.
Although openEO provides a web portal (openEO Hub), which lists the reported capabilities of the openEO providers, limited information is available on the actual data availability and the data access and processing performance of the providers. Benchmarking is crucial for assessing the performance of various openEO backends, because it provides objective and standardized measurements of efficiency, performance, and reliability. By comparing different backends under controlled conditions, it provides valuable information that enables users to select the most suitable backend for their specific needs, based on factors such as processing time and scalability. It can also help to identify strengths and weaknesses, driving improvements and optimizations by backend developers that benefit the entire user community. Eventually, it can support the development of more effective systems and helps maintain high standards in EO data access and processing workflows. To benchmark the performance of different openEO backends and service providers, we have created an openEO benchmarking framework. Built using a unit testing approach, the framework allows for replicable testing of data availability, data access, and data processing capabilities across various backends. The results are presented in standard formats, providing both detailed reports and summaries. This presentation will provide a comprehensive overview of the design principles and operational structure of the developed openEO benchmarking framework. It will cover the framework's core features and functional capabilities, along with the tests that are designed to assess data availability on different backends and their data access and processing performance. The results of the tests performed will be summarized to provide insight into the current state of the operational openEO providers and their services.
Furthermore, a live demonstration of the benchmarking framework will be conducted to showcase its practical applications and effectiveness.
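The controlled, replicable timing measurements at the heart of such a benchmarking framework can be illustrated with a minimal harness. A stand-in computation is timed below; the actual framework submits openEO process graphs to live backends, so names and the repeat count are illustrative only:

```python
import statistics
import time

def benchmark(task, repeats=5):
    """Run a callable several times under identical conditions and
    report summary wall-clock statistics, mirroring the replicable
    measurement approach used for comparing backends."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        task()  # in the real framework: an openEO request/process graph
        samples.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(samples),
        "min_s": min(samples),
        "max_s": max(samples),
    }

# Hypothetical stand-in workload replacing a backend request
result = benchmark(lambda: sum(i * i for i in range(10_000)))
```

Reporting the median rather than the mean reduces sensitivity to transient load spikes, which matters when comparing shared cloud backends.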
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Challenges and Lessons Learned in Implementing Deep-Learning-Based EO Workflows for a Federal Agency User Base

Authors: Claudius Wehner, Stephan Klingner, Philipp Gärtner, Johannes Albert, Chantal Schymik
Affiliations: Application Laboratory for AI and Big Data - German Environment Agency
The initial situation within the German Environment Agency reflects an IT infrastructure designed for bureaucracy-focused workflows and the stringent security requirements common across many public sector organizations. With the establishment of the Application Laboratory for AI and Big Data, dedicated to developing AI-based and data science solutions for identified use cases, new demands have emerged. These demands span both the development phase and the stable operation of the resulting solutions. This talk highlights exemplary challenges and corresponding solutions or mitigation strategies from both perspectives, with a particular focus on the development side. A key challenge at the beginning was the lack of GPU infrastructure, which was available neither in client devices and internal servers nor in the cloud solutions used. However, GPUs are an essential component for both the training and inference phases of AI models. A GPU-enabled cloud infrastructure was procured via CODE-DE as a solution that could be implemented quickly, with a focus on processing Earth Observation (EO) data. As this solution could not cover all application scenarios, additional infrastructure was set up. This included a dedicated GPU cloud for non-EO processes and the processing of synthetic datasets, as well as access to a high-performance computing (HPC) cluster for tasks requiring more than one GPU. With this multi-layered approach, the different computing requirements of AI development could be met effectively. Beyond establishing access to high-performance hardware, significant challenges also arose during its utilization across various levels. In the lab, multiple individuals develop numerous and highly diverse prototypes, each with distinct computing, storage, and security requirements. Furthermore, the workflows of data science development differ markedly from deployment processes designed for use by public sector employees.
Particular challenges emerge in ensuring the long-term operation of EO-based applications. As these involve complex data processing pipelines, they are highly sensitive to changes in data sources and interfaces. This includes updates to APIs, metadata, object storage structures, and formats, all of which can disrupt the workflows. This contribution provides a retrospective analysis of the lessons learned, from establishing the lab in a governmental context to the implementation and deployment of the developed prototypes within federal agencies. The discussion is illustrated through numerous examples of successfully realized EO use cases, highlighting both challenges and achievements in integrating innovative solutions into public sector workflows.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Exploring Federated Processing of Earth Observation Data Through Cloud-Native

#cloud-native

Authors: Mr. Guilhem Mateo, Cesare Rossi, Ms. Kavitha Kollimalla, Mr. Roberto Di Rienzo, Mr. Hugues Sassier, Mr. Jean Luc Gauthier
Affiliations: CGI Italia, Thales Alenia Space, CGI France
This work presents our experience implementing cloud-based processing of Earth Observation data across federated environments in EUMETSAT, addressing the growing need for interoperable and secure distributed computing solutions in the EO domain. Building on established technologies, we explore an architecture combining Kubernetes and Knative for workload management, demonstrating how modern cloud-native approaches can be effectively applied to satellite data processing challenges. At the core of our implementation, Argo Workflows serve a dual purpose: orchestrating cross-platform execution of long-lived processing tasks and defining standardized satellite image analysis pipelines. This approach ensures reproducibility of scientific workflows while maintaining flexibility across different computing environments. Our system implements a subset of the OGC Processes API specification, facilitating standardized access to processing capabilities, alongside WebDAV and S3-compatible interfaces for efficient data access and storage. Our implementation can also support short-lived tasks, purely based on Knative capabilities, ready to demonstrate how serverless computing patterns can be effectively applied to EO processing workflows. Security considerations are addressed through a comprehensive approach to credential management, utilizing Kubernetes secrets, Vault, and a dedicated key management service. This ensures secure access to distributed storage systems while maintaining the scalability required for large-scale EO processing. The architecture implements API gateways for centralized access control and security policy enforcement, complemented by a quota management system that ensures fair resource allocation across users and processing tasks. The federation of Data Processing Instances (DPIs) enables workload distribution across different cloud environments, addressing challenges of data locality and processing efficiency within EUMETSAT's operational context. 
Through practical implementation and testing, we have identified several key challenges in federated EO processing, including credential propagation across security boundaries, enabling workflow portability between different cloud providers, and maintaining consistent performance across heterogeneous computing environments. Our solutions to these challenges contribute to the broader discussion of best practices in cloud-based EO ecosystems. The implementation has required careful consideration of storage access patterns, network latency between federated instances, and the balance between processing efficiency and resource consumption. In summary, with this work we aim to contribute to ongoing discussions about best practices in cloud-based EO ecosystems, particularly focusing on the intersection of security, interoperability, and scientific workflow management. Our findings suggest that while challenges remain in achieving truly seamless federation across cloud environments, current cloud-native technologies provide a solid foundation for building robust, secure, and scalable EO processing systems. The work demonstrates the potential for standardized, secure, and efficient processing of Earth Observation data across federated cloud environments.
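As a rough illustration of the OGC API – Processes subset mentioned above, an execute request is a small JSON document posted to a process's execution endpoint. The process inputs and output names below are hypothetical; only the top-level "inputs"/"outputs" structure follows the specification:

```python
import json

# Hypothetical execute request body for an OGC API - Processes
# endpoint (POST /processes/{processId}/execution). The input keys,
# values, and output name are invented for illustration; the
# "inputs"/"outputs" layout is what the standard defines.
execute_request = {
    "inputs": {
        "scene": "https://example.org/data/input-scene.nc",  # assumed URL
        "band": 3,
    },
    "outputs": {
        "composite": {"format": {"mediaType": "image/tiff"}}
    },
}
body = json.dumps(execute_request)
parsed = json.loads(body)
```

Keeping the request declarative like this is what allows the same processing task to be dispatched unchanged to different federated Data Processing Instances.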
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: Harnessing the Cloud: Integrating Diverse Toolboxes for Advanced Earth Observation with TAO

Authors: Mr. Cosmin Cara, Mr. Cosmin Udroiu
Affiliations: CS GROUP ROMANIA
With the continuous growth of available Earth Observation (EO) data in both space and time, users increasingly need to process and analyze not just single EO data products or individual scenes, but large datasets covering extensive regions or long time periods. This requires substantial bulk data processing, prompting users to leverage the combined resources of multiple interconnected computers. There are numerous software toolboxes and libraries for processing EO data products, each offering various processing modules with different implementations. While this diversity is beneficial, it also poses challenges in adapting to the specific features of each toolbox or library. Additionally, most of these tools are designed for individual workstations, making it difficult (without significant effort) to utilize existing computing clusters. To distribute tasks across multiple machines and collect the results, software that orchestrates clusters of computers and manages their resources is necessary. In recent years, several platforms have been developed to address this need in EO data processing, performing quite well. However, extending and customizing these solutions to meet user-specific requirements can be challenging. Introducing TAO (Tool Augmentation by user enhancements and Orchestration), an open-source, lightweight, generic, extensible, and distributed orchestration framework. TAO enables the integration of commonly used toolboxes (such as SNAP, Orfeo Toolbox, GDAL, PolSARPro, etc.), allowing users to compose and distribute processing workflows. This framework empowers end users to define workflows and easily integrate additional containers with minimal programming knowledge. The TAO platform facilitates the orchestration of different containerized images of heterogeneous components and libraries for scientific data processing. 
Key features of the TAO framework include:
- Visual integration of EO processing toolboxes
- Easy visual definition of processing workflows through drag-and-drop operations
- Integration of user-defined algorithms written in Python or R into processing workflows
- Dynamic allocation and deallocation of cloud resources
- Self-contained containerized execution on processing nodes
- DRMAA-compliant or Kubernetes-orchestrated workflow execution
- User virtual workspaces
- Data source abstraction for querying and retrieving EO data
- OGC standard interfaces for workflow execution

This presentation aims to showcase the TAO framework, highlighting its features, extension capabilities, usage scenarios, and, most importantly, its user-friendly nature.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone U)

Poster: FAIRSenDD: A FAIR cloud workflow for Sentinel-1 based forest change detection

Authors: Felix Cremer, Daniel Loos, Fabian Gans, Zaynab Guerraou, Gregory Duveiller
Affiliations: Max Planck Institute for Biogeochemistry, European Space Agency, ESRIN
In the field of Earth observation science, the implementation of FAIR (Findable, Accessible, Interoperable, and Reusable) workflows is essential for efficient analyses in federated cloud environments that have fast access to the desired large-scale datasets. This presentation introduces the FAIRSenDD project, a recent initiative that implements an end-to-end workflow designed for forest change detection across various forest types using Sentinel-1 radar time series data. The primary objective of this project is to demonstrate the efforts required to turn a research algorithm into a FAIR and scalable workflow made available to the scientific community. The core algorithm, originally developed in the Julia programming language, leverages Recurrence Quantification Analysis to effectively utilize time series data from Sentinel-1 radar. The algorithm enables horizontal scaling by processing every location individually to distribute the workload across multiple nodes in the cloud environment. To facilitate accessibility and interoperability, we have developed an interface compliant with the OGC API – Processes standard, enabling seamless integration of the workflow into other workflows. This enables the deployment of the workflow on different Infrastructure-as-a-Service providers towards federated cloud computing. Additionally, a user-friendly web GUI further enhances usability. This ensures that the workflow can be easily deployed as Software as a Service (SaaS), which can be used by both computers and humans. Significant efforts have been made to optimize the code for both runtime and memory efficiency. The project adheres to the OGC Best Practice for Earth Observation, resulting in the creation of a portable application package. In addition, we evaluated the ability to run Julia code in openEO backends to support a wider range of users and tools.
This presentation will share valuable lessons learned from translating a research algorithm into a reproducible and scalable workflow. The FAIRSenDD project exemplifies how computational techniques and cloud-based solutions can be harnessed to address complex challenges in Earth observation, paving the way for more highly integrated FAIR workflows.
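The Recurrence Quantification Analysis at the core of the algorithm can be illustrated with its simplest measure, the recurrence rate: the fraction of time-step pairs whose values lie within a threshold of each other. The sketch below is in Python for readability (the actual FAIRSenDD implementation is in Julia and operates per pixel on Sentinel-1 backscatter time series); the threshold and values are illustrative:

```python
import numpy as np

def recurrence_rate(series, eps):
    """Fraction of time-step pairs whose values are within eps of each
    other -- the simplest RQA measure. A schematic stand-in for the
    per-pixel analysis applied to radar time series."""
    x = np.asarray(series, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distance matrix
    recurrent = dist < eps                  # recurrence matrix
    return recurrent.mean()

# A stable series recurs more than one with an abrupt change
# (e.g. a forest-loss-like jump in backscatter); values invented.
stable = recurrence_rate([1.0, 1.1, 0.9, 1.0, 1.05], eps=0.5)
changed = recurrence_rate([1.0, 1.1, 0.9, 3.0, 3.1], eps=0.5)
```

Because each pixel's time series is analysed independently, this computation parallelises trivially, which is what enables the horizontal scaling described above.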
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: C.01.02 - POSTER - Innovative UAV Applications for Earth Observation

Unoccupied Aerial Vehicles (UAVs) are flexible and efficient acquisition systems that can fill an important gap among spaceborne, airborne, and ground-based measurements. UAVs provide very high-resolution data acquisitions, even over inaccessible areas, and have demonstrated their ability to support various environmental and urban applications.

We invite contributions addressing the most recent innovations in the use of UAVs for Earth Observation and environmental and urban monitoring, with a focus on:

- Data acquisition for Earth Observation and atmospheric research
- Synergies and data fusion between UAVs and spaceborne, airborne, and ground-based measurements
- Real-time processing and analysis of UAV-acquired data
- Applications including but not limited to:
  - Agriculture and precision farming
  - Forestry and forest monitoring & inventory
  - Urban monitoring and urban green management
  - Disaster management
  - Conservation management
  - Monitoring of critical infrastructure (e.g., roads, coastal protection)
- UAVs in support of Earth Observation campaigns
- Transferable methods for environmental and infrastructure monitoring that can be applied by various actors (e.g., foresters, farmers, technicians of public administrations)

By focusing on innovative UAV applications and transferable methodologies, we aim to showcase the potential of UAV technology in advancing Earth Observation, to help develop future satellite missions and to advance environmental monitoring practices.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Enhancing Search and Rescue Missions with AI: Cross-Comparison of Object Detection Models for Real-Time Missing Persons Detection

Authors: Salaheldin Hassan, Dr. Adam J.
Affiliations: Technical University of Munich
Search and Rescue (SaR) missions are critical operations that save lives in challenging and often extreme conditions, requiring rapid decision-making and precise execution. SaR relies heavily on Earth Observation (EO) data, which provides vital information about terrain, weather, and other environmental factors. Despite their importance, these missions predominantly rely on manual video analysis, a process that demands expertise, intense focus, and significant time, ultimately prone to human fatigue. With the advent of machine learning and artificial intelligence, there is a pressing need to revolutionize SaR operations by introducing automated solutions that enhance efficiency, accuracy, and scalability. Our research addresses this need by conducting a comprehensive cross-comparison of state-of-the-art object detection models—YOLOv5, Faster R-CNN, and RetinaNet—tailored for the unique challenges of SaR missions. These include detecting small objects, navigating variable lighting conditions, managing occlusions, and adapting to diverse environments. To overcome these challenges, we integrate transfer learning, enabling models to adapt pre-trained architectures to SaR-specific tasks. The transfer learning was done using pre-trained model weights from the Common Objects in Context (COCO) dataset. This approach significantly enhances detection accuracy while optimizing training efficiency. The results highlight that YOLOv5 consistently outperformed other models across all datasets in terms of mean Average Precision (mAP) and mean Average Recall (mAR), and inference speed. Specifically, YOLOv5 achieved the highest mAP@[IoU=0.50] of 0.983 on the SaRD dataset, compared to Faster R-CNN (0.92) and RetinaNet (0.8632). When evaluated using the stricter metric mAP@[IoU=0.50:0.95], YOLOv5 again demonstrated superior performance, with scores of 0.7 on SaRD and 0.568 on DB Licenta. 
Furthermore, YOLOv5 achieved the highest mAR@[IoU=0.50:0.95], scoring 0.951 on SaRD and 0.878 on DB Licenta. Through a rigorous evaluation using specialized benchmark datasets including SaRD, DB Licenta, VisDrone, SaRNet, and TinyPerson, YOLOv5 emerged as the most robust performer, demonstrating superior detection capabilities across multiple metrics, with mAP values as high as 0.983 on the SaRD dataset, and achieving an inference time of 41.1 ms per frame on average, making it well-suited for real-time applications. By advancing machine learning techniques for Earth Observation, our work not only accelerates missing person detection in remote and hazardous areas but also enhances resource allocation during rescue operations. While this research focuses on single-image analysis, future work should explore extending these methods to video analysis.
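The IoU thresholds underpinning the mAP/mAR figures above measure bounding-box overlap: a detection typically counts as a true positive when its Intersection-over-Union with a ground-truth box exceeds the threshold (0.50, or a sweep from 0.50 to 0.95 for the stricter metric). A minimal sketch with invented box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2). Used to decide true positives when computing
    detection metrics such as mAP and mAR."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# Hypothetical detection shifted half a box-width from ground truth
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

A half-overlapping detection like this one falls below the 0.50 threshold and would be counted as a false positive, which is why small-object detection in SaR imagery is so sensitive to localisation accuracy.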
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Combined P+L band reflectometry for root-to-surface soil moisture measurements

Authors: Xavier Canalda, Serni Ribó, Daniel Herencia, Arshad Karimbu-Vallappil, Eduard Aguilar, Estel Cardellach, Weiqiang Li, Antoni Rius
Affiliations: Institute of Space Sciences
Soil moisture (SM) is crucial for various environmental processes, including plant growth, climate modeling, and water resource management. Monitoring changes in soil moisture levels is essential for predicting and mitigating the effects of climate change, particularly in forecasting and managing drought conditions. Accurate measurement of root zone soil moisture (RZSM) is vital for optimizing irrigation, improving crop yield, and enhancing water use efficiency, which are key to ensuring food security and price stability, especially in the face of changing climate patterns and their associated challenges like droughts. Traditional methods for measuring SM, especially in the root zone, face limitations in spatial coverage and depth penetration. Addressing these limitations is essential for understanding RZSM dynamics. This research aims to address challenges in microwave remote sensing for soil moisture sensing, focusing on improving penetration depth and spatial resolution. The proposed single-channel dual-band reflectometer is self-calibrated, which will allow for the correction of induced measurement errors, enhancing the robustness and reliability of the data collected. This research introduces a novel dual-band (P+L) single-channel passive reflectometer to measure soil moisture from a drone at various depths using Signals of Opportunity (SoOp), i.e., it reuses existing signals from navigation and communication satellites for Earth observation and remote sensing. This approach enables the monitoring of different geophysical parameters without additional infrastructure, offering a cost-effective solution. Our instrument uses low-frequency bands to achieve greater penetration into soil, improving RZSM data accuracy and addressing the limitations of current satellite-based methods, such as those used by ESA's SMOS and NASA's SMAP, which are constrained by low penetration depths and coarse ground resolution.
By integrating our reflectometer on a drone, the spatial resolution is enhanced, achieving a pixel size of about 15 meters, suitable for precision agriculture. This high-resolution soil moisture estimation at local scales can complement coarse-resolution satellite observations, bridging gaps between local, regional, and global scales. Besides building the instrument, the research involves developing new scattering models and inversion algorithms for accurate soil moisture retrieval, validated with in-situ ground truth data to ensure the effectiveness of the methods and to refine them based on empirical findings. The proposed system aims to contribute to a more robust ground reference data set, supporting the accuracy of existing satellite-based soil moisture retrieval algorithms and offering a means to improve predictions for drought conditions and adaptive water management strategies. Knowledge derived from this study could be potentially helpful for upcoming P-band missions such as ESA's BIOMASS. Details about the instrument and experimental campaigns will be provided, as well as on the obtained retrievals.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Image-Based Vegetation Classification of Rewetted Peatlands: An Example From the FluxNet Site Zarnekow

Authors: Christoph Lotz, Dr Benjamin Brede, Prof. Dr. Torsten Sachs, Dr. Aram
Affiliations: Deutsches GeoForschungsZentrum GFZ
Drained peatlands have a high potential for mitigating anthropogenic climate change through rewetting. The Intergovernmental Panel on Climate Change (IPCC) provides a binary distinction of these biotopes, Peatlands remaining Peatlands and Land converted to Peatlands, and associates GHG emission factors with these classes. However, this distinction cannot capture the transition between these states, which is characterized by highly dynamic GHG fluxes. Tracking the progress and impact of peatland rewetting is needed to monitor the outcomes of policy decisions and to further understand peatland characteristics. Conventional methods such as chamber or eddy-covariance measurements are highly local and thus not suitable for upscaling. A key to such upscaling lies in the Greenhouse Gas Emission Site Type (GEST) approach, which allows emission estimation based on dominant vegetation species and water table depth. The first part of this approach, vegetation mapping, is the focus of this study. Combining high-resolution Uncrewed Aerial Vehicle (UAV) imagery with survey data on dominant vegetation species, Machine Learning (ML) models are used to tackle this task. This case study uses data collected in the rewetted peatland of Zarnekow in Mecklenburg-Western Pomerania. The site's rewetting started in late 2004, and its development has been tracked by eddy-covariance measurements from 2007 to 2009 and from 2013 onward. The datasets consist of an RGB orthomosaic, which was used to plan and conduct a vegetation survey targeting spatially dominant species that can be visually delineated. The collected ground-truth dataset was then expanded by visual interpretation of the orthomosaic. The two datasets (ground-truth and expanded) span 5.7 ha and 0.75 ha, respectively, and are used to train four different ML methods. Random Forests (RF) are explainable and frequently employed.
Being dependent on precalculated features, this technique struggles with heterogeneous datasets such as the one collected in the field. To boost model robustness, handcrafted features (Haralick features at different scales) are used to expand the datasets' feature space. Neural Networks (NN) are able to compute and discover features automatically. While this decreases the model's explainability, a high degree of specialization can be achieved, leading to increased prediction accuracy. In this highly dynamic field of research, new concepts are frequently proposed. In the computer vision domain, however, two approaches have gained popularity for their classification capabilities: Convolutional Neural Networks (CNN) and Vision Transformers (ViT). One model architecture from each of these categories, as well as a vanilla RF and an RF with additional handcrafted features, are trained on the dataset. While quantitatively superior, the NNs, when judged qualitatively, fall short of the traditional RF approach's performance.
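As an illustration of such handcrafted texture features, the sketch below computes a gray-level co-occurrence matrix (GLCM) and the Haralick "contrast" statistic for a single pixel offset. This hand-rolled version only shows the idea; real pipelines typically use a library such as scikit-image and compute several Haralick statistics at multiple scales, and the example images are synthetic.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_contrast(m):
    """Haralick 'contrast' feature: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(m.shape)
    return float(np.sum((i - j) ** 2 * m))

# A uniform patch has zero contrast; a striped patch does not.
flat = np.zeros((4, 4), dtype=int)
stripes = np.tile([0, 1], (4, 2))
```

Features like this, computed over image windows, become extra columns in the Random Forest's feature table alongside the raw spectral values.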

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Enhancing LiDAR Data Integration From Mobile and Unmanned Aerial Laser Scanning Systems: An Algorithmic Solution for Matching Tree Point Clouds Based on Tree Characteristics.

Authors: Tamás Radnóti
Affiliations: Wageningen University & Research
Accurate forest inventories are essential for sustainable forest management, ecological research, and biodiversity monitoring. These inventories provide crucial data on the structure, composition, and health of forests, supporting resource management and decision-making on ecosystem services such as carbon sequestration. Recent advances in remote sensing technologies, in particular LiDAR (Light Detection and Ranging), have transformed forest data collection by enabling accurate three-dimensional modelling of tree structures. Among these technologies, mobile laser scanning (MLS) and unmanned aerial laser scanning (ULS) have emerged as powerful tools that provide detailed insights into both under- and over-canopy structures. However, the integration of MLS and ULS datasets remains challenging due to differences in scanning perspectives and resolutions, as well as the lack of accurate location data in dense forest environments. The goal of this research is to develop an algorithm that matches MLS and ULS datasets based on tree characteristics instead of tree location data. The approach involves the identification of measurable tree attributes such as tree height, crown width, and crown diameter, and the assessment of consistency between the two LiDAR systems. To evaluate the matching accuracy of the trees, a weighted consistency score is calculated, expressed as C = 100 − Σᵢ₌₁ⁿ ωᵢ · (|TCᵢ,ULS − TCᵢ,MLS| / TCᵢ,ULS) · 100, where C is the consistency score, ωᵢ are the weights assigned to the tree characteristics TCᵢ, and n is the number of characteristics. Preliminary results show that integrating multiple tree features with assigned weights improves matching accuracy compared to single-feature approaches. The study area of this research is the "Arboretum Oostereng", located in the province of Gelderland near Wageningen University. The datasets were collected using a Greenvalley LiBackpack DGC50 (MLS) and a UAV-mounted RIEGL VUX-SYS (ULS).
Both datasets were analyzed in RStudio to implement the algorithm and validate its consistency against a reference dataset. The validation dataset matches trees based on their position, enabling a direct comparison between the two methods. The research seeks to answer four research questions: 1. What tree characteristics can be measured from MLS and ULS LiDAR systems, and which can be captured by both? 2. How do MLS and ULS LiDAR systems differ from each other in capturing tree characteristics for trees in the Oostereng arboretum? 3. How can the combination of multiple tree characteristics improve the consistency of tree matching between the MLS and ULS datasets? 4. How consistently can the tree-characteristic algorithm align trees compared to the validation dataset? This research contributes to forest management by providing a reliable method for aligning LiDAR datasets, addressing challenges due to GPS inaccuracy, and enabling the effective integration of remote sensing technologies for forest inventory.
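The weighted consistency score defined in the abstract can be sketched directly in code; the example tree characteristics and weights below are illustrative, not values taken from the study.

```python
def consistency_score(tc_uls, tc_mls, weights):
    """Weighted consistency score C between ULS and MLS tree characteristics.

    Implements C = 100 - sum_i w_i * (|TC_i,ULS - TC_i,MLS| / TC_i,ULS) * 100.
    Inputs are parallel sequences of one tree's characteristics from each
    system plus their weights; perfect agreement scores 100.
    """
    assert len(tc_uls) == len(tc_mls) == len(weights)
    penalty = sum(
        w * abs(u - m) / u * 100.0
        for u, m, w in zip(tc_uls, tc_mls, weights)
    )
    return 100.0 - penalty

# Illustrative pair: tree height (m) and crown width (m), equal weights.
score = consistency_score([22.0, 6.0], [20.9, 6.0], [0.5, 0.5])
```

Candidate ULS/MLS tree pairs can then be ranked by this score, with the highest-scoring pair accepted as a match.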

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: UAV-SAR Imaging and Interferometry: System Design and Signal Processing Tools

Authors: Pietro Grassi, Stefano Moro, Marco Manzoni, Stefano Tebaldini
Affiliations: Politecnico Di Milano
Synthetic Aperture Radar (SAR) is a well-established remote sensing technology widely used in Earth observation applications, including environmental monitoring, disaster relief, soil moisture evaluation, and snow and glacier monitoring. The main advantages of SAR over optical technology lie in its all-weather capability and precise distance measurement. Today, the miniaturization of electronic components has paved the way for new, smaller platforms such as Unmanned Aerial Vehicles (UAVs). UAV-borne systems are more versatile, enable faster response times in emergencies, and offer lower operational costs compared to traditional satellite or manned airborne platforms. Additionally, they ensure a higher revisit frequency for interferometric applications. Thanks to this flexibility, a UAV-borne SAR can provide products for a variety of applications. In this contribution, the UAV platform and radar system developed by the joint research group of Politecnico di Milano and Aresys are presented. The system is specifically designed for interferometric applications in natural scenarios, with the aim of supporting environmental monitoring tasks such as tracking terrain movements, analyzing soil deformations and studying other geophysical processes. Advanced signal processing tools have been developed and integrated into the workflow to achieve these results, enabling high-quality interferometric product generation. Both the technical aspects of the UAV platform and radar system and the signal processing methods employed are detailed below. The drone platform is an AntX T-Drones MX860, an octocopter with a maximum payload of 10 kg and a flight time of up to 40 minutes with a 5 kg payload. The onboard localization system comprises several instruments, including an IMU, a compass, a gyroscope, and dual GNSS antennas. Moreover, the drone's control is improved using a Real-Time Kinematic (RTK) base station.
Post-Processing Kinematics (PPK) is used to enhance the navigation data quality after the flight. The radar is a Frequency Modulated Continuous Wave (FMCW) X-band system developed by Aresys, transmitting a linearly modulated chirp signal. The central frequency is 10 GHz, the RF bandwidth of the chirp is 400 MHz, and the transmitted power at the antenna is 8 dBm. The standard acquisition mode is Stripmap, but thanks to the platform's freedom of movement in flight, many other modes, such as Spotlight or ScanSAR, are possible. This section provides an overview of the processing procedure for generating interferogram maps. Accurate interferograms rely on high-quality, sharply focused images, which in turn require precise navigation data. Specifically, localization accuracy must be within a small fraction of the transmitted signal's wavelength. For an X-band system the wavelength is approximately 3 cm, and the required positioning precision typically exceeds the capability of the low-grade IMU onboard the UAV. Localization inaccuracies introduce phase errors, resulting in a blurred focused image. In interferometric applications, blurred images with phase errors lead to noisy interferogram phases. The SAR image is formed in the time domain using Time Domain Back Projection (TDBP), which can cope with non-linear flight trajectories. The processor is implemented with CUDA-accelerated computing for a drastic reduction in image formation time. To address inaccuracies in the navigation data, a signal processing tool known as an autofocusing (AF) algorithm retrieves and compensates for the induced phase errors. Several AF algorithms have been studied and developed in the literature. One of the most well-known and efficient is Phase Gradient Autofocus (PGA), which estimates the phase error by analyzing the phase of a few strong point targets.
This error is then compensated directly in the range-compressed domain without interpreting it as a trajectory deviation. Most traditional AF algorithms perform processing in the frequency domain, an approach that assumes a linear trajectory and is often unsuitable for UAV-borne SAR. The main contribution of the proposed AF method is its geometric interpretation of phase error, which is estimated in the time domain, resulting in orbital corrections applied directly to the nominal trajectory. This approach provides corrected trajectory information that is geometrically consistent with the acquisition geometry rather than a simple phase correction of the acquired data. At least two passes over the same area are required to generate interferometric maps. The AF procedure is applied to both the primary and secondary passes. The proposed method corrects trajectory errors and estimates the corrected coordinates of the bright targets used for phase error estimation. The target coordinates estimated from the primary pass are then used to compute trajectory compensation for the secondary pass, effectively co-registering the two images. The resulting interferogram from the AF and co-registration procedure exhibits high coherence. Minimal residual orbit errors may still be present but are corrected using a coherent sub-aperture procedure. Specifically, the total trajectory is divided into multiple short sub-apertures, and the orbit shift is computed by interpreting the residual phase fringes as a geometric correction of the phase center for each sub-aperture. Finally, the compensated trajectory produces the corrected interferograms. Several acquisition campaigns were conducted by the RIDE (Radar Imaging and Digital Earth) Lab at Politecnico di Milano. The flight site was an airfield near Milan, a primarily natural setting with very few strong targets. 
We positioned corner reflectors to serve as calibration points for phase error estimation to compensate for trajectory deviation. The proposed processing method was applied to real UAV data acquired during the flight campaigns, and the results demonstrate the method's feasibility and effectiveness in generating compensated interferograms.
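The core of the TDBP image formation described above can be illustrated with a toy example: for each image pixel, the data are re-phased by the expected two-way propagation phase and summed coherently, so a point target focuses at its true position. Everything below (straight-line track, idealized unit-amplitude range-compressed data, grid size) is an assumption for illustration only; the real processor is CUDA-accelerated and works on the measured UAV trajectory.

```python
import numpy as np

c = 3e8
fc = 10e9                      # X-band centre frequency (10 GHz)
wavelength = c / fc            # ~3 cm

# Synthetic geometry: straight 10 m track at 50 m altitude, one point target.
n_pulses = 64
track = np.stack([np.linspace(-5, 5, n_pulses),
                  np.zeros(n_pulses),
                  np.full(n_pulses, 50.0)], axis=1)   # platform positions (m)
target = np.array([0.0, 30.0, 0.0])

# Idealized range-compressed data: unit response carrying the two-way
# phase history of the target.
r_target = np.linalg.norm(track - target, axis=1)
data = np.exp(-1j * 4 * np.pi * r_target / wavelength)

# Back-project onto a 1-D ground line through the target: re-phase each
# pulse by the expected two-way phase for the pixel and sum coherently.
xs = np.linspace(-2, 2, 41)
image = np.empty(len(xs), dtype=complex)
for i, x in enumerate(xs):
    r = np.linalg.norm(track - np.array([x, 30.0, 0.0]), axis=1)
    image[i] = np.sum(data * np.exp(1j * 4 * np.pi * r / wavelength))

# The coherent sum peaks at the true target position (x = 0).
peak_x = xs[np.argmax(np.abs(image))]
```

An uncompensated trajectory error would perturb `r` and defocus this peak, which is exactly what the autofocus step corrects by re-estimating the trajectory from strong targets.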

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Integrating UAVs, Aircraft, and Ground Robots for Scalable Plant Disease Monitoring With Multispectral and Hyperspectral Sensing

Authors: Ertai Liu, Kathleen Kanaley, Fernando Romero Galvan, Ryan Pavlick, Alyssa Whitcraft, Yu Jiang, Katie Gold
Affiliations: Cornell University, NASA Headquarters, University of Maryland College Park
Increasing food demand and climate change necessitate innovative and scalable solutions for detecting and managing agricultural diseases. Paradoxically, both plant disease and its control harm human health, the environment, and farm profitability, causing annual mitigation spending and losses exceeding $220B USD. Over 1 billion pounds of pesticides are applied to US agricultural fields annually to avoid and mitigate these losses. Remote sensing, in particular imaging spectroscopy, offers a unique opportunity to improve food security via scalable, passive disease detection and large-scale monitoring to inform both risk assessment and management intervention to mitigate crop losses. Plant disease changes how solar radiation interacts with leaves, canopy, and plant energy balance, which can be quantified pre-visually with imaging spectroscopy. Pre-visual disease detection and differentiation is driven by changes to plant physiology and biochemistry caused by pre- and/or asymptomatic infection. Early detection, either pre-visual or soon thereafter, is essential, because management interventions that halt disease progression are universally most effective at the earliest stages of establishment. Unfortunately, satellites' coarse spatial resolution and top-down view limit their ability to directly measure traits at the individual plant and organ level, or to observe the side- and under-canopy regions where early-stage disease is most likely to occur. Additionally, the labor-intensive nature of human scouting limits large-scale ground-truth data acquisition, which further challenges our ability to train Earth observation-based models for accurate early detection. Supplementing Earth observation with complementary proximal and near-surface modalities such as UAVs and aircraft effectively ameliorates these limitations. In this study, we present autonomous ground imaging robots with a side-canopy view as a scalable alternative to human ground-truthing.
Our “PhytoPatholoBot” provides consistent, objective ground-truth data through geo-tagged disease maps generated by autonomous RTK-GPS vineyard navigation. We demonstrate successful proof-of-concept with two economically important diseases of grapevine with contrasting disease physiology (Grapevine Downy Mildew and Grapevine Leafroll-associated Virus 3) in both research and commercial vineyards in diverse geographies: arid Southern California and humid upstate New York (USA). We developed and tested pipelines to integrate robotic ground-truthing with ultra-high-resolution multispectral drone imagery (sub-1 cm pixels) and high-resolution hyperspectral airborne imagery (sub-1 m pixels) from NASA's Airborne Visible and Infrared Imaging Spectrometer Next Generation (AVIRIS-NG). We found that robotic ground-truthing was able to train aerially acquired multispectral and hyperspectral disease detection models with comparable, if not better, performance than human-based ground-truthing for both diseases (>80% AUC). Further analysis reveals that robotic ground-truthing not only simplified model learning, but also improved model robustness to misclassification caused by environmental variability. The PhytoPatholoBot was capable of scouting 1 acre (0.4 ha) of vineyard in less than 2 hours, twice as fast as a team of 5 human scouts in the same vineyard. Our work lays the foundation for reliable, large-scale disease monitoring with autonomous ground robots for improved precision viticulture and management intervention with existing and forthcoming Earth observation platforms, such as NASA SBG, ESA CHIME, and Landsat Next.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Ready for take-off?! - How to integrate UAS remote sensing into the monitoring of EU Habitats Directive sites

Authors: Marvin Ludwig, Jan Lehmann, Henning Schneidereit, Hanna Meyer
Affiliations: University of Münster - Institute of Landscape Ecology, Deutsche Bundesstiftung Umwelt
The EU Habitats Directive requires regular reports on the condition and development of each conservation area. With over 27,000 conservation sites today, and the aim to restore and extend the natural habitats within these sites, monitoring all areas far exceeds the available funds and workforce given the currently required labor-intensive, fieldwork-based control protocols. Hence, novel methods and monitoring practices are required to meet regulatory demands and to gain accurate, widespread information on the state and development of habitats. In the last decade, unoccupied aerial vehicles (UAVs) have become a widespread tool for practitioners in environmental monitoring, giving access to high-resolution aerial images and analysis methods rooted in remote sensing. The potential of UAVs for estimating relevant landscape features has been shown in countless scientific studies, applications and technical demonstrations. However, the analysis of UAV imagery in particular, e.g. with automated classifications or image recognition, still requires a high degree of technical knowledge to obtain the reliable results needed for practical habitat monitoring. To leverage the full potential of UAVs for habitat monitoring, we need to identify and develop remote sensing and image analysis methods and workflows that are applicable and robust, such that practitioners have a high chance of success when integrating UAVs into their established fieldwork practices for habitat monitoring. In our project 'Ready for Take-Off!? – Integration of UAS remote sensing in the monitoring of EU Habitats Directive sites', we aim to build a knowledge database of standardized UAV remote sensing methods for habitat monitoring. Besides protocols for a standardized use of drones for the image acquisition itself, we will focus on image analysis techniques that lead to objective estimations of the habitat conditions required for EU regulations and reports.
Here we present the first results and workflows we established for estimating selected parameters of grassland and peatland sites in order to support the reporting of habitat conditions. While the project focuses on German sites and laws, the concepts and ideas will be applicable to environmental monitoring in general. The workflows will be openly available as a collaborative knowledge database in order to provide public administrations, practitioners and landscape managers with standardized methods and data.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Synergistic Use of Decametric Satellite Imagery and UAV Observations for Continuous Monitoring of Paddy Rice Growth

Authors: Wenjuan Li, Miss Zihan Ren, Miss Jiaqi Xu, Prof.Dr. Marie Weiss, Prof.Dr. Wenbin Wu
Affiliations: Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, UMT-CAPTE, INRAE
Continuous and accurate monitoring of crop status throughout the growth cycle is a key requirement for precision agricultural management. Sentinel-2 satellite observations are particularly suited for this purpose due to their high temporal resolution and decametric spatial resolution (~10 m). However, effective crop monitoring demands accurate assessment at specific phenological stages, which is often hindered by cloud cover in many regions, resulting in gaps in Sentinel-2 time series despite its frequent revisit schedule. Commercial satellites such as PlanetScope and Jilin-1 offer data with very high spatial resolution and frequency, but their application is limited by high costs and persistent cloud interference. UAVs present a more flexible alternative for timely and ultra-high-resolution data collection; however, their limited coverage, constrained by battery life, makes them unsuitable for monitoring large areas. A synergistic approach combining data from optical satellite platforms and UAVs offers a promising solution for continuous monitoring. The objective of this study is to demonstrate the potential of integrating observations from multiple platforms, such as satellites and UAVs, to generate continuous Green Area Index (GAI) and Normalized Difference Vegetation Index (NDVI) imagery over paddy rice. The study area is located in paddy rice fields in Tianjin, China. UAV flights were conducted monthly between May and September in both 2023 and 2024, using a DJI M300 RTK equipped with a MicaSense Altum-PT multispectral camera. Multispectral images of the fields were processed using a standard workflow developed with Agisoft software. Surface reflectance imagery from Sentinel-2, PlanetScope, and Jilin-1 during the same period was also acquired. Green Area Index (GAI) was derived from each platform using a Back-Propagation Neural Network pre-trained on the PROSAIL radiative transfer model.
The study first evaluated the consistency of surface reflectance, GAI, and NDVI measurements across the UAV and satellite platforms. Subsequently, the Near Real-Time Green Surface Fusion (NRT-GSF) spatiotemporal algorithm, originally developed to integrate Sentinel-2 and daily IoT ground measurements, was revised and adapted for this study. It adjusts the temporal information from PlanetScope, Jilin-1 and UAV observations to Sentinel-2 measurements, and generates continuous GAI and NDVI images over large fields at high spatial resolution. The accuracy of the retrieved GAI images will be validated against in situ GAI measurements and through Leave-One-Out (LOO) cross-validation.
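For reference, the NDVI used throughout this fusion is the standard normalized band ratio. The sketch below assumes surface-reflectance inputs already resampled to a common grid; the band values shown are illustrative, not measurements from the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, (NIR - Red) / (NIR + Red),
    computed per pixel from surface reflectance. Inputs may be scalars
    or arrays on a common grid; no resampling is performed here.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Dense rice canopy: high NIR, low red reflectance -> NDVI well above 0.
canopy = ndvi(0.45, 0.05)
```

Because the index is a per-pixel ratio, it can be computed identically from UAV, PlanetScope, Jilin-1, and Sentinel-2 reflectance before the cross-platform consistency check.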

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Segmentation of Invasive Plant Species in High-resolution UAV Images

Authors: Dr. Lukas Drees, Professor Jan Dirk Wegner
Affiliations: EcoVision Lab, University of Zurich
Invasive alien plant species (neophytes) pose a significant threat to ecosystems and are a major driver of global biodiversity loss. Combating their spread is a priority outlined in the United Nations' Sustainable Development Goals (Target 15.8). Effective mitigation requires continuous monitoring of high-risk areas and automated detection methods. These high-risk areas include points of entry and spread, such as road and railway traffic routes; monitoring efforts should therefore focus on the regions that facilitate the invasion. However, this is complicated by several challenges, including the high adaptability and rapid spread of neophytes, which make manual monitoring inefficient. Additionally, these species exhibit significant variability across phenological growth stages, vary greatly in size, from small shrubs to large trees, and are often intertwined with native vegetation. A promising direction towards automated monitoring of neophytes at large scale and with high spatial resolution is the combination of UAV imagery with deep learning. This study introduces supervised deep learning methods for semantic segmentation of multiple invasive alien plant species along railway tracks in Switzerland under a low and biased reference data regime. We explore mixture-of-experts models and multi-modal deep learning to compensate for the scarce and unevenly sampled reference data. A mixture of experts divides the segmentation task among specialized sub-models to handle complex and imbalanced datasets more effectively. Our multi-modal approach leverages additional pixel-level features such as phenological phase and canopy density, along with image-level metadata like the time of capture, to enhance detection accuracy. Both approaches remain under-explored in the field of neophyte detection. State-of-the-art methods for multi-class semantic segmentation rely on deep neural networks, whose performance is highly dependent on the quality and quantity of training data.
Neophyte detection faces unique challenges in this regard: while large amounts of unlabeled UAV imagery are available, invasive species occupy only a small fraction of these images. Labeling neophytes is time-consuming and costly, as expert botanical knowledge is needed to differentiate them from native species during field surveys. There is not only an imbalance in the presence of different neophytes, but also in their spatial scale. These factors lead to a rather small reference dataset paired with a pronounced long-tail distribution. Mixture-of-experts models address this issue by dividing the model into specialized experts, each tailored to specific data subsets, thereby improving segmentation performance. Another way to compensate for scarce reference data that follows a long-tailed distribution is to add auxiliary evidence to the segmentation task. We add pixel-level information about canopy density and the phenological state of the neophytes, which was collected by experts to augment the plant labels. These data vary greatly between neophytes and can implicitly encode the growth process over the vegetation period. Combined with an encoding of image-level metadata such as the season of the images, multi-modal data can support semantic segmentation of under-represented classes. We hope that our framework will lead to a significant advancement in biodiversity conservation management, particularly in monitoring neophytes along the critical infrastructure of railways. While the model already works for various locations in Switzerland, it is scalable for use by the diverse group of landowners responsible for removing neophytes.
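The mixture-of-experts combination can be sketched as a softmax gate over expert predictions. The shapes and values below are purely illustrative: the study's experts are deep segmentation networks operating per pixel, not fixed arrays, and the gating network is likewise learned.

```python
import numpy as np

def moe_combine(gate_logits, expert_outputs):
    """Combine per-pixel expert predictions with a softmax gate.

    gate_logits: (n_experts,) scores from a gating network.
    expert_outputs: (n_experts, n_classes) class probabilities per expert.
    Returns gate-weighted class probabilities of shape (n_classes,).
    """
    g = np.exp(gate_logits - np.max(gate_logits))
    g = g / g.sum()                       # softmax gate weights
    return g @ expert_outputs             # weighted combination

# Two experts, three classes: the gate favors expert 0 for this pixel.
scores = moe_combine(np.array([2.0, 0.0]),
                     np.array([[0.9, 0.05, 0.05],
                               [0.1, 0.8, 0.1]]))
```

In training, each expert can specialize on a subset of species (e.g. the rare tail classes), while the gate learns which expert to trust for a given input.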

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Individual Tree Species Identification Using UAV-LiDAR in Mixed Temperate Forest

Authors: Mudassar Umar, Dr. Harm Bartholomeus, Dr. Alvaro Lau Sarmiento, Dr. Kirsten De
Affiliations: Wageningen University & Research
Variation in tree species is essential for healthy and resilient forests, and spatially explicit information on tree species is required for sustainable forest management. This information contributes to an accurate understanding of forest conditions from the individual tree to the forest stand and the broader landscape. However, remote sensing-based tree species identification remains difficult due to the complexity of species composition and canopy structure in forests, as both cross-species similarities and intra-species variation exist in tree reflectance and structure. Light Detection and Ranging (LiDAR) data collected from unmanned aerial vehicle (UAV) platforms provide high-density point clouds, offering detailed 3D structural information and a high level of detail for individual trees. This study evaluates the effectiveness of UAV-LiDAR features in identifying individual tree species within a mixed temperate forest in Germany. UAV-LiDAR data collected under leaf-on conditions from seven forest plots were used to derive structural and radiometric features for 350 trees. We found that most of the LiDAR features were highly correlated with each other (|r| ≥ 0.7). Using the Random Forest algorithm, we identified six tree species with 80% accuracy using structural features alone, which improved to 88% when radiometric features were also included in the model. Key features included crown characteristics, height percentiles, entropy, and intensity metrics; in particular, the minimum and mean intensity of single returns were identified as the most significant features when combined with structural features. These findings demonstrate that UAV-LiDAR features contain sufficient shape and intensity information to accurately identify tree species in mixed temperate forests.
However, a larger sample could lead to higher accuracy and stronger validation of the results. Therefore, further investigation is being carried out to test the robustness and transferability of the significant features in identifying individual tree species across different forest environments.
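The |r| ≥ 0.7 correlation screening mentioned above can be sketched as a greedy filter over the feature matrix before classification. The feature names and synthetic data below are illustrative, not the study's actual LiDAR metrics.

```python
import numpy as np

def drop_correlated(X, names, threshold=0.7):
    """Keep a feature only if its absolute Pearson correlation with every
    already-kept feature is below the threshold (greedy, left to right).
    A sketch of redundancy screening; not the study's exact procedure.
    """
    corr = np.corrcoef(X, rowvar=False)
    keep = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

# Synthetic example: the second column is a linear copy of the first.
rng = np.random.default_rng(0)
height = rng.normal(20, 3, 100)
features = np.column_stack([height,
                            2 * height + 1,          # perfectly correlated
                            rng.normal(0, 1, 100)])  # independent
X_kept, kept_names = drop_correlated(
    features, ["p95_height", "p99_height", "mean_intensity"])
```

The reduced feature set can then be passed to a Random Forest classifier, keeping the model compact and its feature importances easier to interpret.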

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Monitoring and modeling of rock glacier kinematics: A case study in Lazaun, South Tyrol, Italy

Authors: Giovanni Dedivitiis, Chiara Crippa, Giovanni Cuozzo, Giacomo Marrazzo, Francesco Calvetti, Abraham Mejia-Aguilar
Affiliations: Eurac Research, Polytechnic University of Milan, Department of Architecture, Built Environment and Construction Engineering, Polytechnic University of Milan, Department of Civil and Environmental Engineering
Permafrost is a ground condition in which temperatures consistently remain below 0°C, allowing ice to form and persist. In mountainous regions, this leads to the development of rock glaciers, which are masses of debris or fractured rock mixed with ice. These formations move downslope under gravity, influenced by the surrounding topography. Rock glaciers consist of angular, poorly sorted rocks embedded in glacial ice, which acts as a cement binding the boulders together. They are often characterized by steep fronts and distinct features such as lobate ridges and furrows, resulting from their viscous flow and various movement types, including sliding, rolling, and gravitational motion. Monitoring rock glaciers is challenging due to their remote locations, harsh climatic conditions, and limited resources for operational systems. Traditional methods involve using GNSS instruments to measure the coordinates of specific objects, such as boulders, to determine displacement and velocity. While these methods are highly accurate and capable of detecting 3D movements, they have limited spatial coverage and are time-consuming. Remote sensing offers an alternative, particularly with Synthetic Aperture Radar (SAR), which enables the analysis of large areas and the detection of active zones, even under cloud cover. However, SAR has limitations, including signal scattering caused by surface roughness, difficulties in capturing vertical displacements, and limited resolution for detecting small debris and boulders. To address these challenges, we propose integrating Unmanned Aerial Vehicles (UAVs) into a broader monitoring strategy that combines remote and ground-based sensing methods. UAVs can easily reach remote, rugged, and hazardous terrain where it may be unsafe or impossible for humans to conduct surveys. UAVs equipped with various sensors provide detailed data on terrain displacement, position, and characteristics.
Outputs such as Digital Surface Models (DSMs), orthomosaics, thermal maps, and 3D point clouds are generated. Thermal imaging is particularly interesting because it reveals areas of higher activity or movement by detecting differences in surface temperature caused by internal friction or melting processes. The 3D point cloud data are then processed in modeling software, such as FLAC3D, to predict future scenarios and potential events, enhancing the understanding and management of rock glacier dynamics. We present this monitoring strategy for the Lazaun rock glacier in South Tyrol, Italy.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: Innovative Autonomous UAV solution for in-situ Cal/Val of satellite altimetry over inland waters and other surfaces

Authors: Valentin Fouqueau, Tom Bruel, Jean-Christophe Poisson, Yannick Riou, Loïc Richard, Nicolas Picot
Affiliations: vorteX-io, I-TechDrone, CNES
For many years now, satellite altimetry has been increasingly used to monitor inland waters all over the globe, even more so since the advent of the delay-Doppler radar altimeters embedded on the Copernicus Sentinel-3 and Sentinel-6 MF missions, and the breakthrough of swath altimetry brought by SWOT. For these instruments, new algorithms are currently being developed to support improved data processing over hydrological surfaces and achieve significant accuracy improvements. There is therefore an increasing need for new in-situ systems to provide reference data for large-scale Calibration/Validation (Cal/Val) activities over inland water. Swath altimetry also requires new types of in-situ measurements to validate the data over long sections of hydrological bodies. In this context, vorteX-io designed a lightweight remote sensing instrument, inherited from the specifications of radar altimeters on board altimetric satellites, capable of providing water height measurements with centimeter-level accuracy and at high frequency. Embedded on a flying drone, the system combines a LiDAR, a camera and a GNSS chip in a single payload to provide centimeter-level water surface elevation measurements, orthophotos, water surface masks and water surface velocity throughout the drone flight. The vorteX-io system offers an alternative to existing in-situ systems used for Cal/Val of satellite altimetry in hydrology or for operational monitoring of water heights (used to anticipate river floods or to monitor reservoir volumes). As the lightweight altimeter is inspired by satellite altimetry, water level measurements are directly comparable to satellite altimeter data. Thanks to UAV capability, water measurements can be performed over long distances along rivers, and at the same location and time as the satellite overflight. New hydrological variables are planned for the near future (water surface temperature, river discharge, turbidity, …).
To perform operational calibration and validation activities, the river profiles measured by the drone altimeter must be combined with in-situ measurements from fixed sensors. This processing was designed to overcome the problem of measurements not being simultaneous with those from the satellite. Indeed, the ideal way to perform reference measurements is to measure the water altitude at the exact location and time of the satellite measurements. In this context, vorteX-io is developing, with its partner I-TechDrone, a version of the lightweight altimeter embedded on an autonomous drone. We present here the concept and development of this autonomous drone dedicated to hydrological measurements, together with early results of the first flights performed in autonomous mode (i.e. without any human pilot controlling the drone in the field).
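The basic geometric retrieval behind a drone-borne water level measurement (GNSS antenna altitude minus the vertical distance to the water surface) can be sketched as below. This is a minimal illustration under our own assumptions, not the vorteX-io processing chain: the function name is hypothetical, the LiDAR is assumed nadir-pointing, and only a simple attitude correction is applied.

```python
import numpy as np

def water_surface_elevation(gnss_alt_m, lidar_range_m, pitch_deg=0.0, roll_deg=0.0):
    """Estimate water surface elevation (WSE) from a drone-borne altimeter.

    Illustrative simplification: the slant LiDAR range is projected to the
    vertical using the platform tilt, then subtracted from the GNSS altitude.
    Real processing chains include lever-arm, geoid and waveform corrections.
    """
    tilt = np.deg2rad(np.hypot(pitch_deg, roll_deg))  # combined tilt angle
    vertical_range = lidar_range_m * np.cos(tilt)     # vertical distance to water
    return gnss_alt_m - vertical_range
```

For example, a drone at 150 m ellipsoidal altitude measuring a 50 m nadir range would report a 100 m water surface elevation.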
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone J)

Poster: UAV thermal remote sensing in complex environments – challenges, lessons learnt and recommendations

Authors: Kathrin Naegeli, Jennifer Susan Adams, Gabriele Bramati, Isabelle Gärtner-Roer, Julian Gröbner, Nils Rietze
Affiliations: Department of Geography, University of Zurich, Physikalisch-Meteorologisches Observatorium Davos, World Radiation Center (PMOD/WRC), Department of Evolutionary Biology and Environmental Studies, University of Zurich
Mountain regions are particularly affected by rapid and severe changes caused by global climate change. Monitoring these regions is, however, challenging and often hampered by limited safe accessibility, harsh conditions, and small-scale heterogeneity. Uncrewed aerial vehicles (UAVs) not only bridge the spatial scale between point-scale in situ instruments and coarse-scale satellite data, but also enable surveying environments not safely accessible on foot. Hence, they provide essential datasets to improve our understanding of Earth surface dynamics and environmental changes at present and under future climate variability scenarios. Land surface temperature (LST), an Essential Climate Variable (ECV) recognised by the Global Climate Observing System (GCOS), is a fundamental variable in the physics of the surface–atmosphere interface and therefore highly relevant to the surface energy budget (SEB). Recent technological developments have fostered the popularity of thermal infrared (TIR) close-range remote sensing for various environments. However, the mostly uncooled microbolometers used onboard UAV platforms are prone to manifold impacts from the environment as well as from the internal camera behaviour during surveys. Absolute surface temperature retrieval is thus not straightforward and requires extensive consideration before, during and after data acquisition. While some studies have focused on UAV-based LST retrieval and associated challenges, most of them addressed easily accessible sites, usually with very little or no topography, generally homogeneous surface types and rather stable ambient meteorological conditions. Large topographical gradients, remote or difficult-to-access areas, large variance of slope and aspect, little surface contrast, high spatial heterogeneity, high altitude and/or strong elevation gradients, as well as generally harsh environmental conditions within the survey area, are characteristics often found in (high) mountain environments.
These factors further complicate available approaches and algorithms, making LST data acquisition and retrieval a highly complex task. In this contribution, we make use of multi-year, combined RGB and TIR UAV datasets over a permafrost landform in the high-alpine Swiss Alps. We present challenges and lessons learnt arising from such complex environments. A particular focus is placed on several pre-survey, in-flight and post-survey characterisations and correction schemes to acquire absolute surface temperature values from uncooled cameras on UAV platforms as accurately as possible. Furthermore, the availability of different in situ ancillary data allows a thorough discussion of the environmental impact on thermal datasets and how best to use them for validation purposes. This work aims to contribute to a better understanding of the acquisition and processing of close-range thermal data, with a particular focus on the challenges posed by complex environments. With several upcoming thermal satellite missions providing finer spatially resolved data and more frequent revisits, an improved understanding of the spatial and temporal variability of SEB-relevant variables such as LST in complex environments is urgently needed to provide critical information for accurate product retrieval.
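One reason absolute LST retrieval from uncooled TIR cameras needs careful correction is surface emissivity: the camera sees a mix of emitted and reflected radiation. A common textbook single-channel correction, shown here in its simplified Stefan–Boltzmann form (this is a generic illustration, not the authors' actual correction scheme), removes the reflected downwelling component from the at-sensor brightness temperature:

```python
def lst_from_brightness(tb_k, emissivity, t_down_k):
    """Single-channel emissivity correction in the Stefan-Boltzmann approximation.

    tb_k:       at-sensor brightness temperature [K]
    emissivity: surface emissivity (0-1)
    t_down_k:   effective sky (downwelling) brightness temperature [K]

    Solves tb^4 = e * T_s^4 + (1 - e) * T_down^4 for the surface temperature T_s.
    """
    return ((tb_k**4 - (1.0 - emissivity) * t_down_k**4) / emissivity) ** 0.25
```

With a cold clear sky (low t_down_k) and emissivity below one, the corrected surface temperature is warmer than the raw brightness temperature, which is exactly the bias an uncorrected retrieval would carry.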
Add to Google Calendar

Thursday 26 June 17:45 - 18:30 (Nexus Agora)

Session: B.03.12 Space Solutions and the Green Transition: Joint Action for Impact

Space capabilities have long provided essential data and insights on climate change, but their potential for driving urgent climate action remains far from realized. Moving forward, space technology solutions should be integrated into the policies and practices of impact sector multipliers, international financial institutions and policymakers, in support of achieving carbon neutrality by 2050. This Agora session will explore, among other topics, how to unlock the potential of space for stakeholders in key sectors related to the green transition, also ensuring that space-based contributions are strategically deployed and sustainable in the long-term, to maximise the impact of space for non-space to drive research and economic investment. The session will take the form of a moderated panel discussion with representatives from ESA and stakeholders.

Speakers:



  • Benjamin Koetz. Head of the Long-Term Action Section at the European Space Agency

  • Benjamin White. Ecosystem Services Technology Officer at the Forest Stewardship Council Investments and Partnerships

  • Dušan Chrenek. Principal Adviser for ‘Digital for the Green Transition’ in the Directorate-General for Climate Action of the European Commission

  • Melissa de Kock. Deputy Director at the United Nations Environment Programme World Conservation Monitoring Centre (UNEP-WCMC)

Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A.09.09 - POSTER - Arctic and Antarctic Sea Ice in the Earth system: Advancing Research with Remote Sensing, In-Situ Observations, and Modeling

Sea ice plays a crucial role in the global climate system through its strong effect on albedo, ocean and atmospheric circulation, and its feedbacks with ice sheets and permafrost.

Remote sensing of sea ice has been the cornerstone of Arctic and Antarctic sea ice research for over 50 years. These long-term, large-scale, and stable time series of sea ice parameters provide the baseline for a deeper understanding of the ongoing dramatic changes in both hemispheres. This knowledge is further diversified and enhanced by new and upcoming satellite missions (e.g., ICESat-2, SWOT, CIMR, CRISTAL, ROSE-L) that provide insights into detailed processes such as snow depth changes, meltpond drainage, and sea ice ridging, as well as support operational forecasting and monitoring applications. They also advance our understanding of the relevance of sea ice for atmospheric, oceanic, and ecological processes, e.g., Arctic cloud formation or the timing of ice algae blooms.

Sea ice parameters are observed over a large wavelength spectrum and derived from many different sensors including microwave and infrared radiometers, visible observations, radar imagers, and lidar or radar altimeters. Combining, merging, and jointly analyzing products from different satellite sensors and scales represents the next powerful step in advancing our knowledge of the fast-changing sea ice covers.

A key challenge remains in bridging scales and spheres between Earth Observation (EO) datasets, climate modeling, and in-situ datasets. New methodological advances such as data-driven modeling and physics-informed artificial intelligence, may be well-suited to address this challenge.

This session addresses all aspects of sea ice, including the current status and needs in enhancing EO methodologies, and the use of EO products for evaluating polar climate model simulations and for data assimilation in Numerical Weather Prediction models. Airborne and in-situ observation campaigns are critical to evaluate, calibrate, and develop satellite retrievals and we invite submissions on these aspects, too. Submissions on solutions for addressing up- and downscaling challenges on different temporal and spatial scales and between different data types are also encouraged.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: High-Resolution UAV-Based Hyperspectral Imagery Reveals the Large Spread of Albedo-Reducing Snow Algae Blooms in Maritime Antarctica.

Authors: Dr. Alejandro Román, Gabriel Navarro, Dr. Luis Barbero, Dr. Enrique González-Ortegón, Dr. Isabel Caballero, Dr. Antonio Tovar-Sánchez
Affiliations: Institute of Marine Sciences of Andalusia (ICMAN-CSIC), University of Cádiz (UCA)
Single snow algal blooms can span areas from tens to hundreds of square meters in certain Antarctic snowfields, making them significant photosynthetic primary producers according to previous estimates. The growth of snow algae is crucial to coastal ecosystems in polar regions, as it affects nutrient dynamics, supports food chains, contributes to carbon cycling, and influences climate feedback mechanisms. In addition, these algal communities can reduce snow albedo by up to 20% due to darker snow surfaces, which further accelerates snow melt rates. Therefore, monitoring the growth, frequency, and extent of snow algae blooms is essential for understanding their ecological importance and the physical and biogeochemical changes occurring in polar regions due to global environmental change. Despite the wide-ranging potential of satellite remote sensing for monitoring these algal communities in Antarctic snowfields, its application is restricted by almost permanent cloud cover, and an occasionally insufficient spatial resolution to identify these blooms in such heterogeneous and complex areas. Furthermore, to the best of our knowledge, these challenges have thus far hindered the generation of an accurate spectral library for remote detection. In this study, we leverage the fine spatial and spectral resolution provided by a hyperspectral sensor mounted on an Unmanned Aerial Vehicle (UAV) to detect, identify, and map the occurrence of snow algae blooms, primarily composed of Sanguina nivaloides, at two locations on Livingston Island (South Shetland Islands, Maritime Antarctica). The collected data enabled the creation of a spectral library, validated with ground reference spectrometry data. This information was upscaled to Sentinel-2 imagery to conduct a comprehensive assessment of snow algae’s impact in the Antarctic South Shetland Islands using a supervised machine learning approach. 
Our results provide insight into the spread of snow algae, offering a methodological approach that facilitates monitoring and enhances our understanding of its ecological consequences under current climate change scenarios.
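The abstract does not name the supervised classifier used for the Sentinel-2 upscaling; as a minimal stand-in, matching pixels against a spectral library can be illustrated with the classic spectral angle mapper, where each pixel takes the class of the closest library spectrum. All class names and spectra below are hypothetical.

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Spectral angle (radians) between a pixel spectrum and a library reference."""
    cos = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference)
    )
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(pixels, library):
    """Assign each pixel the library class with the smallest spectral angle."""
    names = list(library)
    refs = [np.asarray(library[n], float) for n in names]
    labels = []
    for px in pixels:
        angles = [spectral_angle(np.asarray(px, float), r) for r in refs]
        labels.append(names[int(np.argmin(angles))])
    return labels
```

Because the spectral angle is insensitive to overall brightness, this kind of matcher tolerates illumination differences between the UAV spectral library and the satellite scene, which is one reason it is a common baseline in hyperspectral work.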
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Upscaling of ICESat-2 Sea Ice Freeboard Measurements by Sentinel-1 Synthetic Aperture Radar

Authors: Shiming Xu, Siqi Liu, Yanfei Fan, Wenkai Guo, Jack Landy, Dr. Lu Zhou
Affiliations: Tsinghua University, UiT, Utrecht University
Satellite altimetry serves as the backbone of measuring sea ice elevation and producing basin-scale climate records of sea ice thickness in polar regions. Although modern altimeters such as ICESat-2 provide highly precise freeboard measurements, their spatial representation is inherently limited because the scanning covers only the nadir of the satellite's pass. Consequently, measurements within a month are used to compile basin-scale maps, a compromise between coverage, spatial and temporal representation, and end users' needs. In this talk we explore the statistical upscaling of the sea ice topography measured by ICESat-2 by collocating backscatter maps from the C-band synthetic aperture radar of Sentinel-1. Using Operation IceBridge (OIB) data and collocated Sentinel-1 maps, we construct a statistically significant relationship between total freeboard statistics (i.e., mean height, height variability) and the C-band backscatter. In particular, the scale dependency of this relationship is investigated. The relationship is rooted in the different backscatter mechanisms among various sea ice types/ages. Moreover, this relationship is found to be localized, varying with both the sea ice conditions and the observational geometry. We further utilize this relationship to build a prototype algorithm to upscale the spatially limited ICESat-2 measurements with SAR images to larger scales. As a result, the spatial representation of the freeboard measurements is much improved, which potentially allows more effective synergy with other payloads such as SMOS. In addition, the temporal representation of basin-scale freeboard maps is improved: weekly basin-scale maps are generated instead of monthly. Initial results of the upscaled total freeboard dataset based on ICESat-2 and Sentinel-1A/B for recent winters are also discussed.
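The freeboard–backscatter relationship is described only qualitatively in the abstract; a simple linear least-squares regression (our assumption for illustration, not the authors' prototype algorithm, which is localized and scale-dependent) conveys the upscaling idea of training on collocated tracks and predicting freeboard where only SAR backscatter is available:

```python
import numpy as np

def fit_freeboard_vs_backscatter(sigma0_db, mean_freeboard_m):
    """Least-squares linear fit h = a + b * sigma0 between collocated
    C-band backscatter [dB] and mean total freeboard [m]."""
    slope, intercept = np.polyfit(sigma0_db, mean_freeboard_m, 1)
    return intercept, slope

def predict_freeboard(sigma0_db, intercept, slope):
    """Predict mean freeboard from backscatter using the fitted relation."""
    return intercept + slope * np.asarray(sigma0_db, float)
```

Trained along the altimeter's nadir tracks, such a relation can then be evaluated on every pixel of a Sentinel-1 backscatter map, widening the spatial coverage of the freeboard estimate.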
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Inventory of historical Nimbus 5 and 6 microwave radiometer satellite sea ice concentration estimates from the 1970s

Authors: Rasmus Tonboe, Wiebke Kolbe, Emil Tellefsen, Alberte Lorenscheit, Amalie van Holt
Affiliations: Technical University of Denmark
Satellite microwave radiometer data from the successful Nimbus satellite program have recently been made available online by NASA. Some of these data are now being reprocessed with modern sea ice concentration processing methodologies to estimate the global sea ice extent and ice type in the mid-1970s (Kolbe et al., 2024a). The Nimbus 5 Electrically Scanning Microwave Radiometer (N5ESMR) sea ice concentration data, spanning the period from Dec. 1972 to May 1977 with some interruptions, is thus part of the ESA CCI sea ice concentration datasets on the ESA climate office open data portal (Kolbe et al., 2024a; ESA CCI data). The N5ESMR record is the longest of the historical microwave radiometer datasets, but there are also other historical satellite datasets from the 1970s that can be used for computing sea ice concentration and type, for example, the microwave radiometers on Nimbus 6: ESMR and SCAMS. The Nimbus 6 ESMR data were recently recovered from scanned film images and written to geolocated NetCDF orbit files (Tellefsen et al., 2024). These data are now being processed to close a major data gap in the N5ESMR record. The primary mission objective of the SCAMS radiometer was atmospheric temperature and humidity sounding. However, two of the channels, at 22 and 31 GHz, have good penetration of the polar atmosphere and are sensitive to surface emission and to sea ice (Kolbe et al., 2024b). All of these sensors are sensitive to sea ice type and concentration, and by combining the datasets it is possible to make a complete assessment of the highs and lows of Antarctic sea ice extent from the maximum in 1973 to the minimum in 1977. Arctic sea ice extent is mapped from the annual maximum in 1974 to the maximum in 1977 with all highs and lows in between. Overlapping periods between the N5ESMR, N6 ESMR and SCAMS missions in 1975 and 1976 can be used for closing gaps, assessing the uncertainties, and comparing independent estimates of the sea ice extent and type.
The results show that the Antarctic winter sea ice extent (annual maximum) was high from 1973 to 1975, with a lower maximum extent in 1976 compared to the previous years. The Arctic sea ice extent was much higher in the 1970s than it is today. These different satellite sensors and datasets have very different imaging geometries, noise levels, spatial resolutions and coverage. All of these issues affect the sea ice concentration and sea ice extent estimates, and the issues arising when combining the datasets are discussed at the conference.
References:
ESA CCI data: https://catalogue.ceda.ac.uk/uuid/34a15b96f1134d9e95b9e486d74e49cf/
Kolbe, W. M., Tonboe, R. T., and Stroeve, J.: Mapping of sea ice concentration using the NASA NIMBUS 5 Electrically Scanning Microwave Radiometer data from 1972–1977, Earth Syst. Sci. Data, 16, 1247–1264, https://doi.org/10.5194/essd-16-1247-2024, 2024a.
Kolbe, W. M., R. T. Tonboe, J. Stroeve: Mapping of sea ice using the NIMBUS-6 satellite SCAMS 1975/1976. In preparation, 2024b.
Tellefsen, E. H., R. T. Tonboe, W. M. Kolbe, J. Stroeve: The Nimbus 6 Electrically Scanning Microwave Radiometer: data rescue. In review, 2024.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Changes in snow on first-year sea ice observed with coherent change detection from C-band InSAR

Authors: John Yackel, Dr. Torsten Geldsetzer, Kiran
Affiliations: University of Calgary
Coherent change detection (CCD) observations of snow on first-year sea ice display distinct signatures that are hypothesized to be related to wind redistribution in some cases and to thermodynamic changes within the snow cover in others. These CCD observations are made using repeat-pass coherent C-band SAR image pairs at temporal baselines of 4 to 12 days using Sentinel-1 and Radarsat Constellation Mission data. Local meteorological data are used to relate the CCD observations to significant wind events, or to air temperature transitions that cause the basal brine-wetted snow layer to cross eutectic thresholds, leading to significant dielectric changes. Image pairs during stable meteorological conditions provide the control state. The study area is located on landfast first-year sea ice in Dease Strait, Nunavut, Canada. Three case studies are used to demonstrate the observed effects: 1) a wind event producing alternating low- and high-coherence oriented scour features; 2) a very-cold to cold temperature transition producing areas of low coherence; and 3) a cold to less-cold temperature transition also producing areas of low coherence. The latter two cases are related to brine volume changes in the basal snow layer, one due to salt precipitation (cold case), and one due to snow grain melting (less-cold case). The areas of low coherence linked to the thermodynamic effects are associated with thinner snow covers, owing to their reduced insulation. One implication of the thermodynamic effects is that SAR image coherence may provide a proxy for snow depth on first-year sea ice. An implication of the wind effect is the detection of wind-driven snow redistribution, which is of interest to modelling studies and to those travelling on the sea ice.
The coherence observations offer insight into microwave scattering mechanisms in snow-covered sea ice and suggest that changes in snow properties during winter can be detected at C-band and presumably at higher frequencies such as X, Ku- and Ka-bands.
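CCD rests on the interferometric coherence between the two co-registered complex SAR acquisitions of a repeat-pass pair. A generic textbook boxcar estimator (an illustrative sketch, not the authors' processing chain) looks like this:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def coherence(s1, s2, win=3):
    """Sample coherence magnitude between two co-registered complex SAR images.

    Estimated in a sliding (win x win) boxcar window as
    |sum(s1 * conj(s2))| / sqrt(sum(|s1|^2) * sum(|s2|^2)).
    Output is smaller than the input by (win - 1) in each dimension.
    """
    num = sliding_window_view(s1 * np.conj(s2), (win, win)).sum(axis=(-2, -1))
    p1 = sliding_window_view(np.abs(s1) ** 2, (win, win)).sum(axis=(-2, -1))
    p2 = sliding_window_view(np.abs(s2) ** 2, (win, win)).sum(axis=(-2, -1))
    return np.abs(num) / np.sqrt(p1 * p2)
```

Identical scenes yield coherence 1 everywhere; snow redistribution or dielectric change between acquisitions randomizes the phase and pulls the estimate toward 0, which is the signal the case studies above exploit.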
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Direct Observations of the Mesoscale Dynamics in the Marginal Ice Zone from Sentinel-1 Doppler shift observations

Authors: Artem Moiseev, Anton Korosov, Fabrice Collard, Johnny Johannessen
Affiliations: NERSC, OceanDataLab
The marginal ice zone (MIZ) is the transitional area between open ocean and pack ice regimes, exhibiting significant horizontal gradients in ice, oceanic, and atmospheric properties. MIZ formation, dynamics, and variability are driven by complex vertical and horizontal interactions between the sea ice, atmosphere, and ocean through heat fluxes (sea ice growth and melt) as well as wind, waves, and currents (sea ice drift, breaking, and heat and salt transport). Moreover, interactions between the ocean, sea ice, and atmosphere in the MIZ are amplified by extreme gradients in albedo, temperature, salinity, momentum exchange, and atmospheric boundary layer (ABL) stability, yielding a fast-changing sea ice regime. Many pan-Arctic studies indicate that atmospheric and oceanic forcing has strong regional and temporal variability. However, the impact of mesoscale ocean circulation on the MIZ remains largely unknown due to a lack of systematic information about surface currents in the ice-covered ocean. We investigated ocean surface currents and waves in the Arctic Marginal Ice Zone by utilizing direct observations of the sea ice radial velocity from Sentinel-1 Doppler shift observations (see Figure 1). The Sentinel-1 Doppler shift observations provide a unique way to observe instantaneous sea ice drift velocities. Moreover, under low wind conditions, they reveal the presence of mesoscale ocean surface current structures, providing a novel way to study upper ocean dynamics in the region. Furthermore, we compared these observations with surface currents and sea ice drift fields from the operational CMEMS Mercator 1/12 model as well as with sea ice drift derived from pairs of Sentinel-3 SLSTR acquisitions. The comparison indicated promising consistency between satellite observations and the operational model. This study demonstrated the potential of Sentinel-1 Doppler observations for studying mesoscale dynamics in the Marginal Ice Zone.
Moreover, the combination of instantaneous velocities from Sentinel-1 Doppler and three-hourly velocities from SLSTR indicated a promising capability to study the variability of sea ice drift in the MIZ at different time scales. The method can be further used to study Arctic and Antarctic coupled sea ice and ocean surface dynamics.
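The conversion from a Doppler centroid anomaly to a line-of-sight (radial) surface velocity is commonly written as f_D = 2 v_r sin(θ) / λ. A minimal sketch follows; note that the default wavelength is only approximately the Sentinel-1 C-band value, and the geophysical, attitude and wind-state corrections applied in practice are omitted:

```python
import math

def radial_velocity(f_dca_hz, incidence_deg, wavelength_m=0.0555):
    """Horizontal line-of-sight surface velocity from a SAR Doppler
    centroid anomaly, via the textbook relation f_D = 2 * v_r * sin(theta) / lambda.

    f_dca_hz:      Doppler centroid anomaly [Hz]
    incidence_deg: local incidence angle [deg]
    wavelength_m:  radar wavelength [m] (~C-band by default)
    """
    return f_dca_hz * wavelength_m / (2.0 * math.sin(math.radians(incidence_deg)))
```

For instance, a 20 Hz anomaly at 30° incidence maps to a radial velocity of roughly 1.1 m/s, which gives a feel for why even small Doppler anomalies carry usable drift information.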
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Separating Sea Ice and Open Water in SAR Imagery Based on Model-Derived Thresholds for Backscatter Intensity Decay Rates With Incidence Angle

Authors: Madeleine Östersund, Anthony Paul Doulgeris, Johannes Lohse
Affiliations: UiT The Arctic University of Norway
Mapping sea ice and open water from SAR is crucial to support the safety of shipping and offshore operations in the Arctic, data assimilation into numerical models for improved sea ice forecasting, and long-term environmental monitoring. However, varying ocean surface roughness due to changing wind conditions leads to highly variable radar backscatter from open water, which complicates automated classification approaches. Furthermore, the SAR backscatter intensity decreases with increasing incidence angle (IA) across the swath, which is commonly called the “IA effect” in SAR imagery. With backscatter intensity given in decibel (dB), this decrease is often approximated with a linear function, and different surface types (in our case, open water as well as different ice types) have different decay rates. In the literature, these decay rates are often referred to as “IA slopes”. In contrast to backscatter intensity, IA slopes for C-band SAR have been found to be much more consistent for sea ice and water, and several studies have reported generally steeper slopes for water compared to ice (Mäkynen & Karvonen, 2017; Lohse et al., 2020). Therefore, in this work, we use a model-based approach to explore the potential ranges of IA slopes for sea ice and open water in order to constrain theoretical minimum and maximum slope values and to investigate whether the slope parameters can be used for automated labeling of surface types. Most commonly, automated ice-versus-water mapping is performed using supervised classification algorithms, which require labeled training data. In particular, convolutional neural networks, which currently provide the best results, require large training sets. These labels are often derived from operational ice charts provided by national ice services. However, while useful, the manually drawn ice charts are not suitable to provide ice-water labels with spatial resolution at pixel-level detail.
Hence, here we suggest using an unsupervised segmentation approach that was first introduced by Cristea et al. (2020). The algorithm accounts for the class-dependent IA effect and returns the segmented image together with IA slopes, intercepts, and covariance matrices for each segment. Using the previous findings of generally steeper slopes for open water compared to all sea ice types, the authors then demonstrated the potential of the segment-wise IA slopes for labelling individual segments as either sea ice or open water. We now use a model-based approach to determine theoretical minimum and maximum IA slopes for ice and water to find a generic threshold which can be applied to segmentation results. Using literature values for typical water and ice surface parameters (roughness, permittivity), we explore different backscatter models to investigate IA–backscatter relationships. Combining scattering models with observations from Sentinel-1 scenes, we aim to find a theoretically derived threshold separating sea ice and open water based on their IA slopes. The threshold is then applied to assign class labels (sea ice and open water) to the results from unsupervised segmentation. Finally, the quality of our derived ice/water maps is assessed by comparing them to automated classification products from supervised state-of-the-art methods as well as to example images labelled through manual expert analysis of the data.
References:
Cristea, A., Van Houtte, J., Doulgeris, A.P. (2020) Integrating incidence angle dependencies into the clustering-based segmentation of SAR images, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13, pp. 2925-2939. doi: 10.1109/JSTARS.2020.2993067
Lohse, J., Doulgeris, A.P., Dierking, W. (2020) Mapping sea-ice types from Sentinel-1 considering the surface-type dependent effect of incidence angle, Annals of Glaciology, 61(83), pp. 260-270. doi: 10.1017/aog.2020.45
Mäkynen, M. & Karvonen, J. (2017) Incidence angle dependence of first-year sea ice backscattering coefficient in Sentinel-1 SAR imagery over the Kara Sea, IEEE Transactions on Geoscience and Remote Sensing, 55(11), pp. 6170-6181. doi: 10.1109/TGRS.2017.2721981
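The core of the labelling idea, fitting a linear IA decay in dB per segment and thresholding the slope, can be sketched as follows. The threshold value below is purely illustrative and not the theoretically derived value the abstract sets out to find; the convention that steeper (more negative) slopes indicate open water follows the cited literature.

```python
import numpy as np

def ia_slope_db(incidence_deg, sigma0_db):
    """Linear fit sigma0[dB] = intercept + slope * IA[deg] for one segment."""
    slope, intercept = np.polyfit(incidence_deg, sigma0_db, 1)
    return slope, intercept

def label_segment(slope, threshold=-0.25):
    """Label a segment by its IA slope: decay steeper than the threshold
    (i.e. a more negative slope) is taken as open water, otherwise sea ice.
    The threshold here is an arbitrary illustrative value in dB/deg."""
    return "open water" if slope < threshold else "sea ice"
```

In a real workflow the slope and intercept would come directly from the IA-aware segmentation, and the threshold from the scattering-model analysis described above.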
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Sea Ice Concentration Retrievals Using Sentinel-3's MWR

Authors: Connor Nelson, Julienne Stroeve, Thomas Lavergne, Jack Landy, Fabrizio Baordo, Michel Tsamados
Affiliations: Centre for Polar Observation and Modelling, Department of Earth Sciences, University College London, Centre for Earth Observation Science, Clayton H. Riddell Faculty of Environment, Earth and Resources, University of Manitoba, National Snow and Ice Data Center, University of Colorado Boulder, Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Research and Development Department, Norwegian Meteorological Institute, Department of Physics and Technology, UiT The Arctic University of Norway, Danish Meteorological Institute
Accurate sea ice concentration (SIC) estimates are critical for climate monitoring and modelling. Our recent work introduces a new SIC product derived from Sentinel-3's dual-frequency (23.8 GHz and 36.5 GHz) microwave radiometer (MWR) and enhanced by its onboard altimeter (SRAL). By adapting and advancing the NASA bootstrap algorithm, including the use of dynamic open-water and closed-ice tie points and a brightness temperature atmospheric correction scheme based on a radiative transfer model, we generate along-track SIC estimates and optimally interpolate them to produce daily near-pan-Arctic SIC maps. This approach achieves monthly root mean square deviations (RMSD) ranging from 5% to 6% when compared to OSI SAF's Sea Ice Concentration Climate Data Record (CDR) during the 2018/19 Arctic winter–spring season. Our SIC product demonstrates robust performance despite the sensitivity of the MWR's 23.8 GHz channel to atmospheric water vapour. With many satellites carrying radiometers suitable for SIC retrieval nearing or exceeding their intended lifetimes, demonstrating Sentinel-3's capability to mitigate potential data gaps is of great significance. Furthermore, the coincident nature of Sentinel-3's observations from its relatively large suite of instruments provides new opportunities to integrate SIC estimates with other sea ice variables, enhancing multi-sensor data synergy. We explored the integration of our along-track-derived SIC into the surface classification component of the Sentinel-3 sea ice thickness processing chain. Results showed a significant increase in the number of classified floes, up to 50% across a single month when including the marginal ice zone, compared to the interpolated interim climate data record (ICDR) SIC used in the Level 2 sea ice thematic processing chain.
Therefore, our product may not only help to improve our understanding of sea ice concentration but also those additional essential climate variables (ECVs) retrieved using Sentinel-3 that consider the local sea ice concentration in their derivation, improving overall climate monitoring capabilities.
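The tie-point idea at the heart of bootstrap-style SIC retrieval reduces to a linear interpolation of the observed brightness temperature between the open-water and closed-ice reference values. The sketch below shows only that core step; the dynamic tie-point estimation, dual-frequency combination and atmospheric correction described in the abstract are omitted.

```python
import numpy as np

def sic_from_tiepoints(tb, tb_water, tb_ice):
    """Sea ice concentration from a brightness temperature and two tie points.

    tb:       observed brightness temperature [K]
    tb_water: open-water tie point [K]
    tb_ice:   closed-ice (100% SIC) tie point [K]

    Linearly interpolates between the tie points and clips to [0, 1].
    """
    sic = (tb - tb_water) / (tb_ice - tb_water)
    return np.clip(sic, 0.0, 1.0)
```

For example, an observation exactly halfway between the water and ice tie points yields a SIC of 0.5; observations beyond either tie point saturate at 0 or 1.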
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Earth Explorer 12 Candidate Mission CryoRad: Innovations in Sea Ice Observations

Authors: Lars Kaleschke, Dr Laurent Bertino, Dr. Marco Brogioni, Philip Browne, Matthias Drusch, Stefan Hendricks, Dr. Ken Jezek, Professor Joel Johnson, Dr. Marion Leduc-Leballeur, Giovanni Macelloni, Ghislain Picard, Rasmus T. Tonboe, Xiangshan Tian-Kunze
Affiliations: Alfred-Wegener-Institut Helmholtz-Zentrum für Polar und Meeresforschung, Nansen Environmental and Remote Sensing Center, National Research Council Institute for Applied Physics “Nello Carrara”, European Centre for Medium-Range Weather Forecasts, Earth Surfaces and Interior Section (EOP-SME), Earth and Mission Science Division, European Space Agency/ESTEC, Byrd Polar and Climate Research Center, School of Earth Sciences of The Ohio State University, ElectroScience Laboratory, The Ohio State University, Institut des Géosciences de l’Environnement (IGE), CNRS/UGA, Technical University of Denmark
CryoRad consists of a single satellite equipped with a broadband low-frequency microwave radiometer operating in the range 0.4 to 2 GHz with continuous frequency scanning. The CryoRad mission aims to produce key scientific data for advancing cryosphere studies. It will provide temperature profiles of the Antarctic and Greenland ice sheets, extending from surface to base, a dataset previously available only through limited borehole observations. The mission will also address uncertainties in sea surface salinity (SSS) measurements in cold waters, overcoming limitations of current L-band radiometers. Furthermore, CryoRad will enhance estimates of sea ice thickness and deliver the first spaceborne observations of sea ice salinity. In this presentation, we will discuss the potential impact of CryoRad measurements in determining sea ice properties and their benefits for ocean and climate modeling. Moreover, we will provide information on uncertainties in determining sea ice parameters, relying on simulations due to the limited availability of suitable measurements. Models of varying complexity, from one-dimensional thermodynamic sea-ice models to coupled ocean-sea-ice systems, will be used. The analysis will utilize new data from ECMWF’s ORAS6 ocean reanalysis, which includes a multicategory sea-ice model with prognostic salinity, as input for brightness temperature simulations. ORAS6 will also serve as a baseline to examine the spatio-temporal co-variability of key geophysical parameters (e.g., sea ice concentration, thickness, and salinity) and their relationship to simulated brightness temperature observations. Special attention will be given to the new sea ice salinity forecast parameter in both hemispheres.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Improved applications of sea ice mass balance buoys for polar climate studies, remote sensing and modelling

Authors: Sebastian Gerland, Evgenii Salganik, Mats A. Granskog, Jack Landy, Dmitry V. Divine, Christian Haas, Mario Hoppmann, Polona Itkin, Marcel Nicolaus, Anna Nikolopoulos, Don Perovich, Chris Polashenski, Andreas Preusser, Ian A. Raphael
Affiliations: Norwegian Polar Institute, UiT The Arctic University of Norway, Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Thayer School of Engineering at Dartmouth College, USACE-CRREL, German Aerospace Center (DLR)
Information on sea ice growth and decay processes is crucial for understanding polar climate processes and for observing and understanding how the annual cycle of sea ice is changing over longer time periods. Autonomous sea ice mass balance buoys (IMBs) are unique tools to quantify the evolution of sea ice in polar regions in detail at single locations on sea ice floes, and to provide data beyond the periods of ship-based or airborne field campaigns in a cost-efficient manner. After installation in sea ice, such buoys measure changes in snow and sea ice thickness, as well as temperatures in air, snow, ice and ocean. From these measurements, the full sea ice mass balance can be derived. In most cases the measured data are transmitted via satellite in near-real time. This presentation will (a) give a brief review of existing IMB systems, their measurement principles including ultrasonic sensors, thermistor chains and other complementary sensors, as well as installation/deployment protocols; (b) introduce raw data processing methods, data availability, data formats and data portals; (c) give examples of how IMB data are currently used (e.g., findings from recent integrations of IMB and satellite remote sensing data, as well as use of synergistic data collected from airborne surveys, in situ observations and IMB time series as conducted during the MOSAiC expedition in the Arctic Ocean), and depict how their potential can be further exploited; and (d) discuss limitations, logistical challenges, robustness of current IMB systems, as well as data biases and uncertainties. During two international IMB workshops arranged in 2016 and 2024, respectively, IMB users have shared and discussed results and experiences with different IMB systems.
Such meetings are helpful for agreeing on consistent deployment protocols and data formats, coordinating activities with the International Arctic Buoy Programme (IABP) and the International Programme for Antarctic Buoys (IPAB), improving IMB data availability, identifying shortcomings and discussing improvements, increasing data usage, as well as exchanging information about deployment plans and opportunities, such as during the coming International Polar Year (IPY) 2032-2033. Beyond a brief summary of the recent workshops, we will also discuss (potential) future developments, e.g., modifying setups with added sensors to derive lateral spatial variability on local scales and sea ice freeboard, improving IMB deployment strategies, aiming at IMB installations in regions or seasons with poor data coverage, the possibility of “remote” installations (i.e. deployment from air, or release from a mooring or AUV), developments towards more environmentally friendly systems and procedures, including options for IMB recovery, and enhanced coordination, integration and synergy with existing or developing programs such as the pan-Arctic DBO (Distributed Biological Observatory) and Synoptic Arctic Surveys (SAS).
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: The Level-2 Product Algorithm Development (L2PAD) Project : Preparing Open-Source Algorithms and Software for the Copernicus Imaging Microwave Radiometer (CIMR) Mission

Authors: Thomas Lavergne, Signe Aaboe, Emily Jane Down, Suman Singha, Hoyeon Shi, Marcus Huntemann, Gunnar Spreen, Xiangshan Tian-Kunze, Lars Kaleschke, Michele Scagliola, Pierre Féménias, Filomena Catapano, Craig Donlon
Affiliations: Norwegian Meteorological Institute, Danish Meteorological Institute, University of Bremen, Alfred Wegener Institute, ESA ESRIN, ESA ESTEC
The Copernicus Imaging Microwave Radiometer (CIMR) mission is one of the six Copernicus Expansion Missions currently being implemented by the European Space Agency and the European Commission. CIMR will provide high-spatial-resolution microwave imaging radiometry measurements and derived products with global coverage and sub-daily revisit in the polar regions and adjacent seas to address Copernicus user needs. The primary instrument is a conically scanning microwave radiometer with frequency bands at L-, C-, X-, K- and Ka-band, with effective spatial resolutions of < 60 km (L-band), ≤ 15 km (C- and X-band) and < 5 km (both K- and Ka-band, with a goal of 4 km for Ka). This solution allows many Level-2 geophysical products to be derived in several Earth domains including sea ice, the polar and global ocean, as well as several variables over terrestrial surfaces and in the atmosphere. In addition, synergies with other upcoming missions such as Metop-SG (MWI and SCA), CRISTAL, and ROSE-L will offer new opportunities. Two satellites are being implemented (to be launched sequentially) with a first launch anticipated in 2029/30. The CIMR Level-2 Product Algorithm Development project (CIMR L2PAD) is a 4-year project initiated by ESA. Kicked off in November 2023, CIMR L2PAD aims to a) develop state-of-the-art algorithms for all Terrestrial and Polar Oceans (incl. Sea Ice) products, b) develop the open-source Level-2 Ground-Processor Prototype (GPP), c) conduct pre-flight performance assessment activities, and d) liaise with the user community (including but not limited to the Copernicus Services) to ensure effective user preparedness and uptake. The project is implemented by a consortium of 12 partner institutions, under the leadership of the Norwegian Meteorological Institute.
This presentation introduces the CIMR mission and the CIMR L2PAD activities, with a focus on its upcoming portfolio of sea ice products: concentration/edge, thin-ice thickness, drift, ice type, snow depth, and ice surface temperature. We look forward to interacting with (future) users as we start building an open community of polar experts and users around the CIMR mission.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Pan-Arctic Melt Pond Fractions and Sea Ice Albedo Retrieved from 18 Years of Optical Satellite Observations using a Constrained Physical Forward Model

Authors: Maximilian Ringel, Dr. Hannah Niehaus, Dr. Gunnar Spreen
Affiliations: Institute of Environmental Physics, University of Bremen
Melt ponds significantly lower the albedo of sea ice, causing an increase in absorption of solar radiation within the sea ice-ocean system which leads to changes in the Arctic energy budget as well as sea ice mass balance. Therefore, melt ponds are key components of the positive sea ice-albedo feedback and contribute to the accelerated warming of the Arctic. Understanding their seasonal and regional variability is crucial for assessing their impact on Arctic amplification and improving climate models. In this study we show surface melt pond fractions (MPF) along with sea ice albedo for the cloud-free Arctic for the periods 2002-2011 and 2017-2024 derived from optical satellite data of the instruments MERIS (ENVISAT) and OLCI (Sentinel-3A/3B), respectively, using the Melt Pond Detection 2 (MPD2) algorithm. MPD2 is a fully physical and constrained retrieval, which derives surface fractions of sea ice, melt ponds and ocean together with spectral albedo. The MPD2 cloud masking routine uses additional channels in the near and thermal infrared spectrum from the AATSR (ENVISAT) and SLSTR (Sentinel-3A/3B) instruments. Analysis of the OLCI 2017-2023 time series shows significant seasonal and regional variability in MPF. The marginal Arctic seas exhibit the highest MPF variability and values, reaching up to 50% in the Canadian Archipelago and the Beaufort Sea. The central Arctic shows lower variability with MPF values peaking around 30%. Air temperature and sea ice surface topography are shown to be key factors influencing melt pond formation and evolution. Rough sea ice topography favors earlier melt pond formation with a low peak in MPF at the melt-season start and a more pronounced peak towards the season’s end. In contrast, flat sea ice delays pond formation, with MPF rising sharply mid-season before also declining rapidly at the end of the season. 
The MERIS 2002-2011 time series of MPF and sea ice albedo is currently being analysed to investigate changes in trends and variability within the last two decades using both time series.
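MPD2 itself is a constrained physical forward model; the underlying idea it inverts — an observed reflectance as a linear mixture of ice, melt-pond and ocean endmember spectra, with surface fractions summing to one — can be sketched with a brute-force grid search (the endmember reflectances below are made-up placeholders, not MPD2's actual values or bands):

```python
def unmix_fractions(obs, endmembers, step=0.01):
    """Find surface fractions (ice, pond, ocean) summing to 1 that best
    reproduce the observed reflectances in each band, by exhaustive
    search on a fraction grid. A toy inversion, not the MPD2 retrieval.

    obs: per-band observed reflectances; endmembers: three per-band
    reflectance tuples (ice, pond, ocean) -- placeholder values here.
    """
    best, best_err = None, float("inf")
    n = int(round(1.0 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            f = (i * step, j * step, max(1.0 - (i + j) * step, 0.0))
            err = 0.0
            for b in range(len(obs)):
                model = sum(f[k] * endmembers[k][b] for k in range(3))
                err += (model - obs[b]) ** 2
            if err < best_err:
                best, best_err = f, err
    return best
```

The sum-to-one constraint is what makes the problem well-posed with only a few bands; a production retrieval would solve the constrained inversion analytically or by optimisation rather than grid search.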
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Observational Sea Ice Area uncertainties and their implications for investigating the 2015/2016 Antarctic sea ice decline

Authors: Andreas Wernecke, Dirk Notz
Affiliations: University of Hamburg
The net Arctic and Antarctic sea-ice area (SIA) and sea-ice extent (SIE) are routinely estimated from passive microwave sea-ice concentration (SIC) satellite products. The SIA record is extremely well suited for monitoring changes in the climate system since, with a record length of around 45 years, it is able to cover decadal climate variability. This makes it particularly valuable for climate model evaluation. However, a truly useful evaluation target in model assessments requires an uncertainty estimate. Taking observational uncertainties into account in model evaluations is necessary to avoid penalizing models for not recreating observational errors. At the same time, uncertainty estimates are essential to put the drastic decline in Antarctic SIA around 2015/2016 into context. Furthermore, observational uncertainties can obscure links between the SIA and the climate system as a whole, a factor that is not well studied. We have retrieved such uncertainties for SIA estimates from the OSI SAF SIC Climate Data Record by incorporating the spatial and temporal SIC error correlations into a stochastic model. With this Monte Carlo approach we can derive dynamic single-product observational SIA uncertainties and the corresponding probability distributions. The monthly-mean SIA uncertainties are about 100 000 km² to 140 000 km², and they (1) are largely consistent with variability in inter-product comparisons, (2) are relatively stable over the record, and (3) have a small seasonal cycle. We show the potential these uncertainties have to obscure links between the global climate system and both regional and net Antarctic SIA. Investigating the causes of the drastic 2015/2016 SIA decline is affected by uncertainties not only through their ability to disguise correlations, but also in the identification of potential changes in the statistical properties of the SIA.
Those properties, like variability and autocorrelation, currently attract attention because they could indicate a transition into a ‘new state’. Learning from the previous analysis, and considering the limited length of the time series since 2016, we illustrate how a Gaussian Process Model can be used to overcome some challenges in identifying the driving factors of the Antarctic sea ice decline. In this way we give a first systematic overview of single-product SIA uncertainty. Besides being a crucial step towards a higher maturity level of this prominent climate variable, we show the impact these uncertainties have on investigations of the polar system in general, and of the decline of Antarctic sea ice in the last decade in particular.
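A minimal sketch of the Monte Carlo idea — propagating SIC uncertainty into a spread of SIA values — assuming, for simplicity, a single fully correlated error per realisation rather than the spatio-temporally correlated noise model used in the study (grid, cell area and error magnitude are illustrative):

```python
import random

def sia_ensemble(sic, cell_area_km2, sigma, n=500, seed=42):
    """Monte Carlo spread of sea-ice area from uncertain SIC.

    sic: list of cell concentrations in [0, 1]; sigma: 1-sigma SIC error.
    Each realisation draws ONE shared Gaussian error for all cells, a
    crude stand-in for spatially correlated errors; returns (mean, std)
    of the resulting SIA ensemble in km^2.
    """
    rng = random.Random(seed)
    areas = []
    for _ in range(n):
        eps = rng.gauss(0.0, sigma)  # shared (i.e. fully correlated) error
        a = sum(min(max(c + eps, 0.0), 1.0) * cell_area_km2 for c in sic)
        areas.append(a)
    mean = sum(areas) / n
    std = (sum((a - mean) ** 2 for a in areas) / n) ** 0.5
    return mean, std
```

Because fully correlated errors do not average out over the grid, this bounds the uncertainty from above; independent per-cell errors would largely cancel in the area sum, which is why the correlation structure is central to realistic SIA uncertainties.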
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Altimetric Sea Ice Measurements: Performances of the new Sea Ice thematic products of S3A and S3B

Authors: Sara Fleury, Tom Megain, Stefan Hendricks, Fanny Piras, Jérémie Aublanc, Pierre Femenias
Affiliations: LEGOS/CNRS, CLS, AWI, ESA/ESRIN
The main objective of the Sentinel-3 constellation over sea ice is to provide accurate measurements of the polar sea surface height, the sea-ice radar freeboard, and its thickness. Compared to previous missions carrying conventional pulse-limited altimeters, Sentinel-3A and B measure the surface topography with an enhanced spatial resolution, thanks to the on-board SAR Radar ALtimeter (SRAL), which exploits delay-Doppler capabilities. To further improve the performance of the Sentinel-3 Altimetry ‘LAND’ products, ESA has developed dedicated and specialized delay-Doppler and Level-2 processing chains for the different types of surfaces (hydrological surfaces, Land Ice and Sea Ice). Over sea ice, the IPF (Instrument Processing Facility) includes dedicated algorithms, in particular Hamming windowing and zero-padding, which allow significant improvements: thanks to the Hamming window, the waveforms measured over specular surfaces are cleaned of spurious energy spread by the azimuth impulse response, while zero-padding provides a better sampling of the radar waveforms, notably valuable for the specular energy returns that are often encountered in sea-ice regions. New sea-ice concentration and type variables have been implemented in the IPF from the OSI SAF OSI-430a and OSI-403d products. The full mission reprocessing was performed in 2023. The objective of this presentation is to show the progress, and indeed the excellent performance, of the sea-ice thematic products with respect to the former ‘LAND’ products, for Arctic and Antarctic sea ice. The performance of the IPF over the whole lifespan of the two missions S3A and S3B will be assessed, with a strong focus on the radar freeboard. Inter-comparisons with CryoSat-2 will also be performed (time series, gridded maps), showing that the three missions now provide similar performance in the estimated freeboard. Comparisons to external in-situ datasets such as moorings will also be presented.
Finally, the future developments considered to further improve the quality of this thematic chain will be proposed.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Potential of high-resolution Grazing Angle GNSS-reflectometry to derive sea surface heights and sea ice freeboard in the Arctic Ocean

Authors: Felix Müller, Robert Ricker, Kevin Halsall, Matthieu Talpe, Denise Dettmering, Philip Jales, Florian Seitz, Leonardo De Laurentiis, Alessandro Di Bella, Jerome Bouffard
Affiliations: Technical University of Munich (DGFI-TUM), Norwegian Research Centre (NORCE), Telespazio UK, Spire Global Luxembourg S.a.r.l., ESA-ESRIN
In the Arctic Ocean, sea surface heights (SSH) are usually measured by a few tide gauge stations on the coasts and with the help of satellite altimeter missions, which are also essential to monitor sea ice freeboard and thickness. However, there are only a few altimeter missions that cover polar latitudes above 81°N. Currently, these are only ESA's Earth Explorer radar altimeter mission CryoSat-2 and NASA's laser altimeter ICESat-2. A sudden failure or outage of these missions would lead to significant data gaps in the central Arctic Ocean. Therefore, additional possibilities for retrieving sea level and freeboard based on remote sensing techniques are of great importance. One of these techniques is so-called Grazing Angle GNSS-Reflectometry (GA GNSS-R), which exploits surface reflections of Global Navigation Satellite System (GNSS) signals to collect elevation information or surface roughness characterizations. In this process, both the directly emitted (line-of-sight) signal and the surface-reflected signal are collected by nanosatellites and exploited to determine the delay of the measured phase relative to a modelled phase, which is then used to derive the height of the reflecting surface. The height determination works as long as the phase information remains coherent and right-hand circularly polarized, at small angles to the Earth's surface (i.e. grazing angles) between 5 and 30°. Reflections from calm water surfaces and sea ice are particularly strong, while rougher surfaces, such as the open ocean, result in weaker signals. This provides the possibility, besides SSH determination, to develop algorithms for the detection of open water spots like leads or polynyas and consequently to estimate the sea ice freeboard. The leading data provider for GA GNSS-R is Spire Global Inc., which has operated tens of nanosatellites equipped with GNSS receivers and has provided multi-GNSS amplitude and phase observations at 50 Hz since 2020.
In the current ESA-funded R&D project “Investigating the potential of Spire grazing angle GNSS-R data to infer Arctic sea ice freeboard”, part of ESA's wider Earthnet Data Assessment Project (EDAP), Spire observations are used and analysed regarding the possibility of detecting leads/polynyas at 50 Hz and of comparing GA GNSS-R with existing satellite-altimetry-derived sea surface heights during winter months. Open water elevations from CryoSat-2 and Sentinel-3, as well as optical and imaging radar techniques (e.g. Sentinel-1/2), are used to evaluate the potential of grazing angle observations and to assess the possibility of determining the sea ice freeboard at spatial scales of 100s of meters.
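The geometry that links the reflected-minus-direct path delay to the height of the reflecting surface can be sketched with the textbook flat-surface approximation, delay = 2·h·sin(elevation) (a simplification for illustration only, not Spire's actual processing, which works on coherent phase delays and accounts for Earth curvature and tropospheric effects):

```python
import math

def height_from_delay(delay_m, elevation_deg):
    """Flat-surface specular geometry: the reflected signal travels an
    extra path of 2*h*sin(elevation) relative to the direct signal, so
    the receiver height h above the reflecting surface follows from the
    measured excess delay. At grazing angles (5-30 degrees) sin(e) is
    small, so small delays map to large heights -- hence the need for
    coherent phase-level delay measurements.
    """
    return delay_m / (2.0 * math.sin(math.radians(elevation_deg)))
```

Differencing such heights between sea ice and nearby open-water (lead/polynya) reflections is, schematically, how a freeboard estimate would be formed.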
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Deriving Continuous Sea Ice Trajectories from Synthetic Aperture Radar Data

Authors: Sean Chua, Anton Korosov
Affiliations: Nansen Environmental and Remote Sensing Center
Sea ice drift plays a fundamental role in polar dynamics, with implications for the distribution of sea ice thickness, ocean-ice-atmosphere interactions, and maritime navigation safety. While numerous methods exist for tracking sea ice movement, many current approaches rely on image pair analysis, which limits our ability to maintain continuous trajectories of specific ice features over multiple timesteps. Through the European Space Agency's (ESA) Arktalas project, we present an innovative algorithm that tracks continuous sea ice trajectories across sequential Sentinel-1 SAR images. Our approach consists of two main steps: a structured grid implementation of the ORB (Oriented FAST and Rotated BRIEF) feature detector and a pattern matching system. The grid framework ensures a consistent 3-5 km spatial resolution while minimizing the need for interpolation of drift due to gaps in coverage. The rotation-invariant algorithm leverages dual-polarized SAR data (HH and HV), with pattern matching refining drift vectors by considering the spatial context around each feature and fine-tuning the location of the matched feature. These refinements enable persistent tracking until either sea ice conditions change significantly or feature correlation falls below threshold values. A key technical advancement is our database structure, which efficiently stores both drift vectors and pattern matching templates. This allows rapid analysis of large-scale drift patterns and individual floe trajectories, supporting both operational and research applications. The database and algorithm design are computationally efficient and can run in near-real time as an operational product. Validation against Arctic drifting buoy data and existing products demonstrates improved accuracy in tracking persistent features.
The project advances sea ice monitoring capabilities through three main contributions: the ability to calculate continuous trajectories, improved spatial and temporal coverage alongside a computationally efficient implementation. These improvements directly support sea ice model development and validation, while providing valuable data for maritime operations and climate research.
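A minimal sketch of how per-image-pair drift vectors can be chained into continuous trajectories (a simplified stand-in for the database-backed tracking described above: features are terminated once lost, and features first appearing after the initial pair are ignored here):

```python
def chain_trajectories(pairwise_drifts):
    """Accumulate per-pair (dx, dy) displacements into positions.

    pairwise_drifts: list of dicts, one per consecutive image pair,
    mapping a feature id to its displacement (e.g. in km). A feature
    missing from a pair (correlation below threshold, ice broken up)
    has its trajectory frozen and is not resumed -- mirroring the
    termination criteria described in the abstract.
    """
    trajs = {fid: [(0.0, 0.0)] for fid in pairwise_drifts[0]}
    alive = set(trajs)
    for step in pairwise_drifts:
        for fid in list(alive):
            if fid not in step:
                alive.discard(fid)  # feature lost: terminate trajectory
                continue
            dx, dy = step[fid]
            x, y = trajs[fid][-1]
            trajs[fid].append((x + dx, y + dy))
    return trajs
```

Storing the cumulative positions per feature id is the essential difference from plain image-pair drift products, which only ever hold one displacement step.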
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Remotely Sensed Interferometric Synthetic Aperture Radar (InSAR) Observations of Thermal Strain in Arctic Sea Ice

Authors: Larson Kaidel, Christopher Polashenski
Affiliations: Dartmouth College
Sea ice in the Arctic Ocean is instrumental in the complex, fragile system of the Arctic and offers insights into the impact of climate change. As the sea ice cover evolves with the changing climate, its dynamics and deformation have changed. Quantifying material properties of sea ice at a range of scales (μm-km) is necessary to represent its role in the system. One such material property is the thermal strain coefficient, which influences sea ice deformation via thermal fractures. We seek to increase knowledge of sea ice behavior at larger scales (km) relevant to the Arctic system with remote observations of thermal strain using interferometric synthetic aperture radar (InSAR). Prior in situ strain observations at km scales suggest different thermal strain behaviors compared to laboratory results at cm scales. Sea ice, as it occurs in nature, includes brine networks with transport mechanisms not present in most laboratory samples. Some thermal strain behavior is governed by temperature, salinity, and mass in the surface layer (10 cm), consistent with empirical results. However, multiple factors not present in many laboratory samples impact thermal strain behavior in most natural cases. In particular, these include variable mass transfer, void formation, internal pressure transfer, and surface fracturing. Theoretically, combinations of these factors allow a range of thermal strain behavior from that identical to pure ice to a completely reversed expansion under cooling conditions. Each additional factor differs between ice types: fresh ice and multiyear, first-year, and young sea ice. As the latter two become more prevalent in areas of the Arctic, observing differences and describing the evolution of thermal strain behavior over the winter season is critical. These differences could help explain observed trends in increased sea ice deformation and motion. 
We used remote sensing interferometric synthetic aperture radar (InSAR) techniques provided by the Sentinel-1 mission and temperatures measured in situ or from the fifth-generation ECMWF Reanalysis (ERA5) to evaluate the thermal-strain relationship of ice. These observations included first-year, multiyear, and freshwater ice at 1-22 km scales and span the majority of the frozen season over multiple years. The results are presented in the context of existing theories and contradicting laboratory findings. Where available theories cannot explain observed behavior, new processes are discussed, some present only at geophysical scales. Expansion is never observed in first-year sea ice, even under a net temperature increase. We identified distinct behavior at multiple points throughout the growth season, highlighting the difference between young ice, approximately thinner than 30 cm, and first-year ice. Unexpectedly, we found disagreement between our observations of multiyear and fresh ice and laboratory behavior. The implications of differing behavior between ice types and the evolution of behavior across the growth season are evaluated as a potential contribution to trends in accelerated sea ice motion and increased deformation. Future work will benefit from longer-wavelength (L-band) satellite missions (NISAR) or a shorter temporal baseline (Sentinel-1C). Different wavelengths provide different penetration depths, which may improve the understanding of temperature gradients through the ice thickness. Shorter temporal baselines offer a higher maximum deformation observable by interferometry and potentially provide late-winter measurements before melt onset. Additionally, we discuss the benefit of in situ temperature measurements paired with remotely sensed strain observations. We provide an outline to measure thermal strain from other remote sensing data sources, maintaining the ability to intercompare with results from this work.
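Two standard relations underpin such an analysis: the conversion from interferometric phase to line-of-sight displacement, d = phi * lambda / (4*pi), and a linear thermal-strain law, epsilon = alpha * dT. A sketch (the strain coefficient value below is purely hypothetical, and signs are illustrative):

```python
import math

def los_displacement(phase_rad, wavelength_m):
    """Line-of-sight displacement from unwrapped interferometric phase.
    The factor 4*pi reflects the two-way travel path of the radar signal.
    Sentinel-1 C-band wavelength is about 0.056 m.
    """
    return phase_rad * wavelength_m / (4.0 * math.pi)

def thermal_strain(alpha_per_K, delta_T):
    """Linear thermal strain epsilon = alpha * dT. alpha is the thermal
    strain coefficient (1/K); any value used here is hypothetical, since
    constraining alpha for natural sea ice is the point of the study.
    """
    return alpha_per_K * delta_T
```

Comparing the strain implied by InSAR displacements over a known baseline length with alpha * dT from in situ or ERA5 temperatures is, schematically, how an effective thermal strain coefficient can be inferred at km scales.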
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Ice Drift Correction of SAR Imagery for Efficient Ice Navigation

Authors: Jonathan Bahlmann, Jakob Bünger, Joshua Dawson, Dr. Thomas Krumpen, Dr. Niklas Neckel, Dr. Lasse Rabenstein, Andreas Winter
Affiliations: Drift Noise Polar Services, Alfred Wegener Institute for Polar and Marine Research (AWI), Reederei F. Laeisz
Images from polar-orbiting Synthetic Aperture Radar (SAR) satellites are a vital resource for navigation in ice-covered waters as they provide the spatial coverage needed for strategic route planning that other information systems on board ice-going ships (e.g. marine radars) lack. Current openly available SAR missions provide data at temporal resolutions typically between one and five days. Sea ice drift can be highly variable and ice situations can change in a matter of hours, such that the temporal resolution of SAR imagery is unable to resolve the dynamic nature of drifting sea ice. A few hours after the SAR scene acquisition the image is already outdated and might not agree with what a marine radar will show in the area around the ship. Considerable time is needed to manually locate the correct position of the ship on the outdated satellite image - time navigators on the bridge do not have. To close the gap between consecutive SAR images, data that is available in real time as well as forecast data can be used to automatically correct the satellite scene for the ice drift. During the recent trans-Arctic expedition of the research icebreaker Polarstern different input datasets were ingested into an operational image displacement system and subsequently ranked according to their potential usefulness for the drift correction. We also found that even basic linear image translation was able to increase the situational awareness of the navigation and science crew significantly, allowing them to navigate more efficiently and to effectively save fuel.
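The basic linear translation mentioned above can be sketched as follows (a simplified stand-in for the operational displacement system; coordinate frame, units and names are illustrative):

```python
def drift_corrected_position(pos_km, drift_kmh, hours_elapsed):
    """Shift a position taken from an outdated SAR scene by a constant
    drift velocity to approximate its location at the current time.

    pos_km: (x, y) position in the scene, km; drift_kmh: (u, v) ice
    drift velocity in km/h from real-time or forecast data;
    hours_elapsed: time since scene acquisition.
    """
    x, y = pos_km
    u, v = drift_kmh
    return (x + u * hours_elapsed, y + v * hours_elapsed)
```

Applying the same translation to every pixel shifts the whole scene rigidly; the abstract notes that even this crude correction measurably improved situational awareness on the bridge.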
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Sea Ice Motion Estimation in the Weddell Sea from Optical Flow Analysis

Authors: Yanbing Luo, Christian Haas, Dr. Lars Kaleschke
Affiliations: Alfred Wegener Institute
Antarctic sea ice is an important component of the Southern Ocean climate system and has strongly decreased since 2016. Sea ice dynamic processes affect the total ice mass balance in numerous ways, including ice thickness redistribution due to convergence and ice production in divergent regions. As a crucial property for sea ice kinematics, sea ice motion has been monitored with pattern matching approaches using microwave remote sensing measurements. However, despite the availability of operational sea ice motion products that provide pan-Antarctic coverage at daily frequencies with proper accuracy, information is limited near the coast. This hampers comprehensive analysis of ice dynamics in coastal regions. The aim of this study is thus to investigate the potential of an alternative scheme for daily sea ice motion estimation from optical flow, particularly in coastal areas, to complement existing products. We also analyse improvements arising from processing swath-to-swath (S2S) data instead of gridded fields. The implementation is conducted using active and passive microwave data in the Weddell Sea. Following the idea of S2S, we extract overlapping areas from swath pairs, particularly those covering the coast. In recent work, the S2S approach has been shown to reduce the impact of the time-“aperture” averaging typical of gridded pan-Antarctic data fields. We use Recurrent All-Pairs Field Transforms (RAFT), a state-of-the-art deep network architecture for optical flow based on correlations of multi-scale features for all pairs of pixels, to compute the optical flow between two overlapping swaths. The derived motion fields are compared with buoy-derived velocities and are analysed with regard to two cases of ice dynamics near the coast: polynya events along the front of the Filchner-Ronne ice shelf and ice deformation events along the Antarctic Peninsula.
Our results show good agreement with the buoy data and represent drift patterns in the regions of interest well. The proposed optical-flow-based scheme could provide valuable ice motion information and new insights for further studies of coastal sea ice dynamics in the Weddell Sea and around Antarctica.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Compensating sea ice drift to match SAR acquisitions with in situ measurements

Authors: Leif E. B. Eriksson, Dr. Denis Demchev, Ana-Mari Petrova, Alexander Komarov
Affiliations: Chalmers University Of Technology, The Marine Research Center MSU, Environment and Climate Change Canada
Sea ice plays a crucial role in the global climate system, and there is a need both to monitor changes in the sea ice cover and to increase our understanding of interactions between ocean, sea ice and atmosphere. As sea ice covers vast, inaccessible areas with harsh environments and long periods without sunlight, satellite remote sensing provides the only viable means to collect information from all regions with sea ice. The fact that sea ice is also constantly moving with winds and currents, and changing with temperatures and deformation, makes frequent updates necessary. Methods for observation and monitoring of sea ice type, concentration, thickness, drift, etc. have been developed and refined during several decades, and new satellites with improved sensors are launched to provide better observation capabilities. However, for development of new methods, training of new models, validation of results, as well as for sampling of sea ice properties that cannot be measured with satellite sensors, in situ observations are still indispensable. Whenever in situ data are to be compared or analysed together with remote sensing data, the data sets need to be collocated in time and space. For drifting sea ice, this is a major challenge. Polar-orbiting satellites pass over the polar regions about 15 times per day, but for a given geographic location the time between satellite data acquisitions will range from hours to many days. In a few hours the sea ice can move several kilometres. We present a method that allows collocation by compensating for the sea ice drift. The method has been developed for synthetic aperture radar (SAR) data, where the relatively high spatial resolution makes drift compensation even more important. In the first step, a drift algorithm is applied to calculate the drift field between two SAR acquisitions, one before and one after the in situ observations.
The drift vectors are then scaled to lengths corresponding to the time from one of the SAR acquisitions to the time of the in situ observation. Finally, the scaled drift vectors are used to warp the sea ice in one of the SAR images to the shape and extent it would have at the time of the in situ observations. The method is limited to areas where drift vectors can be retrieved and assumes a homogeneous drift speed between the SAR image acquisitions. The method has been tested on data from two sea ice expeditions: Baltic2023 with RV Skagerak in the Gulf of Bothnia in February and March 2023, and ARTofMELT with the icebreaker Oden west of Svalbard in May and June 2023. One of the goals of the Baltic2023 expedition was to map the thickness and underwater structure of the sea ice with an autonomous underwater vehicle (AUV) and to compare these data with the above-water structure as seen by SAR and by optical and IR images captured by an unmanned aerial vehicle (UAV). During the ARTofMELT expedition, drift buoys were deployed to monitor drift and deformation, snow properties were measured to capture the onset of melting conditions, and aerial photos were taken to document the distribution and extent of ridges, leads and melt ponds. During both expeditions, SAR data were acquired by Sentinel-1 and the Radarsat Constellation Mission (RCM).
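The scaling step described above can be sketched as follows (a minimal illustration assuming homogeneous drift between the two acquisitions; the function and variable names are hypothetical, not from the authors' implementation):

```python
import numpy as np

def scale_drift_vectors(drift_xy, t_sar1, t_sar2, t_insitu):
    """Scale SAR-to-SAR drift vectors to the in situ observation time.

    Assumes a constant (homogeneous) drift speed between the two SAR
    acquisitions, as stated for the collocation method above.

    drift_xy : (N, 2) array of drift vectors (m) from SAR1 to SAR2
    t_sar1, t_sar2, t_insitu : times in a common unit (e.g. hours)
    """
    frac = (t_insitu - t_sar1) / (t_sar2 - t_sar1)
    return np.asarray(drift_xy, dtype=float) * frac

# Example: 12 km of drift over 24 h; the in situ observation is 6 h
# after SAR1, so only a quarter of the full drift has occurred.
scaled = scale_drift_vectors([[12000.0, 0.0]], 0.0, 24.0, 6.0)
```

The scaled vectors would then drive the warping step, e.g. a scattered-data resampling of the first SAR scene onto the drift-corrected positions.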
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Spatial and Temporal Distribution of Winter Arctic Polynyas (1978-2023)

Authors: Carmen Hau Man Wong, Céline Heuzé, Luisa Ickes, Lu Zhou
Affiliations: Department of Earth Sciences, University of Gothenburg, Department of Space, Earth, and Environment, Chalmers University of Technology, Institute for Marine and Atmospheric research Utrecht, Utrecht University
Polynyas, open water regions within the sea ice cover, have been observed by satellites intermittently in the Arctic region over the past few decades. Their formation is complex, requiring various drivers to precondition and trigger the opening, which then significantly influences local and regional weather. Understanding the spatial and temporal distribution of Arctic polynyas is therefore crucial for studying their impacts on climate. To date, most research has been local and short-term, focusing on the major active Arctic polynyas or on specific events. As more satellite data become available, it is timely to conduct a pan-Arctic, long-term study of this natural phenomenon, focusing primarily on distribution and trends. Here, we use available satellite sea ice products, namely sea ice concentration (SIC) and sea ice thickness (SIT) data (Nimbus-7 SMMR and DMSP SSM/I–SSMIS SIC; AMSR-E and AMSR2 SIC; SMOS and SMOS-SMAP SIT), to investigate all Arctic polynya events since 1978, in particular their locations and total numbers for each winter. The precision and robustness of the detected locations are examined through sensitivity tests, varying the concentration (30–60%) and thickness (10–30 cm) thresholds. In addition, the total areal extent and recurrence percentage of polynya events are computed. The results indicate that the most active polynya formation regions lie mainly along the coast, especially in the Eastern Laptev Sea, Eastern Kara Sea, Franz-Josef Land, Eastern Greenland, the Beaufort Sea and the Chukchi Sea. From a pan-Arctic perspective, the total extent of the polynya area increases across the observation period. To explain this trend, we perform a regional analysis in the nine regions with the most active polynya events. We find a correlation between area change and the weather parameters air temperature and wind speed in most regions with increasing polynya area extent. This implies that both thermodynamic and dynamic forcing can contribute to the opening and persistence of polynyas.
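A threshold-based detection of the kind described can be sketched as follows (illustrative only; the function name and default thresholds are assumptions chosen from the sensitivity ranges quoted above, not the authors' exact criteria):

```python
import numpy as np

def polynya_mask(sic, sit, sic_max=0.45, sit_max_m=0.2):
    """Flag polynya candidate cells within the winter ice pack: cells
    whose sea ice concentration (0-1) OR thickness (m) falls below a
    threshold. Defaults are illustrative mid-range picks from the
    sensitivity intervals above (30-60 % SIC, 10-30 cm SIT); NaN
    inputs (e.g. open ocean outside the pack) are never flagged.
    """
    sic = np.asarray(sic, dtype=float)
    sit = np.asarray(sit, dtype=float)
    return (sic < sic_max) | (sit < sit_max_m)

# Three cells: consolidated pack, low-concentration opening, thin-ice polynya
mask = polynya_mask([0.90, 0.30, 0.95], [1.2, 0.8, 0.15])
```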
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: 15 Years of Global Winter Sea ice Thickness & Volume from CryoSat-2, Sentinel-3A/B and SMOS (CS3SMOS)

Authors: Dr Stefan Hendricks, Xiangshan Tian-Kunze, Robert Ricker, Antonio De la Fuente, Raffaele Crapolicchio, Lars Kaleschke
Affiliations: Alfred Wegener Institute, Norwegian Research Center, European Space Agency, Serco Italia SpA - for European Space Agency
The unique synergy of the two ESA Earth Explorer missions CryoSat-2 and SMOS (CS2SMOS) has enabled the generation of a sea ice thickness and volume dataset for the northern hemisphere. The dataset is created within the ESA CryoSat-2/SMOS production and dissemination service (CS2SMOS-PDS), and the algorithm development is supported by the Cryosphere task of the SMOS Expert Support Laboratory (SMOS-ESL). CS2SMOS is based on satellite radar altimeter and L-band radiometer data, capitalizing on their complementary sensitivity to different sea ice thickness categories. The data fusion is implemented via optimal interpolation, thereby enabling gapless sea ice thickness information with a daily update frequency. Here we introduce a new version of the data record that expands the spatial coverage to the southern hemisphere. This extension has been made possible by the recent extension of the SMOS sea ice thickness retrieval algorithm to the southern hemisphere. In addition, the availability of a thematic sea ice product for Sentinel-3A/B has led to a substantial increase in the ice freeboard observation density from Ku-band radar altimetry. Results from Sentinel-3 have demonstrated excellent consistency with CryoSat-2 reference data. We therefore include Sentinel-3 in the data fusion scheme in both hemispheres, resulting in a global winter sea ice thickness product from CryoSat-2, Sentinel-3A/B and SMOS (CS3SMOS) at an improved spatial resolution. The most important prerequisite for data fusion using optimal interpolation is realistic and consistent uncertainties of the source datasets. A new approach to expressing sea ice thickness uncertainties from the radar altimetry and L-band radiometer retrieval algorithms was required for the complex sea ice conditions in the southern hemisphere.
Retrieval of sea ice thickness in the Antarctic by Ku-Band radar altimetry is characterized by large uncertainties due to the complex snow stratigraphy affecting the Ku-Band radar backscatter elevation distribution. These high uncertainties contrast with substantially lower SMOS product uncertainties, thus leading to an imbalance in the optimal interpolation weights of both methods. We have developed strategies to balance the weights of both methods by reformulating the uncertainty description of our data, which was also applied to the northern hemisphere. This change should lead to more realistic end-product uncertainties compared to the previous version of the data record. We present and assess the evolution of sea ice thickness and volume in both hemispheres observed by CS3SMOS over the past 15 winters and close with an outlook on possible future improvements in the methodology and source datasets.
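The weighting issue at stake can be sketched with a per-grid-cell inverse-variance merge (a simplified stand-in for full optimal interpolation; function and variable names are illustrative):

```python
def merge_thickness(sit_alt, sig_alt, sit_smos, sig_smos):
    """Inverse-variance (minimum-variance) merge of an altimetry and an
    L-band radiometry thickness estimate for one grid cell -- the
    weighting principle behind the optimal interpolation described
    above. If one sensor's stated uncertainty is unrealistically small,
    it dominates the merged field, hence the need for balanced,
    consistent uncertainty descriptions.
    """
    w_alt = 1.0 / sig_alt ** 2
    w_smos = 1.0 / sig_smos ** 2
    sit = (w_alt * sit_alt + w_smos * sit_smos) / (w_alt + w_smos)
    sigma = (w_alt + w_smos) ** -0.5   # merged standard deviation
    return sit, sigma

# Uncertain altimetry estimate (2.0 +/- 1.0 m) merged with a more
# certain radiometry estimate (0.5 +/- 0.5 m): the latter dominates.
sit, sigma = merge_thickness(2.0, 1.0, 0.5, 0.5)
```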
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Regional Variability and Changes of Sea Ice Deformation in the Arctic During the Last Four Decades

Authors: Linxin Zhang, Gunnar Spreen
Affiliations: Institute Of Environmental Physics (IUP), University of Bremen
Sea ice plays a key role in the Earth’s climate system. The dynamic processes of sea ice shape its surface, influence the distribution of sea ice thickness and leads, and regulate thermodynamic ice growth. In recent decades, Arctic sea ice has undergone significant changes, including increased drift speed and deformation. Changes in ice deformation can be caused either by changes in the atmospheric and/or oceanic forcing, i.e., winds and ocean currents, or by changes in the sea ice itself, i.e., weakening of the ice by thinning or roughening (more ridges), which makes it more susceptible to wind and ocean forcing. Our current understanding is that changes in wind are not dominating the ice dynamics changes and that ice thinning has contributed, but a detailed analysis and attribution of the different processes to ice deformation variability is missing. Currently, ice deformation is primarily obtained from ice motion fields derived from buoy records and synthetic aperture radar (SAR) imagery. The latter provides higher spatial coverage and is thus more suitable for studying large-scale sea ice changes. Specifically, for SAR data, ice drift fields can be computed using an ice-tracking algorithm (the maximum cross-correlation method), which identifies patterns in radar backscatter coefficients across sequential images and estimates the spatial displacement of these patterns. Following this, the ice deformation can be obtained by applying the Sobel filter to compute spatial velocity gradients from these displacements. Although SAR sea ice motion data provide high spatial resolution (about 10 km), the limited swath width and long revisit time constrain spatial coverage in some regions, and the SAR time series is limited to recent years.
Passive microwave radiometer ice motion data, on the other hand, have relatively low spatial resolution (typically on the order of tens of kilometers) but provide near-daily temporal resolution and Arctic-wide spatial coverage, making them the primary source of continuously updated large-scale sea ice motion. However, studies utilizing passive microwave data to investigate Arctic-wide sea ice deformation patterns remain scarce. Therefore, in this presentation we calculate sea ice deformation from sea ice motion based on low-resolution passive microwave satellite data to create long-term, Arctic-wide sea ice deformation time series for the last decades. The European OSI SAF sea ice drift products OSI-405-c and OSI-455 are available from 2009 and 1991, respectively. We calculate sea ice strain rates and sea ice deformation based on these low-resolution products. OSI-405-c-based ice deformation demonstrates good correspondence with high-resolution ice deformation data derived from Sentinel-1 SAR and exhibits a power-law decay of mean total deformation rates with respect to spatial and temporal scales. The US National Snow and Ice Data Center (NSIDC) sea ice motion data, which provide daily results with a spatial resolution of 25 km since 1978, will also be included in the analysis and compared to the OSI SAF results; agreement between the different datasets will provide higher confidence in the obtained results. By incorporating deformation data derived from NSIDC, SAR, OSI-405-c, and OSI-455, the long-term trends, regional variability, and spatio-temporal patterns of sea ice deformation will be analyzed. Furthermore, the relationship between sea ice deformation and atmospheric circulation patterns, based on ERA5 wind data for the same period, will be shown.
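The strain-rate computation described above can be sketched as follows (central differences stand in for the Sobel-type gradient filter; the invariants are the standard divergence, shear and total deformation rate):

```python
import numpy as np

def deformation_rates(u, v, dx):
    """Strain-rate invariants from a gridded ice drift field.

    u, v : 2-D drift components (e.g. m/day) on a grid of spacing dx (m),
    with rows along y and columns along x.
    """
    du_dy, du_dx = np.gradient(u, dx)   # gradients along (rows, cols)
    dv_dy, dv_dx = np.gradient(v, dx)
    divergence = du_dx + dv_dy
    shear = np.sqrt((du_dx - dv_dy) ** 2 + (du_dy + dv_dx) ** 2)
    total = np.sqrt(divergence ** 2 + shear ** 2)
    return divergence, shear, total
```

For a purely divergent field (u = x, v = y) the divergence is 2 everywhere and the shear vanishes, which makes a convenient sanity check.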
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: More Dynamic Ice Growth in a Thinner Arctic? 18 Years of Arctic Thermodynamic and Dynamic Sea Ice Thickness Change Along Lagrangian Trajectories

Authors: Dr Luisa von Albedyll, Robert Ricker, Dr Frank Kauker, Dr Stefan Hendricks, Dr Nils Hutter
Affiliations: Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, NORCE Norwegian Research Centre, O.A.Sys, Ocean Atmosphere Systems, GEOMAR Helmholtz Centre for Ocean Research Kiel
The Arctic Ocean's transition from perennial sea ice to more ice-free summers has halved sea ice thickness over the past six decades, with significant impacts on the Arctic climate and ecosystem. However, recent trends show a slowdown in ice thickness decline and volume loss. This highlights the need to explore the seasonal and long-term feedback mechanisms that drive changes in sea ice thickness. An intriguing question is whether feedback mechanisms, such as the relationship between ice thickness and ice strength, have led to more dynamic ice growth in winter, potentially altering the balance between thermodynamic and dynamic processes in the future Arctic. To quantify pan-Arctic trends in thermodynamic and dynamic thickness changes over the past 18 years, we have developed a Lagrangian framework. This framework integrates a drift-aware sea ice thickness product (DA-SIT) with the 1D multi-category sea ice model Icepack, which simulates the sea ice and snow evolution on trajectories identified by DA-SIT. DA-SIT is based on the ESA Climate Change Initiative (CCI) Arctic sea ice thickness climate data record, which combines gridded altimeter observations from Envisat and CryoSat-2 from 2002 to 2020. Icepack simulations require atmospheric and oceanic forcing data for thermodynamic calculations, as well as sea ice deformation forcing for dynamic thickness change calculations. To identify the best combination of thermodynamic forcing, we evaluated Icepack simulations driven by various combinations of atmospheric reanalysis datasets and sea ice-ocean model outputs against snow and sea ice thickness measurements from 38 Ice Mass Balance Buoys (IMBs) recorded over full winter seasons from 2002 to 2019. For dynamic forcing, i.e., opening and closing due to sea ice deformation, we derived sea ice deformation from the low-resolution passive microwave drift trajectories of DA-SIT. 
To evaluate this deformation, we have extensively validated it against high-resolution data from the RGPS dataset (2003–2008). Despite the comparably large footprints of the satellite radar altimeters, our analysis shows that they effectively capture key features of thermodynamic and dynamic ice thickness changes. Furthermore, we found that the low-resolution passive microwave drift trajectories resolve prominent annual, regional, and seasonal variations in shear and divergence, making them a suitable choice for the dynamic forcing of Icepack. Thermodynamic growth, calculated using Icepack with ERA5 (atmosphere) and FESOM2 (ocean) shows the best agreement with IMB observations, achieving a root mean square error of 7 cm for the monthly thickness changes. Overall, we have demonstrated that Icepack successfully simulates thermodynamic and dynamic sea ice thickness changes within the observational uncertainty bounds along the Lagrangian trajectories. The described framework enables us to identify and describe pan-Arctic, long-term trends in thermodynamic and dynamic thickness changes on time scales from months to decades and spatial scales larger than about 500 km. We discuss these findings in relation to overall thickness trends and the changing significance of feedback mechanisms throughout the 18-year analysis period.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: High resolution sea ice floe detection and freeboard measurements using SWOT

Authors: Sara Fleury, Gwenael Jestin, Matthias Raynal, Anaëlle Treboutte, Fanny Piras, Sammy Metref, Pierre Rampal, Gérald Dibarboure, Filomena Catapano
Affiliations: Legos/CNRS, CNES, CLS, DATLAS, IGE, ESA
Sea ice plays a major role in climate, and its changes are of great concern to scientists, who seek to forecast its behavior and to understand the causes and effects of these changes. The most common techniques employed to monitor sea ice rely on 2D space imagery across a wide range of frequencies and techniques (optical, thermal, radiometric and SAR imagery). However, we now know that the evolution of sea ice over the course of days or seasons is governed more by its thickness than by its extent. Thickness is therefore an essential parameter for determining the condition of the ice and its ability to withstand particular climatic events (wind, heat waves) or oceanic events (waves, swell, upwelling of warm waters, etc.). Thickness also provides access to volume and mass variations, quantifying freshwater inflows and the effects of stratification on ocean circulation. The main technique for measuring sea ice thickness is based on nadir altimeters. The result is a series of point measurements of the sea ice taken vertically along the satellite track. This sparse sampling between tracks means that about a month of data must be accumulated to provide a spatial representation dense enough to be exploitable. Relative to these sparse observations, SWOT's swath width (120 km) offers enormous potential for analyzing sea ice conditions and dynamics on much shorter time scales and with much higher coverage and density. This mission, whose primary objective is continental hydrology, was initially given little consideration by the sea ice scientific community, as it was felt that the viewing angle (1° to 4°) would not allow returns from the highly specular water surfaces in sea ice fractures. Without this water reference, ice freeboard and thickness cannot be measured. However, initial tests carried out by CNES on atypical surfaces, including sea ice, have revealed quality measurements not only on ice but also in leads, albeit less systematically on these water surfaces.
Building on these initial observations, we have been able to measure the freeboard of the sea ice using SWOT data, providing unprecedented resolution of the sea ice surface topography over swaths 120 km wide and several hundred kilometers long. This result was made possible by generating a precise lead/floe classification from the 250 m SWOT data. The aim of this presentation is to show the results obtained, together with comparisons against space imagery for the classification and against nadir altimetry measurements for the freeboard. This lead/floe classification will be available in the CNES L3 product, probably in version v2 scheduled for early 2025. This should make it easier to measure sea level anomalies (SLA) over ice-covered oceans and provide the first measurements of ice density using space altimetry. This work is supported by the ESA CROPS and CNES TOSCA SPIceSea projects.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Outlook and History of Satellite-Based Iceberg Monitoring in the Arctic: Insights From Sentinel-1 Observations and the Path to Pan-Arctic Coverage With Sentinel-1C and RCM

Authors: Jorgen Buus-Hinkler, Research Scientist Tore Wulf, Matilde Kreiner, Jens Jakobsen, Keld Qvistgaard
Affiliations: Danish Meteorological Institute (DMI)
The transport of freshwater from glaciers to oceans, including in the form of icebergs, plays a vital role in the global climate system. Icebergs in particular not only influence ocean salinity and circulation patterns but also pose growing hazards to Arctic navigation as diminishing sea ice leads to more dynamic and unpredictable sea ice and iceberg regimes. These changes, coupled with increasing maritime activity in the Arctic, underscore the need for reliable iceberg monitoring. Radar-based satellite technology, such as Sentinel-1, is indispensable for detecting icebergs at high latitudes, overcoming challenges posed by polar darkness, frequent cloud cover, and remote locations. Since the launches of Sentinel-1A in 2014 and Sentinel-1B in 2016, Synthetic Aperture Radar (SAR) imagery has been used by the Danish Meteorological Institute (DMI) to monitor icebergs in Greenland waters as part of the Copernicus Marine Service (CMS). These satellites have enabled the creation of a comprehensive iceberg observation archive spanning eight years (2017–2024), providing critical insights into seasonal and inter-annual iceberg dynamics. However, the permanent loss of Sentinel-1B in 2021 and the aging of Sentinel-1A have introduced significant data gaps, partially addressed through the integration of Radarsat Constellation Mission (RCM) data. The launch of Sentinel-1C in December 2024 is expected to restore robust SAR coverage and expand monitoring capabilities. This study leverages Sentinel-1 SAR imagery and RCM data to produce iceberg concentration products at a 10 km resolution, enabling the construction of a detailed "per grid-cell iceberg climatology." DMI ice charts, and later DMI Automated Sea Ice Products (DMI-ASIP), were used to filter out sea ice, revealing variations in iceberg concentration across Greenland waters and beyond.
The DMI Iceberg Detection Algorithm (DIDA) further provides individual iceberg positions with iceberg-specific attributes, aiding in drift forecasting. These advancements are integral to the “Arctic Cross-Copernicus forecast products for sea Ice and iceBERGs” (ACCIBERG) project, supporting the development of a Lagrangian iceberg forecasting module. By extending monitoring towards pan-Arctic coverage, this work emphasizes the importance of consistent SAR coverage for understanding and forecasting Arctic iceberg drift dynamics in a changing climate.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: New estimates of Arctic Ocean sea ice export from Sentinel-1, the RADARSAT Constellation Mission (RCM), and CryoSat-2

Authors: Stephen Howell, David Babb, Jack Landy, Mike Brady
Affiliations: Climate Research Division, Environment and Climate Change Canada, Centre for Earth Observation Science, University of Manitoba, Department of Physics and Technology, The Arctic University of Norway
Sea ice export from the Arctic Ocean is important to the ice mass balance and freshwater budget of the Arctic Ocean and to the delivery of solid freshwater to the North Atlantic. Historically, estimates of the sea ice flux across the Arctic’s major export passageways were limited by the seasonal availability of ice thickness data together with low-spatial-resolution sea ice motion data. However, observational advances now provide year-round estimates of ice thickness from CryoSat-2, and high-spatiotemporal-resolution estimates of sea ice motion can be derived from the Sentinel-1 and RADARSAT Constellation Mission (RCM) synthetic aperture radar (SAR) satellites. Here, we present the results of merging these datasets to provide new high-quality monthly and annual estimates of sea ice area, volume, and equivalent freshwater flux at the Arctic’s major export passageways of Fram Strait, Davis Strait, Nares Strait and the Canadian Arctic Archipelago from 2016 to 2022. Over our study period, the annual average ice volume export at Fram Strait was 1586 km³, which is larger than previous estimates but significantly less than historic estimates due to the long-term decline in sea ice export through Fram Strait. The sea ice volume flux through Davis Strait was 816 km³, nearly double previous estimates. The annual average volume exports at Nares Strait and through the Canadian Arctic Archipelago were 160 km³ and 43 km³, respectively, in agreement with longer-term increases and indicating a trajectory divergent from that of Fram Strait. Annually, a total of 1912 km³ of solid freshwater was delivered to the North Atlantic through Fram Strait and Davis Strait combined, which is approximately 60% of the estimated liquid freshwater flux through these channels.
Our annual flux values, particularly for Fram Strait and Davis Strait, are considerably higher than those of previous studies, which we attribute primarily to SAR imagery being able to capture more of the true ice drift variability than passive microwave observations. As a result, our new high-quality estimates of these sea ice fluxes provide updated and more representative quantities for understanding recent changes in the ice mass balance and freshwater budget of the Arctic Ocean, as well as the freshwater balance of the North Atlantic, where overturning is critical to the global climate.
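The underlying flux computation can be sketched for a single export gate (an illustrative simplification; in practice thickness and gate-normal drift vary along the gate and are integrated per grid cell and per time step):

```python
def ice_volume_flux_km3(thickness_m, speed_m_per_s, gate_width_m, duration_s):
    """Sea ice volume flux through an export gate, in km^3.

    Single-gate simplification of the area/volume flux computation
    described above: volume = mean thickness x gate-normal drift speed
    x gate width x duration.
    """
    volume_m3 = thickness_m * speed_m_per_s * gate_width_m * duration_s
    return volume_m3 / 1.0e9  # m^3 -> km^3

# Example: 2 m thick ice drifting at 0.1 m/s through a 400 km wide
# gate for 1e7 seconds (roughly four months).
flux = ice_volume_flux_km3(2.0, 0.1, 4.0e5, 1.0e7)
```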
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Using RCM and Sentinel-1 SAR observations to evaluate the sea ice dynamics in ECCC prediction systems

Authors: Mathieu Plante, Jean-François Lemieux, Bruno L. Tremblay, Damien Ringeisen, Alexander S. Komarov, Stephen Howell, Mike Brady
Affiliations: Environment And Climate Change Canada, McGill University, Environment and Climate Change Canada, Environment and Climate Change Canada
Sea ice forms a thin but horizontally extensive boundary between the ocean and the atmosphere, with complex, crust-like dynamics characterized by intermittent sea ice deformations. The heterogeneity and localization of these sea ice deformations are important characteristics of the sea ice dynamics and are often used to evaluate the material physics parameterized in dynamical sea ice models. Recently, a new pan-Arctic sea ice deformation and rotation rates (SIDRR) dataset was produced based on RADARSAT Constellation Mission (RCM) and Sentinel-1 (S1) synthetic aperture radar (SAR) imagery. The SIDRR estimates are derived from contour integrals of triangulated ice motion data obtained from the Environment and Climate Change Canada automated sea ice tracking system (ECCC-ASITS). This new dataset covers the entire Arctic Ocean and all peripheral seas, except the Sea of Okhotsk, from 1 September 2017 onward. One difficulty with the SIDRR dataset is its relatively large uncertainties, associated with the low tracking resolution required for timely production of the operational sea ice motion products. These uncertainties are small enough to identify large lines of localized deformation (called linear kinematic features, or LKFs), but post-processing methods are needed to retrieve detailed sea ice deformation information (e.g., ridging and lead opening rates). Here we investigate how these processing methods influence the sea ice deformation metrics commonly used to evaluate the simulated sea ice dynamics (e.g., probability distribution functions, scaling properties). Furthermore, we investigate different approaches for comparing the Eulerian simulations with the Lagrangian observations. In particular, we show that computing the sea ice deformation diagnostics based on long-term (3-month) Lagrangian tracks often filters out some of the large sea ice deformations occurring in heavily deformed cells.
However, we note that producing Lagrangian trajectories remains necessary, as some of the dynamical properties (such as spatial scaling) are ill-defined in an Eulerian approach. Based on these results, we favor evaluating the sea ice dynamics in our prediction systems using short Lagrangian tracks computed over only a few days.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Sea Ice Surface Temperature Retrieval Algorithm for Copernicus Imaging Microwave Radiometer (CIMR)

Authors: Hoyeon Shi, Suman Singha, Emy Alerskans, Jacob Høyer
Affiliations: Danish Meteorological Institute
There are broadly two types of satellite surface temperature observations for sea ice: one is snow surface temperature, which can be obtained from infrared observations, and the other is ice surface temperature (IST), which can be obtained from passive microwave observations. Although both surface temperatures enable us to monitor changes in the thermal state of sea ice-covered regions, the ice surface temperature provides more continuous coverage because passive microwave observations are less hindered by persistent cloud cover than infrared observations. Moreover, IST has a wide range of applications, including estimating sea ice emissivity, sea ice thickness, and snow depth on sea ice. It can also be used to constrain sea ice growth models through data assimilation and nudging. Therefore, retrieving IST from satellite microwave observations is important. This poster describes procedures for retrieving IST from microwave brightness temperatures obtained from Copernicus Imaging Microwave Radiometer (CIMR) observations. It provides an introduction to the field of IST retrieval using satellite passive microwave observations and describes the chosen baseline algorithm, a physical method based on the Fresnel equations that retrieves IST from C-band polarized brightness temperatures. In addition, it presents the expected improvements to the original algorithm through further research and development.
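The forward model underlying a Fresnel-based retrieval can be sketched as follows (a minimal illustration assuming a specular surface and a known ice permittivity; under the approximation TB_p ≈ e_p × IST, the emissivities link the polarized brightness temperatures to IST):

```python
import cmath
import math

def fresnel_emissivity(eps_r, theta_deg):
    """Emissivities (e_V, e_H) of a smooth, specular surface with
    relative permittivity eps_r (may be complex) at incidence angle
    theta_deg, from the Fresnel power reflection coefficients.
    """
    theta = math.radians(theta_deg)
    cos_t = math.cos(theta)
    root = cmath.sqrt(eps_r - math.sin(theta) ** 2)
    r_h = (cos_t - root) / (cos_t + root)                   # horizontal pol.
    r_v = (eps_r * cos_t - root) / (eps_r * cos_t + root)   # vertical pol.
    return 1.0 - abs(r_v) ** 2, 1.0 - abs(r_h) ** 2
```

At nadir the two polarizations coincide; away from nadir e_V exceeds e_H, and this polarization contrast is what a physical retrieval can exploit to separate emissivity from temperature.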
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Exploring Kalman Filter Efficiency in Modelling Sea Ice Deformation Using C- and L-band SAR Imagery

Authors: Vaishali Chaudhary, Lanqing Huang, Julienne Stroeve, Dustin Isleifson
Affiliations: Centre for Earth Observation Science, University of Manitoba, Centre for Polar Observation and Modelling, University College London
The stability of landfast sea ice is undergoing significant alteration due to the accelerating impacts of global warming in the Arctic. These changes pose substantial risks to maritime navigation, traditional hunting practices, and the survival of marine species that depend on stable sea ice for their habitats. Consequently, the need for robust monitoring and predictive systems to track sea ice deformation has become increasingly critical. The Kalman filter emerges as a powerful tool in this context, offering a sophisticated data assimilation framework to reconstruct and forecast phase changes associated with sea ice deformation. By integrating observational data with predictive models, the Kalman filter enhances the precision of estimates, reduces uncertainties, and facilitates advanced predictive analysis. This technique is particularly well suited to applications requiring continuous monitoring, as it can dynamically integrate new data from remote sensing missions, such as the C-band Sentinel-1A/B and L-band ALOS PALSAR sensors. Leveraging such data streams, the Kalman filter provides a robust framework for sequential data assimilation, maintaining accuracy as new observations are incorporated in real time. For effective application, the Kalman filter generally requires at least one year of input data to achieve model convergence and to ensure sufficient temporal and spatial coverage to capture the nuances of deformation dynamics. Here, it is fed with Interferometric Synthetic Aperture Radar (InSAR) data spanning 4.5 years. By accounting for observational uncertainties and noise, the Kalman filter enhances the reliability of phase reconstructions and displacement forecasts. The Harmony mission, which ESA has selected as its next Earth Explorer concept, adds a transformative dimension to this effort.
The Kalman filter can integrate Harmony's high-resolution SAR data, offering a unique capability to track seasonal cycles and long-term deformation patterns in landfast sea ice. By coupling Harmony's data streams with the Kalman filter’s adaptive modeling framework, researchers can uncover subtle deformation behaviors, such as seasonal ice formation and melt cycles, and evaluate their cumulative impact over the years. Moreover, the Kalman filter can be adapted to model semi-annual or annual deformation patterns, making it an ideal tool for analyzing long-term trends and seasonal variability in sea ice stability. Its ability to handle multi-scale temporal signals also allows researchers to discern shifts induced by climate-driven changes in the Arctic environment. This capability not only facilitates understanding of the processes driving landfast sea ice dynamics but also supports actionable insights for navigation safety, wildlife conservation, and Indigenous subsistence practices.
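A single assimilation step of the kind described can be sketched in scalar form (a minimal illustration; an operational filter would be multivariate, with a dynamical model driving the prediction step and InSAR-derived observation variances):

```python
def kalman_update(x_prior, p_prior, z, r):
    """One scalar Kalman filter update: blend a predicted displacement
    x_prior (variance p_prior) with an observation z (variance r),
    e.g. an InSAR-derived displacement.
    """
    k = p_prior / (p_prior + r)           # Kalman gain
    x_post = x_prior + k * (z - x_prior)  # state update
    p_post = (1.0 - k) * p_prior          # posterior variance shrinks
    return x_post, p_post

# Equally uncertain prediction and observation: the posterior lands
# halfway between them, with halved variance.
x, p = kalman_update(0.0, 1.0, 1.0, 1.0)
```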
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: New AVHRR-based C3S IST CDR/iCDR of Arctic and Antarctic ice surface temperatures from 1982 to present

Authors: Wiebke Margitta Kolbe, Gorm Dybkjær, Rasmus T. Tonboe, Steinar Eastwood, Pia Nielsen-Englyst, Jacob Høyer, André Toft Jensen, Magnus B. Suhr
Affiliations: Danish Meteorological Institute (DMI), DTU Space, Norwegian Meteorological Institute (MET)
The new ice surface temperature (IST) climate data record (CDR) of the Copernicus Climate Change Service (C3S) will be presented. The CDR covers the period 1 January 1982 to 30 June 2019 and is extended to the present by an interim CDR (iCDR) with regular updates. Thus, the CDR and iCDR together provide a more than 43-year-long record of surface temperatures for both hemispheres. The dataset includes sea ice and ocean surface temperatures poleward of 50° North and South, as well as surface temperatures for the Greenland and Antarctic ice sheets. It thereby provides a seamless transition between different surface types and also covers the marginal ice zone (MIZ). The CDR/iCDR is based on Advanced Very High Resolution Radiometer (AVHRR) Global Area Coverage (GAC) data and is provided on a 0.25° latitude-longitude polar grid. The underlying algorithm is a combination of sea ice, land ice, MIZ and open water algorithms, which have been tuned with top-of-atmosphere brightness temperatures computed with the RTTOV radiative transfer model. Daily mean, minimum and maximum temperatures are accompanied by daily uncertainty estimates and quality levels in this Level 3 data record. IST is officially recognized as an Essential Climate Variable (ECV), and the C3S IST CDR/iCDR can be used to assess temporal and spatial trends throughout the last four decades. By computing trends across the whole dataset, the general warming in the Northern Hemisphere can be compared to regional extremes, such as the rapid temperature increase in the Barents Sea. Regional differences can also be identified in the Southern Hemisphere, where temperature trends are generally smaller and less significant. Validation against a number of different in situ measurements, such as drifting buoys, land stations and flights from NASA’s Operation IceBridge, has been performed to provide insight into the quality and stability of the data record.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: On L-band radiometric sea ice emission: from in situ observations to enhanced thin sea ice thickness satellite retrievals

Authors: Ferran Hernández-Macià, Dr. Marcus Huntemann, Dr. Carolina Gabarró, Dr. Maria José Escorihuela, Dr. Gemma Sanjuan Gomez
Affiliations: Institute of Marine Sciences (ICM-CSIC), Barcelona Expert Center (BEC), Institute of Environmental Physics, University of Bremen, isardSAT, S.L., Autonomous University of Barcelona (UAB)
Satellite-based sea ice thickness (SIT) retrievals are essential for monitoring the state and evolution of this key climate variable. With Arctic amplification increasing the prevalence of young, thin sea ice, L-band radiometers such as SMOS and SMAP are increasingly essential for thin-ice monitoring. Despite progress in algorithm development, significant uncertainties remain in modeling the L-band emission of snow-covered sea ice. This study utilizes data from multiple campaigns to deepen insights into sea ice L-band emission modeling. Firstly, data from the ELBARA and UWBRAD radiometers, collected during the MOSAiC expedition, are assessed using three radiative transfer models: an incoherent model without scattering (Burke), an incoherent model with scattering (SMRT), and a coherent model without scattering (Wilheit). All approaches are compared with different sea ice permittivity formulations, including one empirical model (Vant) and two theoretical models that describe brine inclusions as random needles or spheres. Results show that some brightness temperature observations can only be explained by coherent effects, although definitive evidence for these occurrences remains elusive. Meanwhile, the incoherent approach proved reliable for many measurements. Permittivity modeling introduced notable variability, confirming its key role in emission modeling. The findings support previous work suggesting that the optimal permittivity likely lies between the random-needles and spheres scenarios, implying ellipsoidal brine inclusions influenced by ice growth conditions. Secondly, ARIEL L-band radiometer data from a winter campaign in January 2024 at the Canadian High Arctic Research Station (CHARS, Cambridge Bay) were analyzed using a similar framework. The ice and snow conditions encountered allowed a detailed investigation of coherence effects, as brightness temperatures aligned only when these effects were included.
To address input uncertainties, an optimal estimation framework was employed to obtain optimized values for key variables such as snow depth, snow density, sea ice thickness, and the axis ratio of the brine inclusions within the ice. Snow properties are known to determine the position of the coherence peaks and thus whether the models can match the observed small polarization differences. The optimized variables agreed with in situ measurements within the expected variability, confirming the role of coherence effects in the observations. These findings are crucial for enabling accurate sea ice thickness estimates and highlight the need for future field campaigns to further investigate coherence effects and their implications for L-band SIT retrievals. This work also examines the possible impact not only on the current SMOS and SMAP sea ice thickness retrievals, but also on ESA's future CIMR mission.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: A New Thin Sea Ice Thickness Retrieval Using Combined L and C-band Passive Microwave Observations from SMOS and AMSR2 with Applications towards CIMR

Authors: Marcus Huntemann, Gunnar Spreen, Janna Rückert, Rasmus Tonboe
Affiliations: University Of Bremen, Technical University of Denmark
Sea ice thickness is a crucial parameter for understanding Arctic climate change and for maritime operations. This study presents a new algorithm for retrieving thin sea ice thickness (approximately below 1 m) using the low-frequency part of the thermal microwave spectrum: combined L- and C-band. It is specifically designed for the upcoming Copernicus Imaging Microwave Radiometer (CIMR) mission and is currently applied to a combined dataset from SMOS and AMSR2 to simulate the capabilities of CIMR. The algorithm is based on a microwave emission forward model that incorporates multiple sea ice physical parameters and employs a Bayesian inversion technique for parameter estimation. The forward model uses a layered approach, considering the radiative transfer from the ocean through sea ice, snow and atmosphere. For the snow layer, we calculate permittivity based on density variations. The sea ice layer modeling includes variations of the brine inclusions typically present in growing first-year ice. The ice layer permittivity is determined through a mixture model in which the brine pockets are simulated as oriented ellipsoids. The volume of the brine inclusions depends implicitly on temperature and salinity. The ocean layer underneath is modeled with temperature- and salinity-dependent seawater permittivity. In general, the influence of the atmosphere on the measured signal is small at L- and C-band. Nevertheless, we explicitly consider the atmospheric contribution, which, in rare cases, may exceed 5 K. Our approach implements an incoherent approximation of the coherent radiative transfer model, which provides computational efficiency while maintaining physical accuracy for the snow and sea ice layers. The model calculates interface reflections at the snow-air, snow-ice, and ice-water boundaries, leading to individual contributions from each layer to the total brightness temperature.
To address systematic deviations between model and observations, we introduce a bias correction scheme that improves the accuracy of the physical forward model with respect to the observed brightness temperatures. This allows the retrieval to maintain physical sensitivities even in cases where the forward model cannot fully match the observations. The Bayesian inversion framework allows the simultaneous retrieval of multiple parameters, including thin sea ice thickness, snow depth and density, sea ice salinity, temperature, and brine inclusion shape. This multi-parameter approach provides a more comprehensive understanding of sea ice properties while accounting for their interdependencies. The method's uncertainty quantification through the Bayesian framework offers valuable insights into the reliability of the retrieved parameters. Preliminary results on AMSR2 and SMOS data demonstrate the algorithm's capability to retrieve thin ice thickness with improved uncertainty quantification compared to existing methods. The combined use of L- and C-band frequencies leverages their complementary sensitivity to different ice properties, which is particularly beneficial for thin ice parameters. This work represents a step forward in sea ice remote sensing and provides a foundation for future applications with the CIMR mission, which will offer enhanced spatial resolution and coverage of the polar regions. The algorithm is developed with CIMR as its target; however, it can already be applied to combined SMOS and AMSR2 observations, albeit with lower resolution at C-band and with temporal brightness temperature inconsistencies due to the observations coming from different satellite platforms. It thereby already demonstrates the potential for improved sea ice monitoring capabilities, particularly in the crucial thin ice regime. This advancement will contribute to, for example, a better understanding of heat exchange processes in polar regions and support climate studies.
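The Bayesian inversion of a forward model described above can be illustrated with a minimal single-parameter sketch. The exponential saturation forward model, its coefficients, and the function names below are illustrative assumptions, not the authors' tuned model; the cost function is the standard optimal-estimation form J(x) = (y - F(x))^2 / S_e + (x - x_a)^2 / S_a, with y the observed brightness temperature and x_a the prior thickness.

```python
import numpy as np

# Illustrative forward model: L-band brightness temperature saturates with
# ice thickness. These coefficients are made-up placeholders.
TB_OPEN_WATER, TB_THICK_ICE, GAMMA = 100.0, 230.0, 0.2  # K, K, m

def forward(thickness_m):
    """Toy Tb(thickness): exponential approach to the thick-ice value."""
    return TB_THICK_ICE - (TB_THICK_ICE - TB_OPEN_WATER) * np.exp(-thickness_m / GAMMA)

def map_retrieval(tb_obs, sigma_obs=2.0, prior_mean=0.5, prior_sigma=0.5):
    """Maximum a posteriori thickness via 1-D grid search over the
    optimal-estimation cost J(x)."""
    x = np.linspace(0.0, 1.0, 2001)  # thin-ice regime, metres
    cost = ((tb_obs - forward(x)) / sigma_obs) ** 2 + ((x - prior_mean) / prior_sigma) ** 2
    return float(x[np.argmin(cost)])
```

In the multi-parameter case the grid search is replaced by a proper optimizer or sampler over the full state vector, and the posterior spread provides the uncertainty quantification mentioned above.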
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Development of RIO: Creating a Risk Assessment Dataset for Polar Navigation, based on Automated Sea Ice Products

Authors: Magnus Suhr, Suman Singha, Gorm Dybkjaer, Dr Tore Wulf
Affiliations: DMI
The Risk Index Outcome (RIO) is a critical component of the Polar Operational Limit Assessment Risk Indexing System (POLARIS) developed by the International Maritime Organization (IMO). RIO evaluates the operational risks for ships navigating in ice-infested waters by assessing ice conditions, and offers a quantifiable measure of risk, based on ship ice class, sea ice type and sea ice concentration, that aids decision-making for safe navigation in polar regions. The ship ice class can be one of 12 classes defined by the International Association of Classification Societies (IACS) and IMO, corresponding to the conditions that the ships are designed to navigate in. The RIO system supports safer navigation in polar regions and has previously been tested by the Arctic Command of Denmark. These products will provide vital information for navigating in dynamic and hazardous ice conditions. The RIO dataset produced by DMI as part of the Arctic PASSION (AP) programme, a European Union Horizon 2020 project, represents a significant advancement by offering a multi-year reprocessed dataset based on outputs from the ASIP product. The ASIP product includes Sea Ice Concentration (SIC) and Sea Ice Type (Stage of Development) data, utilizing AI-based retrieval techniques applied to SAR and passive microwave satellite data. The ASIP product is therefore an ideal candidate as input for the computation of a RIO product, as it is based directly on SIC and sea ice type. The AP-RIO dataset will provide weekly risk assessment maps for the given ship classes and will support the establishment of a 10-year climatology, thereby enabling the assessment of RIO variability over the years covered by the reprocessed ASIP product. This can be crucial for understanding and predicting ice conditions, thus facilitating informed navigation decisions in the Arctic.
The 10-year product can give considerable insight into how the risk of sailing in Arctic areas changes from week to week. By leveraging these detailed datasets, the AP-RIO data will enhance the safety and efficiency of maritime operations in the polar seas, providing a robust reference for evaluating normal and extreme ice conditions. The work described in this poster has been funded by the European Union’s Horizon 2020 research and innovation programme through the project Arctic PASSION under grant agreement No. 101003472.
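The POLARIS evaluation underlying RIO can be sketched as below. The formula RIO = sum_i C_i * RIV_i (partial concentration in tenths times the Risk Index Value for the ship's ice class) follows the IMO's POLARIS guidance, but the RIV values in this sketch are illustrative placeholders, not the official IMO table.

```python
# Illustrative POLARIS sketch: RIO = sum_i C_i * RIV_i, where C_i is the
# partial concentration of ice type i (in tenths, summing to 10 including
# open water) and RIV_i the Risk Index Value for the ship's ice class.
# These RIV values are placeholders, NOT the official IMO table.
RIV_FOR_SOME_ICE_CLASS = {
    "open_water": 3,
    "grey_ice": 2,
    "thin_first_year": 0,
    "medium_first_year": -1,
    "multi_year": -4,
}

def risk_index_outcome(partial_concentrations, riv=RIV_FOR_SOME_ICE_CLASS):
    """partial_concentrations: dict ice_type -> concentration in tenths."""
    total = sum(partial_concentrations.values())
    if abs(total - 10) > 1e-9:
        raise ValueError("partial concentrations must sum to 10 tenths")
    # Negative RIO indicates operation subject to restrictions under POLARIS.
    return sum(c * riv[t] for t, c in partial_concentrations.items())
```

For example, 6 tenths open water plus 4 tenths medium first-year ice gives 6*3 + 4*(-1) = 14 under these placeholder values; the AP-RIO product would evaluate this per grid cell for each ship class.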
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone G-H)

Poster: Low-Cost Drone and Satellite Remote Sensing for Quantifying Air-Ice-Sea Interactions in a Mobile and Highly Dynamic Arctic Ocean

Authors: Jennifer Watts, Christina Braybrook, Brent Else, Hannah Niehaus, Karen Anderson, Thomas G. Bell, Jamie Shutler
Affiliations: University Of Exeter, University of Calgary, University of Bremen, Plymouth Marine Laboratory
The Arctic Ocean is a globally significant carbon sink, yet large uncertainties remain in estimates of carbon dioxide exchange between the air and sea in polar regions. These uncertainties stem from critical gaps in our ability to understand and parameterise interactions between the air, ice, and ocean. Such interactions are particularly complex in heterogeneous sea ice environments, including the marginal ice zone and the spring melt season, which are expanding due to amplified Arctic climate change. The field technique of eddy covariance can be used to make direct observations of air-ice-sea carbon dioxide fluxes in these heterogeneous environments; however, its application requires accurate characterisation of ice conditions within the flux measurement footprint. Previous studies have typically relied on passive microwave satellite data to characterise sea ice concentration near the measurement tower. However, this approach has several issues: it neglects that the flux footprint extent varies in time and space, the data resolution is a poor match for the small land-based towers, and passive microwave sensors cannot resolve surface melt ponds. This research examines the spatial and temporal variability of landfast ice conditions during the spring melt season within the modelled flux footprint of a stationary meteorological tower used to study gas fluxes in the Canadian Arctic Archipelago. We employ a low-cost, open-source methodology to georectify aerial drone image datasets without ground control points, and with fully characterised uncertainties.
The collected high-resolution drone data (10 mm ground sampling distance, sub-daily surveys) are compared with satellite optical data (10 m ground sampling distance, daily, cloud-permitting) and coarser passive microwave data (3.125 km and 10 km, daily) to assess sea ice, melt pond, and open water fractions within the flux footprint, aligning with the air-sea-ice interactions measured by the tower. The drone optical observations revealed substantial spatial and temporal variability in melt pond coverage (16–60%) within the tower footprint, driven both by variability in melt onset and progression and by the shifting position of the flux footprint. Fine-spatial-resolution satellite optical data (Sentinel-2) showed good agreement with the drone data (average melt pond fraction difference of 5%) when spatial resolution, extent, and time offset (< ±4 hours) were optimised, whereas lower-resolution microwave data gave a poor representation of the tower footprint conditions. Our findings demonstrate that high-temporal-resolution optical data with low view zenith angles, precisely aligned with the flux footprint, are suitable for characterising ice and melt pond coverage at scales relevant to eddy covariance measurements. Temporal and spatial mismatches between the flux footprint and the remote sensing data used to characterise conditions can misrepresent those conditions, leading to erroneous conclusions about the impact of sea ice on air-sea gas exchange. The methods presented here offer potential for broader application, enabling the study of diverse sea ice characteristics and contributing to the standardisation of flux footprint characterisation across polar eddy covariance studies. We show that low-cost drones provide a flexible and effective tool for capturing high-resolution ice condition data, even over mobile targets without ground control points.
This capability extends the applicability of drone techniques to studying ice from ships and to examining mobile and fine-scale sea ice dynamics. Beyond enhancing polar eddy covariance research, this approach provides valuable opportunities across the marine sciences, such as studying ocean glitter, whitecapping, or ocean colour, to support satellite validation and bridge the gap between sparse in situ data and synoptic-scale satellite observations.
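The footprint-weighted surface fractions compared above can be computed as in this short sketch. The function and variable names are illustrative assumptions; the actual study uses a modelled flux-footprint weighting over the classified drone and satellite rasters.

```python
import numpy as np

def surface_fractions(class_map, footprint_weights):
    """Footprint-weighted fraction of each surface class.

    class_map         : integer raster, e.g. 0 = sea ice, 1 = melt pond, 2 = open water
    footprint_weights : same-shape flux-footprint weights (need not be normalised)
    """
    w = np.asarray(footprint_weights, dtype=float)
    w = w / w.sum()  # normalise so fractions sum to 1
    return {int(c): float(w[class_map == c].sum()) for c in np.unique(class_map)}
```

With uniform weights this reduces to simple area fractions; with a real footprint model, pixels far from the tower contribute less, which is what makes the footprint position matter.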
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: A.09.10 - POSTER - Interactions and feedback between Ice, Land and Ocean in Polar Regions

Subglacial conditions are a contributing factor to ice sheet dynamics, as bedrock properties control basal sliding. However, knowledge of the Solid Earth-Ice (and Ocean) interface comes with large uncertainties. This applies to the margins of the ice sheets, where margin topography is critical for accurate estimates of current Antarctic ice discharge, while bathymetry under ice shelves, bed topography along major outlet glaciers, and geology and subglacial hydrology near the coast are critical factors for predicting the future of the Antarctic Ice Sheet. Additionally, constraining bathymetry under ice shelves is critical to determine their response to ocean forcing, including the incursion of warmer Circumpolar Deep Water.
Bedrock properties also affect the interior of the ice sheet, where geothermal heat flow is a key parameter that is poorly constrained in both Greenland and Antarctica. This relates to the thermal and mechanical structure of the solid Earth, which exerts a primary control on the response of the polar regions to ice mass changes. Glacial Isostatic Adjustment of the bedrock beneath the ice sheets affects the bed slope of the ice sheet and the grounding line of marine-terminating outlet glaciers.
Satellite data are essential for monitoring current conditions and for understanding the feedbacks in order to predict the future evolution of the polar regions.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Tracking Changes in Western Antarctic Peninsula Ice Shelves: High-Resolution Surface Elevation Mapping Using Multi-Sensor Data

Authors: Antonia Warnstedt, Prof. Matthias Braun, Dr. Thorsten Seehaus, Dr. Christian Sommer, Philipp Malz
Affiliations: FAU Erlangen-Nuremberg
The ice shelves of the Antarctic Peninsula have undergone significant changes in recent decades, with the influence of both a warming atmosphere and increasing ocean temperatures being evident. In particular, the western side of the Antarctic Peninsula is experiencing upwelling of warm circumpolar deep and continental shelf bottom water, which is amplifying basal melt. Detecting the response of ice shelves to changing climatic conditions is of great importance, given their role in buttressing land-based glaciers and the ice sheet. The present study focuses on the western part of the Antarctic Peninsula south of 68°S, including the Wilkins, George VI, Wordie, Bach, and Stange ice shelves, as well as their associated tributaries. The objective is to update the existing database for the years from 2018 onwards and to provide high-resolution datasets of surface elevation changes. The surface elevation change trends were calculated using laser altimeter data from ICESat-2, supplemented by CryoSat-2 data and synthetic aperture radar (SAR) data from the TanDEM-X mission. Elevation measurements at repeat-track and crossover points of the ICESat-2 ATL06 data product were identified and interpolated for the region of interest. Moreover, these height measurements were employed as a stable reference for the construction of digital elevation models (DEMs) from bi-static TanDEM-X scenes. To compute the initial DEMs for various time steps, we employ SAR interferometry (InSAR). In particular, the status of Wordie Bay as a super test site for the TanDEM-X mission provides us with the opportunity to use frequent acquisitions. Furthermore, differential interferometry (DInSAR) is applied to identify changes in surface elevation over time. As an elevation reference, the 2022 release of the Copernicus Global 30 DEM is employed.
The data indicate significant surface lowering at Wordie Bay, especially for Fleming and Prospect glaciers. The mean elevation change over the period 2019-2023 was -0.7 ± 0.2 m/yr. The George VI Ice Shelf displays a mixed signal: while the northern part exhibited stability, the southern tributary glaciers of George VI demonstrated surface lowering and frontal retreat. At Wilkins Ice Shelf, surface lowering was observed up to the break-up events at the southern fronts between Latady Island and Eroica Peninsula in 2023/24, accompanied by the formation of additional ice rifts. Moderate frontal retreat and a slightly negative elevation change trend were observed for the Bach and Stange ice shelves.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Scaling Up Arctic Driftwood Mapping: Deep Learning and Medium-Resolution Satellite Imagery for Large-Scale Assessments

Authors: Carl Stadie, Guido Grosse, Prof. Dr. Martin Stefan Brandt, Ingmar Nitze, Dr. Tabea Rettelbach
Affiliations: Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Department of Geosciences and Natural Resource Management, University of Copenhagen
Large driftwood deposits constitute an important ecological variable in Arctic ecosystems. Driftwood connects marine and terrestrial ecosystems, stabilizes sediments, provides essential resources, serves as an indicator of sea-ice dynamics and oceanographic conditions, and represents a significant carbon store. Despite its importance, a comprehensive understanding of the abundance and distribution of Arctic driftwood deposits, needed for advances in the field, is missing, primarily due to the challenges of large-scale mapping using traditional high-resolution imagery and manual digitization techniques. This study explores the potential of medium-resolution (3-meter) PlanetScope satellite imagery combined with deep learning techniques for the large-scale mapping of Arctic driftwood deposits. We created annual mosaics from PlanetScope imagery for 2019, 2021, and 2023, covering 1,346,569 km² of Arctic coastline and coastal lowlands in North America, supplemented with ArcticDEM elevation data. Using this dataset, we trained a UNet-based deep learning model, optimized for handling imbalanced classes and small-scale features, to segment driftwood deposits. The model leveraged RGB, near-infrared, and elevation data to produce a continental-scale driftwood dataset. To validate and refine the method, we integrated high-resolution aerial imagery (7–20 cm) from three flight campaigns over 16 diverse Arctic target sites, covering 700 km². A separate deep learning model was trained on this aerial imagery to create detailed driftwood maps. These high-resolution maps served as a benchmark to evaluate the performance of the coarser-resolution PlanetScope-based model. The results revealed the effectiveness of using medium-resolution satellite imagery for large-scale driftwood mapping. The PlanetScope-based model detected over 38,000 driftwood deposits, with a total estimated area of 73 km² across Arctic coastal North America.
Driftwood was found to be concentrated in zones of agglomeration near the outlets of large river systems such as the Mackenzie or Yukon River. Deposits had an average size of 1,902 m², yet within the deltas of the Yukon and Mackenzie Rivers and along the adjacent coastlines multiple deposits exceeding 100,000 m² were found. The largest detected deposit, situated in the western Mackenzie Delta, measured 147,511 m². Compared to the traditional high-resolution aerial approach, the method underestimated driftwood area by 33%. This underestimation can mainly be attributed to small driftwood deposits, such as logs occurring singly or in small clusters, or narrow berms along beaches, falling below the minimum mapping unit of the PlanetScope images and thus not being detected. This work represents a significant advancement in Arctic driftwood research, demonstrating the utility of modern deep learning frameworks and medium-resolution satellite imagery for cost-effective, scalable, and temporally consistent mapping of large driftwood deposits. The presented approach, as well as the resulting dataset, bridges critical knowledge gaps by providing the first insight into Arctic driftwood abundance and distribution at a large, continental scale, which is urgently needed as a foundation for future research on driftwood dynamics.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E)

Poster: Refining Geothermal Heat Flow Models for Greenland Using Radiogenic Heat Production and Geological Mapping

Authors: Judith Freienstein, Wolfgang Szwillus, Agnes Wansing, Maja Zimmer, Björn Heincke, Jörg Ebbing
Affiliations: Kiel University, GEUS
We present an approach in which we establish a relationship between large-scale radiogenic heat production (RHP) and geological units for Greenland in order to reassess the role of local variations in geothermal heat flow (GHF). We use petrophysical measurements from the ice-free regions at the coast to estimate the statistical distribution of thermal properties (i.e. mean RHP and standard deviation). These statistics can be extrapolated to ice-covered areas under the assumption that the coastal samples are representative of the geological units at large. Furthermore, we can simulate small-scale variations based on the correlation lengths of RHP values. The baseline GHF is controlled by the lithospheric architecture (lithosphere and crustal thickness), and superimposed on this trend are the local small-scale variations from crustal RHP. Variations in radiogenic heat production significantly influence GHF, contributing up to ~60% of the local GHF compared to the mean RHP per unit. However, not all variations seen in rock samples are imprinted in GHF. Within the measurements we observe an integrated effect at spatial length scales below 10 km. While RHP sampling at such small length scales contributes most to the overall amplitude of radiogenic heat production, it is unlikely to significantly influence GHF. The correlation of RHP at a 50 km length scale, in contrast, has smaller amplitudes but is sufficient for calculating geothermal heat flow. A comparison of our model to a broadband radar profile confirms that the predicted spatial variability of small-scale GHF variations is in line with expectations from ice temperature modelling based on the profile data, both in magnitude and spatial pattern. This agreement supports the use of petrophysical and geological data to refine GHF models for Greenland. However, accurate prediction of subglacial geology remains a challenge.
Further improvements can be made by integrating magnetic, gravity and radar data for interpretation.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: C.01.01 - POSTER - High Altitude Pseudo Satellites in Earth Observation

High altitude platforms, or pseudo-satellites (HAPS), are uncrewed platforms positioned about 20 km above the Earth's surface in the low stratosphere where they take advantage of weak stratospheric winds and solar energy. The ability to reside in a fixed location over extended periods (several months) enables applications requiring persistent observation and enhanced communication/navigation at local or regional scale.
The Earth observation community has shown interest in the exploitation of HAPS, with applications ranging from farming, urban planning, air quality, greenhouse gas and sea ice monitoring to disaster response, fire monitoring, security and maritime surveillance.

The session aims at bringing together scientists, industry and other stakeholders to discuss
- Recent developments
- Scientific applications to better understand our environment
- Applications and services that combine space borne, airborne and HAPS assets
- Role of HAPS for the development of future satellite missions and satellite cal/val
- Demonstration campaigns

The session solicits presentations demonstrating current and future HAPS capabilities, showcasing HAPS as an element of the #FutureEO Programme.

HAPS, as part of the #FutureEO programme, may play an important role in the near future, as they have the capability to serve as a testbed for the development of future satellite missions. Recent test flights have shown the feasibility of reaching and staying in the stratosphere, and the rapid development is supported by new technologies. This underlines the importance of bringing together different stakeholders to discuss recent developments and perspectives.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone I)

Poster: How HAPS Will Change the Game for Earth and Space Sciences

Authors: Laurence Croizé, Vincent Réveret, Vincent Dubourg, Quentin Gueho, Philippe Keckhut, Sébastien Payan, Isabelle Pison, Emmanuelle Vaudour
Affiliations: Onera, Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, CNES, Centre National d’Etudes Spatiales (CNES), Institut du Droit de l'Espace et des Télécommunications, Université Paris-Saclay, Université Paris-Saclay / UVSQ, LATMOS (UMR 8190 / OVSQ, CNRS, Sorbonne), Sorbonne Université, LATMOS (UMR 8190 / OVSQ, CNRS, Sorbonne), Université Paris-Saclay, INRAE, AgroParisTech, UMR EcoSys, Laboratoire des Sciences du Climat et de l’Environnement, CEA-CNRS-UVSQ
HAPS, an acronym for 'High Altitude Pseudo-Satellites', are emerging platforms (solar drones, airships or manoeuvring balloons). Such platforms allow long-term (typically several months) quasi-geostationary observations at an altitude of about 19 km. Used for scientific applications, HAPS have the potential to revolutionize observations for the Earth and Space sciences [1]. For atmospheric physics, HAPS will provide access to underlying processes that are currently not observable from existing infrastructures (ground, drone, balloon, airplane), and/or to information that is currently obtained but with remaining gaps (in space and time) or too-large uncertainties. They will bridge the gap between low-Earth-orbit measurements, which enable observation at the global scale for several years but with relatively low spatial and temporal resolution, and infrastructures that provide better spatial and/or temporal resolution but only for short observation periods at local scale. Compared with UAVs or aircraft, HAPS provide regional or national, long-term, quasi-geostationary observation capabilities. Compared with space-based observations, their altitude enables much better spatial resolution than from geostationary orbit. Observations from HAPS could last from several months to one year, depending on latitude. This persistence will enable tracking of the kinetics of phenomena and/or an increase in instrumental sensitivity through accumulation over longer integration times. Another advantage is the possibility of maintaining or upgrading onboard instruments. Such advantages may lead to important improvements for both Earth and space sciences. For instance, monitoring anthropogenic greenhouse gas emissions is crucial both for a better understanding and forecasting of climate change and for monitoring the efficiency of emission reduction policies.
Currently, the observation network (ground and satellites) is insufficient to achieve that goal at the resolutions required by policy makers, and it would be greatly improved by measurements from HAPS. The same considerations apply to air quality monitoring: improved spatial and temporal sampling would allow better observation and forecasting. It will be useful for various topics such as regional import/export analysis, heat waves and meteorology, downscaling studies, assimilation, plume sampling and source inversion. The combination of temporal frequency and spatial accuracy would also benefit the monitoring of soil carbon, the changes of which are not only slow but tenuous: compared with UAVs and aircraft, HAPS would enable multi-instrumental acquisitions whose spectral accuracies would compete with both proximal and remote sensing that are currently operable only over limited periods and extents. Other operational and/or scientific use cases have been identified, for instance in the field of Intelligence, Surveillance and Reconnaissance (ISR) for infrasound (dual or military applications), monitoring of volcanic and seismic activity, and detection and monitoring of natural disasters (e.g. forest fire detection). In the fields of astrophysics and space science, HAPS present unique opportunities for specific areas such as meteor characterization, identifying transient gamma-ray sources, observing the Cosmic Microwave Background (CMB), and conducting general far-infrared astronomy research. The regulation and legal requirements for developing and using HAPS for scientific research are an important issue. More specifically, national, European and international environmental law could encourage public and private actors to engage in such developments to comply with their legal obligations. These laws could induce these actors to learn more about their environmental impact, such as their greenhouse gas emissions, and then act.
We also wish to discuss some direct potential beneficiaries of HAPS observations, such as insurers, and the ethical issues being raised. The Université Paris-Saclay is leading a collaborative project called NeoStars, which gathers scientists working in various fields of Earth and Space science, as well as law and economics, who are interested in the possibility of using HAPS for their research. In this paper, we will outline the scientific areas already identified that could benefit from HAPS, as well as discuss our future strategies and goals for this endeavor. [1] P. Keckhut, M. Meftah, et L. Croizé, « Observation de la Terre et du Soleil depuis l’espace : vers un changement de paradigme », L’Espace Politique, no 51‑52, sept. 2024, doi: 10.4000/12ddy.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: F.03.01 - POSTER - Commercial EO missions data for marine and land applications

EO data play a crucial role in marine and land applications, providing vital information for environmental monitoring, biodiversity, disaster management, hydrology, urban planning, agriculture and forestry, oceanography and coastal management, and sea-ice monitoring. By leveraging commercial EO data, these applications can enhance decision-making processes, support sustainable development and help monitor the implementation of EU policies.
This session has been structured into three thematic parts (marine, land, and multidomain) to better highlight the diverse applications of commercial Earth Observation (EO) data. Each part will feature presentations from data and satellite owners, with a particular focus on commercial data providers, illustrating the complementary nature of commercial EO data with other satellite missions (ESA and non-ESA), including the Sentinel missions. The three parts also aim to exchange experiences on applications powered by commercial EO data, and the presentations will illustrate the commercial data offer, imaging capabilities and use cases.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Improving Field Boundary Detection: Leveraging PlanetScope to Address Sentinel-2 Limitations

Authors: PhD Devis Peressutti, Nejc Vesel, PhD Matej Batič, Ziga Luksic, Matic Lubej, PhD Nika Oman Kadunc, Sara Verbič
Affiliations: Planet Labs (Ljubljana Office)
Field boundaries are essential for a wide array of agricultural applications, from precision farming to land-use monitoring. The Field Boundaries (FB) dataset is a Planetary Variable containing a set of polygons that represent the boundaries of agricultural fields. The current FB Version 1 (FB V1) model utilizes Sentinel-2 imagery, but its limitations, especially in resolution, restrict its accuracy and usability for smallholder farms and precision farming, particularly in regions like Africa and India, where field boundary data is often scarce. The primary challenges with FB V1 come from its inability to accurately delineate small, narrow, or elongated fields due to the low resolution of Sentinel-2 imagery. This results in polygons that are not comparable to digitized boundaries, underestimating field sizes and producing rounded corners due to rasterization and post-processing limitations. To address these challenges, we piloted a new approach using PlanetScope products. This pilot study demonstrated significant improvements in field boundary delineation. Our initial findings revealed consistent performance across different PlanetScope products, including PlanetFusion, Analysis-Ready PlanetScope, and Basemaps. This indicates that a unified deep learning model, trained on a combination of these sources, could seamlessly work with any of these products. Using PlanetFusion satellite imagery as input, we compared intersection-over-union metrics across different crop categories, revealing significant improvements for both arable crops and, most notably, grasslands, where the FB V1 model had previously struggled. The new model achieved a median intersection-over-union (IoU) of 0.859, compared to 0.697 for FB V1, highlighting improved delineation of field shapes, which are less underestimated, have sharper corners, and align more closely with ground-truth data.
Additionally, transitioning to PlanetScope Basemaps simplifies the workflow by removing the need for multi-temporal prediction, temporal fusion, and filtering required with Sentinel-2’s time series data. Basemaps, created from the best-quality scenes over a month, reduce data volume, computational load, and storage costs while minimizing the impact of clouds, shadows, and noise. This approach aligns well with agricultural cycles, as field boundaries typically do not undergo dramatic changes on a daily or weekly basis. The monthly composites are sufficient to capture seasonal and crop-cycle shifts while maintaining high accuracy. By integrating PlanetScope data, this updated approach enhances scalability and meets the growing demand for higher accuracy in agricultural monitoring. It also enables applications such as smallholder farm monitoring and precision agriculture. The transition to Planet data is a step toward delivering actionable insights for agricultural stakeholders worldwide.
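As a rough illustration of the metric behind the reported median IoU figures, intersection-over-union can be computed on rasterized field masks; the helper name, masks, and values below are illustrative only, not taken from the study.

```python
def iou(pred: set, ref: set) -> float:
    """Intersection-over-Union between two rasterized field masks,
    each given as a set of (row, col) pixel coordinates."""
    if not pred and not ref:
        return 1.0
    return len(pred & ref) / len(pred | ref)

# Toy example: a 10x10 reference field vs. a prediction that misses
# one row and one column (the kind of underestimation described above).
ref = {(r, c) for r in range(10) for c in range(10)}
pred = {(r, c) for r in range(1, 10) for c in range(1, 10)}
print(iou(pred, ref))  # 81 shared pixels over 100 in the union -> 0.81
```

In practice such scores would be computed per field polygon and aggregated as a median across crop categories, as described above.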
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: A Spectral Approach for Preliminary Mapping of Mosquito Habitats Using Planetscope SuperDoves Data in the One Health Framework

Authors: Tommaso Orusa, Annamaria Conte, Silvio D'Alessio, Carla Ippoliti
Affiliations: Istituto Zooprofilattico Sperimentale "G. Caporale" - Teramo
In recent years, investment in and the advancement of high-resolution Earth observation missions have grown significantly. Among the most promising platforms is Planet’s PlanetScope mission, which offers daily temporal resolution and roughly 3-meter spatial resolution globally, with spectral data comparable to Sentinel-2. However, one major challenge is the absence of mid-infrared bands, which are crucial for computing indices like the Normalized Difference Moisture Index (NDMI) and the Gao version of the Normalized Difference Water Index (NDWI). These indices, sensitive to water content in vegetation, soil, and surface features, are essential for assessing hydrological conditions, drought stress, and water availability. This limitation makes the product less common in applications where the identification of water is of paramount importance. While PlanetScope data are freely available for research purposes, applications in areas such as vector-borne disease modelling remain limited. In the context of mosquito-borne diseases, the interaction of three main components (host, pathogen, and vector) within a suitable environment is critical for disease transmission. Standing water and moist vegetative terrain serve as essential ecological factors, providing breeding sites and favorable conditions for mosquito survival, reproduction, and development. This research introduces a novel approach for mapping mosquito habitat suitability, hereinafter called mosquito-suitable prone areas (MSPAs), characterised by the presence of water, moist soil and vegetation. The method leverages a new spectral index that combines all PlanetScope SuperDoves surface-reflectance bands. This index condenses spectral information into a single value, enabling effective discrimination of water and moisture-rich vegetation from other landscape elements.
The primary aim was to define optimal thresholds for distinguishing water and water-in-vegetation zones, which represent the most favourable mosquito habitats. The study focuses on areas located in the Abruzzo and Molise regions of central Italy. Sixteen PlanetScope SuperDoves images were acquired over five months spanning all seasons of the year: November 2023, March 2024, May 2024, July 2024, and September 2024. The images were pre-processed: clouds, shadows and defective pixels were masked out using the UDM2 product, and the scenes were clipped to the study areas. A geospatial matrix, defined as the sum of surface reflectance values extracted from MSPA ground control points, was constructed for each period. Thresholds were established by identifying the minimum and maximum intersection points common to all spectral curves, ensuring robustness and stability across both temporal and spatial domains. For threshold comparison, different land cover types were analysed, including bare soil, crops, forests, grasslands, water, and urban areas. The threshold range of the spectral sum defining mosquito habitat suitability was determined to be 1000<∑ρ<9600. Reference spectral values were established using pure pixels sampled across all images. Preliminary results showed that the identified thresholds are stable in space and time. These findings show high sensitivity in distinguishing MSPAs. Future research will focus on integrating additional ground-truth data, refining thresholds for broader geographical areas, and performing cross-correlation analyses to enhance robustness. These efforts aim to broaden the applicability of Planet data by enhancing our understanding of mosquito habitats and their ecological determinants, capturing fine-scale environmental features. This level of detail allows for a more precise characterization of mosquito breeding and resting sites, providing critical insights into their behavior and habitat preferences.
By leveraging this high-resolution data, researchers can uncover hidden aspects of mosquito ecology, ultimately improving strategies for vector-borne disease monitoring and control, fostering technological advancements within the One Health framework.
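A minimal sketch of how such a spectral-sum mask could be applied, assuming an 8-band surface-reflectance cube and using the reported 1000 < ∑ρ < 9600 range; the function name, array layout, and toy pixel values are illustrative only, not the authors' implementation.

```python
import numpy as np

# Reported suitability range for the per-pixel spectral sum (scaled reflectance)
LOW, HIGH = 1000, 9600

def mspa_mask(bands: np.ndarray) -> np.ndarray:
    """bands: (n_bands, rows, cols) surface-reflectance cube.
    Returns a boolean mask of mosquito-suitable prone areas (MSPAs),
    i.e. pixels whose spectral sum falls inside the threshold range."""
    total = bands.sum(axis=0)          # per-pixel sum over all bands
    return (total > LOW) & (total < HIGH)

# Toy 8-band cube, 2x2 pixels
cube = np.zeros((8, 2, 2), dtype=np.int32)
cube[:, 0, 0] = 400    # sum = 3200  -> inside range (candidate MSPA)
cube[:, 0, 1] = 50     # sum = 400   -> too low (e.g. deep shadow)
cube[:, 1, 0] = 2000   # sum = 16000 -> too high (e.g. bright bare soil)
print(mspa_mask(cube))
```

Cloud/shadow masking with the UDM2 product would be applied before this step, as the abstract describes.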
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Enhancing land monitoring through Earth Observation commercial data in the European Environment Agency’s Copernicus Land Monitoring Service

Authors: Joanna Przystawska
Affiliations: EEA
The Copernicus Land Monitoring Service (CLMS) delivers vital environmental information through its Pan-European, Local, and Global components. The Pan-European and Local components, coordinated by the European Environment Agency (EEA), provide harmonized, high-resolution geospatial data supporting a range of applications, from policy development to environmental monitoring and sustainable resource management across Europe. This presentation focuses on the Pan-European and Local components, highlighting their role in advancing environmental governance through the integration of Very High Resolution (VHR) satellite data. The integration of VHR satellite data within CLMS demonstrates the transformative potential of commercial Earth Observation (EO) missions for land monitoring. Products such as Coastal Zones, Urban Atlas, Riparian Zones, Protected Areas, the EU-Hydro product, and Small Woody Features rely on the high spatial and temporal resolution of VHR data to deliver detailed, actionable insights for key applications. This session highlights diverse use cases where VHR data significantly enhance environmental monitoring efforts. For example, VHR imagery supports precise delineation of dynamic coastal environments, detailed mapping of urban sprawl, and robust assessment of ecological corridors in riparian and protected areas. A key advantage of VHR data is its capability to support change detection by capturing fine-scale temporal and spatial variations. This enables the monitoring of environmental changes, such as urban expansion, habitat degradation, and coastal erosion. While VHR imagery provides valuable detail, challenges include the need for robust processing algorithms and the management of large datasets to ensure meaningful interpretation of detected changes. Using Urban Atlas data, CLMS has facilitated the assessment of the quality and accessibility of urban green spaces across European major cities.
One of the presented use cases will focus on a scientific study that examined the relationship between the accessibility and quality of green spaces across Finland’s seven largest urban areas. These analyses provide crucial information for urban planners and policymakers, enabling the identification of areas requiring enhanced green infrastructure and supporting strategies to improve urban livability. By leveraging detailed VHR imagery, Urban Atlas ensures accurate delineation of urban features, further empowering sustainable urban development initiatives. Similarly, the mapping of Europe’s coasts using Copernicus Land data showcases the capability of CLMS products in monitoring coastal zones. These datasets enable detailed analysis of coastal morphology, land use, and ecosystem health, which are critical for managing dynamic coastal environments. The integration of VHR imagery improves the ability to track changes over time, supporting efforts to address challenges such as erosion, habitat loss, and the impacts of climate change. One of the standout applications of VHR data within CLMS is the Small Woody Features (SWF) product. This product uses VHR imagery to map small, often fragmented woody vegetation across the landscape, including individual trees, tree clusters, and other small vegetation types. Using machine learning (ML) algorithms, particularly automated methodologies, the SWF product allows for reliable detection and mapping of these features at a pan-European scale. The integration of VHR data significantly enhances the spatial resolution, enabling the detection of woody features that are often overlooked in traditional land cover products. This innovative approach shows how the combination of advanced technology and VHR data can offer useful insights into land use and vegetation patterns, helping with environmental monitoring, biodiversity management, and land-use planning.
In addition to operational CLMS products, a pilot activity exploring the integration of High Resolution (HR) thermal data with VHR optical imagery for inland water monitoring will be showcased. This innovative approach leverages advanced data science techniques to monitor water quality, detect thermal anomalies, and assess ecological impacts. By illustrating the impactful use of commercial EO data within the CLMS framework, this session emphasizes the importance of effective cooperation and the synergies between commercial and institutional data in addressing contemporary environmental challenges.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Spatial vs. Temporal: Trade-Offs in High-Resolution Satellite Imagery for National-Scale Hedgerow Mapping

Authors: Anna Köber, Dr Javier Muro, Dr. Marcel Schwieder, Dr Stefan Erasmi
Affiliations: Thünen Institute, Humboldt-Universität zu Berlin
Hedgerows play an important role in supporting biodiversity, improving soil quality and contributing to carbon storage. They increase woody biomass outside forests, reduce soil erosion on bare fields by acting as a wind barrier and provide habitats for a range of species. In recognition of their importance, the EU's Common Agricultural Policy (CAP) includes measures to conserve them. As part of this policy, monitoring, reporting and verification (MRV) is required to ensure compliance. This requires reliable data on the extent and location of hedgerows at the national level. However, mapping hedgerows on satellite imagery remains a challenge due to their dense structure and similarity to other woody features, making publicly available satellite imagery such as Sentinel-1 or Sentinel-2 sub-optimal for this task. This study evaluates whether commercial very high resolution satellite imagery combined with deep learning techniques can effectively map hedgerows across Germany. Furthermore, we aim to identify the optimal data source for hedgerow mapping and assess the trade-offs between temporal and spatial resolution and their impact on model performance. We compared two satellite imagery datasets: multi-temporal PlanetScope composites (3 m resolution) and mono-temporal SPOT-Pléiades fusion imagery (1.5 m resolution). For PlanetScope we had four seasonal composites capturing vegetation dynamics throughout the year (April, June, August and October), while for SPOT-Pléiades we had only one composite for the vegetation peak season, but with higher spatial detail. We trained a U-Net convolutional neural network (CNN) for semantic segmentation with a manually refined reference dataset from the state of Schleswig-Holstein in northern Germany. The performance of the two satellite datasets was compared at the national level using the F1 score and the Normalised Hausdorff Distance (NHD).
The NHD measures the average shortest distance between points on each boundary and quantifies the similarity between predicted and reference boundaries. Both models achieved an F1 score of around 0.65 and an NHD close to 0, indicating a good spatial fit between predictions and reference data. Both identified a heterogeneous distribution of hedgerows, with higher concentrations in northern Germany and sparser coverage in the south. However, the predictions at a national scale differed between the two models. While for most regions the predicted hedgerow area was similar for both sensors (variations within ±1 ha of hedgerow per 100 ha of land), there was a small but systematic bias towards more hedgerow area in the PlanetScope-based predictions. The observed differences in mapped area are influenced by differences in spatial resolution. SPOT-Pléiades, with its higher resolution, allows more accurate detection of very narrow, linear features. On the other hand, PlanetScope's multi-temporal imagery captures seasonal changes that help distinguish hedgerows from other vegetation. The variability in predicted area highlights the need for further research to better understand these differences. Nevertheless, both approaches provide valuable insights for national monitoring and help identify areas for potential hedgerow restoration and planting. The choice of sensor depends not only on user requirements but also on economic resources, as both data sources have different pricing policies.
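A small sketch of an average-shortest-distance metric in the spirit of the NHD described above; the symmetric averaging and the normalisation by the image diagonal are assumptions for illustration, since the abstract does not specify the exact formulation.

```python
import math

def avg_shortest(a, b):
    """Mean over points in boundary a of the distance to the nearest point in b."""
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

def normalised_hausdorff(a, b, diag):
    """Symmetric average shortest distance between two boundary point sets,
    normalised by the scene diagonal `diag` (assumed normalisation)."""
    return 0.5 * (avg_shortest(a, b) + avg_shortest(b, a)) / diag

# Identical boundaries give 0 (a perfect spatial fit, as reported above);
# a single pair of points 5 units apart in a scene with diagonal 10 gives 0.5.
print(normalised_hausdorff([(0.0, 0.0), (3.0, 4.0)],
                           [(0.0, 0.0), (3.0, 4.0)], diag=10.0))
```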
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: ENABLER Mission - A new design approach of a thermal infrared instrument for environmental monitoring and disaster response

Authors: Josep Pino, Davide Giglio, Paulo de Urmeneta Rubira, Sonia Navarro
Affiliations: Aistech Space
The ENABLER instrument is a novel satellite-based thermal infrared (TIR) remote sensing system designed for environmental monitoring and analysis of Earth's surface. By leveraging its high spatial, temporal, and spectral resolution capabilities, ENABLER addresses critical challenges in land applications such as agriculture, water resource management, urban heat monitoring, and natural disaster assessment. Aistech's constellation of 20 satellites will optimize ENABLER's performance for global coverage and near-real-time data delivery, capturing the multifaceted dynamics of the Earth's diurnal cycle at a spatial resolution of up to 30 meters and a temporal frequency of up to 3 hours anywhere in the world. Its innovative design includes multi-band TIR imaging sensors, high-accuracy onboard calibration systems, robust thermal anomaly monitoring capabilities and onboard pre-processing of all multispectral images and insight products. This new design approach will provide precise temperature measurements with an accuracy of 1 K and a sensitivity of 0.5 K, addressing the critical need for detailed thermal observations. The mission operates in conjunction with the Advanced Land and Environment Exchange (ALEX) platform, an AI-based data management and processing system that enhances ENABLER's functionalities. ALEX integrates a robust data processing and analytics system that enhances thermal data quality through advanced calibration algorithms, based on a combination of in-situ and in-orbit measurements. This constant calibration and temporal change analysis ensures high-quality, actionable insights for our customers.
ALEX delivers imaging data at various processing levels, from Level 1 brightness temperature to higher-level products such as Land Surface Temperature (LST) and Sea Surface Temperature (SST), for applications including wildfire monitoring, evapotranspiration estimation, and energy management (e.g. power plant monitoring), while urban applications benefit from heat island analysis and energy efficiency planning. ENABLER's precision and revisit frequency are particularly valuable for tracking temporal and spatial energy dynamics, enabling the monitoring of both human-induced and natural processes at multiple scales. As a proud contributor to the Copernicus Contributing Missions (CCM), Aistech aligns its Hydra constellation with Europe’s flagship Earth observation initiative. This integration amplifies ENABLER’s impact, offering critical data to complement the Copernicus program’s focus on environmental monitoring and climate change mitigation. The complementary nature of Copernicus and CCM data sources strengthens ENABLER’s role in monitoring and analyzing Earth’s thermal processes, supporting precise, timely decision-making in disaster response, resource management, and long-term climate resilience.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: Novel AI-Based Vessel Detection Service for PAZ1 Imagery

Authors: Juan Ignacio Cicuéndez, Miguel Chapela, Alejandro
Affiliations: Hisdesat
Synthetic Aperture Radar (SAR) imagery has been used extensively in maritime applications since its inception. The ability of these active sensors to provide their own illumination over wide swaths of the Earth enables image acquisition regardless of weather or sunlight conditions. Additionally, SAR sensors can cover large areas with resolutions suitable for extracting information about sea conditions and maritime vessel activity. These features have facilitated the development of operational Near Real-Time (NRT) monitoring systems, leveraging SAR imagery as a primary satellite source for generating products such as oil spill detection, vessel tracking, and ice monitoring, which are then delivered to national stakeholders. The PAZ1 satellite system plays a crucial role in the EMSA and Copernicus maritime services. Launched in 2018, this satellite platform ensures the reliable acquisition of SAR imagery and simultaneous Satellite AIS (Automatic Identification System) data, both highly suitable for maritime applications. Depending on the scale of the phenomena to be monitored, sea acquisitions can be conducted in Stripmap (30 km × 50 km), ScanSAR (100 km × 150 km), or Wide ScanSAR (2530 km × 200 km) modes. Once acquired, images are promptly downloaded at dedicated ground stations and processed under NRT conditions, delivering SAR-based maritime products in less than an hour (e.g. for EMSA). Hisdesat, as owner and operator of the PAZ1 satellite, developed an in-house supervised Constant False Alarm Rate (CFAR) algorithm for automatic vessel detection using SAR imagery. This algorithm identifies vessel positions and parameters from the imagery and correlates the results with NRT data from Satellite and Terrestrial AIS.
However, traditional inaccuracies in this approach include: 1) the presence of false positives (e.g., due to land masking errors or SAR ambiguities) and false negatives (e.g., due to localized variations in sea clutter); and 2) inaccuracies in determining vessel parameters such as length and beam, caused by physical effects like microwave interaction between the water and the vessel hull, motion-induced blurring during acquisition, and other factors. These limitations make the supervision process time-consuming and challenging for operators. Recent advancements in deep learning, particularly convolutional neural networks, have introduced new possibilities for vessel detection systems (VDS). These models offer the potential for greater accuracy and faster processing speeds, shifting the primary challenge from detection performance to the availability of large, high-quality training datasets. To address this, Hisdesat has built a robust training dataset by cross-referencing archived PAZ1 vessel detections with global AIS data, enhancing its reliability and robustness. Regardless of the detection methodology (CFAR or deep-learning based), operational vessel detection services must also deal with difficulties related either to the processing of images acquired outside the nominal imaging parameters (i.e., outside the “full-performance” modes) or to the detection of vessels in difficult environments. To address this issue, we leverage the polarimetric capabilities provided by PAZ1. SAR polarimetry (PolSAR) exploits the polarization diversity of SAR signals to provide more insight into the scattering properties of the imaged area, allowing the different scattering mechanisms present in each image pixel to be identified. Consequently, polarimetric techniques make it possible to optimize the detection of targets in such difficult environments or in the presence of unwanted background signals.
We analyse and statistically compare the performance of different vessel detection models for SAR images, based both on traditional algorithms and on new deep learning methods. We also investigate the limitations and applicability of these methods, and finally we evaluate a polarimetric optimization method that enhances the automatic detection of vessels, regardless of the detection method used.
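For readers unfamiliar with CFAR, a generic cell-averaging variant can be sketched as follows; this is a textbook 1-D illustration, not Hisdesat's supervised algorithm, and the parameter names and values are assumptions.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR over a 1-D power profile.
    For each cell, the clutter level is estimated from `train` cells on
    each side, skipping `guard` cells around the cell under test; the
    threshold multiplier alpha follows the classic CA-CFAR formula for
    a desired probability of false alarm `pfa`."""
    n = 2 * train
    alpha = n * (pfa ** (-1.0 / n) - 1.0)  # threshold multiplier
    det = np.zeros(len(power), dtype=bool)
    half = guard + train
    for i in range(half, len(power) - half):
        left = power[i - half : i - guard]            # training cells, left
        right = power[i + guard + 1 : i + half + 1]   # training cells, right
        noise = (left.sum() + right.sum()) / n        # local clutter estimate
        det[i] = power[i] > alpha * noise
    return det

# Example: flat sea clutter with one bright point target
power = np.ones(50)
power[25] = 1000.0
det = ca_cfar(power)
print(det[25], det[20])  # target cell detected, clutter cell not
```

Deep-learning detectors replace this hand-set threshold logic with learned features, which is the shift the abstract describes.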
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone N)

Poster: RADARSAT-2 – Past, Present, and Future: a Reliable Copernicus Program Provider

Authors: Ms Gillian Robert, Ms. Wendy Branson, Mr Jakub Smolen, Mr Olivier
Affiliations: MDA Space
MDA Space, through the use of the SAR C-band RADARSAT-1 and RADARSAT-2 satellites, has been supporting the Copernicus program, formerly known as Global Monitoring for Environment and Security, since May 2009. In particular, MDA Space has contributed to the maritime user community, specifically the Sea Ice program, providing a continued data supply for more than 15 years. This, combined with support to the European Maritime Safety Agency (EMSA) program, partly funded through the Copernicus program, has provided significant support to European Union maritime operational requirements. This presentation will provide a history of the partnership that Canada and MDA Space have with the European Space Agency (ESA) and the European community through the Copernicus program as it transitions to the new Rapid Response Desk Service, providing users with easy access to Copernicus Contributing Missions data. During the early phases of the Copernicus program, following the loss of the Envisat satellite in April 2012, MDA Space and the Canadian Space Agency (CSA) provided RADARSAT-2 resources to supply European operational Sea Ice users with uninterrupted data. MDA Space provided the same support following the loss of Sentinel-1B in January 2022. These agreements strengthened the long-term partnership between Canada and ESA. Since then, RADARSAT-2 images have been provided in near real-time using two reception facilities to meet the needs of the European Arctic, Baltic, Greenland and Antarctic regions. The capacity of the RADARSAT-2 satellite enables the continued monitoring of the dynamic ocean environment to satisfy the growing operational requirements of the European community. Broad-area coverage, user-selected acquisition planning, and C-band continuity are key features that make RADARSAT-2 an ideal sensor for the Sea Ice community and sea-ice analytics. Planning, in combination with known Sentinel-1 coverage, has allowed the Sea Ice community to fulfill their mandates.
The complementary nature of the RADARSAT and Sentinel-1 missions ensures that users can develop programs that require long-term access to data and have confidence that there will always be data available. For other maritime users, such as EMSA, the flexible programming, as well as the ability to meet very short timelines from data acquisition to product delivery, has also enabled EMSA to fulfill its mandate using thousands of images every year. RADARSAT-2 support to other European maritime monitoring programs has also expanded, including programs such as Global Fishing Watch, with their use of 14 years of historical RADARSAT-2 data representing 26 billion square kilometers. Other operational programs, such as Oil Spill Response services using MDA Space’s Emergency programming services, support emergency response. In addition to supporting the European maritime community, RADARSAT-2 continues to build a large archive of data over Europe and the rest of the world covering key areas of global interest. In collaboration with our ground receiving station partners around the world, we continue collecting RADARSAT-2 data that satisfy national, regional and international programs such as disaster monitoring and management, environmental impact assessment and climate change monitoring. As interest continues to build for both scientific and operational monitoring of our environment, a comprehensive archive of data will support continued understanding of our changing environment. This enables analysis techniques such as interferometry and change detection to use imagery from RADARSAT-2, Sentinel and other SAR sensors. It also contributes to the Emergency Activation services, to which RADARSAT-2 has contributed over the duration of the program.
With maritime users in mind, MDA Space’s next-generation Earth observation constellation mission, MDA CHORUS, will continue to provide C-band continuity, but with an inclined orbit allowing for greater coverage of critical European maritime areas. This paper will also look forward to how MDA CHORUS can support this critical maritime community and its applications.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: D.02.06 - POSTER - Foundation Models for Earth Observation: Current solutions with less labelled data to improve environment monitoring and future perspectives to revolutionize geospatial data discovery and utilization

This session will delve into cutting-edge developments in Earth Observation (EO) Foundation Models (FM). It will discuss current solutions, future perspectives and the implications of these models for the EO community. The session will also explore new possibilities enabled by multi-modal models, zero-shot applications, self-supervised learning, big data and large pre-trained models, examining their potential to transform EO data analysis and applications.

Topics:
- Sensor independence: FMs can process data from various sensors, including multi-/hyper-spectral, SAR, LiDAR, Very High Resolution (VHR) satellite data and more, enabling holistic analysis of Earth's dynamics.
- Benchmarking and Evaluating FMs: Establishing standardised evaluation metrics and fair benchmarks to assess the performance and capabilities of FMs in processing EO data, ensuring reliability and efficiency.
- Multimodality: FMs can adeptly handle diverse data modalities such as text, video and imagery, offering new approaches to EO data analysis and interpretation without requiring extensive labelled datasets which are rarely available in environmental applications (e.g., land, forestry, agriculture, water/ice or atmospheric phenomena that can be monitored with EO data).
- Fine-tuning FMs and Self-Supervised Learning (SSL) for downstream tasks, with an emphasis on environment monitoring applications currently under-represented in EO benchmarks, such as biophysical variable estimation or early warning/anomaly detection in satellite image time series.
- Big data: Over the past few decades, the availability of EO data has increased, providing unprecedented coverage of the Earth’s surface and atmosphere. Modern Earth System Models (ESMs), which operate at high resolutions in both space and time to simulate the evolution of Earth system components and predict the future state of the climate, estimate air pollution, and more, generate petabytes of data per simulated day. Data output and storage have already become a major bottleneck for high-resolution climate modeling. To address these challenges, approaches combining data engineering, AI, and information theory have shown great promise for various downstream applications. This session invites contributions on computational methodologies for engineering embedding representations to compress, index, tokenize, and fuse geospatial data. By focusing on topics such as embedding techniques, vector databases for data exploration, cross-modal alignment, and embedding compression, this session will provide insights into how these technologies can be applied to enhance data accessibility, sharing, and analysis in EO and ESM applications. These embeddings may facilitate efficient data transmission, data exploration/search, and cross-modal data alignment and reconstruction, such as converting vision to text or deriving surface reflectance from SAR data.
- Implications of FMs for the Community: Understanding the potential societal, environmental and economic impacts of implementing FMs in EO applications, fostering informed decision-making and resource management.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Gaussian Trees: Learning Universal Tree Detectors With Noisy Supervision

Authors: Dr. Dimitri Gominski, Dr. Martin Brandt, PhD Maurice Mugabowindekwe, Asst. Prof. Xiaoye Tong, PhD Siyu Liu, PhD Sizhuo Li, Dr. Florian Reiner, Prof. Dr. Rasmus
Affiliations: University Of Copenhagen
Mapping small objects such as trees at a global scale poses significant challenges, due to the need for fine-grained supervision which often requires extensive human labeling. Foundation models trained on large-scale datasets offer a promising solution, but their application to such tasks remains underexplored. In this work, we develop models for mapping individual trees globally without manual labels, leveraging over 1 billion pseudolabels. Tree positions are mined at scale from canopy height models derived from aerial LIDAR scans, with curation efforts to limit noise and ensure homogeneous label quality. We train anchor-free detection models with Gaussian modeling on PlanetScope satellite imagery at 3 m ground resolution. Each pseudolabel generates a Gaussian kernel at the given position, with a learnable scale parameter to account for registration issues. We assume that the uncertainty on tree positions correlates with crown size, which we verify in practice with kernels fitting the visual extent of the crown on satellite imagery. The features learned from the pseudolabels generalize well across diverse landscapes, handling variations in image quality effectively. Cross-sensor experiments on Gaofen and RapidEye further validate the model's transferability, highlighting its utility in heterogeneous data environments. Additionally, we show how our foundation model serves as a robust base for fine-tuning in specific regions, enabling tailored applications in forestry, biodiversity monitoring, and carbon accounting. This study underscores the potential of large-scale data mining to address labeled data scarcity in Earth observation. By minimizing human involvement, our pseudolabel approach reduces biases and enables rapid upscaling. The high synergy with deep vision models that can handle label noise offers a scalable solution for global environmental monitoring.
Furthermore, heatmap-based models have the advantage of producing spatialized, more interpretable results, compared to end-to-end black box detection approaches. This allows a degree of explainability, and opens the way for data ensembling to further stabilize and enhance performance. Our findings represent a first step toward a universal tree map, which could serve as a critical resource for ecological studies and global environmental policies.
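The heatmap targets described above can be sketched as follows. This is a hypothetical illustration of rendering one Gaussian kernel per pseudolabelled tree position, not the authors' code: the function name is ours, and the per-point scales are given here as fixed values, whereas the paper learns the scale parameter.

```python
import numpy as np

def render_gaussian_heatmap(points, sigmas, shape):
    """Render one Gaussian kernel per pseudolabelled tree position.

    points : iterable of (row, col) tree centres in pixel coordinates
    sigmas : per-point kernel scale; the paper learns this parameter to
             absorb registration error, here it is simply given
    shape  : (H, W) of the output heatmap
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    heatmap = np.zeros(shape, dtype=np.float32)
    for (r, c), s in zip(points, sigmas):
        g = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * s ** 2))
        # where kernels overlap, keep the per-pixel maximum instead of summing
        heatmap = np.maximum(heatmap, g)
    return heatmap

target = render_gaussian_heatmap([(8, 8), (20, 25)], [2.0, 4.0], (32, 32))
```

Taking the maximum over overlapping kernels keeps the target in [0, 1], so the detector's heatmap head can be trained with a simple pixel-wise regression loss.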

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: The Deep Coding Strategy for Geolocation-Aware Deep Learning in Remote Sensing

Authors: Mojgan Madadikhaljan, Michael Schmitt
Affiliations: University of the Bundeswehr
Remote sensing images and labels used in Earth observation applications exhibit significant heterogeneity influenced by geolocation-specific characteristics, making it difficult to train global models that perform consistently across different regions. In this work, we introduce the Deep Coding strategy, a new approach for incorporating geolocational information into deep neural networks for a variety of Earth observation applications. This method involves defining regional subnetworks within the network architecture, each dedicated to a subset of data with similar geolocational characteristics. The approach aims to stabilize training performance and enhance the predictive power of deep neural networks by accounting for region-specific details. The proposed methodology involves two primary steps. 1. Dataset Subdivision: The dataset is initially analyzed to understand its characteristics, biases, and distributions. Based on this analysis, the dataset is divided into subsets, referred to as regions, with similar characteristics. These characteristics can include shape, color, temperature, material, etc. The subdivision is crucial for geolocation-aware training and is tailored to the task at hand. 2. Deep Coding Strategy: The deep coding strategy involves creating separate subbranches within the deep blocks of a convolutional neural network (CNN). These subbranches are fed only regional data and therefore focus on geolocation-specific details, enabling the model to benefit from exposure to global samples while concentrating on regional characteristics. The UNet architecture is used as the backbone, but the strategy is model-agnostic and can be applied to other architectures like ResNet, AlexNet, and VGG. Different use cases are employed to validate the effectiveness of the proposed method. First, we investigated the task of geolocation-aware land cover classification for Sentinel-2 images across the globe and compared it to a non-geolocation-aware model.
Faster stepwise convergence and enhanced visual predictions motivated us to employ the method on further downstream applications. Next, using Sentinel-2 imagery, we compare the performance of regional, global, and geolocation-aware models in detecting building footprints in Stuttgart, Germany, and San Francisco, USA. The geolocation-aware model demonstrates improved performance compared to the global model and other state-of-the-art geolocation-aware models. The third use case involves classifying forests from single-channel thermal infrared (TIR) imagery. The dataset includes globally distributed Landsat 8 thermal images with corresponding ESA WorldCover labels. We use the Köppen-Geiger climate classification to divide the dataset into climate regions (tropical, arid, temperate, cold, and polar). The geolocation-aware model outperforms the non-aware and cyclic coordinate-encoded models in various regions. The experiments reveal that the geolocation-aware model achieves improved qualitative and quantitative performance compared to other models, particularly in applications where the data does not provide extra information for the model to decode geolocation differences during training. During our experiments, we observed the importance of a balanced dataset and of sufficient data within each region to avoid overfitting and ensure robust model performance. The experiments also highlight that the geolocation-aware model stabilizes training and captures regional details more effectively than global models for several regions. We conclude that incorporating geolocation information into deep learning models for remote sensing tasks significantly improves model performance. The proposed geolocation-aware deep coding strategy allows for the development of a single global model that leverages regional characteristics without the need for multiple region-specific models.
The methodology is shown to be effective in several different remote sensing applications, demonstrating its potential for broader use in Earth observation tasks.
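The routing idea behind the regional subbranches can be sketched in a toy, framework-free form. The function name, the use of plain linear layers, and the layer shapes are illustrative assumptions, not the authors' UNet-based implementation:

```python
import numpy as np

def region_coded_forward(x, region_ids, shared_w, branch_ws):
    """Toy forward pass: a shared trunk, then one subbranch per region.

    x          : (N, D) batch of feature vectors
    region_ids : (N,) integer region assignment of each sample
    shared_w   : (D, D) weights shared by all samples
    branch_ws  : dict mapping region id -> (D, D) region-specific weights
    """
    h = np.maximum(x @ shared_w, 0.0)   # shared trunk with ReLU
    out = np.empty_like(h)
    for r, w in branch_ws.items():
        mask = region_ids == r          # route each sample by its region
        out[mask] = h[mask] @ w
    return out
```

Because the shared trunk sees every sample while each subbranch sees only its region's samples, the model is exposed to global data yet specializes per region, which is the core of the deep coding idea.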

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Generating global-scale embeddings for enhanced analysis of Sentinel-1 and Sentinel-2 data

Authors: Marcin Kluczek, Mikolaj Czerkawski, Jędrzej Bojanowski S.
Affiliations: CloudFerro S.A., Φ-lab, European Space Agency (ESA)
The exponential growth of Earth Observation (EO) data, driven by the Copernicus Earth observation programme, presents unique opportunities and challenges for leveraging AI and machine learning techniques. This work introduces a global-scale embedding framework for EO data, designed to enhance the usability and analytical capabilities of large-scale datasets derived from Sentinel-1 and Sentinel-2 imagery. We adopt the Major TOM standard for managing large-scale EO data and its corresponding AI-ready datasets, spanning terabytes of preprocessed Sentinel-1 RTC and Sentinel-2 L1C/L2A data. Our approach uses cutting-edge models such as SigLIP, SSL4EO, and DinoV2 to precompute model embeddings: efficient, AI-enabled descriptors of the data. These models are used to produce robust, semantically rich embeddings, enabling advanced analysis and interpretation. The embeddings, processed at terabyte scale, enable a wide array of downstream applications, such as text-to-image and image-to-image retrieval, as well as zero-shot classification. These capabilities provide a significant step forward in integrating EO data with artificial intelligence workflows, offering novel insights into global phenomena through unsupervised learning. By employing transfer learning techniques and multi-modal frameworks, our work bridges gaps between disparate data sources and modalities, enabling more cohesive and actionable insights. Using the CREODIAS cloud platform, the developed system processes TB-scale data efficiently, showcasing scalability and practical applicability for real-world EO challenges. Through a series of experiments, we validate the effectiveness of the generated embeddings, demonstrating their utility in tasks such as land cover classification, disaster monitoring, and environmental analysis. Our results highlight the robustness of our approach in diverse scenarios, emphasizing its potential for empowering data-driven decision-making at global scales.
This presentation will explore the methodologies, implementation strategies, and application outcomes of embedding-based analysis in Earth observation, emphasizing the integration of advanced computational techniques. Key topics include leveraging cloud computing platforms for scalable data processing, GPU optimization for accelerating model training and inference, and multithreaded CPU strategies for efficient parallel computation. Additionally, we will discuss the design and deployment of robust pipelines capable of processing embeddings at a global scale.
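As an illustration of the retrieval applications mentioned above, a minimal image-to-image search over precomputed embeddings reduces to cosine similarity against the embedding bank. The function name and toy embedding sizes are our assumptions, not part of the described system:

```python
import numpy as np

def topk_similar(query, bank, k=3):
    """Image-to-image retrieval over precomputed embeddings.

    query : (D,) embedding of the query patch
    bank  : (N, D) matrix of precomputed embeddings, one row per image
    Returns the indices of the k most similar rows by cosine similarity.
    """
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return np.argsort(-(b @ q))[:k]
```

Because the embeddings are computed once and stored, a query at global scale costs only a matrix-vector product, which is what makes terabyte-scale retrieval practical.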

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Forest Land Use Mapping: the Challenge of the FAO FRA RSS Dataset for Weakly Supervised Semantic Segmentation

Authors: Dr. Timothée Stassin, Dr. Diego Marcos, Jan-Hendrik Fabricius, Adolfo Kindgard, Simon Besnard, Begüm Demir, Frederic Achard, Dr. Anssi Pekkarinen, Martin Herold
Affiliations: GFZ German Research Centre for Geosciences, Inria, Technische Universität Berlin, Food and Agriculture Organization of the United Nations, Rome, Italy, European Commission Joint Research Centre
Land use monitoring and reporting is challenging. Distinguishing between different land uses is inherently complex: a fruit orchard (agricultural use) or a city park (urban use) are not forests from a land use perspective, although these areas may have a similar tree cover density from a land cover perspective. Conversely, areas devoid of tree cover such as temporarily unstocked forests are still forests from a land use perspective. Forest land use classification has thus far extensively relied on expert interpretation, for example through the regular Global Forest Resources Assessment of the Food and Agriculture Organization of the United Nations (FAO FRA) and its participatory Remote Sensing Survey (RSS). In the FAO FRA 2020 RSS, more than 800 local experts were trained in the visual interpretation of remote sensing imagery and consistently analyzed 400 000 sample locations. These surveys provide high-quality interpretations at sample locations for regional and global analysis, but no spatially explicit information on forest land use distribution. Such maps would be particularly valuable in informing global and regional forest conservation and restoration actions, and critical to the implementation of the recent European Union Regulation on Deforestation-free products (EUDR). In this respect, the European Commission Joint Research Centre recently released the Global map of Forest Cover (GFC 2020). The GFC 2020 is a Forest/Non-Forest map that was developed by combining existing land cover maps with a series of overlays and decision rules to align it with the land use definition of forest. Here, we present how artificial intelligence, in combination with Earth Observation and FAO FRA RSS ground truth data, has the potential to bridge the technology gap between plot-level analyses and map generation.
Besides the inherent complexity of forest land use assessment, we will focus on the methodological challenge imposed by the FAO FRA RSS dataset for training segmentation models. The FAO FRA RSS dataset only contains weak labels: class proportions for hexagonal areas of ca. 40 ha, and the main class for the 1 ha square centroids. In the absence of segmentation masks, we therefore rely on Weakly Supervised Semantic Segmentation (WSSS) model training, following a two-step approach. First, we train WSSS models for Land Cover segmentation using classes from the WorldCover 2020 map as labels and associated Sentinel composites as input. While training is done using the WorldCover class proportions, evaluation is carried out using the WorldCover pixel-level information. Second, the WSSS models pre-trained on Land Cover segmentation are transferred to the Land Use segmentation task: fine-tuning with the FRA RSS class proportions, and evaluation against the GFC 2020 map.
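A common way to train a segmentation model from class proportions alone, in the spirit of the WSSS setup above, is to aggregate the per-pixel predictions into predicted proportions and penalise their divergence from the weak labels. This sketch is our illustration of that principle, not the authors' exact loss:

```python
import numpy as np

def proportion_loss(pixel_probs, target_props):
    """Weak-supervision loss from area-level class proportions.

    pixel_probs  : (H, W, C) per-pixel class probabilities from the model
    target_props : (C,) class proportions reported for the whole area
    """
    pred_props = pixel_probs.mean(axis=(0, 1))  # pixels -> predicted proportions
    # cross-entropy between target and predicted proportion vectors
    return float(-np.sum(target_props * np.log(pred_props + 1e-8)))
```

The loss only constrains the spatial average of the predictions, so the model must learn where each class sits from the imagery itself; this is exactly what makes evaluation against pixel-level references (WorldCover, GFC 2020) necessary.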

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Foundation Model for the Mapping of Red-Listed Biotope Types to Enable Generalization on Spatially and Temporally Independent Aerial Images as Basis for a Monitoring With Sentinel-2 Data

Authors: Vera Sons, Dr. Annett Frick, Benjamin Stöckigt, Marina Höhne, Begüm Demir, Leonard Hackel
Affiliations: University of Potsdam, Luftbild Umwelt Planung GmbH, Technical University of Berlin
The European Union decided to establish a network of areas (Natura 2000) that ensures the preservation of natural biotopes of the wildlife in Europe. The conservation of biotope types plays an important role in the protection of biodiversity and the fight against climate change. Red-listed biotope types are endangered and require frequent monitoring and the implementation of appropriate conservation measures. One conservation area can contain multiple biotope types that need different, individual conservation measures. It is therefore necessary to know the exact boundaries and labels of the different biotope types in each conservation area. Monitoring biotope types manually requires frequent terrestrial mappings and site inspections. That is time-consuming and requires a large workforce, especially because the areas are located all over Europe. Remote sensing data is gathered remotely and can be used to get a comprehensive coverage of the respective areas without the necessity of site inspections. The data can be recorded, for example, from planes or satellites with different spatial, temporal and spectral resolutions and is stored in raster images. The remote sensing images can be used to generate mappings of the conservation areas, instead of terrestrial mappings through site inspections. Different remote sensing platforms are suitable for different tasks. The aerial images from planes can be used to create detailed mappings of the conservation areas. Aerial image flights are typically carried out at irregular intervals, which is not suitable as a basis for constant monitoring. Satellite images typically come in a lower spatial resolution but higher and more frequent temporal resolution, compared to images from planes. Satellite data could be used to track changes in the respective conservation areas. Deteriorations or improvements in the status of the biotope types could be monitored.
The low spatial resolution of satellite images makes the identification of biotope types hard. A solution would be to create detailed mappings with high-resolution aerial images as a basis and use satellite data for the constant monitoring of changes. As a first step, we focus our work on the mapping of biotope types using aerial images. The remote sensing images can be analyzed manually or – with the right software – automatically. Automation enables us to process and analyze data on larger scales than manual methods would allow: an algorithm can be applied more frequently and can process areas more quickly. To create a mapping from aerial images, an automatic analysis requires software that can identify the types of biotopes and their exact shapes in a given image. Such a complex problem is typically solved with artificial intelligence. Deep learning algorithms (like Convolutional Neural Networks or Vision Transformers) are typical state-of-the-art options for this kind of task. They can be trained with existing data and the resulting model can be applied on newly acquired data. A deep learning model that monitors biotope types should be trained to be applicable to new natural conservation areas and different time steps of the respective areas. This means it needs to be applicable to spatially and temporally new data. This level of generalization requires a lot of training data from different areas and acquisition dates that covers all biotope types to a sufficient degree to work satisfactorily. A general problem in remote sensing is the scarcity of reference data. There are no mappings available to the public for most areas in Germany, and nationwide open-source aerial imagery is not available either. A model that can include labeled and unlabeled data in the training process can include the most data and is the most promising approach for the task at hand.
The limited access to labeled data could be overcome by the use of a foundation model (e.g., SatLasNet or FoMo-Bench). These models are typically trained with unlabeled data by some form of self-supervised learning (e.g., contrastive learning or masked image modeling). This is a common approach to improve the generalization abilities of models. Foundation models are trained on large datasets. These datasets are typically more general and more diverse, which makes the models applicable to several tasks. Foundation models can be fine-tuned with labeled data for use on specific tasks. We are developing a deep learning model that can map biotope types in national natural conservation areas all over Germany by semantic segmentation of high-resolution digital orthophotos. We are ideally using a pretrained foundation model. If there is no suitable foundation model available, we plan to train a foundation model on the nationwide available current and historic unlabeled digital orthophotos. We will use the foundation model that has been trained on diverse remote sensing data as a basis. The foundation model is fine-tuned with the available labeled data in Germany. This final model will be tested on spatially and temporally independent data to estimate its generalization performance. Such a model is of particular interest to governmental agencies, natural conservation and environmental planning organizations. Very high resolution (VHR) satellite data has spatial resolutions similar to aerial images and a temporal resolution higher than that of non-commercial satellite data. In the future, the model could be adapted to the use of VHR satellite data. That would make the distinction between mapping with aerial images and monitoring with satellite data unnecessary and the process more efficient.
The use of deep learning models to map types of biotopes is a relatively new field of research and is not widely applied so far but it could play an important role in accelerating and expanding the protection of endangered biotope types in the near future.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: DeepFeatures: A Deep Learning Approach to Dimensionality Reduction of Spectral Indices for Scalable Earth System Analysis

Authors: Julia Peters, Dr. Martin Reinhardt, Dr. Guido Kraemer, Prof. Dr. Miguel Mahecha, Dr. Karin Mora
Affiliations: Leipzig University
To capture the dynamics of the Earth system from remote sensing data, a plethora of spectral indices has been developed. While these indices provide diverse perspectives on phenomena such as gross primary production, tree mortality, and biodiversity monitoring, their high dimensionality poses challenges to their usability. Particularly, in machine learning applications, high correlations among indices create redundancies, and the high dimensionality necessitates larger, more complex models. This contributes to the risk of overfitting, and leads to inefficiencies in processing workflows, including higher memory usage, longer processing times, and greater computational demands. The DeepFeatures project addresses these challenges using a deep learning-based methodology. Our approach proposes to derive a compact set of latent indices that retain the most essential information from the original spectral indices while also encoding temporal dynamics and spatial context. We employ an autoencoder to reduce the dimensionality of spectral indices for any given spatio-temporal coordinate. To encode spatial and temporal dynamics we include the neighborhood of the coordinate in the analysis. By disentangling and compressing the spectral indices with an autoencoder, we aim to produce decorrelated representations optimized for data-driven analysis workflows. This approach seeks to reduce redundancy and facilitate the development of simpler, more robust, and efficient downstream models. Our ultimate objective is to derive indices that enable scalable Earth system analysis and inference for diverse applications such as comparing phenological features using satellite and crowd-sourced data, analyzing the ecological impacts of open-pit lignite mining, and studying greening trends across landscapes. A central focus of DeepFeatures is the creation of accessible and reproducible processes for generating and transforming Sentinel-2 spectral indices into latent indices. 
The project aims to support a variety of research applications and foster collaboration in the study of Earth system dynamics. This presentation will showcase the DeepFeatures methodology and its potential to contribute to scalable and efficient Earth system science. The project DeepFeatures is funded by ESA's AI4Science activity. Website: https://rsc4earth.de/project/deepfeatures/
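As a hedged illustration of the dimensionality-reduction step described above, the optimal linear encoder/decoder pair (the linear special case of an autoencoder) can be computed directly from an SVD. The project's actual autoencoder is nonlinear and also encodes spatio-temporal context, which this sketch omits; the function name is ours:

```python
import numpy as np

def fit_linear_codec(X, k):
    """Fit the optimal (least-squares) linear encoder/decoder for X.

    X : (N, D) samples of D correlated spectral indices
    k : latent dimensionality, k << D
    Returns encode (D indices -> k latent indices) and decode (reconstruction).
    """
    mu = X.mean(axis=0)
    # the top-k right singular vectors span the best k-dim linear subspace
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    W = vt[:k].T  # (D, k) projection matrix

    def encode(x):
        return (x - mu) @ W

    def decode(z):
        return z @ W.T + mu

    return encode, decode
```

When indices are highly correlated, a small k already reconstructs the inputs almost perfectly, which is exactly the redundancy the latent indices are meant to remove.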

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: FINE-TUNING FOUNDATION MODELS IN EARTH OBSERVATION USING A MULTI-OBJECTIVE OPTIMIZATION STRATEGY

Authors: Dr. Parth Naik, Dr. Sam Thiele, Dr. Richard
Affiliations: Helmholtz-Zentrum Dresden-Rossendorf
Pre-trained Foundation Models (FMs) have emerged as transformative tools across various domains, including Earth Observation (EO), owing to their remarkable capabilities developed through training on massive and diverse datasets [1]. These extensive datasets empower FMs to adapt rapidly to a wide range of EO applications, such as land cover classification, change detection, disaster management, and resource monitoring [2][3]. However, fine-tuning FMs for these diverse applications poses significant challenges [4], as it requires balancing multiple and often conflicting objectives. These include maximising task-specific accuracy, minimising computational resource requirements, and ensuring robust generalisation across varying geographies, spectral bands, and temporal conditions. This study develops a robust and novel framework for fine-tuning FMs using multi-objective optimization techniques. Our approach frames the fine-tuning process as a multi-objective optimization problem, targeting key objectives such as maximising task-specific accuracy, minimising inference latency (for real-time applications), reducing model size, and enhancing generalisation across heterogeneous and multi-source satellite EO datasets. The framework encodes model hyperparameters (e.g., learning rates, layer-specific tuning factors, weight decay) into a structured representation and leverages a non-dominated sorting genetic algorithm (NSGA) to explore trade-offs and identify optimal solutions. The study aims to demonstrate the efficacy of the proposed approach using a pre-trained Vision Transformer (ViT) on multi-spectral satellite data. The performance of fine-tuned ViT models will be evaluated on various tasks, including feature detection and classification, producing a diverse set of candidate models that balance accuracy and efficiency while outperforming baseline methods.
Importantly, the resulting optimal solutions would offer decision-makers flexibility, enabling them to prioritise operational needs such as real-time processing or high precision for specific EO tasks. This work would highlight the potential of multi-objective optimization, particularly evolutionary optimizers like NSGA, for addressing the complex demands of EO applications and advancing the deployment of foundation models in real-world geospatial scenarios. Future research will extend this framework to incorporate additional objectives, such as fairness and energy efficiency, further bridging the gap between foundational AI models and sustainable EO solutions. References [1] M. Zhang, B. Yang, X. Hu, J. Gong, and Z. Zhang, “Foundation model for generalist remote sensing intelligence: potentials and prospects,” Science Bulletin. Elsevier BV, Sep. 2024. doi: 10.1016/j.scib.2024.09.017. [2] D. Wang et al., “Advancing Plain Vision Transformer Toward Remote Sensing Foundation Model,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61. Institute of Electrical and Electronics Engineers (IEEE), pp. 1–15, 2023. doi: 10.1109/tgrs.2022.3222818. [3] L. Scheibenreif, J. Hanna, M. Mommert, and D. Borth, “Self-supervised Vision Transformers for Land-cover Segmentation and Classification,” 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, pp. 1421–1430, Jun. 2022. doi: 10.1109/cvprw56347.2022.00148. [4] W. Li et al., “Segment Anything Model Can Not Segment Anything: Assessing AI Foundation Model’s Generalizability in Permafrost Mapping,” Remote Sensing, vol. 16, no. 5. MDPI AG, p. 797, Feb. 24, 2024. doi: 10.3390/rs16050797.
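The core of NSGA-style selection is non-dominated sorting over the competing objectives (e.g., validation error, inference latency, model size, all to be minimised). A minimal sketch of extracting the first Pareto front follows; the objective tuples are illustrative, not results from the study:

```python
def non_dominated_front(points):
    """Return the first Pareto front of a list of objective tuples.

    Each tuple holds objectives to be minimised, e.g. (error, latency) for
    one candidate fine-tuning configuration. A point is dropped if some
    other point is at least as good in every objective and not identical.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[j] <= p[j] for j in range(len(p))) and q != p
            for k, q in enumerate(points)
            if k != i
        )
        if not dominated:
            front.append(p)
    return front
```

The surviving front is what gives decision-makers the flexibility described above: every member is an optimal trade-off, and the choice among them (fast vs. accurate vs. small) becomes an operational decision rather than a modelling one.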

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Insights for a Reliable Model-selection in Multi-modal Biomass Estimation with Foundation Models and other State-of-the-art Baselines.

Authors: Kalifou René Traoré, Vytautas
Affiliations: German Aerospace Center
The mapping of the global vegetation structure and its biomass plays an important role in supporting the monitoring of the environment and the impact that society has on it. For instance, the monitoring of forest biomass in certain regions of the world can shed light on the evolution of reforestation initiatives, or even the local use of sustainable agricultural practices, and guide relevant policies towards more desirable outcomes. The mapping of vegetation structure is usually supported by the manual inventory of forests and the recording of numerous attributes (e.g., tree height and diameter). Useful numerical quantities, such as above-ground biomass (AGB), can then be derived from such attributes. However, due to the difficulty and time cost of manually collecting this ground truth, it is nowadays complemented by airborne measurements using LIDAR technology, covering and assessing areas of interest. Then, in order to estimate such quantities for neighboring areas or other regions of the world, state-of-the-art AGB estimation methods rely on the collection of matching satellite imagery, to be coupled with the ground truth quantities. Moreover, modern AGB estimation methodologies make use of statistical models to learn and predict biomass from such datasets. A conventional experimental setting for AGB estimation consists of pixel-wise regression using optical multi-spectral (Sentinel-2) or radar-based (Sentinel-1) imagery. In our work, we are interested in a better understanding of the performance of state-of-the-art methods. We benchmark baselines such as U-Net models, as well as more recent foundation models, using the recent Biomassters dataset. In particular, we explore the influence of the input sensors, i.e., multi-modality, and the choice of the backbone architecture (e.g., encoder, decoder) on the performance and cost of the solutions.
We aim to provide reliable insights for model selection tailored to each AGB experimental setting, taking advantage of current advances in geo-foundation models for AGB estimation.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: From Cradle to EO: 10 Child-Inspired Generic Tasks for Foundation Models

Authors: Imen Kaabachi, David Petit, Mohamed Amine Bouchnak
Affiliations: Metaplanet
Foundation models are generic models that use self-learning techniques to learn general concepts after being trained on huge data sets utilizing pretext tasks. Afterward, these foundation models can be employed and trained on smaller data sets, or even no data with zero-shot learning, for specific Earth observation applications. This work proposes and analyzes the performance of a list of generic pretext tasks that can be employed to train an Earth Foundation model using self-learning. The tasks are derived from the typical tasks that educators and child therapists recommend for children to help them develop their skills. They have been adapted to meet the specific requirements of Earth Observation. This should assist in training a genuinely generic foundation model, provided that the foundation model has a suitable architecture that is able to learn these concepts. In the past years, most approaches to building a foundation model in EO have been founded on the premise that bigger models and more data (of different modalities) will unlock the capacity to build more generic foundation models. Metaplanet has reviewed more than one hundred foundation models and analyzed their training approaches to categorize them. State-of-the-art methodologies address various challenges of geospatial data, including spatial-temporal heterogeneity, spatial redundancy and complexity. Additionally, novel architectures were introduced to better perform remote-sensing tasks. Advances include the incorporation of spatial and temporal relationships within the positional encoding (e.g., 3D token generation to enhance spatial-temporal coupling) to encode patterns captured over different timestamps or spectral bands. Dynamic masking strategies and patch selection processes quantify the amount of information in patches by measuring indicators of texture and content. They therefore ensure models focus on finer details and provide additional contextual information.
Versatile architectures were also proposed to adapt to unseen modalities and varying sizes, resolutions, time-series and regions, and to learn the complementary information between different sensors observing the same location. In addition, multi-stage learning approaches like consecutive pre-training, where a model is initially pre-trained on general-purpose data and then pre-trained on domain-specific data, were introduced to allow models to gradually adapt to the specific characteristics of the target domain, improving their capability of knowledge transfer. Although larger models and improved architectures are essential for the development of additional capabilities, the significance of the pretext tasks employed in self-learning is still overlooked, whereas they are critical in determining the foundation model's actual capabilities. According to Piaget's cognitive development theory, infants undergo four developmental stages: the sensorimotor stage (less than two years), the preoperational stage (2-7 years), the concrete operational stage (7-11 years), and the formal operational stage (more than 12 years). We have chosen ten pretext tasks that are appropriate for training Earth Observation Foundation Models through self-learning, up to the concrete operational stage. They are classified into categories such as creativity, perception of objects, perception of space, mathematics and logic, abstraction of concepts, problem-solving, memory, perception of time and causality. The team has employed the PhilEO framework to evaluate the performance of a few foundation models by examining the impact of each pretext task individually and in combination for self-training. The PhilEO framework, which was developed by the Phi-lab of the European Space Agency, provides a flexible, consistent benchmark for evaluating these trained foundation models.
This method also enables the classification of the learning capacity of foundation models by type of cognitive task. The proposed principles are generic and have the potential to enhance the training quality of any foundation model.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: EO4ConStat - Earth Observation Data and AI for Construction Statistics

Authors: Thorsten Dahms, Carola Stolle, Georg Starz, Maren Köhlmann, Stefan Irrgang, Dorothee Stiller, Dr. Michael Wurm, Dr. Michael Hovenbitzer, Frederik Stellmach
Affiliations: Federal Agency for Cartography and Geodesy (BKG), Federal Statistical Office Germany (StBA), Earth Observation Center (EOC) German Aerospace Center (DLR)
EO4ConStat is a joint project between the Federal Agency for Cartography and Geodesy (BKG), the Federal Statistical Office of Germany (StBA), and the German Aerospace Center (DLR). The main focus of the project is to detect construction sites in remote sensing data, more specifically in high-resolution aerial imagery, using the Segment Anything Model (SAM) developed by Meta AI. The collaboration aims to enhance statistical accuracy and pinpoint areas of heterogeneous data quality, thereby providing quality control for construction statistics in Germany. The presented framework describes the method used to collect reference data on building construction sites from 10 cm digital orthophotos over the state of North Rhine-Westphalia. Consistent with the official statistics on construction starts and completions, which will be used for comparison with the results, the project includes data for multiple years (2018-2023). Given the complexity and heterogeneity of construction sites, fine-tuning SAM is critically important. Finding and labelling training data for this project, as with any deep learning task, is both an important and arduous undertaking. To facilitate and accelerate this process, a semi-automated framework was developed in which additional data is used to pre-identify potential construction site locations, reducing the number of images to a more manageable set. To achieve this, firstly, a yearly dataset of building perimeters collected by the federal states of Germany was used to detect areas with many changes between years. Secondly, buildings as well as land use features tagged with “construction” were extracted from OpenStreetMap data. Both of these information sources are intersected with property perimeters, provided as a dataset by the federal state of North Rhine-Westphalia, to create pre-labels.
In a second step, these pre-labels are checked and adjusted manually, and every site is classified into one of four categories depending on construction progress. The goal is to encompass a diverse range of construction sites across various environments and residential contexts within the region of interest, maximizing the model’s ability to accurately classify construction sites. So far, a total of around 3,500 construction sites have been labelled and categorized. As the project continues, the next step is the fine-tuning of SAM and the subsequent segmentation of the test data. First results will be part of the presentation.
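The pre-labelling step can be illustrated with a deliberately simplified sketch. Real parcel geometries are polygons handled with GIS tooling; axis-aligned boxes are used here only to convey the intersection logic, and all names are hypothetical:

```python
def intersects(a, b):
    """Axis-aligned box intersection; boxes are (xmin, ymin, xmax, ymax)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def make_prelabels(parcels, change_boxes, osm_construction_boxes):
    """A parcel becomes a pre-label candidate if it intersects either a
    building-perimeter change area or an OSM feature tagged 'construction'."""
    indicators = change_boxes + osm_construction_boxes
    return [pid for pid, box in parcels.items()
            if any(intersects(box, ind) for ind in indicators)]
```

The same overlay logic scales to real polygon layers with a spatial index; the point is that two weak indicator sources are unioned before intersecting with the property layer.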
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Federated AI-Cubes: The New Paradigm for Easier, Faster, and Deeper Insight

Authors: Peter Baumann
Affiliations: Constructor University, rasdaman GmbH
Datacubes are acknowledged as a cornerstone for analysis-ready data, as they allow more intuitive, human-centric services. Their abstraction away from technical pain points has two advantages: for users, services become more convenient; for servers, it becomes possible to dynamically optimize, orchestrate, and distribute processing. The latter is particularly important for Digital Twins, where many different datasets need to be combined to reflect physical reality, data sources that often sit at disparate locations and need to be fetched and merged dynamically. Additionally, for understanding, forecasting, what-if scenarios, etc., it is indispensable to embed model-based prediction in Digital Twin solutions. And, most of all, using all these powerful tools must become simpler for users. Recent advances in datacube technology, in particular AI-Cubes, promise the efficient combination of several relevant capabilities: - The seamless integration of AI into datacube analytics, plus AI-assisted query writing, opens new opportunities for zero-coding exploitation. - Automatic homogenization of data, based on the OGC/ISO coverage standards, responds to the need for data fusion across data centers. - Use of these standards also accomplishes interoperability across servers and clients, allowing users to remain in the comfort zone of their well-known tools, from map browsing through Web GIS clients such as QGIS and ArcGIS up to openEO, R, and Python. These innovations are implemented and exploited at scale. For example, EarthServer connects members in the US, Europe, and Asia into a single planetary datacube space. Both free and paid services together form a single multi-Petabyte pool of datacubes, based on the rasdaman datacube engine. In our talk we present the latest advances in AI-Cubes in the quest for building better Digital Twins, but also where further innovation is still needed.
We address the state and future of the datacube standards and show to what extent the complexity of space/time data wrangling can be "pushed behind the curtain". Several live demos, many of which can be replayed by the audience, illustrate the topics.
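As a rough illustration of the kind of standards-based access such services offer, the sketch below builds an OGC WCPS ProcessCoverages request; the coverage name, time-axis label, and endpoint are invented for the example and do not refer to an actual EarthServer deployment:

```python
import urllib.parse

# Illustrative WCPS query: slice a (hypothetical) NDVI datacube over one month.
WCPS_QUERY = """
for $c in (S2_NDVI_Cube)
return encode($c[ansi("2024-06-01":"2024-06-30")], "application/json")
""".strip()

def wcps_request_url(endpoint, query):
    """Build a GET URL for a WCS ProcessCoverages (WCPS) request."""
    params = {"service": "WCS", "version": "2.0.1",
              "request": "ProcessCoverages", "query": query}
    return endpoint + "?" + urllib.parse.urlencode(params)
```

A client would then fetch the URL with any HTTP library; the server, not the client, handles tiling, reprojection, and distribution.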
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: SeaPatrolAI: Using foundational models for multiscale ocean feature detection

Authors: Constantinos Lordos, David Petit, Nilesh Vyavahare, Paolo Silvestro, Paula Marti
Affiliations: Deimos Space UK, Deimos Space
SeaPatrolAI is a maritime surveillance project currently being developed by the EO Applications team of the Deimos group. The goal of SeaPatrolAI is to use advanced Artificial Intelligence algorithms to detect features in the sea, where the word feature means anything on the sea surface, from vessels to oil spills. As in many other applications, the need for extensive labelled datasets is a problem when training Artificial Intelligence (AI) models. The AI model for this project was selected to allow an unsupervised training phase that prepares the model for a second, supervised training phase requiring fewer labels than traditionally used, or returning better results with the same number of labels. To achieve this, the decision was made to use a foundation model to obtain a pre-trained model of the sea, the SeaAI. This pre-trained model shall be generic enough to allow a variety of applications to be built on top of it and shall work with a range of resolutions if trained with different EO datasets from different data sources. After some tests, the decision was made to use an architecture from the ESA PhilEO Bench, the GeoAware architecture. The foundation model is trained from scratch and customised for the sea. Regarding data sources, models are trained separately from SAR (TerraSAR-X, PAZ, Sentinel-1) and optical (PlanetScope, Sentinel-2) datasets, utilizing imagery at resolutions from 30 cm to 10 m. The datasets include a range of locations, atmospheric conditions, sea states and lighting levels, ensuring reliable performance across diverse operational scenarios and delivering comprehensive maritime surveillance and monitoring capabilities. Data is labelled specifically for the project, either manually or using a semi-automatic approach with a human in the loop to fix and validate the labels.
From the pre-trained model, the different detection applications are implemented. The main application is vessel detection, targeting vessels as small as 10 meters in length as well as other objects on the sea surface, without relying on tracking systems such as the Automatic Identification System (AIS). For every detected object, the precise geographical coordinates are provided, together with the dimensions and orientation. In addition to detection, SeaPatrolAI also uses the pre-trained model to classify objects into predefined categories, including different vessel types, other object types, and oil spills. Confidence scores are assigned to each detection, ranging from 0 to 100%, and statistical metrics related to detection performance, such as recall rates, are maintained in a database to continuously improve the system's accuracy. By combining cutting-edge AI technology with high-resolution satellite data, SeaPatrolAI enhances the maritime security and situational awareness capabilities of institutions with responsibilities in these fields.
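The kind of detection statistics described above can be sketched as follows; this is a minimal illustration of precision/recall over thresholded confidences, not the project's actual metric pipeline:

```python
def detection_metrics(detections, ground_truth_count, threshold=0.5):
    """Compute precision and recall from scored detections.
    Each detection is a (confidence in [0, 1], is_true_positive) pair."""
    kept = [d for d in detections if d[0] >= threshold]
    tp = sum(1 for _, is_tp in kept if is_tp)
    precision = tp / len(kept) if kept else 0.0
    recall = tp / ground_truth_count if ground_truth_count else 0.0
    return precision, recall
```

Sweeping `threshold` trades precision against recall, which is why per-threshold metrics are worth keeping in a database over time.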
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Towards Sensor-Parameter Awareness in Earth Observation Foundation Models

Authors: Jonathan Prexl, Michael Schmitt
Affiliations: University of the Bundeswehr Munich
In the past decade, the field of satellite-based Earth observation has gained access to a wide variety of data provided by space agencies worldwide. These agencies deploy satellites with diverse modalities, each with a specific set of sensor parameters tailored towards a specific aspect of monitoring the Earth's surface. The most commonly utilized sensing technologies are optical (multi- and hyperspectral) and synthetic aperture radar (SAR) sensors. Within each class of sensors, the data acquired by different satellites varies according to the specific sensor parameters of the corresponding mission, such as the number and spectral characteristics of channels (for optical satellites), radar wavelength and acquisition mode (for SAR satellites), as well as the spatial resolution and revisit times. Typically, data from different sensors provides complementary information, which, when properly integrated, can offer a more comprehensive understanding of the Earth's surface. In the recent past, many scientific efforts regarding the development of machine learning methods for Earth observation data have focused on foundation models. In this context, foundation models are (typically) large deep learning architectures pre-trained on unlabeled data with the objective of being fine-tuned for a variety of Earth observation downstream tasks. Using pre-trained models instead of task-specific training is expected to yield better overall performance with less need for manually annotated data, as well as lower computational cost on the side of the end user. Although this research field is still in the early stages of defining these models and establishing their requirements, it can reasonably be assumed that models capable of processing data from multiple sensors will be of high interest and offer significant advantages.
Such multi-sensor foundation models could reduce the need for manual decisions about which sensor to use, would automatically leverage the complementary information provided by different modalities, and would yield temporally denser information through the fusion of different sensor revisits. However, feeding data from various sensors with varying geometries or spectral properties into a single model requires careful consideration, since it can lead to ambiguities and reduce the effectiveness of the model. In this presentation, we examine and compare strategies for developing sensor-aware, transformer-based pre-trained models for satellite Earth observation data. We illustrate two approaches. First, in a multi-sensor setup, we utilize multiple satellites with varying numbers of channels and spectral positions; sensor parameters are directly encoded into the model during pretraining and inference, allowing for seamless integration of diverse data sources. Second, we apply similar techniques to SAR data, showing that encoding the specific scene acquisition geometry can significantly enhance tasks like 3D reconstruction. Together, these examples demonstrate how incorporating knowledge of sensor and acquisition configurations can provide valuable additional insights and improve model performance.
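One simple way sensor parameters such as channel central wavelengths could be encoded for a transformer is sketched below, assuming a sinusoidal scheme analogous to positional encodings over sequence index; this is illustrative only and not necessarily the encoding the authors use:

```python
import numpy as np

def wavelength_encoding(wavelengths_nm, dim=16):
    """Sinusoidal encoding of channel central wavelengths, so that tokens
    from different sensors carry their spectral position explicitly."""
    wl = np.asarray(wavelengths_nm, dtype=float)[:, None]          # (C, 1)
    freqs = 1.0 / (10000.0 ** (np.arange(dim // 2) * 2.0 / dim))   # (dim/2,)
    angles = wl * freqs                                            # (C, dim/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# e.g. Sentinel-2 B04 (665 nm) and B08 (842 nm) get distinct, fixed codes
codes = wavelength_encoding([665.0, 842.0], dim=16)
```

Such a code can be added to (or concatenated with) each channel's patch embedding, so the same backbone can ingest sensors with different band sets without channel-order ambiguity.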
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Massive Scale, Noisy Labels: Foundational CV Models and Terabyte Workflows for Efficient Olive Mapping in Apulia, Italy.

Authors: Roberto Cilli, Ester Pantaleo, Vincenzo Giannico, Giacinto Donvito, Marica Antonacci, Leonardo Cristella, Gioacchino Vino, Salvatore Camposeo, Mario Elia, Raffaele Lafortezza, Alfonso Monaco, Giovanni Sanesi, Sabina Tangaro, Roberto Bellotti, Gaetano Alessandro Vivaldi, Nicola Amoroso
Affiliations: Dipartimento di Fisica Michelangelo Merlin, Università degli Studi di Bari, Istituto Nazionale Fisica Nucleare, Sezione di Bari, Dipartimento Di Scienze Del Suolo, Della Pianta E Degli Alimenti, Università degli studi di Bari A. Moro, Dipartimento di Farmacia - Scienze del Farmaco, Università degli studi di Bari A. Moro
To the authors’ knowledge, this study introduces the first open catalog of olive trees across the Apulia region of southern Italy. Leveraging the YOLOv8 algorithm, we developed a high-performance workflow for olive tree detection. The model was trained on 23,000 annotated olive trees from 250 parcels in northern Apulia; to further enrich the dataset, we performed zero-shot instance segmentation of olive canopies within our training parcels using the Segment Anything Model (SAM). This approach eliminated the need for labor-intensive manual collection of canopy masks, enabling the integration of canopy morphology information into our object detector. The calibrated model achieved satisfactory performance, with sensitivity and precision exceeding 92% and a mAP(50) of approximately 95%. Scaling this workflow to the entire Apulian territory was accomplished through inference on AGEA2019 orthophotos accessed via WMS services, processing 460,000 tiles (200 m x 200 m) in parallel across 254 threads in just 24 hours. The resulting catalog estimates the total number of olive trees in Apulia at 67 million (2019), exceeding official pre-Xylella estimates of 60 million. We are currently adopting an active learning approach aimed at improving the olive catalog by refining annotations on difficult or ambiguous olive crops and removing mistakes such as duplicate detections, omissions, and false positives. The present work is aligned with the Water Digital Twin project (WADIT), which aims to model regional water resource dynamics by integrating advanced Earth observation, AI, and hydrological models to support sustainable water management in the face of climate change and anthropogenic pressures. Olive groves, the most widespread cropping system in the Apulian landscape, play a critical role in shaping land use, microclimatic conditions, and water resource allocation.
Mapping their distribution and canopy extent is pivotal to understanding the intricate interactions between vegetation, water consumption, and land cover change, especially in a region heavily affected by the Xylella fastidiosa outbreak and increasing drought frequency. The resulting catalog will represent a cornerstone of the WADIT project, providing critical data for modelling, supporting sustainable water management and evidence-based land-use policies, and empowering stakeholders in Apulia to address the challenges posed by water scarcity and climate variability.
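The removal of duplicate detections, one of the cleanup steps mentioned above (the same crown can be detected in two overlapping tiles), can be sketched with a simple IoU-based deduplication; the project's active-learning pipeline is of course more involved:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def dedupe(boxes, iou_thresh=0.5):
    """Greedily drop any detection overlapping an already-kept one."""
    kept = []
    for box in boxes:
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept
```

At catalog scale this would run per neighbourhood of tile borders, with boxes in a shared map projection rather than pixel coordinates.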
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Advancing Marine Earth Observation with AI Foundation Models

Authors: David Moffat, Remy Vandaele, Geoffrey Dawson, Sarah Jackson, Anne Jones, Chunbo Luo, Tong Luo, Andrew Taylor, Hywel Williams
Affiliations: Plymouth Marine Laboratory, University of Exeter, IBM Research, Science and Technology Facilities Council Hartree Centre
Self-trained foundation models (FMs) are transforming the landscape of Artificial Intelligence (AI) by offering generalized learning capabilities that can be adapted to a wide range of downstream tasks. These models, trained on large, unlabelled datasets, can acquire a broad understanding of data and transfer knowledge from one domain to another, thereby addressing real-world problems more effectively. In this study, we apply an FM to two different downstream tasks in marine Earth observation. The selected FM is a Vision Transformer architecture, building on the Prithvi-100M model, which was pretrained using Sentinel-3 Ocean and Land Colour Instrument (OLCI) data. The model architecture is fully described in the related abstract “A remote sensing foundation model for the ocean using Sentinel-3”. We demonstrate the utility of this FM with two distinct downstream applications: 1. Harmful Algal Bloom (HAB) Species Detection. HABs are a growing environmental concern, leading to fish kills, toxin contamination in seafood, and ecosystem disruption [Zingone et al., 2012]. However, monitoring HABs is resource-intensive, often resulting in sparse and irregular data coverage. In this work, we explore how the FM can improve the detection and monitoring of HABs by leveraging remote-sensing data, reducing the reliance on costly in-situ measurements. 2. Primary Production Quantification in the Ocean Carbon Cycle. Marine phytoplankton play a crucial role in the global carbon cycle, contributing approximately half of Earth's primary production and influencing climate regulation through CO₂ fixation. Understanding the variability and trends in ocean carbon sequestration is essential for climate modelling, but current models often show poor alignment with observational data [Friedlingstein et al., 2022].
In this application, we assess how the FM can enhance the accuracy and efficiency of remote sensing-based methods for quantifying primary production in the ocean, providing better estimates for Earth system models that inform climate predictions. In both cases, we will compare the results of the FM with a baseline ML model and with an untrained, random-weight FM, to understand the impact of pre-training. By leveraging a self-trained FM, we aim to advance the capabilities of remote sensing for marine monitoring, offering more accurate, scalable, and cost-effective solutions for studying ocean ecosystems and their role in global climate processes. References: Friedlingstein, P. et al., 2022. Global carbon budget 2022. Earth System Science Data, 14(11), pp.4811-4900. Zingone, A. et al., 2012. Harmful algae in benthic systems: A GEOHAB core research program. Cryptogamie, Algologie, 33(2), pp.225-230.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Sequence to Sequence Is All You Need: Flexible Generative Pre-Training for Satellite Image Time Series Models

Authors: Samuel Jonathan Barrett
Affiliations: Unaffiliated at time of abstract submission
Large multimodal models (LMMs) have demonstrated the transformative potential of integrating language and vision, enabling promptable, reasoning-capable models for both vision and language tasks by leveraging language’s inherent capacity for context, chain-of-thought reasoning, and unstructured outputs. Extending this paradigm to Earth Observation (EO) would unlock similarly groundbreaking capabilities, enabling models to process multi-sensor EO data in rich contextual frameworks, support complex queries, and produce both structured and nuanced unstructured outputs while incorporating reasoning. Achieving this vision requires integrating EO data as a primary modality in LMMs, alongside language. However, a key challenge lies in pre-training: while next-token prediction has proven effective in NLP and masked auto-encoding works well for general vision tasks, neither directly addresses the unique spatio-temporal nature of Satellite Image Time Series (SITS) data. Recent work on pre-training for SITS, such as temporal masked autoencoders and self-supervised instance discrimination methods, has shown promise, but current approaches remain inflexible, with each task requiring specific modifications to the model architecture. To address this limitation, we propose a generative framework that allows diverse pre-training strategies to be unified without requiring architectural modifications, significantly improving pre-training flexibility and enabling task diversity. This work focuses on pre-training generative transformer decoder-only models using Sentinel-2 and Sentinel-1 data. Planned experiments include crop type classification using the PASTIS dataset as a downstream task, alongside pre-training paradigms expressed as sequence-to-sequence tasks. These paradigms include next observation prediction (analogous to causal language modelling), temporal masked auto-encoding, modality translation, instance discrimination, and contrastive learning. 
This approach aims to test whether these diverse strategies can be effectively unified within a generative framework, evaluate whether classification tasks can be effectively performed in this generative context (a critical proof of concept for downstream applications), and assess the impact of self-supervised generative pre-training on downstream tasks. Ultimately, this work bridges immediate needs in foundational EO modeling by enabling flexible and effective pre-training strategies, while advancing the broader vision of integrating EO data into generative, multimodal frameworks. By enabling reasoning-capable, promptable models, these advancements have the potential to revolutionize how Earth observation data is used to address diverse global challenges.
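The unification the abstract proposes, expressing different pretext tasks as plain sequence-to-sequence pairs for one decoder-only model, can be sketched at the token level; the task tokens and observation tokens below are illustrative placeholders, not a real tokenizer:

```python
# Each pretext task becomes an (input, target) token-sequence pair,
# distinguished only by a leading task token, so no architectural change
# is needed to switch between tasks.

def next_observation_task(obs):
    """Predict observation t+1 from observations up to t (causal-LM style)."""
    return ["<next>"] + obs[:-1], obs[1:]

def masked_autoencoding_task(obs, masked_idx):
    """Reconstruct masked observations from the visible ones."""
    inp = ["<mae>"] + ["<mask>" if i in masked_idx else o
                       for i, o in enumerate(obs)]
    tgt = [obs[i] for i in sorted(masked_idx)]
    return inp, tgt

def modality_translation_task(s1_obs, s2_obs):
    """Predict Sentinel-2 tokens from co-located Sentinel-1 tokens."""
    return ["<s1->s2>"] + s1_obs, s2_obs
```

Classification can be expressed the same way, with class labels as target tokens, which is the proof of concept the abstract plans to test.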
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Generation of Synthetic Earth Observation Databases Using Generative Artificial Intelligence and Diffusion Models

Authors: M Antoine Lorentz, Sylvain Tanguy, M Killian Perriot
Affiliations: Thales
Project Introduction: In Earth observation, object detection tasks commonly rely on convolutional neural networks (CNNs). These CNNs require large annotated training datasets containing the targeted objects to achieve optimal performance. Unfortunately, creating such annotated datasets is a complex and time-consuming process. This project explores the idea of fine-tuning generative AI models, specifically diffusion models, on images captured by Earth observation satellites. Once fine-tuned on satellite imagery, these diffusion models can generate annotated synthetic images, which are then used to augment training datasets and thereby enhance the performance of object detection algorithms.

Introduction to Diffusion Models: Diffusion models, or more broadly flow-based models, are designed to learn a mapping between noise and data. They regress a velocity field that transports Gaussian noise to a given data distribution through an ordinary differential equation (ODE). The core concept of diffusion models encompasses two processes: a forward process, in which a fragment of Gaussian noise is iteratively added to a source image until it becomes white noise, and a reverse process, which denoises the white noise to reconstruct a realistic image. Training a diffusion model centers on learning this reverse process. Historically, diffusion models were introduced to tackle unconditional tasks (1) but have rapidly been adapted for guided image generation (2).

Image Generation through Diffusion Models: The ability to generate rare and specific objects, people, or concepts quickly became a highly anticipated feature. Three methods have stood out in the diffusion-model community: Textual Inversion (3), DreamBooth (4), and LoRA (5). The idea behind Textual Inversion is to find the best text embedding that describes the object or concept common to a small dataset of images (approximately 10); the conditioning used during sampling is then assumed to be optimal for generating the desired object. DreamBooth modifies the weights of a pre-trained U-Net model, fine-tuning it on a limited number of images whose captions represent the same concept. The goal is to characterize the new concept as a refinement of a more general concept already known to the model. The known concept is associated with a token [known_concept], and DreamBooth associates the specifics of the new concept with a new token [rare_token]; after training, the new concept can be invoked in prompts as “[rare_token] [known_concept]”. To preserve the model's knowledge of the known concept, DreamBooth enriches the fine-tuning dataset with artificially generated images produced from the prompt “a [known_concept]”. Finally, LoRA (Low-Rank Adaptation) is based on the observation that the difference between the weight matrices fine-tuned for a specialized task and the initial pre-trained matrices often has low intrinsic rank. This implies that the difference can be well approximated by a low-rank matrix and thus represented as the product of two smaller matrices. The method was originally conceived for fine-tuning Transformer-based language models but is easily adapted to any architecture containing attention layers. Several methods have also been developed to control the geometry of generated images, the best known likely being ControlNet (6). ControlNet is a neural network architecture designed to add spatial conditioning controls to large pre-trained text-to-image diffusion models. It locks the weights of the pre-trained model and reuses its encoding layers as a backbone for learning a diverse set of conditional controls. The architecture is connected through "zero convolutions" (convolutional layers initialized to zero), which ensures the stability of the fine-tuning process.

Project Development:

1st Use Case: Inpainting. A key use case implemented in this project is inpainting, accomplished using the "RePaint" sampling algorithm. In this scenario, the diffusion model learns to seamlessly integrate objects into their environment. This use case is compatible with guided models, allowing for on-demand inpainting. We also leveraged ControlNet inpainting for newer diffusion models, such as Stable Diffusion 3.

2nd Use Case: Object Reconstruction. This focuses on situations where an object, such as an airplane, is partially obscured by obstacles (e.g., within a hangar). The model assists in reconstructing the missing parts of the object.

3rd Use Case: Domain Transfer. Synthetic training data can also be generated from segmentation maps, which are generally easier to obtain or produce. Using domain transfer techniques, realistic styles can be applied to these maps. Although the results are conceptual, they effectively illustrate the potential of this approach.

4th Use Case: Creation of a Foundation Model. To build the most flexible tool and achieve the best results, the project focused on using pre-trained models and then fine-tuning them on satellite images. Consequently, we decomposed the generation of images of rare objects into three steps: (1) obtain a model for general realistic satellite image generation; (2) develop a model capable of controlling object geometry in the image; (3) develop a model capable of generating new rare objects. We detail these three steps below.

Obtain a Model for General Realistic Satellite Image Generation: We chose to start from the pre-trained Stable Diffusion models developed by Stability AI. This approach not only promises superior final results but also produces a highly capable model that can be reused for other tasks, serving as an initial foundation model for the spatial domain. Stable Diffusion was trained on image-caption pairs sourced from LAION-5B, in which 5 billion image-text pairs were classified by language and filtered into distinct datasets based on resolution, likelihood of watermarking, and "aesthetic" scores. We performed fine-tuning on images from the DOTA database, first tiled into 512x512 images, after generating captions for them with two methods: one based on the objects present in the images (retrieved automatically from the DOTA labels), and one using automatic caption generation with a neural network (the open-source model BLIP-2). Finally, we removed the 10% blurriest images from the DOTA dataset, using the Laplacian variance as a metric. This step ensures that the training set consists of higher-quality images: by focusing on sharper images, we enhance the model's ability to generate realistic and coherent outputs, which is particularly important in applications such as satellite image interpretation and object detection. We then fine-tuned Stable Diffusion with the captions derived from both methods.

Develop a Model Capable of Controlling Object Geometry in the Image: A common weakness of generative models is precise adherence to the geometry of the generated objects. To address this, we used ControlNet, which integrates with a pre-trained model. To teach the network to adhere to a geometric condition, we needed examples; we opted for the iSAID database (DOTA images enhanced with associated segmentations) and removed all images that did not contain any objects. We employed the same training conditions as for our foundation model.

Develop a Model Capable of Generating New Objects: Given that the base model is generic, we employed few-shot fine-tuning methods such as LoRA, which leverage the generalization capability of pre-trained models for a wide range of applications while ensuring better generation quality than previously trained ultra-specialized models. Our experiments focused on a "rare airplane" (whose specific name we prefer not to disclose) that was not present in the database and for which we retrieved images from the internet.

Bibliography:
1. J. Ho, A. Jain, and P. Abbeel. Denoising Diffusion Probabilistic Models. arXiv, 16 December 2020. http://arxiv.org/abs/2006.11239.
2. J. Ho and T. Salimans. Classifier-Free Diffusion Guidance. arXiv, 25 July 2022. doi:10.48550/arXiv.2207.12598.
3. R. Gal et al. An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion. 2022.
4. N. Ruiz et al. DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation. 2022.
5. E. J. Hu et al. LoRA: Low-Rank Adaptation of Large Language Models. 2021.
6. L. Zhang, A. Rao, and M. Agrawala. Adding Conditional Control to Text-to-Image Diffusion Models. 2023.
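The blur-filtering step in the training-data preparation (dropping the blurriest DOTA tiles by Laplacian variance) can be sketched as follows; this is a minimal stand-in using a roll-based Laplacian, not the project's code:

```python
import numpy as np

def laplacian_variance(image):
    """Blur metric: variance of the Laplacian response. Sharp images have
    strong edges and hence a high variance; blurry images score low."""
    img = np.asarray(image, dtype=float)
    # 3x3 Laplacian via shifted sums (equivalent to the standard kernel);
    # the border, where np.roll wraps around, is cropped away.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)[1:-1, 1:-1]
    return lap.var()

def drop_blurriest(scores, fraction=0.10):
    """Indices of images to keep after dropping the lowest-scoring fraction."""
    order = np.argsort(scores)            # ascending: blurriest first
    n_drop = int(len(scores) * fraction)
    return sorted(order[n_drop:].tolist())
```

In practice one would score each tile once, persist the scores, and filter before caption generation, so the cut is reproducible.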
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: DeepFeatures: Remote sensing beyond spectral indices

Authors: Dr. Karin Mora, Dr. Martin Reinhardt, Julia Peters, Dr Gunnar Brandt, Prof. Teja Kattenborn, Dr Guido Kraemer, David Montero, Clemens Mosig, Konstantin Ntokas, Dr. Daria Svidzinska, Prof Miguel D. Mahecha
Affiliations: Leipzig University, Brockmann Consult GmbH, University of Freiburg
Understanding Earth system dynamics and their response to climate change and human activity requires innovative approaches to analyse complex and multivariate remote sensing data. However, the current trend is towards large models that require a lot of memory and computational power to be trained. The DeepFeatures project addresses this challenge by developing a foundation model approach to create Feature Data Cubes, which capture the underlying ecosystem dynamics as a low-dimensional representation in latent space. These reduced representations enable the use of simpler, resource-efficient downstream models, which are easier to train and require minimal computational resources. The project builds on the rationale that each spectral index (SI) reflects a specific aspect of ecosystem behaviour. Despite the development of over two hundred spectral indices, current studies often narrow their focus to individual SIs, overlooking the broader context of land surface processes captured by the SIs that are left out. The DeepFeatures project addresses this challenge by adopting a spatio-temporal multivariate approach. The SIs are derived from Sentinel-2 observations to generate an SI Data Cube. AI dimension reduction methods are applied to reduce the SI dimensionality and extract a latent space to create the Feature Data Cubes. To demonstrate the potential of the Feature Data Cubes, the project focuses on inference for a variety of science cases including: modeling gross primary production, analyzing tree mortality and greening trends, biodiversity monitoring for conservation, comparing phenological features using satellite and crowd-sourced data, and studying the ecological impacts of open-pit lignite mining. DeepFeatures emphasizes the deployment of transparent and reproducible workflows, from generating Sentinel-2 derived SI Data Cubes to creating Feature Data Cubes.
It aims to provide an accessible, extensible, and modifiable framework for diverse applications, fostering broad community engagement and enabling open exploration of Earth system dynamics. This presentation will showcase the methodology, scientific cases, and transformative potential of the DeepFeatures framework, highlighting its contributions to Earth observation and climate research. The DeepFeatures project is funded by ESA’s AI4Science activity. Website: https://rsc4earth.de/project/deepfeatures/
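The dimensionality reduction at the heart of the Feature Data Cubes can be illustrated with a simple linear stand-in (PCA via SVD). The project itself uses AI-based reduction methods, so this is only a conceptual sketch with made-up cube sizes:

```python
import numpy as np

# Reduce a spectral-index (SI) cube of shape (time, lat, lon, n_SI) to a
# low-dimensional "feature cube" by projecting the SI axis onto k
# principal components. Sizes below are hypothetical.
rng = np.random.default_rng(1)
t, h, w, n_si, k = 10, 8, 8, 20, 3

cube = rng.normal(size=(t, h, w, n_si))   # stand-in for an SI Data Cube
flat = cube.reshape(-1, n_si)
flat = flat - flat.mean(axis=0)           # centre each SI

# PCA via SVD; rows of Vt are principal axes in SI space.
_, _, Vt = np.linalg.svd(flat, full_matrices=False)
features = (flat @ Vt[:k].T).reshape(t, h, w, k)

assert features.shape == (t, h, w, k)
```

Downstream models then consume the k-dimensional feature cube instead of all twenty indices, which is what makes them cheap to train.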
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Deep Learning Approaches for Automated Inland Water Body Mapping Using Sentinel-2 Imagery

Authors: Mohammadmehdi Saberioon, Ali Ghaznavi, Jakub Brom, Sibylle Itzerott
Affiliations: Deutsches Geoforschungszentrum GFZ, University of South Bohemia in České Budějovice
Accurate mapping of inland water bodies is crucial for understanding hydrological systems, climate dynamics, and ecological processes. This study introduces a deep learning framework for automated detection and delineation of inland water bodies using high-resolution Sentinel-2 satellite imagery. Three U-Net-based architectures were evaluated: a conventional U-Net, a Residual Attention U-Net, and a VGG16-U-Net hybrid model. The models were trained on Sentinel-2 visible bands (Red: 665 nm, Green: 560 nm, Blue: 490 nm) at 10-meter spatial resolution and validated against ZABAGED ground truth data. Among the models, the VGG16-U-Net emerged as the most efficient, achieving a mean IoU score of 0.9850 while maintaining low computational requirements. In comparison, the Residual Attention U-Net offered similar accuracy but required significantly higher computational overhead. Our results demonstrate that the VGG16-U-Net strikes an optimal balance between accuracy and efficiency, making it a practical tool for large-scale water body monitoring. This approach has significant potential for applications in water resource management and ecological monitoring.
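The IoU score used to compare the models above can be computed per binary water mask as follows (a generic sketch, not the study's evaluation code):

```python
import numpy as np

def iou(pred, truth):
    # Intersection-over-Union for binary masks (water = True).
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0  # empty masks count as perfect

# Toy 2x2 example: predicted water covers one true pixel and one false one.
pred  = np.array([[1, 1], [0, 0]], dtype=bool)
truth = np.array([[1, 0], [0, 0]], dtype=bool)
assert iou(pred, truth) == 0.5
```

A mean IoU of 0.9850, as reported for the VGG16-U-Net, corresponds to predicted and reference water masks overlapping almost completely across the validation set.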
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Investigating the Key Variables Influencing the Production of Winter Wheat and Oilseed Rape in Bavaria: Integrating Spatiotemporal Fusion of Remote Sensing (Landsat and MODIS) Data and Machine Learning for Enhanced Yield and Biomass Predictions from 2001 to 2019

Authors: Maninder Singh Dhillon, Dr Sarah Schönbrodt-Stitt, Dr Sarah Asam, Dr Ursulla Gessner, Prof. Dr Thomas Koellner, Dr Carina Kuebert-Flock, Dr Thomas Rummler, Prof. Dr. Tobias Ullmann
Affiliations: Earth Observation Research Cluster, Department of Remote Sensing, Institute of Geography and Geology, University of Würzburg, 97074 Wurzburg, Germany, German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), 82234 Wessling, Germany, Department of Earth Science, Faculty of Biology, Chemistry and Earth Sciences, University of Bayreuth, 95447, Bayreuth, Germany, Department of Remote Sensing, Hessian State Agency for Nature Conservation, Environment and Geology (HLNUG), 65203 Wiesbaden, Germany, Department of Applied Computer Science, Institute of Geography, University of Augsburg, 86159 Augsburg, Germany
Fast and precise yield estimation, driven by the growing availability of global satellite data and the rapid advancement of new algorithms, remains a key objective for precision agriculture and food security. However, the consistency and reliability of methodologies that deliver accurate crop yield predictions and identify key variables influencing yields/biomass at the regional scale remain areas requiring further investigation. This study investigates the integration of spatiotemporal fusion modeling and machine learning (ML) into a light use efficiency (LUE) model to enhance yield and biomass predictions for winter wheat (WW) and oilseed rape (OSR) in Bavaria, Germany, from 2001 to 2019. To fill cloud and shadow gaps in the satellite data, synthetic NDVI time series (30 m, 8 days) were first generated using the spatial and temporal adaptive reflectance fusion model (STARFM), which fuses the high-resolution Landsat (30 m, 16 days) data with MODIS (250 m, 16 days). The study validates the resulting synthetic product by dropping a single available high-spatial-resolution NDVI image during the fusion process and comparing the actual (dropped) and synthetic (STARFM) NDVI images of the same date, achieving high overall accuracy (R² > 0.65, RMSE < 0.11) across nearly two decades (2001-2019). Next, the fused NDVI time series and the climate data were input into the LUE model to predict the crop biomass of WW and OSR. Because the validation data from the Bayerisches Landesamt für Statistik were available as crop yields at the district level, the generated crop biomass was converted into crop yields at the regional level using linear regression models. The LUE model reached mean R² values of 0.75 (WW) and 0.73 (OSR), with RMSEs of 4.33 dt ha-1 and 2.19 dt ha-1, respectively.
The model demonstrated strong correlations (R = 0.81 and R = 0.77) between synthetic NDVI accuracy and yield prediction accuracy, showing the dependency of the crop yield outputs on the fused NDVI time series. To find the most influential factors affecting crop biomass from 2001 to 2019, Random Forest (RF) models were developed for both the mean and the variability (standard deviation) of biomass (at 30 m spatial resolution, on a temporal scale matching the exact start and end of the development stages of WW and OSR), incorporating environmental factors such as climate (mean temperature, precipitation, and solar radiation during the development stages of both crop types; 2000 m spatial and daily temporal resolution), topography (elevation, aspect and slope; 30 m), landscape diversity (Shannon Diversity Index), and soil potential at a hexagonal level. Key drivers included soil potential, elevation, and solar radiation. Regional analysis revealed that the Franconian region of Bavaria exhibited the highest prediction accuracy, while the Alps and Pre-Alps showed the lowest, particularly for OSR.
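The R² and RMSE validation metrics reported above can be sketched generically as follows (the NDVI values here are invented for illustration):

```python
import numpy as np

# Generic R² and RMSE between an actual (dropped) and a synthetic (fused)
# NDVI image, as in the leave-one-image-out validation described above.
def r2(actual, synthetic):
    ss_res = np.sum((actual - synthetic) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(actual, synthetic):
    return np.sqrt(np.mean((actual - synthetic) ** 2))

actual    = np.array([0.20, 0.40, 0.60, 0.80])   # made-up NDVI pixels
synthetic = np.array([0.22, 0.38, 0.61, 0.79])
assert r2(actual, synthetic) > 0.99
assert rmse(actual, synthetic) < 0.02
```

The study's thresholds (R² > 0.65, RMSE < 0.11) are applied to exactly these two quantities, computed per dropped image.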
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: GMSM: A Generic Framework for Mine Site Monitoring with Multimodal Earth Observation

Authors: Yang Mu, Yuanyuan Wang, Matthieu Molinier, Pinja Lindgrén, Prof. Dr. Xiao Xiang Zhu
Affiliations: Data Science in Earth Observation, Technical University of Munich, VTT Technical Research Centre of Finland Ltd, Munich Center for Machine Learning (MCML)
Mining sites pose significant challenges to environmental sustainability. Effective management of mining activities necessitates comprehensive monitoring of diverse environmental parameters, including water quality, dust emissions, and vegetation dynamics [1]. The primary technical challenge lies in developing a generic monitoring algorithm capable of integrating Earth Observation (EO) data from multiple sources with varying spatial, spectral, and temporal resolutions to address several downstream applications within the mining context [2]. EO foundation models (FMs) have emerged as a promising solution. Trained on large-scale, often unlabelled datasets, these models can learn general feature representations from diverse EO data sources and be fine-tuned for specific tasks [3]. The primary advantage of FMs is their label efficiency, which reduces the need for extensive labelled data, a common bottleneck in EO applications. By leveraging EO FMs, the effectiveness of mine site monitoring can be significantly enhanced. To this end, we introduce the Generic Mine Site Monitoring (GMSM) framework, a novel FM-based approach designed to synthesize multi-source EO data for comprehensive environmental monitoring. The framework employs the Dynamic One-For-All (DOFA) foundation model [4], whose dynamic hypernetwork adjusts to different wavelengths across diverse EO data sources. Unlike other foundation models confined to specific sensors or data types, DOFA utilizes a unified Transformer architecture capable of processing multimodal data from various sensors, including those not seen during pretraining [4]. We specifically adapted the DOFA model within the GMSM to handle the complex EO data in mine site monitoring. To overcome the limitations of DOFA in processing image time series, we introduce a Temporal Attention Network (TAN) as an add-on processing component.
TAN utilizes position encoding and multi-head attention mechanisms to capture temporal dependencies in sequential satellite data, enabling the GMSM framework to provide precise and timely monitoring of environmental changes. Our study currently focuses on three core vegetation monitoring tasks: estimating the number of plant species (species richness), the proportion of ground covered by vegetation (fractional vegetation cover), and the total leaf area per unit of ground area (leaf area index) within 1x1 m² quadrats. Using Leave-One-Out cross-validation, GMSM demonstrates label-efficient capabilities with a limited dataset of only 32 observations, showing substantial superiority over established baseline methods such as Decision Trees, Gradient Boosting, and Random Forests. GMSM achieves superior performance across key metrics, including R² and RMSE, for species richness (R²=0.30, RMSE=3.45 species/m²), fractional vegetation cover (R²=0.16, RMSE=0.20), and leaf area index (R²=0.16, RMSE=1.38). Notably, GMSM performs comparably to leading methods in estimating species richness using Sentinel-2 time series data with an R² of 0.30. The framework also performs robustly in estimating fractional vegetation cover and leaf area index, achieving R² of 0.16 for both tasks, further validating its ability to monitor vegetation dynamics. The improvement in performance, with an average R² increase of 0.13 across the three tasks between single-date and time-series data, demonstrates the significant benefit of temporal information for vegetation monitoring. This highlights the effectiveness of the TAN module in capturing temporal dependencies and enhancing model accuracy. The GMSM framework offers a robust, scalable solution for real-time environmental monitoring at mine sites, with its ability to seamlessly integrate multimodal EO data and temporal dynamics.
Its versatility makes it applicable not only to diverse monitoring tasks within the mining sector but also to a wide range of environmental applications, providing a valuable tool for advancing EO and supporting informed decision-making in environmental management [5]. This study is part of the MultiMiner project (https://www.multiminer.eu/), funded by the European Union’s Horizon Europe research and innovations actions programme under Grant Agreement No. 101091374. References [1] Virgone, K. M., et al. "Effective integrated frameworks for assessing mining sustainability." Environmental geochemistry and health 40 (2018): 2635-2655. [2] Han, Wei, et al. "A survey of machine learning and deep learning in remote sensing of geological environment: Challenges, advances, and opportunities." ISPRS Journal of Photogrammetry and Remote Sensing 202 (2023): 87-113. [3] Zhu, Xiao Xiang, et al. "On the Foundations of Earth and Climate Foundation Models." arXiv preprint arXiv:2405.04285 (2024). [4] Xiong, Zhitong, et al. "Neural plasticity-inspired foundation model for observing the Earth crossing modalities." arXiv e-prints (2024): arXiv-2403. [5] Anderson, Katherine, et al. "Earth observation in service of the 2030 Agenda for Sustainable Development." Geo-spatial Information Science 20.2 (2017): 77-96.
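The position encoding that a temporal-attention module such as TAN applies to an image time series can be sketched as follows; the standard sinusoidal form is assumed here, since the abstract does not specify the exact encoding used:

```python
import numpy as np

# Sinusoidal position encoding for a satellite image time series:
# each of n_steps acquisition dates gets a d_model-dimensional vector
# that attention layers can use to reason about temporal order.
def position_encoding(n_steps, d_model):
    pos = np.arange(n_steps)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((n_steps, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions: cosine
    return pe

pe = position_encoding(n_steps=12, d_model=16)   # e.g. 12 acquisitions
assert pe.shape == (12, 16)
# At position 0 all sines are 0 and all cosines are 1.
assert np.allclose(pe[0, 0::2], 0.0) and np.allclose(pe[0, 1::2], 1.0)
```

These vectors are added to the per-date feature tokens before multi-head attention, which is what lets the module distinguish, say, spring from autumn acquisitions.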
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Towards Efficient Neural Compression for Earth Observation Data

Authors: Michael Lukas Marszalek, Rikard Vinge, Amelie Koch, Dr. Conrad Albrecht
Affiliations: DLR, IBM
The rapid growth of Earth observation (EO) data presents a challenge in data management, data processing, and data storage. Missions like Sentinel-2 generate petabytes of multi-spectral imagery, contributing to the massive volume of information from diverse EO missions such as Landsat and MODIS. In combination with climate-related satellites (e.g. MetOp) and commercial and future missions, these data can make a significant contribution to a climate-responsible and sustainable future [1,2]. The constantly growing volume of data has implications for hosting infrastructure: from satellite specifications, e.g. bandwidth and storage capacity, to the ground segment where the data is processed and managed. Any design choice in this flow of data impacts the corresponding CO2 footprint [3] and the associated costs. As part of the “Embed2Scale” project [4] co-funded by the European Union (Horizon Europe contract No. 101131841), the Swiss State Secretariat for Education (SERI), and UK Research and Innovation (UKRI), we develop and evaluate compression techniques leveraging deep neural network models [5,6]. Our end-to-end pipeline leverages current progress in the area of pre-training with masked autoencoder (MAE) reconstruction tasks, contrastive encoders for feature extraction, quantization, and entropy models for latent distribution modeling [7]. Based on the foundations of the SSL4EO-S12 data set [6], these methods are designed to drastically reduce data size while preserving relevant information for a diverse set of EO downstream tasks. Efficient neural compression improves downlink efficiency and data storage on the ground, as the temporal, spatial and spectral dimensions have subtle correlations, and our neural compressor architectures exploit these correlations. We validate our methods in a range of real-world use cases, from agriculture to maritime awareness.
The diverse set of downstream tasks allows us to verify the degree to which the compressed data retains relevant information. The long-term goal of our study is the implementation of an embedded neural compressor for satellites, such as ESA's Φsat-2 [8,9], where the benefits of initial on-board applications have been demonstrated. [1] A. Kavvada, G. Metternicht, F. Kerblat, N. Mudau, M. Haldorson, S. Laldaparsad, L. Friedl, A. Held, E. Chuvieco, "Towards delivering on the Sustainable Development Goals using Earth observations," Remote Sensing of Environment, vol. 247, 2020, 111930, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2020.111930 [2] K. Anderson, B. Ryan, W. Sonntag, A. Kavvada, L. Friedl, "Earth observation in service of the 2030 Agenda for Sustainable Development," Geo-Spatial Information Science, 20(2), 77-96, 2017, https://doi.org/10.1080/10095020.2017.1333230 [3] R. Wilkinson, M.M. Mleczko, R.J.W. Brewin, K.J. Gaston, M. Mueller, J.D. Shutler, X. Yan, K. Anderson, "Environmental impacts of earth observation data in the constellation and cloud computing era," Science of The Total Environment, vol. 909, 2024, 168584, ISSN 0048-9697, https://doi.org/10.1016/j.scitotenv.2023.168584 [4] https://embed2scale.eu/ [5] L. Theis, W. Shi, A. Cunningham, and F. Huszar, "Lossy image compression with compressive autoencoders," in International Conference on Learning Representations, 2017 [6] Y. Wang, N. A. A. Braham, Z. Xiong, C. Liu, C. M. Albrecht and X. X. Zhu, "SSL4EO-S12: A large-scale multimodal, multitemporal dataset for self-supervised learning in Earth observation [Software and Data Sets]," IEEE Geoscience and Remote Sensing Magazine, vol. 11, no. 3, pp. 98-106, Sept. 2023, doi: 10.1109/MGRS.2023.3281651 [7] C. Gomes and T. Brunschwiler, "Neural Embedding Compression for Efficient Multi-Task Earth Observation Modelling," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 8268-8273, doi: 10.1109/IGARSS53475.2024.10642535 [8] G. Meoni, R. D. Prete, F. Serva, A. De Beusscher, O. Colin and N. Longépé, "Unlocking the Use of Raw Multispectral Earth Observation Imagery for Onboard Artificial Intelligence," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 12521-12537, 2024, doi: 10.1109/JSTARS.2024.3418891 [9] https://www.esa.int/Applications/Observing_the_Earth/Phsat-2/Introducing_Phsat-2
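The measurement behind learned compression, quantizing a latent tensor and estimating the entropy that bounds its coded size, can be sketched in a few lines (the latent distribution and bin width below are illustrative, not taken from the pipeline described above):

```python
import numpy as np

# Quantize a stand-in latent tensor and estimate its empirical entropy in
# bits per symbol; an entropy coder cannot do better than this bound.
rng = np.random.default_rng(2)
latent = rng.normal(scale=2.0, size=10_000)   # hypothetical encoder output

q = np.round(latent)                          # uniform scalar quantization
_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
entropy_bits = -np.sum(p * np.log2(p))        # Shannon entropy estimate

# Entropy is positive and below log2(#symbols), which uniform data attains.
assert 0.0 < entropy_bits < np.log2(len(counts)) + 1e-9
```

Neural compressors push this bound down by learning latents whose distribution the entropy model fits tightly, rather than by quantizing raw pixels.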
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Mining OpenStreetMap for Planetary-scale Labels to Build an Earth Observation Foundation Model

Authors: Sherrie Wang, Chenhui Zhang
Affiliations: Massachusetts Institute Of Technology
The rapid progress of machine learning, exemplified by breakthroughs like large language models, has been fueled by massive, diverse datasets that capture human knowledge—such as the vast troves of text data available on the internet. We hypothesize that the potential of deep learning to unlock knowledge from Earth observation (EO) data hinges on access to large, richly labeled datasets that can help models interpret complex, real-world imagery. To address this, we curated a dataset leveraging annotations from OpenStreetMap (OSM) alongside high-resolution remote sensing imagery, capturing diverse real-world contexts at a planetary scale. Using this dataset, we trained a foundation model that can perform a range of EO tasks, including land cover classification and object detection. Our dataset and model pave the way for new insights and capabilities in remote sensing image analysis by grounding models in rich, human-curated geospatial data.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Foundation models for ocean monitoring: large-scale training and applications

Authors: Alexandre Tuel, Thomas Kerdreux, Quentin Febvre, Alexis Mouche, Bertrand Chapron
Affiliations: Galeio, Laboratoire d’Océanographie Physique et Spatiale, Institut Français de Recherche pour l’Exploitation de la Mer
Foundation models are becoming indispensable tools to monitor the Earth system, thanks to their ability to generalize to new tasks and unseen data. This capability is particularly valuable for exploring the vast amounts of Earth Observation (EO) data available today. Oceans, as a case in point, present unique challenges for large-scale monitoring due to their immense size, limited in-situ data availability, and the wide variety of oceanic and atmospheric phenomena observable on their surface. Synthetic Aperture Radar (SAR) imagery, such as that from the Sentinel-1 constellation, is a powerful tool to monitor oceans. SAR captures complex oceanic and atmospheric dynamics (rain, fronts, convective cells, sea ice, etc.), providing valuable insights. However, supervised algorithms trained to analyse this data at large scales often struggle due to insufficient (or insufficiently diverse) training data. This creates a pressing need for foundation models capable of accurately representing ocean SAR imagery. A critical step in developing such models is the selection of a diverse and balanced image database. Previous work on foundation models for a variety of modalities (text, image, time series, etc.) has indeed shown that model performance scales not only with model and training data size, but also with the diversity and balance of the training data. For terrestrial applications, off-the-shelf land cover datasets can be used to sample balanced training data across classes like urban, desert, or cropland. However, this approach does not translate well to ocean data. Ocean imagery is characterized by significant redundancy (e.g., highly frequent wave patterns), and we lack prior knowledge to guide a diverse and balanced sampling. Without such sampling, models risk overfitting to the most frequent patterns, leading to poor generalization, or wasting computational resources on redundant training data.
In this presentation, we will discuss our development of new foundation models for Sentinel-1 Wave Mode (WV) data over oceans. Building on the DINO framework, we designed a dynamic sampling methodology to address data redundancy, ensuring the model is exposed to diverse images during training. This approach accelerates training and enhances downstream performance. We will showcase the application of these models to various downstream tasks, including semantic search, scene classification, and wave height estimation.
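One simple way to realize diversity-aware sampling in embedding space is greedy farthest-point selection; this is an illustrative stand-in, not necessarily the authors' dynamic sampling method:

```python
import numpy as np

# Greedy farthest-point sampling: start from an arbitrary image and
# repeatedly pick the embedding farthest from everything selected so far,
# so rare patterns are sampled despite heavy redundancy.
def diverse_sample(emb, k):
    chosen = [0]
    d = np.linalg.norm(emb - emb[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(emb - emb[nxt], axis=1))
    return chosen

rng = np.random.default_rng(3)
# 100 near-duplicate "wave pattern" embeddings plus 3 rare outliers.
emb = np.vstack([rng.normal(0, 0.01, size=(100, 8)),
                 rng.normal(5, 0.01, size=(3, 8))])
picked = diverse_sample(emb, k=4)
# The rare cluster is hit early despite being only ~3% of the data.
assert any(i >= 100 for i in picked)
```

Uniform random sampling of the same pool would pick four redundant wave images most of the time; distance-based selection is what surfaces the rare phenomena.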
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Learned representations for accelerating planetary-scale mapping and monitoring with sparse labelled data

Authors: Christopher Brown, Michal Kazmierski, Valerie Pasquarella, William Rucklidge, Masha Samsikova, Evan Shelhamer, Emily Schechter, Sophia Alj, Sean Askay, Alexis Boukouvalas
Affiliations: DeepMind, DeepMind, Google
The Earth Observation (EO) community has a long history of engineering designed features like spectral indices or coefficients from temporal curve-fitting approaches that maximize information across multiple dimensions by encoding physical properties and expert knowledge. Yet in recent years, we have entered a new age of remote sensing where the availability of a large and ever-growing suite of image archives on cloud-computing platforms has created new opportunities to synthesize across space, time, and observation records to extract meaningful data patterns that support downstream analyses. To this end, we propose a novel deep learning approach that harmonizes a vast amount of unstructured EO data, including optical, radar, lidar, climate, topography, and text, to create a globally consistent, semantically rich feature space. Our approach is capable of handling the complexities of geospatial data, such as varying scales, time, and projections. Our model is trained on over 3 billion globally distributed image tiles, and its output is a learned per-pixel representation or “embedding”. We surface these representations as “embedding fields” that have a nominal 10-meter resolution and are specifically designed to work well in both data-poor and data-rich training regimes. To establish the foundational utility of these learned embedding fields for applied use cases, we assess performance on a set of sixteen archetypal classification, regression, and change-detection tasks, including land use and land cover mapping, crop type mapping, tree species predictions, and predictions of evapotranspiration and emissivity, simulating a low-shot regime with only simple models (linear and k-nearest neighbors) and no more than 300 samples per class. We compare our results with other designed and learned featurization approaches, including image composites, harmonic features, and embedding layers produced by other geospatial "foundation models". 
We find that our representations consistently achieve the highest accuracies for classification problems and the lowest errors for regression problems across all approaches considered, with an order of magnitude fewer dimensions than other representation learning approaches, suggesting enormous potential to accelerate planetary-scale mapping and monitoring with sparse labelled data. Nonetheless, scaling deep learning models remains an ongoing challenge and a significant barrier to use. To address this, we have applied our model at scale and created global annual embedding field datasets that can be substituted for raw imagery in standard EO workflows, making it easier than ever for domain experts to leverage the power of deep learning for a wide variety of downstream tasks.
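The low-shot probes described above (linear and k-nearest-neighbour models on frozen embeddings) can be sketched with a minimal 1-nearest-neighbour classifier; the embeddings here are synthetic stand-ins:

```python
import numpy as np

# 1-NN probe on frozen per-pixel embeddings: no training beyond storing
# a few labelled examples, which is the point of the low-shot regime.
def knn_predict(train_x, train_y, test_x):
    d = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=-1)
    return train_y[np.argmin(d, axis=1)]

rng = np.random.default_rng(4)
# Two hypothetical classes (e.g. two land-cover types), 50 samples each,
# far fewer than the 300-per-class budget described above.
class0 = rng.normal(0, 0.1, size=(50, 16))
class1 = rng.normal(1, 0.1, size=(50, 16))
train_x = np.vstack([class0, class1])
train_y = np.array([0] * 50 + [1] * 50)

test_x = np.vstack([rng.normal(0, 0.1, size=(5, 16)),
                    rng.normal(1, 0.1, size=(5, 16))])
pred = knn_predict(train_x, train_y, test_x)
assert (pred == np.array([0] * 5 + [1] * 5)).all()
```

If the embedding space already separates the classes, such a trivial model suffices; that is what the benchmark measures.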
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Self-supervised learning for multispectral and hyperspectral remote sensing

Authors: Nassim Ait Ali Braham, Dr. Conrad Albrecht
Affiliations: German Aerospace Center (dlr), Technical University of Munich
Foundation models have triggered a paradigm shift in computer vision and remote sensing, demonstrating remarkable generalizability across tasks with minimal fine-tuning. While extensive research has focused on RGB, multispectral, and radar imagery [1-4], the potential of foundation models on hyperspectral imagery (HSI) remains largely untapped. Moreover, most existing works are limited to one or two sensors, despite the availability of diverse satellites capturing imagery across varying spectral ranges and spatial resolutions. Integrating different data sources can improve the versatility of foundation models and enable exploitation of the complementarity of different sensors. In this work, we develop a foundation model for multispectral and hyperspectral data, leveraging the rich spectral information unique to HSI. Specifically, we pair the EnMAP-based SpectralEarth dataset [5] with Sentinel-2 and Landsat-8 imagery to create a diverse, multi-sensor dataset for pre-training. Additionally, we design a self-supervised learning method based on masked image modeling and apply it to a vision transformer model. The employed architecture consists of a few initial sensor-specific layers to account for differences in spatial and spectral characteristics, followed by a shared vision transformer backbone that processes all modalities in a unified latent space. The pre-trained model is evaluated on downstream tasks from each sensor, including land cover and crop type classification, to assess its generalizability. Our findings highlight the effectiveness of multi-sensor pre-training and demonstrate positive transfer across modalities. This study contributes to ongoing efforts in developing generic foundation models and provides insights into the training of such models. [1] Fuller, Anthony, Koreen Millard, and James Green. "CROMA: Remote sensing representations with contrastive radar-optical masked autoencoders." Advances in Neural Information Processing Systems 36 (2024).
[2] Hong, Danfeng, et al. "SpectralGPT: Spectral remote sensing foundation model." IEEE Transactions on Pattern Analysis and Machine Intelligence (2024). [3] Wang, Yi, et al. "SSL4EO-S12: A large-scale multimodal, multitemporal dataset for self-supervised learning in Earth observation [Software and Data Sets]." IEEE Geoscience and Remote Sensing Magazine 11.3 (2023): 98-106. [4] Stewart, Adam, et al. "Ssl4eo-l: Datasets and foundation models for landsat imagery." Advances in Neural Information Processing Systems 36 (2024). [5] Braham, Nassim Ait Ali, et al. "SpectralEarth: Training Hyperspectral Foundation Models at Scale." arXiv preprint arXiv:2408.08447 (2024).
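The random token masking at the core of masked image modelling can be sketched as follows (patch count, token width, and mask ratio are illustrative, not the pre-training configuration used here):

```python
import numpy as np

# Randomly hide a fraction of patch tokens; the encoder sees only the
# visible tokens and the decoder must reconstruct the masked ones.
def random_mask(tokens, mask_ratio, rng):
    n = tokens.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])   # indices fed to the encoder
    mask_idx = np.sort(perm[n_keep:])   # indices to be reconstructed
    return tokens[keep_idx], keep_idx, mask_idx

rng = np.random.default_rng(5)
tokens = rng.normal(size=(196, 32))     # e.g. 14x14 patches, 32-dim tokens
visible, keep_idx, mask_idx = random_mask(tokens, mask_ratio=0.75, rng=rng)
assert visible.shape == (49, 32)
assert len(keep_idx) + len(mask_idx) == 196
```

For hyperspectral inputs the same scheme applies once sensor-specific layers have turned each patch, whatever its band count, into a fixed-width token.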
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Multi-modal Generative Modelling of Copernicus Data

Authors: Miguel Espinosa, Elliot J. Crowley, Mikolaj Czerkawski
Affiliations: University of Edinburgh, European Space Agency Φ-lab
The Copernicus programme offers a rich set of sensors for Earth observation (EO) capable of capturing the complexity of our planet, with each sensor, such as radar (Sentinel-1) or multi-spectral optical (Sentinel-2), providing unique insight about the observed environment. For instance, radar data like Sentinel-1 captures surface structure and movement in all weather conditions, while optical data from Sentinel-2 provides spectral information for vegetation, water, and land monitoring under clear skies. Another valuable representation of the planet is contained in digital elevation models (DEM), which offer topographic context that can enhance the interpretation of both radar and optical imagery. A single modality is often not enough to fully capture the complexity of phenomena occurring in the environment, as each data type provides unique and complementary insights. Understanding and modeling relationships between modalities enables AI models to learn robust, holistic representations (i.e. embeddings of the inputs), enhancing pipelines and improving performance in tasks like land-cover classification and segmentation. By learning to generate individual modalities based on the information present in others, a wide range of tasks become feasible. For instance, we can approximate missing data on radar coverage maps or cloud covered regions. In addition, it enables conditional data generation, suitable for diverse image-to-image translation tasks. Finally, it can simply be used for data augmentation for data-hungry deep learning training pipelines. However, leveraging these complementary modalities effectively remains a major technical challenge for the existing generative models, such as denoising diffusion models, which are conventionally trained on a single modality.
These models, which have shown remarkable success in generating high-quality novel data, typically operate on single modalities, limiting their capacity to fully exploit the rich information provided by EO's multi-modal datasets. While traditional diffusion models can handle different data modalities through conditioning mechanisms, they do not treat tokens from each modality in the same way, hindering their potential for learning robust cross-modal representations for EO tasks. This work presents an adaptation of a multi-modal diffusion framework for Copernicus data. Our method extends beyond bi-modal fusion (e.g., Sentinel-1 and Sentinel-2) to incorporate a third modality, topographical context from COP-DEM, demonstrating scalability to more modalities. What is jointly learned are the marginal, conditional, and joint distributions between these data modalities, providing more flexibility in which modality is generated or used for conditioning. This training results in richer cross-modal representations, beneficial for the downstream tasks (land cover classification, semantic segmentation, and multi-modal image-to-image translation). The framework is trained on the ESA MajorTOM dataset and learns richer, more detailed features than single-modality models. By effectively combining the strengths of different data types, our model performs well across various EO tasks. Furthermore, our simple noise-prediction training objective enables the model to handle diverse tasks without extra fine-tuning (zero-shot inference). This work demonstrates the potential of multi-modal diffusion as a scalable framework for integrating diverse Earth observation data into a single model. The resulting model flexibility and performance improvement demonstrate an attractive potential for representation learning, data augmentation, and image reconstruction for the widely relevant Copernicus data.
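The noise-prediction training objective mentioned above can be sketched in a few lines; the closed-form "model" below is a stand-in (a perfect predictor) so the loss can be checked exactly, and the schedule value is arbitrary:

```python
import numpy as np

# Epsilon-prediction objective of diffusion training: noise a clean
# sample at timestep t, then score a predictor by MSE against the
# injected noise.
rng = np.random.default_rng(6)
x0 = rng.normal(size=(4, 8))               # clean (latent) data, toy sizes
alpha_bar = 0.6                            # cumulative schedule value at t
eps = rng.normal(size=x0.shape)            # injected Gaussian noise

xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

def model(xt):
    # Hypothetical perfect predictor: inverts the noising in closed form.
    return (xt - np.sqrt(alpha_bar) * x0) / np.sqrt(1 - alpha_bar)

loss = np.mean((model(xt) - eps) ** 2)     # the training loss
assert loss < 1e-12
```

In the multi-modal setting the same single objective is applied to the concatenated tokens of all modalities, which is why one trained model can serve generation, translation, and infilling tasks without task-specific fine-tuning.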

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone T)

Poster: Determining minimum and optimal polygon requirements for regional spectral reference curves in crop monitoring: balancing data quality and processing efficiency

Authors: MSc Eng Katarzyna Długokęcka, MSc Eng Karolina Wawrzusiszyn, MSc Eng Łukasz
Affiliations: OPEGIEKA Sp. z o.o.
Crop spectral curves, which include vegetation indices such as the NDVI (Normalized Difference Vegetation Index), are crucial tools in crop condition analysis, damage assessment and yield forecasting. Region-specific spectral reference curves, created from statistical data of multiple crops of the same species within a specific area, enable more accurate comparative analyses. The authors highlight the importance of ensuring high-quality reference curves, particularly for real-time analyses conducted during the growing season, when comprehensive crop data are not yet available. This approach enables continuous monitoring and decision-making but presents significant technical and methodological challenges. Properly acquiring and filtering single-crop time-series remote sensing data, as well as generating curves for multiple regions across the entire country, is a time-consuming process that requires significant computational resources. The main challenge lies in finding a balance between the number of crops (polygons) analyzed and the quality of the resulting reference curves. An excessive number of polygons increases computation time and costs, while too few may lead to unreliable results. It is also crucial to consider variations in the quality of input data and regional specificity, which can be defined at the local level within a single grid cell (e.g., squares or hexagons) or at an administrative level (e.g., voivodeships in Poland). A key issue is determining the optimal number of polygons for analysis and the minimum number required to obtain reliable results, particularly in the context of real-time processing, where, unlike post-season analyses, data availability may be limited. The objective of the study was to develop an optimized approach for generating reference curves that minimizes the number of analyzed crops without compromising result quality.
This includes defining the minimum and optimal conditions for selecting crop polygons in various spatial units (regions). This approach will improve efficiency and reduce data processing time, while ensuring high-quality results, which is crucial for practical applications such as crop damage assessment in agricultural insurance and large-scale crop detection. The study used vector data from the 2019/2020 season on winter rapeseed crops, obtained from the Agency for Restructuring and Modernisation of Agriculture (ARiMR). Analyses were conducted for two region types: the entire voivodeship (Greater Poland Voivodeship) and a hexagonal grid cell with a side length of 30 km, located in the northwestern part of the studied voivodeship. To generate spectral curves, the study used Sentinel-2 Level-1C data, utilizing both the NDVI index and a cloud mask. The reference curve creation process involved calculating statistics for cloud-free NDVI data at the polygon level for each crop, then processing these statistics from all crops in the region into a spectral curve with a daily temporal resolution, using linear interpolation and the Savitzky-Golay filter. After curve creation, two measures were assessed: the similarity coefficient, calculated based on (Sinergise, 2020, Area monitoring: Similarity score. Medium. https://medium.com/sentinel-hub/area-monitoring-similarity-score-72e5cbfb33b6), and a homogeneity measure for individual crops - a custom-designed inhomogeneity_score parameter, determined by the normalized spread between the 10th and 90th percentile NDVI time series. Outliers, which were related to incorrect species or crop area declarations in the input data from ARiMR or to crop heterogeneity, were iteratively removed - three times for the voivodeship and twice for the hexagon. Each iteration allowed for data cleansing and curve recalculation.
The number of input polygons was progressively reduced: from 442 in the voivodeship and 161 in the hexagon to a minimum of 10, enabling assessment of how reducing input data affected curve quality. For the voivodeship area, polygon numbers were reduced in steps: 350, 300, 250, 200, 150, 100, 90, 80, 70, 60, 50, 40, 30, 20, 10. For the hexagon, the steps were: 150, 100, 90, 80, 70, 60, 50, 40, 30, 20, 10. The crops for each curve were selected based on polygon suitability for calculating statistics, taking into account the spatial resolution of the input satellite data (10 m). Selection criteria included a minimum area of 0.1 ha (after applying a 10 m buffer) and a width of at least 30 m. Due to the large dataset of 14,335 polygons in the voivodeship, it was necessary to filter the reduced number of crops in such a way that the set reflected both the real conditions of operational work and, in the best possible way, represented the phenological development of plants in the region. The maximum polygon area was limited to 2 ha, and crops with the best characteristics in terms of shape, area, and uniformity of distribution within the region were selected. Data processing and calculations were performed using FME Software, popular Python libraries for data processing and analysis, and the SentinelHub API for satellite data acquisition. Similarity analysis between generated curves included calculating matrices of mutual similarity measures with the following indicators: the similarity coefficient, Pearson correlation coefficient, and Mean Absolute Error (MAE) in NDVI units. These measures helped determine the minimum and optimal numbers of polygons for curve generation. The minimum value was set as the number of polygons for which MAE < 0.02 and similarity < 0.02, where error values were stable. The optimal value was determined for cases meeting these same conditions with the additional constraint of MAE < 0.01. 
These thresholds were based on analyzing result stability and the acceptable NDVI error, considering the use of the curves in subsequent analytical processes. For the voivodeship, the minimum number of polygons was 100, and the optimal number was 150. When creating spectral curves within the hexagon, it was determined that the minimum number of polygons was 30, and the optimal number was 50. These results aligned with previous empirical observations regarding quantitative requirements that ensured a balance between result accuracy and the efficiency of analytical processes. The conclusions of the analysis indicate that the proper selection of polygons, considering their spatial distribution and geometric parameters, enables generation of high-quality spectral curves, even with small input datasets. However, regarding the minimum and optimal values for the number of input polygons for curve generation, it is recommended to increase these numbers by 20%, due to the iterative removal of outlier values that can still appear in even well-cleaned datasets. It is also recommended to verify the results for other crop species, in different regions and seasons, as well as to apply the developed polygon selection algorithm to streamline analysis and assess whether the identified values can be applied to other types of regions than those analyzed in this study. The method could be expanded by dividing the research areas where differences in the phenological development of a given plant were observed or applied to the analysis of other types of crops. Generating reference curves for a crop in a given season would also allow for the identification of so-called time-space twins, which, in a different season within a given area, exhibited similar or identical growth characteristics to the analyzed crop. 
This type of analysis would enable the forecasting of potential trends in plant development based on comparisons with other archive seasons, for which suitable twins were identified.
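The custom inhomogeneity_score described above (the normalized spread between the 10th and 90th percentile NDVI time series of a crop) might be sketched as follows; the exact normalization used by the authors is not stated, so dividing by the median curve is an assumption for illustration.

```python
import numpy as np

def inhomogeneity_score(ndvi_pixels):
    """Toy per-crop homogeneity measure: spread between the 10th and 90th
    percentile NDVI time series, normalized by the median curve (the
    normalization is an assumption, not the study's exact definition).

    ndvi_pixels: array of shape (n_pixels, n_dates), cloud-free NDVI.
    Returns a scalar; larger values indicate a less homogeneous crop.
    """
    p10 = np.percentile(ndvi_pixels, 10, axis=0)
    p90 = np.percentile(ndvi_pixels, 90, axis=0)
    med = np.median(ndvi_pixels, axis=0)
    # Average the normalized spread over the season; guard against ~0 medians.
    return float(np.mean((p90 - p10) / np.maximum(np.abs(med), 1e-6)))

# A perfectly uniform field scores 0; a noisy one scores higher.
uniform = np.full((50, 12), 0.7)
noisy = 0.7 + np.random.default_rng(1).normal(0, 0.1, size=(50, 12))
```

Crops whose score exceeds a chosen cut-off would be treated as outliers and dropped before the reference curve is recalculated, mirroring the iterative cleansing step described above.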

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: A.07.02 - POSTER - Quantification of Water Resources and Their Evolution Under Anthropogenic Pressure

As global populations expand and develop, the demand for water intensifies, placing increasing strain on water resources. Additionally, climate change exacerbates hydrometeorological extremes and introduces significant uncertainty regarding the short- to mid-term availability of water for both communities and ecosystems. Having an accurate and timely overview of available fresh water is therefore essential for its rational and sustainable management. This session aims to emphasize how Earth observations contribute to quantifying different continental water storages—including, but not limited to, groundwater, soil moisture, lakes, artificial reservoirs, seasonal snow and glaciers—and to monitoring and predicting their spatial and temporal evolution, in particular in poorly gauged areas.

We invite contributions that address these quantitative aspects, both in terms of water storages and fluxes from basin to global scales. Works focusing on the identification and characterization of hotspots of unsustainable water exploitation over time and space are particularly welcome. Studies exploring direct human impacts on water availability, such as water abstraction for irrigation, decreased recharge due to deforestation, or the effects of dams on downstream areas, are highly encouraged. Additionally, research focusing on attributing observed variations in water storage to natural variability, climate change, or more direct human impacts will be of great interest. There is no restriction on the type of satellite missions or sensors used in the submissions, but approaches based on multi-mission analyses will be favoured.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Inland Water Dynamics in Central Italy: Surface and Water Level Extraction Based on COSMO-SkyMed Imagery

Authors: Lorenza Ranaldi, Alireza Hamoudzadeh, Filippo Bocchino, Giulia Graldi, Marco Casini, Sara Taviani, Deodato Tapete, Alessandro Ursi, Maria Virelli, Patrizia Sacco, Valeria Belloni, Roberta Ravanelli, Andrea Nascetti, Mattia Crespi
Affiliations: Sapienza University of Rome, DICEA, Geodesy and Geomatics Division, Sapienza School for Advanced Studies, Sapienza University of Rome, Italian Space Agency (ASI), University of Liège, Department of Geography (Faculty of Sciences), Geomatics Unit, Central Apennine District Basin Authority (AUBAC), KTH Royal Institute of Technology, Division of Geoinformatics
Climate change is affecting the extension and levels of inland water bodies, decreasing the availability of this important resource. In this context, constant monitoring of water reservoirs is essential for identifying and addressing potential hydrological crises. Traditional gauge measurements are extremely effective for monitoring water level variations but are limited to accessible areas. Therefore, there is a need for alternative approaches. This study aims to develop a workflow using Earth Observation (EO) data to monitor changes in water level and surface extent. Specifically, we used Synthetic Aperture Radar (SAR) images owing to their day-and-night, all-weather, high-resolution acquisitions. Among SAR data, we selected COSMO-SkyMed STRIPMAP images acquired by the first- and second-generation constellations of satellites, made available by the Italian Space Agency (ASI) in the framework of the GRAW project. These data were selected not only owing to their spatial resolution (a significant upgrade compared to datasets from global medium-resolution SAR missions such as Sentinel-1 Interferometric Wide Swath mode imagery), but also given the possibility of selecting images from a temporally dense and regular archive like the Map Italy Plan managed by ASI over Italy. To derive water surface masks, we developed an algorithm that uses a radiometric approach leveraging differences in the backscattered signal to distinguish between water and land pixels [1]. We applied standard image processing techniques such as histogram equalization for contrast enhancement and bilateral filtering for noise reduction. Then, we used two classification algorithms to classify water bodies: Otsu's thresholding, which minimises intraclass variance for binary classification, and k-means clustering, which identifies multiple intensity-based classes based on the distance from the cluster centres.
In the case of k-means clustering, we used three different classes to classify each pixel as land, water, or border areas that may be characterized by features such as vegetation [2,3]. After classifying each pixel, the largest polygon within the water class was identified and extracted for segmentation. Finally, the water mask polygon was georeferenced and exported in the original UTM coordinate system of the COSMO-SkyMed STRIPMAP image. We validated the results by comparing the automatically generated masks with reference masks manually digitised by an operator on selected COSMO-SkyMed images. We used standard performance metrics: accuracy, F1 score, recall, and precision. We applied this methodology to Lake Albano in the Lazio Region (Italy), selected as a case study in collaboration with the Central Apennine District Basin Authority (AUBAC), due to significant water level reductions registered in recent years [4]. Results showed a strong agreement between the manually generated reference mask and the automatically generated masks, with F1 scores of 0.997 for Otsu's method and 0.996 for k-means clustering. Additionally, we generated a time series of water surface area for seven epochs between December 2023 and September 2024, which showed a decreasing trend consistent with hydrometer data. Once the masks of the water surface were generated, we focused on estimating water levels. The main idea behind this goal was to perform a stereo-SAR approach in the object space using COSMO-SkyMed STRIPMAP imagery, based on the methodologies already implemented in the pyDATE software [5]. Assuming a constant water level across the water reservoir, two SAR images acquired as close in time as possible were projected in ground range onto a plane at an elevation equal to the mean known elevation of the water reservoir in a chosen time interval.
Then, the actual elevation at the mean epoch of the two SAR images was estimated, suitably iteratively adjusting the elevation of the projection plane, until the correlation between the two ground range SAR images was maximised. This approach was applied to water reservoirs with available water level measurements and encouraging agreements were found. Although the approach has been partially validated through a single case study, ongoing efforts are focusing on developing and improving the methodology. In the future, this workflow will be applied to diverse water bodies and independent of specific SAR satellite constellations. Acknowledgements This research is performed in the framework of the GRAW project, funded by the Italian Space Agency (ASI), Agreement n. 2023-1-HB.0, as part of the ASI’s program “Innovation for Downstream Preparation for Science” (I4DP_SCIENCE). References [1] Li, J.; Ma, R.; Cao, Z.; Xue, K.; Xiong, J.; Hu, M.; Feng, X. Satellite Detection of Surface Water Extent: A Review of Methodology. Water 2022, 14, 1148. https://doi.org/10.3390/w14071148 [2] Ciecholewski, Marcin. Review of Segmentation Methods for Coastline Detection in SAR Images. Archives of Computational Methods in Engineering, 2023. 31. 10.1007/s11831-023-10000-7. [3] Guo, Z.; Wu, L.; Huang, Y.; Guo, Z.; Zhao, J.; Li, N. Water-Body Segmentation for SAR Images: Past, Current, and Future. Remote Sens. 2022, 14, 1752. https://doi.org/10.3390/rs14071752 [4] Ranaldi, L., Hamoudzadeh, A., Bocchino, F., Tapete, D., Ursi, A., Virelli, M., Sacco, P., Belloni, V., Ravanelli, R. and Crespi, M., "Exploring Water Reservoir Dynamics in Central Italy: A Preliminary Workflow for COSMO-SkyMed Imagery-Based Water Segmentation", 14° Workshop Tematico AIT-ENEA | Telerilevamento applicato alla gestione delle risorse idriche, Bologna, Italy, 2024, https://hdl.handle.net/11573/1714593 [5] Di Rita, M., Nascetti, A., & Crespi, M. (2017). 
Open source tool for DSMs generation from high resolution optical satellite imagery: Development and testing of an OSSIM plug-in. International Journal of Remote Sensing, 38(7), 1788–1808.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Monitoring of Seasonal Lakes Using Speckle-Related SAR Information

Authors: Anna Verlanti, Ferdinando Nunziata, Maurizio Migliaccio
Affiliations: Department of Engineering, University of Naples Parthenope, Department of Engineering, Sapienza University of Rome, National Earthquake Observatory, National Institute of Geophysics and Volcanology, Institute of Marine Sciences, National Research Council
Seasonal lakes are water bodies that form and disappear in response to seasonal changes. These lakes can appear during specific times of the year, usually due to seasonal rainfall, snowmelt, or the fluctuating water levels of nearby rivers or aquifers. The formation and disappearance of these lakes are often related to local climate and hydrological conditions: some seasonal lakes are formed by rainfall during the wet season and may dry up in the dry season when precipitation is low or evaporation is high. These lakes are not permanent: despite their temporary nature, seasonal lakes can support a variety of wildlife, including migratory birds, aquatic species, and plants that are adapted to fluctuating water levels [1]. Satellite-based monitoring of water bodies has become a powerful tool for tracking and analyzing various water-related phenomena on a global scale [2-3]. In particular, Synthetic Aperture Radar (SAR) is a powerful remote sensing technology for monitoring water bodies thanks to its high spatial resolution. However, SAR images contain "speckle" noise, a grainy pattern that can obscure fine details, and various processing techniques are typically applied to reduce this noise and improve clarity [4]. Yet speckle itself also carries useful information about surface features. Indeed, the goal of this study is to identify two states of a seasonal water body, i.e., completely filled with water and empty, using an approach based on the information extracted from the speckle. The approach distinguishes the water status according to the speckle using Normalized Intensity Moments (NIMs) in C-band Sentinel-1 Single Look Complex (SLC) SAR imagery [5]. A methodology based on the theoretical K-distribution and experimental NIMs is proposed to discriminate between the two states. The C-band speckle model exhibits a pronounced sensitivity to water content that can be exploited through K-distribution NIMs.
The methodology is also able to sort out rainy/windy cases. Experimental results, obtained processing descending Sentinel-1 SAR scenes acquired in the period spanning from 2018 to 2021 over the ”Sal’e Porcus” Italian salt pond, confirm the soundness of the proposed approach. References [1] S. Pakzad, A. Keshtkar, H. Keshtkar, H. Atashi, and A. Afzali, “Impact of lake surface changes on climate fluctuation within a lake-affected region,” Environmental Earth Sciences, vol. 80, 02 2021 [2] T. Liu, J. Dai, Y. Zhao, S. Tian, Z. Nie, and y. Chuanyong, “Using remote sensing technology to monitor salt lake changes caused by climate change and melting glaciers: insights from Zabuye Salt Lake in Xizang,” Journal of Oceanology and Limnology, vol. 41, pp. 1258–1276, 09 2023. [3] X. Ding and X. Li, “Monitoring of the water-area variations of Lake Dongting in China with ENVISAT ASAR images,” International Journal of Applied Earth Observation and Geoinformation, vol. 13, no. 6, pp. 894–901, 2011. [4] D. Long and F. Ulaby, Microwave Radar and Radiometric Remote Sensing, Artech, 2015. [5] G. Gao, “Statistical modeling of SAR images: A survey,” Sensors, vol. 10, no. 1, pp. 775–795, 2010
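The Normalized Intensity Moments at the core of the method can be sketched as follows. The K-distribution moment formula is the standard result for K-distributed intensity (cf. Gao, 2010); the actual decision rule used in the study to label a scene as filled or empty is not reproduced here.

```python
import numpy as np
from math import factorial

def empirical_nim(intensity, n):
    """Empirical Normalized Intensity Moment: <I^n> / <I>^n."""
    I = np.asarray(intensity, dtype=float)
    return float(np.mean(I**n) / np.mean(I)**n)

def k_distribution_nim(n, nu):
    """Theoretical NIM of K-distributed intensity with order parameter nu:
    m_n = n! * Gamma(nu + n) / (nu**n * Gamma(nu)).
    For integer n the Gamma ratio reduces to prod_{k=0}^{n-1}(nu + k),
    avoiding overflow; nu -> inf recovers fully developed speckle (m_n = n!).
    """
    ratio = 1.0
    for k in range(n):
        ratio *= (nu + k)
    return factorial(n) * ratio / nu**n

# Fully developed speckle has exponentially distributed intensity, so m_2 ~ 2.
rng = np.random.default_rng(0)
speckle = rng.exponential(scale=1.0, size=200_000)
m2 = empirical_nim(speckle, 2)
```

Comparing empirical NIMs of an image patch against the theoretical curves for different order parameters is one way the speckle statistics can be tied back to surface state, in the spirit of the approach described above.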

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Ensemble Evapotranspiration approach to assess the dynamics of Agricultural Land and Water Use Under National Development Policies: 23 Years of Observations in the Chichaoua Basin (Morocco)

Authors: IKRAM EL HAZDOUR, Michel LE PAGE, Lahoucine HANICH, Jeremy AUCLAIR, Vincent RIVALLAND, Marielle MONTGINOUL, Lionel JARLAN
Affiliations: CESBIO, University of Toulouse, IRD/CNRS/UPS/CNES, L3G Laboratory, Faculty of Sciences and Techniques, Cadi Ayyad University, Mohammed VI Polytechnic University (UM6P), Geology and Sustainable Mining Institute (GSMI), INRAE UMR G-Eau, Montpellier University
Over the past two decades, Moroccan agricultural development policies, notably the Green Moroccan Plan (GMP) and its successor, the Green Generation Plan (GGP), have aimed to enhance agriculture by improving resource management and increasing productivity. This study examines the impacts of these policies on land use and water consumption in the southwestern Tensift Basin, specifically the Chichaoua sub-basin. The study area's distinctive distribution of land and water resources defines two zones: an upstream zone, unexploited until the late 2010s, lying above a natural spring that historically served as the primary irrigation source for the downstream zone, which supports a traditional extensive irrigation system. This research analyzes 23 years (2000–2023) of changes in cropland extent and water consumption using Sentinel-2 and Landsat satellite data. Crop classification was modeled with a Random Forest workflow, while evapotranspiration (ET) rates were estimated using a robust ensemble approach. This approach combines contextual and pixel-based energy-balance methods (e.g., TSEB-GEE, SEBAL, SSEBOP, EVASPA, and SEBS) with water-balance models (e.g., MODSPA), and a daily interpolation scheme to deliver high-resolution estimates of the evolution of crop water needs. Annual land-use maps and detailed ET estimates were generated for the 23-year period, achieving an overall accuracy (OA) of 98% and validation F-scores between 0.8 and 0.9 for land-use classification. The findings reveal significant trends in irrigation expansion. Total irrigated areas increased from 3256 to 6569 hectares, with contrasting dynamics between upstream and downstream zones. Upstream fields expanded rapidly in response to the significant subsidies offered to farmers for the installation of drip irrigation systems within the frame of the GMP, and continued to grow under the GGP, increasing by over 4000 hectares in 23 years.
In contrast, downstream traditional fields contracted by 1299 hectares during the same period, losing more than 50% of olive groves due to water scarcity. ET rates displayed an overall increasing trend, reflecting the intensification of agriculture. In parallel, groundwater data from 2019 to 2022 indicate a significant decline in levels across both upstream and downstream zones, of up to 3.61 m/year. The main spring, which previously flowed at 600 l/s, dried up entirely by the winter of 2023. The expansion of cash crops like watermelon and tree orchards, including olives and citrus, has been most pronounced in the upstream area, aligning closely with development policy initiatives. Our findings quantify and map the marked rise in agricultural water usage, due to changes in land use and the expansion of large-scale irrigation systems encouraged by policy initiatives. The shift observed in the study region underscores the dual impact of agricultural development policies: while they boost productivity, they can also exacerbate water resource challenges, particularly in areas where water scarcity, frequent droughts, and climate change effects are more severe.
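The ensemble-plus-daily-interpolation step described above might look like the following toy sketch. The member models, weighting, and interpolation scheme actually used in the study are more sophisticated, so everything below (equal-weight averaging, linear interpolation between overpasses) is an illustrative assumption.

```python
import numpy as np

def daily_et_ensemble(doy_obs, et_members, doy_daily):
    """Average ET estimates from several models at each satellite overpass,
    then linearly interpolate the ensemble mean to a daily series.

    doy_obs:    day-of-year of overpasses, shape (n_obs,)
    et_members: per-model ET at overpasses, shape (n_models, n_obs), mm/day
    doy_daily:  days to interpolate to, shape (n_days,)
    """
    et_mean = np.nanmean(et_members, axis=0)   # ensemble mean per overpass
    return np.interp(doy_daily, doy_obs, et_mean)

doy = np.array([10, 20, 30])
members = np.array([[2.0, 4.0, 3.0],           # e.g. an energy-balance model
                    [2.4, 3.6, 3.4]])          # e.g. a water-balance model
daily = daily_et_ensemble(doy, members, np.arange(10, 31))
```

Summing such a daily ET series over a season and over the classified irrigated polygons is what yields the basin-scale water-consumption trends reported above.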

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Forecasting Total Water Storage Changes with Graph Neural Networks

Authors: Viola Steidl, Shan Zhao, Dr. Fupeng Li, Jürgen Kusche, Prof. Dr.-Ing. habil. Xiaoxiang Zhu
Affiliations: Technical University of Munich, University of Bonn, MCML
The availability of fresh water is vital to ecosystems and communities. In a changing climate, the increased risk of droughts makes an accurate view of changes in terrestrial water storage (TWS) all the more important. Moreover, accurately forecasting changes in TWS could help policymakers manage resources and thus mitigate the negative impacts of droughts. Forecasting tasks nowadays are often solved using machine learning algorithms. Powerful models such as Google's GraphCast are even able to forecast the weather. However, these models require vast amounts of data. In contrast, total water storage changes (TWSC) derived from GRACE/GRACE-FO observations only date back to 2002 and are available on a 1°x1° grid at monthly resolution. Nevertheless, Li et al. (2024) showed that machine-learning approaches can forecast TWS variation tendencies up to one year ahead. They cleverly exploit temporal lag relationships between TWS anomalies and ocean, atmospheric, or land variables. In our work, we propose a framework using graph neural networks to forecast TWSC. Graph neural networks can not only exploit lag relationships in the temporal domain but also incorporate spatial dependencies between adjacent grid cells and regions. We create a heterogeneous graph with separate node types for TWSC, ocean, atmospheric, and land variables. The graph structure makes it possible to incorporate data of varying spatial and temporal resolution, with graph edges encoding node relationships such as geographical proximity, hydrological connectivity, or statistical correlations. The graph neural network first encodes the temporal information by a one-dimensional convolution. Afterward, the information is propagated through the network using a message-passing mechanism. A final layer collects the encoded information to forecast TWSC. We develop this framework building on reconstructed TWSC from Li et al. (2021) to avoid the data gaps in GRACE/GRACE-FO-derived TWSC.
Additionally, the framework relies on data from various sources, such as sea surface temperature, ERA5 climate data, land cover, and also irrigated area maps, as predictors. Combining the information from multiple sources in a graph and exploiting the temporal and spatial correlations enhances the predictive power of the network as the evaluation on the forecasting test period shows. Li, F., Kusche, J., Chao, N., Wang, Z., and Löcher, A.: Long-Term (1979-Present) Total Water Storage Anomalies Over the Global Land Derived by Reconstructing GRACE Data, Geophys. Res. Lett., 48, e2021GL093492, https://doi.org/10.1029/2021GL093492, 2021. Li, F., Kusche, J., Sneeuw, N., Siebert, S., Gerdener, H., Wang, Z., Chao, N., Chen, G., and Tian, K.: Forecasting Next Year’s Global Land Water Storage Using GRACE Data, Geophys. Res. Lett., 51, e2024GL109101, https://doi.org/10.1029/2024GL109101, 2024.
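A single message-passing update of the kind described above can be sketched as follows. The heterogeneous node types, edge encodings, and learned weights of the actual framework are not reproduced; this is only a toy illustration of how spatial information propagates between grid cells across successive layers.

```python
import numpy as np

def message_passing_step(h, adj, W_self, W_msg):
    """One toy message-passing update: each node aggregates the mean of its
    neighbours' features, applies linear transforms, then a ReLU.

    h:   node features, shape (n_nodes, d)
    adj: binary adjacency matrix, shape (n_nodes, n_nodes)
    """
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    msg = adj @ h / deg                               # mean over neighbours
    return np.maximum(h @ W_self + msg @ W_msg, 0.0)  # ReLU activation

# Three grid cells in a line (0-1-2): information from cell 0 reaches
# cell 2 only after two message-passing steps.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
h0 = np.array([[1.0], [0.0], [0.0]])
I = np.eye(1)                                         # identity "weights"
h1 = message_passing_step(h0, adj, I, I)
h2 = message_passing_step(h1, adj, I, I)
```

Stacking such layers is what lets the forecast at one grid cell draw on hydrologically connected or correlated cells several hops away, which a purely per-pixel temporal model cannot do.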

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Improving the Reconstruction of the Hydrological Cycle through Satellite Observations: The Case Study of the Po River Basin

Authors: Sindhu Kalimisetty, Serena Ceola, Irene Palazzoli, Alberto Montanari, Paolo Stocchi, Silvio Davolio, Stefania Camici
Affiliations: National Research Council, Research Institute for Geohydrological Protection, Department DICAM, University of Bologna, National Research Council, Institute of Atmospheric Sciences and Climate, Department of Earth Sciences, University of Milan
Climate change is having a detrimental impact on the availability of water resources, and the effects are found to be particularly pronounced in regions where human activity is more concentrated. Across the world, the rising frequency of meteorological, agricultural, and hydrological droughts highlights the urgency of understanding how human activities contribute to their occurrence, which is crucial for effective water management. Specifically, estimating water storage in groundwater, reservoirs, and snowpack is essential for determining overall water availability and, in turn, assessing the amount of water that can be sustainably allocated for irrigation purposes. In this context, the INTERROGATION project, funded by the Italian Ministry of Universities and Research, examines the interactions between climatic and anthropogenic factors in the development and recovery of major hydrological droughts that have affected the Po river basin in northern Italy in recent decades (1990-2023). The purpose is to gain insights for developing an integrated approach for managing water resources during future droughts that may occur in the Po basin. In particular, the aim of the INTERROGATION project is to demonstrate how the combination of accurate input data, a robust hydrological model and a tool capable of assessing the uncertainty in the model prediction are fundamental elements for the development of a reliable decision support system for water resources management. For this purpose, three different precipitation datasets, namely long-term (2000-2023) daily in situ, high-resolution (4 km) reanalysis and high-resolution (1 km) satellite precipitation data, are used as input data in a flexible conceptual hydrological model, MISDc (Modello Idrologico Semi-Distribuito in continuo, Camici et al., 2018). The model is able to mimic both natural processes and anthropogenic activities, such as irrigation and the water stored in reservoirs.
To assess the uncertainty of modelled river discharge, the Bluecat tool (Montanari et al., 2022) is applied. Once calibrated, the framework is employed to simulate three sets (one for each precipitation dataset) of the principal variables of the hydrological cycle, namely soil moisture, evaporation, snow accumulation, irrigation and river discharge. A comprehensive validation of the corresponding modelled hydrological variables is performed using high-resolution satellite-derived observations of soil moisture, evaporation, groundwater, irrigation and snow accumulation, which are developed within the framework of the European Space Agency Digital Twin Earth (DTE) Hydrology Next project. The results of this work demonstrate the significance of employing a suitable hydrological model in conjunction with accurate satellite information for capturing the evolution of the hydrological cycle, in both space and time, within highly anthropized basins. These results will be the basis for developing a decision support system that will guide stakeholders towards an integrated management of water resources during drought events.
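A much-simplified sketch of discharge-uncertainty assessment in the spirit of Bluecat (conditioning observed discharge on simulated discharge over a calibration period; the actual tool differs in detail and this nearest-neighbour variant is an assumption) could look like this:

```python
import numpy as np

def knn_predictive_quantiles(sim_cal, obs_cal, sim_new, k=100,
                             q=(0.05, 0.5, 0.95)):
    """For each new simulated discharge value, take the k calibration points
    with the closest *simulated* values and use their *observed* discharges
    to form predictive quantiles (a toy stand-in for Bluecat's conditioning).
    """
    sim_cal = np.asarray(sim_cal)
    obs_cal = np.asarray(obs_cal)
    out = []
    for s in np.atleast_1d(sim_new):
        idx = np.argsort(np.abs(sim_cal - s))[:k]   # k nearest simulations
        out.append(np.quantile(obs_cal[idx], q))    # observed-value quantiles
    return np.array(out)

# Synthetic calibration: the model under-predicts by 20%, with noise.
rng = np.random.default_rng(0)
obs = rng.uniform(10, 100, 2000)
sim = 0.8 * obs + rng.normal(0, 2, 2000)
bands = knn_predictive_quantiles(sim, obs, sim_new=[40.0])
lo, med, hi = bands[0]
```

Because the bands are built from observed values, the median estimate automatically corrects the model's systematic bias (here, a simulated 40 maps to an observed value near 50), which is the essential behaviour of this class of uncertainty tools.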

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Small Agricultural Reservoirs Detection in Italy With Satellite Data and OpenStreetMap Integration for Climate Resilience in Drought Prone Areas: a Contribution to the CASTLE Project

Authors: Noemi Mannucci, Gabriele Bertoli, Marco Lompi, Tommaso Pacetti, Patrick Ebel, Davide Chiarelli, Margherita Azzari, Enrica Caporali
Affiliations: University Of Sapienza, University of Florence, European Space Agency, Φ-lab ESRIN, Politecnico di Milano
Meteorological unpredictability, exacerbated by severe events caused by climate change, poses significant problems for water resource management (IPCC, 2023). Climate change has increased the frequency and severity of droughts, especially in mid-latitude regions, where reduced precipitation coupled with rising temperatures is expected to exacerbate water scarcity (Cook et al., 2018). In this regard, Small Agricultural Reservoirs (SmARs) offer a strategic response, as they are designed to collect and store water for use in irrigation and other agricultural applications. They not only reduce water scarcity but also act as Nature-Based Solutions (NBS) that can provide additional water-related ecosystem services. This is the context in which the research activity described here is developed, contributing to the research project CASTLE - Creating Agricultural reSilience Through smaLl rEservoirs. Despite their importance, the lack of comprehensive national databases for SmARs remains a major obstacle to their efficient management. Prior to this study, for example, only eight of Italy's twenty regions had SmAR inventories, often based on non-standardised and incomparable approaches (Calcaterra et al., 2022). This fragmentation of information makes the analysis and management of SmARs challenging. A possible way to overcome this problem is satellite data, which provide accurate and continuous information over large geographical areas. Sentinel-2 satellite imagery - part of the European Space Agency's Copernicus programme - was particularly well suited to this study because of its high spatial resolution (up to 10 metres), global coverage, frequent revisit times (every five days) and ability to operate in multiple spectral bands, allowing accurate identification of water surfaces.
One of the objectives of this research was to provide a reliable and scalable system for identifying SmARs, analysing their geographical distribution and determining their water storage potential. The system was validated in Italy for a reference year, integrating satellite data with OpenStreetMap (OSM) and the ESA World Cover 2021 dataset (Zanaga et al., 2022). Since the performance of spectral indices depends on the scene, sensor, and climate of the study area, a procedure to select the most appropriate multispectral index with its optimal threshold was necessary. The database of SmARs from the Tuscany Region (LaMMA - CNR IBIMET, Progetto INVASI 2018) was used as ground truth to validate the indices. The best-performing index was then applied at the national scale. OSM data, enriched with tags contributed by volunteer mappers and supplemented by remote survey information, were critical in accurately classifying water bodies. This integration helped to eliminate false positives such as ponds, glaciers, large dams, rivers and canals, which spectral indices alone cannot distinguish from SmARs because, as water surfaces, they have similar reflectance characteristics. In addition, OSM labelling improved the geometric integrity of the SmAR polygons and enabled the identification of reservoirs that were not detectable due to cloud cover. The ESA World Cover data were used to exclude urbanized areas, which were irrelevant to this study, minimizing the misclassification of building shadows as water surfaces. The combined use of open-source data has enabled the development of a replicable methodology adaptable to various spatial scales, considerably enhancing the identification and mapping of SmARs. This strategy will help to manage agricultural water resources more efficiently and increase resilience to climate change. As a future development, optical data from Sentinel-2 will be combined with radar (SAR) data from Sentinel-1.
This combination will overcome the limitations due to cloud cover and provide the ability to monitor changes over time.

Acknowledgments: This study was carried out within the CASTLE project and received funding from the European Union Next-GenerationEU (National Recovery and Resilience Plan – NRRP, Mission 4, Component 2, Investment 1.1 – D.D. n. 104 02/02/2022 PRIN 2022 project code MUR 2022XSERL4 - CUP B53D23007590006).

References:
Calvin, Katherine, et al. "IPCC, 2023: Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change" [Core Writing Team, H. Lee and J. Romero (eds.)]. IPCC, Geneva, Switzerland (2023).
Cook, Benjamin I., Justin S. Mankin, and Kevin J. Anchukaitis. "Climate change and drought: From past to future." Current Climate Change Reports 4 (2018).
Calcaterra, Stefano, Piera Gambino, and Daniela Niceforo. "Invasi artificiali." Annuario dei dati ambientali (2022).
LaMMA - CNR IBIMET, Rapporto Tecnico. "Progetto INVASI." Catasto Invasi Regione Toscana (2018). https://geoportale.lamma.rete.toscana.it/difesa_suolo/#/viewer/372
Zanaga, Daniele, et al. "ESA WorldCover 10 m 2021 v200." (2022).
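The index-selection and thresholding procedure described in the abstract above can be pictured as a simple sweep that scores each candidate threshold against a ground-truth inventory; a minimal pure-Python sketch, where the pixel values, labels, and F1 criterion are illustrative assumptions rather than the study's actual validation setup:

```python
# Hedged sketch: choose the water/no-water threshold for a spectral index
# by maximising F1 against a ground-truth mask (toy per-pixel data standing
# in for index values validated against the Tuscany SmAR inventory).

def f1_score(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(index_values, truth, candidates):
    """Return the candidate threshold with the highest F1 score."""
    scored = [(f1_score([v > t for v in index_values], truth), t)
              for t in candidates]
    return max(scored)[1]

# Toy example: index values for 6 pixels, 3 of which are water in the inventory.
values = [0.8, 0.6, 0.4, 0.1, -0.2, -0.5]
truth = [True, True, True, False, False, False]
thresholds = [round(-0.5 + 0.1 * i, 1) for i in range(10)]
print(best_threshold(values, truth, thresholds))
```

The same sweep could be repeated per index (e.g. NDWI vs. MNDWI) to pick the best-performing index, as the study does before applying it at the national scale.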

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: The Role of Hydrogeodesy in Monitoring and Managing Global Water Resources

Authors: Fernando Jaramillo, Saeid Aminjafari, Pascal Castellazzi, Ayan Fleischmann, Etienne Fluet‐Chouinard, Hossein Hashemi, Clara Hubinger, Hilary R. Martens, Fabrice Papa, Tilo Schöne, Angelica Tarpanelli, Vili Virkki, Lan Wang‐Erlandsson, Rodrigo Abarca‐del‐Rio, Adrian Borsa, Georgia Destouni, Giuliano Di Baldassarre, Michele‐Lee Moore, José Andrés Posada‐Marín, Shimon Wdowinski, Susanna Werth, George H. Allen, Donald Argus, Omid Elmi, PD Dr-Ing. habil Luciana Fenoglio, Dr. Frédéric Frappart, Xander Huggins, Zahra Kalantari, Simon Munier, Sebastián Palomino‐Ángel, Abigail Robinson, Kristian Rubiano, Gabriela Siles, Marc Simard, Chunqiao Song, Christopher Spence, Mohammad J. Tourian, Yoshihide Wada, Chao Wang, Jida Wang, Fangfang Yao, Wouter R. Berghuijs, Jean‐François Cretaux, James Famiglietti, Alice Fassoni‐Andrade, Jessica V. Fayne, Félix Girard, Matti Kummu, Kristine M. Larson, Martin Marañon, Daniel M. Moreira, Karina Nielsen, Tamlin Pavelsky, Francisco Pena, J. T. Reager, Maria Cristina Rulli, Juan F. Salazar
Affiliations: Department of Physical Geography and Bolin Centre for Climate Research, Stockholm University, Stockholm, Sweden, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia, Mamirauá Institute for Sustainable Development, Tefé, Brazil, Earth Systems Science Division, Pacific Northwest National Laboratory, Richland, WA, USA, Department of Water Resources Engineering, Lund University, Lund, Sweden, Department of Geosciences, The University of Montana, Missoula, MT, USA, Université de Toulouse, LEGOS (CNES/CNRS/IRD/UT3), Toulouse, France, Institute of Geosciences, University of Brasília, Brasilia, Brazil, Department of Geodesy, Helmholtz-Centre GFZ German Research Centre for Geosciences, Potsdam, Germany, Research Institute for Geo-hydrological Protection, National Research Council, Perugia, Italy, Water and Development Research Group, Aalto University, Espoo, Finland, Department of Environmental and Biological Sciences, University of Eastern Finland, Joensuu, Finland, Stockholm Resilience Centre, Stockholm University, Stockholm, Sweden, Potsdam Institute for Climate Impact Research, Potsdam, Germany, Anthropocene Laboratory, Royal Swedish Academy of Sciences, Stockholm University, Stockholm, Sweden, Departamento de Geofisica, Universidad de Concepción, Concepción, Chile, Scripps Institution of Oceanography, University of California San Diego, San Diego, CA, USA, Department of Sustainable Development, Environmental Science and Engineering, KTH Royal Institute of Technology, Stockholm, Sweden, Stellenbosch Institute for Advanced Study, Stellenbosch, South Africa, Department of Earth Sciences, Uppsala University, Uppsala, Sweden, Department of Geography and Centre for Global Studies, University of Victoria, Victoria, BC, Canada, Grupo de Investigación INDEES, IU Digital de Antioquia, Bogotá, Colombia, Institute of Environment, Department of Earth and Environment, Florida International University, Miami, FL, USA, Department of Geosciences, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA, Institute of Geodesy, University of Stuttgart, Stuttgart, Germany, Institute of Geodesy and Geoinformation, University of Bonn, Bonn, Germany, ISPA, Institut National de Recherche pour l'Agriculture, l'Alimentation et l'Environnement (INRAE), Villenave d'Ornon, France, Department of Civil Engineering, University of Victoria, Victoria, BC, Canada, CNRM, Université de Toulouse, Météo-France, CNRS, Toulouse, France, Stockholm Environment Institute, Latin America Centre, Bogotá, Colombia, Department of Biology, Faculty of Natural Sciences, Universidad del Rosario, Bogotá, Colombia, Subdirección Científica, Jardín Botánico de Bogotá ‘José Celestino Mutis’, Bogotá, Colombia, Département des sciences géomatiques, Université Laval, Québec, QC, Canada, Radar Science and Engineering Section, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA, Key Laboratory of Lake and Watershed Science for Water Security, Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences, Nanjing, China, University of Chinese Academy of Sciences Nanjing (UCASNJ), Nanjing, China, Environment and Climate Change Canada, Water Science and Technology Directorate, Saskatoon, SK, Canada, Biological and Environmental Science and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia, Department of Earth, Marine and Environmental Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA, Department of Geography and Geographic Information Science, University of Illinois Urbana-Champaign, Urbana, IL, USA, Department of Geography and Geospatial Sciences, Kansas State University, Manhattan, KS, USA, Environmental Institute, University of Virginia, Charlottesville, VA, USA, Department of Earth Sciences, Free University Amsterdam, Amsterdam, The Netherlands, School of Sustainability, Arizona State University, Tempe, AZ, USA, Department of Earth and Environmental Sciences, University of Michigan, Ann Arbor, MI, USA, Géosciences Environnement Toulouse, Université de Toulouse, Toulouse, France, DETECT, Universität Bonn, Bonn, Germany, Centro Andino para la Gestión y Uso del Agua, Universidad Mayor de San Simón, Cochabamba, Bolivia, Centro de Planificación y Gestión, Universidad Mayor de San Simón, Cochabamba, Bolivia, SGB, Geological Survey of Brazil, Rio de Janeiro, Brazil, Department of Space Research and Technology, Geodesy and Earth Observation, Geodesy, Denmark, Department of Civil and Environmental Engineering, Politecnico di Milano, Milan, Italy, GIGA, Escuela Ambiental, Facultad de Ingeniería, Universidad de Antioquia, Medellín, Colombia
Global water resources are under unprecedented pressure from climate change and human activities, leading to critical challenges for sustainable management. Hydrogeodesy, built on satellite technologies such as altimetry, InSAR, gravimetry, and GNSS, provides capabilities for monitoring hydrological systems at diverse scales. By synthesizing data from over 3,000 studies and conducting expert elicitation, this research evaluates hydrogeodesy’s potential to address hydrological and sustainability challenges. Our meta-analysis reveals a rapid increase in hydrogeodetic research, with applications ranging from tracking terrestrial water storage (TWS) to monitoring groundwater depletion and surface water dynamics. Gravimetry, particularly from the GRACE and GRACE-FO missions, excels in large-scale assessments of TWS and has significantly contributed to groundwater studies, though it is limited by coarse spatial resolution. Altimetry demonstrates high precision in tracking water level changes in lakes and reservoirs, with expanding applications to smaller water bodies due to improved algorithms. InSAR provides millimetric accuracy in detecting surface deformations linked to groundwater extraction and wetland hydrology, but its utilization is hindered in regions lacking open data processing frameworks. GNSS offers insights into soil moisture, snow depth, and vertical crustal movements, supporting critical hydrological load assessments. Hydrogeodesy shows untapped potential in addressing less-studied systems such as permafrost and wetlands, which remain underrepresented in current literature. Increasingly, the integration of multiple hydrogeodetic technologies enhances resolution and accuracy, as seen in combined InSAR and altimetry approaches for estimating floodplain storage.
This synergy underscores hydrogeodesy’s ability to tackle global hydrological challenges, such as addressing the 23 Unsolved Questions of Hydrology, advancing understanding of groundwater recharge, snowmelt dynamics, and storage variability, and supporting the Planetary Boundaries Framework by quantifying water-related thresholds to ensure a safe operating space for humanity. The findings also highlight hydrogeodesy’s critical role in informing water management and policy, including mitigating groundwater depletion and assessing the impacts of infrastructure on water systems. Realizing its full potential requires enhanced integration of artificial intelligence for real-time data processing, broader accessibility of hydrogeodetic data through open-source platforms, and institutionalizing hydrogeodesy in education to cultivate interdisciplinary expertise. Hydrogeodesy is not merely a tool but a transformative approach to hydrological science and sustainability, providing actionable insights for navigating the complexities of water resource management in a rapidly changing world.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Analysis of Recently Drained Lake Basins Succession Driving Factors

Authors: Rustam Khairullin, Annett Bartsch, Clemens von Baeckmann, Helena Bergstedt, Barbara Widhalm, Sree Ram Radha Krishnan
Affiliations: b.geos GmbH, Austrian Polar Research Institute
Drained lake basins (DLBs) are common features across Arctic permafrost regions. The shrinking and eventual disappearance of open water as lakes drain not only further alters hydrological conditions but also affects carbon fluxes. The rate of drainage is known to differ between regions, and plant colonization of the basins is expected to be affected by many factors: the specific bioclimatic conditions of the region, terrain, basin shape, etc. Lakes at different stages of drainage have been analyzed with satellite records in order to quantify the impact of these factors. Bioclimatic conditions are reflected in landcover patterns, but landscape heterogeneity in tundra regions is very high. The recently published circumarctic landcover units (CALU) capture a wide range of landcover types (vegetation communities and wetlands) and provide good spatial detail (10 m). The scheme is based on the fusion of Sentinel-1 (C-band radar) and Sentinel-2 (multispectral) information and was designed specifically for Arctic regions. Super-resolution processing is applied to include information from the 20 m bands of Sentinel-2. Pre- and post-processing steps rely on the Copernicus digital elevation model. The CALU processing scheme has been adapted for the analysis of several hundred recently drained lakes (drained between 2016 and 2022) located in the West Siberian Yamal and Gydan peninsulas. These areas represent a gradient of permafrost conditions, spanning from continuous to discontinuous permafrost. Ground temperatures have been increasing over the last two decades according to ESA CCI+ Permafrost records and in situ observations. Further permafrost thaw is projected to occur within the next decades across these sites. The succession of vegetation and wetland patterns within the DLBs was determined as the area coverage of the tundra vegetation classes in CALU within three to five years of initial drainage.
A Random Forest approach has been applied to determine the significance of individual factors for plant colonization of the drained lake basins and for the rate of wetland area change. The approach only considers interannual variations. Extended exploitation of SAR data is needed, but data availability is currently limited in these regions. The upcoming launch of Sentinel-1C and the use of InSAR are expected to advance related applications.
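Factor significance from a Random Forest is typically read from feature importances; the underlying idea can be illustrated with permutation importance, shown here on a toy linear predictor rather than the study's actual model (factor names and data below are invented placeholders for the bioclimatic and terrain variables mentioned above):

```python
import random

# Hedged sketch of permutation importance, one common way to rank driving
# factors: shuffle one factor across samples and measure how much the
# model's error grows. The toy "model" and factor roles are assumptions,
# not the study's Random Forest.

def model(x):
    # Toy predictor: depends strongly on factor 0, weakly on factor 1,
    # and not at all on factor 2.
    return 2.0 * x[0] + 0.3 * x[1]

def mse(model_fn, X, y):
    return sum((model_fn(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model_fn, X, y, col, rng):
    """Error increase when one factor's values are shuffled across samples."""
    base = mse(model_fn, X, y)
    shuffled = [row[col] for row in X]
    rng.shuffle(shuffled)
    X_perm = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
    return mse(model_fn, X_perm, y) - base

rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]  # targets generated by the toy model itself

importances = [permutation_importance(model, X, y, c, rng) for c in range(3)]
print(importances)  # factor 0 dominates; factor 2 contributes nothing
```

In practice the same shuffling is applied around a fitted Random Forest, which additionally captures non-linear and interaction effects.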

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Multitemporal Seasonal Monitoring of Reservoir Area Fluctuations Using Copernicus Satellite Imagery and Google Earth Engine, Focusing on Ten Reservoirs in Greece from 2020 to 2024

Authors: Triantafyllos Falaras, Stamatina Tounta, Anna Dosiou, Prof. Issaak Parcharidis
Affiliations: Department of Geography, Harokopio University of Athens, Department of Geoinformatics - Z_GIS, Paris Lodron University of Salzburg, Aristotle University of Thessaloniki, School of Spatial Planning and Development, Laboratory of Geoinformatics
Reservoirs play a significant multipurpose role, mainly contributing to the water supply of agricultural areas, energy production, and drinking water provision. Inland waterbodies like reservoirs face serious threats to their water balance due to climate change, which in many cases leads to substantial non-rebounding water area losses. These conditions make multi-temporal monitoring of reservoirs vital for tracing changes and fluctuations and for making sound water management decisions. To this end, satellite imagery from Earth observation programmes such as Copernicus has already proved to play a pivotal role in effectively monitoring inland waters. This study, considering the drought that Greece faced in 2024, aims to monitor reservoirs across the country for the past five years regarding their annual and seasonal water area changes, using ESA Copernicus Sentinel-2 MSI data and cloud-based processing in Google Earth Engine. The monitoring was conducted for some of the many existing reservoirs in Greece from 2020 to 2024. To ensure broad national coverage, ten reservoirs were selected based on their significance in water use, size, and geographical distribution: Karla and Plastiras in the Thessaly region, Polyfyto and Kerkini in Central Macedonia, Marathon in Attica, Mornos in Central Greece, Kremasta in Western Greece, Pournari in Epirus, Pineios in the Peloponnese, and Aposelemis in the Crete region. To fulfil the aim of the study, the monitoring was designed to be both multitemporal and seasonal, to obtain a representative view of the surface area fluctuations of each reservoir over the examined period. The monitoring concept relied on two satellite image acquisitions per year for each reservoir, one in April and one in September.
This allowed us to assess the refill of each reservoir during the wet autumn-winter seasons and the use of the water during the dry spring-summer seasons, when in most cases water is extracted to cover irrigation needs; losses to evapotranspiration also occur during this season. In this way, 10 satellite image acquisitions were used for each reservoir over the examined period. The images were selected to retain temporal coherence for proper comparisons between seasons, years, and reservoirs, taking into consideration the challenge of image availability due to cloud coverage. For the purpose of the study, openly available datasets were utilized. The monitoring was based on multispectral satellite images from the Sentinel-2 mission of the ESA Copernicus programme. The Sentinel-2 MSI atmospherically corrected Bottom-Of-Atmosphere (BOA) reflectance Level-2A (L2A) products were utilized, with a resolution of up to 10 m and an acquisition frequency of 5 days. Other datasets that helped in the analysis and interpretation of the results included the meteorological data of the closest active meteorological stations to the reservoirs from the National Observatory of Athens Network (Meteo.gr) and hydrological data and information obtained from the 1st Update of the Approved River Basin Management Plans of the Special Secretariat for Water. The processing was performed in the Google Earth Engine (GEE) platform, including the retrieval of the Sentinel-2 L2A product collection, its analysis, and the derivation of the results. This cloud-based platform, which has geospatial analysis, visualization, high-performance processing, and large database capabilities, proved to be the optimal solution for this study due to the large volume of multispectral satellite data to be processed. Lastly, the final geospatial analysis, result management, and map production were performed in the commercial GIS software ESRI ArcGIS Pro v3.4.
The mapping of the waterbodies was based on the Modified Normalized Difference Water Index (MNDWI) of Xu (2006), presented in Equation 1. This modified version of the NDWI (McFeeters, 1996) uses the visible Green and Short-Wavelength Infrared (SWIR) spectral bands, which enables water surfaces to be distinguished from land and other features such as built-up areas. This follows from water's enhanced typical reflectance in Green and low reflectance in SWIR, together with the high typical reflectance of land features in SWIR. This normalized index returns values ranging from -1 to 1, with positive values corresponding to water. MNDWI = (GREEN-SWIR)/(GREEN+SWIR) (Equation 1) The steps of the analysis in GEE included the input of the reservoir areas of interest (AOIs) and the selection of the Sentinel-2 L2A images meeting the temporal and cloud-cover criteria described above to create the corresponding image collections. The images were then preprocessed, including subsetting to the AOI boundaries and resampling to 10 m resolution. The following steps included the application of the MNDWI to each image and the selection of a threshold to extract the water surfaces as a binary raster image. The threshold selection was based on the principle that positive index values represent water, with manual optimization after testing where needed to enhance accuracy. The final steps were the area calculation and the export of the vector and raster products, as well as geospatial analysis and map production in the GIS environment. The results of this study show the seasonal water area fluctuations and their annual cycle for each reservoir. Precipitation contributes significantly to the fluctuations of each reservoir's balance of inflows and outflows. In some reservoirs, the area does not refill sufficiently during the wet season and thus shrinks from year to year.
Most of the reservoirs faced considerable pressure in years when water needs, especially for irrigation, were higher due to excessive drought and high temperatures, as observed in 2024. In the case of the Mornos reservoir, results indicate a decrease in the extent of the lake from 2022 to 2024, as the reservoir's surface dropped from 16.14 sq. km to 11.63 sq. km, a loss of almost 28%. In the Plastiras reservoir, for example, in April 2024 the area was over 7% greater than in 2023, while in October 2024 the losses were greater than in 2023, at over 14%. The combined use of ESA Copernicus Sentinel-2 data and Google Earth Engine proved to be a fast and efficient way of producing reliable results that enable the monitoring and assessment of the state of multiple reservoirs in terms of water area on various temporal bases. For Greece or other countries, it may constitute an important means of national-scale reservoir or lake monitoring, and correlation with in situ data can lead to a complete analysis ideal for water management and the respective policy-making. In conclusion, the effects of climate change on reservoirs are evident in their year-to-year fluctuations, and the need for adaptation and rational water management is critical. References McFeeters, S. K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425-1432. https://doi.org/10.1080/01431169608948714 Xu, H. Modification of Normalised Difference Water Index (NDWI) to Enhance Open Water Features in Remotely Sensed Imagery. Int. J. Remote Sens. 2006, 27, 3025-3033. https://doi.org/10.1080/01431160600589179
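The core of the per-image workflow above is Equation 1 followed by a threshold and a pixel count; a minimal pure-Python sketch of that step (the toy reflectance grids stand in for Sentinel-2 Green and SWIR bands, and the default zero threshold follows the principle stated in the abstract):

```python
# Hedged sketch of the MNDWI water-mapping step (Xu, 2006):
# MNDWI = (GREEN - SWIR) / (GREEN + SWIR), water where MNDWI > threshold.
# Toy reflectance grids stand in for Sentinel-2 L2A Green and SWIR bands.

PIXEL_AREA_M2 = 10 * 10  # Sentinel-2 pixels after resampling to 10 m

def mndwi(green, swir):
    denom = green + swir
    return 0.0 if denom == 0 else (green - swir) / denom

def water_area_km2(green_band, swir_band, threshold=0.0):
    """Binary water mask from MNDWI, then area from the pixel count."""
    water_pixels = sum(
        1
        for g_row, s_row in zip(green_band, swir_band)
        for g, s in zip(g_row, s_row)
        if mndwi(g, s) > threshold
    )
    return water_pixels * PIXEL_AREA_M2 / 1e6

# 2x3 toy scene: left pixels water (high Green, low SWIR), right pixels land.
green = [[0.12, 0.10, 0.05], [0.11, 0.04, 0.06]]
swir = [[0.02, 0.03, 0.20], [0.02, 0.25, 0.22]]
print(water_area_km2(green, swir))  # 3 water pixels -> 0.0003 km²
```

In GEE the same logic runs image-wide via `normalizedDifference` and per-pixel area sums; the threshold argument is where the manual optimization described above would enter.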

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Multi-sensor monitoring of agricultural reservoirs in the Casamance region, Senegal, in the context of global change.

Authors: Manuela Grippa, Félix Girard, Cheikh Faye, Bakary Faty, Idrissa Ka, Cheikh Ousmane Sarr, Sidy Dieye, Elodie Robert, Guillaume Morin, Tristan Harmel, Romain Buratti, Gilles Larnicol, Fabrice Gangneron, Jean-Michel Martinez, Joana Roussillon, Laurent Kergoat
Affiliations: Géosciences Environnement Toulouse, Université Assane Seck de Ziguinchor, Direction de la Gestion et de la Planification des Ressources en Eau, Faculté de Lettres et Sciences, Fann, Littoral, Environnement, Géomatique, Télédétection, Magellium
Water resource management in West Africa is a critical issue in the context of global change. Population growth and global warming are expected to significantly increase water demand in the region. In addition, climate variability, with floods and droughts expected to become more frequent in the future, will complicate the planning and management of water resources. Moreover, in a region where waterborne diseases are a major concern, the impact of climate change and human activities on surface water quality is a crucial question. The Kayanga-Anambé watershed in Casamance is characterized by high variability in rainfall and high evaporation rates, which affect water availability. Agricultural consumption, for rice growing and market gardening, is the main source of water demand in this basin. A series of reservoirs has been built, with the Niandouba reservoir upstream, the Confluent dams, and Lake Waima. However, this basin is poorly instrumented, and questions arise about the capacity of these facilities to meet agricultural demand in the years to come, in the context of demographic growth, increased temperature, and the reduced rainfall predicted by the CMIP6 models for this region. The objective of the Space Climate Observatory project VOQUALISE is to answer these questions using a multi-sensor satellite approach. Water levels derived from SWOT and classical altimetric data (Sentinel-6, Sentinel-3, and ICESat-2) are combined with water areas derived from Sentinel-2 to estimate volume changes over time. At the same time, water quality parameters are retrieved using the Sentinel-2 and Sentinel-3 optical sensors. In-situ data on water levels, water turbidity, suspended particulate matter, and temperature are used to validate the remote sensing estimates.
This makes it possible to quantify water storage changes in the reservoirs (supplied mainly by rainfall, with losses mainly to irrigation), to estimate water quality parameters (suspended solids, algal blooms, temperature), and to establish their relationship with climatic variables and human activities (multiple water uses, public policies).
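Combining altimetric water levels with optical water areas to estimate storage change is commonly done with a trapezoidal approximation, ΔV ≈ ½(A₁ + A₂)(h₂ − h₁), between consecutive observations; a hedged sketch of that calculation (this specific formula and the sample series are illustrative assumptions, not necessarily the project's exact method):

```python
# Hedged sketch: reservoir storage change from paired water-level (altimetry)
# and water-area (optical imagery) time series, using the trapezoidal rule
# dV = 0.5 * (A1 + A2) * (h2 - h1) between consecutive observations.
# Sample values are invented for illustration.

def storage_changes(levels_m, areas_km2):
    """Volume change (hm³) between consecutive (level, area) observations."""
    changes = []
    for (h1, a1), (h2, a2) in zip(
        zip(levels_m, areas_km2), zip(levels_m[1:], areas_km2[1:])
    ):
        dv_m3 = 0.5 * (a1 + a2) * 1e6 * (h2 - h1)  # km² -> m²
        changes.append(dv_m3 / 1e6)  # m³ -> hm³
    return changes

levels = [52.0, 53.5, 52.8]  # e.g. SWOT / Sentinel-3 / ICESat-2 levels (m)
areas = [10.0, 12.0, 11.2]   # e.g. Sentinel-2 water areas (km²)
changes = storage_changes(levels, areas)
print(changes)  # first interval fills (~+16.5 hm³), second draws down
```

Accumulating these increments gives the relative storage curve even where no bathymetry is available.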

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: How Well Can We Explain GRACE/-FO-derived Total Water Storage Variability, Deficits/Droughts, and Trends in Europe?

Authors: Charlotte Hacker, Johannes Leonhardt, Prof. Dr. Ribana Roscher, Wanxue Zhu, Prof. Dr. Stefan Siebert, Prof. Dr. Thomas Heckelei, Dr. Julian Giles, PD. Dr. Silke Trömel, Prof. Dr. Jürgen Kusche
Affiliations: University of Bonn, University of Göttingen, Research Center Jülich
Since 2002, the Gravity Recovery and Climate Experiment (GRACE) and its follow-on mission, GRACE-FO, have provided global measurements of total water storage anomalies (TWSA). Various studies have suggested that GRACE-derived trends in terrestrial TWSA, i.e., long-term losses or accumulation in groundwater, surface water, and soil moisture storages, or increasing/decreasing variability, can be related to climate change, natural climate variability, and direct anthropogenic effects such as water withdrawals. However, most studies aiming to link TWSA signals to potential drivers of change, such as measured precipitation and temperature, or proxy variables, like remote sensing leaf area index, rely on simple correlation metrics. These methods cannot establish causality. Many regression or machine learning methods utilize climate mode indicators such as El Niño Southern Oscillation (ENSO) indices to improve TWSA prediction. However, these indices serve as proxy representations dependent on other variables such as precipitation or sea level pressure, which limits their effectiveness for causal identification. Furthermore, the impact of human activities, particularly irrigation and land use/land cover change, is often confounded with other variables. Irrigation and land use and land cover change (LULCC) affect variables like leaf area index, soil moisture, and gross primary productivity, causing feedbacks on the water cycle and water storage. Causal identification methods aim to find robust, directed relationships between a target variable and related variables. In this study, we employ such data-driven causal identification techniques to investigate how well precipitation, soil moisture, land and sea temperatures, vegetation mass, and gross primary production can explain GRACE-derived TWSA, as well as trends and extremes in water storage, in river basins across Europe. This approach goes beyond simply looking at known effects, such as changes in reservoir levels.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Estimate of the SWE Available for the Barasona Reservoir

Authors: Lucas López, Dr Maria José Escorihuela
Affiliations: Isardsat, Universitat Autònoma de Barcelona
Water accumulated as snow in the mountains is an important asset for river basin managers and farmers. This is especially true in semi-arid regions like the Mediterranean, recognized as one of the regions most vulnerable to global warming, and where drought events are expected to increase in severity and frequency. Since mountain water resources are highly dependent on environmental fluctuations, alterations induced by climate change could have a great impact on their quantity and availability, and thus on hydrological management. The Ebro basin has been identified as one of the zones susceptible to the hydrological impacts of climate change. The Barasona reservoir, built in 1932, was designed to store the water resources of a small mountain basin within the Ebro basin. It provides water for power generation and irrigation, and is one of the three main water supplies for the "Canal de Aragón y Catalunya" (Aragon and Catalonia Irrigation District). With a total area of 1535 km², its basin lies in the Central Pyrenees, and it is the principal provider for 98,000 ha, including 37 municipalities and thousands of agricultural holdings and livestock farms. It is therefore of great interest to have a correct understanding of the reservoir's hydrological processes. In this project we combined remote sensing data with AI-based and hydrological models in order to estimate the amount of water stored as snow that will be available to the Barasona reservoir, to predict the reservoir's water input flow, and to estimate the expected start and end dates of the thaw. First, we estimated the snow cover area (SCA) from satellite imagery, using MODIS and Sentinel-2 Theia Normalized Difference Snow Index (NDSI) products. For Landsat 8, we implemented the algorithm described in Gascoin et al. (2019) to derive the snow cover. The result is a consistent time series of SCA from 2009 to 2023.
With these data and the snow depth measured at two snow gauges in the catchment, we developed a simple methodology to estimate the snow water equivalent (SWE) volume for the catchment. All these data, along with the temperature and precipitation measured at different weather stations in the basin, were then fed to an AI model that predicts the inflow received by the Barasona reservoir in the short term. The base model achieved an accuracy of R² = 0.691. If the river flow arriving at the reservoir during the previous days was included as an extra variable, the melting rate of the snow could be inferred directly from it, which increased the coefficient of determination to R² = 0.781. Regarding long-term predictions, the SWE available in the mountains shows large inter-annual variability, ranging from 48.82 hm³ during drought years to 228.77 hm³ when snow is most abundant, with an average of 120.27 hm³ and a standard deviation of 58.15 hm³. The estimated dates of the beginning and end of snowmelt are March 23 and May 26; both present some variability, with a standard deviation of about two weeks. Our results have shown that SWE can be accurately estimated from a reduced number of in-situ measurements and remote sensing data, notably improving current SWE estimations provided by the basin authorities to the irrigation district managers. Furthermore, our short-term estimations allow near-real-time operation of the reservoir, which enhances water management capabilities in the basin.
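One plausible reading of the "simple methodology" pairing SCA with gauge snow depths is SWE volume ≈ SCA × mean depth × (snow density / water density); the sketch below illustrates that arithmetic with an assumed bulk density and invented numbers, not the study's calibrated values:

```python
# Hedged sketch: snow water equivalent (SWE) volume from snow cover area (SCA)
# and in-situ snow depth: SWE_volume = SCA * depth * (rho_snow / rho_water).
# The density and the sample numbers below are illustrative assumptions,
# not the study's calibrated values.

RHO_SNOW = 300.0    # kg/m³, assumed bulk density of the snowpack
RHO_WATER = 1000.0  # kg/m³

def swe_volume_hm3(sca_km2, mean_depth_m):
    """SWE volume in hm³ from snow-covered area (km²) and mean depth (m)."""
    snow_volume_m3 = sca_km2 * 1e6 * mean_depth_m
    return snow_volume_m3 * (RHO_SNOW / RHO_WATER) / 1e6  # m³ of water -> hm³

# Toy catchment snapshot: 400 km² snow-covered, 1.0 m mean depth at gauges.
print(swe_volume_hm3(400.0, 1.0))  # ≈ 120 hm³, same order as the reported mean
```

With SCA tracked per image, evaluating this per date yields the SWE time series that feeds the inflow model.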

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: A comprehensive assessment of Groundwater Resources in Botswana: Droughts, Recharge, and Abstractions using GRACE and GLDAS

Authors: Èlia Cantoni Gomez, Miguel González-Jiménez, Marta Toro-Bermejo, Beatriz Revilla-Romero, Homero Paltan Lopez, Diego Juan Rodriguez, Marc Paganini
Affiliations: GMV, World Bank, European Space Agency
As the largest source of liquid freshwater, groundwater is a priceless resource that needs to be safeguarded. This poses a considerable challenge in countries like Botswana, one of the world's most drought-prone nations, which has endured several multi-year droughts since the 1950s. The country faces the pressing challenge of guaranteeing a sustainable water supply to meet the requirements of its population, agriculture, and industry. Indeed, groundwater plays a key role in Botswana's water supply, accounting for approximately 80% of the country's total supply, a reliance exacerbated by recurring droughts, unreliable rainfall, and the limited operational hydrological monitoring network of recent years. As part of the European Space Agency (ESA) Global Development Assistance (GDA) Water Resources project, and in collaboration with the World Bank, GMV has implemented a suite of Earth Observation-based services for groundwater monitoring to support effective groundwater resources management. Groundwater resources were monitored and quantified by calculating Groundwater Storage Anomalies (GWSA) through the integration of GRACE and GLDAS data for the 2003-2022 period. All three available GLDAS models were used to generate separate GWSA estimates, which were then combined into an ensemble to enhance robustness and minimize the effect of potential biases in the individual GLDAS models. The resulting GWSA ensemble was used to assess the evolution of groundwater resources in Botswana, providing a general understanding of groundwater dynamics across the country. The GWSA ensemble was also used to deepen the assessment by calculating (1) drought occurrence and persistence using the GRACE groundwater drought index (GGDI), (2) groundwater recharge, and (3) groundwater abstractions. Additionally, a high-level correlation analysis between GWSA and ENSO was performed to assess the latter's impact on groundwater resources.
The drought index (GGDI) captures several multi-annual drought events across the country, demonstrating its capability to detect known droughts such as the event spanning 2003 to 2006. The drought frequency and persistence analysis revealed that the northwestern areas are most prone to drought, with the most frequent and persistent events identified there. However, the GGDI time series reveals a clear trend towards wetter conditions over the study period. Groundwater recharge estimates were derived from GWSA using the Water Table Fluctuation method, with recharge events defined as annual increases in GWSA. The analysis was conducted at both county and pixel scales. Given that there are two approaches to determining the magnitude of a recharge event, both were implemented to provide a range of likely recharge values. Results reveal a pronounced north-to-south recharge gradient aligned with rainfall distribution: in northern regions, recharge reaches up to 300 mm/year, whereas in the Kalahari Desert it decreases to just 15–35 mm/year. Groundwater abstractions were approximated through a water budgeting approach, in which inflows should equal outflows adjusted for changes in groundwater storage. The resulting groundwater residual reflects the system's balance: positive values indicate a deficit, where abstractions exceed recharge, while negative values suggest recharge surpasses abstractions, potentially due to unaccounted recharge mechanisms such as transfers from other water bodies. The results emphasize surface water-groundwater interactions, with the Lake Ngami region showing the highest values. Annual data reveal increased abstractions during drought periods, consistent with the GGDI results albeit with some spatial differences. Finally, a high-level correlation analysis of ENSO and groundwater availability in Botswana shows that La Niña events have a more pronounced impact than El Niño, leading to higher precipitation and significant increases in groundwater anomalies.
In contrast, El Niño phases are associated with reduced rainfall and declines in groundwater storage, although these effects are evident only during the strongest El Niño events. This work highlights how combining Earth observation products and land data assimilation systems supports the design of resilient policies in countries facing data scarcity and challenging climatic conditions.
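The GRACE/GLDAS integration described above can be sketched in a few lines: surface storage anomalies (soil moisture, snow, canopy) from each GLDAS model are subtracted from the GRACE total water storage anomaly, and the per-model estimates are averaged into an ensemble. The toy numbers and the simple z-score standardization for the GGDI are illustrative assumptions (the published GGDI also removes a monthly climatology):

```python
import numpy as np

# Toy monthly anomalies in cm of equivalent water height (illustrative only).
twsa = np.array([5.0, 3.0, -2.0, -4.0, 1.0, 6.0])  # GRACE total water storage anomaly
# Surface storage anomalies (soil moisture + snow + canopy) from three GLDAS models
gldas_surface = {
    "NOAH": np.array([2.0, 1.5, -0.5, -1.0, 0.5, 2.5]),
    "VIC": np.array([1.5, 1.0, -1.0, -1.5, 0.0, 2.0]),
    "CLSM": np.array([2.5, 2.0, 0.0, -0.5, 1.0, 3.0]),
}

# One GWSA estimate per GLDAS model: groundwater = total minus surface storage
gwsa_members = {m: twsa - s for m, s in gldas_surface.items()}
# Ensemble mean reduces the influence of any single model's bias
gwsa = np.mean(list(gwsa_members.values()), axis=0)

# GGDI-style standardization: droughts appear as negative departures
ggdi = (gwsa - gwsa.mean()) / gwsa.std()
```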

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: The Nexus of Groundwater-fed Irrigation, Aquifer Depletion, Land Subsidence and Desertification:  Insights from a Multi-Decadal Satellite Survey in Iran

Authors: Mahmud Haghighi, Mahdi Motagh
Affiliations: Institute Of Photogrammetry And Geoinformation, Leibniz University Hannover
Most of Iran is characterized by arid and semi-arid climates and is highly vulnerable to environmental degradation due to both natural factors and unsustainable water management practices. This study explores the interlinkages between unsustainable groundwater extraction, aquifer depletion, surface water diversion, and desertification processes across the country. The over-reliance on groundwater as a primary source for agricultural, industrial, and domestic use in recent decades has led to a significant decline in groundwater levels and associated land subsidence. Our multi-decadal analysis of satellite data from different sensors, including the ERS, Envisat, ALOS and Sentinel-1 missions, indicates that approximately 56,000 km² (3.5%) of the country is subject to land subsidence, with some hotspots sinking at alarming rates of over 35 cm per year. Our recent country-scale survey using Sentinel-1 reveals that approximately 3,000 km² of land is subsiding at rates exceeding 10 cm per year, highlighting the extent of the crisis. The widespread extraction of groundwater has triggered a cascade of environmental issues, degrading the natural landscape and reducing water availability. The central plateau catchment, which hosts two-thirds of the country's depleting aquifers, has experienced a significant and permanent loss of aquifer storage capacity, jeopardizing the sustainability of water resources in the region. Our analysis of agricultural distribution, surface water distribution, and desertification using Sentinel-2 data indicates a nexus between groundwater pumping and surface water diversion on the one hand, and desertification and landscape degradation on the other, in various parts of the country. Surface water diversion, mainly from the western and northern parts of the country to the central parts, has fuelled growing water demand in several major cities and agricultural hubs.
The expectation of continuous water availability has driven unsustainable consumption, pushing the region beyond its natural regenerative capacity. This rising demand has intensified pressure as water consumption has continued to grow over the past decades. As a result, excessive groundwater pumping has caused water table levels to decline. Our estimates suggest that approximately 1.7 billion cubic meters of groundwater have been unsustainably drawn annually from confined and semi-confined aquifers across the country over the past decade. This has increased the vulnerability of ecosystems and led to the collapse of traditional water management systems, such as qanats. As wells dry up, agricultural productivity declines and rural communities are forced to abandon their land, while urban areas face land subsidence risks to their populations and infrastructure. Our study finds that poor water management policies, marked by a lack of regulation and ineffective enforcement, have worsened the situation. Short-term solutions, such as drilling deeper wells, have only postponed an inevitable water crisis while aggravating long-term environmental degradation. In the context of climate change, which is expected to increase aridity and reduce rainfall in the region, the consequences of current water management practices pose a severe threat to the sustainability of local communities and ecosystems. Furthermore, the contribution of different economic sectors to Iran's GDP reveals a critical dependency on groundwater. The agriculture sector, employing approximately 17% of Iran's population and contributing around 10% of the nation's GDP, is the primary user of groundwater resources. Inefficient irrigation practices, coupled with unsustainable extraction, have degraded water resources, threatening the sector's viability.
In contrast, our results show that provinces like Tehran, which derive most of their GDP from industrial and service sectors, face significant land subsidence risks, jeopardizing their infrastructure, including roads, railways, and buildings. These impacts highlight the connection between groundwater depletion, agricultural productivity, and broader economic stability. Addressing these challenges requires a paradigm shift in water management policy. A more sustainable approach would involve regulating groundwater extraction, promoting efficient irrigation techniques, and restoring traditional water systems that operate within the natural limits of the environment. Reassessing water allocation priorities and integrating local knowledge with modern technology could help balance water use in a way that mitigates desertification.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Italian lakes water level monitoring through GEDI and SWOT

Authors: Alireza Hamoudzadeh, Carlotta Grande, Federica Leotta, Mirko Reguzzoni, Lorenza Ranaldi, Filippo Bocchino, Roberta Ravanelli, Giulia Graldi, Valeria Belloni, Deodato Tapete, Alessandro Ursi, Maria Virelli, Patrizia Sacco, Mattia Crespi
Affiliations: Sapienza University of Rome, DICEA, Geodesy and Geomatics Division, Sapienza School for Advanced Studies, Sapienza University of Rome, Italian Space Agency (Agenzia spaziale italiana), University of Liège, Department of Geography (Faculty of Sciences), Geomatics Unit, Politecnico di Milano, Department of Civil and Environmental Engineering (DICA)
Inland water bodies are essential sources of freshwater for several activities, such as human consumption, industrial processes, and irrigation of agricultural sites. Monitoring their levels is crucial to understanding the impacts of climate change and human activities on this precious resource. Traditionally, water levels of inland water bodies are monitored using gauge stations. Nonetheless, global-scale deployment of gauge stations is unfeasible because of their high installation and maintenance costs and the remote location of many water reservoirs. Moreover, a considerable amount of in-situ data is gathered but not distributed due to governmental restrictions. It is therefore crucial to develop effective methodologies based on up-to-date technologies to monitor the water level of inland freshwater reservoirs homogeneously and on a large scale. Nowadays, thanks to the immense developments in remote sensing technologies, Earth Observation (EO) can offer a feasible and low-cost answer to the need for extensive and long-term monitoring of surface water levels. This research fits precisely within this context, aiming to produce refined, continuous water level time series for inland water bodies by analysing the data collected by two satellite altimetry missions: the Global Ecosystem Dynamics Investigation (GEDI) and the Surface Water and Ocean Topography (SWOT) missions. Understanding the limitations, accuracy, and precision of these two sensors is therefore essential for a reliable integration of data from both sources, which also reduces the overall revisit time, particularly as GEDI is scheduled for reactivation while SWOT continues monitoring water levels.
GEDI and inland water levels. GEDI [1,2] is a spaceborne LiDAR altimeter able to collect high-resolution measurements (25 m footprint size) of Earth's surface, focusing on canopy and terrain.
GEDI is hosted on the International Space Station and collects measurements within the latitude range of 51.6°N to 51.6°S. Its revisit time varies from a few hours to several days, depending on the location, spatial extent, and orientation of the region of interest. GEDI's complete dataset, covering March 2019 to March 2023, is available on Google Earth Engine (GEE), a cloud-based platform integrating multiple datasets with advanced analysis capabilities [3]. The first part of this study evaluates the accuracy of GEDI altimetric data for lake water level monitoring by comparing GEDI water level measurements with ground truth data, and proposes a robust outlier detection methodology implemented directly within GEE. In particular, a selection of lakes across Italy was analyzed for the period from April 2019 to June 2022 [4]. The proposed outlier detection workflow is a two-step process. The first step removes invalid data based directly on GEDI metadata: measurements flagged as invalid by the "quality_flag" (indicating signal anomalies) or the "degrade_flag" (pointing or positioning degradation) are discarded. The second step is an iterative 3·NMAD-based test: measurements outside the range of ±3·NMAD (Normalized Median Absolute Deviation) from the per-epoch spatial median for each lake are classified as outliers and removed. We first applied the developed workflow to four lakes in Northern Italy for which gauge measurements were available: Garda, Como, Iseo, and Maggiore. After outlier removal, we observed an overall intrinsic precision (the mean over the four lakes of the per-epoch spatial NMADs over all available epochs for each lake) of 0.11 m for the studied period. Additionally, the methodology was applied to six smaller lakes in Central Italy (Trasimeno, Bolsena, Nemi, Albano, Bracciano, and Martignano, 1–150 km²), where no gauge measurements were available.
GEDI successfully measured water levels for these lakes with an intrinsic precision (NMAD) better than 10 cm and an acceptable number of acquisitions (between 6 and 36 measurements, depending on the area of the lake).
SWOT altimetry and water surface elevation. The SWOT mission, led by NASA in collaboration with CNES, employs a Ka-band Radar Interferometer (KaRIn) as its principal instrument. This altimeter provides extensive global coverage, monitoring 86% of Earth's surface between 77.6°S and 77.6°N with a 100 m pixel size and a revisit time of 21 days over its three-year mission. SWOT references elevations to the EGM2008 geoid model and applies corrections for atmospheric and tidal effects [5]. The SWOT mission was launched in December 2022 with the primary objectives of providing the first worldwide inventory of water resources, including rivers, lakes, and reservoirs, of observing the fine details of ocean surface topography, and of measuring how terrestrial surface water bodies change over time [5]. SWOT data acquisition started between April and August 2023, depending on the lake's location, aligning closely with the temporary decommissioning of the GEDI mission. Considering the SWOT v2.0 product (SWOT_L2_HR_Raster_2.0) from April 2023 to September 2024, the sensor achieved a 92% correlation with in situ gauge measurements, detecting water level variations with an average precision of ~0.06 m over lakes in Northern Italy and Switzerland. For ungauged lakes in Central Italy, the spatial NMAD of the SWOT pixels within the lake boundary for each epoch was under 10 cm, with the difference between the per-epoch spatial mean and median below 15 cm for most epochs, indicating minimal outlier presence.
Acknowledgements. This research is performed in the framework of the GRAW project, funded by the Italian Space Agency (ASI), Agreement n.
2023-1-HB.0, as part of ASI's programme "Innovation for Downstream Preparation for Science" (I4DP_SCIENCE).
References
[1] Dubayah, R., et al., 2021. GEDI L3 Gridded Land Surface Metrics.
[2] University of Maryland, 2022. GEDI Ecosystem Lidar. www.gedi.umd.edu.
[3] Hamoudzadeh, A., et al.: GEDI Data Within Google Earth Engine: Potentials and Analysis for Inland Surface Water Monitoring, EGU General Assembly 2023, Vienna, Austria, EGU23-15083, https://doi.org/10.57757/IUGG23-3886.
[4] Hamoudzadeh, A., Ravanelli, R., and Crespi, M.: GEDI Data Within Google Earth Engine: Preliminary Analysis of a Resource for Inland Surface Water Monitoring, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLVIII-M-1-2023, 131–136, https://doi.org/10.5194/isprs-archives-XLVIII-M-1-2023-131-2023, 2023.
[5] Hamoudzadeh, A., Ravanelli, R., and Crespi, M.: SWOT Level 2 Lake Single-Pass Product: The L2_HR_LakeSP Data Preliminary Analysis for Water Level Monitoring, Remote Sensing, 16, 1244, https://doi.org/10.3390/rs16071244.
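The iterative 3·NMAD test at the core of the outlier detection workflow can be sketched as follows. The metadata-flag filtering step runs on GEE and is omitted here; the function names and the toy water levels are illustrative assumptions:

```python
import numpy as np

def nmad(x):
    """Normalized Median Absolute Deviation (approximates the std for Gaussian data)."""
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def iterative_nmad_filter(levels, k=3.0, max_iter=20):
    """Iteratively drop measurements farther than k*NMAD from the spatial median."""
    x = np.asarray(levels, dtype=float)
    for _ in range(max_iter):
        med, s = np.median(x), nmad(x)
        keep = np.abs(x - med) <= k * s
        if keep.all() or s == 0:
            break
        x = x[keep]
    return x

# One epoch of toy water levels over a lake (metres), with two gross outliers
epoch = [194.9, 195.0, 195.1, 195.0, 194.95, 201.3, 188.2]
clean = iterative_nmad_filter(epoch)
```

The per-epoch spatial NMAD of `clean` is then the intrinsic-precision statistic quoted in the abstract.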

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Development of a water stress indicator that highlights the impacts of tourism

Authors: Fabien Castel, Guillermo Gonzalez Fradejas, Mael Plantec
Affiliations: Murmuration
Monitoring water resources and usage patterns is crucial for sustainable water management. A commonly used indicator for assessing the imbalance between water availability and water demand is water stress, defined as the ratio between the total freshwater withdrawn by all major human activity sectors and the total renewable freshwater resources. By accurately assessing water availability and demand, we can identify areas vulnerable to water scarcity and develop strategies to mitigate it. Existing water stress indicators, such as those provided by the World Resources Institute (WRI) through the Aqueduct platform and by the Food and Agriculture Organization (FAO), offer valuable insights into global water stress patterns. However, these indicators often present limitations. They typically provide annual data or baseline values based on historical data. Additionally, they are usually presented as an aggregated value, making it difficult to isolate the specific contributions of different human activities to water stress. Furthermore, these indicators frequently rely on complex hydrological models, like the PCR-GLOBWB2 model, that are computationally intensive and require significant data inputs. This project aims to develop a water stress indicator that addresses these limitations. The indicator developed here relies on open-source satellite data to compute water availability. ECMWF ERA5-Land Monthly Aggregated values from the Copernicus Climate Change Service are used to avoid running hydrological models directly. This dataset provides a comprehensive list of hydrological variables, including runoff, precipitation, and evaporation, at a 5 arc-min resolution, approximately 10 km at the equator. This allows for a quick estimation of water availability on a near-real-time basis, with monthly frequency.
Moreover, the water stress indicator developed here specifically isolates the impact of tourism on water demand by differentiating between the contributions of local residents and tourists within municipal water demand. This provides a stepping stone for another ongoing project focused on developing a carrying-capacity indicator that will evaluate the environmental impacts of tourism on water, biodiversity, and air quality. Tourism can put considerable pressure on water resources: tourists often consume more water than local populations, particularly in coastal and resort areas, many of which already face water scarcity. Understanding the specific contribution of tourism to overall water withdrawals is therefore crucial for sustainable water management and planning. Finally, water stress is usually measured at the level of the hydrological basin, as this is the natural catchment area for precipitation and runoff. The indicator presented here also calculates meaningful values for target areas that do not align with basin boundaries, by averaging the water availability of overlapping basins. For the computation of the water stress indicator, four major sources of water withdrawals were considered: irrigation, livestock, industrial, and municipal. The contribution of tourism was computed specifically as part of the municipal demand. Water availability was computed as runoff minus environmental flows: runoff values were taken from the ERA5-Land dataset, a portion was allocated to environmental flows, and the remainder was left for human needs. Water demand was estimated by disaggregating national statistics provided by the United Nations through the AQUASTAT database, which contains reported values of water withdrawals across three sectors (irrigation, industrial, and municipal) per country and year. These country-level annual values were disaggregated into monthly values at the level of 1 km pixels.
To achieve this, spatial and temporal weights were assigned to the pixels within each country using spatial and temporal covariates, a technique known as dasymetric weighting. Several techniques and databases were used, depending on the source of demand. A combination of the SM2RAIN algorithm, evapotranspiration data from ERA5-Land, and FAO's Global Map of Irrigated Areas (GMIA) was used to estimate the distribution of irrigation water demand. The distribution of industrial water demand was assessed using power plant energy production data from the Global Power Plant Database and population density from the Gridded Population of the World v4.11 (GPW). Municipal water demand distribution was estimated from the distribution of tourists and the local population. Local population density data were again obtained from GPW v4.11, while tourism data were obtained from the monthly number of nights spent at tourist accommodation establishments at the NUTS2 regional level, as reported by EUROSTAT for Europe, and from UNWTO data at the country level for the rest of the world. Corine Land Cover 2018 data and OSM data on tourism attractions and accommodations were used to spatially distribute tourists. Finally, livestock water demand was computed directly from livestock numbers in the Gridded Livestock of the World (GLW4) database together with water requirements based on temperature and animal type. Water stress was calculated as the ratio of water demand to available water resources. The results were analyzed both at the basin level and for target areas that did not align with basin boundaries. To validate the methodology, the estimated water withdrawals were compared to data reported for France at the department level. Validation showed mixed results.
While the methods presented here perform well in distributing municipal water demand, including the contribution of tourism, other sources of demand, especially industrial, were estimated less accurately. In the future, we hope to improve the disaggregation process by including more specific information to better capture the spatial distribution of industrial demand, and to include future projections of water stress under the different IPCC scenarios.
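The core computation, dasymetric disaggregation of national withdrawals followed by the demand-to-availability ratio, can be sketched as follows. The 40% environmental-flow share, the pixel values, and the function names are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

def disaggregate(national_total, weights):
    """Dasymetric weighting: spread a national annual withdrawal over pixels
    in proportion to a spatial covariate (e.g. population density)."""
    w = np.asarray(weights, dtype=float)
    return national_total * w / w.sum()

def water_stress(withdrawals, runoff, env_flow_fraction=0.4):
    """Stress = demand / (runoff minus the share reserved for environmental flows)."""
    available = runoff * (1.0 - env_flow_fraction)
    return withdrawals / available

# Toy example: 100 hm3/yr of municipal demand spread over 4 pixels by population
pixel_pop = [10_000, 40_000, 25_000, 25_000]
demand = disaggregate(100.0, pixel_pop)      # hm3/yr per pixel
runoff = np.array([50.0, 60.0, 80.0, 20.0])  # hm3/yr per pixel
stress = water_stress(demand, runoff)        # > 1 flags pixels under severe stress
```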

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Intercomparison of global evapotranspiration products over large irrigated areas using irrigation auxiliary information and in situ flux tower measurements

Authors: Pierre Laluet, Chiara Corbari, Wouter Dorigo
Affiliations: TU Wien, Climate and Environmental Remote Sensing, Department of Geodesy and Geoinformation, Politecnico di Milano, SIA - Edificio 4A, DICA
Global evapotranspiration (ET) products are essential for climate modeling, hydrological modeling, land surface models, and water resource management. Numerous global ET products have been developed, utilizing diverse methodologies ranging from machine learning based on in situ data to energy balance approaches and process-based models. While many studies have assessed various ET products, these typically focus on specific regions or basins and often use ET estimates derived from Gravity Recovery and Climate Experiment (GRACE) Total Water Storage Anomalies (TWSA) as a reference. However, no intercomparison has specifically addressed irrigated areas, despite their significant role in regional climate and hydrology. To address this gap, this study evaluates nine ET products (including one derived from GRACE TWSA data) across 10 irrigated regions in the contiguous United States, Spain, Italy, Australia, and India, characterized by diverse irrigation practices, climates, and crop types. The analysis examines ET dynamics and magnitudes in relation to auxiliary information (irrigation timing, equipment rates, crop water requirements, and climate) and locally validates the products using in situ ET measurements from Eddy Covariance stations located in irrigated fields in Northern and Southern Italy. Our results reveal substantial discrepancies among ET products in their ability to: i) detect irrigation signals, ii) capture seasonal irrigation patterns, and iii) estimate ET volumes consistent with crop water needs and local climatic conditions. Furthermore, the relationship between ET dynamics and irrigation information differs significantly between regions, sometimes even for the same product. These findings highlight the need for enhancing global ET products to better incorporate irrigation dynamics, improving their applicability for water management, climate modeling, and assessments of anthropogenic impacts on the Earth system. 
This research was funded by the Climate Change Initiative (CCI) AWU project (https://climate.esa.int/en/projects/anthropogenic-water-use/), supported by the European Space Agency (ESA).
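For context, ET estimates derived from GRACE TWSA, like the one included in this intercomparison, are commonly computed as a basin water-balance residual, ET = P − Q − dS/dt. A minimal sketch with toy monthly values and simple differencing of the storage anomaly (the forcing numbers are illustrative assumptions):

```python
import numpy as np

def et_water_balance(precip, runoff, twsa):
    """Basin-scale ET as the water-balance residual ET = P - Q - dS/dt,
    with dS/dt obtained by differencing the monthly TWSA series."""
    ds_dt = np.gradient(twsa)  # month-to-month storage change
    return precip - runoff - ds_dt

# Toy monthly basin averages in mm
precip = np.array([80.0, 60.0, 40.0, 30.0])
runoff = np.array([20.0, 15.0, 10.0, 5.0])
twsa = np.array([10.0, 12.0, 8.0, 2.0])
et = et_water_balance(precip, runoff, twsa)
```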

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: An automatic Google Earth Engine tool for generating lake water area time series from satellite imagery

Authors: Omid Elmi, Amirhossein Ahrari
Affiliations: Institute of Geodesy, University of Stuttgart, Water Resources and Environmental Engineering Laboratory, University of Oulu
Quantifying freshwater storage in lakes and reservoirs is essential for understanding global hydrological cycles, optimizing water resource management, and assessing the impacts of climate change on water availability. Although they cover only a small fraction of the Earth's surface, lakes and reservoirs are the most accessible freshwater resources, making them vital for ecosystems and human activities. Despite their importance, global monitoring of surface water storage remains challenging due to sparse gauge observations and limitations in current remote sensing techniques. Satellite imagery offers a unique opportunity to monitor surface water extent globally at various spectral, spatial, and temporal resolutions. However, most existing datasets and repositories are limited in two major ways. First, they typically cover only a subset of global lakes and reservoirs. Second, most are rarely updated after their initial release, making them inadequate for monitoring current lake conditions. To address these challenges, we present Water Area Tracking from Satellite Imagery (WaTSat), a Google Earth Engine (GEE)-based tool that generates long-term water area time series for global lakes and reservoirs from satellite imagery. Requiring minimal user input, low storage demand, and modest computational overhead, the tool automates the tasks of adjusting the lake shoreline search area, identifying and removing cloud-contaminated images, delineating the lake extent, detecting and removing outliers, and generating the water area time series. Utilizing the GEE platform not only ensures access to up-to-date satellite images but also eliminates the burden of maintaining substantial local storage and processing resources. The initial version of WaTSat uses the MODIS MOD09Q1 product to generate water area time series from February 2000 to the present.
Validation against altimetric water level time series for 40 global lakes of varying sizes and regions demonstrates an average correlation coefficient of 0.89 between lake water level and area estimates. This confirms WaTSat’s capability to accurately estimate surface water area and capture long-term trends and annual fluctuations. The tool’s outputs hold great potential for advancing scientific research and supporting operational applications in water resource management, hydrological modeling, and climate studies.
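The kind of per-image classification such a tool performs can be illustrated with a minimal water-area estimate from MOD09Q1's red and NIR bands: water strongly absorbs NIR, so a negative NDVI is a crude water proxy. WaTSat's actual delineation, cloud screening, and outlier handling are more sophisticated; the threshold and values below are illustrative assumptions:

```python
import numpy as np

def water_area(red, nir, pixel_area_km2=0.0625, ndvi_thresh=0.0):
    """Classify water where NDVI < threshold (water absorbs NIR) and sum
    pixel areas. MOD09Q1 pixels are 250 m, i.e. 0.0625 km2 each."""
    ndvi = (nir - red) / (nir + red)
    return np.count_nonzero(ndvi < ndvi_thresh) * pixel_area_km2

# Toy reflectances: the first two pixels are water-like (NIR below red)
red = np.array([0.05, 0.04, 0.20, 0.18])
nir = np.array([0.02, 0.03, 0.45, 0.40])
area = water_area(red, nir)  # km2 for this toy scene
```

Repeating this per acquisition date yields the water area time series that is then screened for outliers.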

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Land subsidence analysis in Taipei Basin, Taiwan, integrating Sentinel-1 InSAR, groundwater and rainfall data

Authors: Erik Rivas, Mahmud Haghighi, Mahdi Motagh, Jyr-Ching Hu, Shao-Hung Lin
Affiliations: Leibniz Hannover University, National Taiwan University
Excessive groundwater extraction has caused significant land subsidence in the Taipei Basin, Taiwan, in the past. In the 1950s, the basin experienced high subsidence rates due to excessive groundwater pumping, and regulations were needed to bring them under control. Geodetic monitoring of ground deformation in the basin started in 1948, when the government established levelling routes to monitor the impact of land subsidence. From 1975 to 1989, subsidence rates decreased and the aquifer exhibited signs of recovery, turning into uplift due to elastic rebound from the 1990s until the early 2000s. Since then, the basin has experienced alternating periods of subsidence and uplift, reflecting the high variability and complexity of its geological setting. In this study, we use Differential Interferometric Synthetic Aperture Radar (DInSAR) to quantify contemporary deformation in the Taipei Basin from October 2014 to October 2024, using open-access Sentinel-1 satellite images. Additionally, we investigated the basin over the same period using complementary data sources: groundwater levels, levelling data, and a rainfall station in the centre of Taipei. We applied the Small BAseline Subset (SBAS) approach to retrieve the deformation time series using both multi-look and single-look interferograms. For the multi-look processing, we formed a network of interferograms with temporal baselines between 30 and 90 days with the open-source software Miami InSAR time series in Python (MintPy), in order to minimize the impact of phase bias. The single-look processing used a stack of coregistered SLC images to form a network of interferograms with a maximum temporal baseline of 120 days using the recently released open-source software SARvey (Survey with SAR) for InSAR time series analysis.
The results show several subsidence clusters in the basin with rates of 2-3 cm/yr, most of which also exhibit strong seasonal deformation with an amplitude of about 2 cm. Additionally, an uplift signal was identified from late 2021, characterised by a well-defined spatial boundary and a cumulative displacement of 3-4 cm. Comparison against groundwater level data suggests that this uplift in the centre of the basin is associated with a rapid recovery of the water table, from -17 m in mid-2021 to -2 m by 2024, a net rise of approximately 15 m. This might indicate a recharge event; however, no significant changes were identified in the rainfall data during this period, suggesting instead a reduction in groundwater extraction activities.
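Rates and seasonal amplitudes like those reported above are commonly estimated by fitting a linear-plus-annual-sinusoid model to each pixel's displacement time series. A minimal least-squares sketch on synthetic data (the model form, sampling, and noise level are illustrative assumptions, not the SBAS processing itself):

```python
import numpy as np

def fit_rate_and_seasonal(t_years, displacement):
    """Least-squares fit of d(t) = c + v*t + a*sin(2*pi*t) + b*cos(2*pi*t);
    v is the linear rate and hypot(a, b) the annual seasonal amplitude."""
    w = 2.0 * np.pi * t_years
    A = np.column_stack([np.ones_like(t_years), t_years, np.sin(w), np.cos(w)])
    coef, *_ = np.linalg.lstsq(A, displacement, rcond=None)
    _, v, a, b = coef
    return v, float(np.hypot(a, b))

# Synthetic 6-day-sampled series: -2.5 cm/yr subsidence with a 2 cm seasonal swing
t = np.arange(0, 10, 6 / 365.25)
d = -2.5 * t + 2.0 * np.sin(2 * np.pi * t)
d += np.random.default_rng(0).normal(0, 0.2, t.size)  # measurement noise
rate, amp = fit_rate_and_seasonal(t, d)
```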
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Estimating Green and Blue Components of Evapotranspiration of a River Basin using Remote Sensing Data Based Soil Water Balance Model

Authors: Dr. Solomon Seyoum, Bich Tran, Dr Claire Michailovsky, Dr. Marloes Mul
Affiliations: Department of Land and Water Management, IHE Delft Institute for Water Education
Water resources assessment of a river basin relies on evaluating the water generated by the natural landscape and the water consumed by various land cover types. Splitting water consumption in the form of evapotranspiration (ET) into green ET and blue ET helps to differentiate the various water fluxes affected by different types of river basin management decisions. Typically, water consumption in a river basin is estimated using hydrological modelling through potential evapotranspiration, land use types, and available water. To estimate the green and blue components of evapotranspiration, we developed a pixel-based soil water balance model (PixSWAB) using remote-sensing-derived data on components of the hydrological cycle, including actual evapotranspiration, as input. The PixSWAB model aims to simulate the hydrological processes of river basins in a way that is straightforward yet sufficiently detailed to offer valuable insights into the key factors affecting water availability in basins. PixSWAB is a distributed water balance model designed to simulate precipitation, runoff, and evapotranspiration for a river basin by calculating the water balance for each individual pixel (grid cell) in the basin, with all inputs and outputs represented as gridded fields. Each pixel is characterized by its unique parameters, inputs, and outputs. This spatially explicit approach allows analysis of changes in water availability patterns due to anthropogenic interventions, such as land cover changes and irrigation, and impacts due to climate change. The PixSWAB model is applied to the Upper Litani River Basin in Lebanon to simulate the monthly hydrological processes, estimate the blue and green components of water consumption, and support water accounting for the hydrological years spanning from 2009 to 2017.
The model was able to simulate the monthly runoff with a Kling-Gupta Efficiency (KGE) of 0.80 and 0.79 for calibration and validation, respectively. Water consumption through evapotranspiration was estimated at 70.41% of the total precipitation for the simulation period. The blue and green components account for 44.4% and 55.4% of the total water consumption, respectively. The monthly shares of blue and green ET range from 0 to 95% and 0 to 99% of the total ET. This indicates that the river basin relies heavily on irrigation in dry months and that the forest land cover class accesses groundwater for its transpiration.
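For reference, the KGE score used above combines correlation, variability ratio and bias ratio into a single skill measure. A minimal sketch of the standard formulation (Gupta et al., 2009), not the authors' calibration code:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 is a perfect fit, lower values are worse."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0])
score = kge(obs, obs)  # identical series score exactly 1
```

Unlike the Nash-Sutcliffe efficiency, KGE separates the three error sources, which is why it is popular for runoff calibration.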
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Earth Observation Based Region Scale Water Bodies Modelling, Monitoring and Forecast: Atos’ Approach

Authors: Jean-Philippe Audoin, Guillaume Huguet
Affiliations: Atos France
During the last five years, water resources have been heavily impacted by unusual climate situations, especially in the south of Europe and in many other places around the world. This has led to unprecedentedly tense situations, or even competition, between water users, forcing heart-breaking prioritization between human drinking water supply, preservation of ecosystems, agricultural irrigation needs and industrial use. In this situation, and through the coming years in which this tension will probably recur and grow, the need for automated estimation and monitoring of water resources becomes compelling. The monitoring solution must address the concerns of all kinds of stakeholders, including resource administrators, regulatory institutions, environmental NGOs and users. Moreover, this monitoring must be systematic, unbiased, impartial and dispassionate, in order to give local or national decision-makers the proper information on which to base their decisions and the levers to justify their prioritization choices. Earth Observation provides unprecedented data for this purpose. For water bodies of 1 ha and above, whether lakes, dams or reservoirs, natural or man-made, Earth Observation is the perfect tool to monitor water resources at large scale, region-wide or even country-wide. The combination of results from different research published in recent years now makes it possible to monitor water bodies, for both surface and volume, based solely on satellite data. Inspired by those scientific papers, Atos has designed and developed an online service that automates water body monitoring based on the combination of different satellite sources, without the need for on-site data. The project is led in the scope of the agricultural and environmental services based on Atos' Copernicus DIAS, Mundi Webservices (https://mundiwebservices.com/).
For each water body to be monitored, a first theoretical model is built from DEM data (for example, the Copernicus Digital Elevation Model) and local geophysical parameters. In a second step, the theoretical model is tuned with shoreline evolution time series based on optical and SAR data (for example, Copernicus Sentinel-1 and Sentinel-2). A third step fine-tunes the model, calibrating the water level altitude thanks to specialised altimetry missions dedicated to water bodies (for example, CNES SWOT). Once a collection of water bodies has been modelled, the proposed service is ready to monitor it automatically. This systematic monitoring of water bodies is achieved through optical and SAR data (including Sentinel-1 and 2), with weekly revisit, tracking both surface and volume. The service also provides analyses of past years, plus real-time monitoring and alerting. This first service level mainly addresses water management institutions and all kinds of water resource users. Although not the main user of water in every region, agriculture is usually its main consumer, meaning that water used by agriculture is not reinjected into the cycle of available water. In the scope of region-wide monitoring, the water body monitoring service can be combined with Earth Observation-based irrigation detection. This is easier where the region provides a crop parcel inventory; otherwise, a parcel detection process is required as a preparation phase. This service allows the correlation of water body volume variation, previous years' rainfall in the area (for example, based on ERA5 data) and the evolution of irrigated surfaces among cultivated areas. In a further step, Atos is planning to add a forecast level to the service, making use of Destination Earth digital twin data. DestinE will provide an unprecedented source of temperature and rainfall forecasts, and the ability to calculate their impact on available water volumes and irrigation needs.
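The first step above, building a theoretical model from DEM data, amounts to deriving a level-area-volume relationship for the reservoir basin. A minimal sketch under the assumption that the DEM covers the empty basin (illustrative code, not Atos' service):

```python
import numpy as np

def area_volume_curve(dem, cell_area, levels):
    """Hypsometric (level-area-volume) curve from a DEM of the empty basin.

    For each candidate water level, area is the summed footprint of cells
    below the level, and volume is the integrated water depth over those cells.
    """
    dem = np.asarray(dem, float)
    areas, volumes = [], []
    for h in levels:
        wet = dem < h
        areas.append(wet.sum() * cell_area)
        volumes.append(np.clip(h - dem, 0.0, None).sum() * cell_area)
    return np.array(areas), np.array(volumes)

# Toy 3x3 basin (elevations in m), 100 m^2 cells
dem = np.array([[3.0, 2.0, 3.0],
                [2.0, 1.0, 2.0],
                [3.0, 2.0, 3.0]])
areas, volumes = area_volume_curve(dem, cell_area=100.0, levels=[1.5, 2.5])
```

Once such a curve exists, a satellite-observed shoreline (area) or altimetric level can be converted into a stored-volume estimate by interpolation along the curve.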
This second service level addresses more specifically agricultural regulation institutions and farmers' crop strategies. The goal of this poster is to sketch the proposed water resources monitoring process, detail the expected results and advantages, and highlight the challenges still to be overcome. Main scientific references the service is inspired by: A New Digital Lake Bathymetry Model Using the Step-Wise Water Recession Method to Generate 3D Lake Bathymetric Maps Based on DEMs, May 2019, DOI: 10.3390/w11061151; An Operational Framework for Mapping Irrigated Areas at Plot Scale Using Sentinel-1 and Sentinel-2 Data, May 2021, DOI: 10.3390/rs13132584.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Eddy generation in large deep seasonally-freezing Eurasian lakes: insights from satellite remote sensing and field observations

Authors: Alexei Kouraev, Elena Zakharova, Prof Andrey Kostianoy, Prof Nicholas Hall, Dr Anna Ginzburg, Andrey Suknev
Affiliations: Université de Toulouse, LEGOS (CNES/CNRS/IRD/UT3), EOLA, Institute of Water Problems, Russian Academy of Sciences, P.P. Shirshov Institute of Oceanology, Russian Academy of Sciences, S.Yu. Witte Moscow University, Great Baikal Trail (GBT) Buryatiya
Large Eurasian lakes are an integrator of climate processes at the regional scale and a good indicator of climate change. The variability of the ice and snow regime is important for their physical, chemical and biological properties, and for human activity. We present studies of ice and snow cover, water dynamics and the formation of giant ice rings and under-ice eddies for lakes Baikal (Russia) and Hovsgol (Mongolia). Multi-mission satellite observations make it possible to monitor ice cover evolution and water dynamics in the ice-free period with high spatial and temporal resolution. We have used satellite imagery in the visible, near-infrared, shortwave and thermal infrared (MODIS Terra/Aqua, Sentinel-2, Landsat-8, PlanetScope), complemented by active microwave observations (Sentinel-1 SAR, Jason-3 radar altimeter). We also complement these observations with in situ data from our field campaigns and loggers. We address the drivers and patterns of eddy generation during the ice-free season before and after vertical overturning, using satellite remote sensing, historical observations and in situ data to follow the different stages of warm and cold anticyclonic eddy generation. Thermal satellite images for 1998-2022 indicate a stable, repeating seasonal pattern which is classified into stages of eddy generation and development. Field observations complement satellite imagery to characterise the vertical structure of the eddies. The main source of eddy generation is the outflow from Barguzin Bay, which interacts with the coastline. Subsequent eddy generation is driven by density gradients and geostrophic adjustment. In summer this outflow is dominated by river inflow and leads to the formation of warm anticyclonic eddies. After the autumnal vertical overturn, the outflow is forced by the wind, bringing cold water from the bay to Middle Baikal and creating cold anticyclonic eddies.
We suggest that in the autumn, when the surrounding water cools below about 4°C, these cold eddies sink and transform into intrathermocline lens-like eddies that persist under ice and can later create giant ice rings on the Baikal ice cover. A better understanding of eddy dynamics and continued monitoring help to improve safety for people travelling or working on the ice. There is a need for timely communication of results to non-scientific audiences - fishermen, tourism agencies, tourists, journalists and local administrations. This research was supported by the CNES TOSCA Lakeddies, TRISHNA and SWIRL projects, P.P. Shirshov Institute of Oceanology Project N FMWE-2024-0016 and Institute of Water Problems Project N FMWZ-2022-0001.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Long-term analysis of global surface water volume anomalies using remote sensing

Authors: Omid Elmi, Mohammad J. Tourian, Dr. Peyman Saemian, Dr. Fabrice Papa, Nico Sneeuw
Affiliations: Institute of Geodesy, University of Stuttgart, LEGOS, University of Toulouse, CNES/CNRS/IRD/UT3
The availability and variations of continental water storage are of great importance for society, as they influence agricultural, industrial, and domestic water use. Among the components of water storage, terrestrial surface water (specifically lakes and reservoirs) is essential for both wildlife and human habitats. Lakes and reservoirs store freshwater in the most accessible way, control seasonal floods, and generate hydropower. Despite their importance, the estimation of surface water storage variation on a global scale is often derived from simplified models due to the absence of the necessary gauge and remote sensing measurements. In this study, we generate monthly water volume anomaly time series for 182,260 global lakes and reservoirs larger than 1 km² for the period 1985-2022. To achieve this, we derive water area time series from the Joint Research Center (JRC) Global Surface Water dataset. Subsequently, we compile all publicly available in situ water level time series and altimetric measurements from data providers such as Hydroweb and DAHITI. For the remaining lakes and reservoirs larger than 10 km², we first assess the feasibility of deriving water level time series from repeat-track missions such as Jason-2, Jason-3, and Sentinel-3. If no data are available, we attempt to generate water level time series using the CryoSat or ICESat-2 missions. For smaller lakes and reservoirs, water height information is extracted from the TerraSAR-X digital elevation model. After collecting the required data, we develop an empirical water area-level model for each water body and then estimate the water volume variation time series. With this dataset, we can investigate the temporal and spatial variations of surface water stored in lakes and reservoirs from 1985 to 2022 on a global scale. This study aims to answer these fundamental questions: 1) What are the temporal behaviors of surface water volume variations in different river basins?
2) Does the water volume variation trend align with other hydrological parameters' temporal variations? and 3) What are the major natural and anthropogenic factors that explain the long-term water volume variation?
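The area-level-volume step described above can be sketched as follows. This is a deliberately simplified illustration (a linear area-level fit and trapezoidal volume integration), not the authors' actual empirical model:

```python
import numpy as np

def volume_anomaly(area, level):
    """Cumulative water-volume change from paired area/level samples.

    Between epochs, the volume increment is approximated by the trapezoid
    0.5 * (A_i + A_{i+1}) * (h_{i+1} - h_i), accumulated over time.
    """
    area, level = np.asarray(area, float), np.asarray(level, float)
    dv = 0.5 * (area[:-1] + area[1:]) * np.diff(level)
    return np.concatenate([[0.0], np.cumsum(dv)])

def fit_area_level(area_obs, level_obs):
    """Empirical linear area-level model h = a*A + b (least squares)."""
    a, b = np.polyfit(area_obs, level_obs, 1)
    return lambda A: a * np.asarray(A, float) + b

# Illustrative lake: level rises with area; units km^2 and m
area = np.array([10.0, 12.0, 14.0])
level = np.array([100.0, 101.0, 102.0])
dV = volume_anomaly(area, level)  # in km^2 * m
```

The fitted area-level relation allows volume anomalies to be estimated even for epochs where only an area observation (e.g. from optical imagery) is available.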
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Exploring GRACE and GRACE-FO data to estimate the groundwater component of a digital twin of the terrestrial water cycle

Authors: Dr Nooshin Mehrnegar, Dr Fan Yang, A/Prof Mehdi Shafiei Joud, Ms Leire Retegui-Schiettekatte, Maike Schumacher, Ehsan Forootan, Dr Luca Brocca
Affiliations: Aalborg University, Research Institute for Geo-Hydrological Protection (IRPI), CNR
As the climate warms, hydrological processes become more complex, making the development of accurate digital twins of the terrestrial water cycle increasingly crucial for predicting water-related hazards and managing water resources effectively. In this study, we investigate the use of satellite observations of the Earth's time-variable gravity field from the Gravity Recovery and Climate Experiment (GRACE) and its Follow-On mission (GRACE-FO) to estimate Terrestrial Water Storage (TWS) and to constrain the water storage representation of large-scale hydrological models within Europe. The implementation follows a multivariate state-space Bayesian model-data fusion to merge a priori 0.1° resolution water storage estimates with the TWS observed by GRACE/GRACE-FO. The estimated groundwater storage will be validated against groundwater observation networks within Europe and against GNSS measurements. We also assess the feasibility of (Interferometric) Synthetic Aperture Radar (InSAR) and SAR techniques for downscaling water storage estimates in selected regions of Europe.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Presenting an EO-based Service for Hydrological Drought Monitoring

Authors: Luca Cenci, Giuseppe Squicciarino, Edoardo Cremonese, Luca Pulvirenti, Mariano Tullio Pintus, Giacomo Fadda, Andrea Duro, Silvia Puca, Paolo Botti, Costantino Azzena
Affiliations: CIMA Research Foundation, Regione Autonoma Sardegna, Italian Civil Protection Department
Considering the current climate change scenario in the Mediterranean area, characterized by an evident intensification of hydrometeorological extremes (e.g., droughts, floods, heatwaves, wildfires), it is important to develop reliable monitoring and Early Warning Systems (EWS) to support the national agencies in charge of undertaking actions to mitigate the impacts of these hazards. In this context, satellite Earth Observation (EO) data are a valuable tool to support decision-makers. Within this framework, this work describes a service designed for monitoring water body (WB) extent through time by means of Sentinel-2 (S2) data, together with its validation. For the current analysis, the WBs of interest are water reservoirs. The latter can be severely affected by hydrological drought, which in turn can reduce the availability of water resources. Indeed, when critical periods characterized by this hazard occur, water resources management can be a crucial aspect. In this context, the final objective of the presented service is to support the operational activities of the Italian Civil Protection Department (DPC). From a methodological point of view, the presented service aims at: i) mapping the spatiotemporal variations of reservoirs' water extents by means of S2 data, every time an image collected over the reservoir is available from the data provider's repository; ii) computing the corresponding time series of the reservoir "degree of filling" in terms of WB extent percentage ("% Extent"), by comparing the S2-based WB map extent against the maximum WB extent derived from the Global Surface Water (GSW) dataset (obtained using Landsat data acquired from 1984 to 2021); iii) generating the monthly time series of "% Extent", by averaging the previous data over a monthly time scale; iv) computing the "% Extent" monthly anomalies, taking into account a reference period. For the current analysis, the time span used for performing the validation test was 2018-2024.
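Step iv), the computation of monthly "% Extent" anomalies against a reference period, can be sketched as follows (illustrative code with toy numbers, not the operational service):

```python
import numpy as np

def monthly_anomalies(months, pct_extent, ref_mask):
    """Anomaly of each monthly '% Extent' value with respect to that calendar
    month's mean over a reference period (ref_mask flags reference samples)."""
    months = np.asarray(months)
    pct = np.asarray(pct_extent, float)
    clim = {m: pct[ref_mask & (months == m)].mean() for m in np.unique(months)}
    return np.array([p - clim[m] for m, p in zip(months, pct)])

# Two Januaries and two Julys; the first year is the reference period
months = np.array([1, 7, 1, 7])
pct = np.array([80.0, 40.0, 70.0, 35.0])
ref = np.array([True, True, False, False])
anom = monthly_anomalies(months, pct, ref)
```

Subtracting a per-calendar-month climatology removes the normal seasonal filling cycle, so that persistent negative anomalies stand out as potential hydrological drought signals.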
A detailed description of the methodology can be found in Cenci et al., 2024, together with the results of a first "local" validation of the prototype version of the service, carried out only for the San Giuliano reservoir (Basilicata, Southern Italy). The analysis described in this work, instead, was performed taking into account a larger number (i.e., a dozen) of WBs located in the Sardinia Region (Italy), characterized by different environmental conditions. This allowed us to test service performance in various situations, thereby producing more robust results. Both validations were carried out by comparing the S2 "% Extent" data (including anomalies) against analogous variables referring to the reservoir "degree of filling" expressed in terms of water volume ("% Volume"). The reference data on the amount of water stored in the reservoirs, in volumetric terms, were collected by in-situ stations. Results obtained in Sardinia confirmed those of the first "local" validation: i) the service can accurately monitor the temporal trend of the reservoirs' "degree of filling" by means of the S2 "% Extent" time series; ii) the service is capable of identifying extreme conditions potentially linked to hydrological drought events by means of the S2 "% Extent" anomalies. The latter can also be used to detect early signs of critical periods to support the management of water resources, not only during emergencies but also before their occurrence. These features are extremely important for institutions and agencies in charge of monitoring and managing water resources, enabling actions to mitigate the negative effects associated with hydrological drought.
However, when interpreting the results for such purposes, the relationships between the service outputs, the occurrence of hydrological drought events and the effects of human-related activities potentially linked to changes in the water stored in the reservoir (e.g., agriculture, hydroelectric power generation, additional activities linked to the management or maintenance of the reservoir) shall be carefully evaluated (Parshina et al., 2024). Acknowledgements: The authors acknowledge the support of the Autorità di Bacino Distrettuale della Sardegna for providing the reference data - related to the reservoir amount of water storage, in volumetric terms - used during the validation. References: - L. Cenci, G. Squicciarino, L. Pulvirenti, S. Puca and A. Duro, "Validation of a Prototype Monitoring System of Water Bodies Extent for Civil Protection Applications," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 3765-3769, doi: 10.1109/IGARSS53475.2024.10641198. - O. Parshina, L. Cenci, G. Squicciarino and L. Pulvirenti, "Satellite-Based Monitoring of Vegetation Health and Water Bodies Extent in Dry Periods and Under Drought Conditions," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 4027-4031, doi: 10.1109/IGARSS53475.2024.10640789.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Characterizing and monitoring Ramsar wetlands using multi-source remote sensing data

Authors: Yang Li, Nandika Tsendbazar, Dr. Kirsten de Beurs, Dr. Lammert
Affiliations: Wageningen University
Wetlands are vital for global biodiversity and climate regulation, yet they remain among the most threatened ecosystems (Weise et al., 2020). Historic data show a loss of 54-57% of wetlands from the 1700s to the 1900s, with significant declines in Europe and North America (Davidson, 2014). Recent studies have revealed a 33% loss of global potential wetlands by 2009 (Hu et al., 2017), driven by agricultural expansion and urbanization. Despite concerted global conservation efforts, including the Ramsar Convention and the Sustainable Development Goals (SDGs), wetland conservation still faces significant challenges. For example, the Haiderpur wetland, a Ramsar site on the Muzaffarnagar-Bijnor border in India, was drained in 2023 for agricultural development, emphasizing the need for more effective conservation strategies (Abdurrahim et al., 2023; Gardner and Finlayson, 2018; UN Water, 2018). Harnessing remote sensing technology, a variety of global wetland datasets have been developed, grouped into three main categories: land cover datasets, single-type wetland datasets and multi-type wetland datasets (Appendix 1 Tab. S1). While these datasets have satisfied the requirements of certain user groups, they each present unique limitations (Xu et al., 2020). For instance, land cover datasets focus more on terrestrial ecosystems, typically encompassing a narrow range of wetland subtypes, primarily water bodies and wetlands (Li and Niu, 2022). Single-type wetland datasets provide detailed views of specific wetland subtypes, but their integration for systematic analysis is complicated by variations in data sources, definitions, methods and timelines (Jia et al., 2023; Murray, 2023; Pickens et al., 2020). The latest multi-type wetland dataset provides more information on several wetland subtypes (Zhang et al., 2023).
Nonetheless, it falls short in accurately classifying inland marshes and swamps, and it relies on a sample-dependent classification method, which poses challenges for its application in sustained monitoring efforts. Recent advances in the use of Synthetic Aperture Radar (SAR) data have greatly enhanced wetland monitoring (Amitrano et al., 2024). Chapman et al. (2015) and Hess et al. (2015) used multi-temporal JERS-1 L-band SAR data with different thresholds for various land cover types to map Amazon Basin wetlands. Rosenqvist et al. (2020) built on previous work, using ALOS-2 PALSAR-2 data and decision tree classification to identify maximum and minimum inundation extents from 2014 to 2017, offering precise seasonal inundation dynamics. Oakes et al. (2023) introduced RadWet, which auto-generates training datasets and uses ensemble machine learning to classify Sentinel-1 C-band SAR, enabling 12-day-interval inundation monitoring at 30-metre resolution. In addition to mapping inundated vegetation, SAR data have also enabled significant progress in monitoring water levels (Kossieris et al., 2024), water depth (Pereira et al., 2019), and coastal topography (Bishop-Taylor et al., 2019). Integrating multi-source Earth observation data is crucial to enhance wetland monitoring further (Xu et al., 2022). Upcoming high-performance remote sensing satellites, such as BIOMASS, the Environmental Mapping and Analysis Program (EnMAP), the NASA-ISRO Synthetic Aperture Radar (NISAR), and Surface Water and Ocean Topography (SWOT), herald new opportunities for environmental monitoring (Kooistra et al., 2024). However, effectively combining these new sensors to improve the state of global wetland monitoring remains a compelling domain for further exploration.
Monitoring waterlogged wetlands without surface flooding presents significant challenges for both optical and radar remote sensing: the former primarily detects open water and often overlooks water-saturated soils hidden by vegetation (Zhang et al., 2021), while the latter struggles with the dynamic wetland vegetation and soil conditions that affect radar backscatter, complicating consistent detection (Tootchi et al., 2018). Data integration methods attempt to address these challenges but depend heavily on the quality and availability of existing data, which can hinder consistent time series observations and timely updates (Tootchi et al., 2019; Zhang et al., 2021). Another approach simulates potential wetland distribution through terrain-based hydrological models/indices, assuming uniform local hydraulic gradients and slopes and water table fluctuations consistently governed by precipitation (Gumbricht et al., 2017; Hu et al., 2017; Xi et al., 2022). The low spatial resolution of meteorological data constrains this approach (Abatzoglou et al., 2018). Some studies use machine (deep) learning to map waterlogged wetlands (Melton et al., 2022; Poggio et al., 2019), with the quality of training samples being more critical than the model used (Minasny et al., 2019). However, collecting training samples is time-consuming and laborious (Dehkordi et al., 2022). The significant interannual variability of wetlands further complicates historical tracking (Yan and Niu, 2021). Recent advances in multi-source remote sensing data fusion products, such as the global 30 m daily seamless data cube (SDC), enhance historical surveillance capabilities, yet developing advanced methods to accurately assess waterlogged wetland extent remains essential (Liu et al., 2021). Wetland changes are driven by anthropogenic activities such as agricultural expansion and dam construction, as well as climatic factors such as rising temperatures and declining precipitation (Yi et al., 2024).
Identifying these drivers and understanding their patterns is crucial for developing effective wetland conservation policies (Murray, 2023). Fluet-Chouinard et al. (2023) interpolated time series of seven human-induced factors and potential wetland areas derived from existing and simulated wetland datasets to reconstruct the anthropogenic loss of global wetlands between 1700 and 2020. Zhang et al. (2022) employed linear regression to investigate the relationship between six climate variables and the dynamics of alpine wetlands on the Qinghai-Tibetan Plateau. Owing to the difficulty of collecting historical driver data, these studies focus on single types of drivers or a single wetland site. Moreover, certain drivers, such as temperature variations and hydrological conditions, have compound effects on the wetland landscape, but research exploring these compound effects is still scarce (Xiong et al., 2023). Advances in remote sensing technology and its widespread application have provided key data for understanding the forces driving changes in wetlands (X. Chen et al., 2023; Ghajarnia et al., 2020; Liu et al., 2021). However, linking the long-term changes of these drivers and their synergistic effects to wetland dynamics requires innovative methodologies. This research aims to develop a highly automatic mapping method, applicable at large scale and tailored to diverse wetlands worldwide. The method will specifically focus on effective monitoring of highly dynamic flooded wetlands and non-hydrodynamic waterlogged wetlands. The second objective is to conduct long-term monitoring, from 1985 to 2025, of key wetlands such as Ramsar wetlands using the developed method, to elucidate their spatiotemporal dynamics. Finally, for severely degraded wetlands, various potential driving factors will be characterized based on existing related datasets.
These factors will be comprehensively analyzed to identify the primary drivers of wetland degradation, thus offering insights to enhance the conservation efforts for these wetlands. The primary research questions are as follows: 1) How to improve the mapping of temporarily flooded and waterlogged wetlands using multi-source remote sensing data? 2) What are the spatiotemporal changes in Ramsar wetlands between 1985 and 2025? 3) What are the key driving factors of these changes in severely degraded Ramsar wetlands?
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: How can GRACE/-FO data assimilation enhance our understanding of anthropogenic effects on the hydrological system in Europe?

Authors: Yorck Ewerdwalbesloh, Visakh Sivaprasad, Carsten Montzka, Jürgen Kusche, Anne Springer
Affiliations: Institute of Geodesy and Geoinformation, University of Bonn, Institute of Bio- and Geosciences: Agrosphere (IBG-3), Forschungszentrum Jülich
Land surface models (LSMs) are used to simulate dynamic processes of energy exchange, water movement, and biogeochemical cycling within vegetation, soil, and surface water systems. They play a critical role in understanding ecosystem and climate feedbacks affecting the hydrological cycle. However, LSMs often face limitations due to simplifications or processes that are not considered in the model. For instance, anthropogenic impacts through irrigation or groundwater abstraction influence water and energy fluxes while not necessarily being explicitly accounted for. Assimilating observations into LSMs can not only help to improve model state variables but also to overcome these deficits. In this context, data from the GRACE (Gravity Recovery and Climate Experiment) satellite mission and its successor, GRACE Follow-On, provide valuable observations of global and regional variability in total water storage (TWS). In this contribution, we explore the potential of detecting anthropogenic impacts on the hydrological system by assimilating TWS observations from the GRACE and GRACE-FO missions into the Community Land Model 5 (CLM5) over Europe. Our objectives are (1) to evaluate the feasibility of using GRACE data to identify human-induced changes in the hydrological system and (2) to link specific processes to these impacts. To achieve this, we perform experiments with data assimilation (DA) and without it (open-loop, OL) and compare both simulations, in regions with intensive human water management, against GRACE and other independent observations. Additionally, analyzing spatial and temporal patterns in the assimilation increments can provide valuable insights into the attribution of these patterns to specific human activities and their temporal evolution.
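The increment analysis described above rests on the assimilation update step. For intuition, a scalar Kalman analysis step for a single TWS state (a textbook simplification with identity observation operator, not the CLM5 assimilation scheme) looks like this:

```python
def kalman_increment(x_forecast, y_obs, p_forecast, r_obs):
    """Scalar Kalman analysis step for a TWS state.

    Returns the analysis state and the increment K * (y - x_f); systematic
    increments of one sign can hint at unmodelled processes such as
    groundwater abstraction or irrigation.
    """
    k = p_forecast / (p_forecast + r_obs)  # Kalman gain in [0, 1]
    increment = k * (y_obs - x_forecast)
    return x_forecast + increment, increment

# Model overestimates TWS by 20 mm; equal model/observation error variances
x_a, inc = kalman_increment(x_forecast=500.0, y_obs=480.0,
                            p_forecast=25.0, r_obs=25.0)
```

With equal error variances the gain is 0.5, so the analysis moves halfway towards the observation; a run of negative increments in an irrigated region would be a candidate anthropogenic signal.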
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone L-M)

Poster: Global trends of vegetation leaf moisture content since the 1980s

Authors: Luisa Schmidt, Johanna Kranz, Daniel Kinalczyk, Dr Marta Yebra, Li Zhao, Ruxandra-Maria Zotta, Matthias Forkel
Affiliations: TUD Dresden University of Technology, Junior Professorship in Environmental Remote Sensing, The Australian National University, ANU College of Engineering, Computing and Cybernetics, School of Engineering, The Australian National University, ANU College of Science, Fenner School of Environment and Society, TU Wien, Department of Geodesy and Geoinformation
Vegetation regulates several processes in the water cycle, such as interception, transpiration, and water storage - factors essential for plant productivity, biomass production, and carbon cycling. Drought and heat events, characterised by reduced precipitation and enhanced temperatures and evaporation, can lead to water-stressed vegetation, which corresponds to diminished vegetation water storage. Ground measurements of leaf moisture content (or live fuel moisture content, LFMC), collected in several countries to assess wildfire danger, can be used as an indicator of vegetation water storage (Yebra et al., 2024). However, field measurements of LFMC are sparse and in many countries not available at all. Hence, it remains unclear how the observed climate warming and local drying are affecting LFMC and therefore vegetation water storage. The objective of this study is to quantify global changes in LFMC since the 1980s by developing a long-term satellite time series dataset of LFMC. Such a dataset will ultimately help to understand the impact of changing environmental conditions on the vegetation layer as part of both the water and the carbon cycle. To develop a long-term LFMC dataset, we build upon the recently introduced VOD2LFMC approach (Forkel et al., 2023), which applies an empirical model to estimate LFMC from satellite datasets of Vegetation Optical Depth (VOD) from passive microwave sensors and of Leaf Area Index (LAI). Here we extend this approach by using the VODCA v2 VOD dataset (Zotta et al., 2024) and GLOBMAP LAI data to create a global LFMC time series for the period 1988-2020. We train and cross-validate our approach against LFMC measurements from the Globe-LFMC database. We then compute trends in LFMC for different seasons and compare them with trends from a few field sites that provide long-term LFMC measurements.
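The empirical-model step (estimating LFMC from VOD and LAI) can be illustrated with a deliberately simplified linear regression on synthetic data; the published VOD2LFMC model has its own functional form, so this is only a sketch:

```python
import numpy as np

def fit_lfmc_model(vod, lai, lfmc_obs):
    """Fit an illustrative empirical model LFMC ~ c0 + c1*VOD + c2*LAI
    by ordinary least squares (not the actual VOD2LFMC formulation)."""
    X = np.column_stack([np.ones_like(vod), vod, lai])
    coef, *_ = np.linalg.lstsq(X, lfmc_obs, rcond=None)
    return coef

def predict_lfmc(coef, vod, lai):
    return coef[0] + coef[1] * np.asarray(vod) + coef[2] * np.asarray(lai)

# Synthetic training data following a known rule: LFMC = 20 + 100*VOD + 5*LAI
rng = np.random.default_rng(0)
vod = rng.uniform(0.1, 1.0, 50)
lai = rng.uniform(0.5, 5.0, 50)
lfmc = 20 + 100 * vod + 5 * lai
coef = fit_lfmc_model(vod, lai, lfmc)
```

In practice such a model would be trained and cross-validated against field LFMC observations (here, the Globe-LFMC database) rather than synthetic data.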
Initial results suggest a diverse pattern of LFMC trends worldwide, varying with vegetation type and regional climate conditions. These results indicate that trends in LFMC do not necessarily correspond to trends in the dead fuel moisture content of surface litter and woody debris, which is declining globally (Ellis et al., 2022). In conclusion, our results demonstrate the potential of passive microwave observations to add live fuel moisture content as a quantity for a more comprehensive assessment of vegetation water storage in a changing climate system. References: Ellis, T. M., Bowman, D. M. J. S., Jain, P., Flannigan, M. D., and Williamson, G. J.: Global increase in wildfire risk due to climate-driven declines in fuel moisture, Glob. Change Biol., 28, 1544–1559, https://doi.org/10.1111/gcb.16006, 2022. Forkel, M., Schmidt, L., Zotta, R.-M., Dorigo, W., and Yebra, M.: Estimating leaf moisture content at global scale from passive microwave satellite observations of vegetation optical depth, Hydrol. Earth Syst. Sci., 27, 39–68, https://doi.org/10.5194/hess-27-39-2023, 2023. Yebra, M., Scortechini, G., Adeline, K., Aktepe, N., Almoustafa, T., Bar-Massada, A., Beget, M. E., Boer, M., Bradstock, R., Brown, T., Castro, F. X., Chen, R., Chuvieco, E., Danson, M., Değirmenci, C. Ü., Delgado-Dávila, R., Dennison, P., Di Bella, C., Domenech, O., Féret, J.-B., Forsyth, G., Gabriel, E., Gagkas, Z., Gharbi, F., Granda, E., Griebel, A., He, B., Jolly, M., Kotzur, I., Kraaij, T., Kristina, A., Kütküt, P., Limousin, J.-M., Martín, M. P., Monteiro, A. T., Morais, M., Moreira, B., Mouillot, F., Msweli, S., Nolan, R. H., Pellizzaro, G., Qi, Y., Quan, X., Resco de Dios, V., Roberts, D., Tavşanoğlu, Ç., Taylor, A. F. S., Taylor, J., Tüfekcioğlu, İ., Ventura, A., and Younes Cardenas, N.: Globe-LFMC 2.0, an enhanced and updated dataset for live fuel moisture content research, Sci. Data, 11, 332, https://doi.org/10.1038/s41597-024-03159-6, 2024. 
Zotta, R.-M., Moesinger, L., van der Schalie, R., Vreugdenhil, M., Preimesberger, W., Frederikse, T., de Jeu, R., and Dorigo, W.: VODCA v2: multi-sensor, multi-frequency vegetation optical depth data for long-term canopy dynamics and biomass monitoring, Earth Syst. Sci. Data, 16, 4573–4617, https://doi.org/10.5194/essd-16-4573-2024, 2024.
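The empirical mapping from VOD and LAI to LFMC described above can be sketched as follows. This is an illustrative linear stand-in fitted to synthetic data, not the actual Forkel et al. (2023) model formulation; all coefficients and values are assumptions.

```python
import numpy as np

# Illustrative stand-in for the empirical VOD2LFMC mapping: a linear model
# LFMC ~ a*VOD + b*LAI + c fitted by least squares on synthetic data,
# demonstrating the train/validate idea (the real model differs in form).
rng = np.random.default_rng(0)
vod = rng.uniform(0.1, 1.2, 200)    # vegetation optical depth (-), synthetic
lai = rng.uniform(0.2, 6.0, 200)    # leaf area index (m2/m2), synthetic
lfmc = 80.0 * vod + 5.0 * lai + 40.0 + rng.normal(0, 5, 200)  # synthetic LFMC (%)

def fit_lfmc_model(vod, lai, lfmc):
    """Least-squares fit of the illustrative linear LFMC model."""
    A = np.column_stack([vod, lai, np.ones_like(vod)])
    coef, *_ = np.linalg.lstsq(A, lfmc, rcond=None)
    return coef

def predict_lfmc(coef, vod, lai):
    return coef[0] * vod + coef[1] * lai + coef[2]

# Simple hold-out validation; the study cross-validates against Globe-LFMC.
coef = fit_lfmc_model(vod[:150], lai[:150], lfmc[:150])
rmse = np.sqrt(np.mean((predict_lfmc(coef, vod[150:], lai[150:]) - lfmc[150:]) ** 2))
```

The hold-out RMSE approaches the noise level of the synthetic data, which is the behaviour a well-calibrated empirical model should show.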
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: A.09.05 - POSTER - Using Earth Observation to assess change and dynamics of the Greenland Ice Sheet

Mass loss from the Greenland Ice Sheet has accelerated over recent decades. Currently, about 60% of the mass loss is attributed to runoff of meltwater, and the rest to dynamic mass loss through ice flow into the ocean. The meltwater runoff from the ice sheet has increased significantly over the last 20 years, and models project that the volume of runoff will continue to increase. Runoff, and ice-sheet mass loss generally, have significant implications for sea-level rise, freshwater fluxes, ocean circulation and both terrestrial and marine ecosystems.

Recent advances in satellite Earth observation, in-situ measurements and numerical modelling enable a more accurate and integrated view of the ice sheet. These enhanced observations allow an improved understanding of the feedbacks between processes occurring within the ice sheet (e.g. meltwater hydrology linking the ice-sheet surface to the base, leading to feedbacks on ice velocity and terminus melting), as well as complex interactions with the atmosphere and ocean.

This session will highlight recent advances in our understanding of the dynamics of the Greenland Ice Sheet and their wider implications including:
- Interactions between the atmosphere and ice-sheet surface: surface mass balance, firn evolution, supraglacial hydrology and ponding.
- Quantifying interactions at the base of the ice sheet: basal friction, geothermal heat flux, subglacial hydrology and lakes.
- Impact of the ocean on tidewater glaciers and iceberg calving.
- Integrated assessment of hydrology and implications for freshwater flux.
- Assessing feedbacks between hydrology and ice flow.
- Evaluating the impact of ice-sheet change on ecosystems and wider Earth system.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Monitoring Greenland outlet glaciers using stereoscopic DEMs and radar altimetry

Authors: Dominic Hardy, Romilly Close, Professor Malcolm McMillan
Affiliations: University Of Lancaster, UK Centre for Polar Observation and Modelling, Centre of Excellence in Environmental Data Science, Lancaster Environment Centre, Lancaster University
Time series of Digital Elevation Models (DEMs) can be used to quantify surface elevation changes and mass balance over glaciated regions. Compared to other geodetic techniques, DEM-based approaches are particularly well-suited to regions that exhibit complex topography, such as narrow Greenlandic outlet glaciers, where fine spatial resolution is essential for understanding glaciological processes through time. The archive of consistent stereoscopic DEMs acquired over multiple decades, such as those from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the Terra satellite, provides a wealth of valuable data for achieving such insights. However, it is well-established that stereoscopic DEMs require additional corrections to remove both spatially-varying and systematic biases. For mountain glacier settings, stable ground, i.e. exposed bedrock, is commonly used to correct these biases by performing co-registration to an external reference DEM. However, this approach is often limited or impossible over ice sheets, due to the absence of sufficient exposed bedrock. In settings where there is little bedrock, coincident radar or laser altimetry acquired over the ice itself can, in theory, provide the ground control points required to co-register each DEM. However, the use of the long-term record of radar altimetry measurements over Greenland for this purpose remains relatively unexplored, despite considerable efforts in recent years to optimise processing of radar altimetry data over ice sheets, in projects such as Cryo-TEMPO, CryoTEMPO EOLIS and FDR4ALT. Alongside the decadal-scale datasets now available from ICESat and ICESat-2, this presents a timely opportunity to revisit this challenge and attempt to systematically combine altimetry and DEMs within large-scale processing systems. 
Here, we present an automated pipeline to co-register ASTER-derived DEMs with various coincident Level 2 altimetry products acquired between 2000 and 2023 over the Greenland Ice Sheet. The pipeline incorporates an adaptation of the Shean et al. (2016) filtering method for selecting dynamically controlled altimetry points, which removes the need to rely only on stable bedrock areas within each DEM scene. Specifically, we include additional dh/dt auxiliary information alongside velocity information in order to control the selection of co-registration points and minimise any resulting bias. Through the application of the pipeline to DEMs acquired over Helheim, Russell, Northeast Greenland Ice Stream and Upernavik Isstrøm glaciers, we explore the effectiveness of various altimeters, the sensitivity of filter conditions, and the impact of different co-registration methods across four contrasting Greenlandic glaciological settings. Ultimately, our aim is to establish and optimise a framework for integrating altimetry and DEMs at scale, to better constrain Greenland’s mass balance over the past two decades.
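The point-based co-registration step can be illustrated with a minimal vertical-bias correction against altimetry control points. The full pipeline also solves for horizontal shifts and applies the dh/dt- and velocity-based point filtering described above; all values here are synthetic.

```python
import numpy as np

def vertical_coregister(dem, points_rc, points_h):
    """Remove the median vertical bias of a DEM relative to altimetry control
    points (row/col indices and heights). Illustrative only: operational
    co-registration also estimates horizontal shifts."""
    rows, cols = points_rc[:, 0], points_rc[:, 1]
    dh = dem[rows, cols] - points_h   # DEM minus altimetry heights at the points
    bias = np.median(dh)              # robust estimate of the vertical offset
    return dem - bias, bias

# Synthetic demo: a tilted surface with a +3.2 m systematic offset.
yy, xx = np.mgrid[0:50, 0:50]
truth = 0.5 * xx + 0.2 * yy                      # "true" surface heights
dem = truth + 3.2                                 # biased stereoscopic DEM
pts = np.array([[5, 5], [10, 40], [30, 20], [45, 45]])   # control point indices
heights = truth[pts[:, 0], pts[:, 1]]             # altimetry heights at the points
corrected, bias = vertical_coregister(dem, pts, heights)
```

The median makes the offset estimate robust to a few outlying control points, which matters when altimetry footprints fall on dynamic ice.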
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Towards systematic mapping of Greenland’s active subglacial lakes

Authors: Diego Moral Pombo, Professor Malcolm McMillan, Dr Jade Bowling, Romilly Close, Dominic Hardy
Affiliations: Lancaster University, Centre for Polar Observation and Modelling
Understanding the distribution and dynamics of Greenland's active subglacial lakes is critical for assessing their role in the ice sheet’s hydrological and broader environmental systems. Despite significant progress in Antarctic research, Greenlandic subglacial lakes remain underexplored, with only a relatively small number identified to date. This knowledge gap arises from limitations in observational methods, largely due to the lakes’ smaller size and potentially shorter drainage cycles compared to their Antarctic counterparts (Livingstone et al., 2022). The GLOBE (Greenland Subglacial Lake Observatory) project aims to address this challenge through a program of research that will systematically integrate ultra-high-resolution Digital Elevation Models (DEMs) and a suite of other EO datasets, using statistical and geospatial techniques to map and monitor Greenland's active subglacial lakes at an unprecedented scale. Preliminary results from proof-of-concept studies have already demonstrated the viability of using the temporal variance in elevation as a proxy for identifying subglacial lake dynamics (e.g. Bowling et al., 2019). Here, we will present results from the first stage of the GLOBE project, which is working to extend this approach in a systematic way across the entire Greenland Ice Sheet to obtain a comprehensive inventory of its active subglacial lakes. Specifically, we will show examples from a number of test sites, and assess the ability of the method to resolve lake drainage events. In the project's initial phase, our goal is to compile and georeference the complete database of ArcticDEM 2-meter strip files over Greenland since 2008. In order to improve the accuracy and precision with which we can detect subglacial lakes, we co-register DEMs with contemporaneous satellite altimetry data, such as CryoSat-2 and ICESat-2. 
This coregistration allows us to separate out the long-term trend in ice sheet elevation and isolate the localised, more rapid changes associated with the draining and filling of the subglacial lakes. In the longer term, this initial work sets the stage for tackling subsequent challenges such as the automated identification of lake signatures, the analyses of lake drainage histories, and the assessment of the hydrological impacts of lake outburst floods. In doing so, and by exploiting the potential of ArcticDEM, GLOBE aims to improve and upscale our understanding of Greenland’s subglacial lake system.
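The elevation-variance proxy can be sketched as follows: remove the long-term linear dh/dt trend per pixel and map the residual variance, which stays high where the surface rises and falls with subglacial lake filling and drainage. This is a synthetic illustration, not the GLOBE processing chain.

```python
import numpy as np

def residual_variance_map(elev_stack, t):
    """Per-pixel variance of elevation after removing the linear dh/dt trend.
    elev_stack: (n_times, ny, nx) DEM stack; t: acquisition times (n_times,).
    Localised fill/drain cycles leave elevated residual variance."""
    n, ny, nx = elev_stack.shape
    A = np.column_stack([t, np.ones_like(t)])        # linear-trend design matrix
    flat = elev_stack.reshape(n, -1)
    coef, *_ = np.linalg.lstsq(A, flat, rcond=None)  # trend per pixel
    resid = flat - A @ coef
    return resid.var(axis=0).reshape(ny, nx)

# Demo: steady thinning everywhere, plus an oscillating "lake" pixel at (2, 2).
t = np.arange(8, dtype=float)
stack = 1000.0 - 0.5 * t[:, None, None] * np.ones((8, 5, 5))
stack[:, 2, 2] += 4.0 * np.sin(2 * np.pi * t / 4)    # fill/drain signal
var_map = residual_variance_map(stack, t)
```

Pixels following the regional trend have near-zero residual variance, so the oscillating pixel stands out clearly in the variance map.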
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Glacier algae spatial and temporal distribution at the Qaanaaq Ice Cap, Northwest Greenland

Authors: Giacomo Traversa, Yukihiko Onuma, Davide Fugazza, Roberto Garzonio, Takumi Suzuki, Nozomu Takeuchi, Biagio Di Mauro
Affiliations: Institute of Polar Sciences - National Research Council of Italy, Japan Aerospace Exploration Agency (JAXA), University of Milan, University of Milano-Bicocca, Chiba University
The mass loss of the Greenland Ice Sheet (GrIS) in recent years is extremely important from the perspective of environmental changes in the Arctic in the context of climate change. Understanding this melting process helps to predict sea level rise and assess the environmental impact of climate change. The ice sheet mass loss is caused not only by increasing temperature, but also by a decrease in albedo due to darkening of the snow and ice surface. Satellite and field observations report that this darkening is caused by the extensive proliferation of light-absorbing photosynthetic microorganisms (e.g. algae) living on frozen surfaces, such as the Qaanaaq Ice Cap in Northwest Greenland at about 77°N. In this research, remote sensing and field observations were used to observe the darkening of the Qaanaaq Ice Cap in space and time in relation to the surface concentration of glacier algae. A method was developed to quantify the concentration of algae on the ice cap from satellite, leveraging samples collected during two different melting seasons (2014 and 2023). The laboratory analyses revealed a predominance of Ancylonema nordenskioldii (the most abundant species identified on different glaciers across the world), followed by Ancylonema alaskana. By applying a correlation matrix, we identified the band ratio with the highest correlation (R²) with algal abundance from the sample dataset. This ratio, applied to satellite data (Sentinel-2, bands red-edge 1 - 705±15 nm - and red - 665±30 nm), allowed us to evaluate the annual and seasonal variability of glacier algae concentrations over the ice cap. In detail, mean and maximum algae concentrations over the ice cap were estimated from 2016 to 2023 in the summer period with a daily temporal resolution. Other estimates are based on albedo-derived metrics such as the start, end and length of the bare-ice season. 
These results were further analysed against topographical parameters (elevation, slope and aspect) in order to detect possible correlations. Finally, the seasonal variation in algal abundance was related to meteorological parameters measured by the automatic meteorological station SIGMA-B, located on the ice cap. The research was conducted in the context of two international projects, titled “Biological Darkening of Qaanaaq Ice Cap (BDQ)”, supported by the ArCS II International Early Career Researchers Program 2023-24, and “Triple UP-scaling of Ice-Light-Absorbing particles at Qaanaaq ice cap (TUPILAQ)”, supported by INTERACT TA.
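The kind of two-band ratio described above (Sentinel-2 red-edge 1 over red) can be sketched in a few lines. The example reflectance values are assumptions; the actual calibration of the ratio to algal abundance comes from the field samples.

```python
import numpy as np

def algae_band_ratio(b05_red_edge, b04_red):
    """Red-edge 1 (Sentinel-2 B05, ~705 nm) over red (B04, ~665 nm) reflectance
    ratio, a two-band index of the type correlated here with glacier-algae
    abundance. Converting the ratio to cells/mL requires a site-specific fit."""
    b04 = np.where(b04_red == 0, np.nan, b04_red)  # guard against division by zero
    return b05_red_edge / b04

# Demo with assumed reflectances: algal pigments absorb more strongly in the
# red band than at the red edge, so algal ice yields a higher ratio.
clean_ice = algae_band_ratio(np.array([0.60]), np.array([0.58]))
algal_ice = algae_band_ratio(np.array([0.35]), np.array([0.25]))
```

Thresholding or regressing such a ratio map against sampled abundances is what turns the index into a concentration estimate.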
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Advancing Year-Round Supraglacial Lake Monitoring on the Greenland Ice Sheet by Utilising Sentinel-1 C-Band Radar Data

Authors: Jacqueline Otto, Amber Leeson, Katie Miles, Jennifer Maddalena, Mal McMillan
Affiliations: Lancaster Environment Centre, Lancaster University
Supraglacial lakes on the Greenland ice sheet are expanding in number, size, and drainage frequency, influencing ice flow and the hydrological system. Traditional monitoring of these lake features using optical satellite imagery is limited by cloud cover and the polar night. Independent of these constraints, radar remote sensing offers a promising alternative to fill the temporal gaps and unknowns in current understanding of lake processes. Recent insights based on radar data reveal the occurrence of winter lake drainage and the presence of buried lakes not captured by optical imagery, emphasizing the need for year-round lake monitoring and improved lake mapping methods. However, the potential of radar data for supraglacial lake investigations strongly relies on the interpretation of complex radar backscatter signals. Decomposing and accurately interpreting radar backscatter remains a significant challenge that we aim to tackle in this project. Based on Sentinel-1 C-band SAR data, we assess the radar backscatter signal associated with supraglacial lakes across the Greenland ice sheet during the years 2018 and 2019, which present two contrasting melt seasons. We argue that differentiating between different types and evolutions of lakes over the course of a lake cycle is essential for accurately representing the ice-dynamical feedback from lake processes. Whether a lake drains or retains water over the course of a lake cycle determines its effect on ice rheology, ice flow and the drainage system. Therefore, we investigate the evolution of the backscatter signature over lakes throughout the season to differentiate between such lake types. We deploy time series data analysis techniques, in combination with clustering methods, to explore the radar backscatter signal across the range of observed lake processes. 
By refining our understanding of radar backscatter from lake processes, this research aims to inform and advance automated techniques for year-round and ice sheet-wide supraglacial lake detection and monitoring, contributing to an improved assessment of the glacio-hydrological system under the ongoing climatic changes.
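The clustering of backscatter time series can be illustrated with a plain k-means on synthetic lake series. This is a minimal stand-in: the study's actual time-series features and clustering choices are more involved, and the backscatter values below are invented.

```python
import numpy as np

def kmeans(series, init_idx, n_iter=20):
    """Plain k-means on time series (rows = lakes, cols = dates), a minimal
    stand-in for the clustering used to separate lake behaviours."""
    centres = series[list(init_idx)].astype(float)        # initial centroids
    for _ in range(n_iter):
        # distance of every series to every centroid, then nearest assignment
        d = np.linalg.norm(series[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centres)):                     # update centroids
            members = series[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return labels, centres

# Demo: "draining" lakes whose backscatter (dB) jumps mid-season as water is
# lost, versus lakes that retain water and change only slowly.
t = np.arange(10)
drain = np.array([-15.0 + 10 * (t >= 5) for _ in range(5)])
retain = np.array([-15.0 + 0.3 * t for _ in range(5)])
labels, _ = kmeans(np.vstack([drain, retain]), init_idx=(0, -1))
```

With clean synthetic curves the two behaviours separate immediately; real Sentinel-1 series need feature engineering (e.g. seasonal summaries) before clustering is this stable.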
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Evolution of the Petermann Ice Shelf River and Estuary, and Impacts on Ice Shelf Stability

Authors: Alison Banwell, Michela Savignano, Emily Glazer, Sarah Esenther, Robin Bell, Alexandra Boghosian, Roger Buck, Adam LeWinter, Laurence Smith, Leigh Stearns
Affiliations: Cooperative Institute For Research In Environmental Sciences (CIRES), University Of Colorado Boulder, Lamont-Doherty Earth Observatory, Columbia University, Brown University, University of Pennsylvania, Cold Regions Research and Engineering Laboratory
Ice shelves play a crucial role in helping to maintain ice sheet mass balance by buttressing upstream glacier flow. Their stability in both Antarctica and Greenland is influenced by surface melt rates and hydrology. Ice shelves with a high density of surface lakes are more susceptible to hydrofracture-driven break-up (e.g. the Larsen B), while those with river systems that export meltwater to the ocean may remain more stable (e.g. the Nansen). However, early observations suggest that ice shelf instability can arise if a surface river incises to sea level, forming an estuary at the ice shelf terminus. Estuary formation can enable water flow reversals, with fresh water in the river mixing with saline ocean water. Periodic ocean water loading, combined with tidally-induced flexure, may increase ice shelf stresses near the calving front, thus promoting longitudinal fractures and rectilinear calving events. Recently, satellite remote sensing observations led to the discovery of the first ice-shelf estuary at Petermann Glacier in northwest Greenland (Boghosian et al., 2021). Our NASA-funded project aims to investigate this ice-shelf estuary through three key objectives: 1) Quantifying the river incision rates required to create and sustain an estuary; 2) Assessing the relationship between ice shelf estuary formation and the development and orientation of fractures (transverse vs. longitudinal); and 3) Evaluating how ice-shelf estuaries affect ice shelf stability through ocean water loading. To achieve these goals, we integrate satellite remote-sensing data products (optical imagery from WorldView-2/3 and Sentinel-2, altimetry data from ICESat-2, and surface digital elevation model (DEM) data from ArcticDEM and WorldView-2/3 stereo pairs), combined with both viscoelastic ice-shelf flexure and river incision models. 
Results from our study will improve predictions of where and when future ice shelf estuaries may form in Greenland and Antarctica, as well as their implications for ice shelf stability, and therefore for ice sheet mass balance and global sea level rise.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Terrestrial Radar Interferometry Reveals High Spatial and Temporal Variability of Ice Velocity at Major Greenlandic Tidewater Outlet Glacier

Authors: Armin Dachauer, Andrea Kneib-Walter, Andreas Vieli
Affiliations: Department of Geography, University of Zurich
Enhanced ice discharge and frontal ablation at tidewater outlet glaciers are key contributors to the mass loss of the Greenland Ice Sheet. Understanding the dynamics at the ice-ocean interface, particularly ice flow velocity, is crucial for determining both the current and future behavior of tidewater glaciers. We investigate the temporal and spatial ice flow variability at the terminus area of Eqalorutsit Kangilliit Sermiat (also known as Qajuuttap Sermia), a major ocean-terminating outlet glacier in South Greenland. This requires high-resolution data, which we obtain using a terrestrial radar interferometer operating with a temporal resolution of one minute and a spatial resolution of a few meters. The instrument operated continuously during two field campaigns, each lasting two weeks - one in August 2023 and the other in July 2024. This enabled us to examine short-term ice flow variability, complementing the data from satellite imagery and filling gaps it cannot cover. We observed that the glacier’s flow velocity is highly sensitive to additional freshwater inputs from melt or lake discharge events. As a result, the velocity exhibits both diurnal fluctuations and multi-day acceleration/deceleration patterns, which propagate either downstream or upstream.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: TanDEM-X for monitoring the dynamics of the Greenland ice sheet

Authors: Dr Sahra Abdullahi, Dennis Kaiser, Carolin Keller, Dr Birgit Wessel, Martin
Affiliations: German Aerospace Center DLR
Progressive glacier retreat in the face of global warming has far-reaching consequences for humans and the environment. For adaptation strategies and reliable projections, there is a constant need for accurate information on the state and evolution of glaciers and ice sheets worldwide. Interferometric synthetic aperture radar (InSAR) systems enable the reconstruction of glacier and ice sheet topography without limitations due to illumination and cloud cover, which is a major advantage especially in polar regions. Since 2010, the TanDEM-X mission has been acquiring bistatic single-pass X-band InSAR data at a high spatial resolution of 0.4 arcseconds (i.e., about 12 m) at a global scale. In this context, about 2,100 and 3,700 bistatic co-registered single-look complex SAR scenes (CoSSCs), acquired in StripMap mode in horizontal polarization by the TanDEM-X mission, were used to derive time-tagged digital elevation model mosaics of Greenland for winter 2010/2011 and 2016/2017, respectively. Using the Operational Integrated TanDEM-X Processor (ITP) of the German Aerospace Center (DLR), pre-calibrated, geocoded single-scene digital elevation models were generated by combining the backscattered signals of the CoSSCs into interferograms and converting the resulting phase differences into heights based on the interferometer geometry. Since the elevation data suffer from a bias of up to several meters due to signal penetration into the snow and ice surface, each single-scene DEM was corrected for this penetration bias based on its correlation with interferometric coherence and backscatter intensity. Subsequently, the DEMs were vertically adjusted to correct for systematic absolute height offsets inherent to all interferometric DEMs due to residual baseline and orbit inaccuracies, and mosaicked by the Operational DEM Mosaicking and Calibration Processor (MCP). 
A least-squares block adjustment procedure based on ground control points over stable terrain from ICESat (Ice, Cloud and land Elevation Satellite) GLA14 products and tie points for each single-scene DEM was used. For validation purposes, independent laser altimeter measurements from the IceBridge Mission were used as reference heights. Finally, DEM differencing was applied in order to determine the elevation change between the 2010/2011 and 2016/2017 winter seasons, especially in the highly dynamic periphery of the Greenland ice sheet.
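The phase-to-height conversion at the core of this processing can be sketched via the height of ambiguity. The geometry values below are assumptions for illustration, not actual TanDEM-X acquisition parameters.

```python
import numpy as np

# Illustrative phase-to-height conversion for a bistatic single-pass
# interferometer. All geometry numbers are assumed, not mission values.
WAVELENGTH = 0.031           # X-band wavelength, m
SLANT_RANGE = 600e3          # slant range, m (assumed)
INCIDENCE = np.deg2rad(35.0) # incidence angle (assumed)
B_PERP = 150.0               # perpendicular baseline, m (assumed)

def height_of_ambiguity(wavelength, slant_range, incidence, b_perp):
    """Height change corresponding to one full fringe (2*pi of phase) for a
    bistatic single-pass system: h_amb = lambda * r * sin(theta) / B_perp.
    A monostatic repeat-pass system would have half this value."""
    return wavelength * slant_range * np.sin(incidence) / b_perp

def phase_to_height(phase, h_amb):
    """Convert unwrapped interferometric phase to relative height."""
    return h_amb * phase / (2 * np.pi)

h_amb = height_of_ambiguity(WAVELENGTH, SLANT_RANGE, INCIDENCE, B_PERP)
dh = phase_to_height(np.pi, h_amb)   # half a fringe maps to half the ambiguity
```

A larger baseline shrinks the height of ambiguity, improving height sensitivity at the cost of harder phase unwrapping over rough glacier topography.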
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Mapping subsurface water on the Greenland ice sheet from multi-frequency passive microwave remote sensing

Authors: Baptiste Vandecrux, Ghislain Picard, Pierre Zeiger, Marion Leduc-Leballeur, Andreas Ahlstrøm
Affiliations: Geological Survey Of Denmark And Greenland, Institut des Géoscience de l’Environnement (IGE), Université Grenoble Alpes, Institute of Applied Physics “Nello Carrara”, National Research Council
About half of the Greenland ice sheet’s mass imbalance and contribution to sea-level rise stems from increasing surface melting and subsequent runoff. While the visible part of the ice sheet’s hydrological system (e.g., supraglacial lakes and streams) is relatively well-documented, detecting liquid water in the large, yet hydrologically active, percolation area — where meltwater infiltrates below the snow surface — remains challenging. In this project, we use spaceborne passive microwave observations at 1.4, 6.9, 10.7, 18.7, and 36.5 GHz from the Soil Moisture and Ocean Salinity satellite (SMOS) and the Advanced Microwave Scanning Radiometers (AMSR-E & AMSR2) to detect the presence and depth of subsurface liquid water in the Greenland ice sheet’s percolation area. First, we build a catalogue of realistic water-bearing snowpacks paired with their corresponding microwave brightness temperatures to study the sensitivity of the microwave signal to the presence of water in the snow. To increase our sample size beyond the limited in situ measurements of liquid water content (LWC) in Greenland, we use the GEUS multilayer firn model, driven by the Copernicus ARctic ReAnalysis (CARRA) dataset, to generate snowpack characteristics and LWC profiles. We then employ the Snow Microwave Radiative Transfer (SMRT) model to calculate the corresponding microwave emissions. These simulated snowpacks and brightness temperatures are evaluated at several hydrologically active sites, including the supraglacial lake region, the percolation zone, and the perennial firn aquifer area. In the next step, we use this catalogue of collocated snowpack and microwave emission data to train a Random Forest (RF) model. This model predicts, based on a given microwave signature — brightness temperatures at five frequencies and two polarizations — whether liquid water is present and at what depth the uppermost wetted snow layer resides. 
Once trained and validated, the RF model produces, for the first time, daily maps of subsurface water presence and depth on the Greenland ice sheet since 2010. These maps enable detailed studies of snow and firn hydrology, such as the ponding of meltwater beneath the surface during summer and its refreezing in autumn. This work advances our understanding of Greenland’s hydrology and improves predictions of future ice sheet behavior and sea-level contributions in a warming climate.
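The catalogue-to-prediction step can be illustrated as follows. The abstract trains a Random Forest; as a dependency-free stand-in, a 1-nearest-neighbour classifier on the same kind of 10-element brightness-temperature feature vector (five frequencies, two polarizations) shows the idea. All temperatures are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "catalogue": one 10-element brightness-temperature signature per
# snowpack. Wet snow raises emissivity, so the wet class is warmer here.
n = 200
dry = rng.normal(200.0, 5.0, (n, 10))   # K, dry-snow signatures (synthetic)
wet = rng.normal(240.0, 5.0, (n, 10))   # K, water-bearing signatures (synthetic)
X = np.vstack([dry, wet])
y = np.array([0] * n + [1] * n)         # 0 = dry, 1 = wet

def nearest_neighbour_predict(X_train, y_train, x):
    """1-NN stand-in for the Random Forest in the abstract: return the label
    of the closest catalogue entry in brightness-temperature space."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[d.argmin()]

pred_wet = nearest_neighbour_predict(X, y, rng.normal(240.0, 5.0, 10))
pred_dry = nearest_neighbour_predict(X, y, rng.normal(200.0, 5.0, 10))
```

The real workflow also regresses the depth of the uppermost wet layer, i.e. the catalogue carries more than a binary label.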
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Geothermal heat flow models for ISMIP-7 – Recommendations for Greenland

Authors: William Colgan, Mareen Lösing, Helene Seroussi, Tobias Stål, Tong Zhang, Felicity McCormack, Dominik Fahrner, Jörg Ebbing
Affiliations: Department of Glaciology and Climate, Geological Survey of Denmark and Greenland, School of Earth and Oceans, The University of Western Australia, Australian Centre for Excellence in Antarctic Science, Thayer School of Engineering, Dartmouth College, Institute for Marine and Antarctic Studies, University of Tasmania, School of Natural Sciences (Physics), University of Tasmania, State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Securing Antarctica's Environmental Future, School of Earth, Atmosphere and Environment, Monash University, Kiel University
Geothermal heat flow (GHF) affects ice sheet thermal conditions, determining how the ice slides and internally deforms, as well as the rheological behaviour of the lithosphere. However, the distribution of GHF in polar regions remains poorly constrained, with few borehole-derived estimates and large discrepancies between available glaciological and geophysical estimates. These discrepancies relate to methodological differences and data treatment or availability. Therefore, many ice sheet modelling studies use a homogeneous GHF estimate, an ensemble average, or vintage models that provide only a very simplified or misleading picture. Here, we discuss the suite of all available GHF models with respect to the underlying methodology and the data sets used, and provide recommendations on which models to use for different purposes. In general, models can be grouped into three classes: (1) outdated due to improved data availability, (2) affected by methodological shortcomings due to overly simplified parametrization, and (3) preferred models. To further evaluate model applicability, we conduct an online expert elicitation to identify which heat flow models are most suitable for ice sheet modelling, as rated by experts in this field. For the preferred models, we discuss model uncertainty and data dependency in order to provide a means to judge model suitability depending on the application, e.g. in the Ice Sheet Model Intercomparison Project (ISMIP7). In Greenland, the uncertainty of GHF estimates relates especially to the NGRIP point on the inland ice, where models and observations are difficult to reconcile. This is an example of heterogeneities that affect a local heat flow observation, which cannot yet be adequately addressed by regional models. 
Still, recent models show only low to moderately high geothermal heat flow under the Greenland ice-sheet, making a significant imprint on the Iceland hot-spot track unlikely but pointing to the sub-glacial geology as a main factor in controlling local heat flow. Please note that there is an accompanying poster discussing data and models for Antarctica.
Add to Google Calendar

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Glacier velocity retrieval from SAR data based on AI feature tracking

Authors: Magdalena Łucka, Miłosz Sumara, PhD Wojciech Witkowski
Affiliations: AGH University of Krakow
Greenland, as one of the most significant contributors to global sea-level rise, is one of the most crucial places in the world to monitor in order to better understand the environmental response to global warming. Ice dynamics, including ice velocity, are a key component for understanding and modelling Greenland. Remote sensing approaches, such as offset-based algorithms, can use optical or radar satellite data to provide ice velocity information in near real time and over vast areas. Nonetheless, such algorithms have several downsides, such as time-consuming processing and the need to set numerous parameters in a processing chain, which may vary depending on the type of sensor used. Meanwhile, the current state of artificial intelligence (AI) research in satellite data processing for motion monitoring has yet to be fully explored. This study investigates the feasibility of using the LightGlue image matching technique, which is based on a deep neural network, to analyze synthetic aperture radar (SAR) images in order to identify corresponding points in different time frames. The primary goal of this research is to quickly match corresponding areas on SAR images in order to assess the pace of glacier movement. The AI-based technique is tested on two glaciers in Greenland: Daugaard-Jensen and Jakobshavn. These two locations exhibit distinct flow speeds, making it possible to evaluate the LightGlue tool in areas with diverse dynamics. This work, for the first time, looks into the feasibility of applying this feature matching technique for movement detection in polar regions. Sentinel-1 coregistered images served as the input for detecting the movement of these two glaciers. Furthermore, an additional test on a high-resolution X-band ICEYE satellite dataset was conducted to assess the impact of resolution and wavelength on the quality of the velocity map. Additionally, different types of input layers are evaluated to choose the best dataset. 
The tested inputs are single-layer and multi-layer images containing bands such as amplitude in different polarizations or their ratio. The acquired results are compared to the traditional offset-tracking technique to determine the efficiency of the new approach. Compared to traditional techniques, the machine-learning methodology may produce results more quickly and without the need for further parameter tuning. Furthermore, such an image matching technique can derive velocities from a variety of input images, such as radar, optical, or UAV images, so it may serve as an alternative solution for rapid movement tracking. Nonetheless, the density and accuracy of the produced values require further investigation to assess the utility of such a method.
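Whichever matcher produces the correspondences (LightGlue or classical offset tracking), converting them into velocities reduces to scaling pixel offsets by pixel size and time separation. A minimal sketch with assumed numbers:

```python
import numpy as np

def offsets_to_velocity(pts_t1, pts_t2, pixel_size_m, dt_days):
    """Convert matched image coordinates from two acquisitions into ground
    velocities (m/day). pts_t1, pts_t2: (n, 2) pixel coordinates of the same
    features at the two dates."""
    disp_px = pts_t2 - pts_t1              # displacement in pixels
    disp_m = disp_px * pixel_size_m        # displacement in metres on the ground
    return np.linalg.norm(disp_m, axis=1) / dt_days

# Demo with assumed values: 10 m pixels and a 12-day Sentinel-1 repeat cycle.
p1 = np.array([[100.0, 200.0], [150.0, 250.0]])
p2 = np.array([[103.0, 204.0], [150.0, 262.0]])
v = offsets_to_velocity(p1, p2, pixel_size_m=10.0, dt_days=12.0)
```

In practice the sparse per-match velocities are then filtered for outliers and interpolated onto a regular grid to form the velocity map.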

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Annual and Interannual Cycles Observed Between Greenland’s GRACE Derived Mass Variations and Climatic Indices

Authors: Florent Cambier, José Darrozes, Muriel Llubes, Lucia Seoane, Guillaume Ramillien
Affiliations: Université Toulouse Iii - Paul Sabatier, Observatoire Midi Pyrénées, Géosciences Environnement Toulouse (get)
The Greenland Ice Sheet (GIS) is threatened by current global warming. The GIS has been losing mass overall since the 1990s, but it shows modulations at annual and interannual scales. These modulations reflect the complex interactions between the GIS, the atmosphere and the ocean. This study investigates the temporal variability between early 2002 and the end of 2023 using the Gravity Recovery and Climate Experiment (GRACE) and its Follow-On mission. We focus on the dominant cycles and their connections to climatic indices and related parameters such as temperature, precipitation and albedo. Using Empirical Orthogonal Functions (EOF) applied to mass variation data retrieved from the COST-G solution, we identified five principal modes of variability, which together explain 67.5% of the total variance. The leading mode captures both the annual and interannual cycles, while the others reflect longer periodicities, including known cycle lengths of 4 to 11 years. We use the following indices and parameters: the North Atlantic Oscillation (NAO), the Greenland Blocking Index (GBI), the Atlantic Multidecadal Oscillation (AMO), the duration and mean degree of positive temperatures, the amount of precipitation, and the albedo of the Greenland surface. Each is accumulated over time so that it can be compared to the cumulative mass changes since 2002. Using wavelet analysis, a cross-correlation is performed between the indices, the parameters and the EOF modes of variability. The correlations reveal significant links to GIS mass variations with different time lags, outlining a complete annual cycle as well as several interannual relationships. For example, a positive NAO increases the amount of precipitation, and the AMO index shows a surprising 3.5-year delayed response to ice mass change over the studied period.
The wavelet analysis shows that the 11-year cycles in NAO, GBI and temperature are correlated with solar activity, and we also observe cycles of 4 to 7 years. Other authors have found that similar cycle lengths could originate from complex atmospheric processes or from the response to Earth’s internal dynamics. The GIS emerges as a dynamic system influenced by processes operating across multiple timescales. This study points out the critical need to integrate diverse climatic drivers to comprehensively understand the mechanisms behind the observed mass variations. Such insights are vital for simulating future ice mass changes under global warming scenarios, assessing their implications for global sea-level rise, and formulating effective mitigation strategies.
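EOF analysis of the kind described above is equivalent to a singular value decomposition of the space-time anomaly matrix: the singular vectors give the spatial modes and principal-component time series, and the squared singular values give each mode's share of the variance. A sketch on a synthetic mass-variation field (all dimensions, signals, and amplitudes are illustrative, not the COST-G data):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(240) / 12.0                      # 20 years of monthly samples
annual = np.sin(2 * np.pi * t)                 # annual cycle
trend = -0.5 * t                               # long-term mass loss
spatial1 = rng.random(100)                     # synthetic spatial patterns (100 cells)
spatial2 = rng.random(100)
field = np.outer(annual + trend, spatial1) + 0.1 * np.outer(np.sin(2 * np.pi * t / 6), spatial2)

anom = field - field.mean(axis=0)              # remove the time mean per grid cell
u, s, vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s**2 / (s**2).sum()                 # variance explained per EOF mode
pc1 = u[:, 0] * s[0]                           # principal-component series of mode 1
print(f"mode 1 explains {var_frac[0]:.1%} of the variance")
```

In this toy setup the leading mode captures the combined annual cycle and trend, mirroring how the abstract's first mode carries both annual and interannual signals.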

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Towards catchment scale estimates of runoff from radar altimetry over Greenland

Authors: Jennifer Maddalena, Mal McMillan, Robert Wassink, Louise Sandberg Sorenson, Tom Slater, Amber Leeson
Affiliations: Lancaster University, Northumbria University, Technical University of Denmark
Surface processes have been the principal driver of Greenland Ice Sheet mass imbalance since the early 2000s. Earth Observation (EO) techniques have traditionally focused on measuring the extent and intensity of liquid water present, as well as mapping supraglacial hydrology, while regional climate models have been relied upon to estimate most other surface mass balance (SMB) parameters. However, Slater et al. (2021) pioneered the use of satellite altimetry to directly monitor runoff in the ablation zone of the Greenland Ice Sheet. In this study, we investigate the feasibility of deriving meltwater runoff estimates from CryoSat-2 satellite radar altimetry at finer spatial scales, with a focus on individual drainage basins. At the catchment scale, runoff estimates that provide localised observational constraints on freshwater inputs are important for understanding coastal marine ecosystem productivity in fjords and sediment delivery to the oceans, and have wider societal implications, for example for the potential for hydropower. Our current analysis focuses on the Watson and the North East Greenland Ice Stream basins, in alignment with ESA’s 4DGreenland and EO4SMB projects, and spans the period from 2011 to 2023. Leveraging the latest CryoSat-2 altimetry products, ESA's CryoSat ThEMatic PrOducts (CryoTEMPO), we analyse ice sheet elevation changes to separate long-term trends from seasonal variations. However, estimating catchment-scale runoff presents challenges due to the relatively small spatial scale over which the altimetry signal must be aggregated. To address this, we apply advanced statistical techniques, namely Singular Spectrum Analysis, to effectively differentiate signal from noise within the seasonal cycle. Through this approach, our analysis enables us to approximate runoff at the basin scale.
We find good agreement in the derived seasonal ablation rates in comparison to in situ data, and once converted to meltwater runoff volumes we find they are, on average, within 0.45 km³ of equivalent regional climate model simulations (an average percentage difference of 9.7%). By harnessing CryoSat-2 data, our study provides valuable new insights into seasonal variability across Greenland, and demonstrates the feasibility of local-scale, EO-based retrievals of surface ablation and runoff. As such, this work serves to establish the foundations for future operational services over Greenland, which leverage forthcoming missions such as CRISTAL, to deliver routine estimates of surface mass balance fluxes from space.
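Singular Spectrum Analysis, the technique used above to separate seasonal signal from noise, embeds the 1-D elevation series in a lagged (Hankel) trajectory matrix, truncates its SVD, and diagonal-averages back to a series. A minimal sketch, assuming a pure sinusoidal seasonal cycle plus noise (window length, component count, and data are all illustrative):

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    """Reconstruct a series from its leading SSA components: embed into a
    Hankel trajectory matrix, take a truncated SVD, then diagonal-average."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # window x k
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    rec = np.zeros(n)       # anti-diagonal averaging back to a 1-D series
    counts = np.zeros(n)
    for j in range(k):
        rec[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return rec / counts

rng = np.random.default_rng(2)
t = np.arange(12 * 12)                          # 12 years of monthly samples
seasonal = np.sin(2 * np.pi * t / 12)
noisy = seasonal + 0.5 * rng.standard_normal(t.size)
clean = ssa_reconstruct(noisy, window=36, n_components=2)
```

A noise-free sinusoid has a rank-2 trajectory matrix, so keeping two components recovers the seasonal cycle while discarding most of the noise.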

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Assessing and projecting ice sheet catchment hydrology for Greenland’s rivers – a digital twin component approach

Authors: David Parkes, Amber Leeson, Dr. Sebastian Bjerregaard Simonsen, Mal McMillan
Affiliations: Lancaster University, Technical University of Denmark
The hydrology of the Greenland ice sheet is evolving in complex ways in response to increasing regional temperatures. This includes changes in supraglacial, subglacial, and basal hydrology, all of which contribute to changes in the total runoff at the ice sheet margins. For land-terminating outlet glaciers, this drives changes in the outflow of rivers fed by the melting ice, both in terms of the total annual discharge and the pattern of discharge in time, including the timing and magnitude of extreme discharge. This is important as a mechanism for enhancing mass losses from the ice sheet, as well as for enhancing sediment transport and increasing the input of fresh water into local marine ecosystems. These changes are of direct interest to communities in Greenland, including for the hydropower potential they represent. In the DTC (Digital Twin Component) Ice Sheets project, modular components for ice sheet digital twins are being developed from existing models of physical processes and statistical/machine learning techniques. These include modules for ice sheet mass balance, damage forecasting, super-resolution modelling, and sea level rise emulation, amongst others. Modules can be selected and connected as required by a particular use case; the project has four use cases in its initial design, of which this abstract covers one. For this use case we utilise a module assimilating multiple hydrological datasets with different spatial scales using a Gaussian Random Field process, and a module that models spatio-temporal data using a Functional Time Series approach. The Functional Time Series module also uses an additional extremes model layer on top to give a distribution of extreme values within the modelled time series. Training these modules on Earth observation data for catchments on the Greenland ice sheet allows us to independently characterise the annual pattern of changes in hydrology and the interannual variability.
In a forward-looking mode, the trained model uses projections for regional climate to determine expected changes – and associated uncertainties – in these patterns over the coming decades. At LPS 2025 we will show the results of our digital twin component for a test catchment of the Watson River basin in south-west Greenland. Motivated by the use case of hydropower potential, we show estimates for changes in expected annual discharge over the coming decades and distribution of this discharge through the year (both important for expected hydropower output), and the likely ‘Once-per-X-years’ extreme highs in discharge (important for engineering solutions that are robust to extreme scenarios).
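The "Once-per-X-years" notion above maps to return levels: with an empirical plotting-position estimate, the T-year level is roughly the discharge exceeded with probability 1/T in any given year. A minimal sketch on synthetic annual discharge maxima (a stand-in for the project's extremes model layer, which fits a parametric distribution rather than an empirical quantile):

```python
import numpy as np

def empirical_return_level(annual_maxima, return_period):
    """Approximate the T-year return level as the (1 - 1/T) empirical
    quantile of a series of annual maxima."""
    return float(np.quantile(annual_maxima, 1 - 1 / return_period))

rng = np.random.default_rng(4)
# Synthetic annual peak discharge (Gumbel is a common extreme-value shape)
maxima = rng.gumbel(loc=100.0, scale=15.0, size=200)
lvl10 = empirical_return_level(maxima, 10)   # "once per 10 years"
lvl50 = empirical_return_level(maxima, 50)   # "once per 50 years"
print(lvl10, lvl50)
```

Longer return periods necessarily give higher levels; robust engineering estimates for rare events require the parametric extreme-value fit the abstract alludes to, since empirical quantiles degrade beyond the record length.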

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Investigation of the recent dynamics of the active subglacial lake under the Flade Isblink ice cap

Authors: Mikkel Aaby Kruse, Professor Nanna Bjørnholt Karlsson, Anders Kusk, Magdalena Łucka, Professor Louise Sandberg Sørensen
Affiliations: DTU Space, Technical University of Denmark, Department of Glaciology and Climate, Geological Survey of Denmark and Greenland, Faculty of Geo-Data Science, Geodesy, and Environmental Engineering, AGH University of Krakow
Subglacial lakes represent an important yet poorly understood component of glacial hydrology. While most of the 64 known subglacial lakes in Greenland appear hydrologically stable (Livingstone et al., 2022), a subset of them exhibits dynamic behaviour, with periodic drainage and refilling (Palmer et al., 2015; Bowling et al., 2019; Sandberg Sørensen et al., 2024). These active subglacial lakes play a role in transporting substantial volumes of freshwater to the ocean in episodic pulses, potentially impacting fjord ecosystems and altering downstream ice flow dynamics. Furthermore, they may serve as temporal buffers between surface melt and basal runoff, challenging conventional assumptions embedded in surface mass balance models for ice caps and ice sheets. To improve our understanding of the hydrological processes that govern these active subglacial lakes, we focus on the active subglacial lake beneath the Flade Isblink ice cap in North Greenland. Using a range of remote sensing methodologies, including DInSAR, optical satellite imagery, and ICESat-2 laser altimetry, we quantify the timing and size of the drainage and refilling events. Our findings indicate that the Flade Isblink subglacial lake has drained multiple times in recent years, with indications of water pathways beneath the ice cap. The interplay between surface meltwater inputs and basal hydrology, possibly partly driven by basal melt, underscores the complexity of these systems. By mapping the variability of subglacial lake activity at unprecedented spatiotemporal resolution, our study can refine basal hydrology models and enhance ice sheet mass balance predictions under future climate scenarios. Our research highlights the importance of integrating subglacial lake dynamics into surface mass balance models to accurately account for delays in meltwater runoff.
An improved understanding of the connections between surface and basal hydrology will inform projections of freshwater fluxes to the ocean and their implications for global sea level rise and Arctic ecosystems.

References:
Livingstone, S.J., Li, Y., Rutishauser, A. et al. Subglacial lakes and their changing role in a warming climate. Nat Rev Earth Environ 3, 106–124 (2022). https://doi.org/10.1038/s43017-021-00246-9
Palmer, S., McMillan, M. & Morlighem, M. Subglacial lake drainage detected beneath the Greenland ice sheet. Nat Commun 6, 8408 (2015). https://doi.org/10.1038/ncomms9408
Bowling, J.S., Livingstone, S.J., Sole, A.J. et al. Distribution and dynamics of Greenland subglacial lakes. Nat Commun 10, 2810 (2019). https://doi.org/10.1038/s41467-019-10821-w
Sandberg Sørensen, L., Bahbah, R., Simonsen, S. B., Havelund Andersen, N., Bowling, J., Gourmelen, N., Horton, A., Karlsson, N. B., Leeson, A., Maddalena, J., McMillan, M., Solgaard, A., and Wessel, B.: Improved monitoring of subglacial lake activity in Greenland, The Cryosphere, 18, 505–523, https://doi.org/10.5194/tc-18-505-2024, 2024.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Monitoring the Greenland Ice Sheet through the PROMICE and GC-Net programmes

Authors: Penelope How, Andreas P. Ahlstrøm, Robert S. Fausto, Jason E. Box, Nanna B. Karlsson, Baptiste Vandecrux, Anja Rutishauser, William Colgan, Mads C. Lund, Christoper L. Shields, Michele Citterio, Kenneth D. Mankoff, Alexandra Messerli, Mai Winstrup, Anne M. Solgaard, Signe H. Larsen, Kristian K. Kjeldsen, Signe B. Andersen
Affiliations: Geological Survey Of Denmark And Greenland, NASA Goddard Institute for Space Studies (GISS), Asiaq Greenland Survey, DTU Space
The rapid demise of ice sheets and glaciers worldwide has increased the need for mass balance observations at temporal and spatial resolutions at which they can both help us understand the physical processes and serve as validation or calibration for remote sensing data products and regional climate model output. Here, we present the latest developments in the PROMICE and GC-Net monitoring programmes, which measure crucial mass balance components of the Greenland Ice Sheet. Our remote sensing efforts include measurements of ice velocity, ice discharge, elevation change, and total mass balance. In situ observations come from a network of automatic weather stations and include snow water equivalent, snow height in the vicinity of each station, sufficiently accurate transmitted station position and elevation, snow compaction, and non-stake ice sheet ablation. Immediate access to the observations is key for certain applications, such as numerical weather forecasting. Hence, we also present the complications of producing operational datasets, such as near-real-time data transmission and quality checking from our automatic weather station network.

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Glacier and ice sheet motion retrieval by Sentinel-1 using deep learning over Greenland

Authors: Mr. Andrea Pulella, Dr. Pau Prats-Iraola, Dr. Francescopaolo Sica
Affiliations: German Aerospace Center (DLR), Microwaves and Radar Institute, Institute of Space Technology and Space Applications, University of the Bundeswehr Munich
The Sentinel-1 (S-1) satellite constellation provides invaluable data for continuously monitoring the motion and discharge of Greenland’s ice sheets and outlet glaciers. This information is essential for refining sea-level rise predictions and understanding the broader implications of climate change on global ecosystems. However, despite the availability of freely accessible ice velocity maps derived from S-1 data [R1] [R2], several challenges inherent to processing TOPS InSAR data impose significant limitations on existing products. These challenges include the low sensitivity of spaceborne SAR missions to North-South displacements, the degradation of interferometric phase quality due to ionospheric contributions, and the difficulty in estimating interferometric parameters under strong noise conditions. Current ice velocity products derived from Sentinel-1 data typically suffer from low spatial and temporal resolution. Spatially, maps are often provided on grids of at least 250 meters, which makes it difficult to detect localized phenomena. Temporally, although Sentinel-1 acquisitions are available every 12 days (or even every 6 days when both satellites are active) products are usually generated on a monthly basis due to the need to average multiple interferometric pairs to reduce noise. Additionally, ionospheric effects are frequently overlooked, with their mitigation relying on averaging over large temporal baselines, which can obscure short-term dynamics. To overcome these limitations, we propose a novel methodology for ice-sheet monitoring that exploits the unique features of Sentinel-1’s TOPS acquisition mode. For inland regions, we utilize a multitask learning framework capable of monitoring glacier flows with centimeter precision. This approach also separates phase contributions attributed to zero-Doppler (ZD) and along-track (AT) displacements, enabling the generation of 3D displacement maps from a single interferogram [R3]. 
For coastal regions, where the magnitude of the deformation often exceeds several SAR wavelengths and conventional InSAR techniques fail, we apply an offset-tracking technique. This method exploits the cross-correlation of SAR backscatter between S-1 image pairs to measure large displacements [R4]. Furthermore, we address high-latitude ionospheric effects by using the split-spectrum technique [R5] to mitigate their impact on both the interferometric phase and the reflectivity. By enhancing both the spatial and temporal resolution of displacement measurements, our approach enables a more detailed and accurate analysis of Greenland’s ice sheet dynamics, advancing the state of the art in remote sensing of polar regions.

References:
[R1] A. Solgaard et al., “Greenland ice velocity maps from the PROMICE project,” Earth Syst. Sci. Data, vol. 13, no. 7, pp. 3491–3512, Jul. 2021.
[R2] T. Nagler, J. Wuite, L. Libert, M. Hetzenecker, L. Keuris and H. Rott, “Continuous Monitoring of Ice Motion and Discharge of Antarctic and Greenland Ice Sheets and Outlet Glaciers by Sentinel-1 A & B,” 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 2021, pp. 1061–1064.
[R3] A. Pulella, P. Prats-Iraola and F. Sica, “Multitask Learning for Phase Source Separation in InSAR Burst Modes,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–21, 2024.
[R4] T. Strozzi, A. Luckman, T. Murray, U. Wegmuller and C. L. Werner, “Glacier motion estimation using SAR offset-tracking procedures,” IEEE Transactions on Geoscience and Remote Sensing, vol. 40, no. 11, pp. 2384–2391, Nov. 2002.
[R5] G. Gomba, A. Parizzi, F. De Zan, M. Eineder and R. Bamler, “Toward Operational Compensation of Ionospheric Effects in SAR Interferograms: The Split-Spectrum Method,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 3, pp. 1446–1461, March 2016.
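The split-spectrum technique [R5] mentioned above separates dispersive (ionospheric) from non-dispersive phase by forming interferograms in two range sub-bands and solving phi(f) = a*f + b/f for the two unknowns; the ionospheric phase at the centre frequency is then b/f0. A hedged numerical sketch with synthetic values (the frequencies and coefficients are illustrative only, not a Sentinel-1 configuration):

```python
import numpy as np

def split_spectrum_iono(phi_low, phi_high, f_low, f_high, f0):
    """Recover the dispersive (ionospheric) phase at centre frequency f0
    from two sub-band phases, assuming phi(f) = a*f + b/f."""
    b = f_low * f_high * (f_high * phi_low - f_low * phi_high) / (f_high**2 - f_low**2)
    return b / f0

f0, f_low, f_high = 5.405e9, 5.37e9, 5.44e9     # C-band centre and sub-bands (illustrative)
a, b = 2.0e-9, 1.5e10                           # synthetic non-dispersive / dispersive terms
phi_low = a * f_low + b / f_low                 # simulated sub-band interferometric phases
phi_high = a * f_high + b / f_high
iono = split_spectrum_iono(phi_low, phi_high, f_low, f_high, f0)
print(iono, b / f0)                             # recovered vs. true dispersive phase at f0
```

In practice the narrow sub-band separation amplifies phase noise in this inversion, which is why operational implementations apply strong spatial filtering before combining the sub-band phases.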

Thursday 26 June 17:45 - 19:00 (X5 – Poster Area – Zone E-F)

Poster: Observing Ice Sheet Melt Dynamics by Means of Active Microwave Sensors

Authors: Dr. Thomas Nagler, Dr Jan Wuite, Dr. Helmut Rott, Anna Puggaard
Affiliations: ENVEO IT GmbH, Technical University of Denmark
The spatially and temporally varying extent and duration of surface melt and freezing on ice sheets and ice shelves are essential parameters for ice sheet mass balance and serve as key indicators of climate warming in polar regions. The melt and refreeze conditions of the ice sheet surface are an important component of ice sheet hydrological models and support the estimation of meltwater volume, surface runoff and surface-atmosphere energy exchanges. The presence of melt water and its production rate also affect the subglacial water pressure, an important parameter for ice flow dynamics. Increased melt dynamics have been observed as a precursor of ice shelf disintegration on the Antarctic Peninsula and as a trigger for the speed-up of outlet glaciers in Greenland. Within the ESA Polar Cluster projects 4DGreenland and 4DAntarctica we developed an automatic algorithm for mapping ice sheet and ice shelf snow melt phases using dense time series of METOP ASCAT and Sentinel-1 SAR. The retrieval method is based on the sensitivity of active microwave C-band measurements to the presence of liquid water in surface snow layers, which results in a significant decrease in backscatter due to dielectric absorption. Daily observations of ASCAT data at 4.5 km resolution were used to generate daily maps of snow melt phases since 2007, identifying dry, melting and refreezing snow states. The time series of ASCAT-based snow melt phase maps for Greenland and Antarctica were compared with the output of regional-scale surface energy balance models. We found significant differences between the computed melt of individual models and the ASCAT snow melt phase products. On outlet glaciers of the Greenland and Antarctic ice sheets, the resolution of ASCAT is not sufficient to resolve the spatial details of the melt and refreeze pattern.
For these regions we use high-resolution Sentinel-1 time series at 6- and 12-day intervals in synergy with the ASCAT melt phase products to follow the evolution of the melt and refreeze cycles. The ice sheet wide melt maps are used to analyze interannual variations in the start, duration and extent of surface melt in different parts of Antarctica and to assess regional differences. To investigate the increasing thickness of refrozen snow on top of wet snow at the end of the melt season, we integrate backscatter modelling with signature analysis of Sentinel-1 backscatter time series. In preparation for ROSE-L and NISAR we also tested the synergy of C- and L-band SAR data for extending the sensing depth in the refreezing surface layer. We will show time series of ASCAT melt products for Greenland and Antarctica and compare them to climate model output. Additionally, examples of the refreezing depth product, derived from Sentinel-1 and SAOCOM data, will be presented.
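The retrieval principle described above, a marked backscatter drop when liquid water appears in the snowpack, can be sketched as a simple threshold test against a dry-snow winter reference. The 3 dB threshold and all values below are illustrative; the operational algorithm is considerably more elaborate:

```python
import numpy as np

def classify_melt(sigma0_db, winter_mean_db, threshold_db=3.0):
    """Flag melt where backscatter drops more than `threshold_db` below the
    dry-snow winter reference (liquid water strongly absorbs C-band)."""
    return sigma0_db < winter_mean_db - threshold_db

winter_mean = -8.0                                    # dB, dry-snow reference (illustrative)
series = np.array([-8.2, -7.9, -12.5, -13.1, -9.0])   # daily sigma0 time series in dB
print(classify_melt(series, winter_mean))             # flags the two low-backscatter days
```

Distinguishing the melting and refreezing states described in the abstract additionally requires tracking the temporal trajectory of backscatter, not just a single threshold.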

Thursday 26 June 18:00 - 18:20 (EO Arena)

Demo: D.01.14 DEMO - DestinEStreamer: Revolutionizing Big Data in Earth Observation and Climate Science

Join us for an insightful demonstration of the DestinEStreamer service, an innovation designed to streamline the storage, distribution, and access of large-scale data in Earth Observation (EO) and Climate Science. This session will highlight the ease with which users can navigate the service, making complex data access tasks simpler and more efficient.
Attendees will explore the DestinEStreamer web application, which features an intuitive interface for examining dataset variables such as ERA5 and the Climate Digital Twins. The platform allows users to perform fast temporal scans, interact with maps, and access additional analytical tools via direct links to Jupyter Hub and Insula services. A dedicated Python module, including example scripts, will also be demonstrated to showcase data conversion tasks, such as extracting specific time series, georeferencing, and working with data structures from libraries such as xarray.
For those interested in automating their workflows, the session will cover the available API for each dataset variable. The API provides detailed metadata and quality metrics, ensuring transparency and ease of integration into automated processes.
This demonstration is ideal for researchers, developers, and analysts working with big data in EO and Climate Science. The DestinEStreamer service offers a comprehensive and flexible solution that accelerates data management and enhances the efficiency of your projects.
Join us to discover how DestinEStreamer can revolutionize your approach to big data in the EO and Climate Science fields, with hands-on guidance throughout the session.

Speakers:


  • Dr. Wolfgang Kapferer - Head of IT-Services, Head of Department Space & Security, GeoVille Information Systems and Data Processing GmbH

Friday 27 June

880 events

Friday 27 June 08:30 - 10:00 (Room 1.34)

Session: A.02.07 Monitoring grasslands and rangelands from space - PART 1

Grasslands and rangelands account for a large share of globally managed agricultural landscapes and provide essential ecosystem services such as carbon sequestration and water retention. At the same time, they represent a critical source of biomass for animal feed and support livelihoods, for instance of pastoralists and other livestock producers. Grasslands and rangelands provide habitats for a wide range of floristic and faunistic species and therefore play a crucial role in the context of climate change and biodiversity. A key challenge is to find a balance between maintaining or enhancing ecosystem services and functions, which are influenced by the intensity of their utilization and management, while at the same time meeting the demand of a growing population for meat and dairy products, as well as pressures from other land uses. This puts grassland and rangeland ecosystems on the agenda of land managers and nature conservationists, and prominently in international policies. Accurate Earth Observation (EO) based mapping and monitoring approaches are therefore crucial to deepen our understanding and to support evidence-based policy development and evaluation. While remote sensing-based approaches have already proven valuable for mapping and monitoring the intensity of grassland and rangeland use over large areas, several challenges, such as the degradation and recovery of these ecosystems, remain unresolved and require new solutions.

We welcome contributions that either focus on the testing and implementation of novel EO data and methods for the assessment of rangelands and grassland management and condition, or that provide insight into the valorization of established technologies for innovative EO-based services in support of climate action and sustainable management. These could include but are not restricted to:

- multisource imaging / data fusion / time series
- estimation of grassland yields / biomass / quality and other relevant biophysical variables
- degradation and recovery of grasslands and rangelands
- differentiation of pastures / meadows
- rangeland/grassland use intensity, resilience and carrying capacity
- grassland use in the context of biodiversity / climate change
- monitoring and evaluation of agricultural and environmental policies


Friday 27 June 08:30 - 10:00 (Room 1.34)

Presentation: Mapping EU-Wide Grassland Management Intensity Combining LUCAS and EMBAL In-Situ Data and Sentinel Observations

Authors: Julien Morel, Dr. Mattia Rossi, Martin Claverie, Momtchil Iordanov, Filip Sabo, Fernando Sedano, Raphaël d'Andrimont, Marijn Van der Velde
Affiliations: European Commission, Joint Research Centre, European Commission, DG Agriculture & Rural Development (DG AGRI)
Grasslands are among the most important ecosystems in Europe, as permanent grasslands represent approximately 30% of the utilized agricultural area. They provide various services, including habitats for biodiversity, greenhouse gas sinks, and sources of livestock forage and feed, placing them at the heart of the European Green Deal. Since the 1970s, a shift of management towards higher productivity, to the detriment of ecosystem services, has been reported for European grasslands. A monitoring system that can track these changes across the whole of Europe does not currently exist. Earth observation satellites are ideally suited to underpin large-scale monitoring systems and to effectively track changes in grassland management intensity. Specifically, the Copernicus Sentinel-1 and -2 constellations provide imagery at high spatial and temporal resolutions. In practice, however, their implementation as a large-scale monitoring tool for grassland management is hampered by the lack of large, geographically comprehensive in-situ datasets on which robust and accurate models could be built. The LUCAS Grassland (LG) module is part of the LUCAS survey. It collects information on vegetation state, vigor and height, the percentage of different land covers, the graminoids-to-forbs ratio, the presence of key flower indicator species, their density and colors, and grassland richness and structure. It consists of approximately 15,000 georeferenced sampling locations spread across the EU, collected in 2018, 2022 and 2023. Among the reported information is the grassland type, which discriminates extensive to intensive management in meadows (which are typically mowed) and pastures (which are typically used for grazing). Another relevant source of data is the EMBAL dataset, for which in-situ information (including the grassland type) was collected on 3,000 plots of 500 x 500 m visited in 2022 and 2023.
We combine these EU-wide in-situ monitoring schemes with Sentinel-1 and -2 time series to assess grassland use intensity. Sentinel-1 and -2 time series from 2017 to 2023 were extracted for 8,148 quality-controlled LG sampling locations. The filtered data cover a diversity of management intensities: medium intensive meadow was the most widely represented class, with approximately 2,000 occurrences, followed by extensive meadow (around 1,200 occurrences), and extensive pasture, intensive meadow and medium intensive pasture (approximately 1,000 occurrences each). Around 800 occurrences were reported as “Meadow or pasture”, while the remaining transects were distributed across grassland classes with fewer than 250 occurrences. The radiometric data are used as explanatory variables for grassland use intensity classification using models of varying complexity (naive Bayes, random forest, temporal convolutional network and transformer-based models). These models are benchmarked and compared in terms of accuracy and robustness, using the EMBAL dataset as an independent evaluation source. Finally, the results are cross-referenced with information on grassland management from the forthcoming Copernicus High Resolution Layer on Grasslands. Our analysis provides insights and recommendations on the methodological building blocks needed to expand the Copernicus EU-wide grassland monitoring capacity. These results will help bridge the gap between scientific monitoring approaches and policy implementation, contributing to evidence-based decision-making for sustainable grassland management across Europe.
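Of the benchmarked model families, Gaussian naive Bayes is simple enough to sketch in full: per class it fits per-feature means and variances, then classifies by maximum posterior. A minimal NumPy version on synthetic two-class "spectral feature" data (the features and class setup are illustrative, not the LUCAS/EMBAL pipeline):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes classifier."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # log p(c|x) up to a constant: log prior + sum of Gaussian log-likelihoods
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

rng = np.random.default_rng(3)
# Synthetic index statistics for two management intensities (illustrative)
X = np.vstack([rng.normal(0.4, 0.05, (50, 3)), rng.normal(0.7, 0.05, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
model = GaussianNB().fit(X, y)
acc = (model.predict(X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Such a baseline frames the benchmark: the temporal convolutional and transformer models must justify their complexity by beating it on the independent EMBAL evaluation set.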

Friday 27 June 08:30 - 10:00 (Room 1.34)

Presentation: Revealing the intensity of management on mountain hay meadows across selected regions in Armenia, Austria, Germany, and Sweden

Authors: Konrad Turlej, Paul Jacques Connetable, Michael Munk, Alexander Prishchepov
Affiliations: University Of Copenhagen, DHI
Mountain hay meadows (MHM) are biodiversity hotspots in European mountain ranges, but they are increasingly threatened by biodiversity loss due to either abandonment or overuse through intensive mowing and grazing. Satellite remote sensing offers a powerful tool to monitor MHM management intensity and provide critical information to decision-makers and practitioners supporting their conservation activities. In particular, modern satellites that collect high-spatial-resolution data more frequently than ever are important for monitoring the intensity and spatial distribution of MHM management. In this study, we assessed the current state of MHM management and vegetation trends over recent years across 15 study sites in Armenia, Austria, Germany, and Sweden. Grassland areas were delineated using high-resolution PlanetScope imagery (3-meter resolution), segmenting the landscape into grass-dominated areas and complex grassland mosaics with trees, shrubs, and non-vegetation landforms such as roads and streams. For these delineated grassland parcels, we combined time series of optical data from the Sentinel-2 and Landsat 8 and 9 satellites to analyze and quantify hay harvesting intensity for the target year of 2023. Individual mowing and intensive grazing events, as well as their intensity, are detected by comparing a field’s time series to an idealised undisturbed time series created with a convex-envelope method. Additionally, Sentinel-2 NDVI time series (2017–2024) were used to detect long-term trends in annual vegetation growth. The long-term changes observed over these years are then compared to the management indicators detected for 2023 to assess the impact of different management practices on MHM. Our analysis revealed variability in management intensity across the study sites. Armenian meadows exhibited less frequent management compared to Austria and Germany, while Swedish meadows were managed less intensively overall.
Despite these differences, vegetation trends across all sites remained relatively stable in recent years, indicating no rapid changes in management intensity. Moreover, we show the benefit of combining high-resolution PlanetScope imagery with robust Sentinel-2 time series when mapping management activities in complex environments such as mountain hay meadows. These findings provide valuable insights for a broader study aimed at identifying best management practices for preserving the unique biodiversity of mountain hay meadows. This work underscores the potential of satellite remote sensing as a key tool for monitoring and conserving these critical ecosystems.
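The envelope-comparison step described in the abstract can be sketched compactly. The snippet below is an illustrative assumption, not the authors' implementation: the function name and the drop threshold are hypothetical. It builds an upper convex envelope over an NDVI series (a stand-in for the "ideal undisturbed" curve) and flags dates where the observed values fall well below it.

```python
import numpy as np

def detect_mowing_events(ndvi, dates, drop_threshold=0.15):
    """Flag candidate mowing/grazing events as large drops of an NDVI
    time series below its upper convex envelope (illustrative sketch;
    the threshold value is an assumption, not from the study)."""
    t = np.asarray(dates, dtype=float)
    v = np.asarray(ndvi, dtype=float)
    # Build the upper convex hull of (date, NDVI) points: the
    # "ideal undisturbed" envelope the observations are compared to.
    hull = [0]
    for i in range(1, len(t)):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            # pop the last hull point if it lies on or below the chord i1 -> i
            if (v[i2] - v[i1]) * (t[i] - t[i1]) <= (v[i] - v[i1]) * (t[i2] - t[i1]):
                hull.pop()
            else:
                break
        hull.append(i)
    envelope = np.interp(t, t[hull], v[hull])
    deficit = envelope - v  # how far each observation sits below the envelope
    return [dates[i] for i in np.flatnonzero(deficit > drop_threshold)]
```

Note that, by construction, observations on the hull itself (including the first and last) have zero deficit, so events at the very edges of the series cannot be flagged by this sketch.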
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.34)

Presentation: Estimating Grassland Biomass With Sentinel-2 and LUCAS Grassland In-situ Data Across the European Union

Authors: Daniel Gruschwitz, Dr. Mattia Rossi, Julien Morel, Dr. Raphaël d'Andrimont, Dr. Marijn van der Velde, Prof. Dr. Tobias Ullmann
Affiliations: University of Würzburg, Department of Remote Sensing, Institute of Geography and Geology, European Commission, Joint Research Center (JRC), European Commission, DG Agriculture & Rural Development (DG AGRI)
In the future, the grass biomass available on Europe's pastures and meadows is expected to vary more in time and space due to an increase in climate-change-related extreme weather events. Remote sensing provides extensive grassland monitoring capabilities, but the lack of adequate in-situ data for training and validation constrains the evaluation of regression models at a continental scale. The dedicated grassland module of the 2022/23 rollout of the EU Land Use/Cover Area frame Survey (LUCAS) constitutes a unique, large-scale and geographically comprehensive data source. Within the harmonized data-sampling concept across national borders in the EU, grassland-specific attributes are collected for a 20 m by 2.5 m transect. Among these attributes are two variables that can be used as proxies for biomass: average grass height and vigour of vegetation. This study investigates their suitability for biomass monitoring using remote sensing by linking them to Sentinel-2 (S-2) imagery and conducting machine-learning regression and classification analyses. After filtering the more than 12,000 surveyed transects down to agricultural land use, approximately 4,000 samples with an S-2 scene available within three days of the sampling date remain. Various vegetation indices are used to predict grass height via Support Vector Regression (SVR) and Random Forest Regression (RF). The EU-wide SVR model outperforms the RF attempts but nonetheless delivers a high root mean square error (RMSE) of around 19-20 cm and a low R² of 0.22. Regional climate and grassland management practices strongly influence growing conditions. When regional models are instead built by splitting the database according to similar environmental conditions, the error rates drop by around 2 cm compared to the EU-wide model (RMSE = 17 cm, R² = 0.4).
Predicting the height of tall grassland stands presents greater challenges, as these stands often display signs of plant maturity and senescence, which disrupt the correlation between height and spectral indices. Moreover, as grass height increases, the potential for errors in both field sampling and modelling rises. As a result, when considering only grasslands shorter than 60 cm, the RMSE drops to 10 cm. Plots that were mown or grazed before the sampling tend to have lower RMSE values, primarily due to their generally shorter heights; however, the relative error margins are greater in these cases. The influence of patch size and of the time interval between the S-2 imagery and the sampling date is also examined, leading to differing sample sizes and minor accuracy changes. The vigour of vegetation – an ordinally scaled visual assessment of the grassland – only allows for classification analysis. Even after regionalization and aggregation to three classes (low, medium and high vigour), the overall accuracy remains low: 64% for the training data and 56% for the test dataset. This limitation is attributed to the subjective nature of vigour assessments, which may not align with the immediate vegetation condition captured by Sentinel data. In conclusion, average grass height is the preferred variable within the LUCAS database for remote sensing modelling of grassland biomass. As the LUCAS database covers a wide range of grasslands, it is strongly recommended to apply rigorous filtering, using ground images and metadata, to account for the environmental and management heterogeneity found across the EU.
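For reference, the two scores the abstract reports (RMSE and R²) are computed as below. This is a generic sketch of the standard definitions, not code from the study; the function name is ours.

```python
import numpy as np

def rmse_r2(y_true, y_pred):
    """Root mean square error and coefficient of determination R^2,
    the two scores quoted for the grass-height regression models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    ss_res = float(np.sum(resid ** 2))                      # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))   # total sum of squares
    return rmse, 1.0 - ss_res / ss_tot
```

An R² of 0.22 at an RMSE of ~20 cm, as reported for the EU-wide model, thus means the model explains only about a fifth of the height variance.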
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.34)

Presentation: EU Grasslands Watch – A Modular and Operational Satellite-based Monitoring Service to Support the EU Biodiversity Policy

Authors: Stefan Kleeschulte, Mr Karl Ruf, Mr Geoff Smith, Jan Mišurec, Mathilde De Vroey, Mrs Raquel Geijo, Mr Heinz Gallaun, Mr Evan Frank, Mr Lubos Halada, Mr Per Toräng, Mrs Barbara Herrero
Affiliations: space4environment, Specto Natura, GISAT, VITO, Bilbomatica, Joanneum Research, Also Consulting, Institute of Landscape Ecology SAS, Swedish University of Agricultural Sciences, Birdlife Europe
Grasslands are a key part of many landscapes, whether as hay meadows, grazing marshes or woodland clearings. They maintain biodiversity and food production but also influence ecological processes on a wide range of scales, including critical functions such as pollination, water supply, carbon sequestration, and climate regulation. Covering a significant part of the European Union (EU) and 70 % of the world's agricultural land, grasslands are both diverse and extensive habitats. However, these important habitats currently face numerous threats, including extensification and abandonment on the one hand and intensification of agricultural production on the other, highlighting the urgent need for effective monitoring and conservation strategies. The assessment reports prepared for the EU's Habitats Directive established that many protected grassland sites were in poor condition or not assessed at all, and that some form of monitoring was required which could address local detail within a site while also delivering continental-scale statistics. Earth Observation (EO) is an obvious way to regularly monitor grasslands over large areas and with relatively fine (field-scale) spatial detail. However, the dynamic nature, subtle variations and unique management practices of grasslands mean that conventional land cover mapping every few years rarely provides enough information. Over the last decade, the increased availability of high-cadence EO data at sub-field / habitat-patch spatial resolutions has opened up new and powerful opportunities for grassland monitoring. In response to the need for improved grassland monitoring, the European Parliament's Pilot Projects and Preparatory Actions initiative funded a development project representing a significant step towards consistent and effective monitoring of grasslands protected within the Natura 2000 site network.
The Cop4N2K project developed a prototype service which demonstrated the potential to regularly and efficiently monitor grasslands with EO data, and formed the basis for the current project. A follow-up project is building on the technology developed and the feedback received to produce an updated, targeted service. The resulting European Union Grassland Watch (EUGW) platform exploits data and services within the Copernicus Programme to deliver operational grassland information at the pixel level for over 16 thousand sites, supporting national and EU-level management and reporting obligations. Utilising the Landsat archive in combination with the data currently being collected by the Sentinel satellites, it produces yearly data going back to the mid-1990s, when the Natura 2000 network was established. The service portfolio presents an unprecedented time series of grasslands characterized by their type (at EUNIS level 2), management and productivity. One of the primary features of the modular service setup is to give users the ability to freely combine land cover data with these important grassland characteristics. This breaks with traditional approaches to land use mapping, which may capture the diversity of grasslands only in collective classes such as “semi-natural” grassland. For grasslands especially, the same management action can imply different things, either countering habitat degradation or causing adverse effects on biodiversity. Flexibility in adapting data binning and definitions to the local landscape context is therefore essential, and here provides analysts with a new depth of data to close the information gap on grassland condition and to help inform on compliance with the no-degradation target of the EU Habitats Directive. As the service evolves to map more Natura 2000 sites and provide more detailed thematic information, it will also generate more easily interpretable insights and expose this information through a user-friendly interface.
This platform has been specifically developed to ensure seamless data access, scalability, and compatibility with evolving data standards, enabling it to manage large datasets effectively. It provides an enhanced basemap tool, an optimised layer management interface, and external service integration to overlay complementary datasets via URL or file upload. EUGW represents an exemplary service for the delivery of actionable information and multi-scale indicators through the exploitation and integration, in an intuitive and flexible fashion, of the vast amounts of EO data and contextual information now available.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.34)

Presentation: Evaluating Grassland Mowing Detection Algorithms Across Europe - Results of the Mowing Detection Intercomparison Exercise (MODCiX)

Authors: Dr Marcel Schwieder, Felix Lobert, Dr. Arnab Muhuri, Prof. Dr. Natascha Oppelt, Dominique Weber, Dr. Sophie Reinermann, Dr. Sarah Asam, Dr. Filippo Sarvia, Dr. Samuele De Petris, Dr. Enrico Borogno-Mondino, Professor Clement Atzberger, Iason Tsardanidis, Dr. Charalampos Kontoes, François Godechal, Dr. Alban Jago, Emilie Beriaux, Cozmin Lucau-Danila, Viviana Planchon, Dr. Anatol Garioud, Dr. Célestin Huet, Dr. Julien Morel, Dr. Mattia Rossi, Ann-Kathrin Holtgrave, Dr. Francesco Vuolo, Aleksandar Dujakovic, Dr. Julien Radoux, Dr. Mathilde De Vroey, Dr. Andreas Schaumberger, Andreas Klingler, Dr. Ruth Sonnenschein, Prof. Dr. Patrick Hostert, Dr Stefan Erasmi
Affiliations: Thünen Institute, Humboldt-Universität zu Berlin, Christian-Albrechts-Universität zu Kiel, Swiss Federal Research Institute WSL, University of Würzburg, German Aerospace Center (DLR), University of Turin, Food and Agriculture Organization of the United Nations, Mantle Labs, National Observatory of Athens (NOA), Walloon Agricultural Research Centre, Institut national de l’information géographique et forestière (IGN), Joint Research Centre, Technical University of Berlin, University of Natural Resources and Life Science, Université catholique de Louvain, Agricultural Research and Education Center Raumberg-Gumpenstein, EURAC Institute for Earth Observation
Grasslands play an essential role in the European agricultural landscape. They provide numerous ecosystem services such as water regulation and the provision of fodder for livestock. Grasslands have a huge carbon sequestration potential and are thus crucial for climate change mitigation and adaptation. Beyond that, grasslands provide important habitats that support biodiversity. For nature conservation, policy monitoring and environmental planning, it is therefore relevant not only to map the spatial extent of grasslands, but also to monitor their state and management intensity. While the total grassland extent within a given region can be derived from agricultural statistics, timely and spatially explicit information on management intensity is usually lacking. This gap can be closed by analyzing Earth observation (EO) satellite time series, which make it possible to estimate grassland use intensity by identifying the number of cuts within a calendar year. Fueled by the opening of the Landsat archive and the Copernicus Sentinel missions, a multitude of approaches have been published in the recent past that exploit the unprecedented availability of optical and SAR data with moderate to high spatial resolution. These approaches relate abrupt changes in time series to grassland management activities using threshold-based rules or machine learning techniques. One key criterion for the evaluation and, thus, for a potential adoption of approaches for large-scale monitoring purposes is the accuracy with which a mowing event can be predicted. However, it is not straightforward to compare the reported accuracies of different approaches, as the studies usually have a different regional focus and may differ in terms of data, methods and evaluation strategies. In addition, most studies lack a test of spatial and temporal transferability, as no independent reference data are available.
There is hence a great need for benchmark reference data allowing a comparative assessment of different methodological frameworks under different environmental and management conditions. To enable such a comparison, we collected a set of independent grassland management reference data from different research groups, covering eight European countries and a five-year period. These data contained geographic coordinates or field boundaries of grasslands and the dates of mowing events. We harmonized these data and assigned quality values to account for the different collection methods, which included farmer reports, mapping campaigns and the interpretation of satellite or webcam time series. In total, we obtained more than 5,000 reported mowing events that we could use to compare the performance of eight different algorithms for identifying grassland management intensity from EO time series. All algorithms were applied to common time series of optical and/or SAR satellite data to predict mowing events across Europe. A stratified random sample of 30% of the reference data was shared to enable fine-tuning of the algorithms. The reported predictions of each algorithm were evaluated on temporal, regional, qualitative, and use-intensity subsets of the unseen reference data, enabling a differentiated in-depth comparison of the results. In general, the results show that all algorithms can be transferred in space and time but experience varying degrees of performance loss. When using all available reference mowing dates and a tolerance threshold of 12 days, the overall F1 values ranged from 0.55 to 0.74. However, the direction and range of precision and recall varied between the algorithms. The highest accuracies were achieved by deep learning algorithms that used a combination of SAR and optical data when all reference data were used for evaluation.
However, this pattern changed somewhat when we assessed the predictions separately for different grassland-use intensity levels. We then observed that rule-based algorithms relying only on optical data performed better the more often a meadow had been mowed, as indicated by higher precision values. Further details can be interactively explored in this Shiny app: https://geo-masc.shinyapps.io/modcix_evaluation. Overall, the Mowing Detection Intercomparison Exercise (MODCiX) emphasizes the importance of benchmark reference datasets for a consistent, representative and reliable evaluation of EO-based solutions. We recommend that this benchmark dataset be maintained and expanded for the evaluation of future algorithms and products, such as the upcoming Copernicus Grassland HRL layers that feature grassland mowing dates and frequencies.
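The event-level scoring described above (F1 under a 12-day tolerance) can be illustrated with a simple greedy matcher. This is a minimal stand-in under stated assumptions, not the MODCiX evaluation code: the one-to-one greedy matching rule and the function name are ours, and the exercise's actual matching may differ.

```python
def event_f1(predicted, reference, tolerance=12):
    """Precision, recall and F1 for mowing-event detection: each
    predicted date (e.g. day of year) is greedily matched, at most
    once, to a reference date within +/- tolerance days."""
    ref_left = sorted(reference)
    tp = 0
    for p in sorted(predicted):
        match = next((r for r in ref_left if abs(p - r) <= tolerance), None)
        if match is not None:
            ref_left.remove(match)  # each reference event may be matched once
            tp += 1
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)
```

With such a scorer, an algorithm that over-detects events loses precision while one that misses cuts loses recall, which is exactly the precision/recall trade-off the abstract reports as varying between algorithms.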
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.34)

Presentation: SatGrass: Estimation of grassland yield in Austria using weather, satellite and extensive in situ data

Authors: Andreas Klingler, Aleksandar Dujakovic, Francesco Vuolo, Konrad Mayer, Lukas Gaier, Erich Pötsch, Andreas Schaumberger
Affiliations: Institute of Grassland research, Agricultural Research and Education Centre Raumberg-Gumpenstein, Institute of Geomatics, University of Natural Resources and Life Sciences, Vienna, Climate Research Department, GeoSphere Austria
Grasslands are among the most widespread ecosystems globally, covering 40 % of the Earth's land surface and providing essential ecological and agricultural functions. In Austria, grasslands cover more than 50 % of agricultural land, supporting livestock feed production, carbon sequestration, erosion control, water filtration, and biodiversity preservation. However, optimal grassland management poses significant challenges due to the variability in environmental conditions, topography, and management practices. The lack of accurate, timely yield data often hampers optimal decision-making for harvest timing, which must balance forage quality and yield. The SatGrass project addressed these challenges by developing a robust machine-learning model capable of estimating dry matter yield (DMY) for grasslands across Austria. By integrating Sentinel-2 satellite data, agrometeorological variables, and a comprehensive in situ dataset, SatGrass achieved high accuracy and generalizability in its yield predictions. The model is based on an extensive dataset collected over five years (2018–2022) at 184 sampling sites, representing Austria's diverse grassland regions. These range from low-yielding areas in the drought-prone eastern regions to high-yielding areas in the Alpine foothills and regions with shorter growing seasons in the Alps. In particular, we focused on in situ data to address commonly observed spatial and temporal coverage limitations. Data collection was standardised using a mobile application that guided field technicians and farmers through the entire sampling process, ensuring consistent procedures and direct data transfer. Above-ground biomass samples were taken in triplicate at a 5 cm cutting height, with repeated measurements during individual growths to capture yield and forage quality dynamics.
This resulted in a dataset of 3,348 observations covering all relevant grassland regions with diverse site conditions, productivity and management intensity. Remote sensing predictors derived from Sentinel-2 imagery included vegetation indices (VIs) such as the Leaf Area Index (LAI), the normalised difference vegetation index (NDVI), and others. Data were processed using advanced cloud-masking techniques and a Whittaker smoothing algorithm to produce VI time series for the different growth periods. Weather data were sourced from Austria's high-resolution INCA model, which provides daily temperature, radiation, and precipitation composites. The SatGrass model uses a fully connected feedforward artificial neural network (ANN). A four-layer architecture with 128 nodes per layer and ReLU activation functions was trained using stratified data splits to ensure spatial and temporal independence. The ANN achieved high overall accuracy, with an R² of 0.8 and a mean absolute error (MAE) of 330 kg DMY ha⁻¹ on a spatially independent test dataset. Validation on temporally independent datasets from 2021 and 2022 further confirmed its generalisability, with consistent R² values of 0.72-0.73. Predictor importance analysis using Local Interpretable Model-agnostic Explanations (LIME) highlighted the central role of LAI in yield estimation, owing to its biological importance in growth-related processes such as photosynthesis and biomass accumulation. Weather variables, especially global radiation, also contributed significantly, while precipitation had a lesser influence, especially in regions with adequate water supply. Challenges related to cloud cover were addressed using advanced cloud-masking techniques and temporal smoothing. While performance declined slightly after extended periods without valid satellite data, the Whittaker smoothing algorithm effectively maintained prediction accuracy after two weeks of observational gaps.
SatGrass fills a critical gap in grassland yield modelling by providing a scalable, real-time tool for continuous yield estimation, accessible to a wide range of stakeholders through its reliance on freely available satellite and weather data. The project demonstrates the potential of combining satellite remote sensing, agrometeorological data, machine learning and comprehensive in situ data to model complex growth dynamics, providing a reliable solution for grassland yield prediction. Future developments will add forage quality predictions, cut-detection algorithms and real-time decision support systems, enabling farmers to optimise management practices, improve resource efficiency and promote sustainability. Ongoing research aims to refine the model for different management types, such as grazed versus mown grassland, and to extend its applicability to further regions.
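The Whittaker smoother used to bridge cloud gaps in the VI time series has a compact closed form: it solves (I + λ DᵀD) z = y, where D is a second-difference operator and λ controls the smoothness. A minimal dense-matrix sketch is shown below; the λ value is illustrative, and operational implementations (like, presumably, the one in SatGrass) would use sparse matrices and observation weights to handle gaps.

```python
import numpy as np

def whittaker_smooth(y, lam=10.0):
    """Whittaker smoother with a second-order difference penalty:
    minimizes sum((y - z)^2) + lam * sum((second differences of z)^2)
    by solving the linear system (I + lam * D'D) z = y."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # (n-2, n) second-difference operator
    z = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return z
```

Because the penalty acts only on second differences, a perfectly linear series passes through unchanged, while noisy wiggles are damped increasingly as λ grows.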
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.14)

Session: A.08.11 Ocean Waves: Measurements, Interactions and Applications

Ocean waves are a fundamental property of the air-sea boundary, with interactions ranging from wave-breaking assisting the air-sea transfer of gases to the potential destruction of large oil tankers. From a climate perspective they provide a measurable integration of the input of wind energy and provide a mechanism for the break-up and dispersal of ice floes. Long-term studies of wave height, wave period and direction provide an insight into climate change, and derived climatologies and short-term forecasts indicate the probable range of conditions likely to affect shipping, wave-energy harvesters and offshore structures.

This session encourages the submission of presentations related to the measurement and utilization of wave data from all sensors, ranging from the long-term missions and efforts to maintain the consistency of such datasets through validation to exploring the potential new information from innovative missions such as CFOSAT and SWOT. Studies may be on along-track or gridded products, especially those that focus on driving down the uncertainty in long-term products. We also welcome abstracts dealing with the interaction of waves with currents or sea-ice, and those addressing the issues of extremes and of recovering data close to the shore.

Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.14)

Presentation: On the innovative combined assimilation of CFOSAT wave spectra and swath wave height from SWOT: perspectives for operational wave forecasting

Authors: Lotfi Aouf, Danièle Hauser, Cédric Tourain, Bertrand
Affiliations: Météo France, CNRM
Wave forecasting in extreme conditions is highly sensitive to the wind forcing provided by atmospheric models, whose uncertainties depend on the ocean basin. The assimilation of wave observations in the MFWAM model enables the correction of forcing and model-physics uncertainties in operational wave forecasting dedicated to wave submersion warnings. This work aims to assess the impact of assimilating both CFOSAT directional wave spectra and the 2D wave height field provided by SWOT. These two types of observations have shown their ability to better correct wave propagation from storm areas such as the Southern Ocean. We used a global MFWAM wave model configuration with a resolution of 10 km and atmospheric forcing from the IFS-ECMWF atmospheric system. Long-period assimilation experiments were carried out using observations from both missions, derived from recent L2 processing. Experimental results were validated against independent altimetry wave data and drifting wave buoys deployed during the calibration/validation phase. The results show that the combined assimilation significantly reduces the bias and scatter index of significant wave height, particularly in the Southern Ocean and along swell tracks across the oceans. The assimilation of SWIM wave spectra compensates for the uncertainties in the wave estimates within the SWOT swath. A focus on the impact in the Southern Ocean, where the improvement in scatter index exceeds 25%, was carried out by comparison with a Met-Norway drifting buoy deployed in April 2024. We analysed different assimilation scenarios to draw benefit from high-resolution swath wave height consistently with the grid resolution of the operational wave model MFWAM. Further comments and conclusions will be presented in the final paper.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.14)

Presentation: Coastal wave refraction over varying currents and arbitrary depths

Authors: Trygve Halsne, Dr. Yan Li, Dr. Mika Malila, Dr. Øyvind Breivik
Affiliations: Norwegian Meteorological Institute, University of Bergen
Wave refraction – the veering of waves due to underlying current and depth gradients – is a predominant mechanism causing spatially inhomogeneous ocean wave fields. However, the interplay between depth-induced and current-induced refraction remains poorly understood. We present an approximate analytical solution for the wave ray curvature, valid under the weak-current assumption epsilon = u/cg << 1 and for waves propagating atop slowly varying currents and bathymetry. The curvature, which denotes how much a curve departs from a straight line, is the appropriate measure of refraction. The solution is tested on realistic refraction situations captured by the Copernicus Sentinel-2 mission, including cases where bathymetry is the predominant refraction mechanism. The observed refraction patterns are compared against wave rays computed with an open-source ray tracing framework. Moreover, we explore the potential of treating the ray curvature solution as a so-called inverse problem, meaning that the observed wave refraction, as seen in Sentinel-2 imagery, can be used to estimate the horizontal near-surface current gradients.
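For orientation, the classical deep-water, weak-current limit of such a ray-curvature solution (Dysthe, 2001) relates the curvature to the vertical vorticity of the current divided by the group speed; the finite-depth generalization presented in this talk is not reproduced here.

```latex
% Deep-water, weak-current limit (Dysthe, 2001): ray curvature from current vorticity
\kappa \approx -\frac{\zeta}{c_g},
\qquad \zeta = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y},
\qquad \varepsilon = \frac{u}{c_g} \ll 1 .
```

In this limit, rays bend only where the current carries vorticity, which is why refraction patterns in imagery can, in principle, be inverted for horizontal current gradients.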
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.14)

Presentation: Extending and exploiting the coastal record of the ESA Sea State Climate Change Initiative

Authors: Marcello Passaro, Fabrice Ardhuin, Guillaume Dodet, Mariia Usoltseva
Affiliations: Deutsches Geodätisches Forschungsinstitut - Technische Universität München, Univ Brest, CNRS, Ifremer, IRD, Laboratoire d’Océanographie Physique et Spatiale (LOPS), Technische Universität München
The monitoring of sea state, particularly significant wave height (SWH), relies heavily on the multi-mission altimetry record accumulated over 30 years of satellite observations. As part of the ESA Sea State Climate Change Initiative, the WHALES retracking scheme was adopted, a method that significantly improves the retrieval of valid data in coastal areas. Through the reprocessing of data from missions such as Jason-1, Jason-2, Jason-3, Envisat, CryoSat-2, and AltiKa, the first phase of the project yielded two decades of coastal SWH records that are ready to be exploited. In parallel, efforts are underway to apply this approach to additional historical missions, potentially extending the record back by another decade. This presentation will highlight both recent progress in applying WHALES to other missions and the opportunities for leveraging these coastal records. Specifically, we will discuss the challenges and outcomes of applying WHALES to ERS-1 and ERS-2 data, along with statistical insights gained from reprocessing the ERS-2/Envisat and ERS-1/ERS-2 tandem-phase data. Additionally, we will show how WHALES not only enhances coastal data accuracy but also reduces the impact of speckle noise and wave-group influence, providing a more accurate representation of SWH and its uncertainty, which is particularly valuable for extreme conditions. Finally, we will demonstrate how these reprocessed datasets can be used effectively for coastal applications, focusing on the monitoring of SWH changes from offshore to coastal zones and on the role of coral reefs as natural barriers protecting coastal environments from high sea states.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.14)

Presentation: Comparing and Combining Directional Swell Measurements From Sentinel-1, SWIM and SWOT Using Fireworks

Authors: Marine De Carlo, Romain Husson, Beatriz Molero, Nicolas Richard, Annabelle Ollivier, Charles Peureux, Pierre Dubois, Adrien Nigou, Hugo Singhoff, Matthias Averseng, Lotfi Aouf, Cédric Tourain, Gerald Dibarboure, Alejandro Bohe, Fabrice Ardhuin, Fabrice Collard
Affiliations: CLS Group, Météo-France, CNES, Univ. Brest, CNRS, Ifremer, IRD, LOPS, IUEM, OceanDataLab
Various spaceborne radar instruments now operate with the common ability to observe long waves propagating across the oceans: the C-band Synthetic Aperture Radar (SAR) onboard Sentinel-1 (~23-36° incidence), the SWIM rotating Ku-band radar onboard the Chinese-French CFOSAT satellite (~2-10° incidence) and the Ka-band Radar Interferometer (KaRIn) onboard the Surface Water and Ocean Topography (SWOT) mission (1-5° incidence). The directional swell measurements are classically investigated using models and in situ measurements, but also co-locations between the spaceborne instruments (also known as cross-overs in altimetry). These co-locations comprise both static and so-called “dynamic” co-locations, where waves are propagated with a linear wave propagation model over a few hundred kilometers to maximize the number of co-locations between the sensors. Intercomparisons can therefore be performed either for specific case studies or as massive statistical comparisons. In this study, we use wave measurements from Sentinel-1A (S1-A) wave mode Level-2 Ocean (OCN) products (distributed by ESA) and SWIM L2S (IFREMER/OceanDataLab) and L2P products from the new processing that extends the range of swell measurements up to 1200 m wavelength (500 m before) and better filters non-wave signatures (distributed by CNES/CLS, Q1 2024). We include the spectral content of KaRIn SWOT Sea Surface Height Anomaly (SSHA) observations to image so-called “forerunners”, the longest-period swells that propagate ahead of the most energetic swell components (Munk, 1947). Intercomparisons show promising synergies between these sensors and the potential to derive worldwide multi-sensor swell monitoring. First, SWIM shows great capability to image the shortest swells, which S1 can totally miss or only partially image because of its cutoff limitation.
S1, in turn, has shown the capability to observe long swells (with peak wavelengths larger than 800 m), but its imaging mechanism is limited by the reduced velocity bunching and tilt modulation of the longest forerunners. Comparisons with SWOT, on the other hand, indicate that KaRIn unsmoothed SSHA measurements (ocean data at their highest resolution) can further extend the range of visible swell up to kilometric scales thanks to KaRIn's interferometric capability, reaching wavelengths larger than 1300 m. These swells are footprints of the most energetic and potentially harmful storms. Imaging these low-amplitude but very long and fast-travelling waves can help forecast the arrival of major hazards. A dedicated task is devoted to the development of an adaptive partitioning method aimed at capturing wave observations that persist throughout the data take despite their potentially low energy. This is expected to yield more geophysical observations from each satellite and to better identify outliers. Swell parameters from multi-mission Fireworks-based virtual buoys will then be compared with those from in-situ moored or drifting wave buoys. Significant work is still needed to better understand, compare and inter-calibrate these directional swell measurements, but they offer very promising perspectives for the exhaustive description of swell events, from the longest forerunners to the short wind sea.
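The "dynamic co-location" and forerunner concepts both rest on linear deep-water dispersion: a swell of wavelength L has phase speed c = sqrt(gL/2π) and its energy travels at the group speed c_g = c/2, so longer swells outrun shorter ones. A minimal sketch under that standard assumption (the function name is ours, not from any of the products mentioned):

```python
import math

def swell_travel_time_hours(wavelength_m, distance_km):
    """Travel time of deep-water swell energy over a given distance,
    using linear theory: c = sqrt(g*L / 2*pi), group speed c_g = c/2."""
    g = 9.81                                          # gravitational acceleration, m/s^2
    c_phase = math.sqrt(g * wavelength_m / (2 * math.pi))
    c_group = c_phase / 2.0                           # energy (and co-location) speed
    return distance_km * 1000.0 / c_group / 3600.0
```

For example, an 800 m swell crosses 500 km in roughly 8 hours, while a 1300 m forerunner covers the same distance noticeably faster, which is why the longest waves arrive ahead of the energetic peak.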
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.14)

Presentation: ASSEMBLING 30 YEARS OF SATELLITE WAVE MEASUREMENTS FOR CLIMATE SCIENCE: THE SEA STATE CLIMATE CHANGE INITIATIVE CCI

Authors: Annabelle OLLIVIER, Dr Fabrice ARDHUIN, Dr Guillaume DODET, Dr Sarah Connors, A
Affiliations: CLS
Robust long-term sea state information is needed for coastal adaptation, offshore engineering, marine energy production, and marine security, but also to constrain modern global wave reanalyses, which often suffer from the assimilation of uneven datasets. In this context, the European Space Agency is playing an active role through the Climate Change Initiative program, which aims at making the most of historical and current Earth Observation missions in order to produce long-term records of Essential Climate Variables (ECV). The sea state ECV has been routinely monitored from space with radar altimeters and synthetic aperture radar imagers over more than three decades. Over this period of time, mission designs and technology have evolved alongside climate change, and the production of seamless multi-mission and multi-sensor climate data records has become more and more challenging. For the second phase of the Sea State CCI project, wave measurements from 16 satellite missions, including altimeters (LRM and Delay Doppler) and SAR imagers, will be inter-calibrated to minimize the bias between missions and sensor types, and between satellite, ground truth measurements and models. This talk presents the new and future datasets (version V4 in March 2025 and V5 in 2026) for the climate and waves communities. It explains the strategy chosen, from the processing, validation, and inter-calibration strategy to the downstream applications developed within the project. The impact on the long-term statistics is finally discussed.
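The inter-calibration step described above can be illustrated, in greatly simplified form, as a least-squares gain/offset fit of one mission's significant wave height against a reference at matchup points. This is a toy sketch on synthetic numbers, not the CCI algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic matchups: a mission measuring SWH with a gain and bias error
swh_ref = rng.uniform(0.5, 8.0, size=1000)                     # reference SWH [m]
swh_mission = 1.05 * swh_ref + 0.15 + rng.normal(0, 0.1, 1000)  # biased mission

def fit_linear_calibration(measured, reference):
    """Least-squares gain/offset so that gain*measured + offset ~ reference."""
    A = np.column_stack([measured, np.ones_like(measured)])
    (gain, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return gain, offset

gain, offset = fit_linear_calibration(swh_mission, swh_ref)
swh_corrected = gain * swh_mission + offset

bias_before = np.mean(swh_mission - swh_ref)
bias_after = np.mean(swh_corrected - swh_ref)
print(f"bias: {bias_before:.3f} m -> {bias_after:.3f} m")
```

A real climate data record additionally has to handle sensor-type dependencies, sea-state-dependent biases and drifts over time, but the principle of adjusting each mission towards a common reference is the same.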
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.14)

Presentation: Novel Sea-State Observations from SAR Altimetry – Steps Taken and the Road Ahead

Authors: Dr.-Ing. Christopher Buchhaupt, Alejandro Egido, Doug Vandemark, Hui Feng, Luciana Fenoglio
Affiliations: Cooperative Institute for Satellite Earth System Studies (CISESS)/Earth System Science Interdisciplinary Center (ESSIC), University of Maryland, College of Engineering and Physical Sciences, University of New Hampshire, European Space Research and Technology Centre, European Space Agency, Astronomical, Physical and Mathematical Geodesy, University of Bonn
First enabled by CryoSat-2 in 2010, Synthetic Aperture Radar (SAR) altimetry represents a significant advancement in wave measurements, providing unprecedented accuracy and resolution in monitoring significant wave height (SWH) and wave dynamics. Unlike traditional pulse-limited altimetry, SAR altimetry utilizes a smaller along-track footprint, enabling precise detection of fine-scale wave patterns and variations. This capability is especially crucial in regions with complex wave behavior, such as coastal zones and high-energy oceanic areas. By offering consistent, global wave data, SAR altimetry enhances our understanding of wave generation, propagation, and interactions with winds and currents. These insights are vital for improving wave climate models, forecasting extreme wave events, and assessing the impacts of waves on coastal erosion and marine structures. As a key tool in studying the ocean-atmosphere system, SAR altimetry is essential for advancing wave-related research and applications in marine safety, renewable energy, and climate studies. In this presentation we will give an overview of observational capabilities of wave observations by unfocused SAR altimetry and how those additional parameters influence the estimation of sea level and SWH. We will discuss requirements to utilize the direct estimation of two new wave parameters: 1. The second order spectral wave moment M2, usable to compute the mean zero upcrossing wave period T02. 2. A mean line-of-sight velocity mostly caused by vertical wave velocity/along-track wave slope correlations. This parameter can be used to estimate wind-wave direction with respect to the satellite track. We will discuss the accuracy and precision of these new parameters using the results of a two-year global Sentinel-6A processing campaign conducted with our in-house SAR altimetry processor prototype. The outputs of this campaign will be validated against ERA5 wave model data, tide gauges, and buoy measurements. 
Additionally, we will provide an outlook on the future potential of SAR altimetry for sea-state observations, highlighting capabilities that are currently unattainable with unfocused SAR altimetry but could become feasible through the application of focusing techniques. This includes an examination of the current limitations, and the technological or methodological advancements required to unlock these new capabilities, paving the way for future developments.
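The mean zero-upcrossing period mentioned above follows directly from the spectral moments: T02 = sqrt(m0/m2), with the significant wave height given by SWH = 4*sqrt(m0). A small sketch on a synthetic spectrum (not the Sentinel-6A processor):

```python
import numpy as np

def _integrate(y, x):
    """Trapezoidal integration (kept explicit for NumPy-version safety)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def spectral_moment(freq, spectrum, n):
    """n-th spectral moment m_n = integral of f^n * S(f) df."""
    return _integrate(freq**n * spectrum, freq)

def wave_parameters(freq, spectrum):
    """Return (SWH, T02) from a 1-D frequency spectrum S(f).

    SWH = 4 sqrt(m0);  T02 = sqrt(m0 / m2)  (mean zero-upcrossing period).
    """
    m0 = spectral_moment(freq, spectrum, 0)
    m2 = spectral_moment(freq, spectrum, 2)
    return 4.0 * np.sqrt(m0), np.sqrt(m0 / m2)

# Synthetic narrow swell spectrum peaked at 0.1 Hz (10 s waves), arbitrary units
f = np.linspace(0.01, 0.5, 2000)
S = np.exp(-0.5 * ((f - 0.1) / 0.005) ** 2)
swh, t02 = wave_parameters(f, S)
print(f"T02 = {t02:.1f} s")   # close to 1/f_peak for a narrow spectrum
```

For a narrow spectrum T02 approaches the peak period, while broader wind-sea spectra pull T02 below it, which is why a direct estimate of M2 adds information beyond SWH alone.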
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E1)

Session: A.09.03 Understanding the Arctic as a system

The Arctic is warming up to four times faster than the rest of the globe, a phenomenon known as Arctic amplification (Rantanen et al., 2022). This warming has triggered a chain of changes and feedbacks in the Arctic system whose expected impacts will exceed those forecast for many other regions on the planet. Climate change in polar regions has wide-ranging implications for human activities, primary productivity, ecosystems, atmosphere-ocean interactions, the carbon cycle, ocean and atmosphere circulation, as well as impacts on the Earth and climate system as a whole.

This session will focus on the latest research to understand complex processes and feedbacks in the Arctic including, but not limited to: freshwater fluxes and their impact on the Arctic Ocean and its biology; sea-ice evolution (thickness, extent, concentration, motion, snow-cover depth, roughness, melt ponds) and its feedback to the climate system; interactions between the ocean and atmosphere; snow melt processes and related downstream impacts; impacts of climate change on Arctic biodiversity; and impacts of extreme events.

Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E1)

Presentation: Understanding Soil State Transitions and Assessing Slope Instability in Arctic Regions Under Changing Climate Conditions

Authors: Ionut Sandric, Dr. Kimmo Rautiainen, Manu Holmberg, Nicolas Alexandru
Affiliations: Faculty Of Geography, University Of Bucharest, Finnish Meteorological Institute
Understanding the impacts of climate change in Arctic regions is critical, particularly the transitions of soil states from frozen to intermediate and thawed conditions, which have profound implications for slope stability. Thawing permafrost weakens the structural integrity of slopes, increasing the likelihood of landslides and other geohazards. Understanding these trends is essential for identifying regions at higher risk of future slope instability, which in the Arctic is often triggered by permafrost thaw. It also enables better-informed decision-making for infrastructure development, resource management, and disaster mitigation in Arctic regions. In order to model the trends in soil freeze-thaw state over the Arctic region, the SMOS (Soil Moisture and Ocean Salinity) brightness temperature data from the CATDS (Centre Aval de Traitement des Données SMOS) dataset (Al Bitar et al., 2017) and the SMOS L3 Freeze and Thaw State data available from the ESA SMOS online dissemination service (https://smos-diss.eo.esa.int/) were used. From this dataset (Rautiainen et al., 2016), the normalized polarization ratio (NPR) and the ground state classified as freeze-intermediate-thaw (FT), from both ascending and descending orbits, were analysed. Two methodological approaches were employed: the Mann-Kendall test for detecting monotonic trends and several LSTM models. These methods were applied both individually to each month and season for every pixel in the dataset and in clusters of 3x3, 5x5, and 7x7 pixels. The Mann-Kendall trend test, a robust non-parametric statistical method, has been widely employed to detect and analyse monotonic trends. This method identifies consistent patterns of change over time without assuming specific data distributions or linearity, making it ideal for Arctic datasets characterized by variability and outliers.
Long Short-Term Memory (LSTM) models, a specialized class of recurrent neural networks, are well-suited for modelling trends and seasonal patterns in time-series data. Their gated architecture, including input, forget, and output gates, allows them to capture long-term dependencies while filtering noise and irrelevant data. The results show notable clusters with pronounced changes in parts of northern Canada, Alaska, and Siberia, as well as around the Arctic Ocean’s coastal areas. These regions exhibit a mix of changes, suggesting localized increases or decreases in frost-related conditions over time. In northern Canada and Alaska, the observed trends may be attributed to processes such as permafrost degradation and seasonal variations in frost dynamics. These regions are highly sensitive to rising temperatures, which can lead to thawing permafrost and subsequent changes in slope stability and ecological systems. The interplay between climatic warming and local microclimatic factors likely drives the spatial variability in these trends. Siberia, characterized by its vast permafrost coverage, also exhibits significant changes. Areas showing decreasing trends may reflect accelerating permafrost thaw, a phenomenon with profound implications for ecosystem functionality, carbon release, and infrastructure stability. This region's complex interactions between temperature fluctuations, snow cover, and vegetation dynamics further contribute to the observed variability in slope trends. The coastal zones along the Arctic Ocean present another critical area of slope variation. These changes may be influenced by interactions between frost dynamics and the retreat of sea ice, as well as by coastal erosion linked to rising sea levels and permafrost thaw. Such processes can exacerbate the vulnerability of these regions to both ecological and geomorphological changes. 
This research emphasizes the importance of integrating diverse analytical approaches to monitor and understand environmental changes in permafrost-affected areas. By leveraging the strengths of statistical and artificial intelligence models, it is possible to gain deeper insights into the factors influencing slope instability.

Acknowledgement: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101086386, EO-PERSIST - A Cloud-Based Remote Sensing Data System for Promoting Research and Socioeconomic Studies In Arctic Environments (https://www.eo-persist.eu).

Selected references:
Al Bitar, A., Mialon, A., Kerr, Y. H., Cabot, F., Richaume, P., Jacquette, E., Quesney, A., Mahmoodi, A., Tarot, S., Parrens, M., Al-Yaari, A., Pellarin, T., Rodriguez-Fernandez, N., and Wigneron, J.-P.: The global SMOS Level 3 daily soil moisture and brightness temperature maps, Earth System Science Data, 9, 293–315, https://doi.org/10.5194/essd-9-293-2017
Rautiainen, K., Parkkinen, T., Lemmetyinen, J., Schwank, M., Wiesmann, A., Ikonen, J., Derksen, C., Davydov, S., Davydova, A., Boike, J., Langer, M., Drusch, M., and Pulliainen, J. (2016). SMOS prototype algorithm for detecting autumn soil freezing, Remote Sensing of Environment, 180, 346-360, https://doi.org/10.1016/j.rse.2016.01.012
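The pixel-wise Mann-Kendall test used in this study can be sketched as follows. This is a minimal version with the normal approximation and without the tie correction that a production analysis over classified FT states would need:

```python
import numpy as np
from math import erfc, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction) on a 1-D series.

    Returns (S, Z, p): the S statistic, normal-approximation Z score,
    and two-sided p-value. S > 0 indicates an increasing trend.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    p = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal CDF
    return s, z, p

s, z, p = mann_kendall(range(30))   # strictly increasing -> strong trend
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.2e}")
```

Being rank-based, the test only uses the sign of pairwise differences, which is what makes it robust to the outliers and non-Gaussian variability typical of Arctic time series.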
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E1)

Presentation: ARCTIC-FLOW: A new project for better understanding water mass formation processes in the Nordic Seas

Authors: Estrella Olmedo, Manuel Arias, Agnieszka Beszczynska-Möller, Carolina Gabarró, Verónica González-Gambau, Michael Karcher, Nanna B. Karlsson, Frank Kauker, Roger Oliva, Raul Onrubia, Aqeel Piracha, Roberto Sabia, Anne Munck-Solgaard, Antonio Turiel, Marta Umbert, Dr Martin Wearing
Affiliations: Institute of Marine Sciences (CSIC), ZBT, IOPAN, OASys, GEUS, ESA
The Atlantic Meridional Overturning Circulation plays a central role in climate by transporting and redistributing recently observed temperature increases to depth, thereby regulating the effective heat capacity of the ocean under global warming. The Atlantic Meridional Overturning Circulation is projected to decline in response to climate change, and there is broad agreement that the climate consequences of a potential shutdown of this vital ocean circulation are enormous. The Nordic Seas are a dominant contributor to the overturning circulation due to the production of dense waters north of the Greenland-Scotland Ridge which feed into the lower limb of the Atlantic Meridional Overturning Circulation. The objectives of ARCTIC-FLOW, a European Space Agency Polar Cluster project, are: 1) to identify the main locations of surface water mass transformation into denser waters; 2) to provide new estimates of water mass transformation and overturning in order to understand the mechanisms driving surface density changes and their impact on the ocean circulation; 3) to investigate the temporal and spatial scales at which the main processes of water mass formation occur; and 4) to assess the impact of extreme freshening events, with the main focus on different regions of the Nordic Seas. To achieve these objectives, we will construct a new 16-year time series of satellite-derived freshwater and density fluxes for the Arctic and sub-Arctic regions, obtained by combining sea surface salinity, sea surface temperature and velocity fields from EO observations, along with information on the mixed layer depth. We will then perform an in-depth analysis of a comprehensive set of in situ measurements in combination with results of model experiments and the new EO-derived time series. In this talk we will present the project and the progress made in generating the new satellite product.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E1)

Presentation: Chukchi and Beaufort Seas circulation from a long-term dataset of satellite radar altimetry

Authors: Maria Pisareva, Felix L. Müller, Marcello Passaro, Christian Schwatke, Denise Dettmering, Florian Seitz
Affiliations: Deutsches Geodätisches Forschungsinstitut, Technischen Universität München (DGFI-TUM)
Pacific-origin waters flowing northwards through the Bering Strait bring heat, freshwater, and nutrients, influencing the entire Arctic Ocean. The Chukchi and Beaufort Seas region is crucial for the dynamics and transformation of the Pacific water inflow into the Arctic. Monitoring the processes in this region is very important for understanding the variability of the Arctic Ocean; however, due to harsh environmental conditions, in-situ observations are limited. In this work, we created and analyzed a high-resolution dataset of dynamic ocean topography and geostrophic currents based on multi-mission satellite radar altimetry data over the region. The novel dataset is characterized by the application of sophisticated sea-ice-dedicated data processing techniques (e.g., machine-learning-based lead detection), enabling a more reliable sea surface height determination. Careful processing, along with validation against mooring data in the Bering Strait and Beaufort Gyre region, allows us to observe reliable interannual, seasonal, and synoptic variability of the sea level and geostrophic flow in the Chukchi and Beaufort Seas. The analysis shows that anomalously strong northerly winds can force the Ekman set-up in the Bering Strait and reverse the northward flow of Pacific waters. While this phenomenon has been previously described in oceanographic studies from in-situ data, the proposed dataset makes it possible to assess the development of the reversed flow in the strait, as well as the forcing and the flow variability, with a high temporal-spatial resolution (10 days/8 km) over 2013–2023. The strongest reversals are observed in fall, although winter wind forcing is comparable, and the highest correlations between storm strength and current response occur during partial ice cover in winter and spring, as the ice modulates the flow through the strait.
The newly derived dataset also shows that the area of the Beaufort Gyre has decreased from 2013 to 2023, and its center has shifted southeastwards, compared to the northwestward shift of the previous decade, which is tightly linked to regional wind forcing and could potentially mean a release of freshwater into the Arctic Basin. In the changing climate, diminishing sea ice, rising sea levels, and adjusting weather patterns, it is necessary to investigate ocean currents to understand and predict their development. The study demonstrates how satellite radar altimetry provides crucial information for monitoring and analyzing a highly dynamic region in the Arctic.
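The geostrophic currents derived above follow from the surface geostrophic balance, u = -(g/f) ∂η/∂y and v = (g/f) ∂η/∂x, applied to the dynamic ocean topography η. A minimal sketch on a synthetic field (not the DGFI-TUM processing chain; grid and slope values are illustrative):

```python
import numpy as np

OMEGA = 7.2921e-5   # Earth's rotation rate [rad/s]
G = 9.81            # gravitational acceleration [m/s^2]

def geostrophic_velocity(eta, dx, dy, lat_deg):
    """Surface geostrophic velocity (u east, v north) from dynamic topography.

    u = -(g/f) d(eta)/dy,  v = (g/f) d(eta)/dx,  f = 2*Omega*sin(lat).
    `eta` is a 2-D array (rows increase northward); dx, dy in metres.
    """
    f = 2.0 * OMEGA * np.sin(np.deg2rad(lat_deg))
    deta_dy, deta_dx = np.gradient(eta, dy, dx)   # gradients along y then x
    return -(G / f) * deta_dy, (G / f) * deta_dx

# Topography rising northward by 1 cm per 10 km on an 8 km grid at 70N
ny, nx, dy, dx = 50, 50, 8000.0, 8000.0
y = np.arange(ny)[:, None] * dy
eta = 1e-6 * y * np.ones((1, nx))                 # slope of 1e-6 (1 cm / 10 km)
u, v = geostrophic_velocity(eta, dx, dy, lat_deg=70.0)
print(f"u ~ {u.mean():+.3f} m/s, v ~ {v.mean():+.3f} m/s")
```

At 70°N this centimetric sea-level slope already corresponds to a ~7 cm/s zonal current, which is why centimetre-accurate sea surface heights from leads in the ice are essential for resolving the Beaufort Gyre circulation.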
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E1)

Presentation: Pervasive glacier retreat across Svalbard from 1985 to 2023

Authors: Tian Li, Stefan Hofer, Geir Moholdt, Adam Igneczi, Konrad Heidler, Xiao Xiang Zhu, Jonathan Bamber
Affiliations: University of Bristol, Technical University of Munich, Norwegian Polar Institute
Svalbard has been undergoing amplified climate variability compared to the global mean, leading to significant glacier mass loss. It is therefore an ideal laboratory to study the impact of a warming climate on glacier behaviour. A key process in predicting the glacier mass loss in Svalbard is the calving dynamics of marine-terminating glaciers, which remains poorly understood due to a lack of high-resolution calving front observations. Here we present a new high-resolution calving front dataset of 149 marine-terminating glaciers in Svalbard, comprising 124919 glacier calving front positions from 1985 to 2023. This dataset was generated using a novel automated deep-learning framework for mapping glacier calving front positions using multiple optical and SAR satellite images from Landsat, Terra-ASTER, Sentinel-2, and Sentinel-1 satellite missions. The overall calving front mapping uncertainty across Svalbard is 31 m. The newly derived calving front dataset agrees well with recent decadal calving front observations between 2000 and 2020 (Kochtitzky and Copland, 2022) and an annual calving front dataset between 2008 and 2022 (Moholdt et al., 2022). The calving fronts between our product and the latter deviate by 32 ± 65 m on average. The R² of the glacier calving front change rates between these two products is 0.98, indicating an excellent match. Using this new dataset, we observe pervasive calving front retreats for non-surging glaciers over the past 38 years and widespread seasonal cycles in the calving front position for more than half of the glaciers across Svalbard. The spatial pattern in the seasonal cycles is linked to the different timings of warm ocean water inflow carried by the West Spitsbergen Current around Svalbard, demonstrating the dominant role of ice-ocean interaction in seasonal front changes. The interannual variability in the long-term calving front retreat, however, shows a strong sensitivity to both atmospheric and oceanic warming. 
These findings provide insights into understanding glacier calving mechanisms and can help improve mass loss projections for Arctic glaciers.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E1)

Presentation: Using remote sensing data in Arctic tundra conservation planning

Authors: Antonia Ludwig, Hannes
Affiliations: Remote Sensing Center for Earth System Research | Leipzig University
The advance of forests and the expansion of economic activities associated with global warming will lead to a displacement of the Arctic tundra. The impact of the expanding Arctic tree line, intensifying land-use change, and economic development on the loss of the tundra's unique biodiversity and ecosystem functions has not been adequately documented. Currently, we lack a spatial, qualitative, and quantitative assessment of the current status, especially regarding the dynamics of forest expansion and the biodiversity regions of the tundra. Using freely available Copernicus data products combined with multiple state-of-the-art EO datasets, we developed a framework that allows us to capture the spatio-temporal dynamics of the Arctic tree line and biodiversity patterns in the pan-Arctic tundra. We aim to contribute to the session by presenting our target-oriented multi-sensor time series product that enables an exhaustive characterization of the Arctic tundra ecosystem. Further, we will present our methods for big EO data analyses that can serve as a valuable basis for the identification of target areas for envisaged Arctic biodiversity conservation schemes. By employing dimensionality-reduction methods and trajectory analyses, we are able to identify tundra areas under change and contextualize these changes with area-specific drivers. Finally, our study aims at a spectral characterization of functional tundra properties. By applying machine learning models, we link high-resolution EO data to functional plant traits, which can help in identifying the drivers of large-scale spatial vegetation patterns.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall M1/M2)

Session: D.05.04 Digital Copernicus – shaping Europe’s Digital Future in the Age of AI

The application of AI technology in EO data systems is nearing an inflection point. 2024 was marked by several pilot and experimental projects which demonstrated the power of AI systems over Sentinel data, giving users the ability to browse and interact with data much faster and more efficiently, and to query and process data with far less compute. AI-powered Copernicus data can dramatically lower the barriers to data access and use, bringing millions of new users and customers (both private and corporate) to these observational assets. This has the potential to change the paradigms of data processing and distribution as well as information extraction. However, as progress in AI accelerates and brings enthusiasm to the community, policy makers are increasingly challenged to keep the investment momentum and to create the right conditions for European sovereign AI solutions to continue growing on the continent. This session will feature invited presentations and speakers from ESA, European Commission Directorates (DG DEFIS and DG CONNECT - DestinE) as well as industry and research representatives working on AI solutions for EO.

Presentations and speakers:


Towards AI powered Archives-Level Insights From Copernicus Data: Where Are We and What we Need


  • Mikolaj Czerkawski - AsteriskLabs

Scalable Earth observation Analytics with Cloud Computing and AI Solutions


  • Marcin Kluczek - Data Scientist, CloudFerro S.A.

Self-Supervised Learning for Earth Observation (SSL4EO) Put to the Test - a perspective through the lens of two Horizon Europe projects


  • Conrad Albrecht - DLR

AI-Powered Planet: Destination Earth and the Future of Climate Intelligence in Europe


  • Bertrand Le Saux - European Commission, DG CONNECT

Towards Next-Gen Intelligence: AI for Very High-Resolution SAR Data


  • Andrzej Kucik - Helsing.AI

ESA Framework for the Copernicus Ground Segment transformation in the Age of AI


  • Bethlem Rosich - ESA
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.11/0.12)

Session: C.06.11 RF Interference and Frequency Management challenges in EO missions

RF Interference (RFI) is a growing concern for EO missions due to the increasing demand for spectrum by multiple and competing space and terrestrial systems. RFI might hinder the scientific exploitation of EO missions by causing loss of data, by creating measurement biases or by damaging sensors.
This session will include presentations on some novel approaches to cope with and prevent the negative impact of RFI on EO missions.

Chairs:


  • Yan Soldo – ESA
  • Juliette Challot – ESA
  • Bruno Espinosa – ESA

Presentations and speakers:


Safeguarding Sea Surface Temperature Measurements: Enabling Future Observations in a Crowded RF Environment


  • Hugo Thomas - ANFR

Detecting Human Influence from Space: Correlating SMOS RFI with Geopolitical and Human Activity Signals


  • Ekhi Uranga - ESA

RFI in passive microwave remote sensing: lessons learned from SMAP and future scenarios in EO


  • Paolo de Matthaeis - NASA

RFI detection, characterization and monitoring for Sentinel-1 mission


  • Simone Mancon - Aresys

A unified way to detect RFI in passive sensors: Survey results from 1.4 to 200 GHz


  • Roger Oliva Balague - Zenithal Blue Technologies

Active-passive coexistence: recent research and future challenges


  • Juliette Challot - ESA
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall K2)

Session: E.01.14 The Phi-LabNET: bridging the gap between research and commercial applications

The Phi-LabNET is a network of Phi-Labs implemented in the Participating States, which aims at bridging the gap between the research and the commercial world and channeling research efforts into concrete commercial space products and applications.
The Phi-LabNET leverages the collaboration between ESA, Industry and academia to create a test bed, where the outcomes of research activities are exposed to industry needs, so that their market potential can be rigorously assessed, creating solutions with commercial benefits for the Economic Operators, whilst contributing to the Phi-Lab's internal knowledge. Under the ScaleUp programme, ten (10) Phi-Labs have been implemented as part of the Phi-LabNET. Each Phi-Lab has a specific theme, to bring the necessary focus to research proposals, and aims at becoming a centre of expertise, recognized at European and worldwide level.
Each Phi-Lab is managed by a local administrator, who coordinates the activities in cooperation with ESA.
This session will first give a general overview of the Phi-LabNET and will then present the specific activities of those Phi-Labs that are mostly relevant for the Living Planet Symposium.


Presentations and speakers:


Phi-LabNET Introduction


  • Michele Iapaolo - ESA

ESA Phi-Lab ES "Space technologies and their application to boost climate resilience"


  • Estel Blay - Phi-Lab Spain

ESA Phi-Lab NO "Cutting-edge technology based on space capabilities to meet Arctic needs"


ESA Phi-Lab UK “Space enabled sustainability technologies”


Phi-Lab AT: "Industrial innovation for the upstream domain"


Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall G1)

Session: D.02.08 Explainable AI for Earth Observation and Earth Science - PART 1

The rapid expansion of AI in Earth system science and Earth observation (EO) is accelerating research and innovation. However, for AI-based solutions to be truly impactful in Earth Action initiatives, they must demonstrate explainability, physics-awareness, and trustworthiness to ensure they are fit for purpose.
This session will explore cutting-edge advancements in explainable AI (XAI) methods across diverse EO data types, including Synthetic Aperture Radar (SAR), optical, and hyperspectral data. Contributions are invited on integrating AI with physical models, interpretable deep learning, uncertainty quantification, causal inference, and other approaches to improve transparency, consistency, and robustness in AI-driven solutions.
We welcome case studies and research addressing a variety of Earth science missions and applications, such as SAR processing, Earth system process understanding, image classification, 3D reconstruction, and climate/environmental monitoring. The session will also cover strategies for tackling data gaps, physical inconsistencies, and ensuring responsible, ethical AI use.
Attendees will gain valuable insights into the latest research on explainable AI for EO, with a focus on enhancing model interpretability and trustworthiness in applications that advance Earth observation and Earth system science, supporting actionable solutions for environmental and climate challenges.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall G1)

Presentation: Uncertainty quantification in the retrieval of cloud properties for the atmospheric Sentinel missions using deep neural networks

Authors: Fabian Romahn, Dr. Diego Loyola, Dr. Adrian Doicu, Dr. Vìctor Molina Garcìa
Affiliations: German Aerospace Center (DLR)
In atmospheric remote sensing, the quantities of interest (e.g. cloud properties) are usually not directly observable but can only be inferred indirectly via the measured spectra. To solve these inverse problems, retrieval algorithms are applied that usually depend on complex physical models, so-called radiative transfer models (RTMs). These are very accurate, however also computationally very expensive and therefore not feasible in combination with the near-real-time requirements of operational processing. Machine learning methods, in particular neural networks (NNs), can be used to accelerate the classical remote sensing retrieval algorithms. However, their application may be difficult as they can be used in different ways and there are many aspects to consider as well as parameters to be optimized in order to achieve satisfying results. For the inverse problem in atmospheric remote sensing, there are two main approaches to apply NNs: 1. NNs used as forward model, where a NN approximates the radiative transfer model and can thus replace it in the inversion algorithm. 2. NNs for solving the inverse problem, where a NN is trained to infer the atmospheric parameters from the measurement directly. The first approach is more straightforward to apply. However, the inversion algorithm still faces many challenges, as the spectral fitting problem is generally ill-posed. Therefore, local minima are possible, and the results often depend on the selection of the a-priori values for the retrieval parameters. For the second case, some of these issues can be avoided: no a-priori values are necessary, and as the training of the NN is performed globally, i.e. for many training samples at once, this approach is potentially less affected by local minima. However, due to the black-box nature of a NN, no indication about the quality of the results is available. 
In order to address this issue, novel methods such as Bayesian neural networks (BNNs) and invertible neural networks (INNs) have been developed in recent years. These allow the retrieved values to be characterized by an uncertainty estimate describing the range of values that are likely to produce the observed measurement. Using the Sentinel‑4 and Sentinel‑5P cloud products as an example, we apply and evaluate these new methods in order to demonstrate their potential for future operational algorithms.
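One lightweight way to obtain such predictive uncertainties, alongside BNNs and INNs, is Monte Carlo dropout: keep dropout active at inference and read the spread of repeated forward passes as an uncertainty estimate. A toy NumPy sketch with made-up weights (not the operational retrieval network; the architecture and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "pre-trained" regressor: one hidden layer with fixed random weights,
# standing in for a trained spectra -> cloud-property network.
W1 = rng.normal(size=(16, 4)); b1 = np.zeros(16)
W2 = rng.normal(size=(1, 16)); b2 = np.zeros(1)

def forward(x, drop_rate=0.0):
    """Forward pass; dropout (if enabled) stays active at inference."""
    h = np.maximum(0.0, W1 @ x + b1)           # ReLU hidden layer
    if drop_rate > 0.0:
        mask = rng.random(h.shape) >= drop_rate
        h = h * mask / (1.0 - drop_rate)       # inverted dropout scaling
    return (W2 @ h + b2)[0]

def mc_dropout_predict(x, n_samples=200, drop_rate=0.2):
    """Predictive mean and std from Monte Carlo dropout samples."""
    samples = np.array([forward(x, drop_rate) for _ in range(n_samples)])
    return samples.mean(), samples.std()

x = rng.normal(size=4)                          # a toy measurement vector
mean, std = mc_dropout_predict(x)
print(f"prediction = {mean:.3f} +/- {std:.3f}")
```

The sample standard deviation plays the role of the per-retrieval uncertainty that a black-box NN otherwise lacks; BNNs and INNs provide richer, better-calibrated posteriors at higher cost.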
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall G1)

Presentation: On the derivation of forest parameters from Interferometric SAR features using Deep Learning

Authors: Daniel Carcereri, Stefano Tebaldini, Dr. Paola Rizzoli
Affiliations: German Aerospace Center (DLR), Politecnico di Milano
The wall-to-wall assessment of forest parameters is fundamental for monitoring and preservation efforts [1]. Thanks to their capability to operate in all-weather conditions, synthetic aperture radar (SAR) systems represent an attractive information source for the retrieval of bio-physical parameters. In the last decades, modeling efforts based on physical-based principles have represented the state of the art in forest parameter prediction, while simultaneously offering a theoretical interpretation of the electromagnetic interactions between the radar waves and the forest structure [2]. More recently, deep learning (DL)-based approaches have shown the potential to surpass the estimation accuracy of classical methods by learning the underlying relationship directly from the data without the need for privileged information sources [3]. Despite this, a major downside to DL-based approaches remains the difficult interpretation of the learnt models, due to their high dimensionality and non-linearity. In this work we present a complete analysis on the derivation of forest parameters from single-pass interferometric SAR (InSAR) acquisitions using a DL-based modeling approach. First, we present a state-of-the-art model for the estimation of forest height from TanDEM-X single-pass interferometric data in the challenging context of the Gabonese tropical forests. Subsequently, we focus on assessing the relevance of the different SAR- and InSAR-derived predictors by systematically probing the model for different subsets of input data. Finally, we motivate the relevance of each feature on the regression task based on the theory of SAR and InSAR image formation, and on experimental evidence. In our analyses we find that the interferometric coherence and phase features hold the most relevant information regarding the inversion problem. Despite this, we also show that the backscatter maps help to discriminate the context of the regression problem. 
Crucially, the acquisition digital elevation model (DEM) and the local incidence angle maps are shown to correctly inform the model about the side-looking acquisition geometry of the SAR system. Finally, we use a LiDAR-derived digital terrain model (DTM) to discuss two further contributions. On the one hand, we show that the local mean phase center variability, captured within the acquisition DEM, uniquely allows the model to distinguish between presence and absence of short vegetation. On the other hand, we demonstrate that for the local incidence angle computation the removal of the vegetation bias, present in the acquisition DEM, leads to significant performance improvements. Overall, we demonstrate the need for DL-based approaches to move away from the “black-box” paradigm in favor of a more physics-aware definition of the model predictors, especially when considering SAR and InSAR data as the source of information. In light of upcoming single- and repeat-pass missions, such as NISAR, Biomass, Harmony and ROSE-L, we highlight the necessity for interferometric capabilities in the context of forest parameter estimation. Bibliography: [1] Global Forest Resources Assessment 2020. FAO, 2020. doi: 10.4060/ca9825en. [2] M. Denbina, M. Simard, and B. Hawkins, “Forest Height Estimation Using Multibaseline PolInSAR and Sparse Lidar Data Fusion,” IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing, vol. 11, no. 10, pp. 3415–3433, Oct. 2018, doi: 10.1109/JSTARS.2018.2841388. [3] D. Carcereri, P. Rizzoli, D. Ienco, and L. Bruzzone, “A Deep Learning Framework for the Estimation of Forest Height From Bistatic TanDEM-X Data,” IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing, vol. 16, pp. 8334–8352, 2023, doi: 10.1109/JSTARS.2023.3310209.
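The subset-probing analysis described in the abstract can be complemented by a simple permutation test: shuffle one predictor at a time and measure how much the regression error grows. The sketch below is a generic illustration of this idea, not the authors' protocol; the toy model and feature layout are hypothetical.

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Score each input feature by shuffling its column and measuring
    the resulting increase in mean squared error: the larger the
    increase, the more the model relies on that feature."""
    rng = random.Random(seed)
    def mse(pred, truth):
        return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)
    baseline = mse([model(row) for row in X], y)
    importances = {}
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            deltas.append(mse([model(r) for r in X_perm], y) - baseline)
        importances[j] = sum(deltas) / n_repeats
    return importances

# Toy regressor that uses only feature 0; feature 1 should score ~0.
model = lambda row: 2.0 * row[0]
X = [[float(i), float(-i)] for i in range(20)]
y = [model(row) for row in X]
importances = permutation_importance(model, X, y)
```

In the study's setting the "features" would be whole input channels (coherence, phase, backscatter, DEM, incidence angle) rather than scalar columns, but the logic is the same.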
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall G1)

Presentation: Active Learning with Constrained Virtual Support Vector Machines for Classification of Earth Observation Data

Authors: Prof. Dr. Christian Geiß, Lorenzo Carlassara, Prof. Dr. Ludovico Giorgio Aldo Biagi, Hannes Taubenböck, MSc Patrick Aravena Pelizari
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), University of Bonn, Department of Geography, Department of Civil and Environmental Engineering, Politecnico di Milano, University of Würzburg, Institute of Geography and Geology, Chair of Remote Sensing
Developing methods for information extraction from remote sensing imagery has been one of the major tasks of the scientific remote sensing community over the past few decades. Nowadays, very large deep learning and foundation models are being created, which are often built upon billions of parameters and are very costly in terms of required input data and the energy required for training, among other factors. In contrast, here we follow the idea of learning invariant decision functions for remote sensing image classification using Support Vector Machines (SVM). To achieve this, we generate artificially transformed samples (i.e., virtual samples) from very limited prior knowledge. Labeled samples closest to the separating hyperplane with the maximum margin (i.e., the Support Vectors) are identified by learning an initial SVM model. These Support Vectors are then used to generate virtual samples by perturbing the features to which the model should be invariant. Subsequently, the classification model is trained a second time, using the newly created virtual samples in addition to the SVs, to find a new optimal decision boundary. We implement a self-learning procedure to ultimately prune non-informative virtual samples from a potentially arbitrary invariance generation process, allowing for robust and sparse model solutions. The self-learning strategy jointly considers similarity and margin sampling constraints. In contrast to existing approaches, we extend this concept within the context of active learning. By leveraging the self-learning constraints to simultaneously select the most informative unlabeled samples within an active learning framework, we can achieve excellent model accuracies with very little prior knowledge, while simultaneously maintaining sparse models. Experimental results are obtained from two high-resolution multispectral datasets acquired over the city of Cologne, Germany, and the Hagadera Refugee Camp, Kenya. 
Comparative model accuracy evaluations highlight the favorable performance properties of the proposed methods, particularly in settings with very few labeled samples.
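As a toy illustration of the virtual-sample step described above, the sketch below applies an invariance transform to the support vectors of an already-fitted model and prunes virtual samples that drift away from the decision boundary. The decision function, the transform, and the margin threshold are illustrative assumptions, not the paper's implementation.

```python
def generate_virtual_samples(support_vectors, labels, transforms,
                             decision_fn, margin=1.0):
    """Create virtual samples by applying invariance transforms to the
    support vectors of a fitted model, then prune samples that violate
    a margin constraint: keep a virtual sample only if it stays close
    to the decision boundary (|f(x)| <= margin), i.e. remains
    informative for retraining."""
    virtual = []
    for x, y in zip(support_vectors, labels):
        for t in transforms:
            v = t(x)
            if abs(decision_fn(v)) <= margin:
                virtual.append((v, y))
    return virtual

# Toy linear decision function f(x) = x0 - x1 and a feature-swap
# transform, standing in for e.g. a rotated image patch.
decision_fn = lambda x: x[0] - x[1]
svs = [[1.0, 0.5], [0.2, 0.8]]
labels = [+1, -1]
transforms = [lambda x: [x[1], x[0]]]
virtual = generate_virtual_samples(svs, labels, transforms, decision_fn)
```

In the full method the model is then retrained on the support vectors plus the retained virtual samples, and the generate-prune-retrain loop drives the active learning procedure.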
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall G1)

Presentation: CLeaR Eyes On Earth: A Case for Causally Learned Representations in Earth Observation

Authors: Dagmara Panas, Dr Gary Watmough, Dr Sohan Seth
Affiliations: University Of Edinburgh
Motivation: Machine Learning for Earth Observation (ML4EO) is making great strides in estimating socio-economic indicators in low- and medium-income countries (LMIC) for monitoring sustainable development goals. In the absence of census data, representation learning leverages household surveys and earth observation to provide reliable, granular and frequent estimations of the respective indicators during the intercensal period. For example, among other features, Chi et al. use hundreds of features derived from raw satellite image patches to estimate the distribution of wealth in 135 countries [1]. These approaches, albeit successful, are underpinned by traditional ML techniques, which means they can be difficult to transplant to new geographies. Further, they lack transparency due to the black-box nature of the ML models being used, which hinders their applicability in deriving actionable insight for policymakers. We argue here a case for incorporating the latest developments in ML, summarising the pertinent literature and reporting on a prototype demonstrator study. Representation Learning: Existing ML tools usually apply an end-to-end encoder-decoder approach to make use of the vast amount of unlabelled data through self-supervised learning. Here the encoder, typically a neural network, represents the earth observation data, e.g., a very high-resolution satellite image tile, as a vector embedding of lower dimensionality. The decoder, which is also usually a neural network, then may reconstruct the satellite image tile from the vector embedding. The intention is thus to capture the data in a more parsimonious format than the raw pixels. The outcome of interest, e.g., population density or wealth index, is then estimated from this embedding or representation. 
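A minimal linear stand-in for this encoder-decoder idea is principal component analysis: the "encoder" projects a sample onto a leading direction, and the "decoder" back-projects from that one-dimensional embedding. The sketch below (power iteration, pure Python) is only meant to make the embed-and-reconstruct loop concrete; real systems use deep networks on image tiles.

```python
def top_component(X, iters=200):
    """Power iteration for the leading principal direction of X."""
    n, d = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]
    Xc = [[x - m for x, m in zip(row, mean)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        # One step of w = C v, with C = Xc^T Xc (covariance up to scale).
        proj = [sum(r[k] * v[k] for k in range(d)) for r in Xc]
        w = [sum(Xc[i][k] * proj[i] for i in range(n)) for k in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def encode(row, mean, v):
    """'Encoder': project a sample onto the 1-D embedding."""
    return sum((x - m) * vk for x, m, vk in zip(row, mean, v))

def decode(z, mean, v):
    """'Decoder': reconstruct a sample from its embedding."""
    return [m + z * vk for m, vk in zip(mean, v)]

# Samples lying exactly on a line: a 1-D embedding reconstructs them.
X = [[t, 2 * t, 3 * t] for t in range(10)]
mean, v = top_component(X)
z = encode(X[4], mean, v)
rec = decode(z, mean, v)
```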
The representation inferred by the encoder, however, is usually not interpretable, i.e., the dimensions of the vector embedding do not align with any physical aspect of the image such as the presence of a forest or the count of buildings. Instead, the dimensions are often entangled, i.e., each might simultaneously encode multiple, not necessarily related, high-level features [2]. Disentangled Representation Learning: Although a representation learned in the purely self-supervised manner described above can be useful in a downstream predictive task, its tendency to entanglement carries the risk of learning spurious associations and, as a consequence, poor reusability and robustness [2]. Further, such representations are inscrutable to human understanding and thus difficult to operationalise for decision support. To ameliorate these issues, disentangled representation learning (DRL) proposes to not only optimise for a faithful reconstruction of the data, but also for the independence of dimensions of the encoding [3], optionally matching to high-level labels. This has been shown to improve robustness as well as interpretability [3]. However, an issue remains in situations where the independence assumption is violated, as is often the case with real-world data subject to causal dependencies [4]. Namely, many statistical associations exist because in the physical world there is a cause-effect relationship between features, such as objects casting shadows. Causal Representation Learning: Causal Representation Learning (CRL) is an emerging technology in machine learning that aims to improve the robustness and interpretability of the learned representation, and address the limitations of DRL [4]. The core idea in CRL is to capture the underlying structural causal model that manifests in the observed, often abstract, data, e.g., images.
We expect satellite images to demonstrate causal structure, reflecting the physical mechanisms of the natural world, be it casting shadows, geophysics and ecology, or functional relationships of human-built infrastructure. For example, an image tile of a city may contain buildings, cars, roads and shadows, and we expect that if buildings and cars are present, then that implies shadows nearby. Similarly, since buildings need to be accessed, we might assume the presence of a building to ‘cause’ roads, and in turn roads to ‘cause’ cars. Extracting these underlying causal mechanisms makes the model robust and interpretable in estimating the outcome of interest, e.g., we can explain if poverty is related to the number of cars seen in the image, and additionally, it should not depend on the shadows present in the image beyond the presence of buildings. Example: We report a feasibility case study examining the applicability of the latest advances in CRL to the area of Earth Observation [5]. We utilise a small, hand-annotated dataset comprising 10k images (of which 600 are labelled), inspired by the SpaceNet challenge [6], adapted to the broader task of learning meaningful factors of variation in satellite images, e.g. buildings, roads, vegetation. As the ML tool we select Disentangled gEnerative cAusal Representation (DEAR) by Shen et al. [7], due to its favourable data requirements, competitive performance, and re-usable code base. This framework is intended to work with a known causal graph, which in this case consisted of 3 nodes of Building, Vegetation, and River, with Building and River exerting opposing influence on the presence of Vegetation. Although we do not achieve full convergence, we observe, qualitatively, promising results and a sufficient degree of controllability. We are conducting a more extensive study at present.
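The three-node causal graph used in the demonstrator can be made concrete with a toy structural causal model. The node names follow the abstract; the functional forms, probabilities and effect signs below are illustrative assumptions only, not values from the study.

```python
import random

def sample_scenes(n, seed=0):
    """Toy structural causal model for the assumed graph: Building and
    River are exogenous; Vegetation depends on both, with Building
    suppressing it and River promoting it (signs and magnitudes are
    illustrative assumptions)."""
    rng = random.Random(seed)
    scenes = []
    for _ in range(n):
        building = rng.random() < 0.5          # exogenous node
        river = rng.random() < 0.3             # exogenous node
        p_veg = 0.5 - 0.3 * building + 0.4 * river
        vegetation = rng.random() < p_veg      # caused node
        scenes.append((building, river, vegetation))
    return scenes

def veg_rate(scenes, cond):
    """Empirical vegetation rate among scenes matching a condition."""
    sel = [v for b, r, v in scenes if cond(b, r)]
    return sum(sel) / len(sel)

scenes = sample_scenes(10000)
with_river = veg_rate(scenes, lambda b, r: r)
without_river = veg_rate(scenes, lambda b, r: not r)
with_building = veg_rate(scenes, lambda b, r: b)
without_building = veg_rate(scenes, lambda b, r: not b)
```

A CRL method such as DEAR aims to recover latents whose dependencies mirror exactly this kind of generative structure, rather than an arbitrary entangled embedding.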
Conclusion: The CRL sub-field of ML shows great promise in terms of aiding decision-making, benefitting from the causal grounding in both robustness and interpretability. Although the area is still developing, and real-world application use cases are exceedingly rare, we believe Earth Observation and Sustainability to be a prime candidate for efforts in early adoption. The intersection of causality and Earth Observation for Poverty is already growing, and as noted in a recent review by Sakamoto et al. [8], the inclusion of CRL could complement these efforts. References [1] Chi, Guanghua, et al. "Microestimates of wealth for all low-and middle-income countries." Proceedings of the National Academy of Sciences 119.3 (2022): e2113658119. [2] Eastwood, Cian, and Christopher KI Williams. "A framework for the quantitative evaluation of disentangled representations." 6th International Conference on Learning Representations. 2018. [3] Higgins, Irina, et al. "beta-vae: Learning basic visual concepts with a constrained variational framework." ICLR (Poster) 3 (2017). [4] Schölkopf, Bernhard, et al. "Toward causal representation learning." Proceedings of the IEEE 109.5 (2021): 612-634. [5] Brutkowski, Piotr. “Weakly supervised causal representation learning on satellite imagery” Master’s thesis, University of Edinburgh [6] Hänsch, Ronny, et al. "Spacenet 8-the detection of flooded roads and buildings." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. [7] Shen, Xinwei, et al. "Weakly supervised disentangled generative causal representation learning." Journal of Machine Learning Research 23.241 (2022): 1-55. [8] Sakamoto, Kazuki, Connor T. Jerzak, and Adel Daoud. "A Scoping Review of Earth Observation and Machine Learning for Causal Inference: Implications for the Geography of Poverty." arXiv preprint arXiv:2406.02584 (2024).
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall G1)

Presentation: Iterative construction of a very high resolution remote sensing dataset by leveraging the uncertainty of land use and land cover models

Authors: Axel Rochel, Hélène Savatier-Dupré, Clément Dechesne, Chloé Thenoz, Pierre-marie Brunet
Affiliations: Magellium, Centre National d’Etudes Spatiales (CNES)
The generation of digital twins is a topic that has gained popularity in recent years. More specifically, digital replicas of cities can be very useful for urban planning, disaster simulation, crowd management, etc. Remote sensing images can be leveraged to generate such replicas. In particular, deep learning techniques applied to very high resolution satellite images can reach state-of-the-art performance for the creation of land use and land cover (LULC) maps, thus offering a 2D semantic representation of the city considered. However, such methods lose performance when the sensor changes, even between sensors of the same modality with similar specifications. Several studies have been conducted in order to tackle this problem [RUI23] [YU23] [BEN19]. They highlighted the need to tackle domain gaps and use advanced learning methods to improve semantic segmentation with diverse and limited data sources. Furthermore, ground truth generated for one sensor might not be usable for another sensor if a precise LULC task is desired (with a specific nomenclature). This is due to multiple sources of variation, such as acquisition conditions (luminosity, angle, …) or changes in the scene itself (changes in the LULC). One of the best ways to ensure that the new sensor can be fully and efficiently used is to manually create a new ground truth, whether for training or evaluation. However, manual annotation is an expensive and tedious task, and not all parts of a remote sensing image are equally informative. Therefore, it is crucial to carefully select the areas to annotate. This study, carried out with funding from CNES (the French Space Agency), presents the methodology set up to select optimal areas for manual annotation of Pléiades Neo (PNEO) [AIR21] images (0.3m resolution) to finetune a LULC model. Previously, we successfully trained a LULC model using a manually labelled dataset on Pléiades High Resolution (PHR) [CEN11] images (0.5m resolution) [ROC23].
In order to capitalize on this well-trained model and the previous PHR ground truths available, the model will be reused and updated using fine-tuning on new labelled data. Ideally, when adding new data to the dataset, one wants to select the zones where the model is the most uncertain about its predictions. Indeed, the model is more likely to make wrong predictions in these areas and thus adding them to the dataset is a good way to improve its performance. Uncertainty evaluation is therefore key to detecting and selecting such areas. [GAL16] distinguishes two kinds of uncertainty: the aleatoric uncertainty, which is solely linked to the randomness contained in the input data, and the epistemic uncertainty, which is directly linked to the lack of knowledge of the model. The latter is the part of the uncertainty that can be reduced with more training data and thus the part we focused on. As mentioned by [GAL16], this kind of uncertainty corresponds to the mutual information [SHA48] between the predictive distribution and the posterior over model parameters. They also showed that, for deep learning models, these distributions can be approximated by combining Monte Carlo sampling with dropout, a regularization technique that consists in randomly setting some neurons to zero during training to prevent the network from relying too heavily on a small set of neurons [HIN12]. In their framework, called Monte Carlo Dropout, the dropout is also applied during prediction. In that way, they can sample different models and their associated predictions to approximate the desired distributions. Other regularization techniques can also be leveraged in the same manner. For example, [MOB19] used DropConnect [WAN13], a generalization of dropout that sets weights to zero instead of neurons, to compute epistemic uncertainty.
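For T stochastic forward passes, the mutual-information form of epistemic uncertainty described above reduces to the entropy of the mean prediction minus the mean entropy of the individual predictions. A minimal sketch, with the per-pass class probabilities assumed given:

```python
import math

def epistemic_uncertainty(prob_samples):
    """Mutual information between the prediction and the model
    parameters, approximated from T stochastic forward passes
    (MC Dropout / MC Stochastic Depth):
    I = H(mean prediction) - mean over passes of H(prediction)."""
    T = len(prob_samples)
    K = len(prob_samples[0])
    def entropy(p):
        return -sum(pk * math.log(pk) for pk in p if pk > 0)
    mean_p = [sum(s[k] for s in prob_samples) / T for k in range(K)]
    predictive = entropy(mean_p)                       # total uncertainty
    expected = sum(entropy(s) for s in prob_samples) / T  # aleatoric part
    return predictive - expected                       # epistemic part

# Confident, agreeing passes: epistemic uncertainty ~0.
agree = [[0.95, 0.05]] * 10
# Confident but disagreeing passes: high epistemic uncertainty.
disagree = [[0.95, 0.05]] * 5 + [[0.05, 0.95]] * 5
```

Applied per pixel over the sampled predictions, this yields the epistemic uncertainty maps used for tile selection.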
Instead of using one of these two techniques, our trained neural network relies on stochastic depth [HUA17], a regularization technique better suited to deep CNNs [WAN20] that randomly removes entire blocks of layers during training to virtually reduce the depth of the architecture and thus prevent the vanishing gradient effect. [WAN20] shows that this technique can also be a good way to estimate uncertainties, but they only computed predictive (global) uncertainty and not epistemic uncertainty. They called their framework Monte Carlo Stochastic Depth (MCSD). This work experiments with the use of MCSD to guide the selection of relevant tiles to construct a training dataset. When a new PNEO image is selected to complete the dataset, MCSD is used on the current model to estimate the epistemic uncertainty for each pixel. This uncertainty map can then be used to select tiles with the largest uncertainty scores, corresponding here to the sum of the epistemic uncertainty of each pixel. In order to prevent selecting similar [LEN22] or side-by-side tiles, and so to ensure a good diversity in the dataset, the whole image is divided into sub-images with overlap. On each sub-image, an optimal tile is found. The user can then select the desired number of locally optimal tiles based on the scores or their preferences. This semi-automatic framework facilitates the selection of tiles while allowing customization. Then, once enough new labelled data is added to the entire dataset, the model is retrained and the process is repeated with the updated model. In our experiments, we used sub-images of size 6800x6800 pixels with an overlap of 1700 pixels. Candidate tiles of size 1700x1700 were selected using the uncertainty score for each sub-image. In the end, only a subset of the candidates is effectively added to our dataset after a visual inspection. The uncertainty maps obtained are consistent with the dataset and the performance of the model.
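The tile-selection loop described above can be sketched as follows, with toy map and window sizes in place of the 6800/1700-pixel values used in the study; the score is the summed per-pixel epistemic uncertainty.

```python
def select_candidate_tiles(unc_map, sub, overlap, tile):
    """Slide sub-images (with overlap) over a per-pixel epistemic
    uncertainty map and, inside each sub-image, pick the tile whose
    summed uncertainty is largest. Returns (score, row, col) tuples
    sorted by decreasing score."""
    H, W = len(unc_map), len(unc_map[0])
    step = sub - overlap
    def tile_score(r0, c0):
        return sum(unc_map[r][c]
                   for r in range(r0, r0 + tile)
                   for c in range(c0, c0 + tile))
    candidates = []
    for r in range(0, H - sub + 1, step):
        for c in range(0, W - sub + 1, step):
            # Best non-overlapping tile within this sub-image.
            best = max(((tile_score(tr, tc), tr, tc)
                        for tr in range(r, r + sub - tile + 1, tile)
                        for tc in range(c, c + sub - tile + 1, tile)),
                       key=lambda t: t[0])
            candidates.append(best)
    return sorted(candidates, reverse=True)

# Toy 8x8 uncertainty map with one hotspot at pixel (5, 6).
unc = [[0.0] * 8 for _ in range(8)]
unc[5][6] = 5.0
cands = select_candidate_tiles(unc, sub=4, overlap=2, tile=2)
```

The user would then keep the top-scoring candidates (after visual inspection), annotate them, and retrain, as in the semi-automatic framework above.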
For example, at the beginning, when the dataset contained almost no water areas, the uncertainty maps highlighted all the lake, river and sea areas. After retraining on the next images, which contained much more water, the uncertainty on this class disappeared and the maps highlighted instead the most difficult classes to learn: the railways, which is the rarest class, and the low vegetation, for which there is a strong ambiguity between dry vegetation and bare ground. This work thus seems to confirm the possibility of using MCSD to estimate epistemic uncertainty. From a practical perspective, the method set up represents a way to ensure adding relevant data to the dataset in an efficient manner, which is important given the budget necessary to manually annotate images at a pixel level. Its framework is quite general, and so the method can be applied to images from other sensors such as SPOT, SENTINEL, etc. References [RUI23] Rui, X., Li, Z., Cao, Y., Li, Z., & Song, W. (2023). DILRS: Domain-incremental learning for semantic segmentation in multi-source remote sensing data. Remote Sensing, 15(10), 2541. [YU23] Yu, A., Quan, Y., Yu, R., Guo, W., Wang, X., Hong, D., ... & He, P. (2023). Deep learning methods for semantic segmentation in remote sensing with small data: A survey. Remote Sensing, 15(20), 4987. [BEN19] Benjdira, B., Bazi, Y., Koubaa, A., & Ouni, K. (2019). Unsupervised domain adaptation using generative adversarial networks for semantic segmentation of aerial images. Remote Sensing, 11(11), 1369. [AIR21] Airbus Defence and Space (2021). Pléiades Neo. First-rate performance for trusted intelligence. https://www.airbus.com/en/space/earth-observation/earth-observation-portfolio/pleiades-neo. [CEN11] Centre National d'Etudes Spatiales (CNES), Airbus Defence & Space, Thales Alenia Space (2011). Pléiades. Two satellites to observe Earth close up. https://cnes.fr/en/projects/pleiades. [ROC23] Rochel, A., Dechesne, C., Chan-Hon-Tong, A., Brunet, P. (2023).
Multiclass semantic segmentation with very high-resolution satellite images. In Soille, P., Lumnitz, S., Albani, S. (Eds). Proceedings of the 2023 conference on Big Data from Space (pp. 21-24). Publications Office of the European Union, Luxembourg, 2023, doi: 10.2760/46796, JRC135493. [GAL16] Gal, Y. (2016). Uncertainty in deep learning. [SHA48] Shannon, C. E. (1948). A mathematical theory of communication. The Bell system technical journal, 27(3), 379-423. [HIN12] Hinton, G. E. (2012). Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580. [MOB19] Mobiny, A., Yuan, P., Moulik, S. K., Garg, N., Wu, C. C., & Van Nguyen, H. (2021). Dropconnect is effective in modeling uncertainty of bayesian deep networks. Scientific reports, 11(1), 5458. [WAN13] Wan, L., Zeiler, M., Zhang, S., Le Cun, Y., & Fergus, R. (2013, May). Regularization of neural networks using dropconnect. In International conference on machine learning (pp. 1058-1066). PMLR. [HUA17] Huang, G., Sun, Y., Liu, Z., Sedra, D., & Weinberger, K. Q. (2016). Deep networks with stochastic depth. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14 (pp. 646-661). Springer International Publishing. [WAN20] Wandzik, L., Garcia, R. V., & Krüger, J. (2020). Uncertainty quantification in deep residual neural networks. arXiv preprint arXiv:2007.04905. [LEN22] Lenczner, G., Chan-Hon-Tong, A., Le Saux, B., Luminari, N., & Le Besnerais, G. (2022). DIAL: Deep interactive and active learning for semantic segmentation in remote sensing. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15, 3376-3389.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall G1)

Presentation: Modelling Causal Networks for Global Burned Area in Multiscale Atmospheric Interactions

Authors: Amir Mustofa Irawan, Mercè Vall-llossera, Carlos López-Martínez, Adriano Camps, Gerard Portal, Miriam Pablos, Alberto Alonso-González
Affiliations: Universitat Politècnica de Catalunya, Institut d’Estudis Espacials de Catalunya IEEC, Institut d’Estudis Espacials de Catalunya IEEC, UAE University CoE, Institut d’Estudis Espacials de Catalunya IEEC, Universitat Politècnica de Catalunya, Universitat Politècnica de Catalunya
Wildfires pose serious risks to human safety, economic stability, and ecosystems, with climate change intensifying their impacts. Understanding these interactions is an essential prerequisite for predicting, for example, the response of global burned area to multiscale atmospheric interactions. However, many empirical studies in this field rely on correlative approaches, with only a few applying causal discovery methods. This study explores the potential of a recently proposed causal graph discovery algorithm to reconstruct the causal dependency structure underlying multiscale atmospheric interactions. Building on prior research, it investigates the causal relationships between Earth systems and wildfires using data-driven modelling. Specifically, the Peter–Clark algorithm with momentary conditional independence (PCMCI) is applied to uncover causal links in time series data. This study uses a compilation of global data on fire drivers and burned areas from SeasFire Cube (Karasante et al., 2023) spanning the years 2011 to 2020, all aligned on a consistent spatio-temporal grid (0.25° × 0.25° with eight-day intervals). This study employs unsupervised classification to define fire regimes using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. This algorithm differentiates between high- and low-density global fire points to form clusters and is ideal for identifying irregularly shaped clusters (Deng, 2020). Cluster labels are assigned based on the following rules: 1) Core Point: A core point is a data point that has at least min_samples points within a specified distance (radius), known as eps (epsilon). 2) Border Point: A border point is not a core point itself but lies within the eps distance of a core point. If the eps radius of one core point intersects with another, they are linked together as part of the same cluster.
When there is a gap greater than eps, a new cluster is formed for other core points found beyond that distance. 3) Noise Point: Any point that is neither a core point nor a border point is classified as noise. This work uses local inputs, including soil moisture anomalies (∆SM), vapor pressure deficit anomalies (∆VPD), and normalized difference vegetation index anomalies (∆NDVI). At the continental scale, the study combines 500-hPa geopotential height anomalies (∆Z500) and the 300-hPa meridional wind component (v300) to explore upper-tropospheric conditions. Additionally, NAO and ENSO (represented by the Southern Oscillation Index, SOI) are used to explain global climate indices. The output variable is global burned area (BA). Anomaly values, denoted with ∆ in this study, are calculated to remove the seasonality. The time resolution is eight-day intervals based on the SeasFire Cube time resolution. The process begins by computing the eight-day periods for each year from 2011 to 2020, using the SeasFire Cube scale to generate a time series. Next, the climatological value is determined by averaging the data for each month over the period from 2001 to 2020. Then, the calculated climatological value is subtracted from the eight-day smoothed time series to produce anomalies on an eight-day scale. This study employs the Peter–Clark (PC) algorithm (Spirtes and Glymour, 1991), a widely used constraint-based approach for learning causal structures, represented as directed acyclic graphs (DAGs). A DAG comprises variables (nodes) and directed edges (arrows) that indicate causal relationships, ensuring no loops or cycles. All shared causes between variable pairs are explicitly included. The PC algorithm infers causal directions using conditional independence tests and follows these key steps: 1) Step 1: Begin with a complete undirected graph on the set of variables, indicating potential causal relationships between all pairs.
2) Step 2: Perform conditional independence (CI) tests between all pairs of variables (Xi, Xj) given a set Z of other variables. If Xi and Xj are conditionally independent given Z, remove the edge between them. 3) Step 3: Increment the size of Z and repeat until all pairs have been tested or Z includes all variables. 4) Step 4: Using the pruned undirected graph, determine the direction of edges by identifying collider-structures (also known as v-structures) and applying rules to avoid cycles. Furthermore, this work applies PCMCI (Runge et al., 2019), which integrates the PC algorithm with momentary conditional independence (MCI) tests to enhance recall and create a more complete DAG. PCMCI captures time-delayed causal effects, where changes in a cause at time t-τ (with τ = 10 for a 10-step lag) influence effects at time t. Link orientations follow the climate scale hierarchy, from global to local, with adjustments made to resolve conflicts. Partial correlations (ParCorr) are used for conditional independence tests, considering a maximum delay of three months (τ_max = 10) and a significance level of α=0.05 to construct the causal graph. Based on the DBSCAN results, this study identifies eight fire regimes for analysis, which are also depicted in the findings. These regions are defined based on global fire density and include the following: 1. Boreal America (BAm): 49°N–72°N, 168°W–55°W 2. North America (NAm): 20°N–49°N, 126°W–60°W 3. South America (SAm): 0°–30°S, 82°W–30°W 4. Southern Europe (SEur): 30°N–45°N, 10°W–48°E 5. South Africa (SAf): 18°S–36°S, 20°W–45°E 6. Southeast Asia (SEA): 5°N–35°N, 65°E–120°E 7. Siberia (Sib): 45°N–72°N, 40°E–175°W 8. Australasia (AUS): 0°–45°S, 95°E–160°E. Furthermore, causal graphs for the eight fire regimes were simulated. The colours of the connecting lines (MCI) indicate the strength of the causal relationships, red for positive associations and blue for negative ones. 
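The ParCorr conditional independence test at the core of the procedure above can be sketched for a single conditioning series: regress both variables on the conditioning variable and correlate the residuals. The series below are synthetic; a vanishing partial correlation despite a strong raw correlation indicates a spurious link via a common cause, which is exactly the kind of edge the PC step removes.

```python
import math, random

def parcorr(x, y, z):
    """Partial correlation of x and y given a single conditioning
    series z: regress each on z (ordinary least squares with
    intercept), then correlate the residuals."""
    def residuals(a, b):
        n = len(a)
        mb, ma = sum(b) / n, sum(a) / n
        cov = sum((bi - mb) * (ai - ma) for ai, bi in zip(a, b))
        var = sum((bi - mb) ** 2 for bi in b)
        slope = cov / var
        return [ai - (ma + slope * (bi - mb)) for ai, bi in zip(a, b)]
    rx, ry = residuals(x, z), residuals(y, z)
    sx = math.sqrt(sum(v * v for v in rx))
    sy = math.sqrt(sum(v * v for v in ry))
    return sum(a * b for a, b in zip(rx, ry)) / (sx * sy)

# z drives both x and y (a common cause): the raw correlation is
# strong, but conditioning on z removes it.
rng = random.Random(1)
z = [rng.gauss(0, 1) for _ in range(2000)]
x = [zi + rng.gauss(0, 0.1) for zi in z]
y = [zi + rng.gauss(0, 0.1) for zi in z]
mx, my = sum(x) / len(x), sum(y) / len(y)
r_raw = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / math.sqrt(sum((a - mx) ** 2 for a in x)
                     * sum((b - my) ** 2 for b in y)))
r_partial = parcorr(x, y, z)
```

In PCMCI proper, the conditioning set contains the lagged parents of both variables rather than a single series, and significance is judged against α = 0.05, but the residual-correlation idea is the same.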
The numbers along the arrows denote the lagged relationships, measured in eight-day intervals, while the arrows themselves indicate the direction of causality. Additionally, the node colours (auto-MCI) reflect the level of autocorrelation for each variable within the time series. The PCMCI method effectively reconstructs teleconnections between multiscale atmospheric variables in each region. In tropical areas (SAm, SAf, and AUS), the NAO connection weakens or disappears, while ENSO (represented by SOI indices) becomes stronger. Specifically, in AUS, the ENSO connection directly influences local variables and burned area, with or without mediation. This method accurately captures the influence of SOI, which is negatively associated with drier conditions (∆VPD) in AUS. In extratropical (NAm) and polar regions (BAm and Sib), NAO directly influences upper-tropospheric wind perturbations (∆Z500 and v300), intensifying the extratropical jet stream. Furthermore, the maximal MCI values and corresponding lags are plotted for the links Xt-τ → BA (τ=1,2,3) where X represents one of the climatic drivers: ∆Z500, ∆VPD, or ∆SM. The result shows the climatic driver with the largest MCI for each grid point. This spatial MCI detects a regionally varying influence of climatic drivers. As expected, boreal regions are strongly influenced by ∆Z500, detected as jet stream wave patterns in NAm, BAm, and Sib. PCMCI also reveals that NAO positively influences precipitation in the SEA region, aligning with earlier findings (Hunt and Zaz, 2023) linking NAO’s positive phases to increased winter precipitation over northwest India and a stronger pressure gradient (∆Z500) intensifying the subtropical jet stream. Furthermore, PCMCI confirms that warming-induced dryness (∆VPD) driven by soil moisture (∆SM) contributed to increased burned area. 
The identification of the causal graph and the estimation of causal effects further highlight ∆Z500 as the primary and direct causal influence on surface dryness, significantly exacerbating the extreme spread of burned areas in the polar and extratropical regions. This mechanism traps heat within high-pressure zones, contributing to extreme heatwave conditions (Irawan et al., 2024). Consequently, the findings underscore the importance of developing burned-area estimation systems that integrate and assess the interactions between global, continental, and local factors comprehensively. In contrast to classical correlative approaches, the findings are distilled into a few meaningful sets of relationships, offering valuable insights for evaluating terrestrial ecosystem models. References Deng, D., 2020. DBSCAN Clustering Algorithm Based on Density, in: 2020 7th International Forum on Electrical Engineering and Automation (IFEEA). Presented at the 2020 7th International Forum on Electrical Engineering and Automation (IFEEA), IEEE, Hefei, China, pp. 949–953. https://doi.org/10.1109/IFEEA51475.2020.00199 Hunt, K.M.R., Zaz, S.N., 2023. Linking the North Atlantic Oscillation to winter precipitation over the Western Himalaya through disturbances of the subtropical jet. Clim. Dyn. 60, 2389–2403. https://doi.org/10.1007/s00382-022-06450-7 Irawan, A.M., Vall-llossera, M., López-Martínez, C., Camps, A., Chaparro, D., Portal, G., Pablos, M., Alonso-González, A., 2024. Land, jet stream, and other atmospheric effects on burned area estimation during the South Asian heatwave of 2022. Int. J. Appl. Earth Obs. Geoinformation 128, 103720. https://doi.org/10.1016/j.jag.2024.103720 Karasante, I., Alonso, L., Prapas, I., Ahuja, A., Carvalhais, N., Papoutsis, I., 2023. SeasFire as a Multivariate Earth System Datacube for Wildfire Dynamics. https://doi.org/10.48550/ARXIV.2312.07199 Runge, J., Nowack, P., Kretschmer, M., Flaxman, S., Sejdinovic, D., 2019.
Detecting and quantifying causal associations in large nonlinear time series datasets. Sci. Adv. 5, eaau4996. https://doi.org/10.1126/sciadv.aau4996 Spirtes, P., Glymour, C., 1991. An Algorithm for Fast Recovery of Sparse Causal Graphs. Soc. Sci. Comput. Rev. 9, 62–72. https://doi.org/10.1177/089443939100900106
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.31/1.32)

Session: D.04.01 Data Access and Interoperability to enable Infrastructure-agnostic Science Reproducibility

Accessing a consolidated set of data across different cloud-based ecosystems is a challenge for full science reproducibility and full infrastructure independence. Differences in data organization, formats, availability and access protocols, and their change over time, require scientists to perform frequent and costly adaptations of their applications. This session focuses on solutions that ensure uniform data searchability and access from multiple cloud environments, with the purpose of enabling interoperability of platforms and applications and full reproducibility of executions.

Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: OGC GeoDataCube API - Towards harmonizing cloud processing on data cubes

Authors: Peter Zellner, Jérôme Jacovella-St-Louis, Michele Claus, Juraj Zvolensky, Alexander Jacob, Francis Charette-Migneault, Matthias Mohr, Jonas
Affiliations:
Cloud computing in Earth System Sciences (ESS) has become inevitable due to the ever-growing amounts of data and the need for cross-disciplinary collaboration. Cloud computing has therefore evolved rapidly over recent years, leading to a fragmented landscape of offerings in which no consolidation has yet taken place. Thus, users are confronted with a wide spectrum of solutions that often aim at the same goal. User surveys show strong interest in moving from traditional workflows to cloud processing in ESS. Nevertheless, some major obstacles remain unsolved. Users report that the steep learning curve, the high degree of fragmentation and especially the lack of interoperability between platforms, data and workflows are hindering their transition (Wagemann et al. 2021, DiLeo et al. 2024). To reduce this complexity and improve interoperability, the Open Geospatial Consortium's GeoDataCube Application Programming Interface (OGC GDC API) aims at harmonizing existing solutions for accessing and processing data cubes (multi-dimensional data structures organized along multiple dimensions) on cloud platforms by working towards a standardized API. The suite of standardized OGC APIs (e.g. OGC API Coverages, OGC API Processes) covers a large part of the functionality. At the same time, openEO offers a well-defined API for cloud processing in ESS. Accordingly, the main challenges are (a) aligning the solutions where identical functionalities are solved in different ways and (b) identifying the gaps in data cube functionalities, from selective data access to producing interoperable results without intermediate downloads (e.g. combining data access and processing, tracing of provenance). This talk will present the current status of the OGC GDC API as advanced in OGC Testbed-20 and the OGC GDC Standard Working Group.
The core capabilities of the OGC GDC API will be presented, which include combining data access and analysis; allowing access to different data sets in the form of data cubes with the possibility of filtering, subsetting and aggregating (reshaping the original data cube to custom parameters); and tracing the provenance of results. The OGC GDC API draft consists of a set of modular profiles which allow a large range of functionality when implemented together (GDC API Core, Partial Access, Resampled Access, GDC Data Processing). The functionality will be showcased through a use case calculating the Vegetation Health Index by combining different data sources, such as optical satellite data and meteorological data, highlighting the current status of interoperability between implementations of the OGC GDC API. Opening the outlook to the future, the results of the usability testing carried out in Testbed-20 will be presented. Finally, we want to open the discussion on the future directions of the OGC GDC API and its role in the ecosystem of standardized ESS cloud computing.
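The filtering, subsetting and aggregation operations named above can be illustrated with plain numpy. This is only a sketch of the semantics a data cube request expresses; the GDC API and openEO express the same operations over HTTP, and all dimensions and values here are synthetic:

```python
import numpy as np

# Illustrative data cube with dimensions (time, y, x),
# e.g. 12 eight-day composites on a 100x100 grid.
cube = np.random.default_rng(1).random((12, 100, 100))

# Subsetting: select a temporal window and a spatial region
# (a GDC/openEO request would express this as bbox + datetime filters).
subset = cube[3:9, 20:60, 20:60]

# Aggregation: reduce the time dimension to a single composite,
# i.e. reshape the original cube to custom parameters.
composite = subset.mean(axis=0)

print(subset.shape)      # (6, 40, 40)
print(composite.shape)   # (40, 40)
```

The value of a standardized API is that this subset-then-aggregate recipe is portable between platforms instead of being rewritten per provider.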
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: EOEPCA+: a method for EO Exploitation Platform Common Architecture

Authors: Richard Conway, James Hinton, Chandra Taposeea-Fisher, Claudio Iacopino, Salvatore Pinto
Affiliations: ESA, Telespazio UK Ltd
The ‘Exploitation Platform’ concept derives from the need to access and process an ever-growing volume of data. Many web-based platforms have emerged - offering access to a wealth of satellite Earth Observation (EO) data. Increasingly, these are collocated with cloud computing resources and applications for exploiting the data. Rather than downloading the data, the exploitation platform offers a cloud environment with access to EO data and associated compute and tools that facilitate the analysis and processing of large data volumes. The Exploitation Platform benefits users, data providers and infrastructure providers. Users benefit from the scalability & performance of the cloud infrastructure, the added-value services offered by the platform – and avoid the need to maintain their own hardware. Data hosted in the cloud infrastructure reaches a wider audience and Infrastructure Providers gain an increased cloud user base. Users are beginning to appreciate the advantages of exploitation platforms. However, the market now offers a plethora of platforms with various added value services and data access capabilities. This ever-increasing offer is rather intimidating and confusing for most users. Users often face challenges such as inconsistent interfaces, proprietary software and limited interoperability. To fully exploit the potential of these complementary platform resources we anticipate the need to encourage interoperation amongst the platforms, such that users of one platform may consume the services of another directly platform-to-platform. EOEPCA (EO Exploitation Platform Common Architecture) is a European Space Agency (ESA) funded project with the goal to define and agree a re-usable exploitation platform architecture using standard interfaces to encourage interoperation and federation between operational exploitation platforms - facilitating easier access and more efficient exploitation of the rapidly growing body of EO and other data. 
Interoperability through open standards is a key guiding force for the Common Architecture. EOEPCA adheres to standards from organisations such as the Open Geospatial Consortium (OGC) and follows best practices in data management, including implementation of OGC Web Services and emerging OGC API specifications for features, coverages and processes. Platform developers are more likely to invest their efforts in standard implementations that have wide usage; off-the-shelf clients and software are more likely to be found for standards-based solutions. The EOEPCA system architecture is designed to meet a set of defined use cases for various levels of user, from expert application developers to data analysts and end users. The architecture is defined as a set of Building Blocks (BBs) exposing well-defined open-standard interfaces. These include Identity and Access Management, Resource Discovery, Data Access, Processing Workflows, Data Cube Access, Machine Learning Operations, and more. Each of these BBs is containerized for Kubernetes deployment, which provides an infrastructure-agnostic deployment target. The exploitation platform is conceived as a ‘virtual work environment’ where users can access data, develop algorithms, conduct analysis and share their value-adding outcomes. The EOEPCA architecture facilitates this through a Workspace BB that provides collaboration environments for projects (groups of users), including dedicated storage and services for analysis, processing and publishing of added-value data and applications. This is supported by an Application Hub building block that provides interactive web tooling for analysis, algorithm development and data exploitation, and a web dashboard capability through which added-value outcomes can be showcased. Our presentation will highlight the generalised architecture, standards, best practices and open-source software components available.
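The Processing Workflows building block builds on OGC API Processes. As a sketch of what a client sends, the snippet below constructs the JSON body of an execute request (a POST to /processes/{processId}/execution in that standard); the process name and input fields are hypothetical, not EOEPCA's actual interface:

```python
import json

# Hypothetical execute-request body for an OGC API - Processes endpoint.
# Input names ("collection", "bbox", "datetime") are illustrative only.
execute_request = {
    "inputs": {
        "collection": "sentinel-2-l2a",
        "bbox": [11.0, 46.0, 12.0, 47.0],
        "datetime": "2024-06-01/2024-06-30",
    },
    "outputs": {
        "result": {"format": {"mediaType": "image/tiff; application=geotiff"}}
    },
}

body = json.dumps(execute_request)
print(json.loads(body)["inputs"]["collection"])  # sentinel-2-l2a
```

Because the body is plain JSON against a standardized path, the same request can target any conforming platform, which is the interoperation the architecture aims for.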
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: The EO DataHub: federating public and commercial EO data sources to deliver an innovative analysis platform for the UK EO sector

#zarr #stac #cog #kerchunk

Authors: Philip Kershaw, Piotr Zaborowski, Alastair Graham, Daniel Tipping, Dave Poulter, Rhys Evans, Fede Moscato, Prof John Remedios, Jen Bulpett, Alasdair Kyle, Alex Hayward, Alex Manning
Affiliations: Centre for Environmental Data Analysis, RAL Space, STFC, National Centre for Earth Observation, Oxidian, Telespazio UK, Open Geospatial Consortium
The Earth Observation (EO) Data Hub is a new national platform that has been developed to serve global EO data and analytics capabilities for UK research, government and business use. It seeks to address challenges facing the EO community identified in the findings of user engagement studies which consistently point to the need to better integrate between different data sources and platforms and provide access to assured datasets in a format which is readily usable for analysis and application. The goals of the Hub then can be summarised in three core objectives: 1) deliver unified access to data from multiple sources, integrating the data outputs from UK-specific expertise in EO and climate science together with data more broadly from other public and commercial sources; 2) provide an integrated environment of tools and services as building blocks to enable developers and EO technical experts to process and transform data creating new value-added products; 3) provide dedicated quality assurance services to better inform users about fitness-for-purpose of data for a given application. Nearing the completion of its initial 2-year pathfinder phase of development, the Hub has been funded through a wider UK EO investment package from UK government. The project is led by NERC’s National Centre for Earth Observation - Leicester University and the Centre for Environmental Data Analysis, at RAL Space, STFC Rutherford Appleton Laboratory. An initial consortium was formed amongst public sector organisations including the Met Office, Satellite Applications Catapult, National Physical Laboratory and UK Space Agency. 
These were joined by industry partners brought in via three major procurements: first, to implement the Hub Platform software (Telespazio); second, to provide commercial data sources (Airbus, Planet and Earth-i); and finally, to implement exemplar applications (SparkGeo, Spyrosoft and Oxidian) to test and validate the Hub Platform’s capabilities as a tool to accelerate EO data application development. The Hub platform draws directly from ESA’s EO Exploitation Platform Common Architecture (https://eoepca.org) and OGC standards to build an interoperable ecosystem of open-source tools and services. The major components are:
1) Resource Catalogue – a searchable inventory of content provided by or via the Hub (primarily EO datasets)
2) Workflow Runner (WR) – based on the OGC ADES (Application Deployment and Execution Service)
3) Workflow and Analysis System (WAS) – Jupyter Notebook service
4) Data Access Services (DAS) – interfaces for access to data for clients of the Hub and integrations with data providers
5) Quality Assurance service (QA) – support for running quality assessments of datasets and storing them in a searchable inventory
For the purposes of this submission, we focus on the Resource Catalogue – the ability to assemble discovery metadata from multiple data providers and provide a unified search interface – and the Data Access Services. The DAS provide the interface between the Hub and data providers. Initial data was selected for the platform based on UK strengths and the anticipated needs of the target user community:
- Climate observations: The UK Earth Observation Climate Information Service (EOCIS) addresses 12 categories of global and regional essential climate variables. It includes new climate data at high resolution for the UK specifically.
- Climate projections – Global: CMIP6; regional: CORDEX and the UK high-resolution Met Office UKCP (UK Climate Projections)
- Commercial satellite data: Planet – PlanetScope and SkySat; Airbus – optical and SAR archive
- Sentinel data access: leveraging SentinelHub and the CEDA Archive, including ARD for Sentinel-1 and Sentinel-2 over the UK and territories
This data integration presented two high-level challenges – the relative disconnect between the climate and EO domains, and the fundamentally different access processes for open public datasets and commercial data products. Climate data is almost entirely represented by gridded CF-netCDF, typically equivalent to Level 3 data products and above in the EO world; satellite data is based on scenes and uses alternative data formats such as COG (Cloud-Optimised GeoTIFF). STAC (Spatio-Temporal Asset Catalog) was selected as the standard to support data discovery based on its increasing adoption in the EO sector, its active development community and its extensible model, which makes it flexible for the inclusion of new data types. Though STAC has its origins in EO, significant prior work carried out by CEDA for the Earth System Grid Federation (a distributed infrastructure for sharing climate projections data) has established a profile for use of STAC with climate model outputs stored using the CF-netCDF and Zarr data formats and Kerchunk, a technology that aggregates netCDF files and presents them through a Zarr-like interface. Commercial providers Planet and Airbus each provide their own interfaces for data discovery, and consequently different strategies were adopted for integration with the Hub. With Planet, it was relatively trivial to develop a STAC façade to their data discovery API. However, the sheer volume of Planet’s data catalogue meant that harvesting all its content into the Hub’s central catalogue would be prohibitive.
As a compromise, the top-level catalogues are harvested into the Hub’s catalogue, but subsidiary metadata (STAC items and assets) are discovered by invoking the dedicated STAC proxy service. For Airbus, metadata harvesting was implemented using an established Python client toolkit and the content translated into STAC format. Commercial EO data access follows a flow from discovery to ordering and finally delivery to the user’s chosen location. In addition to the discovery aspects, the project team has been working together to develop a unified interface for data ordering so that a user can select the desired products from Planet and Airbus and arrange for them to be staged into their group workspace on the Hub Platform for subsequent use. This staging can be built as workflow packages. Data adapters enable plug-and-play configuration of future data sources and standards based harmonised data access. Besides a ‘horizontal’ integration across different data providers into the Hub, a ‘vertical’ integration could also be considered i.e. the flow from data producer, provision via the Hub Platform and access by a consuming client application. Fortunately, with new climate observations datasets being developed as part of the EOCIS project (https://eocis.org), it has been possible to have direct dialogue with the data producers and influence how the data is being produced to best meet end-application developers’ needs. The EOCIS project team has agreed to generate STAC metadata files alongside netCDF data products to better facilitate indexing of content into search catalogues. Further still, it has been possible to consider the formulation of STAC content tailored to the specific needs of consumer applications such as TiTiler, an Open-Source map tiling service implementation. The STAC metadata can be integrated at the point of data production to include data ranges and default colour table settings needed by TiTiler. 
Besides the agreement of metadata content, work has also included the selection of appropriate data chunking strategies for serialization of the data to optimize it for analysis by applications. Recent work to integrate an Xarray interface into TiTiler has meant that it is possible to use it as a unified solution for visualisation of climate (netCDF and Kerchunk and Zarr derivatives) and EO datasets (COG format). This has been used to advantage in SparkGeo’s Climate Asset Risk Analysis Tool (CLARAT) and Spyrosoft’s EventPro Landcover analysis web application. The third application supplier, Oxidian has been tasked with developing integrations and training to facilitate use of the EO DataHub with other applications and platforms. This includes a Python client for the Hub, pyeodh together with example Jupyter Notebooks as well as integrations with QGIS (https://qgis.org). Running in parallel with the data discovery and access capabilities, a QA function has been under development. NPL working with Telespazio have developed a system of QA workflows whereby quality assessments of data products can be carried out using the Hub Platform’s Workflow Runner. These assessments are serialized into a dedicated QA repository which is linked to the data discovery search index, providing users with quality information appended to the data of interest. The current work focuses on QA for optical sensor satellite data but could be expanded to support characterization for other sensor types or even for uncertainty information for climate projections. In summary then, the EO DataHub has sought to build a unique offering which applies UK EO science expertise, but which also builds on the existing capabilities from commercial data providers. 
The development has demonstrated the value of a federated approach to integrating data sources across both public and commercial providers and the EO and climate domains, as well as the benefits and demands of public cloud infrastructure deployments, consolidated in a comprehensive CI/CD and accounting framework. Working in an integrated team across applications, Hub middleware and data suppliers has borne fruit in addressing some of the challenges of narrowing the gap between data access and effective utilisation in applications. As next steps, the team are now building on the work of the pilot applications to establish a broader engagement with early adopters and bring the service up to a full operational footing.
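Much of the federation described above hinges on STAC discovery metadata. As a sketch, the snippet below builds a minimal STAC Item as a plain Python dict, following the required fields of the STAC Item specification; the id, footprint and asset href are invented for illustration and are not actual Hub records:

```python
import json
from datetime import datetime, timezone

# Minimal STAC Item (no pystac dependency). Required fields per the
# STAC spec: type, stac_version, id, geometry, bbox, properties.datetime,
# assets, links. The concrete values below are hypothetical.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "eocis-sst-20240601",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-8.0, 49.0], [2.0, 49.0], [2.0, 61.0],
                         [-8.0, 61.0], [-8.0, 49.0]]],
    },
    "bbox": [-8.0, 49.0, 2.0, 61.0],
    "properties": {
        "datetime": datetime(2024, 6, 1, tzinfo=timezone.utc).isoformat(),
    },
    "assets": {
        "data": {
            "href": "https://example.org/data/sst-20240601.nc",
            "type": "application/x-netcdf",
            "roles": ["data"],
        }
    },
    "links": [],
}

print(json.dumps(item)[:40])  # serializes cleanly for catalogue ingestion
```

Generating such records alongside the netCDF products, as the EOCIS team agreed to do, is what lets one search index span both climate and EO holdings.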
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: Cloud-Native Raster Data: Revolutionizing Geospatial Analysis

#zarr #stac #cog #cloud-native

Authors: Vincent Sarago, Zach Deziel, Emmanuel Mathot
Affiliations: Development Seed
The geospatial data landscape is undergoing a radical transformation, driven by the exponential growth of remote sensing, satellite imagery, and environmental monitoring technologies. Traditional approaches to raster data management are rapidly becoming obsolete, unable to keep pace with the increasing volume, complexity, and computational demands of modern geospatial analysis. This talk offers a comprehensive exploration of cloud-native raster data formats, illuminating the technological revolution that is reshaping how we store, access, and process geospatial information. Raster data—the fundamental building block of geographic imaging—has long been constrained by significant technical limitations. Historically, researchers and data scientists faced formidable challenges: downloading entire massive datasets, managing prohibitive storage costs, and navigating performance bottlenecks that could stall critical research and analysis. The transition to cloud-native formats represents a paradigm shift, offering unprecedented efficiency, scalability, and accessibility. This presentation will provide an in-depth examination of the cloud-native raster ecosystem, focusing on groundbreaking technologies that are redefining geospatial data processing. Attendees will gain insights into:
● The evolution from traditional file-based formats to cloud-optimized solutions
● Detailed analysis of cutting-edge formats like Cloud Optimized GeoTIFFs (COGs) and Zarr
● Technological innovations that enable partial reads, streaming access, and efficient multi-dimensional data handling
● Practical challenges in raster data management and how cloud-native approaches provide elegant solutions
● Emerging standards and tools, including STAC (SpatioTemporal Asset Catalog) and OGC guidelines
The talk will dive deep into the technical mechanisms that make cloud-native formats so powerful.
Cloud Optimized GeoTIFFs (COGs), for instance, allow for partial data retrieval, dramatically reducing download times and computational overhead. Zarr and TileDB introduce revolutionary approaches to multi-dimensional array storage, enabling parallel processing and efficient handling of massive datasets across spatial and temporal dimensions. Practical demonstrations will showcase real-world applications using TiTiler, including:
● Dynamic tiling techniques
● Efficient COG creation and validation
● Streaming large-scale geospatial datasets
● Integration with machine learning and advanced analysis workflows
Beyond technical capabilities, the presentation will explore the broader implications for various domains: climate research, environmental monitoring, urban planning, agriculture, and beyond. As datasets continue to grow in size and complexity, cloud-native raster formats offer a glimpse into the future of geospatial analysis—a future characterized by unprecedented accessibility, performance, and insight. This talk is designed for data scientists, geospatial professionals, researchers, and technologists seeking to understand and leverage the latest innovations in raster data processing. Attendees will leave with a comprehensive understanding of cloud-native technologies, practical insights into implementation, and a vision of the transformative potential of modern geospatial data management.
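One reason COGs are cheap to preview is their internal overview pyramid. The toy sketch below builds such a pyramid with plain numpy by repeated 2x2 block averaging; real COG tooling (e.g. GDAL/rasterio) additionally relies on internal tiling and HTTP range requests, which this illustration omits:

```python
import numpy as np

def build_overviews(raster, levels=3):
    """Return [full, 1/2, 1/4, ...] resolution versions of a 2-D raster,
    each level produced by averaging 2x2 blocks of the previous one."""
    pyramid = [raster]
    for _ in range(levels):
        r = pyramid[-1]
        h, w = r.shape[0] // 2 * 2, r.shape[1] // 2 * 2  # even-size crop
        coarser = r[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarser)
    return pyramid

full = np.arange(256 * 256, dtype=float).reshape(256, 256)
pyr = build_overviews(full)
print([p.shape for p in pyr])  # [(256, 256), (128, 128), (64, 64), (32, 32)]
```

A viewer zoomed out reads only the small top level rather than the full-resolution array, which is the partial-retrieval behaviour the abstract highlights.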
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: Distributed access to Marine Data with Integrity through a value chain framework

#zarr #stac #cog #cloud-native

Authors: Piotr Zaborowski, Dr Raul Palma, Bente Lilja Bye, Arne-Jorgen Berre, Dr Marco Amaro Oliveira, Babis Ipektsidis, Sigmund Kluckner
Affiliations: Open Geospatial Consortium Europe, PSNC, BLB, SINTEF, INESC, Netcompany, IEEE
Effective management and utilization of marine data are critical for advancing our understanding of oceanic systems and addressing global challenges such as climate change, biodiversity loss, and sustainable resource management, as well as local problems like efficient navigation, microplastic monitoring, fisheries management, and power plant management. How can alignment with standards and vocabularies be effectively implemented across the entire value chain of marine data without unnecessary burden on data providers? What modern approaches can be utilized to formalize definitions in marine data management? How does the integration of linked data impact the interoperability of different marine data sources and services? The authors present a comprehensive approach to data management, embracing EO, marine observations, and citizen science data along the processing pipelines, ensuring coherent access to source and analytical data. Built within the Iliad Ocean Twin project for the marine environment, it focuses on harmonizing, preserving, and enforcing integrity through an operational framework with abstract conceptual models and practical tools and implementations. Technological advancements on various fronts show growing diversity in how data is acquired, stored, and processed, benefitting from the distributed cloud, maturing analytical workflows, and the operationalization of EO research-based services. While common data sources like Sentinel and the Copernicus Marine Services from the Copernicus missions are well known and provided in standard formats, their details differ depending on the provider. Practically, the application of tools is strongly dependent on the convention used, for example regarding band distribution across the asset files for EO, the level of support of the CF convention in NetCDF-like formats, and the selected templates. In practice, questions about compliance usually require analyzing and reformatting data into suitable structures.
The variety of protocols and formats observed in the project is even higher for the in-situ observations. Standards like ISO/OGC Observations and Measurements share a model with the W3C/OGC Semantic Sensor Network (SSN) ontology, with the SensorThings API as one of the standards suites; but in many cases raw data is minimized and meta-information obscured at the transmission level, and likewise stored in specialized time-series databases. In such instances, meta-information must be provided separately and made available at the harmonization layer. This layer will enable data produced or made available via different data sources to be represented according to standard data models and vocabularies, with well-defined semantics, and exposed via standard APIs so that they can be further examined and processed by one or more software tools in a unified manner, leveraging the total value of all the available data. In ILIAD, this common model is provided by the Ocean Information Model (OIM), which harmonizes and aligns relevant cross-domain standards (particularly from OGC and W3C) with domain models, bridging various views on the ocean data and providing a formal representation enabling unambiguous translations between them. ILIAD also provides the mechanisms to transform/lift data into this common model and to integrate it with other related datasets, providing the harmonized data layer that the different data analytics tools can exploit. Finally, ILIAD exposes this harmonized data via standard OGC APIs (e.g., SensorThings API, Features API, etc.) to boost the potential for interoperability with existing and future components. Such harmonization is necessary not only to enable reliable and trustful processing but is also critical for AI, including explainable AI, which will need to understand relationships between information from various sources. Here, a proper distinction between similar and identical observation types is necessary in a machine-readable format.
While the meaning is changing with advanced language models, problems of ambiguous naming like those analyzed in [1] have not yet been solved, and the ambiguity has not been clarified by standards. In practice, on the data consumer side, multiple scenarios were encountered, from online data streaming, to analytical clients in Python and R using batch processing but benefiting from data trimming and spatial queries, to interactive visualization tools. On the groundwork of the widely adopted standards and conventions of the international marine, nautical, geomatics, and earth sciences, ICT advancements bring both opportunities and challenges related to data harmonization. Applications require high-resolution data access for numerical modeling and analytics and multi-resolution support for downscaling and visualization, which are discrepant requirements. Likewise, minimizing storage and processing costs precludes some options and requires tradeoffs. Effectively, various data access methods must be offered. In the Iliad project, a number of legacy standards-based services were provided (*DAP, WMS, WCS, WFS, OpenSearch) together with more modern ones (STA, Features API, STAC, Environmental Data Retrieval API, Coverages API), alongside many local data streams. Due to the complex environment, data harmonization was implemented during data check-in and access, depending on the legacy state. OGC location building blocks enable consistency and proper understanding of interrelations, while their application by non-expert users is a significant burden and needs to be scaled up. These definitions include formal schemas, data structures, vocabulary mappings to common canonical models, examples, tools, and documentation. In this way, regardless of the convention used for variables in EO or data types (variables in NetCDF, parameters in EDR, observable properties in SOSA/STA) and their representations in various APIs, including direct access to data chunks, data can be interpreted in the same way.
As a side effect of development parallel to major OGC OWS updates to APIs and standardization of cloud-native formats, experiments have benefited and contributed to the advancements proving cross standards compliance, including non-geospatial standards like data space suites. Implementations have proved the value of both cloud native formats with their block-based access and modern metadata schemes aligned with general-purpose ICT. As a starting point, they offer promising capabilities for direct access and as an underlying layer for advanced APIs. On the other hand, the project shared issues of their efficiency for multi-resolution pyramids for global coverage and still limited support on the application side. Web APIs have proven more effective for platforms that do not require data harmonization on the check-in and need to support basic recalculations on the fly, like reprojections and resolution reductions. The proposed framework integrates advanced data processing and management using applied semantic technologies to facilitate seamless access to diverse marine datasets. Key features include: - Integration with the Ocean Information Model - a multilayer data description ontology marrying spatiotemporal concepts, data dimensionality, domain vocabularies like CF convention, Ocean Essential Variables, marine observables vocabularies, and model outcomes. - Operational Framework: Using standards and best practices for Earth Observation exploitation, like EOEPCA, and adding post-processing templates establishes protocols and guidelines to enforce semantic consistency and data quality throughout the data lifecycle. 
- Interoperability: Enhancing data interoperability through the adoption of open data standards (COG, GeoZarr) and OGC APIs, enabling efficient data sharing and integration for a variety of applications, from scientific analysis to visualization.
- User-Centric Access: Iliad pilots worked closely with end-users, which required addressing the needs of researchers, policymakers, and the general public in accessing and utilizing marine data effectively.
The presentation will highlight case studies demonstrating the application of this framework in various marine pilots using a variety of EO, climate, and marine data. By ensuring coherent data access with preserved semantics, the initiative aims to enhance the reliability and usability of marine data, ultimately supporting informed decision-making and sustainable ocean governance. [1] Strobl, P.A., Woolliams, E.R. & Molch, K. Lost in Translation: The Need for Common Vocabularies and an Interoperable Thesaurus in Earth Observation Sciences. Surv. Geophys. (2024). https://doi.org/10.1007/s10712-024-09854-8
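Since the framework exposes harmonized in-situ data via the OGC SensorThings API, it is worth sketching what a single STA Observation payload looks like. The snippet builds one as a plain dict following the SensorThings entity model; the Datastream id and the measured value are hypothetical, not Iliad records:

```python
import json

# Minimal OGC SensorThings API Observation, as POSTed to /v1.1/Observations.
# phenomenonTime = when the phenomenon occurred; resultTime = when the
# result was generated; the Datastream link uses the @iot.id convention.
observation = {
    "phenomenonTime": "2024-06-01T12:00:00Z",
    "resultTime": "2024-06-01T12:00:05Z",
    "result": 14.2,  # e.g. sea surface temperature in degrees C (hypothetical)
    "Datastream": {"@iot.id": 42},  # hypothetical datastream id
}

payload = json.dumps(observation)
print(json.loads(payload)["result"])  # 14.2
```

The Datastream entity, in turn, carries the observed property and unit of measurement, which is where the OIM vocabulary mappings described above attach semantics to the bare numeric result.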
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.31/1.32)

Presentation: Copernicus data and services uptake with EO4EU, an AI-augmented ecosystem for Earth Observation data accessibility with Extended reality User Interfaces for Service and data exploitation.

Authors: Vasileios Baousis, Mr Babis Andreou, Dr Kakia Panagidi, Dr Claudio Pisa, Mohanad Albughdadi, Mr Tolga Kaprol, Professor Stathes Hadjiefthymiades
Affiliations: ECMWF, National & Kapodistrian University of Athens
Copernicus data and services have provided a plethora of Earth Observation (EO) and Earth Modeling data to European citizens. Data acquired from Sentinel satellites is made available to end users through the Copernicus Open Access Hub and the new Copernicus Data Space Ecosystem, providing free access to a wide range of data and services from the Copernicus Sentinel missions and other land, ocean and atmosphere EO data. Moreover, there are six Copernicus services providing data for atmosphere, marine, land, climate change, security and emergency related applications. The Copernicus Data and Information Access Services (DIAS) cloud-based platforms, in addition to cloud infrastructure and processing tools, provide centralised access to Copernicus data and information. The Copernicus Data Space Ecosystem (https://dataspace.copernicus.eu/) builds on the existing DIAS distribution services, ensuring their continuity and bringing significant improvements such as advanced search functions, virtualizations, APIs, etc. Moreover, Destination Earth (DestinE), implemented by ECMWF, EUMETSAT and ESA, will develop a high-precision digital model of the Earth (a digital twin) to monitor and simulate natural and human activity, with the first two digital twins focusing on weather-induced and geophysical extremes and on climate change adaptation, yielding enormous amounts of new Earth Modeling data. Finally, European Data Spaces provide data from various domains (agriculture, food security, health, energy, natural resources, environmental monitoring, insurance, tourism, security), which can be combined, integrated and fused with the Copernicus/DestinE data using cutting-edge ICT technologies like AI/ML, opening new opportunities for the creation of beyond-state-of-the-art solutions that can provide new products and services to the public. 
Despite the significant volume and variety of EO and Earth Modeling data offered by current EU services and repositories, access has not yet extended beyond experts and scientists to the wider industry to deliver tangible applications that improve our health and lives and protect the planet. Unfortunately, only a small part of the market has that kind of expertise and, as a result, high-value EO information remains unexploited: it is often fragmented, complex, diverse, and difficult to find, retrieve, download and process, while users must have some degree of domain expertise to find and access data, understand how to pre-process it, find storage solutions, and transform it into useful formats for analytics and Geographic Information Systems (GIS). EO4EU (https://www.eo4eu.eu/), which will be demonstrated in this presentation, is a multi-cloud platform that introduces a comprehensive ecosystem for the holistic management of EO data. It aims to bridge the gap between domain experts and end users, foregrounding technological advances to expand the use of EO data across various markets. EO4EU boosts the Earth Observation data market by providing digestible data information modelling for a wide range of EO data, through dynamic data annotation and state-of-the-art processing leveraging key European clouds such as WEkEO/DIAS and CINECA. The EO4EU platform provides innovative tools, methodologies and approaches to help a wide spectrum of users, from domain experts and professionals to ordinary citizens, benefit from EO data. Key innovative elements of the EO4EU platform include:
a. Knowledge Graph-based Decision Making: enables informative feature extraction from diverse data repositories, facilitating a more comprehensive understanding of datasets.
b. AI/ML Marketplace: a centralized hub for AI & ML models, algorithms, techniques, and metadata.
c. Big Data Processing Engines: optimized for cloud environments, allowing efficient handling of large-scale datasets.
d. User-friendly Interfaces: including GUI, CLI, and API, alongside immersive VR experiences, catering to both technical and non-technical users.
e. Workflow Engine: facilitates the definition and execution of recurring EO data retrieval and processing tasks.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.14)

Session: A.01.11 Living Planet Fellowship Programme Coordination - PART 1

The Living Planet Fellowship is an ESA mechanism to support the next generation of scientists, at post-doctoral level, to undertake cutting-edge research in Earth Observation, Earth System Science or Climate Research, maximising the scientific return of ESA and European EO missions and datasets through the development of novel EO methods, techniques and products, promoting an open science approach, and delivering excellent scientific results addressing the grand Earth Science challenges of the next decade. The initiative is implemented through a number of dedicated research projects proposed and carried out by young scientists hosted by universities, laboratories and technical centres in ESA Member States.
This session is designed to allow the current fellows to present their results to ESA and, more importantly, to the other fellows, to share their experiences and to explore mechanisms for future collaboration across research institutions, with ESA and between individual researchers. The session forms the Annual Meeting of the Fellowship scheme and is structured around short individual presentations by each of the currently active fellowships, as well as those that have recently been completed, followed by a discussion of the Programme and next steps.

Earth Processes, Magnetosphere and Ionosphere topic:


Wide-Area Sentinel-1 Deformation Classification for Advanced Data Exploitation - WISE


  • Riccardo Palamá – Telecommunications Technology Center of Catalonia (CTTC)

Developing systematic SAR backscatter tools for volcanic monitoring - VolcScatter


  • Edna Dualeh – COMET, School of Earth Sciences, University of Bristol

The Shape of Auroral Plasma Turbulence - SAPT


  • Magnus Ivarsen – University of Saskatchewan

Biosphere topic:


Raised Peatland Ecohydrology Evaluation through Sentinel-1 InSAR data and Machine Learning - RaiPEAT_InSAR


  • Alexis Hrysiewicz – University College Dublin

Integrated Remote Sensing for Biodiversity-Ecosystem Function - IRS4BEF


  • Javier Pacheco – Max Planck Institute for Biogeochemistry

Ground Reference Observations Underlying Novel Decametric Vegetation Data Products from Earth Observation - Grounded EO


  • Luke Brown – University of Salford

VEgetation Spatialization of Traits Algorithm - VESTA


  • Mateus Dantas – Senckenberg Society for Nature Research

Reference data for Improved Sar-based fOresT waTer Observables - RISOTTO


  • Paul Vermunt – University of Twente

Global vegetation monitoring from active and passive microwave sensors - GVMAP


  • Samuel Favrichon – Gamma Remote Sensing AG

Large scale exploitation of satellite data for the assessment of urban surface temperatures - EO4UTEMP


  • Zinovia (Zina) Mitraka – Remote Sensing Lab, Foundation for Research and Technology Hellas (FORTH)
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.96/0.97)

Session: C.02.02 Heritage Missions and Long Time Data Series - PART 1

EO archives worldwide contain several decades of data acquired from a variety of missions and sensors. The value of these scientific time-series is continuously growing. Information extracted from long time series of data is used to observe and characterise the current climate, to detect climate change and to determine the rate of change. Hence there is a strong need to secure state-of-the-art preservation of the growing volume of heritage data and associated information holdings, to allow their discoverability and accessibility through state-of-the-art technologies, and to ensure continuous improvement and valorisation to maximise usability and impact with current and future missions. The session focuses on activities and projects dealing with the preservation and valorisation of Earth observation heritage (historical) data, including new dataset generation, algorithm improvements, reprocessing, and long time-series generation through harmonisation and combination of data from multiple sensors.

Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: A combined total column water vapour data record from microwave and near-infrared imager observations: new developments and results from validation

Authors: Marc Schroeder, Johannes Bärlin, Olaf Danne, Anja Niedorf, Rene Preusker, Tim Trent, Dr. Carsten Brockmann, Jürgen Fischer, Michaela Hegglin, Rainer Hollmann, Hannes Konrad
Affiliations: Deutscher Wetterdienst, Brockmann Consult, FU Berlin, U. Leicester, Spectral Earth, FZ Jülich
Water vapour is the single most important natural greenhouse gas in the atmosphere, thereby constraining the Earth’s energy balance, directly and indirectly through the water vapour feedback mechanism. In addition, water vapour is a key component of the water cycle. There is consequently a need to consolidate our knowledge of natural variability and past changes in water vapour and to establish climate data records (CDRs) of both total column and vertically resolved water vapour for use in climate research. This is the objective of the ESA Water Vapour Climate Change Initiative (WV_cci). Within WV_cci a global total column water vapour (TCWV) data record was generated by combining microwave-based TCWV over the ice-free ocean with near-infrared imager based TCWV over land, coastal ocean and sea-ice. The data record relies on microwave imager observations, partly based on a fundamental climate data record from EUMETSAT CM SAF, and on near-infrared observations from MERIS, MODIS and OLCI. Both the microwave and near-infrared data streams are processed independently and combined afterwards without changing the individual TCWV values or their uncertainties. The latest version of the data record is freely available to the public via DOI 10.5676/EUM_SAF_CM/COMBI/V001. A new version will be released in summer 2025 which relies on more sensors and features improved stability and uncertainty characterisation. In addition, a high-resolution regional product was generated which covers three regions in the sub-tropics and tropics with a spatial resolution of 0.01°. This presentation will briefly introduce WV_cci and new developments and improvements related to the data record generation. TCWV retrieval over land is reliably possible only in clear-sky conditions. Thus, after aggregation, a clear-sky bias is present relative to all-sky data. Results from the clear-sky bias analysis will be shown as well. 
A focus will be on results from intercomparisons and validation, including results from uncertainty validation through comparisons against radiosonde and GNSS observations.
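The combination strategy described above, merging the two independently processed streams without altering the individual retrievals, can be illustrated with a toy sketch. All names and values below are hypothetical; the real COMBI product involves gridded fields, uncertainty propagation and surface-type masks far beyond this:

```python
def combine_tcwv(mw_tcwv, nir_tcwv, surface_type):
    """Merge two independently processed TCWV streams into one field:
    microwave values over ice-free ocean, near-infrared values over
    land / coastal ocean / sea-ice.  Individual retrievals are passed
    through unchanged; None marks cells with no retrieval."""
    combined = []
    for mw, nir, surf in zip(mw_tcwv, nir_tcwv, surface_type):
        combined.append(mw if surf == "ocean" else nir)
    return combined

# Hypothetical 4-cell strip: two ocean cells, two land cells.
grid = ["ocean", "land", "land", "ocean"]
mw = [31.2, None, None, 28.7]    # kg/m^2, microwave imager stream
nir = [None, 14.5, 22.1, None]   # kg/m^2, near-infrared imager stream
print(combine_tcwv(mw, nir, grid))
# [31.2, 14.5, 22.1, 28.7]
```

The key design point mirrored here is that combination is pure selection: each output value is exactly one input retrieval, so the uncertainties attached to the individual streams remain valid for the combined record.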
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: FDR4LDYN: Preserving and Harmonising ERS C-Band Scatterometer Backscatter Data for Long-Term Land Surface Applications

Authors: Roland Lindorfer, Sebastian Hahn, Clay Taylor Harrison, Wolfgang Wagner, Thomas Melzer, Dr. Mariette Vreugdenhil, Prof.dr.ir. Susan Steele-Dunne, Raffaele Crapolicchio, Philippe Goryl
Affiliations: Vienna University of Technology (TU Wien), Delft University of Technology (TU Delft), European Space Agency (ESA) - Centre for Earth Observation (ESRIN), Earth Observation Data Centre (EODC)
Since 1991, the European Remote Sensing satellites ERS-1 and ERS-2 – carrying the Active Microwave Instrument (AMI), which combines synthetic aperture radar (SAR) and scatterometer functionality – and the Metop satellite series – carrying the Advanced Scatterometer (ASCAT) – have provided an unparalleled long-term source of C-band scatterometer backscatter data, characterised by consistent frequency, viewing geometry, and orbit design. With the upcoming Metop Second Generation (Metop-SG-B) satellites equipped with SCA scatterometers, this continuous data collection is set to extend well beyond 40 years, reaching into the 2040s. This extensive data record holds profound value for monitoring land surface dynamics and supporting climate applications. Despite its potential, however, no products currently combine ERS and Metop scatterometer observations to fully leverage the long-term history of observables such as backscatter, slope, and curvature. To address this gap, ESA has commissioned the Fundamental Data Record for Land Dynamics (FDR4LDYN) project, a collaboration with TU Wien and TU Delft, launched in July 2024 with a planned duration of three years. The project aims to develop a harmonised, interoperable data record of ERS scatterometer backscatter variables. The resulting dataset will ensure continuity and compatibility across ESCAT, ASCAT, and SCA. It will thus preserve and valorise heritage scatterometer data for long-term applications, including soil moisture estimation, vegetation monitoring, and surface water and flood dynamics detection. FDR4LDYN begins with a comprehensive quality assessment to address the unique challenges associated with the different mission phases of ERS-1 and ERS-2, as well as technological issues encountered during ERS-2's mission, such as gyroscope failures in 2000–2001 and the tape recorder failure in 2003 [1]. In this context, we will present peculiar spatiotemporal patterns in data availability, validity, and quality flags. 
Not all flags necessarily compromise the quality of backscatter measurements for land surface applications, nor can flagged data fully guarantee the removal of all potential outliers. To address this, we will demonstrate how selective flagging can retain usable data while robust outlier detection based on quantile regression ensures the reliability of the input data. Building on this foundation of quality-controlled input data, the next step involves performing intra-calibration (calibrating beams within a single sensor) and inter-calibration (calibrating between sensors). Previous ERS calibration efforts leveraged stable scattering targets in the Amazon rainforest with a constant gamma model [2]. FDR4LDYN refines these methods by applying a relative stepwise calibration approach [3] to correct temporal biases. We will showcase progress achieved through this updated methodology, particularly in inter-calibrating the ERS and Metop satellite missions. This approach also lays a critical foundation for incorporating future SCA data into a unified scatterometer record. To further support sensor interoperability, we assess the robustness of ESCAT observables, which have a lower temporal observation density than ASCAT-based data. To investigate this, we generate a synthetic ASCAT dataset by subsampling the original data to mimic ESCAT's temporal resolution and data gaps. Comparing this synthetic dataset to the original, we will highlight the sensitivity of slope and curvature retrievals to reduced observation density, mainly when calculating time-series-based values using short-term time windows. These effects are most pronounced in regions with sparse data coverage, such as Europe, where resulting time series often suffer from data gaps and edge effects. To mitigate these challenges, we explore temporal weighting strategies to produce continuous time series while preserving the short-term dynamics essential for applications like vegetation monitoring. 
Furthermore, we will discuss the advantages of providing multiple versions of slope and curvature datasets to accommodate the diverse needs of the user community. To foster the adoption of the FDR4LDYN data record, we focus on user engagement through visualisations on a dedicated project website, code examples in Jupyter Notebooks, and dissemination via TU Wien's data repository. We will introduce these platforms to actively encourage early feedback from potential users, welcoming their input to refine the FDR4LDYN products. By engaging the community and providing tools for easy data access, we aim to ensure these datasets become indispensable resources for land surface and climate monitoring. [1] Crapolicchio, R., De Chiara, G., Elyouncha, A., Lecomte, P., Neyt, X., Paciucci, A., and Talone, M., “ERS-2 scatterometer: Mission performances and current reprocessing achievements,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 7, pp. 2427–2448, 2012. doi: 10.1109/TGRS.2011.2179808. [2] Neyt, X. and Elyouncha, A., “ERS-1 and ERS-2 scatterometer calibration,” Royal Military Academy, Signal and Image Centre, Brussels, Belgium, Tech. Rep., 2015. [3] Reimer, C., “Calibration of space-borne scatterometers: Towards a consistent climate data record for soil moisture retrieval,” Vienna University of Technology, Department of Geodesy and Geoinformation, Austria, 2014. [Online]. Available: http://repositum.tuwien.ac.at/urn:nbn:at:at-ubtuw:1-70509.
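The synthetic-subsampling experiment described above can be sketched in miniature: thin a dense ASCAT-like time series to ESCAT-like observation density and recompute a time-series-based quantity (here a simple least-squares slope). Everything below is a hypothetical illustration, not FDR4LDYN code; on noise-free data the two slope estimates agree exactly, whereas with real noise and data gaps the subsampled estimate degrades, which is the sensitivity the project investigates:

```python
def lstsq_slope(times, values):
    """Ordinary least-squares slope of values against times (pure stdlib)."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Dense "ASCAT-like" daily series over 30 days with a linear trend.
times = list(range(30))                      # days
values = [0.5 * t + 2.0 for t in times]      # idealised, noise-free trend

# "ESCAT-like" subsample: keep every 5th observation to mimic the
# lower temporal observation density of the heritage sensor.
sub_times = times[::5]
sub_values = values[::5]

print(lstsq_slope(times, values))            # 0.5
print(lstsq_slope(sub_times, sub_values))    # 0.5 (noise-free case)
```

In the project setting one would add noise and realistic gap patterns before subsampling and then compare the distributions of the recovered slope and curvature, especially for short-term time windows where few subsampled observations remain.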
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: FDR4ATMOS: Adding GOME-2 data to the GOME-1/SCIAMACHY harmonised FDR product

Authors: Günter Lichtenberg, Dr. Sander Slijkhuis, Melanie Coldewey-Egbers, Bernd Aberle, Dr. Abdalmenem Owda, Dr. Stefan Noël, Dr. Klaus Bramstedt, Dr. Syedul Hoque, Jean-Christopher Lambert, Jeroen van Gent, Tijl Verhoelst, Dr Paul Green, Dr. Mattea Goalen, Dr. Matthijs Krijger, Dr. Angelika Dehn, Dr. Gabriele Brizzi
Affiliations: DLR (German Aerospace Centre), IUP (Institute for Environmental Physics), BIRA-IASB (Royal Belgian Institute for Space Aeronomy), NPL (National Physics Laboratory UK), ESS (Earth Space Solutions), ESA (European Space Agency), SERCO
The Fundamental Data Record for ATMOSpheric Composition (FDR4ATMOS) project is part of the European Space Agency (ESA) Long Term Data Preservation (LTDP) programme. A Fundamental Data Record (FDR) is a long-term record of selected Earth observation Level 1 parameters (radiance, irradiance, reflectance), possibly multi-instrument, which provides improvements of performance beyond the individual mission data sets. The project served as a pathfinder for harmonisation of spectrally highly resolved data from various instruments. The initial phase of the project delivered a first version of the FDR with harmonised Earth reflectances and solar irradiances from GOME-1 and SCIAMACHY. This combined data set spans a 17 year period offering spectrally highly resolved data essential for air quality applications, climate research, ozone trend analysis, and UV radiation studies. The FDR provides the first-ever harmonised data set for high-resolution spectrometers. Since the retrieval of atmospheric trace gas content and other parameters depends on the relative structures of the spectrum, any harmonisation effort must preserve these spectral structures while also addressing broader band differences between the instruments. The FDR4ATMOS project was extended at the end of 2023 to also include Level 1b data from the GOME-2 instrument series with the aim to provide an updated time series covering data from GOME-1, SCIAMACHY and GOME-2 (1995 - 2024). Additionally, the project will develop a lunar data set from data of SCIAMACHY and GOME-2. Building on the experience of the harmonisation of GOME-1 and SCIAMACHY, we will further refine our harmonisation method. In this paper we will give an overview of the current status of the FDR4ATMOS project, presenting detailed insights into the lunar data set development, the solar irradiance harmonisation, and upcoming plans for the harmonisation of Earth reflectances.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Extending the SPOT/VEGETATION and PROBA-V Archives With Sentinel-3 SYN VGT: Challenges, Achievements and Consistency Evaluation

Authors: Carolien Toté, Else Swinnen, Claire Henocq, Steffen Dransfeld
Affiliations: VITO, ACRI-ST, ESA-ESRIN
Sentinel-3 synergy (SYN) VEGETATION (VGT) products were designed to provide continuity to the SPOT/VEGETATION (SPOT VGT) base products archive. Since the PROBA-V mission acted as a gap filler between SPOT VGT and Sentinel-3, in principle, a continuous series of data products from the combined data archives of SPOT VGT (1998–2014), PROBA-V (2013–2020) and Sentinel-3 SYN VGT (from 2018 onwards) is now available to users. There are, however, several aspects related to the satellite/sensor and processing definitions that, to a greater or lesser extent, impact the consistency between these product series, such as differences in the acquisition scheme, sensor design, spectral response, sensor calibration and ground segment processing choices. Another crucial aspect is that since the release of the first Sentinel-3 SYN VGT products in October 2018, several important changes in the SYN VGT processing baseline have been implemented, thereby gradually improving the quality of SYN VGT products. From a user perspective, the consistency of Sentinel-3 SYN VGT with both the latest SPOT VGT (VGT-C3) and PROBA-V (PV-C2) archives is highly relevant. A uniform data record of standard SPOT VGT, PROBA-V and Sentinel-3 SYN VGT products, spanning over 25 years, would provide valuable input for a wide range of land monitoring applications. Since in past years important changes have been implemented in the SYN VGT processing baseline, the resulting archive of SYN VGT products is intrinsically inconsistent, leading to different consistency levels with SPOT VGT and PROBA-V throughout the years. Consistency evaluation consists of spatio-temporal intercomparison of the combined time series of VGT-C3, PV-C2 and Sentinel-3 SYN VGT 10-day NDVI composite products with an external reference from LSA-SAF, and an intercomparison of Sentinel-3 SYN V10 products with climatologies of VGT-C3 and PV-C2, respectively, for three distinct periods with different levels of product quality. 
These analyses have shown that the subsequent processing baseline updates have indeed resulted in better-quality products. Additional improvements are currently in development and under evaluation. The presentation will cover the latest achievements and the consistency evaluation results for the entire time series up to the present date.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: VENµS satellite end of life: archive reprocessing and creation of new products for water quality applications

Authors: Arthur Dick, Olivier Hagolle, Jean-louis Raynaud, Morgan Farges, Cécile Peschoud, François Steinmetz, Agathe Moreau
Affiliations: CNES, CESBIO, Magellium, CS GROUP, Hygeos
VENμS (Vegetation and Environment New micro (μ) Satellite) is a micro-satellite launched in 2017 by the Israel Space Agency (ISA) and the French Centre National d’Etudes Spatiales (CNES). VENμS is a research satellite that carries two very different payloads: an electric Hall-effect thruster and a multispectral optical camera, the latter being the focus of this presentation. At the beginning of the project, the mission was planned in three phases, notably to exercise the small thrusters, which serve a technological mission. During the first scientific phase, VM1, the VENµS satellite flew on a 720 km orbit with a 2-day revisit cycle and a spatial resolution of 5 m. Given the highly satisfactory results of the VM1 phase, it was decided to add two new phases, VM4 and VM5, ultimately reaching a 560 km orbit. This new orbit provides improved characteristics: a 1-day revisit cycle, a ground resolution of about 4 m, and a swath of around 20 km. As is usual after the end of a satellite's life, all VENµS data (levels 1, 2 and 3) are currently being completely reprocessed with enhanced processing parameters for geometric and radiometric calibration as well as for cloud mask generation. Regarding the level 2 products, the operational processing chain uses the MAJA algorithm to perform atmospheric corrections and generate cloud masks. However, to maximize the number of applications using VENµS data, it was decided to generate a new type of level 2 product with the POLYMER algorithm over a selection of VENµS sites. POLYMER is an atmospheric correction algorithm dedicated to ocean and inland water scenes that retrieves water reflectance and chlorophyll concentration. This presentation details the different phases of the mission, its main characteristics and available products. It also gives a status on the data reprocessing and on the processing of the new level 2 product dedicated to water quality applications. 
Finally, a special focus is placed on the absolute radiometric calibration during the VENµS lifetime, and especially on cross-calibration with the Sentinel-2 and Sentinel-3 satellites. VENµS data are freely available to everyone for peaceful, non-commercial use on the French Theia land data centre: http://www.theia-land.fr.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Valorisation and Curation of the ESA ERS Mission

Authors: Mirko Albani, Sergio Folco, Roberto Biasutti, Nora Wiik, Iolanda Maggio
Affiliations: Starion Italia S.p.A., ESA, Serco
The European Remote Sensing satellite (ERS-1), launched more than 30 years ago, was one of the most sophisticated spacecraft ever developed and launched by Europe, covering the monitoring of atmosphere, land, ocean and ice. Shortly after the launch of ERS-2 four years later, ESA decided to link the two satellites in the first ‘tandem’ mission, which lasted for nine months. During this time the increased frequency and level of data available to scientists offered a unique opportunity to observe changes over a very short space of time. These pioneering missions have provided the basis for the remote sensing we have come to rely upon today to unravel the complexities of the way Earth works. The success of the ERS missions has helped Europe to gain clear leadership in several critical technologies and in the scientific use of Earth observation. The ERS missions have provided a stream of data which has changed our view of the world in which we live. They have provided us with new insights into our planet, the chemistry of our atmosphere, the behaviour of our oceans, and the effects of mankind’s activity on our environment, creating new opportunities for scientific research and applications. During their lifetime, ERS data supported over 5000 projects producing some 4000 scientific publications. Archived data, which remain accessible, continue to provide us with a wealth of information and are continuously improved in the frame of the Heritage Space Programme to build long-term data series with successor missions including Envisat, ESA's family of Earth Explorers and the Copernicus Sentinels. The paper describes the full process of ERS mission valorisation and curation activities. ERS mission data are in fact continually reprocessed to align their format with newer missions' data, to exploit more recent algorithms, and to improve their quality and accuracy, with the ultimate goal of offering valuable long-term references for ongoing studies in a wide range of fields. 
Reprocessing activities are also triggered by the availability of additional data recovered through transcription of heritage media containing unique data to fill identified gaps, or after repatriation to ESA of unique data acquired by national or foreign stations. Besides these reprocessing activities, ad-hoc projects have been started for the generation of Fundamental Data Records aligned with newer missions' data and for the generation of new products. The Fundamental Data Records for Altimetry (FDR4ALT) project, for example, is generating innovative Earth system data records, called Fundamental Data Records (essentially level 1 altimeter and radiometer data) and Thematic Data Records (essentially level 2P geophysical products). The goal is to serve the different communities involved in long-term data exploitation over the different Earth surfaces: ocean, coastal, inland water, ice sheets, sea ice and atmosphere.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.94/0.95)

Session: C.03.14 Sentinel-3 Mission Status, Achievements, and Future Plans - PART 1

This session, jointly organized by ESA and EUMETSAT, will provide a comprehensive update on the Sentinel-3 mission, a core element of the EU's Copernicus Earth Observation Program. Launched in 2016 (Sentinel-3A) and 2018 (Sentinel-3B), Sentinel-3 is designed to monitor the Earth’s environment, spanning oceans, land, the cryosphere, and the atmosphere domains. The mission’s primary objectives include tracking Earth surface changes, supporting climate research, and aiding the sustainable management of natural resources. Equipped with a suite of advanced instruments, including the Ocean and Land Colour Instrument (OLCI), Sea and Land Surface Temperature Radiometer (SLSTR), and a Synthetic Aperture Radar Altimeter (SRAL), the satellites provide critical data for monitoring multiple components of the Earth System. These observations are essential for climate studies, disaster management, and environmental monitoring.

The session will highlight the status and operational success of Sentinel-3A and 3B, showcasing key scientific achievements. Presenters will also discuss upcoming developments, such as enhanced data processing, innovative multi-sensor synergies, the preparation of the Sentinel-3C tandem phase, and the evolution of the operational constellations. Moreover, ESA and EUMETSAT’s collaboration ensures Sentinel-3 data continuity and integration into operational services, supporting informed decision-making in areas like sustainable development and environmental protection. This session will emphasize the role of such international cooperation in tackling global environmental challenges through enhanced Earth observation capabilities.

Introduction


Welcome and brief mission status


  • Hilary Wilson (EUMETSAT) and Jérôme Bouffard (ESA)

Overview of EUM products and plans


  • Estelle Obligis (EUMETSAT)

Overview of ESA products and plans


  • Steffen Dransfeld (ESA) and Alessandro Di Bella (ESA)

Marine Session


The operational use of Sentinel-3 by the Copernicus Marine Service


  • Antonio Repucci - CMEMS

The Météo-France Sargassum Service


  • Marianne Debue – Météo-France

Arctic lead detection with SLSTR


  • Sascha Willmes – University of Trier

Wave Forecasting


  • Lotfi Aouf – Météo-France
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall K1)

Session: D.01.07 ESA Digital Twin Earth Mid-term Milestone - PART 1

The ESA Digital Twin Earth (DTE) program, part of the Earth Watch Programme, is an initiative aimed at advancing the use of Earth Observation (EO) data and technologies to create digital twins for monitoring and predicting environmental changes. The program collaborates closely with the European Commission’s Destination Earth (DestinE) initiative, complementing DestinE’s efforts to establish a comprehensive digital twin ecosystem in Europe.

The program focuses on developing pre-operational digital twins to demonstrate their value for applications such as climate monitoring, urban planning, and environmental management. By integrating Earth Explorer Missions data into the DestinE Platform, it ensures high-quality Earth Observation (EO) data is available for digital twin development. Additionally, it establishes a framework to support the creation and operational use of digital twins and promotes interoperable services accessible via ESA DTE, DestinE, or Member State initiatives. Through these efforts, ESA DTE strengthens DestinE and contributes to advancing digital twin technologies at national and European levels.

This session is the ESA mid-term milestone, organised as an open discussion on the phase-in achievements and initial lessons learnt. The objective will be to assess the adequacy of the provided environment for the development of DTCs, to review thematic priorities, and to evaluate the efficiency of the integration process with DestinE and potential Member State initiatives. ESA will prepare a set of recommendations and lessons learnt to be discussed during the workshop. The process shall serve as the basis for the next phase procurements and to establish new priorities or continuity in the DTC thematic areas. ESA MS representatives, DG-CNECT, ECMWF, and EUMETSAT will be invited to participate actively in the milestone.

Friday 27 June 08:30 - 10:00 (Hall G2)

Session: A.02.01 Land Surface Temperature and Emissivity Data for Research and Applications - PART 1

This session invites presentations on the use of land surface temperature and emissivity data for research and applications. In particular, we welcome studies using high spatial resolution multispectral thermal infrared data, such as those from the ECOsystem Spaceborne Thermal Radiometer on Space Station (ECOSTRESS) and airborne instruments, as well as from the evolving New Space sector. The session will also provide an opportunity to discuss the preparation of upcoming missions such as the ESA LSTM mission, the CNES-ISRO TRISHNA mission and the ASI-NASA SBG mission.


Friday 27 June 08:30 - 10:00 (Hall G2)

Presentation: ECOSTRESS, SBG-TIR AND HYTES – STATUS AND RESULTS

Authors: Simon Hook
Affiliations: NASA/JPL
In 2017, the National Research Council released the second Earth Science Decadal Survey (ESDS). The ESDS recommended four sets of measurements referred to as the Decadal Observables. One of these was the Surface Biology and Geology (SBG) Decadal Observable (DO). The Decadal Observable measurements, together with measurements from the upcoming NISAR mission, are now referred to as the Earth System Observatory (ESO). The SBG-DO called for high spectral and spatial resolution measurements in the visible to shortwave infrared (VSWIR: 0.38-2.4 micrometers) and high spatial resolution multispectral measurements in the mid and thermal infrared (TIR: 3-12 micrometers). The TIR measurements would be made every few days and the VSWIR measurements every couple of weeks. The VSWIR and TIR measurements would have spatial resolutions of 30 m and 60 m respectively. After the release of the 2017 ESDS, NASA formed teams to develop architectures for each of the DOs. The SBG team recommended that the VSWIR and TIR measurements be made from two separate platforms in a morning and afternoon orbit respectively. The morning orbit was preferred for the VSWIR measurements to minimize cloud cover, and the afternoon orbit was preferred for the TIR to capture the peak temperature stress of plants, which typically occurs in the early afternoon. The architecture team recommended global revisit times for the VSWIR and TIR of 16 and 3 days respectively, which resulted in swath widths of 185 km and 935 km from the nominal altitudes chosen for the VSWIR and TIR platforms. SBG (VSWIR and TIR) is a global survey mission that will provide an unprecedented capability to assess how ecosystems are responding to natural and human-induced changes. It will help identify natural hazards, in particular volcanic eruptions and any associated precursor activity, and it will map the mineralogical composition of the land surface.
SBG will advance our scientific understanding of how the Earth is changing as well as provide valuable societal benefit, in particular in understanding and tracking dynamic events such as volcanic eruptions, wildfires and droughts. The SBG VSWIR and TIR instruments recently passed their Phase A and B Reviews respectively, and the TIR instrument is currently scheduled for a 2028 launch. The SBG-TIR system is a joint mission between NASA and the Italian Space Agency (ASI). ASI will provide the spacecraft, launch and a visible near infrared (VNIR) camera. The VNIR system complements the TIR measurements and is particularly useful for the retrieval of evapotranspiration, which requires both TIR and VNIR data. The TIR system is now referred to as OTTER (Orbiting Terrestrial Thermal Emission Radiometer) and the VNIR system as VIREO (Visible Infrared Earth Orbiter). All SBG data will be freely available through the Land Processes DAAC. In 2014 the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) was selected as part of the NASA Earth Ventures Instrument program. ECOSTRESS addresses critical questions on plant–water dynamics and future ecosystem changes with climate. ECOSTRESS has five TIR spectral bands, a spatial resolution of 68 m x 38 m (crosstrack x downtrack) and a revisit of every few days at varying times of day from the International Space Station (ISS). ECOSTRESS was delivered to the ISS in 2018 and operations began shortly thereafter. ECOSTRESS was planned to operate for one year; however, due to demand, as well as the instrument continuing to operate well, NASA extended the mission until 2023. At the end of 2022 three of the missions operating on the ISS (ECOSTRESS, OCO-3 and GEDI) were evaluated, and it was decided to further extend all three missions. The ECOSTRESS site was reserved until 2029, and ECOSTRESS was funded through the Senior Review process until 2026 and invited to compete in the 2026 Senior Review.
A new collection of products (Collection 2) is also being released. All ECOSTRESS data are freely available through the Land Processes DAAC. HyTES represents a new generation of airborne TIR imaging spectrometers with much higher spectral resolution and a wide swath. HyTES is a pushbroom imaging spectrometer with 512 spatial pixels over a 50-degree field of view. HyTES includes many key enabling state-of-the-art technologies, including a Dyson-inspired spectrometer and a high-performance convex diffraction grating. The Dyson optical design allows for a very compact and optically fast system (F/1.6) and minimizes cooling requirements, since a single monolithic prism-like grating design can be used, which allows baffling for stray light suppression. The complete optical assembly is operated at cryogenic temperatures (~100 K). HyTES originally used a Quantum Well Infrared Photodetector (QWIP) and had 256 spectral channels between 7.5 μm and 12 μm. In 2021 this was upgraded to a Barrier InfraRed Detector (BIRD) array with 284 spectral channels. The first science flights with the QWIP were conducted in 2013 and the first science flights with the BIRD in 2021. Work is now underway to develop a second upgraded BIRD, with integration into HyTES in the 2024/5 timeframe. Many flights have been conducted, and the instrument can now be deployed on a Twin Otter, ER2 or Gulfstream aircraft, allowing a variety of pixel sizes depending on flight altitude. In 2023 a joint campaign was conducted with the European Space Agency to acquire data for simulating and evaluating data from several upcoming thermal infrared missions, including the ASI/NASA SBG-TIR, the ESA LSTM and the ISRO/CNES TRISHNA missions. All the data acquired thus far have been processed and are freely available from the HyTES website (http://hytes.jpl.nasa.gov). Higher-level products, including surface temperature and emissivity and gas maps, are available for the more recent data.
This presentation will describe the current status and plans for SBG-TIR, ECOSTRESS and HyTES programs. The presentation will also provide some recent results utilizing ECOSTRESS and HyTES for a variety of research and application studies.

Friday 27 June 08:30 - 10:00 (Hall G2)

Presentation: Understanding Thermal Directionality for Improved Satellite Land Surface Temperature Retrievals: Insights from a Joint NASA-ESA Airborne Campaign

Authors: Mary Langsdale, Professor Martin Wooster, Dirk Schuettemeyer, Simon Hook, Callum Middleton, Bjorn Eng, Roberto Colombo, Micol Rossini, Lorenzo Genesio, Franco Miglietta, Dr. José Sobrino, Rafael Llorens, Drazen Skokovic
Affiliations: King's College London, National Centre for Earth Observation, ESA ESTEC, NASA Jet Propulsion Laboratory, University of Milano Bicocca, Institute of Bioeconomy, CNR, University of Valencia
Satellite observations of land surface temperature (LST) play a critical role in environmental monitoring, particularly over heterogeneous landscapes such as agricultural and urban areas, where ground-based measurements lack the necessary spatial coverage. However, the influence of viewing and illumination geometry on satellite-derived LST remains a significant challenge in these complex regions due to the diverse thermal properties of surface components. Despite its importance, current operational algorithms do not fully account for these angular effects, largely due to the challenges of quantifying them through observations or modelling. Addressing this gap now is crucial given that LST is an Essential Climate Variable and next-generation high-resolution thermal infrared missions such as LSTM, SBG, and TRISHNA are on the horizon. To advance this understanding, a NASA-ESA airborne campaign, led by the National Centre for Earth Observation at King’s College London, was conducted in Grosseto and Siena in Italy during the summer of 2023 as part of the LSTM mission development activities. This campaign focused on thermal directionality and uniquely deployed two aircraft, each equipped with state-of-the-art longwave infrared (LWIR) hyperspectral sensors. These sensors captured concurrent nadir and off-nadir observations to evaluate angular effects at the satellite scale across both agricultural and urban surfaces, with measurements collected at different stages of the crop growth cycle. In-situ measurements were also collected to evaluate the accuracy and stability of the airborne datasets. These data provide an unprecedented opportunity to analyse directional influences on LST retrievals.
This presentation details the novel strategies employed in the campaign design to capture high-quality hyperspectral LWIR data under varying geometries and presents the results of this campaign for the first time, providing insights into angular effects and their implications for satellite-based LST retrievals from upcoming missions. We also discuss future analysis plans to support the development of corrections for angular impacts in operational algorithms, ensuring the success of upcoming thermal missions and enhancing downstream applications.

Friday 27 June 08:30 - 10:00 (Hall G2)

Presentation: The Land Surface Temperature Retrieval Algorithm for LSTM: An Overview and Results

Authors: Benjamin Courtier, Michael Perry, Agnieszka Soszynska, Darren Ghent
Affiliations: University Of Leicester, NCEO
Wide-scale observations of Land Surface Temperature (LST) are vital to improve our understanding of climate change, monitor the rate of global warming and understand the impacts of a warming climate. Land surface temperature information is also important in supporting the management of agricultural land, and it has health implications, especially with an increased proportion of the world’s population living in cities and extreme heat events becoming more common. The Land Surface Temperature Monitoring mission, together with TRISHNA and SBG, will provide high-resolution LST measurements to a high degree of accuracy in order to improve the global surface temperature data stream. Here an overview of the operational retrieval algorithm for the LSTM mission is presented, together with results from real-world and simulated test cases. The LSTM retrieval algorithm is based on an Optimal Estimation (OE) framework, in particular the OE framework of Perry et al. (2025). This framework retrieves the LST, the Land Surface Emissivity (LSE) and the absorption due to the atmospheric column (all gas absorption is retrieved in a single catch-all term); each of these variables is retrieved with an uncertainty term. The method works by using a fast radiative transfer model (RTTOV) to propagate initial best estimates of the state vector (LST, LSE and the atmospheric column) to the observation space (i.e. Top of Atmosphere (ToA) radiances). Improvements are made to the initial best estimate using the Jacobians of the forward model to inform a search direction; this process is then iterated up to seven times. Once the method has converged to within a set tolerance of bias from the ToA radiances, the retrieval is complete. The results of the Optimal Estimation are shown in a variety of test scenarios, including comparison against airborne observations of agricultural scenes in Grosseto, Italy.
These real-world observations allow for the direct comparison between the OE retrieval and measurements of ground truth over a number of different surface types. Comparisons with other commonly used retrieval methods are shown. Further testing was undertaken on simulated scenes to ensure a wide variety of surface types, solar angles and atmospheric compositions were included in the testing process.
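The iterative scheme described above, prior state propagated through a forward model and updated via its Jacobians until convergence, is the standard Gauss-Newton form of optimal estimation. A minimal illustrative sketch is given below; it is not the operational LSTM processor. The `forward` and `jacobian` callables stand in for RTTOV and its Jacobian output, and the covariance names follow the usual Rodgers notation.

```python
import numpy as np

def oe_retrieval(y, x_a, S_a, S_e, forward, jacobian, max_iter=7, tol=1e-3):
    """Gauss-Newton optimal estimation retrieval.

    y        : observed ToA radiances
    x_a      : prior state (e.g. LST, LSE, atmospheric term)
    S_a, S_e : prior and observation error covariance matrices
    forward  : x -> simulated radiances (stand-in for a model like RTTOV)
    jacobian : x -> dF/dx evaluated at x
    """
    x = x_a.copy()
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    S_hat = S_a
    for _ in range(max_iter):
        K = jacobian(x)
        # Posterior covariance and Gauss-Newton update step
        S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)
        dx = S_hat @ (K.T @ S_e_inv @ (y - forward(x)) + S_a_inv @ (x_a - x))
        x = x + dx
        if np.linalg.norm(dx) < tol:  # converged within tolerance
            break
    return x, S_hat  # retrieved state and its uncertainty
```

The returned posterior covariance `S_hat` is what supplies the per-variable uncertainty term mentioned in the abstract.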

Friday 27 June 08:30 - 10:00 (Hall G2)

Presentation: STIC-ing together LST, Stomatal Conductance, and ET: Evolving Science of Ecosystem Functioning in the European ECOSTRESS Hub & Beyond

Authors: Kanishka Mallick, Tian Hu, Zoltan Szantoi, Christian Bossung, Yoanne Didry, Patrik Hitzelberger, Gilles Boulet, Albert Olioso, Mauro Sulis, Chiara Corbari, Philippe Gamet, Simon Hook, Bimal Bhattacharya, Muddu Sekhar, Jean-Louis Roujean, Dennis
Affiliations: Luxembourg Institute Of Science And Technology (list)
The Penman-Monteith (PM) formulation is widely recognised as the leading closed-form physical equation for directly estimating evapotranspiration (ET) from terrestrial vegetation. It builds upon Penman’s equation, which estimates open water evaporation using Earth's radiation balance, flux-gradient theory, and atmospheric demand. A key advantage of the PM equation is its elimination of direct dependence on surface temperature, a feature that originates from its development under zero water stress conditions and the unavailability of surface temperature data when the model was first proposed. However, applying this model to natural ecosystems reveals the critical role of biophysical conductances in accurately estimating ET, especially under stressed environmental conditions, where surface temperature plays a significant role. Two essential conductances are aerodynamic conductance, regulating water vapor and heat exchange between the surface and the atmosphere, and canopy-surface conductance, governing the carbon exchange during photosynthesis at the cost of transpiration. The structural characteristics of the PM model, combined with the absence of closed-form equations for these conductances, present challenges for global-scale ET mapping. This is especially true when trying to integrate the PM equation with thermal remote sensing data. To address these challenges, we propose a framework that establishes direct links between the conductances and thermal remote sensing through closed-form equations. This integration embeds land surface temperature (LST) into the PM formulation, enabling a more direct and scalable estimation of ET. It would also facilitate advanced ecosystem function analysis using data from current and future thermal remote sensing missions.
This method is anchored in Surface Energy Balance (SEB) theory, where evapotranspiration (ET) responses to land surface temperature (LST) variability are influenced by factors such as stomatal control, aerodynamic conductance, soil water availability, and atmospheric aridity. By decomposing the upwelling thermal longwave irradiance signal using a higher-order Taylor series expansion and linking it to the governing equations for conductances, we derived closed-form analytical solutions for the two critical conductances as a function of LST. To implement this approach, we utilized the updated Surface Temperature Initiated Closure (STIC2.0) model, which is currently operational within the European ECOSTRESS Hub (EEH). The new version STIC2.0 analytically estimates these conductances through closed-form equations that incorporate key variables such as LST, radiation, temperature, humidity, fractional vegetation cover, and vegetation indices. This model offers a robust and efficient method for estimating the biophysical conductances that regulate ET, enabling more accurate and scalable ET assessments. To validate the methodology, we integrated thermal-optical remote sensing data from ECOSTRESS, Landsat, and Sentinel-2 with meteorological forecasts. The model was tested across a range of ecosystems in Europe, Africa, and the United States. Analysis revealed a weak but statistically significant relationship (p-value < 0.05) between model bias and acute ecosystem stress under extremely low vegetation fraction conditions. The results emphasized the pivotal role of dual conductances—stomatal and aerodynamic—in explaining ET responses to LST under varying conditions of soil and atmospheric water stress and vegetation cover. This study underscores the necessity of incorporating LST-stomatal conductance interactions in thermal ET models, advancing beyond traditional methods that estimate ET as the residual of sensible heat flux. 
It introduces a novel framework for interpreting ET variability using thermal remote sensing, with promising applications for upcoming missions such as TRISHNA, LSTM, and SBG. These advancements have the potential to significantly enhance ecosystem management and support sustainable water resource management, particularly in arid and semi-arid regions.
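To make the central role of the two conductances concrete, a minimal Penman-Monteith sketch is given below. The constants (air density, psychrometric constant) are typical near-surface approximations and the Tetens form is used for the saturation vapour pressure slope; this is a generic textbook form, not the STIC2.0 implementation described in the abstract.

```python
import math

def penman_monteith(Rn, G, Ta, VPD, g_a, g_s):
    """Latent heat flux LE (W m-2) from the Penman-Monteith equation.

    Rn, G    : net radiation and ground heat flux (W m-2)
    Ta       : air temperature (deg C)
    VPD      : vapour pressure deficit (kPa)
    g_a, g_s : aerodynamic and canopy-surface conductance (m s-1)
    """
    cp = 1010.0      # specific heat of moist air (J kg-1 K-1)
    rho_a = 1.2      # near-surface air density (kg m-3)
    gamma = 0.066    # psychrometric constant (kPa K-1) at ~101 kPa
    # Slope of the saturation vapour pressure curve (kPa K-1), Tetens form
    e_s = 0.6108 * math.exp(17.27 * Ta / (Ta + 237.3))
    delta = 4098.0 * e_s / (Ta + 237.3) ** 2
    numerator = delta * (Rn - G) + rho_a * cp * VPD * g_a
    denominator = delta + gamma * (1.0 + g_a / g_s)
    return numerator / denominator
```

Note how ET depends on the ratio g_a/g_s in the denominator: with g_a fixed, reducing the canopy-surface conductance (stomatal closure under stress) directly suppresses the flux, which is the coupling the LST-based closed-form conductance equations aim to capture.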

Friday 27 June 08:30 - 10:00 (Hall G2)

Presentation: TRISHNA: an Indo-French Space Mission to Study the Thermography of the Earth at Fine Spatio-Temporal Resolution

Authors: Olivier Hagolle, Mr Philippe Gamet, Dr Jean-Louis Roujean, Dr Bimal Bhattacharya, Dr Gilles Boulet, Dr Albert Olioso, Dr Laure Roupioz, Dr Ghislain Picard, Dr Mark Irvine, Dr Philippe Maisongrande, Dr Kanishka Mallick
Affiliations: Cnes / Cesbio, CNRS / CESBIO, ISRO / SAC, IRD / CESBIO, INRAE, ONERA, IGE, INRAE, CNES, LIST
TRISHNA (Thermal infraRed Imaging Satellite for High-resolution Natural resource Assessment) is a cross-purpose high spatial and temporal resolution thermal infrared Earth Observation (EO) mission that will provide observations in the domains of terrestrial and marine ecosystems, inland and coast, urban, cryosphere, solid Earth and atmosphere. It is an Indo-French mission scheduled to be launched by the end of 2026 for a lifetime of 5 years. The high-quality optical-thermal imagery will be used to provide cutting-edge optical and thermal products of vegetation, manmade structures, snow, ice and sea. Atmospheric fields such as cloud mask and type, aerosol load and water vapour content will be well described. TRISHNA products will improve our knowledge of radiative and heat transfer to quantify evapotranspiration, fresh water discharge, snow-melt runoff, the bio-geochemical cycle and the urban heat island. Land surface temperature (LST) and land surface emissivity (LSE) are Essential Climate Variables (ECV). LST is defined as the radiative skin temperature and is useful in agriculture (plant growth, water stress, crop yield, precision farming, early warning, freezing scenarios), hydrology (water cycle, catchment water, etc.) and meteorology. LSE differentiates the surface attributes (vegetation, soil, snow, water, rock, manmade material) composing the landscape. TRISHNA distributed products will be at a resolution of 60 m. The revisit time will be 3 days at noon and night at the equator, with more orbit passes at higher latitudes. The instrument design is based on an across-track scanner for the 4 TIR channels (8.65, 9.0, 10.6 and 11.6 µm) and a push-broom arrangement for the 7 VNIR channels (485, 555, 670, 860, 910, 1380 and 1610 nm). The TRISHNA mission will look at early warnings of water scarcity and fire risk, including optimization for agricultural irrigation, rainfed crops, water consumption and its partitioning.
TRISHNA will help to monitor any reduction and increase in evaporation (E) and transpiration (T) (Figure 2). Evapotranspiration (ET) is an ECV that reflects whether plant water needs are being met. Its accuracy and timeliness are central, as ET governs soil moisture at the surface and in the root zone through water extraction by plants, with large consequences for infiltration, runoff and the whole catchment water budget. Another objective is to describe mixing processes and water quality in coastal areas and estuaries from high-resolution SST (Sea Surface Temperature), to enhance productivity and biodiversity assessment on the coast and for rivers and lakes, including warning and monitoring of water-borne diseases, to estimate energy fluxes in alluvial plains and aquifers, and to describe hazards (river floods, storm surges, inundated vegetation) in relation to sea level rise. Another main objective is to predict and quantify the Urban Heat Island (UHI) at short term. A spectral un-mixing method was developed to provide sub-pixel abundances and temperatures from radiance images. The main goal is to improve LST retrieval thanks to the enhanced footprint, to account for cavity anisotropy and environment effects, and to relate LST to air temperature. Urban areas are formed by complex 3D structures of mixed materials (cement, steel, bituminous roads, stones, bricks, glass, wood, grass, etc.) and LST can vary locally by a few kelvin. Regarding the cryosphere, a key question concerns combining thermal and optical high-resolution data to improve the prediction of the energy budget and the melting of snow- and ice-covered areas. In this regard, LST will be of added value, and how it may capture small-scale variability in mountainous areas is a relevant issue. Other issues concern the mapping of debris-covered glaciers, lake ice formation and development, and lake water dynamics.
Thermal anomalies may serve to anticipate volcanic eruptions, and TIR measurements may help to better characterize volcanic ash clouds, geothermal resources and coal fires. Moreover, topography and roughness affect the surface energy balance of the solid Earth. The main properties are grain size, porosity, water content and composition. Soil temperature is a primary tracer of energy and water exchanges between the surface and the ground. Downwelling shortwave and longwave radiation at noontime will be quantified under all-sky conditions through retrieval of aerosol optical depth, columnar precipitable water, cloud mask, cloud type/phase and albedo using the seven optical and four TIR bands. CalVal is an important component of the TRISHNA program. It consists of developing strategies to support the validation of TRISHNA Level 1 and 2 products, namely LST, LSE, ET, albedo and radiation. Efforts also concern the data preprocessing to remove or minimize the turbulent and directional effects that jeopardize the 1 K specification on LST. Micrometeorological stations in different climates are equipped with TIR OPTRIS cameras, which are also flown on UAVs. Metrics and statistics are proposed as criteria for inter-comparison. Cloud mask, atmospheric and anisotropy corrections (relief, directionality) are under development. Methods for measuring LST and LSE use consolidated approaches such as TES (Temperature-Emissivity Separation). These key variables, along with the ET product, will include an accuracy assessment and a Quality Flag. The presentation will review the current status of the proposed algorithms and their implementation in the ground segment. TRISHNA activities serve the preparation of similar forthcoming TIR missions such as LSTM (ESA) and SBG (NASA).
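Since the LST/LSE products discussed above all start from calibrated TIR band radiances, the basic radiance-to-brightness-temperature step can be illustrated by inverting Planck's law at a band-centre wavelength. This monochromatic approximation is a sketch only; operational processors use band-integrated response functions or band-correction coefficients rather than a single wavelength.

```python
import math

def brightness_temperature(L, wavelength_um):
    """Brightness temperature (K) from spectral radiance via Planck inversion.

    L             : spectral radiance (W m-2 sr-1 um-1)
    wavelength_um : band-centre wavelength in micrometres (e.g. 10.6)
    """
    h = 6.62607015e-34   # Planck constant (J s)
    c = 2.99792458e8     # speed of light (m s-1)
    kB = 1.380649e-23    # Boltzmann constant (J K-1)
    lam = wavelength_um * 1e-6   # micrometres -> metres
    L_si = L * 1e6               # per micrometre -> per metre
    c1 = 2.0 * h * c ** 2        # first radiation constant
    c2 = h * c / kB              # second radiation constant
    return c2 / (lam * math.log(1.0 + c1 / (lam ** 5 * L_si)))
```

Over a surface of emissivity below one, this brightness temperature underestimates the true kinetic LST, which is why the TES-style emissivity separation mentioned above is needed.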

Friday 27 June 08:30 - 10:00 (Hall G2)

Presentation: Algorithm Selections Progress And Way Forward Of LSTM-L2 Processor Development And Associated Calibration And Validation Project

Authors: Sandrine Mathieu, Vincent Levasseur, Dr Frédéric Romand, Stéphane Ferron, Nicolas Bertaud, Dr Véronique Bruniquel, Frédéric Rouffi, Jan Jackson, Darren Ghent, Mike Perry, Agnieszka Soszynska, Ben Courtier, Raquel De Los Reyes, Bringfried Pflug, René Preusker, Juergen Fischer, Dave Smith, Jean-Philippe Gastellu, Zhijun Zhen, Nicolas Lauret, Olivier Hagolle, Steffen DRANSFELD, Kevin Alonzo
Affiliations: Acri-st, University of Leicester, DLR, ARGANS, Spectral Earth, CESBIO, RAL, ESA
The LSTM-L2 Processor development and associated calibration and validation project aims to develop the operational algorithms that will process the L1C data from the LSTM mission. This L2 processor provides key products such as Land Surface Temperature (LST) and Land Surface Emissivity (LSE). These products are key to the development of agricultural applications and the monitoring of the environment. The project began in 2023, is planned to run for seven years and is divided into two phases. The first phase of two years, currently underway, focuses on defining and selecting the algorithms that will be implemented to develop the prototype software, which will be delivered at the Critical Design Review (CDR) in mid-2025. During the first year of the project, the LST/LSE algorithms, along with those for Surface Reflectance (SR), Aerosol Optical Thickness (AOT), Atmospheric Correction (AC) and Total Column Water Vapor (TCWV), underwent a rigorous selection process involving intercomparison exercises and scientific validation. This selection process was conducted taking into account the LSTM mission requirements. For LST/LSE, it included algorithms foreseen for NASA's Surface Biology and Geology (SBG) mission and the CNES/ISRO TRISHNA mission, with the synergy between these three high-resolution thermal satellite missions in mind. This presentation will detail the conclusions of the selection of all the algorithms that will now feed into the implementation process. Within the LSTM project, particular attention is paid to the data used to test these algorithms. The simulated data that have been selected, in addition to the campaign data, will be presented. The presentation will also outline the next steps in the processor development leading to the operational software.
Finally, the presentation will conclude with the remaining activities and the process still to come for defining and selecting the algorithms related to the correction of directional effects and the calculation of evapotranspiration.

Friday 27 June 08:30 - 10:00 (Room 1.61/1.62)

Session: D.06.03 Digital Revolution and Emerging Technologies

The world of Earth Observation (EO) is changing dramatically, driven by the digital revolution. Emerging disruptive technologies combined with EO will entirely reshape the EO information value chain, affecting both the downstream applications sector and the space assets. The paradigm around the EO value chain will likely need to be reinvented to further exploit synergies and translate data into contextual insights that can drive action for the climate, the economy and the environment. This session will address all ICT revolution technologies impacting EO, such as the so-called web3 (blockchain and Distributed Ledger Technologies, now associated with web 3.0 at large), IoT, Immersive Visualisation, Federated Learning, Decision Intelligence, Innovative Computing paradigms, and others not listed here.

Topics to be addressed:
- web3 (blockchain and Distributed Ledger Technologies, now associated with web 3.0 at large): We are moving towards a more decentralized, robust, secure, intelligent, and, above all, more equitable data industry, of which Earth Observation (EO) is a part. A set of paradigms will likely change across the entire upstream and downstream markets: inter alia, smart data storage and fusion, privacy-preserving applications, data traceability, certification of processing chains and derived products, and monetization of EO data and their added value
- Federated Learning: federated learning will support the generation of distributed hypermodels, preserving privacy and robustness.
- Internet of Things (IoT): The integration of in situ measurements, typical of IoT, with EO and other heterogeneous sources of information will enhance the capabilities of such an integrated knowledge system. By leveraging Commercial Off-The-Shelf (COTS) sensors and communication devices, IoT technology can be used to implement distributed in-situ sensing and processing networks that could significantly complement the information provided by satellite imagery.
- Immersive visualization: It aims to bridge the gap between the vast amount of complex data available in an integrated Earth Observation (EO) knowledge system and non-expert users. It provides truly immersive experiences that can simulate scenarios generated by predictive and prescriptive AI.

These are just a few examples of the technologies to be addressed in this session. The intent is to leave the door open for the community to propose disruptive and transformative innovations for both upstream and downstream assets.
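To make the federated learning topic above concrete, one aggregation round of the standard Federated Averaging scheme can be sketched as follows. This is a generic illustration of the technique, not a system presented in the session: each client trains locally and only model weights, never raw data, are shared and combined.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One round of Federated Averaging: combine locally trained model
    weights, weighted by each client's number of training samples, so the
    raw (possibly private) data never leaves the clients.

    client_weights : list of weight vectors, one per client
    client_sizes   : number of local training samples at each client
    """
    total = float(sum(client_sizes))
    agg = np.zeros_like(np.asarray(client_weights[0], dtype=float))
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * np.asarray(w, dtype=float)
    return agg
```

In an EO/IoT setting the "clients" could be distributed ground stations or edge sensors, each contributing to a shared model without centralising their measurements.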

Presentations and speakers:


Assessing the Feasibility of IoT/Low-Cost Sensors for Air Quality Monitoring: Insights from the AD4GD Project


  • Christian Borger - ECMWF

Seeing the Future: Immersive Earth Observation Scenarios with Unreal Engine, IoT and AI


  • Arturo Montieri - Alia Space

RUDEO: Harnessing Earth Observation and Disruptive Technologies for Sustainability and Compliance


  • Mihaela Violeta Gheorghe - GMV

Enhancing Remote Sensing Integration: An Ontology-Based Framework for Semantic Data Interoperability


  • David Garcia Rodriguez - Universitat de València

Friday 27 June 08:30 - 10:00 (Hall F1)

Session: A.01.06 Aerosol, clouds, their interactions and the radiation budget - PART 1

The Intergovernmental Panel on Climate Change (IPCC) has recognised that aerosols and clouds remain the largest contributions to overall uncertainty in climate feedback. Aerosol particles directly affect the Earth’s energy budget by scattering and absorbing sunlight and, to a lesser extent, infrared radiation. Aerosol layers often include tiny particles known as cloud condensation nuclei (CCN), around which water condenses to form cloud droplets, as well as ice nuclei. Aerosols therefore affect the size and concentration of water droplets and ice crystals, which in turn affects the radiative properties of the clouds. Aerosol indirect forcing is broadly defined as the overall process by which aerosols perturb the Earth-atmosphere radiation balance by modulating cloud albedo and cloud amount.

Satellite observations have up to now been less successful at providing the quantitative particle optical, microphysical, and chemical properties needed to model aerosol forcing and to interpret the interactions with clouds. The recent EarthCARE mission collects co-registered observations from a suite of four instruments located on a common platform: the two active instruments provide vertical profiles of the atmosphere along the satellite nadir path, while the two passive instruments provide scene context information to support the interpretation of the active instruments' data.

This session is dedicated to the presentation of improved detection and retrieval capabilities for aerosol and cloud characteristics, and to the understanding of their interactions and their effect on the global radiative budget, by means of novel and past satellite missions such as, but not limited to, Sentinel-4, Sentinel-5, Sentinel-5P, EarthCARE and Aeolus.


Friday 27 June 08:30 - 10:00 (Hall F1)

Presentation: International Efforts for Sustained Generation of a GEO-Ring Radiance Data Record

Authors: Jörg Schulz, Viju John, Carlos Horn, Ken Knapp, Jessica Matthews, Andrew Heidinger, M Takahashi, A Okuyama, S. Balakrisnan, A Mitra
Affiliations: EUMETSAT, NOAA, Japan Meteorological Agency, India Meteorological Department
Sustained observation and data provision capabilities are crucial for monitoring weather and climate. EUMETSAT, NOAA, and JMA are collaborating on developing a GEO-Ring radiance data record, a significantly improved baseline compared to the heritage GEWEX ISCCP radiances used for decades. The aim is to provide users with the best possible and longest (1978 to today) GEO radiance climatology for all available spectral channels, with 30-min temporal sampling around the Earth. The result of this effort will be a 40+ year record that can be used for global and regional applications (solar energy, precipitation and drought, global and regional reanalyses, weather prediction with machine learning, extreme weather analysis, etc.) and will also contribute to the generation of GCOS Essential Climate Variables (ECV). The new data record will contain much rescued satellite data, adding value to the radiance record: ten years of data from SMS-1/2 and GOES 1-7 enable 30-min sampling for the early years over the Americas. Later GOES satellites had a different scanning strategy, delivering a full-disc scan only every 3 hours; for those satellites, the existing partial scans are nevertheless stitched together to enable 30-min sampling throughout the record. In addition, there is also the possibility of filling the data gap over the Indian Ocean prior to 1998, thanks to a recently started collaboration with the India Meteorological Department, which is the custodian of data from INSAT, the Indian geostationary satellites. Consistent quality control and recalibration have been applied to all geostationary measurements, which contain a significant number of radiometric anomalies. Automated algorithms have been developed to detect and flag the anomalies and to include the information in the data files so that users are able to exclude them from their applications.
This strongly enhances the usability of images, e.g., in eclipse situations where only small parts of an image are affected but the whole image was previously discarded from downstream processing into ECVs. Recalibration using reference measurements has considerably reduced the radiometric biases of these measurements, by up to 3 K for the early satellites, and has significantly improved the temporal stability of the time series. The workflow generates a Fundamental Climate Data Record for each satellite series operated at the same orbit position. From these, gridded data records will be created per orbit position with a nominal resolution of 30 min and 5 km. The individual satellite series will also be combined into a quasi-global gridded product intended to simplify usage. Several post-processing steps, such as the limb correction needed for storm tracking and precipitation applications, will either be incorporated into the data or provided through tooling. A test dataset covering 6 years (2019 - 2024) is planned for release to the community during the first half of 2025 to gather feedback on its usage. The data processing and distribution will be done using state-of-the-art cloud computing technology at EUMETSAT and NOAA.
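As a minimal illustration of how a user of the record might apply the per-sample anomaly flags described above (variable names are hypothetical; this is not the record's actual file layout):

```python
def mask_flagged(values, flags):
    """Keep only samples whose anomaly flag is not set."""
    return [v for v, f in zip(values, flags) if not f]

# Hypothetical radiance samples; the third one carries an anomaly flag.
radiances = [280.0, 281.5, 9999.0, 279.8]
anomaly_flags = [0, 0, 1, 0]

good = mask_flagged(radiances, anomaly_flags)
mean_radiance = sum(good) / len(good)  # statistics over unflagged samples only
```

The flags travel with the data files, so downstream ECV processing can keep the usable parts of an image (e.g. outside an eclipse-affected region) instead of discarding the whole scene.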
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall F1)

Presentation: Clouds Decoded: Repurposing Sentinel-2 for High Resolution Cloud Property Retrieval

Authors: Alistair Francis, Dr Jacqueline Campbell, Dr Mikolaj Czerkawski
Affiliations: Asterisk Labs, European Space Agency
Cloud-climate interactions constitute the largest remaining uncertainty in our estimates of climate sensitivity, altering the Earth's radiation budget in complex ways. Cloud processes are difficult to model and understand because they are intrinsically multi-scalar: everything from their microphysical properties to their mesoscale structures influences their evolution and their impact on the climate. A large array of remote sensing instruments has been developed to retrieve these microphysical and macro-scale properties, from active sensors which use the scattering effects of particles to measure their microphysical nature, to passive sensors that recover a cloud's radiative properties by observing its emission and reflection at different wavelengths. Among these passive sensors, the spatial resolutions typically used are on the order of several hundred metres or more. However, there is a growing recognition that higher-resolution spatial measurements will aid in understanding the finer-scale structure of clouds, beyond the resolution of sensors such as MODIS and MISR. For example, the growing body of work on 3D high-resolution radiative transfer requires knowledge of the 3D structure and morphology of clouds in order to simulate the complex reflection, emission and absorption of realistic, non-uniform clouds. Meanwhile, it is thought that ice nucleation in high clouds can exhibit strong spatial heterogeneity, with particles freezing in small areas first before spreading outwards, below current instruments' resolution limit. In this presentation, we will explore the opportunities and challenges that arise when attempting to retrieve physical cloud properties from a class of satellite that was not designed for that purpose: high-resolution multispectral sensors.
In particular, we will show how Sentinel-2, a satellite with 10 m/pixel spatial resolution, and 13 spectral channels ranging from 443-2190 nm, can be used to retrieve various cloud properties. Lacking thermal channels, estimation of cloud height via other means is critical. Fortunately, Sentinel-2 data offers two independent means by which to calculate cloud height, which we can cross-validate against one another. First, by directly measuring the distance between a point on a cloud and the corresponding point of its shadow, which gives the height for a known viewing geometry and sun angle. Second, by exploiting the small time difference between the capture of Sentinel-2’s bands, which leads to a small channel-wise misalignment for objects not on the surface of Earth, proportional to the height of the cloud. Ice properties, specifically the effective particle radius and ice/water ratio, can be estimated by considering the relative intensities of reflected light in the short-wave infrared and the visible. The Sentinel-2 detector has both a 1610 nm and a 2190 nm band whose radiative properties vary as a function of the ice particle size and concentration, as well as a 1375 nm channel which can be used for estimating the thickness of high icy clouds, when absorption from water vapour above the cloud is accounted for. In summary, we will show that Sentinel-2 and other satellites in this resolution range represent a unique, challenging, but nevertheless useful tool that can complement existing cloud sensing techniques. Of course, severe limitations exist when using their data: the sun-synchronous orbits mean data is only collected at ~10 am local time; products for the majority of the Earth’s open ocean are not downlinked; they have a 5-day revisit time; data is collected only at (or close to) nadir; and they lack thermal channels. 
All of these, however, could be remedied in future missions, and we hope that the results presented here will act as a catalyst for the prioritization of cloud sensing—particularly over the open ocean— during the planning of the Sentinel-2 Next Generation, to be launched in the early 2030s.
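Both cloud-height estimates described above reduce to simple geometry. The following sketch (illustrative function names, nadir viewing assumed; not the authors' retrieval code) shows the two relations:

```python
import math

def cloud_height_from_shadow(shadow_offset_m, sun_elevation_deg):
    """Cloud height from the horizontal cloud-to-shadow distance.

    For sun elevation angle e and a nadir-viewing sensor, a cloud at
    height h casts its shadow a horizontal distance d = h / tan(e) away,
    so h = d * tan(e).
    """
    return shadow_offset_m * math.tan(math.radians(sun_elevation_deg))

def cloud_height_from_parallax(band_shift_m, parallax_angle_rad):
    """Cloud height from the channel-wise misalignment caused by the
    small viewing-angle difference between band acquisitions.

    An object at height h shifts by roughly h * tan(delta) between two
    bands separated by parallax angle delta, so h ~= shift / tan(delta).
    """
    return band_shift_m / math.tan(parallax_angle_rad)

# A shadow displaced 5 km horizontally at 45 degrees sun elevation
# corresponds to a cloud at 5 km altitude.
h_shadow = cloud_height_from_shadow(5000.0, 45.0)
```

Cross-validating the two independent estimates, as the abstract proposes, amounts to checking that `h_shadow` agrees with the parallax-derived height for the same cloud.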
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall F1)

Presentation: Advancing our understanding of deep convective systems from GEOring observations and the latest generation of cloud tracking algorithms

Authors: Thomas Fiolleau, Dr Remy Roca
Affiliations: CNRS/LEGOS
Deep convective systems (DCS) play a pivotal role in the hydrological and energy cycles of tropical regions. They are primarily responsible for extreme precipitation and contribute significantly to radiative feedback processes through their anvil clouds. Understanding the lifetime duration of these systems, as well as the evolution of the expansive cold cloud shield throughout their lifecycle, poses substantial challenges in the tropics, both under current climatic conditions and in a warmer, moister future climate. Global satellite observations provide invaluable resources for refining theoretical models of DCS. In particular, high-frequency imagery from the fleet of multi-agency meteorological geostationary satellites, referred to as the GEOring IR dataset, enables crucial statistical analyses, which are vital for validating or challenging our current understanding of these systems. Moreover, global convection-permitting models at kilometre-scale resolution, capable of simulating individual convective events, require thorough validation against observational datasets to assess their representation of convective organization. However, an accurate characterization of DCS across the entire tropical belt requires specific homogenization of the infrared (IR) GEOring dataset. Indeed, the GEOring IR dataset exhibits a number of sources of inhomogeneity, ranging from temporal and spatial resolutions and spectral window channel characteristics to calibration methodologies. To overcome these issues, we have conducted a harmonization procedure to minimize differences between geostationary sensors and create a homogeneous global archive of geostationary infrared datasets for the 2012-2020 period. This dataset aims to support the study of deep convective systems by providing high-resolution, consistent temporal and spatial data, which are essential for characterizing convective systems and their global organization.
The cloud tracking algorithm TOOCAN has been applied to this homogenized IR dataset to produce a database of Deep Convective Systems (DCS) across the entire intertropical belt, covering the 2012-2020 period. This database has facilitated numerous advances in understanding the organization of convection, the internal processes of deep convection, and the extreme rainfall associated with DCS, among other topics. In this presentation, we will showcase some significant advancements in understanding tropical convective systems from this database. Simultaneously, as part of the Mesoscale Convective Systems Tracking Method Intercomparison (MCSMIP) exercise, the TOOCAN algorithm has been applied to a set of global convection-permitting models from the DYAMOND exercise, underscoring the need for consistent global data to advance research on convective processes and comparison with GEOring observations. We will also demonstrate the capability of cloud tracking algorithms such as TOOCAN in evaluating the DYAMOND kilometre-scale global simulations against homogenized GEOring IR observations. In the near future, the multi-spectral ISCCP-NG GEOring dataset will further expand our results, enabling enhanced monitoring of deep convection organization and providing climatological analyses of tropical convective systems. We will finally demonstrate the strong value of using the upcoming ISCCP-NG L1g GEOring dataset and advanced cloud tracking algorithms, like TOOCAN, to gain deeper insights into the evolution of DCS, ultimately contributing to a better understanding of convective processes and their variability over time.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall F1)

Presentation: Using ISCCP-NG cloud properties to study cloud glaciation

Authors: Martin Stengel, Jan Fokke Meirink, Karl-Göran Karlsson, Salomon Eliasson, Irina Holtermann
Affiliations: DWD, KNMI, SMHI
In liquid clouds colder than 0°C (super-cooled liquid clouds), aerosols can act as ice nucleating particles (INPs), initiating ice crystal formation (heterogeneous freezing) at warmer sub-zero temperatures than in pristine environments, in which water droplets can remain liquid down to about -38°C, where homogeneous freezing starts. The phase of a cloud, liquid or ice, is of crucial importance for its development and for precipitation formation, but very importantly also for its radiative properties: liquid clouds, for example, reflect significantly more solar radiation than ice clouds. While laboratory studies have quantified the ability of different aerosol types to trigger heterogeneous freezing, and thus the temperatures at which super-cooled liquid clouds glaciate, observation-based studies have mostly been limited to lidar or aircraft observations, which by construction provide only limited spatial representativeness and context. A large-scale (ideally global) observation-based quantification of the impact of aerosols on cloud glaciation is still missing. This can only be achieved with satellite data, with the requirement that the data provide the relevant cloud and aerosol properties with sufficient accuracy. In this presentation, we use ISCCP-NG geo-ring cloud properties based on CM SAF and ESA Cloud_cci developments to monitor cloud glaciation temperatures. We combine these results with satellite-based information on dust aerosol optical depth and layer height. Mineral dust can be assumed to be the dominant INP source for cloud glaciation at temperatures colder than −15°C. We show if and how the observed cloud glaciation depends on dust aerosol occurrence. We further investigate whether the results also depend on the large-scale synoptic conditions or on cloud type.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall F1)

Presentation: First global assessment of cloud geometrical thickness from TROPOMI on Sentinel 5P

Authors: Luca Lelli, Andrew Sayer, Klaus Bramstedt, Marco Vountas, Dr. Víctor Molina García, Athina Argyrouli, Diego G. Loyola
Affiliations: German Aerospace Center (DLR), Institute of Environmental Physics and Remote Sensing, University of Bremen, NASA Goddard Space Flight Center
The retrieval of cloud properties from satellite measurements has wide-ranging applications, including light path correction for atmospheric composition, assessment of Earth's radiation budget, studies of aerosol-cloud interactions, and meteorology. One parameter that has received relatively little attention to date is the height of the cloud base and the derived geometrical thickness. This is largely due to the significant attenuation of light when tropospheric clouds are highly opaque at optical wavelengths. After briefly presenting a solution to the radiative transfer problem in the molecular oxygen absorption band measured by the TROPOMI instrument aboard the Sentinel-5P satellite, this study applies three independent algorithms - SACURA, CHROMA, and ROCINN - to the same set of measurements and derives one year of global cloud base altitude, from which the geometrical thickness can be inferred. The validation of the derived cloud parameters, including top and bottom altitude, cloud phase, and optical thickness, sets the stage for the potential creation of a long-term data record for climate research, considering that future missions such as Sentinel-4 on MTG and Sentinel-5 on EPS-SG will provide similar spectral coverage.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall F1)

Presentation: Assessment of Cloud Products Retrieved From the Geostationary Ring of Meteorological Satellites

Authors: Jan Fokke Meirink, Caroline Poulsen, Vincent
Affiliations: Royal Netherlands Meteorological Institute (KNMI), Bureau of Meteorology (BOM)
Over the past decades the capabilities of geostationary (Geo) meteorological satellite imagers have increased dramatically. The current generation of Geo imagers encircling the Earth in the so-called Geo-Ring have around 10 common spectral channels. This enables retrieving a wide range of cloud products at high temporal and spatial resolution with near-global coverage. Within the scientific community, a variety of cloud property retrieval algorithms for meteorological satellite imagers exists, and corresponding data records are available. This community gathers in the CGMS International Cloud Working Group (ICWG) to exchange ideas and experiences. A recurring activity in the ICWG and its predecessor Cloud Retrieval and Evaluation Workshop (CREW) has been to organize dedicated assessments of the various cloud products. Motivated by the enhanced capabilities of the Geo-Ring imagers and the significant evolution of cloud retrieval algorithms over the past years, preparations for a new assessment were made in 2024. This ties in with the Next Generation of the International Satellite Cloud Climatology Project (ISCCP-NG), in which a new baseline for the Geo-Ring data and processing methods is being defined. The plan is to follow a two-step approach. Firstly, participants are invited to submit Geo-Ring cloud products for a chosen day in 2021. The main purpose here is to develop the protocols and intercompare the datasets. Secondly, more recent dates/timeslots are selected based on the availability of suitable collocations with EarthCARE and corresponding targeted validation campaigns. This allows additional evaluation of the cloud products with independent observations. Software for the collocation, intercomparison and validation of the datasets will – building on existing tools – be developed in an open source framework. In this presentation, the setup of the Geo-Ring cloud product assessment will be outlined. 
The status of the project will be reported, including first results of the intercomparison and validation efforts. Finally, an outlook on the remainder of the project will be provided.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall L3)

Session: C.06.02 Advances in Calibration and Product Validation for Optical Sensors - PART 1

Accurate calibration and validation lie at the heart of any optical satellite sensor system and are of paramount importance for delivering the high-quality satellite products whose use is growing exponentially, both for operational environmental monitoring and for long-term climate research applications. Calibration may be achieved by dedicated on-board systems or by independent vicarious calibration targets, and the need for sensor intercalibration and collaboration between satellite operators has become essential to provide such products to the optical satellite user community. In this context, the need for common reference sources and protocols has been recognized, as have mission scenarios such as dedicated tandem campaigns to achieve sensor intercalibration and bias assessments. Moreover, precise calibration is a prerequisite for high-quality geophysical Level-2 parameters retrieved from satellite data, and bringing independent validation of these parameters together with sensor calibration is an essential activity to generate accurate geophysical parameter maps at satellite scale. In the context of the CEOS Working Group Cal/Val (WGCV) Land Product Validation (LPV) efforts, ESA has led and collaborated on many such activities, establishing the concept of ground-based Fiducial Reference Measurements and LPV Supersites as a firm reference for ground-based validation of satellite measurements. The aim is to provide well-calibrated and validated Level 1 and 2 products to the satellite data user community at large, including uncertainty budgets traceable to SI and cal/val protocols. This session will provide an overview of the latest state-of-the-art cal/val activities for optical sensors, including novel calibration approaches and new ground-based validation networks used for land product validation.

Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall L3)

Presentation: Cal/Val Park: project status and future plans

Authors: Fabrizio Niro, Roberto Colombo, Sergio Cogliati, Lorenzo Genesio, Franco Miglietta, Béatrice Berthelot, Sébastien Saunier, Niall Origo, Joel Kuusk, Sultan Kocaman, Noelle Cremer, Massimo Cardaci, Stefano Casadio, Marco Galli, Leonardo De Laurentiis, Valentina Boccia
Affiliations: Serco for ESA/ESRIN, University Milano Bicocca, CNR-IBE, FCS, Magellium, Telespazio-FR, NPL, University of Tartu, Hacettepe University, ESA-ESRIN
The Cal/Val Park is a collaborative effort among several stakeholders (e.g. space agencies and institutions, commercial satellite data providers, etc.) and is currently led by the European Space Agency (ESA). The overarching goal of the project is to design, set up and operate a new ground-based calibration and validation (cal/val) site supporting the post-launch cal/val needs of multiple land imaging satellite sensors. The Cal/Val Park is a response to the ever-evolving and advancing capabilities in Earth Observation (EO), particularly the rising number and increasing complexity of spaceborne remote sensing systems developed by both institutional agencies and commercial New Space companies. Furthermore, it aims to fill existing gaps in the availability of cal/val data, specifically of multi-angular, spectrally resolved surface reflectance measurements and of reference data for assessing the geometry and image quality of very-high-resolution sensors. The Cal/Val Park is primarily designed for optical missions operating in the visible to shortwave infrared range (400 nm - 2500 nm), encompassing both multispectral and hyperspectral sensors with a Ground Sampling Distance (GSD) of up to 30 meters. The Cal/Val Park will be implemented in three consecutive phases: design, set-up, and operations. Presently, the project is in the design phase, scheduled to be completed by mid-2025. This paper provides an update on the current status and future activities of the Cal/Val Park project. Specifically, it outlines the consolidated design of the site, including its location, layout, proposed suite of sensors, and the radiometric, geometric and image quality targets under consideration. The chosen location is a flat, rural area in southern Tuscany, central Italy, situated away from heavily urbanized zones and characterized by a relatively low probability of cloud cover throughout the year.
Within this area we have identified a primary site, where the majority of targets and sensors will be installed. Complementing this main site are nearby ancillary locations and natural/man-made features, which will be used to acquire supplementary data for radiometric and geometric assessment. The layout of the primary site includes five radiometric targets (120 m x 120 m) and a larger one (150 m x 150 m), made of different materials (black and white gravels, stabilised brown material, bare soil and maintained lawn) spanning a wide brightness range. These targets enable validation of the full processing chain, from Level 1 Top-Of-Atmosphere (TOA) to Surface Reflectance (SR) and higher-level derived land products (vegetation, raw materials). Ground-based surface spectral measurements over these targets will be performed continuously with tower-mounted pointable spectrometers, following the consolidated setup and best practices of existing networks, namely RadCalNet and Hypernets. Additionally, dense spatial sampling at high spectral resolution will be ensured by a full-range (400-2500 nm) spectroradiometer mounted on a moveable and fully automated robotic platform. The image quality targets include a large checkerboard (120 m x 120 m) and a modular set of coloured drawn patterns to assess the spatial performance of a wide range of optical sensors, with a focus on very-high-resolution missions (GSD < 5 m). Complementary ground-based instruments will be collocated at the site, notably for characterizing atmospheric state parameters, specifically aerosol and water vapor, to accurately simulate the at-sensor signal in a vicarious calibration approach. Synergies with existing and planned cal/val initiatives will be exploited, fostering cooperation across existing sites and networks and promoting harmonization of validation practices, in line with the metrological guidelines set forth for Fiducial Reference Measurements (FRM).
The site will include an area, named 'Playground,' for ad-hoc field campaigns, cross-calibration activities, and testing new and advanced sensors, platforms and targets. The design will be scalable and flexible to accommodate future cal/val needs and expansion of the domain of applicability to other types of sensors, notably thermal high-resolution, atmospheric, and SAR sensors. The set of additional ground-based sensors and ancillary targets to address these sensors' needs has already been identified. The final technical solution will be consolidated at the end of the design phase, providing the foundation for the implementation phase. The long-term objective is to set up an operational site, providing cal/val reference data continuously for at least 10 years.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall L3)

Presentation: A Web-Based Platform for Validating Satellite-Derived Snow Cover Products in Near Real-Time Using In-Situ Measurements and Webcam Imagery Networks

Authors: Cemal Melih Tanis, Dr. Ali Nadir Arslan, Dr. Kari Luojus
Affiliations: Finnish Meteorological Institute
The Global Observing System for Climate identifies snow cover as one of the 50 essential climate variables critical for monitoring climate trends via satellite remote sensing. Organizations like EUMETSAT and Copernicus contribute significantly to this field by offering multiple snow cover products through programs such as H-SAF and the Copernicus Land Monitoring Service. To ensure the long-term utility and reliability of these products, their validation remains essential. Equally important is the capability to assess the quality of new satellite products promptly, ideally from the first day of their operational availability. Our work focuses on developing an online validation platform aimed at enabling rapid quality assessment of newly released snow cover products, while also providing comprehensive validation results for existing operational products in near real-time. The platform leverages in-situ measurements, including snow depth and snow state data, which have long been utilized as reference datasets for satellite validation. These datasets, while varying in spatial and temporal density by region, are widely available in areas where snowfall occurs, thanks to sources such as the World Meteorological Organization Global Telecommunications System (WMO GTS), the Finnish Environment Institute (SYKE), the Finnish Meteorological Institute (FMI), and other data-providing entities. In recent years, webcam image processing has emerged as a novel method for environmental data acquisition. Using digital imagery from publicly available environmental camera networks, parameters such as snow cover and snow depth can be derived. The FMIPROT Camera Network Portal aggregates data and metadata from multiple such networks, offering near real-time snow cover observations through a simple HTTP API. Our recent studies confirm that snow cover estimates derived from webcam imagery serve as a valuable resource for validating satellite-derived snow cover products. 
To integrate these resources, we developed the "FMI ARK SAT QA Portal," a validation platform that compares satellite-derived snow cover data with both in-situ and webcam-derived reference data. Built within the ESA IDEAS-QA4EO framework, the system operates on a virtual machine that fetches reference data from the FMIPROT Camera Network Portal API, FMI Open Data API, FMI internal data APIs, and additional sources such as the MONIMET camera network, Finnish airport cameras, and the PhenoCam Network. Satellite product data is retrieved, synchronized with reference data temporally, and stored locally for efficient access. The platform's client-side web interface enables users to perform comparisons, producing outputs such as RMSE values, contingency tables, and other performance metrics. These results are visualized interactively through maps, scatter plots, and tables. Hosted on an FMI server, the portal’s modular design allows for easy migration (with most of the reference data) to other cloud-based systems, ensuring adaptability and scalability. This approach provides a robust solution for ongoing validation of snow cover products, enhancing the reliability of satellite-derived climate data.
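The performance metrics the portal reports can be sketched in a few lines (illustrative only; this is not the portal's actual implementation):

```python
import math

def rmse(predicted, reference):
    """Root-mean-square error between satellite and reference values
    (e.g. snow depth)."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

def contingency_table(sat_binary, ref_binary):
    """2x2 table for binary snow/no-snow flags: hits, false alarms,
    misses and correct negatives."""
    table = {"hits": 0, "false_alarms": 0, "misses": 0, "correct_negatives": 0}
    for s, r in zip(sat_binary, ref_binary):
        if s and r:
            table["hits"] += 1
        elif s and not r:
            table["false_alarms"] += 1
        elif r:
            table["misses"] += 1
        else:
            table["correct_negatives"] += 1
    return table
```

From the contingency table, standard skill scores (probability of detection, false alarm ratio, etc.) follow directly, which is why it is a convenient intermediate product for the interactive maps and tables.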
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall L3)

Presentation: Potential of distributed Wireless PAR Sensor Networks for validating satellite-derived vegetation products

Authors: Somnath Paramanik, Harry Morris, Rémi Grousset, Gabriele Bai, Christophe Lerebourg, Ernesto Lopez-Baeza, Ana Pérez-Hoyos, David Garcia-Rodriguez, Alexander Knohl, Anne Klosterhalfen, Frank Tiedemann, Marco Clerici, Nadine Gobron, Luke Brown, Mr Finn James, Fabrizio Niro, Stefan Maier, Professor Jadu Dash
Affiliations: University of Southampton, National Physical Laboratory, ACRI-ST, Albavalor, University Research Institute on Robotics and Information and Communication Technologies (IRTIC), Universitat de Valencia, University of Göttingen, European Commission Joint Research Centre, University of Salford, European Space Agency Centre for Earth Observation, ESA-ESRIN, maitec
Accurate and continuous estimation of Leaf Area Index (LAI) and the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) is essential for understanding ecosystem processes and informing agricultural, ecological, and climate models. Both LAI and FAPAR are produced operationally from a range of satellite sensors (e.g. VIIRS, Sentinel-3 OLCI) using different algorithms. These applications align with the objectives of ESA's Climate Change Initiative (CCI) and the EU Copernicus Land Monitoring and Climate Change Services to monitor and model key environmental processes. Validation of these satellite-derived products is essential to ensure they meet the standard required for specific downstream applications, such as crop yield forecasting and carbon dynamics. In this paper, we propose a novel framework that uses wireless Photosynthetically Active Radiation (PAR) sensor networks to estimate both FAPAR and LAI automatically and continuously, overcoming the logistical limitations of traditional field campaigns. By deploying the PAR sensor network in diverse ecosystems (a vineyard in Valencia, a natural forest in Hainich, and a savanna in Litchfield), we aim to capture detailed temporal variations in light attenuation through canopies. FAPAR is estimated by balancing the incoming and outgoing solar radiation fluxes measured by each installed sensor within an elementary sampling unit (ESU). Simultaneously, LAI can be derived from the transmission of PAR through the canopy, using the above- and below-canopy PAR measurements. These high-resolution, continuous measurements provide significant advantages in temporal coverage and accuracy over traditional methods. The resulting FAPAR and LAI estimates will be used to validate global vegetation products (e.g., VIIRS, Sentinel-3 OLCI) by upscaling vegetation-index-based multitemporal transfer functions derived from Sentinel-2.
This dual validation framework bridges the gap between ground-based measurements and satellite observations, strengthening the reliability of global land products. Our estimates are intended to serve as reliable reference data for the ESA Climate Change Initiative's effort to generate Essential Climate Variables (ECVs) and to support the monitoring and modeling objectives of the EU Copernicus programs. By integrating these capabilities into existing frameworks, our approach can significantly enhance the accuracy of satellite-derived land products, advancing global-scale understanding of vegetation dynamics and supporting international climate and ecological modeling efforts.
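The flux-balance and transmittance relations described above can be sketched with a generic four-flux FAPAR formulation and a Beer-Lambert LAI inversion (a common textbook approach, assumed here for illustration; not necessarily the exact scheme used in this study):

```python
import math

def fapar(par_in, par_reflected_canopy, par_below, par_reflected_soil):
    """Four-flux FAPAR: fraction of incoming PAR absorbed by the canopy.

    par_in               -- downwelling PAR above the canopy
    par_reflected_canopy -- upwelling PAR above the canopy
    par_below            -- downwelling PAR below the canopy
    par_reflected_soil   -- upwelling PAR below the canopy (soil reflection)
    """
    absorbed = par_in - par_reflected_canopy - par_below + par_reflected_soil
    return absorbed / par_in

def lai_from_transmittance(par_below, par_in, k=0.5):
    """Effective LAI from canopy transmittance via Beer-Lambert:
    T = exp(-k * LAI)  =>  LAI = -ln(T) / k, with extinction
    coefficient k (~0.5 for a spherical leaf-angle distribution)."""
    return -math.log(par_below / par_in) / k
```

Because the sensors log these four fluxes continuously, both quantities can be recomputed at every time step, which is what gives the network its advantage over one-off field campaigns.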
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall L3)

Presentation: Evaluation of atmospheric correction methods for hyperspectral PRISMA data in inland and coastal waters: first results from ACIX-III Aqua

Authors: Claudia Giardino, Dr Nima Pahlevan, Alice Fabbretto, Andrea Pellegrino, Lodovica Panizza, Mariano Bresciani, Quinten Vanhellemont, Yulun Wu, Anders Knudby, Hendrik Bernert, Tristan Harmel, François Steinmetz, Liesbeth De Keukelaere, Dr.ssa Federica Braga, Vittorio Brando, Susanne Kratzer, Thomas Schroeder, Ana Dogliotti, Dr Mortimer Werther, Ryan Vandermeulen, Georgia Doxani, Noelle Cremer, Ferran Gascon, Kevin Alonso Gonzalez
Affiliations: CNR-IREA, NASA Goddard Space Flight Center, University of Sapienza, Royal Belgian Institute of Natural Sciences, University of Ottawa, EOMAP GmbH & Co. KG, Magellium, Earth Observation Unit, HYGEOS, VITO, CNR-ISMAR, Stockholm University, CSIRO, IAFE, EAWAG, National Oceanic and Atmospheric Administration, ESA
Atmospheric correction (AC) over inland and coastal waters is one of the major challenges in aquatic remote sensing, often increasing the quantitative retrieval errors of biogeochemical variables. Accurate AC is a necessary step for obtaining the accurate remote-sensing reflectance (Rrs) required to generate aquatic remote sensing products. There have been multiple studies and projects addressing questions relating to the AC of satellite data gathered by optical sensors; one of these was the Atmospheric Correction Intercomparison eXercise (ACIX), jointly conducted by NASA and ESA in coordination with the CEOS validation group. The initial exercise (ACIX-I; Doxani et al. (2018)) focused on land and mostly evaluated aerosol optical thickness retrieval performance. The second exercise (ACIX-II Aqua) was then completed with the aim of providing an assessment over water. Eight AC processors were applied to multi-spectral datasets gathered by Landsat-8 OLI and Sentinel-2A/B MSI, both of which are increasingly used and recognised for research and applications over inland and coastal waters. The ACIX-II Aqua results were published in Pahlevan et al. (2021), presenting a comprehensive assessment of the performance of AC processors, achieved through significant efforts in collating in situ datasets. Building on the success of ACIX-II Aqua, a follow-up initiative, ACIX-III Aqua, was launched within the same framework. This effort focuses on testing AC processors using hyperspectral images. This contribution provides an overview of the exercise, detailing the datasets, methods and initial results. Started around three years ago, ACIX-III Aqua is coordinated by ESA and NASA with the support of ASI.
Developed similarly to ACIX-II Aqua (e.g., by using the same match-up protocol and statistical performance metrics), it aims to test AC processors on satellite hyperspectral data, such as those acquired by the Italian Space Agency (ASI) satellite hyperspectral mission PRecursore IperSpettrale della Missione Operativa (PRISMA). Launched in March 2019, the PRISMA mission has acquired a substantial number of images over various marine and fresh waters where in situ measurements were collected, enabling match-up generation with PRISMA and the AC comparison exercise. As in ACIX-II Aqua, the in situ dataset for ACIX-III Aqua originates from two sources: hyperspectral data contributed by the international community (the Community Validation Database, CVD) and multispectral measurements from the AERONET-OC network. This extensive dataset enables both the evaluation of the AC processors (using all available match-ups) and comparative analysis based on shared match-ups. Five AC processors are participating in ACIX-III Aqua: ACOLITE, hGRS, iCOR, MIP and POLYMER. Additionally, the standard AC product of PRISMA (L2C) was included in the comparison. Moreover, T-Mart (Wu et al., 2024), which focuses solely on adjacency effect correction, was also evaluated in combination with ACOLITE. The imagery data used in ACIX-III Aqua are the PRISMA L1 data (version 4.1-0) and PRISMA L2C data (version 02.05), the former providing ancillary data for running the AC methods and the latter providing the standard PRISMA reflectance for comparison with the other AC processors. PRISMA data were selected based on synchronicity with the in situ data and on the percentage of cloud cover, which was required to be less than 2%. The final dataset consists of 174 images with in situ data derived from the AERONET-OC network and 65 images with in situ data obtained from the CVD, for a total of 239 images.
The match-up analysis is provided both per site and across seven different Optical Water Types (OWTs); the Quality Water Index Polynomial (QWIP) (Dierssen et al., 2022) is also included in the analysis to identify Rrs spectra that fall outside the general trends observed in aquatic optics for optically deep inland and coastal waters. So far the ACIX-III Aqua results seem consistent with the previous findings of ACIX-II Aqua (Pahlevan et al., 2021). In particular, we observe the largest differences between satellite and field data in the blue and NIR bands, and each AC processor shows a different degree of accuracy depending on the OWT. Further results will be presented based on the latest progress in data analysis to understand how the variations in AC performance are related to the increased hyperspectral information content of PRISMA, and how the results of this exercise compare to those obtained with other satellites (e.g. EnMAP; Soppa et al., 2024), to support the generation of improved products for aquatic applications from hyperspectral data collected by current and future missions (e.g. CHIME). References - Dierssen, H. M., Vandermeulen, R. A., Barnes, B. B., Castagna, A., Knaeps, E., & Vanhellemont, Q. (2022). QWIP: A quantitative metric for quality control of aquatic reflectance spectral shape using the apparent visible wavelength. Frontiers in Remote Sensing, 3, 869611. - Doxani, G., Vermote, E., Roger, J. C., Gascon, F., Adriaensen, S., Frantz, D., ... & Vanhellemont, Q. (2018). Atmospheric correction inter-comparison exercise. Remote Sensing, 10(2), 352. - Pahlevan, N., Mangin, A., Balasubramanian, S. V., Smith, B., Alikas, K., Arai, K., ... & Warren, M. (2021). ACIX-Aqua: A global assessment of atmospheric correction methods for Landsat-8 and Sentinel-2 over lakes, rivers, and coastal waters. Remote Sensing of Environment, 258, 112366. - Soppa, M. A., Brell, M., Chabrillat, S., Alvarado, L. M., Gege, P., Plattner, S., ... & Bracher, A. (2024). Full mission evaluation of EnMAP water leaving reflectance products using three atmospheric correction processors. Optics Express, 32(16), 28215-28230. - Wu, Y., Knudby, A., Pahlevan, N., Lapen, D., & Zeng, C. (2024). Sensor-generic adjacency-effect correction for remote sensing of coastal and inland waters. Remote Sensing of Environment, 315, 114433.
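The exercise reuses the ACIX-II Aqua match-up protocol and statistical performance metrics. A minimal sketch of the kind of per-band error statistics typically reported in such intercomparisons follows; the function name, metric choice and toy values are illustrative, not the exercise's actual protocol.

```python
import math

def matchup_stats(rrs_sat, rrs_insitu):
    """Per-band error statistics between satellite and in-situ Rrs.

    Both inputs are lists of reflectance values for matched pixels/stations
    at one wavelength.  Returns bias (mean difference), RMSD, and MAPD
    (median absolute percentage difference), three metrics commonly
    reported in AC intercomparisons.
    """
    diffs = [s - i for s, i in zip(rrs_sat, rrs_insitu)]
    bias = sum(diffs) / len(diffs)
    rmsd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    apd = sorted(abs(d) / abs(i) * 100.0 for d, i in zip(diffs, rrs_insitu))
    n = len(apd)
    mapd = apd[n // 2] if n % 2 else 0.5 * (apd[n // 2 - 1] + apd[n // 2])
    return bias, rmsd, mapd

# toy example: three match-ups at a single blue band (values are invented)
sat = [0.0050, 0.0042, 0.0061]
insitu = [0.0048, 0.0045, 0.0055]
bias, rmsd, mapd = matchup_stats(sat, insitu)
```

Computed per band, such statistics allow the per-site and per-OWT breakdowns described in the abstract.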
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall L3)

Presentation: FRMOCnet: A Network of Radiometric Measurements With the FRM Quality

Authors: Riho Vendt, Viktor Vabson, Krista Alikas, Ilmar Ansko, Joel Kuusk, Kim Duong, Martin Ligi, Giuseppe Zibordi, Christophe Lerebourg, Alexis Deru, Gabriele Bai, Marie Bretagnon, Carsten Brockmann, Uwe Lange, Sabine Embacher, Ruchi Motwani, Vittorio Brando, Agnieszka Bialek, Ashley Ramsay, Gavin Tilstone, Thomas Jordan, Kevin Ruddick, Heloïse Lavigne, Giorgio Dall'Olmo, Dirk Aurin, Juan I. Gossn, Ewa Kwiatkowska
Affiliations: University of Tartu, EOScience, ACRI-ST, Brockmann Consult GmbH, CNR-ISMAR, NPL, PML, RBINS, OGS, NASA, EUMETSAT
A Fiducial Reference Measurement (FRM) is: “a suite of independent, fully characterised, and traceable (to a community agreed reference, ideally SI) measurements of a satellite relevant measurand, tailored specifically to address the calibration/validation needs of a class of satellite-borne sensor and that follow the guidelines outlined by the GEO/CEOS Quality Assurance framework for Earth Observation (QA4EO)”. The European Space Agency launched the first phase of the FRM4SOC project between 2016 and 2019 to enhance Ocean Colour (OC) data validation through a set of proof-of-concept tasks. FRM4SOC Phase 2 (https://frm4soc2.eumetsat.int ), which includes two optional extensions, is financed by the European Commission’s Copernicus Programme and implemented by EUMETSAT. The main two-year period of the project commenced in April 2021, its first optional one-year extension began in July 2023, and the second optional extension started in 2024 and runs until October 2025. The FRM4SOC Phase 2 study leverages and consolidates findings from global research conducted over the years, including the successes of the first phase of FRM4SOC and the new experiments in the current phase. Significant effort has been invested in the thorough calibration and characterisation of the most prevalent types of field radiometers used by the OC community, along with the evaluation of associated uncertainty budgets. The developed guidelines, uncertainty budgets, and tools have been tested in dedicated laboratory and field comparisons. All these efforts aim to implement the principles of FRM and support the community in ensuring their adoption. As an outcome of this work, an operational network of radiometric measurements (FRMOCnet) has been established to ensure the quality of in-situ above-water measurements for satellite product validation and algorithm development in the OC domain.
The network assists the OC community in collecting high-quality in-situ measurement data by formulating practical guidelines for calibration, characterisation and usage of radiometric sensors, in-field measurement procedures, data handling, assessment of measurement uncertainties, and execution of comparison measurements. Extensive training for researchers in the field is provided to strive for a common understanding and implementation of these principles and guidelines. For example, the first Copernicus FRM4SOC 2024 training (FICE2024) on the collection and processing of high-quality data in water remote sensing was conducted in Venice at CNR-ISMAR from 6th to 17th May 2024. Also, sensor calibrations supported by the Copernicus Programme, a complete, fully operational, calibrated and characterised in-situ above-water OC radiometric system for short-term loans, and practical tools such as the calibration and characterisation database FidRadDB (https://ocdb.eumetsat.int/docs/fidrad-database.html), HyperCP Community Processor (https://github.com/nasa/HyperCP/tree/master), and Ocean Colour In-Situ Database (OCDB, https://ocdb.eumetsat.int) are now available to the community.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall L3)

Presentation: CMIX-II: Cloud Mask Intercomparison eXercise – second edition

Authors: Jan Wevers, Sergii Skakun, Carsten Brockmann, Jean-Claude Roger, Eric Vermote
Affiliations: Brockmann Consult GmbH, Department of Geographical Science, University of Maryland, NASA Goddard Space Flight Center
Cloud cover is a major limiting factor in exploiting time-series data acquired by optical spaceborne remote sensing sensors. Multiple methods have been developed to address the problem of cloud detection in satellite imagery, and a number of cloud masks have been developed for optical sensors, but very few studies have carried out a quantitative intercomparison of state-of-the-art methods in this domain. The first Cloud Masking Intercomparison eXercise (CMIX), conducted until 2019 within the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration & Validation (WGCV), was a common undertaking by ESA and NASA. After the successful finalization of CMIX and the presentation of its results at the 2022 Living Planet Symposium in Bonn, ESA and NASA decided, during a joint lunch at the symposium, to continue the activity in a second edition (CMIX-II), improving datasets and methodologies, addressing the shortcomings of the first edition and allowing more algorithm developers to compare their algorithms. CMIX-II was started in 2022. With nineteen algorithms developed by nineteen teams from twenty different organizations representing universities, research centers and industry, as well as space agencies (ESA, DLR, and NASA), the number of participants, and thus of compared algorithms, has nearly doubled compared to the first edition, making this the biggest intercomparison of cloud masking algorithms so far. In the first CMIX-II workshop the shortcomings of the first exercise were addressed, which led to clear definitions of a participation protocol and of the datasets and their requirements.
The five key requirements for the datasets are: 1) to be unknown to the participants before the exercise, 2) to be specifically designed for the exercise, 3) to define the different cloud classes as physically as possible, 4) to include cloud shadow, and 5) to use multiple dataset approaches (e.g. single pixels, classification of subsets, multi-temporal data). Four datasets that fulfil these requirements were collectively agreed upon: an expert pixel collection, a reference dataset based on sky camera acquisitions, a collaborative dataset of classified image snippets created by the participants, and a multi-temporal dataset to identify systematic and semi-systematic classification errors. Since a physical measure for the cloud classifications of the reference datasets was required, a methodology to provide cloud optical depth (COD) estimates for the expert pixel collection and the sky camera dataset was developed specifically for the exercise. During the first workshop a split of the exercise into two phases was decided. The goal of the first phase was to create small test versions of the four validation datasets, distribute them to the participants for evaluation, make a first test run of all algorithms on the datasets, and then conduct a first test validation and collect feedback from the participants. After successful completion of the first phase, the second phase, the main exercise with the complete datasets, was to be started. The first phase has meanwhile largely been completed, and results have been shared with the participants. The second and main phase will start in 2025 and shall be completed before the Living Planet Symposium, with the goal of giving first insights into this unique comparison of cloud masking algorithms and sharing experiences with the Cal/Val community.
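One building block of such a validation, scoring a binary algorithm mask against expert-labelled reference pixels in terms of overall, user's and producer's accuracy, can be sketched as follows. This is a minimal illustration with invented toy labels, not the exercise's actual validation protocol.

```python
def mask_agreement(predicted, reference):
    """Agreement statistics for a binary cloud mask.

    predicted, reference: sequences of 0 (clear) / 1 (cloud) per pixel.
    Returns overall accuracy plus user's and producer's accuracy for the
    cloud class (the commission/omission view of the confusion matrix).
    """
    tp = sum(1 for p, r in zip(predicted, reference) if p == 1 and r == 1)
    tn = sum(1 for p, r in zip(predicted, reference) if p == 0 and r == 0)
    fp = sum(1 for p, r in zip(predicted, reference) if p == 1 and r == 0)
    fn = sum(1 for p, r in zip(predicted, reference) if p == 0 and r == 1)
    oa = (tp + tn) / len(reference)
    ua = tp / (tp + fp) if tp + fp else float("nan")  # 1 - commission error
    pa = tp / (tp + fn) if tp + fn else float("nan")  # 1 - omission error
    return oa, ua, pa

# toy reference pixels (invented labels)
pred = [1, 1, 0, 0, 1, 0, 1, 0]
ref  = [1, 0, 0, 0, 1, 1, 1, 0]
oa, ua, pa = mask_agreement(pred, ref)  # oa=0.75, ua=0.75, pa=0.75
```

In the real exercise this comparison is run per algorithm over the four reference datasets, with cloud classes additionally stratified by COD.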
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall F2)

Session: A.02.04 Advances in Monitoring and Management of Forest Ecosystems - PART 5

About 4 billion hectares, or 31 percent of the global land surface, is covered by forests according to the latest FAO Forest Resource Assessment (FRA 2020). Forests are a valuable and irreplaceable ecosystem and play an important role from an ecological, social, and economic perspective. Monitoring forests is essential for better understanding, protecting, and sustainably managing this valuable ecosystem.

Information needs vary and include forest area, forest types, forest structure, disturbances, health, and biomass. An unprecedented wealth of observations by optical and microwave sensors, active and passive, from low to high resolution allow new ways to monitor forests. Emphasis will be put on advances in detecting forest disturbances, forest health monitoring, species identification, support to sustainable forest management, estimating forest biomass and forest carbon accounting. This session will showcase some of the more recent key achievements including methods/algorithms, science and applications.

Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall F2)

Presentation: Recent Dynamics of Forest Canopy Cover Loss in Germany

Authors: Dr. Frank Thonfeld, Patrick Kacic, Stefanie Holzwarth, Marco Wegler, Niklas Jaggy, Kjirsten Coleman, Ursula Gessner, Prof. Dr. Claudia Kuenzer
Affiliations: German Aerospace Center (DLR), University of Wuerzburg
Forests cover about one third of Germany’s land area. They are an important economic factor and at the same time fulfill several other ecosystem services. Furthermore, forests play a major role as a carbon sink and are therefore essential for Germany to achieve its national climate goals. According to the latest national forest inventory (NFI), conducted in 2021 and 2022, German forests have turned into a net carbon source due to exceptional losses. These losses are mainly attributed to a severe drought between 2018 and 2020 and beyond. Compound and cascading effects of droughts, heatwaves and insect infestations led to large-scale disturbances, often resulting in salvage and sanitary logging. In essence, the decadal sampling of the NFI in Germany does not keep pace with the dynamics on the ground. In response to this new challenge, several policies such as the national forest strategy 2050 promote the development of digital technologies with the potential to monitor forests and to provide regular updates at higher temporal resolution in order to complement the in-situ measurements of the NFI. One key technology is remote sensing, because of its capability to cover large areas synchronously on a regular basis. While Earth Observation (EO) is well established in the monitoring of forests in the tropics, its potential is far from being fully deployed in operational monitoring in Germany. However, several EO-based forest monitoring tools have been developed over the past years. While there are some similarities, there are also differences between the products. To date, none of these tools is able to address all requirements of the annual crown condition assessments used to assess forest condition in Germany based on field samples.
The information the EO-based tools provide is often complementary to the field-based assessments, as they reveal additional spatio-temporal details and characterize canopy condition from a top-view rather than a ground-view perspective. Recently, we provided a first assessment of forest canopy cover losses (FCCL) in Germany between January 2018 and April 2021. This was the first EO-based national study of this kind. It differs from all later tools in that it quantifies loss areas, i.e. forest areas that were either cleared or covered by larger patches of standing dead trees. Here, we provide an update up to and including September 2024 based on fully reprocessed and extended time series. We revised the approach to include Landsat-9 and used all available Sentinel-2A/B and Landsat-8/9 images with less than 80 % cloud cover. We masked out unusable data due to clouds, cloud shadows, snow and other artefacts and computed the Tasseled Cap-based Disturbance Index (DI), an index that normalizes across all forest pixels and hence reduces seasonality. We filtered the resulting time series with a moving median filter and used the 10th percentile to compute monthly composites. From these we subtracted the median DI of the full year 2017. We applied a fixed threshold and counted how often it was exceeded. After exceeding the threshold six times in a row, an anomaly was considered an FCCL. The FCCL time series covers September 2017 through September 2024 at monthly resolution. Further, we overlaid our FCCL results with a tree species map and computed statistics for the four major tree species in Germany, namely spruce, pine, oak and beech. Our results show that at the beginning of the observation period several larger storm events caused losses distributed over large parts of Germany, with a focus on Northern Germany. As the impacts of the 2018-2020 drought took effect, spruce-dominated areas became the hotspots of canopy cover loss.
Accordingly, three major regions of FCCL emerged. Although spruce forests are particularly affected by recent FCCL, virtually all species show serious signs of stress, often resulting in dieback and associated logging. Hence, we found canopy cover losses across all major tree species. Considering the whole time series, the most severe dynamics were recorded in central Germany, in a region stretching from the far west to the far east of the country. In total, about 850,000 ha were affected by FCCL, corresponding to 8.4 % of the forest area. In some administrative districts of Germany about 45 % of the total forest area was cleared. Only a small share of FCCL can be attributed to land use changes for road construction, settlement expansion, or open-pit mining. The FCCL information is of great importance for forest authorities and forest managers. It can be used to specify the need for seedlings and to plan replanting with species that are better adapted to the future climate. With our contribution we demonstrate how spatially explicit information about forest disturbances can be provided at high temporal resolution to support national forest monitoring.
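The detection rule described in the abstract, i.e. subtract the 2017 baseline DI from the monthly composites, apply a fixed threshold, and confirm a loss after six consecutive exceedances, can be sketched as follows. The threshold value and the toy series are illustrative, not those of the study.

```python
def detect_fccl(di_monthly, di_baseline, threshold=0.5, n_consecutive=6):
    """Return the index of the first month at which a forest canopy
    cover loss (FCCL) is confirmed for one pixel, or None.

    di_monthly:  monthly Disturbance Index composites for the pixel
    di_baseline: median DI of the 2017 baseline year for the pixel
    A loss is confirmed once the anomaly (composite minus baseline)
    exceeds the threshold in n_consecutive months in a row; the month
    at which the run started is reported as the disturbance date.
    """
    run = 0
    for month, di in enumerate(di_monthly):
        if di - di_baseline > threshold:
            run += 1
            if run == n_consecutive:
                return month - n_consecutive + 1  # month the run started
        else:
            run = 0  # the streak is broken, start over
    return None

# toy pixel: the anomaly starts in month 3 and persists
series = [0.0, 0.1, 0.0, 0.9, 1.0, 1.1, 0.9, 1.2, 1.0, 1.1]
print(detect_fccl(series, di_baseline=0.0))  # 3
```

Requiring six consecutive exceedances makes the rule robust against single-month artefacts (e.g. residual cloud or snow contamination) at the cost of a few months of confirmation delay.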
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall F2)

Presentation: Next-Level Forest Disturbance Monitoring Across Europe Based on Deep Learning

Authors: Alba Viana-Soto, Katja Kowalski, Jorunn Mense, Jan Pauls, Fabian Gieseke, Cornelius Senf
Affiliations: Earth Observation for Ecosystem Management, School of Life Sciences, Technical University of Munich, Department of Information Systems, University of Münster
Forest disturbances have increased globally due to climate and land-use changes. Changing forest disturbances have strong impacts on forest dynamics, structure and demography, with profound implications for ecosystem functioning and for services essential to society (e.g. timber production, carbon storage, water purification, recreation value). Of particular concern are Europe’s forests, where changes in disturbance regimes are complex due to interactions between individual disturbance events, such as insect outbreaks and windthrows, recurring fires, and human management. To better understand how increasing disturbances and changing disturbance regimes impact Europe’s forests, consistent and spatially explicit information is needed. We fill this gap by developing a comprehensive and spatially explicit forest disturbance atlas for Europe, striving towards an operational monitoring system of forest dynamics. The current version of the forest disturbance atlas (Viana-Soto & Senf, 2024) is based on a classification approach that detects forest disturbances annually since 1985, thereby accounting for multiple disturbance events per time series. A first version of the maps can be explored online: https://albaviana.users.earthengine.app/view/european-forest-disturbance-map. This approach yielded good performance compared to previous assessments, reducing omission errors to 22.5% while keeping commission errors low (17.3%). Despite the improvements of the atlas over other European and global forest disturbance products, limitations and sources of uncertainty arise when relying on annual classification. For instance, remaining errors in the underlying remote sensing data (e.g. noise, undetected clouds) can introduce commission errors, particularly in regions with lower data availability.
Also, more subtle disturbances that do not produce a clear opening of the canopy are challenging to capture using year-to-year classification, which does not directly account for temporal dependencies. Deep-learning models can help to address these challenges by automatically extracting meaningful representations of the available sequences within the spatio-temporal and spectral domains, and by classifying full pixel sequences instead of year-to-year changes. As such, deep-learning models are a promising tool to advance European disturbance monitoring to the next level. We aim to continue advancing disturbance monitoring in Europe by testing deep-learning models trained on more than 20,000 manually labelled reference pixels across Europe and annual Landsat data covering more than four decades (1984-2024). To do so, we built a consistent data cube of ready-to-use Landsat surface reflectance data, including atmospheric and topographic corrections and cloud and shadow masking, with the full data cube totalling 115,663 images. Annual seamless gap-free composites were derived from the image database using the Best Available Pixel (BAP) algorithm, which selects high-quality pixels closest to a target date, ensuring intra-annual consistency while simultaneously avoiding clouds and other sources of noise. For classifying disturbance sequences, we targeted two deep-learning models: a 1D U-Net, which leverages spectral information across time, and a 3D U-Net, which further incorporates the spatial context of disturbances. Models are optimized by determining the optimal hyperparameters, including learning rate, optimizer, batch size, and loss function. To counter overfitting, typical measures such as dropout, batch normalization, and weight regularization are used. Different lengths of the subsequence windows are further tested to facilitate updating as soon as new data arrives.
Preliminary results from the calibration phase already show accuracies comparable to annual classification, while fewer post-classification data cleaning steps are needed to filter out illogical change sequences. More complex 3D models will likely play a transformative role by capturing disturbance dynamics simultaneously across space and time. By leveraging dependencies over the temporal and spatial dimensions, we anticipate these models can overcome existing challenges of disturbance mapping, reducing the omission of true disturbances while increasing robustness to noise and data gaps. Thus, a better understanding of deep-learning approaches for disturbance detection over long time series and at the continental scale is a crucial step for advancing and operationalizing forest disturbance monitoring.
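The BAP compositing step described in the abstract can be illustrated with a minimal sketch that keeps only the temporal scoring term; the real BAP algorithm combines several quality scores (e.g. distance to clouds, sensor), and the data below are invented.

```python
def bap_select(candidates, target_doy):
    """Pick the best available pixel from one year of observations.

    candidates: list of (day_of_year, cloud_free: bool, value) tuples
    for one pixel.  Cloudy observations are excluded outright; among the
    cloud-free ones, the observation closest to the target day of year
    wins, ensuring intra-annual consistency of the composite.
    """
    clear = [c for c in candidates if c[1]]
    if not clear:
        return None  # gap: no usable observation this year
    return min(clear, key=lambda c: abs(c[0] - target_doy))

# toy observations for one pixel (day-of-year, cloud-free flag, reflectance)
obs = [(150, True, 0.21), (196, False, 0.05), (205, True, 0.24), (260, True, 0.19)]
print(bap_select(obs, target_doy=196))  # (205, True, 0.24)
```

Running this selection per pixel and per year yields the seamless annual composites that feed the 1D and 3D U-Net classifiers.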
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall F2)

Presentation: Forest Height Estimation Based on Sentinel-1 and Meteorological Data

Authors: Marc Herrera-Giménez, Carlos López-Martínez
Affiliations: Universitat Politécnica de Catalunya (UPC), Institut d'Estudis Espacials de Catalunya (IEEC)
Over the last decades, forest ecosystems have undergone significant transformations due to anthropogenic effects. Deforestation of the Amazon, violent and widespread fires due to droughts and increasingly high temperatures, and decreasing biomass in forested areas are some of the critical phenomena that forests are undergoing. To properly understand these changes and how they affect each region and ecosystem, we must quantify in a general and operational way how forests are evolving. Estimating forest biophysical parameters is essential to quantify these climate change effects. The literature contains many studies addressing such use cases through the application of Earth Observation (EO) data, including deforestation extent, forest fire prediction and biomass monitoring [1][2][3]. The challenge with EO is the complexity and volume of the data. Many machine learning (ML) and deep learning (DL) tools have been introduced to extract actionable insights from EO data. This work studies the use of AI4EO techniques to estimate forest tree height (TH). The analysis focuses on two 100 km x 100 km test sites. The first test site is located close to Yellowknife, Canada, in the Northwest Territories. The data used date from December to March for the periods 2018-2019 and 2020-2021. The second test site is located in the central part of Finland, with data from December to March for 2016-2017 and 2017-2018. In total, 9 variables are considered for both test sites: Sentinel-1 VV polarization 6-day coherence (Coherence), a digital elevation model (DEM) from the Copernicus 30m Global DEM, a land cover (LC) map, and the tree height (TH) ground reference data [4]. These variables are provided at a high spatial resolution of 10 meters.
At a lower spatial resolution of 10 km we find the meteorological data obtained from ERA5, including precipitation amount in mm (Rain), snow depth in cm (Snow), wind in both U and V directions (Wind) and temperature at 2 m in °C (Temperature). The study consists of two parts. The first is the analysis and estimation of tree height at the low 10 km resolution, downscaling the high-resolution variables by aggregating the pixels over 10 km x 10 km subareas. This approach assessed the possibility of estimating TH using the native ERA5 meteorological data combined with the Coherence, DEM and LC. The relevance of this analysis lies in the contribution that the Coherence at 10 km may bring to forest tree height estimation when combined with meteorological data, DEM and LC. The second part consisted of a high-resolution study at 10 m, to assess whether the weather conditions add complementary information to the native-resolution data. To reduce computational complexity, this high-resolution analysis is limited to a 10 km x 10 km subset within the larger areas, and only a single 6-day coherence pair is used. The ML model employed was Random Forest (RF), which outperformed other tested models such as Support Vector Machines (SVM) and K-Nearest Neighbors (KNN). The model at 10 km is trained and tested using each individual 6-day coherence acquisition, resulting in around 20 predictions for 20 different dates. The obtained average root-mean-square error (RMSE) for the Canadian site is 1.977, ranging from 1.369 to 3.227. The Finnish test site averages 1.212, with a range from 0.918 to 1.628. The 10 m analysis uses the first 6-day coherence acquisition of the 2018-2019 period for Yellowknife and of 2016-2017 for Finland. In this case, as only one coherence acquisition is used, only one RMSE is obtained: 1.729 for the Canadian site and 3.053 for the Finnish site. The results demonstrate a relatively low RMSE for both analyses.
In the 10 km study, the Finnish test site obtains better accuracy, while in the 10 m analysis, better accuracy is obtained at the Canadian site. These results highlight the possibility of estimating forest tree height by combining multi-source data and ML techniques. [1] C. H. Silva Junior, A. C. Pessôa, N. S. Carvalho, J. B. Reis, L. O. Anderson, L. E. Aragão, “The brazilian amazon deforestation rate in 2020 is the greatest of the decade,” Nature Ecology & Evolution 5 (2021) 144–145. [2] A. Mustofa Irawan, M. Vall-llossera, C. López-Martínez, A. Camps, D. Chaparro, G. Portal, M. Pablos, A. Alonso-González, “Land, jet stream, and other atmospheric effects on burned area estimation during the South Asian heatwave of 2022,” International Journal of Applied Earth Observation and Geoinformation, Volume 128, 2024, 103720, ISSN 1569-8432, https://doi.org/10.1016/j.jag.2024.103720. [3] M. H. Giménez, C. López-Martinez, O. Antropov and J. M. Lopez-Sanchez, "Role of Temporal Decorrelation in C-Band SAR Interferometry over Boreal and Temperate Forests," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 4230-4234, doi: 10.1109/IGARSS53475.2024.10641054. [4] N. Lang, W. Jetz, K. Schindler, and J. D. Wegner, “A high-resolution canopy height model of the Earth,” 2022.
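The downscaling step of the 10 km analysis, aggregating high-resolution pixels into coarse cells matching the ERA5 grid, together with the RMSE evaluation, can be sketched as follows. The block size and toy grid are illustrative; a real 10 km cell spans 1000 x 1000 pixels at 10 m resolution.

```python
import math

def block_mean(grid, block):
    """Aggregate a square 2-D grid of high-resolution pixels into
    block x block averages, mimicking the downscaling of 10 m variables
    onto the coarser meteorological grid."""
    n = len(grid)
    out = []
    for i in range(0, n, block):
        row = []
        for j in range(0, n, block):
            vals = [grid[a][b] for a in range(i, i + block)
                               for b in range(j, j + block)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def rmse(pred, ref):
    """Root-mean-square error between predicted and reference values."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred))

# 4x4 toy tree-height grid (invented values) aggregated to 2x2
th = [[10, 12, 20, 22],
      [14, 12, 18, 20],
      [ 8, 10, 16, 14],
      [10,  8, 12, 14]]
coarse = block_mean(th, 2)  # [[12.0, 20.0], [9.0, 14.0]]
```

The aggregated variables then serve as features for the regression model, whose predictions are scored against the reference heights with the `rmse` function.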
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall F2)

Presentation: Forest Disturbance Monitoring Prototypes for a Future CLMS

Authors: Linda Moser, Anna Grabatin-Homolka, Andreas Langner, Fahad Jahangir, Fabian Berndt, Stephanie Wegscheider, Bruno Menini Matosak, Andre Stumpf, Inés Ruiz, Martin Puhm, Janik Deutscher
Affiliations: GAF AG, Joanneum Research
Forest change detection and monitoring is a key part of the Copernicus Land Monitoring Service (CLMS) (https://land.copernicus.eu/). Under the umbrella of the High Resolution Vegetated Land Cover Characteristics (HRL VLCC), the existing pan-European HRL Tree Cover & Forests products (2012/2015/2018) are currently being updated for the reference years 2018 to 2024 to provide yearly status products at 10m resolution for the Dominant Leaf Type (DLT) and Tree Cover Density (TCD), as well as 3-yearly change products at 20m spatial resolution for the years 2018–2021–2024. The pan-tropical forest component of the Copernicus Global Land Cover and Tropical Forest Mapping and Monitoring Service (LCFM) will provide a TCD product for 2020 and yearly Tree Cover Presence Change (TCPC) from 2021 onwards. Additionally, phenology products exist both at the pan-European and global levels of the CLMS. Various methodologies already implement near real-time (NRT) forest monitoring in tropical regions (e.g. Reiche et al., 2021), with the focus on timely detection of deforestation activities. However, there is not yet an operational pan-European product that tracks forest dynamics at such a temporal frequency and is capable of separating even subtle disturbances of the tree canopy from signal noise. This kind of product is in demand by the user community; hence, a new prototype with an update frequency from monthly to NRT is being tested and implemented within the Horizon Europe project Evolution of the Copernicus Land Service portfolio (EvoLand). This study describes the testing and implementation of a proposed “Continuous Forest Monitoring” prototype and compares the detected forest disturbance locations and dates to the radar-based Tree Cover Disturbance Monitoring (TCDM) product and the 3-yearly VLCC change product. In parallel, the feasibility of another prototype, “Forest Disturbance Mapping”, is tested to classify the driver behind the disturbances.
The resulting information can be utilized to enhance forest management and planning, aid forest-related decision-making or contribute to reporting on forest-related EU policies. These two prototypes are proposed within EvoLand to enhance the CLMS forest portfolio and to meet or go beyond users' requirements and demands. The main goal is to capture natural and human-induced forest disturbances by detecting tree cover vitality loss on a monthly basis, depicting the change dynamics within tree-covered areas. In a second instance, it is tested with what probability the detected disturbances can be assigned to disturbance agents. Two large EvoLand European sites were chosen for a first implementation phase: one in Germany (analysis years 2019-2021) and another in Spain (analysis years 2020-2022); these are being extended to other sites in a second phase. Products are delivered at pixel level (10m spatial resolution), improving on the 20m resolution of the current products. Dense time series from Sentinel-2 serve as the main input for both prototypes, supported by forest masks from the HRL VLCC. Reference data were generated from various sources, including multitemporal optical Very High Resolution (VHR) imagery, used to create samples for the verification of detected forest losses, supported by Sentinel-2 time series data. Ancillary products were also utilized, such as the Remote Sensing-based National Detection System for Forest Damages (FNEWS) in Germany, providing reference data on forest disturbances like bark beetle infestations and windthrow, as well as the European Forest Fire Information System (EFFIS), offering NRT and historical forest fire data. Several methods in the field of unsupervised and continuous forest change detection from time series were compared, and the most promising ones (e.g. BFAST Monitor, EWMA, Kalman filter/CuSum, BEAST) were benchmarked and tested. The Exponentially Weighted Moving Average (EWMA) – proposed by Brooks et al.
(2014) for Landsat time series data and implemented as part of the JRC-NRT tool (https://github.com/ec-jrc/nrt) – yielded the most promising benchmarking results, especially considering the balance between accuracy, NRT capability, and computational effort. The method fits a harmonic regression and evaluates its residuals with statistical quality-control charts. Structural breaks in the time series above a user-defined significance threshold are detected as forest disturbance by the EWMA algorithm. This method enables continuous monitoring, allowing for the generation of change maps for each satellite scene after training on the historical period. The detected disturbance date serves as input to the forest disturbance agent classifier, leading to the “Forest Disturbance Mapping” prototype, considering: (i) windthrow/storm damage, (ii) wildfire, (iii) insect infestations, as well as (iv) human-induced disturbances (e.g., forest clearing, clear-cutting, and thinning activities). Spectral bands and indices from Sentinel-2 are analysed using a Gaussian Mixture Classifier, which models the spectral signature of disturbances as a multivariate Gaussian distribution. Importantly, the EWMA method was designed not only to detect abrupt and significant forest changes, such as clear cuts, but also to identify more subtle forms of gradual forest degradation and thinning in a near real-time manner. In EvoLand this has been tested for detecting the above-mentioned disturbance agents and was rolled out to the prototype sites in Germany and Spain. The algorithm has been adapted for use with Sentinel-2 time series, incorporating the Sen2Cor Sentinel-2 Scene Classification Layer (SCL) cloud masks and monthly median tasselled cap wetness (TCW) features (Cohen et al., 2017; Oeser et al., 2017) with transformation coefficients given in Crist (1985). The EWMA method is designed to process large amounts of data in a highly automated way. 
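The harmonic-regression-plus-control-chart concept behind EWMA monitoring can be sketched for a single pixel. This is a minimal NumPy illustration under assumed parameter values (smoothing weight `lam`, control limit `L`), not the operational JRC-NRT/EvoLand implementation:

```python
import numpy as np

def harmonic_design(t, period=365.25):
    """Design matrix for a first-order harmonic regression (intercept, trend, annual cycle)."""
    w = 2.0 * np.pi * t / period
    return np.column_stack([np.ones_like(t), t, np.sin(w), np.cos(w)])

def ewma_monitor(t_hist, y_hist, t_mon, y_mon, lam=0.3, L=3.0):
    """Fit the stable history, standardize monitoring residuals, and return the
    index of the first monitoring observation whose EWMA leaves the control band."""
    X = harmonic_design(t_hist)
    beta, *_ = np.linalg.lstsq(X, y_hist, rcond=None)
    resid_hist = y_hist - X @ beta
    sigma = resid_hist.std(ddof=X.shape[1])            # residual scale from the training period
    z = (y_mon - harmonic_design(t_mon) @ beta) / sigma
    limit = L * np.sqrt(lam / (2.0 - lam))             # steady-state EWMA control limit
    ewma = 0.0
    for i, zi in enumerate(z):
        ewma = lam * zi + (1.0 - lam) * ewma
        if abs(ewma) > limit:
            return i                                   # first detected structural break
    return None
```

The operational tool additionally handles cloud masking, training-period selection and spatial post-processing, which this single-pixel sketch omits.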
Apart from being an unsupervised, data-driven approach for which no explicit training data for disturbances are required besides the historical time series, the univariate feature input helps to make the method computationally efficient and scalable. The results show a comparison of the outcomes from the EWMA method (monthly resolved, 10m) with the HRL VLCC’s Tree Cover Presence Change (TCPC) (yearly resolved, 20m) and the SAR-based TCDM product (bi-weekly resolved, 10m). The latter employs an iterative, unsupervised change detection method based on Sentinel-1 radar data, with high spatial (10m) and temporal resolution (bi-weekly updates since 2018) (Langner & Carboni, 2020; Langner et al., in preparation). For internal quality control, a total of 177 samples (Germany) and 179 samples (Spain) were interpreted, comparing the results from the three methods. Considering all tree cover disturbances, EWMA performs best at the Central European site, achieving an Overall Accuracy (OA) of 79%, compared to TCDM-radar at 73% and VLCC at nearly 69%; drivers that are difficult to detect lower the OA. All three methods show higher User Accuracies (UA) - 100% for TCDM and VLCC, and 97% for EWMA - than Producer Accuracies (PA), which are 74% for EWMA, 64% for TCDM, and 58% for VLCC. Commission errors were slightly elevated for EWMA at both sites (2% Spain, 3% Germany) compared to TCDM-radar and VLCC. The affected regions showed cloud cover, indicating a high sensitivity of the EWMA method to improper cloud detection; this is being addressed with deep-learning-based cloud masks and outlier-screening approaches in the next development step. Different drivers affect the physical structure of the trees and/or the spectral signal of the canopy, and hence also the suitability of a detection method. Depending on the driver, this is also reflected in the lower PAs compared to the UAs. 
Radar-based methods, like TCDM-radar, detect changes in physical structure, whereas optical-based methods such as EWMA and VLCC react to changes in the spectral signal. Furthermore, it is important to consider that the VLCC product by definition targets only full tree cover losses (i.e. excluding selective logging) at 20m spatial resolution and with a Minimum Mapping Unit of 1ha. The results of the Forest Disturbance Mapping prototype indicated that EWMA, followed by VLCC, is better suited to detect bark beetle infestations (characterised by a gradual loss of chlorophyll content or of leaves/needles while the trees are still standing and might be cleared thereafter). Forest fires were better detected using EWMA. For windthrow events, EWMA, TCDM and VLCC showed similar results. Clear cuts (resulting in complete tree removal and exposure of bare soil) show a strong structural as well as spectral change. Both clear-cut and (selective) logging events were better detected with TCDM-radar, followed by EWMA and VLCC. In conclusion, the “Continuous Forest Monitoring” prototype, built on an open-source, unsupervised, data-driven tool, is capable of detecting subtle disturbances, but there is a trade-off between sensitivity and over-detection, which can lead to commission errors. Nevertheless, these problems can be addressed by incorporating advances in signal pre-processing, such as AI-driven cloud screening. The approach processes large datasets efficiently and remains computationally effective. The “Forest Disturbance Mapping” classification algorithm is being improved by adding spectral, temporal and spatial input features and by improving the regional calibration. Whereas human-induced activities like clear-cuts and logging are expected to be detected more reliably, classifying natural disturbance agents remains challenging due to spectral similarity and a lack of adequate training data.
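The agent classification step models the spectral signature of each disturbance driver as a multivariate Gaussian. The following is a single-component, class-conditional sketch of that idea (the study uses a full Gaussian Mixture Classifier; classes and features here are synthetic placeholders):

```python
import numpy as np

class GaussianAgentClassifier:
    """Toy per-agent Gaussian classifier: each disturbance driver is modelled
    as one multivariate normal over spectral features, and pixels are assigned
    to the driver with the highest log-likelihood."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
            self.params_[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, prec, logdet = self.params_[c]
            d = X - mu
            # log-likelihood up to a constant: -0.5 * (Mahalanobis^2 + log|cov|)
            scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, prec, d) + logdet))
        return self.classes_[np.argmax(scores, axis=0)]
```

A mixture with several components per driver (as named in the abstract) would replace each single Gaussian with a weighted sum, but the decision rule stays the same.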

Friday 27 June 08:30 - 10:00 (Hall F2)

Presentation: Monitoring of temperate forest clear-cuts in mainland France from 2018

Authors: Stéphane Mermoz, Juan Doblas Prieto, Milena Planells, David Morin, Thierry Koleck, Florian Mouret, Alexandre Bouvet, Thuy Le Toan, David Sheeren, Yousra Hamrouni, Thierry Bélouard, Éric Paillassa, Marion Carme, Michel Chartier, Simon Martel, Jean-Baptiste Féret
Affiliations: GlobEO, CESBIO, DYNAFOR, INRAE, INRAE, CNPF, INRAE, CNPF, TETIS
Continuous large-scale monitoring of temperate forests, such as European forests, remains limited compared to that of tropical forest systems. However, temperate forests are critical for biodiversity preservation and carbon storage and are increasingly threatened by climate change, pests, and anthropogenic pressures. In this context, France's forests hold significant importance. As the fourth most forested country in Europe, with 17 million hectares, France saw a 30% increase in harvested forest area during 2016-2018 compared to 2004-2015 (Ceccherini et al., 2020). Meanwhile, approximately 5% of the total forest area is currently in a state of decline in mainland France (IGN, 2024), and this area has increased significantly in recent years. Over the past decade, the French forest's capacity to absorb carbon has halved (ONF, 2024). This study adapted a radar-based method, initially developed for tropical forests, to monitor temperate forest clear-cuts in near real-time across mainland France using Sentinel-1 satellite data, enabling high-resolution and frequent assessments even under cloud cover. Sentinel-1 time series were processed to detect changes in forest canopy using a refined radar change ratio method specifically tailored to temperate forests. The refinements build on the original method developed by Bouvet et al. (2018) for tropical forests, adapting it to the unique characteristics of temperate forest ecosystems. They included the optimization of method parameters while accounting for snow conditions, to tackle false alarms caused by the strong backscatter decrease associated with snow. The system operates with a 10-meter spatial resolution and a 0.1-hectare minimum mapping unit, providing robust detection of clear-cuts. Validation employed 967 manually defined reference plots, achieving precision and recall rates of 99.4% and 80.9%, respectively. 
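The core of a radar change ratio approach can be illustrated in a few lines: compare temporal mean backscatter after and before a candidate date in dB, and flag strong decreases. The -3 dB threshold and the plain temporal means below are illustrative assumptions, not the calibrated parameters of the refined method:

```python
import numpy as np

def radar_change_ratio(stack_before, stack_after, threshold_db=-3.0):
    """Per-pixel change ratio (dB) between the temporal mean backscatter
    (linear power) after and before a candidate change date. Pixels whose
    backscatter drops below the threshold are flagged as potential clear-cuts."""
    mean_before = np.nanmean(stack_before, axis=0)   # temporal mean, pre-change stack
    mean_after = np.nanmean(stack_after, axis=0)     # temporal mean, post-change stack
    ratio_db = 10.0 * np.log10(mean_after / mean_before)
    return ratio_db, ratio_db < threshold_db
```

The operational system adds refinements this sketch omits: snow-condition screening, a 0.1 ha minimum mapping unit, and exploitation of the SAR shadowing effect described by Bouvet et al. (2018).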
The submonthly clear-cut maps produced by this approach are the first to be openly available to everyone (https://zenodo.org/records/13685177). A major finding of this study is the method’s limited confusion between forest dieback and clear-cuts. Only 1.6% of 4,530 dieback reference plots were confused with clear-cuts before the latter occurred. This makes our maps complementary to forest dieback maps based on Sentinel-2 data (for example from Dutrieux et al., 2021), which, by contrast, exhibit significant confusion with clear-cuts (Mouret et al., 2024). Clear-cuts were predominantly observed in private forests, accounting for seven times more activity than public forests, although the surface area of private forests is only three times that of public forests. Conifer species, particularly maritime pine and spruce, dominated clear-cut areas. Spruce clear-cuts were largely driven by bark beetle outbreaks, reflecting ongoing ecological challenges. Clear-cuts in broadleaf forests, including oak, increased from 2018, influenced by market dynamics and tree dieback events. Most clear-cuts were small-scale, with 98.7% under 10 hectares, and the average size decreased yearly. Such patterns suggest a trend toward more sustainable forestry practices, balancing economic objectives with ecological considerations. Although clear-cut surface area showed remarkable stability over time, the corresponding carbon loss, obtained from the carbon maps of Morin et al. (2023), increased markedly: 8.3 Mt in 2020, 13.3 Mt in 2021 and 14.2 Mt in 2022, mostly driven by broadleaf forests. This trend is observed in 9 out of the 13 French regions. Mean aboveground biomass over broadleaf forest loss was 98, 111 and 122 Mg/ha in 2020, 2021 and 2022, suggesting that the forests being cut are increasingly large or old. This study demonstrates the feasibility and value of implementing a radar-based satellite monitoring system for temperate forests. 
The methods and results are scalable across Europe, offering a framework for broader temperate forest monitoring initiatives. The system's adaptability to local forest dynamics ensures relevance for policymakers and forest managers aiming to reconcile economic needs with ecological sustainability. Its high accuracy, real-time capabilities, and adaptability make it a powerful tool for addressing pressing challenges such as illegal logging, biodiversity loss, and climate-induced disturbances. By bridging gaps in current monitoring efforts, this approach contributes to the sustainable management and preservation of temperate forest ecosystems across Europe and beyond. References: Bouvet, A., Mermoz, S., Ballère, M., Koleck, T., & Le Toan, T. (2018). Use of the SAR shadowing effect for deforestation detection with Sentinel-1 time series. Remote Sensing, 10(8), 1250. Ceccherini, G., Duveiller, G., Grassi, G., Lemoine, G., Avitabile, V., Pilli, R., & Cescatti, A. (2020). Abrupt increase in harvested forest area over Europe after 2015. Nature, 583(7814), 72-77. Dutrieux, R., Feret, J. B., Ose, K., & De Boissieu, F. (2021). Package fordead. Recherche Data Gouv, Software. IGN (2024). Inventaire Forestier National, memento 2024. https://www.ign.fr/publications-de-l-ign/institut/kiosque/publications/docs_thematiques/memento-2024.pdf Mouret, F., Morin, D., Martin, H., Planells, M., & Vincent-Barbaroux, C. (2023). Toward an operational monitoring of oak dieback with multispectral satellite time series: a case study in Centre-Val de Loire region of France. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. Morin, D., Planells, M., Mermoz, S., & Mouret, F. (2023). Estimation of forest height and biomass from open-access multi-sensor satellite imagery and GEDI Lidar data: high-resolution maps of metropolitan France. arXiv preprint arXiv:2310.14662. ONF (2024). 
https://www.onf.fr/vivre-la-foret/%2B/1feb%3A%3Adeperissement-des-forets-quel-etat-des-lieux-aujourdhui.html

Friday 27 June 08:30 - 10:00 (Hall F2)

Presentation: Automated Monitoring of Road Development in Tropical Forests

Authors: MSc. Bart Slagter, Kurt Fesenmyer, dr. Matthew Hethcoat, Ethan Belair, Peter Ellis, prof. dr. Fritz Kleinschroth, Dr. Marielos Peña-Claros, Dr. Martin Herold, Dr. Johannes Reiche
Affiliations: Wageningen University, The Nature Conservancy, Canadian Forest Service, Leibniz University Hannover, GeoForschungsZentrum
In recent decades, road development has affected many tropical forests around the world and has accelerated human-induced deforestation, forest degradation and biodiversity loss (Engert et al., 2024; Gaveau et al., 2014; Kleinschroth et al., 2019). Road development is largely driven by the industrial forestry sector, which builds unpaved roads to facilitate selective logging. These logging roads have critical ecological impacts and can expose remote areas to unauthorized human activities, such as agricultural colonization, poaching, mining and artisanal logging (Kleinschroth & Healey, 2017; Laurance et al., 2009). Conversely, selective logging in sustainably managed forests can provide profitable timber harvests for developing nations and sustainable employment for local communities, which together form an attractive alternative to more destructive land uses that require deforestation (Edwards et al., 2014; Gibson et al., 2011; Putz et al., 2012). Mapping and monitoring road development is crucial to understand and mitigate the impacts of selective logging, enforce forest management regulations, and evaluate human influence at large scale. Traditional mapping efforts have been largely based on manual digitization, which is labor-intensive and leads to incomplete and outdated maps (Barber et al., 2014; Engert et al., 2024; Sloan et al., 2018). Advanced remote sensing and deep learning technologies now provide unique opportunities for automated and frequent road mapping across large spatial scales. We present a novel road mapping system currently implemented for the entire Congo Basin forest, a region characterized by widespread logging road development. This mapping system is based on a deep learning model applied to 10 m Sentinel-1 and Sentinel-2 satellite imagery, leveraging the complementary qualities of radar and optical imagery. This approach enables detailed and timely detection of new roads on a monthly basis, demonstrated by a Macro-F1 score of 0.91. 
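The reported Macro-F1 averages per-class F1 scores with equal weight, which prevents the score from being dominated by the overwhelmingly frequent non-road class. A minimal reference implementation of the metric:

```python
import numpy as np

def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: per-class F1 computed one-vs-rest, then averaged
    with equal class weights regardless of class frequency."""
    f1s = []
    for c in labels:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return float(np.mean(f1s))
```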
The road mapping system revealed 46,311 km of road development in the Congo Basin forest from 2019 through 2023. The results confirm the role of logging as the major driver of road expansion in the region, particularly in Cameroon, Equatorial Guinea and Gabon, where logging activities are widespread. Further spatial analysis revealed that 30% of the mapped roads were in fact old logging roads that were reopened for new timber harvest cycles. Notably, nearly a quarter of newly built roads were inside areas previously identified as intact forest landscapes (Potapov et al., 2017). The detailed and consistently updated road maps generated by our system are openly available (https://wur.eu/forest-roads) and can support a variety of applications for forest management and conservation. The road maps can provide a basis for assessing ecological impacts and carbon emissions related to logging, aid in preventing illegal or unsustainable forest activities, and be used for improved understanding and evaluation of human impacts on tropical forests at large scale. Additionally, the road mapping system can complement existing deforestation alert systems (e.g. Hansen et al., 2016; Reiche et al., 2021), which currently do not have sufficient capabilities to detect small-scale changes related to road development. Ongoing work to expand the road mapping system to forests of the Amazon and Southeast Asia will provide a comprehensive basis for understanding forest road development and further advance our understanding of their impacts across the tropics. A detailed view of the Congo Basin road map can be viewed at: https://nrtwur.users.earthengine.app/view/forest-roads References Barber, C. P., Cochrane, M. A., Souza, C. M., & Laurance, W. F. (2014). Roads, deforestation, and the mitigating effect of protected areas in the Amazon. Biological Conservation, 177. https://doi.org/10.1016/j.biocon.2014.07.004 Edwards, D. P., Tobias, J. A., Sheil, D., Meijaard, E., & Laurance, W. F. (2014). 
Maintaining ecosystem function and services in logged tropical forests. In Trends in Ecology and Evolution (Vol. 29, Issue 9). https://doi.org/10.1016/j.tree.2014.07.003 Engert, J. E., Campbell, M. J., Cinner, J. E., Ishida, Y., Sloan, S., Supriatna, J., Alamgir, M., Cislowski, J., & Laurance, W. F. (2024). Ghost roads and the destruction of Asia-Pacific tropical forests. Nature. https://doi.org/10.1038/s41586-024-07303-5 Gaveau, D. L. A., Sloan, S., Molidena, E., Yaen, H., Sheil, D., Abram, N. K., Ancrenaz, M., Nasi, R., Quinones, M., Wielaard, N., & Meijaard, E. (2014). Four decades of forest persistence, clearance and logging on Borneo. PLoS ONE, 9(7). https://doi.org/10.1371/journal.pone.0101654 Gibson, L., Lee, T. M., Koh, L. P., Brook, B. W., Gardner, T. A., Barlow, J., Peres, C. A., Bradshaw, C. J. A., Laurance, W. F., Lovejoy, T. E., & Sodhi, N. S. (2011). Primary forests are irreplaceable for sustaining tropical biodiversity. Nature, 478(7369), 378–381. https://doi.org/10.1038/nature10425 Hansen, M. C., Krylov, A., Tyukavina, A., Potapov, P. V., Turubanova, S., Zutta, B., Ifo, S., Margono, B., Stolle, F., & Moore, R. (2016). Humid tropical forest disturbance alerts using Landsat data. Environmental Research Letters, 11(3). https://doi.org/10.1088/1748-9326/11/3/034008 Kleinschroth, F., & Healey, J. R. (2017). Impacts of logging roads on tropical forests. In Biotropica. https://doi.org/10.1111/btp.12462 Kleinschroth, F., Laporte, N., Laurance, W. F., Goetz, S. J., & Ghazoul, J. (2019). Road expansion and persistence in forests of the Congo Basin. Nature Sustainability, 2, 628–634. https://doi.org/10.1038/s41893-019-0310-6 Laurance, W. F., Goosem, M., & Laurance, S. G. W. (2009). Impacts of roads and linear clearings on tropical forests. In Trends in Ecology and Evolution. https://doi.org/10.1016/j.tree.2009.06.009 Potapov, P., Hansen, M. 
C., Laestadius, L., Turubanova, S., Yaroshenko, A., Thies, C., Smith, W., Zhuravleva, I., Komarova, A., Minnemeyer, S., & Esipova, E. (2017). The last frontiers of wilderness: Tracking loss of intact forest landscapes from 2000 to 2013. Science Advances, 3(1). https://doi.org/10.1126/sciadv.1600821 Putz, F. E., Zuidema, P. A., Synnott, T., Peña-Claros, M., Pinard, M. A., Sheil, D., Vanclay, J. K., Sist, P., Gourlet-Fleury, S., Griscom, B., Palmer, J., & Zagt, R. (2012). Sustaining conservation values in selectively logged tropical forests: The attained and the attainable. Conservation Letters, 5(4). https://doi.org/10.1111/j.1755-263X.2012.00242.x Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N. E., Odongo-Braun, C., Vollrath, A., Weisse, M. J., Stolle, F., Pickens, A., Donchyts, G., Clinton, N., Gorelick, N., & Herold, M. (2021). Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters, 16(2). https://doi.org/10.1088/1748-9326/abd0a8 Sloan, S., Campbell, M. J., Alamgir, M., Collier-Baker, E., Nowak, M. G., Usher, G., & Laurance, W. F. (2018). Infrastructure development and contested forest governance threaten the Leuser Ecosystem, Indonesia. Land Use Policy, 77. https://doi.org/10.1016/j.landusepol.2018.05.043

Friday 27 June 08:30 - 10:00 (Hall N1/N2)

Session: C.06.07 Recent progress on uncertainty analysis for Earth Observation measurements - PART 1

Over the last decade, considerable effort has been devoted to better understanding the uncertainties associated with Earth Observation measurements. Indeed, uncertainty is directly linked to the value and usefulness of a measurement, whether it is used for prediction, risk assessment or decision making.
This multi-disciplinary session will present recent progress on evaluating measurement uncertainties, as well as practical use cases from the whole range of EO techniques.
We welcome contributions covering:
• recent results on uncertainty estimation either from theoretical (uncertainty propagation from measurement principles) or data-driven approaches,
• novel calibration and validation methods providing insights for the characterization or verification of uncertainty estimates,
• studies using Fiducial Reference Measurements, either to characterize their own uncertainties or as a source of truth for EO measurements,
• methods to convey uncertainty estimations to end users.


Friday 27 June 08:30 - 10:00 (Hall N1/N2)

Presentation: Confidently Uncertain: Methods for - and Challenges in - the Validation of Atmospheric ECV Data Uncertainties

Authors: Dr Tijl Verhoelst, Dr Steven Compernolle, Dr Arno Keppens, Dr Daan Hubert, Jean-Christopher Lambert
Affiliations: Royal Belgian Institute for Space Aeronomy (BIRA-IASB)
The importance of reliable uncertainty estimates in satellite-derived indicators for climate trends, air quality and various other aspects of biosphere health is today fully recognized¹. Increasing emphasis is placed on the uncertainty characterization of the satellite data records on which these indicators rely, leading to user-driven (as opposed to engineering-driven) uncertainty and stability criteria. This evolution can be seen for instance in the latest GCOS implementation plan, in ESA’s CCI user requirement assessments, and in recent updates of community-wide maturity assessment frameworks (e.g., CEOS interoperability framework and EDAP product maturity matrix). Uncertainty is formally defined in the Vocabulaire International de Métrologie (VIM) as a “parameter, associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand”. It can be related to random, systematic and/or correlated effects and it can be estimated either from repeat measurements (Type-A evaluation following GUM nomenclature) or in various other ways (Type B). Uncertainty assessments for satellite data records on ozone, aerosols, and their precursors rely both (1) on a full metrological understanding of the measurement, with uncertainty propagation and sensitivity analyses along the data processing chain, resulting in so-called prognostic uncertainty estimates², and (2) on external validation of the data using independent reference measurements, yielding so-called diagnostic uncertainty estimates³. External validation of the prognostic uncertainties is a requirement in several maturity assessment frameworks, but is not yet commonplace. 
As a baseline, uncertainty validation can be based on a qualitative comparison between prognostic and diagnostic uncertainties, but in the present contribution we report on more integrated approaches to directly validate the prognostic uncertainties (both random and systematic) in the comparisons between satellite and independent reference measurements. An essential concept here is that of the uncertainty (or error) budget closure of the data comparison. Indeed, satellite and reference measurements should agree within their reported uncertainties, or combined uncertainty, with the caveat that co-location mismatch, i.e., irreducible errors due to differences in spatio-temporal sampling and smoothing of atmospheric variability, most often adds additional uncertainty components that must be minimized and/or quantified. In this contribution, we present recent advances in uncertainty budget closure, and recipes for co-location mismatch minimization and estimation. We detail these methods through a set of use cases on ozone, aerosols and their precursors, developed and applied in various ESA CCI activities, in past EC research projects (namely, FP7/H2020 QA4ECV, and GAIA-CLIM), and in the ESA/Copernicus Sentinel-5P Mission Performance Centre (MPC). Lastly, a “conditio sine qua non” to carry out any state-of-the-art validation is a detailed and accurate uncertainty characterization of the reference measurements, which is recognized as such in the concept of - and classification criteria for - Fiducial Reference Measurements, as formalized in the CEOS-FRM maturity assessment framework. Like for the satellite data, this uncertainty characterization can be prognostic/theoretical and/or verified through intercomparison and calibration campaigns, and should distinguish between random and correlated/systematic parts. We touch upon advances on this topic for ozone and its precursors and how these fit into the process of validating satellite prognostic uncertainties. 
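The uncertainty budget closure concept can be expressed numerically in a few lines: satellite-minus-reference differences, normalized by the combined uncertainty (including a co-location mismatch term), should be approximately standard normal. A generic sketch of this check, not the authors' actual tooling:

```python
import numpy as np

def closure_check(sat, ref, u_sat, u_ref, u_mismatch=0.0):
    """Normalize satellite-minus-reference differences by the combined
    standard uncertainty. If the uncertainty budget closes, the normalized
    differences should have mean ~0 and standard deviation ~1."""
    u_comb = np.sqrt(u_sat**2 + u_ref**2 + u_mismatch**2)  # combined uncertainty
    z = (sat - ref) / u_comb
    return float(z.mean()), float(z.std(ddof=1))
```

A standard deviation well above 1 indicates underestimated (or missing, e.g. mismatch) uncertainty components; well below 1, overestimated ones.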
------------------------------------------------------ (1) See e.g., the recent cross-domain ISSI workshop on “Remote Sensing In Climatology – ECVs and their Uncertainties” and its dedicated special issue of “Surveys in Geophysics”. (2) See e.g., the ISSI team “Towards Unified Error Reporting (TUNER) and its dedicated special issue of “Atmospheric Measurement Techniques”. (3) Within CCI projects, prognostic and diagnostic uncertainties are reported in respectively End-to-End ECV Uncertainty Budget (E3UB) documents and Product Validation and Intercomparison Reports (PVIR).

Friday 27 June 08:30 - 10:00 (Hall N1/N2)

Presentation: Sentinel-2 MSI level-1 Radiometric Uncertainty Tool, status and application to tandem analysis

Authors: Alexis Deru, Sebastien Clerc, Dr. Javier Gorroño, Bruno La France, Emmanuel Hillairet, Jérôme BRUNIQUEL, Silvia Enache, Valentina Boccia, Dr Véronique Bruniquel
Affiliations: ACRI-ST, Universitat Politècnica de València, CS, ESA/ESRIN
The Sentinel-2 satellites are operated by the European Space Agency (ESA) as part of the European Commission’s (EC) Copernicus programme. They carry the Multi Spectral Instrument (MSI), dedicated to acquiring high spatial resolution images over land in the visible, near-infrared and short-wave infrared parts of the spectrum. Providing uncertainties for each measurement is a major objective of the Copernicus programme. For Sentinel-2, the Radiometric Uncertainty Tool (S2-RUT) was developed as an offline tool to assess the level-1 uncertainty directly from the level-1 product, by first reconstructing the level-0 quantities and then estimating their uncertainty level. The OPT-MPC performed an assessment of the existing software and of the MSI uncertainty budget. Several updates have been conducted to make the L1 uncertainty more convenient to use, in particular the conversion of the existing SNAP module into a simpler Python reader. At the same time, several contributors have been revised to better describe the uncertainties: the straylight characterisation, the residual electronic cross-talk, and the instrument noise model, which has been re-evaluated with different methodologies. The latest Sentinel-2 satellite, S2C, was successfully launched in September 2024 and performed a one-and-a-half-month tandem phase with its predecessor S2A, allowing a radiometric inter-calibration of the two successive missions and ensuring radiometric continuity between the two data collections. A second tandem, between S2A and S2B, is also planned in spring 2025. The S2-RUT will be used during these tandem phases to support the radiometric analysis of the MSI instruments, by producing the level-1 per-pixel uncertainty of both instruments over large datasets from each tandem phase. The statistical inter-comparison of the uncertainties will then be performed by analysing the uncertainty-normalized difference of the two instruments. 
If the variance of the differences is well described by the uncertainties, the resulting distribution should be a standard normal distribution. This paper presents the latest evolutions of the S2 radiometric uncertainty tool and the uncertainty inter-comparison results obtained from the tandem analysis.
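The tandem check described above, i.e. whether the uncertainty-normalized difference of the two instruments follows a standard normal distribution, can be tested empirically via coverage fractions. A sketch assuming uncorrelated per-pixel uncertainties (a simplification; real L1 uncertainties are partly correlated between the two instruments):

```python
import numpy as np

def tandem_normalized_difference(l1_a, l1_c, u_a, u_c):
    """Uncertainty-normalized difference of two co-registered tandem
    acquisitions. If the per-pixel uncertainties capture the true variance,
    ~68.3% / 95.4% of values fall within 1 / 2 standard deviations."""
    z = (l1_a - l1_c) / np.sqrt(u_a**2 + u_c**2)
    coverage_1sigma = float(np.mean(np.abs(z) < 1.0))
    coverage_2sigma = float(np.mean(np.abs(z) < 2.0))
    return z, coverage_1sigma, coverage_2sigma
```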

Friday 27 June 08:30 - 10:00 (Hall N1/N2)

Presentation: The calibration and harmonisation of the IR channels of Landsat-8 TIRS and Sentinel-3 SLSTR for high resolution climate studies

Authors: Jacob Fahy, Jonathan Mittaz, Samuel Hunt
Affiliations: National Physical Laboratory, University of Reading
In recent years, there has been significant progress in the development of high-resolution (< 100 m) thermal infrared satellite missions; for example, the Landsat series of sensors currently in orbit is planned to be joined by the TRISHNA, LSTM and SBG missions in the coming years. In support of the interoperable use of these high-resolution missions with current climate missions, there is a need to underpin the quality of their products, particularly where products from multiple sensors are being used to generate Climate Data Records (CDRs). Radiometric observations from the various sensors being used should be cross-calibrated, or "harmonised", to a common reference together with quantified uncertainties. This provides both consistency between high-resolution and lower-resolution sensors (which is generally not achieved when the sensors' operational calibration is used) and traceable uncertainty information. Subsequent processing will then be able to provide improved climate products together with associated uncertainties. In order to understand how this cross-calibration may improve the quality of climate products, we have undertaken a case study using operational missions. Two of the sensors being used to generate thermal products today are the Thermal Infrared Sensor (TIRS) on board Landsat 8 and the Sea and Land Surface Temperature Radiometer (SLSTR) on board Sentinel-3. Landsat-8 is used for many applications due to its high spatial resolution (100 m for the thermal channels), but the TIRS has experienced significant issues affecting its radiometric performance. Operationally, corrections have been implemented to improve the situation, but there is limited information regarding the accuracy and uncertainty of the data. 
Climate products are typically produced using sensors such as the SLSTR, which has much lower radiometric uncertainty, albeit at a coarser spatial resolution (1 km), and a wider swath than TIRS, making SLSTR data more favourable for use in climate studies. To enable the creation of high-resolution climate products using both SLSTR and TIRS data, cross-calibration of the two sensors has been carried out. First, a reference based on simulation of top-of-atmosphere TIRS data was created, using ESA CCI Sea Surface Temperature (SST) and drifting buoy measurements as input to a radiative transfer model (RTM). Comparison of simulated and observed scenes shows a discrepancy of approximately 0.4 K in Band 10 and 0.1 K in Band 11, and this was used to inform the cross-calibration activity. Cross-calibration of TIRS and SLSTR was then carried out using a newly developed match-up system, which compares near-simultaneous overpasses of the sensors to estimate the bias between matching spectral channels. The results of the comparison were also used to develop estimates of the radiometric uncertainty of the cross-calibrated thermal product. Here we will present the outcomes of the cross-calibration and the radiometric correction to TIRS, together with corresponding uncertainty estimates. Additionally, we will demonstrate the potential improvement in quality for downstream (Level 2+) climate products developed using the cross-calibrated thermal product, with some discussion of future applications using products from upcoming missions.
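The match-up bias estimation can be sketched as follows; the simple time-window filter and standard error of the mean are a simplification of the actual match-up system, which also screens on viewing geometry, cloud and scene homogeneity:

```python
import numpy as np

def matchup_bias(bt_a, bt_b, dt_minutes, max_dt_minutes=30.0):
    """Estimate the inter-sensor brightness-temperature bias from match-ups
    within a temporal co-location window, with a simple standard error of
    the mean as the uncertainty of the estimated bias."""
    keep = np.abs(dt_minutes) <= max_dt_minutes   # near-simultaneous overpasses only
    diff = bt_a[keep] - bt_b[keep]
    bias = float(diff.mean())
    u_bias = float(diff.std(ddof=1) / np.sqrt(diff.size))
    return bias, u_bias
```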
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall N1/N2)

Presentation: Comprehensive pixel-level Level 1 Uncertainty Characterization for SPOT-VGT1, SPOT-VGT2, and PROBA-V following the FIDUCEO guidelines

Authors: Nicolas Misk, Dr Marta Luffarelli, Lucio Franceschini, Yves Govaerts, Sindy Sterckx, Dr Iskander Benhadj, Roberto Biasutti, Dr Fabrizio Niro
Affiliations: Rayference, VITO, ESA-ESRIN
Producing Fundamental Data Records (FDRs) with well-defined uncertainty estimates is crucial for the development of stable and relevant Essential Climate Variables (ECVs), as prescribed by the Quality Assurance framework for Earth Observation (QA4EO) and the FIDUCEO guidelines [RD-1]. An FDR is a record, of sufficient duration for its application, of uncertainty-quantified sensor observations calibrated to physical units and located in time and space, together with all ancillary and lower-level instrument data used to calibrate and locate the observations and to estimate uncertainty. However, satellite operators often do not provide contextual uncertainty products for their missions. This is particularly evident for the PROBA-V, VGT1, and VGT2 sensors, where uncertainty is typically expressed as global upper bounds rather than pixel-specific measures [RD-2] [RD-3]. Furthermore, such uncertainty information, when it exists, is frequently inaccessible to the broader scientific community or inconsistently formatted. This lack of clear and detailed uncertainty data constrains researchers’ ability to propagate uncertainty accurately to higher-level ECV models. The ESA FDR4VGT project led by VITO Remote Sensing addresses this gap by producing pixel-level uncertainty estimates for over two decades of harmonized satellite data. The project employs a robust uncertainty propagation method grounded in state-of-the-art metrology principles. Special care has been given to the preparation of clear, consistent and FIDUCEO-compliant Digital Effect Tables (DETs) for the characterization of digital counts, calibration coefficients and ancillary information. The proposed method expresses the uncertainty estimates as an explicit analytical equation, differentiable and optimized for large-scale computing. This comprehensive approach ensures adherence to FIDUCEO guidelines while balancing computational efficiency and accuracy. 
Reprocessing 20+ years of Level 1A data to propagate uncertainty estimates to Level 2A projected reflectance images poses several technical challenges. Performance constraints must be considered for the propagation method, and Monte-Carlo uncertainty propagation approaches can only be applied to sub-problems limited in time range or scope. An uncertainty quantification using a statistical approach for the less impactful solar geometry uncertainty has been performed on one year (2021) of PROBA-V Level 1 data, capturing the errors originating from the satellite drift. Level 1A uncertainties have been propagated using an analytical expression of the Law of Propagation of Uncertainty (Guide to the Expression of Uncertainty in Measurement) [RD-4]. DETs are prepared to characterise each identified source of uncertainty in the uncertainty diagram. The extensive uncertainty characterisation at Level-1 is expected to improve the retrieval of ECVs from the VGT and PROBA-V archive, as discussed in previous studies such as the ESA SPAR@MEP project, underlining the critical role of improved satellite observation uncertainty characterization in enhancing image inversion performance [RD-5]. The enhancement of retrieval performance will be assessed by applying the CISAR algorithm [RD-6] to a diagnostic dataset, in terms of improved agreement with reference datasets and of retrieval uncertainties evaluated against reference measurements [RD-7] (ground-based observations, atmospheric models, and data from other satellite missions). This study demonstrates the feasibility of delivering high-quality, pixel-level uncertainty maps for Level 1 satellite observations using computationally efficient models. The work highlights the partial or inadequate characterization of several uncertainty contributors, which should be addressed in the preparation of future missions aiming at 1% accuracy. 
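The analytical propagation mentioned above follows the GUM Law of Propagation of Uncertainty, u_y² = cᵀ U c, where c holds the sensitivity coefficients and U the input covariance. A minimal sketch with an invented two-parameter radiance model (not the FDR4VGT measurement function; all numbers are illustrative):

```python
import numpy as np

def propagate_lpu(sensitivities, covariance):
    """GUM Law of Propagation of Uncertainty for a scalar measurand:
    u_y^2 = c^T U c, where c holds the sensitivity coefficients dy/dx_i
    and U is the (possibly correlated) input covariance matrix."""
    c = np.asarray(sensitivities, dtype=float)
    U = np.asarray(covariance, dtype=float)
    return float(np.sqrt(c @ U @ c))

# Illustrative radiance model L = g * (DN - DN_dark), so the sensitivities
# are dL/dg = (DN - DN_dark), dL/dDN = g, dL/dDN_dark = -g.
g, DN, DN_dark = 0.05, 900.0, 100.0
c = [DN - DN_dark, g, -g]
U = np.diag([1e-6, 4.0, 4.0])   # u(g)^2, u(DN)^2, u(DN_dark)^2 (uncorrelated)
u_L = propagate_lpu(c, U)
```

Because the expression is a closed analytical form, it scales to full-archive reprocessing far more cheaply than a Monte-Carlo run; off-diagonal terms of U would capture the correlated effects described in the DETs.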
Additionally, this study sets the groundwork for advancing uncertainty analysis in Level 2 and Level 3 data and fulfilling key prerequisites for the delivery of FRM and higher-level CDR products.

References:
[RD-1] Mittaz, J., Merchant, C. J., and Woolliams, E. R.: Applying Principles of Metrology to Historical Earth Observations from Satellites, Metrologia, 56(3), 032002, 2019, https://doi.org/10.1088/1681-7575/ab1705.
[RD-2] Dierckx, W., Sterckx, S., Benhadj, I., Livens, S., Duhoux, G., Van Achteren, T., Francois, M., Mellab, K., and Saint, G.: PROBA-V mission for global vegetation monitoring: standard products and image quality, International Journal of Remote Sensing, 35(7), 2589–2614, 2014, https://doi.org/10.1080/01431161.2014.883097.
[RD-3] Toté, C., Swinnen, E., Sterckx, S., Clarijs, D., Quang, C., and Maes, R.: Evaluation of the SPOT/VEGETATION Collection 3 reprocessed dataset: Surface reflectances and NDVI, Remote Sensing of Environment, 201, 219–233, 2017, https://doi.org/10.1016/j.rse.2017.09.010.
[RD-4] JCGM 100:2008: Guide to the Expression of Uncertainty in Measurement (GUM), BIPM, Sèvres, Paris (www.bipm.org).
[RD-5] Luffarelli, M., Franceschini, L., Govaerts, Y., Niro, F., and De Grandis, E.: Surface Reflectance and Aerosol Retrieval from SPOT-VGT and PROBA-V in the Mission Exploitation Platform Environment, Remote Sensing, 15(21), 5109, 2023, https://doi.org/10.3390/rs15215109.
[RD-6] Luffarelli, M., Govaerts, Y., and Franceschini, L.: Aerosol Optical Thickness Retrieval in Presence of Cloud: Application to S3A/SLSTR Observations, Atmosphere, 13, 691, 2022, https://doi.org/10.3390/atmos13050691.
[RD-7] Sayer, A. M., Govaerts, Y., Kolmonen, P., Lipponen, A., Luffarelli, M., Mielonen, T., Patadia, F., Popp, T., Povey, A. C., Stebel, K., and Witek, M. L.: A review and framework for the evaluation of pixel-level uncertainty estimates in satellite aerosol remote sensing, Atmos. Meas. Tech., 13, 373–404, 2020, https://doi.org/10.5194/amt-13-373-2020.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall N1/N2)

Presentation: Metrological Analysis of CIMR RADiometry (MACRAD)

Authors: Dave Smith, Paul Green, Pieter de Vis, Thomas Lavergne, Emy Askerlans, Jacob Hoyer, Nicolas
Affiliations: RAL Space
The Copernicus Imaging Microwave Radiometer (CIMR) is a multi-band, wide-swath passive microwave radiometer that will measure sea ice concentration, sea surface salinity, sea surface temperature and other essential parameters; launch is planned for 2029. A key mission requirement is to demonstrate traceability and provide uncertainty estimates for the data products. This is not trivial for CIMR: many input effects in the processing chain contribute to the overall measurement uncertainty, and there are significant correlations between inputs that must be adequately quantified to enable a full understanding of mission performance. Beyond the on-board calibration infrastructure (the calibration loads), there are significant contributions from the feeds, antenna mesh, side lobes and microwave background, as well as from retrieval coefficients. The MACRAD study was initiated to establish a strategy, early in mission development, for building and maintaining the uncertainty budgets for the CIMR Level-1b and Level-2 products following metrological principles, so that they can be tracked through to launch and operations. These methodologies and tools (the NPL CoMet tool) were then used to produce initial estimates of the CIMR L1b brightness temperature uncertainties, to propagate these preliminary estimates through a selection of the CIMR L2 retrieval algorithms to obtain initial estimates of the Level-2 uncertainties, and to define a roadmap for the ongoing development and maintenance of the CIMR uncertainty budgets throughout the mission. In this paper we present the results of the L1 and L2 uncertainty analysis.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall N1/N2)

Presentation: Uncertainty Quantification for Aerosol Retrievals With Invertible Neural Networks

Authors: Paolo Pelucchi, Jorge Vicent Servera, Prof. Philip Stier, Professor Gustau Camps-Valls
Affiliations: Image Processing Laboratory, University of Valencia, Department of Physics, University of Oxford
Satellite remote sensing is the primary source of global aerosol observations, providing essential data for understanding aerosol-climate interactions and constraining global climate models. To solve the inverse problem at the heart of the retrieval process, traditional algorithms must make simplifications and are often unable to quantify the associated uncertainty. The lack of uncertainty quantification hinders the reliability of satellite aerosol observations in downstream applications. In this study, we explore the use of Invertible Neural Networks (INNs) for retrieving aerosol optical depth (AOD) from spectral top-of-atmosphere reflectance, leveraging their ability to handle underdetermined inverse problems and quantify their inherent uncertainty. INNs simultaneously model the forward and inverse processes via a sequence of invertible operations. They learn to encode information from the atmospheric state that is lost in the observations into additional random latent variables. At retrieval time, these are sampled to efficiently recover full non-parametric posterior distributions for the inverse predictions. We develop location-specific INNs for MODIS sensor data. Training is performed using synthetic datasets generated by combining the radiative-transfer-model-generated atmospheric reflectance look-up tables from the MODIS Dark Target (DT) algorithm and surface reflectance from the MCD43GF MODIS product. The INNs successfully emulate the forward problem, and inversion results achieve accurate predictions on synthetic test sets, with an AOD retrieval error in line with DT's expected accuracy (RMSE ≈ 0.05). The posterior distributions obtained are well-calibrated (mean absolute calibration error ≈ 2.5%), providing predictive uncertainty estimates that are reliable and informative. Additionally, we find that the INNs' invertible architecture promotes physically consistent predictions and uncertainties. 
The INN retrievals are further validated in a real-world context by applying them to MODIS L1B reflectance observations, producing full-resolution (nominal 500 m) AOD estimates and pixel-level uncertainties. We compare them to the corresponding operational MODIS DT retrievals, with ground-based AOD measurements from AERONET stations as ground truth. The INN results show slightly better performance than DT at most locations, with overall RMSE ≈ 0.1 and 74% of retrievals within the DT expected error envelope. The INNs are also able to produce reasonable results over bright surfaces where DT cannot be applied. Despite some limitations in out-of-distribution contexts uncovered by our analyses, the INNs show consistent skill under diverse land surface conditions. The INNs' unique modelling and uncertainty quantification features have the potential to enhance aerosol and climate studies in various real-world contexts.
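The invertibility that makes this posterior sampling cheap can be illustrated with the standard building block of INNs, an affine coupling layer: half of the vector passes through unchanged and parameterises a scale/shift of the other half, so the inverse is exact. This is a toy sketch only: the fixed random linear maps below stand in for the learned subnetworks of a trained INN.

```python
import numpy as np

class AffineCoupling:
    """Minimal affine coupling block (RealNVP-style). The first half of the
    input passes through unchanged and drives a scale/shift of the second
    half, so the transform is invertible in closed form. The 'networks'
    here are fixed random linear maps, for illustration only."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        h = dim // 2
        self.Ws = 0.1 * rng.normal(size=(h, dim - h))  # scale map
        self.Wt = 0.1 * rng.normal(size=(h, dim - h))  # shift map

    def forward(self, x):
        h = self.Ws.shape[0]
        x1, x2 = x[:h], x[h:]
        s, t = self.Ws.T @ x1, self.Wt.T @ x1
        return np.concatenate([x1, x2 * np.exp(s) + t])

    def inverse(self, y):
        h = self.Ws.shape[0]
        y1, y2 = y[:h], y[h:]
        s, t = self.Ws.T @ y1, self.Wt.T @ y1
        return np.concatenate([y1, (y2 - t) * np.exp(-s)])

layer = AffineCoupling(dim=4)
x = np.array([0.3, -1.2, 0.7, 2.0])
x_rec = layer.inverse(layer.forward(x))   # exact round trip
```

In the retrieval setting, stacks of such blocks map (state) ↔ (observation, latent); repeatedly sampling the latent variables and applying the inverse yields the non-parametric posterior over AOD described above.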
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.49/0.50)

Session: B.03.05 Heritage at Risk: Innovative Tools for Assessing and Mitigating Climate Change and Natural Hazards

In a landscape where climate change is accelerating and the frequency of natural disasters is increasing, the preservation of cultural heritage has become a critical challenge. This session aims to address these pressing issues by showcasing the latest tools and technologies designed to protect cultural assets and mitigate the stresses placed on them by climate change and its side effects.
The session will explore a series of novel tools and methodologies designed to assess and mitigate the risks that climate change poses to cultural heritage. It will highlight advancements in technology, including satellite remote sensing methods for the identification, monitoring and impact assessment of different types of threat, as well as innovative sensors such as flash LiDAR, modelling techniques, novel coatings and data analytics that are being used to safeguard invaluable cultural sites. Participants will gain insights into the practical application of these tools in real-world scenarios through successful case studies, and will explore collaborative strategies to enhance resilience through a series of expert presentations and interactive discussions.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Danger Classification for Strong-Rain and Flood-Risks of Cultural Heritage Sites

Authors: Thomas Krauß
Affiliations: DLR - Deutsches Zentrum für Luft- und Raumfahrt
Due to climate change, the risk of heavy rainfall and flooding events is increasing dramatically, now and in the coming years. Since cultural heritage is the soul of a country and its people, protecting it is a significant task towards a resilient society. In the work presented here, developed in the framework of the TRIQUETRA project (Toolbox for assessing and mitigating Climate Change risks and natural hazards threatening cultural heritage), we constructed a framework for estimating the danger posed to cultural heritage sites by strong-rain events and flash floods. The strong-rain risk assessment was developed using a large database from an insurance company in northern Germany, linking the probability and severity of observed and reported rain damage to known geo-coordinates. Based on this knowledge, a method using the Topographic Position Index (TPI) was developed and calibrated on the data, requiring only a digital elevation model (DEM) for danger estimation. In the context of this study we also found that the risk depends only on the topography, while the occurrence of strong-rain events is evenly distributed. Finally, six classes were derived, from "ridge", with the lowest probability and severity of damage, over high, mid, flat and low slopes, to "sink", with the highest danger. However, the Ahr flood in Germany in the summer of 2021 showed the necessity of extending the danger classification to flash floods in the case of torrents, steep valleys and strong rain in upstream areas. Deeper investigation revealed that sewage systems become completely surcharged and blocked by debris such as branches and leaves in such strong-rain scenarios. We therefore developed a worst-case estimation of water runoff from upstream areas and a new danger classification scheme to give a more realistic assessment of exposure to strong rain. Finally, the method was adapted to cultural heritage sites, tested, and applied to eight pilot sites. 
We describe the developed methods and present and discuss the resulting danger maps for the test sites to which the method was applied.
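The core of the TPI approach can be sketched in a few lines: TPI is the elevation minus the mean elevation of a neighbourhood, and a set of thresholds maps it to the six danger classes. The thresholds and neighbourhood size below are illustrative placeholders, not the values calibrated on the insurance-damage database.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tpi(dem, radius_px):
    """Topographic Position Index: elevation minus the mean elevation in a
    square neighbourhood. Positive = ridge-like, negative = sink-like."""
    size = 2 * radius_px + 1
    return dem - uniform_filter(dem.astype(float), size=size)

def danger_class(tpi_values, s=1.0):
    """Map TPI to the six classes described in the text, from 'ridge'
    (lowest strong-rain damage probability) to 'sink' (highest danger).
    Thresholds (scaled by s) are illustrative, not the calibrated ones."""
    bins = [-1.0 * s, -0.2 * s, 0.0, 0.2 * s, 1.0 * s]
    labels = ["sink", "low slope", "flat", "mid slope", "high slope", "ridge"]
    idx = np.digitize(tpi_values, bins)
    return [labels[i] for i in np.atleast_1d(idx)]

dem = np.add.outer(np.linspace(0, 50, 64), np.linspace(0, 50, 64))  # toy ramp
t = tpi(dem, radius_px=3)   # a uniform slope has TPI ~ 0 in the interior
```

On a real DEM the neighbourhood radius controls the scale of landforms the index responds to, which is part of what the calibration against reported damage fixes.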
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: New coupled quantitative scenario-based approaches in geohazard mitigation measures: The Case Study of the Village of San Vito Romano (RM)

Authors: Simona Marano, Yawar Hussain, Guglielmo Grechi, Stefano Rivellino, Francesca Bozzano, Salvatore Martino
Affiliations: Department of Earth Sciences of Sapienza University and CERI Research Centre, Rome, Italy
The development of tools for quantitative scenarios of ground instability effects is a robust methodology for risk mitigation, especially in the landslide risk framework. In the case study considered here, this approach is crucial for urban planning and for the protection of historical heritage. Its application requires a thorough analysis of preparatory and triggering factors. It is also an effective strategy for detecting and monitoring seasonal ground instability effects, while providing a useful instrument for calibrating numerical models aimed at predicting multi-hazard scenarios. This study is being performed in the framework of the RETURN Extended Partnership and received funding from the European Union Next-GenerationEU (National Recovery and Resilience Plan – NRRP, Mission 4, Component 2, Investment 1.3 – D.D. 1243 2/8/2022, PE0000005). The case study concerns the village of San Vito Romano (RM), a historical place representative of the countryside contexts near Rome in the Latium region (Italy). The village hosts a historical centre with a convent and a historical city hall building. The village is largely built on an active landslide of almost 1 km² with a roto-translational mechanism and slow kinematics. The landslide involves siliciclastic deposits of the Frosinone Formation (Upper Tortonian), characterised by alternating layers of clayey and arenaceous marls. This geological setting represents a predisposing factor for slope instability; preparatory factors include cumulative precipitation, which affects soil moisture, while triggering factors include seismic events and intense rainfall. Landslide activity is documented by the presence of fractures in buildings that have been declared unusable. This phenomenon is causing not only a progressive demographic decline in the town, but also a decrease in tourism, which is one of the fundamental pillars of the local economy. 
To obtain a detailed geomorphological layout, a LiDAR flight will be conducted, allowing the creation of a Digital Terrain Model (DTM). In addition, regarding satellite techniques, the DInSAR approach was applied, specifically the SBAS (Small Baseline Subset) methodology, an interferometric technique that generates maps of average deformation velocity and provides time series of the displacements of individual points within the image. The processed data, covering the period from 2022 to the present, show an increase in displacement velocity, particularly in some areas of the landslide. For the analysis of preparatory and triggering factors, the present research proposes the integration of passive seismic geophysical techniques and satellite interferometric methods. For the seismic technique, four three-component velocimeters with a sampling frequency of 250 Hz were installed, two positioned inside the landslide and two outside. Continuous monitoring of ambient seismic noise allows the calculation of landslide mobility indices (LSMI), such as the natural period variation (dT/T), the peak polarization variation (dP/P) and the variation in Rayleigh-wave velocity (dV/V) over time. The measurement of surface-wave velocity is obtained through the innovative technique of ambient-noise seismic interferometry, which, through cross-correlation of the recorded signals, makes it possible to determine the variation of surface-wave velocity between different monitoring stations. From the variation of these parameters over time, we can continuously monitor changes in rigidity, amplification, and non-linear elastic properties of the soil to understand how the slope is being prepared for possible instability. 
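A dV/V estimate of the kind described above is commonly obtained with the stretching technique: a homogeneous velocity change dilates the cross-correlation waveform in time, so a grid search for the dilation that best realigns the current trace with a reference recovers dt/t (and dV/V = -dt/t). The sketch below uses a synthetic decaying "coda", not the San Vito Romano recordings; the signal shape and grid are illustrative assumptions.

```python
import numpy as np

def stretch_factor(reference, current, t, eps_grid):
    """Stretching technique for ambient-noise interferometry: find the
    dilation eps that best aligns the current cross-correlation with the
    reference. Under a homogeneous velocity change, eps = dt/t, and the
    relative velocity change follows from dV/V = -dt/t."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        # stretched(tau) = current(tau / (1 + eps))
        stretched = np.interp(t, t * (1.0 + eps), current)
        cc = np.corrcoef(reference, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps, best_cc

# Synthetic decaying coda and a copy dilated by 1% (illustrative only).
t = np.linspace(0.0, 10.0, 2001)
ref = np.sin(2.0 * np.pi * t) * np.exp(-0.2 * t)
cur = np.interp(t * 1.01, t, ref)               # current(u) = ref(1.01 * u)
eps_hat, cc = stretch_factor(ref, cur, t, np.linspace(-0.02, 0.02, 81))
```

In continuous monitoring, eps is tracked day by day between station pairs, and its drift is one of the LSMI proxies correlated with the interferometric displacements.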
The project will lead to the creation of a quantitative tool capable of correlating interferometric displacements of the landslide with proxies of displacement deduced from passive seismic techniques (LSMI), in order to point out the role of the preparatory and triggering factors in the landslide activity. The influence of seismic events on landslide movement will also be analyzed in relation to magnitude and distance, and the amount of precipitation that shows the most significant correlation with displacements will be identified. Once this information has been assimilated, it will be possible to run the model in a forward mode to depict future scenarios from a multi-hazard perspective.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Analysis of Earthquake-Induced Landslide Scenarios under Changing Environmental Conditions: A Case Study of the Choirokoitia Cultural Heritage Site, Cyprus

Authors: Ramaz Koberidze, Carlo Esposito, Matteo Fiorucci, Gian Marco Marmoni, Salvatore Martino, Stefano Rivellino, Kyriacos Themistocleous, Guglielmo Grechi
Affiliations: Sapienza University of Rome, Department of Earth Sciences; University of Cassino and Southern Lazio; Cyprus University of Technology
This research, framed in the TRIQUETRA EU Project (https://triquetra-project.eu/), focuses on assessing and mitigating the risk of earthquake-induced landslides at the Choirokoitia Archaeological Site in Cyprus, a UNESCO World Heritage site. The site faces significant vulnerability due to seismic activity, compounded by climate change factors such as increased rainfall and temperature fluctuations. Using the PARSIFAL (Probabilistic Approach to Provide Scenarios of Earthquake-Induced Slope Failures) methodology, this study integrates seismic hazard data with slope stability models to predict landslide occurrences, including rockfalls and rockslides, and to assess their potential impact on the site. By employing specific quantitative analysis tools, we aim to reconstruct potential landslide scenarios in these regions and improve the accuracy of our assessments. The reconstruction of landslide scenarios is crucial for managing risk assessment. Scenario maps provide spatial distributions of potential landslide impacts under various environmental conditions, helping to inform urban planning and emergency response strategies. The focus of this research is on creating scenario maps rather than hazard maps for landslides: we assume a specific trigger (such as an earthquake) and model the resulting distribution of landslide effects (the "induced scenario"), rather than mapping the hazard of the trigger itself. The research combines cutting-edge tools such as satellite remote sensing, LiDAR, and geospatial modeling to analyze environmental factors influencing slope stability. By developing probabilistic risk scenarios, the study aims to inform targeted mitigation strategies, including slope stabilization, real-time monitoring systems, and protective coatings, to enhance the site’s resilience to both immediate and long-term hazards. Additionally, the research highlights the vulnerability of the touristic path, which lies within a rockfall danger zone, posing a direct threat to visitor safety. 
By integrating innovative technologies and methodologies, the study aims to improve the accuracy of hazard assessments, contributing to global conservation strategies for protecting cultural heritage sites. Ultimately, this research seeks to develop comprehensive risk assessment frameworks that ensure the preservation of heritage sites in the face of increasing natural hazards driven by climate change.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Application of DInSAR Technology for Monitoring Slope Instability in Cultural Heritage Sites: A Case Study of Machu Picchu and Cusco

Authors: Federica Ferrigno, Dr Gabriele Leoni, Dr Francesco Menniti, Dr Daniele Spizzichino
Affiliations: ISPRA
The Machu Picchu Archaeological Site in Peru, a UNESCO World Heritage Site, is exposed to significant geological risks due to recurrent landslides that threaten its structural integrity and accessibility. The Historic Sanctuary predominantly consists of Upper Permian-Lower Triassic (250–300 Ma) igneous rocks, primarily plutonic, which form the Vilcabamba Cordillera's backbone. These intrusive formations, oriented WNW–ESE, constitute the elevated regions of the Eastern Cordillera. The area is dominated by a batholith composed mainly of granite and granodiorite, with medium-textured basic granite prominently outcropping within the citadel. The site and its surroundings are characterized by instability phenomena driven by complex geomorphological and structural/tectonic conditions. These are exacerbated by the interplay of primary discontinuity families, resulting in recurring processes such as planar slides, rockfalls, topples, debris slides, debris flows, and avalanches. This study presents the application of Differential Interferometric Synthetic Aperture Radar (DInSAR) technology to measure slow, non-catastrophic morphological changes with millimeter-scale precision. The analysis captures both long-term and seasonal processes triggered by diverse causative factors, enabling informed planning of mitigation strategies. Specifically, DInSAR data processing was conducted for the Machu Picchu archaeological area, complemented by direct field surveys to validate the results (November 2024). Additional investigations are planned for the wider Cusco region. Multi-temporal SAR images from the Sentinel-1 constellation (C-band radar) were processed using advanced DInSAR techniques to generate ground displacement measurement points. The spatial distribution and correlation of these measurements with slope instability and structural damage were analyzed, revealing ground deformation trends from January 2020 to August 2024. 
Preliminary results indicate that the citadel exhibits average ground and structural displacement of less than 1 mm/year. However, localized analyses highlight distinct patterns of small-scale displacement:

  • Grupo de las Tres Puertas: this area exhibits slight brick detachment, necessitating monitoring via traditional in-situ instruments.
  • Upper Plaza and Eastern Citadel: these zones show minor relative subsidence compared to adjacent areas, suggesting potential movement on the eastern flank. Enhanced monitoring systems, including crack meters, inclinometers, GNSS, and corner reflectors, are recommended for these sectors.

In summary, the Sentinel-1 DInSAR data provided critical insights into the interaction between ground displacement and archaeological structures. It facilitated the identification of potentially unstable areas, detected anomalies, and traced ground displacement accelerations over time. These findings underscore the utility of DInSAR as a powerful tool for preserving cultural heritage threatened by slope instability, offering data-driven approaches for damage prevention and site management.
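Average velocities such as the "less than 1 mm/year" figure are typically obtained by fitting a linear trend to each measurement point's displacement time series. A sketch with synthetic numbers (the dates mimic the Jan 2020 – Aug 2024 window; the series is invented, not Machu Picchu data):

```python
import numpy as np

def mean_velocity_mm_per_year(t_years, disp_mm):
    """Least-squares linear velocity (mm/yr) of a displacement time
    series, the quantity behind average ground-motion statements."""
    slope, _intercept = np.polyfit(t_years, disp_mm, deg=1)
    return slope

# Hypothetical point: slow subsidence of -0.8 mm/yr plus measurement
# noise, sampled roughly monthly over ~4.5 years.
t = np.linspace(0.0, 4.5, 56)
disp = -0.8 * t + np.random.default_rng(1).normal(0.0, 0.5, t.size)
v = mean_velocity_mm_per_year(t, disp)
```

Points whose fitted velocity (or its recent change) stands out against neighbours are the candidates flagged for the enhanced in-situ monitoring recommended above.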
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: The INACO Project: strengthening the resilience of heritage sites in river basin districts against extreme climate events

Authors: Alessandro Sardella, Ramiro Marco Figuera, Stefano Natali, Riccardo Cacciotti, Miloš Drdácký, Sante Laviola, Elsa Cattani, Alessandra Bonazza
Affiliations: National Research Council of Italy, Institute of Atmospheric Sciences and Climate, SISTEMA GmbH, Institute of Theoretical and Applied Mechanics, Czech Academy of Sciences
In recent years, managers and curators of cultural and natural heritage (CNH) sites have increasingly faced new challenges in the protection and conservation of archaeological sites, historic parks and gardens, monumental complexes, historic buildings and related collections, because of the negative impact of slow-onset and extreme climate events and of human activities. This is also reflected in the safety of their users, who are as exposed to such risks as the assets themselves. Therefore, today more than in the past, there is a need to consolidate studies and methods aimed at supporting decision makers and public authorities in preparing plans to manage and mitigate the correlated risks. In this context, the identification of hazard-prone areas at regional scale, together with vulnerability assessment at local scale, is of paramount importance for the risk assessment of CNH assets particularly exposed to climate extremes. The present contribution presents the Risk Mapping Tool for Cultural Heritage Protection (WGT), a WebGIS-based solution implemented within the Interreg CE projects ProteCHt2save and STRENCH and under further enhancement in the ongoing Interreg CE project INACO. WGT is specifically addressed to the safeguarding of CNH exposed to extreme weather events, strongly based on a user-driven approach and on multidisciplinary collaboration among the scientific community, public authorities and the private sector. WGT provides hazard maps at European scale showing where CNH is exposed to heavy rain, flooding, extreme heat, and prolonged drought. Hazard-prone areas are assessed by computing specific indices related to temperature and precipitation extremes, and by integrating Earth Observation (EO) based products (European and non-European missions), reanalysis products (Copernicus climate), observational datasets (E-OBS), and climate projections. 
Outputs from Global/Regional Climate Models of the Euro-CORDEX experiment in the near (2021-2050) and far (2071-2100) future, under two different IPCC scenarios (the stabilizing RCP4.5 and the pessimistic RCP8.5), are used for the projection of hazards. In addition, the application of the methodology for the vulnerability assessment of cultural and natural heritage set up in the framework of the Interreg Central EU Project STRENCH will also be presented. These tools will be tested on case studies representative of different CNH categories in three environmental contexts linked to European river basin districts: sea/river shore, lake shore, and inland river shore. With this contribution we intend to showcase how the application of EO-based products and their integration with climate projections and vulnerability assessment can support policy makers, decision makers and stakeholders in charge of disaster mitigation and the safeguarding of CNH, with high potential to be scaled to other sectors under threat from climate change.
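As an example of the kind of temperature/precipitation extremes index computed for the hazard maps, here is a sketch of one widely used ETCCDI-style drought index, Consecutive Dry Days, assuming the standard 1 mm wet-day threshold. The daily series is invented, and the WGT's own index set may differ.

```python
import numpy as np

def consecutive_dry_days(precip_mm, threshold=1.0):
    """Consecutive Dry Days (CDD), an ETCCDI-style index used to map
    drought-prone areas: the longest run of days with daily precipitation
    below `threshold` mm."""
    dry = np.asarray(precip_mm) < threshold
    longest = run = 0
    for is_dry in dry:
        run = run + 1 if is_dry else 0
        longest = max(longest, run)
    return longest

daily = [0.0, 0.2, 5.0, 0.0, 0.0, 0.9, 0.0, 12.0, 0.0]  # mm/day, invented
cdd = consecutive_dry_days(daily)   # longest dry spell spans days 4-7
```

Applied per grid cell to an E-OBS or reanalysis field (and to each climate projection run), the index yields exactly the kind of hazard-prone-area map the tool overlays on CNH locations.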
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Introducing HeritageWatch.AI: A Global Initiative for Heritage Preservation in the Face of Climate Change

Authors: Titien Bartette, Jonathan Chemla, Fafet Charlotte, Bachaar Tarabay, Yves Ubelmann
Affiliations: Iconem
Cultural heritage, both tangible (monuments, archaeological sites) and intangible (traditions, languages, craftsmanship), is crucial to humanity's shared history. It contributes to community identity and fosters a sense of belonging. Preserving this heritage is crucial for safeguarding cultural diversity and passing knowledge to future generations. However, cultural heritage faces growing threats from human activities such as conflict and urbanisation, as well as natural disasters like floods, storms, fires, and earthquakes. These risks are further exacerbated by climate change, which increases the frequency and intensity of extreme weather events. Despite existing mechanisms to monitor and respond to climate events, few are tailored to cultural heritage, which is often overlooked during crises. This results in insufficient alerts, limited knowledge, and slow decision-making, putting global heritage at risk. Technological advances present new opportunities for safeguarding cultural heritage. In particular, the use of satellite data and drones enables detailed data collection and near-real-time monitoring of some extreme phenomena. The adoption of open standards further enhances collaboration and coordination among stakeholders, improving the effectiveness of disaster responses. In this context, a global initiative for heritage preservation was created, called HORAM. It positions itself as a new international player, pioneering real-time monitoring and decision-support systems to safeguard heritage sites worldwide. Combining advanced Earth Observation (EO) technologies with artificial intelligence (AI) and OSINT data, HORAM provides dynamic tracking of disasters and their impact on cultural and natural heritage. By leveraging existing real-time data streams and analytics, and adapting them to specific cultural heritage needs, the initiative aims to deliver tailored insights to guide decision-makers in protecting sites while adapting to evolving climate conditions. 
Specifically, the functionalities of HORAM’s innovative platform will include:
• Real-time Monitoring: The platform provides near-real-time detection of cultural heritage sites at risk from extreme weather events, such as floods or wildfires, leveraging in particular satellite-derived information. Future updates will expand its scope to include geological and human-induced risks, offering a comprehensive view of threats to heritage sites.
• Alert System: A semi-automated alert system enables timely tracking of ongoing events and their impacts on cultural heritage. It facilitates communication with local and international authorities to support preventive and protective measures.
• Decentralized Database: The platform integrates a database of cultural heritage sites, aggregating verified data such as location, site typology, conservation status, threats, disaster history, and analytical reports. Over time, the system will evolve into a centralised, validated, and version-controlled repository to better represent under-documented heritage sites.
• User-Friendly Interface: Featuring an intuitive, interactive map, users can easily identify sites potentially impacted by climatic threats. The tool is designed for accessibility and ease of use by all stakeholders.
Central to HORAM’s success is its robust network of partners, including Microsoft AI for Good, ICOMOS, the World Monuments Fund, and UNESCO. These partnerships combine expertise in cutting-edge technology, cultural heritage, and global outreach, ensuring that the tools and methods developed by HORAM are both innovative and practical. This approach allows HORAM to create solutions that are effective, inclusive, and widely applicable. By integrating cultural heritage preservation into climate resilience efforts, HORAM offers a new way to protect our shared heritage.
Its work aligns with key global frameworks, such as the Paris Agreement and the Sendai Framework, emphasising the role of heritage in building sustainable and resilient communities worldwide.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.15/1.16)

Session: C.02.03 SMOS – 15 years in space

SMOS pioneered L-band radiometry from space. Launched on 2 November 2009, it provides the benchmark dataset for current and future L-band radiometer missions. Initially foreseen to provide information on ocean salinity and terrestrial soil moisture, today the mission contributes to various Earth System domains. Beyond soil moisture and sea surface salinity, SMOS data are used to estimate sea ice thickness, terrestrial freeze/thaw cycles, extreme ocean winds, biomass, ice sheet temperature, solar flux and more. Many of these applications have evolved into operational services in recent years and are used by operational service providers with a direct societal benefit. This session will take stock of the achievements and lessons learned from SMOS and look ahead to its future.
We encourage submissions related to the status of the SMOS mission, cal/val, product status and evolution, the application of SMOS data in the various application domains, novel exploitation ideas, and future L-band concepts building on the SMOS legacy.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: SMOS after 15 years in orbit: results and future plans

Authors: Yann Kerr
Affiliations: CNES/CESBIO
The ESA (European Space Agency) led SMOS (Soil Moisture and Ocean Salinity) mission, operating since November 2009, is the first satellite dedicated to measuring surface soil moisture and ocean salinity. It has now been in continuous operation for more than 15 years, delivering a wealth of new measurements, including the first-ever global, frequent, quantitative and absolute measurements of soil moisture and ocean salinity. From these measurements a large number of science and applied products have emerged, ranging from strong winds or thin sea ice thickness to root zone soil moisture or biomass, but also fire or flood risk prediction, snow density or freeze/thaw, to name but a few. Operational users (such as numerical weather prediction) have also emerged. To obtain such results several challenges had to be addressed and overcome, but the results show the uniqueness of L-band radiometry for some crucial water cycle measurements. Recent advances include root zone soil moisture fields and drought indices, opening the way to a first assessment of irrigation; biomass estimates and related carbon storage estimates; freeze/thaw and related greenhouse gas emissions; snow properties and permafrost indicators; and even rainfall estimates. In parallel, high resolution fields are being produced and used operationally. Currently, using the long-term data set and developing approaches to extend it in time, climate trends can start to be considered, as well as teleconnections, and SMOS is contributing to a large number of ECVs (Essential Climate Variables) / CCI (Climate Change Initiative). Moreover, synergies with other missions, either existing (SMAP, AMSR, S3 for instance) or planned (Biomass, FLEX, LSTM/Trishna), are being studied or put into use, paving the way to even more outstanding science results. At the time of writing, SMOS is in very good condition and can last for a few more years, extending the length of the data sets, but it will not last forever.
Consequently, the team is investing time both in new or improved science and application products and in a potential follow-on mission, which would be very similar to SMOS (or SMAP) but with a significantly improved spatial resolution. The presentation will give an overview of the most striking new results as well as future plans.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: The SMOS Mission: an Unexpected Sun Sentinel

Authors: Raffaele Crapolicchio, Federica Guarnaccia, Roberta Forte
Affiliations: Serco for ESA ESRIN, Serco
Launched in 2009 as part of the ESA Earth Explorer missions, over the years SMOS (Soil Moisture and Ocean Salinity) has provided prime and multi-faceted contributions to Earth observation. To ensure good data quality, images need to be processed to remove a spurious signal from the Sun. This information is stored in the L1B data and has been used to generate new products that go beyond the terrestrial domain of SMOS. The retrieved Sun Brightness Temperature can be converted into a solar flux measurement, which can in turn be used to monitor solar activity and look for solar radio bursts at a key technological frequency. The SMOS solar flux time series describes the solar cycle progression in the L-band since 2010, thus covering about two solar cycles. The paper presents the results of the validation of the SMOS solar flux prototype data derived from the operational L1B v724 dataset, and a preliminary validation of the dataset based on the upcoming 4th mission reprocessing with the updated algorithm baseline L1B v781. SMOS solar flux data are in high agreement with radio telescope ground references. When compared to the state-of-the-art L-band ground reference, SMOS data reveal a more stable behaviour over time and a higher correlation with the main stations monitoring lower (Nobeyama observatory) and higher (Penticton observatory) frequency channels. The SMOS solar flux prototype product benefits from continuous daily monitoring and requires no atmospheric correction. Good agreement with the ground references is achieved for signal acquisitions from the antenna front lobe and back lobe alike; the data have also been correlated with common solar activity indicators, including the sunspot number, the MgII index, and the Ap index, further validating the quality of the SMOS solar flux product.
However, the signal-to-noise ratio in the back lobe increases rapidly with the Sun elevation angle, requiring an upper elevation threshold of 0.2 radians. Making use of the calibrated, high-time-resolution data of MIRAS (Microwave Imaging Radiometer with Aperture Synthesis), the SMOS solar flux can complement the F10.7 index and be used to monitor the impact of Solar Radio Bursts (SRBs) on Global Navigation Satellite System (GNSS) signals, thanks to its unique ability to estimate the Degree of Circular Polarization (DoCP) of solar radio burst events.
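As a simple illustration of the DoCP metric mentioned above: the degree of circular polarization follows directly from the Stokes parameters of the received signal. The sketch below is illustrative only; the function names and the burst-flag threshold are hypothetical and not taken from the SMOS processor.

```python
def degree_of_circular_polarization(stokes_i, stokes_v):
    """DoCP = |V| / I, where I is the total intensity and V the
    circularly polarized Stokes component of the received signal.
    Returns a value in [0, 1] for a physical signal."""
    if stokes_i <= 0:
        raise ValueError("total intensity must be positive")
    return abs(stokes_v) / stokes_i


def looks_like_radio_burst(flux, quiet_background, threshold_factor=2.0):
    """Crude burst flag: the measured solar flux exceeds the quiet-Sun
    background by a (hypothetical) threshold factor."""
    return flux > threshold_factor * quiet_background
```

A strongly circularly polarized, high-flux event would score near 1 on the first function and trigger the second, which is the qualitative signature of the SRB events discussed above.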
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: SMOS L-band data for Numerical Earth-system Weather prediction at ECMWF

Authors: Patricia de Rosnay, Joaquín Muñoz-Sabater, Kirsti Salonen, Peter Weston
Affiliations: ECMWF, ECMWF
This presentation gives an overview of 15 years of activities conducted at the European Centre for Medium-Range Weather Forecasts (ECMWF) to analyse soil moisture for Numerical Weather Prediction (NWP) and Earth-system prediction applications. SMOS brightness temperature observations have been operationally monitored in the ECMWF system since 2010: the data are not active in the analysis, but the model counterparts for the observations are calculated, and various statistics are produced to assess the data quality and characteristics. In this presentation, long-term monitoring statistics are shown along with the latest findings from the monitoring covering the most recent years. Research developments that led to operational SMOS soil moisture data assimilation for NWP are presented. These include the Community Microwave Emission Modelling platform (CMEM) observation operator and SMOS integration in the ECMWF Integrated Forecasting System (IFS) suite definition. The IFS simplified Extended Kalman Filter (SEKF) soil moisture data assimilation system was updated to assimilate SMOS brightness temperatures, and data assimilation experiments were conducted covering different periods of the year to assess the impact of SMOS brightness temperature assimilation on NWP. They showed a positive impact on soil moisture states, but the atmospheric impact was very limited. In parallel, a neural network soil moisture product was developed for data assimilation purposes, and the impact of its assimilation on NWP was also evaluated. The neural network soil moisture data assimilation showed slight benefits for atmospheric forecasts while being computationally efficient. It was implemented for operational NWP in 2019 in IFS cycle 46r1. In 2024, a new gradient-boosted decision tree (XGBoost) machine learning approach was developed to retrieve SMOS soil moisture from near-real-time brightness temperatures. It is presented in this paper.
The impact of SMOS XGBoost soil moisture data assimilation on NWP is shown, and plans for the SMOS mission extension and future L-band missions such as CIMR are discussed.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Sea Ice Thickness Products from SMOS and CryoSat2 in the Arctic and Antarctic 2010-2024

Authors: Xiangshan Tian-kunze, Dr. Lars Kaleschke, Dr. Stefan Hendricks, Antonio de la Fluente, Raffaele Crapolicchio
Affiliations: Alfred-Wegener-Institute, ESA/ESRIN, Serco Italia SpA - for European Space Agency (ESA)
Sea ice thickness is an essential climate variable that contributes critically to energy exchange and mass balance in the polar regions. Passive and active methods to observe sea ice thickness from space have been practised for decades. Since 2010, thin sea ice thickness up to 1 m has been estimated from the brightness temperatures measured by the L-band radiometer aboard ESA’s Soil Moisture and Ocean Salinity mission (SMOS). Due to its broad swath width, SMOS has almost daily coverage in the polar regions. In parallel, the Synthetic Aperture Interferometric Radar Altimeter (SIRAL) on board CryoSat-2 has measured sea ice freeboard with a horizontal resolution of 300 m. Sea ice thickness derived from SMOS shows relatively low uncertainty for thin ice but loses sensitivity for ice thicker than 1 m. In contrast, the altimeter-based ice thickness observations from CryoSat-2 have higher uncertainty for thin ice but are more reliable for thicker ice. An optimal interpolation scheme has been established at the Alfred Wegener Institute to merge the data from the two satellites, achieving lower relative uncertainty. Within the framework of the ESA projects “SMOS & CryoSat-2 Sea Ice Data Product Processing and Dissemination Service” and “SMOS Expert Support Laboratory”, a daily L3 SMOS thin ice thickness product and a weekly L4 combined product from SMOS and CryoSat-2 for the polar regions have been produced and disseminated. Recently, altimeter data from Sentinel-3A/B have been integrated into the L4 merging process to improve spatial resolution and data density. These data sets are limited to the cold seasons and are not applicable during late spring and summer. Both L3 and L4 products have undergone numerous validations and inter-comparisons with in-situ measurements, remote sensing data, and sea ice assimilation systems, with very plausible results.
The extraordinarily low Antarctic sea ice extent and thickness in 2023 and 2024 is well captured by SMOS data, evidence that SMOS makes an important contribution to climate change monitoring in the polar regions.
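The complementary error characteristics described above are what make the merge work: where SMOS is confident (thin ice) it dominates, and where CryoSat-2 is confident (thick ice) it dominates. Below is a minimal sketch of such an uncertainty-weighted combination, using plain inverse-variance weighting rather than the full AWI optimal interpolation scheme; the function name and the numbers in the example are illustrative.

```python
def merge_thickness(t_smos, u_smos, t_cs2, u_cs2):
    """Inverse-variance weighted merge of two sea ice thickness
    estimates (metres) with 1-sigma uncertainties u_smos, u_cs2.
    Returns (merged thickness, merged 1-sigma uncertainty); the
    merged uncertainty is always below the smaller input one."""
    w_smos = 1.0 / u_smos**2
    w_cs2 = 1.0 / u_cs2**2
    merged = (w_smos * t_smos + w_cs2 * t_cs2) / (w_smos + w_cs2)
    sigma = (w_smos + w_cs2) ** -0.5
    return merged, sigma
```

For a thin-ice case where SMOS reports 0.5 m ± 0.1 m and CryoSat-2 reports 0.8 m ± 0.4 m, the merged estimate stays close to the SMOS value (about 0.52 m) with an uncertainty smaller than either input.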
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Bayesian Time Series Approach to Retrieve Ground and Vegetation Variables From SMOS

Authors: Manu Holmberg, Juha Lemmetyinen, Dr. Philippe Richaume, Andreas Colliander, Anna Kontu, Johanna Tamminen
Affiliations: Finnish Meteorological Institute, Centre d'Etudes Spatiales de la Biosphère, Jet Propulsion Laboratory
Geophysical variables are retrieved from satellite observations through the inversion of empirical, semi-empirical or physics-based forward models [1]. However, in most cases the associated inverse problem is ill-posed, because the satellite observation data carry too little information relative to the highly complex scene being observed. This lack of information is compensated for by introducing prior information about the retrieved geophysical variables through a Bayesian framework [2]. Typically, the geophysical variables are retrieved one by one, from each satellite overpass separately. However, this approach makes it difficult to incorporate prior information about the temporal characteristics of the geophysical variables into the retrieval problem. To incorporate such prior information, we present a formulation of the retrieval problem in the time series domain, and its application to SMOS data to retrieve four geophysical variables. Time series of ground permittivity and vegetation optical depth are retrieved, along with two time-invariant variables, ground surface roughness and vegetation single-scattering albedo. The retrieved ground permittivity and vegetation optical depth are compared with in-situ measurements from the Finnish Meteorological Institute’s Sodankylä research site in Northern Finland [3]. While ground permittivity and vegetation optical depth are typical variables retrieved from SMOS observations, ground surface roughness and vegetation single-scattering albedo are usually not retrieved, due to the lack of information in data from a single SMOS overpass [4]. We show that by retrieving the geophysical variables simultaneously from a time series of satellite observations, we are able to extract more information than by considering each satellite overpass separately.
Furthermore, although in-situ measurements are lacking to validate the retrieved ground surface roughness and vegetation single-scattering albedo, these variables affect the retrieved ground permittivity and vegetation optical depth; the validity of the latter two therefore indicates the validity of the former two.
[1] J. Pulliainen et al., Retrieval of regional snow water equivalent from space-borne passive microwave observations, Remote Sensing of Environment, 2001.
[2] A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM, 2005.
[3] J. Ikonen et al., The Sodankylä in situ soil moisture observation network: an example application of ESA CCI soil moisture product evaluation, Geoscientific Instrumentation, Methods and Data Systems, 2016.
[4] Y. Kerr et al., The SMOS soil moisture retrieval algorithm, IEEE Transactions on Geoscience and Remote Sensing, 2012.
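To see why a time series constrains time-invariant parameters better than a single overpass, consider a toy linear version of the problem: observations y_t = x_t + b, with independent Gaussian priors on the time-varying states x_t and the time-invariant offset b. The MAP estimate of b then has a closed form whose precision grows with the number of overpasses T. This is only a sketch of the general idea, with hypothetical names, not the paper's L-band forward model.

```python
def retrieve_time_invariant(obs, state_prior_mean, state_prior_sigma,
                            param_prior_mean, param_prior_sigma):
    """MAP estimate of a time-invariant offset b in the toy model
    y_t = x_t + b, with priors x_t ~ N(state_prior_mean, state_prior_sigma^2)
    and b ~ N(param_prior_mean, param_prior_sigma^2).
    Minimising the quadratic cost
      sum_t (y_t - b - m_x)^2 / s_x^2 + (b - m_b)^2 / s_b^2
    over b gives the closed form below; the posterior precision of b
    is w_x * T + w_b, which grows with the number of overpasses T."""
    w_x = 1.0 / state_prior_sigma**2
    w_b = 1.0 / param_prior_sigma**2
    num = w_x * sum(y - state_prior_mean for y in obs) + w_b * param_prior_mean
    den = w_x * len(obs) + w_b
    b_map = num / den
    states = [y - b_map for y in obs]  # per-overpass states given b
    return b_map, states
```

With obs = [1.5, 2.5, 2.0] and unit-variance priors centred at 1.0 (states) and 0.0 (offset), the MAP offset is 0.75; adding more overpasses tightens this estimate further, which is the mechanism that makes roughness and albedo retrievable from a time series.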
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: SMOS 4th Mission Reprocessing

Authors: Roger Oliva, Dr. Manuel Martin-Neira, Raffaele Crapolicchio, Dr. Ignasi Corbella, Raul Diez-Garcia, Dr. Raul Onrubia, Josep Closa, Daniel Toledano, Ali Khazaal, Dr. Jose Barbosa, Goncalo Lopes, Daniel Barros, Joao Cruz, Dr. Veronica Gonzalez, Juha Kainulainen, Dr. Klaus Scipal
Affiliations: Zenithal Blue Technologies, European Space Agency, ESTEC, European Space Agency, ESRIN, Universitat Politecnica de Catalunya, Telespazio UK Ltd, Airbus defense and Space, RDIS, RDA, Deimos Engenheria, SMOS Barcelona Expert Center, Harp Technologies
ESA’s Soil Moisture and Ocean Salinity (SMOS) mission, with its two-dimensional Microwave Imaging Radiometer with Aperture Synthesis (MIRAS), continues to provide fully polarimetric L-band brightness temperature measurements from which a number of applications are derived, most notably soil moisture, sea surface salinity, sea ice thickness, high sea surface wind speeds, and the freeze/thaw soil state. Fifteen years after launch, SMOS is still working very well, and the SMOS calibration team continues to monitor the instrument performance and to improve the quality of the data. In fact, the performance metrics developed by the team indicate that the instrument is extremely stable in the long term, with an observed drift below 0.015 Kelvin per year, reflecting the excellent health of the instrument, as well as very good seasonal and inter-orbital stability, particularly in the Alias-Free Field of View. On the image reconstruction side, the team has recently implemented a few changes to further improve the data quality, and as a consequence we are in the process of a new reprocessing campaign that will include all these improvements: the 4th full SMOS Mission Reprocessing. Some of the most important evolutions the team has implemented at level 1 are the introduction of a Spatial Bias Correction, which notably reduces the orbital stripes observed in the current data products, and improvements in the generation of the artificial scenes, which reduce the error associated with the image reconstruction process and with land-sea contamination. Another aspect that has been improved is the estimation of the Sun brightness temperature observed in SMOS images through the front and back antenna patterns of the MIRAS instrument. This change not only improves the correction of the Sun's contribution in the final images, but also enhances the computation of SMOS-derived prototype products such as the L-band solar flux and the Solar Radio Burst bulletin.
These products will be provided in near-real time and will contribute significantly to space weather applications. A summary of all these changes, improvements and metrics will be presented during the conference.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E2)

Session: C.02.01 Aeolus Mission: 5 years of advancing atmospheric understanding through spaceborne lidar technology

The Aeolus mission, launched by the European Space Agency (ESA) in August 2018, represented a pioneering effort in advancing our understanding of Earth's atmosphere through the innovative utilization of spaceborne lidar technology. The mission aimed to provide unprecedented insights into global wind patterns, atmospheric dynamics, and aerosol distribution, thereby enhancing our ability to forecast weather, monitor climate change, and improve air quality.
At the heart of the Aeolus mission lay the revolutionary Atmospheric Laser Doppler Instrument (ALADIN). ALADIN employed the principle of Doppler wind lidar, utilizing pulses of laser light to measure the Doppler shift of backscattered signals from atmospheric molecules and particles. This technique allowed for the precise measurement of wind speed and direction throughout the depth of the atmosphere, from the Earth's surface up to the stratosphere.
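The Doppler relation behind this technique is simple: for a round trip the frequency shift is Δf = 2·v_los/λ, so the line-of-sight wind follows directly from the measured shift, and assuming purely horizontal flow it can be projected onto the horizontal line of sight using the off-nadir viewing angle. In this sketch the ~355 nm wavelength and ~35° off-nadir angle are approximate published values for ALADIN/Aeolus, and the function names are illustrative.

```python
import math

ALADIN_WAVELENGTH_M = 355e-9  # ALADIN UV wavelength, approximately 355 nm

def los_wind_from_doppler(delta_f_hz, wavelength_m=ALADIN_WAVELENGTH_M):
    """Line-of-sight wind speed (m/s) from the measured Doppler shift
    of the backscattered signal. The factor 1/2 accounts for the
    round trip: delta_f = 2 * v_los / wavelength."""
    return 0.5 * wavelength_m * delta_f_hz

def hlos_wind(v_los, off_nadir_deg=35.0):
    """Horizontal line-of-sight (HLOS) wind assuming purely horizontal
    flow; Aeolus viewed roughly 35 degrees off nadir."""
    return v_los / math.sin(math.radians(off_nadir_deg))
```

A 10 m/s line-of-sight wind corresponds to a Doppler shift of roughly 56 MHz at 355 nm, which illustrates how demanding the frequency measurement is relative to the ~845 THz carrier.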
One of the primary objectives of the Aeolus mission was to fill the critical gap in our observational capabilities regarding global wind profiles. Aeolus, operating from a sun-synchronous orbit at an altitude of approximately 320 kilometers, offered a unique vantage point for comprehensive and continuous global wind observations, covering the data-sparse tropics and polar regions.
By accurately mapping atmospheric wind fields, Aeolus contributed to improving weather forecasting models and enhanced the monitoring and understanding of atmospheric circulation patterns, including the dynamics of jet streams, tropical cyclones, and the interplay between atmospheric and oceanic circulation systems, preparing the ground for the Aeolus-2 meteorological system.
In addition to wind profiling, Aeolus data contributed to characterizing the distribution and properties of atmospheric aerosols, including pollutants, dust particles, and volcanic ash, providing valuable insights into aerosol transport and aiding in the refinement of climate models and air quality forecasts.
The mission ended nominal operations on 30 April 2023, followed by an end-of-life phase in which scientific and technological experiments were carried out before the satellite reentered the atmosphere on 28 July 2023 through an innovative and pioneering reentry approach.
The scope of this session is to review and discuss the main scientific achievements of the Aeolus mission, including the results from the international Cal/Val campaigns and the outcome of the end-of-life phase.


Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E2)

Presentation: Assimilation impact of Aeolus on the representation of extratropical atmosphere dynamics in NWP models

Authors: Robin Pilch Kedzierski, Prof. Martin Weissmann, Michael Rennie
Affiliations: University of Vienna, European Centre for Medium-Range Weather Forecasts
The first spaceborne Doppler Wind Lidar (DWL) observations of wind profiles were made by the Aeolus satellite mission between late 2018 and early 2023. The assimilation of Aeolus's globally distributed horizontal line-of-sight (HLOS) wind profiles within numerical weather prediction (NWP) systems has been shown to improve forecasts, especially in the upper troposphere and lower stratosphere (UTLS) in the tropics, as well as at higher latitudes in the extratropics. The atmospheric dynamics behind the forecast improvements from assimilating Aeolus winds have been studied in the tropics, where the Aeolus HLOS wind component is almost equal to the zonal wind. The quasi-biennial oscillation (QBO) consists of alternating westerly and easterly zonal-mean wind phases in the tropical stratosphere, on a timescale of around two years. A better-represented QBO in the lower stratosphere, as well as the 2019/20 QBO disruption, were captured by Aeolus measurements. Forecast improvements are larger during phase changes of the QBO. Equatorial waves (variability modes trapped at the Equator, with timescales of days to a couple of weeks) were shown to have larger activity in the upper troposphere when Aeolus measurements were compared to reanalyses. Additionally, forecast error reductions from assimilating Aeolus were reported after the onset of a La Niña event in the Pacific Ocean. On the other hand, there is relatively little research focusing on the effects of Aeolus on extratropical atmospheric dynamics. Forecast improvements from Aeolus were linked to Rossby wave packet (RWP) development, i.e. the evolution of organized sets of troughs and ridges in the midlatitudes. There is an indication that the studied RWP cases were impacted by a significant component of unbalanced flow from tropical cyclones (TCs) undergoing extratropical transition.
Our study aims to expand these findings and bridge the gap in understanding the dynamical aspects of Aeolus forecast improvements in the extratropics, with a systematic approach and a more general look into sources of unbalanced flow, including warm conveyor belts (WCBs) and mesoscale convective systems (MCSs). To analyze the effect of Aeolus assimilation, observing system experiments (OSEs) were performed at the European Centre for Medium-Range Weather Forecasts (ECMWF). A control experiment without Aeolus includes all other observations operationally assimilated in the ECMWF system. This is contrasted with another experiment that assimilates Aeolus winds on top of all other observations. These experiments cover the lifetime of Aeolus's FM-B laser: the OSEs span more than three years, and the Aeolus data are from the higher-quality 4th reprocessing. Our analysis places special focus on the jet stream and its undulations, the region of the strongest winds as well as their vertical/meridional gradients at synoptic scales in the extratropical UTLS. These are followed with a feature-based approach in both the vertical and meridional directions. In doing so, we are able to show the different roles of RWPs and planetary waves in the extratropical forecast improvements from Aeolus. Forecast scores for extratropical unbalanced flow are computed separately from the geostrophic circulation. Unbalanced circulation cannot be inferred indirectly from mass observation sources (radiances); we therefore expect a greater impact of Aeolus on the unbalanced (divergent) component of the winds associated with TCs, WCBs and MCSs.
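The basic impact measure used in such observing system experiments can be sketched very simply: the root-mean-square difference between the two analyses (with and without Aeolus) over matching grid points. The helper below is an illustrative stand-in operating on flat sequences of wind values, not ECMWF's verification suite.

```python
def rmsd(field_a, field_b):
    """Root-mean-square difference between two analysis fields given
    as equal-length flat sequences of values (e.g. winds in m/s).
    Larger RMSD means the assimilated observations moved the
    analysis more at those points."""
    if len(field_a) != len(field_b):
        raise ValueError("fields must have the same number of points")
    n = len(field_a)
    return (sum((a - b) ** 2 for a, b in zip(field_a, field_b)) / n) ** 0.5
```

Computed per wave type or per shear regime, such RMSD maps are what allow statements like "greater impact in easterlies than in westerlies" in the abstract above.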
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E2)

Presentation: Aeolus Mission: Operational Achievements, Controlled Reentry, and Prospects in Phase F

Authors: Tommaso Parrinello, Dr Denny Wernham, Dr Thorsten Fehr, Timon Hummel, Viet Duc Tran, Oliver Reitebuch, Luca Girardo, Trisomono Candra Krisna, Irene Cerro Herrero, Dr Peter Bickerton, Michael
Affiliations: ESA
The Aeolus mission, launched in 2018, marked a breakthrough in atmospheric science as the first satellite equipped with a Doppler wind lidar, providing unprecedented global wind profile observations. During its operational phase, Aeolus significantly advanced numerical weather prediction models and contributed valuable data for climate studies, demonstrating its exceptional scientific and operational relevance. After almost five years in orbit, and despite some performance issues related to its instrument ALADIN, Aeolus achieved all its scientific objectives and exceeded its originally designed lifetime in space. A positive impact on weather forecasts has been demonstrated by multiple NWP centres worldwide, with four European meteorological centres now assimilating Aeolus winds operationally, paving the way for its successor: EPS-Aeolus, or Aeolus-2. In July 2023, Aeolus successfully executed a pioneering controlled reentry, setting a benchmark for safe satellite end-of-life operations. This process provided critical insights for sustainable space operations and risk mitigation strategies for future missions. As the mission has transitioned into Phase F, we outline the continued exploitation of Aeolus data over the next three years, including reanalysis activities, enhanced data accessibility, and collaboration opportunities. This phase will ensure Aeolus's enduring impact on atmospheric research, satellite mission design, and Earth observation programmes. Aeolus remains a testament to ESA's commitment to innovation and sustainability, leaving a legacy that will inspire future Earth observation missions.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E2)

Presentation: Tracking the aerosol plume from the Hunga Tonga eruption over months using Aeolus

Authors: Dimitri Trapon, Holger Baars
Affiliations: Leibniz Institute for Tropospheric Research (TROPOS)
The Aeolus satellite was operated from 2018 to 2023 as part of the ESA Earth Explorer missions, carrying the first ultraviolet (UV) lidar in space: the Atmospheric Laser Doppler Instrument (ALADIN). Despite being initially designed to measure Earth's winds, the high spectral resolution (HSR) capability of ALADIN allowed profiles of the optical properties of atmospheric particles and molecules to be retrieved from accumulated lidar signals separated into two detection channels: Rayleigh and Mie. These optical profiles are delivered as the Aeolus level-2A (L2A) product. Currently it includes independent retrievals of particle backscatter and extinction from the historical L2A Standard Correct Algorithm (SCA) [1], which corresponds to a direct inversion of the lidar equations without a-priori assumptions. A new algorithm based on physically constrained optimal estimation, referred to as Maximum Likelihood Estimation (MLE) [2], was implemented later in the L2A product. On 15 January 2022 the submarine Hunga Tonga-Hunga Ha'apai volcano erupted in the southern Pacific Ocean. The high-magnitude volcanic explosion resulted in an unprecedented injection of particles into the atmosphere, up to the mesosphere. Thanks to the flexibility of Aeolus's settings, the profiling range was quickly adapted so that the top measurement bin was set to 30 km altitude to capture one of the early volcanic plumes, mostly composed of submicron-sized sulfate particles [3]. The talk will give a concrete illustration of Aeolus's aerosol observing capabilities, showing how the vertical structures and optical properties of the stratospheric particles can be characterized with the L2A product. This allows the evolution of the volcanic aerosol plume to be monitored for months after the eruption. The dataset used for the study was publicly disseminated in May 2024 as part of the Aeolus DISC (Data, Innovation, and Science Cluster) reprocessing activities in Phase F, and is labelled baseline 16.
Quality flagged and cloud screened tracers such as scattering ratio and backscatter coefficient for particles are used to analyze the volcanic plume early phase dispersion and descent in synergy with independent lidar measurement. The fresh aerosols are clearly visible with SCA parallel (i.e. co-polarized signal) attenuated backscatter for particles. The cross analysis with other space-borne observations such as CALIOP/CALIPSO, TROPOMI/S5P or IMS/IASI confirmed an elongated filament stabilizing around 25 km altitudes. The optical properties measured for the early plume agree with values reported from ground-based observations made in southern hemisphere [4]. As a novel application of Aeolus L2A data, the evolution of the aerosol plume until summer 2022 after the eruption will be shown. For this purpose, global maps of maximum backscatter intensity in the stratosphere are utilized to characterize the vertical motion of the elongated plume during the equatorward transport and the Earth circumnavigation. A two branches separation can be identified using Aeolus L2A products and agree with other studies. [1] Flament, T., Trapon, D., Lacour, A., Dabas, A., Ehlers, F., and Huber, D.: Aeolus L2A Aerosol Optical Properties Product: Standard Correct Algorithm and Mie Correct Algorithm, Atmos. Meas. Tech., 14, 7851–7871, https://doi.org/10.5194/amt-14-7851-2021, 2021. [2] Ehlers, F., Flament, T., Dabas, A., Trapon, D., Lacour, A., Baars, H., and Straume Lindner, A. G.: Optimization of Aeolus Optical Properties Products by Maximum Likelihood Estimation, Atmos. Meas. Tech., 15, 185–203, https://doi.org/10.5194/amt-15-185-2022, 2022. [3] Legras, B., Duchamp, C., Sellitto, P., Podglajen, A., Carboni, E., Siddans, R., Grooß, J.-U., Khaykin, S., and Ploeger, F.: The evolution and dynamics of the Hunga Tonga–Hunga Ha'apai sulfate aerosol plume in the stratosphere, Atmos. Chem. Phys., 22, 14957–14970, https://doi.org/10.5194/acp-22-14957-2022, 2022. 
[4] Baron, A., Chazette, P., Khaykin, S., Payen, G., Marquestaut, N., Bègue, N., and Duflot, V. (2023). Early evolution of the stratospheric aerosol plume following the 2022 Hunga Tonga-Hunga Ha'apai eruption: Lidar observations from Reunion (21°S, 55°E). Geophysical Research Letters, 50, e2022GL101751. https://doi.org/10.1029/2022GL101751
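As an illustration of the tracers mentioned above (a minimal sketch with made-up numbers, not the operational L2A processing): the scattering ratio SR relates molecular and particle backscatter as SR = (β_mol + β_part) / β_mol, so a particle backscatter profile can be recovered from an SR profile as β_part = β_mol (SR − 1).

```python
import numpy as np

def particle_backscatter(scattering_ratio, beta_mol):
    """Derive particle backscatter beta_part [1/(m*sr)] from the
    scattering ratio SR = (beta_mol + beta_part) / beta_mol."""
    sr = np.asarray(scattering_ratio, dtype=float)
    bm = np.asarray(beta_mol, dtype=float)
    return bm * (sr - 1.0)

# Hypothetical three-bin profile with an aerosol enhancement in the middle bin;
# SR = 1 means a purely molecular return.
beta_mol = np.array([8e-7, 6e-7, 4e-7])  # made-up molecular backscatter values
sr = np.array([1.0, 4.0, 1.1])
print(particle_backscatter(sr, beta_mol))
```

An SR well above 1, as observed in the stratospheric plume, thus translates directly into a non-zero particle backscatter coefficient.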
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E2)

Presentation: The impact of Aeolus winds on tropical wind shear and wave-driven circulations: key findings from Aeolus+Processes project

Authors: Nedjeljka Žagar, Robin Pilch Kedzierski, Valentino Neduhal, Frank Sielmann, Giovanna De Chiara, Michael Rennie, Sean Healy
Affiliations: University of Hamburg, European Centre for Medium-Range Weather Forecasts
Understanding atmospheric circulation relies on analysis data, produced by assimilating observations in numerical weather prediction (NWP) models. However, analyses (the initial state for NWP) are only as reliable as the observations that enter the data assimilation process and the NWP models that provide the first guess for the data assimilation. The operational NWP system of the ECMWF has the largest analysis uncertainties and short-range forecast errors in the tropical upper troposphere and lower stratosphere (UTLS), a layer between approximately 12 km and 30 km. The UTLS is also the layer where the largest and longest-lasting benefits of the assimilation of Aeolus' horizontal line-of-sight (HLOS) wind profiles in the ECMWF system were found. Similar effects were reported for all major NWP systems that assimilated Aeolus data. By analysing differences between ECMWF analyses that used HLOS winds on top of all other data assimilated by the ECMWF system (denoted Aeolus) and analyses without Aeolus (denoted NoAeolus), we explain how Aeolus winds affected tropical waves and the zonal mean state within the UTLS in the ECMWF system. We found that Aeolus systematically enhanced the amplitudes of the Kelvin wave and the n=1 Rossby wave within the tropical tropopause layer. The non-zero systematic effects suggest model shortcomings (for example, in numerical aspects such as vertical diffusion), but the systematic effects account for a smaller portion of the total impact. The total effect (measured as root-mean-square differences, RMSDs, between Aeolus and NoAeolus) depends on the background wind shear, with greater impact in easterlies, which have stronger shear than westerlies, especially over the Indian Ocean. Large RMSDs for the mixed Rossby-gravity wave manifest the information transfer from nearly zonal HLOS winds in the tropics to the meridional flow through the 4D-Var data assimilation.
Changes in equatorial wave activity in the tropopause layer by Aeolus winds are shown to affect the wave forcing of the quasi-biennial oscillation (QBO). An especially large effect of Aeolus winds in the stratosphere took place during the 2019/2020 QBO disruption, when in-situ observations of the disruption led to a considerable improvement of the forecasts started from Aeolus versus NoAeolus analyses. In the lower stratosphere, the share of the RMSDs associated with the zonal mean state increased from 20% on average to 60% during the QBO disruption, demonstrating the importance of observing shear lines during extreme events, when forecast accuracy is most critical. In normal QBO conditions (summer 2020), the assimilation of Aeolus winds improved the transition from the easterly to the westerly phase of the QBO. The quantification of tropical analysis uncertainties in terms of the ratio between the RMSDs and the variability in NoAeolus analyses shows substantial flow dependency of the effect of HLOS winds, with the largest analysis uncertainties in regions of enhanced longitudinal and vertical shear in the background wind, and near the zero-wind line. The impact of Aeolus data on the zonal wind varies from 10% to 60% of the intra-monthly variability in the tropical UTLS, highlighting the importance of observing shear zones within the UTLS for understanding tropical variability.
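The impact metrics used in this study can be sketched as follows; this is a minimal illustration with synthetic wind fields (the function names and the toy data are assumptions, not the authors' code): the RMSD between the Aeolus and NoAeolus analyses, normalised by the temporal variability of the NoAeolus analyses.

```python
import numpy as np

def rmsd(field_a, field_b, axis=0):
    """Root-mean-square difference between two analysis fields
    (e.g. zonal wind from the Aeolus and NoAeolus experiments)."""
    a, b = np.asarray(field_a, dtype=float), np.asarray(field_b, dtype=float)
    return np.sqrt(np.mean((a - b) ** 2, axis=axis))

def impact_ratio(aeolus, no_aeolus, axis=0):
    """RMSD as a percentage of the temporal variability (standard
    deviation) of the NoAeolus analyses."""
    return 100.0 * rmsd(aeolus, no_aeolus, axis=axis) / np.std(no_aeolus, axis=axis)

# Synthetic zonal wind at 10 grid points over 120 analysis times:
rng = np.random.default_rng(0)
u_noaeolus = rng.normal(0.0, 5.0, size=(120, 10))
u_aeolus = u_noaeolus + rng.normal(0.0, 1.0, size=(120, 10))  # Aeolus shifts the analysis
print(impact_ratio(u_aeolus, u_noaeolus).round(1))
```

With real analyses, ratios of this kind computed per region and level yield the 10-60% impact figures quoted above.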
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E2)

Presentation: Assessing the impact of reprocessed Aeolus wind data in global weather forecasting

Authors: Michael Rennie
Affiliations: ECMWF
The ESA Earth Explorer mission Aeolus carried the first Doppler wind lidar instrument (ALADIN) in space. It provided wind profiles from above the surface to 20-25 km altitude (although 30 km was briefly reached), with near-global coverage due to the near-polar, sun-synchronous, dawn-dusk orbit. The wind information is the horizontal line-of-sight (HLOS) component, obtained via off-nadir pointing at 35 degrees, perpendicular to the satellite-Earth velocity. Winds were measured from 3 September 2018 to 5 July 2023. ECMWF (European Centre for Medium-Range Weather Forecasts) has been involved in the Aeolus mission since its inception: particularly in the Level-2B wind processing algorithm development (in collaboration with several institutions), in the development of the IFS (Integrated Forecasting System) for data assimilation of the winds, and, during the mission (and now post-mission), as part of the Aeolus DISC (Data Innovation and Science Cluster) in the monitoring and use of the data for NWP (Numerical Weather Prediction) and atmospheric composition forecasting. After demonstrating a strong positive impact for a single satellite instrument, the L2B winds were operationally assimilated at ECMWF from 9 January 2020 to 30 April 2023, using the near-real-time data. Several NWP centres also found strong positive impact with L2B winds and some assimilated them operationally. Due to Aeolus' success, an ESA/EUMETSAT follow-on operational mission, EPS-Aeolus, has been put forward for EUMETSAT Programme approval. For Aeolus' legacy we aim to produce the best possible, consistently processed dataset, so that society can continue to benefit from the data, e.g. by assimilating it in the next generation of reanalyses (such as ERA6), identifying and perhaps correcting NWP model errors, and supporting further atmospheric science research.
This talk focusses on the impact assessment of the Level-2B HLOS (horizontal line-of-sight) winds in global NWP using the most recently reprocessed data, covering the longest period of consistently reprocessed data so far: September 2018 to April 2023 (Baseline 16, 4th reprocessing). The L2B wind data quality has improved with the 4th reprocessing, with smaller random and gross errors (better quality control) and reduced wind biases associated with uncorrected "hot" pixels. This leads to more consistent impact in NWP over the mission. The longer and more consistently reprocessed dataset makes a fairer assessment possible of how the Aeolus impact varied with time, in particular its dependency on the varying atmospheric-path signal levels (and lasers) and on the meteorological conditions. New results from Observing System Experiments, Forecast Sensitivity to Observation Impact, and experiments with modified IFS parameterised vertical diffusion (which improved vertical wind shear in the tropical lower stratosphere) will be discussed.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall E2)

Presentation: Achievements and Lessons Learnt from ESA's Aeolus Mission

Authors: Oliver Reitebuch, Tommaso Parrinello
Affiliations: DLR Institute of Atmospheric Physics, ESA-ESRIN, DLR, ECMWF, KNMI, Météo-France, TROPOS, DoRIT, S&T, Serco, OLA, Physics Solutions, IB Reissig, Les Myriades
Launched in August 2018, ESA’s Earth Explorer mission Aeolus was the first to measure atmospheric wind profiles on a global scale. As the first-ever Doppler Wind Lidar instrument in space, Aeolus significantly contributed to the improvement in numerical weather prediction (NWP) by measuring one component of the horizontal wind vector until the end of its operational mission phase on 30 April 2023. Following an instrument test period until 5 July 2023, where a new record in ultraviolet (UV) laser energy of 182 mJ over 33 h was achieved, the Aeolus satellite re-entered into the atmosphere on 28 July 2023. Aeolus achieved its mission objectives, demonstrating a clear positive impact on weather forecasts across several NWP models. These accomplishments were made possible through the critical contributions from the Aeolus Data Innovation and Science Cluster (DISC). The DISC supported the mission with a wide range of activities, including instrument and product quality monitoring, retrieval algorithm and operational processor enhancements, and NWP impact assessments using wind and aerosol products from Aeolus. Coordinated by DLR, the Aeolus DISC brought together expertise from ECMWF, KNMI, Météo-France, TROPOS, DoRIT, ABB, S&T, Serco, OLA, Physics Solutions, IB Reissig and Les Myriades, involving more than 40 scientists and engineers within the DISC. The contributions of the DISC will continue in phase F of the Aeolus mission until end 2028. The presentation will discuss the achievements of the Aeolus mission during its nearly five-year lifetime, with a particular focus on the ALADIN instrument performance and the evolution of the wind observation quality. We will highlight some key lessons learned from this mission, providing valuable insights for future Earth Observation missions. 
Additionally, the presentation will outline the objectives of the Aeolus DISC contributions in the on-going phase F of the mission, emphasizing the anticipated benefits and improvements in data quality and scientific applications.
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Room 1.85/1.86)

Session: C.02.19 Future trends in smallsat based research missions

Given the recent achievements in smallsat-based operational and scientific missions, this session will explore beyond-state-of-the-art and frontier ideas that can realistically be matured in the next 3-6 years. The session aims to give an overview of what to expect from later Scout/Phisat missions in 2027 or 2030; proponents shall clearly present their scientific goals, their relevance to EO programmes, and an initial mission-level implementation of their ideas.

Presentations and speakers:


SnowCAT: an Innovative MIMO SAR Mission for Snow Characterization by SAR Tomography


  • Francesco Banda - Aresys

Incoherent Combination of Monostatic-Bistatic Long Baseline SAR Acquisitions: Future Benefits of PLATiNO-1 Mission


  • Antonio Gigantino - University of Naples

Transforming Earth Observation Science Missions with Advanced Small Satellite Constellations


  • Stephanie Mottershead - Surrey Satellite Technology Ltd

HiBiDiS: The Hyperspectral Biodiversity Scout Mission


  • Maria J. Santos - University of Zurich
  • Gilles Mariotti - SITAEL

SOVA-S - Satellite Observation of Waves in the Atmosphere - Scout 2nd Cycle


  • Krystof Pridal - OHB Czech Space

Second Scout Cycle Consolidation Study – SIRIUS (Space-Based Infrared Imager for Urban Sustainability)


  • Federico Branca Roncati - Thales Alenia Space in Spain
  • Maria Andrea Vidal Urriza - Thales Alenia Space in Spain
Add to Google Calendar

Friday 27 June 08:30 - 10:00 (Hall L1/L2)

Session: C.01.21 Technology by National Agencies for future EO missions

In this session, you will learn about the technology strategy and examples of technology developments in National technology programmes for future Earth Observation missions.
This complements other technology developments carried out through ESA programmes.
The main National Agencies will be invited to present on this topic.

Presentations and speakers:


Overview of European Technology for EO missions


  • Josep Rosello - ESA

EO Technology Development under the UK National Programme 2020-2025


  • Tim Manton - UKSA

ASI technological Roadmap for EO Missions


  • Raffaele Votta - ASI

Examples of innovative technological EO developments in CNES


  • Cherchali Selma - CNES

Overview on DLR's Mission Preparation Activities


  • Christian Bruens - DLR

NASA Earth Science Technology Development for Future Science Missions


  • Sachi Babu - NASA
Add to Google Calendar

Friday 27 June 09:00 - 09:20 (EO Arena)

Demo: F.03.04 DEMO - ATOM - Rapid Virtual Tasking of Satellite Imagery via Web Browser or API

Easy, instant access to high-resolution satellite imagery has long been a demand of power users. ATOM takes satellite tasking and imagery access to the next level—enabling organizations to plan, order, and manage complex imagery projects with just a few clicks. We’ll showcase how ATOM enables direct satellite tasking in just 60 seconds, provides real-time feasibility studies with 75+ inputs, and offers advanced project management tools to streamline operations. See how ATOM simplifies complex workflows, ensuring seamless access to high-quality EO data. Whether you're a power user or new to satellite imagery, this session will equip you with the insights to maximize your geospatial capabilities.

Join us for an in-depth look at ATOM, EUSI’s new direct tasking and archive ordering platform. We’ll explore how ATOM provides:
-Granular control over search filters, product parameters, and tasking settings
-Real-time, highly accurate feasibility studies—leveraging 75+ inputs beyond basic satellite access
-A seamless web and API experience that integrates into diverse operational workflows.

Speakers:


  • Alan Scandrett - EUSI
Add to Google Calendar

Friday 27 June 09:45 - 10:05 (EO Arena)

Demo: D.01.13 DEMO - Vizlab - Data visualization on Destination Earth Platform

Vizlab offers an intuitive GUI and advanced 3D rendering technologies within a user-centric approach, aiming to provide a tailored and immersive storytelling experience that goes beyond simple data visualization, making complex data sets accessible and understandable to a broad audience.

The Data Visualisation Service within the DestinE Platform framework is engineered to enrich user interaction with a vast array of geospatial and observational data. By offering a suite of visualization options, including 2D, 3D, and 4D representations, it caters to a diverse set of analytical needs and data types.

This service is adept at handling the complexity of spatial data, enabling users to effortlessly navigate through intricate datasets and extract meaningful insights. Key to its design is the adaptability to various client devices, ensuring that users receive an optimized experience whether accessing data on mobile devices or desktops. Interoperability is another cornerstone of this service, as it seamlessly integrates with other data management services on the DestinE Platform for efficient data preparation and analysis.

Speakers:


  • Gianluca Palumbo - DestinE Vision
  • Roberta Rietti - DestinE Vision
  • Francesco Garofalo - DestinE VizLab
Add to Google Calendar

Friday 27 June 10:00 - 11:30 (Plenary - Hall D)

Session: Breakthroughs and Paradigm Shifts in EO

Session Objectives

This Plenary session will convene industrial players who have already worked with ESA/EOP for a discussion of recent and soon-to-be-expected breakthroughs. They will share their vision of future Earth Observation from an end-to-end perspective and discuss how to make progress on including space data in decision-making processes. A special focus is on infrastructure and AI-augmented business processes.

This session will be accessible via live captioning at the following link: HERE

Due to limitations in the app, this is clickable only from the web version of the programme

Panel Members


  • Fanny Bouton (OVH)
  • Rafal Modrzewski (ICEYE)
  • Giancarlo Cobini (CapGemini)
  • Patrick Lamm (SAP)
  • Christine Knackfuss-Nikolic (T-Systems)

Add to Google Calendar

Friday 27 June 10:00 - 10:45 (ESA Agora)

Session: C.01.18 Crafting the Perfect Bouquet of Innovation: Presenting the Next EO Technology Demonstration Mission

What are the next generation of promising technologies to be demonstrated in space? Are you working on new technologies for space demonstration? How can these developments contribute to strengthening Earth Observation efforts worldwide?

This session will present the top small scale mission concepts generated during two workshops held earlier this week (Monday and Thursday). These workshops focused on gathering ideas for small-scale technology demonstration missions planned for 2030. Join us to explore the most promising concepts and innovations identified during these workshops.

Speakers:


  • Emma De Cocker - ESA
  • Tuur Strobbe - ESA
  • Sofia Lembo - ESA
  • Paolo Bazzocchi - ESA
Add to Google Calendar

Friday 27 June 10:07 - 10:27 (EO Arena)

Demo: B.01.09 DEMO - Unlocking the Power of Earth Observation: Global Development Assistance (GDA) Analytical Processing Platform (APP)

The GDA APP demonstration will provide an in-depth look at how Earth Observation (EO) data can be leveraged for international development projects. The session will focus on showcasing the platform’s intuitive, cloud-based analytical environment designed to support both technical and non-technical users, including professionals from International Financial Institutions (IFIs), the Global Development Assistance (GDA) program, and the wider development aid community.

The demonstration will showcase selected EO capabilities available in GDA APP, illustrating how satellite data can be transformed into actionable insights. Participants will gain a practical understanding of how the platform simplifies complex EO analytics, making them accessible and easy to use.

Demo Session Structure:

Platform overview (5 min): An introduction to GDA APP’s interface and core functionalities, focusing on: EO Widgets, Explorer, Advanced Features.

Capability Demonstration (10 min): Showcasing 3-5 EO capabilities from the list below:
AI Super Resolution – Enhancing Sentinel-2 imagery resolution.
AI Road Extraction – Automatically detecting road networks.
Vegetation Trends – Analysing long-term vegetation dynamics.
Built-Up Areas Delineation – Identifying urban areas in different population densities.
Built-Up Areas Change Detection – Tracking urban expansion over time.
Climate Indicators Processor – Calculating climate variables, indices, and anomalies.
Ground Motion Monitoring – Observing and analysing ground deformation.
Surface Water Dynamics – Monitoring changes in surface water extent.
Vessel Detection System – Tracking vessel movements and activities.
EO Time-lapse Generator – Creating visual time series animations.

Upcoming Features & Roadmap (3 min): An overview of the future developments and enhancements planned for the platform, including:
EO Capability / case study execution through JupyterLab
Integration of additional EO Capabilities / Application Packages provided by user community

Q&A Session (2 min): Open discussion to address participant questions.

We encourage all LPS participants to register and create an account on the GDA APP (https://app-gda.esa.int/) to fully explore its features.

Read more for additional details and updates:
https://app-gda.esa.int/user-guide
https://gda.esa.int/cross-cutting-area/app/

Speakers:


  • Hanna Koloszyc - GeoVille
  • Alessia Cattozzo - MEEO

Supporting team:


  • Simone Mantovani - MEEO
  • Fabio Govoni - MEEO
Add to Google Calendar

Friday 27 June 10:30 - 10:50 (EO Arena)

Demo: D.03.29 DEMO - Julia for large scale geospatial data analysis

The volume of Earth observation data is growing at an ever-increasing rate. We therefore need tools to efficiently analyse this growing stream of available remote sensing data. Spatiotemporal data cubes are becoming ever more abundant and are widely used in the Earth Observation community to handle geospatial raster data.
JuliaGeo provides a flexible and powerful toolkit to derive insight from large Earth observation datasets. In this demonstration we showcase the use of the Julia programming language and its geospatial ecosystem for interactively analysing and visualising large-scale datasets from different sensors.
Julia is an interactive scientific programming language designed for HPC applications, with primitives for multi-threaded and distributed computation built into the language.

Speaker:


  • Felix Cremer - Max Planck Institute for Biogeochemistry
Add to Google Calendar

Friday 27 June 10:52 - 11:12 (EO Arena)

Demo: D.01.19 DEMO - EDEN service in the platform - Interacting with the DestinE Data Portfolio

#cloud-native

The EDEN service demonstration will provide an in-depth look at the Destination Earth (DestinE) data portfolio, including Digital Twins, Copernicus data and services, as well as many other datasets made available through the federated data sources. The session will focus on showcasing both the platform's human-friendly and machine-to-machine interfaces, designed to support both technical and non-technical users from the EO community.
The demonstration will showcase selected case studies on air quality monitoring and forecasting for the analysis of natural phenomena and human activities from satellite and model-based data, illustrating the benefit of Analysis-Ready data for the development of cloud web-based services. Participants will gain a practical understanding of how the platform provides native and cloud-native data.
Demo Session Structure:
- Platform overview (5 min):
- An introduction to EDEN service and core functionalities: Finder, Harmonised Data Access API.
- Data Portfolio

- Case Studies (15 min):
- Dust events, whose frequency is increasing due to changing atmospheric conditions, transport fine particles over long distances, with severe consequences on air quality and visibility across Europe.
- Wildfires, boosted by rising temperatures and prolonged droughts, release massive amounts of pollutants, further degrading air quality.
- Case study execution through JupyterLab

- Q&A Session: Open discussion to address participant questions.

We encourage all LPS participants to register and create an account on the DestinE Platform (https://platform.destine.eu/) and to read more about the EDEN service and its features:
https://platform.destine.eu/services/service/eden/
https://platform.destine.eu/services/documents-and-api/doc/?service_name=eden

Speakers:
  • Simone Mantovani - MEEO
  • Alessia Cattozzo - MEEO
  • Federico Cappelletti - MEEO
Add to Google Calendar

Friday 27 June 11:15 - 11:35 (EO Arena)

Demo: A.09.13 DEMO - CS2EO: Query Platform for Altimetry Data

CS2EO is a high-performance geospatial-temporal query platform for Earth Observation data, developed by Earthwave and the University of Edinburgh for ESA. Initially created for the CRYO2ICE campaign, the platform has evolved into a robust tool that provides users with the ability to discover, visualise, combine, and download a wide range of datasets from missions such as CryoSat-2, ICESat-2, CryoVEx, CryoTEMPO, Operation IceBridge, and AWI IceBird.

This demonstration will highlight the key features and capabilities of the CS2EO portal, showing how it simplifies access to both coincident and individual altimetry data. Users can efficiently find and download coincident data with time separations ranging from just a few hours to up to 28 days, supporting a wide range of applications. The Advanced Data Download feature allows for filtering by column value, reprojection, and reformatting before downloading, further enhancing data accessibility and usability. Predicted ground track data enables users to plan for future CryoSat-2 and ICESat-2 satellite passes and intersections. The portal’s Time Series Processing functionality allows users to generate, visualise, and download time series for multiple datasets.
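At its core, a coincident-data query like the one described above pairs acquisitions whose time separation falls within a chosen window. A minimal sketch in Python (the timestamps are made up, and this is not the CS2EO implementation):

```python
from datetime import datetime, timedelta

def coincident_pairs(times_a, times_b, max_separation=timedelta(hours=3)):
    """Return (i, j) index pairs whose acquisition times differ by at most
    max_separation (CS2EO supports windows from a few hours up to 28 days)."""
    return [
        (i, j)
        for i, ta in enumerate(times_a)
        for j, tb in enumerate(times_b)
        if abs(ta - tb) <= max_separation
    ]

# Hypothetical CryoSat-2 and ICESat-2 pass times:
cs2 = [datetime(2023, 1, 1, 10, 0), datetime(2023, 1, 2, 9, 30)]
is2 = [datetime(2023, 1, 1, 11, 15), datetime(2023, 1, 5, 12, 0)]
print(coincident_pairs(cs2, is2))  # -> [(0, 0)]
```

A real query additionally intersects ground tracks spatially; widening `max_separation` towards 28 days admits progressively more pairs.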

Join us to see how CS2EO can be used to streamline common altimetry tasks and support your research! For more information about the platform, visit www.cs2eo.org.

Speakers:


  • Julia Bizoń - earthwave
  • Sarah Appleby - earthwave

Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.49/0.50)

Session: E.01.01 EO for Cultural and Natural Heritage: from innovation to user uptake - PART 1

Cultural and Natural Heritage (CNH) have major significance for local communities as an identity factor and a facilitator of social cohesion. Furthermore, CNH are strategic assets for tourism, and their promotion and valorisation contribute to countries' economic development. Preservation of CNH is therefore essential, also towards strengthened sustainability and resilience to current global challenges, as promoted by the United Nations' 2030 Agenda for Sustainable Development.
Dedicated sessions held during the last two editions of the Living Planet Symposium helped unveil the potential and benefits of Earth Observation (EO) for CNH. Satellite data and technologies from Copernicus and Contributing Missions are already playing a key role in enabling novel, transformative and cost-effective solutions to undertake a variety of specific tasks, such as: archaeological prospection, landscape archaeology, multi-temporal monitoring, risk assessment, and impact assessment due to anthropogenic activities (e.g. urbanization, illegal excavations), natural hazards (e.g. floods, earthquakes) and climate change.
Public administrations and authorities in charge of CNH management are more aware of these applications, and so more remote sensing and geospatial companies are developing tailored services. At the same time, CNH has become an established application area in Copernicus and its services, e.g. the Support to EU External and Security Actions (SESA) Service, the Emergency Management Service (EMS) and the Climate Change Service (C3S). Furthermore, various initiatives have been launched by ESA (e.g. Downstream Gateway, ARTES IAP) and national space agencies to support industry in developing downstream applications in the CNH sector, and more alliances have been established with national and international bodies such as UNESCO and ICOMOS.
In this context, this session aims to understand how EO scientists, CNH user community, institutions and industry are partnering and cooperating to enable novel applications, improve those that are already being delivered, and facilitate the user uptake to make EO data and technologies from Copernicus, Contributing Missions and commercial missions more deeply embedded into operational workflows for study, monitoring, preservation and promotion of CNH.

The session encourages submissions focusing on:
• Solutions based on the exploitation of satellite data, as well as exploitation of Artificial Intelligence, Machine Learning, thematic platforms, cloud computing resources and infrastructure, collaborative environments;
• Benefits from the use of Copernicus products and services, also in relation to impacts due to climate change and towards future resilience in CNH management;
• Use cases addressing specific user requirements and needs in the field of either CNH discovery, study, monitoring, preservation or cultural/touristic promotion;
• Success stories and best practices of EO integration in operational systems, workflows and processes on CNH;
• Downstream applications, with a focus on multidisciplinary collaboration and partnerships between heritage institutions, academia and commercial providers;
• Initiatives of capacity building towards user uptake by the CNH community and end-users.

Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Can Sentinel-2 data be used for spectral separability for archaeological prospection? Lessons learnt from use-cases in Telesia and Labro, Italy

Authors: Antonio Corbo, Dr. Deodato Tapete, Prof. Alessandro Jaia
Affiliations: Sapienza Università di Roma, Agenzia Spaziale Italiana (ASI), Sapienza Università di Roma
Cultural and Natural Heritage (CNH) is widely recognised as a symbol of society, national identity and cultural values, serving as a vital resource for social development and tourism, and it therefore constitutes a significant economic asset for many local communities. However, anthropic activities (e.g. urbanisation, infrastructural development, agricultural activities), climate change and natural hazards (e.g. floods or earthquakes) pose significant challenges and risks to the integrity of CNH. These threats necessitate innovative management approaches to safeguard CNH, highlighting its crucial role in contemporary society. Therefore, in addition to serving the historical understanding of landscapes and preparatory actions prior to excavations, mapping cropmarks and surface features that may relate to buried remains is crucial to assess the "archaeological risk", i.e. the likelihood that material of archaeological relevance is present and may be subject to (un)intentional destruction. In this sense, archaeological prospection is a crucial means for preventive archaeology, and is intended to provide guidelines for effective planning, to support decisions concerning the urbanisation of rural areas and the construction of medium- and large-scale infrastructure (e.g. roads, railways, gas and oil pipelines), in order to preserve and, where possible, enhance CNH at risk (Abate and Lasaponara 2019; Fattore et al. 2021). Archaeological prospection based on aerial imagery has a very long history; however, in recent decades, Earth Observation (EO) data have emerged as an essential tool for this task, demonstrating their full potential. The advent of Copernicus Sentinel-2 data has significantly contributed to stimulating the use of EO data in archaeology (Tapete et al., 2018, 2019; Cuca et al., 2023).
However, several archaeologists and practitioners argue that Sentinel-2 data do not provide sufficient spatial resolution for the purposes of CNH research and study (e.g. McGrath et al., 2020). On the other hand, other authors have found that the high temporal revisit provided by the constellation (despite the unpredictable cloud coverage) is a great advantage for monitoring the appearance of surface anomalies, if not proper cropmarks, that could help anticipate the best period of the year for surveying, either on foot or through airborne/UAV means (e.g. Cigna et al., 2023). Therefore, even though Sentinel-2 data cannot be expected to depict buried structures in detail unless their size exceeds the 10-m pixel cell in the visible and near-infrared bands, the extraction of reflectance time series and spectral indices, such as the Normalized Difference Vegetation Index (NDVI) to name the most common and simplest, is viable and can be performed successfully across the whole yearly phenological cycle at several latitudes despite random cloud coverage, such as in central-southern Europe. The focus is therefore not on the limitation due to the spatial resolution, but instead on the actual capability of Sentinel-2 spectral bands to achieve spectral separability between areas within a site that are suspected (or already known) to hide archaeological material and areas that are empty. With this goal in mind, the present research utilised the full archive of Sentinel-2 satellite data collected in the interval 2017-2023 to undertake a multi-temporal assessment over two different archaeological contexts. The first encompasses the ancient Roman city of Telesia, located at the confluence of the Calore and Volturno rivers, northern Benevento (southern Italy). The second investigates a Roman villa near the Lake of Piediluco, in the municipality of Labro, northern Rieti (central Italy).
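The index extraction and separability analysis described above can be sketched as follows; this is a minimal illustration (the band values, the toy time series and the simple separability score are assumptions, not the authors' exact methodology). NDVI is computed from Sentinel-2 near-infrared (B8) and red (B4) reflectances.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from Sentinel-2
    reflectances: NDVI = (B8 - B4) / (B8 + B4)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def separability(ts_a, ts_b):
    """Simple separability score between two NDVI time series
    (e.g. over a suspected buried road vs. the surrounding field):
    difference of means in units of the pooled standard deviation."""
    ts_a, ts_b = np.asarray(ts_a, dtype=float), np.asarray(ts_b, dtype=float)
    pooled = np.sqrt(0.5 * (ts_a.var() + ts_b.var()))
    return abs(ts_a.mean() - ts_b.mean()) / pooled

# Made-up monthly NDVI over two AoIs: vegetation over the buried road stays lower.
road = [0.30, 0.35, 0.42, 0.40]
field = [0.45, 0.55, 0.62, 0.58]
print(separability(road, field))
```

A score well above 1 indicates that the two areas are spectrally separable, and a pattern recurring across years supports an archaeological rather than a transient environmental cause.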
The study introduces a new methodology, particularly suited for preventive archaeology, centred on analysing spectral signatures and reflectance values from multispectral images to identify possible anomalies in vegetation and soil. These anomalies are often the result of past human activities, which can be either positive (e.g., constructed structures such as building walls or roads) or negative (e.g., excavated structures such as canals or ditches). These activities affect vegetation growth and alter soil composition, resulting in common traces such as crop, soil, and damp marks. This approach therefore focuses on examining the spectral separability between areas with and without archaeological material and enables the identification and mapping of buried structures. Moreover, the high temporal frequency of Sentinel-2 imagery, which provides continuous and multi-seasonal coverage, enables a multitemporal study to detect spectral patterns that recur over time. This facilitates the attribution of potential anomalies to possible buried archaeological features and helps to limit any false positives that might be caused by random or transient environmental factors (such as rainfall during a specific season). In the two case studies analysed, in the ancient city of Telesia and the Roman villa at Labro, several Areas of Interest (AoIs) were examined by drawing polygons over areas where traditional aerial photo interpretation had not identified any traces associated with buried archaeological remains, as well as over areas where the presence of such remains had already been confirmed. The spectral signatures and reflectance values extracted from these two types of polygons were found to be similar, suggesting the likely presence of buried structures even in areas previously considered devoid of archaeological materials. 
At the same time, cases were also identified where the spectral signatures and reflectance values showed significant differences, enabling a more precise distinction between areas lacking material evidence and the actual extent of areas with archaeological presence. For example, in an AoI in the Roman city of Telesia, the spectral signatures of polygons drawn over an ancient road displayed lower reflectance values compared to those of polygons in the surrounding areas. This revealed a clear spectral separability and a recurring pattern over the years, reflecting the influence of the roadway on the overlying vegetation. The results not only aim to highlight the role of EO data in advancing sustainable and informed heritage management but also to provide a scientific evidence base that could be used by public authorities at various levels (municipal, provincial, regional) to develop appropriate urban and infrastructural planning, through the creation of potential and risk maps to preserve and promote potential archaeological sites. This research underscores the key role of open-access satellite data, provided by Copernicus, in enabling innovative remote sensing applications in archaeology, overcoming some of the traditional challenges related to the location, monitoring, preservation and promotion of CNH, and offering practical solutions to contemporary challenges in heritage conservation.
References:
Abate, N., & Lasaponara, R. (2019). Preventive Archaeology Based on Open Remote Sensing Data and Tools: The Cases of Sant’Arsenio (SA) and Foggia (FG), Italy. Sustainability, 11(15), 4145. https://doi.org/10.3390/su11154145
Cigna, F., Balz, T., Tapete, D., Caspari, G., Fu, B., Abballe, M., & Jiang, H. (2023). Exploiting satellite SAR for archaeological prospection and heritage site protection. Geo-Spatial Information Science, 27(3), 526–551. https://doi.org/10.1080/10095020.2023.2223603
Cuca, B., Zaina, F., & Tapete, D. (2023). Monitoring of Damages to Cultural Heritage across Europe Using Remote Sensing and Earth Observation: Assessment of Scientific and Grey Literature. Remote Sensing, 15, 3748. https://doi.org/10.3390/rs15153748
Fattore, C., Abate, N., Faridani, F., Masini, N., & Lasaponara, R. (2021). Google Earth Engine as Multi-Sensor Open-Source Tool for Supporting the Preservation of Archaeological Areas: The Case Study of Flood and Fire Mapping in Metaponto, Italy. Sensors, 21(5), 1791. https://doi.org/10.3390/s21051791
McGrath, C. N., Scott, C., Cowley, D., & Macdonald, M. (2020). Towards a Satellite System for Archaeology? Simulation of an Optical Satellite Mission with Ideal Spatial and Temporal Resolution, Illustrated by a Case Study in Scotland. Remote Sensing, 12, 4100. https://doi.org/10.3390/rs12244100
Tapete, D., & Cigna, F. (2018). Appraisal of Opportunities and Perspectives for the Systematic Condition Assessment of Heritage Sites with Copernicus Sentinel-2 High-Resolution Multispectral Imagery. Remote Sensing, 10, 561. https://doi.org/10.3390/rs10040561
Tapete, D., & Cigna, F. (2019). Detection of Archaeological Looting from Space: Methods, Achievements and Challenges. Remote Sensing, 11, 2389. https://doi.org/10.3390/rs11202389
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: The PERSEO Project: Enhancing Archaeological Prospection with Hyperspectral Imaging and Machine Learning

Authors: Arianna Traviglia, Giulio Poggi, Giorgio Licciardi
Affiliations: Istituto Italiano Di Tecnologia, Agenzia Spaziale Italiana
Hyperspectral images have been successfully used in archaeological prospection to enhance the visibility of surface traces and anomalies related to subsoil archaeological deposits. While most archaeological applications have predominantly relied on data from airborne sensors due to their higher spatial resolution, satellite hyperspectral data represent an emerging and promising frontier. This approach offers advantages such as better data availability and the capacity to cover extensive territorial areas. However, these benefits are counterbalanced by the lower spatial resolution of satellite hyperspectral imagery, which limits its effectiveness in detecting small-scale features that are often critical in archaeological and geomorphological studies. In this context, the PERSEO project (Prisma hyperspectral image Enhancement for Revealing cultural heritage Sites from Earth Observation), funded by the Agenzia Spaziale Italiana, aims to evaluate the suitability of PRISMA hyperspectral data for applications in the Cultural Heritage domain. The project utilises imagery from the PRISMA hyperspectral satellite and develops advanced pansharpening techniques to enhance the spatial resolution of hyperspectral products. By improving the resolution and clarity of these images, PERSEO seeks to address the limitations of lower-resolution satellite imagery, allowing for the detection of small-scale archaeological features. This contribution provides the first comparative evaluation of pansharpening techniques applied to PRISMA hyperspectral data in the context of geo-archaeological prospection, using multiple case studies from the Mediterranean region as benchmark tests. Starting from top-performing baseline techniques and developing novel approaches based on the HySure method, the project assessed pansharpening products in the context of Aquileia, northern Italy.
The evaluation compared visual inspection results with ground truth data including over 400 geoarchaeological features. Various band selection methods and statistical analyses were employed to evaluate the improvement in the detection rate of subsoil features, leveraging a broad range of spectral information and enhanced spatial resolution. The pan-sharpened data were also used to train machine learning models to automatically detect subsoil traces, using performance on multispectral Sentinel-2 imagery, with its lower spectral and spatial resolution, as a benchmark. The application to several case studies and different archaeological object classes offered the possibility to explore the implications of using hyperspectral data in archaeological practice, evaluating their contribution compared to alternative sensors, such as multispectral imagers. The results underscore the effectiveness of pansharpening, establishing satellite hyperspectral data as a promising tool for advancing geo-archaeological prospection.
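The HySure-based fusion developed in PERSEO is beyond a short example, but the general idea of pansharpening (injecting spatial detail from a higher-resolution band into lower-resolution spectral bands) can be illustrated with a classic Brovey-style baseline. This is an illustrative sketch only, not the project's method; the array shapes and nearest-neighbour upsampling are assumptions.

```python
import numpy as np

def brovey_pansharpen(bands_lr, pan_hr, scale):
    """Baseline Brovey-style pansharpening: upsample each low-resolution band by
    nearest-neighbour replication, then rescale pixel-wise so that the band sum
    follows the high-resolution panchromatic band.
    bands_lr: (n_bands, h, w); pan_hr: (h*scale, w*scale)."""
    up = np.repeat(np.repeat(bands_lr, scale, axis=1), scale, axis=2)
    intensity = up.sum(axis=0) + 1e-9   # avoid division by zero
    return up * (pan_hr / intensity)
```

The Brovey ratio preserves the relative spectral shape of each pixel while imposing the pan band's spatial detail; model-based methods such as HySure instead solve a regularized inverse problem, which better preserves the full spectral signature.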
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: SmartDIG - AI-Powered Preventive Archaeology Web-GIS Service for Detecting and Preserving Cultural Heritage

Authors: Andrea Cavallini, Angie Catalina Carrillo Chappe, Prof. Emiliano Tondi, Marco Uccelli, Leticia Pérez Sienes, Nicola Lorusso
Affiliations: Starion Group, Università di Roma Tor Vergata
Preserving Cultural Heritage (CH) and archaeological features during public and private construction projects is a critical responsibility. Preventive archaeology ensures that project designs are adjusted to avoid damaging historical artifacts or potential sites. According to Italian legislation and current archaeological methodology research, this process can be enhanced by a non-invasive analysis before excavation or construction begins. Since the invention of flight, aerial photography and remote sensing have played a pivotal role in the detection of buried remains by identifying specific markers like crop marks, ground relief, and soil discoloration. These markers, often invisible at ground level, emerge from subtle variations in vegetation growth, soil moisture, or topography caused by underlying archaeological structures. Since the year 2000, Remote Sensing Archaeology has seen many innovative archaeological applications and encountered some emerging and challenging issues, analysed and described in a large number of studies. By using satellite passive and active sensors, space stations, space shuttles, aircraft and unmanned aerial vehicles (UAVs), archaeologists have recognized the immense value of satellite and aerial Remote Sensing (RS) and obtained a perspective from space that allows them to better understand known and unknown archaeological landscapes and their wider contexts. Compared with passive RS (photography and multi-/hyperspectral), active RS (radar and lidar) has the advantage of being able to detect buried desert sites or hidden archaeological landscapes in forest environments. Although there are research experiments in which semi-automatic algorithms have been applied to detect crop marks in aerial photos or satellite images, these applications have remained confined to the specific research areas for which they were developed.
Generally, the preventive analysis process is performed manually by Cultural Heritage experts who may not have the necessary Remote Sensing (RS) knowledge or adequate tools. In some cases, experts may rely on tools like Google Earth or GIMP for a quick overview of an area or on drone-services. While drones provide high-resolution imagery, they are constrained by seasonality and specific ground conditions that may obscure features during data acquisition. SmartDIG is an innovative downstream EO solution designed to address user needs in preventive archaeology and to overcome current limitations by providing a global, scalable, and season-independent tool for identifying potential archaeological features. Unlike drone surveys, which capture only specific temporal snapshots, EO data offers the ability to analyse extensive historical and current time series, enabling consistent and comprehensive monitoring across all seasons. This unique capability ensures that features concealed due to seasonal conditions can still be detected, enhancing the accuracy and reliability of preventive archaeology. It provides a user-friendly, GIS-like web application that delivers on-demand, customized analyses, tailored to meet accuracy requirements for detecting buried archaeological features. Leveraging the latest advancements in Artificial Intelligence (AI) and multitemporal EO data analysis, SmartDIG integrates data from multiple sources, including Copernicus Sentinel-1 and Sentinel-2 Missions, COSMO SkyMed, ICEYE constellations, and high-resolution missions like SkySat, PlanetScope, WorldView, and Pleiades. This multitemporal approach analyses vegetation, moisture, ploughing, and soil marks to detect anomalies potentially caused by buried archaeological features and performs a four-season overview, analysing seasonal variations and leveraging historical time series to identify consistent anomalies caused by potential archaeological structures. 
By continuously monitoring changes over time, it provides a temporal consistency check that significantly enhances the accuracy and reliability of feature detection. SmartDIG automated analysis provides unparalleled speed and precision, far surpassing traditional manual methods. The integration of active (e.g., radar) and passive (e.g., multispectral) EO sensors ensures the detection of features even in challenging environments, such as forested areas or beneath desert sands. The service’s ability to combine long-term historical data with current observations allows it to identify archaeological features that would otherwise remain undetected due to surface changes or environmental conditions. This solution is not limited by geography or scale, making it globally applicable across diverse terrains and climates. Its intuitive web platform and integration with the Italian QGIS template published by the Ministry of Culture empower users to access tailored analyses effortlessly, eliminating the need for extensive technical expertise in remote sensing. Furthermore, SmartDIG distinguishes itself with its seamless integration into existing workflows through dedicated APIs, enabling efficient data exchange and streamlined operations across various systems. This democratization of advanced EO technologies ensures that a wide range of professionals — including archaeologists, researchers, and construction managers — can benefit from its capabilities. By bridging this gap, it represents a paradigm shift in the field of Cultural Heritage preservation. It delivers not only technical innovation but also tangible societal benefits, including the protection of cultural identity, the efficient planning of urban spaces, and the mitigation of project delays and disputes. Public authorities, such as urban planners, and private stakeholders, including construction operators, are empowered to make informed decisions based on comprehensive analyses of CH areas.
SmartDIG enables urban planners to anticipate risks early in project development, ensuring compliance with legal requirements for CH preservation while avoiding costly work stoppages and social disruptions. In doing so, SmartDIG revolutionizes the field of preventive archaeology, transforming it into a more efficient and socially responsible practice. By providing a non-invasive, globally scalable, and season-independent solution, SmartDIG not only enhances the efficiency of the construction value chain but also ensures that our cultural heritage is preserved for future generations. In summary, SmartDIG combines advanced EO capabilities, AI-driven multitemporal analysis, and an intuitive interface to provide a comprehensive, non-invasive solution for preventive archaeology. Its ability to deliver accurate, season-independent, and globally scalable analyses makes it an indispensable tool for preserving our shared heritage while supporting sustainable development.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: From Leicester to Libya: Training and Collaboration in Automated Change Detection Methods for Heritage Monitoring

Authors: Dr Ahmed Mahmoud, Dr Nichole Sheldrick, Dr Ahmed Buzaian, Prof Muftah Ahmed
Affiliations: University of Leicester, Azzaytuna University
Archaeological sites across the globe are facing significant threats and heritage managers are under increasing pressure to monitor and preserve these sites. The enormous number of heritage sites and landscapes, often in remote or otherwise difficult-to-access locations, makes consistent and regular monitoring of these sites for disturbances and threats a daunting task. Combined with the increasing frequency and severity of threats to archaeological sites, from urbanisation to climate change, the need to develop novel tools and methods that can rapidly monitor the changes at and around archaeological sites and provide accurate and consistent monitoring has never been more urgent. The Endangered Archaeology in the Middle East and North Africa (EAMENA) project has developed a new Machine Learning Automated Change Detection (MLACD) tool using Google Earth Engine for automating the detection of disturbances and threats to heritage sites and landscapes. This tool uses bespoke machine learning algorithms to process sequential Sentinel-2 satellite images and produce a series of outputs, including land classification maps, change class maps, time series analysis charts, and other statistical data. By automatically comparing these outputs with the locations of known archaeological sites, we are able to detect and identify disturbances and potential threats in their vicinity (Mahmoud et al., 2024). The EAMENA MLACD has been developed with a user-friendly interface to assist users with limited knowledge of or comfort with coding to effectively carry out the monitoring workflow. To facilitate the adoption of this method by heritage authorities in the MENA region, we have created detailed training documentation in both English and Arabic. These materials showcase the tool's functionalities and provide comprehensive guidance on how to use it.
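The core monitoring idea (comparing sequential Sentinel-2 derived products and flagging changes near known site locations) can be sketched outside Google Earth Engine in a few lines of NumPy. This is an illustrative stand-in, not the EAMENA MLACD code: a thresholded NDVI difference replaces the tool's machine learning classification, and the pixel-grid site coordinates are hypothetical.

```python
import numpy as np

def change_mask(ndvi_before, ndvi_after, threshold=0.2):
    """Flag pixels whose NDVI dropped by more than `threshold` between two dates,
    a crude stand-in for the per-class change maps of the ML workflow."""
    return (ndvi_before - ndvi_after) > threshold

def flags_near_sites(mask, sites, radius):
    """Return the (row, col) site locations that have a flagged pixel within
    `radius` pixels (Chebyshev distance), i.e. sites needing inspection."""
    hits = []
    rows, cols = mask.shape
    for r, c in sites:
        r0, r1 = max(0, r - radius), min(rows, r + radius + 1)
        c0, c1 = max(0, c - radius), min(cols, c + radius + 1)
        if mask[r0:r1, c0:c1].any():
            hits.append((r, c))
    return hits
```

In the real tool the per-pixel decision comes from a trained classifier over sequential imagery rather than a single index difference, but the final step of intersecting change maps with known site locations follows the same pattern.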
The code, along with step-by-step instructions in both languages, is accessible via the EAMENA GitHub repository, https://github.com/eamena-project/EAMENA-MachineLearning-ACD. This method has been developed and tested for a number of different regions in North Africa, but most thoroughly in Libya, in collaboration with partners from the Libyan Department of Antiquities (DoA). Supported by the British Council’s Cultural Protection Fund, between late 2023 and 2024, the EAMENA project provided training for over 20 heritage professionals from the Libyan DoA, across three training workshops, in using and adapting the EAMENA MLACD tool for heritage preservation in their areas of interest. To support the training, four bespoke MLACD case studies in different regions of Libya were developed in collaboration with our Libyan partners, who then undertook fieldwork campaigns in each of these areas to assess the condition of the sites, record damage and disturbances, and validate the MLACD results: Bani Walid, Lefakat, Derna, and Fazzan. The goal of these training programmes is to share knowledge and skills, demonstrate the value of the EAMENA MLACD method, and ultimately to encourage the DoA to integrate this method into their heritage monitoring framework, as a means of reducing the time, cost, and effort required for field surveys, particularly in remote areas where such work can be challenging. After each training programme, we collected feedback on the training and the MLACD tool itself, which has been used to modify and update the tool to ensure continuous improvement. In this paper, we will reflect on the successes and challenges of these training workshops, on the feedback we received, and six months on from the last training, assess to what extent the MLACD tool has been incorporated into DoA workflows, and how we can increase uptake of the method in Libya, North Africa, and more widely. References: Mahmoud, A. M. A., Sheldrick, N., & Ahmed, M. (2024). 
A Novel Machine Learning Automated Change Detection Tool for Monitoring Disturbances and Threats to Archaeological Sites. Remote Sensing Applications: Society and Environment, 101396. https://doi.org/10.1016/j.rsase.2024.101396
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: The ALCEO Project: Machine Learning and Remote Sensing for Looting Detection

Authors: Arianna Traviglia, Giulio Poggi, Riccardo Giovannelli, Michela
Affiliations: Center for Cultural Heritage Technology, Istituto Italiano di Tecnologia, ESA
The looting of archaeological sites represents a significant global threat to cultural heritage, resulting in irreversible damage to invaluable historical landscapes and the dispersal of cultural artefacts. Since traditional on-site monitoring methods are often ineffective in remote or hazardous locations due to logistical challenges and safety concerns, remote sensing surveillance has emerged as a critical and widely adopted tool for detecting and monitoring looting activities. The ability of remote sensing technologies to provide timely and high-resolution imagery over large and inaccessible areas makes them particularly well-suited to addressing the limitations of conventional methods, enabling continuous monitoring without the need for physical presence. However, reliance on visual inspection for analysing remote sensing data has proven insufficient to match the increasing availability of timely satellite imagery. This limitation risks information loss and prevents the development of time-sensitive monitoring systems. The ALCEO project, funded by the European Space Agency, focuses on developing a proof-of-concept platform to counter looting of archaeological sites. By integrating machine learning (ML) and remote sensing technologies, the project aims to develop advanced methods for the timely detection of illicit activities. The core of the project is a deep learning change detection algorithm designed to identify newly appearing pits in time series imagery of monitored archaeological sites. This method utilizes sequential image analysis to detect changes by comparing newly acquired imagery with previous datasets and known looting site patterns. The system was designed and tested across several key sites in the Mediterranean area and in the MENA (Middle East-North Africa) region. These include Arpinova and Cerveteri in Italy, Dura Europos and Ebla in Syria, and Aswan in Egypt. 
These sites were selected for their varied environmental and geomorphological characteristics, providing a robust testing ground to evaluate and refine the system’s ability to generalize across diverse conditions. The system consists of two main components: the "Data Management Sub-system," responsible for generating datasets for change detection, and the "Modelling Sub-system," which manages the training, evaluation, and inference processes of the deep learning models. The data consist of a geodatabase containing over 6,000 manually annotated looting traces from various sites, created for training purposes. The deep learning model employs a fully convolutional Siamese network architecture, specifically designed to detect newly formed looting pits with high accuracy. ALCEO has demonstrated strong performance, achieving an Intersection over Union (IoU) score of 0.72 averaged across test sites, peaking at more than 0.86 at Ebla and Dura Europos. The results were also assessed in a ground truth campaign conducted in Aswan in 2023 and refined through a “human in the loop” approach, which served to improve the reliability of the model and of newly detected features. These results show that ALCEO can significantly contribute to the analysis of remote sensing images, facilitating prompt identification of looting activities and enabling the development of rapid and effective interventions to ensure the preservation of archaeological sites.
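The Intersection over Union (IoU) score reported above is the standard overlap metric between predicted and annotated masks. For readers unfamiliar with it, a minimal implementation (not taken from the ALCEO codebase) looks like this:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union between binary masks, e.g. detected vs.
    manually annotated looting pits. Returns a value in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)
```

An IoU of 0.72 thus means that, averaged over test sites, the overlap between detected and annotated pit areas covers 72% of their combined footprint.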
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: New insights from the use of Copernicus, Contributing Missions and ESA Third Party Missions data for archaeological prospection in the framework of Dragon Cooperation

Authors: Dr. Deodato Tapete, Dr. Francesca Cigna
Affiliations: Agenzia Spaziale Italiana (ASI), National Research Council (CNR), Institute of Atmospheric Sciences and Climate (ISAC)
In the last decades, EO technologies and imagery collected by means of space-borne sensors – mainly operating in the visible, InfraRed (IR), and microwave portions of the electromagnetic spectrum – have been increasingly used for cultural heritage applications. In the field of archaeological prospection, there is evidence in the scientific literature clearly showing a user preference for High and Very High Resolution (HR and VHR) optical imagery, predominantly collected in the visible and Near-IR (NIR) channels, and less in the short-wave infrared or other bands. These data are accessed from open visualization and/or processing platforms (e.g. Bing Maps, Google Earth Pro, Google Earth Engine), via data grants, or (albeit less commonly) through direct purchase from commercial providers. One of the properties most appreciated by end-users is the provision of these data in pre-processed (e.g. orthorectified, pansharpened) or analysis-ready formats (e.g. geocoded raster), directly enabling further handling and analysis, for example in Geographic Information System (GIS) environments. The above context has contributed to making Medium Resolution (MR) data (e.g. Landsat) progressively less appealing for archaeological applications, especially at the local and site scale. For some strands of the archaeological community, even optical HR data collections such as those provided by Copernicus Sentinel-2 at 10 m resolution cannot provide the necessary ground sampling distance for archaeological purposes such as archaeological prospection and crop mark detection. Contrasting evidence is provided by another research strand that, instead, is unveiling the added value of high temporal revisit constellations (e.g. Sentinel-2). However, this operational advantage has been discussed with regard to monitoring and condition assessment of cultural heritage sites during both ordinary and crisis times, but not with regard to archaeological prospection.
Indeed, the published literature on buried remains is still limited and mostly refers to linear ancient structures and infrastructure. Therefore, a topic that needs to be investigated more deeply is the potential of HR multispectral data, either alone or in combination with other VHR optical imagery and/or Synthetic Aperture Radar (SAR), for archaeological prospection and crop mark detection. In this respect, the integration of optical imagery with other sensor data could also build upon the lessons learned on the use of satellite SAR to address studies of archaeological landscapes and archaeological prospection. In addition to the intrinsic capability of operating under any weather conditions, the advantageous properties offered by SAR include: wide-swath to spotlight coverage, kilometre to sub-metre spatial resolution, historical and present-day temporal coverage, longer to shorter wavelengths (e.g. L/S/C/X bands), and monthly to daily revisit time when collecting time series. These properties have been further improved from earlier (e.g. ERS-1/2, ENVISAT, ALOS-1, RADARSAT-1) to current satellite missions (e.g. TerraSAR-X, COSMO-SkyMed First and Second Generation, Sentinel-1, ALOS-2 and NovaSAR). However, while some research has been published on the use of SAR for archaeological prospection in arid environments, there is still a need for more case studies highlighting that, under certain surface roughness, soil moisture content, and surface land cover conditions, even imagery collected at shorter wavelengths such as X-band (often provided at HR to VHR, nowadays reaching up to <1 m resolution, as in the case of the Italian Space Agency’s COSMO-SkyMed) can unveil “shadow marks”, “crop marks”, and “soil and damp marks”. Copernicus, its Contributing Missions and ESA Third Party Missions (TPMs) all contribute to providing such a wealth of optical and SAR data, which can be exploited for purposes of archaeological prospection.
It is in this context that the project “SARchaeology: exploiting satellite SAR for archaeological prospection and heritage site protection” [grant id.58113] was undertaken, as part of the 5th phase of the Dragon cooperation program (namely, Dragon-5; https://dragon5.esa.int/), funded by ESA and the National Remote Sensing Center (NRSCC) – Ministry of Science and Technology (MOST) of the P.R. China. Among the scientific objectives, a focus was to test the following two capabilities for prospection in vegetated areas of high archaeological potential located in a temperate climate: • Identification of buried structures and archaeological material using VHR X-band SAR imagery, despite its theoretically lower penetration capability compared to other radar bands; • Early detection of crop mark appearance based on spectral analysis using very high temporal revisit Sentinel-2 time series. The test site was the Ostia-Portus archaeological area, located ~22 km west of Rome and 3-4 km inland from the Tyrrhenian Sea. Specifically, the analysis focused on two areas particularly abundant in buried features: (a) Capo Due Rami and (b) Pianabella. A systematic database was first generated to catalogue crop and soil marks visible in historical aerial photographs and VHR satellite images covering a long time interval from the early 20th century up to 2024. VHR satellite images included 0.5 m resolution data acquired by SkySat, Pléiades and WorldView-2, accessed through the Dragon Cooperation and ESA TPM programs. This database acted as a reference to assess buried feature detectability using MR to VHR SAR scenes acquired by the RADARSAT-2 and Copernicus Sentinel-1 missions in C-band, ALOS-1 in L-band and COSMO-SkyMed in X-band. These were exploited to trial the detection of crop marks at (semi-)buried and sub-surface archaeological features in the archaeological landscape of Ostia-Portus.
The analysis highlighted a progressive increase in the detectability of smaller archaeological features when moving from L- to X-band, from co- to cross-polarizations, and toward finer spatial resolutions. The best results were achieved with 1 m resolution imagery from COSMO-SkyMed Enhanced SpotLight (ES) mode, VV polarization, ascending orbits, collected in the period 2021-2024, which revealed crop marks correlated with the presence of the Northern Canal and other artificial channels located northeast of the Portus hexagonal basin (Trajan’s harbour). In this paper we will illustrate which conditions allowed for such detection and draw some conclusions for further replication in other similar geographic and archaeological contexts. With regard to early detection of crop marks, a set of AOIs, including geo-archaeological features and neighbouring areas where no crop marks were expected, was selected to conduct the multi-temporal analysis of spectral bands, profiles, and vegetation indices in the presence/absence of buried archaeological features, and to highlight any differences in vegetation status which could indicate crop marks due to as-yet unknown features. The analysis of spectral profiles and NDVI trends updated to 2024 highlighted that they were compatible with the typical behaviour of soil without buried structures, in contrast with the location where very substantial building material related to the Via Portuensis Roman road lies underground, hindering plant growth, particularly during periods with limited rainfall, as typically happens during the summer season in Rome. The time series allowed us to capture the decrease/increase of vegetation in relation to the alternation of artificial cuts (the investigated areas are used for farming) and regrowth, with spectral and temporal separation between areas with and without archaeological material.
The experiment shows that a multi-sensor SAR and optical paradigm can lead to a significant improvement in performance for crop mark detection, as a valid alternative to the approach based only on spot (aerial) observations that is still commonly used across the archaeological community. In areas where crop marks are recurrent features across the landscape, their temporal behaviour can be effectively monitored and characterized even at HR, e.g. with the use of Sentinel-2 time series.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.85/1.86)

Session: C.01.20 Recurring Spacecraft Platform technology for future EO missions

In this session you will learn about recurrent spacecraft platform sub-systems and space-to-ground communication technologies enabling new and higher-performance remote sensing techniques and measurements for future Earth Observation (EO) systems.
These industrial aspects are crucial for enabling better EO science, as they leave greater margins and resources for more capable EO sensors, and for industrial scalability towards more affordable and sustainable missions, including considerations on compliance with new Zero Debris regulations.
The focus is on large spacecraft in ESA missions, but synergies with smaller satellites intended for constellations can also be addressed.

Presentations and speakers:


Overview of European Technology for EO missions and Mid-size sat INDustrialisation initiative (M-IND)


  • Josep Rosello - ESA EOP

Challenges for large-scale industrialisation of recurring platforms


  • Miguel Ángel Palacios Lázaro - Airbus Defence and Space

From Large to Mid-size satellites – OHB’s Standard Platforms Eos & Envoy’s latest activities


  • Ann-Theres Schulz - OHB

Reconfigurable NIMBUS Platform concept for multi-mission applications, and an example of future on board advanced space edge processing solution


  • Giovanni Campolo, Xavier Chebanier - Thales Alenia Space
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall L3)

Session: C.06.02 Advances in Calibration and Product Validation for Optical Sensors - PART 2

Accurate calibration and validation lie at the heart of any optical satellite sensor system and are of paramount importance for delivering the high-quality satellite products used, in an exponentially growing way, for operational environmental monitoring as well as for long-term climate research. Calibration may be achieved by dedicated on-board systems or by independent vicarious calibration targets, and the need for sensor intercalibration and collaboration among satellite operators has become essential for providing such products to the optical satellite user community. In this context, the need for common reference sources and protocols has been recognized, as well as mission scenarios such as dedicated tandem campaigns to achieve sensor intercalibration and bias assessments. Moreover, precise calibration is a prerequisite for high-quality geophysical Level-2 parameters retrieved from satellite data, and bringing independent validation of these together with sensor calibration is essential for generating accurate geophysical parameter maps at satellite scale. In the context of the CEOS Working Group on Calibration and Validation (WGCV) Land Product Validation (LPV) efforts, ESA has led and collaborated on many such activities, establishing the concept of ground-based Fiducial Reference Measurements and LPV Supersites as a firm reference for ground-based validation of satellite measurements. The aim is to provide well calibrated and validated Level-1 and Level-2 products to the satellite data user community at large, including their uncertainty budgets traceable to SI and cal/val protocols. This session will provide an overview of the latest state-of-the-art cal/val activities for optical sensors, including novel calibration approaches and new ground-based validation networks used for land product validation.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall L3)

Presentation: Good Practice Guidelines (GPG) for UAV-based Hyperspectral Sensing for Satellite Surface Reflectance Validation

Authors: Niall Origo, Dr Margaret Kalacska, Dr Juan Pablo Arroyo-Mora, Dr Raymond Soffer, Dr Magdalena Smigaj, Benjamin Brede, Raquel De Los Reyes, Dr Claas Koehler, Bringfried Pflug, Fabrizio Niro, Morven Sinclair, Dr Cindy Ong, Dr Ian Lau, Dr Robbie Ramsey, Dr Guy Byrne, Harry Morris, Dr Juha Suomalainen, Dr Lammert Kooistra, Dr Maximillian Brell, Miss Chloe Randall, Daniele Latini, Johannes Wilk, Dr Nina Raqueno, Dr Aaron Gerace, Dr Timothy Bauch, Dr Rehman Eon, Professor Jadu Dash, Dr Fernando Camacho, Dr Jorge Sanchez-Zapero, Dr Enrique Martinez, Luke Brown, Rosalinda Morrone, Bernardo Mota, Dr Alex Merrington, Dr Jens van der Zee, Dr Ilaria Petracca, Dr Medhavy Thankappan, Miss Rasma Ormane, Mr Matthew Scholes, Dr Eija Honkavaara, Dr Mark Broomhall, Valentina Boccia
Affiliations: National Physical Laboratory, McGill University, National Research Council, Wageningen University and Research, GFZ Potsdam, DLR, Serco, CSIRO, NERC, Geoscience Australia, NLS, GEO-K, Rochester Institute of Technology, University of Southampton, EOLAB, University of Salford, Starion Group, ESA
Unoccupied Aerial Vehicles (UAVs) and their payloads are continuing to experience fast-paced development, with more turn-key solutions being offered. In the realm of satellite validation, UAVs offer several notable benefits over conventional validation using manually operated instruments: faster measurement times, greater sampling, low impact on the target surface, and the ability to measure typically inaccessible targets (e.g. water, tall trees), among others. UAV payload technology has recently reached a point at which validation of surface reflectance products has become possible, with the advent of broad-range hyperspectral sensors (as demonstrated by Byrne et al. (2024) and Eon et al. (2023)). Unlike conventional surface reflectance measurement with manually operated spectrometers, for which numerous GPG documents exist, there is a lack of community-agreed standards for the optimal way to operate UAVs and their payloads for the purpose of satellite surface reflectance validation. This is particularly important because designing a validation campaign with a UAV involves numerous variables, such as the flight plan, supporting instruments, calibration activities, and post-processing, which require careful thought and planning to reduce the overall uncertainty. This contribution describes the recently published Good Practice Guidelines for UAV-based Surface Reflectance Validation, the first Committee on Earth Observation Satellites (CEOS)-endorsed community good practice document dedicated to UAV-mounted technology. Its focus is the validation of satellite-derived surface reflectance; however, we believe that many of the recommendations and requirements can be useful outside this scope. This presentation aims to highlight the existence of the protocol and demonstrate its use to the community.
To reach CEOS-FRM status, the guidelines require regular calibration between campaign seasons, regular dark signal collection, and calculation of the measurement uncertainty. Several recommendations are made in the GPG, including the “hyperspectral point cloud” concept, which removes the pixel duplication and omission involved in gridding, and the use of view zenith angle filters to create validation data more closely aligned with the satellite viewing geometry. In this contribution we will discuss the context for the creation of the guidelines; the international measurement campaigns (Surface Reflectance Intercomparison eXercise for VEGetation (SRIX4Veg) at Barrax, Spain and Calperum, Australia) that provided an input mechanism for the community; the thinking behind the stated requirements and recommendations; the interplay with other cal/val initiatives (e.g. Fiducial Reference Measurements); and a plan for future editions of this living document. Byrne, G. et al. (2024) Validating Digital Earth Australia NBART for the Landsat 9 Underfly of Landsat 8. Remote Sensing, 16(7): 1233. Eon, R. et al. (2023) Validation of Landsat-9 and Landsat-8 Surface Temperature and Reflectance during the Underfly Event. Remote Sensing, 15(13): 3370.
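The view-zenith-angle filtering recommended by the guidelines can be sketched in a few lines. This is an illustrative fragment only, not code from the GPG; the function name, tolerance, and toy numbers are assumptions:

```python
import numpy as np

def filter_by_vza(vza_deg, reflectance, sat_vza_deg, tol_deg=5.0):
    """Keep hyperspectral point-cloud samples whose view zenith angle is
    within tol_deg of the satellite's, then average per band."""
    mask = np.abs(vza_deg - sat_vza_deg) <= tol_deg
    if not mask.any():
        raise ValueError("no samples within the VZA tolerance")
    return reflectance[mask].mean(axis=0)

# Toy example: 4 point-cloud samples x 3 bands, satellite viewing at nadir.
vza = np.array([2.0, 3.5, 20.0, 35.0])           # per-sample view zenith angles
refl = np.array([[0.10, 0.20, 0.30],
                 [0.12, 0.22, 0.32],
                 [0.30, 0.40, 0.50],
                 [0.50, 0.60, 0.70]])
matched = filter_by_vza(vza, refl, sat_vza_deg=0.0)  # keeps only the near-nadir rows
```

Only the two near-nadir samples survive the filter, so the oblique, geometry-mismatched samples no longer bias the per-band mean.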
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall L3)

Presentation: Global Mapping of Vegetation Biophysical Variables at Decametric Resolution: Are we there yet?

Authors: Richard Fernandes, Dr. Luke Brown, Dr. Jadu Dash, Dr. Najib Djamai, Dr. Gang Hong, Dr. Courtney Meier, Mr. Harry Morris, Mr. Lixin Sun, Dr. Hao Tang
Affiliations: Canada Centre For Remote Sensing, Natural Resources Canada, Salford University, U.K., Southampton University, Batelle Inc., National Ecological Observatory Network, U.S.A., National Physical Laboratory, Singapore National University
Satellite monitoring of vegetation biophysical variables such as fAPAR, fraction of vegetation cover (fCOVER) and leaf area index (LAI) at decametric resolution is needed to provide essential agricultural, biodiversity and climate variables for the Global Climate Observing System (GCOS). Both the baseline Simplified Level 2 Prototype Processor (SL2P) and the updated Canada Centre for Remote Sensing version (SL2P-CCRS) can map these variables globally given Sentinel-2 or Landsat 8/9 imagery. In this study we test the hypothesis that RTM calibration is sufficient to meet GCOS requirements, as well as the alternate hypothesis that including empirical measurements can reduce error. SL2P and SL2P-CCRS both use neural networks, calibrated with radiative transfer model (RTM) simulations, to estimate a range of vegetation biophysical variables from either Sentinel-2 or Landsat 8/9 surface reflectance data; SL2P uses a single homogeneous RTM while SL2P-CCRS uses land-cover-dependent heterogeneous RTMs. Both algorithms were implemented in the open-source Landscape Evolution and Forecasting toolbox (https://github.com/richardfernandes/LEAF-Toolbox) and applied to retrieve fCOVER, fAPAR and LAI over 400 plots across all vegetated biomes in North America. Algorithms were validated following newly developed Committee on Earth Observation Satellites (CEOS) good practices using over 10,000 match-ups between satellite-based estimates and fiducial reference measurements, spanning 8 years. Novel findings include: 1. Site-specific woody area corrections to fiducial reference measurements are essential to quantify algorithm bias. 2. The ESA SNAP implementation of SL2P most likely has implementation errors impacting LAI retrieval and should be revised. 3. SL2P-CCRS reduces LAI bias compared to SL2P over forests by ~50%, but overall uncertainty remains similar to SL2P. 4. SL2P-CCRS satisfies GCOS temporal stability requirements but slightly exceeds uncertainty requirements due to bias. 5. Empirical bias correction enables SL2P-CCRS to meet thematic uncertainty requirements for the majority of reference sites. This study is the first to make use of updated CEOS validation good practices that satisfy the CEOS analysis-ready data requirement by quantifying both accuracy and uncertainty as a function of the mapped value. The study also provides a new approach to quantifying temporal stability. The combination of well validated algorithms and bias correction using fiducial reference measurements leads us to suggest that "we are almost there" in terms of meeting user requirements for systematic global mapping of fCOVER, fAPAR and LAI at decametric resolution. However, there is still a need for continuing and expanding in-situ reference networks and for incremental refinement of algorithm calibration. Additionally, there is a need to validate and, if necessary, calibrate algorithms for retrieving other vegetation biophysical parameters using newly available satellite datasets representing both active and passive sensors at sub-decametric resolution.
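The simulation-calibrated retrieval idea behind SL2P-style processors can be illustrated with a toy lookup-table inversion. Note that the canopy model below is a simple Beer-Lambert gap-fraction stand-in, not PROSAIL or the actual SL2P neural network, and the extinction coefficient, endmember reflectances, and noise-free setup are assumptions for illustration:

```python
import numpy as np

def toy_canopy_reflectance(lai, k=0.5, soil=0.10, leaf=0.45):
    """Toy Beer-Lambert canopy: reflectance mixes soil and leaf signal
    according to gap fraction exp(-k * LAI). Stand-in for a real RTM."""
    gap = np.exp(-k * lai)
    return gap * soil + (1.0 - gap) * leaf

# "Calibration": simulate a dense table of (reflectance, LAI) pairs with the model.
lai_grid = np.linspace(0.0, 8.0, 801)
refl_grid = toy_canopy_reflectance(lai_grid)

def retrieve_lai(observed_refl):
    """Invert by nearest simulated reflectance (minimum-cost table lookup)."""
    return lai_grid[np.argmin(np.abs(refl_grid - observed_refl))]

obs = toy_canopy_reflectance(3.0)   # pretend this came from a satellite pixel
lai_hat = retrieve_lai(obs)         # recovers an LAI close to 3.0
```

SL2P replaces the lookup table with a neural network trained on the RTM simulations, which amounts to learning a smooth version of the same inverse mapping.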
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall L3)

Presentation: NASA-ESA Collaboration for Cal/Val of Sentinel-3 SLSTR and Earthcare MSI using vicarious buoy measurements

Authors: Dr Steffen Dransfeld, Dr Simon Hook, Timon Hummel, Dr Silvia Scifoni, Dr Vasileios Tzallas, Claire Henocq, Mr Simone Bucci, Mr Robert Freepartner, Dr Nimrod Camron, Dr Rene Preusker, Dr Anja Hünerbein, Dr Sebastian Bley, Dr Nicole Docter, Dr Nils Madenach
Affiliations: ESA, JPL-NASA, ACRI-ST, Serco, Raytheon, FUB, TROPOS
The goal of NASA’s Earth Science Research Program is to utilize global measurements in order to understand the Earth system and its interactions as steps toward the prediction of Earth system behavior. NASA has identified the provision of well-calibrated, multiyear and multi-satellite data and product series as a key requirement for meeting this goal. In this framework, international cooperation agreements have been signed by NASA and ESA to advance our understanding of the Earth System and climate change. In order to help meet this goal we will validate the TIR data from the SLSTR instruments on the Sentinel-3 A/B satellites and the MSI instrument on the EarthCARE satellite, launched in 2024, using in situ measurements that are already used to validate TIR data and products from the MODIS/VIIRS instruments from NASA/NOAA to permit their inclusion in the existing NASA LST&E ESDR. The SLSTR instruments provide long-term, consistent sets of global LST measurements. SLSTR provides continuity through a series of instruments back to 1991: Along Track Scanning Radiometer (ATSR) on ERS-1, ATSR-2 on ERS-2 and the Advanced Along Track Scanning Radiometer (AATSR) on Envisat. The MSI onboard EarthCARE is designed to provide contextual scene information that enables the extension of atmospheric profiles in 3D, as well as to provide aerosol information. It plays an important role in the mission by collecting co-registered observations alongside EarthCARE's active instruments, enhancing the understanding of cloud-aerosol-radiation interactions and contributing to improved knowledge of the Earth's radiative budget. In this study we will use data from the Lake Tahoe and Salton Sea automated validation sites to calibrate and validate the mid and TIR data in S3-SLSTR and recently launched EarthCARE-MSI instruments ensuring the data can be used with other similar calibrated and validated NASA datasets in the NASA LST&E ESDR. 
The primary objectives of this effort are: • Calibrate and validate the at-sensor radiance data from the SLSTR/MSI instruments and compare them with the equivalent data from the MODIS/VIIRS instruments, using an independent ground dataset, to enable seamless calibration across data from all instruments. • Provide this information to the NASA LST&E ESDR Project to enable them to include SLSTR/MSI data alongside the MODIS/VIIRS data currently used to generate the ESDR. Level-1 data from both instruments are routinely extracted by the OPTical Mission Performance Cluster (OPT-MPC) and EarthCARE Data Innovation & Science Cluster (DISC) and compared against the forward-propagated buoy measurements at surface level in order to assess vicariously any radiometric biases and sensor stability at top-of-atmosphere. In a wider scope, this activity is strongly linked to the CEOS activity of developing the so-called TIRCALNet, consisting of several sites distributed around the globe and allowing top-of-atmosphere calibration for thermal sensors based on a reference network, similar to the CEOS RadCalNet initiative for VIS/SWIR sensors. This presentation will provide details about the overall collaboration set-up between NASA and ESA and a synthesis of the calibration results thus far. It will also link to the TIRCALNet activity and give an overview of that initiative.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall L3)

Presentation: Sentinel-2C data quality status and tandem phase analyses

Authors: Silvia Enache, Florian Poustomis, Sebastien Clerc, Bruno Lafrance, Bahjat Alhammoud, Emmanuel Hillairet, Louis Rivoire, David Marshall, Alexis Deru, Anais Vincensini, Jérôme Louis, Avi Pertiwi, Raquel De Los Reyes, Francesco Pignatale, Jérôme BRUNIQUEL, Rosalinda Morrone, Ferran Gascon, Valentina Boccia
Affiliations: CS Group, ACRI-ST, German Aerospace Centre, Remote Sensing Technology Institute, Airbus Defense and Space, Telespazio France – A Leonardo / Thales Company, Telespazio Germany – A Leonardo / Thales Company, Starion for ESA ESRIN, European Space Agency, ESRIN
For several years now, the Copernicus Sentinel-2 satellite mission, with its Sentinel-2A and Sentinel-2B units, has provided a massive quantitative and qualitative resource to the Earth Observation community. The third unit of the Sentinel-2 family was launched on September 5th, 2024, and completed its commissioning phase on December 19th, 2024. This presentation will report on the initial calibration and performance assessment for Level-1C and Level-2A images, in terms of geometry, radiometry and image quality. Overall, the satellite shows very good performance. A small radiometric bias of the order of 1-2% is observed for some spectral bands with respect to Sentinel-2A, which could be corrected through a vicarious gain adjustment. Thanks to careful geometric calibration, the geometric performance is in line with specifications. Level-2A product performance is in line with that of Sentinel-2A and B. Compared to the two other units (i.e. Sentinel-2A and B), the commissioning phase for Sentinel-2C involved certain specific activities. The first concerns a series of specific Sun calibrations acquired after yaw rotations in order to cover the full range of yaw angles encountered during the year. This dataset will be used to refine the characterization of the Sun diffuser and reduce calibration artefacts. The second operation concerns Moon acquisitions for lunar radiometric calibration. The first outcomes of this analysis will also be reported. In addition, a period of operation in close tandem configuration with Sentinel-2A was carried out over four orbital cycles (1 November to mid-December 2024). The formation resulted in a 30-second interval between Sentinel-2A and Sentinel-2C acquisitions, which allows a direct comparison of the images. This enables a very detailed analysis of geometric, spectral and radiometric differences between the satellites, which will be reported in this presentation.
After this tandem phase, Sentinel-2A will drift away from Sentinel-2C, and the separation between the satellites will slowly increase. Acquisitions during this period can be used to investigate the impact of different viewing angles on Level-1 and Level-2 products. The presentation will report on the outcomes of these analyses.
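The vicarious gain adjustment suggested for the 1-2% radiometric bias can be sketched as a per-band ratio estimate over tandem matchups. This is a schematic illustration with synthetic numbers, not the operational Sentinel-2 calibration procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tandem matchups: a reference-unit radiance (e.g. Sentinel-2A) and a
# hypothetical test-unit band measuring ~1.5% low, with small measurement noise.
ref = rng.uniform(50.0, 200.0, size=500)                  # reference TOA radiances
test = ref / 1.015 * (1.0 + rng.normal(0.0, 0.002, 500))  # biased + noisy band

# Robust per-band gain estimate: median of the matchup ratios.
gain = np.median(ref / test)        # recovers ~1.015
corrected = gain * test             # bias largely removed

residual_bias = np.median(corrected / ref) - 1.0
```

Using the median rather than the mean keeps the estimate robust to a few outlier matchups (clouds, misregistration) without any explicit screening.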
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall L3)

Presentation: Validation Of Winter Wheat Biophysical Parameters Retrieved From Real and Simulated Agronomic Hyperspectral Data

Authors: Rasma Ormane, Harry Morris, Niall Origo
Affiliations: National Physical Laboratory
One third of the world's population relies on wheat as a staple food. Therefore, mapping and monitoring the health and distribution of wheat crops is crucial for ensuring global food security. Monitoring parameters such as leaf area index (LAI) and chlorophyll concentration is particularly useful for facilitating and enhancing agricultural sustainability and productivity. Precision farming is an agricultural management strategy that utilises technologies like Unoccupied Aerial Vehicles (UAVs) and satellites to collect real-time crop data and target resource use (fertilizer, water, etc.) based on those data. For instance, targeting nitrogen-based fertilizers at areas with reduced canopy chlorophyll content is likely to yield increases in productivity in those areas. New and forthcoming satellite missions (CHIME, PRISMA, etc.) will provide hyperspectral data with broad spectral coverage (400 nm - 2500 nm). These data offer information on the chlorophyll content of plants, since changes in chlorophyll affect the 520 nm - 705 nm wavelength range. Radiative Transfer Models (RTMs) offer a route to exploiting this by modelling the relationship between vegetation parameters such as chlorophyll content and canopy hyperspectral reflectance. While simple 1D RTMs like PROSAIL are widely used due to their manageable parameter scope, they rely on assumptions about canopy structure and light interactions. Testing whether these assumptions are valid is crucial for ensuring valid data interpretation from hyperspectral missions. Librat is an example of a general-purpose RTM that uses Monte Carlo ray tracing to estimate the illumination of an object within a scene and translate it into per-pixel hyperspectral reflectance. Librat can model structurally complex scenes, such as crop canopies, and simulate the data retrieved by hyperspectral canopy systems.
This study combines field and simulation studies to address this need, measuring hyperspectral reflectance in winter wheat via UAVs and validating against in situ LAI and chlorophyll measurements. A realistic three-dimensional wheat canopy architecture, based on winter wheat at Rothamsted Research fields (Harpenden, England), is then modelled using Librat to simulate hyperspectral reflectance. The simulated data are inverted using PROSAIL, with leaf optical properties modelled via PROSPECT. Outputs are compared with the real UAV and ground-based measurements, providing insights into RTM assumptions and assessing their fitness for purpose in precision farming applications.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall L3)

Presentation: ICOS offers great potential as in-situ network for Land Product Validation.

Authors: Bert Gielen, dr Maarten Op de Beeck, dr Simone Sabbatini, Giacomo Nicolini, dr Arne Iserbyt, Dr Marie Weiss, dr Jasdeep Anand, dr Rocio Barrio Guillo, Darren Ghent, Dario Papale
Affiliations: Universiteit Antwerpen, CMCC, INRAE, University of Leicester, CNR
The flux tower network of the Integrated Carbon Observation System (ICOS) offers excellent opportunities to serve as an in-situ network for the validation of satellite measurements. ICOS is a European Research Infrastructure (RI) that has been established and is funded by the European Member States with a long-term perspective (>20 years). It provides high-quality greenhouse gas data collected according to standard protocols and distributes them in real time with open data access, according to FAIR principles, through its dedicated data portal. The terrestrial component of ICOS consists of a distributed network of flux towers across Europe covering the most representative ecosystem types (forests, grasslands, croplands, mires, lakes and urban areas). These stations provide a broad, high-standard ecological assessment of the target ecosystem: greenhouse gas fluxes are measured in combination with a large set of meteorological variables and vegetation-related parameters such as leaf area index, biomass, species composition and vegetation structure. All measurements are standardized and collected according to detailed field protocols. The network is coordinated by the Ecosystem Thematic Centre (ETC), which is also responsible for centralized data processing and near-real-time data elaboration and publication to the ICOS Carbon Portal, where they are freely accessible to data users. The network is currently fully operational, with >100 stations delivering data across 16 countries. This presentation focuses on the potential of RIs like ICOS to serve as a reference in-situ network for satellite product validation, highlighting how the cal/val community can not only benefit from these observational networks but also contribute to their development, in particular through the design, discussion and implementation of specific protocols to match the relevant requirements.
Specific examples will be provided on Land Surface Temperature, fAPAR, SIF and Terrestrial Laser Scanning.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.11/0.12)

Session: E.02.02 Advancing EO Capabilities for Security: Innovation, Adoption, and Strategic Partnerships

This invited session will bring together security stakeholders currently collaborating with ESA to showcase needs and recent developments in EO-based capabilities for security. Participants will share perspectives on how Earth Observation can address evolving security challenges and support decision-making through innovations in geospatial technology and data science.

The format includes presentations followed by a Q&A session, with the aim to:
- Present R&D initiatives and funding opportunities in the field of security and defence
- Raise awareness of recent advancements in geospatial and open-source intelligence
- Collect insights on priority development areas to foster EO uptake within the security domain.

Presentations and Speakers:


EDA Captech Space Initiative


  • Eleni Patouni - Project Officer, European Defence Agency

New NATO Commercial Space Strategy


  • Julie Biel - NATO Defence Investment Division, Space Sector

The Strategic Research Agenda for the evolution of the Copernicus Security Services


  • Evaldas Kristopaitis - Project Officer, JRC

Earth Observation for the Detection and Analysis of Illicit and High-Risk Activities


  • Coen Bussink - Geospatial Analysis and Programme Delivery Section, United Nations Office on Drugs and Crime (UNODC)

The increasing role of EO-based information for evidence at the International Criminal Court, including R&D activities with ESA


  • Christian Riesner - Head of the Geoint Unit, ICC-OTP

The role of Earth Observation System to enhance the fight against Environmental crime at INTERPOL


  • Jose Adrian Sanchez Romero - Operations Coordinator Forestry, INTERPOL
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall E2)

Session: B.04.04 Spaceborne data for the analysis of Natural Hazards and AI: new insights from Artificial Intelligence technologies and recent missions - PART 1

Natural hazards can be defined as “all atmospheric, hydrologic, geologic, and wildfire phenomena that, because of their location, severity, and frequency, have the potential to affect humans and their environment”. In 2023, the Emergency Events Database (EM-DAT) recorded a total of 399 disasters related to natural hazards. These events resulted in 86,473 fatalities and affected 93.1 million people. Therefore, analysing natural hazards, with a particular focus on mapping and monitoring, is a key activity for land management and risk reduction policies, as pledged by many national governments and international institutions. Compared to traditional field-based surveys, spaceborne remote sensing technologies have proven to be of utmost importance due to their high spatial and temporal coverage and reduced costs.
The current scenario of large volumes of freely accessible data makes it possible to retrieve relevant information as well as to develop new techniques and methodologies for the investigation, characterization, monitoring, and modeling of hazards. Alongside remote sensing, Artificial Intelligence (AI) and Machine Learning (ML) now play an important role in the analysis of large EO datasets. These approaches have widely demonstrated their suitability in many scientific fields, being characterized by high accuracy and specific advantages for different applications.
This session is intended to collect recent and promising advances in the use of AI/ML for the processing of (optical, multispectral, hyperspectral, radar and thermal) satellite data and the analysis of geohazards such as landslides, earthquakes, subsidence and volcanic eruptions, and of hydrometeorological hazards such as wildfires, tsunamis, floods, storms, avalanches, etc. The outcome of the session will provide a very insightful state of the art on current and future EO capabilities for studying natural hazards and supporting risk reduction policies.

Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall E2)

Presentation: Deep Learning Approaches to detecting Volcano Deformation in Sentinel-1 InSAR data

Authors: Juliet Biggs, Nantheera Anantrasirichai, Robert Popescu, Fabien Albino, Milan Lazecky, Yasser Maghsoudi
Affiliations: University of Bristol, University of Grenoble Alpes, University of Leeds, University of Exeter
Satellites systematically acquire imagery that can be used for volcano monitoring, characterising magmatic systems and potentially forecasting eruptions on a global scale. However, exploiting this large dataset is limited by the need for manual processing and inspection, meaning timely dissemination of information is challenging. Many new machine learning algorithms and architectures have been proposed for detecting and locating volcano deformation. Here we focus on deep learning methods developed by the COMET group and their application to the COMET-LiCSAR database. We use a 3-stage approach to analyse the large dataset of satellite imagery available from Sentinel-1. The first step is the automatic generation of InSAR images using the COMET-LiCSAR automated processing system (Lazecky et al., 2020, 2021). The next step is the automatic analysis of the processed images, which requires a machine learning approach (Anantrasirichai et al., 2018, 2019a, 2019b). Finally, expert reviews are needed to identify the true positives and characterise the signal at each volcano, making use of any external information available (Biggs et al., 2023). First, we use a supervised Convolutional Neural Network to distinguish between deformation and atmospheric artefacts in individual interferograms. We use a transfer-learning strategy to fine-tune the AlexNet architecture using a combination of real and synthetic data (Anantrasirichai et al., 2018, 2019a, 2019b). This method is now applied in real time to flag volcano deformation in automatically processed Sentinel-1 imagery through the COMET volcano deformation portal: https://comet.nerc.ac.uk/comet-volcano-portal/. It can also be applied retrospectively to create a catalogue of past events, and we summarise the most recent results. However, a drawback of supervised classification is that real positive signals are rare, and the use of synthetic training data limits the detection algorithms to preconceived models.
Deformation patterns that do not conform to the ‘magma chamber’ paradigm may be missed. Therefore, we have developed an unsupervised machine learning framework to detect volcanic ground deformation as an anomaly in unwrapped interferograms (Popescu et al., 2024; Popescu et al., in review). We test three different state-of-the-art architectures, one convolutional neural network (PaDiM, Patch Distribution Modeling) and two generative models (GANomaly and DDPM), and compare performance against the existing supervised learning method. Our final anomaly detection outperforms the supervised learning, particularly where the characteristics of deformation are unknown. Our framework can thus be used to identify deformation at volcanoes without needing prior knowledge of the deformation patterns present there.
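The synthetic-data side of a training strategy like the one described above can be illustrated with a point-source ("Mogi") deformation pattern wrapped into interferogram phase. This sketch is not the COMET training-data generator; the source parameters, grid, and the simplified vertical-only line-of-sight projection are assumptions:

```python
import numpy as np

def mogi_uz(x, y, depth, dV, nu=0.25):
    """Surface vertical displacement of a Mogi point pressure source
    (volume change dV at the given depth), standard half-space formulation."""
    r2 = x**2 + y**2
    return (1.0 - nu) * dV / np.pi * depth / (r2 + depth**2) ** 1.5

# 100 x 100 grid (~50 m pixels); source 2 km deep, dV = 1e6 m^3 inflation.
xv = np.linspace(-2500.0, 2500.0, 100)
x, y = np.meshgrid(xv, xv)
uz = mogi_uz(x, y, depth=2000.0, dV=1.0e6)   # peaks at ~6 cm uplift

# Project to line of sight (vertical term only, 39 deg incidence assumed) and
# wrap into interferometric phase for C-band (Sentinel-1 wavelength 5.55 cm).
wavelength = 0.0555
u_los = uz * np.cos(np.radians(39.0))
phase = (-4.0 * np.pi / wavelength) * u_los
wrapped = np.angle(np.exp(1j * phase))       # wrapped to (-pi, pi]
```

Labelled batches of such synthetic fringes, mixed with real atmospheric examples, are the kind of input a supervised classifier can be fine-tuned on, which is also why detections stay tied to the assumed source model.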
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall E2)

Presentation: Assessing Rhineland Coalfields Mining Impacts With Remote Sensing and ML: EGMS-Based Susceptibility Mapping and Risk Analysis

Authors: Dibakar Kamalini Ritushree, Dr. Marzieh Baes, Maoqi Liu, Prof. Dr. Mahdi Motagh
Affiliations: Deutsches Geoforschungs Zentrum, Leibniz Universität Hannover, School of Resources & Environment and Safety Engineering
The Rhineland coalfields are a major lignite mining region in Germany, contributing significantly to energy production while posing environmental and geological challenges. Besides carbon emissions, mining has well-known consequences, including subsidence and fault reactivation, making it essential to study risks and potential damage to ensure safety and sustainable land use in the region. This study examines the potential susceptibility and infrastructure risks caused by mining-induced subsidence in the Rhineland coalfields. An optimal model for susceptibility mapping was selected from machine learning and rule-based approaches, including Random Forest, Naïve Bayes, and Fuzzy Logic, to exploit their strengths in managing complex relationships, probabilistic classifications, and geospatial data uncertainties. Geological, groundwater, lithology, land use/land cover, and elevation data were employed for training, facilitating the identification of areas with potential susceptibility to subsidence. A product of the European Ground Motion Service (EGMS), providing reliable ground motion measurements, was used for validation. The results indicate that Random Forest outperformed the other methods in mapping susceptible areas. Based on the susceptibility mapping, areas categorized as very high and high susceptibility were subjected to further experimentation to determine the potential damage to infrastructure. This was assessed using EGMS data to analyse angular distortion, which indicates tilting or bending, and horizontal strain, which identifies zones of sagging and hogging in structures caused by uneven ground movement. The results showed angular distortion (β) of 1/150 and horizontal strain (ε) reaching up to 0.01% along the fault lines, critically influencing structural integrity. These results were validated with ground truth data acquired during fieldwork.
Furthermore, historical leveling data recorded since the 1960s indicate vertical deformation of around 4 meters in regions previously impacted by mining operations, underscoring the long-term impacts of mining on ground stability. These findings suggest that susceptibility mapping and risk identification are crucial for understanding and addressing the effects of mining-induced subsidence and geological instability. The results offer valuable insights into ground deformation patterns and support the classification of risk zones, providing policymakers with essential tools for implementing effective mitigation measures and sustainable land-use strategies.
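The angular distortion and horizontal strain measures used in such risk analyses can be computed directly from differential displacements between adjacent monitoring points along a structure. A minimal sketch, where the sample values are invented for illustration (β = differential settlement / span, ε = differential horizontal movement / span):

```python
import numpy as np

def angular_distortion(settlement_m, span_m):
    """Angular distortion beta between adjacent points: differential
    settlement divided by the horizontal span between them."""
    return np.abs(np.diff(settlement_m)) / span_m

def horizontal_strain(horizontal_disp_m, span_m):
    """Horizontal strain epsilon: differential horizontal displacement
    over the span (positive = extension, negative = compression)."""
    return np.diff(horizontal_disp_m) / span_m

# Invented example: four points spaced 10 m apart along a building facade,
# with displacements as a ground-motion time series might report them.
span = 10.0
settlement = np.array([0.000, -0.020, -0.085, -0.100])   # metres (downward)
horiz = np.array([0.000, 0.0005, 0.0015, 0.0010])        # metres

beta = angular_distortion(settlement, span)    # max here is 0.0065, ~1/154
eps = horizontal_strain(horiz, span) * 100.0   # in percent
```

Comparing the resulting β against serviceability thresholds (e.g. of the order of 1/150) is what turns per-point deformation measurements into a structural risk classification.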
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall E2)

Presentation: Predicting Arsenic Contamination Hotspots in Abandoned River Bends in Bangladesh: A Machine Learning Approach

Authors: Julian Peter Biesheuvel, Dr. Marinus Eric Donselaar, Dr. Devanita Ghosh, Dr. Roderik Lindenbergh
Affiliations: Department of Geoscience and Remote Sensing, Delft University of Technology, Department of Geo-science and Engineering, Delft University of Technology, Department of Water Management, Delft University of Technology
Summary
Arsenic contamination in shallow aquifers of Holocene alluvial basins is a serious health risk affecting millions of people [1]. Detection of arsenic hotspots is a slow and tedious process based on the analysis of groundwater samples. This study improves arsenic risk prediction by incorporating geomorphological features such as oxbow lakes and clay plugs into a machine learning (ML) approach. Advances in remote sensing [2], often combined with ML, enable the efficient detection of these and other proxy features, significantly reducing reliance on labour-intensive fieldwork. By combining these features with environmental and demographic data, the approach provides more accurate and cost-effective risk assessments, enabling better-targeted interventions in vulnerable regions and supporting proactive environmental monitoring.
Introduction
Traditional geospatial interpolation methods for arsenic contamination mapping, such as Kriging and Inverse Distance Weighting, often introduce “bull’s-eye” artefacts, misrepresenting true arsenic distributions. Machine learning approaches such as Random Forest enhance predictions by, for example, incorporating soil types, but may still overlook crucial geomorphological features such as oxbow lakes and point bars, which act as arsenic repositories through stratigraphic trapping and groundwater diffusion processes in Holocene alluvial basins [3–5]. By integrating geomorphological insights with machine learning, as proposed by [6], this study enhances arsenic risk prediction, utilising a three-stage process: delineating oxbow lakes and clay plugs from satellite data, predicting arsenic concentrations based on key variables, and performing risk analysis. This approach offers more accurate risk assessments, guiding targeted mitigation strategies.
Materials and Methods
Oxbow Lake and Clay-plug Extraction
Oxbow lakes and clay plugs were identified using machine learning and algorithmic methods applied to Sentinel-1 SAR and Landsat-8 True Colour imagery via Google Earth Engine (GEE). Water bodies in the Ganges-Brahmaputra Basin were detected with Sentinel-1 VV/VH polarisation bands and the mean Normalized Difference Water Index (mNDWI) from Landsat-8. Oxbow lakes were extracted using a two-step process: rivers were excluded with Global Surface Water Explorer data, and a so-called circularity index was applied to the remaining water bodies. Clay plugs, characterised by crescent shapes and distinct perpendicular agriculture plot orientations, were detected with a You-Only-Look-Once (YOLO) deep learning model. The YOLO model is utilised for its ability to detect complex spatial patterns, such as the distinctive shapes of clay plugs.
Arsenic Concentration Prediction
Arsenic concentrations in Bangladesh were predicted using a Random Forest (RF) model, with 17,000 arsenic aquifer concentration values as ground-truth data. Predictor variables so far include precipitation, temperature, NDVI (a proxy for microbial activity in oxbow lakes and clay plugs), local terrain relief, and population density derived from satellite and climatological datasets. RF models are employed for their interpretability in understanding the relationship between features and arsenic concentrations. The model's performance was evaluated with cross-validation and Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) metrics. The predicted arsenic concentrations were validated by testing the model on samples from the original dataset that were not used during training or validation. Additionally, the model was tested on a smaller, independent dataset from a similar geomorphological environment in India that had not been seen during the training or validation phases, ensuring the model's robustness and generalizability.
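The two screening metrics used in the extraction step above can be sketched as follows. The abstract does not give the exact formulas or thresholds, so the standard definitions are assumed: mNDWI = (green − SWIR) / (green + SWIR), and a circularity index of 4·π·area / perimeter², which is 1.0 for a perfect circle.

```python
import math

# Hedged sketch of the water-body screening metrics (assumed standard forms,
# not the authors' exact implementation or thresholds).

def mndwi(green, swir):
    """Modified Normalized Difference Water Index; values > 0 suggest open water."""
    return (green - swir) / (green + swir)

def circularity(area, perimeter):
    """Shape compactness 4*pi*A/P^2: 1.0 for a circle, near 0 for narrow crescents."""
    return 4.0 * math.pi * area / perimeter ** 2

# A round pond scores near 1, while a narrow crescent-shaped oxbow scores far
# lower, so thresholding this index separates oxbow lakes from other water bodies.
pond = circularity(area=7850.0, perimeter=314.0)     # roughly a 50 m-radius circle
oxbow = circularity(area=12000.0, perimeter=2400.0)  # elongated crescent
print(round(pond, 2), round(oxbow, 2))
```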
Arsenic Risk Prediction
Arsenic risk was classified based on predicted concentrations, relative terrain height, and local population density. High population density and predicted high arsenic concentrations indicate immediate risk. In contrast, low population density and high arsenic concentrations signal potential future risks, especially due to climate change or increased organic matter. Risk levels were assigned with rule-based or unsupervised machine learning clustering methods.
Initial and Expected Results
Initial results focus on successfully applying YOLO to detect and delineate geomorphological features such as clay plugs from satellite imagery. Early tests have demonstrated the model’s ability to identify crescent-shaped landforms with varying degrees of accuracy depending on the spatial and temporal resolution of the input data. Additionally, oxbow lakes have been successfully extracted using SAR data and the mNDWI method. The next steps for the YOLO method involve expanding the approach to include false colour imagery, which may provide more contrast and better highlight the crescent-shaped features associated with clay plugs. Additionally, the model will be trained with a larger and more diverse dataset, including a dedicated validation and test set, to ensure robust evaluation and avoid overfitting. These improvements are expected to enhance the accuracy and generalizability of the YOLO-based detection of geomorphological features in satellite imagery.
Conclusions
This research introduces a more precise approach for identifying arsenic hotspots using a limited set of proxy features, eliminating the need for costly and time-intensive fieldwork. Once trained, the model can be fine-tuned and applied to other regions with comparable geomorphological characteristics, making it adaptable and scalable for global applications.
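A rule-based variant of the risk assignment described above could look like the sketch below. The study's actual thresholds and class labels are not specified in the abstract, so the 10 µg/L WHO drinking-water guideline and the population-density cut-off are labelled assumptions.

```python
# Hypothetical rule-based risk classifier following the logic in the text.
# WHO_GUIDELINE_UGL is the WHO guideline value for arsenic in drinking water;
# the density cut-off is an assumption for illustration only.

WHO_GUIDELINE_UGL = 10.0   # ug/L
DENSE_PER_KM2 = 500.0      # assumed population-density cut-off [persons/km^2]

def risk_class(predicted_as_ugl, pop_density_km2):
    high_as = predicted_as_ugl > WHO_GUIDELINE_UGL
    if high_as and pop_density_km2 > DENSE_PER_KM2:
        return "immediate"          # many people exposed now
    if high_as:
        return "potential-future"   # hotspot, but sparsely populated today
    return "low"

print(risk_class(45.0, 1200.0))   # densely populated hotspot -> immediate
print(risk_class(45.0, 50.0))     # sparse population -> potential-future
```

The abstract also mentions unsupervised clustering as an alternative; the rule-based form shown here is simply the more transparent of the two options.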
Integrating geomorphological features such as oxbow lakes and point bars with machine learning models improves arsenic risk prediction accuracy. This approach enables fast and accurate arsenic monitoring and supports targeted mitigation strategies safeguarding public health, particularly in vulnerable regions. Future work will focus on refining model parameters, expanding datasets, and incorporating additional environmental factors to enhance the model's robustness and applicability.
Impact
This research presents a novel, cost-effective approach to mapping arsenic contamination by leveraging machine learning and geomorphological features. Utilising proxy data reduces the need for extensive fieldwork, making arsenic risk assessment more accessible, especially in resource-limited regions. The integration of geomorphological insights enhances the accuracy of arsenic predictions, which can lead to better-targeted interventions and policy decisions, particularly in areas vulnerable to arsenic contamination, such as the Ganges-Brahmaputra Basin. This work contributes to the growing field of environmental monitoring, providing proactive risk management in the context of water quality and public health.
Acknowledgements
The authors thank the Environment and Population Research Center (EPRC), Dhaka, Bangladesh, for providing the dataset containing arsenic concentrations in Bangladesh.
References
[1] Rahaman, M. S., Mise, N., & Ichihara, S. (2022). Arsenic contamination in the food chain in Bangladesh: A review on health hazards, socioeconomic impacts, and implications. Hygiene and Environmental Health Advances, 2, 100004. https://doi.org/10.1016/j.heha.2022.100004
[2] Agarwal, V., Kumar, M., Panday, D. P., Zang, J., & Munoz-Arriola, F. (2024). Unlocking the potential of remote sensing for arsenic contamination detection and management: Challenges and perspectives. Current Opinion in Environmental Science & Health, 100578. https://doi.org/10.1016/j.coesh.2024.100578
[3] Kumar, S., Ghosh, D., Donselaar, M., Burgers, F., & Ghosh, A. (2021). Clay-plug sediment as the locus of arsenic pollution in Holocene alluvial-plain aquifers. Catena, 202, 105255. https://doi.org/10.1016/j.catena.2021.105255
[4] Donselaar, M. E., Bhatt, A. G., & Ghosh, A. K. (2016). On the relation between fluvial-deltaic flood basin geomorphology and the widespread occurrence of arsenic pollution in shallow aquifers. The Science of the Total Environment, 574, 901–913. https://doi.org/10.1016/j.scitotenv.2016.09.074
[5] Ghosh, D., Kumar, S., Donselaar, M. E., Corroto, C., & Ghosh, A. K. (2020). Organic carbon transport model of abandoned river channels: A motif for floodplain geomorphology influencing biogeochemical swaying of arsenic. The Science of the Total Environment, 762, 144400. https://doi.org/10.1016/j.scitotenv.2020.144400
[6] Donselaar, M. E., Khanam, S., Ghosh, A. K., Corroto, C., & Ghosh, D. (2024). Machine-learning approach for identifying arsenic-contamination hot spots: The search for the needle in the haystack. ACS ES&T Water, 4(8), 3110–3114. https://doi.org/10.1021/acsestwater.4c00422
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall E2)

Presentation: A Deep Learning-based framework for building damage assessment using multi-source satellite and geospatial data: demonstration on the 2023 Kahramanmaraş Earthquake

Authors: Luigi Russo, Dr Deodato Tapete, Silvia Liberata Ullo, Paolo Gamba
Affiliations: University of Pavia, Italian Space Agency (ASI), University of Sannio
Natural disasters, such as earthquakes, demand rapid and accurate responses to minimize human and economic losses. Remote Sensing (RS) data is invaluable for assessing building damage after a disaster, provided that key factors such as sensor type, spatial resolution, revisit time, and access to pre-disaster data and maps are carefully considered [1]. Indeed, the latest advancements in RS technology continue to enhance the ability to monitor large areas at high resolution, further improving disaster management efforts [2]. These capabilities are particularly critical for quick yet precise damage assessment following an earthquake, enabling efficient disaster response, humanitarian aid, and informed recovery planning [3]. Deep learning (DL) has emerged as a cutting-edge technology in this context, enabling automation of traditionally visual tasks for damage assessment. Several studies in this field have leveraged DL approaches for assessing different levels of damage in built-up areas. For instance, [4] introduces a two-stage model, BDANet, pre-trained on the xBD dataset [5] (a benchmark of optical images for building damage assessment) and fine-tuned with high-resolution WorldView-2 imagery to evaluate damage caused by the February 2023 earthquakes in Islahiye, Turkey. This approach emphasizes the value of pre-training on diverse datasets and fine-tuning with localized data, achieving accurate damage detection, population impact estimation using WorldPop data, and rapid disaster response insights. Similarly, [6] uses the xBD dataset to develop a DL pipeline, combining a semantic segmentation Convolutional Neural Network (CNN) for building identification with a classification CNN for assessing damage levels. This approach effectively addresses post-disaster damage caused by various natural disasters.
All of these approaches, however, rely solely on very high-resolution (VHR) optical data, which are limited by their dependence on cloud-free conditions and reduced availability during disasters. In contrast, this study proposes an innovative approach based exclusively on VHR Synthetic Aperture Radar (SAR) satellite imagery, offering resilience to adverse weather conditions, high-frequency revisit times, and the ability to capture critical data even in emergency situations. Furthermore, the approach is tested using SAR imagery that is made available in support of institutions and emergency management services, and in the framework of international cooperation for disaster mapping and response. In particular, this work focuses on the 2023 Kahramanmaraş earthquake in Türkiye, implementing a multi-mission data acquisition plan that integrates SAR imagery from the Italian Space Agency (ASI)’s COSMO-SkyMed (CSK) constellation with complementary geospatial datasets, including:
• the Global Earthquake Model (GEM) exposure model, which offers detailed information on building stock (e.g., residential, commercial, and industrial structures) derived from national and regional datasets to assess global earthquake risk [7];
• the Copernicus Emergency Management Service (EMS), which provides free satellite-based mapping for disaster response and recovery, including rapid and risk/recovery mapping for both natural and human-made emergencies [8]. In particular, grading products produced by EMS during the EMSR648 activation were accessed and used as the reference dataset in the DL-based experiment;
• the high-resolution Digital Surface Model (DSM) data from CartoSat-1 Euro-Maps, made available by ESA as part of its Third Party Mission data, used as a 3D reference for building height measurements of the affected built-up structures.
The proposed methodology integrates a DL framework trained on multi-source datasets, combining pre- and post-event satellite imagery with the aforementioned geospatial information. The experiment exploited COSMO-SkyMed StripMap images at 3 m spatial resolution, HH polarization, collected a few days before and a few days after the earthquake, thus ensuring that the depicted scenario included changes and damage due to the earthquake impacts only. This fundamental condition was possible owing to the availability of a substantial archive of SAR imagery collected on a regular basis as part of ASI’s COSMO-SkyMed background mission, i.e. the low-priority acquisition plan that maximizes exploitation of the system and allows consistent datasets to be collected, thus creating a strategic historical data archive. This plan was also strengthened by ASI with further image collections in the context of the Committee on Earth Observation Satellites (CEOS) to support the GEO Geohazard Supersites and Natural Laboratories (GSNL) Kahramanmaraş Event Supersite. This approach enables detailed analysis of building damage at a 2D level while incorporating 3D building height data for improved accuracy. Additionally, the approach incorporates geological and environmental features by adjusting the damage assessment outcomes to account for structural differences across the affected cities. This comprehensive strategy significantly improves the precision and efficiency of damage mapping compared to traditional visual interpretation techniques. Results demonstrate that integrating SAR imagery with multi-source geospatial data enables rapid, consistent, and scalable damage mapping solutions. These results highlight the operational potential of the approach for near-real-time disaster response, even under challenging conditions.
It provides a reliable alternative to traditional, labor-intensive field data collection and manual visual assessments, ensuring the damage analysis is both accurate and efficient. The validation of the results is performed by comparing the outputs with high-resolution optical imagery (e.g., Google Earth) and geological studies of the affected regions. In conclusion, this study highlights a significant step forward in automated damage assessment by combining DL methodologies with multi-source geospatial data. The findings emphasize the degree of innovation in leveraging VHR radar imagery and DL models, demonstrating their potential to enhance earthquake response strategies, influence policies related to disaster preparedness and recovery, and support decision-making processes at local, national, and international levels. Additionally, this research contributes to expanding the application of DL and geospatial technologies for disaster management and recovery, with potential implications for scalable and global solutions.
References
[1] Yamazaki, F., & Matsuoka, M. (2007). Remote sensing technologies in post-disaster damage assessment. Journal of Earthquake and Tsunami, 1(3). https://doi.org/10.1142/S1793431107000122
[2] Silva, V., Brzev, S., Scawthorn, C., et al. (2022). A building classification system for multi-hazard risk assessment. International Journal of Disaster Risk Science, 13(2), 161–177. https://doi.org/10.1007/s13753-022-00400-x
[3] Mangalathu, S., Sun, H., Nweke, C. C., & Burton, H. (2019). Classifying earthquake damage to buildings using machine learning. Earthquake Spectra, 36(1). https://doi.org/10.1177/8755293019878137
[4] A Deep Learning Application for Building Damage Assessment Using Ultra-High-Resolution Remote Sensing Imagery in Turkey Earthquake. (2023). International Journal of Disaster Risk Science, 14(4). https://doi.org/10.1007/s13753-023-00526-6
[5] Gupta, R., Hosfelt, R., Sajeev, S., Patel, N., Goodman, B., Doshi, J., Heim, E., Choset, H., & Gaston, M. (2019). xBD: A Dataset for Assessing Building Damage from Satellite Imagery. arXiv. https://arxiv.org/abs/1911.09296
[6] Alisjahbana, I., Li, J., Strong, B., & Zhang, Y. (2024). DeepDamageNet: A two-step deep-learning model for multi-disaster building damage segmentation and classification using satellite imagery. arXiv. https://arxiv.org/abs/2405.04800
[7] Yepes-Estrada, C., Calderon, A., Costa, C., Crowley, H., Dabbeek, J., Hoyos, M., Martins, L., Paul, N., Rao, A., & Silva, V. (2023). Global Building Exposure Model for Earthquake Risk Assessment. Earthquake Spectra. https://doi.org/10.1177/87552930231194048
[8] Copernicus Emergency Management Service. (n.d.). The Emergency Management Service - Mapping. Retrieved November 26, 2024, from https://emergency.copernicus.eu/mapping/ems/emergency-management-service-mapping
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall E2)

Presentation: Landslide Identification Using Foundation Models and Deep Learning for Unsupervised Change Detection

Authors: Julia Leonardi, Valerio Marsocci, Dr. Vasil Yordanov, Prof Maria Antonia Brovelli
Affiliations: Politecnico Di Milano, KU Leuven
Landslides are among the most devastating natural hazards, posing significant risks to human life, infrastructure, and ecosystems. Their rising frequency and intensity, driven by climate change, create a pressing need for the development of effective landslide prevention methods, including detection and prediction techniques. This study presents a novel architecture for unsupervised landslide detection, integrating foundation models and unsupervised deep learning within change detection workflows. This analysis highlights advancements in geospatial hazard analysis and pushes the boundaries of current methodologies. Recent advancements in machine learning for geospatial applications have revolutionized environmental monitoring capabilities. However, these advancements come with various challenges, including the need for large, labeled datasets and the computational resources required to train application-specific models. Unsupervised learning offers a partial solution by training models on unlabeled data, yet models trained using solely unsupervised techniques often perform worse than their supervised counterparts. To mitigate this challenge, self-supervised pre-training has emerged as a powerful approach. By generating pseudo-labels directly from the data, self-supervised models learn transferable feature representations without manual annotation. This concept paved the way for the rise of geospatial foundation models (GeoFMs). GeoFMs are large-scale, general-purpose models trained on extensive amounts of geospatial data, often using self-supervised pre-training techniques. These models overcome many challenges by reducing the need for labeled data, minimizing computational overhead, and offering versatile, reusable models that can be adapted with minimal fine-tuning to different applications.
Combining the capabilities of geospatial foundation models with remote sensing techniques enables more efficient and scalable solutions for a wide range of applications, such as disaster management. This work integrates the advancements of self-supervised pre-training into an unsupervised change detection framework. Change detection is a technique that identifies pixel-level changes in bi-temporal remote sensing imagery and has long been widely used in environmental monitoring, especially in natural hazard applications. Integrating deep learning methods in change detection workflows creates a promising opportunity to enhance landslide detection accuracy by operating on the data’s deep feature representations, which provide rich information to the change detection processing pipeline. A core challenge in this work was creating a global training dataset of bi-temporal images of landslide events, which involved curating a collection of pre- and post-landslide Sentinel-2 image pairs. The resulting dataset, “GLaD4CD: Global Landslide Dataset for Change Detection” [1], is composed of 174 bi-temporal image pairs from all over the world. The dataset is a major contribution of this work, since there is a scarcity of multispectral, multi-temporal landslide datasets specifically designed for change detection tasks. GLaD4CD can be used with a range of unsupervised change detection models, as in this study, in which it was employed to fine-tune and evaluate a novel unsupervised deep-learning-based change detection architecture. This study introduces an innovative architecture inspired by Metric-CD, an unsupervised change detection framework based on metric learning principles proposed in [2]. By leveraging GeoFMs pre-trained on Sentinel-2 data, the proposed approach extends the original methodology to efficiently extract and use the information in multispectral imagery.
The pre-trained models employed in this research were developed as part of the Self-Supervised Learning for Earth Observation (SSL4EO) initiative [3], utilizing the DINO (Distillation with No Labels) [4] and MoCo (Momentum Contrast) [5] models, both based on the Vision Transformer (ViT) architecture. To evaluate the proposed model, its performance was benchmarked against other unsupervised change detection techniques, including (i) a baseline algebraic differential thresholding method applied across all Sentinel-2 bands, (ii) the Change Detection based on image Reconstruction Loss (CDRL) method, an unsupervised workflow introduced in [6] that relies on reconstruction loss, and (iii) the original Metric-CD. The proposed architecture demonstrates the impact that GeoFMs have on unsupervised landslide detection workflows. Comparative analysis against other unsupervised methods across different experimental setups revealed both the strengths and limitations of the introduced approach. The results highlight the potential of combining GeoFMs with unsupervised change detection workflows to enhance geospatial analysis capabilities. This work establishes a foundation for further research in unsupervised deep-learning-based change detection for landslide detection and prevention, contributing to more effective environmental monitoring and risk mitigation strategies in a changing climate.
[1] J. A. Leonardi, V. Yordanov, and M. A. Brovelli, “GLaD4CD: Global Landslide Dataset for Change Detection.” Zenodo, Mar. 09, 2024. doi: 10.5281/zenodo.10800338.
[2] W. G. C. Bandara and V. M. Patel, “Deep Metric Learning for Unsupervised Remote Sensing Change Detection,” Mar. 16, 2023, arXiv: arXiv:2303.09536. doi: 10.48550/arXiv.2303.09536.
[3] Y. Wang, N. A. A. Braham, Z. Xiong, C. Liu, C. M. Albrecht, and X. X. Zhu, “SSL4EO-S12: A Large-Scale Multi-Modal, Multi-Temporal Dataset for Self-Supervised Learning in Earth Observation,” May 29, 2023, arXiv: arXiv:2211.07044. Available: http://arxiv.org/abs/2211.07044
[4] M. Caron et al., “Emerging Properties in Self-Supervised Vision Transformers,” May 24, 2021, arXiv: arXiv:2104.14294. doi: 10.48550/arXiv.2104.14294.
[5] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum Contrast for Unsupervised Visual Representation Learning,” Mar. 23, 2020, arXiv: arXiv:1911.05722. doi: 10.48550/arXiv.1911.05722.
[6] H. Noh, J. Ju, M. Seo, J. Park, and D.-G. Choi, “Unsupervised Change Detection Based on Image Reconstruction Loss,” Apr. 05, 2022, arXiv: arXiv:2204.01200. doi: 10.48550/arXiv.2204.01200.
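The simplest of the benchmarked techniques, the baseline algebraic differential thresholding across all Sentinel-2 bands, can be sketched as below. The mean + k·std global threshold and the toy array shapes are illustrative assumptions, not the study's exact setup.

```python
import numpy as np

# Hedged sketch of bi-temporal per-band differencing plus a global threshold.

def change_map(pre, post, k=2.0):
    """pre, post: (bands, H, W) reflectance stacks -> boolean change mask."""
    diff = np.linalg.norm(post.astype(float) - pre.astype(float), axis=0)
    return diff > diff.mean() + k * diff.std()

rng = np.random.default_rng(0)
pre = rng.normal(0.2, 0.01, size=(4, 64, 64))   # 4 toy bands, 64x64 pixels
post = pre.copy()
post[:, 20:30, 20:30] += 0.3                    # simulated landslide scar
mask = change_map(pre, post)
print(mask[25, 25], mask[5, 5])                 # scar pixel vs. stable pixel
```

Deep-learning variants such as Metric-CD replace the raw band differences with distances between learned feature representations of the two dates, which is what makes GeoFM encoders useful in this pipeline.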
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall E2)

Presentation: Short-term wildfire danger forecasting in the Mediterranean with Deep Learning using the Mesogeos Dataset

Authors: Spyros Kondylatos, Ioannis Prapas, Eleftheria Vrachoriti, Gustau Camps-Valls, Professor Ioannis Papoutsis
Affiliations: Orion Lab, National Technical University of Athens & National Observatory of Athens, Image Processing Laboratory, University of Valencia, ETH Zürich
Wildfires play a key role in the ecosystem but also pose significant threats to human life, property, and the environment. Climate change is intensifying wildfire frequency and severity, particularly in Mediterranean regions [1], emphasizing the urgent need for innovative tools to enhance wildfire preparedness and management. However, accurately modeling wildfire danger remains challenging due to the complex interactions between climate, vegetation, and human activity across diverse spatial and temporal scales. Machine Learning (ML) offers a promising data-driven approach to address this complexity [2]. In this study, we develop short-term wildfire danger forecasting models, utilizing Mesogeos [3], a comprehensive and multi-purpose dataset tailored for ML applications in wildfire research. Mesogeos covers the Mediterranean basin, including Greece, Italy, Spain, Portugal, Cyprus, western Turkey, and parts of North Africa. It harmonizes a wide range of fire-related variables, such as meteorological conditions, vegetation characteristics, and anthropogenic factors, along with records of past burned areas, fire ignition points, and burned area sizes. The dataset is structured in a standard spatiotemporal grid format, namely a datacube, with a daily temporal resolution and a spatial resolution of 1 km × 1 km, containing data from 2006 to 2022. Its design enables easy extraction of ready-to-use ML datasets, supporting diverse wildfire-related applications. We focus on forecasting daily wildfire danger for temporal horizons ranging from 1 to 10 days ahead using a multi-step direct forecasting approach, in which a distinct model is trained for each temporal horizon. For this, we employ temporal-context-aware Deep Learning (DL) architectures, including Long Short-Term Memory (LSTM) networks and Transformers, to model the temporal dynamics of wildfire drivers.
We also assess the effect of input sequence length on prediction accuracy and perform an ablation study to identify key variables for precise forecasts. We integrate Explainable Artificial Intelligence (xAI) techniques, such as Feature Ablation, Partial Dependence Plots, and Integrated Gradients, to enhance model interpretability and transparency. These methods offer insights into variable importance and fire dynamics, improving our understanding of modeling across forecast horizons. Our findings demonstrate that the models maintain high predictive skill for short-term forecasts (1–4 days), with performance starting to decrease as the forecast horizon extends beyond five days. Moreover, a week of historical data is enough to accurately predict fire danger 1–4 days ahead, whereas longer forecasts (5–10 days) necessitate at least 10 days of input data. Additionally, integrating satellite-derived data and land cover information significantly improves prediction performance. Finally, our experiments demonstrate that the values of the variables in the last 2–3 days preceding a wildfire ignition are the most critical for accurate danger forecasting across all forecast horizons. These results suggest that DL methods, combined with rich datasets like Mesogeos, have the potential to improve short-term wildfire danger forecasting in the Mediterranean. Such tools could support operational agencies by providing timely predictions that help allocate resources, prioritize high-risk areas, and enhance coordination among stakeholders. While further validation in real-world scenarios is needed, these forecasting systems may offer insights into the drivers of wildfire danger, potentially aiding decision-making processes and improving preparedness under a changing climate.
References
[1] Moreira et al. “Wildfire Management in Mediterranean-Type Regions: Paradigm Change Needed.” Environmental Research Letters 15, no. 1 (January 1, 2020): 011001. https://doi.org/10.1088/1748-9326/ab541e.
[2] Kondylatos, Spyros, Ioannis Prapas, Michele Ronco, Ioannis Papoutsis, Gustau Camps-Valls, María Piles, Miguel-Ángel Fernández-Torres, and Nuno Carvalhais. “Wildfire Danger Prediction and Understanding With Deep Learning.” Geophysical Research Letters 49, no. 17 (2022): e2022GL099368. https://doi.org/10.1029/2022GL099368.
[3] Kondylatos, Spyridon, Ioannis Prapas, Gustau Camps-Valls, and Ioannis Papoutsis. “Mesogeos: A Multi-Purpose Dataset for Data-Driven Wildfire Modeling in the Mediterranean.” Advances in Neural Information Processing Systems 36 (December 15, 2023): 50661–76.
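The multi-step *direct* forecasting setup used in this abstract, one training set (and hence one model) per horizon h, can be sketched as follows. The window length, shapes, and toy series are illustrative; Mesogeos itself is a 1 km, daily datacube spanning 2006 to 2022.

```python
import numpy as np

# Hedged sketch of per-horizon sample construction for direct forecasting.

def direct_samples(series, window, horizon):
    """series: (T, n_features) daily drivers for one grid cell.
    Returns X: (N, window, n_features) input sequences and
            y: (N,) danger targets 'horizon' days ahead (assumed in column 0)."""
    X, y = [], []
    for t in range(window, len(series) - horizon + 1):
        X.append(series[t - window:t])            # past `window` days of drivers
        y.append(series[t + horizon - 1, 0])      # danger value h days ahead
    return np.stack(X), np.asarray(y)

series = np.arange(40.0).reshape(20, 2)   # toy 20-day, 2-feature series
for h in (1, 5, 10):                      # a separate LSTM/Transformer per h
    X, y = direct_samples(series, window=7, horizon=h)
    print(h, X.shape, y.shape)
```

Because each horizon gets its own dataset, the number of usable samples shrinks as h grows, which is one practical reason predictive skill tends to degrade for longer horizons.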
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall G1)

Session: D.02.08 Explainable AI for Earth Observation and Earth Science - PART 2

The rapid expansion of AI in Earth system science and Earth observation (EO) is accelerating research and innovation. However, for AI-based solutions to be truly impactful in Earth Action initiatives, they must demonstrate explainability, physics-awareness, and trustworthiness to ensure they are fit for purpose.
This session will explore cutting-edge advancements in explainable AI (XAI) methods across diverse EO data types, including Synthetic Aperture Radar (SAR), optical, and hyperspectral data. Contributions are invited on integrating AI with physical models, interpretable deep learning, uncertainty quantification, causal inference, and other approaches to improve transparency, consistency, and robustness in AI-driven solutions.
We welcome case studies and research addressing a variety of Earth science missions and applications, such as SAR processing, Earth system process understanding, image classification, 3D reconstruction, and climate/environmental monitoring. The session will also cover strategies for tackling data gaps, physical inconsistencies, and ensuring responsible, ethical AI use.
Attendees will gain valuable insights into the latest research on explainable AI for EO, with a focus on enhancing model interpretability and trustworthiness in applications that advance Earth observation and Earth system science, supporting actionable solutions for environmental and climate challenges.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall G1)

Presentation: Physics-Inspired Neural Network for Forward Modeling of Sentinel-1 Radar Observables

Authors: Tina Nikaein, Dr Paco López-Dekker
Affiliations: TU Delft, Space4Good
Mapping biophysical parameters to satellite observations requires forward models that describe the complex interaction between vegetation, soil properties, and radar signals. In this study, we address the challenge of modeling the dependence of the normalized radar cross-section (NRCS) on crop- and soil-related parameters for applications such as direct assimilation into crop growth models. Traditional models, such as the widely used Water Cloud Model (WCM) (Attema and Ulaby 1978), and more complex alternatives like the Tor Vergata model (Bracaglia, Ferrazzoli, and Guerriero 1995) and the Michigan Microwave Canopy Scattering Model (MIMICS) (Ulaby et al. 1990), provide detailed simulations of radar backscatter. However, these models often face limitations in real-world scenarios due to their reliance on extensive sets of parameters, many of which are challenging to measure or estimate accurately. Recent advancements have explored the use of machine learning (ML) models as observation operators (Rains et al. 2021; Shan et al. 2022; de Roos et al. 2023; Nikaein et al. 2023). A major limitation of standard ML approaches is their inability to consistently adhere to physical principles, which limits their interpretability and generalizability in scientific applications. Addressing this challenge, this study presents a machine learning framework that integrates the strengths of neural networks and physics-based models to improve the accuracy and reliability of modeling Synthetic Aperture Radar (SAR) observables. Our proposed framework incorporates physical knowledge into the machine learning process in two ways: through the architecture of the neural network and through the loss function. The network topology is inspired by the WCM; we ensure that the model reflects key physical relationships by embedding these principles directly into the network structure. Additionally, a physics-informed constraint is integrated into the loss function.
This dual approach enables the network to produce predictions that are not only accurate but also physically interpretable. To validate the model, we first evaluate it on synthetic data, where we expect it to perform optimally under controlled conditions; we then apply it to real data to evaluate its generalizability. The synthetic data are generated using the WCM and crop-growth-model-generated variables. Unlike purely data-driven methods, our framework ensures that the predictions remain consistent with fundamental physical realities, thereby reducing the risk of overfitting or unrealistic outputs. The proposed methodology highlights the potential of physics-guided neural networks as a powerful tool for advancing SAR-based vegetation studies and for use in data assimilation.
References:
Attema, E. P. W., and Fawwaz T. Ulaby. 1978. “Vegetation Modeled as a Water Cloud.” Radio Science 13 (2): 357–64. https://doi.org/10.1029/RS013i002p00357.
Bracaglia, M., P. Ferrazzoli, and L. Guerriero. 1995. “A Fully Polarimetric Multiple Scattering Model for Crops.” Remote Sensing of Environment 54 (3): 170–79. https://doi.org/10.1016/0034-4257(95)00151-4.
Nikaein, Tina, Paco Lopez-Dekker, Susan C. Steele-Dunne, Vineet Kumar, and Manuel Huber. 2023. “Modeling SAR Observables by Combining a Crop-Growth Model With Machine Learning.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 16:7763–76. https://doi.org/10.1109/JSTARS.2023.3301124.
Rains, Dominik, Hans Lievens, Gabriëlle J. M. De Lannoy, Matthew F. McCabe, Richard A. M. de Jeu, and Diego G. Miralles. 2021. “Sentinel-1 Backscatter Assimilation Using Support Vector Regression or the Water Cloud Model at European Soil Moisture Sites.” IEEE Geoscience and Remote Sensing Letters, 1–5. https://doi.org/10.1109/LGRS.2021.3073484.
Roos, Shannon de, Louise Busschaert, Hans Lievens, Michel Bechtold, and Gabriëlle J. M. De Lannoy. 2023. “Optimisation of AquaCrop Backscatter Simulations Using Sentinel-1 Observations.” Remote Sensing of Environment 294 (August): 113621. https://doi.org/10.1016/j.rse.2023.113621.
Shan, Xu, Susan Steele-Dunne, Manuel Huber, Sebastian Hahn, Wolfgang Wagner, Bertrand Bonan, Clement Albergel, Jean-Christophe Calvet, Ou Ku, and Sonja Georgievska. 2022. “Towards Constraining Soil and Vegetation Dynamics in Land Surface Models: Modeling ASCAT Backscatter Incidence-Angle Dependence with a Deep Neural Network.” Remote Sensing of Environment 279 (September): 113116. https://doi.org/10.1016/j.rse.2022.113116.
Ulaby, Fawwaz T., Kamal Sarabandi, Kyle McDonald, Michael Whitt, and M. Craig Dobson. 1990. “Michigan Microwave Canopy Scattering Model.” International Journal of Remote Sensing 11 (7): 1223–53. https://doi.org/10.1080/01431169008955090.
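The WCM backbone that inspires the network topology can be sketched in a few lines. The coefficients below, and the idea of penalising deviation from the WCM prior in the loss, are illustrative assumptions for this sketch, not values or the exact constraint from the study:

```python
import math

def wcm_backscatter(vwc, soil_moisture, theta_deg, A=0.12, B=0.33, C=-20.0, D=30.0):
    """Water Cloud Model (Attema & Ulaby 1978) in linear power units.

    vwc           : vegetation water content (kg/m^2), the 'cloud' descriptor
    soil_moisture : volumetric soil moisture (m^3/m^3)
    theta_deg     : incidence angle (degrees)
    A, B, C, D    : illustrative coefficients (assumed, not fitted here)
    """
    cos_t = math.cos(math.radians(theta_deg))
    t2 = math.exp(-2.0 * B * vwc / cos_t)          # two-way canopy attenuation
    sigma_soil_db = C + D * soil_moisture          # linear soil model, in dB
    sigma_soil = 10.0 ** (sigma_soil_db / 10.0)    # dB -> linear power
    sigma_veg = A * vwc * cos_t * (1.0 - t2)       # canopy volume-scattering term
    return sigma_veg + t2 * sigma_soil

def physics_penalty(model_pred, vwc, soil_moisture, theta_deg):
    """One possible physics-informed loss term: squared deviation (in dB)
    of an ML prediction from the WCM prior for the same inputs."""
    prior_db = 10.0 * math.log10(wcm_backscatter(vwc, soil_moisture, theta_deg))
    pred_db = 10.0 * math.log10(model_pred)
    return (pred_db - prior_db) ** 2
```

Adding a weighted `physics_penalty` to a standard data-fit loss is the generic pattern for constraining a network to stay near a physical prior.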

Friday 27 June 11:30 - 13:00 (Hall G1)

Presentation: Improved Bio-Physical Variable Retrieval From Sentinel-2 Images Using Physically-Constrained Neural Networks Augmented With Residual Error Modelling

Authors: Kevin De Sousa, Jordi Inglada
Affiliations: Univ Toulouse 3 Paul Sabatier, Univ Toulouse, CNES/IRD/CNRS/INRAe, CESBIO, Toulouse, France
Automatic feature extraction from remote sensing images becomes particularly challenging when these features need to be interpretable. In environmental applications, the meaningful features are tied closely to the observed physical processes. As a result, the primary use of remote sensing imagery is the retrieval of biophysical parameters. Accurate estimation and monitoring of Essential Climate and Biodiversity Variables (EVs) is fundamental to understanding, mitigating and adapting to climate change and biodiversity loss. Remote sensing enables large-scale observation and monitoring of these variables. Biophysical parameter retrieval is usually done using look-up tables (LUTs) produced by radiative transfer models (RTMs) which simulate satellite observations. These LUTs can be used to train regression models. While effective, this approach suffers from unrealistic simulations due to implausible parameter combinations, leading to learning errors in the inversion modelling process. To overcome these challenges, a novel auto-supervised, physics-aware method that relies directly on real observations was recently introduced. A neural network infers a latent space from observed images, which is then processed through a physical model to reconstruct the scene. This feedback loop eliminates the need for reference data and ensures that the latent space corresponds to physical variables. By constraining these variables with the physical model, they adhere to physical properties, making them interpretable and semantically consistent. For large-scale mapping of crops and forests, a method called PROSAILVAE was developed. It is a variational autoencoder that uses Sentinel-2 spectral bands as input. The encoder generates a latent space constrained to physical variables by a differentiable version of the well-known PROSAIL (PROSPECT + SAIL) RTM, which acts as the decoder to reconstruct the bands.
PROSAILVAE obtains state-of-the-art results in LAI retrieval and outperforms existing approaches in chlorophyll estimation. It also performs a full inversion of all the model parameters, while previous approaches were limited to single-variable retrieval. While effective, PROSAILVAE assumes an ideal physical model, which degrades variable estimation due to errors introduced by the approximations made in modelling the physical process. To address this, we introduce a Refinement Network (RN) to model the errors of the physical model (residuals). Four input configurations for the RN were evaluated: 1) SERIES, where the RN takes the output of the physical model and directly produces a correction; 2) RESSERIES, where the RN takes the output of the physical model and generates residuals that are subsequently added to the model’s output; 3) PARALLEL, where the RN uses the latent space to predict the residuals; 4) COMPOUND, where the RN integrates both the latent space and the physical model’s output to estimate the residuals. Balancing contributions between the RN and the physical model is critical to prevent over-compensation from the RN, which would indirectly degrade the estimates of the physical variables. Three balancing losses were investigated: the Posterior Predictive Check (PPC), decorrelation between the RN and the physical model (DEC), and the RN contribution (RNC). We validated our approach using fiducial reference measurements for vegetation (FRM4Veg) and BelSAR datasets as ground truth benchmarks for LAI and Canopy Chlorophyll Content (CCC). Our findings highlight PPC as a requirement to ensure proper balance between the RN and the physical model, as its absence leads to significant degradation in LAI and CCC estimations. Moreover, all configurations except SERIES outperformed PROSAILVAE in LAI estimation, demonstrating the utility of incorporating the RN.
However, none of the configurations improved CCC estimation, suggesting that PROSAIL already models this variable accurately. This study demonstrates that augmenting the physical model with a Refinement Network to address residual errors yields notable improvements in variable estimation accuracy, while maintaining a physics-based foundation. These findings emphasize the value of integrating an RN and balancing losses to enhance the robustness of physical variable retrieval for remote sensing applications.
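The four RN input configurations reduce to simple combination rules. The sketch below is schematic: `rn` stands in for any trained refinement network, the latent space `z` and physical output `y_phys` are plain lists, and none of the function names come from the PROSAILVAE code base:

```python
# Schematic versions of the four Refinement Network (RN) configurations.
# `rn` is a stand-in callable for a learned network; inputs are lists.

def series(rn, z, y_phys):
    # SERIES: RN sees the physical output and directly emits the correction
    return rn(y_phys)

def resseries(rn, z, y_phys):
    # RESSERIES: RN predicts residuals, added back to the physical output
    return [y + r for y, r in zip(y_phys, rn(y_phys))]

def parallel(rn, z, y_phys):
    # PARALLEL: RN predicts residuals from the latent space alone
    return [y + r for y, r in zip(y_phys, rn(z))]

def compound(rn, z, y_phys):
    # COMPOUND: RN sees both latent space and physical output (concatenated)
    return [y + r for y, r in zip(y_phys, rn(z + y_phys))]
```

The additive variants make the physical model the default and force the RN to explain only the residual, which is what the balancing losses then keep small.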

Friday 27 June 11:30 - 13:00 (Hall G1)

Presentation: Physics-aware emulation for atmospheric correction

Authors: Jorge Vicent Servera, Dr. Alexander Kokhanovsky, Dr. Luca Martino, Dr. Gerardo Varando, Professor Gustau Camps-Valls
Affiliations: University of Valencia, GFZ Helmholtz-Zentrum Potsdam, Universidad Rey Juan Carlos
Atmospheric radiative transfer models (RTMs) play a critical role in satellite data processing by modeling the scattering and absorption effects caused by aerosols and gas molecules in the Earth's atmosphere. However, as the complexity of RTMs increases and the demands of future Earth Observation missions escalate, conventional Look-Up Table (LUT) interpolation approaches face significant challenges. Statistical methods called emulators have been proposed as alternatives to LUT interpolation, but their operational use in satellite data processing is hindered by slow performance, and their black-box nature reduces transparency, limiting the physical interpretability of their predictions. This research introduces two physics-aware solutions that leverage multi-fidelity methods to enhance the explainability and accuracy of Gaussian Process (GP) emulators. We propose two methodological frameworks to improve model emulation and interpretability through machine learning techniques. First, we present a wrapper-forward feature selection method that incorporates physics knowledge into the model emulation process. The core idea is that not all input variables have an impact on the model outputs; by selecting only the relevant variables, emulators become more interpretable. Applied to global sensitivity analysis and emulation, this approach represents a significant advancement in physics-aware emulation, striking a balance between accuracy and explainability. Second, we focus on the model outputs by implementing a physics-based dimensionality reduction, which combines symbolic regression with semi-empirical modeling in a multi-fidelity framework. Symbolic regression identifies mathematical models from data patterns under the assumption that physical laws are described by a sparse and algebraic input-output relationship.
In this approach, we use the sparse linear regression technique LASSO to model the RTM spectral data with a semi-empirical parametric formulation. The interpretable parameters of this model serve as new variables for training a statistical regression model, thus enhancing the interpretability of emulators. In this presentation, we will describe our latest advances in physics-aware emulation and highlight its potential for atmospheric correction in hyperspectral satellite data. Our goal is to share a suite of methods and tools with the scientific community to automate the development of atmospheric RTM emulators, paving the way for more efficient and interpretable remote sensing workflows.
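A generic wrapper-forward feature selection loop of the kind described above might look as follows. The `score` callable (in practice, e.g. an emulator's cross-validation error) and the toy atmospheric variable names are assumptions for illustration, not the authors' implementation:

```python
def forward_select(features, score, max_features=None):
    """Greedy wrapper-forward feature selection (generic sketch).

    features : list of candidate input-variable names
    score    : callable(subset) -> error to minimise; lower is better
    Returns the selected subset, in the order features were added,
    stopping when no remaining candidate improves the score.
    """
    selected = []
    best_err = score(selected)
    limit = max_features or len(features)
    while len(selected) < limit:
        candidates = [f for f in features if f not in selected]
        trials = {f: score(selected + [f]) for f in candidates}
        f_best = min(trials, key=trials.get)
        if trials[f_best] >= best_err:   # no candidate improves: stop
            break
        selected.append(f_best)
        best_err = trials[f_best]
    return selected

# Toy demo: only physically relevant inputs reduce the "emulation error".
relevant = {"aot": 0.5, "h2o": 0.3}
def toy_score(subset):
    return 1.0 - sum(relevant.get(f, 0.0) for f in subset)

print(forward_select(["aot", "h2o", "o3", "albedo"], toy_score))  # → ['aot', 'h2o']
```

Because the wrapper evaluates the actual emulator error at each step, the surviving inputs are exactly those the model is sensitive to, which is what makes the resulting emulator easier to interpret physically.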

Friday 27 June 11:30 - 13:00 (Hall G1)

Presentation: Physics-Informed Deep Learning for Understanding Radar Backscatter and Biomass Relationships and Improving Biomass Estimation in the Dry Tropics

Authors: Reza M. Asiyabi, Prof. Casey Ryan, Dr. Steven Hancock, Dr. Sam Harrison, Prof. Shaun Quegan, Professor Mathias Disney, Dr. Mark Lomas
Affiliations: School of GeoSciences, The University of Edinburgh; School of Mathematical and Physical Sciences, The University of Sheffield; Department of Geography, University College London
The dry tropics represent a critical component of the global carbon cycle and serve as a significant carbon sink. However, accurately estimating Above-Ground Biomass Density (AGBD) in these regions remains challenging due to their ecological heterogeneity and the dynamic processes that influence carbon storage. Traditional approaches leveraging Synthetic Aperture Radar (SAR) have demonstrated potential for biomass mapping in these ecosystems but are hindered by limitations such as signal saturation, structural ambiguities, and variability in the SAR backscatter-AGBD relationship across different sites. Physics-informed deep learning models, which embed physical principles (e.g., from radiative transfer and physical models) into data-driven architectures, offer immense potential for AGBD mapping and can address key shortcomings of traditional techniques by enhancing interpretability, reducing overfitting, and improving generalizability. These models offer the potential to enable robust biomass estimation, even in under-represented or complex ecosystems, while providing deeper insights into SAR backscatter-AGBD relationships. By advancing scalability and accuracy, physics-informed deep learning offers a promising tool for producing high-resolution biomass maps, supporting global carbon monitoring, and informing conservation and climate mitigation strategies. Here we will present the findings from our previous studies on SAR backscatter-AGBD relationships, the site-specific factors influencing this interaction, and the limitations of traditional methods. We will outline the framework for multi-model physics-informed deep learning applied to this problem and present initial results and findings from this model.
Extended background: The dry tropics, spanning diverse ecosystems such as savannas, woodlands and dry forests, encompass approximately 15 million square kilometres, and are ecologically and climatically diverse regions characterized by heterogeneous vegetation structures. They act as significant carbon sinks, playing an essential role in mitigating climate change while simultaneously supporting critical biodiversity and the livelihoods of millions. However, accurately quantifying the carbon stored within the woody vegetation in these ecosystems is challenging. AGBD in these regions is influenced by a range of dynamic processes, including deforestation, degradation, regrowth and tree mortality induced by changing climatic conditions. Understanding these processes at scale is fundamental to monitor carbon fluxes. However, existing methods for biomass mapping often fail to meet the precision and generalizability needed for global carbon tracking and for use in conservation strategy and policy making. Remote sensing techniques, leveraging the advantages of various Earth Observation data, provide an immense opportunity for mapping and monitoring biomass in the dry tropical ecosystems. SAR, particularly lower-frequency systems such as L- and P-band, is a key tool for mapping AGBD due to its ability to penetrate vegetation canopies and interact with the woody structures, such as branches and trunks. With global datasets available from previous and current satellite missions, such as Sentinel-1 and ALOS PALSAR, and the future planned missions, such as BIOMASS and NISAR, SAR provides long-term, repeatable observations that are essential for monitoring large-scale ecosystem dynamics. Despite these advantages, the utility of SAR for biomass estimation has several critical limitations. 
SAR signal saturation is one of the most studied challenges in this area, where radar backscatter loses sensitivity to increases in biomass beyond moderate levels, typically around 80–100 Mg ha⁻¹. This makes SAR less effective in denser forests and regions with high biomass values. Furthermore, previous studies have demonstrated that the relationship between SAR backscatter and biomass is highly variable across ecosystems, and different site-specific factors such as vegetation structure, stem density, and soil properties strongly influence this relationship. For example, structural differences, such as similar AGBDs produced by a few big trees versus many smaller stems, can produce dissimilar SAR backscatter responses and reduce the accuracy of biomass estimates. Similarly, soil moisture and texture can interfere with radar signal interactions, adding another layer of complexity to biomass retrieval. These issues show that traditional empirical models, which often rely on simple statistical relationships between radar intensity and biomass without accounting for the underlying physical processes, cannot adequately capture the complex relationship between radar backscatter and AGBD. Traditional approaches also struggle with generalizability. Biomass estimation models calibrated for specific regions or vegetation types often fail when applied to different ecosystems. This is particularly problematic in the dry tropics, where vegetation types vary widely, from sparse savannas to denser woodlands. The inherent heterogeneity of these landscapes demands models capable of capturing site-specific variability while maintaining robustness across broader scales. Empirical models, however, typically lack the flexibility and interpretability required to achieve this. Inter-comparison exercises show that they have regional biases, which some studies suggest could be due to varying biomass-to-structure relationships.
Furthermore, such models often disregard important ecological and physical interactions, leading to significant biases in biomass estimates. There is an increasing demand for innovative approaches that go beyond traditional methods to address these challenges, including regional biases due to structure. Deep learning, a subset of artificial intelligence, has emerged as a powerful tool for modelling complex, nonlinear relationships. Deep learning-based methods have provided state-of-the-art results and outperformed traditional methods in diverse applications, including environmental remote sensing. Unlike conventional statistical models, deep learning algorithms can automatically extract patterns from high-dimensional data, enabling them to handle the complexity and variability of SAR-vegetation interactions. These models excel at capturing subtle relationships between inputs and outputs, making them particularly well-suited for applications like biomass mapping where traditional methods often fail. However, the application of deep learning to biomass estimation comes with its own set of challenges. Deep learning models require vast amounts of ground-truth label data for training. The increasing number of field measurement networks (i.e., plots), as well as AGBD estimates derived from LiDAR observations from spaceborne (e.g., GEDI), airborne (ALS) and terrestrial (TLS) systems, provide the opportunity to train deep learning models for AGBD mapping. Additionally, while data-driven models are capable of high accuracy, they often lack interpretability, which can undermine trust in their predictions. In the context of biomass monitoring, where physical principles such as radar scattering mechanisms play a critical role, purely data-driven approaches risk overfitting or producing results that satisfy statistical accuracy measures but lack physical plausibility and are difficult to validate.
By embedding physical laws and domain-specific knowledge into the architecture of deep learning models, physics-informed deep learning aims at overcoming these limitations. This approach combines the strengths of empirical and theoretical methods. For biomass estimation, this means incorporating the known relationships between SAR scattering mechanisms, such as backscattering mechanisms, and vegetation structure into the learning process. Physics-informed deep learning offers several distinct advantages over traditional methods. First, by integrating physical principles, these models enhance interpretability, allowing researchers to understand the factors driving model predictions. This is particularly important for biomass monitoring, where the ability to understand the contributions of different factors like stem density, canopy structure, and soil properties to SAR backscatter is crucial. Second, physics-informed models are inherently more robust, as they are constrained by established biophysical relationships. This reduces the risk of overfitting and improves generalizability across different ecosystems. Third, these models can leverage multi-source data, combining (multiband) SAR with complementary data from optical, thermal, and LiDAR sensors. This data fusion approach provides a more comprehensive view of vegetation structure and dynamics to further improve model accuracy. By combining the scalability and precision of deep learning with the interpretability of physical modelling, this approach addresses the key limitations of traditional methods. For example, instead of relying solely on radar intensity to estimate biomass, physics-informed models can incorporate phenological constraints to retrieve additional metrics such as canopy cover and stem density. This allows a deeper understanding of vegetation dynamics and capturing changes that might be overlooked by conventional AGBD-focused models. 
Additionally, the integration of temporal data from SAR time series can provide insights into seasonal variations in ecological attributes, further enhancing the accuracy of biomass estimates. The development of such models requires a robust and diverse dataset, combining high-resolution Earth observation data with detailed ground-based labels. Moreover, under-represented ecosystems within the dry tropics should be included in the training dataset for generalizing across the region’s heterogeneous landscapes. The high computational requirement is another restricting factor for developing large-scale deep learning-based models. However, advances in cloud and high-performance computing platforms mitigate this issue and make it feasible to handle the large-scale datasets needed for training and validating these models. Our goal is not only improving the accuracy and scalability of biomass estimates but also gaining a deeper understanding of the SAR backscatter-AGBD relationship and its influencing factors. Once developed, physics-informed deep learning models can be used to generate high-resolution biomass maps, providing invaluable tools for policymakers, conservationists, and researchers. These maps can inform efforts to monitor carbon stocks, design conservation strategies, and evaluate the effectiveness of climate mitigation programmes.
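The saturation behaviour discussed above is commonly described as an exponential approach to an asymptote. The sketch below uses assumed coefficients purely to illustrate how backscatter sensitivity collapses beyond roughly 80–100 Mg ha⁻¹; it is not a fitted model from this study, but the kind of monotone-saturating prior a physics-informed loss could enforce:

```python
import math

def backscatter_db(agbd, sigma_bare=-14.0, sigma_sat=-7.5, k=60.0):
    """Illustrative saturating backscatter-AGBD curve (assumed coefficients).

    agbd       : above-ground biomass density (Mg/ha)
    sigma_bare : backscatter at zero biomass (dB)
    sigma_sat  : asymptotic backscatter at high biomass (dB)
    k          : e-folding biomass scale (Mg/ha)
    """
    return sigma_sat + (sigma_bare - sigma_sat) * math.exp(-agbd / k)

def sensitivity(agbd, eps=1.0):
    """Central-difference slope: dB change per Mg/ha at a given biomass level."""
    return (backscatter_db(agbd + eps) - backscatter_db(agbd - eps)) / (2.0 * eps)
```

Comparing `sensitivity(20)` with `sensitivity(150)` makes the saturation explicit: the curve stays monotone but its slope shrinks by an order of magnitude, which is why purely intensity-based retrieval degrades in denser forests.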

Friday 27 June 11:30 - 13:00 (Hall G1)

Presentation: Towards Lightweight Physics-Informed Neural Networks for Bio- and Geo-Physical Parameter Estimation Using EO Data

Authors: Davide De Santis, Lorenzo Giuliano Papale, Fabio Del Frate, Prof. Giovanni Schiavon
Affiliations: Tor Vergata University of Rome
Machine learning (ML), particularly Neural Networks (NNs), is often perceived as a mysterious “black box” that transforms input data into solutions without clear explanation. However, in Earth observation (EO), the underlying processes are rooted in physics, not magic. This highlights the critical need for frameworks that effectively combine AI with physical models and principles to achieve explainability, physics awareness, and trustworthiness [1]. Thus, developing reliable ML models for tasks like image processing and parameter estimation requires a strong foundation in physics. In this context, Radiative Transfer Models (RTMs) and Electromagnetic Models (EMs) represent extremely useful tools. These models simulate scattering and radiation mechanisms, creating datasets with input-output pairs essential for training and testing ML algorithms. On the other hand, the vast availability of EO data highlights the need to carefully select the most effective inputs for AI algorithms, particularly by identifying the optimal wavelengths based on their sensitivity to the parameters being estimated [2]. Furthermore, addressing the computational constraints of real-world applications by designing lightweight models has become increasingly important. This need is especially critical in an era where the demand for onboard processing capabilities is growing exponentially. In this context, to address the need for interpreting the complex physical relationships inherent in Remote Sensing (RS) data, the present study explores the integration of RTMs and EMs with NNs, leading to the implementation of Physics-Informed Neural Networks (PINNs). Additionally, pruning techniques were adopted to simplify neural networks and enable lightweight models. Specifically, pruning is particularly effective due to its ability to reduce computational complexity while preserving the most important features required for accurate parameter extraction.
In this work, we assessed the synergistic use of PINNs and pruning techniques to enhance RS applications in two key domains: atmospheric aerosol retrieval and soil moisture estimation. The first domain leverages PRISMA-like simulated hyperspectral data to develop neural networks constrained by physical principles for aerosol optical depth (AOD) estimation. The second application focuses on soil moisture retrieval in agricultural regions using EM simulations of radar backscatter data at varying frequencies (X-, C-, S-, L-, and P-bands). Simulations based on bio-geophysical properties and Radiative Transfer (RT) theory are fed to the network, which learns to emulate physical scattering mechanisms. In the first application, physical constraints are imposed on the AI model, guiding the training process. Then, for both case studies, a pruning technique is applied to identify essential network connections and input variables. It is worth mentioning that the methodology is adaptable to both applications, despite differences in their domains and electromagnetic spectrum regions, by tailoring it to the specific nature of the datasets. For hyperspectral data, the “libRadtran” model [3] is used to generate datasets spanning the 400–2200 nm range with 189 channels. For SAR data, the “Tor Vergata” model [4] simulates datasets across the selected frequencies. From a general perspective, the core motivation for this work lies in overcoming the challenges associated with the high dimensionality of hyperspectral data and the intricate physics behind radar signals, especially over vegetated areas. Hyperspectral sensors provide rich spectral information useful for EO applications, but their high dimensionality demands careful feature selection to ensure efficient and accurate retrievals. Similarly, SAR-based retrievals of bio-geo-physical parameters necessitate identifying optimal signal frequencies and polarizations sensitive to the target’s properties.
Regarding pruning strategies, various state-of-the-art approaches were considered, including structured and unstructured methods, as well as pruning criteria such as magnitude-based and saliency-based techniques [5]. The study situates pruning within the broader context of feature selection methods, comparing its advantages to techniques such as ablation studies, XGBoost [6], and knowledge distillation [7]. However, unlike other feature selection approaches, pruning reflects the underlying physics while directly optimizing network architecture. As a result, the combination of PINNs and the pruning approach reduces model complexity while preserving both the accuracy and physical relevance of its predictions. Specifically, in hyperspectral aerosol retrieval, the pruning process significantly decreases the dimensionality of input vectors while adhering to the physical principles governing the interactions between electromagnetic waves and aerosols. The refined neural network identifies three critical wavelength groups: two corresponding to significant aerosol-light interactions (UV-VIS spectral bands) and one less interactive channel that serves as a reference, enabling the model to differentiate aerosol presence by leveraging variations in light intensity. This mechanism is similar to the differential optical absorption spectroscopy (DOAS) algorithm, commonly used for trace gas retrieval. For soil moisture retrieval, the relevant radar frequencies and polarizations are identified based on underlying scattering physics, as the results underscore the importance of lower radar frequencies. Specifically, the findings highlight the sensitivity of the P- and L-bands while noting the limited significance of higher frequencies. 
The analysis confirms that P- and L-band frequencies are the most critical for soil moisture estimation due to their superior penetration capabilities through vegetation canopies, making them particularly effective for assessing soil conditions beneath vegetation. On the other hand, the pruned PINN identifies S-band and certain C-band features as less relevant. Although X-band features exhibited lower weight magnitudes, their potential role necessitates further investigation. For the aerosol retrieval case study, validation of the proposed approach using real PRISMA data will be presented, demonstrating its applicability to other hyperspectral data with similar characteristics, such as those provided by the EnMAP and CHIME missions. Regarding the soil moisture retrieval application, a validation procedure has not yet been conducted due to the limited availability of satellite data at the selected frequencies over the same area. However, this will become feasible with the launch of radar missions such as the NASA-ISRO NISAR satellite and the ESA-led ROSE-L and Biomass programmes, which will provide the necessary data to validate the proposed approach in the land domain as well. To conclude, this research demonstrates the potential of integrating Physics-Informed Neural Networks (PINNs) with pruning techniques to advance remote sensing (RS) applications. By enabling the development of lightweight, efficient, and physics-aware neural networks, this methodology supports the realization of real-world satellite-based Earth Observation (EO) applications, including onboard processing. The results contribute to the growing field of AI-enhanced bio- and geo-physical parameter estimation that adheres to physical principles, offering scalable solutions for addressing critical environmental monitoring challenges.
References:
[1] Datcu, Mihai, Zhongling Huang, Andrei Anghel, Juanping Zhao and Remus Cacoveanu. “Explainable, Physics Aware, Trustworthy AI Paradigm Shift for Synthetic Aperture Radar.” ArXiv abs/2301.03589 (2023).
[2] Persello, Claudio, Jan Dirk Wegner, Ronny Hänsch, Devis Tuia, Pedram Ghamisi, Mila Koeva and Gustau Camps-Valls. “Deep Learning and Earth Observation to Support the Sustainable Development Goals.” ArXiv abs/2112.11367 (2021).
[3] Emde, Claudia, Robert Buras-Schnell, Arve Kylling, Bernhard Mayer, Josef Gasteiger, Ulrich Hamann, Jonas Irgens Kylling, Bettina Richter, Christian Pause, Timothy E. Dowling and Luca Bugliaro. “The libRadtran software package for radiative transfer calculations (version 2.0.1).” Geoscientific Model Development 9 (2015): 1647–1672.
[4] Bracaglia, Marco, Paolo Ferrazzoli and Leila Guerriero. “A fully polarimetric multiple scattering model for agricultural fields.” 1995 International Geoscience and Remote Sensing Symposium (IGARSS '95), Quantitative Remote Sensing for Science and Applications, vol. 2 (1995): 1339–.
[5] Cheng, Hongrong, Miao Zhang and Javen Qinfeng Shi. “A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations.” IEEE Transactions on Pattern Analysis and Machine Intelligence 46 (2023): 10558–10578.
[6] Chen, Tianqi and Carlos Guestrin. “XGBoost: A Scalable Tree Boosting System.” Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016).
[7] Hinton, Geoffrey E., Oriol Vinyals and Jeffrey Dean. “Distilling the Knowledge in a Neural Network.” ArXiv abs/1503.02531 (2015).
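Magnitude-based unstructured pruning, one of the criteria surveyed in [5], can be sketched in a few lines. The link to input selection is that fully zeroed input columns identify the wavelengths or radar bands the network has stopped using. This is a generic illustration with assumed weights, not the authors' implementation:

```python
def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning (generic sketch): zero the fraction
    `sparsity` of weights with the smallest absolute values.

    weights : first-layer weight matrix as a list of rows
              (one row per hidden unit, one column per input feature)
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(sparsity * len(flat))
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [[w if abs(w) > threshold else 0.0 for w in row] for row in weights]

def pruned_inputs(weights):
    """Indices of input features whose outgoing weights are all zero,
    i.e. channels the pruned network no longer reads."""
    n_in = len(weights[0])
    return [j for j in range(n_in) if all(row[j] == 0.0 for row in weights)]
```

Applied to the input layer of a PINN, inspecting `pruned_inputs` after pruning is one way to recover a physically interpretable feature ranking, e.g. the dominance of P- and L-band inputs reported above.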

Friday 27 June 11:30 - 13:00 (Hall G1)

Presentation: Learning Physical Interactions Between Environmental Variables Through Self-supervised Learning

Authors: Diogenis Antonopoulos, Nikos Bountos, Ioannis Papoutsis, Dimitrios Michail
Affiliations: Orion Lab, National Technical University of Athens, Harokopeio University
The rapid accumulation of meteorological and environmental data offers opportunities to study the Earth system in various contexts, such as forecasting, extreme event detection, and early warning of natural hazards. However, utilizing this data requires methods capable of learning meaningful representations without extensive labels, as producing curated large-scale datasets is expensive. Recent advances in self-supervised learning (SSL) and foundation models have shown promising results in both computer vision and remote sensing applications, but their application to Earth system science remains underexplored. Contrary to natural images, environmental problems are governed by physical laws. Models trained on such problems must efficiently capture and leverage these governing principles. While several SSL approaches are well-suited to this domain, a systematic evaluation of these methods in terms of the knowledge they extract remains lacking. In this work we address this gap, proposing a systematic framework to evaluate how effectively different pre-training paradigms capture relationships between environmental and meteorological variables within the Earth system. The Mesogeos data cube [1] presents a unique testbed for exploring this potential, offering a diverse collection of meteorological, satellite remote sensing, and static variables across the Mediterranean region. Mesogeos provides daily data for the whole Mediterranean region with a ground sampling distance of 1 km. It covers the period from 2007 to 2021, offering 21 unique variables such as maximum daily temperature, relative humidity, the Normalized Difference Vegetation Index and day/night surface temperature, alongside static features such as road proximity and land cover types. Its rich information and spatiotemporal structure make it a great fit for SSL methods. We construct our evaluation framework adopting the recently introduced FoMo-Net [2] architecture.
FoMo-Net is a foundation model designed to dynamically handle a wide array of remote sensing data at multiple scales, making it an ideal choice for the task at hand. FoMo-Net employs a Vision Transformer (ViT)[3], processing each input channel independently and creating a token sequence proportional to the number of processed variables. Adopting FoMo-Net imposes no hard assumptions on the examined models, as its encoder is a common ViT with a flexible patchification scheme. FoMo-Net's channel-wise patchification approach is well suited to these tasks, as it enables a deeper exploration of both inter- and intra-variable relationships. The FoMo-Net encoder distinguishes between different variables in the token sequence using a set of learnable channel embeddings. Following common approaches in the literature, we investigate three pretraining paradigms: a) supervised proxy tasks, b) autoregressive pretraining, and c) information reconstruction via Masked Autoencoders (MAE)[3]. Specifically, supervised pretraining frames a pretext task in which one variable serves as the target while the others are used as predictors. The target variable functions as a critical hyperparameter, significantly influencing the knowledge extracted during the pretraining stage. We experiment with a diverse set of target variables, aiming for a task that is neither trivial nor extremely complex, facilitating a smooth and effective learning process. The autoregressive task mimics forecasting, where past information predicts future outcomes; this approach is widely used in Earth system science, where forecasting is a key challenge. Finally, the Masked Autoencoder framework has become widely popular in remote sensing and computer vision, training models to reconstruct the masked parts of the input. Frameworks like SatMAE[4] have demonstrated the potential of MAE-based methods to capture rich spatial and temporal features in the remote sensing domain.
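A minimal numerical sketch of the MAE-style reconstruction objective described above, assuming random token masking and using a trivial mean-of-visible-tokens "decoder" purely for illustration (the actual models train a transformer decoder; shapes and the masking ratio are placeholders):

```python
import numpy as np

def masked_reconstruction_loss(tokens, mask_ratio=0.75, seed=0):
    """Toy MAE pretext objective: hide a random subset of token vectors and
    score a reconstruction only on the masked positions. The 'decoder' here
    is a stand-in (mean of the visible tokens), not a learned model."""
    rng = np.random.default_rng(seed)
    n = tokens.shape[0]
    masked = rng.choice(n, size=int(n * mask_ratio), replace=False)
    visible = np.setdiff1d(np.arange(n), masked)
    prediction = tokens[visible].mean(axis=0)  # trivial reconstruction
    return float(((tokens[masked] - prediction) ** 2).mean())

rng = np.random.default_rng(1)
tokens = rng.normal(size=(16, 8))  # 16 tokens, embedding dimension 8
loss = masked_reconstruction_loss(tokens)
```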
Building on these pretraining setups and FoMo-Net’s flexibility, we design a framework to directly evaluate the extracted knowledge, avoiding reliance on downstream task assessments. In particular, we analyze the channel embeddings learnt by the model to distinguish among variables, assessing how well they capture relationships between variables. Variables strongly linked by physical principles should exhibit similar embeddings, suggesting the model has effectively captured these dependencies. Another key aspect of our framework involves exploring spatial relationships: regions that are physically close to each other are expected to yield similar encoded representations. This can be examined by investigating the tokens of the transformer encoder. In this case, we measure the cosine distance between different tokens representing different spatial patches of the same variable for a given sample. By analyzing this spatial coherence, we assess whether the model respects the inherent spatial structure of the Earth, an essential characteristic of environmental data. Additionally, we examine how the model links different variables during the encoding process, investigating the attention between variables at multiple layers via attention rollout. This evaluation reveals the connections between the learned representations of variables and known physical properties, allowing us to verify the model’s understanding of Earth system processes and, at a later stage, extract new insights. To conclude, we propose a framework to assess the capacity of popular pre-training paradigms to effectively understand the complex processes of the Earth system in a task-agnostic manner. Such insights have broad implications for Earth system science, enabling the design of models that not only align with known physics but could potentially generalize to new and unexplored phenomena.
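The channel-embedding comparison could be sketched as follows; the variable names, embedding dimension, and random values are placeholders for the embeddings a trained encoder would provide:

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between row vectors (1 = same direction)."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return unit @ unit.T

# Hypothetical learned channel embeddings, one row per variable.
# Names and values are illustrative, not taken from a trained model.
variables = ["t_max", "rel_humidity", "ndvi", "lst_day", "lst_night"]
rng = np.random.default_rng(42)
channel_emb = rng.normal(size=(len(variables), 16))
sim = cosine_similarity_matrix(channel_emb)
# After pretraining, physically coupled pairs (e.g. lst_day / lst_night)
# would be expected to score higher here than unrelated pairs.
```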
This work advances research at the intersection of machine learning and Earth science, paving the way for robust, interpretable SSL methods that can enhance our understanding of complex environmental systems. [1] S. Kondylatos, I. Prapas, G. Camps-Valls, and I. Papoutsis, Mesogeos: A multi-purpose dataset for data-driven wildfire modeling in the Mediterranean. (Jun. 14, 2023). Zenodo. doi: 10.5281/zenodo.8036851. [2] N. I. Bountos, A. Ouaknine, and D. Rolnick, “FoMo-Bench: a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models,” Mar. 27, 2024, arXiv: arXiv:2312.10114. doi: 10.48550/arXiv.2312.10114. [3] Y. Bao, S. Sivanandan, and T. Karaletsos, “Channel Vision Transformers: An Image Is Worth 1 x 16 x 16 Words,” Apr. 18, 2024, arXiv: arXiv:2309.16108. Accessed: Jul. 15, 2024. [Online]. Available: http://arxiv.org/abs/2309.16108 [4] Y. Cong et al., “SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery,” Jan. 15, 2023, arXiv: arXiv:2207.08051. Accessed: Aug. 22, 2023. [Online]. Available: http://arxiv.org/abs/2207.08051
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.94/0.95)

Session: C.03.14 Sentinel-3 Mission Status, Achievements, and Future Plans - PART 2

This session, jointly organized by ESA and EUMETSAT, will provide a comprehensive update on the Sentinel-3 mission, a core element of the EU's Copernicus Earth Observation Program. Launched in 2016 (Sentinel-3A) and 2018 (Sentinel-3B), Sentinel-3 is designed to monitor the Earth’s environment, spanning the ocean, land, cryosphere, and atmosphere domains. The mission’s primary objectives include tracking Earth surface changes, supporting climate research, and aiding the sustainable management of natural resources. Equipped with a suite of advanced instruments, including the Ocean and Land Colour Instrument (OLCI), the Sea and Land Surface Temperature Radiometer (SLSTR), and a Synthetic Aperture Radar Altimeter (SRAL), the satellites provide critical data for monitoring multiple components of the Earth system. These observations are essential for climate studies, disaster management, and environmental monitoring.

The session will highlight the status and operational success of Sentinel-3A and 3B, showcasing key scientific achievements. Presenters will also discuss upcoming developments, such as enhanced data processing, innovative multi-sensor synergies, the preparation of the Sentinel-3C tandem phase, and the evolution of the operational constellations. Moreover, ESA and EUMETSAT’s collaboration ensures Sentinel-3 data continuity and integration into operational services, supporting informed decision-making in areas like sustainable development and environmental protection. This session will emphasize the role of such international cooperation in tackling global environmental challenges through enhanced Earth observation capabilities.

Land Session


Significance of the Sentinel-3 Surface Topography Mission for the Cryosphere


  • Malcolm McMillan – Lancaster University

Sentinel-3 Optical Synergy products for Land Applications


  • Carolien Toté - VITO

Role of Sentinel-3 Surface Topography Mission in Advancing Hydrology within Copernicus Services


  • Nicolas Taburet – CLS

Exploitation of land surface temperature from SLSTR for long term climate monitoring: from global to local


  • Agnieszka Soszynska - University of Leicester

Atmosphere Session


The use/planned use of Sentinel-3 NRT Atmosphere Products by CAMS in their operational services


  • Richard Engelen - CAMS

The Sentinel-3 Water Vapour products and their contribution to climate monitoring and studies


  • Rene Preuske - Freie Universität Berlin
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.61/1.62)

Session: C.06.04 New Space Missions Data Quality & Cal/Val

New Space global players are launching more and more Earth-Observation satellites, with great potential to complement the Copernicus and ESA Programmes for both operational and scientific purposes. Data Quality and Calibration/Validation activities are critical with a view to building up an interoperable system of systems. The goal of this session is to discuss current and future developments in the area of Data Quality and Calibration/Validation of spaceborne Very High-Resolution instruments, including pre-launch/post-launch activities and best practices. Characterization/Traceability, Cal/Val of Constellations, Interoperability, Standards and Best Practices, and Cal/Val Sites are among the key elements of the session, which includes contributions from various domains such as SAR, Multi-/Hyper-Spectral, Thermal IR, and Atmospheric Composition.


Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Data Quality Assessment of the New Space data providers of the CCMs

Authors: Thomas Miraglio, Carla Santella, Sebastien Clerc, Leonardo De Laurentiis, Jerome Bruniquel, Veronique Bruniquel
Affiliations: ACRI-ST, SERCO, ESA
The Optical Mission Performance Cluster (OPT-MPC) monitors the instrument performance of the optical New Space Data Providers taking part in the Copernicus Contributing Missions (CCMs). These missions are crucial for delivering complementary data to the Copernicus Services, ensuring that (i) observational requirements and (ii) evolving data needs (high-resolution thermal infrared data, hyperspectral constellation data) are satisfied through contractual set-ups. The CCMs belong to three categories: Category 1 comprises emerging European data providers, new European actors in the field; Category 2, established European data providers; and Category 3, established non-European data providers. The OPT-MPC activities were tailored to the different categories, ensuring that the data delivered to the Copernicus Services are fit for purpose. For Cat.1 CCMs, the objective of the OPT-MPC is (i) to help these companies develop robust Cal/Val practices and (ii) to independently assess their data quality before ingestion by the Copernicus Services. The assessment is based on the Earthnet Data Assessment Project (EDAP) framework, which has recently become a jointly developed ESA-NASA framework, and considers both the documentation (user guide, Cal/Val plan, algorithm theoretical basis) and the data products themselves. For Cat.2 and Cat.3 CCMs, the assessment focuses on the quality of the data provided by these established data providers, including, if applicable, specific analyses towards the use of their data as part of the future Very High Resolution 2024 coverage of Europe, with additional requirements concerning product quality and dataset content. This presentation will focus on the OPT-MPC framework and the activities dedicated to the documentation and data quality assessment of the optical Copernicus Contributing Missions. First, an overview of the reference standards, networks, methods, and metrics of interest will be given.
Then, ongoing activities, first results, and feedback about the experience and challenges from the past year will be addressed.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: First Results From the constellr HiVE Mission: CAL/VAL Activities as the Backbone of a High-Resolution 2 K-Accurate LST Product.

Authors: Christophe Lerot, Andreas Brunn, Alexandre Delahaye, Ellis Freedman, Jan Hieronymus, Imaduddin Humayun, Christian Köppl, Daniel Spengler, Jean-Baptiste Volatier, Tianran Zhang
Affiliations: constellr Belgium S.A., constellr GmbH, constellr SASU, Serious Science LLC
constellr is a New Space company at the forefront of delivering daily, global land surface temperature (LST) data, with the ambition of making the agriculture sector more efficient and more sustainable. Beyond this general ecosystem monitoring, it also provides crucial information for many other applications, including the monitoring of urban heat islands and infrastructure and the tracking of wildfires. In early 2025, constellr began deploying its HiVE (High-precision Versatile Ecosphere monitoring mission) constellation. This cutting-edge system comprises micro-satellites in the 100 kg class carrying advanced high-resolution, cryocooled thermal infrared (TIR) as well as visible and near-infrared (VNIR) sensors. Operating in a sun-synchronous orbit at approximately 510 km altitude, the constellation is expected to deliver a remarkable 1-day temporal resolution by the end of 2027, achieved with five satellites in orbit. The satellites offer a 30-meter spatial resolution for the TIR bands (up to 10 meters for the VNIR bands) and an unparalleled 2 K accuracy for the final LST product. Achieving such high performance despite the lack of on-board calibration equipment requires robust calibration and validation (CAL/VAL) procedures. Pre-launch, a rigorous on-ground calibration and characterization campaign was carried out in collaboration with our suppliers to characterize the spatial and radiometric responses of the instruments. Additionally, the lines of sight of the two cameras were aligned, and the focus of the TIR instrument was characterized with respect to the telescope temperature. The extensive commissioning phase that followed the launch made it possible to characterize the self-emission of the optical assembly through deep-space looks, to fine-tune the focus positions, and to perform initial absolute calibration and flat fielding.
During operations, the calibration and validation of the two instruments is performed through a combination of vicarious calibration, side-slither imaging, and cross-calibration techniques. A key element of the absolute calibration approach is a patented cross-calibration technique, which leverages existing high-temporal-resolution surface temperature products from geostationary weather satellite platforms such as GOES-R, MSG/MTG, or Himawari. Combining all of these extensive characterization, correction, calibration, and validation techniques makes it possible to convert the raw satellite readings into geometrically and radiometrically highly accurate L1 products in the radiance domain. At L2 processing, high-quality surface temperature retrievals are produced from the TIR bands using an enhanced Equivalent Temperature Approach, supported by prior emissivity values estimated from co-located surface reflectance information. Furthermore, the VNIR camera enables precise geolocation as well as atmospheric correction, optimized with collocated aerosol and water vapor data. In this presentation, we will summarize the status of the HiVE mission after its first months of operations, showcase the initial results from the commissioning and CAL/VAL procedures, and highlight the mission's potential to revolutionize global environmental monitoring.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Evolving Cal/Val for High Resolution Satellite Data – The FinCaS Site

Authors: Lauri Häme, Juha Suomalainen, Juho Häme, Joni Norppa, Tuomas Häme
Affiliations: Terramonitor, National Land Survey of Finland
Terramonitor and the Finnish Geospatial Research Institute (FGI) of the National Land Survey of Finland are planning to establish a Cal/Val site for high-resolution satellite data in Finland. The site, located at Sjökulla, Kirkkonummi, approximately 25 km northwest of Helsinki, is focused on Cal/Val activities for high and very high resolution optical and SAR data with a spatial resolution of 10 m or better. The site is planned to be set up during 2025-2026, in a project scheduled to start in early 2025 subject to a positive funding decision. Recently, Earth Observation capabilities from space have increased significantly due to the growing number of satellite missions launched by both public institutions and private companies. Calibration and validation processes are essential to ensure that satellite data meet quality standards. These processes also help evaluate private-sector EO missions for potential recognition as ESA Third Party Missions (TPM) or Copernicus Contributing Missions (CCM). The site set-up and instrumentation will be made flexible, enabling the testing and use of novel Cal/Val approaches. An example of a new Cal/Val concept to be tested is on-demand Cal/Val using drone observations. In addition, the scalability of the concept is considered: ad hoc Cal/Val missions could be established at other locations in the future, using natural targets and drones, for instance. The foundation of the FinCaS site is an existing FGI calibration field. Since 1994, the site has mainly been used for the vicarious Cal/Val of aerial remote sensing systems. Currently, it consists of five radiometric calibration gravel fields with known spectral characteristics, including BRDF properties, and black and white resolution stripe targets. In the early 2020s, an experimental automatic radiance measurement system operated on the site during a growing season in a project by Terramonitor and FGI.
A larger light-gray gravel field is being built in 2025 to make the site better suited to the calibration of satellite data with a resolution of a few meters. The spectral, BRDF, and geometric site characterization is planned to be measured using drone remote sensing supported by traceable laboratory calibrations. The first instruments to support the Cal/Val activities have been installed. The FinCaS project includes development of the data processing and management system for the measurements. FinCaS will follow the Fiducial Reference Measurements (FRM) concept (see reference). The project is scheduled to start in early 2025 and will be completed in August 2026. In the later stages, the instrumentation will be expanded: in addition to the radiometric instruments on site, equipment for thermal measurements of lakes is being considered, as well as geometric calibration targets. The data processing, management, and distribution system will be continuously developed. Reference: Goryl, P., Fox, N., Donlon, C., Castracane, P. 2023. Fiducial Reference Measurements (FRMs): What Are They? Remote Sens. 15(20), 5017; https://doi.org/10.3390/rs15205017.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Advancing the Geometric and Radiometric Uncertainty Estimation of Balkan-1 Multispectral Data

Authors: Mr. Boyan-Nikola Zafirov
Affiliations: Endurosat
Balkan-1 is the first satellite of the Balkan Constellation, EnduroSat’s optical Earth Observation (EO) mission. At full deployment of 120 satellites, the Balkan Constellation will provide the ability to monitor the Earth’s surface at an unprecedented spatiotemporal scale. The mission is part of the Category 1 Copernicus Contributing Missions and as such strongly considers interoperability with other sensors and scientific use for the desired applications. A major consideration for the provision of complementary observations is the quality and uncertainty associated with the dataset. In the pre-launch phase of Balkan-1, EnduroSat is dedicating effort to the uncertainty budget estimation of the radiometric and geometric calibration, following the GUM approach. The outcomes of the calibration are the targeted measurands: the absolute accuracy of Top-Of-Atmosphere (TOA) reflectance, all unitary contributors to the resolution (e.g., instrument SNR, FPN, MTF, GSD), absolute geolocation accuracy, co-registration with the Sentinel-2 Global Reference Image (GRI), and multi-band co-registration. To define input and output quantities, process chain diagrams have been developed for both radiometric and geometric measurement traceability. The diagrams identify the sources of uncertainty at each step of processing, considering the input parameters; auxiliary, ephemeris, and attitude data; instrument modelling; and sensor configurations. Currently, work is ongoing to characterize uncertainty sources. The principal contributor to the uncertainty of the absolute TOA radiance/reflectance is the absolute accuracy estimation method. The performance target for the mission is set at 5%, to be validated by Simultaneous Nadir Observations (SNO) with Sentinel-2 and RadCalNet.
Other major contributors to radiometric uncertainty are non-corrected nonlinearity, pixel-dependent uncertainty of the SNR, and non-corrected additional artefacts such as crosstalk, straylight, and dark noise, which have a relative impact at very low radiances. Major contributors to geometric uncertainty are the native uncertainty of the instrument line of sight, caused by potential biases and drifts, along with the absolute geolocation accuracy of the reference image and the accuracy of the Digital Elevation Model (DEM) used for orthorectified image products. The Balkan-1 launch is currently expected in January 2025. The in-orbit calibration and validation activities during the commissioning period will serve as a first step towards the estimation of data product uncertainties. The symposium therefore offers an interesting checkpoint to share the first results and the progress made towards uncertainty estimation and the appropriate integration of uncertainty estimates upon the delivery of operational data to users.
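Combining independent contributors into a single uncertainty budget under the GUM follows the law of propagation of uncertainty, which for uncorrelated inputs reduces to a root-sum-square. A minimal sketch, with purely hypothetical sensitivity coefficients and uncertainty values (none of these numbers come from the Balkan-1 budget):

```python
import math

def combined_standard_uncertainty(contributors):
    """GUM-style combination for uncorrelated inputs:
    u_c = sqrt(sum_i (c_i * u_i)^2), where c_i is the sensitivity
    coefficient and u_i the standard uncertainty of contributor i."""
    return math.sqrt(sum((c * u) ** 2 for c, u in contributors))

# Hypothetical radiometric budget: (sensitivity, standard uncertainty in %)
budget = [
    (1.0, 3.0),  # absolute accuracy estimation method (e.g. SNO comparisons)
    (1.0, 1.5),  # non-corrected nonlinearity
    (1.0, 1.0),  # pixel-dependent SNR
    (1.0, 0.8),  # straylight / crosstalk / dark-noise residuals
]
u_c = combined_standard_uncertainty(budget)  # combined standard uncertainty, %
```

Note how the largest contributor dominates the quadrature sum, which is why the absolute accuracy estimation method is singled out as the principal contributor.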
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Validating HotSat-1 Surface Brightness Temperature Using the VIIRS Land Surface Temperature Product

Authors: Joshua Chadney, Jade Constantinou, Daniel Evans, Ross Hawton, Jamie McMillan, James O’Connor, Teresa Steinke, Martin Wooster
Affiliations: SatVu
SatVu strives to provide customers with high-resolution, radiometrically calibrated thermal data. To demonstrate that the data meet this purpose, the Calibration and Validation team at SatVu uses products derived from VIIRS imagers to validate their HotSat-1 nighttime brightness temperature products. The VIIRS instrument’s I4, M12 and M13 bands significantly overlap with HotSat-1’s waveband (3.7 μm to 5 μm), enabling a meaningful comparison despite differences in spatial resolution. The HotSat-1 imager has been radiometrically calibrated empirically post-launch using the onboard heated calibrator and a dataset of flat ocean imagery with associated Sea Surface Temperatures (SSTs). This contribution focuses on validating the HotSat-1 surface brightness temperature resulting from this calibration by comparing it with the VIIRS Land Surface Temperature (LST) product. To establish a direct relationship between VIIRS and HotSat-1 observations, we estimate an average surface emissivity for each scene by comparing VIIRS LST values with VIIRS Top-of-Atmosphere (TOA) radiances from the I4, M12, and M13 wavebands. This provides an emissivity expression for specific surface temperatures, enabling the conversion of VIIRS LST to surface radiance in the mid-wave infrared (MWIR). Next, we convert the calibrated HotSat-1 TOA radiances to surface brightness temperatures using a scaling law specifically designed for wide-waveband instruments, and atmospheric parameters estimated from the MODTRAN atmospheric radiative transfer model with atmospheric profiles obtained from ECMWF ERA-5 reanalysis data. The comparison of ocean and cloud-free scenes reveals strong agreement between surface radiances derived from VIIRS LST and HotSat-1, with the highest accuracy achieved using emissivity from VIIRS I4. In brightness temperature space, the absolute error varies from about 1.3 K at 300 K to 3.4 K at 270 K. 
These results provide strong confidence in our calibration process and the quality of the surface brightness temperature product. Furthermore, this study establishes a foundation for future efforts using the VIIRS products, such as enhancing cloud detection and extending the validation to daytime observations.
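The conversion between radiance and brightness temperature rests on inverting the Planck function. The sketch below shows the simple monochromatic case at an assumed band-centre wavelength; the authors instead use a scaling law designed for wide-waveband instruments, so this is only the underlying principle, not their method:

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance [W sr^-1 m^-3] at a single wavelength."""
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(
        H * C / (wavelength_m * KB * temp_k))

def brightness_temperature(wavelength_m: float, radiance: float) -> float:
    """Invert the Planck function: spectral radiance -> brightness temp [K]."""
    return (H * C / (wavelength_m * KB)) / math.log1p(
        2 * H * C**2 / (wavelength_m**5 * radiance))

MWIR = 4.35e-6  # assumed band-centre wavelength [m] within the 3.7-5 um band
L = planck_radiance(MWIR, 300.0)
assert abs(brightness_temperature(MWIR, 300.0 and L) - 300.0) < 1e-6
```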
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F2)

Session: A.02.04 Advances in Monitoring and Management of Forest Ecosystems - PART 6

About 4 billion hectares, which is 31 percent of the global total land surface, is covered by forests according to the latest FAO Forest Resource Assessment FRA 2020. Forests are a valuable and irreplaceable ecosystem, which play an important role from an ecological, social, and economic perspective. Monitoring of forests is essential for a better understanding to protect and manage sustainably this valuable ecosystem.

Information needs vary and include forest area, forest types, forest structure, disturbances, health, and biomass. An unprecedented wealth of observations by optical and microwave sensors, active and passive, from low to high resolution allow new ways to monitor forests. Emphasis will be put on advances in detecting forest disturbances, forest health monitoring, species identification, support to sustainable forest management, estimating forest biomass and forest carbon accounting. This session will showcase some of the more recent key achievements including methods/algorithms, science and applications.

Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F2)

Presentation: FORDEAD 2.0: Monitoring forest diseases with Sentinel-2 time series using cloud-based solutions

#stac

Authors: Jean-Baptiste Feret, Dr Florian de Boissieu, Remi Cresson, Elodie Fernandez, Kenji Ose
Affiliations: TETIS, INRAE, AgroParisTech, CIRAD, CNRS, Université Montpellier, European Commission, Joint Research Centre, Ispra, Italy
Climate change is leading to an increase in severe and sustained droughts, which in turn increases the vulnerability of forest ecosystems to pests, particularly in regions that usually experience high water availability. As a consequence, bark beetle outbreaks have occurred at unprecedented levels over the past decade in Western Europe, resulting in high spruce tree mortality. Forest dieback caused by bark beetle infestations poses significant challenges for monitoring and management. To address this issue, there is an urgent need for operational monitoring tools allowing large-scale detection of bark beetle outbreaks, in order to better understand outbreak dynamics, quantify the surfaces and volumes impacted, and help forestry stakeholders in decision making. Ideally, such tools would incorporate early warning systems. The complexity of bark beetle dynamics, coupled with the spatial and temporal variability of forest dieback, necessitates advanced monitoring solutions that leverage remote sensing technologies. Remotely sensed detection of bark beetle infestation relies on detecting the symptoms expressed by trees in response to attack. Infested trees pass through different stages, starting with the early ‘green-attack stage’, mainly characterized from the ground by visual identification of boring holes in the bark, with no change in the color of the foliage. The following ‘yellow-attack stage’ and ‘red-attack stage’ present changes in foliage color induced by changes in pigment content. The ultimate ‘grey-attack stage’ appears with foliage loss. Early detection is crucial for pest management, but the remotely sensed identification of the green-attack stage is challenging due to the lack of visible symptoms. However, moderate changes in foliage water content occur during this stage, which can be detected in the near-infrared and shortwave-infrared domains.
Satellite imagery acquired by optical multispectral missions such as Sentinel-2 may then provide relevant information to identify such tenuous changes early. FORDEAD (FORest Dieback And Degradation) is a method designed to identify forest anomalies using satellite image time series. Developed in response to the bark beetle outbreaks that have occurred in France since 2018, FORDEAD aimed above all to provide an operational monitoring system. In this context, FORDEAD analyses the seasonality of a spectral index sensitive to vegetation water content: the Continuum Removal in the ShortWave InfraRed (CR-SWIR). Since 2020, FORDEAD has been applied to produce quarterly maps of bark beetle outbreaks covering about 25% of the French mainland territory. The results are promising regarding early detection capabilities: the success rate reaches 70% in detecting the ‘green-attack stage’, with a low false positive rate. The method has been implemented as a Python package in order to ease its transfer to the national forestry services and forest management services, and more broadly to all potentially interested users. Beyond the method, storage and computing resources are crucial to access and process satellite images, in particular high-resolution time series such as Sentinel-2. Emerging cloud-based solutions provide scalable computing resources that enable near-real-time analysis, integrating diverse datasets. The combination of appropriate storage solutions, cloud-optimized data formats, and processing standards is now reaching maturity for large-scale geospatial processing in a free and open-source environment. The synergy between remote sensing and cloud-based platforms presents opportunities for forest monitoring, enabling improved detection, prediction, and response strategies. A new version of FORDEAD has been developed and optimized to take advantage of cloud-based solutions, including seamless access to SpatioTemporal Asset Catalogs (STAC).
This enhanced version provides a versatile toolkit dedicated to multiple uses, from pixel/plot-scale analysis for calibration and validation with field data to large-scale monitoring over national extents. FORDEAD has been integrated into the cloud infrastructure of the THEIA-Land data center. This French service aims to produce and distribute higher-level remote sensing data products, providing technical support and access to dedicated methods for both remote sensing experts and the user community. FORDEAD is also compatible with the European infrastructures currently being created, such as the Copernicus Data Space Ecosystem (CDSE). This contribution aims to support forest management practices and long-term ecological studies, to help understand and predict the spatial and temporal dynamics of bark beetle outbreaks, and to mitigate their impact. We strongly argue for a more systematic integration of free and open standards in remote sensing data analysis frameworks, including improved geospatial and in situ data interoperability and modular software design for easier integration of alternative methods at the multiple stages of an algorithm. By promoting these standards, we foster a more collaborative approach, ensure accessibility for a diverse range of stakeholders, and thus meet the needs of the forest sector and public policies.
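A continuum-removal index of this kind can be sketched as follows, assuming a linear continuum drawn between the B8A and B12 band centres and evaluated at B11 (the exact FORDEAD formulation may differ, and the reflectance values used here are hypothetical):

```python
import numpy as np

# Approximate Sentinel-2 band-centre wavelengths [nm]
WL_B8A, WL_B11, WL_B12 = 865.0, 1610.0, 2190.0

def cr_swir(b8a, b11, b12):
    """Continuum removal at B11: observed B11 reflectance divided by the
    continuum interpolated linearly between B8A and B12 at B11's wavelength.
    Sketch of the index family described for FORDEAD; formulation assumed."""
    frac = (WL_B11 - WL_B8A) / (WL_B12 - WL_B8A)
    continuum = np.asarray(b8a) + frac * (np.asarray(b12) - np.asarray(b8a))
    return np.asarray(b11) / continuum

# Hypothetical reflectances: water absorption at 1.6 um keeps B11 below the
# continuum for a well-hydrated canopy; water loss raises the ratio.
healthy = cr_swir(b8a=0.40, b11=0.22, b12=0.12)
stressed = cr_swir(b8a=0.35, b11=0.28, b12=0.18)
```

FORDEAD then models the seasonality of this index per pixel and flags anomalies when observations depart from the fitted seasonal baseline.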
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F2)

Presentation: Operational Applicability of the Newly Developed Radar-based Tree Cover Disturbance Monitoring Tool (TCDM-radar)

Authors: Andreas Langner, Christophe Sannier, Fabian Enßle, Dr. Sharon Gomez
Affiliations: GAF AG
The monitoring of deforestation and forest degradation is crucial in the context of the Paris Agreement’s policy segment on Reducing Emissions from Deforestation and Forest Degradation (REDD+) and for the new EU Deforestation-free Supply Chain Regulation (EUDR). Various optical- and radar-based remote sensing approaches are available for forest monitoring, several of them operationalized at larger (pan-tropical or global) scales, such as the tree cover loss data of Global Forest Watch (GFW, Hansen et al., 2013) and the integrated deforestation alerts (Hansen et al., 2016; Pickens et al., 2020; Reiche et al., 2021), all hosted on globalforestwatch.org. Due to their cloud-penetrating capabilities, radar-based systems can be superior to optical-based monitoring approaches, in particular within tropical regions that experience frequent cloud coverage. The most prominent example of such a radar-based system is the RADD alerts, which are part of the above-mentioned integrated deforestation alerts. Depending on the geographic region, these RADD data are available from January 2019 (Africa) or January 2020 (South America, Central America, and Southeast Asia) to the present, but are restricted to primary humid tropical forest areas. In this study, it will be shown that the newly developed TCDM-radar (Tree Cover Disturbance Monitoring) approach, even though it works on the same database (Sentinel-1 backscatter data) as the RADD alerts, is in its newest version able to cover a longer analysis period (January 2018 to the present) and is not restricted to humid tropical forests, performing well within dry tropical as well as temperate forest ecosystems.
The reason for this is the different monitoring methodology of TCDM-radar, which is based on an unsupervised moving-window two-level change detection approach that needs only one year of reference data and can easily deal with seasonal effects, as compared to the harmonic model fitting of the RADD alerts that requires three years of training data. The original concept of the TCDM-radar algorithm was published in Langner and Carboni (2021) but has been further developed and improved since then. The major processing steps of this latest version are described hereafter: Input to the TCDM-radar approach are the S1 Ground Range Detected (GRD) scenes in vertical-vertical (VV) and vertical-horizontal (VH) polarization. The scenes are processed using the Sentinel-1 Toolbox to generate calibrated and ortho-corrected products. The basic underlying concept of TCDM-radar consists of two nested iterative delta analysis steps using a sliding time window to cover the whole analysis period from 2018 to the present. The temporal interval between the analysis and reference windows allows removing seasonal effects. Descending and ascending orbits are handled separately and combined afterwards. These nested delta analysis results are then processed to derive time series data per pixel location over the whole analysis period. A temporal filtering step leads to confirmations or rejections of the derived disturbance indications. Subsequently, a disturbance-density-related filtering step analyses the spatial and temporal proximity of disturbance detections, allowing the identification and confirmation of potential disturbance values situated in close vicinity. Eventually, this new TCDM-radar approach allows detecting detailed disturbance events within the forest canopy of tropical evergreen and deciduous as well as temperate landscapes at high spatial (10 m) and temporal (bi-weekly) resolution, and thanks to its implementation in Google Earth Engine (GEE) it is easily scalable.
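As a toy illustration of the sliding-window delta analysis with temporal filtering described above, the following sketch flags a pixel when its backscatter drops persistently relative to a reference window roughly one year earlier. All window lengths, thresholds and the confirmation rule are invented for this example; this is not the actual GEE implementation:

```python
import numpy as np

def delta_disturbance(ts_db, win=6, gap=12, thresh_db=-3.0, confirm=3):
    """Toy sliding-window delta analysis for one pixel's Sentinel-1
    backscatter time series (dB). A time step is flagged when the mean of
    the analysis window drops below the mean of a reference window `gap`
    steps earlier by more than `thresh_db`; flags are then confirmed only
    where `confirm` consecutive detections occur (temporal filtering)."""
    flags = np.zeros(len(ts_db), dtype=bool)
    for i in range(gap + win, len(ts_db) + 1):
        ref = np.nanmean(ts_db[i - gap - win:i - gap])  # reference window
        ana = np.nanmean(ts_db[i - win:i])              # analysis window
        flags[i - 1] = (ana - ref) < thresh_db
    kernel = np.ones(confirm, dtype=int)
    confirmed = np.convolve(flags.astype(int), kernel, "same") >= confirm
    return confirmed
```

An undisturbed series yields no confirmed detections, while a persistent backscatter drop is confirmed once enough consecutive flags accumulate; because the reference window also slides, the flag naturally expires about one year after the event, mirroring the need for the density-based spatial filtering the abstract describes.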
A validation of the new TCDM-radar results is currently ongoing and will also be presented in the paper; validation results of an earlier TCDM-radar version are available in Langner et al. (2020).
References:
Hansen, M. C., P. V. Potapov, R. Moore, M. Hancher, S. A. Turubanova, A. Tyukavina, D. Thau, S. V. Stehman, S. J. Goetz, T. R. Loveland, A. Kommareddy, A. Egorov, L. Chini, C. O. Justice, and J. R. G. Townshend. 2013. High-Resolution Global Maps of 21st-Century Forest Cover Change. Science 342 (15 November): 850–53.
Hansen, M.C., A. Krylov, A. Tyukavina, P.V. Potapov, S. Turubanova, B. Zutta, S. Ifo, B. Margono, F. Stolle, and R. Moore. 2016. Humid tropical forest disturbance alerts using Landsat data. Environmental Research Letters, 11 (3). (https://dx.doi.org/10.1088/1748-9326/11/3/034008)
Langner, A., Desclée, B., Carboni, S., Vancutsem, C., Stibig, H.-J., Achard, F., Theilade, I. 2020. Forest encroachments and logging activities within the Prey Lang Wildlife Sanctuary, Cambodia. (https://forobs.jrc.ec.europa.eu/iforce/docs/JRC-Technical-Report_Assessment-Cambodia-Prey-Lang-Sanctuary.pdf)
Langner, A., and Carboni, S. 2021. Forest Degradation Derived by a Newly Developed Sentinel-1 Change Detection Approach. IEEE IGARSS, Brussels, Belgium. DOI: 10.1109/IGARSS47720.2021.9554574
Pickens, A.H., Hansen, M.C., Adusei, B., and Potapov, P. 2020. Sentinel-2 Forest Loss Alert. Global Land Analysis and Discovery (GLAD), University of Maryland.
Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N.E., Braun, C., Vollrath, A., Weisse, M.J., Stolle, F., Pickens, A., Donchyts, G., Clinton, N., Gorelick, N., Herold, M. 2021. Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters. (https://doi.org/10.1088/1748-9326/abd0a8)
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F2)

Presentation: deadtrees.earth: Advancing Global Tree Mortality Mapping by Integrating Crowd-Sourced Aerial Images, Earth Observation Data and AI

Authors: Teja Kattenborn, Clemens Mosig, Janusch Vajna-Jehle, Yan Cheng, Henrik Hartmann, David Montero, Samuli Juntilla, Stéphanie Horion, Mirela Beloiu Schwenke, Miguel Mahecha
Affiliations: Chair Of Sensor-based Geoinformatics (geosense), Institute for Earth System Science and Remote Sensing, Leipzig University, Department of Geosciences and Natural Resource Management, University of Copenhagen, Institute for Forest Protection, Julius Kühn Institute (JKI) - Federal Research Centre for Cultivated Plants, School of Forest Sciences, University of Eastern Finland, Department of Environmental Systems Sciences, ETH Zurich
Tree mortality rates are rising across many regions of the world. Yet the underlying dynamics remain poorly understood due to the complex interplay of abiotic and biotic factors, including global warming, climate extremes, pests, pathogens, and other environmental stressors. Ground-based observations on tree mortality, such as national forest inventories, are often sparse, inconsistent, and lack spatial precision. Earth observations, combined with machine learning, offer a promising pathway for mapping standing dead trees and potentially uncovering the driving forces behind this phenomenon. However, the development of a unified global product for tracking tree mortality patterns is constrained by the lack of comprehensive, georeferenced training data spanning diverse biomes and forest types. Aerial imagery from drones or airplanes, paired with computer vision methods, provides a powerful tool for high-precision, efficient mapping of standing deadwood on local scales. Data from these local efforts offer valuable training material to develop models based on satellite data, enabling continuous spatial and temporal inference of forest cover and standing deadwood on a global scale. To harness this potential and advance global understanding of tree mortality patterns, we have developed a dynamic database (https://deadtrees.earth). With contributions from over 150 participants, the database already contains more than 2,000 orthoimages covering all of the Earth's continents and biomes. This presentation will demonstrate the resulting core functions and products from deadtrees.earth, including 1) uploading and downloading aerial imagery with optional labels of standing deadwood, 2) automatic segmentations of alive and dead trees in uploaded imagery using computer vision models, and 3) large-scale products on forest cover and tree mortality as derived from ESA's Sentinel 1 and 2 missions.
Moreover, we will discuss future directions in which such products, through the integration of Earth observation, machine learning, and ground-based data, can fill critical knowledge gaps in global tree mortality dynamics and become an accessible, valuable resource for researchers and stakeholders.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F2)

Presentation: A Multi-Dimensional View of the Global Forest Drought and Heat Stress

Authors: Dr. Matteo Piccardo, Dr. Matteo Zampieri, Dr. Luca Caporaso, Dr. Caspar Roebroek, Dr. Marco Girardello, Dr. Guido Ceccherini, Dr. Mirco Migliavacca, Alessandro Cescatti
Affiliations: Joint Research Centre Consultant (JRC), European Commission, Joint Research Centre (JRC)
Heat and drought stress is a growing threat to forest ecosystems, significantly affecting growth, mortality, and ecosystem services. Water-related stresses are particularly important in the context of climate change, as rising temperatures and changing precipitation patterns exacerbate drought conditions. Effective forest management demands advanced, responsive monitoring tools capable of detecting ongoing dynamics in a timely and effective manner. This study leverages Earth Observation (EO) remote sensing data to develop multidimensional metrics that track heat and drought stress in global forests. Our framework involves a multidimensional analysis of time series data across various climate and vegetation indices, each capturing distinct aspects of heat and drought stress. We evaluate forest stress through four primary stages: (1) environmental water deficit, assessed through time series data on soil moisture, Vapor Pressure Deficit (VPD) and changes in Land Surface Temperature (LST); (2) anomalies in plant response to environmental stress, monitored via shifts in evapotranspiration and photosynthetic activity. These indicators capture physiological responses such as stomatal closure and reduced water usage, leading to an increase in sensible heat flux relative to latent heat flux. This stage also includes variations in Solar Induced Fluorescence (SIF) as a measure of changes in photosynthetic efficiency; (3) vegetation water content deficit, analysed through spectral indices like the Normalized Difference Water Index (NDWI), which detect reductions in vegetation water content; (4) plant functional impairment, assessed through spectral signals from the photosynthetic apparatus, such as chlorophyll content, measured using indices like the Normalized Difference Vegetation Index (NDVI). This stage reflects structural impacts on overall plant health and vitality.
By integrating these indices, which collectively represent the progression of stress stages, our study explores the temporal dynamics of heat and drought stress in forests. Analysis of the combined forest stress index over the past two decades reveals the global spatio-temporal trends, identifying regions at high risk where adaptation measures should be prioritized. Insights from this research can inform sustainable forest management strategies, fostering resilience against climate change impacts and supporting ecosystem health.
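One simple way such stage indicators can be combined, sketched here with a plain unweighted mean of climatological anomalies (the study's actual index construction may differ, and the inputs here are hypothetical), is to standardize each index against its long-term monthly climatology before averaging:

```python
import numpy as np

def monthly_anomaly(x, month):
    """Standardized anomaly of a monthly series relative to the long-term
    mean and standard deviation of the same calendar month (0-11)."""
    z = np.zeros_like(x, dtype=float)
    for m in range(12):
        sel = month == m
        mu, sd = x[sel].mean(), x[sel].std()
        if sd > 0:
            z[sel] = (x[sel] - mu) / sd
    return z

def combined_stress(indices, month):
    """Hypothetical combined stress index: the mean standardized anomaly
    across stage indicators (e.g. soil moisture, VPD, LST, ET, SIF, NDWI,
    NDVI). Each input is assumed pre-oriented by the caller so that a
    positive anomaly means more stress (e.g. soil moisture negated)."""
    z = np.stack([monthly_anomaly(v, month) for v in indices.values()])
    return z.mean(axis=0)
```

Standardizing per calendar month removes the seasonal cycle, so the combined value reflects departure from the climatological norm rather than phenology; weighting the stages differently would be a straightforward extension.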
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F2)

Presentation: High resolution reconstructions of Pan-European woody cover timeseries from 1990-2024

Authors: Shawn Schneidereit, Dr. Matthias Baumann, Dr. Xiang Liu, Ruben Butzer, Prof. Dr. Tobias Kümmerle
Affiliations: Humboldt-Universität zu Berlin
Reconstructing forest cover has allowed us to better understand the land use history and disturbance regimes of woody vegetation across Europe. The advent of cloud-based processing workflows has unlocked the potential to analyze petabytes of data across vast spatiotemporal scales. While high-quality forest datasets are now available across the EU, they have limited suitability for capturing how woody cover evolves outside traditional forest boundaries. This study introduces a novel pan-European dataset to quantify woody cover dynamics, leveraging the full Landsat archive from 1990 to 2024. Unlike conventional forest-focused datasets, our approach uses woody cover as a comprehensive metric to capture gradual vegetation expansion, forest fragmentation, and degradation, processes often overlooked by traditional forest definitions, which inadequately capture fine-grained dynamics and often misalign with the spatial and technical specifications of remote sensing data. This work establishes a foundation for enhanced land cover monitoring and analysis by attempting to resolve the limitations, biases, and inconsistencies of existing forest definitions. For this analysis, we used data from the Landsat archive (1990–2024) to predict fractional woody cover across Europe. We employed innovative methodological frameworks combining cloud-based solutions with local model training. Annual growing-season spectral-temporal metrics (STMs) were calculated to condense all observations within a growing season into a single aggregated value per pixel. To train a regression-based unmixing model for fractional woody cover prediction, we utilized an NDVI-SWIR ratio mixing space to semi-automatically extract spectrally pure woody and non-woody endmembers. We used a synthetic mixing approach to combine spectra of these endmembers, yielding a representative spectral library with continuous woody cover fractions.
The model was deployed on Google Earth Engine (GEE) and applied to STM time series preprocessed with LandTrendr, ensuring consistent and reliable annual predictions. We validated our time series against a multitemporal dataset of validation points derived from high-resolution aerial imagery. Additionally, we compared our predictions with existing fractional forest cover products to evaluate performance and identify areas of agreement and divergence. Our results showed consistent alignment between predicted woody cover values and validation datasets. Underprediction occurred in regions with high woody cover, particularly in arid woodlands with open canopies. These patterns were not unique to our dataset, as similar trends were observed in comparisons with other fractional cover products and validation data. Overall, our findings demonstrate agreement with validation data and existing datasets, highlighting the potential of our approach for downstream analyses while providing annual data at extensive spatial scales. This work addresses key gaps in land cover monitoring by producing high-resolution, annual-scale woody cover maps for Europe. These data are congruent with existing forest cover datasets while complementing them with previously unavailable long time series of annual maps. Beyond this, they expand the scope of land cover analysis by extending the mapping of woody cover to non-forest areas. Our harmonized, remote-sensing-based approach captures subtle and previously overlooked woody cover dynamics, providing essential insights for biodiversity conservation, sustainable forest management, and climate change mitigation across Europe. Future work will focus on analyzing and synthesizing high-level trends in woody cover change. This study sets a foundation for more accurate, inclusive, and actionable insights into woody cover dynamics, advancing large-scale land cover monitoring and analysis.
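The synthetic-mixing idea described above can be illustrated in a few lines: randomly mix pure woody and non-woody endmember spectra to build a labelled library, then fit a linear regression from spectra to woody-cover fraction. This is a simplified stand-in for the actual regression-based unmixing model, and the endmember reflectances used below are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_library(woody, nonwoody, n=500, noise=0.01):
    """Linearly mix two endmember spectra at random fractions, adding a
    little noise, to obtain training spectra X and target fractions f."""
    f = rng.uniform(0.0, 1.0, n)
    X = f[:, None] * woody + (1.0 - f)[:, None] * nonwoody
    return X + rng.normal(0.0, noise, X.shape), f

def fit_unmixing(X, f):
    """Ordinary least squares from spectra (plus intercept) to fraction."""
    A = np.c_[X, np.ones(len(X))]
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    return coef

def predict_fraction(coef, X):
    """Apply the fitted model and clip predictions to the valid [0, 1] range."""
    return np.clip(np.c_[X, np.ones(len(X))] @ coef, 0.0, 1.0)
```

Because the library is generated rather than labelled by hand, it covers the full 0-100% fraction range by construction, which is the key advantage of synthetic mixing over collecting mixed-pixel training samples directly.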
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F2)

Presentation: Overcoming Challenges in Scaling Up Tree Cover Density Across the Pan-Tropical Extent

Authors: Andreea Julea, Anna Grabatin-Homolka, Andreas Langner, Christian Smit, Biman Biswas, Bruno Menini Matosak, Matthias Weigand, Stephanie Wegscheider, Inés Ruiz, Christophe Sannier, André Stumpf, Baudouin Desclée, René Colditz
Affiliations: GAF AG, Joint Research Centre, European Commission
Covering over 4 billion hectares, more than 30% of Earth's land surface, forests are among the most important habitats and fulfil multiple functions, notably acting as global carbon sinks, a major part of the solution to the ongoing challenge of climate change mitigation. Several international conventions and policy frameworks address forest protection and management, such as the United Nations Framework Convention on Climate Change (UNFCCC), the UN Convention on Biological Diversity (UNCBD) and the UN Sustainable Development Goals (SDGs) 13 and 15. The pan-tropical Tree Cover Density (TCD) layer, which offers information on the projective tree cover as a percentage per pixel, is a key forest product of the Global Land Cover and Tropical Forest Mapping and Monitoring (LCFM) service, an essential component of the Copernicus Land Monitoring Service (CLMS). It will support the implementation of international policy instruments such as REDD+ by supporting the involved countries in forest monitoring and reporting and in the analysis of degradation and deforestation. Whereas TCD has already been produced at pan-European level, it is the first time a TCD product will be provided for the whole tropical area, covering about 49.5 million square kilometres, with an expected minimum overall accuracy of 85%. The pan-tropical belt, home to diverse forest ecosystems, presents unique challenges for large-scale TCD production. These include persistent cloud cover, ecosystem heterogeneity, and limited availability of harmonized training data, all of which hinder the operationalization of consistent, high-resolution monitoring. The LCFM service tackles these challenges through innovative methodologies and scalable workflows.
This paper highlights the lessons learned and advancements achieved regarding the pan-tropical TCD baseline product at a spatial resolution of 10 m for the year 2020, which also constitutes a reference layer for the tree cover presence change products for the years 2021-2026 that will follow. It focuses on the following aspects:
- Improved input data availability and quality: The main input datasets for TCD modelling are Sentinel-2 time series. Persistent cloud cover in tropical regions hampers the acquisition of clear optical satellite imagery, leading to data gaps. To overcome this, Sentinel-1 radar data were also included in the analysis. Annual statistics summarizing the temporal profiles of the Sentinel-1, Sentinel-2 and vegetation index time series were analysed to identify those best suited for TCD modelling. Despite the advantage of Sentinel-1 in penetrating clouds, the radar data showed too low a correlation for precise TCD estimation. Based on these findings, the Sentinel-2 cloud masking process was further enhanced to mitigate the problem of frequent cloud coverage faced by optical systems.
- Enhanced modelling of ecosystem heterogeneity: The vast ecological diversity of tropical forests requires localized calibration of machine learning models. LCFM employs region-specific approaches to ensure accurate TCD estimates across varying canopy structures and species compositions. To ensure wall-to-wall consistency of the final TCD product across the pan-tropical belt, regional models are trained with an overlap in the training data (i.e. each regional model is calibrated including part of the training data from neighbouring regions).
- Agile reference data suitability: The collection of precise and spatially well-distributed reference data across the TCD range is key to accounting for the various forest ecosystems. The developed TCD sampling process aims to improve the speed and quality of training data collection and involves classic image segmentation and machine learning, able to operate directly on a Web Map Service VHR data source. The 2020 TCD sampling design covers about 400 production units/regions across the pan-tropical extent, considering eight vegetation/terrain strata and the availability of contemporary VHR data. Within each production unit, the allocation of sample locations ensures balanced spatial distribution, diversity of TCD values and representation of the different strata.
- Computational scalability: Processing high-resolution data at the scale of the tropical belt is computationally demanding. LCFM utilizes modular, cloud-based processing chains, ensuring timely delivery of annual TCD products at 10 m spatial resolution.
This contribution emphasizes the importance of advanced algorithm development and scalable workflows in overcoming these challenges. Production-unit-specific local models allow deriving TCD results at higher precision, but raise the challenge of merging them seamlessly during roll-out for the pan-tropical production. Methodologies to overcome artefacts of acquisition geometry, such as swath borders, through improved input features and improvements to the modelling approach are still under investigation. However, preliminary results over 10% of the pan-tropical coverage already demonstrate the production of consistent, high-resolution TCD information, fostering improved forest management and compliance with global conservation goals. By addressing these challenges, the LCFM project advances tropical forest monitoring capabilities, ensuring alignment with international sustainability initiatives and providing critical information for the analysis and monitoring of deforestation and forest degradation.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall K1)

Session: D.01.07 ESA Digital Twin Earth Mid-term Milestone - PART 2

The ESA Digital Twin Earth (DTE) program, part of the Earth Watch Programme, is an initiative aimed at advancing the use of Earth Observation (EO) data and technologies to create digital twins for monitoring and predicting environmental changes. The program collaborates closely with the European Commission’s Destination Earth (DestinE) initiative, complementing DestinE’s efforts to establish a comprehensive digital twin ecosystem in Europe.

The program focuses on developing pre-operational digital twins to demonstrate their value for applications such as climate monitoring, urban planning, and environmental management. By integrating Earth Explorer Missions data into the DestinE Platform, it ensures high-quality Earth Observation (EO) data is available for digital twin development. Additionally, it establishes a framework to support the creation and operational use of digital twins and promotes interoperable services accessible via ESA DTE, DestinE, or Member State initiatives. Through these efforts, ESA DTE strengthens DestinE and contributes to advancing digital twin technologies at national and European levels.

This session is the ESA mid-term milestone, organised as an open discussion on the phase-in achievements and initial lessons learnt. The objective is to assess the adequacy of the provided environment for the development of DTCs, to review thematic priorities, and to evaluate the efficiency of the integration process with DestinE and potential MS initiatives. ESA will prepare a set of recommendations and lessons learnt to be discussed during the workshop. The process shall serve as the basis for the next phase procurements and for establishing new priorities or continuity in the DTC thematic areas. ESA MS representatives, DG-CNECT, ECMWF, and EUMETSAT will be invited to participate actively in the milestone.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall E1)

Session: A.09.12 Identifying the next research priorities for the ESA Polar Science Cluster 2026+

The Polar regions, and Earth’s Cryosphere more generally, are experiencing rapid and potentially irreversible change. Quantifying this change and improving process understanding is vital for identifying regional and global impacts, and accurate future projections.

Satellite Earth observation offers a unique capability to observe these regions and advances in satellite sensor technology and processing algorithms are allowing an increased range of parameters to be measured at higher temporal and spatial resolution.

This session will bring together leaders in Polar research and the ESA Polar Science community to discuss future research priorities. The session will address:
- Identifying key motivating science questions for ESA Polar Science Cluster activities 2026+.
- Opportunities offered by developments in EO capabilities, new sensors (Copernicus Expansion Missions) and algorithms.
- Innovative methods, technologies and approaches (AI, models, etc).
- Community collaboration and synergistic activities, such as combined use of EO data, modelling and in-situ observations (IPY, Antarctica Insync, EC-ESA Earth System Science Initiative).
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall G2)

Session: A.02.01 Land Surface Temperature and Emissivity Data for Research and Applications - PART 2

This session invites presentations on the use of land surface temperature and emissivity data for research and applications. In particular we welcome studies using high spatial resolution multispectral thermal infrared data, such as those from the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) and airborne instruments, as well as from the evolving New Space sector. The session will also provide an opportunity to discuss the preparation of upcoming missions such as the ESA LSTM mission, the CNES-ISRO TRISHNA mission and the ASI-NASA SBG mission.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall G2)

Presentation: Factors Influencing Modelling Directional Effects in Thermal Remote Sensing of Crop Fields

Authors: Louis Snyders, Joris Blommaert
Affiliations: VITO
Land surface temperatures (LST) retrieved by remote sensing depend not only on the time of acquisition but also on the positions of the thermal sensor and the Sun. An accurate estimation of LST and LST-derived products, e.g. evapotranspiration, requires characterizing and correcting for these directional effects, which can bias LST retrievals by several kelvin for upcoming high spatial resolution thermal infrared (TIR) satellite missions, such as LSTM, TRISHNA and SBG, which will include measurements under viewing zenith angles of up to 35°. The research results illustrate the applicability of parametric models to correct for directional effects in a range of crop fields. SwathSense, an ESA- and NASA-funded airborne campaign led by King’s College London, aims to identify and characterize the directional effects in remotely sensed LST. The evaluation of a kernel-driven parametric model on the airborne data shows (1) the capability of simple parametric models to correct for directional effects in high-resolution TIR data and (2) the change in model coefficients for different crop types, biophysical parameters, and spatial resolutions. These parametric models offer a trade-off between operational use and accuracy, which allows applying them to large LST datasets. The airborne campaign collected measurements under different viewing geometries by flying two airplanes simultaneously over an agricultural area near Grosseto, Italy in 2023. The Specim AisaOWL and HyTES sensors, mounted on the airplanes, provide multi-angular TIR data with both zenithal and azimuthal variability by following a parallel-track and cross-track strategy. The final dataset contains observations with viewing zenith angles up to 35°, in line with the upcoming TIR satellite missions and suitable for modelling directionality.
The inversion of parametric models, based on the observed temperatures and the corresponding sun-sensor geometry, provides the parametric coefficients that determine the modelled temperatures. To enhance understanding of parametric model behaviour, the analysis evaluates these coefficients for varying spatial resolution and varying biophysical parameters. First, the study upscales the original 2-metre resolution data in steps up to one hundred metres to assess the variability of directional parametric models across spatial resolutions. Second, the research assesses the change in model coefficient values with field variables such as crop type and biophysical parameters, including leaf area index (LAI), normalized difference vegetation index (NDVI), and fraction of green vegetation cover (fCOVER), retrieved from VITO’s Terrascope library. The research demonstrates the capability of parametric models to correct for these effects across different crop fields. Additionally, the campaign data allow a closer examination of directional parametric models, addressing the limited research on their behaviour under varying circumstances. This analysis provides insight into the potential of combining observations from multiple high-resolution thermal satellite platforms, such as LSTM, SBG, and TRISHNA, to correct for directional effects. Furthermore, the correlation between biophysical parameters and parametric model coefficients reveals the potential of using optical data from Sentinel-2 to estimate the coefficients, which is crucial when multi-angular thermal data are lacking.
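A minimal numerical sketch of such a kernel-driven inversion might look as follows. The kernel shapes and coefficient values are invented for illustration; the campaign's actual model and fitted coefficients will differ:

```python
import numpy as np

def kernels(vza_deg, raa_deg):
    """Design matrix of illustrative kernels: an isotropic term, a zenithal
    darkening term, and an azimuthally asymmetric (hot-spot-like) term."""
    vza, raa = np.radians(vza_deg), np.radians(raa_deg)
    return np.c_[np.ones(len(vza)),
                 1.0 - np.cos(vza),
                 np.cos(raa) * np.sin(vza)]

def invert_and_normalize(vza_deg, raa_deg, lst):
    """Least-squares fit of the kernel coefficients from multi-angular LST
    observations, then removal of the directional terms to obtain a
    nadir-normalized LST (only the isotropic component remains)."""
    K = kernels(vza_deg, raa_deg)
    coef, *_ = np.linalg.lstsq(K, lst, rcond=None)
    return coef, lst - K[:, 1:] @ coef[1:]
```

Because the model is linear in its coefficients, inversion is a single least-squares solve per pixel (or per field), which is what makes kernel-driven parametric corrections cheap enough for large multi-angular datasets.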
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall G2)

Presentation: Benchmarking the TRISHNA L3 all weather daily evapotranspiration algorithms over contrasted land use and climates

Authors: Gilles Boulet, Albert Olioso, Emmanuelle Sarrazin, Olivier Merlin, Vincent Rivalland, Dr Kanishka Mallick, Prof. Sekhar Muddu, Nesrine Farhani
Affiliations: Centre d’Etudes Spatiales de la Biosphère, Univ Toulouse 3 Paul Sabatier, Univ Toulouse, CNES/IRD/CNRS/INRAE, Indo-French Cell for Water Science, Indian Institute of Science, Unité de Recherche écologie des Forêts Méditerranéennes, INRAE, Unit ENVISION, Luxembourg Institute of Science and Technology, HydroSciences Montpellier, Univ Montpellier, CNRS, IRD
Most hydrological, agronomical or ecological applications of an evapotranspiration or water stress product require an estimate for every day. However, with the projected TRISHNA revisit frequency of 2-3 days combined with cloud interference, one needs to interpolate between two cloud-free acquisitions in order to build continuous daily evapotranspiration and water stress products. Several methods have been proposed in the literature for this purpose, from the simplest (based on easily available meteorological data) to the most complex (based on assimilation of data from multiple sensors into distributed hydrological models). We present and test the various options for an operational algorithm over several crops in France, Morocco and India. These solutions account for the water status of the surface with increasing complexity, through water budget information (from a simple Antecedent Precipitation Index to a water budget model forced with optical visible/near-infrared remote sensing data), and through alternative sensors that can be used for cloud-free days (e.g. disaggregated daily products from Sentinel-3 or MODIS such as SEN-ET) or cloudy conditions (e.g. Sentinel-1 data).
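The simplest class of interpolation methods mentioned above can be sketched in a few lines: assume an evaporative-fraction-like ratio varies slowly, interpolate it linearly between two clear-sky acquisitions, and rescale by the daily radiation. This is a hypothetical minimal example; the operational candidates tested in the study are more elaborate:

```python
import numpy as np

def gapfill_daily_et(days, obs_days, ef_obs, rad_daily):
    """Interpolate the evaporative fraction (EF, assumed slowly varying)
    between clear-sky acquisition days, then multiply by the daily
    available energy / solar radiation to recover a daily ET estimate.
    `rad_daily` and the returned ET share the same energy units."""
    ef = np.interp(days, obs_days, ef_obs)
    return ef * rad_daily
```

This self-preservation assumption on EF is what breaks down after rainfall or irrigation between acquisitions, which is why the abstract's more complex options bring in water-budget information (e.g. an Antecedent Precipitation Index) to adjust the interpolated values.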
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall G2)

Presentation: Daily Evapotranspiration and Water Stress L2 products from the TRISHNA mission.

Authors: Albert Olioso, Gilles Boulet, Dr Kanishka Mallick, Samuel Mwangi, Nesrine Farhani, Tian Hu, Aolin Jia, Emmanuelle Sarrazin, Jordi Etchanchu, Chloé Ollivier, Vincent Rivalland, Eswar Rajasekaran, Marie Weiss, Samuel Buis, Jérôme Demarty, Jean-Louis Roujean, Philippe Gamet
Affiliations: Mediterranean Forest Ecology Lab, INRAE, Indo-French Cell for Water Science, ICWaR, Indian Institute of Science, Centre d'Etudes Spatiales de la BIOsphère (CESBIO), Université de Toulouse, CNES, CNRS, IRD, UT3, INRAE, Department of Environmental Research and Innovation, Luxembourg Institute of Science and Technology, HydroScience Montpellier (HSM), Univ. Montpellier, CNRS, IMT, IRD, Department of Civil Engineering, Indian Institute of Technology Bombay, UMR Environnement Méditerranéen et Modélisation des Agro-Hydrosystèmes (EMMAH), INRAE-Avignon Université
The joint ISRO/CNES TRISHNA mission (Thermal infraRed Imaging Satellite for High-resolution Natural resource Assessment), to be launched in 2026, will provide thermal infrared data with high revisit (3 acquisitions every 8 days at the equator) and high spatial resolution (60 m). Such data will enable unprecedented monitoring of evapotranspiration (ET) and water stress. Evapotranspiration and water stress products will be proposed at level 2 within less than one day after image acquisition. Here, we present the various options for the operational algorithms that will be used for generating ET and water stress products. For evapotranspiration, two main models will be used: (1) EVASPA (EVApotranspiration monitoring from SPAce; Gallego-Elvira et al. 2013, Allies et al. 2020), which provides ET maps by combining several models based on the evaporative fraction formulation of the surface energy balance, following the S-SEBI and triangle approaches (contextual approach). As EVASPA provides an ensemble estimation of ET, an estimation of the uncertainty in the derivation of ET is also provided (Allies et al. 2020, Farhani et al. 2024, Mwangi et al. 2024). (2) STIC (Surface Temperature Initiated Closure; Mallick et al. 2014, Hu et al. 2023), which is based on the integration of the surface temperature into the Penman-Monteith (PM) formulation at the pixel scale (single-pixel approach). The model was recently implemented within the European ECOSTRESS Hub (Hu et al. 2023). For water stress indicators, two main indices are foreseen: the evaporative fraction as provided by EVASPA, and the ratio of daily ET to reference ET (or maximum ET).
Calculation of evapotranspiration requires a large amount of input information, obtained from different sources:
- surface temperature from the TRISHNA thermal channels (using the new DirecTES method);
- albedo and vegetation fraction cover from the VIS/NIR TRISHNA channels;
- clear-sky incident solar radiation (corrected for topography effects) and incident downward thermal radiation at the time of acquisition, derived from TRISHNA channel data (in particular estimates of aerosol and water vapour content);
- air temperature and air vapour pressure from ECMWF analyses;
- daily solar radiation from geostationary satellites (e.g. the new MTG satellite over Europe and Africa), used to upscale instantaneous estimates of ET to daily ET (Delogu et al. 2012, 2021).
At Level 2, daily evapotranspiration will be provided for clear-sky TRISHNA acquisitions (when thermal infrared data are available, depending on satellite orbit and cloud cover). Reconstruction of a continuous series of daily evapotranspiration will be provided as a Level 3 product (Boulet et al. 2023). We will use a simple upscaling method based on the ratio of daily evapotranspiration to daily solar radiation, which has been shown to generally perform well.
References
1. Allies A., J. Demarty, et al., “Evapotranspiration Estimation in the Sahel Using a New Ensemble-Contextual Method,” Remote Sensing, 12, 380, 2020 (doi:10.3390/rs12030380).
2. Delogu E., Boulet G., et al., “Reconstruction of temporal variations of evapotranspiration using instantaneous estimates at the time of satellite overpass,” Hydrology and Earth System Sciences, 16(8), 2995–3010, 2012.
3. Delogu E., Olioso A., et al., “Evaluation of multiple methods for the production of continuous evapotranspiration estimates from TIR remote sensing,” Remote Sensing, 13(6), 1086, 2021.
4. Gallego-Elvira B., Olioso A., et al., “EVASPA (EVApotranspiration Assessment from SPAce) tool: An overview,” Procedia Environmental Sciences, 19, 303–310, 2013 (doi:10.1016/j.proenv.2013.06.035).
5. Farhani N., et al., “Spatially remotely sensed evapotranspiration estimates in Sahel regions using an ensemble contextual model with automated heterogeneity assessment,” submitted, 2024.
6. Hu T., Mallick K., et al., “Evaluating European ECOSTRESS Hub Evapotranspiration Products Across a Range of Soil-Atmospheric Aridity and Biomes Over Europe,” Water Resources Research, 59, e2022WR034132, 2023 (doi:10.1029/2022WR034132).
7. Mallick K., Jarvis A.J., et al., “A Surface Temperature Initiated Closure (STIC) for surface energy balance fluxes,” Remote Sensing of Environment, 141, 243–261, 2014.
8. Mwangi S., Olioso A., et al., “Ensemble Estimation of Evapotranspiration Using EVASPA: A Multi-Data Multi-Method Analysis,” IGARSS, 2024 (doi:10.1109/IGARSS53475.2024.10642831).
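The upscaling principle described above, reconstructing daily ET from the ET / solar radiation ratio observed at overpass time, can be sketched as follows. The function name and the numerical values are illustrative only, not part of the TRISHNA processing chain:

```python
LAMBDA = 2.45e6  # latent heat of vaporization [J/kg]; 1 kg/m2 of water = 1 mm


def upscale_et_to_daily(le_instant, rs_instant, rs_daily_mean):
    """Upscale an instantaneous ET estimate to a daily total.

    le_instant: instantaneous latent heat flux at overpass [W/m2]
    rs_instant: instantaneous incoming solar radiation at overpass [W/m2]
    rs_daily_mean: daily mean incoming solar radiation [W/m2], e.g. from
        geostationary observations
    Assumes the LE / Rs ratio at overpass holds over the whole day
    (the self-preservation hypothesis behind this class of methods).
    Returns daily ET in mm/day.
    """
    if rs_instant <= 0:
        raise ValueError("instantaneous solar radiation must be positive")
    ratio = le_instant / rs_instant          # dimensionless evaporative ratio
    le_daily_mean = ratio * rs_daily_mean    # daily mean latent heat flux [W/m2]
    energy = le_daily_mean * 86400.0         # energy over the day [J/m2]
    return energy / LAMBDA                   # mm/day


# Illustrative values: ratio = 0.4, daily mean LE = 100 W/m2 -> ET ~ 3.53 mm/day
et_daily = upscale_et_to_daily(le_instant=300.0, rs_instant=750.0, rs_daily_mean=250.0)
```

The conversion at the end simply turns a daily mean energy flux into an equivalent water depth via the latent heat of vaporization.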
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall G2)

Presentation: Synthetic Scenes for the Evaluation of Land Surface Temperature Algorithms: Application to the NET-Sense 2023 Campaign.

Authors: José Antonio Sobrino, Drazen Skoković, Rafael Llorens, Diego Salinas, Xiujuan Li, Juan Carlos Jiménez-Muñoz, Mary Landsdale, Prof Martin Wooster, Dirk Schuettemeyer, Simon Hook, Roberto Colombo, Franco Miglietta, Callum Middleton, Dr Mark Grosvenor, Bjorn Eng, Associate Professor Micol Rossini, Lorenzo Genesio, Glynn Hulley
Affiliations: University Of Valencia. Global Change Unit. Image Processing Laboratory, National Centre for Earth Observation, Department of Geography, King’s College London, European Space Agency, NASA Jet Propulsion Laboratory, University of Milano Bicocca, Institute of Bioeconomy, CNR
During this decade, several missions (TRISHNA, SBG, LSTM) are expected to be launched, capable of acquiring images in four or more spectral bands in the thermal infrared (TIR) with high spatial resolutions of approximately 50–60 meters and temporal revisit times of about 2–3 days. This study proposes Split Window (SW) and Temperature Emissivity Separation (TES) algorithms to estimate Land Surface Temperature (LST) for these future thermal missions. The performance of the algorithms has been evaluated using simulated scenes generated from measurements taken during the joint NASA-ESA Airborne Campaign (NET-Sense 2023), which took place in Italy during the summer of 2023 as part of the ESA LSTM mission development activities. To encompass the entire crop growth process, the campaign was divided into two periods: one in May and another at the end of June. For both periods, the Hyperspectral Thermal Emission Spectrometer (HyTES) and the hyperspectral thermal OWL sensor captured simultaneous images along the same flight lines, designed with different viewing angles. Two days were selected as "golden days" for processing. To generate the simulated scenes, radiative transfer models are used to estimate the Top Of Atmosphere (TOA) radiances from the temperature (LST) and emissivity (LSE) data obtained during the campaign, along with atmospheric profiles retrieved from the European Centre for Medium-Range Weather Forecasts (ECMWF) datasets. The proposed radiative transfer model is a simplification of more complex models. It is based on the Radiative Transfer Equation (RTE) and atmospheric parameters derived using the MODTRAN processor on a line-by-line basis, and is capable of reproducing TOA radiances for each simulated wavelength. To create more realistic scenes, random Noise Equivalent delta Temperature (NEdT) is added to the final TOA values.
By using alternative atmospheric reanalysis products to avoid applying the same atmospheric profile as in the simulation, the LST output data are evaluated against airborne LST images as well as the ECOSTRESS LST product. The results obtained provide insights into the performance of the algorithms for the aforementioned future missions.
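For context, split-window algorithms of the kind evaluated here combine the brightness temperatures of two adjacent TIR bands with band emissivities and column water vapour. The sketch below uses the widely published generalized split-window form; the coefficients shown are of the order of those published for Landsat 8 TIRS and serve only as placeholders, since operational coefficients must be re-fitted per sensor band pair from radiative transfer simulations:

```python
def split_window_lst(t_i, t_j, eps_i, eps_j, w, c):
    """Generalized split-window LST [K].

    t_i, t_j: brightness temperatures [K] in two adjacent TIR bands
    eps_i, eps_j: band emissivities
    w: total column water vapour [g/cm2]
    c: tuple of 7 coefficients (c0..c6) fitted on radiative-transfer
       simulations for the specific sensor band pair.
    """
    eps = 0.5 * (eps_i + eps_j)   # mean band emissivity
    d_eps = eps_i - eps_j         # band emissivity difference
    dt = t_i - t_j                # brightness-temperature difference
    c0, c1, c2, c3, c4, c5, c6 = c
    return (t_i + c1 * dt + c2 * dt**2 + c0
            + (c3 + c4 * w) * (1.0 - eps)
            + (c5 + c6 * w) * d_eps)


# Placeholder coefficients for illustration only:
coeffs = (-0.268, 1.378, 0.183, 54.3, -2.238, -129.2, 16.4)
lst = split_window_lst(t_i=300.0, t_j=298.5, eps_i=0.97, eps_j=0.96, w=2.0, c=coeffs)
```

The quadratic term in the band difference and the water-vapour-dependent emissivity terms are what allow a single regression to cover a range of atmospheric conditions.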
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall G2)

Presentation: European ECOSTRESS Hub: An evolving concept of integrating high-resolution thermal infrared remote sensing into terrestrial ecosystem process modelling

Authors: Tian Hu, Kanishka Mallick, Ziyu Lin, Huanyu Zhang, Patrik Hitzelberger, Yoanne Didry, Zoltan Szantoi
Affiliations: Luxembourg Institute of Science and Technology (LIST), European Space Agency
Thermal infrared remote sensing offers a valuable perspective for understanding critical ecosystem variables such as evapotranspiration (ET) and gross photosynthesis (GPP) by mapping land surface temperature (LST) from space. The existing thermal missions, however, fall short of providing the high-spatiotemporal-resolution observations needed for agricultural and ecological applications. To address this gap, several high-resolution thermal missions are under preparation by various space agencies, including CNES-ISRO TRISHNA, NASA SBG, and ESA LSTM. As a precursor to these missions, ECOSTRESS has been delivering high-resolution thermal observations since August 2018. Leveraging ECOSTRESS data, the European ECOSTRESS Hub (EEH) aims to generate accurate LST, ET, and GPP products through innovative modeling techniques. To avoid the scale mismatch between the thermal radiance and atmospheric profiles, the temperature and emissivity separation (TES) algorithm is modified by incorporating a machine learning model into the atmospheric correction module. This approach eliminates the need for an atmospheric radiative transfer model while retrieving LST and emissivity simultaneously, making the algorithm more efficient. By integrating LST into the analytical Surface Temperature Initiated Closure (STIC) model, ET retrieval in EEH circumvents the empirical parameterization of aerodynamic and surface-canopy conductance. Evaluation of ET estimates from ECOSTRESS using STIC demonstrates the advantage of the analytical SEB model over other thermal-based parametric SEB models, especially over water-constrained ecosystems. To estimate daily ET from instantaneous retrievals obtained at different times of day, an ET upscaling method has been developed that harnesses the extraterrestrial solar radiation constrained by energy-water limiting factors across various ecosystems.
This novel method offers a practical and accurate approach for estimating daily ET from ECOSTRESS data compared to other widely used methods. To retrieve GPP, the water stress information is injected into the photosynthetic light-use-efficiency and coupled ET-GPP equations in an iterative way. Multiple configurations of the GPP model are under testing and development. These advancements represent a significant step forward in generating LST, ET, and GPP products with improved accuracy and practicality. The development of these models potentially enhances our ability to monitor and understand ecosystem functioning with greater spatial detail and accuracy.
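A minimal light-use-efficiency form of a GPP retrieval, with the thermal water-stress information entering as a multiplicative scalar, might look as follows. This is a generic textbook sketch under stated assumptions, not the EEH model configuration; all parameter values are illustrative:

```python
def gpp_lue(par, fapar, lue_max, water_stress):
    """GPP [g C m-2 d-1] from a light-use-efficiency model.

    par: incident photosynthetically active radiation [MJ m-2 d-1]
    fapar: fraction of PAR absorbed by the canopy [0-1]
    lue_max: maximum light-use efficiency [g C MJ-1]
    water_stress: stress scalar in [0-1], e.g. derived from a
        thermal-based evaporative fraction (1 = unstressed)
    """
    apar = par * fapar                     # absorbed PAR
    return apar * lue_max * water_stress   # stress-limited carbon uptake


# Illustrative values: 10 MJ/m2/day of PAR, 60% absorbed, 25% water stress.
gpp = gpp_lue(par=10.0, fapar=0.6, lue_max=1.8, water_stress=0.75)
```

Coupling this multiplicatively is the simplest way for an LST-derived stress index to down-regulate photosynthesis; iterative ET-GPP coupling, as described above, refines the stress scalar itself.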
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall G2)

Presentation: Ensemble Learning for Land Surface Temperature Downscaling in Urban Areas to Prepare Future Missions TRISHNA, SBG, and LSTM

Authors: Raphael Delair, Doctor Aurélie Michel, Professor Hervé Carfantan, Xavier Briottet, Doctor Vincent Lonjou
Affiliations: ONERA/DOTA, Université de Toulouse F-31055, IRAP, UMR5277 CNRS, Université de Toulouse F-31055, CNES
Climate change is affecting global temperatures, leading to more frequent and intense heatwaves, particularly pronounced in urban areas due to the Urban Heat Island (UHI) effect. These high temperatures can increase the population’s vulnerability to heat. It is therefore essential to monitor intra-urban temperatures at a scale that allows urban elements such as buildings and streets to be distinguished, so that heat mitigation and adaptation strategies can be better integrated into urban policies. Thanks to thermal remote sensors, Land Surface Temperature (LST) can be retrieved and used to assess these heat impacts. However, current satellite observations lack the fine spatial resolution required for detailed urban analysis. Forthcoming satellite missions such as TRISHNA, SBG, LSTM and Landsat Next will provide thermal infrared data with a spatial resolution of ≃60 meters, which is recognized as too coarse with respect to urban surface roughness. Super-resolution techniques are often applied to such coarse data, fusing them with higher-resolution data to improve their original resolution. For LST super-resolution (LSTSR), these methods use fine images acquired in the visible to shortwave infrared (VNIR/SWIR) spectral range to enhance the spatial resolution of the coarse thermal image. Traditional super-resolution approaches, such as DisTrad (Kustas et al. 2003), HUTS (Dominguez et al. 2011) or (A)ATPRK (Wang, Shi, and Atkinson 2016), assume that empirical relationships are scale-invariant and apply them uniformly over the observed scene. The linear relationship between NDVI, a vegetation spectral index, and LST, for instance, has been extensively applied (Kustas et al. 2003). Other studies have tried to better model the relationship between several indices and LST, e.g. Li et al. (2022), Njuki, Mannaerts, and Su (2020), Zawadzka et al. (2020), Dong et al. (2020), and Tang, Zhu, and Ni (2021).
Existing methods are two-step: first, they model the relationship between selected indices and LST; then, they correct residuals using the coarse observations in a subsequent step. In this work, we argue that the current literature focuses too heavily on relaxing the constraining scale-invariance hypothesis and on the LST regression models, rather than on residual correction methods. We then suggest an ensemble approach for better LSTSR performance. We first show that, although a scale effect exists, it is often negligible compared to the simplicity of regression models that do not capture the subtle relationship between the considered indices and LST. Furthermore, we show that a better LST regression model does not necessarily lead to better LSTSR performance when the full methodology, which includes residual correction, is considered. We then emphasize that residual correction deserves more study, as existing methods are less effective in zones of high thermal contrast. Such zones arise from the morphology and material heterogeneity of urban areas, where LSTSR methods can introduce a blurring effect, smooth out extreme LST values, or introduce spatial artefacts (Granero-Belinchon et al. 2019). As an example, we focus on two residual correction methods widely used in the current literature: average residual correction and kriging-based residual correction. In these methods, corrections are much more correlated with the coarse observed scene than with the contrasts of the high-resolution estimate. As a result, some residual corrections actually worsen LST estimates in regions of high contrast, near building walls for instance. Finally, we propose a first idea to tackle this problem. Instead of focusing on the first step of LSTSR methods, we use a Random Forest (RF) machine learning model to fuse the outputs of different LSTSR methods into a finer estimate of LST at fine scale.
To illustrate these statements in practice, we use high-resolution thermal and reflective multispectral datasets (≤10 m) from airborne campaigns over Toulouse (Roupioz et al. 2023) and Madrid (Sobrino et al. 2009) to compute spectral indices at different spatial scales. We also include morphological indices (semantic and 3D information obtained from Pleiades data) describing the observed urban environment. After simulating TRISHNA data at a spatial scale of 60 m, we coarsen them further and use different ATPRK instances based on different spectral indices. We then fit an RF model on the obtained LST results and the observed values. Applying the different ATPRK instances directly on simulated TRISHNA data and using the fitted RF model, we reduce the root mean squared error by 8.6% when super-resolving LST to a spatial scale of 10 m, compared to classical ATPRK.
References
- Dominguez, Anthony, Jan Kleissl, Jeffrey C. Luvall, and Douglas L. Rickman. 2011. “High-Resolution Urban Thermal Sharpener (HUTS).” Remote Sensing of Environment 115 (7): 1772–80. https://doi.org/10.1016/j.rse.2011.03.008.
- Dong, Pan, Lun Gao, Wenfeng Zhan, Zihan Liu, Jiufeng Li, Jiameng Lai, Hua Li, Fan Huang, Sagar K. Tamang, and Limin Zhao. 2020. “Global Comparison of Diverse Scaling Factors and Regression Models for Downscaling Landsat-8 Thermal Data.” ISPRS Journal of Photogrammetry and Remote Sensing 169 (November): 44–56. https://doi.org/10.1016/j.isprsjprs.2020.08.018.
- Granero-Belinchon, Carlos, Aurelie Michel, Jean-Pierre Lagouarde, Jose A. Sobrino, and Xavier Briottet. 2019. “Multi-Resolution Study of Thermal Unmixing Techniques over Madrid Urban Area: Case Study of TRISHNA Mission.” Remote Sensing 11 (10): 1251. https://doi.org/10.3390/rs11101251.
- Kustas, William P., John M. Norman, Martha C. Anderson, and Andrew N. French. 2003. “Estimating Subpixel Surface Temperatures and Energy Fluxes from the Vegetation Index–Radiometric Temperature Relationship.” Remote Sensing of Environment 85 (4): 429–40. https://doi.org/10.1016/S0034-4257(03)00036-1.
- Li, Xiangyu, Guixin Zhang, Shanyou Zhu, and Yongming Xu. 2022. “Step-By-Step Downscaling of Land Surface Temperature Considering Urban Spatial Morphological Parameters.” Remote Sensing 14 (13): 3038. https://doi.org/10.3390/rs14133038.
- Njuki, Sammy M., Chris M. Mannaerts, and Zhongbo Su. 2020. “An Improved Approach for Downscaling Coarse-Resolution Thermal Data by Minimizing the Spatial Averaging Biases in Random Forest.” Remote Sensing 12 (21): 3507. https://doi.org/10.3390/rs12213507.
- Roupioz, L., X. Briottet, K. Adeline, A. Al Bitar, D. Barbon-Dubosc, R. Barda-Chatain, P. Barillot, et al. 2023. “Multi-Source Datasets Acquired over Toulouse (France) in 2021 for Urban Microclimate Studies During the CAMCATT/AI4GEO Field Campaign.” Data in Brief 48 (June): 109109. https://doi.org/10.1016/j.dib.2023.109109.
- Sobrino, J. A., R. Bianchi, M. Paganini, G. Sòria, J. C. Jiménez-Muñoz, R. Oltra-Carrió, C. Mattar, et al. 2009. “Dual-Use European Security IR Experiment 2008 (DESIREX 2008) Final Report.”
- Tang, Kai, Hongchun Zhu, and Ping Ni. 2021. “Spatial Downscaling of Land Surface Temperature over Heterogeneous Regions Using Random Forest Regression Considering Spatial Features.” Remote Sensing 13 (18): 3645. https://doi.org/10.3390/rs13183645.
- Wang, Qunming, Wenzhong Shi, and Peter M. Atkinson. 2016. “Area-to-Point Regression Kriging for Pan-Sharpening.” ISPRS Journal of Photogrammetry and Remote Sensing 114 (April): 151–65. https://doi.org/10.1016/j.isprsjprs.2016.02.006.
- Zawadzka, Joanna, Ron Corstanje, Jim Harris, and Ian Truckell. 2020. “Downscaling Landsat-8 Land Surface Temperature Maps in Diverse Urban Landscapes Using Multivariate Adaptive Regression Splines and Very High Resolution Auxiliary Data.” International Journal of Digital Earth 13 (8): 899–914. https://doi.org/10.1080/17538947.2019.1593527.
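The average residual correction criticized in this abstract can be sketched for a single coarse pixel covering a block of fine pixels. The numbers are illustrative; the point is that the residual is spread uniformly, including into high-contrast zones:

```python
def average_residual_correction(fine_est, coarse_obs):
    """Apply average residual correction within one coarse pixel.

    fine_est: list of fine-scale LST estimates [K] inside the coarse pixel
    coarse_obs: observed coarse LST [K] for that pixel

    The residual (coarse observation minus the mean of the fine estimates)
    is added uniformly to every fine pixel. This preserves the coarse mean
    but also spreads any error uniformly, which is why it can degrade
    estimates near sharp thermal boundaries such as building walls.
    """
    residual = coarse_obs - sum(fine_est) / len(fine_est)
    return [t + residual for t in fine_est]


# mean(fine) = 304.0 K, observed coarse = 305.0 K -> +1.0 K everywhere
corrected = average_residual_correction([300.0, 302.0, 310.0, 304.0], 305.0)
# -> [301.0, 303.0, 311.0, 305.0]
```

Kriging-based correction replaces the uniform offset with a spatially interpolated residual surface, but, as the abstract notes, both remain driven by the coarse scene rather than by fine-scale contrasts.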
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall K2)

Session: D.04.06 Advancements in cloud-native formats and APIs for efficient management and processing of Earth Observation data

#stac #zarr #cog #parquet #cloud-native

Earth Observation (EO) data continues to grow in volume and complexity as next-generation satellite instruments are developed. Furthermore, novel advanced simulation models such as the Digital Twins (DTs) deployed in the scope of the Destination Earth (DestinE) project generate immense amounts of multidimensional data (a few PB/day in total) thanks to High Performance Computing (HPC) technology. Cataloguing, processing and disseminating such a broad variety of data sets is a huge challenge that has to be tackled in order to unleash the full potential of EO. Storage and analytics of vast volumes of data have moved from on-premise IT infrastructure to large cloud computing environments such as the Copernicus Data Space Ecosystem (CDSE), the DestinE Core Service Platform (DESP), Google Earth Engine or Microsoft Planetary Computer. In this respect, robust multidimensional data access interfaces leveraging the latest cloud-native data formats (e.g. COG, Zarr, GeoParquet, vector tiles) and compression algorithms (e.g. ZSTD) are indispensable to enable advanced cloud-native APIs (e.g. openEO, Sentinel Hub) and data streaming (e.g. EarthStreamer). Moreover, metadata models have to be standardized and unified (e.g. the STAC catalogue specification) among different data archives to allow interoperability and fast federation of various data sources. This session aims at presenting the latest advancements in data formats, data compression algorithms, data cataloguing and novel APIs to foster EO analytics in cloud computing environments.
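What makes formats such as Zarr and COG "cloud-native" is that an array is split into independently addressable chunks, so a client can fetch only the pieces it needs. The sketch below enumerates Zarr-style chunk keys for a 2-D array; it is a conceptual illustration, not tied to any particular store:

```python
import math


def chunk_keys(shape, chunks):
    """Enumerate Zarr-style chunk keys ("i.j") for a 2-D array.

    shape: (rows, cols) of the full array
    chunks: (rows, cols) of each chunk
    Each key names an independently requestable object in a cloud store,
    which is what lets clients fetch only the chunks they need instead
    of downloading the whole file.
    """
    ny = math.ceil(shape[0] / chunks[0])
    nx = math.ceil(shape[1] / chunks[1])
    return [f"{i}.{j}" for i in range(ny) for j in range(nx)]


# A 100x100 raster stored as 50x50 chunks -> 4 objects: 0.0, 0.1, 1.0, 1.1
keys = chunk_keys((100, 100), (50, 50))
```

Reading a small spatial window then costs one or two object requests rather than a full-file download; the same idea underlies COG's internal tiling and GeoParquet's row groups.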

Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall K2)

Presentation: The Future of Data Discovery at CEDA: The DataPoint API

#zarr #stac #kerchunk #virtualizarr

Authors: Mr Daniel Westwod, Mr Rhys Evans, Prof. Bryan N. Lawrence, Mr David Hassell
Affiliations: Centre for Environmental Data Analysis, STFC, NCAS, Department of Meteorology, University of Reading
A paradigm shift is underway among many data storage centres as the need for cloud-accessible, analysis-ready data increases. There are significant issues relating to findability and accessibility with current storage formats, and existing data archives are not suited for aggregation and use in cloud applications. Several technologies are available, ranging from so-called cloud-optimised data formats such as Zarr and Icechunk to more traditional formats such as NetCDF. Whatever tools are used, aggregation methods are needed to expose simpler views of the underlying objects and/or files. At CEDA we have begun to create and ingest these new file formats, as well as develop new search services to enable fast access to our data. We have also created an API called DataPoint, capable of connecting to our systems and abstracting much of the complexity of different file types, to create the best environment for accessing our cloud data products.
Data Storage
Typical data access use cases require parts of a file or dataset to be read independently (either because only a part of the data is required, or because the entire data is required but the client manages reading it in a piecemeal fashion). There are generally two approaches that support reading only the requested parts:
- Reformat: break the data up into separate objects that can be individually requested.
- Reference: provide a mechanism to get a specific range of bytes, corresponding to a data chunk, from a larger object.
Whichever approach is used, it is often necessary to reformat or repack the data to provide performant access, but that requires duplication of the data. Whether this step is necessary will depend on a combination of which client tools are being used and what the primary mode of access is (in terms of how it slices into the data chunks). The client tool landscape is changing rapidly, so flexibility is needed in organisations such as CEDA.
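The "Reference" approach can be illustrated with a kerchunk-style reference set: a small JSON document mapping each chunk key either to inline metadata or to a (URL, byte offset, byte length) triple inside the original archive file, so a Zarr client can issue range requests against unmodified NetCDF/HDF files. The file path, variable name and offsets below are invented for illustration:

```python
import json

# Minimal kerchunk-style (version 1) reference set: Zarr metadata inline,
# data chunks as [url, byte_offset, byte_length] pointers into the
# original NetCDF file (paths and offsets are illustrative).
refs = {
    "version": 1,
    "refs": {
        ".zgroup": json.dumps({"zarr_format": 2}),
        "tas/0.0.0": ["https://example.org/archive/tas_2020.nc", 8392, 262144],
        "tas/0.0.1": ["https://example.org/archive/tas_2020.nc", 270536, 262144],
    },
}


def byte_range(refs, key):
    """Resolve a chunk key to the inclusive byte range to request."""
    url, offset, length = refs["refs"][key]
    return url, offset, offset + length - 1  # inclusive HTTP Range end


url, start, end = byte_range(refs, "tas/0.0.0")
# -> start=8392, end=270535: one HTTP Range request, no data duplication
```

Because only these small reference files are created, the original archive stays in place: the cost of "cloud-optimising" the data is a few kilobytes of JSON per dataset rather than a full copy.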
We need to deal with data stored in Zarr chunks, in NetCDF files with kerchunk indexes, and with aggregations defined using a range of mechanisms, including the newly formally defined CF Aggregation format. Over the last two years we've developed a tool called ‘padocc’ to handle large-scale generation of Kerchunk reference files or new cloud data stores with Zarr. We are actively working on this tool to provide performance enhancements and are considering the inclusion of upcoming packages like VirtualiZarr to generate virtual datasets. The generated files have been ingested into the CEDA Archive and are accessible to all users - except that no one knows how to use or even find them. Since these technologies are relatively new to most of our user communities, the mechanisms for accessing these types of data are not well known or well understood, and they need to exist alongside more established formats such as NetCDF/HDF.
Metadata Records
CEDA are also investigating the SpatioTemporal Asset Catalogue (STAC) specification to allow user interfaces and search services to be enhanced and to facilitate interoperability with user tools and our partners. We are working to create a full-stack software implementation including an indexing framework, API server, web and programmatic clients, and vocabulary management. All components are open-source so that they can be adopted and co-developed with other organisations working in the same space. To create the CEDA STAC catalog we have developed the "stac-generator", a tool that utilises a plugin architecture to allow for more flexibility at the dataset level. A range of input, output, and "extraction methods" can be configured to enable metadata extraction across CEDA's diverse archive data and beyond at other organisations.
Elasticsearch (ES) was chosen to host the indexed metadata because it is performant, highly scalable and supports semi-structured data - in this case the faceted search values related to different data collections. We have also developed several extensions to the STAC framework to meet requirements that weren't met by the core and community functionality. These include an endpoint for interrogating the facet values as queryables, and a free-text search capability across all properties held in the index. Development of our search system has also included pilots for the Earth Observation Data Hub (EODH) and a future version of the Earth System Grid Federation (ESGF) search service, for which we have created an experimental index containing a subset of CMIP6, CORDEX, Sentinel 2 ARD, Sentinel 1, and UKCP data to investigate performance and functionality.
Discovering and Accessing Data
DataPoint is the culmination of these developments to create a single point of access to the data archived at CEDA; the so-called 'CEDA Singularity'. It connects directly to our STAC catalogs and can be used to search across a growing portion of our data holdings to find specific datasets and metadata. What sets DataPoint apart from other APIs is its ability to directly open datasets from cloud formats without requiring manual configuration. DataPoint reads all the required settings from the STAC record to open the dataset, making the interface much simpler for the typical user. With DataPoint, a user can search across a vast library of datasets, select a specific dataset matching a set of constraints, and then simply open the result as a dataset. DataPoint handles extraction of the link to the cloud formats stored in our archive to present a dataset that looks the same regardless of the type or format in which the data is stored.
At all points the data is lazily loaded to provide fast access to metadata, enabling users to get a summary of data before committing to opening and transferring what could be a large volume of data. Currently our STAC catalogs represent only a small fraction of the total CEDA archive which spans more than 40 years of data, totalling over 25 Petabytes. The next step towards greater data accessibility will be to dramatically expand our STAC representations as well as the formats required for DataPoint. We have well established pipelines for generating both, which will become immediately available to all DataPoint users when published to our production indexes.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall K2)

Presentation: Embracing Diversity in Earth Observation with HIGHWAY

#stac

Authors: Luca Girardo, Mr. Simone Mantovani, Henry de Waziers, Mr. Giovanni Corato
Affiliations: Esa Esrin, adwäisEO, MEEO S.r.l.
The European Space Agency (ESA) operates a wide array of Earth Observation (EO) missions, including the Earth Explorer, Heritage and Third-Party Missions. Each of these missions provides valuable datasets enabling researchers, decision-makers, and scientists to gain deeper insights into the planet's systems. Among these missions, SMOS (Soil Moisture and Ocean Salinity), CryoSat, Proba-V, SWARM, Aeolus, and EarthCARE collectively gather different types of information using a wide range of sensors, such as radiometers, optical and multispectral sensors, radars and altimeters. This richness of sensors allows a wide spectrum of data products to be generated for monitoring different physical variables at different spatial and temporal scales. This diversity is essential for capturing the multifaceted dynamics of Earth's systems, but it also presents challenges in terms of data accessibility, integration, and usability, since these datasets vary in format, content, and data types. To address this complexity, HIGHWAY offers an innovative solution that bridges the gap between the heterogeneous nature of EO data and the seamless usability required by end users. HIGHWAY provides a unified approach, adopting the Earth Observation Processing Framework (EOPF) data model to guarantee an adequate level of data harmonisation, and providing OGC standard services to discover, view and access these diverse datasets while preserving their unique attributes. This capability is driven by several key features:
1. Digital Twin Analysis Ready Cloud Optimized (DT-ARCO) files: HIGHWAY transforms disparate datasets into standardized, cloud-optimized formats designed for analysis readiness. These files are tailored to meet the rigorous demands of modern data analysis workflows, particularly for Digital Twin engines that require high-quality, pre-processed inputs for training and predictions.
2. Unique and seamless endpoint for users: HIGHWAY simplifies access to data by consolidating multiple data sources into a single, intuitive interface. Users can explore and retrieve datasets without needing to navigate the complexity of individual mission archives or disparate data formats.
3. Advanced cataloguing standards: HIGHWAY incorporates state-of-the-art cataloguing protocols, including OpenSearch with Geo and Time Extensions, STAC (SpatioTemporal Asset Catalogue), WMS (Web Map Service), and WCS (Web Coverage Service). These standards enable efficient querying, visualization, and retrieval of spatial-temporal data, enhancing the user experience and supporting diverse application requirements.
4. Native and cloud-optimized data access: HIGHWAY ensures that data is accessible in both its native and cloud-optimized formats to meet the different needs of researchers and digital twins. In particular, ARCO data can unlock the potential of large cloud or HPC processing systems.
One of HIGHWAY's standout features is its ability to retain the specificity of each dataset while integrating them into a unified system. This is particularly critical for the development and deployment of Digital Twin engines, which rely on the precise characteristics of EO data to produce accurate predictions and insights. By maintaining the integrity of the original datasets, HIGHWAY ensures that these advanced analytical models can fully leverage the richness and diversity of ESA's EO products. The success of HIGHWAY is underpinned by a robust, high-performance infrastructure that seamlessly combines on-premises, cloud, and HPC (High-Performance Computing) environments. This infrastructure is designed to accommodate the growing demands of EO data users, supporting advanced workflows such as large-scale data analysis, real-time processing, and machine learning. HIGHWAY is also future-ready, incorporating data caching strategies that prioritize importance and relevancy.
This ensures efficient data retrieval, reducing latency and enabling faster decision-making for time-sensitive applications. HIGHWAY’s transformative approach to EO data management and access positions it as a critical enabler for the next generation of Earth science applications. By addressing the challenges of data diversity and accessibility, HIGHWAY unlocks the full potential of ESA’s EO missions, empowering users with the tools and resources needed to tackle complex environmental challenges. As the demand for actionable insights from EO data continues to grow, HIGHWAY is ready to evolve, introducing new capabilities that anticipate and meet the requirements of future users. In summary, HIGHWAY embodies the principles of innovation, integration, and inclusivity, turning the challenges of data diversity into opportunities for advancement. By providing a unique and seamless endpoint, leveraging state-of-the-art interoperable standards, and ensuring analysis-ready data, HIGHWAY not only simplifies access to EO data but also enhances its usability for cutting-edge applications. With its high-performance infrastructure and future-oriented design, HIGHWAY stands as a cornerstone for advancing Earth science and fostering sustainable solutions for a changing planet.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall K2)

Presentation: openEO - STAC Integration for Enhanced Data Access and Sharing

#stac #cloud-native

Authors: Ir. Victor Verhaert, Vincent Verelst, Ir. Jeroen Dries, Dr. Hans
Affiliations: VITO Remote Sensing
The openEO API serves as a standardized interface for accessing and processing Earth Observation (EO) data. It is deeply integrated within the Copernicus Dataspace Ecosystem (CDSE), providing users with easy access to vast collections of satellite data and computational resources. However, many specialized EO workflows require data beyond CDSE, such as those hosted on platforms like Microsoft’s Planetary Computer or private repositories. To address this limitation, openEO has expanded its capabilities by enhancing its integration with the STAC (SpatioTemporal Asset Catalog) ecosystem. This advancement significantly broadens the range of data sources that can be accessed, processed, and shared within openEO workflows. STAC, an emerging standard for organizing and cataloging geospatial data, is widely adopted for its simplicity and flexibility. It provides a unified framework for indexing and querying EO data from various sources, making it an essential tool for modern geospatial analysis. By extending its support for STAC, openEO aligns itself with the broader trend toward cloud-native geospatial workflows and ensures compatibility with diverse data providers. One of the key developments in openEO is the introduction of the load_stac functionality. This feature enables users to query and access STAC-compliant datasets across multiple platforms, regardless of whether the data resides in public repositories, such as the Planetary Computer, or private repositories tailored to specific projects. This functionality goes beyond CDSE, allowing users to integrate datasets from different sources into a single, cohesive workflow. By combining public and private data collections, openEO empowers users to address unique research and operational needs while maintaining flexibility, scalability and privacy. In addition to expanded data access, openEO now supports saving workflow outputs directly into STAC-compliant formats. 
By adhering to STAC’s metadata standards, the outputs can be cataloged and shared with ease, fostering greater collaboration and reproducibility within the EO community. By integrating STAC for both data input and output, openEO enhances not only the accessibility of EO data but also the sharing and scalability of derived products. These capabilities are critical for enabling FAIR (Findable, Accessible, Interoperable, Reusable) data principles in geospatial workflows. The ability to retrieve data from diverse sources, process it in the cloud, and store outputs in standardized formats ensures that data flows remain seamless and efficient, even as datasets grow in size and complexity. By integrating deeply with STAC, openEO provides users with a robust, adaptable platform for modern EO analysis. Whether working with massive public datasets or proprietary collections, users can design and execute workflows that meet their specific needs without being constrained by data availability. This session will delve into the technical details of openEO’s enhanced STAC integration. We will demonstrate the use of the load_stac functionality to query and process datasets from platforms like the Planetary Computer, as well as private repositories. Additionally, we will showcase how processed data can be exported in STAC-compliant formats, highlighting its utility for data sharing and reproducibility. Practical examples will include combining multiple datasets into unified workflows and saving analysis outputs for collaborative projects.
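The STAC-compliant outputs described above follow the standard Item structure. As a rough illustration (not openEO's actual serialization code; the id and asset href are hypothetical), a minimal STAC Item for a workflow result can be assembled with nothing but the standard library:

```python
import json
from datetime import datetime, timezone

def make_stac_item(item_id, bbox, dt, asset_href):
    """Build a minimal STAC 1.1 Item (illustrative; real openEO outputs
    carry richer metadata, e.g. the proj and eo extensions)."""
    west, south, east, north = bbox
    return {
        "type": "Feature",
        "stac_version": "1.1.0",
        "id": item_id,
        "geometry": {
            "type": "Polygon",
            "coordinates": [[
                [west, south], [east, south], [east, north],
                [west, north], [west, south],
            ]],
        },
        "bbox": list(bbox),
        "properties": {"datetime": dt.strftime("%Y-%m-%dT%H:%M:%SZ")},
        "links": [],
        "assets": {
            "data": {
                "href": asset_href,
                "type": "image/tiff; application=geotiff; profile=cloud-optimized",
            },
        },
    }

item = make_stac_item(
    "ndvi-composite-2024-06",                     # hypothetical output id
    (4.0, 50.5, 6.5, 52.0),
    datetime(2024, 6, 1, tzinfo=timezone.utc),
    "s3://my-bucket/ndvi-composite-2024-06.tif",  # hypothetical location
)
print(json.dumps(item, indent=2)[:80])
```

Because the result is plain JSON with the standard fields, any STAC-aware tool (browsers, `load_stac` itself) can pick the output up again as an input.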
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall K2)

Presentation: Key Innovations, Challenges, and Open-Source Solutions in Building the Copernicus Data Space Ecosystem STAC Catalog

#stac

Authors: Marcin Niemyjski
Affiliations: CloudFerro
The SpatioTemporal Asset Catalog (STAC) has gained significant recognition in both public and commercial sectors. This community-developed standard is widely used to access open data and by commercial providers as an interface for accessing paid resources, such as data from private constellations. However, existing implementations of the standard require optimization for handling datasets typical of big data scales. The Copernicus Program is the largest and most successful public space program globally. It provides continuous data across various spectral ranges, with an archive exceeding 84 petabytes and a daily growth of approximately 20 TB, both of which are expected to increase further. The openness of its data has contributed to the widespread use of Earth observation and the development of commercial products utilizing open data in Europe and worldwide. The entire archive, along with cloud-based data processing capabilities, is available free of charge through the Copernicus Data Space Ecosystem initiative. This paper presents the process of creating the STAC Copernicus Data Space Ecosystem catalog, the largest and most comprehensive STAC catalog in terms of metadata globally. It details the process from developing a metadata model for Sentinel data, through efficient indexing based on the original metadata files accompanying the products, to result validation and backend system ingestion. A particular highlight is that this entire process is executed using a single tool, eometadatatool, initially developed by DLR, further enhanced, and released as open-source software by the CloudFerro team. Eometadatatool facilitates metadata extraction from the original files accompanying Copernicus program products and others (e.g., Landsat, Copernicus Contributing Missions) based on a CSV file containing the metadata name, the name of the file in which it occurs, and the path to the key within the file. 
By default, the tool supports product access via S3 resources, configurable through environment variables. The CDSE repository operates as an S3 resource, offering users free access. The development process contributed to the evolution of the standard by introducing version 1.1 and new extensions (storage, eo, proj) that better meet user needs. The paper discusses the most significant modifications and their impact on the catalog’s functionality. Particular attention is devoted to performance optimization due to the substantial data volume and high update frequency. The study analyzes the configuration and performance testing (using Locust) of the frontend layer (stac-fastapi-pgstac) and backend (pgstac). Stac-fastapi-pgstac was implemented on a scalable Kubernetes cluster and subjected to a product hydration process, leveraging Python's native capabilities for this task. The pgstac schema was deployed on a dedicated bare-metal server with a PostgreSQL database, utilizing master-worker replication, enabled through appropriate pgstac configuration. The presented solution empowers the community to utilize the new catalog fully, leverage its functionalities, and access open tools that enable independent construction of STAC catalogs compliant with ESA and community recommendations.
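The CSV-driven extraction idea described above (metadata name, source file, path to the key within the file) can be sketched as follows. This is an illustration of the mapping concept, not eometadatatool's actual code: it handles only JSON-like documents, and the field names are hypothetical.

```python
# Illustrative mapping in the spirit of eometadatatool's CSV configuration:
# (metadata name, file the value lives in, dotted path to the key).
MAPPING = [
    ("platform", "manifest.json", "product.platform"),
    ("cloud_cover", "metadata.json", "quality.cloudCoverPercentage"),
]

def get_by_path(doc, path):
    """Walk a dotted key path through nested dicts."""
    for key in path.split("."):
        doc = doc[key]
    return doc

def extract(files, mapping=MAPPING):
    """files maps filename -> parsed metadata document; returns a flat
    {metadata name: value} dict ready to template into a STAC item."""
    return {name: get_by_path(files[fname], path)
            for name, fname, path in mapping}

product_files = {  # stand-ins for the metadata files shipped with a product
    "manifest.json": {"product": {"platform": "SENTINEL-2A"}},
    "metadata.json": {"quality": {"cloudCoverPercentage": 12.5}},
}
print(extract(product_files))
```

The appeal of the table-driven design is that supporting a new mission means editing the CSV, not the extraction code.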
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall K2)

Presentation: Atmosphere Virtual Lab: Access atmospheric satellite data as a datacube

#zarr #stac

Authors: Sander Niemeijer
Affiliations: S[&]T
The Atmosphere Virtual Lab aims to provide users with the tools to simplify the analysis and visualisation of atmospheric satellite data. In the past it has provided this by means of a JupyterLab environment that covered the Atmospheric Toolbox components, such as the popular HARP toolset, and dedicated interactive visualisation components for notebooks. One of the main challenges for using data from missions such as Sentinel-5P is having to deal with the data in L2 format. Many types of analysis require an L3 regridding step in order to arrive at an analysis-ready form. To remove this step, the AVL has evolved into a cloud-hosted platform, providing a wide range of atmospheric satellite data in an analysis-ready L3 format, with the data being continuously updated. It leverages popular standards such as Zarr for data storage and STAC for discovery to expose the data cubes. AVL brings a novel approach by its use of pyramiding to provide zoom levels in all dimensions, both spatially and temporally, and facilitates fast access through efficient chunking mechanisms. The AVL cloud service comes with a public web client that allows for easy browsing and visualisation of the data cube content and a cloud-hosted JupyterLab environment for advanced analysis purposes. We present the current status of the AVL service, its capabilities and design, and plans for the future.
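The pyramiding idea can be illustrated with a toy example: each zoom level is a mean-pooled version of the one below it, so a client fetches only as much data as the current view requires. The sketch below covers the spatial axes only and uses pure Python; the AVL additionally pyramids the temporal dimension and stores the levels as chunked Zarr arrays.

```python
def coarsen(grid, factor=2):
    """Mean-pool a 2-D grid by `factor` along both axes (one zoom level)."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(0, rows, factor):
        row = []
        for c in range(0, cols, factor):
            block = [grid[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def build_pyramid(grid, levels):
    """Full-resolution grid plus successively coarsened zoom levels."""
    pyramid = [grid]
    for _ in range(levels):
        pyramid.append(coarsen(pyramid[-1]))
    return pyramid

base = [[1.0, 3.0, 5.0, 7.0],
        [1.0, 3.0, 5.0, 7.0],
        [2.0, 2.0, 6.0, 6.0],
        [2.0, 2.0, 6.0, 6.0]]
pyr = build_pyramid(base, levels=2)
print(pyr[1])  # → [[2.0, 6.0], [2.0, 6.0]]
```

A zoomed-out map request can then be served from `pyr[2]` at a quarter of the data volume per axis.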
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall K2)

Presentation: OpenSTAC: an open spatiotemporal catalog to make Earth Observation research data findable and accessible

#stac

Authors: Dr. Serkan Girgin, Jay Gohil
Affiliations: Faculty of Geo-information Science and Earth Observation (ITC)
In line with Open Science practices and FAIR principles, researchers are publishing their Earth Observation (EO) related research data at public research data repositories, such as Zenodo and Figshare. Files that are in raster grid data formats such as GeoTIFF, NetCDF, and HDF, as well as supplementary vector data, are common in these research datasets. Although such data files include detailed spatiotemporal information, which is very useful for finding data for specific regions and time periods, this information is currently not effectively utilized by the research data repositories: researchers are instead asked to enter spatial and temporal information manually as part of the dataset metadata, which is usually limited to a textual description or simple metadata attributes. Moreover, the repositories also do not provide effective tools and interfaces to search research data by location, e.g. by specifying a geographical extent. Therefore, EO-related research data largely becomes "invisible" to the researchers and can only be found if some keywords match the textual location description. This severely limits the findability and accessibility of research data with spatiotemporal characteristics. On the other hand, there are many initiatives that aim to facilitate access to EO data by using modern tools and technologies. One such initiative is the SpatioTemporal Asset Catalog (STAC), which is an emerging open standard designed to enhance access to geospatial data, especially on the Cloud. STAC provides a unified framework for organizing and describing geospatial assets, making it easier for users to discover, access, and work with EO data. It enables data providers to create catalogs of geospatial assets, each with detailed metadata, including spatial and temporal information, formats, and links to data files. 
This standardized structure improves data discoverability and interoperability across various software tools and platforms, streamlining the process of finding and accessing geospatial data. OpenSTAC leverages the capabilities of the STAC ecosystem and aims to create an open spatiotemporal catalog of public research datasets published at major research data repositories. For this purpose, geospatial data files available in research datasets are analyzed, and the spatiotemporal information embedded in these files is extracted. This information is used to create a global STAC catalog, OpenSTAC, which enables researchers to easily find and access EO research data by using a wide range of open-source tools provided by the STAC ecosystem, including visual data browsers, command line tools, and data access libraries in various languages, e.g. Python, R, and Julia. Hence, it significantly improves the FAIRness of EO research data. This talk will provide detailed information about the methodology developed to monitor the research data repositories to identify published geospatial datasets, to collect spatiotemporal metadata of datasets by using existing metadata and by analysing and extracting additional information from the geospatial data files, and to update a STAC-based spatiotemporal catalog of the datasets by using the collected information. The methodology's implementation through open-source software will be presented, providing insights into its functionality and practical applications. Additionally, a live demonstration of the operational OpenSTAC platform will showcase its features, capabilities, and real-world applicability, highlighting its role in facilitating seamless integration and execution of the methodology.
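The core of the cataloging step, uniting per-file spatiotemporal metadata into a collection-level extent, can be sketched as below. The granule values are hypothetical; a real implementation would read footprints and timestamps from GeoTIFF/NetCDF headers.

```python
def collection_extent(granules):
    """Union the spatial/temporal footprints of individual files into the
    extent object of a STAC Collection (bbox order: west, south, east, north)."""
    bboxes = [g["bbox"] for g in granules]
    times = sorted(g["datetime"] for g in granules)
    union = [min(b[0] for b in bboxes), min(b[1] for b in bboxes),
             max(b[2] for b in bboxes), max(b[3] for b in bboxes)]
    return {"spatial": {"bbox": [union]},
            "temporal": {"interval": [[times[0], times[-1]]]}}

# Hypothetical spatiotemporal metadata extracted from two files in a dataset:
granules = [
    {"bbox": [6.7, 52.2, 7.0, 52.4], "datetime": "2021-03-01T00:00:00Z"},
    {"bbox": [6.9, 52.1, 7.3, 52.3], "datetime": "2021-07-15T00:00:00Z"},
]
print(collection_extent(granules))
```

With extents like this in place, a "search by geographical extent" query reduces to a bbox intersection test over the catalog.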
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.15/1.16)

Session: A.07.01 Soil moisture - PART 1

Over the past two decades, the technique of microwave remote sensing has made tremendous progress in providing robust estimates of surface soil moisture at different scales. From local to landscape scales, several field or aircraft experiments have been organised to improve our understanding of active and passive microwave soil moisture sensing, including the effects of soil roughness, vegetation, spatial heterogeneities, and topography. At continental scales, a series of passive and active microwave space sensors, including SMMR, AMSR, and ERS/SCAT, provided information on surface soil moisture. Current investigations in L-band passive microwave with SMOS and SMAP, and in active microwave with the MetOp/ASCAT series and Sentinel-1, enable an accurate quantification of soil moisture at regional and global scales. Building on the legacy of these missions, operational programmes like Copernicus as well as novel developments will further enhance our capabilities to monitor soil moisture, and they will ensure continuity of multi-scale soil moisture measurements on climate scales.

At the same time the field of metrology has received more and more attention by the Earth observation community, which has led to a growing awareness of the concept of error traceability and the necessity of well characterized, so-called “Fiducial Reference Measurements” (FRMs). As a consequence, research has put a new focus on obtaining traceable error budgets for soil moisture products, and improved ground reference data.

We encourage submissions related to soil moisture ground and remote sensing, including:
- Global soil moisture estimation from coarse resolution active and passive sensors.
- High spatial resolution soil moisture estimation based on e.g. Sentinel observations, GNSS reflections, or using novel downscaling methods.
- Field experiment, theoretical advances in microwave modelling and calibration/validation activities.
- Root zone soil moisture retrieval and soil moisture data assimilation in land surface models, hydrological models and in Numerical Weather Prediction models.
- Evaluation and trend analysis of soil moisture climate data records such as the ESA CCI soil moisture product as well as soil moisture from re-analysis.
- Inter-comparison and inter-validation between land surface models, remote sensing approaches and in-situ validation networks.
- Progress towards the estimation of SI-traceable uncertainty budgets including uncertainty characterization across scales.
- Application of satellite soil moisture products in scientific and operational disciplines.

Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Characterizing Precipitation and Soil Moisture Drydowns in Finland Using SMAP Satellite Data

Authors: Kerttu Kouki, Andreas Colliander
Affiliations: Finnish Meteorological Institute, Jet Propulsion Laboratory, California Institute of Technology
Precipitation (P) and soil moisture (SM) are critical components of the climate system. They are tightly linked and play vital roles in hydrological processes. They influence surface energy fluxes, the carbon cycle, vegetation growth, and land-atmosphere interactions. Accurate estimates of P and SM are essential for various climatological and hydrological applications, such as predicting floods and droughts. Additionally, key variables related to SM drydown patterns, such as the exponential decay time scale and the lower bound of SM, are crucial inputs for climate models. P and SM drydown patterns in the Arctic are currently poorly known. Previous satellite-based studies on the relationship between P and SM and drydown patterns have often excluded the Arctic region due to dense forests, water bodies, or seasonally frozen ground, which complicate SM retrieval. Yet, satellites are usually the only way to observe P and SM at high latitudes due to the sparse in situ measurement network. Direct satellite-based P estimates currently lack consistency and spatiotemporal coverage. However, P can also be estimated indirectly using satellite-based SM data: SM increases during rainfall and decreases during dry periods. Satellite-based SM measurements have good spatiotemporal coverage, and unlike direct rainfall measurements, SM data also provide insights into the relationship between SM and P and what happens to the water after it reaches the ground. This study aims to (1) analyze the relationship between SM and P, (2) investigate whether satellite-based SM data can be used to estimate P rates, and (3) examine SM drydown patterns across Finland. This study is based on SM data from NASA’s Soil Moisture Active Passive (SMAP) satellite, which operates at an L-band frequency (1.41 GHz), making it highly sensitive to the soil moisture in the top layer (0-5 cm). Additionally, we use ground-based weather radar data and in situ SM and P measurements as reference data. 
The study covers Finland from April to September over two consecutive years (2018-2019). While this study focuses on Finland, the methodology aims to be applicable across the entire Arctic region. The analysis showed that SM varies notably during the study period. In April and May, melting snow increases SM levels, and P and SM do not show a clear relationship. From June onwards, P and SM correlate better, with high daily P indicating a clear increase in SM. Water bodies notably complicate the SM retrieval and cause SM retrievals to saturate. An analysis of the different SMAP SM retrieval algorithms showed that the single-channel algorithms perform better near water bodies than the dual-channel algorithm, which is currently the baseline algorithm. To estimate P from SM measurements, we employed the SM2RAIN algorithm, which treats soil as a natural rain gauge. The algorithm showed promising results, detecting the area of rainfall accurately in most cases but estimating the intensity of the rainfall is more challenging. It occasionally overestimates light rain and fails to detect the highest daily P rates. Also, the choice of the parameters used in the algorithm considerably impacts the results. Drydown patterns were analyzed by fitting an exponential model to each SM drydown period, from which the exponential decay time scale, i.e. the time constant (τ) and the lower bound of SM (SMmin) were estimated. The time constant τ did not show much spatial or temporal variability. The distribution of τ for the whole study area was highly positively skewed, with a mode of 1.6 days and a median of 4.0 days, consistent with other studies. The distribution of SMmin is also positively skewed, with a mode of 0.14 m³/m³ and a median of 0.17 m³/m³. SMmin exhibits another lower peak at 0.02 m³/m³, the lower limit of SMAP soil moisture retrievals, possibly causing an artifact in the results. SMmin shows spatial variability, with surface conditions affecting the values. 
The lower bound is slightly higher near water bodies but also shows a more prominent peak at 0.02 m³/m³. Grid cells with dense vegetation and low vegetation agree better with each other, indicating that water bodies particularly affect and complicate SM retrieval. The promising results suggest that the method could be applied to the entire Arctic region.
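The drydown analysis described above can be illustrated on synthetic data. The sketch below fits SM(t) = SMmin + (SM0 - SMmin) * exp(-t/tau) by grid-searching SMmin and obtaining tau from a log-linear least-squares fit; this is a simplified stand-in for the study's actual estimation procedure, and the grid values are arbitrary.

```python
import math

def fit_drydown(times, sm, smmin_grid):
    """Fit SM(t) = SMmin + (SM0 - SMmin) * exp(-t / tau) to one drydown.
    For each candidate SMmin, ln(SM - SMmin) is linear in t, so tau and
    SM0 follow from ordinary least squares. Returns (tau, SMmin)."""
    best = None
    for smmin in smmin_grid:
        if min(sm) <= smmin:
            continue  # candidate lower bound must sit below all observations
        y = [math.log(v - smmin) for v in sm]
        n = len(times)
        tbar, ybar = sum(times) / n, sum(y) / n
        slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y))
                 / sum((t - tbar) ** 2 for t in times))
        intercept = ybar - slope * tbar
        sse = sum((smmin + math.exp(intercept + slope * t) - v) ** 2
                  for t, v in zip(times, sm))
        if slope < 0 and (best is None or sse < best[0]):
            best = (sse, -1.0 / slope, smmin)
    return best[1], best[2]

# Synthetic drydown: tau = 4 days, SMmin = 0.14 m3/m3, SM0 = 0.35 m3/m3
times = [0, 1, 2, 3, 4, 5, 6]
sm = [0.14 + 0.21 * math.exp(-t / 4.0) for t in times]
tau, smmin = fit_drydown(times, sm, smmin_grid=[0.10, 0.12, 0.14, 0.16])
print(round(tau, 2), smmin)  # → 4.0 0.14
```

On noise-free data the fit recovers the time constant and lower bound exactly; on real SMAP drydowns the same procedure yields the τ and SMmin distributions discussed in the abstract.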
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Combining GNSS Reflectometry and machine learning for global Soil Moisture and Forest Biomass monitoring in view of the ESA Scout HydroGNSS mission.

Authors: Emanuele Santi, Simone Pettinato, Nazzareno Pierdicca, Davide Comite, Leila Guerriero, Laura Dente, Maria Paola Clarizia, Nicolas Floury, Martin Unwin, Gabrielle Marigold
Affiliations: CNR-IFAC, Università La Sapienza, Università Tor Vergata, ESA-ESTEC, Surrey Satellite Technology Ltd (SSTL)
HydroGNSS (Unwin et al. 2021) has been selected as the second ESA Scout small satellite science mission, with a planned launch date of 2024. It is aimed at monitoring hydrological parameters, closely linked to GCOS-defined Essential Climate Variables (ECVs). HydroGNSS will prove new concepts and offer timely observations that supplement and complement existing satellite missions with high scientific priorities in ESA’s Earth Observation directorate. HydroGNSS aims at providing operational products of soil moisture (SMC), inundation or wetlands, freeze/thaw state and forest above ground biomass (AGB). This study addresses the retrieval of SMC and AGB by proposing two retrieval concepts based on Artificial Neural Networks (ANN) that have been developed and validated using NASA’s Cyclone GNSS (CyGNSS – Ruf et al. 2015) land observations in the framework of the ESA Ecology project and the ESA consolidation study for HydroGNSS. Along with one year of CyGNSS global land v3.1 data, reference data from the International Soil Moisture Network (Dorigo et al. 2011) and the SMAP Level 3 SM global daily product (Entekhabi et al. 2010) have been considered for implementing and validating the SMC algorithm, while the AGB pantropical dataset (Avitabile et al. 2016) was considered for implementing and validating the AGB algorithm. Two global datasets of daily CyGNSS observables, ancillary data, and target SM have been assembled by aggregating data at different resolutions. The first dataset, at 36 km resolution (on the EASE-Grid), is used for evaluating the CyGNSS sensitivity vs. SMAP products (SM, VWC and roughness) and pantropical AGB, and for training the ANN SM algorithm. The second dataset, at 0.05° (≈5 km), is used for validating the SM algorithm against in-situ measurements (ISMN) and for training/testing the AGB algorithm. 
At first, a sensitivity analysis of the CyGNSS observables to SM and forest AGB was carried out at two different spatial resolutions of 5 and 36 km, with the aim of understanding if, and to what extent, the synergistic use of other GNSS-R observables, beside the already assessed equivalent reflectivity (Santi et al. 2020), can improve the retrieval of both parameters. The outcomes of the sensitivity analysis confirmed the possibility of estimating both SM and AGB from GNSS-R, while the high dispersion of the CyGNSS observations suggested the use of advanced algorithms to reduce the uncertainties and improve the retrievals. The feasibility of retrieving SMC and AGB has therefore been explored by implementing and testing retrieval algorithms based on Artificial Neural Networks (ANN). Taking advantage of the ANN capability of easily merging multiple inputs, several combinations of GNSS-R observables and ancillary data (e.g., topography and land use information) have been evaluated. The algorithms have been trained using the lower resolution dataset and then applied to the higher resolution dataset to generate global maps of the target parameters (Santi et al. 2024). The obtained outputs have been tested and validated against the reference datasets: in particular, the SMC algorithm has been tested at 36 km against SMAP data, obtaining an overall correlation R = 0.88, and validated at 5 km against data from ISMN stations with site-dependent but in general encouraging results. The AGB algorithm was validated at 5 km against subsets of the AGB map not involved in the ANN training, obtaining R ≈ 0.85 and RMSE < 75 t/ha. The results confirmed the feasibility of using GNSS-R for SM and AGB global monitoring: the retrieval is feasible provided that advanced algorithms (e.g., ANN) are used. The inclusion of ancillary information (topography, land cover and so on) was also effective in improving the retrievals. References: Ruf et al., 2015, DOI: 10.1175/BAMS-D-14-00218.1. 
Avitabile et al., 2016, DOI: 10.1111/gcb.13139. Dorigo et al., 2011, DOI: 10.5194/hess-15-1675-2011. Entekhabi et al., 2010, DOI: 10.1109/JPROC.2010.2043918. Santi et al., 2020, DOI: 10.1109/JSTARS.2020.2982993. Santi et al. 2024, DOI: 10.1016/j.srs.2024.100177 Unwin et al., 2021, DOI: 10.1109/JSTARS.2021.3089550.
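The ANN retrieval concept, merging GNSS-R observables with ancillary data into a single estimate, can be illustrated with a minimal forward pass. The weights below are arbitrary placeholders (the actual networks are trained on CyGNSS observables against SMAP and ISMN targets), so the printed value is meaningless except as a demonstration of the mechanics.

```python
import math

def tanh_layer(x, weights, biases):
    """One fully-connected layer with tanh activation."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def ann_sm(observables):
    """One-hidden-layer network mapping normalized GNSS-R observables
    (plus ancillary data) to soil moisture in m3/m3. Placeholder weights."""
    W1 = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0]]
    b1 = [0.0, 0.0]
    w2, b2 = [1.0, 1.0], 0.0
    h = tanh_layer(observables, W1, b1)
    z = sum(w * hi for w, hi in zip(w2, h)) + b2
    return 0.6 / (1.0 + math.exp(-z))  # sigmoid scaled to 0–0.6 m3/m3

# Normalized [equivalent reflectivity, incidence angle, topographic ancillary]:
sm = ann_sm([0.5, 0.2, 0.1])
print(round(sm, 3))
```

The bounded output activation is one simple way to keep retrievals inside the physically plausible soil moisture range.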
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Enhancing Spatio-Temporal Soil Moisture Monitoring through Integrated Cosmic-Ray Neutron Sensors (CRNS) and Sentinel-1 C-Band Observations

Authors: HAMI SAID AHMED, PhD Modou MBAYE, PhD Edson RAMIREZ, PhD Noure Eddine AMENZOU
Affiliations: Soil Water Management and Crop Nutrition Laboratory, Joint FAO/IAEA Centre of Nuclear Techniques in Food and Agriculture, Department of Nuclear Sciences and Applications, Centre d'Etude Régional pour l'Amélioration de l'Adaptation à la Sécheresse (CERAAS), Institut Sénégalais de Recherche Agricole (ISRA), Universidad Mayor de San Andrés (UMS), Direction Etudes et Recherche Scientifique (DERS) Division Eau et Climat (DEC) CNESTEN (Centre National de l'Energie, des Sciences et des Techniques Nucléaires)
Soil moisture is a key variable for sustainable management of agricultural water resources, hydrological modeling, groundwater recharge, and flood and drought forecasting. Despite the availability of various techniques to measure soil moisture, achieving high-resolution monitoring of its variability across space and time remains a significant challenge. This limitation profoundly impacts climate resilience, sustainable agriculture, and water resource management. This study introduces an innovative approach that combines an in situ nuclear technique, Cosmic-Ray Neutron Sensors (CRNS), with Sentinel-1 C-Band satellite data to enhance spatio-temporal soil moisture monitoring. As a ground-based sensor, the CRNS provides precise landscape-scale soil moisture measurements over a footprint of approximately 10 hectares, effectively bridging critical gaps in traditional point-based observations. Sentinel-1 satellite data provide high spatial resolution (10 meters) and temporal resolution (12 days), enabling the upscaling of localized soil moisture observations to larger spatial domains. By using advanced machine learning algorithms and geostatistical models, this study seeks to combine CRNS and Sentinel-1 data, aiming to produce enhanced soil moisture maps with improved accuracy and reliability. Preliminary findings show that combining CRNS and Sentinel-1 data significantly reduces uncertainties in soil moisture estimates derived from Sentinel-1, particularly in areas with complex soil texture and diverse vegetation. The CRNS data serve as a robust validation tool, ensuring precise calibration of satellite-derived soil moisture products. Furthermore, the global CRNS network provides a framework for consistent validation, improving the reliability of remote sensing applications. 
The CRNS-Sentinel-1 integration represents a scalable and adaptable methodology for accurate soil moisture mapping, supporting sustainable land management, climate-smart agriculture, and global climate action. It represents a significant advancement in soil moisture monitoring, demonstrating the potential of combining nuclear-based ground sensors with satellite observations. This approach is particularly valuable in climate-vulnerable regions, where precise soil moisture data can guide strategies for sustainable agriculture, drought mitigation, and effective water resource management. Future research will focus on automating data fusion processes, improving temporal resolution, and expanding this approach to other key environmental variables essential for building climate resilience.
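One simple way to anchor Sentinel-1 retrievals to the CRNS footprint average is a linear mean/variance rescaling, sketched below. This is a deliberately minimal stand-in for the machine-learning and geostatistical fusion described in the abstract, and the time series are invented.

```python
import statistics

def rescale_to_crns(s1_series, crns_series):
    """Match the mean and standard deviation of a Sentinel-1 soil moisture
    time series to the CRNS footprint-average series (a simple linear
    calibration; one of many possible fusion schemes)."""
    mu_s, sd_s = statistics.mean(s1_series), statistics.pstdev(s1_series)
    mu_c, sd_c = statistics.mean(crns_series), statistics.pstdev(crns_series)
    return [mu_c + (v - mu_s) * sd_c / sd_s for v in s1_series]

s1 = [0.10, 0.20, 0.30, 0.40]    # Sentinel-1 retrievals (m3/m3)
crns = [0.18, 0.22, 0.30, 0.34]  # CRNS footprint averages (m3/m3)
calibrated = rescale_to_crns(s1, crns)
print([round(v, 3) for v in calibrated])
```

After rescaling, the satellite series inherits the CRNS mean and variance while retaining the 10 m spatial pattern of the radar observations.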
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: CIMR L2 Soil Moisture Algorithm Development and Testing

Authors: Moritz Link, Dr. Roberto Fernandez-Moran, Martin Baur, Dr. Thomas Jagdhuber, Dr. Dara Entekhabi, Dr. Lanka Karthikeyan, Jean-Pierre Wigneron, Thomas Lavergne, Dr. María Piles
Affiliations: Image Processing Laboratory (IPL), University of Valencia, University of Cambridge, Microwaves and Radar Institute, German Aerospace Center (DLR), Massachusetts Institute of Technology (MIT), Centre of Studies in Resources Engineering, Indian Institute of Technology Bombay, INRAE, Norwegian Meteorological Institute
The Copernicus Imaging Microwave Radiometer (CIMR) is one of six Copernicus Expansion Missions, planned for launch in 2029. CIMR will provide multi-frequency passive microwave observations in the L-, C-, X-, K-, and Ka-bands. With an expected 95% daily global coverage, high radiometric fidelity, multi-resolution measurement capabilities, and local overpass times around 6 AM and 6 PM, CIMR will provide unique opportunities for global soil moisture and vegetation monitoring. This study introduces the current developments for the planned CIMR L2 soil moisture retrieval algorithm. Two soil moisture products are planned: one based on L-band measurements in their native resolution (<60 km) and one based on L-band measurements sharpened with C-/X-band measurements at an enhanced resolution (~15 km). In both cases, the retrieval is based on inverting the tau-omega model, a zeroth-order solution to the radiative transfer equation over land. We discuss the calibration of algorithm parameters to the CIMR viewing geometry, the sharpening of CIMR L-band measurements with higher resolution C-/X-band measurements, the quantification of retrieval algorithm uncertainties, and the potential inclusion of land surface temperature estimates from CIMR K-/Ka-band measurements in the soil moisture inversion. The algorithm is tested based on satellite data time series and simulated data. We also discuss the planned CIMR L2 multi-frequency microwave vegetation indicators, which shall characterize microwave vegetation interactions at L-, C-, and X-bands. To ensure a maximum degree of consistency, the vegetation indicators are planned to be estimated alongside the enhanced resolution soil moisture product. The Copernicus Imaging Microwave Radiometer is currently being implemented by the European Space Agency (ESA) and the European Commission (COM) to support the Integrated European Policy for the Arctic. 
The CIMR algorithms discussed in this study are developed as part of the CIMR L2PAD project. The algorithm theoretical baseline documents (ATBDs) and code will be hosted publicly at https://github.com/CIMR-L2PAD in the future.
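The tau-omega inversion at the heart of the retrieval can be sketched as follows: the zeroth-order forward model gives brightness temperature from soil reflectivity, vegetation optical depth tau and single-scattering albedo omega, and since TB decreases monotonically with reflectivity, a simple bisection recovers r from an observed TB. Parameter values are illustrative (a CIMR-like 55° incidence angle), and the final step from reflectivity to soil moisture via a dielectric mixing model is omitted.

```python
import math

def tau_omega_tb(ts, r, tau, omega, theta_deg):
    """Zeroth-order tau-omega brightness temperature (single polarization),
    assuming equal soil and canopy temperature ts: soil emission attenuated
    by the canopy, plus upward and reflected downward canopy emission."""
    gamma = math.exp(-tau / math.cos(math.radians(theta_deg)))
    return ts * ((1.0 - r) * gamma
                 + (1.0 - omega) * (1.0 - gamma) * (1.0 + r * gamma))

def invert_reflectivity(tb, ts, tau, omega, theta_deg, tol=1e-9):
    """Bisection for soil reflectivity r in [0, 1], exploiting that TB
    decreases monotonically as reflectivity increases."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tau_omega_tb(ts, mid, tau, omega, theta_deg) > tb:
            lo = mid  # modeled TB too warm -> reflectivity must be higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: forward-model a TB, then invert it back to reflectivity.
tb = tau_omega_tb(ts=290.0, r=0.25, tau=0.12, omega=0.05, theta_deg=55.0)
r = invert_reflectivity(tb, ts=290.0, tau=0.12, omega=0.05, theta_deg=55.0)
print(round(r, 4))
```

In the operational algorithm the same inversion is solved jointly with vegetation parameters and constrained by the multi-frequency observations.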
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Fiducial Reference Measurements for Soil Moisture (FRM4SM) and their Contributions to the Science Community

Authors: François Gibon, Alexander Gruber, Daniel Aberer, Peter Balogh, Wouter Dorigo, Irene Himmelbauer-Quast, Johanna Lems, Wolfgang Preimesberger, Alexander Boresch, Monika Tercjak, Arnaud Mialon, Philippe Richaume, Yann Kerr, Nemesio Rodriguez-Fernandez, Raffaele Crapolicchio, Klaus Scipal
Affiliations: Centre d’Etudes Spatiales de la BIOsphère (CESBIO) Université de Toulouse CNES/CNRS/INRAE/IRD/UPS, Department of Geodesy and Geoinformation, Vienna University of Technology (TU Wien), Applied Science, Software and Technology (AWST) GmbH, European Space Agency - ESA/ESRIN
Fiducial Reference Measurements (FRMs) are defined as a suite of independent, fully characterized, and traceable ground measurements that follow metrological principles as described in the guidelines from the GEO/CEOS Quality Assurance framework for Earth Observation (QA4EO). In recent years, ESA has funded several FRM-related activities, aiming at establishing independent reference measurements suitable to derive satellite validation results and retrieval uncertainty estimates over the entire mission duration, to provide users the required confidence in satellite data products. ESA’s “Fiducial Reference Measurements for Soil Moisture” (FRM4SM) project, which was launched in May 2021, is one such activity aiming to establish the FRM concept in the soil moisture community. By developing new quality indicators and quality control procedures for in situ soil moisture measurements, in particular estimates of spatial station representativeness, a first “soil moisture FRM” dataset has been drawn from the International Soil Moisture Network (ISMN; https://ismn.earth/), which is currently the most extensive public SM reference data base holding in situ measurements from networks all around the world. At the same time, FRM4SM actively collaborates with other researchers, stakeholders, projects, and initiatives to lay out guidelines and protocols for the installation, calibration, operation, and maintenance of future in situ measurement networks to be compliant with the recently-developed CEOS-FRM principles. 
The FRM4SM project also maintains and evolves the Quality Assurance for Soil Moisture (QA4SM; https://qa4sm.eu/) framework, which is a cloud-based online validation service that provides user access to the latest available soil moisture FRM dataset (and other state-of-the-art satellite and model-based soil moisture products), and implements the validation good practice guidelines that are developed by the SM community and are endorsed by the CEOS Working Group on Calibration and Validation. QA4SM allows users to quickly and effortlessly obtain independent, openly-accessible, archived, and citable estimates of the uncertainties in their soil moisture data products. Lastly, FRM4SM builds on several SMOS-based science case studies, which have helped to identify limitations and shape requirements for “FRMs” for soil moisture by addressing various questions related to sampling, representativeness, and scale errors while improving our understanding of soil moisture retrieval quality across the globe, especially over organic soils. In this talk, we will present how FRM4SM and QA4SM have benefited the scientific community thus far, and provide an outlook on planned future activities, both through FRM4SM and other joint efforts, to address the most pressing needs of the satellite soil moisture community.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Analyzing the Impact of Land Surface Temperature on Downscaling Satellite Soil Moisture in a Two-Step Machine-Learning Framework

Authors: Thomas Himmer, Wolfgang Preimesberger, Wouter Dorigo
Affiliations: TU Wien, Research Group Climate and Environmental Remote Sensing, Department of Geodesy and Geoinformation
Soil moisture (SM) is a critical component of the Earth system, influencing climatological and hydrological processes by regulating water, carbon, and energy fluxes between land and atmosphere. The ESA Climate Change Initiative (CCI) provides a long-term, global record of SM with daily temporal resolution and a spatial resolution of ~25 km (0.25°) by merging multiple EO-derived SM datasets from different sensors into a harmonized product. While the primary strength of the CCI SM record lies in its extensive time span and global coverage, a downscaled high-resolution product would benefit applications requiring SM data at local scales such as hydrological modeling, forecasting hydro-climatic extremes, crop yield estimation, and accounting for land-atmosphere interactions in regional weather models. Here we propose a downscaling framework which improves the grid resolution of the ESA CCI SM product first to ~5 km (0.05°) and in a second step to ~1 km (0.01°) through machine learning, incorporating static and dynamic ancillary data influencing the spatial distribution of SM at this finer scale. The existing procedure uses the Normalized Difference Vegetation Index (NDVI) as the only input variable with temporal dynamics; however, its impact on the model prediction remains regionally constrained. This study explores the potential of Land Surface Temperature (LST) as a dynamic covariate to improve the model’s downscaling accuracy. LST, tightly coupled to SM through evapotranspiration, offers complementary information to refine spatial SM patterns. Using gap-filled datasets for both the original CCI SM and the MODIS-derived LST for better coverage, we produce the downscaled SM for two study regions in Europe and perform validation against in situ data to evaluate the benefit of incorporating LST into the model.
Preliminary findings suggest that LST achieves greater information gain on the spatial distribution of SM, enhancing the downscaling framework’s accuracy and addressing regional limitations of the NDVI as a predictor. This research provides insights into how leveraging LST for downscaling can enhance high-resolution SM estimation, which benefits numerous applications that require fine-scale SM information for local and regional modeling and forecasting.
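The two-step scheme described above can be sketched as follows. This is a minimal illustration with synthetic arrays: a random-forest regressor stands in for the actual downscaling model, the three covariates (e.g. NDVI, LST, elevation) and the grid sizes are purely hypothetical, and no real CCI SM data are used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

def downscale_step(coarse_target, coarse_covariates, fine_covariates):
    """Train covariates -> SM on the coarse grid, then predict on the finer grid."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(coarse_covariates, coarse_target)
    return model.predict(fine_covariates)

# synthetic stand-ins: cells per grid at ~0.25, ~0.05 and ~0.01 degrees (sizes arbitrary)
n_coarse, n_mid, n_fine = 400, 2000, 8000
covs_coarse = rng.random((n_coarse, 3))     # e.g. NDVI, LST, elevation (scaled)
sm_coarse = 0.4 * covs_coarse[:, 0] - 0.2 * covs_coarse[:, 1] + 0.05 * rng.random(n_coarse)

covs_mid = rng.random((n_mid, 3))
sm_mid = downscale_step(sm_coarse, covs_coarse, covs_mid)   # step 1: 0.25 deg -> 0.05 deg
covs_fine = rng.random((n_fine, 3))
sm_fine = downscale_step(sm_mid, covs_mid, covs_fine)       # step 2: 0.05 deg -> 0.01 deg
print(sm_fine.shape)
```

The point of the two stages is that each model only has to bridge a modest resolution gap, with the intermediate ~5 km field serving as the training target for the ~1 km step.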
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.31/1.32)

Session: B.03.09 Community-Led Climate Initiatives: Share and discuss successful grassroots climate action projects from around the world

This session highlights innovative, community-driven projects from around the globe that are addressing the climate crisis at a grassroots level. Bringing together initiatives focused on local climate action, sustainable agriculture, renewable energy, and green infrastructure, the panel will showcase how communities are driving transformative change in response to the climate and ecological emergency. Attendees will hear about a variety of initiatives, from European networks that support community-led climate action to cooperative renewable energy solutions, community gardening projects, and rural innovation ecosystems empowering women in sustainable agriculture. The session will also discuss technology-driven platforms for optimizing urban spaces to support green energy and biodiversity. By sharing these diverse experiences, the panel aims to inspire and equip participants with practical models and actionable insights that they can apply to empower their own communities. This session will be valuable for activists, local government representatives, and anyone interested in supporting grassroots solutions that address climate change, foster resilience, and create positive environmental and social impact at the local level.

Moderators:


  • Sara Aparício - Solenix c/o ESA
  • Federico Rondoni - Starion c/o ESA

Speakers:


  • Katja Arzberger - Obststadt Wien
  • Michaela Burger - Obststadt Wien
  • Johanna Roniger - CliMate Austria
  • Pietro Maroè - SuPerAlberi
  • Marina Mattera - Adaptation Agora


Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.34)

Session: A.02.07 Monitoring grasslands and rangelands from space - PART 2

Grasslands and rangelands account for a large share of globally managed agricultural landscapes and provide essential ecosystem services such as carbon sequestration and water retention. At the same time, they represent a critical source of biomass for animal feed and support the livelihoods of, for instance, pastoralists and other livestock producers. Grasslands and rangelands provide habitats for a wide range of floristic and faunistic species and therefore play a crucial role in the context of climate change and biodiversity. A key challenge is to find a balance between maintaining or enhancing ecosystem services and functions – which are influenced by the intensity of utilization and management – while at the same time meeting a growing population's demand for meat and dairy products, as well as pressures from other land uses. This puts grassland/rangeland ecosystems on the agenda of land managers and nature conservationists, and prominently on that of international policies. Therefore, accurate Earth Observation (EO) based mapping and monitoring approaches are crucial to deepen our understanding and to support evidence-based policy development and evaluation. While remote sensing-based approaches have already proven valuable for mapping and monitoring the intensity of grassland/rangeland use over large areas, several challenges, such as tracking the degradation and recovery of these ecosystems, remain unresolved and call for new solutions.

We welcome contributions that either focus on the testing and implementation of novel EO data and methods for the assessment of rangelands and grassland management and condition, or that provide insight into the valorization of established technologies for innovative EO-based services in support of climate action and sustainable management. These could include but are not restricted to:

- multisource imaging / data fusion / time series
- estimation of grassland yields / biomass / quality and other relevant biophysical variables
- degradation and recovery of grasslands and rangelands
- differentiation of pastures / meadows
- rangeland/grassland use intensity, resilience and carrying capacity
- grassland use in the context of biodiversity / climate change
- monitoring and evaluation of agricultural and environmental policies
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.34)

Presentation: RETRIEVAL OF ALPINE GRASSLAND TRAITS THROUGH HYPERSPECTRAL FIELD AND SPACE-BORNE SENSORS

Authors: Rodolfo Ceriani, Dr. Monica Pepe, Natalia B. Lizana-Saavedra, Cristina Gavazzeni, Marta Galvagno, Gianluca Filippa, Alberto Tamburini, Francesco Fava
Affiliations: Department of Environmental Science and Policy, Università degli Studi di Milano, IT, Department of Agricultural and Environmental Sciences, Università degli Studi di Milano, IT, Institute for Electromagnetic Sensing of the Environment, Italian National Research Council, IT, Agenzia Regionale per la Protezione dell’Ambiente della Valle d’Aosta, IT
In the Italian Alps, mid-to-high altitude pastures are undergoing significant abandonment and degradation, threatening the sustainability of livestock production at both farm and regional scales. Addressing this issue requires adaptive grazing management practices supported by informed policies to promote sustainable use of pasture resources. However, implementing these strategies is hindered by a lack of reliable, real-time data on pasture conditions during the grazing season. Current monitoring systems, based on multispectral satellite data, provide reasonable estimates of forage availability but fall short in assessing critical agronomic traits such as protein content, fiber composition, and lignin levels, which are essential for optimal grazing management. This study examines the potential of hyperspectral sensors, both field-based and satellite-based, to estimate pasture biomass and key traits of semi-natural mountain pastures in the Italian Alps. These traits include fresh biomass, dry matter, protein, fiber, and lignin content. A field study was conducted in 2024 at two high-altitude Alpine sites: Valsavarenche (Torgnon, AO, Italy) and Val Camonica (Passo Crocedomini, BS, Italy), situated at approximately 1500 meters above sea level. These semi-natural pastures exhibit high species diversity and high spatial heterogeneity. The field campaigns followed an experimental design scheme to ensure comparability with multispectral (Sentinel-2) and hyperspectral (PRISMA, EnMAP) satellite data. Each site included 10 elementary sampling units (ESUs), each measuring 60 × 60 meters. Each ESU was subdivided into four plots (30 × 30 meters). Within each plot, a linear transect of 10 meters × 0.1 meters (1 m²) was established. Along this transect, five leaf area index (LAI) measurements were recorded using an LAI-2200c instrument, and five spectral measurements were collected with an ASD FieldSpec 4 spectroradiometer under clear-sky conditions around solar noon.
Following spectral and LAI data collection, the green biomass along the transect was harvested for further laboratory analysis. Biomass samples were analyzed using two approaches: near-infrared spectroscopy (NIRS) to measure dry matter (DM), crude protein content (CP), and acid detergent and neutral detergent fiber content (ADF and NDF), and laboratory analytics with an elemental analyzer to estimate nitrogen content. Four field campaigns were conducted following this experimental scheme. In total, 128 vegetation spectra and corresponding agronomic traits were collected, and one cloud-free PRISMA image was acquired over Valsavarenche on 4 July 2024. Preliminary work has assessed the performance of Partial Least Squares Regression (PLSR) models for estimating LAI, fresh and dry biomass, nitrogen, and fiber content, with 5-fold cross-validation. The PLSR models demonstrated promising predictive accuracy, with results as follows: i) LAI (R² = 0.76, RMSE = 0.91 m²/m²); ii) fresh biomass (R² = 0.75, RMSE = 158.7 g/m²); iii) dry biomass (R² = 0.72, RMSE = 43.7 g/m²); iv) nitrogen content, both as percentage (R² = 0.55, RMSE = 0.28 g/g) and at canopy level (R² = 0.75, RMSE = 5.6 g/m²); v) ADF, both as percentage (R² = 0.68, RMSE = 1.34 g/g) and at canopy level (R² = 0.72, RMSE = 12.54 g/m²); vi) NDF, both as percentage (R² = 0.62, RMSE = 2.16 g/g) and at canopy level (R² = 0.71, RMSE = 21.03 g/m²). Next, we will explore a range of additional machine learning techniques, comparing various algorithms for hyperspectral data analysis. These techniques will include support vector machines (SVM), boosted random forests (BRF), and Gaussian process regression (GPR) models. Additionally, different spectral indices will be explored, ranging from well-known vegetation indices (VIs) such as NDVI, NDRE, and MSAVI, to hyperspectral simple ratios.
These methods will be evaluated based on accuracy, robustness, and computational efficiency, including their performance when applied to our PRISMA imagery. By generating initial maps of pasture traits, we aim to assess not only computational performance and robustness but also to identify spatial patterns influenced by different management practices. This analysis will help determine the most suitable approaches for processing hyperspectral imagery in the context of pasture monitoring and management. All of these efforts aim to ensure readiness for when operational missions, like the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME), become available. Once CHIME imagery is accessible, these optimized methods will enable the generation of real-time, high-resolution maps across the vegetative and grazing seasons, potentially supporting innovative adaptive grazing management by providing up-to-date information on forage biomass availability and nutritional status.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.34)

Presentation: Assessing Subalpine Grassland Mortality in the Czech Republic Using Satellite and UAV Multispectral Imagery Combined With Machine Learning Techniques

Authors: Olha Kachalova, PhD Jakub Houška, PhD Radim Hédl
Affiliations: Masaryk University, The Silva Tarouca Research Institute for Landscape and Ornamental Gardening, Institute of Botany of the Czech Academy of Sciences
Rapid anthropogenic climate change induces responses in all natural ecosystems, with outcomes that are often challenging to predict. Of particular concern are climate trends such as rising average annual temperatures, shifts in precipitation patterns, and an increase in extreme weather events, including droughts. Alpine and subalpine ecosystems are particularly vulnerable, not only due to their high exposure to global warming effects, including reductions in snow cover, but also because of their relict, spatially fragmented nature and limited extent, which reduce their resilience and adaptability. Hrubý Jeseník, an isolated mountain range in northeastern Czech Republic, is a notable example. Part of the Sudetes, it reaches 1,491 m above sea level (Praděd peak) and is characterized by a narrow ridge (~10 km) and isolated peaks. The forest line is at 1,100–1,200 m, and forestless areas are dominated by subalpine tundra vegetation, including grasslands and heathland. Over the last century, the climate in Hrubý Jeseník has changed significantly, with the average annual temperature increasing by over 2°C, which exceeds average warming trends in the lowlands of the Czech Republic. Observations over the past 100 years document a rising forest line, while snow cover has significantly decreased over the last decades, leading to dramatic changes in cold-adapted subalpine grassland ecosystems, such as the sudden grassland mortality recently reported. Using remote sensing techniques, including satellite and UAV-derived multispectral and hyperspectral imagery combined with digital elevation model (DEM) analysis, we analyzed the timing, extent, progression, and potential causes of grassland mortality events in Hrubý Jeseník. 
Subalpine grasslands in the region are represented by two main types: short-stemmed alpine grasslands (Natura 2000 class 6150), with three subtypes (wind-swept Avenella flexuosa-dominated, typical Festuca supina-dominated, and Nardus stricta-dominated), and subalpine tall grasslands (Natura 2000 class 6430), mainly dominated by Calamagrostis villosa and Molinia caerulea, which often form a mosaic. As a first step, we conducted high-resolution mapping (0.5 m) of the vegetation using UAV-borne CASI/SASI hyperspectral imagery (25 July 2022) classified with a Random Forest algorithm based on >750 field-verified training polygons. The classification achieved an overall accuracy of 0.93 (kappa = 0.90), detecting all key vegetation types, dead biomass, and various stages of grassland damage and recovery. The Normalized Difference Moisture Index, NDMI = (B842 nm − B2190 nm) / (B842 nm + B2190 nm), proved to be the most reliable for identifying dead biomass and can be derived from Sentinel-2 and Landsat data, allowing retrospective analysis of grassland mortality. Data processing was conducted in the Google Earth Engine (GEE) platform with Python and JavaScript scripting using Sentinel-2 L2A (2015–2024) and Landsat 5 TM/7 ETM+ (1987–2022) data collections. Images were filtered to represent the vegetation season (1 June–30 September), and cloud masks were applied. The NDMI index was calibrated for each sensor using temporal overlap (CASI/Sentinel-2 in July 2022, Sentinel-2/Landsat in September 2015). Grassland mortality events were identified using a change detection algorithm; derived index value grids were synthesized into 30-day composites to map and calculate the extent of grassland mortality. This analysis identified three major grassland mortality events in Hrubý Jeseník: 1. June 2003: Grassland dieback on Jelení Hřbet peak (4.32 ha), followed by rapid regeneration during September, with full recovery in the next growing season. 2.
Early spring 2012: Persistent dieback (3.44 ha) between Pecný and Břidličná peaks. Recovery was slow and fluctuating, completed only by 2023. 3. May 2019: Extensive damage on Velký Máj (19.7 ha) and Jelení Hřbet (7.6 ha). Recovery was partial, with some areas (2.6 ha) showing delayed regeneration until 2024. To identify which vegetation types were particularly affected, we conducted retrospective mapping of subalpine vegetation in the southern part of Hrubý Jeseník for years preceding dieback events, using Random Forest classification in the ArcGIS Pro environment. For this, we created composite Landsat 7 ETM+ images, consisting of B, G, R, NIR, SWIR1, and SWIR2 bands for three time periods: 1–10 May, 20–30 July, and 20–30 October. Each band was pansharpened to 15 m resolution and combined into a single multispectral image to capture phenology-determined spectral characteristics of vegetation types, improving classification accuracy. Training data were derived from the previously created 2022 vegetation map, assuming minimal vegetation change between 2019 and 2022. Training polygons of 3 × 3 pixels were selected to represent key grassland types, along with associated land cover and mixed vegetation. The classification achieved an overall accuracy of 0.91 (kappa = 0.82), which was considered good, taking into account the medium resolution of the input imagery. Similar multitemporal composites were created for 2002 and 2011 and classified using the same training set. Retrospective vegetation mapping revealed that mortality primarily affected short-stemmed grasslands, accounting for 92% of pixels in the zones later affected, while tall grasslands remained undamaged. This suggests that morphological traits, particularly more extensive and deeper root systems, provide tall grasses with an advantage during drought. Moreover, Nardus-dominated grasslands showed the fastest recovery in subsequent seasons compared to other subtypes.
Analysis of data from the nearest meteorological stations linked these events to extreme weather conditions. For example, the 2003 event coincided with a severe spring precipitation deficit (<50% of the long-term average) and anomalously high summer temperatures (+5–10°C above average). The 2012 event followed a prolonged dry period (March–June) and extreme winter freezing in February, which likely caused soil frost damage. In 2019, the event followed a dry previous growing season and localized microclimatic and geomorphological variability. These findings suggest that extreme weather events were the primary drivers of grassland mortality; however, local geomorphological and microclimatic factors also played significant roles, as only specific grassland patches were affected during each event. Machine learning models (Gradient Boosting) were applied to evaluate the influence of geomorphological parameters derived from a 5 m resolution DEM using the SAGA-GIS application. Key predictors included wind exposure (feature importance 13.4%), curvature (7.2%), distance to the ridge (3.2%), elevation (5.6%), and flow accumulation (1.6%). Vulnerable areas were mostly convex ridge tops with high wind exposure and limited moisture retention due to wind-driven evaporation and reduced snow cover. These findings provide a basis for developing monitoring systems and predictive models to understand the dynamics of subalpine ecosystems under changing environmental conditions, with potential applications for similar regions in Central Europe.
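The NDMI-based change detection described above can be sketched in a few lines. This is a toy example on synthetic reflectance grids: the band values, the simulated dieback patch, and the 0.2 drop threshold are all illustrative, not the study's calibrated values; B842 and B2190 correspond to Sentinel-2 bands B8 (NIR) and B12 (SWIR).

```python
import numpy as np

def ndmi(nir, swir):
    """Normalized Difference Moisture Index: (B842 - B2190) / (B842 + B2190)."""
    return (nir - swir) / (nir + swir)

rng = np.random.default_rng(1)
# synthetic before/after reflectance grids for one 50 x 50 pixel scene
nir_before = rng.uniform(0.3, 0.5, (50, 50))
swir_before = rng.uniform(0.1, 0.2, (50, 50))
nir_after = nir_before.copy()
nir_after[:10, :10] = 0.15          # simulated dieback patch: NIR reflectance collapses
swir_after = swir_before

# flag pixels whose NDMI dropped sharply between the two composites
drop = ndmi(nir_before, swir_before) - ndmi(nir_after, swir_after)
dead = drop > 0.2                    # illustrative change-detection threshold
print(dead.sum())                    # number of pixels flagged as likely dead biomass
```

In the actual workflow the before/after pair would be two 30-day NDMI composites from the cloud-masked Sentinel-2/Landsat collections, and the threshold would be calibrated per sensor.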
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.34)

Presentation: Assessing climate change impacts on Mongolia's steppe using MODIS data (2000–2023)

Authors: Justinas Kilpys, Mr. Egidijus Rimkus, Ms. Soyol-Erdene Tseren-Ochir
Affiliations: Vilnius University, Institute of Geosciences, National University of Mongolia, Department of Environmental and Forest Engineering
Mongolia's expansive steppes are highly vulnerable to the impacts of climate change. These impacts are manifested through shifts in temperature, altered precipitation patterns, and increased frequency of extreme weather events. Climate change can disrupt natural ecological processes, threatening the stability and productivity of these ecosystems. Monitoring these changes in the vast steppes of Mongolia's territory is challenging. This study aims to bridge this information gap by using remote sensing data. We utilized MODIS satellite datasets, including the Normalized Difference Vegetation Index (NDVI), Gross Primary Productivity (GPP), and land cover data from 2000 to 2023. To evaluate the impact of climate change on Mongolian steppes, we used high-resolution climate data from ERA5-Land reanalysis. We upscaled all datasets to the unified spatial grid (0.1° × 0.1°) and used this database to evaluate spatial and temporal variations of steppe productivity in the country. Our findings indicate that Mongolia's mean annual temperature has risen by 0.81°C over the past 24 years, with the most pronounced warming occurring during the cold season, particularly in March. Spatial analysis revealed that the Gobi Desert and Central Highlands experienced the highest temperature increases. Meanwhile, mean annual precipitation showed an overall increase of 10%, although this trend was not statistically significant. Seasonal patterns indicate increased precipitation during summer, especially in July and August, while a minor decrease was observed during winter. Northern and mountain areas experienced the most significant annual and seasonal precipitation increases. The analysis of steppe productivity using GPP and NDVI revealed an increasing trend, particularly from July to September in northern and eastern Mongolia. This trend correlates strongly with increasing summer precipitation, with correlation coefficients ranging from 0.58 to 0.64. 
The lowest GPP was observed in 2007, characterized by the most severe drought and high temperatures during the study period. The highest GPP was observed in 2021, which experienced abundant rainfall. These results underscore the impact of seasonal and interannual climate variations on steppe ecosystem productivity. GPP and NDVI showed positive correlations with temperature during spring and autumn, while summer productivity was more closely linked to precipitation. Extreme climate indices (heat waves, cold spells, wet days, and others) were also used to determine how they affect steppe primary production dynamics; these indices were observed to correlate more strongly with NDVI and GPP at a 1–2 month lag. Despite the observed relationships with mean and extreme temperature and precipitation, not all spatial trends in GPP and NDVI can be explained by climate change. Soil degradation and steppe overgrazing significantly offset some of the potentially positive climate impacts on vegetation growth. Animal husbandry's impact on Mongolian steppes weakens the resilience of these ecosystems to extreme weather events such as droughts and dzuds (severe winter conditions).
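The lag analysis mentioned above can be illustrated with a lagged Pearson correlation. The series below are synthetic: a stand-in monthly precipitation index and an NDVI series built with an artificial 2-month delayed response, so the sketch only demonstrates how a best-fitting lag is found, not the study's results.

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation of x(t) with y(t + lag); lag in time steps (months)."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

rng = np.random.default_rng(3)
n = 240                                  # 20 years of monthly values
precip_index = rng.normal(size=n)        # stand-in extreme-precipitation index
# synthetic NDVI that responds to precipitation with a 2-month delay plus noise
ndvi = np.roll(precip_index, 2) + 0.3 * rng.normal(size=n)

# pick the lag (0-3 months) with the strongest correlation
best = max(range(0, 4), key=lambda l: lagged_corr(precip_index, ndvi, l))
print(best)
```

With real data one would compute this per pixel or per region, and the reported 1-2 month lags would emerge as the maxima of such lagged-correlation curves.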
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.34)

Presentation: Assessing the age of permanent grassland with time series from Sentinel-2 and Landsat-5/8 imagery

Authors: Tatjana Veljanovski, Matic Lubej, Žiga Lukšič, Anže Zupanc, Matic Pečovnik, Matej Batič, Ana Potočnik Buhvald, Peter Pehani, Aleš Marsetič, Krištof Oštir, Grega
Affiliations: Research Centre of the Slovenian Academy of Sciences and Arts - ZRC SAZU, Sinergise Solutions d.o.o., University of Ljubljana, Faculty of Civil and Geodetic Engineering
Background
Information on grassland permanency is important for understanding the condition and stability of grassland ecosystems and can be used to guide conservation and management actions. It refers to the consistency with which grassland is maintained as grassland over a longer period. From an ecological perspective, the persistence of grassland contributes positively to plant species richness and resilience to disturbances such as climate variability, serving as an indicator of the biodiversity quality of a landscape or grassland ecosystem. Our aim in this study was to determine the persistence of permanent grassland in Slovenia as a function of age (i.e., the number of years in which the grassland remains undisturbed by other land uses) and to reveal spatio-temporal patterns associated with conservation or signs of change or degradation. The aim of the work, proposed under a Statistical Office of the Republic of Slovenia and Eurostat grant, was to develop methods for extracting improved statistical geoinformation from Earth observation data. A comprehensive approach to quantify key parameters of grassland utilization was developed, allowing for annual statistical reporting. However, estimating the age of the grassland and the number of mowing operations proved to be the most feasible for regular monitoring of agricultural grasslands in Slovenia. The proposed workflow for estimating grassland age, or the grassland permanency indicator, is transferable to all European permanent grassland, as the reference samples for grassland are provided. Area-wide monitoring of such a grassland parameter with satellite data enables evaluation and quantification of the effectiveness of grassland conservation in the agricultural landscape. Typical permanent grassland in Slovenia is semi-natural and has a high natural and aesthetic value.
We used time series of Sentinel-2 and Landsat 5/8 satellite imagery for the period between 2000 and 2021 to identify the annual presence of bare soil rather than tracking the continuous presence of grass. Using a machine learning-based bare soil marker (BSM) model (developed as part of the Horizon 2020 projects Perceptive Sentinel, NIVA and Dione, and the ESA-funded Sen4CAP project), we detected ploughing and similar events by observing exposed bare soil on grassland.
Study Area, Data and Methods
The study focuses on Slovenia, a European country with a total area of 20,282 km². Forests cover 59% and grasslands 18%, of which 17% is permanent grassland with a total area of 3,445 km². The area of interest, representing the entire permanent grassland area in Slovenia, was extracted from the LPIS/Records of Actual Use of Agricultural and Forest Land layer. Slovenia ranks among the top European countries for the share of permanent grassland in total grassland (92%). However, the fragmentation of the parcels poses a challenge for monitoring with Sentinel-2 and Landsat data, as some parcels are narrower than a single pixel and thus too small to create meaningful overlap summaries. Therefore, a pixel-based approach was used. For assessing grassland age, data from the optical satellites Landsat 5 (2000-2011), Landsat 8 (2013-2021), and Sentinel-2 (2017-2021) were collected and processed with Sentinel Hub facilities and the eo-grow library [1]. A supervised machine learning model, trained on manually labeled data of agricultural parcels in Slovenia (collected in 2019), was used alongside the s2cloudless detector [2] to ensure the validity of the time series. The bare soil collection for arable land in Slovenia was prepared as part of Area Monitoring activities and includes up to 26,000 labeled Sentinel-2 image samples from 2019.
This dataset was used to curate a training dataset for the bare-soil marker, which contains both bare-soil and non-bare-soil observations, including samples for arable land and grassland. The BSM model uses a Light Gradient Boosting Machine (LGBM) classifier, pre-trained to distinguish bare-soil from non-bare-soil observations [3]. The model takes four Sentinel-2-based vegetation indices as input: the Normalized Difference Vegetation Index (NDVI), Normalized Bare-Soil Index (NBSI), Normalized Difference Vegetation Index with Red Edge 3 (NDVIRe3), and Chlorophyll Index with Red Edge (CLRe). In the case of Landsat data, the input set is limited to NDVI and BSI. The model assigns a probability to each valid Sentinel/Landsat observation that indicates the presence of bare soil. By applying a simple condition of at least two consecutive bare-soil observations in an observation year, annual bare soil maps are generated, and bare soil counts at the pixel level are summarized to derive permanent grassland age information.
Results
The results show that the proposed workflow can successfully detect the presence of bare soil in Sentinel-2 and Landsat 5/8 time series, help to distinguish permanent from non-permanent grassland, and allow the indicator of grassland permanence to be established. The results, presented as national statistics aggregated by administrative regions at the NUTS2 and NUTS3 levels, indicate that 98% of all permanent grasslands in Slovenia have remained unchanged over the period of two decades. However, significant regional differences exist – while some areas experienced changes of less than 0.3%, others saw an almost 5% loss of permanent grassland. The results illustrate the status, dynamics, and trends of agricultural grassland permanence and change in Slovenia over the last two decades. Our findings indicate that information on grassland permanence at the national level is particularly valuable for official statistics and nature conservation stakeholders.
For the former, to fulfil reporting obligations, and for the latter, to have evidence-based information on the long-term monitoring of grassland ecosystems and to design appropriate response measures when needed. The proposed workflow was developed for a specific application – to determine the long-term sustainability of the agricultural class of permanent grassland. In other countries, where grassland may not be as well preserved, it could reveal the severity, extent and dynamics of grassland degradation. In the presentation, we will:
- highlight the importance of grassland conservation as a key ecological parameter in monitoring grassland ecosystems,
- introduce the bare-soil marker methodology (operational use of the model to determine grassland age, its validation and potential for large-scale application) and the data processing workflow,
- discuss implications such as model training, harmonization of Sentinel-2/Landsat 5-8 data, uncertainties in the nationwide modelling, and challenges in validating age,
and finally, we will:
- comment on the results at the level of annual bare-soil layers and on what their NUTS-aggregated counterparts reveal about the long-term spatio-temporal dynamics and grassland conditions,
- comment on regional differences reflecting data-driven results versus potential development factors in these regions.
The presentation will summarize our general experience in establishing the indicator of grassland permanence and contextualize the potential and limitations of using Earth observation data and machine learning models to support monitoring of grassland ecosystem conservation and/or degradation.
References:
[1] Sentinel Hub eo-grow - Earth observation framework for scaled-up processing in Python. https://github.com/sentinel-hub/eo-grow
[2] Sentinel Hub Cloud Detector for Sentinel-2 images in Python. https://github.com/sentinel-hub/sentinel2-cloud-detector
[3] Area Monitoring — Bare Soil Marker. Sentinel Hub Blog.
https://medium.com/sentinel-hub/area-monitoring-bare-soil-marker-608bc95712ae
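The "at least two consecutive bare-soil observations per year" rule and the resulting age indicator can be sketched as follows. The per-observation probabilities, the 0.5 threshold, and the year range are hypothetical stand-ins for the trained LGBM classifier's output, shown only to illustrate the aggregation logic.

```python
import numpy as np

def annual_bare_soil(probs, threshold=0.5):
    """Flag a pixel-year as bare soil if >= 2 consecutive observations exceed threshold."""
    flags = probs >= threshold
    return bool(np.any(flags[:-1] & flags[1:]))

# hypothetical per-observation bare-soil probabilities for one pixel over one year
grassland = np.array([0.1, 0.2, 0.6, 0.1, 0.3, 0.2])   # one isolated detection -> not bare
ploughed  = np.array([0.1, 0.7, 0.9, 0.8, 0.2, 0.1])   # consecutive detections -> bare
print(annual_bare_soil(grassland), annual_bare_soil(ploughed))

# grassland "age" for one pixel: years elapsed since the last bare-soil year
years_bare = np.array([False, False, True, False, False, False])  # e.g. years 2000..2005
if years_bare.any():
    age = len(years_bare) - 1 - np.flatnonzero(years_bare).max()
else:
    age = len(years_bare)   # never disturbed within the observed period
print(age)
```

Requiring two consecutive detections suppresses isolated false positives (e.g. residual cloud or haze), while the per-pixel year-of-last-disturbance directly yields the permanence indicator aggregated to NUTS regions.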
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.34)

Presentation: Examining the Potential of EnMAP Hyperspectral Imager in Estimating Sour-and-sweetveld Grass Quality Under Senescence Condition

Authors: Miss Anita Masenyama, Professor Onisimo Mutanga, Professor Timothy Dube, Dr Mbulisi
Affiliations: Discipline of Geography, School of Agricultural Earth and Environmental Sciences, University of KwaZulu-Natal, Private Bag X01 Scottsville, 3209 Pietermaritzburg, South Africa, Discipline of Geography, School of Agricultural Earth and Environmental Sciences, University of KwaZulu-Natal, Private Bag X01 Scottsville, 3209 Pietermaritzburg, South Africa, Institute of Water Studies, Department of Earth Sciences, University of the Western Cape, Private Bag X17, Bellville 7535, South Africa, Department of Geography, Environmental Studies & Tourism, Faculty of Arts, University of the Western Cape, Private Bag X17, Bellville 7535, South Africa
The grassland biome in South Africa is characterized by sourveld and sweetveld grasses, which experience variations in nutritional quality during the senescence phenological stage. Sourveld grasses show a marked reduction in forage quality during senescence owing to the translocation of foliar nutrients to the roots, whereas sweetveld grasses do not translocate nutrients to the roots and thus retain their nutritional quality throughout the year. Despite this important distinction, the grass quality dynamics of these velds during senescence are poorly understood, especially from a remote sensing perspective. The newly launched spaceborne hyperspectral imager (HSI) on board the Environmental Mapping and Analysis Program (EnMAP) mission provides untapped prospects for spectrally detecting the discrete characteristic features of senescent grass quality over broader spatial extents. The objective of this study was to assess the utility of EnMAP HSI in estimating grass quality, as determined by canopy nitrogen (N) content, across sour-and-sweetveld grasslands during the senescent stage. To achieve this objective, we tested the utility of (i) the full spectrum, (ii) narrow-band vegetation indices, and (iii) a combination of spectral bands and vegetation indices, based on the random forest (RF) algorithm. The results obtained from EnMAP HSI were compared with those derived using a ground-based spectroradiometer. Overall, the findings of this study illustrate that the use of combined spectral bands and vegetation indices, coupled with RF, improved the prediction accuracies of senescent grass N across both velds. Based on the combined dataset, grass N was estimated from EnMAP data with an R2 of 0.77 and RMSE of 0.12% for sourveld, and an R2 of 0.73 and RMSE of 0.17% for sweetveld.
Meanwhile, grass N was estimated with higher accuracy from spectroradiometer spectral derivatives, yielding model accuracies of R2 = 0.88, RMSE = 0.06% and R2 = 0.81, RMSE = 0.09% for sour- and sweetveld grasses, respectively. The optimal variables for estimating grass N were derived from the visible and shortwave infrared regions of the electromagnetic spectrum. Although there were differences between the accuracies obtained with the two sensors, the magnitude of variation (RMSE differences of 0.06% and 0.08%) was within an acceptable range. In this regard, the findings of this study demonstrate the potential of EnMAP HSI in advancing the discrete detection and estimation of senescent grass quality over space and time. This is crucial for generating spatially explicit information that can be used to understand the spatial variability of grass quality across different veld types, which is essential for monitoring forage provisioning ecosystem services. Considering that grass quality becomes critical during senescence compared to other phenological stages, the results obtained in this study are useful to rangeland managers to inform sustainable grazing management strategies. Since grasslands provide the primary feed base for grazing mammals that support the livelihoods of many low-income communities, the results of this study contribute to the United Nations' Sustainable Development Goals 1 and 2, which advocate for no poverty and zero hunger, respectively.
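The band-plus-index modelling strategy described in the abstract can be sketched with generic tools. The following is an illustrative example on synthetic data, with hypothetical band positions for the index, not the authors' actual EnMAP pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for hyperspectral pixels: 200 samples, 10 "band" reflectances
bands = rng.uniform(0.05, 0.5, size=(200, 10))

# A narrow-band NDVI-style index from two hypothetical band positions (NIR, red)
ndvi = (bands[:, 8] - bands[:, 3]) / (bands[:, 8] + bands[:, 3])

# Combined feature set: spectral bands plus the vegetation index
X = np.column_stack([bands, ndvi])

# Synthetic canopy nitrogen (%) loosely driven by the index, plus noise
y = 1.0 + 2.0 * ndvi + rng.normal(0.0, 0.05, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"R2 on held-out pixels: {r2_score(y_test, rf.predict(X_test)):.2f}")
```

The same pattern (stacking raw bands with derived indices before fitting the regressor) generalizes to any number of narrow-band indices.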
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.34)

Presentation: Grassland Yield Estimation Based on Sentinel-2 Time Series and Comparison to Process-Based Modelling and Statistics in Southern Germany

Authors: Sophie Reinermann, Dr Carolin Boos, Dr Andrea Kaim, Dr Sarah Asam, Dr Anne Schucknecht, Ursula Gessner
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), Institute of Meteorology and Climate Research, Atmospheric Environmental Research (IMK- IFU), Karlsruhe Institute of Technology, Helmholtz Centre for Environmental Research - UFZ, Department of Computational Landscape Ecology, OHB System AG
Managed grasslands dominate the landscape in southern Germany, as in many regions of Europe. They are used for livestock fodder production and provide a variety of further ecosystem services, such as carbon storage, water filtration and the provision of habitats. Consequently, grasslands play a major role in mitigating climate change and biodiversity loss. In southern Germany, grasslands are constantly and often highly intensively managed: they are mown up to six times per year with varying mowing dates and are partly also grazed. The management intensity strongly determines grassland productivity and the provision of other ecosystem services, and there is often a trade-off between high yields and ecosystem services relevant for ecology and nature conservation. However, grassland management as well as productivity and other ecosystem services are usually unknown. Grassland harvests are usually not weighed and traded and, therefore, spatially explicit data on grassland yields are lacking. Earth observation has the potential to inform on grassland yields, as it enables extensive and reproducible monitoring of grassland biomass. However, in mown grasslands, biomass is characterized by recurring removal and regrowth during the growing season. Multi-temporal biomass estimates as well as information on mowing dates are therefore required to estimate annual grassland yields. In this study, we built and optimized an empirical extreme gradient boosting model based on Sentinel-2 data and above-ground biomass field measurements acquired in 2019-2021 in southern Germany to estimate grassland biomass. We included mowing dates of an existing multi-annual grassland mowing product in the modelling process to support the biomass estimations.
Apart from the time since the last mowing event or since the start of the growing season, Sentinel-2 reflectance bands from the green, red, blue, near-infrared and short-wave infrared domains, as well as the Enhanced Vegetation Index, the tasseled cap wetness index and the acquisition date, were used as model input features. By applying the model to Sentinel-2 time series of 2019, multi-temporal biomass was mapped. Afterwards, we combined the multi-temporal biomass estimates with the mowing dates and calculated the accumulated annual yield. During this step, we aggregated the pixel-based biomass information to field level by using the 95th percentile of the biomass derived over the field area. We investigated the yield maps for the study region in southern Germany and compared the results to grassland yield products from different sources. On the one hand, grassland yields resulting from the bio-geochemical process-based model LandscapeDNDC, optimized for the study region, were used for comparison. On the other hand, yield maps were derived from reference values from the Bavarian State Institute for Agriculture (LfL), coupled with mowing information and data on the stocking rate, as used by administrative authorities. In addition, the relationship of yield to influencing factors, such as mowing frequency, temperature and precipitation, was examined. The satellite data-based extreme gradient boosting model achieved an R² of 0.97 (RMSE = 0.18 t/ha) during training and an R² of 0.68 (RMSE = 0.19 t/ha) on an independent testing data set. The most important input features were a short-wave infrared reflectance band, the tasseled cap wetness index and the time since the last mowing event or start of season. The estimated annual yields based on the Earth observation approach reached plausible values of 3-10 t/ha for the study region in southern Germany.
The LandscapeDNDC approach and the approach based on reference values resulted in annual grassland yields of around 4-9 t/ha. The Earth observation approach showed the largest range of modelled yields, while its overall yields were the smallest of the three approaches. The yield maps show patterns of annual grassland yields with similarities and differences between the three approaches, and these patterns follow the mowing frequency rather than patterns of climatic conditions. The mowing frequency was also the influencing factor with the highest correlation coefficient (Pearson r = 0.81 for the Earth observation approach) for all three yield estimation approaches. The largest differences between the estimated yields from the different approaches were found for grasslands mown once or twice per year. The study demonstrates the capability of estimating annual grassland yields based on Earth observation data and field measurements. The comparison with two other approaches, used within the scientific community and by administrative authorities respectively, aids in evaluating the results and reveals advantages and limitations of the various yield estimation approaches. Using Earth observation enables grassland yield monitoring without the large amounts of input data and calibration needed for bio-geochemical models, while capturing intra- and interannual variations, in contrast to approaches based solely on reference values.
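The yield-accumulation step described in the abstract (field-level 95th percentile of pixel biomass, summed over mowing events) can be sketched as follows. The numbers and the simple "last estimate before each cut" rule are illustrative assumptions, not the study's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical field: 50 pixels, biomass (t/ha) estimated at 12 Sentinel-2 dates
dates = np.arange(12)                      # acquisition index within the season
biomass = rng.uniform(1.0, 4.0, size=(50, 12))
mowing_dates = [3, 7, 10]                  # detected mowing events (date indices)

# Aggregate pixel biomass to field level using the 95th percentile per date
field_biomass = np.percentile(biomass, 95, axis=0)

# Annual yield: accumulate the last field-level estimate before each mowing event
annual_yield = sum(field_biomass[dates < m][-1] for m in mowing_dates)
print(f"accumulated annual yield: {annual_yield:.1f} t/ha")
```

With real data, the biomass array would come from the gradient boosting model's per-date predictions and the mowing dates from the multi-annual mowing product.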
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.14)

Session: A.08.05 Measuring and Monitoring Winds

Global ocean surface winds force the oceans through momentum, heat fluxes and gas exchanges, and furthermore drive deep vertical exchange processes in both the atmosphere and the ocean. This intimate coupling of the ocean and the atmosphere drives both atmospheric and oceanic circulations that determine our weather and climate, yet ocean-atmosphere exchanges over 70% of the Earth's surface remain poorly understood. Winds further impact our lives in many ways, from offshore energy provision to global trade and aviation.

Altimeters, scatterometers and radiometers have long provided marine wind data, with some efforts made to achieve multi-decadal, self-consistent datasets suitable for addressing issues of climate change. In addition, new instruments such as Aeolus provide a wealth of data not seen before, enabling greater insight into the dynamics of storms and the monitoring of high-altitude winds. These also pose challenges for their optimum use in weather forecasting models and for understanding Earth's climate dynamics on the longer term. Papers are invited covering all aspects of the remote sensing of winds, the calibration and quality control of the data, and their analysis to better understand the Earth's climate or the dynamical processes occurring in individual storms.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.14)

Presentation: SAR-learned Scatterometer Resolution Enhancement for Tropical Cyclones

Authors: Ad Stoffelen, Weicheng Ni, Jur Vogelzang, Xingou Xu, Ke Zhao
Affiliations: KNMI, NUDT, Retired, MISR Lab, CMA
Close monitoring of Tropical Cyclone (TC) inner-core structures contributes to a better understanding of TC thermodynamics and development, especially when TCs undergo processes such as amplification, shearing and eyewall replacement. The wind scatterometer constellation measures ocean surface winds with good spatial and temporal coverage, but TC inner structures are largely blurred by the typically 25-km Wind-Vector-Cell (WVC) footprints. In this study, the Two-Dimensional Variational (2DVAR) system is exploited to enhance the TC inner-core structure by "learning" background spatial error covariances from high-resolution Synthetic Aperture Radar (SAR) winds. We find that the length-scale values of the stream function are close to the radii of maximum wind speed, and that the length-scale values of the velocity potential depend on TC asymmetry scales. All these parameters can be provided by ASCAT scatterometer data alone. Experimental results prove that the proposed method can enhance the TC inner-core structures and thus achieve super-resolution for ASCAT. These promising results contribute to our long-term goal of developing a general method for providing TC inner-core structures from all available scatterometer winds for nowcasting, allowing close temporal monitoring of TC winds. Current work therefore includes consolidating progress in Bayesian wind and rain estimation and in Ku-band scatterometer resolution enhancement for tropical cyclones using 2DVAR.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.14)

Presentation: Developing a long-term consistent wind speed dataset from altimeters

Authors: Dr Graham Quartly
Affiliations: PML
Radar altimeters record the return from nadir reflections, with the overall strength of the return over the instrument footprint designated σ0 (sigma0). Over the ocean this is an indicator of the roughness at the scale of the radar wavelength, and is usually interpreted as wind speed, although it may also signify the presence of rain, slicks or sea ice. Sigma0 is also used in algorithms to infer wave period from altimeters. Over land ice, such as Greenland and Antarctica, this measure of backscatter strength provides information on the compactness and moisture content of the overlying snow. However, the present record is far from uniform, owing to different scalings being used for different instruments and to gradual degradation during their operating lifetimes. There can also be issues with the calibration used in the on-ground processing. In the second phase of the Sea State CCI, the consortium is considering the sigma0 measurements as well as the wave height record. The paradigm of consistency of the altimeter backscatter record at Ku- and C-band is used both to determine biases between non-contemporaneous instruments and to ascertain any temporal changes in the dual-frequency data from the TOPEX-Jason-Sentinel-6 time series. These, in turn, are used via crossovers to provide a reference for the other Ku-band altimeters. The methodology provides a reliable alignment of the records from the TOPEX-A and TOPEX-B instruments across the switchover between them in February 1999. A particular issue with the Jason-2 altimeter is ascribed to errors in the AGC settings, which is not found for its sister instruments. The apparent 59-day periodicity in Ku-band calibration noted for TOPEX and Jason is also noted for Sentinel-6, which has a very different satellite architecture. This suggests that the effect is not instrumental but an artefact of aliasing of the diurnal cycle in sea surface temperature.
The fully homogenised σ0 record will be released as part of the v5 output of the Sea State CCI.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.14)

Presentation: Making Sentinel-1 SAR surface wind suitable for offshore wind application using in-situ observations and Machine Learning

Authors: Henrick Berger, Romain Husson, Dr. Marie Cathelain
Affiliations: CELAD, CLS
The availability of accurate and reliable sea surface wind speed is a growing necessity for human activities, whether coastal or offshore. From the offshore wind industry to meteorological reanalysis validation, in-situ measurements from anemometers and lidars are essential, but they are available only in limited locations and time periods and involve high installation and maintenance costs. The scarcity of in-situ measurements and the limited accuracy of Numerical Weather Prediction (NWP) models (Sheridan et al. 2024, Wind Energy Science) can be compensated for by complementing them with SAR observations, enabling improved wind resource assessment (WRA) techniques. In the present study, we show how Sentinel-1 (S1) Level-1 products provided by the European Space Agency (ESA), thanks to their high spatial resolution and all-weather, night-and-day coverage, can be significantly improved for use in WRA estimations, in terms of both accuracy and precision. Estimates of annually averaged wind production are indeed very sensitive to small wind speed errors in the input measurements and therefore require particular attention. A 1% error in the wind speed exceedance probability can indeed contribute significant errors to the annual energy production, leading to millions of dollars in gains or losses for operators, as discussed at the 82nd Topical Expert Meeting of the International Energy Agency Wind. To improve the accuracy of S1-derived sea surface wind speeds worldwide, an advanced processing methodology is presented. The resulting products are compared to NWP model outputs and in-situ observations, as well as to the baseline OCN dataset derived from the ESA processor available on the Copernicus datastore. The quality of ESA wind products is not homogeneous over time because of the evolution of the Level-1 and Level-2 processors as well as of the Geophysical Model Functions (GMFs) used (e.g. IFR2 until 2019, then CMOD5.n).
Even homogeneously processed SAR winds, such as the SATWINDS product processed with the SAROPS tool (Badger et al. 2022), can present large biases due to bright targets and non-wind-related features not removed from the backscattered signal. Therefore, we build a homogeneously processed SAR surface wind dataset from December 2015 to December 2022 over the coast of the USA. We use an in-house Level-2 wind processor ("Sartool-wind") derived from the ESA Level-2 processor with two modifications: (i) an additional bright target detection in cross-polarization to reduce most of the remaining bright targets, and (ii) the use of the CMOD7 GMF rather than CMOD5.n to improve performance at low wind speeds. GMFs are empirical relationship functions depending on wind speed and wind direction at 10 m above sea level (a.s.l.), and on the azimuth and incidence angles of the SAR backscatter signal. As GMFs are derived from scatterometers and not built specifically for SAR, they present uncertainties due to different resolutions and different levels of backscattered signal (i.e., no inter-calibration). CMOD7 is an update of CMOD5.n whose objective is to improve wind speeds associated with low backscatter signals, but also to improve the overall quality of the inverted wind compared to CMOD5.n (Stoffelen et al. 2017). Second, from this homogeneous dataset, we perform a recalibration of the wind retrieved from each sub-swath of the Interferometric Wide swath (IW) images to remove wind discontinuities and mean-level biases caused by non-geophysical radiometric steps and acquisition geometry differences between sub-swaths. This recalibration is based on a comparison at 1 km resolution between co-localized 10 m SAR winds and ECMWF winds (used as the a priori wind in the Bayesian algorithm, from 0.25° 3 h or 0.1° 1 h data depending on the period).
To account for the evolution of Level-1 product calibration, we split our dataset by Instrument Processing Facility (IPF) version and by the originating sub-swath of each pixel to derive a recalibration function depending on the off-boresight angle. The resulting wind fields for each SAR image qualitatively show reduced discontinuities and mean-level mismatches compared to ESA OCN products. The original and sub-swath-recalibrated winds are then quantitatively compared to surface wind measurements from NOAA NDBC wave buoys. Over 22,560 co-localized data points, the mean bias is reduced from –0.63 to –0.36 m/s, the mean absolute error (MAE) is reduced from 1.07 to 0.96 m/s and the R2 correlation increases from 0.85 to 0.87, with other statistics such as the standard deviation and root-mean-square error unchanged. Third, from the swath-recalibrated data, we develop a surface correction algorithm based on XGBoost machine learning between the 10 m SAR-derived wind, extrapolated to 4 m using a power law, and NDBC surface measurements (mostly at around 4 m). This correction aims to address GMF errors: owing to their empirical nature, GMFs may not fully capture the complexity of the interactions between sea state and wind speed. The surface correction model is trained with a large database of buoy measurements covering the diversity of possible SAR incidence and azimuth angles; 25% of the data are kept for validation. As it only corrects biases related to the sensor geometry, this model is expected to be independent of location and can be used to correct SAR observations all around the world. Compared to the 3,400 remaining independent buoy data points, the performance of the corrected wind is greatly improved, with the bias reduced from –0.33 to 0.01 m/s, the standard deviation reduced from 0.99 to 0.85 m/s, the MAE reduced from 0.82 to 0.63 m/s and the correlation increased from 0.89 to 0.93.
However, our model cannot fully correct wind speeds below 3.5 m/s: results tend to be overestimated in this domain, while original ESA OCN products tend to be underestimated. A residual bias also exists at high wind speeds, due to the scarcity of co-localized winds above 12 m/s; corrected SAR results remain underestimated there, although less so than the original ESA OCN data. Finally, because the analysis of NDBC/SAR co-localized winds shows similar biases at low and high wind speeds, we add a mean statistical compensation to the SAR-corrected surface winds. From all co-localized winds, we derive a compensation function of wind speed from comparisons of NDBC and corrected SAR quantiles, and apply this correction to all individual SAR wind fields. The compensation ranges from –0.6 m/s for the lowest (overestimated) SAR wind speeds to –0.1 m/s around 3 m/s; at wind speeds above 12 m/s, it is constant at ~0.15 m/s. The final SAR dataset presents a bias that is slightly increased from 0.01 to –0.03 m/s, while the overall statistics remain unchanged. However, the resulting low wind speeds are no longer overestimated and the high-wind-speed behaviour is improved. This methodology thus demonstrates its ability to greatly improve Sentinel-1 SAR surface wind measurements, yielding unbiased winds with limited spread compared to in-situ measurements. There is still room for improvement by filtering non-wind-related signatures (e.g. rain effects, residual interference) and by using a massive reprocessing of the Sentinel-1 archive to rely on the most accurate and homogeneous data calibration. Such high-quality SAR wind data can hence be used for offshore wind applications to: (i) estimate offshore wind atlases at 10 m a.s.l., and (ii) estimate wind at hub height using advanced extrapolation methodologies combined with atmospheric numerical models (De Montera et al., Wind Energy Science 2021; Optis et al., Wind Energy Science 2021; Cathelain et al.
Journal of Physics: Conference Series 2023).
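Two generic building blocks of this kind of pipeline — power-law height extrapolation and quantile-based compensation — can be sketched as below. The shear exponent, the synthetic wind distributions and the interpolation details are illustrative assumptions, not the operational Sartool-wind processing:

```python
import numpy as np

def power_law_extrapolate(u_ref, z_ref=10.0, z=4.0, alpha=0.11):
    """Extrapolate wind speed from height z_ref to z with a power-law profile.
    alpha ~ 0.11 is a commonly assumed open-ocean shear exponent."""
    return u_ref * (z / z_ref) ** alpha

# Quantile-matching compensation: map SAR wind quantiles onto buoy quantiles
rng = np.random.default_rng(1)
buoy = rng.weibull(2.0, 5000) * 8.0        # synthetic buoy reference winds
sar = rng.weibull(2.0, 5000) * 8.0 + 0.4   # synthetic SAR winds with an offset

q = np.linspace(0.01, 0.99, 99)
sar_q, buoy_q = np.quantile(sar, q), np.quantile(buoy, q)

# Compensation as a function of wind speed, applied by linear interpolation
corrected = sar + np.interp(sar, sar_q, buoy_q - sar_q)
print(f"bias before: {sar.mean() - buoy.mean():+.2f} m/s, "
      f"after: {corrected.mean() - buoy.mean():+.2f} m/s")
```

In the study, the compensation function is derived from NDBC/SAR quantile comparisons and the height extrapolation feeds the XGBoost surface correction; here both steps are reduced to their simplest form.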
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.14)

Presentation: Analysis of the Scatterometer Ocean Surface Measurement Dependency on Sea State

Authors: Federico Cossu, Ana Trindade, Anton Verhoef, Dr. Marcos Portabella, Ad Stoffelen
Affiliations: Institut de Ciències del Mar (ICM-CSIC), Royal Netherlands Meteorological Institute (KNMI)
Recent Synthetic Aperture Radar (SAR) work confirms the different ocean surface scattering mechanisms for vertically (VV) and horizontally (HH) co-polarized measurements of the Normalized Radar Cross Section (NRCS). In particular, VV modulation is well described by so-called Bragg scattering of electromagnetic waves on wind-generated gravity-capillary ocean waves. The winds retrieved from the Advanced Scatterometer (ASCAT) on board the Metop series, which indeed prove to align very closely with the local wind vector, are based on VV polarization. The second-generation EUMETSAT scatterometer, SCA, will however also measure HH and cross-polarized (VH and HV) normalized radar cross sections. It is therefore of interest to also investigate the other ocean surface electromagnetic scattering mechanisms, associated with horizontally polarized microwaves. The HH NRCS is now generally assumed to be the sum of a Bragg part and a non-Bragg part. The response of the sea surface is then based on the short-wave spectrum (wavelengths from a few millimetres to a few metres) and on wave-breaking statistics. As the non-Bragg scattering is associated with breaking waves and hence the ocean mean slope distribution, waves longer than the short-wave spectrum come into play, as they modulate the ocean slopes. This implies that the HH NRCS depends both on the local wind and on non-locally generated waves that propagate through the scatterometer Wind Vector Cell (WVC). While SAR VV, VH and HH NRCS and Doppler observations are in line with the above mechanisms, it is difficult to quantitatively assess the statistical effect on scatterometer wind retrieval, as a rather complete distribution of winds and sea states would be needed. Ku-band scatterometers do employ HH NRCS measurements.
Moreover, the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI SAF) processes intercalibrated C-band ASCAT and Ku-band OSCAT2 (on board ScatSat-1) and HSCAT (on board the HY-2 satellites) scatterometer winds, such that many collocations between VV-only and combined VV/HH scatterometers may be analyzed. The aim of this study is to thoroughly analyze sea state conditions in association with HH (and VV) NRCS. The objective is to prepare for SCA on board Metop Second Generation, but also to verify all current and past Ku-band scatterometers that actually use HH for wind retrieval. It is expected that the OSI SAF will shortly be processing five Indian and Chinese scatterometers employing HH at Ku- and/or C-band. In this work, the VV and HH retrieval residuals from the HY-2C and HY-2D scatterometers are analyzed separately and associated with sea state parameters from the European Centre for Medium-range Weather Forecasts (ECMWF) wave model (WAM). HH and VV residuals have been compared for different sea state conditions and different wave/wind directions over a period of three months. It is found that the HH residuals show a stronger signal than the VV residuals for a decaying sea state (swell) when the mean wave direction is opposite to the wind direction. This result supports the initial hypothesis of a wave-breaking contribution to the HH surface return signal.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 1.14)

Presentation: Multi-decadal Variability in Ocean Surface Wind Differences Between Scatterometer Observations and Reanalysis Model Fields

Authors: Rianne Giesen, Ad Stoffelen
Affiliations: Royal Netherlands Meteorological Institute (KNMI)
The ocean surface wind plays a key role in the exchange of heat, gases and momentum at the atmosphere-ocean interface. High-quality ocean surface wind records from scatterometers are available from 1991 onwards. Care has been taken to account for changes in scatterometer instrument types and spatial coverage over time, such that these records can be used to assess changes in the ocean surface wind over the past 30 years. On the other hand, modelled surface winds from global climate reanalyses (e.g. ERA) suffer from changes in the density and coverage of the observational time series used in the data assimilation process. Still, global numerical weather prediction (NWP) model wind fields are widely used in the computation of ocean surface processes to study climate trends and variability in ocean variables. A comparison of scatterometer observations and global NWP model wind fields reveals substantial, persistent local systematic errors in wind vector components and spatial derivatives. Temporally averaged gridded differences between geolocated scatterometer wind data and ERA/NWP wind fields can be used to correct for persistent local model wind vector biases. By combining these scatterometer-based bias corrections with global, hourly ERA/NWP wind fields, high-resolution wind forcing products can be created for the ocean modelling community and other users. In 2022, new hourly and monthly Level-4 (L4) surface wind products were introduced in the Copernicus Marine Service catalogue. These products include global bias-corrected 10-m stress-equivalent wind, surface wind stress fields and spatial derivatives. The bias corrections are calculated from Copernicus Marine Service Level-3 wind products for a combination of scatterometers and their collocated European Centre for Medium-range Weather Forecasts (ECMWF) model winds. We used the monthly multi-year L4 product to identify long-term changes in ocean surface wind differences over the period 1995-2024.
The spatial distribution of differences between scatterometer observations and the collocated ECMWF ERA5 reanalysis winds is found to be highly consistent between different scatterometers and over time. Remaining small differences between individual scatterometers could be caused by different instrument characteristics, sampling, coverage and processing, and may be further reduced by continued intercalibration efforts. Bias corrections for a single instrument display long-term variations of comparable magnitude to the scatterometer-model differences, which point to artificial changes in the ERA5 winds over time. Furthermore, regional bias anomalies are found for climate phenomena like the El Niño Southern Oscillation. These artificial features should be taken into account in any long-term reanalysis of ocean surface wind fields.
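The principle of a temporally averaged, gridded bias correction can be sketched with synthetic fields. The grid size, noise levels and persistent-bias model below are illustrative assumptions, not the Copernicus Marine Service L4 processing:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic gridded monthly fields: model winds plus a persistent local bias
nlat, nlon, nmonths = 18, 36, 24
true_bias = rng.normal(0.0, 0.5, size=(nlat, nlon))   # persistent local model error
model = rng.normal(7.0, 2.0, size=(nmonths, nlat, nlon))
scat = model + true_bias + rng.normal(0.0, 0.3, size=model.shape)  # "observations"

# The temporally averaged gridded difference serves as the bias-correction field
correction = (scat - model).mean(axis=0)

# Adding it to any model wind field yields a bias-corrected L4-style product
corrected = model + correction
```

Averaging over time suppresses the random observation noise while retaining the persistent local model error, which is why the correction field converges to the true bias as the record lengthens.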
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.14)

Session: A.01.11 Living Planet Fellowship Programme Coordination - PART 2

The Living Planet Fellowship is an ESA mechanism to support the next generation of scientists, at post-doctoral level, in undertaking cutting-edge research in Earth Observation, Earth System Science or Climate Research. It maximises the scientific return of ESA and European EO missions and datasets through the development of novel EO methods, techniques and products, promotes an open science approach, and delivers excellent scientific results addressing the grand Earth Science challenges of the next decade. The initiative is implemented through a number of dedicated research projects proposed and carried out by young scientists hosted by universities, laboratories and technical centres in ESA Member States.
This session is designed to allow the current fellows to present their results to ESA and, more importantly, to the other fellows, to share their experiences and to explore mechanisms for future collaboration across research institutions, with ESA and between individual researchers. The session forms the Annual Meeting of the Fellowship scheme and is structured around short individual presentations by each of the currently active fellowships, as well as those that have recently been completed, followed by a discussion of the Programme and next steps.

Atmosphere topic:


Monitoring Atmospheric Anomalies from Space: A Topological Approach - MAASTA


  • Laia Amoros – Finnish Meteorological Institute (FMI)

Earth Surface Impacts of Hydrological Extremes along Global Atmospheric River Networks - ARNETLAB


  • Tobias Braun – University of Leipzig

SPectroscopy In The Far InfraREd: Reducing uncertainties in spectroscopic line parameters for ESA’s FORUM mission - SPITFIRE


  • Daniel Coxon - University of Leicester

Burning questions on carbon emissions from fires - BURNQUEST


  • Ivar van der Velde - Netherlands Institute for Space Research (NWO-I)

Cryosphere topic:


Examining Greenland’s Ice Marginal Lakes Under A Changing Climate - GRimL


  • Penelope How – Geological Survey of Denmark and Greenland (GEUS)

UAV observations of BRDF and albedo over sea ice and snow - UAV-OBASIS


  • Henna Hannula – Finnish Meteorological Institute (FMI)

A Multi-Sensor Synthesis for the Spatiotemporal Quantification of Near-Surface Density across the Greenland Ice Sheet - EO4GHRO


  • Kirk Scanlan – Technical University of Denmark (DTU)

Ocean topic:


Impacts of Pyrogenic Aerosols on Plankton Ecosystems - PYROPLANKTON


  • Joan Llort Jordi – Barcelona Supercomputing Centre (BSC)

Phytoplankton and Fisheries Under Regional Warming in The Global Oceans - POSEIDON


  • John Anthony Gittings – National and Kapodistrian University of Athens

Combining a Stochastic LAgrangian Model of Marine Particles with ESA’s Big Data to Understand the Effects of a ChaNging Ocean on the PlanKtonic Food Web - SLAM DUNK


  • Anna Rufas – Department of Earth Sciences, University of Oxford
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.96/0.97)

Session: C.02.02 Heritage Missions and Long Time Data Series - PART 2

EO archives worldwide contain several decades of data acquired from a variety of missions and sensors. The value of these scientific time series is continuously growing. Information extracted from long time series of data is used to observe and characterise the current climate, to detect climate change and to determine the rate of change. Hence there is a strong need to secure state-of-the-art preservation of the growing volume of heritage data and associated information holdings, to allow their discoverability and accessibility through state-of-the-art technologies, and to ensure continuous improvement and valorisation to maximise usability and impact alongside current and future missions. The session focuses on activities and projects dealing with the preservation and valorisation of Earth observation heritage (historical) data, including new dataset generation, algorithm improvements, reprocessing and long time series generation through harmonisation and combination of data from multiple sensors.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Still improving the ERS-1, ERS-2 and ENVISAT altimeter and radiometer historical datasets: Towards a new version of the FDR4ALT products

Authors: Fanny Piras, Jérémie Aublanc, Malcolm McMillan, Jérémy Guilhen, Beatriz Calmettes, Frank Fell, Bruno Picard, Hélène Roinard, Fernando Niño, Ngan Tran, Sajedeh Behnia, Emma Woolliams, Annabelle Ollivier, Adrien Nigou, Kassem Asfour, Pablo Garcia, Joana Fernandes, Telmo Vieira, Michiel Otten, Ergane Fouchet, Tim Springer, Pierre Thibaut, Filomena Catapano, Pierre Femenias
Affiliations: CLS, Lancaster University, Informus, Fluctus, LEGOS, NPL, IsardSat, Porto University, Positim, Noveltis, ESA/ESRIN
In the framework of the European Heritage Missions Programme, which aims at generating innovative Earth system data records named Fundamental Data Records (level 1 altimeter and radiometer data) and Thematic Data Records (level 2+ geophysical products), the European Space Agency launched a reprocessing activity for the ERS-1, ERS-2 and ENVISAT altimeter and radiometer datasets. A large consortium of thematic experts was formed to take charge of these activities in the frame of the FDR4ALT (Fundamental Data Records for Altimetry) contract. This contract has concluded and achieved its main objectives, which were to 1) define thematic products including a long, harmonized record of uncertainty-quantified observations, 2) define the most appropriate level 1 and level 2 processing, 3) reprocess the whole time series according to the predefined processing and 4) validate the different products and provide them to large communities of users focused on the observation of the atmosphere, ocean, coastal, hydrology, sea ice and ice sheet regions. The FDR4ALT products are freely accessible via the ESA EO dissemination service. Based on the success of the FDR4ALT project, ESA decided to continue with a three-year follow-on activity to deliver a new version of the FDR4ALT products following the most recent recommendations and up-to-date algorithms, allowing end users to continue exploiting this 21-year time series that is extremely valuable for climate studies. 
Different technical activities are planned that should again lead to significant improvements, including the following:

  • An innovative relocation technique for land ice based on the AMPLI algorithm (Aublanc, 2024), developed specifically for LRM in the frame of this project
  • An up-to-date hydrological targets database for inland waters and distinct products for rivers and lakes
  • Investigations into an innovative correction linked to wave groups, called SSE (Sea-State Effect), and its potential application to 20 Hz ocean and coastal data
  • New orbits for ERS-1 and ERS-2
  • New wet tropospheric corrections (GPD+ and 1DVAR) that should improve performance, especially in coastal areas
  • Updated geophysical corrections aligned with GDR-G standards
  • A new two-dimensional Sea State Bias (SSB) solution for the ocean and coastal products
  • Investigations into ERS-2 data beyond the tape recorder failure
  • Analysis of the uncertainties, which may lead to new methods to compute them

The first part of this talk will recall the context and give a global overview of the FDR4ALT project and products. The talk will then focus on the follow-on activities, introducing the new consortium, presenting in detail the planned evolutions for the second version of the FDR4ALT products, and showing the preliminary results already obtained in the frame of this contract.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: 20+ years of atmospheric profiles from Canadian OSIRIS on Odin and ACE-FTS, MAESTRO on SCISAT

Authors: Cassandra Bolduc, Dr Marcus Dejmek, Taryn Tomlinson, Dr Konstantin
Affiliations: Canadian Space Agency
This presentation will highlight various projects that are increasing the value of two space-based datasets spanning more than 20 years of atmospheric measurements: the Optical Spectrograph and InfraRed Imaging System (OSIRIS) on the Swedish satellite Odin; and the Atmospheric Chemistry Experiment/Fourier Transform Spectrometer (ACE-FTS) and Measurement of Aerosol Extinction in the Stratosphere and Troposphere Retrieved by Occultation (MAESTRO) on SCISAT. Details such as data distribution, preservation, data merging efforts, and algorithm improvements will be presented. The Swedish satellite Odin was launched in 2001 into a low Earth orbit, with the Canadian OSIRIS and the SMR instruments on board. OSIRIS measures limb-scattered sunlight in the 280–800 nm spectral region and, by its scanning motion, produces profiles of various trace gases from the upper troposphere to the lower mesosphere. Its planned life expectancy was two years; 23 years later, it is still operating. Profiles of various trace gases, aerosols, and temperature are still being produced, and the data quality continues to be refined by the science team. OSIRIS data is available on various open data portals and continues to be incorporated into Canada’s periodic reports to the UN Scientific Assessment Panel of the Montreal Protocol, into Environment and Climate Change Canada (ECCC) air quality and climate models, and into peer-reviewed publications. SCISAT was launched in 2003 with the objective of measuring atmospheric profiles of ozone and ozone-depleting substances and, with its high-inclination orbit, of offering frequent sampling of northern latitudes. With a planned duration of two years and seven substances to be measured, SCISAT has greatly exceeded expectations, having celebrated its 20th anniversary last year and measuring 70 substances to this day. 
Thanks to the scientific and calibration/validation teams’ continued efforts over the last decades, SCISAT’s two instruments have provided an uninterrupted series of atmospheric composition profiles. These have been used in several UN-WMO reports on ozone recovery and in climate research. Its data is also regularly used by partner agencies for data calibration/validation and comparison, as well as in ECCC Earth system and chemistry models.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Reviving Historical IRS-P3 MOS Data With Adaptations to Modern Standards, Requirements, and Algorithms

Authors: Birgit Gerasch, Katrin Molch, Max Wegner, Peter Schwind, Mirco Tegler, Johannes Roeder, Galina Voinov, Dr. Leif Bergerhoff, Hendrik Zwenzner, Mirko Albani, Roberto Biasutti, Sergio Folco
Affiliations: German Aerospace Center (DLR), Earth Observation Center (EOC), ESA ESRIN, ESA Mission Management and Ground Segment Department, Starion Group
The Modular Optoelectronic Scanner (MOS) instrument was an experimental imaging spectrometer built by the German Aerospace Center (DLR) and launched in March 1996 aboard the Indian Remote Sensing Satellite P3 (IRS-P3) spacecraft. The mission was primarily dedicated to the study of ocean color and the atmosphere, while also providing observations over land. It ended on May 31, 2004. The instrument consisted of three independent spectrometers with a total of 18 channels in the visible and short-wave infrared range and a swath width of 200 kilometers. In addition to the sensor, DLR also developed the processor software and organized the mission operations. The mission was conducted in cooperation with the Indian Space Research Organisation (ISRO), the European Space Agency (ESA), and the US National Aeronautics and Space Administration (NASA). Considerable effort was put into ensuring the best data quality possible. An on-board solar diffuser allowed for an absolute radiometric calibration. Internal control lamps were available for long-term monitoring of the instrument, and regular dark measurements were performed. IRS-P3 MOS data constitute an important contribution to the CEOS Ocean Colour Radiometry Virtual Constellation. Its data narrow the temporal gap between the Coastal Zone Color Scanner (CZCS) on Nimbus 7, which flew from 1978 to 1986, and SeaWiFS, launched in September 1997. It also nicely complements the latter in terms of spectral and spatial resolution. The IRS-P3 MOS European Dataset Curation project under the ESA Heritage Space Programme aims to consolidate MOS data in the DLR and ESA archives and to create a unified European master dataset. The processors employed during the mission’s lifetime are being re-implemented and improved, integrating all correction algorithms developed at the time, such as stray-light correction and destriping. 
The data will be reprocessed to level-1B and to a new level-1C product, both in NetCDF, providing the basis for a possible future level-2 product. The products, as well as the required calibration and reference data, are processed into a new product structure with a harmonized product format and file naming convention. The aim of the project is to provide high-quality data products for the user community. This contribution discusses the advantages and challenges arising from the preparation and processing of historical satellite data in line with state-of-the-art standards and requirements.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: The TIMELINE Project: Unlocking Four Decades of AVHRR Data for Long-Term Environmental Monitoring in Europe

#stac

Authors: Stefanie Holzwarth, Dr. Sarah Asam, Dr. Martin Bachmann, Dr. Martin Böttcher, Dr. Andreas Dietz, Dr. Christina Eisfelder, Dr. Andreas Hirner, Matthias Hofmann, Dr. Grit Kirches, Detmar Krause, Dr. Julian Meyer-Arnek, Katrin Molch, Dr. Simon Plank, Dr. Thomas Popp, Philipp Reiners, Dr. Sebastian Rößler, Thomas Ruppert, Alexander Scherbachenko, Meinhard Wolfmüller
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), Brockmann Consult GmbH
The TIMELINE project (TIMe Series Processing of Medium Resolution Earth Observation Data assessing Long-Term Dynamics In our Natural Environment), led by the German Remote Sensing Data Center (DFD) at the German Aerospace Center (DLR), is an initiative aimed at harmonizing and leveraging four decades of AVHRR (Advanced Very High Resolution Radiometer) data. Since the 1980s, daily AVHRR observations at ~1.1 km resolution over Europe and North Africa have been systematically processed, calibrated, and harmonized to create a unique dataset for long-term environmental and climate-related studies covering more than 40 years. TIMELINE addresses key challenges in ensuring data consistency across different AVHRR sensors by correcting for satellite orbit variations and calibration drift. The resulting dataset is aggregated into Level 3 daily, 10-day, and monthly composites, minimizing noise and gaps to support robust and reliable time series analyses. The TIMELINE product suite includes key geophysical parameters such as the Normalized Difference Vegetation Index (NDVI), snow cover, fire hotspots, burnt area maps, Land Surface Temperature (LST), Sea Surface Temperature (SST), and cloud properties. Rigorous validation against independent Earth observation and in-situ datasets ensures product accuracy, enabling reliable trend analysis over decades. For instance, TIMELINE supports investigations of climate-related phenomena such as shifting vegetation green-up dates and urban heat island effects. To ensure accessibility and interoperability, TIMELINE products are freely available under an open data policy. By adopting SpatioTemporal Asset Catalog (STAC) metadata standards, the dataset is seamlessly integrable with modern Earth observation platforms, fostering broader use in research and applications. 
The project also emphasizes continuous improvement through iterative reprocessing and incorporation of the latest advancements in methodology, calibration, and data standards as well as through the integration of recent AVHRR data sets. Updated product versions are regularly released, ensuring that users have access to the most accurate and reliable information for their analyses. This conference contribution will provide a comprehensive overview of the TIMELINE project’s progress, innovations, and contributions to the Earth observation community.
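To illustrate the STAC metadata approach mentioned above, the sketch below builds a minimal STAC Item describing a hypothetical monthly composite. All identifiers, dates, and asset hrefs are invented for illustration; this is not actual TIMELINE metadata, only the generic Item shape that makes such products discoverable by standard STAC clients.

```python
# Minimal STAC Item skeleton for a hypothetical monthly NDVI composite.
# Per the STAC spec, an interval product sets "datetime" to null and
# describes the compositing window with start_datetime/end_datetime.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "ndvi-monthly-example",                    # hypothetical ID
    "geometry": {                                    # coarse footprint polygon
        "type": "Polygon",
        "coordinates": [[[-25.0, 20.0], [45.0, 20.0], [45.0, 72.0],
                         [-25.0, 72.0], [-25.0, 20.0]]],
    },
    "bbox": [-25.0, 20.0, 45.0, 72.0],
    "properties": {
        "datetime": None,
        "start_datetime": "2000-07-01T00:00:00Z",
        "end_datetime": "2000-07-31T23:59:59Z",
        "gsd": 1100.0,                               # ~1.1 km resolution
    },
    "assets": {
        "ndvi": {
            "href": "https://example.org/ndvi_monthly.tif",  # placeholder
            "type": "image/tiff; application=geotiff",
            "roles": ["data"],
        }
    },
    "links": [],
}
```

Because the Item is plain JSON-style metadata, it can be indexed by any STAC catalogue and searched by time range, bounding box, or collection, which is what makes STAC-described datasets integrable with modern EO platforms.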
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Long-time series for ERS-1/2 and Envisat SAR data using Analysis Ready and Composite products approach.

#cog

Authors: Dr. David Small, Fabiano Costantini, Clement Albinet, Dr Sabrina Pinori, Lisa Haskell
Affiliations: University of Zurich, Telespazio UK, ESA/Esrin, Serco Spa
Over the last few years, through the IDEAS-QA4EO service and now through the QA4EO-2 project, ESA’s Heritage Space Programme section has supported several activities relating to the continuous improvement and usability of archived heritage mission SAR data alongside current and future SAR missions. To that effect, Telespazio UK developed a CEOS-ARD prototype processor producing CEOS-ARD “Normalised Radar Backscatter” (NRB) output that supports the following:

  • Immediate analysis, by ensuring that CEOS-ARD requirements related to radiometric terrain correction (backscatter normalisation), projection of the DEM, etc. are implemented
  • Interoperability, by using the same gridding and DEM as the Sentinel-2 mission, thus expanding interoperability with Sentinel-1 and potentially the future Sentinel-1 NG, ROSE-L and BIOMASS missions
  • Cloud computation capability, by delivering the output product in the Cloud Optimised GeoTIFF (COG) format
  • Open science compliance, by developing open-source software for the processor

To ensure the correctness of the RTC computation, the IDEAS-QA4EO service undertook a project to support an open-source RTC processor led by the University of Zurich. The processor has been tested on thousands of Sentinel-1 backscatter products and supports input from calibrated GRD or SLC product types. The aim of SAR radiometric terrain correction (RTC) is to compensate for geometric slope distortion: knowledge of the topographic variations within a scene is used not only to orthorectify a SAR image into a map coordinate system, but also to correct for the influence of terrain on the image radiometry. Radiometric normalisation of SAR imagery has traditionally used the local incidence angle together with empirically derived coefficients to try to “flatten” the SAR imagery.

However, this approach has proven inefficient, as it is an oversimplified model of how terrain slope distorts local radar backscatter. A better method, based on SAR image simulation, has been developed to model the local area “seen” by each pixel in the plane perpendicular to the look direction, accounting inherently for foreshortening, layover, and shadow distortions. A map of that local area is used to “flatten” the radar image by performing the normalisation from radar cross section (RCS) to normalised radar cross section (NRCS) in the form of terrain-flattened gamma nought. As a further step towards the generation of long-term data series, Telespazio UK has started a project under QA4EO-2, with support from the University of Zurich and Aresys, to develop a composite backscatter processor for Envisat/ASAR, with possible later extension to ERS-1/2 heritage data. In this processor, multiple relative orbits (or tracks) are integrated into a single backscatter value representing a set time window. To that end, the local area maps used for the normalisation in the previous RTC step are employed again, this time to calculate for each pixel the appropriate local weighting factors of each track’s local backscatter contribution. The time window can be moved regularly forward in time, and the calculations repeated with newer data to generate multiple seamless wide-area backscatter estimates. The benefits of composite products are well known to users of data from optical sensors: cloud-cleared composite reflectance or index products are commonly used as an analysis-ready data (ARD) layer. Until now, no analogous composite products based on spaceborne radar backscatter signals have been in widespread use. In this work, we present a methodology to produce wide-area ARD composite backscatter images. They build on the existing heritage of geometrically and radiometrically terrain corrected (RTC) level-1 products.

By combining backscatter measurements of a single region seen from multiple satellite tracks (including ascending and descending), they can provide wide-area coverage with low latency. The analysis-ready composite backscatter maps provide flattened backscatter estimates that are geometrically and radiometrically corrected for slope effects. A mask layer annotating the local quality of the composite resolution is introduced. The multiple available tracks (even from multiple sensors observing at the same wavelength and polarisation) are combined by weighting each observation by its local resolution. The process generates seamless wide-area backscatter maps suitable for applications ranging from wet snow monitoring to land cover classification and short-term change detection. At the conference, a complete overview of this long time data series evolution journey will be presented in detail, including first sets of backscatter composites.
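The per-pixel weighted combination across tracks described above can be sketched as follows. The function name and inputs are hypothetical, and the weight array stands in for whatever local-quality metric (e.g. local resolution derived from the RTC local-area maps) the real processor uses; this is a minimal illustration, not the Telespazio UK / University of Zurich implementation.

```python
import numpy as np

def composite_backscatter(gamma0_stack, weight_stack):
    """Per-pixel weighted composite of terrain-flattened backscatter.

    gamma0_stack : (n_tracks, rows, cols) linear gamma nought; NaN = no coverage
    weight_stack : (n_tracks, rows, cols) non-negative local quality weights
    Returns the weighted mean per pixel; NaN where no track contributes.
    """
    # Zero out the weight wherever a track has no valid observation.
    w = np.where(np.isnan(gamma0_stack), 0.0, weight_stack)
    wsum = w.sum(axis=0)
    # NaNs are replaced by 0 before multiplying, so they contribute nothing.
    num = np.nansum(np.nan_to_num(gamma0_stack) * w, axis=0)
    return np.where(wsum > 0, num / np.where(wsum > 0, wsum, 1.0), np.nan)
```

Sliding the time window forward and re-running this combination with newer acquisitions yields the seamless, regularly refreshed wide-area backscatter estimates described above.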
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: FDR4AVHRR – generation of a consolidated fundamental data record based on > 40 years of AVHRR-LAC data (Europe, Africa and South America)

Authors: Stefan Wunderle, Christoph Neuhaus, Jonathan Mittaz, Nicole Yaghnam, Martin Raspaud, Jacob Nilsson, Marin Tudoroiu, Mirko Albani
Affiliations: University of Bern, National Physical Laboratory, Swedish Meteorological and Hydrological Institute, ESA - ESRIN
Data from the Advanced Very High Resolution Radiometer (AVHRR), on board NOAA satellites and, since 2006, EUMETSAT MetOp satellites, is the only source providing a long time series from an almost unchanged sensor over the last 40 years, with daily availability and 1.1 km spatial resolution. This unique dataset is of high scientific value for studying, independently of ground-based measurements, climate-change-induced shifts in Essential Climate Variables (ECVs), and fulfils the World Meteorological Organization’s requirement of a sufficiently long period (> 30 years) for analysing changes in the environment. In the frame of ESA’s Heritage Space Programme, and based on a previous Long Term Data Preservation (LTDP) project, we developed software and procedures to read and reformat the various datasets, combining the data holdings of the University of Bern (Switzerland), the European Space Agency and the Dundee Satellite Receiving Station (UK). This resulted in a consolidated dataset for the period 1981–2020 with at least daily resolution. In a final step, metafiles and quicklooks were generated and included in a data container (EO-SIP) of more than 250,000 AVHRR scenes in level 1b format (https://doi.org/10.5270/AVH-f1i8784). Building on the successful LTDP project, in January 2024 we started the FDR4AVHRR project with the objective of generating an FDR based on AVHRR data and making all data accessible via the ESA dissemination service. The generation of an FDR requires additional effort for calibration, geocoding, extracting quality indicators, defining uncertainty and validating the final level 1c dataset. 
Therefore, the FDR4AVHRR project rests on the following five pillars:

  • Data preparation: enlarging the existing AVHRR data holding with additional datasets from the University of Bern to cover the years 2021–2024; integrating the full Dundee archive for improved coverage of Greenland and to fill gaps in the early years (1979–1985); integrating the AVHRR LAC data from Hartebeesthoek (South Africa) for 1985–2009, from the ESA-operated station in Malindi (Kenya) for 2001–2009 and from Buenos Aires (Argentina) for 1995–2017; and, finally, integrating all 1 km AVHRR data from the Global Land 1km AVHRR Project (1992–1999).
  • Scientific improvements: for improved geocoding accuracy we have developed a new orthorectification process, and we have also developed pixel-level uncertainty components (both random and systematic), the first time such components have been available with these data. These updates have also been added to the open-source software PyGAC, so they will be more generally available. The calibration of the VIS channels is based on the latest version of PATMOS-x, and we apply the procedure of Walton et al. for thermal calibration.
  • Reprocessing of all data (Europe, global) to level 1c: considering the latest developments of PyGAC, we are reprocessing all European data (University of Bern, ESA, Dundee), the data from Africa and South America, and the global dataset received during the 1992–1999 campaign.
  • Validation and quality assessment: calibration quality and consistency will be verified over stable targets (PICs), geocoding accuracy is retrieved from the processor by comparing features against GCPs and, in addition, quality indicators are derived and added to the dataset.
  • Transfer of all data to ESA for archiving and dissemination: the data will be available in NetCDF as part of the ESA EO-SIP files. The content and structure of the final delivered dataset will be checked to guarantee the consistency of all data.

Finally, more than 460,000 1 km AVHRR scenes will be reprocessed and transferred, offering users the largest publicly accessible AVHRR level 1c FDR covering Europe, most of Africa and a large part of South America. The reprocessing and delivery of the dataset is planned to be finalised at the end of 2025; hence, a few months later the dataset will be publicly available via ESA’s dissemination service. The whole consolidation process for the AVHRR data is in line with the recommendations of the CEOS Working Group on Information Systems and Services (WGISS) to fulfil the needs for data management and stewardship maturity.
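The random and systematic pixel-level uncertainty components mentioned above behave differently under averaging, which is why they are tracked separately in an FDR. The sketch below shows the standard GUM-style quadrature combination under an independence assumption; the function name is hypothetical and this is not the FDR4AVHRR implementation.

```python
import math

def combined_uncertainty(u_random, u_systematic, n_averaged=1):
    """Combine independent random and systematic standard uncertainties.

    Averaging n independent samples (e.g. compositing pixels over time)
    reduces the random component by 1/sqrt(n); the systematic component
    is common to all samples and does not average down.
    """
    return math.hypot(u_random / math.sqrt(n_averaged), u_systematic)
```

For example, averaging four samples halves the random part while leaving the systematic part untouched, so long-term trend analyses end up limited by the systematic component.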
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F1)

Session: A.01.06 Aerosol, clouds, their interactions and the radiation budget - PART 2

The Intergovernmental Panel on Climate Change (IPCC) has recognised that aerosols and clouds remain the largest contributors to overall uncertainty in climate feedback. Aerosol particles directly affect the Earth’s energy budget by scattering and absorbing sunlight and, to a lesser extent, infrared radiation. Aerosol layers often include tiny particles known as cloud condensation nuclei (CCN), around which water condenses to form cloud droplets, as well as ice nuclei. Aerosols therefore affect the size and concentration of water droplets and ice crystals, which in turn affects the radiative properties of the clouds. Aerosol indirect forcing is broadly defined as the overall process by which aerosols perturb the Earth-atmosphere radiation balance by modulation of cloud albedo and cloud amount.

Satellite observations have up to now been less successful at providing the quantitative particle optical, microphysical, and chemical properties needed to model aerosol forcing and to interpret the interactions with clouds. The recent EarthCARE mission collects co-registered observations from a suite of four instruments located on a common platform: the two active instruments provide vertical profiles of the atmosphere along the satellite nadir path, while the two passive instruments provide scene context information to support the interpretation of the active instruments’ data.

This session is dedicated to the presentation of improved detection and retrieval capabilities for aerosol and cloud characteristics, and to the understanding of their interactions and their effect on the global radiative budget, by means of novel and past satellite missions such as, but not limited to, Sentinel-4, -5 and -5P, EarthCARE and Aeolus.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F1)

Presentation: Cloud and aerosol analysis with the ECMWF system

Authors: Kirsti Salonen, Angela Benedetti, Jeronimo Escribano
Affiliations: ECMWF, BSC
The quality and impact of aerosol and cloud optical depth (AOD and COD, respectively) Climate Change Initiative (CCI) products from the Sea and Land Surface Temperature Radiometer (SLSTR) instrument have been assessed in the European Centre for Medium-Range Weather Forecasts (ECMWF) system using the Copernicus Atmosphere Monitoring Service (CAMS) configuration. The COD v3.3 dataset is not part of the official CCI datasets, but the same algorithms were used to cover the test periods of June 2020 and September 2021. Quality assessment with passive monitoring experiments, in which AOD and COD are not actively assimilated into the system, indicates high and homogeneous quality for the AOD observations over sea. For COD, positive observation-minus-background mean differences are seen over areas where marine stratus clouds are typically found, and negative mean differences especially in the tropics, where thick convective clouds are frequently present. Passive monitoring of reflectance observations from the Ocean and Land Colour Instrument (OLCI) indicates very similar mean-difference features, which suggests that the features seen are potentially related to model deficiencies over those areas. Impact assessment has been performed in both a depleted and a full observing system. The depleted system was used for the sensitivity tests, and the final optimal configuration of joint AOD and COD assimilation was then tested in the full observing-system framework. Both experiment configurations indicate degradation in temperature and humidity forecasts. However, some positive signal is seen for wind forecasts, which may be due to the tracer advection effect of the 4D-Var assimilation. Verification against AErosol RObotic NETwork (AERONET) observations indicates a small but positive signal from assimilation of the CCI AOD and COD. The results show that the CCI AOD product is mature and behaves in the ECMWF system in a similar way to the operationally used AOD products from other instruments. 
The COD product was tested in assimilation for the first time; the same maturity cannot therefore yet be expected of it as of the AOD product.
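The observation-minus-background (O-B) diagnostic used in the passive monitoring described above can be sketched in a few lines; the function name is hypothetical and this is not the ECMWF/CAMS code, only an illustration of the statistic.

```python
import numpy as np

def ob_mean_difference(obs, background):
    """Mean observation-minus-background (O-B) departure over a region.

    In passive monitoring, observations are compared against the model
    background without being assimilated; a persistent nonzero mean O-B
    (positive or negative) flags a systematic observation or model bias.
    NaNs (e.g. screened pixels) are ignored.
    """
    return float(np.nanmean(np.asarray(obs) - np.asarray(background)))
```

Mapping this statistic by region is what reveals patterns such as positive departures over marine stratus areas and negative ones over tropical convection.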
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F1)

Presentation: Investigating dust aerosol impacts on cloud formation, using advanced modeling and observations.

Authors: Eleni Drakaki, Dr Eleni Marinou, Dr Petros Katsafados, Dr Vassilis Amiridis
Affiliations: National Observatory Of Athens, Harokopio University of Athens
Aerosol-cloud interactions are a key area of uncertainty in understanding and predicting climate change. These interactions play a crucial role in shaping cloud microphysics, radiative behavior, and precipitation patterns. However, accurately representing them in numerical models remains a challenge due to their inherent complexity and variability. To address this issue, advanced laboratory-based parameterizations were integrated into the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem). This study focuses on exploring the influence of dust aerosols as ice-nucleating particles (INPs) on cloud formation processes. Dust-derived INPs (INPD) are particularly significant because they can initiate ice formation at relatively higher temperatures compared to homogeneous nucleation. To evaluate their impact, simulations incorporating these updated parameterizations were compared with satellite-based INP retrievals obtained during the ASKOS campaign in 2022 over the eastern North Atlantic. This campaign offered a rich dataset, allowing for a detailed comparison of modeled INPD concentrations with satellite observations under real-world atmospheric conditions. In addition to satellite data, radar observations from ASKOS were used to examine the performance of the parameterizations in simulating cloud formation processes, including the development of ice crystals in mixed-phase clouds. These radar insights helped evaluate how well the modeled cloud properties matched observed phenomena, providing a rigorous test of the model’s robustness. Overall, the findings underscore the need for a better representation of aerosol-cloud interactions in numerical models to reduce uncertainties in climate predictions and improve forecasting accuracy.

Acknowledgements: This research work has received funding from the Horizon Europe programme under Grant Agreement No 101137680 via the project CERTAINTY (Cloud-aERosol inTeractions & their impActs IN The earth sYstem), and from the HFRI Research Projects to support postdoctoral researchers (project acronym: REVEAL; Project 07222). This research work has also been supported by computational time granted by the National Infrastructures for Research and Technology S.A. (GRNET S.A.) in the National HPC facility ARIS under project ID pr016030_thin-MIAMI.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F1)

Presentation: Exploiting satellite observations to study Aerosol-Cloud Interactions in the framework of the SATACI project

Authors: Dr Marta Luffarelli, Yves Govaerts, Nicolas Misk, Dr Hershal Pandya, Dr. Thomas Popp, Dr Stefan Kinne, Martin Stengel, Dr Michael Schulz, Dr Jan Griesfeller, Dr Gareth Thomas, Dr Elisa Carboni, Dr Daniel Robbins
Affiliations: Rayference, DLR, DWD, MET Norway, RAL Space
Understanding the Earth’s climate and its future trajectory remains one of the biggest challenges for humanity. Satellite observations have proven instrumental in enhancing the consistency of independent climate estimates and reducing uncertainties in climate processes and feedback mechanisms, as highlighted in the latest IPCC Assessment Report (AR6). Despite these advancements, significant uncertainties persist, particularly concerning the interactions between aerosols and clouds. The SATACI project (SATellite observations to improve our understanding of Aerosol-Cloud Interactions) addresses this critical knowledge gap as part of the Climate Space Theme II Cross ECVs activities. Leveraging extensive satellite datasets, SATACI aims to deepen our understanding of aerosol impacts on clouds across various spatial and temporal scales, utilizing data from both polar-orbiting and geostationary satellites. The project builds on the legacy of the ESA Climate Change Initiative (CCI) aerosol and cloud projects, incorporating cutting-edge products and algorithms developed within the CCI framework. Two main activities will be performed during this 3-year study:

1. Aerosol-Cloud Interaction Analyses. This activity focuses on using satellite data to study aerosol impacts on clouds, employing observations from geostationary and polar-orbiting satellites. The work is divided into two scientific investigations:

  • Indirect Effects on Liquid Clouds: examining how aerosols influence cloud properties and behaviours.
  • Dust Particles and Cloud Glaciation Temperatures: studying how dust aerosols affect cloud ice formation.

The analyses will leverage the Norwegian Earth System Model (NorESM), a state-of-the-art CMIP6-class model, to test the usefulness of the findings of these studies. Adaptations, such as a higher output frequency, will support improved model-satellite integration.

2. Feasibility Study of a New Climate Indicator. This activity aims to visualise the greenhouse-gas warming concealed by aerosols and clouds by developing a new ‘aerosol/cloud cooling offset’ climate indicator. This overall aerosol cooling has contributions from direct effects (aerosol presence) and from indirect effects, mainly through aerosol-modified water clouds. Global, long-term satellite data records will be used to demonstrate the feasibility of a method to derive the new indicator, enabling monitoring of the cooling offset due to (anthropogenic) aerosols and (aerosol-modified) clouds. The new indicator would complement the existing WMO climate indicators and is based on off-line (dual-call) two-stream radiative transfer simulations (Kinne, 2019). The results for the new aerosol/cloud cooling offset indicator will be compared with the statistics obtained from regional satellite observations of ACI.

The SATACI project is supported by a specialized consortium fostering strong collaboration between retrieval and modeling experts. A dedicated effort will ensure consistency among datasets, incorporating a robust framework for uncertainty characterization and propagation. A comprehensive review of current methodologies and ongoing studies, alongside extensive interactions with relevant scientific communities, will be presented. The outcome of this preliminary exercise will guide the project’s development. Preliminary results, including the refinement of scientific methods for the two key activities and a detailed strategy for addressing uncertainties and error propagation, will be outlined. SATACI represents a significant step toward improving our understanding of aerosol-cloud interactions and their role in Earth’s climate system.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F1)

Presentation: Instantaneous Direct Radiative Effects of Aerosols in Cloud and Clear-sky Scenes from Passive hyperspectral observations

Authors: Martin de Graaf, Dr Gijsbert Tilstra, Dr Piet Stammes
Affiliations: KNMI
Extreme wildfire events are likely to increase in number in the near future, and observations of the regional radiative effects of smoke can be used to challenge and improve climate model simulations, which currently rely almost exclusively on model-model intercomparisons even though models disagree on the magnitude and sign of the radiative forcing by aerosols. A new methodology is presented to measure the instantaneous aerosol direct radiative effect of wildfire smoke using satellite observations of aerosol optical thickness and a radiative transfer model, and to determine the radiative heating and cooling of the smoke in both clear-sky and cloud scenes. The radiative effect of smoke is defined as the difference between the radiative fluxes with and without smoke in the atmosphere, which is necessarily computed using model simulations. Regionally, aerosol-radiation interactions can be an order of magnitude larger than their global mean values, especially during extreme events. Results at the top of the atmosphere and at the surface for a case of extreme wildfires in Chile in 2023 will be shown. The results are compared to retrievals of the aerosol direct radiative effects of smoke above clouds using hyperspectral measurements, to demonstrate the ability of direct retrievals of aerosol effects from satellite measurements. These results are important to validate and challenge climate models, and will help the development of radiative effect retrievals from more dedicated missions, like PACE and EarthCARE. The SPEXone instrument of PACE is capable of separating aerosols and clouds using the polarisation of light, while the ATmospheric LIDar (ATLID) on EarthCARE is an active instrument, which provides profiles of aerosol extinction and heating rates. In addition, the upcoming EUMETSAT mission Metop-Second Generation A (Metop-SG A) will carry the hyperspectral spectrometer Sentinel-5 and the Multi-viewing Multi-channel Multi-polarisation Imager (3MI).
On this mission, the hyperspectral and polarisation capabilities are combined and improved with multi-viewing information. The approach presented here will be useful for the quantification of aerosol direct and semi-direct effects from space, which is urgently needed to improve the radiative interaction schemes between aerosols and clouds in climate models. Better observations of both aerosol and cloud properties, and direct observations of aerosol-cloud-radiation interactions, will improve our ability to attribute climate change, quantify climate sensitivity, and improve the accuracy of future climate change projections.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F1)

Presentation: Negligible Contribution from Aerosols to Recent Trends in Earth's Energy Imbalance

Authors: Brian Soden, Chanyoung Park
Affiliations: University Of Miami
During the 21st century, Earth's energy imbalance (EEI) at the top of the atmosphere has markedly increased, mainly due to an increase in absorbed shortwave (SW) rather than a decrease in outgoing longwave (LW) radiation. While previous studies, based on single-forcing (aerosol-only) experiments, linked reductions in anthropogenic aerosols to this positive SW trend, we find that both aerosol-radiative interactions (ARI) and aerosol-cloud interactions (ACI) have had a negligible impact on recent increases in the EEI. We estimate recent trends in effective radiative forcing due to aerosols using observations and reanalysis data. While aerosol concentrations have declined in the Northern Hemisphere (NH), wildfires and volcanic activity in the Southern Hemisphere (SH) have resulted in larger aerosol loading. This contrast effectively cancels out the total aerosol forcing, resulting in a negligible global impact on the EEI trend. Our findings also suggest that model-driven estimates may be overestimated, as they overlook the compensating effects of SH aerosol emissions that balance out NH reductions.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall F1)

Presentation: Synergetic retrieval from multi-instrument measurements for advancing aerosol and surface characterisation in global scale and at high temporal resolution

Authors: Pavel Lytvynov, Oleg Dubovik, Siyao Zhai, Cheng Chen, Christian Matar, Anton Lopatin, David Fuertes, Tatyana Lapyonok, Verena Lanzinger, Lukas Bindreiter, Manuel Dornacher, Arthur Lehn, Daniele Gasbarra, Cristian Retscher
Affiliations: GRASP SAS, LOA, Laboratoire d’Optique Atmosphérique, Cloudflight, ESA ESRIN, Anhui Institute of Optics and Fine Mechanics
A number of climate studies require extended aerosol and surface characteristics at the global scale. These include properties such as Aerosol Optical Depth (AOD), size distribution, Single Scattering Albedo and the full surface BRDF. This is relevant, in particular, for the generation of high-quality aerosol and surface essential climate variable (ECV) products, air-quality monitoring, and aerosol emission and transport studies. In addition to global coverage, extended aerosol properties at high temporal resolution are required for such important but challenging studies as aerosol-cloud interactions, gas-to-particle transformation, and aerosol dynamics in general. Global information about aerosol can be obtained from space-borne measurements only; climate studies therefore increasingly rely on high-quality aerosol characterization from space. At present there are a number of satellites in Earth orbit dedicated to aerosol studies. Nevertheless, due to limited information content, the main aerosol product of most satellite missions is AOD, while the accuracy of the extended properties requires essential improvement. Moreover, the role of space-borne measurements in studies of atmospheric aerosol dynamics is essentially limited by satellite swath and instrument information content. In general, no single instrument satisfies all the requirements for global, high-temporal and extended aerosol characterization. One promising solution originates from the idea of synergetic aerosol and surface characterization from multi-mission instruments, although its realization has long been hindered by a number of instrumental and algorithmic problems.
In the frame of the ESA GROSAT and SYREMIS projects, the synergetic approach was implemented in the GRASP algorithm and has been tested on different synergetic instrument constellations: (i) synergy of satellite and ground-based measurements; (ii) synergy of polar-orbiting (LEO) satellites (in particular, synergy of the Sentinel-5p/TROPOMI and Sentinel-3A, -3B/OLCI instruments) and (iii) synergy of LEO and geostationary (GEO) satellites (in particular, synergy of the Sentinel-5p/TROPOMI, Sentinel-3A, -3B/OLCI and HIMAWARI/AHI sensors). On the one hand, such a constellation extends the spectral range of the measurements. On the other hand, it provides unprecedented global spatial coverage with several measurements per day, which is crucial for global climate studies and air-quality monitoring. In this talk we will discuss new possibilities of multi-instrument synergy for advancing aerosol and surface characterisation at the global scale and at high temporal resolution. It will be demonstrated that, by properly balancing the “weights” of measurements from different sensors according to their information content, the SYREMIS/GRASP synergetic approach allows the transfer of information from the instruments with the richest information content (S5p/TROPOMI) to the instruments with lower content (S3/OLCI and HIMAWARI). In combination with proper constraints on the spectral, spatial and temporal variability of aerosol and surface properties, this results in improved retrieval of AOD, aerosol size and absorption properties, and in more consistent surface BRDF characterization. Application of the SYREMIS/GRASP synergetic retrieval to future sensors (3MI, Sentinel-5, Sentinel-4 etc.) will be discussed.
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall N1/N2)

Session: C.06.07 Recent progress on uncertainty analysis for Earth Observation measurements - PART 2

Over the last decade, considerable efforts have been devoted to better understanding the uncertainties associated with Earth Observation measurements. Indeed, uncertainty is directly linked to the value and usefulness of a measurement, whether it is used for predictions, risk assessment or decision making.
This multi-disciplinary session will present recent progress on evaluating measurement uncertainties, as well as practical use cases from the whole range of EO techniques.
We welcome contributions covering:
• recent results on uncertainty estimation either from theoretical (uncertainty propagation from measurement principles) or data-driven approaches,
• novel calibration and validation methods providing insights for the characterization or verification of uncertainty estimates,
• studies using Fiducial Reference Measurements, either to characterize their own uncertainties or as a source of truth for EO measurements,
• methods to convey uncertainty estimations to end users.

Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall N1/N2)

Presentation: Design elements for an external verification system of S6NG stability requirements

Authors: Pierre Prandi, Thomas Moreau, Victor Quet, Valeria Gracheva, Alejandro Egido, Benjamin Monteillet
Affiliations: Collection Localisation Satellites, ESA/ESTEC, Thales Alenia Space
The Global Mean Sea Level (GMSL) is an Essential Climate Variable monitored using satellite radar altimetry missions over more than 30 years. Assessing the long-term stability of this record is therefore critical for climate monitoring. The ocean surface topography community has addressed this question in various ways: internal system consistency, comparisons to independent in-situ measurements such as tide gauges either at dedicated calibration sites (e.g. Bonnefond et al., 2021) or using a larger network (e.g. Watson et al., 2015), sea level budget closure (e.g. Barnoud et al., 2023) or transponders (Mertikas et al., 2024). With the advent of global SAR altimetry coverage and FF-SAR processing (Egido & Smith, 2016), a new calibration method has been tested: corner reflectors (e.g. Gibert et al., 2023). These passive trihedral reflectors are relatively easy to install and maintain (compared to transponders) and can be a strong alternative to other methods for altimeter range validation and calibration. Maintaining and improving current altimeter calibration and validation capabilities also appears critical, as climate science requirements for the long-term stability of the GMSL record have recently been tightened from 0.3 mm/yr (GCOS, 2011) down to 0.1 mm/yr (Meyssignac et al., 2023). While the current GMSL trend uncertainty now meets GCOS stability requirements (Guérou et al., 2023), we are still far from the 0.1 mm/yr target. The Sentinel-6 NG mission shall address these new stability requirements with a threshold requirement of 0.2 mm/yr (goal 0.1 mm/yr) on GMSL stability uncertainty. In this study, we use the available literature to provide design elements for an external verification system capable of verifying S6-NG altimeter and precise orbit solution stability. This system is based on a network of corner reflectors.
While there are still unknowns in the uncertainty budget of corner reflectors, especially regarding error covariances in space and time, our results suggest that a network of 20 to 40 corner reflectors could verify S6-NG altimeter stability requirements down to 0.1 mm/yr.
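The sizing logic behind such a network can be sketched numerically. The snippet below is a minimal illustration, not the study's method: it assumes each overpass yields a network-mean range residual with independent white noise (the abstract notes that space-time error covariances are still unknown and would inflate these numbers), and the noise level, revisit interval and function names are all illustrative.

```python
import numpy as np

def trend_uncertainty(sigma_range_mm, n_reflectors, n_years, revisit_days=10.0):
    """Standard uncertainty (mm/yr) of a linear trend fitted to the
    network-mean range residuals, assuming independent white errors
    for each reflector overpass."""
    t = np.arange(0.0, n_years, revisit_days / 365.25)   # observation epochs (yr)
    sigma_mean = sigma_range_mm / np.sqrt(n_reflectors)  # network averaging
    A = np.column_stack([np.ones_like(t), t])            # design matrix [offset, trend]
    cov = sigma_mean**2 * np.linalg.inv(A.T @ A)         # OLS parameter covariance
    return float(np.sqrt(cov[1, 1]))

# Doubling the network size shrinks the trend uncertainty by sqrt(2)
u20 = trend_uncertainty(10.0, 20, 10)
u40 = trend_uncertainty(10.0, 40, 10)
```

Under these idealised assumptions the trend uncertainty falls as 1/sqrt(n_reflectors); correlated errors would weaken that gain, which is why the covariance question flagged above matters.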
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall N1/N2)

Presentation: Sentinel-6 Next Generation: Sea Level Stability Uncertainty Budget

Authors: Michaël Ablain, Noemie Lalau, Louis Kern, Thomas Vaujour, Benoît Meyssignac, Emma Woolliams, Sajedeh Behnia, Jonathan Mittaz, Annabelle Ollivier, Pierre Prandi, Loren Carrere, Alejandro Egido, Craig Donlon, Robert Cullen
Affiliations: Magellium, LEGOS, CNES, CNRS, IRD, NPL, University of Reading, CLS, ESA
Since 1993, significant efforts have been made to provide an accurate and stable record of global and regional mean sea level data from altimetry measurements, with a comprehensive characterisation of associated uncertainties. The global mean sea level (GMSL) trend is measured with an uncertainty of 0.3 mm/yr (within a 90 % confidence level) over the last 20 years (Guérou et al., 2023). This uncertainty level meets the requirements defined by the GCOS in 2011. However, new stability uncertainty targets have been recently defined to address critical climate science questions, including closing the sea level budget, attributing sea level signals to greenhouse gas emissions, and monitoring Earth’s energy imbalance (Meyssignac et al., 2023). The key uncertainty requirements include:
• GMSL trend stability: uncertainty of 0.1 mm/yr over 10 years or more.
• GMSL acceleration stability: uncertainty of 0.5 mm/yr/decade over 20 years or more.
• Regional sea level trend stability: uncertainty of 0.3 mm/yr over 20 years or more.
In order to achieve these stringent goals, it is crucial to identify, quantify, and mitigate all sources of uncertainty which encompass instrumental measurements, geophysical corrections, and processing methodologies. The European Space Agency (ESA) ASELSU (Assessment Sea Level Rise Stability Uncertainty) project undertakes a comprehensive derivation of the end-to-end uncertainty budget for global and regional sea level records, employing the FIDUCEO formalism (Mittaz et al., 2019). This approach ensures rigorous uncertainty characterisation through systematic identification, quantification, and propagation of all error sources, fully documented for transparency. Uncertainty estimates are validated using multiple methods, including along-track and crossover analyses, tide-gauge comparisons, and sea level budget assessments, with detailed evaluation of the uncertainties inherent in each validation method.
Furthermore, the study examines the potential consequences of a change in the Sentinel-6 Next Generation (S6-NG) orbit. The proposed S6-NG orbit, with a lower altitude and a different repeat cycle, could change the uncertainty budget due to altered aliasing of environmental corrections (e.g., ocean tides, dynamic atmospheric corrections), lack of cross-calibration phases, unknown orbit errors etc. This study aims to quantify the potential impact of the S6-NG orbit change on the accuracy and stability of sea level estimates. References: Meyssignac, B., et al.: How accurate is accurate enough for measuring sea-level rise and variability, Nature Climate Change, 13, 796–803, https://doi.org/10.1038/s41558-023-01735-z, 2023 Mittaz, J., Merchant, C.J. & Woolliams, E.R. (2019). Applying principles of metrology to historical Earth observations from satellites. Metrologia, 56, 32002.
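The central role of error correlation in such a budget can be illustrated with a small generalised-least-squares sketch. This is not the ASELSU derivation: the exponential covariance model, the noise magnitudes and the function names are illustrative assumptions, chosen only to show why correlated errors dominate the trend uncertainty.

```python
import numpy as np

def gls_trend_uncertainty(t, sigma_white, sigma_corr, tau_yr):
    """Trend standard uncertainty (data units per year) from a GLS fit,
    combining uncorrelated noise with errors correlated over a
    timescale tau_yr (exponential covariance model)."""
    dt = np.abs(t[:, None] - t[None, :])
    C = sigma_white**2 * np.eye(len(t)) + sigma_corr**2 * np.exp(-dt / tau_yr)
    A = np.column_stack([np.ones_like(t), t])          # [offset, trend] design
    cov = np.linalg.inv(A.T @ np.linalg.solve(C, A))   # GLS parameter covariance
    return float(np.sqrt(cov[1, 1]))

t = np.arange(0.0, 10.0, 1.0 / 12.0)                   # monthly samples, 10 years
white_only = gls_trend_uncertainty(t, 3.0, 0.0, 1.0)
with_corr = gls_trend_uncertainty(t, 3.0, 2.0, 1.0)    # correlated errors inflate the trend error
```

Adding a correlated error component always increases the trend uncertainty relative to white noise alone, which is why the FIDUCEO-style bookkeeping of error correlation structures is essential for the 0.1 mm/yr target.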
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall N1/N2)

Presentation: What is the uncertainty of the uncertainty and (why) does it matter? Propagating uncertainties of weight estimates through soil moisture data merging

Authors: Maud Formanek, Alexander Gruber, Pietro Stradiotti, Wouter Dorigo
Affiliations: TU Wien
Rigorous uncertainty quantification in Earth observation is critical for long-term Climate Data Records (CDRs), particularly those from the ESA CCI program, which aim to detect subtle climatic trends amidst noise. This study enhances the uncertainty estimates of the ESA CCI soil moisture dataset by incorporating the sampling uncertainty from Triple Collocation Analysis (TCA) into the uncertainty propagation of the data merging process. The ESA CCI soil moisture product merges data from multiple sensors using a weighted average, designed to improve temporal and spatial coverage while minimizing random retrieval errors. The weights depend on sensor-specific uncertainties derived from TCA applied to soil moisture estimates from triplets of independent datasets, typically comprising a land surface model, a passive and an active microwave satellite instrument. However, the TCA-derived uncertainties are themselves uncertain due to finite sample sizes, introducing a second-order uncertainty we denote the "uncertainty of the uncertainty" (UU). While the UU can be derived analytically for simple error models, such an analytical solution is lacking for the affine error model assumed for remote-sensing soil moisture measurements. To address this, we extract confidence intervals around the random uncertainty variance by applying bootstrapping to the TCA results from two sets of sensor triplets (ASCAT, SMAP or SMOS, and the GLDAS-Noah land-surface model). We find that for all sensors the resulting 'uncertainty of the uncertainty' follows a power-law relationship with the number of collocated observations, whose exponent is comparable to the analytical solution for simple error models.
The magnitude of the resulting weight uncertainty has serious implications for the weighted averaging and the resulting uncertainty of the merged products: 1) it introduces an additional term in the uncertainty propagation of the merged product stemming from the uncertainty of the weights themselves, and 2) it can reach a threshold, where the weighted average yields higher uncertainties than an unweighted average. Indeed, we find that this is true for many pixels, although validation against ERA5-Land suggests that the weighted average results in better skill metrics even in those cases. Furthermore, the validation against the reference dataset indicates that the extended uncertainty propagation approach yields more meaningful uncertainty information. This is demonstrated by a significant increase in the (anti-)correlation between the average per-pixel uncertainty and the error and skill metrics compared to the old uncertainty quantification method.
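The bootstrap idea described above can be sketched on synthetic data. This is a hypothetical toy, not the study's implementation: the covariance-notation TC estimator, the synthetic noise levels and all function names are assumptions, and real soil moisture triplets would additionally require collocation and rescaling.

```python
import numpy as np

rng = np.random.default_rng(42)

def tc_error_variance(x, y, z):
    """Covariance-based triple collocation: random-error variance of x,
    given three collocated, independent measurements of the same truth
    (affine error model)."""
    Q = np.cov(np.vstack([x, y, z]))
    return Q[0, 0] - Q[0, 1] * Q[0, 2] / Q[1, 2]

def bootstrap_uu(x, y, z, n_boot=300):
    """Sampling spread of the TC estimate, i.e. the 'uncertainty of the
    uncertainty', from bootstrap resampling of collocated triplets."""
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)    # resample triplets with replacement
        estimates.append(tc_error_variance(x[idx], y[idx], z[idx]))
    return float(np.std(estimates))

def make_triplet(n):
    """Synthetic triplet: common truth plus independent noise per 'sensor'."""
    truth = rng.normal(0.0, 1.0, n)
    return (truth + rng.normal(0.0, 0.5, n),
            truth + rng.normal(0.0, 0.4, n),
            truth + rng.normal(0.0, 0.6, n))
```

On such data the bootstrap spread of the error-variance estimate shrinks as the number of collocated observations grows, mirroring the power-law behaviour reported in the abstract.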
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall N1/N2)

Presentation: The CEOS Fiducial Reference Measurements (FRMs) Assessment Framework

Authors: Paolo Castracane, Nigel Fox, Philippe Goryl
Affiliations: STARION c/o ESA/ESRIN, National Physical Laboratory, ESA/ESRIN
The concept of a Fiducial Reference Measurement (FRM) has been developed to highlight the need for precise and well-characterised measurements tailored explicitly to the post-launch calibration and validation (Cal/Val) of Earth Observation satellite missions [1]. Recently, this need has been enhanced with the emergence of new commercial satellite operators to ensure that users have a transparent and fair means to judge the adequacy of data products to meet their needs. The Committee on Earth Observation Satellites (CEOS) Working Group Cal/Val (WGCV) is putting in place a framework to assess the maturity and compliance of a “Cal/Val reference measurement” in terms of a set of community-agreed criteria which define it to be of CEOS-FRM quality. The assessment process is based on a maturity matrix that provides a visual assessment of the state of any FRM against each of a set of given criteria, making visible where it is mature and where evolution and effort are still needed. General information and the last version of the guidelines are available on the CEOS Cal/Val Portal [2]. This presentation provides an overview of the framework including examples of applications and the latest updates of the guidelines. References [1] Goryl, P.; Fox, N.; Donlon, C.; Castracane, P. Fiducial Reference Measurements (FRMs): What Are They? Remote Sens. 2023, 15, 5017. https://doi.org/10.3390/rs15205017 [2] FRMs Assessment Framework on CEOS Cal/Val Portal: https://calvalportal.ceos.org/web/guest/frms-assessment-framework
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall N1/N2)

Presentation: End-to-end uncertainty budget for the ECV Land Cover

Authors: Ralf Quast, Grit Kirches, Carsten Brockmann, Martin Böttcher, Roman Shevchuk, Céline Lamarche, Pierre Defourny, Olivier Arino, Dr. Clément Albergel
Affiliations: Brockmann Consult GmbH, Université Catholique de Louvain, ESA ESRIN, ESA ECSAT
Land Cover (LC) is a GCOS Essential Climate Variable (ECV). Building on a long history of algorithm development within a series of ESA projects, the Copernicus Climate Change Service is now operationally producing this ECV from Sentinel-3 OLCI and SLSTR data. A recent research activity of ESA’s CCI Land Cover project has been the further development of a methodology for an end-to-end uncertainty budget, from OLCI and SLSTR calibrated radiometry to plant functional types. Establishing an end-to-end uncertainty budget is essential for all ECVs of ESA’s Climate Change Initiative (CCI). The reference guide for expressing and propagating uncertainty consists of the GUM and its supplements, which describe multivariate analytic and Monte Carlo methods. The FIDUCEO project demonstrated the application of these methods to the creation of ECV datasets, from Level 0 telemetry to Level 1 radiometry and beyond. Despite this pioneering work, uncertainty propagation for ECVs remains challenging. Firstly, many retrieval algorithms do not incorporate the use of quantified uncertainty per datum. Using analytic methods for propagating uncertainty requires completely new algorithmic developments, while applying Monte Carlo methods is usually straightforward but leads to a proliferation of computational and data curation demands. Secondly, operational radiometry data are often not associated with a quantified uncertainty per datum, and error correlation structures between data are not quantified either. For Sentinel-3 OLCI, however, radiance uncertainty is included with the L1b product per datum. For Sentinel-3 SLSTR, radiance uncertainty can be obtained per datum by means of software provided by the European Union’s Copernicus Programme.
Regardless of the availability of detailed uncertainty information, it is feasible to explore and prepare ECV processing for the use of uncertainty per datum and information on error correlation structures, based on instrument specifications and expert-based assumptions. For the Land Cover ECV of the ESA CCI we developed a Monte Carlo processing sequence, which considers the three most significant effects in the preprocessing: random and systematic errors in satellite radiometry, random and systematic errors in aerosol retrieval, and random and systematic errors in cloud screening. The created surface reflectance ensemble has been inserted into a land cover classification scheme. Eventually, land cover categories are typically converted into fractional plant functional type (PFT) area coverage by means of a cross-walking table, which constitutes an added source of uncertainty. In this contribution, we explain the concept of our Monte Carlo processing sequence and its computational implementation and present proof of concept by verifying the statistical properties of the created ensemble of surface reflectance, LC classes, and PFT fractional area coverage. The main result is a quantification of the impact of the surface reflectance pre-processing on the ECV Land Cover. In addition, we present further developments of the approach, e.g. the consideration of geolocation errors, errors in auxiliary data used for classification, and errors of cloud screening.
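The structure of such a Monte Carlo processing sequence can be sketched with a toy classifier. This is an illustrative assumption throughout: the NDVI-threshold "classifier", the error magnitudes and the function names stand in for the real CCI processing chain, which uses a full land cover classification scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(refl):
    """Toy stand-in for the land cover classifier: an NDVI threshold
    on (red, NIR) surface reflectance."""
    ndvi = (refl[..., 1] - refl[..., 0]) / (refl[..., 1] + refl[..., 0])
    return (ndvi > 0.4).astype(int)  # 1 = vegetated, 0 = other

def monte_carlo_classes(refl, sigma_random, sigma_systematic, n_members=200):
    """Propagate random (per pixel and band) and systematic (shared per
    band) reflectance errors through classification; returns the
    per-pixel probability of the vegetated class over the ensemble."""
    members = []
    for _ in range(n_members):
        sys_err = rng.normal(0.0, sigma_systematic, refl.shape[-1])  # one draw per band
        rand_err = rng.normal(0.0, sigma_random, refl.shape)         # per pixel and band
        members.append(classify(refl + rand_err + sys_err))
    return np.mean(members, axis=0)
```

The ensemble mean of the class labels behaves as a per-pixel class probability: pixels far from the decision boundary keep stable labels across members, while borderline pixels reveal their classification uncertainty.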
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall N1/N2)

Presentation: Making sense of uncertainties: Ask the right question

Authors: Alexander Gruber, Claire Bulgin, Wouter Dorigo, Owen Embury, Maud Formanek, Christopher Merchant, Jonathan Mittaz, Joaquín Muñoz-Sabater, Florian Pöppl, Adam Povey, Wolfgang Wagner
Affiliations: Department of Geodesy and Geoinformation, Technische Universität Wien (TU Wien), National Centre for Earth Observation, University of Leicester, European Centre for Medium-Range Weather Forecasts (ECMWF), Department of Meteorology, University of Reading, National Centre for Earth Observation, University of Reading
Earth observation data should help make decisions. However, all Earth observation data have an associated uncertainty arising from a number of error effects in the measurement process that vary in time and space, including limitations of the measurement and calibration process, uncertainties in auxiliary data, approximations in the data processing, and inhomogeneities in the observed field. Data producers thus strive to provide reliable uncertainty estimates alongside their products that should help inform decisions that are based on these products. However, data users often struggle to make sense of uncertainty information, because it is usually expressed as the statistical spread in the observations (for example, as random error standard deviation), which does not relate to an intended use of the data. That is, expressing data and their uncertainty as “x plus/minus y” does not answer the really important question: How much can I trust “x”, or any use of or decision based upon “x”? In this talk, we present a Bayesian framework that can be used to transform Earth observation product uncertainties into more meaningful, actionable information by explicitly quantifying the resulting probabilities of events of interest. We demonstrate the merit of this framework using two case examples: (i) monitoring drought severity based on soil moisture; and (ii) estimating coral bleaching risk based on sea surface temperature. We first show how looking at deterministic state estimates from Earth observations alone can be misleading, and that any decisions based on these estimates are unlikely to be the best course of action. 
We then show how typical data representations like “the state of this variable is x plus/minus y” can be transformed into more informative statements such as “the data and their uncertainties suggest that we can be z % confident that…”, and discuss how such an approach can help data users make better decisions, thus maximizing the socioeconomic merit of Earth observation data.
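The simplest version of this transformation can be written in a few lines. This is a minimal sketch with assumptions the talk's full Bayesian framework goes beyond: Gaussian random uncertainty, a flat prior, and purely illustrative numbers (the soil moisture values and drought threshold below are not from the presentation).

```python
from math import erf, sqrt

def event_probability(x, sigma, threshold):
    """P(true state < threshold | observation x with Gaussian random
    uncertainty sigma), under a flat prior: Phi((threshold - x) / sigma)."""
    return 0.5 * (1.0 + erf((threshold - x) / (sigma * sqrt(2.0))))

# Illustrative numbers only: soil moisture observed at 0.12 m3/m3 with
# sigma = 0.03, against a drought threshold of 0.10 m3/m3
p_drought = event_probability(0.12, 0.03, 0.10)  # ~0.25
```

Even though the deterministic estimate (0.12) sits above the threshold, the data and their uncertainty still imply roughly a one-in-four chance of drought conditions, which is exactly the kind of actionable statement the deterministic "x plus/minus y" form hides.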
Add to Google Calendar

Friday 27 June 11:30 - 13:00 (Hall L1/L2)

Session: C.02.16 ESA Earth Explorers 11 and 12: The Future Earth Science Research Mission Candidates

This session of invited talks will focus on the Earth Explorer 11 and 12 candidates and their core scientific mission objectives, through preliminary and conceptual overviews. Key selection criteria included a blend of scientific excellence with innovative technology to target scientific and societal challenges.

The Earth Explorers are ESA’s unique and ambitious Earth Observation research missions. The Earth Explorer Candidate Missions 11 and 12 are competing to be selected as the next flagship research missions within the ESA Earth Observation Envelope Programme. Presently there are two Earth Explorer 11 candidates, CAIRT and WIVERN, undergoing Phase A feasibility studies, with the selection of the mission to proceed into the implementation phases later in 2025 and a launch target in 2032-2033.

CAIRT – short for changing-atmosphere infrared tomography – would provide the measurements needed to make a necessary step change in understanding the links between climate change, atmospheric chemistry and dynamics in the altitude range of about 5 to 115 km.
WIVERN – short for wind velocity radar nephoscope – would provide the first measurements of wind within clouds and precipitation. There is a notable gap in global observations of wind in cloudy regions. The mission would also deliver profiles of rain, snow and ice water.

The Earth Explorer 12 candidates are currently undergoing Phase 0 studies with a launch target in 2036. For each mission candidate a series of scientific and parallel system study activities are foreseen to prepare a consolidated mission concept. Earth Explorer 12 candidates are:

CryoRad would fill an important gap in observations of the cryosphere through the direct measurement of low-frequency passive-microwave brightness temperatures using a novel broadband radiometer.
ECO would measure the difference between incoming solar radiation and outgoing radiation, which defines Earth’s energy ‘imbalance’, and which fundamentally controls Earth’s climate system.
Hydroterra+ is a C-band synthetic aperture radar mission placed in geostationary orbit above the equator which would deliver data twice a day over Europe, the Mediterranean and northern Africa to understand rapid processes tied to the water cycle and tectonic events in these regions.

Keystone would provide the first direct observations of atomic oxygen, composition and temperature in the altitude range of 50–150 km using a unique combination of limb-sounding techniques, enabling study of processes driving variability and energy balance in the mesosphere and lower-thermosphere.

This session will showcase each of the candidate Earth Explorer missions, highlighting the science objectives addressed by each mission and the technical concepts that have been investigated to assess either the ability to fulfil the scientific objectives or the feasibility of implementation. In addition, it will give a brief historical and programmatic overview of the Earth Explorer programme and of the process of selecting future mission candidates following each Call for Ideas.

Presentations and speakers:


Earth Explorers 11 and 12: ESA’s Earth Science Research Candidate Mission Overview


  • Mark Drinkwater - European Space Agency

CAIRT: An ESA Earth Explorer 11 Candidate charting the middle atmosphere in the climate system


  • Björn-Martin Sinnhuber - Karlsruhe Institute of Technology

WIVERN: An ESA Earth Explorer 11 Candidate Advancing Climate Science Through In-Cloud Observations of Global Winds


  • Pavlos Kollias - McGill University

CryoRad: A New Spaceborne Mission Concept for The Monitoring of the Polar Regions


  • Giovanni Macelloni - IFAC-CNR

ECO: First direct measurements of Earth Energy Imbalance from space


  • Steven Dewitte - Royal Observatory of Belgium

Hydroterra+: A game changer in the monitoring of the water cycle over Mediterranean and Africa areas


  • Andrea Monti Guarnieri - Politecnico Milano

Keystone: Exploring the mesosphere and lower thermosphere


  • Daniel Gerber - RAL Space
Add to Google Calendar

Friday 27 June 11:30 - 12:15 (Hall M1/M2)

Session: A.02.09 Exploring Space Opportunities with ESA Φ-lab and EUSPA: Pathways for Students and Young Professionals

This 30-minute session, followed by 15 minutes of Q&A, is designed for students and young professionals eager to enter the European space sector through innovation, technology, and entrepreneurship. The session will introduce two key institutions: ESA’s Φ-lab, the European Space Agency’s center for disruptive innovation in Earth Observation, and EUSPA, the European Union Agency for the Space Programme, which manages operational EU space services like Galileo, EGNOS, and Copernicus.

Participants will learn how to engage with both agencies through practical opportunities such as traineeships offered by EUSPA and ESA Φ-lab’s activities focused on advanced technologies.

The session will also explore investment and support mechanisms aimed at early-stage innovators. ESA Φ-lab’s InCubed programme offers a flexible, continuous open call for co-funding EO-based commercial solutions. On the EUSPA side, the CASSINI initiative, which supports entrepreneurs through themed challenges and structured programmes, will be highlighted. We’ll clarify the timing, expected Technology Readiness Levels (TRLs), and the kind of projects each programme typically supports.

To conclude, we will discuss where the market is growing—upstream vs. downstream—highlighting where entry points are more accessible and where interest is rising across the industry. Whether your passion is in developing new technologies, building startups, or applying space data to real-world problems, this session will help you understand how to get started with ESA Φ-lab and EUSPA.

Speakers:


  • Sabrina Ricci - ESA
  • Valeria Catalano - EUSPA
Add to Google Calendar

Friday 27 June 11:37 - 11:57 (EO Arena)

Demo: C.01.26 DEMO - Enhancing Earth Observation with AI: Validating Onboard Processing with Smart Mission Lab

Validating AI models and onboard data processing capabilities is essential for the success of modern Earth Observation missions. Traditional testing methods are often slow, expensive, and infrastructure-heavy—hindering agile development.

Smart Mission Lab (SML) offers a remote, fast, and mission-relevant environment for testing AI algorithms on flight-ready Data Processing Units (Antelope, Leopard, and Lion). Engineers can access hardware within 48 hours and validate models under conditions that simulate real space missions.

This session will showcase how SML enables efficient testing of AI applications—like hyperspectral image processing and autonomous classification—by measuring performance, optimizing power use, and verifying fault tolerance. With SML, Earth Observation missions can speed up development, reduce costs, and ensure system readiness before launch.

Speaker:


  • Julia Marushchak – KP Labs
Add to Google Calendar

Friday 27 June 12:00 - 12:20 (EO Arena)

Demo: D.02.30 DEMO - Create your custom Earth Observation use case with GeoAI

This demonstration session will showcase how to build custom Earth Observation (EO) use cases using the GeoAI service on DESP. Designed for a broad audience, including EO professionals, data scientists, researchers, and industry experts, this session will highlight the practical applications of GeoAI for extracting actionable insights from satellite imagery.
Attendees will learn how to work with EO imagery of varying resolutions and apply AI models at scale. The session will feature real-world examples, including object detection, land segmentation and time-series analysis to monitor urban growth, deforestation, and other environmental dynamics.
Whether you are developing a new geospatial workflow or looking to enhance existing analysis capabilities, this session will demonstrate how GeoAI on DESP can support flexible, scalable, and insightful EO solutions.

Speakers:


  • Dr. Sergey Sukhanov - CEO, FlyPix AI GmbH
Add to Google Calendar

Friday 27 June 12:22 - 12:42 (EO Arena)

Demo: E.01.12 DEMO - Searching EO and in-situ data using natural language queries in the IEOTO data service

Our IEOTO data service has been equipped with a search assistant that, based on a contextualised large language model, presets query parameters from natural-language input.

The demonstration starts with a brief introduction to the overall service functionality and proceeds to show uses of the search assistant. Based on the user input, the search assistant proposes query conditions for the Copernicus data products and in-situ data holdings of the IEOTO data service. The proposed query conditions become available within the standard search interface and can be refined or modified prior to submission.

The demonstration shows a number of examples that work well and some that reveal the limitations of the current underlying model. It is shown how the LLM integration concept of IEOTO harnesses the advantages of flexible AI technology without losing the rigour of a precise query specification.
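The workflow described above (natural-language input, proposed query conditions, user refinement before submission) can be sketched as follows. This is a minimal illustration of the general pattern only, not the actual IEOTO implementation; all field names, allowed values, and the simulated model output are assumptions:

```python
import json
from datetime import date

# Hypothetical sketch: a language model proposes structured query parameters,
# and its output is validated against a fixed schema before it reaches the
# real search interface. Collection names and fields are illustrative.

ALLOWED_COLLECTIONS = {"SENTINEL-1", "SENTINEL-2", "SENTINEL-3", "IN-SITU"}

def validate_proposal(llm_json: str) -> dict:
    """Turn a model's JSON proposal into a safe set of preset query conditions.

    Unknown fields are dropped and values are checked, so the flexibility of
    the model never bypasses the rigour of the precise query specification.
    """
    raw = json.loads(llm_json)
    query = {}
    if raw.get("collection") in ALLOWED_COLLECTIONS:
        query["collection"] = raw["collection"]
    # Dates must parse as ISO-8601 and form a valid interval.
    try:
        start = date.fromisoformat(raw.get("start", ""))
        end = date.fromisoformat(raw.get("end", ""))
        if start <= end:
            query["start"], query["end"] = start.isoformat(), end.isoformat()
    except ValueError:
        pass  # leave the date fields for the user to fill in manually
    return query

# Simulated model output for: "Sentinel-1 scenes, last week of May 2024"
proposal = '{"collection": "SENTINEL-1", "start": "2024-05-25", "end": "2024-05-31", "mood": "fast"}'
print(validate_proposal(proposal))
```

The unrecognised "mood" field is silently dropped; everything that survives validation is presented to the user for refinement rather than submitted directly.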

The IEOTO Data Service can be reached at https://ieoto.space. The feature to be demonstrated is currently under testing and will become available in May 2025.

Speakers:


  • Bernard Pruin - Petabite GmbH
  • Nils Junike - Petabite GmbH
Add to Google Calendar

Friday 27 June 12:45 - 13:05 (EO Arena)

Demo: C.03.23 DEMO - The new CCM Rapid Response Desk (RRD) Service - Insights into the Rolling Archive and Data Re-use

The Copernicus Contributing Missions (CCM) Rapid Response Desk (RRD) provides all its users (Copernicus Services, Institutions and Bodies of the EU, Union Research Projects, and National Public Authorities of EU Member States or Copernicus Participating States) with access to an online archive that holds the datasets delivered by the data providers in response to user requests since December 2024. Data products made available through the previous PRISM (Copernicus Contributing Missions access Support Functions and Platform) service since April 2024 are also included in the archive.
Access to the archive is provided in a self-service manner via the Copernicus RRD Browser. Eligible registered users can search all the available non-sensitive products and data using graphical and attribute-based filtering. Download and re-use of data already held in the archive is free and without quota, according to the agreed terms and conditions for CCM data use.
Map footprints, detailed metadata and quicklooks are available for each product. Users can download complete products directly from the browser.
For products that are suitable for interactive use, the browser provides additional functionalities such as visualisation up to full resolution, time-lapse visualisation, point/area analysis of the time-series, interactive 3D visualisation, histogram calculation, export of a specified image subset, image comparisons with swipe or blend options and multi-temporal analysis. Helpdesk assistance is provided on request.

Speakers:


  • Alexandra Knizel - Telespazio Ibérica
  • Stefan Ram - GAF AG
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: A.08.11 - POSTER - Ocean Waves: Measurements, Interactions and Applications

Ocean waves are a fundamental property of the air-sea boundary, with interactions ranging from wave-breaking assisting the air-sea transfer of gases to the potential destruction of large oil tankers. From a climate perspective they provide a measurable integration of the input of wind energy and provide a mechanism for the break-up and dispersal of ice floes. Long-term studies of wave height, wave period and direction provide an insight into climate change, and derived climatologies and short-term forecasts indicate the probable range of conditions likely to affect shipping, wave-energy harvesters and offshore structures.

This session encourages the submission of presentations related to the measurement and utilization of wave data from all sensors, ranging from the long-term missions and efforts to maintain the consistency of such datasets through validation to exploring the potential new information from innovative missions such as CFOSAT and SWOT. Studies may be on along-track or gridded products, especially those that focus on driving down the uncertainty in long-term products. We also welcome abstracts dealing with the interaction of waves with currents or sea-ice, and those addressing the issues of extremes and of recovering data close to the shore.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: Nearshore sea state variability from diverse long-term satellite observations produced by the ESA Sea State CCI consortium

Authors: Ben Timmermans
Affiliations: National Oceanography Centre
The global coverage and consistency of satellite remote-sensing observations of sea state strongly motivate their exploitation for applications where in situ measurements and model hindcast data remain unavailable. This is particularly relevant to coastal locations, where environmental and societal impacts may be of interest. However, validation and application of these data sources is often not straightforward in such locations, owing to data sparsity, reliability linked to coastal morphology, and local ocean variability on small spatial scales that can introduce uncertainty. Nevertheless, updated observations, together with recently developed methodologies, can be applied to bring greater understanding of sea state and ocean variability in nearshore areas. Long-term, quality-controlled and consistent observations of sea state from satellite-borne altimeters and synthetic aperture radar (SAR) imaging systems have previously been produced by the ESA Sea State Climate Change Initiative (CCI), with records up to 2020. Recently, in a new project phase, these records have been updated. In addition to an altimetry record, which has been extended temporally up to 2024, coastal observations have been augmented to include the first ever processing of SAR-derived sea state parameters from Sentinel-1 A & B Interferometric Wideswath (IW) and Extra-Wide (EW) modes. The DLR processing of Sentinel-1 IW mode images provides parameters of wave height and period for total, wind-sea and swell partitions, at spatial resolutions approaching 1 km. In order to both validate and exploit these new datasets for the study of long-term sea state variability, intercomparisons with other sources of coastal data are made, including both in situ measurements from moored buoys and recent altimeter measurements from Envisat produced by the ESA “FDR4Alt” project. In spite of their importance, the use of direct sea state observations from buoys presents a number of challenges that are often overlooked.
Buoy platforms vary considerably in configuration, purpose, operational management, reliability and so on, and as such represent a diverse collection of largely independent monitoring stations. While over more recent years there has been increased scrutiny of their records, important questions still remain over uncertainties in the buoy record. Nonetheless, such is its level of maturity that altimetry sea state data can be readily exploited to investigate this issue. For example, consistent along-track observations can be used to identify problematic data from buoys, thus reducing uncertainty in subsequent analyses. Attribution of discrepancy is more difficult, but is of potential interest to buoy operating agencies. Sea State CCI altimetry datasets bring further benefit to nearshore analysis, such as improved gradient evaluation and data collocation, through the implementation of an innovative “denoising” process applied to the 1 Hz Level-2 data that reduces small-scale uncertainty. For Envisat data specifically, analysis at the 1 Hz (~7 km) scale can also be compared with the reprocessed data from the FDR4Alt project (https://www.fdr4alt.org/) at 5 Hz (~1 km), which potentially offers further spatial detail. In this work we use both denoised altimetry and SAR observations from the recent Sea State CCI datasets to examine and intercompare the spatial properties of sea state variability in areas around the coasts of the U.S. and U.K. We first use altimetry data to establish the reliability, limitations and collocation criteria for relevant in situ records in both regions, with particular consideration of data from the U.S. Coastal Data Information Program (CDIP) and the National Network of Regional Coastal Monitoring Programmes of England. Subsequently, the evaluations of long-term wave climate from buoys and CCI altimetry data are compared with those from CCI Sentinel-1 SAR IW mode, and also with the FDR4Alt Envisat 5 Hz along-track data.
Benefits, advances and limitations of these new satellite datasets for the investigation of nearshore sea state climate are presented and discussed.
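The collocation criteria mentioned in the abstract above (matching satellite along-track samples to in situ buoy records) typically combine a distance radius and a time window. The sketch below illustrates that general technique only; the thresholds shown (50 km, 30 min) are common choices in the altimeter validation literature, not values taken from this study:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def collocate(track, buoy, max_km=50.0, max_s=1800.0):
    """Return (lat, lon, t, hs) track samples within max_km and max_s of the buoy.

    track: list of (lat, lon, time_s, significant_wave_height_m)
    buoy:  (lat, lon, time_s) of the in situ record
    """
    blat, blon, bt = buoy
    return [s for s in track
            if haversine_km(s[0], s[1], blat, blon) <= max_km
            and abs(s[2] - bt) <= max_s]

# Toy along-track segment: two nearby samples and one far to the north.
track = [(50.0, -5.0, 0.0, 2.1), (50.3, -5.1, 60.0, 2.3), (55.0, -5.0, 120.0, 3.0)]
matches = collocate(track, buoy=(50.1, -5.0, 0.0))
print(len(matches))  # the distant sample at 55°N is rejected
```

In practice the matched pairs would then feed bias and scatter statistics (e.g. RMSD of significant wave height) per buoy, which is how problematic in situ records can be flagged.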
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: A new insight into the global ocean wind-sea from CFOSAT/SWIM: Stokes drift and wind-waves related parameters

Authors: Charles Peureux, Annabelle Ollivier, Romain Husson, Lotfi Aouf, Cédric Tourain, Danièle Hauser
Affiliations: CLS, Météo-France, CNES, LATMOS
CFOSAT is a satellite mission operated since 2018 whose objective is to measure, in addition to historical nadir parameters, wave spectra on a global scale from space, typically down to 30 m wavelength. SWIM, carried by CFOSAT, is the first ocean wave scatterometer. In principle, CFOSAT can resolve waves down to 30 metres in wavelength, enabling us not only to measure dominant waves but also to explore the intermediate-wave domain, which is a unique capability among ocean satellite missions. Sea state can be split into a swell part and a wind-sea part. Swell is usually of larger scale, while the wind sea encompasses all those waves forced by the wind at scales smaller than swell, down to a few metres in length. Swell parameters are relatively easily accessible from sensors such as SARs; by contrast, knowledge of wind waves at a global scale is mainly based on ocean wave models. In this presentation, examples of products from the CFOSAT mission are presented, based on measurements of wind-wave spectra (Stokes drift mainly, but also wave skewness, for example). It will be seen in which way CFOSAT/SWIM helps fill the observation gap between swells and short wind waves, and the characteristics of a few products will be presented. An initial validation of these products shows a strong correlation with reference data such as wave models, even stronger than that of the usual integrated wave parameters such as significant wave height. Useful information is provided in areas where the model performs badly.
Using this information in conjunction with altimeter-derived data (geostrophic currents, for example), we could provide an interesting insight into ocean surface dynamics.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: Sea state monitoring with miniaturized drifter network

Authors: Alexey Mironov, Gwenaelle Jan, Lucas Charron, Jean Rabault, Dr Fabrice Collard, Alexis Mouche
Affiliations: Eodyn, LOPS, Ifremer, OceanDataLab, Norwegian Meteorological Institute
As ocean models' resolution and complexity continue to improve, the need for reliable and cost-efficient tools to calibrate and validate them becomes critical. Similarly, the growing number of ocean-observing satellite missions requires in-situ ocean-state monitoring networks specifically adapted for precise calibration and validation of satellite data. Traditional sea surface buoys, while robust, do not fully address the evolving needs of scientific and industrial stakeholders. Accurately comparing satellite measurements or model outputs with in-situ data requires careful consideration of sensor sampling, acquisition rates, and spatiotemporal resolution. Measurements must capture variables such as sea surface roughness, total current, wave spectra, and wind vectors, synchronized within satellite swaths and tailored to mission-specific requirements. Additionally, monitoring wave-current-wind coupling processes necessitates advanced in-situ capabilities. Meeting these diverse needs on a global scale, across offshore and coastal regions, requires deploying a responsive, extensive observation network—particularly for rapid-response scenarios like extreme weather events, maritime accidents, or oil spills. Traditional solutions are often too costly and slow to deploy to meet these challenges adequately. Recent advances in compact, cost-effective drifters, such as SOFAR Spotter and OpenMetBuoy, have demonstrated the potential to transform in-situ ocean monitoring. These systems enable mass deployments in remote regions or for targeted meteorological events, such as tracking tropical cyclones. Building on these achievements, we introduce the Miniaturized Electronics Lagrangian Oceanographic Drifter (MELODI) project, which addresses the need for an environmentally friendly, affordable, and versatile platform for real-time sea surface monitoring. The MELODI buoy represents a next-generation solution for multivariable sea surface observation. 
Its compact hull, measuring 24 cm in diameter and 9 cm in height, is made from biodegradable Polybutylene Succinate (PBS), reducing environmental impact and eliminating the risk of microplastic pollution at the end of its lifecycle. Equipped with solar panels, the buoy has a total weight of 1.3 kg and features a low immersion-to-windage ratio to minimize windage effects and ensure accurate drift measurements. The platform is designed for classical ocean monitoring tasks, such as measuring drift velocity, wave spectra, and sea surface temperature. However, its modular design and onboard processing capabilities allow it to go beyond traditional applications. MELODI buoys can also estimate wind speed and direction, investigate wave-current-wind coupling processes, and study phenomena such as high-frequency wave behavior, wave group modulation, wave steepness, and nonlinearity. To validate the performance of the MELODI platform, extensive campaigns were conducted in the North Atlantic and Mediterranean Sea. These tests evaluated the mechanical and hydrodynamic properties of the buoy, verified the functionality of onboard sensors, and assessed data processing algorithms in real-world conditions. High-resolution datasets were collected and cross-validated against outputs from numerical wave and current models, as well as measurements from stationary buoys and collocated satellite observations. The results confirmed the system’s reliability and accuracy across a range of environmental conditions. In addition to its technical performance, MELODI is designed for scalability and rapid deployment. Its streamlined production process supports large-scale manufacturing, enabling the creation of dense observation networks in regions of interest. This capability will be demonstrated during upcoming deployments. 
Sixteen buoys will be deployed in the Southern Ocean during the Vendée Globe 2024 Challenge, where the captain of a race boat has agreed to facilitate deployment in this remote and under-observed region (a joint initiative of eOdyn and LOPS-IRD). Another ten buoys will be deployed in the North Atlantic aboard the Statsraad Lehmkuhl as part of the 8th ESA Ocean Training Course (ESA OTC25). These deployments will serve as a demonstrator for a future global, real-time observation network capable of addressing a wide range of scientific and operational needs. The data collected from these deployments will provide valuable insights into the potential of MELODI buoys for systematic ocean monitoring. Analyses will include comparisons with numerical models and collocated satellite observations to evaluate the buoys’ ability to enhance satellite calibration and validation efforts. These results, along with detailed performance analyses, will be communicated during the presentation. By introducing MELODI, we aim to address the growing demands of global ocean observation networks with a versatile and environmentally sustainable solution. The combination of classical monitoring capabilities and the potential for novel applications positions MELODI as a transformative tool for advancing ocean science and operational oceanography.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: CFOSat surface waves multiplatform calibration and validation

Authors: Salman Khan, Marites Canto
Affiliations: CSIRO, CSIRO
In this work, the accuracy and quality of the wave data products from the Surface Waves Investigation and Monitoring (SWIM) instrument onboard the China France Oceanography Satellite (CFOSAT) have been analysed against extensive multi-platform wave buoy measurements. The SWIM instrument on CFOSAT, launched in 2018 as a joint effort between the French and Chinese space agencies, is a first-of-its-kind wave scatterometer exploiting the sensitivity of near-nadir scattering to wind-generated waves roughly in the range of 50 m - 550 m wavelengths [1]. Using the near-nadir radar beams of SWIM, which conically scan the ocean surface in all directions along the satellite track, mean slope spectra and associated wave statistics are being regularly produced, globally. Additionally, using the nadir/altimeter beam, conventional altimeter significant wave height and wind speed are also produced. The SWIM data are of great significance, being one of the few space-based sources able to measure directional wave spectra globally and filling many gaps where in-situ measurements are sparse. Several studies have examined the performance of SWIM wave data against reference wave datasets [1-5]. Continuous assessment of data quality and accuracy is important due to potential degradation of the satellite and/or payloads, and to record uncertainties and inform data use in science applications. The SWIM wave data have also benefitted from several improvements in algorithms and re-processings, warranting regular assessment of quality and accuracy. In this work, SWIM wave data are assessed against multiple in-situ wave buoy data collections: the National Data Buoy Center (NDBC) [6], the Sofar drifting Spotter buoy archive [7], and wave measurements at the Southern Ocean Flux Station (SOFS) [8].
The NDBC data cover the entire SWIM operational period, the Sofar archive covers a ~3-year period from January 2019 to March 2022, and the SOFS measurements span until March 2022, with directional wave measurements spanning a shorter period. Comparisons have been performed for the wave statistics of significant wave height, peak wave period, mean wave period, and peak direction; these are then extended to wave spectra comparisons across the three in-situ data collections. Overall, very positive and convincing results have been seen through these comparisons, especially for significant wave height and peak direction. The accuracies are not as convincing for mean and peak wave periods (mean wave periods being worse), and vary depending on the in-situ wave buoy data collection used. For mean wave period, using Jiang's neural-network-based method [4] significantly improves the accuracies. Interesting differences in accuracy and performance are also observed across the different SWIM off-nadir beams (at 6, 8 and 10 degree incidence angles), with some convergence toward either the 10 degree beam or a weighted average of the off-nadir beams giving the best results against wave buoys. The comparisons are then also extended to wave spectra, and the initial results are convincing, with lower accuracies mainly at the low-frequency part of the spectra (potentially due to parasitic peaks). At the time of writing this abstract, only omni-directional spectral comparisons have been performed, with the intention to extend these to the 12 discrete directions of the SWIM spectra. As such, a validated and calibrated SWIM data collection with established accuracies and errors is very useful for ocean wave applications globally, and especially in regions where in-situ measurements have been historically sparse (such as the Southern Ocean).

References:
[1] Hauser, D., et al. (2021). New Observations from the SWIM Radar On-Board CFOSAT: Instrument Validation and Ocean Wave Measurement Assessment. IEEE Transactions on Geoscience and Remote Sensing, 59(1), 5-26. https://doi.org/10.1109/TGRS.2020.2994372
[2] Tourain, C., Piras, F., Ollivier, A., Hauser, D., Poisson, J. C., Boy, F., Thibaut, P., Hermozo, L., & Tison, C. (2021). Benefits of the Adaptive Algorithm for Retracking Altimeter Nadir Echoes: Results from Simulations and CFOSAT/SWIM Observations. IEEE Transactions on Geoscience and Remote Sensing, 59(12), 9927-9940. https://doi.org/10.1109/TGRS.2021.3064236
[3] Liang, G., Yang, J., & Wang, J. (2021). Accuracy evaluation of CFOSAT SWIM L2 products based on NDBC buoy and Jason-3 altimeter data. Remote Sensing, 13(5), 1-17. https://doi.org/10.3390/rs13050887
[4] Jiang, H., Song, Y., Mironov, A., Yang, Z., Xu, Y., & Liu, J. (2022). Accurate mean wave period from SWIM instrument on-board CFOSAT. Remote Sensing of Environment, 280, 113149. https://doi.org/10.1016/j.rse.2022.113149
[5] Xu, Y., Hauser, D., Liu, J., Si, J., Yan, C., Chen, S., Meng, J., Fan, C., Liu, M., & Chen, P. (2022). Statistical Comparison of Ocean Wave Directional Spectra Derived From SWIM/CFOSAT Satellite Observations and From Buoy Observations. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-20. https://doi.org/10.1109/TGRS.2022.3199393
[6] Hall, C., & Jensen, R. E. (2022). USACE Coastal and Hydraulics Laboratory Quality Controlled, Consistent Measurement Archive. Scientific Data, 9(1), 248. https://doi.org/10.1038/s41597-022-01344-z
[7] Sofar Spotter Archive. https://www.sofarocean.com/mx/sofar-spotter-archive (accessed 29 Nov 2024).
[8] Rapizo, H., Babanin, A. V., Schulz, E., Hemer, M. A., & Durrant, T. H. (2015). Observation of wind-waves from a moored buoy in the Southern Ocean. Ocean Dynamics, 65(9), 1275-1288. https://doi.org/10.1007/s10236-015-0873-3
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: 5Hz altimetric wave data products and climate cross cutting activities

Authors: Annabelle Ollivier, Marine De Carlo, Adrien Nigou, Dr Lotfi Aouf, Fabrice Ardhuin, Claire Maraldi, Pierre Femenias
Affiliations: CLS, Ifremer, Météo-France, ESA, CNES
New missions dedicated to monitoring the small scales of ocean dynamics, such as CFOSAT and SWOT, make it possible to better characterize sea-state-related structures and to revisit the altimetric constellation time series. Thanks to the full wave spectra delivered by the CFOSAT mission (launched in 2018), we identified that a source of additional energy between 1 km and 100 km, known as the LRM spectral bump (Dibarboure et al. 2014), was the directionality of swell conditions and more specifically the wave groups (De Carlo et al. 2022, De Carlo and Ardhuin 2023). Based on this, we delivered demonstration products to Aviso and Wave TAC Copernicus users with a refined resolution from 1 Hz to 5 Hz, as the optimal way to increase observability near coasts and in highly variable areas (Ollivier et al. OSTST 2022, Ollivier et al. LPS 2022, De Carlo et al. 2023, Aouf et al. CoastALT meeting 2023, Zunino et al. 30yrsAltimetry 2024). For wave height measurements, De Carlo et al. 2022 and De Carlo & Ardhuin 2023 showed that the small-scale variability can be split into speckle noise, a wave-group effect and mesoscale variability. Therefore, a parameterization of Hs uncertainty depending on the significant wave height and the peakedness parameter (Qkk) of the spectrum was proposed as a proxy for the groupiness of waves. In the ESA FDR4ALT project (2021-2023), we showed that this uncertainty varies in time with basin-scale patterns over the whole Envisat mission (10 years). This work is being extended to other missions in the frame of the CCI Sea State projects. On the other hand, simulation studies and short-scale variability studies on non-retracked data (ICESat-2 and SWOT) showed that the bump in the Sea Level Anomaly comes directly from this wave effect, propagated by a correlation induced by the retracking. This correlation, also quantified by Mangilli et al.
(OSTST 2023, 30 Years of Altimetry 2024) with a Bayesian approach, helped us to derive from these observations an empirical correction, complementary to the SSB correction (or HFA, Tran et al. 2021), that corrects for the wave-group effect around 10 km. These analyses, shared in the frame of the ASELSU project, helped to disentangle different sources of uncertainty coming from the sea state. In this talk, we will present a breakdown, over different scales, of the different sources of sea-state corrections, and lay the basis for cross-cutting activities between the sea level climate community and wave climate activities, pushing forward the limits of our understanding of surface ocean dynamics.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: Progress Towards Assimilating Synthetic Aperture Radar Wave Mode Products at ECMWF

Authors: James Steer, Dr Jean-Raymond Bidlot, Dr Saleh Abdalla, Sean Healy
Affiliations: European Centre for Medium-Range Weather Forecasts
Current approaches to assimilate synthetic aperture radar (SAR) directional wave spectrum retrievals into large-scale spectral ocean wave models use optimal interpolation of integrated parameters from the retrieved swell spectrum partitions (Aouf et al., 2019). These methods were first implemented in the early 1990s. During the mid-to-late 1990s, the wave modelling community shifted from inverting SAR intensity spectra (Level 1b products) using a model first-guess (Hasselmann and Hasselmann, 1991) to the assimilation of ocean swell spectra (Level 2 products), which have been generated using multiple-look radar images in the inversion process (Engen and Johnsen, 1995; Hasselmann et al., 2013). In contrast, coupled data assimilation systems increasingly emphasize lower-level retrieval products, leveraging the implicit coupling between Earth system components captured by these products. This work will present a discussion on these approaches and their role in underpinning the new wave data assimilation framework currently being developed at the European Centre for Medium-Range Weather Forecasts (ECMWF) through the Data Assimilation and Numerical Testing for Copernicus Expansion Missions (DANTEX) project. ECMWF began work on the European Space Agency (ESA) funded DANTEX project in October 2024. Through this project, ECMWF has been given the opportunity to fully exploit Sentinel-1 SAR Wave Mode (WV) products by updating and transforming its framework for assimilating these products into its Earth system model. ECMWF also has the opportunity to develop novel data assimilation methods and to utilise recent additions to Sentinel-1 products.
The latest progress made by ECMWF to assimilate Sentinel-1 WV retrievals will be presented, along with an investigation into their impact on wave forecasts, a comparison between the impact of assimilating Sentinel-1 and ENVISAT retrievals, and an analysis of the potential impacts and quality of Sentinel-1 Level 2 (ocean wave/swell spectrum) retrievals in a variety of sea states. Additionally, given the nascent stage of this project, a general discussion will be presented on the potential impacts and challenges of all aspects of ocean wave data assimilation. These aspects will include flow dependency in the background error covariance matrix, the use of a variational assimilation method, the role of the forward operator in quality control and monitoring, and the potential role of data-driven models. This work is becoming particularly pertinent due to the launch of further satellites with SAR instruments: Sentinel-1C in late 2024 and Sentinel-1D in 2025, as well as the continued operation of the China-France Oceanography Satellite (CFOSAT). Looking ahead, the approved Copernicus Expansion Mission, ROSE-L, planned for launch in 2029, will also provide SAR images over the ocean. References: Aouf, L., Dalphinet, A., Hauser, D., Delaye, L., Tison, C., Chapron, B., Hermozo, L. and Tourain, C. (2019). On the assimilation of CFOSAT wave data in the wave model MFWAM: Verification phase. In IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, pp. 7959–7961, IEEE. Engen, G. and Johnsen, H. (1995). SAR-ocean wave inversion using image cross spectra. IEEE Transactions on Geoscience and Remote Sensing, 33(4), 1047–1056. Hasselmann, K., Chapron, B., Aouf, L., Ardhuin, F., Collard, F., Engen, G., Hasselmann, S., Heimbach, P., Janssen, P., Johnsen, H. (2013). The ERS SAR wave mode: A breakthrough in global ocean wave observations. Hasselmann, K. and Hasselmann, S. (1991).
On the nonlinear mapping of an ocean wave spectrum into a synthetic aperture radar image spectrum and its inversion. Journal of Geophysical Research: Oceans, 96(C6), 10713–10729.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: A Novel Data-Driven Modulation Transfer Function (MTF) for Wave Spectra Inversion From CFOSAT SWIM Wave Scatterometer: Preliminary Results and Validation With Buoy Data

Authors: Manuel López-Radcenco, Gilles Guitton, Dr Fabrice Collard
Affiliations: OceanDataLab
The SWIM instrument onboard CFOSAT (China-France Oceanography Satellite), a Ku-band radar with a near-nadir scanning beam geometry, was designed to measure the spectral properties of surface ocean waves. In particular, it allows for the retrieval of directional ocean wave spectral measurements by leveraging the inversion of the modulation transfer function (MTF), i.e., the relationship between the measured backscatter modulation spectra and the directional spectra of surface ocean waves. Current inversion algorithms, however, rely on geometrical and/or analytical models, which present limitations that may hinder the accuracy and precision of the inversion process. Recently, data-driven approaches have emerged as an appealing alternative to model-based approaches, leveraging the ever-increasing availability of remote-sensing and in situ data to overcome the limitations of current models. In this context, we introduce a novel partition-based approach for the statistical derivation of a data-driven empirical model of the MTF. To this end, we leverage both model data, in the form of WaveWatch III (WW3) simulated directional ocean wave spectra, and real SWIM backscatter modulation spectra observations to build a collocated partition dataset, which we subsequently exploit to learn an analytical MTF model. Specifically, one year of collocated SWIM/WW3 spectra were independently partitioned, followed by a cross-assignment procedure of the partitions based on spectral distance and intersection criteria. This leads to a collocated partition dataset of a few million partition pairs, allowing for the direct comparison between SWIM modulation spectral energy and WW3 wave height spectral energy.
Through the statistical analysis of this uniquely collocated SWIM/WW3 dataset, we aim to identify the most relevant parameters on which the MTF empirically depends (namely wavenumber, wind speed and wind azimuth) and subsequently learn an analytical model of the MTF based on these parameters. Additionally, we exploit both model data from WW3 and real in-situ data from historical SOFAR Spotter buoy network measurements to validate the obtained results, both in terms of directional wave spectra reconstruction and partition parameter retrieval. Importantly, we introduce a novel framework allowing for the validation of obtained results directly in terms of directional moments, directional-moment-based parameters and partition parameters, without the need for any artificial reconstruction of the directional spreading function.
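The cross-assignment of independently derived partitions can be sketched as a greedy nearest-neighbour pairing under a distance threshold. This is an illustrative simplification: the partition representation (peak wavenumber vectors `(kx, ky)`), the function names, and the `max_dist` threshold are assumptions, and the actual procedure described in the abstract also applies spectral intersection criteria.

```python
import math

def cross_assign(swim_parts, ww3_parts, max_dist=0.02):
    """Greedily pair SWIM partitions with WW3 partitions whose peak
    wavenumber vectors (kx, ky) [rad/m] are closest, within max_dist.
    Returns (swim_index, ww3_index) pairs; unmatched partitions are dropped."""
    pairs = []
    used = set()
    for i, p in enumerate(swim_parts):
        best, best_d = None, max_dist
        for j, q in enumerate(ww3_parts):
            if j in used:
                continue
            d = math.hypot(p[0] - q[0], p[1] - q[1])
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs
```

Applied over a year of collocated spectra, each retained pair contributes one (SWIM modulation energy, WW3 wave height energy) sample to the statistical fit of the MTF.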
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: CFOSAT: 6 years of surface wind and waves product, main outcomes and upcoming evolutions

Authors: Baptiste Gombert, Cédric Tourain, Lotfi Aouf, Danièle Hauser, Mrs Déborah Hazan, Annabelle OLLIVIER
Affiliations: CNES, Meteo-France, LATMOS, CLS
Since October 29th, 2018, the new space-borne system CFOSAT (China France Oceanography Satellite) [1] has been deployed for measuring ocean surface parameters. This mission, developed under the responsibilities of the French and Chinese space agencies (CNES, CNSA), was designed to monitor ocean surface winds and waves at the global scale. It is composed of two radar sensors, both scanning in azimuth: SCAT, a fan-beam wind scatterometer [2], and SWIM, designed for wave measurements [3]. With its collocated measurements of ocean surface wind and waves, CFOSAT aims at better understanding processes at the ocean surface and ocean/atmosphere interactions, and at improving atmospheric and oceanographic models and predictions by feeding forecast systems through assimilation. This paper focuses on the SWIM measurements. SWIM is an innovative Ku-band real-aperture wave scatterometer with 6 low-incidence rotating beams [3]. Since the beginning of the mission, continuous work on CALibration and VALidation (CAL/VAL), performance analyses, and algorithm and processing evolutions has improved the quality of SWIM products and opened more applications related to Stokes drift and ocean/wave interactions. This work also showed the complementarity with existing missions based on SAR systems, which also provide directional wave spectra but with more limitations in capturing shorter wave scales (wavelengths < 150 m) [4]. Based on this high-quality dataset, several work axes have been studied to further improve product quality and enhance the parameters provided. The work performed on parasitic peaks observed at low wavenumber allowed the implementation of a filtering algorithm; however, with the planned extension of the observed wave wavelength range, new methods have been identified to be analysed in order to improve this filtering.
The waves characterized through the wave spectra in the current Level 2 operational products are in the wavelength range [22.5 m; 500 m], while the modulation spectrum is provided in the range [22.5 m; 1112 m]. It is planned to extend the wave wavelength range for the wave spectra to [22.5 m; 1112 m] as well, opening a potential partitioning evolution currently under analysis. An in-flight experiment is also ongoing with a modification of the beam illumination sequence to study a potential azimuth resolution modification or information extraction through cross- or co-spectra analyses. References : [1] Hauser D. et al., “Overview of the CFOSAT mission”, IGARSS’2016, Beijing (China), July 2016 [2] Liu Jianqiang, Wenming Lin, Xiaolong Dong, et al., « First Results From the Rotating Fan Beam Scatterometer Onboard CFOSAT », 10.1109/TGRS.2020.2990708, 2020 [3] Hauser D., et al., SWIM: the first spaceborne wave scatterometer, 10.1109/TGRS.2017.2658672, 2017 [4] W. R. Alpers and C. Brüning, “On the relative importance of motion related contributions to the SAR imaging mechanism of ocean surface waves,” IEEE Trans. Geosci. Remote Sens., vol. GE-24, no. 6, pp. 873–885, Nov. 1986
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: Wave TAC from Copernicus Marine service: New products generation

Authors: Annabelle OLLIVIER, Patricia ZUNINO, Romain Husson, Dr Antonio REPUCCI, A AllTheContributors, Gerald Dibarboure
Affiliations: CLS, Mercator, CNES
Since 2017, wave observations have been integrated in the Copernicus Marine Service thanks to the WAVE Thematic Assembly Centre (TAC). This project aggregates and delivers to end users the best products of the ESA, CNES, CNSA and EUMETSAT missions. In 7 years, the catalogue has been largely enriched and the product quality, following the wave community's upstream R&D activities, keeps on improving. Starting from a nadir altimetry constellation and Sentinel-1 wave mode, more recent altimeters (including HY2, CFOSAT nadir, S6, SWOT nadir...) were taken onboard. Thanks to the R&D supported by CNES, CFOSAT, a mission dedicated to wave characterisation, now enables enriching the understanding of different sea states thanks to its full spectra. The latest altimetric 1 km resolution products will enable better qualification of nadir uncertainty and coastal observation. 2D wave products derived from SWOT are under development to be onboarded in the catalogue. This poster proposes an overview of the existing products, distributed in Near Real Time for wave forecasting, as well as Multiyear for climate applications (largely derived from the ESA CCI Sea State project). It also shows how various projects (Sea State CCI projects, SALP-CNES, EUMETSAT...) jointly contribute to the upstream validation and R&D studies. It will also show the complementarity of the L3 along-track products (altimeter, scatterometer, SAR, interferometer...) and how, once the missions are merged, they provide the best integrated synergistic characterisation of waves in L4 gridded products.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: Sentinel-1 Archive Processing for Spaceborne Synthetic Aperture Radar Derived Sea State Parameters in Scope of ESA’s Climate Change Initiative - Sea State ECV and collocation with TerraSAR-X

Authors: Andrey Pleskachevsky, Björn Tings, Sven Jacobsen
Affiliations: German Aerospace Center, Maritime Safety and Security Lab Bremen
In the scope of ESA’s Climate Change Initiative - Sea State ECV, sea state parameters are derived from Synthetic Aperture Radar (SAR) acquisitions provided by the Sentinel-1 (S1) mission (satellite S1-A launched in 2014, in operation until today; satellite S1-B launched in 2016, out of service since 2021). Within the project, ten years of archived S1 data from 2014 to 2024 are processed and analyzed in terms of sea state. The following eight integrated sea state parameters are processed from Level-1 (L1) Interferometric Wide Swath Mode (IW), Extra Wide Swath Mode (EW) and Wave Mode (WV) products: total significant wave height (SWH), dominant and secondary swell wave height, windsea wave height, first and second moment wave periods, mean wave period and windsea wave period. For the S1 IW (coverage ca. 250×200 km, 10 m pixel spacing, acquisitions dominantly in coastal regions and shelf waters) and S1 EW (coverage ca. 450×400 km, 40 m pixel spacing, acquisitions dominantly in the Arctic/Antarctic but also in the North Atlantic and around Madagascar) modes, the GRD (Ground Range Detected) ESA products are used; for the WV mode (along-orbit imagettes every 100 km, ca. 20×20 km, ca. 3.5 m pixels, acquisitions only in open oceans), the SLC (Single Look Complex) ESA products are used. The scenes are processed in a spatial raster format by analysis of subscenes (ca. 2.5×2.5 km with a 5 km processing raster step for S1 IW; ca. 10×10 km with a 17 km processing raster step for S1 EW), resulting in continuous sea state fields for S1 IW and S1 EW. For S1 WV, averaged values for each along-orbit imagette are stored. The DLR ocean products include the eight estimated sea state parameters as well as quality, uncertainty and rejection flags for each data point. For the processing, the new algorithm SAR-SeaStaR (SAR Sea State Retrieval), based on a combination of linear regression (LR) and machine learning (ML), has been applied [2].
SAR-SeaStaR was implemented in the Sea State Processor (SSP) software using a modular architecture, as one of the processing branches in the SAR AIS Integrated Toolbox (SAINT) package. Besides the processing of S1 IW, EW and WV (acquired at C-band radar frequencies), the SSP modular architecture is also adapted for processing TerraSAR-X/TanDEM-X (TS-X/TD-X) StripMap (SM) mode data (acquired at X-band radar frequencies). In total, ca. 3 million S1 IW and ca. 1 million S1 EW worldwide acquired scenes were processed, resulting in a database of ca. 1.2 million and 0.5 million ocean scenes for IW and EW, respectively. The processing of the S1 WV archive for 2014-2021, with ca. 150,000 products (ca. 12 million individual WV imagettes), has already been shown in [1]. The processing was executed on the LRZ (Leibniz Supercomputing Centre, www.lrz.de). The huge amount of data (ca. 4 million S1 IW and EW scenes, ca. 13 Petabyte (PB)) requires the usage of such a supercomputing system enabling large-scale parallel processing. For Areas of Interest (AoIs) covering the North Atlantic/Pacific and European waters, including the North, Baltic, Black and Mediterranean Seas [1], the processed S1 IW archive was collocated with the TerraSAR-X SM archive for the years 2020 and 2021. From the data of this two-year period (ca. 350,000 ocean S1 IW scenes and ca. 25,000 ocean TS-X SM scenes), 185 scenes are collocated within a distance under 30 km and a +/-30 min time window. Of those, ca. 50% of all collocations are found under 10 km distance and 5 min time difference; however, only individual scenes are directly collocated spatially and temporally under 1 min. For the estimation of root mean squared error (RMSE) and bias from the collocated full scenes, the values of the processed sea state parameters are compared for collocated subscenes extracted from the full scenes of both satellites (1.5 km processing raster for TS-X). The analysis of SWH results in an RMSE≈0.45 m between S1 IW and TS-X and a bias of ca. 0.08 m (S1 minus TS-X). These numbers correspond to the uncertainty of S1 IW compared to the model and buoys, with RMSE≈42 cm [2]. The individual outliers in the TS-X SM derived SWH dataset (recognizable by SWH underestimation for individual subscenes) are found to be connected to large slick-looking structures (“dark spots” with low NRCS due to e.g. oil and algae spills, wind shadowing, etc.) which are not as strongly pronounced in the S1 IW data (C-band, lower resolution). The collocation of the S1/TS-X collocated dataset with in-situ measurements (NDBC and EMODNET buoys) and with the model (MFWAM, Copernicus) results in a quadruple cross-comparison and uncertainty analysis. A cross-comparison matrix of RMSE was established, in which all combinations of the four SWH sources are analyzed. The cross-comparison matrix represents partial RMSEs calculated only for locations where all four sources are available. These partial estimates differ from the total RMSEs estimated for each data pair worldwide (e.g. 0.40 m for S1 IW and 0.35 m for TS-X by worldwide comparison to MFWAM), where the number of collocated data pairs is significantly larger and covers wider areas. [1] Pleskachevsky, A., Tings, B., and S. Jacobsen, 2022: Multiparametric Sea State Fields from Synthetic Aperture Radar for Maritime Situational Awareness, RSE, vol. 280, 22 pp. [2] Pleskachevsky, A., Tings, B., Jacobsen, S., Wiehle, S., Schwarz, E., and D. Krause, 2024: A System for Near Real Time Monitoring of the Sea State using SAR Satellites. IEEE Transactions on Geoscience and Remote Sensing, vol. 62, 2024.
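The scene collocation criterion described above (pairs within 30 km and a ±30 min window) can be sketched as a brute-force match over scene records. The `(lat, lon, datetime)` tuple layout and function names are illustrative assumptions, not the SAINT/SSP data format.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    r = 6371.0  # mean Earth radius [km]
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def collocate(scenes_a, scenes_b, max_km=30.0, max_minutes=30.0):
    """Return index pairs of scenes within the distance/time thresholds.
    Scenes are (lat, lon, datetime) tuples; brute force for illustration."""
    pairs = []
    for i, (lat_a, lon_a, t_a) in enumerate(scenes_a):
        for j, (lat_b, lon_b, t_b) in enumerate(scenes_b):
            close_in_time = abs((t_a - t_b).total_seconds()) <= max_minutes * 60
            if close_in_time and haversine_km(lat_a, lon_a, lat_b, lon_b) <= max_km:
                pairs.append((i, j))
    return pairs
```

In practice the archive-scale matching would use spatial indexing rather than an all-pairs scan, but the accept/reject criterion is the same.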
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: Last Improvement of the L4 WAVE-TAC Significant Wave Height NADIR Products

Authors: Cécile Béchonnet, Dr Maxime Ballarotta, Loren Carrere, Annabelle OLLIVIER, Patricia ZUNINO, Gerald Dibarboure
Affiliations: CLS, CNES
Accurate measurement and mapping of sea surface significant wave height (SWH) is essential for numerous applications spanning maritime safety, marine weather forecasting, and climate research. The Copernicus Marine Service's Thematic Assembly Centre (TAC) serves as a key resource in providing essential products for this purpose. This study focuses on the assessment and enhancement of the quality of SWH mapping within the WAVE-TAC product lines, particularly the Near Real-Time (NRT) Level-3 (L3; along-track) and Level-4 (L4; gridded) products, as well as the Delayed Time/Multi-Year (DT/MY) products. A key aspect of this investigation is the improvement of SWH mapping quality through the use of new interpolation techniques. By moving from conventional box averaging methods to interpolation, we aim to mitigate inherent limitations such as spatial discontinuities and loss of finer details in the sea surface representation. The quality of the gridded SWH product is assessed through comparisons with independent in-situ measurements and along-track data. Findings suggest that the new NRT and DT products exhibit qualitatively and quantitatively better performances than the established Copernicus L4 WAVE-TAC product. Maps of variance reduction show great improvements, especially further from the shore. When comparing the NRT and DT products, it is evident that the NRT product, which has half as many observations as the DT product, performs only half as well; adding further measurements in the future would greatly improve the NRT product performance. Additionally, intercomparison with assimilated model products (e.g., ERA5 or WAVERYS) sheds light on the limitations of the current NRT L4 WAVE-TAC product. Thus, regardless of the approach used (interpolation or box averaging), this product is still inadequate for resolving spatial scales of less than 1000 km.
Finally, sensitivity studies are carried out for the DT product to identify mapping errors in various altimeter constellation scenarios, as the number of nadir altimeters varies with time. The DT product also enriches the discussions held with the CCI Sea State project on global wave products. Overall, our study contributes to the refinement of SWH mapping methodologies, thereby enhancing the utility of these products for various scientific and operational purposes.
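The contrast between box averaging and interpolation can be illustrated in one dimension: box averaging leaves empty cells (spatial discontinuities) wherever no along-track observations fall, whereas interpolation fills the gaps. The function names and the simple linear scheme below are illustrative stand-ins, not the WAVE-TAC operational method (which uses optimal interpolation).

```python
def box_average(xs, vals, edges):
    """Average along-track values falling in each grid cell (box averaging).
    Cells with no observations stay None, producing spatial discontinuities."""
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        inside = [v for x, v in zip(xs, vals) if lo <= x < hi]
        out.append(sum(inside) / len(inside) if inside else None)
    return out

def linear_interp(xs, vals, centers):
    """Linearly interpolate observations (sorted xs) to cell centers,
    filling the gaps that box averaging leaves empty."""
    out = []
    for c in centers:
        if c <= xs[0]:
            out.append(vals[0])
            continue
        if c >= xs[-1]:
            out.append(vals[-1])
            continue
        for (x0, v0), (x1, v1) in zip(zip(xs, vals), zip(xs[1:], vals[1:])):
            if x0 <= c <= x1:
                out.append(v0 + (v1 - v0) * (c - x0) / (x1 - x0))
                break
    return out
```

With observations at positions 0, 1 and 4, a cell spanning [2, 4) gets no box-averaged value at all, while interpolation returns a smooth estimate at its center.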
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C)

Poster: An Evaluation of Sea Level Estimation From SWOT KaRIn Altimetry in the Southern Chesapeake Bay

Authors: Pittayuth Yoosiri, Christopher Buchhaupt
Affiliations: University Of Maryland
Sea level rise (SLR) is a growing problem in many coastal communities, with up to 4% of Americans at risk from seawater inundation by 2100. Current SLR estimates by the NOAA CoastWatch program rely upon in situ measurements from tide gauges and spaceborne sea level anomaly (SLA) measurements from a constellation of nadir-looking radar altimeters. However, conventional altimeters have limited spatial coverage outside of their direct ground tracks, creating significant gaps in SLA measurements. The Surface Water and Ocean Topography (SWOT) mission is equipped with a wide-swath Ka-band interferometric synthetic-aperture radar (SAR) altimeter capable of resolving submesoscale features up to 60 kilometers off the satellite ground track, drastically expanding the coverage of the altimeter constellation. Additionally, SAR altimetry offers higher precision and reduced interference compared to conventional altimeters near coastlines. Here, we assess the capabilities of the wide-swath altimeter in estimating sea level anomaly over the densely populated Southern Chesapeake Bay region, a drowned river valley with complex coastline geometry. As a heavily trafficked marine shipping hub, the region hosts a dense network of tide gauges as part of the Physical Oceanographic Real-Time System (PORTS) operated by the National Ocean Service. The region additionally has the local maximum swath coverage by SWOT during its science phase and good coverage from the conventional altimeter constellation, making it an ideal candidate for wide-swath SAR altimetry performance evaluation. In this presentation, we aim to evaluate the performance of the SWOT Level 2 Unsmoothed SSHA product over the Southern Chesapeake region. Using data collected between August 2023 and October 2024, SSHA measurements from the SWOT mission are collocated with measurements from the Sentinel-3A and Sentinel-6A missions and interpolated to their spatial resolution.
The performance of both SSHA products will be validated against the PORTS tide gauge measurements with statistical metrics such as bias, RMSE, and correlation coefficients. Retrievals from the SWOT mission will also be processed with our in-house SINCS-OV2 ZSK retracker to reduce noise, especially in near-coast environments. The processed product will also be validated against tide gauge measurements. Special consideration will be given to the role that distance to the coast plays in generating noise in the unprocessed SWOT SSHA product. Our findings will demonstrate the potential increase in accuracy and precision of the SSHA product from the SWOT mission, and inform the viability of future analyses of other relevant waveform parameters retrieved from SWOT, such as significant wave height, within the study region.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: A.02.04 - POSTER - Advances in Monitoring and Management of Forest Ecosystems

About 4 billion hectares, or 31 percent of the global total land surface, is covered by forests according to the latest FAO Forest Resource Assessment (FRA 2020). Forests are a valuable and irreplaceable ecosystem, playing an important role from an ecological, social, and economic perspective. Monitoring of forests is essential for better understanding, protecting, and sustainably managing this valuable ecosystem.

Information needs vary and include forest area, forest types, forest structure, disturbances, health, and biomass. An unprecedented wealth of observations by optical and microwave sensors, active and passive, from low to high resolution allows new ways to monitor forests. Emphasis will be put on advances in detecting forest disturbances, forest health monitoring, species identification, support to sustainable forest management, estimation of forest biomass, and forest carbon accounting. This session will showcase some of the most recent key achievements, including methods/algorithms, science and applications.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Open-Source Software for Scalable Forest Monitoring Using Diverse Satellite Data and Deep Learning

Authors: David Purnell, Gabriel Belouze, Fajwel Fogel, Ibrahim Fayad, Liang Wan, Philippe Ciais
Affiliations: Laboratoire des sciences du climat et de l'environnement, École Normale Supérieure, Kayrros
Earth Observation (EO) data has revolutionised our ability to monitor forests, yet no single satellite mission provides enough information to fully characterise complex forest environments. This presentation introduces a novel open-source software tool designed to integrate data from multiple satellites and leverage deep learning techniques for enhanced estimation of critical forest attributes such as canopy height and biomass. The software enables users to efficiently combine satellite data, train customisable deep learning models and produce country-scale maps. Engineered for scalability and flexibility, the software features a user-friendly API and extensive documentation. The presentation will focus on the application of the software for combining Synthetic Aperture Radar (SAR), LiDAR, and optical satellite imagery for monitoring forests in Europe and Central Africa. Previous studies have shown that the combination of SAR data from Sentinel-1, optical data from Sentinel-2, and LiDAR data from GEDI (Global Ecosystem Dynamics Investigation) is effective for monitoring forest canopy height and (to a lesser extent) biomass. SAR and optical satellite imagery, in the form of raster products, can be used as inputs for deep learning models, such as U-Net, to predict canopy height, with LiDAR vector products such as RH98 used for validation. Additional satellite data sources can further improve such models. For example, SAR data at different frequency bands, such as PALSAR-2 (L-band), complements Sentinel-1 (C-band) data for biomass prediction, as the lower-frequency band penetrates further into the canopy. LiDAR data from ICESat-2 can be used alongside GEDI data for enhanced canopy height validation by increasing spatial and temporal sampling. Additionally, high-resolution satellite imagery, such as from SPOT (Satellite Pour l’Observation de la Terre), can be incorporated to improve horizontal resolution in predictions. 
A quantitative comparison of models trained with different combinations of satellite data sources will be presented. In addition to an introduction to the software and key applications, the presentation will feature a discussion of the challenges associated with handling diverse data products and specific challenges related to biomass prediction.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Multi-Annual Forest Structure Characterization in Germany - Novel Products and Analysis Based on GEDI, Sentinel-1 and Sentinel-2 Data

Authors: Patrick Kacic, Dr. Frank Thonfeld, Dr. Ursula Gessner, Prof. Dr. Claudia Kuenzer
Affiliations: University of Würzburg, Institute of Geography and Geology, Department of Remote Sensing, German Aerospace Center (DLR), German Remote Sensing Data Center (DFD)
Forest structure monitoring is an essential task in the context of global climate change to preserve biodiversity, protect carbon sinks and foster future forest resilience. Severe impacts of heatwaves and droughts, triggering cascading effects such as insect infestation, are challenging the semi-natural forests in Central Europe. As a consequence of repeated drought years since 2018, large-scale canopy cover loss has occurred, calling for improved disturbance monitoring and assessment of forest structure conditions. The present study demonstrates the potential of complementary satellite remote sensing sensors to generate wall-to-wall products of multiple forest structure attributes (canopy height, total canopy cover, above-ground biomass density) for Germany. The combination of high spatial and temporal resolution imagery from Sentinel-1 and Sentinel-2 with novel samples on forest structure from the Global Ecosystem Dynamics Investigation (GEDI) enables the analysis of forest structure dynamics. Modeling the three-dimensional structure of forests from GEDI samples in machine learning models reveals the recent changes in German forests due to disturbances (e.g., canopy cover degradation, salvage logging). This first consistent, multi-annual data set on forest structure for Germany, delivered as annual products since 2017, provides information on forest canopy height, forest canopy cover and forest above-ground biomass, and allows estimating recent forest conditions at 10 m spatial resolution. Change detection analyses for disturbance hotspots (e.g. Harz forest, Thuringian forest) highlight the dominance of strong negative changes in various forest structure attributes in coniferous stands, spatially corresponding to the canopy cover loss areas mapped by previous studies. Furthermore, there are asynchronous temporal dynamics in forest canopy height, total canopy cover and above-ground biomass density for some regions (e.g.
Harz forest, Bavarian Forest National Park), enabling a more detailed understanding of post-disturbance conditions, i.e. the detection of standing deadwood. Overall, the wall-to-wall maps of forest structure attributes for Germany support a better understanding of post-disturbance forest structure and forest resilience.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Forest Height Mapping From Sentinel-1 and -2 Time Series, ALS, and GEDI LiDAR Measurements Using Machine-Learning Models

Authors: Markus Zehner, Clémence Dubois, Christian Thiel, Alexander Brenning, Jussi Baade, Prof. Dr. Christiane Schmullius
Affiliations: Friedrich Schiller University Jena, Institute of Geography, Deutsches Zentrum für Luft- und Raumfahrt, Institute of data science
Forests play a vital role in sequestering carbon, and detailed knowledge of their height and how it changes over time is essential for quantifying carbon stocks, for management, and for improving the accuracy of Earth system models. While global modeling approaches exist, assessing their predictions shows they often do not capture the variability at a local scale. LiDAR provides high-resolution measurements at the local scale, but its costs lead to limited spatial and temporal coverage. To bridge this local knowledge gap, we utilize dense time series data from the Copernicus Sentinel-1 and -2 satellites, which offer insights into forest structure and changes over time, to predict LiDAR-derived forest height. To explore the potential of temporal dynamics, we adopt a time series approach to reduce the risk of spurious correlations in the spatial domain. To achieve this, we train a transformer neural network on per-pixel forest height estimation from satellite time series in the temperate beech forests of Hainich National Park, Germany. Designed to handle variable-length time series from multiple sensors, our approach consists of per-sensor encoders that enable a modular extension of input sources, the fusion of the learned data representations, and a decoding step to obtain the target variable. During model training, we tested different input variable settings, as well as the addition of context variables. Our results show enhanced performance over the satellites' measurements alone by incorporating sensor-specific context into the models, such as viewing geometry and weather conditions that influence scattering mechanisms and backscatter intensity. Predicting the 90th height percentile of the Hainich National Park forest, derived from Aerial Laser Scanning point clouds, achieved an RMSE of 3.38 m and an MAE of 2.42 m. Meanwhile, the model sizes were kept small enough to be trained quickly on a single GPU.
While Sentinel-1 alone achieves good differentiation between very sparse and low vegetation up to about 10 m of forest height, it saturates quickly for higher canopies. With Sentinel-2 data, the model distinguishes decently over the full range of forest heights. The combination of both still results in a notable improvement, which becomes more important considering the requirement of cloud-free conditions for optical data. By considering the time dimension of a single pixel, we show the potential of joint optical-SAR time series for forest height estimation. Incorporating sensor characteristics as features increases the precision and robustness of our forest height models. This study shows the potential of mapping LiDAR forest height, an important proxy for stored carbon, through S1 and S2 time series. Future work can build on this, extending the targets towards gaining insight into the vertical distribution. In addition, early results already show that our model can be extended to leverage GEDI LiDAR measurements, which would allow for scaling towards spatially diverse sampling.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: VODnet: a virtual GNSS-T VOD network for monitoring of forest water budget and structure

Authors: Benjamin Brede, Konstantin Schellenberg, Adriano Camps, David Chaparro, Alexander Damm, Prof. Dr. Matthias Forkel, Christian Frankenberg, Abesh Ghosh, Henrik Hartmann, Martin Herold, Vincent Humphrey, Thomas Jagdhuber, Alexandra Konings, Mehmet Kurum, Michael Niederberger, Prof. Dr. Christiane Schmullius, Timothée Stassin, Susan Steele-Dunne, Martin Strube, Paul Vermunt, Yitong Yao
Affiliations: GFZ Helmholtz Centre For Geosciences, Friedrich Schiller University Jena, Universitat Politècnica de Catalunya, Centre for Ecological Research and Forestry Applications (CREAF), University of Zurich, TU Dresden, California Institute of Technology, University of Georgia, Federal Research Institute for Cultivated Plants, Meteoswiss, German Aerospace Center DLR, Stanford University, TU Delft, Max Planck Institute for Biogeochemistry, University of Twente, IEEC/UPC, ASPIRE Visiting International Professor UAEU, Georg-August-University Goettingen, University of Augsburg
Climatological, meteorological and hydrological extreme events, in particular droughts, are expected to become more frequent and severe under current and future climate change. These events increase pressure on forest and individual tree water budgets, leading to higher rates of tree mortality. In the domain of space-borne microwave remote sensing, vegetation can be characterized by the vegetation optical depth (VOD), a parameter that describes the attenuation of electromagnetic waves by vegetation, frequently used in parameter retrieval models. VOD is related to the water content, structure and biomass of vegetation and has been successfully applied to study water status and biomass globally using satellite-based observations. However, ground-based observations from radiometers or scatterometers that could be used to validate space-borne VOD retrievals (e.g., from ESA’s upcoming HydroGNSS Scout mission) are rare and costly. This scarcity makes VOD tedious to validate across different forest ecosystems using independent microwave measurements. GNSS-T (global navigation satellite system transmissometry) offers a promising alternative by opportunistically using GNSS signals (L-band, 1-2 GHz) with permanently installed antennas. Incoming signals, emitted by the 20–40 satellites of the four major constellations (GLONASS, GPS, GALILEO, BeiDou) in view at any particular point in time, are gradually attenuated by the forest canopy before being received by a ground antenna. Comparing the transmitted signal received by a ground antenna under a tree canopy with that of a reference antenna under clear sky gives the pure tree canopy attenuation, which directly corresponds to VOD. Previous studies have demonstrated the potential of this approach for monitoring vegetation water content, evapotranspiration, and intercepted rainfall at high temporal resolution, as well as its relationship to biomass and phenology.
A coordinated effort to establish joint but distributed observations would greatly enhance our understanding of microwave transmission in forest ecosystems, support the estimation of states and fluxes, and help calibrate and validate space-borne VOD data sets. In analogy to soil moisture networks (e.g., the International Soil Moisture Network, https://ismn.earth/en/), a similar network for ground-based VOD observations would provide a robust framework for comparison with space-borne VOD retrievals. Such a network would not only facilitate the validation of satellite-derived data but also strengthen its application to global drought studies. This contribution introduces the VODnet initiative, providing an overview of the different observation sites and site setups across a gradient of bioclimatic conditions, ranging from temperate to tropical forests in Europe, North America, and South America. The hardware and software used by individual partners will be shown, along with a discussion of the interoperability among these partnering sub-networks. The research objectives of the participating partners will be presented, including challenges in data interpretation and inter-comparisons with complementary in situ and remote sensing observations. Finally, an outlook on the network’s coverage and potential use cases will be presented, particularly the interoperability with other networks (ICOS, SAPFLUXNET & AmeriFlux) and their data streams, such as measurements of energy and carbon fluxes observed from eddy covariance towers.
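The canopy-versus-reference comparison described above reduces to a short computation. A minimal sketch, assuming SNR values in dB and the common zenith-projection convention VOD = -cos(θ_incidence)·ln(transmissivity); the exact convention adopted by individual VODnet partners may differ.

```python
import numpy as np

def gnss_t_vod(snr_canopy_db, snr_ref_db, elevation_deg):
    """Zenith-referenced VOD from paired GNSS SNR observations.

    Canopy transmissivity is the linear-power ratio of the signal received
    under the canopy to the clear-sky reference; the slant-path attenuation
    -ln(t) is projected to zenith with the cosine of the incidence angle
    (90 degrees minus the satellite elevation angle).
    """
    t = 10.0 ** ((np.asarray(snr_canopy_db) - np.asarray(snr_ref_db)) / 10.0)
    incidence = np.radians(90.0 - np.asarray(elevation_deg))
    return -np.cos(incidence) * np.log(t)

# Example: a 3 dB loss through the canopy at 60 deg satellite elevation.
vod = gnss_t_vod(42.0, 45.0, 60.0)
```

Because the function is vectorized, whole arrays of SNR pairs (one per satellite pass) can be converted at once, which is how hourly VOD time series are typically aggregated.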
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: High Biomass Forests are More Susceptible to Bark Beetle Disturbance in Europe

Authors: Viola Heinrich, Katja Kowalski, Dr. Alba Viana Soto, Simon Besnard, Wand de Keersmaecker, Ruben Van De Kerchove, Dr. Cornelius Senf
Affiliations: Section 1.4 Remote Sensing and Geoinformatics, GFZ, Earth Observation for Ecosystem Management, School of Life Sciences, Technical University of Munich, Unit Remote Sensing and Earth Observation Processes, Flemish Institute for Technological Research (VITO)
European forests have seen a rise in both harvest and natural disturbances (i.e., bark beetle and windthrow disturbances) in the last decades, with consequences for Europe’s forest carbon sink, which is already declining in some countries. Among natural disturbances, drought-driven bark beetle outbreaks accounted for a third of unplanned canopy openings between 2015 and 2020. Bark beetles are theorised to preferentially target high-biomass forests, due to favourable breeding material, suggesting that the major outbreaks in 2018–2020 were inevitable in high-biomass spruce forests. However, a direct comparison of forest biomass in pre-disturbance and undisturbed forests using remote sensing data remains unexplored, despite its critical implications for future forest management. We hypothesise that forests subject to upcoming bark beetle disturbances have a higher biomass than forests that have remained undisturbed throughout the satellite era. To test this, we combine 30 m spatial scale forest disturbance data (1984 to 2023) with a 30 m PlanetScope-based aboveground biomass (AGB) map for 2019 and a 10 m forest genus map based on Sentinel-1/2. This approach allows us to examine forest AGB in 2019 before disturbances between 2021 and 2023 occurred, which we term “forests with upcoming disturbances”. We compared this with AGB in nearby (within 10 km) forests that remained unaffected by disturbances throughout the entire period, termed “undisturbed forests”. Additionally, we included nearby forests that experienced disturbances between 1984 and 2019, referred to as “disturbed forests”. Preliminary results show that needleleaf forests subject to upcoming unplanned disturbances have significantly higher AGB compared to nearby undisturbed forests, particularly in spruce forests, where biomass values are, on average, 30 Mg/ha higher than in undisturbed spruce forests.
In contrast, no statistical difference was found between the biomass of spruce forests subject to upcoming harvest and undisturbed forests. Enhancing the carbon sink in European forests is a crucial climate mitigation strategy and for achieving the European Green Deal goals. Prioritising the restoration of spatially heterogeneous forests over merely high biomass forests is therefore a crucial consideration. This strategy could help mitigate the increasing risk of bark beetle outbreaks under global warming.
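The group comparison described above (AGB in forests with upcoming disturbances versus nearby undisturbed forests) could be sketched with a simple two-sided permutation test. This is an illustrative stand-in, not the authors' actual statistical procedure, and the AGB samples below are synthetic.

```python
import numpy as np

def perm_test_mean_diff(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in mean AGB between
    two groups of pixels (e.g. upcoming-disturbance vs undisturbed)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled pixels
        diff = pooled[:a.size].mean() - pooled[a.size:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_perm

# Toy AGB samples (Mg/ha): the disturbed-soon group sits ~30 Mg/ha higher.
rng = np.random.default_rng(1)
upcoming = rng.normal(280, 40, 200)
undisturbed = rng.normal(250, 40, 200)
diff, p = perm_test_mean_diff(upcoming, undisturbed)
```

A permutation test makes no normality assumption about the biomass distributions, which is convenient for skewed AGB samples.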
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Continuous ground reference data for improved microwave observations of forest water status

Authors: Paul Vermunt, Dr. Roelof Rietbroek, Yijian Zeng, Bérénice Guglielmi, Christiaan van der Tol, Bob Su
Affiliations: University Of Twente (ITC)
During drought, water deficit depletes tree water content, which leads to an increased susceptibility to pests (e.g. bark beetle), wildfires, and hydraulic failure. Drought frequency and duration are expected to increase in many areas of Europe, which can trigger widespread tree mortality, disrupt the provision of important ecosystem services, and increase carbon emissions. To better observe and respond to ongoing changes in forest water status, we need a consistent, cross-border monitoring system. Microwave satellite observations are increasingly adopted to monitor forest health on large scales, based on their sensitivity to vegetation water content. Three major developments contribute to current progress: (1) the development of retrieval algorithms for products such as Vegetation Optical Depth for both passive and active instruments, (2) the merging of datasets into consistent long time series, and (3) the launch of new satellites with various frequencies and high spatial resolution (SAR). However, lagging behind these developments is the availability of continuous ground reference data for calibration and validation of these new algorithms and datasets. Similar to the International Soil Moisture Network supporting the development of soil moisture products, we aim to increase the quality and quantity of ground reference data for satellite products of vegetation water content. This is a challenge because trees are dynamic, living organisms with complex structures. Here, we will discuss our current efforts to test the suitability of state-of-the-art sensors and techniques to generate continuous time series of vegetation water content. This includes GNSS-Transmissometry-based Vegetation Optical Depth. We will discuss the decomposition of almost two years of GNSS-T VOD observations in the Netherlands. These hourly observations are analysed in detail using various in situ sensors and destructive samples.
Moreover, we will discuss the potential for using existing data networks, such as dendrometer observation networks, to create a valuable database for cal/val activities.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Mapping Forest Management in Europe through Integration of Remote Sensing Products and Geospatial Data Sets

Authors: Dr Linda See, Dr Fulvio Di Fulvio, Zoriana Romanchuk, Orysia Yashchun, Dr Mykola Gusti, Dr Pekka Lauri, Dr Andrey Lessa Derci Augustynczik, Dr Myroslava Lesiv, Dmitry Schepaschenko
Affiliations: International Institute For Applied Systems Analysis (IIASA)
European forests play a key role in achieving climate neutrality as they cover 44% of the EU’s land area and absorb almost 10% of greenhouse gas emissions annually. However, European forests have been declining over the last decade due to increased demand for wood coupled with a greater occurrence of natural disturbances. To determine the impacts of different sustainable forest management policies, models need good baseline information on how forests in Europe are currently managed. Here we present a European map of forest management at a 100 m resolution. Inputs from different years have been used in the production of the map, but it is intended to be a current representation of spatially explicit forest management. There are seven forest management classes with increasing intensity as follows: Primary/unmanaged forests; Close-to-nature forestry; Multi-functional protective forest forestry; Multi-functional forestry with non-woody forest products; Multi-functional recreational forestry; Long rotation production forestry; and Short rotation forestry. These classes were chosen as part of discussions in the framework of the EU-funded Forest Navigator and LAMASUS projects. These classes were informed by existing forest management products, the project requirements and refinements through stakeholder feedback. To produce the forest management map, a probabilistic forest mask was first developed. This mask was based on the consensus between various remotely-sensed tree cover products including the UMD GLAD forest layer, the JAXA forest product and tree cover classes from the Corine 10 m product and the C-GLOPS land cover product, all resampled to 100 m; in this mask, higher consensus between products represents a higher probability. Various input data layers were then assembled and resampled, and rules were formulated to assign one of the seven forest management classes to the forest mask.
Examples of input data sets used include: the World Database of Protected Areas, Natura 2000 sites, riparian areas calculated from a detailed river network overlaid with the Copernicus riparian high resolution layer, the EU tree species atlas, slope and elevation, the Copernicus Urban Atlas product, and Hansen’s tree loss and gain time series, among others. The rules operate hierarchically with any remaining tree cover assigned to the class ‘Multi-functional recreational forestry’. The results were checked using very high resolution satellite imagery, and the rules and input data sets were then adjusted through an iterative process of development. The map was also fit to national and sub-national forest statistics in order to check the national totals in terms of the area of production forests. Finally, the map was further refined using the Geo-Wiki tool and expert-sourcing.
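The hierarchical rule assignment over the probabilistic forest mask might be sketched as follows. The input layers and priority order here are hypothetical simplifications; only the class names are taken from the abstract.

```python
import numpy as np

# Hypothetical per-pixel input layers on a 100 m grid (6 example pixels).
forest_prob = np.array([0.9, 0.8, 0.95, 0.7, 0.85, 0.2])  # product consensus
protected   = np.array([True, False, False, False, False, False])
riparian    = np.array([False, True, False, False, False, False])
plantation  = np.array([False, False, False, True, False, False])

def classify(forest_prob, protected, riparian, plantation, threshold=0.5):
    """Apply rules from lowest to highest priority, so higher-priority
    classes overwrite; remaining forest falls to the recreational class."""
    cls = np.full(forest_prob.shape, "non-forest", dtype=object)
    forest = forest_prob >= threshold
    cls[forest] = "Multi-functional recreational forestry"  # fallback class
    cls[forest & plantation] = "Short rotation forestry"
    cls[forest & riparian] = "Multi-functional protective forest forestry"
    cls[forest & protected] = "Primary/unmanaged forests"
    return cls

labels = classify(forest_prob, protected, riparian, plantation)
```

The fallback assignment mirrors the abstract's rule that any remaining tree cover is labelled 'Multi-functional recreational forestry'.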
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Influence of topography and fire severity on pine and oak forest recovery

Authors: Ana Laura Giambelluca, Txomin Hermosilla, María González-Audícana, Jesús Álvarez-Mozos
Affiliations: Institute for Sustainability & Food Chain Innovation (IS-FOOD), Dep. Engineering, Public University of Navarre, Arrosadia s/n, 31006 Pamplona, Spain., Canadian Forest Service (Pacific Forestry Centre), 506 Burnside Rd W, Victoria, BC V8Z 1M6, Canada
Post-disturbance forest monitoring is an essential tool for assessing forest development. Identifying the driving variables behind recovery and their impact on the recovery process is essential for making informed decisions that facilitate efficient regeneration of the forest and the preservation of the ecosystem services it provides. The objective of this study is to evaluate the influence of topographic variables, including slope, aspect, and elevation, as well as the severity of wildfire and the condition of pre-fire vegetation, on the process of forest recovery following a wildfire. The study was conducted in Spain and comprised a total of 614 fires that occurred between the years 1986 and 2004. Pixel samples separated by more than 100 m were obtained from burned area maps. The forest stands assessed comprised six different tree species: holm oak (Quercus ilex), downy oak (Quercus pubescens), cork oak (Quercus suber), Portuguese oak (Quercus faginea), black pine (Pinus nigra) and Aleppo pine (Pinus halepensis). To evaluate the recovery of the forest, all available 30-m Landsat images from 1983 to 2023 with less than 50% cloud cover were considered. The Normalized Burn Ratio (NBR) index was used as input for the Continuous Change Detection and Classification (CCDC) algorithm, which provides a harmonic trend model allowing the monitoring of recovery following a disturbance. The forest was considered to be recovered when the NBR value reached 80% of its pre-fire value. Topographic variables (elevation, slope, aspect) were calculated using the ASTER DEM v3, while burn severity was calculated using the spectral change magnitude. The Kruskal-Wallis test, with pairwise comparisons using the Bonferroni correction, was employed to identify significant differences. The results showed that the pre-fire NBR value, burn severity and elevation were all associated with longer recovery times.
In samples with high or moderate-high severity, the recovery period was found to be longer than that observed in areas with moderate-low or low severity. The recovery period was found to be longer in areas with high elevations (greater than 550 m) than in areas with elevations below 350 m. Similarly, pixels with high NBR values prior to the fire required more years to recover than those with low NBR values. In contrast, lower slopes (less than 10º) exhibited a tendency towards slightly faster recovery. Furthermore, different aspects yielded different outcomes: the west aspect demonstrated a longer recovery period, while the east aspect required fewer years to recover. The variable with the highest correlation with the years to recover was elevation (0.18), followed by burn severity (0.14) and the pre-fire NBR value (0.12). It was observed that 11% of the pixels had not recovered by the end of the study period. The majority of these pixels exhibited moderate-high to high severity, high elevation, a west or south aspect and/or a high NBR value prior to the fire. With regard to tree species, the cork oak demonstrated the fastest recovery (five years), while the downy oak exhibited the slowest (nine years). However, the particular factors explaining recovery time for each tree species remained unclear due to interactions between variables. The findings of this study help clarify how different factors influence the recovery of pine and oak forests following a wildfire, offering valuable insights for pre- and post-fire interventions. Further research could extend this study by incorporating additional climatic variables and metrics obtained from other remote sensors, thus providing complementary information that could lead to a more complete understanding of post-fire recovery.
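The 80%-of-pre-fire-NBR recovery criterion can be sketched directly on an annual NBR series; the sketch below operates on raw annual values as a simplified stand-in for the CCDC harmonic model output.

```python
import numpy as np

def years_to_recover(years, nbr, fire_year, threshold=0.8):
    """Years until NBR first regains `threshold` (here 80%) of its mean
    pre-fire value; returns None if not recovered within the series."""
    years = np.asarray(years)
    nbr = np.asarray(nbr, float)
    pre = nbr[years < fire_year].mean()          # pre-fire baseline
    recovered = (years > fire_year) & (nbr >= threshold * pre)
    if not recovered.any():
        return None
    return int(years[recovered][0] - fire_year)

# Synthetic annual series: fire in 1993 drops NBR, gradual regrowth after.
yrs = np.arange(1990, 2001)
nbr = np.array([0.60, 0.62, 0.61, 0.15, 0.25, 0.35,
                0.42, 0.50, 0.55, 0.58, 0.60])
t = years_to_recover(yrs, nbr, fire_year=1993)
```

Returning None for unrecovered pixels corresponds to the 11% of pixels the study reports as not recovered by the end of the period.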
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Hyperspectral Remote Sensing and Environmental DNA for Assessing Soil Bacterial Diversity in Temperate Forests

Authors: Haidi Abdullah, Professor Andrew Skidmore, Dr Andjin Siegenthaler, Assistant Prof Margarita Huesca, Dr. Roshanak Darvishzadeh, Assistant Prof. Elnaz Neinavaz
Affiliations: University Of Twente
Environmental DNA (eDNA) provides a powerful tool for uncovering the diversity of microbial taxa in environmental samples, such as soil. Despite its potential, our understanding of bacterial community spatial variation across large landscapes remains limited due to sparse field sampling. This research introduces a novel approach combining eDNA data with high-resolution image spectroscopy from the DESIS satellite, integrated with Gaussian Process Regression, to estimate bacterial alpha diversity in the Bavarian Forest National Park. The resulting maps, generated at a 30-meter resolution, explain up to 50% of the variation in bacterial diversity. The study underscores the influence of ecological drivers on bacterial alpha diversity and offers new insights into the distribution of microbial communities in the Bavarian Forest National Park, Germany. These findings demonstrate the potential of integrating eDNA and remote sensing techniques to explore microbial diversity across diverse and expansive ecosystems.
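Gaussian Process Regression, as used above to map bacterial alpha diversity from DESIS spectra, can be illustrated with a minimal posterior-mean computation. The kernel choice, the toy one-dimensional feature, and the noise level below are assumptions for illustration, not the study's configuration.

```python
import numpy as np

def rbf(X1, X2, length=1.0, var=1.0):
    """Squared-exponential kernel on the (spectral) feature space."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """GP regression posterior mean: K* (K + sigma^2 I)^-1 y."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    return Ks @ np.linalg.solve(K, y_train)

# Toy example: a 1-D "spectral index" predicting a diversity index.
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, (40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=40)
y_hat = gp_predict(X, y, X)
```

In practice the feature space would be the DESIS reflectance bands (or a dimensionality-reduced version of them) rather than a single index.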
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: The effect of forest structures on forest microclimate using Unmanned Aerial Vehicles (UAVs): Implications for Heat Mitigation in Riparian Forests in Urban Areas

Authors: Sophie Feiertag, Prof. Dr. Annette Menzel
Affiliations: Technical University of Munich
Forests impact not only the climate conditions in their immediate vicinity, but also create a special within-forest microclimate. Especially during hot summer days, they provide cool microclimate refugia through canopy shading and evaporative cooling, benefiting flora, fauna, and the thermal well-being of visitors. This microclimatic buffering or cooling effect largely depends on canopy cover and forest orientation. However, the role of small-scale stand structures (e.g., tree species, richness, and abundance) in influencing microclimate remains unclear, as traditional in-situ methods are difficult to implement and spatially limited. Using UAV-based remote sensing, we aim to explore these relationships between stand characteristics and microclimate buffering and identify key structural drivers of forest microclimate cooling. Our study area, the Isar floodplain or riparian forest (Auwälder an der mittleren Isar), spans ~2,300 hectares along the river Isar between Munich and Landshut (Bavaria, Germany). In 2020, it was designated a nature forest (Naturwald), halting its economic use to prioritize nature conservation. At present, the Isar floodplain forest consists of near-natural sections with typical floodplain tree species as well as commercial forest relics that are still dominated by fast-growing productive tree species. Additionally, the Isar floodplain forest serves as a popular recreational area for over 1 million people. Its diverse forest structures within a compact area make it an ideal site for studying the relationship between forest structure and microclimate. RGB and multispectral images for identifying forest structures and thermal infrared (TIR) images for evaluating the surface temperature were taken at DOY 213 and DOY 226 at two sites using two DJI Mavic 3 UAVs, each mounted with an RGB and multispectral camera (ortho GSD 5.5 cm/pixel) and an RGB and TIR camera (ortho GSD 14.3 cm/pixel), respectively.
Both flights were performed around midday on sunny days with no cloud cover and moderate wind speeds to minimize the influence of shadows and canopy movement. The image-image overlap ratio was set to 85% and the flights were performed at a 120 m altitude. The two sites comprise near-natural stands and commercial forest relics, as well as structures aside from forest stands (e.g. the river Isar, dead wood and pathways). The identification of forest structures with multispectral and RGB UAV images is validated with in-situ measurements such as various inventory data and ground-based LiDAR scans of the forest stand. TIR images revealed a wide range of land surface temperatures (LST), between 19.4 and 55.7 °C. These records will be compared against data from climate stations and thermo-hydro loggers placed in the forest stands in the study area. Our study provides deeper insights into the effect of forest structures on microclimate in this riparian forest and thereby helps to assess the potential future effects of a forest breakdown on the recreational forest function. In our poster we will showcase our measurement concept and preliminary results, providing a visual representation of these findings for conference attendees. This project is part of the REGULUS project A-DUR (https://regulus-waldholz.de/a-dur/) and funded by the German Federal Ministry of Education and Research.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Trends in ground filtering of Airborne LiDAR: A comparison of the most used algorithms at different NEON field sites

Authors: Aaron Cardenas-Martinez, Francisco M. Canero, Emilia Guisado-Pintado, Prof. Victor Rodriguez-Galiano
Affiliations: University of Seville
An accurate analysis of forest structure is crucial for understanding the functioning of terrestrial ecosystems, plant biodiversity, and ecosystem services. In this context, active remote sensing technologies such as LiDAR have emerged as valuable tools for extracting structural parameters of vegetation. Among these technologies, Airborne Laser Scanner (ALS) stands out due to the extensive development of sensors and platforms, enabling its application in numerous national missions to generate three-dimensional data across entire territories. The widespread availability of point clouds generated from these missions remains a key advantage of ALS over terrestrial (TLS) and unmanned (ULS) laser scanning systems for large-scale vegetation structural traits modelling. Moreover, the proliferation of private ALS acquisitions has further expanded its utility in forest inventories, ecological monitoring, and species-level classification. The increasing use of ALS data, alongside the diversity of sensors and flight configurations, has driven the need for the development of software tools that enable the processing of highly heterogeneous point clouds through standardized workflows. These workflows typically vary depending on whether an area-based approach (ABA) or a tree-based approach (TBA) is employed. ABA focuses on deriving canopy-scale metrics by applying a grid across the study area, requiring pre-processing steps such as ground filtering. Classical filtering algorithms implemented in widely known software coexist with novel formulae that are becoming popular within the scientific community due to advancements in sensor technologies and processing techniques. Despite the numerous published studies assessing the performance of different ALS ground filtering algorithms, some aspects require a more comprehensive discussion. Notably, there is a scarcity of studies that validate findings using study areas representative of diverse eco-climatic regions. 
Moreover, the selection of algorithms in existing benchmarks often lacks a comprehensive review of current trends in forestry science. To address these gaps, a systematic review of scientific literature was performed to assess the use of ground filtering algorithms in forestry between 2016 and 2023. The 519 papers reviewed allowed the identification of the most frequently used algorithms and software packages, as well as the range of point densities employed during this period. To evaluate the performance of ground filtering algorithms, three National Ecological Observatory Network (NEON) field sites were selected, representing different ecoclimatic domains. The most widely used algorithms for ground filtering previously identified in the systematic review were tested under varying forest structural complexity, considering factors such as vertical stratification and slope. Three point densities (approximately 20, 8 and 1 points/m²) were defined based on percentile ranges from the reviewed datasets to determine the most suitable algorithm for each NEON site and assess their overall performance. Among the tested algorithms, the Cloth Simulation Filter (CSF, MMCE = 0.967) and Progressive TIN Densification (PTIN) implemented in LAStools (MMCE = 0.951) consistently outperformed others across all study sites. Furthermore, results indicated that the performance of CSF and PTIN is largely independent of parameter configurations, whereas algorithms such as FUSION’s Ground Filter and Multiscale Curvature Classification (MCC) exhibited a stronger dependency on user-defined settings.
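As a toy illustration of what a ground filter does (far simpler than CSF, PTIN, or MCC), a grid-minimum filter classifies points close to the lowest return in each grid cell as ground. The cell size and height tolerance here are arbitrary assumptions, and the point cloud is synthetic.

```python
import numpy as np

def grid_min_ground_filter(points, cell=5.0, tol=0.3):
    """Naive ground filter: a point is labelled ground if its height is
    within `tol` metres of the lowest return in its grid cell."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    keys = xy[:, 0] * 100000 + xy[:, 1]          # scalar cell identifier
    ground = np.zeros(len(points), dtype=bool)
    for k in np.unique(keys):
        idx = np.where(keys == k)[0]
        zmin = points[idx, 2].min()
        ground[idx] = points[idx, 2] <= zmin + tol
    return ground

# Toy cloud: flat terrain near z=0 mixed with canopy returns at z=15-25.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 20, (400, 2))
z = np.where(rng.random(400) < 0.5,
             rng.uniform(0.0, 0.1, 400),   # ground returns
             rng.uniform(15.0, 25.0, 400)) # canopy returns
mask = grid_min_ground_filter(np.column_stack([xy, z]))
```

Real filters such as CSF additionally model terrain continuity across cells, which is what lets them cope with slopes and vertical stratification.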
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Assessing post-disturbance recovery in European forests using remote sensing data

Authors: Dr. Eatidal Amin, Dino Ienco, PhD Cássio Fraga Dantas, Samuel Alleaume, Dr. Sandra Luque
Affiliations: Inrae, Inria, Univ. Montpellier
In recent decades, European forests have faced an increased incidence of disturbances. This phenomenon is likely to persist, given the rising frequency of extreme events expected in the future. As forest landscapes fulfill a variety of functions and provide numerous services, changes in the severity and recurrence of disturbance regimes could be considered among the most severe climate change impacts on forest ecosystems. Therefore, estimating canopy recovery after disturbance serves as a critical assessment for understanding forest resilience, which can ultimately help determine the ability of forests to regain their capacity to provide essential ecosystem services. This study examines the impact of varying forest fire disturbance frequencies, a key attribute of disturbance regimes, on the recovery of European forests. Forest fire data were acquired from the Copernicus EFFIS service. A remote sensing based approach, using MODIS time series data of a canopy cover structural variable such as Leaf Area Index (LAI), was developed to evaluate recovery dynamics over time, from 2000 to the present, at a spatial resolution of 1 km. Recovery intervals were determined from the tree cover time series as the duration required to reach the pre-disturbance canopy cover baseline, using the previous forest status as a reference. Severity was defined in relative terms, by comparing forest conditions before and after disturbances. The recovery metrics and patterns were analyzed across different countries in Europe, highlighting regional variations and trends. Additionally, this study analyzed severity and recovery indicators in relation to forest species distribution and productivity metrics across Europe, offering valuable insights into the effects of disturbances on the interactions between bundles of ecosystem services.
This work was conducted within the ongoing EU ECO2ADAPT project, funded by Horizon Europe, to develop sustainable forest management practices that enhance biodiversity and resilience in response to the challenges of climate change.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Pan-European Forest Disturbance Alerts Using Sentinel-1 Radar

Authors: Sietse van der Woude, Dr. Johannes Reiche, Dr. Johannes Balling, Prof.Dr.Ir. Gert-Jan Nabuurs, Prof.Dr.Ir. Frank Sterck, MSc. Anne-Juul Welsink, MSc. Bart Slagter, Prof.Dr. Martin Herold
Affiliations: Laboratory of Geo-Information Science and Remote Sensing, Wageningen University, Forest Ecology and Forest Management, Wageningen University, Wageningen Environmental Research, GFZ German Research Centre for Geosciences, Remote Sensing and Geoinformatics Section
Spatially and temporally accurate large-area estimates of forest disturbances in Europe are crucial for forest management, carbon accounting, and biodiversity conservation (FOREST EUROPE, 2020). Current pan-European forest disturbance products are largely based on optical satellite data, providing yearly updates spanning back multiple decades (Hansen et al., 2013; Senf & Seidl, 2021; Turubanova et al., 2023). The reliance of optical systems on cloud-free observations limits their capability to increase monitoring frequencies beyond annual intervals. In addition, they are required to operate at a relatively low spatial resolution (30 m, Landsat archive) to maintain consistency over time. Since the operationalization of the Sentinel-1 constellation in 2014, C-band radar has proven effective for near real-time forest disturbance detection due to its cloud-penetrating capability, high spatial detail, and frequent observations (almost daily for Europe). The RADD alerts demonstrate operational Sentinel-1-based forest disturbance detection for the pan-tropics (Reiche et al., 2021). However, the application in temperate and boreal forests has been limited by the radar’s inherent sensitivity to freezing temperatures, as well as the occurrence of leaf-habit-related seasonal patterns in the time series. These factors can result in false detections if not accounted for: freezing temperatures have been shown to cause similar drops in backscatter as forest disturbances (Mullissa et al., 2024; Olesk et al., 2015), while leaf-habit-related seasonality potentially affects the effectiveness of spatial normalization methods (van der Woude et al., 2024).
This study extends the RADD (RAdar for Detecting Deforestation) alerts to Europe, by mitigating the effects of frozen conditions and leaf habit-related seasonality on the Sentinel-1 C-band radar signal through the integration with ERA5-Land temperature data (Muñoz-Sabater et al., 2021) and Copernicus High Resolution Layers (HRL) forest data (Copernicus Land Monitoring Service, 2021). This allowed us to produce a pan-European dataset of forest disturbances from 2020 onwards at 10 m scale. We used terrain-corrected and speckle-filtered Sentinel-1 backscatter (Mullissa et al., 2021) and derived sum average Grey-Level Co-occurrence Matrix (GLCM) (Balling et al., 2021) values as inputs for a Bayesian updating-based system (Reiche et al., 2021), which we expanded by creating temperature-specific (freezing, transitional, and non-frozen) forest and non-forest probability density functions at a pixel level for a stable historical period (2017-2019). Observations in the monitoring period (2020-) were matched with a corresponding set of probability density functions based on the lapse-rate adjusted ERA5-Land surface temperature at the time of observation. In addition, stratified spatial normalization was performed on each new image by stratifying based on temperature and forest type. We will present validation results, obtained through a stratified random sampling approach (Olofsson et al., 2014) and visual interpretation of high-resolution (3 m) Planet Labs optical imagery. We assess the timeliness by comparing the date of detection to the earliest cloud-free image available. A comparison with key annual forest change datasets (Hansen et al., 2013; Senf & Seidl, 2021; Turubanova et al., 2023) revealed similar patterns overall, with less commission error in disturbance patch edges. Disturbances involving complete removal of canopy cover were accurately detected using a minimum mapping unit (MMU) of 0.1 ha, with minimal commission errors.
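The per-pixel Bayesian updating described above can be sketched as a repeated posterior update of the non-forest (disturbance) probability, where each new backscatter observation is evaluated against the forest and non-forest probability density functions for the matching temperature regime. The Gaussian pdf parameters and confidence threshold below are hypothetical, not the study's calibrated values.

```python
import math

def gauss(mu, sigma):
    """Gaussian pdf factory for a backscatter distribution."""
    return lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update(p_prior, obs, pdf_forest, pdf_nonforest):
    """One Bayesian update of the non-forest probability given a new
    observation and the two conditional pdfs."""
    l_nf = pdf_nonforest(obs)
    l_f = pdf_forest(obs)
    return p_prior * l_nf / (p_prior * l_nf + (1 - p_prior) * l_f)

# Hypothetical non-frozen pdfs for VV backscatter (dB) at one pixel.
pdf_forest = gauss(-8.0, 1.0)
pdf_nonforest = gauss(-13.0, 1.5)

p = 0.1                             # prior disturbance probability
for obs in [-12.5, -13.2, -12.8]:   # three consecutive low-backscatter scenes
    p = update(p, obs, pdf_forest, pdf_nonforest)
flagged = p > 0.975                 # confirm the alert above a threshold
```

In the full system the pdf pair would be swapped per observation according to the lapse-rate-adjusted ERA5-Land temperature (freezing, transitional, or non-frozen), which is what suppresses false alerts from frost-induced backscatter drops.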
Some disturbances involving partial canopy cover removal, such as low-intensity fires, bark beetle damage, and management practices such as thinning, were omitted in our product but detected in the optical-based forest change products. However, the key contribution of this study is detection timeliness: our method consistently detected disturbances earlier than the yearly products, while also providing year-round estimates of disturbance detection dates. We analyzed temporal disturbance patterns across Europe and provide the first remote sensing-based estimate of intra-yearly disturbance occurrence on a continental scale. We highlight opportunities to capitalize on the complementarity of optical and radar sensors by combining our new European RADD alerts with optical-based products.
References:
Balling, J., Verbesselt, J., De Sy, V., Herold, M., & Reiche, J. (2021). Exploring Archetypes of Tropical Fire-Related Forest Disturbances Based on Dense Optical and Radar Satellite Data and Active Fire Alerts. Forests, 12(4), 456. https://doi.org/10.3390/f12040456
Copernicus Land Monitoring Service. (2021). HRL Forest 2018 Product User Manual. European Environment Agency (EEA). https://land.copernicus.eu/pan-european/high-resolution-layers
FOREST EUROPE. (2020). State of Europe’s Forests 2020. Ministerial Conference on the Protection of Forests in Europe - FOREST EUROPE.
Hansen, M. C., Potapov, P. V., Moore, R., Hancher, M., Turubanova, S. A., Tyukavina, A., Thau, D., Stehman, S. V., Goetz, S. J., Loveland, T. R., Kommareddy, A., Egorov, A., Chini, L., Justice, C. O., & Townshend, J. R. G. (2013). High-Resolution Global Maps of 21st-Century Forest Cover Change. Science, 342(6160), 850–853. https://doi.org/10.1126/science.1244693
Mullissa, A., Saatchi, S., Dalagnol, R., Erickson, T., Provost, N., Osborn, F., Ashary, A., Moon, V., & Melling, D. (2024). LUCA: A Sentinel-1 SAR-Based Global Forest Land Use Change Alert. Remote Sensing, 16(12), 2151. https://doi.org/10.3390/rs16122151
Mullissa, A., Vollrath, A., Odongo-Braun, C., Slagter, B., Balling, J., Gou, Y., Gorelick, N., & Reiche, J. (2021). Sentinel-1 SAR Backscatter Analysis Ready Data Preparation in Google Earth Engine. Remote Sensing, 13(10), Article 10. https://doi.org/10.3390/rs13101954
Muñoz-Sabater, J., Dutra, E., Agustí-Panareda, A., Albergel, C., Arduini, G., Balsamo, G., Boussetta, S., Choulga, M., Harrigan, S., Hersbach, H., Martens, B., Miralles, D. G., Piles, M., Rodríguez-Fernández, N. J., Zsoter, E., Buontempo, C., & Thépaut, J.-N. (2021). ERA5-Land: A state-of-the-art global reanalysis dataset for land applications. Earth System Science Data, 13(9), 4349–4383. https://doi.org/10.5194/essd-13-4349-2021
Olesk, A., Voormansik, K., Pohjala, M., & Noorma, M. (2015). Forest change detection from Sentinel-1 and ALOS-2 satellite images. 2015 IEEE 5th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), 522–527. https://doi.org/10.1109/APSAR.2015.7306263
Olofsson, P., Foody, G. M., Herold, M., Stehman, S. V., Woodcock, C. E., & Wulder, M. A. (2014). Good practices for estimating area and assessing accuracy of land change. Remote Sensing of Environment, 148, 42–57. https://doi.org/10.1016/j.rse.2014.02.015
Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N. E., Odongo-Braun, C., Vollrath, A., Weisse, M. J., Stolle, F., Pickens, A., Donchyts, G., Clinton, N., Gorelick, N., & Herold, M. (2021). Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters, 16(2). https://doi.org/10.1088/1748-9326/abd0a8
Senf, C., & Seidl, R. (2021). Mapping the forest disturbance regimes of Europe. Nature Sustainability, 4(1), 63–70. https://doi.org/10.1038/s41893-020-00609-y
Turubanova, S., Potapov, P., Hansen, M. C., Li, X., Tyukavina, A., Pickens, A. H., Hernandez-Serna, A., Arranz, A. P., Guerra-Hernandez, J., Senf, C., Häme, T., Valbuena, R., Eklundh, L., Brovkina, O., Navrátilová, B., Novotný, J., Harris, N., & Stolle, F. (2023). Tree canopy extent and height change in Europe, 2001–2021, quantified using Landsat data archive. Remote Sensing of Environment, 298, 113797. https://doi.org/10.1016/j.rse.2023.113797
van der Woude, S., Reiche, J., Sterck, F., Nabuurs, G.-J., Vos, M., & Herold, M. (2024). Sensitivity of Sentinel-1 Backscatter to Management-Related Disturbances in Temperate Forests. Remote Sensing, 16(9), 1553. https://doi.org/10.3390/rs16091553
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Identifying the Drivers of Biomass Change in European Forests: Insights from Remote Sensing and Geo-Wiki Analysis

Authors: Dmitry Schepaschenko, Dr Myroslava Lesiv, Maria Shchepashchenko, Martina Duerauer, Volodymyr Blyshchyk, Ivelina Georgieva
Affiliations: International Institute for Applied Systems Analysis (IIASA), National University of Life and Environmental Sciences of Ukraine
Forest biomass stock and its changes serve as crucial indicators of ecosystem health, biodiversity, and the effectiveness of climate change adaptation and mitigation efforts. With advancements in remote sensing technology and data processing techniques over the last decade, our ability to monitor these changes has significantly improved. However, the validation of biomass change products and the identification of their underlying drivers remain challenging. The European Space Agency's Climate Change Initiative (ESA CCI) Biomass project provides above-ground biomass (AGB) maps with a spatial resolution of 100 meters, covering the years 2010 to 2020, with plans for broader temporal coverage. This product benefits from the integration of diverse remote sensing instruments, including radar, LiDAR, and optical sensors. The set of maps includes the standard deviation of AGB estimates and a quality flag for AGB changes. However, the set of sensors used for AGB estimation varies from year to year, impacting the reliability of biomass change analysis. To validate and identify the drivers behind these reported biomass changes, we utilised Geo-Wiki.org, a well-established tool for the visual interpretation of high-resolution imagery and vegetation indices. This approach allowed us to verify whether changes detected by CCI Biomass are visible and to categorise the drivers of these changes, including natural regrowth, forest management (planting, thinning, harvesting), natural disturbances (fires, pests, wind, flooding) and land use change. Our methodology applied a random sampling approach, reducing sampling intensity for small and insignificant changes (indicated by a quality flag). This approach enabled the estimation of area changes attributable to specific drivers at a regional level. Our findings indicate that 94% of forest gains in Europe were due to reforestation or natural growth, 6% to afforestation, and 1% to urban expansion, tree crops, and agroforestry.
Analysis of biomass losses revealed that 78% were associated with forest management (harvesting or thinning), 9% with land use change or activities outside of the forest (infrastructure, cropland, tree crops), 8% with wildfires, and 6% with other natural disturbances. A visual inspection of the CCI Biomass change product revealed that approximately half of the reported changes were not verifiable using freely available very high-resolution images and vegetation indices. These discrepancies arise from two primary sources: the invisibility of certain changes in the available images and false detections of changes due to the use of different sensors over the observed period. This work is supported by the European Space Agency's CCI Biomass project (4000123662/18/I-NB).
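The stratified estimation of driver proportions described above can be sketched roughly as follows (in the spirit of Olofsson et al., 2014). The strata, area weights, sample labels, and the helper function are all hypothetical illustrations, not the study's actual data or implementation:

```python
def stratified_proportion(strata):
    """Estimate the area proportion of each change driver from stratified samples.

    strata: list of dicts with 'weight' (stratum area fraction) and
    'labels' (visually interpreted driver labels for that stratum's samples).
    """
    total = sum(s["weight"] for s in strata)
    drivers = {}
    for s in strata:
        w = s["weight"] / total            # stratum area weight W_h
        n = len(s["labels"])
        for d in set(s["labels"]):
            p_h = s["labels"].count(d) / n  # within-stratum driver proportion
            drivers[d] = drivers.get(d, 0.0) + w * p_h
    return drivers

# Hypothetical strata: large changes sampled densely, small/insignificant
# changes (quality-flagged) sampled with reduced intensity.
strata = [
    {"weight": 0.7, "labels": ["harvest"] * 8 + ["fire"] * 2},
    {"weight": 0.3, "labels": ["harvest"] * 3 + ["land_use"] * 2},
]
props = stratified_proportion(strata)
# e.g. props["harvest"] == 0.7 * 0.8 + 0.3 * 0.6 == 0.74
```

Weighting each stratum by its mapped area is what lets a deliberately unequal sampling intensity still yield unbiased regional proportions.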
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Regression-based Subpixel Mapping: Towards Global Forest Monitoring by Vision Transformers

Authors: Tim Landwehr, Prof. Dr. Björn Waske
Affiliations: Institute of Computer Science, Osnabrück University
Forest disturbances are significant contributors to global environmental change, influencing biodiversity, carbon storage, and the hydrological cycle. Disturbances can manifest in various forms, including deforestation, i.e., the conversion of forest to non-forest land, as well as small-scale degradation due to, e.g., selective logging, road construction, resource extraction, and natural events. While many remote sensing studies provide detailed data on deforestation, information on degradation and small-scale disturbances remains limited and variable. However, such small-scale changes may comprise a large portion of the total forest disturbance in an ecosystem. Consequently, there is an urgent need for robust remote sensing-based monitoring systems. Monitoring small-scale forest disturbances, particularly at large scales, is often constrained by limitations in the spatial resolution of the satellite data. To address this limitation, various studies employ subpixel analysis, utilizing techniques such as spectral mixture analysis and regression-based approaches. It has been demonstrated that established decision-based regression models, such as Random Forest regression (RFR), are effective in predicting subpixel fractions for land cover types. Nevertheless, pixel-based image analysis approaches have been superseded by current state-of-the-art techniques, such as convolutional neural networks (CNN), which are capable of learning high-dimensional and complex patterns. Moreover, recent studies show that vision transformers (ViT) can outperform CNNs in image classification due to their ability to model global relationships and exploit intra-relationships at the spectral level. However, the use of these methods in the context of regression and subpixel analysis is still rare.
As CNNs and ViTs require large amounts of training data to optimize parameters and learn complex patterns, we generated synthetic spectral fractions by mixing pure “forest”, "non-forest-vegetation" and "non-vegetation-background" spectra. For selecting spectral references, annual median composites of Landsat 9 imagery were used. Training regression models with synthetically mixed spectra has previously been demonstrated to be an effective approach for RFR. To assess the potential and generalization capability of the proposed ViT, we mapped forest cover fractions in 21 study sites across the globe. These sites cover all types of global forest ecosystems, including boreal, temperate, subtropical and tropical forests, as identified by the Forest Resources Assessment 2020. The prediction of the fractions was carried out using Landsat 9 scenes that were not spatially or temporally correlated with the training data. In light of our hypothesis that the ViT outperforms both the CNN and RFR for predicting forest cover fractions, all three models were employed with standard implementations. A fair comparison was ensured by using the same 1D spatial scale for the fractions in both training and prediction, and by predicting multi-outputs for all three fractions with RFR, CNN and ViT alike. During model training, the ViT and RFR achieved RMSE < 0.09 and R² > 0.9 on excluded test data, whereas the CNN scored significantly lower, with RMSE > 0.15 and R² < 0.7. PlanetScope imagery with a spatial resolution of 3 m was used for visual interpretation and independent validation of the predicted Landsat 9 scenes. Here, the ViT showed significantly improved accuracies compared to the CNN, and slightly better results than the RFR. The proposed approach, using ViT models in combination with synthetically mixed training data, provides the basis for a universal and global model for mapping forest cover fractions.
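The synthetic training-data generation described above can be sketched as a linear mixture of pure endmember spectra. The endmember reflectances, band count, fraction sampling via a Dirichlet distribution, and noise level here are illustrative assumptions, not the study's actual choices:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pure endmember spectra (six Landsat reflectance bands).
endmembers = np.array([
    [0.03, 0.05, 0.04, 0.30, 0.15, 0.08],  # forest
    [0.05, 0.08, 0.10, 0.35, 0.25, 0.15],  # non-forest vegetation
    [0.10, 0.14, 0.18, 0.22, 0.28, 0.25],  # non-vegetation background
])

def synth_mixtures(n, noise_sd=0.005):
    """Linearly mix the endmembers with random fractions that sum to 1."""
    fractions = rng.dirichlet(np.ones(endmembers.shape[0]), size=n)
    spectra = fractions @ endmembers               # linear mixture model
    spectra += rng.normal(0, noise_sd, spectra.shape)  # sensor noise
    return spectra, fractions                      # X: (n, 6), y: (n, 3)

X, y = synth_mixtures(1000)
```

Because the fractions are generated rather than labelled, arbitrarily large regression training sets can be produced for the three target fractions at no annotation cost.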
In the context of regional monitoring systems, the global model can be adapted to specific study regions and their corresponding forest types. In future work, the ViT model will be extended by integrating spatial information.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Mapping Litter and Shrub Biomass Synergising ALS, Multispectral, and SAR data

Authors: João E. Pereira-Pires, Juan Guerra-Hernández, Adrián Pascual, João M. N. Silva, José M. Fonseca, André Mora
Affiliations: UNINOVA, Associated Lab of Intelligent Systems, NOVA School of Science and Technology, Forest Research Centre, Associate Laboratory TERRA, School of Agriculture, University of Lisbon, Department Geographical Sciences, University of Maryland
The mapping of forest biomass stocks is frequently expressed only as Aboveground Biomass Density (AGBD, Mg/ha), despite the living carbon budget being considerably greater when forest soil biomass is accounted for. Forest biomass in the understory and belowground compartments is largely unaccounted for when estimating biomass losses after wildfires or land-use changes (e.g. from forest to agriculture). Therefore, global forest policies and monitoring systems should avoid the bias of describing forest biomass through the AGBD component alone. Despite their importance, there is a lack of knowledge about the large-scale carbon dynamics of the Belowground Biomass Density (BGBD), Litter Biomass Density (LBD), and Shrub Biomass Density (SBD) pools in Europe. The increasing availability and updating of National Forest Inventories and countrywide airborne laser scanning (ALS) data across Europe create new opportunities for synergy with space-based data to map these biomass pools. To address this problem, the Below, Litter & Shrub BIOMASS density mapping combining optical, LiDAR and SAR Earth Observation data project (BLS-BIOMASS) aims to map the BGBD, LBD, and SBD pools in Mediterranean regions. This abstract presents the first results achieved in the framework of this project. A methodology was developed that produces wall-to-wall LBD and SBD maps from countrywide data, exploiting the synergy of multispectral, SAR (dual-frequency backscatter and interferometric data) and ancillary data (Land Cover Land Use – LCLU – and Digital Elevation Model – DEM) through a machine learning regression locally fitted to ALS-based LBD and SBD estimates. The ALS data were acquired through the Portuguese ALS pilot campaign, providing point clouds with point densities ranging from 10.02 to 13.88 points/m2 across six study areas in summer 2020.
The multispectral data correspond to Level-2A products acquired by Sentinel-2 (S2), provided by the Copernicus Data Space Ecosystem. The SAR data were acquired by two different satellites: the Advanced Land Observing Satellite 2 (ALOS-2) and Sentinel-1 (S1). From the former, the Global L-band Mosaic delivered by the ALOS Research Application Project was used. The mosaic contains backscatter Gamma Nought values for the HH and HV polarisations. This product has a spatial resolution of 25 m and provides one observation per year for each area. S1 operates in the C-band, and two different products were used: Ground Range Detected (GRD) and Single Look Complex (SLC). They were acquired in VV and VH polarisations, with a revisit period of six days. The GRD product is provided at a resolution of 20 m. The ancillary data included the Portuguese LCLU map (COS2018) and the DEM acquired by the Shuttle Radar Topography Mission. The six pilot study areas are distributed throughout Portugal. Two of them have more than 75% of their extent covered by coniferous species, one by broadleaf species, and the remaining areas are mixed forest. Finally, the period of analysis was between July and December of 2020. Regarding the methodology, specific equations for LBD and SBD tailored to the main forest types, based on canopy cover and shrub height, were first utilized to map biomass pools using variables derived from ALS data at 25 m resolution. Second, the multi-sensor approach can be divided into four parts: data pre-processing, feature generation, feature processing, and LBD/SBD mapping. The data pre-processing started with bringing all products to the same level. It was defined that the biomass maps would be generated at 25 m spatial resolution, which matches the resolution of the lowest-resolution product, ALOS-2 (with the exception of bands 1 and 9 of S2). Therefore, all products, apart from ALOS-2, were resampled to 25 m.
While for S2 this was the only pre-processing step, the S1 products required more steps. For S1-SLC, the coherence and unwrapped phase were computed, and both were then terrain corrected. For S1-GRD, the gamma and sigma nought backscatter coefficients were computed, a speckle filter was applied (Lee filter with a 7x7 kernel), and terrain flattening and correction were performed. For the ALOS-2 data, only speckle filtering was applied. For all these steps, the implementations in the Sentinel Application Platform (SNAP) were used. In a final step, all bands were reprojected to the coordinate reference system ETRS89/Portugal TM06. From the 22 features generated previously, 250 further features were derived. For the S2 bands, the ALOS-2 and S1-GRD backscatter coefficients, and the S1-SLC coherence and phase, ten texture features were computed using the Grey-Level Co-occurrence Matrix operator available in SNAP. Additionally, for each of the previous sets, the principal components were calculated. For the S2 bands, the five biophysical variables available in the SNAP biophysical processor and 25 spectral indices were also computed. Finally, for the backscatter coefficients, the normalised differences and ratios of the polarisations were also calculated. In the third part of the methodology, feature processing, two methods were applied: temporal compositing and spatial filtering. For S1 and S2, a six-month composite was computed using the mean of the observations acquired during the period of analysis. A 3x3 median spatial filter was then applied to all features, except the ancillary features. The goal is to enhance the spatiotemporal information of the features and reduce their noise. The final part of the methodology consists of mapping the LBD and the SBD. The Extreme Gradient Boosting (EGB) regressor was used, with one regressor implemented for each biomass pool. The EGB was locally fitted in each study area, with samples from the respective ALS estimations as reference.
Note that the final objective of BLS-BIOMASS is to use spaceborne LiDAR data (namely, from GEDI) for the local fitting, since ALS data are often limited and temporally sporadic. Therefore, to replicate the limited availability and sparseness of spaceborne LiDAR data, only ALS samples with a corresponding GEDI footprint were used for the local fitting. For the LB, ALS sample coverage across areas ranged from 0.40% to 3.30%, while for the SB it ranged from 0.39% to 3.24%. For model building, 5-fold cross-validation was used, considering only the samples for fitting. During this, the EGB hyperparameters (namely, the number of estimators and the maximum depth) were optimised. Then, the best 50 features (according to the gain) were selected. After defining the model, it was tested by mapping the entire study areas. The evaluation metrics used were the Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Bias (where negative values correspond to underestimation). Note that for all three, normalised values were also computed using the mean. These metrics were computed using the respective ALS data as reference. The LBD-EGB achieved its best results when using 500 estimators with a maximum depth of 4. The top 50 features included 31 from S2, 5 from ALOS-2, 9 from S1-GRD, 4 from S1-SLC, and the LCLU, with the top feature being S2 band 12. During cross-validation, it achieved an RMSE (%RMSE) ranging from 1.99 to 4.33 Mg/ha (18.30% to 35.09%); an MAE (%MAE) ranging from 1.38 to 3.00 Mg/ha (14.20% to 24.29%); and a Bias (%Bias) ranging from -0.05 to 0.03 Mg/ha (-0.51% to 0.52%). The results achieved by the produced wall-to-wall maps were, in the same order, 2.15 to 4.66 Mg/ha (22.21% to 35.32%); 1.45 to 3.24 Mg/ha (16.19% to 24.56%); and -0.51 to 0.26 Mg/ha (-3.87% to 3.36%). The SB-EGB resorted to 600 estimators with a maximum depth of 5.
In the top 50 features there were 32 from S2, 4 from ALOS-2, 9 from S1-GRD, and 5 from S1-SLC, the top feature again being S2 band 12. The results achieved in cross-validation ranged from 4.48 to 6.34 Mg/ha (30.35% to 46.24%); 3.25 to 5.11 Mg/ha (23.90% to 35.05%); and -0.02 to 0.08 Mg/ha (0.00% to 0.65%). Regarding the produced wall-to-wall maps, the results were, in the same order, 5.06 to 6.86 Mg/ha (32.45% to 44.29%); 3.83 to 5.26 Mg/ha (24.85% to 34.02%); and -0.55 to 0.49 Mg/ha (-2.59% to 4.14%). These results show that the proposed methodology is able to produce reliable wall-to-wall maps of LBD and SBD, despite the fitting datasets covering less than 5% of the areas. Better results were achieved for the LB (on average these maps outperformed the SB maps by 7.56 percentage points (pp)). The Bias results do not show a clear trend of over- or under-estimation. However, from the difference between the %RMSE and %MAE (on average 10.40 pp and 9.24 pp for the LB and SB, respectively), it can be concluded that the resulting maps have difficulties mapping the values least represented in the fitting sets: they under-estimate the higher values and over-estimate the lower ones. This abstract presents the first results achieved in the scope of the BLS-BIOMASS project. It was shown that the proposed methodology successfully mapped LBD and SBD across six different Mediterranean areas, while using ALS datasets with the same limited availability and sparseness as GEDI footprints. The next steps include the replacement of ALS by spaceborne LiDAR and the mapping of the BGBD.
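The model-building loop described above (5-fold cross-validation over the number of estimators and maximum depth, gain-based selection of the top features, then evaluation with RMSE, MAE, and Bias) can be sketched as follows. This uses scikit-learn's GradientBoostingRegressor as a stand-in for the study's Extreme Gradient Boosting implementation, with small synthetic data and a top-5 (rather than top-50) selection purely for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                                 # stand-in feature stack
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.1, 300)    # hypothetical LBD target

# 5-fold CV over the two hyperparameters named in the abstract.
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [2, 4]},
    cv=5, scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
best = search.best_estimator_

# Keep only the most important features, then refit on that subset.
top = np.argsort(best.feature_importances_)[::-1][:5]
refit = GradientBoostingRegressor(random_state=0, **search.best_params_).fit(X[:, top], y)

pred = refit.predict(X[:, top])
rmse = mean_squared_error(y, pred) ** 0.5
mae = mean_absolute_error(y, pred)
bias = np.mean(pred - y)   # negative values correspond to underestimation
# Normalised variants (%RMSE, %MAE, %Bias) would divide each metric by y.mean().
```

The gap the abstract notes between %RMSE and %MAE is a general property of these metrics: RMSE penalises the rarely-sampled extreme values far more heavily than MAE does.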
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Multi-Temporal Sentinel-1 InSAR Coherence for Rapid Detection of Storm Damage

Authors: Tauri Tampuu, Elzė Buslavičiūtė, Mateo Gašparović, Ivan Pilaš, Damir Klobučar
Affiliations: Kappazeta Ltd, Vilnius University, Croatian Forests Ltd, University of Zagreb, Croatian Forest Research Institute
With the increasing impacts of climate change, heavy storms and extreme weather events are expected to become more frequent and intense, posing significant risks to forest ecosystems and affecting forestry operations. This necessitates the development of improved methods for damage assessment and monitoring. Synthetic Aperture Radar (SAR), with its ability to operate under all weather conditions and penetrate cloud cover, could offer a robust solution for such a task. The interferometric SAR (InSAR) coherence of the bistatic single-pass TanDEM-X radar mission has demonstrated its utility for forest height estimation [1]. Also, the potential of the monostatic repeat-pass SAR mission Sentinel-1 (S1) InSAR coherence has been acknowledged [2] but it has not, to the best of our knowledge, been systematically investigated. On July 19, 2023, a severe storm caused significant windthrow damage in forests around Otok (45°09′N, 18°53′E), Croatia. Using ground-reference data and S1 imagery, we investigated the potential of InSAR coherence and SAR backscatter time series to detect and quantify storm-induced forest damage. The study involved 84 forest plots classified by damage levels, ranging from undamaged to severely damaged. Each plot was a 50-meter radius buffer around central coordinates, forming circular areas of 0.8 hectares. The severity of damage in the reference plots was estimated as the percentage of wind-thrown trees and aggregated into damage classes: “No damage” (<10% of trees damaged, 13 plots), “Damage >10%” (10–20% of trees damaged, 18 plots), “Damage >20%” (20–50% of trees damaged, 19 plots), “Damage >50%” (50–80% of trees damaged, 22 plots), and “Damage >80%” (80–100% of trees damaged, 12 plots). For simplicity, we refer to the first two classes as “No to low damage classes” and the latter three as “Moderate to high damage classes”. This aggregation facilitates the interpretation of the results presented below. 
Data from three S1 relative orbits, including both ascending and descending passes (51, 73, and 175), were used, and coherence was calculated between consecutive image pairs. No filtering was applied for backscatter intensity estimation. From the individual coherence images, we calculated the average coherence amplitude for each plot at each timestamp. The data points were then aggregated by damage class and time relative to the storm. The resulting data populations were compared using the Mann-Whitney U Test, with statistical significance assessed at the 0.01 level. Our results demonstrated that S1 coherence in VV polarization was effective in detecting storm damage. The “No to low damage classes” were statistically indistinguishable during the one-month period after the storm. In contrast, the “Moderate to high damage classes” were statistically significantly different from the “No to low damage classes” but did not differ statistically from one another. The largest separation was observed between the “No damage” and “Damage >80%” classes, with a p-value of 4.17e-04. During the three-month period after the storm, all classes differed statistically significantly from one another. No statistically significant differences were observed in VV coherence magnitude among the damage classes during the one- or three-month periods preceding the storm, except between the “No damage” and “Damage >10%” classes during one month before the storm (p-value = 0.007). All “Moderate to high damage classes” differed statistically significantly when coherence values from the one-month periods before and after the storm were compared. The largest separation was observed between the “No damage” and “Damage >50%” classes, with a p-value of 7.10e-08. Conversely, the “No to low damage classes” did not show statistically significant differences before and after the storm. This spatial and temporal pattern confirms the sensitivity of VV coherence to structural changes caused by the storm. 
In contrast, VH coherence and VV and VH backscatter values were found to be less reliable for storm damage detection, as indicated by higher p-values, fewer instances of statistical significance, and lower consistency across damage classes and time periods. Previously, S1 coherence magnitude has been shown to be a stronger predictor than the backscatter coefficient for clearcut detection [3]. Consistent with our results, it has also been shown previously that the backscatter intensity of the damaged forest segments does not change significantly [4]. However, other studies have demonstrated the sensitivity of the backscatter intensity to the severity of storm damage [2], [5], [6]. To the best of our knowledge, our findings provide the first demonstration of the potential of Sentinel-1 VV coherence for the rapid detection of storm-induced forest damage. However, the coherence amplitude values we observed fluctuated close to the noise level, and while the signatures were statistically significant, they were very subtle. Moreover, our study plots were relatively large, whereas storm damage is often spatially localised, which may limit the applicability of InSAR coherence for such scenarios. [1] A. Olesk, J. Praks, O. Antropov, K. Zalite, T. Arumäe, and K. Voormansik, “Interferometric SAR Coherence Models for Characterization of Hemiboreal Forests Using TanDEM-X Data,” Remote Sensing, vol. 8, no. 9, Art. no. 9, Sep. 2016, doi: 10.3390/rs8090700. [2] E. Tomppo, G. Ronoud, O. Antropov, H. Hytönen, and J. Praks, “Detection of Forest Windstorm Damages with Multitemporal SAR Data—A Case Study: Finland,” Remote Sensing, vol. 13, no. 3, Art. no. 3, Jan. 2021, doi: 10.3390/rs13030383. [3] V. Akbari and S. Solberg, “Clear-Cut Detection and Mapping Using Sentinel-1 Backscatter Coefficient and Short-Term Interferometric Coherence Time Series,” IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022, doi: 10.1109/LGRS.2020.3039875. [4] M. Lazecky, S. Wadhwa, M. 
Mlcousek, and J. J. Sousa, “Simple method for identification of forest windthrows from Sentinel-1 SAR data incorporating PCA,” Procedia Computer Science, vol. 181, pp. 1154–1161, Jan. 2021, doi: 10.1016/j.procs.2021.01.312. [5] S. van der Woude, J. Reiche, F. Sterck, G.-J. Nabuurs, M. Vos, and M. Herold, “Sensitivity of Sentinel-1 Backscatter to Management-Related Disturbances in Temperate Forests,” Remote Sensing, vol. 16, no. 9, Art. no. 9, Jan. 2024, doi: 10.3390/rs16091553. [6] M. Rüetschi, D. Small, and L. T. Waser, “Rapid Detection of Windthrows Using Sentinel-1 C-Band SAR Data,” Remote Sensing, vol. 11, no. 2, Art. no. 2, Jan. 2019, doi: 10.3390/rs11020115.
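The per-class significance testing described in this abstract can be sketched with SciPy's Mann-Whitney U implementation. The coherence values below are simulated stand-ins for the per-plot mean VV coherence, not the study's measurements; the class sizes match the abstract's plot counts:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical per-plot mean VV coherence in the month after the storm:
# severely damaged plots lose coherence relative to undamaged ones.
no_damage = rng.normal(0.45, 0.05, 13)   # "No damage" class (13 plots)
damage_80 = rng.normal(0.30, 0.05, 12)   # "Damage >80%" class (12 plots)

stat, p = mannwhitneyu(no_damage, damage_80, alternative="two-sided")
significant = p < 0.01                    # significance level used in the study
```

The Mann-Whitney U test is a natural choice here because it compares the two small plot populations without assuming the coherence values are normally distributed.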
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Classification of dominant tree species and detection of forest decline using Sentinel-2 data

Authors: Agata Hościło, Dr Kinga Kulesza, Aneta Lewandowska
Affiliations: Institute Of Environmental Protection – National Research Institute
The global warming observed over the past decades has increased evapotranspiration and water shortage, which consequently increase the severity and duration of droughts worldwide. These frequent and severe droughts are inducing forest disturbances in Europe. Therefore, the use of satellite-based Earth Observation data, in particular freely available satellite data with high temporal coverage (i.e. the Sentinel-2 constellation), together with advanced remote sensing techniques, provides a cost-effective approach to obtain systematic, wall-to-wall information on forest status and may complement field-based observations by assessing forest condition at higher spatial and temporal resolution. The main aim of this study was firstly to classify the dominant tree species in order to obtain an accurate spatial distribution of tree species, and secondly to analyse forest condition (decline) based on a time series of Sentinel-2 data. It has to be highlighted that the national forest database provides information on the proportion of tree species at the forest stand level; therefore, the analysis of the spatial distribution of species within the stand is essential for monitoring forest status and assessing forest condition. The study was performed over the area of the Regional Directorate of State Forests in Wrocław, Poland. The tree species classification was carried out using mosaics of Sentinel-2 data acquired in the 2022 vegetation season, in spring (May – June), summer (July – August) and fall (September – October), combined with topographic data from the SRTM DEM. Information on homogeneous forest stands from the national Forest Data Bank was used as reference data. The classification was performed using the Random Forest classifier available in Google Earth Engine (GEE). The reference samples were divided into training and validation sets in a proportion of 6 to 4.
We classified nine tree species (four coniferous and five broadleaved species). Among the coniferous species, the highest accuracy was obtained for pine and spruce (above 94%) and fir (73%), and the lowest for larch (62%). The accuracy of the broadleaved species was much lower; among them, the highest accuracy was obtained for beech (89%), alder (85%) and oak (76%), and the lowest for birch (67%) and ash (46%). An independent verification was also performed at the stand level using the national forest database. In the next step, forest condition was assessed over the coniferous species based on a combination of spectral indices calculated from a time series of Sentinel-2 data. The choice of spectral indicators was preceded by an in-depth analysis of their suitability for forest decline analysis. National data on forest sanitary cuts and disturbances were used as reference data for selecting the most suitable indices and for independent verification of the results. The preliminary results showed that the combination of NDVI, NDWI and SAVI gave the most accurate results. A decline in forest condition was observed in many areas of spruce forest as the effect of bark beetle outbreaks, as well as in pine forest as a result of prolonged droughts. However, it has to be remembered that satellite remote sensing provides accurate information on forest losses caused both by planned activities related to the management of commercial forests (e.g. clear cuts, thinning) and by natural disturbance factors (e.g. windstorms, insect outbreaks). This work was supported by the National Science Centre in Poland [grant number 2021/41/B/ST10/04113].
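A minimal sketch of the three indices named above, computed from surface reflectance. The band assignments (Sentinel-2 B4 for red, B8 for NIR, B11 for SWIR), the reflectance values, and the SAVI soil factor L=0.5 are assumptions; note also that NDWI exists in two common variants (Gao's NIR/SWIR and McFeeters' green/NIR), and the abstract does not state which was used, so the Gao form is shown here:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndwi(nir, swir):
    """Normalized Difference Water Index (Gao 1996 formulation, NIR/SWIR)."""
    return (nir - swir) / (nir + swir)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil brightness factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)

# Hypothetical reflectances for a healthy coniferous pixel:
red, nir, swir = 0.04, 0.35, 0.18
values = ndvi(nir, red), ndwi(nir, swir), savi(nir, red)  # roughly (0.79, 0.32, 0.52)
```

Declining stands typically show NDVI and NDWI dropping over the time series as canopies dry and defoliate, which is the signal the index combination exploits.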
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: National tree species mapping using the Danish Forest Inventory, Sentinels, Orthophotos and LiDAR

Authors: Alkiviadis Koukos, Kenneth Grogan, Thomas Nord-Larsen, Nicklas Simonsen
Affiliations: DHI, University of Copenhagen
Monitoring and management of Europe’s forest ecosystems are critically important, particularly as these ecosystems face growing pressures from climate change and human activity. Effective management requires precise, geographically detailed, and up-to-date information on forest attributes, including tree species composition. Sample-based forest inventories serve as the cornerstone of many national forest monitoring programs, providing valuable statistical estimates of forest resources. However, the integration of remote sensing technologies creates opportunities for extrapolation beyond plot-level data to produce high-resolution, national-scale tree species maps, which are essential for sustainable forest management and biodiversity conservation. Time series from Sentinel satellite missions have proven to produce reliable tree-species maps in the European setting and should provide a base dataset for large-scale operational mapping across the continent. In addition, several countries have access to complementary datasets, such as orthophotos and LiDAR campaigns, which can enhance the separation of different tree species by capturing textural and structural details not available from the satellite imagery alone. This study aims to create a national tree species map for Denmark by leveraging satellite data from Sentinel-1 and Sentinel-2 missions, orthophotos, LiDAR data, together with the National Forest Inventory (NFI) database. We utilized all available Sentinel-1 and Sentinel-2 data over a 3-year period to extract phenological information related to tree species. These data were combined with 1) textural features derived from orthophotos and 2) structural attributes from the LiDAR point cloud, such as canopy height. We also refined and integrated auxiliary datasets containing tree species labels to train machine learning models more effectively. 
The primary challenges of this study included processing and integrating large volumes of heterogeneous data from different sources and scales. Additionally, proper understanding and utilization of NFI data were crucial for accurately training the machine learning models used for classification. The final map focuses on the dominant tree species groups within each plot. By combining diverse datasets, this study demonstrates the potential for developing high-accuracy tree species maps that can support forest monitoring and management in Denmark.
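The multi-source feature stack described above can be pictured schematically; every value and the helper function are hypothetical placeholders, not the study's actual predictors:

```python
def stack_features(s1, s2, texture, canopy_height):
    """Concatenate per-pixel predictors from the four data sources named in
    the abstract into one feature vector for a classifier (schematic only)."""
    return list(s1) + list(s2) + list(texture) + [canopy_height]

# hypothetical values: 2 Sentinel-1 backscatter composites (dB),
# 3 Sentinel-2 reflectances, 2 orthophoto texture measures,
# and a LiDAR-derived canopy height (m)
fv = stack_features([-8.1, -14.3], [0.05, 0.09, 0.32], [0.7, 1.4], 23.5)
print(len(fv))  # 8
```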
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Large-scale mapping of tree species in Swedish forests using Sentinel-1 and Sentinel-2 observations

Authors: Abdulhakim M. Abdi, Fan Wang
Affiliations: Lund University
More than two-thirds of Sweden is covered by forest, holding approximately 90 billion trees. Developing an efficient large-scale monitoring protocol for this resource is essential for maintaining ecosystem services and economic viability, as well as for land management and biodiversity conservation. This presentation shows ongoing work on an operational machine-learning mapping protocol that integrates Sentinel-1 and Sentinel-2 observations with field data from the Swedish National Forest Inventory to map seven dominant tree species in Swedish forests – Norway spruce (Picea abies), Scots pine (Pinus sylvestris), birch (Betula spp.), beech (Fagus sylvatica), oak (Quercus spp.), alder (Alnus spp.), and aspen (Populus tremula) – which together represent more than 97% of the standing timber volume in the country. To create a wall-to-wall map of tree species, monthly composites of Sentinel-1 radar backscatter and cloud-free Sentinel-2 multispectral reflectance from 2021–2023 were downloaded from the Google Earth Engine catalog and combined with topographic variables (slope, aspect, elevation) to account for environmental variation and gradients across the Swedish landscape. Data from the Swedish National Forest Inventory comprised 23,140 plots covering the region of Götaland. Plots that had experienced disturbance such as clear-cutting were removed, as were duplicate plots that had been sampled more than once. Pure plots composed of at least 80% of a single species were then kept for further analysis. This resulted in 6,530 plots, which were divided into training (70%) and validation (30%) subsets for input into a machine learning algorithm. After comparing 1-D convolutional neural networks, support vector machines, random forests and extreme gradient boosting, the latter was chosen to perform the classification due to its higher predictive power on the validation dataset. 
Classification accuracy of the tree species map was 78.67% (weighted F1), with precision and recall of 78.49% and 79.61%, respectively. Preliminary results, including a map of the predicted tree species, are presented in a web platform for visualization: www.treespecies.se.
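The weighted F1 reported above can be re-derived from predicted and reference labels; this is a generic re-implementation with toy labels, not the authors' validation data:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with class-support weights, the metric behind
    the 78.67% figure above (illustrative re-implementation)."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += f1 * support[c] / total
    return score

# toy labels for three of the mapped species
y_true = ["spruce", "spruce", "pine", "birch", "pine", "spruce"]
y_pred = ["spruce", "pine", "pine", "birch", "pine", "spruce"]
print(round(weighted_f1(y_true, y_pred), 3))  # 0.833
```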
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Forest structure and biodiversity: Terrestrial laser scanning of micro and macro structural proxies in a recovering Caledonian Pinewood.

Authors: Ellie Kent, Dr Emily Lines, Professor David Coomes
Affiliations: University Of Cambridge
Biodiversity is complex, dynamic and all-encompassing, making it notoriously difficult to measure in absolute terms. Biodiversity proxies are surrogate sample measurements that reflect biodiversity as a whole when a complete measurement is impossible. They are therefore useful tools for land managers, policy makers and planners when attempting to fulfil and evaluate their commitments to biodiversity conservation. Forest structural complexity (FSC), tree-related microhabitats (TreMs) and forest microclimates are widely agreed proxies for biodiversity within a forested environment. However, traditional field inventory methods for obtaining these metrics are time consuming, struggle to capture fine-scale structures and often rely heavily on assumptions. Terrestrial laser scanning (TLS) offers innovative opportunities to obtain detailed metrics describing forest structure in three dimensions. This can be achieved at micro to macro scales and across multiple forest ecosystem levels, often to millimetre accuracy. Recent advancements in computational power and machine learning have eased the processing of high-resolution three-dimensional data, boosting the efficiency of complex analysis. Global forests exemplify the intricate nature of biodiversity. By providing food, shelter and resources to 80% of the world’s terrestrial species, forests are some of the most valuable and biodiverse habitats on Earth. Despite conservation efforts over the past two decades leading to an increase in forest cover in the UK, forest-associated biodiversity continues to decline. Forests now lack species that are integral to ecosystem function, and the UK is positioned among the most nature-depleted countries in the world. 
The Cairngorms Connect project (RSPB) and collaborating landowners in the Cairngorms, Scotland, are compensating for these losses and aim to restore natural processes and ecosystem function, initially through direct management of their Caledonian Pinewood. This includes thinning dense forest, promoting regeneration through herbivore control, creating standing and lying deadwood, and diversifying forest structure. This research, in collaboration with the Centre for Landscape Regeneration (University of Cambridge) and the Cairngorms Connect project, leverages novel techniques to explore the relationship between forest structure and biodiversity across forest plots with varying management techniques and histories. Using commonly agreed biodiversity proxies (FSC, TreMs and forest microclimates) and recently published processing algorithms, this research aims to generate new insights into the relationship between these proxies and forest restoration, the relationships of these proxies with each other, and the practicality of these proxies as surrogates for biodiversity in a recovering Caledonian Pinewood. This research will provide valuable information for conservation woodland management, particularly in the context of long-term ecosystem restoration in Scotland.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Combining optical EO time series data and GEDI full-waveform measurements for the production of annual high-resolution aboveground biomass products for MRV projects

Authors: Markus Immitzer, Kyle S. Hemes, Mathias Kästenbauer, Josue Lopez Ruiz, Clara Rajadel-Lambistos, Saulo Franco de Souza, Nathan Wolff, Clement Atzberger
Affiliations: Mantle Labs Ltd, Amazon Worldwide Sustainability, CIFOR-ICRAF Brazil Amazon
Up-to-date and reliable measurements of forest biomass are essential for forest management. In managed forests especially, terrestrial inventories are well suited to providing reliable information for larger contiguous areas (e.g. national forest inventories). For ARR projects in the tropics, the picture is different, as carbon projects are often implemented on smaller farms scattered over large areas. To cost-efficiently monitor such scattered projects, high-quality monitoring, reporting and verification (MRV) must be based on openly available Earth Observation (EO) data. Alongside real-time monitoring, historical aboveground biomass (AGB) trajectories with high spatial resolution across the region are also essential for tasks like eligibility assessments and selecting appropriate counterfactuals; the latter is needed to establish a dynamic performance benchmark. We introduce an AGB product based on a recently developed foundation model (‘Barlow Twins’) that utilizes advances in self-supervised learning from multi-spectral EO time series. This non-contrastive foundation model condenses all available spectral observations within a year into a few orthogonal, highly informative representations at 10 m (for Sentinel-2) and 30 m (for Landsat 7/8). By integrating sparse Global Ecosystem Dynamics Investigation (GEDI) full-waveform measurements, and without additional fine-tuning, we produce annual AGB maps covering the past twelve years (2013 to 2024). We validated our product against 38 in-situ AGB measurement sites from various agroforestry and restoration age classes and achieved an RMSE of <25 Mg/ha. Compared to five publicly available datasets - most of which lack annual time steps - our method reduces the RMSE by 15 to 55%. We demonstrate the scalability of this approach by producing annual AGB maps for the entire state of Pará, Brazil. 
The data can be directly used for eligibility checks, counterfactual selection, additionality calculations, and digital MRV of carbon projects. The approach is computationally efficient, fully self-supervised, and does not rely on contrastive samples, allowing it to scale globally, even in areas with frequent cloud cover.
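The RMSE used for validation against the 38 in-situ sites is the standard root-mean-square error; the AGB values below are invented for illustration:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error (here in Mg/ha) between predicted and
    in-situ AGB; the abstract reports RMSE < 25 Mg/ha for the product."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# hypothetical AGB values (Mg/ha) at a few validation sites
pred = [110.0, 85.0, 60.0, 140.0]
obs = [100.0, 90.0, 70.0, 150.0]
print(round(rmse(pred, obs), 2))  # 9.01
```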
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Analysis of Canopy Responses of Temperate Forests to Frost and Drought Across Elevational Gradients Using Sentinel-2 Data

Authors: Dr Michele Dalponte, Dr Davide Andreatta, Dr Luca Belelli Marchesini, Dr Loris Vescovo, Dr Damiano Gianelle
Affiliations: Fondazione Edmund Mach
Extreme climate events, such as severe droughts, heatwaves, and late spring frosts, significantly impact forest ecosystems. Their frequency is expected to increase in the next decades, posing a substantial threat to forest sustainability. High-resolution multispectral satellite images with frequent revisit times, such as those from Sentinel-2, offer valuable tools for analysing the impact of such events on forests and monitoring their recovery. This research focuses on two specific extreme climatic events: late spring frosts and summer heatwaves, both of which are projected to become more frequent in temperate and boreal forests. We aim to understand how temperate forest canopies respond to extreme climatic events by examining Sentinel-2 normalised difference vegetation index (NDVI) anomalies and phenological changes. The study is conducted in the forested areas of the Autonomous Province of Trento (approximately 3,000 km²) in the Italian Alps, encompassing a wide altitudinal range that supports Mediterranean, temperate, and boreal species. In this area a late spring frost event occurred in May 2019, and a summer heatwave in July 2022. Starting from the field data collected during the forest inventories by the local forest service we identified forest units characterised by at least 70% coverage of a single species, for a total of 16 tree species. A total of 281,525 sampling points were distributed on a 40 × 40 m grid across the study area, covering the polygons and spanning elevations from 100 to 2,000 metres above sea level. Sentinel-2 satellite images from April 2018 to October 2023 were used to extract NDVI values for each point. NDVI data were interpolated using a Generalised Additive Model to generate daily values for the 2018–2023 period, for the months from April to October. 
Daily average temperatures from 100 weather stations across the region were spatialized using kriging interpolation, to explore the relationship between canopy responses and temperature anomalies. Phenological metrics, including Greenup, Maturity, and Senescence, were calculated for each sampling point using the phenofit R package. A Wilcoxon-Mann-Whitney test was conducted to identify time periods when (i) daily temperatures in 2019 and 2022 were significantly lower or higher than the 2018–2023 average, and (ii) NDVI values in 2019 and 2022 significantly differed from the same baseline. Data were analysed by tree species and elevation intervals. The impact of the May 2019 late spring frost varied by species, elevation, and phenological stage. During this event, temperatures were significantly below the 2018–2023 average for two to three weeks, but the resulting damage was uneven. For Fagus sylvatica, impacts were strongly altitude-dependent: below 1,000 m a.s.l., vegetation was largely unaffected, as leaves were already developed and temperatures were less extreme. Between 1,000 and 1,250 m a.s.l., leaf-out was slowed, delaying the end of Greenup. Above 1,250 m a.s.l., frost delayed all phenological phases due to its occurrence during early leaf development. The 2022 summer heatwave caused prolonged temperature anomalies, with low elevations experiencing the most severe impacts. Species such as Quercus ilex and Castanea sativa showed significant NDVI declines starting mid-July, with no recovery until 2023. Other low-elevation species, such as Fraxinus ornus and Ostrya carpinifolia, began to recover by mid-August. Higher-elevation species, including Fagus sylvatica, often benefited from warmer spring conditions, showing increased NDVI values for the whole 2022 growing period. For Picea abies, NDVI declines were limited to very low elevations (<500 m a.s.l.), where plantations exist outside its natural range. 
Phenological responses to both events revealed clear elevation-dependent patterns for Fagus sylvatica. In the frost year, high-elevation trees (>1,500 m a.s.l.) showed delayed Greenup but normal development after frost due to intact buds. In mid-elevation ranges (1,000–1,500 m a.s.l.), Greenup started on time but ended later, extending its duration. Senescence across elevations was delayed but occurred faster than usual, aligning its end with the 2018–2023 average. During the heatwave year, Greenup was prolonged below 1,000 m a.s.l. but shortened above 1,000 m a.s.l. The heatwave also advanced the start of senescence across elevations, with a shortened senescence period above 750 m a.s.l. This study explored the canopy responses of forest species to late spring frosts and summer heatwaves, revealing strong species- and altitude-dependent patterns. In particular, our findings suggest a larger impact of summer heatwaves on lower-altitude Mediterranean oaks. However, further investigation is needed into the resilience (i.e., recovery time) of the most impacted species after extreme climatic events, to better predict the impact of the increased frequency of such events on forest functioning and stability.
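The NDVI anomalies underpinning the analysis reduce to two simple operations, shown here with made-up reflectance and baseline values (the study additionally interpolates NDVI with a Generalised Additive Model and tests significance with a Wilcoxon-Mann-Whitney test, which this sketch omits):

```python
def ndvi(nir, red):
    """NDVI from near-infrared and red reflectance
    (for Sentinel-2, bands B8 and B4)."""
    return (nir - red) / (nir + red)

def anomaly(value, baseline):
    """Deviation of a daily NDVI value from a multi-year baseline mean,
    a simplified stand-in for the 2018-2023 anomaly screening above."""
    return value - sum(baseline) / len(baseline)

print(round(ndvi(0.42, 0.08), 2))  # 0.68 - a dense, healthy canopy
print(round(anomaly(0.55, [0.70, 0.72, 0.68, 0.70]), 2))  # -0.15
```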
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Extending canopy structure measurements from GEDI and ICESat-2 to savannas

Authors: Mikhail Urbazaev, Dr John Armston, Dr. Steven Hancock, Konrad Wessels
Affiliations: University of Maryland, GFZ Potsdam, University of Edinburgh, George Mason University
Current NASA spaceborne lidar missions – the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) and the Global Ecosystem Dynamics Investigation (GEDI) – acquired science data in parallel from 2019 to 2023. Consistent with mission design and science requirements, the calibration and validation of GEDI's algorithms and products for estimating canopy structure and aboveground biomass have focused on temperate and tropical forests. Similarly, much of the published research on extracting vegetation structure information from ICESat-2 has focused on temperate and boreal forests. The performance of both instruments for estimating vegetation structure parameters is largely unexplored in water-limited savanna ecosystems that are often dominated by sparse and short-statured vegetation. In this study, we evaluate the efficacy of GEDI and ICESat-2 for the estimation of canopy height, canopy cover and Plant Area Volume Density (PAVD) metrics across global savannas. Our analysis is conducted over pilot sites in South Africa and Australia that cover dry, temperate and tropical savannas. We examine measurement accuracy in GEDI and ICESat-2 vegetation structure products by colocation and comparison with equivalent reference estimates simulated from Airborne Laser Scanning (ALS) data. These data are used to validate current GEDI and ICESat-2 canopy height and cover profile metrics, assess the impact of beam and target (canopy-ground reflectance ratio, canopy cover, terrain slope) properties on observed accuracy, and evaluate refinements to the GEDI and ICESat-2 retrieval algorithms. Finally, we evaluate the deconvolution of GEDI waveforms to extend the lower detection limit for short-stature vegetation and increase the vertical resolution of PAVD profiles, thus providing a more direct retrieval of canopy top height and the vertical canopy cover profile. 
We enhanced canopy height estimates from GEDI by: 1) improving the detection of low-amplitude returns from sparse vegetation, which reduces underestimation of canopy height; and 2) detecting short-stature vegetation using waveform deconvolution. Additionally, we reduced bias in GEDI canopy cover estimates by calculating a locally calibrated canopy-ground reflectance ratio. Finally, waveform deconvolution enabled us to increase the vertical resolution of PAVD profiles from 5 m (the default resolution in the original product) to 1 m.
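Waveform deconvolution of the kind described can be illustrated with the classic Richardson-Lucy scheme; this toy sketch shows one standard approach and is not claimed to be the authors' exact GEDI algorithm:

```python
import numpy as np

def richardson_lucy(observed, psf, iters=50):
    """Iterative Richardson-Lucy deconvolution, a common way to sharpen a
    lidar waveform blurred by the transmit pulse (illustrative sketch only)."""
    est = np.full_like(observed, observed.mean())  # flat initial guess
    psf_flip = psf[::-1]
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# toy waveform: two narrow returns blurred by a normalized pulse shape
truth = np.zeros(40)
truth[12] = 1.0
truth[20] = 0.6
pulse = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
blurred = np.convolve(truth, pulse, mode="same")
restored = richardson_lucy(blurred, pulse)
print(int(np.argmax(restored)))  # strongest return recovered near bin 12
```

Sharpening the waveform this way is what allows closely spaced ground and short-vegetation returns to be separated, which motivates the finer 1 m PAVD binning.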
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Characterizing Forest Fragmentation in Bavaria Through Canopy Cover Loss Analysis

Authors: Kjirsten Coleman, Prof. Dr. Claudia Kuenzer
Affiliations: German Aerospace Center (DLR), University of Würzburg
Recent drought conditions and elevated summer temperatures have contributed to widespread canopy cover loss across Germany, weakening forests and increasing their vulnerability to European spruce bark beetle (Ips typographus) infestations. These processes can contribute to forest fragmentation, further exacerbating forest degradation. Fragmentation is characterized by shrinking forest area and increased number of patches and edge exposure. This process creates abrupt abiotic changes such as exposure to light and temperature, reducing forest resilience while sometimes enhancing local biodiversity. However, highly fragmented forests are typically less biodiverse, store less carbon, and are warmer and drier than large contiguous forests. Earth observation (EO) data are a valuable tool for understanding large-scale forest dynamics such as fragmentation. Building on a time-series product by Thonfeld et al., which detects monthly canopy cover loss across Germany (2017–2024), this study investigates forest fragmentation in Bavaria. Using Thonfeld's dataset, we assessed patch characteristics, dynamics, and connectivity to uncover current patterns and development of fragmentation in this region, which is known for extensive forest loss and remains understudied. We aggregated eight years of canopy loss data to generate a forest mask for Bavaria, delineating over 83,000 forest patches. Based on metrics established by FRAGSTATS, we utilized Python libraries to analyze patch size, shape, and distribution, integrating terrain and tree species data for a comprehensive assessment. Results reveal a highly skewed size distribution: extra-small (XS) patches (<24 ha) dominate Bavaria, outnumbering larger patches (small to extra-large) by up to 773 times. 
Although XS patches cover ~192,000 ha compared to ~976,000 ha for extra-large (XL) patches, their similarity in perimeter length indicates that XS patches exhibit a significantly higher edge exposure, with 97.5% of forest area within 50 meters of the edge and 2.5% core-area on average. Perforations, or within-patch gaps, were observed across all patch sizes, with XS patches exhibiting a total perforated area of 3,380 ha compared to 125,184 ha in XL patches. Additionally, XS patches showed higher connectivity within a 200 m buffer, having more neighboring patches than larger categories. Tree species distribution varied across Bavaria; northwestern districts were richer in deciduous species like beech and oak, whereas spruce dominated most other regions, especially larger patches. Moreover, tree species composition was variable across patch sizes. Our findings indicate that forest fragmentation in Bavaria is intensifying, with smaller patches undergoing the most significant changes in area and core-area loss. These results highlight the need for tailored forest management strategies to address fragmentation dynamics for improving forest resilience in Bavaria.
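The core-area versus edge split reported above can be approximated by binary erosion of a forest mask; the patch geometry, pixel size and edge depth below are toy values (the study uses FRAGSTATS-style metrics with a 50 m edge depth):

```python
import numpy as np

def core_fraction(mask, edge_px):
    """Fraction of forest pixels farther than `edge_px` pixels from the patch
    edge, via repeated 4-neighbour binary erosion (simplified FRAGSTATS-style
    core-area metric; illustrative, not the authors' implementation)."""
    core = mask.astype(bool)
    for _ in range(edge_px):
        shifted = [np.roll(core, s, axis=a) for a in (0, 1) for s in (1, -1)]
        core = core & shifted[0] & shifted[1] & shifted[2] & shifted[3]
    return core.sum() / mask.sum()

# toy 10x10-pixel forest patch inside a non-forest frame;
# with 10 m pixels, edge_px=2 would approximate a 20 m edge depth
patch = np.zeros((12, 12), dtype=bool)
patch[1:11, 1:11] = True
print(round(core_fraction(patch, 2), 2))  # 0.36 core, 0.64 edge
```

Small patches lose proportionally far more area to the edge zone per erosion step, which is the geometric reason the XS patches above average only 2.5% core area.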
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: A Deep Learning Approach for Large-Scale Mapping of Trees Outside Forests in Germany

Authors: Moritz Lucas, Dr. Ralf Pecenka, Dr. Hamid Ebrahimy, Prof. Dr. Björn Waske
Affiliations: Osnabrueck University, Leibniz-Institut für Agrartechnik und Bioökonomie e.V.
Trees Outside Forests (TOF) refers to any trees growing beyond the boundaries of forest and other wooded land (FAO 2013). In agricultural landscapes, TOF provide a range of ecosystem services, including but not limited to enhancing biodiversity, improving soil quality through carbon sequestration, mitigating soil erosion by reducing wind velocities, and decreasing surface water runoff. TOF reduce the impact of droughts by increasing water retention, raising soil moisture and lowering local temperatures as well as evapotranspiration, thereby contributing to the climate resilience of landscapes. However, despite their important ecological role, TOF are often overlooked in ecological monitoring, limiting their integration into conservation and land-use planning strategies. Large-scale TOF mapping using medium- and high-resolution satellite data has attracted considerable interest across Europe in recent years (Liu et al. 2023). This information can be used for several purposes, ranging from mapping national carbon stocks to quantifying the contribution of TOF to national canopy cover. More detailed information can be obtained using high-resolution aerial imagery: Malkoç et al. (2021) mapped TOF at the national level in Switzerland from aerial imagery, integrating additional height models and combining different thresholds. Previous studies in TOF mapping focused on two classes, TOF and forest, whereas additional information describing the type of TOF could be used to model their influence on ecosystem services and the environment. Categorizing different TOF classes is a complex task, given diverse influential criteria such as land use (e.g., agricultural, urban), geometric patterns (clump or linear), functions (e.g., windbreaks, shade), and origin (planted or natural). Classification into geometric patterns is widely used because it provides the best opportunity for refinement into finer classes by adding further information. 
Sarti et al. (2021) and Bolyn et al. (2019) have attempted to classify TOF into geometric classes, such as clumps or linear patterns, but these efforts remain geographically limited due to their reliance on user-defined thresholds. In previous work, we proposed a deep learning approach to classify trees into Forest and the geometric TOF classes Patch, Linear, and Tree, using high-resolution aerial imagery. Preliminary results across four distinct German regions showed that the vision-transformer model provided promising results (mIoU: 0.74, F1: 0.84), with only a moderate performance drop when applied to unseen areas (mIoU: -0.12, F1: -0.1) (Lucas et al. 2024). In this study, the approach is extended and adapted to large-scale mapping and classification of TOF in Germany. This high-resolution map of TOF in Germany fills a gap in ecological data that can help policymakers optimize land-use planning, promote agroforestry, and adapt to climate change. As policies increasingly favor sustainable land management, this new TOF classification framework provides actionable insights. In addition, the method can be adapted to medium-resolution satellite imagery, facilitating scalable TOF mapping in areas without access to aerial imagery.
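The mIoU score quoted above is computed per class and averaged; below is a minimal re-implementation with toy pixel labels (the class initials mirror the four mapped classes, but the labels themselves are invented):

```python
def mean_iou(y_true, y_pred, classes):
    """Mean intersection-over-union across classes, the mIoU metric
    quoted above (illustrative re-implementation)."""
    ious = []
    for c in classes:
        inter = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        union = sum(1 for t, p in zip(y_true, y_pred) if t == c or p == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# toy pixel labels: Forest (F), Patch (P), Linear (L), Tree (T)
y_true = ["F", "F", "P", "L", "T", "F", "P", "L"]
y_pred = ["F", "F", "P", "L", "T", "P", "P", "T"]
print(round(mean_iou(y_true, y_pred, ["F", "P", "L", "T"]), 3))  # 0.583
```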
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Exploring Wolverine Colonization in Finnish Forests

Authors: Pinja-Emilia Lämsä
Affiliations: Aalto University
Understanding how forest structure influences species colonization is critical for biodiversity conservation and sustainable forest management, as animal species distributions in forest-dominated landscapes are closely tied to vegetation structure and heterogeneity, which can vary across spatial scales. These factors, combined with landscape connectivity, are crucial for sustaining populations and facilitating animal dispersal, especially in changing environments. The endangered wolverine (Gulo gulo) inhabits remote tundra, alpine and taiga habitats across the Northern Hemisphere. Although habitat selection of wolverines has been explored previously in Fennoscandia, little is known about what kind of habitats wolverines are colonizing. Remote sensing provides a way to assess habitat characteristics across large areas such as entire countries, which is especially helpful when studying elusive large carnivore species with large home ranges. Since their protection in the late 1980s, the Fennoscandian wolverine population has gradually increased, offering an excellent opportunity to investigate the relationship between forest characteristics and colonization dynamics. We studied i) what kind of forest and landscape structures explain colonization and ii) whether the effect differs between spatial scales. Colonization was studied in mainland Finland between two periods: 2009-2010 and 2018-2022. A multi-scale analysis was conducted using snow-track observations from Finnish wildlife and field triangle censuses, along with Finland’s Multi-Source National Forest Inventory (MS-NFI) remote sensing products. We applied generalized linear mixed models (GLMMs) to assess the influence of forest and landscape variables on the probability of colonization at two spatial scales: local (3.13 km) and landscape (20 km). The variables best explaining colonization were landscape fragmentation, tree volume, forest tree species composition, and distance to clearcuts. 
In colonized sites, the forest is less fragmented, tree volume is moderate, forests tend to be dominated by broadleaved trees, and the sites are not in close proximity to fresh clearcuts. The landscape scale was found to be more relevant when studying the impact of overall forest structure on wolverine colonization. Our findings provide new insights into how wolverines respond to forest conditions in boreal landscapes, emphasizing the importance of multi-scale analyses for understanding species colonization processes in heterogeneous environments. This work demonstrates how Earth observation technologies can advance the understanding of forest ecosystems and their role in supporting wildlife populations.
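The fixed-effect part of a GLMM with a binary response acts through a logistic link; the sketch below shows only that link with invented coefficients, omitting random effects and model fitting entirely:

```python
import math

def colonization_prob(intercept, coefs, covariates):
    """Probability of colonization from a logistic linear predictor
    (fixed effects only; all coefficient values here are hypothetical)."""
    eta = intercept + sum(c * x for c, x in zip(coefs, covariates))
    return 1 / (1 + math.exp(-eta))

# hypothetical standardized covariates, in order: landscape fragmentation,
# tree volume, broadleaf dominance, distance to fresh clearcuts
coefs = [-0.8, 0.3, 0.5, 0.4]
print(round(colonization_prob(-0.2, coefs, [1.0, 0.0, 0.5, -0.5]), 3))
```

Higher fragmentation (negative coefficient here) pushes the predicted probability down, mirroring the qualitative pattern reported above.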
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: From Point Clouds to Fuel Maps: Modeling Surface Fuels from 3D Terrestrial Lidar Data

Authors: Miriam Herrmann, Dr. Carlos Cabo, Dr. Fabian Ewald Fassnacht
Affiliations: Institute of Geographical Sciences, Working group for Remote Sensing and Geoinformatics, Freie Universität Berlin, Department of Mining Exploitation and Prospecting, University of Oviedo
1. Introduction
Forest fires are a major disturbance of forest ecosystems, with fire risk determined by weather conditions, topography, and the presence of burnable material (fuels). As climate change alters weather patterns, fire pressure is expected to increase. Among the factors influencing fire danger, fuels are the only directly manageable factor, making fuel mapping crucial for fire prediction and prevention. While much remote sensing research has focused on canopy fuels, the study of surface fuels has been limited. Conventional optical remote sensing from satellite or UAV platforms struggles to penetrate dense canopies, leaving gaps in understanding surface-level dynamics. This project seeks to address this gap by utilizing 3D point cloud data from Terrestrial Laser Scanning (TLS) to map surface fuels in pine-dominated forests, offering alternatives to conventional field sampling methods. Our primary goal is to create detailed fuel maps that identify fuel distribution and accumulation areas, thereby highlighting zones at elevated fire risk. Additionally, we aim to understand the relationship between surface litter patterns and the overlying forest structure.
2. Methods
Field data were collected in 2023 and 2024 across Germany, the Czech Republic, and Spain, focusing on pine-dominated forests. 70 plots (each 30x30 m) were scanned using TLS to capture the 3D structure of the forest. Litter samples were taken from four 18x18 cm subplots per plot, gathering all burnable materials within this area. Samples were dried, sorted into classes categorized by burn characteristics (e.g., leaves, wood, needles, moss), and weighed. The TLS data were analyzed to extract metrics such as tree positions, diameters, crown cover, and vegetation volume across multiple height layers, using the 3DFin plugin in CloudCompare (CloudCompare 2024; Laino et al. 2024) and the lidR package in R (R Core Team 2024; Roussel et al. 2020). 
These metrics will be used to predict surface fuel weights, enabling spatially explicit fuel mapping at the plot level.
3. Preliminary Results
Preliminary analyses using 32 of the 70 plots reveal three key patterns:
- Among the selected fuel classes, moss and leaf litter have the strongest correlations with the 3D structural attributes captured by TLS.
- Metrics describing forest structure at intermediate heights (e.g., vegetation volume at 5-10 meters) show the strongest relationships to litter weights.
- Metrics calculated from larger portions of the point clouds around sample points (e.g., 7 m buffers) have stronger correlations with litter weights than metrics restricted to smaller, localized areas (e.g., 1 m buffers).
These findings suggest that certain fuel classes (like moss and leaves) are easier to model, indicating that fuel maps for these categories may be more reliable than for others (such as needles or wood). Additionally, the structure of the forest at intermediate heights appears to be a key predictor of surface fuels, likely linked to processes that influence fuel deposition and accumulation. Modeling fuels using larger point clouds, encompassing areas up to 7 meters around sample points, seems more effective, hinting that fuel deposition is determined at a larger scale and is influenced by factors beyond the immediate vicinity.
4. Next Steps
Future analyses with all 70 plots will show whether the initial observations can be confirmed. During the next phase we will repeat the analysis with the complete dataset, develop predictive models, and generate detailed surface fuel maps.
5. Outlook
At the upcoming Living Planet Symposium, we will present predictive models and the resulting fine-scale fuel maps, which have potential applications in fire modeling and targeted management strategies. 
Future research will extend from mapping surface fuel weights to cover types using under-canopy orthophoto mosaics and will investigate the feasibility of using Sentinel-2 satellite data as a substitute for TLS-derived inputs. References CloudCompare (2024). Version 2.13.beta. Retrieved from http://www.cloudcompare.org/. Laino, Diego; Cabo, Carlos; Prendes, Covadonga; Janvier, Ro-main; Ordonez, Celestino; Nikonovas, Tadas et al. (2024): 3DFin: a software for automated 3D forest inventories from terrestrial point clouds. In: Forestry: An International Journal of Forest Research, cpae020. R Core Team (2024): R: A Language and Environment for Statis-tical Computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://www.R-project.org/. Roussel, Jean-Romain; Auty, David; Coops, Nicholas C.; Tompalski, Piotr; Goodbody, Tristan R.H.; Meador, Andrew Sánchez et al. (2020): lidR: An R package for analysis of Air-borne Laser Scanning (ALS) data. In: Remote sensing of Envi-ronment 251, S. 112061. DOI: 10.1016/j.rse.2020.112061.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: GeoAI-Driven Insights into Vegetation Dynamics: Decoding Disturbance and Fragmentation Patterns in the Cerrado-Amazon Transition

Authors: Chuanze Li, Dr. Angela Harris, Dr. Polyanna da Conceição Bispo, Dr. Matthew Dennis, Prof. Beatriz Schwantes Marimon, Prof. Ben Hur Marimon Junior
Affiliations: Department of Geography, School of Environment, Education and Development, University of Manchester, Plant Ecology Laboratory, Mato Grosso State University, Campus Nova Xavantina
The Cerrado-Amazon Transition (CAT), the world’s largest tropical ecotone, is a biodiversity hotspot and a critical transition zone between the Amazon rainforest and Cerrado (Brazilian Savanna). However, this region faces escalating ecological pressures, particularly from deforestation (clear-cutting) and fires, which are the dominant disturbance drivers threatening ecosystem integrity. These disturbances not only disrupt vegetation cover but also lead to significant habitat fragmentation, a direct consequence of prolonged and frequent disturbances. Fragmentation further amplifies ecological vulnerabilities by altering landscape patterns, reducing connectivity, and compromising habitat resilience. Here we present research which employs GeoAI techniques—integrating machine learning algorithms and multi-source Earth observation data—to identify and link disturbance events with fragmentation dynamics, providing a comprehensive understanding of the region’s ecological transformations over recent decades. To monitor and classify disturbances, we analysed a 35-year (1985–2020) Landsat time series, applying the LandTrendr algorithm for temporal segmentation and, subsequently, a Residual Neural Network (ResNet) for disturbance classification. We identified four major disturbance types: Amazon forest clear-cutting, Cerrado clear-cutting, Amazon forest fire, and Cerrado fire. We mapped over 384,000 km² of disturbances, with Amazon forest clear-cutting accounting for the largest share (35%). Temporal trends reveal peak disturbance periods during the late 1990s linked to agricultural expansion and policy shifts. Accuracy assessments showed high performance, with classification accuracy ranging from 88% to 93% for individual disturbance types and an overall accuracy of 95%. To assess fragmentation, a suite of seven landscape metrics was calculated at decadal intervals from 1990 to 2020 using MapBiomas land use and cover data.
These metrics quantified changes in landscape composition and configuration, including area, diversity, connectivity, and edge complexity, for Amazon forest and Cerrado ecosystems. Importantly, we integrated our disturbance monitoring results to understand how different types of disturbance influence landscape patterns over time. A multivariate clustering approach categorized landscapes into four levels of fragmentation severity for the year 1990, which we subsequently used to train a random forest model to classify fragmentation severity in subsequent years. Our results reveal that over 270,000 km² of Amazon forests experienced fragmentation, characterized by significant reductions in core area and patch aggregation. Similarly, 139,806 km² of Cerrado savanna were fragmented, showing increased edge complexity and diminished connectivity. Clear-cutting emerged as the dominant driver of patch isolation, while fire-prone areas exhibited irregular edge geometries and reduced structural cohesion, reflecting the compounded impacts of recurrent disturbances. This integrated analysis highlights how different types of disturbance events shape fragmentation dynamics, underscoring the cascading ecological impacts of human activities in the CAT. The findings offer critical insights into the drivers and consequences of landscape transformations, providing a robust scientific foundation for biodiversity conservation and sustainable land-use strategies in tropical ecotones under mounting anthropogenic and climatic pressures.
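The cluster-then-classify step described above (four severity levels defined on the 1990 metrics, then propagated to later decades with a random forest) can be sketched as follows. The seven metrics here are random placeholders rather than MapBiomas-derived values, and the later-decade metrics are simply perturbed copies for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder values for the seven landscape metrics (core area, edge
# density, aggregation, ...) per landscape unit for the 1990 baseline.
n_units, n_metrics = 500, 7
X_1990 = rng.normal(size=(n_units, n_metrics))

# Step 1: multivariate clustering of the 1990 landscapes into four
# fragmentation-severity levels.
severity_1990 = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_1990)

# Step 2: train a random forest on the 1990 labels and apply it to the
# metrics of a later decade (here: slightly perturbed copies of 1990).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_1990, severity_1990)
X_2020 = X_1990 + rng.normal(scale=0.1, size=X_1990.shape)
severity_2020 = rf.predict(X_2020)
```

The appeal of this design is that severity levels only need to be defined once, on the baseline year; the classifier then assigns consistent labels across all subsequent decades.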
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Tree Species Classification Using Time Series of Sentinel-2 Images and Weakly Labelled Data

Authors: Jana Annika Wicklein, Davide Andreatta, Lorenzo Bruzzone, Michele Dalponte, Daniele Marinelli
Affiliations: Research and Innovation Centre, Fondazione Edmund Mach, Dept. of Information Engineering and Computer Science, University of Trento, National Biodiversity Future Center
Mapping tree species plays a significant role in forestry and ecology applications, particularly in heterogeneous forest environments with complex terrain and diverse ecological conditions. The availability of Sentinel-2 (S2) satellite data, with its high spatial (10 m), spectral (13 bands), and temporal resolution (revisit time of 5 days), has revolutionised land-cover mapping. These dense image time series offer an effective framework for automatically classifying tree species over large forested areas because they can capture seasonal phenological changes, which are critical for distinguishing species with similar spectral signatures. However, this process relies on robust and accurate training data, which are often unavailable for large forest areas, where field surveys are impractical due to the time and manual labour required. Previous studies have successfully mapped tree species using methods that focused on the classification of a few species, often relying only on labeled pixels randomly selected from very homogeneous areas or identified on the basis of the average spectral reflectance value. While effective, these approaches are less suitable for complex landscapes with high species diversity and mixed forests, where training data are limited, especially for rare species. To address this limitation, we adapt a method previously used in land-cover classification and test its effectiveness for tree species mapping. This method automatically filters and extracts reliable pixel-level training samples from weak thematic data of relatively homogeneous forest areas. Our approach leverages forest inventory data, which represent tree species distribution as percentages within forestry units. A forestry unit refers to an area, often delineated by natural or administrative boundaries, within which the tree species composition and forest characteristics are recorded.
These forest inventory data are then integrated with S2 satellite imagery to generate a high-quality training dataset for mapping 18 tree species classes in the province of Trento, Italy. To create the classes, minor species frequently co-occurring with overlapping canopies were merged into broader classes, while major species were kept as separate classes. Purity thresholds were then defined for each class to identify almost “pure” forestry units in the forest inventory data, where spectral signatures predominantly represent a specific tree species. From these selected forestry units, S2 data from the summer months of 2019 were sampled. Buffers were created along spatial boundaries to exclude edge regions and minimize their effects during sampling. To improve representativeness while preserving within-class variability, the sampled data underwent unsupervised filtering. This was applied by clustering sampled pixels within each forestry unit using k-means clustering based on the S2 spectral reflectances and keeping only the points from the dominant cluster. Subsequently, a consistency analysis was performed by removing forest units with spectral characteristics far from the distribution of the related tree species class. Finally, the resulting dataset was downsampled by using elevation as a stratification layer to obtain a balanced distribution of classes. To evaluate the effectiveness of the filtering methods, Linear Discriminant Analysis (LDA) was conducted. The results revealed greater centroid distances among classes after filtering, indicating improved class separability. The refined dataset was then employed to train a Support Vector Machine (SVM) model, which has previously proven successful in similar studies, to map the distribution of tree species classes in the province of Trento at 10 m resolution.
The use of the proposed filtering methods improved classification performance, increasing the average cross-validation accuracy from 77.38% to 84.11% and the Kappa statistic from 0.76 to 0.85. Test accuracies for the ten most abundant classes ranged from 75% to 93%. Preliminary validation using an independent set of individually sampled trees yielded accuracies of up to 80% for the most abundant species, though rare species exhibited lower accuracy due to limited training data. The preliminary results highlight the potential of the proposed methodology to address one of the most pressing challenges in large-scale forest mapping: the scarcity of high-quality training data. By leveraging freely available S2 imagery and widely accessible forest inventory data, this approach provides a replicable framework for producing high-resolution, ecologically meaningful tree species maps. The methodology is scalable and adaptable to diverse forest environments and thus represents a valuable tool for supporting automated forest management and ecological conservation.
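The unsupervised filtering step in this workflow, clustering a forestry unit's pixels on their spectral reflectances and retaining only the dominant cluster, can be sketched as below. The spectra are synthetic ten-band stand-ins for S2 reflectances, and the 160/40 split into "pure" and mixed pixels is invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

def keep_dominant_cluster(pixels, k=2, random_state=0):
    """Cluster one forestry unit's pixels on their spectral reflectances
    and keep only those assigned to the largest (dominant) cluster."""
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=random_state).fit_predict(pixels)
    dominant = np.bincount(labels).argmax()
    return pixels[labels == dominant]

# Hypothetical unit: 160 spruce-like spectra plus 40 mixed/edge pixels.
spruce = rng.normal(0.2, 0.02, size=(160, 10))   # 10 synthetic spectral bands
mixed = rng.normal(0.5, 0.05, size=(40, 10))
filtered = keep_dominant_cluster(np.vstack([spruce, mixed]))
```

Because the dominant cluster is kept whole rather than averaged, within-class spectral variability survives the filtering, which is the stated goal of this step.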
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: “Delineation of Riparian Zones” for the Classification of Riparian Forests for Ecosystem Accounting in Germany

Authors: Nicole Habersack
Affiliations: Federal Agency for Cartography and Geodesy
The accurate Delineation of Riparian Zones is essential for ecosystem management and monitoring, especially in the context of ecosystem accounting. Riparian zones and forests play a vital role in biodiversity conservation, as well as in processes such as water regulation and carbon sequestration. However, they are challenging to map due to their dynamic nature and the complexity of land cover transitions between aquatic and terrestrial environments. In Germany, the German Federal Statistical Office (StBA) has been reporting on the extent, condition, and ecosystem services of 74 ecosystems every three years since 2015, in alignment with the international “System of Environmental-Economic Accounting” (SEEA) framework. Determining the presence and extent of Riparian Forests, which is one of the classes in this report, is particularly challenging using conventional methods such as field mapping. Previously, the "Delineation of Riparian Zones" from the "Copernicus Riparian Zones High Resolution Layer" has been used to define the extent of riparian zones. This dataset was combined with land cover data from the German Digital Land Cover Model (LBM-DE) to identify Riparian Forests. Following the discontinuation of Copernicus updates in 2012, the adapted method described in this work relies on open-access data sources to produce continuous updates at three-year intervals for Germany from 2018 onwards. The presented workflow is designed to be cost-effective and modular, using freely available data, such as Sentinel-2 imagery, to achieve national-scale assessments with high spatial resolution (10 m). Various geospatial and remote sensing data are used to derive the product “Delineation of Riparian Zones”. The workflow applies Fuzzy Logic and object-based image analysis to integrate land cover and remote sensing indices, such as NDVI and NDWI, assigning riparian probability values to each area.
The calculation divides the product into Potential Riparian Zones (PRZ), Observable Riparian Zones (ORZ) and Actual Riparian Zones (ARZ). PRZ data, representing the maximum possible riparian extent without human influence, are used from the original Copernicus dataset, while ORZ and ARZ are derived from Sentinel-2 and Germany-specific land cover data (LBM-DE). In contrast to the original model, this approach is configured for operational use by German authorities and is designed to be transferable to other European countries by using e.g. Corine Land Cover data instead of LBM-DE and the Europe mosaic from Sentinel-2 (Sen2Europe). The resulting dataset “Delineation of Riparian Zones” was validated using different national datasets. Firstly, information on the extent of floodplain soils was taken from a national soil overview map (BÜK200) from the German Federal Institute for Geosciences and Natural Resources (BGR). This was then compared with the derived riparian zones, to see whether the prevalent soil types in these areas are indicative of riparian zones or floodplains. The second validation dataset is the Floodplain Status Report from the German Federal Agency for Nature Conservation (BfN), which includes floodplain delineations. The floodplain delineation from the BfN contains similar information to “Delineation of Riparian Zones”, but has a lower level of detail as tributaries and side arms have been excluded from the calculations. The comparison with the Delineation of Riparian Zones therefore looks for similarities and differences. The comparison was made for the year 2021, for which all three datasets are available. The validation shows promising results. The “Delineation of Riparian Zones” is predicted mainly in areas where the soils show characteristics of riparian zones. Any differences found in the comparison with the BfN dataset can be explained by the different methodology and datasets used. 
The renewed “Delineation of Riparian Zones” data product has broad implications for environmental science, policy, and service applications within Germany. It supports the StBA and the Federal Agency for Cartography and Geodesy (BKG) by providing high-resolution data for the assessment of riparian zones and forests, a critical component in ecosystem reporting and flood management. By using open-access data and a sustainable methodology, this approach not only improves spatial accuracy but also provides a transferable model for assessing riparian ecosystems in different regions. To our knowledge, it is currently the only product of its kind: an adapted version of the Copernicus Delineation of Riparian Zones that continues the dataset at the national scale.
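A minimal sketch of the fuzzy-logic combination of remote sensing indices into per-pixel riparian probabilities is shown below. The membership thresholds and the minimum operator are illustrative assumptions only; the operational workflow additionally integrates land cover data and object-based image analysis.

```python
import numpy as np

def fuzzy_membership(x, low, high):
    """Linear fuzzy membership: 0 below `low`, 1 above `high`, ramp between."""
    return np.clip((x - low) / (high - low), 0.0, 1.0)

# Hypothetical per-pixel index values (thresholds are illustrative only).
ndvi = np.array([0.05, 0.45, 0.70, 0.30])
ndwi = np.array([0.60, 0.20, 0.10, 0.40])

veg = fuzzy_membership(ndvi, 0.2, 0.6)   # "vegetated" evidence
wet = fuzzy_membership(ndwi, 0.0, 0.5)   # "water-influenced" evidence

# Fuzzy AND (minimum) combines both lines of evidence into one probability.
riparian_prob = np.minimum(veg, wet)
```

A pixel only scores highly when both pieces of evidence agree; the bare-ground first pixel gets probability 0 despite its high NDWI.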
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Windthrow Automatic Analysis Workflow Using Satellite Imagery and Deep Learning Algorithms - Case study Romania

Authors: Andy-Gabriel Stanciu, PhD Ionut Sandric, PhD Bogdan-Andrei Mihai, PhD Marina Virghileanu, PhD Ionut Savulescu
Affiliations: University of Bucharest
Extreme wind events are among the most significant natural factors influencing forest structure and dynamics worldwide. In Romania, windthrow poses a substantial challenge to forest ecosystems, affecting both their ecological integrity and economic value. This is evident from the numerous events that have taken place in the Southern and Eastern Carpathians in recent years, significantly impacting extensive forested areas. For instance, severe storms in February 2020 affected more than 200 hectares of forest on the southern slopes of the Iezer Mountains, Southern Carpathians. National statistics also show alarming levels of forest damage in the Carpathian counties: Harghita County alone reported an estimated 38,100 hectares affected and 840,000 cubic meters of timber damaged, according to the Brașov Forestry Guard. Other affected counties include Mureș (18,000 ha, 450,000 m³), Covasna (9,000 ha, 64,000 m³), and Brașov (1,400 ha, 2,200 m³), with additional wind damage reported later that month. In the northern part of the Eastern Carpathians, another 14 hectares of forest were damaged in July 2021, when winds uprooted deeply anchored trees and caused extensive destruction, with an estimated 4,000-4,500 cubic meters of timber affected. Our study aims to explore the potential of integrating high-resolution satellite imagery with advanced deep learning techniques to detect and map areas affected by windthrow events. The research focuses on the forests of the Eastern and Southern Carpathians, where dominant species such as Norway spruce (Picea abies) and beech (Fagus sylvatica) are frequently exposed to the impacts of strong storms.
Our study integrates satellite imagery from the Sentinel-2 and PlanetScope missions, with spatial resolutions between 3 and 10 meters per pixel, into a pre- and post-storm image difference approach, providing critical insights into the extent of storm-induced damage. The methodology combines various data processing techniques, starting with image segmentation into smaller pixel regions to facilitate more precise analysis. A critical step is the creation of labeled training data through the manual mapping of windthrow-affected areas. Data augmentation techniques, including image rotation and color balance adjustments, are applied to expand the training dataset and enhance the model's robustness. For the machine learning component, a U-Net convolutional neural network (CNN) is employed. The network is fine-tuned by optimizing hyperparameters and incorporating regularization techniques, such as batch normalization and dropout layers, to minimize overfitting. The trained U-Net model achieved an accuracy exceeding 80% in automatically identifying windthrow areas. This high level of performance demonstrates the model's capacity to deliver rapid and reliable assessments of storm damage. The approach significantly reduces the time and resources required for traditional field-based damage assessments, offering a scalable solution for forest monitoring. Our findings have important implications for sustainable forest management and conservation. By providing precise and timely information on windthrow events, this method supports the development of targeted strategies to mitigate damage, enhance forest resilience, and promote ecosystem recovery. Furthermore, the integration of satellite imagery and deep learning offers a valuable tool for monitoring forest disturbances under the increasing pressures of climate change.
This study underscores the potential of modern remote sensing and artificial intelligence in advancing forest science and conservation in dynamic and vulnerable ecosystems.
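The pre-/post-storm differencing that feeds this workflow can be illustrated with a toy example. A naive fixed threshold stands in for the trained U-Net here, and the NDVI rasters, the simulated damage patch and the 0.3 threshold are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical pre- and post-storm NDVI rasters (values are invented).
pre = rng.uniform(0.6, 0.9, size=(50, 50))   # healthy forest canopy
post = pre.copy()
post[10:20, 10:30] -= 0.5                    # simulated windthrow patch

# Difference image: large NDVI drops flag candidate windthrow pixels.
diff = pre - post
windthrow_mask = diff > 0.3                  # naive stand-in for the U-Net
area_px = int(windthrow_mask.sum())          # damaged area in pixels
```

In the study itself, such difference images are segmented and passed to the trained network rather than thresholded directly; the threshold only conveys the intuition of the differencing step.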
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Post-Disturbance Treatment Effects on Vegetation Recovery in Forests of Central Germany From 3 Years of UAV and Satellite Remote Sensing

Authors: Birgitta Putzenlechner, Simon Grieger, Leon Ramin, Christian Czech, Timo Lehmann, Martin Kappas, Philipp Koal
Affiliations: Georg-August-University, Forest Research and Competence Centre, ThüringenForst
An increase in disturbance agents, mainly drought and insect infestation, has led to widespread forest cover loss across Central Europe. Currently, reforestation and recovery efforts comprise different post-disturbance management approaches, such as leaving deadwood structures and planting different tree species. Given the vast areas affected, there is a need for satellite remote sensing to assess the effects of different treatment strategies at the regional level. We hypothesize differences in vegetation development arising from the different deadwood retention strategies being implemented, ranging from complete clearing of disturbed areas to high stumps (dead trees cut at 2 m height) and remaining deadwood islands. In this regard, satellite-derived biophysical parameters could provide quantitative insights into ecosystem functioning, making them valuable indicators for post-disturbance assessment. However, the question arises whether, and to what extent, the effects of these nuanced treatment options on vegetation recovery are assessable from space via biophysical parameters. We present results on assessing vegetation development with regard to post-disturbance forest management within three hotspot regions of recent die-back of Norway spruce (Picea abies) in Germany (i.e., the Harz Mountains, Thuringian Forest and Thuringian-Vogtlandian Slate Mountains). Within an interdisciplinary effort (project “ResEt-Fi”), different post-disturbance treatments are being investigated and compared to vital stands. To understand treatment effects and the potential to upscale findings by the use of remote sensing, we retrieve biophysical information from Sentinel-2 imagery and carry out extensive in-situ measurements.
The LEAF-Toolbox within GEE is used to retrieve biophysical parameters, i.e., leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR) and fractional vegetation cover (FCOVER). In-situ references of LAI and FAPAR are acquired with light ceptometers during the peak vegetation period. The investigations are accompanied by permanent microclimatic measurements to monitor dynamics of topsoil moisture, soil temperature, and surface as well as near-surface temperature. In addition, combined RGB and thermal UAV campaigns are conducted to retrieve the evolution of vegetation and deadwood cover and the associated surface temperature. Permanent microclimatic observations demonstrate considerable effects of the different management variants, with the highest maximum temperatures (95th percentile) measured on cleared plots and those with high stumps compared to vital stands and remaining standing deadwood, especially in the first year post-disturbance. The relative differences in surface temperature between variants are well reflected in UAV-derived thermal orthophotos, showing smaller differences three years post-disturbance as vegetation cover increases. Preliminary analysis of in-situ biophysical parameters shows a tendency towards higher LAI and FCOVER in plots with high stumps and standing deadwood compared to complete clearings. According to UAV imagery, re-greening dynamics also differ slightly across the treatments. Sentinel-2-derived biophysical variables (FAPAR, LAI, FCOVER) show promising results in terms of overall re-greening dynamics across the study regions. We find the highest agreement with in-situ measurements on cleared plots and those with high stumps. However, the differences observed so far could also be attributed to the length of time since management intervention rather than to the management variants per se, so we plan to incorporate this information into our workflow.
Meanwhile, monitoring continues as successional communities and plantations become more established. Our study demonstrates how forest cover loss and post-disturbance treatment options can be retraced from the plot to the regional level, even though challenges regarding differing spatial scales exist. Next steps include a refined integration of regional LULC information for the retrieval of biophysical parameters, additional in-situ acquisitions, and linking biophysical parameters to species information with regard to the different management variants. Moreover, a comprehensive investigation of bio-geophysical effects is planned, combining biophysical parameters with land surface temperature dynamics. In this regard, the multi-scale integration of remote-sensing-derived Essential Climate Variables holds potential for developing effective strategies to monitor forest restoration success and foster resilient forests in the future.
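Agreement between satellite-derived and in-situ biophysical parameters, as assessed in this study, is typically summarized with bias and RMSE. The sketch below shows that computation on invented paired LAI values, not the project's measurements.

```python
import numpy as np

# Invented paired observations: in-situ ceptometer LAI vs. Sentinel-2 LAI.
lai_insitu = np.array([0.4, 0.9, 1.5, 2.1, 2.8, 3.5])
lai_s2 = np.array([0.5, 0.8, 1.7, 2.0, 3.1, 3.2])

residuals = lai_s2 - lai_insitu
bias = float(residuals.mean())                  # systematic over/underestimation
rmse = float(np.sqrt(np.mean(residuals**2)))    # overall agreement
```

Reporting both numbers matters: a near-zero bias can hide large compensating errors that only the RMSE reveals.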
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Mapping tree height from Sentinel-2 for operational forest monitoring

Authors: Srilakshmi Nagarajan, Lisa Mandl, Prof. Dr. Cornelius Senf
Affiliations: Technical University of Munich, School of Life Sciences, Earth Observation for Ecosystem Management
There are increasing concerns about the carbon sink capacity of Europe’s forests and whether the EU will reach its ambitious goal of an increasing land carbon sink by 2030. To better understand the European land carbon sink, more detailed insights into forest change are needed, as forests across Europe have experienced unprecedented losses of live tree biomass in recent years. Forest monitoring is a key application of remote sensing data and there are many operational forest monitoring systems available. Yet, most of those systems only track changes in tree cover and do not quantify changes in forest structure, despite the importance of forest structure for understanding biomass impacts. There has been progress recently in mapping tree height – an important dimension of forest structure as it directly relates to biomass – from Sentinel-2 data, using convolutional neural networks trained on freely available spaceborne LiDAR data (i.e. GEDI). While those approaches are still experimental, they offer a promising path towards operationally monitoring forest change and, in particular, biomass change. In our project, we will test recently developed algorithms for mapping tree height from Sentinel-2 data across test areas in Germany. To do so, we will build a comprehensive Sentinel-2 data cube that incorporates all available images and interpolates missing observations by applying phenological models. Using the interpolated data, we will generate a set of monthly spectral features, serving as input to a U-Net trained on GEDI data. Our results will give insights into whether AI-based approaches for mapping tree height from Sentinel-2 are feasible for operational forest monitoring, with special emphasis on the carbon-rich forests of Germany.
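The gap-filling idea described above, fitting a phenological model to a pixel's valid observations and predicting the cloudy dates, can be sketched with a first-order harmonic model. The NDVI series, noise level and cloud mask are all synthetic, and a single annual harmonic is an assumption of this sketch; the project's phenological models may be more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic single-pixel NDVI series, one value every 10 days, with cloud gaps.
doy = np.arange(5, 365, 10, dtype=float)
true_ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * (doy - 120) / 365)
obs = true_ndvi + rng.normal(scale=0.02, size=doy.size)
valid = rng.random(doy.size) > 0.3          # ~30% of dates masked as cloudy

# Harmonic "phenological" model: NDVI(t) = a + b*sin(wt) + c*cos(wt),
# fitted by least squares to the valid observations only.
w = 2 * np.pi / 365
A = np.column_stack([np.ones_like(doy), np.sin(w * doy), np.cos(w * doy)])
coef, *_ = np.linalg.lstsq(A[valid], obs[valid], rcond=None)

# Fill the cloudy dates from the fitted model.
filled = obs.copy()
filled[~valid] = A[~valid] @ coef
```

Because the model is fitted per pixel, the interpolated values respect that pixel's own seasonal cycle rather than a regional average.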
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Microclimatic Buffering in Boreal Forests: The Roles of Forest Management and Geographical Context

Authors: Iris Starck, Juha Aalto, Steven Hancock, Julia Kemppinen, Johanna Lehtinen, Miska Luoto, Pekka Niittynen, Tuuli Rissanen, Vilna Tyystjärvi, Sauli Valkonen, Eduardo Maeda
Affiliations: Department of Geosciences and Geography, University of Helsinki, Finnish Meteorological Institute, School of GeoSciences, University of Edinburgh, Finnish Museum of Natural History, University of Helsinki, Department of Biological and Environmental Science, University of Jyväskylä, Faculty of Biological and Environmental Sciences, University of Helsinki, Faculty of Agriculture, University of Bonn, Natural Resources Institute Finland
Forest canopies regulate the exchange of energy between the atmosphere and Earth’s surface, creating microclimate conditions critical for the survival of forest-dwelling species under climate warming. These microclimates are often shielded from open-air temperature variability, but the magnitude of this buffering is strongly influenced by the structure and management of the forest. Boreal forests, especially in Fennoscandia, are extensively managed through recurrent clear-cut rotations, often resulting in young, homogeneous structures. Given that the boreal biome is among the most rapidly warming regions on Earth, these management practices could have profound implications for microclimates, yet their effects remain understudied. Here, we leveraged an extensive dataset of in-situ microclimate observations across 746 locations in Finland, combined with high-resolution forest inventory and other geospatial data, and applied a machine learning model to predict and provide the first country-wide estimate of microclimate temperature buffering (quantified as the regression slope between micro- and macroscale temperatures) in boreal forests. Additionally, we identified which factors were constraining the buffering capacity in different parts of the country. Our findings reveal that the majority of Finnish forests currently do not buffer macroclimatic variability effectively, primarily due to reduced canopy cover, particularly in young forests. The effects of management on microclimate temperature buffering were strongly influenced by biogeographical and latitudinal gradients, with the southern boreal zone showing the most buffered temperatures, while the northern boreal zone exhibited coupled or amplified microclimate temperatures. Based on the limiting factor analysis, temperature buffering was mostly constrained by reduced canopy cover, but in the northern parts of the country, elevation and the amount of incoming radiation were the main limiting factors. 
These results reveal an extensive and previously overlooked impact of forestry on microclimate conditions in boreal forests. Consequently, forest management that prioritizes practices preserving canopy cover has enormous potential to promote buffered microclimate temperatures across the boreal biome.
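The buffering metric used in this study, the regression slope between micro- and macroscale temperatures, can be computed as follows. The paired temperature series are simulated with a known slope of 0.6, so the example recovers a "buffered" value below 1; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

def buffering_slope(micro, macro):
    """Slope of microclimate on macroclimate temperature: < 1 means the
    canopy buffers macro variability, > 1 means it is amplified."""
    slope, _intercept = np.polyfit(macro, micro, 1)
    return float(slope)

# Simulated paired temperatures (degrees C): free-air vs. below-canopy,
# generated with a true slope of 0.6 plus measurement noise.
macro = rng.uniform(-20, 25, size=200)
micro = 2.0 + 0.6 * macro + rng.normal(scale=1.0, size=200)

slope = buffering_slope(micro, macro)
```

A dense closed-canopy plot would yield a slope well below 1, while a clear-cut would drift towards (or past) 1, which is exactly the gradient the country-wide model predicts.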
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Single-tree volume estimation using RayCloudTools based on laser scanning data

Authors: Benjamin Wild, Taşkın Özkan, Florian Pöppl, Norbert Pfeifer, Markus
Affiliations: Research Group Photogrammetry, Department Of Geodesy And Geoinformation, TU Wien
Above Ground Biomass (AGB), which represents the total dry biomass above the ground, is crucial for understanding the global carbon cycle and biodiversity. Recognized as an Essential Climate Variable (ECV) by international organizations, AGB is a fundamental variable for carbon accounting and climate projections. However, accurate AGB estimation remains challenging due to limitations in current methods. Satellite remote sensing is commonly used for large-scale AGB estimation, utilizing proxy variables such as Leaf Area Index, Canopy Height, and Vegetation Water Content. However, these methods require calibration with field data, typically collected during campaigns where tree parameters like tree height, diameter at breast height, and species are used in allometric models to estimate AGB. While numerous allometric models exist, variability across species, regions, and tree sizes remains a challenge. Additionally, these models often fail to adequately represent large or urban trees, and the destructive sampling required for model development is time- and resource-intensive and may even raise ethical concerns. Laser scanning, particularly Terrestrial Laser Scanning (TLS), offers a non-destructive alternative for retrieving AGB via volume estimation. Point clouds obtained from TLS can be used to create Quantitative Structure Models (QSMs) by fitting a hierarchy of cylinders to the 3D point cloud, allowing for AGB estimation. However, established QSMs are prone to overestimating tree volume, especially for smaller tree parts or with increasing scanner-to-tree distance, a limitation that is amplified in large-scale or aerial LS (e.g., UAV-LS) applications. The RayCloudTools (RCT) software toolbox, developed by the Commonwealth Scientific and Industrial Research Organisation (CSIRO), provides a novel methodology, based on Dijkstra’s shortest path algorithm, for volumetric reconstruction of trees.
Preliminary results suggest that RCT can provide reliable volume estimates compared to data obtained from destructively harvested trees. Additionally, RCT shows promising results even for data obtained in challenging settings and environments such as UAV-based Laser Scanning (UAV-LS) data in a dense forest. This study explores the potential of using RCT on laser scanning data for assessing tree volume and consequently AGB at the individual tree level. The outcomes are compared with destructive sampling data, allometric models, and established state-of-the-art QSM solutions. Additionally, this study examines RCT's potential for large-scale, time-efficient AGB estimation, advancing non-destructive methods to improve our understanding of terrestrial carbon dynamics both at the single-tree level and, through upscaling with satellite remote sensing, at regional and global scales.
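The shortest-path idea underlying RayCloudTools can be illustrated with a toy sketch: build a nearest-neighbour graph over the point cloud and compute Dijkstra geodesic distances from the tree base. This uses a random synthetic point cloud and generic scipy/scikit-learn tools, not the actual RCT implementation:

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
# Toy stand-in for a TLS point cloud (units: metres); index 0 is the tree base.
points = rng.uniform(0.0, 5.0, size=(200, 3))
points[0] = (2.5, 2.5, 0.0)

# Connect each point to its nearest neighbours; edge weights are Euclidean
# distances. Symmetrising makes the graph undirected.
graph = kneighbors_graph(points, n_neighbors=8, mode="distance")
graph = graph.maximum(graph.T)

# Geodesic (through-the-crown) distance from the base to every other point.
dist_from_base = dijkstra(graph, indices=0)

# In a full reconstruction, points at similar geodesic distance are grouped
# into segments to which cylinders or other primitives are fitted for volume.
reachable = np.isfinite(dist_from_base)
```

In practice the graph would be built on millions of points with occlusion-aware connectivity; the sketch only shows the shortest-path backbone of the approach.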

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Disturbance and post-disturbance vegetation composition in East Siberian boreal forests estimated from Landsat and Sentinel-2 data

Authors: Oleg Zheleznyy, Prof. Dr. Cornelius Senf, Prof. Dr. Julia Boike, Dr. Katja Kowalski, Dr. Dirk Pflugmacher
Affiliations: Technical University of Munich, School of Life Sciences, Earth Observation for Ecosystem Management, Humboldt University of Berlin, Alfred Wegener Institute for Polar and Marine Research
The boreal biome accommodates almost one third of global forest carbon stocks and is home to a large fraction of the world’s remaining primary forests. Boreal forests globally are facing increasing pressures under climate change, as heatwaves and extreme droughts lead to increased fire activity. Eastern Siberia, despite being one of the major wildfire hotspots, is understudied with respect to climate change impacts on fire activity. Filling this research gap requires long-term, consistent disturbance and recovery data, which can only be acquired from Earth Observation data. In our study, we therefore aimed to: 1. Map forest fire disturbance in Eastern Siberia for the period 1984–2000, not covered by existing disturbance products. 2. Characterize the post-fire vegetation composition and its relationship with disturbance severity and average annual temperatures. We used a Landsat-5 time series to derive year-to-year difference images, which then served as a basis for disturbance mapping. We excluded areas of active forestry operations from the analysis, thus allowing us to focus on wildfire disturbances only. To map the contemporary fractional cover of six main vegetation types, we used Sentinel-2 spectral-temporal metrics and Random Forest regression, trained on a library of synthetically mixed endmember spectra. We achieved good agreement between predicted fractional cover and an independent validation sample, with regression MAE ranging from 8.4% to 14.5%. The vegetation composition was then compared between disturbed areas and the undisturbed surrounding forest, assessing the effects of disturbance severity and average annual air temperatures on post-fire composition. Following a three-decade recovery period, most burned sites were vegetated again. Deciduous broadleaf forests preserved their composition even after high-severity fires, while coniferous trees were commonly replaced by secondary broadleaf species, especially under warmer annual air temperatures. 
In colder mountainous areas and sites with high fire severity, herbaceous and shrubby vegetation surpassed the deciduous trees in the recovery process. Evergreen coniferous species were strongly affected even by low-severity fires, with low post-fire fractional cover, while larch forests showed higher resilience to fire; however, recovery of larch has been poorer in the southern parts of its range. We conclude that increasing frequency of fires may stimulate a gradual shift from coniferous to broadleaf forests in the warmer parts of Eastern Siberia, and more effort should be put into monitoring and supporting native tree species recovery. The ongoing work is focused on improving the disturbance detection algorithms to include other disturbance agents than fires and extend the temporal and spatial domain of our analysis to cover all of Siberia for the period 1984–2024, allowing for a more comprehensive assessment of boreal disturbance and recovery dynamics.
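The regression-based unmixing described above, a Random Forest trained on a library of synthetically mixed endmember spectra, can be sketched as follows. All spectra and fractions here are randomly generated stand-ins, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_bands, n_classes, n_mixtures = 10, 3, 500

# Hypothetical pure endmember spectra, one row per vegetation type.
endmembers = rng.uniform(0.05, 0.6, size=(n_classes, n_bands))

# Synthetic mixing: draw random fractions summing to 1 (Dirichlet), combine
# the endmembers linearly, and add a little noise to build a training library.
fractions = rng.dirichlet(np.ones(n_classes), size=n_mixtures)
mixed = fractions @ endmembers + rng.normal(0.0, 0.01, size=(n_mixtures, n_bands))

# Multi-output regression: predict all class fractions from a pixel spectrum.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(mixed, fractions)

# Evaluate on fresh synthetic mixtures.
test_fracs = rng.dirichlet(np.ones(n_classes), size=50)
pred = np.clip(model.predict(test_fracs @ endmembers), 0.0, 1.0)
mae = float(np.abs(pred - test_fracs).mean())
```

The real workflow mixes measured endmember spectra of the six vegetation types and predicts fractions for every Sentinel-2 pixel; the sketch only illustrates the synthetic-mixing training step.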

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Mapping Tree Species Fractions in Temperate Mixed Forests Using Sentinel-2 Time Series

Authors: David Klehr, Johannes Stoffles, Andreas Hill, Vu-Dong Pham, Sebastian van der Linden, Dr. David Frantz
Affiliations: Trier University, Forest Service Rhineland-Palatinate, University of Greifswald
Forests are important ecosystems and play a crucial role in regulating climate, sequestering carbon, and maintaining ecological balance. Monitoring and mapping of forest stands, including tree species composition, can support forest protection and management. Sentinel-2 imagery provides a viable data source for tree species mapping due to its high spectral, temporal, and spatial resolution. However, particularly in temperate mixed forests, challenges in tree species classification persist, mainly due to the high mixing ratio of tree species, which cannot be fully resolved even with the 10 m resolution of Sentinel-2 data. Additional challenges are associated with the commonly small number of reference samples in forest inventory databases for rare tree species, resulting in low classification accuracy for these species. To address these issues, we propose an approach to map sub-pixel tree species fractions in mixed temperate forests. The method combines dense multispectral Sentinel-2 time series, reconstructed using splines, with the advantages of regression-based unmixing using synthetic data augmentation. This allows for a small number of pure training samples per tree species. For increased model robustness and stabilized predictions, we implemented synthetic mixing and artificial neural network regression as an ensemble approach. We effectively mapped nine tree species fractions for the federal state of Rhineland-Palatinate, Germany, including European beech, sessile and pedunculate oak, and Norway spruce. Mean absolute errors ranged from 2.66% to 15.53%, root mean square errors from 5.75% to 21.42%, and R² values reached up to 0.92 when validated against polygon-based forest inventory data. When comparing the estimated covered area per tree species from the most recent national forest inventory of Germany with the predicted area per species, an R² value of 0.915 was obtained. 
We show that the data augmentation through synthetic mixing allows for a sample size as small as 30 pure pixels per class, to sufficiently distinguish nine tree species, one ‘other species’ class, and ground or background. This minimal sample size substantially increases the operational applicability of the approach, particularly in scenarios with limited reference data for rare species, while still providing accurate and information-rich tree species distributions across large mixed forest areas.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: European-wide forest structure maps and estimates integrating Sentinel-2 and National Forest Inventory data

Authors: Jukka Miettinen, Radim Adolt, Mari Myllymäki, Laura Sirro, Lauri Seitsonen, Johannes Breidenbach
Affiliations: VTT Technical Research Centre of Finland, Forest Management Institute (UHUL), Natural Resources Institute Finland (Luke), Norwegian Institute of Bioeconomy Research (NIBIO)
The Horizon Europe project PathFinder (https://pathfinder-heu.eu/) provides tools and designs participatory processes for supporting evidence-based policies in the EU that foster synergies between forest-based bioeconomy and biodiversity. One of the key objectives of the project is to develop and demonstrate a mapping and estimation platform for EU-wide forest monitoring based on integrated use of field and remotely sensed data. The amount of data available for forest monitoring has been rapidly increasing over the past ten years. The influx of 10-30 m resolution satellite imagery and supporting Earth Observation (EO) based products (e.g. Copernicus pan-European forest High Resolution Layers), together with hundreds of thousands of National Forest Inventory (NFI) plots around Europe, provides unprecedented possibilities for forest structure mapping and estimation in Europe. The PathFinder mapping and estimation platform aims to facilitate effective integrated use of NFI plot measurements with remotely sensed and other datasets to produce precise forest information. The unique aspect of the PathFinder platform is that it enables combined use of the vast datasets of NFI and remote sensing data, while preserving the privacy of field plot locations. Sample plot location privacy is not only a critical precondition for the credibility of permanent sample plots but is also subject to privacy legislation in some countries. The platform enables various estimation approaches and models to be used to provide European-wide forest information with a spatial resolution down to 10 m annually. The system facilitates reporting for the upcoming Forest Monitoring Law (EC 2023) with statistically sound methods at various levels. Key aspects of the system include: 1. Extraction of remote sensing and auxiliary variable values for field plots, ensuring compliance with data restrictions such as confidential plot coordinates. 2. 
Creation of high-resolution forest attribute maps integrating remote sensing and field data. 3. Estimation of forest statistics using NFI field data as the only source of information and in combination with wall-to-wall auxiliaries (typically products of EO data analysis) with the nFIESTA (new Forest Inventory ESTimation and Analysis) tool developed in the H2020 project DIABOLO. 4. Distribution of harmonized results through dedicated distribution channels (e.g. FISE). In this presentation we will 1) provide an overview of the platform concept and its main components and 2) present the European-wide 10 m resolution forest structure maps of the final demonstration of the PathFinder platform. The PathFinder platform concept is based on three main modules: 1) Pre-processing module, 2) Mapping and Estimation module and 3) Results module. Each of these modules is built from interlinked components which can be run on multiple platforms, allowing flexibility in the operational set-up of the system. The final demonstration of the platform is run in the course of 2025. Here we will present the 10 m resolution European-wide forest structure maps created as part of the final demonstration. These maps will later be used together with NFI plot measurements to provide model-assisted estimates for pre-defined administrative areas. The maps provide information on eight key forest structure variables including Diameter, Basal area, Height, Growing Stock Volume, Above Ground Biomass, Conifer proportion and Species count. In addition, species proportions (of biomass) are provided for 10 species groups (1. Betula + Carpinus + Fraxinus, 2. Populus and Salix, 3. Fagus, 4. Quercus, 5. Eucalyptus, 6. Other Broadleaf, 7. Picea, 8. Abies, 9. Pinus and 10. Other Conifers). In addition to the target variable predictions, uncertainty layers with pixel-level standard deviations are provided. 
The mapping is based on 2024 Sentinel-2 imagery and over 150 000 NFI field plots collected from nearly 20 European countries, including all major forest countries. The mapping is conducted with the k Nearest Neighbour method. Based on the pilot maps produced in 2024 (https://zenodo.org/records/13143235), the maps are expected to provide consistent European-wide forest information with very low bias at the continental level, but with considerable pixel-level errors (50-70% of the mean).
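The k Nearest Neighbour mapping step can be sketched as below; the features, plot targets, and dimensions are illustrative stand-ins, not PathFinder data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
n_plots, n_features = 300, 6  # NFI plots x image-derived features (illustrative)

# Hypothetical Sentinel-2 (and auxiliary) feature values extracted at plot locations.
X = rng.normal(size=(n_plots, n_features))

# Hypothetical plot-level targets: height (m), basal area (m2/ha), volume (m3/ha).
y = np.column_stack([
    20 + 5 * X[:, 0] + rng.normal(0, 1, n_plots),
    25 + 4 * X[:, 1] + rng.normal(0, 1, n_plots),
    200 + 40 * X[:, 0] + rng.normal(0, 10, n_plots),
])

# kNN imputation: each pixel receives a distance-weighted mean of the k most
# similar field plots in feature space; plot coordinates never enter the model.
knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X, y)

# Wall-to-wall application: predict for every pixel of a (flattened) feature image.
pixels = rng.normal(size=(1000, n_features))
predicted = knn.predict(pixels)
```

Because only feature values are needed at prediction time, this kind of kNN imputation is compatible with keeping plot coordinates confidential, which is the privacy constraint the platform is built around.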

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: EXPLORING THE POTENTIAL OF EMIT HYPERSPECTRAL DATA FOR ASSESSING FOREST BIOMASS: AN ALPINE CASE STUDY

Authors: Rodolfo Ceriani, Dr. Monica Pepe, Sebastian Brocco, Silvio Oggioni, Giorgio Vacchiano, Renzo Motta, Roberta Berretti, Davide Ascoli, Matteo Garbarino, Donato Morresi, Francesco Fava
Affiliations: Department DISAA, Università degli Studi di Milano, Department of Environmental Science and Policy, Università degli Studi di Milano, Institute for Electromagnetic Sensing of the Environment, Italian National Research Council, Department of Agricultural, Forest and Food Sciences, Università di Torino
Forests in mountainous regions provide important ecosystem services, including carbon sequestration, hydrological regulation, soil stabilization, and habitat for biodiversity. These services are crucial for sustaining the livelihoods of downstream communities, maintaining ecological integrity and mitigating climate change. To maximize these services, it is essential to implement sustainable forest management practices that are tailored to specific regional and environmental conditions. A key factor in selecting and applying these techniques is accurate knowledge of forest volume and biomass, which serve as indicators of carbon stock capacity, and the productivity potential of agroclimatic zones. However, field surveys for these parameters are often costly, time-consuming, and limited in their ability to provide comprehensive spatial data, especially in remote or inaccessible areas. Remote sensing technologies, particularly space-based platforms, offer an unmatched opportunity to measure and monitor forest biomass and volume across vast and challenging mountain landscapes, supplying the data needed to inform effective management strategies. Current forest volume and biomass monitoring systems rely heavily on multispectral sensors, such as the Sentinel-2 and Landsat missions. While these sensors offer valuable data for broad-scale forest assessments, their relatively low spectral resolution can limit the accuracy of biomass and volume estimates. To address these limitations, multispectral data are often combined with airborne or ground LiDAR data which provides detailed measurements of canopy height and forest structure, and radar data (e.g., from Sentinel-1), which offers consistent structural insights. Hyperspectral data have also shown potential to enable more precise forest volume and biomass estimations. 
However, most studies to date have focused only on airborne hyperspectral sensors, leaving the application of last-generation spaceborne hyperspectral data largely unexplored. Understanding the trade-offs between hyperspectral data, with its higher spectral but lower spatial resolution, and widely used multispectral data is key to advancing forest monitoring. This study aims to bridge this gap by exploring the potential of hyperspectral data from the Earth Surface Mineral Dust Source Investigation (EMIT) mission for predicting forest biomass and stand volume in mountainous regions of Italy. Additionally, the study examines the synergy between EMIT’s optical hyperspectral data and canopy height information from the Global Ecosystem Dynamics Investigation (GEDI) mission, assessing how the inclusion of hyperspectral data can enhance prediction accuracy when compared to Sentinel-2 multispectral data. To optimize the retrieval of these variables, we assess the performance of various machine learning (ML) and deep learning (DL) algorithms, aiming to identify the most effective approach for integrating these diverse datasets. Between 2021 and 2024, extensive field campaigns were carried out to collect ground truth data for this study, resulting in 157 forest volume and biomass observations across two distinct study areas: Val Camonica (Brescia) in the Italian Alps and the municipality of Berceto (Parma) in the Apennines. Both regions are marked by high forest heterogeneity and complex topography, which make them demanding test sites for remote sensing techniques. During the field surveys, data were collected within 14-meter radius plots, where tree diameter at breast height (DBH) and canopy height were measured. Forest biomass and volume were then estimated using species-specific and regional-specific allometric equations. For the two study areas, EMIT data acquired on April 12th and 13th, 2024, were downloaded and processed. 
To complement these hyperspectral datasets, Sentinel-2 multispectral data from April 9th, 2024, were also included in the analysis. Additionally, canopy height data from the Global Ecosystem Dynamics Investigation (GEDI) LiDAR sensors were incorporated to provide detailed structural information. From these datasets, spectral data (both hyperspectral and multispectral) and canopy height measurements were extracted, resulting in a comprehensive, space-based-only dataset for each observation point. This dataset includes stand volume, biomass, hyperspectral spectra, multispectral spectra, and canopy height. In the MATLAB R2024b environment, multiple machine learning (ML) models were evaluated for retrieving forest stand volume and biomass. These included Support Vector Machines (SVM), Boosted Regression Trees (BRT), Artificial Neural Networks (ANN), Gaussian Process Regression (GPR), and Partial Least Squares Regression (PLSR). The models were tested using optical data alone and in combination with LiDAR-derived canopy height information. Preliminary results revealed that the most effective input for predicting forest stand volume was a combination of hyperspectral data and LiDAR, with GPR delivering the best performance (R² = 0.75, RMSE = 75.5 m³/ha). For biomass estimation, the optimal input combination consisted of Sentinel-2 multispectral data and LiDAR, also using GPR, achieving an R² of 0.67 and an RMSE of 44.24 Mg/ha. The next step in this study will involve generating maps of forest stand volume and biomass, applying the best models to EMIT and Sentinel-2 acquisitions. This step is critical to assess the spatial coherence and robustness of the models and to examine spatial patterns related to different forest structures and management practices. 
This analysis will help identify the most effective methods for processing hyperspectral imagery in the context of forest management and ensure readiness for the upcoming launch of operational missions like the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME), which will be capable of delivering high-temporal-resolution imagery across the globe. The optimized methods developed in this study will enable the generation of real-time, high-resolution maps, offering critical insights into forest dynamics and supporting more effective and adaptive forest management practices.
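The GPR retrieval described above (run in MATLAB in the study) can be sketched in Python with scikit-learn; all inputs here are synthetic stand-ins for the hyperspectral, canopy-height, and field-plot data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
n_plots = 60  # illustrative; the study used 157 field observations

# Hypothetical predictors: a few spectral features plus GEDI-like canopy height.
spectra = rng.uniform(0.1, 0.5, size=(n_plots, 5))
height = rng.uniform(5.0, 35.0, size=(n_plots, 1))
X = np.hstack([spectra, height])

# Hypothetical stand volume (m3/ha), loosely driven by canopy height.
volume = 15 * height[:, 0] + 50 * spectra[:, 0] + rng.normal(0, 20, n_plots)

# Anisotropic RBF kernel plus a noise term; normalize_y handles the target scale.
kernel = 1.0 * RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, volume)

# GPR also returns a per-prediction standard deviation, usable as an uncertainty layer.
pred, std = gpr.predict(X, return_std=True)
rmse = float(np.sqrt(np.mean((pred - volume) ** 2)))
```

One practical attraction of GPR for this kind of retrieval is the predictive standard deviation, which supports the spatial-coherence and robustness checks mentioned above.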

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Assessing the Consistency of Above Ground Biomass Estimates Derived from Terrestrial Laser Scanning with Varying Instrumentation and Scanning Protocols

Authors: Mr Matthew Scholes, Dr Joanne Nightingale, Professor Mat Disney, Dr Cecilia Chavana-Bryant, Dr Andrew Allen
Affiliations: National Physical Laboratory, University College London, Space Clipper Industries
Nature-based carbon storage through afforestation is recognised as a key requirement for achieving the climate targets outlined in the Paris Agreement [1]. This is demonstrated by the EU proposing to plant over 3 billion trees by 2030 as part of its carbon-neutral commitments [2]. Above-ground biomass (AGB) is an essential climate variable that serves as a key metric for assessing carbon sequestration in forests and is particularly important for verifying carbon credit inventory in the rapidly growing voluntary carbon market (VCM) [3]. Additionally, following the announcement of a centralised carbon credit trading market at COP29, the number of carbon credit projects is anticipated to grow substantially. However, all actors across the VCM, including credit inventory buyers and sellers, face huge challenges in effectively monitoring and validating realised versus claimed carbon sequestration. The need for standardised and reliable methods to validate AGB and associated climate mitigation claims has never been more critical. Over the last decade, Terrestrial Laser Scanning (TLS) has been used to derive AGB and has emerged as the most accurate non-destructive sampling method for providing whole-tree volume estimates which compare directly to destructive measurements [4]. Therefore, this method has not only been used to parameterise allometric models, but can also improve measurement, reporting and verification (MRV) of carbon storage in forests [5]. Additionally, TLS-derived AGB can be used to calibrate and validate satellite data that is used across Europe to monitor changes to forest-based carbon sequestration [6]. Because of the challenges associated with MRV, and the growing VCM, it is expected that there will be an influx of carbon project developers employing this technique to help improve the integrity of carbon markets. 
While this is a positive development, it remains unclear how consistent AGB estimates are across different users due to variations in instrumentation, field sampling and data processing workflows. This study aims to address this challenge by exploring the potential of TLS in an operational domain, examining the impact of operator and instrument workflow on point cloud distributions and final AGB estimates. Two RIEGL scanner models (VZ-400i and VZ-600i) with different scanning settings (e.g., pulse repetition rates, angular resolution) and operator workflows were used to collect data at two UK forest sites. The results will be compared by analysing the two sample point cloud distributions as well as the final AGB estimate at a plot level. The primary analytical tool used in the study is the Wasserstein metric, a statistical distance that quantifies the difference between two probability distributions; in this case, it is applied to assess how the point distributions of point clouds vary across different scanning protocols. This allows the effect of scanning workflow variations on the structure of the point cloud distribution, and subsequently on AGB estimates, to be quantified. The variation in point cloud distribution due to operational and scanning workflow is isolated by comparing the Wasserstein metric to that of a control scan. These include repeated scans using the same instrument model to isolate the environmental effects, in addition to a similar scan with varying alignment to extract the positional variation of the scan. By comparing the Wasserstein distance of the inter-instrument scans to the controls, the variation due to the TLS model can be extracted. Finally, the results can be compared to determine which scanning variable results in the most significant difference in point cloud structure, which can, in turn, be used to identify discrepancies in AGB estimates. 
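The core comparison, computing a Wasserstein distance between point distributions from different scanning protocols and benchmarking it against a control scan, can be sketched in one dimension (e.g. on point-height marginals). The synthetic gamma samples below are stand-ins for real TLS point clouds:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(3)

# Synthetic stand-ins for point-height distributions (metres) from two scans
# of the same plot acquired with different instruments/protocols.
heights_a = rng.gamma(shape=2.0, scale=5.0, size=5000)  # reference protocol
heights_b = rng.gamma(shape=2.0, scale=5.5, size=5000)  # alternative protocol

# 1-D Wasserstein (earth mover's) distance between the two distributions.
w_protocol = wasserstein_distance(heights_a, heights_b)

# Control: a repeat of the reference protocol isolates environmental and
# sampling noise, giving a baseline distance to compare against.
heights_control = rng.gamma(shape=2.0, scale=5.0, size=5000)
w_control = wasserstein_distance(heights_a, heights_control)

# A protocol distance clearly exceeding the control suggests workflow-driven
# differences in point cloud structure (and potentially in AGB estimates).
```

Full 3-D point clouds require a multivariate optimal-transport formulation or per-dimension marginals; the 1-D case above conveys the comparison-against-control logic.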
We hypothesize that larger Wasserstein distances, indicating greater differences in point distributions, will correlate with more significant discrepancies in AGB estimates. This analysis will provide valuable insights into how much variation in TLS derived AGB can be attributed to different scanning workflows and instruments. Quantifying this component of uncertainty in TLS derived AGB carbon estimates will be vital for ensuring robust AGB estimates from plot to regional scale that will ultimately underpin MRV and improve integrity of forest carbon credit projects. [1] Griscom, B. W., G. Lomax, T. Kroeger, J. E. Fargione, J. Adams, L. Almond, D. Bossio, S. C. Cook-Patton, P. W. Ellis, C. M. Kennedy and J. Kiesecker (2019). "We need both natural and energy solutions to stabilize our climate." Glob Chang Biol 25(6): 1889-1890. [2] Deal, EU Green. "The European Green Deal." Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions. Brussels (2020). [3] Porsborg-Smith, A., J. Nielsen, B. Owolabi and C. Clayton. (2024). "The Voluntary Carbon Market Is Thriving." Energy Transition Retrieved 27 February, 2024, from https://www.bcg.com/publications/2023/why-the-voluntary-carbon-market-isthriving. [4] Demol, Miro, et al. "Estimating forest above‐ground biomass with terrestrial laser scanning: Current status and future directions." Methods in Ecology and Evolution 13.8 (2022): 1628-1639. [5] Stovall, Atticus EL, Kristina J. Anderson-Teixeira, and Herman H. Shugart. "Assessing terrestrial laser scanning for developing non-destructive biomass allometry." Forest Ecology and Management 427 (2018): 217-229. [6] Duncanson, L., Armston, J., Disney, M. I. et al. (2021) Aboveground Woody Biomass Product Validation Good Practices Protocol. Version 1.0. In L. Duncanson, M. Disney, J. Armston, J. Nickeson, D. Minor, and F. 
Camacho (Eds.), Good Practices for Satellite Derived Land Product Validation, (p. 236): Land Product Validation Subgroup (WGCV/CEOS), doi:10.5067/doc/ceoswgcv/lpv/agb.001, https://lpvs.gsfc.nasa.gov/PDF/CEOS_WGCV_LPV_Biomass_Protocol_2021_V1.0.pdf

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Integrating PRISMA and Sentinel-2 with biophysical models for monitoring fungal infection and GPP

Authors: Dr Jose Luis Pancorbo, Mr. Paul Mille, Dr. Giandomenico De Luca, Dr. Beniamino Gioli, Dr. Nicola Arriga, Dr. Flor Álvarez-Taboada, Dr. Pieter S.A. Beck, Dr. Kirsten de Beurs, Dr. Carlos Camino
Affiliations: Consiglio Nazionale delle Ricerche—Institute of BioEconomy (CNR-IBE), School of Agrarian and Forest Engineering, Universidad de León, Joint Research Centre (JRC), European Commission, Laboratory of Geo-Information Science and Remote Sensing, Wageningen University and Research
Forests play a crucial role in providing ecosystem services such as carbon storage and water regulation, and they host enormous biodiversity. However, they are increasingly threatened by climate change and biodiversity loss, and affected by changing disturbance regimes. Early detection of disturbances is essential to mitigate their impacts on forest health and biomass. This study focuses on detecting Fomes fomentarius infections in a stone pine (Pinus pinea) stand near an eddy covariance flux tower (IT-SR2) in the Migliarino San Rossore Natural Park, Italy. To detect fungal infections, we retrieved leaf area index (LAI), chlorophyll a and b content (Chl-ab), and leaf water content (EWT) by coupling a classical PROSAIL-PRO model with PRISMA and Sentinel-2 satellite imagery through deep learning-based inversion approaches integrated into the novel R ToolsRTM package. Additionally, we aimed to assess the capacity of Sentinel-2 to predict gross primary productivity (GPP) by integrating key plant traits and red-edge indices with ecosystem flux measurements. We validated our method with field data provided by the ICOS network in the IT-SR2 area. The results showed that LAI estimates derived from Sentinel-2 imagery achieved a root mean square error (RMSE) of 0.47 m²/m², while uncorrected PRISMA imagery yielded an RMSE of 0.8 m²/m². Applying reflectance corrections using Sentinel-2 spectral bands significantly improved the accuracy of PRISMA-derived LAI, reducing the RMSE to 0.41 m²/m². GPP estimates from Sentinel-2 retrievals showed strong alignment with the ICOS tower data, with an R² of 0.82 and an RMSE of 1.27 µmol CO₂ m⁻² s⁻¹. The method effectively estimated EWT, Chl-ab, LAI, and GPP, while detecting fungal infections. Time series analysis revealed clearer patterns for EWT and Chl-ab than for LAI in the presence of infections. Combining Sentinel-2 and PRISMA data improved LAI accuracy, with PRISMA benefiting from Sentinel-2 corrections for off-nadir effects. 
While corrected PRISMA data provided the most accurate LAI, Sentinel-2 data showed the fungal outbreak more clearly due to its higher spatial resolution (20 m) compared to PRISMA (30 m), and the relatively small extent of the outbreak. These findings highlight the value of integrating Sentinel-2 and PRISMA data to minimise uncertainties. The study demonstrates the potential of combining satellite imagery, radiative transfer models and deep learning techniques to monitor GPP and facilitate early detection of forest diseases, ultimately improving forest health and promoting ecosystem sustainability.
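A much-simplified sketch of learning-based RTM inversion, training a regressor on a simulated spectra-to-trait database and applying it to observations, is shown below. The toy Beer-Lambert forward model stands in for PROSAIL-PRO and is not the study's method:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
n_sim = 2000

# Toy forward "model": reflectance in four bands as a saturating function of
# LAI over a random soil background (a crude Beer-Lambert stand-in, NOT the
# real PROSAIL-PRO radiative transfer model).
lai = rng.uniform(0.1, 7.0, size=n_sim)
soil = rng.uniform(0.05, 0.25, size=(n_sim, 4))
canopy = np.array([0.03, 0.05, 0.30, 0.45])   # asymptotic canopy reflectance
gap = np.exp(-0.5 * lai)[:, None]             # canopy gap fraction
reflectance = gap * soil + (1 - gap) * canopy + rng.normal(0, 0.005, (n_sim, 4))

# Learning-based inversion: train a network on the simulated database, then
# apply it to observed spectra to retrieve the trait of interest.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(reflectance, lai)

lai_hat = net.predict(reflectance)
rmse = float(np.sqrt(np.mean((lai_hat - lai) ** 2)))
```

Residual errors grow where reflectance saturates at high LAI, which is one reason hybrid inversion schemes combine multiple traits and spectral regions rather than a single band set.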

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Forest Height Prediction in a German National Park: Comparing a Sentinel-2 ML and Sentinel-1/-2 DL Model

Authors: Gerrit Eisele, Johannes Halbauer, Markus Zehner, Prof. Dr. Alexander Brenning, Dr.-Ing. Clémence Dubois, Prof. Dr. Christiane Schmullius
Affiliations: Friedrich-Schiller-University Jena, Department of Earth Observation, Friedrich-Schiller-University Jena, Department of Geoinformatics, German Aerospace Center (DLR), Institute of Data Science
Accurate estimation of forest canopy height is essential for monitoring ecosystems, assessing biodiversity, and evaluating carbon stocks. This study investigates machine learning (ML) and deep learning (DL) techniques for predicting forest height in the Hainich National Park, Thuringia, Germany, leveraging Sentinel-1 (S1) and Sentinel-2 (S2) time series (covering one year) and using airborne laser scanning (ALS) data as a reference. With complementary information from backscatter and reflectances, we investigated how a very dense time series of S1 (240 VV/VH scenes), the optical bands of S2 (spatial resolution ≤ 20 m), or synergies of both relate to forest height. Thus, three multi-temporal and one single-temporal Random Forest (RF) models were trained (S1, S2, S1+2 combined, and S2-median). The multi-temporal S2-only model achieved the best overall accuracy and therefore was compared to a lightweight transformer-based DL approach, which achieved its best results by combining S1 and S2 data. All models used identical, spatially disjoint datasets for training and testing. The best RF model achieved a mean absolute error (MAE) of 2.98 m and an R² of 85.81 %, while the DL model showed better accuracy with an MAE of 2.42 m and an R² of 90.39 %. The DL model demonstrated enhanced capability in capturing fine-scale spatial variations, particularly in areas with complex terrain, such as slopes and forest edges. Conversely, the RF model effectively predicted canopy height in homogeneous regions using only S2 data. This highlights the potential of RF models for cost-effective computation with excellent out-of-the-box performance. According to a permutation-based feature importance assessment, the S2-based model showed significant improvements from using multi-temporal data compared to the median-based S2 approach. 
S2 acquisitions during times of phenological changes (April/May), showing seasonal leaf developments and changes in chlorophyll content, were critical in improving predictive accuracy. Additionally, S2 scenes containing snow cover also proved important, with the model likely being able to distinguish between forest and non-forest areas. This highlights the advantages of time series data covering a whole phenological cycle instead of only a short time period. This research highlights the potential of combining multi-temporal satellite data with machine learning for canopy height prediction. While DL models offer superior accuracy, their computational cost and time-consuming model design process may limit their applicability in computation-limited scenarios. RF models, in contrast, present a viable alternative, offering robust predictions with significantly lower training times and hardware requirements. Future research should focus on extending these models to diverse forest types and climatic conditions to enhance their transferability. Integrating additional datasets, such as topographic and climatic variables, could further refine predictions and provide deeper insights into forest ecosystem dynamics.
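Permutation-based feature importance, as used above to rank acquisition dates, can be sketched with scikit-learn; the features and canopy heights below are synthetic stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(5)
n_samples = 400

# Hypothetical features: four acquisition dates, of which only the first two
# (think April/May leaf-out scenes) actually carry canopy-height information.
X = rng.normal(size=(n_samples, 4))
canopy_height = 15 + 4 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 1, n_samples)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, canopy_height)

# Permutation importance: shuffle one feature at a time and measure the drop
# in the score; larger drops mean that acquisition mattered more to the model.
result = permutation_importance(rf, X, canopy_height, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
```

In the study's setting, each "feature" would be a band-date combination from the dense S2 time series, and the importances are interpreted against the phenological calendar.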
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: From Beams to Biomass: Developing TLS-Based Allometric Models Incorporating Size-to-Mass Scaling in Australian Tropical Forests.

Authors: Geike De Sloover, Dr. Louise Terryn, Zane Cooper, Dr. Miro Demol, Andrew Ford, Bert Gielen, Luna Soenens, Kim Calders
Affiliations: Q-ForestLab – Laboratory of Quantitative Forest Ecosystem Science, Department of Environment, Ghent University, PLECO – Research Group Plants and Ecosystems, Faculty of Science, University of Antwerp, Sylvera, Independent Researcher
Forests are vital to the global carbon cycle, acting as significant carbon sinks through photosynthesis and biomass accumulation (Pan et al., 2011). However, climate change poses a significant threat to the carbon storage capacity of forests (Hubau et al., 2020; Pan et al., 2011). Gaining a comprehensive understanding of how climate change affects carbon storage dynamics, especially in tropical forests, is essential for developing effective climate mitigation strategies. Although Australia's tropical rainforests cover only a small fraction of the world's rainforest area, they are ecologically significant, hosting unique biodiversity. The Wet Tropics of Queensland, a UNESCO World Heritage Area, includes the Australian Tropical Monitoring Network (ATMN). The area is unusual in that it is the only rainforest expanding despite growing threats from climate change (Baker et al., 2020; Turton, 2017). Despite this expansion, altered rainfall patterns and rising temperatures are causing higher tree mortality, drastically changing forest carbon dynamics (Bauman et al., 2022). To better understand these effects on carbon storage and improve our ability to quantify and monitor forest carbon dynamics, we leverage advanced technologies like terrestrial laser scanning (TLS). TLS has demonstrated its ability to accurately estimate tree volume in tropical forests based on quantitative structure models (QSMs). Additionally, volumetric estimates obtained through TLS enable non-destructive sampling of aboveground biomass (AGB) across the entire size range of trees. From June to August 2024, we collected TLS data from sixteen plots (0.5-1 ha) within the ATMN, totalling ten hectares and approximately ten thousand trees. The sites span altitudinal (15–1,200 m a.s.l.) and rainfall (1,236–3,470 mm) gradients. We aim to develop and introduce site-specific allometric models for AGB in these plots.
While this presentation focuses on the ATMN, future objectives include extending these methodologies to the ICOS network in Europe. This will enable the comparison of carbon storage dynamics across tropical and temperate forests, offering a broader understanding of tree biomass scaling and advancing global AGB estimation frameworks. Traditional allometric models often assume tree size-to-mass scale-invariance and are rarely site-specific, especially in understudied forests, but these assumptions may not hold across the full size range of trees and sites and can result in significant errors in estimating carbon (Calders et al., 2022; Terryn et al., 2024). We will evaluate the validity of this assumption and, if necessary, implement dynamic allometric models that account for the size-dependent relationship between stem biomass and AGB. This approach will enable more accurate AGB estimates, both historically and prospectively, and a better understanding of carbon dynamics at the individual tree level.

References
Baker, A. G., Catterall, C., Benkendorff, K., & Fensham, R. J. (2020). Rainforest expansion reduces understorey plant diversity and density in open forest of eastern Australia. Austral Ecology, 45(5), 557–571. https://doi.org/10.1111/AEC.12871
Bauman, D., Fortunel, C., Delhaye, G., Malhi, Y., Cernusak, L. A., Bentley, L. P., Rifai, S. W., Aguirre-Gutiérrez, J., Menor, I. O., Phillips, O. L., McNellis, B. E., Bradford, M., Laurance, S. G. W., Hutchinson, M. F., Dempsey, R., Santos-Andrade, P. E., Ninantay-Rivera, H. R., Chambi Paucar, J. R., & McMahon, S. M. (2022). Tropical tree mortality has increased with rising atmospheric water stress. Nature, 608(7923), 528–533. https://doi.org/10.1038/s41586-022-04737-7
Calders, K., Verbeeck, H., Burt, A., Origo, N., Nightingale, J., Malhi, Y., Wilkes, P., Raumonen, P., Bunce, R. G. H., & Disney, M. (2022). Laser scanning reveals potential underestimation of biomass carbon in temperate forest. Ecological Solutions and Evidence, 3(4), e12197. https://doi.org/10.1002/2688-8319.12197
Hubau, W., Lewis, S. L., Phillips, O. L., Affum-Baffoe, K., Beeckman, H., Cuní-Sanchez, A., Daniels, A. K., Ewango, C. E. N., Fauset, S., Mukinzi, J. M., Sheil, D., Sonké, B., Sullivan, M. J. P., Sunderland, T. C. H., Taedoumg, H., Thomas, S. C., White, L. J. T., Abernethy, K. A., Adu-Bredu, S., … Zemagho, L. (2020). Asynchronous carbon sink saturation in African and Amazonian tropical forests. Nature, 579(7797), 80–87. https://doi.org/10.1038/s41586-020-2035-0
Pan, Y., Birdsey, R. A., Fang, J., Houghton, R., Kauppi, P. E., Kurz, W. A., Phillips, O. L., Shvidenko, A., Lewis, S. L., Canadell, J. G., Ciais, P., Jackson, R. B., Pacala, S. W., McGuire, A. D., Piao, S., Rautiainen, A., Sitch, S., & Hayes, D. (2011). A large and persistent carbon sink in the world's forests. Science, 333(6045), 988–993. https://doi.org/10.1126/science.1201609
Terryn, L., Calders, K., Meunier, F., Bauters, M., Boeckx, P., Brede, B., Burt, A., Chave, J., da Costa, A. C. L., D'hont, B., Disney, M., Jucker, T., Lau, A., Laurance, S. G. W., Maeda, E. E., Meir, P., Krishna Moorthy, S. M., Nunes, M. H., Shenkin, A., … Verbeeck, H. (2024). New tree height allometries derived from terrestrial laser scanning reveal substantial discrepancies with forest inventory methods in tropical rainforests. Global Change Biology, 30(8), e17473. https://doi.org/10.1111/GCB.17473
Turton, S. M. (2017). Expansion of the tropics: revisiting frontiers of geographical knowledge. Geographical Research, 55(1), 3–12. https://doi.org/10.1111/1745-5871.12230
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Detection of Forest Clearcuts Across Sweden Using a Multi-Scale Feature Pyramid Network Operating on Sentinel-1 Data

Authors: Ole Martin Christensen, Debaditya Roy, Yuxia Wang, Richard Hilton, Haopeng Li
Affiliations: Qamcom Research And Technology
Monitoring clear-cut areas in Swedish forests is primarily conducted using data from optical satellites, notably ESA's Sentinel-2 series. However, obtaining useful optical data is challenging during winter, due to low solar angles, and during cloudy periods in summer. Therefore, developing methods to utilize radar data from Sentinel-1 would be highly beneficial for the continuous monitoring of Swedish forests. Moreover, effectively monitoring forests using C-band radar data remains a substantial unsolved challenge. As leaves, twigs, and small stems are the primary scattering sources (unlike longer-wavelength radars, for which the total biomass is the scattering source), differentiating trees from ground clutter, undergrowth, wheel tracks, or other background scattering sources is difficult. Developing a model that successfully integrates satellite radar data, and that can be scaled and applied across Sweden, thus presents considerable technical and scientific challenges. In this study, we present results indicating that a state-of-the-art multi-scale feature pyramid network can overcome these difficulties. The network operates on two images, each approximately 1 km x 1 km, collected at different times. By training the network on this differential data, it learns about background conditions to a much greater extent than by estimating clear-cuts directly from single images, promising higher accuracy in detecting new clear-cuts. Additionally, to increase robustness against changes in background conditions (such as weather and seasonal variations), the method incorporates statistical properties, such as the mean and variance of the radar data from the entire region (up to 100 x 100 km), into its inference process. In collaboration with the Swedish Forest Agency, this method has been implemented and tested across several regions in Sweden. Initial test results indicate an accuracy comparable to traditional optical clear-cut retrievals.
More significantly, it can successfully detect clear-cuts during the wintertime when cloud cover and poor lighting conditions would otherwise hinder such monitoring. This capability offers new opportunities for effective forest management and ensures compliance with environmental regulations even under challenging conditions.
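The normalization of the differential network input by region-wide statistics described above might look like the following minimal numpy sketch; the helper name and all values are hypothetical, as the abstract does not publish the actual preprocessing:

```python
import numpy as np

def normalize_with_regional_stats(patch_t0, patch_t1, region):
    """Standardize two co-registered SAR patches by the mean/variance of the
    wider region, making the change signal more robust to weather- and
    season-driven shifts in overall backscatter (hypothetical helper)."""
    mu, sigma = region.mean(), region.std()
    return (patch_t0 - mu) / sigma, (patch_t1 - mu) / sigma

rng = np.random.default_rng(0)
# Toy backscatter (dB) over a large region (up to 100 x 100 km in the study).
region = rng.normal(loc=-12.0, scale=2.0, size=(1000, 1000))
patch_t0 = region[:100, :100]                                      # ~1 km tile, time t0
patch_t1 = patch_t0 + rng.normal(scale=0.2, size=patch_t0.shape)   # same tile, time t1

n0, n1 = normalize_with_regional_stats(patch_t0, patch_t1, region)
diff = n1 - n0  # differential signal: the network sees changes, not absolutes
```

Standardizing by regional rather than per-tile statistics keeps a genuine region-wide change (e.g. a wet snowfall) visible instead of being normalized away.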
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Optimized Forest Characterization and Monitoring Services

Authors: Romy Schlögel, Nicola Dimarco, Benoit Bartiaux, Louis Andreani, Elsy Ibrahim
Affiliations: Spacebel
Detecting tree characteristics is labor-intensive, time-consuming, and costly, yet the need for precise and scalable solutions is urgent. Addressing climate change and ensuring forest resilience necessitate detailed and accurate mapping to identify the ecosystems most capable of withstanding environmental pressures. To enhance the accessibility, usability, and integration of Earth Observation (EO) data for monitoring forest ecosystems, we developed an approach built on a database for tree detection and species classification based on 25 cm orthophotos in Wallonia, comprising over 100,000 labeled trees across 10 commercially significant species, enriched with four additional classes for young conifers and mixed-species categories. These datasets were derived from multi-temporal RGB orthophoto imagery provided by the Department of Nature and Forests of Wallonia. Deep learning models for tree detection and species classification were trained using representative datasets, achieving solid performance metrics: precision, recall, and F1-score for tree detection reached 0.922, 0.828, and 0.870 (IoU@0.5), respectively, while species classification yielded weighted averages of 0.945, 0.938, and 0.941. Challenges encountered included the difficulty of distinguishing tree crowns in dense plots or under suboptimal imaging conditions, confusion between beech and oak trees, subjectivity in manually digitized ground-truth data, and variability in monoculture representation. These limitations are being addressed through ongoing database enhancement and iterative model refinement. The processing pipeline, optimized with GPU acceleration (NVIDIA RTX A2000), enabled efficient workflows with a total runtime of 31 minutes to detect and classify approximately 325,000 trees over 1,000 hectares, drastically reducing production costs. The process is automated using Docker containers configured to leverage the GPU.
Using only parcels as input data, the workflow automatically selects the relevant images on our servers and preprocesses them for the deep learning models, which detect and classify each tree according to its species. Finally, parcel-level statistics are computed, such as tree count, percentage per species, tree density, and detection and classification scores. In addition to our EO integration in operational systems, our web interface (https://eoregions.spacebel.be/) integrates the products within its dedicated service catalogue and improves the user experience. Key advancements include the integration of species distribution visualizations per parcel and a confidence mapping layer for result interpretation. These features enhance the usability and interpretability of the models for end-users. Outreach activities have demonstrated the value of our updated services to stakeholders, while public and private end-users have provided useful feedback for making the service valuable to forest management communities.
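The detection metrics quoted above (precision, recall, F1 at IoU@0.5) can be computed with a simple greedy box matcher; this is a generic sketch of the metric, not the project's evaluation code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def detection_scores(pred, truth, thr=0.5):
    """Greedy one-to-one matching at IoU >= thr, then precision/recall/F1."""
    unmatched = list(truth)
    tp = 0
    for p in pred:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thr:
            tp += 1
            unmatched.remove(best)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: one correct detection, one false positive, one missed crown.
truth = [(0, 0, 10, 10), (20, 20, 30, 30)]   # reference crowns
pred = [(1, 1, 10, 10), (50, 50, 60, 60)]    # detections
p, r, f1 = detection_scores(pred, truth)
```

Benchmark evaluations usually match highest-scoring detections first; the plain greedy order here is a simplification.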
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Advancing Trait Mapping of Congo Basin Secondary Forests Using Multispectral and Hyperspectral Satellite Imagery

Authors: Sacha Delecluse, Pierre Defourny, Dr. Marijn Bauters
Affiliations: UCLouvain-Geomatics, Ghent University-CAVELAB
Central African forests play a vital role in the global carbon cycle and, on average, store more carbon and are more efficient in carbon uptake than the Amazon forests. They also have a high biodiversity value and are a source of livelihood and ecosystem services for numerous communities. Despite this, the forests of Central Africa remain understudied compared to the Amazon and Southeast Asia. The unique dominance of smallholder shifting agriculture as a driver of disturbance also increases uncertainty regarding the forests' future. This activity is poorly monitored and is associated with cycles of forest regrowth and clearing, creating a complex landscape mosaic of primary forest, secondary forest, and agricultural fields. As slash-and-burn practices are expected to increase with demographic growth in the region, understanding forest recovery and the dynamics of secondary forest is essential, especially for accurate carbon accounting. Recent developments in satellite remote sensing allow for unprecedented monitoring of tropical moist forest in terms of dynamics, regrowth, typology, and functional traits. C-band SAR images and high-resolution multispectral images are made available through the Copernicus programme. More recently, hyperspectral imagery from EnMAP allows the Congo Basin to be studied across a new range of wavelengths. Extensive studies on forest morphological and physiological traits have been conducted using field inventories from logging concessions and study plots throughout the basin. These works have shown that the forests of Central Africa follow the classical inverse-J shape in diameter distribution, with an increase in basal area and tree height and a decrease in stem density throughout the forest succession. Above-ground carbon also increases during succession, but its recovery was shown to be slower than anticipated.
Among physiological traits, wood density increases while specific leaf area decreases; other leaf traits, such as phosphorus and nitrogen content and C:N ratio, remain less understood. While detailed, these studies are limited spatially due to the constraints of field inventory. Conversely, studies using satellite remote sensing are spatially extensive but lack the detailed view given by field measurements, or are limited by an insufficient spatial resolution for the mosaic landscape of secondary forest. In this study, we evaluate the potential of high-resolution multispectral (Sentinel-2) and hyperspectral (EnMAP) imagery for quantifying key morphological and physiological traits of secondary forests along a successional gradient in the Congo Basin. Using a dataset of 60 chronosequence plots (secondary forest stands with documented ages), we explore the capability of satellite-based spectral data to capture and predict forest characteristics typically assessed through ground-based inventories. By correlating the spectral signatures from EnMAP with specific vegetation traits such as above-ground biomass, canopy height, leaf area index, and foliar biochemical properties, we identify spectral indicators of structural and physiological development. The spectral analysis focuses on identifying the drivers of the shifts in reflectance patterns as the forest matures, linking spectral characteristics to ecological changes along the successional trajectory. Preliminary results reveal a notable decrease in forest reflectance with age across the entire spectrum, indicating a darkening trend in the canopy as secondary forests mature. Forests aged 50-80 years show spectral signatures similar to those of primary forests, despite still exhibiting differences in structural traits, particularly above-ground biomass.
This study highlights the potential of combining high-resolution multispectral and hyperspectral data to provide spatially extensive and detailed insights into secondary forest structural and functional dynamics. Improving the ability to monitor secondary forest traits and their recovery trajectories contributes directly to more accurate carbon modeling, particularly in regions where shifting agriculture dominates. Such advancements are essential for refining projected changes in the regional carbon balance of the Congo basin and informing policymakers to implement adequate land management strategies.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Assessing Disturbance Regimes Based on High-Resolution Biomass Observations

Authors: Siyuan Wang, Dr. Hui Yang, Dr. Sujan Koirala, Felix Cremer, Prof. Dr. Matthias Forkel, Prof. Dr. Markus Reichstein, Dr. Nuno Carvalhais
Affiliations: MPI-BGC, TU Dresden, Peking University, Universidade Nova de Lisboa, ELLIS Unit Jena
Forests are critical components of the global carbon cycle, sequestering atmospheric carbon dioxide and mitigating climate change. However, disturbances such as wildfires, droughts, insect outbreaks, and human activities disrupt this balance, weakening forests’ carbon sink capacity. These disruptions pose significant challenges to accurately modeling and predicting carbon dynamics, which is crucial for reducing uncertainties in terrestrial ecosystem projections. This study investigates the potential of predicting disturbance regimes—landscape-level characteristics of disturbance events—using machine learning-based disturbance simulations and biomass satellite observations. Specifically, we examine the extent (μ), frequency (α), and intensity (β) of disturbance events and their correlation with biomass statistics. A strong link between disturbance regimes and biomass was identified through simulation, with machine learning proving effective in capturing this relationship (Wang et al., 2024). However, observed discrepancies between simulated and observed biomass statistics highlight the significance of spatial structure and the risk of model extrapolation. Extending the experimental design, by expanding parametric ranges and disturbance shapes, to narrow down discrepancies while maintaining predictive capabilities, led to exploring spatial aggregation as an observation operator. Aggregated biomass statistics were subsequently used to generate global patterns of disturbance regimes, offering a robust and comprehensive understanding of their spatial variability. We further explore the concept of overlap ratio, to quantify discrepancies between simulated and observed biomass patterns, and of area of applicability, to guide the selection of aggregation kernel sizes and the quantification of prediction uncertainty. 
Overall, our preliminary findings indicate that prediction uncertainty varies across global landscapes and is influenced by attributes such as vegetation type and climate conditions. Our study evaluates the feasibility of using realistic biomass observations to predict disturbance regimes, emphasizing the critical role of remote sensing biomass data in representing the stochastic nature of disturbances. Further analysis of the factors controlling regional to global variations in disturbance regimes paves the way for understanding and quantifying the spatio-temporal representation of disturbance regimes in carbon cycling models. These improvements are vital for enhancing predictions of forest carbon dynamics from annual to decadal scales, ultimately supporting effective climate change mitigation strategies.
Reference: Wang, S.; Yang, H.; Koirala, S.; Forkel, M.; Reichstein, M.; Carvalhais, N. (2024). Understanding disturbance regimes from patterns in modeled forest biomass. Journal of Advances in Modeling Earth Systems, 16(6), e2023MS004099.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Country-scale tree species classification with Machine Learning in Hungary and Poland

Authors: Dr Tamás Molnár, Dr Ewa Grabska-Szwagrzyk
Affiliations: Hungarian Forest Research Institute, Jagiellonian University
Information on tree species composition is crucial, as tree species react differently to rapidly changing environmental conditions, requiring precise spatial separation. For large regions, such as country-scale analyses, this can be achieved time- and cost-efficiently by utilizing the latest remote sensing data and cloud computing platforms. In our research, the objective was to develop cloud-based methods for forest classification in Hungary and Poland. While both countries represent temperate forest ecosystems, they differ in terms of climate, topography, and species composition. Polish forests are dominated by Scots pine, followed by oaks (pedunculate and sessile), silver birch, and alder (black and grey), while Hungarian forests are dominated by pedunculate oak, sessile oak, Turkey oak, European beech, European hornbeam, Norway spruce, Scots pine, black pine, black locust, and poplars. Both monitoring approaches utilised high-resolution Sentinel-2 imagery from the European Space Agency for forest species classification. The Google Earth Engine (GEE) cloud platform facilitated cloud-based preprocessing and classification of large spatial datasets. Here, we used the Random Forest algorithm to map forest composition. In Hungary, the ten dominant tree species were classified using 50,000 randomly selected sample points, deriving species information from the Hungarian National Forestry Database. In Poland, approx. 400,000 sample pixels based on the Polish Forest Data Bank and visual analysis of high-resolution ortho imagery were used. High accuracies were achieved (80%), although they varied considerably across regions and species. In very heterogeneous forest stands in particular, the classification accuracies were lower. In addition, conifers (spruce, pine, fir) were classified relatively easily in monocultures, while mixed deciduous stands with poplar, lime, and hornbeam were more problematic. Furthermore, more common species or those with unique spectral properties (e.g.
black locust) were classified with higher accuracies. The results demonstrated that Sentinel-2 imagery, in combination with GEE and the RF algorithm, can provide high accuracies at the country scale, supporting map applications for National Forest Inventories and the daily work of practising foresters. Another outcome was that mutual interest enabled us to build partnerships between our countries, based on similar climatic risks, forest damage types, and research interests.
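As a minimal illustration of this kind of workflow, a Random Forest can be trained on labelled spectral samples and assessed with an overall accuracy and a per-class confusion matrix; the synthetic clusters below merely stand in for Sentinel-2 samples drawn from the national forestry databases, and the species names are only labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
species = ["Scots pine", "pedunculate oak", "silver birch"]

# Synthetic "spectral" samples: one cluster per species (illustrative only).
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(500, 6)) for i in range(3)])
y = np.repeat(np.arange(3), 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

oa = accuracy_score(y_te, pred)              # overall accuracy
cm = confusion_matrix(y_te, pred)
per_class = cm.diagonal() / cm.sum(axis=1)   # producer's accuracy per species
```

Inspecting `per_class` rather than only the overall accuracy is what reveals the spread across species reported in the abstract.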
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Exploring the Relation between Tree Species Diversity and Forest Height Heterogeneity Across Spatial Scales

Authors: Elisabeth Rahmsdorf, Dr Daniel Doktor, Prof Dr Hannes Feilhauer, Dr Maximilian Lange
Affiliations: Helmholtz Centre for Environmental Research - UFZ, Remote Sensing Centre for Earth System Research - RSC4Earth, Leipzig University, German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig
Forests with a high overall biodiversity provide a variety of ecosystem functions and services and are associated with greater ecosystem stability and resilience to disturbance events. To identify and preserve intact forest ecosystems, monitoring forest biodiversity on a national scale is essential. Large-scale remote sensing datasets offer the potential to develop novel monitoring strategies at this level. In particular, area-wide LiDAR-based canopy height models with high spatial resolution are a powerful tool for deriving forest structural attributes in different forest ecosystems. We investigated the relationship between tree height heterogeneity and tree species diversity across different spatial scales in Germany. Height heterogeneity was calculated from canopy height models. Canopy cover, topographic features, and tree species composition were included in the analysis to identify drivers of forest height heterogeneity. In the first part of the analysis, we focused on selected forest inventory sites to investigate the relation of tree species diversity and tree height heterogeneity at the local scale. In the second part, we extended our focus to the national scale to identify drivers of forest height heterogeneity across the country. Here, a Germany-wide tree species classification map based on Sentinel-2 satellite data was used to obtain measures of tree species composition and tree species diversity. We addressed the challenges arising from the nationally heterogeneous data used to generate canopy height models. Additionally, we analyzed and evaluated the impact of spatial scale and resolution, as well as forest-type-specific structural features. Correlations between tree species diversity and tree height heterogeneity vary between geographic regions and forest types. Furthermore, canopy cover, topography, and tree species composition affect tree height heterogeneity.
By combining our findings from different forest inventory sites and examining national trends, we aim to provide a comprehensive picture of the potential and the constraints of using remote sensing data products such as canopy height models for forest monitoring. This study contributes to the development of remotely sensed forest biodiversity indicators and facilitates the integration of remote sensing datasets into large-scale forest assessments to improve the detection of structural changes in forest ecosystems over time and space.
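Height heterogeneity from a canopy height model (CHM) can be expressed, for example, as the per-pixel standard deviation of canopy heights within a moving window; this numpy/scipy sketch is one plausible reading of such a metric, and the helper name and window size are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_height_sd(chm, window=5):
    """Per-pixel standard deviation of canopy height within a square moving
    window, computed as sqrt(E[h^2] - E[h]^2) via two uniform filters
    (hypothetical helper illustrating one height-heterogeneity metric)."""
    mean = uniform_filter(chm, size=window)
    mean_sq = uniform_filter(chm * chm, size=window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)
    return np.sqrt(var)

rng = np.random.default_rng(0)
flat = np.full((50, 50), 25.0)                        # even-aged, uniform stand
rough = 25.0 + rng.normal(scale=5.0, size=(50, 50))   # structurally diverse stand

hh_flat = local_height_sd(flat)
hh_rough = local_height_sd(rough)
```

Aggregating such a layer to plot or grid-cell level is then what makes it comparable to plot-based species-diversity measures across scales.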
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Tree Cover Changes in War-Affected Areas: A Case Study of Ukraine

Authors: Adam Waśniewski, Alicja Rynkiewicz, Dr Agata Hościło, Dr Serhii Havryliuk, Dr Oleh
Affiliations: Department of Geoinformatics, Cartography and Remote Sensing, Faculty of Geography and Regional Studies
Trees play a crucial role in various aspects of human life, including ecology, economy, climate, urban planning, and social well-being. Ecologically, trees are vital for water conservation, air quality improvement, soil preservation, and oxygen production. Economically, trees are essential in construction, transportation, and as an energy source. Therefore, monitoring tree cover extent and its changes over time is critical. The study focuses on mapping tree cover and detecting tree cover change in the Lviv, Kyiv, and Zhytomyr regions of Ukraine. Given the current geopolitical situation, monitoring Ukraine using satellite data has become increasingly important. The ongoing conflict has led to significant environmental impacts, including forest damage and land use changes. Remote sensing provides technology and data to assess and monitor these changes in near real time, and provides evidence for both environmental conservation and humanitarian efforts. The benefits of using remote sensing in conflict zones include the ability to continuously monitor large and inaccessible areas, to detect changes quickly, and to provide data for disaster response and recovery planning. The aims of the study were: (a) to determine the tree cover extent and classify dominant leaf types (coniferous and broadleaved) for the reference year 2020 using Sentinel-2 images; (b) to detect changes in tree cover over the period 2020-2022; and (c) to evaluate the environmental impact of the conflict by assessing forest losses and land use changes. In this study, a methodology for accurate monitoring of land cover changes in conflict zones was developed. The classification was performed in two steps: (a) initial classification of tree cover, and (b) subsequent classification of leaf type within the tree cover mask from the first step. The classifications were implemented in a Python environment on cloud computing virtual machines using the Random Forest classifier.
The Random Forest is widely used for classifying tree cover and leaf types (coniferous and broadleaved), and often outperforms other algorithms in classifying Sentinel-2 data due to its ability to deliver accurate predictions. For tree cover change detection, a three-step method implemented in the Google Earth Engine (GEE) platform was applied across the three study areas and two time intervals (2020-2021 and 2021-2022). In the first step, spectral index differencing was applied to indicate potential forest changes. For that, the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) were calculated from seasonal Sentinel-2 mosaics (derived from images acquired between April and September). In the next step, the Random Forest classifier was trained using selected potential changes over the Kyiv region for the period 2020-2021. The pre-trained model was then applied to the remaining regions and all time intervals to detect and classify the forest changes into three categories: no change, clearcuts, and burnt areas. The changes were verified using the Sentinel-2 mosaics and images available on Google Earth. The results indicated high overall accuracy (OA) and F1 scores for tree cover classification across the three regions, with values ranging from 0.97 to 0.99. Forest covered 8,065.28 km² in the Lviv region, 12,124.87 km² in the Zhytomyr region, and 7,820.33 km² in the Kyiv region. Accuracy assessments for the broadleaved and coniferous tree types also showed high values, with all statistics exceeding 90%. The tree cover change detection revealed that clearcuts and burnt forest accounted for 1-2% of the total tree-covered area, with the largest burnt area detected in the Kyiv region during the period 2021-2022 (around 6,491 ha). The classification accuracy for burnt areas was slightly lower, around 0.94, compared to the other classes.
Despite the high accuracy, the study identified some challenges, including misclassification between burnt areas and clearcuts due to the spectral similarities of burnt ground, bare ground (clearcuts), and shadows. The findings underscore the importance of continuous tree cover monitoring for environmental conservation and climate change mitigation, particularly in conflict-affected areas, and highlight the potential for further refinement of the classification methodologies to address the identified challenges. The research leading to these results has received funding from the Norway Grants 2014-2021 via the Polish National Center for Research and Development [grant no: NOR/POLNOR/InCoNaDa/0050/2019-00] under the scheme "Support for Ukrainian researchers", project InCoNaDa+UA "Enhancing the user uptake of Land Cover / Land Use information derived from the integration of Copernicus services and national database - case study Ukraine".
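The index-differencing step described above can be sketched as follows; the band values and the -0.2 change threshold are illustrative assumptions (the study derives the indices from seasonal Sentinel-2 mosaics and passes the flagged pixels to a Random Forest for labelling):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def nbr(nir, swir):
    """Normalized Burn Ratio."""
    return (nir - swir) / (nir + swir)

shape = (100, 100)
# Toy reflectances for two seasonal mosaics; healthy forest everywhere at t0.
nir_t0 = np.full(shape, 0.40)
red_t0 = np.full(shape, 0.05)
swir_t0 = np.full(shape, 0.10)
nir_t1, red_t1, swir_t1 = nir_t0.copy(), red_t0.copy(), swir_t0.copy()
# Simulate a cleared/burnt patch at t1: NIR drops, red and SWIR rise.
nir_t1[40:60, 40:60] = 0.20
red_t1[40:60, 40:60] = 0.15
swir_t1[40:60, 40:60] = 0.25

d_ndvi = ndvi(nir_t1, red_t1) - ndvi(nir_t0, red_t0)
d_nbr = nbr(nir_t1, swir_t1) - nbr(nir_t0, swir_t0)

# Flag potential changes by thresholding the index differences; a classifier
# would then label them (no change / clearcut / burnt area).
potential_change = (d_ndvi < -0.2) | (d_nbr < -0.2)
```

Because NDVI and NBR drop for different reasons (vegetation loss vs. char and exposed soil), combining both differences widens the set of candidate change pixels.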
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Forest structure complexity from forest inventory and GEDI data for Europe

Authors: Dr Alexandra Runge, Dr Simon Besnard, Dr Emil Cienciala, Dr Kevin Black, Dr Roberto Pilli, Dr Gherardo Chirici, Dr Giovanni D'Amico, Dr Martin Herold
Affiliations: GFZ Helmholtz Centre for Geosciences, Institute of Forest Ecosystem Research, FERS Forestry, University of Florence
Forests are essential components of the global carbon cycle, serving as vital carbon sinks and regulators of climate. The rate of carbon removal by European forests is declining due to increased wood demand, natural disturbances, and growing shares of forests reaching maturity. Therefore, promoting actions to sustain and enhance forests as carbon sinks is essential. Forest structure is crucial for accurate carbon storage quantification and for predicting future ecosystem dynamics. Additionally, forest structure characteristics are of high relevance for recent EU policies, e.g. concerning old-growth, natural, or planted forests. In this study, we hypothesise that forest inventory survey results provide invaluable ground information on forest structure complexity. Therefore, we link forest inventory data, such as national forest inventory, ICP Forests, or similar plot surveys, with Global Ecosystem Dynamics Investigation (GEDI) data to map forest structure complexity across Europe, focusing on Italy, the Czech Republic, and Central Europe. Forest structure complexity is challenging to measure in plot surveys. However, the variability of diameter at breast height (DBH), tree height (THT), and the number of species in a plot give a good indication of the complexity within a plot. Little variability of DBH and THT, and few species in a plot, indicate low forest complexity, as the trees are most likely the same species and even-aged, with the same disturbance and/or forest management history. Conversely, high variability of DBH and THT and multiple species in a plot indicate high forest complexity, assuming that the high DBH variability results from multiple species, varying ages, and a more complex forest structure. The GEDI L2A and L2B data, from the first acquisition cycle (2019-2023), provide height parameters and structural variables, respectively, derived from the raw vertical waveform product.
So far, assessments of forest structure with GEDI have been limited to individual forest structure parameters, such as canopy cover, height, and biomass, and the 3D information contained in the waveforms is rarely taken into account. Furthermore, forest structure products have mostly been linked to and calibrated with airborne laser scanning (ALS) products, combining two remote sensing products. We therefore matched forest inventory plots from Italy, the Czech Republic, and ICP Forests with good-quality GEDI shots. Firstly, we determined forest structure complexity at plot level by assessing the variability of DBH, THT and species information and their combined effect in an equally weighted structure index. Comparing the results to forest type maps and to forests of expectedly low and high structural complexity, such as those in the primary forest database, showed that the plot-level DBH, THT and species information captured forest structure complexity reliably. Secondly, we trained an XGBoost regression model with GEDI L2A and L2B data to predict plot-level structural complexity based on the DBH standard deviation (SD), THT SD, and species number. The resulting map at GEDI-footprint level will show areas of low and high forest structure complexity across these European countries. This forest structure complexity information will help improve carbon quantification and assess the impact of forest disturbances that lead to changes in forest structure and, hence, future carbon uptake.
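The equally weighted structure index described above can be sketched as follows. This is an illustrative numpy stand-in, not the authors' exact formulation; the min-max scaling and variable names are assumptions.

```python
import numpy as np

def structure_index(dbh_sd, tht_sd, n_species):
    """Equally weighted plot-level structure index (illustrative).

    Each component (DBH standard deviation, tree-height standard
    deviation, species count per plot) is min-max scaled to [0, 1]
    and the three are averaged, so 0 indicates a structurally simple
    plot and 1 a highly complex one.
    """
    def scale(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())
    return (scale(dbh_sd) + scale(tht_sd) + scale(n_species)) / 3.0

# Three example plots: even-aged monoculture, mixed stand, old-growth-like stand
idx = structure_index(dbh_sd=[2.0, 8.0, 14.0],
                      tht_sd=[1.0, 4.0, 7.0],
                      n_species=[1, 4, 7])
# idx ranks the plots from simplest (0.0) to most complex (1.0)
```

In this sketch the scaling is relative to the plots supplied, so the index is only comparable within one set of plots; a fixed reference range per component would make it transferable.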
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Forest health monitoring and climate envelopes in support of a national reforestation strategy in Germany

Authors: Julian Fäth, Christian Schäfer, Svenja Dobelmann, Judith Reise, Miriam Pfeiffer, Catrin Schmidt, Sebastian Dittrich, Philipp Herrmann, Prof. Dr. Tobias Ullmann, Michael Thiel
Affiliations: Department of Remote Sensing, Institute of Geography and Geology, University of Würzburg, Öko-Institut e.V., Institute for Applied Ecology, Professorship of Landscape Planning, TUD Dresden University of Technology, Professorship of Biodiversity and Nature Conservation, TUD Dresden University of Technology
Central European forests are increasingly threatened by drought stress and extreme events resulting from changing climatic conditions. These pressures are visibly impacting forest ecosystems, often resulting in reduced growth and extensive forest dieback. In recent years, the increasing extent of calamity-stricken forest areas in Germany has underscored the urgent need to determine where and how reforestation efforts should be prioritized. A critical foundation for strategic reforestation planning is a comprehensive overview of current forest distribution, health status, and damage levels across different forest regions. As a first step, available forest map products were collected and compared. Using data from the Landsat and Sentinel-2 missions, we further calculated three key indices for the summer months from 2003 to 2023: the Dryness Index (DI), the Normalized Difference Vegetation Index (NDVI), and the Normalized Difference Water Index (NDWI). These indices were mapped to the 82 Growth Areas of Germany, guided by a defined forest mask. Additionally, air temperature, precipitation, and evapotranspiration data from 1950 to 2023 were processed for the same areas and correlated with the remote sensing indices. To support a national reforestation strategy, it is also essential to consider how forests may develop under future climatic scenarios, and how they should ideally evolve to promote nature conservation and climate resilience. Since projecting forest dynamics is challenging due to active forest management, particularly regarding tree species composition, we used climate envelopes to predict a portfolio of suitable tree species based on their specific climate niches and compared the projected development of the shares of native deciduous and coniferous species.
Our approach leverages remote sensing and climate envelopes to achieve several key objectives: first, we aim to assess the temporal trend of forest condition by focusing on forest vitality and damage through various indices; second, we acquire and overlay climate data with remote sensing-derived forest health metrics, enabling us to examine the climatic response patterns of coniferous, deciduous and mixed forests. By integrating remote sensing, historical climate data, and climate envelopes, our results provide critical insights into how climatic stressors have influenced forest health over time and offer important information to support a national reforestation strategy. The outcomes are designed to foster nature conservation and climate resilience across Germany’s forests, enabling informed decision-making to guide reforestation efforts in response to ongoing environmental challenges.
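The band ratios behind the indices above are standard. A minimal numpy sketch follows; the reflectance values are toy numbers, and since the abstract does not state which NDWI formulation is used, the NIR/SWIR (Gao 1996) variant is shown here as one common choice.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ndwi(nir, swir):
    """Normalized Difference Water Index, NIR/SWIR (Gao 1996) variant."""
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir)

# Toy reflectances: a healthy canopy pixel vs a drought-stressed one
healthy, stressed = ndvi([0.45, 0.30], [0.05, 0.12])
```

Mapped over summer composites per Growth Area, trends in such indices are what the abstract correlates with the climate records.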
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Detection of Spruce Budworm with Sentinel-1 Time Series

Authors: Lauriane Mousset, Dr. André Beaudoin, Dr. Frédéric Brigui, Dr. Xavier Dupuis, Dr. Régis Guinvarch, Dr. Thierry Koleck, Dr. Laetitia Thirion
Affiliations: SONDRA, CentraleSupélec, Université Paris-Saclay, DEMR, ONERA, Université Paris-Saclay, Laurentian Forestry Centre, Canadian Forest Service, Natural Resources Canada, Centre National d’Études Spatiales (CNES), CESBIO, Université de Toulouse, CNES/CNRS/INRAE/IRD/UT3
The spruce budworm (Choristoneura fumiferana, SBW) is an insect pest found in North American forests. It mainly affects balsam fir (Abies balsamea), white spruce (Picea glauca), and black spruce (Picea mariana). Caterpillars feed on tree needles, causing them to turn brown and fall off [1]. Although SBW belongs naturally to boreal forest ecosystems, recent years have seen an upsurge in epidemics, particularly because of climate change [2,3,4]. This has many impacts, both environmental and economic. Defoliated trees are weakened, increasing the risk and severity of fires in areas affected by SBW [5]. In addition, the capacity of forest ecosystems to store atmospheric carbon is reduced, transforming natural carbon sinks into carbon sources [6,7]. Finally, these epidemics have a high economic impact, given the importance of the forestry sector in the Canadian economy [8]. SBW must be detected as early as possible, and the affected areas must be monitored to track the evolution of tree disease. The province of Quebec (Canada) maintains precise forest monitoring, and its Ministry of Natural Resources and Forests provides datasets for scientific purposes: digital elevation models and canopy cover, photo-interpretation of forest stands with their characteristics, and historical fieldwork cartography. It also provides an aerial photographic and satellite image interpretation of the stands affected by SBW. Every year, an aerial campaign takes place in July-August to map and monitor the areas defoliated by SBW, combined with satellite data (Harmonized Landsat and Sentinel-2) [9]. To overcome the constraints associated with optical satellite images, such as cloud cover, the aim here is to use a time series of synthetic aperture radar (SAR) images from the Sentinel-1 satellite, which is well suited to this study because it delivers images regularly over a long period.
This makes it possible to compare areas that have been affected by SBW with areas that have not, within the same swath. We suggest that the needle loss linked to the presence of SBW modifies the geometry of the tree and therefore also the temporal signature of the zone in SAR imagery. In addition, diseased trees show a reduction in sap flow [10]: the electromagnetic properties of the object encountered by the radar vary [11-13], causing a break in the regularity of the time series. Sentinel-1 time series vary periodically with the seasons, temperature variations, and natural phenotypic variations in trees. Snowfall and rainfall also influence the backscatter of the signal to the satellite and, by extension, the time series. The aim here is to detect anomalous variations in these time series linked either to unusual meteorological conditions for the season or to SBW. To differentiate between weather-related and SBW-related anomalies, we will use weather data provided by the Meteorological Service of Canada. By applying deep learning methods to Sentinel-1 time series, similar to those developed with an autoencoder for forest fire detection [12], we investigate our capability to detect anomalies linked to SBW, as has been done for optical images with machine learning [13,14].
[1] Talerico, R. L. (1983). Summary of life history and hosts of the spruce budworm. General Technical Report, USDA Forest Service, Broomall, PA.
[2] Gray, D. R. (2008). The relationship between climate and outbreak characteristics of the spruce budworm in eastern Canada. Climatic Change, 87(3), 361-383.
[3] Blais, J. R. (1983). Trends in the frequency, extent, and severity of spruce budworm outbreaks in eastern Canada. Canadian Journal of Forest Research, 13(4), 539-547.
[4] Seidl, R., Thom, D., Kautz, M., Martin-Benito, D., Peltoniemi, M., Vacchiano, G., ... & Reyer, C. P. (2017). Forest disturbances under climate change. Nature Climate Change, 7(6), 395-402.
[5] Fleming, R. A., Candau, J. N., & McAlpine, R. S. (2002). Landscape-scale analysis of interactions between insect defoliation and forest fire in central Canada. Climatic Change, 55(1), 251-272.
[6] Gunn, J. S., Ducey, M. J., Buchholz, T., & Belair, E. P. (2020). Forest carbon resilience of eastern spruce budworm (Choristoneura fumiferana) salvage harvesting in the Northeastern United States. Frontiers in Forests and Global Change, 3, 14.
[7] Dymond, C. C., Neilson, E. T., Stinson, G., Porter, K., MacLean, D. A., Gray, D. R., ... & Kurz, W. A. (2010). Future spruce budworm outbreak may create a carbon source in eastern Canadian forests. Ecosystems, 13, 917-931.
[8] Holm Perrault, A. I. E. (2024). The Economics of Spruce Budworm Monitoring and Management in Eastern Canada.
[9] Inventaire Forestier National du Canada, https://nfi.nfis.org/fr
[10] Schäfer, K. V. R., Renninger, H. J., Clark, K. L., & Medvigy, D. (2014). Hydrological responses to defoliation and drought of an upland oak/pine forest. Hydrological Processes, 28(25), 6113-6123.
[11] Salas, W. A., Ranson, J. K., Rock, B. N., & Smith, K. T. (1994). Temporal and spatial variations in dielectric constant and water status of dominant forest species from New England. Remote Sensing of Environment, 47(2), 109-119.
[12] Di Martino, T., Le Saux, B., Guinvarc'h, R., Thirion-Lefevre, L., & Colin, E. (2023). Detection of forest fires through deep unsupervised learning modeling of Sentinel-1 time series. ISPRS International Journal of Geo-Information, 12(8), 332.
[13] Carletti, H., Gégout, J. C., Dutrieux, R., Féret, J. B., Vega, C., Belouard, T., ... & Piedallu, C. Evaluating Sentinel-2 Time Series for Monitoring Dieback Reveals Different Responses Among Temperate Conifer Species. Available at SSRN 4955076.
[14] Dutrieux, R., Féret, J.-B., & Ose, K. (2021). Mise au point d'une méthode reproductible pour le suivi généralisé des dégâts de scolytes par télédétection satellitaire [Development of a reproducible method for generalised satellite-based monitoring of bark beetle damage]. Les rendez-vous techniques de l'ONF, 69/70, 39-46.
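The core idea of separating the seasonal cycle from SBW-related breaks can be illustrated with a much simpler baseline than the cited autoencoder: fit the annual harmonic and flag large residuals. A minimal numpy sketch on fully synthetic values:

```python
import numpy as np

def seasonal_anomalies(t, y, period=365.0, k=3.0):
    """Flag observations that deviate from a fitted annual cycle.

    t: acquisition times in days; y: backscatter (dB).
    A first-order harmonic (mean + annual sine/cosine) is fitted by
    least squares; residuals beyond k standard deviations are flagged.
    """
    t, y = np.asarray(t, float), np.asarray(y, float)
    X = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return np.abs(resid) > k * resid.std()

# Synthetic 3-year Sentinel-1-like series with one defoliation-like dip
t = np.arange(0, 3 * 365, 12)
y = -10.0 + 1.5 * np.sin(2 * np.pi * t / 365)
y[80:85] -= 4.0          # abrupt backscatter drop over five acquisitions
flags = seasonal_anomalies(t, y)
```

A real detector would additionally condition on weather covariates, as the abstract proposes, to separate meteorological anomalies from SBW damage.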
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Understanding central European forest practitioners' requirements for remote sensing-based information products: A questionnaire survey

Authors: Fabian Fassnacht, Dr. Christoph Mager, Dr. Lars T. Waser, Dr. Ursa Kanjir, Dr. Jannika Schäfer, Ana Potocnik Buhvald, Dr. Elham Shafeian, Felix Schiefer, Dr. Lisa Stancic, Dr. Markus Immitzer, Dr. Michele Dalponte, Prof. Krzysztof Sterenczak, Dr. Mitja Skudnik
Affiliations: Freie Universität Berlin, Karlsruhe Institute of Technology, WSL, Research Centre of the Slovenian Academy of Sciences and Arts, University of Ljubljana, University of Saskatchewan, Canada, University of Natural Resources and Life Sciences, Vienna (BOKU), Research and Innovation Centre, Fondazione Edmund Mach, Italy, Forest Research Institute, Poland
Despite decades of development, the adoption of remote sensing-based information products in the forestry sector remains limited in central and southern Europe. This may partly be due to a mismatch between the developed remote sensing products and the needs of potential users. In this study, we present the results of a survey conducted with 355 forest practitioners from eight central and southern European countries. The survey aimed to identify practitioners' technical requirements for four types of remote sensing-based information products: tree species, canopy height, wood volume/biomass, and forest disturbances. Respondents were asked about their preferences regarding thematic and spatial detail, the maximum acceptable error, and the temporal frequency of the information layers, as well as about the layers' main application fields. The study also examined whether demographic variables of the participants, including education, age, and professional background, influenced these requirements. Preferences for spatial and thematic detail were found to be relatively diverse, whereas more consistent patterns emerged regarding error tolerances and temporal frequency demands. For example, the maximum acceptable error ranged between 5 and 15% for detailed tree species maps, between 1 and 3 m for canopy height maps, and between 5 and 20% for biomass maps. When comparing the practitioners' demands with the current state of the art in remote sensing, our results suggest that for some products, such as canopy height maps, existing remote sensing technologies and workflows can meet all practitioner demands.
However, for other information products this is only partly the case: in our view, remotely sensed information on forest disturbances partially meets practitioner needs, while products related to tree species and wood volume/biomass currently fall short of the thematic detail and accuracy required by practitioners in central and southern Europe. The application fields named for the information layers included the expected tasks, but participants also suggested some quite unexpected "out-of-the-box" uses of remote sensing-based information layers, which have so far been largely ignored by the remote sensing community. With respect to demographic groups, we found no statistically significant differences. While our results suggest that further technical innovation is needed for some information products to match practitioners' demands, it may also be questioned whether those demands are fully realistic. For example, it is well known that many traditional information products on which current forestry practices are based do not reach the accuracy demanded by practitioners. Providing information products that are better than what practitioners are used to working with may therefore still create added value, even if the stated accuracy requirements are not met. Our study contributes to understanding the alignment and misalignment between the technical requirements of forest practitioners and the capabilities of remote sensing-based information products.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Merging Copernicus data into national forestry products – a symbiosis?

Authors: Benjamin Schumacher, Stephan Gräber, Susanne Karel, Tobias Schadauer, Ursula Knieling, Claire Brenner, Matthias Lampert, Kevin Kopecky, Katrin Windisch-Ettenauer, Alexandra
Affiliations: Austrian Research Centre for Forests (BFW) - Department for Forest Inventory
Traditional aerial remote sensing products such as aerial imagery and airborne laser scanning have delivered high-resolution forest data at national level in Austria for over 20 years. However, the retrieval and postprocessing of the imagery is time-consuming and cannot be carried out on a regular yearly basis. With the changing climate and the growing sociopolitical requirements for sustainable forest management, the need arose for a timely information system on forest parameters for various stakeholders. Since the launch of the Sentinel missions, BFW has incorporated the data into daily routines to serve national agencies, political stakeholders and forest owners with the latest information about Austrian forests. Advances include the development and regular processing of an anomaly detection based on the Sentinel-2 phenology time series, which makes it possible to investigate calamities of different types quickly and specifically in Austria's complex terrain. Furthermore, BFW has developed a tree species map of Austria featuring pure and mixed species classes, based on convolutional neural networks applied to Sentinel-2 time series, which is now widely adopted in the forest research community and may be rolled out to regions in other countries as well. Alongside the tree species map, a tree shares map is being developed, with an ongoing validation phase that requires information from local stakeholders to ensure correctness of the data. The anomaly detection system is also being used to explore methods for categorizing anomalies by their underlying causes. By analyzing spatial and temporal patterns in affected areas, we aim to distinguish between human-induced causes, such as harvesting, thinning, and forest road construction, and natural causes, including bark beetle infestations, windthrow, and combinations of these natural factors.
The success of the products is based on three major factors, which complement the assets of the Sentinel-2 time series:
1. Intense cooperation with researchers to bring the latest scientific advances into applications for the benefit of the Austrian forest.
2. High-resolution aerial data in combination with local knowledge, which allows excellent training data to be created for machine learning applications.
3. The national forest inventory, which aids in validating the results.
National Forest Inventories (NFIs) conduct design-based ground sampling, based on a statistical design that allows unbiased estimates of forest parameters such as type, growth or height. The trade-off for such low-bias estimates and a comprehensive dataset is the revisit cycle, which is six years for the Austrian NFI. In contrast, the use of Sentinel-2 imagery allows for high timeliness. However, relying solely on model-based mapping leads to estimates with a high bias and is therefore not suitable for forest monitoring on its own. In a European landscape it is important to recognise and utilise local, regional and national knowledge about forests to reflect the diversity of European forests and their management. For instance, what is considered a forest in Mediterranean regions differs from continental regions, and alpine regions pose different challenges compared to plain terrain. Consequently, applying a model adapted to the local landscape will lead to more precise results than a generalised model. This talk will present how scientific research and the resulting innovation have been implemented into remote sensing forest products that, in conjunction with traditional aerial remote sensing products, deliver a holistic picture of the recent evolution of Austrian forests. Nevertheless, detailed validation of forest products from satellite missions is required to reflect the ecological diversity and management of forests. The talk will also demonstrate how design-based ground sampling and model-based mapping can be combined to maximise the synergies of both domains.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Investigating the Impact of Airstrips on Deforestation in Indigenous Territories of the Brazilian Amazon

Authors: Kate Booth, Professor Jonathan Huck, Dr Polyanna Bispo
Affiliations: The University Of Manchester
This study explores the impact of airstrips on deforestation within Indigenous Territories (ITs) in the Brazilian Amazon (BA). Airstrips are critical enablers of illegal deforestation activities, including mining, logging, and land grabbing, which drive biodiversity loss and habitat fragmentation. Despite their significance, the direct relationship between airstrip establishment and deforestation within ITs remains underexplored. This research aims to fill that gap, providing insights to support targeted monitoring and policy interventions. The BA rainforest, in its natural equilibrium, acts as a carbon sink for our planet, with carbon sequestration far exceeding carbon emissions. This natural balance, established by primary, untouched rainforest, is a key process in sustaining the Earth's climate, and it has now been disrupted. With decades of increasing illegal activities such as logging and mining, and wildfires extending across untouched vegetation, the Amazon rainforest is fast approaching a crucial tipping point. Illegal activities have been allowed to encroach across the boundaries of Protected Areas and ITs, and advances in transportation systems across the rainforest have supported and enabled this encroachment. We have established that time-series analysis coupled with change detection techniques comprising PELT and K-means clustering works as a method to identify the creation year of airstrips. In this study we sourced point data locating airstrips within the BA rainforest from MapBiomas; this point dataset formed the starting point of our methodology. Spectral index-based time series extracted from Landsat imagery were produced for each point, and change detection techniques were applied.
Visual comparison of the time series with the corresponding Landsat imagery, as well as with high-resolution satellite images such as Planet NICFI, has validated the change detection points and confirmed that the above method successfully identifies the creation year of an airstrip. We will conduct t-tests to evaluate the hypothesis that airstrip creation increases deforestation rates and extents in surrounding areas. Preliminary findings indicate that airstrips strongly correlate with intensified deforestation, particularly within ITs. Observable vegetation changes in Landsat imagery and time-series analyses highlight the extent of this impact. Identifying airstrip creation years enables the assessment of deforestation rates and patterns over time, offering valuable insights into how transportation infrastructure contributes to deforestation and forest degradation. Understanding the implications of airstrips is critical for improving initiatives like the Action Plan for the Prevention and Control of Deforestation in the Legal Amazon (PPCDAm). Strengthening such programs can enhance law enforcement, refine policy frameworks, and improve deforestation monitoring systems. By demonstrating the effectiveness of this methodology, the study also provides a simple framework that can be extended to other transportation infrastructure, such as roads, to further investigate their environmental impacts. These findings offer actionable insights for conservation strategies and policy development while enabling researchers to examine infrastructure-driven environmental changes in other ecosystems. Ultimately, this research contributes to a deeper understanding of how infrastructure accelerates deforestation and forest degradation, helping to guide interventions that mitigate environmental impacts and to shape conservation strategies for the Amazon rainforest and its Indigenous Territories.
Key words: Airstrips; Deforestation; Indigenous Territories; Brazilian Amazon; Environmental Monitoring; Change Detection
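The changepoint step can be illustrated in miniature. The study uses PELT (which finds an arbitrary number of breaks) followed by K-means clustering; the sketch below, under simplified assumptions and with toy values, locates a single break by minimising the same within-segment variance cost.

```python
import numpy as np

def single_changepoint(y):
    """Locate one break by minimising the summed within-segment
    variance -- the cost PELT optimises for an arbitrary number of
    breaks (single-break variant for illustration only)."""
    y = np.asarray(y, float)
    def cost(seg):
        return ((seg - seg.mean()) ** 2).sum()
    costs = [cost(y[:k]) + cost(y[k:]) for k in range(1, len(y))]
    return 1 + int(np.argmin(costs))           # index of the first post-break year

# Toy NDVI-like annual series: stable forest, then clearing for an airstrip
series = np.array([0.82, 0.80, 0.83, 0.81, 0.79, 0.35, 0.30, 0.33, 0.31])
year0 = 1990                                   # year of the first observation
creation_year = year0 + single_changepoint(series)
```

On real Landsat spectral-index series, the break year recovered this way is what the study validates against imagery and Planet NICFI mosaics.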
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Updates on the nrt Python Package: Enhancing Near Real-Time Forest Disturbance Monitoring

Authors: Loïc Dutrieux, Jonas Viehweger, Kenji Ose, Pieter SA Beck
Affiliations: Joint Research Centre, European Commission, Sinergise Solutions GmbH
Introduced in 2021, the nrt Python package was developed to streamline environmental monitoring by providing a unified Application Programming Interface (API) for near real-time detection of anomalies in gridded spatio-temporal data, with a focus on forest disturbance monitoring using satellite image time-series. Built with computational efficiency and scalability as core principles, nrt implements state-of-the-art monitoring algorithms—including EWMA, CuSum, MoSum, CCDC, and IQR—optimized through vectorized operations and Just-In-Time (JIT) compilation with Numba. This poster presents the latest developments of the nrt package, highlighting its current capabilities and recent real-world applications. We showcase the operational deployment of a nationwide forest disturbance alert system, demonstrating the package's readiness for large-scale, practical use. This deployment underscores the effectiveness of nrt in providing timely information to stakeholders involved in climate change mitigation and biodiversity conservation. Looking ahead, we discuss future directions for the nrt package, including plans to introduce multivariate monitoring modes to capture a broader range of vegetation indices, thereby improving detection capabilities for various types of disturbances. By presenting the evolution and anticipated enhancements of the nrt package, we aim to engage the community in leveraging and contributing to this tool, enhancing collective efforts in environmental monitoring and the preservation of natural heritage.
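To give a feel for the kind of monitoring nrt implements, the sketch below is a minimal EWMA control chart over time-series residuals. It is an illustrative stand-in written for this summary, not nrt's own API, and all numbers are synthetic.

```python
import numpy as np

def ewma_monitor(history, new_obs, lam=0.3, L=3.0):
    """EWMA control chart over time-series residuals (illustrative).

    history: residuals from a stable period, used to estimate noise;
    new_obs: incoming residuals monitored sequentially.
    Returns one boolean flag per new observation.
    """
    sigma = np.asarray(history, float).std(ddof=1)
    z, flags = 0.0, []
    for i, x in enumerate(new_obs, start=1):
        z = lam * x + (1 - lam) * z
        # variance of the EWMA statistic after i updates
        var = sigma ** 2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))
        flags.append(bool(abs(z) > L * np.sqrt(var)))
    return flags

stable = 0.05 * np.sin(np.arange(40.0))        # stand-in stable-period residuals
flags = ewma_monitor(stable, [0.01, -0.02, 0.015, -0.5, -0.6])
# the last two observations mimic an abrupt disturbance signal
```

The exponential weighting makes the chart react quickly to sustained shifts while smoothing over single-date noise, which is why it suits near real-time disturbance alerting.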
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Open spectral libraries to support vegetation monitoring

Authors: Miina Rautiainen, Aarne Hovi
Affiliations: Aalto University
Remote and close-range imaging spectroscopy (also called hyperspectral sensing) is anticipated to provide new solutions for monitoring vegetation. Particularly, the development of methods for interpreting the complex data from new and forthcoming hyperspectral satellite missions such as CHIME, EnMAP, PRISMA, and SBG requires high spectral resolution reference data on the elements forming vegetation canopies in different types of terrestrial ecosystems. These critical reference data are often collectively called spectral libraries. Spectral libraries of vegetated ecosystems contain the reflectance, transmittance, or absorption spectra of various elements forming canopies, such as single leaves, woody components, or entire plants. Even though the basic spectral features of plants and lichens have been known for many years, the scientific understanding of the variations in spectral properties of the 'elements' forming northern forest and peatland ecosystems has been limited. Open-access spectral libraries have been even more scarce. In the poster presentation, we report intra- and interspecific variations in the spectral properties (400-2500 nm) of, for example, leaves and needles of over 30 shrub and tree species, tree bark, terricolous lichens, and forest floor and peatland vegetation communities. The datasets have been measured by Aalto University researchers using laboratory or field setups in European forests and peatlands as part of projects funded by the European Research Council and the Research Council of Finland during the past eight years. The spectral libraries are openly available with digital object identifiers, ensuring wide accessibility. These data can be used in the development of, for example, remote sensing methods and land surface models, and to understand the shortwave radiation regime and ecophysiological processes of forest canopies and peatland vegetation communities.
The freely accessible nature of these spectral libraries encourages collaborative research efforts and promotes the advancement of scientific knowledge across disciplines and will thus facilitate improvements in environmental monitoring and management.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: A Biome Centered Approach to Machine-Learning Based Above-Ground Biomass Mapping

Authors: Ghjulia Sialelli, Dr. Torben Peters, Prof Jan Dirk Wegner, Prof Konrad Schindler
Affiliations: Photogrammetry and Remote Sensing, ETH Zurich, EcoVision Lab, DM3L, University of Zurich
Accurate estimates of Above-Ground Biomass (AGB) are essential for addressing two of humanity's biggest challenges: climate change and biodiversity loss. One prevalent way of mapping biomass at scale is to analyse globally available remote sensing data with machine learning (ML) methods. We have recently published a machine-learning-ready dataset [1] of high-resolution (10 m), multi-modal satellite imagery, paired with AGB reference values from NASA's Global Ecosystem Dynamics Investigation (GEDI) mission. The geographic distribution of this new dataset is designed to be representative of the worldwide AGB distribution in terms of land cover and biomes, paving the way for biome-aware ML models. AGB values vary significantly across vegetation types: some biomes cover a wide range (e.g. closed evergreen broadleaf trees, with an average of ≈100 t/ha and a maximum around 500 t/ha), while others have much narrower ranges (e.g. open deciduous broadleaf trees, with an average of ≈15 t/ha and a maximum around 80 t/ha). We find that existing AGB models, be it our own or others such as ESA's Climate Change Initiative Biomass, do not faithfully reproduce these differences, which underlines the need for a biome-centered approach to AGB estimation. In this work, we use the new dataset to investigate various strategies for biome-specific AGB estimation with deep neural networks:
1. Our baseline approach incorporates biome information into the model as an additional input feature. While straightforward, this method is not guaranteed to recover the underlying relationship between biomes and biomass predictions well enough.
2. To address this, we also experiment with an ensemble of biome-specific models whose outputs are combined at the pixel level.
3. Going beyond independent, biome-specific models, we also investigate architectures that can share information between biomes. This involves training a single model in which certain layers are shared among biomes whereas others are biome-specific, for instance a shared backbone with biome-specific prediction heads, or biome-specific feature extractors feeding into a common prediction head.
4. Another strategy is a single model in which the bulk of the parameters are shared, but the internal representation can be modulated in response to the biome information, e.g. using Feature-wise Linear Modulation (FiLM) [2]. In this way the model can locally adapt to biome-specific characteristics while using the full network capacity for biomass estimation.
5. To better utilise existing knowledge, we propose a variant with an auxiliary model that exploits canopy height (CH) maps together with known allometric relations and acts as a regulariser for AGB estimation.
6. Finally, we study the potential of recent foundation models for Earth observation, which have been pre-trained on other (pretext) tasks with the intention of encoding relevant a priori knowledge, such that they can be efficiently fine-tuned for various application tasks, possibly including AGB mapping.
[1] Ghjulia Sialelli et al. AGBD: A Global-scale Biomass Dataset. 2024. doi: 10.48550/ARXIV.2406.04928.
[2] Ethan Perez et al. "FiLM: Visual Reasoning with a General Conditioning Layer". In: AAAI. 2018.
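The FiLM-style biome conditioning can be sketched in a few lines. This is a pure-numpy stand-in for a network layer; the biome names, feature sizes and parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise Linear Modulation: scale and shift each feature
    channel with condition-dependent parameters (Perez et al., 2018)."""
    return gamma * features + beta

# Per-biome (gamma, beta) pairs, e.g. produced by a small conditioning
# network from a one-hot biome code; the values here are illustrative.
biome_params = {
    "closed_evergreen_broadleaf": (np.array([1.4, 0.9]), np.array([0.2, 0.0])),
    "open_deciduous_broadleaf":   (np.array([0.6, 1.1]), np.array([-0.1, 0.0])),
}

x = np.array([0.5, -0.3])              # features from the shared backbone
g, b = biome_params["open_deciduous_broadleaf"]
modulated = film(x, g, b)              # biome-adapted representation
```

Because only the per-channel scales and shifts depend on the biome, the bulk of the network capacity remains shared across biomes, which is exactly the trade-off strategy 4 targets.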

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Benchmarking Trees' Architectural Traits with Ecological Factors in a Free-Air CO₂ Enrichment Experiment in Central Amazonia Using Terrestrial Laser Scanning

Authors: Elisa Sillfors
Affiliations: University of Helsinki
Co-authors: Elisa Sillfors (elisa.sillfors@helsinki.fi), Tomas Ferreira Domingues (tomas@ffclrp.usp.br), David M. Lapola (dmlapola@unicamp.br), Carlos Alberto Quesada (quesada.beto@gmail.com), Kim Calders (kim.calders@ugent.be), Iokanam Pereira (iokanam.eng@gmail.com), Bruno Takeshi Tanaka Portela (brunotakeshi@gmail.com), Erone Santos (erone.g.dos.santos@jpl.nasa.gov), Paulo Bittencourt (bittencourtp@cardiff.ac.uk), Pietro Ciccale (pietro.ciccale@usp.br), Eduardo Eiji Maeda (eduardo.maeda@helsinki.fi)
The architectural traits of trees influence their ecological niche and their strategies for competing for light and nutrients. For instance, a tree's morphology determines the height of its canopy and the spread of its branches, impacting leaf positioning for optimal photosynthesis and energy usage. Hence, it is expected that increasing CO₂ levels in the atmosphere may alter structural traits associated with key physiological functions of trees. However, the relationship between increased CO₂ levels and morphological changes in tropical trees' architectural traits has not yet been quantitatively studied in tropical forests. Understanding this connection is essential for predicting the Amazon rainforest's response to climate change and informing effective conservation strategies. In this research, we explore the characteristics of specific architectural traits of trees in the AmazonFACE (Amazon Free-Air CO₂ Enrichment) experiment, an international field experiment located in an old-growth forest in Central Amazonia. The objective is to determine the magnitude and diversity of each architectural trait in the study area before exposure to elevated CO₂ concentrations, focusing on the traits expected to change.
To achieve this aim, the following specific research questions are addressed: What is the range of values for each structural trait across the AmazonFACE experiment? How do the values of structural traits vary with the taxonomy (family, genus and species level), vertical and horizontal structure (height, DBH, basal area, etc) and ecological factors (population size, conservation status, etc.)? Can we predict the expected value of each structural trait based on tree species and ecological factors? Our approach leverages terrestrial laser scanning (TLS) data combined with field-based inventories and other ecological datasets. The TLS data, collected in September 2023, provides fine-detailed structural information of individual trees, such as tree asymmetry, crown complexity, and angles of branches. We extracted point clouds of 296 trees with a diameter at breast height (DBH) greater than 10 cm from six AmazonFACE research plots. These trees are combined with field collection data that includes 36 families, 89 genera, and 177 species. The architectural traits chosen for the study are those expected to undergo changes due to increasing carbon dioxide levels such as crown complexity, crown depth and angles of branches. We expect that the results of this study will build our knowledge about trees' survival strategies in changing environmental conditions and directly indicate the forest’s health status, serving as an effective monitoring solution for ecological challenges like climate change. However, it is still unclear how we can effectively use the values obtained from architectural traits to determine the ecological condition of the forest and individual trees. The correlation between trees’ ecological characteristics and the values derived from architectural traits, and what they reveal about trees’ competitive strategies in the ecosystem, has not been sufficiently explored. 
The high biodiversity of the rainforest is expected to result in varying architectural traits across the study area, although specific architectural traits may dominate. It is expected that architectural trait values are correlated at the taxonomic levels, since tree morphology is genetically determined even though it is influenced by the environment. Different tree species may employ diverse competitive strategies for light and nutrients, and this research may reveal correlations between ecological factors and architectural trait values useful for planning future research in this field. Higher CO₂ concentrations may cause changes in traits such as crown complexity and branch angles, as trees adapt their growth strategies to new environmental conditions. Hence, the results from this study will provide a critical baseline for the AmazonFACE experiment.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Innovative Monitoring of Forest Water Content Using Active Microwave Systems and Corner Reflectors

Authors: David Moravec
Affiliations: Department of Spatial Sciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamycka 129, 165 21 Prague; Institute of Photogrammetry and Remote Sensing, TUD Dresden University of Technology, Trefftz-Bau, Zellescher Weg 16, 01069 Dresden
Forests cover 31% of the Earth's land surface and 43% of the EU area, playing a vital role in maintaining clean air, regulating the water cycle, and supporting biodiversity. Understanding the water balance, water ages, transit times, and water potential gradients within the soil–forest–atmosphere system is essential, as these insights can significantly enhance our understanding of water fluxes at large scales. A well-maintained water balance is crucial for the resilience of forest ecosystems to drought, their ability to adapt to global climate change, and their role in climate change mitigation. The most common method for assessing forest water content relies on optical satellite systems, which indirectly measure leaf moisture content through the varying absorption of short-wave infrared radiation. However, these approaches are limited, as they primarily capture information from the upper canopy, neglecting the lower, hidden forest layers. Microwave satellite systems provide an alternative, as microwaves are directly sensitive to water content and can penetrate deeper into vegetation layers; both active and passive microwave systems have therefore been applied to estimate leaf moisture content. High-resolution active microwave systems, such as Sentinel-1, can overcome the limitations of optical systems and the coarse spatial resolution of passive microwave systems by offering better canopy penetration, independence from weather conditions, and higher spatial resolution. Existing methods utilising these systems were mostly applied in areas with lower vegetation density, such as crops and grasslands; their application to forests is currently hampered by limited scientific understanding of how complex forest ecosystem structures interact with microwaves, and by the scarcity of field data with which to validate forest water content.
Here, we propose a novel solution for validating retrievals of forest water content by using active microwave systems in combination with corner reflectors. This approach mitigates significant uncertainties in estimating tree water content. Corner reflectors produce a strong signal reflection that overpowers other reflections, such as those from the soil, at the same location. Moreover, their reflection parameters are straightforward to calculate due to their predefined geometric design, reducing the number of variables required in theoretical models such as the Water Cloud Model. The technical feasibility of using microwave signals to penetrate vegetation canopies, interact with corner reflectors, and return information about vegetation status has already been demonstrated in studies involving microwave towers. However, the use of corner reflectors for satellite-based monitoring of forest water content has not yet been tested. In this contribution, we present initial results on the design and deployment of corner reflectors within forest canopies to improve estimation accuracy by correlating Sentinel-1 microwave signal responses with reflections from corner reflectors and with in situ measurements of topsoil and subsoil moisture, temperature, precipitation and dendrometry at a forest site in Czechia.
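The "straightforward to calculate" reflection parameters can be illustrated with the standard far-field approximation for the peak radar cross-section (RCS) of a triangular trihedral corner reflector, σ = 4πa⁴/(3λ²), where a is the inner leg length and λ the radar wavelength. The leg lengths below are illustrative; the wavelength corresponds to Sentinel-1's C-band centre frequency (~5.405 GHz).

```python
import math

def trihedral_peak_rcs(leg_m, wavelength_m):
    """Peak RCS (m^2) of a triangular trihedral corner reflector,
    using the standard far-field approximation sigma = 4*pi*a^4 / (3*lambda^2)."""
    return 4.0 * math.pi * leg_m ** 4 / (3.0 * wavelength_m ** 2)

# Sentinel-1 C-band: ~5.405 GHz, i.e. a wavelength of ~5.55 cm.
wavelength = 0.0555
for leg in (0.5, 1.0, 1.5):
    rcs = trihedral_peak_rcs(leg, wavelength)
    print(f"leg {leg:.1f} m: RCS = {rcs:9.1f} m^2 ({10 * math.log10(rcs):.1f} dBsm)")
```

The a⁴ dependence is why even a modest reflector dominates the surrounding clutter: doubling the leg length raises the peak RCS sixteen-fold.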

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Influence of Severe Drought Events on Pine Forest Condition in Poland in the Period 2002-2023

Authors: Kinga Kulesza, Oliwier Zając, Agata Hościło
Affiliations: Institute Of Environmental Protection - National Research Institute, Faculty of Geography and Regional Studies, University of Warsaw
Forest ecosystem stress caused by climate change, in particular prolonged and severe droughts, has already manifested in several parts of Europe. It is likely that droughts and heatwaves will occur more often, which might result in the ecological transition of forests and loss of biodiversity. In this project we identify the most severe drought events in Poland in the period 2002-2023, and investigate the vulnerability of pine (Pinus sylvestris L.) forest vegetation to drought events over a total area of 13,780 km². To this end we use ERA5-Land reanalysis data (temperature, precipitation, evapotranspiration) and a vegetation productivity index – Net Primary Productivity (NPP) – derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) product MOD17A2HGF. To keep consistency in our analyses we use monthly mean values of meteorological elements and monthly sums of NPP. For the selected pixels, covered with at least 80% homogeneous pine forest, the spatiotemporal trends of NPP were computed and their statistical significance was assessed. The influence of the variability of meteorological elements on the vegetation index was assessed with linear regression and spatial correlation methods. The results showed a strong negative correlation between NPP and temperature for July and August (r = -0.76 and r = -0.84, respectively), and significant positive correlations for these variables in March and April (r = 0.95 and r = 0.90, respectively), i.e. months at the beginning of the growing season. For annual data, the strongest correlation was observed between NPP and precipitation (r = 0.32). The analysis of monthly anomalies of temperature and climatic water balance (the difference between precipitation and evapotranspiration) led to the identification of the most severe drought events in June 2019, July 2006, April 2009 and August 2015.
The lowest NPP value in pine forest (-0.056 kgC·m⁻²·month⁻¹) occurred in August 2015, corresponding to the strong drought event. The dry conditions of July 2006 were also evident in pine forest NPP, with a value of -0.043 kgC·m⁻²·month⁻¹. This research was funded by the National Science Centre in Poland, under the OPUS21 project entitled "Assessment of the impact of weather conditions on forest health status and forest disturbances at regional and national scale based on the integration of ground and space-based remote sensing datasets" (grant number 2021/41/B/ST10/04113).
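The climatic water balance used above to flag drought months is simply precipitation minus evapotranspiration, and drought events appear as strongly negative monthly anomalies. A minimal sketch with invented values (not the study's data):

```python
import statistics

def monthly_anomalies(values):
    """Deviation of each monthly value from the series mean."""
    mean = statistics.fmean(values)
    return [v - mean for v in values]

# Hypothetical July climatic water balance (P - ET, in mm) over five years.
precip = [80.0, 45.0, 95.0, 30.0, 70.0]
evap   = [70.0, 75.0, 65.0, 85.0, 68.0]
cwb = [p - e for p, e in zip(precip, evap)]

anoms = monthly_anomalies(cwb)
driest = min(range(len(anoms)), key=lambda i: anoms[i])
print("CWB (mm):", cwb)
print("anomalies:", [round(a, 1) for a in anoms])
print("most negative anomaly in year index", driest)
```

In the study the same anomaly logic is applied jointly with temperature anomalies to rank drought severity across months and years.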

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Modelling Forest Floor Temperature From Terrestrial Laser Scanning and Potential Links to Satellite Remote Sensing

Authors: Andreas Hanzl, Karun Dayal, Katja Kowalski, Kim Calders, Prof. Dr. Cornelius Senf
Affiliations: Technical University of Munich, School of Life Sciences, Earth Observation for Ecosystem Management, Ghent University, Department of Environment, Q-ForestLab
Forest microclimates contribute to the environmental conditions experienced by forest-dwelling species on the forest floor. They can differ from open-land climates due to the temperature buffering effect of forest canopies. Understanding forest microclimates is essential for unraveling the impacts of climate change on forest-dwelling species, especially under increasing risk of climate extremes such as heatwaves. Measuring forest microclimates and their surrounding vegetation structure with traditional methods such as temperature loggers in situ is, however, labor-intensive and costly. There is thus increasing interest in using remote sensing, in particular laser scanning and satellite data, for modeling forest microclimates. A better understanding of forest microclimates through laser scanning might also open up opportunities for improved monitoring of forest microclimates from satellite systems, such as Sentinel-2. To test if and how forest microclimates can be mapped using remote sensing, we surveyed eight 1-ha forest sites in the national parks of Bosland (Belgium) and Berchtesgaden (Germany) using terrestrial laser scanning (TLS) to create detailed 3D representations of each site. Additionally, we measured near-surface temperatures and soil humidity using 16 microclimate sensors per site. Linking the 3D representations of the forest sites to the microclimatic data allows us to calibrate models for mapping forest microclimates – here defined as the absolute difference between forest floor temperature and open-land temperature – at very fine resolution (<1m). First results reveal the important role of vegetation density (measured in terms of plant area index) and vegetation closure (measured in terms of gap fraction) in modeling forest floor temperatures. The effect of vegetation density was more pronounced in dense stands, whereas canopy closure was more important in open stands. 
Preliminary models achieved an R² > 0.5, showing the potential of TLS for mapping forest floor temperatures. We plan to improve model accuracy by incorporating additional predictors derived from the point cloud characterizing the structural variability of the surrounding forest stands. We will explore how predictions change over variable spatial resolutions, expecting that both variability in forest structure and variability in forest floor temperatures decrease with coarser resolution. This analysis will provide insights into the potential of coarser air- and spaceborne systems, such as airborne laser scanning or Sentinel-2, for understanding links between vegetation structure and forest microclimates. Our findings will help to better characterize and map forest microclimates across large scales and in different forest types, thus improving our understanding of the long-term effects of climate change on forest-dwelling species.
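The target variable described above, the offset between forest-floor and open-land temperature, and its link to canopy structure can be caricatured as follows; the linear form and all coefficients are hypothetical, not the calibrated model:

```python
# Sketch of the microclimate target variable: the offset between
# forest-floor and open-land temperature, plus a toy linear predictor
# from canopy structure. Coefficients and values are entirely invented.

def temperature_offset(floor_temp_c, open_temp_c):
    """Microclimate offset: negative values mean the canopy buffers heat."""
    return floor_temp_c - open_temp_c

def predict_offset(plant_area_index, gap_fraction,
                   b0=1.0, b_pai=-0.8, b_gap=2.5):
    """Toy linear model: denser canopies (higher PAI, lower gap fraction)
    yield stronger, i.e. more negative, buffering."""
    return b0 + b_pai * plant_area_index + b_gap * gap_fraction

print(temperature_offset(24.3, 28.1))  # hypothetical sensor pair
print(predict_offset(plant_area_index=4.0, gap_fraction=0.1))
```

The real models relate TLS-derived plant area index and gap fraction to logger-measured offsets at sub-metre resolution; the sketch only fixes the sign conventions.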

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Comparison of Contemporaneous Sentinel-2 and EnMAP Data for Vegetation Index-Based Estimation of Leaf Area Index and Canopy Closure of a Boreal Forest

Authors: Jussi Juola, Aarne Hovi, Miina Rautiainen
Affiliations: Aalto University
Data from new hyperspectral satellite missions, such as EnMAP, and forthcoming missions like CHIME, are anticipated to refine the monitoring of leaf area index (LAI) and canopy closure (CC) in conifer-dominated forest areas. Both LAI and CC are critical metrics for characterizing forest canopies, as they play a pivotal role in the biological and physical processes of forest ecosystems, influencing everything from radiation to microclimate dynamics. In this study, we compared contemporaneous multispectral and hyperspectral satellite images from Sentinel-2 MSI and EnMAP, taken just 13 minutes apart, to assess the added value of hyperspectral data in estimating LAI, effective LAI (LAIeff), and CC in a European boreal forest area. The estimations were performed using univariate and multivariate generalized additive models (GAMs) with up to three predictor variables. The models utilized field measurements of LAI and CC from 38 forest plots in Finland, as well as an extensive set of vegetation indices derived from the two satellite images. The best univariate models for each of the three response variables showed small differences between the two sensors; however, EnMAP had more well-performing vegetation indices, which was reflected in better multivariate model performance. The best-performing multivariate models using EnMAP's hyperspectral data had relative RMSEs approximately 1–6% lower than those of Sentinel-2 MSI. Wavelengths near the green, red-edge, and shortwave infrared regions were frequently utilized in the estimation of LAI, LAIeff, and CC with EnMAP's hyperspectral data. These wavelength regions can be associated with moisture and pigment absorption features characteristic of vegetation, and were detectable thanks to the finer spectral resolution.
Because EnMAP’s high spectral resolution resulted in more accurate LAI estimations, the results from this study suggest that EnMAP may be more useful than multispectral satellite sensors, such as Sentinel-2 MSI, in monitoring biophysical variables of coniferous-dominated forests.
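Relative RMSE, the accuracy metric used above, is RMSE expressed as a percentage of the mean observed value; a minimal sketch with invented LAI values:

```python
import math

def relative_rmse(observed, predicted):
    """RMSE expressed as a percentage of the mean observed value."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)

# Hypothetical plot-level LAI: field measurements vs. model estimates.
lai_obs = [2.1, 3.4, 1.8, 4.0, 2.9]
lai_pred = [2.4, 3.1, 1.6, 4.3, 2.7]
print(f"relative RMSE: {relative_rmse(lai_obs, lai_pred):.1f}%")
```

Normalising by the observed mean is what makes the 1–6% differences between sensors comparable across LAI, LAIeff and CC, which have different natural scales.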

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Monitoring Species-Specific Tree Dieback across Central European Temperate Forests using Sentinel-2 Time Series

Authors: Jonas Alsleben, Dr. Dirk Pflugmacher, Rasmus Rehwaldt, Dr. Akpona Okujeni, Patrick Hostert
Affiliations: Earth Observation Lab, Humboldt-Universität zu Berlin, Section 1.4 Remote Sensing and Geoinformatics, Helmholtz GFZ German Research Centre of Geosciences, Integrative Research Institute of Transformations of Human-Environment Systems (IRI THESys)
Temperate forests in Europe are increasingly affected by prolonged droughts linked to an elevated likelihood of forest fires and insect infestations. Understanding the dynamics of species-specific tree dieback across spatial and temporal scales is crucial for managing carbon sequestration, conserving biodiversity, and supporting sustainable forest management. Field monitoring networks, while valuable, provide limited snapshots, often collected only on coarse grids, late in the growing season, or aggregated over several years. Satellite-based remote sensing can fill the gaps in spatial and temporal coverage left by field assessments. However, existing large-area disturbance maps only distinguish between partial and stand-replacing changes in live tree cover and therefore include a multitude of disturbance types that do not necessarily correspond to dieback or decreasing tree vitality. Sentinel-2 offers the dense time series needed to account for phenology, land use and ephemeral defoliation, but the subtle, fine-scale nature of dieback remains a challenge at 10-20 m pixel size. Spectral unmixing can address this challenge by providing continuous estimates of dead vegetation fractions, offering ecologically interpretable indicators while reducing the need for extensive training data collection. Previous studies analyzed fractions of non-photosynthetically active vegetation (NPV), photosynthetically active vegetation (PV), and bare soil in different ecosystems. However, in forests, most approaches mapped NPV fractions either for small and confined areas or used NPV fractions as input to change detection approaches without estimating and validating dieback.
The objectives of this work were to (i) develop and test an unmixing approach to map intra-annual NPV time series across large areas and different forest types, (ii) aggregate the NPV time series to map annual dieback fractions for the whole of Germany between 2019-2022, and (iii) assess species-specific dieback rates at the national scale. In this context, we present a novel framework for analyzing species-specific (spruce, pine, beech, oak) tree dieback at the sub-pixel level, combining regression-based unmixing of Sentinel-2 time series with triangular feature space concepts, tree species maps, and very high-resolution imagery. We created a comprehensive spectral library to generate synthetic data for training machine learning regression models using an ensemble approach, and mapped NPV, PV and soil fractions for every cloud- and cloud-shadow-free pixel in Sentinel-2 imagery between 2017 and 2023. We defined dieback as a non-ephemeral annual increment in NPV, because crown defoliation, crown dieback and tree mortality all result in an increase of NPV within forest canopies. To account for disturbance types, non-ephemeral defoliation and phenology, we created a rule set for annual aggregation of the NPV time series. Based on the annual dieback maps, we estimated annual species-specific dieback areas and rates for Germany. The unmixing models for mapping NPV fractions generalized well across forest types (coniferous and deciduous) and regions, achieving mean absolute errors (MAEs) below 14.5%. The fractional cover time series reliably captured gradual processes like ephemeral defoliation, seasonal leaf shedding and dieback, as well as abrupt disturbances such as downed woody debris from windthrow and harvest. Annual aggregation of NPV predictions yielded estimates that agree well with mortality rates from the German Forest Crown Condition Survey (Pearson's R = 0.8, MAE = 0.6%) and revealed national-scale dieback patterns for the period 2019-2022.
The maps confirm species-specific patterns, with the highest dieback rates and areas for spruce (1.9-3.6%, 308 kha), followed by pine (0.6-2%, 116 kha), beech (0.3-0.6%, 45 kha) and oak (0.2-1%, 37 kha). Our maps effectively captured low-severity and small-scale dieback, suggesting omission of such dieback in categorical forest change products. We demonstrate how NPV fractional cover time series can be reliably derived over large scales and varied conditions, and how temperate forest dieback can be mapped solely from Sentinel-2 data. As the increase in disturbances in Europe is largely driven by low-severity disturbances, capturing small-scale dieback is of great value and emphasizes the advantages of our framework for forest monitoring. Our approach paves the way for a more nuanced description of disturbance impacts at the sub-pixel level using Copernicus Sentinel-2 time series. Wall-to-wall maps of species-specific dieback will enable future studies to better (i) disentangle the contribution of different dieback agents, (ii) identify distinct dieback regimes, and (iii) quantify species-specific dieback thresholds.
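The linear mixing model underlying the NPV/PV/soil fractions can be sketched as follows. The endmember spectra are invented and a coarse grid search stands in for the regression-based unmixing actually used; the sketch only illustrates the non-negative, sum-to-one fraction idea.

```python
# Sketch of the linear mixing model behind NPV/PV/soil fraction mapping:
# a pixel spectrum is modelled as a fraction-weighted sum of endmember
# spectra, with fractions non-negative and summing to one.

ENDMEMBERS = {           # reflectance in three invented bands
    "PV":   [0.05, 0.45, 0.30],
    "NPV":  [0.25, 0.30, 0.45],
    "soil": [0.30, 0.35, 0.40],
}

def mix(fractions):
    """Forward model: fraction-weighted sum of endmember spectra."""
    return [sum(fractions[n] * ENDMEMBERS[n][b] for n in ENDMEMBERS)
            for b in range(3)]

def unmix(pixel, step=0.01):
    """Brute-force the fractions minimising squared spectral error."""
    best, best_err = None, float("inf")
    n_steps = round(1 / step)
    for i in range(n_steps + 1):
        for j in range(n_steps + 1 - i):
            f = {"PV": i * step, "NPV": j * step,
                 "soil": 1.0 - (i + j) * step}
            err = sum((m - p) ** 2 for m, p in zip(mix(f), pixel))
            if err < best_err:
                best, best_err = f, err
    return best

# Synthesise a mixed pixel, then recover its fractions.
pixel = mix({"PV": 0.6, "NPV": 0.3, "soil": 0.1})
print(unmix(pixel))
```

In the study itself, regression models trained on a synthetic spectral library replace this exhaustive search, but the recovered quantity is the same: per-pixel NPV, PV and soil cover fractions.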

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Temporal Dynamics in Forest Structure: UAV-Based Monitoring of Tree Crowns, Canopy Gaps, and Deadwood in Hainich National Park

Authors: Steffen Dietenberger, Marlin M. Mueller, Markus Adam, Felix Bachmann, Hanna Arlaud, Jonathan Frank, Maximilian Nestler, Jonas Ziemer, Laura von Haacke, Boris Stöcker, Sören Hese, Clémence Dubois, Christian Thiel
Affiliations: German Aerospace Center (DLR) - Institute of Data Science, Friedrich Schiller University Jena, Institute of Geography, Department of Earth Observation, University of Münster, Institute of Landscape Ecology
Forest ecosystems are increasingly affected by climate change-related disturbances such as storms and drought, which alter their structure and dynamics. This study presents a multi-temporal analysis of forest structural changes in a dense deciduous forest within Hainich National Park, Germany. We utilized UAV-based imagery acquired during summer (leaf-on) and winter (leaf-off) campaigns from 2018 to 2024 to investigate forest parameters at the individual tree level, focusing on canopy gap detection, individual tree crown delineation (ITCD), and the classification of downed deadwood (coarse woody debris, CWD). Our methodology employed high-resolution UAV imagery (leaf-on and leaf-off) in conjunction with structure-from-motion (SfM) techniques to generate 3D point clouds, canopy height models (CHM) and orthomosaics. The UAV was equipped with an RGB camera and a real-time kinematic (RTK) receiver to ensure accurate georeferencing, crucial for multi-temporal analysis. To enhance ground point coverage, nadir and oblique images were acquired and combined during processing. A canopy-free ground orthomosaic was derived from the terrain-normalized leaf-off point cloud, offering an unobstructed view of the forest floor for CWD detection. Canopy gaps (>50 m²) were identified and analyzed for their spatial and temporal dynamics, with a focus on gap formation and expansion driven by storm events and other disturbances. Gaps were mapped annually using a rule-based algorithm applied to the CHM. Over the seven-year period, the mean gap size increased steadily by approximately 2,000 m² per year. Individual tree crowns were delineated using a Mask R-CNN architecture with transfer learning. The pre-trained deep learning model was fine-tuned using manually delineated crowns from the RGB orthomosaic of 2019 as training data. The trained model was then applied to segment crowns across all time steps. 
The model achieved a mean intersection over union (IoU) score of 0.66, with 80% of crowns exceeding an IoU of 0.5, commonly considered a threshold for correct segmentation. Finally, CWD was classified using a U-Net model, achieving F1-scores up to 96% for the 2019 dataset. To track temporal shifts in downed woody debris density and distribution, deadwood objects were also manually detected in the ground orthomosaics across all years. The average number of new objects was 44 per year, while 9 objects per year were lost due to decomposition. The integration of temporal datasets provides valuable insights into disturbance-recovery cycles and their impact on forest structure. This study contributes to the development of robust UAV-based methodologies for long-term monitoring of forest ecosystems and highlights the importance of high-frequency, high-resolution data acquisition to capture the impacts of disturbances at the individual tree level. These findings support adaptive forest management and conservation strategies in the face of increasing climate variability. Furthermore, this study demonstrates the potential of UAV remote sensing to complement terrestrial methods, providing effective approaches for assessing forest dynamics and improving our understanding of ecosystem resilience.
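Intersection over union, the segmentation score used above, compares predicted and reference crown masks as pixel sets; a minimal sketch with toy masks:

```python
def iou(mask_a, mask_b):
    """Intersection over union of two pixel-coordinate sets."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 0.0

# Toy 'crowns' as sets of (row, col) pixels: two offset 4x4 squares.
predicted = {(r, c) for r in range(0, 4) for c in range(0, 4)}
reference = {(r, c) for r in range(1, 5) for c in range(1, 5)}

score = iou(predicted, reference)
print(f"IoU = {score:.2f}")  # 9 shared pixels over 23 in the union
print("correct segmentation" if score > 0.5 else "below 0.5 threshold")
```

The 0.5 threshold in the sketch mirrors the convention cited in the abstract for counting a crown as correctly segmented.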

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Approaches for Integrating Forest Inventory and Earth Observation Data for Climate Change and Biodiversity Assessments

Authors: Dr. Daniela Requena Suarez, Dr. Alexandra Runge, Dr. Natalia Málaga, Dr. Katja Berger, Dr. Markus Immitzer, Prof. Martin Herold
Affiliations: Helmholtz Center Potsdam GFZ German Research Centre for Geosciences, Remote Sensing and Geoinformatics Section, University of Natural Resources and Life Sciences (BOKU), Department of Landscape, Spatial and Infrastructure Sciences
Over the years, forest inventories (FI) have evolved from their original timber production purpose to a more comprehensive assessment of forest resources, including additional variables of interest related to forest biomass, tree species diversity, and other parameters. In recent years, National Forest Inventories (NFIs) and cross-national large-scale ground forest surveys have increasingly been used within research to tackle questions linked to climate change and biodiversity conservation. Within the Earth Observation (EO) domain, FI data has proven to be a key source of information for multiple purposes, ranging from calibration and validation of EO-derived models to support spatial and temporal predictions. Examples include enhancing the spatial distribution of forest biomass, assessing forest structure and productivity, and increasing our understanding of the role of forests within climate change and biodiversity. Research integrating FIs with EO information is currently being carried out extensively, ranging from examples in temperate and tropical forests in provinces and countries to cross-country and continental assessments. In this respect, FIs have become highly valuable for research purposes, and thus the approaches to integrate FI data with EO have proliferated and evolved. Insights from research that integrate FI and EO data are critical in the context of recent land-related policies of the European Union (EU), such as the EU Regulation on Land, Land Use Change and Forestry (EU LULUCF), the proposed EU Forest Monitoring Law or the Biodiversity Strategy for 2030. Following this, discussions on open or limited access to FIs, especially NFIs, have emerged and are currently carried out within research and policy domains. Despite significant research over the past decades integrating FI and EO data, an overview is still needed that comprehensively summarizes the diverse purposes, approaches, and methods employed for data integration. 
This effort is particularly timely, as understanding the opportunities and challenges faced by data users and producers is essential to advancing the state of the art in FI-EO integration. To address this gap, our systematic review aims to provide a comprehensive overview of the current application areas of FI/EO integrated studies for research on climate change and biodiversity, to examine the methods applied, and how these vary temporally and regionally. This work seeks to promote collaboration between FI/EO data producers, inform future data campaigns on the key considerations in data integration, and enhance the utility of integrated data for all stakeholders. Specifically, our systematic review addresses the following key questions: 1. What are the main objectives of the studies that integrate FI data with EO data, and how do they vary over time and across regions? 2. What methods have studies used to integrate national or cross-national forest inventory data with EO data, and how have these approaches evolved over time and across regions? 3. Do studies involving FI/EO integration comply with FAIR (Findable, Accessible, Interoperable, and Reusable) principles, and how has compliance varied over time? By uncovering regional and temporal variations in forest research objectives and methodologies, our review will provide valuable insights into the practical applications of FI/EO integration. Through examining data accessibility and methodological approaches, we also aim to inform future research directions and policy decisions. Furthermore, we will explore the additionality of studies that integrate FI/EO data for recent land-related EU policy frameworks.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Prediction of Canopy Cover Loss in German Spruce Forests Using Spatio-Temporal Matrix Feature

Authors: Samip Narayan Shrestha, Dr. Andreas Dietz, Prof. Dr. Claudia
Affiliations: Deutsches Zentrum für Luft- und Raumfahrt (DLR)
In the last decade, German forests have deteriorated because of extreme events such as drought and windthrow, and the bark beetle infestations that occur in their aftermath, primarily in monoculture Norway spruce stands. It is essential for decision makers in forest management to have an educated prediction of where future loss may occur. In this regard, we have developed a model to predict future canopy cover loss in German spruce forests. Since past canopy cover loss is a key predictor of future loss, we adapt the Spatio-Temporal Matrix (STM) method, originally used with the World Settlement Footprint product to predict urban growth, to a canopy cover loss time series product produced by DLR from Landsat-8 and Sentinel-2 earth observation data. We configure a hybrid convolutional neural network – artificial neural network (CNN-ANN) model using the STM and its percentiles, along with climatic and topographic data, to produce the probability of canopy cover loss in German spruce forests in the next year. We use canopy cover loss going back two years to predict future cover loss up to a year in advance. Every pixel is predicted using the neighborhood spatio-temporal information of past canopy cover loss. We use locations in the Harz forest, Frankenwald and the Siebengebirge in Germany to represent different levels of forest structure, amounts of past forest loss, and management. In Frankenwald, the model shows good predictive capacity, with a validation area under the curve (AUC) of the receiver operating characteristic (ROC) of 0.81. Our results show that future canopy cover loss can be predicted with reasonable accuracy using readily available earth observation time series data supplemented by climatic and topographic data, without the need for site-specific data collection and expensive field campaigns.
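The core STM idea, describing each pixel by the spatio-temporal neighbourhood of past loss around it, can be sketched as follows; grid values, window size and layout are illustrative only:

```python
# Sketch of spatio-temporal feature extraction: for one target pixel,
# stack the (2k+1) x (2k+1) neighbourhood of past canopy-cover-loss maps
# from the preceding years. Grids and window size are illustrative only.

def stm_patch(loss_maps, row, col, k=1):
    """Return the neighbourhood of (row, col) for each past year,
    padding positions outside the grid with 0 (no recorded loss)."""
    patches = []
    for grid in loss_maps:
        rows, cols = len(grid), len(grid[0])
        patch = [[grid[r][c] if 0 <= r < rows and 0 <= c < cols else 0
                  for c in range(col - k, col + k + 1)]
                 for r in range(row - k, row + k + 1)]
        patches.append(patch)
    return patches

# Two years of binary loss maps (1 = canopy cover loss).
year_t2 = [[0, 0, 0, 0],
           [0, 1, 1, 0],
           [0, 0, 0, 0]]
year_t1 = [[0, 0, 0, 0],
           [0, 1, 1, 1],
           [0, 0, 1, 0]]

patch = stm_patch([year_t2, year_t1], row=1, col=2, k=1)
print(patch[0])  # neighbourhood two years back
print(patch[1])  # neighbourhood one year back: loss spreading toward the pixel
```

Such stacked patches, together with percentile summaries and climatic and topographic layers, would form the per-pixel input to a CNN-ANN classifier of next-year loss probability.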

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Integrating a trait-based dynamic vegetation model with earth observation data to simulate large-scale spatial and temporal patterns of plant traits

Authors: Mateus Dantas De Paula, Thomas Hickler
Affiliations: Senckenberg Society For Nature Research
Understanding global patterns of biodiversity and the impacts of climate change on these patterns remains a critical challenge in Earth system science. Shifts in plant functional diversity are key drivers influencing ecosystem processes such as the carbon cycle. Fundamental plant traits—such as those governing photosynthesis, carbon storage, and water/nutrient uptake—are directly linked to vegetation function. Comprehensive global maps of these traits are essential for studying ecosystem dynamics, identifying environmental threats, and informing conservation strategies. However, existing observations are sparse and often regionally biased. Current global trait maps rely heavily on empirical or statistical extrapolations based on limited observations, climate and soil data, and remote sensing, resulting in low explanatory power and ecological inconsistencies, particularly when extrapolating across diverse environments. The VESTA (Vegetation Spatialization of Traits Algorithm) project addresses these limitations by integrating a trait-based dynamic global vegetation model (DGVM) with Earth observation (EO) data to develop global maps of above- and below-ground plant traits for the present and future. Trait-based DGVMs are process-based, offering direct links between environmental factors, plant ecology, and emerging vegetation patterns. The model initialization will incorporate data from comprehensive global trait databases, while EO data will be used to optimize the model through calibration procedures. These procedures adjust trait-relationship curves to bring model outputs closer to satellite-derived measurements of vegetation structure and productivity. This optimization procedure is necessary, since global data on plant traits and their functional relationships are often incomplete, insufficient and not globally consistent, requiring in addition grouping according to plant functional types or geographical location in order to improve their explanatory power. 
Data exploration of the main plant trait databases confirms this, showing that plant trait relationships should be grouped according to functional types. This innovative approach mirrors the methodology of climate reanalysis by creating EO-constrained, trait-based DGVMs capable of delivering a multivariate, spatially complete, and ecologically coherent record of global vegetation traits. Outputs will include detailed trait distributions, enabling in-depth analyses of plant functional diversity at local and global scales. Metrics such as mean, variance, skewness, and kurtosis of trait distributions will be available for each location, alongside temporal datasets that provide insights into the current state of functional diversity and its dynamics over time. The spatial resolution of the trait dataset output reflects that of the climatic and edaphic input data, currently 0.5 degrees, with the aim of reaching 1 km² using new high-resolution climate datasets. The resulting dataset will represent a groundbreaking EO product, offering high-resolution, temporally explicit maps of leaf, wood, and root traits. These maps will highlight the capabilities of Sentinel missions, Earth Explorers, and ESA's long-term data archives to support biodiversity analysis and Earth system modeling. By enhancing our understanding of plant trait distributions, the VESTA project will contribute to addressing critical questions in global biodiversity and ecosystem response to climate change. Preliminary global simulations using global trait relationships show that although broad plant trait geographical patterns and distributions are as expected, there are still strong inconsistencies with published trait maps. These are expected to be reduced with grouped trait relationships and optimization procedures.
High-resolution simulations based on the CHELSA climate dataset have shown good results for a pilot area in southern Ecuador, demonstrating that leaf and wood traits have changed over the last century in this area. This pattern is expected to appear for other geographic locations as well, in response to past and future climate change.
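The per-location distribution metrics mentioned above (mean, variance, skewness, kurtosis) can be computed from central moments; a minimal sketch with an illustrative trait sample, not VESTA's actual implementation:

```python
import numpy as np

def trait_moments(x):
    """Mean, variance, skewness and excess kurtosis of a trait sample,
    computed from central moments (population formulas)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    d = x - mu
    var = (d ** 2).mean()
    skew = (d ** 3).mean() / var ** 1.5
    kurt = (d ** 4).mean() / var ** 2 - 3.0  # excess kurtosis: 0 for a Gaussian
    return mu, var, skew, kurt

# Illustrative example: specific leaf area values for one grid cell.
sla = np.array([12.1, 14.3, 13.8, 15.0, 13.2, 14.6])
mean, variance, skewness, kurtosis = trait_moments(sla)
```

Applied per grid cell and per time step, these four numbers summarize the simulated trait distribution for downstream functional diversity analyses.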

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: From Study Areas to a Nationwide Forest Damage Monitoring: Rollout in Germany in Progress

Authors: Katja Oehmichen, Jonathan Wolf, Tobias Schad, Martin Puhm, Andreas Wimmer, Janik Deutscher
Affiliations: Thuenen Institute of Forest Ecosystems, Joanneum Research - Remote Sensing and Geoinformation
Recent years have shown that our forests are increasingly suffering from disturbances caused by storms, extreme drought and the resulting heavy bark beetle infestations. In particular, 2017 and 2018 were marked by severe damage to forests. Those events led to the development of an innovative, remote sensing-based detection system for damaged areas at the Thuenen Institute of Forest Ecosystems. In the FNEWs project (1), an international consortium spanning technology, remote sensing, economics and applied forestry research developed and set up a monitoring system along with its technical infrastructure. The methodology and process chain of the forest damage monitoring were developed within four study areas, covering approximately 2,407,000 ha (22%) of Germany's forests. The monitoring system includes data preprocessing and time series analysis of Sentinel-2 satellite imagery. Furthermore, it produces damage maps and statistics of damaged forest areas and damaged timber volumes for the study areas. The annual product identifies damaged areas with a minimum mapping unit of 0.1 ha at a spatial resolution of 10 x 10 m². This disturbance map achieved an overall accuracy of ≥ 95% (2). This largely automated system enables the detection and quantification of forest areas affected by biotic and abiotic damage events using satellite imagery. The annual FNEWs product and additional project results are accessible at the Thuenen Institute's geoportal (3). Owing to the success of the initial project and an urgent need for information on forest damage, continuous nationwide forest damage monitoring is indispensable. Early in 2024, the implementation phase commenced at the Thuenen Institute of Forest Ecosystems, aiming to scale the successfully tested damage monitoring system to nationwide coverage. The transition towards operational usage brings various challenges, which will be highlighted here.
The nationwide rollout, which began at the level of study areas, requires adjustments to the procedure itself as well as to its implementation in the software. We will give an insight into the current status of the rollout process and present first results. (1) www.fnews-wald.de (2) Reinosch E, Backa J, Adler P, Deutscher J, Eisnecker P, Hoffmann K, Langner N, Puhm M, Rüetschi M, Straub C, Waser LT, Wiesehahn J, Oehmichen K (2024), Detailed validation of large-scale Sentinel-2-based forest disturbance maps across Germany, Forestry: An International Journal of Forest Research, 2024; cpae038, https://doi.org/10.1093/forestry/cpae038 (3) https://atlas.thuenen.de/atlanten/waldatlas

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Exploring multi-decadal forest recovery dynamics across spatial and temporal scales

Authors: Lisa Mandl, Ana Stritih, Rupert Seidl, Prof. Dr. Cornelius Senf
Affiliations: Technical University of Munich, Earth Observation for Ecosystem Management, Berchtesgaden National Park, Technical University of Munich, Ecosystem Dynamics and Forest Management
Earth observation offers unparalleled insights into ecosystem processes, such as forest recovery following disturbances. However, challenges remain in capturing the fine-scale heterogeneity of recovery dynamics in mountainous regions such as the Alps. Forest recovery is shaped by interactions across spatial gradients, including altitude, disturbance severity, and forest composition, as well as temporal trends driven by changing disturbance regimes and shifting climatic baselines. Understanding these spatial and temporal recovery dynamics is essential for effective management and conservation in a rapidly changing environment. Yet, large-scale assessments that comprehensively integrate both spatial and temporal dimensions remain scarce. To address this gap, we focused on the following objectives: 1. How does the spatial variability of recovery dynamics in the Alps relate to differences in altitude, forest types, and disturbance severity? 2. What drivers explain observed temporal trends in recovery, and how do they interact with potential shifts in disturbance regimes and environmental baseline conditions? To address the challenges of existing Earth observation-based recovery assessments, we harmonized multi-decadal Sentinel-2, Landsat, and climate data into a pan-Alpine data cube spanning 1986–2023. Fractional tree cover was estimated using synthetic spectral unmixing to derive ecologically informed recovery metrics that capture the spatial and temporal heterogeneity of post-disturbance landscapes. Recovery was analyzed along key spatial and ecological gradients, such as elevation, forest type and disturbance severity, while spaceborne LiDAR data (GEDI) were integrated to benchmark tree cover-based recovery against canopy height metrics. Temporal trends in recovery were investigated by linking recovery metrics to potential drivers, including changes in disturbance regimes, pre-disturbance tree cover, and climatic factors.
First results reveal substantial spatial variability in recovery dynamics across the Alps. Recovery success, meaning that disturbed areas reach pre-disturbance tree cover, is highest in the north-eastern Alps, where 67% of disturbances recover within 10 years post-disturbance, compared to only 48% in the south-western Alps. Increasing disturbance severity significantly hinders recovery, and a larger share of bare ground following a disturbance is associated with lower recovery success. Over time, a positive trend in recovery success has been observed, particularly for low-severity disturbances. The underlying drivers of these temporal trends are currently being explored in greater detail. Overall, our study provides the first large-scale and temporally resolved assessment of forest recovery from Earth observation for a complex ecosystem such as the Alps. We identify several challenges in using and interpreting Earth observation-based recovery indicators, but also highlight the opportunities arising from Earth observation for understanding ecosystem dynamics under climate change.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: The GreenEO Project: Satellite-Based Services to Support Sustainable Land Use Practices Under the European Green Deal

Authors: Leonor Tarrasson, Johannes Kaiser, Paul Hamer, Susana Lopez-Aparicio, Philipp Schneider, Tuan-vu Cao, Isadora Jimenez, Thais Fontenelle, Ronald van der A, Bas Mijling, Jieying Ding, Nicholas Hutchings, Camilla Geels, Lise Mari Frohn, David Simpson, Bruce Rolstad Denby, Isabel Trigo, Jean-Christophe Calvet, Joanne Schante, Thomas Judes, Eddo Joachim da Silva Rosa, Suzi Maurice, Jean-Luc Dupuy, Jean-Pierre Wigneron, Raia-Silva Massad, Dimitrios Stavrakoudis, Ioannis Gitas, Thomas Scheuschner
Affiliations: Klima- og miljøinstituttet NILU, Lobelia Earth, Royal Netherlands Meteorological Institute (KNMI), University of Aarhus, Met Norway, Portuguese Institute for Sea and Atmosphere (IPMA), Meteo France, La Grange Innovations (LGI), National Research Institute for Agriculture, Food and Environment (INRAE), Aristotle University of Thessaloniki, Umweltbundesamt (UBA)
GreenEO is a new Horizon Europe project with the ultimate ambition of supporting governance approaches for the implementation of the EU's Zero Pollution and Biodiversity ambitions. The implementation of the European Green Deal will rely on accessible, actionable environmental data for policymaking and monitoring. Success will depend on the use of the latest observational data as well as on the level of uptake of these data by end-users. GreenEO addresses this by using observations from the newest meteorological satellites, and through the co-creation with users of high-resolution services and novel indicators that directly meet user needs. It thus focuses on the development of end-to-end service chains that utilise Earth observations, in particular from the Sentinel, MTG and Metop-SG satellites. Additionally, GreenEO evaluates governance methodologies for its land use applications, including possible institutional barriers and solutions to overcome them. The project is funded from March 2025 to February 2029 under Horizon Europe. GreenEO's main objective is to provide improved satellite-based environmental information to support sustainable nature protection practices across four competing land-use areas, namely cities, agricultural areas, semi-natural and natural forests, and ecosystems. Through the combined exploitation of new Earth Observation (EO) products, digital infrastructures, and modelling capabilities, using advanced machine learning/artificial intelligence algorithms, inverse modelling techniques, and atmospheric and land surface models, GreenEO will provide new environmental information and stakeholder engagement practices to support the implementation of specific end points under the Zero Pollution and Biodiversity aspects of the European Green Deal. GreenEO will develop advanced satellite-based environmental information to support the implementation of Zero Pollution strategies in cities.
Current urban-scale air quality assessments based on satellite observations still face challenges due to the coarse spatial resolution of those observations, even in the case of TROPOMI. In an operational demonstration, GreenEO will produce high-resolution urban air quality maps using three complementary satellite-based approaches that combine data, modeling, and machine learning. Through stakeholder engagement, the project will enable diverse policy applications in pursuit of zero-pollution goals. GreenEO will derive new environmental information to evaluate the benefits for nature of the agricultural green transition. Specifically, the project seeks to evaluate the environmental impacts of agricultural transitions on nitrogen deposition, biodiversity, and nature restoration, supporting the Green Deal. Current data on nitrogen emissions, deposition, and biodiversity impacts are inconsistent and lack sufficient spatial resolution. Using advanced satellite data and modeling, GreenEO will estimate high-resolution emissions (NH3, NOx), nitrogen deposition, and critical load exceedances. Collaborating with stakeholders, it will link these to biodiversity indicators, such as plant species richness and butterfly indices, to create a nitrogen sensitivity index. This will identify high-recovery areas and support sustainable agricultural practices. GreenEO will also derive new environmental information to improve services for preparedness and response to forest fires. The project aims to enhance forest fire danger analysis with high-resolution tools integrating meteorological, biomass, and fire data. Current systems rely on empirical indices like the FWI, ignoring fuel conditions and fire metrics such as intensity. Using advanced EO data (e.g., MTG-FCI, Sentinels) and the FIRELIHOOD framework, GreenEO will enable short-term forecasting and historical analysis of fire activity, including occurrence, burnt area, and Fire Radiative Power (FRP).
This will support better fire management decisions. Pilot services in France and Greece will demonstrate improved preparedness and response. GreenEO will also deliver novel ecosystem and biodiversity monitoring services. The project will utilize satellite data for advanced ecosystem and biodiversity monitoring, focusing on vegetation (e.g., LAI), soil moisture, and biomass. Combining Sentinel and GEDI data with machine learning, it will create novel indicators—city greening, ecosystem area, and greening—in order to assess and track ecosystem health. These indicators will align with sustainability standards and be visualized on the GreenEO platform, supporting ecosystem preservation and restoration through defined use cases. We did not find a broad enough session for this contribution, so we invite the conveners to move it into whatever session they think fits best.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Large-Scale Sentinel-2 Tree Species Mapping: Independent Probability Sample Validation and Mixed-Species Classifications

Authors: Tobias Schadauer, Susanne Karel, Ursula Knieling, Kevin Kopecky, Christoph Bauerhansl, Ambros Berger, Stephan Gräber, Benjamin Schumacher, Univ.-Prof. Lukas Winiwarter
Affiliations: Austrian Research Centre for Forests (BFW), Department of Forest Inventory, University of Innsbruck, Department of Basic Sciences in Engineering Sciences
Sustainable forest monitoring and management require accurate, large-scale tree species mapping capabilities. While Sentinel-2 imagery has proven valuable for this task, challenges remain in accurately representing mixed-species stands at the pixel level, and independent probability sample-based validation approaches are remarkably scarce. Our work advances forest monitoring capabilities by developing a modeling approach that includes pure and mixed-species classes and evaluates the results based on a probability sample provided by the National Forest Inventory. Our methodology combines dense Sentinel-2 time series, which offers consistent data across multiple granules, with terrain and vegetation height data to classify tree species at the Sentinel-2 pixel level across Austria's entire forested area. To better represent actual forest conditions, we include mixed tree species and sparsely populated classes alongside pure tree species classes in our classification scheme. The training data was derived from visual aerial photography interpretation by creating and labeling polygons, resulting in 568,018 labeled pixels for the 26 classes. To improve label quality, the training polygons were required to predominantly consist of one species for pure classes and two species for mixed classes, respectively. Additionally, for mixed and sparse classes, the use of synthetic training data proved to be beneficial. We applied models based on a convolutional residual network specifically tailored for time series classification, and a multilayer perceptron. Numerous studies have identified that spatial autocorrelation significantly impacts the validation of thematic maps. A key contribution of this work is a systematic comparison of validation approaches and the effects of autocorrelation on them. 
We investigated both spatial split validation—where the labelled data is divided spatially—and a validation based on buffered ground reference probability samples provided by the National Forest Inventory, testing various split and buffer distances. A random training data holdout set yielded 99% overall accuracy, while spatial split validation resulted in 74% overall accuracy. This emphasizes the importance of accounting for spatial autocorrelation when validating with holdout sets derived from polygon-based training data. The validation based on NFI data resulted in 55% overall accuracy, 91% post hoc pure class accuracy, and 79% accuracy when confusions among phenologically similar classes were disregarded (e.g., spruce-larch confused with spruce-beech). These results indicate that our models work well on pure species classes; the majority of confusions occur among phenologically similar classes, and the complexity significantly increases when mixed-species classes are considered. Furthermore, the significant decline in accuracy between spatial split and NFI validation highlights the disparity between polygon-based training data and ground reference forest data. This reflects the inherent challenge of using homogeneous training polygons to represent diverse forest areas. These methodological advances in tree species mapping and validation contribute to more reliable forest monitoring systems, supporting both sustainable forest management and accurate carbon accounting—key requirements for understanding and protecting forest ecosystems in a changing climate.
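The gap between the 99% random-holdout accuracy and the 74% spatial-split accuracy reported above comes from spatial autocorrelation: pixels from the same training polygon leak across a random split. A generic block-based split illustrates the principle (this is a sketch, not the authors' exact buffering scheme; coordinates, block size and IDs are illustrative):

```python
import numpy as np

def spatial_block_split(coords, block_size, holdout_blocks):
    """Hold out whole spatial blocks instead of random pixels, so that
    spatially autocorrelated samples (e.g. pixels from one polygon)
    cannot end up on both sides of the split."""
    ij = np.floor(np.asarray(coords, dtype=float) / block_size).astype(int)
    block_id = ij[:, 0] * 10_000 + ij[:, 1]  # assumes < 10,000 blocks per axis
    test = np.isin(block_id, holdout_blocks)
    return ~test, test  # boolean masks: (train, test)

# Two pixels 2 m apart fall in the same 500 m block and therefore stay
# on the same side of the split; the third pixel sits in a held-out block.
coords = np.array([[10.0, 10.0], [12.0, 11.0], [510.0, 10.0]])
train_mask, test_mask = spatial_block_split(coords, block_size=500.0,
                                            holdout_blocks=[10_000])
```

Accuracy measured on the held-out blocks (or, stricter still, on buffered independent plots such as NFI samples) is a far more honest estimate of map accuracy than a random pixel holdout.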

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Combining Ascending and Descending mode for Enhancing burned area mapping with Normalized Radar Burn Ratio

Authors: Yonatan Tarazona, Vasco Mantas
Affiliations: PhD student at the University of Coimbra, Department of Earth Sciences, Centre for Earth and Space Research-CITEUC, Portugal
This study evaluates the effectiveness of the Normalized Radar Burn Ratio (NRBR) for mapping burned areas using Sentinel-1 Synthetic Aperture Radar (SAR) data, specifically comparing ascending, descending, and combined acquisition modes. The research was conducted in the Castelo Branco, Coimbra, and Guarda districts of Portugal, which were significantly affected by devastating wildfires in 2017. SAR data from the Sentinel-1 satellites are particularly advantageous, as their capabilities are unaffected by clouds or smoke, unlike optical satellite imagery. The NRBR index integrates Sentinel-1 SAR backscatter intensity data from VV (vertical transmit and receive) and VH (vertical transmit, horizontal receive) polarizations to detect changes in vegetation structure induced by wildfires. The main aim was to assess whether combining ascending and descending orbits improves the accuracy and reliability of burned area detection compared to using each orbit independently. We analyzed Sentinel-1 data from ascending and descending orbits collected over two 50x50 km study sites in central Portugal, regions heavily impacted by the 2017 wildfires. Data were pre-processed using Google Earth Engine (GEE), including thermal and border noise removal, radiometric calibration, and terrain correction. These SAR data, spanning pre-fire and post-fire periods, were analyzed using a deep learning model (U-Net) trained to identify burned areas. Findings demonstrate that combining ascending and descending orbits (termed NRBRcmb) significantly improves burned area mapping accuracy compared to single-orbit approaches (ascending or descending alone). The combined mode achieved superior performance metrics: overall accuracy (0.921), recall (0.858), Intersection over Union (IoU) (0.852), and significantly lower omission errors (14.2%). Single-orbit methods, in contrast, resulted in lower accuracy, higher omission rates, and reduced effectiveness.
The enhanced accuracy of NRBRcmb highlights its potential operational use for monitoring wildfires, reducing errors associated with terrain and SAR signal variations in single-orbit data. Consequently, NRBRcmb emerges as a robust and valuable tool for environmental monitoring and disaster response, especially in regions frequently affected by wildfires.
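Radar burn indices of this family are typically built as a normalized difference of pre- and post-fire backscatter. The sketch below assumes that formulation with VH backscatter in linear power units; the exact index definition, polarization combination and threshold used by the authors may differ:

```python
import numpy as np

def normalized_radar_burn_ratio(sigma_pre, sigma_post):
    """Normalized difference of pre- and post-fire SAR backscatter
    (linear power units). A strong backscatter drop after the fire
    suggests vegetation loss. Illustrative formulation only."""
    sigma_pre = np.asarray(sigma_pre, dtype=float)
    sigma_post = np.asarray(sigma_post, dtype=float)
    return (sigma_post - sigma_pre) / (sigma_post + sigma_pre)

# Toy 2x2 VH scene: the top-left pixel loses most of its vegetation
# scattering after the fire; the others are essentially unchanged.
vh_pre = np.array([[0.08, 0.05], [0.06, 0.05]])
vh_post = np.array([[0.02, 0.05], [0.055, 0.05]])
nrbr = normalized_radar_burn_ratio(vh_pre, vh_post)
burned = nrbr < -0.2  # illustrative threshold, not the study's value
```

In the combined-orbit (NRBRcmb) setting, such an index would be evaluated on both ascending and descending stacks before feeding the change layers to the U-Net classifier.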

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Quantifying Forest Growth and Carbon Dynamics With Terrestrial Laser Scanning Data

Authors: Shilin Chen, Professor Hans Verbeeck, Dr. Louise Terryn, Mr. Wout Cherlet, Dr. Chang Liu, Professor Mathias Disney, Professor Yadvinder Malhi, Dr. Niall Origo, Professor Kim Calders
Affiliations: Ghent University, University College London, Oxford University, Climate and Earth Observation Group-National Physical Laboratory
Monitoring forest growth and carbon dynamics plays a crucial role in forest resource management practices, forest response to climate change, and terrestrial carbon cycling. Forest structural parameters, such as diameter at breast height (DBH), tree height, tree volume, crown volume, crown height, and crown projection area, are essential indicators for evaluating forest growth and carbon dynamics. Traditional forest monitoring methods for assessing forest dynamics rely on analyzing changes in a limited set of attributes (e.g. DBH, tree height) derived from forest inventory data, making it challenging to capture detailed structural changes in individual trees. The advent of terrestrial laser scanning (TLS) has provided new opportunities to quantify forest dynamics over time due to its ability to capture highly accurate and detailed three-dimensional (3D) forest structural information. Previous TLS studies have predominantly focused on analyzing forest status at a single point in time (e.g. extracting structural parameters such as DBH, tree height, tree volume, and crown projection area). To date, however, the utilization of multi-temporal TLS data to quantify forest dynamics remains largely unexplored. In this study, we utilized two acquisitions of TLS data, collected six years apart in Wytham Woods (UK), to quantify forest growth and carbon dynamics at both the individual tree level and the plot level by extracting structural parameters and identifying the causes of the changes. Quantitative structure models (QSMs) and a newly developed voxel model were used to reconstruct detailed 3D structure from the point clouds. QSM-based tree volume was subsequently converted into carbon storage to assess forest carbon dynamics, while the reconstructed voxel-based 3D models were used to analyze changes in the spatial structure of trees.
By analyzing the differences in forest structural parameters and their 3D models over the six-year period, the causes of these changes were identified, which effectively quantified forest growth and carbon dynamics over time at both the individual tree- and plot-level. Additionally, our findings revealed that changes in tree woody volume were primarily driven by variations in the branch volume within the canopy. The results of this study demonstrate the unique advantages of TLS data in analyzing 3D forest dynamics and further promote its application in ecological studies.
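The voxel-based change analysis described above can be illustrated with a minimal occupancy comparison between two point-cloud epochs; the coordinates, voxel size and volume proxy below are illustrative, not the study's actual model:

```python
import numpy as np

def voxel_occupancy(points, voxel_size):
    """Map a point cloud (N x 3 array, metres) to the set of occupied
    voxel indices at the given edge length."""
    idx = np.floor(np.asarray(points, dtype=float) / voxel_size).astype(int)
    return {tuple(v) for v in idx}

def voxel_volume_change(points_t1, points_t2, voxel_size=0.2):
    """Crude growth/loss proxy between two epochs: total volume (m^3)
    of voxels newly occupied and voxels newly vacated."""
    v1 = voxel_occupancy(points_t1, voxel_size)
    v2 = voxel_occupancy(points_t2, voxel_size)
    cell = voxel_size ** 3
    return len(v2 - v1) * cell, len(v1 - v2) * cell  # (gained, lost)

# Two toy "scans" of the same tree, six years apart: one new voxel
# appears higher in the crown.
scan_t1 = np.array([[0.1, 0.1, 0.1], [0.1, 0.1, 0.3]])
scan_t2 = np.array([[0.1, 0.1, 0.1], [0.1, 0.1, 0.3], [0.1, 0.1, 0.5]])
gained, lost = voxel_volume_change(scan_t1, scan_t2, voxel_size=0.2)
```

Newly occupied voxels within the crown are a rough signature of branch growth; vacated voxels indicate branch loss or dieback.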

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Mapping Annual Forest Growth in France at High Resolution Using Satellites and Deep Learning (2018–2024)

Authors: Martin Schwartz, Fajwel Fogel, Aurélien De Truchis, Agnèse Pelissier-Tanon, Ibrahim Fayad, Cédric Vega, Philippe
Affiliations: Laboratoire des Sciences du Climat et de l'Environnement, LSCE/IPSL, CEA-CNRS-UVSQ, Université Paris Saclay
Accurate mapping of forest height, wood volume, and biomass stocks at high spatial resolution is vital for ecosystem monitoring and carbon budget assessments. Recent advancements have leveraged satellite imagery and cutting-edge deep learning algorithms to generate high-resolution forest maps, primarily focused on forest height [1–6]. While these maps provide valuable snapshots of forest conditions, they lack the temporal resolution needed to estimate forest-related carbon fluxes and track annual changes. To date, no study has successfully used deep learning models and recent satellite data to produce annual forest maps capable of capturing pixel-level growth signals. To address this limitation, we developed a deep learning framework to generate annual time series of forest height, wood volume, and aboveground biomass density for France at 10 to 30 m spatial resolution. Our methodology relies on a "year-agnostic" U-Net model trained on GEDI RH95 heights, capable of predicting forest height yearly over the period 2018–2024 from a combination of Sentinel-1 (S1) and Sentinel-2 (S2) raw images from any year, thanks to a training process involving a large number of S2 scenes under varied acquisition conditions. Predicted height maps at 10 m resolution were further scaled to 30 m maps of wood volume and biomass density using species-specific (broadleaf/conifer) allometric equations. We first evaluated the maps with French National Forest Inventory (NFI) plot data for the corresponding years, leading to an average mean absolute error of 3.4 m (R² = 0.45) across all years. Then, we assessed the capacity of our time series to capture a yearly growth signal on sites with repeated airborne laser scanning (ALS) campaigns, showing good agreement between the ALS-based growth areas and our maps. Furthermore, a regional comparison of gross wood volume production with NFI estimates yielded an MAE of 1.4 m³/ha/year.
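The height-to-volume scaling step can be illustrated with a generic power-law allometry; the coefficients below are placeholders for the species-specific equations the authors calibrate, not their actual values:

```python
import numpy as np

# Placeholder power-law coefficients for V = a * H^b (wood volume in
# m^3/ha from canopy height in m), one pair per species group.
# These numbers are illustrative only.
ALLOMETRY = {"broadleaf": (2.1, 1.6), "conifer": (1.7, 1.8)}

def height_to_volume(height_m, species):
    """Scale a canopy-height raster to wood volume density with a
    species-specific power law."""
    a, b = ALLOMETRY[species]
    return a * np.asarray(height_m, dtype=float) ** b

# Toy 2x2 height map (m); a zero-height pixel maps to zero volume.
heights = np.array([[5.0, 12.0], [20.0, 0.0]])
volume = height_to_volume(heights, "conifer")
```

Applied per pixel and per year with a broadleaf/conifer mask, this yields the annual wood volume and (via wood density and carbon fractions) biomass time series described above.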
The time series enabled the derivation of species-specific growth or regrowth curves after stand-replacing disturbance events, highlighting distinct growth dynamics and regional variations in forest management practices, including varying growth rates between broadleaf and conifer species. This level of detail provides insights into the influence of climatic, ecological, and silvicultural factors across different regions of France. Furthermore, the 10 m resolution maps facilitate precise detection and reconstruction of forest disturbances, such as forest fires, windthrow, and clear-cuts, for the 2018–2024 period. Comparison with independent disturbance datasets [7] revealed strong agreement, validating the ability of the maps to capture both localized and large-scale disturbances. These newly developed high-resolution maps of forest resources will be made freely accessible to the public and research community, fostering transparency and collaboration. These products hold the potential to advance numerous applications, such as the assessment of forest-related carbon stocks and fluxes, contributing to the formulation of a comprehensive French carbon budget, and supporting global efforts to mitigate climate change. References: [1] Pauls, Jan, Max Zimmer, Una M. Kelly, Martin Schwartz, Sassan Saatchi, Philippe Ciais, Sebastian Pokutta, Martin Brandt, and Fabian Gieseke. "Estimating Canopy Height at Scale". arXiv, 3 June 2024. https://doi.org/10.48550/arXiv.2406.01076. [2] Schwartz, Martin, Philippe Ciais, Catherine Ottlé, Aurelien De Truchis, Cedric Vega, Ibrahim Fayad, Martin Brandt, et al. "High-resolution canopy height map in the Landes forest (France) based on GEDI, Sentinel-1, and Sentinel-2 data with a deep learning approach". International Journal of Applied Earth Observation and Geoinformation 128 (1 April 2024): 103711. https://doi.org/10.1016/j.jag.2024.103711. [3] Lang, Nico, Walter Jetz, Konrad Schindler, and Jan Dirk Wegner. "A High-Resolution Canopy Height Model of the Earth". Nature Ecology & Evolution, 28 September 2023, 1–12. https://doi.org/10.1038/s41559-023-02206-6. [4] Fayad, Ibrahim, Philippe Ciais, Martin Schwartz, Jean-Pierre Wigneron, Nicolas Baghdadi, Aurélien de Truchis, Alexandre d'Aspremont, et al. "Hy-TeC: a hybrid vision transformer model for high-resolution and large-scale mapping of canopy height". Remote Sensing of Environment 302 (1 March 2024): 113945. https://doi.org/10.1016/j.rse.2023.113945. [5] Liu, Siyu, Martin Brandt, Thomas Nord-Larsen, Jerome Chave, Florian Reiner, Nico Lang, Xiaoye Tong, et al. "The overlooked contribution of trees outside forests to tree cover and woody biomass across Europe". Science Advances 9, no. 37 (15 September 2023): eadh4097. https://doi.org/10.1126/sciadv.adh4097. [6] Fogel, Fajwel, Yohann Perron, Nikola Besic, Laurent Saint-André, Agnès Pellissier-Tanon, Martin Schwartz, Thomas Boudras, et al. "Open-Canopy: A Country-Scale Benchmark for Canopy Height Estimation at Very High Resolution". arXiv, 18 July 2024. https://doi.org/10.48550/arXiv.2407.09392. [7] Senf, Cornelius, and Rupert Seidl. "Mapping the Forest Disturbance Regimes of Europe". Nature Sustainability 4, no. 1 (September 2020): 63–70. https://doi.org/10.1038/s41893-020-00609-y.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Resilience and recovery time of European forests after drought and compound hot and dry extreme events

Authors: Agata Elia, Mark Pickering, Giovanni Forzieri, Mirco Migliavacca, Alessandro Cescatti
Affiliations: ESA-ESRIN, Joint Research Centre Consultant, Department of Civil and Environmental Engineering, University of Florence, Joint Research Centre, European Commission
The increasing frequency of drought and compound hot and dry extreme events has raised further concern for the future of forests, given the link between drought, heat and tree mortality. European forests are expected to be heavily affected, in a continent particularly subject to compound hot and dry extreme events in terms of both area and days of the year (Weynants et al., 2024). An ecosystem's resilience is its capacity to resist and recover from perturbations. Subject to increasing research interest in recent years, the resilience of ecosystems, and of forests in particular, has been measured both via the post-disturbance rate of recovery to the original state and through more novel, theoretical Critical Slowing Down (CSD) indicators, such as the lag-one autocorrelation and the variance. Evidence exists of the agreement between these metrics (Smith et al., 2022), and forests with higher resilience are found in regions with lower aridity (Smith and Boers, 2023). Nevertheless, few studies explore the resilience of forests to specific extreme events, and those that do are based on in-situ measurements (Serra-Maluquer et al., 2018) or focus only on post-disturbance recovery (Anderegg et al., 2020). In this study, the resilience of European forests to drought extreme events, hot and dry compound extreme events and consecutive extreme events is explored. Both critical slowing down indicators and post-disturbance recovery times derived from EO are used, with the aim of investigating whether forests that manifest lower CSD-based resilience also recover more slowly from drought extreme events. In addition to exploring the resilience of European forests to droughts of varying impact, this study is a step forward in assessing the suitability of CSD indicators for predicting the response of ecosystems to extreme events.
EO derived indicators of vegetation greenness (kernel Normalized Difference Vegetation Index, kNDVI), water content (Vegetation Optical Depth, VOD) or photosynthetic activity (Solar Induced Chlorophyll Fluorescence, SIF) are expected to be implemented in the study. Finally, the assessment of different responses of forests to droughts will highlight the pre-conditions that make forests more or less resilient to these extreme events, such as their heterogeneity. References Weynants, M., Ji, C., Linscheid, N., Weber, U., Mahecha, M. D., Gans, F. Dheed: an ERA5 based global database of dry and hot extreme events from 1950 to 2022. Zenodo (2024). https://doi.org/10.5281/zenodo.13710040 Smith, T., Traxl, D., Boers, N. Empirical evidence for recent global shifts in vegetation resilience. Nat. Clim. Chang. 12, 477–484 (2022). https://doi.org/10.1038/s41558-022-01352-2 Smith, T., Boers, N. Global vegetation resilience linked to water availability and variability. Nat Commun 14, 498 (2023). https://doi.org/10.1038/s41467-023-36207-7 Serra-Maluquer, X., Mencuccini, M., Martínez-Vilalta, J. Changes in tree resistance, recovery and resilience across three successive extreme droughts in the northeast Iberian Peninsula. Oecologia 187, 343–354 (2018). https://doi.org/10.1007/s00442-018-4118-2 Anderegg, W.R.L., Trugman, A.T., Badgley, G., Konings, A.G., Shaw, J. et al. Divergent forest sensitivity to repeated extreme droughts. Nat. Clim. Chang. 10, 1091–1095 (2020). https://doi.org/10.1038/s41558-020-00919-1
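The two CSD indicators named in this abstract, lag-one autocorrelation and variance, are conventionally computed in rolling windows over a deseasonalised vegetation time series, with upward trends read as a loss of resilience. A minimal numpy sketch of that computation (the synthetic series, window length, and AR(1) parameters are illustrative assumptions, not the study's configuration):

```python
import numpy as np

def csd_indicators(x, window):
    """Rolling lag-1 autocorrelation and variance, the two classic
    critical-slowing-down (CSD) indicators, over a 1-D series."""
    ac1, var = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        var.append(np.var(w))
        # lag-1 autocorrelation: correlation of the window with itself shifted by one step
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(ac1), np.array(var)

# Illustrative deseasonalised vegetation anomaly series (synthetic AR(1) noise)
rng = np.random.default_rng(0)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.6 * x[t - 1] + rng.normal(scale=0.05)

ac1, var = csd_indicators(x, window=50)
```

Rising values of either output series over time would, under the CSD framework, indicate slowing recovery from perturbations.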
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Narrow-band spectral indicators of vegetation functioning across scales: From trees to forests

Authors: Dominic Fawcett, Christian Ginzler, Arthur Gessler, Petra D'Odorico
Affiliations: Swiss Federal Institute for Forest, Snow and Landscape Research (WSL)
Environmental factors are putting pressure on European forests, which are increasingly facing stresses such as more frequent and stronger droughts. Within the Forestward Observatory to Secure Resilience of European Forests (FORWARDS), we seek to understand how such stress-related variations in forest functioning can be resolved in optical remote sensing data and how these signals scale. Narrow-band spectral indices derived from hyperspectral reflectance data can map longer-term variations in vegetation structure, medium-term variation in leaf pigment pools and also short-term, pre-visual indicators of vegetation function (D’Odorico et al., 2021). These indices, however, vary at leaf, crown, and stand level and are influenced by tree species and functional types (Hilker et al., 2010, Kováč et al., 2023). With the advent of hyperspectral satellite missions (EnMAP, CHIME, FLEX) that can acquire data over all of Europe, understanding scaling effects is of great importance for interpreting them and linking them to plot-level measurements. We make use of multiple scales of measured and simulated hyperspectral reflectance data, from drones to satellites, to investigate 1) how well indices are able to differentiate stressed from less stressed trees and forest stands and 2) how indices behave at crown, stand and landscape level with variations in shadow fraction, forest structure and type. We present results from three intensively monitored sites of different forest types in Switzerland, including a rainfall exclusion experiment for investigating drought stress. At these sites, drone- and airplane-borne hyperspectral sensors acquired data in August 2023 and 2024. Results show that various spectral indices could differentiate treatment from control tree crowns of the drought experiment in 2023, with the photochemical reflectance index (PRI) showing significant differences for most species.
These between-crown differences could be resolved both at very fine resolutions from drones (0.1 m) and at the medium resolution of airborne data (1 m). Challenges remain when aggregating over tree stands at landscape-scale, particularly for mixed forests. Indices sensitive to pre-visual stress symptoms such as the PRI were among the indices most influenced by shadow fraction and structure variations at coarser resolutions. The carotenoid-chlorophyll index (CCI, Gamon et al., 2016) and modifications to the PRI (PRInorm, Zarco-Tejada et al., 2013), were less sensitive to structural variations and therefore may be promising for satellite-based monitoring of early changes in forest functioning in response to stress. References D’Odorico, P., Schönbeck, L., Vitali, V., Meusburger, K., Schaub, M., Ginzler, C., Zweifel, R., Velasco, V. M. E., Gisler, J., Gessler, A., & Ensminger, I. (2021). Drone‐based physiological index reveals long‐term acclimation and drought stress responses in trees. Plant, Cell & Environment, 44(11) Gamon, J. A., Huemmrich, K. F., Wong, C. Y. S., Ensminger, I., Garrity, S., Hollinger, D. Y., Noormets, A., & Peñuelas, J. (2016). A remotely sensed pigment index reveals photosynthetic phenology in evergreen conifers. Proceedings of the National Academy of Sciences, 113(46), 13087–13092. Hilker, T., Hall, F. G., Coops, N. C., Lyapustin, A., Wang, Y., Nesic, Z., Grant, N., Black, T. A., Wulder, M. A., & Kljun, N. (2010). Remote sensing of photosynthetic light-use efficiency across two forested biomes: Spatial scaling. Remote Sensing of Environment, 114(12), 2863–2874. Kováč, D., Novotný, J., Šigut, L., Ač, A., Peñuelas, J., Grace, J., & Urban, O. (2023). Estimation of photosynthetic dynamics in forests from daily measured fluorescence and PRI data with adjustment for canopy shadow fraction. Science of The Total Environment, 898, 166386. Zarco-Tejada, P. J., González-Dugo, V., Williams, L. E., Suarez, L., Berni, J. A., Goldhamer, D., & Fereres, E. 
(2013). A PRI-based water stress index combining structural and chlorophyll effects: Assessment using diurnal narrow-band airborne imagery and the CWSI thermal index. Remote sensing of Environment, 138, 38-50.
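The PRI and CCI discussed in this abstract are normalised-difference indices over narrow bands: the PRI pairs reflectance near 531 and 570 nm (Gamon et al.), and the CCI pairs the carotenoid-sensitive band near 531 nm with a red band near 645 nm (Gamon et al., 2016). A minimal sketch, in which the synthetic spectrum is invented and the exact band centres vary by sensor:

```python
import numpy as np

def narrow_band_index(refl, wl, b1, b2):
    """Normalised-difference index (R_b1 - R_b2)/(R_b1 + R_b2) from a
    reflectance spectrum, using the nearest available wavelengths."""
    r1 = refl[np.argmin(np.abs(wl - b1))]
    r2 = refl[np.argmin(np.abs(wl - b2))]
    return (r1 - r2) / (r1 + r2)

# Synthetic 1 nm reflectance spectrum, 400-900 nm (values illustrative only)
wl = np.arange(400, 901)
refl = np.interp(wl, [400, 531, 570, 645, 900], [0.03, 0.06, 0.08, 0.05, 0.45])

pri = narrow_band_index(refl, wl, 531, 570)  # photochemical reflectance index
cci = narrow_band_index(refl, wl, 531, 645)  # carotenoid-chlorophyll index
```

The same two-band form underlies many of the indices compared in the study; structural corrections such as PRInorm add further terms not shown here.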
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Mapping of the biomedicinal compound Quercitrin from species to spatial scale: A Case Study in the Himalayan Kumaon Region

Authors: Ayushi Gupta, Dr. Prashant K. Srivastava, Dr. Karuna Shanker, Dr. K. Chandra Sekar
Affiliations: Sathayabama Institution of Technology, Institute of Environment and Sustainable Development, Banaras Hindu University, CSIR - Central Institute of Medicinal and Aromatic Plants, G.B. Pant, National Institute of Himalayan Environment, Garhwal Regional Centre
Forest management plans and policy-making efforts require precise spatial mapping of individual species, which in turn requires exploiting species' unique parameters with advanced technology. Plants' unique medicinal properties are attributed to their secondary bioactive ingredients, which include flavonoids, alkaloids, tannins, and phenolic compounds. Quercitrin, a major flavonoid found in the leaves of selected species of Rhododendron arboreum, is held responsible for its therapeutic value. According to the IUCN report, about 25% of all Rhododendron taxa face the risk of extinction in their natural habitats, driven by environmental conditions and human encroachment. Mapping and monitoring R. arboreum at spatial scale using remote sensing therefore becomes necessary. In the presented study, hyperspectral radiometer data of R. arboreum were acquired in the complex terrain of the Kumaon region of the Himalayas and processed using smoothing algorithms such as Average Mean and Savitzky-Golay, followed by derivative spectral analysis techniques and regression learning algorithms to select bands sensitive to Quercitrin content in the plants. The selected bands were then used to develop two-band combination indices, whose accuracy was improved using machine learning algorithms. The results show that Savitzky-Golay smoothing followed by the first derivative performed moderately for selecting sensitive wavebands. The random forest regression algorithm also highlighted wavebands of interest and, through model development, enhanced the index developed for estimating Quercitrin at spatial scale. The best model showed a training correlation of 0.864 and a testing correlation of 0.570 for Quercitrin content prediction.
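Savitzky-Golay smoothing followed by a first-derivative spectrum, as used in this study for waveband selection, is a standard operation available in SciPy. A minimal sketch with a synthetic red-edge-like spectrum; the window length, polynomial order and spectrum are illustrative assumptions, not the study's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic field-spectrometer reflectance: smooth red-edge shape + sensor noise
rng = np.random.default_rng(42)
wl = np.arange(400, 1001)                        # wavelengths in nm, 1 nm steps
signal = 0.2 + 0.25 * np.tanh((wl - 700) / 40)   # idealised red-edge transition
refl = signal + rng.normal(scale=0.01, size=wl.size)

# Savitzky-Golay smoothing (window and polynomial order are illustrative)
smoothed = savgol_filter(refl, window_length=21, polyorder=3)

# First-derivative spectrum from the same filter (delta = 1 nm band spacing)
deriv1 = savgol_filter(refl, window_length=21, polyorder=3, deriv=1, delta=1.0)

# Candidate sensitive band: wavelength of maximum first derivative (red edge)
red_edge = wl[np.argmax(deriv1)]
```

Peaks and troughs of the derivative spectrum are typical candidates for sensitive wavebands, which a regression learner can then rank against measured compound content.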
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Integrating Airborne Laser Scanning and Satellite Data for Enhanced Forest Monitoring in the Wienerwald Biosphere Reserve

Authors: Anna Iglseder, Markus Hollaus
Affiliations: TU Wien
Forest ecosystems play a crucial role in climate regulation, biodiversity conservation, and also provide important economic and recreational functions. Effective monitoring of these ecosystems is essential for their sustainable management and conservation. Remote sensing data has proven to be a valuable tool in this context. While satellite-based data offers significant advantages—such as wall-to-wall coverage, high temporal resolution, data from a variety of sensors, and global consistency—it is limited in its ability to capture detailed forest structure, particularly vertical characteristics at scales ranging from individual trees to stands. Airborne Laser Scanning (ALS), an active sensor system deployed from aircraft, complements satellite data by providing high spatial resolution, capturing data points that range from one to several hundred points per m², spanning the canopy to the ground. This technology fills critical gaps, delivering precise 3D structural information. ALS point cloud data is increasingly accessible as open data at national scales, enabling the derivation of structural features such as canopy height, canopy roughness, forest layering, vertical complexity, and understory height at resolutions as fine as 1 m², applicable on regional to national scales. Beyond offering detailed forest characterization, ALS data holds significant potential for calibrating satellite-based models, enhancing traditional forest inventories, and integrating ALS-derived metrics into satellite-based feature sets for advanced modeling. In this study, we demonstrate the integration of ALS data for forest monitoring within the Wienerwald Biosphere Reserve—a 1,050 km² area situated at the easternmost foothills of the Alps, west of Vienna. This region is characterized by diverse forest management regimes. 
The reserve includes a gradient of management zones, ranging from highly protected areas with minimal human intervention aimed at preserving original habitats to multifunctional areas that support recreation and economic use. For our analysis, we utilized openly available ALS data provided by the federal states of Vienna and Lower Austria, acquired between 2011 and 2015. This dataset represents full coverage of the study area from a single data acquisition cycle. Key analyses involved mapping vertical forest structure by deriving various features that describe the vertical geometric distribution of ALS data points. These products were subsequently used for forest monitoring applications, such as assessing structural differences between protected areas and economically utilized zones. Additionally, we demonstrate the integration of ALS data with Sentinel-1 and Sentinel-2 satellite imagery, highlighting applications in habitat classification and ecosystem monitoring. The presented work is funded by the FFG Austrian Space Applications Programme ASAP 18 “Anwendung von Sentinel Daten für die Ausweisung von Biotoptypen und grüner Infrastruktur, SEMONA Reloaded” (grant agreement number FO999892649).
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Finding the Best Approach to Identify Deforestation Drivers Using Earth Observation: What Works and What Doesn’t

Authors: Amandine Debus, Dr Emilie Beauchamp, Justin Kamga, Dr Astrid Verhegghen, Christiane Zébazé, Dr Emily R. Lines
Affiliations: University Of Cambridge, International Institute for Sustainable Development, Forêts et Développement Rural (FODER), ARHS Developments Italia S.R.L. under contract with the European Commission, JRC
Commitments such as the European Union Deforestation Law and pledges made at COP26 show international ambition to slow down deforestation in the tropics. However, despite varying progress across countries, recent overall trends do not show a decline in forest loss. This is the case in Africa, which became the region with the largest annual forest loss between 2010 and 2020, with 3.9 million hectares lost yearly. Sub-Saharan Africa has also experienced the fastest agricultural expansion globally since 2015. Within the Congo Basin, home to the second largest tropical forest in the world, Cameroon experienced the sharpest average annual rise in primary forest and tree cover loss between 2016 and 2022. In addition, Cameroon was the country with the fourth highest increase in primary forest loss worldwide in 2022. This shows that global political commitments are not sufficient to protect tropical forests. Understanding the drivers of forest loss with a high level of detail and automation, so that information stays up to date, is needed to plan interventions focusing on specific areas and actions, design measures addressing specific drivers, and define priorities for monitoring. It is also crucial to consider country-specific deforestation dynamics, as well as socio-economic and political contexts, to design adapted methodologies and tools. However, a detailed, country-specific and comprehensive automated classification of the land-use changes leading to deforestation has been lacking for Cameroon. We addressed this issue and present here Cam-ForestNet, a new deep-learning approach to automatically classify fifteen direct drivers of degradation and deforestation in Cameroon. In addition, we created a new Earth Observation reference dataset, which includes satellite imagery and auxiliary biophysical and socio-economic data.
We compared the performance of different optical and SAR data (including Landsat-8, NICFI PlanetScope, Sentinel-1, Sentinel-2, HLS), different band combinations, and tested multi-sensor data fusion and time-series analyses. We especially explored the impact of image acquisition times and how this helps us understand the temporal dynamics of deforestation and degradation drivers in Cameroon. We also used Cam-ForestNet for yearly national analyses, which gave us insights into the spatio-temporal dynamics of deforestation in Cameroon. In addition, we explored ways to improve the use of our methodology for ‘real-world’ applications, which included finding an interpretable confidence score to increase the trust in our method, developing an easy-to-use and easy-to-update platform for partners, and designing careful open data strategies with attention to privacy and security issues. Our results show the potential of Cam-ForestNet to monitor deforestation nationally and locally and prioritise interventions for organisations working on forest conservation, as well as to carry out post-event analyses to inform policy. This model is especially powerful as it displays a high performance in identifying detailed small-scale drivers, which is usually a challenge, and groups together a high number of detailed drivers compared with other studies. Moreover, we show that the belief that there is generally not enough existing available land-use data for sub-Saharan Africa can be a misconception and that, with only a relatively small amount of location-specific data for training, our approach still produced good results. Finally, we hope this can be an example framework for other locations, especially in the Congo Basin area.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: European forest development with changing climate and legal constraints: combining Earth observations, biophysical models, and realistic human decision-making

Authors: Colin Johnstone, Florian Kraxner, Dmitry Shchepashchenko, Eunbeen Park, Hyunwoo Jo, Andrey Krasovskiy, Georg Kindermann, Maximilian
Affiliations: International Institute For Applied Systems Analysis
The decline in biodiversity and degradation of forest ecosystems pose significant global challenges. Despite ambitious policy targets, the majority of forest habitats within Europe are in an unfavorable conservation status. Addressing this requires not only robust environmental policies but also an understanding of the behavioral responses of forest owners and managers to those policies. In this talk, we present a novel interdisciplinary approach that combines space-based observations of the Earth’s biosphere with biophysical models of forest development and social science methods for studying the actual behaviors of forest managers in response to changing climate and environmental policies within Europe. Central to our project is the Global Forest Model (G4M), which combines space-based observations of forest growth and structure, land-based measurements of soil properties, and global climate models to determine forest growth productivity for different future climate projections. This leverages the unprecedented wealth of remote sensing observations for monitoring forest ecosystems that has become available in recent years. These growth productivity predictions are used in a biophysical forest model to estimate the future development of forests on country, continental, or global scales. The model also includes the important effects of forest management, allowing us to explore the effects of climate change and management on forest biomass, carbon sequestration, biodiversity, and the influence of disturbances such as wildfires. Complementing the biophysical modeling, we conducted detailed surveys of forest practitioners across multiple European countries to explore both their current management behavior and how their behavior will change in response to climate change and new EU regulations on biodiversity improvements.
We apply a theoretical framework for forest manager types, together with modern machine learning techniques, to the survey responses, allowing us to predict how actual forest managers respond to changing climate and policy constraints. We integrate this model of European forest management into the G4M framework, allowing us to predict the influence of changing forest management practices on forest development and biodiversity at the European scale. By providing a view that encompasses remote sensing observations of the Earth’s biosphere, biophysical processes, and realistic human decision-making, our project offers a valuable tool for policymakers. Our unique methodology highlights the benefits of interdisciplinary collaborations that combine Earth observations with machine learning methods and the social sciences in tackling biodiversity decline and forest ecosystem degradation across Europe.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Building development is the main cause of rapid wildland-urban interface growth in wildfire-prone Mediterranean-type ecosystems

Authors: Kira Pfoch, Dr Franz Schug, Dr Amanda Carlson, Heather Cox, Dr Todd Hawbaker, David Helmers, Patrick Hostert, Dr Dominik Kaim, Dr Miranda Mockrin, Zennure Uçar, Paige Wirth, Volker Radeloff
Affiliations: SILVIS Lab, Department of Forest and Wildlife Ecology, University of Wisconsin-Madison, US Geological Survey, Geography Department, Humboldt-Universität zu Berlin, Integrative Research Institute on Transformations of Human-Environment Systems, Humboldt-Universität zu Berlin, Institute of Geography and Spatial Management, Faculty of Geography and Geology, Jagiellonian University, Northern Research Station, US Department of Agriculture Forest Service, Department of Forest Engineering, Faculty of Forestry, İzmir Katip Çelebi University
The wildland-urban interface (WUI) is the area where buildings meet or intermingle with wildland vegetation and wildfire risks to people and homes are most acute. The WUI area is growing globally, increasing those threats and altering ecosystems and their fire regimes. Developments within Mediterranean-type ecosystems are especially at risk of wildfires due to their cycle between a wet winter season during which vegetation can grow and thrive, and a dry summer season that is prone to wildfires. Our goal was to assess (I) how much the WUI grew in five global Mediterranean-type ecosystems (Australia, California, Chile, Mediterranean basin, and South Africa) from 1990 to 2020, and (II) if WUI growth is driven by building development or vegetation change. We assessed land cover fractions of impervious surfaces and vegetation cover derived from 30-m Landsat data for 1990, 2000, 2010, and 2020 and consistently mapped WUI growth. We identified whether building development or vegetation change drove WUI growth using counterfactual scenarios. We found that the WUI is widespread in global Mediterranean-type ecosystems and has more than doubled its area from 1990 to 2020. In California and the Mediterranean basin, WUI growth was strongest before 2010 and subsequently decelerated. In contrast, in Australia, Chile, and South Africa, WUI growth has accelerated since 2010. WUI growth was concentrated around major metropolitan centers and in coastal regions. Building development was the main driver of WUI growth in all regions, but vegetation increase was almost as important in some regions of central Chile, South Africa, and the Mediterranean basin. The rapid growth of the WUI is concerning because it is primarily caused by the expansion of settlements, which means that more buildings, and potentially more people are at risk. 
Given that most wildfires are human-caused, WUI growth also poses a risk to ecosystems and biodiversity as wildfire frequency exceeds the natural range of variability. Combined with climate change-induced frequent, severe drought events, WUI growth considerably increases wildfire risk to people, buildings, and biodiversity in global Mediterranean-type ecosystems.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Spaceborne Lidar Detects Decline in Overstorey and Increase in Understorey Canopy Cover of Protected Forests in Central Europe Since 2019

Authors: Xiao Liu, Prof. Dr. Matthias Forkel, Prof. Dr. Bernhard Schuldt, Dr. -Ing. Vítězslav Moudrý
Affiliations: TUD Dresden University of Technology, Institute of Photogrammetry and Remote Sensing, TUD Dresden University of Technology, Institute of Forest Botany and Forest Zoology, Czech University of Life Sciences Prague, Department of Spatial Sciences
Forests cover around 31% of the Earth’s land surface and store around 45% of terrestrial carbon. Forests interact with the climate through multiple processes: biogeophysical (surface energy fluxes and hydrology), biogeochemical (the carbon cycle) and biogeographical (land use, urbanisation, and vegetation dynamics). The vertical structure of forests is an important control of the carbon, water and energy exchanges between land and atmosphere. It regulates microclimates, creates microhabitats and influences biodiversity. Drought and insect disturbances have resulted in widespread forest dieback in Central Europe since 2018. Optical remote sensing shows a large-scale decline in forest cover and an increase in canopy mortality. However, the implications of those changes in forest cover for vertical structure, such as the canopy cover of the over- and understorey, have yet to be investigated at a large scale. In this study, we aim to investigate changes in vertical forest structure in Central Europe since the drought year 2018 by using Global Ecosystem Dynamics Investigation (GEDI) spaceborne waveform lidar observations of forests in protected areas in Central Europe from 2019 to 2023. Our objectives are to (1) quantify the changes in overstorey and understorey canopy cover of forests and (2) analyse the relationship between changes in understorey and overstorey canopy cover. We used the categories of protected areas defined by the International Union for Conservation of Nature (IUCN), which in our study region include 690 protected landscapes. The protected areas were selected as study sites for two reasons: (1) only GEDI footprints from forests with similar management are aggregated, and (2) forests in protected areas are more likely to have multiple layers.
A height of 10 m above ground was used as the boundary separating overstorey (from 10 m to the top of the canopy) from understorey (from the ground to 10 m) for each high-quality GEDI footprint. Other boundaries, such as 5 m and a forest-height-adaptive threshold, were also tested. Because GEDI does not revisit the same location regularly, spatiotemporal aggregation is necessary to generate time series of overstorey and understorey canopy cover. The IUCN protected areas were used to mask all filtered GEDI footprints. Within each protected area, GEDI footprints were grouped by forest type, i.e., broadleaved or coniferous forests. For protected areas with two forest types, time series for both forest types were calculated. To quantify changes in overstorey and understorey canopy cover from 2019 to 2023, we first applied the Seasonal and Trend decomposition using Loess (STL) method to remove seasonality from the monthly GEDI canopy cover time series, and then applied the Theil-Sen estimator to the deseasonalised time series. We then converted the estimated Theil-Sen slope from percentage per month to percentage per year, for comparability with other studies that report changes per year. GEDI canopy cover change was compared with (1) Hansen Global Forest Change 2000-2023 (Hansen et al., 2013), (2) Forest Structure Germany data from Kacic et al. (2023), and (3) Forests Crown Defoliation data from the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests) (Michel et al., 2023). For one example site in the Harz mountains in Germany, the overstorey canopy cover declines in both forest types, but the change is much larger in coniferous forests (-4.45% per year) than in broadleaved forests (-1.57% per year). The annual maximum overstorey canopy cover in coniferous forests declines from 45% to 30%.
However, for the understorey, coniferous and broadleaved forests show an increase of canopy cover, with +0.59% per year in broadleaved forests and +1.18% per year in coniferous forests. For both forest types, the absolute value of canopy cover change in overstorey is larger than that in understorey. For all study sites in Central Europe, our results show that overstorey canopy cover declined and understorey canopy cover increased in both coniferous and broadleaved protected forests since 2019. For example in protected forests in Germany, the overstorey canopy cover declines by ~1.9% per year in coniferous and by ~1.2% per year in broadleaved forests while the understorey canopy cover increases by ~0.7% per year and ~0.6% per year, respectively. For coniferous forests, we found that the change of overstorey canopy cover has a significant negative correlation (r = -0.78, p < 0.01) with the change in understorey canopy cover. However, this relationship is not significant for broadleaved forests (p > 0.01). The observed loss of overstorey canopy cover is consistent with results from other remote sensing based forest monitoring products and field measurements while the observed increase of understorey canopy cover is complementary to existing studies. In summary, we demonstrated the potential of using GEDI to monitor forest vertical structure dynamics at large scale, which reflects recent impacts of climate dynamics such as drought on ecosystem functioning. As the irregular spatiotemporal sampling of GEDI influences the analysis of canopy cover time series, repeated lidar measurements from TLS or ALS should be included in the future for a comprehensive assessment of changes in forest vertical structure. References Hansen, M.C., Potapov, P.V., Moore, R., Hancher, M., Turubanova, S.A., Tyukavina, A., Thau, D., Stehman, S.V., Goetz, S.J., Loveland, T.R., Kommareddy, A., Egorov, A., Chini, L., Justice, C.O., Townshend, J.R.G., 2013. 
High-Resolution Global Maps of 21st-Century Forest Cover Change. Science 342, 850–853. https://doi.org/10.1126/science.1244693 Kacic, P., Thonfeld, F., Gessner, U., Kuenzer, C., 2023. Forest Structure Characterization in Germany: Novel Products and Analysis Based on GEDI, Sentinel-1 and Sentinel-2 Data. Remote Sens. 15, 1969. https://doi.org/10.3390/rs15081969 Michel, A., Kirchner, T., Prescher, A.-K., Schwärzel, K. (Eds.), 2023. Forest Condition in Europe: The 2023 Assessment ; ICP Forests Technical Report under the UNECE Convention on Long-range Transboundary Air Pollution (Air Convention).Online supplementary material., ICP Forests Technical Report. Thünen-Institut, Bundesforschungsinstitut für Ländliche Räume, Wald und Fischerei, Braunschweig. https://doi.org/10.3220/ICPTR1697801881000
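The trend-estimation step described in this abstract, deseasonalising a monthly canopy-cover series and then fitting a robust Theil-Sen slope, can be sketched as follows. This is an illustrative stand-in, not the authors' code: the series is synthetic, and seasonality is removed here by subtracting monthly means rather than by the STL decomposition the study uses:

```python
import numpy as np

def theil_sen_slope(t, y):
    """Theil-Sen estimator: median of all pairwise slopes (robust trend)."""
    i, j = np.triu_indices(len(t), k=1)
    return np.median((y[j] - y[i]) / (t[j] - t[i]))

# Synthetic monthly canopy-cover series (%), 5 years: seasonality + decline
months = np.arange(60)
season = 3.0 * np.sin(2 * np.pi * months / 12)
cover = 45.0 - 0.15 * months + season            # true trend: -0.15 % per month

# Simplified deseasonalising: subtract each calendar month's mean
monthly_means = np.array([cover[m::12].mean() for m in range(12)])
anom = cover - monthly_means[months % 12]

slope_per_month = theil_sen_slope(months, anom)
slope_per_year = 12 * slope_per_month            # convert to % per year, as in the text
```

The monthly-to-yearly conversion in the last line mirrors the unit transformation the abstract describes for comparability with other studies.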
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Improving National Forest Disturbance Monitoring by Integrating Landsat and Sentinel-2

Authors: Jessica Soto Balvanera, Dr. Alba Viana-Soto, Katja Kowalski, Prof. Dr. Cornelius Senf
Affiliations: Technical University of Munich, School of Life Sciences, Earth Observation for Ecosystem Management
Forest disturbances have increased across Europe in recent decades, calling for better monitoring of forests and in particular forest disturbance. Earth observation is the most prominent tool for monitoring forest disturbances across large areas, but recent approaches focus either on high spatial and temporal detail (i.e. using Sentinel-2) or on long time series to understand long-term forest dynamics (i.e. using Landsat). In our project, we aim to bridge the two approaches by integrating Landsat and Sentinel-2 into a combined forest disturbance monitoring framework that allows for long-term assessments of forest change at high spatial and temporal resolution. To do so, we will develop a combined Landsat-Sentinel-2 data cube structure that interpolates missing temporal observations based on phenological models. Using this data cube, we will identify disturbances using classification models trained on a manually labeled reference database, testing for the effect of disturbance size and severity on detection accuracy. In particular, we will investigate whether past disturbances smaller than Landsat resolution (i.e. < 0.09 ha) will become detectable through integrating recent Sentinel-2 information. We will further explore whether the phenological interpolation helps in better estimating the timing of disturbance within a year. Our results will contribute to improving national forest monitoring in Germany, which has experienced one of the most severe disturbance waves in the observational record in recent years.
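Phenological interpolation of missing observations is often implemented as a harmonic (Fourier) regression of the seasonal course; the sketch below illustrates that generic approach and is not the project's actual model (the function name and the NDVI series are invented).

```python
import numpy as np

def harmonic_fit(doy, values, n_harmonics=2):
    """Fit a Fourier series to sparse observations (doy: day-of-year,
    values: e.g. NDVI) and return a predictor for arbitrary dates."""
    omega = 2.0 * np.pi / 365.0

    def design(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        cols = [np.ones_like(t)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(k * omega * t), np.sin(k * omega * t)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(design(doy), values, rcond=None)
    return lambda t: design(t) @ coef

# Sparse, cloud-gapped NDVI-like observations over one season
doy_obs = np.array([20.0, 75.0, 130.0, 160.0, 200.0, 250.0, 310.0])
ndvi_obs = 0.45 + 0.3 * np.sin(2.0 * np.pi * (doy_obs - 100.0) / 365.0)

model = harmonic_fit(doy_obs, ndvi_obs)
# Interpolate a date missing due to clouds, e.g. DOY 180
ndvi_180 = float(model(180.0)[0])
```

In a data cube, such a model would be fitted per pixel on the valid Landsat and Sentinel-2 observations and evaluated at the missing timesteps.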
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Mediterranean forest traits retrieval from hybrid inversion: a multi-sensor and radiative transfer modelling comparison

Authors: Doctor Karine Adeline, Dr Xavier Briottet, Jean-Baptiste Féret, Florian De Boissieu, Jean-Philippe Gastellu-Etchegorry, Yingjie Wang, Grégoire Vincent, Jean-Marc Limousin, Marianne Debue
Affiliations: DOTA, ONERA, University of Toulouse, TETIS, INRAE, AgroParisTech, CIRAD, CNRS, University of Montpellier, CESBIO – Centre d'Études Spatiales de la Biosphère (CNES, CNRS, IRD, Paul Sabatier University), AMAP – BotAnique et Modélisation de l’Architecture des Plantes et des Végétations (CNRS, CIRAD, INRAE, IRD, Montpellier University), CEFE – Centre d’Ecologie Fonctionnelle et Evolutive (CNRS, EPHE, IRD, Montpellier University)
Characterizing the condition of Mediterranean forests is crucial for fire risk prevention, and for assessing the impact of increasing droughts and changes in land use. Optical remote sensing provides access to various biophysical and biochemical vegetation properties, including essential biodiversity variables. This capacity enhances our understanding of the structure and functioning of these forests. However, the accuracy with which these properties can be estimated depends on several factors: the scale of observation (i.e. the spatial resolution), the spectral richness (i.e. the number of spectral bands, sampling and resolution) and the revisit time (for monitoring purposes) of the remote sensing sensors. Hybrid inversion methods are increasingly used as they require fewer field measurements compared to empirical methods. These methods adopt a more generalizable and physically-based approach by relying on the combination of radiative transfer model (RTM) simulations and machine learning regression algorithms (MLRA). Key challenges include the adequate choice of the RTM (e.g. 1D, 2D or 3D) and the accurate input parameterization of the RTM, in order to account for the particularities of the observed landscape and to simulate realistic remote sensing images (e.g. differences between processing dense or open forests). The objective of this study is to compare the accuracy of vegetation trait estimations for the tree overstory layer based on a hybrid inversion using either a 1D or a 3D RTM and a multi-sensor dataset encompassing multi-/hyperspectral and airborne/satellite data. Two Mediterranean forests located in the South of France are studied: Pic Saint Loup and Puéchabon. We focused on two oak species: Quercus ilex (evergreen) and Quercus pubescens (deciduous).
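A hybrid inversion of this kind couples RTM simulations with a regression step. The sketch below illustrates the scheme with a toy four-band forward model and a k-nearest-neighbour look-up-table inversion; `toy_rtm`, its trait ranges and band definitions are invented stand-ins for PROSAIL/DART and the MLRAs compared in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_rtm(lai, cab):
    """Toy forward 'radiative transfer' model mapping traits (LAI, Cab)
    to a 4-band reflectance. A stand-in for PROSAIL/DART, not the real model."""
    red = 0.30 * np.exp(-0.4 * lai) * np.exp(-0.01 * cab) + 0.02
    nir = 0.55 * (1 - np.exp(-0.5 * lai)) + 0.05
    re1 = 0.5 * red + 0.3 * nir
    re2 = 0.2 * red + 0.6 * nir
    return np.stack([red, re1, re2, nir], axis=-1)

# 1) Build a look-up table (LUT) by sampling the trait space
lai_s = rng.uniform(0.5, 6.0, 5000)
cab_s = rng.uniform(10.0, 80.0, 5000)
lut_refl = toy_rtm(lai_s, cab_s)

# 2) Invert an observed spectrum via its k nearest LUT entries
def invert(obs, k=20):
    d = np.linalg.norm(lut_refl - obs, axis=1)
    idx = np.argsort(d)[:k]
    return lai_s[idx].mean(), cab_s[idx].mean()

obs = toy_rtm(3.0, 40.0)          # pretend observation with LAI=3, Cab=40
lai_hat, cab_hat = invert(obs)
```

In practice the LUT would hold RTM-simulated spectra resampled to each sensor's bands, and the k-NN step would be replaced by PLSR, SVMR or RFR.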
The MEDOAK campaign [1] occurred in June 2021 and resulted in the collection of a variety of field and lab measurements, including overstory and understory inventory, plant area index, spectroscopic data, leaf-clip optical data, trait measurements and inversions with the leaf model PROSPECT [2][3]. This ground data collection was carried out simultaneously to the Hypersense campaign, a joint airborne campaign organized by NASA and ESA, and managed by the University of Zurich in cooperation with NASA/JPL to support the upcoming Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) and Surface Biology and Geology (SBG) satellite missions. Airborne imaging spectroscopy data were acquired with AVIRIS-Next Generation sensor at 1m and 3m spatial resolutions. These airborne acquisitions were complemented with Sentinel-2, PRISMA and DESIS acquisitions over the same period. In addition, UAV LiDAR scans were acquired on some forest plots. Past work aimed at designing detailed 3D forest mockups from these LiDAR point clouds, assessing the accuracy of the simulations performed with the DART 3D RTM [4], and studying the contribution of factors including understory reflectance, woody element and abstract modeling of leaf optical properties in total canopy reflectance [5]. The current work focuses on identifying the optimal geometrical parameterization for vegetation trait estimations by comparing: (1) a homogeneous and simple representation with PROSAIL 1D RTM [6], (2) a simplified forest representation using geometric objects with DART and (3) a detailed 3D representation obtained from LiDAR mockups with DART. Targeted vegetation traits are plant area index, leaf pigments, water and dry matter content, and live fuel moisture content. Inversion methods are based on existing tools [7] and based on [8]. For comparison purposes, several MLRAs are compared, including PLSR, SVMR and RFR. 
The results are explored and discussed across the three geometrical modelling strategies, the remote sensing sensor characteristics (spatial and spectral), and the characteristics of the studied forest plots (canopy cover, species composition). This work also aims at preparing the French hyperspectral satellite mission BIODIVERSITY, featuring 10 m spatial resolution [9][10]. Therefore, 10 m images were simulated from 1 m airborne data and comparisons were performed to assess the potential of improved spatial resolution compared to current operational hyperspectral missions at 30 m. To conclude, these results will contribute to assessing the complementarity of remote sensing data (multi- vs hyperspectral, airborne vs satellite) to map vegetation traits that are useful for monitoring forest water stress and supporting risk prevention. [1] K. Adeline, J.-B. Féret, H. Clenet, J.-M. Limousin, J.-M. Ourcival, F. Mouillot, S. Alleaume, A. Jolivot, X. Briottet, L. Bidel, E. Aria, A. Defossez, T. Gaubert, J. Giffard-Carlet, J. Kempf, D. Longepierre, F. Lopez, T. Miraglio, J. Vigouroux and M. Debue (2024). Multi-scale datasets for monitoring Mediterranean oak forests from optical remote sensing during the SENTHYMED/MEDOAK experiment in the north of Montpellier (France). Data in Brief, 53, 110185. [2] Féret J-B, Gitelson AA, Noble SD and Jacquemoud S (2017). PROSPECT-D: Towards modeling leaf optical properties through a complete lifecycle. Remote Sensing of Environment, 193, 204–215. [3] J.-B. Féret, J. Giffard-Carlet, S. Alleaume, X. Briottet, V. Chéret, H. Clénet, J.-P. Denux, J.-P. Gastellu-Etchegorry, A. Jolivot, J.-M. Limousin, F. Mouillot, J.-M. Ourcival and K. Adeline (2022). Estimating functional traits in Mediterranean ecosystems using spectroscopy from leaf to canopy scale. 2nd Workshop on International Cooperation in Spaceborne Imaging Spectroscopy, 19-21 October 2022, Frascati, Italy, oral.
[4] Yingjie Wang, Abdelaziz Kallel, Xuebo Yang, Omar Regaieg, Nicolas Lauret, Jordan Guilleux, Eric Chavanon, Jean-Philippe Gastellu-Etchegorry (2022). DART-Lux: An unbiased and rapid Monte Carlo radiative transfer method for simulating remote sensing images. Remote Sensing of Environment, Volume 274, 2022, 112973. [5] M. Debue, G. Vincent, S. Alleaume, F. de Boissieu, X. Briottet, J.-B. Féret, J.-P. Gastellu-Etchegorry, J.-M. Limousin, D. Longepierre and K. Adeline. Adequacy of Mediterranean forest simulations from DART radiative transfer model and UAV laser scanning data to multi- and hyperspectral images. SPIE Remote Sensing, 3-6 September 2023, Amsterdam, Netherlands, oral and proceeding. [6] Jacquemoud S, Verhoef W, Baret F, Bacour C, Zarco-Tejada PJ, Asner GP, François C and Ustin SL (2009). PROSPECT+ SAIL models: A review of use for vegetation characterization. Remote Sensing of Environment, 113:S56–S66. [7] Féret, J.-B. & de Boissieu, F. An R package for the simulation of canopy reflectance using the model PROSAIL (PROSPECT+SAIL). https://gitlab.com/jbferet/prosail/. [8] Miraglio, T., Adeline, K., Huesca, M., Ustin, S. and Briottet, X. (2022). Assessing vegetation traits estimates accuracies from the future SBG and biodiversity hyperspectral missions over two Mediterranean Forests. International Journal of Remote Sensing, 43(10), 3537-3562. [9] K. Adeline. APR CNES TOSCA SENTHYMED, REMOTE sensing and in situ observations to study TREEs in natural and manmade landscapes (2023). https://remotetree.sedoo.fr/senthymed/. [10] X. Briottet, K. Adeline, T. Bajjouk, V. Carrère, M. Chami, Y. Constans, Y. Derimian, A. Dupiau, M. Dumont, S. Doz, S. Fabre, P.Y. Foucher, H. Herbin, S. Jacquemoud, M. Lang, A. Le Bris, P. Litvinov, S. Loyer, R. Marion, A. Minghelli, T. Miraglio, D. Sheeren, B. Szymanski, F. Romand, C. Desjardins, D. Rodat, B. Cheul (2024). End-to-end simulations to optimize imaging spectroscopy mission requirements for seven scientific applications. 
ISPRS Open Journal of Photogrammetry and Remote Sensing, 12, 100060.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Assessing Effects of Forest Disturbance on Land Surface Temperature in Low Mountain Ranges of Central Germany Using Google Earth Engine and the Landsat Archive

Authors: Simon Grieger, Prof. Dr. Martin Kappas, Susanne Karel, Philipp Koal, Dr. Tatjana Koukal, Markus Löw, Dr. Martin Zwanzig, Dr. Birgitta Putzenlechner
Affiliations: Georg August University of Göttingen, Austrian Research Centre for Forests (BFW), Forestry Research and Competence Centre (FFK), ThüringenForst AöR, Dresden University of Technology
Forest ecosystems in Germany are threatened by drought stress and pest infestation, with pressure projected to increase due to global warming. In particular, Norway spruce (Picea abies (L.) Karst.) is affected by arid conditions and bark beetle infestation. The sequence of pronounced dry years with above-average temperatures in Central Germany from 2018 onwards caused massive defoliation and tree decline, leading to an extensive loss of forest cover in low mountain ranges. Such shifts in forest vitality and vegetation cover are expected to affect local environmental conditions, including land surface temperature (LST), an essential climate variable. On cloudless summer days, areas with decreased forest vitality or vegetation cover are expected to favor higher LST, facilitating further drought stress on vegetation. However, regional assessment of potentially elevated satellite-derived LST in disturbed forest stands is challenging due to the spatial and temporal resolution of available LST products and various factors influencing the surface energy balance, such as topography, geology, soil properties, forest stand characteristics and management activities, as well as local weather conditions. To assess the effects of forest disturbance and topographic (elevation, potential incoming solar radiation, Topographic Wetness Index) and pedological (substrate, soil depth, type of water balance) site factors on LST on a regional scale, an automated workflow was implemented to combine a time series of the Landsat 8/9 Surface Temperature product, preprocessed via Google Earth Engine, with an operational forest disturbance monitoring framework, deriving disturbance from a dense time series of the Normalized Difference Vegetation Index (NDVI) based on Sentinel-2 data, as well as topographic and soil properties. For each Landsat raster cell with a spatial resolution of 30 m, a classification of the level of forest disturbance was performed.
The classification is based on the cumulative sum of daily negative deviations of the NDVI from a baseline phenology course (derived from a pre-disturbance reference period), and uses threshold values to classify each cell into one of the classes "undisturbed", "disturbed" and "severely disturbed". Areas classified as (severely) disturbed include several types of (former) forest cover, such as forests of reduced vitality, standing deadwood, or complete loss of forest cover. Furthermore, site-specific properties were assigned to each Landsat cell to enable the identification of other factors influencing LST dynamics, with the aforementioned topographic and pedological factors considered. The workflow is capable of deriving a time series of mean LST among undisturbed and (severely) disturbed forest stands as well as LST differences between (severely) disturbed and undisturbed areas. The considered site factors were used to calculate a site-adjusted LST difference, which accounts for equivalent combinations of site factors. Moreover, Landsat LST was compared to continuous in situ measurements of soil temperature as well as air temperature at the soil surface and approx. 15 cm above it, using data loggers situated in intact and disturbed forest stands with different variants of forest management, ranging from clearings to varying levels of standing deadwood. Results of the workflow applied to forest stands of Norway spruce in three low mountain ranges in Central Germany indicate differences in the distributions of midday LST (mean acquisition time of approx. 12:10 local time) between severely disturbed and undisturbed forest stands during the vegetation period, with a distinct trend of higher temperatures in severely disturbed forest areas. Over time, LST difference between severely disturbed and undisturbed areas rises after forest disturbance events, reaching a median of approx. 4.4 K in similar site settings.
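The cumulative-deviation classification can be sketched as follows; the NDVI series and the threshold values are illustrative placeholders, not the calibrated values of the monitoring framework.

```python
import numpy as np

def classify_disturbance(ndvi, baseline, t_dist=1.5, t_severe=3.0):
    """Cumulative sum of daily negative NDVI deviations from a baseline
    phenology, thresholded into disturbance classes. Thresholds are
    illustrative placeholders, not the study's calibrated values."""
    deficit = np.minimum(ndvi - baseline, 0.0)   # keep only negative deviations
    cusum = -deficit.sum()                       # accumulated NDVI deficit
    if cusum >= t_severe:
        return "severely disturbed"
    if cusum >= t_dist:
        return "disturbed"
    return "undisturbed"

days = np.arange(120, 270)                       # vegetation period (DOY)
baseline = 0.8 - 0.000001 * (days - 190) ** 2    # pre-disturbance phenology course

healthy = baseline + 0.01                        # tracks the baseline
declining = baseline - 0.015                     # persistent small deficit
dieback = baseline - 0.002 * (days - 120)        # growing deficit after an event

labels = [classify_disturbance(s, baseline) for s in (healthy, declining, dieback)]
```

Per Landsat cell, the accumulated deficit would be computed from the Sentinel-2 NDVI time series against the cell's pre-disturbance baseline.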
Among topographic site factors, elevation exhibits the highest influence, indicating a trend of median LST differences rising with elevation. Between areas of lowest and highest elevation, median LST differences differ by approx. 1.2 K. For pedological site factors, median LST differences vary the most among substrate types, differing by 2.9 K, possibly caused by different soil moisture conditions and ability to retain water during droughts. The rise of LST difference between severely disturbed and undisturbed forest areas also comes with an increased variance of LST for areas classified as disturbed. Increased variance could be explained by different post-disturbance forest management practices, such as clearings of disturbed spruce stands and remaining standing deadwood, therefore providing varying levels of shading. This is supported by in situ microclimatic measurements in different management variants, which reveal higher midday temperatures during the vegetation period with decreasing volume of shading deadwood. Landsat LST shows the highest agreement (linear regression with R² ≈ 0.73) with air temperature at 15 cm above the soil surface, the layer that shapes microclimatic conditions for natural and artificial regeneration. In view of the increasing risk of forest disturbance and the necessity to adapt forest stands to a changing climate, insight into the factors influencing LST may aid informed decision making in post-disturbance forest management. Increasing temperatures during the vegetation period may impose further drought stress on (already disturbed) forest stands, facilitating further pest infestation. Identification of sites with a high risk of elevated LST in the vegetation period is crucial for successful reforestation efforts and the establishment of resilient forest stands. The results further highlight the influence of different variants of silvicultural management and therefore possible means of intervention.
Thus, future research should focus on the identification of other site factors with an effect on LST as well as the differentiation of variants of silvicultural management of disturbed forest stands, assessment of their effects on LST, and derivation of strategies for mitigation of the impacts of widespread forest disturbance.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Spectral Upscaling and Its Effects On Plant Traits Of Different Mid-European Tree Species From Leaf To Satellite

Authors: Dr. Michael Foerster, Ephraim Schmidt-Riese, Pia Kraeft, Dr. Christine Wallis, Randolf Klinke, Elina Mevenkamp, Dr. Annett Frick
Affiliations: LUP GmbH, Technische Universitaet Berlin
Understanding the physiological traits of forests is crucial for sustainable forest management and conservation efforts at different scales. In this study, we investigate physiological traits of a mixed-stand forest using a composition of biophysical measurement methods and imaging spectroscopy at various scales. Specifically, we examine how spatial and temporal variability of the spectral response of trees alters our capability of consistently mapping tree physiology features. To understand signal changes with scale, we establish relationships between spectra from different scales and physiology data via Partial Least Squares Regression (PLSR) and Random Forest Regression (RFR). Moreover, we evaluate how tree physiology is depicted differently across scales. We further consider how seasonal change, as depicted in monthly measurements, as well as tree species identity and tree species traits alter the relationship between small- and large-scale remote sensing observations. To achieve this, we acquired spectral measurements in 2023 and 2024 between May and September on 42 sample trees of five main tree species (11 European beeches, 10 Pedunculate oaks, 10 green Douglas firs, 8 European larches and 3 Norway spruces) with ASD FieldSpec measurements (Plant Probe with Leaf Clip and 30 cm distance to canopy) from a crane platform installed within a 1-hectare mixed beech forest in Mecklenburg-Western Pomerania, Germany. Additionally, we acquired multispectral and hyperspectral UAV measurements as well as Sentinel-2 imagery. Applying the two regression methods, we find clear relations to biomarker measurements, especially chlorophyll and carotenoid content, though these relations weaken with increasing scale. We suppose that light and shadow conditions, daily climate-driven traits such as leaf angle, and increasing spectral mixing add to the uncertainty of physiological trait estimation with scale.
With our analyses, we aim to (1) assess which physiological traits can be estimated from spectral responses with sufficient consistency, and (2) identify the reasons for scale differences in trait estimation. On this basis, we suggest identifying the relative importance of the processes that drive the scale-dependent capability of physiological trait estimation. With this, we will be able to better frame the actual potential of different remote sensing approaches for operational monitoring of forests. Our results will enable a more thorough evaluation of ongoing monitoring schemes and can critically contribute to a comprehensive interpretation of their results.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: From Satellite to App: An End-to-End Workflow for Near-Real Time Monitoring of Forest Disturbances on German State-Level

Authors: Niklas Jaggy, Dr. Nikolas Herbst, Dr. Mareike Kortmann, Dr. Frank Thonfeld
Affiliations: Department of Remote Sensing, Institute of Geography and Geology, University of Wuerzburg, Chair of Software Engineering (Informatik II), Department of Computer Science, University of Wuerzburg, Field Station Fabrikschleichach, Department of Animal Ecology and Tropical Biology (Zoology III), Biocenter, University of Wuerzburg, German Remote Sensing Data Center (DFD), German Aerospace Center (DLR)
Recent years have seen a persistent decline in the health of German forests, including in Bavaria. The repeated drought years between 2018 and 2022 in particular have led to large-scale forest loss, driven by a variety of pressure factors. The increase in the occurrence and severity of disturbances has fueled the need for information products that consistently monitor forest dynamics at denser temporal intervals than currently possible with ground-based surveys. Satellite remote sensing is widely recognized to efficiently provide continuous data across large areas. It therefore forms the basis for many forest applications, including disturbance detection. Currently, several forest monitoring efforts covering Bavaria exist, with differences in spatial and temporal resolution, input data, choice of algorithm and update intervals. However, there is so far no system that is able to detect disturbances at both high spatial and temporal resolution in near-real time. Our work aims at closing this gap by presenting an implementation of a near-real time forest monitoring system that detects disturbances at biweekly intervals at 10 m resolution. The entire workflow combines mass-data processing of satellite data with the presentation of results to users through a mobile app. We analyzed historic time series from Sentinel-2 and Landsat-8/9 from August 2017 until April 2024 before switching to monitoring mode. All input data is thoroughly pre-processed to reduce atmospheric contamination and other artifacts. Our analysis relies on the disturbance index (DI), which uses a selection of the Tasseled Cap components to detect disturbances. The data are aggregated into biweekly composites; for the historic time series we compare each timestep to a stable 2017 reference, while the monitoring mode applies a data-driven pixel-wise outlier detection. To ensure robustness, a disturbance is confirmed only after six consecutive detections.
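The six-consecutive-detection confirmation rule can be sketched per pixel as follows; the flag sequences are invented examples, and the candidate/confirmed semantics are a plausible reading of the abstract rather than the system's exact implementation.

```python
def confirm_disturbance(flags, n_confirm=6):
    """Classify one pixel's biweekly detection sequence.
    Returns (confirmed, candidate): confirmed if any run of n_confirm
    consecutive detections occurred; candidate if the latest composite
    flags a not-yet-confirmed disturbance."""
    run = longest = 0
    for f in flags:
        run = run + 1 if f else 0
        longest = max(longest, run)
    confirmed = longest >= n_confirm
    candidate = (not confirmed) and bool(flags[-1])
    return confirmed, candidate

noise_pixel   = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # isolated false alarms
recent_pixel  = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]   # 4 in a row: candidate only
cleared_pixel = [0, 0, 1, 1, 1, 1, 1, 1, 1, 1]   # >= 6 in a row: confirmed

results = [confirm_disturbance(p) for p in (noise_pixel, recent_pixel, cleared_pixel)]
```

Requiring a run of consecutive detections suppresses single-composite outliers (residual clouds, shadows) at the cost of a confirmation delay of several weeks.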
In addition to the time of loss, we provide the disturbance magnitude and a context layer for candidate disturbances which are not yet fully confirmed. The results from the satellite data processing are then transferred to the app backend, which triggers several processes to enhance informational value. Firstly, the detected pixels are aggregated into meaningful disturbance patches. Secondly, a disturbance attribution algorithm is activated, which assigns one or multiple of the common stress factors (windthrow, drought, insect infestation, fire, snow damage, regular harvest). This process is implemented as a decision tree and includes several external data sources with respect to each cause of disturbance. All gathered results are then visualized in the app frontend: here the user can subscribe to an area of choice, for which notifications are sent if new disturbances are detected. Additionally, the user is presented with a description of the stress factors associated with a specific disturbance case. The user is then able to send feedback on the situation on the ground, thereby supporting our validation process. Our disturbance mapping results show good agreement with existing forest condition maps, although direct comparisons are difficult due to differences in spatial or temporal resolution and the varying target metrics assessed. Altogether, our implementation is capable of detecting historic forest loss in Bavaria as well as monitoring current forest dynamics. The validation shows that our approach achieves accuracies comparable with results reported for other disturbance maps. We find that our implementation of a near-real time forest monitoring service integrates well with existing systems, as it extends current efforts by combining regular data updates with high spatial and temporal resolution. The app provides a low-barrier way of interacting directly with disturbance datasets and the corresponding disturbance attributions.
As a whole, our workflow provides information that can support timely implementation of management practices by forest managers or support planning strategies for sustainable forest management.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Object-Guided Tree Species Classification Using Deep Learning

Authors: Mareike Weishaupt, Wen Fan, Jonas Troles, Dr. Jiaojiao Tian
Affiliations: German Aerospace Center (DLR), Remote Sensing Technology Institute (IMF), Technical University of Munich, TUM School of Life Sciences Weihenstephan, Institute of Forest Management, University of Bamberg, Cognitive Systems Group
Very high-resolution RGB imagery captured by unmanned aerial vehicles (UAVs) enables a new perspective on forest mapping, allowing more focus on spatial information. To date, it has been common to use satellite imagery and its spectral information. However, low resolution and the inability to capture individual tree-level details limit such research to forest areas rather than specific classification at the tree level. Driven by deep learning (DL), tasks like tree species classification using spatial data, such as the texture of a tree, become possible. The use of very high-resolution imagery for forest mapping in particular has yet to be fully explored. We propose a region-based deep learning architecture for tree species classification based on the refined individual tree delineation results from our previous work. We test our approach on the "Bamforest" benchmark dataset with a 2 cm resolution and 27,160 annotated trees within 105 hectares. The data was obtained in a natural German forest and collected in the summer with a UAV. We evaluated ten classes to distinguish seven common German tree species, coniferous and deciduous tree species, one combined class for minority tree species, one for dead trees, and one for the background. The objective is to develop a semantic segmentation model that accurately classifies each pixel into distinct classes. In addition to studying the impact of the pixel amount of each class on the classification output, we propose adding an accuracy loss function to the training to improve results by using the spatial configuration of the ground truth to refine predictions through a voting process. In postprocessing, we combine our semantic segmentation with instance segmentation. This process involves assigning our results to each correct tree shape with a single tree species through a voting mechanism, which helps refine the tree shape and decide on one species.
Our results show the importance of data availability for each class, measured with the F1-score and the Intersection over Union (IoU) per class, and the advantage of using the second accuracy loss function for minority classes, at the cost of additional computational time. Our experiments also show that the voting process in postprocessing has a positive effect on the output, evaluated on, first, a test dataset similar to the training data and, second, a different test area. The further application of this framework has the potential to support forest management, biodiversity assessment, and environmental monitoring by leveraging UAVs and advanced DL techniques. The extensive and diverse dataset makes the generalization of models to other forest areas possible and creates the ability to generate single-tree information in dense forests.
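The voting mechanism that assigns one species per delineated tree can be sketched as a majority vote of semantic-segmentation pixels within each instance mask; the array contents below are toy examples, not Bamforest data.

```python
import numpy as np

def vote_species(semantic, instances, background=0):
    """Assign one species label per tree instance by majority vote over
    the semantic-segmentation pixels inside each instance mask."""
    species_per_tree = {}
    refined = np.full_like(semantic, background)
    for tree_id in np.unique(instances):
        if tree_id == background:
            continue
        mask = instances == tree_id
        labels, counts = np.unique(semantic[mask], return_counts=True)
        winner = labels[np.argmax(counts)]
        species_per_tree[int(tree_id)] = int(winner)
        refined[mask] = winner          # enforce one species per tree shape
    return species_per_tree, refined

# Toy 4x4 example: one tree instance (id 1) with noisy per-pixel species labels
semantic = np.array([[2, 2, 0, 0],
                     [2, 3, 0, 0],
                     [2, 2, 0, 0],
                     [0, 0, 0, 0]])
instances = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 0, 0]])
species, refined = vote_species(semantic, instances)
```

The vote both decides on a single species and cleans isolated misclassified pixels inside the tree crown.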
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Combining Sentinel-1, Sentinel-2, and LiDAR data for improved forest species mapping: a case study in Navarre, Spain

Authors: Itxaso Aranguren, Dra. María González-Audícana, Eduardo Montero, Dr. José Antonio Sanz, Prof. Jesús Álvarez-Mozos
Affiliations: Institute for Sustainability & Food Chain Innovation (IS-FOOD), Department of Engineering, Public University of Navarre (UPNA), Navarre Forestry Association (FORESNA-ZURGAIA), Institute of Smart Cities, Department of Statistics, Computer Science and Mathematics, Public University of Navarre (UPNA)
Global forest loss represents a persistent and alarming trend, driven by a range of factors, including deforestation and climate-related disturbances. Consequently, accurately characterizing forested areas holds critical significance for the development of effective forest management strategies, given that forests account for nearly one-third of the Earth's land surface. Multispectral imagery has been employed for the classification of forest species. Additionally, vegetation indices such as the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Infrared Index (NDII), or the Normalized Burn Ratio (NBR) have also been used as input in forest classification models. Furthermore, the potential of Synthetic Aperture Radar (SAR) and Light Detection and Ranging (LiDAR) for the classification of forest species has been investigated, albeit to a lesser extent than optical data. There is a growing interest in the integration of Earth Observation data obtained from different types of sensors to enhance the accuracy of forest species mapping. The objective of this study is to quantify the added value of combining data obtained from Sentinel-1 (S-1), Sentinel-2 (S-2) and LiDAR sensors for the classification of forest species. With this aim, a case study was conducted in Navarre, Spain, which has 3,760 km² of forests, including broadleaved, coniferous and mixed types. The diverse climatic, edaphic and orographic conditions in Navarre contribute to a rich and diverse variety of forest ecosystems. The study aimed to classify eleven distinct forest species or groupings: four broadleaves ([1] Beech -Fagus sylvatica-, [2] Hardwood, [3] Quercus and [4] Other broadleaves), four conifers ([1] Aleppo pine -Pinus halepensis-, [2] Black pine -Pinus nigra-, [3] Scots pine -Pinus sylvestris- and [4] Other conifers) and three mixed forests ([1] Scots Pine – Beech, [2] Scots Pine – Oak and [3] Other mixtures).
All available S-1 and S-2 scenes for 2018 were downloaded and processed. The processing of S-1 data involved several steps, including the removal of thermal noise, orbit correction, calibration, speckle filtering, terrain-flattening, and orthorectification. This resulted in backscatter coefficient values (gamma0) in VH and VV polarizations. Additionally, the ratio (VH/VV) was calculated. S-2 data underwent atmospheric correction using Sen2cor, followed by mosaicking of the four granules covering the study area. Ten spectral bands were selected, and the NDVI, NDII and NBR vegetation indices were calculated. The time series for each species and grouping were extracted for the complete year 2018. Monthly composites were obtained for all features, which served as inputs for the classification models. LiDAR data was sourced from the second LiDAR coverage of the Spanish National Plan for Aerial Orthophotography, conducted in 2017, between September 8 and November 16. Using LiDAR point clouds, 25 LiDAR features of interest were generated, which consisted of several elevation-, intensity- and canopy structure-related metrics. Additionally, the DTM distributed by the Cartographic Service of the Government of Navarre was used to obtain topographic features of interest, i.e. elevation, slope and aspect. These were also included as input variables for the forest species classification models. To evaluate the synergistic use of the features, fourteen classification scenarios were proposed, by testing all possible combinations of input variables. The findings revealed that integrating data from all three sensors improved forest species classification when compared to individual scenarios. Notably, S-2 data alone yielded the best standalone performance, and its combination with S-1 did not result in further accuracy gains. The highest overall accuracy (0.80) was achieved when S-2 monthly composites were combined with LiDAR and topographic variables.
S-2 red-edge and NIR bands from June, July and August emerged as the most important variables, alongside the elevation-related LiDAR and topographic variables. A class-specific accuracy analysis indicated that heterogeneous and mixed forest classes performed poorly, while monospecific classes achieved higher accuracy. Overall, the results of this study underscore the potential of integrating data from multiple Earth Observation technologies to enhance the mapping and classification of forest species.
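All three indices used as model inputs share the same normalized-difference form. A minimal sketch (band pairings follow the usual Sentinel-2 conventions of B8/B4 for NDVI, B8/B11 for NDII and B8/B12 for NBR; the reflectance values are illustrative, not taken from the study):

```python
def normalized_difference(a, b):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    return (a - b) / (a + b)

# Illustrative Sentinel-2 L2A surface-reflectance values for one pixel
# (invented for this example, not from the study).
red, nir, swir1, swir2 = 0.04, 0.35, 0.18, 0.10

ndvi = normalized_difference(nir, red)    # vegetation vigour (B8, B4)
ndii = normalized_difference(nir, swir1)  # canopy water content (B8, B11)
nbr  = normalized_difference(nir, swir2)  # burn/disturbance (B8, B12)
```

The same function applies band-wise to whole rasters when the inputs are arrays, which is how monthly composites of these indices would typically be built.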

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: AI-vergreens - a New Multi-level Labelled Multi-temporal Sentinel-2 Image Patch-based Training Dataset Optimized for Northern Circumboreal Forest

Authors: Birgit Heim, Lea Enguehard, Femke van Geffen, Jacob Schladebach, Stefan Kruse, Ulrike Herzschuh, Begüm Demir, Annett Bartsch, Konstantinos Papathanassiou, Ronny Haensch
Affiliations: Alfred Wegener Institut Helmholtz Center for Polar and Marine Research (AWI), Technische Universität Berlin (TUB), b.geos, Deutsches Zentrum für Luft- und Raumfahrt (DLR)
Circumboreal forests, located primarily in Alaska, Canada, and Northern Eurasia, represent close to 30% of all forested land areas and are changing strongly in response to climate and to increasingly frequent disturbances such as fires and drought. Remote sensing applications for landcover in high latitudes are possible but remain challenging for optical satellite sensors due to long-lasting snow coverage, frequent cloud coverage and wildfires. In addition, the accuracy of landcover mapping strongly depends on the quality and the correct context of the training data. Forest inventories are commonly used as training data for forest remote sensing. For the Northern boreal realm, specifically the subarctic treeline ecotone, publicly available reference data remain rare and are often not adequately labeled for machine learning and remote sensing-related applications. In our previous work SiDroForest (Siberian Drone Mapped Forest Inventory), we produced a training dataset containing forest-type-labeled Sentinel-2 image patches linked to AWI expedition plots in the Eastern Siberian summergreen-evergreen and treeline transition zones (van Geffen et al., 2021, https://doi.org/10.1594/PANGAEA.933268; van Geffen et al., 2022). The SiDroForest forest-type labels, despite being linked to 54 reference plots, in some cases contain only a few members per class and are imbalanced, as is typical of real-world reference data. As forest landcover is characterized by high heterogeneity due to topographic and disturbance constraints, measuring forest parameters requires covering a large variation across environmental gradients. In Enguehard et al. 2024a,b (https://doi.org/10.1594/PANGAEA.964699), within the frame of the AI-vergreens BMWK project, we extended the Sentinel-2 training dataset for Eastern Siberia to 79 labelled patches, incorporating more expedition data and photointerpretation to expand on the rare classes.
Now incorporating reference data from new expeditions to Alaska and NW Canada since 2022, we are building up an extensive labeled training dataset based on multi-temporal optical Sentinel-2 satellite data for Northern Circumboreal forest remote sensing applications. We assign multi-level labels based on our field data on tree species and crown cover percentages. Additionally, we derive tree crown cover from LiDAR 3-D point clouds (Yellowscan Mapper on a DJI M300 drone) and from structure-from-motion RGB photogrammetry (DJI Phantom 4). In addition, based on the extensive UAV data collection from expeditions in Siberia, Alaska and Canada, Kruse et al. (in prep.) are preparing a forest structure training dataset derived from LiDAR 3-D point clouds from AWI expeditions, optimized for machine learning to detect individual trees and species in northern boreal forests and the tundra transition (BorFIT, AWI DataHub Information Infrastructure funds). Using BorFIT structural forest information, we will be able to extend our image patch collection by extracting reference data from larger forest areas in addition to the AWI expedition plots. We anticipate our dataset to be the starting point for a significantly more extensive one with the addition of SAR sensor image patches (Sentinel-1, TanDEM-X). Based on field data and UAV sensors (multispectral, hyperspectral and LiDAR), we can provide the contextual understanding, detail and specificity for a vast region where publicly available reference data remain scarce. We will make this consistently and uniformly labelled patch-based training dataset, optimized for machine learning, publicly available as a FAIR data publication, including a WebGIS visualization of the locations of the training data in the Circum-Boreal, as part of the AI-vergreens BMWK project.
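Building an image-patch training dataset around plot locations, as described above, reduces to cutting fixed-size windows out of a raster. A hypothetical sketch (the `extract_patch` helper and its edge-handling policy are illustrative assumptions, not the project's actual code):

```python
def extract_patch(image, row, col, size):
    """Return a size x size patch centred on pixel (row, col), or None
    if the window would fall outside the image (plot too close to the edge)."""
    half = size // 2
    r0, c0 = row - half, col - half
    if r0 < 0 or c0 < 0 or r0 + size > len(image) or c0 + size > len(image[0]):
        return None
    return [r[c0:c0 + size] for r in image[r0:r0 + size]]

# Toy 6 x 6 single-band raster; real inputs would be multi-band Sentinel-2 arrays.
grid = [[10 * r + c for c in range(6)] for r in range(6)]
patch = extract_patch(grid, 3, 3, 3)  # 3 x 3 window around pixel (3, 3)
```

Each extracted patch would then carry the multi-level label (tree species, crown cover class) of the plot at its centre.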

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Upscaling UAV LiDAR-Derived Boreal Forest Type, Structure, and Successional Stages to Sentinel-2 and Landsat in Alaska and Northwestern Canada

Authors: Léa Enguehard, Stefan Kruse, Robert Jackisch, Begüm Demir, Santosh Panda, Nicola Falco, Ulrike Herzschuh, Birgit Heim
Affiliations: Alfred Wegener Institute, Helmholtz Center For Polar And Marine Research, Geoinformation in Environmental Planning Lab, Technische Universität Berlin, Remote Sensing Image Analysis (RSiM) Group, Technische Universität Berlin, Institute of Agriculture, Natural Resources and Extension, University of Alaska Fairbanks, Earth and Environmental Sciences Area, Lawrence Berkeley National Laboratory
The boreal forest, one of Earth’s largest terrestrial biomes, accounts for a third of all forests and serves as a critical carbon reservoir while providing essential ecosystem services. A key feature of boreal forest ecosystems is their structure, where the three-dimensional organization of trees reflects successional stages, provides habitats, and drives ecological functions. This study aims to map boreal forest structure, successional stages, and forest types by linking Sentinel-2 and Landsat satellite data to forest products produced from UAV-based LiDAR transects and ground surveys across North America. During the summers from 2022 to 2024, we collected a novel UAV and ground dataset to explore boreal forest structure across hundreds of sites spanning northern Alaska (68.5°N) to British Columbia (58.6°N). We inventoried vegetation and conducted extensive LiDAR drone transects covering diverse northern boreal forest landscapes and the treeline ecotone. We acquired high-density LiDAR measurements with a Yellowscan Mapper mounted on a DJI M300 RTK UAV, covering approximately 600 x 100 meters at each transect. The LiDAR data, processed to centimeter-scale point clouds, provided detailed information on 3D forest structure, allowing individual tree segmentation and forest type identification (deciduous or evergreen trees). Key metrics, including the abundance of deciduous and evergreen trees at varying height levels, canopy closure, forest age, and successional stage, were derived for aggregated 20 x 20-meter forest patches (Enguehard et al., in preparation). These data provide extensive training data for upscaling forest structure metrics to larger spatial scales. Using supervised machine learning and regression models, we explore the relationships between UAV-derived metrics and spectral signals from Sentinel-2 and Landsat imagery.
Specifically, we examine how evergreen and deciduous forest-type abundances across various canopy height levels influence multispectral satellite data, and we compare the performance of spectral data acquired during peak and late summer. The use of UAV-derived fine-scale labeled data ensures the representativeness of landscape variability, enabling robust model development and validation. This approach supports large-scale mapping of boreal forest physical parameters across Alaska and Northwestern Canada, advancing understanding of forest structure and its ecological implications.
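Aggregating segmented trees into the per-patch metrics described above can be sketched as follows (the 5 m height break and the metric names are illustrative assumptions, not the study's definitions):

```python
from collections import Counter

def patch_metrics(trees, height_break=5.0):
    """Summarise segmented trees inside one 20 x 20 m patch.

    `trees` is a list of (forest_type, height_m) tuples, with forest_type
    in {"deciduous", "evergreen"}. The 5 m height break separating 'tall'
    trees is an illustrative choice, not the study's."""
    n = len(trees)
    counts = Counter(t for t, _ in trees)
    tall = sum(1 for _, h in trees if h >= height_break)
    return {
        "deciduous_fraction": counts["deciduous"] / n,
        "evergreen_fraction": counts["evergreen"] / n,
        "tall_tree_fraction": tall / n,
    }

# Four invented trees inside one patch.
trees = [("evergreen", 8.2), ("evergreen", 3.1),
         ("deciduous", 6.0), ("deciduous", 2.4)]
metrics = patch_metrics(trees)
```

Per-patch dictionaries like this one would form the training targets that the regression models relate to Sentinel-2 and Landsat spectra.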

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Novel Remote Sensing-Based Tree Species Product With 10 m Resolution for Germany

Authors: Marco Wegler, Frank Thonfeld, Stefanie Holzwarth, Niklas Jaggy, Patrick Kacic, Ursula Gesser, Prof. Dr. Claudia Kuenzer
Affiliations: German Remote Sensing Data Center, Earth Observation Center, EOC of the German Aerospace Center, DLR, Institute for Geography and Geology, University of Wuerzburg
Forests are ecologically important and play a vital role in the global ecosystem. They act as natural carbon sinks, regulate climate patterns, improve soil quality and provide a range of environmental services. Global warming poses a significant threat to these ecosystem services. A fundamental understanding of the tree species composition in forest ecosystems is essential to ensure their long-term conservation. Tree species contribute to biodiversity, support diverse wildlife habitats and ensure the resilience of forest ecosystems to environmental changes and disturbances. The assessment of species distribution is also essential to support forest management and adaptation to climate change. In Germany, knowledge about the tree species distribution is based on data from the National Forest Inventory (NFI), which is conducted every ten years and provides a comprehensive and valuable record of many ecologically and economically relevant forest parameters. Existing national tree species products derived from remote sensing rely on this inventory data. The geolocation information of the inventory data is not publicly available due to privacy concerns and to protect the dataset from unintentional disturbance or manipulation. Additionally, NFI data collection is inherently time-consuming, as it requires the collection of extensive details for each NFI plot. However, information on individual tree species can be obtained from alternative sources. In this study, more than 40,000 reference samples of tree species were collected using a variety of resources, including forest operational maps, urban tree registers, Google Earth Pro, Google Street View, and our own field inspections. By combining these data with remote sensing data from the Sentinel-2 and Sentinel-1 satellites and a digital elevation model (Copernicus DEM GLO-30), we created several classification models to differentiate ten common tree species classes.
Using DLR’s high-performance cluster, terrabyte, we evaluated the performance of various model configurations and data combinations. This included a detailed comparison of widely used machine learning algorithms, XGBoost and Random Forest, as well as an evaluation of the impact of optical, SAR and environmental data in the context of tree species delineation. To ensure a robust evaluation, validation data were collected independently from the training data using the same sampling method. This resulted in an additional dataset of 15,000 validation samples, distributed across the country and covering a wide range of taxonomic diversity. The XGBoost classifier, when paired with Sentinel-2 datasets, delivered the best results. The optimized model achieved class-specific accuracies of 82–95% for the four dominant tree species: pine, spruce, beech, and oak. For other common species, including birch, alder, larch, Douglas-fir, and fir, the accuracy ranged from 72% to 92%. Challenges remain with pixels in highly mixed forest stands, as forests are very complex ecosystems that are difficult to generalize. To optimize the use of remote sensing for continuous monitoring of tree species, it is crucial to ensure the availability of high-quality in situ data, especially for diverse, highly structured, heterogeneous forest environments. German forests are undergoing rapid change, especially in regions affected by recent disturbances since 2018. This underscores the urgent need for cost-effective and easily repeatable methods for monitoring tree species. The availability of precise nationwide data on tree species enables the generation of species-specific analyses for the German forest. These provide the basis for developing a climate-resilient future for the forest.
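The class-specific accuracies quoted above are per-class fractions of correctly classified validation samples. A minimal sketch of the computation (the confusion counts are invented for illustration; the real evaluation covers ten classes):

```python
def class_accuracies(confusion):
    """Per-class producer's accuracy from a confusion dict
    mapping (true_label, predicted_label) -> sample count."""
    classes = {t for t, _ in confusion}
    return {
        c: confusion.get((c, c), 0)
           / sum(n for (t, _), n in confusion.items() if t == c)
        for c in classes
    }

# Invented two-class confusion counts for illustration.
confusion = {("pine", "pine"): 90, ("pine", "spruce"): 10,
             ("spruce", "spruce"): 85, ("spruce", "pine"): 15}
acc = class_accuracies(confusion)
```

Reporting accuracy per class, rather than only overall, is what reveals the gap between dominant species and harder classes in mixed stands.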

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Estimating Vascular Plant Diversity in the Understory of Temperate Mountain Forests Using Airborne LiDAR and Sentinel-2

Authors: Felix Glasmann
Affiliations: Technical University of Munich, School of Life Sciences, Forest and Agroforest Systems
In the face of accelerating Global Change, the United Nations introduced the Sustainable Development Goals (SDGs), which aim to ensure sustainable development at the economic, social and ecological levels worldwide. One of the declared goals (SDG 15) is to protect terrestrial ecosystems, including forest ecosystems and their biodiversity. Floristic biodiversity in forests plays a crucial role in maintaining ecosystem functions by regulating nutrient cycles and microclimates, and by providing forage and shelter for a variety of other forest-dwelling species. In this sense, the understory facilitates forest regeneration, and higher floristic biodiversity can enhance forest health and forest resilience to disturbances. While forest canopy diversity can be sensed directly from an increasing amount of freely available remote sensing data by analyzing spectral variability and canopy structural information, it is still challenging to estimate the diversity of the undergrowth. Mapping the diversity of the understory usually depends on remote sensing proxies related to the abiotic growth environment in the forest understory. Light availability and light variability in particular are expected to be strong drivers of species composition. In this context, we investigate the relationship between canopy structure, the local light regime and the diversity of vascular plants in the understory of temperate mountain forests, and how airborne LiDAR and Sentinel-2 can be used to map these relationships over large extents. To do so, we set up 250 plots distributed along a gradient of forest succession and elevation in a strictly protected landscape as well as two managed landscapes in the Bavarian Alps, Germany. At each plot, light availability was measured at 2 m height at 5 spots within a 12.6 m radius (equal to 500 m²) using hemispherical photography, from which the average (light availability) and standard deviation (light variability) were derived.
Furthermore, we calculated canopy openness from airborne LiDAR, as well as several spectral-temporal indices from Sentinel-2 time series. To quantify vascular plant diversity in the understory, we recorded species identities on 200 m² around the plot center, using the Londo decimal scale, from which we then calculated species richness. After examining the relationships between canopy structure, light regime and species richness, we used the best model to map species richness across the landscape. Our results provide evidence for a robust relationship between canopy structure, light regime and vascular plant diversity in the understory. We find the same sigmoidal relationship between light availability and canopy structure for both airborne LiDAR- and Sentinel-2-derived metrics, where light availability increases with canopy openness/increasing spectral indices. For the relationship between light variability and canopy structure, we find a bell-shaped relationship for both airborne LiDAR-based canopy openness and Sentinel-2-based spectral indices, suggesting that light variability peaks at moderate canopy openness. We also find a consistent relationship between vascular plant diversity and light availability and variability, suggesting that both LiDAR-derived canopy openness and Sentinel-2 spectral indices can be used as proxies for understory vascular plant diversity. The results of our study give insights into how data from airborne LiDAR and Sentinel-2 can help estimate vascular plant diversity in the forest understory, which might help to create continuous maps of understory biodiversity and thus to adapt management practices and conservation efforts to protect forest ecosystems from the challenges of Global Change.
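The two light-regime variables are simply the mean and spread of the five per-plot hemispherical-photography readings. A minimal sketch (the readings are invented; whether the study used the population or sample standard deviation is not stated, so the population form is shown as an assumption):

```python
from statistics import mean, pstdev

def light_regime(readings):
    """Aggregate the five per-plot canopy-openness readings into
    light availability (mean) and light variability (std. deviation)."""
    return mean(readings), pstdev(readings)

# Five illustrative canopy-openness fractions from one plot.
availability, variability = light_regime([0.32, 0.28, 0.40, 0.35, 0.30])
```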

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Mapping French Trees for the Last Decade

Authors: Fajwel Fogel, Martin Schwartz, David Purnell, Gabriel Belouze, Arthur Calvi, Ibrahim Fayad, Agnès Pellisier-Tanon, Alexandre d’Aspremont, Loïc Landrieu, Philippe
Affiliations: LSCE
Dynamic monitoring of forests is crucial to estimate above-ground carbon stock changes and manage forests. Forest losses are typically reported at an annual frequency, from local to country scale. Several recent studies have shown that it is possible to use deep learning models to accurately map canopy height from satellite imagery with LiDAR data supervision. However, access to data, computing infrastructure and software capabilities limits the diffusion and use of these methods at a large scale. Only a few products provide canopy height maps at tree level, i.e., with meter or sub-meter resolution, and then with only one estimate or for a local extent. We release annual height maps at tree-level resolution (1.5 m) for the full French metropolitan territory from 2013 to 2023 (551,695 km²). Following the work of [1], we estimate height from SPOT 6-7 imagery with a hierarchical transformer model (PVTv2) trained with LiDAR HD (IGN) supervision. We then post-process the height maps with a spatio-temporal filter to increase the robustness of our estimates, and compute annual tree losses from successive height maps. We show that tree-level resolution is necessary to correctly quantify losses: we see a significant increase in estimated losses when going from 1.5-meter resolution to 30-meter resolution, which is the resolution of Landsat-derived products (e.g., [2]). We validate our results on three different areas where we have access to LiDAR data with revisits, and with ground data from the French National Inventory. Our maps enable us to detect small disturbances such as selective logging, and provide unprecedented accuracy for large disturbances like clear cuts or fires, especially in mountainous areas. Moreover, we show how the same methodology can be applied at an even higher resolution (50 cm) to map shrubs and trees outside forests, such as hedges, which are crucial for biodiversity and agroforestry.
References: An extensive bibliography for canopy height mapping can be found in: [1] Fajwel Fogel, Yohann Perron, Nikola Besic, Laurent Saint-André, Agnès Pellissier-Tanon, Martin Schwartz, Thomas Boudras et al. Open-Canopy: A country-scale benchmark for canopy height estimation at very high resolution. arXiv preprint arXiv:2407.09392, 2024. See the following recent publication for mapping European disturbances with Landsat: [2] Viana-Soto, Alba, and Cornelius Senf. The European Forest Disturbance Atlas: a forest disturbance monitoring system using the Landsat archive. Earth System Science Data Discussions 2024 (2024): 1-42.
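Computing annual tree losses from successive height maps amounts to differencing the maps pixel by pixel. A hypothetical sketch (the 5 m tree-height and 3 m drop thresholds are illustrative assumptions, not those used for the released product):

```python
def tree_loss_mask(h_prev, h_curr, min_height=5.0, min_drop=3.0):
    """Flag a pixel as 'loss' when it held a tree (height >= min_height)
    in year t-1 and its height dropped by at least min_drop in year t.
    Thresholds are illustrative, not the paper's."""
    return [
        [h0 >= min_height and (h0 - h1) >= min_drop
         for h0, h1 in zip(row0, row1)]
        for row0, row1 in zip(h_prev, h_curr)
    ]

# Toy 2 x 2 canopy height maps (metres) for two consecutive years.
h_2022 = [[12.0, 4.0], [8.0, 15.0]]
h_2023 = [[2.0, 0.5], [7.5, 14.0]]
loss = tree_loss_mask(h_2022, h_2023)
```

At 1.5 m resolution the mask isolates individual felled trees; aggregating the same maps to 30 m pixels before differencing is what inflates apparent loss areas.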

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: C-Band SAR Interferometry in Boreal and Temperate Forests: Assessing the Influence of Temporal Decorrelation

Authors: Marc Herrera-Giménez, Carlos López-Martínez
Affiliations: Universitat Politécnica de Catalunya (UPC), Institut d'Estudis Espacials de Catalunya (IEEC)
Since the adoption of Synthetic Aperture Radar (SAR) data for forestry applications, successful applications of SAR polarimetry and Polarimetric SAR Interferometry (PolSAR and PolInSAR, respectively) have been achieved for the retrieval of forest variables, including forest height, growing stock volume, and above-ground biomass [1]. However, widespread adoption of PolInSAR-based approaches in operational forestry faces notable limitations, specifically the requirements for full polarimetry, a controlled non-zero baseline between SAR acquisitions, and a short revisit time to mitigate the impact of temporal decorrelation on backscatter between repeat-pass interferometric acquisitions. The demonstrated potential of SAR technology over recent decades, particularly SAR interferometry (InSAR) approaches, has driven the development of novel SAR missions. Emerging requirements, including shorter revisit times, smaller or zero baselines, and the use of different frequency bands for specific study cases or multifrequency applications, have shaped missions such as BIOMASS, ROSE-L, NISAR, or Sentinel Next Generation. To date, numerous studies have considered C-band SAR interferometry for forest mapping. Investigations leveraging Sentinel-1 data are presented in [2] and the SinCohMap project [3], while studies involving SAOCOM data are exemplified by [4]. These works demonstrate the effectiveness of InSAR techniques for the retrieval of forest parameters. Among the widely adopted models, Cloude and Papathanassiou [1] introduced the Random-Volume-over-Ground (RVoG) model to characterize vegetation volume using PolInSAR data. Subsequently, this model was expanded by [5] to incorporate the random motion of scatterers, resulting in the Random-Motion-over-Ground (RMoG) model. A later development generated a global coherence map, underscoring the potential of utilizing InSAR coherence data worldwide to derive forest variables.
This work investigates the viability of employing zero-baseline InSAR techniques for extracting forestry information. Our proposal considers forest variable retrieval employing signal models that exploit the temporal decorrelation observed in InSAR coherence in the presence of vegetation. Results are presented for different test regions across the globe: two in Finland, one in the centre of the country and one in the south; one in Traunstein, Germany; and two supplementary areas in Canada, one close to Ottawa, predominantly covered by forests, and another near Grande Prairie, characterized by a mix of forested areas and cultivated land. Two primary data sources are used as ground reference data. Forest tree height (TH) comes from a 2022 study by Lang et al. [6], who generated a canopy height model of the Earth using data fusion of Sentinel-2 and GEDI. The second, for above-ground biomass (AGB), comes from the ESA Biomass Climate Change Initiative, derived from ALOS-2 and Sentinel-1 data. Considering the initial hypothesis that the primary factor influencing the decorrelation of InSAR coherence is temporal effects, we conduct an analysis using the temporal decorrelation model of [7]. Initially, coherences for 6-, 12-, 18-, and 24-day intervals are computed. Using Equation (1), we extract the decay coefficient tau and analyse it with respect to biomass and tree height. An important aspect to consider is that the minimum revisit time available with Sentinel-1, considering Sentinel-1A and 1B, is 6 days; for the computation of tau, all the 6-, 12-, 18- and 24-day coherences are considered. Combining them, the exponential decay is estimated, providing an approximation of the information that would correspond to decay rates smaller than 6 days.
Hence, it is possible to obtain tau values smaller than 6 days. For the Finnish test sites, both biophysical variables show a clear decreasing trend with respect to the tau value. The biophysical variables span their full range of possible values across these sites, resulting in mean taus of around 6 to 7 days for one area and 4.5 to 5 days for the second, with the smallest values around 3 to 3.5 days. On the German test site, by contrast, the amount of biomass is quite low while the distribution of heights is broader. A decreasing trend is seen for both parameters, with values between 4.5-6 days and 2 days, yet in this area tau saturates for the higher values and does not provide extra information. Lastly, in the Canadian test sites, the distribution of heights and biomass in Grande Prairie is quite heterogeneous, making it possible to appreciate the decreasing trend relating tau to these parameters without much saturation, with values ranging from 7 down to 3 days. The area close to Ottawa is the opposite case: the test site is mostly covered by very dense, tall trees, giving rise to a very narrow distribution of forest height with a reduced number of samples for low vegetation, and resulting in tau values in the 2-3 day range. The results of this study support the initial hypothesis, asserting a discernible correlation between the temporal decorrelation in zero-baseline InSAR coherence and the forest biophysical variables, especially for C-band data. Consequently, temporal decorrelation encapsulates valuable biophysical information for such targets. Furthermore, it becomes feasible to quantify biomass and tree height through intervals of the decay rate (tau), wherein higher tau values correspond to shorter trees, while smaller tau values align with taller trees.
These outcomes exhibit robust consistency across diverse study areas, illustrating a pronounced descending trend between the mentioned parameters and the decay rate. Remarkably, by employing a combination of 6-, 12-, 18-, and 24-day coherences, information on these parameters can be extracted for intervals as short as 2 days. This InSAR methodology introduces a systematic and operational avenue for investigating forest diversity and changes due to forest management operations and climate change impacts, overcoming the constraints of previous techniques. Notably, it explores frequency bands previously overlooked for such applications of InSAR imagery, placing crucial emphasis on the potential of upcoming zero-baseline satellite SAR missions. [1] S.R. Cloude and K.P. Papathanassiou, "Polarimetric SAR interferometry," IEEE Trans. Geosci. Remote Sens., vol. 36, no. 5, pp. 1551–1565, 1998. [2] O. Cartus, M. Santoro, U. Wegmüller, N. Labriere, and J. Chave, "Sentinel-1 coherence for mapping above-ground biomass in semiarid forest areas," IEEE Geosci. Remote Sens. Letters, vol. 19, pp. 1–5, 2022. [3] F. Vicente-Guijalba et al., "SinCohMap: Land-cover and vegetation mapping using multi-temporal Sentinel-1 interferometric coherence," in Proc. 2018 IEEE Intl. Geosci. Remote Sens. Symp., 2018, pp. 6631–6634. [4] S. A. Seppi, C. López-Martínez, and M. J. Joseau, "Assessment of L-band SAOCOM InSAR coherence and its comparison with C-band: A case study over managed forests in Argentina," Remote Sens., vol. 14, no. 22, 2022. [5] M. Lavalle and S. Hensley, "Extraction of structural and dynamic properties of forests from polarimetric-interferometric SAR data affected by temporal decorrelation," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 9, pp. 4752–4767, 2015. [6] N. Lang, W. Jetz, K. Schindler, and J. D. Wegner, "A high-resolution canopy height model of the Earth," 2022. [7] R. Bamler and P. Hartl, "Synthetic aperture radar interferometry," Inverse Problems, vol. 14, 1998, R1–R54.
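Equation (1) is not reproduced in the abstract; assuming the standard exponential temporal-decorrelation model gamma(t) = gamma0 * exp(-t/tau), the decay coefficient tau can be fitted to the 6-, 12-, 18- and 24-day coherences by log-linear least squares:

```python
import math

def fit_tau(days, coherences):
    """Least-squares fit of gamma(t) = gamma0 * exp(-t / tau) in log space:
    ln(gamma) = ln(gamma0) - t / tau, so tau = -1 / slope."""
    logs = [math.log(g) for g in coherences]
    n = len(days)
    mt, mg = sum(days) / n, sum(logs) / n
    slope = sum((t - mt) * (g - mg) for t, g in zip(days, logs)) \
        / sum((t - mt) ** 2 for t in days)
    return -1.0 / slope

# Synthetic coherences generated from the assumed model with tau = 5 days,
# gamma0 = 0.9 (illustrative values, not measured data).
tau_true = 5.0
days = [6, 12, 18, 24]
coh = [0.9 * math.exp(-t / tau_true) for t in days]
```

Because the fit extrapolates the exponential decay below the shortest 6-day baseline, tau estimates shorter than 6 days, as reported for the denser sites, are obtainable.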

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Global long-term (1988-2021) aboveground biomass dataset

Authors: Zhen Qian, Sebastian Bathiany, Teng Liu, Lana Blaschke, Niklas Boers
Affiliations: Earth System Modelling, School of Engineering and Design, Technical University of Munich, Potsdam Institute for Climate Impact Research, School of Systems Science and Institute of Nonequilibrium Systems, Beijing Normal University, Department of Mathematics and Global Systems Institute, University of Exeter
Vegetation aboveground biomass (AGB) refers to the organic matter above the soil surface and is a key indicator of the carbon storage capacity of terrestrial ecosystems. Systematic monitoring of global AGB patterns and understanding their dynamics provides essential insights for climate change mitigation and adaptation strategies. Many initiatives have been undertaken to quantify the amounts and spatial distribution of AGB using field surveys and advanced Earth observation technologies. These efforts have produced AGB maps at various spatial scales and across different regions, representing significant progress in understanding global and regional AGB patterns. Yet, these approaches face limitations, such as sparse field measurement networks and technical challenges inherent to specific remote sensing techniques. For instance, missions like GEDI are limited by their relatively late launch dates, resulting in constrained temporal coverage, while optical remote sensing technologies, despite their longer operational histories, struggle with limited penetration in high-biomass regions. Consequently, generating globally consistent AGB time series remains a challenge. Vegetation optical depth (VOD), retrieved from microwave Earth observations, offers a promising alternative for monitoring AGB time series. As VOD reflects the total water content stored in vegetation, it is closely linked to vegetation density, abundance, and AGB. These characteristics have enabled previous studies to establish spatial relationships between annual aggregated VOD, particularly at L-band, and AGB. For estimating interannual AGB dynamics, empirical equations are often developed to calibrate spatial VOD-AGB relationships using static AGB maps, which are then applied to annual VOD time series based on the space-for-time substitution. However, challenges remain, particularly in capturing interannual AGB variability. 
First, most estimation methods are empirical (e.g., a logistic function) and univariate, relying solely on single-band VOD, which limits their ability to accurately represent AGB distributions across heterogeneous biomes and under varying environmental conditions. Second, current VOD-derived AGB datasets primarily span recent decades, failing to reflect carbon dynamics over longer historical periods. For instance, widely used L-band VOD datasets, such as those from SMOS and SMAP, typically begin around 2010. While other studies, such as Liu et al. (2015), have utilized Ku-band VOD from LPRM to estimate AGB for earlier periods (e.g., 1993–2012), the temporal coverage of these datasets remains limited. Moreover, the reliance on different frequency bands introduces inconsistencies among AGB products, making it challenging to seamlessly integrate these datasets for reconstructing long-term AGB dynamics. Here, we developed a global, long-term (1988–2021) AGB product with an approximate spatial resolution of 25 km by utilizing harmonized CXKu-band VOD from VODCA v2, integrating additional Earth observation and environmental datasets, and applying advanced nonlinear machine learning models. The model was trained using ESA's CCI AGB maps as ground truth, enabling the detection of complex VOD-AGB relationships across diverse biomes and environmental conditions. The resulting AGB product demonstrates high spatiotemporal consistency with ESA AGB, achieving an overall RMSE of less than 5 Mg ha⁻¹ and a temporal agreement with a Kendall's tau coefficient of 0.71. Additionally, our AGB dataset shows high agreement with other AGB references and vegetation parameters, highlighting its reliability across multiple ecological contexts. Spanning nearly three decades, this dataset provides a promising resource for analyzing terrestrial biomass dynamics over time.
It also offers opportunities to investigate the drivers of AGB changes, including links to extreme climate events, land-use changes, and anthropogenic disturbances.
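The temporal-agreement figure quoted above is a Kendall rank correlation between AGB time series. A minimal tau-a sketch (the study may well use a tie-corrected variant such as tau-b; the series below are invented):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs.
    Ties contribute to neither count in this simple variant."""
    conc = disc = 0
    for (xi, yi), (xj, yj) in combinations(list(zip(x, y)), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    n = len(x)
    return (conc - disc) / (n * (n - 1) / 2)

# Two invented annual AGB series whose ranks agree perfectly.
agreement = kendall_tau([1, 2, 3, 4], [10, 20, 30, 40])
```

A tau of 1 means the two series always rank years identically; the reported 0.71 indicates strong but imperfect rank agreement with the ESA reference.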

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Mapping of Nepal’s Forest Cover Using High Resolution Satellite Data and Forest Change Inside Community Forests

Authors: Thang LE, Nicolas DELBART, François LIBOIS, Marine GUEBEN
Affiliations: LIED - Université Paris Cité, Paris School of Economics, Namur University
Since its establishment in the 1970s, the Community Forest User Groups (CFUGs) program has played an important role in forest conservation and use in Nepal. Nowadays, nearly 35% of the country's forests are under the management of more than 20,000 CFUGs, which regulate the individual use of forest products. We aim to evaluate the impact of the CFUG program on the forest cover, first by comparing the current status of wooded cover inside and outside the community forests, and second by evaluating the temporal changes of the forest cover in the CFUGs over the last decades. To reach these two objectives, we use a range of remote sensing data. We also use a map of 1,800 CFUGs that was produced in the framework of the GolFor project, resulting from the digitization of the paper CFUG operation plans collected in the forestry offices of 14 districts. This study uses machine learning methods to map the forest cover across several districts of Nepal using multispectral image time series (Sentinel-2, Landsat 8 and 9) and PALSAR radar data in the early 2020s. Results demonstrate the capability of optical images to overcome the difficulties of topography, cloud, and vegetation dynamics in Nepal, provided four-year time series are used to discriminate the respective phenologies and growth cycles of the various non-forest covers (e.g. grass, crops, banana trees). In addition, radar data complement the optical data for mapping forests in the lowlands. By comparison with the existing land cover maps of Nepal provided by the International Centre for Integrated Mountain Development (ICIMOD), the study demonstrates significant improvements in distinguishing between forests and other vegetation covers (elephant grass, banana plantations, agricultural terraces), which are the main causes of incorrect forest detection in existing land cover products. 
Additionally, the high accuracy of the forest cover maps was verified with validation samples taken from very-high-resolution images, as well as with in-situ data and the photographic dataset captured during field trips in Nepal between 2018 and 2024. The method allows us to evaluate the spatial distribution of the forest cover and its changes since the mid-2010s within and outside the 1,800 CFUG borders, showing the efficiency of the CFUG program in preserving and regenerating the forest cover. As this method relies on Sentinel-2 data, it cannot be directly used to assess forest cover changes over earlier decades. However, it shows that a time series approach, mobilizing up to four years of data to produce a land cover map, removes most of the confusion. It is therefore being implemented on historical satellite data (Landsat 4 and 5) to document the relative difference in the temporal changes of the forest cover inside and outside the CFUGs.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: A Bayesian Deep Learning approach for the estimation of Forest Parameters from Interferometric SAR images

Authors: Daniel Carcereri, Federico Ghio, Stefano Tebaldini, Dr. Paola Rizzoli
Affiliations: German Aerospace Center (DLR), Politecnico di Milano
The detection and analysis of forest disturbances is of paramount relevance for monitoring efforts related to climate change and biodiversity preservation [1]. This is typically achieved by the repeated measurement and estimation of key forest observables. The relative change rate of the observables over time captures the dynamics of the forest, which can be used to infer information about the environment and its state. In this context, the task of uncertainty estimation becomes of comparable relevance to the estimation of the forest observables themselves, since a true change can only be reliably detected when its magnitude exceeds the uncertainty of the underlying estimation process. Deep learning (DL)-based strategies have proven effective for the large-scale estimation of forest parameters, achieving state-of-the-art regression accuracies from SAR and interferometric SAR (InSAR) measurements [2]. Commonly, these models represent the solution to an optimization problem which seeks the set of parameters that maximizes some objective function (typically the likelihood or the posterior distribution), but they fail to explicitly incorporate notions of aleatoric and model uncertainty. In this work we propose a Bayesian Deep Learning (BDL)-based architecture for the estimation of forest parameters and their uncertainty from TanDEM-X single-pass InSAR measurements. In our approach, we explicitly predict the aleatoric uncertainty alongside the forest observable, i.e. forest height, and incorporate the notion of epistemic uncertainty by adopting the Stochastic Weight Averaging-Gaussian (SWAG) optimization algorithm [3]. The model is trained and validated considering the estimation performance and uncertainty calibration against airborne LiDAR-derived reference measurements. Crucially, we also assess the reliability of the uncertainty estimates over out-of-distribution (OOD) samples [2]. 
Finally, we deploy the model to generate a country-scale forest height change map product for Gabon, including the associated uncertainty and reliability maps. The proposed work addresses the practical necessity of delivering uncertainty assessments alongside forest parameter estimates. Crucially, relying on the Bayesian paradigm allows for the explicit definition of prediction trustworthiness, the reliable measurement of forest dynamics and the intuitive merging of multiple sources of information. This is especially relevant for the large suite of heterogeneous and freely available satellite products acquired both by existing systems, such as Sentinel-1 and Sentinel-2, and by upcoming ones, such as Biomass, ROSE-L and Harmony. Bibliography: [1] Global Forest Resources Assessment 2020. FAO, 2020. doi: 10.4060/ca9825en. [2] D. Carcereri, P. Rizzoli, L. Dell'Amore, J.-L. Bueso-Bello, D. Ienco, and L. Bruzzone, "Generation of country-scale canopy height maps over Gabon using deep learning and TanDEM-X InSAR data," Remote Sensing of Environment, vol. 311, p. 114270, Sep. 2024, doi: 10.1016/j.rse.2024.114270. [3] W. Maddox, T. Garipov, P. Izmailov, D. Vetrov, and A. G. Wilson, "A Simple Baseline for Bayesian Uncertainty in Deep Learning," Dec. 31, 2019, arXiv:1902.02476. [Online]. Available: http://arxiv.org/abs/1902.02476
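Predicting aleatoric uncertainty alongside the observable, as described above, is commonly done by having the network output a mean and a log-variance per pixel and training with a heteroscedastic Gaussian negative log-likelihood. The following is a hypothetical sketch of such a loss for a single sample (an assumption about the general technique, not the authors' implementation):

```python
import math

def gaussian_nll(y_true, mu_pred, log_var_pred):
    """Per-sample negative log-likelihood of a Gaussian whose mean and
    log-variance are both predicted by the network. Minimizing this loss
    makes the predicted variance a usable aleatoric-uncertainty estimate
    alongside the forest-height prediction. Hypothetical sketch only."""
    var = math.exp(log_var_pred)
    return 0.5 * (log_var_pred + (y_true - mu_pred) ** 2 / var
                  + math.log(2 * math.pi))

# An accurate prediction with unit variance scores better than a wrong one,
# and a wrong prediction is penalized less if it also claims high variance.
good = gaussian_nll(20.0, 19.5, log_var_pred=0.0)
bad = gaussian_nll(20.0, 10.0, log_var_pred=0.0)
hedged = gaussian_nll(20.0, 10.0, log_var_pred=4.0)
print(good < bad and hedged < bad)  # -> True
```

The log-variance term discourages the network from claiming large uncertainty everywhere, which is what gives the predicted variance its calibration pressure.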
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: MMTSCNet - Multimodal Tree Species Classification Network for the Classification of Multi-Source Single Tree LiDAR Point Clouds

Authors: Jan Richard Vahrenhold
Affiliations: Technical University Würzburg-Schweinfurt (THWS)
The automated classification of tree species is the foundation for large-scale automated forest inventory and monitoring applications. The use of airborne LiDAR data has seen a significant increase in the recent past, as it is able to penetrate the forest canopy and yields geometric information about the trees within and below the canopy. While state-of-the-art approaches often derive features from LiDAR point clouds for classification applications, recent developments have led to architectures that operate directly on the LiDAR point clouds. While both approaches have distinct advantages, the architecture developed in this work, the Multimodal Tree Species Classification Network (MMTSCNet), combines both methods in a multimodal approach. MMTSCNet was evaluated using a dataset of Airborne Laser Scanning (ALS) data captured under leaf-on conditions, and Unmanned Aerial Vehicle Laser Scanning (ULS) data captured under both leaf-on and leaf-off conditions. The dataset was acquired near the cities of Karlsruhe and Bretten in Baden-Württemberg, Germany. For each captured forest plot, Full-Waveform (FWF) data was provided in a separate dataset and utilized to develop additional features for the individual trees. The experimental results show that MMTSCNet achieved a high classification accuracy and quality for up to 7 different tree species (Overall Accuracy (OA) for ALS: 91.2%, OA for ALS + FWF: 91.8%, OA for ULS under leaf-on and leaf-off conditions: 95.4% and 95.8%, OA for ULS + FWF under leaf-on and leaf-off conditions: 96.1% and 96.3%). While MMTSCNet was not able to surpass competing state-of-the-art architectures on the ALS data, falling short by 2.2%, it achieved an increase in overall classification accuracy for seven tree species on ULS data of up to 7.8% compared to the commonly employed PointNet++. 
MMTSCNet leverages a modular architecture, balancing classification performance and computational efficiency while being able to directly process LiDAR point clouds and automatically engineered features.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Multitemporal fraction images derived from Sentinel-2 data to assess land use land cover changes in Mato Grosso state, Brazilian Amazon

Authors: Yosio Edemir Shimabukuro, Dr Egidio Arai, Andeise Cerqueira
Affiliations: Instituto Nacional De Pesquisas Espaciais
A priori knowledge of the land use and land cover is essential for the correct detection of forest degradation by fire and by selective logging. This paper presents a method, developed with minimal pre-processing, to assess the extent of land use and land cover in Mato Grosso state, located in the Brazilian Amazon region. The proposed method is based on the Linear Spectral Mixing Model (LSMM) applied to Sentinel-2 images to derive vegetation, soil and shade fraction images for regional analysis. Fraction images can be used for mapping land use and land cover due to the following characteristics: a) vegetation fraction images highlight the forest cover conditions and allow differentiating between forest and non-forest areas, similarly to vegetation indices such as the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI); b) shade fraction images highlight areas with low reflectance values such as water, shade and burned areas; and c) soil fraction images highlight areas with high reflectance values such as bare soil and clear-cuts. We used 20 m monthly composites of MSI Sentinel-2 datasets over the study area for regional analysis. Then, we built the composites corresponding to the three endmembers with the highest fraction values during the period of July 2023 to June 2024 for Mato Grosso state, based on the assumption that: a) the greatest values of the vegetation fraction highlight the areas that were used for agricultural activities during the time period; b) the greatest values of shade/water highlight the areas occupied by water bodies and the burned areas during the study period; and c) the highest values of the soil fraction enhance deforested and selectively logged areas. In this manner we could automatically map the land cover classes identified as water, forest, main crops, pasture, and savanna vegetation in the study area. 
Mato Grosso state, on the southern flank of the Brazilian Amazon region, is experiencing high deforestation rates, and forest degradation by fire has been observed particularly during extreme dry years; in this context, it is of great interest for the purposes of this study. Sentinel-2 images with spatial resolutions of 10 and 20 m, acquired in the period of July 2023 to June 2024, were used to assess the land cover changes due to forest degradation by fire and selective logging. In a first step, a land use land cover map was produced from Sentinel-2 fraction images. Then burned areas were identified and mapped from the shade fraction images. Finally, forest areas degraded by fires were mapped by combining forest areas with the resulting burned area and recent deforestation maps. Our results show the land use land cover map and the forest degradation by fire and selective logging detected during the study period. The results are compared with fieldwork information collected in the study area. The proposed approach can potentially be used operationally for detecting, mapping and monitoring land use land cover changes using Sentinel-2 datasets, for regional and local analysis.
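The LSMM at the heart of this approach models each pixel's spectrum as a linear combination of endmember spectra and inverts that relation to obtain per-pixel fractions. A small self-contained sketch with as many bands as endmembers, so the system is exactly determined (the endmember values below are invented for illustration, not the study's endmembers):

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def unmix(pixel, endmembers):
    """Invert the linear mixing model pixel = sum_i f_i * endmember_i
    for the fractions f_i (one column per endmember, one row per band)."""
    n = len(endmembers)
    A = [[endmembers[j][band] for j in range(n)] for band in range(n)]
    return solve_linear(A, pixel)

# Synthetic 3-band reflectance spectra for vegetation, soil and shade
veg, soil, shade = [0.05, 0.40, 0.30], [0.20, 0.25, 0.35], [0.02, 0.03, 0.04]
true_f = [0.6, 0.3, 0.1]
pixel = [sum(f * e[b] for f, e in zip(true_f, (veg, soil, shade)))
         for b in range(3)]
fractions = unmix(pixel, [veg, soil, shade])
print([round(f, 3) for f in fractions])  # -> [0.6, 0.3, 0.1]
```

With more bands than endmembers, as with real Sentinel-2 data, the same model is typically solved by (constrained) least squares rather than exact inversion.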
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Quantifying Forest Carbon Stocks Change Using Earth Observation Data in the State of Mato Grosso, Brazil

Authors: Erone Ghizoni Santos, Michael Keller, Ovidiu Csillik, Luciana Alves, Sassan Saatchi
Affiliations: NASA - Jet Propulsion Laboratory - Caltech, USDA Forest Service, Wake Forest University, University of California, CTrees
In the Amazon region, human activities span both small and large scales, including agriculture, livestock farming, logging, and mining. These activities drive deforestation, forest fragmentation, and fires, resulting in forest disturbance and substantial losses of carbon storage and biodiversity. The impact of these changes extends beyond the forest, disrupting ecosystems, altering local climates, and affecting indigenous and local communities who rely on these lands for their livelihoods. Addressing these challenges requires a combination of conservation policies, sustainable land-use practices, and technological advancements for monitoring and enforcement. The importance of the Amazon cannot be overstated, as it serves a vital function in maintaining global climate stability and supporting ecological diversity. Since the 1970s, through government incentives aimed at developing the region, the state of Mato Grosso has received support to convert areas that were once forests into agricultural lands. Nowadays, the state often ranks among the top Brazilian states for annual deforestation, with approximately 17% of its total area already deforested. In 2023 alone, the state had a deforestation rate of 2,086 km², the second largest among the states in the Brazilian Amazon. The drive for financial gains frequently underlies the processes leading to forest degradation, but initiatives such as Reducing Emissions from Deforestation and Forest Degradation in Developing Countries (REDD+) offer a sustainable alternative by rewarding forest owners for preserving their land. Effective implementation of REDD+ relies on accurate data to track changes in forest biomass. With the development of advanced sensors, innovative tools, and improved methodologies, forest monitoring is gaining significance, and the precision of these estimates continues to improve as technology progresses. 
We are developing a system to quantify annual forest carbon change that will allow us to investigate the importance of land use and conservation status. For Mato Grosso state, we have developed an aboveground forest carbon density map with a spatial resolution of 1 ha. This map was generated using data from the Global Ecosystem Dynamics Investigation (GEDI), Harmonized Landsat-Sentinel and SAR data. In addition, we monitored the carbon recovery after disturbance events such as logging and fire. We found that even 20 years after logging the forest is still recovering from the disturbance. For areas burned once, the carbon density 23 years after the last fire event was 115 Mg C/ha. On the other hand, for areas burned three or more times, the carbon density 13 years after the disturbance was 45 Mg C/ha, while for the same timeframe, forests burned once had 77 Mg C/ha. Our initiative, supported by NASA SERVIR-Amazonia, will provide up-to-date information and promote sustainable management practices aimed at preserving the Amazon's carbon storage and biodiversity. Additionally, this initiative can play a critical role in enhancing the forest's capacity to mitigate the effects of climate change.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Evaluating the temporal dynamics of the forest cover and forest density with medium resolution Copernicus data in Nepal

Authors: Nicolas Delbart, François LIBOIS
Affiliations: Lied - Université Paris Cité, INRAe - Paris School of Economics
Nepalese forests are subjected to various ownership regimes and regulations. Most are governmental forests, including community forests and national parks, and some are private. Under community forestry, Community Forest User Groups (CFUGs) regulate the use of forest products by local users. Thanks to these regulations, and especially to the decentralization of forest management, it is generally accepted, partly based on remote sensing land cover mapping, that the Nepalese government has managed to reverse deforestation and to promote some afforestation. However, land cover mapping does not allow verifying whether forests also change in terms of vegetation density, and thus does not permit exploring whether CFUGs induce a densification of existing forests. This study aims at exploring the possibility of using the Copernicus 1 km and 300 m leaf area index (LAI) to monitor the changes in the density of the vegetation in addition to the changes in the cover. Thus, we have analyzed the relationship between the LAI and the forest fraction inside the pixels derived from the high-resolution land cover maps. This relationship is established across the seasons within the study period (1999-2013 with SPOT VGT, 2014-2020 with PROBA-V), showing that in November forests are the only contributor to the LAI for most of the country: the spatial distribution of November LAI strictly reflects the forest cover fraction, as croplands are unvegetated at this time of the year. Under the space-for-time hypothesis, we analyze the 20-year changes in November LAI. We decompose these changes into the short-term variations, which we assume are mostly due to climate variability, and the long-term trend. We find that a part of the trend directly reflects the changes in the forest cover, but not integrally. We attribute the remaining increase to a densification of vegetation material within forests. 
Moreover, this unexplained part of the trend is higher where a larger fraction of the forest area is managed under the CFUG program, showing the efficiency of the decentralization of forest management for both forest cover and forest density.
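The decomposition into a long-term trend and short-term variability can be illustrated with a simple ordinary-least-squares fit of the November LAI series against time: the fitted line stands in for the trend and the residuals for the climate-driven variations. This is a schematic sketch with synthetic values, not the study's actual procedure:

```python
def linear_trend(values):
    """OLS slope and intercept of a series against its time index.
    The fitted line is taken as the long-term trend; the residuals are
    the short-term variability attributed to climate."""
    n = len(values)
    t = list(range(n))
    t_mean = sum(t) / n
    v_mean = sum(values) / n
    slope = (sum((ti - t_mean) * (vi - v_mean) for ti, vi in zip(t, values))
             / sum((ti - t_mean) ** 2 for ti in t))
    intercept = v_mean - slope * t_mean
    return slope, intercept

# 20 years of synthetic noise-free November LAI with a steady increase
lai = [1.0 + 0.02 * year for year in range(20)]
slope, intercept = linear_trend(lai)
residuals = [v - (intercept + slope * t) for t, v in enumerate(lai)]
print(round(slope, 4))  # -> 0.02
```

On real data the residuals are non-zero, and the interesting quantity is the portion of the slope not explained by mapped forest cover change.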
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Regionally and Globally Trained Models for Mapping Aboveground Biomass From Remote Sensing Data Fusion: A Comparison of the Capabilities of Machine Learning in 4 Different Biomes

Authors: Daniel Cendagorta, Martí Perpinyà-Vallès, Dr Aitor Ameztegui, Dr Maria José Escorihuela, Laia Romero
Affiliations: Lobelia Earth, Universitat de Lleida, isardSAT
In recent years, multiple remote sensing technologies have been used to quantify aboveground biomass (AGB) at different scales. Regional and global studies each have their advantages. The former are capable of providing higher-resolution and more accurate estimations, albeit for smaller regions; tailoring model weights and inputs to a specific biome or region has proved effective for obtaining better results. Global maps, on the other hand, provide an attempt to standardize AGB mapping at slightly to moderately lower resolutions, and tend to differ among studies. Still, a comprehensive understanding of the complex relations that rule the characterization of AGB within the distinct biomes is lacking. This study uses regionally and globally trained models together with explainability methods to disentangle these relationships, with the aim of understanding how regional models improve the accuracy of the estimations, to what point global models can provide reasonable results, and the nuances that drive the estimation of AGB. We present a data fusion approach to mapping aboveground biomass at a 10 m resolution using a convolutional neural network (U-Net) in 4 different biomes: semi-arid savannas in the Sahel region, dense tropical forests in Brazil, Mediterranean forests in coastal and pre-coastal areas of Catalonia, and temperate/boreal forests in Minnesota. GEDI L4A AGB data (25 m discrete footprints) is used to train a global convolutional neural network model for all 4 biomes and a regional one in each study area. We derived predictors from Sentinel-1 SAR, Sentinel-2 multi-spectral and Digital Elevation Model (DEM) datasets, which are common to all locations, plus bio-climatic variables in the case of the global model to provide biome-specific information. Additionally, auxiliary data such as proximity to coastlines or human-made structures are used to enhance predictions in saturation-prone areas. 
In addition to the goodness of fit to the training data from GEDI, we carried out a thorough validation of the results using in-situ data from forest inventory plots gathered in all 4 study regions. This enables a comprehensive comparison of the capability of machine learning modelling to adapt to the particular characteristics of each ecosystem. An in-depth analysis is carried out to find the most important predictor variables in each biome, as well as to assess the accuracy that can be expected across a wide range of AGB values.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: TreeCompR: Standardized tree competition analysis based on inventory data or 3D point clouds

Authors: Julia Rieder, Dr. Roman M. Link, Konstantin Köthe, Prof. Dr. Dominik Seidel, Tobias Ullmann, Anja Žmegač, Prof. Dr. Christian Zang, Prof. Dr. Bernhard Schuldt
Affiliations: Department of Remote Sensing, Institute of Geography and Geology, Julius-Maximilians-University Würzburg, Chair of Forest Botany, Institute of Forest Botany and Forest Zoology, Technical University of Dresden (TUD), University of Göttingen, Faculty of Forest Sciences and Forest Ecology, Department for Spatial Structures and Digitization of Forests, Weihenstephan-Triesdorf University of Applied Sciences, Institute for Ecology and Landscape, TUM School of Life Sciences
The severe drought of 2018/19 in Central Europe revealed the vulnerability of European beech (<i>Fagus sylvatica</i> L.), the predominant tree species in Europe's natural forest vegetation. While some individual trees showed severe crown dieback, others in the same area appeared unaffected, revealing a high heterogeneity of tree response to drought without clear patterns. This variability highlights the complex interaction of biotic and abiotic factors, including competition, microsite conditions and structural features of the vegetation, which can influence tree growth or mortality. Understanding these factors is crucial for predicting forest resilience to future climate uncertainties and for developing adaptive management strategies. This makes factors such as individual tree competition especially relevant, as it can vary over time and can be influenced by silvicultural practices. However, collecting ground-based inventory data to quantify tree competition is time-consuming and often difficult in dense forests. Furthermore, it is difficult to choose a competition index (CI), as there is no universally applicable index. We present <i>TreeCompR</i>, an open-source R package, to help users select an appropriate index to quantify tree competition. Within <i>TreeCompR</i>, users can choose between different approaches, mainly a point cloud-based approach (e.g. the search cone method) and a conventional distance-dependent approach (e.g. the Hegyi index). We show how all of these approaches can be applied, in particular the different ways to pre-process data depending on the input type: terrestrial/mobile laser scans (TLS/MLS), airborne laser scans (ALS) or traditional inventory data. To demonstrate the potential of <i>TreeCompR</i>, we used it to analyze MLS and ALS data to assess the competition of > 300 European beech trees. These are distributed along an environmental gradient across 13 forest sites in northern Bavaria, Germany. 
The calculated CIs correlate with structural parameters (e.g., crown projection area (CPA) and box dimension (D<sub>b</sub>)) of the target trees, though traditional distance-based CIs seem more effective than point cloud-based approaches for predicting these parameters. We also assessed and tested potential uncertainties associated with these methods and found a high sensitivity of CI values to the chosen radius or the number of neighboring trees, which is highly dependent on the data source and pre-processing. We therefore recommend adjusting the search radius according to the average crown radius of the target trees and visually checking pre-processed products (e.g. automatically segmented trees) for plausibility. <i>TreeCompR</i> is a valuable tool for ecologists and foresters for an efficient and accurate assessment of tree competition and structural variability in forest stands. Users can choose between using original 3D point clouds or inventory data (either ground based or derived from point clouds), depending on the research question and data availability.
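For reference, the classical distance-dependent Hegyi index mentioned above sums, over all neighbours within a search radius, the neighbour-to-target diameter ratio divided by their distance. A minimal sketch in Python (TreeCompR itself is an R package; the coordinates, diameters and radius below are invented for illustration):

```python
import math

def hegyi_index(target, neighbours, radius=10.0):
    """Classical distance-dependent Hegyi competition index:
    CI_i = sum over neighbours j within `radius` of (dbh_j / dbh_i) / dist_ij.
    Trees are (x, y, dbh) tuples; dbh in cm, coordinates in metres.
    Illustrative sketch, not TreeCompR's implementation."""
    x_i, y_i, dbh_i = target
    ci = 0.0
    for x_j, y_j, dbh_j in neighbours:
        dist = math.hypot(x_j - x_i, y_j - y_i)
        if 0.0 < dist <= radius:
            ci += (dbh_j / dbh_i) / dist
    return ci

target = (0.0, 0.0, 40.0)        # target beech, 40 cm dbh
neighbours = [(3.0, 4.0, 30.0),  # 5 m away
              (6.0, 8.0, 50.0),  # 10 m away
              (30.0, 0.0, 60.0)] # outside the search radius, ignored
print(round(hegyi_index(target, neighbours), 3))  # -> 0.275
```

The sensitivity to the radius noted in the abstract is visible here: enlarging the radius admits the third tree and changes the index, which is why the authors recommend tying the radius to the average crown radius.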
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Quantifying the magnitude and persistence of human degradation of global tropical moist forests

Authors: Clement Bourgoin, Guido Ceccherini, Dr. Christelle Vancutsem, Lilian Blanc, Mirco Migliavacca, Rene Colditz, Alessandro Cescatti, Dr. Frederic Achard
Affiliations: Joint Research Centre, CIRAD
Selective logging, fires, and edge effects are major drivers of tropical forest degradation, causing significant carbon loss and biodiversity decline (Pearson et al. 2017; Baccini et al. 2017; Barlow et al. 2016). Forest degradation occurs at a rate similar to deforestation and can even surpass it during extreme weather events (Vancutsem et al. 2021). However, the full extent of these impacts is likely underestimated, and there are significant uncertainties in measuring their long-term effects across global tropical regions (Gao et al. 2020). We quantify the magnitude and persistence of multiple types of degradation on forest structure over the pantropical belt by combining Landsat-based wall-to-wall maps of forest degradation, deforestation and regrowth from 1990 to 2022 (Vancutsem et al. 2021) with spatially discontinuous estimates of canopy height and biomass from spaceborne LIDAR (GEDI, Dubayah et al. 2020) from 2019-2022. Leveraging over 40 million GEDI samples, we estimate that selective logging and fire reduce forest height by 15% and 50%, respectively. While canopy height recovery is slow even after two decades, signs of understory regeneration following disturbance are detectable. Agriculture and road development cause a 20-30% decline in canopy height and biomass at forest edges, with lasting effects extending 1.5 km into the forest. Edge effects impact 18% (~206 Mha) of remaining tropical moist forests, an area over 200% larger than previously estimated (Brinck et al. 2017). We will delve deeper into how human-induced fragmentation exacerbates edge effects and their ecological consequences. By sampling degraded forests slated for deforestation post-GEDI data acquisition, we find that those with over 50% canopy loss, indicative of unsustainable logging and fire, are significantly more vulnerable to future clearance than less severely degraded forests. 
Our results emphasize the urgent need to prevent degradation and protect already damaged forests to fulfill the conservation commitments made at recent UN Climate Change and Biodiversity conferences. (Bourgoin et al. 2024). References Pearson, T. R. H., Brown, S., Murray, L. & Sidman, G. Greenhouse gas emissions from tropical forest degradation: an underestimated source. Carbon Balance Manage 12, 3 (2017). Baccini, A. et al. Tropical forests are a net carbon source based on aboveground measurements of gain and loss. Science 358, 230–234 (2017). Barlow, J. et al. Anthropogenic disturbance in tropical forests can double biodiversity loss from deforestation. Nature 535, 144–147 (2016). Vancutsem, C. et al. Long-term (1990–2019) monitoring of forest cover changes in the humid tropics. Sci. Adv. 7, eabe1603 (2021). Gao, Y., Skutsch, M., Paneque-Gálvez, J. & Ghilardi, A. Remote sensing of forest degradation: a review. Environ. Res. Lett. 15, 103001 (2020). Dubayah, R. et al. The Global Ecosystem Dynamics Investigation: High-resolution laser ranging of the Earth’s forests and topography. Science of Remote Sensing 1, 100002 (2020). Brinck, K. et al. High resolution analysis of tropical forest fragmentation and its impact on the global carbon cycle. Nat Commun 8, 14855 (2017). Bourgoin, C., Ceccherini, G., Girardello, M. et al. Human degradation of tropical moist forests is greater than previously estimated. Nature 631, 570–576 (2024). https://doi.org/10.1038/s41586-024-07629-0
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: RIGOR-MED: Remotely-sensed and Statistically Rigorous Monitoring of Disturbances in Mediterranean Forests

Authors: Jessica Esteban, Isabel González, Nur Algeet, Mariluz Guillén, Lucía Yáñez, José Luis
Affiliations: Agresta S. Coop.
The RIGOR-MED project (2024-2025) has been selected under the framework of the European Forest Institute (EFI) call. RIGOR-MED aims to achieve a better understanding of forest disturbances, their causes, and their impacts on Mediterranean forests. The project will compare and validate existing disturbance products, as well as internally developed change products, using satellite time series and change detection algorithms. Additionally, an open reference database of forest disturbances caused by various agents will be created through the visual interpretation of high-resolution images. This database will serve to estimate the area affected by disturbances using statistically rigorous methods. Based on the results obtained, the project aims to improve the understanding of the effectiveness of remote sensing data in monitoring and detecting changes in these ecosystems, as well as to better understand the potential and limitations of currently available products and methodologies. The project advocates for the application of open science principles to ensure broad access to the results.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Seasonal and spatial variability of Primary Productivity in Bieszczady National Park, Poland: Insights from Sentinel-3 and ECMWF data

Authors: PhD Ewa Panek - Chwastyk, Prof Katarzyna Dabrowska - Zielinska, PhD Szymon
Affiliations: Institute Of Geodesy And Cartography
This study explores the use of Copernicus Land Monitoring Service (CLMS) products to assess Net Primary Production (NPP) and Gross Primary Production (GPP) in the forested areas of the Bieszczady National Park (BNP), a protected area in southeastern Poland. Using a modified Monteith approach, NPP and GPP were derived from the CLMS Dry Matter Productivity (DMP) and Gross Dry Matter Productivity (GDMP) datasets, which are provided at a spatial resolution of 300 m and updated at 10-day intervals, integrating Sentinel-3/OLCI satellite imagery with meteorological data from the European Centre for Medium-Range Weather Forecasts (ECMWF). Sentinel-3/OLCI imagery was used to generate fraction of absorbed photosynthetically active radiation (fAPAR) composites, which served as a key input for calculating productivity. Meteorological data, including shortwave radiation, minimum and maximum temperatures, and atmospheric CO<sub>2</sub> concentrations, were sourced from the ECMWF to model environmental factors affecting productivity. Biomes within the study area were classified using land cover data from the ESA Climate Change Initiative (CCI), allowing the application of biome-specific light-use efficiency (LUE) coefficients tailored to forest ecosystems. Conversion of DMP and GDMP values, expressed in kilograms of dry matter per hectare per day (kg DM/ha/day), to NPP and GPP in grams of carbon per square meter per day (gC/m<sup>2</sup>/day) was achieved using IPCC-defined factors. Preliminary results indicate seasonal variations in NPP and GPP, with peak productivity observed during the summer months. 
Spatial analysis revealed heterogeneity in productivity within BNP, reflecting variations in forest composition, elevation, and microclimatic conditions. The dataset enables the identification of carbon flux dynamics and areas of ecological significance. The integration of CLMS products and remote sensing techniques provides a framework for monitoring carbon sequestration and ecosystem health in protected areas. This study demonstrates the potential of these productivity metrics to inform forest management and biodiversity conservation strategies, particularly in regions facing climate pressures.
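The unit conversion described above is simple arithmetic; as an illustrative sketch (the carbon fraction of 0.45 gC per g dry matter is a commonly used assumption, not necessarily the exact IPCC factor applied by the authors):

```python
def dmp_to_npp(dmp_kg_dm_ha_day, carbon_fraction=0.45):
    """Convert Dry Matter Productivity (kg DM/ha/day) to NPP (gC/m2/day).

    1 kg/ha = 1000 g / 10,000 m2 = 0.1 g/m2; then scale by the assumed
    carbon fraction of dry matter.
    """
    return dmp_kg_dm_ha_day * 0.1 * carbon_fraction

# e.g. a hypothetical summer DMP value of 120 kg DM/ha/day:
print(dmp_to_npp(120.0))  # ≈ 5.4 gC/m2/day
```

The same conversion applies to GDMP and GPP, since only the units change.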

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: BuWaL-Hessen: Search Space for Natura 2000 Habitat Types Dominated by Beech Forests in Hesse

Authors: Visweshwar Arulmozhi Nambi, Frank Franken, Dr. Carina Kübert-Flock, Dr. Manuel
Affiliations: Hessisches Landesamt Für Naturschutz, Umwelt Und Geologie
Beech is the most common tree species in Hessian forests and is crucial not only for forestry but also for nature conservation. In Hesse, three of the 45 natural habitat types protected under the European Union Habitats Directive are primarily beech-dominated and can be identified by their habitat codes 9110, 9130, and 9150. While beech trees are prevalent, these habitats are also characterized by accessory tree species, underlying herbaceous layers, and climatological, topographical, and geological factors, complicating habitat type identification directly through remote sensing. Therefore, this project aims to create a “search space” identifying potential locations of these habitat types within Natura 2000 sites by combining beech trees detected in Hesse using remote sensing techniques with soil and geological data, as a means to support the Hessian terrestrial habitat mapping (Hessische Lebensraum- & Biotopkartierung, HLBK). To map broadleaf tree species in Hesse, four Sentinel-2 (S2) mosaics, along with derived vegetation indices and a Digital Elevation Model (DEM), were utilized. Additionally, image-sharpening techniques using the 10 m S2 bands to enhance the 20 m bands were employed, to evaluate their effect on classification accuracy. A broadleaf forest mask for Hesse was created using the Copernicus High Resolution Layer Forest Type 2018 and Hessen Forst’s (HF) broadleaf mask. “Pure” species stands from HF’s forest management data served as ground truth. We classified seven tree species, including beech, using Random Forest. We found that image sharpening reduced tree species classification accuracy by ~14% and hence did not use it further. Overall accuracy based on spatial cross-validation for the final classification was ~90%, with beech achieving the highest F1 score of 95. Further comparisons based on HF’s sampling points with other studies showed better classification results for beech (and alder) than Blickensdörfer et al. (2024) within Hesse.
We also found an overall agreement of ~75% between the two maps for common areas and species. Currently, we are testing soil and geological data from various sources (such as soil and geological maps from HLNUG, site maps from HF, etc.) against available habitat data to identify any underlying relationships. Subsequently, the search space for the habitats will be created by combining information from the tree species map with the tested soil maps to infer the habitat types. HLNUG receives valuable information about potential locations of habitat types, which would be useful for optimizing planning and resource allocations for HLBK. Our broadleaf map could help HF as a basis for deriving species-specific biophysical parameters by integrating it with other data sources, and also provides information on species distribution outside state-managed forests. While remote sensing applications for forest habitat type mapping are not as advanced as other forestry-related applications, our project aims to demonstrate how remote sensing can complement expert knowledge, and optimize project planning when it comes to habitat type mapping. We welcome any constructive feedback on the proposed methods, their potential limitations, or insights from the participants, especially if they have had any prior experience from similar studies or projects. This project highlights how forests are being considered under the Habitats Directive and illustrates how remote sensing can be a valuable tool in bridging the interests of forestry and nature conservation. Reference 1. Blickensdörfer, L., Oehmichen, K., Pflugmacher, D., Kleinschmit, B., & Hostert, P. (2024). National tree species mapping using Sentinel-1/2 time series and German National Forest Inventory data. Remote Sensing of Environment, 304, 114069.
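As a minimal illustration of the classification workflow described above, a Random Forest evaluated with grouped cross-validation can approximate spatial cross-validation, since pixels from the same spatial block are always held out together (the toy features, the use of scikit-learn, and `GroupKFold` are illustrative assumptions, not the project's implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
# Toy stand-ins for per-pixel features: S2 band reflectances, indices, DEM height
X = rng.random((600, 12))
y = rng.integers(0, 7, 600)        # seven tree species classes
tiles = rng.integers(0, 10, 600)   # spatial blocks, held out as whole groups

# Grouped CV approximates spatial cross-validation: no block contributes
# pixels to both the training and the test fold of any split.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=tiles)
print(scores.mean())
```

With random labels as here the score is near chance; the point is the grouping mechanics, not the accuracy.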

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Detection of Temperate Forest Disturbances Based on Sentinel-1 SAR Time Series Analyses

Authors: Marius Rüetschi, Dr. Lars T. Waser, Dr. David Small, PD Dr. Andrea Manconi, Prof. Dr. Livia Piermattei, Christian Ginzler
Affiliations: Swiss Federal Research Institute WSL, University of Zurich, WSL Institute for Snow and Avalanche Research SLF
Forests provide a large variety of services to society, such as timber, biodiversity, and protection from natural hazards. Therefore, to maintain these services, it is highly relevant to continuously monitor the state of forests for adequate and sustainable management. Due to a recent increase in natural disturbances such as windthrows, bark beetle infestation, and unfavourable climatic conditions (e.g. droughts), forests are affected and thus sometimes less able to provide these services. Remote sensing technology enables wide-area monitoring to detect such disturbances. In particular, spaceborne Synthetic Aperture Radar (SAR) has great potential to provide the required data to monitor large areas at reliably high temporal resolution. In the present study, we analyse seasonal time series of Sentinel-1 (S1) data over the acquisition period 2015 to 2024 with the aim of evaluating to what degree disturbances in temperate forests can be detected. These forest types naturally undergo structural changes due to phenology, leading to seasonal signatures in S1 backscatter. Since these changes in S1 backscatter are of a similar magnitude to those stemming from structural changes caused by forest disturbances, their detection can be challenging. For our analysis, we collected different reference forest areas across Switzerland comprising delineated windthrows and bark beetle-infested forest stands. Using the full S1 constellation, a high temporal sampling was possible, with data acquired at least every three days. Based on the acquired S1 backscatter, time series were generated at the spatial scale of one-hectare forest stands. Next, time series analysis methods capable of handling seasonality were used to extract occurrences of diverging backscatter. We show 1) how different disturbance types change S1 backscatter behaviour and 2) the spatial scale and temporal lag associated with successful detection.
We also analyse other potentially limiting factors, such as temperatures below the freezing point, wet snow, and heavy precipitation events, as they strongly affect SAR backscatter, potentially hindering the use of S1 data for disturbance detection. Preliminary results reveal that the different forest disturbance types showed different S1 backscatter behaviours. The results also indicate that, in addition to the disturbance type, the pre-disturbance forest type (e.g. dominant leaf type and forest stand height) influenced how the backscatter behaviour changed. In general, windthrows (>0.5 ha) could be detected, with accuracy strongly dependent on the size of the disturbed area. Forest disturbance types at smaller scales (e.g. single trees) or those occurring more gradually than windthrows were detected less reliably. S1 SAR time series show considerable potential for monitoring forest disturbance at the national scale. A monitoring system based on S1 data could be established to indicate disturbance-hit regions, facilitating fast and adequate adaptation of forest management.
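One simple way to separate disturbance signals from the seasonal backscatter cycle described above is to fit an annual harmonic baseline on an undisturbed period and flag large departures from it; a sketch under that assumption (the synthetic series, the 1 dB threshold, and the first-order harmonic model are illustrative, not the authors' actual method):

```python
import numpy as np

def fit_seasonal(t_days, y):
    """Least-squares fit of mean + annual harmonic: y ~ a + b*sin(wt) + c*cos(wt)."""
    w = 2 * np.pi * t_days / 365.25
    A = np.column_stack([np.ones_like(t_days), np.sin(w), np.cos(w)])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def predict_seasonal(t_days, coef):
    w = 2 * np.pi * t_days / 365.25
    A = np.column_stack([np.ones_like(t_days), np.sin(w), np.cos(w)])
    return A @ coef

# Synthetic one-hectare-stand backscatter series: 3-day sampling over two
# years, with a -3 dB step after day 600 mimicking a windthrow.
rng = np.random.default_rng(1)
t = np.arange(0.0, 730.0, 3.0)
sigma0 = -9 + 1.5 * np.sin(2 * np.pi * t / 365.25) + 0.2 * rng.normal(size=t.size)
sigma0[t > 600] -= 3.0

# Fit the seasonal baseline on the undisturbed first year, then flag
# acquisitions departing more than 1 dB from the predicted cycle.
coef = fit_seasonal(t[t < 365], sigma0[t < 365])
resid = sigma0 - predict_seasonal(t, coef)
flags = np.abs(resid) > 1.0
print(flags[t > 600].mean())  # nearly all post-disturbance acquisitions flagged
```

Real S1 series would additionally need masking of frozen, wet-snow, and heavy-rain acquisitions, as the abstract notes.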

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Developing a Ground Control Point Protocol for High-Resolution Laser Scanning in Tropical and Temperate Forests for Integration with Satellite Data

Authors: Johannes Wilk, Prof. Dr. Martin Herold, Dr. Benjamin Brede, Dr. Linda Lück
Affiliations: Helmholtz-Zentrum Potsdam Deutsches GeoForschungsZentrum GFZ
Terrestrial Laser Scanning (TLS) has emerged as a versatile tool in environmental science and remote sensing, in particular for assessing carbon storage and monitoring forest structure in response to environmental change. Accurate identification of carbon stocks and sinks and the development of effective forest management strategies are essential to addressing global environmental challenges. TLS enables these efforts by delivering high-quality point clouds derived from precise field measurements, providing robust data for tackling pressing ecological questions. However, a significant challenge remains the lack of precise geolocation for TLS data due to challenging conditions for GNSS under dense canopies. Large geolocation errors hinder the ability to integrate TLS outputs with broader spatial datasets, reducing the effectiveness of scaling up findings to regional or global levels. To address this issue and achieve greater accuracy and reliability for TLS data, a Ground Control Point (GCP) protocol has been implemented. This protocol links TLS point clouds to precise geolocations, a critical step for upscaling tasks and seamless integration with larger spatial datasets. The framework emphasizes practicality and ease of use, ensuring its applicability in diverse and challenging field conditions. Such advancements are particularly significant in the context of the upcoming 2025 ESA Biomass mission, which focuses on forest biomass monitoring, and the GeoTREES project, which provides essential calibration and validation for satellite observations. Furthermore, the protocol supports the generation of high-quality ground data, contributing to a global network of long-term forest inventory sites critical for advancing forest science and management.
The methodology utilizes RIEGL VZ-400i and VZ-600i laser scanners for point cloud acquisition, with GNSS data acquisition supported by Emlid dual-frequency antennas to provide precise positional information linked to the point cloud. This approach has been tested across tropical and temperate forest types, with tropical forests posing the greatest challenge due to dense canopy cover and the lack of access to reference systems. These factors highlight the operational constraints often encountered in remote and complex environments. Despite these challenges, initial results are promising. The protocol achieved a fix rate of 5–10%, enabling precise geospatial estimation of distributed points within point clouds. These findings underscore the potential to improve TLS data acquisition in remote forest environments, ensuring the generation of accurate geolocated point clouds even under adverse conditions. This study highlights the importance of developing robust and practical GCP protocols to support high-resolution TLS measurements. Future work will focus on refining the approach to further enhance fix rates and exploring its scalability to various forest types and environmental conditions.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Forester: An illegal logging detection and prediction tool based on Sentinel-1, Sentinel-2, and in-situ data

Authors: Mathieu Aillerie, Lisa Broekhuizen, Mariana Silvestre, Liliana Gonzalez, Liv Toonen
Affiliations: Space4Good
Forester, previously known as the ILDAP (Illegal Logging Detection and Prediction) application and developed by Space4Good with the support of the ESA BASS program, provides a transformative solution to the global challenge of illegal logging, a leading cause of deforestation. Deforestation has severe environmental, economic, and social consequences, including biodiversity loss, carbon depletion, and accelerating climate change. Forester integrates satellite data and terrestrial remote sensing technologies into a comprehensive system that detects, predicts, and reports illegal logging activities in near real time. The platform meets the needs of a range of stakeholders, including NGOs, government entities, agroforestry operators, and carbon credit markets, by providing powerful tools to prevent deforestation and promote sustainable forest management. Manual detection of illegal logging is laborious, time-consuming, and dangerous, especially in dense tropical forests where organized crime is often involved. Current remote sensing systems have significant limitations, including inadequate detection accuracy, restricted applicability to complex mixed agroforestry regions, and a focus on historical rather than predictive insights. Forester overcomes these limitations by providing three core services: estimation of the size of the illegal logging area, land use classification, and activity forecasting, combined with capabilities such as a near real-time alert system, biomass assessment, and automated reporting. The platform provides actionable insights through an intuitive user interface or API, enabling informed decision-making by non-expert teams on the ground. Moreover, the platform allows fieldworkers to validate alerts and upload images captured using drones, regular cameras, or smartphones. Forester’s innovative approach to deforestation detection at 10 m resolution is anchored in cutting-edge technology.
It uses optical and radar satellite data (Sentinel-1 and Sentinel-2) to monitor large areas with high accuracy, overcoming cloud cover issues prevalent in tropical regions. Its algorithms are designed to classify land use, assess biomass, and predict illegal logging activities. A unique aspect of Forester is the deforestation risk prediction at 20m resolution. These algorithms are based on pattern identification and trained on past events to predict the near future, incorporating local drivers of deforestation. These prediction maps are updated monthly and provide the users with the right knowledge to anticipate or mitigate deforestation. Validation trials in Indonesia have demonstrated the system’s effectiveness, with live alerts enabling early interventions at deforestation hotspots. Forester achieves a 7% accuracy improvement compared to existing solutions such as Global Forest Watch Pro, while introducing unique features such as risk mapping and workflow integration. Forester’s target users span a variety of sectors. NGOs and governments benefit from enhanced conservation capabilities as the system assists in detecting illegal activities and protecting biodiversity in national parks, reserves, and restoration projects. Agroforestry operators and commercial producers use the platform to protect resources, reduce product theft, and manage biomass loss in forestry and agricultural supply chains. Finally, certifiers and carbon credit marketplaces leverage Forester’s biomass and carbon stock assessments to validate sustainability claims and meet environmental regulations. The inclusion of predictive capabilities sets Forester apart from other monitoring systems, enabling stakeholders to take preventative action rather than react to the damage already done. The potential market for Forester is substantial. Market research shows that the cost of illegal deforestation exceeds $10 billion annually, with a serviceable market of $65 million per year. 
The environmental monitoring, carbon offsetting, and precision agroforestry markets are growing rapidly, with compound annual growth rates (CAGR) of 9.5%, 11%, and 9%, respectively. Growing global commitments to zero deforestation, carbon neutrality, and sustainable supply chains are further amplifying the demand for tools like Forester. Over 350 major corporations in industries such as palm oil, soy, and timber have pledged zero net deforestation, creating a strong customer base for Forester services. The platform’s architecture ensures scalability, reliability, and security. Hosted in a dynamic cloud environment (e.g., AWS), Forester uses containerized infrastructure to ensure seamless deployment and confidentiality for customers. Each customer operates in an isolated environment, avoiding data overlap and increasing security. The system is designed for global applicability, with low latency and efficient processing of large data sets. As Forester scales, it will dynamically adapt to meet growing demand while maintaining high performance. Forester employed a phased deployment strategy, starting with pilot programs in Southeast Asia in 2023 to refine system functionalities and validate algorithms in real-world conditions. Future efforts will focus on scaling operations, expanding partnerships, and integrating advanced technologies to enhance capabilities. Commercial adoption will be driven by targeted customer acquisition, highlighting Forester's unique value proposition. Forester’s value goes beyond technical innovation. The platform delivers significant environmental and social benefits by supporting conservation efforts, reducing the economic impacts of illegal logging, and preserving carbon stocks essential for climate change mitigation. It aligns with global legislative trends as environmental regulations and carbon offset mechanisms grow exponentially. 
The platform’s predictive capabilities and seamless integration into workflows position it as a foundational technology for addressing deforestation and promoting sustainable forest systems. With its comprehensive service offerings, predictive analytics, and broad stakeholder applicability, Forester has the potential to redefine forest management and conservation practices globally. Space4Good is committed to advancing environmental sustainability and combating deforestation, ensuring that Forester becomes a vital tool for protecting forests and fostering a sustainable future.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Spatiotemporal patterns of Amazonian canopy mortality revealed by remote sensing time series

Authors: Kristian Bodolai, Anastasia Kozhevnikova, Dr Pablo Sánchez-Martínez, Prof. Patrick Meir, Edward Mitchard
Affiliations: Space Intelligence, School of Geosciences, University of Edinburgh
The Amazon rainforest is thought of as an important global carbon sink, but changes in local climate, extreme events, and human disturbance often result in it becoming a source of atmospheric carbon. Understanding shifting patterns in tree mortality is crucial to determining the carbon budget of the Amazon, but little is known about the extent, rate, and causes of mortality of large canopy trees, which contain most of the forest carbon and are hypothesised to be most at risk with climate change. To address this data gap, we created a mortality database from lidar surveys and sub-meter resolution imagery, and developed a cumulative-sum-based algorithm that accurately detects canopy tree mortality across the Amazon Basin using a time series of Planet NICFI data from 2018 to 2024 at 5 m pixel spacing. We detect mortality events by identifying changes in trend over time in the multispectral reflectances caused by either a decrease of photosynthetic activity or shadowing from adjacent trees. The same principle also allows us to categorize mortality events into standing dead trees and broken/uprooted trees. The end result is monthly predictions of mortality events with a detection rate of 75% in our test areas. The probability of detection increases with tree crown size, to above 90% when the mortality event is larger than 150 square meters, which makes the algorithm particularly well suited to study large tree mortality from different modes of death, both by anthropogenic and non-anthropogenic causes. Early results show an increase in mortality across the Amazon Basin during the 2023-24 El Niño event and the potential of this method to study widespread effects of climatic changes at short temporal scales, as well as a methodology for detecting and assessing tree mortality from remote sensing data.
Being able to detect large tree mortality fills an important gap in our knowledge of vegetation turnover and vulnerability, which underpins our understanding of tipping points in the Amazon and its resilience to climate change.
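A cumulative-sum change detector of the kind described above can be sketched as follows (the baseline window, the threshold, and the synthetic reflectance values are illustrative assumptions, not the authors' parameters):

```python
import numpy as np

def cusum_change(series, baseline_len, threshold):
    """Return the first index where the cumulative sum of deviations from a
    stable baseline mean exceeds the threshold, or None (illustrative CUSUM)."""
    baseline = series[:baseline_len].mean()
    s = np.cumsum(series - baseline)
    crossed = np.abs(s) > threshold
    return int(np.argmax(crossed)) if crossed.any() else None

# Synthetic monthly NIR reflectance for one 5 m pixel: a persistent drop
# after month 40 mimics a canopy mortality event.
rng = np.random.default_rng(0)
nir = 0.35 + 0.005 * rng.normal(size=72)
nir[40:] -= 0.08

print(cusum_change(nir, baseline_len=24, threshold=0.2))  # detected shortly after month 40
```

Because the CUSUM statistic accumulates small persistent shifts, a standing dead tree (gradual greenness loss) and a fallen tree (abrupt shadow change) produce differently shaped crossings, which is the principle the abstract uses to categorize events.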

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Monitoring Loss and Survival of Trees Using Very High Resolution Satellite Images

Authors: Laura Sirro, Jussi Peuhkurinen, Tuomas Korva
Affiliations: VTT Technical Research Centre of Finland, Finnish Forest Centre
The Kestotäsmä project develops and pilots methods to support precision forestry and near-real-time forest monitoring to verify the sustainability of forest operations. In addition to the monitoring of forest operations, the results will support the planning of practical operations carried out in the forests. Kestotäsmä is funded by the Ministry of Agriculture and Forestry of Finland, and it is part of the Sustainable Growth Program for Finland of the European Union. The project is coordinated by the Finnish Forest Centre. As part of the project, remote sensing methods are being developed for high-resolution monitoring of forests. One targeted application is the monitoring of retention trees, i.e. trees that are left standing at final felling to preserve biodiversity and are meant to remain in the forest permanently. Monitoring of these trees can be done using airborne laser scanning (ALS) data that is collected every six years in Finland, but more frequent monitoring would be needed. The Kestotäsmä project is therefore developing methods that use very high resolution (VHR) satellite images to verify the status of the retention trees in the middle of the laser scanning update cycle. The methods tested in the project include direct change detection using two VHR satellite images from different dates, and classification of a VHR satellite image using a canopy height model (CHM) from an earlier date as reference data, followed by post-classification comparison. Both approaches apply U-Net convolutional neural networks. The methods are being tested in two test areas in Finland, Ylivieska and Keuruu. Direct change detection is applied in Ylivieska using VHR images from the years 2020 and 2023. The approach where the CHM is used as baseline and reference data for training the VHR model is applied in both Ylivieska and Keuruu. VHR satellite images were available to the project through the third party mission programme of the European Space Agency (Category-1 project ID PP0091722).
Pleiades Neo images were acquired from two sites: Ylivieska in August 2023 and Keuruu in August 2024. A WorldView-2 image was available for Ylivieska from June 2020. The CHM, available as open data from the Finnish Forest Centre, had been computed using ALS point cloud data of the National Land Survey of Finland. Accuracy figures have been computed for the classification results of the Ylivieska test area, where both the CHM and a VHR satellite image are available from the same year. There, 87% of the VHR image test set pixels were classified identically to the baseline map pixels when three classes (trees, tree shadows, and background) were used. The proportion of identically classified pixels was 96% when the tree crowns and tree shadows were combined into one class. In an operational application, the time between the CHM and VHR images would be several years, which makes the classification task more challenging. We are currently applying the method to data from the Keuruu area, with a three-year difference between the CHM and VHR satellite images, to find out how the method works in operational monitoring.
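The agreement figures reported above are pixel-wise comparisons between the baseline map and the VHR classification; a toy sketch (the arrays and class codes are invented for illustration):

```python
import numpy as np

# Toy 3-class maps: 0 = background, 1 = tree crown, 2 = tree shadow
baseline = np.array([0, 1, 2, 1, 0, 2, 1, 0, 2])   # CHM-derived baseline map
vhr_pred = np.array([0, 1, 1, 1, 0, 2, 2, 0, 2])   # VHR image classification

# Overall agreement with all three classes kept separate
agree3 = (vhr_pred == baseline).mean()

# Merging crowns and shadows into a single "tree" class before comparing
merge = lambda m: np.where(m > 0, 1, 0)
agree2 = (merge(vhr_pred) == merge(baseline)).mean()

print(agree3, agree2)  # merging the tree classes raises the agreement
```

Crown/shadow confusions disappear under the merged comparison, which is why the abstract's 87% three-class agreement rises to 96% with two classes.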

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Assessing the Comparability of Multispectral Data from Sentinel-2 and High Resolution UAS Imagery for Advanced Forestry Monitoring

Authors: Antonio José Castañeda-Gómez, Jakob Schwalb-Willmann, PhD Mirjana Bevanda, Julia Rieder, Jannis Midasch, Sunniva McKeever, Ronja Seitz, Dr. Doris Klein, Univ.-Prof. Dr. Stefan Dech, Prof. Dr. Jörg Müller, Dr. Martin Wegmann, Prof. Dr. Tobias Ullmann
Affiliations: Department of Remote Sensing, Institute of Geography and Geology, Julius-Maximilians-Universität Würzburg; Earth Observation Center, German Aerospace Center (DLR)
Forests worldwide face increasing environmental change that demands the development of effective adaptation measures, which in turn require a deeper understanding of forest conditions and their dynamics. While traditional fieldwork provides detailed insights, its limited spatial coverage and observational inconsistencies make large-scale forest monitoring ineffective. In contrast, remote sensing provides systematic observations, enabling analyses and insights that are unattainable through field methods alone. Among recent advancements, uncrewed aerial systems (UAS) have emerged as powerful tools in forestry research, offering high spatial resolution, cost efficiency, and controlled data acquisition. UAS technologies also provide unique versatility, enabling forest assessment from tree level to landscape level, enhancing our understanding of ecosystem dynamics. Despite these advantages, UAS data is limited in spatial coverage, typically ranging from 3 to 7 km2, whereas satellite systems can cover thousands of square kilometers at lower spatial resolutions. These platforms complement each other: the flexibility of UAS can address satellite limitations such as cloud interference, while satellites provide context at regional to global scales and over long time spans. Integrating these datasets can result in significant advancements in forest monitoring through data comparison, multiscale analysis, model calibration, and data fusion. Aggregating UAS data to match satellite resolutions enables comparisons, but challenges remain in spatial and temporal integration. Few studies have addressed the impact of aggregation on data quality or evaluated whether the two sources provide comparable spectral information across large areas and over longer timespans. This study evaluates the comparability of UAS and Sentinel-2 imagery for a managed broad-leaved forest (3 km2) in Southern Germany.
Multispectral UAS data was collected biweekly over six months using a MicaSense Altum camera. The research focuses on analyzing the impact of aggregation methods, exploring relationships among spectral bands, and comparing vegetation indices across the two platforms. The results indicate that the best comparison perspective for both datasets lies in the normalized indices, followed by the non-normalized indices, and lastly, the single bands. On the other hand, the datasets exhibit intrinsically similar behavior, as the internal relationships within each dataset closely mirror one another. By addressing key challenges in data comparability and integration, this study aims to strengthen the synergy between UAS and satellite datasets, supporting the development of more robust and comprehensive forest monitoring approaches.
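Aggregating UAS reflectance to a satellite grid before or after computing a normalized index can itself affect comparability; a small sketch of both orders of operation (the block size and synthetic reflectances are illustrative assumptions, not the study's data):

```python
import numpy as np

def block_mean(a, f):
    """Aggregate a 2-D array by averaging non-overlapping f x f blocks."""
    h, w = a.shape
    return a[: h // f * f, : w // f * f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# Synthetic UAS red/NIR reflectance; 20 x 20 fine pixels per "satellite" cell
rng = np.random.default_rng(0)
red = np.clip(rng.normal(0.05, 0.01, (200, 200)), 0.0, 1.0)
nir = np.clip(rng.normal(0.45, 0.05, (200, 200)), 0.0, 1.0)

# Order of operations: aggregate reflectances then compute NDVI, versus
# compute NDVI at full resolution then aggregate it.
red_c, nir_c = block_mean(red, 20), block_mean(nir, 20)
ndvi_agg_first = (nir_c - red_c) / (nir_c + red_c)
ndvi_agg_last = block_mean((nir - red) / (nir + red), 20)

r = np.corrcoef(ndvi_agg_first.ravel(), ndvi_agg_last.ravel())[0, 1]
print(r)  # the two orderings agree closely on homogeneous cover
```

On heterogeneous canopy the two orderings diverge more, which is one reason aggregation method matters when comparing UAS indices to Sentinel-2 pixels.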

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Semi-automatic detection and mapping of illegal logging integrating optical and SAR satellite data with machine learning: a framework from the European SINTETIC project

Authors: Giandomenico De Luca, Lorenzo Arcidiaco, Manuela Corongiu, Tiziana A. De Filippis, Stefano Romanelli, Gianni Picchi
Affiliations: Institute of Bioeconomy (IBE) - National Research Council of Italy (CNR), LAboratory for Monitoring and Environmental Modelling for the sustainable development (LAMMA Consortium)
Illegal logging is a global issue with significant ecological, economic, and social consequences. In Europe, it often manifests as small-scale, selective harvesting operations that, although covering limited areas, contribute substantially to forest degradation, biodiversity loss, and disruptions to ecosystem services. Detecting illegal logging is therefore crucial, both for assessing its short- and long-term impacts and for supporting sustainable forest management to ensure ethical and high-quality timber sourcing standards. However, the small size, irregular spatial distribution, and fragmentation of such activities pose significant detection challenges, requiring advanced monitoring solutions. Recent advances in Earth observation technologies present promising opportunities to address these challenges. This study introduces a novel semi-automatic monitoring framework that integrates high spatial and temporal resolution data from synthetic aperture radar (SAR) Sentinel-1 and multi-spectral Sentinel-2 satellites, along with ancillary information, to detect and map potential illegal logging activities. Time-series analysis of various vegetation indices, such as the normalized difference vegetation index (NDVI), is used to identify land cover changes indicative of forest disturbances. The methodology relies on machine learning-based harmonic analysis to detect small-scale temporal anomalies by comparing the amplitude and frequency of changes across multiple years. Processed SAR backscatter, dual-polarized indices (e.g., dual-polarization SAR vegetation index, DPSVI), and interferometric coherence time-series further enhance the system’s capacity to detect logging signals. A key element of this approach is the integration of detailed ground-based, georeferenced harvesting data, including the number, volume, and location of harvested trees, collected from mechanized legal logging operations following the StanForD 2010 standard. 
This data is used to train and validate machine learning classification models, enabling precise correlation between observed forest disturbances and known harvesting activities. The semi-automatic detection methodology generates maps showing the location, timing, and probability of forest disturbances. Legal operations are subsequently filtered to focus on unexpected disturbances (due to natural causes or unauthorized harvest) by integrating ancillary information provided through the tracking process within the EU Horizon SINTETIC (Single Item Identification for Forest Production, Protection, and Management) project. At the time of writing, the first results of the approach are undergoing validation. This pipeline will be incorporated into the broader timber traceability framework developed under the SINTETIC project, which aims to establish a comprehensive end-to-end monitoring system, from standing trees to the final timber product. By integrating freely available satellite imagery, open-source tools, and advanced data analysis within the framework of precision forestry, the system significantly enhances proactive forest governance and supports sustainable timber markets.
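A harmonic analysis of the kind described above can compare fitted seasonal parameters across years to flag anomalies; a minimal sketch (the synthetic NDVI series and the mean/amplitude decision rule are illustrative assumptions, not the SINTETIC implementation):

```python
import numpy as np

def annual_harmonic(doy, ndvi):
    """First-order harmonic fit to one year of NDVI; returns (mean, amplitude)."""
    w = 2 * np.pi * doy / 365.25
    A = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
    a, b, c = np.linalg.lstsq(A, ndvi, rcond=None)[0]
    return float(a), float(np.hypot(b, c))

# Synthetic 10-day NDVI composites: year 1 intact canopy, year 2 with a
# logging-like drop in the second half of the year.
rng = np.random.default_rng(2)
doy = np.arange(5.0, 365.0, 10.0)
year1 = 0.7 + 0.1 * np.sin(2 * np.pi * doy / 365.25) + 0.02 * rng.normal(size=doy.size)
year2 = year1.copy()
year2[doy > 180] -= 0.3

m1, amp1 = annual_harmonic(doy, year1)
m2, amp2 = annual_harmonic(doy, year2)
# A lower yearly mean and a distorted amplitude both point to a disturbance
print(m1 - m2, amp2 - amp1)
```

The same comparison of harmonic parameters across years applies to SAR-derived series such as DPSVI or interferometric coherence, as the abstract notes.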

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Exploring Microclimate Variability and Its Ecological Impacts in the Amazon Rainforest

Authors: Zhimin Ma
Affiliations: University Of Helsinki
The Amazon rainforest represents one of the most biodiverse places on Earth. However, the temperature experienced by organisms across the Amazon basin, and the biophysical factors regulating microclimate, remain largely understudied. Microclimate patterns capture the nuances of ecological processes such as nutrient cycling, species dynamics, and tree regeneration, which cannot be captured by coarse resolution open-air temperature datasets. In this study, we collected temperature data (15 cm above the surface) from 145 sensors across eight different biogeographical regions within the Amazon basin. The data, recorded from 2016 to 2023, comprises nearly four million temperature readings, showcasing the variability of microclimates within the Amazon and their capacity to buffer or amplify macroclimatic temperatures. The results demonstrate that all monitored sites exhibit greater buffering capacity against macroclimate during the rainy season than in the dry season. Statistical analyses reveal that rainfall, Leaf Area Index (LAI), canopy height, solar radiation, and terrain features were closely linked to forest microclimate buffering. Specifically, rainfall, LAI, and canopy height showed a positive correlation with microclimatic buffering capacity, whereas terrain slope was negatively correlated. These insights contribute to a deeper understanding of how biophysical factors shape microclimates in the Amazon. Moreover, we developed a high-resolution pan-tropical model that links microclimate to climatic, vegetative, and terrain variables, providing near-ground temperature estimates across the tropical rainforest. By applying this model, we analyzed microclimatic changes from 1983 to 2100, revealing substantial spatial variability in tropical forest microclimates over time. This long-term projection highlights the potential impacts of ongoing and future climate change on forest ecosystems.
These findings have critical implications for conservation and climate resilience strategies. By capturing the complexity of forest microclimates, this study informs more precise conservation planning and adaptive management, ensuring the preservation of the Amazon’s ecological integrity and biodiversity. It emphasizes the role of intact forests in buffering climate extremes, offering valuable insights into their significance in a warming world.
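As an illustrative sketch (not part of the study), the buffering capacity described above can be quantified as the slope of a regression of below-canopy temperature on open-air temperature: a slope below 1 indicates buffering, above 1 amplification. The data and the 0.4 damping factor below are hypothetical:

```python
import numpy as np

# Hypothetical hourly open-air (macroclimate) temperatures in °C
rng = np.random.default_rng(0)
macro = 25 + 8 * np.sin(np.linspace(0, 20 * np.pi, 1000)) + rng.normal(0, 1, 1000)

# A buffered understory microclimate: damped excursions around the same mean
micro = 25 + 0.4 * (macro - 25) + rng.normal(0, 0.5, 1000)

# Slope of micro vs. macro temperature: < 1 means buffering, > 1 amplification
slope, intercept = np.polyfit(macro, micro, 1)
offset = micro.mean() - macro.mean()
print(f"slope = {slope:.2f}, mean offset = {offset:.2f} °C")
```

With real sensor data the slope would typically be fitted per site and per season, allowing rainy- vs. dry-season buffering to be compared directly.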
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: GNSS-Transmissometry-Based Monitoring of Vegetation Optical Depth in Central European Beech Forest Ecosystems

Authors: Nicolas Bader, Ruxandra-Maria Zotta, Eugenio Diaz-Pines, Walter Loderer, Thomas Kager, Gregor Möller, Wouter Dorigo
Affiliations: TU Wien, BOKU University
Understanding the temporal dynamics of forest canopies is crucial for assessing ecosystem health, carbon exchange, and the impacts of climate change on forest resilience. Vegetation optical depth (VOD), a microwave-derived proxy for vegetation biomass and water content based on radiative transfer models, is widely used in global satellite monitoring. However, the scarcity of ground-based VOD observations poses challenges for validating spaceborne L-band data and refining its interpretation. To address this gap, this study leverages a ground-based Global Navigation Satellite System (GNSS) Transmissometry (GNSS-T) setup. The system was deployed in a beech (Fagus sylvatica L.) forest in eastern Austria (Lehrforst Rosalia) as part of the Austrian Science Fund’s CONSOLIDATION project (FWF, grant no. I 4489-N). Fagus sylvatica, a dominant species in Central European forests, plays a vital role in regional ecosystems but is particularly sensitive to climate variability and change. The deployed GNSS-T setup consisted of two scientific-grade GNSS stations, one located beneath the canopy and the other at an unobstructed reference site. These stations continuously recorded carrier-to-noise density ratios (C/N0) of right-hand circularly polarized (RHCP) signals. By comparing signal intensities between the canopy-covered and the reference site, L-band canopy VOD was retrieved using a simplified τ-ω radiative transfer model. The τ-ω model relates the observed signal attenuation to canopy optical depth (τ) and single scattering albedo (ω) to account for both signal absorption and scattering by vegetation. The simplified approach assumes minimal scattering within the canopy and relates signal attenuation, represented by the difference in recorded C/N0 ratios between the sites, to vegetation density. This approach captured sub-daily to seasonal VOD variability, providing detailed insights into diurnal and phenological changes in canopy dynamics. 
To complement these measurements, a low-cost wide-angle dual-camera system was employed to monitor leaf phenology and dynamics visually. The dual-camera system captured images at high-temporal resolution and enabled the calculation of leaf area index (LAI). Temporal dynamics of the obtained L-band canopy VOD were then compared to LAI. The presentation will detail the experimental setup and showcase the first results of canopy VOD temporal variability, highlighting its relationship with leaf dynamics and climatological drivers. These findings provide valuable insights into the response of Fagus sylvatica to environmental variability and underscore the potential of GNSS-T for advancing forest monitoring and bridging the gap between satellite-derived and ground-based observations.
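For illustration only (not the authors' code), the simplified zero-scattering retrieval described above, which relates the C/N0 difference between the reference and below-canopy stations to canopy VOD, can be sketched as follows. The C/N0 values and elevation angle are hypothetical:

```python
import numpy as np

# Hypothetical C/N0 readings (dB-Hz) for one satellite pass
cn0_ref = 45.0       # unobstructed reference site
cn0_canopy = 41.5    # below-canopy site
elevation = 60.0     # satellite elevation angle in degrees

# dB attenuation -> slant-path transmissivity (scattering neglected, ω ≈ 0)
atten_db = cn0_ref - cn0_canopy
transmissivity = 10 ** (-atten_db / 10)

# Slant optical depth, projected to zenith VOD via the incidence angle
theta = np.deg2rad(90.0 - elevation)    # incidence angle from zenith
vod = -np.log(transmissivity) * np.cos(theta)
print(f"L-band canopy VOD ≈ {vod:.2f}")
```

In practice this would be evaluated per satellite track and averaged over many passes to resolve the sub-daily to seasonal dynamics the abstract describes.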
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Sentinel-1 forest change map using Recurrence Quantification Analysis

Authors: Felix Cremer, Daniel Loos, Fabian Gans, Christian Thiel
Affiliations: Max Planck Institute for Biogeochemistry, Institute for Data Science, German Aerospace Center
In recent years, Europe has been hit by multiple droughts and heatwaves. These droughts induced massive stress on forest ecosystems, which led to widespread forest change. To quantify these forest change occurrences, we need spatially accurate estimations of the affected areas. We use Recurrence Quantification Analysis (RQA) to derive a change detection metric from Sentinel-1 Synthetic Aperture Radar (SAR) backscatter time series. RQA is based on the computation of recurrence plots from a given time series. The recurrence plot is based on the similarity of every time step against every other time step of a time series. These recurrence plots can then be used to derive statistics of the time series which take the order of the time series into account. For the analysis of forest change, we used a time series from June of the previous year to June of the next year to span a two-year period and indicate the forest change of the year in between. This approach trades the timeliness of the information for spatial accuracy, since we do not have to spatially aggregate to reduce the noise levels of the Sentinel-1 data. We filter the resulting change clusters by cluster size to reduce the number of false-positive small-scale detections due to the multiple testing problem. Therefore, the forest change map is at 20x20 m resolution with a minimal mapping unit of 1 ha. The derived forest change maps show good overall agreement with global deforestation maps. SAR-based deforestation maps can be a good supplement to deforestation maps based on optical data since they are independent of cloud cover and provide an independent data source. In-depth comparisons of the Sentinel-1 based forest change map with products derived from optical sensors are planned. The deforestation maps have been produced using the Julia programming language and are available as a FAIR, reproducible workflow that can be applied to other areas of interest or other time spans.
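A minimal sketch of the recurrence-plot idea described above (illustrative only, not the authors' implementation): every time step is compared against every other, and an abrupt change splits the plot into block structures, lowering the overall recurrence rate. The threshold and toy backscatter values are hypothetical:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 if |x[i] - x[j]| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

# Toy backscatter time series (dB) with an abrupt drop mid-way,
# mimicking a forest-change signal in Sentinel-1 data
x = np.concatenate([np.full(50, -8.0), np.full(50, -12.0)])
x += np.random.default_rng(1).normal(0, 0.3, 100)

R = recurrence_matrix(x, eps=1.0)
# Recurrence rate: fraction of recurrent pairs; the change point confines
# recurrences to two diagonal blocks and lowers the overall rate
rr = R.mean()
print(f"recurrence rate = {rr:.2f}")
```

An unchanged pixel would yield a rate near 1; the block structure produced by the change pushes it toward 0.5, which is the kind of order-sensitive statistic RQA exploits.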
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone F-G-H-I-J)

Poster: Machine learning-based prediction of vegetation recovery time in co-seismic landslide areas

Authors: Wei Zhao, Miss Mingxuan Wan
Affiliations: Institute Of Mountain Hazards And Environment, Chinese Academy Of Sciences
Landslides resulting from seismic activity cause widespread damage to vegetation and ecosystems, posing significant challenges to ecological restoration efforts in impacted regions. Vegetation recovery is crucial for enhancing ecosystem resilience and mitigating disaster impacts. To address this challenge, we propose a novel method integrating Random Forest (RF) and Interactive Geographically Weighted Regression (IGWR) to predict vegetation recovery time in areas affected by co-seismic landslides. Using Landsat imagery from 2007 to 2023, we analyzed vegetation cover changes in Maoxian, an area impacted by the 2008 Wenchuan earthquake. The RF model was used to predict recovery time, capturing non-linear relationships and spatial variability, while IGWR explored the spatial interactions of key environmental factors. Results showed a significant increase in vegetation cover, reaching 65% by 2023, with recovery starting in low-altitude regions and spreading northward. The RF model demonstrated strong predictive performance, achieving a high fit (R² = 0.97) and accuracy (R² = 0.91). IGWR revealed that climate and topography were critical drivers, with moderate temperatures and solar radiation enhancing recovery, while extreme conditions slowed it. This integrated RF-IGWR approach provides an effective tool for evaluating recovery time in landslide-prone areas, offering insights for ecological restoration and disaster management.
Add to Google Calendar

Friday 27 June 13:00 - 13:45 (Nexus Agora)

Session: F.04.29 Air Quality Policies Development and the role of Earth Observation

Air pollution is a major environmental and health concern, causing millions of premature deaths globally each year. To tackle this issue, countries have implemented various policies and regulations aimed at reducing emissions and improving air quality. At the international level, the United Nations Economic Commission for Europe (UNECE) Convention on Long-range Transboundary Air Pollution (CLRTAP) sets emission reduction targets for various air pollutants and provides a framework for cooperation among countries. The World Health Organization (WHO) has also established air quality guidelines, which serve as a reference for setting national and regional standards. In the European Union, the Ambient Air Quality Directives set limit values for key air pollutants such as particulate matter, nitrogen dioxide, and ozone. Member states are required to monitor air quality, report exceedances, and take measures to reduce pollution levels. The European Commission's Zero Pollution Action Plan, adopted in 2021, sets a vision for 2050 to reduce air, water and soil pollution to levels no longer considered harmful to health and natural ecosystems.

This Agora will gather the main institutions driving air quality policies at the international (World Health Organisation, UNEP Climate and Clean Air Coalition) and European (DG ENV, EEA, CAMS) levels, as well as regional agencies and cities, to discuss the current state of the art related to air pollution. The discussions will also tackle the role of Earth Observation in filling the current monitoring gaps and establish a way forward to facilitate air quality policies.

Speakers:


  • Juliette Laurent - UNEP-CCAC
  • Paul Safar - WHO
  • TBD - CAMS

Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: D.06.03 - POSTER - Digital Revolution and Emerging Technologies

The world of Earth Observation (EO) is dramatically changing, driven by the digital revolution. Emerging disruptive technologies combined with EO will entirely reshape the EO information value chain, affecting both the downstream applications sector and the space assets. The paradigm around the EO value chain will likely need to be reinvented to further exploit synergies and translate data into contextual insights that can drive action for the climate, the economy and the environment. This session will address all ICT revolution technologies impacting EO, such as so-called web3 (blockchain and Distributed Ledger Technologies, now associated with web 3.0 at large), IoT, Immersive Visualisation, Federated Learning, Decision Intelligence, innovative computing paradigms, and others not listed here.

Topics to be addressed:
- web3 (blockchain and Distributed Ledger Technologies, now associated with web 3.0 at large): We are moving towards a more decentralized, robust, secure, intelligent, and, above all, more equitable data industry, of which Earth Observation (EO) is a part. A set of paradigms will likely change across the entire upstream and downstream markets, inter alia: smart data storage and fusion, privacy-preserving applications, data traceability, certification of processing chains and derived products, and monetization of EO data and their added value.
- Federated Learning: federated learning will support the generation of distributed hypermodels, preserving privacy and robustness.
- Internet of Things (IoT): The integration of in situ measurements, typical of IoT, with EO and other heterogeneous sources of information will enhance the capabilities of such an integrated knowledge system. By leveraging Commercial Off-The-Shelf (COTS) sensors and communication devices, IoT technology can be used to implement a distributed in-situ measurements sensing and processing networks that could significantly complement the information provided by satellite imagery.
- Immersive visualization: It aims to bridge the gap between the vast amount of complex data available in an integrated Earth Observation (EO) knowledge system and non-expert users. It provides truly immersive experiences that can simulate scenarios generated by predictive and prescriptive AI.

These are just a few examples of the technologies to be addressed in this session. The intent is to leave the door open for the community to propose disruptive and transformative innovations for both upstream and downstream assets.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Integrating Quantum-Classical Algorithms with Tensor Networks for Noise Reduction in Synthetic Aperture Radar

Authors: Sreejit Dutta, Dr. Sigurd Huber, Prof. Dr. Gerhard
Affiliations: Deutsches Zentrum für Luft- und Raumfahrt, Technical University of Munich
Synthetic Aperture Radar (SAR) imaging is crucial in remote sensing due to its ability to produce high-resolution images regardless of weather conditions or daylight. However, SAR images often suffer from various types of noise, especially speckle noise, which degrades image quality and complicates data analysis. Traditional noise reduction techniques face challenges in balancing noise suppression with the preservation of image details. Recent deep learning approaches, such as U-Net architectures, have made strides in addressing these issues. Simultaneously, quantum computing has emerged as a promising field that can potentially enhance computational methods in image processing. We propose a hybrid classical-quantum U-Net framework that integrates quantum-classical algorithms with tensor networks for improved noise reduction in SAR images. Leveraging the representational capacity of tensor networks and the computational strengths of quantum algorithms, our method aims to surpass the limitations of classical techniques in managing complex noise patterns. In our architecture, certain layers of the U-Net are implemented using quantum circuits specifically optimized for noise suppression. The inclusion of tensor networks enables efficient handling of high-dimensional data within the hybrid model. The idea is to perform extensive experiments on standard SAR datasets to demonstrate that our hybrid U-Net outperforms traditional denoising methods and purely classical deep learning models in both noise reduction and detail preservation. Quantitative assessments using metrics like Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) would be used for comparison. Moreover, we will evaluate the computational efficiency of our approach, underscoring the potential of quantum computing to expedite complex image processing tasks. This work aims to highlight the feasibility and benefits of integrating quantum computing into practical remote sensing applications. 
Our hybrid quantum-classical U-Net architecture with tensor networks paves the way for advanced noise reduction techniques in SAR imaging and potentially other domains. Future research will focus on optimizing quantum circuit designs for specific noise types and exploring the scalability of our approach alongside advancements in quantum hardware.
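As a hedged illustration of the classical concepts the abstract builds on (not the proposed quantum method), the snippet below simulates multiplicative speckle on a synthetic intensity image and scores a simple multi-look average with the PSNR metric mentioned above; all values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "clean" intensity image with a bright square target
clean = np.full((64, 64), 100.0)
clean[16:48, 16:48] = 200.0

def psnr(ref, img):
    """Peak Signal-to-Noise Ratio in dB against a reference image."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

# Single-look speckle is often modelled as multiplicative,
# exponentially distributed noise on the intensity image
speckled = clean * rng.exponential(1.0, clean.shape)

# Classical multi-look reduction: average L independent looks
denoised = np.mean([clean * rng.exponential(1.0, clean.shape)
                    for _ in range(16)], axis=0)

print(f"PSNR 1-look : {psnr(clean, speckled):.1f} dB")
print(f"PSNR 16-look: {psnr(clean, denoised):.1f} dB")
```

Multi-looking trades resolution for noise suppression; the learned and hybrid approaches in the abstract aim to improve on this trade-off while preserving detail.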
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Enabling Digital Twin Earth: Adapting High-Performance Computing for Destination Earth and Earth Observation Services

Authors: Henry de Waziers, Farouk Mansouri, Giovanni Corato
Affiliations: Adwaiseo, LuxProvide
Digital Twin Earth initiatives have become a central focus for governments and large organisations as humanity confronts significant global challenges such as climate monitoring, disaster management, and sustainable development. The European Union (EU), through its Destination Earth (DestinE) project, is pioneering efforts to simulate Earth’s systems with unprecedented detail. By integrating diverse datasets from multiple sources, DestinE seeks to enhance our understanding of environmental dynamics and provide actionable insights for addressing climate change and other pressing issues. The initiative is ambitious and challenging in terms of processing resources and data management. A critical enabler of such ambitious projects is high-performance computing (HPC), which offers the computational power and storage capacity required to manage and process the vast datasets central to Digital Twin Earth systems. Luxembourg’s HPC system, MeluXina, stands out as a key component of the EuroHPC network, providing state-of-the-art resources to support research and innovation across Europe. As of 2024, MeluXina offers a robust infrastructure comprising 90,000 HPC cores, 5,120 cloud vCPUs, 800 GPU-AI accelerators, 20 petabytes of high-performance storage, and over 450 terabytes of RAM. These resources are essential for enabling advanced simulations, data analysis, and machine learning-driven research at the scale required by Digital Twin Earth projects. Another crucial development in this domain is the HIGHWAY service, an initiative of the European Space Agency (ESA). HIGHWAY serves as a bridge between Earth Explorers, Earth Watch, Heritage Missions, Third Party Missions data and the Digital Twins via the Destination Earth System Platform (DESP). It delivers Digital Twin-ready, cloud-optimised data and provides infrastructure and processing tools as a service. 
With seamless integration into both ESA's Earth Observation and the DESP, HIGHWAY enhances accessibility to Earth Observation resources, enabling researchers, businesses, and policymakers to efficiently process and analyse critical data. Digital Twin systems are inherently complex, requiring advanced machine learning and deep learning techniques that rely on enormous datasets and demand extensive computational resources. Consequently, the integration of HPC platforms into these systems represents a vital milestone in their implementation. Traditional HPC platforms are well-equipped to meet foundational requirements such as data storage, computing power, and acceleration of algorithms. However, Digital Twin Earth projects introduce additional challenges. They aim to democratise access to data and insights, extending their availability not only to researchers but also to businesses and individuals via modern Platform-as-a-Service (PaaS) solutions. Addressing these challenges necessitates innovative adaptations in HPC platforms, such as integrating HPC systems with big data databases and data lakes, coupling them with cloud platforms and web-based applications, enabling on-demand utilisation and dynamic scheduling, and meeting stringent security requirements demanded by commercial stakeholders. In this context, adwäisEO, prime contractor of HIGHWAY, and LuxProvide, MeluXina operator, have embarked on a collaborative effort for possible future integration under the HIGHWAY service to design and implement a modern, innovative solution that adapts the MeluXina HPC platform to meet the demands of Earth Observation-based PaaS solutions. This partnership focuses on aligning MeluXina's capabilities with the operational requirements of Digital Twin Earth systems, ensuring seamless integration with modern cloud and data infrastructure. 
Moreover, LuxProvide is pursuing a parallel initiative to expose quantum computing resources, currently under development, to the HIGHWAY service and ESA Digital Twin platforms. The inclusion of quantum computing represents an exciting frontier, promising to further enhance computational capabilities and drive innovation within Digital Twin Earth projects. By addressing both technical and operational challenges, these collaborative efforts demonstrate how HPC platforms like MeluXina and services like HIGHWAY can play a transformative role in realising the vision of Digital Twin Earth. Through their integration into groundbreaking projects such as DestinE, these initiatives are advancing our ability to tackle global challenges and fostering a more sustainable and resilient future.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: D.04.06 - POSTER - Advancements in cloud-native formats and APIs for efficient management and processing of Earth Observation data

#stac #zarr #cog #parquet #cloud-native

Earth Observation (EO) data continues to grow in volume and complexity as the next generation of satellite instruments is being developed. Furthermore, novel advanced simulation models such as the Digital Twins (DTs) deployed in the scope of the Destination Earth (DestinE) project generate immense amounts of multidimensional data (a few PB/day in total) thanks to High Performance Computing (HPC) technology. Cataloguing, processing and disseminating such a broad variety of data sets is a huge challenge that has to be tackled in order to unleash the full potential of EO. Storage and analytics of vast volumes of data have moved from on-premise IT infrastructure to large cloud computing environments such as the Copernicus Data Space Ecosystem (CDSE), the DestinE Core Service Platform (DESP), Google Earth Engine or Microsoft Planetary Computer. In this respect, robust multidimensional data access interfaces leveraging the latest cloud-native data formats (e.g. COG, Zarr, GeoParquet, vector tiles) and compression algorithms (e.g. ZSTD) are indispensable to enable advanced cloud-native APIs (e.g. openEO, Sentinel Hub) and data streaming (e.g. EarthStreamer). Moreover, metadata models have to be standardized and unified (e.g. the STAC catalogue specification) among different data archives to allow interoperability and fast federation of various data sources. This session aims at presenting the latest advancements in data formats, data compression algorithms, data cataloguing and novel APIs to foster EO analytics in cloud computing environments.
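To illustrate why chunked cloud-native formats such as Zarr enable efficient access (a sketch, not tied to any specific platform above): a reader maps a requested window to chunk keys and fetches only those objects from object storage. The chunk size and array dimensions below are hypothetical:

```python
import math

def chunk_keys(row0, col0, nrows, ncols, chunks=(256, 256)):
    """Zarr-style chunk keys ("row.col") intersected by a 2-D window;
    a cloud reader fetches only these objects, never the whole array."""
    rows = range(row0 // chunks[0], math.ceil((row0 + nrows) / chunks[0]))
    cols = range(col0 // chunks[1], math.ceil((col0 + ncols) / chunks[1]))
    return [f"{r}.{c}" for r in rows for c in cols]

# A 512x512 window into a 10980x10980 array (one Sentinel-2 tile at 10 m)
keys = chunk_keys(1000, 2000, 512, 512)
total = math.ceil(10980 / 256) ** 2
print(f"fetch {len(keys)} of {total} chunk objects, e.g. {keys[0]}")
```

Each chunk object can additionally be compressed (e.g. with ZSTD) independently, which is what makes parallel, partial reads from object storage practical.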

Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Cloud-Optimized Geospatial Formats Guide

#zarr #cloud-native

Authors: Emmanuel Mathot, Aimee Barciauskas, Alex Mandel, Kyle Barron, Zac Deziel, Vincent Sarago, Chris Holmes, Matthew Hanson, Ryan Abernathey
Affiliations: Development Seed, Radiant Earth, Element84, Earthmover
Geospatial data is experiencing exponential growth in both size and complexity. As a result, traditional data access methods, such as file downloads, have become increasingly impractical for achieving scientific objectives. With the limitations of these older methods becoming more apparent, cloud-optimized geospatial formats present a much-needed solution. Cloud optimization enables efficient, on-the-fly access to geospatial data, offering several advantages:

- Reduced Latency: Subsets of the raw data can be fetched and processed much faster than downloading files.
- Scalability: Cloud-optimized formats are usually stored on cloud object storage, which is infinitely scalable. When combined with metadata about where different data bits are stored, object storage supports many parallel read requests, making it easier to work with large datasets.
- Flexibility: Cloud-optimized formats allow for high levels of customization, enabling users to tailor data access to their specific needs. Additionally, advanced query capabilities allow users to perform complex operations on the data without downloading and processing entire datasets.
- Cost-Effectiveness: Reduced data transfer and storage needs can lower costs. Many of these formats offer compression options, which reduce storage costs.

Providing subsetting as a service is feasible, but it entails ongoing server maintenance and introduces extra network latency when accessing data. This is because data must first be sent to the server running the subsetting service before reaching the end user. However, with the use of cloud-optimized formats and the right libraries, users can directly access data subsets from their own machines, eliminating the need for an additional server. When designing cloud-optimized data formats, it's essential to acknowledge that users will typically access data over a network. Traditional geospatial formats are often optimized for on-disk access and utilize small internal chunks.
However, in a network environment, latency becomes a significant factor, making it crucial to consider the potential number of requests that may be generated during data access. This understanding can help improve the efficiency and performance of cloud-based data retrieval. The authors have contributed to libraries for manipulating and storing geospatial data in the cloud. They authored a guide (https://guide.cloudnativegeo.org/) designed to help understand the best practices and tools available for cloud-optimized geospatial formats. We hope that readers will be able to reuse lessons learned and recommendations to deliver their cloud native data to users in applications and web browsers and contribute to the wider adoption of this format for large scale environmental data understanding. Keywords: Cloud-Native Raster, Geospatial Data, Cloud Optimized GeoTIFF, Zarr, TileDB, Satellite Imagery, Remote Sensing, Data Processing, Cloud Computing
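The request-count consideration described above can be made concrete with a small back-of-the-envelope calculation (illustrative, not from the guide): each internal tile touched by a windowed read costs roughly one HTTP range request, so tiny on-disk tiles translate into many network round trips:

```python
import math

def tiles_touched(rows, cols, tile):
    """Internal tiles (~ one HTTP range request each) needed to read an
    origin-aligned window of rows x cols pixels from a tiled raster."""
    return math.ceil(rows / tile) * math.ceil(cols / tile)

# Reading a 1024x1024 window: small on-disk tiles vs. larger cloud-sized tiles
for tile in (64, 256, 1024):
    n = tiles_touched(1024, 1024, tile)
    print(f"{tile:>4} px tiles -> {n:>4} range requests")
```

This is why cloud-optimized layouts favour larger internal tiles than disk-oriented formats: with per-request latencies of tens of milliseconds, the request count, not bandwidth, often dominates read time.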
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Metadata Requirements for EO Products

#stac

Authors: Katharina Schleidt, Stefania Morrone, Stefan
Affiliations: DataCove E.u., Epsilon Italia, Stiftelsen NILU
As Copernicus matures and ever more satellites are providing a wealth of data, we are also seeing an increase in the diverse data products being generated from the raw satellite data. The Copernicus Services enable access to ever more data products, spread across the six Copernicus Services. As these products are also gridded data, just like the raw source data, the same mechanisms are utilized for data discovery, metadata provision and data access. However, due to the different nature of the data being provided as derived products vs. the original raw source data, this leads to various issues in identifying and accessing relevant datasets. In this paper we will take STAC, the SpatioTemporal Asset Catalog, as an example, as we utilized this technology in the FAIRiCUBE Project. Further, we will focus on the challenge of finding data pertaining to a specific observable property, e.g. Surface Soil Moisture, Average Precipitation or Imperviousness. In the STAC Common Metadata, in addition to basic metadata such as title, provider or license, as one would expect to find as common metadata, we find structures for the description of instruments and the bands they deliver, all tailored towards satellite or drone data. Thus, in order to correctly describe a derived data product, one must look to the STAC extensions. Here, the same pattern becomes apparent, with the datacube extension only providing an informal textual description of the properties being conveyed. Finding data products on specific observable properties of interest remains a painstaking task. This gap in relevant metadata stems from two different communities encountering each other. The satellite community is relatively small, the types of raw data provided fairly constrained and well known within the community. In contrast, the terrestrial environmental science communities have long dealt with the challenge of multitudes of observable properties. 
This gap in relevant metadata for the description of derived products applies across technologies for gridded data, as most stem from the satellite domain and are only slowly being tailored for use with terrestrial products. In the provision of terrestrial geospatial data products, both conceptual models and vocabularies/ontologies have been utilized to better describe WHAT is actually being conveyed by the data. The ISO/OGC Observations, Measurements and Samples (ISO 19156) standard is comprised of a conceptual model providing guidance on provision of observational metadata. State of the art for indication of what data is being provided has long been references to common vocabularies or ontologies, providing the necessary concepts under stable URIs. In recent years these resources have become enriched with deeper semantics, e.g. the I-ADOPT framework for the FAIR representation of observable variables, enabling powerful search options. In order to fully reap the benefits of the increasing number of derived data products, the metadata systems used to describe them will have to evolve together with the types of derived data being made available. The EO community can gain valuable insights as to how to best describe EO derived products by taking concepts from terrestrial geospatial data on board.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Data representations for non-regular EO data: A case study using scatterometer observations from Metop ASCAT

#zarr #cloud-native

Authors: Sebastian Hahn, Clay Harrison, Wolfgang Wagner
Affiliations: TU Wien
Earth Observation (EO) data from instruments like the Advanced Scatterometer (ASCAT) onboard the series of Metop satellites present unique challenges for data representation. Unlike optical or SAR raster data, which can be seamlessly integrated into regular multidimensional data cubes, ASCAT observations are irregular, with each observation carrying its own unique timestamp. This irregularity requires alternative data models for efficient storage, access, and processing. Despite their prevalence, non-regular EO datasets are often overlooked in discussions about data modeling, with most approaches - particularly in cloud environments - favoring standard, well-structured raster formats. In this study, we explore three specialized data representations tailored to manage non-regular data: indexed ragged arrays, contiguous ragged arrays, and the incomplete multidimensional array representation. These models address the challenge of varying feature lengths within collections by employing different strategies for handling irregularities, such as padding with missing values for simplicity or leveraging compact, variable-length representations. We present these models using widely adopted cloud-native data formats (e.g. Zarr), demonstrating their practical applicability with ASCAT swath and time series data. This work highlights the importance of addressing non-standard cases in EO data representation, which are often overshadowed by solutions tailored for regular raster data. The adoption of alternative data models implemented with cloud-native data formats ensures that these datasets can be integrated into existing EO data pipelines.
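For illustration (not the authors' implementation), the three representations discussed above can be sketched in NumPy; the toy series standing in for per-location ASCAT time series are hypothetical:

```python
import numpy as np

# Three toy "features" (e.g. time series at ASCAT grid points) of varying length
series = [np.array([1.0, 2.0, 3.0]), np.array([4.0]), np.array([5.0, 6.0])]

# Contiguous ragged array: one flat value array plus per-feature counts
values = np.concatenate(series)                  # [1. 2. 3. 4. 5. 6.]
counts = np.array([len(s) for s in series])      # [3 1 2]
offsets = np.concatenate([[0], np.cumsum(counts)])

def get_feature(i):
    """Recover feature i from the contiguous ragged representation."""
    return values[offsets[i]:offsets[i + 1]]

# Indexed ragged array: instead store a feature index alongside every value,
# which allows appending new observations without rewriting the whole array
index = np.repeat(np.arange(len(series)), counts)   # [0 0 0 1 2 2]

# Incomplete multidimensional representation: pad to the longest feature,
# trading wasted space for a simple rectangular (chunkable) layout
padded = np.full((len(series), counts.max()), np.nan)
for i, s in enumerate(series):
    padded[i, :len(s)] = s

print(get_feature(2), index, padded[1])
```

The flat `values`/`counts`/`index` arrays and the rectangular `padded` array are all directly chunkable, which is what makes these layouts a natural fit for cloud-native stores such as Zarr.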
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Video compression for spatio-temporal Earth System Data

#zarr

Authors: Oscar J. Pellicer-Valero, MSc Cesar Aybar, Dr Gustau Camps-Valls
Affiliations: Image Processing Lab (IPL), Universitat de València
The unprecedented growth of Earth observation data over the last few years has opened many new research avenues, but it has also posed new challenges in terms of storage and data transmission. In this context, lossless (no information is lost) and lossy (some information is lost) compression techniques become very attractive, since satellite imagery tends to be highly redundant in the space, time, and spectral dimensions. Current approaches to multichannel image compression include (a) general-purpose lossless algorithms (e.g., Zstandard), which are frequently paired with domain-specific formats like NetCDF and Zarr; (b) image compression standards such as JPEG2000 and JPEG-XL; and (c) neural compression methods like autoencoders. While neural methods show much promise, they lack standardization, require extensive knowledge to apply to new datasets, are computationally expensive, and/or require specific hardware, limiting their practical adoption for general datasets and research scenarios. Most importantly, all these methods fail to properly exploit temporal correlations in time-series data. To tackle these issues, we propose a simple yet effective solution: xarrayvideo, a Python library that leverages standard video codecs to compress multichannel spatio-temporal data efficiently. xarrayvideo is built on top of two technologies: ffmpeg and xarray. On the one hand, ffmpeg, a video manipulation library widely available and accessible for all kinds of systems, contains well-optimized implementations of most video codecs. On the other hand, xarray is a Python library for working with labeled multi-dimensional arrays, which makes xarrayvideo compatible with the existing geospatial data ecosystem. Combining both allows for seamless integration with existing workflows, making xarrayvideo easy to use for any dataset with minimal effort by the researcher.
In summary, we introduce the following contributions: First, we present a new Python library, xarrayvideo, for saving multi-dimensional xarray datasets as videos using a variety of video codecs through ffmpeg. Second, we showcase its utility through a set of compression benchmarks on three real-world multichannel spatio-temporal datasets: DeepExtremeCubes, DynamicEarthNet and ERA5, as well as a custom dataset, achieving Peak Signal-to-Noise Ratios (PSNRs) of 40.6, 55.9, 46.6, and 43.9 dB at 0.1 bits per pixel per band (bpppb) and 54.3, 65.9, 62.9, and 56.1 dB at 1 bpppb, surpassing JPEG2000 baselines in the majority of scenarios by a large margin. Third, we redistribute through Hugging Face a compressed version of the DeepExtremeCubes dataset (compressed from 3.2 TB to 270 GB at 55.8-56.8 dB PSNR) and the DynamicEarthNet dataset (compressed from 525 GB to 8.5 GB at 60.2 dB PSNR), which serve as illustrative examples and provide the community with a much more accessible version of these datasets. With xarrayvideo, we hope to address the issues emerging from increasingly large Earth observation datasets by making high-quality, efficient compression tools accessible to everyone.
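The rate/distortion figures quoted above (PSNR in dB, bits per pixel per band) follow from two short formulas. The sketch below is an illustrative NumPy implementation of those metrics, not code from xarrayvideo itself:

```python
import numpy as np

def psnr(original, reconstructed, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB between two arrays."""
    mse = float(np.mean((np.asarray(original) - np.asarray(reconstructed)) ** 2))
    return 10.0 * np.log10(data_range ** 2 / mse)

def bpppb(compressed_bytes, shape):
    """Bits per pixel per band for a cube of the given shape, e.g. (time, band, y, x)."""
    n_values = int(np.prod(shape))
    return compressed_bytes * 8 / n_values
```

For example, a reconstruction that is uniformly off by 0.1 on data in [0, 1] scores a PSNR of 20 dB.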
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Cloud-based framework for data cube extraction of extreme events

#zarr #stac #cloud-native

Authors: Marcin Kluczek, Jędrzej Bojanowski S., Jan Musiał, Dr. Mélanie Weynants, Fabian Gans, Khalil Teber, Miguel Mahecha D.
Affiliations: CloudFerro S.A., Max Planck Institute for Biogeochemistry, Leipzig University
The growing need for detailed analysis of extreme environmental events requires advanced data processing and storage solutions. This work presents a cloud-based framework designed to extract multivariate data cubes of extreme events through the fusion of Sentinel-1 radar data and Sentinel-2 optical imagery. This framework supports advanced environmental monitoring within the ARCEME (Adaptation and Resilience to Climate Extremes and Multi-hazard Events) project, which focuses on global multi-hazard event assessments and aims to improve our understanding of cascading extreme events that affect ecosystems and society. The framework utilizes the SpatioTemporal Asset Catalogs (STAC) API to streamline Copernicus Earth Observation (EO) data access and management. This integration of cloud storage, multithreaded processing, and API-driven data access provides a robust solution for efficiently handling EO data in studies of extreme climate events. Key to the framework is the use of cloud-native storage in the Zarr format, which enables chunked, compressed data storage, optimizing both performance and resource utilization. Zarr’s compatibility with Dask allows for multithreaded, parallel data access, significantly accelerating data cube generation and analysis. The CREODIAS cloud infrastructure supports concurrent task execution, ensuring scalability and speed in handling large Earth Observation data, essential for real-time monitoring and large-scale analyses of extreme events. This work presents a comprehensive cloud-based framework for generating multitemporal data cubes of cascading extreme events, with a focus on efficient data filtering, preprocessing, and global-scale event detection. The framework integrates multi-hazard events from the ARCEME event database, which combines climate reanalysis data and reported impacts from cascading droughts and extreme precipitation events across diverse regions. 
By leveraging the STAC API, the framework streamlines data access and management, while cloud-native storage in the Zarr format ensures efficient chunking and compression. Additionally, multithreaded processing with Dask accelerates data cube generation, enabling scalable global studies of extreme events and their complex spatiotemporal dynamics and interactions.
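The efficiency of Zarr's chunked storage model reduces to simple index arithmetic: a read touches only the chunks intersecting the requested window, and each chunk can be fetched and decompressed in a separate Dask task. A minimal sketch of that mapping (illustrative only, not the zarr library):

```python
import math

def n_chunks(shape, chunk_shape):
    """Number of chunks along each axis of a chunked (Zarr-style) array."""
    return tuple(math.ceil(n, ) // s + (1 if n % s else 0) if False else math.ceil(n / s)
                 for n, s in zip(shape, chunk_shape))

def chunk_index(coord, chunk_shape):
    """Grid index of the chunk holding a given (t, y, x) element; only this
    chunk needs to be read to access that element."""
    return tuple(c // s for c, s in zip(coord, chunk_shape))
```

For a (4, 128, 128) cube chunked as (1, 64, 64), the chunk grid is (4, 2, 2), and element (2, 70, 10) lives in chunk (2, 1, 0).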
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Optimizing Partial Access to Sentinel-2 Imagery With JPEG2000 TLM Markers

#cog #parquet

Authors: Jérémy Anger, Thomas Coquet, Carlo de Franchis
Affiliations: Kayrros, ENS Paris-Saclay
Efficient access to remote sensing data is critical for the success of applications such as agriculture and human activity monitoring. In the context of Sentinel-2, data products are distributed as JPEG2000 files at Level-1C and Level-2A processing levels. Optimizing data access involves limiting downloads to the regions of interest, minimizing latency, and reducing unnecessary decompression and decoding. Currently, the Copernicus Data Space Ecosystem (CDSE) platform provides new mechanisms, such as HTTP range requests thanks to the S3 protocol, which allow partial file downloads—a significant improvement over the previous SciHub platform. These enhancements are well known when exploiting cloud-optimized formats like Cloud Optimized GeoTIFF (COG) and Parquet files. Sentinel-2 imagery is distributed in JPEG2000 format. Consider, for example, a 10 m band: the encoder compresses the data and organizes the image into 121 independent 1024×1024 internal tiles. Each tile is encoded sequentially, with headers (Start of Tile markers, or SOTs) indicating the length of the associated codestream. While this allows tiles to be located by sequentially fetching and interpreting the headers, retrieving a specific tile currently requires multiple HTTP range requests: up to 120 small requests (<1 KB) to determine the last tile's location and a final larger request (~1 MB) for the tile data. Although efficient in terms of data size and decoding effort, this approach incurs high latency for users and infrastructure overhead for the CDSE provider. A solution to these inefficiencies lies in utilizing TLM (Tile-Part Length Marker) headers, an optional feature in the JPEG2000 standard. TLM markers, stored in the main file header, allow direct computation of any tile's location without sequential parsing. With TLM markers, accessing a specific tile requires only two HTTP range requests: one to fetch the main header (~4 KB) containing TLM markers and another for the desired tile's data.
This approach reduces the average number of requests from 61 to just 2, significantly lowering latency and system load. Additionally, this configuration offers performance similar to COGs while avoiding a major file format change. Discussions with ESA to enable TLM markers in future products are ongoing. Adopting TLM markers requires minimal modification to the existing JPEG2000 encoding pipeline, as most mainstream JPEG2000 libraries already support this feature. However, historical products (Collection 0 and Collection 1) lack TLM markers, and re-encoding these files would be prohibitively expensive. An alternative solution involves generating external TLM metadata files for past datasets, enabling rapid access to tile locations. For instance, TLM metadata for all products of a specific MGRS tile would require less than 100 MB of storage and could be distributed efficiently. In conclusion, enabling TLM markers in new Sentinel-2 products would provide substantial benefits to the remote sensing community, improving data accessibility with minimal impact on encoding processes. For already encoded imagery, the generation of external TLM metadata offers a viable pathway to achieving similar efficiency gains. These advancements align with the goal of reducing barriers to high-resolution geospatial data access and optimizing resource usage on both client and provider sides.
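The benefit of TLM markers comes down to offset arithmetic: once the per-tile lengths stored in the main header are known, any tile's byte range follows by summation, so a single further HTTP range request suffices. A minimal illustration (not a JPEG2000 parser; the header length and tile lengths below are placeholder values):

```python
def tile_byte_ranges(header_length, tile_lengths):
    """Byte ranges (inclusive, HTTP Range style) of each JPEG2000 tile-part,
    computed from TLM-style lengths read from the main header."""
    ranges, offset = [], header_length
    for length in tile_lengths:
        ranges.append((offset, offset + length - 1))
        offset += length
    return ranges

# Fetching tile 2 then needs a single request for bytes 7096-8595:
ranges = tile_byte_ranges(4096, [1000, 2000, 1500])
```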
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: GeoHEIF - Organizing geospatial images into data cubes inside a HEIF file format.

#cog

Authors: Joan Maso, Nuria Julia, Dirk Farin, Brad Hards, Martin Desruisseaux, Jérôme St-Louis, Alba Brobia
Affiliations: CREAF (Gruments), Imagemeter, Silvereye, Geomatys, Ecere
The Open Geospatial Consortium (OGC) GeoTIFF standard introduced the concept of georeferencing into the popular TIFF format. This was possible thanks to an extensible structure defined in the original format, based on the Image File Directory (IFD). The IFD structure allows for multiple images in the same file. While this has been widely used to store multiband imagery in a single file (e.g. Landsat channels) or to include a multiresolution image (as is done in the Cloud Optimized GeoTIFF, COG), there is no standard way to organize several images into a datacube. The High Efficiency Image File Format (HEIF) is a digital container format for storing images and image sequences, developed by the Moving Picture Experts Group (MPEG) and standardized in 2015. A single HEIF file can contain multiple images, sequences, or even video, and incorporates the latest advances in image compression. The structure of the content in boxes provides an extension mechanism comparable to that of TIFF. There is as yet no standard mechanism to include georeference information in a HEIF file, and support for HEIF files in current GIS software is limited due to the format's relatively recent introduction. This creates an opportunity for the OGC to define an extension for HEIF that describes how to include georeference information in the file, taking advantage of the experience gained from GeoTIFF while also considering the recent progress in the definition of multidimensional datacubes. The whole idea is to specify a multidimensional HEIF file, based on the aggregation of georeferenced 2D images that can optionally be structured in tiles and support multiresolution (a pyramidal structure, called “overviews” in COG). The fundamental datacube structure is already described in ISO 19123, which defines a conceptual schema for coverages that separates the concepts of domain and range.
The domain consists of a collection of direct positions in a coordinate space, which can include spatial, temporal, and non-spatiotemporal (parametric) dimensions. The domain is structured in a number of axes, also called dimensions. All the intersections of the different dimensions of the coverage can be seen as a hypercube or hyper-grid. With each intersection of the direct positions of the dimensions we can associate one or more property values (called rangesets in the OGC Coverage Implementation Schema), populating the datacube. In its implementation in the HEIF file, we propose that the datacube be decomposed into 2D planes (a.k.a. images) that are georeferenced using a Coordinate Reference System (CRS) and an affine transformation matrix (which in many cases will be “diagonal” and define only a linear scaling and a translation of the image model into the CRS model). Each plane has a fixed “position” in the other N-2 dimensions (a.k.a. extra dimensions), forming a multidimensional stack of planes. The images will contain the values of a single property in the datacube. The HEIF file has an internal structure of property boxes (which provides a similar extensibility mechanism to the IFD structure in TIFF). The proposal described in this communication is to define property boxes for describing CRSs, extra dimensions, fixed positions of the extra dimensions, and property types (coverage range types). In HEIF, each property box has a unique identifier that can be associated with HEIF entities. Since an image is an entity, each georeferenced image can be associated with the necessary property boxes to define its “position” in the datacube “stack” and the meaning of the values of its pixels (property types). It is worth noting that the 2D CRS dimensions, the extra dimensions and the property types are defined as URIs that point to a semantic definition of the axes.
The 2D CRS dimension points to a CRS vocabulary (commonly describing the EPSG codes), while the extra dimensions and the property types point to a concept in a variable vocabulary (such as QUDT) and to a unit-of-measure vocabulary (commonly a UCUM ontology). Once consolidated in HEIF, this approach can also be applied to a new version of the GeoTIFF standard. This talk will present the current status of the OGC GeoHEIF standard as advanced in the OGC Testbed-20 and the OGC GeoTIFF Standard Working Group.
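The affine georeferencing of each 2D plane described above can be illustrated in a few lines. The parameter layout below is a common GDAL-style convention chosen for illustration, not the GeoHEIF specification:

```python
def pixel_to_crs(row, col, transform):
    """Map a pixel (row, col) to CRS coordinates with an affine transform
    (x0, dx, rx, y0, ry, dy):
        x = x0 + dx * col + rx * row
        y = y0 + ry * col + dy * row
    A "diagonal" transform (rx = ry = 0) reduces to a linear scaling plus
    a translation, as noted in the abstract."""
    x0, dx, rx, y0, ry, dy = transform
    return (x0 + dx * col + rx * row, y0 + ry * col + dy * row)

# 10 m pixels, origin at (500000, 4600000), north-up (negative dy):
corner = pixel_to_crs(2, 3, (500000, 10, 0, 4600000, 0, -10))
```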
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: E.01.01 - POSTER - EO for Cultural and Natural Heritage: from innovation to user uptake

Cultural and Natural Heritage (CNH) have major significance for local communities as an identity factor and a facilitator of social cohesion. Furthermore, CNH are strategic assets for tourism, and their promotion and valorisation contribute to countries' economic development. Preservation of CNH is therefore essential, also towards strengthened sustainability and resilience to current global challenges as promoted by the United Nations' 2030 Agenda for Sustainable Development.
Dedicated sessions held during the last two editions of the Living Planet Symposium helped unveil the potential and benefits of Earth Observation (EO) for CNH. Satellite data and technologies from Copernicus and Contributing Missions are already playing a key role in enabling novel, transformative and cost-effective solutions to undertake a variety of specific tasks, such as: archaeological prospection, landscape archaeology, multi-temporal monitoring, risk assessment, impact assessment due to anthropogenic activities (e.g. urbanization, illegal excavations), natural hazards (e.g. floods, earthquakes) and climate change.
Public administrations and authorities in charge of CNH management are more aware of these applications, and so more remote sensing and geospatial companies are developing tailored services. At the same time, CNH has become an established application area in Copernicus and its services, e.g. Support to EU External and Security Actions (SESA) Service, Emergency Management Service (EMS) and Climate Change Service (C3S). Furthermore, various initiatives have been launched by ESA (e.g. Downstream Gateway, ARTES IAP) and national space agencies to support industry in developing downstream applications in the CNH sector, and more alliances have been established with national and international bodies such as UNESCO and ICOMOS.
In this context, this session aims to understand how EO scientists, CNH user community, institutions and industry are partnering and cooperating to enable novel applications, improve those that are already being delivered, and facilitate the user uptake to make EO data and technologies from Copernicus, Contributing Missions and commercial missions more deeply embedded into operational workflows for study, monitoring, preservation and promotion of CNH.

The session encourages submissions focusing on:
• Solutions based on the exploitation of satellite data, as well as exploitation of Artificial Intelligence, Machine Learning, thematic platforms, cloud computing resources and infrastructure, collaborative environments;
• Benefits from the use of Copernicus products and services, also in relation to impacts due to climate change and towards future resilience in CNH management;
• Use cases addressing specific user requirements and needs in the field of either CNH discovery, study, monitoring, preservation or cultural/touristic promotion;
• Success stories and best practices of EO integration in operational systems, workflows and processes on CNH;
• Downstream applications, with a focus on multidisciplinary collaboration and partnerships between heritage institutions, academia and commercial providers;
• Initiatives of capacity building towards user uptake by the CNH community and end-users.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Monitoring and analysis of land cover changes in Yala National Park, Sri Lanka using Landsat data and Google Earth Engine

Authors: Lei Luo
Affiliations: Aerospace Information Research Institute, Chinese Academy of Sciences, International Research Center of Big Data for Sustainable Development Goals, International Centre on Space Technologies for Natural and Cultural Heritage under the auspices of UNESCO
Yala National Park (YNP) in Sri Lanka is one of the most biodiversity-rich regions in South Asia. However, increasing land cover changes are altering landscape patterns and threatening regional biodiversity. Currently, there is a lack of dynamic, long-term monitoring of land cover changes both within and around YNP, as well as of quantitative assessments of the impact of these changes on the regional landscape. This study utilizes a time series of Landsat satellite imagery and applies a continuous change monitoring classification method on the Google Earth Engine (GEE) platform to dynamically track and map land cover changes within YNP and its buffer zone. Based on the classification results, landscape metrics are used to quantitatively analyze the characteristics and trends of landscape pattern changes driven by external factors. Finally, land cover and landscape pattern changes are correlated with human-elephant conflict data to explore their implications for achieving SDG 15 (Life on Land). The results reveal the following over a 23-year period: (i) the construction of reservoirs and road expansion led to a 10% decrease in forest area within YNP and an increase in fragmentation, as evidenced by a decline in the Aggregation Index by 0.33; (ii) fragmentation in the buffer zone has been even more pronounced since the 21st century (Aggregation Index decreased by 0.49), with fragmentation patterns gradually expanding from the core area, peaking at a distance of 10–15 km; (iii) cropland in the buffer zone increased by approximately 16%, with the expansion of cropland showing a significant correlation with human-elephant fatalities (r = 0.92, p < 0.01). Such findings contribute to the realization of SDG 15.1 (the conservation and restoration of terrestrial ecosystems), SDG 15.2 (sustainable forest management), and SDG 15.5.
Moving forward, the integration of scientific research with policy-making will be a core driver in achieving SDG targets in a comprehensive and effective manner.
Keywords: Landsat time-series data; land cover; landscape metrics; change detection; human–elephant conflict; Yala National Park (YNP)
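The Aggregation Index changes cited above can be computed per class from the raster's like-adjacencies. The following is a sketch following the published FRAGSTATS definition of the metric, not the software used in the study:

```python
import math
import numpy as np

def aggregation_index(grid, cls):
    """Aggregation Index (%) of one class: observed like-adjacencies
    (counted once, via right and down neighbours) divided by the maximum
    possible for the same number of class pixels, per FRAGSTATS."""
    mask = np.asarray(grid) == cls
    g = int((mask[:, :-1] & mask[:, 1:]).sum() +
            (mask[:-1, :] & mask[1:, :]).sum())
    a = int(mask.sum())
    n = math.isqrt(a)          # side of the largest square within a pixels
    m = a - n * n
    if m == 0:
        g_max = 2 * n * (n - 1)
    elif m <= n:
        g_max = 2 * n * (n - 1) + 2 * m - 1
    else:
        g_max = 2 * n * (n - 1) + 2 * m - 2
    return 100.0 * g / g_max if g_max else float("nan")
```

A fully clumped 2×2 patch scores 100, while a checkerboard of the same class scores 0, which is why a declining index indicates fragmentation.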
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: QARA+CSK Project: An Innovative Approach to Monitoring and Preserving Cultural Heritage at Risk from Quarrying and Extractive Activities.

Authors: Giuseppe Guarino, Alessia Brucato, Nicola Masini, Dr. Deodato Tapete
Affiliations: University Of Bologna, University of Bari "Aldo Moro", Italian Space Agency, Institute of Heritage Science - National Research Council of Italy
Satellite imagery has been used extensively by archaeologists to explore and monitor remote and hard-to-reach historical contexts, which are now, more than ever, at risk of disturbance by calamitous events and human activities. The main causes of the irreversible loss of our historical and natural heritage are manifold, including urban sprawl and intensive land use by agricultural and industrial development. Among the latter, it is worth recalling the increasing expansion of the mining and quarrying sector, which is quite ubiquitous across the globe but has not yet been thoroughly investigated with regard to its impacts on cultural heritage preservation. How can Earth observation data help in detecting mining and quarrying activities that may be hazardous to cultural heritage? What is the added value of satellite Synthetic Aperture Radar (SAR) compared to the optical imagery more commonly used by heritage practitioners? These are the main questions that the Quarry Archaeological Risk Assessment + COSMO-SkyMed (QARA+CSK) project, supported by the Italian Space Agency (ASI), aims to answer. QARA+CSK investigates the potential of SAR technologies for the monitoring of quarrying and mining activities across cultural landscapes and near heritage sites, to support risk mitigation. The study focuses on high-risk areas in Sicily, where the territory is rich in limestone, several private companies have historically operated, and impacts on cultural heritage conservation may arise despite the existing regulatory framework and policy enforcement. The analysis relies on advanced change detection techniques, utilising COSMO-SkyMed (both First Generation – CSK and Second Generation – CSG) data in conjunction with Copernicus Sentinel-2 imagery. A multi-temporal analysis from 2011 to 2023 was conducted as part of the project.
The selected temporal interval allowed the combined analysis of CSK/CSG StripMap imagery, collected since 2011 by ASI through the Map Italy project according to a regular 16-day revisit observation scenario, and the full Sentinel-2 time series since 2015. Thanks to the variety of open-access archives of multi-temporal and multi-sensor satellite images and numerous free/libre open-source software tools, Machine Learning algorithms (non-parametric supervised classification with a Random Forest classifier) were initially applied to multispectral Sentinel-2 Level-2A imagery. The obtained classification was spatially compared to the dataset of heritage sites, either digitally recorded or known from published literature, and the outcome was a map of active quarries located near high-risk archaeological areas. To assess the state of activity of the mapped quarries and open mines, and whether they represent an ongoing threat, amplitude change detection analysis was applied to CSK/CSG StripMap image pairs. Particular care was taken with pair selection to prevent the influence of seasonal components that may interfere with the signal related to quarrying and mining. Furthermore, ascending and descending geometries were selected to achieve the best visibility of the open mines. The results show the incremental growth of the quarries over several years, suggesting which areas are more at risk for cultural heritage preservation. The produced maps therefore showcase how a potential application product may be generated and provided as a further geospatial layer to input into the development of targeted management and intervention protocols, facilitating the identification of proxy indicators to detect, monitor, and address diverse manifestations of archaeological risk effectively. The project integrates Earth observation data into operational workflows for monitoring cultural heritage, thereby promoting innovative approaches to preservation.
Aligned with the session's objectives, this work showcases cutting-edge EO-based workflows and addresses the pressing needs of the Cultural and Natural Heritage (CNH) community. Leveraging open-access data underscores the critical role of EO technologies in safeguarding CNH while fostering multidisciplinary partnerships and advancing sustainable heritage management practices.
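Amplitude change detection between co-registered SAR acquisitions is often formulated as thresholding the log-ratio of the two images. The snippet below is a simplified stand-in for that idea, not the project's actual pipeline (the threshold value is an arbitrary placeholder):

```python
import numpy as np

def log_ratio_change(amp_t1, amp_t2, threshold=1.0):
    """Flag pixels whose amplitude log-ratio between two co-registered SAR
    images exceeds a threshold; e.g. quarry expansion shows up as a strong
    amplitude change between acquisitions."""
    eps = 1e-6  # guard against division by zero over no-data pixels
    lr = np.log((np.asarray(amp_t2, dtype=float) + eps) /
                (np.asarray(amp_t1, dtype=float) + eps))
    return np.abs(lr) > threshold
```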
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: IRIDE Service Segment: Geospatial Products to Support the Preservation of Italian Cultural Heritage

Authors: Dr. Emanuela Valerio, Alessandro Brunetti, Dr. Enrico Ciracì, Michele Gaeta, Dr. Roberta Marini, Alessandro Patacchini, Dr. Francesco Valente, Prof. Paolo Mazzanti
Affiliations: Nhazca S.r.l., e-Geos S.p.a.
IRIDE represents one of the most relevant European space programs in the field of Earth observation. Its implementation will take place in Italy on the initiative of the Government, thanks to the resources of the National Recovery and Resilience Plan (PNRR) integrated by the funds of the National Complementary Plan (PNC). IRIDE is a satellite system that will be operational by June 2026, coordinated by ESA (European Space Agency) with the participation of ASI (Italian Space Agency). IRIDE will offer a wide range of geospatial services to all Users. These geospatial services, designed on the basis of User needs, provide information (maps, monitoring, multi-temporal analyses) relating to the territory and the sea through the processing of Earth Observation (EO) data from satellites and their integration with other data coming from models, ground networks, other available open data or data provided by the User itself. Our work falls within the Service Value Chain (SVC) S3-03 of the IRIDE Service Segment Lot 2, based on the study of ground deformations, with the aim of mapping and monitoring land and infrastructure movements from satellite data, caused by natural dynamics or events such as earthquakes, volcanic eruptions, landslides, subsidence phenomena and other natural and anthropic phenomena. In particular, our work is focused on the monitoring of Italian Cultural Heritage and the surrounding areas; the service aims to improve our understanding of ground displacement processes that affect cultural heritage (e.g., landslides, subsidence, etc.) by using high-resolution SAR data, acquired by the Sentinel-1, COSMO-SkyMed and SAOCOM constellations, and applying to them the Advanced Differential SAR Interferometry (A-DInSAR) technique.
Starting from the achieved deformation maps and time series, the next step involves the provision of instruments capable of producing spatio-temporal anomalies, differential deformation maps, and other useful statistical information. To perform our analyses, we apply the Advanced Differential SAR Interferometry (A-DInSAR) technique to the Sentinel-1, COSMO-SkyMed and SAOCOM datasets. This multi-temporal interferometric analysis was carried out using the well-known Persistent Scatterers Interferometry (PSI) technique. The application of this technique allows the generation of PS (Persistent Scatterers)/DS (Distributed Scatterers) deformation maps, which represent the LOS (Line-of-Sight) and 2D displacement during the considered time interval. Unlike conventional DInSAR techniques, the A-DInSAR multi-interferometric analysis generates displacement time series for each PS/DS. To achieve our geospatial products, we apply two main methodologies to the measurements retrieved with the above-mentioned method:
• Territorial scale analysis: this method involves a statistical approach for the study of Italian Cultural Heritage, with the aim of classifying all active ground deformations on a territorial scale.
• Single-structure scale analysis: this comprises several algorithms that interpolate the measurement points and spatialize the deformations affecting the single structure. These methods allow a detailed analysis of the single cultural asset, with the aim of detecting and measuring the active deformations acting on it.
Starting from the EO and non-EO data described above and applying to them the A-DInSAR method, we are able to produce five geospatial products that are very useful for End-Users to monitor the deformations that could affect the studied cultural heritage.
In the following, we describe each geospatial product in detail:
1) InSAR Statistical Indexes: three indexes for each considered building and/or structure, computed from the measured PS density and the velocity's minimum, mean, and maximum values.
2) 3D Velocity Decomposition: this product allows the estimation and 3D visualization of horizontal and vertical deformation over vertically developed structures (e.g., towers, high monuments).
3) Identification of differential deformation over cultural heritage: this product identifies the portions of cultural heritage structures affected by significant differential deformations.
4) Temporal anomaly maps: this product identifies the PS exhibiting displacement time series characterized by acceleration with respect to the mean deformation trend within a predefined temporal interval.
5) Intersection of spatio-temporal anomalies with exposed cultural heritage: this product intersects the cultural heritage layer, the geohazards layer and the ground deformation measurements. Its aim is the classification of each cultural heritage asset on the basis of the ‘Hazard Score’, which is determined by examining the overlap between natural hazards within a 100-meter buffer zone and the occurrence of spatio-temporal anomalies in the surrounding area.
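The decomposition of LOS measurements into vertical and horizontal components (as in the 3D Velocity Decomposition product) amounts to solving a small linear system from ascending and descending acquisitions. The sketch below is illustrative only; the sign convention is an assumption and the north component is neglected, as is common for near-polar SAR orbits:

```python
import numpy as np

def decompose_los(d_asc, d_desc, inc_asc, inc_desc):
    """Recover vertical (up) and east-west displacement from ascending and
    descending LOS measurements, given the incidence angles (radians):
        d_asc  = cos(i_a) * d_up - sin(i_a) * d_east
        d_desc = cos(i_d) * d_up + sin(i_d) * d_east
    (illustrative sign convention)."""
    A = np.array([[np.cos(inc_asc), -np.sin(inc_asc)],
                  [np.cos(inc_desc), np.sin(inc_desc)]])
    d_up, d_east = np.linalg.solve(A, np.array([d_asc, d_desc]))
    return float(d_up), float(d_east)
```

As a sanity check, purely vertical motion projects identically onto both geometries, so equal ascending and descending LOS values should decompose into zero east-west motion.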
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Risk Assessment of the Effects of Climate Change on Lakeshore Sites Using Earth Observation Data. The Case of the Prehistoric Fortified Settlement at Smuszewo (Poland).

Authors: Lidia Żuk, Julia Holzner, Sławomir Królewicz, Simon Plank, Renata Graf, Włodek Rączkowski
Affiliations: Adam Mickiewicz University, German Aerospace Center (DLR)
Climate change is one of the factors affecting the preservation or degradation of material cultural heritage. Understanding the potential risks at the site level is essential to ensure that appropriate strategies are applied to protect the material relics. Lakeshore sites are located in an environment that is particularly sensitive to climate change. The waterlogged conditions are crucial for the preservation of fragile archaeological materials, such as those found at the prehistoric fortified settlement at Smuszewo, Poland. Archaeological surveys conducted between the 1950s and 2010s revealed well-preserved wooden structures on land and on the lake shore. However, changes in the lake environment (e.g. decreasing water levels, eutrophication, peatland drainage) may expose these relics to unfavourable conditions, leading to their exposure and gradual destruction. It is therefore crucial to understand the dynamics of lake changes, both seasonal and long-term. The EU-funded TRIQUETRA project (Toolbox for assessing and mitigating Climate Change risks and hazards threatening cultural heritage) seeks to provide tools to identify, assess and monitor these changes. Earth Observation data play a crucial role in the TRIQUETRA methodology and have been used to identify climate-related risks to cultural heritage. The initial analysis, based on gridded climate observational datasets (E-OBS dataset) and local weather stations with long meteorological records (from the mid-20th century), showed a noticeable trend in temperature increase. However, no statistically significant changes in precipitation were observed. In addition, there are no historical or recent records of in situ water level measurements for the nearby Lake Czeszewskie. These two factors make it difficult to estimate changes in water levels. Earth Observation data, characterised by high temporal resolution, provide a unique opportunity to understand the dynamics of the lake environment. 
Using Sentinel, TerraSAR-X and Planet datasets, we analyse the impact of observed climate changes (trends in temperature and precipitation) on seasonal and long-term lake changes. In particular, we consider the effect of increasing temperature on seasonal conditions and their influence on the material cultural heritage: decreasing snow/ice cover in winter and fluctuations in water levels between spring and summer, including extreme events (floods, droughts, etc.). It is hoped that the results of these analyses will be used to mitigate negative changes in cooperation with the environmental agencies responsible for the recovery of natural resources.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Landscape Metrics Demonstrating Threats to Traditional and Archaeological Landscapes in Southern Iraq

Authors: Hope Irvine, Dr Louise Rayne, Francesca Cigna, Deodato Tapete, Dr Michelle de Gruchy, Prof. Jaafar Jotheri, Mrs Jennifer Makovics
Affiliations: Newcastle University, National Research Council (CNR) - Institute of Atmospheric Sciences and Climate (ISAC), United Nations Satellite Centre (UNOSAT)
This paper uses open-source satellite imagery to analyse long-term trends in cultivation in an area of southern Iraq bounded between the western desert and the Euphrates (for the full regional-scale remote sensing analysis, see Cigna et al. 2024). A chain of springs has supplied traditional irrigation systems and fields for over 1500 years, but these are now impacted by changing patterns of irrigation relying on pump wells. Southern Iraq is affected by decreasing NDVI trends, increased salinization and fossil-fuel pumps depleting groundwater. This study examines these traditional canal irrigation systems and their landscape using NDVI time-series from Landsat 5 (since 1985), 7, 8, 9 and Sentinel-2 (since 2018) imagery. Imagery was obtained from Google Earth Engine. Different image processing techniques were assessed for their effect on NDVI values, including cloud filtering, Ordinary Least Squares (OLS) regression, and the choice of SR or TOA datasets. Landsat imagery was used to assess long-term NDVI change (1984-2023) and Sentinel-2 was used to spatially assess short-term vegetation patterns to understand current threats to the landscape. Mann-Kendall tests and landscape metrics were then applied to the imagery. Landscape metrics are statistical measures that quantify and characterize spatial patterns and their changes. Land types were classified into vegetation types based on NDVI values (Jabal et al. 2022: 1144), comprising desert, sparse shrubbery and irrigated land. The dominance, shape and fragmentation of these land types were assessed using landscape metrics such as the Largest Patch Index (LPI), Mean Shape Index (SHAPE_MN) and Number of Patches (NP), respectively, across a time-series to identify signs of agricultural and landscape desertification. Findings reveal significant NDVI fluctuations since the 1980s, significant recent vegetation declines, and agricultural land fragmentation, highlighting the impact of marshland drainage and diesel-pump irrigation.
This work maps and classifies historic canals and detects that traditional field systems are experiencing vegetation decline more intensely than the wider landscape. This research contributed to the Bilateral Agreement CNR/Royal Society (United Kingdom) project “Vanishing archaeological landscapes under anthropogenic and climate change threats” (Principal Investigators: F. Cigna, CNR-ISAC, and L. Rayne, UNEW), 2023-2024 (Cigna et al., 2024). Fieldwork undertaken by Jotheri was funded by Rayne’s Newcastle University Academic Track (NUACT) Fellowship. Hexagon rectification, undertaken by Lavris Makovics, was funded by a NUACT scheme PhD scholarship. Irvine’s MSc and PhD are funded by EPSRC as part of the Geospatial Data Science Centre for Doctoral Training (CDT) with Newcastle University. References: F. Cigna et al., "Assessing Anthropogenic and Climate Change Threats to Archaeological Landscapes in Iraq Using Earth Observation," 2024 IEEE Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), Oran, Algeria, 2024, pp. 424-428, doi: 10.1109/M2GARSS57310.2024.10537372. Jabal, Z.K., Khayyun, T.S. and Alwan, I.A. (2022) ‘Impact of Climate Change on Crops Productivity Using MODIS-NDVI Time Series’, Civil Engineering Journal 8(6): 1136-1156.
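The trend-testing and land-type classification steps described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the Mann-Kendall implementation below omits tie correction, and the NDVI class thresholds (0.1 and 0.3) are assumed for illustration rather than taken from Jabal et al.:

```python
import numpy as np

def mann_kendall(series):
    """Non-parametric Mann-Kendall trend test (two-sided, normal
    approximation, no tie correction). Returns the S statistic and Z score."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z

def classify_ndvi(ndvi, desert_max=0.1, sparse_max=0.3):
    """Map NDVI values to land-type codes: 0 desert, 1 sparse shrubbery,
    2 irrigated land (illustrative thresholds)."""
    return np.digitize(np.asarray(ndvi, dtype=float), [desert_max, sparse_max])

# Example: a steadily greening pixel yields a large positive S and Z
ndvi_series = np.linspace(0.05, 0.45, 30) + 0.01 * np.sin(np.arange(30))
s, z = mann_kendall(ndvi_series)
```

A positive Z above ~1.96 would indicate a significant increasing NDVI trend at the 5% level; the per-pixel class maps feed landscape metrics such as NP and LPI.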
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Digital Humanities For A Holistic Cultural And Natural Heritage Remote Management Model

Authors: Dr Athina Chroni (Archaeologist)
Affiliations: National Technical University of Athens
Interdisciplinarity has today become the defining component running through varied scientific fields. In this context, the sciences have become a catalyst for resolving multiple issues in the humanities, specifically archaeology, whether in fieldwork, in laboratory processing or in the dissemination of cultural information, ultimately giving rise to the term "digital humanities" and to the emergence of integrated models of cultural and natural heritage management. Consequently, remote sensing and photointerpretation, geophysical prospection, geoinformatics, archaeoinformatics, spectral analysis of archaeological finds, biomolecular archaeology, real-time digital documentation in the field, 3D digitization and representation combined with AR/VR/MR, effective management of big data, and new museological approaches constitute only some of the aspects that enrich, and are called upon to complete, the scientific profile of the contemporary cultural and natural heritage professional. In this context, two distinct case studies highlight the combined application of such innovative technologies, which have successfully contributed to the remote integrated management of cultural heritage relics in both natural and urban environments: in the first case, by locating buried archaeological relics of the Bronze Age on the plateau of the northern arm of Voidokoilia bay, Peloponnese, uninhabited in modern times; in the second, by locating destroyed architectural assets of the Byzantine and Post-Byzantine/Ottoman periods within the dense urban web of the modern city of Ioannina, Epirus. In both cases the cultural relics were documented qualitatively and quantitatively, and the research products were disseminated to the scientific community and the wider public. Both sites are located in Greece.
Concerning the first case study, Voidokoilia bay, a natural landscape under a NATURA protection framework: the combined digital processing of remotely sensed imagery of a possibly archaeological area of about 5,000 m², namely air photographs and IKONOS satellite imagery from consecutive periods following the excavation phases of the related, nearby excavated archaeological area of about 800 m², revealed linear traces and microtopographic relief shadow marks. These were further interpreted in relation to multiple components, such as climatic and microclimatic conditions, geological and geomorphological data, flora, archaeological, architectural and social data, and the settlement networks and trade route typology of the Bronze Age. Ground survey and geophysical prospection of the area of interest ultimately indicated a strong likelihood that the aforementioned traces reveal buried archaeological relics of an EH II-MH I-LH settlement, i.e. c. 2800 BC - c. 1100 BC. Regarding the second case study, the city of Ioannina, a human-constructed environment originating in the 5th century BC and still uninterruptedly inhabited: digital processing of geospatial data, such as terrestrial and aerial photographs, satellite imagery, architectural plan views and topographic maps, combined with artistic depictions such as engravings, paintings and postcards, as well as historiographic, bibliographic and archaeological data, and architectural data on Byzantine Christian churches and monasteries, Muslim mosques and the secular architecture of the respective periods and cultures, has led to the accurate location, form and metrics of the city's lost landmarks. The comparative scientific processing of the two distinct case studies highlights the potentials and limitations of photointerpretation and remote sensing combined with GIS technology for natural and urban environments.
Nevertheless, the main objectives of both research projects are to document the alterations of the environment, whether natural or urban, over time, and to propose a holistic cultural and natural heritage remote management model. The ultimate goal is to raise public awareness and achieve social inclusion in matters of natural and cultural heritage protection and preservation, in the framework of the Faro Convention on the Value of Cultural Heritage for Society (Council of Europe, 2005), which proposes a more comprehensive and holistic view of cultural and natural heritage and emphasizes the important role of people's participation and engagement in cultural procedures, showing the way to public archaeology. From this perspective, the involvement of secondary school students is considered crucial: consequently, user-friendly platforms combined with open-data/open-access procedures form an additional scientific axis of both research projects, enabling continuous monitoring of the cultural and natural environment through satellite data from open platforms such as ESA's EO Browser or NASA's Worldview.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Automatic Detection of Tell Sites in Central Iraq Using Machine Learning on Open Access Satellite Synthetic Aperture Radar Imagery

Authors: Elena Chiricallo, Giulio Poggi, Sebastiano Vascon, Arianna Traviglia
Affiliations: Dipartimento di Scienze Ambientali, Informatica e Statistica, Ca' Foscari University Of Venice, Center for Cultural Heritage Technology, Istituto Italiano di Tecnologia
Despite the established contribution of Synthetic Aperture Radar (SAR) to studying and monitoring archaeological landscapes [1-11], there is a noticeable lack of automatic analysis of SAR imagery through Artificial Intelligence (AI) techniques in archaeological applications. In contrast to optical and LiDAR data, for which AI applications have provided significant advancements in the automatic identification of archaeological objects and sites [12-20], SAR data is still mostly analyzed through manual visual inspection. The potential of AI for identifying archaeological structures across extensive geographic areas therefore remains largely unexplored in the SAR domain. In this study, a new Machine Learning pipeline for the automatic detection and segmentation of tell sites in Central Iraq on satellite SAR data has been developed and tested. Tells are archaeological settlement mounds peculiar to Near and Middle Eastern landscapes, dating from the Neolithic period to the Bronze Age. Owing to their pronounced topographic elevation, manual visual inspection of satellite SAR imagery and Interferometric SAR Digital Elevation Models (InSAR DEMs) has been successfully used to map tells in remote and hazardous regions [5-6], where on-site inspections are impractical due to political instability and ongoing conflicts, and where the use of LiDAR scanning from drone and airborne sensors is restricted. However, these approaches proved time-consuming when analyzing extensive areas, given the large number of features scattered across the landscape. Machine Learning-based approaches can help speed up the screening process and improve detection accuracy across large geographical areas.
The pipeline employs a state-of-the-art supervised deep learning (DL) method for instance segmentation on global-coverage, open-access, medium-resolution SAR products from Copernicus Sentinel and Third-Party Missions (i.e., Sentinel-1 Interferometric Wide Swath Mode (IW) Ground Range Detected (GRD) and the Copernicus Global 30-meter Digital Elevation Model (COP-30 DEM)). The model is trained and validated on a dataset of annotated tell sites created from two available datasets in the Southern Mesopotamian floodplain, in Iraq, conceived for archaeological purposes [21-22]. The process involved data cleaning and refinement through visual inspection of the COP-30 DEM and a Sentinel-1 image composite, resulting in over 2700 features spanning an area of about 66,000 km². The results show that the model accurately identifies tell sites on SAR products from Copernicus Sentinel and Third-Party Missions, enabling national-scale analysis. The proposed workflow is transferable to other study areas in the Near and Middle Eastern landscape, and systematic mapping through DL offers valuable insights into the emergence, development, and organization of early complex human societies. In addition, it supports their preservation against threats such as looting and large-scale alteration of the landscape resulting from destructive anthropogenic activities such as agriculture, urbanization, and climate change. [1] Cigna, Francesca, et al. "Amplitude change detection with ENVISAT ASAR to image the cultural landscape of the Nasca region, Peru." Archaeological Prospection 20.2 (2013): 117-131. [2] Cigna, Francesca, et al. "Persistent scatterer interferometry processing of COSMO-SkyMed StripMap HIMAGE time series to depict deformation of the historic centre of Rome, Italy." Remote Sensing 6.12 (2014): 12593-12618. [3] Tapete, Deodato, Francesca Cigna, and Daniel NM Donoghue.
"‘Looting marks’ in spaceborne SAR imagery: Measuring rates of archaeological looting in Apamea (Syria) with TerraSAR-X Staring Spotlight." Remote Sensing of Environment 178 (2016): 42-58. [4] Tapete, Deodato, and Francesca Cigna. "COSMO-SkyMed SAR for detection and monitoring of archaeological and cultural heritage sites." Remote Sensing 11.11 (2019): 1326. [5] Tapete, Deodato, et al. "Regional-scale systematic mapping of archaeological mounds and detection of looting using COSMO-SkyMed high resolution DEM and satellite imagery." Remote Sensing 13.16 (2021): 3106. [6] Tapete, Deodato, and Francesca Cigna. "Detection, Morphometric Analysis and Digital Surveying of Archaeological Mounds in Southern Iraq with CartoSat-1 and COSMO-SkyMed DEMs." Land 11.9 (2022): 140. [7] Cigna, Francesca, et al. "Exploiting satellite SAR for archaeological prospection and heritage site protection." Geo-spatial Information Science (2023): 1-26. [8] Linck, Roland, et al. "Possibilities of archaeological prospection by high‐resolution X‐band satellite radar–a case study from Syria." Archaeological Prospection 20.2 (2013): 97-108. [9] Dore, Nicole, et al. "New research in polarimetric SAR technique for archaeological purposes using ALOS PALSAR data." Archaeological Prospection 20.2 (2013): 79-87. [10] Chen, Fulong, Rosa Lasaponara, and Nicola Masini. "An overview of satellite synthetic aperture radar remote sensing in archaeology: From site detection to monitoring." Journal of Cultural Heritage 23 (2017): 5-11. [11] Paillou, Philippe. "Mapping palaeohydrography in deserts: contribution from space-borne imaging radar." Water 9.3 (2017): 194. [12] Menze, Bjoern H., and Jason A. Ur. "Mapping patterns of long-term settlement in Northern Mesopotamia at a large scale." Proceedings of the National Academy of Sciences 109.14 (2012): E778-E787. [13] Verschoof-Van der Vaart, Wouter Baernd, and Karsten Lambers. 
"Learning to look at LiDAR: The use of R-CNN in the automated detection of archaeological objects in LiDAR data from the Netherlands." Journal of Computer Applications in Archaeology 2.1 (2019). [14] Fiorucci, Marco, et al. "Machine learning for cultural heritage: A survey." Pattern Recognition Letters 133 (2020): 102-108. [15] Somrak, Maja, Sašo Džeroski, and Žiga Kokalj. "Learning to classify structures in ALS derived visualizations of ancient Maya settlements with CNN." Remote Sensing 12.14 (2020): 2215. [16] Orengo, Hector A., et al. "Automated detection of archaeological mounds using machine learning classification of multisensor and multitemporal satellite data." Proceedings of the National Academy of Sciences 117.31 (2020): 18240-18250. [17] Fiorucci, Marco, et al. "Deep learning for archaeological object detection on LiDAR: New evaluation measures and insights." Remote Sensing 14.7 (2022): 1694 [18] Guyot, Alexandre, Marc Lennon, and Laurence Hubert-Moy. "Objective comparison of relief visualization techniques with deep CNN for archaeology." Journal of Archaeological Science: Reports 38 (2021): 103027. [19] Kokalj, Žiga, et al. "Machine learning-ready remote sensing data for Maya archaeology." Scientific Data 10.1 (2023): 558. [20] Casini, Luca, et al. "A human–AI collaboration workflow for archaeological sites detection." Scientific Reports 13.1 (2023): 8699. [21] Pedersén, O. Ancient Near East on Google Earth: Problems, Preliminary Results, and Prospects. In Proceedings of the 7th International Congress on the Archaeology of the Ancient Near East, London, UK, 12–16 April 2010; pp. 385–393. [22] Marchetti, N., et al. "FloodPlains. Developing a public archaeological WebGIS for the Southern Mesopotamian alluvium." Proceedings of the 12th International Congress on the Archaeology of the Ancient Near East. 2021; pp. 921-933.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Enhancing Digital Geomedia Capabilities for UNESCO-designated sites: A Comprehensive Needs Assessment and Evaluation of Pilot Training Courses

Authors: Maike Petersen, Kristina Hild, Alexander Siegmund
Affiliations: Heidelberg University of Education, Institute of Geography and Geocommunication, Heidelberg University, Institute of Geography & Heidelberg Center for the Environment (HCE)
UNESCO-designated sites require special protection due to their outstanding universal value. The use of digital geomedia, such as remote sensing, geographic information systems (GIS), and mobile geotools, holds great potential for monitoring, modeling, and visualization. Well-designed and appropriate capacity development projects that incorporate the actual needs of stakeholders at UNESCO-designated sites are essential for the sustainable and effective use of digital geomedia. These projects aim to support individuals and interest groups, empowering them to strengthen local capacities. To this end, an international needs assessment was conducted to determine the requirements and existing knowledge of stakeholders at UNESCO-designated sites. The survey, distributed via email, received responses from 134 participants worldwide. The results indicate that while stakeholders are employing digital geomedia to some extent, there is significant demand for additional training focused on specific use cases, such as vegetation mapping or advanced software instructions, rather than introductory content. Participants identified equipment and financial resources as the primary constraints in using remote sensing, with data acquisition and theoretical knowledge being additional barriers. Furthermore, two pilot courses were conducted in Malawi and Costa Rica. These courses were evaluated before and after the training sessions, as well as through a follow-up survey 18 months later, using a mixed-methods approach. The evaluation demonstrated an increase in participants’ perceived knowledge across all areas of digital geomedia, with the highest level of improvement observed immediately after the courses. Sustained knowledge gains and active application of skills were reported 18 months later. The project, along with its subsequent evaluation, provides valuable insights into the work processes and needs of stakeholders at UNESCO-designated sites. 
Building on these findings, a more needs-oriented approach can be developed to support the sustainable management of such sites. Based on the knowledge gained from the needs assessment and the pilot courses, future projects can be designed to further enhance the use of digital geomedia. On a larger scale, these initiatives aim to increase stakeholders' confidence in using digital geomedia, enabling them to work independently and reducing their reliance on external assistance. This autonomy strengthens resilience to environmental and social changes on ecological, economic, and societal levels.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: C.02.04 - POSTER - Small Earth Science Missions

This session will address missions under development as well as proposals for future missions based on small and nano satellites whose objective is to achieve innovative returns in the fields of Earth science. Emphasis will be given to results achieved, to mission concepts for new and innovative EO techniques, and to ideas for the exploitation of science products from small Earth science missions under development.

Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: NanoMagSat: a 3x16U satellite constellation optimised for improving the monitoring and investigation of Earth’s magnetic field and ionospheric environment

Authors: Florian DECONINCK, Gauthier Hulot, Pierdavide Coisson, Jean-Michel Leger, Thomas Jager, Jean-Pascal Lejault, Massimiliano Pastena, Pepe Nieto, Lasse Clausen, John Leif Jørgensen, Maria Vallmitjana, Irina Babkina
Affiliations: Open cosmos Ltd., Université Paris Cité, Institut de Physique du Globe de Paris, CNRS, Université Grenoble Alpes, CEA, Leti, European Space Agency - ESTEC, COMET-Aerospace, University of Oslo, Technical University of Denmark - DTU Space, Open Cosmos Europe
The geomagnetic field has been monitored continuously since 1999 by low-Earth-orbiting satellites with masses ranging from 61 kg (Ørsted) to around 500 kg (CHAMP and Swarm). The main challenges of these missions are the magnetic cleanliness of the satellite and achieving spatio-temporal coverage from non-sun-synchronous orbits. The small satellite industry is evolving rapidly, with more capable satellite platforms (CubeSats and microsatellites up to 300 kg), more diverse launch services (dedicated, rideshare, orbital transfer vehicle) and simplified operations (through the use of existing ground station networks), all at reduced cost. This allows new families of science missions to be designed with higher temporal resolution and different programmatic approaches. The ESA Scout programme provides an ideal framework for such missions. Here we report on the NanoMagSat ESA Scout mission, which was kicked off on November 28, 2024. NanoMagSat is a constellation optimised to provide data with fast local-time and geographical coverage for monitoring and investigating Earth's magnetic field and ionospheric environment. The orbits, at 545 km initial altitude, comprise two 60°-inclined orbits offset in RAAN by 90° and a third orbit at 87° inclination for coverage of the poles and coverage synergy with Swarm, should it still be operational. This enables a revisit of a mesh grid of [±6° long.; ±6° lat.; ±1.5 h local time] within 1 to 3 months for all latitudes within ±60°, and within less than 6 months for higher latitudes. The 3x16U satellites are optimised for diverse measurements and low magnetic noise. The payload suite features a Miniaturised Absolute Magnetometer (MAM) co-located with two star trackers (STR) on an ultra-stable optical bench at the end of a deployable 3 m boom, a High Frequency Magnetometer (HFM) located at half-boom, a multi-Needle Langmuir Probe (m-NLP) deployed on the ram face, and two GNSS receivers for precise orbit determination (POD), Total Electron Content (TEC) and ionospheric radio-occultation studies.
The 16U platform features a gravity-gradient Attitude and Orbit Control System (AOCS) and subsystems (particularly the Electric Power Supply) optimised for a low magnetic signature. Telemetry and telecommand (TMTC) uses an S-band link, while data downlink uses X-band. A network of ground stations at both polar and mid-latitudes will ensure the download of ~5 GB/satellite/day. This presentation will outline the programmatic and engineering aspects of the mission, highlighting the possibilities offered by recent evolution in the small satellite industry and concluding with the potential for scaling and improving this end-to-end system.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: The Twin Anthropogenic Greenhouse Gas Observers Mission

Authors: Dr Jochen Landgraf, Pepijn Veefkind, Antje Ludewig, Benjamin Leune, Edward van Amelrooy, Ryan Cooney, Paul Tol, Ruud Hoogenveen, Tobias Borsdorff, Raul Laasner, Zeger de Groot, Lisa de Backer, James Day, Nurcan Alpay Koc, Bryan de Groeij, Hugo Denier van der Gon
Affiliations: Sron Netherlands Institute For Space Research, Royal Netherlands Meteorological Institute (KNMI), TNO Netherlands Organisation for Applied Scientific Research, ISISPACE Innovative Solutions in Space
The Twin Anthropogenic Greenhouse Gas Observers (TANGO) mission is a new ESA mission that will be realized within the SCOUT programme, with an envisaged launch in 2028. Complementing the Copernicus atmospheric monitoring missions Sentinel-5 Precursor, Sentinel-4/5, and the CO2M carbon dioxide monitoring mission, TANGO will measure carbon dioxide and methane emissions from human activity to help verify the Paris Agreement. TANGO will provide about 10,000 emission estimates per year from large industrial facilities and power plants. It will deliver high-resolution images of emission plumes with sufficient accuracy to determine emissions from single observations. The mission will comprise two 25-kg 16U satellites, TANGO-Carbon and TANGO-Nitro, orbiting in tandem: one configured to measure methane and carbon dioxide, and one to measure nitrogen dioxide to support plume and emission characterization. TANGO will pave the way to innovative high-resolution spectral imaging of greenhouse gas (GHG) emissions based on European CubeSat technology. In this contribution, we will present the concept and status of the TANGO mission. CO₂ is emitted into the atmosphere through the combustion of fossil fuels (coal, natural gas, and oil) and solid waste, and through certain chemical reactions (e.g., cement manufacturing). It is removed from the atmosphere when it is taken up by the biosphere as part of the biological carbon cycle and absorbed by the oceans. Ideally, the carbon cycle would keep atmospheric CO₂ in balance between its different reservoirs, but human activity has driven a steady increase in CO₂ for decades. CH₄ is the second most important greenhouse gas, with anthropogenic emissions from livestock, the oil and gas industry, coal mines, and waste treatment. Its atmospheric lifetime is 8-10 years and it has a global warming potential of 28-36 times that of CO₂.
To predict the impact of the recent increase of CO₂ and CH₄ concentrations on near-surface temperature, and to evaluate the potential effect of possible climate change mitigation measures, Earth's complete climate system needs to be better understood, including the cooling effect of aerosols and complex feedback mechanisms. A large fraction of anthropogenic GHG emissions is due to localized point sources, which cause distinct plume signatures in atmospheric abundance on top of a large background concentration. To derive emission estimates from plume observations, accurate and precise measurements of the total column amount of the GHGs are required; the coarser the spatial resolution of the sensor, the more precise and accurate these measurements must be. From this perspective, high-resolution sensors are favorable, though they come with the challenge of providing sufficient data coverage. This leads to an exciting synergy between global survey missions like Sentinel-5 and CO2M, with spatial resolutions of 7×7 km² and 2×2 km², and a targeted mission like TANGO, which measures CO₂ and CH₄ in preselected target areas of 30×30 km² but with a spatial resolution 50 times higher than CO2M. TANGO-Carbon senses solar-reflected radiances in the 1.6 µm spectral range with a spectral resolution of 0.45 nm to detect moderate to strong emissions of CH₄ (≥5 kt/yr) and CO₂ (≥2.5 Mt/yr). The TANGO-Nitro instrument yields collocated NO₂ observations from radiance measurements in the visible spectral range with a spectral resolution of ≤0.6 nm, supporting plume detection and exploiting the CO₂/NO₂ ratio for emission characterization. The TANGO spectrometers are carried by two agile satellite buses flying in close formation with a time difference of less than 1 minute. Platform agility is achieved by three-axis attitude control based on reaction wheel actuation.
This allows flexible pointing of the spectrometer over a roll range of ±30 degrees and forward motion compensation that increases the integration time by a factor of up to 5, resulting in better data coverage and observation precision. As part of the mission implementation, a ground segment will be established, providing the scientific user community with open and free data, including calibrated radiance measurements (level-1b data), the dry-air mole fraction column densities XCO₂ and XCH₄, and tropospheric NO₂ columns (level-2 data). Furthermore, CO₂, CH₄, and NO₂ emission estimates will be provided operationally for each successful target observation (level-4 data).
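Turning column observations of a plume into an emission rate (level-4 data) is commonly done with an integrated mass enhancement (IME) approach. The sketch below illustrates the idea only; it is not necessarily TANGO's operational algorithm, and the plume mask, wind speed and effective plume length are assumed inputs:

```python
import numpy as np

def ime_emission(enhancement, pixel_area, wind_speed, plume_length):
    """Integrated Mass Enhancement estimate: Q = (U / L) * sum(dX * A),
    where dX is the column enhancement above background (kg/m^2), A the
    pixel area (m^2), U the wind speed (m/s) and L an effective plume
    length (m). Returns the emission rate in kg/s."""
    ime = np.nansum(np.asarray(enhancement, dtype=float) * pixel_area)  # kg in plume
    return wind_speed / plume_length * ime

# Synthetic example: 10x10 plume pixels, 0.1 g/m^2 enhancement each,
# 60 m pixels, 5 m/s wind, 1 km effective plume length
enh = np.full((10, 10), 1e-4)
q = ime_emission(enh, pixel_area=3600.0, wind_speed=5.0, plume_length=1000.0)  # ~0.18 kg/s
```

The dominant uncertainty in such estimates typically comes from the wind speed, which is why co-located NO₂ observations and plume-shape information are valuable constraints.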
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: C.06.02 - POSTER - Advances in Calibration and Product Validation for Optical Sensors

Accurate calibration and validation lie at the heart of any optical satellite sensor system and are of paramount importance for delivering the high-quality satellite products used, in exponentially growing volumes, for operational environmental monitoring as well as for long-term climate research applications. Calibration may be achieved by dedicated on-board systems or by independent vicarious calibration targets, and sensor intercalibration and collaboration between satellite operators have become essential to provide such products to the optical satellite user community. In this context, the need for common reference sources and protocols has been recognized, as well as mission scenarios such as dedicated tandem campaigns to achieve sensor intercalibration and bias assessments. Moreover, precise calibration is a prerequisite for high-quality geophysical Level-2 parameters retrieved from satellite data, and combining independent validation efforts of these with sensor calibration is an essential activity to generate accurate geophysical parameter maps at satellite scale. In the context of the CEOS Working Group on Calibration and Validation (WGCV) Land Product Validation (LPV) efforts, ESA has led and collaborated on many such activities, establishing the concept of ground-based Fiducial Reference Measurements and LPV Supersites as a firm reference for ground-based validation of satellite measurements. The aim is to provide well-calibrated and validated Level-1 and Level-2 products to the satellite data user community at large, including uncertainty budgets traceable to SI and cal/val protocols. This session will provide an overview of the latest state-of-the-art cal/val activities for optical sensors, including novel calibration approaches and new ground-based validation networks used for land product validation.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Radiometric performance of the optical sensors of Copernicus Sentinel-2 and Sentinel-3 constellations using vicarious methods

Authors: Bahjat Alhammoud, Louis Rivoire, Ludovic BOURG, Silvia Enache, Pierdomenico Paolino, Sebastien Clerc, Florian Poustomis, Claire Henocq, Jérôme BRUNIQUEL, Valentina Boccia, Steffen DRANSFELD, Véronique Bruniquel
Affiliations: Acri-st, CS-Group, Leonardo S.P.A., ESA/ESRIN
As part of the Copernicus programme of the European Commission, the European Space Agency (ESA) developed and operates the Sentinel-2 constellation (S2A, S2B and S2C) and, in cooperation with EUMETSAT, who operates it, the Sentinel-3 constellation (S3A, S3B). Both are Earth Observation optical missions: the MultiSpectral Instrument (MSI) is carried on board the Sentinel-2 mission, while the Ocean and Land Colour Instrument (OLCI) and the Sea and Land Surface Temperature Radiometer (SLSTR) are on board the Sentinel-3 mission. This presentation provides a status of the radiometric validation activities for these sensors, performed by the Copernicus Sentinel Optical Imaging Mission Performance Cluster (OPT-MPC). Nominal calibration of the reflective channels is based on the exploitation of on-board sun diffuser acquisitions. The advantage of the on-board sun diffuser is to provide the instrument with a very uniform and well-known signal, allowing very accurate absolute and relative radiometric calibration. The aim of the validation activities is thus to assess the radiometric performances related to image quality requirements: absolute radiometric uncertainties, multi-temporal and inter-band relative radiometric uncertainties, instrument response non-uniformity, signal-to-noise ratio (SNR), etc. Vicarious methods (e.g. Rayleigh scattering, sun glitter, desert Pseudo-Invariant Calibration Sites (PICS), the Lunar Irradiance Model ESA (LIME) and Deep Convective Clouds (DCC)) are used for absolute radiometry validation (Gascon et al., 2017) and for multi-temporal and inter-band relative radiometry validation (Lamquin et al., 2019). Absolute radiometric performance is also assessed using ground-based instrumented sites (e.g. RadCalNet) (Alhammoud et al., 2019). In addition, cross-mission inter-comparisons are systematically performed (Rivoire et al., 2024).
The results show good radiometric performance for both missions' products, thanks to a robust in-flight calibration strategy. The cross-mission inter-comparison results show good stability of the sensors, although SLSTR-A and -B show slight positive trends. MSI and SLSTR pairs agree to about 1%, while OLCI-A is brighter than OLCI-B by up to 2% and MSI-C is brighter than MSI-A by about 1.5% over the VNIR spectral range.

Keywords: Radiometric Vicarious Calibration and Validation, Optical sensors, Cross-mission inter-calibration, Copernicus Sentinel Missions.

References:
- F. Gascon, C. Bouzinac, O. Thépaut, M. Jung, B. Francesconi, J. Louis, V. Lonjou, B. Lafrance, S. Massera, A. Gaudel-Vacaresse, F. Languille, B. Alhammoud, F. Viallefont, B. Pflug, J. Bieniarz, S. Clerc, L. Pessiot, T. Trémas, E. Cadau, R. De Bonis, C. Isola, P. Martimort, and V. Fernandez, “Copernicus Sentinel-2A Calibration and Products Validation Status,” Remote Sensing, vol. 9, no. 6, p. 584, Jun. 2017. http://dx.doi.org/10.3390/rs9060584
- N. Lamquin, E. Woolliams, V. Bruniquel, F. Gascon, J. Gorroño, Y. Govaerts, V. Leroy, V. Lonjou, B. Alhammoud, J. A. Barsi, J. S. Czapla-Myers, J. McCorkel, D. Helder, B. Lafrance, S. Clerc, and B. N. Holben, “An inter-comparison exercise of Sentinel-2 radiometric validations assessed by independent expert groups,” Remote Sensing of Environment, vol. 233, 2019. https://doi.org/10.1016/j.rse.2019.111369
- B. Alhammoud, J. Jackson, S. Clerc, M. Arias, C. Bouzinac, F. Gascon, E. G. Cadau, R. Q. Iannone, and V. Boccia, “Sentinel-2 Level-1 Radiometry Assessment Using Vicarious Methods From DIMITRI Toolbox and Field Measurements From RadCalNet Database,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 9, pp. 3470-3479, Sept. 2019. doi: 10.1109/JSTARS.2019.2936940
- L. Rivoire, S. Clerc, B. Alhammoud, F. Romand, and N. Lamquin, “Inter-Sensor Level 1 Radiometric Comparisons Using Deep Convective Clouds,” Remote Sensing, vol. 16, 4445, 2024. https://doi.org/10.3390/rs16234445
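The multi-temporal stability checks described above ultimately reduce to fitting a drift to a reflectance time series acquired over a radiometrically stable site (e.g. a desert PICS) and expressing the slope as a percentage per year, as with the slight SLSTR trends noted here. A minimal numpy sketch on synthetic data (the drift, noise level and sampling cadence are hypothetical illustration values, not mission results):

```python
import numpy as np

# Synthetic PICS time series: TOA reflectance over a stable desert site,
# one observation every 10 days for 5 years, with a small sensor drift.
# Drift, noise and reflectance level are hypothetical illustration values.
rng = np.random.default_rng(42)
t_years = np.arange(0.0, 5.0, 10.0 / 365.25)     # decimal years since launch
true_drift = 0.002                               # reflectance units per year
rho = 0.45 + true_drift * t_years + rng.normal(0.0, 0.005, t_years.size)

# Least-squares linear trend; report drift in percent per year w.r.t. the mean.
slope, intercept = np.polyfit(t_years, rho, 1)
drift_pct_per_year = 100.0 * slope / rho.mean()
print(f"fitted drift: {drift_pct_per_year:.2f} %/year")
```

In practice the slope would be tested against its standard error before declaring a trend significant; the sketch keeps only the fitting step.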

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Compressive Sampling for Geometric Correction of High Frequency Attitude Perturbations

Authors: Alicia Pelat, Kévin Kheng
Affiliations: Magellium
Remote-sensing satellites provide high-resolution images widely used in Earth Observation applications. Accurate knowledge of their position, orientation and other physical parameters is essential to ensure the best possible image quality. However, these parameters are measured at a given frequency and with some noise limiting their precision. Further geometric correction using exogenous data is therefore needed to meet absolute location requirements and to ensure geometric coherence within the image. The most critical error lies in the modelling of the satellite's attitude. The orientation angles are estimated by navigation components, but residual errors must be corrected a posteriori to achieve the best possible performance. In particular, satellites are subject to high-frequency attitude jitter, which poses a challenging problem. These effects are of increasing importance, both because of the drive towards ever finer spatial resolution and because of the wish to reduce the cost of high-precision star trackers or gyroscopes while preserving image quality. Satellite image geometry usually relies on a rigorous sensor model which describes the relationship between every 2D image point and a 3D ground point. This requires the best possible knowledge of the satellite's physical parameters and those of the sensor (acquisition time, detector lines of sight, optics mechanisms…), and can be improved by including an accurate digital elevation model (DEM) and ground control points (GCPs) to greatly enhance its quality. One classical method, called refining, is thus based on space triangulation between the locations of recognised points in the image and their ground coordinates. The overall process relies on several steps: (i) image correlation, to identify points in the acquisition and their counterparts in a reference with high location precision (GCPs or a high-resolution ortho-image).
(ii) Optimisation of the physical parameters, and of the parameters of functions modeling the expected perturbations, in order to minimize the ground- and image-location residuals between the GCPs and their estimates from the geometric model (bundle adjustment). (iii) Orthorectification to account for terrain deformations. However, the refining method cannot, for instance, determine jitter frequencies without prior knowledge, and thus cannot correct high-frequency perturbations. Another method, using Compressive Sampling (CS), was applied by Wang et al. (2015) to compensate for the influence of attitude jitter modeled by two image-based distortion signals. This was in the context of low-frequency attitude jitter, and only with the knowledge of a rational function model (RFM) providing the image location. The problem of high-frequency attitude jitter has received little attention in previous research. We therefore propose a method to correct a rigorous sensor model for these perturbations using Compressive Sampling. We study the particular case of a pushbroom sensor, which is widely used in remote-sensing imagery and requires precise attitude knowledge. We use compressive sampling to retrieve an error signal between the image locations predicted for the GCPs and their identified locations in the distorted image. This signal is composed of two independent components, a row-shift and a column-shift. The recovered signals are made sparse using the Fourier transform, and the associated CS problem is solved by the Orthogonal Matching Pursuit (OMP) algorithm. Using the inverse Fourier transform, the distortion signal can then be reconstructed. The image location can thus be corrected and provided to the rigorous sensor model, designed with the geometric library LibGeo, which greatly improves the image's geometric quality.
The experimental framework is based on a remote-sensing satellite simulator, so the perturbations affecting the satellite attitude are known and controlled. A set of GCPs is also simulated using the true rigorous sensor model. Finally, the performance of the geometric correction is evaluated by measuring the location accuracy against the true original locations. We studied the influence of the number of GCPs available, their precision and the jitter's frequency. Moreover, some experiments are also conducted on GCPs obtained by image correlation algorithms using a high-resolution ortho-image as reference. These correlation methods are subject to outliers and more generally limited by the algorithm's precision, but they provide a more realistic case study. The CS method clearly outperforms the classical approach at high frequencies while showing similar robustness with respect to the number of GCPs needed. Besides, CS retrieves distortion signals precisely, and is therefore sensitive to the measurement precision if no regularization method is included. Furthermore, the reconstruction only requires the signals to be sparse in a transform basis instead of specifying a model for the perturbation function. Specifying a model limits the signal complexity and only allows recovering expected perturbations; the sparsity hypothesis is thus a significant advantage for geometric correction. Moreover, several CS reconstructions could be applied successively, using different sparsifying transforms, so as to identify several complex components of the signal. This study demonstrates very promising results for the application of compressive sampling methods to the correction of high-frequency attitude jitter in physics-based geometric models.
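The recovery step described above — an error signal sampled only at the GCP rows, made sparse in a harmonic basis, solved by OMP, and inverted to reconstruct the full distortion signal — can be sketched as a toy example. Note the assumptions: a DCT basis stands in for the Fourier transform used in the study, and the jitter signal, number of GCPs and sparsity level are all hypothetical:

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select dictionary atoms and
    re-fit the coefficients on the selected support by least squares."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

n = 512                                    # image rows along the pushbroom track
t, k = np.arange(n)[:, None], np.arange(n)[None, :]
D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * t + 1) * k / (2 * n))
D[:, 0] /= np.sqrt(2.0)                    # columns of D: orthonormal DCT atoms

# 2-sparse "jitter" row-shift signal (pixels), built from two DCT atoms.
coef_true = np.zeros(n)
coef_true[60], coef_true[135] = 0.8, 0.5
jitter = D @ coef_true

# GCPs give the shift only at a random subset of rows (the CS measurements).
rng = np.random.default_rng(1)
rows = np.sort(rng.choice(n, size=128, replace=False))
shifts_at_gcps = jitter[rows]

# Recover the sparse coefficients from the subsampled dictionary, then
# rebuild the full distortion signal for every image row.
x_hat = omp(D[rows, :], shifts_at_gcps, n_nonzero=2)
jitter_hat = D @ x_hat
print("max reconstruction error:", np.max(np.abs(jitter_hat - jitter)))
```

Because the signal is exactly sparse in the chosen basis and the GCP rows are incoherent with the DCT atoms, OMP recovers the distortion at every row, including rows with no GCP — the property exploited for high-frequency jitter correction.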

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: A novel automated field spectrometer system to exploit the near-infrared

Authors: Paul Naethe, Tommaso Julitta, Mitchell Kennedy, Roberto Colombo, Claudia Ravasio, Roberto Grazonio, Biagio Di Mauro, Paolo Villa, Petya Campbell, Andreas Burkart
Affiliations: JB Hyperspectral Devices, University of Milano Bicocca, National Research Council, ISP, National Research Council, IREA, NASA GSFC
The near-infrared (NIR) part of the electromagnetic spectrum carries valuable information for optical remote sensing of vegetation, snow and hydrology. Various satellite missions cover at least part of this spectral range for observation from space. So far, however, the ground segment has lacked commercially available, automated field spectrometer systems that record hyperspectral time series in the NIR. With a new generation of upcoming hyperspectral satellites in mind, future calibration and validation efforts would greatly benefit from such automated devices for ground reference, which are also suitable for ground-based scientific applications in their own right. We present the NIR Reflectance boX (NoX), an automatic field spectrometer system developed by JB Hyperspectral Devices, Düsseldorf, Germany, to address this gap. The NoX measures upwelling and downwelling radiance from the visible to the NIR region (400 nm to 1700 nm), making it a versatile instrument for optical environmental monitoring. Its low power consumption, combined with a robust, weather-resistant casing, makes it ideal for installation in remote and harsh environments. The NoX is suitable for a range of applications, including vegetation studies, snow cover analysis, hydrology and aquatic plant research, where reflected light in the NIR range provides valuable information about the target that is not captured by the visible part of the spectrum alone. We present first results from field studies using NoX instruments and highlight their consistency over extended periods. The technical capabilities of the NoX are demonstrated, and its broader applications in environmental monitoring are shown with data from long-term field studies, some gathering dense time series over multiple years.
Furthermore, we showcase the potential of the novel instrument for the validation of hyperspectral satellite observations, a capability so far unprecedented for automated field spectrometers in the NIR spectral range.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Validation Of A Protocol For UAV-Based Surface BRF Retrieval

Authors: Dr Vincent Leroy, Dr Yves Govaerts, Thibaut Lamon, Dr Marta Luffarelli, Harry Morris, Niall Origo, Vincent Piret, Morven Sinclair
Affiliations: Rayference, Aeriane, National Physical Laboratory
The reflection of light at a surface is described by its bidirectional reflectance factor (BRF). The BRF is an intrinsic property of the surface, relevant for extracting information about its state – e.g. structural parameters of soil or vegetation. However, measuring the BRF of a surface is a challenging exercise: BRF measurement protocols require a collimated light source, which can be achieved in a laboratory but not in open-air conditions, where the surface of interest is illuminated by the Sun (direct component) and the atmosphere (diffuse component). BRF measurement also requires a collimated sensor, which is hard to achieve due to sensor field of view and pointing accuracy. In practice, bottom-of-atmosphere (BOA) field measurements can only give access to the hemispherical-conical reflectance factor (HCRF). Inferring the BRF from the HCRF requires “removing” atmospheric contributions to the downwelling sky radiance field. Various approaches have been proposed for this atmospheric correction, but their validation has been impossible due to the lack of independent reference BRF records to compare against. We propose a validation protocol based on the use of a 5 m × 5 m artificial target. This object is characterized geometrically and optically, and its true BRF is obtained using the Eradiate 3D radiative transfer model. In prior work, Eradiate was demonstrated to be suitable for BRF simulation with an accuracy of 2% by comparison with SI-traceable laboratory measurements. Multi-angular HCRF records are acquired using an unmanned aerial vehicle (UAV) in the VIS/NIR spectral region. Atmospheric effects are removed using the CISAR inversion algorithm. The retrieved surface BRF is compared with the simulated BRF reference, accounting for their respective uncertainties. This protocol should contribute to improving the validation of surface BRF products. A first implementation is under way as part of the FRM4Veg 2 project.
In this presentation, we introduce the protocol, the target design and manufacturing process, and discuss preliminary results.
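The atmospheric-correction step at the heart of this protocol — “removing” the diffuse sky contribution from the measured HCRF — can be illustrated with a deliberately simplified forward model in which the diffuse illumination is isotropic: HCRF ≈ (1 − f)·BRF + f·ρ̄, where f is the diffuse fraction of the downwelling irradiance and ρ̄ the hemispherically averaged reflectance. CISAR solves a far more complete radiative-transfer problem; the numbers below are hypothetical:

```python
# Simplified HCRF -> BRF inversion under an isotropic diffuse-sky assumption.
# All numbers are hypothetical illustration values, not FRM4Veg 2 results.

hcrf_measured = 0.42   # UAV measurement for one sun/view geometry
f_diffuse = 0.15       # diffuse fraction of downwelling irradiance (from an RT model)
rho_hemi = 0.38        # hemispherically averaged surface reflectance (estimated)

# Forward model: hcrf = (1 - f) * brf + f * rho_hemi, solved for brf.
brf = (hcrf_measured - f_diffuse * rho_hemi) / (1.0 - f_diffuse)
print(f"direct-beam BRF estimate: {brf:.4f}")
```

The sketch shows why the diffuse fraction must come from an external radiative-transfer calculation: with f unknown, the single HCRF measurement cannot be split into direct and diffuse parts.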

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: HyperCrop monitoring station: a prototype of automated robotic system for field spectroradiometric measurements

Authors: Alberto Crema, Lorenzo Parigi, Alessio Prini, Tommaso Julitta, Dr. Gabriele Candiani, Dr. Mirco Boschetti
Affiliations: CNR-IREA, CNR-STIIMA, JB Hyperspectral Devices GmbH
The acquisition of field data is crucial for the remote-sensing estimation of crop traits, especially when using hyperspectral imagery. It provides essential information on crop physiological condition and on the sensed reflectance. This ground-truth data is necessary to i) further understand plant behavior and investigate spectral diagnostic features, ii) verify radiative transfer model simulations, and iii) calibrate, validate, and refine remote sensing retrieval models based on machine learning regression algorithms (MLRAs). The performance and generalization capacity of these models rely on the quality, accuracy and representativeness of the field data acquisition. Recently, hyperspectral remote sensing has been demonstrated to quantify crop traits accurately (Abdelbaki and Udelhoven 2022; Tagliabue et al. 2022; Wang et al. 2021), but its effectiveness depends heavily on the availability of high-quality calibration data, both for fully data-driven schemes (MLRA) and for hybrid approaches (RTM) exploiting Active Learning optimization (Berger et al. 2020; Candiani et al. 2022; Verrelst et al. 2020). However, collecting field data over various spatial and temporal scales is challenging because it is time-consuming, costly, and labor-intensive. For these reasons, there is a growing need for automated systems able to gather large volumes of field spectral measurements coupled with traditional indirect or destructive crop-trait acquisitions. Automated systems for spectral data acquisition can streamline data collection, enabling more comprehensive datasets in less time. Furthermore, continuous acquisition systems can provide invaluable insight into the time dimension, offering more frequent and consistent measurements able to capture plant behavior and physiological processes.
The new HyperCrop monitoring platform is a robotic system designed to provide field measurements, enabling autonomous and automated data collection at multiple points. The system can autonomously rotate on its axis, collect data in real time and transmit them to a cloud platform. These features allow real-time crop monitoring at different field spots/parcels, providing a direct way to observe growth, stress, and other factors that might not be captured by a single field campaign. The aluminum structure of the platform consists of individual 0.5 m modules assembled together to reach a height ranging from 1 to 5 meters, with a horizontal reach starting from 2.5 meters. The system is equipped with an electric motor that allows 360° rotation, a control unit for instrument integration, a router for connectivity, and a battery and solar panels for electrical power. Additionally, the system features mounts for the free placement of instruments on the structure, facilitating its deployment in different field conditions. The station is currently equipped with the NoX hyperspectral radiometer (NIR reflectance box – JB Hyperspectral Devices GmbH), which measures up-welling and down-welling radiance at high spectral resolution (approximately 3 nm) in the visible and near-infrared regions (350-1650 nm). The instrument is built to withstand the hazards of permanent outdoor installation (https://www.jb-hyperspectral.com/products/nox/), allowing automatic and continuous monitoring of the vegetation below the structure, within the range of its horizontal arm and with a 360° rotation capacity. The development of such systems for seasonal or long-term measurements of crops/crop rotations offers a powerful solution for collecting high-quality data with standard protocols able to capture variations in soil/plant conditions.
To gain further insight into the temporal dimension of continuous spectral data, HyperCrop will be operationally deployed in a wheat field in early 2025 to acquire daily hyperspectral data from the tillering stage to harvest. The study area is located in northern Italy and is hosted by the Bonifiche Ferraresi Spa farm in Jolanda di Savoia (JSIT), where the JSIT LANDHYPERNET site is already located (https://landhypernet.org.uk/site_descriptions/JSIT). The platform will be situated at the center of an experimental area measuring approximately 6 × 6 m, comprising 6 manipulation plots (3 plots × 2 treatments) with two distinct levels of nitrogen fertilization: in addition to the standard level, nitrogen will be dynamically optimized according to the crop's nutritional status. High-frequency hyperspectral data will be acquired daily at one-hour intervals, while crop traits (e.g., chlorophyll concentration, leaf area index, above-ground biomass) will be collected by operators on a weekly basis. The structure will be set at a height of 2 meters with a horizontal reach of 2.5 meters. Rotation at fixed angles will allow the acquisition of spectral data at the center of the 6 plots. Furthermore, a camera installed on the platform will provide daily RGB images, which will be used to monitor crop growth, track crop fractional cover and assess phenological development. The HyperCrop data will be compared and integrated with the HYPERNET measurements acquired by the HYPSTAR instrument, with the goal of calibrating retrieval models and validating spaceborne maps. In conclusion, the advent of new hyperspectral satellites presents a significant opportunity to improve the mapping of crop traits. However, the success of these technologies relies heavily on the collection of extensive, high-quality field data.
Automated systems represent a key solution to the challenges associated with field data collection, enabling more efficient and continuous monitoring of crops.

References:
Abdelbaki, A., & Udelhoven, T. (2022). A Review of Hybrid Approaches for Quantitative Assessment of Crop Traits Using Optical Remote Sensing: Research Trends and Future Directions. Remote Sensing, 14(15). https://doi.org/10.3390/rs14153515
Berger, K., Verrelst, J., Féret, J. B., Hank, T., Wocher, M., Mauser, W., & Camps-Valls, G. (2020). Retrieval of aboveground crop nitrogen content with a hybrid machine learning method. International Journal of Applied Earth Observation and Geoinformation, 92, 102174. https://doi.org/10.1016/j.jag.2020.102174
Candiani, G., Tagliabue, G., Panigada, C., Verrelst, J., Picchi, V., Rivera Caicedo, J. P., & Boschetti, M. (2022). Evaluation of Hybrid Models to Estimate Chlorophyll and Nitrogen Content of Maize Crops in the Framework of the Future CHIME Mission. Remote Sensing, 14(8). https://doi.org/10.3390/rs14081792
Tagliabue, G., Boschetti, M., Bramati, G., Candiani, G., Colombo, R., Nutini, F., Pompilio, L., Rivera-Caicedo, J. P., Rossi, M., Rossini, M., Verrelst, J., & Panigada, C. (2022). Hybrid retrieval of crop traits from multi-temporal PRISMA hyperspectral imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 187, 362–377. https://doi.org/10.1016/j.isprsjprs.2022.03.014
Verrelst, J., Berger, K., & Rivera-Caicedo, J. P. (2020). Intelligent Sampling for Vegetation Nitrogen Mapping Based on Hybrid Machine Learning Algorithms. IEEE Geoscience and Remote Sensing Letters, 1–5. https://doi.org/10.1109/lgrs.2020.3014676
Wang, S., Guan, K., Wang, Z., Ainsworth, E. A., Zheng, T., Townsend, P. A., et al. (2021). Airborne hyperspectral imaging of nitrogen deficiency on crop traits and yield of maize by machine learning and radiative transfer modeling. International Journal of Applied Earth Observation and Geoinformation, 105, 102617. https://doi.org/10.1016/j.jag.2021.102617

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: BRDF Model Comparison Using In-Situ Automated Hyperspectral Multi-Angular Reflectance Data (HYPSTAR-XR) in Gobabeb, Namibia

Authors: Morven Sinclair, Pieter de Vis, Agnieszka Bialek, Ashley Ramsay
Affiliations: National Physical Laboratory
In order to guarantee consistency and interoperability of Earth Observation (EO) data between space-borne sensors, it is necessary to have a full understanding of the uncertainties associated with such datasets. A well-established method of satellite calibration and validation is the regular comparison of satellite products against near-simultaneous data obtained at bottom-of-atmosphere (BOA) by Earth-based optical sensor systems. To obtain consistent datasets at BOA, automated in-situ instrumentation sites are often used, such as the Radiometric Calibration Network (RadCalNet) site in Gobabeb, Namibia. Automated sites are important sources of regular, traceable data for use in satellite calibration and validation activities in remote and complex locations, increasing the likelihood of a matchup under temporally similar illumination conditions. However, many are limited to an output product at one specific viewing angle, and when the sensor and satellite viewing angles differ, significant differences in the comparison data are observed. When this difference in viewing geometry is unavoidable, it has to be accounted for where possible. This is typically done using a Bidirectional Reflectance Distribution Function (BRDF) model, which characterises the behaviour of the surface and the distribution of the reflected light under different illumination conditions. From a BRDF model it is then possible to model and compare observations with different incident lighting and viewing angles. In this study, in order to characterise the surface behaviour, a relatively stable target environment was chosen and multi-angular, hyperspectral reflectance (hemispherical-conical reflectance factor; HCRF) data were collected over a period of several days in October 2023 and May 2024.
A novel hyperspectral radiometer (HYPSTAR-XR), developed as part of the LANDHYPERNET network, measured between 380 nm and 1680 nm at viewing zenith angles of up to 60 degrees and viewing azimuth angles covering close to 360 degrees, while running a customised BRDF measurement sequence. This data collection was undertaken at the Gobabeb site in Namibia, a desert gravel plain which is well established as a stable vicarious calibration site within the RadCalNet network. With this data we examine the performance of four BRDF models: Roujean, RossThick LiSparse (RTLS), Rahman Pinty Verstraete (RPV) and Maignan. These models are first fitted to the hyperspectral reflectance datasets using a forward-modelling Markov Chain Monte Carlo approach in order to derive the best input parameters for each model. The forward modelling includes the application of a correction from BRDF (defined for the models) to HCRF (the measurements), based on direct-to-diffuse irradiance ratios calculated using Radiative Transfer (RT) models. The best-fit models of surface reflectance are then compared to the in-situ surface datasets at both the HYPSTAR-XR location and the RadCalNet CIMEL sunphotometer mast site, using chi-squared statistics to evaluate which model performs best. This study presents the BRDF model fitting process used and the conclusions on the models' fitting success, and raises discussion points from these observations.
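The kernel-driven members of the model set above (Roujean, RTLS) are linear in their parameters, so their fit can be sketched with ordinary least squares even though the study itself uses a forward-modelling MCMC approach. The sketch below uses a RossThick-style volumetric kernel and a Roujean-style geometric kernel; the kernel expressions follow the standard literature forms, and the sampling geometry, kernel weights and noise level are hypothetical:

```python
import numpy as np

def ross_thick(ts, tv, phi):
    """RossThick volumetric scattering kernel (all angles in radians)."""
    cos_xi = np.cos(ts) * np.cos(tv) + np.sin(ts) * np.sin(tv) * np.cos(phi)
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(ts) + np.cos(tv)) - np.pi / 4)

def roujean_geo(ts, tv, phi):
    """Roujean geometric (shadowing) kernel (all angles in radians)."""
    d = np.sqrt(np.tan(ts) ** 2 + np.tan(tv) ** 2
                - 2 * np.tan(ts) * np.tan(tv) * np.cos(phi))
    return (((np.pi - phi) * np.cos(phi) + np.sin(phi))
            * np.tan(ts) * np.tan(tv) / (2 * np.pi)
            - (np.tan(ts) + np.tan(tv) + d) / np.pi)

# Multi-angular sampling loosely mimicking the HYPSTAR-XR sequence:
# viewing zenith up to 60 deg, relative azimuth folded to [0, 180] deg.
rng = np.random.default_rng(7)
tv = np.deg2rad(rng.uniform(0.0, 60.0, 200))
phi = np.deg2rad(rng.uniform(0.0, 180.0, 200))
ts = np.deg2rad(35.0)                           # solar zenith (fixed here)

# Synthetic reflectances from known kernel weights, plus measurement noise.
k_true = np.array([0.30, 0.05, 0.02])           # isotropic, volumetric, geometric
K = np.column_stack([np.ones_like(tv), ross_thick(ts, tv, phi), roujean_geo(ts, tv, phi)])
refl = K @ k_true + rng.normal(0.0, 0.002, tv.size)

# Linear least-squares fit and reduced chi-squared, used to rank model quality.
k_fit, *_ = np.linalg.lstsq(K, refl, rcond=None)
chi2_red = np.sum(((refl - K @ k_fit) / 0.002) ** 2) / (refl.size - 3)
print("fitted kernel weights:", np.round(k_fit, 3), "reduced chi2:", round(chi2_red, 2))
```

The MCMC machinery of the study adds what this sketch omits: full posterior uncertainties on the kernel weights and the BRDF-to-HCRF forward correction inside the fit.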

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Validation of the CLMS NDVI 300m V2.1 Product Using In-situ Data From the PhenoCam Network

Authors: Sarah Gebruers, Else Swinnen, Carolien Toté, Davy Wolfs, Dominique De Munck, Michal Moroz
Affiliations: Vito
The Copernicus Land Monitoring Service (CLMS) provides a number of global bio-geophysical products which are used to monitor the status and evolution of the Earth's land surface. One of these products is a long-term time series of the Normalized Difference Vegetation Index (NDVI) at 300m resolution. The NDVI approximates the amount and greenness of vegetation and is important for studying (anomalies in) vegetation dynamics. The latest version of the NDVI 300m product (V2.1) is based on Top-Of-Canopy (TOC) reflectances from PROBA-V Collection 2 and on the reprocessed Sentinel-3 Ocean and Land Colour Instrument (OLCI) CLMS TOC reflectances. The difference from previous versions of this product is that TOC reflectance data from both instruments are now corrected for the Bidirectional Reflectance Distribution Function (BRDF), thus providing a long time series of 300m NDVI data normalized to a common illumination and observation geometry. The full time series covers the period from January 2014 up until near-real time. To ensure the correctness and consistency of the NDVI 300m V2.1 product, and to assess its quality, it is of paramount importance that the data are validated thoroughly. Not only should the data be checked internally for consistency throughout the time series, by comparing NDVI created from PROBA-V inputs with NDVI from Sentinel-3 OLCI inputs, but the product should also be validated statistically, spatially, and temporally against external data sources. These reference data can be split into three categories: older versions of the CGLS NDVI product at 300m and 1km resolution, NDVI values derived from data of other satellites such as the Moderate Resolution Imaging Spectroradiometer (MODIS), and in-situ data from, for example, the PhenoCam network. In this presentation, we focus on the validation with PhenoCam data, as this has not been done before for the CGLS NDVI products.
PhenoCam is a network of globally distributed digital cameras that capture images of a diverse range of ecosystems every 30 minutes, with the goal of tracking the phenology of different vegetation types. The red, green, and blue bands of the images are combined into a ‘green chromatic coordinate’ (GCC) that measures canopy greenness. The advantage of PhenoCam is that it provides long, almost continuous time series that are not obstructed by clouds and require no atmospheric correction. It is therefore an ideal reference source for calibrating and validating satellite data. However, the GCC is not identical to NDVI, resulting in time series with different amplitudes and possibly also different patterns. In addition, the PhenoCam cameras observe a different spatial extent than the resolution of the PROBA-V and Sentinel-3 satellites. The first problem can be remedied by normalizing both datasets and focusing on temporal patterns in the validation process. To minimize the second problem, it is crucial to select PhenoCam sites that are situated in a larger homogeneous environment. The reprocessing of the CLMS NDVI 300m product with the V2.1 improvements will start soon. First validation results are expected in spring of next year and will be discussed in this presentation.
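The normalize-then-compare strategy described above can be sketched directly: compute the GCC from the camera's red, green and blue digital numbers, min-max normalize both series so amplitude differences drop out, and score the agreement of the temporal patterns. The seasonal curves and camera values below are hypothetical:

```python
import numpy as np

def gcc(r, g, b):
    """Green chromatic coordinate from camera digital numbers."""
    return g / (r + g + b)

def minmax(x):
    """Scale a series to [0, 1] so amplitude differences drop out."""
    return (x - x.min()) / (x.max() - x.min())

# Hypothetical seasonal curves for one site (daily values over one year).
doy = np.arange(1, 366)
season = np.exp(-0.5 * ((doy - 190) / 45.0) ** 2)  # green-up / senescence shape
ndvi = 0.2 + 0.6 * season                          # satellite NDVI series
g = 90.0 + 40.0 * season                           # PhenoCam green digital number
r = np.full_like(g, 100.0)
b = np.full_like(g, 80.0)
gcc_series = gcc(r, g, b)

# Compare temporal patterns, not absolute values: normalize, then correlate.
corr = np.corrcoef(minmax(ndvi), minmax(gcc_series))[0, 1]
print(f"pattern agreement (Pearson r): {corr:.3f}")
```

Min-max scaling does not change the Pearson correlation itself, but it makes the two normalized curves directly comparable for phenology metrics such as green-up dates.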

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: The Goddard Laser for Absolute Measurement of Radiance (GLAMR) Facility for Spectral and Radiometric Calibration

Authors: Julia Barsi, Dr. Brendan McAndrew, Dr. Samuel Kitchen-McKinley, Dr. Gerhard Meister, Zhipeng Wang, Thomas Ellis, Gary Hoffman
Affiliations: NASA, Science Systems and Applications, Inc. (SSAI), Bay Area Environmental Research Institute (BAERI)
The Goddard Laser for Absolute Measurement of Radiance (GLAMR) Calibration Facility is a tunable-laser-based radiometric calibration system for instruments operating between 340 and 2500 nm. Based on technology developed by the US national metrology laboratory, the National Institute of Standards and Technology (NIST), GLAMR combines spectral and radiometric characterization into one measurement sweep by creating a monochromatic extended source. The large exit port of GLAMR's integrating sphere illuminates the full aperture and field of view of many sensors, providing a radiometrically stable and spectrally uniform source for calibration of the absolute spectral response. The spectral position of the light is measured with tens-of-picometer accuracy, and the radiometric scale can be transferred with 0.2 to 0.35% (k=1) uncertainty outside of the water vapor bands. Standard products derived from GLAMR measurements include absolute spectral response, relative spectral response for both in-band and out-of-band spectral regions, and linearity. Results from GLAMR measurements can be used to derive stray-light correction models and nonlinearity corrections, as well as band-integrated responsivity. This talk will provide a description of the GLAMR system, its operations and traceability, and sample results from recent measurement campaigns. Where possible, the GLAMR calibration results will be compared to absolute radiometric calibration results obtained with traditional broadband sources.
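The band-integrated responsivity product mentioned above follows from weighting a source spectrum by the measured relative spectral response. A hedged sketch of that standard band-averaging step on a uniform wavelength grid (the Gaussian response and linear radiance spectrum are placeholder values, not GLAMR data):

```python
import numpy as np

# Wavelength grid (uniform 1 nm) and a placeholder relative spectral response
# (RSR) for one band: Gaussian centred at 865 nm with a 20 nm FWHM.
wl = np.arange(800.0, 931.0, 1.0)                      # nm
sigma = 20.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))      # FWHM -> standard deviation
rsr = np.exp(-0.5 * ((wl - 865.0) / sigma) ** 2)

# Placeholder source spectral radiance seen by the reference radiometer.
radiance = 0.10 + 1e-4 * (wl - 800.0)                  # W m-2 sr-1 nm-1

# Band-averaged radiance: RSR-weighted mean over the uniform grid.
band_radiance = np.sum(rsr * radiance) / np.sum(rsr)
print(f"band-averaged radiance: {band_radiance:.5f} W m-2 sr-1 nm-1")
```

With a monochromatic tunable source, the RSR itself is what the laser sweep measures point by point; the weighting above is then applied to any target spectrum.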

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Reassessment of TROPOMI's Absolute Radiometric Calibration

Authors: Edward van Amelrooy, Mirna van Hoek, Erwin Loots, Antje Ludewig, Emiel van der Plas, Nico Rozemeijer
Affiliations: KNMI, Triopsys
The Sentinel-5 Precursor (S5P) mission, part of the EU’s Copernicus programme, hosts the TROPOspheric Monitoring Instrument (TROPOMI) as its sole payload. TROPOMI’s push-broom imaging system achieves 5.5 km × 3.5 km spatial sampling, enabling daily global coverage of trace gases and aerosols critical for air quality, climate, and ozone studies. With spectrometers covering ultraviolet (UV), visible (UVIS), near-infrared (NIR), and shortwave infrared (SWIR) bands, TROPOMI observes key atmospheric constituents such as ozone (O₃), nitrogen dioxide (NO₂), carbon monoxide (CO), methane (CH₄), and aerosols. It measures radiance during daylight orbits and solar irradiance daily through a dedicated port, ensuring comprehensive atmospheric monitoring and accurate data. This study focuses on a detailed reassessment of TROPOMI's absolute radiometric calibration, specifically targeting the ultraviolet (UV) bands—band 1 (270–300 nm) and band 2 (300–330 nm). These spectral ranges are crucial for atmospheric measurements, including the retrieval of ozone and other trace gases, where calibration precision directly impacts data quality and scientific applications. The reassessment draws on multiple calibration methods used during TROPOMI's on-ground calibration campaign to refine the absolute radiometric accuracy, particularly in the UV range. Absolute radiance can be determined directly using the absolute radiance calibration measurements. Alternatively, it can be derived by combining absolute irradiance calibration data with the Bidirectional Scattering Distribution Function (BSDF) measurements. These interrelated methods require testing and evaluating various combinations of measurements to determine the most accurate and consistent approach for obtaining absolute radiance. By systematically analyzing these combinations, the study seeks to optimize the calibration framework, addressing potential discrepancies and improving radiometric accuracy. 
Refinements in the calibration process enhance the traceability and consistency of TROPOMI measurements across its spectral bands, ultimately increasing the reliability of data for atmospheric monitoring, climate research, and environmental studies. This effort also emphasizes the intricate relationship between calibration parameters, illustrating the need for integrating diverse methodologies to achieve the best radiometric performance. Furthermore, it highlights the broader importance of continuously reevaluating and improving calibration processes for satellite instruments to ensure their scientific utility and accuracy over time.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Sentinel-3 SLSTR SST Validation using a Fiducial Reference Measurements (FRM) Service

Authors: Werenfrid Wimmer, Tim Nightingale, Arrow Lee, Jacob Hoeyer, Gusisella Gacitua, Hugh Kelliher, Ruth Wilson, Dr. Craig Donlon
Affiliations: University Of Southampton, RAL Space, Danish Meteorological Institute, Space ConneXion, ESA ESTEC
ESA has built on over 25 years of continuous Fiducial Reference Measurements (FRM) from UK-funded shipborne radiometers by establishing a service to provide historic and ongoing FRM measurements to the wider sea surface temperature (SST) community through an International SST FRM Radiometer Network (ships4sst). This service not only collects shipborne radiometer data, but also uses the FRM SST data to validate satellite SST products; a long-term data archive of the FRM datasets at Ifremer provides a validation service based on the ESA felyx match-up database (MDB) using ships4sst data in netCDF L2R format. This is hosted at EUMETSAT. All data and validation results are freely available to the public via the ships4sst website. At ships4sst we organise and participate in regular inter-comparisons at the National Physical Laboratory (NPL) in the UK and the National Institute of Standards and Technology (NIST) in the USA to ensure not only the SI (International System of Units) traceability of our measurements but also the validity of the per-SST-value uncertainties. To demonstrate the value of the FRM SST service we will show some examples of ships4sst data around the world, as well as the most recent validation results for SLSTR A and B from the ships4sst network regions. This not only demonstrates that SLSTR A and B are performing to specification and at least as well as their predecessor AATSR, but also shows a potential route to SI traceability for SLSTR SST measurements. Finally, we will show results from SLSTR A and B during the Sentinel-3 tandem phase using a number of comparison tools, including triple collocations between the ships4sst FRM data and the SLSTR instruments on Sentinel-3A and -3B. ships4sst is open to partners around the world and currently comprises partners from the UK (University of Southampton, Rutherford Appleton Laboratory, Space ConneXions), Denmark (Danish Meteorological Institute) and France (Ifremer).
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Evaluation of GRS-based Sentinel-2/MSI Level-2A products over inland waters using in-situ hyperspectral radiometric measurements spanning a large gradient of climatic and trophic conditions

Authors: Joana Roussillon, Jean-Michel Martinez, Matheus Tavares, Martin Rapilly, Valentin Baute, David Guimaraes
Affiliations: Géosciences Environnement Toulouse (GET), UMR5563, CNRS, IRD, Université Toulouse 3, Universidad Autónoma de Santo Domingo (UASD), Zona Universitaria, Distrito Nacional, Magellium, 1 Rue Ariane
Inland water bodies play a crucial role in the global carbon cycle and the functioning of ecosystems, and provide vital services to meet societal demands. Monitoring the evolution of inland water quality (e.g. chlorophyll concentration, sediment content, etc.) is of paramount importance to better understand how it evolves in a changing climate and in relation to anthropogenic activities. Since 2015, the availability of Sentinel-2 satellite images has led to the development of processing chains to monitor water bodies with unprecedented spatial, temporal and spectral resolution. However, the complexity and cost of in-situ calibration campaigns, combined with the high optical variability of inland waters, make robust model validation and accuracy assessments across diverse conditions a persistent challenge. Here, we present a synthesis of our team's efforts to validate our processing chain of satellite-derived reflectance products using a comprehensive dataset of consistent in-situ hyperspectral measurements. It results from 43 field campaigns carried out almost every month from March 2021 to July 2024 across a wide range of climatic conditions (tropical, humid tropical, temperate, and semi-arid) and of trophic and turbidity conditions (from oligotrophic to super-eutrophic), gathering around 400 ‘above-water’ radiometric measurements. This enabled a robust quantitative assessment through matchup comparisons between the Level-2A remote-sensing reflectance (Rrs), derived from L1C data processed with the GRS algorithm for atmospheric and glint correction, and coincident ground-based radiometer observations. Aerosol optical properties are retrieved from global model-based data provided by the Copernicus Atmosphere Monitoring Service (CAMS). Rrs pixels are intersected with a water detection algorithm to avoid cloud contamination.
Results show good agreement between satellite reflectances and field measurements across a wide range of conditions, with RMSEs between 0.0016 and 0.0043 sr-1 depending on the selected spectral bands, study areas and seasons. However, biases remain in particular cases, such as when the Rrs are especially low. Using different inversion algorithms, the impact of these biases on retrieving the dynamics of variables of interest (chlorophyll concentration, suspended particulate matter) will be discussed in the context of eutrophication and sedimentation studies, as well as the potential sources of these errors (e.g. adjacency effects).
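A matchup assessment of this kind reduces, per spectral band, to comparing coincident satellite and in-situ Rrs values. A minimal sketch with invented values (not the campaign data) might look like:

```python
import numpy as np

# Toy matchup set for one spectral band: satellite-derived vs in-situ
# Rrs [sr^-1]. Values are invented; real matchups pair GRS-corrected L2A
# pixels with coincident above-water radiometer spectra.
rrs_insitu = np.array([0.010, 0.015, 0.008, 0.022, 0.012])
rrs_sat = np.array([0.011, 0.014, 0.009, 0.020, 0.013])

rmse = np.sqrt(np.mean((rrs_sat - rrs_insitu) ** 2))
bias = np.mean(rrs_sat - rrs_insitu)
print(f"RMSE = {rmse:.4f} sr^-1, bias = {bias:+.4f} sr^-1")
```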
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Cross-calibration of Sentinel-2 MSI and Landsat-8 OLI for high spatial resolution climate studies

Authors: Madeline Stedman, Mr Samuel E. Hunt, Pieter de Vis, Professor Jadu Dash, Mr Omar Ali Eweys, Mr Joe Riordan, Mr Matthew Scholes
Affiliations: National Physical Laboratory, University of Southampton
The development of high-spatial-resolution climate data products is essential to support regional-scale climate action and address the needs of public and commercial sector users. To support this, the UK national initiative Earth Observation Climate Information Service (EOCIS) produced a set of novel datasets of different climate variables called Climate data at high resolution for the UK (CHUK) - this includes an EOCIS CHUK vegetation dataset. The applicability of derived datasets such as these for regional-scale applications can be limited by the temporal and spatial frequency of available observations, a problem that is exacerbated by cloud cover. Integrating data from additional sensors mitigates this by increasing observation frequency. This work demonstrates the importance of radiometric harmonisation in ensuring consistency and interoperability between the multiple sensors underpinning such datasets, to support the generation of robust high-resolution datasets suitable for climate monitoring and analysis. The EOCIS CHUK vegetation dataset covers the vegetation biophysical variables Leaf Area Index (LAI) and the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), derived from Sentinel-2 MSI surface reflectance in the red and near-infrared (NIR) bands. This work focusses on the harmonisation of Sentinel-2 MSI and Landsat-8 OLI, with the aim of facilitating the integration of Landsat-8 OLI data into datasets like the CHUK vegetation product as an example application. The primary technique used to cross-calibrate EO sensors against a reference in-flight is comparison of measurements within satellite-to-satellite match-ups. A match-up is defined as an event where two satellite sensors observe approximately the same location at approximately the same time. This is also commonly referred to as a simultaneous nadir overpass (SNO).
In this work, matchups between Sentinel-2 MSI and Landsat-8 OLI are identified and analysed to evaluate a harmonisation of the top-of-atmosphere reflectances. Matchup sensor data are processed to compensate for differences in spectral and spatial sampling, as well as other observation mismatches, such as in viewing geometry. A simulation-based approach is then used to assess and mitigate sources of error in the matchup analysis for the desert Pseudo-Invariant Calibration Site (PICS) Libya-4, providing additional insights complementary to existing methodologies such as Sen2Like and HLS. We present the results of this matchup analysis and evaluate the impact of the Sentinel-2 / Landsat-8 harmonisation on the derived datasets for the vegetation biophysical variables included within the CHUK vegetation dataset. These are further compared to reference in-situ measurements, such as those produced as part of the Ground-Based Observations for Validation (GBOV) dataset.
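The match-up screening described above can be sketched as a simple time/space window test. The thresholds, site coordinates and observation times below are illustrative assumptions, not those used in the study:

```python
from datetime import datetime, timedelta

# Two observations count as a match-up (SNO-like event) when they view
# nearly the same location at nearly the same time. Illustrative thresholds:
MAX_DT = timedelta(minutes=30)
MAX_DIST_DEG = 0.1   # crude lat/lon box instead of a great-circle distance

def find_matchups(obs_a, obs_b):
    """obs_* : lists of (datetime, lat, lon) tuples; returns time pairs."""
    pairs = []
    for ta, lat_a, lon_a in obs_a:
        for tb, lat_b, lon_b in obs_b:
            if (abs(ta - tb) <= MAX_DT
                    and abs(lat_a - lat_b) <= MAX_DIST_DEG
                    and abs(lon_a - lon_b) <= MAX_DIST_DEG):
                pairs.append((ta, tb))
    return pairs

# Invented overpasses near the Libya-4 site
s2 = [(datetime(2024, 6, 6, 10, 0), 28.55, 23.39)]
l8 = [(datetime(2024, 6, 6, 10, 12), 28.57, 23.40),   # within the windows
      (datetime(2024, 6, 7, 10, 12), 28.57, 23.40)]   # a day late: rejected
print(find_matchups(s2, l8))
```

In practice the spatial test uses pixel footprints and great-circle distances rather than a lat/lon box, and further screening (cloud, viewing-geometry mismatch) follows.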
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Eye-safe Raymetrics Aerosol Profiler (RAP) in Horizontal Pointing Mode: a new tool for the validation of aerosol products from Very High-Resolution optical missions

Authors: Lorenzo Veltri-Gomes, Nicola Ferrante, Anna Maria Iannarelli, Annalisa Di Bernardino, George Tsaknakis, Ourania Soupiona, Flavio Marcozzi, Stefano Casadio
Affiliations: Serco Italia, Sapienza University of Rome, RAYMETRICS
The Raymetrics Aerosol Profiler (RAP) was designed in 2020 to be an ACTRIS-compliant elastic lidar (1064 nm single wavelength), equipped with an automated telecover test and fully automated operability. The prototype was developed in the context of an ESA QA4EO work package, with the aim of supporting the retrieval of atmospheric cloud fraction using full-sky images from a sky-camera stereo system. Since March 2021, RAP has been operating continuously 24/7 at Sapienza University of Rome, in synergy with the BAQUNIN supersite instrumental suite (www.baqunin.eu). In its nominal configuration, RAP points vertically and acquires backscattered lidar signals from 0 to 15 km height at 3.75 m range resolution and 10 seconds coadding time; the full overlap height is 150 m. This configuration allows for the retrieval of backscattering and extinction coefficient vertical profiles with excellent accuracy. After a late-2024 refurbishment, RAP can be operated in different observation modes, as its elevation and azimuth pointing angles have been made easily adjustable and controllable, and its position on the roof of the Physics Department of Sapienza can be modified with little (or no) effort. In this context, a new application of RAP measurements has been proposed by Raymetrics, for which the system is operated in the so-called Horizontal Pointing Mode (HPM): RAP elevation is set to values close to 0 degrees in order to probe the lowest portion of the urban boundary layer for user-defined azimuth directions. The lidar signal thus samples a portion of the atmosphere usually not accessible to conventional lidars and ceilometers, over relatively long distances (up to 5 km) at very high spatial and temporal resolution (3.75 m, 10 s). Raymetrics, aiming to provide cutting-edge lidar solutions, has developed a novel scanning lidar system specifically designed for evaluating particulate concentration in the atmosphere.
To date, Raymetrics' 3D scanning lidars, emitting at 355 nm, offer wide-area coverage of up to 50 km², leveraging a unique algorithm that transforms backscatter signals into PM aerosol concentrations. This system provides a spatial resolution of 3.75 m and a time resolution of 10 seconds. The application incorporates the latest advancements from the scientific and engineering communities, and its performance has been calibrated and extensively validated for PM retrievals using in-situ sensors, with a focus on PM2.5 up to PM10. To assess the applicability and accuracy of this novel algorithm for measuring particles with larger diameters (PM>10), it is necessary to conduct experimental testing using the existing PMeye lidar setup at a different wavelength (1064 nm). This will involve the collocation of in-situ optical sensors capable of measuring PM≥10 particle diameters, for calibration and validation purposes. The BAQUNIN team has developed simulation software to test RAP efficiency in HPM and at other elevation angles (slant pointing mode, SPM). The simulator is written in Python and will be described in detail and made available via the BAQUNIN website. The lidar signal analysis is based on state-of-the-art retrieval schemes, providing detailed information about aerosol properties and associated uncertainties. The main advantage of operating RAP in HPM is its usability for validation of aerosol loads and atmospheric corrections from very-high-resolution satellite optical missions, as the spatial and temporal observation scales are essentially comparable. To our knowledge, no other system can provide such an innovative validation opportunity.
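The quoted 3.75 m range resolution follows directly from the lidar round-trip timing relation dR = c·dt/2. The sketch below also estimates, under an assumed instrument height (an illustrative value, not the Sapienza rooftop geometry), roughly what height a near-horizontal beam probes at a given range:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def range_resolution(dt_s):
    """Range bin size dR = c * dt / 2 (factor 2 for the round trip)."""
    return C * dt_s / 2.0

# A 25 ns sampling interval yields RAP's ~3.75 m range bins
dr = range_resolution(25e-9)
print(f"range bin = {dr:.2f} m")

def probe_height(r_m, elev_deg, inst_height_m=30.0):
    """Approximate probed height at range r for a small elevation angle.
    The 30 m instrument height is an assumed, illustrative value."""
    return inst_height_m + r_m * math.sin(math.radians(elev_deg))

# Even at 5 km range, a 1-degree elevation keeps the beam in the lowest
# ~100 m of the boundary layer, as described in the abstract
print(f"height at 5 km, 1 deg elevation: {probe_height(5000.0, 1.0):.1f} m")
```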
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: In-Situ Cal/Val Activities of Sentinel-2 and Sentinel-3 Products in highly turbid inland waters: investigation in the Madeira River, Brazil

Authors: David Guimaraes, Professor Jean-Michel Martinez, Tristan Harmel, Béatrice Berthelot, Juan Gossn, Ewa Kwiatkowska, Diogo Olivetti, Professor Henrique Roig, Matheus Tavares, Osmair Ferreira, Joana Roussillon, Valentin Baute, Mauricio Cordeiro, Martin Rapilly
Affiliations: Géosciences Environnement Toulouse (GET), Magellium (MAG), Institute of Geosciences, University of Brasília (IG-UnB), National Water and Sanitation Agency (ANA), European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), Institute of Research for Development (IRD), Universidad Autónoma de Santo Domingo (UASD)
Water quality is a critical component of environmental health, directly impacting ecosystems and economic activities such as agriculture, fisheries, and tourism. Therefore, monitoring water quality (WQ) is a major requirement in order to inform decision-makers about urgent challenges such as climate change, urbanization, and industrial activities. These activities can rapidly degrade WQ and strain freshwater availability. Remote sensing (RS), especially with advanced satellite platforms like Sentinel-2 and Sentinel-3, offers an efficient solution for WQ monitoring. Part of the European Space Agency's (ESA) Copernicus programme, the Sentinel-2 and Sentinel-3 satellites are equipped with the MultiSpectral Instrument (MSI) and the Ocean and Land Colour Instrument (OLCI), respectively. These instruments enable large-scale, frequently revisited, and cost-effective observations of key optically active WQ parameters, like chlorophyll-a, colored dissolved organic matter (CDOM) and suspended particulate matter (SPM). Before providing decision-makers with actionable insights for managing water resources, however, the data produced by these instruments need processing and validation through collective calibration and validation (Cal/Val) activities. In-situ measurements play a crucial role in Cal/Val, quantifying the uncertainties associated with RS instruments. Nevertheless, such in-situ data remain very scarce in many regions of the world, especially for inland waters. In this work, we present the results of a recent field campaign dedicated to hyperspectral radiometry data collection, in which 16 points were sampled over the Madeira River, one of the main tributaries of the Amazon River. The Madeira River has been the subject of several RS studies focused on WQ and SPM, the region being characterized by exceptionally high loads of suspended sediments.
Radiometric measurements over these waters were performed from a boat using three TriOS-RAMSES hyperspectral radiometers operating simultaneously, allowing computation of the remote-sensing reflectance (Rrs). The measurement date (06/06/2024) coincided with both a Sentinel-2 and a Sentinel-3 overpass, allowing unique orbital versus in-situ match-ups. Orbital data were extracted from the centroid of 16 pre-delimited vector squares of 300x300 meters in order to accommodate roughly homogeneous Sentinel-3 pixels. Three atmospherically corrected products were evaluated in this work: Sen2Cor and Glint Removal for Sentinel-2-like sensors (GRS) for Sentinel-2, and the Sentinel-3 L2_WFR baseline-3 product provided by the Copernicus Data Space portal. Similar central wavelengths for 8 bands were chosen for Sentinel-2, Sentinel-3 and the in-situ instruments for inter-comparison. The results favour Sentinel-3 OLCI over the other products, with r² = 0.93 and a MAPE of 0.07% when compared with the measured in-situ data over the best-performing point (P16), believed to be the point least affected by adjacency effects. Over the same P16, GRS performed with r² = 0.63 and MAPE = 0.21%, while Sen2Cor performed poorly, with negative results, possibly because it is not specifically tuned for highly turbid waters.
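The reported r² and MAPE scores are standard matchup metrics. A minimal sketch with toy values (not the campaign matchups) shows how they are computed:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error (as a fraction)."""
    return np.mean(np.abs((y_pred - y_true) / y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented band-wise Rrs values for one matchup point
insitu = np.array([0.030, 0.045, 0.060, 0.075])
sat = np.array([0.032, 0.044, 0.058, 0.078])
print(f"r^2 = {r_squared(insitu, sat):.2f}, MAPE = {mape(insitu, sat):.2%}")
```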
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: The North Australian Satellite Validation Facility

Authors: Stefan Maier
Affiliations: maitec
The North Australian Satellite Validation Facility (NASVF) undertakes ongoing, long-term collection of on- and near-ground data in Northern Australia for the development and validation of Earth observation satellite products, using automated and manually operated instrumentation. The NASVF is located in Litchfield National Park, a 1,461 km2 conservation reserve in the monsoonal north of the Northern Territory, approx. 80 km south of Darwin. Average annual rainfall is ~1420 mm, of which ~90% falls between December and March. The site is located in Eucalypt woodland, dominated by Eucalyptus miniata and Eucalyptus tetrodonta. Due to the wet/dry season climate, the area is highly dynamic in atmospheric conditions (ranging from very dry to very moist and very clear to very hazy) as well as surface conditions (ranging from very dry to very wet soils and bare to highly vegetated surfaces). Yet the area has relatively low cloud cover, ensuring high observation frequencies for optical satellite sensors. The proximity to the equator allows the facility to support satellite missions in polar and near-equatorial orbits. Continuously measured variables include air temperature and humidity, measured with a Sensirion SHT31. Rainfall is captured with a RIMCO RIM-7499-020. ML3 ThetaProbes from Delta-T Devices measure soil moisture and temperature at 5 cm depth. Currently, 4 probes are located randomly within a radius of 400 m from the LoRa gateway. A range of cameras (with standard and fish-eye lenses) capture images of canopy and understorey for estimating vegetation cover and leaf area index. A network of sensors measuring down- and upwelling photosynthetically active radiation at different levels above, within and below the canopy and understorey allows the estimation of fAPAR.
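The multi-level PAR measurements described above support a simple four-flux fAPAR estimate. The sensor readings below are invented for illustration and do not reflect the NASVF configuration or data:

```python
def fapar(par_down_top, par_up_top, par_down_bottom, par_up_bottom):
    """Four-flux fAPAR: PAR absorbed by the canopy is the net flux at the
    top (incoming minus reflected) minus the net flux at the bottom
    (transmitted minus soil-reflected), normalised by incoming PAR."""
    absorbed = (par_down_top - par_up_top) - (par_down_bottom - par_up_bottom)
    return absorbed / par_down_top

# Invented midday readings in umol m-2 s-1 (illustrative only)
value = fapar(par_down_top=1800.0, par_up_top=120.0,
              par_down_bottom=400.0, par_up_bottom=40.0)
print(f"fAPAR = {value:.3f}")
```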
Various instruments are currently in development, including a surface temperature sensor, lightning detector, solar-induced fluorescence spectrometer, sky camera and total-column atmospheric water vapour sensor. For real-time data from high-data-rate instruments, and for remote access, a 2.4 GHz WiFi access point is used. Low-data-rate instruments communicate via a LoRaWAN gateway through The Things Network. Other instrumentation can be installed on request for permanent operation or for specific time frames. Engineering support (mechanical, electronic, optical) is available for instrument adaptation, calibration and installation. Regular measurement campaigns are undertaken using spectroradiometers, thermal infrared sensors and a variety of imaging systems mounted on remotely piloted aerial systems (RPAS) to capture top-of-canopy reflectance, surface temperature, orthomosaics, 3D vegetation structure, etc. Collaborators include ACRI-ST, European Commission, European Space Agency (ESA), Environmental Sensing Systems, German Aerospace Center (DLR), Northern Territory Government, University of Leicester, University of Southampton, University of Valencia and Charles Darwin University.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Inter-satellite and a-posteriori in-situ comparisons for the validation of Sentinel-3 fire products

Authors: Thomas Miraglio, Sebastien Clerc, Professor Martin Wooster, Weidong Xu, Jerome Bruniquel, Steffen Dransfeld, Veronique Bruniquel
Affiliations: ACRI-ST, King's College London, ESA
Wildfires are significant contributors to greenhouse gas emissions, with potentially major health and economic impacts. As the frequency of vegetation fires is expected to increase in the coming century due to climate change, accurately detecting and characterizing fires at the global scale is critical to better understand their spatiotemporal distribution as well as their effects on the environment. To that aim, spaceborne sensors have been successfully used for several years. In 2016, to ensure continuity with AATSR, the European Space Agency (ESA) launched Sentinel-3A, followed by Sentinel-3B in 2018, both carrying the SLSTR sensor. Validating the accuracy of the fire products estimated by these spaceborne sensors is critical to ensure the reliability of the measurements and the resulting science. To that end, the Optical Mission Performance Cluster (OPT-MPC) continuously monitors the SLSTR instrument, with routine validation of its Non Time Critical (NTC) fire products and quarterly reporting. Due to the unpredictability of biomass fires, there is no direct, standard method to validate fire detection and characterization by spaceborne sensors. Nevertheless, the OPT-MPC has set up operational methods relying on direct comparisons with MODIS Terra during simultaneous nadir overpasses, indirect comparisons between the SLSTR instruments at the global scale, as well as indirect a-posteriori fire detection validation using public reported-wildfire databases. This presentation will focus on the framework that has been set up to ensure the routine monitoring of the Sentinel-3 fire products. We present the performances of SLSTR and show that there is good agreement with MODIS Terra for fire detection and characterization during both nighttime and daytime, as well as between SLSTR-A and SLSTR-B. Additionally, we present the results of the fire detection validation using reported wildfires.
We conclude that SLSTR-A/B could fill the gap left by the decommissioning of MODIS Terra concerning fire detection and characterization.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: TRACTIONS Demonstration Project TSM and Primary Production Services Improved by a Continuous Local Calibration and Validation of EO Data using High-Frequency Hyperspectral Insitu Instruments

Authors: Dr. Marnix Laanen, Annelies Hommersom, Dr Steef Peters, Semhar Ghezehegn, MSc. Lazaros Spaias
Affiliations: Water Insight
In the ESA BASS Demonstration project TRACTIONS, we aim to develop a long-term Near Real Time operational satellite information service for surface water quality that is continuously and automatically calibrated and validated by a network of hyperspectral in-situ instruments. The service shall integrate satellite images from various platforms. The TRACTIONS information services are tailored to the dredging and aquaculture industry. There is a need for water transparency products from EO at various scales and sometimes with a high temporal resolution. The products will help to assess the environmental impact of dredging and to drastically reduce the costs of traditional monitoring. Water transparency is also a key parameter for the aquaculture sector, especially when it is integrated with PAR and algal biomass into a measure of (potential) Primary Production (PP). We use a network of WISPstation radiometric instruments installed on water bodies, measuring semi-continuously. The in-situ WISPstation instruments are fully autonomous. They measure the upwelling and downwelling light (Lu, Ed and Ld) over the visible and near-infrared every 15 minutes (as a standard setting, this is adjustable) and send the raw data via 4G to the WISPcloud processor and database. WISPcloud performs quality control, derives the calibrated water-leaving radiance reflectance (remote sensing reflectance) and basic parameters such as chlorophyll-a, and makes these available via an Application Programming Interface (API) within minutes after the observation. Water Insight currently achieves good results with machine learning algorithms based on simulated reflectances coupled to a published general atmospheric correction model (MODTRAN). There will always be differences between the observed and modelled reflectances due to unknowns and assumptions in atmospheric correction models and unknown variability in the atmospheric composition.
Furthermore, regular updates of atmospheric correction models impact the resulting reflectances. Therefore, a long-term operational service has to compromise between freezing the software at a certain version and explaining to users that reprocessing of historic images is necessary to avoid discontinuities in the resulting time series. All of this can be avoided by calibrating satellite-derived remote sensing reflectances using a network of automated in-situ optical sensors. Our results indicate a diminished dependency on inconsistencies in the AC caused by unknown parameter quantities and an improvement in the consistency of derived water quality parameters. The method can be applied to any hyperspectral satellite system or multispectral system for which the SRF and the at-satellite calibration coefficients are known. In the end, the quality of the procedure is determined by the quality of the WISPstation observations, and hence its calibration accuracy and stability over time. By setting up a network of WISPstations, there will always be a reflectance measurement available within 15 minutes of a satellite image acquisition covering the network area to be used for validation. Besides removing the uncertainties regarding atmospheric correction, WISPstation measurements can also be used to calibrate the water quality algorithms using laboratory measurements by the user. This will guarantee continuity in the water quality data records of the users, which is a major requirement. On our poster we will provide initial results of validation/calibration of atmospheric correction of Very High Resolution satellite imagery based on WISPstation remote sensing reflectances. TRACTIONS Demonstration is a project funded under the ESA ARTES 4.0 BUSINESS APPLICATIONS programme.
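The conversion from the measured light fields (Lu, Ld, Ed) to remote sensing reflectance typically follows the standard above-water formulation sketched below. The Fresnel reflectance factor rho = 0.028 is a common calm-water assumption in the literature, not a documented WISPcloud setting, and the radiance/irradiance values are invented:

```python
def remote_sensing_reflectance(lu, ld, ed, rho=0.028):
    """Standard above-water Rrs: the water-leaving radiance is the upwelling
    radiance Lu minus the sky radiance Ld reflected at the air-water
    interface (Fresnel factor rho), normalised by downwelling irradiance Ed."""
    return (lu - rho * ld) / ed

# Invented single-wavelength readings (radiances in W m-2 sr-1 nm-1,
# irradiance in W m-2 nm-1; illustrative only)
rrs = remote_sensing_reflectance(lu=2.5, ld=30.0, ed=150.0)
print(f"Rrs = {rrs:.4f} sr^-1")
```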
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: A web-based tool for the validation of Sentinel-2 and Sentinel-3 derived bio‐geophysical products against ICOS terrestrial ecosystems measurements

Authors: Noelle Cremer, Dr Fabrizio Niro, Dario Papale, Giacomo Nicolini, Simone Sabbatini, Luke Brown
Affiliations: Serco For ESA ESRIN, National Research Council - IRET, CMCC Foundation - Euro-Mediterranean Center on Climate Change, University of Salford
This study aims to demonstrate how ICOS terrestrial ecosystem sites can serve as a network for validating Earth Observation products, thus enhancing the spatial and temporal coverage of validation efforts. Focusing on the current ESA optical imaging sensors, the project aims to generate a match-up dataset of Sentinel-2 and Sentinel-3 observations over ICOS ecosystem sites to ease the validation of satellite-derived bio-geophysical products. The objective is to bridge the gap between the ecosystem monitoring and satellite remote sensing communities by facilitating the uptake and exploitation of ICOS data for validation purposes. The project focuses on the validation of a subset of vegetation parameters measured within the existing ICOS network, including key terrestrial Essential Climate Variables, specifically Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) and Leaf Area Index (LAI). As a main outcome of this activity, a set of Jupyter Notebooks on the Terrascope platform is developed and shared within the satellite Cal/Val community, allowing users to read the ICOS data, extract the land biophysical products of interest and match them to the corresponding satellite overpasses. Issues such as the spatial representativeness of ICOS sites, the temporal sampling of satellite acquisitions and the inclusion of in-situ uncertainties as a step towards fiducial reference measurements are explored. Statistical metrics are provided within the Notebooks to estimate the suitability of the various sites for validation at satellite pixel size. This tool is mainly developed to foster the usage of this invaluable ensemble of in-situ reference data for satellite calibration and validation purposes.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Copernicus Ground-Based Observation for Validation (GBOV): Production of Albedo and Top Of Canopy Reflectance for EO Data Cal/Val

Authors: Jean-sébastien Carrière, Christophe Lerebourg, Rémi Grousset, Simon Nativel, Gabriele Bai, PhD Jan Peter Muller, PhD Rui Song, PhD Marco Clerici, PhD Nadine Gobron
Affiliations: ACRI-ST, University College London, European Commission Joint Research Centre
The Ground-Based Observations for Validation (GBOV) project, funded by the European Commission, is developing a pre-operational infrastructure for validation of EO land products using a worldwide network of ground-based measurements. The EO products of particular interest are those from the Copernicus Global Land Service (CGLS) covering biophysical and radiation variables. GBOV is led by the Joint Research Centre under Framework Contract number 941837. This abstract focuses on the energy/radiation aspects of GBOV, with the production of the so-called Land Products (LPs) such as black-sky (DHR - Directional Hemispherical Reflectance) and white-sky (BHR - Bi-Hemispherical Reflectance) albedos, and Top-Of-Canopy Reflectance (TOC-R), both derived from in-situ Reference Measurements (RM). The main purpose of the GBOV service is the implementation of upscaling procedures on ground measurements to generate Land Products (LPs) comparable to the CGLS product grid. These upscaling procedures use high-resolution EO data such as Sentinel-2 imagery extracted from the Sentinel-2 Global Mosaic (S2GM), with the so-called SIAC (Sensor Invariant Atmospheric Correction) atmospheric processor. The S2GM service is also a component of the Copernicus Global Land Service, providing Sentinel-2 surface reflectance mosaics. The RM products consist of the downward and upward shortwave radiative fluxes, as well as the diffuse radiation. The shortwave radiation for validation purposes is measured by albedometers at the top of ground-based flux towers. The first Land Product (LP1, TOC-R) represents the normalised top-of-canopy reflectance at 10:00 am local time at a nadir view angle, and we propose a new approach for the validation of TOC-R, which consists of computing BRDF parameters using tower data and then creating TOC-R using the semi-empirical linear model of Roujean (1992).
The second Land Product (LP2, BHR and DHR) represents theoretical cases of uniform illumination from all angles (BHR) and illumination from a single infinitesimally small point source (DHR). They are indirectly compared using a compromise value called the blue-sky albedo: BHR can be approximated by the blue-sky albedo under certain conditions, such as under thick unbroken stratus clouds, just before sunrise, and just after sundown, while DHR is well approximated under totally cloud-free conditions with very low aerosol at local solar noon. Upscaling the in-situ measurements of BHR, DHR and TOC-R is particularly necessary over heterogeneous surfaces if they are to be compared to CGLS products, whose pixel resolution is 1/112°. The process is the same for each LP and consists of two major steps. Firstly, the LP estimated from the tower-derived BRDF is compared with the LP obtained from high-resolution EO data (S2GM) using the same solar and view angles. Then, a calibration factor is used to upscale the LP from the high-resolution EO scale to the coarse CGLS resolution. Twenty tower sites were selected for the first phase of the GBOV project. As of 2023, energy LPs have been produced for 36 sites across six continents: Europe (10 sites), North America (11 sites), South America (1 site), South Africa (2 sites), Australia (10 sites), and Antarctica (2 sites, at the South Pole). The sites span a variety of geographic and climatic conditions, measurement start dates, land cover types, and other characteristics.
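The blue-sky albedo used above for the indirect comparison of BHR and DHR is conventionally expressed as a diffuse-fraction-weighted combination of the two quantities. A minimal sketch of that relation (the function name and example numbers are illustrative, not GBOV values):

```python
def blue_sky_albedo(dhr: float, bhr: float, diffuse_fraction: float) -> float:
    """Blue-sky albedo as a combination of the black-sky (DHR) and
    white-sky (BHR) albedos, weighted by the diffuse fraction of the
    incoming shortwave radiation."""
    if not 0.0 <= diffuse_fraction <= 1.0:
        raise ValueError("diffuse_fraction must lie in [0, 1]")
    return (1.0 - diffuse_fraction) * dhr + diffuse_fraction * bhr

# Limiting cases match the abstract: fully diffuse illumination
# (thick unbroken stratus) recovers BHR, while purely direct
# illumination (cloud-free, low aerosol, local solar noon) recovers DHR.
```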
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: The CEOS Land Product Validation Subgroup: enhancing trust in satellite-derived global land products

Authors: Fabrizio Niro, Michael Cosh, Jaime Nickeson
Affiliations: Serco For ESA/ESRIN, U.S. Department of Agriculture, SSAI/NASA
To ensure the widespread and effective application of satellite EO data by users and policymakers, it is crucial to validate these data rigorously to build confidence and trust in them. Ideally, satellite EO data should be accompanied by a quantitative and standardized assessment of their quality and suitability for intended applications. This criterion is often unmet, however, because the accuracy standards and methodologies used for assessment vary greatly across EO missions. This lack of standardization limits our ability to effectively monitor Earth system processes, despite the ever-increasing amount of satellite EO data at our disposal [1]. To address these limitations, the Land Product Validation (LPV) subgroup of the Committee on Earth Observation Satellites Working Group on Calibration and Validation (CEOS-WGCV) has for many years been developing and promoting standardized best practices and a common set of reference data for validating and intercomparing satellite-derived land products. The main focus of the LPV subgroup has been the terrestrial Essential Climate Variables (ECVs), as defined by the Global Climate Observing System (GCOS), notably the variables related to snow cover, surface radiation, land cover, vegetation biophysical indicators, fire, land surface temperature, biomass and soil moisture. The LPV domain has recently expanded to include two new variables: Evapotranspiration (ET) and Gross/Net Primary Productivity (GPP/NPP). These variables are increasingly important, with ET playing a critical role in agricultural applications and GPP/NPP contributing significantly to climate change monitoring and adaptation efforts. This abstract introduces the CEOS-LPV subgroup's mission and its critical contributions to the EO community.
Key recent achievements include the preparation and publication of an updated best practice protocol for the land cover variable, the first major revision since 2006, and ongoing updates to the LAI, fAPAR, and Burned Area protocols. Establishing and promoting these community best practices is critical to ensuring satellite EO land products are trusted and effectively applied across a wide range of scientific and societal domains. [1] Niro, F., M. Cosh, and J. Nickeson (2024), Trustworthy satellite Earth observations for science and society, Eos, 105, https://doi.org/10.1029/2024EO240055.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Automatic time series measurements of direct and diffuse radiation with very high spectral resolution

Authors: Marc Krause, Tommaso Julitta, Andreas Burkart, Prof. Franziska Schaube
Affiliations: Jb Hyperspectral Devices Gmbh
The distribution of light inside the complex architecture of vegetation canopies is an important driver of photosynthesis and also significantly affects the amount of reflected radiation. Reconstructing the path of light inside a plant canopy is therefore the aim of countless studies. A major driver of light availability, however, is its spectral composition. Automated spectrometers nowadays give an excellent understanding of how light quality changes over the day, and long time series over many different canopies are available. These data comprise the total irradiance and reflected radiance, but do not differentiate between direct and diffuse radiation. We identified this as a highly relevant trait, and with the presented work we address the unmixing of hyperspectral irradiance into its two components, direct and diffuse radiation. A prototype of a fully automated shading unit has been developed and built that intermittently shades the cosine receptor of a “FloX” hyperspectral measurement device by intercepting the sun's path based on GPS time and position information. Using this system in the field, we were able to collect time series of hyperspectral measurements with very high spectral resolution in the O2A and O2B bands, which are important for the retrieval of Sun-Induced Fluorescence (SIF). Analysing the spectral data, we show significant differences in the spectral quality of direct versus diffuse light, depending on cloud cover and time of day. By providing such data for prolonged time series and different installations around the globe, we will contribute valuable input to canopy radiative transfer models and improve models of atmospheric light transfer.
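With an intermittently shaded cosine receptor as described above, the unmixing reduces, per wavelength, to subtracting the shaded (diffuse-only) spectrum from the unshaded (total) one. A minimal sketch under that assumption (function name illustrative; a real processing chain would also interpolate the two measurements to a common acquisition time):

```python
def split_direct_diffuse(total, diffuse):
    """Per-wavelength separation of hyperspectral irradiance into its
    direct component, given an unshaded (total) and a shaded
    (diffuse-only) measurement on the same wavelength grid."""
    if len(total) != len(diffuse):
        raise ValueError("spectra must share the same wavelength grid")
    direct = [t - d for t, d in zip(total, diffuse)]
    if any(v < 0 for v in direct):
        raise ValueError("diffuse exceeds total: check shading timing")
    return direct
```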
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Status of PICSCAR CEOS/IVOS initiative

Authors: Béatrice Berthelot, Patrice Henry
Affiliations: Magellium, CNES
The PICSCAR Initiative encompasses a range of activities, with the primary focus on improving the characterization of Pseudo-Invariant Calibration Sites (PICS). PICS play a critical role in evaluating the long-term stability of satellite instruments and enabling inter-comparison between multiple instruments. A comprehensive database has been created, incorporating the latest collections from a variety of sensors, including MODIS Aqua/Terra, PARASOL, Sentinel-2 (A/B) MSI, Landsat 8/9, SLSTR (A/B), OLCI (A/B), MERIS, PROBA-V, and VEGETATION-1/2. This database covers the six CEOS-endorsed PICS sites and is used to assess the stability of these sites over time. To achieve this goal, an initial analysis of cloud detection quality has been conducted. This analysis serves as a foundation for monitoring site stability, which is further addressed through the characterization of the PICS Bidirectional Reflectance Distribution Function (BRDF) using PARASOL products. PICS BRDFs fitted to this data collection will be presented.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Massive radiometric cross-comparison of the Sentinel and Landsat families over PICS using the SADE/MUSCLE system

Authors: Camille Desjardins, Damien Rodat, Aimé Meygret
Affiliations: Cnes
ESA, through the Copernicus program, and NASA, through the Landsat program, provide unparalleled opportunities to study Earth's surface using spatial datasets that offer global coverage and extend over decades. Sentinel-2 is an optical imaging mission dedicated to the operational monitoring of land and coastal areas, offering global coverage with a wide field of view (290 km), a high revisit frequency (5 days with two satellites), high resolution (10 m, 20 m, and 60 m), and multi-spectral imagery across 13 spectral bands in the visible and shortwave infrared domains. The Sentinel-3 mission focuses on sea surface topography, surface temperature, and colour, and supports ocean forecasting, environmental monitoring, and climate studies. Its OLCI (Ocean and Land Colour Imager) and SLSTR (Sea and Land Surface Temperature Radiometer) instruments ensure continuity with Envisat's MERIS and AATSR, capturing data in multiple spectral bands with resolutions ranging from 300 m to 1 km. The Landsat program provides moderate-resolution multispectral imagery. Enhanced capabilities, such as the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS) on Landsat 8 and 9, have been critical for monitoring Earth's surface changes. Although all these missions include onboard calibration systems, such as sun diffusers and black bodies for radiometric consistency, vicarious calibration methods remain essential. These methods leverage natural targets, including Rayleigh scattering, sunglint, desert sites, and instrumented sites such as RadCalNet, to validate radiometry, ensure cross-sensor consistency, and maintain data interoperability. Cross-calibration between sensors, particularly through Pseudo-Invariant Calibration Sites (PICS), is vital for achieving consistency and comparability of data over time and across instruments. Twenty-five years ago, the Centre National d’Études Spatiales (CNES) developed the SADE/MUSCLE system to address these calibration needs.
SADE, a database that now contains over 33 million measurements from 49 sensors, systematically stores and manages calibration data, while MUSCLE facilitates operational vicarious calibration and cross-comparison. The desert PICS method, which leverages the temporal stability and spatial homogeneity of selected sites, supports both temporal stability validation and cross-calibration across missions. This approach also incorporates corrections for atmospheric variability and spectral response differences, ensuring consistency across sensors. CNES was a pioneer in the use of Pseudo-Invariant Calibration Sites, selecting 20 sites in North Africa and the Middle East in the 1990s for their temporal stability and spatial homogeneity. Among these sites, six were later endorsed by CEOS (the Committee on Earth Observation Satellites): Algeria 3 & 5, Libya 1 & 4, and Mauritania 1 & 2. Over the last two decades, CNES has used the SADE/MUSCLE calibration system to calibrate, validate, and/or cross-compare a wide range of optical imagers, including SPOT 1–7, MERIS, Pleiades, Pleiades-Neo, POLDER, VENµS, PRISMA, EnMAP, Sentinel-2 A/B/C MSI, Sentinel-3 A/B OLCI & SLSTR, and Landsat 7, 8, and 9. This paper presents a large-scale cross-calibration effort involving the Sentinel-2A, -2B, -2C, Sentinel-3A, -3B, and Landsat-7, -8, and -9 sensors over reflective bands using PICS. The results provide a detailed assessment of absolute radiometric differences between the sensors within these families and enable the evaluation of their temporal drifts. Quantitative recommendations for gain adjustments and long-term drift corrections are proposed to support the sustained alignment of sensors and enhance the interoperability of radiometric data. This work underscores the critical role of vicarious calibration in achieving the interoperability and harmonization of radiometric measurements across missions.
Moreover, it highlights the importance of this approach in enabling the combination of data from past and present sensors to ensure reliable multi-temporal studies, which are particularly valuable for climate research. These efforts are essential for delivering high-quality radiometric measurements and consistent time series that support diverse scientific, operational, and environmental applications, reinforcing the foundation of global environmental monitoring.
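The temporal-drift evaluation described above can be illustrated, in its simplest form, as a least-squares fit of a sensor-to-reference reflectance ratio against time over a PICS. This is only a generic sketch, not the SADE/MUSCLE implementation:

```python
def fit_drift(times_years, ratios):
    """Ordinary least-squares linear fit of a sensor/reference
    reflectance ratio against time over a PICS: the slope estimates
    radiometric drift (fractional change per year) and the intercept
    the overall inter-sensor bias."""
    n = len(times_years)
    mt = sum(times_years) / n
    mr = sum(ratios) / n
    sxx = sum((t - mt) ** 2 for t in times_years)
    sxy = sum((t - mt) * (r - mr) for t, r in zip(times_years, ratios))
    slope = sxy / sxx
    intercept = mr - slope * mt
    return slope, intercept
```

A drift of 1% per year over a perfectly stable site would, for example, appear as a slope of 0.01 in the fitted ratio series.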
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Straylight and degradation in long-term trends in the TROPOMI L1 radiance signal

Authors: Emiel van der Plas, Dr Erwin Loots, Mirna van Hoek, Edward van Amelrooy, Nico Rozemeijer, Robert van Versendaal, Antje
Affiliations: KNMI
The TROPOMI instrument on board Sentinel-5P has been measuring radiance and irradiance data on an operational schedule since April 2018. The instrument suffers from degradation in various parts of the light path. This degradation has been divided into several contributions attributed to different parts of the instrument. Some contributions amount to an increase in the signal, which can be caused by degradation but also by an increase in straylight. The radiance monitor data have been studied to assess how different straylight mitigation approaches influence the L1b data.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Quantifying The Spatial Variability of LANDHYPERNETS Sites Using PlanetScope Data

Authors: Morven Sinclair, Pieter de Vis, Kevin Ruddick, Mirco Boschetti, Vittorio Brando, Ana Dogliotti, Estefania Piegari, Joel Kuusk, Harry Morris, Agnieszka Bialek
Affiliations: National Physical Laboratory, Royal Belgian Institute of Natural Sciences, Consiglio Nazionale delle Ricerche, Instituto de Astronomía y Física del Espacio, Consejo Nacional de Investigaciones Científicas y Técnicas, University of Tartu
To assess the performance of surface reflectance products over time, post-launch validation of the satellite against ground measurements is needed. Traditionally, this has been performed using handheld spectrometers or automated instruments such as those deployed in the Radiometric Calibration Network (RadCalNet). However, these validation activities are often limited to optimal locations with homogeneous surfaces and clear, stable atmospheric conditions, neglecting complex surfaces such as forests and croplands. The HYPERNETS validation network comprises a series of water and land sites and was designed to fill a gap in the validation of surface reflectance products by collecting automated, multiangular, hyperspectral data covering multiple surface types, including agricultural land, deciduous forests, snow and ice, and deserts. The LANDHYPERNET network utilises the HYPSTAR®-XR instrument, which has both a VNIR and a SWIR sensor. It provides VNIR data over a wavelength range of 380–1000 nm with 0.5 nm sampling and 3 nm resolution, and SWIR data from 1000–1680 nm with 3 nm sampling and 10 nm resolution. All data are processed using the HYPERNETS_PROCESSOR, and data products are available for distribution through the LANDHYPERNETS data portal (www.landhypernet.org.uk). To enable a comprehensive validation of decametric and hectometric satellite products, it is important to consider additional sources of uncertainty associated with the different spectral bands of the sensors, the spatial heterogeneity of the site at the satellite spatial resolution, and the influence of temporal and angular components. Vegetated sites have a seasonal profile associated with the different growth stages of the vegetation, which has an impact on the spatial homogeneity of the site. This poster outlines several techniques, including the use of airborne hyperspectral data, PlanetScope data and RGB images, to account for the spatial variability of the sites at several spatial scales.
This analysis is performed at multiple LANDHYPERNET sites covering forest, desert, grassland, and peatland land covers. The spatial homogeneity index (SHI) and spatial representativeness index (SRI) are used to quantify the variability at different satellite pixel extraction scales. A database of recommended matchup locations and associated spatial uncertainties is generated for each site. Additionally, recommendations are given for how to incorporate the spatial uncertainty of the site into the overall uncertainty budget of the satellite matchup.
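The exact SHI/SRI formulations are not given in the abstract; as a generic illustration of the idea, a simple homogeneity measure for a candidate pixel-extraction window is the coefficient of variation of the reflectance values within it (lower means more homogeneous):

```python
from statistics import mean, pstdev

def homogeneity_index(pixels):
    """Coefficient of variation (population std / mean) of surface
    reflectance over a candidate extraction window; a simple,
    illustrative stand-in for a spatial homogeneity index."""
    m = mean(pixels)
    if m == 0:
        raise ValueError("mean reflectance is zero")
    return pstdev(pixels) / m
```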
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Using Sentinel-2 and Landsat-8 matchups with RadCalNet to define protocols minimising comparison biases

Authors: Ashley Ramsay, Agnieszka Bialek, Pieter de Vis, Morven Sinclair, Marc Bouvet
Affiliations: National Physical Laboratory, European Space Agency
Satellite-borne imagers have found wide and varied use across disciplines, from the detection and monitoring of fires to the mapping of human land use. With such a portfolio of applications, it is important to have high confidence in the data these imagers provide. Regular comparisons are an essential component of any effort to provide consistent, validated data through the monitoring of sensor performance. Such comparisons can be accomplished satellite-to-satellite or satellite-to-in situ. In the case of satellite-to-in situ comparisons, regular laboratory calibrations and characterizations can be performed to give a clear picture of the in situ instrument performance with well-quantified uncertainties. The Radiometric Calibration Network (RadCalNet) is a provider of radiometrically calibrated, SI-traceable data for defined areas, provided both at Bottom of Atmosphere (BOA) and Top of Atmosphere (TOA), at nadir, over the visible and shortwave infrared spectral range. Specifically, RadCalNet products provide data at 10 nm intervals, covering a spectral range from 400 nm to 2500 nm. In addition, RadCalNet data include uncertainty statements derived by the site owners and approved by the RadCalNet Working Group. However, a rigid framework of calibration/validation procedures for satellite-to-RadCalNet comparisons has not yet been established. Comparisons require the spectral convolution of hyperspectral RadCalNet data with the Spectral Response Function (SRF) of the satellite sensor. Additionally, differences may be introduced by the time lag between satellite and in situ acquisitions, the Bi-Directional Reflectance Distribution Function (BRDF), occluding cloud, and other discrepancies in the satellite data. Each RadCalNet site also has a Region of Interest (ROI) defined by the site owner; this represents the area for which the ground reflectance is representative of the RadCalNet product.
In order to quantify these effects, a search has been conducted for occasions where Sentinel-2 or Landsat-8 overpasses one of the RadCalNet sites within 15 minutes of an acquisition, and a comparison has been performed. The aim of this study is to identify the practices that will minimise the bias in satellite-to-RadCalNet comparisons. Once these are applied, the variation between Sentinel-2, Landsat-8, and RadCalNet can be taken as a measure of the effectiveness of the procedures.
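The spectral convolution step mentioned above, reducing a hyperspectral RadCalNet spectrum to a single satellite band value via the sensor's SRF, can be sketched as an SRF-weighted band average (trapezoidal integration on an assumed common wavelength grid; names illustrative):

```python
def band_convolve(wavelengths, reflectance, srf):
    """SRF-weighted band average of a hyperspectral reflectance
    spectrum: integral of reflectance * SRF divided by the integral of
    the SRF, both approximated by the trapezoidal rule."""
    num = 0.0
    den = 0.0
    for i in range(len(wavelengths) - 1):
        dw = wavelengths[i + 1] - wavelengths[i]
        num += 0.5 * dw * (reflectance[i] * srf[i]
                           + reflectance[i + 1] * srf[i + 1])
        den += 0.5 * dw * (srf[i] + srf[i + 1])
    return num / den
```

A spectrally flat target should return its own reflectance regardless of the band shape, which is a useful sanity check on any implementation.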
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Merging Multiple Analyses of SLSTR Vis-SWIR Vicarious Calibration Results

Authors: Dave Smith, Maciej Neneman, Ali Mousivand, Bahjat Alhammoud, Sebastien Clerc, Camile Desjardins, Steffen
Affiliations: RAL Space
The Sentinel-3 SLSTR instrument is primarily designed to measure Sea and Land Surface Temperatures for meteorological and climate research applications. As well as spectral bands in the thermal infrared for the measurement of surface temperatures, the radiometer is equipped with channels in the visible to short-wavelength infrared (VIS-SWIR) range, primarily for daytime cloud screening and scene classification. However, the VIS-SWIR channels are also used in retrievals of Fire Radiative Power (FRP) and in land applications when used in synergy with OLCI instrument data. Furthermore, combined with the dual-view capability of the radiometer, the data from the VIS-SWIR channels are used for measurements of aerosol and cloud properties not possible with a single-view instrument. The use of the VIS-SWIR channels in Sentinel-3 level-2 data products demands that they be radiometrically calibrated to standards traceable to SI. Demonstrating direct traceability to SI is not so straightforward, as the L1 processing has many inputs and several assumptions are made about the instrument model. Analysis of the radiometric model used in the L1 processing shows that several key factors affect the radiometric performance, for instance:

  • Calibration of the diffuser-based calibrator
  • Long-term stability of the calibrator
  • Response non-linearity
  • Ground-to-orbit changes, e.g. differences between the pre-launch test conditions and flight operations

Analysis has been performed by several groups to assess the on-orbit performance of the VIS-SWIR channels using Pseudo Invariant Calibration Sites (PICS). An earlier analysis performed by RAL for the ESA-funded Sentinel-3 Mission Performance Centre (MPC) established calibration corrections for users to apply to the level-1b data products [1].
Since then, further MPC activities have continued to monitor the calibration performance: RAL using comparisons to measurements from historical sensors, ACRI-ST using the DIMITRI toolkit, EUMETSAT using the MICMICS intercalibration and calibration (radiative transfer) modules, and CNES using the SADE database and associated tools, each providing independent assessments. Although each group uses the same reference sites and input Level-1 data, differences in the analysis methods, in particular the reference sensor used, can result in small differences in the results. A working group has been set up to compare the results of the different methods, with a view to providing updated coefficients for the absolute radiometric calibration and the long-term stability of the calibration. A key challenge is to define a suitable common reference that covers the full spectral range of SLSTR. References: 1. Sentinel-3 SLSTR VIS and SWIR Channel Vicarious Calibration Adjustments - Sentinel-3 Mission Performance Centre
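One generic way to merge such independent vicarious-calibration estimates, not necessarily the method this working group will adopt, is an inverse-variance weighted mean with a combined standard uncertainty:

```python
def merge_gains(estimates, uncertainties):
    """Inverse-variance weighted mean of independent calibration-gain
    estimates (one per analysis group), returning the merged gain and
    its combined standard uncertainty. Assumes uncorrelated estimates."""
    weights = [1.0 / (u * u) for u in uncertainties]
    total = sum(weights)
    merged = sum(w * g for w, g in zip(weights, estimates)) / total
    return merged, (1.0 / total) ** 0.5

# e.g. merge_gains([1.02, 1.00, 1.01], [0.01, 0.02, 0.01])
# weights the more precise estimates more heavily.
```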
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Radiance intercomparison of Sentinel 2 and Landsat satellites at a global scale.

Authors: Francesco Bove, Javier Gorroño, Luis Guanter, Ana Vidal-Pantaleoni, Marco Manzoni
Affiliations: Universitat Politècnica de València, Politecnico di Milano
The Copernicus Sentinel-2 mission from the European Commission has been fully operational since June 2017. It carries an optical imaging sensor named MSI (Multi-Spectral Instrument). The mission acquires optical data products in the visible/near-infrared (VNIR) and short-wave infrared (SWIR) spectral range with a spatial resolution of 10 to 60 m and a 5-day revisit. The Landsat mission from the United States Geological Survey (USGS)/National Aeronautics and Space Administration (NASA) also offers optical images in the same spectral range, with a spatial resolution of 30 m and an 8-day revisit. Data from both missions are applied to many applications, including land monitoring, emergency management, and security. The similarity between the missions facilitates their combination in a large set of applications with an enhanced temporal revisit. However, this requires that the calibration of both missions is aligned and that biases are corrected. This contribution proposes an intercalibration methodology at a global scale and over the entire radiance range between the Sentinel-2 and Landsat missions, with the expectation that this leads to a better harmonisation of their products. The methodology starts by defining a set of match-ups, i.e. common Earth areas observed by the different satellites within a certain time delay, using the Google Earth Engine archive. The initial search is relaxed (e.g., the sun zenith angle is not filtered) in order to maximise the number of opportunities. In a next step, we focus on filtering those results by analysing potential parameters that define a selection of the “best” match-ups. These are defined by either a low radiometric error between satellites or a convergence of those errors over different samples (i.e. low error correlation). The parameters include the sun/view angular configuration and the temporal distance.
For example, the former might need to be restricted for angular configurations close to the hot spot, or for very high solar zenith angles where the signal and anisotropy largely impact any angular difference between the missions. The latter drives a variation of the solar angle as well as a variation of the scene itself, mainly due to cloud dynamics. The result of this task is a reduced number of match-ups from which an initial intercomparison can be extracted. This approach facilitates the comparison of the full radiance curve and potentially other parameters (e.g. latitude dependence) by clustering the match-ups. These initial results are further refined with additional criteria, and the comparison can be enhanced if systematic corrections between the satellites are considered. One of the most important effects is the difference between spectral response functions: small differences in the bandpass between missions can result in an error that depends on the scene.
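The match-up screening described above can be sketched as simple threshold filters on the candidate parameters; the thresholds and field names below are illustrative placeholders, not the study's actual values:

```python
def filter_matchups(matchups, max_dt_minutes=30.0, max_sza_deg=70.0):
    """Screen candidate Sentinel-2/Landsat match-ups, keeping pairs
    with a small acquisition-time difference and a moderate solar
    zenith angle (thresholds are illustrative only)."""
    return [
        m for m in matchups
        if abs(m["dt_minutes"]) <= max_dt_minutes
        and m["sza_deg"] <= max_sza_deg
    ]
```

In practice, further filters (hot-spot proximity, cloud dynamics, error correlation across samples) would be chained in the same way.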
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Copernicus Sentinel-2 Radiometric Calibration Status From The Optical Mission Performance Cluster - Focus On Improvements Of Sun-Diffuser Radiometric Calibrations

Authors: Bruno Lafrance, Emmanuel Hillairet, Jonathan Guinet, Sebastien Clerc, Silvia Enache, Rosalinda Morrone, Valentina Boccia
Affiliations: CS Group, ACRI-ST, Starion Group, European Space Agency - ESRIN
As part of the Copernicus programme of the European Commission, the European Space Agency (ESA) has been operating the Sentinel-2 missions since 2015, with the first platform Sentinel-2A launched in June 2015, Sentinel-2B launched in March 2017, and most recently Sentinel-2C launched in September 2024. The Multispectral Instruments (MSI) measure the Earth's reflected radiance in 13 spectral bands from VNIR to SWIR, with a high spatial resolution from 10 to 60 metres, covering a large swath width. The radiometric consistency of the image time series is ensured by specific performance requirements, such as radiometric stability, routinely monitored by the Sentinel-2 Optical Mission Performance Cluster (Sentinel-2 OPT-MPC). This presentation provides a description of the radiometric calibration activities and expertise carried out by the OPT-MPC. The radiometric calibration relies on images acquired over the ocean at night (for dark signal calibration) and on the on-board Sun diffuser images (for relative and absolute gain calibration). The on-board Sun diffuser is a full-field/full-pupil diffuser called the Calibration and Shutter Mechanism (CSM), which reflects sunlight to the sensor. The homogeneous illumination covering the whole field of view allows a fine estimate of the equalization coefficients for each pixel, per detector and band. Combined with the simulation of the sunlight entering the sensor after reflection on the diffuser, these images can also be used to update the absolute gains for each spectral band. The temporal evolution of the absolute gains is monitored with the monthly Sun-diffuser acquisitions. Since the beginning of 2022, the absolute calibration coefficients for the VNIR bands of MSI-A and MSI-B have shown a slight regular increase of about 1%. This phenomenon is likely related to erosion of the black coatings of the diffuser due to atomic oxygen (ATOX), triggered by strong solar activity.
The consequence would be an increase of the stray light in calibration mode. We will present the monitoring of the absolute gains and how OPT-MPC manages this evolution to provide reflectance measurements that are as stable as possible. For the SWIR bands (B10 and B11), there is a natural decreasing trend, currently up to −1.0% per year for MSI-A and MSI-B. This loss of sensitivity is faster for MSI-C, as might be expected after its recent launch. The monthly updates of these coefficients correctly compensate for this trend in order to guarantee the supply of well-calibrated reflectances. Moreover, the MSI decontamination operations (every year for MSI-A and MSI-B) allow recovery of the sensor sensitivity, avoiding a loss of Signal-to-Noise Ratio (SNR) for the SWIR bands. The equalization coefficients are also updated each time a Sun-diffuser image is acquired. The variation of the relative gains is weak between two successive monthly assessments, with maximal variations not exceeding 0.2 to 0.4% for the VNIR bands, and up to 3% for some SWIR pixels. However, OPT-MPC has noticed slight seasonal variations of the equalization coefficients over a full yearly cycle, related to slight defects in the Sun-diffuser BRDF characterization. The quantification of these defects from the time series of MSI-A and MSI-B acquisitions is a difficult task, as it is not possible to distinguish between real changes in the sensor response and artefacts in the Sun-diffuser BRDF. For Sentinel-2C, a special yaw manoeuvre was performed during the commissioning phase to obtain 6 successive Sun-diffuser acquisitions covering the range of incident Sun angles of a full yearly cycle. These acquisitions are a great opportunity to accurately estimate the slight defects in the Sun-diffuser BRDF characterized on-ground during the manufacturing of this platform.
Indeed, the sensor response can reasonably be considered stable across these successive acquisitions, and therefore the differences in the relative and absolute gain coefficients estimated from them highlight only the defects in the Sun-diffuser BRDF characterization. We will present how these defects are estimated and managed to improve the Sentinel-2C Sun-diffuser BRDF, and hence the accuracy of the relative and absolute gain coefficients. The latest radiometric calibration results highlight the good quality of the mission products in terms of radiometry. Among these results, the SNR, a key parameter for assessing pixel health, is well within the requirement for all bands and appears to be very stable. The current performances of the three MSI sensors compared with the mission product quality requirements will be presented.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Developing Improved Sentinel-2 LAI and FAPAR Products Through Machine Learning-Based Calibration with Fiducial Reference Measurements: the GROUNDED EO Project

Authors: Luke Brown, Richard Fernandes, Jochem Verrelst, Harry Morris, Najib Djamai, Rosalinda Morrone, Philippe Goryl, Stephen Plummer
Affiliations: University of Salford, Canada Centre for Remote Sensing, Universitat de València, National Physical Laboratory, Starion for European Space Agency, European Space Agency
Estimates of vegetation biophysical variables such as leaf area index (LAI) and the fraction of absorbed photosynthetically active radiation (FAPAR) are essential for effective agricultural and forest monitoring/management. They also represent a crucial input into models of crop yield, carbon exchange, and the weather and climate systems. Exploiting optical Earth Observation (EO), algorithms including the Sentinel-2 Level 2 Prototype Processor (SL2P) provide routine decametric (10 m to 100 m) retrievals, but are subject to known biases due to assumptions embedded within the radiative transfer models used in their calibration. To overcome these limitations, retrieval approaches might be directly calibrated with real EO data and contemporaneous ground reference observations. Historically, however, such ground reference observations (typically obtained through one-off field campaigns) have been limited in quantity, and have suffered from inconsistencies and unquantified measurement uncertainties. In recent years, advances in novel automated field instrumentation, standardized processing, and routine data collection by environmental monitoring networks (including the National Ecological Observatory Network (NEON), Integrated Carbon Observation System (ICOS), and Terrestrial Ecosystem Research Network (TERN), amongst others) have led to substantial improvements in ground reference data spatiotemporal coverage and consistency. Through work under the Fiducial Reference Measurements for Vegetation (FRM4VEG) programme, methods to quantify ground reference measurement uncertainties are now also available. In parallel, cutting-edge machine learning approaches such as Gaussian processes have dramatically reduced the number of required calibration samples (i.e. hundreds as opposed to hundreds of thousands) whilst allowing explicit treatment of measurement uncertainties. 
As a result of these advances, for the first time, the development of operational LAI and FAPAR retrieval approaches directly calibrated with fiducial reference measurements is a realistic possibility. Funded by the European Space Agency’s Living Planet Fellowship programme, the Ground Reference Observations Underlying Novel Decametric Vegetation Data Products from Earth Observation (GROUNDED EO) project (https://eo4society.esa.int/projects/grounded-eo/) is adopting such a strategy. Within the GROUNDED EO project, a harmonized fiducial reference database was first established, consisting of 16,087 fiducial reference measurements covering 81 NEON, ICOS and TERN sites between 2013 and 2022 (likely the most extensive to date). To avoid biases inherent to many existing ground reference databases, particular attention was paid to the influence of woody material and understory vegetation over forest sites. Fiducial reference measurements were paired with contemporaneous (± 1 day) cloud-free Sentinel-2 L2A observations, yielding 4,167 matchups. Active learning approaches were then adopted to select the most informative samples, which were used to calibrate Gaussian processes for retrieval. When validated on 3,226 fiducial reference measurements not used in calibration (using a leave-site-out approach to ensure performance statistics were not unfairly advantaged), GROUNDED EO retrievals outperformed SL2P overall (RMSD = 0.96 vs. 1.07 for LAI and 0.15 vs. 0.18 for FAPAR), with substantial reductions in bias (-0.06 vs. -0.23 for LAI and 0.00 vs. -0.04 for FAPAR) and an increased proportion of retrievals meeting Sentinels for Science (SEN4SCI) user requirements (74% vs. 69% for LAI and 69% vs. 53% for FAPAR). Crucially, GROUNDED EO retrievals are accompanied by per-pixel uncertainties.
Public release of the harmonized fiducial reference database and algorithm is planned, and the potential for integrating the algorithm within the Sentinels Application Platform (SNAP) is now being explored to maximize user uptake.
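As an illustration of the calibration strategy described above, a Gaussian process regressor can be trained on a small set of reflectance–LAI matchups and validated with a leave-site-out split. The sketch below uses entirely synthetic data and scikit-learn; it is a hypothetical toy, not the GROUNDED EO implementation.

```python
# Hedged sketch: Gaussian process retrieval calibrated on synthetic
# (reflectance, LAI) matchups, validated leave-site-out. All data are fake.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
sites = np.repeat(np.arange(5), 40)            # 5 sites, 40 matchups each
X = rng.uniform(0.0, 0.6, size=(200, 4))       # 4 surface-reflectance bands
lai_true = 6.0 * X[:, 3] / (X[:, 1] + 0.5)     # toy LAI signal
y = lai_true + rng.normal(0.0, 0.2, size=200)  # ground reference + noise

rmsds = []
for held_out in np.unique(sites):              # leave-site-out validation
    train = sites != held_out
    test = sites == held_out
    gpr = GaussianProcessRegressor(
        kernel=RBF(length_scale=0.2) + WhiteKernel(noise_level=0.04),
        normalize_y=True,
    ).fit(X[train], y[train])
    # return_std gives a per-retrieval predictive uncertainty
    pred, std = gpr.predict(X[test], return_std=True)
    rmsds.append(float(np.sqrt(np.mean((pred - y[test]) ** 2))))

print(f"leave-site-out RMSD: {np.mean(rmsds):.2f} LAI units")
```

Holding out each site in turn mimics the paper's concern that validation statistics not be advantaged by training and testing on the same location.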
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: ARTEMIS, Space imagery Geometry Software for in-flight Calibration and Accuracy Monitoring

Authors: Nicolas Champdavoine, Arnaud Kelbert, Sylvia Sylvander, Benjamin Lavie
Affiliations: Magellium, CNES
ARTEMIS is a highly modular CNES software package designed to be multi-mission while handling sensor-specific characteristics, to use geometric reference libraries without depending on any particular one, and to run on multiple platforms, from a single PC to distributed computing nodes on HPC clusters or in the cloud. It was first developed to meet the requirements of the CNES/ADS CO3D mission, and is intended for reuse across other CNES missions. The software is designed for the Image Quality expertise centres of CNES missions, for in-flight geometric calibration/validation (e.g. calibration of the focal plane) and for monitoring mission accuracy over the mission lifetime. It is currently being used by CNES geometry experts for Sentinel-2C geometric calibration and validation activities. The main steps of the focal-plane calibration are: resampling into a common geometry; searching for matching points between products and reference imagery such as Ground Control Points (GCPs); bundle adjustment, which uses minimization to find an optimized geometric model; verification of the solution by repeating the previous steps; and plotting the final state of the geometric model. All these functionalities build on CNES third-party libraries that are already used in ground segments and for studies. The resampling into a common geometry can be performed in ground or image geometry, and takes sensor resolutions into account to avoid aliasing introduced by the change of resolution. Matching points are currently computed with a dense sub-pixel correlation technique; a feature-matching technique is on the roadmap. The bundle adjustment is highly customizable through the definition of the geometric model, the uncertainties associated with that model, and the filtering of matching points.
Design-wise, ARTEMIS is composed of a high-level orchestrator of modules, which handles the scientific need and creates tasks that perform the actual processing on local or remote computing units. This chaining of algorithms may change from one mission to another and is highly configurable to meet scientific requirements. Modules implemented by users can also be inserted into the workflow definition. The software is written in Python to facilitate additions and modifications by scientific experts, but the underlying libraries may be developed in other languages to improve the performance of low-level functions. Modules exchange data through a shared folder and a YAML file that keeps track of the produced data, allowing scientific experts to inspect all intermediate products manually or programmatically. All modules ship as Python packages so that users can build the chaining they need, modify the baseline code when they have a very specific requirement, or add new modules to extend functionality. Currently supported missions are CO3D, Sentinel-2 and METOP-SG 3MI, but the software is designed to support other sensors through a plugin mechanism and to meet their geometric calibration/validation requirements. The capabilities of ARTEMIS have also led to derivative uses such as the production of analysis-ready multi-mission image stacks and orthorectified images. Publication of ARTEMIS on github.com and pypi.org under an Apache 2.0 licence is on the roadmap for 2025.
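The bundle-adjustment step can be illustrated in miniature: a least-squares solver refines geometric-model parameters (here just a 2-D focal-plane offset, a deliberate simplification) until predicted positions match ground control points. This is a hypothetical sketch with synthetic data, not ARTEMIS code.

```python
# Toy bundle adjustment: estimate an unknown image offset by minimizing
# residuals between measured positions and ground control points (GCPs).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
gcp = rng.uniform(0, 1000, size=(50, 2))        # reference GCP positions
true_shift = np.array([3.2, -1.7])              # unknown focal-plane bias
measured = gcp + true_shift + rng.normal(0, 0.1, size=(50, 2))

def residuals(shift):
    # predicted positions under the current model minus the references
    return (measured - shift - gcp).ravel()

fit = least_squares(residuals, x0=np.zeros(2))
print("estimated shift:", fit.x)                # ≈ true_shift
```

In the real system the "model" would be a full sensor geometry with many parameters and per-point uncertainties, but the minimize-residuals structure is the same.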
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Hyperspectral radiometry on BGC-Argo floats: first steps and challenges towards an FRM status

Authors: Lou Andres, Vincenzo Velluci, Edouard Leymarie, Marine Bretagnon, Antoine Poteau, Antoine Mangin, Emanuele Organelli
Affiliations: ACRI-ST / Laboratoire d'Océanographie de Villefranche (LOV), UMR 7093, CNRS-Sorbonne Université, Sorbonne Université, CNRS, Institut de la Mer de Villefranche, IMEV, 06230, Villefranche-sur-Mer, France, Laboratoire d'Océanographie de Villefranche (LOV), UMR 7093, CNRS-Sorbonne Université, ACRI-ST, National Research Council of Italy (CNR), Institute of Marine Sciences (ISMAR)
The BioGeoChemical-Argo program is progressively implementing a global network of profiling floats carrying biogeochemical and bio-optical sensors, including measurements of downwelling irradiance (Ed) at three bands: 380, 412 and 490 nm. Since 2022, in the context of the ERC-REFINE project, some of these floats have been equipped with ruggedized hyperspectral TriOS-RAMSES radiometers to measure profiles of both Ed and upwelling radiance (Lu). Fourteen of these floats are now operational, providing profiles and surface data in near real time for seven different open-ocean regions representative of the bio-optical diversity of the ocean (Indian Ocean, Arabian Sea, Kerguelen Islands, North Atlantic Ocean, Guinea Dome, Baffin Bay, Labrador Sea, Hawaiian Islands). Originally set to the conventional 10-day Argo resolution, measurements were acquired every 5 days during 4 months (from mid-March to mid-June) to increase the probability of match-ups during the first phase of PACE acquisition. This study provides an evaluation of the potential of BGC-Argo floats to reach Fiducial Reference Measurement (FRM) standards for ocean color radiometry satellite validation. To achieve this, we focus on assessing the different methods to determine hyperspectral remote sensing reflectance (Rrs) from BGC-Argo float data. In particular, we present the current work on TriOS-RAMSES characterization and platform-specific uncertainties (calibration, light extrapolation, dark offset, temperature dependency, tilt, self-shading, etc.). The aim of this study is to obtain an uncertainty for each Rrs measurement obtained from hyperspectral BGC-Argo floats. For this, we follow the method presented in Bialek et al. (2020) for the BOUSSOLE buoy and mainly use characterizations performed on our sensors by the Tartu laboratory. A method for hyperspectral quality control based on Organelli et al. (2016) has also been developed, allowing us to provide first matchups between our quality-controlled in-situ data and ocean-color satellite (PACE and Sentinel-3) data. This study ends with a SWOT analysis and an indication of the gaps that need to be filled to raise the array of BGC-Argo floats measuring hyperspectral reflectance to a fleet of FRM platforms.
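One of the processing steps discussed above, extrapolating the upwelling radiance profile Lu(z) to just below the surface before forming Rrs, can be sketched as follows. The profile values, the air-sea transmission factor and the irradiance are illustrative placeholders, not the project's actual processing or characterization.

```python
# Hedged sketch: log-linear extrapolation of Lu(z) to the surface, then a
# toy remote-sensing reflectance Rrs = Lw / Es. All numbers are synthetic.
import numpy as np

z = np.array([1.0, 2.0, 4.0, 6.0, 8.0])      # depth (m)
lu = 0.05 * np.exp(-0.12 * z)                # toy Lu(z) profile (W m-2 nm-1 sr-1)

# log-linear fit: the slope is -K_Lu, the intercept is ln Lu(0-)
slope, ln_lu0 = np.polyfit(z, np.log(lu), 1)
k_lu = -slope                                # diffuse attenuation coefficient
lu_0minus = np.exp(ln_lu0)                   # Lu just below the surface

# transmit across the air-sea interface (commonly used factor ~0.543)
lw = 0.543 * lu_0minus                       # water-leaving radiance
es = 1.2                                     # above-water irradiance (W m-2 nm-1)
rrs = lw / es                                # remote-sensing reflectance (sr-1)
print(f"K_Lu = {k_lu:.3f} m-1, Rrs = {rrs:.4f} sr-1")
```

The extrapolation step is one of the main uncertainty contributors listed in the abstract (alongside dark offset, tilt and self-shading), since the fit depth range and profile noise propagate directly into Rrs.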
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: The Advanced Surface Temperature Radiometer Network: A Next Generation In Situ Radiometer

Authors: Tim Nightingale, Werenfrid Wimmer, Dr Arrow Lee, Dr Dave Smith, Dr Michelle Hamilton, Ms Ruth Wilson, Darren Ghent
Affiliations: STFC / RAL Space, University of Southampton, Space ConneXions Limited, University of Leicester
Measurements of surface temperature from satellite observations make an essential contribution to long term climate data records. Traceable Fiducial Reference Measurements (FRMs) of surface temperature ensure the long term quality of these satellite data through post-launch validation and, in some cases, recalibration. FRM datasets are fundamental elements of the temperature measurement system and must be globally sampled across a range of surface types, including ocean, inland waters, land and ice, to maximise utility of the satellite data. Relevant current and future surface temperature missions include (A)SLSTR, CIMR and LSTM (ESA / Copernicus), TRISHNA (CNES) and SBG (NASA). CEOS is exploring a radiometer network to validate and calibrate satellite missions. A new generation of autonomous, in situ infrared FRM radiometers is required to enhance and maintain this capability for the next decade. The new Advanced Surface Temperature Radiometer Network (ASTeRN) instrument will generate high quality FRM datasets of level 1 brightness temperatures and level 2 surface temperatures. The instrument is an evolution of existing designs such as SISTeR, ISAR and M-AERI, and will use the same proven measurement approach, but draws on lessons learned and will incorporate improved components and methods. The design includes additional spectral channels for land characterisation and an extended capability for measuring surface and air temperatures. It also addresses obsolescence issues, as well as manufacturability and maintainability. The ASTeRN project is manufacturing a set of prototype radiometers with the capability to measure Sea Surface Temperature (SST), Land Surface Temperature (LST), and Ice Surface Temperature (IST) with high accuracy, precision and traceability to the Système International d'unités (SI units). 
The design is based on a recent study for a next generation SST radiometer, funded by the European Space Agency (ESA) and performed by RAL Space and the University of Southampton (UoS). The University of Leicester (UoL) has provided the LST requirements for the radiometer and the National Physical Laboratory (NPL) will validate the radiometer calibrations against traceable SI standards and verify their uncertainty models.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Accuracy assessment of Sentinel-3 OLCI 300 m LAI, FAPAR and FCover products based on direct comparison with GBOV reference measurements over spatially representative sites

Authors: Fernando Camacho, Enrique Martinez-Sanchez, Dr Jorge Sanchez-Zapero, Aleixandre Verger, Roselyne Lacaze
Affiliations: EOLAB, CSIC, HYGEOS
The Copernicus Land Monitoring Service (CLMS) provides a series of qualified bio-geophysical products on the status and evolution of the land surface. The CLMS portfolio includes the leaf area index (LAI), the fraction of photosynthetically active radiation absorbed by vegetation (FAPAR) and the fraction of vegetation cover (FCOVER) derived from Sentinel-3 OLCI observations at 300 m resolution. For the validation of satellite-based bio-geophysical products, the land product validation (LPV) subgroup of the CEOS working group on calibration and validation (CEOS WGCV) has established validation good practices (Fernandes et al. 2014). For LAI, the community-agreed approach consists of upscaling ground references up to the satellite pixel resolution of the product to be validated, using transfer functions between the reference data and auxiliary high spatial resolution satellite data (Morisette et al. 2006). This approach was, however, established more than 20 years ago for validating low spatial resolution products from kilometric sensors over heterogeneous sites, to solve the upscaling issue, and may not be well suited to the validation of hectometric and decametric products under metrological principles. Recently, in the context of the ESA fiducial reference measurement (FRM) for vegetation (FRM4Veg) project, which aims to apply metrological principles to the validation of satellite bio-geophysical products for the first time, we found that the uncertainties attached to the upscaling approach exceed the mission requirements. Consequently, it was recommended to establish procedures for validation and conformity testing at the decametric and hectometric scales by direct comparison with spatially representative FRMs. This would avoid the uncertainties of the two-stage validation approach (Camacho et al., 2024), similarly to the good practice recommended in the CEOS LPV validation protocol for surface albedo products (Wang et al., 2019).
The aim of this talk is two-fold: 1) we further explore the uncertainties related to the upscaling procedures by evaluating the Copernicus Ground-Based Observation for Validation (GBOV) upscaled products V3 over homogeneous sites; 2) we present the first validation results of the CLMS OLCI 300 m LAI, FAPAR and FCOVER products based on GBOV reference measurements processed from NEON site ground data. The evaluation of the GBOV upscaled products reveals important issues resulting in both low accuracy and low precision of this dataset. The generic upscaling function used in V3 for forest or non-forest sites introduces significant bias at forest sites (mean bias between 1 and 2 LAI units) as compared to the reference measurements used to calibrate the GBOV upscaled products. In addition, the use of different versions (from V3.1 to V3.3), which rely on different data and regression methods, for different periods of the GBOV time series introduces temporal inconsistencies (low inter-annual precision), mainly for FAPAR and FCOVER. Moreover, over non-forest sparsely vegetated sites, the GBOV upscaled products provide systematically lower values than CLMS and other global satellite products. We investigated this issue by reprocessing the NEON digital hemispherical photos using the CAN-EYE software (Weiss and Baret 2017). Our estimates agreed with the magnitude of the satellite products, whilst the GBOV reference measurements systematically provided lower values. Other inconsistencies were found in the measurements used for the calibration of GBOV V3, including the use of a constant understory value for the ICOS sites. All these inconsistencies suggest that the GBOV V3 upscaled products are not a reliable source of information for qualifying satellite-based bio-geophysical products. The preliminary comparison of the CLMS OLCI 300 m products with GBOV reference measurements over homogeneous sites shows better agreement than with the GBOV upscaled maps.
Indeed, the bias is close to zero for LAI, and the linear fit is close to 1:1 with RMSD values of 0.9. We also obtained improved correlation and an increased percentage of retrievals conforming to the CLMS accuracy requirements. For FAPAR and FCOVER, the comparison with reference measurements also reduced the bias as compared with the upscaled maps, but the RMSD is slightly higher. This is due to the large stochastic errors associated with the reference measurements, which are smoothed out during the upscaling procedure. In conclusion, the GBOV/NEON reference measurements of the LAI, FAPAR and FCOVER variables represent a more trustworthy source of reference data for the validation of satellite-based products than the upscaled GBOV V3 dataset over homogeneous sites, given the large uncertainties introduced by the generic upscaling approach used in GBOV. References: - Camacho, F.; Martínez-Sánchez, E.; Brown, L.A.; Morris, H.; Morrone, R.; Williams, O.; Dash, J.; Origo, N.; Sánchez-Zapero, J.; Boccia, V. Validation and Conformity Testing of Sentinel-3 Green Instantaneous FAPAR and Canopy Chlorophyll Content Products. Remote Sens. 2024, 16, 2698. DOI:10.3390/rs16152698 - Fernandes, R., Plummer, S., Nightingale, J., Baret, F., Camacho, F., Fang, H., Garrigues, S., Gobron, N., Lang, M.L., R., LeBlanc, S., Meroni, M., Martinez, B., Nilson, T., Pinty, B., Pisek, J., Sonnentag, O., Verger, A., Welles, J., Weiss, M., & Widlowski, J.L. (2014). Global Leaf Area Index Product Validation Good Practices. Version 2.0. In M.R. G. Schaepman-Strub, & J. Nickeson (Ed.), Best Practice for Satellite-Derived Land Product Validation: Land Product Validation Subgroup (WGCV/CEOS) (pp.
1-76) -Morisette, J., Baret, F., Privette, J.L., Myneni, R.B., Nickeson, J., Garrigues, S., Shabanov, N., Weiss, M., Fernandes, R., Leblanc, S., Kalacska, M., Sanchez-Azofeifa, G.A., Chubey, M., Rivard, B., Stenberg, P., Rautiainen, M., Voipio, P., Manninen, T., Pilant, D., Lewis, T., Iiames, J., Colombo, R., Meroni, M., Busetto, L., Cohen, W., Turner, D., Warner, D., Petersen, G.W., Seufert, G., & Cook, R. (2006). Validation of global moderate resolution LAI Products: a framework proposed within the CEOS Land Product Validation subgroup. IEEE Transactions on Geoscience and Remote Sensing, 44, 1804-1817. -Wang, Z., Schaaf, C., Lattanzio, A., Carrer, D., Grant, I., Román, M., Camacho, F., Yu, Y., Sánchez-Zapero, J. & Nickeson, J. (2019). Global Surface Albedo Product Validation Best Practices Protocol. Version 1.0. In Z. Wang, J. Nickeson & M. Román (Eds.), Good Practices for Satellite-Derived Land Product Validation (p. 45): Land Product Validation Subgroup (WGCV/CEOS), doi: 10.5067/DOC/CEOSWGCV/LPV/ALBEDO.001 -Weiss M. and Baret F. (2017). CAN-EYE V6.4.91 User Manual. INRA. Available at https://can-eye.paca.hub.inrae.fr/content/download/3052/30819?version=4
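The headline statistics used throughout this comparison (bias, RMSD, and the fraction of retrievals conforming to an accuracy requirement) can be computed as sketched below. The arrays are synthetic stand-ins for satellite and reference LAI, and the requirement band is a made-up example, not the CLMS specification.

```python
# Illustrative validation statistics on synthetic satellite vs. reference LAI.
import numpy as np

rng = np.random.default_rng(2)
reference = rng.uniform(0.0, 6.0, 500)               # ground reference LAI
retrieval = reference + rng.normal(0.05, 0.8, 500)   # satellite retrieval

bias = np.mean(retrieval - reference)                # systematic offset
rmsd = np.sqrt(np.mean((retrieval - reference) ** 2))
# example requirement: within max(0.5, 20 %) of the reference value
tolerance = np.maximum(0.5, 0.2 * reference)
conforming = np.mean(np.abs(retrieval - reference) <= tolerance)

print(f"bias={bias:.2f}  RMSD={rmsd:.2f}  conforming={100 * conforming:.0f}%")
```

These are the same three quantities quoted in the abstract (bias close to zero, RMSD of 0.9, and the percentage of conforming retrievals).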
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Validation of the Copernicus Sentinel-2C Level-2A Products

Authors: Avi Putri Pertiwi, Raquel De Los Reyes, Kevin Alonso, Jerome Louis, Francesco Pignatale, Florian Poustomis, Silvia Enache, Rosalinda Morrone, Ferran Gascon, Valentina Boccia
Affiliations: German Aerospace Center (DLR), German Aerospace Center (DLR), Starion for ESA ESRIN, Telespazio France - A Leonardo / Thales Company, Telespazio Germany – A Leonardo / Thales Company, ACRI-ST, CS Group, European Space Agency (ESA), ESRIN
The third satellite of the Copernicus Sentinel-2 constellation, Sentinel-2C (S2C), was launched on 5 September 2024. The satellite is currently in its commissioning phase and is planned to accompany Sentinel-2B (S2B), which was launched in 2017. During commissioning, a tandem phase between Sentinel-2A (S2A) and S2C is implemented in order to perform a cross-calibration between the instruments, before S2C replaces S2A in early 2025. The three satellites are all equipped with an optical imaging sensor, the Multi-Spectral Instrument (MSI), which acquires high spatial resolution images of land, islands, and inland and coastal waters. Users from all around the world have been using these images for land cover change observation in relation to agriculture, forestry, urban, rural, and coastal areas, which is also useful for risk and disaster mapping. The Level-1C (L1C) Top-Of-Atmosphere reflectance and Level-2A (L2A) surface reflectance data are available to the public through the Copernicus Data Space Ecosystem (CDSE). Sen2cor is the processing tool implemented to remove the atmospheric effects from L1C products and produce L2A products, providing surface reflectance together with Aerosol Optical Thickness (AOT), Total Column Water Vapour (WV) and Scene Classification (SCL) maps. The tandem and drift phases between S2A and S2C provide a crucial opportunity to assess the performance of the Level-2A products. This contribution will show the commissioning-phase validation results for cloud masking, AOT, WV, and surface reflectance retrieval of the newly launched S2C imagery, with associated accuracy and uncertainty scores for the AOT and WV retrievals, assessed against reference measurements from Aerosol Robotic Network (AERONET) stations. BOA surface reflectance retrievals were assessed against ground reference data provided by independent measurements such as the Radiometric Calibration Network (RadCalNet) and HYPERNETS.
Reference data for SCL cloud masking validation was generated by visually labelling the cloud and cloud-free pixel classes of the images from the test sites.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone S-T)

Poster: Ground-Based Observation for Validation: Production of Vegetation Land Products and Uncertainties

Authors: Rémi Grousset, Christophe Lerebourg, Gabriele Bai, Dr. Jadu Dash, Somnath Paramanik, Finn James, Luke Brown, Nadine Gobron, Marco Clerici
Affiliations: ACRI-ST, University of Southampton, University of Salford, European Commission Joint Research Center
GBOV (Copernicus Ground-Based Observation for Validation), funded by the European Commission and led by the Joint Research Centre under Framework Contract number 941837, is an element of the Copernicus Land Monitoring Service (CLMS: https://land.copernicus.eu). Its main purpose is to collect and process worldwide ground data to develop an Analysis Ready Validation Dataset (ARVD) in support of the Copernicus Land Service validation strategy. Additional projects and services, such as the ESA opt-MPC, also rely on the GBOV ARVD, on top of a global user community. GBOV has two separate components for the development of its validation dataset: component 1 relies on data from existing stations worldwide, while component 2 involves instrument deployments aimed at reducing geographic and thematic gaps. The Global Land Service provides a wide range of bio-geophysical parameters, encompassing soil moisture, snow, temperature, reflectance, vegetation and water bodies. GBOV specifically addresses the validation of seven core land service products, in particular the key vegetation biophysical parameters that are the focus of this presentation: Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), and Fraction of Vegetation Cover (FCOVER). Digital Hemispherical Photographs (DHPs) from the ICOS and NEON networks, as well as automated DHP systems deployed under GBOV component 2, are used to derive three so-called Reference Measurements (RMs): FIPAR (fraction of intercepted photosynthetically active radiation), Tcanopy (transmittance through the canopy) and LAI. These are further processed, through the implementation of a transfer function, to generate FAPAR, LAI and FCOVER (the so-called "Land Products", LPs). The purpose of this transfer function is to generate LPs for the validation of the equivalent CLMS products at coarser resolution. Intermediate EO data, namely Sentinel-2A and -2B, are used to achieve this upscaling.
The footprints of the DHP acquisition plots are close to the Sentinel-2 resolution (10 m, 20 m and 60 m depending on the waveband), while the resolution of the CLMS products is 300 m, or, more specifically, 1/336 degree. To overcome this resolution mismatch, the upscaling procedure is derived and applied as follows: Sentinel-2 data are post-processed using SL2P (Sentinel-2 Level-2 Prototype Processor) to generate LAI, FAPAR and FCOVER at 20 m. Three transfer/calibration functions are derived based on matchups between ground-based data (RMs derived from DHPs) and SL2P outputs. These three transfer functions correspond to different vegetation types and ground measurement setups: (1) one for the forest vegetation type, calibrated on DHPs acquired from both overstory and understory; (2) one for grasslands and croplands, calibrated on DHPs acquired only from the understory; (3) one for the forest vegetation type, calibrated on DHPs acquired only from the overstory. These transfer functions are then applied to SL2P outputs over an area of approximately 5 x 5 km. The final product at 300 m resolution, matching the CLMS resolution, is then computed through an aggregation procedure. Since a measurement cannot be fully understood without considering its associated uncertainty, particular attention was given to propagating uncertainties throughout the process. This involved accounting for uncertainties from both satellite-derived and ground-based measurements, both during the calibration of the transfer function and during its application, ensuring that they are well reflected in the final Land Product. The uncertainty computation follows the GUM (Guide to the Expression of Uncertainty in Measurement). This presentation will describe the technical approach used to derive the GBOV vegetation Land Products. The GBOV service is freely accessible at https://land.copernicus.eu/global/gbov and provides Land Products over 53 sites for vegetation.
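The upscaling chain described above (transfer-function calibration on matchups, application to a 20 m grid, aggregation to the coarser product pixel) can be sketched as follows. The numbers are synthetic and a simple linear transfer function stands in for the operational, vegetation-type-specific ones.

```python
# Hedged sketch of a transfer-function upscaling chain with synthetic data.
import numpy as np

rng = np.random.default_rng(3)

# 1) calibrate a transfer function from (SL2P output, ground RM) matchups
sl2p_matchups = rng.uniform(0.5, 5.0, 30)
ground_rm = 1.1 * sl2p_matchups - 0.2 + rng.normal(0, 0.1, 30)
slope, intercept = np.polyfit(sl2p_matchups, ground_rm, 1)

# 2) apply it to a 20 m SL2P tile (here 15 x 15 pixels ~ one 300 m pixel)
tile_20m = rng.uniform(0.5, 5.0, size=(15, 15))
corrected = slope * tile_20m + intercept

# 3) aggregate to the 300 m product resolution
lai_300m = corrected.mean()
print(f"300 m LAI: {lai_300m:.2f}")
```

In the real processing, uncertainties from both the matchups and the transfer-function fit would be propagated through steps 2 and 3 following the GUM, rather than discarded as here.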
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: C.06.07 - POSTER - Recent progress on uncertainty analysis for Earth Observation measurements

Over the last decade, considerable efforts have been devoted to better understanding the uncertainties associated with Earth Observation measurements. Indeed, uncertainty is directly linked to the value and usefulness of a measurement, whether it is used for predictions, risk assessment or decision making.
This multi-disciplinary session will present recent progress on evaluating measurement uncertainties, as well as practical use cases from the whole range of EO techniques.
We welcome contributions covering:
• recent results on uncertainty estimation either from theoretical (uncertainty propagation from measurement principles) or data-driven approaches,
• novel calibration and validation methods providing insights for the characterization or verification of uncertainty estimates,
• studies using Fiducial Reference Measurements, either to characterize their own uncertainties or as a source of truth for EO measurements,
• methods to convey uncertainty estimations to end users.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: ISSI Forum on Understanding disparities by uncertainty analyses in VSWIR imaging spectroscopy of Earth surface ecosystems

Authors: Mike Werfeli, David Thompson, Andreas Hueni, Dr. Raymond Kokaly, Dr. Jennifer Adams, Dr Raymond Soffer, Dr. Raquel De Los Reyes, Dr Cindy Ong, Agnieszka Bialek, Dr. Takahiro Kawashima, Marco Celesti, Sabine Chabrillat, Dr Kévin Turpie, Dr. Martin Bachmann, Dr. Javier Gorroño, Dr.ssa Claudia Giardino, Dr. Monica Pepe, Ferran Gascon, Dr. Samuel Hunt, Nicolas Lamquin, Jonathan Mittaz, Emma Woolliams, Maximilian Brell, Kevin Alonso, José Moreno, Dr. Jens Nieke, Dr. Joanne Nightingale, Ludovic BOURG, Dr. Kurtis Thome, Prof. Dr. Michael Rast
Affiliations: Remote Sensing Laboratories University of Zurich, Jet Propulsion Laboratory (NASA/JPL), California Institute of Technology, U.S. Geological Survey (USGS), Geology, Geophysics, and Geochemistry Science Center, Aerospace Research Center, National Research Council of Canada, German Aerospace Center (DLR), Earth Observation Center (EOC), Commonwealth Scientific and Industrial Research Organisation (CSIRO), National Physical Laboratory, Department of Aeronautics and Astronautics, University of Tokyo, ESA European Space Research and Technology Centre (ESTEC), Helmholtz Center Potsdam, GFZ German Research Center for Geosciences, Leibniz University Hannover, Institute of soil science, University of Maryland Baltimore County (UMBC), Research Institute of Water and Environmental Engineering (IIAMA), Universitat Polytecnica de Valencia, Institute for Electromagnetic Sensing of the Environment (IREA), National Research Council of Italy (CNR), ESA European Space Research Institute (ESRIN), ACRI-ST, Department of Meteorology, University of Reading, HySpex – Norsk Elektro Optikk AS, Faculty of Physics, Universitat de Valencia, NASA/Goddard Space Flight Center (GSFC), International Space Science Institute (ISSI)
With growing interest in ready-to-use hyperspectral imaging Earth Observation (EO) data at a global scale, intensified global cooperation in the field of EO is needed. VSWIR spectroscopy in particular has seen the launch of sensors such as EnMAP (DLR), PRISMA (ASI) and EMIT (NASA) over the last five years, and will see CHIME (ESA) and SBG (NASA) launched in the late 2020s/early 2030s. These sensors bring global imaging spectroscopy data to users at unprecedented spatial and temporal scales for a wide variety of applications, from agriculture and soil, biodiversity, food security and raw materials to water quality and environmental degradation. However, the use of multiple spectroscopy satellites is prone to disparities, hampering uptake by users. In the context of EO, disparities are spectro-radiometric differences between sensors measuring the same target on the Earth's surface; they reflect differences in how imaging spectroscopy satellite operators handle the spectro-radiometric information in the processing chain from at-sensor radiance down to bottom-of-atmosphere (BOA) reflectance. To address disparities and uncertainty quantification of VNIR/SWIR imaging spectroscopy products, the International Space Science Institute in Bern organised a Forum on understanding disparities between space-based VNIR/SWIR imaging spectroscopy sensors in mid-June 2024. These disparities arise from 1) the hardware used to measure incoming solar radiance reflected from the Earth's surface and 2) the processing of the measured data from top-of-atmosphere (TOA) radiance to BOA reflectance. The differences in data recording are traceable to the hardware used to convert photons to digital numbers (Level 0 (L0)) and to the pre-launch laboratory and/or in-flight calibration used to obtain calibrated radiances (W m-2 nm-1 sr-1) (Level 1 (L1)).
The differences in data processing are traceable to the algorithms applied to transform TOA radiances to BOA reflectances (e.g. ATCOR/PACO and ISOFIT). The key to better understanding and handling measurement disparities is a commonly accepted and metrologically sound methodology for estimating and propagating uncertainty into a traceable and rigorous uncertainty budget associated with the BOA reflectance (L2). This also includes a thorough sensitivity analysis of the parameters contributing to the various processing algorithms along the processing chain (L1 to L2), building on recommendations from CEOS, and in particular QA4EO. To add a further layer to data quality and intercomparison, lessons learned from Cal/Val activities and intercomparison exercises (e.g. ACIX, RAMI) have been considered, in order to suggest how to handle different datasets with associated uncertainties and to build a metrologically sound foundation on which further scientific advances in different fields of Earth science and ecology can flourish.
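A minimal example of the kind of uncertainty propagation the Forum calls for: a GUM-style propagation of TOA radiance and solar irradiance uncertainties into a top-of-atmosphere reflectance uncertainty, assuming (for illustration only) a simple rho = pi * L / (E * cos(theta)) model with uncorrelated inputs. The numbers are invented.

```python
# Toy GUM-style propagation for a product/quotient measurement model:
# relative standard uncertainties of uncorrelated inputs add in quadrature.
import numpy as np

L, u_L = 80.0, 1.6       # radiance and its standard uncertainty (W m-2 sr-1 um-1)
E, u_E = 1500.0, 15.0    # solar irradiance and its standard uncertainty
theta = np.deg2rad(30.0) # solar zenith angle (assumed exactly known here)

rho = np.pi * L / (E * np.cos(theta))
u_rho = rho * np.sqrt((u_L / L) ** 2 + (u_E / E) ** 2)
print(f"rho = {rho:.4f} +/- {u_rho:.4f}")
```

A full L1-to-L2 budget would add many more terms (calibration, atmospheric-correction parameters, correlated errors), but each follows this same sensitivity-coefficient pattern.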
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: JRC multi-decadal field measurement programs and radiative transfer activities supporting the assessment of uncertainties in satellite ocean color data products

Authors: Barbara Bulgarelli, Giuseppe Zibordi, Jean-Francois Berthon, Pietro Sciuto, Davide D'Alimonte, Ewa Kwiatkowska
Affiliations: European Commission, Joint Research Centre, National Aeronautics and Space Administration, Goddard Space Flight Center, Aequora, EUMETSAT
Understanding and quantifying uncertainties associated with Earth Observation (EO) measurements of the aquatic environment is essential to their use in the anticipation, definition, implementation and monitoring of EU Water Acquis and Marine legislation. The assessment of uncertainties in ocean color (OC) satellite data products, both primary (i.e., radiometric) and derived (e.g., phytoplankton pigment concentration or sea water inherent optical properties), requires accurate and comprehensive in situ bio-optical measurements, and benefits from the application of advanced radiative transfer theoretical modelling. Dating back to the 90s, and anticipating validation needs for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) ocean color mission, several measurement programs were established to gather bio-optical data representative of the world's marine waters. Among these, the Coastal Atmosphere & Sea Time Series (CoASTS) and the Bio-Optical mapping of Marine optical Properties (BiOMaP) were implemented by the Marine Optical Laboratory of the Joint Research Centre (JRC) in collaboration with a number of European institutions to collect bio-optical measurements across the European Seas. While CoASTS benefited from the Acqua Alta Oceanographic Tower (AAOT) in the northern Adriatic Sea to generate time-series data at a fixed coastal site, BiOMaP relied on oceanographic ships to collect spatially distributed measurements across various European Seas. Complementary to these programs, the Ocean Color component of the Aerosol Robotic Network (AERONET-OC) was subsequently established within the framework of a collaboration between the JRC and the National Aeronautics and Space Administration (NASA) to specifically support the validation of satellite OC radiometric data products with time series generated through automated instruments operated on globally distributed fixed platforms.
CoASTS, BiOMaP and AERONET-OC apply standardization of instruments, measurement methods, quality control schemes and processing codes to enforce consistency across temporally and spatially distributed data products. These activities benefitted from the application of accurate radiative transfer modelling tools, such as the codes included in the JRC Advanced Radiative Transfer Models for the simulation of In situ and Satellite Ocean Color data (ARTEMIS-OC) software suite, as well as the Monte Carlo code for Ocean Color Simulations (MOX). Overall, the CoASTS, BiOMaP and AERONET-OC programs extended over a period of almost three decades and constitute a unique source of data for bio-optical investigations and the validation of satellite OC data products. This work summarizes the three measurement programs with a focus on key data applications that have led to major achievements in recent years, highlighting recent progress on evaluating uncertainties in OC satellite data products (e.g., accounting for spectral and atmospheric dependences in the correction of sea-surface contributions in above-water radiometric measurements, and accurately modelling super-structure perturbations).

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Provision of operational FRM measurements for Sentinel-3 over inland water: the St3TART Follow on project

Authors: Valentin Fouqueau, Gabrielle Cognot, Rémy Lopez, Eliot Lesnard-Evangelista, Emma R Woolliams, Jean-Christophe Poisson, Sajedeh Behnia, Nicolas Taburet, Julien Renou, Marie Chapelier, Hervé Yesou, Maxime Azzoni, Angelica Tarpanelli, Karina Nielsen, Mohammad Tourian, Michel Calzas, Pascal Bonnefond, Claire Miller, El Hajj Mahmoud, Filomena Catapano, Femenias Pierre
Affiliations: Vortex-io, NOVELTIS, NPL, National Physical Laboratory, CNES, DTU Space, CLS, SERTIT, CNR-IRPI, GIS, Institute of Geodes, DT-INSU, SYRTE, ESA-ESRIN
Calibration and validation are essential for satellite missions to ensure confidence in geophysical data derived from satellite measurements. For Copernicus Sentinel-3 inland water products, the ESA St3TART project developed a roadmap and demonstrated a proof of concept for operational Fiducial Reference Measurements (FRM) to support validation of the Sentinel-3 radar altimeter over inland waters. The first St3TART project involved hydrology-related activities focused on validating and monitoring the performance and stability of the Sentinel-3 altimeter measurements using FRM. Key steps included reviewing existing methodologies and ground instrumentation, defining protocols to account for errors from in-situ sensors, satellite data, and validation site conditions, and developing a roadmap for operational FRM provision. This roadmap supports validation and enhances the use of Sentinel-3 SAR altimeter data for inland waters. Demonstrative field campaigns were successfully conducted based on the established procedures and protocols. A follow-on project has been initiated by ESA to implement the roadmap established during the initial St3TART project; its primary objective is to deliver FRMs for inland water as a service. The entire consortium from the original project is actively participating in this continuation. The new project builds on the instrumentation deployed during the previous phase, enhanced to ensure operational redundancy across super sites, which have been deployed over very different inland waterbodies in Europe. This network of super sites is complemented by data from third-party in-situ networks, designated as opportunity sites. A fully integrated and automated processing chain has been developed by vortex-io, enabling the operational provision of FRMs to the MPC within six days of a Sentinel-3 overflight at a super site.
This automated processing relies on the computation methodology and procedures defined during the first project. For each FRM, uncertainties are calculated to provide insight into the error budget associated with both the in-situ sensors and the processing workflows. We will present specific results assessing the quality of the FRMs provided in the project. Scientific partners in the project carry out in-depth analyses of specific cases and super sites, using the FRMs and datasets generated by the vortex-io processing center and ensuring the relevance of the produced FRMs. The project will span three years, with operational efficiency central to its success. A range of protocols and key performance indicators (KPIs) have been established to ensure the timely delivery and high quality of the produced FRMs. All FRM data will be hosted on a dedicated data hub developed as part of the project, facilitating access for a wide network of scientific partners to support research activities using Sentinel-3 measurements over inland waters.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Global Sensitivity Analysis on Water Vapour and Aerosol Optical Thickness for the Upcoming ESA CHIME Mission

Authors: Carmen Meiller, Mike Werfeli, Pieter De Vis, Dr. Astrid M. Zimmermann, Dr. Andreas Hueni
Affiliations: Remote Sensing Laboratories (RSL), University of Zurich, National Physical Laboratory (NPL)
Within the L2 consortium of the upcoming European Space Agency (ESA) hyperspectral satellite mission, the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME), uncertainty calculations will be performed and propagated along the entire atmospheric correction processing chain. As the final product, a surface reflectance cube is generated for further use by the end user, and the propagated uncertainty estimates will be made available alongside it. The CHIME atmospheric correction algorithm, which calculates the remote sensing surface reflectance from a top-of-atmosphere (TOA) radiance cube, uses simulations of radiative processes in the atmosphere under different illumination, viewing and atmospheric conditions in the form of precomputed look-up tables (LUTs). Depending on the time, date and location, as well as the state of the atmosphere when the satellite image was acquired, these parameters vary, and so do their uncertainties. Independently of the radiative transfer model used, it is important to understand the behaviour of the simulated parameters and their associated uncertainties as the parameters are varied: the behaviour of the resulting atmospherically corrected surface reflectance cube and its uncertainty is investigated when one parameter is fixed while the other parameters vary. To gain further insight, a global sensitivity analysis will be performed in this ongoing study to better understand the sensitivity of the atmospheric correction with respect to two parameters: Aerosol Optical Thickness (AOT) and Water Vapour (WV). Artificial surface reflectance scenes with varying reflectance properties were generated: checkerboard patterns with 25 %, 50 % and 75 % Dark Dense Vegetation (DDV) pixels, with the remaining pixels representing a sand surface. A selection of TOA radiances was then forward modelled using these scenes.
To make this study feasible, only a selection of parameter values (e.g. the minimum, median and maximum of each parameter) was chosen, apart from AOT and WV, for which all possible LUT values were selected and every possible combination was calculated. In the next step, all resulting scenes were convolved with the spectral response functions of CHIME. The final TOA radiances, now with CHIME specifications, serve as input for the sensitivity analysis applying the CHIME Open-Source Library (CHIME OSL) atmospheric correction: the OSL atmospheric correction algorithm is executed for all generated scenes, and the surface reflectance cubes are retrieved along with their uncertainty estimates. Ultimately, a global sensitivity analysis will be performed for all artificial datasets by evaluating the variation in the calculated surface reflectances for each fixed AOT and WV value while varying all other parameters. In this way, the most influential values of AOT and WV, as well as the affected wavelength ranges, can be identified. The sensitivity analysis thus provides insights for optimising model inputs and outputs.
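The fix-one-parameter, vary-the-others scheme described above can be sketched as follows. The LUT axis names, grid values and the toy reflectance function are illustrative assumptions, not the CHIME OSL grid or algorithm; the point is only that the spread of the output for a fixed AOT value measures the sensitivity contributed by the remaining parameters:

```python
import numpy as np
from itertools import product

# Illustrative LUT axes (names and values are assumptions, not the CHIME grid)
aot_vals = [0.05, 0.1, 0.2, 0.4]
wv_vals = [0.5, 1.0, 2.0, 4.0]    # water vapour [g/cm^2]
sza_vals = [20.0, 40.0, 60.0]     # an "other" parameter, sampled at min/median/max only

def surface_reflectance(aot, wv, sza):
    """Toy stand-in for the atmospheric-correction output at one wavelength."""
    return 0.30 - 0.15 * aot - 0.02 * wv - 0.001 * (sza - 40.0)

# Fix AOT, vary everything else; the std measures sensitivity to the free parameters
for aot in aot_vals:
    out = [surface_reflectance(aot, wv, sza) for wv, sza in product(wv_vals, sza_vals)]
    print(f"AOT={aot:4.2f}: mean={np.mean(out):.4f}, std={np.std(out):.4f}")
```

In the actual study the inner evaluation is the full OSL atmospheric correction on a TOA radiance cube, and the same loop is repeated with WV as the fixed parameter.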

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Insight into validation methods of SAR altimeter measurements for the provision of uncertainty estimates of river water level

Authors: Julien Renou, Nicolas Taburet, Claire Miller, Valentin Fouqueau, Filomena Catapano, Pierre Femenias
Affiliations: Collecte Localisation Satellites (CLS), Noveltis, VorteX-io, ESA-ESRIN
The regular provision by radar altimetry missions of accurate water level measurements over continental areas at global scale is of key interest for monitoring freshwater stocks. While radar altimetry was initially developed to measure sea surface topography, recent progress in altimetry data processing has increased the number of quality observations over hydrological areas. In particular, the Copernicus Sentinel-3 and Sentinel-6 Michael Freilich missions measure water level with enhanced spatial resolution thanks to their on-board Synthetic Aperture Radar (SAR) altimeters. The performance of altimetric measurements over rivers is typically assessed by comparing water level time series estimated from these measurements with water level time series provided by in-situ stations. However, this standard validation technique often incorporates uncertainties, due to the combined impact of the orbit excursion and the river slope between the positions of the altimetric measurements and the in-situ station, that cannot be distinguished from the uncertainties of the altimetric measurements themselves. In this study, we propose to combine several validation methods with the objective of reducing the contributions of these external error sources and hence better estimating the uncertainties of SAR altimeter measurements. We apply complementary strategies developed in the ESA MPCS3, St3TART-FO and S6-J3 tandem phase exploitation projects. First, we present the routine validation of Sentinel-3 measurements on a limited number of sites based on Fiducial Reference Measurements (FRM). We then complement these results with an innovative validation technique which takes advantage of the SAR processing to redefine the notion of a virtual station, providing Sentinel-3 measurements whose uncertainties are less impacted by the river slope bias.
Finally, we exploit river water level profiles based on a novel SAR processing technique that provides metre-scale along-track resolution for Sentinel-6 measurements. The combination of these various methods and datasets will be discussed in order to provide a comprehensive definition of the error budget of SAR altimeter measurements over rivers.
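The river-slope effect mentioned above, i.e. the water-level difference between the altimetric measurement position and the in-situ station, can be sketched with a first-order correction. The slope value, chainages and water levels are synthetic assumptions for illustration, and the sign convention (level decreasing downstream, chainage increasing downstream) is one of several possible; the projects' actual processing is more involved:

```python
# First-order propagation of altimetric water levels to an in-situ station
# along a locally constant water-surface slope (all numbers are synthetic).
river_slope_m_per_km = 0.05      # assumed local water-surface slope
station_chainage_km = 122.0      # in-situ station position along the river

altimetry = [
    # (chainage_km, measured water level in m) for successive satellite passes
    (120.4, 101.32),
    (123.1, 101.05),
    (121.7, 101.21),
]

# Level decreases downstream: subtract slope times the downstream distance
corrected = [wse - river_slope_m_per_km * (station_chainage_km - x)
             for x, wse in altimetry]

for (x, wse), c in zip(altimetry, corrected):
    print(f"chainage {x:6.1f} km: raw {wse:.2f} m -> at station {c:.3f} m")
```

Without such a correction, the pass-to-pass spread caused by orbit excursion along a sloping reach is indistinguishable from altimeter measurement error, which is the motivation for the slope-aware virtual station definition described above.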

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Svalbard as a supersite for cryosphere observing satellite reference measurement collection

Authors: Ashley Morris, Bartlomiej Luks, Tom Rune Lauknes, Eirik Malnes, Geir Moholdt, Michal Laska, Eero Rinne
Affiliations: Svalbard Integrated Arctic Earth Observing System, Institute of Geophysics, Polish Academy of Sciences, NORCE Norwegian Research Centre AS, Norwegian Polar Institute, Institute of Earth Sciences, University of Silesia, The University Centre in Svalbard
The Svalbard archipelago is located midway between the Norwegian mainland and the North Pole. Its main settlement, Longyearbyen, is one of the most accessible locations in the high Arctic, served year-round by daily commercial flights, and the archipelago hosts an established research and field logistics infrastructure. Some 60% of the area is glacier-covered, ranging from cirque and valley glaciers to expansive icefields and ice caps, whilst the remainder is blanketed by a seasonal snowpack. Landfast sea ice forms in the fjords, and the Arctic pack ice can be found to the north and east. As a result of the high-latitude location and the convergence of satellite ground tracks towards the poles, satellite overpass frequency is high. These factors combine to make Svalbard a supersite for the collection of reference measurements for the current constellation of Earth observation satellites and upcoming platforms such as CRISTAL, CIMR, ROSE-L and Sentinel-1 next generation. Reference measurement collection is currently undertaken by numerous members of the Svalbard Integrated Arctic Earth Observing System (SIOS) consortium. We present some ongoing activities across the cryosphere domain, as well as plans for future activities. These reference measurement collection activities include fieldwork across the archipelago, and the use of Uncrewed Aerial Vehicles (UAVs) and crewed aircraft to bridge the gap in spatial coverage to Earth observation satellite data. For example, the Norwegian Polar Institute (NPI) and University of Oslo (UiO) field campaigns on the Austfonna ice cap have been collecting reference measurements spanning multiple decades and a number of different radar and laser altimetry platforms. These campaigns have collected GNSS surface elevation profiles along transects aligned, at various times, with ICESat, CryoSat-2, ICESat-2 and Sentinel-3 ground tracks.
Measurements of snow thickness and characteristics from GPR, manual snow probing and snow pit analysis have also been collected to assess the penetration bias in radar altimetry data from CryoSat-2 and Sentinel-3. Likewise, the Norwegian-Polish SnowInOpt project collected snow thickness and structure profiles using GPR, as well as measurements of snow water equivalent (SWE) and basal ice thickness below ICESat-2 overpasses of the southern portion of the main island, Spitsbergen. An earlier Norwegian-Polish collaborative project, CRIOS, facilitated fieldwork and investment in instrumentation and infrastructure including InSAR corner reflectors, whilst the HarSval project includes WetSnowEx, an initiative to collect a range of snow reference measurements across the melt season. It is increasingly common for fieldwork in Svalbard to include the use of UAVs to collect optical and LiDAR reference measurements over larger areas of glacier ice, landfast sea ice or terrestrial snow cover. In addition to these field-based activities, the Lufttransport Dornier Do-228 aircraft (LN-LYR), part of the SIOS-InfraNor project, is pivotal in observing geophysical and biophysical processes, and significantly enhances our capability to collect reference measurements across Svalbard. This crewed aircraft is permanently equipped with a scientific pod containing a state-of-the-art hyperspectral imager and a medium format camera. It is also certified for operation with a roller-door for temporary side-looking sensor setups. The proposed permanent installation of imaging radar infrastructure would further extend the aircraft’s capabilities, facilitating continuous science data collection during regular operations. 
Such an airborne SAR platform would enable precise and flexible collection of reference measurements for satellite data, particularly in the remote and dynamic Arctic and Barents Sea regions, thus supporting the enhanced monitoring capabilities required for the upcoming generation of Earth observation satellites.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Estimation of Sentinel-3 MWR Wet Tropospheric Correction uncertainty

Authors: Thibault Pirotte, Morgane Farradèche, Laiba Amarouche, Franck Borde
Affiliations: CLS, ESA/ESTEC
The Global Mean Sea Level (GMSL) is a critical climate change indicator. Since 1993, the GMSL has been continuously and globally monitored thanks to satellite altimetry. From 1993 to 2021, the GMSL rose by 3.3 ± 0.2 mm per year (Ablain et al. 2019). The Wet Tropospheric Correction (WTC) is one of the corrections applied in the GMSL computation, and its uncertainty has been identified as one of the main contributions to the GMSL error budget (Ablain et al. 2009; Ablain et al. 2019). For this reason, requirements for altimetry missions are increasingly demanding in terms of low WTC uncertainty. According to the ESA Mission Requirement Documents of the Sentinel-3 and Sentinel-3 NG-T altimetry missions, the WTC over ocean shall be provided with an uncertainty of 1.5 cm and 0.7 cm respectively. In addition, users of altimetry data increasingly demand maximum information on the uncertainties related to the different geophysical estimates. Developing a robust estimation method for this uncertainty is therefore essential in order to assess compliance with mission requirements and to evaluate the performance of altimetry missions over the oceans. However, WTC uncertainty is very difficult to estimate due to the lack of a reference. Given that several factors contribute to the WTC uncertainty, the method consists of decomposing the errors and estimating the contribution of each element, so as to deduce the most exhaustive uncertainty possible. In this context, a study was carried out at CLS on behalf of ESA to decompose and quantify the errors present in the WTC estimation. For Sentinel-3, the WTC retrieval algorithm is based on a neural network architecture with three inputs: the 23.8 GHz and 36.5 GHz brightness temperatures (respectively close to the water vapor absorption band and the cloud liquid water absorption band) and the Ku-band altimeter backscattering coefficient σ0 (Obligis et al., 2006; Frery et al., 2020).
Following ENVISAT heritage (Obligis et al. 2009), two additional parameters can be used to enhance the WTC retrieval: the sea surface temperature (SST) and the temperature lapse rate. The study therefore focuses on these 3- and 5-input neural network algorithms. The neural network is trained using brightness temperatures, σ0 and reference WTC, all simulated using a radiative transfer model and a numerical weather prediction model. Following the state of the art (Aires et al. 2004; Cadeddu et al. 2009), the uncertainty of the WTC estimation has been estimated as the sum of three independent contributions: the intrinsic error, which includes the intrinsic variability of the training dataset and radiative transfer and weather model uncertainties; the observation error, due to the propagation of input observation errors, including instrumental errors; and the network model error, which represents errors associated with the determination of the optimal weights by the neural network. The sources of uncertainty were analyzed using dedicated methods, such as secondary neural networks for intrinsic errors and statistical models for observational errors. The intrinsic error, linked to input parameter representativeness, was the main contributor to uncertainty in the 3-input algorithm but was reduced with the 5-input algorithm. Observation errors can become dominant in the 5-input algorithm, depending on input noise.
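The three-term error budget described above can be illustrated with a quadrature combination, the standard rule for independent standard uncertainties (the exact combination rule used in the study is not specified here, and the numerical values below are invented for illustration, not study results):

```python
import math

# Illustrative 1-sigma contributions in cm (assumed values, not study results)
u_intrinsic = 0.9    # training-set variability + RT/NWP model errors
u_observation = 0.6  # propagated instrument noise on brightness temperatures and sigma0
u_network = 0.3      # error from the determination of the network weights

# Root-sum-square combination for independent error terms
u_wtc = math.sqrt(u_intrinsic**2 + u_observation**2 + u_network**2)
print(f"combined WTC uncertainty: {u_wtc:.2f} cm")

# Compare against the Sentinel-3 mission requirement quoted above (1.5 cm)
assert u_wtc < 1.5
```

With these assumed numbers the intrinsic term dominates, mirroring the 3-input result described above; shrinking it (the 5-input configuration) lets the observation term become the leading contributor.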

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Integration of InSAR and GNSS for Enhanced Geodetic Accuracy: Methodology and Evaluation within the Slovak Corner Reflector Network

Authors: Lukas Kubica, Juraj Papco, Matus Bakon, Branislav Droscak, Martin Ferianc, Antonio M. Ruiz, Joaquim J. Sousa
Affiliations: Department of Theoretical Geodesy and Geoinformatics, Faculty of Civil Engineering, Slovak University of Technology in Bratislava, Radlinskeho 11, Astra Vigil Ltd, Konstantinova 3, Geodetic and Cartographic Institute Bratislava, Chlumeckeho 4, Department of Engineering, School of Sciences and Technology, Universidade de Tras-os-Montes e Alto Douro, Quinta de Prados,l, Department of Cartographic, Geodetic and Photogrammetry Engineering, University of Jaen, Campus Las Lagunillas s/n, Center for Advanced Studies in Earth Sciences, Energy and Environment (CEACTEMA), University of Jaen, Campus Las Lagunillas s/n,, Research Group RNM-282 Microgeodesia Jaen, University of Jaen, Campus Las Lagunillas s/n, Institute for Systems and Computer Engineering, Technology and Science INESC-TEC
Interferometric Synthetic Aperture Radar (InSAR) is a remote sensing technique that facilitates precise large-scale displacement monitoring with accuracy comparable to terrestrial geodetic methods. Leveraging measurements from the Sentinel-1 satellites, persistent scatterer networks can monitor extensive areas, often exceeding hundreds of square kilometers. However, the intrinsically relative nature of InSAR displacement measurements in the Line-of-Sight (LOS) direction presents challenges in achieving absolute geodetic accuracy. To address this, collocation with complementary geodetic techniques, such as Global Navigation Satellite Systems (GNSS), is essential to tie InSAR observations to a globally recognized terrestrial reference frame. This study introduces an optimized methodology for collocating InSAR and GNSS, emphasizing the precise definition of spatial relationships between the reference points of both techniques. The Slovak Corner Reflector Network (SCRN), an innovative geodetic infrastructure developed to integrate InSAR and GNSS measurements, serves as the experimental framework. The SCRN includes strategically deployed corner reflectors co-located with GNSS stations across Slovakia, part of the Slovak real-time positioning service (SKPOS) network, enabling detailed analysis of radar backscattering properties and spatial alignment. Several stations were analyzed for their Signal-to-Clutter Ratio (SCR) and Radar Cross Section (RCS) characteristics. All stations demonstrated robust backscatter properties, with SCR values exceeding the 20 dB threshold required for reliable InSAR processing. The experimental setup also involved precise determination of the spatial vector between the GNSS and InSAR reference points, achieved through geodetic surveying techniques, including static GNSS observations and total station measurements.
Results indicate that the SCRN significantly improves the geodetic precision of InSAR measurements by transforming relative LOS displacements into an absolute reference frame. This collocation strategy not only enhances the reliability of displacement monitoring but also establishes a scalable framework for integrating InSAR with existing geodetic networks. The SCRN offers a foundation for advanced applications in infrastructure monitoring, geohazard assessment, and the calibration of systematic errors in satellite-derived geodetic observations. This research demonstrates the viability of the SCRN as a cornerstone for geodetic innovations, highlighting its potential for regional and national-scale implementations in Slovakia and beyond.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Safeguarding the Earth Observation Radio-Frequency Spectrum for a Sustainable Future

Authors: Matteo Emanuelli, Ines Ortega Castello, Thomas Schmidt, Christoph Schwipps, Alexander Kaptein
Affiliations: Airbus Defence and Space GmbH, Airbus Defence And Space SAS
As of 2023, some 1,200 governmental and commercial remote sensing spacecraft were listed as active, with this number increasing annually. These satellites are indispensable for various applications and domains, such as disaster management, agriculture, environmental monitoring, defense, intelligence, and science. By 2030, the data produced by Earth observation satellites is projected to contribute up to $700 billion to the global gross domestic product (GDP) each year. Despite the critical role of space-based Earth observation for society, economy, and security, the upcoming 2027 International Telecommunication Union (ITU) World Radiocommunication Conference (WRC-27) will consider permitting the deployment of International Mobile Telecommunications (IMT) future sixth-generation (6G) mobile and fixed services (IMT-2030) in currently allocated and used Earth observation frequencies (7125–7235 MHz, 7235–7250 MHz, and 8025–8400 MHz) as well as meteorological satellite frequencies (7450–7550 MHz, 7750–7900 MHz and 8175–8215 MHz). These frequencies, collectively known as X-band, are vital for the Earth observation community. Notably, the 8025–8400 MHz range is crucial for downlinking data, providing a contiguous 375 MHz of protected bandwidth with no scalable alternative available. This range enables weather-independent, high-volume data transmission, which is essential for real-time and accurate Earth observation data. There is a significant risk that remote sensing satellites operating within this segment could be constrained, limiting their access to ground station networks in the numerous regions and countries where 6G networks are deployed and posing a substantial threat to the uninterrupted flow of crucial data used in critical applications, from weather forecasting to disaster response.
This paper presents an overview of the ongoing compatibility studies, assumptions, and methodologies developed by ITU working parties to assess the possible interference levels produced by 6G networks. Given Airbus Defence and Space's active involvement in these studies, preliminary results will also be included. Furthermore, the technological, operational, and regulatory trends related to future X-band usage by space-based Earth observation systems will be analyzed, considering the potential limitations imposed by 6G implementation. Finally, the paper will focus on how WRC decisions based on theoretical studies could translate into real-world regulatory constraints for Earth observation operations, highlighting the need for a balanced approach to spectrum management that safeguards critical Earth observation capabilities.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Uncertainty estimation of Earth Observation categorical datasets: Case Study with Copernicus Land Monitoring Service Small Woody Features

Authors: Nadine Gobron, Gorica Bratic, Jean-Michel Terres, Renate Koeble
Affiliations: Joint Research Centre, Arcadia SIT srl, Arhs Developments Italia
Space Earth Observation (EO) products are essential, among other information sources, for environmental analysis and monitoring, as well as for informing, implementing, and evaluating various EU policies. EO data include both numerical and categorical types. Assessing their uncertainties is important, but for categorical types, which are very common in the land and cryosphere domains (such as land cover, permafrost, and sea ice products), it remains a challenge due to the lack of a standardized framework akin to the Guide to the Expression of Uncertainty in Measurement (GUM) for numerical measurements [https://doi.org/10.59161/JCGM100-2008E]. The key issue is that a common definition of uncertainty for categorical properties is missing. Furthermore, as with many numerical products, the measurement model of categorical products is very often a data-driven algorithm (machine learning or artificial intelligence) rather than a physically based model [https://library.wmo.int/idurl/4/66028], for which uncertainty propagation is not yet defined. These methodology gaps for categorical data lead to issues in portraying uncertainties, which are vital for their uptake and for model optimization. We applied the suggestions for uncertainty estimation developed for numerical EO products by the FIDUCEO and QA4EO projects [https://qa4eo.org/docs/1_Executive_Summary.pdf] to the High Resolution Layer (HRL) Small Woody Features (SWF) categorical product provided by the Copernicus Land Monitoring Service (CLMS). The metrological guidance steps involve defining the measurand and the measurement model, establishing traceability with a diagram, evaluating each source of uncertainty and filling out an effects table, calculating the data product along with its uncertainties, and documenting those uncertainties. Our aims were to explore the applicability of these metrological steps to categorical data and to identify practical challenges that arise beyond those previously recognized.
The HRL SWF suite of products for 2015 and 2018 covers the pan-European regions and the French Overseas Departments. SWF has the potential to support the Common Agricultural Policy's indicator I.21, which is defined as the “share of agricultural land covered with landscape features”. We focused on the 2018 products, which represent SWF in different formats (raster/vector), aggregation levels (presence or absence of SWF/SWF density per ha), and geometric specifications of SWF classes (Woody Vegetation/Small Woody Vegetation), among others. The SWF category encompasses landscape elements such as hedgerows, tree alignments, or scrubs along field margins; riparian woody vegetation along waterways and streams; scattered groups of trees or scrubs; isolated trees or scrubs; and tree alignments or scrubs along roads. As soon as we started implementing the first step of the metrological principles, defining the measurand and the measurement model, on the SWF dataset, challenges arose due to the absence of a clear measurand for categories like SWF. The second step was to establish traceability with diagrams. To do this, we first created a detailed workflow of the SWF production process. This overview of the complex process helped identify the various sources of uncertainty likely to occur in each data transformation operation. The workflow involves processing high-resolution 2018 satellite imagery from multiple sources to create a detailed vegetation probability map. Initially, images were analysed to extract various features, which were then used in a hierarchical segmentation process and incorporated into a Random Forest model to predict vegetation presence. Probabilities from overlapping images were combined, refined, and filtered to differentiate between woody and non-woody vegetation and to remove or connect certain features based on size and proximity criteria.
The workflow also integrates a forest mask to exclude forested areas and employs a tiling system for organizing the data. Subsequent manual refinement uses ultra-high-resolution imagery and field data to adjust the woody vegetation map, which is then resampled to a 5 m resolution. The resulting Woody Vegetation Mask (WVM) is converted into a vector format, post-processed for smoothness and simplicity, and then rasterized back into a 5 m resolution raster. The final output is a SWF density map at a 100 m resolution, derived by aggregating the SWF raster. We based our work on the available documentation, e.g. the validation report and user manual, which did not fully explain the algorithm. As a result, our proposed workflow was incomplete. Nevertheless, we created a simplified uncertainty tree diagram that highlights the complexity and dependencies among different parts of the SWF product derivation process. Due to the lack of comprehensive documentation, the uncertainty propagation framework could not be fully implemented. Therefore, we provided some recommendations to be implemented alongside the development of a GUM framework for categorical data, which will facilitate uncertainty assessment in categorical EO data in the future. Our first recommendation is to provide an Algorithm Theoretical Basis Document (ATBD) with extensive details about EO dataset production. A further recommendation relates to the fact that the input space data, such as Very High Resolution imagery (Pléiades 1A and 1B, SuperView-1, KOMPSAT-3 and 3A, PlanetScope, etc.), field data, Google Earth Pro imagery, Microsoft Bing Maps, etc., lack uncertainty quantification. Furthermore, these products may also depend on other existing EO data products. Estimating the uncertainties of SWF by attempting to quantify the uncertainties of all input data would result in an unmanageable process.
Therefore, it is essential that all input data come with their own uncertainty estimates, which can then be accounted for in the uncertainty assessment. Copernicus and other EO products are indispensable for environmental and climatic monitoring, as well as for defining and measuring policy indicators, and their reliability depends on the estimation of uncertainties. While numerical EO data benefit from established uncertainty frameworks like the GUM, categorical data still lack a standardized approach, leaving a critical gap in the field due to a lack of clarity on how to address the categorical nature of the data and its data-driven models. The exploratory application of metrological principles for numerical data to the HRL SWF categorical product underscores the significant challenges associated with quantifying uncertainty for categorical EO data. These challenges include the imperative for exhaustive documentation and the necessity of accompanying uncertainty estimates for each input dataset. It is crucial that the EO and metrology communities confront and resolve these issues, so that, when a GUM framework for categorical data is established, these gaps do not stand in the way of its implementation.
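The probability-combination and size-filtering steps of the SWF workflow described above can be illustrated with a minimal sketch. The mean combination rule, the 0.5 threshold, and the 3-pixel minimum size below are hypothetical placeholders, since the operational rules are not fully documented:

```python
import numpy as np

def combine_and_filter(prob_maps, threshold=0.5, min_size=3):
    """Sketch of two SWF workflow steps: combine per-image vegetation
    probabilities (here a simple mean; the operational rule is undocumented)
    and drop detected segments smaller than min_size pixels, a stand-in
    for the size/proximity criteria mentioned in the abstract."""
    mean_prob = np.mean(prob_maps, axis=0)
    mask = mean_prob >= threshold
    # Label 4-connected components with a simple flood fill.
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                labels[i, j] = current
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
    # Remove segments below the minimum size.
    for lab in range(1, current + 1):
        if np.count_nonzero(labels == lab) < min_size:
            mask[labels == lab] = False
    return mask

# Toy example: a 3-pixel linear feature is kept, an isolated pixel is dropped.
probs = np.zeros((5, 5))
probs[1:4, 1] = 0.9
probs[0, 4] = 0.9
swf_mask = combine_and_filter([probs])
```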
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Uncertainty Analysis of Sentinel-3 SLSTR Radiometry

Authors: Dave Smith
Affiliations: RAL Space
The usefulness of SLSTR data for climate applications, in fact any application, requires that measurement uncertainties are reported and the traceability chain documented. Historically, providing uncertainty information in satellite datasets has been inconsistent, but recent efforts have established standardised methods and nomenclatures. The approach defined by the FIDUCEO project is to begin with the basic calibration model, build an effects tree to identify all sources of uncertainty, describe and document the effects, and propagate the effects through the model. SLSTR covers two wavelength ranges: Thermal InfraRed (TIR) channels at 3.7 µm, 10.8 µm and 12 µm used for measuring temperature, and VISible to Short Wave InfraRed (VIS/SWIR) channels at 0.555 µm, 0.660 µm, 0.870 µm, 1.375 µm, 1.64 µm and 2.25 µm to measure reflected sunlight for cloud and aerosol detection. For the TIR channels, the calibration is traced via two on-board blackbody sources, where the primary effects are the blackbody emissivity and temperature measurements. Nevertheless, other effects contribute, in particular the detector non-linearity and the spectral response. For the VIS/SWIR channels, the uncertainty is dependent on a diffuser-based VISCAL system. As with the TIR channels, other contributors include non-linearity and spectral response. In addition, the VIS/SWIR channels are also dependent on uncertainties in so-called vicarious calibration methods used to intercompare with reference sensors and to determine long-term drift. In this paper we present the status of the uncertainty analysis of the SLSTR L1 products and explain how to access the per-pixel uncertainty information provided in the Level-1 products. In particular, the paper reports on updates to the uncertainty estimates for the VIS/SWIR channels based on reanalysis of the pre-launch and post-launch calibration results.
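In the simplest case of uncorrelated effects, the effects-tree propagation described above reduces to a per-pixel quadrature (root-sum-of-squares) combination. A minimal illustration; the effect names and magnitudes (in kelvin) below are hypothetical, not SLSTR's actual budget:

```python
import numpy as np

def combine_effects(components):
    """Combine independent uncertainty effects in quadrature (root sum of
    squares), the simplest case of effects-table propagation when the
    effects are uncorrelated between each other."""
    stacked = np.stack(list(components.values()))
    return np.sqrt((stacked ** 2).sum(axis=0))

# Hypothetical per-pixel effects for a thermal infrared channel, on a
# tiny 2x2 image (values in brightness-temperature kelvin):
effects = {
    "blackbody_temperature": np.full((2, 2), 0.03),
    "blackbody_emissivity": np.full((2, 2), 0.02),
    "detector_noise": np.array([[0.05, 0.04], [0.06, 0.05]]),
    "non_linearity": np.full((2, 2), 0.01),
}
u_total = combine_effects(effects)
```

Correlated effects would instead require carrying error-correlation information through the combination, which is where the full FIDUCEO treatment departs from this sketch.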
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Uncertainty Budget Investigations in the Evaluation of SMOS Satellite Soil Moisture Data

Authors: François Gibon, Arnaud Mialon, Yann Kerr, Philippe Richaume, Nemesio Rodriguez-Fernandez, Daniel Aberer, Nicolas Bader, Peter Balogh, Wouter Dorigo, Alexander Gruber, Johanna Lems, Wolfgang Preimesberger, Alexander Boresch, Monika Tercjak, Raffaele Crapolicchio
Affiliations: CESBIO, Université de Toulouse, CNES/CNRS/INRAe/IRD/UPS, TU Wien, Department of Geodesy and Geoinformation, Angewandte Wissenschaft Software und Technologie GmbH (AWST), European Space Agency (ESA) - ESRIN
Evaluating the uncertainties of satellite soil moisture products (such as SMOS or SMAP) is crucial for many aspects, such as application reliability, evaluation of version updates, or mission requirement assessment. In the case of soil moisture estimation at the satellite footprint scale, the main challenge in deriving the uncertainty is finding a reference to compare with: there is no soil moisture measurement at the same scale. The commonly used references are ground measurements, despite the scale mismatch with the satellite footprint (~10⁵ m). As a consequence, the computed uncertainty u_total of the satellite data, using the in situ measurement as reference, includes different sources: u_total² = u_sat² + u_ref² + u_scale² + cov. First, the satellite component, u_sat, is the one we want to estimate; it is influenced by the radiometric noise, the uncertainty of the modelling, the uncertainty of the auxiliary databases, and radio frequency interference. Second, the reference component, u_ref, is influenced by the uncertainty of the probe calibration, the technology representativeness, and the installation quality. Third, u_scale represents the spatial scale mismatch between the in situ measurement and the satellite footprint. Fourth, the covariance term, cov, reflects the satellite footprint content in terms of the nature of the surface, the level of heterogeneity, and the spatial distribution of the different land covers, which can be considered as correlated with the other uncertainty sources. As part of ESA's Fiducial Reference Measurement for Soil Moisture (FRM4SM) project, this study presents our investigations into the description of the SMOS soil moisture uncertainty budget using the ISMN in situ measurement network as reference.
To estimate the different components of the budget, we used various methods and sources, from sensor uncertainties provided by manufacturers to more complex sensitivity analyses of the SMOS performance metrics with respect to different features (Shannon index of the footprint, colocation strategy with the probes, …). This presentation highlights the need to take all contributions into account when interpreting uncertainties of satellite data.
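The budget equation above can be inverted to isolate the satellite component once the other terms are estimated. A minimal sketch, with purely hypothetical numbers (in m³/m³) and the covariance term neglected:

```python
import math

def satellite_uncertainty(u_total, u_ref, u_scale, cov=0.0):
    """Invert the budget u_total^2 = u_sat^2 + u_ref^2 + u_scale^2 + cov
    to isolate the satellite contribution u_sat."""
    u_sat_sq = u_total ** 2 - u_ref ** 2 - u_scale ** 2 - cov
    if u_sat_sq < 0:
        raise ValueError("inconsistent budget: reference/scale terms exceed total")
    return math.sqrt(u_sat_sq)

# Hypothetical values: total spread against the in situ reference of 0.06,
# probe uncertainty 0.03, upscaling mismatch 0.04, covariance neglected.
u_sat = satellite_uncertainty(0.06, 0.03, 0.04)
```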
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Can the Sentinel-3 Next Generation Topography Altimeter Mission's continuity with Sentinel-3 be assessed with a 4-Hour Tandem Phase?

Authors: Noemie Lalau, Thomas Vaujour, Michaël Ablain, Gérald Dibarboure, François Boy, Nicolas Picot, Alejandro Egido, Craig Donlon, Robert Cullen
Affiliations: Magellium, CNES, ESA-ESTEC
Since 1993, satellite altimetry missions have consistently provided long-term records of Sea Surface Height (SSH) measurements. Tandem flight phases have been crucial in verifying and ensuring consistency between successive altimetry missions. These phases have played a key role in both reference missions (Topex/Poseidon, Jason-1, Jason-2, Jason-3, Sentinel-6 Michael Freilich) and complementary altimetry missions (such as Sentinel-3A and Sentinel-3B). During a tandem phase, two consecutive altimetry missions closely follow each other on the same ground track with a time interval shorter than a minute. As they observe the same ocean simultaneously, measurement differences primarily reflect relative errors. These errors arise from instrumental variations in altimeter characteristics (e.g., altimeter noise), measurement processing (e.g., retracking algorithm), precise orbit determination, and mean sea surface. By averaging relative errors over periods exceeding 100 days, systematic instrumental differences can be accurately determined, enabling precise calibration of the two altimeters. For the future altimetry constellation, Copernicus Sentinel-3 Next Generation Topography (S3NG-TOPO) will succeed Sentinel-3 (S3), supporting various Copernicus applications. The S3NG-TOPO mission aims at maintaining the continuity of existing S3 nadir-altimeter measurements while enhancing measurement capabilities and performance. A tandem phase between S3 and S3NG is expected to evaluate this continuity. Due to design constraints, S3NG satellites will be launched into a sun-synchronous orbit with a Local Time of Ascending Node (LTAN) at 6 pm, differing from the current S3 constellation's 10 pm LTAN. As a result, the tandem phase between S3 and S3NG satellites will have a 4-hour separation instead of the current few seconds. This study evaluates the impact of the 4-hour tandem phase on the assessment of S3NG-TOPO's continuity with S3. 
Using the exceptional distribution of crossover measurements between KaRIn (swath) and S3 (nadir), we analyzed the variance of ocean variability and altimetry errors at 4-hour intervals. Additionally, we simulated a 4-hour tandem phase between S3A and S3B by adding random noise, representing ocean variability and altimetry errors at a 4-hour interval, to the SSH differences of the real tandem phase between S3A and S3B (separated by a few seconds). Our findings indicate that the 4-hour tandem phase offers intermediate continuity performance between a classical tandem phase and a non-tandem scenario. For example, at regional scales of about 1000 km, a 4-hour tandem phase of 6 months would achieve an offset uncertainty of +/- 5 mm (at 1-sigma), while a classical tandem phase gives a result close to +/- 2 mm (at 1-sigma), and a non-tandem configuration would be above +/- 20 mm (at 1-sigma). This finding highlights the 4-hour tandem phase's potential to meet continuity requirements, depending on user needs.
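The mechanism behind these numbers can be illustrated with a toy Monte Carlo, assuming a white-noise model for the daily SSH differences; the per-day scatter values below are hypothetical placeholders, not the study's actual figures:

```python
import numpy as np

rng = np.random.default_rng(0)

def offset_uncertainty(sigma_diff_mm, n_days, n_trials=2000):
    """Monte Carlo estimate of the 1-sigma uncertainty on the mean SSH
    bias after averaging daily difference values with per-day scatter
    sigma_diff_mm (illustrative white-noise model only)."""
    means = [rng.normal(0.0, sigma_diff_mm, n_days).mean()
             for _ in range(n_trials)]
    return float(np.std(means))

# With a 4-hour separation, ocean variability inflates the per-day scatter
# (hypothetical numbers), so the same 180-day phase yields a larger offset
# uncertainty than a few-seconds tandem.
u_classic = offset_uncertainty(30.0, 180)  # few-seconds tandem
u_4hour = offset_uncertainty(70.0, 180)    # 4-hour tandem
```

Under this white-noise assumption the offset uncertainty simply scales as sigma/sqrt(n_days), which is why longer tandem phases tighten the calibration.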
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Impact of geometric knowledge performance on per pixel uncertainty for non-uniform scenes: the case for CHIME and LSTM missions

Authors: Ignacio Fernández, Antonio Gabriele, Itziar Barat, Adrián García, Kevin Alonso, Ilias Manolis, Francois Bernard, Marco Celesti, Ana Bolea, Jens Nieke
Affiliations: European Space Agency, European Space Research and Technology Centre (ESA-ESTEC), Starion for ESA - European Space Agency, European Space Research Institute (ESA-ESRIN), Aurora for ESA - European Space Research and Technology Centre (ESA-ESTEC)
Recent advances in quantitative remote sensing have resulted in the development of per-pixel uncertainty estimation across the different data processing levels. Missions such as Sentinel-2 and Sentinel-3 have successfully implemented per-pixel uncertainty propagation, which is now disseminated to users, in the case of Sentinel-2 as a tool (i.e. the RUT) and for Sentinel-3 OLCI integrated as part of the product. A rigorous metrological framework has been established thanks to projects like FIDUCEO and QA4EO, paving the way for a consistent approach to propagating uncertainties for future Earth Observation missions. However, the main focus so far has been on the definition and propagation of radiometric uncertainties, whereas other sources of uncertainty, like the impact of geometric knowledge error, have remained under-explored. For heterogeneous scenes, this source of uncertainty can emerge as an important driver, particularly for pixels whose radiometric contrast with adjacent pixels is high. Multiple applications of the future Copernicus Expansion Missions CHIME and LSTM, for example precision farming, crop health monitoring, mineral mapping, water quality or land cover mapping, are required at high resolution and will experience cases where this source of uncertainty becomes one of the main drivers, or even the dominant one, of the total per-pixel uncertainty. This study aims to fill this gap by proposing a first definition of the impact of geometric knowledge performance on the total uncertainty. The goal has been to develop a simple algorithm from first principles that can be easily implemented in operational pipelines without significant impact on timeliness or memory needs, but that at the same time provides results comparable to, or enveloping, more dedicated Monte Carlo analyses. The algorithm has first been validated on a 3x3 pixel grid for any pixel radiometric content combination.
The validation results demonstrate that the algorithm matches and envelopes the Monte Carlo results well for any subpixel geolocation error performance and radiometric content combination. We then applied the proposed algorithm to actual data products of relevant use cases for both the LSTM and CHIME missions. The presentation will discuss the algorithm and share these first results, elaborate on its future implementation in each mission's data processing pipeline, and comment on its limitations, further potential applications and the way forward.
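The kind of Monte Carlo analysis the algorithm is validated against can be sketched as follows, using bilinear resampling as a simple stand-in model (the operational resampling and the proposed algorithm itself are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

def geolocation_uncertainty(grid, sigma_pix, n=5000):
    """Monte Carlo estimate of the radiometric standard uncertainty of the
    centre pixel of a 3x3 grid caused by a Gaussian subpixel geolocation
    error (sigma_pix, in pixel units), using bilinear interpolation as a
    simple resampling model (illustrative only)."""
    samples = []
    for _ in range(n):
        dy, dx = rng.normal(0.0, sigma_pix, 2)
        y = min(max(1.0 + dy, 0.0), 2.0)  # displaced centre, clipped to grid
        x = min(max(1.0 + dx, 0.0), 2.0)
        iy, ix = min(int(y), 1), min(int(x), 1)
        fy, fx = y - iy, x - ix
        val = (grid[iy, ix] * (1 - fy) * (1 - fx)
               + grid[iy, ix + 1] * (1 - fy) * fx
               + grid[iy + 1, ix] * fy * (1 - fx)
               + grid[iy + 1, ix + 1] * fy * fx)
        samples.append(val)
    return float(np.std(samples))

# High-contrast scene (bright centre, dark neighbours) vs uniform scene:
contrast = np.array([[0.1, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.1]])
uniform = np.full((3, 3), 0.5)
u_contrast = geolocation_uncertainty(contrast, 0.1)
u_uniform = geolocation_uncertainty(uniform, 0.1)
```

As the abstract argues, the geometric contribution vanishes for uniform scenes and grows with the radiometric contrast to adjacent pixels.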
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Geodetic Datum Connection using the Integrated Geodetic Reference Station (IGRS), an Evaluation of Efficacy

Authors: Ramon Hanssen, Freek van Leijen, Alex Lapadat, Dr. Hans van der Marel, Yustisi Lumban-Gaol, Philip Conroy, Simon van Diepen, Wietske Brouwer, Yuqing Wang, Jurjen Kamphuis, John Zouros
Affiliations: Delft University of Technology (TU Delft)
Estimates related to the elevation of (objects on) the Earth's surface can be obtained from different geodetic techniques, such as leveling, photogrammetry, airborne laser scanning (ALS), GNSS, gravity, or satellite radar interferometry (InSAR). Each technique works in a local geodetic datum, and these datums need to be connected in order to merge their results. While InSAR results can be conceptually viewed as double-difference values, i.e., spatio-temporal differences in position between at least two distinct points at two distinct epochs, in fact this is not how they are produced. The primary InSAR estimates are not obtained by differencing earlier position estimates, but are direct spatio-temporal observables irrespective of their positions. Simultaneously but independently, spatial position differences can be estimated as well, albeit with a precision that is three orders of magnitude worse and insufficient for estimating sub-decimeter displacements. Since both the elevations and the displacements are estimated independently relative to a reference point, the datum connection needs to be performed at two levels simultaneously. First, the 3D position of the reference point needs to be known in the geodetic datum of choice, typically with decimeter-level precision, at an arbitrary epoch within the time interval covered by the time series. Second, the temporal changes in the position of the reference point need to be known, relative to a reference epoch within the time interval covered by the time series, at least in the Line-of-Sight, with a precision no worse than a few millimeters. These two requirements for proper datum connection impose strict conditions on the choice of the reference point, or more precisely, the reference point scatterer (RPS). To allow for a quantitative quality assessment, this cannot be based on 'assumptions of stability' or 'nearby' benchmarks that are not physically connected to the RPS.
The only way to guarantee that the elevation h(t₀) (as part of the 3D position) as well as the 3D displacement δ₀(t) at each epoch are connected to a geodetic datum is to provide a physical, mechanical connection between the radar scatterer, the GNSS antenna, the leveling benchmark, and the laser scanning surface. In this contribution, we present the design, implementation, methodology, and analysis of the Integrated Geodetic Reference Station (IGRS), a dedicated structure enabling the mechanical coupling of all the geodetic techniques mentioned above. With observations ranging back to 2018 and about 60 stations established over several European countries, it is now possible to evaluate its efficacy and impact, and to provide recommendations for further deployment and improvement. The impact and relevance of the adoption of integrated geodetic reference stations is that, for the first time, all the mentioned geodetic techniques can be brought into the same reference system. This enables, for example, the link between sea level variability, including sea level rise, and land subsidence and uplift; an important challenge for many coastal areas. Apart from solving the scientific challenges involved, this facilitates decision making by authorities, as well as the operationalization of services.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: In preparation for the forthcoming Sentinel-4 and Sentinel-5 missions: Validation of tropospheric and total NO2 column products over Thessaloniki, Greece

Authors: Dimitris Karagkiozidis, Dr. MariLiza Koukouli, Mr. Dimitris Nikolis, Prof. Alkiviadis Bais, Prof. Dimitris Balis
Affiliations: Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki
Monitoring of trace gases in the atmosphere has been routinely performed at the Laboratory of Atmospheric Physics (LAP; 40.633°N, 22.956°E, 60 m a.s.l.), Thessaloniki, Greece, for over a decade. In the troposphere, trace gas concentrations are monitored by multiple ground-based instruments that rely on Multi Axis Differential Optical Absorption Spectroscopy (MAX-DOAS), a technique that is sensitive to numerous absorbers, including Nitrogen Dioxide (NO2), with the highest sensitivity in the lowest few kilometers of the atmosphere. In late 2022, a Pandora 2S instrument was installed at LAP as part of the Pandonia Global Network (PGN), thus further improving the air quality monitoring capabilities at Thessaloniki. Pandora provides near-real-time, centrally processed and quality-assured data of tropospheric trace gas vertical columns using off-axis measurements at low elevation angles, similar to the other MAX-DOAS instruments operating at LAP. Pandora also performs regular direct sun observations. By applying Direct Sun (DS) DOAS to the measured spectra, a technique that is equally sensitive to tropospheric and stratospheric absorbers, total columns of NO2 are also retrieved and distributed as an operational product by PGN. The launch of Sentinel-4 into a geostationary orbit brings significant enhancements in air quality monitoring and will allow continuous monitoring of NO2 concentrations from space, which, however, poses new challenges for its validation. For the first time, ground-based systems will be called upon to demonstrate the ability to provide consolidated uncertainty characterization for highly diurnally variable space-borne observations. The ground-based MAX-DOAS and Pandora instruments have a high temporal resolution, and the diurnal cycles of NO2 have been well observed, showing characteristic patterns across different seasons.
In this study we present how the MAX-DOAS and Pandora instruments of LAP can be deployed to validate tropospheric and total NO2 data from the forthcoming Sentinel-4 and Sentinel-5 satellites. The uncertainty of the satellite-derived products in capturing similar diurnal patterns can be investigated in relation to known geophysical parameters, such as the solar zenith angle, the cloud fraction, and the averaging radius around the measurement site. Furthermore, in tandem with the instruments at LAP, located near the urban city centre of Thessaloniki, an additional MAX-DOAS instrument operates in a suburban area a few kilometers away, thus allowing for the evaluation, validation and bias assessment of the new satellite instruments under different pollution conditions.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: St3TART-FO: Establishing a framework for operational Fiducial Reference Measurements (FRM) for Sentinel-3 Hydro-Cryo Altimetry products and beyond

Authors: Claire Miller, Jean-Christophe Poisson, Henriette Skourup, Sara Fleury, Vincent Favier, Geir Moholdt, Eva Le Merle, Marine Dechamp-Guillaume, Christian Haas, Renée M.F. Hansen, Ghislain Picard, Emma Woolliams, Vincent Boulenger, Julien Sarrau, Dominique Segura, Jaoudat Sabalbal, Valentin Fouqueau, Rémy Lopez, Sebastian B. Simonsen, Sajedeh Behnia, Thomas Erni, Laurent Arnaud, Emmanuel Lemeur, Dr. Thomas Krumpen, Nicolas Taburet, Julien Renou, Marie Chapellier, Jérémie Aublanc, Angelica Tarpanelli, Nico Sneeuw, Mohammad Tourian, James Foster, Frédéric Viviver, Antonio Lourenco, Robert Ricker, Rolf-Ole Rydeng Jenssen, Hervé Yesou, Maxime Azzoni, Sabrine Amzil, Thomas Le Dauphin, Pascal Bonnefond, Olivier Laurain, Mahmoud El Hajj, Nicolas Picot, Filomena Catapano, Pierre Femenias
Affiliations: NOVELTIS, vorteX-io, DTU Space, LEGOS, IGE, NPI, AWI, NPL, CLS, CNR-IRPI, GIS, LOCEAN, NORCE, SERTIT, SYRTE, CNES, ESA ESRIN
The Copernicus Sentinel-3 Surface Topography Mission (S3 STM) provides critical surface elevation information for sea ice, land ice and inland waters (Hydro-Cryo observations). These valuable data are acquired through its high-resolution synthetic aperture radar altimeter (SRAL), a unique orbit that reaches the polar regions up to 81.5°N, and its dual-mission constellation (S3A+S3B) providing semi-monthly (~15 days) repeat orbits. To ensure reliable measurements and maximize the mission's benefits, adequate validation is essential. This process involves comparing the mission's geophysical retrieval methods, processing algorithms, and corrections against independent observations known as Fiducial Reference Measurements (FRM). FRM are independent, tailored and fully characterised reference measurements with traceable uncertainty characterisation, which shall provide users with full confidence in the observations through validation of the satellite observations. The St3TART-FO (Sentinel-3 Topography mission Assessment through Reference Techniques Follow-On) project activities address the provision of an operational framework for FRM to support validation activities and foster the exploitation of the Sentinel-3 SAR altimeter Hydro-Cryo Thematic data products. The project tackles the specific challenges associated with three types of surfaces: sea ice, land ice and inland waters. To achieve this, St3TART-FO sets forth several objectives, including the identification and operation of super and opportunity sites; the acquisition, processing, and delivery of FRM for Calibration/Validation (Cal/Val) activities; and the characterization of uncertainties associated with each FRM data product and measurand. It also involves the exploitation of FRM for Cal/Val activities, and the preparation of a roadmap for future altimetry missions beyond S3 STM, focusing on the cryosphere and hydrology.
We will present the project's objectives, tools, current accomplishments, as well as future directions that will pave the way for providing FRM tailored to hydro-cryo altimetry thematic satellite observations from S3 STM and other future altimetry missions such as CRISTAL and S3 Next-Generation Topography as well as potential synergies with the Copernicus CIMR expansion mission.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: CHIME Level 2 Processor Traceable Uncertainty Propagation – Reflectance Retrieval and Adjacency Correction

Authors: Mike Werfeli, Dr. Andreas Hueni, Pieter de Vis, Carmen Meiller, Astrid M. Zimmermann, Dr. Kevin Alonso Gonzalez
Affiliations: Remote Sensing Laboratories University of Zurich, National Physical Laboratory NPL, European Space Research Institute ESA ESRIN
With increasing interest in hyperspectral space-based imaging sensors, the European Space Agency (ESA) is developing a new space-based Earth observation (EO) satellite mission (CHIME). To get from digital counts of photons to top-of-atmosphere (TOA) radiances (L1B) and finally to bottom-of-atmosphere (BOA) reflectances (L2), the measured data are processed in several steps. In L1B the digital numbers are converted to radiances, correcting for sensor characteristics. In the subsequent L2 step the radiances are converted to surface reflectances, removing atmospheric features from the measured signal. While correcting the signal, the measurement uncertainties are propagated through every processing step, to arrive at a surface reflectance (SR) measurement with a traceable uncertainty associated with it. Here we focus on the uncertainty propagation through the adjacency effect correction and the SR retrieval module of the CHIME L2 processor. The reflectance retrieval and adjacency correction are performed in the same processing module, where the adjacency is corrected on an initial surface reflectance image. This initial surface reflectance is computed under the assumption of flat terrain, as long as less than 1% of the pixels have a slope larger than six degrees. The initial reflectance cube is then corrected for adjacency effects (atmospheric blurring). This step removes photons recorded at a specific pixel location that were scattered from the surroundings into the sensor's line of sight. These effects are corrected by subtracting a weighted mean of the surrounding pixels, scaled with a ratio of upwelling/downwelling transmittance. The reflectance retrieval is done with three iterations, in which the four main radiance sources are considered and corrected for. After each iteration a mean reflectance is computed with a lowpass filter, which is fed into the next iteration step.
The propagation of the uncertainties and error-correlation matrices associated with the adjacency correction and the surface reflectance retrieval is based on the QA4EO guidelines and is implemented with the CoMet toolkit. The uncertainties are processed using the Monte Carlo (MC) method. The different measurement functions are run on the MC samples passed on from the previous L2 processing steps. For the adjacency correction algorithm, different sources of uncertainty have been identified and quantified (e.g. adjacency regions). These effects have been investigated by running the respective algorithm module on generic radiance fields. Such a radiance field was set up in the shape of a checkerboard containing dark dense vegetation (DDV) and bright sand pixels next to each other. This radiance field was used as “ground truth”, to which pre-defined atmospheric conditions were added. Running the adjacency correction on this radiance field made it possible to quantify the uncertainties related to this processing module as well as to isolate the sensitivities to the input uncertainties.
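The weighted-mean adjacency subtraction described above, and the checkerboard test scene, can be sketched as follows; the kernel size and the transmittance ratio are hypothetical placeholders rather than the CHIME processor's actual parameters:

```python
import numpy as np

def adjacency_correct(rho, t_ratio, kernel_half=1):
    """Remove adjacency (atmospheric blurring) from an initial surface
    reflectance image rho by subtracting, per pixel, the mean of its
    neighbourhood scaled by an upwelling/downwelling transmittance ratio
    t_ratio (simplified sketch of the scheme described in the abstract)."""
    pad = np.pad(rho, kernel_half, mode="edge")
    out = np.empty_like(rho)
    n = 2 * kernel_half + 1
    for i in range(rho.shape[0]):
        for j in range(rho.shape[1]):
            window = pad[i:i + n, j:j + n]
            background = (window.sum() - rho[i, j]) / (n * n - 1)
            # Brightening from bright surroundings is removed; equivalently
            # rho is pushed away from the local background mean.
            out[i, j] = rho[i, j] + t_ratio * (rho[i, j] - background)
    return out

# Checkerboard of dark (0.1) and bright (0.5) pixels, as in the abstract's
# DDV/sand test scene (values hypothetical):
checker = (np.indices((4, 4)).sum(axis=0) % 2) * 0.4 + 0.1
corrected = adjacency_correct(checker, t_ratio=0.2)
```

On this checkerboard every interior background mean is 0.3, so dark pixels become darker (0.1 to 0.06) and bright pixels brighter (0.5 to 0.54), which is the expected deblurring behaviour.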
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Uncertainty budget for Land Cover categorical variables: the ESA CCI Land Cover Approach

Authors: Celine Lamarche, Grit Kirches, Ralf Quast, Clément Albergel, Pierre Defourny
Affiliations: UCLouvain-Geomatics (Belgium), Brockmann Consult GmbH, ESA ECSAT
Errors in Earth Observation (EO) data propagate through processing chains, from raw data acquisition to final products, impacting the reliability of Climate Data Records (CDRs) for Essential Climate Variables (ECVs) such as Land Cover (LC). While metrology principles are increasingly applied to end-to-end uncertainty quantification in EO pre-processing, including surface directional reflectance derivation, their application to LC classification remains limited. Current methods applied to categorical variables like LC maps primarily address class accuracy, class area variability, or posterior maximum probability, leaving the full uncertainty budget underexplored. Here, as part of our global objective to develop an end-to-end uncertainty framework for the ESA CCI Land Cover processing chain, we present a method to quantify the influence of pre-processing uncertainties on LC map uncertainties. Using Monte Carlo simulations and Random Forest (RF) classifications on one year of Sentinel-3 OLCI data, we assessed how pre-processing uncertainties impact categorical uncertainties across study sites in Africa, Europe, and South America. We introduce three novel per-pixel indicators—PMax*, Std*, and ERP*—to spatially quantify uncertainty, leveraging class membership probabilities from ensemble RF classifications. Our results show that extending compositing periods reduces surface directional reflectance uncertainty by 10% to 77%, depending on site conditions. We disentangle, quantify, and discuss uncertainty sources stemming from pre-processing errors and RF classifications. Additionally, ensemble LC maps reduce uncertainty by 52% by improving patch compactness and minimizing isolated pixels. PMax* and ERP* prove particularly effective in identifying mixed-pixel uncertainty, with mosaic classes exhibiting the highest sensitivity. 
These findings highlight the critical role of various pre-processing choices in shaping LC map uncertainties, demonstrating that compositing duration and ensemble classification could significantly impact map stability and reliability. By providing spatially explicit indicators for uncertainty quantification, our approach contributes to a comprehensive end-to-end framework for quantifying uncertainty and improving the accuracy and consistency of EO-derived categorical LC products.
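Per-pixel indicators derived from ensemble class membership probabilities can be sketched generically. Note that the abstract does not define PMax*, Std* and ERP* exactly, so the quantities below (mean maximum probability and ensemble spread of the winning class) are plausible analogues only, not the paper's indicators:

```python
import numpy as np

def per_pixel_indicators(probs):
    """Generic per-pixel uncertainty indicators from an ensemble of class
    membership probabilities, shape (n_members, n_classes): the winning
    class, the confidence in it, and the member disagreement about it."""
    mean_p = probs.mean(axis=0)              # ensemble-mean class probabilities
    winner = int(np.argmax(mean_p))          # most likely class
    p_max = float(mean_p[winner])            # confidence in the winning class
    spread = float(probs[:, winner].std())   # ensemble disagreement
    return winner, p_max, spread

# Confident pixel vs mixed pixel (hypothetical 3-member, 3-class ensemble):
confident = np.array([[0.90, 0.05, 0.05], [0.85, 0.10, 0.05], [0.92, 0.04, 0.04]])
mixed = np.array([[0.50, 0.45, 0.05], [0.30, 0.65, 0.05], [0.45, 0.50, 0.05]])
stable = per_pixel_indicators(confident)
ambiguous = per_pixel_indicators(mixed)
```

Mixed pixels show a lower maximum probability and a larger ensemble spread, which is the behaviour the abstract reports for mosaic classes.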
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Uncertainty-aware building height estimation

Authors: Ritu Yadav, Andrea Nascetti, Yifang Ban
Affiliations: KTH Royal Institute of Technology
Building heights, the vertical dimension of buildings, play a key role in various aspects of sustainable urban planning and monitoring, from land use consumption rate to energy consumption, urban heat islands, population estimation, disaster risk assessment and others. With the availability of open-source satellite data, several studies have successfully explored estimating building height at different scales (city, national, continental, and global) and spatial resolutions (1 km to 2.5 m). While advancements in machine learning and computer vision have significantly enhanced the accuracy of height estimation from open-source satellite imagery, the quantification of uncertainty in the estimated values is missing. There are uncertainties in the estimated heights due to data quality, atmospheric interference, sensor limitations, and model imperfections. Incorporating uncertainty quantification and a careful evaluation strategy into the pipeline provides a clear understanding of potential error margins. This not only offers an assessment of the robustness and reliability of the model when applied to a new region, but knowing the uncertainty in estimated heights also helps stakeholders make informed and careful decisions. We propose an uncertainty-aware framework for building height estimation. The framework targets two types of uncertainty: aleatoric uncertainty and epistemic uncertainty. Aleatoric uncertainty is modeled with a novel deep-learning regression model that predicts the height distribution. With our proposed model we tackle the well-known saturation problem in height estimation. Combined with epistemic uncertainty, we provide an in-depth, comprehensive evaluation of estimated building heights on both in-distribution and out-of-distribution data. The results are evaluated using building height ground truth derived from high-resolution aerial orthophoto imagery and LiDAR point clouds.
The dataset covers urban areas with diverse density, structural and environmental characteristics. By advancing the state of the art in uncertainty-aware remote sensing, we contribute to the broader goal of making deep learning models more transparent and reliable. Our approach underscores the importance of embracing uncertainty as a core component of geospatial analysis.
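Aleatoric uncertainty of the kind described above is commonly realised by having the regression model predict a per-pixel mean and variance, trained with a Gaussian negative log-likelihood. A minimal stdlib sketch of that loss (illustrative only; the abstract does not specify the authors' actual model):

```python
import math

def gaussian_nll(y_true, mu, log_var):
    """Per-pixel Gaussian negative log-likelihood for a model that
    predicts a mean height `mu` and a log-variance `log_var`.
    Pixels the data cannot constrain are free to report a large
    variance instead of being forced into an overconfident guess."""
    return 0.5 * (log_var
                  + (y_true - mu) ** 2 * math.exp(-log_var)
                  + math.log(2 * math.pi))

# An accurate confident prediction beats a hedged wrong one, which in
# turn beats a confident wrong one.
confident_right = gaussian_nll(10.0, 10.0, math.log(0.25))
hedged_wrong    = gaussian_nll(10.0, 16.0, math.log(25.0))
confident_wrong = gaussian_nll(10.0, 16.0, math.log(0.25))
assert confident_right < hedged_wrong < confident_wrong
```

Minimising this loss jointly calibrates the predicted heights and their per-pixel uncertainties.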
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: A first prototype of Sentinel-2 Level-2A uncertainty products

Authors: Javier Gorroño, Luis Guanter, Lukas Graf, Ferran Gascon
Affiliations: Universitat Politècnica de València, Terensis GmbH, European Space Agency
The Copernicus Sentinel-2 (S2) satellite mission acquires high-spatial-resolution optical imagery over land and coastal areas. The mission delivers uncertainty products through an offline tool named the Radiometric Uncertainty Tool (RUT). Access to uncertainty information alongside S2 data products can benefit several applications, including a better constraint of retrieval algorithms, the propagation of retrieval uncertainty further downstream, and support for informed decisions by end-users. In a first stage of the development, Level-1 products (i.e., top-of-atmosphere reflectance) were considered. This study presents a second phase of the tool, which includes the estimation of uncertainty associated with S2 L2A data products (i.e., surface reflectance). The prototype code is available at https://doi.org/10.5281/zenodo.11971517 and delivers both L1 and L2A uncertainty estimates together with spectral error correlation. The uncertainty considers the main instrument and calibration uncertainty sources associated with the L1 estimates as well as the main uncertainty sources arising from the atmospheric correction. We present surface reflectance uncertainty results over a range of scenes that exemplify the variation of uncertainty as well as the spectral error correlation. The development of L2A uncertainty estimates presents a set of challenges that have been studied and discussed in detail. These include the characterisation of some uncertainty sources, the uncertainty combination and the processing/dissemination of the products. Although a large number of uncertainty sources are included, some of them are not yet available or only partially assessed in this first prototype. In some cases, we have obtained uncertainty estimates from literature or validation exercises. For example, the uncertainty associated with the aerosol optical thickness (AOT) has been inferred from the validation of the S2 AOT retrieval against different AERONET stations. 
The uncertainty propagation is based on a multivariate Monte Carlo method (MCM) rather than the law of propagation of uncertainty (LPU) proposed in the first stage. This choice is driven by the non-normal error distribution associated with the L2A measurements, the nonlinear transformation through the atmospheric correction, and the absence of an explicit mathematical model for some of these transformations. A comparison of the multivariate MCM against an LPU propagation methodology indicates the limitations of the latter for scenes dominated by the atmospheric path. Finally, its implementation as operational per-pixel processing, and the dissemination of both the uncertainty and the spectral error correlation, remain challenging. Therefore, this methodology and first prototype version define a basis for further strategies towards operationalisation. These strategies could include, for example, the training of a parameterised model or the dissemination of ensemble scenarios.
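The contrast between MCM and LPU propagation can be illustrated with a toy nonlinear correction. The function below is a made-up stand-in, not the operational S2 atmospheric correction; it is merely nonlinear enough to compare the two propagation routes:

```python
import random
import statistics

random.seed(0)

def surface_refl(toa, path_refl=0.08, transmittance=0.6):
    # Hypothetical nonlinear "atmospheric correction" -- an illustrative
    # form only, not the operational S2 processor.
    x = toa - path_refl
    return x / (transmittance * (1.0 - 0.5 * x))

toa_mean, toa_u = 0.10, 0.01        # TOA reflectance and its uncertainty

# MCM: push samples of the input error distribution through the model.
samples = [surface_refl(random.gauss(toa_mean, toa_u)) for _ in range(50_000)]
mcm_u = statistics.stdev(samples)

# LPU: first-order sensitivity (finite difference) times input uncertainty.
h = 1e-6
sens = (surface_refl(toa_mean + h) - surface_refl(toa_mean - h)) / (2 * h)
lpu_u = abs(sens) * toa_u

# For this mildly nonlinear case the two agree closely; the gap grows as
# the path-dominated (more strongly nonlinear) regime is approached.
assert abs(mcm_u - lpu_u) / lpu_u < 0.05
```

Unlike LPU, the Monte Carlo samples also retain any skew of the output distribution, which is the abstract's motivation for preferring MCM.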
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Benefits of a fast-repeat phase for error/signal separation in satellite radar altimetry measurements, implications on the GMSL uncertainty budget

Authors: Pierre Prandi, Victor Quet, Etienne Jussiau, Gérald Dibarboure, Matthias Raynal, Michaël Ablain
Affiliations: Collection Localisation Satellites, Magellium, CNES
The Global Mean Sea Level (GMSL) is an Essential Climate Variable monitored using satellite radar altimetry missions over more than 30 years. Provision of an accurate uncertainty budget is critical, as it prevents misinterpretation of measurement artifacts as true geophysical signal. It is also critical to improve our knowledge of altimetry errors given the recent update of climate science requirements on the long-term stability of the GMSL record from 0.3 to 0.1 mm/yr (Meyssignac et al., 2023). In currently available GMSL uncertainty budgets (Guérou et al., 2023; Ablain et al., 2019), the uncertainties due to short-term time-correlated errors at periods shorter than 1 year are derived from the variance of the GMSL record itself. These uncertainties therefore mix measurement system uncertainties and natural geophysical ocean topography variability. The SWOT mission was launched in December 2022 and embarks a Jason-class nadir altimeter. During the first six months of the mission, SWOT flew on a one-day repeat orbit. This unprecedented revisit permits the characterization of high-frequency signals (typically with periods shorter than two months) and their contribution to the GMSL uncertainty budget for periods from two days to a few months. Using Copernicus Marine gridded sea level anomaly fields in combination with SWOT's nadir altimeter along-track data further allows the removal of large-scale, long-period ocean topography signals from SWOT nadir measurements, providing better insight into the separation of error and geophysical signal contributions. This analysis suggests that high-frequency uncertainties in nadir LRM-mode altimetry measurements of ocean surface topography typically decorrelate over 20 days and 100 km. 
This leads to an updated uncertainty allocation for short-term time-correlated errors in the GMSL uncertainty budget, both in terms of magnitude and correlation length, which reduces the GMSL trend uncertainty by up to 30% over 10-year periods.
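Why the correlation length matters for the trend uncertainty can be sketched with the standard result var(slope) = w^T C w, using OLS slope weights w and an assumed exponential error covariance C (all numbers below are illustrative, not the budget's actual allocation):

```python
import math

def trend_var(times, cov):
    """Variance of the OLS trend estimate under a full error covariance
    matrix: var(slope) = w^T C w with the usual slope weights w."""
    n = len(times)
    t_mean = sum(times) / n
    den = sum((t - t_mean) ** 2 for t in times)
    w = [(t - t_mean) / den for t in times]
    return sum(w[i] * cov[i][j] * w[j] for i in range(n) for j in range(n))

# Monthly series over 10 years; errors of 3 mm standard deviation with an
# exponential correlation of length L years (illustrative values).
times = [i / 12 for i in range(120)]
sigma = 3.0

def exp_cov(L):
    return [[sigma ** 2 * math.exp(-abs(ti - tj) / L) for tj in times]
            for ti in times]

u_20d = math.sqrt(trend_var(times, exp_cov(20 / 365)))  # ~20-day memory
u_1yr = math.sqrt(trend_var(times, exp_cov(1.0)))       # 1-year memory
# Errors that decorrelate quickly average down over the record, so the
# shorter correlation length yields the smaller trend uncertainty.
assert u_20d < u_1yr
```

This is why revising the decorrelation scale of short-term errors downward (here to ~20 days) shrinks the resulting trend uncertainty.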
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Establishment and maintenance of a cross-cutting validation framework for and validation of European Copernicus Land Monitoring Service products

Authors: Oliver Buck, Lucie Clous, Luca Kleinewillinghöfer, Tobias West, Pedro Carvalho, Lorenzo Solari, Manuel Mayr
Affiliations: EFTAS GmbH, JBA Consulting, COBA, European Environment Agency
Remote sensing-based land products are essential for monitoring and managing the Earth's landscapes. The Copernicus Land Monitoring Service (CLMS) is one of six thematic core services operated by Copernicus, the Earth observation component of the European Union's Space programme. CLMS provides a portfolio of high-resolution datasets covering the entire land domain, critical for applications such as climate action, urban planning, and biodiversity conservation. However, the reliability and trustworthiness of these products for modelling and decision support depend on rigorous quality assurance and validation. A crucial step in ensuring the usability and credibility of these products is their validation. Validation involves assessing the accuracy of land cover/use maps and other datasets by comparing them with independent reference data collected from trusted sources, such as field surveys or very high-resolution imagery. To ensure accuracy and consistency, CLMS currently implements a validation framework that defines validation methods applicable to its various products. This framework is designed to maximize synergies across different datasets while allowing for adaptations to product-specific characteristics. It addresses the different validation elements referred to in the ISO/DIS 19157:2023 standard, titled "Geographic information — Data quality", to validate the quality of geographic data. Assessments are extended to product usability in terms of conformance to selected user requirements, accessibility, and documentation quality and completeness. This paper presents the concept, setup and implementation of the CLMS Validation System Framework and Environment (VSE), which comprises optimized cross-cutting workflows for thematic and temporal accuracy assessments, implemented in a cloud-based software environment that relies exclusively on open-source technology. 
CLMS product validation results for the recent CLMS High-Resolution Layer Vegetated Land Cover Characteristics (VLCC) datasets Forest Type (FTY), Tree Cover Density (TCD) as well as the novel Crop Type (CTY) and Grassland Mowing Events/Dates (GRAME/D) datasets are presented and discussed in terms of validation data availability and coverage, independence and data gaps, thematic and overall accuracy. The VSE software solution will be presented as a tool to support efficient CLMS product validation including the archiving and documentation of CLMS product validation steps. This allows assessments to be replicated, compared and documented over time, supporting the ongoing refinement of products and methods.
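Thematic accuracy assessments of the kind the framework defines reduce to confusion-matrix statistics; a stdlib sketch of the generic ISO 19157-style metrics (illustrative toy labels, not the VSE implementation):

```python
from collections import Counter

def accuracy_metrics(reference, predicted):
    """Overall, user's and producer's accuracy from paired class labels."""
    pairs = Counter(zip(reference, predicted))
    classes = sorted(set(reference) | set(predicted))
    overall = sum(pairs[(c, c)] for c in classes) / len(reference)
    # user's accuracy: of pixels mapped as c, how many are truly c
    users = {c: pairs[(c, c)] / max(1, sum(pairs[(r, c)] for r in classes))
             for c in classes}
    # producer's accuracy: of pixels truly c, how many were mapped as c
    producers = {c: pairs[(c, c)] / max(1, sum(pairs[(c, p)] for p in classes))
                 for c in classes}
    return overall, users, producers

ref  = ["forest", "forest", "grass", "grass", "crop", "crop"]
pred = ["forest", "grass",  "grass", "grass", "crop", "crop"]
overall, users, producers = accuracy_metrics(ref, pred)
assert overall == 5 / 6
assert producers["forest"] == 0.5   # one forest point omitted (mapped grass)
assert users["grass"] == 2 / 3      # one grass commission error
```

User's and producer's accuracy separate commission from omission errors, which overall accuracy alone hides.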
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: CryoSat Long-Term Ocean Data Analysis and Validation: GOP from Baseline-C to Baseline-D

Authors: Marc Naeije, Ernst Schrama, Alessandro Di Bella
Affiliations: TU Delft / Space Engineering, ESA/ESRIN
ESA's Earth Explorer CryoSat-2 (CS2), whose main objective is to monitor ice, has also proven to provide a valuable measurement of the impact of climate change over the oceans. In the frame of the CS2 project "Long-term Ocean Data Analysis and Validation", we present our conclusions on the long-term quality and stability of the CS2 Geophysical Ocean Products (GOP) Level-2 Baseline-C data, update them with analyses of the new Baseline-D data, and extend the CAL/VAL study to polar oceans. The validation is based on cross-comparison with concurrent altimeters and altimeter products and on comparisons with concurrent fiducial reference measurements. Highlights include Baseline-C showing issues with the ionosphere and pole tide corrections, the latter giving rise to an east-west pattern in range bias. Between Synthetic Aperture Radar (SAR) and Low Resolution Mode (LRM) data, a 1.4 cm jump in range bias is explained by a 0.5 cm jump in sea state bias, which relates to a significant wave height SAR-LRM jump of 10.5 cm. The remaining 0.9 cm is due to a range bias between ascending and descending passes, exhibiting a clear north-south pattern and ascribed to a timing bias of +0.367 ms affecting both time-tag and elevation. The overall range bias of Baseline-C is established at –2.9 cm referenced to all calibrated concurrent altimeter missions. The bias drift does not exceed 0.2 mm/yr, leading to the conclusion that Baseline-C is stable and measures up to the altimeter reference missions. This is confirmed by tide gauge comparison with a selected set of 309 calibrated PSMSL tide gauges over 2010–2022, which shows an average correlation of 0.82, a mean standard deviation of 5.75 cm (common reference and GIA corrected), and a drift of 0.17 mm/yr. In conclusion, the quality, continuity, and reference of Baseline-C are exceptionally good and stable over time, and no evidence of any deterioration or platform aging has been found so far. 
It is anticipated that GOP Baseline-D data will be at least as good as Baseline-C and resolve a few of Baseline-C's issues.
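The drift figures quoted above come down to the least-squares slope of an altimeter-minus-tide-gauge difference series; a minimal sketch with synthetic numbers:

```python
def drift_mm_per_yr(times_yr, diffs_mm):
    """Least-squares slope of an altimeter-minus-tide-gauge difference
    series, the usual estimator of relative drift."""
    n = len(times_yr)
    t_mean = sum(times_yr) / n
    d_mean = sum(diffs_mm) / n
    num = sum((t - t_mean) * (d - d_mean)
              for t, d in zip(times_yr, diffs_mm))
    den = sum((t - t_mean) ** 2 for t in times_yr)
    return num / den

# Synthetic monthly differences over 2010-2022 with a 0.2 mm/yr drift.
times = [2010 + i / 12 for i in range(144)]
diffs = [0.2 * (t - 2010) for t in times]
assert abs(drift_mm_per_yr(times, diffs) - 0.2) < 1e-9
```

In practice the differences are noisy and GIA-corrected before fitting, but the drift estimator itself is this simple slope.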
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Uncertainty Estimation and Fiducial Reference Measurements in Monitoring Satellite Altimetry

Authors: Stelios Mertikas, Craig Donlon, Emma Woolliams, Demetrios Matsakis, Fabrice Collard, Costas Kokolakis, Dimitrios Piretzidis, Achilles Tripolitsiotis, Xenophon Frantzis
Affiliations: Technical University of Crete, Space Geomatica, European Space Agency-ESTEC, National Physical Laboratory, Masterclock, Inc., Space Geomatica, OceanDataLab
The growing number of altimetry satellites, payloads and manufacturers, as well as accelerated developments in measuring techniques and data processing, urgently call for measures to ensure the quality of satellite products for Earth observation. To work properly, such indicators of quality must be reliable, indisputable, and tied and traceable to reference and international standards (e.g., the Système international d'unités). End-users will thus be able to determine accurately whether EO products meet requirements and are suitable for their applications. At first, each EO mission follows its own in-flight calibration and validation (Cal/Val) to secure quality products. Later, after commissioning, external Cal/Val procedures are pursued, with comparisons of satellite observations and products against ground-truth reference measurements. This work describes how the uncertainty of the ground reference measurements must be established beyond doubt, and how external Cal/Val needs to be carried out to properly evaluate the produced satellite observations and products, thus ensuring accurate communication between satellite operators and users. The investigation is based on the reference measurements made at the ESA Permanent Facility for Altimetry Calibration (ESA-PFAC) in Crete and Gavdos, Greece, where all ESA and international satellite altimeters have been calibrated for almost 20 years. The Cal/Val uncertainties will be established according to the ESA principle for Fiducial Reference Measurements and the guidelines of the Bureau International des Poids et Mesures (BIPM) for determining measurement uncertainty. The integration of Cal/Val results obtained at this facility from diverse instruments, locations, settings and techniques will also be presented. Finally, practical guidelines for achieving FRM-quality ground reference data will be provided, along with plans for expanding the ESA-PFAC Cal/Val services to other, non-altimetry EO missions.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone U)

Poster: Validation and Uncertainty Assessment of the Updated GIFAPAR Product of Sentinel-3 OLCI During the Tandem Phase

Authors: Ana Pérez-Hoyos, David Garcia-Rodriguez, Ernesto Lopez-Baeza, Ludovic BOURG, Sebastien Clerc, Nadine Gobron, Steffen DRANSFELD
Affiliations: Albavalor, IRTIC-LISITT, ACRI-ST, European Commission Joint Research Centre, European Space Agency
The Copernicus Sentinel-3 mission, developed by the European Space Agency (ESA) in collaboration with EUMETSAT, provides a new generation of optical data products, including critical biogeophysical variables for global environmental monitoring. The Ocean and Land Colour Instrument (OLCI), aboard the Sentinel-3A and Sentinel-3B satellites (launched on February 16, 2016, and April 25, 2018, respectively), generates two Level-2 land products, including the Green Instantaneous Fraction of Absorbed Photosynthetically Active Radiation (GIFAPAR). Derived using JRC FAPAR algorithms, GIFAPAR is an essential biophysical parameter for modelling plant canopy radiation transfer. The GIFAPAR algorithm, developed by the Joint Research Centre (JRC), involves a two-step process: (1) computation of rectified reflectance values from OLCI bands O10 and O17, using atmospheric corrections from band 3 and directional normalization with the Rahman-Pinty-Verstraete (RPV) model, and (2) optimization-based retrieval of FAPAR values, accompanied by per-pixel uncertainty estimates (Gobron, 2010). Recent updates to GIFAPAR introduce new spectral bands (O11, O12) and updated coefficients for the existing bands (O10, O17) to align with the instrument's enhanced spectral response. These changes improve reflectance accuracy by addressing atmospheric and angular effects, while updated uncertainty estimates are derived through radiative transfer simulations and optimization techniques (Gobron, 2024). This research aims to deliver a robust validation framework for the updated GIFAPAR product and its uncertainties during the tandem phase. The tandem phase enables direct comparisons between previous and updated GIFAPAR products, assessing inter-product discrepancies, uncertainties and temporal consistency. 
Conducted within the Optical Mission Performance Cluster (OPT-MPC) framework, which is responsible for the calibration and validation activities of Sentinel-3, this research ensures the delivery of high-quality and reliable geophysical variables for global environmental monitoring. The validation activities include multiple steps: a per-pixel inter-comparison of co-registered OLCI-A and OLCI-B L1 scenes (i.e., channels O10 and O17), as well as of GIFAPAR-A and GIFAPAR-B, to evaluate the differences between the previous and new versions; direct validation against in-situ measurements from the Copernicus Global Land Product service's GBOV (Ground-Based Observations for Validation), generating temporal profiles and providing validation statistics such as Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and bias; and indirect validation by comparison of updated L2 OLCI GIFAPAR land products with the Envisat MERIS climatology (2002–2012) over selected sites of the CEOS (Committee on Earth Observation Satellites) LPV supersites and the SEVT (Sentinel-3 Validation Team), which ensures continuity and compatibility of the updated product with historical benchmarks. Finally, differences in measurement uncertainties between Sentinel-3A and Sentinel-3B are evaluated. Variance propagation techniques are applied to quantify data spread and characterize uncertainties at pixel and product levels. Metrics such as uncertainty propagation and difference uncertainty provide a robust statistical framework for evaluating the reliability and consistency of the updated GIFAPAR estimates.
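The direct-validation statistics named above (RMSE, MAE, bias) are computed from paired product and reference values; a minimal sketch with made-up FAPAR numbers:

```python
def validation_stats(product, reference):
    """RMSE, MAE and bias of product values against ground-based
    reference measurements (e.g. GBOV FAPAR)."""
    n = len(product)
    diffs = [p - r for p, r in zip(product, reference)]
    bias = sum(diffs) / n
    mae = sum(abs(d) for d in diffs) / n
    rmse = (sum(d * d for d in diffs) / n) ** 0.5
    return rmse, mae, bias

# Illustrative FAPAR pairs only, not actual GBOV data.
prod = [0.42, 0.55, 0.61, 0.48]
ref  = [0.40, 0.50, 0.65, 0.50]
rmse, mae, bias = validation_stats(prod, ref)
assert abs(bias - 0.0025) < 1e-9
assert abs(mae - 0.0325) < 1e-9
assert rmse >= mae          # RMSE always dominates MAE
```

Reporting all three together separates systematic offset (bias) from scatter (RMSE, MAE).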
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: A.02.07 - POSTER - Monitoring grasslands and rangelands from space

Grasslands and rangelands account for a large share of globally managed agricultural landscapes and provide essential ecosystem services such as carbon sequestration and water retention. At the same time, they represent a critical source of biomass for animal feed and support livelihoods, for instance of pastoralists and other livestock producers. Grasslands and rangelands provide habitats for a wide range of floristic and faunistic species and therefore play a crucial role in the context of climate change and biodiversity. A key challenge is to find a balance between maintaining or enhancing ecosystem services and functions – which are influenced by the intensity of utilization and management – while at the same time meeting the demands of a growing population for meat and dairy products, as well as pressures from other land uses. This puts grassland and rangeland ecosystems on the agenda of land managers and nature conservationists, and features them prominently in international policies. Accurate Earth Observation (EO) based mapping and monitoring approaches are therefore crucial to deepen our understanding and to support evidence-based policy development and evaluation. While remote sensing-based approaches have already proven valuable for mapping and monitoring the intensity of grassland/rangeland use over large areas, several challenges, such as the degradation and recovery of these ecosystems, remain unresolved and call for new solutions.

We welcome contributions that either focus on the testing and implementation of novel EO data and methods for the assessment of rangelands and grassland management and condition, or that provide insight into the valorization of established technologies for innovative EO-based services in support of climate action and sustainable management. These could include but are not restricted to:

- multisource imaging / data fusion / time series
- estimation of grassland yields / biomass / quality and other relevant biophysical variables
- degradation and recovery of grasslands and rangelands
- differentiation of pastures / meadows
- rangeland/grassland use intensity, resilience and carrying capacity
- grassland use in the context of biodiversity / climate change
- monitoring and evaluation of agricultural and environmental policies
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Monitoring breakpoints under grazing pressure in Eastern Mongolia

Authors: Shuxin JI, Ganzorig Gonchigsumlaa, Tserendavaa Tseren, Sugar Damdindorj, Amartuvshin Otgondemberel, Enkh-Amgalan Gurjav, Munguntsetseg Puntsagsuren, Batnaran Tsabatshir, Tumendemberel Gungaa, Narantsetseg Batbold, Lukas Drees, Bayarchimeg Ganbayar, Dulamragchaa Orosoo, Bayartsetseg Lkhamsuren, Badamtsetseg Ganbat, Myagmarsuren Damdinsuren, Gantogoo Gombosuren, Batnyambuu Dashpurev, Thanh Noi Phan, Nandintsetseg Dejid, Thomas Müller, Lukas Lehnert
Affiliations: LMU, Department of geography, School of Economics and Business, Mongolian University of Life Sciences, Institute of Natural Resource and Agricultural Economics, Senckenberg Gesellschaft für Naturforschung, Institute for Social-Ecological Research, Institute of Meteorology and Climate Research Atmospheric Environmental Research (IMK-IFU), Karlsruhe Institute of Technology
Mongolian society and food production are deeply reliant on livestock farming, which is predominantly practiced through nomadic pastoralism. These migratory patterns are driven by the availability of forage resources, which fluctuate across seasons and regions. Ensuring the sustainable use of pastures and preventing overgrazing present critical challenges, particularly in the context of climate change and land degradation. Traditional monitoring methods often lack the capacity to provide accurate, large-scale insights into grazing patterns. Therefore, advanced technologies for tracking herder movements and assessing grazing impacts are essential for the sustainable management of Mongolia's rangelands. To detect movement patterns, a new machine learning (ML) based method for detecting breakpoints in vegetation condition has been developed and compared to the widely used BFAST algorithm. The results have been validated using test sites spread across the entire eastern Mongolian steppe ecosystem, covering different grassland use intensities. The results indicate that (1) the new ML method outperformed BFAST, detecting 41.5% of movements. Movements on summer pastures mainly occurred from April to June, while on winter pastures they emerged in October, November, and the following February and March. (2) Regarding spatial prediction, the model developed in this study effectively predicts movements in the western (hit rate of 62%) and northern (68%) regions of the Mongolian grasslands. However, it fails to detect the spatial distribution under grazing pressure at the experimental sites in the eastern and southern regions. (3) No relationship has been found between prediction accuracy and livestock densities. The newly developed method can spatially distinguish between summer and winter pastures in northern and western areas. 
To our knowledge, this study is the first attempt to explore large-scale detection of herders' movements under grazing pressure in pastures and grasslands.
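As a toy illustration of breakpoint detection in a vegetation-index series (a single-break least-squares split; neither BFAST nor the authors' ML method):

```python
def best_breakpoint(series):
    """Single-breakpoint search: split the series where a two-mean model
    most reduces the residual sum of squares relative to one mean."""
    def sse(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    base = sse(series)
    best_k, best_gain = None, 0.0
    for k in range(2, len(series) - 1):
        gain = base - (sse(series[:k]) + sse(series[k:]))
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k

# NDVI drops abruptly when animals move off a pasture (toy values).
ndvi = [0.62, 0.64, 0.63, 0.65, 0.61, 0.38, 0.36, 0.37, 0.35, 0.39]
assert best_breakpoint(ndvi) == 5
```

Both BFAST and the ML method described above solve a richer version of this problem, accounting for seasonality, trend and noise.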
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Grassland Conversion and Land Cover Change in Lower Saxony – Four Decades of Satellite Time Series Analysis

Authors: Hauke Itzek, Professor Christoph Raab
Affiliations: Institute of Geography, University Of Hildesheim
Grasslands are among the world's largest biomes, providing vital ecosystem services. Despite their ecological importance, grasslands are shrinking due to land-use conversion. In Germany, the area of permanent grassland has decreased by approximately 12% over the past two decades, with an even higher reduction of over 15% in Lower Saxony between 2000 and 2020. These changes have significant implications for biodiversity, soil quality, and other ecosystem functions. Understanding the dynamics and drivers of grassland conversion requires detailed spatial and temporal analyses of land cover changes. This study examines how recent land cover data for Lower Saxony, Germany, can be extended into the past using point training data. The analysis is meant to cover the maximum time frame enabled by the Landsat mission, utilizing archived Earth Observation imagery from the 1980s to the present. The accuracy of recent data is validated through visual interpretation. Change detection is performed on validated points using historic Earth Observation imagery. For this, well-established approaches such as Continuous Change Detection and Classification (CCDC) will be explored. CCDC enables continuous monitoring by detecting changes through deviations in spectral trends over time, allowing for a more dynamic understanding of land cover transitions. Significant changes are identified based on thresholds in spectral differences. Points with detected changes are further verified through visual confirmation against reference imagery and analysis of spectral profiles. Historical land-use data is incorporated where available. Finally, supervised classification algorithms (e.g., Random Forest) are applied to classify land cover, and changes over time are quantified and visualized to reveal spatial and temporal trends. 
The analysis is expected to reveal trends and spatial patterns in land cover changes, particularly highlighting the magnitude and spatial distribution of grassland-to-farmland conversion. By combining time-series analysis and change detection techniques, the study provides an understanding of grassland conversion and associated transitions.
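The CCDC-style flagging rule described above, declaring change when observations deviate persistently from a fitted time-series model, can be sketched as follows (the harmonic model, the 3 × RMSE threshold and the run length of 3 are illustrative assumptions, not the study's parameters):

```python
import math

def flag_change(dates, values, model, rmse, k=3.0, consecutive=3):
    """CCDC-style change flag: a break is declared when `consecutive`
    observations in a row deviate from the fitted model by more than
    k * RMSE. `model` is the already-fitted time-series model."""
    run = 0
    for t, v in zip(dates, values):
        if abs(v - model(t)) > k * rmse:
            run += 1
            if run == consecutive:
                return t            # date at which the break is confirmed
        else:
            run = 0
    return None

# Assumed fitted model: mean NDVI plus one annual harmonic (t in years).
model = lambda t: 0.5 + 0.1 * math.cos(2 * math.pi * t)
dates = [i / 12 for i in range(36)]
values = [model(t) for t in dates]
values[24:] = [v - 0.3 for v in values[24:]]   # conversion in year 3
assert flag_change(dates, values, model, rmse=0.02) == dates[26]
```

Requiring several consecutive exceedances keeps single cloudy or noisy observations from triggering a false break.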
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Hyperspectral Data for Mapping and Predicting Plant Traits and Gradients of Functional Species Composition in German Grasslands

Authors: Ines Grünberg, Christine Wallis, Michael Förster
Affiliations: Technische Universität Berlin, LUP – Luftbild Umwelt Planung GmbH
Hyperspectral remote sensing offers a fast, large-scale and non-destructive method for estimating functional plant traits, providing essential insights into ecosystem functioning while supporting sustainable management practices and conservation strategies. Understanding variations in plant traits, driven by abiotic and biotic factors, can be essential to advance our knowledge of functional species composition and its ecological implications. This study aims to evaluate the potential of airborne-acquired hyperspectral imagery (408–985 nm) for assessing and predicting the spectral response of structural and biochemical functional plant traits in German grasslands. To achieve this, we first analysed the correlation patterns between seven structural and biochemical community-weighted mean (CWM) vegetation traits collected from three distinct grassland regions along a North–South gradient. Potential variations may reflect differences in local environmental, climatic or management conditions, possibly pointing to distinct plant functional strategies. Prior to the analysis, we performed preprocessing steps and then applied dimensionality reduction separately to the CWM trait and spectral datasets using Principal Component Analysis (PCA). The resulting principal components were subsequently analysed together to assess their interrelationships. We identified strong correlations between multiple structural and biochemical traits, as well as between certain traits and one spectral principal component (PC). To further explore these relationships, we applied Partial Least Squares Regression (PLSR), evaluating the correlations between specific wavelengths and plant traits, with the aim of developing a predictive model for trait estimation based on spectral data. Model refinement was achieved by selecting only the spectral bands that showed high correlations with the respective traits. 
The model achieved intermediate accuracy for structural traits, with R² LOOCV values of 0.53 for height and 0.49 for specific leaf area (SLA), and corresponding RMSE percentages of 16.3% and 14.9%, respectively. However, for biochemical traits such as leaf nitrogen and leaf phosphorus, model accuracy was poorest (leaf N R² LOOCV 0.03; leaf P R² LOOCV -0.02), likely due to the limited wavelength range of the dataset: these traits are typically more sensitive to the SWIR region, which was not included. While the model showed intermediate accuracy for structural traits, improvements could be achieved for biochemical traits by using spectral data covering extended wavelength ranges, such as the SWIR. Further research is needed to improve the linkage between spectral data and functional species composition, as this could provide deeper insights into how plant communities respond to environmental gradients and management practices. Understanding the spectral and functional dynamics of species compositions could not only improve trait prediction models but also offer possibilities to assess ecosystem processes and biodiversity at broader spatial scales. The findings of this study highlight the potential of hyperspectral remote sensing as a valuable tool for advancing the understanding and sustainable management of grassland ecosystems.
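The R² LOOCV figures reported above follow the usual leave-one-out scheme: each sample is predicted from a model fitted on all other samples. A minimal single-predictor sketch with made-up trait values (not the study's PLSR model):

```python
def loocv_r2(xs, ys):
    """Leave-one-out cross-validated R² for a one-band linear model."""
    n = len(xs)
    preds = []
    for i in range(n):
        xt = [x for j, x in enumerate(xs) if j != i]
        yt = [y for j, y in enumerate(ys) if j != i]
        xm, ym = sum(xt) / (n - 1), sum(yt) / (n - 1)
        slope = (sum((x - xm) * (y - ym) for x, y in zip(xt, yt))
                 / sum((x - xm) ** 2 for x in xt))
        preds.append(ym + slope * (xs[i] - xm))
    ym_all = sum(ys) / n
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - ym_all) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# A band strongly (if noisily) coupled to canopy height scores well...
heights = [10, 20, 30, 40, 50, 60]
band    = [0.11, 0.19, 0.32, 0.41, 0.48, 0.62]
assert loocv_r2(band, heights) > 0.8
# ...while an uninformative band scores poorly, possibly below zero.
noise   = [0.4, 0.1, 0.5, 0.2, 0.6, 0.3]
assert loocv_r2(noise, heights) < 0.2
```

Negative LOOCV R² values, as reported for leaf phosphorus, simply mean the cross-validated model predicts worse than the sample mean.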
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Towards Operational Surface Moisture Monitoring with Sentinel-1 in Five Project Areas in South Africa - Comparison of Five Years of In-Situ Soil Moisture Measurements with a Surface Moisture Index, NASA SMAP, ESA SMOS and ESA CCI Soil Moisture Products

Authors: Christiane Schmullius, Jussi Baade, Eugene Hahndiek, Carsten Pathe, Philip Schubert, Tercia Strydom, Marcel Urban, Marco Wolsza, Marcel Felix
Affiliations: University Jena, Nuwejaars Wetlands Special Management Area, South African National Parks, ESN (EnergiesystemeNord)
This presentation compares in-situ soil moisture with various EO products at two project sites. In the framework of the South African-German SPACES South Africa Land Degradation Monitor (SALDi) project and the SANParks COLD-EMS project, we installed in March 2019 an in-situ instrument with 8 Time-Domain Devices at Lower Sabie, in southern Kruger National Park, which measures temperature and soil moisture at 10 cm depth every 30 minutes. This data set represents a unique reference source for related remote sensing product developments. Soil moisture is one of the IPCC's Essential Climate Variables (ECVs). Hence its measurement with adequate spatial and temporal resolution is a prerequisite for understanding climatological conditions and their impact on vegetation cycles. Since radar signals are highly sensitive to the water content of the land surface (whether vegetation or soil), efforts have been ongoing for decades to generate an operational soil moisture product. Three global soil moisture products exist: ESA's CCI Soil Moisture, based on active sensors, and the SMAP and SMOS products, based on passive microwave satellite data. These three products, however, have very low spatial resolutions of between 10 km and 50 km. With the launch of Sentinel-1, new time series became available to better estimate the impact of vegetation development on a possible soil moisture retrieval. This talk compares the 20 m Surface Moisture Index (SurfMI), which we developed from Sentinel-1 time series, with the above-mentioned global products and our in-situ measurements for the Lower Sabie region. The results indicate good correlations with SMAP products and a strong sensitivity of the Sentinel-1 SurfMI product to both phenology and precipitation, allowing the development of a model that takes both surface parameters into account. 
At a second site, the Nuwejaars Wetlands Special Management Area in the South African Overberg district, we installed in March 2019 an in-situ instrument with 8 Time-Domain Devices. The sensors measure temperature and soil moisture at 10 cm depth every 30 minutes. Surface soil moisture can be estimated using a variety of remote sensing technologies and algorithms. The approach developed by the Technical University of Vienna uses the backscatter intensity of synthetic aperture radar (SAR) data and soil porosity to determine volumetric soil moisture (SM). The index used in this study, based on high-resolution Sentinel-1 data, is called SurfMI. SurfMI products and the collocated Soil Moisture Active Passive (SMAP) time series are validated against the SALDi in-situ SM probe buried in a flat grassy meadow. All time series were decomposed using Season-Trend decomposition using Loess (STL). In addition, SurfMI and its residuals are analyzed against Sentinel-2-based vegetation indices (NDVI, EVI) and the Sentinel-2 bare soil index (BSI). Further, a triple collocation analysis (TCA) is performed on the in-situ, SMAP and SurfMI time series to estimate the first- and second-order biases, unbiased RMSD and signal-to-noise ratio (SNR). It was found that SurfMI does not share SMAP's tendency to underestimate SM in wet seasons, especially when precipitation is high. Instead, SurfMI displayed high susceptibility to moist conditions and rough surfaces, mostly overestimating SM. NDVI and EVI demonstrated strong agreement with each other and an inverse relationship to BSI.
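Classical triple collocation, as used in the TCA step, estimates each system's error variance from pairwise covariances of three collocated series of the same signal with mutually independent errors; a synthetic-data sketch (illustrative noise levels, not the study's results):

```python
import random

random.seed(42)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def triple_collocation(x, y, z):
    """Classical triple collocation: error variances of three collocated
    series (same geophysical signal, mutually independent errors)."""
    ex2 = cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z)
    ey2 = cov(y, y) - cov(y, x) * cov(y, z) / cov(x, z)
    ez2 = cov(z, z) - cov(z, x) * cov(z, y) / cov(x, y)
    return ex2, ey2, ez2

# Synthetic soil-moisture truth plus independent noise per system.
truth  = [random.gauss(0.25, 0.08) for _ in range(50_000)]
insitu = [t + random.gauss(0, 0.02) for t in truth]
smap   = [t + random.gauss(0, 0.04) for t in truth]
surfmi = [t + random.gauss(0, 0.06) for t in truth]

e_in, e_sm, e_su = triple_collocation(insitu, smap, surfmi)
# Recovered error standard deviations match the injected noise levels.
assert abs(e_in ** 0.5 - 0.02) < 0.005
assert abs(e_sm ** 0.5 - 0.04) < 0.005
assert abs(e_su ** 0.5 - 0.06) < 0.005
```

The appeal of TCA is that no single series needs to be treated as error-free ground truth.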

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Enhancing Grassland Cut Detection Using Sentinel-2 Time Series Through Integration of Sentinel-1 SAR and Weather Data

Authors: Aleksandar Dujakovic, Francesco Vuolo, Cody Watzig, Andreas Schaumberger, Andreas Klingler, Clement Atzberger
Affiliations: Institute Of Geomatics, University Of Natural Resources And Life Sciences (BOKU), Agricultural Research and Education Center (AREC)
Permanent grassland in Austria covers an area of nearly 1.33 million hectares and serves multiple important functions, including providing essential forage for livestock farming and playing significant ecological roles in soil stabilization and water quality maintenance. The diverse climatic and soil conditions in the Central Alps of Austria lead to considerable variation in the frequency of grassland cutting, which depends on factors like crop condition, temperature, and water and nutrient availability. The detection of grassland cuts is relevant for modelling grassland yield and quality because information on cut dates and cut intensity aids in estimating the nutrient-biomass ratio of fodder. To support individual farmers, and for regional-scale applications, remote sensing is a valuable tool for continuously monitoring grassland vegetation, identifying cuts, and estimating yield. This research improves an existing grassland cut detection methodology developed for Austria based on Sentinel-2 (S2) optical time series. In this method, cuts were identified through a threshold-based comparison between the fitted NDVI curve and the observed NDVI values, with several additional pre-processing steps to limit the impact of cloud cover. To further improve the detection accuracy, the new method incorporates Sentinel-1 (S1) Synthetic Aperture Radar (SAR) and daily weather data using a machine learning model (CatBoost). The CatBoost model addresses limitations in S2 data caused by cloud cover and other sub-optimal observation conditions: it (1) identifies missing cuts in periods with no S2 data, and (2) eliminates false positive cuts. Weather data is used to identify the start of the cutting season and to define the minimum required time span between two consecutive cuts. The method was trained on 110 plots and validated on a further 489 plots over three years (2020, 2021 and 2022).
The validation dataset encompassed a large and diverse range of cut frequencies in mountainous terrain with persistent cloud cover. Results demonstrate an improvement in the cut-date F-score (from 0.77 to 0.81), a reduced false detection rate (from 0.21 to 0.16), and a slight decrease in the mean absolute error between true and estimated cut dates (from 4.6 to 4.1 days). The improvement in accuracy was more evident for plots with high mowing frequency, while some remaining false detections were evident for extensively managed grasslands. The incorporation of S1 SAR and weather data enables cut detection for the entire calendar year and eliminates the need for fixed growing-season start/end dates. However, S1 SAR data alone did not provide reliable detection accuracy, showing its limitations in depicting vegetation dynamics for grassland. Overall, the improvements in accuracy and flexibility demonstrate the efficacy of the enhanced methodology, emphasizing the potential of combining S1 and S2 with weather data in large-scale and cost-efficient grassland monitoring.
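The threshold-based core of the S2 method, comparing a fitted NDVI curve with the observations, can be illustrated with a small sketch; the smoothing function (a polynomial fit) and the 0.15 threshold here are illustrative stand-ins, not the authors' actual parameters or pre-processing:

```python
import numpy as np

def detect_cuts(doy, ndvi, threshold=0.15):
    """Flag candidate grassland cuts where observed NDVI drops well
    below a fitted seasonal NDVI curve. `doy` is the day-of-year of
    each observation, `ndvi` the observed values."""
    # Fit a smooth seasonal curve (simple polynomial fit as a stand-in)
    coeffs = np.polyfit(doy, ndvi, deg=4)
    fitted = np.polyval(coeffs, doy)
    # A cut shows up as the observation falling far below the curve
    residual = fitted - ndvi
    return np.where(residual > threshold)[0]

# Toy season: smooth growth curve with a sharp mowing drop around day 160
doy = np.arange(60, 300, 5)
ndvi = 0.3 + 0.5 * np.sin((doy - 60) / 240 * np.pi)
ndvi[(doy >= 160) & (doy < 175)] -= 0.3   # simulated cut
cuts = detect_cuts(doy, ndvi)
```

In the toy example the three observations inside the simulated drop are flagged while the undisturbed parts of the curve are not, mirroring the idea of comparing observed values against an expected seasonal trajectory.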

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: EU Grassland Watch: From Satellite Data Towards Grassland Characterization

Authors: Jan Mišurec, Jiří Tomíček, Sivasankar Arul, Jiří Kadlec, Mathilde De Vroey, Victor Verhaert, Dr. Hans Vanrompay
Affiliations: Gisat, Vlaamse Instelling voor Technologisch Onderzoek (VITO)
Based on satellite data, the EU Grassland Watch (EUGW) platform will provide thematic information about grasslands within more than 16,000 protected sites of the terrestrial Natura 2000 network. Covering a period of more than 30 years (from 1994 until present), the thematic products initially exploit Landsat data and, since 2016, also data acquired by the Copernicus Sentinel-1 and Sentinel-2 systems. In support of the EO data, the EUGW grassland products also utilize information from other datasets such as the Copernicus Land Monitoring Service (CLMS) High Resolution Vegetation Phenology and Productivity (HR-VPP), Corine Land Cover Plus – Backbone (CLC-BB+) and High Resolution Layers (HRLs) such as Water and Wetness (WAW), Tree Cover Density (TCD) and Small Woody Features (SWF). On the one hand, this reuses existing information for further, more detailed grassland characterization without the need to extract it independently from the source satellite data; on the other hand, it ensures that the content of the EUGW products is synchronized with that of other datasets available via the CLMS. The EUGW product portfolio consists of four main components: 1) Land Cover, 2) Grassland Type, 3) Grassland Management and 4) Grassland Productivity. The Land Cover Component is based on an annual land cover classification derived from the source satellite data, providing a baseline for Grassland Identification, where the current spatial extent of grassland is defined and later used as the area of interest for all other grassland-focused components. Moreover, it also provides crucial information for the indicators and variables related to temporal land cover changes. The Grassland Type Component provides a categorization of grassland from a biological/habitat perspective following the EUNIS Level-2 nomenclature. It combines information on the main biophysical characteristics of grassland (such as biomass amount, green pigments or water content) provided by a chosen set of vegetation indices. This information is further combined with phenology and productivity variables extracted from the HR-VPP dataset. Finally, other characteristics that have an impact on the occurrence of different grassland types, such as landscape topography, are taken into account using the detailed Copernicus Digital Elevation Model. Both the land cover and grassland type classification tasks are based on an implementation of the Random Forest machine learning algorithm. Within the Grassland Management Component, we focus on quantifying grassland management intensity based on temporal series of satellite data, primarily on differentiating intensively and extensively managed grasslands. The approach is based on advanced analysis of dense temporal profiles of the Normalized Difference Vegetation Index (NDVI), aiming to identify features potentially corresponding to grassland management events. This is then combined with theoretical modelling of a “no-management scenario”, which is used to quantify the real management intensity. On top of that, the component also looks into grassland persistence, taking into account the temporal length of the period for which grassland exists at a certain location. This differentiates permanent and temporary grasslands; the results of the Land Cover Component are the main input here. Finally, the last part of the Grassland Management Component focuses on the detection of bare soil anomalies within grassland areas, which may correspond to ploughing or other activities/events resulting in the temporary removal of grass cover. Here, three bare-soil occurrence indicators based on NDVI, the Bare Soil Index (BSI) and the Normalized Difference Tillage Index (NDTI) are combined by a fitted weighting function, followed by a spatio-temporal statistical analysis of the significance of the identified anomalies.
Finally, the detected anomalies are categorized by their temporal behaviour to describe different types of grassland canopy disturbance. The last component is Grassland Productivity. It provides insight into long-term (1994 onwards) and short-term (2017 onwards) temporal trends, identifying the areas with a statistically significant increase or decrease in grassland productivity as distinct from areas with stable productivity characteristics. The second part of the component, productivity state, analyses grassland productivity in the temporal dimension, providing a statistical comparison of the currently observed productivity with the common productivity observed in a selected reference period. Similarly, productivity is also analysed in the spatial domain by the grassland performance module, where the grassland productivity at a certain place is compared with the common performance of the corresponding grassland type under the given climatic conditions. Our presentation aims to demonstrate how the source satellite imagery and the supportive datasets are taken together as a valuable mosaic of inputs, which are then transformed into grassland-related products that should be easily understood by users both inside and outside the EO community.
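The bare-soil indicators used in the Grassland Management Component can be computed with standard band math. A minimal sketch using the common definitions of NDVI, BSI and NDTI on Sentinel-2-like reflectances; the weights `w` and the toy reflectance values are illustrative placeholders, not the fitted EUGW weighting function:

```python
def bare_soil_score(blue, red, nir, swir1, swir2, w=(0.4, 0.4, 0.2)):
    """Combine three bare-soil occurrence indicators into one score in [0, 1].
    Index formulas are the standard definitions; the weights `w` are
    illustrative placeholders, not the fitted EUGW weighting function."""
    ndvi = (nir - red) / (nir + red)
    bsi = ((swir1 + red) - (nir + blue)) / ((swir1 + red) + (nir + blue))
    ndti = (swir1 - swir2) / (swir1 + swir2)
    # Low NDVI, high BSI and high NDTI all point towards exposed soil;
    # each index is rescaled from [-1, 1] to [0, 1] before weighting
    return (w[0] * (1 - ndvi) / 2
            + w[1] * (bsi + 1) / 2
            + w[2] * (ndti + 1) / 2)

# Toy reflectances: a vegetated pixel vs. a freshly ploughed one
veg = bare_soil_score(blue=0.03, red=0.04, nir=0.45, swir1=0.20, swir2=0.10)
soil = bare_soil_score(blue=0.10, red=0.20, nir=0.25, swir1=0.35, swir2=0.30)
```

As expected, the ploughed pixel scores substantially higher than the vegetated one, which is the property a subsequent anomaly analysis would exploit.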

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Monitoring grasslands in Wallonia (Belgium) based on satellite imagery and a grass growth model to help farm management in a climate change context

Authors: Cozmin Lucau Danila, Yannick Curnel, Hamza Himdi, Richard Lambert, Viviane Planchon
Affiliations: Walloon Agricultural Research Centre, Catholic University of Louvain
In Wallonia (BE), grasslands cover 367 200 ha and represent the dominant land cover class (47% of the Utilized Agricultural Area). This ecosystem provides a wide range of services by contributing to food, feed and fiber consumption, by providing climate-related services (e.g. carbon/water storage) and by offering biodiversity and landscape amenity value. Grassland yields fluctuate according to different factors such as the pedoclimatic region, the floristic composition, the intensification level (e.g. rest period length, stocking density or fertilization level), the parcel (e.g. exposure or slope) as well as management (e.g. cutting height), especially in the context of climate change. The weather throughout the growing season (usually from February to November depending on the agroclimatic zone) has a big impact on grass biomass quantity and quality. The slightly lower grass biomass yield observed during years with less precipitation is often partially compensated by a higher grass quality. Drought conditions cause a decrease in grass yield (especially during heatwaves), while a higher amount of precipitation gives more biomass but lower grass quality. Grass growth models alone are less accurate in these situations. Information extracted from Sentinel-2 (S2) images is therefore used in a complementary way to improve the grass biomass estimation. A multi-approach concept based on a large reference field data set, satellite imagery and a grass growth model is developed. The main objective is to develop the W@llHerbe decision support tool adapted to breeders’ needs. An intensive field campaign was carried out in 2022 and 2023 (from March to November) on 71 permanent grassland parcels distributed across Wallonia. Compressed sward height (CSH) was measured weekly with an electronic plate meter, and fresh/dry biomass and fodder quality were measured monthly.
The Leaf Area Index (LAI) obtained from S2 correlates with field measurements for biomass estimation. Integrating the grass growth model with remote sensing data enables the estimation and prediction of grass growth, achieving a Root Mean Square Error (RMSE) ranging from 367 to 482 kg of dry matter per hectare on a daily basis. The use of drone and PRISMA satellite hyperspectral imagery helps detect and study the characteristics of grassland fields. By using the detailed spectral information from the available hyperspectral data, our method focuses on a thorough spectral analysis of grass, allowing us to identify vegetation indices that estimate grass quality. This research not only provides insights into the spatial variability of grass quantity and quality but also introduces a scalable technique for monitoring and managing grasslands more effectively. The implications of this study are significant, offering the potential for improved grassland management practices that can lead to increased agricultural productivity and sustainability.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Examining the Relationship Between Copernicus HR-VPP and Yield Data for Hungarian Grasslands

Authors: Vivien Pacskó, Ottó Petrik, Zoltán Barcza
Affiliations: Eötvös Loránd University, Lechner Knowledge Centre, Earth Observation Operations
Phenology is the study and documentation of periodically recurring events in the life cycle of vegetation. Phenological studies provide an integrated measure of ecosystem responses to climatic factors and human-induced disturbances, and thus offer a means of monitoring ecosystem condition and degradation. The aim of our study is to examine the relationship between grassland yield data and the so-called Vegetation Phenology and Productivity Parameters (VPP) from the Copernicus Land Monitoring Service High Resolution Vegetation Phenology and Productivity (HR-VPP) product suite over Hungary for several years. The study focuses on Hungary (Central Europe), where extreme weather events have become frequent in recent decades. In Hungary, grasslands are highly sensitive to climate variability, which affects associated ecosystem services, perhaps the most important of which is production. Despite the importance of the topic and the high interest from stakeholders, the productivity of Hungarian grasslands is not well researched. We have produced crop maps of Hungary for each year between 2015 and 2024 that include not just arable land but also grasslands. These maps were made using time series of Sentinel-2 images from March to October of each year, spectral indices derived from them, and Sentinel-1 temporal integrals, applying a Random Forest classifier implemented in Python. Reference data were obtained from the anonymised claim database for agricultural subsidies, provided by the Hungarian State Treasury. The spatial resolution of the classified maps is 20 m and the average overall accuracy is above 95%. The VPP product consists of a set of 13 parameters describing the different stages of the seasonal vegetation phenological cycle. These parameters are extracted from the seasonal trajectories of the Plant Phenology Index (PPI) at 10 m resolution from Sentinel-2 satellite observations and allow the phenology of vegetation to be monitored using satellite proxy time series.
In the present study, we investigate certain layers of the VPP for Hungarian grasslands for the years 2017-2024, including the start-of-season date (SOSD), the maximum value of the PPI trajectory (MAXV), and the seasonal (SPROD) and total productivity (TPROD) parameters. Focusing on the grassland classes, we investigate the spatial and inter-annual variability of selected VPPs and the relationship between VPPs and mowing-based yield data from Hungarian National Parks. Our results provide a basis for the implementation of a future spatial segmentation based on productivity differences of grasslands and for analysing the causality of spatial and temporal variability with respect to meteorological variables.
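Parameters like SOSD are extracted from seasonal index trajectories. A minimal numpy sketch of a SOSD-like metric (first day the trajectory rises above a fraction of its seasonal amplitude); the 25% threshold and the idealised Gaussian trajectory are illustrative assumptions, not the HR-VPP extraction algorithm:

```python
import numpy as np

def start_of_season(doy, index, frac=0.25):
    """Estimate a start-of-season date (SOSD-like) as the first day on
    which the seasonal trajectory rises above `frac` of its amplitude.
    A simplified stand-in for phenology-parameter extraction."""
    base, peak = index.min(), index.max()
    threshold = base + frac * (peak - base)
    rising = np.where(index >= threshold)[0]
    return int(doy[rising[0]])

doy = np.arange(1, 366, 5)                  # 5-day sampling over one year
traj = np.exp(-((doy - 180) / 60.0) ** 2)   # idealised seasonal PPI curve
sos = start_of_season(doy, traj)
```

End-of-season and productivity parameters (e.g. integrals under the curve between SOSD and EOSD) can be defined analogously on the same trajectory.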

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Earth Observations for Grassland and Rangeland Managers: Current Utility and Co-Development of New Methods and Tools

Authors: Anthony Vorster, Nicholas Young, Daniel Cleveland
Affiliations: Colorado State University
For Earth Observation (EO) data to reach their potential as a tool to support climate action, biodiversity conservation, and sustainable management of grasslands and rangelands, they must demonstrate value and be integrated into decision making by those utilizing and managing these landscapes. We are working with ranchers and Native American tribes in the western United States to understand the current levels of adoption of EO products for informing their management of grasslands and rangelands, reveal which applications these products are used for, identify factors limiting adoption, and prioritize the types of EO products whose development has the greatest potential to add value for grassland and rangeland managers. We present results from a formal survey and insights from use cases working intensively with ranchers and tribes to test and co-develop EO applications for monitoring and informing decisions on grasslands and rangelands. There is great potential for EO to be integrated into management decision making, but we find that adoption rates by managers are generally low due to several factors, including: EO products often having a delayed release time that is not in sync with the rapid, real-time decision-making timelines managers require; the spatial and temporal scale of EO products; instances of mismatch between the metrics being remotely sensed and the metrics of greatest interest to managers; and the trust in and utility of EO products on the ground being challenged by limited accuracy. We discuss and demonstrate these concepts in detail through use cases working closely with grassland and rangeland managers, where we evaluate the ability of current EO products to inform management decisions and develop new EO products when needed.
We present the management needs that have been co-identified with our partner livestock producers and three research studies: (1) an extensive evaluation of current EO biomass products across regions, vegetation communities, and grazing status; (2) the development of innovative time-series methods to quantify rangeland resilience; and (3) monitoring of rangeland degradation and recovery over decades and how this relates to livestock management practices. This work is part of NASA Acres, a consortium assembled to develop “user-driven, fit-for-purpose EO products, tools, and applications to empower U.S. agriculture to be more sustainable, productive, and resilient.” While this survey and research are focused on the western United States, we expect the findings to resonate with researchers, policy makers, and pastoralists and other livestock producers working in grasslands and rangelands around the world. This presentation will compare the state of the science to the needs of grassland and rangeland managers to identify how EO products are already providing utility, prioritize areas for research and development to facilitate the greatest adoption by livestock producers and therefore the greatest impact of EO on management decision making, and look ahead to what existing and future sensors provide and their relevance to grassland and rangeland managers.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Global grassland and livestock data for conversion monitoring

Authors: Lindsey Sloat, Leandro Parente, Vinicius Mesquita, Davide Consoli, Radost Stanimirova, Ana Rebeoredo Segovia, Carmelo Bonannella, Nathalia Monteiro Teles, Maria Hunter, Ana Paula Matos, Steffen Ehrmann, Ziga Malek, Ivelina Georgieva, Carsten Meyer, Ichsani Wheeler, Fred Stolle, Tomislav Hengl, Steffen Fritz, Laerte Ferreira
Affiliations: World Resources Institute, OpenGeoHub, Image processing and GIS Laboratory (LAPIG / UFG), German Centre for Integrative Biodiversity Research (iDiv), International Institute for Applied Systems Analysis (IIASA)
The growing demand for agricultural commodities and land for economic development continues to drive widespread deforestation and the conversion of non-forest ecosystems. While significant progress has been made in monitoring forest loss, a critical gap persists in understanding the dynamics of grasslands and other non-forest ecosystems, which are increasingly under pressure. Grasslands, in particular, are essential ecosystems, supporting biodiversity, mitigating climate change, and sustaining livelihoods. However, their diversity, heterogeneity and seasonality pose challenges for consistent monitoring at global scales. To address this gap, the Land & Carbon Lab initiated the Global Pasture Watch (GPW) research consortium. Comprising experts in geospatial monitoring, machine learning, ecology, and agriculture, the consortium is innovating new methods and datasets to monitor grasslands and grazing systems comprehensively. This presentation will highlight several globally consistent datasets developed by GPW, their integration, and their application for conversion monitoring and sustainable land management. 1. Grassland extent: global 30 m maps capturing grassland extent for two classes, i.e. natural/semi-natural and planted grasslands. 2. Short vegetation height: continuous height measurements based on ICESat-2 LiDAR to capture the dominant vegetation height outside of standing forests. 3. Livestock density: spatially explicit time series of density estimates for several livestock species. 4. Gross Primary Productivity (GPP): bi-monthly productivity estimates, downscaled and gap-filled to support monitoring trends in vegetation condition. These annual, global datasets span the years 2000–2022 and include per-pixel probabilities, which enhance their usability for local and regional applications.
By integrating diverse data sources, including Landsat analysis-ready data, ICESat-2 LiDAR, and expert feedback through platforms like Geo-Wiki, the consortium delivers flexible tools for tracking land use and land cover change, improving management practices, and guiding conservation efforts. These datasets serve as critical inputs for conversion monitoring, for example by enabling the differentiation of grassland areas susceptible to conversion from those that are already planted or managed. These planted grasslands serve as a robust proxy for pasturelands and, together with livestock density data, serve as important commodity-mapping inputs for animal agriculture. Further, the assessment of trends in vegetation productivity and height can provide a multi-dimensional perspective on vegetation health, aiding in the identification of areas undergoing, or already in a state of, degradation. This talk will explore how these datasets can be leveraged to bridge gaps in conversion monitoring and enhance the understanding of land use dynamics across forest and non-forest ecosystems. Through collaboration and continued innovation, Land & Carbon Lab aims to drive more effective policies and practices for protecting, restoring, and better managing the world’s valuable landscapes.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Mapping Grassland Management Regimes in the Alps and Carpathians Using Fused Time Series From Landsat and Sentinel-2

Authors: Šimon Opravil, Róbert Pazúr, Tomáš Goga
Affiliations: Institute of Geography, Slovak Academy of Sciences
The semi-natural grasslands of the Alps and Carpathians are among the most species-rich habitats globally, providing ecosystem services such as climate change mitigation and food security. The persistence and functionality of these grasslands in many areas depend on human management practices, particularly grazing and mowing events, along with the distribution and frequency of these events. However, mapping grassland management regimes over large areas remains inefficient and poses significant challenges for remote sensing. In the case of mowing, recent advances in remote sensing and data processing have enabled high accuracy in detecting these events using Sentinel-2 time-series data. A considerable challenge remains in identifying grazing activities, due to the variability of these events and the environmental conditions that heavily influence grassland phenology. This study aims to map mowing and grazing activities using data derived from the Sentinel-2 and Landsat sensors, which we combined into a single time series. The imagery was combined in the following steps in the Google Earth Engine platform: 1. Histogram matching, where Sentinel-2 and Landsat 9 data were aligned with the spectral characteristics of Landsat 8. 2. Pansharpening, where Landsat OLI bands were enhanced using a simplified Fourier-transform effect: the panchromatic (pan) band was downscaled to match the resolution of the spectral bands using the mean value for synchronisation, and the difference between the original pan band and the calculated mean value was then applied as an offset to each spectral band, essentially adding the result of a high-pass filter to the spectral bands. 3. Resampling and reprojecting, where Landsat OLI bands were resampled and reprojected to match the resolution and spatial characteristics of the Sentinel-2 data.
Using these fused images, we derived the time series and calculated markers of mowing and grazing regimes designed to detect vegetation changes associated with grassland management practices. We hypothesised that management events would be detected by contrasting the measurements with a hypothetical growing-season curve based on the phenological maxima of surrounding grasslands. The mowing marker detects observations where mowing results in an abrupt drop in vegetation index values. Thresholds for these changes are defined using the average residuals of change between observations, incorporating data from surrounding grasslands for improved precision. The grazing marker identifies cumulative negative changes in vegetation index values caused by grazing. Additionally, spatial patterns, such as vegetation index entropy, are included to refine the detection of grazing activities. We demonstrated the applicability of the proposed approach in selected localities characterised by high species richness within the Alps and Carpathians. The results are validated using field-based botanical surveys, geotagged photographs from the field, and imagery from panorama cameras available in these localities. The presented work contributes to advancing the understanding and mapping of grassland management regimes, particularly in the semi-natural areas of the Alps and Carpathians. Since the proposed method is built on local data, contrasting the phenology measurements with a hypothesised local phenological curve, the presented approach can also be applied elsewhere.
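The histogram-matching step of the fusion chain (aligning one sensor's band distribution with another's) can be sketched outside Google Earth Engine with plain numpy CDF matching; the synthetic "bands" below are placeholders for real Sentinel-2 and Landsat 8 reflectances:

```python
import numpy as np

def histogram_match(source, reference):
    """Match the value distribution of `source` (e.g. a Sentinel-2 band)
    to that of `reference` (e.g. the corresponding Landsat 8 band) by
    mapping values through the two empirical CDFs."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, pick the reference value at that quantile
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape)

rng = np.random.default_rng(1)
s2 = rng.normal(0.30, 0.05, size=(100, 100))   # brighter, tighter "band"
l8 = rng.normal(0.25, 0.08, size=(100, 100))
s2_matched = histogram_match(s2, l8)
```

After matching, the transformed Sentinel-2 band reproduces the Landsat band's mean and spread while preserving the spatial rank order of the original pixels.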

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Spatio-Temporal Unmixing of South African Protected Savanna Ecosystems Using a Multi-Data Approach With Spectral Temporal Metrics

Authors: Hilma Nghiyalwa, Dr Akpona Okujeni, Dr Eliakim Hamunyela, Tshililo Ramaswiela, Dr. Jussi Baade, Prof. Dr. Patrick Hostert, Prof. Dr. Christiane Schmullius
Affiliations: Department for Earth Observation, Friedrich Schiller University Jena, GFZ Helmholtz Center Potsdam, GFZ German Research Center for Geosciences, Department of Environmental Science, School of Science, Faculty of Agriculture, Engineering and Natural Sciences, University of Namibia, SAEON Arid Lands Node, Department of Geography, Friedrich Schiller University Jena, Geography Department, Humboldt-Universitat zu Berlin
More than 50% of Africa’s land surface is covered by the savanna biome. Estimation of savanna vegetation fractions is essential to provide detailed information for conservation purposes and for understanding the impacts of bush encroachment, herbivory, droughts and fire on savanna biomes. Satellite remote sensing offers a practical and operational approach for large-area quantification of vegetation fractional cover. Several studies have mapped vegetation fractional cover in savanna biomes by combining radar and optical data. Advantages are often reported when radar and optical data are combined for vegetation fraction mapping, but the challenges associated with the mixed-pixel problem and seasonal effects are often ignored in the accuracy assessment. We present results on the use of neural network regression models, trained on synthetically derived training data from a spectral library and applied to Sentinel-1 and Sentinel-2 yearly and wet/dry-season spectral temporal metrics, to estimate woody and herbaceous vegetation cover fractions in Benfontein Nature Reserve and Mokala National Park in the Northern Cape Province of South Africa. Our study made use of fieldwork data collected with support from the South African Environmental Observation Network (SAEON) and the South African National Parks (SANParks), in conjunction with the South African Land Degradation Monitor (SALDi) project. Our spectral library development combined field data and Very High Resolution (VHR) imagery. In-depth analysis of our spectral library revealed differing separability between woody and herbaceous vegetation types across the two study locations, seasons, and optical and radar sources. While similar variations between sites, seasons and data were reflected in the retrieved fraction maps, a model based on Sentinel-2 spectral temporal metrics alone predicted fraction covers with better accuracy.
The multi-data approach, in which Sentinel-1 and Sentinel-2 spectral temporal metrics were combined, was only beneficial in Benfontein Nature Reserve, where the spatial configuration of the vegetation cover is a continuous grass layer with scattered trees. In summary, the study contributes to the discourse on integrating Sentinel-1 radar and Sentinel-2 optical data, highlighting the potential benefits and limitations of this approach for mapping woody and herbaceous fractions across diverse environments.
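The synthetic training data generation mentioned in the abstract, i.e. linearly mixing spectral-library endmembers with random fractions so that a regression model can learn sub-pixel cover, can be sketched as follows; the library spectra, class structure and Dirichlet fraction sampling here are placeholder assumptions, not the authors' exact procedure:

```python
import numpy as np

def synthetic_mixtures(library, labels, n_samples, rng):
    """Create synthetic mixed pixels from a spectral library by linear
    mixing with random fractions, for training fraction-regression models.
    `library`: (n_spectra, n_bands); `labels`: class id per spectrum."""
    classes = np.unique(labels)
    X = np.empty((n_samples, library.shape[1]))   # mixed spectra
    y = np.empty((n_samples, classes.size))       # target cover fractions
    for i in range(n_samples):
        # Draw one endmember per class and random fractions summing to 1
        idx = [rng.choice(np.flatnonzero(labels == c)) for c in classes]
        frac = rng.dirichlet(np.ones(classes.size))
        X[i] = frac @ library[idx]
        y[i] = frac
    return X, y

rng = np.random.default_rng(2)
# Placeholder library: 10 "woody" and 10 "herbaceous" spectra, 6 bands
library = np.vstack([rng.uniform(0.1, 0.3, (10, 6)),
                     rng.uniform(0.3, 0.6, (10, 6))])
labels = np.array([0] * 10 + [1] * 10)
X, y = synthetic_mixtures(library, labels, n_samples=500, rng=rng)
```

The resulting (X, y) pairs can then train any regressor (the authors use neural networks) to map observed spectra to woody and herbaceous fractions.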

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: The First Open and Free High-Resolution Mowing Event Data Set for Austria based on Sentinel-2 Time Series Data

Authors: Manuela Hirschmugl, Petra Miletich, Alexander Schippl, Marco Kirchmair
Affiliations: Dept. of Geography and Regional Science, University of Graz, Joanneum Research
Mowing events are important for many use cases: not only in agricultural applications, such as yield and fodder production, but also for biodiversity assessments, habitat modelling and protected area monitoring. We present the first data set of mowing events for the whole of Austria for the year 2023 at a 10x10 m resolution. Previous works usually operated at the polygon level, using the Integrated Administration and Control System (IACS) data and building on training data. The shortcomings of these approaches are: (i) limited resolution, i.e. differences in mowing within the IACS parcels are not reflected; (ii) no information for areas outside the IACS parcels, e.g. for protected areas; and (iii) limited applicability in the case of missing training data. Our approach works independently of training data and is therefore easy to apply and cost-efficient in production for any given pixel. Similar to previous studies, the approach relies on Sentinel-2 time-series data, identifying declines in the time-series curves. The NDVI time series is used to detect mowing events, and the mean of the two SWIR bands is applied to exclude misclassifications caused by remaining clouds and shadows. A stringent pre-processing chain is important, including atmospheric correction (Sen2Cor), cloud masking (FMASK) and topographic correction (Minnaert correction). Validation was done using the Panomax webcam archive, digitizing the meadows and the corresponding mowing events. In addition, field observations were added to the data set. Altogether, 211 mowing events were recorded in 85 mapped areas across Austria. The main share was located in Carinthia (29%), followed by Salzburg (24%) and Styria (17%). For three provinces (Burgenland, Vorarlberg and Lower Austria), only one area per province is available so far. For the comparison, the majority value of the detected mowing dates in each mapped meadow was used in conjunction with the reference mowing date.
The preliminary results (without plausibility checks) show that, allowing a maximum delay of 30 days, 79.15% of all mowing events were detected with a mean time delay of 4.26 days. When reducing the maximum allowable delay to 15 days, 77.73% of the mowing events were still detected, with a similar mean time delay of 4.06 days. The delay was slightly smaller for mowing events detected in summer than for those in spring or fall (M1: 5.2, M2: 3.3, M3: 3.5, M4: 3.2, M5: 5.6 days for the 15-day limit; M1: 5.1, M2: 3.3, M3: 4.1, M4: 3.9, M5: 5.6 days for the 30-day limit). This can be explained by a generally higher likelihood of cloud-free image acquisition in summer. A second quality check was done by comparing the individual pixel values with the IACS mowing-intensity data for the whole of Austria (> 1 million polygons in total, roughly 971,000 of which are grasslands/meadows/pastures). For this assessment, we also compared the pixel-based with the polygon-based method. While both perform similarly for pastures (for the majority of pixels/polygons no mowing event was detected) and single-mown meadows (Fig. 1), for twice-mown meadows the pixel-based approach worked much better. More detailed analyses including quality checks are still needed, and further analyses by elevation and/or region are the next steps; more information from so-far underrepresented provinces should also be included. However, our accuracy results are very similar to previous work for Austria building on a training data set (Watzig et al., 2023). Our study therefore shows that a high-quality result can be derived in a fully automated manner.
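The core of the detection approach described above, an abrupt decline in the NDVI time series cross-checked against the SWIR mean to exclude residual clouds and shadows, can be sketched in a few lines. This is a simplified illustration, not the authors' implementation; the threshold values and the exact form of the SWIR check are hypothetical placeholders.

```python
def detect_mowing(dates, ndvi, swir, drop_thresh=0.15, swir_max=0.35):
    """Return dates where NDVI falls by more than `drop_thresh` between
    consecutive clear-sky observations while the SWIR mean stays below
    `swir_max` (a bright SWIR response would suggest residual cloud)."""
    events = []
    for i in range(1, len(ndvi)):
        if ndvi[i - 1] - ndvi[i] > drop_thresh and swir[i] < swir_max:
            events.append(dates[i])
    return events

dates = ["05-10", "05-20", "05-30", "06-09", "06-19"]
ndvi  = [0.62, 0.71, 0.45, 0.52, 0.60]   # sharp drop after 05-20
swir  = [0.20, 0.22, 0.25, 0.21, 0.19]   # low SWIR: drop is not cloud-related
print(detect_mowing(dates, ndvi, swir))  # ['05-30']
```

In the operational setting, the same logic would run per 10x10 m pixel on atmospherically corrected, cloud-masked Sentinel-2 time series.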

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Growth Unveiled: Decoding the Start of Grassland Seasons in Austria

Authors: Aleksandar Dujakovic, Francesco Vuolo, Andreas Klinger, Andreas Schaumberger, Konrad Mayer, Clement Atzberger, Anja
Affiliations: Institute Of Geomatics, University Of Natural Resources And Life Sciences (BOKU), Agricultural Research and Education Center (AREC), Central Institute for Meteorology and Geodynamics (Geosphere)
The start of the growing season (SOS) in grasslands is a critical factor that significantly affects grassland dynamics, production and quality. In the context of grassland fodder production, exact detection of the start of the growing season allows farmers and land managers to precisely plan harvest schedules for hay production or silage making. At SOS, vegetation shows a continuous increase in greening-up directly after a long period of photosynthetic dormancy in winter. Remotely sensed Land Surface Phenology metrics such as “spring onset” provide broader-scale estimates of phenological variation. However, satellite-based SOS detection is challenging due to the gradual onset of growth in spring, resulting in less distinct trends in satellite vegetation indices (VI). Melting snow further complicates SOS detection, causing unrelated VI increases. A commonly used approach relies on NDVI crossing a predefined threshold, but it often leads to delayed SOS estimates. These challenges emphasize the need for novel approaches to improve satellite-based SOS detection accuracy. This study establishes a new methodology for identifying the SOS specifically tailored to grassland regions in Austria. To accomplish this, a synergistic approach combining remote sensing and weather data analysis was adopted. The Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI dataset at 250 m resolution offers advantages for SOS detection, such as high revisit frequency and the ability to cover large areas, providing a comprehensive view of ecosystems and their changes over time. Incorporating weather data for SOS detection, on the other hand, exploits the strong relationship between temperature and phenological changes. NDVI-based SOS is first estimated using a pixel-based NDVI threshold and then refined with the Multicriterial Thermal Definition (MTD) criteria, which require specific temperature thresholds to be met.
Results indicate that the integrated approach outperforms alternative methods in estimating SOS, showing a correlation (r2) of 0.35 between the detected SOS for grasslands and the SOS of a reference species (Forsythia). Comparing estimated SOS dates with reference SOS dates based on visual interpretation of high spatial resolution satellite images revealed an r2 of 0.65 and a mean absolute difference of 5.3 days. This method is applicable for estimating SOS in Austrian grasslands for yield modeling, as the accuracy of the yield model for the first growth period, and subsequently the whole vegetation period, strongly depends on the predicted SOS, which initiates the entire yield modeling process. Although detected SOS dates align with phenological observations and high-resolution satellite images, the lack of reliable in-situ data for Austrian grasslands remains a limitation. To improve the reliability and applicability of SOS estimation for grassland yield modeling and environmental monitoring, additional validation techniques are crucial. Overcoming the challenge of obtaining reference data will enhance the methodology's usefulness in optimizing grassland fodder production and management.
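The two-step estimation described above, an NDVI threshold crossing refined by a thermal criterion, might look roughly like the following toy sketch. The growing-degree-day requirement stands in for the Multicriterial Thermal Definition (MTD), and all threshold values are invented for illustration.

```python
def start_of_season(doy, ndvi, tmean, ndvi_thresh=0.4, base_t=5.0, gdd_req=10.0):
    """Return the first day of year at which NDVI exceeds `ndvi_thresh` AND
    accumulated growing degree-days above `base_t` have reached `gdd_req`
    (hypothetical stand-in for the MTD temperature criteria)."""
    gdd = 0.0
    for d, v, t in zip(doy, ndvi, tmean):
        gdd += max(t - base_t, 0.0)
        if v >= ndvi_thresh and gdd >= gdd_req:
            return d
    return None

doy   = [60, 70, 80, 90, 100]           # observation days of year
ndvi  = [0.25, 0.35, 0.42, 0.50, 0.55]  # smoothed MODIS-like NDVI samples
tmean = [2.0, 6.0, 9.0, 11.0, 13.0]     # mean temperature per sample (degC)
print(start_of_season(doy, ndvi, tmean))  # 90
```

Note how the thermal criterion postpones the NDVI-only estimate (day 80 in this toy series, where NDVI first exceeds 0.4) to a later, temperature-consistent date.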

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Near real-time anomaly detection in permanent grasslands using Sentinel-2: A tool for monitoring CAP compliance

Authors: Eric Kosczor, Christine Wessollek, Daniel Kinalczyk, Pierre Karrasch, Nora Egli, Matthias Forkel
Affiliations: TUD Dresden University of Technology, Junior Professorship in Environmental Remote Sensing, Dresden, Germany, Saxon State Office for Environment, Agriculture and Geology (LfULG), Dresden, Germany
Permanent grasslands play a critical role in maintaining biodiversity by providing habitats for numerous plant and animal species. They act as vital carbon sinks, sequestering carbon dioxide and helping mitigate climate change. Grasslands also contribute to soil stability, preventing erosion and improving water retention, which supports local water cycles. The ploughing of permanent grassland should therefore be avoided, as it can lead to a loss of ecosystem services, reduced biodiversity, and increased greenhouse gas emissions. The ban on permanent grassland conversion is codified in the Common Agricultural Policy (CAP) of the European Union (EU) as one of nine measures for maintaining Good Agricultural and Environmental Conditions (GAEC). Monitoring compliance with these measures is essential to ensure that valuable ecosystems are preserved and efforts to achieve environmental and climate goals are not undermined. Here we propose a monitoring approach for permanent grasslands which leverages the high temporal and spatial resolution of Sentinel-2 and aims to identify anomalous activities on permanent grasslands in near real-time, allowing timely intervention by administrative bodies. We derive various spectral features for every parcel in the German state of Saxony currently classified as permanent grassland in the state’s Integrated Administration and Control System (IACS) from ready-to-use FORCE-processed Sentinel-2 ARD data. After eliminating insufficient and low-quality data, we group the parcels based on management practices (mainly grazing or mowing) and climatic conditions (grassland temperature sums, growing degree-days), the latter of which were defined beforehand through a clustering approach. This step is done under the assumption that grassland parcels appear similar under similar conditions, which allows the approach to be carried out unsupervised. Following this, we apply a three-stage sequence to identify anomalous parcels.
In the first stage, we apply an Isolation Forest algorithm to each group to detect outliers in the dataset. To narrow the anomalies down to possible ploughing events, in the second stage we eliminate parcels above/below certain quantile thresholds of mean NDVI (Normalized Difference Vegetation Index) and BSI (Bare Soil Index), before a second Isolation Forest is run in the third stage. The approach was evaluated on Sentinel-2 data for 2024, with the anomaly detection performed for each available data set between June and the beginning of November. We validated our results through on-site inspections, which revealed reliable performance in detecting ploughing events. The algorithm also proved suitable for detecting other anomalous occurrences, such as the presence of construction or infrastructure material on the grassland, recently spread manure, or flooded parcels. The strengths of the approach lie in its minimal preprocessing requirements and its unsupervised nature, which make it easy to apply to other regions and relatively simple to implement and interpret for end-users at regional authorities. It is planned to integrate the approach into an interactive tool to guide the on-site CAP inspections regularly conducted by the responsible authority in Saxony.
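A minimal sketch of the three-stage screening, assuming scikit-learn's IsolationForest and substituting the operational FORCE-derived feature set with two toy parcel features (mean NDVI and BSI). The contamination rate and quantile bounds are hypothetical placeholders.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
ndvi = rng.normal(0.7, 0.05, 200)   # parcel-mean NDVI, typical grassland
bsi  = rng.normal(-0.3, 0.05, 200)  # parcel-mean Bare Soil Index
ndvi[:5] = 0.25                     # five "ploughed" parcels: low NDVI,
bsi[:5]  = 0.30                     # high bare-soil signal
X = np.column_stack([ndvi, bsi])

# Stage 1: flag general outliers within the (climate/management) group.
flag1 = IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == -1

# Stage 2: keep only outliers consistent with bare soil, i.e. NDVI below
# and BSI above a quantile threshold.
bare = (ndvi < np.quantile(ndvi, 0.1)) & (bsi > np.quantile(bsi, 0.9))
candidates = flag1 & bare

# Stage 3: a second Isolation Forest would re-screen the remaining
# candidates; trivially re-applied here to the candidate subset.
if candidates.sum() > 1:
    IsolationForest(random_state=0).fit_predict(X[candidates])

print("candidate ploughing parcels:", np.flatnonzero(candidates))
```

The grouping by management practice and climate described above would simply wrap this sequence in a loop over parcel groups.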

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Quantifying the Multi-dimensional Impact of Rangeland Restoration From Space

Authors: Mendy van der Vliet, Dr. Yoann Malbeteau, Darren Ghent, Sander de Haas, Dr. Karen L. Veal, Thijs van der Zaan, Professor Rajiv Sinha, Dr. Rasmus Houborg, Dr. Richard A. M. de Jeu
Affiliations: Planet Labs PBC, Haarlem, Netherlands (first author performed this work at Planet and is now moving to Wageningen University), National Centre for Earth Observation, Department of Physics and Astronomy, University of Leicester, Justdiggit, Department of Earth Sciences, Indian Institute of Technology Kanpur, Planet Labs PBC, Transmissivity BV
The quantification of the effects of ecosystem conservation and restoration activities is considered necessary to increase the effectiveness of these activities. While assessments of ecosystem services (ES) are occurring more frequently, their integration in operational decision support is still limited, as reported by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES). Identified barriers to uptake are a lack of evidence and standardized methods. The availability, scalability and spatial resolution of ecosystem data also play an important role. Stakeholder engagement in outlining the purpose and design of ES assessments is highly recommended by the IPBES Values Assessment. In this study we co-designed an approach to evaluate the effectiveness of landscape restoration using a combination of high-resolution and long-term satellite data records. Stakeholders contributed to the outline of the research, the design of the method, the prototype and their refinement. We investigated the potential of several satellite datasets with different spatial resolutions and data availability for the detection and attribution of restoration changes. Based on this research we selected satellite-derived soil moisture, land surface temperature and vegetation greenness products at resolutions ranging from 250 m to 3 m. We used these datasets to quantify the effectiveness of recent landscape restoration activities for two sites in Tanzania. These sites consist of rangelands that were degraded due to overgrazing and in which water bunds were dug to bring back vegetation. We quantified an increase in the amount of water retained by the top layer of the soil (∼13% average increase), a local soil cooling (∼0.5°C), and an increase in vegetation greenness (∼50% average increase) over 3.5 years. This approach demonstrates the environmental impact of restoration initiatives and forms the basis for automatic reporting of comprehensive metrics to partners and donors. Satellite observations from space agencies and commercial providers are now achieving the accuracy, frequency and resolution needed for effective assessment of restoration activities.
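The before/after quantification reported above reduces, in its simplest form, to a relative change of period means. A toy sketch with invented numbers (not the study's data):

```python
def relative_change(pre, post):
    """Percent change of the post-period mean relative to the pre-period mean."""
    mean_pre = sum(pre) / len(pre)
    mean_post = sum(post) / len(post)
    return 100.0 * (mean_post - mean_pre) / mean_pre

# Hypothetical site-mean volumetric soil moisture, before and after digging
# water bunds (values invented for illustration):
soil_moisture_pre  = [0.14, 0.15, 0.16]
soil_moisture_post = [0.16, 0.17, 0.18]
print(f"{relative_change(soil_moisture_pre, soil_moisture_post):+.1f}%")  # +13.3%
```

An operational version would additionally need to attribute the change to the intervention, e.g. by comparing against untreated reference areas.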

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Monitoring Herbaceous Biomass and Restoration of Semi-Natural Grasslands Using Machine Learning on Sentinel-1 and Sentinel-2 Imagery

Authors: Iris Luik, Triin Reitalu, Elisabeth Prangel, Liis Kasari-Toussaint, Aveliina Helm, Evelyn Uuemaa
Affiliations: Landscape Geoinformatics Lab, Department of Geography, Institute of Ecology and Earth Sciences, University of Tartu, Vanemuise 46, 51003, Landscape Biodiversity Group, Department of Botany, Institute of Ecology and Earth Sciences, University of Tartu, J. Liivi 2 50409
Semi-natural grasslands are critical ecosystems that provide a range of essential services, with the provision of habitat for many species being one of the most significant. However, over the past century, semi-natural grasslands, which once covered vast areas across Europe, have predominantly been transformed into intensively managed agricultural land, abandoned, or converted to forest. These large-scale land-use changes have led to considerable biodiversity loss, making the conservation and restoration of semi-natural grasslands an important part of sustainable landscape management. Remote sensing technologies offer powerful tools for monitoring these ecosystems, making the evaluation of landscape management practices more efficient and comprehensive. We utilized 86 in-situ herbaceous biomass samples collected during the summer of 2019. These samples were taken from 20 × 20 cm plots nested within larger 2 × 2 m botanical plots located in Western Estonia. All observation sites were located in alvar grasslands and had undergone restoration between 2015 and 2019. Sentinel-1 and Sentinel-2 imagery from the summer of 2019 was used in the analysis, with median values calculated for each band over the given period. In addition, various indices were calculated, e.g. the Normalized Difference Vegetation Index (NDVI), the Bare Soil Index (BSI), and the VH/VV ratio. Random Forest models were developed to estimate herbaceous biomass based on satellite-derived data and field measurements. Over 20 predictor variables were considered, including all Sentinel-1 and Sentinel-2 bands, tree cover, and topographical features. To enhance model performance and reduce dimensionality, feature selection was performed. The models achieved an overall accuracy of 65% in detecting herbaceous biomass. SHapley Additive exPlanations (SHAP) analysis revealed that the most significant predictor of biomass was the near-infrared band (B8), followed by the Vertical-Vertical (VV) polarization and the Red Edge 1 band (B5).
The findings suggest that combining optical (Sentinel-2) and SAR (Sentinel-1) data provides a promising approach for monitoring herbaceous biomass in semi-natural grasslands. This integrated methodology offers a cost-effective tool for monitoring landscape-level conservation and restoration efforts.
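A hedged sketch of the modelling step: a RandomForestRegressor trained on synthetic stand-ins for the satellite predictors, with impurity-based feature importances as a lightweight substitute for the SHAP analysis used in the study. All values are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 86                                  # matches the number of field plots
b8 = rng.uniform(0.2, 0.5, n)           # NIR reflectance (toy values)
vv = rng.uniform(-14.0, -8.0, n)        # Sentinel-1 VV backscatter (dB)
b5 = rng.uniform(0.1, 0.3, n)           # Red Edge 1 reflectance
# Synthetic biomass dominated by the NIR signal, mirroring the ranking
# the study's SHAP analysis reported for B8:
biomass = 800.0 * b8 + 10.0 * vv + rng.normal(0.0, 20.0, n)

X = np.column_stack([b8, vv, b5])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, biomass)

for name, imp in zip(["B8 (NIR)", "VV", "B5 (RE1)"], rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

The real workflow additionally applied feature selection over the ~20 candidate predictors before fitting.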

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Towards a European-wide grassland productivity estimation system

Authors: Mattia Rossi, Julien Morel, Davide Fumagalli, Martin Claverie
Affiliations: European Commission, Joint Research Center (JRC)
Grasslands play crucial roles in many key environmental processes such as water retention, soil erosion control and the sequestration of greenhouse gases. At the same time, grasslands are the main source of fodder for feeding livestock, and estimates of quantity and quality are crucial to determine food availability and efficiently implement policy actions. This importance has recently been advocated in policies such as the Farm to Fork strategy and the EU protein strategy. To monitor grassland systems in Europe, Earth observation sensors, which are capable of providing timely and tailored time series of biophysical properties as well as evidence of management practices in grasslands, are of central importance. Within the Joint Research Centre, the Agri4cast team is building a system capable of quantitatively estimating grassland productivity in the European Union. The current setup is based on two grassland productivity models (LINGRA and ModVege), which are driven by meteorological and soil data aggregated on a 10 km grid across the EU. However, such grassland-specific models are designed to predict productivity for a specific species composition, and their parameters need to be calibrated to account for location- or region-specific conditions such as management practices. First model runs suggested mixed results depending on the region considered: acceptable performance is reported in parts of the highly productive grassland regions of western Europe. On the other hand, we see clear changes in adjacent cells, most probably triggered by the models' limited capability to predict grassland cuts. In southern Europe, poor results were observed, as mixed or extensive grasslands are not well accounted for by the models. More generally, the biggest difficulty is distinguishing meadows from pastures. Here, information on mowing frequency and grassland type is crucial to delimit these two grassland classes.
Remote sensing is a key component for applying these models at large geographical scales. While meteorological and soil parameters are covered by a priori generated datasets, time series of biophysical variables and management information are crucial for model transferability. For biophysical variables, the Sentinel-2 LAI product is an important input, as it can be compared to the model's prediction over time. Concerning management, the models rely strongly on the reset of the growth pattern after a mowing or grazing event, which results in notable small-scale heterogeneity of the predicted productivity whenever the reset conditions are not properly met. Therefore, two current Earth observation-based developments are key inputs for model performance, and both are tested in the model setup: firstly, the results of the currently ongoing MODCiX initiative comparing different cut-detection models based on remote sensing imagery, and secondly, the novel Copernicus HRL grassland layers that are about to be released. Using these novel remote sensing products and tailored algorithms alongside the current model implementation should result in more accurate grassland productivity modelling, most notably in intensively managed permanent grasslands.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Advancing Fractional Woody Cover Mapping and Monitoring in African Savannahs: Multi-Sensor Remote Sensing for Enhanced Ecosystem Monitoring

Authors: Elias Symeonakis, Dr Christina Karakizi, Dr Akpuna Okujeni, Dr Thomas Higginbottom, Dr Joana Borges, Konstantinos Karantzalos, Dr Patrick Hostert
Affiliations: Manchester Metropolitan University, National Technical University of Athens, Humboldt-Universität zu Berlin, Airbus Defense and Space
Savannah ecosystems, covering approximately 20% of the Earth's land surface, play a critical role in global biodiversity, carbon cycling, and the livelihoods of millions. However, these landscapes face increasing threats from land degradation, driven largely by woody vegetation encroachment. This phenomenon, characterized by the expansion of shrubs and trees at the expense of grasses, disrupts ecosystem functions, reduces grazing capacity, and alters fire regimes and hydrological cycles. Accurate monitoring and mapping of fractional woody cover (FWC) are essential for informing sustainable land management and supporting Land Degradation Neutrality goals. Over recent years, we have been working across African savannahs in an attempt to improve our understanding of savannah vegetation processes and provide improved Earth Observation tools for their monitoring. Higginbottom et al. (2018) utilised Landsat data in conjunction with ALOS PALSAR radar imagery to improve FWC mapping accuracy. The fusion of optical and radar data captured both reflectance properties and structural information, emphasizing the importance of multi-sensor integration. Seasonal differences further enhanced accuracy by capturing phenological variability. Borges et al. (2022) used Landsat-based spectral-temporal metrics and vegetation indices, and demonstrated that cost-effective, freely available data can yield robust results for fractional land cover mapping. The emphasis on vegetation indices highlighted their utility in distinguishing grassland from woody cover, underscoring the potential for scalable monitoring solutions. Karakizi et al. (subm.) used Sentinel-1 and Sentinel-2 data from wet and dry seasons to explore the synergy between these sensors. The study tested various machine learning (ML) and deep learning (DL) algorithms, finding that multi-seasonal inputs significantly enhanced FWC prediction accuracy.
Ancillary data, including climatic and soil metrics, further improved model performance. These studies employed a range of ML and DL techniques to optimize FWC mapping. Random Forest and Support Vector Machines were commonly used in earlier studies due to their robustness and ease of implementation. However, recent advances have seen a shift toward more sophisticated models, such as XGBoost regression (XGBR), which consistently delivered superior results in terms of both accuracy and computational efficiency. Deep learning methods, particularly the U-Net architecture, were also explored for their ability to capture complex spatial patterns in high-resolution data. U-Net's ability to model spatial dependencies and extract hierarchical features made it particularly effective for generating high-accuracy classification masks, as demonstrated in the Karakizi et al. (subm.) study. These findings underscore the importance of selecting algorithms that balance accuracy, computational efficiency, and the ability to handle large datasets.

Beyond fractional woody cover: identifying woody species with hyperspectral data

Being able to accurately map fractional woody vegetation has clear benefits compared with previous attempts to map woody cover through hard classification approaches (e.g., Symeonakis et al., 2018). More recently, however, through consultations with local stakeholders, it has become increasingly acknowledged that, to protect both livelihoods and biodiversity, a key challenge is to accurately monitor and manage woody vegetation encroachment at the species level, as some woody species can support livelihoods, opening new opportunities, while others undermine livelihoods and cause land degradation. Traditional field-based monitoring methods are limited in spatiotemporal coverage, while operational remote sensing approaches fail to identify specific species, hindering the development and implementation of effective management strategies. To this end, Karakizi et al.
(2024) leveraged EnMAP hyperspectral imagery to map FWC at the species level. By incorporating Sentinel-2 spectro-temporal metrics, the study demonstrated the potential for distinguishing between woody species, a critical step towards more targeted ecosystem management strategies. EnMAP's high spectral resolution has proven invaluable for species-level mapping, providing the granularity needed to differentiate between similar woody species. However, hyperspectral data come with their own set of challenges, including higher data volumes, increased computational demands, and the need for more sophisticated preprocessing techniques. Additionally, atmospheric corrections and noise reduction are critical for ensuring data quality and accuracy.

Moving Forward: Opportunities with CHIME

The upcoming Copernicus Hyperspectral Imaging Mission (CHIME) presents a significant opportunity to overcome some of these challenges. CHIME will provide continuous, high-resolution hyperspectral data designed to meet the needs of environmental monitoring. Its ability to deliver global coverage with frequent revisits will enable more consistent monitoring of savannah ecosystems, addressing the temporal limitations of existing hyperspectral missions: during a single acquisition, CHIME will acquire four times more data than any other hyperspectral satellite. Integrating CHIME data with existing Sentinel-1 and Sentinel-2 datasets will offer a powerful opportunity for advancing FWC mapping. The fusion of hyperspectral and multispectral data, coupled with multi-temporal analysis, will provide an unprecedented level of detail for monitoring changes in vegetation structure and composition. This will be particularly valuable for tracking species-level dynamics, informing targeted interventions, and supporting policy decisions aimed at mitigating land degradation.
Conclusion

Our work highlights the importance of multi-sensor data fusion and advanced ML/DL algorithms in enhancing the accuracy and granularity of FWC mapping in savannah ecosystems. The integration of optical, radar, and hyperspectral data has proven critical for capturing the complexity of these landscapes, particularly when combined with ancillary bioclimatic and soil data. Moving forward, leveraging time-series data from upcoming hyperspectral missions like CHIME will be essential for developing more nuanced, species-level monitoring frameworks. Such advancements will play a vital role in supporting sustainable land management and achieving global biodiversity and climate goals.

Keywords: savannah, land degradation, woody vegetation encroachment, multi-sensor, hyperspectral, Landsat, Sentinel, EnMAP

References

Borges, J., Higginbottom, T.P., Cain, B., Gadiye, D.E., Kisingo, A., Jones, M., Symeonakis, E., 2022. Landsat time series reveal forest loss and woody encroachment in the Ngorongoro Conservation Area, Tanzania. Remote Sensing in Ecology and Conservation, 8(6), 808-826.

Higginbottom, T.P., Symeonakis, E., Mayer, H., van der Linden, S., 2018. Mapping Woody Cover in Semi-arid Savannahs using Multi-seasonal Composites from Landsat Data. ISPRS Journal of Photogrammetry and Remote Sensing, 139, 88-102. https://doi.org/10.1016/j.isprsjprs.2018.02.010

Karakizi, C., Gounari, O., Sofikiti, E., Begkos, G., Kellner, K., Karantzalos, K., Symeonakis, E., subm. Fractional Woody Vegetation Mapping with Machine and Deep Learning in a South African Savannah. International Journal of Applied Earth Observation and Geoinformation.

Karakizi, C., Okujeni, A., Sofikiti, E., Tsironis, V., Psalta, A., Karantzalos, K., Hostert, P., Symeonakis, E., 2024. Mapping Savannah Woody Vegetation at the Species Level with Multispectral Drone and Hyperspectral EnMAP Data. IEEE International Geoscience and Remote Sensing Symposium, 4447-4451, 7-12 July, Athens. https://doi.org/10.1109/IGARSS53475.2024.10641996

Symeonakis, E., Higginbottom, T.P., Petroulaki, K., Rabe, A., 2018. Optimisation of Savannah Land Cover Characterisation with Optical and SAR Data. Remote Sensing, 10(4), 499. https://doi.org/10.3390/rs10040499

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Assessing the Impact of Local Terrain in the Krkonoše Mountains on Grassland Mowing Detection Using Sentinel-1 SAR Coherence and Multi-Sensor Validation

Authors: Jakub Dvořák, Markéta Potůčková
Affiliations: Charles University, Faculty of Science, Department of Applied Geoinformatics and Cartography, TRL SPACE
Grassland mowing is a vital management practice in European grasslands, supporting livestock fodder production and promoting biodiversity. Reliable monitoring of these activities is essential to support decision making in the institutions responsible for ensuring the balance between agricultural productivity and ecological sustainability. Satellite remote sensing plays a key role in monitoring grassland management practices, particularly in regions with complex terrain where traditional ground-based methods are challenging. While optical remote sensing has been widely applied, Synthetic Aperture Radar (SAR) is particularly beneficial in areas with persistent cloud cover or mountainous terrain. A key technique for detecting grassland mowing with SAR is interferometric coherence analysis. Mowing significantly alters the scattering properties of the vegetation: before mowing, the dominant volumetric scattering comes from the grass canopy, whereas after mowing, the dominant scattering mechanism changes to surface scattering from the exposed soil or grass sod. This change in scattering characteristics leads to an increase in coherence after mowing. This study investigates the challenges of monitoring grassland mowing in mountainous regions, specifically in the Krkonoše Mountains, using time series of Sentinel-1 SAR coherence data. We hypothesize that using a more accurate local Digital Elevation Model (DEM) will improve the coregistration and terrain correction of SAR imagery, and that mowing detection accuracy is influenced by local incidence angles in complex terrain. To test these hypotheses, we analyzed Sentinel-1 SAR data from 2021 and 2022, alongside in-situ validation data, supplemented by optical imagery from Sentinel-2 and PlanetScope.
The results demonstrate that aggregating detections from multiple Sentinel-1 orbits improves detection certainty, with higher accuracy than previous studies that used a different method for aggregating information from individual orbits. We observed that using either the local or the global DEM led to a slight horizontal shift during terrain correction, but the shift did not significantly affect detection accuracy at the field level. The detection accuracy in 2022 was also notably lower than in the year before, underscoring the importance of having access to data from at least two Sentinel-1 satellites for reliable mowing detection. Moreover, no clear relationship was found between local incidence angles and detection accuracy, suggesting that the current SAR coherence processing methods are robust for mowing detection even in mountainous terrain. This research highlights the potential of Sentinel-1 SAR coherence analysis for grassland mowing detection in complex terrain, emphasizing the importance of multi-sensor data fusion (e.g., Sentinel-1, Sentinel-2, and PlanetScope) for enhanced monitoring accuracy. These findings point to the continued relevance of SAR-based approaches for monitoring grasslands, especially in complex landscapes such as those found in the Krkonoše Mountains. They also call for further development of SAR-optical data integration and detection algorithms to improve scalability and precision in grassland management and biodiversity conservation monitoring. Acknowledgement: This research was supported by the European Commission, CINEA Horizon Europe project no. 101081307 “Towards Sustainable Land-Use in the Context of Climate Change and Biodiversity in Europe (Europe-LAND)”.
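The coherence signature exploited here, a jump in repeat-pass coherence once volume scattering from the standing grass gives way to surface scattering from the cut sward, can be illustrated with a minimal sketch. This is not the study's pipeline, and the jump threshold is a hypothetical placeholder.

```python
def coherence_jumps(dates, coh, jump=0.2):
    """Return dates where interferometric coherence rises by more than
    `jump` relative to the previous image pair, a candidate mowing
    signature (coherence increases once the grass canopy is removed)."""
    return [dates[i] for i in range(1, len(coh)) if coh[i] - coh[i - 1] > jump]

dates = ["06-01", "06-07", "06-13", "06-19", "06-25"]
coh   = [0.30, 0.28, 0.55, 0.50, 0.48]   # coherence rises after the cut
print(coherence_jumps(dates, coh))       # ['06-13']
```

The study's aggregation of detections across several Sentinel-1 orbits would amount to combining such per-orbit candidate lists per field.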

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Assessing Vegetation Dynamics and Land Cover Changes in Agricultural Pasture Areas and Croplands of the Mantsopa Region, South Africa

Authors: Sophie Kazenwadel, Lukas Heinz, Marco Wolsza, Dr. Jussi Baade, Dr. Theunis Morgenthal, Prof. Dr. Christiane Schmullius
Affiliations: Department for Earth Observation, Friedrich Schiller University Jena, Department of Physical Geography, Friedrich Schiller University Jena, Department of Agriculture Land Reform and Rural Development
South Africa, known for its diverse ecosystems, is increasingly affected by land degradation processes driven by both natural and anthropogenic factors. The Mantsopa region, one of six focal areas of the South African Land Degradation Monitor project (SALDi), provides a solid foundation for monitoring, detecting and quantifying vegetation changes on agricultural land. This study focuses on agricultural pasture areas and croplands within the Mantsopa region, leveraging Earth observation data from the data cubes of the SALDi project. The project employs high-resolution Earth observation data to analyze subtle land cover changes that may contribute to degradation. Funded under the SPACES II program by the German Federal Ministry of Education and Research (BMBF), the project utilizes data from international missions, including Sentinel-1 and -2 of the European Space Agency (ESA) and TanDEM-X of the German Aerospace Center (DLR). To determine the actual annual land use and enhance the reliability of the findings, on-site information will be analyzed through a partnership with local farmers. These data will serve as deterministic reference and control values for validating our analyses and evaluations. The methodological approach includes processing remote sensing data, applying cloud masking, and conducting time series analyses to ensure robust results. The analysis focuses on documenting vegetation dynamics, visualizing land cover changes and calculating vegetation indices such as the Enhanced Vegetation Index (EVI) and the Normalized Difference Vegetation Index (NDVI). Land cover maps from South Africa's Department of Forestry, Fisheries, and the Environment serve as a baseline for examining specific categories, such as pasture areas and cropland, with a detailed and critical assessment of changes within these categories. In addition, pasture areas and croplands are to be distinguished from each other using coherence analyses based on Sentinel-1 data, allowing for separate analysis.
An estimation of surface moisture using the Sentinel-1 Surface Moisture Index (SurfMI) is systematically compared with in-situ soil moisture measurements to develop an understanding of the remote sensing outputs. Furthermore, precipitation datasets, including MSWEP and CHIRPS, are cross-referenced to evaluate their representativeness and applicability in the study area. Comparative analysis with established models from current research efforts provides additional validation and contextualization. This research contributes to the development of scalable, comprehensive monitoring systems capable of addressing land use dynamics at both local and regional levels. The findings offer valuable insights for scientific research, providing a foundation for further studies and enabling wider conclusions to be drawn about sustainable land use strategies and environmental challenges. This work is part of an MSc seminar at the University of Jena in cooperation with South African institutions.
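The two vegetation indices named in the abstract are simple band combinations; a minimal sketch of their computation (the reflectance values below are illustrative, and the EVI coefficients are the commonly used standard ones) might look like:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard coefficients."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Illustrative surface reflectances for a healthy vegetation pixel
nir, red, blue = 0.45, 0.08, 0.04
print(round(ndvi(nir, red), 3), round(evi(nir, red, blue), 3))  # → 0.698 0.567
```

EVI's blue-band term makes it less sensitive to atmospheric effects and less prone to saturation over dense canopies than NDVI, which is why the two are often computed side by side.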
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Grassland Management Identification Based on Object Detection From Orthoimagery

Authors: Marcel Hudcovič
Affiliations: Institute of Geography, Slovak Academy of Sciences, Faculty of Natural Sciences, Comenius University in Bratislava
Human activities influence the structure and function of grassland ecosystems through management. Efficient management of grasslands is essential for maintaining biodiversity and providing ecosystem services. Management practices such as mowing or grazing are important for the sustainable use and conservation of grasslands and for preventing their transition towards different ecosystems. Conventional ways of identifying management practices rely heavily on field observations and surveys. While effective at a small scale, these methods are labor- and time-intensive, and their capacity to provide a comprehensive view of grassland management across larger regions is limited. Advances in remote sensing technologies are transforming grassland management monitoring by enabling consistent and repeatable analyses at large scales. Very high spatial resolution imagery allows observation of detailed grassland structures and their changes, which helps in the more precise identification of features related to grassland management. For identifying detailed landscape structures, high-resolution orthophotos offer a suitable scale, but their temporal availability for mapping specific objects is limited. However, AI-based object detection techniques have enhanced the usefulness of orthophotos for this purpose by allowing the identification of specific objects, such as livestock, machinery, or haystacks, that are directly linked to specific grassland management activities. Annotating these objects in orthophotos enables us to detect and classify management-related features with high precision and speed. This approach overcomes many of the practical limitations of manual identification and allows extensive mapping of grassland management practices with almost no human intervention. This study presents the outcomes of such an object-based detection model by integrating the YOLO algorithm (Jocher, G., Qiu, J., & Chaurasia, A. (2023). Ultralytics YOLO (Version 8.0.0). https://github.com/ultralytics/ultralytics) applied over the country-wide orthophotomosaic of Slovakia. We used orthophotos over the period 2023-2024, generated with a ground sampling distance of 15 centimeters and four channels (RGB + NIR). Within this study, we present the methodology of object detection over selected areas, trained on manually outlined and further augmented training samples. The presented model is transferable and intended to be applicable elsewhere in similar landscapes wherever high-resolution imagery is available. The obtained results extend the available information on grassland management strategies and may be complemented with other data and remote sensing technologies to define the dimensions of grassland management regimes. Such knowledge is crucial for understanding the efficiency of biodiversity conservation efforts at multiple scales.
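Manually outlined annotations reach a YOLO detector as label files with one normalized bounding box per line. A minimal sketch of that conversion, not the authors' actual pipeline (the tile size and class id below are hypothetical), could be:

```python
def to_yolo_label(box, img_w, img_h, class_id):
    """Convert a pixel-space bounding box (xmin, ymin, xmax, ymax) into a
    YOLO-format label line: 'class x_center y_center width height',
    with all coordinates normalized to [0, 1] by the image dimensions."""
    xmin, ymin, xmax, ymax = box
    xc = (xmin + xmax) / 2 / img_w
    yc = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Hypothetical livestock annotation on a 1024 x 1024 orthophoto tile
print(to_yolo_label((100, 200, 160, 260), 1024, 1024, 0))
```

One such text file per image tile, plus the images themselves, is the training input format expected by the Ultralytics tooling cited above.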
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Time series analysis of grass vegetation dynamic across a gradient of management intensity in Alpine Europe

Authors: Paul Jacques Connetable, Konrad Turlej, Michael Munk, Alexander Prishchepov
Affiliations: DHI, University of Copenhagen
Grassland management practices, such as mowing, grazing, and fertilization, play a critical role in shaping mountain hay meadows (MHMs) and their associated ecosystems in the Alpine regions of Europe. While the environmental impacts of these practices have been widely acknowledged, there is limited understanding of their broader effects on long-term vegetation trends. To address this, we analyzed the relationship between management intensity and vegetation development in MHMs using Sentinel-2 NDVI time series data. Our study focused on selected sites in Armenia, Austria, Germany, and Sweden, utilizing Sentinel-2 data from 2017 to 2024, a period during which the two-satellite constellation enabled consistent 10-meter resolution imagery every five days. Vegetation trends were investigated using the BFAST algorithm (Breaks For Additive Season and Trend) to detect changes and trends in the NDVI signal, alongside integrals of annual and seasonal NDVI time series as a proxy for biomass production. These results were cross-referenced with information on management intensity to assess their combined effects on grassland vegetation. Our findings reveal valuable insights into how management practices impact the ecological characteristics of MHMs. The integration of trend analysis with biomass production estimates provides a more comprehensive understanding of the interactions between management intensity and vegetation dynamics. Satellite remote sensing analysis in mountain areas is limited by complex topography, which can cause terrain-induced distortions and shadowing, and persistent cloud cover, which reduces the availability of clear imagery and hampers continuous monitoring. Despite these limitations, the results demonstrate added value, as satellite imagery enables detection of both temporal and spatial changes in vegetation and land management practices at regional or national scale in hard-to-reach landscapes like MHMs.
This research contributes to broader efforts to evaluate and optimize grassland management strategies, supporting the preservation of biodiversity and ecosystem functions in mountain hay meadows across Europe.
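The seasonal NDVI integral used above as a biomass-production proxy can be sketched with the trapezoidal rule (the day-of-year sampling and NDVI values below are illustrative, not from the study sites):

```python
import numpy as np

# Hypothetical ~10-daily NDVI observations across one growing season
doy = np.array([100, 120, 140, 160, 180, 200, 220, 240, 260])
ndvi = np.array([0.30, 0.45, 0.60, 0.72, 0.78, 0.75, 0.65, 0.50, 0.38])

# Seasonal NDVI integral (trapezoidal rule) as a biomass-production proxy:
# sum of trapezoid areas between consecutive observations
integral = float(np.sum((ndvi[1:] + ndvi[:-1]) / 2 * np.diff(doy)))
print(round(integral, 1))  # → 95.8
```

Restricting `doy` to a sub-window (e.g. spring only) gives the seasonal variants of the integral mentioned in the abstract.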
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Mapping the Intensity and Use of European Grasslands by Combining Detailed Statistics, Spatial Modeling and Earth Observation Data

Authors: Žiga Malek, Dr Steffen Fritz, Dr Linda See, Zoriana Romanchuk, Orysia Yashchun
Affiliations: Biotechnical Faculty, University Of Ljubljana, International Institute for Applied Systems Analysis
Many European grasslands are not managed sustainably, leading to low biodiversity levels, nutrient excess and soil compaction. Recently, there have been many advances in mapping European grassland use from space, resulting in high-resolution pan-European spatial data on grassland distribution and on some aspects of use (e.g. mowing, ploughing, presence of woody features). We demonstrate an approach where we combine such EO data with detailed spatial statistics on ruminant livestock, in-situ data on grazing, and machine learning to map locations in Europe where grasslands are managed through grazing, mowing, or both. This is crucial, as EO data alone cannot provide the full picture of the intensity of grassland use, especially where grasslands are not mown but potentially grazed at high livestock densities. Our data enable a more refined distinction of ruminant livestock density and provide necessary context to EO data, such as where we can improve the sustainability of grassland management by increasing the share of grazing in zero-grazing systems, where we can increase grazing density in areas covered with woody features, and where we need to decrease grazing density, potentially leading to lower ruminant livestock numbers.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Data-Driven Identification of High-Nature Value Grasslands Using Harmonized Landsat Sentinel-2 Time Series Data

Authors: Kim-Cedric Gröschler, Prof. Dr. Natascha Oppelt
Affiliations: Christian-Albrechts-Universität zu Kiel
European grasslands, representing about one fifth of the continent's terrestrial surface, are important ecosystems that support biodiversity, climate regulation, and food production. However, in recent decades, these grasslands have suffered significant decline due to threats including agricultural intensification, abandonment, and climate impacts, exacerbated by natural hazards such as droughts, fires, and invasive species. This trend affects semi-natural grassland in particular, which has historically been the predominant land cover in Europe's agricultural landscape. Consequently, conservationists regard these remnants as being of high nature value (HNV), prioritizing their management and restoration. This is reflected in a complex array of EU policies, such as the Common Agricultural Policy, the Habitats and Birds Directives, and complementary member-state policies. Several key topics emerge from the interaction of these policies, including agricultural practices, biodiversity conservation, energy and afforestation strategies, and the evolving focus on sustainable grassland management. Yet conservation efforts are hindered by limitations in current monitoring practices. Traditionally, field mapping conducted by experts has been the main method contributing to HNV grassland monitoring. While field mapping achieves a high level of detail, it is costly, labor-intensive, and challenging to implement at a large scale, often resulting in irregular monitoring intervals and potentially outdated data that limit the detection of changes across broad landscapes. To address this, we propose a data-driven approach to improve large-scale field mapping efficiency by identifying HNV grasslands using remote sensing. Serving as a representative example of European and national grassland monitoring, we combine the regional habitat map of Schleswig-Holstein, Germany, with Harmonized Landsat Sentinel-2 time series data to train annual XGBoost models for the period 2018 to 2022.
Our models achieved high classification performance, with average F1-scores of 0.89 before and 0.86 after feature selection, distinguishing eight grassland classes that represent gradients of different key functional traits. We put special emphasis on model explainability, analyzing model decision-making patterns using an adapted version of SHapley Additive exPlanations (SHAP) values. The results indicated that start-of-season (March and April), end-of-season (September and November), Red-Edge, and spectral change features significantly impacted model predictions across classes. Utilizing the classification results, we produced annual HNV grassland probability maps and, by aggregating yearly results, derived a robust estimate of the HNV status of grasslands in our study area. Applying our HNV estimate to an independent dataset comprising 2363 km² of grassland plots with unknown HNV status, we identified 84 km² as HNV. The identification of these areas highlights the potential of our methodology to fill critical gaps in existing conservation efforts by offering a scalable, cost-effective a priori information source. We conclude that integrating remote sensing can significantly improve the efficiency and comprehensiveness of large-scale grassland mapping initiatives.
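The average F1-scores reported here are per-class F1 values averaged over the eight grassland classes. As a reference, macro-averaged F1 can be computed directly from predictions; the toy labels and class names below are purely illustrative:

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores,
    so rare classes count as much as common ones."""
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

# Hypothetical three-class example (the study distinguishes eight classes)
y_true = ["wet", "wet", "dry", "dry", "mesic", "mesic"]
y_pred = ["wet", "dry", "dry", "dry", "mesic", "wet"]
print(round(macro_f1(y_true, y_pred, ["wet", "dry", "mesic"]), 3))  # → 0.656
```

In practice one would use a library implementation (e.g. scikit-learn's `f1_score` with `average="macro"`); the explicit loop just makes the averaging visible.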
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: A Self-Attention based-Convolutional Regressor for Alpine Grasslands Leaf Area Index Spatial-Gap Filling with SAR-Optical Data Fusion

Authors: Abhishek Singh, Göhner Caroline Sophie, Laura Stendardi, Mohammad Alasawedah, Abraham Mejia-Aguilar, Michele Claus, Giovanni Cuozzo, Alexander Jacob, Mariapina Castelli
Affiliations: Institute for Earth Observation, Eurac Research, Center for Sensing Solutions, Eurac Research, Universität Münster, Institute for Geoinformatics
Abstract: In this study, we aim to estimate mountain grassland yield losses due to drought events using Sentinel-2 and Sentinel-1 satellite data. Leaf Area Index (LAI) is used as a proxy for yield due to its high correlation with aboveground grassland biomass. LAI can be estimated rapidly and accurately from Sentinel-2 multispectral data [1]. However, the frequent unavailability of cloud-free imagery poses a significant challenge, creating spatial-temporal gaps. To address this limitation, there is growing interest in incorporating Sentinel-1 Synthetic Aperture Radar (SAR) backscatter data to perform spatial gap-filling, enhancing the temporal resolution of grassland LAI derived from Sentinel-2 and enabling continuous monitoring of grassland conditions throughout the growing season [2, 3]. Addressing all-season grassland monitoring challenges in alpine regions using backscatter data requires innovative solutions to integrate domain-specific ancillary information. Alpine grasslands are characterized by complex topography, seasonal variations and heterogeneous agricultural management, making traditional monitoring methods less effective [4]. The use of Sentinel-1 SAR data, with its ability to penetrate cloud cover and capture surface characteristics regardless of weather conditions, offers an approach to overcome these challenges. However, effective utilization of SAR data requires advanced machine learning methods that can exploit the complexity of the backscatter signals in alpine grasslands. Incorporating domain knowledge and multi-source information into machine learning frameworks makes it possible to enhance the accuracy and scalability of grassland monitoring [5, 4]. In this research, we propose a domain-specific self-attention based convolutional network to extract features from SAR data and its derivatives, capturing long-range dependencies in the data, and we study its impact on spatial gap-filling for grassland monitoring.
We considered Sentinel-1 and Sentinel-2 data over the Trentino-South Tyrol region, in north-eastern Italy, for the year 2023. The effectiveness of the proposed method for predicting LAI has been validated by testing it against Sentinel-2-derived LAI and ground-measured LAI. In preliminary results, we achieved an R2 ranging from 0.15 to 0.42, depending on the feature selection, on 40% of the test sets between the predicted LAI and Sentinel-2 LAI, and observed a positive correlation with ground-measured LAI.
Keywords: Data Fusion, Grassland Monitoring, Self-Attention, Deep Learning, Regression, SAR, Remote Sensing
References
[1] M. Castelli, G. Peratoner, L. Pasolli, G. Molisse, A. Dovas, G. Sicher, A. Crespi, M. Rossi, M. H. Alasawedah, E. Soini, et al., "Insuring alpine grasslands against drought-related yield losses using Sentinel-2 satellite data," Remote Sensing, vol. 15, no. 14, p. 3542, 2023.
[2] I. Tsardanidis, A. Koukos, V. Sitokonstantinou, T. Drivas, and C. Kontoes, "Cloud gap-filling with deep learning for improved grassland monitoring," arXiv preprint arXiv:2403.09554, 2024.
[3] R. Cresson, N. Narçon, R. Gaetano, A. Dupuis, Y. Tanguy, S. May, and B. Commandre, "Comparison of convolutional neural networks for cloudy optical images reconstruction from single or multitemporal joint SAR and optical images," arXiv preprint arXiv:2204.00424, 2022.
[4] R. Guo, J. Gao, S. Fu, Y. Xiu, S. Zhang, X. Huang, Q. Feng, and T. Liang, "Estimating aboveground biomass of alpine grassland during the wilting period using in situ hyperspectral, Sentinel-2 and Sentinel-1 data," IEEE Transactions on Geoscience and Remote Sensing, 2023.
[5] E. Chiarito, F. Cigna, G. Cuozzo, G. Fontanelli, A. Mejia Aguilar, S. Paloscia, M. Rossi, E. Santi, D. Tapete, and C. Notarnicola, "Biomass retrieval based on genetic algorithm feature selection and support vector regression in alpine grassland using ground-based hyperspectral and Sentinel-1 SAR data," European Journal of Remote Sensing, vol. 54, no. 1, pp. 209–225, 2021.
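The scaled dot-product self-attention at the core of such a network can be sketched in plain NumPy. This omits the learned query/key/value projections and the convolutional backbone of the proposed model, and the input shapes are illustrative only:

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention without learned
    projections: every position attends to every other position, which
    is what lets the model capture long-range dependencies."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                     # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ x                                # attention-weighted mixture

# Toy "feature sequence": 5 positions (e.g. flattened SAR patches), 4 features each
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
out = self_attention(x)
print(out.shape)  # → (5, 4)
```

In the full model, such an attention block would sit on top of convolutional feature maps extracted from the SAR backscatter and its derivatives.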
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Central Asia’s land systems: mapping, modeling, and exploring future pathways.

Authors: Elizaveta Khazieva, Dr. Ziga Malek, Dr. Peter Verburg
Affiliations: Vrije Universiteit Amsterdam, University of Ljubljana
Central Asia's landscapes are characterized by complex land system dynamics shaped by the interaction between climate variability, socio-economic pressures, and environmental challenges. Understanding land system mechanisms is vital for preserving ecosystem services, supporting livelihoods, and ensuring the region's long-term resilience to growing environmental and economic pressures. This study integrates land system modeling with land degradation metrics to examine how competing demands on land resources will shape future land systems in Central Asia. We developed a regional-scale land system model to simulate future changes under two scenarios: business-as-usual (SSP2) and an alternative scenario accounting for land degradation (SSP2+). By employing regional-scale modeling, the research captures grazing intensity, land use dynamics, and socio-economic factors, providing a better understanding of land system dynamics from 2010 to 2050. Our findings highlight that under the SSP2 scenario, high-intensity grazing systems are projected to expand considerably, driven by increasing demands for livestock and crops. This intensification is likely to amplify pressures on ecosystems, especially in areas near settlements and regions already experiencing degradation. Conversely, the SSP2+ scenario demonstrates a more sustainable trajectory, emphasizing medium-intensity grazing systems while limiting degradation-driven land conversion. Both scenarios show an increase in mixed systems combining grazing and cropland, essential for meeting agricultural demands in the region's semi-arid landscapes. The study emphasizes the importance of integrating land degradation metrics into land use planning to enhance sustainability in Central Asia.
The region’s vulnerability to natural pressures, such as climate variability, combined with human-induced challenges like overgrazing, underscores the importance of adopting strategies that not only reduce degradation but also restore and sustain vital land system services. A comparison of both scenarios highlights the pivotal role of sustainable land-use strategies in managing these ecosystems. While both scenarios predict an increase in mosaic systems, the SSP2+ scenario demonstrates a more sustainable trajectory, by mitigating degradation risks and preserving ecosystem stability. This is particularly important for Central Asia, where land degradation, if not properly addressed, can lead to severe biodiversity loss, reduced ecosystem services, and profound long-term impacts on food security and regional sustainability.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Deep Learning of Natura 2000 Grassland Habitats

Authors: Jonathan Sommer
Affiliations: Technische Universität Berlin
Grasslands play a critical role in maintaining biodiversity, supporting ecosystem services, and mitigating climate change. In recent decades, drastic shifts in agricultural practices, such as agricultural intensification and land use conversion, have put immense pressure on European grassland habitats. Within the EU, approximately two thirds of grassland habitats are in a "poor" or "bad" ecological state. In parts of Germany, the plant species richness of grassland habitats decreased by 30-50 % over a 50-year period. Simultaneously, grassland-dependent animal species have experienced widespread population declines. As a result, the restoration and protection of grassland have been made a priority of conservation efforts. Starting with the EU Habitats Directive in 1992, the EU has introduced comprehensive legislation over decades, with the Nature Restoration Law being only the latest on the list. As national legislatures adopt EU law and demand measures to that end, effective means for quantifying their potential impact and monitoring their efficacy are needed. Driven by rapid developments in sensor technology, high-performance computing, and machine learning, satellite monitoring offers an effective low-cost means to support grassland conservation. However, the comprehensive and accurate mapping of grassland habitats remains a challenge due to, among other factors, spatial heterogeneity, region-specific ecological variability, and seasonality. The objective of this research is to develop a robust framework for mapping Natura 2000 grassland habitats (NGH) across Germany using combinations of convolutional (CNN) and recurrent (RNN) neural network architectures applied to multitemporal, multispectral satellite data. This work aims to investigate: 1. to what extent satellite data and deep learning are capable of identifying and distinguishing NGHs, 2. the factors influencing model performance, and 3. the most influential variables for model prediction. We used an extensive ground truth dataset of roughly 90,000 plots covering 10 different NGHs, stretching across different management regimes as well as gradients of humidity and pH, and covering the six terrestrial natural regions and three biogeographic regions in Germany. As this data was supplied by 10 different federal states, we carried out an extensive pre-processing and harmonization procedure. In addition, we tested several strategies to further reduce the inherent biases and imbalances of the datasets. We dropped several NGHs during data preparation due to data scarcity or poor spatial distribution. To address potential edge effects given the resolution of Sentinel-2, we filtered out small and eccentric shapes. With the remaining habitats, we created a stratified random sample. We developed an efficient satellite data processing scheme to handle a large spatial scale without creating spatially irrelevant data. We trained our model with satellite data derived from harmonized multi-temporal, multispectral Sentinel-2 and Landsat-8 data, which we later restricted to Sentinel-2 data only. The image data consisted of the March-to-October monthly means of seven commonly used vegetation indices, which we created using FORCE. For label data, we rasterized all polygon data to match the 10 × 10 m Sentinel-2 grid. We created image and label data tiles of 16 × 16 pixels for each sample. For habitat classification, we used U-Net, a CNN architecture optimized for semantic segmentation, and ConvLSTM, which is based on RNNs built for sequential learning but extended with features of CNNs. U-Net consists of two paths: the encoder contains convolutional layers that perform pooling operations to downsample each input tile and extract relevant features.
Afterwards, the expanding path (decoder) performs upsampling operations to reconstruct the input dimensions, using skip connections that concatenate arrays from the encoder and the decoder. ConvLSTM combines the strengths of CNNs with the ability to process sequential input. Due to its short-term memory gating structure, the model is able to selectively adjust the information flow, "forgetting" irrelevant information and "memorizing" relevant information long-term. Both models show promising trial performances. We encounter limitations for mapping small-scale habitat types, while larger and more abundant habitats can be classified robustly. Early results also hint at the many pitfalls that NGH classification based on multitemporal satellite data holds. In particular, we find that the temporal dimension is difficult to represent appropriately: while our U-Net architecture seems overall superior to the ConvLSTM in terms of spatial mapping accuracy, it cannot specifically account for the temporal structure of our input data. At the same time, the ConvLSTM is able to capture the seasonal trajectories but falls short at precisely locating habitats. Nevertheless, our preliminary results demonstrate the many possibilities artificial intelligence and high-performance computing provide for grassland conservation efforts. Our efficient workflow is transferable to smaller and larger spatial scales, able to digest various image inputs, and applicable to countless classification objects.
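The preparation of 16 × 16 image and label tiles described above can be sketched as follows; the array shapes (8 monthly composites, 7 vegetation indices) are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def make_tiles(image, labels, tile=16):
    """Cut co-registered image (T, C, H, W) and label (H, W) rasters
    into non-overlapping tile x tile training samples."""
    t, c, h, w = image.shape
    samples = []
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            samples.append((image[:, :, i:i + tile, j:j + tile],
                            labels[i:i + tile, j:j + tile]))
    return samples

# Illustrative shapes: 8 monthly composites, 7 vegetation indices, 64 x 64 pixels
image = np.zeros((8, 7, 64, 64))
labels = np.zeros((64, 64), dtype=np.int32)
tiles = make_tiles(image, labels)
print(len(tiles))  # → 16
```

A U-Net would consume each `(T * C, 16, 16)` tile as stacked channels, while a ConvLSTM would keep the leading time axis and process the `T` monthly composites sequentially, which is exactly the architectural trade-off discussed above.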
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Evaluating the Impact of Drought Events on Grassland and Forest Ecosystems in Northern Italian Mountains

Authors: Dr. Birhane G. Tesfamariam, Dr. Maurizio Sarti, Olga Gavrichkova
Affiliations: Research Institute on Terrestrial Ecosystems, National Research Council (IRET-CNR)
Understanding how vegetation ecosystems respond to climate variability has become a growing research priority. In European mountains, a prominent land-use change trend involves the abandonment of traditional pastoral activities, leading to the rewilding of grasslands and their conversion to forests. This study examines the response of grasslands, and of forests that have developed following grassland abandonment, to climate variability, particularly meteorological drought events, in a mountain ecosystem in northern Italy. The study was carried out using time series of climate data and satellite observations. The European Centre for Medium-Range Weather Forecasts ECMWF Reanalysis v5 (ERA5) gridded climate data (2.2 km spatial resolution) were used to extract monthly precipitation, average monthly minimum temperature (Tmin), and average monthly maximum temperature (Tmax) from 1981 to 2023, which were used to assess meteorological drought events and climate variability trends. The Standardized Precipitation Evapotranspiration Index (SPEI) was employed for drought assessment, and the Mann-Kendall (MK) trend test and Sen's slope estimator were applied to examine trends in monthly precipitation and minimum and maximum temperatures. Similarly, to characterize the responses of the forest and grassland ecosystems to climate variability, the Moderate Resolution Imaging Spectroradiometer (MODIS) 16-day normalized difference vegetation index (NDVI) time series (2001-2023) was used as input data. The SPEI was calculated at 1-, 3-, 4-, 6-, and 12-month time scales to assess the vegetation response at different time lags, which was evaluated based on standardized NDVI anomaly metrics. The trend analysis, assessment of drought, and associated vegetation response focused on May to August (MJJA), the growing period, which experiences relatively higher temperatures and a shortage of moisture.
The results revealed a significant increasing trend (α < 0.05) in Tmin and Tmax for JJA over the 43 years from 1981 to 2023. The Sen's slope coefficients of these trends range from 0.0438 to 0.0497 °C/year. On the other hand, the study area experienced predominantly moderate and short-duration drought events during MJJA from 2001 to 2023. Moreover, a weak correlation (low temporal alignment) was found between the occurrence of drought events and the timing of vegetation stress, although this varied slightly across SPEI time scales. Vegetation stress was largely detected when no drought events occurred, while no stress was found during several drought events. Thus, drought does not appear to be the primary cause of the vegetation stress observed in the study area, for either forests or grasslands. However, the increasing trends of Tmin and Tmax, combined with drought events, could have an increased future impact on vegetation productivity.
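The Mann-Kendall test and Sen's slope estimator used here are both built from pairwise comparisons of the time series; a minimal sketch of the two statistics (the annual Tmax series below is synthetic, and the significance test on the S statistic is omitted) might look like:

```python
import itertools
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: the sum of sign(x[j] - x[i]) over all
    pairs with j > i. A positive S indicates an increasing monotonic trend."""
    return sum(np.sign(x[j] - x[i])
               for i, j in itertools.combinations(range(len(x)), 2))

def sens_slope(t, x):
    """Sen's slope: the median of all pairwise slopes, robust to outliers."""
    slopes = [(x[j] - x[i]) / (t[j] - t[i])
              for i, j in itertools.combinations(range(len(x)), 2)]
    return float(np.median(slopes))

# Synthetic annual Tmax series with a 0.05 °C/year warming trend
years = np.arange(2001, 2011)
tmax = 20.0 + 0.05 * (years - 2001)
print(mann_kendall_s(tmax), round(sens_slope(years, tmax), 3))
```

In practice one would also compute the variance of S to obtain a p-value (as the α < 0.05 threshold above implies); dedicated packages such as `pyMannKendall` bundle both steps.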
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Assessing the Effects of Mowing, Burning, and Grazing on Semi-Natural Grasslands Using Landsat-Derived NDVI Time Series Analysis

Authors: Christoph Raab, Friederike Riesch
Affiliations: Institute of Geography, University of Hildesheim, Germany, Grassland Science, University of Goettingen, Germany, Centre of Biodiversity and Sustainable Land Use, University of Goettingen, Germany
Semi-natural grasslands are biodiversity-rich ecosystems that require active management to maintain their ecological integrity. In Central Europe, traditional practices such as mowing and grazing are fundamental for preventing succession in these landscapes. This study assesses the impact of different management strategies on grassland by analysing vegetation cover trends using the Normalized Difference Vegetation Index (NDVI) derived from Landsat satellite imagery time series. We monitored 30 plots of 15 × 15 m, subjected to varying management approaches (no treatment, grazing, burning with or without grazing, mowing with or without grazing), in an area with a high abundance of free-ranging red deer (Cervus elaphus) as the main grazing animal. The plots were distributed over five sites, with one plot per treatment at each site. Plots were burnt in early spring and mown in July. NDVI trends were analysed for the period after implementation of the experimental management (2015–2024). The annual maximum NDVI (NDVImax) values between April 1 and October 31 were used to estimate vegetation cover trends for each plot. Utilizing the R package LandsatTS, we applied the Mann-Kendall test to detect the presence or absence of a monotonic trend and Sen's slope estimator to derive the magnitude of the trend (α = 0.10). No significant overall trends were observed. However, distinct patterns in maximum NDVI change were evident for the 2015–2024 period based on the applied treatments. Differences emerged between plots accessible for grazing and those that were fenced. Plots with the combined treatment of mowing and grazing, which represents the management prior to the experiment, showed stable conditions (0.001, SD = 0.04), whereas plots subjected to grazing only showed a slight decrease in median NDVImax (-0.004, SD = 0.004).
The slight positive trend in median NDVImax for the combined mowing and grazing treatment may be attributed to the stimulating effects of grazing compared to mowing alone. For plots under grazing only (median NDVImax = 0.033, SD = 0.03) and those without any treatment (median NDVImax = 0.004, SD = 0.02), an increase in NDVImax was observed during the study period. These relatively higher values, compared to the mowing treatment, could be related to a lower proportion of biomass removed. Additionally, the higher values in the grazing treatment may reflect reduced accumulation of dead plant material. For the burning-only treatment, median NDVImax values of 0.028 (SD = 0.01) were observed, potentially linked to the fertilization effect of burning and the absence of biomass removal associated with either grazing or mowing. In contrast, the burning with grazing treatment showed slightly negative median maximum NDVI values (-0.004, SD = 0.02). Our study shows that Landsat NDVI time series can effectively monitor grassland ecosystem management activities. This approach provides a cost-effective and scalable method for informing grassland conservation efforts, especially in regions where field data collection is challenging. Future studies should aim to disentangle trends across different phenological phases, such as the start, peak, and end of the growing season, to better evaluate the impact of treatments at various growth stages. Additionally, systematic differences between the plots should be addressed by analysing historical changes in NDVImax prior to the establishment of the experiment.
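The annual NDVImax metric underlying these trends is simply the per-year maximum of observations falling inside the April 1 to October 31 window; a minimal sketch (dates and NDVI values are illustrative, not from the study plots) could be:

```python
from collections import defaultdict
from datetime import date

def annual_ndvi_max(observations, start_month=4, end_month=10):
    """Annual maximum NDVI per year, restricted to the April-October
    window, from a list of (date, ndvi) observations for one plot."""
    by_year = defaultdict(list)
    for d, ndvi in observations:
        if start_month <= d.month <= end_month:
            by_year[d.year].append(ndvi)
    return {year: max(vals) for year, vals in sorted(by_year.items())}

obs = [(date(2015, 3, 20), 0.41),   # outside the window, ignored
       (date(2015, 6, 15), 0.78),
       (date(2015, 9, 2), 0.65),
       (date(2016, 7, 1), 0.80)]
print(annual_ndvi_max(obs))  # → {2015: 0.78, 2016: 0.8}
```

The resulting one-value-per-year series is then what the Mann-Kendall test and Sen's slope estimator mentioned above are applied to.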

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Two Decades of Global Grassland Productivity: High-resolution GPP and NPP via Light Use Efficiency Model

#stac

Authors: Mustafa Serkan Isik, Leandro Parente, Davide Consoli, Lindsey Sloat, Vinicius Mesquita, Laerte Guimaraes Ferreira, Radost Stanimirova, Nathália Teles, Tomislav Hengl
Affiliations: Opengeohub Foundation, Land & Carbon Lab, World Resources Institute, Remote Sensing and GIS Laboratory (LAPIG/UFG)
Grassland ecosystems play a crucial role in absorbing carbon dioxide from the atmosphere and helping to reduce the impacts of climate change by sequestering carbon in the soil. They can act as either a source or a sink within the carbon cycle, depending on factors such as environmental constraints, climate variability, and land management. Given the importance of grasslands to the global carbon budget, accurately measuring and understanding Gross Primary Productivity (GPP) and Net Primary Productivity (NPP) in these ecosystems is essential. However, the spatial resolution and coverage of available productivity maps are often limited, reducing the possibility of capturing the spatial variability of grasslands and other ecosystems. In this paper, we present a high-resolution mapping framework for estimating GPP and NPP in grasslands at 30 m spatial resolution globally between 2000 and 2022. The GPP values are derived through a Light Use Efficiency (LUE) model approach, using 30-m Landsat reconstructed images combined with 1-km MOD11A1 temperature data and 1-degree CERES Photosynthetically Active Radiation (PAR). We first implemented the LUE model by taking the biome-specific productivity factor (maximum LUE parameter) as a global constant, producing a productivity map that does not require a specific land cover map as input and enables data users to calibrate GPP values according to specific biomes/regions of interest. Then, we derived GPP maps for the global grassland ecosystems by considering maps produced by the Global Pasture Watch research consortium and calibrating the GPP values based on the maximum LUE factor of 0.86 gC m⁻² d⁻¹ MJ⁻¹. Nearly 500 eddy covariance flux towers were used for validating the GPP estimates, resulting in R² between 0.48 and 0.71 and RMSE below 2.3 gC m⁻² d⁻¹ considering all land cover classes. 
In order to estimate the annual NPP, we computed the yearly maintenance respiration (MR) of grasslands using the MOD17 Biome Property Look-Up Table. Daily MR estimates are accumulated to a yearly total, which is then subtracted from GPP to produce annual NPP maps. The final time-series of GPP maps (uncalibrated and grassland) are available as bimonthly and annual periods in Cloud-Optimized GeoTIFF (23 TB in size) as open data (CC-BY license). Users can access the maps via the SpatioTemporal Asset Catalog (http://stac.openlandmap.org) and Google Earth Engine. The NPP product is still experimental and under development. To our knowledge, these are the first global GPP time-series maps with a spatial resolution of 30 m covering a period of 23 years.
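As a rough illustration of the LUE approach described above, the sketch below computes daily GPP as the product of a maximum light-use efficiency, a temperature scalar, fAPAR, and PAR. The linear TMIN ramp endpoints are illustrative placeholders loosely modeled on MOD17-style biome parameters, not the environmental scalars actually used in the mapping framework.

```python
def gpp_lue(par_mj_m2_day, fapar, tmin_c, lue_max=0.86, tmin_lo=-8.0, tmin_hi=11.4):
    """Daily GPP (gC m^-2 d^-1) from a light-use-efficiency model:
    GPP = LUE_max * Tscalar * fAPAR * PAR.

    The TMIN ramp endpoints (tmin_lo, tmin_hi) are illustrative placeholders;
    a real implementation would take them from a biome property look-up table.
    """
    # temperature scalar: 0 below tmin_lo, 1 above tmin_hi, linear in between
    t_scalar = min(max((tmin_c - tmin_lo) / (tmin_hi - tmin_lo), 0.0), 1.0)
    return lue_max * t_scalar * fapar * par_mj_m2_day

# warm day, half of the incoming radiation absorbed: no temperature limitation
gpp_warm = gpp_lue(par_mj_m2_day=10.0, fapar=0.5, tmin_c=11.4)
# frozen day: photosynthesis fully shut down by the temperature scalar
gpp_frozen = gpp_lue(par_mj_m2_day=10.0, fapar=0.5, tmin_c=-10.0)
```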

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Integration of novel Analysis-Ready PlanetScope (ARPS) imagery to improve performance of wide-scale area detection of in-field mowing events

Authors: Miguel Castro, Greg Robson, Anze Zupanc, Katja Bajec, Davor Deranja, Domagoj Korais, Matic Pecovnik, Matic Lubej, Jovan Višnjić, Tamara Suligoj, Klemen Lovenjak, Ziga Luksic, Miha Filipic
Affiliations: Planet Labs
Regulations around managed grasslands and pasture form an important part of several agricultural and biodiversity compliance programs as effective management of meadows is vital for both the environment and rural economies. In addition to providing feed for meat and dairy production, meadows and grasslands are also important habitats for birds and insects and so maintaining cover in a field is also critical for biodiversity. Previous studies have demonstrated the link between mowing frequency and plant diversity and resilience within managed fields. National rural payment programs within EU countries therefore incentivise particular mowing activity, such as not mowing before a particular date or reducing the number of mowing events within a given period. It is therefore advantageous for paying authorities and biodiversity monitoring schemes to have a system in place which can monitor a large number of fields at scale, and, within those, automatically detect the frequency and timing of mowing events. This kind of large-scale area monitoring infrastructure in combination with publicly available EO imagery such as that from Sentinel-2 can be used for this application. However, the low cadence of this imagery can lead to missed mowing events during prolonged periods of cloud cover. The relatively coarse spatial resolution of such a dataset also makes it challenging to accurately detect mowing events in smaller fields. Taken together, these issues restrict the effectiveness of such monitoring efforts. In the context of agricultural monitoring, Planet’s Area Monitoring System (AMS) is a scalable infrastructure designed to process satellite imagery for precise and automated analysis. For a given agricultural field, it operates by generating signals, which represent time-series derived both from satellite imagery and spectral indices. 
Once produced, AMS computes markers, indicators that identify specific agricultural events by looking for specific characteristics within the time-series data. Amongst these, the mowing marker is designed to automatically detect and flag mowing events within a field. The algorithm looks for significant drops in NDVI values, which occur when vegetation is cut during a mowing event. It then tracks these drops and subsequent recoveries in the NDVI time-series of an area of interest. By identifying pairs of local maxima (indicating growth) and minima (indicating mowing), the marker can pinpoint the timing of mowing events. Analysis-Ready PlanetScope (ARPS) is a harmonized satellite imagery product derived from PlanetScope sensors and enhanced with data from trusted third-party sources, including Landsat, Sentinel-2, MODIS, and VIIRS. Designed for time-series analysis and machine learning applications, ARPS provides orthorectified 3-meter resolution near-daily surface reflectance imagery corrected for terrain distortions. Radiances from PlanetScope sensors are converted to TOA reflectance and transformed into surface reflectance during processing. This ensures radiometric consistency with FORCE-processed Sentinel-2 and Landsat data. ARPS spectral bands are equivalent to Sentinel-2’s blue (B2), green (B3), red (B4), and narrow near-infrared (B8a) bands, with reflectance data normalized to nadir view through BRDF adjustments. Enhanced geometric performance delivers a positional accuracy of less than 4 meters. Harmonized reflectance values, a high spatial resolution and a high temporal cadence make ARPS particularly suited for deriving accurate and timely agricultural insights from its time-series. Here, we incorporate this novel ARPS dataset into Planet’s existing mowing marker within the AMS system to demonstrate an improvement in the accuracy and timeliness of detected mowing events on a per-field basis. 
We also evaluate the influence of utilizing this higher-resolution dataset on the effectiveness of detecting mowing events in smaller fields. Field data gathered from national payment agencies will be used to validate these outcomes. The outcomes of this research can be used to inform the refinement and potential strengthening of existing compliance measures, leveraging the improved ARPS-integrated mowing marker's enhanced capabilities.
References:
https://planet.widen.net/s/8czllfmmvp/planet-userdocumentation-analysisreadyplanetscope (last accessed: November 27, 2024)
https://medium.com/sentinel-hub/area-monitoring-concept-effc2c262583 (last accessed: November 27, 2024)
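The mowing-marker logic described above, pairing local NDVI maxima with the subsequent minima, can be illustrated with a simplified sketch. The drop threshold and peak/trough scan here are invented for illustration and are not Planet's operational marker implementation.

```python
def detect_mowing(ndvi, dates, min_drop=0.15):
    """Flag candidate mowing events as large NDVI drops from a local maximum
    to the following local minimum.

    Simplified sketch of the marker idea only; the threshold value and the
    peak/trough logic are illustrative, not the operational algorithm.
    """
    events = []
    i, n = 0, len(ndvi)
    while i < n - 1:
        while i < n - 1 and ndvi[i + 1] >= ndvi[i]:   # climb to a local maximum
            i += 1
        peak = i
        while i < n - 1 and ndvi[i + 1] <= ndvi[i]:   # descend to the next minimum
            i += 1
        trough = i
        drop = ndvi[peak] - ndvi[trough]
        if drop >= min_drop:
            events.append((dates[peak], dates[trough], round(drop, 3)))
    return events

# invented NDVI time series over one season, with day-of-year time stamps
days = [100, 110, 120, 130, 140, 150, 160, 170, 180]
ndvi = [0.40, 0.60, 0.75, 0.50, 0.35, 0.50, 0.70, 0.72, 0.40]
events = detect_mowing(ndvi, days)
```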

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Predicting grassland land use intensity using HR-VPP data, a case study from Skåne, South Sweden

Authors: Anna Terskaia, Lars Eklundh, Dr. Peter Olsson, Prof. Johan Ekroos
Affiliations: Department of Physical Geography and Ecosystem Science, Lund University, Center for Environmental and Climate Science, Lund University, Department of Agricultural Sciences, Helsinki Institute of Sustainability Science
Grasslands play a crucial role in agricultural production and are ecosystems of high importance for maintaining biodiversity. Today’s challenge for grassland management is to fulfil the increasing demand for forage production under climate change impacts while supporting biodiversity conservation requirements adopted in many countries. In Sweden’s Skåne region, one of the country’s most intensively farmed areas, grasslands make up a substantial portion of the agricultural landscape. Improper management of grasslands increases the risk of reduced forage availability for livestock and of degraded habitat quality for wildlife. Therefore, farmers and policymakers aim to enhance grassland sustainability by adopting improved management practices. This requires accurate and relevant information to guide decisions on management intensity that uphold a balance between the productivity and biodiversity of grasslands. Novel high-resolution satellite data with frequent revisits provide the opportunity to monitor grassland land use intensity. Existing methods utilize high-resolution time-series data from satellites (Sentinel-2, RapidEye, Landsat-8) and are based on the estimation of mowing events, which serve as indicators of grassland use intensity: drops in the annual vegetation index curves indicate lower green biomass following mowing. Studies on grassland management intensity are often limited by the availability of reference data at the parcel level. We propose a method for assessing grassland use intensity for Skåne grasslands based on Copernicus High-Resolution Vegetation Phenology and Productivity (HR-VPP) data and machine learning modelling. The HR-VPP is based on time series of the Plant Phenology Index (PPI) derived from Sentinel-2 data and consists of Vegetation Phenology Parameters (VPP), including mean amplitude, length of the season, and seasonal productivity. 
We have analyzed Skåne grasslands of three categories of intensity, according to the agricultural statistical database: conventional leys, organic leys, and semi-natural grasslands. In Skåne, management of these three grassland categories follows specific regulations: for example, semi-natural grasslands have restrictions on mowing frequency, while non-organic fertilization is prohibited on organic leys. For each category, we calculated the mean values of HR-VPP parameters and analyzed their dynamics across the years 2017-2022. Statistical tests revealed significant differences in mean values across categories. Conventional leys had the highest amplitude values, while organic leys exhibited longer growing seasons, resulting in higher seasonal and total productivity. Semi-natural grasslands consistently showed lower values for all investigated HR-VPP parameters. We are developing a machine learning model to predict three levels of grassland use intensity. Initial results indicate that HR-VPP parameters contribute to model performance, though prediction accuracy was limited to 40–50%. Adding spectral information, such as various Sentinel-2 vegetation indices and soil organic carbon data, can improve the accuracy to about 80%. Notably, the Sentinel-2 Red-Edge Position vegetation index, previously reported as a good indicator of fertilization, is highly influential in predictions. Model results depend strongly on the location of the training samples; specifically, models trained on small areas achieve better prediction accuracy. Our method classifies grassland intensity levels using remote sensing data and limited reference samples, making it particularly useful in regions where field-level agricultural data, e.g. fertilization levels, the frequency of mowing events, and harvest rates, are scarce. These intensity estimates can complement existing agricultural databases or serve as an independent validation. 
The resulting maps of modelled grassland intensity can guide regional policies to support organic agriculture. By identifying low-intensity grasslands, including semi-natural areas, the maps highlight biodiversity hotspots and critical habitats for rare species. This information is essential for studying the relationship between management intensity and biodiversity. With the open access and annual availability of HR-VPP data, our approach has the potential to offer a scalable solution for monitoring grassland use intensity and analyzing its dynamics in response to climate change.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Grass as a Sensor: Advancing Insurance Design with Remote Sensing

Authors: Dr. Quirina Merz, Michael Rubin, Dr. Hans
Affiliations: Swiss Hail Insurance
Grasslands play a crucial role in Switzerland's agricultural landscape, serving as vital resources for livestock feed and supporting significant biodiversity. However, climate change poses a substantial threat to these ecosystems. The impacts of shifting climate patterns, including altered precipitation regimes, temperature extremes, and increased drought frequency, threaten the productivity and ecological stability of grasslands. Given their importance as a source of forage, sudden decreases in grassland productivity can threaten the financial stability of a farm. Grassland insurance helps to cover these losses and can act as a safety net for farms to mitigate the financial risks associated with a lack of fodder for livestock. Traditional indemnity-based insurance schemes typically require experts to assess the actual damage. Recently, weather index insurance has gained popularity. Typically, a weather index is designed to represent the growing conditions using, e.g., a water balance. Once the growing conditions are deemed to be adverse, a payout is triggered. The adequate, risk-appropriate parametrization for insurance is not trivial and needs to consider many factors, e.g., the severity and the return period of adverse weather leading to grassland productivity losses. Long-term weather data often serve as a foundation to establish these weather indices. However, insufficient or wrong parametrization of a weather index can lead to increased basis risk, meaning the index might not be triggered even though a significant decrease in grassland production occurred. This situation undermines the effectiveness of the safety net, creating challenges for farmers, insurers, and government organizations alike. It is therefore crucial that the chosen index represents the growing conditions adequately. To represent growing conditions in our indices, we utilize gridded weather data widely available across Switzerland. 
The data are sufficient to calculate a simple water balance building upon the Budyko hydrological framework and related work, offering various parametrization options for different local circumstances. However, actual data on soil water storage remain scarce, making it challenging to assess the reliability of these models. Yet, soil type and subsequently soil water storage are crucial factors influencing the drought resistance of grassland and should therefore be considered when designing an index that should reflect optimal growing conditions for grassland productivity. To address this gap, we propose the innovative approach of using “grass as a sensor”. Using remote sensing tools, we monitor grassland growth by assessing changes in grassland productivity. For this case, data were collected from over 100 grass fields distributed across the Swiss Plateau for 2 years. For each field, information was recorded on the type of grassland (pasture or meadow), certain management aspects (e.g. organic) and its intensity of use, and for a few fields, also recent information about the soil was collected. The water balance for each field was calculated using the Budyko framework with different parameterizations, resulting in multiple potential outcomes per field. To find the outcome that resembles the actual growing conditions best, cropSAR data was obtained. The cropSAR (by VITO) is based on a data fusion of Sentinel-2 fAPAR (optical) and Sentinel-1 backscatter (SAR). Whereas the optical sensors reveal biophysical processes of vegetation, SAR imagery focuses on structural components and moisture content of vegetation as well as the underlying surface. This combination offers a robust and coherent time series even during cloudy conditions. A Granger causality test was performed to identify the best options of the various water balance parameterizations for each field using the cropSAR as a measure for grassland productivity. 
By analyzing the parameters of the best-matching water balance and considering the additional information in the dataset, such as grassland type and use, and soil conditions (where available), we are able to find the optimal parametrization for each field and to draw conclusions on the soil water storage available to the grassland, which is crucial for designing a robust grassland insurance index. This research offers multiple benefits. Firstly, it provides a cost-effective and scalable alternative to traditional soil monitoring methods. Secondly, it demonstrates how “plants as sensors” combined with water balance modeling can yield valuable insights into soil water storage. Finally, this work showcases a practical application of remote sensing technology in the insurance industry, enhancing both agricultural sustainability and risk management.
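As an indication of the kind of water-balance closure the Budyko framework provides, the classical Budyko (1974) curve relates the long-term evaporative fraction E/P to the aridity index PET/P. This is one standard closed form; the study's parameterized variants for different local circumstances are not reproduced here.

```python
import math

def budyko_evaporative_fraction(aridity_index):
    """Budyko (1974) curve: long-term E/P as a function of PET/P.

    One classical closed form; the parameterized variants explored in the
    study for different soils and sites are not reproduced here.
    """
    phi = aridity_index
    return math.sqrt(phi * (1.0 - math.exp(-phi)) * math.tanh(1.0 / phi))

# energy-limited site (PET << P): nearly all PET is realized, so E/P ~ PET/P
wet = budyko_evaporative_fraction(0.1)
# water-limited site (PET >> P): nearly all precipitation evaporates, E/P -> 1
dry = budyko_evaporative_fraction(10.0)
```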

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone D-E-F)

Poster: Learning from Image-Level Labels: Where Unsupervised and Supervised AI Algorithms Come Together

Authors: Dr Agata M. Wijata, Peter Naylor, Jakub Nalepa
Affiliations: ESA Φ-lab, Silesian University of Technology
Technological advances in hyperspectral image (HSI) analysis enable exciting possibilities for Earth observation (EO) applications. Extracting insights from these high-dimensional data may be supported by various machine learning pipelines. Since real-world data often lack labels and obtaining ground truth (GT) is costly, techniques like transfer learning, semi- and self-supervised learning, and HSI data augmentation have been developed. In downstream tasks like soil parameter estimation, GT data are available only at the image (or plot) level. Consequently, GT refers to diverse and noisy pixels, and not all contribute equally to lab-validated values. While we have laboratory-derived values for four soil parameters per plot (i.e., per image in the HYPERVIEW dataset, used in the HYPERVIEW Challenge organized jointly by KP Labs and the European Space Agency), we lack precise information about how these parameters vary spatially within the plot. This limitation arises because soil samples from different areas of each plot are mixed together prior to laboratory analysis, making it impossible to know the exact distribution of the parameters. In our study, to further investigate these spatial variations, we introduced a superpixels-powered processing chain as a means of analyzing their consistency and distribution within the entire crop (based on their spectral characteristics). Our aim was to examine the consistency of areas represented in each image and to assess the feasibility of identifying and selecting clusters of superpixels across all images (e.g., within the training or test subsets of HYPERVIEW). This task was approached using unsupervised machine learning techniques, specifically clustering. 
The use of superpixels might enable a more detailed view of the heterogeneity within the plot, potentially enhancing our ability to estimate soil parameters with greater spatial accuracy and to determine the parts of the parcel that are more “consistent” and (hopefully) less noisy; such subparts might be used for supervised training of machine learning models within the next steps of the processing chain. Thus, we will show how using unsupervised clustering may help us deal with difficult EO data, for which only image-level ground-truth information is available, as capturing pixel-level high-quality ground truth is too costly and practically infeasible. We believe that combining such fundamentally different approaches (supervised and unsupervised) may lead to higher-quality models that generalize better in the wild.
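The unsupervised clustering step described above can be illustrated with a minimal k-means on superpixel mean "spectra". This toy version (deterministic first-k initialization, invented two-dimensional spectra) only sketches the idea; a real pipeline would cluster statistics of, e.g., SLIC superpixels with a library implementation.

```python
def kmeans(points, k, iters=20):
    """Plain k-means on small 'spectra' given as lists of floats.

    Deterministic first-k initialization keeps this toy example reproducible;
    a real pipeline would use a library implementation (e.g. scikit-learn)
    on superpixel statistics instead.
    """
    centers = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        for idx, pt in enumerate(points):
            dists = [sum((a - b) ** 2 for a, b in zip(pt, centers[c]))
                     for c in range(k)]
            labels[idx] = dists.index(min(dists))
        # recompute each center as the mean of its members
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels, centers

# two invented, well-separated groups of superpixel mean "spectra"
spectra = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
           [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
labels, centers = kmeans(spectra, k=2)
```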

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: C.02.03 - POSTER - SMOS – 15 years in space

SMOS pioneered L-band radiometry from space. Launched on 2 November 2009, it provides the benchmark dataset for current and future L-band radiometer missions. Initially foreseen to provide information on ocean salinity and terrestrial soil moisture, today the mission contributes to various Earth System domains. Beyond soil moisture and sea surface salinity, SMOS data are used to estimate sea ice thickness, terrestrial freeze/thaw cycles, extreme ocean winds, biomass, ice sheet temperature, solar flux and more. Many of these applications have evolved into operational services in recent years and are used by operational service providers with a direct societal benefit. This session will take stock of the achievements and lessons learned from SMOS and look ahead to its future.
We encourage submissions related to the status of the SMOS mission, cal/val, product status and evolution, the application of SMOS data in the various application domains, novel exploitation ideas, and future L-band concepts building on the SMOS legacy.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: 15 years of SMOS ESAC Operations: lessons learnt

Authors: José Manuel Blanco, Carmen Gamella, Cristina Mosquera, Isaac Guerrero, Daniel Reyes, Jorge Fauste, Antonio De la Fuente
Affiliations: Isdefe, European Space Agency
The 15-year journey of SMOS operations at the European Space Astronomy Centre (ESAC) has demonstrated the dynamic and evolving nature of an Earth Explorer mission. Unlike static operational frameworks, SMOS ESAC operations have continuously adjusted to address the specific challenges and requirements of this Earth observation mission. These operations have culminated in a highly efficient and seamless workflow, reflecting the distinctive characteristics that make SMOS a one-of-a-kind mission in its field. SMOS is the only Earth observation mission operated jointly, and very successfully, by CNES and ESA. A key aspect of SMOS ESAC operations is the integration of its Flight Operations Segment (FOS), a function nominally based at ESOC, with the Data Payload Ground Segment (DPGS); for SMOS, both are located at ESAC in Spain. This co-location improves collaboration between teams, streamlining communication and enabling faster decision-making. The proximity of operational teams allows for seamless exchange of telemetry and telecommand information during S- and X-band passes, significantly reducing response times to potential anomalies. This agile operational model exemplifies how local integration of key teams can enhance mission performance. Additionally, while X-band data is primarily received at Svalbard, ESAC also hosts a reception antenna, managed by the DPGS operations team, thus providing a complete payload data ground segment. This setup not only enhances operational efficiency but also consolidates a more comprehensive technical understanding of the mission across the entire Ops team. Over its 15-year mission, the Soil Moisture and Ocean Salinity (SMOS) satellite has witnessed a continuous evolution in operational hardware infrastructure to enhance efficiency. Most operational servers have been virtualized, excluding Near Real-Time servers due to their high CPU and memory demands. 
Additionally, increasing cybersecurity challenges have led to the implementation of many security improvement actions as part of the EOP-GE Security Risk Treatment Plan for flying missions. This effort has included a complete upgrade of the DMZ infrastructure to meet the stringent security requirements of the European Space Agency, reflecting the growing importance of cybersecurity in satellite operations. The reprocessing activities of the ESAC SMOS Ops team include both campaigns for the calculation of metrics and full reprocessing campaigns, which are carried out whenever there is a significant evolution in the processors' software. The infrastructure and software needed to carry out these reprocessing campaigns have evolved to handle the growing volume of data, driven by the need to reprocess an increasing number of years as the mission progresses, allocating millions of CPU hours as well as several terabytes of RAM. The integration of the reprocessing function at ESAC, operated by the SMOS DPGS Ops Team, has proved to have a number of advantages: autonomous, low-friction operations; maximization of internal know-how; a cost-effective solution (the platform is used almost 100% of the time); and excellent collaboration with other teams, with very positive feedback from mission users. The monitoring of systems within the SMOS ground segment has also undergone significant evolution over the mission's lifespan. What began as a rudimentary and fragmented monitoring setup has been transformed into a modern, centralized system that integrates advanced monitoring and alerting capabilities. This new system provides real-time insights into performance and resource usage, enabling proactive issue detection and rapid response to potential disruptions. The centralized approach ensures better coordination across all components, enhancing reliability and operational efficiency as the mission's complexity and demands continue to grow. 
This contribution reflects on the key lessons learned from 15 years of SMOS operations at ESAC. It underscores the critical role of flexibility, continuous improvement, and effective teamwork in ensuring the success of a complex, long-term mission. The experiences shared here provide useful guidance for future Earth observation missions, showing that operational ground segments can remain adaptable and responsive to evolving scientific and technical challenges.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: Hydrological drought monitoring in the Ebro basin: Standardized Soil Moisture Index

Authors: Guillem Sánchez Alcalde, Dr Maria José Escorihuela
Affiliations: isardSAT
Recent studies highlight the increasing frequency and severity of droughts, primarily driven by climate change. As droughts are among the most significant climate risks, understanding and monitoring them is crucial. Droughts result from complex interactions between atmospheric processes, continental surfaces, and water resource management systems. Following the classification proposed by Wilhite and Glantz (1985), droughts can be categorized into four types: meteorological, agricultural, hydrological, and socioeconomic. The first three are defined by their physical impact, related to precipitation deficits, soil moisture depletion, and groundwater shortages, respectively. To monitor and assess droughts, various indices have been developed, often based on precipitation or vegetation status. However, these indices have inherent limitations. Vegetation-based indices, for instance, can only identify droughts after vegetation has already been affected, making them unsuitable for early drought detection. Similarly, precipitation-based indices rely on in-situ measurements and/or land surface models, as remote sensing precipitation data tend to have high uncertainties and low spatial resolution. Soil moisture-based drought indices, such as those derived from the Soil Moisture and Ocean Salinity (SMOS) mission, provide a valuable alternative without the limitations faced by precipitation-based indices. With over 15 years of historical data since the launch of SMOS, soil moisture retrieved via remote sensing has emerged as a transformative tool for advancing drought monitoring capabilities. This study introduces the Standardized Soil Moisture Index (SSI), an adaptation of the widely used Standardized Precipitation Index (SPI), to evaluate droughts based on soil moisture data. The SSI leverages the temporal flexibility of the SPI methodology to capture different drought types depending on the integration timescale applied. 
The SSI has been calculated using downscaled SMOS soil moisture data, processed through the DISPATCh (Disaggregation based on Physical And Theoretical scale Change) algorithm, achieving 1 km spatial resolution and daily temporal frequency. The DISPATCh algorithm integrates SMOS observations with MODIS-derived land surface temperature and the Normalized Difference Vegetation Index (NDVI) to achieve this high-resolution dataset (Merlin et al., 2013). Our study focuses on the Ebro basin, a semi-arid region in northeastern Spain. We evaluate the SSI across multiple timescales, spanning June 2010 to May 2023, and compare its performance with in-situ SPI data. Results reveal a strong correspondence between SSI and SPI, particularly for longer integration times (12 and 24 months), underscoring their complementary utility for hydrological drought monitoring and basin-scale water resource planning. The high spatial resolution of our SSI product provides unprecedented granularity, outperforming gridded SPI datasets such as those by Vicente-Serrano et al. (2017). We conclude that the long-term SSI is a reliable indicator of groundwater content. Moreover, our findings indicate that the 12-month SSI (SSI12) succeeds in identifying already known hydrological droughts that affected the area, such as those in 2011-2012 and in 2017-2018. Furthermore, the historical SSI12 also shows that another hydrological drought started developing at the end of 2022 and concluded during the spring of 2024. The SSI12's results from this period are consistent with the precipitation shortage that struck the Iberian Peninsula during 2022-2023. Such behaviour demonstrates that the SSI12 is a reliable index for hydrological drought monitoring, since isolated precipitation events do not restore the SSI12 values if the rainfall is not sustained over time. 
Despite the relatively short 14-year record of SMOS-derived soil moisture data, our findings indicate that the SSI can reliably discern drought trends, offering a robust tool for long-term monitoring. The methodological advancements and spatial detail provided by the SSI hold significant promise for drought assessment and early warning systems, particularly in regions with limited ground-based observations. By advancing the integration of remote sensing and geophysical modelling, this study demonstrates the potential of the SSI as a cornerstone for future drought monitoring efforts. As drought risks intensify globally, the SSI provides an invaluable resource for climate adaptation and sustainable water management strategies.
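To illustrate the standardization idea behind the SSI, the sketch below uses a non-parametric (rank-based) variant: each value's Weibull plotting position is mapped through the inverse standard-normal CDF. Operational SPI/SSI implementations typically fit a parametric distribution (e.g. gamma) per calendar period first; the soil-moisture series here is invented for illustration.

```python
from statistics import NormalDist

def standardized_index(values):
    """Empirical (rank-based) standardized index, a non-parametric stand-in
    for the SPI/SSI: each value's Weibull plotting position rank/(n+1) is
    mapped through the inverse standard-normal CDF, so dry anomalies come
    out negative and wet anomalies positive.
    """
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        z[i] = nd.inv_cdf(rank / (n + 1))
    return z

# invented monthly surface soil moisture record (m^3/m^3), driest in month 7
soil_moisture = [0.30, 0.28, 0.25, 0.22, 0.18, 0.12,
                 0.10, 0.11, 0.15, 0.21, 0.26, 0.29]
ssi = standardized_index(soil_moisture)
```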

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: CIMR multi-frequency products advancing from L-band heritage

Authors: María Piles Guillem, Verónica González-Gambau, Dr. Carolina Gabarró, Dr. Roberto Fernandez-Moran, Moritz Link, Andrés Terrer-Gómez, Lina Zschenderlein, Kimmo Rautiainen, Kari Luojus, Marcus Huntemann, Gunnar Spreen, Dr. Joe Tenerelli, Dr. Nicolas Reul, Thomas Lavergne, Dr Michele Scagliola, Filomena Catapano, Pierre Femenias, Craig Donlon
Affiliations: Universitat De València, Institut de Ciències del Mar, CSIC, Finnish Meteorological Institute, University of Bremen, Oceandatalab, Ifremer, Norwegian Meteorological Institute, European Space Agency, European Space Agency
The Copernicus Imaging Microwave Radiometer (CIMR) mission is one of the six Copernicus Expansion Missions currently being implemented by the European Space Agency and the European Commission. CIMR will provide multi-frequency microwave imaging radiometry measurements and derived products with global coverage and sub-daily revisit in the polar regions and adjacent seas to support the Integrated European Union Policy for the Arctic. The primary instrument is a conically scanning microwave radiometer operating at five spectral bands (L-, C-, X-, K- and Ka-band) with varying spatial resolution (< 60 km for L-band, ≤ 15 km for C-/X-bands, < 5 km for K-/Ka-bands). Two satellites are being implemented (to be launched sequentially), with a first launch anticipated in 2028/29. In addition to monitoring changes in the Arctic, the CIMR mission ensures microwave L-band continuity from other missions (e.g., SMAP, SMOS) by providing a global view of derived parameters over land and ocean. CIMR's measurement characteristics allow the development of new approaches to estimating bio-geophysical variables from multi-frequency brightness temperature observations. Most notably, the CIMR channels are very well suited to the retrieval of a physically consistent suite of Level-2 geophysical products over global land and polar oceans. From a temporal perspective, CIMR has a higher revisit frequency than previous L-band missions (about one day globally, sub-daily at the poles). This improved temporal resolution could allow critical time scales of Earth System processes to be resolved. The CIMR Level-2 Product Algorithm Development project (CIMR L2PAD) is a 4-year project initiated by ESA.
Kicked off in November 2023, CIMR L2PAD aims to a) develop state-of-the-art algorithms for all Terrestrial and Polar Oceans (including Sea Ice) products, b) develop the open-source Level-2 Ground-Processor Prototype (GPP), c) conduct pre-flight performance assessment activities, and d) liaise with the user community (including but not limited to the Copernicus Services) to ensure adequate user preparedness and uptake. An assessment of the L2 algorithms, product formats, and L2 GPP outputs will be performed by an Expert User Group, which is independent of the algorithm developers. The project is implemented by a consortium of 12 partner institutions, under the leadership of the Norwegian Meteorological Institute. This presentation introduces the CIMR mission and the CIMR L2PAD activities and their status after one year of activity, with a focus on the portfolio of those products that rely on L-band measurements (Soil Moisture, Vegetation Indicators, Freeze/Thaw, Sea Surface Salinity, Ocean Wind Vectors, Sea Ice Thickness). The main strategies for using higher frequencies to enhance L-band retrievals and the overall approach for product consistency will be covered.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: TriHex – projecting SMOS lessons learnt into a follow-on mission concept

Authors: Dr Manuel Martin-Neira, Iain McKenzie, Michal Miler, Elena Checa, Berthyl Duesmann, Ms Astrid Barzaghi, Alexandra Bulit, Erio Gandini, Petri Piironen, Don de Wilde, Montserrat Piñol, Peter van der Plas, Volodymyr Kudriashov, Thomas Bader, Emilio Fernandez-Lisbona, Pauline Carpi, François Deborgies, Dr Francesca Scala, Camilla Colombo, Gabriela Gaias, Albert Zurita
Affiliations: European Space Agency, DLR, Politecnico di Milano (PoliMi), Airbus Defence and Space (ADS)
As early as 2004, the idea of an SMOS follow-on mission was already on the table, with the objective of providing an operational service over a long period of time. The initial proposal, coined SMOSops, consisted of three replicas of SMOS: straight and simple. The launch of SMOS came with the surprising discovery of numerous Radio Frequency Interference (RFI) sources in the protected observation band. The analysis of the RFI tails demonstrated that a hexagonal array would have much lower sidelobes than the Y-shape of SMOS. Hence, by 2012, it became clear that any follow-on instrument to MIRAS, the payload aboard SMOS, would have to have the shape of a hexagon. After 12 years of SMOS in orbit, by 2021, the pursuit of higher spatial resolution led to the study of the concept of Formation Flying L-band Aperture Synthesis (FFLAS). The resulting concept consisted of a formation of three 8 m hexagonal SMOSops satellites. However, the cost of FFLAS was very high: an Ariane 6 launcher, 8 m deployable arrays and very heavy spacecraft. This paper will present the evolution of FFLAS into TriHex, a far more cost-effective solution for an SMOS follow-on mission, made possible by several studies and technology activities. TriHex combines a particular type of formation flying, free-fall formation flying, with alias-free aperture synthesis, incorporating all lessons learned from 15 years of SMOS in orbit: filtering RFI, avoiding Sun effects, and improving spatial resolution by a factor of 2 (with potential for much better) as well as sensitivity, accuracy and coverage.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: The synergy between SMOS L-VOD and satellite LiDAR data in the framework of global forest monitoring

Authors: Cristina Vittucci, Prof. Leila Guerriero, Prof. Paolo Ferrazzoli, Dr. Philippe Richaume, Dr. Yann Kerr
Affiliations: Tor Vergata University of Rome, Centre d'Etudes Spatiales de la Biosphère
In the last decade many advancements have been achieved in the exploitation of radiometric measurements collected by L-band radiometers such as SMOS and SMAP. Besides the continuous monitoring of soil moisture, passive microwave L-band missions can retrieve the vegetation optical depth (L-VOD), which is being used as a new tool for vegetation monitoring. Several previous studies reported that L-VOD increases with forest height and/or biomass, and reaches saturation at height/biomass values higher than those observed at higher frequencies and/or with active microwave instruments. More recently, the influence of forest intactness on L-VOD was also investigated by comparison with maps of Intact Forest Landscapes (IFL) updated until 2020 (https://intactforests.org/). Intact forests markedly show the highest L-VOD values at tropical latitudes, while the influence of forest intactness disappears at boreal latitudes, because of the extreme climatic factors that limit vegetation growth even in intact landscapes. With the availability of operational satellite LiDAR missions, such as GEDI and ICESat-2, these investigations were extended. Indeed, recent studies demonstrated that L-VOD signatures, retrieved from the v700 version of level 2 SMOS data, were significantly correlated with plant variables provided by the latest versions of LiDAR products collected by both the GEDI and ICESat-2 instruments. In particular, L-VOD proved to be spatially correlated with top-of-canopy height (approximated by the RH100 LiDAR variable) recorded by both satellite instruments and with the Plant Area Index (PAI) recorded by GEDI. The correlation coefficient, obtained after spatial averaging of the LiDAR samples at the SMOS spatial resolution, is very high for tropical forest in all seasons (R > 0.88). At temperate northern and southern latitudes, the spatial correlations decrease in cold months but are still significant (R > 0.7).
Lower correlations were achieved at high boreal latitudes, particularly in cold months. By providing a reference for vegetation properties, LiDAR data may help to constrain the assumptions and models used in VOD estimation, leading to more reliable and accurate results, particularly for forest monitoring at the global scale. The synergy of SMOS and LiDAR data may be particularly important for regions with complex vegetation structures or varying soil moisture conditions, where passive microwave-based VOD retrievals can be prone to uncertainties. Such results and considerations encouraged the testing of a new methodology to initialize the L-VOD. While previous and current versions of the Level 2 SMOS algorithm initialized L-VOD using the Leaf Area Index (LAI) as an auxiliary variable, the newly proposed procedure aims to replace LAI with LiDAR variables, for different seasons and vegetation classes. This paper illustrates the experimental results obtained by considering both linear and non-linear initialization functions which include the RH100 and PAI variables as proxies to model the fixed part (due to woody vegetation) and the variable part (due to seasonal canopy changes) of L-VOD. Since PAI information is not provided as a standard ICESat-2 product, a deep learning algorithm has been implemented to retrieve PAI for this sensor as well, starting from the nominal ICESat-2 L2 data.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: 10 YEARS OF SMOS – PASSIVE MICROWAVE VEGETATION OPACITY STUDY (PM-VO-S) : The OSMOSE database

Authors: Béatrice Berthelot, Dr Arnaud Mialon, Dr Nemesio Rodriguez-Fernandez, Simon Boitard, Julio-Cesar Salazar-neira, Mike Schwank, Yiwen Zhou, Dr Stephane Mermoz, Dr Marie Weiss, Frederic Baret, Hongliang Ma, Daria Malik, Matthias Drusch
Affiliations: Magellium, CESBIO, WSL, INRAE, Globeo, ESA/ESTEC
The ESA OSMOSE project aims to enhance the understanding of vegetation opacity datasets derived from SMOS measurements and other low-frequency passive microwave observations. By combining optical data with microwave measurements, the project seeks to improve insights into the variability and properties of the vegetation layer. The primary goal of the OSMOSE project is to provide harmonized data cubes that combine complementary datasets. These include:
Microwave Datacube:
- A dataset containing brightness temperature measurements from SMOS (from launch to 2023), as well as AMSR-E and AMSR2 low-frequency passive microwave measurements. These data were processed using the CESBIO model to estimate the Vegetation Optical Depth (VOD) time series.
- Passive microwave VOD was considered at three frequencies, i.e. L-, C- and X-bands. It was derived from brightness temperatures acquired by three sensors (SMOS, AMSR-E and AMSR2) spanning a twenty-year period. The output VOD products are all processed on the same EASE Grid version 2 (Brodzik et al., 2012; Brodzik et al., 2014), with a spatial resolution of 25 km x 25 km (at 30° of latitude), covering the whole globe.
Optical Datacubes:
- Optical data acquired concurrently with SMOS measurements were collected and aligned with SMOS products in the same spatial projection.
- Two optical datacubes were created: Optical Datacube #1, a 0.05° spatial resolution dataset on a latitude/longitude grid; and Optical Datacube #2, a dataset on the EASE2 25 km grid, matching the grid of the VOD products.
- Input data for these cubes included LAI/FAPAR/FCOVER variables from AVHRR and the Copernicus Global Land Service collections, as well as MODIS products such as the MOD13C1 and MCD15A2H vegetation indices, MCD43C4 BRDF data, and MCD18C2 Photosynthetically Active Radiation (PAR) products.
To validate the datasets and evaluate their similarity, time series data from both the VOD and optical datacubes were systematically extracted at 300 LANDVAL stations.
This approach ensures robust comparisons and enables a detailed analysis of the relationships between the microwave and optical datasets. The resulting database, which includes both microwave and optical datacubes, will be described in detail, and key time series will be presented. This valuable resource will be available upon request.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: CATDS : SMOS L3/L4 products generation and dissemination

Authors: Stéphane Tarot, Dr Jacqueline Boutin, Yann Kerr
Affiliations: Ifremer, LOCEAN, Cesbio
The CATDS (Centre Aval de Traitement des Données SMOS) is the French ground-segment facility in charge of the generation, calibration, archiving and dissemination of the SMOS level 3 and level 4 science products. It processes Sea Surface Salinity (SSS, also named Ocean Salinity, OS) and Soil Moisture (SM). CATDS also provides services to users and scientists. More specifically, it is in charge of:
- Processing of the SMOS level 3 and 4 science data from level 1B data received from the ESA Data Processing Ground Segment (DPGS),
- Re-processing of the SMOS level 3 and 4 data,
- Cataloguing and archiving of the level 3 and 4 products, including the auxiliary data, the calibration data, and the level 1B data used for the processing,
- Calibration and validation of the SMOS L3 and L4 products,
- Dissemination of the SMOS level 3 and level 4 products to the users,
- Assistance and support to SMOS L3 and L4 users,
- Potential feedback towards the DPGS (new algorithms, calibration functions, and software).
CATDS involves two main components:
- A production centre (called C-PDC), which routinely produces and disseminates L3 and L4 data.
- Two expertise centres (called C-EC), which host the definition of algorithms, assess the quality of the products and provide specific information to users.
The C-PDC processing chain which generates SM and SSS/OS L3 data from L1B products is composed of several processors, some of which are based on the ESA DPGS prototypes. The main difference with level 2 products is that DPGS products are organized by half orbits, whereas C-PDC processing is based on multiple orbits (from daily to monthly products). One of the goals of the C-PDC L3 processors is to select correct input data and reject dubious data. These L3 processors perform temporal analysis, time aggregations, and spatial and temporal averages.
The Ocean products generated and distributed in near real time at C-PDC are:
- L3 debiased daily SSS maps,
- L3 10-day and monthly debiased SSS maps (simple average),
- L3 9-day debiased SSS maps (Gaussian average),
- L4 SMOS/ISAS (in situ) and SMOS/SMAP/ISAS optimal interpolation SSS maps.
In debiased products, the SSS is corrected for land-sea contamination and latitudinal seasonal biases. An additional salinity field provides the SSS corrected for rain freshening. NRT products are delivered within one day. The Land products generated and distributed are:
- L3 daily SM maps,
- L3 3-day, 10-day and monthly aggregated SM maps,
- L4 root zone soil moisture maps.
Each product is referenced with a DOI. A web site (www.catds.fr) presents the products and gives information on the operational status. It includes an online catalogue which lists, describes and gives access to the products. The website also includes a tool (maps.catds.fr) to search, browse and visualize the main fields of the generated products. Users can access the products either through HTTP and FTP, or through the Sipad, an interactive web-based tool which allows aggregations and sub-setting. A single point of contact (support@catds.fr) operated by the production centre allows users:
- To obtain technical or scientific support on the products and their usage,
- To give feedback about the products.
We present here the CATDS production centre with a focus on the available products. We also describe the L3 data dissemination process and the way to retrieve CATDS L3/L4 products.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: Mission Analysis for TriHex, Formation Flying for SMOS Follow-On

Authors: Ms Astrid Barzaghi, Dr Manuel Martin-Neira, Dr Francesca Scala, Mr Berthyl Duesmann
Affiliations: European Space Agency (ESTEC), German Aerospace Center (DLR)
TriHex is a mission concept designed as a follow-up to SMOS. It employs three satellites flying in a Low Earth, Sun-Synchronous, Dawn-Dusk orbit around the Earth and in (relative) General Circular Orbits with respect to each other. This particular relative orbit is designed such that the spacecraft naturally revolve around each other once per orbit at a distance of 5.9 metres, always with the Sun at their backs; by doing so it is possible to synthesize a larger antenna aperture, leading to improved performance compared to SMOS. Achieving an element spacing of 0.577λ, TriHex will produce alias-free imaging with a reduced noise floor, a wider swath, a shorter revisit time and an average spatial resolution of 20 km. The formation flying concept also brings several advantages for the platform itself, such as being smaller and stackable in the launcher. The mission analysis of TriHex must consider the peculiarities of such close formation flying in developing the propellant budget and selecting the propulsion system. While regular orbit maintenance is feasible with cold gas thrusters, the strict formation maintenance constraints (within 5 mm in all directions) require careful consideration both of the propulsion used – either electrical for continuous control or cold gas for pulse control – and of the thrusters' orientation. To answer these questions, a dedicated simulator is being developed using GODOT, the astrodynamics library from ESA/ESOC, allowing both orbit and formation flying maintenance requirements to be taken into account at the same time. Eventually, following a validation campaign, the simulator will be developed into a tool for the ready evaluation of formation flying missions regardless of their configuration. The proposed analyses to showcase the tool will consider maneuvers such as maintenance of the General Circular Orbits with and without air drag, formation acquisition after launch, deep space calibration and safe mode.
The presentation will cover the tool development and the progress on the implementation of the suggested maneuvers, together with the corresponding results. This new tool is but one of the many novelties brought about by the TriHex mission, which, if successful, will represent an important technological step forward for ESA Earth observation and formation flying capabilities.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: Exploring ice sheets with the SMOS observations

Authors: Marion Leduc-Leballeur, Ghislain Picard, Catherine Ritz, Pierre Zeiger, Giovanni Macelloni
Affiliations: IFAC-CNR, IGE, UGA, CNRS
The European Space Agency (ESA)'s Soil Moisture and Ocean Salinity (SMOS) mission, operating for 15 years, provides an unprecedented dataset all over the world, including the polar regions. The capability of L-band signals to penetrate deep into the ice, and their sensitivity to the presence of liquid water, make it possible to retrieve information on ice sheets, namely the ice temperature profile and melt detection. We will present the current status of these applications and their perspectives. (i) The internal temperature is a key parameter for ice sheet dynamics. Until now, temperature profiles were available only at a few boreholes or from glaciological models. Macelloni et al. (2019) performed the first retrieval of the ice sheet temperature in Antarctica using SMOS observations. This is made possible by the very low absorption of ice and the low scattering by particles (grain size, bubbles in ice) at L-band, which implies a large penetration depth of several hundred metres in dry snow and ice. The current retrieval algorithm involves three-dimensional glaciological modelling and is based on a Bayesian approach. First results were obtained for Antarctica and tests are in progress to extend the concept to Greenland. (ii) L-band observations also provide information for melt detection, due to the significant impact of liquid water on the microwave emissivity of the surface. Building on decades of use of passive microwave observations to detect melt over ice sheets, we developed a binary daily indicator from the SMOS observations. The methodology was developed first for Antarctica and will then be applied to Greenland. To best exploit the L-band specificities, we combine the SMOS indicator with the single-frequency binary melt indicators at 19 and 36 GHz from AMSR2 observations. The final classification is composed of nine melt statuses, and provides a clear and synthetic description of the melt throughout the season.
This opens a good opportunity for potential use with the Copernicus Imaging Microwave Radiometer (CIMR). These applications highlight the relevance of L-band observations for the cryosphere community and the importance of maintaining such satellite measurements.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: Machine learning SMOS soil moisture product assimilation at ECMWF

Authors: Kirsti Salonen, Peter Weston, Ewan Pinnington, Axel Bonet, Patricia de Rosnay
Affiliations: ECMWF
SMOS soil moisture produced using a neural network retrieval has been assimilated in the European Centre for Medium-Range Weather Forecasts (ECMWF) operational system since 2019 with Integrated Forecasting System (IFS) cycle 46r1. The latest operational ECMWF model cycle 49r1, implemented in November 2024, includes several improvements in the surface assimilation and modelling, such as a two-metre temperature lapse rate correction, increased background errors for soil moisture, changes in observation usage in the snow data assimilation, and updated vegetation maps. Thus, the SMOS soil moisture assimilation product has required updating. The new product has been developed and is produced in-house at ECMWF using the eXtreme Gradient Boosting (XGBoost) method, a gradient-boosted decision tree machine learning library. The XGBoost model has been trained using 5 years of co-located SMOS brightness temperature observations from incidence angles 30°, 40° and 50° and ECMWF model fields. One year of data has been used for independent testing of the XGBoost model. The SMOS brightness temperatures are used over land only and screened over snow and frozen ground. The Radio Frequency Interference (RFI) filtering is done based on the flags provided with the SMOS data. Sensitivity tests have indicated that including time-related features in the XGBoost model improves the correlation and decreases the root mean square error (RMSE) between the observed and IFS model soil moisture. Assimilation experiments with the new product indicate a rather neutral impact on average globally, but over Southern Hemisphere mid-latitudes a positive impact is seen for short-range forecasts when compared to independent near-surface observations. For longer-range forecasts the impact is neutral. The impact is generally similar in magnitude to that seen from assimilating the neural network product.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: Monitoring Data Quality for Characterising the SMOS Fundamental Climate Data Record (FCDR).

Authors: Ben Coombs, Raul Diez Garcia, Verena Rodriguez Gonzalez, Swati Punaini, Raffaele Crapolicchio
Affiliations: Telespazio UK Ltd. for ESA, Serco Italia SpA for ESA
The concept of a "Fundamental Climate Data Record" (FCDR) encompasses a carefully curated long-term dataset derived from either a single sensor or a series of instruments. An FCDR is designed to meet the stringent requirements of accuracy and stability, both spatially and temporally, essential for climate-related applications. These records also include detailed documentation on quality control, traceability, and the supporting ancillary data used during calibration processes. Since its launch in November 2009, the European Space Agency’s (ESA) Soil Moisture and Ocean Salinity (SMOS) mission, featuring the MIRAS instrument (Microwave Imaging Radiometer using Aperture Synthesis), has been delivering L-band brightness temperature data. Spanning over 15 years, this dataset now represents the longest continuous time series of its kind. Its extended coverage and quality make it a prime candidate for developing an FCDR of L-band brightness temperature tailored to support climate research, particularly for generating Essential Climate Variables (ECVs) like soil moisture (SM) and sea surface salinity (SSS). This presentation will highlight the comprehensive efforts of the SMOS Calibration Expert Centre (CEC), operating within the IDEAS-QA4EO & QA4EO-2 frameworks, in monitoring and analysing the long-term trends of polarimetric brightness temperatures. By leveraging stable reference targets, such as the Antarctic plateau and in situ sea surface salinity data, the stability of MIRAS’s measurements has been systematically assessed. Key parameters affecting the instrument, including receiver gain and offset, have been scrutinized, alongside external factors such as solar interference and the impact of Radio Frequency Interference (RFI). These efforts ensure an accurate understanding of MIRAS’s stability, which is critical for interpreting geophysical trends associated with climate variability. 
In addition to trend analysis, the study examines various anomalies encountered during the SMOS mission and their implications for data quality. These include unexpected instrument anomalies and their resolution, all of which are essential for ensuring the traceability and reliability of the SMOS dataset. The resulting insights contribute to the development of a robust and well-documented FCDR for L-band brightness temperatures, positioning SMOS as a key resource for advancing climate science and enhancing future studies of global environmental change.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: Mitigating RFI in SMOS SSS Observations: Toward Enhanced Global Accuracy

Authors: Fabrice Bonjean, Jacqueline Boutin, Jean-Luc Vergely, Philippe Richaume, Roberto Sabia
Affiliations: CNRS, ACRI-ST, CNES, ESA
The Soil Moisture and Ocean Salinity (SMOS) mission has been instrumental in providing global Sea Surface Salinity (SSS) data, critical for understanding ocean dynamics and climate processes. However, Radio Frequency Interference (RFI) remains a significant challenge, particularly in regions where human activities introduce contamination into the L-band microwave signals used for SSS retrieval. This study presents a refined approach to RFI mitigation, extending methodologies originally developed for region-specific applications to a broader, global framework. Our methodology leverages the spatial variability of RFI contamination across the SMOS observational swath, employing two distinct correction approaches:
1. Regional Method (RM): this method identifies and corrects RFI patterns by applying Principal Component Analysis (PCA) to SSS fields, capturing coherent spatial and temporal RFI structures over large areas.
2. Pointwise Method (PM): a pixel-level correction technique that utilizes local RFI time series without requiring prior knowledge of contamination sources.
The effectiveness of these approaches is demonstrated using SSS data from three distinct regions: the Samoa region, the Barbados region, and the Guinea Gulf region, which were strongly impacted by RFI during different phases of the SMOS mission. Results show that both methods significantly reduce the influence of RFI on salinity variability, as validated against in-situ datasets and CESBIO RFI probability maps. RM is particularly effective in restoring large-scale variability, such as signals related to the El Niño Southern Oscillation (ENSO), while PM proves advantageous for localized corrections and dynamic environments. While these techniques have shown substantial improvements in mitigating RFI contamination, they also highlight complementary strengths. This insight has led to the development of a hybrid approach that combines the robustness of RM with the flexibility of PM.
The hybrid method systematically incorporates RFI probability maps, uses additional PCA components during local processing, and enhances the ability to address RFI across diverse global regions. Examples of corrections, including recent applications near South Greenland, a region critical for studying the global climate system, will be presented. These advancements not only enhance the accuracy of SMOS SSS products but also pave the way for systematic global RFI mitigation, ensuring the continued utility of SMOS data in long-term climate research and operational applications.
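The core idea of the Regional Method, using PCA to isolate and remove coherent RFI structure from SSS fields, can be illustrated on synthetic data. This is a deliberately simplified sketch: in the actual method, identifying which components are RFI-related relies on auxiliary information such as the CESBIO RFI probability maps, whereas here the contaminating mode is constructed to dominate and the leading component is simply subtracted.

```python
# Simplified Regional Method illustration: PCA on a (time x grid-point)
# SSS anomaly matrix, then removal of the leading component, which in
# this synthetic setup is a coherent RFI pattern with varying intensity.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
ntime, npix = 200, 400
truth = rng.normal(35.0, 0.2, size=(ntime, npix))        # "clean" SSS (psu)
rfi_pattern = np.exp(-np.linspace(-3.0, 3.0, npix) ** 2) # fixed spatial shape
rfi_ampl = rng.normal(0.0, 1.0, size=ntime)              # varying intensity
contaminated = truth + np.outer(rfi_ampl, rfi_pattern)

anom = contaminated - contaminated.mean(axis=0)
pca = PCA(n_components=3).fit(anom)
scores = pca.transform(anom)
# subtract only the leading mode, leaving the rest of the field intact
corrected = contaminated - np.outer(scores[:, 0], pca.components_[0])

rmse_before = float(np.sqrt(np.mean((contaminated - truth) ** 2)))
rmse_after = float(np.sqrt(np.mean((corrected - truth) ** 2)))
```

Because the RFI mode carries far more variance than any mode of the clean field, it is captured almost entirely by the first principal component, and removing that single component recovers most of the underlying salinity variability.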
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: The impact of Radio Frequency Interference (RFI) on SMOS Level 2 Data Retrievals

Authors: Raul Diez Garcia, Ben Coombs, Philippe Richaume, Verena Rodriguez Gonzalez, Swati Punaini, Raffaele Crapolicchio
Affiliations: Telespazio UK Ltd. for ESA, Centre d’Étude Spatiale de la BIOsphère (CESBIO-UPS-CNRS-IRD-CNES-INRAE), Serco Italia SpA for ESA
Radio Frequency Interference (RFI) remains a critical challenge in radiometry for satellite-based Earth observation. Since its launch in November 2009, the European Space Agency’s (ESA) Soil Moisture and Ocean Salinity (SMOS) mission, featuring the Microwave Imaging Radiometer using Aperture Synthesis (MIRAS), has provided L-band brightness temperature data for Soil Moisture (SM) and Ocean Salinity (OS) retrievals. However, RFI significantly affects the quality of SMOS data. As the first mission of its kind, SMOS does not include advanced onboard signal-processing techniques to mitigate RFI, making it particularly vulnerable. This issue has worsened in recent years, notably due to intensified RFI sources linked to geopolitical factors such as the Ukraine conflict. The impacts of RFI on scientific measurements remain poorly understood, with two primary effects identified. First, strong RFI often creates retrieval gaps. Powerful RFI sources are easily identified as the radiometric power measured is above physical levels. Affected pixels are usually discarded early in processing. Moreover, even if not discarded, the high biases introduced by RFI are usually enough to prevent the convergence of the inversion algorithms used. If the RFI contamination is not so strong, it may be within the margins of physical retrievals. This type of contamination is often termed “insidious” RFI contamination. Undetected insidious RFI can bias measurements, as observed in SM retrievals, where artifacts like artificially “drier” land near RFI sources have been detected. This study systematically analyses the implications of RFI on SMOS data. By examining the complete SMOS Level 2 (L2) dataset, we quantify retrieval unavailability directly attributable to RFI, distinguishing these from gaps caused by other factors such as surface modelling errors. The temporal evolution and geographical distribution of RFI are mapped, and detailed statistics are presented. 
Additionally, the study investigates the bias introduced into retrieved measurements through selected case studies, highlighting the broader implications of RFI contamination on the mission's scientific objectives, and the strong need for a continuous monitoring of the L-band that ensures its protection.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: 15-Year Time Series of Liquid Water Amount in the Greenland Ice Sheet Percolation Zone Derived from SMOS and SMAP

Authors: Alamgir Hossan, Andreas Colliander, Joel Harper, Nicole-Jeanne Schlege, Baptiste Vandecrux, Julie Miller, Shawn Marshall
Affiliations: Jet Propulsion Laboratory, California Institute Of Technology, Department of Geosciences, University of Montana, NOAA Geophysical Fluid Dynamics Laboratory (GFDL), Geological Survey of Denmark and Greenland, Cooperative Institute for Research in Environmental Sciences, University of Colorado Boulder, Department of Geography, University of Calgary
With accelerated mass loss over recent decades, the Greenland ice sheet (GrIS) is now the leading contributor to global sea level rise. Amid growing concern about the GrIS response to climate change and Arctic amplification, quantifying total liquid water amounts (LWA) in the GrIS becomes indispensable for understanding its physical processes and their changes, enabling better projections of future sea-level rise. In situ observations have inadequate spatial and temporal coverage, while significant process limitations and uncertainties exist among regional climate models in estimating LWA across the GrIS. Satellite microwave radiometers are very sensitive to ice sheet melting in day-night and all-weather conditions and have been widely used for monitoring polar ice sheet surface melt since the 1990s. Yet, quantifying total LWA remains an ongoing challenge because conventional methods employing high-frequency bands (i.e., the 19 and 37 GHz bands) can only provide daily freeze/thaw status or the extent of surface melt. ESA’s Soil Moisture and Ocean Salinity (SMOS) mission (November 2009 - present), followed by NASA’s Soil Moisture Active Passive (SMAP) mission (March 2015 - present), has created an unprecedented opportunity for actual quantification of LWA over polar ice sheets. Because of their deeper penetration, L-band signals can track liquid water in deeper layers and accurately estimate surface-to-subsurface LWA. Here, we combined L-band brightness temperature (TB) observations from the SMOS and SMAP missions to quantify the total LWA and examine its spatial and temporal variability in the percolation zone of the GrIS for the 2010-2024 period. We converted the SMOS multi-angular observations to SMAP-equivalent TB at a 40° incidence angle to create a consistent TB record between the two sensors. Then, we used vertically polarized (V-pol) TBs in an empirically derived adaptive thresholding technique to detect melt events.
We used a snow microwave emission and radiative transfer model to simulate corresponding L-band TBs, which were then used in an inversion algorithm for LWA retrieval. Finally, the retrieval was compared with the corresponding LWA derived from two reference models. The first reference is a locally calibrated ice sheet energy and mass balance (EMB) model, which was forced by in situ measurements from automatic weather stations (AWS) of the Programme for Monitoring of the Greenland Ice Sheet (PROMICE) and Greenland Climate Network (GC-Net) located in the percolation zone of the GrIS. The model was initialized by appropriate in situ ice core density and sub-surface temperature profiles. The second reference is the corresponding LWA obtained from the Glacier Energy and Mass Balance (GEMB) model within the National Aeronautics and Space Administration’s (NASA) Ice-sheet and Sea-Level System Model (ISSM) forced with ERA-5 reanalysis. The retrievals generally agree with both the references in the percolation zone, signifying the potential for advancing our understanding of ice sheet physical processes to better project Greenland’s contribution to global sea level rise in response to the warming climate.
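The adaptive thresholding step can be illustrated with a minimal sketch: a day is flagged as melt when the V-pol TB exceeds the mean plus k standard deviations of the preceding dry-snow days. The window length, k value, and synthetic data below are assumptions for illustration, not the authors' calibrated algorithm:

```python
import numpy as np

def detect_melt(tb_v, window=31, k=3.0):
    """Flag melt days in a V-pol brightness-temperature time series.

    A day is flagged when TB exceeds mean + k*std of the non-melt days
    within the preceding window (melt raises L-band TB sharply)."""
    tb_v = np.asarray(tb_v, dtype=float)
    melt = np.zeros(tb_v.size, dtype=bool)
    for i in range(window, tb_v.size):
        ref = tb_v[i - window:i]
        frozen = ref[~melt[i - window:i]]   # exclude already-flagged days
        if frozen.size < 5:                 # not enough dry-snow reference
            continue
        threshold = frozen.mean() + k * frozen.std()
        melt[i] = tb_v[i] > threshold
    return melt

# Synthetic winter TB (~200 K) with a ten-day melt spike (~250 K)
rng = np.random.default_rng(0)
tb = 200.0 + rng.normal(0.0, 2.0, 120)
tb[60:70] += 50.0
flags = detect_melt(tb)
```

Excluding previously flagged days from the reference statistics keeps a long melt event from inflating its own threshold.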
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L)

Poster: A model-based approach for mapping Forest Above-ground Biomass with SMOS and SMAP L-band Radiometer Data

Authors: Oliver Cartus, Maurizio Santoro, Carlos Jimenez, Mike Schwank
Affiliations: Gamma Remote Sensing AG, Estellus, Swiss Federal Research Institute for Forest, Snow and Landscape Research
L-band radiometer data acquired by the Soil Moisture and Ocean Salinity (SMOS) and Soil Moisture Active Passive (SMAP) missions have shown potential for mapping the spatial distribution and temporal changes of the above-ground biomass (AGB) of forests. In most studies, the relationship between existing AGB maps and the vegetation optical depth (VOD) derived from radiometer data has been analysed. Only a few studies have attempted a direct estimation of AGB from the original brightness temperature measurements. We present a fully automated AGB retrieval approach for SMOS and SMAP brightness temperatures which builds upon existing AGB retrieval frameworks developed for C- and L-band Synthetic Aperture Radar data (Santoro et al., 2024). A requirement of retrieval algorithms targeting global mapping of AGB is independence from in-situ information because such data, mostly in the form of small inventory plots, are either not available for large parts of the world or sampled too sparsely to support spatially adaptive calibration of models relating brightness temperatures to AGB. In the retrieval approach presented here, the Two-Stream Microwave Emission Model (2S-MEM) was adapted to relate brightness temperatures to forest canopy cover and height available from space-borne optical and LiDAR missions. Using canopy cover information from AVHRR optical data and forest height information from ICESat-2, the effect of spatial and temporal variations of soil moisture and signal extinction in the forest canopies was quantified across an annual stack of 10-day composites of a polarimetric index calculated per incidence angle range from H- and V-polarization brightness temperatures acquired by SMOS and SMAP. A global database of functions describing the interrelationships between canopy density, height, and AGB (Santoro et al., 2024) was then exploited to invert the 2S-MEM for AGB and to estimate AGB for each 10-day composite of SMOS and SMAP data acquired in 2016.
In the inversion, SMAP and SMOS brightness temperatures were combined to benefit from the radiometric accuracy of SMAP and the multi-angular capabilities of SMOS. When compared to the ESA CCI Biomass AGB map of the same year (version 5), the global AGB estimates produced from SMOS and SMAP showed reasonable agreement with low systematic biases and explained, depending on forest type, between 40% and 80% of the AGB variability in the reference map. The differences between maps produced for the 36 10-day composites of SMOS and SMAP observations in 2016 tended to be negligible, which suggests that the influence of varying soil moisture or signal extinction in the canopy was adequately captured. An additional comparison with AGB reference information derived from plot-level inventory data for a limited number of sites across the major forest biomes generally confirmed the merit of the suggested retrieval approach but also revealed a need for improving the retrieval algorithm locally. The results generally suggest that the retrieval approach is a viable and flexible option for mapping AGB globally from L-band radiometer data, independent of the availability of a dense network of in-situ data. Nonetheless, the results also pointed out challenges associated with modelling space-borne L-band radiometer measurements as a function of AGB: 1) The retrieval of AGB with parametric models requires knowledge of parameters such as the soil dielectric properties/soil moisture, surface roughness, or the scattering and absorption properties of vegetation at L-band. Similar to approaches targeting the retrieval of soil moisture and VOD, various simplifications with respect to the effect of parameters such as surface roughness, the albedo of vegetation, or the emissivity of water bodies are required to be able to invert the model for AGB. More theoretical and experimental research will be needed to improve the model parametrization.
2) The forest structural component of the model, i.e., the global database of modelled relationships between forest canopy cover, height, and AGB, requires continuous re-evaluation and refinement. The current database is based on LiDAR-derived canopy cover and height, together with AGB information available from national or provincial inventories at the scale of administrative units (e.g., counties, countries), and therefore describes differences in the relationships at rather coarse spatial scales. This research was supported by the European Space Agency with ESRIN contracts 4000138809/22/I-DT-BGH (BiomAP, Integrating Active and Passive microwave data towards a novel global record of aboveground biomass maps) and 4000123662/18/I-NB (Climate Change Initiative BIOMASS project). References: Santoro, M., Cartus, O., Quegan, S., Kay, H., et al. (2024). Theory, design and performance of the Climate Change Initiative Biomass global retrieval algorithm. Science of Remote Sensing, 10, 100169.
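The inversion of a parametric emission model for AGB can be sketched, under the assumption of a monotonic TB-AGB relation, as a grid-search lookup. The saturating-exponential forward model and its parameters below are toy stand-ins for illustration, not the 2S-MEM:

```python
import numpy as np

def forward_tb(agb, tb_bare=260.0, tb_veg=285.0, k=0.02):
    # Toy monotonic forward model: brightness temperature (K) saturates
    # with increasing above-ground biomass (t/ha). Parameters are
    # illustrative only.
    return tb_veg - (tb_veg - tb_bare) * np.exp(-k * np.asarray(agb, dtype=float))

def invert_agb(tb_obs, agb_grid=None):
    # Invert by nearest match on a dense lookup grid; valid because the
    # forward model is monotonic in AGB.
    if agb_grid is None:
        agb_grid = np.linspace(0.0, 400.0, 4001)   # 0.1 t/ha steps
    tb_grid = forward_tb(agb_grid)
    return float(agb_grid[np.argmin(np.abs(tb_grid - tb_obs))])

agb_hat = invert_agb(forward_tb(120.0))   # round-trips the true AGB
```

Saturation of the TB-AGB relation at high biomass (the flattening exponential) is one reason retrieval skill varies by forest type, as the comparison against the CCI map above indicates.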
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: A.01.06 - POSTER - Aerosol, clouds, their interactions and the radiation budget

The Intergovernmental Panel on Climate Change (IPCC) has recognised that aerosols and clouds remain the largest contributors to overall uncertainty in climate feedbacks. Aerosol particles directly affect the Earth’s energy budget by scattering and absorbing sunlight and, to a lesser extent, infrared radiation. Aerosol layers often include tiny particles known as cloud condensation nuclei (CCN), around which water condenses to form cloud droplets, as well as ice nuclei. Aerosols therefore affect the size and concentration of water droplets and ice crystals, which in turn affects the radiative properties of the clouds. Aerosol indirect forcing is broadly defined as the overall process by which aerosols perturb the Earth-atmosphere radiation balance by modulation of cloud albedo and cloud amount.

Satellite observations have up to now been less successful at providing the quantitative particle optical, microphysical, and chemical properties needed to model aerosol forcing and to interpret the interactions with clouds. The recent EarthCARE mission collects co-registered observations from a suite of four instruments located on a common platform: the two active instruments provide vertical profiles of the atmosphere along the satellite nadir path, while the two passive instruments provide scene context information to support the interpretation of the active instruments' data.

This session is dedicated to the presentation of improved detection and retrieval capabilities for aerosol and cloud characteristics, and to the understanding of their interactions and their effect on the global radiative budget by means of novel and past satellite missions, including but not limited to Sentinel-4, -5 and -5P, EarthCARE and Aeolus.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Enhancing COSP for Aeolus and EarthCARE: Toward Long-Term Simulations of Cloud Lidar Observations

Authors: Marie-Laure Roussel, Zacharie Titus, Hélène Chepfer, Marine Bonazzola, Olivier Chomette
Affiliations: CNRS / LMD, Sorbonne Université / LMD
Spaceborne lidars such as ALADIN and ATLID – onboard the Aeolus and EarthCARE satellites, respectively – are crucial for advancing our understanding of Earth's atmosphere by providing global vertical cloud distributions. However, comparing observational data to atmospheric models raises challenges, particularly due to differences in measurement assumptions and the unique characteristics of satellite instruments, such as wavelength, beam inclination, or high-spectral-resolution capabilities. Simulators are employed to replicate how instruments would measure cloud properties derived from modeled fields, facilitating direct comparisons between observations and simulations. We have enhanced the features of the CFMIP Observation Simulator Package (COSP) (Bodas-Salcedo et al., 2011; Swales et al., 2018; Bonazzola et al., 2023) to simulate the observations of ALADIN. This development integrates instrument-specific characteristics, including wind retrieval simulations based on modeled winds and accounting for ALADIN's geometry, wavelength, and cloud masking effects. Using COSP with outputs from the LMDZ atmospheric model, we have conducted simulations and performed model evaluation with respect to the latest observational datasets (Titus et al., 2024, under review), revealing valuable insights into cloud processes and the representation of winds in models. Building on this work, we are preparing to expand COSP for ATLID, whose initial datasets are becoming available. This upgrade will enable realistic long-term simulations that account for the successive deployment of satellite instruments, incorporating their distinct observational characteristics. These developments promise to enhance model evaluation capabilities and support comprehensive analyses of global cloud patterns and radiative processes across multiple satellite missions.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Development and Applications of New GEO-RING Radiance Data Set.

Authors: Andy Heidinger, Carlos Horn, Viju John, Ken Knapp, Jan Fokke Meirink, Coda Phillips, Martin Stengel
Affiliations: NOAA/NESDIS GEO, EUMETSAT, Deutscher Wetterdienst, CIMSS, KNMI, NOAA/NESDIS NCEI
With the recent launches of advanced geostationary imagers, GEWEX initiated a project to develop a new GEO-RING data set to make these data readily available to the satellite climate community. The specific motivation for this effort was to enable a next generation of the International Satellite Cloud Climatology Project (ISCCP-NG). This paper will present the prototype GEO-RING Level-1 Gridded Data (L1G) developed by NESDIS and EUMETSAT. This work will also demonstrate the new cloud and other applications that are possible with the improved spatial, spectral and temporal resolution of the GEO-RING L1G.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Quantifying the radiative effect of volcanic sulfate aerosols from the Hunga Tonga 2022 eruption with infrared satellite sounders.

Authors: Benjamin Navarrete Saavedra, Dr. Lieven Clarisse, Pr. Pierre-François Coheur, Pr. Cathy Clerbaux, Dr. Simon Whitburn
Affiliations: Université Libre De Bruxelles (ULB), Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing (SQUARES), Royal Meteorological Institute of Belgium (RMIB), Atmospheric Composition, Measurements and Modelling (ACM2), LATMOS/IPSL, Sorbonne Université, CNRS
The January 2022 eruption of the Hunga Tonga volcano, located in the Southern Pacific Ocean and one of the most intense ever recorded, propelled a plume directly into the stratosphere to an altitude of up to 50 km and released around 0.4 Tg of sulfur dioxide (SO2) into the atmosphere [Zuo, 2022]. The SO2 initially released was rapidly converted to large amounts of sulfuric acid (H2SO4), significantly affecting the Earth’s radiation budget for months after the eruption [Zhu, 2022]. In this study, we exploit the spectrally resolved Outgoing Longwave Radiation (SR-OLR) dataset [Whitburn, 2020] derived from the Infrared Atmospheric Sounding Interferometer (IASI) on board the Metop-B and -C platforms to directly evaluate the radiative impact of the eruption, with a specific focus on H2SO4. A hyperspectral range index (HRI) is developed to detect and map the IASI pixels affected by H2SO4 in the Southern Hemisphere during the first six months following the eruption. The SR-OLR corresponding to these pixels is then compared to a reference database of SR-OLR without the signature of H2SO4, built from the years before the eruption and representative of the diverse scenes (in terms of surface and atmospheric parameters) observed within the Earth-atmosphere system. This approach enables us to derive the longwave direct radiative effect (LW-DRE) of H2SO4. Although many studies already exist on the amounts of SO2 and H2SO4 released and on their radiative impact [Schoeberl, 2024; Sellitto, 2024; Sicard, 2024], this is the first time that the radiative effect of the Hunga Tonga eruption is evaluated directly from the SR-OLR, without relying on any forward model. These results provide a valuable alternative for assessing the radiative impact of volcanic eruptions. References: Schoeberl, M. R., Wang, Y., Taha, G., Zawada, D. J., Ueyama, R., & Dessler, A. (2024). Evolution of the climate forcing during the two years after the Hunga Tonga‐Hunga Ha'apai eruption.
Journal of Geophysical Research: Atmospheres, 129, e2024JD041296. https://doi.org/10.1029/2024JD041296 Sellitto, P., Siddans, R., Belhadji, R., Carboni, E., Legras, B., Podglajen, A., et al. (2024). Observing the SO2 and sulfate aerosol plumes from the 2022 Hunga eruption with the Infrared Atmospheric Sounding Interferometer (IASI). Geophysical Research Letters, 51, e2023GL105565. https://doi.org/10.1029/2023GL105565 Sicard, M., Baron, A., Ranaivombola, M., Gantois, D., Millet, T., Sellitto, P., Bègue, N., Payen, G., Marquestaut, N., Duflot, V. (2023). Radiative impact of the Hunga Tonga-Hunga Ha’apai stratospheric volcanic plume: role of aerosols and water vapor in the southern tropical Indian Ocean. https://doi.org/10.22541/essoar.170231679.99186200/v1 Whitburn, S., Clarisse, L., Bauduin, S., George, M., Hurtmans, D., Safieddine, S., Coheur, P., Clerbaux, C. (2020). Spectrally Resolved Fluxes from IASI Data: Retrieval Algorithm for Clear-Sky Measurements. Journal of Climate, 33. https://doi.org/10.1175/JCLI-D-19-0523.1 Zhu, Y., Bardeen, C.G., Tilmes, S. et al. (2022). Perturbations in stratospheric aerosol evolution due to the water-rich plume of the 2022 Hunga-Tonga eruption. Commun Earth Environ 3. https://doi.org/10.1038/s43247-022-00580-w Zuo, M., Zhou, T., Man, W. et al. (2022). Volcanoes and Climate: Sizing up the Impact of the Recent Hunga Tonga-Hunga Ha’apai Volcanic Eruption from a Historical Perspective. Adv. Atmos. Sci. 39. https://doi.org/10.1007/s00376-022-2034-1
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Quantifying the Semi-Direct Effect of Aerosols on Clouds Using Satellite Observations and Advanced Time Series Analysis

Authors: Gabriela Ciocan, Silke Gross, Mihai Dima, Anca Nemuc
Affiliations: National Institute of Research and Development for Optoelectronics - INOE 2000, Remote Sensing Department, University of Bucharest, Faculty of Physics, Institut für Physik der Atmosphäre, Deutsches Zentrum für Luft- und Raumfahrt (DLR)
Aerosols significantly influence Earth's climate by interacting with clouds and altering the planet's radiative balance. These effects occur through direct interactions with solar radiation, indirect modifications to cloud properties, and semi-direct effects, where aerosol absorption impacts local radiative and atmospheric dynamics. According to the IPCC Fourth Assessment Report (2007), among these the semi-direct effect remains a key uncertainty in understanding global radiative forcing. Absorbing aerosols can reduce low-cloud coverage by heating the surrounding air, decreasing relative humidity, and suppressing surface moisture fluxes, thereby amplifying warming and altering regional climate patterns (Johnson et al., 2004). This study investigates these mechanisms by using high-resolution satellite observations to analyse aerosol-cloud interactions on both global and regional scales. Satellite data from the Moderate Resolution Imaging Spectroradiometer (MODIS, https://modis.gsfc.nasa.gov/about/ ) aboard the Terra (https://terra.nasa.gov/ ) and Aqua (https://aqua.nasa.gov/ ) platforms are central to this analysis. MODIS provides global, near-daily coverage of key parameters, including aerosol optical depth (AOD), cloud fraction, and surface air temperature. The Multi-Angle Implementation of Atmospheric Correction (MAIAC, https://ladsweb.modaps.eosdis.nasa.gov/missions-and-measurements/science-domain/maiac/ ) AOD product (http://dx.doi.org/10.5067/MODIS/MCD19A2N.NRT.061 ) offers aerosol measurements at a 1 km resolution, essential for capturing spatial variations in aerosol loading. Cloud fraction data (http://doi.org/10.5067/MODIS/MOD06_L2.NRT.061, http://doi.org/10.5067/MODIS/MYD06_L2.NRT.061), measured at 2 km imagery resolution, provide insights into cloud coverage and its evolution over time.
Surface air temperature data (doi: 10.5067/RAEHAOH4VZM5) from the Atmospheric Infrared Sounder (AIRS, https://airs.jpl.nasa.gov/ ) complement these datasets, offering 32 km/pixel resolution and twice-daily coverage, crucial for understanding temperature variations linked to aerosol dynamics. To ensure consistency across datasets, all parameters are resampled to match the spatial and temporal resolutions of the Aqua/AIRS surface air temperature product. Specifically, AOD and cloud fraction data are aggregated to a 32 km/pixel visualization, with analyses beginning on August 30, 2002, to align with the temporal coverage of the temperature dataset. This pre-processing ensures compatibility between datasets and enables reliable cross-parameter comparisons. The methodology integrates these harmonized datasets to compute monthly means, global anomalies, and time series correlations. Key analyses include cross-correlation between aerosols and temperature, temperature and cloud coverage, and aerosols and clouds, quantifying the degree and lag of their association over time. This allows the study to determine not only the strength but also the temporal order of these interactions. Autocorrelation analysis explores the temporal persistence of each variable by assessing how strongly a parameter, such as AOD, correlates with its past values at various time lags. This provides insights into underlying patterns or cycles within the data, which can indicate processes such as seasonal variations or longer-term trends. Scaled correlation methods focus on high-frequency segments of the datasets, examining the relationships between variables on shorter timescales. These analyses are particularly useful for capturing rapid changes or localized interactions, such as those triggered by transient meteorological events or episodic increases in aerosol emissions. The scaling ensures that even subtle interactions are captured, adding granularity to the analysis.
Initial results demonstrate the relationship between increasing aerosol loadings and surface temperature anomalies, with subsequent reductions in cloud fraction, consistent with the semi-direct effect. By leveraging satellite data, this study not only underscores the significant role of aerosols in cloud dynamics but also highlights the capabilities of remote sensing in addressing uncertainties in aerosol-cloud interactions. Acknowledgements: This work is supported by the Romanian Core Program within the National Research Development and Innovation Plan 2022-2027, carried out with the support of MCID, project no. PN 23 05, and by the COST (European Cooperation in Science and Technology) under the Action CA21119 HARMONIA (International network for harmonization of atmospheric aerosol retrievals from ground-based photometers).
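The lagged cross-correlation analysis described above can be sketched as a Pearson correlation evaluated at integer offsets; the synthetic shifted pair below is illustrative of how a lead-lag relation is detected:

```python
import numpy as np

def lagged_corr(x, y, max_lag):
    """Pearson correlation between x and y at integer lags.

    A positive lag means x leads y by `lag` samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=200)                     # e.g. an AOD anomaly series
y = np.concatenate([np.zeros(2), x[:-2]])    # y is x delayed by 2 samples
corrs = lagged_corr(x, y, max_lag=5)
best_lag = max(corrs, key=corrs.get)         # lag of maximum association
# Autocorrelation is the same computation with y = x: lagged_corr(x, x, 5)
```

The lag of the correlation maximum (here, +2) gives the temporal order of the interaction, which is what the study uses to separate cause-like from response-like behaviour between aerosols, temperature, and clouds.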
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Detection of Polar Stratospheric Clouds With IASI

Authors: Manon Hermans, Lieven Clarisse, Daniel Hurtmans, Catherine Wespes, Cathy Clerbaux, Pierre Coheur
Affiliations: Université libre de Bruxelles (ULB), Brussels Laboratory of the Universe (BLU-ULB), Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing (SQUARES), LATMOS/IPSL, Sorbonne Université, UVSQ, CNRS
Polar Stratospheric Clouds (PSCs) play a key role in the formation of the Antarctic ozone hole. They contribute to the formation of reactive halogen species (mainly chlorine) and lead to the denitrification and dehydration of the stratosphere, which prevents neutralization of these active chlorine species. PSCs, composed of nitric acid, sulfuric acid, and water, form when temperatures are sufficiently low, i.e., during the winter inside the polar vortex at altitudes between 15 and 25 km. Space-based observations of PSCs are critical for better characterizing their abundance, formation and spatiotemporal variations, as demonstrated by the recent observations from the limb sounders MIPAS and ACE, and the lidar sounder CALIOP. While their measurements offer excellent vertical characterization, they suffer from poor spatial coverage. Here, we report the first IASI observations of the most important and abundant PSCs, type Ia PSCs, which are composed of nitric acid trihydrate (NAT) particles. We analyse in detail a selection of IASI spectra and show the unambiguous presence of the characteristic spectral signature of NAT particles in the infrared. Using a hyperspectral range index (HRI) specifically developed for the South Pole region (below 50°S), we further demonstrate that PSCs can be observed systematically in the polar winter and obtain detailed daily detection maps. With respect to the physicochemical mechanisms at play in the Antarctic stratosphere, we show a remarkable consistency between the 2008-2023 time series of detected PSCs and the time series of temperature and nitric acid. In addition, during the winter months, we show a clear anticorrelation at short time scales between the detection of PSCs and the abundance of nitric acid (or temperature). Interannual variability is also briefly investigated and discussed. Finally, we present the first steps towards the development of a quantitative product.
Overall, our results emphasize the remarkable capabilities of nadir hyperspectral sounders to monitor stratospheric polar processes. Further insights into the chemistry at play during the polar night will undoubtedly come with IASI-NG, owing especially to its improved radiometric performance.
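A hyperspectral range index of this kind is commonly built as a covariance-whitened projection of the observed spectrum's departure from a background mean onto the target signature. The sketch below illustrates that construction on synthetic spectra; the dimensions, regularization, and random signature are assumptions, not the authors' NAT-specific index:

```python
import numpy as np

def hri(background, signature, y):
    """Hyperspectral range index of spectrum y against a background set.

    background : (n_obs, n_channels) signature-free reference spectra
    signature  : (n_channels,) target spectral signature (e.g. NAT)
    y          : (n_channels,) observed spectrum"""
    mean_bg = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])      # ridge term for stable inversion
    w = np.linalg.solve(cov, signature)     # cov^-1 @ signature
    # Projection of (y - mean) onto the whitened signature, normalized so
    # that background-like spectra score near zero
    return float(w @ (y - mean_bg) / np.sqrt(signature @ w))

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(500, 20))   # PSC-free spectra
signature = rng.normal(size=20)                     # target signature
clean = background.mean(axis=0)
contaminated = clean + 0.5 * signature
```

The index is zero for the background mean and grows with the amplitude of the signature in the observation, which is what makes a fixed detection threshold workable.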
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: New dust detection algorithm for EarthCARE MSI over ocean

Authors: Gregor Walter, Dr Anja Hünerbein, Dr Nicole Docter, Dr Sebastian Bley
Affiliations: Leibniz Institute for Tropospheric Research
Following the launch of the EarthCARE (Earth Clouds, Aerosols and Radiation Explorer) satellite in May 2024, the Multi-Spectral Imager (MSI) supplements the vertical information retrieved from the Atmospheric Lidar (ATLID) and the Cloud Profiling Radar (CPR) instruments by providing information on cloud and aerosol properties in the cross-track direction. MSI features a 150 km swath and a spatial resolution of 500 m across its four solar and three thermal channels. A cloud detection algorithm serves as the basis for deriving micro- and macrophysical properties of clouds and aerosol optical thickness for non-cloudy pixels. However, the current cloud mask algorithm does not distinguish between clouds and aerosols, potentially leading to misclassifications of thick aerosol layers over ocean originating from dust storms, wildfires, or volcanic eruptions. As dust is the most prevalent aerosol type in the atmosphere, the development of a robust dust detection algorithm is crucial to improve cloud-aerosol discrimination. Due to MSI's differing instrument properties, it is not possible to rely on existing dust detection algorithms developed for other imaging sensors. This study introduces a dust detection algorithm over ocean for MSI based on a random forest (RF) classification model. RF is a machine-learning approach that constructs multiple decision trees, whose outcomes are aggregated to generate accurate and reliable predictions. In advance of MSI data availability, a precursor dust detection algorithm was developed with Moderate Resolution Imaging Spectroradiometer (MODIS) data, using channels similar to those of MSI and adjusted to match MSI's swath dimensions. Individual reflectances, spectral tests (e.g., split-window techniques) and spatial variability tests are used as inputs. The RF model was tested with additional MODIS scenes and compared against the MOD35 dust detection product.
Results indicate comparable performance between the RF model and MOD35, with the RF model correctly identifying more dust pixels while producing fewer misclassifications of non-dust pixels, despite relying on fewer channels. Further validation using five months of Aqua data ensured the model's robustness. Due to the sensitivity of dust detection in the thermal infrared (TIR) and the differences in TIR response functions between MSI and MODIS, adjustments were made to the TIR channels before applying the RF model to real MSI data. A case study of a dust storm outbreak captured by MSI demonstrates the model's capability and highlights its potential for operational use.
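A random-forest pixel classifier of the kind described (reflectances, split-window tests, and spatial variability as inputs) can be sketched with scikit-learn on synthetic data. The feature names, label rule, and dataset below are illustrative assumptions, not the authors' MODIS/MSI training set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Hypothetical per-pixel inputs mirroring the feature types named above:
# a solar-channel reflectance, an 11-12 um split-window BT difference,
# and a local spatial standard deviation.
refl = rng.uniform(0.0, 0.4, n)
btd = rng.normal(0.0, 1.0, n)
spatial_std = rng.uniform(0.0, 2.0, n)
# Synthetic labels: dust lowers the split-window BTD and raises reflectance
dust = ((btd < -0.5) & (refl > 0.15)).astype(int)

X = np.column_stack([refl, btd, spatial_std])
X_tr, X_te, y_tr, y_te = train_test_split(X, dust, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Averaging many decision trees is what gives the RF its robustness to individual noisy features, which matches the abstract's observation that it outperforms a fixed-threshold product while using fewer channels.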
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: The Connection Between the TOA LW Cloud Radiative Effect and Cloud Properties, Using ATLID and BBR Instruments on board EarthCARE

Authors: Aiten Alava-Baldazo, Olivier Chomette, Professor Hélène Chepfer
Affiliations: LMD/CNRS
To verify the cloud feedback mechanisms simulated in climate models using observations, it is essential to first establish a clear connection between fundamental cloud variables and the cloud radiative effect (CRE) at the top of the atmosphere (TOA). This allows any change in a cloud variable to be directly linked to a corresponding change in the CRE at the TOA. Specifically, to decompose the short-term longwave (LW) cloud feedback into contributions from different cloud variables, we must ensure that the LWCRE can be accurately retrieved from key cloud variables observed by lidar. Vaillant de Guélis et al. (2017) demonstrated that the LWCRE can be retrieved from five cloud properties observed by lidar, which are linearly associated with the LWCRE: opaque cloud altitude (ZOpaque), optically thin cloud altitude (ZThin), opaque cloud cover (COpaque), optically thin cloud cover (CThin), and thin cloud emissivity (εThin). In this work, we propose to use ATLID observations to retrieve the LWCRE following the method of Vaillant de Guélis et al. (2017) and to compare the results with collocated Broadband Radiometer (BBR) data to establish the link between the LWCRE and the five cloud properties, so that the long-term data record can then be analyzed to decompose cloud feedback into its components. References: Vaillant de Guélis, T., Chepfer, H., Noel, V., Guzman, R., Dubuisson, P., Winker, D. M., and Kato, S.: The link between outgoing longwave radiation and the altitude at which a spaceborne lidar beam is fully attenuated, Atmos. Meas. Tech., 10, 4659–4685, https://doi.org/10.5194/amt-10-4659-2017, 2017
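The linear link between the LWCRE and the five lidar-observed cloud properties can be illustrated with a least-squares fit on synthetic data. The coefficients, predictor grouping, and value ranges below are illustrative assumptions, not those of Vaillant de Guélis et al. (2017):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# The five lidar-observed cloud properties named in the abstract
z_opaque = rng.uniform(2.0, 12.0, n)    # opaque cloud altitude (km)
z_thin = rng.uniform(5.0, 15.0, n)      # thin cloud altitude (km)
c_opaque = rng.uniform(0.0, 1.0, n)     # opaque cloud cover
c_thin = rng.uniform(0.0, 1.0, n)       # thin cloud cover
eps_thin = rng.uniform(0.0, 1.0, n)     # thin cloud emissivity

# Synthetic LWCRE (W m^-2) assumed linear in two grouped predictors,
# with noise standing in for radiometer uncertainty
lwcre = (2.0 * c_opaque * z_opaque
         + 1.5 * c_thin * eps_thin * z_thin
         + rng.normal(0.0, 1.0, n))

# Recover the linear coefficients by least squares, as one would when
# linking BBR-derived LWCRE to ATLID cloud properties
A = np.column_stack([c_opaque * z_opaque,
                     c_thin * eps_thin * z_thin,
                     np.ones(n)])
coef, *_ = np.linalg.lstsq(A, lwcre, rcond=None)
```

Once such coefficients are established from collocated observations, a change in any single cloud property translates directly into a change in LWCRE, which is exactly the decomposition of cloud feedback the abstract targets.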
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Advanced cloud products from NASA’s PACE mission and their relevance for other missions

Authors: Bastiaan Van Diedenhoven, Andrew Sayer, Chamara Rajapakshe, Andrzej Wasilewski, Brian Cairns, Mikhail Alexandrov, Daniel Miller, Zhibo Zhang, Kenneth Sinclair, Kirk Knobelspiesse, Otto Hasekamp, Brent McBride, Vanderlei Martins
Affiliations: SRON Netherlands Institute for Space Research, NASA Goddard Space Flight Center, NASA Goddard Institute for Space Studies, University of Maryland, Baltimore County
The sensitivities of cloud properties to changes in the climate and to anthropogenic aerosol emissions are crucial for understanding Earth’s climate but remain highly uncertain. Global cloud observations from satellites are needed to advance our knowledge of processes related to the formation and evolution of clouds and precipitation. While long-term satellite data records of cloud microphysical properties exist, largely obtained by multi-spectral imagers, they are known to be substantially biased or to fail in particular situations, such as in regions of broken and/or mixed-phase clouds. The cloud products provided by NASA’s Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission, which was launched on 8 February 2024, have several advantages over past missions. PACE carries the Ocean Color Instrument (OCI), which is a multi-spectral imager, the Hyper-Angular Rainbow Polarimeter (HARP-2) and the Spectropolarimeter for Planetary Exploration (SPEXone). Advanced, pixel-level cloud microphysical products are produced from the polarimeters, including cloud top phase and full droplet size distributions, while collocated retrievals are provided by OCI using more traditional methods. Instrument-synergy products include liquid water path and droplet number concentrations. We present the first global advanced cloud products from PACE. We present validation using airborne campaigns which indicates that the polarimetry products are much less affected by the presence of broken and mixed-phase clouds than OCI observations, consistent with previous studies using simulations and observations. These observations provide new insights into the microphysical properties of global clouds, including their drop size distribution width and bi-modality, which may be linked to precipitation formation.
Furthermore, we show that the polarimeter retrievals, along with OCI’s unique combination of three commonly used shortwave infrared wavelength bands, allow us to assess some of the biases in traditional bi-spectral retrievals in unprecedented detail and on a global scale. We show that the biases in bi-spectral results depend on cloud structure and on the wavelength used for the droplet size retrievals. The PACE data provide crucial information to reduce biases in traditional bi-spectral cloud retrievals by essentially all multi-spectral imagers in the program of record (including EarthCARE’s MSI) that result from, e.g., sub-pixel cloudiness, mixed-phase cases and 3D radiative transfer effects. We make recommendations on how biases in bi-spectral results may be mitigated.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Atmospheric ice mass retrievals with the Arctic Weather Satellite

Authors: Peter Mcevoy, Patrick Eriksson, Eleanor May, Lara Leko
Affiliations: Chalmers University of Technology
Atmospheric ice hydrometeors significantly affect the Earth's radiation budget and hydrological cycle. Despite their critical role, there is significant uncertainty in atmospheric ice hydrometeor mass among climate models, highlighting the need for larger quantities of more precise data. Currently, there are two feasible methods of retrieving atmospheric ice hydrometeor mass from space-borne systems. The first method involves using Earth observation satellites, such as CloudSat and EarthCARE, equipped with profiling radars capable of quantifying water and ice content in a vertical slice through the atmosphere with high vertical resolution. However, these vertical profiles are limited to nadir observations along the satellite path. The second method, exemplified by the Chalmers Cloud Ice Climatology (CCIC), employs machine-learning techniques on infrared (IR) measurements from geostationary satellites to retrieve the ice hydrometeor mass. In the case of CCIC, the machine-learning model is trained on IR data with corresponding CloudSat-retrieved ice mass. The Arctic Weather Satellite (AWS), launched in August 2024, is the first satellite in operation to enable a new method for retrieving ice mass. It is equipped with a cross-track-scanning radiometer that covers a wider area and is sensitive to frequencies in the 54 GHz, 82 GHz, and 183 GHz ranges, with the notable addition of sub-mm wavelength frequencies around 325 GHz. Simulations have shown that the sub-mm wavelength region is especially suitable for quantifying ice hydrometeor masses. This marks the debut of a space-borne radiometer sensitive to sub-mm wavelengths and represents a significant step into this regime, with future satellite missions set to include instruments sensitive to even higher frequencies. We present and discuss the results of ice hydrometeor mass retrievals and the development of a Level 2 product based on the new operational AWS data. 
These retrievals mark the first application of a methodology developed over the past few years to real sub-millimetre data. Additionally, we compare these retrievals to existing ones from sources such as EarthCARE and CCIC, highlighting differences that provide insights into their respective strengths and shortcomings. In a broader context, these insights and the retrieval methodology can be applied to data from future missions. For instance, the proposed EPS-Sterna constellation will consist of six satellites similar to AWS, in polar orbit over 13 years, and the upcoming Metop-SG-B2, which will operate for 21 years, carries the Ice Cloud Imager, a radiometer sensitive to frequencies from 325 GHz to 664 GHz. Together, these missions promise a wealth of sub-millimetre data that can be used to significantly improve the quantification of atmospheric ice hydrometeors.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Comparison of ATLID/EarthCARE and ICESat-2 ATLAS Cloud Measurements

Authors: Marius Dahuron
Affiliations: Laboratoire de Météorologie Dynamique
EarthCARE's ATLID is the most up-to-date instrument for global measurement of atmospheric profiles. Launched successfully in May 2024, it is expected to extend the record of spaceborne lidar cloud profiles, to provide a more complete picture of the inter-annual variability in the clouds’ vertical profile and possibly to observe cloud profile changes in response to climate warming. Unfortunately, the CALIPSO/CALIOP mission, launched in April 2006, stopped acquiring data in August 2023, before the ATLID launch. The temporal gap between ATLID and CALIOP thus precludes comparing ATLID and CALIOP measurements directly. ICESat-2, launched in September 2018, is a space lidar designed for surface altimetry purposes, not for atmospheric observation, but it flew at the same time as CALIPSO and also flies at the same time as ATLID. Therefore, the purpose of the current work is to explore the possibility of bridging the gap in cloud profile observation between CALIPSO and ATLID using ICESat-2 data. In this study, we compare ICESat-2 ATLAS observations (ATL09 Level 3 product) to ATLID measurements. We first consider precisely co-located and co-temporal acquisitions between ICESat-2 and ATLID and compare them as case studies to learn about fundamental differences in their attenuated backscatter profiles and in their cloud detection. Because the colocation windows are narrow (a few kilometres and less than tens of minutes), the observations are limited to the polar regions most of the time. We then consider statistical comparisons between the observations of the two instruments by relaxing the criteria on the spatio-temporal colocation in order to evaluate their differences over different cloud types all over the globe. The results of these comparisons will be presented during the conference.
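A narrow spatio-temporal colocation window of the kind described above can be sketched as a simple pairwise filter. The ~10 km / ~20 min thresholds below are illustrative placeholders, not the study's actual criteria:

```python
import math

def colocated(p1, p2, max_km=10.0, max_minutes=20.0):
    """Check whether two (lat_deg, lon_deg, time_minutes) acquisitions
    fall within an illustrative spatio-temporal colocation window."""
    lat1, lon1, t1 = p1
    lat2, lon2, t2 = p2
    # Haversine great-circle distance in km
    r = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    d = 2 * r * math.asin(math.sqrt(a))
    return d <= max_km and abs(t1 - t2) <= max_minutes
```

In practice such a filter is swept over both ground tracks; relaxing `max_km` and `max_minutes` yields the looser statistical comparison described in the abstract.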

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Advanced Aerosol Retrievals with RemoTAP and PARASOL: Enhancing Understanding of Aerosol-Cloud Interactions

Authors: Piyushkumar Patel, Bastiaan van Diedenhoven, Otto Hasekamp, Guangliang Fu
Affiliations: SRON Netherlands Institute For Space Research
The precise retrieval of aerosol properties from satellite data is pivotal for advancing our understanding of their impacts on climate and air quality. The RemoTAP (Remote Sensing of Trace Gas and Aerosol Products) algorithm represents a significant leap forward, leveraging data from multi-angle polarimeters (MAPs) such as the past PARASOL-POLDER instrument, the current PACE-SPEXone and the future Metop-SG-3MI and CO2M-MAP instruments. The unique ability of these instruments to measure both the intensity and polarization of sunlight across multiple wavelengths and viewing angles offers an unparalleled dataset for aerosol characterization, including number concentrations, size distributions, and refractive indices. We have substantially enhanced the RemoTAP results by integrating improved cloud fraction values derived from MAPs using a neural network approach, ensuring more accurate aerosol retrievals through better cloud filtering techniques. To further elevate data quality, advanced quality filters utilizing multiple key metrics were developed, effectively enhancing data integrity and resulting in a more refined aerosol dataset essential for precise atmospheric analysis. The validation of these enhancements involved comparisons with ground-based AERONET (Aerosol Robotic Network) observations over 284 sites, demonstrating the reliability of RemoTAP-derived aerosol properties. Furthermore, a pixel-level cross-comparison was carried out with GRASP-derived PARASOL-based aerosol data, as RemoTAP and GRASP are similar kinds of algorithms for polarimetric measurements. The scientific implications of these advancements are profound, as the improved retrieval of aerosol size and composition using advanced polarimetric observations directly refines the estimation of cloud condensation nuclei (CCN) proxy concentrations and consequently the global CCN-Nd (cloud droplet number concentration) relationship. 
This refined relationship is crucial for understanding aerosol-cloud interactions, allowing for more accurate quantification of aerosol-induced cloud albedo changes, thereby reducing uncertainties in radiative forcing estimates due to aerosol-cloud interactions (RFaci). Such improvements contribute to a more precise representation of aerosol impacts in climate models, ultimately enhancing predictions of climate sensitivity and future warming scenarios. By advancing the RemoTAP algorithm, our findings underscore the transformative potential of these methodologies in delivering accurate and reliable aerosol climatology, driving forward the frontier of atmospheric science and climate research.
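The CCN-Nd relationship is commonly summarised as a power law, Nd ≈ C·CCN^β, fitted in log-log space. A minimal sketch under that assumption (the function name and synthetic data are illustrative, not from the study):

```python
import numpy as np

def fit_ccn_nd(ccn, nd):
    """Fit the power law Nd = C * CCN**beta by linear regression in
    log-log space; returns (C, beta)."""
    beta, log_c = np.polyfit(np.log(ccn), np.log(nd), 1)
    return float(np.exp(log_c)), float(beta)

# Illustrative synthetic data generated with C = 2.0, beta = 0.8
ccn = np.array([100.0, 200.0, 400.0, 800.0])  # CCN proxy concentration
nd = 2.0 * ccn**0.8                           # cloud droplet number conc.
c, beta = fit_ccn_nd(ccn, nd)
```

Improving the retrieved CCN proxy shifts the fitted β, which is exactly the lever on the RFaci estimates discussed above.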

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: WIMEX (Wave Interaction Models EXploitation)

Authors: Giancarlo Rivolta, Carla Orrù, Claudio Camporeale, Abdul Mujeeb, Maddalena Iesuè, Mehrez Zribi, Emna Ayari, Nicolas Baghdadi, Sami Najem, Juval Cohen, Jorge Jorge Ruiz, Dr. Juha Lemmetyinen, Aniello Fiengo, Francesca Ticconi, Davide Comite
Affiliations: ESA, Progressive Systems S.r.l., Toulouse III University, UMR TETIS, INRAE, University of Montpellier, Finnish Meteorological Institute, Sapienza University of Rome
Forward and inverse models developed by the Earth Observation (EO) scientific community describe, respectively, the relation between electromagnetic waves and the natural surfaces they interact with, and the methodologies for retrieving bio-geophysical variables from remotely sensed data. Yet the current landscape – marked by the development of an extensive and heterogeneous suite of models – reveals some limitations. These encompass models not systematically implemented; models tested and validated only on small amounts of data; and limited integration of the models with emerging Artificial Intelligence (AI)-based inversion techniques. In this paper, we introduce the Wave Interaction Models EXploitation (WIMEX) framework, developed in the frame of an ESA-funded project, which aims at addressing these challenges. Leveraging the unprecedented amount of EO data available today, the framework creates a systematic approach to the development, validation, and use of existing and future forward and inverse models for increased efficiency and flexibility. Unlike existing solutions, WIMEX is designed to be model-independent: it can accommodate, manage, and operate any forward or inverse model independently of sensor and model objectives, and supports the use of cloud infrastructure for improving performance. This versatility reinforces the framework's role in supporting existing and next-generation EO missions. Moreover, being designed to interface with different EO data sources (EO data, in situ data, ad-hoc datasets, etc.), it supports the design, development and validation of forward models, and the generation and storage of look-up tables and datacubes in a flexible way. It assists with the use of these outputs for testing and calibrating inverse models, through processes such as neural network training and use, as well as for performing sensitivity analyses across broad ranges of input parameters. 
It also facilitates the application of inverse models over extensive volumes of EO data, hence improving the statistics of the retrieved bio-geophysical variables and enhancing their interpretation. WIMEX also aims to boost the design of new and/or more accurate inverse models by seamlessly combining their implementation with AI-based inversion techniques. This added value helps improve the overall performance of the models and aligns with the evolving needs of the research community. The prototype version of WIMEX, set to be released in early 2025, demonstrates its efficacy and flexibility for remote sensing over land applications, specifically through the management and execution of forward and inverse models targeting soil moisture and snow water equivalent. In the near future, upcoming versions will incorporate support for additional sensors and variables, as well as diverse missions, to enhance user experience and address a wider spectrum of requirements.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Advanced Sentinel-3 Near Real Time (NRT) L2 aerosol capabilities: lessons learned and plans for the next years

Authors: Julien Chimot, Dr. Edouard Martins, Dr. Jaap Onderwaater, Dr. Dominika Leskow-Czyzewska, Dr. Bertrand Fougnie
Affiliations: EUMETSAT
As an operational user-driven Earth observation satellite agency, EUMETSAT is the reference European provider of Near Real Time (NRT, < 3 h from sensing time) Level 2 (L2) aerosol satellite observations, from a constellation combining both Low Earth Orbit (LEO), with Metop / Sentinel-3 and EPS-SG, and GEOstationary (MSG & MTG) platforms. Primary users are operational air quality and climate services. Notably, for several years, EUMETSAT has closely interacted with the Copernicus Atmosphere Monitoring Service (CAMS) and provided expertise to support the uptake of all its observations into the modelling and assimilation processes. With two multi-spectral optical sensors and observations acquired at high spatial resolution at 10:00, Sentinel-3 is the main Copernicus mission mandated by the European Commission (EC) to provide high-quality Aerosol Optical Depth (AOD) at global coverage during the morning overpass for the long term. As such, EUMETSAT has been mandated since 2014 by its Member States and Copernicus to develop, enhance and ensure the Copernicus NRT Sentinel-3 Aerosol product. This is the second European product delivering AOD, after Metop with PMAP (Polar Multi-sensor Aerosol optical Properties). Since 2020, it has been derived from the OSSAR-CS3 (Optimized Simultaneous Surface Aerosol Retrieval for Copernicus Sentinel-3) algorithm, jointly specified and developed by EUMETSAT scientific experts and the Swansea University team led by Prof. Dr. Peter North (Chimot et al., 2021). Furthermore, EUMETSAT works intensively with the European Centre for Medium-Range Weather Forecasts (ECMWF), exchanging the necessary expertise to support the future operational assimilation by CAMS, as done with PMAP. 
The evolved Baseline Collection 3 was released in November 2022 with major improvements covering: 1) a sophisticated classification mask (called Naïve probabilistic) leading to enhanced coverage over all waters, and better distinction of cloud, snow, dust / ash, and dark vs. inland waters; 2) a more accurate first guess of land vegetation reflectance; 3) improved constraints for bare soils in the dual-angular land model; and 4) a new Quality Indicator (QI) system integrated in the L2 product to further guide users. Following user requests, EUMETSAT has developed an extension of this Baseline, labelled Collection 3.1 and publicly released in July 2024, to address two main needs: to improve the handling of underlight scattering caused by Ocean Colour features in dark oceans by using the Sentinel-3 OLCI water reflectance, and to generate a multi-year time series to evaluate its stability and support the CAMS Reanalysis campaign. This presentation will demonstrate the major assets of the latest developments and an extensive set of validation results, including match-ups with ground-based AERONET and inter-comparisons of time series with MODIS/VIIRS from NASA and NOAA. The latest feedback and exchanges from the Sentinel-3 Validation Team (S3VT) atmosphere members, ECMWF/CAMS, and other aerosol users will also be reported. Based on lessons learned, EUMETSAT is now leading a major redesign of this algorithm, structured in two steps: first a Day-2 product solely based on SLSTR, then a Day-3 product intended as an NRT synergy (OLCI + SLSTR) based on the future Sentinel-3 NRT L1C under study and generated by EUMETSAT (De Bartolomei, 2014). As such, enhanced aerosol typing and further understanding of interaction with clouds are being investigated. This redesign also covers the development of Aerosol Layer Height (ALH) retrieval from the OLCI O2-A bands.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Airborne In-situ Measurements during JATAC/CAVA-AW 2021/2022 campaigns - Climate-Relevant Results: extinction coefficients and heating rate of mineral dust

Authors: Jesús Yus-Díez, Marija Bervida, Luka Drinovec, Uroš Jagodič, Blaž Žibert, Matevž Lenarčič, Eleni Marinou, Peristera Paschou, Nikolaos Siomos, Holger Baars, Ronny Engelmann, Annett Skupin, Cordula Zenk, Thorsten Fehr, Griša Močnik
Affiliations: University of Nova Gorica, Haze Instruments d.o.o., Aerovizija d.o.o, IAASARS, National Observatory of Athens, Meteorological Institute, Ludwig Maximilian University of Munich (LMU), Leibniz Institute for Tropospheric Research, Ocean Science Centre Mindelo, GEOMAR Helmholtz Centre for Ocean Research, ESA/ESTEC
The JATAC/CAVA-AW campaign took place on and above the Cape Verde Islands in September 2021 and September 2022 and has provided a vast dataset of in-situ and remote measurements. In addition to the calibration and validation (Cal/Val) of ESA’s Aeolus ALADIN, the campaign also featured secondary scientific objectives related to the effects of atmospheric aerosol particles on climate change. The atmosphere above the Atlantic Ocean off the coast of West Africa is ideal for the study of the Saharan Aerosol Layer (SAL), the long-range transport of dust, and the regional influence of SAL aerosols on climate and on weather patterns, such as the formation of tropical storms. We performed 10 instrumented flights with a light aircraft (Advantic WT-10), which enabled in-situ aerosol characterization. The flights were conducted over the Atlantic Ocean, up to heights over 3000 m a.s.l., during two intense dust transport events. In addition, two surface-based lidars, PollyXT and EvE, were deployed at the Ocean Science Center; they helped plan the flights and provided measurements of the vertical optical properties of aerosols. The particle light absorption coefficient was determined at three different wavelengths with filter-based Continuous Light Absorption Photometers (CLAP). These were calibrated against the dual-wavelength photo-thermal interferometric (PTAAM-2, Haze Instruments d.o.o.) measurement of the aerosol light-absorption coefficient in the laboratory. The particle size distributions above 0.3 µm diameter were measured with two Grimm 11-D Optical Particle Size Spectrometers (OPSS). These measurements were conducted separately for the fine aerosol fraction and the enriched coarse fraction using an isokinetic inlet and a pseudo-virtual impactor, respectively. The aerosol light scattering and backscattering coefficients were measured at three different wavelengths with a polar integrating nephelometer Aurora 4000. 
The instrument used an independent isokinetic inlet and was calibrated prior to the campaign, and its calibration was validated after the campaign with CO2 as a span gas. The total and diffuse solar irradiance was measured with a Delta-T SPN1 pyranometer. During the flights, additional variables such as the CO2 concentration, temperature, aircraft GPS position and altitude, and air and ground speed were also measured. The in-situ single-scattering albedo Angstrom exponent (SSAAE) and the lidar depolarization ratio will be compared as two independent parameters indicating the presence of Saharan dust particles. We will show differences between Saharan dust layers that are homogeneous in space (horizontally and vertically) and time, and events featuring strong horizontal gradients in aerosol composition and concentration and layering in the vertical direction. These layers are often less than 100 m thick, separated by layers of air without the influence of mineral dust. The in-situ measured aerosol light extinction coefficient will be compared to the remotely measured ones (ground lidars and ALADIN). Complex mixtures of aerosols in the outflow of Saharan dust over the Atlantic Ocean in the tropics will be characterized. We will show the in-situ atmospheric heating/cooling rate and provide insight into the regional and local effects of this heating of the dust layers. Furthermore, we will provide a first comparison of measured vs. modelled heating rates using a radiative transfer model. These results will help improve research on the evolution, dynamics, and predictability of tropical weather systems, provide input to models, and help with their verification.
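The SSAAE used above as a dust indicator is defined from the spectral slope of the single-scattering albedo, SSAAE = -d ln(SSA)/d ln(λ). A two-wavelength sketch (the function name and SSA values are illustrative, not campaign data):

```python
import math

def ssaae_two_wavelengths(ssa1, ssa2, wl1, wl2):
    """Single-scattering albedo Angstrom exponent between two wavelengths:
    SSAAE = -ln(SSA1/SSA2) / ln(wl1/wl2).
    Negative SSAAE (SSA increasing with wavelength) indicates mineral dust,
    since dust absorbs more strongly at shorter wavelengths."""
    return -math.log(ssa1 / ssa2) / math.log(wl1 / wl2)

# Dust-like example: SSA rising from 0.88 at 450 nm to 0.95 at 635 nm
value = ssaae_two_wavelengths(0.88, 0.95, 450.0, 635.0)
```

With more than two wavelengths the exponent is usually obtained from a log-log fit rather than a single pair.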

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Comparisons of cloud top heights derived from lidar and radar observations and levels of neutral buoyancy

Authors: Seiji Kato, David Duda, Kuan-Man Xu, Seung-hee Ham, Sunny Sun-Mack, Yan Chen, Walter Miller
Affiliations: NASA Langley Research Center, Analytical Mechanics Associates Inc
The structure of convective clouds and the processes within them are complex. A simplified view of a convective cloud is based upon parcel theory, which assumes that a parcel of air originating in the boundary layer is lifted adiabatically to the level of neutral buoyancy (LNB). How actual convective cloud and anvil cloud top heights differ from the LNB has an important implication for the radiation budget, because cloud top temperature determines outgoing longwave radiation. Unlike cloud top heights derived from passive sensors, cloud top heights derived from Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO, Winker et al. 2009) and CloudSat (Stephens et al. 2008) observations are derived directly from their signals, independently of temperature profiles. Therefore, CALIPSO/CloudSat cloud top heights are independent of errors in temperature and hence more accurate than those derived from passive sensors. Cloud top heights of deep convective clouds derived from lidar and radar observations are analyzed as a function of convective available potential energy (CAPE) and compared with the LNB. Observed cloud top heights increase with CAPE. The mean difference from the LNB, weighted by the number of samples, for CAPE greater than 1000 J kg-1 is 0.4±0.15 km, with the LNB being lower. The 0.4 km lower LNB leads to a +5 W m-2 outgoing longwave irradiance bias if the LNB is used to compute the irradiance. The cause of the difference is unknown. Doppler velocity provided by the cloud profiling radar of EarthCARE can be used to investigate whether overshooting of the convective core contributes to the difference.
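The order of magnitude of the quoted +5 W m-2 bias can be checked with back-of-envelope blackbody arithmetic; the ~220 K anvil-top temperature and 6.5 K/km lapse rate below are assumptions for illustration, not values from the study:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m-2 K-4

def olr_bias(cloud_top_k=220.0, dz_km=0.4, lapse_k_per_km=6.5):
    """Blackbody estimate of the change in outgoing longwave irradiance
    when the emitting level sits dz_km lower, i.e. warmer by
    lapse_k_per_km * dz_km."""
    t_warm = cloud_top_k + lapse_k_per_km * dz_km
    return SIGMA * (t_warm**4 - cloud_top_k**4)

bias = olr_bias()  # a few W m-2, the same order as the reported +5 W m-2
```

The estimate neglects cloud emissivity and atmospheric transmission, so it only brackets the magnitude rather than reproducing the reported value.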

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: High-Altitude Aerosols, Water Vapor and Clouds – a Canadian contribution to the NASA AOS mission

Authors: Dr Konstantin Baibakov, Adam Bourassa, Jean-Pierre Blanchet, Doug Degenstein, Yacine Bouzid, Patrick Boyle, Cassandra Bolduc, Chris McLinden, Joel Bedard, Jason Cole, Zen Mariani, Mengistu Wolde, Natalia Bliankinshtein, Kaley Walker, Yi Huang, Jeff Langille, Yann Blanchard, Landon Rieger, Marko Adamovic, Jamie Sevigny
Affiliations: Canadian Space Agency, University of Saskatchewan, UQAM - University of Quebec at Montreal, Environment and Climate Change Canada, National Research Council of Canada, University of Toronto, McGill University, University of New Brunswick
The NASA Atmosphere Observing System (AOS) is a planned mission that will provide critical measurements for clarifying aerosol and cloud processes that drive extreme weather events and climate change. Canada is contributing the High-Altitude Aerosol Water Vapor and Clouds (HAWC) mission to AOS, with a planned launch date of 2031. HAWC will include a zenith-looking far infrared radiometer (TICFIRE) on NASA AOS-Sky, as well as a limb-viewing aerosol spectral imager (ALI) and an imaging spatial heterodyne spectrometer (SHOW) on the HAWCsat Canadian spacecraft. TICFIRE, operating at kilometer resolution in the 4-73 µm spectral range, will provide key information on the vertically resolved microphysical properties, horizontal distribution and radiative impacts of thin ice clouds, and will contribute to radiative closure verification. ALI will measure aerosol extinction and particle polarization in several spectral bands between 610 and 1560 nm from the mid-troposphere to approximately 35 km altitude. SHOW will measure limb-scattered radiation in a vibrational band of water to provide the vertical distribution of water vapour with sensitivity from 30 km down to several kilometers below the tropopause. The instruments will operate in synergy among themselves as well as with other AOS instruments to provide a unique perspective on aerosol, cloud and water vapour properties, with a focus on climate-relevant upper-troposphere/lower-stratosphere (UTLS) processes. The paper will provide an overview of HAWC and its main scientific objectives as well as the latest updates on the mission Phase 0/A development activities. It will also review linkages to other international missions (e.g. EarthCARE, FORUM, PREFIRE) and will discuss Science and Applications pre-launch activities, including modelling and data assimilation work, as well as PONEX (POlar Night EXperiment) – the first polar night-centric HAWC sub-orbital campaign, scheduled for early 2026 in the High Canadian Arctic.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Machine Learning Arctic Clouds Identification and Characterization With IASI Spectra: Comparison With Ground-Based and IASI L2 Products

Authors: Federico Donat, Michele Martinazzo, Tiziano Maestri, Guido Masiello
Affiliations: University of Basilicata, University of Bologna
Cloud radiative properties are key for the Earth's energy budget, whose uncertainty is markedly related to cloud mischaracterization and failure to detect optically thin clouds. In particular, the satellite-based detection and characterization of Arctic and Antarctic clouds is challenging because of the similar radiative properties of the cloud top and the ice-covered surface, and because of the clouds' temperature-driven tendency to present low optical depths. These conditions are especially critical for methods based on brightness temperature differences rather than on complex inter-channel spectral features. In the challenging context of the Greenland interior (Summit Camp), two independent algorithms are trained with full IASI spectra. The first, developed by the University of Bologna and based on a supervised machine learning technique (the Cloud Identification and Classification algorithm), is used to identify cloudy scenes and classify the thermodynamic phase. The second, developed at the University of Basilicata, is based on Cumulative Discriminant Analysis and provides a cloud mask. The annotation of the training spectra is performed exploiting a small portion of an extensive (2010-2021) dataset of colocated ground-based products, derived from lidar backscattering and depolarization data (doi:10.18739/A26W96979), comprising cloud presence, cloud phase, optical depth, and cloud bottom height. The performances of these satellite cloud products are then evaluated with various indices aimed at assessing the correlation with the spatio-temporally colocated ground products; particular regard is given to the role of cloud optical depth and height in the quality of the satellite classifications. These classifications are also compared to the colocated IASI L2 products, which provide a global cloud mask to be validated in such extreme conditions. 
This local training of machine learning algorithms is aimed at showing that, even in the Greenland interior, IASI spectra contain the necessary information to obtain cloud products consistent with surface measurements in terms of cloud detection and characterization. These classifications will also be used to inform the retrieval of microphysical parameters with the radiative transfer inversion scheme δ-IASI.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: An ML-Based Perspective on Spatio-Temporal Patterns of Convective Organisation

Authors: Sarah Brüning, Dr. Holger Tost
Affiliations: Johannes Gutenberg University Mainz
Convective clouds play a primary role in the hydrological cycle of the Earth. Especially in the tropics, we observe convective clouds forming into extensive spatially connected (organised) structures. These mesoscale convective systems (MCSs) are prolific producers of severe weather and can be associated with future changes in precipitation extremes. However, uncertainties persist about the strength and spatio-temporal variability of convective organisation. The complexity of objectively defining convective organisation further contributes to this challenge. In response, various indices have been designed to quantify the organisational state of the atmosphere. In this study, we focus on deriving regional characteristics of convective organisation and their connection to convective cloud microphysics. The goal is to deepen the understanding of the regional drivers of convective organisation. Our area of interest covers a region from 30 °W to 30 °E and 30 °N to 30 °S, i.e. the region close to the nadir position of the MSG SEVIRI sensor. Throughout the year, convective cells typically develop in this region. We focus on a period between boreal spring and summer following the northward shift of the Inter-Tropical Convergence Zone (ITCZ). To detect convective clouds and approximate the degree of convective organisation, we employ 3D radar reflectivities derived from a machine learning (ML) based extrapolation of passive (MSG SEVIRI) and active (CloudSat) 2D remote sensing sensors. The data provides a simultaneous perspective on horizontal and vertical processes of convective development. We use three organisation indices (COP, SCAI, ROME) (a) to compare the indices' performance and (b) to examine the relationship between convective cloud development and large-scale organisation. An object-based algorithm detects convective core and anvil regions within the predicted 3D radar reflectivities for each time step. 
We label these cloud objects and link them in time to create contiguous 4D trajectories following the cloud movement in three dimensions. Then, we calculate the three indices to characterise the degree of organisation at each point in time. Our results comprise an evaluation of derived regional statistics for convective organisation and the diagnosed characteristics of the analysed systems. We identify regional hotspots of convective organisation over the Gulf of Guinea, continental West Africa, and the Atlantic Ocean. Long-lasting cloud systems with high convective activity, such as MCSs, occur frequently within these hotspots. The seasonality of convective cloud development leads to a slight increase in organisation of about 5 % in summer. For instance, the landmass distribution and the influence of extratropical dynamics in the southern hemisphere may promote a considerably higher variability than in the northern hemisphere. Over the ocean, the organisation indices (COP, SCAI) are about 5 – 10 % higher than over land. In summary, our results emphasise the importance of regional characteristics for assessing convective organisation. Moreover, combining the information of multiple remote sensing instruments may deliver profound insights into convective organisation, which may prove beneficial for an enhanced climate risk assessment. However, we detect overlapping effects of isolated and clustered convection occurring in the same region that may affect our statistics. Disentangling their impact calls for an adapted index.
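Of the three indices, COP has the simplest closed form: the mean, over all pairs of convective objects, of the summed effective radii divided by the centroid separation (White et al. 2018). A minimal sketch for labelled cloud objects (the inputs are illustrative, not study data):

```python
import math
from itertools import combinations

def cop(areas, centroids):
    """Convective Organisation Potential: mean over all object pairs of
    (sqrt(A_i/pi) + sqrt(A_j/pi)) / d_ij, where A is object area and
    d_ij is the distance between object centroids."""
    terms = []
    for i, j in combinations(range(len(areas)), 2):
        d = math.dist(centroids[i], centroids[j])
        terms.append((math.sqrt(areas[i] / math.pi)
                      + math.sqrt(areas[j] / math.pi)) / d)
    return sum(terms) / len(terms)

# Two unit-radius cells (area pi) with centroids 4 km apart -> COP = 0.5
example = cop([math.pi, math.pi], [(0.0, 0.0), (4.0, 0.0)])
```

Larger COP means bigger and/or more closely spaced objects, i.e. a more organised scene; SCAI and ROME weigh object number and size differently, which is why the abstract compares all three.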

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Study of Aerosol Cloud Interaction in the Eastern Mediterranean: Long-term lidar Observations over Cyprus

Authors: Rodanthi Elisavet Mamouri, Argyro Nisantzi, Maria Poutli, Athina Savva, Hossein Panahifar, George Kotsias, Constantinos Chrysostomou, Diofantos Hadjimitsis, Albert Ansmann
Affiliations: ERATOSTHENES Centre of Excellence, Cyprus University of Technology, Faculty of Engineering and Technology, Cyprus, Leibniz Institut für Troposphärenforschung, Leipzig, Germany
A promising approach to studying cloud evolution, with a focus on heterogeneous ice nucleation, is the use of ground-based remote sensing. Continuous vertical profiling of aerosols, altocumulus, and cirrus layers using lidars and radars provides a detailed and coherent means of examining these processes. Located in the heart of the eastern Mediterranean, Cyprus provides an ideal setting for atmospheric and climate research, particularly in studying cloud and precipitation formation. This research focuses on the impact of both natural aerosols, such as desert dust, soil dust, and marine particles, and anthropogenic aerosols, including urban haze and biomass burning smoke, on these processes. In this study, we discuss four years of continuous observations with the PollyXT lidar of the Cyprus Atmospheric Remote Sensing Observatory (CARO). CARO is a key research infrastructure for atmospheric studies in Cyprus hosted by the ERATOSTHENES Centre of Excellence. This facility features two advanced containers equipped with a multiwavelength lidar, a Doppler lidar, a cloud radar, and radiometric instruments, which are used to monitor air quality, dust transport, and cloud properties over Cyprus. Long-range aerosol transport events observed over Limassol will be utilized to investigate the role of aerosol particles in cirrus formation in the upper troposphere. The simultaneous occurrence of aerosol layers together with intense cirrus features is a strong sign that particles were serving as the dominant ice-nucleating particles (INPs). Acknowledgements: The authors acknowledge the ‘EXCELSIOR’: ERATOSTHENES: EΧcellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project (www.excelsior2020.eu). 
The ‘EXCELSIOR’ project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No 857510, from the Government of the Republic of Cyprus through the Directorate General for the European Programmes, Coordination and Development and the Cyprus University of Technology. This study was supported by the ATARRI Horizon Europe Widespread Twinning Project. ATARRI receives funding from the European Union’s Horizon Europe Twinning Call (HORIZON-WIDERA-2023-ACCESS-02) under the grant agreement No 101160258.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Scale Invariance of Cloud Size Spectra to Near Planetary Scales

Authors: Thomas Dewitt, Karlie Rees, Tim Garrett
Affiliations: University Of Utah, Department of Atmospheric Sciences
One fundamental way to characterize the atmosphere is by examining how its dynamics vary across different scales. This scale dependence can be explored observationally by analyzing more accessible metrics, such as the size distribution and reflectivity of clouds. Notably, many of these atmospheric properties follow power-law relationships with respect to the spatial or temporal scales over which they are measured. Such power-law functions are termed "scale invariant" because the observed patterns are unchanged when the measurement scale is changed. Spatial scale invariance has typically been studied only up to scales of order 1000 km, largely due to limitations in the size of observational domains. To address this, we leverage GEO-RING, a novel dataset that integrates geostationary satellite data to provide zonally continuous imaging of the Earth. The much larger spatial coverage of GEO-RING enables us to extend the analysis to near-planetary scales. Our results reveal that cloud properties maintain scale invariance across these expanded spatial scales. Moreover, we observe that cloud reflectivity structure functions exhibit scale-invariant behavior not just in space but also in time. The scale invariance of cloud properties suggests that the underlying atmospheric dynamics driving these patterns are also scale invariant. This insight is significant because scale invariance implies that the atmosphere can be modeled using a unified framework, avoiding the need to invoke unique dynamical mechanisms for each scale. To contextualize our findings, we apply the concept of universal multifractals, a theoretical framework well-suited to describe such scaling behaviors in complex systems.
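A power-law size spectrum can be checked with a straightforward log-log fit of the binned distribution. The sketch below is a generic illustration of that idea, not the GEO-RING analysis itself; the binning choices and the regression estimator are assumptions (maximum-likelihood estimators are more robust in practice):

```python
import numpy as np

def powerlaw_slope(sizes, bins=20):
    """Estimate the power-law exponent of a cloud-size distribution
    by linear regression in log-log space (illustrative only)."""
    sizes = np.asarray(sizes, dtype=float)
    # logarithmically spaced bins so each decade is sampled evenly
    edges = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), bins + 1)
    counts, _ = np.histogram(sizes, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centres
    density = counts / widths                   # normalise by bin width
    mask = density > 0                          # drop empty bins before taking logs
    slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(density[mask]), 1)
    return slope
```

Scale invariance then corresponds to a single slope holding across the full range of sizes, which is what extending the domain to near-planetary scales tests.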

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Validation and Comparison of Preliminary Retrievals of Clouds and Aerosols from Flexible Combined Imager (FCI) Data using the Optimal Retrieval of Aerosol and Cloud (ORAC) Algorithm

Authors: Daniel Robbins, Gareth Thomas, Simon Proud, Dora Hegedus, Adam Povey, Roy Grainger, Elisa Carboni
Affiliations: STFC - RAL Space, NCEO, European Space Agency ESTEC, NCEO University of Leicester, NCEO University of Oxford
Geostationary imagers provide high temporal resolution data over the geographical sector they observe. The most recent generation of geostationary imagers offer increased spatial and temporal resolution compared with previous generations, allowing for higher resolution retrievals of clouds and aerosols using existing algorithms. These higher resolution products offer potentially improved solar radiation and air quality information through resolving smaller-scale clouds and tracking short-lived aerosol events. With the release of Flexible Combined Imager (FCI) data from the Meteosat-12 satellite, the benefits of higher resolution geostationary imagers can now be realised for Europe and Africa. The Optimal Retrieval of Aerosol and Cloud (ORAC) algorithm is a community algorithm that uses optimal estimation to retrieve the optical properties of clouds and aerosols, as well as cloud top height and temperature information. As ORAC is designed to be as generalised as possible, it has been applied to a range of passive sensors. This includes routinely running ORAC on the Sea and Land Surface Temperature Radiometer (SLSTR) instruments on-board Sentinel-3a and -3b, as well as the Spinning Enhanced Visible Infra-Red Imager (SEVIRI) on-board the Meteosat Second Generation (MSG) series of satellites. ORAC has also been applied to the Advanced Himawari Imager (AHI), a new generation geostationary imager, on-board the Himawari-8 geostationary satellite. Therefore, ORAC can be applied to FCI data to offer improved cloud and aerosol products over Europe and Africa. This presentation will show early results of applying ORAC to FCI data, to produce aerosol, cloud and radiative flux products using the full temporal resolution of FCI. Comparisons of both ORAC cloud and aerosol products against L2 FCI products from EUMETSAT will be presented, as well as a validation of ORAC aerosol retrievals using AErosol RObotic NETwork (AERONET) data as truth. 
In addition, initial validation and comparison of FCI results to collocated observations from SEVIRI will be presented.
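Optimal estimation, the formalism underlying ORAC, iteratively minimises a cost combining the measurement misfit and the deviation from an a priori state. A single Gauss-Newton step can be sketched as follows; this is a generic illustration of the Rodgers-style update, not ORAC's actual code, and all function and variable names are placeholders:

```python
import numpy as np

def oe_step(x, y, forward, jacobian, x_a, S_a, S_y):
    """One Gauss-Newton step of an optimal-estimation retrieval.

    Minimises J = (y - F(x))^T Sy^-1 (y - F(x)) + (x - x_a)^T Sa^-1 (x - x_a).
    `forward` and `jacobian` stand in for the radiative-transfer model
    and its derivatives. Returns the updated state and its posterior
    covariance.
    """
    K = jacobian(x)                       # Jacobian dF/dx at the current state
    Sy_inv = np.linalg.inv(S_y)
    Sa_inv = np.linalg.inv(S_a)
    # posterior covariance of the retrieved state
    S_hat = np.linalg.inv(K.T @ Sy_inv @ K + Sa_inv)
    # Gauss-Newton update toward the cost-function minimum
    dx = S_hat @ (K.T @ Sy_inv @ (y - forward(x)) - Sa_inv @ (x - x_a))
    return x + dx, S_hat
```

For a linear forward model a single step from the prior reaches the analytic maximum a posteriori solution; for realistic cloud/aerosol retrievals the step is iterated to convergence.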

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Improving MAJA’s cloud mask on water surfaces by leveraging Machine Learning and Deep Learning methods for Sentinel-2

Authors: Hélène Savatier-Dupré, Clément Deschene, Matthieu Denisselle, Chloé Thenoz, Hervé Yesou, Jérôme Maxant, Emilie Mangold, Santiago Peña Luque, Olivier Hagolle, Sophie Coustance
Affiliations: Magellium, ICUBE-SERTIT, Centre National d'Etudes Spatiales
This abstract presents a study of machine learning and deep learning methods to produce a robust and precise cloud segmentation algorithm. The motivation of this study is to improve MAJA’s [1] current cloud detection over water. This work is carried out with CNES funding. Cloud cover poses a significant challenge to processing data from space-based optical remote sensing instruments. Since clouds can obstruct the ground, totally or partially, and cloud shadows modify its radiometry, they impede automatic use of the acquired data for downstream applications. Consequently, cloud detection is an essential pre-processing step for atmospheric correction and for remote sensing-based climate studies, such as water quality. Although it has been an active research field for decades, handling clouds and their shadows remains a complex and not fully resolved issue due to the diversity of scenarios, including varying levels of occlusion, cloud types, and a broad spectrum of ground surface signatures prone to confusion. CNES/CESBIO, in collaboration with the German Space Agency (DLR), has developed the MAJA processing chain for atmospheric correction, which performs cloud detection as a preliminary step. The methodology differs depending on whether pixels are land or water. For emerged land, MAJA employs “expert” thresholding on reflectance levels and normalized indices. Its specificity is a multi-temporal analysis that builds a cloud-free composite image, compared against the processed image to help detect clouds, which gives good results in recall. However, over water bodies, single-date radiometric and cirrus-band thresholding is applied, as the high temporal variability of water reflectance hinders leveraging temporal consistency, producing non-optimal cloud masks as well as poor transitions between land and water. 
In addition, three key limitations in MAJA for water applications are identified: the detection of small clouds, the detection of thin clouds (especially over high-altitude surfaces) and sunglint. Consequently, we argue that MAJA's performance over water can be greatly improved. Recently, machine learning and deep learning methodologies [2], [3], [4], [5], [6] have shown promising results, handling confusing surfaces at high spatial resolution and with limited computational resources. This work explores the contribution of machine learning and deep learning methods to cloud and cloud shadow detection over Sentinel-2 water areas compared with MAJA’s current results, and analyses the best compromise between result quality and computational resources. Moreover, we are keen to provide a measure of the masks’ uncertainty to give the user an idea of the relevance and quality of our model. The detection of cloud shadows, directly related to cloud detection, is also an important issue, as shadows affect the radiometry of the images, which can be critical for water analysis. The problem of cloud shadow detection is not systematically addressed in the algorithms proposed in the literature. To tackle this problem, we studied two approaches: integrating cloud shadow as a class of the segmentation map, letting the model learn the relevant features, or jointly using spectral analysis to pre-detect dark pixels and geometric techniques to project cloud shadows from sun angles, the satellite viewing geometry, and hypotheses on cloud height and thickness. To train and validate our algorithms, we exploit various data sources. On the one hand, for model training and global validation, we take advantage of the CloudSEN12 dataset [7], which offers an open-source, diverse and large dataset of Sentinel-2 tiles with their manually labeled masks, including water areas. 
Exogenous data such as atmospheric conditions from CAMS [8] or a Digital Elevation Model are integrated to bring more context. In addition, CloudSEN12 provides the JRC Global Surface Water masks, which make it possible to retrieve water pixels and concentrate the learning on water surfaces. On the other hand, we collaborate with water surface experts for dataset completion and local analysis of our algorithms’ performance. A quantitative and qualitative evaluation is conducted over specific scenes that they have selected and manually labelled to cover challenging water conditions such as high-altitude lakes, salt seas, turbid water, highly dynamic lakes, tropical water and coasts. Preliminary results [9] already show better performance than what MAJA has offered so far. Moreover, we expect to reach state-of-the-art performance, i.e. to outperform comparable methods such as S2CloudLess [2], while remaining computationally competitive. [1] O. Hagolle, M. Huc, C. Desjardins, “MAJA Algorithm Theoretical Basis Document”. [2] E. O. Research, “Cloud Masks at Your Service,” Sentinel Hub Blog. Accessed: Sep. 20, 2024. [Online]. Available: https://medium.com/sentinel-hub/cloud-masks-at-your-service-6e5b2cb2ce8a [3] D. López-Puigdollers, G. Mateo-García, and L. Gómez-Chova, “Benchmarking Deep Learning Models for Cloud Detection in Landsat-8 and Sentinel-2 Images,” Remote Sens., vol. 13, no. 5, Art. no. 5, Jan. 2021, doi: 10.3390/rs13050992. [4] M. Domnich et al., “KappaMask: AI-Based Cloudmask Processor for Sentinel-2,” Remote Sens., vol. 13, no. 20, Art. no. 20, Jan. 2021, doi: 10.3390/rs13204100. [5] Y. Tian, S. Pang, and Y. Qu, “Fusion Cloud Detection of Multiple Network Models Based on Hard Voting Strategy,” in IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, Jul. 2022, pp. 6646–6649. doi: 10.1109/IGARSS46834.2022.9883485. [6] X. Yao, Q. Guo, and A. 
Li, “Light-Weight Cloud Detection Network for Optical Remote Sensing Images with Attention-Based DeeplabV3+ Architecture,” Remote Sens., vol. 13, no. 18, Art. no. 18, Jan. 2021, doi: 10.3390/rs13183617. [7] C. Aybar et al., “CloudSEN12, a global dataset for semantic understanding of cloud and cloud shadow in Sentinel-2,” Sci. Data, vol. 9, p. 782, Dec. 2022, doi: 10.1038/s41597-022-01878-2. [8] “Home | Copernicus.” Accessed: Nov. 27, 2024. [Online]. Available: https://atmosphere.copernicus.eu/ [9] Hélène Savatier-Dupré and Chloé Thenoz, “Cloud Detection with Deep Neural Networks from Multi-Temporal Sentinel-2 imagery,” in IAC 2024 Proceedings, Milan, Italy, Oct. 2024.
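The geometric shadow-projection approach mentioned above displaces cloud pixels along the anti-solar direction by a distance set by cloud height and sun zenith angle. A minimal sketch, purely illustrative and not the study's implementation: the azimuth convention (degrees clockwise from north, image rows increasing southwards) is an assumption, and a real implementation would also account for the satellite viewing angle and terrain height:

```python
import numpy as np

def project_shadow(cloud_rows, cloud_cols, cloud_height_m,
                   sun_zenith_deg, sun_azimuth_deg, pixel_size_m=10.0):
    """Project cloud pixels to their expected shadow positions from
    solar geometry. The shadow falls on the anti-solar side of the cloud."""
    # horizontal shadow displacement on the ground, in metres
    d = cloud_height_m * np.tan(np.radians(sun_zenith_deg))
    az = np.radians(sun_azimuth_deg)
    # anti-solar direction components (north/east)
    d_north = -d * np.cos(az)
    d_east = -d * np.sin(az)
    # rows decrease northwards, columns increase eastwards
    shadow_rows = np.asarray(cloud_rows) - d_north / pixel_size_m
    shadow_cols = np.asarray(cloud_cols) + d_east / pixel_size_m
    return shadow_rows.round().astype(int), shadow_cols.round().astype(int)
```

Intersecting these projected positions with the spectrally pre-detected dark pixels yields candidate shadow masks for a range of cloud-height hypotheses.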

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Comparison of Airborne In-situ Measurements with LIDAR measurements during JATAC/CAVA-AW 2021/2022 campaigns

Authors: Marija Bervida Mačak, Jesús Yus-Díez, Luka Drinovec, Uroš Jagodič, Blaž Žibert, Matevž Lenarčič, Sangita Gautam, Eleni Marinou, Peristera Paschou, Nikolaos Siomos, Holger Baars, Ronny Engelmann, Annett Skupin, Athina Augusta Floutsi, Cordula Zenk, Thorsten Fehr, Griša Močnik
Affiliations: University Of Nova Gorica, Haze Instruments d.o.o., Aerovizija, IAASARS, National Observatory of Athens, Leibniz Institute for Tropospheric Research, Ocean Science Centre Mindelo, GEOMAR Helmholtz Centre for Ocean Research, ESA/ESTEC
The JATAC campaigns conducted in September 2021/2022 on and above the Cape Verde Islands resulted in an extensive dataset of in-situ and remote measurements. The main aim was to calibrate and validate ESA’s Aeolus satellite equipped with the ALADIN lidar, with secondary objectives related to climate change. Constraining remote sensing data with in-situ observations is critical for proper characterization and accurate description of the atmospheric 3-D structure. We present the results obtained from an instrumented light aircraft (Advantic WT-10) set up for in-situ aerosol measurements. Twenty-seven flights were conducted during intense dust transport events over the Atlantic Ocean at altitudes up to about 3000 m above sea level. The aerosol light extinction coefficient was obtained at three different wavelengths as a combination of the absorption coefficients determined using Continuous Light Absorption Photometers (CLAP) and the scattering coefficients measured with an Ecotech Aurora 4000 nephelometer. The particle size distributions above 0.3 µm diameter were measured with two Grimm 11-D Optical Particle Size Spectrometers (OPSS). In addition, CO2 concentration, temperature, aircraft GPS position and altitude, and air and ground speed were also measured. We compare the airborne in-situ measurements of the aerosol extinction coefficient with the space-borne ALADIN lidar measurements, as well as with the ground-based eVe and PollyXT lidar measurements, which overlapped in space and time. The comparison is performed for the closest available wavelengths, with the in-situ measurements extrapolated to match those of the lidar systems. Overall, we find an underestimation of the extinction coefficient derived from the lidars compared to the in-situ measurements. The regression analysis indicates a good relationship between the in-situ data and both the ground-based lidars and the space-borne ALADIN lidar. 
The regression slopes of the ground-based lidars, PollyXT and eVe, against the in-situ measurements range from 0.69 to 0.94, with R² between 0.74 and 0.92. The fit of the ALADIN lidar against the in-situ data has a slope of 0.85 and an R² of 0.5. We discuss the major factors which may affect the comparison, such as the spatial and temporal resolution of the measurements and their limitations for comparison purposes, as well as the effects of clouds on the comparison. The presented results underline the importance of comparing remote sensing with in-situ measurements in support of research on the evolution, dynamics, and predictability of tropical weather systems, and they provide input to, and verification of, climate models.
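The slope and R² statistics quoted above come from ordinary least-squares regression of collocated values; a minimal sketch of such a matchup analysis (illustrative, not the authors' processing chain):

```python
import numpy as np

def compare_extinction(insitu, lidar):
    """Least-squares slope, intercept and R^2 between collocated
    in-situ and lidar-derived extinction coefficients."""
    insitu = np.asarray(insitu, dtype=float)
    lidar = np.asarray(lidar, dtype=float)
    slope, intercept = np.polyfit(insitu, lidar, 1)
    pred = slope * insitu + intercept
    ss_res = np.sum((lidar - pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((lidar - lidar.mean()) ** 2)  # total sum of squares
    return slope, intercept, 1.0 - ss_res / ss_tot
```

A slope below 1 with high R² then indicates a systematic low bias of the lidar retrievals relative to the in-situ reference, as reported here.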

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Effect of aerosol optical scattering and absorbing properties on the Urban heat island intensity during summertime in Rome, Italy

Authors: Monica Campanelli, Annalisa Di Bernardino, Stefania Argentini, Giampietro Casasanta, Andrea Cecilia, Margherita Erriu, Tiziano Maestri, Anna Maria Siani, Masahiro Momoi, Henri Dièmoz, Victor Estelles, Gaurav Kumar, Stefano Casadio
Affiliations: CNR-ISAC Tor Vergata, Sapienza University of Rome, University of Rome Tor Vergata, University of Bologna, GRASP-SAS, ARPA VdA, University of Valencia, SERCO Italia
In 2023, the Italian project “uRban hEat and pollution iSlands inTerAction in Rome and possible miTigation strategies” (RESTART) was funded by the Italian Ministry for University and Research for two years. RESTART aims to explore the interaction between the Urban Heat Island (UHI) and the Urban Pollution Island (UPI) in Rome (Italy), providing a series of mitigation strategies, including tailored Nature-Based Solutions, and ready-to-use guidelines for the improvement of well-being and liveability in urban environments. The city of Rome (Lat. 41.90 °N, Lon. 12.54 °E) is the most populous and most extensive Italian city and the third most densely populated metropolis in Europe. Rome is located in the central region of the Italian Peninsula, about 27 km inland from the Tyrrhenian coast. Due to its position in the middle of the Mediterranean Basin and the complex orography of its surroundings, the city is frequently subjected to the advection of Saharan dust in the case of persistent southerly winds, and to the sea breeze regime from the southwest, the latter particularly evident during summertime under anticyclonic conditions. In recent years, the city has experienced significant atmospheric warming and a substantial intensification of extreme weather events, such as heat waves (HW), tropical nights, and droughts. Within the objectives of this project, an exploratory study was performed on the possible connections between the columnar aerosol optical properties Aerosol Optical Depth (AOD), Single Scattering Albedo (SSA) and Angstrom exponent (Ang), obtained by the SKYNET network, and the Urban Heat Island intensity (UHII) in Rome in summer 2022. The wavelength dependence of this impact within the region 340-1020 nm was analysed during daytime and nighttime (only AOD and Ang during nighttime) and during HW events. 
A relationship between the clear-sky (cloud-free) aerosol columnar optical properties and the UHII in Rome was found. In particular: i) AOD over the entire summer period was generally anti-correlated with UHII, with the anti-correlation increasing during HW events; during nighttime the correlation was positive; ii) Ang, calculated from the 340 and 500 nm wavelengths, was also anti-correlated; iii) SSA and UHII were always correlated, with a weaker correlation during HW events. The obtained results look physically reasonable. Aerosols, by scattering and absorption, reduce the direct solar irradiance reaching the surface and can thereby contribute to a reduction of the air temperature at the ground. A mechanism which possibly explains the daytime anti-correlation is that the greater the AOD, the larger the contribution to the temperature reduction and the smaller the UHII. Concerning Ang, the smaller the particles in the atmosphere (larger Ang values), the more isotropic the diffuse radiation field, which means a reduction of the total sunlight reaching the surface and smaller values of UHII, consistent with the anti-correlation. Finally, large values of SSA mean less absorbing aerosol in the atmosphere and more radiation reaching ground level, which is consistent with a correlation between UHII and SSA. But how strong is this radiative impact compared with the meteorological effects that can either favour or inhibit the development of the UHI while also changing the aerosol properties? To tackle this question, broad-band radiative transfer models (Rstar [1] and Mstrn [2]) will be run to retrieve the radiative forcing of the aerosol with the optical properties measured and retrieved by the SKYNET network at the Rome site. 
Other ancillary measurements necessary as input to the models (e.g., water vapour and aerosol vertical profiles and total ozone amount) will be obtained from real measurements (if available) or from climatological studies. The results from this study will be presented in this work. References: [1] Nakajima, T., and M. Tanaka, 1986: Matrix formulations for the transfer of solar radiation in a plane-parallel scattering atmosphere. J. Quant. Spectrosc. Radiat. Transfer, 35, 13–21, https://doi.org/10.1016/0022-4073(86)90088-9. [2] Sekiguchi, M., and T. Nakajima, 2008: A k-distribution-based radiation code and its computational optimization for an atmospheric general circulation model. J. Quant. Spectrosc. Radiat. Transfer, 109, 2779–2793.
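The Angstrom exponent used in the study follows directly from AOD at two wavelengths via the standard formula Ang = -ln(AOD1/AOD2)/ln(lambda1/lambda2); a minimal sketch with the 340/500 nm pair analysed here (the function name and defaults are illustrative):

```python
import numpy as np

def angstrom_exponent(aod1, aod2, wl1_nm=340.0, wl2_nm=500.0):
    """Angstrom exponent from AOD at two wavelengths.
    Larger values indicate smaller particles."""
    return -np.log(aod1 / aod2) / np.log(wl1_nm / wl2_nm)
```

For aerosol whose AOD scales as wavelength to the power -1, the function returns 1 by construction, which is a convenient sanity check.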

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Feasibility study for a new climate indicator “aerosol and cloud cooling”

Authors: Thomas Popp, Stefan Kinne, Dr Marta Luffarelli
Affiliations: Deutsches Zentrum Für Luft- Und Raumfahrt (dlr), Rayference
Since pre-industrial times anthropogenic aerosol (through direct and indirect effects) has slowed climate warming attributed to anthropogenic greenhouse gases. Within the ESA CLIMATE-SPACE cross-ECV project SATACI (SATELLITE OBSERVATIONS TO IMPROVE OUR UNDERSTANDING OF AEROSOL-CLOUD INTERACTIONS), global, long-term satellite data records will be used to demonstrate the feasibility of a method to derive a new climate indicator which enables monitoring the cooling offset due to (anthropogenic) aerosols and (aerosol-modified) clouds. This new indicator would complement the existing WMO climate indicators based on off-line (dual-call) two-stream radiative transfer simulations. Baseline optical aerosol properties are taken from the MACv3 aerosol climatology, which is tied to multi-annual ground-based statistics from sun-/sky photometry and derived aerosol type contributions. Aerosol indirect effects can be included based on statistically sound associations between relevant aerosol and cloud properties. In this setup, key aerosol properties (i.e. AOD, AODc, AODf, AAOD) and key aerosol/cloud associations (i.e. AODf vs CDNC, AODf vs CAL, AODf vs LWP, AODf vs CRE for low-altitude clouds) will be replaced step by step with CCI / Copernicus Climate Change Service (C3S) (and other, such as CM-SAF) satellite retrievals. Outputs are global monthly maps of aerosol-impact-associated radiative effects at the top of the atmosphere (TOA). For an initial demonstration of the climate indicator, MODIS aerosol retrievals over the last two decades and model-based anthropogenic contributions by decade since 1850 will be applied. Uncertainties and diversity between different satellite datasets will be assessed by using different satellite data records for each variable and through uncertainty propagation of those satellite inputs through the radiative transfer code. The new climate indicator will be compared with TOA radiative forcing estimated by climate models. 
This feasibility study aims at providing an initial demonstration of a cooling indicator, assessing its potential by exploiting the value of global, consistent, multidecadal satellite records, and identifying its limitations, such as diversity and uncertainties. To ease communication, a simple parameterization (similar to that of the latest IPCC report) to convert TOA radiative effect changes into an equivalent surface temperature change (0.7 W/m2 ≈ 1 °C) will be tested. This paper will discuss results of the first phase of the project, which consolidates service requirements (e.g. consistency of long-term records and between different variables) and assesses the validation status of candidate satellite data records.
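The quoted parameterization maps a TOA radiative-effect change to an equivalent surface temperature change through a single scale factor; a sketch of the stated 0.7 W/m2 ≈ 1 °C rule (purely illustrative, not the project's method, and the function name is an assumption):

```python
def equivalent_temperature_change(delta_flux_wm2, sensitivity_c_per_wm2=1.0 / 0.7):
    """Convert a TOA radiative-effect change (W/m^2) into an equivalent
    surface temperature change (degrees C) using the simple linear
    parameterisation quoted in the abstract (0.7 W/m^2 ~ 1 C)."""
    return delta_flux_wm2 * sensitivity_c_per_wm2
```

Applied to the monthly TOA maps, this would translate the aerosol/cloud cooling offset into an intuitive temperature-equivalent figure.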

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: A.08.06 - POSTER - Ocean Extremes and multiple stressors events

Since the beginning of the industrial era, the oceans have absorbed about 90% of the excess heat in the Earth system and about 30% of the anthropogenic carbon. This buffering capacity has led to global ocean warming, sea level rise, increasing surface acidification, and loss of oxygen from the surface down to 1000 m. One of the most visible effects of the ongoing climate crisis is the increase in the frequency and severity of Extreme Climate Events, e.g. tropical cyclone winds and rainfall, extreme waves, extreme acidification, and marine heatwaves. This stress on the marine environment has a tremendous impact on the many organisms living in the Ocean. In particular, the impact of Extreme Climate Events on marine life is expected to be very high, as such events provide little opportunity for organisms to adapt. As such, there is a need to deepen monitoring, assess ocean preconditioning, and anticipate the ocean response to extreme events and compound multiple-stressor events. This session welcomes contributions demonstrating the role of Remote Sensing data, in synergy with other observations (in-situ, airborne, etc.) and models, to better observe, monitor and predict Extreme and multi-stressor events over the Oceans and to further understand how they impact the marine physical and biogeochemical environment and the marine ecosystems. Studies investigating the occurrence, phasing, interplay and impact of compound events are also welcome. The session is also open to contributions demonstrating how EO-derived products can be used to support management actions aiming at mitigating the impact of extreme and multiple-stressor events on the marine environment.


Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: From MHW detection to Impact Assessment Use Cases

Authors: Dr. Ana Oliveira, Dr. Emanuele Organelli, Dr. Alcide di Sarra, Inês Girão, Beatriz Lopes, Mr. Rui Baeta, Dr. Elena Principato, Rita
Affiliations: CoLAB +Atlantic, CNR-ISMAR, ENEA
As global ocean temperatures continue to rise, Marine Heat Waves (MHWs) have become more widespread, threatening marine ecosystems and their services for food provision, livelihoods and recreation. Detecting and predicting the occurrence, intensity and duration of these extreme events, and understanding their impacts on marine ecosystems, are key steps towards developing science-based solutions for sustainable development. The project "deteCtion and threAts of maRinE Heat waves - CAREHeat", funded by ESA in the framework of the Ocean Health initiative, aims at improving the current MHW detection and characterization methodology, as well as advancing the understanding of the corresponding impacts. Within CAREHeat, the Impact Assessment Use Cases have the final goal of evaluating the ecological and/or socio-economic impacts of MHWs and identifying the steps towards a comprehensive understanding of their effects for the development of effective mitigation strategies. The primary objectives include assessing the impacts of MHWs on two sectors, namely aquaculture and marine protected areas (MPAs), to propose actionable strategies to mitigate their adverse effects. This activity reflects the efforts of the consortium in evaluating the impacts of MHWs specifically related to two aquaculture-farmed species and several MPA-protected species. The work encompasses the following activities: • Consolidation of the user consultation activities as well as the scientific impact assessment analysis. • Scientific Impact Assessment Use Cases for each of the two main sectors (aquaculture and MPAs) and the corresponding species. This is to be achieved following a multidisciplinary approach encompassing ocean physics, biology, and data science. From the conducted work, several findings can be highlighted when looking at MHW societal impacts and Earth Observation (EO) user adoption: 1. 
Importance of EO and geospatial climate data products: in all cases, users, from those broadly consulted to the Early Adopters, show interest and willingness to adopt climate data in their operations, especially where local-specific indicators may affect their decision-making on short- or long-term activities; nonetheless, few are inclined to use geospatial products routinely, preferring time series and/or simpler indices for early warning, and lacking the capacity to critically choose the best-suited datasets and methods. Accordingly, the uptake of the CAREHeat MHWs Atlas (and of EO data overall) may benefit from further collaborative opportunities between Early Adopters and the scientific community, to test the relevance of these data in anticipating impacts. 2. Users’ understanding of MHWs: in all cases, users show some difficulty in grasping what MHWs are and what they mean, especially as they tend to confuse absolute temperatures with MHWs (i.e., they have some difficulty interpreting them as relative measures of deviation from normality). As a result, the dialogue with the Early Adopters must be maintained regularly, to ensure that the formulation of the use-case problems is scientifically sound but resonates with their specific needs. 3. Measuring MHW impacts: in all cases, the available biological data are much more limited than the CAREHeat MHW Atlas in terms of length, spatial coverage and consistency. This poses scientific challenges, since the statistical significance of the findings may be limited. Furthermore, the biological data from both the aquaculture and MPA Use Cases are collected from population groups that are not in an environmentally controlled (laboratory) setting, i.e., these samples do not have enough repeated measures to account for the various confounding factors (e.g., feedstock provider, other metocean parameters, pathologies) to establish strong/controlled univariate relationships. 
Hence, there is a real need to support continuous and consistent monitoring of biological data to serve as ‘sentinel’ data points for impact assessment studies to be pursued. Offering incentives for more users to collaborate by sharing their anonymised data could be an option moving forward, in order to overcome the current data scarcity constraints.
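MHW detection in this context typically follows the widely used Hobday et al. (2016) definition: SST exceeding a seasonally varying 90th-percentile climatological threshold for at least five consecutive days. A minimal sketch of the event-flagging step (illustrative only, not the CAREHeat Atlas code; computing the climatological threshold itself is assumed done):

```python
import numpy as np

def detect_mhw(sst, threshold, min_duration=5):
    """Flag marine heatwave events: runs of at least `min_duration`
    consecutive days with SST above a (possibly seasonally varying)
    percentile threshold. Returns (start, end) index pairs, end exclusive."""
    above = np.asarray(sst) > np.asarray(threshold)
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                       # a warm spell begins
        elif not flag and start is not None:
            if i - start >= min_duration:   # keep only long enough spells
                events.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_duration:
        events.append((start, len(above)))  # spell still running at series end
    return events
```

Event intensity and duration statistics, as used in the Atlas, follow directly from these index pairs and the SST anomaly within each event.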

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: An integration of satellite, in-situ and modelling data for monitoring Sea Surface Temperature in two contrasting lagoons

Authors: Nagendra Jaiganesh Sankara Narayanan, Andrew Tyler, Evangelos Spyrakos, Dr. Debora Bellafiore, Dr. Francesca De Pascalis, Dr. Michol Ghezzo, Dr. Irina Dinu, Dr. Georg Umgiesser, Adrian Stanica
Affiliations: University of Stirling, Institute of Marine Sciences of the National Research Council of Italy (CNR-ISMAR), GeoEcoMar
Sea Surface Temperature (SST) is a critical variable for understanding coastal and transitional ecosystems, but capturing its variability in dynamic environments such as lagoons remains challenging due to complex interactions at the land-sea interface. Advancements in satellite-based observations and innovations in computational modelling techniques have enhanced SST monitoring capabilities beyond in-situ networks. However, despite the availability of high spatial resolution sensors, gaps due to temporal limitations, cloud cover, and inaccuracies in estimation persist. Hydrodynamic models, on the other hand, require extensive hydrological and meteorological data, local knowledge, and assumptions, making their implementation in such dynamic environments equally challenging. To address these challenges, this study integrates satellite-based observations and hydrodynamic modelling to provide optimized SST estimates. The study focuses on the Venice Lagoon (Italy), a well-monitored transitional system facing severe threats from climate change and human activities, making it a critical site for studying their impacts, including from engineered interventions such as the MOSE flood protection gates. Four SST datasets were assessed against in-situ observations at six locations: (1) ESA CCI (European Space Agency Climate Change Initiative) Level 4 SST, which provides temporal completeness; (2) Landsat-8 TIRS (Thermal Infrared Sensor) derived SST from the Level 2 Standard Product; (3) Landsat-8 TIRS SST processed using the TACT (Thermal Atmospheric Correction Tool); and (4) SST estimates from the hydrodynamic model SHYFEM (Shallow water HYdrodynamic Finite Element Model). The results indicate that SHYFEM effectively captures spatial variations and reproduces temporal trends in the Venice Lagoon. However, SHYFEM showed a tendency to underestimate SST during winter, particularly in the early simulation phase.
The ESA CCI SST dataset exhibited homogeneous SST across all stations, making it less suitable for lagoon studies that require detailed spatial resolution. The Landsat-8 standard product captured spatial variations of SST but exhibited a consistent positive bias of 1–2 °C across all parts of the lagoon. The TACT product demonstrated an improvement, achieving a bias within ±1 °C for 95% of water pixels. The Landsat-8 thermal products (standard and TACT) also revealed small thermal features within the lagoon, primarily from industrial discharge, providing an opportunity to further refine the model prediction. Additionally, SHYFEM was applied with ERA5 meteorological forcings and demonstrated strong consistency, with validation against in-situ measurements yielding an R² > 0.95 and a MAPE of 7.82%. Based on this evidence, SHYFEM was applied to the less well-characterised Razelm-Sinoe Lagoon (Romania) using ERA5 inputs, with the Landsat-8 TACT SST serving as a reference for model validation. The model performed well, yielding results comparable to those observed in Venice. This framework highlights the synergy between satellite observations and hydrodynamic models, particularly for generating spatially and temporally resolved SST datasets. The insights gained demonstrate the value of satellite data for model validation; together, these approaches can provide the information needed to understand the benefits and impacts of climate change adaptation and management strategies in transitional ecosystems.
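The skill metrics quoted in this abstract (bias, RMSE, R², MAPE) are standard and can be computed in a few lines of NumPy. The helper below is only an illustration of those definitions, not the authors' code (the function name and inputs are assumptions):

```python
import numpy as np

def validation_stats(model, insitu):
    """Basic skill metrics for comparing modelled or satellite SST
    against in-situ observations (illustrative, not the authors' code)."""
    model = np.asarray(model, dtype=float)
    insitu = np.asarray(insitu, dtype=float)
    diff = model - insitu
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    # Coefficient of determination R^2 relative to the observed variance
    ss_res = (diff ** 2).sum()
    ss_tot = ((insitu - insitu.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    # Mean absolute percentage error, in percent (as quoted for SHYFEM)
    mape = 100.0 * np.mean(np.abs(diff / insitu))
    return {"bias": bias, "rmse": rmse, "r2": r2, "mape": mape}
```

Applied to collocated model and in-situ series, this reproduces the kind of figures quoted above (e.g. R² > 0.95, MAPE of a few percent).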
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Cyclones from Sentinel-2: three-dimensional reconstruction and wind speed using near simultaneous acquisitions

Authors: Marcello de Michele, Daniel Raucoules, Goneri Le Cozannet
Affiliations: French Geological Survey
Tropical cyclones (TCs) are among the most powerful and destructive natural phenomena on Earth, with significant impacts on human lives and economies. Characterized by intense winds, heavy rainfall, and storm surges, TCs can lead to catastrophic flooding and infrastructural damage. The increasing frequency and severity of these events, as observed in recent decades, underscore the urgent need for comprehensive understanding and effective mitigation strategies. The significance of studying cyclones extends beyond immediate disaster response. TCs play a crucial role in the global climate system, influencing atmospheric circulation patterns, ocean heat distribution, and precipitation regimes. Their interactions with sea surface temperatures (SSTs) and ocean heat content are particularly pertinent in the context of anthropogenic climate change. As global temperatures rise, alterations in oceanic and atmospheric conditions are anticipated to affect the genesis, intensity, and trajectories of TCs, potentially leading to more frequent rapid intensification events (RIEs) and higher peak wind speeds (Elsner and Jagger, 2008; Holland and Bruyère, 2014). In this context, developing accurate methods for measuring cyclone behaviour is imperative for enhancing models via data assimilation. Advanced observational platforms such as the Copernicus Sentinel-2 satellites can be used for cyclone monitoring by providing high spatial resolution (albeit low temporal resolution) data on atmospheric parameters such as the heights and wind speeds of a tropical cyclone. In this study, we propose to use the near-simultaneous acquisitions of Sentinel-2 to retrieve the three-dimensional height map of a cyclone as well as its wind speed map at high spatial resolution. We developed a method that uses inter-band image correlation as proposed in the past, with applications to volcanic clouds (e.g. de Michele et al., 2016).
In this new approach, we assume cylindrical symmetry for cloud heights and central symmetry (with respect to the TC centre) for the velocity field. On this basis, we propose an approach for extracting heights and velocities of the upper surface of a TC from a Sentinel-2 dataset. Some critical technical points arise, to which we propose solutions, since the inverse problem is underdetermined/ill-posed. We test our method on tropical cyclone Beryl, a Category 5 hurricane that impacted parts of the Caribbean, the Yucatán Peninsula and the Gulf Coast of the United States in late June and early July 2024. The first results show that the average error of the method (the mean absolute difference between simulated and restored values) is 306 m for heights and 4.1 m/s for velocities. We conclude that the method could be used to investigate the interplay between a cyclone's vertical structure and its wind speed dynamics. This work highlights the role of satellite technologies like the Copernicus Sentinel-2 as a new tool for cyclone research.
Keywords: 3D restitution of tropical cyclones, Copernicus Sentinel-2, image correlation, near-simultaneous acquisitions
References:
Hurricanes and Climate Change. Edited by J. B. Elsner and T. H. Jagger, Springer New York, NY, ISBN 978-0-387-09409-0, published 26 November 2008. https://doi.org/10.1007/978-0-387-09410-6
Holland, G., Bruyère, C.L. Recent intense hurricane response to global climate change. Clim Dyn 42, 617–627 (2014). https://doi.org/10.1007/s00382-013-1713-0
de Michele, M., Raucoules, D., Arason, Þ. (2016). Volcanic Plume Elevation Model and its velocity derived from Landsat 8. Remote Sensing of Environment, 176, 219-224.
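The core of the inter-band correlation idea is to estimate the apparent displacement of cloud features between two spectral bands acquired a fraction of a second apart. The sketch below is a toy FFT cross-correlation (integer-pixel offsets, periodic boundaries), not the authors' implementation, which additionally handles sub-pixel offsets and the height/velocity inversion:

```python
import numpy as np

def band_offset(img_a, img_b):
    """Estimate the integer-pixel displacement of features between two
    near-simultaneous band images via FFT cross-correlation (periodic
    boundaries, integer offsets only). Toy illustration of the
    inter-band correlation idea, not the authors' implementation."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    # Cross-correlation surface; its peak marks the relative shift
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # Wrap offsets into a signed range so shifts can be negative
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return int(dy), int(dx)
```

Dividing the displacement (offset × pixel size) by the inter-band time lag yields an apparent motion; separating the parallax (height) contribution from true cloud motion (velocity) is the ill-posed inversion the abstract addresses.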
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Assessing the Relative Contributions of Anthropogenic and Natural Drivers to ocean extremes

Authors: Angela Landolfi, Davide Cavaliere, Riccardo Droghei, Salvatore Marullo, Alexandre Mignot, Federico Serva, Emanuele Organelli, Rosalia Santoleri
Affiliations: CNR-ISMAR, MOi
Extreme conditions in ocean properties—such as temperature, acidity, and wind patterns—are becoming more frequent and intense, with an increasing likelihood of these events occurring simultaneously, resulting in compound events. These events pose serious risks to marine ecosystems and the essential services they provide, including food security and climate regulation. The primary drivers of these extreme conditions are ongoing climate trends linked to rising anthropogenic greenhouse gas emissions, combined with natural climate variability, which can either intensify or dampen these extremes, influencing their frequency and severity. However, the relative contributions and the interactions of natural climate variability and long-term anthropogenic trends to the development of both single and compound extreme events remain poorly quantified. In this study, we apply statistical techniques to separate and quantify the roles of anthropogenic influence and natural variability. We also discuss the impacts of both individual and compound extreme events on ocean biogeochemistry and air-sea interactions.
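The abstract does not specify the statistical techniques used, so the snippet below is only a generic illustration of the first step such an attribution analysis often takes: removing a least-squares linear trend (a stand-in for the long-term forced signal) and treating the residual as natural variability:

```python
import numpy as np

def split_trend_variability(t, series):
    """Illustrative decomposition (not the authors' method): a
    least-squares linear trend stands in for the long-term forced
    signal, and the residual for natural variability."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(series, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)   # linear fit y ~ slope*t + intercept
    trend = slope * t + intercept
    residual = y - trend                     # "natural variability" proxy
    return slope, trend, residual
```

Real attribution work uses far more careful methods (e.g. ensemble-based forced-response estimates), but this decomposition shows where the "trend vs. variability" split enters the analysis.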
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Error characterization of satellite and synergistic sea-surface wind products under tropical cyclone conditions

Authors: Dr. Federico Cossu, Ms. Evgeniia Makarova, Dr. Albert S. Rabaneda, Dr. Marcos Portabella, Dr. Joe Tenerelli, Dr. Nicolas Reul, Ad Stoffelen, Dr. Giuseppe Grieco, Dr. Joseph Sapp, Dr. Zorana Jelenak, Dr. Paul Chang
Affiliations: Institute of Marine Sciences (ICM-CSIC), Norwegian Meteorological Institute (Met Norway), Ocean Data Lab (ODL), Institut Français de Recherche pour l'Exploitation de la Mer (IFREMER), Royal Netherlands Meteorological Institute (KNMI), Institute of Marine Sciences (ISMAR-CNR), National Oceanic and Atmospheric Administration (NOAA-NESDIS), Global Science & Technology, Inc.
In the framework of the ESA OCEAN+EXTREMES MAXSS project, a consistent, inter-calibrated extreme wind data record for satellite scatterometers (ASCAT-A, -B, -C, RapidScat, Oceansat-2, ScatSat-1, HY-2A, -2B) and radiometers (AMSR-2, WindSat, SMAP, SMOS) over the period 2010-2020 has been generated, using the Stepped Frequency Microwave Radiometer (SFMR) winds onboard the US National Oceanic and Atmospheric Administration (NOAA) and US Air Force Reserve Command (AFRC) “hurricane hunter” planes as the reference for satellite wind data adjustment. The satellite-adjusted winds have then been blended, using the Optical Flow Morphing technique, with the European Centre for Medium-Range Weather Forecasts (ECMWF) fifth-generation reanalysis (ERA5) winds, to produce a high spatial and temporal resolution multi-mission (MM) wind product. An important step within the MAXSS study is to characterize the errors of the different MM input wind sources (i.e., scatterometers and radiometers) as well as those of the MM product itself. Since the MM product ingests a variety of satellite sensors with different effective spatial resolutions, and thus different error characteristics over different regions, it is important to fully characterize such errors and understand how they eventually impact MM product quality. A well-established method to assess errors of different wind data sources is the triple collocation analysis (Stoffelen, 1998). The method accounts for the different spatial (and/or temporal) representation of the three collocated data sources, allowing for error characterization at the scales of both the medium-resolution and the lowest-resolution system. It also includes the estimation of the so-called representativeness error (r2), which represents the common true variance resolved by the two higher-resolution systems but not by the coarsest one. The r2 can be estimated by means of spectral analysis (Vogelzang et al., 2011).
However, more accurate estimates are given by the spatial variances method (Vogelzang et al., 2015), which computes wind variances as a function of scale. Unlike the spectral method, it operates in the spatial domain rather than the frequency domain. Spatial variances have a clear interpretation and are tolerant of missing data, which are expected under tropical cyclone conditions due to increased sub-cell wind variability and rain screening. The spatial variance analysis is thus used to estimate r2, i.e., the common variance resolved by the in situ and the satellite wind data but not by the model data. Using all satellite data products at 25 km grid spacing (i.e., about 50 km spatial scales), an r2 of 0.3 m2/s2 is used in the triple collocation analysis. The wind speed errors (at scatterometer scales) estimated by triple collocation analyses show that ASCAT winds contain the lowest errors (standard deviation of 1.1 m/s) of all the extreme wind datasets. To preserve extreme wind sampling, some rain contamination is allowed in the Ku-band data (by not using the KNMI_QC flag available in the product), which leads to significantly higher errors for Ku-band (ranging from 1.5 m/s to 2.1 m/s) than for C-band scatterometers. The radiometer winds show significantly larger errors than the scatterometers, ranging from 2.0 m/s (SMAP) to 2.9 m/s (WindSat); the quality of the worst-performing scatterometer (OSCAT) is comparable to that of the best-performing radiometer (SMAP). The triple collocation analysis also reveals a substantial error reduction of the MM product (1.7 m/s) w.r.t. ERA5 (2.6 m/s). Moreover, a similar behaviour of the MM product is found under extra-tropical cyclone conditions. Note though that the ERA5 product used in the MM generation is actually a suite of ERA5 analysis winds, and is thus not the same as the one used in the error assessment presented here, i.e., a series of ERA5 forecast winds.
The former is heavily influenced by (and correlated with) the assimilated scatterometer and radiometer observations. As such, and to further verify the added value of the MM product (w.r.t. ERA5), the MM product should be regenerated using ERA5 forecast data instead, and the triple collocation analysis redone. Future work should also focus on translating the ERA5 storm centre to that captured by the in situ and/or satellite data. The ERA5 storm positioning errors are thought to contribute significantly to the error budget, even though these are not, strictly speaking, random errors. As such, translating the ERA5 storm prior to triple collocation will lead to more realistic ERA5 random error estimates and, in turn, to more accurate in situ and satellite wind errors. A further extension of the adjusted wind datasets back to the early days of operational scatterometry (i.e., ERS-1) could provide a 30+ year series of inter-calibrated extreme wind data suitable for climate variability studies and, in particular, extreme wind trend analyses.
References:
Stoffelen, A. (1998). Toward the true near-surface wind speed: Error modeling and calibration using triple collocation. J. Geophys. Res., 103, 7755–7766. https://doi.org/10.1029/97JC03180
Vogelzang, J., Stoffelen, A., Verhoef, A., Figa-Saldaña, J. (2011). On the quality of high-resolution scatterometer winds. J. Geophys. Res., 116, C10033. https://doi.org/10.1029/2010JC006640
Vogelzang, J., King, G. P., Stoffelen, A. (2015). Spatial variances of wind fields and their relation to second-order structure functions and spectra. J. Geophys. Res. Oceans, 120, 1048–1064. https://doi.org/10.1002/2014JC010239
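The covariance-based triple collocation estimator (Stoffelen, 1998) can be sketched in a few lines. This is a minimal version under the textbook assumptions (mutually independent errors, datasets already inter-calibrated); the representativeness correction r2 discussed in the abstract is omitted for brevity, and the variable names are illustrative:

```python
import numpy as np

def triple_collocation(x, y, z):
    """Estimate the random-error variance of three collocated wind
    datasets measuring the same truth with mutually independent errors
    (covariance-based triple collocation; r2 correction omitted)."""
    C = np.cov(np.vstack([x, y, z]))
    # Each dataset's error variance: total variance minus the common
    # (truth) variance inferred from the cross-covariances.
    ex2 = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey2 = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez2 = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex2, ey2, ez2
```

In practice, as the abstract notes, two datasets are first calibrated against the third, and the representativeness error r2 (here 0.3 m2/s2) is added to the appropriate covariance terms before solving.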
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Assessment of the Frequency and Impact of Composite Marine and Atmospheric Heatwaves in the Mediterranean Region

Authors: Manal Hamdeno, Dr. Aida Alvera-Azcárate
Affiliations: University of Liège
Under the impacts of global warming, heatwaves are increasing in frequency, severity, extent and duration, both in the ocean and in the atmosphere. Marine heatwaves (MHWs), characterized by prolonged periods of abnormally high sea surface temperatures, have devastating effects on marine ecosystems, including coral bleaching, death of benthic communities, declines in sea surface productivity, and loss of seagrass beds and kelp forests. On the other hand, atmospheric heatwaves (AHWs), characterized by prolonged periods of elevated air temperatures, pose a significant environmental threat and impact all aspects of life, including individual and public health. The aim of this work is to investigate the occurrence of MHWs and AHWs in the Mediterranean region during the last four decades, focusing on their spatial and temporal characteristics, their co-occurrence and the variability of ocean-atmosphere heat fluxes during these compound events. Our preliminary results show that the frequency of both MHWs and AHWs has increased, especially in the last decade. Both MHWs and AHWs exhibit high spatial variability: MHWs in the western Mediterranean (WMED) are more frequent and intense, while the eastern Mediterranean (EMED) experiences longer MHW durations. The longest AHWs were found in the WMED, the most intense AHWs were observed along the northern coast of Africa and in the Adriatic and Aegean Seas, and the central and eastern Mediterranean basins were characterized by the most frequent AHWs. Our results also show that about 47% of the MHWs are categorized as moderate and 25% as strong, with the rest falling between severe and extreme. The majority of AHW events are in the moderate category (~63%), with 17% categorized as strong and the rest divided between severe and extreme. It was also found that over 44% of MHWs in the Mediterranean co-occur with AHWs, with AHWs preceding MHWs by 1 to 3 days. This work is ongoing, and the full results will be presented at the symposium.
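Once daily MHW and AHW flags are available, co-occurrence statistics like those quoted above (fraction of overlapping days, typical AHW lead time) reduce to simple boolean operations. The helper below is a toy diagnostic under assumed conventions, not the authors' method:

```python
import numpy as np

def cooccurrence(mhw, ahw, max_lag=5):
    """Toy co-occurrence diagnostic (not the authors' method): fraction
    of MHW days that overlap AHW days, and the AHW lead (in days) that
    maximizes the overlap. Positive lag means AHWs precede MHWs."""
    mhw = np.asarray(mhw, dtype=bool)
    ahw = np.asarray(ahw, dtype=bool)
    frac = (mhw & ahw).sum() / max(mhw.sum(), 1)
    best_lag, best_overlap = 0, -1
    for lag in range(-max_lag, max_lag + 1):
        # Shift the AHW series forward by `lag` days and count overlaps
        overlap = (mhw & np.roll(ahw, lag)).sum()
        if overlap > best_overlap:
            best_overlap, best_lag = overlap, lag
    return frac, best_lag
```

With flags for an AHW starting two days before an MHW, this returns a positive lag, consistent with the 1-3 day AHW lead reported in the abstract.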
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Towards a High-Resolution Inventory of Marine Heatwaves Over Coastal Habitats in the Southwestern Atlantic Ocean

Authors: MSc. Gabriel Lucas Xavier da Silva, MSc. Vitor Galazzo de Paiva, Milton Kampel
Affiliations: Instituto Nacional De Pesquisas Espaciais, Institute of Science Tokyo
Global marine habitats are threatened by the increasing frequency and intensity of extreme warming events. Marine Heatwaves (MHWs) are defined as extreme events in which the sea surface temperature (SST) remains above a climatological threshold (e.g., the 90th percentile) for at least five consecutive days. MHWs are associated with the decline of various marine habitats due to sudden and prolonged increases in temperature levels, causing mass mortality of coral populations, disruption of local ecological structures, and even socioeconomic impacts. In particular, ecosystems in the Southwestern Atlantic are home to important endemic species that require monitoring and conservation efforts to ensure their ecosystem health and long-term biodiversity. Although previous efforts have been made to understand the dynamics of MHWs in the Southwestern Atlantic, a comprehensive historical mapping of these events over marine habitats has not yet been conducted. Here, we present our ongoing efforts towards a high-resolution atlas of marine heatwaves over Southwestern Atlantic habitats during the last four decades, from 1985 to 2024. Our goals include identifying patterns of occurrence, trends in intensification, and potential connections to large-scale climate phenomena, thereby contributing to the understanding of the impacts of marine heatwaves and informing conservation strategies for future climate scenarios. A long-term composite sea surface temperature (SST) product with 1 km resolution was validated against in situ measurements and retrieved for more than 100,000 sites, divided into eight habitat categories: (1) Coral reefs, (2) Mesophotic reefs, (3) Bryozoan reefs, (4) Rhodolith beds, (5) Halimeda banks, (6) Seagrass meadows, (7) Kelps and (8) Rocky shores. MHWs were then identified using the standard methodology where periods of anomalous SST conditions exceeding the climatological 90th percentile threshold for five or more consecutive days are classified as MHW events. 
The reference climatology was derived by averaging individual days of the year over the period 1985–2015, followed by smoothing with a 30-day moving average. The preliminary results indicate that the validation of the long-term composite SST product showed a high R² (0.96) with a low bias (0.01 °C) and RMSE (0.67 °C), making it suitable for detecting marine heatwave events across Southwestern Atlantic marine habitats. Regarding the occurrence of MHWs, 25% of the total habitat area analyzed has experienced at least one strong marine heatwave event during the last four decades. In addition, for almost all habitats, more than half of their total area is currently experiencing an increase in the duration (3.5 ± 1.4 days per decade) and frequency (1.4 ± 0.3 events per decade) of MHWs. The increasing trend in the duration of MHWs is also corroborated by the most severe events recorded by their cumulative intensity in each Southwestern Atlantic ecoregion (230 ± 40 °C days). We believe that the future availability of this inventory will be a valuable contribution, enabling the scientific community to analyze the impact of marine heatwaves on the evolution and conservation of coastal habitats in recent decades. This study is a contribution to the FAPESP project 2021/04128-8.
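The standard detection rule described in this abstract (SST above the day-of-year 90th-percentile climatological threshold for five or more consecutive days) reduces to a run-length check once the threshold series is built. A simplified sketch, without the gap-joining step of the full Hobday et al. methodology, is:

```python
import numpy as np

def detect_mhw(sst, threshold, min_days=5):
    """Flag marine-heatwave events: runs of at least `min_days`
    consecutive days with SST above the day-of-year climatological
    threshold (simplified Hobday-style rule, no gap-joining).
    Returns a list of (start_index, end_index) pairs (inclusive)."""
    above = np.asarray(sst) > np.asarray(threshold)
    events = []
    start = None
    # Append a sentinel False so a run reaching the end is closed
    for i, flag in enumerate(list(above) + [False]):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_days:
                events.append((start, i - 1))
            start = None
    return events
```

In the full methodology, the threshold array is the smoothed day-of-year 90th percentile from the 1985–2015 climatology, and events separated by short gaps are merged.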
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Climate Extremes Shape Phytoplankton Bloom Phenology across Spanish Marine Ecoregions

Authors: Manuel Fernández-Barba, Pablo Almaraz, Emma Huertas I., Gabriel Navarro
Affiliations: ICMAN-CSIC
Climate change trends are altering the phenology of phytoplankton (i.e., the magnitude and timing of blooms) on a global scale. However, less is known about these changes in coastal regions, where short-term extreme events are becoming the new normal. Here, we analyze spatiotemporal changes in phytoplankton phenology across Spanish marine ecoregions using high-resolution chlorophyll-a data from Copernicus' multisatellite observations. We find less intense phytoplankton blooms that both initiate and terminate earlier in recent years. The reduced reproducibility of their seasonal cycle further confirms changes in phenological dynamics in recent years. Beyond long-term trends, our causality analysis links phytoplankton seasonality to marine heatwaves and strong wind events, emphasizing the critical role of these climate extremes in driving dynamic alterations in phytoplankton blooms. Enhancing the predictive skill for phytoplankton community responses to a changing climate is essential for anticipating environmental and socioeconomic impacts under future scenarios, offering valuable insights for mitigation and adaptation strategies.
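Bloom magnitude and timing (initiation, termination, peak) are commonly derived by thresholding the chlorophyll-a series relative to its annual median; the exact convention used by the authors is not stated, so the 5%-above-median threshold below is an assumption for illustration:

```python
import numpy as np

def bloom_phenology(chla, frac=1.05):
    """Threshold-based bloom timing (one common convention, assumed
    here): the bloom is the period when chlorophyll-a exceeds `frac`
    times its annual median. Returns (initiation index, termination
    index, peak magnitude), or None if no bloom is detected."""
    chla = np.asarray(chla, dtype=float)
    thresh = frac * np.median(chla)
    above = np.flatnonzero(chla > thresh)
    if above.size == 0:
        return None
    return int(above[0]), int(above[-1]), float(chla.max())
```

Interannual shifts in the initiation and termination indices, and declines in the peak magnitude, are the kinds of phenological changes the abstract reports.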
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: High-Impact Tropical Cyclones and Ocean – Role of Marine Heat Wave, Ocean Eddy and Ocean Internal Tides

Authors: I-i Lin, Iam-Fei Pun, Rob
Affiliations: National Taiwan University
Tropical cyclones (TCs) are one of the most severe natural disasters in the world. The ocean is the energy source for TCs. How oceans interact with TCs is of fundamental importance to TC development and intensity evolution, and hence vital to the vulnerable populations impacted by TCs. Using a synergy of multiple remote sensing observations, including satellite altimetry and cloud-penetrating microwave sea surface temperature (SST), in situ Argo float ocean temperature and salinity observations, and numerical modelling, this research explores how the ocean interacts with a number of high-impact TCs over the western North Pacific (WNP), the largest TC basin on Earth. These include three record-breaking TCs: super-typhoon Haiyan (2013, the most intense tropical cyclone in the history of the WNP), super-typhoon Hagibis (2019, the fastest-intensifying TC in the WNP and the world), and typhoon Bavi (2020, the first TC that prompted North Korea to issue a TC warning in its history). The roles of marine heatwaves, ocean features such as eddies, and ocean internal tides are discussed, and the reasons why these TCs underwent such exceptional intensification events are explained.
References:
Guan, S., Jin, F.-F., Tian, J., Lin, I., Pun, I.-F., Zhao, W., et al. Ocean Internal Tides Suppress Tropical Cyclones in the South China Sea. Nature Communications, 2024.
Pun, I., Hsu, H., Moon, I., Lin, I., & Jeong, J. Marine Heatwave as a Supercharger for the Strongest Typhoon in the East China Sea. npj Climate and Atmospheric Science, 2023.
Lin, I., Rogers, R. F., et al. A Tale of Two Rapidly-Intensifying Supertyphoons: Hagibis (2019) and Haiyan (2013). BAMS, 2021.
Lin, I., Pun, I., et al. ‘Category-6’ Supertyphoon Haiyan in Global Warming Hiatus: Contribution from Subsurface Ocean Warming. Geophysical Research Letters, 2014. https://doi.org/10.1002/2014GL061281
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Ocean Extreme events: characterization of 3D Marine Heatwaves in ARMOR3D, a multi-observations dataset

Authors: NATHALIE Verbrugge, Christine Boone, Patricia Zunino
Affiliations: Collecte Localisation Satellites
The “deteCtion and threAts of maRinE Heat waves” (CAREHeat, https://careheat.org) project focuses on methodologies and process understanding related to marine extremes, more specifically Marine Heatwaves (MHWs): prolonged periods of anomalously warm marine temperatures which can have substantial impacts on marine ecosystems and services. In particular, reconstructing the information related to MHW events in the subsurface ocean is crucial for our understanding of MHWs and their impact on ecosystems. The propagation of surface events to depth can strongly impact benthic organisms and ecosystems, and different studies have demonstrated an intensification of MHWs at depth as well as a deeper penetration of the signal (Darmaraki et al., 2019; Schaeffer and Roughan, 2017). In the scope of the ESA CAREHeat project, MHWs have been estimated from 0 to 300 m depth from the ARMOR3D 3D temperature fields. ARMOR3D is a 3D gridded product based on a statistical approach that merges satellite and in situ observations to retrieve temperature, salinity, mixed layer depth and geostrophic velocities from 0 to 1500 m depth, and is based on seasonal climatological temperature and salinity from 1500 to 5500 m. This product is part of the Copernicus Marine Service catalogue (https://doi.org/10.48670/moi-00052) and its resolution has been increased to reach a daily temporal resolution and 1/8° spatial resolution. During the first stage of this project, it became clear that the definition of MHWs proposed by Hobday et al. (2016) needed to be refined, the aim being to filter out events that are non-significant in terms of amplitude or spatial extent. MHWs are often estimated from the sea surface temperature alone; however, the associated temperature anomaly can propagate to depth and impact deeper ecosystems.
Next, we analyse the spatio-temporal variability of MHW vertical extension, the associated heat content and thermosteric signal related to these events, as well as associated variability patterns and potential trends.
References:
Darmaraki, S., Somot, S., Sevault, F., Nabat, P., 2019. Past Variability of Mediterranean Sea Marine Heatwaves. Geophysical Research Letters 46, 9813–9823. https://doi.org/10.1029/2019GL082933
Hobday, A.J., Alexander, L.V., Perkins, S.E., Smale, D.A., Straub, S.C., Oliver, E.C.J., Benthuysen, J.A., Burrows, M.T., Donat, M.G., Feng, M., Holbrook, N.J., Moore, P.J., Scannell, H.A., Sen Gupta, A., Wernberg, T., 2016. A hierarchical approach to defining marine heatwaves. Progress in Oceanography. https://doi.org/10.1016/j.pocean.2015.12.014
Schaeffer, A., Roughan, M., 2017. Subsurface intensification of marine heatwaves off southeastern Australia: The role of stratification and local winds. Geophysical Research Letters 44, 5025–5033. https://doi.org/10.1002/2017GL073714
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Contribution of C-band Synthetic Aperture Radar to the Monitoring of Tropical Cyclones: Status of CyclObs Database Product and Algorithms

Authors: Alexis Mouche, Arthur Avenas, Clément Combot, Théo Cevaer, Dr Léo Vinour, Quentin Febvre, Jean-Renaud Miadana, Bertrand
Affiliations: Ifremer
SAR capabilities have advanced remarkably since the 1990s, enabling the measurement of ocean surface winds under extreme conditions and across larger swath areas. The launch of Radarsat-2 by the Canadian Space Agency (CSA) highlighted the critical role of the cross-polarization channel, which prevents signal saturation in intense storms. Building on this, the European Space Agency (ESA) launched the Sentinel-1 constellation in 2014 and 2016, introducing the Satellite Hurricane Observation Campaign (SHOC) to maximize SAR acquisitions over tropical cyclones and complement the Canadian hurricane watch initiative. This C-band SAR constellation has since been further strengthened by the Radarsat Constellation Mission (RCM), consisting of three SAR satellites. Following the SHOC initiative, Ifremer established the CyclObs database in 2017 to create a comprehensive archive of C-band SAR observations, with data extending back to 2012. Today, the database spans three C-band SAR missions and contains over 1,600 acquisitions of tropical cyclones worldwide. This study presents the current status of the database, including its coverage by hurricane season, ocean basin, and tropical cyclone intensity. Algorithms originally developed to estimate high-resolution ocean surface winds from SAR data for the Radarsat-2 and Sentinel-1 missions have been updated for RCM. Specifically, the Geophysical Model Function (GMF) has been revised using collocated airplane measurements (e.g., dropsondes, SFMR) and satellite observations. Furthermore, wind direction retrieval has been integrated into the Level-2 TCWVF (Tropical Cyclone Wind Vector Field) product, which provides wind data at 3-km resolution. This new wind direction variable offers critical insight into the ocean surface wind flow. 
When an acquisition captures a tropical cyclone’s eye, the measurements are further used to characterize the cyclone vortex, determining parameters such as center position, maximum wind speed, wind radii, and radius of maximum wind speed. These outputs are compiled into the Level-2 TCVA product (Tropical Cyclone Vortex Analysis). Comparisons with other datasets and discussions of the products' limitations are also included in this study. We also highlight ongoing efforts to incorporate sea state parameters derived from SAR data into the database. The paper concludes by exploring how these wind estimates can be leveraged to depict ocean surface wind fields with high spatial and temporal resolution throughout a tropical cyclone's lifecycle.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Investigating changes in extreme ocean surface wind speeds

Authors: Alexandre Payez, Ad Stoffelen, Dr. Cees de Valk, Dr. Rianne Giessen
Affiliations: Royal Netherlands Meteorological Institute - KNMI
As the climate of our planet is changing, an interesting question is whether statistically significant trends over the last decades exist in the extreme wind-speed percentiles that can be associated with tropical cyclones and similar extreme storms over the ocean. Reliable observational extreme-wind trend climate data records over the ocean do not yet exist. To answer this question, several challenges need to be addressed. First of all, extremes usually correspond to noisy datasets, as extreme values by definition correspond to only a small subset of the data. In addition, comparing observations from instruments with different properties and wind-extremes statistics must be done carefully, to ensure that the identified geophysical trends are truly independent of changes in instrumentation. Finally, to conclude confidently on the presence of decadal trends, considering multiple decades of data is crucial, so as not to be overly sensitive to natural variations related to the El Niño-Southern Oscillation (ENSO) and similar phenomena. Within the MAXSS project of the European Space Agency, to address the first of these points, we recently developed at KNMI a robust method which borrows statistical models and mathematical tools from Extreme Value Theory, designed for characterising rare events, and which allows extreme percentiles of surface wind speeds over the ocean to be investigated with confidence. This method was developed using both wind speed data derived from the satellite-borne ASCAT-A scatterometer and ERA5 model wind speeds, focusing on a number of tropical basins. With this method in place, going sufficiently far back in time is a sine qua non for reaching stable and meaningful conclusions, via hypothesis testing, about the possible existence of statistically significant decadal trends in extreme scatterometer winds over the ocean. It therefore follows that working with data from multiple instruments is required.
As a follow-on, our current focus at KNMI is the development of an approach enabling a meaningful comparison of wind data derived from different instruments (paying particular attention to calibration differences and the effect of rain contamination), with the help of collocated model wind speed data as well as overlapping years of observations between some of the instruments. The idea is then to apply the existing KNMI extreme-percentile interpolation method to observations and model data covering the last 30 years.
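The Extreme Value Theory machinery mentioned above typically models exceedances over a high threshold with a Generalized Pareto Distribution (GPD). The sketch below is purely illustrative and is not the KNMI method: it fits a GPD to threshold exceedances with simple method-of-moments estimates and converts the fit into a return level. The threshold and exceedance-rate handling are assumptions for the example.

```python
import math
from statistics import mean, pvariance

def fit_gpd_moments(wind_speeds, threshold):
    """Fit a GPD to exceedances over `threshold` using
    method-of-moments estimates of shape (xi) and scale (sigma)."""
    excesses = [w - threshold for w in wind_speeds if w > threshold]
    m, v = mean(excesses), pvariance(excesses)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (1.0 + m * m / v)
    return xi, sigma

def gpd_return_level(p, threshold, xi, sigma, exceed_rate):
    """Wind speed exceeded with probability 1 - p, where
    `exceed_rate` is the fraction of observations above `threshold`."""
    ratio = exceed_rate / (1.0 - p)
    if abs(xi) < 1e-9:  # exponential-tail limit of the GPD
        return threshold + sigma * math.log(ratio)
    return threshold + (sigma / xi) * (ratio ** xi - 1.0)
```

On an approximately exponential tail the shape estimate comes out near zero; heavier tails give positive shape values. In practice maximum-likelihood fitting (e.g. `scipy.stats.genpareto`) is preferred over moments.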

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: B.03.05 - POSTER - Heritage at Risk: Innovative Tools for Assessing and Mitigating Climate Change and Natural Hazards

In a landscape where climate change is accelerating and the frequency of natural disasters is increasing, the preservation of cultural heritage has become a critical challenge. This session aims to address these pressing issues by showcasing the latest tools and technologies designed to protect cultural assets and mitigate the stresses that climate change and its side effects place on them.
The session will explore a series of novel tools and methodologies designed to assess and mitigate the risks that climate change poses to cultural heritage. The session will highlight advancements in technology, including satellite remote sensing methods that can be used for the identification, monitoring and impact assessment of different types of threats as well as innovative sensors such as flash LiDAR, modelling techniques, novel coatings and data analytics, that are being utilised to safeguard invaluable cultural sites. Participants will gain insights into the practical application of these tools in real-world scenarios through the showcasing of successful case studies, and explore collaborative strategies to enhance resilience through a series of expert presentations and interactive discussions.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Analyzing Climate Change Impacts on European Cultural Heritage Sites Using High-Temporal-Resolution Satellite Time Series

Authors: Marco Ottinger, Dr Felix Bachofer, Dr. Sophie Reinermann, Dr. Aristeidis Georgoulias, Dr. Prodomos Zanis, Juliane Huth
Affiliations: German Aerospace Center (DLR), Earth Observation Center (EOC), German Remote Sensing Data Center (DFD), Department of Meteorology and Climatology, School of Geology, Aristotle University of Thessaloniki
This study investigates the derivation, application and use of Earth observation (EO)-based datasets and products to analyze and understand the impacts of climate change on selected cultural heritage sites in Europe. Conducted as part of the Horizon Europe Triquetra Project, this research utilized high-temporal-resolution satellite time series data from multiple sensors. Specifically, we examined long-term changes in snow cover, coastline morphology, and vegetation drought effects to evaluate their implications for these culturally significant landscapes. At the Kalapodi archaeological site in Greece, snow cover extent was analyzed using binary masks derived from DLR Global SnowPack daily snow cover products for the period 2000–2022. These products, generated using MODIS sensors on the Terra and Aqua satellites, provide daily snow cover data at a 500-meter resolution. The data were aggregated into monthly composites to identify temporal trends and multi-decadal means. Comparisons with global climate model snowfall data provided valuable insights into the relationship between observed snow cover dynamics and climatic variables. The analyses at Kalapodi highlight the importance of continuous snow cover monitoring using EO data and emphasize the need for long-term space-borne missions to support continued observation, ensuring effective site management and sustainable conservation in the face of climate change. For drought analysis at Roseninsel, Lake Starnberg, Germany, Terra MODIS Enhanced Vegetation Index (EVI) time series from 2000 to 2022 were used to detect vegetation anomalies. Significant drought-induced stress was observed, particularly during the summers and autumns of 2003, 2004, and 2008, offering valuable insights into vegetation response and supporting improved management of vegetated areas to mitigate erosion risks.
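The daily-to-monthly aggregation of binary snow masks described above can be sketched as a per-pixel snow cover fraction. The cloud-gap handling (`None` pixels) below is an illustrative assumption, not the Global SnowPack algorithm:

```python
def monthly_snow_cover_fraction(daily_masks):
    """Aggregate daily binary snow masks (1 = snow, 0 = snow-free) for
    one month into a per-pixel snow cover fraction, skipping cloud gaps
    marked as None. `daily_masks` is a list of 2-D grids."""
    rows, cols = len(daily_masks[0]), len(daily_masks[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [m[r][c] for m in daily_masks if m[r][c] is not None]
            out[r][c] = sum(vals) / len(vals) if vals else float("nan")
    return out
```

A multi-decadal mean for a given calendar month is then just the average of these monthly grids across years.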
The coastline detection approach integrates three primary methods: cloud-processing of annual water index images, local coastline extraction, and the creation of shore-normal transects. Using multispectral Landsat surface reflectance data from various instruments (TM, ETM+, OLI), we generated annual water-index composites based on the Modified Normalized Difference Water Index (MNDWI). These composites facilitated the delineation of sub-pixel shorelines and the quantification of coastline changes through linear regression techniques. Additionally, a yearly PlanetScope optical time series at 3 m spatial resolution was employed, providing enhanced precision and frequency for shoreline morphology analyses. This study demonstrates the capability of remote sensing techniques to monitor environmental changes and their impacts on cultural heritage sites, offering valuable tools for climate change adaptation and preservation efforts.
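The MNDWI used for the annual water-index composites is a simple band ratio, (Green − SWIR) / (Green + SWIR), computed from the green and shortwave-infrared bands (TM/ETM+ bands 2 and 5, OLI bands 3 and 6). A minimal per-pixel sketch; the 0.0 water threshold is a common default, tuned per scene in practice:

```python
def mndwi(green, swir):
    """Modified Normalized Difference Water Index for one pixel,
    from green and SWIR surface reflectance."""
    denom = green + swir
    if denom == 0:
        return 0.0
    return (green - swir) / denom

def water_mask(green_band, swir_band, threshold=0.0):
    """Classify pixels as water where MNDWI exceeds the threshold.
    Bands are 2-D lists of surface reflectance values."""
    return [[mndwi(g, s) > threshold for g, s in zip(grow, srow)]
            for grow, srow in zip(green_band, swir_band)]
```

Water strongly absorbs SWIR while reflecting green, so water pixels give positive MNDWI and most land surfaces give negative values.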

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: ArchMed Heritage Hub: A European project for managing and preserving at-risk cultural heritage in the Mediterranean basin

Authors: Elena Segreti, Mr Giulio Pellegrino, Mr Daniele Malfitana, Mr Antonino
Affiliations: ESSCA, University of Catania, ArchMed Hub
The Mediterranean region, with its exceptional concentration of UNESCO World Heritage archaeological and cultural sites, faces increasing threats from climate change and the growing frequency of natural disasters. The ArchMed Heritage Hub, proposed to be established in Sicily, envisions an interdisciplinary platform to address these challenges. By leveraging Earth Observation (EO) technologies, digital archaeology tools, and innovative management strategies, this initiative seeks to protect and enhance cultural heritage while fostering sustainability, inclusivity, and Mediterranean-wide cooperation.
Strategic collaboration with the University of Catania: a cornerstone of the project will be its collaboration with the Specialization School in Archaeological Heritage at the University of Catania, which will bring academic rigor and expertise to the initiative. Prototypes of innovative ideas developed through academic research could evolve into early-stage startups within the Hub’s incubator, bridging the gap between research and application. To further enhance its impact, the Hub will establish a multidisciplinary consortium, composed of academic partners, research and technology institutes, and public and private organizations. This integrated network will ensure effective collaboration, facilitate the sharing of expertise and resources, and enhance the scalability of the project’s solutions.
Innovative tools and technologies for heritage management: to address the pressing needs of at-risk cultural heritage, the Hub will leverage state-of-the-art technologies, including:
• EO and advanced sensors: high-resolution satellite imagery, LiDAR, and UAVs for monitoring environmental risks such as erosion, flooding, and structural degradation.
• Predictive modelling: AI-powered tools to anticipate risk scenarios and guide targeted interventions.
• Digital archaeology: the creation of 3D models and digital twins for research, public engagement, and reducing physical pressure on heritage sites.
Fostering entrepreneurship and managerial excellence: one of the Hub’s primary goals will be to attract startups and researchers dedicated to developing innovative cultural heritage solutions. It will:
• Support startups in creating scalable solutions for risk monitoring, overtourism management, and site digitalization.
• Collaborate with Mediterranean universities to nurture academic projects in digital archaeology and cultural heritage management.
• Develop managerial competencies that balance preservation, accessibility, and socio-economic growth, creating integrated solutions for sustainable site management.
Sicily as a living laboratory for scalable solutions: Sicily’s unique combination of archaeological wealth and environmental challenges provides an ideal laboratory for developing and testing scalable solutions. The region experiences a wide array of environmental pressures, including volcanic activity, seismic risks, coastal erosion, and rising sea temperatures, which collectively threaten cultural heritage sites. Pilot projects will include:
• Monitoring the impacts of seismic and volcanic activity on archaeological structures, using EO technologies and predictive modelling to develop early warning systems.
• Tracking coastal erosion and sea-level rise to safeguard marine and coastal heritage sites.
• Designing EO-driven models to mitigate the impact of overtourism and promote sustainable tourism.
• Partnering with startups to create innovative digital platforms for heritage interpretation and virtual experiences.
A global vision for resilient cultural heritage management: the ArchMed Heritage Hub aspires to create a replicable model for managing cultural heritage at risk across the Mediterranean and beyond.
By integrating cutting-edge technologies, fostering entrepreneurship, and building collaborative networks, the project addresses critical environmental and cultural challenges. Its alignment with the United Nations Sustainable Development Goals (SDGs) and European policies underscores its commitment to regional and global resilience in cultural heritage management. Aligned with the objectives of the session B.03.05 Heritage at Risk, the ArchMed Heritage Hub demonstrates how strategic collaborations, advanced technologies, and innovative management strategies can protect cultural heritage at risk while fostering sustainability, inclusivity, and socio-economic growth.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Using citizen engagement through crowdsourcing to monitor heritage at risk

Authors: Kyriacos Themistocleous, Valentinos Evripidou
Affiliations: Eratosthenes Centre Of Excellence, Cyprus University of Technology
One of the most significant consequences of climate change is the threat to cultural heritage sites. The TRIQUETRA project addresses the critical challenge posed by climate change to cultural heritage sites by applying a comprehensive risk assessment framework. This framework integrates traditional and advanced technologies, including remote sensing and laser-based spectroscopy, to quantify the severity of risks, monitor their progression, and inform effective mitigation strategies. Understanding the potential risks at site level is vital to ensure that appropriate adaptation and mitigation measures are put in place. Recent research underscores the compounded impacts of climate-induced geo-hazards, such as landslides and earthquakes, which threaten the physical integrity of monuments and the socio-economic systems they support. Citizen engagement is also an integral part of the TRIQUETRA project, creating a dynamic web and mobile platform where visitors actively participate in cultural heritage (CH) site monitoring. The TRIQUETRA application enables citizens and cultural heritage site visitors to play a vital role in capturing and uploading site photos, thereby contributing valuable datasets that complement and enhance the existing 3D models. This process is supported by a backend system that helps cultural site authorities better monitor the site through up-to-date imagery and reports from visitors. At the same time, the TRIQUETRA Citizen Engagement Application creates an interactive and enhanced experience for cultural heritage site visitors through immersive Augmented Reality (AR) experiences. The application provides additional information through an AR experience where users can learn more about critical features at risk, such as areas affected by climate change or structural vulnerabilities, fostering awareness and promoting preservation efforts.
The methodology was applied to the Choirokoitia site in Cyprus, listed as a UNESCO World Heritage Site as one of the best-preserved Neolithic sites in the Mediterranean. At Choirokoitia, the study examines the potential risk of rockfall, as the topology of the site is vulnerable to movements resulting from extreme climate change as well as from daily and seasonal stressing actions. By integrating advanced technologies and community-driven monitoring, TRIQUETRA ensures a holistic approach to safeguarding cultural heritage. The project creates a replicable framework that enhances risk assessment and promotes active participation in preservation efforts, offering scalable benefits for cultural heritage sites worldwide.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Advancing Cultural Heritage Preservation with Computer Vision for Remote and in-Situ Visual Inspection and Monitoring

Authors: Nikolaos Schetakis, Mr. Napoleon Papoutsakis, Mr. Anastasios Litsas, Ms Vasiliki Charalampopoulou, Dr. Georgios Xekalakis, Professor Georgios Stavrakakis, Professor Georgios E. Stavroulakis
Affiliations: Technical University of Crete, ALMA SISTEMI SRL, QUANTUM INNOVATION PC, Geomatics (Cyprus) Ltd, Department of Civil Engineering, Frederick University
Cultural Heritage (CH) buildings represent invaluable architectural and historical assets, yet they are increasingly vulnerable to structural degradation from natural disasters, environmental factors and the effects of climate change. Assessing and monitoring these buildings' structural stability is crucial for guiding effective preservation and adaptation strategies. This work introduces an innovative approach to assessing CH buildings at risk from various impacts by leveraging advanced computer vision tools to analyze facade images of these structures. Utilizing foundational models like Segment-Anything Model (SAM) and Grounding DINO, Vision Language Models (VLM) like GPT-4 and ViLT, and several YOLO neural networks, our approach precisely identifies and segments key indicators of structural integrity, including the number of floors, presence of openings, construction period, the material used in the construction, and visible signs of deterioration such as mold, peeling paint and delamination. Integrated within a mobile application, these tools streamline rapid, in-situ visual inspections and incorporate hybrid (FEMA-154, ISTOS) forms, enabling systematic data collection for resilience assessments and climate adaptation planning. Moreover, an automated platform integrated with the app enables remote visual inspection using crowdsourced images from sources like Google Street View, thereby extending monitoring capabilities to hard-to-access locations and aiding in proactive risk management while providing a means to validate in-situ inspections. The app has already been deployed in the cities of Chania, Heraklion, and Rhodes in Greece, demonstrating its practical applicability. With an estimated cost of just €20 per city and a runtime of approximately three hours, the platform efficiently processes large datasets, generating detailed heatmaps of structural features overlaid on a geographical map.
These visual outputs highlight key vulnerabilities and areas requiring immediate attention, enabling stakeholders to make informed decisions regarding conservation priorities. By combining in-field and remote inspection capabilities, this approach supports a sustainable strategy for CH building preservation, enhancing resilience to natural hazards and informing data-driven climate adaptation efforts.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Leveraging GIS and Satellite Remote Sensing for Earthquake Susceptibility Mapping and Disaster Management Planning at the wider area of Ancient Olympia archaeological site

Authors: PhD(c) NIkolaos Stasinos, Dr Stavroula Alatza, Dr Nikolaos Stathopoulos, MSc Maria (Marietta) Papakonstantinou, MSc Melpomeni Zoka, PhD(c) Emmanouil Salas, Charalampos Kontoes
Affiliations: Operational Unit BEYOND-Centre of EO Research & Satellite Remote Sensing, Institute for Space Applications and Remote Sensing, National Observatory of Athens
Over the last decades, cultural heritage sites worldwide have become increasingly susceptible to natural hazards, earthquakes among them, posing significant risks to their preservation and to visitor safety. Ancient Olympia and its archaeological site attract millions of tourists each year, with August the peak month (according to the Hellenic Statistical Authority). Its location in a seismically prone area with a complex tectonic setting raises concern, and disaster risk reduction and emergency planning are needed to protect this invaluable heritage site. To assist in mitigating such risks, the current study proposes an integrated methodology for earthquake susceptibility mapping based on advanced geospatial technologies and methods, such as GIS, remote sensing, and Multi-Criteria Decision Analysis (MCDA). It intends to provide data-driven decision support for assessing earthquake susceptibility, improving disaster response, and increasing resilience to seismic hazard events for the archaeological site and its wider area. In this study, with the help of GIS and MCDA, many factors contributing to earthquake susceptibility are integrated into one weighted model. These include distance from faults, ground shaking intensity, soil types, slope stability, and the characteristics of buildings and infrastructure. The first important factor is proximity to a fault line, since areas near active faults are prone to strong ground acceleration; to investigate those areas, buffer zones of varying distances are created around the faults. The geology of the area also plays a crucial role in seismic susceptibility: soil type governs seismic wave propagation, and by classifying soil types the areas prone to seismic amplification are identified, enabling a refined assessment of earthquake intensity. Slope is also important.
A Digital Elevation Model (DEM) is used to analyze terrain morphology, highlighting steep areas where risk increases during seismic events. The vulnerability of settlements and buildings in the wider area of the archaeological site of Ancient Olympia was examined: using data on building characteristics (construction material, height, and age), high-risk zones were determined. Proximity to critical infrastructure such as hospitals and emergency services was also inspected; this GIS-based proximity analysis supports evacuation and first-response planning by identifying points where delays may occur. An additional criterion for identifying areas affected by ground deformation phenomena is Line of Sight (LOS) displacement derived with Persistent Scatterer Interferometry on Sentinel-1 images. Specifically, LOS displacements for the broader area of Ancient Olympia from the Greece InSAR project were incorporated in the analysis. Greece InSAR is a nationwide LOS displacement mapping product, generated with a fully automated, parallelized PSI (P-PSI) processing chain developed in the Operational Unit BEYOND of IAASARS/NOA. By including ground deformation in the earthquake susceptibility mapping process, this study provides insights not only into current risk levels but also into ongoing landscape changes that may influence seismic hazard in the future. The final susceptibility map is essential for developing a comprehensive emergency and response plan for the archaeological site of Ancient Olympia and its wider area, allowing evacuation routes to be planned away from dangerous zones so that visitors and residents can evacuate safely. The areas that are most vulnerable according to the factors analyzed in the weighted model are mapped.
The study presented here combines GIS, satellite remote sensing, and MCDA to assess earthquake susceptibility in the broader area of Ancient Olympia’s archaeological site. Combining these tools can greatly support disaster preparedness and response. Given that seismic and other geohazards continue to impact cultural sites, the methodologies developed in this study may serve efforts to protect such sites against future threats. This study is supported by the European Union’s Caroline Herschel Framework Partnership Agreement on Copernicus User Uptake under grant agreement No 2022/SI2.879178 & S12.879180/SGA20 Implementing the FPA 275/G/GRO/COPE/17/10042, project FPCUP (Framework Partnership Agreement on Copernicus User Uptake).
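The weighted MCDA overlay described in this abstract can be sketched as a normalised weighted sum of factor rasters. The factor names and weights below are illustrative assumptions, not the values used in the study:

```python
def weighted_susceptibility(factors, weights):
    """Combine normalised factor rasters (values in [0, 1], where 1 is
    most susceptible) into one susceptibility score per grid cell.
    `factors` maps factor name -> 2-D grid; `weights` must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    names = list(factors)
    rows, cols = len(factors[names[0]]), len(factors[names[0]][0])
    return [[sum(weights[n] * factors[n][r][c] for n in names)
             for c in range(cols)] for r in range(rows)]

# Illustrative 1x2 factor grids and AHP-style weights (hypothetical)
factors = {
    "fault_proximity": [[0.9, 0.2]],
    "soil_amplification": [[0.7, 0.3]],
    "slope": [[0.4, 0.1]],
    "building_vulnerability": [[0.8, 0.2]],
}
weights = {"fault_proximity": 0.35, "soil_amplification": 0.25,
           "slope": 0.15, "building_vulnerability": 0.25}
score = weighted_susceptibility(factors, weights)
```

In a GIS the same operation is a raster calculator expression over reclassified layers; weights are typically derived by pairwise comparison (AHP) rather than set by hand.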

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: D.02.02 - POSTER - Advances of Machine Learning (ML) methods for Cryosphere applications

Over the past decade, the AI4EO (Artificial Intelligence for Earth Observation) community has been constantly growing and has established a notable position in the remote sensing domain. This growth is due to clear advantages, such as the ability to harvest vast amounts of archived data, faster processing times, and enhanced spatial analysis, among other benefits.
Fully automated and data-driven approaches are essential to harness the enormous amount of available data. Modern techniques, such as Explainable AI and physics-aware machine learning, enable the analysis of relationships on the Earth's surface, interactions between different domains, and physical dependencies through data. Furthermore, with the rise of self-supervised learning methods, the lack of usable annotations or labels becomes less relevant.
Despite these advancements, there remains a significant gap in publications for cryosphere applications, and AI4EO research has yet to realize its full potential in polar sciences and cryosphere applications.
Implementing AI and ML in Earth Observation is not straightforward due to the unique characteristics of remote sensing data compared to other AI applications. Thus, it demands the knowledge of geospatial and data sciences on top of core domain expertise. If we bring these groups together, there is immense potential for innovation and growth in this field, and ongoing research promises to overcome these challenges and unlock new insights in cryosphere science.
The focus of this session is on the adaptation and development of ML methods, including uncertainty quantification, to address all aspects of satellite remote sensing and adjunct methods of the cryosphere (i.e. sea ice, glaciers, ice sheets, snow, ocean-ice-atmosphere interactions, and thermodynamic modeling), while highlighting the difficulties, special cases, and challenges we may encounter.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: AI4IS - Towards AI-based forecasting of Antarctic Ice shelf calving

Authors: Daniele Fantin, Mr Jacob Hay, Mr Hamze Issa, David Parkes, Dr Malcom McMillan, Amber Leeson, Katie Miles, Kate Briggs, Dr Jan Wuite, Dr. Thomas Nagler
Affiliations: Science And Technology AS, Lancaster University, ENVEO
AI4IS is an ESA-funded project to develop AI-based forecasting for calving of Antarctic ice shelves. Understanding ice shelf instability is one of the most important open questions in polar science, due to its capacity to drive rapid sea level change at the high end of current climate projections. Currently, however, forecasting future instability is notoriously difficult because of the complex, non-linear forcing mechanisms controlling an ice shelf’s response, including calving at the ice margin. As a result, the timescales for ice shelf collapse form one of the largest uncertainties in modelling future sea level scenarios. The dynamic behaviour of Antarctic ice shelves has direct consequences for sea level rise predictions, so ice shelf calving events need not only to be recorded but also predicted. In the AI4IS project, we aim to test the use of a multivariate data cube that combines a large variety of EO and modelled ice shelf datasets (wind speed, surface mass balance, ice velocity, strain rate, surface elevation, supraglacial lakes, ice shelf coastlines, and more) to feed a convolutional neural network trained to predict the occurrence of calving events. Besides existing available and routinely produced products, such as ice velocity, surface elevation change and surface melt, the project also aims to develop and generate novel products on strain rate and ice shelf damage in order to enhance and expand the existing portfolio of EO data products characterising critical ice shelf parameters. To ensure suitability for the AI model, the datasets, which come with a wide variety of native spatial resolutions, are combined into a homogeneous datacube using a Gaussian random field process. At each monthly timestep, observations from each of the input datasets are regressed onto a single shared grid, representing a pixel structure used as input to the AI processing.
The Gaussian random field approach to this regression effectively deals with multiple input resolutions, accounts for uncertainty in the input datasets, and allows for infilling of missing data. A grid of uncertainties derived from this process for each input dataset is produced alongside the processed version of each dataset, which can be fed into the AI model as additional pixel information. The model then processes the data cube and trains to locate calving events of the Antarctic ice shelves and flag the time and location of possible events. Using a modified U-Net deep learning architecture built on the SNTML framework, the model will be trained on labels from the manually delineated Qi et al. (2021) calving event dataset. Our model of iceberg calving will act as a valuable first step towards data-driven forecasting of future ice shelf calving events and, ultimately, ice shelf instability. The proposed work therefore has a high scientific and societal impact. Additionally, our project is designed with the long-term vision of incorporating our data-model framework into a future digital twin of Antarctica. This opens up a significant opportunity for future work to apply our model to ‘what if?’ scenarios and, ultimately, to inform downstream models, such as ocean circulation models, in a coupling between earth system components. Qi, M., Liu, Y., Liu, J., Cheng, X., Lin, Y., Feng, Q., Shen, Q., and Yu, Z.: A 15-year circum-Antarctic iceberg calving dataset derived from continuous satellite observations, Earth Syst. Sci. Data, 13, 4583–4601, https://doi.org/10.5194/essd-13-4583-2021, 2021
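The regridding step described above can be illustrated with a simple Gaussian-kernel weighted average, a deliberately crude stand-in for the project's full Gaussian random field regression, which likewise downweights distant observations and naturally infills gaps:

```python
import math

def gaussian_regrid(obs, grid_points, length_scale):
    """Interpolate scattered observations onto a shared grid with a
    Gaussian kernel (simplified stand-in for GRF regression).
    `obs` is a list of (x, y, value) tuples; returns one value per
    grid point, NaN where no observation contributes."""
    out = []
    for gx, gy in grid_points:
        wsum = vsum = 0.0
        for x, y, v in obs:
            w = math.exp(-((x - gx) ** 2 + (y - gy) ** 2)
                         / (2.0 * length_scale ** 2))
            wsum += w
            vsum += w * v
        out.append(vsum / wsum if wsum > 0 else float("nan"))
    return out
```

Unlike this sketch, a full GRF/Gaussian-process regression also propagates per-observation uncertainty and returns a posterior variance grid, which is what allows the uncertainty layers described above to be fed to the model as extra pixel information.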

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: Evaluating deep learning approaches for automated rock glacier mapping using Earth observation data

Authors: Vanessa Streifeneder, Dr. Prof. Benjamin Aubrey Robson, Dr. Daniel Hölbling, Elena Nafieva, Dr. Zahra Dabiri, Emma Hauglin, Lorena Abad
Affiliations: Department of Geoinformatics-Z_GIS, University of Salzburg, Department of Earth Science, University of Bergen
Rock glaciers are tongue-shaped complex landforms that indicate present or past permafrost conditions, commonly found in many high-latitude and/or high-elevation environments. They consist of poorly sorted angular debris and ice-rich sediments that form through gravity-driven creep. Information about the location, extent and characteristics of rock glaciers is important for several reasons, for example, for estimating their hydrological significance as a water resource (e.g., for alpine huts) and assessing the geohazard potential of rock glacier destabilisation due to climate change. Unlike other cryosphere components, such as snow and glaciers, which are spectrally distinct from the surrounding terrain, rock glaciers are spectrally inseparable from their surroundings. This makes them difficult to automatically detect and delineate from Earth observation (EO) data. Thus, rock glaciers are usually mapped through labor-intensive, subjective manual interpretation of EO data. This often results in inhomogeneous, incomplete, and inconsistent mapping with large differences among regions. Therefore, there is a need for objective, automated, and efficient methods to map rock glaciers using globally applicable satellite datasets, such as Sentinel-2. Modern machine learning methods, such as deep learning (DL), offer new possibilities for automating mapping tasks. DL works by recognising recurring patterns and textures through artificial neural networks. However, limited research has been done on DL-based rock glacier mapping, and there is no consensus on the best parameters for this purpose. Moreover, features with similar surface textures to rock glaciers, such as landslides and avalanche or fluvial deposits, can be misclassified by DL models. Therefore, there is a need to conduct a thorough investigation of the DL model architectures and input data types that produce the best results for mapping rock glaciers.
In the project “ROGER - EO-based rock glacier mapping and characterisation”, we tested different DL models (e.g., U-Net, DeepLabV3) with different backbones (resnet, deepnet) trained on different input layers (high-resolution three-band and four-band optical data, as well as a combination of information derived from DEM data and optical data) to find the most promising combination. We assessed the performance, robustness, and reliability of the different DL models for automated EO-based rock glacier mapping in study areas in Austria and Svalbard, Norway, and quantified the accuracy of the results in comparison with reference data. With our research, we aim to make a significant contribution to cryospheric research by evaluating methods for the automated identification and characterisation of rock glaciers, thereby expanding our understanding of the potential of DL to efficiently map complex natural phenomena using EO data. The findings will contribute to increasing the trustworthiness of DL methods, which is critical for many applications and especially when communicating and explaining results to stakeholders and decision makers.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: Towards representation learning of radar altimeter waveforms for sea ice surface classification by stages of development

Authors: Lena Happ, Sonali Patil, Dr Stefan Hendricks, Riccardo Fellegara, Lars Kaleschke, Andreas Gerndt
Affiliations: Alfred Wegener Institute, German Aerospace Center (DLR), University of Bremen
Satellite radar altimeters provide crucial insights into polar oceans and their sea ice cover, enabling the estimation of sea level, sea ice freeboard, and thickness. These retrieval algorithms depend on accurate discrimination between radar altimeter waveforms from sea ice and ocean surfaces in heterogeneous and dynamic surface conditions. A further and less mature step is classifying different sea ice types in addition to the ice/ocean discrimination. We aim to develop new methods for a novel multi-category sea ice and ocean surface classification directly from satellite radar altimeter data to improve sea ice climate data records. Traditional waveform representations are limited to a small set of parameters, leading to information loss. Moreover, machine learning models for sea ice classification often depend on supervised training, which is vulnerable to uncertainties in labeled data, such as uniform ice chart polygons that do not represent the true variability of sea ice surface types. To address these limitations, we explore self-supervised learning methods to optimize waveform representations, which can capture more detailed information for a classification with finer granularity. Furthermore, they do not require labeled data, which is not available at the spatial coverage and resolution of radar altimeter waveforms. We apply these techniques to SRAL data from the Sentinel-3 mission to produce a representation that is optimized for the downstream task of sea ice surface classification. We show that the information preserved in the latent space of an auto-encoder enhances the feature space of traditional waveform parameters, improving the subsequent classification process, when comparing our results to available sea ice charts and other remote sensing products. Our results demonstrate better generalization compared to supervised approaches.
However, training only on the reconstruction loss of the auto-encoder lacks a mechanism to explicitly separate dissimilar waveforms. This is directly addressed by the loss of contrastive learning. To apply contrastive learning in a self-supervised manner the positive and negative pairs have to be defined not based on class labels but on information inherent to the input space, information directly associated with the radar altimeter waveforms. Consequently, our adaptation of contrastive learning approaches to radar altimeter data in polar ocean regions is focused on this data engineering part, incorporating expert knowledge from the sea ice domain.
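The contrastive objective sketched above can be illustrated with a minimal NT-Xent loss, the standard formulation used by many contrastive frameworks. The toy "views" and the identity encoder below are placeholder assumptions for illustration, not the authors' waveform data engineering:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.
    z1, z2: (N, d) embeddings of two views of the same N waveforms;
    row i of z1 and row i of z2 form a positive pair, all other rows
    in the batch act as negatives."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature                        # (2N, 2N)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    # index of each row's positive partner in the concatenated batch
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Label-free positive pairs: two noisy views of the same toy "echo".
rng = np.random.default_rng(0)
wave = rng.standard_normal((8, 16))
view1 = wave + 0.05 * rng.standard_normal(wave.shape)
view2 = wave + 0.05 * rng.standard_normal(wave.shape)
# In practice an encoder maps views to embeddings; identity here for brevity.
loss = nt_xent_loss(view1, view2)
```

Matched views yield a lower loss than mismatched ones, which is exactly the mechanism that pulls similar waveforms together and pushes dissimilar ones apart.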
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: Bayesian Deep Learning for Enhanced Arctic Summer Sea Ice Surface Classification

Authors: Weibin Chen, Michel Tsamados, Dr Rosemary Willatt, Dr So Takao, Sam Powell
Affiliations: University College London, California Institute of Technology
Sentinel-3A (launched February 2016) and Sentinel-3B (April 2018) provide high-resolution Ku-band radar altimetry up to 81°N. Sentinel-3's unique combination of synthetic aperture radar altimetry (SRAL) and Ocean and Land Colour Instrument (OLCI) imagery provides coincident optical and radar data. Leveraging this synergy, we apply deep learning to classify sea ice and leads. To address the challenge of lead detection in summer, we propose a Bayesian deep learning framework that incorporates auxiliary melt-pond fraction data, enhancing interpretability and improving discrimination between leads and melt ponds. This may provide insights into the underlying mechanisms determining how different waveforms are returned from complicated surfaces such as leads and melt ponds. This new approach to summer lead detection will significantly enhance the retrieval of snow depth and sea ice thickness during summer, an area that remains relatively underexplored.
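One common way to obtain predictive uncertainty from a Bayesian deep learning model is Monte Carlo dropout; the numpy sketch below illustrates the idea on a toy logistic "lead vs. ice" classifier. The feature names and weights are purely illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_dropout_predict(x, W, b, p_drop=0.5, n_samples=100):
    """Monte Carlo dropout: keep dropout active at inference and average
    many stochastic forward passes; the spread of the predictions serves
    as a simple predictive-uncertainty proxy.
    x: (d,) feature vector; W: (d,) weights of a toy logistic classifier."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) > p_drop    # Bernoulli dropout mask
        h = x * mask / (1 - p_drop)            # inverted-dropout scaling
        preds.append(1 / (1 + np.exp(-(h @ W + b))))
    preds = np.array(preds)
    return preds.mean(), preds.std()           # probability, uncertainty

# Hypothetical 4-feature input (e.g. waveform peakiness, stack std.,
# OLCI reflectance, melt-pond fraction) -- names are assumptions only.
W = np.array([2.0, -1.0, 0.5, -1.5])
x_lead = np.array([3.0, 0.2, 0.1, 0.1])
p, sigma = mc_dropout_predict(x_lead, W, b=-1.0)
```

The auxiliary melt-pond fraction enters simply as one more input feature; samples where it dominates the prediction would receive wide, uncertain posteriors rather than a confident lead label.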
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: Cryo2S1: Mapping sea ice freeboard from CryoSat-2 in Sentinel-1 SAR imagery using deep learning

Authors: Andreas Stokholm, Jack Christopher Landy, Roberto Saldo, Tore Wulf, Kristian Sørensen, Anton Korosov, Sine Munk Hvidegaard
Affiliations: DTU Space, Arctic University of Norway, UiT, Danish Meteorological Institute, Nansen Environmental and Remote Sensing Center
Mapping the ever-changing sea ice in the cold and remote Arctic is critical for safe and efficient maritime navigation, as ships can get trapped and capsize. Sea ice is also vital to monitor to understand the state of the changing climate and is a critical component of climate models, reflecting sunlight towards space and acting as an insulating material between the ocean surface and the atmosphere. Sea ice observations can further be used as input to weather forecasts to improve accuracy. Sea ice is mapped by professional sea ice analysts at national ice services, such as the Greenland Ice Service at the Danish Meteorological Institute (DMI), using radar images acquired by satellites orbiting the Earth [1], such as the Sentinel-1 (S1) [2] satellite constellation. Ice analysts manually interpret radar images of sea ice surfaces using their in-depth knowledge and experience to create sea ice charts [1]. The sea ice charts provide contemporary information about the sea ice conditions in the monitored region that allows, for instance, fishing and freight transport in remote areas and tourist ships to traverse or circumnavigate the sea ice. Therefore, sea ice mapping is an important economic enabler. A challenge for the S1 radar measurements is that the radar cannot penetrate far into the sea ice and is scattered/reflected by the surface. Therefore, an image only provides information about the surface of the sea ice. The ice analysts use these images to identify and classify sea ice conditions. One of these conditions described in the sea ice charts is the stage of development, representing the type of sea ice - an indicator of its thickness. It supports decision-making when determining whether certain types of ships can break the ice. However, the manual charting process uses sea ice classes, defined by the International Ice Charting Working Group (IICWG) on behalf of the World Meteorological Organization (WMO). 
These ice classes have considerable uncertainties associated with them, as the thickness within an individual class can vary from, e.g. 30-200cm or 70-120cm [3]. In addition, this is suboptimal for climate models, as continuous values are preferred instead of discrete classes, which restricts the use of the information in this context. Currently, deep-learning models that produce stage-of-development information from S1 radar images exist [4]. However, given the supervised nature of the models developed, the same inherent limitations of the sea ice charts, such as large discrete thickness classes, are present in the model outputs. The current state-of-the-art sea ice thickness retrieval method relies on altimeter satellites, such as the ESA’s Earth Explorer satellite mission CryoSat-2 (CS2) [5]. Altimeters measure the distance between the ocean and the sea ice, known as the sea ice freeboard. In the case of a Ku-band radar altimeter like CS2, the radar response is assumed to penetrate the snow and return from the sea ice surface. Since the true penetration is unknown, and the radar wave propagation is also delayed as the signal passes through snow, the measured quantity is known as the radar freeboard [6]. The radar freeboard can be used to estimate the sea ice thickness by calculating the sea ice's buoyancy based on snow and ice density estimates, and auxiliary snow depth information [6]. This method can retrieve the sea ice thickness with an accuracy of around 20-40% [5]. However, the area mapped by CS2 is very narrow, covering only 1600m across the orbit and is, therefore, only adequate for monthly monitoring of sea ice thickness in the Arctic - insufficient for many applications, such as maritime navigation, and leaves knowledge gaps in our data records for sea ice thickness on shorter timescales. 
On the other hand, S1 SAR has a coverage of 400km in Extra Wide mode across the orbit, enabling monitoring to a much larger extent and with repeating coverage every week. We propose to circumvent the limitations of CS2 and S1 by training supervised deep-learning convolutional neural network (CNN) models to recognise sea ice textures in S1 SAR images and assign sea ice freeboard estimates acquired by CS2. This enables an effective approach to transfer information acquired by CS2 to S1, which we call Cryo2S1. Our approach involves initially training a CNN model to map sea ice concentration using the ASIDv2+ dataset [7] - a dataset with several thousands of co-located S1 SAR and sea ice charts from the Canadian Ice Service and Greenland Ice Service. Afterwards, the model is fine-tuned to a curated winter Cryo2S1 dataset that contains collocated S1 SAR images and along-track CS2 measurements during 2020-2021 with several thousands of scenes. This approach allows the model to get accustomed to the S1 SAR data before training on the data-sparse CS2 data. Here, we present the preliminary results of our study. References: [1] R. Saldo et al., “AI4Arctic / ASIP Sea Ice Dataset - Version 2,” Technical University of Denmark, Jun. 03, 2021. See manual. https://data.dtu.dk/articles/dataset/AI4Arctic_ASIP_Sea_Ice_Dataset_-_version_2/13011134/3 [2] R. Torres et al., “GMES Sentinel-1 mission,” Remote Sensing of Environment, vol. 120, pp. 9–24, May 2012, doi: 10.1016/j.rse.2011.05.028. [3] International Ice Charting Working Group’s Ad Hoc Format Team for the World Meteorological Organization, “SIGRID-3 :a vector archive format for sea ice charts,” Mar. 2002. https://library.wmo.int/records/item/37171-sigrid-3-a-vector-archive-format-for-sea-ice-charts (accessed Jan. 05, 2024). [4] A. R. Stokholm et al., “The AutoICE Challenge,” Copernicus GmbH, Dec. 2023. Accessed: Jan. 05, 2024. [Online]. 
Available: http://dx.doi.org/10.5194/egusphere-2023-2648 [5] Landy, J.C., Petty, A.A., Tsamados, M. and Stroeve, J.C., 2020. Sea ice roughness overlooked as a key source of uncertainty in CryoSat‐2 ice freeboard retrievals. Journal of Geophysical Research: Oceans, 125(5), p.e2019JC015820. [6] R. Ricker, S. Hendricks, V. Helm, H. Skourup, and M. Davidson, “Sensitivity of CryoSat-2 Arctic sea-ice freeboard and thickness on radar-waveform interpretation,” The Cryosphere, vol. 8, no. 4, pp. 1607–1622, Aug. 2014, doi: 10.5194/tc-8-1607-2014. [7] Wulf T, Buus-Hinkler J, Singha S, Shi H, Kreiner MB. Pan-Arctic sea ice concentration from SAR and passive microwave. The Cryosphere. 2024 Nov 19;18(11):5277–300.
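Because CS2 covers only a narrow swath, the Cryo2S1 fine-tuning stage effectively optimizes a loss restricted to along-track pixels. A minimal sketch of such a masked loss, with toy values that are not the authors' implementation:

```python
import numpy as np

def masked_mse(pred, target, mask):
    """Loss for sparse along-track labels: the CS2 freeboard covers only
    a narrow ground track within each S1 scene, so the squared error is
    evaluated only where the mask flags a valid measurement."""
    return ((pred - target) ** 2)[mask].mean()

# Toy 8x8 "scene": model prediction vs. CS2 freeboard along one track.
rng = np.random.default_rng(1)
pred = rng.random((8, 8))                  # stand-in for a CNN output
target = np.full((8, 8), np.nan)
target[:, 3] = 0.2                         # along-track freeboard (toy values, m)
mask = ~np.isnan(target)
loss = masked_mse(pred, target, mask)
```

Pretraining the same network on dense sea ice concentration charts first, as the abstract describes, gives the encoder useful SAR texture features before this data-sparse objective takes over.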
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: Towards Sensor-Agnosticism for SAR-based Sea Ice Retrieval

Authors: Tore Wulf, Jørgen Buus-Hinkler, Suman Singha, Athanasios Athanasiadis, Ninna Juul Ligaard, Matilde Brandt Kreiner
Affiliations: Danish Meteorological Institute
The Arctic region is undergoing unprecedented changes due to anthropogenic warming. A fundamental prerequisite for anticipating and mitigating the impacts of these changes is the close monitoring of the Arctic sea ice. The retreat and thinning of the Arctic sea ice allow for an increase in human activity in the region, emphasizing the need for detailed, near real-time sea ice information as well as improved sea ice forecasts for maritime safety and planning. Satellite-borne synthetic aperture radar (SAR) sensors are particularly well-suited for retrieving detailed sea ice information due to their high spatial resolution (<100m), independence of solar illumination and all-weather capabilities. Since the advent of deep learning and sophisticated computer vision techniques in the domain of Earth Observation, many significant contributions have been made within the topic of SAR-based sea ice retrieval, including publicly available training datasets [1, 2, 3, 4] and data-driven sea ice retrieval models [5, 6, 7, 8, 9, 10]. While such contributions have advanced the field considerably, they tend to rely on a single SAR mission, most often Sentinel-1, without addressing the generalization capabilities of the retrieval models to other, similar SAR sensors. The loss of Sentinel-1B in December 2021 highlighted the vulnerability of operational sea ice retrieval models depending solely on a single SAR mission for monitoring the Arctic sea ice. With only Sentinel-1A remaining operational, the acquisition of imagery was significantly reduced, limiting update frequency and spatial coverage of the downstream sea ice products, and underscoring the need for sea ice retrieval models capable of leveraging data from multiple SAR platforms. 
A robust and consistent sensor-agnostic SAR-based sea ice retrieval model has potential to greatly improve the coverage and update frequency of the derived sea ice products in the Arctic, which could benefit not only the national ice services and their users, but also the sea ice modeling community by providing timely and high-resolution sea ice products for data assimilation. In this study, we investigate and quantify the generalization capability of the DMI-ASIP sea ice concentration retrieval algorithm [10], which has been trained on a vast dataset of Sentinel-1 imagery, to SAR imagery from the Radarsat Constellation Mission. We present training strategies to improve generalization from one SAR sensor to the other. Lastly, we demonstrate the improved utility of a retrieval model capable of generalizing to multiple SAR platforms. [1] Khaleghian, S., Lohse, J. P., Kræmer, T. (2020). Synthetic-Aperture Radar (SAR) based Ice types/Ice edge dataset for deep learning analysis [Data set]. DataverseNO. https://doi.org/10.18710/QAYI4O [2] Saldo, R., Brandt Kreiner, M., Buus-Hinkler, J., Pedersen, L. T., Malmgren-Hansen, D., Nielsen, A. A., & Skriver, H. (2020). AI4Arctic / ASIP Sea Ice Dataset - version 2 (Version 3) [Data set]. Technical University of Denmark. https://doi.org/10.11583/DTU.13011134.v3 [3] Hughes, N., & Amdal, F. (2021). ExtremeEarth Polar Use Case Training Data (2.0.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.4683174 [4] Buus-Hinkler, J., Wulf, T., Stokholm, A. R., Korosov, A., Saldo, R., Pedersen, L. T., Arthurs, D., Solberg, R., Longépé, N., &amp; Brandt Kreiner, M. (2022). AI4Arctic Sea Ice Challenge Dataset (Version 2) [Data set]. Danish Meteorological Institute. https://doi.org/10.11583/DTU.c.6244065.v2 [5] Wang, L., Scott, K. A., Xu, L., & Clausi, D. A. (2016). Sea ice concentration estimation during melt from dual-pol SAR scenes using deep convolutional neural networks: A case study. 
IEEE Transactions on Geoscience and Remote Sensing, 54(8), 4524–4533. https://doi.org/10.1109/TGRS.2016.2543660 [6] Lohse, J., Doulgeris, A. P., & Dierking, W. (2020). Mapping sea-ice types from Sentinel-1 considering the surface-type dependent effect of incidence angle. Annals of Glaciology, 61(83), 260–270. doi:10.1017/aog.2020.45 [7] Malmgren-Hansen, D., Pedersen, L. T., Nielsen, A. A., Kreiner, M. B., Saldo, R., Skriver, H., Lavelle, J., Buus-Hinkler, J., & Krane, K. H. (2021). A convolutional neural network architecture for Sentinel-1 and AMSR2 data fusion. IEEE Transactions on Geoscience and Remote Sensing, 59(3), 1890–1902. https://doi.org/10.1109/TGRS.2020.3004539 [8] Kortum, K., Singha, S., & Spreen, G. (2022). Robust multiseasonal ice classification from high-resolution X-band SAR. IEEE Transactions on Geoscience and Remote Sensing, 60, 1–12. https://doi.org/10.1109/TGRS.2022.3144731 [9] Stokholm, A., Buus-Hinkler, J., Wulf, T., Korosov, A., Saldo, R., Pedersen, L. T., Arthurs, D., Dragan, I., Modica, I., Pedro, J., Debien, A., Chen, X., Patel, M., Cantu, F. J. P., Turnes, J. N., Park, J., Xu, L., Scott, K. A., Clausi, D. A., ... Kreiner, M. B. (2024). The AutoICE Challenge. The Cryosphere, 18(8), 3471–3494. https://doi.org/10.5194/tc-18-3471-2024 [10] Wulf, T., Buus-Hinkler, J., Singha, S., Shi, H., & Kreiner, M. B. (2024). Pan-Arctic sea ice concentration from SAR and passive microwave. The Cryosphere, 18(11), 5277–5300. https://doi.org/10.5194/tc-18-5277-2024
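One simple ingredient of cross-sensor generalization is to normalize each sensor's backscatter with sensor-specific statistics before inference, so that images from different C-band missions occupy a comparable input distribution. The sketch below uses hypothetical statistics and is not necessarily the strategy used by DMI-ASIP:

```python
import numpy as np

def per_sensor_standardize(img, sensor, stats):
    """Z-score an image with per-sensor mean/std so a model trained on
    one SAR mission can ingest another mission's imagery on a common
    scale. stats: dict {sensor: (mean_dB, std_dB)}."""
    mu, sigma = stats[sensor]
    return (img - mu) / sigma

# Hypothetical per-sensor backscatter statistics (dB); illustrative only.
stats = {'S1': (-18.0, 5.0), 'RCM': (-16.5, 4.5)}
rng = np.random.default_rng(11)
s1_img = -18.0 + 5.0 * rng.standard_normal((4, 4))
rcm_img = -16.5 + 4.5 * rng.standard_normal((4, 4))
z1 = per_sensor_standardize(s1_img, 'S1', stats)
z2 = per_sensor_standardize(rcm_img, 'RCM', stats)
```

After this step both sensors feed the retrieval model in the same standardized units, which is a prerequisite for (though not a guarantee of) sensor-agnostic behaviour.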
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: Segmentation and classification of multi-temporal ICEYE sea ice SAR data

Authors: Juha Karvonen
Affiliations: Finnish Meteorological Institute
ICEYE is a commercial X-band SAR satellite constellation with over 30 SAR satellites in sun-synchronous orbits. In this study, the use of multi-temporal ICEYE SCAN mode and DWELL mode SAR imagery for sea ice characterization was investigated. SCAN mode imagery covers an area of 100 x 100 km with a resolution of 15 m, and DWELL mode imagery consists of sequences of 25 SAR frames over the same area acquired within a time period of 25 seconds. The DWELL mode imagery covers an area of 5 x 5 km with a sub-meter resolution. ICEYE operates at X-band and VV polarization. The SCAN mode imagery was acquired with a relatively short temporal difference of a few hours between temporally adjacent images over a test area in the Baltic Sea during a two-week study period in mid-February 2024. The total number of SCAN mode ICEYE images acquired during the test period was 49 and the number of DWELL mode images was three. The SCAN mode imagery was analyzed by first applying an ice drift estimation algorithm to estimate ice drift between image pairs formed by two temporally adjacent images. After the drift estimation, a predicted image based on the first image of the image pair and the ice drift vector field was generated. Three different cross-correlations were then computed between the two images of each SAR image pair and the predicted image. These cross-correlations, in addition to calibrated SAR backscattering and the estimated ice drift magnitude, were then used to segment and classify sea ice by applying a U-net neural network. The segmentation training reference data were generated by clustering the segment medians of the above-mentioned five quantities. The daily FMI Baltic Sea ice charts were used as reference data. The ice charts are made by trained ice analysts and they present uniform ice areas as polygons. Multiple sea ice parameters are assigned to each ice chart polygon by the ice analysts.
These parameters include sea ice concentration, sea ice thickness and degree of sea ice deformation. The segments used in the training data generation for the SAR segmentation were the polygons of the ice charts co-registered with the SAR imagery. Degree of deformation classification using the U-net was also studied. The degree of deformation training data and reference data were also based on the co-registered ice charts and their polygon-wise degree of deformation. The DWELL mode imagery was analyzed by studying statistical changes within the 25 s acquisition period. Also, decompositions based on principal component analysis (PCA) and independent component analysis (ICA) were applied to extract the most significant information from the 25 s image time series, and the results were analyzed. The results indicate that multi-temporal X-band SAR data, in addition to accurate short-time ice drift estimates, provides more information on the sea ice properties than single-image SAR backscattering alone. This information can be utilized in local near-real-time sea ice monitoring in ice charting and fine-scale local automated sea ice products.
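The PCA decomposition of the 25-frame DWELL sequences can be sketched with a plain SVD; the toy stack below (a static scene plus frame-to-frame fluctuation) stands in for the real imagery:

```python
import numpy as np

def pca_decompose(frames, n_components=3):
    """PCA of a SAR frame time series via SVD.
    frames: (T, H, W) stack (e.g. the 25 DWELL-mode frames). Each pixel's
    T-sample time series is decomposed into orthogonal components; the
    leading components capture the dominant temporal variation after the
    stable backscatter (the temporal mean) is removed."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W)
    X = X - X.mean(axis=0)                       # remove per-pixel temporal mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components].reshape(n_components, H, W)
    explained = S[:n_components] ** 2 / (S ** 2).sum()
    return components, explained

# Toy stack: a static scene plus small frame-to-frame speckle variation.
rng = np.random.default_rng(7)
scene = rng.random((16, 16))
stack = scene[None] + 0.01 * rng.standard_normal((25, 16, 16))
comps, var = pca_decompose(stack)
```

ICA would be applied analogously on the same flattened (T, H·W) matrix, trading orthogonality for statistical independence of the extracted components.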
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: EO-based Greenland Surface Mass Balance Using Deep Learning

Authors: Anna Puggaard, Anne Solgaard, Louise Sørensen, Ruth Mottram, Sebastian Simonsen
Affiliations: Technical University of Denmark, Danish Meteorological Institute, Geological Survey of Denmark and Greenland
The surface mass balance (SMB) of the Greenland Ice Sheet plays a critical role in understanding the implications of climate change and its contribution to sea level rise. In recent years, developments in deep learning have proved helpful in estimating both surface processes and dynamics of ice sheets. While satellite EO datasets provide valuable insights into individual surface processes, an entirely data-driven SMB at finer scales beyond basin-wide estimates remains challenging. EO datasets have varying spatiotemporal resolutions and often come on non-Euclidean grids, further complicating efforts to integrate observational data into a unified framework using traditional numerical models. Here, we present a novel deep learning model that combines ERA5 atmospheric reanalysis fields with EO-based datasets, effectively integrating data with different spatiotemporal resolutions to derive a data-driven SMB product for the Greenland ice sheet. Leveraging daily fields of temperature, surface pressure, shortwave and longwave radiation, and precipitation from ERA5, the model captures daily fluctuations in SMB with high temporal resolution. Additionally, it incorporates albedo variations from MODIS, melt patterns from ASCAT, surface elevation changes from CryoSat, and ice velocity from Sentinel-1. The deep learning model is trained against independent satellite observations from GRACE and GRACE-FO alongside discharge estimates derived from Sentinel-1, ensuring that the modelled SMB is consistent with the observed basin-scale mass balance once discharge is accounted for. An EO-based SMB product enables direct benchmarking of surface mass balance models, with validation beyond the basin scale.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: Extending Glacier Calving Front Segmentation with Spatiotemporal Learning Techniques

Authors: Marcel Dreier, Nora Gourmelon, Dakota Pyles, Prof. Dr. Matthias Braun, Dr. Thorsten Seehaus, Prof. Dr. Andreas Maier, Dr. Vincent
Affiliations: Pattern Recognition Lab, Institut für Geographie
Glacier calving is the process by which the ice breaks off from a lake- or marine-terminating glacier into the ocean and it is a major contributor to global mean sea level rise. Hence, monitoring calving fronts – the boundary between the sea and the glacier – is necessary to accurately estimate and calculate the ice mass released into the ocean. To facilitate a continuous and large-scale observation of calving fronts, scientists commonly rely on satellite imagery. Synthetic Aperture Radar imagery is especially suitable for this task because it does not require solar illumination, unlike optical satellites. Therefore, it can also acquire images during the polar night. With such large-scale monitoring, the cost and time of manually processing and interpreting the acquired data significantly increases. Thus, techniques to automate Calving Front Segmentation have emerged in recent years. Deep learning models are particularly prevalent, as they achieve the best results with an error of less than 400m compared to a human annotator [1]. While these models accomplish considerable results, they commonly process only a single satellite image at a time. Thus, they cannot grasp the temporal context between a series of images from the same glacier, limiting their performance. In our work, we plan to address this issue by exploring spatiotemporal learning for calving front segmentation. Spatiotemporal learning has already shown great success in segmentation tasks for satellite image time series [2], suggesting similar beneficial effects for calving front segmentation. However, a crucial difference between the two tasks is that the calving front is actively moving over time. Thus, typical spatiotemporal learning models are not directly applicable for calving front segmentation as they limit themselves to one prediction per time series. 
Only very few approaches examine multiple predictions per time series, and to the best of our knowledge, none explore Synthetic Aperture Radar imagery or calving front segmentation. To combine spatiotemporal learning with calving front segmentation, we extend current state-of-the-art models for calving front segmentation with spatiotemporal learning techniques. In particular, we design multiple approaches to exploit the spatiotemporal information and validate them in a series of experiments. To provide a fair and accurate analysis of our results, we conduct all our experiments and evaluations on the CaFFe [3] benchmark dataset. [1] F. Wu et al., "Contextual HookFormer for Glacier Calving Front Segmentation," in IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1-15, 2024, Art no. 5205915, doi: 10.1109/TGRS.2024.3368215. [2] Tarasiou, M., Chavez, E., & Zafeiriou, S. (2023). ViTs for SITS: Vision Transformers for Satellite Image Time Series. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10418-10428). [3] Gourmelon, N., Seehaus, T., Braun, M., Maier, A., and Christlein, V.: Calving fronts and where to find them: a benchmark dataset and methodology for automatic glacier calving front extraction from synthetic aperture radar imagery, Earth Syst. Sci. Data, 14, 4287–4313, https://doi.org/10.5194/essd-14-4287-2022, 2022.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: Advancing Glacial Lake Mapping with Remote Sensing Geo-Foundation Models: A U-ViT Approach

Authors: Di Jiang, Mr. Shiyi Li, Irena Hajnsek, Prof. Muhammad Adnan Siddique
Affiliations: ETH Zurich, Microwave and Radar Institute, German Aerospace Center, Information Technology University
Glacial lakes are critical components of the cryosphere, serving as sensitive indicators of climate change and providing insights into glacier dynamics, mass balance, and sea-level projections. Despite advancements in machine learning for glacial lake mapping, significant challenges remain, including the detection of small lakes, interference from terrain and cloud shadows, complex image feature extraction, and the scarcity of labeled multi-sensor data. To address these challenges, we present a novel U-ViT model that leverages the IBM-NASA Prithvi Geo-Foundation Model (GFM) to advance glacial lake mapping. The U-ViT model adopts a U-shaped encoder-decoder architecture featuring enhanced multi-channel data fusion and global-local feature extraction. It integrates an Enhanced Squeeze-Excitation (ESE) block for flexible fine-tuning across various input dimensions and combines Inverted Bottleneck Blocks (IBB) to improve local feature representation. The model was trained on two distinct datasets: a Sentinel-1 and Sentinel-2 fusion dataset from North Pakistan (NPK) and a Gaofen-3 Synthetic Aperture Radar (SAR)-only dataset from West Greenland (WGL), addressing the challenges posed by diverse data modalities. Experimental results highlight the U-ViT model’s effectiveness, achieving an F1 score of 0.894 on the NPK dataset, significantly outperforming traditional CNN-based models with scores below 0.8. It excelled in detecting small lakes, segmenting boundaries precisely, and handling cloud-shadowed features compared to public datasets. Notably, the U-ViT demonstrated robust performance with a 50% reduction in training data, underscoring its potential for efficient learning in data-scarce tasks. However, its performance on the WGL dataset did not surpass that of DeepLabV3+, revealing limitations stemming from differences between pre-training and input data modalities. 
In conclusion, the U-ViT model offers a promising approach for leveraging GFMs in large-scale glacial lake mapping, with the Prithvi GFM demonstrating considerable potential for advancing cryosphere research. Future work should focus on developing a comprehensive benchmark dataset that spans diverse regions and temporal scales to facilitate systematic evaluations and foster the development of more generalized models.
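The Enhanced Squeeze-Excitation block described above extends the standard squeeze-and-excitation mechanism; for reference, the vanilla SE operation can be sketched as follows (toy shapes and illustrative weights, not the authors' ESE variant):

```python
import numpy as np

def squeeze_excitation(x, w1, w2):
    """Standard squeeze-and-excitation: global-average-pool each channel
    ("squeeze"), pass the channel descriptor through a small bottleneck
    MLP ("excitation"), and rescale the channels by the resulting
    sigmoid gates.
    x: (C, H, W) feature map; w1: (C, C//r); w2: (C//r, C)."""
    s = x.mean(axis=(1, 2))                 # squeeze: (C,)
    h = np.maximum(s @ w1, 0.0)             # bottleneck MLP with ReLU
    gate = 1 / (1 + np.exp(-(h @ w2)))      # per-channel gates in (0, 1)
    return x * gate[:, None, None]          # channel-wise recalibration

rng = np.random.default_rng(5)
x = rng.random((8, 4, 4))                   # toy feature map, 8 channels
w1 = 0.1 * rng.standard_normal((8, 2))      # reduction ratio r = 4
w2 = 0.1 * rng.standard_normal((2, 8))
y = squeeze_excitation(x, w1, w2)
```

The gating makes the block cheap to fine-tune across varying input channel counts, which is presumably why a variant of it is used for the multi-channel fusion described in the abstract.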
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: Snow depth estimation over the Alps from Sentinel-1 polarimetry observations and weather variables using an eXtreme Gradient Booster

Authors: Lucas Boeykens, Devon Dunmire, Jonas-frederik Jans, Willem Waegeman, Ezra Beernaert, Gabriëlle De Lannoy, Niko E. C. Verhoest, Dr. Hans Lievens
Affiliations: Department of Environment, Ghent University, Department of Data Analysis and Mathematical Modelling, Ghent University, Department of Earth and Environmental Sciences, KU Leuven
Seasonal snow is vital for society, nature and the climate. It provides drinking water to more than 1 billion people worldwide, supports hydropower generation and aids agriculture. Moreover, the meltwater from snow moistens soils in spring and summer, contributes to streamflow and cools down mountain streams. Snow also reflects a large part of the incoming solar radiation, influencing the global energy balance. Yet, our understanding of the amount of mountainous snow, a major component of seasonal snow, remains incomplete. The increasing availability of free satellite products has led to the development of algorithms estimating mountainous snow depth (SD) using satellite observations. Some of these make use of backscatter observations at C-band, collected by the Sentinel-1 (S1) constellation. They rely on the correlation between snow properties and the backscatter in co- (VV) and cross- (VH) polarization, or use a cross-polarization ratio (i.e. the difference between VH and VV in dB-scale). Recent research, however, has shown that S1 C-band polarimetric variables also show potential for estimating snow properties. Higher correlations were observed between multi-year modeled snow depth and the polarimetric scattering parameter α at high elevations, as well as with the first Stokes parameter S₀ in forested areas, compared to the cross-polarization ratio and the backscatter in co-polarization, respectively. Nevertheless, relying solely on polarimetric variables in an empirical framework does not resolve the challenges associated with satellite-based observations. Shallow snow, wet snow conditions and forested areas impact C-band snow retrievals, making them less trustworthy under such conditions. This highlights the need for methodologies that can effectively address these limitations. In this research, the potential of the polarimetric variables for snow depth estimation for the vast area of the European Alps is explored.
To address the aforementioned challenges, a machine learning algorithm, namely the eXtreme Gradient Boosting (XGB) model, is used. A spatiotemporal cross validation was conducted, accounting for the spatial, temporal and spatiotemporal aspects of the snowpack. First results reveal comparable performance when using the polarimetric variables instead of the cross-polarization ratio and VV-polarized backscatter, with a tendency to underestimate large snow depths. However, feature importance analysis reveals the model's preference for the polarimetric scattering parameter over the cross-polarization ratio, while the opposite is observed when comparing S₀ with co-polarized backscatter. The ability of the model and the satellite-derived variables to capture interannual variability was further examined using snow depth maps of the Dischma valley in combination with a SHapley Additive exPlanations (SHAP)-value analysis. Additionally, various weather variables were incorporated as inputs to evaluate their potential to enhance the explanation of interannual variability. Here, SHAP-value analysis reveals similar added value of the satellite variables between different orbits, and highlights the potential of cumulative precipitation sums, although not all patterns explaining the observed snow depth (e.g. snow drift, local compaction...) are captured yet.
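Spatiotemporal cross validation relies on group-wise splitting so that held-out locations (or seasons) never appear in training; a minimal numpy sketch with hypothetical station IDs, not the study's actual split:

```python
import numpy as np

def group_kfold(groups, n_splits):
    """Leave-groups-out splits for spatiotemporal cross validation: all
    samples from a held-out set of groups (e.g. stations or winters) go
    to the test fold, so the model is never scored on locations or
    seasons it saw in training."""
    uniq = np.unique(groups)
    folds = np.array_split(uniq, n_splits)
    for held_out in folds:
        test = np.isin(groups, held_out)
        yield np.where(~test)[0], np.where(test)[0]

# Toy setup: 12 samples from 4 hypothetical snow stations.
stations = np.repeat([0, 1, 2, 3], 3)
for train_idx, test_idx in group_kfold(stations, n_splits=4):
    # no station leaks across the train/test boundary
    assert set(stations[train_idx]).isdisjoint(set(stations[test_idx]))
```

Grouping by station tests spatial generalization, grouping by winter tests temporal generalization, and grouping by both gives the spatiotemporal case mentioned in the abstract.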
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: A Data-Driven Deep Learning Model for Lake Ice Cover Forecasting

Authors: Sam Johnston, Claude Duguay, Justin Murfitt
Affiliations: H2O Geomatics, University of Waterloo, Hydro-EO
Lakes comprise a significant portion of the land surface at high latitudes, and thereby affect local/regional weather. Lake ice cover plays a critical role in the energy balance at northern latitudes and provides key socioeconomic services. Climate change is impacting lake ice cover (LIC) and its thickness, two thematic products of the Lakes essential climate variable (ECV). Accurately forecasting LIC from key environmental controls is vital for understanding these impacts and improving weather/climate predictions in lake-rich regions of the northern hemisphere. This presentation introduces LIF-DL (Lake Ice Forecasting using Deep Learning), a novel data-driven model for forecasting LIC extent and dates associated with its phenology. LIF-DL uses Spatial-Temporal Transformer Networks (STTN) to model relationships between lake conditions, bathymetry, and meteorological forcings. Five lakes with pronounced ice phenology—Great Slave Lake, Great Bear Lake, Lake Winnipeg, Lake Athabasca, and Reindeer Lake (Canada)—were selected for this study. Data sources include the US National Ice Center’s Interactive Multi-Sensor Snow and Ice Monitoring System (IMS) for ice observations, meteorological data from the European Centre for Medium-Range Weather Forecasts (ECMWF) 5th-generation European ReAnalysis for the land component (ERA5-Land), and Canadian Ice Service (CIS) ice records for external validation. The training dataset spanned from 2004 to 2017, while the testing data covered 2018 to 2022. The model was first trained to produce only one-week forecasts. For evaluation, it was then deployed auto-regressively, whereby LIF-DL predictions are reused to advance the forecast. Using this approach, four-year forecasts from 2018 to 2022 were produced and then evaluated against IMS and CIS observations and compared with the widely used Freshwater Lake (FLake) model embedded in ERA5 and ERA5-Land.
Analysis of the model's input gradients revealed that accumulated freezing and thawing degree days, air temperature, solar radiation, and bathymetry have the greatest influence on model predictions. Furthermore, estimated variable importance varied between freeze-up and break-up periods, highlighting the model’s ability to learn seasonal lake ice dynamics. When validating LIC fractions against CIS observations, LIF-DL achieved mean absolute errors (MAE) of 8.1 days for break-up start and 11.3 days for break-up end, matching the performance of the FLake model. For freeze-up, LIF-DL outperforms FLake, with MAE scores of 7.1 days for freeze-up start and 7.6 days for freeze-up end, compared to FLake’s 14.8 and 14.4 days. Spatial comparisons between IMS observations and LIF-DL predictions demonstrate LIF-DL’s ability to generate spatially consistent ice cover forecasts, which are vital for the prediction of weather patterns. This study shows that deep learning models, such as LIF-DL, are highly effective for LIC forecasting. Future work will focus on applying these approaches to more lakes, exploring hybrid modelling approaches, and conducting additional validation to enhance interpretability and move toward an operational forecasting tool.
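The auto-regressive deployment described in the abstract can be illustrated with a minimal sketch: a one-step-ahead model is rolled forward by feeding its own predictions back in as the next input. The function name and the toy relaxation dynamics below are hypothetical stand-ins, not the LIF-DL implementation.

```python
def autoregressive_forecast(model, state, forcings):
    """Roll a one-step-ahead model forward by feeding its own
    predictions back in as the next step's input state."""
    trajectory = [state]
    for forcing in forcings:           # one meteorological forcing per week
        state = model(state, forcing)  # one-week-ahead prediction
        trajectory.append(state)       # reused as input for the next step
    return trajectory

# Toy stand-in dynamics: ice fraction relaxes toward a forcing-driven target.
toy_model = lambda s, f: 0.8 * s + 0.2 * f
trajectory = autoregressive_forecast(toy_model, 0.0, [1.0, 1.0, 1.0])
```

Because errors compound at each step, multi-year roll-outs like the 2018-2022 forecasts are a demanding test of one-week-ahead skill.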
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone M)

Poster: Baltic Sea ice concentration estimation from dual-polarized C-band SAR and microwave radiometer based on advanced machine learning

Authors: Juha Karvonen, Dr. Marko Mäkynen
Affiliations: Finnish Meteorological Institute
A U-net deep neural network with a customized loss function and seven encoding and decoding layers is applied to estimate Baltic Sea ice concentration (SIC) from dual-polarized C-band SAR images acquired by Sentinel-1 (S-1) and Radarsat-2 (RS-2). The image data are S-1 extra wide swath (EW) and RS-2 ScanSAR mode imagery with the HH/HV polarization combination. The microwave radiometer (MWR) data used in this study are acquired by the Advanced Microwave Scanning Radiometer 2 (AMSR2). AMSR2 has six frequency bands (6.9, 10.65, 18.7, 23.8, 36.5, and 89.0 GHz) and two polarizations (V and H). SIC is presented to the U-net as eleven classes (from 0/10 to 10/10). The final local SIC estimates are produced by interpolating the 11-valued PDFs provided by the U-net. The input of the U-net consists of two 3-channel images. The first input is derived from the SAR imagery and consists of the HH channel, the HV channel, and the suitably scaled ([0.0,1.0] -> [0,255]) HH/HV channel cross-correlation. The other input image is derived from the AMSR2 MWR data. Earlier tests indicated that a neural network learns much faster from polarization ratios (PR) and gradient ratios (GR) than from the brightness temperatures (TB) of the MWR frequency channels directly; therefore, PRs and GRs are used in this study. To reduce the MWR input data dimension to three, partial least squares (PLS) regression is performed to compress the original 36 channels (6 PRs and 30 GRs) derived from the MWR TBs into three channels. These three channels are scaled to the range [0,255] using the PLS channel extreme values, and the result is provided as a 3-channel input image to the U-net. The resolution of the estimates is 500 m, and both the MWR data and the SAR imagery are re-sampled to this resolution before computations. Simple uncertainty estimates are provided based on the spread of the PDF around the estimated SIC class provided by the U-net model.
The loss function applied in this study is the earth mover's distance (EMD) loss. It has been shown to perform well in this kind of classification task, where classes with nearby (ordered) indices differ less from each other than distant ones. Because of computational limitations, the images are processed in patches of 256 x 256 pixels, and the full-size SIC estimate grids are then composed of these patches. The results are evaluated against Baltic Sea ice charts and SIC derived from selected MODIS spectroradiometer data acquired during cloud-free days. The results indicate good SIC estimation in normal winter conditions and a slight deterioration during the melt period. The associated uncertainties can indicate the time and location of uncertain estimates and can be utilized, e.g., in data assimilation and as an initial condition for ice charting.
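The appeal of the earth mover's distance loss for ordered classes can be shown with a minimal NumPy sketch (the squared-EMD variant over cumulative distributions; this is an illustration of the principle, not the study's actual training code):

```python
import numpy as np

def emd_loss(pred_pdf, target_pdf):
    # Squared earth mover's distance over ordered classes: compare
    # cumulative distributions, so a prediction one class away from
    # the truth is penalised less than one many classes away.
    cdf_p = np.cumsum(pred_pdf, axis=-1)
    cdf_t = np.cumsum(target_pdf, axis=-1)
    return float(np.mean(np.sum((cdf_p - cdf_t) ** 2, axis=-1)))

# 11 SIC classes (0/10 .. 10/10); the true class is 5/10.
target = np.eye(11)[[5]]
near = np.eye(11)[[6]]    # prediction off by one class
far = np.eye(11)[[10]]    # prediction off by five classes
```

With these inputs, `emd_loss(near, target)` is smaller than `emd_loss(far, target)`, which is exactly the ordinal behaviour a plain cross-entropy loss lacks.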
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L-M)

Poster: A.09.03 - POSTER - Understanding the Arctic as a system

The Arctic is warming up to four times faster than the rest of the globe, a phenomenon known as Arctic amplification (Rantanen et al., 2022). This warming has triggered a chain of changes and feedbacks in the Arctic system whose expected impacts will exceed those forecast for many other regions on the planet. Climate change in polar regions has wide-ranging implications for human activities, primary productivity, ecosystems, atmosphere-ocean interactions, the carbon cycle, and ocean and atmosphere circulation, as well as for the Earth and climate system as a whole.

This session will focus on the latest research to understand complex processes and feedbacks in the Arctic including, but not limited to: freshwater fluxes and their impact on the Arctic Ocean and its biology; sea-ice evolution (thickness, extent, concentration, motion, snow-cover depth, roughness, melt ponds) and its feedback to the climate system; interactions between the ocean and atmosphere; snow melt processes and related downstream impacts; impacts of climate change on Arctic biodiversity; and impacts of extreme events.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L-M)

Poster: Downstream Arctic Freshwater Impacts on Phytoplankton Dynamics on the East Greenland Shelf

Authors: Rafael Gonçalves-Araujo, Dr. Camila Serra-Pompei, Dr Hamze Issa, Sara Nielsen Jensen, Dr Laura de Steur, Dr Jan Wuite, Dr. Thomas Nagler, Prof. Colin Stedmon, Daniele Fantin, Prof. Patrizio Mariani, Prof. Ole B. Andersen
Affiliations: DTU Aqua, Science & Technology, DTU Space, Norwegian Polar Institute, ENVEO
The East Greenland Shelf (EGS) is a critical Arctic gateway through which approximately 50% of the Arctic Ocean’s freshwater and 80% of its sea ice are exported to the North Atlantic. In addition, substantial freshwater is exported to the EGS from the Greenland Ice Sheet (GIS). While recent studies indicate a decline in both oceanic freshwater and sea ice flux through the Fram Strait, GIS-derived freshwater export has surged over recent decades. These freshwater inputs profoundly influence water column stratification, a key factor regulating the onset and dynamics of phytoplankton communities. Currently, three projects are studying different aspects and controlling factors of organic carbon cycling in Arctic waters: the Horizon Europe SEA-Quester (2024–2028), the DFF Sapere Aude WaterColor Project (2024–2028) and the Fresh4Bio ESA Project (2024–2026). The latter focuses on understanding the combined effects of oceanic and GIS freshwater export on water column structure and phytoplankton biomass, represented by chlorophyll-a concentration (Chl-a). Here, we integrate in situ observations, satellite data, biogeochemical models, and AI-based techniques to disentangle the complex relationships between freshwater inputs, ocean dynamics and Chl-a variability. Two complementary spatial approaches are being employed: (a) large-scale processes across the outer shelf are analyzed using 4 km resolution satellite data, and (b) sub-mesoscale processes over the inner shelf are studied using high-resolution imagery, including Sentinel-3 OLCI and SWOT. Preliminary results from 2000 to 2022 reveal a significant decline in Chl-a concentrations along the EGS shelf break (70-72 °N), contrasting with a marked increase in the Iceland Sea, south of Jan Mayen. This latter region is characterized by dynamic frontal systems, which likely drive its productivity.
Over the inner shelf, ongoing analysis of Sentinel-3 OLCI and SWOT data is uncovering a dynamic hydrographic environment, characterized by sub-mesoscale features such as meanders and eddies. These features appear to play a crucial role in modulating phytoplankton biomass distribution. While initial findings highlight notable trends and spatial contrasts, further analysis is required to fully understand the mechanisms driving these patterns. Insights from the Fresh4Bio project will provide valuable knowledge on how freshwater input from the GIS and the Arctic influences ecosystem dynamics along the East Greenland Shelf, a region pivotal to the Arctic-North Atlantic system.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L-M)

Poster: Different Ground Track Maintenance Strategies for Long-Term Sea-Ice Thickness Monitoring in Polar Altimetry Missions

Authors: Ricardo Gomes, Laura Moreschi, Mr Berthyl Duesmann, Sérgio Brás
Affiliations: DEIMOS Space S.L.U. for the European Space Agency, European Space Agency
Monitoring sea-ice thickness, ice sheets and glaciers is fundamental to understanding changes in the cryosphere and their implications for global climate. Precise ground track maintenance over the lifespan of polar altimetry missions is sometimes essential to ensure consistent, repeated and accurate data collection. This study evaluates three orbital maintenance strategies for polar altimetry missions, their ability to capture long-term trends and patterns, and their consequences for snow measurement accuracy and data usability for end users. In parallel, two of the described strategies aim to minimize fuel consumption, which ultimately leads to a longer mission lifetime. To ensure precise ground track repetition, a common approach is to fix the inclination by performing periodic out-of-plane manoeuvres while maintaining a constant semi-major axis. Keeping the inclination constant ensures that the ground track does not drift, but it requires additional fuel for the out-of-plane manoeuvres needed to correct any inclination deviations. The second strategy allows the orbital inclination to oscillate naturally while keeping the semi-major axis constant. This results in a loss of precise ground track repetition, but the inclination can be predicted accurately, and therefore the ground track at the poles can be determined well. Moreover, the study shows that by adjusting the orbital inclination, it is possible to create a repeatable trend in the ANX longitude difference, enabling a repetitive pattern to emerge every 4 years. This strategic modification allows for predictable and consistent observational opportunities. In addition, by accepting these natural oscillations, the need for out-of-plane manoeuvres is eliminated, conserving fuel. The last strategy allows the inclination to oscillate naturally while keeping the orbital period constant.
By following an orbital period profile that maintains ground track repetition (similar to the approach used in the CryoSat mission) the satellite avoids retrograde manoeuvres. This method relies on managing orbital decay to correct the orbital period without additional fuel use. Each strategy presents a trade-off between ground track repetition, fuel consumption, and operational complexity. Emphasis is placed on how each approach impacts the continuity of altimetry data critical for snow observations in the cryosphere. This study highlights the implications of the different strategies for the end-users of altimeter data.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L-M)

Poster: Assessing The Evolution Of Terrestrial Arctic Heatwaves: A 3D Clustering Approach

Authors: Grégoire Canchon, Pr Gabriele Hegerl, Dr Andrew Schurer, Pr Piers Forster, Pr Isla Myers-Smith, Stuart King
Affiliations: University Of Edinburgh, University of British Columbia, University of Leeds
The Arctic has been impacted more by climate change than other regions, warming four times faster than the rest of the globe, with dramatic consequences for society and ecosystems [1]. The Arctic is a complex system, and feedbacks have already been observed as a result of global warming, such as vegetation change (greening and browning trends), increases in wildfires and permafrost thaw [2,3,4]. However, these consequences of climate change and how they will impact society are not fully understood. Typically, heatwaves in the Arctic have been studied as specific events in space and time (e.g., the Siberian heatwave in 2020) [5], but there is a need for a global assessment of the evolution of terrestrial Arctic heatwaves, an evaluation of their impacts on systems such as vegetation and permafrost, and an assessment of the role heatwaves play in major Arctic feedbacks such as early snowmelt or vegetation browning [6,7]. Here we use spaceborne data from NASA’s Moderate-Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite to improve our understanding of heatwave impacts on the terrestrial Arctic. Land surface temperature trends for the last 23 years are extracted, and a 3D clustering algorithm is used to detect heatwaves in space and time, classifying them to understand their spatial and temporal distribution. These are correlated with meteorological data collected from weather stations to increase confidence in the results, and land cover maps are used to determine whether there is a relationship between land cover change and the formation of recent terrestrial Arctic heatwaves. This presentation will showcase results for Arctic-wide trends, as well as for individual Arctic regions (North America, Northern Europe and Siberia). The 3D clustering algorithm successfully detects and classifies spatio-temporal heatwaves, providing detailed insights into heatwave formation in the Arctic and its correlation with land cover.
This study lays the groundwork for future applications and further work on the impact of heatwaves in the Arctic.

Bibliography:
[1] Serreze, M. C., & Barry, R. G. (2011). Processes and impacts of Arctic amplification: A research synthesis. Global and Planetary Change, 77(1-2), 85-96.
[2] Berner, L. T., Massey, R., Jantz, P., Forbes, B. C., Macias-Fauria, M., Myers-Smith, I., ... & Goetz, S. J. (2020). Summer warming explains widespread but not uniform greening in the Arctic tundra biome. Nature Communications, 11(1), 4621.
[3] Hegedűs, D., Ballinger, A. P., & Hegerl, G. C. (2024). Observed links between heatwaves and wildfires across Northern high latitudes. Environmental Research Letters, 19(3), 034041.
[4] Turetsky, M. R., Abbott, B. W., Jones, M. C., Walter Anthony, K., Olefeldt, D., Schuur, E. A., ... & Sannel, A. B. K. (2019). Permafrost collapse is accelerating carbon release.
[5] Overland, J. E., & Wang, M. (2021). The 2020 Siberian heat wave.
[6] McEvoy, D. J., & Hatchett, B. J. (2023). Spring heat waves drive record western United States snow melt in 2021. Environmental Research Letters, 18(1), 014007.
[7] Phoenix, G. K., & Bjerke, J. W. (2016). Arctic browning: extreme events and trends reversing arctic greening. Global Change Biology, 22, 2960-2962.
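Detecting spatio-temporal heatwave events amounts to connected-component labelling on a 3D (time, lat, lon) exceedance mask. A minimal sketch using SciPy follows; the synthetic array, the threshold mask, and the full-connectivity choice are illustrative assumptions, not the study's actual algorithm:

```python
import numpy as np
from scipy import ndimage

# Synthetic anomaly stack (time, lat, lon): True where land surface
# temperature exceeds a heatwave threshold (values are illustrative).
hot = np.zeros((4, 5, 5), dtype=bool)
hot[0:2, 1:3, 1:3] = True   # event A: a patch persisting two time steps
hot[3, 4, 4] = True         # event B: separated in both space and time

# Full 3x3x3 connectivity joins pixels adjacent in space or time, so
# each labelled component is one spatio-temporal heatwave event.
structure = np.ones((3, 3, 3), dtype=int)
labels, n_events = ndimage.label(hot, structure=structure)
```

Here `n_events` is 2: the persistent patch forms a single cluster across time steps, while the isolated pixel becomes a separate event.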
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L-M)

Poster: Case studies of extreme events and their implications in the Arctic system.

Authors: Louise Sandberg Sørensen, Renée Fredensborg Hansen, Kirk Scanlan, Anna E. Hogg, Anders Kusk, Dr Martin Wearing, Malcolm McMillan, Jennifer Maddalena, Kate Briggs, Amber Leeson, Jack Landy, Robbie Mallett, Daniele Fantin
Affiliations: DTU Space, University of Leeds, University of Lancaster, UIT, ESA ESRIN, S&T
The Arctic is experiencing rapid environmental changes, driven by amplified warming and an increasing frequency of extreme events. These extreme events will likely impact the complex interplay between the atmosphere, ocean, sea ice, and land ice, with cascading effects on global climate systems. To advance our understanding of such processes, we have initiated ARCTEX, an ESA-funded cross-disciplinary project. Within ARCTEX we will conduct three case studies in the Atlantic sector of the Arctic Ocean: Austfonna Ice Cap, Flade Isblink Ice Cap, and Nioghalvfjerdsfjorden Glacier/Zachariae Isstrøm. These studies aim to unravel region-specific processes while also trying to identify Arctic-wide connections and patterns. Global warming is projected to intensify extreme events in the Arctic, with impacts on cryospheric systems. For example, the collapse of floating ice tongues and dynamic thinning of marine-terminating glaciers highlight the critical need for high-resolution, time-specific datasets in the critical zone between land and sea ice. Despite recent progress in observing Arctic processes from Earth Observation data, significant gaps remain in understanding how extreme events propagate through interconnected systems. ARCTEX aims to address these gaps. Within ARCTEX we will employ tailored datasets and advanced methodologies to examine extreme events and their consequences in detail, focusing on their impacts on glacier stability, sea ice dynamics, and ocean-ice interactions. The three chosen case studies are: 1) the collapse of ice tongues and their implications for buttressing at Nioghalvfjerdsfjorden Glacier and Zachariae Isstrøm, 2) the rapid ice loss dynamics at Austfonna Ice Cap, and 3) surface topography changes driven by extreme events at Flade Isblink Ice Cap. The findings will enhance our understanding of Arctic-wide vulnerabilities to extreme events, with implications for global climate forecasting and adaptation strategies.
By addressing critical knowledge gaps, ARCTEX will contribute to the development of predictive models, aiding communities worldwide in managing risks associated with rising sea levels and disrupted weather patterns. Here, we present the scope and early findings in each of the chosen case studies.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L-M)

Poster: The ESA Arctic Freshwater budget project (ARCFRESH)

Authors: Ole Andersen, Carsten Ludwigsen, and the ARCFRESH team
Affiliations: DTU Space
ARCFRESH (Arctic Freshwater Budget) is a joint research project funded by ESA through the Climate Space program. The project utilizes data from several CCI ECV projects to estimate the freshwater fluxes and budget of the Arctic Ocean. Understanding and monitoring the Arctic Ocean freshwater budget is crucial for predicting and adapting to climate change, as the Arctic region is highly sensitive and serves as a flagship area of the Earth's climate system (IPCC AR6, 2021). The freshwater fluxes in the Arctic Ocean contribute to the global thermohaline circulation, also known as the ocean conveyor belt. Changes in the freshwater budget can disrupt this circulation, which plays a crucial role in redistributing heat around the planet and can have cascading effects on regional and global climate patterns (IPCC, 2021). Variations in freshwater content influence ocean circulation patterns, including the Atlantic Meridional Overturning Circulation (AMOC), which impacts the transfer of heat between the equator and the polar regions, influencing climate conditions in adjacent areas and affecting weather patterns at lower latitudes (Rahmstorf et al., 2015). The Arctic Ocean freshwater budget is linked to climate feedback mechanisms, such as the melting of Arctic ice releasing freshwater into the ocean, affecting local and global climate (Serreze and Barry, 2011). This process can contribute to positive feedback loops, amplifying the warming trend in the Arctic (Screen and Simmonds, 2010). Freshwater input, mainly from rivers and precipitation, affects the salinity of the Arctic Ocean, impacting the extent and thickness of sea ice (Petty et al., 2018). The Arctic climate system is transitioning into a new regime characterized by less sea ice, allowing for more interaction between the atmosphere and the ocean (Solomon et al., 2021). 
A changing freshwater budget also influences the transport of organic matter and nutrients in the Arctic Ocean, impacting carbon cycling and the absorption and release of greenhouse gases, which, in turn, influence global climate dynamics (Carmack et al., 2016). Changes in the freshwater fluxes may also have local ecological impacts, affecting the distribution and abundance of marine species in the Arctic Ocean (Polyakov et al., 2020). Altered sea ice conditions and ocean circulation patterns can impact ecosystems, including the food web, with potential consequences for fisheries and other marine resources (IPCC, 2021).
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L-M)

Poster: Advancing Arctic Vegetation Monitoring for Climate Resilience: Integrating Novel Indices and AI Frameworks Using MODIS, Sentinel-2, and Landsat Data

Authors: Katarzyna Ostapowicz
Affiliations: Norwegian Institute for Nature Research (NINA)
The Arctic is one of the most sensitive regions to climate change, with vegetation dynamics such as greening and browning serving as critical indicators of ecosystem responses to rising temperatures, altered precipitation regimes, and permafrost degradation. These trends have significant implications for global carbon cycling, biodiversity, and the livelihoods of Arctic communities. Despite advances in remote sensing, existing metrics and methods often struggle to capture the spatial and temporal heterogeneity of low-biomass Arctic ecosystems. Addressing these challenges, this study aims to develop a comprehensive framework for Arctic vegetation monitoring by integrating multi-sensor optical data with machine learning techniques. The framework utilizes data from MODIS (NDVI, EVI, GPP), Sentinel-2, and Landsat to derive new vegetation indices and metrics optimized for Arctic conditions. Novel indices, such as the Red-Edge Greenness Index (REGI), Normalized Difference Phenology Index (NDPI), and Enhanced Low-Biomass Vegetation Index (ELBVI), are introduced to improve sensitivity to subtle phenological changes, vegetation stress, and browning in low-biomass regions. Seasonal composites are generated to address challenges posed by cloud cover and limited observational data, ensuring robust temporal analysis and high-quality input for downstream processing. Machine learning methods, including Random Forest, Gradient Boosting, and Support Vector Machines, are employed to classify vegetation dynamics, detect anomalies, and identify trends in greening and browning. Transfer learning is incorporated to enhance the adaptability of models across diverse Arctic ecosystems, leveraging calibration datasets from in-situ measurements of biomass, phenology, and environmental variables. This approach improves the accuracy and scalability of predictions, offering a significant advancement over traditional methods. 
The study spans the entire Arctic region, allowing for the exploration of spatial variability and the identification of key environmental drivers of vegetation change, such as temperature anomalies, shifting precipitation patterns, and soil moisture variability. By combining multi-sensor datasets, the framework also facilitates cross-validation of results, ensuring consistency and reliability. Preliminary findings indicate that the proposed indices and methods significantly enhance the detection of greening and browning trends, capturing heterogeneity in vegetation dynamics. This work represents a step forward in Arctic vegetation monitoring by addressing the limitations of current metrics and proposing a scalable, data-driven approach that leverages state-of-the-art machine learning and Earth observation technologies. The integration of MODIS, Sentinel-2, and Landsat data enables the generation of high-resolution, temporally consistent insights into vegetation responses to climate drivers. Additionally, the study highlights the importance of spectral indices in characterizing phenological shifts and vegetation stress, particularly in regions where field data collection is logistically challenging. The outcomes of this study align with the European Space Agency’s (ESA) objectives to advance sustainable Earth observation practices and provide actionable tools for addressing global environmental challenges. By linking this work to United Nations Sustainable Development Goals (SDGs), particularly SDG 13 (Climate Action) and SDG 15 (Life on Land), it emphasizes the role of innovative monitoring systems in supporting climate resilience, biodiversity conservation, and sustainable land management in fragile Arctic ecosystems. Furthermore, the methodologies developed in this project are scalable, offering potential applications for monitoring vegetation dynamics in other climate-sensitive regions worldwide. 
In conclusion, the integration of multi-sensor optical data, novel vegetation indices, and machine learning frameworks provides a transformative approach to Arctic vegetation monitoring. By addressing existing gaps in sensitivity and resolution, this study offers new insights into Arctic greening and browning patterns, contributing to a deeper understanding of ecosystem resilience and global carbon dynamics.
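The abstract does not give formulas for the novel indices (REGI, NDPI, ELBVI). As background, most optical vegetation indices follow the normalized-difference template that NDVI uses, sketched here with synthetic reflectance values (the band choices are illustrative, not the study's definitions):

```python
import numpy as np

def normalized_difference(a, b):
    """Generic normalized-difference index (a - b) / (a + b), bounded
    in [-1, 1]; NDVI is this template with a = NIR, b = red."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (a - b) / (a + b)

# Toy Sentinel-2-like surface reflectances: B8 (NIR) and B4 (red).
nir = np.array([[0.40, 0.05], [0.30, 0.20]])
red = np.array([[0.10, 0.04], [0.10, 0.15]])
ndvi = normalized_difference(nir, red)  # higher values over dense vegetation
```

The low dynamic range of such ratios over sparse tundra vegetation is exactly the sensitivity problem that indices tailored to low-biomass ecosystems attempt to address.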
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L-M)

Poster: Observational Assessment of Arctic Sea Ice Albedo Feedback Between 1979 and 2023

Authors: André Emil Toft Jensen, Aslak Grinsted, Gorm Dybkjær, Jason Box, Jacob Høyer
Affiliations: Technical University of Denmark, University of Copenhagen, Danish Meteorological Institute, Geological Survey of Denmark and Greenland
Arctic sea ice retreat is known to greatly alter surface albedo, but the resulting radiative forcing and feedback remain debated. Surface albedo feedback (SAF) is calculated by multiplying (a) the observed sensitivity of surface albedo to global mean surface temperature by (b) the atmosphere’s average radiative sensitivity to albedo changes (RS). The spread in estimates of Arctic sea ice SAF has been speculated to stem from uncertainty in retrievals of surface albedo trends and RS, but no study has explicitly dealt with the differences between results. We applied state-of-the-art observations to derive a fully observational assessment of Arctic sea ice SAF and to critically evaluate estimates of RS. Observations of blue-sky surface albedo were taken from the CLARA-A3 product and observations of temperature from the HadCRUT5 product. CLARA-A3 is here applied for the first time to assess the Arctic sea ice SAF. RS is traditionally estimated using climate model simulations of so-called "radiative kernels", but we derive RS from CERES observations of clouds and energy fluxes using two simple one-layer atmosphere models named "BO18" and "QH06". The QH06 one-layer model is then further improved using CERES meteorological data and named "QH06a". The estimated RS is compared against an ensemble of state-of-the-art radiative kernels. Evaluation of such estimates is a common issue, but we present a validation framework comparing the resulting surface albedo forcing to anomalies in TOA reflected shortwave flux. Our observational estimate of the 1979-2023 Arctic sea ice SAF was 0.176 ± 0.010 W m^−2 K^−1. This estimate agrees with previous studies that apply the radiative kernel technique and observations of surface albedo changes, and with constrained climate model simulations, but it is significantly lower than studies that apply linear regression models of sea ice fraction and planetary clear-sky and all-sky albedo.
Using the RS validation framework, we find that two of the radiative kernels derived from GCM simulations showed unreasonably high sensitivities and that the very simple QH06 atmospheric model is unfit for the complex Arctic region. The BO18 and QH06a models yielded estimates of RS similar to, and as plausible as, the radiative kernels derived from GCM simulations. To summarise, we developed the QH06a model and validated the BO18 model, simple models of surface albedo RS suited for remote sensing that work even in the complex Arctic climate system, and we developed a validation framework to evaluate estimates of RS. These tools, together with the CLARA-A3 albedo product, were applied to make an updated, more precise estimate of the Arctic sea ice SAF.
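The two-factor SAF calculation described above, (a) an albedo-temperature regression slope times (b) a radiative sensitivity, can be sketched in a few lines. All numbers below are synthetic placeholders (not CLARA-A3, HadCRUT5, or CERES values); only the mechanics are illustrated:

```python
import numpy as np

# Synthetic annual series: global-mean temperature anomaly (K) and
# Arctic-mean surface albedo (illustrative, not the study's data).
T = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
alpha = 0.55 - 0.012 * T

# (a) sensitivity of surface albedo to temperature: regression slope
dalpha_dT = np.polyfit(T, alpha, 1)[0]    # about -0.012 per K

# (b) radiative sensitivity RS: change in TOA net shortwave per unit
# albedo change (negative: a brighter surface reflects more). Placeholder.
RS = -50.0                                # W m^-2 per unit albedo

SAF = dalpha_dT * RS                      # positive feedback, W m^-2 K^-1
```

The sign convention makes the point: albedo falls as temperature rises, RS is negative, so their product is a positive (amplifying) feedback.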
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L-M)

Poster: Complete and improved CryoSat-2 Polar Ocean product

Authors: Carsten Ludwigsen, Malcolm McMillan, David Brockley, Roshin Raj, Ole Andersen
Affiliations: DTU Space, UCL MSSL, NERSC
The CryoSat ThEMatic PrOducts (Cryo-TEMPO) project aims to deliver a new paradigm of simplified, harmonized, and agile CryoSat-2 products that are easily accessible to new communities of non-altimeter experts and end users. The latest Cryo-TEMPO Polar Ocean product (Baseline-D) merges the standard LRM processor with dedicated SAMOSA+ retracking of SAR and SARin regions. The LRM processor has been modified so that the mean sea surface, ionospheric correction and wet troposphere correction align with the SAR data. For both SAR/SARin and LRM, we have updated the ocean tide model to the latest FES2022 model. The dataset provides nearly 15 years of CryoSat-2 polar ocean data, maximizing the output of this unprecedented satellite altimetry mission. It is continuously updated with new observations and easily accessible to end users. Currently, data are provided along-track, but a Level-3 gridded product is expected in mid-2025.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone L-M)

Poster: Dynamic Vegetation Changes in the Arctic: Circumpolar Trends based on Earth Observation

Authors: Martina Wenzl, Celia Baumhoer, Dr. Andreas Dietz, Claudia Kuenzer
Affiliations: DLR, Institute for Geography and Geology, University of Wuerzburg
The Arctic, characterised by severe climatic conditions and sparse vegetation, is experiencing rapid warming, with temperatures increasing at up to four times the global rate since 1979. These changes have far-reaching consequences for the global climate and energy balance. Observing and quantifying changes in Arctic vegetation is therefore important well beyond local stakeholders, as changes in the Arctic region will affect the rest of the world. Satellite remote sensing is a valuable tool for monitoring Arctic vegetation dynamics, particularly in regions with limited ground observations. To investigate the impact of climate change on Arctic and subarctic vegetation dynamics, a literature review of 154 studies published between 2000 and 2023 was conducted based on the PRISMA guidelines and the Web of Science database. The research objectives, spatial distribution of study areas, methods, and the temporal and spatial resolution of the utilised satellite data were extracted for further analysis. The results show circumpolar greening trends for all major optical remote sensing satellite products, a widespread decline in lichen coverage often accompanied by an increase in shrubs, and increasing plant productivity. However, start- and end-of-growing-season dynamics are heterogeneous throughout the circumpolar Arctic, while the growing season length has generally increased. The disturbance and recovery mechanisms in the tundra region are diverse, and no uniform development could be derived. Although an increase in temperature has been linked to greening, shrub expansion and increased plant productivity, the driving forces behind the diverse and localised ecological changes observed in the Arctic and subarctic regions are numerous and complex. Subsequent studies could benefit from methodological advancements, such as the fusion of satellite data to overcome data gaps, particularly in the shoulder months of the growing season.
Merging optical satellite data to achieve a better temporal and spatial resolution would benefit all examined research objectives. The fusion of SAR and optical data (e.g. by merging the Sentinel-1 and Sentinel-2 data) might be beneficial for research objectives focusing on plant productivity, species composition and disturbance analysis. The application of machine learning methods, and deep learning methods in particular, is expected to increase in the future as more (open-source) models become available, which benefits the analysis of disturbance mechanisms and species composition in the tundra. At the Living Planet Symposium, the central findings of this review will be summarised, providing an overview of the most prevalent circumpolar developments and ecological conditions. Finally, the remaining challenges of satellite remote sensing are identified.
Add to Google Calendar

Friday 27 June 13:00 - 13:45 (ESA Agora)

Session: F.04.24 How to make Earth Observation Science more actionable for policymakers?

Policymakers often ask for science to be more actionable - to help them understand how to act on pressing issues such as climate change. At the same time, they do not want scientists to become prescriptive - to tell them what they should do.
In this session, neuroscientist Kris De Meyer will share insights that help to understand how the brains of policy professionals perceive risk and make decisions. We will illustrate how these insights have helped the UCL Climate Action Unit to bridge the science-policy gap, with examples from a past project working with the IPCC, a project developing new climate risk metrics, and wider work with government policymakers.
The session will also create space for discussion of the challenges that participants are experiencing in bridging the science-policy gap, and where they’ve made progress on finding the sweet spot of providing policymakers with actionable knowledge that is not prescriptive.

Speakers:


  • Kris De Meyer - UCL
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: C.02.01 - POSTER - Aeolus Mission: 5 years of advancing atmospheric understanding through spaceborne lidar technology.

The Aeolus mission, launched by the European Space Agency (ESA) in August 2018, represented a pioneering effort in advancing our understanding of Earth's atmosphere through the innovative use of spaceborne lidar technology. The mission aimed to provide unprecedented insights into global wind patterns, atmospheric dynamics, and aerosol distribution, thereby enhancing our ability to forecast weather, monitor climate change, and improve air quality.
At the heart of the Aeolus mission lay the revolutionary Atmospheric Laser Doppler Instrument (ALADIN). ALADIN employed the principle of Doppler wind lidar, using pulses of laser light to measure the Doppler shift of backscattered signals from atmospheric molecules and particles. This technique allowed the precise measurement of wind speed and direction throughout the depth of the atmosphere, from the Earth's surface up to the stratosphere.
One of the primary objectives of the Aeolus mission was to fill the critical gap in our observational capabilities regarding global wind profiles. Operating from a sun-synchronous orbit at an altitude of approximately 320 kilometers, Aeolus offered a unique vantage point for comprehensive and continuous global wind observations, covering the data-sparse tropics and polar regions.
By accurately mapping atmospheric wind fields, Aeolus contributed to improving weather forecasting models and enhanced the understanding of atmospheric circulation patterns, including the dynamics of jet streams, tropical cyclones, and the interplay between atmospheric and oceanic circulation systems, preparing the ground for the Aeolus-2 meteorological system.
In addition to wind profiling, Aeolus data contributed to characterizing the distribution and properties of atmospheric aerosols, including pollutants, dust particles, and volcanic ash, providing valuable insights into aerosol transport and aiding in the refinement of climate models and air quality forecasts.
The mission ended nominal operations on 30 April 2023, followed by an end-of-life phase during which scientific and technological experiments were carried out before the satellite reentered the atmosphere on 28 July 2023 through an innovative and pioneering reentry approach.
The scope of this session is to review and discuss the main scientific achievements of the Aeolus mission, including the results from the international Cal/Val campaigns and the outcome of the end-of-life phase.

Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Atmospheric Background Radiation Measured by the ALADIN Instrument of ESA's Aeolus Mission

Authors: Karsten Schmidt, Oliver Reitebuch
Affiliations: German Aerospace Center (DLR), German Aerospace Center (DLR)
After each laser shot and the acquisition of the 24 atmospheric range gates, the acquisition of the atmospheric background radiation was started with a sufficiently long time delay after the ground return at zero altitude. The background radiation measurements were accumulated on-chip on the ACCD with a longer duration than the backscattered laser signals due to their generally smaller signal strength. For instance, the longest background integration time was 3750 μs, while the shortest integration time for the atmospheric range gates was 2.1 μs. This background radiation was measured with a bandwidth of about 1 nm centred at the laser wavelength of 355 nm, and has been assumed to be a constant offset over frequency for the Mie and Rayleigh spectrometer (MSP and RSP) measurements. It needed to be measured and corrected during processing because it acted as a constant signal contribution to the acquired backscattered laser signals. Additional offsets to the signals, which needed to be corrected for, arose within the detection chain from the electronic offset voltages applied before digitization (detection chain offset, DCO). Different approaches to determine the DCO have been tested and implemented. Furthermore, the background integration time was tested and changed several times during the mission lifetime in order to find the optimal setting. A normalization with respect to the integration time can be applied to eliminate its impact on the background signals. Moreover, large spiky signals were observed in the background data on different ACCD pixels; they could be caused, for example, by cosmic rays. A spike detection and correction scheme has been developed and applied to the background signals in order to reject them. In total, a harmonised and quality-controlled atmospheric background radiation data set in the UV at 355 nm, covering nearly 5 years from September 2018 to July 2023, has been obtained by Aeolus.
In our contribution, some aspects of the DCO correction and the spike detection and correction will be presented and discussed first. The main part of the presentation comprises the discussion of the daily and annual behaviour of the background radiation in terms of time and geolocation. It is demonstrated that the background signals were determined by the changing solar irradiation along the track, by the changing Earth albedo with respect to location and time, and by the Earth's orbit around the sun and the inclination of the Earth's rotational axis with respect to the ecliptic. It is furthermore shown that the ratio of the background radiation measured in the Rayleigh and Mie channels shows a similar behaviour. Although the atmospheric background radiation measured by Aeolus over nearly 5 years was primarily intended to correct the backscattered laser signals and has not been a nominal data product of the mission, it is nevertheless of value. For instance, after a radiometric calibration it can be used for comparison with measurements of other EO instruments in the near UV, for further investigations, or in radiation simulations.
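The two corrections described in this abstract — normalising background counts by the integration time and rejecting cosmic-ray spikes — can be illustrated with a minimal sketch. This is illustrative only: the function names and the MAD-based outlier threshold are assumptions, not the operational DLR/Aeolus processing scheme.

```python
import numpy as np

def normalise_background(counts, integration_time_us):
    """Normalise accumulated background counts by the on-chip
    integration time so that settings with different durations
    (e.g. 3750 us vs. shorter windows) become comparable."""
    return np.asarray(counts, dtype=float) / integration_time_us

def reject_spikes(signal, n_sigma=5.0):
    """Toy spike rejection: flag samples deviating from the median
    by more than n_sigma robust standard deviations (MAD-based)
    and replace them with the median. A stand-in for the mission's
    dedicated spike detection and correction scheme."""
    signal = np.asarray(signal, dtype=float)
    med = np.median(signal)
    mad = np.median(np.abs(signal - med))
    sigma = 1.4826 * mad if mad > 0 else np.std(signal)
    spikes = np.abs(signal - med) > n_sigma * sigma
    cleaned = signal.copy()
    cleaned[spikes] = med
    return cleaned, spikes
```

A robust (median/MAD) estimator is used here because a single cosmic-ray hit would inflate a plain standard deviation and mask its own detection.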
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Evaluation of Aeolus feature mask and particle extinction coefficient profile products using CALIPSO data and ground-based data

Authors: Ping Wang, David Donovan, Gerd-Jan van Zadelhoff, Jos de Kloe, Dorit Huber, Katja Reissig
Affiliations: Royal Netherlands Meteorological Institute (KNMI), DoRIT, IB Reissig
The Atmospheric LAser Doppler INstrument (ALADIN) on board Aeolus was the first High-Spectral-Resolution Lidar (HSRL) in space. It was launched in 2018 and re-entered in 2023. The ATLID FeatureMask (A-FM) and extinction PROfile (A-PRO) algorithms developed for the HSRL ATmospheric LIDar (ATLID) on the Earth Cloud Aerosol and Radiation Explorer (EarthCARE) have been adapted to Aeolus and are called AEL-FM and AEL-PRO, respectively. These algorithms have been purposely built to process low signal-to-noise-ratio space-based lidar signals. AEL-FM and AEL-PRO prototype products have been evaluated using the collocated Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) vertical feature mask (VFM) product and level 2 aerosol profile product for two months of data, October 2018 and May 2019. Aeolus and CALIPSO are both polar-orbiting satellites, but they have different overpass times. The evaluations are focused on desert dust aerosols over Africa. We have found that the AEL-FM feature mask and the CALIPSO VFM show similar aerosol patterns in the collocated orbits, but AEL-FM does not separate aerosol and cloud features. Aeolus and CALIPSO show good agreement in the extinction coefficients for dust aerosols, especially for cloud-free scenes. The Aeolus aerosol optical thickness (AOT) is larger than the CALIPSO AOT, and it is difficult to distinguish aerosols and thin ice clouds using the Aeolus extinction coefficients alone. The AEL-FM and AEL-PRO products are available in the Aeolus B16 reprocessing campaign L2A products. The findings for the Saharan dust are also valid for the AEL-PRO products in the B16 L2A products. We also evaluate the AEL-PRO B16 products using CALIPSO and ground-based EARLINET (European Aerosol Research LIdar NETwork) products for other regions, for example in Europe and Asia.
Based on experiences from the evaluations of the ATLID A-PRO products, we can select similar sites and regions for the AEL-PRO and A-PRO evaluations although in different years. In the presentation we will show the results of the evaluation using the CALIPSO data and some new results about the evaluation of reprocessing campaign B16 L2A products using ground-based data. Some results between AEL-PRO and A-PRO at similar regions and sites will be discussed.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: The impact of Aeolus wind observations on the predictability of tropical cyclones

Authors: Giovanna De Chiara, Dr Linus Magnusson, Michael Rennie, Saleh Abdalla, Dr Sean Healy
Affiliations: ECMWF
The world’s first functioning Doppler wind lidar in space, Aeolus, produced wind retrievals that have been shown to improve global Numerical Weather Prediction (NWP) forecasts. At ECMWF, this was demonstrated by assessing the Aeolus Level-2B (L2B) HLOS wind data quality against the ECMWF model equivalents, by NWP impact verification with OSEs (Observing System Experiments) and by using the FSOI (Forecast Sensitivity Observation Impact) method. Recently, an ESA-funded project at ECMWF investigated whether the assimilation of Aeolus L2B winds improves the predictability of high-impact and extreme weather events in the Tropics and extra-tropics, with a focus on Tropical Cyclones and European extra-tropical storms. The impact of Aeolus wind assimilation on the European forecast bust statistics was also evaluated. Observing system experiments (OSEs), with and without Aeolus L2B winds, were run to assess the impact of Aeolus winds on analyses and forecasts. The OSEs covered the period from June 2019 to September 2021. The OSEs used the second officially reprocessed L2B data (generated by the DISC and PDGS), which covers the FM-B laser period from June 2019 to October 2020. This was a period when Aeolus had its highest atmospheric signal levels and was therefore of interest for reassessing the impact with improved-quality reprocessed data. Operational data was used from October 2020 to September 2021. The OSEs were run at much higher resolution than previous Aeolus OSEs. In this presentation we will focus on the impact of the Aeolus winds on the predictability of tropical cyclones. The assessment included the computation of the position error and central pressure error. The global statistics showed that the assimilation of Aeolus observations had a neutral/negative impact on the trajectory forecast but a neutral/positive impact on the intensity forecast. The position error results are not consistent over time; six-month statistics showed the worst impact in 2019.
The statistics were also computed on OSEs run over the same period using the first reprocessed dataset and run at two different resolutions. Results were better for the second reprocessed dataset than for the first. Increasing the OSE resolution improved the tropical cyclone scores. Further comparisons with other OSEs at different resolutions and using different Aeolus datasets were also performed. Results also showed that the assimilation of Aeolus winds allows more developing storms to be captured. The results will be presented and discussed.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Mitigating the impact of hot-pixels on the signal levels and wind bias during reprocessing of Aeolus data

Authors: Persis Misquitta, Vittoria Cito-Filomarino, Stefanie Knobloch, Oliver Lux, Oliver Reitebuch, Jos de Kloe
Affiliations: Institute Of Atmospheric Physics, Deutsches Zentrum für Luft und Raumfahrt (DLR), Koninklijk Nederlands Meteorologisch Instituut (KNMI)
The landmark Earth Observation mission Aeolus provided near-real time (NRT) global wind profiles using the Atmospheric Laser Doppler Instrument (ALADIN) – the first Doppler wind lidar in space. Launched on 22 August 2018, it successfully completed its operational mission on 30 April 2023, yielding nearly five years of valuable data. After completing several end-of-life tests, the instrument was switched off on 5 July 2023, with a successful re-entry into Earth’s atmosphere on 28 July 2023. ALADIN measured Doppler frequency shifts of backscattered light from clouds/aerosols and molecules using two complementary receiver channels, Mie and Rayleigh, both equipped with accumulation charge-coupled device (ACCD) detectors for precise measurements. Shortly after the mission started, single pixels used for on-chip charge accumulation on the detectors exhibited anomalous behavior, characterized by random shifts in the dark current signal. The dark current anomalies on these “hot pixels” caused large errors in the wind and aerosol profiles of affected altitude bins and were attributed to displacement damage in the semiconductor lattice from high-energy particles like protons. Over the course of the mission lifetime, the number of hot pixels increased almost linearly, but with long gaps in autumn 2021 and autumn 2022, reaching a total of 75 by the time the instrument was switched off. To mitigate the detrimental impact of hot pixels on the data quality, a dedicated calibration mode called ‘Down Under Dark Experiment’ (DUDE) was introduced three months after launch to measure dark currents in the absence of lidar signals. Initially conducted twice daily, the frequency of DUDE measurements was gradually increased to eight times per day by 30 October 2021 to account for the growing number of hot pixels. Despite the DUDE’s effectiveness, NRT processing faced challenges when dark current levels shifted between consecutive DUDE measurements.
This issue was particularly pronounced during the two mission phases when the first Flight Model laser (FM-A) was operational: the early phase of the mission, characterized by rare DUDE measurements (31 August 2018 to 16 June 2019), and the late mission period, marked by a large number of hot pixels (28 November 2022 to 30 April 2023). In reprocessing campaigns, Aeolus data quality can be improved by retrospectively applying dark current corrections to the intervals between detected dark current shifts and the next available DUDE measurement. To achieve this, a step-detection algorithm identifies dark current shifts, even in the presence of large atmospheric signal variations, by comparing the hot pixel signal to that of a neighboring pixel. Anomalies found in the reprocessed L1B and L2B data products are then used to iteratively fine-tune the step-detection algorithm, ensuring more accurate corrections and improved overall data quality. The poster will highlight the hot pixel correction scheme developed for reprocessing of the two FM-A periods, providing insights into the methodology and its impact on data quality. Additionally, it will shed light on the hot pixel behavior and its evolution throughout the Aeolus mission.
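The principle behind the step detection described above — differencing the hot pixel against a neighbouring reference pixel to suppress common atmospheric signal variations, then looking for abrupt jumps in the residual — can be sketched in a few lines. This is a toy illustration under assumed names and thresholds, not the DISC step-detection algorithm itself.

```python
import numpy as np

def detect_dark_current_steps(hot_pixel, reference_pixel, threshold=5.0):
    """Toy dark-current step detector.

    Subtracting a neighbouring reference pixel removes the shared
    atmospheric signal, so the residual series is dominated by the
    hot pixel's dark-current offset. A sample-to-sample jump much
    larger than the typical (median) jump marks a candidate step.
    Returns the indices where a shift likely begins."""
    diff = np.asarray(hot_pixel, float) - np.asarray(reference_pixel, float)
    jumps = np.abs(np.diff(diff))
    noise = np.median(jumps) + 1e-12  # robust noise scale of the residual
    return np.where(jumps > threshold * noise)[0] + 1
```

The median-based noise scale keeps a single large step from inflating its own detection threshold, which a plain standard deviation would do.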
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: The impact of Aeolus wind observations on extra-tropical storms and on forecast busts

Authors: Giovanna De Chiara, Dr Linus Magnusson, Michael Rennie, Saleh Abdalla, Dr Sean Healy
Affiliations: ECMWF
The world’s first functioning Doppler wind lidar in space, Aeolus, produced wind retrievals that have been shown to improve global Numerical Weather Prediction (NWP) forecasts. At ECMWF, this was demonstrated by assessing the Aeolus Level-2B (L2B) HLOS wind data quality against the ECMWF model equivalents, by NWP impact verification with OSEs (Observing System Experiments) and by using the FSOI (Forecast Sensitivity Observation Impact) method. Recently, an ESA-funded project at ECMWF investigated whether the assimilation of Aeolus L2B winds improves the predictability of high-impact and extreme weather events in the Tropics and extra-tropics, with a focus on Tropical Cyclones and European extra-tropical storms. The impact of Aeolus wind assimilation on the European forecast bust statistics was also evaluated. Observing system experiments (OSEs), with and without Aeolus L2B winds, were run to assess the impact of Aeolus winds on analyses and forecasts. The OSEs covered the period from June 2019 to September 2021. The OSEs used the second officially reprocessed L2B data (generated by the DISC and PDGS), which covers the FM-B laser period from June 2019 to October 2020. This was a period when Aeolus had its highest atmospheric signal levels and was therefore of interest for reassessing the impact with improved-quality reprocessed data. Operational data was used from October 2020 to September 2021. The OSEs were run at much higher resolution than previous Aeolus OSEs. In this poster we will focus on the impact of the Aeolus winds on the predictability of extra-tropical storms and on the forecast bust statistics. A few extra-tropical cyclone case studies were selected and analysed, but a clear signal was not seen.
According to the ECMWF classification, a “European forecast bust” is defined as an occasion when the day-6 high-resolution forecast of the European geopotential height at 500 hPa has an anomaly correlation coefficient of less than 40% and a root-mean-square error greater than 60 m. Based on this definition, the statistics showed no major differences between the OSEs. The impact was not constant over time: the assimilation of Aeolus winds was more beneficial for forecast busts in the first year of the OSEs than in the second. In this presentation the results will be presented and discussed.
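The bust criterion stated in this abstract is directly computable from the two verification scores; a minimal sketch follows (the function name is an illustrative assumption, the thresholds are those quoted above):

```python
def is_european_forecast_bust(anomaly_correlation, rmse_m):
    """ECMWF 'European forecast bust' criterion as quoted in the
    abstract: a day-6 Z500 forecast over Europe with an anomaly
    correlation coefficient below 40% AND a root-mean-square
    error greater than 60 m."""
    return anomaly_correlation < 0.40 and rmse_m > 60.0
```

Note that both conditions must hold at once: a low anomaly correlation alone, or a large RMSE alone, does not count as a bust.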
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Enhancing Aeolus’ Data Quality: From Processor Evolution to Reprocessing Campaigns in Aeolus DISC Phase F1

Authors: Stefanie Knobloch, Oliver Reitebuch
Affiliations: Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Physik der Atmosphäre, ECMWF, DoRIT, IB Reissig, KNMI, Serco, S&T, TROPOS, Les Myriades, OLA, Physics Solution
The European Space Agency's Aeolus mission, launched in August 2018, represents a pioneering mission in global wind profile measurement using the first space-borne Doppler wind lidar. As the mission progresses into Phase F1, efforts have intensified to refine processor baselines and execute comprehensive reprocessing campaigns covering the whole mission lifetime. These efforts aim to enhance data quality and extend the mission’s scientific legacy. During Phase F1, processor baselines B17 and B18 are developed by the Aeolus Data Innovation and Science Cluster (DISC). B17 is planned to be released by the end of 2025 and includes several updates. The L1B processor is updated in the Rayleigh and Mie signal-to-noise-ratio (SNR) calculations, including the addition of the detection chain offset (DCO) and read-out noise. The inclusion of the new Mie core 3 algorithm for calculating the scattering ratio (SR), SNR, and L1B Mie winds, along with error estimates for SR, further refines the data processing. Additionally, a verification of the ground-detection scheme for nadir pointing (IRC) and wind mode has been conducted, leading to bug fixes, refinements, and threshold tuning. The L2A processor also features significant improvements. A new Lidar Surface Return (LSR) product is introduced, enhancing the consistency between different L2A optical products, including SCA, MLE, and AEL-PRO, as well as their error estimates. Quality control is refined across several processing steps to ensure more reliable data products. For the L2B processor, improvements are made to handle moon-blinding periods more effectively, flagging only those periods with significant wind bias. The grouping for Mie cloudy and Rayleigh clear scenes is adapted and refined based on the updated L1B SNR and SR. Aligning the calculation of scattering ratio and SNR between L1B and L2B improves consistency.
Further refinements balance the quality control for Mie winds between coverage (e.g., aerosol scenes) and quality (gross errors). More realistic random error estimates for Rayleigh, and possibly Mie, winds, based on L1B SNR, are developed. Across all processors, a Digital Object Identifier (DOI) is added to L0, L1A, L1B, L2A, and L2B products, ensuring better traceability and accessibility of data. The final processor baseline, B18, is scheduled for release at the end of 2027. This update will primarily focus on bug fixes and code optimization to ensure smooth operation for users and to facilitate future applications of the processors for the follow-on mission, Aeolus-2/EPS-Aeolus. Regarding reprocessing campaigns, the fourth reprocessing with baseline B16 is currently ongoing at ESA’s external service DAMPS (Data Archival, Management & Processing Services) for the FM-A-1 period, with a public release expected in April 2025. Concurrently, work on the second FM-A-2 period is progressing on the DISC side, with reprocessing by DAMPS scheduled for March-May 2025 and a public release expected in June 2025. The fifth reprocessing, covering the full mission lifetime, will be performed with the B17 processor baseline, with a public release foreseen by mid-2027. B17 will be the last version featuring new functionalities and algorithm improvements. The sixth reprocessing, with B18, is expected by the end of 2028, aimed at establishing ESA’s long-term heritage archive. As the Aeolus mission advances through Phase F1, the Aeolus DISC team remains committed to optimizing processor baselines and executing thorough reprocessing campaigns. Continuous improvements in data processing and reprocessing have significantly improved the quality and utility of Aeolus data, reinforcing its role in advancing our understanding of atmospheric dynamics.
The ongoing efforts in Phase F1 not only enhance the scientific output of the Aeolus mission but also set a robust foundation for future Earth observation missions employing similar lidar technologies. The collaborative efforts within the DISC and the broader scientific community are pivotal in realizing the full potential of Aeolus data, ensuring its lasting impact on atmospheric science and climate research.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Seasonal characteristics and temporal evolution of ground returns from Aeolus

Authors: Katharina Uhlmannsiek, Vittoria Cito Filomarino, Uwe Marksteiner, Oliver Reitebuch
Affiliations: DLR (German Aerospace Centre), Institute of Atmospheric Physics
ESA's Aeolus mission (2018-2023) carried the first space-borne Doppler wind lidar, enabling measurements of winds and aerosols. The use of Aeolus wind products had a significant positive impact on numerical weather prediction (NWP). The correction of a dominant bias in the retrieved wind data is based on ECMWF model data. This correction is implemented as a function of primary mirror temperatures and derived from the difference between the Aeolus Level 2B wind profiles and the ECMWF predicted wind profiles. To achieve a model-independent bias correction as originally intended, the understanding and application of a so-called Zero Wind Correction (ZWC) is essential. Initial studies have investigated the use of ground returns for bias correction, including the ZWC method, and have shown that this method performs on average only about 10% worse than the model-based approach. Despite this good performance and its inherent potential, the ZWC method has never been implemented operationally. A critical aspect of ZWC is the effective use of ground returns, the quality of which depends on the surface albedo in the ultraviolet spectral region and on the instrument performance (e.g., laser energy, instrument transmission), which vary seasonally and over the lifetime of the mission. The ground return signal is typically distributed over 1-3 range gates (max. 5) with a typical vertical resolution of 250-1000 m or more. The finest horizontal resolution of Aeolus, about 3 km, was provided at measurement level. However, ground winds are measured at observation level, with a horizontal resolution of about 86 km. This rather coarse resolution can lead to difficulties in interpreting the ground returns. High-signal ground returns are mainly observed in polar regions with a high surface albedo of ice and snow, such as Antarctica or Greenland, and in high mountain regions, such as the Himalayas and the southern Andes, where the albedo remains high throughout the year.
This study investigates systematic patterns in the availability and quality of ground returns in Level 1B data from the latest processing baseline (B16), focusing on signal strength and observed ground wind. Seasonal, geographic and temporal variations in the backscattered ground signal are analyzed using data from 2019 to 2022. The results indicate that both the Rayleigh and Mie channel signals show a decay similar to that observed in the atmospheric backscatter and related to the overall signal decay over the lifetime of the mission. To investigate the relationship between surface albedo and the backscattered signal, regions of characteristic surface reflectance are compared with albedo values from ESA's ADAM database, which provides a monthly albedo climatology for the year 2005. This analysis allows an investigation of how changes in surface albedo affect the backscattered signal, providing insight into its geographical and seasonal variability.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Lessons Learned From Aeolus Rayleigh-channel Winds With Mie Contribution

Authors: Gert-Jan Marseille
Affiliations: KNMI
Aeolus was the first satellite to actively measure winds from space, operating for almost 5 years after its launch in August 2018. Aeolus was a European Space Agency (ESA) Explorer mission with the objective of retrieving wind data from the collected atmospheric return signal resulting from Mie and Rayleigh scattering of laser-emitted light by atmospheric molecules and particulates (aerosol, cloud droplets and ice crystals). Aeolus winds have proven to be beneficial for all global models in which the observations were ingested. Recently, positive impact of Aeolus winds could also be demonstrated for the regional-scale convection-permitting numerical weather prediction (NWP) model HARMONIE-AROME, which is used for operational short-range weather forecasts in many countries in Europe. This could be achieved through the use of a 4-dimensional variational (4D-Var) data assimilation scheme and the continuously improving Aeolus wind products from reprocessing activities as part of the ESA DISC (Data, Innovation, and Science Cluster), in which a number of European institutes participate. The main wind products from Aeolus include Mie winds in cloudy and dense aerosol conditions (the so-called Mie-cloudy wind product) and Rayleigh winds in clear-air conditions (the so-called Rayleigh-clear wind product). Recently, good progress was achieved on Rayleigh winds in cloudy conditions as well (the so-called Rayleigh-cloudy wind product). Their quality is now good enough for use in NWP. A short-term Observing System Experiment (OSE) in the ECMWF system showed positive impact, even in the presence of Mie-cloudy winds. Crucial for Rayleigh-cloudy winds is the handling of the Mie contribution in the Rayleigh channel signal. This will probably become even more important for EPS-Aeolus, the proposed operational follow-on mission of Aeolus, as part of the EUMETSAT Polar System and in partnership with the European Space Agency (ESA).
ESA will provide two satellites, to be launched sequentially, that will both carry a Doppler wind lidar instrument. The design of this instrument is under investigation but is expected to be more sensitive to Mie contribution in the Rayleigh signal. Lessons learned from the Aeolus Rayleigh-cloudy wind product may be crucial in preparation of EPS-Aeolus.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Combined impact of Aeolus and COSMIC-2 GNSS-RO observations in NWP in the Tropics

Authors: Giovanna De Chiara, Dr Sean Healy, Dr Robin Pilch Kedzierski, Michael Rennie, Prof. Nedjeljka Žagar
Affiliations: ECMWF, Universität Hamburg, Universität Wien
ESA’s Earth Explorer mission Aeolus was the first Doppler wind lidar mission in space. Aeolus was assimilated into the ECMWF Integrated Forecasting System from January 2020 until the end-of-life testing phase, which started in May 2023. Aeolus provided near-global coverage of line-of-sight wind profiles from the surface to the lower stratosphere. The Aeolus measurements clearly improved the quality of Numerical Weather Prediction (NWP) forecasts of wind, temperature, and humidity, particularly in the upper troposphere and lower stratosphere (UTLS), with the main improvements seen in the Tropics. The GNSS (Global Navigation Satellite System) radio occultation (GNSS-RO) measurement technique has been used operationally at many NWP centres since around 2006, and the number of measurements has increased significantly in the past few years, starting with the introduction of COSMIC-2 measurements in 2020. GNSS-RO bending angle measurements are a useful component of the global observing system because they complement the information provided by satellite radiances. Perhaps surprisingly, COSMIC-2 GNSS-RO had a significant impact on tropical UTLS wind forecasts in the ECMWF system. They are assimilated up to 50 km height but have larger weight in the core region between 8 and 30 km. Among major observing systems currently in use at ECMWF, GNSS-RO measurements have the largest impact on short-range forecast wind errors in the UTLS as well as in the middle stratosphere. A roughly decade-old set of Observing System Experiments (OSEs) at ECMWF suggested that wind observations were on average more valuable than mass-field data in the upper troposphere, lower stratosphere and in the tropics, whereas mass data were found more valuable in the lower troposphere of the midlatitudes. Recent developments of the Global Observing System and of the ECMWF model, especially 4D-Var, may have changed these conclusions.
This was investigated in the ESA-funded Aeolus+Processes project by a series of OSEs comparing the relative value of Aeolus and GNSS-RO data, especially their multivariate aspects. The aim was a better understanding of the relative value of the mass- and wind-field information in 4D-Var, with a focus on the tropical area. In this contribution, the outcome of this study will be presented and discussed.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Monitoring and assimilating Aeolus atmospheric composition products in ECMWF’s IFS-COMPO 

Authors: Will McLean, Kirsti Salonen, Michael Rennie, Angela Benedetti
Affiliations: ECMWF
Aerosols, solid or liquid matter suspended in the Earth’s atmosphere, have a broad range of both natural and anthropogenic sources, including marine life, desert dust, biomass burning, industrial processes, and volcanic eruptions. Scattering and absorption by aerosols affect incoming and outgoing radiation, varying with the physical properties of the aerosol distribution. In addition to this direct effect on radiation, the presence of aerosols impacts cloud formation: aerosols act as condensation nuclei and affect cloud droplet size and albedo, thereby influencing the radiative properties of the cloud.

ESA’s Aeolus satellite, launched in August 2018 as the fifth in the Earth Explorer series, carried the Atmospheric LAser Doppler INstrument (ALADIN). It was the first Doppler wind lidar in space, providing horizontal line-of-sight (HLOS) wind components from near the surface up to the stratosphere. In addition to the primary HLOS wind product, the high spectral resolution lidar (HSRL) instrument design allows for the retrieval of extinction and backscatter coefficients of cloud and aerosol as a function of altitude, separating the thermally broadened molecular returns from the narrower particulate signal and reducing the assumptions made in the optimal estimation. However, unlike the ATLID instrument on the recently launched EarthCARE satellite, ALADIN’s design did not include a cross-polar channel, limiting the ability to perform aerosol typing without other a priori information and resulting in underestimates of the backscatter, particularly noticeable in regions with relatively high dust aerosol concentrations.

The European Centre for Medium-Range Weather Forecasts (ECMWF) implements the Copernicus Atmosphere Monitoring Service (CAMS) on behalf of the European Union. CAMS provides daily near-real-time analyses and forecasts of global atmospheric composition variables, including surface particulate matter, as well as reanalysis datasets giving historical information on air quality which can be used in global or regional climate models. ECMWF uses a state-of-the-art global data assimilation and modelling framework, the Integrated Forecasting System (IFS), set up in atmospheric composition mode (IFS-COMPO), to produce these atmospheric composition forecasts and analyses. IFS-COMPO is optimised for the assimilation, prediction, and analysis of greenhouse gases, reactive and trace gases, and aerosols in the atmosphere, assimilating multiple satellite-derived aerosol optical depth (AOD) retrieval products. AOD is a column-integrated quantity; as such, it does not provide a constraint on the altitude of an aerosol layer.

As part of the Aeolus Data, Innovation, and Science Cluster (DISC), ECMWF has carried out near-real-time monitoring and assimilation of particle backscatter retrievals in IFS-COMPO, along with observing system experiments (OSEs) aimed at quantifying the requirements for handling these datasets appropriately in the system. Here we present results from work undertaken following the end of the operational mission and the de-orbiting of Aeolus in July 2023; specifically, results from assimilating the Baseline 16 fourth reprocessing for several periods throughout the mission lifetime. A comparison of the different retrieval products assimilated in IFS-COMPO is shown, with a particular focus on the AEL-PRO retrieval product, produced by KNMI as part of the Aeolus DISC. AEL-PRO is an adapted EarthCARE algorithm that includes a feature classification scheme, enabling discrimination between cloud and aerosol fields without the use of a cloud screening mechanism in IFS-COMPO.
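The point that a column-integrated AOD carries no altitude information can be illustrated with a small sketch: two very different extinction profiles (all values below are invented for illustration) can share exactly the same AOD, which is why profile products such as Aeolus backscatter add value over AOD alone.

```python
import numpy as np

# Altitude grid (km) and layer thickness (m); a hypothetical illustration,
# not an IFS-COMPO computation.
z = np.linspace(0.0, 20.0, 201)   # km
dz = (z[1] - z[0]) * 1e3          # layer thickness in m

def aod(extinction):
    """Column-integrated aerosol optical depth from an extinction profile (1/m)."""
    return np.sum(extinction) * dz

# Two very different vertical distributions:
boundary_layer = 1e-4 * np.exp(-z / 1.0)                      # aerosol near the surface
elevated_layer = 1e-4 * np.exp(-((z - 8.0) ** 2) / 2.0)       # lofted plume near 8 km

# Scale the elevated profile so both columns have identical AOD.
elevated_layer *= aod(boundary_layer) / aod(elevated_layer)

# Same AOD, entirely different aerosol altitude: AOD alone cannot tell them apart.
print(aod(boundary_layer), aod(elevated_layer))
```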
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Exploration of Utilizing EarthCARE Feature Mask and Profile Products for Further Improvement of the Aeolus Data Processing

Authors: Diko Hemminga, Jos de Kloe, Gerd-Jan van Zadelhoff, Dave Donovan, Ad Stoffelen
Affiliations: Royal Netherlands Meteorological Institute (KNMI)
Aeolus, ESA’s wind mission, has provided almost five years’ worth of global wind observations. The horizontal line-of-sight wind measurements from the world’s first space-based Doppler wind lidar have had a positive impact through near-real-time assimilation in numerical weather prediction (NWP). To increase Aeolus’s impact further, remaining errors should be identified so that the Aeolus data processing chain can be improved for future assimilation of reprocessed data. Potential improvements may be found in the context of scene classification and the determination of the scattering ratio. In this work we focus our attention on the data products developed for the ATmospheric LIDar (ATLID) on the Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) satellite platform.

Aeolus measured the horizontal line-of-sight wind in the planetary boundary layer, troposphere, and lower stratosphere with the Doppler wind lidar technique. The Atmospheric LAser Doppler INstrument (ALADIN), operating at 355 nm, uses a variation of the High-Spectral-Resolution Lidar (HSRL) technique to detect the Doppler shift of the attenuated backscatter return signal, which is induced by atmospheric motion with respect to the satellite. The atmospheric backscatter, from air molecules and particles, is measured in two separate main detection channels, referred to as the (molecular) Rayleigh and (particulate) Mie channels. The construction of accumulations of a given ‘atmospheric class’ is one of the main features of the Aeolus L2B processing. In the current data processing, the Aeolus measurements are grouped into ‘clear’ and ‘cloudy’ types by a scene classification algorithm so that the optimal wind retrieval algorithm can be applied. The atmospheric class is decided by the optical properties of the sensing volume: the contribution of particulate backscatter relative to molecular backscatter in the signals on ALADIN’s channels.
For the results of the fourth reprocessing campaign with the L2B processor on Baseline 16, the scene classification of the Rayleigh channel is based on the Mie channel signal-to-noise ratio (SNR): Rayleigh altitude bins where the Mie SNR exceeds a chosen threshold are classified as cloudy. Moreover, the scene classification of the Mie channel is currently switched off: all valid data are classified as cloudy. Note that the classification ‘cloudy’ is intended to encompass all particulate scattering, including hydrometeors and aerosols, while ‘clear’ is intended to mean no particulate scattering, i.e., only molecular scattering. Misclassification is expected to negatively influence measurement error and bias. In addition, wind processing of scenes with heterogeneous optical properties may be improved by segregating these scenes into more uniform classes, which is most important for the Rayleigh channel with the current settings.

The single-instrument ATLID (L2a) data processing includes the ATLID feature mask (A-FM) and ATLID profile retrieval (A-PRO) algorithms. In order to make accurate retrievals of cloud and aerosol extinction and backscatter, these algorithms address the low signal-to-noise ratio of the lidar signal by devising suitable data binning. The feature mask (A-FM) product provides a probability mask to guide data binning and subsequent data smoothing in the A-PRO algorithm, ensuring ‘strong’ and ‘weak’ signals are not mixed. Both processors and their data products have been added to the Aeolus processing chain, both as an improvement of the Aeolus data processing and as pre-launch EarthCARE algorithm tests. These so-called AEL-FM and AEL-PRO implementations produce similar data products sampled on the Aeolus measurement resolution. AEL-FM defines coherent atmospheric structures based on two complementary image processing techniques.
This output is explored for the possibility of scene classification and subsequent grouping, thereby defining accumulations based on the feature mask output. In addition to comparisons between the current scene classification and the feature mask, the scattering ratio generated by the AEL-PRO algorithm has recently become of interest, since it is needed by the L2B processor to decontaminate the Rayleigh channel from particulate backscatter. While the scattering ratio is already calculated as part of the Aeolus L1B processing, AEL-PRO results that implement data smoothing over coherent features may lead to improved error characteristics. Current results indicate that, as expected, clear-sky features are most prominent in the Rayleigh clear data subset, while in the Rayleigh and Mie cloudy data mainly features with the highest cloud probabilities are present. The presence of weak features, e.g., aerosols and thin ice clouds, is found across both channels and both scene classifications and requires further investigation. A weak or strong feature, i.e., a misclassification, in the Rayleigh clear data subset generally seems to perform better than expected in terms of the average bias found between observation and NWP model wind. Moreover, these average biases are smaller than those found for clear-sky features in the Mie cloudy data subset. Lastly, further investigation is required to disentangle the connection between the scattering ratio and the feature mask index.
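The Rayleigh scene classification described above reduces to a per-bin threshold test on the Mie channel SNR. A minimal sketch, with invented SNR values and an assumed threshold (the operational Baseline 16 setting differs):

```python
import numpy as np

# Hypothetical Mie-channel SNR per Rayleigh altitude bin (values invented
# for illustration only).
mie_snr = np.array([1.2, 3.5, 9.8, 42.0, 15.1, 2.0, 0.8])
SNR_THRESHOLD = 8.0  # assumed value, not the operational L2B threshold

# Rayleigh bins where the Mie SNR exceeds the threshold are flagged 'cloudy'
# (particulate scattering present); the rest are 'clear' (molecular only).
classification = np.where(mie_snr > SNR_THRESHOLD, "cloudy", "clear")
print(list(classification))
```

In the real processor the 'cloudy' and 'clear' bins are then accumulated separately, so that the Mie and Rayleigh wind retrievals each operate on optically homogeneous samples.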
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Characterization of an intercontinental smoke transport event from America to Europe in September 2020 using Aeolus Baseline16 data and multi-platform data

Authors: Kangwen Sun, Guangyao Dai, Songhua Wu, Holger Baars, Dimitri Trapon, Albert Ansmann
Affiliations: Leibniz Institute for Tropospheric Research (TROPOS), College of Marine Technology, Faculty of Information Science and Engineering, Ocean University of China, Laoshan Laboratory, Institute for Advanced Ocean Study, Ocean University of China
In September 2020, massive wildfires in the western United States, especially in California, emitted vast quantities of biomass burning aerosols, including organic carbon and black carbon. These aerosols were lofted into the free troposphere and transported across continents. This study leverages the unique capabilities of the Aeolus satellite, equipped with the ALADIN lidar system, to simultaneously investigate the vertical distribution and the horizontal variation of smoke aerosols during their transport. Using data from multiple platforms, including ALADIN onboard Aeolus, CALIOP onboard CALIPSO, MODIS onboard Aqua and Terra, VIIRS onboard Suomi NPP, the MERRA-2 reanalysis model and the HYSPLIT model, the large-scale, long-duration smoke aerosol transport event from western America over the Atlantic Ocean to northern Europe has been captured and analyzed. First, the temporal range (13th to 21st September 2020) and the horizontal study region (30°N to 70°N, 140°W to -40°E) were selected, as in this period distinctly high AOD values at 550 nm (partly larger than 5) appeared from California to northern Europe. Then, using the aerosol classification data from CALIOP and VIIRS, the selected study region was found to be dominated by smoke-related aerosol, which verifies the smoke transport event. To acquire smoke aerosol profiles from the Aeolus observations, the Aeolus optical properties (L2A) were extracted after quality control, cloud screening and outlier elimination. Seven Aeolus cross-sections capturing smoke aerosol layers, on 14th, 15th, 16th, 18th, 19th, 20th and 21st September, located from western America over the Atlantic Ocean to northern Europe, were identified. These seven cross-sections are considered to describe the whole smoke aerosol transport, and their spatial and temporal positions were verified with the HYSPLIT model.
The distributions and variations of the smoke optical properties during the transport, including the extinction coefficient and lidar ratio, are analyzed based on the MLE products of Aeolus Baseline 16. Assuming a directly proportional relationship between the smoke mass concentration (MC) and the extinction of an individual layer, a new approach for smoke MC estimation is presented which utilizes the Aeolus extinction coefficient and the column MC from the MERRA-2 model. Wind vector products from Aeolus (L2B) provide the dynamic information of the transport. A smoke transport tunnel from western America to northern Europe is found, and the transport fluxes of each cross-section are calculated to represent the transport intensity. This study demonstrates the capability of reprocessed Aeolus wind and aerosol products for the characterization of large smoke transport events.
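The MC estimation step described above can be sketched as a one-line scaling: if MC(z) is proportional to extinction(z), the proportionality constant follows from requiring the profile to integrate to the MERRA-2 column MC. All numbers below are illustrative, not values from the study.

```python
import numpy as np

# Hypothetical Aeolus L2A extinction profile (1/m) on a 250 m vertical grid
# and a MERRA-2 column mass concentration (g/m^2); values are invented.
dz = 250.0                                        # vertical bin thickness (m)
extinction = np.array([0.0, 2e-5, 8e-5, 5e-5, 1e-5, 0.0])
column_mc_merra2 = 0.12                           # g/m^2

# Assume MC(z) = k * extinction(z); fix k with the MERRA-2 column value:
#   k = column_MC / integral(extinction dz)
k = column_mc_merra2 / (np.sum(extinction) * dz)
mc_profile = k * extinction                       # g/m^3 per bin

# By construction the profile integrates back to the MERRA-2 column MC,
# while the vertical shape comes from the Aeolus extinction measurement.
print(np.sum(mc_profile) * dz)
```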
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Scale-Dependent and Flow-Dependent Effects of Aeolus Winds in the ECMWF 4D-Var Data Assimilation System

Authors: Chen Wang, Nedjeljka Žagar, Giovanna De Chiara, Michael Rennie, Sean Healy, Massimo Bonavita
Affiliations: University of Hamburg, European Centre for Medium-Range Weather Forecasts
During its lifetime, horizontal line-of-sight (HLOS) wind profiles from Aeolus were assimilated into ECMWF’s operational four-dimensional variational (4D-Var) data assimilation system, as well as in multiple observing system experiments (OSEs). Extended OSEs carried out with reprocessed HLOS wind profiles after Aeolus’s end of life were used in the ESA-funded Aeolus+Processes project to understand the impact of Aeolus data in relation to the background flow, dynamical regime and spatial scales of processes. Particular attention was given to the properties of the ECMWF 4D-Var system. The investigation proceeded along two lines. First, systematic and random differences were evaluated between analyses that included Aeolus HLOS winds on top of all other assimilated data (denoted Aeolus) and those without Aeolus (denoted NoAeolus). Second, analysis increments from both experiments were assessed to understand the 4D-Var effects. A novel diagnostic method was used to decompose motions in terms of their zonal and meridional scales, as a function of vertical level and latitude. The approach particularly benefits the tropical region, where the traditional spherical harmonics decomposition is less effective. This presentation will address the following questions:

1. How does the scale dependency of Aeolus wind assimilation vary with latitude, altitude and flow regime?
2. What are the impacts of HLOS winds on the variance spectra of 4D-Var analysis increments across tropical, subtropical, and midlatitude regions?
3. How does 4D-Var propagate the effects of predominantly zonal Aeolus winds in the tropics to cross-equatorial flows?
4. What are the impacts of HLOS wind assimilation on 4D-Var analysis balance?

The results show that the most significant impact of Aeolus data is on the n=1 Rossby wave, the most energetic Rossby wave in the global atmosphere.
On average, Aeolus wind assimilation enhances planetary-scale circulation in the zonal wavenumber spectra and synoptic-scale flows in the horizontal wavenumber spectra. Importantly, Aeolus wind assimilation increases analysis balance by strengthening increments associated with Rossby modes (linearly balanced motion) and reducing increments linked to inertia-gravity modes (linearly unbalanced motion). The mean-square-difference spectra show at least an order of magnitude stronger effect for Rossby modes, with the largest impact observed for the n=1 Rossby wave with zonal wavenumber k=1. In the tropics, HLOS wind assimilation exhibits contrasting effects on the two key equatorial waves, the Kelvin and mixed Rossby-gravity (MRG) waves, which represent a portion of the zonal and meridional winds, respectively. Kelvin wave analysis increments in NoAeolus are enhanced by the assimilation of Aeolus data, especially at k=1 near the 100 hPa and 500 hPa levels. In contrast, the analysis increments associated with the MRG waves appear suppressed by Aeolus observations, just like the inertia-gravity increments. This shows that Aeolus winds influence the representation of cross-equatorial flows produced by 4D-Var in the absence of wind observations. Moreover, comparisons of the Aeolus and NoAeolus analyses reveal a stronger Aeolus impact on synoptic-scale MRG wave meridional winds than on k=1 Kelvin wave zonal winds in the tropical tropopause layer. This underscores the poorly constrained meridional flow in the tropics, which might have contributed to uncertainties in the representation of the Hadley cell in reanalysis data. These findings underscore the critical importance of wind profile observations for both weather forecasting and climate modelling.
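The idea of a variance spectrum per zonal wavenumber can be sketched with a plain FFT along a latitude circle. This is a generic sketch with synthetic data, not the study's modal decomposition into Rossby and inertia-gravity modes; it only illustrates how a k=1 planetary-scale signal dominates such a spectrum.

```python
import numpy as np

# Synthetic "analysis increment" along one latitude circle: a k=1 planetary
# wave plus a weaker k=5 component (amplitudes are invented).
nlon = 360
lon = np.linspace(0.0, 2.0 * np.pi, nlon, endpoint=False)
increment = 3.0 * np.cos(lon) + 0.5 * np.sin(5.0 * lon)

# Variance contribution per zonal wavenumber k, from the FFT amplitudes.
spectrum = np.abs(np.fft.rfft(increment)) ** 2 / nlon**2

# The planetary-scale k=1 wave carries most of the variance.
k_dominant = int(np.argmax(spectrum[1:]) + 1)
print(k_dominant)
```

In the study, analogous spectra are computed for the difference between the Aeolus and NoAeolus increments, as a function of vertical level and latitude, which is how the k=1 Rossby-wave impact quoted above is isolated.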
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Comparing performance simulations for Aeolus-1 and Aeolus-2

Authors: Dr. Uwe Marksteiner, Dr. Stefanie Knobloch, Dr. Oliver Reitebuch, Dr. Benjamin Witschas, Dr. Markus Meringer, Dorit Huber, Katja Reissig
Affiliations: DLR - IPA, DLR - IMF, DoRIT, IB Reissig
On August 22nd, 2018, the European Space Agency (ESA) launched its Earth Explorer satellite Aeolus, which carried the first wind lidar into space: ALADIN (Atmospheric LAser Doppler INstrument). This lidar emitted laser pulses at 355 nm and analysed the light that was Doppler-shifted and backscattered by the atmosphere, using two different types of interferometers for the detection of Rayleigh and Mie scattering from molecules and from aerosols or clouds, respectively. During its almost five years of operation Aeolus provided the first measurements of line-of-sight wind profiles between 0 and 25 km on a global scale and in near real time, which had a significant and positive impact on numerical weather prediction. On July 28th, 2023, ESA successfully performed an assisted re-entry of Aeolus, another world first. Based on the success of Aeolus-1, a follow-up mission called Aeolus-2 (or EPS-Aeolus at EUMETSAT) is currently being planned in cooperation between EUMETSAT and ESA. The design of Aeolus-2 is still under development, with major modifications envisaged for the instrument setup, including the laser, the beam path (bi-static instead of transceiver) and the detector. This shall enable Aeolus-2 to provide improved vertical (up to 125 m) and horizontal (10 km for the Mie channel) resolution, extended coverage higher into the stratosphere (> 30 km with the Rayleigh channel) and more precise wind measurements (< 2.5 m/s random error in the troposphere). In a “Doppler Wind Instrument Simulator Study” funded by EUMETSAT, the German Aerospace Center (DLR) is updating the configuration of the existing End-to-End Simulator (E2S) for Aeolus-1 to carry out initial performance simulations representative of the future Aeolus-2 satellites. By considering the detailed technical characteristics of the instrument as well as user-defined atmospheric conditions, the E2S produces Rayleigh and Mie signal data.
Together with albedo and digital elevation model information, as well as housekeeping and ephemeris data, the E2S then creates output files in the same format as provided by the Aeolus-1 satellite. This allows a direct coupling of the E2S to the operational Level 0/1 processors and comparisons of simulation results against measurements of Aeolus-1. Our study also evaluated the impact of replacing the Fabry-Pérot interferometer, as used for Aeolus’ Rayleigh channel, with a double-field compensated Michelson interferometer (DMI). We will explain the structure and capabilities of the E2S and name the assumptions and limitations of our approach. The primary goal of the study is to pave the way towards a performance simulator for Aeolus-2. As a first step, this includes achieving consistency between simulation and measurement for Aeolus-1 in terms of atmospheric signal levels and wind random error. We will present comparisons of real Aeolus-1 measurements with corresponding simulations for Aeolus-1 as well as for Aeolus-2.
Add to Google Calendar

Friday 27 June 13:00 - 13:45 (Frontiers Agora)

Session: C.06.12 Trends, challenges and communication on RF interference and frequency management

Radio-Frequency Interference (RFI) is a growing concern for EO missions. This Agora will include discussions on how to prevent RFI through a better regulatory environment, how to cope with it once it affects the data of EO missions, and what could be done in terms of communication about the impact of RFI both within the science community and outside of it.
General trends in RFI and spectrum management will also be discussed, including in the context of the next World Radiocommunication Conference (WRC-27).

Moderators:


  • Yan Soldo - ESA

Speakers:


  • Stephen English - ECMWF
  • Giulia Panegrossi - CNR
  • Bruno Espinosa - ESA
  • Eric Allaix - Météo-France
  • Philippe Aubineau - ITU
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: D.04.01 - POSTER - Data Access and Interoperability to enable Infrastructure-agnostic Science Reproducibility

Accessing a consolidated set of data across different cloud-based ecosystems is a challenge for enabling full science reproducibility and full infrastructure independence. Differences in data organization, formats, availability and access protocols, and their change over time, require scientists to perform frequent and costly adaptations of their applications. This session focuses on solutions that ensure uniform data searchability and access across multiple cloud environments, with the purpose of allowing interoperability of platforms and applications and full reproducibility of executions.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: xcube: A Scalable Framework for Unified Access of Earth Observation Data

#pangeo #zarr #stac #cloud-native

Authors: Konstantin Ntokas, Pontus Lurcock, Gunnar Brandt, Norman Fomferra
Affiliations: Brockmann Consult GmbH
The increasing availability of Earth observation (EO) data from diverse sources has created a demand for tools that enable efficient, robust, and reproducible data access and pre-processing. Users of EO data often resort to custom software solutions that are time-consuming to develop, challenging to maintain, and highly dependent on the user’s programming skills and documentation practices. To address these challenges, the open-source Python library xcube has been developed to streamline the process of accessing EO data and presenting them in analysis-ready data cubes that comply with Climate and Forecast (CF) metadata conventions. xcube is a versatile toolkit designed to access, prepare, and disseminate EO data in a cloud-compatible and user-friendly manner. One key component of the software is its data store framework, which provides a unified interface for accessing various cloud-based EO data sources. This framework employs a plug-in architecture, enabling easy integration of new data sources while maintaining a consistent user experience. Each data store supports a standardized set of functionalities, abstracting the complexities of underlying APIs. This ensures that users have a consistent toolset for accessing and managing data from various, distributed providers, providing this data in the form of well-established Python data models such as those offered by xarray or geopandas. To date, several data store plug-ins have been developed for prominent cloud-based APIs, including the Copernicus Climate Data Store, ESA Climate Change Initiative (CCI), Sentinel Hub, and the SpatioTemporal Asset Catalog (STAC) API. These tools are already employed in various ESA science missions, simplifying data access for researchers and service providers. Ongoing developments focus on creating additional data stores, including support for the new EOPF product format for the Sentinels, alongside a multi-source data store framework. 
The latter will facilitate the integration of multiple federated data sources and incorporate advanced preprocessing capabilities such as sub-setting, reprojection, and resampling. By ingesting diverse datasets into a single analysis-ready data cube and recording the entire workflow, this approach significantly enhances the reproducibility and transparency of the data cube generation process. Prepared data cubes can be stored in multiple formats, with Zarr as the preferred choice. Zarr is a chunked format optimized for cloud storage solutions like Amazon S3. Once generated, these data cubes can be disseminated through xcube Server, which provides standard APIs such as STAC, OGC Web Map Tile Service (WMTS), OGC API - coverages, and many more. A client for these APIs is the built-in tool xcube Viewer – a single-page web application used to visualize and analyse data cubes and vector data published by xcube Server APIs. The xcube framework integrates seamlessly into the broader Pangeo ecosystem, leveraging its compatibility with Python libraries such as xarray, Dask, and Zarr. This ensures efficient data handling, scalable computation, and cloud-optimized storage. Beyond the Pangeo community, xcube’s standardized outputs using the xarray data model make them broadly applicable for researchers working with N-D spatiotemporal, multivariate datasets. In summary, xcube offers an open, scalable, and efficient solution for accessing, preparing, and disseminating EO data in analysis-ready formats. By providing standardized interfaces, robust preprocessing capabilities, and cloud-native scalability, xcube empowers researchers to focus on scientific analysis while ensuring reproducibility and interoperability across diverse datasets.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Exploring Earth: Your Gateway to ESA Earth Observation Data

Authors: Federico Cappelletti, Simone Mantovani, Alessia Cattozzo, Mattia Stipa, Roberto Alacevich
Affiliations: MEEO, ESA-ESRIN
The Common Services within the European Space Agency (ESA) play a crucial role in the dissemination of data, ensuring that information from various space missions is effectively shared with users, stakeholders, and the public. As part of the service evolution, ESA promotes the adoption of standards and emerging cloud technologies and services to guarantee an adequate level of interoperability while extending the service offering and performance. The ESA PDGS-DataCube project is a long-term initiative, funded since 2018, to adopt a Virtual Data Cube approach, ensuring scalable and interoperable pixel-based access services for Earth Explorer, Heritage and Third-Party missions. Building upon this foundation, Exploring Earth emerges as an innovative gateway to explore, visualise, and access data from these diverse mission sources. The web application's core innovation lies in its two-step search methodology, which simplifies navigation through the complex landscape of Earth Observation data: users first browse collections and then perform detailed searches for specific products. For this purpose, advanced search capabilities allow filtering by critical parameters including geographic area, time frame, and mission-specific attributes. Moreover, the application allows users to export and import performed searches in order to quickly revisit and share their previous queries. Exploring Earth supports multiple data access modes, accommodating both synchronous and asynchronous service requirements. The application also integrates the Web Map Tile Service standard, thus ensuring broad interoperability. Finally, it has been designed with universal accessibility in mind: the responsive and customizable interface works seamlessly across desktop computers, tablets, and smartphones. By integrating the rich data ecosystem of the PDGS-DataCube with user-friendly technology, the application empowers users to explore and understand Earth's phenomena with ease and depth.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: A new sub-chunking strategy for fast netCDF-4 access in local, remote and cloud infrastructures

Authors: Flavien Gouillon, Cédric Penard, Xavier Delaunay, Sylvain
Affiliations: CNES
NetCDF (Network Common Data Form) is a self-describing, portable and platform-independent format for array-oriented scientific data which has become a community standard for sharing measurements and analysis results in oceanography and meteorology, but also in the space domain. The volume of scientific data is continuously increasing at a very fast rate. Object storage, a new paradigm that appeared with cloud infrastructures, can help with data storage and parallel access issues, but NetCDF may not be able to get the most out of this technology without some tweaks and fine-tuning. The availability of ample network bandwidth within cloud infrastructures allows for the utilization of large amounts of data. Processing data where the data is located is preferable, as it can result in substantial resource savings. But some use cases require downloading data from the cloud (e.g. processing that also involves confidential data), and results still have to be fetched once processing tasks have been executed on the cloud. Networks exhibit significant variations in capacity and quality, ranging from fiber-optic and copper connections to satellite connections with poor reception in degraded conditions on boats, among other scenarios. Therefore, it is crucial for formats and software libraries to be specifically designed to optimize access to data by minimizing transfers to only what is strictly necessary. In this context, a new approach has emerged in the form of a library that indexes the content of netCDF-4 datasets. This indexing enables the retrieval of sub-chunks, which are pieces of data smaller than a chunk, without the need to reformat the existing files. This approach targets access patterns such as time series in netCDF-4 datasets formatted with large chunks. This report provides a performance assessment of netCDF-4 datasets for varied use cases.
This assessment executes these use cases under various conditions, including POSIX and S3 local filesystems, as well as a simulated degraded network connection. The results of this assessment may provide guidance on the most suitable and most efficient library for reading netCDF data in different situations.
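The sub-chunking idea, fetching only the byte ranges inside a chunk that a request actually needs, can be sketched for the time-series access pattern mentioned above. The chunk shape, dtype and layout below are illustrative assumptions, not the library's actual index format.

```python
import numpy as np

# Sketch: locating the bytes of a single station's time series inside one
# C-order chunk, so only those byte ranges (sub-chunks) are fetched rather
# than the whole chunk.
chunk_shape = (100, 1000)              # (time, station) values in one chunk
itemsize = np.dtype("f8").itemsize     # 8 bytes per float64 value

def time_series_ranges(station):
    """Byte ranges inside the chunk covering all time steps of one station."""
    row_bytes = chunk_shape[1] * itemsize      # bytes per time step (one row)
    start = station * itemsize                 # offset of the station in a row
    return [(t * row_bytes + start, t * row_bytes + start + itemsize)
            for t in range(chunk_shape[0])]

ranges = time_series_ranges(station=42)
needed = sum(end - begin for begin, end in ranges)
total = chunk_shape[0] * chunk_shape[1] * itemsize

# Only a tiny fraction of the chunk's bytes is actually needed.
print(needed, total)
```

In practice the ranges would be translated to HTTP range requests against object storage (or seeks in a POSIX file), which is what makes this pattern attractive over degraded network connections.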
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: ESA Multi-Mission Algorithm and Analysis Platform (ESA-MAAP): A Cloud-Based Collaborative Environment for Data Access and Innovation Boosting the Impact of EO Science Missions

Authors: Saskia Maria Brose, Roberto Alacevich, Cristiano Lopes, Francesco Ferrante, Francesco Cantavenera, Simone Mantovani, Max Solomcuk
Affiliations: ESA, Serco Italia S.p.a, CGI-Italia, MEEO, CGI-UK
Earth observation missions are producing unprecedented volumes of data, driven by advances in satellite technology and sensor capabilities. While this surge in big data offers immense potential for advancing Earth Observation (EO) research, it necessitates a new approach to organizing and handling the payload data. Traditional approaches to data access for these missions, which involve designing mission-specific infrastructures, result in fragmented data organization and multiple entry points for access, creating redundant services. To align with the collaborative nature of today's world, we introduce a new cloud-based platform: the ESA Multi-Mission Algorithm and Analysis Platform (ESA-MAAP). Building on experience gained from precursor projects, the Thematic Exploitation Platforms (TEPs) and the EO Exploitation Platform Common Architecture (EOEPCA+), ESA-MAAP empowers scientific users to co-create innovative solutions that tackle challenges in EO science. This setup also presents itself as an ideal space to communicate science and provides a foundation for EO education and outreach. Originally conceived as a combined effort between ESA and NASA to support the BIOMASS, GEDI and NISAR missions, ESA-MAAP has since expanded its scope to cater to multiple missions. At its core is a cloud-based collaborative environment, known as the Product Algorithm Laboratory (PAL), which connects to a scalable parallel-processing system, enabling data exploration and fusion across multiple space missions while promoting engagement within the EO community. The PAL provides a shared space where dedicated users can jointly develop algorithms, validate and integrate new processing services, and exchange datasets and results. At the same time, it serves as an ideal backdrop for educational and training initiatives, such as the recently hosted joint DLR-ESA PolInSAR training course.
By providing a unified environment for both research and learning, the collaborative capabilities of the ESA-MAAP enable participants to transition from gaining foundational knowledge to conducting independent research, all within the same platform. Complementing the collaborative environment, two key services form the backbone of the ESA-MAAP: the catalogue and the explorer, which enhance dynamic data search capabilities and ensure efficient metadata handling, while also providing access to ESA and Third-Party Mission (TPM) data from a single location. On top of that, users are provided with a data access service that, for example, allows them to retrieve only specific subsets of data or specific band combinations, avoiding the need to process more data than necessary. This service is based on well-established OGC-compliant interfaces and APIs. By lowering the barriers to data dissemination and analysis while offering integrated training capabilities, the ESA-MAAP fosters connections among scientists and between space agencies (e.g. NASA and DLR), paving the way for accelerated advances in research. The integration of the ESA-MAAP platform with the ESA EO Ground Segment Common Services thus streamlines data discovery, access, and processing for ESA and TPM data. While providing a common approach to data access and processing, the platform also addresses mission-specific needs through dedicated initiatives, enabling the respective communities to collaborate on tailored developments. Currently, this approach is being applied to the recently launched or upcoming Earth Explorer missions EarthCARE, BIOMASS, and FLEX, ensuring that these missions benefit from robust and innovative payload data management throughout their entire lifecycles.
This presentation will showcase the individual components of the ESA-MAAP, highlight use-cases born from its collaborative capabilities, and demonstrate how its integrated educational and outreach activities boost the impact of EO science missions.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: An Information Factory Prototype to make Science Usable & Reproducible

Authors: Rosalie van der Maas, Daniel van der Maas
Affiliations: Ellipsis Drive
The need that the Information Factory prototype addresses: There is a need for an infrastructure solution that allows the creators of analytics to get their models/processes launched in a dockerized environment and connected to the required data inputs (EO data and otherwise) in order to run them in a fully operational and scalable setting. We also need to provide a solution that makes these data/results/processes ready for commercialization and easy use via endpoints that cater to the spatial data consumption needs of every downstream user, both technical and non-technical. This fills the current infrastructure gap that holds back effective (re)use of EO data and EO-powered models/processes by industry professionals as well as the mainstream.

The Information Factory prototype project objectives: Our main technical and programmatic objective was to prototype an Information Factory (IF) that enables data owners/administrators, model/process providers (including scientists and the government-, NGO- and IGO-backed research institutes they work for) and end-users to host, find and ingest EO/spatial data and get them parsed into models/processes, making them (the resulting data as well as the models/processes) available for direct integration and consumption in operational workflows at scale. The system we have built shows that it is possible to enable people who are new to the ecosystem, or specialised in other aspects of data analytics, to use EO resources and publish their work while automatically adhering to industry standards such as OpenEO and OGC protocols.

What we have developed: We have developed an Information Factory prototype where: 1. Owners of spatial data (and derived products) can get their content published for high-performance search, analytics, easy consumption and possible commercialization via industry-standard endpoints, packages, protocols and tools under fitting permissions. 2. Model/process creators can get their models/processes automatically dockerized for operational use and connect them to the appropriate EO/spatial data sources within the Information Factory. 3. Third-party end users can find and (re)use these models/processes on demand, and have the outputs published for easy and high-performance consumption via industry-standard endpoints, packages, protocols and tools.

The innovations underpinning these achievements: Ellipsis Drive (ED) has found a way to construct a federated logical data unit on top of any collection of spatial data files (we call this the ED native data layer), which allows us to guarantee true interoperability and high performance for all open endpoints simultaneously, regardless of the spatial data type and size used as input. The improvement over the previous state of the art is the ability to automatically and scalably connect any spatial data to models/processes, and to directly facilitate the broadest audience in using EO/spatial data to date. Leveraging the ED native layer is one of the main building blocks and innovation areas of the Information Factory prototype. The second core technology and innovation area within this IF project is the creation of a Model <> Layer Connector. This was a required improvement on the state of the art because it is essential that users are able to develop and test their models locally and push them to a dockerized environment with a single command (no knowledge of Docker or virtual environments required). This means that the connector is now able to read data locally via the API and scalably in the dockerized environment within the IF in functionally the same (though operationally different) way. In order to achieve this, we have federated the ED local Python package with the scalable reading modules created on top of ED native layers.
Findings and outlook for commercialization: Having successfully developed the IF prototype as planned in 2024, we have been able to confirm that the Information Factory capabilities fulfil a demand/need for: 1) Entities who not only need their geodata published (to make it findable/usable for any selected audience), but also their models/processes. These entities are primarily government-, NGO- and IGO-backed research institutes, agencies and scientific communities who are required to publish and share their acquired datasets, models/processes and research results in a FAIR-compliant way. 2) Entities who are looking to build out or simplify their open-geodata-based analytics workflows. These are SMEs and large enterprises who rely on open geodata as an input for their analytics processes and who are interested in reducing the data ingestion overhead. 3) Entities who are looking for a solution that adds more spatial/raster analytics power to the baseline Ellipsis Drive offering. These are mostly large enterprises within the P&C and Ag insurance/finance industry (which rely on geospatial data inputs, especially EO-derived raster data, for risk assessment, claims and portfolio management) who also need a distributed compute capability.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: A.08.05 - POSTER - Measuring and Monitoring Winds

Global ocean surface winds force the ocean through momentum, heat fluxes and gas exchanges, and furthermore inhibit deep vertical exchange processes in both atmosphere and ocean. This intimate coupling of the ocean and the atmosphere drives both atmospheric and oceanic circulations that determine our weather and climate, yet ocean-atmosphere exchanges over 70% of the Earth's surface remain poorly understood. Winds further impact our lives in many ways, from off-shore energy provision to global trade and aviation.

Altimeters, scatterometers and radiometers have long provided marine wind data, with some efforts made to achieve multi-decadal, self-consistent datasets suitable for addressing issues of climate change. In addition, new instruments such as Aeolus provide a wealth of data not seen before, enabling greater insight into the dynamics of storms and the monitoring of high-altitude winds. These data also pose challenges for their optimal use in weather forecasting models and for understanding Earth's climate dynamics over the longer term. Papers are invited covering all aspects of the remote sensing of winds, the calibration and quality control of the data, and their analysis to better understand the Earth's climate or the dynamical processes occurring in individual storms.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Wind speed sensor intercomparison in cyclone conditions: CYGNSS and Sentinel-1 contrasted with ERA-5

Authors: Carmela Galdi, Matteo Barone, Maurizio di Bisceglie, Dr. Cinzia Zuffada
Affiliations: Università degli Studi del Sannio, Jet Propulsion Laboratory, California Institute of Technology
Satellite-based wind speed retrievals in or near tropical cyclones pose significant challenges in terms of accuracy, resolution, spatial-temporal proximity and revisit time. This study investigates a possible synergy between GNSS reflectometry with the CYGNSS satellite constellation and Sentinel-1 SAR data, aimed at improving the quality and availability of wind speed information under such extreme weather conditions. The challenges are posed by the differences in spatial and temporal resolutions, as well as by the different scattering mechanisms exploited by the two systems. CYGNSS is a constellation of small, low-cost satellites designed for fast and reliable sampling of the evolving scene. It utilizes the Global Navigation Satellite System Reflectometry (GNSS-R) concept, which exploits the availability of L-band GNSS signals of opportunity, scattered in the specular direction and collected by CYGNSS in LEO, forming a bistatic scattering geometry. This bistatic scattering mechanism causes the received power to primarily originate from waves scattered within the 'glistening zone', with the strongest signal reflecting from its center, known as the nominal specular point. As the surface roughness increases, the glistening zone expands, reducing the received power and resulting in a consistent decrease of the normalized bistatic radar cross section as sea surface wind speed rises. The technique allows reliable penetration of the inner cyclone core and the surrounding clouds while providing high temporal resolution, with revisit times on the order of hours and 25×25 km spatial resolution. The Ocean Wind Fields (OWI) component of the Sentinel-1 Level 2 Ocean (OCN) product delivers 10-m wind speed and direction from C-band data at a spatial resolution of 10 km. These data are publicly accessible via the Copernicus Data Space Ecosystem. The surface wind component provides information exclusively from the co-polarized signal and depends on pre-existing wind direction data.
The analysis covers three tropical cyclones: typhoon In-Fa (2021), cyclone Cheneso (2023), and typhoon Belal (2024), selected to illustrate different scenarios of temporal and spatial alignment between the two sensors. For typhoon In-Fa, Sentinel-1 data were acquired on July 24, 2021, with CYGNSS data available within a maximum temporal delay of five minutes. Cyclone Cheneso, observed on January 25, 2023, presents a case with a slightly longer delay of 15 minutes, whereas for typhoon Belal, Sentinel-1 and CYGNSS acquisitions were separated by 4 minutes. A first comparison between Sentinel-1 and CYGNSS shows quite good agreement, at least globally, with a correlation coefficient greater than 0.7 in all cases. However, larger discrepancies appear in high-wind regions, with CYGNSS overestimating with respect to Sentinel-1 by 4 m/s and more. To further refine the analysis and provide a trusted reference, ERA-5 reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF) were considered. The quality and reliability of reanalysis data are secured by the combination of Numerical Weather Prediction (NWP) models and observation data: the method combines the previous forecast with newly available observations in an optimal way to produce a new best estimate of the state of the atmosphere. Among other atmospheric variables, ERA-5 provides global wind speed data with a spatial resolution of approximately 31 km and hourly temporal resolution. This reanalysis dataset was interpolated onto the spatial grids of CYGNSS and Sentinel-1, allowing a consistent framework for comparing sensor performance. The general comparison of Sentinel-1 and CYGNSS against ERA-5 across all cyclones reveals clear patterns. Sentinel-1 exhibits consistently lower errors, with bias values ranging from 0.2 m/s to 1.5 m/s and RMSE values around 2.5 m/s.
The consistency of these results under varying meteorological conditions demonstrates the robustness of Sentinel-1 for tropical cyclone wind analysis. CYGNSS offers the advantage of frequent observations but shows larger RMSE values and variability, with RMSE values up to 5 m/s. To consolidate the comparison analysis, further work is under development and other reference datasets will be considered, such as the reanalysis from NOAA's Hurricane Weather Research and Forecasting (HWRF) system for hurricane prediction. HWRF generally performs better than ERA5 in the presence of high cyclone wind speeds, conditions under which ERA5 instead seems to suffer from some underestimation. This further comparison could be useful for a more precise assessment of the CYGNSS overestimation with respect to Sentinel-1. Finally, the study underlines the relevance of temporal alignment and illustrates the role of spatial resolution in comparisons between sensors.
REFERENCES
Barone, M., Galdi, C., di Bisceglie, M., Zuffada, C. (2024), "A Cross-Comparison Study on CYGNSS and Sentinel-1 Wind Speed Products in Tropical Cyclones", Proc. of IGARSS 2024 - IEEE International Geoscience and Remote Sensing Symposium, pp. 842-845.
Saïd, F., Jelenak, Z., Park, J., and Chang, P. S. (2022), "The NOAA Track-Wise Wind Retrieval Algorithm and Product Assessment for CYGNSS", IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-24.
Ricciardulli, L., et al. (2021), "Assessment of CYGNSS Wind Speed Retrievals in Tropical Cyclones", Remote Sensing, 13(24), 5110.
Ruf, C., Al-Khaldi, M., Asharaf, S., Balasubramaniam, R., McKague, D., Pascual, D., Russel, A., Twigg, D., & Warnock, A. (2024), "Characterization of CYGNSS Ocean Surface Wind Speed Products", Remote Sensing, 16(22), 4341.
Gandoin, R., and Garza, J. (2024), "Underestimation of strong wind speeds offshore in ERA5: evidence, discussion and correction", Wind Energy Science, 9(8), pp. 1727-1745.
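The sensor-versus-reference comparison described in this abstract (bias, RMSE and correlation between collocated wind fields) can be sketched as follows. This is a minimal illustration: the function name and inputs are our own, not part of the authors' processing chain.

```python
import numpy as np

def compare_to_reference(sensor_wind, ref_wind):
    """Bias, RMSE and Pearson correlation between collocated
    wind speed fields (m/s), ignoring non-finite samples."""
    s = np.asarray(sensor_wind, dtype=float).ravel()
    r = np.asarray(ref_wind, dtype=float).ravel()
    ok = np.isfinite(s) & np.isfinite(r)
    diff = s[ok] - r[ok]
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    corr = np.corrcoef(s[ok], r[ok])[0, 1]
    return bias, rmse, corr
```

In practice the reanalysis field would first be interpolated onto each sensor's grid (e.g. nearest-neighbour in space and time) before calling such a routine.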
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: Wind and wave signatures in wind scatterometry

Authors: Xingou Xu, Ad Stoffelen
Affiliations: National Space Science Center, Chinese Academy of Sciences, Koninklijk Nederlands Meteorologisch Instituut (KNMI)
Scatterometers play a crucial role in ocean-surface wind-vector retrieval. By observing the wind-roughened ocean surfaces, they may also capture phenomena such as gustiness, multiple wave systems, wave-breaking effects, and moist convection. Over approximately 40 years of operational data provision, scatterometer wind products have been refined to achieve the accuracy required for various meteorological, oceanographic and climate applications. However, a thorough investigation of the interactions between wind and waves remains necessary, particularly for improving the understanding of scatterometers' capabilities in monitoring high-wind conditions. These extreme environments may involve stronger interactions with non-local conditions due to wave propagation. In this study, we first review the development of geophysical model functions (GMFs), used to map scatterometer observations to wind vectors. We then describe parameters that characterize the air-sea interface, including wind-related aspects of the observational scene. Using the residuals from scatterometer wind retrievals based on GMFs—specifically, the Maximum Likelihood Estimation (MLE) residual values—we analyze the unmodeled components of the normalized radar cross-sections (NRCS). These residuals are compared against sea state parameters such as wind gustiness, wave age, significant wave height, and dominant wave number. The latter two parameters are further combined to derive the Froude number and Charnock coefficient. To account for rain effects, which are well documented in scatterometer observations via quality control (QC) indicators including MLE, we also examine collocated rain rate data. The scatterometer data used in this study come from the Advanced Scatterometers (ASCAT) aboard the MetOp satellite series (operating at C-band) and OSCAT-2 aboard SCATSAT-1 (operating at Ku-band). 
Collocated sea state descriptions are obtained from the National Data Buoy Center (NDBC) of NOAA, while rain rate data are sourced from high-quality reprocessed Global Precipitation Measurement (GPM) mission products. A hurricane case study has been analyzed using high-spatial-resolution wind products from the RADARSAT Synthetic Aperture Radar (SAR). The results indicate that scatterometer observations are generally resilient to sea state influences under normal conditions and during high-wind events. We discuss the improvements needed for more accurate high-wind retrievals, as well as the future design requirements for microwave instruments. These include modifications to provide more detailed descriptions of wind and wave interactions, better spatial resolution, and complementary Doppler measurements. Finally, we emphasize that existing QC and wind processing methods are essential references to ensure the quality of wind vector derivations and to facilitate the retrieval of ocean surface wave and current characteristics.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: A Novel Ocean Calibration for the Ocean Surface Current Airborne Radar (OSCAR)

Authors: Wenming Lin, Dr. Marcos Portabella, Dr. Giuseppe Grieco, Mr. Xiaoheng Mou, Dr. David L. McCann, Dr. Adrien C. H. Martin
Affiliations: Nanjing University Of Information Science And Technology, Institut de Ciències del Mar (ICM–CSIC), Institute of Marine Science at Italian National Research Council, National Oceanography Centre, NOVELTIS, National Oceanography Centre
In recent years, more and more airborne radar observation experiments have been carried out to assess the feasibility of innovative radars for the remote sensing of sea surface wind, waves, and/or currents. For example, as a candidate for the Earth Explorer 11 (EE11) programme of the European Space Agency (ESA), a Ku-band squinted along-track interferometric synthetic aperture radar (SAR), SeaSTAR, has been proposed to measure sea surface currents, waves, and winds from space. A set of flight campaigns of the Ocean Surface Current Airborne Radar (OSCAR), namely the SeaSTAR experiment (SEASTARex), was carried out in May 2022 over the Iroise Sea, west of Brest, France, in order to verify the feasibility of the ESA SeaSTAR mission concept. As with satellite missions, precise calibration of the airborne radar backscatter measurements (σ0) is required in order to provide accurate sea state and, notably, wind vector retrievals. Conventionally, the ocean calibration, which is based on the mean difference between the measured σ0 and those simulated with a collocated wind reference (either scatterometer or numerical weather prediction output) and the corresponding geophysical model function, is used to estimate the radar calibration coefficients. However, the calibration of airborne σ0 observations is more challenging than that of spaceborne ones, due to the relatively small radar footprint and the limited amount of airborne observations. In particular, during the OSCAR experimental campaigns, the conventional ocean calibration shows up to a 4 dB difference between the calibration curves from different flight legs. One possible reason for such inconsistency is small-scale wind variability captured by OSCAR but not by the reference winds.
This paper proposes a novel ocean calibration method for airborne radars that accounts for leg-to-leg true wind variations, in order to find a single calibration curve for all the legs within the same flight day. An inter-comparison exercise of different sea surface wind sources then proves the potential of OSCAR to retrieve high-quality, high-resolution winds, provided that the radar backscatter measurements are properly calibrated. Specifically, a summary of the OSCAR calibration efforts, as well as the OSCAR wind retrieval, shows that the variance of the retrieved wind fields is reduced with increasing wind vector cell (WVC) grid size, indicating a reduction of both noise (Kp) and spatial resolution (i.e., reduced small-scale variance). Similarly, the agreement between OSCAR-derived winds and collocated ECMWF and ASCAT winds increases with increasing WVC grid size.
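The conventional ocean-calibration step the abstract starts from, estimating a gain offset as the mean dB difference between measured backscatter and GMF simulations driven by collocated reference winds, can be sketched as below. This is an illustrative simplification with synthetic numbers; the authors' leg-consistent method goes further by modelling leg-to-leg wind variability.

```python
import numpy as np

def ocean_calibration_offset(sigma0_meas_db, sigma0_gmf_db):
    """Mean dB offset between measured backscatter and GMF-simulated
    backscatter from collocated reference winds (one flight leg)."""
    diff = np.asarray(sigma0_meas_db, float) - np.asarray(sigma0_gmf_db, float)
    return float(np.nanmean(diff))

# Per-leg offsets on synthetic values; a large leg-to-leg spread (up to
# ~4 dB in the campaign data) is what motivates a joint calibration.
legs = {
    "leg1": ([-18.0, -17.5, -19.0], [-20.0, -19.5, -21.0]),
    "leg2": ([-16.0, -15.5], [-20.0, -19.5]),
}
offsets = {k: ocean_calibration_offset(m, s) for k, (m, s) in legs.items()}
```

Here `leg1` yields a 2 dB offset and `leg2` a 4 dB offset, mimicking the inter-leg inconsistency described above.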
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: HR-WIND: SAR Measurements Are Now Available in the CMEMS Wind TAC!

Authors: Marine De Carlo, Romain Husson, Anthony Cariou, Alexis Mouche, Jean-François Piollé, Aurélien Colin, Hugo Singhoff, Théo Cevaer
Affiliations: CLS Group, IFREMER, Univ. Brest, CNRS, IRD, Laboratoire d'Océanographie Physique et Spatiale (LOPS), IUEM, CELAD, OCEANSCOPE
The Copernicus Marine Service is a European-funded service that operationally delivers regular and systematic reference information on the past, present and near-future ocean state for the global ocean, including the European regional seas. The service gathers in-situ and space observations along with model simulations to describe, monitor and forecast the physical, biogeochemical and sea ice ocean state. The information provided by the Copernicus Marine Service is free of charge and follows the FAIR (Findable, Accessible, Interoperable and Reusable) guiding principles. To ensure the state-of-the-art quality of this information, the Copernicus Marine Service relies on Thematic Assembly Centers (TACs) that provide in-situ and space observations and on Monitoring and Forecasting Centers (MFCs) that run numerical models of the ocean assimilating data from the TACs. Amongst the TACs, the Wind TAC is in charge of providing L3 and L4 sea surface wind and wind stress observation products at global and regional scales. Since 2015, the Wind TAC has successfully provided data coming from scatterometer observations, both Near Real Time (NRT) and reprocessed (Multi-Year, MY). In December 2023, in order to prepare for the next phase of the Copernicus Marine Service, the transition project HR-WIND (High Resolution Wind) was launched to smooth the entry of high-resolution (ca. 1 km) data from Synthetic Aperture Radar (SAR) into the Wind TAC (effective in 2025). As the Copernicus Marine Service evolution is mainly user-driven, we take this opportunity to announce that L3 daily products of Sentinel-1A winds have been operationally processed and delivered in Near Real Time since June 2024. In November 2024, the Multi-Year database of L3 daily products, covering June 2021 to June 2024, was uploaded to the Copernicus Marine Service.
To obtain wind data from SAR observations, the ocean surface roughness measured by the satellite is transformed into wind speed and direction using an empirical inversion method called "dual-pol" (Mouche et al. 2017; Mouche et al. 2019), in which an a priori wind helps constrain the possible values. To this purpose, the HR-WIND Production Unit uses the L2 files produced by ESA's Mission Performance Center for Sentinel-1 (MPC-S1) for both NRT and MY processing. The only difference between the two processing chains (NRT and MY) then lies in the a priori wind used to constrain the computed wind speed: while the NRT chain uses model winds from the European Centre for Medium-Range Weather Forecasts (ECMWF) operational Integrated Forecast System (IFS), the MY chain uses model winds from the ECMWF ReAnalysis v5 (ERA5). To quantify the impact of the input a priori wind, the NRT time series has been extended three months into the past and the NRT and MY results have been compared. More generally, the SAR wind data from HR-WIND have been qualified through comparisons against Numerical Weather Prediction (NWP) models, in-situ measurements from buoys and collocated measurements from scatterometers. The results of these comparisons are shown here, along with more general figures about these new wind products. Additionally, the "dual-pol" method used for these products, which uses both the co- and cross-polarisation channels, was defined and validated for very high wind speeds (i.e. in Tropical Cyclone conditions). Here we present a first investigation of the relevance of using this new method for moderate wind speeds too, compared to the more classical "co-pol" inversion method that only uses the co-polarisation channel.
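The a-priori-constrained inversion described above can be illustrated with a toy Bayesian cost function: a radar-misfit term plus a term penalising departure from the a priori wind, minimised over candidate speeds. The power-law GMF, the weights and all names below are our own placeholders, not the operational dual-pol algorithm.

```python
import numpy as np

def retrieve_wind(sig0_obs, u_prior, gmf, dsig=0.005, du=2.0):
    """Pick the wind speed minimising a cost that balances the radar
    misfit against departure from the a priori wind."""
    u = np.arange(0.5, 40.0, 0.1)          # candidate speeds (m/s)
    cost = ((sig0_obs - gmf(u)) / dsig) ** 2 + ((u - u_prior) / du) ** 2
    return float(u[np.argmin(cost)])

def toy_gmf(u):
    # Illustrative monotone GMF (linear units), not a real C-band GMF.
    return 0.02 * u ** 0.8
```

With an observation consistent with 10 m/s and a 12 m/s prior, `retrieve_wind(toy_gmf(10.0), 12.0, toy_gmf)` returns a value close to 10, pulled only slightly toward the prior because the radar term dominates at this weighting.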
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone B)

Poster: WindRAD Scatterometer Quality Control Against Rain Contamination

Authors: Zhen Li, Anton Verhoef, Ad Stoffelen
Affiliations: Royal Netherlands Meteorological Institute
FY-3E (Fengyun-3E), launched on 5 July 2021, is one of the Chinese FY-3 meteorological satellite series. It carries the WindRAD (Wind Radar) scatterometer, the first dual-frequency (C- and Ku-band) rotating fan-beam scatterometer. The number of normalized radar cross sections (the backscattered power from the wind-roughened sea surface, called σ°s) increases significantly compared to pencil-beam and fixed fan-beam instruments, due to the dual-frequency rotating fan-beam design. In this paper, the QC (Quality Control) associated with rain contamination for Ku-involved wind retrievals (Ku-band-only and C&Ku) has been thoroughly investigated, focusing on the tropical region. Rain corrupts Ku-band scatterometer wind retrieval by attenuating the signatures of the backscatter measurements (σ°) from the sea surface. The measurements are sensitive to rain due to the short wavelength, and the rain-contaminated measurements in a WVC (Wind Vector Cell) deviate from the measurements simulated using the wind GMF (Geophysical Model Function). Therefore, QC is essential to guarantee the quality and consistency of the retrieved winds. Several QC indicators have been studied. The normalized Maximum Likelihood Estimator (MLE or Rn) is a QC indicator representing the distance between the measurements and the GMF; it works locally in one WVC. Joss is the speed component of the observation cost function, which is sensitive to spatial inconsistencies in the wind field. RnJ is a combined indicator that takes both local information (Rn) and spatial consistency (Joss) into account. The combined QC RnJ can filter out about 0.41% extra WVCs for Ku-band-only wind retrieval. With these additional rejections, the rejected wind statistics are worse than the results with Rn, while at the same time the accepted wind statistics are all slightly better than with Rn. Moreover, the number of accepted WVCs with a high rain rate (> 7 mm/hr) is reduced by half for RnJ, as compared to Rn.
In conclusion, the combined RnJ method can distinguish rain-contaminated WVCs better than Rn. The C-band measurements are hardly influenced by rain, so the Ku-based Rn is proposed for the C&Ku wind retrieval instead of the total Rn from both C and Ku bands. 1.11% of the C&Ku wind retrievals are rejected by the Ku-based Rn QC, about half the rejection rate of the Ku-band-only retrieval. At the same time, the statistics of the accepted and rejected winds do not differ much from those using Ku-band only. The Ku-based Rn and RnJ for C&Ku retrievals are also compared. About 0.42% extra WVCs are rejected by the Ku-based RnJ; with these additional rejections, the rejected wind statistics are worse than with Rn QC, while the accepted wind statistics are slightly improved. Therefore, the Ku-based RnJ can accurately exclude more rainy WVCs and works consistently with the Ku-band-only wind retrieval. Overall, RnJ is an optimal QC method to detect rain contamination for wind retrievals where Ku-band observations are used. The Ku-based Rn is the key to improving QC for C&Ku wind retrieval. Adding the C-band observations to the wind retrieval suppresses the rain effect, and the same QC skill (in terms of accepted wind statistics) can be achieved with fewer rejected WVCs.
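A combined QC indicator of the kind the abstract describes, rejecting a WVC when either the local residual Rn or a blend of Rn with the spatial-consistency term Joss is too large, can be sketched as follows. The thresholds, the blend weight and the function name are placeholders for illustration, not the operational WindRAD values.

```python
import numpy as np

def qc_flag(rn, joss, rn_thr=2.0, rnj_thr=2.0, w=0.5):
    """Illustrative combined QC: flag (reject) a WVC when the local
    residual Rn, or a weighted blend of Rn and the spatial-consistency
    term Joss (the 'RnJ' idea), exceeds its threshold."""
    rn = np.asarray(rn, dtype=float)
    joss = np.asarray(joss, dtype=float)
    rnj = w * rn + (1.0 - w) * joss
    return (rn > rn_thr) | (rnj > rnj_thr)
```

With such a blend, a WVC that looks acceptable locally (moderate Rn) can still be rejected when its wind is spatially inconsistent with its neighbours (large Joss), which is the behaviour attributed to RnJ above.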
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: D.02.08 - POSTER - Explainable AI for Earth Observation and Earth Science

The rapid expansion of AI in Earth system science and Earth observation (EO) is accelerating research and innovation. However, for AI-based solutions to be truly impactful in Earth Action initiatives, they must demonstrate explainability, physics-awareness, and trustworthiness to ensure they are fit for purpose.
This session will explore cutting-edge advancements in explainable AI (XAI) methods across diverse EO data types, including Synthetic Aperture Radar (SAR), optical, and hyperspectral data. Contributions are invited on integrating AI with physical models, interpretable deep learning, uncertainty quantification, causal inference, and other approaches to improve transparency, consistency, and robustness in AI-driven solutions.
We welcome case studies and research addressing a variety of Earth science missions and applications, such as SAR processing, Earth system process understanding, image classification, 3D reconstruction, and climate/environmental monitoring. The session will also cover strategies for tackling data gaps and physical inconsistencies, and for ensuring responsible, ethical AI use.
Attendees will gain valuable insights into the latest research on explainable AI for EO, with a focus on enhancing model interpretability and trustworthiness in applications that advance Earth observation and Earth system science, supporting actionable solutions for environmental and climate challenges.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: OpenSAR Insight

Authors: Ahmed Rohaan, Anya Marie Forestell, Abdulhameed Yunusa, Andrei-Lucian Alexe, Cristina Hiedra, Marcos García, María José González, Juan M Cuerda, David Mata-Moya, Maria-Cortes Benito-Ortiz, Maria-Pilar Jarabo-Amores, Nerea del-Rey-Maestre, Roberto Gil-Pita
Affiliations: Deimos Space UK LTD, Universidad de Alcalá, Instituto Nacional de Técnica Aeroespacial (INTA), Deimos Space SRL
Early detection from space is an area of critical importance, with great potential both to save lives and to decrease economic damage from natural and human-made disasters. For example, a 24-hour advance notice can lead to a 30% decrease in natural disaster-related deaths, according to a 2022 study by the International Labour Organization. Satellites with Synthetic Aperture Radar (SAR) payloads are uniquely suited to these use-cases, as radar operates around the clock and in all weather conditions, regardless of cloud cover, sunlight, or other external energy sources. However, current methods of utilizing SAR data are time-consuming, requiring downlinking and compute-intensive on-ground processing, leading to significant delays that can impact timely decision-making. Therefore, for critical applications which require low-latency decision-making, rapid detection using SAR data is essential. The OpenSARinsight* project aims to address these issues by enabling EdgeAI-supported onboard processing of raw SAR data, with the goal of reducing latency and improving response times. As a baseline, we propose a hybrid approach to generating intelligent insights from raw SAR data. Using traditional SAR processing techniques, we aim to generate information ingestible by machine learning models from L0 (raw) SAR data. We will then develop two types of models: one unrestrained (i.e., free from computational constraints), and one optimized for satellite onboard processing. The output of these models will be focused SAR L1B (Level 1-B) products which can be transmitted directly to ground, optimizing bandwidth utilization and decision-making speed. This process will be implemented for multiple use-cases.
In consultation with key stakeholders, we have chosen the following use-cases to focus on: dark vessel detection for compliance monitoring; flood detection and extent mapping; and radio frequency interference (RFI) detection and mitigation. As a first step, we are creating a repository of free-to-use legacy SAR datasets for training. These datasets will cover a wide range of scenarios and conditions, providing a solid foundation for training. We will then develop a neural network with an interchangeable network head to handle multiple use-cases. Our analysis shows that this approach could effectively process SAR data onboard for multiple use-cases, with potentially significant improvements in inference speed and overall response times. Upon completion of the project, the AI-ready datasets, code, models, metadata, and validation and testing parameters will be made available on a publicly accessible archiving platform. This transparency ensures that other researchers can replicate our results and build upon our work. *OpenSARinsight is led by Deimos (Spain), with critical support from the National Institute of Aerospace Technology (Spain), University of Alcalá (Spain), and Deimos (Romania). Key funding is provided by the European Space Agency.
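The interchangeable-head design described above can be sketched as a shared feature extractor with one swappable output head per use-case. The following NumPy snippet is a minimal illustration with random placeholder weights; none of the names, shapes, or classes come from the OpenSARinsight project itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(x, W):
    """Shared feature extractor: one linear layer with ReLU (illustrative)."""
    return np.maximum(x @ W, 0.0)

# One interchangeable head per use-case; weights here are random placeholders.
W_backbone = rng.standard_normal((16, 8))
heads = {
    "vessel_detection": rng.standard_normal((8, 2)),  # vessel / no vessel
    "flood_mapping": rng.standard_normal((8, 2)),     # flooded / dry
    "rfi_detection": rng.standard_normal((8, 2)),     # RFI / clean
}

def predict(x, task):
    """Run the shared backbone, then the head selected for the task."""
    features = backbone(x, W_backbone)
    logits = features @ heads[task]
    return logits.argmax(axis=-1)

patch = rng.standard_normal((4, 16))  # four toy SAR-derived feature vectors
for task in heads:
    print(task, predict(patch, task))
```

In a real onboard system the backbone would be a trained CNN over SAR-derived features, and only the small per-task head would change between use-cases.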
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Studying Livability at the Block-scale in Amsterdam Using an Interpretable and Lightweight Multimodal Model

Authors: Alex Levering, Professor Devis Tuia
Affiliations: Vrije Universiteit Amsterdam, École polytechnique fédérale de Lausanne
Introduction The living quality and comfort of residents in cities is a determinant of both mental and physical health. Factors such as safety, building quality, and social cohesion affect how pleasant a neighborhood is to live in, which in turn affects wellbeing. As such, scalable methods for studying livability and its determinants are needed in order to create inventories at a large scale. Previous research has demonstrated that a variety of image modalities can be used to generate such maps, for instance through aerial overhead images, Google Streetview, and Flickr images. However, methods are needed to better derive knowledge from such AI-based livability studies, which currently are constrained by a lack of model transparency. Moreover, methods have to be simplified in order to more broadly appeal to practitioners and urban planners. In this study we therefore introduce a lightweight, multimodal model for predicting and studying compounded socio-economic outcomes at the urban block level, using a combination of overhead aerial images and/or street-level photographs. Methodology Our model uses frozen, pre-computed image features to regress a set of socio-economic sub-scores (e.g. building quality or safety statistics). These socio-economic sub-scores are then used to regress a compounded score (e.g. livability) in a two-stage manner, using the intermediate sub-scores as a bottleneck. For our image features we compute image embeddings using a pre-trained, frozen SigLIP vision-language model. We use a Softmax attention mechanism to allow the model to filter relevant street-level photograph image embeddings within the observed block, while aerial image embeddings are used as-is. We first regress granular sub-scores from these image embeddings as an intermediate task. We then use a modified semantic bottleneck to map sub-scores to the compounded score. For each sub-score, we initialize W learnable weights (neurons). 
Then, using a Softmax attention mechanism, we assign importance to each of the weights, computed relative to each example. This forces the model to create W prototype values for each of the sub-scores, which are multiplied by the input sub-score value. The summary socio-economic score is then regressed as the sum of all sub-score prototype scores. Intuitively, the model is therefore tasked with estimating which prototype a particular example most likely belongs to, and to determine if each prototype contributes positively, neutrally, or negatively to the compounded score. When multiple modalities are used at once, we first calculate sub-scores for each modality using modality-specific layers, before averaging the sub-scores computed across different modalities. We concatenate features from both modalities as input for the attention mechanism of the semantic bottleneck. Our model design therefore extends semantic bottlenecks to handle multimodal inputs in order to provide interpretable urban visual analytics. Our model enables new interpretability possibilities, such as studying which street-level photographs the model prioritizes for each sub-score, and mapping which prototypes are selected for each of the sub-scores to study how they contribute to the compounded score of the block. Case study We test our model on the task of predicting livability using five intermediate sub-scores: building quality, amenities, physical environment, safety, and social cohesion. All of these are sourced from the Leefbaarometer 3.0 (LBM) dataset, made available by the Dutch government. This dataset consists of 100x100m grid cells, which incorporate neighbourhood effects at various scales depending on each sub-score. We utilize the labels for 2020. We combine our LBM reference labels with cloud-free 1-meter resolution aerial images of 300 by 300 pixels taken in 2020. 
Images are centered on each grid cell center such that they effectively cover the 8-pixel neighbourhood of each grid cell. By allowing for visual overlap between grid cells we take into account some of the neighbourhood effects accounted for in the LBM reference dataset. We use only the RGB bands, as the NIR band is not compatible with the pre-trained feature extractor. We also sample geotagged Flickr street-level photographs within this 8-pixel neighbourhood of each cell. Our dataset consists of 6,433 grid cells. 100% of these grid cells have aerial images and 78% have Flickr images, with a median of 32 Flickr images per grid cell. Experimental design The dataset is randomly split into training, validation, and test sets by a procedure that ensures that no visual overlap or adjacency exists between training and test grid cells. Firstly, we randomly sample 30 grid cells along with their 8-pixel neighbourhood, which are assigned to the test set. We then assign grid cells in a double 8-pixel neighbourhood around all test cells to the validation set. All remaining cells are assigned to the training set. In total, the dataset consists of 5,016 training, 879 validation, and 538 test grid cells. We set W, the number of weights for each sub-score in the bottleneck, to 3. We create models for separate modalities, as well as a feature-merging design. We optimize the model over Huber losses for all six scores. We use a weighting term of 10 for the five intermediate sub-scores to ensure that the model prioritizes them. The best model is selected by the average Huber loss across the five sub-scores. To determine the goodness-of-fit on the test set we calculate Kendall’s Tau, a rank correlation coefficient ranging from 1 (both sets ranked in identical order) to -1 (both sets ranked in exactly opposite order).
Results The model is able to achieve good results on all scores (tau scores of 0.45-0.7 across scores and modalities), with the exception of physical environment (Kendall’s Tau of 0.2-0.25). This is logical, as this score includes more distant variables than are covered by the 8-neighbourhood around each grid cell. We observe that merging modalities consistently leads to better performance, between 10 and 15% for all scores, which hints at the complementary nature of social media photos and aerial images. Our results extend the conclusions of previous studies to a new set of sub-scores, and they confirm that the assumption that multimodality benefits prediction holds for socio-economic outcomes that can only be predicted through proxy visual signals (i.e. safety, social cohesion). Conclusions and future work This preliminary study has confirmed that it is possible to use a lightweight model with frozen pre-trained features to predict livability in a single city for a variety of socio-economic outcomes. The model demonstrates good performance and will be used to study patterns relating to livability. In future work we would like to include further Dutch cities to further test generalization, and to include a wider variety of modalities to study their usefulness, such as Mapillary images and textual information. Most prominently, we would like to study the patterns learned by the model, such as image selections made by the attention mechanism for sub-scores, and learned patterns for neuron selection.
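The two-stage semantic bottleneck described in the methodology (W prototype values per sub-score, soft-selected by a per-example Softmax attention, summed into a compounded score) can be sketched as follows. This is an illustrative NumPy reconstruction under those stated assumptions, with random weights standing in for learned parameters; it is not the authors' implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_subscores, W = 5, 3  # five sub-scores, three prototypes each, as in the abstract

# "Learnable" parameters (random here): per-sub-score prototype values, and
# attention parameters that score how well an example matches each prototype.
prototypes = rng.standard_normal((n_subscores, W))
attn_w = rng.standard_normal((n_subscores, W))
attn_b = rng.standard_normal((n_subscores, W))

def bottleneck(subscores):
    """Map per-example sub-scores (n, n_subscores) to a compounded score (n,)."""
    # Per-example, per-sub-score Softmax attention over the W prototypes.
    logits = subscores[:, :, None] * attn_w + attn_b       # (n, n_subscores, W)
    alpha = softmax(logits, axis=-1)
    # Soft-selected prototype value, multiplied by the input sub-score value.
    contributions = (alpha * prototypes).sum(-1) * subscores  # (n, n_subscores)
    return contributions.sum(-1), contributions

scores = rng.uniform(0, 1, size=(4, n_subscores))  # toy sub-scores for 4 blocks
compounded, contributions = bottleneck(scores)
print(compounded.shape, contributions.shape)
```

Because the compounded score is an explicit sum of per-sub-score contributions, the `contributions` array can be inspected directly to see whether each sub-score pushed a block's predicted livability up or down.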
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Interpretable Prototype-based Deep Learning for Extreme Event Analysis

Authors: Adrian Höhl, Miguel-Ángel Fernández-Torres, Xiao Xiang Zhu
Affiliations: Technical University of Munich, Universidad Carlos III de Madrid, Munich Center for Machine Learning
As global warming continues to advance, extreme climatic conditions are expected to become more intense. Extreme events such as floods, droughts, and heatwaves threaten public safety, health, infrastructure, and food security. Earth observation is frequently used to guide policies and high-stakes decisions surrounding these extremes. As a result, the processes and models must be trustworthy [1]. The majority of remote sensing methodologies currently proposed for extreme event modeling rely on deep learning models. Despite the great capabilities of these models for representation learning, they often operate as black boxes, which are difficult for humans to interpret. To overcome this drawback, explainable AI strategies aim to provide them with explanations of the model's reasoning. In the field of Earth observation, post-hoc methods are commonly employed. However, these methods often overlook the properties of Earth observation and remote sensing data [2] and, therefore, can be unreliable for time series data [3]. Alternatively, prototypical neural networks have been recently proposed for computer vision tasks. These models offer inherent interpretability, as their decisions depend on prototypes that are easier to understand. Despite the promising foundation of prototype-based models in explainable AI for geosciences [4], they typically do not consider spatiotemporal dependencies, which are crucial for better comprehension of individual and compound extreme events. In this study, we examine various prototype-based neural networks to detect and analyze extreme events. Our novel approach extends existing spatial and temporal prototype-based models and aims to identify more meaningful spatiotemporal prototypes. Additionally, we explore visualization techniques to illustrate the learned patterns, intending to enhance the understanding of the models' outcomes.
To this end, our research relies on the ESA DeepExtremeCubes dataset [5], which focuses on worldwide natural vegetation responses to compound heatwave and drought events, and provides minicubes containing both Sentinel-2 and ERA5 time series between 2016 and 2022. [1] G. Camps-Valls, M.Á. Fernández-Torres, K.H. Cohrs, A. Höhl, A. Castelletti, A. Pacal, ... & T. Williams (2024). AI for Extreme Event Modeling and Understanding: Methodologies and Challenges. arXiv preprint arXiv:2406.20080. [2] A. Höhl, I. Obadic, M.A. Fernandez-Torres, H. Najjar, D.A.B. Oliveira, Z. Akata, A. Dengel, and X.X. Zhu. Opening the Black Box: A systematic review on explainable artificial intelligence in remote sensing. IEEE Geoscience and Remote Sensing Magazine (2024), doi: 10.1109/MGRS.2024.3467001. [3] A.A. Ismail, M. Gunady, H.C. Bravo, and S. Feizi. Benchmarking deep learning interpretability in time series predictions. Advances in Neural Information Processing Systems 33 (2020): 6441-6452. [4] A. Narayanan and K. Bergen (2024). Prototype-based Methods in Explainable AI and Emerging Opportunities in the Geosciences. International Conference on Machine Learning (ICML) 2024 AI for Science Workshop. [5] C. Ji, T. Fincke, V. Benson, G. Camps-Valls, M.A. Fernandez-Torres, F. Gans, G. Kraemer, F. Martinuzzi, D. Montero, K. Mora, O.J. Pellicer-Valero, C. Robin, M. Soechting, M. Weynants, and M.D. Mahecha. DeepExtremeCubes: Integrating Earth system spatio-temporal data for impact assessment of climate extremes. arXiv preprint arXiv:2406.18179 (2024), doi: 10.48550/arXiv.2406.18179.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Speckle Filtering of Sentinel-1 Dual-Polarimetric SAR Images with Deep Learning

Authors: Alejandro Mestre-Quereda, Juan M. Lopez-Sanchez
Affiliations: University Of Alicante
Synthetic Aperture Radar (SAR) images are intrinsically characterized by the presence of speckle, which gives the images a noisy appearance with strong intensity fluctuations between adjacent pixels, even in completely homogeneous areas. Speckle removal in either single- or multi-polarized SAR images is a long-standing topic, and a wide variety of methods have been developed. The simplest way to mitigate speckle is to average a given number of uncorrelated samples (denoted as looks) using a rectangular or square window. This method, denoted multilooking, mitigates speckle at the expense of a loss in spatial resolution, caused by the averaging of non-homogeneous pixels (i.e., pixels that belong to different types of land cover). More sophisticated methods that help preserve the original spatial resolution have been developed, among which are those that are spatially based or transform based. In the former group, notable methods include the well-known Non-Local SAR (NL-SAR), Block-Matching 3D (BM3D), and multi-channel logarithm with Gaussian denoising (MuLoG) algorithms, which represent the state of the art in spatially based speckle filtering. In the latter group, total-variation (TV) or wavelet-based filters are examples of such transform-based approaches. Recently, the application of deep learning and artificial intelligence techniques to SAR data has grown significantly. These techniques represent a paradigm change: instead of relying on pre-defined signal processing algorithms or theoretical models, they depend solely on the data. In this context, speckle filtering using deep learning has proven not only feasible but also capable of delivering results comparable to the best state-of-the-art SAR speckle filters developed with conventional methods.
Indeed, several studies have demonstrated the successful training of Convolutional Neural Networks (CNNs) for speckle suppression in single-polarized SAR images (i.e., one polarization). Note that, in the single-pol case, CNNs are trained using real-valued intensity or amplitude images. However, their extension to the polarimetric case has not yet been sufficiently explored and is not straightforward. This is mainly due to the nature of polarimetric information, usually represented by the complex covariance matrix of every image pixel, which includes not only real-valued intensities but also complex correlations and phase differences between polarizations. The complex covariance matrix is Hermitian and positive semi-definite, and these mathematical properties must be preserved regardless of the speckle filter employed, which can only be achieved if all elements of the covariance matrix are filtered in exactly the same way. A complete end-to-end polarimetric speckle filtering framework for Sentinel-1 images using convolutional neural networks is proposed in this work. Since speckle-free SAR images cannot be acquired, only a temporal average can serve as a speckle-free reference. Consequently, the method leverages large multitemporal stacks of co-registered images to create a large training dataset that is fed to the neural network in a supervised way. The proposed method introduces some key aspects that ensure that the network learns only the underlying speckle component present in the images. Firstly, the method includes a reversible transformation of the complex covariance matrix into positive real-valued intensities, which can be readily used as input to the neural network. Then, after filtering (i.e., after applying the trained model), the complex entries of the covariance matrix are recovered using the inverse transformation.
It is important to note that this transformation only changes how the polarimetric information is represented; the content of the original complex covariance is not altered. The transformation is necessary because it enables the network to process intensities only (i.e., the same type of information, in the form of real-valued, positive data). The direct use of the complex covariance is not suitable because it contains different types of information (intensities in the diagonal elements and complex correlations in the off-diagonal ones), which are not adequate for training neural networks. Secondly, an important aspect of the proposed method is that it accounts for temporal changes. Indeed, averaging a multitemporal stack of images generates a new image with reduced speckle and preserved spatial resolution. However, this is only valid in stationary areas, i.e., in areas that did not change during the time span covered by the acquisition of the images. Changed areas cannot be used as a reference for training neural networks since, besides speckle, they contain important intensity fluctuations caused by the changes. Therefore, we used the well-known Omnibus Test to detect temporal changes in the different multitemporal stacks so that changed areas are not used as input to the neural network. The performance of the proposed deep learning approach is quantitatively compared with the multilook filter and with two conventional (i.e., not AI-based) state-of-the-art algorithms, NL-SAR and MuLoG with BM3D. Results show that the proposed filter offers remarkable results in terms of both speckle reduction and resolution preservation in comparison with the aforementioned methods. More importantly, it does not produce artifacts or biased pixel values, which is the main weakness of AI-based approaches.
The code of the proposed algorithm is available in the GitHub repository https://github.com/alejandromestrequereda/CNN_PolSAR_Despeckling/, and the evaluation results in the arXiv preprint https://arxiv.org/pdf/2408.15678.
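As an illustration of the kind of reversible covariance-to-intensity transformation described above, the following NumPy sketch maps a dual-pol 2x2 Hermitian covariance matrix to four non-negative real intensities and back. This is one plausible construction (the real and imaginary parts of the off-diagonal term are encoded as intensities of channel combinations); it is not necessarily the exact mapping used by the authors:

```python
import numpy as np

def cov_to_intensities(C):
    """Map a 2x2 Hermitian PSD covariance [[a, c+jd], [c-jd, b]] to four
    real, non-negative intensities. I3 and I4 are (half) the intensities of
    the channel combinations s1+s2 and s1+j*s2, which encode Re/Im of C12."""
    a, b = C[0, 0].real, C[1, 1].real
    c, d = C[0, 1].real, C[0, 1].imag
    return np.array([a, b, 0.5 * (a + b) + c, 0.5 * (a + b) + d])

def intensities_to_cov(I):
    """Inverse mapping: recover the complex covariance from the intensities."""
    a, b = I[0], I[1]
    c = I[2] - 0.5 * (a + b)
    d = I[3] - 0.5 * (a + b)
    return np.array([[a, c + 1j * d], [c - 1j * d, b]])

# Round trip on a valid dual-pol covariance matrix.
s = np.array([1.0 + 0.5j, 0.3 - 0.2j])        # toy scattering vector
C = np.outer(s, s.conj())
I = cov_to_intensities(C)
print(I)
print(intensities_to_cov(I))
```

Positive semi-definiteness of the covariance guarantees that all four intensities are non-negative, so they can be fed to a network trained on intensity-like data, and applying the same filter to each intensity keeps the recovered matrix self-consistent.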
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: AI4Drought: Seasonal Prediction of Droughts From Large and Local Scale Drivers

Authors: Dr Jesús Peña-Izquierdo, David Civantos, Lluís Palma, Dr Markus Donat, Dr Gonzalo Vilella, Dr Mihnea Tufis, Dr Arjit Nand, Dr Maria José Escorihuela, Laia Romero
Affiliations: Lobelia Earth, Barcelona Supercomputing Center, Eurecat
Current dynamical seasonal prediction systems present limited skill in the extratropics, mainly due to weak seasonal predictability, but also because of the limited representation in physical models of key processes such as large-scale teleconnections, land-atmosphere feedbacks, and realistic initial conditions. The AI4DROUGHT project, funded by ESA, enhances our understanding and predictive capabilities of droughts in Europe at seasonal timescales through the combination of climate models, Earth Observation (EO) and artificial intelligence methods. A hybrid approach is proposed, relying on existing state-of-the-art dynamical predictions plus statistical relations extracted from two main sources of predictability (large-scale and local-scale). The large-scale modes of variability, which contribute to the onset or termination of droughts, are empirically learned from thousands of years of climate simulations and later fine-tuned with observational time series (ERA5). A conditional Variational AutoEncoder (cVAE) is trained using CMIP6 data, which enables it to perform temperature and precipitation forecasts at seasonal scale (atmospheric forcing). The results from those predictions show skill improvement over SEAS5 from ECMWF in Europe, yet predictability at seasonal timescales remains very limited compared to climatology. Land-atmosphere feedbacks, critical for the amplification of the former large-scale signal, are mostly driven by local conditions represented by satellite-based variables such as soil moisture and vegetation indices. Using a pixel-based, tree-based model, soil moisture conditions can be forecasted at a seasonal timescale. By using initial conditions and the atmospheric forcing predicted by the large-scale model as inputs, we infer the relevance of initial conditions in drought prediction and the importance of accurately quantifying drought conditions using satellite observations.
For each season, the model outputs performance maps, which can be used to deduce those regions where initial conditions are more important and where soil moisture has more memory.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Generating Atmospheric Dynamics From Sentinel-1 SAR Data Using Score-Based Models

Authors: Clément Lacrouts, Quentin Febvre, Alexis Mouche, Pierre Tandeo, Bertrand Chapron
Affiliations: Ifremer, IMT Atlantique
High-resolution Synthetic Aperture Radar (SAR) data from Sentinel-1 provide unique capabilities to quantify the stability state of the marine atmospheric boundary layer. Indeed, atmospheric processes that modulate surface winds are often very well delineated, with characteristic signatures on the very high resolution SAR-detected ocean surface roughness. Often immediately spotted by expert eyes, varied textures can then be readily identified and classified (e.g. rolls, convective cells). These SAR observations can thus uniquely monitor complex local interactions between the atmosphere and the ocean, largely depending on air-sea temperature differences and surface wind speed. More precisely detecting and identifying these marine-atmosphere conditions may then help improve weather forecasting and the characterization of lower-atmosphere stability conditions within storms. To decipher and more quantitatively interpret these high-resolution ocean surface imprints, traditional statistical models, which primarily analyze the spectral distribution corresponding to second-order statistical moments, may fail to fully characterize the multi-scale interactions inherent to lower-atmosphere turbulence. To address this limitation, we consider the use of generative methods based on machine learning, which have the potential to more precisely approximate high-dimensional distributions from samples while preserving their higher-order statistical properties. In particular, our proposed method is able to retrieve the distribution of the phase information captured by SAR ocean scenes. Retrieving the phase information is fundamental to ensure the preservation of essential statistical features. In this study, we propose an adapted score-based model to generate simulated SAR scenes under specific atmospheric turbulence conditions, including parameters such as wind speed, wind direction, and the Monin-Obukhov length air-sea stability parameter.
We explore the latent space of these generative models, analyze the statistical properties of the generated scenes, and demonstrate the potential for creating coherent video sequences through interpolation between SAR images, or through the evolution of physical and latent parameters. This approach offers new perspectives for studying multi-scale interactions and visualizing dynamic processes in the atmospheric boundary layer through SAR imagery.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: From Data to Insight: Explainable AI and Meteorological Inputs for Eddy-covariance Station Flux Predictions in Vineyards

Authors: David Garcia Rodriguez, Pablo Catret Ruber, Dr. Domingo Jose Iglesias Fuente, Dr Juan Jose Martínez Dura, Dr Ernesto Lopez Baeza, Antonio Garcia Celda
Affiliations: Universitat de València, Valencian Institute of Agricultural Research (IVIA)
Advanced monitoring tools are increasingly critical to maximize the ability of living systems to mitigate emissions and adapt to changing environmental conditions, especially in the context of climate change. This study addresses the prediction of fundamental fluxes in agricultural crops, specifically vineyards, through a novel approach combining machine learning (ML) models and explainable artificial intelligence (XAI) techniques. Nine predictive models were employed, ranging from traditional algorithms such as Linear Regression (LR) and K-Nearest Neighbors (KNN) to more advanced methods like Gaussian Process (GP), Random Forest (RF), XGBoost (XGB), and Multi-Layer Perceptron (MLP). The primary goal was to predict sensible heat flux (H), latent heat flux (LE), and carbon dioxide flux (CO2) using conventional meteorological parameters as input features. The performance of these models was evaluated using metrics such as Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the Coefficient of Determination (R²). Results showed that the GP model achieved an outstanding R² of 0.99 for H, while the SVR model demonstrated superior performance for predicting LE and CO2, with R² values of 0.96 for both variables. These findings highlight the robustness of these models in dealing with complex environmental conditions in semi-arid vineyards. A key innovation in this study was the integration of XAI through Shapley Additive Explanations (SHAP), which allowed the identification and ranking of the most influential variables in the predictions. SHAP values revealed that radiative components, such as incoming solar radiation (RS↓) and reflected solar radiation (RS↑), along with atmospheric temperature (AT), were the most critical variables for predicting H and LE. 
For CO2, in addition to these radiative variables, cumulative atmospheric temperature over 36 hours (AT_36h) played a significant role, indicating a strong connection between environmental conditions and the stomatal dynamics of plants. This XAI approach provided unprecedented transparency, demonstrating how and why specific variables impact model outcomes. It also improved the practical applicability and replicability of the results across other agricultural systems. The study underscores the potential of ML techniques to overcome traditional barriers associated with data analysis from eddy covariance stations, which typically require expensive infrastructures and complex mathematical methodologies. By incorporating explainability techniques, the study not only optimized prediction accuracy but also facilitated a deeper understanding of underlying processes in agricultural ecosystems. In summary, this research demonstrates that by leveraging advanced ML and XAI techniques, it is possible to effectively estimate key fluxes such as H, LE, and CO2 from conventional meteorological data. This approach reduces dependence on specialized infrastructure while creating new opportunities for optimizing agricultural management in response to extreme climate scenarios. The integration of SHAP values as part of an XAI strategy enhances the reliability and transparency of these models, establishing them as practical predictive tools. This is particularly significant in enabling sustainable agricultural practices and improving system resilience in the face of climate change.
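The study uses the SHAP library; the idea behind the feature rankings it produces can be illustrated with an exact, brute-force Shapley computation on a toy additive flux model. Everything below (the model, its coefficients, and the feature order RS↓/AT/wind) is hypothetical and purely for illustration:

```python
import numpy as np
from itertools import combinations
from math import factorial

def exact_shapley(predict, x, baseline):
    """Exact Shapley values for one example: features outside the coalition S
    are set to their baseline value, and each feature's marginal contribution
    is averaged over all coalitions with the standard Shapley weights."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i, without = baseline.copy(), baseline.copy()
                for j in S:
                    with_i[j] = x[j]
                    without[j] = x[j]
                with_i[i] = x[i]
                phi[i] += weight * (predict(with_i) - predict(without))
    return phi

# Toy flux model: H depends strongly on radiation, weakly on wind.
def model(v):  # v = [RS_down, AT, wind] (illustrative ordering)
    return 3.0 * v[0] + 1.0 * v[1] + 0.1 * v[2]

x = np.array([0.8, 0.5, 0.3])
base = np.zeros(3)
phi = exact_shapley(model, x, base)
print(phi)
```

For a linear model with a zero baseline, each feature's Shapley value reduces to its coefficient times its value, and the values sum to the difference between the prediction and the baseline prediction (the efficiency property SHAP rankings rely on).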
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Predicting Sea Surface Height in Coastal Regions Using Hybrid Neural Networks: A Case Study in the Aegean Sea

Authors: Nikos Flokos, Ms Maria Tsakiri
Affiliations: National Technical University Of Athens
Satellite altimetry has revolutionized our understanding of sea surface height (SSH) variations, providing valuable data for climate studies, oceanography, and coastal management. However, accurately predicting SSH in coastal areas remains a significant challenge due to complex environmental conditions, reduced data quality near the coasts, and limitations in traditional processing methods. This study explores the feasibility of predicting Sea Surface Height Anomalies (SSHAs) using data from Sentinel-3, in the coastal region of the Aegean Sea, a region distinguished by its dense network of islands and complex coastline, while also establishing a framework to incorporate data from other satellite altimetry missions in the future. To achieve this, we integrate two types of satellite altimetry datasets: Level 1b (L1b) and Level 2 (L2). The L1b data represent the raw waveform measurements, offering rich spatial and temporal information, while the L2 data contain processed geophysical parameters such as sea surface heights (SSH), significant wave height (SWH), backscatter coefficient (sigma0) and distance to the coast. The combination of L1b and L2 data allows us to leverage the strengths of both datasets, using the fine-grained waveform information to enhance the predictive capacity and the geophysical parameters to provide essential context for SSH prediction. By aligning and preprocessing these datasets, we aim to build a robust model capable of addressing the unique challenges of coastal altimetry. The neural network architecture employed in this study is a hybrid model designed to process the distinct characteristics of the L1b and L2 datasets. A convolutional neural network (CNN) branch is used to extract features from the L1b waveform data, capturing spatial patterns and variability. 
Simultaneously, a dense neural network branch processes the tabular geophysical parameters from the L2 data, identifying relationships between variables such as SWH, sigma0, and distance to the coast. The outputs from these branches are concatenated and passed through fully connected layers to produce the final SSH prediction. This hybrid architecture was chosen for its ability to handle heterogeneous input types, combining the feature extraction capabilities of CNNs with the interpretability and flexibility of dense networks. Our work seeks to address key challenges in SSH prediction, including noise in altimetry data near the coast, variability in environmental conditions, and the need for models capable of generalizing across different coastal regions. The preprocessing pipeline incorporates several critical steps, such as filtering data for specific regions of interest, removing outliers, and normalizing features to ensure compatibility across datasets. By focusing on the Aegean Sea, this study emphasizes the potential for satellite altimetry to improve our understanding of coastal processes in a highly complex maritime environment and contribute to the broader goal of enhancing the utility of satellite altimetry for coastal oceanography and management.
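The hybrid two-branch architecture described above (a convolutional branch over L1b waveforms, a dense branch over L2 tabular parameters, concatenated into a regression head) can be sketched in NumPy as a single forward pass. Weights are random placeholders and all names are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

def conv_branch(waveform, kernels):
    """1-D convolution over the L1b waveform, then global average pooling."""
    return np.array([relu(np.convolve(waveform, k, mode="valid")).mean()
                     for k in kernels])

def dense_branch(tabular, W, b):
    """Dense layer over L2 geophysical parameters (SWH, sigma0, coast distance)."""
    return relu(tabular @ W + b)

# Random placeholder weights; a real model would learn these from data.
kernels = [rng.standard_normal(5) for _ in range(4)]
W_tab, b_tab = rng.standard_normal((3, 4)), rng.standard_normal(4)
W_head, b_head = rng.standard_normal(8), rng.standard_normal()

def predict_ssha(waveform, tabular):
    """Concatenate both branches and regress a single SSHA value."""
    merged = np.concatenate([conv_branch(waveform, kernels),
                             dense_branch(tabular, W_tab, b_tab)])
    return merged @ W_head + b_head

ssha = predict_ssha(rng.standard_normal(128), np.array([2.1, 11.0, 5.4]))
print(float(ssha))
```

A trained version would replace the random weights with learned ones and stack further layers, but the branch-concatenate-regress structure is the same.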
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Enhancing Island Wake Parameterization: Segmentation of Synthetic Aperture Radar Imagery with Explainable AI Insights

Authors: Madeleine Dawson, Dr. Hans Graber
Affiliations: University Of Miami, The Center for Southeastern Tropical Advanced Remote Sensing
With the rapid growth of Earth Observation (EO) platforms and the surge in data availability, Synthetic Aperture Radar (SAR) has become a premier remote sensing system for advancing the understanding of ocean and atmospheric phenomena across diverse spatial and temporal scales. Spaceborne SAR systems, capable of imaging regardless of sunlight illumination and through cloud coverage, capture high-resolution imagery of ocean and atmosphere interactions. This data enables the extraction of critical insights into the highly dynamic mesoscale and sub-mesoscale processes that regulate and influence our climate. Unlocking the full potential of the vast amount of SAR data requires the utilization of techniques in Artificial Intelligence (AI), specifically Deep Learning (DL) for efficient information retrieval, and segmentation of such phenomena. Convolutional neural networks (CNNs) have stood out as a key DL model in ocean remote sensing due to their ability to learn complex spatial patterns from extensive datasets. However, these models often operate as “black boxes”, offering limited interpretability on the model’s decision making process. The integration of Explainable AI (XAI) techniques is essential to bridge this gap and enhance trust in model outputs, particularly for environmental applications where fine-scale accuracy is important. This study utilizes C-band Sentinel-1 SAR imagery from the European Space Agency’s Copernicus Open Access Hub to investigate wind-induced island wake segmentation, a critical step toward advancing feature parameterization of these phenomena. These wakes play a pivotal role in the turbulent energy cascade, driving the transfer of energy from larger to smaller scales. Despite their significance in influencing sub-mesoscale and mesoscale dynamics, these processes remain poorly constrained in current climate and weather models. 
Subsequent evaluations will be performed on SAR imagery acquired by the University of Miami’s Center for Southeastern Tropical Advanced Remote Sensing (CSTARS). As an initial step, feature maps from the U-Net segmentation model, a well-established CNN in the ocean remote sensing domain, were visualized to identify spatial patterns learned during training. Particular focus was placed on the bottleneck layer, as it captures critical features necessary for generating the final segmentation map. This analysis established a baseline understanding of how island wake features influence model training and informed specific modifications to the U-Net architecture to better accommodate SAR image characteristics. Building on this foundation, Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to pinpoint regions of the SAR imagery that most influenced the model’s segmentation predictions, providing deeper insights into the spatial features prioritized by the network. By leveraging Grad-CAM, this study explores the synergy between advanced DL models and XAI techniques, enabling improved extraction and interpretation of coupled ocean-atmosphere phenomena while fostering transparency in SAR-focused DL analyses.
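At its core, the Grad-CAM step described above reduces to weighting a layer's feature maps by the spatial average of their gradients. A minimal NumPy sketch of that computation, using synthetic activations and gradients as stand-ins for values extracted from a trained network (the real values would come from framework hooks not shown here):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from one conv layer's activations and gradients.

    feature_maps: (C, H, W) activations of the chosen layer.
    gradients:    (C, H, W) gradients of the target score w.r.t. them.
    """
    # Channel weights: global-average-pool the gradients (Grad-CAM's alpha_k).
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of feature maps, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalise to [0, 1] for overlaying on the scene (guard all-zero maps).
    return cam / cam.max() if cam.max() > 0 else cam

# Synthetic 8-channel, 16x16 "bottleneck" activations and gradients.
rng = np.random.default_rng(0)
acts = rng.random((8, 16, 16))
grads = rng.standard_normal((8, 16, 16))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (16, 16)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the SAR tile to show which regions drove the segmentation prediction.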
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Integrating physical modelling and machine learning within a Bayesian framework: a novel algorithm for coastal marine remote sensing

Authors: Pirta Palola, Dr Varunan Theenathayalan, Dr Cornelius Schröder, Dr Victor Martinez-Vicente, Dr Antoine Collin, Ms Rosalie Wright, Dr Melissa Ward, Dr Eleanor Thomson, Dr Patricia Lopez-Garcia, Dr Eric Hochberg, Dr Yadvinder Malhi, Dr Lisa Wedding
Affiliations: University Of Oxford, Plymouth Marine Laboratory, Machine Learning in Science, University of Tübingen and Tübingen AI Center, Coastal Geoecology Lab, École Pratique des Hautes Études - Paris Sciences et Lettres, Windward Sciences, National Oceanography Centre, Bermuda Institute of Ocean Sciences, Arizona State University
Optical remote sensing is a powerful tool for monitoring environmental change in shallow coastal marine ecosystems. However, interpretation of the observed optical signal (remote-sensing reflectance) is an ill-posed inverse problem, as multiple different combinations of water constituents, depth, and benthic reflectance may produce a similar optical signal. Reliably mapping water constituents in optically shallow waters remains a major challenge in marine remote sensing. Here we apply a new approach, simulation-based inference (SBI), to address the inverse problem. The SBI algorithm combines physics-based analytical modelling with approximate Bayesian inference and machine learning. The input to the algorithm is remote-sensing reflectance, and the output is the likely range (posterior probability density) of phytoplankton and suspended mineral concentrations, coloured dissolved organic matter absorption, wind speed, and depth. We compare algorithms trained with hyperspectral or multispectral reflectance spectra characterised by different signal-to-noise ratios. We apply the algorithm to in-situ radiometric data and multispectral drone imagery collected in shallow coral reef waters on the Tetiaroa atoll in the South Pacific. We show that water constituent concentrations can be estimated from hyperspectral and multispectral remote-sensing reflectance in optically shallow environments, assuming a single benthic cover. In contrast to traditional semi-analytical inversion methods, which provide a single point estimate without indicating whether the solution is a global or a local optimum, SBI captures the range of plausible solutions. Plotting the posterior distributions provides useful insights into the likely parameter space corresponding to the observed remote-sensing reflectance; for example, the widths of the posterior distributions can be quantified. In our field application, the SBI algorithm was consistently more confident about phytoplankton concentration than about the other inferred water constituents. Additionally, analysis of correlation structures in pairwise marginalised posterior distributions can reveal novel insights into the parameter space that solves the inverse problem. Through the integration of physical modelling and machine learning within a Bayesian framework, SBI provides a novel approach to marine remote sensing that is physics-aware, interpretable, and explicitly handles uncertainty. An important advantage of SBI is that the algorithm is amortised: once trained, it can be applied to new observations without retraining. This makes SBI a promising approach for computationally efficient analysis of large amounts of satellite and drone data. Future developments should consider spectral mixing of multiple benthic cover types. Mapping and monitoring the spatial-temporal dynamics of water constituent concentrations using remote sensing would improve our understanding of key ecological and biogeochemical processes taking place in shallow coastal waters. Ultimately, a better understanding of the variability in water constituent concentrations could help identify priority sites for management and restoration action in coastal marine ecosystems. This study advances the mapping of shallow coastal waters and applies an innovative physics-informed machine learning approach to address the inverse problem of marine remote sensing.
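To illustrate the key idea of inferring a posterior distribution rather than a point estimate, the sketch below runs a simple rejection-sampling scheme (a crude stand-in for the neural density estimators used in SBI) on a toy one-parameter "reflectance" model. The forward model, prior range, and band count are invented for illustration and are not those of the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_model(chl, wavelengths):
    """Toy 'physics' model: reflectance decreases with phytoplankton (chl).
    Purely illustrative -- not a real radiative-transfer model."""
    return 0.05 * np.exp(-0.1 * chl) + 0.001 * wavelengths / wavelengths.max()

wavelengths = np.linspace(400, 700, 10)          # 10 toy spectral bands
true_chl = 3.0
observed = forward_model(true_chl, wavelengths) + rng.normal(0, 1e-4, 10)

# Simulate spectra from the prior and keep parameters whose spectra are
# closest to the observation: the kept samples approximate the posterior.
prior_samples = rng.uniform(0, 10, 20000)        # prior over chl concentration
sims = np.stack([forward_model(c, wavelengths) for c in prior_samples])
dist = np.linalg.norm(sims - observed, axis=1)
posterior = prior_samples[dist < np.quantile(dist, 0.01)]

# A posterior mean AND width, instead of a single inverted value.
print(posterior.mean(), posterior.std())
```

The width of `posterior` is exactly the kind of uncertainty information the abstract highlights; amortised SBI obtains it from a trained network instead of re-running simulations per observation.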
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Earth Observation and Artificial Intelligence Ethics for Environmental Well-being

Authors: Dr. Loretta Latronico, Silvio De Magistris, Prof. Alberto del
Affiliations: ESA
The remarkable advancements in Artificial Intelligence (AI) have driven significant technological progress across various fields in recent years, such as security, health, infrastructure, climate change monitoring, resource usage, waste management, agriculture, productivity, and production, among others, playing a role in tackling environmental challenges and maintaining the planet's ecosystem. While AI offers immense potential benefits, it also poses substantial challenges and risks, including the so-called Existential and Catastrophic Risks [1][2]. It therefore becomes essential to consider AI technology with respect to its ethical impact. This research explores how AI for Earth Observation can contribute to the implementation of AI Ethics Standards for Environmental Well-being. While various studies and guidelines for defining AI ethics have been available for several years [3][4][5], little attention has been given to the human-environment interconnection, and an extremely anthropocentric perspective still prevails. Unlike these approaches, ecocentrism in environmental ethics [6] considers the interconnection and interdependence of all life forms within their natural ecosystem [7][8][9]. This perspective transitions from anthropocentrism, which prioritizes human interests, to a more comprehensive outlook that encompasses the entire environment and its inhabitants. This growing orientation is developing due to increasing awareness of the profound relationships between humans and the environment, leading to studies on sustainable development strategies and their impact on human and planetary survival and flourishing [10][11]. This awareness suggests an AI ethics that supports a systemic approach to caring for human and planetary well-being, inspired by natural ecosystems [12][13]. Particularly, in recent years, the Planetary Boundary framework [14][15][16] has been developed to describe the close interconnection between humans and the planet.
It presents the fundamental conditions within which the planet can sustain human development without risking collapse, while considering the balance of natural ecosystems [17]. The framework establishes the parameters for sustainable human development. Planetary boundaries offer a comprehensive perspective on the health of our planet and delineate the systems that ensure the Earth's stable functioning: “Transgressing one or more planetary boundaries may be deleterious or even catastrophic due to the risk of crossing thresholds that will trigger non-linear, abrupt environmental change within continental- to planetary-scale systems” [14]. The framework outlines nine Planetary Boundaries that define the limits for sustainable human development and prosperity for future generations. These boundaries indicate the thresholds that maintain the stability of the planet’s ecosystems:

1. Climate change - CO2 concentration;
2. Ocean acidification - mean saturation state of surface sea water in relation to aragonite;
3. Stratospheric ozone depletion - reduction in O3;
4. Interference with the global phosphorus and nitrogen cycles - limits on industrial and agricultural fixation of N2 and annual P inflow to oceans;
5. Rate of biodiversity loss - annual rate of species extinctions;
6. Global freshwater use - consumptive use of runoff resources;
7. Land-system change - ice-free land surface under cropland;
8. Aerosol loading - annual mean interhemispheric aerosol optical depth (AOD) difference;
9. Chemical pollution - percentage of synthetic chemicals released to the environment.

By harnessing AI techniques, Earth Observation data can support the continuous monitoring of these Planetary Boundaries, providing detailed, real-time data on key environmental variables and crucial insights to inform global efforts to maintain Earth's resilience and sustainability and to promote human development that stays within those limits.
Particularly, EO and AI Ethics for Environmental Well-being plays a central role in supporting organisations to:

- Contribute to a nature-positive and net-zero economy across value chains.
- Understand their impacts and dependencies on nature.
- Set science-based decarbonisation targets.

Integrating AI requires careful consideration of responsible practices. Key components include social good, bias mitigation, AI security, geo-privacy, scientific excellence, open data, and ethical AI application [18]. By fostering collaboration and dialogue among stakeholders, this study aims to advance ethical discourse and promote a culture of responsible AI development and deployment. It provides a foundational framework for navigating ethical dilemmas and guiding the responsible integration of AI technologies into Earth Observation missions. This study advocates for AI Ethics Standards focused on Environmental Well-being, including open access to data and code to support evidence-based decision-making, enabling stakeholders to make well-informed choices. It emphasizes AI’s critical role in addressing pressing environmental challenges through sustainable resource management and conservation efforts [18]. By aligning AI for Earth Observation (AI4EO) workflows with sustainability principles, this work presents a strategy for leveraging AI technologies to advance the Planetary Boundaries framework, thereby contributing to a more resilient and sustainable future for humanity and the planet.

References:
[1] N. Bostrom and M. M. Cirkovic, Global Catastrophic Risks. Oxford University Press, USA, 2011.
[2] N. Bostrom, “Existential risks: Analyzing human extinction scenarios and related hazards,” Journal of Evolution and Technology, vol. 9, 2002.
[3] The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Autonomous and Intelligent Systems, first edition. IEEE, 2019. [Online]. Available: https://standards.ieee.org/wp-content/uploads/import/documents/other/ead v2.pdf
[4] European Commission: Directorate-General for Communications Networks, Content and Technology, Ethics Guidelines for Trustworthy AI. Publications Office of the European Union, 2019. [Online]. Available: https://data.europa.eu/doi/10.2759/346720
[5] IEEE SA, “Prioritizing people and planet as the metrics for responsible AI,” 2023. [Online]. Available: https://standards.ieee.org/wp-content/uploads/2023/07/ead-prioritizing-people-planet.pdf
[6] X. Liu, G. Liu, Z. Yang, B. Chen, S. Ulgiati, “Comparing national environmental and economic performances through energy sustainability indicators: Moving environmental ethics beyond anthropocentrism toward ecocentrism,” Renewable and Sustainable Energy Reviews, vol. 58, 2016, pp. 1532-1542, ISSN 1364-0321, https://doi.org/10.1016/j.rser.2015.12.188
[7] A. Leopold, A Sand County Almanac: And Sketches Here and There. Oxford University Press, 1949.
[8] H. Rolston III, “The balance of nature: A ground for human values,” Main Currents in Modern Thought, vol. 26, 1969.
[9] A. Naess, E. Cavazza, and A. Drengson, Siamo l’aria che respiriamo: saggi di ecologia profonda. Piano B, 2021.
[10] United Nations, Transforming Our World: The 2030 Agenda for Sustainable Development. New York: United Nations, Department of Economic and Social Affairs, 2015.
[11] World Business Council for Sustainable Development (WBCSD), “Vision 2050: Time to transform,” 2021. [Online]. Available: https://timetotransform.biz/wp-content/uploads/2021/03/WBCSD Vision 2050 Time-To-Transform.pdf
[12] S. De Magistris and A. Del Bimbo, “SDG-based AI ethics: an analysis of recent computer vision research,” in 2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE). IEEE, 2023, pp. 560-564.
[13] S. De Magistris and A. Del Bimbo, “AI ethics for human beings and the planet: an analysis of recent computer vision research,” International Conference on Sustainable Development, 2023.
[14] J. Rockstrom, W. Steffen, K. Noone, A. Persson, F. S. Chapin III, E. Lambin, T. M. Lenton, M. Scheffer, C. Folke, H. J. Schellnhuber et al., “Planetary boundaries: exploring the safe operating space for humanity,” Ecology and Society, vol. 14, no. 2, 2009. [Online]. Available: http://www.jstor.org/stable/26268316
[15] W. Steffen, K. Richardson, J. Rockstrom, S. E. Cornell, I. Fetzer, E. M. Bennett, R. Biggs, S. R. Carpenter, W. De Vries, C. A. De Wit et al., “Planetary boundaries: Guiding human development on a changing planet,” Science, vol. 347, no. 6223, p. 1259855, 2015. [Online]. Available: https://www.science.org/doi/abs/10.1126/science.1259855
[16] K. Richardson, W. Steffen, W. Lucht, J. Bendtsen, S. Cornell, J. Donges, M. Drake, I. Fetzer, B. Govindasamy, W. Von Bloh, G. Feulner, S. Fiedler, D. Gerten, T. Gleeson, M. Hofmann, W. Huiskamp, M. Kummu, C. Mohan, D. Nogu Bravo, and J. Rockstrom, “Earth beyond six of nine planetary boundaries,” Science Advances, vol. 9, 2023.
[17] W. Steffen, K. Richardson, J. Rockstrom, S. E. Cornell, I. Fetzer, E. M. Bennett, R. Biggs, S. R. Carpenter, W. De Vries, C. A. De Wit et al., “Planetary boundaries: Guiding human development on a changing planet,” Science, vol. 347, no. 6223, p. 1259855, 2015. [Online]. Available: https://www.science.org/doi/abs/10.1126/science.1259855
[18] P. Ghamisi, A. Marinoni, C. M. Gevaert, C. Persello, S. Selvakumaran et al., “Responsible AI for Earth Observation,” arXiv preprint arXiv:2405.20868, 2024.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Comparison of hyper-spectral and multi-spectral imaging for culture classification

Authors: Domenico Pomarico, Roberto Cilli, Ester Pantaleo, Loredana Bellantuono, Vincenzo Giannico, Alfonso Monaco, Giovanni Sanesi, Sabina Tangaro, Roberto Bellotti, Gaetano Alessandro Vivaldi, Nicola Amoroso
Affiliations: Dipartimento di Fisica Michelangelo Merlin, Università degli Studi di Bari, Istituto Nazionale Fisica Nucleare, Sezione di Bari, Bari, BA, Puglia, Dipartimento di Biomedicina Traslazionale e Neuroscienze (DiBraiN), Università degli Studi di Bari, Dipartimento di Scienze del Suolo, della Pianta e degli Alimenti, Università degli Studi di Bari A. Moro, Dipartimento di Farmacia - Scienze del Farmaco, Università degli Studi di Bari A. Moro
The identification of crops in satellite images plays a key role in real-time resource optimization and in monitoring for adverse environmental conditions. We consider Sentinel-2 and PRISMA, operated respectively by the European Space Agency (ESA) and the Italian Space Agency (ASI), for Earth observation exploiting multi-spectral and hyper-spectral technologies, respectively. The availability of a higher number of bands could represent an inherent advantage in characterizing the collected spectra for classification purposes. We exploit multiple classification methods, ranging from conventional ones, like random forest, in order to establish a performance benchmark, up to deep learning techniques based on convolutional neural networks (CNNs) and transformers. In this work we present a comparison of the informative content provided by multi- and hyper-spectral imagery. Datasets currently available in the literature have a high resolution because of the adopted data collection methods, while we focus on a much lower satellite imagery resolution. Data are collected in the Barletta-Andria-Trani (BAT) province, located in Puglia, Southern Italy, a geographic region undergoing wide variations with respect to temperature and dry environmental conditions. The considered crops consist of olive trees, grapevines and arable land, which represent the statistically dominant land cover in this area as labeled by expert agronomists, while the remaining labels are grouped into a single category. The multi-class classification performance is evaluated according to accuracy, highlighting the recognition of specific labels in terms of precision, recall and F1-score. The lower resolution of the satellite images makes discrimination between olive trees and grapevines hard, since the ground influences the related pixels proportionally to its area. The confounding elements are investigated across the multiple available time frames, revealing a different behavior depending on the considered season. Explainable artificial intelligence (XAI) is included to target this seasonal characterization, highlighting the possibility of exploiting super-pixel correlations under the proper environmental conditions. Multiple XAI techniques are exploited to this end because of their complementary perspectives within the abundant information content of the images, which imposes a high combinatorial complexity.
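The random-forest benchmark mentioned above can be sketched with scikit-learn on synthetic per-pixel spectra; the band count, class means, and noise level below are invented stand-ins for the real labelled Sentinel-2/PRISMA pixels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_bands = 12                    # Sentinel-2-like band count (illustrative)
classes = ["olive", "grapevine", "arable", "other"]

# Synthetic per-class mean spectra plus noise -- stand-ins for labelled pixels.
means = rng.random((len(classes), n_bands))
X = np.vstack([m + 0.05 * rng.standard_normal((500, n_bands)) for m in means])
y = np.repeat(np.arange(len(classes)), 500)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=classes))
```

With real low-resolution pixels the olive/grapevine confusion described in the abstract would show up directly in the per-class precision and recall of this report.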
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Explainable AI (XAI) for Feature Selection for Satellite-Based Sea Ice Mapping

Authors: Athanasios Athanasiadis, Tore Wulf, Jørgen Buus-Hinkler, Suman Singha, Ninna Ligaard, Matilde Brandt
Affiliations: National Center for Climate Research, Danish Meteorological Institute
Artificial Intelligence (AI) and deep learning (DL) have transformed Earth Observation, leveraging extensive datasets and advanced computational power. These advancements have enabled the analysis of complex patterns and the generation of meaningful results. However, there is an inverse relationship between model complexity and explainability [1]. Deep Learning architectures, while powerful, often operate as “black boxes”, requiring dedicated attention to understand their inner workings and gain insights into their predictions. In Earth Observation, where decision-making often relies on models, it is important to understand their predictions to ensure accuracy and reliability. To address these challenges, the field of eXplainable AI (XAI) has emerged in recent years [2],[3]. Many XAI algorithms have been developed and can be used depending on the application and the specific model to be explained. In this study, we apply XAI algorithms such as SHAP (SHapley Additive exPlanations) [1], a leading XAI technique, to investigate the DMI-ASIP (Automated Sea Ice Products) model [4]. DMI-ASIP is an operational deep learning model predicting high-resolution Sea Ice Concentration (SIC) by fusing Sentinel-1 SAR and Advanced Microwave Scanning Radiometer (AMSR2) passive microwave observations. ASIP has been trained on a vast dataset consisting of Sentinel-1 HH/HV (horizontal transmit, horizontal/vertical receive polarizations) imagery and AMSR2 brightness temperatures, with regional ice charts as labels [5]. The model operates at a pan-Arctic scale, serving as a tool for sea ice monitoring. The motivation for this study is the identification of potential redundancies among the model’s input features, which comprise 2 SAR and 14 AMSR2 channels. Our approach combines SHAP, which quantifies the contribution of each input feature to model predictions, with feature ablation to assess the impact of removing individual features or groups of features.
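The feature-ablation side of this approach can be sketched with a toy linear stand-in for the SIC model: drop one channel at a time, refit, and measure the performance loss. The 16-channel layout with deliberately duplicated channels is invented to make the redundancy signal visible, and is not DMI-ASIP's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a multi-channel regression model: 16 input channels
# (mimicking 2 "SAR" + 14 "AMSR2"), where channels 5-7 are deliberate
# duplicates of channels 0-2.  The layout is illustrative, not DMI-ASIP's.
n = 2000
X_base = rng.standard_normal((n, 5))                    # informative channels
X_full = np.hstack([X_base, X_base[:, :3],              # redundant copies
                    rng.standard_normal((n, 8))])       # independent channels
w = rng.standard_normal(16)
y = X_full @ w                                          # noiseless toy target

def score(mask):
    """R^2 of a least-squares refit using only the unmasked channels."""
    coef, *_ = np.linalg.lstsq(X_full[:, mask], y, rcond=None)
    resid = y - X_full[:, mask] @ coef
    return 1.0 - resid.var() / y.var()

full = score(np.ones(16, dtype=bool))
for ch in range(16):
    mask = np.ones(16, dtype=bool)
    mask[ch] = False
    print(f"channel {ch:2d}: R^2 drop {full - score(mask):+.5f}")
# Duplicated channels (0-2 and 5-7) show ~zero drop: the refit recovers their
# contribution from the copies -- exactly the redundancy signal sought here.
```

Group ablation works the same way, with several mask entries set to False at once; SHAP complements this by attributing the remaining performance to individual channels.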
Initial results show the existence of redundancy among the input features. By reducing the input features while retaining the same model performance, training time can be shortened, data dimensionality can be reduced, and overfitting can be limited. Finally, models with fewer input channels are inherently easier to interpret, which is an advantage for operational tools like DMI-ASIP, which can contribute to Arctic operations and support decision-making.

References:
[1] S. M. Lundberg, S. I. Lee, “A unified approach to interpreting model predictions,” in Advances in Neural Information Processing Systems, T. Sejnowski, Ed. (Neural Information Processing Systems, 2017), pp. 4768-4777.
[2] A. Adadi and M. Berrada, “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI),” IEEE Access, vol. 6, pp. 52138-52160, 2018, doi: 10.1109/ACCESS.2018.2870052.
[3] Gevaert, C. M., 2022. Explainable AI for earth observation: a review including societal and regulatory perspectives. Int. J. Appl. Earth Obs. Geoinf. 112, 102869.
[4] Wulf, T., Buus-Hinkler, J., Singha, S., Shi, H., and Kreiner, M. B.: Pan-Arctic sea ice concentration from SAR and passive microwave, The Cryosphere, 18, 5277-5300, https://doi.org/10.5194/tc-18-5277-2024, 2024.
[5] Saldo, R., Brandt Kreiner, M., Buus-Hinkler, J., Pedersen, L. T., Malmgren-Hansen, D., and Nielsen, A. A.: AI4Arctic/ASIP Sea Ice Dataset - version 2, DTU Data [data set], https://doi.org/10.11583/DTU.13011134.v2, 2020.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Harmonizing Attributions in CNNs: A Feature-Based Approach for Land Cover Classification in Satellite Imagery

Authors: Timo Tjaden Stomberg, Lennart Arne Reißner, Prof. Dr. Martin Schultz, Prof. Dr.-Ing. Ribana Roscher
Affiliations: University Of Bonn, Forschungszentrum Jülich, University Of Cologne
Explainable machine learning has gained substantial attention for its role in enhancing transparency and trust in machine learning applications. In computer vision, attribution methods like Grad-CAM and occlusions are frequently used to identify how features or (input) pixels contribute to the predictions of convolutional neural networks. They are particularly valuable for improving transparency and for weakly-supervised segmentation tasks. However, it is a well-known challenge that different attribution methods often yield different outcomes. We present a methodology to harmonize the outcomes of attribution methods, making them more similar to one another and thus more trustworthy. We achieve this by analyzing the learned feature representations of the entire training data with respect to their attributions. In this way, features are linked with attributions, and features of test images can be directly assigned to the determined attributions. By coherently attributing feature representations, we make the following contributions: 1) Linking features with attributions provides greater clarity compared to standard attribution methods, whose computation is often quite complex. 2) By averaging attributions across features in the training data, results become more consistent across different attribution methods, making the choice of method less critical. 3) Attributions for different input samples become more uniform, enhancing consistency across samples. 4) Validated against ground truth data in the context of land cover segmentation, harmonized attributions are more accurate and insightful compared to original attributions. We focus on the application of land cover classification in satellite imagery and evaluate our methodology using two datasets, DFC2020 [1] and Ben-ge [2], and three model architectures: VGG-16, ResNet-18, and UH-Net [3]. The UH-Net is an interpretable-by-design architecture consisting of a U-Net and a classifier head.
The specific architecture allows for high-level feature attributions in a deep layer while maintaining high resolution. We evaluate nine attribution methods: Sliding Window Occlusions, k-means Occlusions, Grad-CAM, Gradients X Features, Layer-wise Relevance Propagation, Integrated Gradients, DeepLift, Gradient-SHAP, and DeepLift-SHAP; and compare their similarity using the Pearson correlation coefficient. For the DFC2020 dataset and the UH-Net, the nine tested attribution methods have, on average, a correlation of 0.61. After harmonization, the correlation increases to 0.87. For VGG-16, harmonization raises the correlation from 0.71 to 0.84, and for ResNet-18, from 0.56 to 0.66. For all models, Grad-CAM is best aligned with the feature representations before harmonization. Validating the predominant class of attributions with the segmentation ground truth of the DFC2020 dataset, harmonization increases the F1-score (macro) from 43 to 67, averaged over all attribution methods. We observe similar improvements for the Ben-ge dataset.

References:
[1] Robinson, C., Malkin, K., Jojic, N., Chen, H., Qin, R., Xiao, C., Schmitt, M., Ghamisi, P., Hansch, R., & Yokoya, N. (2021). Global Land-Cover Mapping With Weak Supervision: Outcome of the 2020 IEEE GRSS Data Fusion Contest. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 3185-3199. https://doi.org/10.1109/JSTARS.2021.3063849
[2] Mommert, M., Kesseli, N., Hanna, J., Scheibenreif, L., Borth, D., & Demir, B. (2023). Ben-Ge: Extending BigEarthNet with Geographical and Environmental Data. IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, 1016-1019. https://doi.org/10.1109/IGARSS52108.2023.10282767
[3] Stomberg, T., Weber, I., Schmitt, M., & Roscher, R. (2021). Jungle-Net: Using Explainable Machine Learning to Gain New Insights into the Appearance of Wilderness in Satellite Imagery. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-3-2021, 317-324. https://doi.org/10.5194/isprs-annals-V-3-2021-317-2021
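A toy NumPy sketch of the harmonization idea described in the abstract (replace each pixel's attribution by the mean over its feature cluster, then compare methods via Pearson correlation); the cluster labels, relevance values, and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy setting: each pixel belongs to one of 4 learned feature clusters.  Two
# attribution methods (stand-ins for, e.g., Grad-CAM vs. occlusions) share the
# cluster-level relevance signal but add method-specific pixel noise.
n_pixels = 4096
labels = rng.integers(0, 4, n_pixels)               # feature-cluster assignment
cluster_signal = np.array([1.5, -0.5, 0.2, -1.2])   # shared "true" relevance
signal = cluster_signal[labels]
attr_a = signal + 0.8 * rng.standard_normal(n_pixels)
attr_b = signal + 0.8 * rng.standard_normal(n_pixels)

def harmonize(attr, labels):
    """Replace each pixel's attribution by the mean over its feature cluster."""
    means = np.array([attr[labels == k].mean() for k in range(4)])
    return means[labels]

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

r_raw = pearson(attr_a, attr_b)
r_harm = pearson(harmonize(attr_a, labels), harmonize(attr_b, labels))
print(f"raw r = {r_raw:.2f}, harmonized r = {r_harm:.2f}")
```

Averaging within clusters suppresses the method-specific noise while keeping the shared cluster-level signal, so the correlation between methods rises, mirroring the 0.61-to-0.87 improvement reported for UH-Net on DFC2020.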
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Governance Approaches for Ethical and Gender-Inclusive Use of Earth Observation Data in Organizations

Authors: Dr. Loretta Latronico, Dr Aizhan
Affiliations: ESA
Integrating Earth Observation (EO) data into decision-making processes holds significant potential for addressing global challenges, particularly in humanitarian sectors. However, the governance of EO data, especially in relation to gender-sensitive interventions, remains underexplored. An analysis of both societal influences and the industry itself illustrates how this situation emerged. The societal challenges primarily pertain to the imbalanced gender ratios across all STEM fields, which are particularly evident within the Artificial Intelligence (AI) communities. The underrepresentation of women EO practitioners has led to certain gender-specific issues being neglected. These challenges also extend to the lack of clear ethical guidelines for the collection of EO data and, subsequently, to ethical issues such as gender mainstreaming being excluded from the design stage of projects [1] [2] [3]. Given the limited research on the ethical implications of EO, it is evident that external challenges play a significant role in shaping the activities within this sector. Gender-inclusivity issues are also evident at the sector level. Data collection often lacks consideration of ethical implications or measures to prevent biased outcomes. While it is understandable that EO practitioners might perceive their outputs as neutral or unbiased, given that they are essentially images depicting real-world occurrences, this perception requires scrutiny. Without applying a gender-inclusive perspective, there is a risk that users of this data may overlook critical insights about the populations being monitored, such as the gender composition and distinct needs of each gender within these groups.
Consequently, both the EO community and development practitioners who design interventions based on this data may perceive EO images as definitive, rather than recognizing them as part of a broader dataset that requires additional information about the surveyed populations to effectively address their needs [4] [5] [6] [7] [8]. This study examines organizational governance frameworks for EO data, with a focus on their alignment with diversity and inclusion principles. It explores how data stewardship practices and ethical guidelines can address biases in EO data collection, processing, and application, ensuring equitable outcomes. Utilizing a scoping review of published and grey literature, the study aims, on the one hand, to identify approaches to EO data governance and their relevance to one of the International Labour Organization's fundamental rights: the elimination of discrimination in employment and occupation. On the other hand, the study provides recommendations to stakeholders, including EO practitioners and NGOs using this information, on strategies to ensure that the collection and use of EO data is gender-inclusive. The findings will inform actionable recommendations to help organizations institutionalize gender-inclusive EO data governance, contributing to a more equitable society.

References:
[1] A. Deen, M. R. Guerrero, J. E. Bonilla Morales, Z. del Villar, May 2022, Science-Policy Brief for the Multistakeholder Forum on Science, Technology and Innovation for the SDGs, Replicating Gender Bias from Above: Earth Observation, Machine Learning and SDG 5.
[2] Kofman, E. (2019). Gendered mobilities and vulnerabilities: refugee journeys to and in Europe. Journal of Ethnic and Migration Studies, 45(12), 2185-2199.
[3] Lary, D. J., Zewdie, G. K., Liu, X., Wu, D., Levetin, E., Allee, R. J., ... & Aurin, D. (2018). Machine learning applications for earth observation. Earth Observation Open Science and Innovation, 165.
[4] Sheppard, S. R. J., & Cizek, P. (2009). The ethics of Google Earth: Crossing thresholds from spatial data to landscape visualisation. Journal of Environmental Management, 90(6), 2102-2117. ISSN 0301-4797. https://doi.org/10.1016/j.jenvman.2007.09.012
[5] SERVIR-Mekong. 2015. Gender and GIS: Guidance Notes. Asian Disaster Preparedness Center: Bangkok, Thailand.
[6] Wachter, S., Mittelstadt, B., & Russell, C. (2020). Bias preservation in machine learning: the legality of fairness metrics under EU non-discrimination law. W. Va. L. Rev., 123, 735.
[7] Wickert, L., Bogen, M., & Richter, M. (2020). Supporting the Management of Humanitarian Operations Concerning Migration Movements with Remote Sensing. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLIII-B3-2020, 233-240. 10.5194/isprs-archives-XLIII-B3-2020-233-2020
[8] Witmer, F. D. (2015). Remote sensing of violent conflict: eyes from above. International Journal of Remote Sensing, 36(9), 2326-2352.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: AI Ethics for SDGs: Computer Vision in Earth Observation

Authors: Dr. Loretta Latronico, Silvio De Magistris, Prof. Alberto del
Affiliations: ESA
In recent decades, the concept of well-being has gained increasing attention. It has become the subject of numerous scientific studies aimed at defining and measuring it [1][2][3][4]. Well-being is no longer associated exclusively with humanity, as it was in the past, but has expanded to include the entire Planet and Nature as a whole [5]. It has become central to economic and political strategies to ensure the flourishing of the Planet and of future generations [6]. The UN Agenda 2030 for Sustainable Development sets out 17 Sustainable Development Goals (SDGs), which are part of a broader program of action consisting of 169 associated targets that can be grouped into three categories: environmental, economic, and social. It is an action program for people, the Planet, and prosperity. Human and planetary well-being are at the heart of Agenda 2030. In particular, Agenda 2030 defines 17 SDGs that address: 1-No Poverty, 2-Zero Hunger, 3-Good Health and Well-being, 4-Quality Education, 5-Gender Equality, 6-Clean Water and Sanitation, 7-Affordable and Clean Energy, 8-Decent Work and Economic Growth, 9-Industry, Innovation and Infrastructure, 10-Reduced Inequalities, 11-Sustainable Cities and Communities, 12-Responsible Consumption and Production, 13-Climate Action, 14-Life below Water, 15-Life on Land, 16-Peace, Justice and Strong Institutions, 17-Partnerships for the Goals. Numerous studies have highlighted the potential contribution of AI to achieving the SDGs [7][8][9][10] and, more recently, some approaches to AI ethics based on the SDGs have been proposed [11][12][13]. However, the rapid growth of AI research and the increasing expansion of its applications call for a more focused analysis that considers the distinct sub-fields of AI. Accordingly, this research examines the influence of AI on the SDG targets, with a specific emphasis on the domain of Computer Vision. 
It focuses on Computer Vision for Earth Observation (EO) and its potential to advance the implementation of AI Ethics for the planet’s well-being. Computer Vision is a key AI technology that aims to enable computers to derive meaningful information from digital images and video. Key Computer Vision functions are detection, recognition, classification, prediction and, more recently, image and video generation starting from either visual or text data. Progress in Computer Vision over the past few years has leveraged advancements in Machine Learning, marking significant developments in AI and its applications across various fields, including agriculture, healthcare, autonomous vehicles, security, and manufacturing [14][15]. In the space sector, and specifically in the EO sub-domain, Computer Vision involves using advanced algorithms to analyze satellite and aerial imagery. This process enables the extraction of valuable information about the Earth's surface. Computer Vision for EO can contribute significantly to the achievement of the SDGs in various ways, including monitoring deforestation (SDG 15), tracking climate change impacts (SDG 13), assessing agricultural productivity to ensure food security (SDG 2), and mapping urban growth for sustainable cities (SDG 11). By providing accurate and timely data, AI and Computer Vision for EO aid in decision-making, resource management, and environmental protection efforts, thus contributing to planetary and human well-being. AI for EO has the potential to demonstrate how new and relevant indicators for the SDG framework can be developed, how countries can set and plan their SDG targets using EO-based support tools, and how EO can facilitate evidence-based decision-making to support sustainable development policies [16]. This research aligns AI for EO efforts with the SDGs, establishing a benchmark for AI ethics through transparency, reproducibility, and trust in AI for EO to benefit the planet and human well-being. 
It promotes open access to data and code to facilitate evidence-based decision-making, enabling stakeholders to make informed choices. Additionally, it highlights the importance of addressing biases and protecting privacy rights to encourage fair outcomes in vulnerable communities while fostering societal trust and cohesion [17]. [1] A. Antonovsky, “Unraveling the mystery of health: How people manage stress and stay well,” San Francisco, vol. 175, 1987. [2] C. D. Ryff and C. L. M. Keyes, “The structure of psychological well-being revisited,” Journal of Personality and Social Psychology, vol. 69, no. 4, p. 719, 1995. [3] World Health Organization (WHO), Basic Documents, 49th ed. The Organization, 2020. [4] IEEE SA, “Prioritizing people and planet as the metrics for responsible AI,” 2023, accessed: 2024-09-30. [Online]. Available: https://standards.ieee.org/wp-content/uploads/2023/07/ead-prioritizing-people-planet.pdf [5] M. Elo, J. Hytonen, S. Karkulehto, T. Kortetmaki, J. S. Kotiaho, M. Puurtinen, and M. Salo, Interdisciplinary Perspectives on Planetary Well-being. Taylor & Francis, 2024. [6] The Council of the European Union, “Council conclusions on the economy of wellbeing,” Official Journal of the European Union, 2019. [Online]. Available: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52019XG1213(01) [7] R. Vinuesa, H. Azizpour, I. Leite et al., “The role of artificial intelligence in achieving the Sustainable Development Goals,” Nat Commun 11, 233 (2020). https://doi.org/10.1038/s41467-019-14108-y [8] A. K. Kar, S. K. Choudhary, and V. K. Singh, “How can artificial intelligence impact sustainability: A systematic literature review,” Journal of Cleaner Production, vol. 376, 2022, 134120, ISSN 0959-6526, https://doi.org/10.1016/j.jclepro.2022.134120 [9] O. Nasir, R. Javed, S. Gupta, R. Vinuesa, and J. Qadir (2022). Artificial intelligence and sustainable development goals nexus via four vantage points. 
Technology in Society, 72, 102171. 10.1016/j.techsoc.2022.102171. [10] I. Palomares, E. Martinez-Camara, R. Montes et al., “A panoramic view and SWOT analysis of artificial intelligence for achieving the sustainable development goals by 2030: progress and prospects,” Appl Intell 51, 6497–6527 (2021). https://doi.org/10.1007/s10489-021-02264-y [11] S. De Magistris and A. Del Bimbo, “SDG-based AI ethics: an analysis of recent computer vision research,” in 2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE). IEEE, 2023, pp. 560–564. [12] F. Mazzi and L. Floridi, The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer, 2023. [Online]. Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4448859 [13] S. De Magistris and A. Del Bimbo, “AI ethics for human beings and the planet: an analysis of recent computer vision research,” International Conference on Sustainable Development, 2023. [14] J. Chai, H. Zeng, A. Li, and E. W. T. Ngai, “Deep learning in computer vision: A critical review of emerging techniques and application scenarios,” Machine Learning with Applications, vol. 6, 2021, 100134, ISSN 2666-8270, https://doi.org/10.1016/j.mlwa.2021.100134 [15] M. Hassaballah and A. I. Awad (Eds.) (2020). Deep Learning in Computer Vision: Principles and Applications. CRC Press. [16] Earth Observation for SDG: Compendium of Earth Observation contributions to the SDG Targets and Indicators, May 2020, https://eo4society.esa.int/wp-content/uploads/2021/01/EO_Compendium-for-SDGs.pdf [17] P. Ghamisi, A. Marinoni, C. M. Gevaert, C. Persello, S. Selvakumaran et al. (2024). Responsible AI for Earth Observation. arXiv preprint arXiv:2405.20868.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone N)

Poster: Interpreting Environmental Risk Hotspots in the Apulia Region with eXplainable Artificial Intelligence (XAI)

Authors: Alessandra Costantino, Professor Francesco Giordano, Professor Nicola
Affiliations: University of Bari Aldo Moro
The management of risks arising from multiple and complex land degradation processes, such as hydrogeological instability, flooding, and wildland-urban interface fires, requires an interdisciplinary approach. In fact, effective solutions require the integration of multi-source geospatial data to identify risk hotspots and support mitigation strategies. This approach combines multiple forms of Earth Observation (EO) techniques to assess the contribution of factors like land use patterns and extreme weather events to environmental disasters in urban and peri-urban areas. The GEORES project, part of the Italian Space Agency (ASI) initiative "Innovation for Downstream Preparation for Science" (I4DP-SCIENCE), explores the integration of EO technologies with eXplainable Artificial Intelligence (XAI). This combination is used for the analysis of geospatial data from various sources, enabling the assessment of risks such as floods, sediment connectivity, land displacements, and wildfires. A key outcome of the project is the development of a web-based geospatial application. This tool will be used to provide a comprehensive framework for the evaluation of the causes of extreme events and to offer a guide for the implementation of data-driven mitigation strategies. This work focuses on the multi-risk feature extraction module, which aims to identify hotspots of four environmental risks: floods, land displacements, sediment connectivity, and wildfires. A Random Forest (RF) binary classification model was used to process a dataset of predictor features, identified and reduced in collaboration with the project partners CNR-IREA (National Research Council, Institute for Electromagnetic Sensing of the Environment), UNIBA-DISTEGEO, and UNIBA-DISSPA (University of Bari, Department of Earth and Geo-environmental Sciences and Department of Soil, Plant, and Food Sciences), along with the company GAP s.r.l., a specialist in remote sensing technologies. 
The dataset includes indicators from various sources, such as terrain models, hydrological indices, vegetation cover, and urban density data, sampled on a 100m grid. The dataset encompasses a test area that includes most of the Gargano promontory, located in the southern Italian region of Apulia. The area under examination is approximately 1800 km² and will be expanded to include a second region, covering the metropolitan area of Bari. The dataset was used to train and validate one instance of a RF model for each one of the four risk types. The target variable for each version of the model is represented by an indicator signaling the presence of a risk hotspot in a given cell. The target indicators were either evaluated using models that predict the hazard or the magnitude of each phenomenon (sediment connectivity, wildfires) or by data publicly available from local or national administrative sources (land displacements, floods). To ensure a realistic evaluation of model performance, a spatially-aware k-fold cross-validation approach was implemented. The test area was divided into macro-cells, and training and validation examples were assigned to different macro-cells to ensure that the model was not trained on grid points adjacent to test grid points (except for points along the cell borders). The analysis was then expanded with the use of eXplainable Artificial Intelligence (XAI) techniques in the form of Shapley values, which quantify the contribution of individual features and provide an interpretation of the model predictions. Identifying the driving factors is critical for decision-making in the management of environmental risks, particularly in regard to the efficient use of available resources for implementing preemptive measures. 
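The macro-cell splitting described above can be sketched with scikit-learn's GroupKFold. This is a minimal illustration on synthetic data; the coordinates, features, labels, and 10 km cell size are assumptions for demonstration, not the project's actual configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)

# Synthetic stand-in for the 100 m grid: coordinates, predictor features,
# and binary hotspot labels (all values are illustrative).
n = 2000
x = rng.uniform(0, 60_000, n)        # easting within the test area [m]
y = rng.uniform(0, 30_000, n)        # northing [m]
X = rng.normal(size=(n, 5))          # predictors (terrain, hydrology, ...)
target = rng.integers(0, 2, n)       # 1 = risk hotspot, 0 = no hotspot

# Assign each grid point to a macro-cell (here 10 km x 10 km) so that
# training and validation folds never share a macro-cell.
cell = 10_000
groups = (x // cell).astype(int) * 1000 + (y // cell).astype(int)

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, target, groups):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[train_idx], target[train_idx])
    scores.append(model.score(X[test_idx], target[test_idx]))
print(f"spatially-aware CV accuracy: {np.mean(scores):.3f}")
```

Grouping by macro-cell prevents the optimistic bias that ordinary random k-fold splits incur when spatially adjacent (and hence correlated) grid points end up in both training and test sets.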
The output of the XAI analysis consists of Shapley maps, which provide both a "global" explanation of the phenomenon—enabling an understanding of the average contribution of each feature to the prediction across the entire region under study—and a "local" explanation. The local explanation quantifies each feature's contribution to the model’s decision to identify a grid point as a risk hotspot and represents a useful guide for the implementation of proactive strategies to mitigate environmental risks at the local administration level. The combination of EO technologies with XAI and the creation of a web-based geospatial application is a valuable tool for the improvement of urban resilience and sustainability, as it facilitates the downstream uptake of scientific results and empowers end users to make informed decisions and take effective actions for the mitigation and management of environmental risks.
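The "local" Shapley attribution idea can be illustrated with a tiny exact computation on a toy model. In practice a tree-based approximation (e.g. the shap library) would be used for the RF model; this sketch only demonstrates the attribution principle, and all values are illustrative:

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for f(x) relative to f(baseline).

    Features outside the current coalition are held at the baseline, and
    marginal contributions are averaged over all feature orderings.
    """
    d = len(x)
    phi = np.zeros(d)
    for order in itertools.permutations(range(d)):
        z = baseline.copy()
        prev = f(z)
        for i in order:
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev
            prev = cur
    return phi / math.factorial(d)

# Toy "model": a linear predictor, whose Shapley values are known in
# closed form (w_i * (x_i - baseline_i)), so the result can be checked.
w = np.array([2.0, -1.0, 0.5])
f = lambda z: float(z @ w)
x = np.array([1.0, 2.0, 3.0])
base = np.zeros(3)

phi = shapley_values(f, x, base)
print("attributions:", phi)   # for this linear model, phi == w * x
```

A defining property visible here is local accuracy: the attributions sum exactly to the gap between the prediction for the grid point and the baseline prediction, which is what makes Shapley maps directly interpretable as per-feature shares of a hotspot decision.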
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: C.01.14 - POSTER - Exploring new observations and mission concepts for atmospheric measurements (observations, modelling and theories)

The atmospheric remote sensing community is abuzz with innovative ideas for future mission concepts enabling new research applications by means of active (radar & lidar) and passive (optical & microwave) measurement concepts. Observations from new airborne and ground-based instruments play a vital role in assessing the feasibility of these ideas for future satellite applications. These new measurements need to go hand in hand with the mathematical understanding of the theoretical frameworks and models. This session invites presentations on:

- Innovative observations of geophysical products

- Modelling efforts and theoretical frameworks to obtain innovative observations

- Feedback and lessons learned from ongoing or planned developments as well as from first ground-based or airborne campaigns
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Towards the application of σ-FORUM radiative transfer code to the Martian atmosphere

Authors: Lorenzo Buriola, Enzo Papandrea, Tiziano Maestri, Giuliano Liuzzi
Affiliations: Department of Physics and Astronomy “Augusto Righi”, University of Bologna, National Research Council (CNR), Institute of Atmospheric Sciences and Climate (ISAC), Department of Engineering, University of Basilicata
Mars is the most studied planet after Earth because of its proximity, with the purpose of understanding its evolution, searching for life and preparing for future human exploration. For these reasons, over the past years, a significant number of measurements have been collected through various satellite missions. In the framework of the “Earth Moon Mars (EMM)” project¹, this study aims to create a new tool for the fast and accurate analysis of legacy and future Mars datasets, enhancing our understanding of the Martian atmosphere. Adopting a comparative planetology approach, this work also attempts to provide insight into Earth’s atmosphere. Although the final pipeline will include a retrieval module to invert radiance measurements, here the focus is on the forward model, dedicated to simulating Martian infrared spectra. Leveraging the flexibility and capabilities of the recently developed forward model σ-FORUM (otherwise known as σ-IASI/F2N) [1], this project seeks to extend its application to the study of the Martian atmosphere. σ-FORUM is a high-performance and multipurpose radiative transfer code, originally developed to study Earth’s longwave atmospheric radiation. It generates high-resolution spectra while maintaining computational efficiency by employing parametrized optical depths. It allows the computation of fast analytical derivatives of the radiance with respect to atmospheric properties and clouds, making it suitable for fast retrievals from infrared instruments. To adapt σ-FORUM for use in Martian atmospheric studies, the code has been appropriately tuned to simulate the unique characteristics of Mars. New pressure levels have been chosen to discretize the atmosphere, properly accounting for the high variability of the surface pressure of the planet. 
High resolution parametrized optical depths for the most abundant and active gas species have been computed employing a line-by-line radiative transfer code, the Planetary Spectrum Generator (PSG) by NASA [2]. Information about atmospheric quantities was derived from version 5.3 of the Mars Climate Database (MCDv5.3) [3]. The simulations obtained by σ-FORUM are then compared with the full-physics spectra of PSG. Additional tests have also been conducted including dusty and cloudy atmospheric scenarios that σ-FORUM is capable of managing. [1] Guido Masiello et al. “The new σ-IASI code for all sky radiative transfer calculations in the spectral range 10 to 2760 cm-1: σ-IASI/F2N”. In: Journal of Quantitative Spectroscopy and Radiative Transfer 312 (2024), p. 108814. ISSN: 0022-4073. DOI: https://doi.org/10.1016/j.jqsrt.2023.108814. URL: https://www.sciencedirect.com/science/article/pii/S0022407323003321. [2] G.L. Villanueva et al. “Planetary Spectrum Generator: An accurate online radiative transfer suite for atmospheres, comets, small bodies and exoplanets”. In: Journal of Quantitative Spectroscopy and Radiative Transfer 217 (Sept. 2018), pp. 86–104. ISSN: 0022-4073. DOI: 10.1016/j.jqsrt.2018.05.023. URL: http://dx.doi.org/10.1016/j.jqsrt.2018.05.023. [3] E. Millour et al. “The Mars Climate Database (version 5.3)”. In: From Mars Express to ExoMars, Feb. 2018, p. 68. ¹Part of the research activities described in this paper were carried out with contribution of the Next Generation EU funds within the National Recovery and Resilience Plan (PNRR), Mission 4 - Education and Research, Component 2 - From Research to Business (M4C2), Investment Line 3.1 - Strengthening and creation of Research Infrastructures, Project IR0000038 – “Earth Moon Mars (EMM)”. EMM is led by INAF in partnership with ASI and CNR.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Retrieval of Liquid Water Path in Stratocumulus Clouds Using Slant-Angle W-Band Airborne Radar-Radiometer Observations: Preliminary Insights for the WIVERN Mission

Authors: Cuong Nguyen, Philip Gabriel, Phuong Dong Le, Alessandro Battaglia, Paloma Borque, Natalia Bliankinshtein, Kenny Bala, Keyvan Ranjbar, Leonid Nichman, Mengistu Wolde, Pavlos Kollias, Anthony Illingworth, Tania Casal
Affiliations: National Research Council Canada, Horizon Science and Technology, University of Waterloo, Department of Environment, Land and Infrastructure Engineering, Politecnico of Torino, Division of Atmospheric Sciences, Stony Brook University, Department of Meteorology, University of Reading, European Space Agency
The Wind Velocity Radar Nephoscope (WIVERN) mission is one of the two candidate missions competing for selection as the Earth Explorer 11 mission within the European Space Agency’s FutureEO programme. WIVERN features a single payload: a conically scanning W-band Doppler radar with polarization diversity. For the first time, it will provide global measurements of in-cloud winds alongside detailed cloud and precipitation microphysical and macrophysical properties at unprecedented sampling rates and horizontal resolutions. Research studies and operational activities at numerical weather prediction (NWP) centres using Aeolus data have shown that wind observations significantly increase the accuracy of medium-range weather forecasts. At the same time, the high-resolution radar reflectivity and brightness temperature data (~1 km horizontal resolution) will improve our understanding of global cloud and precipitation systems. An additional feature of WIVERN is its 100 MHz wideband receiver, which can separate the slowly changing narrowband signal of hydrometeors from the rapidly changing noise, and thus provide changes in 94 GHz brightness temperatures at H and V every km to within 2 or 3 K over the sea. This offers unique observations of the microstructure and temporal evolution of both drizzle and liquid water paths at the kilometre scale in the extensive stratocumulus and cumulus clouds over the oceans. The latest IPCC report states that "the combined effect of all climate feedback processes is to amplify the climate response to CO2 forcing (virtually certain)" and "clouds remain the largest contributor to the overall uncertainty in climate feedbacks (high confidence)". Many climate models predict that a future decrease in stratocumulus and cumulus over the tropical ocean will provide this positive feedback. 
The WIVERN observations should provide invaluable insight into the factors influencing the stability and lifetime of tropical stratocumulus and cumulus clouds, addressing critical uncertainties in climate feedback mechanisms. In 2024, the ESA-funded suborbital WIVERNex-CAN Phase A project began, with the primary goal of advancing WIVERN's cloud and precipitation products. The project focuses on the detailed characterization of stratocumulus clouds over water surfaces, such as lakes and oceans. This objective is achieved through comprehensive analyses of co-located, simultaneous radar and radiometer observations at W-band (94.05 GHz). The National Research Council of Canada (NRC) airborne W-band radar (NAW) was modified to include a radiometer channel operating in an interleaved mode with the radar, aligned at an incidence angle of 42°, as envisioned for the WIVERN antenna. The spatial resolution of these measurements is determined by the antenna sizes: 30 cm for the radar and 8 cm for the radiometer. This represents the first-ever deployment of such a capability on the NRC Convair-580 research aircraft. This paper presents results on retrieving the Liquid Water Path (LWP) of stratocumulus clouds using coincident radiometric signals and radar reflectivity. Data from the WIVERNex-CAN Phase A campaign, conducted over Lake Ontario and Georgian Bay, ON, Canada during September-November 2024, are used to demonstrate the method and evaluate its performance. Existing literature suggests that droplets in these cloud types typically have diameters smaller than 50 microns, indicating that a radiative transfer model (RTM) accounting only for absorption and emission is sufficient to compute brightness temperatures at 94.05 GHz. Analyzing the optical properties of the particle size distribution will help assess the effects of multiple scattering. 
Additionally, the radiometer operates in vertical polarization to minimize the baseline brightness temperature of the water surface, a particularly important consideration at the 42° viewing angle.
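The absorption/emission-only radiative transfer underlying the LWP retrieval can be sketched as a plane-parallel brightness-temperature computation at the 42° slant angle. All layer temperatures, optical depths, and the sea-surface emissivity below are illustrative assumptions, and the reflected downwelling emission is deliberately omitted:

```python
import numpy as np

def brightness_temperature(T_layers, tau_layers, T_surf, emis_surf, theta_deg):
    """Upwelling brightness temperature [K] for a plane-parallel,
    absorption/emission-only atmosphere viewed from above at slant angle
    theta (no scattering; reflected downwelling emission neglected).
    Layers are ordered top to bottom."""
    mu = np.cos(np.radians(theta_deg))
    trans = np.exp(-np.asarray(tau_layers) / mu)       # per-layer transmittance
    # Transmittance from the top of each layer up to the sensor
    above = np.concatenate(([1.0], np.cumprod(trans)[:-1]))
    tb = emis_surf * T_surf * np.prod(trans)           # attenuated surface term
    tb += np.sum(above * (1.0 - trans) * np.asarray(T_layers))  # layer emission
    return float(tb)

T_layers = [260.0, 270.0, 280.0]                 # layer temperatures [K]
tau_clear = np.array([0.02, 0.03, 0.05])         # nadir gas optical depths
tau_cloud = tau_clear + np.array([0.0, 0.0, 0.30])  # + cloud liquid, lowest layer

tb_clear = brightness_temperature(T_layers, tau_clear, 285.0, 0.5, 42.0)
tb_cloud = brightness_temperature(T_layers, tau_cloud, 285.0, 0.5, 42.0)
print(f"TB clear: {tb_clear:.1f} K, TB cloudy: {tb_cloud:.1f} K")
```

The sketch shows the retrieval principle: over a radiometrically cold (low-emissivity) water surface, adding cloud liquid raises the 94 GHz brightness temperature, so TB can be inverted for LWP once the gas contribution is constrained.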
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Expanding Ground-Based Remote Sensing: Continuous Sun and Lunar DOAS Observations at High-Latitude Stations

Authors: Markus Kilian, Christoph Waldauf, Manuel Roca, Manuel Gebetsberger, Martin Tiefengraber, Alexander Cede
Affiliations: LuftBlick, SciGlob
Continuous ground-based nitrogen dioxide (NO2) measurements by the Pandonia Global Network (PGN) serve as essential data sources for satellite validation, air quality assessments, and chemical closure studies. Direct sun measurements are the basis of well-established data products of the PGN, providing datasets during daytime. However, reliance on direct sun measurements creates observational gaps during twilight and nighttime, particularly in high-latitude regions where extended periods of polar day and night amplify these gaps. Lunar or direct moon measurements address this limitation by bridging the twilight and nocturnal gaps left by direct sun measurements, enabling more comprehensive diurnal and seasonal coverage of atmospheric trace gases. High-latitude stations are particularly important as they often serve as background sites with minimal anthropogenic influence, making them essential for studying atmospheric conditions. Accurate and continuous measurements of trace gases at these locations, including NO2, ozone (O3), and water vapor (H2O), are important for understanding baseline atmospheric conditions. These stations provide a unique perspective on atmospheric processes that are not heavily impacted by local emissions, making the data highly valuable for global and regional air quality and climate modeling. Extending observations into polar night using lunar measurements is essential for producing robust datasets that can support satellite validation and improve the temporal coverage of atmospheric monitoring. Datasets complemented with data from direct moon measurements are particularly important in high-latitude regions where the lack of sunlight poses challenges to atmospheric observation methods. This study presents continuous time series of newly developed lunar products, including O3 and H2O, alongside NO2, as measured by PGN instruments. 
By employing advanced correction techniques and enhancing the capability of PGN instruments for direct lunar measurements, we present showcases for continuous time series with a focus on selected high-latitude stations. The findings underscore the unique role of lunar observations in complementing trace gas total column time series, filling observational gaps, and expanding the utility of PGN instruments for comprehensive monitoring.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Observing the NO2 Pollution in Rome With the NO2 Camera From the BAQUNIN Supersite

Authors: Emmanuel Dekemper, Pierre Gramme, Cedric Busschots, Stefano Casadio, Anna Maria Iannarelli, Nicola Ferrante, Annalisa Di Bernardino, Paolo Pettinari, PhD Elisa Castelli, Luca Di Liberto, Francesco Cairo, Andrè Achilli
Affiliations: BIRA-IASB, SERCO, Sapienza University, CNR-ISAC
The monitoring of air quality in urban areas is crucial for assessing the exposure of citizens to harmful anthropogenic pollutants. Among substances of concern, NO2 is a clear tracer of fuel-burning processes occurring in vehicle engines, heating systems, and industrial and power plants. This species can be measured with a wide spectrum of instruments, ranging from in situ electro-chemical sensors to satellite nadir-looking instruments operating in the UV-VIS domain. As such, Sentinel-4 will soon be mapping tropospheric NO2 on an hourly basis over Europe. Still, the spatial resolution of the product will be too coarse for detecting the street-level variability characteristic of the urban NO2 field. Street-level exposure is the key parameter for assessing the impact of environmental policies and addressing questions of air quality justice. A new type of instrument is being developed to bridge this gap. The NO2 camera can probe the NO2 field in a city with a high degree of spatio-temporal resolution. Using an acousto-optical tunable filter (AOTF), it records hyperspectral cubes of the urban landscape in the 430-450 nm spectral domain, with a field of view of 20x20 degrees. The portion of the spectrum captured in each of its 500x500 pixels can be processed by the differential optical absorption spectroscopy (DOAS) method, yielding 2D maps of NO2 slant column densities. In March 2024, the NO2 camera was set up in the center of Rome, on the roof of a Sapienza University building, among other instruments of the BAQUNIN urban supersite. From there, observations of the NO2 field were performed in coincidence with two other reference instruments: a Pandora and a MAX-DOAS. The two main objectives of the campaign were (1) to assess the biases and uncertainties of the NO2 camera product, and (2) to study the characteristics of the NO2 field over Rome. The coincident acquisitions have revealed a high degree of consistency between the different instruments. 
Furthermore, transient local enhancements in the Roman suburbs have been detected, demonstrating the additional value of the imaging capabilities of the NO2 camera. We will present the results obtained during this campaign regarding these two objectives.
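The per-pixel DOAS step described above can be sketched as a linear least-squares fit of a differential absorption cross section plus a low-order polynomial to the measured optical depth. The cross section, column amount, and polynomial below are synthetic stand-ins; real retrievals use literature NO2 cross sections convolved with the instrument line shape:

```python
import numpy as np

rng = np.random.default_rng(1)

# Wavelength grid for the 430-450 nm fitting window
wl = np.linspace(430.0, 450.0, 200)                       # [nm]
# Synthetic differential cross section (oscillatory, like real NO2 structure)
sigma = 1e-19 * np.sin(2 * np.pi * (wl - 430.0) / 4.0)    # [cm^2]
scd_true = 5e16                                           # slant column [cm^-2]

# "Measured" optical depth ln(I0/I): absorber signature + smooth broadband
# part (Rayleigh/Mie, modelled by the polynomial) + instrument noise
smooth = 0.01 * (wl - 440.0) + 0.2
tau = sigma * scd_true + smooth + rng.normal(0.0, 1e-4, wl.size)

# Linear least squares: tau ~ SCD*sigma + polynomial. The cross-section
# column is rescaled so the design matrix is well conditioned.
scale = 1e-19
A = np.column_stack([sigma / scale, np.ones_like(wl),
                     wl - 440.0, (wl - 440.0) ** 2])
coef, *_ = np.linalg.lstsq(A, tau, rcond=None)
scd_ret = coef[0] / scale
print(f"retrieved SCD: {scd_ret:.3e} cm^-2")
```

Because the absorber's high-frequency spectral structure is orthogonal to the smooth polynomial, the fit separates the slant column from broadband extinction; applying this fit independently to each of the camera's 500x500 pixels yields the 2D slant column maps.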
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: WIVERN: An ESA Earth Explorer 11 Candidate Advancing Climate Science Through In-Cloud Observations of Global Winds

Authors: Alessandro Battaglia, Anthony Illingworth, Cathy Hohenegger, Eleni Marinou, Prof Ewan O'Connor, Dr Kamil Mroz, Ishuwa Sikaneta, Jana Mendrok, Jean-Christophe Angevain, Mary Borderies, Marcel Kleinherenbrink, Maximilian Maahn, Mario Montopoli, Maryam Pourshamsi, Mike Rennie, Michel Tossaint, Tania Casal, Valerie Dutto, Prof Pavlos Kollias, Remy Roca
Affiliations: Politecnico di Torino, University of Reading, Max Planck Institute for Meteorology, National Observatory of Athens, Finnish Meteorological Institute, NCEO, University of Leicester, European Space Agency (ESA-ESTEC), Deutscher Wetterdienst (DWD), Météo-France, Leipzig University, Italian National Research Council, European Centre for Medium-Range Weather Forecasts (ECMWF), Stony Brook University, CNRS
The WIVERN (WInd VElocity Radar Nephoscope) mission concept, one of the two candidate missions competing for selection as the Earth Explorer 11 mission within the European Space Agency’s FutureEO programme, promises to revolutionise the study of clouds, with its 800 km swath conically scanning 94 GHz Doppler radar (Illingworth et al., 2018). Two features are ground-breaking: 1. Horizontal wind measurements within weather systems: WIVERN will measure winds within weather systems, such as mid-latitude storms and tropical cyclones (Tridon et al., 2023), without suffering from any significant velocity folding, thanks to the use of closely spaced pulse pairs, one H-polarised and the other V-polarised (Rizik et al., 2023). These in-cloud winds, with an accuracy of 2-3 m/s, will complement the global winds in clear air and thin clouds from the proposed Aeolus Doppler Lidar follow-on mission (EPS-Aeolus). 2. Three-dimensional mapping of mesoscale weather systems: The combination of wide-swath scanning and a 3 m near-circular antenna will allow the vertical structure of mesoscale weather systems to be mapped with a vertical/horizontal resolution of the order of 600 m/1000 m. This will provide, on average, daily revisits to every 20 km by 20 km area poleward of 50° and every 1.6 days at the equator. These unique observations of in-cloud winds and radar reflectivity, for the first time, will provide: a) Full vector winds within clouds over parts of the 800 km swath by combining the forward and backward looks, offering new insights into cloud dynamics. b) Insights into convective organisation and anvils, including their radiative effects, by combining the line of sight winds with radar reflectivity to derive convective mass fluxes and radiative effects. c) Advanced knowledge of the processes driving the formation, organisation and intensification of mesoscale convective systems, tropical cyclones and mid-latitude windstorms. 
d) A global climatology of the vertical distribution of clouds and precipitation at regional scale over monthly/seasonal scales. In this study, the unique capabilities of WIVERN (mentioned above) and their anticipated scientific impacts will be presented. In particular, WIVERN instrument end-to-end simulations (Battaglia et al., 2022) are applied to two different scenes: a) the lifecycle of a tropical cyclone over the Caribbean region, and b) a mesoscale convective system over the Atlantic Ocean. These scenes are generated by the Weather Research and Forecasting (WRF) and ICON cloud-resolving models, respectively. The study demonstrates the retrieval of the horizontal wind structure and vertical distribution of the ice mass as if observed by the WIVERN mission, and highlights the expected accuracy and anticipated scientific impacts. REFERENCES Battaglia, A., Martire, P., Caubet, E., Phalippou, L., Stesina, F., Kollias, P., Illingworth, A., 2022: Observation error analysis for the WInd VElocity Radar Nephoscope W-band Doppler conically scanning spaceborne radar via end-to-end simulations, Atmos. Meas. Tech., DOI: https://doi.org/10.5194/amt-15-3011-2022. Illingworth, A. J., Battaglia, A. et al., 2018: WIVERN: A new satellite concept to provide global in-cloud winds, precipitation and cloud properties. Bull. Amer. Met. Soc., DOI: 10.1175/BAMS-D-16-0047.1, 1669-1687. Rizik, A., Battaglia, A., Tridon, F., Scarsi, F. E., Kötsche, A., Kalesse-Los, H., Maahn, M., and Illingworth, A., 2023: Impact of cross-talk on reflectivity and Doppler measurements for the WIVERN Polarization Diversity Doppler Radar, IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2023.3320287. Tridon, F., Battaglia, A., Rizik, A., Scarsi, F. E., & Illingworth, A., 2023: Filling the gap of wind observations inside tropical cyclones. Earth and Space Science, 10, e2023EA003099. https://doi.org/10.1029/2023EA003099
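The closely spaced H/V pulse-pair Doppler measurement behind feature 1 can be sketched as follows. The signal model, pair spacing, and noise level are illustrative assumptions; WIVERN's actual processing also handles polarization cross-talk and ghost echoes (cf. Rizik et al., 2023):

```python
import numpy as np

c = 3.0e8
f0 = 94e9
lam = c / f0                  # ~3.2 mm wavelength at W band
T = 20e-6                     # H-V pulse-pair spacing [s] (illustrative)
v_true = 25.0                 # line-of-sight velocity towards the radar [m/s]

# Phase advance between the two pulses of a pair due to target motion
dphi = -4.0 * np.pi * v_true * T / lam

rng = np.random.default_rng(2)
n = 1000
s1 = rng.normal(size=n) + 1j * rng.normal(size=n)         # first pulse (H)
s2 = (s1 * np.exp(1j * dphi)                              # second pulse (V)
      + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n)))  # + noise

# Pulse-pair estimator: velocity from the phase of the lag-T autocovariance.
# The short spacing gives a Nyquist velocity lam/(4T) ~ 40 m/s, so even
# strong in-cloud winds do not fold.
R = np.mean(np.conj(s1) * s2)
v_hat = -lam / (4.0 * np.pi * T) * np.angle(R)
print(f"estimated LOS velocity: {v_hat:.1f} m/s")
```

The sketch shows why a closely spaced pair avoids velocity folding: with the equivalent pulse repetition interval reduced to T, the unambiguous velocity interval widens far beyond typical in-cloud wind speeds.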
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: A.02.01 - POSTER - Land Surface Temperature and Emissivity Data for Research and Applications

This session invites presentations on the use of land surface temperature and emissivity data for research and applications. In particular, we welcome studies using high spatial resolution multispectral thermal infrared data, such as those from the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) and airborne instruments, as well as from the evolving New Space sector. The session will also provide an opportunity to discuss the preparation of upcoming missions such as the ESA LSTM mission, the CNES-ISRO TRISHNA mission and the ASI-NASA SBG mission.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Leveraging the SBG-TIR Mission for Thermal Monitoring of Volcanoes

Authors: Gaetana Ganci, Giuseppe Bilotta, Annalisa Cappello, Maddalena Dozzo, Roberto Guardo, Marco Spina, Francesco Spina, Francesco Zuccarello
Affiliations: Istituto Nazionale di Geofisica e Vulcanologia
Over recent decades, numerous space missions have established a systematic and comprehensive framework for advancing Earth sciences, including remote sensing of active volcanoes. Recent and upcoming missions, such as ESA’s Sentinel satellites, TRISHNA (CNES/ISRO), SBG-Thermal (NASA/ASI), and LSTM (ESA/EC), along with other thermal infrared sensor studies, promise further advancements. The forthcoming Surface Biology and Geology (SBG) thermal mission, developed by NASA-JPL and ASI, will feature a MWIR-LWIR sensor and a VNIR camera, enhancing volcano monitoring capabilities from space. Unlike prior missions, SBG-TIR will include six MIR-TIR bands at 60 m spatial resolution, with high-temperature MIR bands (800–1200 K) and TIR bands (∼500 K), offering improved thermal detection, reduced data saturation and better repeat coverage (3–5 days). This study investigates SBG-TIR’s potential for detecting subtle thermal anomalies and estimating effusion rates during effusive eruptions. About 16 million synthetic pixels potentially associated with volcanic activity were simulated, taking into account the theoretical spectral response function of the SBG-TIR sensor with varying thermal structures. Normalized Thermal Indices (NTIs) considering all the MIR and TIR band radiances were computed for each synthetic pixel. NTIs help distinguish thermal anomalies by weighting radiances in the MIR, emphasizing lava flows over colder features such as clouds. Simulations revealed that the MIR band centered at 4.8 µm is critical for detecting subtle thermal anomalies, and NTIs combining this band with the TIR band centered at 9.07 µm are effective discriminants. Thresholds based on multiple NTIs provide insights into pixel thermal structures. To test the capability to detect small or cooling lava flows, and to derive the related effusion rates, we also generated synthetic SBG-TIR images corresponding to various lava flow configurations simulated with the GPUFLOW model.
These simulations took into account variations in lava flow extent and surface temperatures to effectively represent different volcanic scenarios. This work highlights SBG-TIR’s capability to advance volcanic hazard monitoring by identifying and characterizing high-temperature volcanic features and supporting long-term monitoring.
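The NTI weighting described above is commonly written as a normalized difference of MIR and TIR spectral radiances (in the tradition of MODVOLC-style hotspot indices); a minimal sketch, with the band pairing taken from the abstract but all radiance values hypothetical:

```python
def nti(mir_radiance, tir_radiance):
    """Normalized Thermal Index: normalized difference of MIR and TIR
    spectral radiances. Hot, lava-affected pixels raise the MIR
    radiance sharply, pushing NTI upward relative to background."""
    return (mir_radiance - tir_radiance) / (mir_radiance + tir_radiance)

# Hypothetical radiances (W m-2 sr-1 um-1) for, e.g., a 4.8 um MIR band
# and a 9.07 um TIR band:
background = nti(0.5, 8.0)    # ambient surface: MIR << TIR radiance
lava_pixel = nti(12.0, 10.0)  # thermally anomalous pixel
is_anomaly = lava_pixel > -0.6  # threshold value purely illustrative
```

A detection threshold on such an index separates thermally anomalous pixels from ambient surfaces and cold clouds, which is the role the abstract assigns to the multi-band NTI thresholds.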
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Scalable Climate Monitoring: Integrating Multi-Source Temperature Data for Urban Adaptation in Bavaria, Germany

Authors: John Friesen, Christian Schäfer, Tobias Leichtle, Hannes Taubenböck
Affiliations: University Of Wuerzburg, German Aerospace Center
Understanding and mitigating the impacts of climate change in urban areas requires accurate, scalable, and high-resolution temperature data to monitor environmental dynamics and support decision-making. This study focuses on the 50 largest cities in Bavaria (from 1.5 million inhabitants in Munich down to 25,000 in Waldkraiburg), analyzing the interrelationship between four temperature datasets: satellite-derived surface temperatures from Landsat 8, MODIS (Moderate Resolution Imaging Spectroradiometer), downscaled climate model outputs from CHELSA, and in situ measurements from the nearest weather stations. The overarching aim is to assess how well these datasets align and explore the extent to which their use is scalable for climate observation and the development of climate adaptation and mitigation strategies, particularly for smaller cities where monitoring resources are often limited. The methodology integrates multi-scale remote sensing data and ground observations. Landsat 8 provides high-resolution (30 m) Land Surface Temperature (LST) data derived from its Thermal Infrared Sensor (TIRS), operational since 2013. MODIS, with a coarser spatial resolution (1 km) but greater temporal coverage (from 2000 onwards), complements Landsat by offering near-daily observations for trend analysis and regional assessments. CHELSA’s downscaled global climate model outputs (spanning 1979–2016 at ~1 km resolution) provide historical climate context and large-scale patterns. In situ measurements from weather stations, although limited in spatial coverage, are critical for validating the satellite and climate model datasets. This combination allows us to evaluate the strengths, weaknesses, and scalability of each dataset for monitoring urban temperature dynamics. To ensure comparability, the datasets are harmonized spatially and temporally. CHELSA data and weather station observations are temporally aligned with the Landsat and MODIS LST data as far as possible.
Statistical methods, including correlation analysis, Root Mean Square Error (RMSE), and Mean Absolute Error (MAE), are employed to quantify the agreement between datasets. Geospatial analyses highlight spatial variability in temperature patterns, focusing on urban heat islands (UHIs), which are critical for identifying areas of thermal stress. This study also investigates the scalability of these datasets for climate observation. While Landsat provides detailed spatial resolution suitable for identifying localized heat islands, its temporal coverage is limited, requiring supplementation with MODIS for broader temporal trends. MODIS’s coarser spatial resolution may limit its applicability for smaller-scale urban analyses but is expected to perform well for regional assessments. CHELSA, as a climate model product, offers a long temporal coverage, providing valuable insights into historical trends and baseline conditions, but its downscaled nature may result in oversimplification of complex urban temperature dynamics. Weather station data serve as the benchmark, providing accurate point-based measurements but lacking spatial detail, especially in cities. The expected outcomes include a robust understanding of how these datasets align in capturing urban temperature patterns and the conditions under which they can be reliably scaled for smaller cities. We anticipate that Landsat data will excel in detecting highly localized temperature differences, such as those caused by green spaces, water bodies, or impervious surfaces. MODIS, with its high temporal resolution, is expected to highlight seasonal and interannual trends but may underestimate fine-scale variability in urban environments. CHELSA data will likely capture broader climatic trends and regional gradients but may require careful validation in urban areas due to its reliance on downscaling techniques. 
Weather station data, while limited spatially, will provide essential validation to ensure the reliability of the remote sensing and modeled data. A key focus of this research is determining whether these datasets can be effectively scaled for use in smaller cities, where direct observations and high-resolution monitoring are often unavailable. By comparing results for the example of the 50 largest cities in Bavaria, we aim to identify how well the methods and findings translate to less urbanized areas. This scalability analysis is essential for ensuring that the tools developed can support climate observation and policy-making across a wide range of urban contexts. This study contributes to the broader goal of developing scalable methodologies for climate observation and planning. By integrating datasets with varying spatial and temporal resolutions, we demonstrate how remote sensing, climate models, and ground observations can be combined to create robust tools for urban climate monitoring. The findings will inform not only mitigation strategies for large cities but also adaptable solutions for smaller municipalities, which are often overlooked in climate research despite facing similar vulnerabilities to heat stress and extreme weather. Aligned with efforts to address climate challenges in Bavaria, this research aims to support policymakers, urban planners, and public health officials by providing actionable insights and scalable methods for climate-resilient urban development. It underscores the importance of combining multi-source data to enhance the accuracy and applicability of climate analyses, ensuring that adaptation measures are both scientifically grounded and practically feasible. Ultimately, this study aims to bridge the gap between high-resolution satellite products, long-term climate models, and real-world urban applications, offering a replicable framework for addressing the impacts of climate change in urban areas of all sizes.
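The agreement statistics named in the methodology (Pearson correlation, RMSE, MAE) can be sketched in a few lines; all temperature values below are hypothetical placeholders, not study data:

```python
import math

def agreement_stats(x, y):
    """Pearson r, RMSE and MAE between two matched temperature series,
    e.g. satellite LST vs. co-located weather station observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    r = cov / (sx * sy)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / n)
    mae = sum(abs(a - b) for a, b in zip(x, y)) / n
    return r, rmse, mae

# Hypothetical matched summer LSTs (deg C): satellite vs. station
sat = [28.1, 31.4, 25.9, 33.0]
stn = [27.5, 30.2, 26.4, 31.8]
r, rmse, mae = agreement_stats(sat, stn)
```

High correlation with a non-zero RMSE is the typical pattern when a satellite product tracks station temperatures well but carries a bias, which is exactly the kind of dataset disagreement the study aims to quantify.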
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Downscaling of Satellite Passive Microwave Land Surface Temperature for All-Weather Global Enhanced-Resolution Long Time Series

Authors: Iris de Gélis, Carlos Jiménez, Samuel Favrichon, Catherine Prigent
Affiliations: Estellus, Observatoire de Paris, Gamma Remote Sensing AG, CNRS, LERMA, Observatoire de Paris
Land Surface Temperature (LST) plays a crucial role in the Earth's climate system. It reflects processes such as the transfer of energy and water between the land surface and the atmosphere. It also drives longwave surface-emitted radiation. A precise understanding of LST dynamics at both global and regional scales is essential for assessing land surface–atmosphere exchange processes in models. When integrated with other physical characteristics like vegetation and soil moisture, it offers a valuable measure of surface conditions. The ESA LST Climate Change Initiative (LST_cci, https://climate.esa.int/en/projects/land-surface-temperature/) project aims to deliver an accurate representation of land surface temperatures worldwide over the last 25 years. It seeks to fulfill the requirements of the Global Climate Observing System (GCOS) for climate applications by developing methods to combine archived data from various satellites into a comprehensive long-term satellite record for climate studies. The main LST_cci data records are derived by inverting infrared observations from a variety of satellite sensors. This is the established technique to derive reasonably accurate and spatially resolved LST estimates. However, clouds obscure the radiation emanating from the surface, so infrared data records can only provide a clear-sky view of the surface. To have estimates for “all-weather” conditions, LST_cci is also providing a 25-year data record derived from microwave observations, which can see through most clouds, therefore observing the surface independently of cloud coverage. A first version of the global long-term record has been provided at a fixed 6 am/pm local time thanks to the microwave instruments from the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager / Sounder (SSMIS).
The sampling time of the product has now been enhanced in a second version by combining the observations from SSM/I and SSMIS with measurements from two additional instruments with close observing frequencies: the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and its successor AMSR2. The AMSR-E and AMSR2 overpass time of 1:30 am/pm, together with the SSM/I and SSMIS 6 am/pm, results in an “all-weather” LST product with estimates every ~6 hours, allowing the LST diurnal cycle, which is of great importance for some climate applications, to be better resolved. However, for many applications, the initial spatial resolution of the microwave-derived LST (25 km) is a limiting factor. Although upcoming microwave instruments, such as the Copernicus Imaging Microwave Radiometer (CIMR), will provide microwave observations at higher spatial resolutions (~5 km at 36 GHz), improving the low spatial resolution of past and current instruments is needed. Merging the infrared and microwave LST datasets in a common product is also attractive in order to fill the cloud-related gaps of the infrared product, and for this as well, improving the microwave spatial resolution is highly desirable. As a consequence, here we propose to work toward improving the spatial resolution of the microwave-derived LST climate record using downscaling techniques. In a previous study, we prototyped a method to downscale microwave LST over the African continent using the Spinning Enhanced Visible and Infrared Imager (SEVIRI) geostationary observations at 5 km as a reference (Favrichon et al. 2021). In particular, the idea is to derive a high-resolution temperature pattern correction to apply to the low-resolution microwave temperature to derive high-resolution products. This correction is obtained using a statistical approach (a neural network) to link high-resolution ancillary information and LST estimates.
Firstly, the statistical function is trained to reproduce the infrared LST in a clear-sky context using data extracted from infrared products (the minimal value and the amplitude of the median clear-sky temperature diurnal cycle), along with other ancillary data (vegetation cover information, and the solar elevation angle). In the application phase, the statistical function is then used to produce a high-resolution LST pattern that is added to the coarse-resolution LST to provide spatially finer estimates. For cloudy scenes, the same statistical function is used, but with a prior reduction of the amplitude of the diurnal cycle input to account for the cloudy conditions. This correction is based on an analysis of the average impact of clouds on the LST, using cloud information from the ECMWF ERA5 reanalysis. In this study, we have extended and adapted the methodology to the global scale and to a long time record based on LST_cci products. In particular, we aim to downscale the SSMIS and AMSR2 microwave products, with resolutions of approximately 25 km and 12.5 km respectively, to the 5 km fine resolution of the LST_cci infrared products. Preliminary results are encouraging. Even though the errors of the downscaled microwave LST are larger than those of the infrared product, due to the uncertainty of the original microwave retrievals and the additional uncertainties associated with the spatial downscaling, it offers 5 km LST estimates for all-weather conditions, complementing the clear-sky-only infrared LST product. Reference: Favrichon, S., Prigent, C., & Jiménez, C. (2021). A Method to Downscale Satellite Microwave Land-Surface Temperature. Remote Sensing, 13(7), 1325. https://doi.org/10.3390/rs13071325
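The additive pattern-correction idea (a high-resolution anomaly added to the coarse microwave LST) can be sketched in miniature. This is a 1-D toy with invented numbers, and the anomaly is given directly rather than predicted by the neural network of the actual method; the per-cell centring preserves the coarse-scale values:

```python
def downscale_lst(lst_coarse, pattern, factor):
    """Additive pattern-correction downscaling sketch: replicate each
    coarse LST value onto the fine grid and add a high-resolution
    anomaly (in the real method, predicted from ancillary data),
    centred per coarse cell so coarse-cell means are preserved."""
    fine = []
    for j, coarse_val in enumerate(lst_coarse):
        block = pattern[j * factor:(j + 1) * factor]
        mean = sum(block) / len(block)
        fine.extend(coarse_val + (p - mean) for p in block)
    return fine

coarse = [290.0, 295.0]                              # hypothetical 25 km LST (K)
pattern = [1.0, 3.0, 0.0, 2.0, 2.0, 4.0, 1.0, 5.0]   # hypothetical fine pattern
fine = downscale_lst(coarse, pattern, 4)
```

Removing the per-cell mean of the pattern guarantees the downscaled field aggregates back to the original microwave retrieval, so the correction only redistributes temperature within each coarse pixel.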
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: SPACE_EPC: Multi-Source Earth Observation Data for Measuring the Thermal Efficiency of Buildings

Authors: Qunshan Zhao, Mingyu Zhu, Maoran Sun, Qunshan Zhao, Natalia Kuniewicz, Luis Serra
Affiliations: School of Social and Political Sciences, University of Glasgow, Sustainable Design Group, University of Cambridge, SatVu
In the UK, heating accounts for around 60-70% of building energy consumption, particularly in residential properties (Department for Energy Security & Net Zero, UK). Improving the energy efficiency of heating systems and insulation in buildings is a critical focus for decarbonization efforts in the UK. The openly available Energy Performance Certificate (EPC) data captures detailed building characteristics of properties and rates them from A (most efficient) to G (least efficient). However, EPC data primarily relies on manual investigations, updated when houses are sold or rented out, making the process time-consuming and susceptible to data bias. As a result, EPC data currently covers only 50-60% of UK residential buildings and is commonly outdated across the UK housing stock. Thermal infrared (TIR) imagery can retrieve land surface temperature (LST) at large scale and is applied in numerous studies. Medium-resolution satellite TIR imagery such as Landsat 8 and ECOSTRESS has been used in many urban applications, including urban heat islands and urban microclimate analysis. However, finer-grained analysis, such as building energy efficiency studies, requires higher-resolution TIR imagery. The limited availability of satellite-borne high-resolution TIR imagery has constrained the application of remote sensing in identifying building-level energy efficiency with high accuracy. With the development of SatVu's HotSat satellite series, very high-resolution TIR imagery is becoming available globally, enabling a range of new urban applications.
The objectives of this study are to: 1) propose a methodology combining very high-resolution TIR and other remote sensing imagery, building characteristics, and socio-economic data in an end-to-end deep learning model to identify potential building retrofit targets at large scale; 2) understand the potential of different spatial resolutions of TIR imagery for classifying building energy efficiency; and 3) underscore the complementary nature of heterogeneous data sources for measuring building energy efficiency. The study areas comprise three UK cities: Glasgow in Scotland, Cambridge in England, and Cardiff in Wales. The airborne TIR imagery (wavelength of 3.4 to 5.0 micrometers), with a spatial resolution of 3.5 m, is acquired in wintertime to assess heat loss from buildings. During the flight campaign, the temperatures of rooftops, ground, water bodies, and air are collected by temperature data loggers and a thermal infrared camera to calibrate and validate the TIR imagery. LST imagery from the Landsat 8 (30 m spatial resolution) and ECOSTRESS (70 m spatial resolution) acquisitions nearest to the flight campaign period is downloaded. Meanwhile, aerial imagery and street view imagery showing the features of windows, walls, and roofs that affect building energy efficiency performance are collected from commercial (e.g. Google, Microsoft) or open-source (e.g. KartaView, Mapillary) providers. Additionally, information such as building footprints, ages, construction types, and functions, along with socio-economic data such as deprivation measures across income, employment, education, health, access to services, crime, and housing, are collected. All the above data are spatially joined at the individual building level, and the EPC rating of each building is used as the ground-truth energy efficiency label.
An end-to-end multi-channel deep learning model is designed and trained using high-resolution airborne TIR imagery, aerial imagery, street view images, and building characteristic data to predict the EPC rating. A comprehensive ablation study will be conducted to understand the importance of each data source in identifying building EPC ratings. Moreover, to compare the performance of different TIR data, the airborne high-resolution TIR imagery is replaced by either Landsat 8 or ECOSTRESS data in both the deep learning model and the ablation study. The output is validated through a sensor-enhanced housing survey. Sensors in selected households in the study area collect indoor temperature and humidity data to assess indoor thermal comfort, while smart meters collect energy consumption information. Together, they provide insights into the indoor environment and energy usage, and validate the prediction model's performance. The study results are expected to provide an accurate map of building energy efficiency performance that allows potential buildings to be targeted for retrofitting. Importantly, the proposed methodology demonstrates a robust, scalable, and adaptive deep learning model using remote sensing data to measure the thermal efficiency of individual buildings at citywide scale.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Developments of the new version of the LSA-SAF LST suite of products based on SEVIRI and AVHRR

Authors: Sofia Ermida, Emanuel Dutra, Sandra Gomes, Isabel Trigo
Affiliations: Instituto Portugues do Mar e da Atmosfera (IPMA)
The EUMETSAT Satellite Application Facility on Land Surface Analysis (LSA SAF) provides an array of satellite datasets and products that allow the characterization of the land surface, such as radiation, vegetation, evapotranspiration and wildfires. The LSA-SAF is responsible for the operational production of Land Surface Temperature from EUMETSAT satellites, namely from observations of the Spinning Enhanced Visible Infra-Red Imager (SEVIRI) on board Meteosat Second Generation (MSG), and of the Advanced Very High Resolution Radiometer (AVHRR) on board the Metop satellites. The SEVIRI instrument provides observations with a 15-minute frequency and a sub-satellite resolution of 3 km, covering Europe, Africa and part of South America. The AVHRR provides global twice-daily observations with a resolution of 1 km. The two LST products, MLST (SEVIRI) and EDLST (AVHRR), are based on the so-called Generalized Split-Window (GSW) algorithm, which requires measurements from two infrared channels in the 10-12 µm region. New developments for the upcoming version of the products include an update to the calibration methodology of the GSW algorithm, using state-of-the-art reanalysis data and high-accuracy radiative transfer modelling, and relying on a robust sampling methodology to ensure good performance of the GSW for all atmospheric and surface conditions. The GSW requires prescribed values of surface emissivity. The Land Surface Emissivity (LSE) generated within the LSA-SAF is based on the Vegetation Cover Method (VCM), which provides emissivity estimates based on land cover and fraction of vegetation cover. In the new version, the VCM is combined with offline retrievals of LSE performed with the Temperature-Emissivity Separation (TES) algorithm, which is able to provide a better representation of the spatial variability of LSE over bare areas.
Furthermore, the TES was modified to allow the LSE to adjust for directional effects, providing view-angle-dependent estimates that contribute to harmonizing the LST estimates. The MSG/SEVIRI products further include instantaneous estimates of directional effects on the LST. New radiance calibration coefficients were also implemented to reduce biases observed between the different sensors composing each series (MSG/SEVIRI and Metop/AVHRR), resulting in increased stability of the two series. The MLST clear-sky estimates are also combined with cloudy-sky LST estimates provided by the recently updated MSG Evapotranspiration product (METv3), resulting in the MSG LST All-Sky product (MLST-AS), which provides seamless LST estimates every 30 minutes. The new product versions, MLSTv2, EDLSTv2 and MLST-ASv2, will be available through the LSA-SAF website (https://lsa-saf.eumetsat.int/en/) and data server (https://datalsasaf.lsasvcs.ipma.pt/). These new developments will also be implemented for the upcoming Meteosat Third Generation and Metop Second Generation, ensuring consistency with the previous generation.
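The generalized split-window form underlying MLST and EDLST combines the mean and difference of the two window-channel brightness temperatures with emissivity-dependent coefficients (in the style of Wan & Dozier). A minimal sketch; the coefficient values below are purely illustrative placeholders, since operational coefficients are calibrated per sensor, view angle and atmospheric class:

```python
def gsw_lst(t11, t12, eps_mean, eps_diff, coeffs):
    """Generalized split-window LST sketch: two brightness temperatures
    in the 10-12 um window plus prescribed channel-mean emissivity
    (eps_mean) and channel emissivity difference (eps_diff)."""
    a0, a1, a2, a3, b1, b2, b3 = coeffs
    e, d = eps_mean, eps_diff
    avg = (t11 + t12) / 2.0   # mean brightness temperature
    dif = (t11 - t12) / 2.0   # split-window difference term
    return (a0
            + (a1 + a2 * (1 - e) / e + a3 * d / e**2) * avg
            + (b1 + b2 * (1 - e) / e + b3 * d / e**2) * dif)

# Illustrative inputs only (K); real coefficients come from regression
# against radiative-transfer simulations.
lst = gsw_lst(t11=300.0, t12=298.5, eps_mean=0.98, eps_diff=0.005,
              coeffs=(1.0, 1.0, 0.1, -0.3, 4.0, 0.5, -1.0))
```

The difference term carries the atmospheric water-vapour correction, which is why the coefficient calibration against reanalysis-driven radiative transfer, described above, is central to the product update.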
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Hyperspectral Soil Property Mapping Using Thermal Infrared (LWIR) Imagery

Authors: Helge Daempfling, Dr. Robert Milewski, Prof. Dr. Sabine Chabrillat
Affiliations: GFZ Potsdam, Leibniz University Hannover
Soils are among the largest terrestrial carbon reservoirs and play a crucial role in maintaining ecosystem health. Our study explores the use of advanced thermal hyperspectral sensors to assess topsoil properties. Key soil attributes, such as texture and organic and inorganic carbon content, provide essential information for farmers to identify soil susceptible to erosion and land degradation early, allowing for timely local interventions and fertility assessments. Hyperspectral remote sensing has proven to be an effective method for quantitatively predicting topsoil properties. However, traditional remote sensing in the visible-near infrared (VNIR) and shortwave infrared (SWIR) wavelength regions (0.4-2.5 µm) can be limited in estimating coarse-textured soils due to the lack of distinct spectral features in these regions (e.g., sand content, quartz, and feldspar mineralogy). The longwave infrared region (LWIR, 8-12 µm) offers potential improvements in determining these properties, thanks to the presence of fundamental vibration modes of silicate and carbonate minerals, as well as carbon-hydrogen bonds, in this spectral range. Recent advancements in high-spectral-resolution airborne sensors operating in the LWIR range have sparked interest in applying this technology to environmental research and soil monitoring. The primary goal of this work is to utilize our extensive hyperspectral LWIR soil reference library to validate and analyze airborne hyperspectral LWIR images for estimating soil properties, focusing on soil organic matter, texture, and mineralogical composition. Our main study site is located in Northern Greece (Western Macedonia), in the intensively farmed agricultural areas around the Amyntaio lignite mine, around 100 km west of Thessaloniki. More than 250 soil samples were collected in several field campaigns from 2018 to 2023, covering the highly variable soils of the region.
In 2019, an airborne survey acquired a hyperspectral dataset covering the study site with HySpex VNIR and SWIR sensors and a Hyper-Cam (LWIR). This predominantly agricultural region has diverse crops such as alfalfa, winter wheat, corn, and sunflower. The study area's soils exhibit highly variable topsoil composition, ranging from silicate to carbonate-rich mineralogy, loamy to clay texture, and organic carbon-rich fields near a lignite mine in the southeast. Topsoil properties are estimated using our hyperspectral LWIR soil library, which comprises over 250 soil samples, and the Hyper-Cam LWIR airborne dataset, employing various statistical and machine learning methods. Initial results indicate a significant increase in model accuracy, particularly for soil organic matter and carbonate content, when incorporating LWIR spectral information. Another objective of this study is to simulate and validate soil products using recent satellite sensors (e.g., EnMAP, PRISMA, ECOSTRESS) and upcoming next-generation hyperspectral optical and thermal multispectral satellite missions (ESA CHIME and LSTM, NASA/JPL SBG) to evaluate their potential for large-scale quantitative soil property mapping.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Calibration of UAV-Based Uncooled Thermal Cameras for Crop Water Stress Detection: Lessons Learned from Mission Planning to Post-Processing Procedures

Authors: Quanxing Wan, Dr. Magdalena Smigaj, Dr. Benjamin Brede, Prof.dr.ir Lammert Kooistra
Affiliations: Laboratory of Geo-Information Science and Remote Sensing, Wageningen University & Research, Helmholtz Center Potsdam GFZ German Research Centre for Geosciences
Accurately detecting and monitoring the spatial and temporal variability of crop water stress is both challenging and essential for guiding precision irrigation strategies and preventing crop yield loss. High-quality UAV-based multi-sensor data are now widely used to detect crop water stress effectively. Among the available sensor combinations, thermal imaging combined with visible and near-infrared multispectral (VISNIR-MS) sensors is particularly popular. However, the effectiveness of thermal sensors is often compromised by their intrinsic characteristics and susceptibility to ambient environmental conditions, introducing unavoidable uncertainties when using these sensors in combination. Drawing on previous laboratory and field-based experience with UAV-based uncooled thermal sensors, this study offers insights into potential improvements throughout the implementation of thermal-focused practices in agricultural fields. These improvements address the entire process, from pre- to post-processing, including optimizing mission planning, preparing appropriate calibration targets, and standardizing user-based radiometric calibration procedures. A key aspect of this study was the application of a comprehensive methodology for the radiometric calibration of UAV-based thermal orthophotos, which were derived from thermal orthomosaics generated from individual images collected in the field. To achieve accurate calibration, UAV-mounted cameras (both thermal and VISNIR-MS) conducted multiple flights over self-prepared reference materials, allowing temperature measurements to be compared with ground-based thermocouple observations. The results of these multi-temporal comparisons revealed relatively strong correlations between the thermal orthophotos and the ground-based data.
Additionally, the visual quality of the reconstructed thermal orthomosaics was significantly improved, showing better consistency between flight lines and individual images compared to the uncorrected data. However, mapping the spatial variability of water stress across different crop growth stages using thermal and multispectral drone data poses significant challenges. Questions regarding the reliability of temperature data persist: Are the land surface temperature variations and water stress levels derived from UAV-based thermal sensors reliable enough to inform irrigation strategies? It is also vital to assess how calibration errors might differ across growth stages and to derive lessons from these variations. In practical UAV thermal campaigns, careful attention is required at every step, from data collection to data processing. The primary aim of this study is to optimize calibration methodologies for UAV-based thermal sensors, thereby improving their utility in monitoring and managing agricultural and environmental resources. This research integrates theoretical insights with empirical data to develop refined calibration protocols, enhancing both accuracy and reliability. The significance of this research lies in its potential to fill the current knowledge gap in sensor calibration, offering robust solutions that could benefit precision agriculture practices. Recognizing the limitations inherent in current sensor technologies, this study contributes to a deeper understanding of thermal remote sensing applications. It systematically examines thermal sensor calibration in field practices and provides a clear and detailed overview of the challenges and new developments influencing the use of UAV thermal imagery in precision agriculture. The growing use of uncooled thermal sensors across various sectors—including agriculture, forestry, and infrastructure—has highlighted the importance of sensor calibration. 
Research from multiple fields, not limited to optical engineering, has sought to develop rigorous laboratory-based calibration workflows to enhance measurement accuracy. Despite these advances, the demand for absolute temperature precision varies by application. In many practical scenarios, relative temperature measurements are often considered sufficient. Furthermore, implementing rigorous calibration protocols from the optical engineering field can be impractical, time-consuming, and resource-intensive. This discrepancy highlights a notable research gap: while optical engineering continues to refine calibration techniques, there is a pressing need for standardized, accessible calibration procedures that can be easily adopted in real-world applications.
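One common, simple form of the user-based radiometric calibration discussed here is a linear gain/offset fit of camera-reported temperatures against ground reference (e.g. thermocouple) readings over calibration targets. A minimal sketch with hypothetical numbers; the authors' actual calibration workflow is more comprehensive:

```python
def fit_linear_calibration(sensor_temps, reference_temps):
    """Least-squares gain/offset fit mapping uncooled-camera
    temperatures onto ground reference readings."""
    n = len(sensor_temps)
    mx = sum(sensor_temps) / n
    my = sum(reference_temps) / n
    sxx = sum((x - mx) ** 2 for x in sensor_temps)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(sensor_temps, reference_temps))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Hypothetical readings over calibration targets (deg C)
cam = [18.2, 24.5, 31.0, 38.4]
ref = [17.0, 23.0, 29.8, 36.9]
gain, offset = fit_linear_calibration(cam, ref)
corrected = [gain * t + offset for t in cam]
```

Applying the fitted gain and offset to every pixel of an orthomosaic is the cheapest correction of the drift and bias behaviour typical of uncooled microbolometer cameras; it only removes errors that are linear in the scene temperature.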
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Contribution of thermal infrared images on the understanding of subsurface hydrology and subsurface-atmosphere interaction

Authors: Teodolina Lopez, Raphaël Antoine, David Baratoux, Thomas Junique, Michel Rabinowicz, Cyrille Fauchard
Affiliations: Cerema, Direction Territoriale Occitanie, Satellite Team, Cerema, Direction Territoriale Normandie-Centre, Research team ENDSUM, UMR-GET (UT3, CNES, CNRS, IRD), Université Félix Houphouët-Boigny, UFR Sciences de la Terre et Ressources Minières, Université de Reims Champagne-Ardenne
The characterization and understanding of the local and regional water cycle, and more particularly the exchanges between the subsurface and the atmosphere, is essential in the context of climate change. The high temporal resolution of space-based thermal infrared (TIR) images from METEOSAT and MODIS, along with the development of field TIR cameras, has permitted the increasing use of thermal remote sensing in Earth Sciences. TIR images are influenced by many factors such as the atmosphere, solar radiation, topography and the physico-chemical properties of the surface. Considering these limitations, we present several examples showing the assets of TIR images for understanding subsurface hydrology and exchanges with the atmosphere at multiple spatial and temporal scales. The Land Surface Temperature brings added value when combined with various remote sensing data, geophysical observations and thermal/geometrical numerical modelling. Our presentation highlights the role of subsurface fluid flows, which are controlled by permeability changes, on the surface temperature dynamics. These dynamics, which range from metre to few-hundred-kilometre scales, have been observed: - in coastal retreat research, using drone TIR images; - in volcanology, within the inactive Formica Leo scoria cone and the Piton de la Fournaise volcano (La Réunion Island), using field and airborne TIR images; - in water resources, within the sedimentary Lake Chad Basin, where they are associated with surface temperature anomalies observed from space-based images. Our studies show that 1) thermal anomalies of ~5-10°C associated with subsurface flows may be distinguished from thermal inertia/albedo/emissivity influences by taking into account the dynamics of the surface temperatures, 2) such weak thermal anomalies are observable at small scales as well as at very large scales, and 3) the combination of various observations and numerical modelling is very efficient for understanding subsurface hydrology processes. 
Finally, high spatial- and temporal-resolution TIR data, from drones to satellites, bring for the first time new insights into the characterization of soil-atmosphere interactions.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Estimation of Vegetation Fractional Cover and Leaf Area Index Using the VIREO VNIR Camera in SBG-TIR Configuration

Authors: Luca Tuzzi, Roberto Colombo
Affiliations: University Of Milano-Bicocca
The Surface Biology and Geology Thermal Infrared (SBG-TIR) mission will operate in a sun-synchronous polar orbit, with instruments designed to acquire spectral data in the optical domain (0.4 µm to 2.5 µm) at a spatial resolution of 30 m, and in the thermal domain (3 to 5 µm and 8 to 12 µm) at 60 m spatial resolution. The VNIR camera consists of two spectral channels centered at 655 nm and 835 nm, respectively, with a full width at half maximum (FWHM) of 80 nm. The spatial resolution ranges between 30 and 60 m, with an overall swath width of 958 km and a field of view of ±34°. A primary goal of the mission is to use the VNIR camera onboard the SBG-TIR platform to estimate key vegetation parameters, namely Fraction of Vegetation Cover (Fc) and Leaf Area Index (LAI). The Fc corresponds to the fraction of ground area covered by green vegetation. It quantifies the spatial extent of vegetation as projected at nadir. Using fractional cover as a normalization tool is advantageous over alternatives (APAR, for example), as it has greater independence from illumination conditions and is more stable in time. Fc is widely used to modulate surface albedo and surface emissivity. For mixed pixels, a weighted function of the mixture components (e.g. vegetation and soil) can be defined. The correlation between these structural vegetation parameters and thermal applications is well-demonstrated (e.g. Lin et al., 2020, Yin et al., 2020) and their quantification helps in creating synergies between VNIR and TIR cameras during the space mission. In this context, the aim of this study is to develop a robust method for retrieving Fc. To achieve this, we began by working with simulated data to establish a baseline for our algorithm. This initial phase involved generating a synthetic dataset that mimics the spectral characteristics of the VNIR camera on the SBG-TIR mission. 
By doing so, we could control various parameters and systematically test the performance of our retrieval algorithms under different conditions. Our choice for generating synthetic data is SCOPE. The Soil, Canopy Observation, Photochemistry and Energy Fluxes (SCOPE) model is a radiative transfer model that simulates soil, leaf and canopy radiance and reflectance, not only in the optical domain (0.4–2.4 µm), but also in the thermal domain (2.5–50 µm), due to its energy balance closure and leaf temperature calculations (Van Der Tol et al., 2009; Yang et al., 2021). Because of this capability to simulate these quantities in both spectral domains, SCOPE can be used to study a variety of sensors that work in both optical and thermal spectra, including the upcoming mission. Our approach leverages the spectral information provided by the VNIR camera’s two channels, centred at 655 nm and 835 nm, to retrieve Fc. We also take into account the variability of illumination and observation conditions as well as temperature information from the TIR region. We calibrated and validated our method using these simulated datasets, ensuring that it accurately reflects the real-world variability in vegetation fractional cover. Various machine learning algorithms have been trained (such as Gaussian process regression and neural networks), and their performance has been evaluated. The dataset was divided into training and test sets. After the training phase and hyperparameter optimization, the algorithm was tested on the previously unseen test dataset. In this preliminary phase, we achieved a best RMSE in Fc prediction of 0.057, which is consistent with the results obtained by García-Haro et al. (2018). Next, we applied our method to actual satellite data, fine-tuning the algorithm to address real-world complexities such as mixed pixels. Specifically, we tested our approach and compared the results with predictions obtained using the method proposed by Weiss et al. (2020). 
Finally, leveraging experimental datasets such as GBOV (Bai, G. et al. 2019), we evaluated the algorithm's performance by applying it to existing images and comparing the retrieved values with those obtained from field measurements. This work demonstrates the value of integrating VNIR and TIR data for vegetation monitoring and its potential to support SBG-TIR missions. Through iterative testing and validation, we showed that our method not only reliably estimates Fc from synthetic data but also effectively utilizes real-world data to achieve accurate retrieval, enabling precise mapping of vegetation fractional cover.
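The two-band retrieval workflow described above can be sketched end to end. The following is a minimal illustration only: a simple linear soil/vegetation mixing model with invented endmember reflectances stands in for the SCOPE simulations, and an off-the-shelf scikit-learn Gaussian process regressor stands in for the authors' trained models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Assumed endmember reflectances at the two VNIR bands (655 nm, 835 nm);
# illustrative values only, not SCOPE output.
soil = np.array([0.20, 0.25])
veg = np.array([0.05, 0.45])

# Synthetic dataset: linear soil/vegetation mixing plus sensor noise.
fc = rng.uniform(0.0, 1.0, 500)
refl = fc[:, None] * veg + (1.0 - fc[:, None]) * soil
refl += rng.normal(0.0, 0.005, refl.shape)

X_train, X_test, y_train, y_test = train_test_split(
    refl, fc, test_size=0.2, random_state=0)

# Train a Gaussian process regressor and evaluate on the held-out set.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                               normalize_y=True).fit(X_train, y_train)
rmse = float(np.sqrt(np.mean((gpr.predict(X_test) - y_test) ** 2)))
print(f"held-out Fc RMSE: {rmse:.3f}")
```

Under this toy mixing model the held-out RMSE is small; real SBG-TIR retrievals face nonlinear canopy effects and mixed pixels that this sketch deliberately ignores.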
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Investigating surface water loss in southern Italy: validation of the IASI-based ECI-WDI synergy with ground-stations measurements

Authors: Pamela Pasquariello, Guido Masiello, Carmine Serio, Prof. Vito Telesca, Marco D'Emilio, Dr. Giuliano Liuzzi, Rocco Giosa, Lorenzo Cassini, Dr. Italia De Feis, Fabio Della Rocca, Sara Venafra
Affiliations: University of Basilicata, University of Rome "La Sapienza", IAC-CNR, University of Naples "Federico II", Italian Space Agency
Surface dryness poses a significant threat to biodiversity and ecological landscapes. Its impact on croplands and woodlands is particularly pronounced in Mediterranean Europe, with southern Italy being one of the most affected areas due to extreme summer heatwaves and scarce rainfall. In this context, identifying the occurrence of water loss from the surface to the atmosphere can enhance our understanding of its dynamics and support stakeholders and policymakers in taking timely and appropriate actions. Since surface water content is related to thermal-infrared (TIR) emissivity, and air water content depends on both surface and dew point temperatures, we selected the Emissivity Contrast Index (ECI) and the Water Deficit Index (WDI) to investigate their effectiveness in identifying water loss from the surface to the atmosphere in part of southern Italy, focusing on the period 2015–2023. ECI is defined as the one’s complement of the emissivity contrast in the TIR, i.e., the difference between the maximum and minimum emissivity values across a specific number of TIR emissivity channels, chosen for their spectral properties (e.g., sensitivity to senescent or green vegetation, bare soil). WDI, on the other hand, is the difference between surface and dew point temperatures near the surface, directly reflecting variations in water content between the surface and the lower atmosphere. These indices were estimated from hyperspectral satellite data acquired by the Infrared Atmospheric Sounding Interferometer (IASI) onboard EUMETSAT’s MetOp polar satellites. Developed in France by the Centre national d'études spatiales (CNES), IASI’s spectral range spans from 645 to 2760 cm⁻¹, with a sampling interval of 0.25 cm⁻¹, resulting in 8461 spectral channels. 
IASI radiances were processed using phi-IASI, a physical inversion scheme that simultaneously retrieves a comprehensive state vector of atmospheric and surface quantities, including temperature and water vapor profiles, as well as surface temperature and the emissivity spectrum. Ground station measurements of air temperature, relative humidity, and precipitation provided by local environmental agencies were also used for comparison with IASI retrievals, alongside surface soil moisture derived from Copernicus’ Sentinel-1 acquisitions. The selected datasets were co-located and compared over the period of interest, highlighting a clear relationship between emissivity, temperature, and water loss from the surface to the atmosphere. The primary limitation of this methodology is the spatial resolution of the TIR data. However, these indices could prove even more effective if estimated using higher-resolution instruments, such as NASA-ASI’s Surface Biology and Geology Orbiting Terrestrial Thermal Emission Radiometer (SBG-OTTER), enabling more precise monitoring of small, heterogeneous areas where multiple land covers coexist and exhibit different responses to water stress.
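From the definitions above, both indices reduce to simple expressions: ECI is the one's complement of the max-min emissivity spread over the selected TIR channels, and WDI is the surface minus dew point temperature difference. A minimal sketch with illustrative values (not IASI retrievals):

```python
import numpy as np

def eci(emissivity_channels):
    """Emissivity Contrast Index: one's complement of the max-min
    emissivity spread over the selected TIR channels."""
    e = np.asarray(emissivity_channels, dtype=float)
    return 1.0 - (e.max() - e.min())

def wdi(t_surface, t_dewpoint):
    """Water Deficit Index: surface temperature minus near-surface
    dew point temperature (same units, e.g. kelvin)."""
    return t_surface - t_dewpoint

# Illustrative channel emissivities and temperatures, not retrievals:
print(eci([0.95, 0.98, 0.96]))  # contrast 0.03 -> ECI ~0.97
print(wdi(305.0, 293.5))        # 11.5 K deficit
```

In practice the channels entering ECI are chosen for their spectral sensitivity (senescent/green vegetation, bare soil), and both quantities come from the phi-IASI retrieved state vector.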
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Monitoring Earth surface skin temperature and emissivity from IASI satellite observations

Authors: Daniel Zhou
Affiliations: NASA Langley Research Center
Surface parameters such as emissivity and skin temperature have been retrieved from the last 18 years of IASI measurements. Monthly, spatially gridded surface skin temperature is produced to show its natural variability, which is also reflected in the surface emissivity derived from the same time series of measurements. The anomalies of surface skin temperature and emissivity are used to estimate their trends. The IASI global surface skin temperature trend is comparable with the NASA GISTEMP global surface air temperature trend: global surface skin temperature increased by approximately 0.031 K/yr over 2007–2024. Attention has been given to the Arctic region, where the average warming rate of ~0.087 K/yr is nearly 3 times the global average. In some regions, the warming rate reaches ~0.25 K/yr, about 10 times the global average. The geophysical distribution of the surface temperature trend in the Arctic and Antarctic regions is emphasized and reported.
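The trend estimation described above (anomalies relative to a monthly climatology, then a linear fit) can be illustrated on synthetic data. The series below is invented, with the quoted 0.031 K/yr rate imposed; it is not IASI data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 18-year monthly skin-temperature record (illustrative):
# seasonal cycle + 0.031 K/yr warming + noise.
t = np.arange(18 * 12)                      # months since start of 2007
lst = (288.0
       + 10.0 * np.cos(2 * np.pi * t / 12)  # annual cycle
       + 0.031 * (t / 12.0)                 # imposed trend, K/yr
       + rng.normal(0.0, 0.2, t.size))

# Anomalies: subtract the mean annual cycle (monthly climatology).
clim = lst.reshape(18, 12).mean(axis=0)
anom = lst - np.tile(clim, 18)

# Least-squares linear trend of the anomalies, converted to K/yr.
slope_per_month = np.polyfit(t, anom, 1)[0]
print(f"trend: {slope_per_month * 12:.3f} K/yr")
```

Removing the climatology first keeps the strong seasonal cycle from leaking into the fitted slope; the recovered trend matches the imposed rate to within the noise.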
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: A GIS-based framework for illegal waste management: integration of remote sensing and ground surveys for environmental and cost optimization

Authors: Alfonso Ragazzo, Dr Alessandro Mei, Flaminia Fois, Dr Patrizio Tratzi
Affiliations: National Research Council of Italy - Institute of Atmospheric Pollution Research, Department of Engineering, Unit of Electronics for Sensor Systems, University Campus Bio-Medico of Rome, University of Tuscia - Department of Agriculture and Forest Sciences
Illegally abandoned waste is a global issue with significant environmental consequences. Criminal activity and the lack of adequate waste disposal services severely impact soil, water and air quality. This study presents an integrated approach to waste management and cost analysis through the combination of remote sensing, ground surveys, and a GIS-based framework, supplemented with mathematical models and scripting tools. The proposed workflow aims to identify and monitor illegal waste disposal sites within the Circular Economy context, leveraging data science, remote sensing, and field data collection. Initially, a pre-recognition phase used satellite platforms to detect potential illegal waste sites at varying spatial resolutions. Spatial data analysis methods were then used to predict the density of illegal waste sites from the satellite images. As a second step, building on these outcomes, a targeted ground survey is conducted, including product analysis, mapping of waste density distributions, and 2.5D volumetric assessments. Finally, a cost analysis is performed, considering both waste transportation and final management for the different scenarios identified (e.g., landfill, incinerator, gasifier). The findings highlight how waste management and planning, through different disposal plants, can result in numerous advantages for the environment and public health, notably reducing greenhouse gas emissions and pollutants. In conclusion, this study demonstrates the efficacy of a multi-parametric, GIS-based approach able to identify illegal waste sites, assess associated costs, and optimize material collection and transport to suitable disposal facilities. Future work will focus on Life Cycle Assessment (LCA) decision-support tools for enhancing regional waste management planning. 
Accordingly, policy-makers should converge on strategies aimed at minimizing waste transport and generation, promoting the recycling and recovery of materials. Implementing LCA methods, as highlighted in this study, can serve as an effective tool for identifying more sustainable waste management practices, with particular attention to public health. In addition, raising community awareness in socially complex regions is crucial for minimizing illegal practices and fostering sustainable waste disposal behavior.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Assessment of level 2 LST products estimated by HyTES and OwL sensors in the framework of NET-Sense 2023 campaign

Authors: Drazen Skokovic, José A. Sobrino, Rafael Llorens, Daniel Salinas, Sergio Gimeno, Virginia Crisafulli
Affiliations: University Of Valencia
Airborne campaigns are an important tool for algorithm testing and characterization of uncertainties in LST retrieval for future sensors. To prepare for the future Land Surface Temperature (LST) Monitoring (LSTM) mission, the NET-Sense 2023 campaign took place in Grosseto, Italy, at an experimental site consisting of irrigated flat areas with growing crops, mainly maize and alfalfa. In situ measurements were an important part of the campaign, providing LST, radiation fluxes and evapotranspiration data. An extensive database was created focusing on several studies such as LST directionality effects, LST algorithm tests and calibration/validation (cal/val) activities in the Thermal InfraRed (TIR) spectrum, among others. Two aircraft involved in the campaign, provided by the British Antarctic Survey and Kenn Borek Air, carried two hyperspectral TIR sensors (7.5-12.5 µm), OwL and HyTES. Both took simultaneous images following the same designed flight line with different view angles and with nadir pixel sizes below 3 m. LST and Land Surface Emissivity outputs were generated by their respective teams, providing valuable information on the LST behavior as a function of zenith angle. In this work, we focused on the cal/val and the performance of the LST and LSE products estimated for both sensors, using the in situ data measured by our team with multiband and broadband radiometers. We also performed LST and LSE reprocessing by applying the Temperature Emissivity Separation algorithm and compared the results with the Level 2 images, as well as with the LST products from ECOSTRESS and Landsat-8 TIRS. Finally, integrating the obtained LST with the energy flux data, evapotranspiration was calculated by applying the Simplified Surface Energy Balance Index algorithm.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Modelling of the Annual-Diurnal Land Surface Temperature Dynamics

Authors: Lluís Pérez-Planells, Frank-M. Göttsche, Jan Cermak
Affiliations: Karlsruhe Institute Of Technology
The annual and diurnal cycles of land surface temperature (LST) are driven to a large extent by solar geometry. These regular LST variations follow different time scales and can be modelled with two separate functions controlled by a small number of parameters, typically from three to six, which can be determined by fitting the respective model functions to time series of LST observations. Here we describe an annual-diurnal temperature cycle (ADTC) model that combines an annual temperature cycle (ATC) model controlled by the solar zenith angle (ATCsza) with a four-parameter version of the diurnal temperature cycle (DTC) model GOT09. The ADTC model simultaneously describes the annual and diurnal surface temperature dynamics with only five controlling ADTC parameters, which all have physical meaning, i.e. annual minimum temperature, annual temperature amplitude, annual maximum of daily temperature amplitude, mean time of thermal noon and time lag of maximum temperature with respect to summer solstice. The model was fitted to LST data (MOD11/MYD11 A1) retrieved in 2021 from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the EOS – TERRA and EOS – AQUA satellites. ADTC parameters for five globally representative MODIS tiles were obtained, i.e. h18v04, h17v05, h17v07, h21v08 and h19v11. The MODIS LST were randomly split into two datasets: a training dataset containing 80 % of datapoints and a test dataset with the remaining 20 % of datapoints. For the five tiles, the test dataset yielded a mean root mean square error (RMSE) of 4.2 K. Additionally, the modelled LSTs were evaluated against in-situ LST from Lake Constance, KIT Forest and Gobabeb stations, yielding an RMSE of 3.1, 4.0 and 2.8 K, respectively. Finally, the spatial consistency of the ADTC parameters was investigated over five selected areas representing different land covers (i.e. urban, lake, forest, mountain area and desert). 
The results revealed a close link between the parameters and corresponding surface and climate properties, indicating that they provide useful information for LST analyses and climate studies and for further applications, e.g. land cover classification. Future perspectives for the ADTC model are its application to high-resolution thermal infrared (TIR) data, e.g. from Landsat and the ECOsystem Spaceborne Thermal Radiometer on Space Station (ECOSTRESS) as well as upcoming TIR satellite missions, e.g. TRISHNA, SBG, LSTM and Constellr. Combined, these missions will provide daily high-resolution global coverage and are suitable input for the ADTC model, thereby allowing its application to topics that require finer spatial resolutions, e.g., urban heat island studies, volcanology or agriculture applications.
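As a rough illustration of this kind of model fitting, the sketch below fits a five-parameter toy annual-diurnal cycle to synthetic observations with scipy. The raised-cosine forms and all numbers here are invented stand-ins for the actual ATCsza and GOT09 formulations and MODIS data:

```python
import numpy as np
from scipy.optimize import curve_fit

def adtc(x, t_min, a_ann, a_day, t_noon, d_lag):
    """Toy annual-diurnal cycle: minimum temperature plus raised-cosine
    annual and diurnal terms (a simplified stand-in for ATCsza + GOT09)."""
    day, hour = x
    annual = a_ann * 0.5 * (1 + np.cos(2 * np.pi * (day - 172 - d_lag) / 365))
    diurnal = a_day * 0.5 * (1 + np.cos(2 * np.pi * (hour - t_noon) / 24))
    return t_min + annual + diurnal

# Synthetic "observations" at random days/hours of the year, plus noise.
rng = np.random.default_rng(1)
day = rng.uniform(0, 365, 800)
hour = rng.uniform(0, 24, 800)
true = (275.0, 20.0, 12.0, 13.0, 25.0)
lst = adtc((day, hour), *true) + rng.normal(0.0, 1.0, 800)

# Recover the five controlling parameters by least squares.
popt, _ = curve_fit(adtc, (day, hour), lst,
                    p0=(270.0, 15.0, 10.0, 12.0, 0.0))
print(np.round(popt, 2))
```

The fitted parameters recover the imposed values, mirroring how the ADTC parameters (annual minimum, annual and daily amplitudes, thermal noon, solstice lag) are obtained from LST time series.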
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: New Generation of GEO-ring Land Surface Temperature for the Copernicus Land Monitoring Service

Authors: Francisco Lopes, Dr. Emanuel Dutra, Sofia Ermida, João Martins, Dr. Sandra Gomes, PhD Isabel Trigo
Affiliations: Instituto Português do Mar e da Atmosfera (IPMA), Instituto Dom Luiz (IDL), European Centre for Medium-Range Weather Forecasts (ECMWF)
Bio-geophysical products from the Copernicus Land Monitoring Service (CLMS) are designed to monitor the status and evolution of the Earth’s main land surface components. These products support a series of applications such as climate monitoring, water resource management, and decision-making for sustainable land management and environmental policies. Among these is the Land Surface Temperature (LST) global product, derived from a GEO-ring constellation of satellites, providing consistent and accurate hourly and 10-daily LST estimates at a global scale. A new version of the LST global product is currently in development, integrating several updates to improve its quality and stability. These updates include changes in several components, such as dynamic vegetation-based surface emissivity, high-frequency cloud information from SAF-Nowcasting, and a comprehensive reprocessing effort spanning seven years (2018-2024), ensuring temporal consistency with near-real-time updates. The LST processing includes the GOES-16 and Himawari satellite estimates, which are merged with those produced by the Satellite Application Facility on Land Surface Analysis (LSA SAF) for the Meteosat Second Generation (MSG) prime and Indian Ocean (IODC) satellite missions. Validation against 40 in-situ Global-Based Observations for Validation (GBOV) sites demonstrates improved scores for the reprocessed LST estimates, achieving an overall accuracy of approximately -0.42 K and a root mean square difference of 3.25 K. A detailed spatial analysis indicates that the major differences between the reprocessed and operational datasets are located in specific geographical regions, particularly in arid and desert areas. These differences can be more pronounced during the summer months for the LST maximum, where the surface emissivity updates have a significant impact. 
Comparisons with skin temperature from the ERA5 reanalysis show an increased temporal stability throughout the entire reprocessed period, highlighting improvements across the maximum, mean, and minimum LST global sub-products, thus offering a more comprehensive and reliable dataset for climate and environmental applications.
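The two validation statistics quoted above, accuracy (mean bias) and root mean square difference, are straightforward to compute from co-located satellite/in-situ pairs. A minimal sketch with made-up values, not GBOV matchups:

```python
import numpy as np

def validation_stats(lst_sat, lst_insitu):
    """Accuracy (mean bias) and root mean square difference (RMSD)
    between satellite LST and co-located in-situ LST, in kelvin."""
    diff = np.asarray(lst_sat) - np.asarray(lst_insitu)
    return float(diff.mean()), float(np.sqrt(np.mean(diff ** 2)))

# Illustrative co-located pairs (invented numbers):
sat = np.array([301.2, 295.8, 310.4, 288.9])
insitu = np.array([301.9, 296.5, 309.8, 290.1])
bias, rmsd = validation_stats(sat, insitu)
print(f"accuracy: {bias:+.2f} K, RMSD: {rmsd:.2f} K")
```

The bias carries a sign (a negative value, as reported above, means the satellite product runs cool on average), while the RMSD folds both bias and scatter into a single magnitude.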
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Overview of SBG-TIR data products

Authors: Sara Venafra, Simona Zoffoli, Francesco Longo, Tiziana Scopa, Giorgio Viavattene, Kerry Cawse-Nicholson, Simon Hook, Demetrio Labate, Maria Giulia Pancalli, Fabrizia Buongiorno, Malvina Silvestri, Niccolò Camarlinghi, Benedetto Michelozzi, Giovanni Laneve
Affiliations: Italian Space Agency (ASI), California Institute of Technology, Jet Propulsion Laboratory, Leonardo, Istituto Nazionale di Geofisica e Vulcanologia, Flysight s.r.l., Università di Roma 'La Sapienza' - Scuola di Ingegneria Aerospaziale
The Joint ASI - NASA/JPL Surface Biology and Geology Thermal Infrared (SBG-TIR) Mission will enhance our scientific understanding of climate, ecosystems and natural resources, hydrology, weather and the solid earth, in response to recommendations from Italian and American national users and communities. The SBG-TIR mission will contribute to the development of innovative approaches and methodologies useful for both the scientific and user communities across a broad field of applications in agriculture, food security, surface water management, conservation, wildfire management, volcanic risk assessment and heat wave detection. The SBG-TIR mission will carry two instruments: the Orbiting Terrestrial Thermal Emission Radiometer (OTTER) and the Visible InfraRed Earth Observation (VIREO) camera, provided by NASA and ASI respectively. High spatial resolution measurements will be acquired on a global scale, with a revisit time of 3 days, with standard and low-latency products (available in less than 72 and 24 hours, respectively). Multispectral radiometric measurements of surface emission and reflectance from the Earth’s land, including polar regions, inland water, and coastal regions, are expected from both instruments. The OTTER instrument will acquire data in MIR (3-5 µm) and TIR (8-12 µm) spectral bands during day and night, while VIREO will provide observations in 3 VNIR spectral bands (one PAN (0.75 µm centre) and two VNIR channels (VNIR-0 (0.655 µm centre) and VNIR-1 (0.835 µm centre)) during daytime, with a spatial resolution at nadir of 60 m over land and 1 km over ocean water for OTTER, and 30/60 m over land for VIREO. 
Together, these capabilities will allow the retrieval of geophysical parameters fundamental to monitoring natural and anthropogenic changes to ecosystems, such as emissivity and land surface temperature, evapotranspiration (ET), surface reflectance, the Normalized Difference Vegetation Index (NDVI), surface mineralogy, elevated-temperature features, volcanic activity data products, etc. This work gives an overview of the SBG-TIR data products, focusing on both the OTTER and VIREO products, their processing chains, and their possible applications.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Assessing Ecosystem Responses to Drought in the MENA Region Using Long-Term, High-Frequency Thermal Infrared Data

Authors: Adrian Kreiner, Dr. Kanishka Mallick, Tian Hu, Pr. Abdelhakim Amazirh, Dr. Zoltan Stanzoi
Affiliations: European Space Agency ESA ESRIN, Luxembourg Institute of Science and Technology, Center of Remote Sensing Applications, Mohammed VI Polytechnic University
Diurnal cycling of plant carbon uptake and water use, and their responses to water and heat stresses, are critical for understanding ecosystem productivity, agricultural management practices, and the carbon and water cycles and feedbacks to the climate. Throughout the day, variations in temperature, solar radiation, atmospheric water demand, soil moisture, and leaf water potential drive diurnal changes in stomatal conductance, photosynthesis, and transpiration rates. Despite their importance, the response of vegetation physiology to drought at large spatial scales is poorly understood due to a lack of direct observations. Earth observations from polar-orbiting satellites cannot resolve these diurnal variations. However, operational satellites, such as Meteosat Second Generation (MSG), provide continuous, high-frequency land surface temperature (LST) and emissivity, offering the ability to analyze diurnal patterns over long time periods. In this study, we investigate the diurnal variability of LST using the temperature rise rate derived from MSG LST data (2004–2024, 30-minute temporal resolution, 0.05° spatial resolution) over the Middle East and North Africa (MENA) region to understand vegetation responses to water scarcity over a long time period in a changing climate. The underlying principle is that transpiration, regulated by stomatal conductance, drives photosynthesis and carbon uptake while causing water loss. Simultaneously, transpiration cools plants through evaporative heat dissipation. Under dry conditions, reduced stomatal conductance limits transpiration, reducing this cooling effect and resulting in a more rapid increase in LST compared to wetter conditions. To validate findings from coarse spatial resolution MSG data, we incorporate ECOSTRESS LST data, available from mid-2018, which offers higher spatial resolution (70 m) and varying acquisition times throughout the day. 
For evaluating structural changes in vegetation in relation to diurnal LST dynamics, we integrate Gross Primary Production (GPP) and Normalized Difference Vegetation Index (NDVI) data from MODIS, spanning 2004–2024. Additionally, to analyze phenological changes, we use photosynthetic activity data derived from sun-induced chlorophyll fluorescence (SIF) from the Sentinel-5P TROPOMI sensor, available from 2018, which provides daily SIF measurements. This combined approach enables a comprehensive assessment of the relationship between diurnal LST variations, vegetation structure, and photosynthetic function. This research is ongoing and we expect to present impactful results in the near future.
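The rise-rate diagnostic underlying this analysis can be illustrated with a small sketch. The diurnal curves below are invented stand-ins for MSG pixels, and the mid-morning fitting window is an assumed choice, not the study's definition:

```python
import numpy as np

def morning_rise_rate(hours, lst, start=7.0, end=11.0):
    """Least-squares slope (K/hour) of LST over a mid-morning window;
    an illustrative proxy for the temperature rise rate."""
    mask = (hours >= start) & (hours <= end)
    return np.polyfit(hours[mask], lst[mask], 1)[0]

# Synthetic 30-minute diurnal LST curves (not MSG data): a "dry" pixel
# with weak evaporative cooling heats up faster than a "wet" one.
h = np.arange(0.0, 24.0, 0.5)
wet = 290.0 + 8.0 * np.sin(np.pi * (h - 6) / 14).clip(0)
dry = 290.0 + 15.0 * np.sin(np.pi * (h - 6) / 14).clip(0)

print(morning_rise_rate(h, dry), ">", morning_rise_rate(h, wet))
```

Comparing rise rates between pixels, or between years for the same pixel, is what lets the slower-transpiring (drier) surfaces stand out in the 30-minute MSG record.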
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Assessing the Feasibility of Producing a Moderate Extremes Dataset Based on Satellite Land Surface Temperature

Authors: Josh Blannin, Dr Elizabeth Good
Affiliations: Met Office
Research consistently shows that climate change is making temperature extremes more likely. This will result in greater heat/cold stress, leading to negative impacts on many sectors such as health and agriculture. Consistent monitoring of surface heat allows for the identification of regions that are experiencing these temperature extremes. Typically, this is performed using 2-meter air temperature (T2m) measured at weather stations; however, this is restricted to regions that are well-observed in situ and can leave sparsely monitored regions unassessed. This work examines the potential use of satellite land surface temperature (LST) to augment information obtained from T2m and produce climate indices which provide metrics on the frequency, duration and intensity of temperature extremes. The indices used here were developed by the World Meteorological Organisation’s (WMO) Expert Team on Climate Change Detection and Indices (ETCCDI; now discontinued) and Expert Team on Sector-specific Climate Indices (ET-SCI) for the Climpact project (https://climpact-sci.org/indices/), and have been cited by the Intergovernmental Panel on Climate Change (IPCC). This study uses T2m weather station data from the National Centers for Environmental Information’s Global Historical Climatology Network daily (GHCNd), and LST data from the European Space Agency’s Land Surface Temperature Climate Change Initiative (LST_cci; https://climate.esa.int/en/projects/land-surface-temperature/), to assess the consistency between these two variables for a selection of temperature indices. Satellites cannot measure T2m but instead estimate the LST by measuring infrared (IR) or microwave (MW) radiation emitted from the Earth’s surface. Studies have shown that T2m and LST are often well coupled, with similar temperature signals present in both. 
Therefore, the hypothesis is that whilst T2m and LST may differ in their measurement values, they will display similar patterns in their temperature signals, particularly when considering extreme event indicators. To test this, the LST_cci microwave LST data for 1996-2020 and GHCNd weather station data are spatially and temporally co-located. Extreme events are identified using the Climpact indices, which are categorised into value-, threshold-, and percentile-based indices. Value-based indices record statistical measures of the data and are in the units of the variable, e.g., TXx is the monthly maximum of the daily maximum temperatures. Threshold-based indices log how many days, either monthly or annually, exceed a pre-defined threshold, e.g., SU refers to “summer days” and is the annual number of days that exceed 25°C. Percentile-based indices track the percentage of days, monthly or annually, on which a given percentile is exceeded, e.g., TX90p is the percentage of days per month exceeding the daily 90th percentile. Current results for T2m and LST value-based indices show correlations of r ≈ 0.96, with some indices showing a difference in their respective index means of less than 5°C. Percentile-based indices also show promising results, with correlations of r ≥ 0.76, and are beneficial because they are normalised to each dataset. Noticeably, percentile-based indices show that T2m and LST can identify the same extreme events, with multiple occurrences over the 25-year record. Threshold-based indices do not align as well, though many show differences of less than 10% of a year. This is expected, as threshold-based indices were developed for the assessment of T2m, meaning the thresholds are not suitable for LST and can lead to inaccurate conclusions. To resolve this, two adjustment methods are trialled: Kernel Density Estimation (KDE) and Logistic Regression (LR). 
For each threshold-based index, the KDE method evaluates the proportion of T2m values that exceed the respective threshold. The adjusted LST threshold is found by applying the inverse and determining the temperature that results in a similar exceedance rate. To perform the LR method, a logistic regression model is fitted using LST as the explanatory variable, with the response variable a binary flag indicating whether T2m has exceeded its threshold. The new threshold for LST is the temperature at which there is a greater than 50% chance of T2m being flagged as exceeding its threshold. To evaluate the performance of these methods, precision and recall metrics are used to quantify the proportion of occurrences where both LST and T2m exceed their respective thresholds. Both methods show promise, improving the mean precision and recall; however, obtaining a revised LST threshold is challenging for indices with very few events in the record (e.g., ≤ 40). Work is ongoing to define a final set of ‘Climpact-like’ LST indices that could be used in conjunction with T2m to monitor global temperature extremes with fewer geographical gaps than when using T2m data alone. In addition, the LST data could potentially provide information at a higher spatial resolution (e.g., < 0.25°) than gridded T2m indices (e.g., 1.25° x 1.875°), enabling the use of extreme temperature index data in a wider range of climate applications.
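The exceedance-rate matching at the heart of the KDE adjustment can be illustrated with a small sketch. The snippet below is illustrative only, not the authors' implementation: an empirical quantile stands in for the fitted kernel density, and all variable names and data are hypothetical.

```python
def adjusted_lst_threshold(t2m, lst, t2m_threshold):
    """Empirical stand-in for the KDE threshold adjustment.

    Computes the proportion p of T2m values exceeding the T2m threshold,
    then returns the (1 - p) empirical quantile of the co-located LST
    sample, i.e. the LST value with a matching exceedance rate.
    """
    p = sum(1 for t in t2m if t > t2m_threshold) / len(t2m)
    lst_sorted = sorted(lst)
    idx = min(int((1.0 - p) * len(lst_sorted)), len(lst_sorted) - 1)
    return lst_sorted[idx]
```

With co-located samples, the returned LST value is exceeded at roughly the same rate as the original T2m threshold, which is the property the KDE method targets.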

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Large scale exploitation of satellite data for the assessment of urban surface temperatures: the EO4UTEMP project

Authors: Zina Mitraka, Giannis Lantzanakis, Maria Gkolemi, Nektarios Chrysoulakis
Affiliations: FORTH - RSLab.gr
Climate change increases stress on urban areas due to the rise in heat waves, which can threaten people’s wellbeing and even lives. Temperature is a crucial parameter in climate monitoring and Earth Observation (EO) systems. Advances in remote sensing technology have expanded opportunities for monitoring surface temperature from space. With numerous satellite thermal missions anticipated in the coming years, there is a pressing need for improved methods to retrieve surface temperatures for cities. While EO satellites are excellent for mapping Land Surface Temperature (LST), the unique properties and geometry of urban surfaces, along with the trade-off between temporal and spatial resolution, pose challenges in retrieving urban surface temperature (UST). To this end, the EO4UTEMP project explored the use of EO data for monitoring UST from space. EO4UTEMP developed innovative methods and algorithms for producing detailed, accurate, and frequent UST products. A UST retrieval algorithm for high-resolution thermal sensors (e.g., Landsat, ASTER, ECOSTRESS, and the upcoming TRISHNA, LSTM, and SBG) includes emissivity corrections using ancillary information from external sources (e.g., urban surface cover information from Sentinel-2 and Landsat) and spectral libraries. The algorithm accounts for the sensor’s viewing angle and considers the fraction of vertical urban facades in the UST retrieval, increasing retrieval accuracy. Combined with a thermal imagery downscaling approach, the UST retrieval algorithm allows for the use of low-resolution satellite thermal imagery, thereby increasing the frequency of UST observations. The EO4UTEMP methodology was evaluated using in-situ measured UST from meteorological station measurements in Heraklion, Greece, with a Mean Absolute Error (MAE) of up to 3.6 K for daytime and 1.4 K for nighttime.
The EO4UTEMP methodology is transferable and applicable to cities worldwide; the project showcases new technologies and tools to the EO community and promotes the use of EO data in urban meteorology.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Temporal Data-Quality-Based Thresholding (TDQBT) for Artefact Identification in Landsat Analysis-Ready Surface Temperature Data in a Tropical Urban Area

Authors: Gulam Mohiuddin, Prof. Dr. Jan-Peter Mund, Prof. Dr. Matthias Möller
Affiliations: Eberswalde University for Sustainable Development, Otto-Friedrich-University Bamberg
Land Surface Temperature (LST) is critical for understanding urban thermal dynamics and mitigating urban heat island (UHI) effects. While Landsat Collection-2 Level-2 Surface Temperature (ST) products offer analysis-ready data, they are prone to artefacts arising from sensor and preprocessing inconsistencies. These artefacts compromise the reliability of LST analyses, particularly in tropical urban regions like Phnom Penh, Cambodia, where high cloud cover and missing data exacerbate the issue. Despite the widespread use of conventional thresholding methods, including Otsu’s and Li’s techniques, their limited sensitivity to the spatiotemporal complexities of tropical environments highlights a research gap. This study introduces the Temporal Data-Quality-Based Thresholding (TDQBT) method, a novel approach to detecting and eliminating artefacts in Landsat LST products. The research aims to refine artefact identification by incorporating factors such as temporal data quality, cloud cover on the Landsat scene, and missing-pixel percentages within the research area, ultimately establishing a realistic LST range for the study area. The study utilised 243 Landsat images spanning 2014–2023, accessed via the Google Earth Engine Python API. After applying cloud masks and preprocessing, TDQBT was implemented in two stages: first, the highest-quality data were identified for each month based on minimal missing pixels and cloud cover; second, thresholds for realistic minimum and maximum LST values were computed using the most reliable monthly data. Through visualisations and statistical analyses, the TDQBT approach was compared with traditional thresholding methods: Otsu’s, Li’s, improved Otsu’s, and the quantile method. Additionally, the artefacts identified by TDQBT were explored using temporal patterns, spatial distribution, and correlation analyses. TDQBT significantly improved artefact detection and filtering.
The realistic LST range for the study area was established as 12.89°C to 69.62°C. Artefacts were predominantly associated with cloud cover exceeding 60% and missing pixels above 1.2%. The method effectively reduced artefacts where traditional methods failed. Exploratory analysis revealed that artefacts in minimum LST are more frequent and influenced by seasonal patterns. High cloud cover and missing-pixel percentages were key contributors to artefact occurrence. The TDQBT method offers a robust and adaptive framework for artefact detection in Landsat analysis-ready LST products, enhancing data reliability for tropical urban studies. TDQBT ensures realistic LST estimates critical for urban heat island research, urban planning, and data-driven decision-making. Its application is not limited to tropical regions; pending further investigation, it holds potential for global scalability across diverse geographic and land-use settings. Future research should also explore its adaptability to other sensors, higher temporal resolutions, and integration into automated workflows for operational remote sensing applications.
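The two-stage selection logic can be sketched in a few lines. This is an illustrative simplification, not the authors' code: the scene records and field names are hypothetical, and the 60% cloud and 1.2% missing-pixel limits are taken from the figures reported above.

```python
def tdqbt_thresholds(scenes, max_cloud=60.0, max_missing=1.2):
    """Two-stage sketch of the TDQBT idea.

    `scenes` is a list of dicts with illustrative keys:
    month, cloud_pct, missing_pct, lst_min, lst_max.
    Stage 1: keep the best-quality scene per month (lowest combined
    cloud cover and missing-pixel percentage, within quality limits).
    Stage 2: derive a realistic LST range from those reliable scenes.
    """
    best = {}
    for s in scenes:
        if s["cloud_pct"] > max_cloud or s["missing_pct"] > max_missing:
            continue  # artefact-prone scene, excluded from threshold derivation
        score = s["cloud_pct"] + s["missing_pct"]
        if s["month"] not in best or score < best[s["month"]][0]:
            best[s["month"]] = (score, s)
    reliable = [s for _, s in best.values()]
    lo = min(s["lst_min"] for s in reliable)
    hi = max(s["lst_max"] for s in reliable)
    return lo, hi
```

Pixels outside the derived (lo, hi) range in any scene would then be flagged as artefacts, which is the filtering role the thresholds play in the study.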

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Long-term Trends of LST From a New Daytime-normalized AVHRR Time Series Over Central and Southern Europe

Authors: Philipp Reiners, Dr. Christina Eisfelder, Stefanie Holzwarth
Affiliations: DLR
For monitoring land surface conditions repeatedly over large areas, satellite-derived Land Surface Temperature (LST) has become an indispensable tool. However, to make climate-relevant statements and quantify the impact of climate change on land surface variables over long periods, we need sensors that have been available for more than 30 years. The Advanced Very High Resolution Radiometer (AVHRR) is the only sensor offering spatially and temporally continuous daily measurements for the past 40 years. The German Aerospace Center's (DLR) TIMELINE project focuses on producing a consistent multi-decadal time series from AVHRR data (covering three AVHRR sensors on 16 NOAA platforms and three MetOp platforms) for Europe and North Africa. Nevertheless, variations in observation times among AVHRR sensors affect the recorded temperature, introducing discontinuities into the AVHRR LST time series. Additionally, orbit drift gradually shifts observation times throughout a sensor's lifespan. The resulting challenges in interpreting the AVHRR LST time series are likely the main reason why AVHRR observations have not yet been part of established long-term LST products. Several orbit drift correction methods have been developed to address orbit drift and observation time inconsistencies in AVHRR LST time series. However, existing models often oversimplify spatial and temporal variations, limiting their effectiveness in diverse landscapes like Europe. For generating the TIMELINE LST product, we integrated the AVHRR LST time series with geostationary SEVIRI LST data. The daytime normalization was achieved through pixel-specific correction terms derived from a diurnal cycle model based on SEVIRI observations. The resulting dataset offers daily maximum LST at 1 km resolution over the past 40 years for Central and Southern Europe.
Previous validation studies for the novel dataset revealed that the resulting time series shows stable behavior relative to the CCI LST time series and that the discontinuities between the different NOAA platforms were minimized. In this conference contribution, we present our analysis of long-term LST trends using this daytime-normalized LST product. The trends were assessed using the Mann-Kendall significance test and the Theil-Sen slope estimator, focusing on trends with a significance level of p < 0.1. Furthermore, the trends were categorized by land cover and elevation. To validate our results, the derived trends are compared to existing long-term LST records from ERA5 and to air temperature measurements.
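For readers unfamiliar with the two trend statistics, a minimal pure-Python sketch is shown below. It covers only the core computations; the variance term and normal approximation needed for the p < 0.1 significance test, and handling of ties, are omitted.

```python
from itertools import combinations

def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of signs over all pairs (j > i).
    Large positive S indicates a monotonic upward tendency."""
    sgn = lambda d: (d > 0) - (d < 0)
    return sum(sgn(y2 - y1) for y1, y2 in combinations(series, 2))

def theil_sen_slope(series):
    """Theil-Sen estimator: median of all pairwise slopes
    (values assumed equally spaced in time)."""
    slopes = [(y2 - y1) / (j - i)
              for (i, y1), (j, y2) in combinations(enumerate(series), 2)]
    slopes.sort()
    m, mid = len(slopes), len(slopes) // 2
    return slopes[mid] if m % 2 else 0.5 * (slopes[mid - 1] + slopes[mid])
```

The median-of-slopes construction is what makes Theil-Sen robust to the outliers that residual cloud contamination can leave in an LST record.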

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: The Joint ASI - NASA/JPL Surface Biology and Geology Thermal Infrared (SBG-TIR) Mission

Authors: Raffaele Votta, Dr. Simona Zoffoli, Sara Venafra, Tiziana Scopa, Giorgio Viavattene, Francesco Longo, Francesco Tataranni, Vincenzo Martucci, Rocco Pellegrini, Maria Fabrizia Buongiorno, Gianluca Giallatini, Andrea Cici, Fabio Lucchini, Eliana Gargiulo, Lorenzo Franchi, Maria Giulia Pancalli, Demetrio Labate, Lauren White, Ralph Basilio, Simon Hook, Sarah Hunyadi-Lay, Melora Larson, Jennifer Cruz, Peter Xaypraseuth, Sam
Affiliations: Italian Space Agency, Istituto Nazionale di Geofisica e Vulcanologia (INGV), Thales Alenia Space Italia (TAS.I), Leonardo, NASA Jet Propulsion Laboratory
In response to the recommendations of the 2017 Earth Science Decadal Survey by the National Academies of Sciences, Engineering, and Medicine, under which NASA is developing the Earth System Observatory (ESO), and to the Strategic Vision Document for Space 2020-2029 delivered by the Italian Space Agency (ASI) President and approved by the Interministerial Committee for space and aerospace-related policies (COMINT), NASA and ASI are jointly developing the Surface Biology and Geology - Thermal Infrared Mission (SBG-TIR). The SBG-TIR mission consists of the science (research and applications), the observatory (an ASI-provided platform hosting the NASA-provided TIR instrument and an ASI-provided Visible and Near-Infrared (VNIR) instrument), the mission and instrument operations centers, the ground system (uplink/downlink) and related data transfer services, and the science data processing facilities and archive centers. The SBG system will enhance scientific understanding of climate change, agriculture, species’ terrestrial and marine habitats, the surface water cycle, and the distribution of surface natural resources, by answering open questions about the fluxes of carbon, water, nutrients, and energy within and between ecosystems and the atmosphere, ocean, and land. The main focus areas of the SBG mission are to answer questions related to the following areas:
• Ecosystems and natural resources
• Hydrology
• Weather
• Climate
• Solid Earth
The National Academies 2017 decadal survey identified five ‘Most Important’ and seven ‘Very Important’ objectives for SBG.
The SBG-TIR mission consists of an observatory (satellite) based on the multi-mission PRIMA-S platform, provided by ASI together with the end-to-end (E2E) system, embarking two elements:
• the TIR instrument, provided by NASA, a.k.a. the Orbiting Terrestrial Thermal Emission Radiometer (OTTER)
• the VNIR instrument, provided by ASI, a.k.a. the Visible InfraRed Earth Observation (VIREO) camera
The payload, as presented in this paper, consists of OTTER and VIREO. The main objective of the SBG-TIR mission is to provide systematic observation coverage over land, inland water areas, and coastal zones located between ±86° latitude during the prime mission life cycle of 3 years. The system will operate in a Low Earth Orbit (LEO) repeating-track sun-synchronous orbit with a revisit time of 3 days or less, with systematic acquisitions driven by a science-driven coverage mask. The data are expected to be acquired in the early afternoon to capture the drawdown in evapotranspiration due to plant stress, with sufficient frequency to measure ecosystem function at weekly, seasonal and yearly time scales and sufficient detail to measure within-field variability. The data are also expected to allow measurement of dynamic processes occurring in active volcanoes and wildfires, such as subtle precursory degassing and thermal output at volcanic systems and fire temperatures, both at a spatial resolution of 60 meters or better. It is anticipated that there will be sufficient spectral resolution to detect unsaturated high temperatures and map a suite of minerals. It is also expected that the data should contribute to better characterizing ocean dynamics, such as eddies, fronts, upwelling patterns, and river discharge and plumes, which impact aquatic biogeochemistry and ecology. The SBG-TIR mission will coordinate its activities with other similar missions (e.g., TRISHNA, LSTM) to open the way to new opportunities (cross-validation and improved science returns) whenever possible.
The SBG-TIR satellite is designed to be placed into a sun-synchronous orbit (SSO), from which the OTTER and VIREO instruments are expected to conduct multispectral radiometric measurements of surface emission and reflectance from the Earth’s land, inland water, and coastal regions. These observations are intended to be acquired globally with high spatial resolution at nadir (60 m for the TIR and 30/60 m for the VNIR instrument) and a revisit time of 3 days, in support of NASA and ASI science objectives. The SBG-TIR project will provide observations in 8 MIR (3-5 µm) and TIR (8-12 µm) spectral bands during day and night, and 3 VNIR spectral bands (one PAN channel centred at 0.75 µm and two VNIR channels, VNIR-0 centred at 0.655 µm and VNIR-1 centred at 0.835 µm) during daytime. The mission shall provide low-latency and standard products within 24 and 72 hours of acquisition by the instrument, respectively. The joint implementation team (JPL, ASI, TAS-I, and LDO) has developed a mission definition that meets requirements. The next major steps are the preliminary and the detailed/critical design phases.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Advancing Alpine lake monitoring and modelling with high-resolution thermal remote sensing

Authors: Abolfazl Irani Rahaghi, Daniel Odermatt, Kathrin Naegeli
Affiliations: Eawag, Swiss Federal Institute of Aquatic Science & Technology, Surface Waters – Research and Management, Department of Geography, University of Zurich
Lakes are essential components of the Earth's hydrological cycle, providing water for domestic, agricultural, and industrial use. Lake thermal dynamics offer critical insights into regional and global climate change, and play a regulatory role in lake biogeochemical cycles. In situ measurements, satellite Earth observation, and hydrodynamic modelling are the three primary sources of lake temperature monitoring. Besides monitoring purposes, in situ data are also essential for calibration and validation (cal/val) of both satellite products and numerical models. However, such data are often scarce or irregular for many lakes. Additionally, data assimilation of lake surface water temperature (LSWT) products has been shown to improve numerical model performance. Satellite thermal imagery has been one of the key sources for LSWT monitoring, as it can provide consistent datasets regionally and globally. However, today’s operational LSWT services are limited to 1 km spatial resolution and cover only the world’s thousand largest lakes. High-resolution, high-revisit thermal Earth Observation (EO) missions, such as ECOSTRESS, LSTM, TRISHNA, and SBG, mark a significant step forward, extending LSWT services to hundreds of thousands of smaller lakes. Yet, their large viewing zenith angles and unique radiometric properties demand dedicated product calibration and validation efforts. This includes collecting reliable skin temperature and ancillary datasets across diverse lakes, as well as assessing and optimizing LSWT retrieval algorithms. Developing a transferable pipeline for data and algorithm integration will accelerate the adoption of upcoming missions. Timely dissemination of validated products will further enhance their accessibility and usability for scientists, stakeholders, and the public.
Our research, in the frame of the ESA-funded TRISHNA – Science and Electronics Contribution (T-SEC) project, focuses on validating and improving high-resolution LSWT products of current and future missions for inland and coastal waters, and on openly publishing the final products for lakes in the Alpine region. We operate automated reference stations across an altitudinal gradient, including Lake Geneva (Switzerland/France; local name: Lac Léman), Lake Aegeri (Switzerland; local name: Ägerisee), Lago Bianco (Switzerland), and Greifensee (Switzerland). Lake Geneva is a large, low-altitude (372 m a.s.l.), well-monitored pre-alpine lake. Skin, sky, and bulk temperatures, as well as meteorological data, are available from two stations on this lake, i.e., Buchillon (2016-2018; excluding sky temperatures) and LéXPLORE (2022-present). Lake Aegeri, a mid-size sub-alpine lake (724 m a.s.l.), has been monitored intermittently since 2023 and continuously since June 2024 with similar sensors. We recently deployed a research buoy in Lago Bianco, a high-altitude glacier lake (2234 m a.s.l.), with skin, sky, and bulk temperatures, as well as meteorological data, collected in September 2024. During the ice-covered winter period in Lago Bianco, we utilize the buoy’s instruments on a research platform in Greifensee, a eutrophic low-altitude lake (435 m a.s.l.), to provide additional cal/val datasets. Our datasets cover a variety of morphological, bio-physical, and meteorological features across Alpine lakes. We evaluated Landsat 7/8/9 LSWT products from USGS Collection-2 Level-2 data and the single-band Acolite-TACT algorithm. Our matchup comparisons yielded a Mean Absolute Error (MAE) of < 1.3 °C, a Mean Bias Error (MBE) of < 0.4 °C, and a correlation coefficient (R2) of > 0.94. Comparing the performance of individual sensors did not show a significant difference among Landsat-7, Landsat-8, and Landsat-9 for our test sites.
However, analyzing Level-2 ECOSTRESS data obtained from the NASA AppEEARS API exhibited weaker performance (MAE > 2.4 °C, MBE < -2 °C, and R2 < 0.85). This highlights the need for further cal/val and algorithm development, particularly regarding emissivity corrections, for ECOSTRESS products and for other upcoming EO missions. The calibrated and validated algorithms for Landsat have been used to operationally monitor the LSWT of Swiss lakes through the AlpLakes web platform (www.alplakes.eawag.ch). Initially developed under a previous ESA-funded project, AlpLakes integrates satellite data, in situ measurements, and hydrodynamic models. The platform originally focused on water quality products and 3D numerical models for 12 Alpine lakes. Its scalable and transferable backend and frontend design facilitates the rapid implementation of new remote sensing sensors and products, as well as the inclusion of additional lakes. For example, we anticipate promoting the further dissemination of our tools for lakes across the entire Alpine region in a new EU Interreg Alpine Space project, DiMark (https://www.alpine-space.eu/project/dimark/). Ultimately, our research and datasets will contribute to ongoing lake monitoring and modelling activities in Switzerland and beyond. This includes refining skin-to-bulk parameterization for improved satellite-model comparison, and assimilation of LSWT data into the existing 1D and 3D numerical models on AlpLakes. Furthermore, several studies on perialpine lakes have demonstrated that integrating satellite Earth observation, hydrodynamic modelling, and in situ measurements not only enhances the utility of each individual tool, but also improves understanding of both short-term extreme events and long-term trends and regime shifts in lakes.
From a broader perspective, combining regionally validated thermal and optical satellite products with numerical models on a unified platform will enable deeper insights into the connections between physical and biological processes, fostering interdisciplinary research.
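The matchup statistics quoted in this abstract (MAE, MBE, R2) can be computed in a few lines. In this sketch R2 is taken as the coefficient of determination; the R2 reported by the authors may instead be the squared Pearson correlation, and the data are illustrative.

```python
def matchup_stats(sat, insitu):
    """MAE, MBE and R^2 for co-located satellite vs in situ temperatures."""
    n = len(sat)
    errors = [s - o for s, o in zip(sat, insitu)]
    mae = sum(abs(e) for e in errors) / n
    mbe = sum(errors) / n  # negative MBE = satellite cold bias
    mean_o = sum(insitu) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((o - mean_o) ** 2 for o in insitu)
    r2 = 1.0 - ss_res / ss_tot  # coefficient of determination
    return mae, mbe, r2
```

Reporting MAE and MBE together, as done above, separates random scatter from systematic bias, which is why the negative MBE for ECOSTRESS points specifically at an emissivity-correction issue rather than noise.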

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Exploiting the EXtended Control Vector Framework for Improved Coupled Land-Atmosphere Data Assimilation

Authors: Christoph Herbert, Peter Weston, Patricia de Rosnay, Dr. Sebastien Massart
Affiliations: ECMWF
The land-atmosphere coupling approach in current state-of-the-art NWP systems is based on weakly coupled data assimilation for individual Earth-system components. Atmospheric and land surface analyses are performed separately, and results are fed back through cycling into the next assimilation window based on a coupled model forecast. This can lead to unbalanced initial conditions, and observations often cannot be fully harnessed if they are assimilated differently in one or both components within the same assimilation window. The CopERnIcus climate change Service Evolution (CERISE) project aims to advance coupled surface-atmosphere assimilation in preparation for the next generations of global and regional reanalysis systems. On the land side, ECMWF’s activities are directed towards a unified Land Data Assimilation System (LDAS) based on the Simplified Extended Kalman Filter (SEKF), which incorporates multi-layer soil moisture analysis in operations and is currently being extended to other variables, making it suitable for improved coupling. As part of CERISE, a “quasi-strongly coupled assimilation” approach is being developed based on an "outer-loop land-atmosphere coupling", exploiting the incremental formulation of the 4D-Var implementation for atmospheric analysis. Initial efforts focused on infrastructure developments to enable running the SEKF as part of the first 4D-Var non-linear trajectory. Ongoing investigation includes activating the SEKF in multiple 4D-Var outer loops and establishing two-way coupling within the same assimilation window to initialise subsequent outer loops from updated land and atmospheric fields. The aim is to attain a configuration with an optimal degree of coupling that best balances the information between land and atmospheric variables. A new method using an eXtended Control Vector (XCV) has been developed, in which the 4D-Var control vector is extended by a selected number of surface-sensitive variables.
The XCV approach is used for additional variables including skin and first-layer soil temperature, which are adjusted in the model space during the atmospheric analysis to achieve optimal accuracy. By including the first-layer soil temperature in the XCV in the weakly coupled system, preliminary results have already shown improved forecast skill when verified against satellite observations without making use of this variable in the surface analysis. This work presents the results of numerical experimentation based on the latest outer loop coupling developments and the assessment of assimilating XCV first-layer soil temperature analysis fields as pseudo observations into the SEKF. The impact is evaluated in terms of atmospheric forecast skill and validation against observations such as the land surface temperature from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). The infrastructure with increased coupling enables better exploitation of interface observations so that they can simultaneously influence the analysis of both the land and the atmosphere.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: CIMR Level 2 Land Surface Temperature Retrieval Using Machine Learning Approaches

Authors: Andrés Terrer-Gómez, Moritz Link, Dr. Roberto Fernandez-Moran, Dr. Julia Amorós, Dr. Maria Piles, Thomas Lavergne
Affiliations: Image Processing Lab (UV), Norwegian Meteorological Institute
The estimation of LST from infrared satellite measurements is well established; however, such estimates are only available for clear-sky conditions due to the opacity of clouds to infrared radiation. Passive microwave measurements in the high-frequency range (18-89 GHz) can extend the sensing capabilities of infrared sensors thanks to their ability to sense LST in both cloudy and clear-sky conditions, albeit with lower spatial resolution and additional challenges that may compromise accuracy. The Copernicus Imager Microwave Radiometer (CIMR) is one of the six Copernicus Expansion Missions currently being implemented by the European Space Agency and the European Commission. CIMR will provide high-spatial-resolution microwave imaging radiometry measurements and derived products with global coverage and sub-daily revisit in the polar regions and adjacent seas to address Copernicus user needs. Here we present recent developments in machine learning (ML) algorithms trained to retrieve land surface temperature from the upcoming CIMR mission. Our main approach builds on the retrieval methodology proposed by C. Prigent et al. [1] and implemented in the ESA CCI MW LST product [2], based on a simple multi-layer perceptron model that is trained on a curated database of TBs and monthly emissivities [3] using LST inversions as the regression target. We propose a similar methodology tailored to CIMR K-band (18.7 GHz) and Ka-band (37 GHz) frequencies and 6 AM/6 PM overpass times.
Apart from neural networks, we also explored other ML algorithms based on regression trees or Bayesian probability, such as random forest regressors and Gaussian processes, as benchmarks. We discuss the implementation of a handful of ML models, each specialized and trained on ascending/descending swath data and frozen/non-frozen conditions, since snow- and ice-covered surfaces have been a source of retrieval error in previous work [1] and each swath sector corresponds to different physical conditions. The CIMR algorithms discussed in this study are developed as part of the CIMR L2PAD project. The algorithm theoretical baseline documents (ATBDs) and code will be hosted publicly at https://github.com/CIMR-L2PAD .
References:
[1] C. Prigent et al., “Toward ‘all weather,’ long record, and real-time land surface temperature retrievals from microwave satellite observations”, 2016. https://doi.org/10.1002/2015JD024402
[2] ESA CCI, Land Surface Temperature - ESA Climate Change Initiative, Algorithm Theoretical Basis Document, 2023. https://climate.esa.int/en/projects/land-surface-temperature/
[3] F. Aires et al., “A tool to estimate land-surface emissivities at microwave frequencies (TELSEM) for use in numerical weather prediction”, 2011. https://doi.org/10.1002/qj.803
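As a rough illustration of the baseline approach, the sketch below trains a tiny one-hidden-layer perceptron regressor from scratch. It is a toy stand-in, not the CIMR L2PAD implementation: the network size, learning rate, and the assumption of normalized inputs (e.g. scaled brightness temperatures) are all illustrative choices.

```python
import math
import random

def train_mlp(X, y, hidden=8, lr=0.01, epochs=300, seed=0):
    """Minimal one-hidden-layer MLP regressor (tanh hidden units) trained
    by stochastic gradient descent on squared error."""
    rng = random.Random(seed)
    n_in = len(X[0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return h, sum(w * hi for w, hi in zip(W2, h)) + b2

    def mse():
        return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, y)) / len(X)

    initial = mse()
    for _ in range(epochs):
        for x, t in zip(X, y):
            h, out = forward(x)
            err = out - t  # gradient of squared error w.r.t. output (up to factor 2)
            for j in range(hidden):
                grad_h = err * W2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                W2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                for i in range(n_in):
                    W1[j][i] -= lr * grad_h * x[i]
            b2 -= lr * err
    predict = lambda x: forward(x)[1]
    return predict, (initial, mse())
```

In the operational setting, the inputs would be multi-channel TBs plus ancillary emissivity information and the targets LST inversions, with separate models per swath sector and surface state as described above.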

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: TRISHNA: innovative concepts for a first global delivery of Earth Observation thermal infra-red data

Authors: Jean-louis Raynaud, Philippe Gamet, Corinne Salcedo, Emmanuelle Sarrazin, Loïc Lyard, Carole Amiot, Cécile Picard, Jean-Louis Roujean, Bimal Bhattacharya
Affiliations: CENTRE NATIONAL D’ETUDES SPATIALES (CNES), Thales Services Numeriques, CENTRE D’ETUDES SPATIALES DE LA BIOSPHÈRE (CESBIO), INDIAN SPACE RESEARCH ORGANISATION (ISRO)
The TRISHNA mission (Thermal infraRed Imaging Satellite for High-resolution Natural resource Assessment) is a cooperation between the French and Indian space agencies, dedicated to environmental monitoring. This satellite, which should be launched at the end of 2026, will provide worldwide measurements from the visible and short-wave infrared to the thermal infrared, thanks to a unique combination of two on-board payloads. This will allow the delivery of a wide range of specific variables, including land surface temperatures and emissivities, adapted to many environmental monitoring activities. Based on these features, the TRISHNA mission will act as a precursor for the next thermal infrared missions, such as SBG (NASA) or LSTM (ESA/Copernicus). Based on shared algorithm specifications and product definitions, as well as on two separate ground segments, in France and in India, this project has an innovative approach to data processing. Indeed, each agency will be in charge, with the same processing capabilities, of the generation and storage of a half-world coverage. Regarding operational data access, two separate gateways will provide mirrored downloading capabilities for the world coverage of the mission, which implies close cooperation and a cross-validation effort between CNES and ISRO. Finally, considering the effective delivery of the products, some specific TRISHNA mission features shall also be mentioned. In a demonstration context, some products useful for quick decision-making shall be processed and made available to users less than 12 hours after acquisition. This Near Real Time demonstration objective is one of the key drivers for the ground segment development. Regarding the Level 3 product, an innovative algorithm has been conceived to allow optimized delivery of biophysical variables to users.
More generally, it should be emphasized that all the TRISHNA product levels have been optimized to take into account, on the one hand, the mission data volume and, on the other hand, feedback from both the scientific community and the development teams of future missions (SBG/LSTM). This presentation will address all these topics, giving an overview of the current status of the TRISHNA mission, with details on the various specificities of this project. The mission characteristics, the ground segment features, and the specificities of the products and their delivery will be detailed to highlight the innovative concepts of this project.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Hyper-Cam Airborne Mini: Remote sensing of the environment using airborne imaging thermal-infrared spectroscopy with high spatial and spectral resolution

Authors: Bastian Sander, Marion Pause, Uwe
Affiliations: Anhalt University Of Applied Sciences
We report on first results of airborne retrieval of thermal infrared spectroscopy data by means of the Hyper-Cam Airborne Mini. Only very few Hyper-Cams exist worldwide in civil use. As its technical performance is outstanding, airborne Earth observation missions with various applications will be feasible in high quality from local to regional scale. While mounted on an aircraft or gyrocopter, the Hyper-Cam achieves a ground-sampling distance in the sub-meter range and an adaptive spectral resolution of up to 0.5 cm⁻¹. The covered wavelength range of 7.4 to 11.8 microns is thus sampled by up to 1,000 channels. Two different operation modes of the Hyper-Cam enable both long-term observations of single spots and scanning of large regions. Both atmospheric correction and temperature-emissivity separation are conducted with the FLAASH-IR software, leading to highly resolved spectra per image pixel. We present a first set of data recorded during test campaigns in 2024 and 2025 in Canada and Germany. We put emphasis on how to operate flight missions and on post-processing and quality of the spectral data, and discuss scopes and options for prospective applications. These comprise (1) retrieval of biotic and abiotic land-surface parameters and processes (transpiration, evaporation, evapotranspiration), (2) vegetation vitality (drought stress, diseases), and (3) soil characteristics (moisture, composition). An outlook is given on the feasibility of multi-sensor campaigns within various research collaborations.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone C-D)

Poster: Integrated Satellite Analysis of Thermal Variations, Volcanic Gas Emissions, and Lava Flow Mineralogy Using Multisensor and Hyperspectral Data: the case study of Stromboli Island

Authors: Malvina Silvestri, Vito Romaniello, Federico Rabuffi, Fabrizia Buongiorno, Massimo Musacchio
Affiliations: Istituto Nazionale di Geofisica e Vulcanologia, Jet Propulsion Laboratory, California Institute of Technology
Stromboli volcano is renowned in the scientific literature for its long-standing activity, which has persisted for approximately 1,500 years. This activity is characterized by continuous degassing and mild, intermittent fountaining, known as Strombolian activity. The estimation of surface temperature from Earth Observation (EO) data enables the detection of new thermal anomalies or changes in existing ones, providing valuable insights for hazard assessment. Stromboli temperature measurements during quiescent phases can be analyzed to improve the monitoring of volcanic activity. Surface temperature maps are generated by processing Thermal InfraRed (TIR) data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Landsat, and the recent ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) mission. The long-term EO datasets, including ASTER and Landsat, offer a unique opportunity to create extensive time series (about 25 years of land surface temperature estimation) of surface temperature maps, particularly in the summit area, where thermal anomalies are detectable at spatial resolutions of 30–100 meters. Moreover, ECOSTRESS provides a new and unique dataset that combines the spatial resolution of a Landsat-class sensor and the LWIR spectral resolution of an ASTER-class sensor with the temporal resolution of a MODIS-class sensor. These data permit the analysis of temporal series and have been validated and integrated by means of Unmanned Aerial Vehicle (UAV) or ground measurements. Additionally, the ASI-PRISMA (PRecursore IperSpettrale della Missione Applicativa) hyperspectral satellite, which operates in the 0.4–2.5 µm spectral range with a spatial resolution of 30 meters plus a 5 m Ground Sample Distance (GSD) panchromatic channel, provides the capability to delineate the lava flows.
This information offers the possibility to update the distribution of surface materials related to different Stromboli volcanic activities using the suitable EO data available. Furthermore, hyperspectral data, such as those collected by PRISMA and by HyTES (Hyperspectral Thermal Emission Spectrometer), can support the detection of gas emissions (e.g., CO2 in the Short-Wave InfraRed (SWIR) and SO2 in the TIR spectral region) from the summit craters. This multisensor approach aims to provide insights into the dynamic processes of Stromboli, contributing to the understanding and monitoring of active volcanic systems.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: C.01.03 - POSTER - Innovative space technologies enabling growth for new EO science

This peer-reviewed session addresses aspects that lead to reducing the cost of recurring spacecraft developments and to the operation of a growing number of EO satellites. This could include:
- technology aspects that lead to better system performance (e.g. pointing knowledge, higher downlink data rates),
- standardization of the spacecraft avionics with interchangeable and interoperable modules from multiple suppliers, ready to be adopted by multiple integrators,
- new challenges such as platforms compatible with more demanding Zero Debris policies at the mission end of life,
- more efficient process aspects, such as the digitalization of requirements and designs with modern approaches like Model-Based System Engineering or the use of advanced COTS components.

All these industrial points are crucial to enabling better EO science, by leaving better margins and resources for higher-performing EO sensors and by supporting industrial scalability towards affordable constellations delivering better revisit times.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: Leveraging SMOS RFI Detection, Monitoring, and Reporting for Future ESA Earth Observation Missions

Authors: Ekhi Uranga, Judit González, Álvaro Llorente, Antonio De La Fuente
Affiliations: Isdefe, European Space Agency
Over the past 15 years, the Soil Moisture and Ocean Salinity (SMOS) mission has been a pioneer in addressing Radio Frequency Interference (RFI) challenges in the L-band spectrum. The insights gained from SMOS provide a foundation for managing RFI in future ESA missions, with the potential for application across other frequency bands. The SMOS mission has demonstrated that RFI severely impacts passive remote sensing, often rendering data unusable. Its RFI management framework underscores the need for raw (dirty) data, alongside mitigated (clean) products, to support geolocation and reporting. While clean data ensures immediate usability, raw data preserves critical interference characteristics, such as geolocation, signal power, polarization, and time-frequency behavior, essential for characterizing and locating RFI sources. In this regard, there are many lessons learned from SMOS:
1. Raw Data for Geolocation: Preserving unmitigated observations is critical to accurately identifying sources of radio frequency interference. SMOS has successfully used unmitigated data to detect and characterize interference sources and provide actionable reports to spectrum authorities.
2. Synergies Between Mitigation and Reporting: While mitigation and reporting have distinct objectives and requirements, they can share RFI detection algorithms if thresholds and methodologies are carefully designed and balanced. Detection for mitigation focuses on minimizing data contamination, while reporting aims to characterize and flag RFI sources. Properly calibrated algorithms can support both needs.
3. Long-Term Monitoring: Continuous monitoring and historical analysis of RFI have been critical for SMOS. Tracking RFI over time provides insights into the behavior of interference sources, such as temporal patterns, transmission power trends, and the effectiveness of shutdowns after being reported. This long-term perspective is essential for prioritizing reporting efforts and refining detection methodologies.
4. Administrative Considerations: Reporting success varies by region, highlighting the need for adaptive approaches based on administrative responsiveness and geopolitical contexts.
5. Scalable Data Handling: Next-generation missions must accommodate larger datasets with efficient processing workflows, leveraging tools for automation and scalability.
The entire development undertaken by the SMOS DPGS RFI Team to manage SMOS RFI data has evolved from manual detection methods to a fully automated system that generates detailed maps, statistics and reports that are shared with national authorities to enforce spectrum regulations. This design and functionality are a valuable model for future tools, which could be extended beyond the L-band to manage RFI in other frequency bands. Future ESA missions may adopt a dual-product strategy, routinely downlinking clean and raw data. By leveraging SMOS’s lessons and adapting its tools for broader applications, ESA can address the growing challenge of RFI and safeguard the integrity of Earth observation data across a crowded radio frequency environment.
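The synergy between mitigation and reporting described above can be pictured as a dual-threshold scheme that reuses one detection pass for both purposes. The sketch below is a minimal illustrative stand-in, not the SMOS operational algorithm; the threshold values and the brightness-temperature framing are assumptions for illustration only.

```python
def flag_rfi(tb_samples, clean_thresh=340.0, report_thresh=500.0):
    """Two-tier RFI flagging on brightness temperatures (kelvin).

    'mitigate' marks samples excluded from clean products (low
    threshold, minimizes contamination); 'report' marks strong
    sources worth geolocating and reporting from raw data.
    Thresholds are illustrative placeholders.
    """
    mitigate = [tb > clean_thresh for tb in tb_samples]
    report = [tb > report_thresh for tb in tb_samples]
    return mitigate, report

# Mostly natural scene (~250 K) with moderate and strong interference
samples = [250.0, 260.0, 355.0, 800.0, 255.0]
mitigate, report = flag_rfi(samples)
```

In this toy run the moderate sample is only mitigated, while the strong one is both mitigated and reported, mirroring the distinct goals of the two processing branches.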
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: Anomaly detection using Machine/Deep Learning for Time series data

Authors: Long Phil Chau, Claudio Mammone
Affiliations: Telespazio Germany GmbH
Anomaly detection for time series is an important capability for satellite systems, whether it involves payload (e.g. Earth Observation) or telemetry data. Anomalies are normally the first symptom of a series of events that could lead to disruption of satellite operations, so a robust and efficient way to isolate them is paramount in a context where the number of missions and satellites is constantly growing. Moreover, performing this activity manually (e.g. checking several reports and parameter values to see if something seems off) is becoming unfeasible due to the growing number and complexity of satellite systems. To keep up with all that data and to relieve operators of this burden, automation is necessary. While straightforward automatic anomaly detection systems using predefined limits or known anomalous pattern matching exist and are already in use, they cannot detect more complex anomalies. Machine/Deep Learning (ML/DL) solutions have shown some success in this area. However, they often come with limitations: they usually only work with small systems and are still prone to raising too many false alarms. Another problem found in recent years is the many flaws of the datasets used to evaluate such solutions, including mislabelled ground truths, overly simple anomalies and an unrealistic anomaly density. Furthermore, there was also no evaluation standard for these solutions. Fortunately, there has been progress on this issue with the recent publication of ESA’s Anomaly Detection Benchmark (ESA-ADB), which introduced datasets containing satellite telemetry data from three ESA missions, addressing the identified flaws, and an evaluation standard for ML/DL solutions. The goal of this work is to review and investigate a few state-of-the-art ML/DL models that were not included in the ESA-ADB and to use the results of this investigation to build a new model.
By evaluating the results, we can take a closer look at the challenges and issues current ML/DL models face and build our prototype in a way that addresses them. In this work, we will present an overview of the current state of anomaly detection for time series and the challenges and issues that previous work faces, using the ESA-ADB and its datasets for evaluation. Additionally, we introduce our anomaly detection prototype that aims to address these identified challenges and present its preliminary results.
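As context for the complexity gap described above, the kind of "straightforward" detector that ML/DL methods aim to surpass can be as simple as a trailing-window z-score test on a telemetry channel. This is a generic baseline sketch, not part of ESA-ADB or the authors' prototype; the window size and threshold are arbitrary choices.

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=20, k=4.0):
    """Flag points deviating more than k sigma from a trailing window.

    A classical limit-style baseline: cheap and interpretable, but
    blind to contextual or collective anomalies that stay in-range.
    """
    flags = []
    for i, x in enumerate(series):
        if i < window:
            flags.append(False)  # not enough history yet
            continue
        w = series[i - window:i]
        mu, sd = mean(w), stdev(w)
        flags.append(sd > 0 and abs(x - mu) > k * sd)
    return flags

# Nominal periodic telemetry with one injected spike at index 45
telemetry = [10.0 + 0.1 * (i % 5) for i in range(60)]
telemetry[45] = 25.0
flags = zscore_anomalies(telemetry)
```

A detector like this catches the spike, but a slow drift or an anomalous repeating pattern within normal limits would pass unnoticed, which is exactly where the ML/DL approaches discussed above come in.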
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: Calibration and Characterisation of Platform Magnetometers on-board GRACE-FO

Authors: Ingo Michaelis, Dr Guram Kervalishvili, Jan Rauberg, Dr Martin Rother, Dr Monika Korte
Affiliations: GFZ German Research Centre for Geosciences
Magnetic field data from satellite missions play an important role in characterizing and understanding space weather conditions. Satellite magnetic field observations from different altitudes and local times are necessary to disentangle the complex processes contributing to each observation of Earth’s magnetic field and to study the various individual processes. While dedicated magnetic field satellite missions give good global data coverage at first sight, the coverage is still sparse if simultaneous observations from several different altitudes and with good local time coverage are desired. Moreover, gaps between dedicated magnetic field satellite missions, such as between the CHAMP and Swarm missions from 2010 to 2013, exist and might occur again in the future. However, many satellites that are not dedicated to geomagnetic field measurements carry so-called platform magnetometers (PlatMags) as part of the attitude and orbit control system (AOCS). These satellites have a variety of mission goals, and the PlatMags are additional instrumentation for navigational use only. The accuracy of their magnetic field measurements is not high enough to provide information about the geomagnetic field for mapping or scientific purposes. We have developed an analytical and machine learning tool to calibrate the PlatMags to provide more accurate geomagnetic field data than required for their navigational purpose. We are able to characterize and remove artificial disturbances from the satellite in a way that turns PlatMag data into useful information for scientific application. We show the results of the data calibration processing for the example of the Gravity Recovery and Climate Experiment Follow-On (GRACE-FO) satellites and give an overview of additional telemetry data that are needed for this purpose from the housekeeping data of the satellite mission.
Thus, small changes to the storage and distribution of housekeeping data or to the downlink strategy can improve the scientific return of the mission at minimal extra cost.
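The analytical part of such a calibration can be illustrated by the simplest possible case: recovering a per-axis scale factor and offset against a reference field by ordinary least squares. This is a didactic sketch under strong simplifying assumptions (no misalignment, no time-varying spacecraft disturbances), not the actual GRACE-FO processing, and the numbers are synthetic.

```python
def fit_axis_calibration(raw, ref):
    """Ordinary least squares for raw ≈ scale * ref + offset.

    'ref' would in practice come from a geomagnetic field model
    evaluated along the orbit; here it is synthetic.
    """
    n = len(raw)
    mean_ref = sum(ref) / n
    mean_raw = sum(raw) / n
    cov = sum((r - mean_ref) * (m - mean_raw) for r, m in zip(ref, raw))
    var = sum((r - mean_ref) ** 2 for r in ref)
    scale = cov / var
    offset = mean_raw - scale * mean_ref
    return scale, offset

# Synthetic single-axis data (nT): true scale 1.02, offset 150 nT
ref_field = [20000.0 + (37 * i) % 5000 for i in range(100)]
raw_meas = [1.02 * b + 150.0 for b in ref_field]
scale, offset = fit_axis_calibration(raw_meas, ref_field)
```

The real problem adds cross-axis coupling and disturbance terms driven by housekeeping telemetry (currents, temperatures), which is where the machine learning component described above takes over from this purely linear picture.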
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: Horizon Scanning the Satellite Sensor Market: Aligning Emerging Technical Capabilities with the Goldrush in Sustainability Data Products

Authors: Liz Scott, Jean Joy
Affiliations: Satellite Applications Catapult
With the increasing frequency of launches and a wave of new technologies appearing on the satellite sensors planned for orbit, the next few years look to be an exciting time for the development of Earth Observation (EO) based data products. This presentation highlights the findings of a horizon scanning project designed to assess the emerging and future landscape, over the next three years, of satellite sensors in orbit, with a focus on their potential to collect data that can be used by product developers to address critical environmental challenges. The study examines the growing number of commercial satellite sensors from the UK and Europe, alongside government missions, to identify opportunities for downstream innovation in data products and services, as well as to identify market gaps. New environmental regulations, such as those born out of the Taskforce for Climate-related Financial Disclosures (TCFD) and the Taskforce for Nature-related Financial Disclosures (TNFD), as well as the sustainable finance sector, are creating goldrush conditions for new data products to flood the market. Downstream from the technology in orbit, data product creators are already building services designed for end users who may have no idea that the insights they are ingesting are derived from space. Historically, users have needed a degree of geospatial or data-interpretation expertise to understand and navigate web mapping platforms, but while these still have their place, EO-derived information can now find its way into dashboards and reports understandable by a wider range of end users. Opportunities exist now for both start-ups and established commercial companies to gain first-mover advantage by releasing products that address specific market concerns, for example Biodiversity Net Gain in England, with the potential for products to be tailored to a particular audience affected by new legislation.
This could be asset managers, local authorities, construction companies or ecologists, all wanting information on the same location but presented to suit their requirements. Currently there are limitations to the utility of insights that space data can offer, but with products based on coarse data already in existence and end users seeing the potential, the ground is being prepared. We expect an influx of new products to emanate from the forthcoming sensors, with finer spatial resolution, more frequent revisits and much higher spectral resolution. Our analysis categorises satellite sensors across Multispectral, Hyperspectral, SAR, LIDAR, Thermal, RF, Emission Detection, and Atmospheric types. Through this analysis, we aim to provide a comprehensive understanding of how the evolving satellite sensor market can contribute to global sustainability priorities by providing new opportunities for innovation in downstream data products and services.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: B.04.04 - POSTER - Spaceborne data for the analysis of Natural Hazards and AI: new insights from Artificial Intelligence technologies and recent missions

Natural hazards can be defined as “all atmospheric, hydrologic, geologic, and wildfire phenomena that, because of their location, severity, and frequency, have the potential to affect humans and their environment”. In 2023, the Emergency Events Database (EM-DAT) recorded a total of 399 disasters related to natural hazards. These events resulted in 86,473 fatalities and affected 93.1 million people. Therefore, analysing natural hazards, with a particular focus on mapping and monitoring, is a key activity for land management and risk reduction policies, as pledged by many national governments and international institutions. Compared to traditional field-based surveys, spaceborne remote sensing technologies have proven to be of utmost importance due to their high spatial and temporal coverage and reduced costs.
The current availability of large volumes of freely accessible data makes it possible to retrieve relevant information as well as to develop new techniques and methodologies for the investigation, characterization, monitoring, and modeling of hazards. Alongside remote sensing, Artificial Intelligence (AI) and Machine Learning (ML) are now an important component of the analysis of large EO datasets. These new approaches have widely demonstrated their suitability in many scientific fields, being characterized by high accuracy and specific advantages for different applications.
This session is intended to collect recent and promising advances in the use of AI/ML for the processing of (optical, multispectral, hyperspectral, radar and thermal) satellite data and the analysis of geohazards, such as landslides, earthquakes, subsidence and volcanic eruptions, and hydrometeorological hazards, such as wildfires, tsunamis, floods, storms, avalanches, etc. The outcome of the session will provide an insightful state-of-the-art overview of current and future perspectives on EO capabilities for studying natural hazards and supporting risk reduction policies.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: EarthAnomalyNet: A Framework for Spatio-Temporal Anomaly Detection in Satellite Image Time Series

Authors: Paul Höhn, Dr. Konrad Heidler, Prof. Dr.-Ing. habil. Xiaoxiang Zhu
Affiliations: German Aerospace Center (DLR), Technical University of Munich (TUM)
Natural hazards such as landslides, wildfires, earthquakes and floods pose significant threats worldwide and require efficient analysis and monitoring for effective land management and risk reduction policies. While advances in Earth observation (EO) technologies have generated abundant satellite image time series (SITS) from diverse sources, there is a critical need for specialised models capable of spatio-temporal anomaly detection in multimodal data. Existing approaches to anomaly detection in SITS have significant limitations. Many rely on pixel-level analysis using pre-calculated features such as NDVI, which fail to capture complex spatial dependencies and coherent patterns. Methods such as Dynamic Time Warping (DTW) and Hidden Markov Models (HMMs), while effective for temporal analysis, struggle with high-dimensional spatial data due to oversimplified assumptions. Similarly, Long Short-Term Memory (LSTM) networks, commonly used to model temporal sequences, often fall short when integrating multimodal data and representing spatial patterns. Recent models, such as U-TAE, ConvLSTM and UNet3D, have made progress by jointly exploiting spatial and temporal information. However, these models have primarily been developed for semantic segmentation tasks and remain largely unexplored in the context of anomaly detection. This highlights a significant research gap that we aim to address by developing models specifically tailored for spatio-temporal anomaly detection in multimodal SITS. To address these challenges, we propose EarthAnomalyNet, a novel framework that jointly exploits spatial and temporal dependencies without relying on pre-extracted features or pixel-level analysis. Building on previous work such as SITS-Former, EarthAnomalyNet processes patches encoded with time series information and predicts anomalies as binary masks that identify both the spatial extent and the specific time steps of anomalies.
Our model extends the SITS-Former architecture to anomaly detection while maintaining compatibility with multi-temporal semantic segmentation tasks. We enhance its effectiveness by pre-training on a large, diverse EO dataset and fine-tuning on a specialised anomaly detection dataset that integrates multimodal data sources. This approach allows for high spatial and temporal precision in anomaly detection and provides a unified framework without intermediate processing steps. We evaluate the performance of EarthAnomalyNet against established architectures, including U-TAE, UNet3D, UNet2D-ConvLSTM and TSViT, using the Sen12Landslides dataset, which integrates Sentinel-1, Sentinel-2 and elevation data for landslide detection. In addition, we plan to test the model on datasets such as OSCD, OMBRIA, CalFire, and various deforestation datasets, adapting them as needed for anomaly detection tasks. By using a variety of datasets, we aim to demonstrate the versatility of EarthAnomalyNet in detecting different anomalies, including rapid events such as wildfires, floods and landslides, as well as gradual changes such as construction. This approach highlights the potential of EarthAnomalyNet to generalise across different spatio-temporal anomalies, providing a robust framework for monitoring and analysing geohazards and land-use change. Our methodology can be adapted to detect different types of anomalies in multimodal SITS, providing a powerful tool for natural hazard analysis. By combining EO data with advanced machine learning techniques, EarthAnomalyNet offers a practical and scalable solution, especially for regions with limited accessibility. The results of this model can improve our understanding of natural hazards, which often manifest themselves as anomalies on the Earth's surface, and enable more precise identification and monitoring. 
This approach represents a significant step forward in natural hazard analysis and management, and promotes the integration of artificial intelligence into satellite data processing and geohazard detection. Ultimately, EarthAnomalyNet contributes to more effective risk reduction strategies and improved land management policies.
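The prediction target described above, binary masks locating anomalies in both space and time, can be pictured with a toy example. The dimensions and anomaly placement below are invented purely for illustration; they do not reflect the Sen12Landslides patches or the model's actual output format.

```python
# Toy spatio-temporal anomaly mask over a satellite image time
# series: mask[t][i][j] is True where pixel (i, j) is anomalous
# at time step t. Sizes are illustrative assumptions.
T, H, W = 6, 32, 32
mask = [[[False] * W for _ in range(H)] for _ in range(T)]

# An anomaly (e.g. a landslide scar) appearing at time step 4
# in a 4x4 pixel region and persisting afterwards
for t in range(4, T):
    for i in range(10, 14):
        for j in range(10, 14):
            mask[t][i][j] = True

# Both the onset time and the spatial extent are recoverable
first_step = min(t for t in range(T) if any(any(row) for row in mask[t]))
area = sum(mask[4][i][j] for i in range(H) for j in range(W))
```

This dual space-time labelling is what distinguishes the task from plain per-image segmentation, where the onset time of the change would be lost.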
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: InSAR and Machine Learning for Landslide Susceptibility Mapping in Western Greece

Authors: Dr Stavroula Alatza, Constantinos Loupasakis, Charalampos Kontoes, Alexis Apostolakis, Martha Kokkalidou, Nikolaos S. Bartsotas, Nikolaos Antoniadis
Affiliations: National Observatory of Athens, Operational Unit BEYOND Centre for Earth Observation Research and Satellite Remote Sensing IAASARS/NOA, Laboratory of Engineering Geology and Hydrogeology, School of Mining and Metallurgical Engineering, National Technical University of Athens, School of Electrical and Computer Engineering - National Technical University of Athens
Landslides are among the most severe geohazards worldwide, posing a serious threat to human lives and property. In recent decades, EO data and remote sensing techniques have significantly contributed to the detection and monitoring of landslides. Specifically, SAR interferometry is a well-established method for identifying ground deformation phenomena, revealing areas prone to landslides and slope instability. An exceptionally large dataset of landslides was created with the use of InSAR time-series analysis on Sentinel-1 data. Leveraging the national-scale inventory of Line-of-Sight ground displacements provided by the Greece InSAR project, over 3,000 landslides were identified in Western Greece. InSAR processing was performed with the P-PSI processing chain of the Operational Unit BEYOND Center of NOA. P-PSI is a fully automated, parallelized processing chain for the implementation of persistent scatterer interferometry (PSI) techniques on big volumes of SAR EO data. Line-of-Sight displacements of the Greece InSAR project for Western Greece were evaluated by geotechnical experts, and an extremely large inventory of landslide and non-landslide locations was created for landslide susceptibility mapping by applying ML techniques. The landslide dataset is located in three of the most landslide-prone geotectonic zones of Greece, specifically the Ionian, Gavrovo, and Pindos zones. Key landslide causative factors, such as topographical, geological, meteorological and hydrological parameters, and vegetation, were integrated into the model. An innovative aspect is the addition of the geotectonic zone as a feature. The results of the ML model trained on this dataset showed that the geotectonic zone index facilitated the migration of the model to additional areas within the same geotectonic zone. Leveraging EO data and InSAR techniques, a unique landslide dataset was created in the most landslide-prone areas of Greece.
To evaluate the informativeness of this dataset, we trained a baseline machine learning model using the XGBoost algorithm. The model demonstrated very high validation metrics for landslide detection, and the resulting landslide susceptibility map was assessed as highly accurate by domain experts. Finally, the addition of the geotectonic zone as a dataset feature facilitates the generalization of the model and sets a strong base for the creation of a national-scale landslide susceptibility mapping and forecasting system.
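To make the role of the geotectonic-zone feature concrete, the sketch below assembles one training sample for a gradient-boosted classifier such as XGBoost, one-hot encoding the zone alongside a few numeric causative factors. The feature names and values are illustrative assumptions, not the study's actual feature set.

```python
# The three geotectonic zones covered by the dataset described above
ZONES = ["Ionian", "Gavrovo", "Pindos"]

def make_feature_vector(slope_deg, rainfall_mm, ndvi, zone):
    """Assemble one sample for a susceptibility classifier.

    The zone is one-hot encoded so a tree ensemble can split on it,
    letting the model transfer within the same geotectonic zone.
    Numeric factors here are an illustrative subset (slope, rainfall,
    vegetation index).
    """
    one_hot = [1.0 if zone == z else 0.0 for z in ZONES]
    return [slope_deg, rainfall_mm, ndvi] + one_hot

sample = make_feature_vector(32.0, 1200.0, 0.35, "Pindos")
```

With the zone encoded as an explicit feature, a model trained on, say, Ionian-zone landslides can be applied to unlabelled areas of the same zone, which is the generalization behaviour reported above.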
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Advanced Thunderstorm Nowcasting with AI and Satellite Data for Enhanced Safety and Preparedness

Authors: Lukáš Ivica, Juraj Bartok, Ivan Martynovskyi, Ivana Bartoková, Irina Malkin Ondík
Affiliations: MicroStep-MIS
Anthropogenic climate change is expected to affect, and mainly to increase, the frequency and intensity of various types of extreme events. Among them are thunderstorms, which generate flash floods, tornadoes and other adverse phenomena that may cause damage to infrastructure and economic losses, but also threaten the safety, health and lives of inhabitants in the affected areas. The customers of the meteorological technology company MicroStep-MIS regularly emphasize their need for more accurate and easily available thunderstorm nowcasting, because an accurate warning of a thunderstorm enables people to take preventive measures to protect their property and health. We address this need in the AI4THOR project – AI for Improved Thunderstorm Nowcasting from MSG and MTG (under the 1st ESA Slovak RPA call). Most state-of-the-art thunderstorm nowcasting is based on extrapolation of radar data. We introduce thunderstorm nowcasting that is primarily based on satellite data and an AI technique – deep learning using a U-net – because it can better grasp the complex non-linear features of natural processes. Satellite data (from ESA and EUMETSAT) offer advantages over radar data: they are available globally, including over seas and less developed countries, and provide a free, extensive archive of over 10 years of data. This long-term dataset is crucial for training AI models, as more data leads to higher accuracy. In contrast, radar data often suffer from limited historical records and inconsistent quality, posing challenges for effective model training. We use the CRR (convective rainfall rate) product as the main satellite input, with additional numerical weather prediction inputs such as CAPE and 850 hPa winds. Ongoing validation shows improvement over baseline methods.
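The multi-source input described above can be pictured as channel stacking before the data enter a U-net. The grid size, the channel list and their ordering below are assumptions for illustration, not the AI4THOR configuration.

```python
import numpy as np

H, W = 64, 64  # illustrative grid size
rng = np.random.default_rng(0)  # placeholder data in lieu of real fields

crr = rng.random((H, W))    # satellite convective rainfall rate (CRR)
cape = rng.random((H, W))   # NWP convective available potential energy
u850 = rng.random((H, W))   # NWP 850 hPa wind, zonal component
v850 = rng.random((H, W))   # NWP 850 hPa wind, meridional component

# Stack into a single (channels, H, W) tensor, the usual input
# layout for a convolutional U-net
x = np.stack([crr, cape, u850, v850], axis=0)
```

All fields must first be regridded to a common projection and resolution; the stacking itself is then a cheap final step of the input pipeline.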
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Landslide Mapping from Sentinel-2 Imagery Through Change Detection

Authors: Mr. Tommaso Monopoli, Mr. Fabio Montello, Claudio Rossi
Affiliations: Links Foundation, Technical University of Denmark (DTU)
INTRODUCTION
Landslides are one of the most common catastrophic geological events, causing extensive economic losses, posing a serious threat to human settlements, and often resulting in fatalities. The damage that landslides cause to life and property has been ever-increasing over recent years, mainly because of the wider spread of human activities and urban development in areas that are more prone to landslides. It is widely recognized that rapid landslide mapping is crucial for effective disaster response and damage mitigation, especially in the immediate aftermath of such catastrophic events. The timely discovery and delineation of areas impacted by landslides can significantly enhance the overall disaster management process, enabling authorities to make informed decisions, allocate resources efficiently, and estimate the impact on affected communities and infrastructures. In this work, we introduce a novel deep learning model for the automatic mapping of landslides from Sentinel-2 images. The model is designed to address landslide mapping as a change detection task: given a pair of Sentinel-2 images taken respectively before and after a landslide-triggering event, find newly occurred landslides, which are visible in the “post” image but not in the “pre”. Our model is also designed to integrate single-temporal Digital Elevation Model (DEM) data along with bitemporal Sentinel-2 image pairs. Our experiments show that the addition of DEM data can boost accuracy on the automatic landslide mapping task.
The main contributions of this work are: (i) the development of a novel bitemporal-bimodal deep learning architecture for change detection, which beats existing state-of-the-art change detection models on the task of automatic landslide mapping from Sentinel-2 images; (ii) the creation of a globally diverse geodatabase encompassing and harmonizing several manually validated open-access landslide inventories.
RELATED WORKS AND MOTIVATION
Recent research works have explored innovative approaches for automatic landslide mapping based on machine learning and deep learning techniques. The wider availability of landslide inventories and of commercial and non-commercial satellite data has fostered the creation of supervised machine learning methodologies for the landslide delineation task. However, despite the promising results showcased, existing works have several limitations:
- Most previous studies focus on a single ecoregion, i.e., a geographical area exhibiting homogeneous properties in terms of lithology, morphology, landforms, soil composition, vegetation type, etc. The limited geographical scale of the training data inherently hampers a model's ability to generalize to new, visually different areas. Therefore, effort is needed to create an extensive database of landslide events, with a focus on heterogeneity of ecoregions, landslide sizes and landslide-triggering causes.
- Most works deal with commercial (very) high-resolution data. While high-resolution imagery can capture fine-grained details that might be useful to delineate even small landslides and improve segmentation performance, acquiring the data is relatively expensive. This makes using high-resolution satellite imagery for emergency response use cases impractical, especially when the region of interest spans several km^2. Using medium-resolution non-commercial satellite data, such as Sentinel-2 optical imagery, can be a solution, but it is still not as widely explored in the literature.
- Existing landslide segmentation models typically expect a single image acquired after the landslide event. This leads to the detection of both newly activated landslides, associated with the event of interest, and old landslides that may have happened in the past. This ambiguity is not desirable in emergency response scenarios, in which it is critical to rapidly map the impact area of newly activated events. Therefore, we propose to frame landslide delineation as a change detection task.

This work introduces a novel methodology, dataset and deep learning model to address these issues.

MATERIALS AND METHODS

To tackle the visual homogeneity problem, we harmonize and combine several recent open-access landslide inventories from several ecoregions of the world. The resulting database comprises over 34,000 manually validated landslide polygons, which we then use to train our landslide delineation model. The database encompasses landslides of different sizes (from very small, ~100 m^2, to very large, ~100,000 m^2) triggered by different catastrophic events (earthquakes, heavy rainfall). Each inventory in the database is associated with the date of the natural disaster that triggered the landslides in the region. For each region of interest of each inventory in the database, pre- and post-event Sentinel-2 L2A images are acquired, roughly ranging between 3 months before and 1 month after the event. Sentinel-2 L2A bands have a resolution ranging between 10 and 60 m/px. We manually discard S2 images acquired in winter if the region is covered in snow on the date of acquisition, since snow can cover landslide scars on the ground. Because clouds are frequently present in the images, we perform cloud detection on the retrieved images; pixels classified as clouds are ignored during training. Moreover, for each region of interest we also retrieve the ALOS PALSAR DEM, which has a spatial resolution of 30 m/px.
To enrich the data, we compute additional products from the DEM, namely terrain slope and aspect direction. To train a segmentation model, some preprocessing steps are applied. First, all bands of the S2 images and the DEM data are upscaled to a common resolution of 10 m/px. Then, we consider all possible S2 pre-post pairings for each region. We tile each S2 image pair, together with its corresponding ground-truth mask and DEM data, into 256x256 patches. To enhance the visual heterogeneity of the dataset and increase the generalization capability of the models, we apply several data augmentation techniques to the patches at training time. These include geometric transformations (applied to S2, DEM and ground-truth mask data alike), such as random horizontal/vertical flips and random rotations, and color transformations (applied only to S2 patches), such as brightness and contrast jittering and histogram matching.

EXPERIMENTS AND RESULTS

We frame the landslide mapping task as a change detection problem: delineating landslides that are present in a post-event S2 image but not in a pre-event one. We evaluated several state-of-the-art change detection models on the task of rapid landslide mapping: Unet-Siam-Diff with a ResNet50 encoder, BIT, SEIFNet and TinyCD. However, these bitemporal models only support pre-post image pairs, so DEM data cannot be exploited. For this reason, we introduce a novel bitemporal and bimodal change detection model that can handle both a bitemporal pair of remote sensing images (e.g., an S2 image pair) and an additional single-temporal data stack (e.g., DEM data) that enriches the bitemporal pair with potentially useful contextual information. Its architecture is a modified Unet-Siam-Diff with the addition of a simple convolutional fusion module. We refer to this novel model as BBUnet (Bitemporal-Bimodal Unet). We split our dataset into training, validation and test sets.
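The bitemporal-bimodal fusion idea can be sketched at a toy scale. The snippet below is a hypothetical, pixelwise stand-in with random weights (1x1 projections instead of the full Siamese Unet encoder and convolutional fusion module of BBUnet), intended only to illustrate how a weight-shared difference branch and a single-temporal DEM branch can be combined:

```python
import numpy as np

rng = np.random.default_rng(0)

def pixelwise_encoder(x, W):
    """Shared 1x1 'convolution': (C,H,W) -> (F,H,W). Stand-in for a Siamese encoder."""
    return np.tensordot(W, x, axes=([1], [0]))

def fusion_forward(s2_pre, s2_post, dem_stack, W_enc, W_dem, W_head):
    # Siamese branch: identical weights on pre and post, then feature difference
    diff = np.abs(pixelwise_encoder(s2_post, W_enc) - pixelwise_encoder(s2_pre, W_enc))
    # Fusion module: encode the single-temporal DEM stack and concatenate
    f_dem = pixelwise_encoder(dem_stack, W_dem)
    fused = np.concatenate([diff, f_dem], axis=0)
    # Segmentation head -> per-pixel change (landslide) probability
    logits = np.tensordot(W_head, fused, axes=([1], [0]))[0]
    return 1.0 / (1.0 + np.exp(-logits))

H = W = 8
s2_pre, s2_post = rng.normal(size=(2, 4, H, W))   # 4 S2 bands, pre and post
dem = rng.normal(size=(3, H, W))                  # DEM + slope + aspect
W_enc, W_dem = rng.normal(size=(16, 4)), rng.normal(size=(16, 3))
W_head = rng.normal(size=(1, 32))
prob = fusion_forward(s2_pre, s2_post, dem, W_enc, W_dem, W_head)
print(prob.shape)  # (8, 8)
```

The property mirrored from the described architecture is weight sharing between the pre and post branches, so that the feature difference highlights change rather than scene content.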
We train the models on the training split and apply early stopping once the F1 score on the validation set stops improving. Finally, we test each model on the held-out test set, comparing performance metrics such as F1 score, Precision and Recall. Our BBUnet model achieves the highest F1 score (34.8%) and Precision (58.2%) among all tested architectures. Overall, absolute performance remains modest: despite all efforts put into dataset construction and the choice of data augmentations, we argue that the imperfect quality of the manually mapped landslide inventories and the relatively low resolution of Sentinel-2 compared to typical landslide sizes constitute an intrinsic limitation, as well as the major challenge of this research. Nonetheless, the achieved precision values are comparatively high, meaning that the models learn to correctly characterize and delineate landslides without too many false positives. These results also suggest that adding DEM data to support Sentinel-2 images benefits the landslide delineation task. This could be expected, as the occurrence of landslides is known to correlate strongly with terrain features such as slope and aspect.

CONCLUSION

We introduced BBUnet, a novel bitemporal and bimodal change detection architecture, and applied it to the task of automatic landslide mapping from Sentinel-2 images and DEM data. This model outperforms existing state-of-the-art change detection models on this task, demonstrating how DEM data can help compensate for Sentinel-2's relatively low resolution. To foster future research and scientific cooperation on landslide segmentation, we share our landslide database, pretrained model and code on GitHub. To the best of our knowledge, our geodatabase is the largest and most heterogeneous collection of landslide events available in open access.
Several aspects of our research remain open, including meticulous data quality assessment for the existing landslide inventories and the handling of incorrectly mapped or missing landslides in an inventory. These challenges will be addressed in future work, along with improvements to the proposed methodology.

ACKNOWLEDGMENT

This work was partially funded by the Horizon Europe projects ATLANTIS (GA n.101073909) and RescueME (GA n.101094978), and by the project NODES through the MUR M4C2 1.5 of PNRR under Grant ECS00000036.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Earthquake Damage Level Estimation Using Very High-Resolution Imagery and Deep Learning

Authors: Helene Weiß, Dr. Ridvan Kuzu, Yao Sun, Dr. Corneliu Octavian Dumitru
Affiliations: DLR
This study introduces a methodology for automating the estimation of building damage levels following the February 2023 earthquakes in Southeastern Turkey, which devastated 11 cities and affected over 15 million people. The approach utilizes very high-resolution (VHR) satellite imagery and state-of-the-art deep learning techniques to deliver efficient and accurate assessments essential for disaster response and urban resilience efforts. The proposed method employs convolutional neural networks (CNNs) trained on a carefully curated dataset comprising building footprints paired with pre- and post-event optical and SAR imagery, as well as validated field damage reports. By leveraging multi-modal satellite data, including SAR imagery to mitigate the limitations of optical data in cloudy or low-light conditions, this study ensures a robust and operationally reliable framework. Damage assessment is treated as a multi-class classification problem, enabling the systematic evaluation of damage levels across a broad range of building types and urban settings. To address challenges such as imbalanced class distributions and the scarcity of annotated data in disaster scenarios, the framework incorporates advanced data augmentation and loss-balancing techniques to enhance model performance. Furthermore, the use of SAR-optical data fusion improves damage detection accuracy by combining complementary information from both imaging modalities. This multi-modal approach is particularly effective in densely populated urban areas where occlusions and complex structural geometries pose significant challenges. The methodology is validated using real-world earthquake data, demonstrating its effectiveness in accurately identifying and classifying building damage. The framework's adaptability to diverse urban contexts and its capability for rapid deployment make it a valuable tool for disaster management operations. 
Key contributions include the introduction of a reproducible pipeline for damage classification using VHR imagery and the benchmarking of CNN architectures optimized for earthquake damage assessment. This research highlights the potential of VHR imagery and advanced machine learning to enhance post-disaster decision-making processes. Future work will focus on expanding the dataset to include a wider range of geographic and structural scenarios, as well as refining the algorithms to improve scalability and generalizability for global disaster response applications.
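The loss-balancing step mentioned above can be illustrated with a common scheme, inverse-frequency class weighting combined with a weighted cross-entropy; this is a generic sketch on toy damage labels, not the authors' exact configuration:

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Weight each damage class inversely to its frequency, normalized to mean 1."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    w = counts.sum() / np.maximum(counts, 1) / n_classes
    return w / w.mean()

def weighted_cross_entropy(probs, labels, weights):
    """Mean class-weighted negative log-likelihood of the true class."""
    eps = 1e-12
    per_sample = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(weights[labels] * per_sample))

# Toy 4-level damage labels: "no damage" dominates, "collapsed" is rare
labels = np.array([0] * 90 + [1] * 5 + [2] * 3 + [3] * 2)
w = inverse_frequency_weights(labels, 4)
print(w.round(2))  # rare classes receive larger weights
probs = np.full((100, 4), 0.25)
print(round(weighted_cross_entropy(probs, labels, w), 3))
```

Upweighting rare classes keeps the classifier from collapsing onto the majority "no damage" class, which is the failure mode imbalanced disaster datasets typically exhibit.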

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Risk Assessment of Land Subsidence in the Rhineland Coalfield in Germany: A Model Cluster Approach Integrating Remote Sensing, Deep Learning and Multi-Source Geological and Environmental Factors

Authors: Maoqi Liu, Prof. Mahdi Motagh, Dr. Marzieh Baes, Dr. Yao Sun, Dibakar Kamalini Ritushree
Affiliations: School of Resources & Environment and Safety Engineering, Hunan University of Science and Technology, Helmholtz Centre Potsdam - German Research Centre for Geosciences (GFZ), Institute for Photogrammetry and GeoInformation, Leibniz University Hannover, Data Science in Earth Observation, Technical University of Munich
Abstract: Studies have shown that human activities such as natural resource extraction can lead to land subsidence, fault reactivation, and damage to infrastructure, seriously threatening the lives and properties of residents around mining areas. The Rhineland Coalfield is the largest lignite-producing area in Europe, located in Germany’s most densely populated state of North Rhine-Westphalia between Aachen and Cologne. Results from the European Ground Motion Service (EGMS) illustrate a large deformation zone of approx. 600 km^2 around the Hambach open-pit mine, encompassing several major and small cities including Düren, Niederzier, Elsdorf, Bergheim, and Kerpen. Land subsidence and fault reactivation have caused damage to civil engineering structures, residential property, and underground utilities in this region. Based on the surface deformation induced by mining in the Rhineland Coalfield, this study systematically explores the influence of the various environmental and geological factors behind this hazardous process using a model cluster approach. The auxiliary factors utilized include mine, urban, fault, and surface lithology distribution maps, geological features, and groundwater level change time series. Together with the historical displacement time series and the subsets of the six auxiliary factors (64 groups of datasets in total), the goal of the models is to predict the current displacement map. To effectively extract spatial-temporal features of the input data, the proposed model uses ResNet as a backbone, followed by a widely used Transformer model as the head to predict the displacement map. This displacement estimation framework allows both short-term and long-term forecasting. The model is trained from scratch on each of the 64 dataset groups. Our results reveal that, compared with baseline models, our model shows the highest consistency across the different datasets.
Subsequently, we use the 64 models’ prediction errors to determine the degree of importance of the six auxiliary factors. We assume that importance is linearly related to the performance improvement over the model trained only with historical displacement data: one-hot coding combined with a design matrix is used to establish a model with the six auxiliary factors as independent variables, and the solved coefficients of the independent variables are taken as the importance of the auxiliary factors. The derived coefficients are normalized to sum to 1. Finally, the pixel-based risk level is calculated as the weighted sum of the values of the six auxiliary factors with the derived coefficients. Note that the groundwater level time series are averaged along the temporal dimension. The final risk assessment shows that the areas near the mine and the faults are high-risk areas, similar to an assessment using only the InSAR deformation results. It also reveals that some places far away from the mine, where the InSAR results show only small displacements, are high-risk areas as well; this is related to geological factors and cannot be found by assessing the InSAR results alone. The main contribution of this study is to take both static natural resource maps and displacement dynamics into consideration during risk assessment. The generated risk map can support mine-site exploitation planning, reduce the potential for extreme disasters, and minimize the impact caused by human mining activities. Keywords: Deep Learning; Risk Assessment; Multi-modal Data; Model Cluster; Disaster Mitigation
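The factor-importance procedure (0/1 design matrix over factor subsets, least-squares coefficients, normalization to sum 1, weighted-sum risk map) can be sketched as follows; the per-factor effect values and factor maps are synthetic placeholders, not results from the study:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Design matrix: one row per trained model, one 0/1 column per auxiliary factor
# (64 rows = every subset of the six factors)
X = np.array(list(product([0, 1], repeat=6)), dtype=float)

# Hypothetical per-factor effects on the performance improvement
true_effect = np.array([0.30, 0.25, 0.15, 0.12, 0.10, 0.08])
improvement = X @ true_effect + rng.normal(scale=0.01, size=64)

# Least squares with an intercept; coefficients act as factor importances
A = np.column_stack([np.ones(64), X])
coef, *_ = np.linalg.lstsq(A, improvement, rcond=None)
weights = coef[1:] / coef[1:].sum()  # normalize importances to sum to 1

# Pixel-based risk level: weighted sum of the six (normalized) factor maps
factor_maps = rng.random(size=(6, 4, 4))
risk = np.tensordot(weights, factor_maps, axes=([0], [0]))
print(weights.round(2), risk.shape)
```

With 64 subset models the design matrix is well conditioned, so the least-squares coefficients recover each factor's marginal contribution even with noisy per-model scores.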

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Multi-source remote sensing approach for large-scale high resolution mapping of seasonal wildfire probability of occurrence

Authors: Clément Licour, Ewan Sean, Jean Loubet, Aurélien de Truchis, Valentin Albrecht, Loïck M’Pemba
Affiliations: Kayrros SAS
Wildfires pose a significant environmental challenge, impacting ecosystems, human populations, and global climate systems. Accurate prediction of wildfire occurrence is critical for effective mitigation strategies and resource allocation. This study introduces a novel framework for large-scale, high-resolution mapping of wildfire probability of occurrence using multi-source remote sensing data. The approach integrates optical satellite observations (derived from Sentinel-2) with climatic and topographic datasets to model the spatial and temporal variability of fire drivers, including vegetation status, meteorological conditions, and anthropogenic factors. The framework supports hazard modeling at both seasonal and sub-seasonal scales, offering fine-grained temporal insights into wildfire risk dynamics. A machine learning model based on the XGBoost algorithm is developed and trained on historical wildfire footprint data to estimate the probability of individual pixels being affected by a wildfire. The modelling approach incorporates a diverse set of predictors, including vegetation indices (e.g., Normalized Difference Vegetation Index) and vegetation water content, topographic characteristics such as slope or elevation, meteorological variables like temperature, precipitation, and wind speed, and indicators of human activity such as proximity to urban areas or infrastructure (e.g., roads or overhead power lines). Feature importance analysis is conducted to provide interpretability and insights into the relative contribution of each predictor to the model's output. The methodology generates comprehensive, high-resolution, wall-to-wall probability maps of wildfire occurrence. These maps, designed for annual updates, enable the integration of the most recent environmental and climatic data, ensuring their relevance for dynamic wildfire risk assessments.
The model is validated against historical wildfire events from multiple regions, demonstrating robust predictive performance across varied geographic contexts, vegetation types, and climatic conditions. This approach based on the analysis of natural hazards and machine learning offers a scalable, scientifically robust, and interpretable tool to support policymakers, land managers, emergency response teams, and insurance companies in wildfire mitigation and preparedness efforts, enabling informed decision-making and targeted resource allocation.
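The per-pixel probability modelling can be illustrated at a toy scale. The sketch below substitutes a plain gradient-descent logistic regression for XGBoost (which the study actually uses), on synthetic predictors; the predictor names and the generative rule are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-pixel predictors: NDVI, temperature anomaly, proximity to roads
n = 2000
X = np.column_stack([rng.random(n), rng.normal(size=n), rng.random(n)])

# Hypothetical generative rule: low NDVI (dry vegetation) and heat raise fire odds
logit = -1.0 - 2.5 * X[:, 0] + 1.5 * X[:, 1] - 1.0 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Minimal logistic regression (XGBoost would replace this estimation step)
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / n
    b -= 0.1 * g.mean()

prob_map = 1 / (1 + np.exp(-(X @ w + b)))  # per-pixel wildfire probability
print(prob_map[:5].round(3))
```

In the actual framework, the gradient-boosted trees additionally capture non-linear interactions between drivers, which a linear model like this cannot.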

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Large InSAR Dataset for Landslide Analysis at National Scale: Challenges and New Insights

Authors: Camilla Medici, PhD Alessandro Novellino, Claire Dashwood, Silvia Bianchini
Affiliations: Earth Sciences Department, University of Florence, British Geological Survey
The increasing availability of Earth Observation (EO) data, combined with advancements in processing techniques, offers unprecedented opportunities for monitoring and mitigating natural hazards. However, it also brings significant challenges, such as the need for automated methodologies to handle the vast volume of InSAR data for landslide analysis. This study focuses on addressing these challenges within the framework of the European Ground Motion Service (EGMS), which provides freely accessible Sentinel-1 data at a continental scale with millimetric accuracy. In this context, a semi-automatic methodology has been developed and applied for processing large InSAR datasets for landslide mapping and monitoring, demonstrating its application at the national scale in Great Britain. Spatial clustering and machine learning techniques have been employed to represent landslide phenomena more reliably than single-point data and simultaneously facilitate the management of massive datasets. Great Britain has been chosen as the study area because, despite being considered a low landslide-risk environment compared to other European countries, landslides remain a critical problem, particularly for the transport network. This is exemplified by the £38.6 million in damage caused to the railway system in 2023 alone. Additionally, while the National Landslide Database (NLD) provides valuable information, it is affected by several weaknesses. Specifically, less than half of the reported landslides are mapped as polygons, their state of activity is unknown, and the inventory is biased towards densely populated areas due to its data collection methods. The proposed approach, starting from the EGMS data, relies on a semi-automatic tool developed at the Centre Tecnològic de Telecomunicacions de Catalunya to identify the Active Deformation Areas (ADAs).
These ADAs have then been characterised spatially and temporally, enabling the classification of each Slope Unit (SU) in Great Britain as landslide-affected or not. This classification has been compared with the NLD to create an "InSAR landslide inventory map", highlighting critical areas requiring further investigation for improved landslide risk management. This study demonstrates the potential of combining InSAR data with advanced analytical techniques to overcome limitations in existing inventories and in big SAR data management, providing valuable insights into landslide risk management.
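The Slope Unit classification step can be illustrated with a minimal decision rule: flag an SU when a sufficient fraction of its measurement points deform faster than a velocity threshold. The thresholds below are arbitrary illustrations, and this is not the CTTC ADA tool:

```python
import numpy as np

def classify_slope_units(su_ids, velocities, vel_thresh=10.0, frac_thresh=0.3):
    """Flag a Slope Unit (SU) as landslide-affected when a sufficient fraction
    of its InSAR measurement points exceed a velocity threshold (mm/yr)."""
    flags = {}
    for su in np.unique(su_ids):
        v = np.abs(velocities[su_ids == su])
        flags[int(su)] = bool(np.mean(v > vel_thresh) >= frac_thresh)
    return flags

# Toy example: SU 1 is mostly stable, SU 2 is actively deforming
su_ids = np.array([1, 1, 1, 1, 2, 2, 2, 2])
vel = np.array([1.0, 2.0, -3.0, 1.5, -15.0, -22.0, -4.0, -18.0])
print(classify_slope_units(su_ids, vel))  # {1: False, 2: True}
```

Aggregating points to SUs in this way is what makes the representation more robust than single-point data: one noisy scatterer cannot flag a whole slope.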

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Validation of Satskred avalanche monitoring using Sentinel-1 data – A case study using road network data

Authors: Aron Widforss, Yngve Are Antonsen, Stefan Blumentrath, Niklas Fossli Gjersø, Tore Humstad, Karsten Müller, Solveig Havstad Winsvold
Affiliations: Norwegian Water Resources And Energy Directorate, Norwegian Public Roads Administration
Snow avalanches are a common natural hazard in Norway, and large avalanches affect public infrastructure, such as roads, every winter season. The national traffic information system operated by the Norwegian Public Roads Administration (SVV) published notices regarding hundreds of road disruptions in the winter of 2023-2024: 294 notices were due to avalanches, 133 due to avalanche hazard and 25 due to avalanche control operations. Satskred is an avalanche monitoring system using change detection in Sentinel-1 (S1) data to identify the location, quantity and size of snow avalanches. A machine learning model has been trained to identify areas in each S1 scene with increased backscatter due to avalanche debris. Over time, data from multiple satellite passes are combined to enhance the temporal accuracy of the detections. The detection system has been developed by NORCE and is operated by the Norwegian Water Resources and Energy Directorate (NVE). The Satskred system makes it possible to evaluate large-scale avalanche cycles, giving vastly better spatial coverage than traditional methods such as manual field observations or infrasound detections. It also makes it possible to evaluate the precision achieved in public avalanche forecasts, as well as other mitigation efforts such as evacuating inhabitants out of harm’s way. SVV operates in-situ monitoring of avalanche-prone areas above public road infrastructure at several sites across Norway. The detection modality varies from infrasound and ground-based radar to seismic methods. In this case study we present in-situ validations of Satskred detections over time, using data collected for SVV in areas where continuous avalanche monitoring is conducted. By taking into account the geography of the sites, weather conditions and avalanche parameters, we investigate what kinds of avalanches the Satskred system can detect, and what might be missed when looking at larger-scale events.
Understanding how and when this technology works is essential to applying it in an operational setting, ensuring effective use under favorable conditions and handling false detections when conditions for satellite-borne radar are poor, e.g. by applying alternative methods.
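The kind of in-situ validation described can be sketched as a temporal matching problem: count monitored avalanche events that have a satellite detection within a time window, yielding a probability of detection. The window length and dates below are illustrative, not values from the study:

```python
from datetime import datetime, timedelta

def match_detections(in_situ, satellite, window_hours=48):
    """Fraction of in-situ avalanche events confirmed by a satellite detection
    falling within +/- window_hours (a simple probability of detection)."""
    window = timedelta(hours=window_hours)
    hits = sum(
        any(abs(event - det) <= window for det in satellite) for event in in_situ
    )
    return hits / len(in_situ)

in_situ = [datetime(2024, 1, 10, 6), datetime(2024, 1, 15, 12), datetime(2024, 2, 1, 3)]
satellite = [datetime(2024, 1, 11, 5), datetime(2024, 2, 5, 0)]
print(match_detections(in_situ, satellite))  # 1 of 3 events confirmed
```

The window width matters because S1 revisit gaps mean a detection can only be timestamped to the interval between two passes.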

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Landslide Impacts on Human Infrastructure: A Comparison of Support Vector Machines, Robust Satellite Techniques, and Object-Based Image Analysis

Authors: Mohammad Kazemi Garajeh, Dr Valeria Satriano, Dr. Christian Geiß, Dr. Hannes Taubenböck, Mr Annibale Guariglia, Dr Parivash Paridad, Mr Raffaele Santangelo, Professor Valerio Tramutoli
Affiliations: University Of Bonn, Institute of Methodologies for Environmental Analysis, German Aerospace Center (DLR), German Aerospace Center (DLR), Geocart S.p.A, Geocart S.p.A, Geocart S.p.A, Department of Engineering
Landslides are hazardous phenomena that pose substantial risks to both the environment and human infrastructure. While most landslides are primarily triggered by precipitation, numerous other factors can also contribute to slope destabilization, particularly in urban areas. Previous studies (Vega et al., 2016; Sun et al., 2021; Kazemi Garajeh et al., 2024) have documented the detrimental effects of landslides on human infrastructure, highlighting the increasing frequency of such events as a growing threat. Consequently, the detection and monitoring of landslides are crucial to mitigating their impact and preventing large-scale casualties. This study evaluates the impacts of landslides on human infrastructure by comparing three analytical techniques: Support Vector Machine (SVM), Robust Satellite Technique (RST), and Object-Based Image Analysis (OBIA) in the Pomarico region of Italy. During the autumn and winter of 2018, Pomarico city, located in the Matera municipality of the Basilicata Region in southern Italy, experienced frequent and intense rainfall events. Between the night of January 24 and 25, 2019, heavy rainfall exacerbated pre-existing soil instability, triggering a significant rotational slide approximately 760 meters long and 80-100 meters wide. This event evolved into a channelized earth flow along the southwest side of the Pomarico ridge, culminating on January 29 in the collapse of several residential buildings and the city's main access road (Perrone et al., 2021). This study utilized Sentinel-2 satellite data to detect areas affected by landslides using an SVM within Google Earth Engine (GEE). Of the collected Ground Control Points (GCPs), 70% were allocated for training, while the remaining 30% were used to validate the results. For the RST analysis of landslide impacts on human infrastructure, the study employed the Normalized Difference Vegetation Index (NDVI) derived from a time series of Sentinel-2 data spanning from 2016 to 2019.
This approach leverages changes in land cover as indicators of mass movement. The RST methodology is based on the preliminary characterization of satellite signal data (either single bands or band combinations) using temporal means and natural variability (standard deviations) calculated from long-term data series acquired under consistent conditions (Tramutoli, 2007). OBIA was applied to assess the effects of landslides on human infrastructure. Sentinel-2 data from February 9, 2019, were downloaded, and segmentation was performed using the multi-resolution segmentation technique. Various spectral features (e.g., texture), as well as geometric and spatial features (e.g., shape index), were utilized to classify each object as either "landslide" or "non-landslide." Classification was performed using the Nearest Neighbor technique, and accuracy was assessed through an error matrix. The findings reveal that the RST method achieved the highest accuracy in landslide detection, with an overall accuracy of 90.00%, surpassing SVM (87.00%) and OBIA (85.00%). Furthermore, the study highlights significant damage caused by the landslide, including the destruction of electrical poles, roads, and built-up areas. These findings underscore the importance of using advanced remote sensing techniques to monitor and mitigate the effects of landslides on human infrastructure.
1. Vega, J. A., & Hidalgo, C. A. (2016). Quantitative risk assessment of landslides triggered by earthquakes and rainfall based on direct costs of urban buildings. Geomorphology, 273, 217-235.
2. Sun, L., Ma, B., Pei, L., Zhang, X., & Zhou, J. L. (2021). The relationship of human activities and rainfall-induced landslide and debris flow hazards in Central China. Natural Hazards, 107, 147-169.
3. Kazemi Garajeh, M., Guariglia, A., Paridad, P., Santangelo, R., Satriano, V., & Tramutoli, V. (2024). Detecting small-scale landslides along electrical lines using robust satellite-based techniques. Geomatics, Natural Hazards and Risk, 15(1), 2409203.
4. Perrone, A., Canora, F., Calamita, G., Bellanova, J., Serlenga, V., Panebianco, S., ... & Lapenna, V. (2021). A multidisciplinary approach for landslide residual risk assessment: the Pomarico landslide (Basilicata Region, Southern Italy) case study. Landslides, 18, 353-365.
5. Tramutoli, V. (2007, July). Robust satellite techniques (RST) for natural and environmental hazards monitoring and mitigation: Theory and applications. In 2007 International Workshop on the Analysis of Multi-temporal Remote Sensing Images (pp. 1-6). IEEE.
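The RST preliminary characterization described above can be illustrated with a standardized index: the deviation of the current NDVI from its long-term temporal mean, expressed in units of its natural variability. This is a simplified, generic sketch of the idea on synthetic data, not the exact RST formulation of the study:

```python
import numpy as np

def rst_index(ndvi_series, ndvi_now):
    """Standardized RST-style index: deviation of current NDVI from its
    long-term temporal mean, in units of natural variability (std dev)."""
    mu = ndvi_series.mean(axis=0)
    sigma = ndvi_series.std(axis=0)
    return (ndvi_now - mu) / np.maximum(sigma, 1e-6)

rng = np.random.default_rng(3)
history = 0.6 + 0.05 * rng.normal(size=(36, 5, 5))  # 3 years of monthly NDVI, 5x5 pixels
current = history.mean(axis=0).copy()
current[2, 2] -= 0.3                                # vegetation loss, e.g. a landslide scar
z = rst_index(history, current)
mask = z < -3                                       # strong negative anomaly
print(mask[2, 2], int(mask.sum()))  # True 1
```

Normalizing by the per-pixel variability is what makes the index robust: a drop that is large for a stable pixel may be ordinary for a naturally fluctuating one.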

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: AI and Multisensor Data Fusion for Accurate Lava Flow Segmentation

Authors: Simona Cariello, Nantheera Anantrasirichai, Juliet Biggs, Edna Warsame Dualeh, Claudia Corradino, Ciro Del Negro
Affiliations: INGV - Istituto Nazionale di Geofisica e Vulcanologia, Visual Information Laboratory, University of Bristol, School of Earth Sciences, University of Bristol
Continuous monitoring of volcanoes is crucial to understanding the behavior of active volcanic structures and mitigating the risks associated with their eruptions. Timely and accurate analysis of volcanic phenomena requires an approach that integrates data from multiple sources, as each sensor provides unique and complementary information. The integration of data from different sources, such as those collected by Sentinel-2 and SAR sensors, allows the extraction of critical information regarding both thermal anomalies and lava flow evolution. With recent advancements, artificial intelligence (AI) has become a powerful tool to address the complexity of these heterogeneous datasets and maximize their informational value. AI provides advanced methods to process the data and deliver more precise and reliable results for monitoring and predicting volcanic dynamics. In this proposal, we aim to integrate data from three different sensors (Sentinel-2, SAR, and InSAR) using an advanced deep learning approach to significantly improve lava flow segmentation. Sentinel-2 provides multispectral data that is particularly useful for detecting fresh lava flows and thermal anomalies due to its high spatial and temporal resolution. SAR and InSAR data enable accurate monitoring even under cloud cover and provide detailed information on topographic changes and ground movements. The objective is to overcome challenges related to temporal management and data fusion, thereby improving the accuracy of lava flow and thermal anomaly segmentation. To pursue this goal, different strategies will be investigated. One of the approaches involves the use of SAM 2 (Segment Anything Model 2) to process data from a single sensor. This model is designed to extract and analyze the specific characteristics of the sensor's input, producing a feature map. This map serves as a structured synthesis of the relevant information contained in the input image, optimized for subsequent analysis stages.
One of the main challenges in integrating these datasets is managing acquisition dates, as the images from different sensors correspond to different moments in the lava flow evolution. This asynchrony complicates the fusion of information. To address this issue, the feature maps obtained from SAM 2 are organized into a temporal stack, which reconstructs the entire evolution of the flow, from its initial formation to its final extent. At this stage, a potential next step could involve exploring the use of a Long Short-Term Memory (LSTM) network, a type of recurrent neural network particularly effective for analyzing temporal sequences. The LSTM would take the temporal stack of features as input and learn to capture the evolving dynamics of the lava flow. However, we will resort to using an LSTM only when the temporal series includes a substantial number of multi-step observations. In such cases, the LSTM's ability to handle long-term dependencies can be utilized to correlate events across different time frames, effectively integrating information from the three sensors over the temporal sequence. Within the LSTM, logic is implemented that accounts for the fundamental physical processes influencing the formation and evolution of a lava flow. These include the cooling process of lava, which determines its progressive solidification, and the complex interaction of the flow with nonlinear topography, which affects the direction and speed of the flow. This approach not only models the temporal dynamics but also incorporates the physical aspects governing lava flow behavior, further enhancing the accuracy and consistency of predictions. The final output of the algorithm is a detailed and accurate segmentation of thermal anomalies and lava flows, accompanied by a representation of the temporal evolution of the phenomenon.
The proposed system not only identifies the static characteristics of the volcanic phenomenon but also dynamically represents its temporal evolution, ensuring a higher level of robustness and precision compared to traditional methods. This approach represents a significant advancement in volcanic monitoring, leveraging the potential of data fusion using AI-based techniques to effectively address the inherent complexities of dynamic and multimodal natural systems.
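The temporal-stack step can be illustrated independently of SAM 2 or the LSTM: asynchronous per-sensor segmentations are ordered by date and their union accumulated, reconstructing the flow's growth up to its final extent. A minimal sketch with toy 1-D masks (dates and values invented for the example):

```python
import numpy as np

def cumulative_extent(masks_by_date):
    """Order asynchronous per-sensor segmentations by date and accumulate the
    union, reconstructing the flow's growth up to its final extent."""
    dates = sorted(masks_by_date)
    stack = np.array([masks_by_date[d] for d in dates], dtype=bool)
    return dates, np.logical_or.accumulate(stack, axis=0)

# Toy 1-D transect: S2, SAR and InSAR-derived masks acquired on different dates
masks = {
    "2024-03-01": np.array([1, 1, 0, 0, 0], bool),  # early S2 scene
    "2024-03-04": np.array([0, 1, 1, 1, 0], bool),  # SAR scene
    "2024-03-02": np.array([1, 1, 1, 0, 0], bool),  # InSAR-derived change
}
dates, growth = cumulative_extent(masks)
print(growth[-1].astype(int))  # final extent: union of all acquisitions
```

Accumulating the union exploits the physical constraint that solidified lava does not retreat, so earlier extents are always contained in later ones.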

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone O)

Poster: Reference Burned Area Delineation with Deep Learning Model for Validating High Resolution Global Burned Area Maps

Authors: Bhogendra Mishra, Dr Daniela Stroppiana, Dr. M. Lucrecia Pettinari, Prof. Emilio Chuvieco
Affiliations: Consiglio Nazionale delle Ricerche-Istituto per il Rilevamento Elettromagnetico dell’Ambiente, Science Hub, Environmental Remote Sensing Research Group, Department of Geology, Geography and the Environment, Universidad de Alcalá
The growing availability of Earth observation satellite imagery has led to the production of global and regional burned area products with higher temporal and spatial resolution. Quality assurance of these products is essential for both producers and users before public release. This requires large volumes of high-quality reference data, which should be spatially explicit and have minimal errors to accurately represent the areas actually burned across eco-regions. Field surveys and ground-truth data cannot support the generation of these reference datasets, since they are expensive, time consuming and difficult to obtain at the global scale. We address these challenges by developing a spatially explicit and accurate reference dataset using a Generative Adversarial Network (GAN) model in combination with visual interpretation of PlanetScope-based time series images. This dataset can be used to validate high-resolution (Sentinel-2) burned area (BA) products developed within the ESA FireCCI project and others. We selected validation units (a regular hexagon tessellation, each unit covering 625 km²) across the world, ensuring representation of all biomes and fire regimes. The number of sites in each biome is set based on the number of fire incidences within the biome, and sites within each biome are selected by random sampling. The GAN model, used for automatically mapping burned areas between pairs of images representing pre-fire and post-fire conditions, is trained and validated using highly accurate fire perimeters derived interactively from PlanetScope imagery. The large-area BA products are validated by comparing reference to classified fire perimeters over a set of validation units, representative of the heterogeneity of vegetation and fire characteristics, to ensure statistically robust and unbiased estimation of the accuracy metrics.
To develop the reference data, a continuous time interval (referred to as the "long unit", >40 days) is defined to reduce the influence of temporal reporting (i.e. dating) errors on the spatial accuracy metrics. This long unit is subdivided into shorter time intervals (called "short units") represented by image pairs of the source input imagery. These pairs are classified into burned/unburned areas with the validated GAN model, followed by visual interpretation. Once the burned area for each pair is finalized, the long-unit burned area is computed by combining the short-unit datasets. The extracted polygons include the burned area boundaries along with the image acquisition date used for burned area mapping. If an area is detected as burned multiple times during the long unit, the first detection date is preserved.
Keywords: Forest Fire, GAN, PlanetScope, Burned Area
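The short-unit-to-long-unit combination logic can be sketched as follows; the masks, dates, and function names are toy assumptions, while the operational workflow works on GAN-classified PlanetScope pairs.

```python
import numpy as np

# Illustrative short-unit burned/unburned masks (True = burned), each
# tagged with its acquisition date; not the project's actual data model.
short_units = [
    ("2023-07-05", np.array([[True, False], [False, False]])),
    ("2023-07-20", np.array([[True, True], [False, False]])),
    ("2023-08-02", np.array([[False, True], [True, False]])),
]

def combine_long_unit(units):
    """Union the short-unit masks over the long unit and keep, per
    pixel, the first date on which burning was detected."""
    shape = units[0][1].shape
    burned = np.zeros(shape, dtype=bool)
    first_date = np.full(shape, "", dtype=object)
    for day, mask in sorted(units):   # chronological order
        new = mask & ~burned          # pixels burned for the first time
        first_date[new] = day
        burned |= mask
    return burned, first_date

burned, first_date = combine_long_unit(short_units)
```

Pixels flagged in several short units keep only their earliest date, matching the "first detection date is preserved" rule above.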

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: C.02.02 - POSTER - Heritage Missions and Long Time Data Series

EO archives worldwide contain several decades of data acquired from a variety of missions and sensors. The value of these scientific time series is continuously growing. Information extracted from long time series of data is used to observe and characterise the current climate, to detect climate change and to determine the rate of change. Hence there is a strong need to secure state-of-the-art preservation of the growing volume of heritage data and associated information holdings, to allow their discoverability and accessibility through state-of-the-art technologies, and to ensure continuous improvement and valorisation to maximise usability and impact with current and future missions. The session focuses on activities and projects dealing with the preservation and valorisation of Earth observation heritage (historical) data, including new dataset generation, algorithm improvements, reprocessing and long time series generation through harmonization and combination of data from multiple sensors.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Study on the possible correction of the GOME/ERS-2 reflectance degradation as part of FDR4ATMOS

Authors: Dr Melanie Coldewey-Egbers, Günter Lichtenberg, Sander Slijkhuis, Bernd Aberle, Abdalmenem Owda, Angelika Dehn, Gabriele Brizzi
Affiliations: German Aerospace Center, ESA-ESRIN, SERCO
The Fundamental Data Record for ATMOSpheric Composition (FDR4ATMOS) project is part of the European Space Agency (ESA) Long Term Data Preservation (LTDP) programme. The aim is to generate long-term (preferably multi-decadal) records of selected Earth observation level-1 parameters, i.e. irradiance and reflectance, from the heritage missions GOME/ERS-2 (1995-2011) and SCIAMACHY/ENVISAT (2002-2012) and the GOME-2/MetOp-A/-B/-C sensors (2007-today). The focus is on the compilation of harmonised multi-instrument time series, which provide improved performance compared to single-sensor data records. During the first phase of FDR4ATMOS, harmonization approaches for both solar irradiance and reflectance for GOME and SCIAMACHY observations were developed. SCIAMACHY was selected as the reference sensor, and GOME was adjusted to this reference using broadband correction factors which preserve the spectral structures relevant for the retrieval of level-2 data products such as atmospheric amounts of ozone and nitrogen dioxide or cloud parameters. The harmonization approach proposed for the reflectance was based on quasi-co-located observations from both sensors over the Saharan Pseudo-Invariant Calibration Sites (PICS), limited to the year 2003. Thus, the derived correction factors, which were then applied to GOME, were constant in time. However, from previous studies it is known that GOME suffered from substantial differential degradation of the radiance and irradiance spectra, and thus from significant degradation of the reflectance, due to the different light paths. In this paper we present results of a study on a possible method for the soft-correction of the degradation of the GOME reflectance. The approach is based on measurements over PICS. These sites are characterized by stable atmospheric conditions and spectral properties as well as distinct spatial homogeneity.
We extract all overpasses from 1995 to 2011 and investigate the behaviour of the reflectance with respect to a selected reference year for three wavelength bands in the ultraviolet, visible, and near-infrared spectral ranges. The selection of overpasses is further optimized based on a detailed analysis of the corresponding PMD (Polarization Measurement Device) measurements, which have a much higher spatial resolution. Only those pixels showing homogeneous behaviour of the PMD values are taken into account. We investigate whether the normalized reflectance can be used to correct for the degradation, assuming that the atmospheric conditions did not change significantly over time.
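The normalization idea can be sketched numerically: reflectances over a stable site are ratioed to a reference year, and the inverse ratio serves as a multiplicative soft-correction factor. All numbers, years, and function names below are invented for illustration and assume, as in the abstract, a temporally stable site.

```python
import numpy as np

# Illustrative annual-mean reflectances over a pseudo-invariant site
# for one wavelength band (values are made up for the sketch).
years = np.arange(1996, 2004)
reflectance = np.array([0.30, 0.295, 0.29, 0.284, 0.278, 0.272, 0.265, 0.258])

REF_YEAR = 1997  # chosen reference year (assumption)

def degradation_correction(years, refl, ref_year):
    """Normalise each year's reflectance to the reference year; the
    inverse ratio is a multiplicative soft-correction factor, valid
    only if surface and atmosphere are stable over the site."""
    r_ref = refl[years == ref_year][0]
    normalised = refl / r_ref
    return 1.0 / normalised

corr = degradation_correction(years, reflectance, REF_YEAR)
```

As the sensor degrades (reflectance drifting downward), the derived correction factors grow above 1, compensating later years more strongly.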

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Revisiting altimetry data in rivers: SpecR and the biases and pitfalls of the OCOG retracker

Authors: Fernando Niño, Malik Boussaroque, Carlos Yanez, Romulo Jucá Oliveira, Fabien Blarel, Sylvain Biancamaria
Affiliations: CNRS - LEGOS, Université de Toulouse, Hydro-Matters, Centre National d'Etudes Spatiales (CNES)
Radar altimetry is a major tool for monitoring inland waters, and the altimetry record now covers over 20 years of lake and river measurements with the Jason/Sentinel-6 missions. In spite of this wealth of data, altimetry waveforms over rivers are difficult to analyze due to varying surface roughness, surrounding topography, river geometry and the presence of nearby water bodies. One of the main problems this entails is the lack of a physical model of the expected radar echo (such as the Brown model that exists for ocean surfaces) to aid in the interpretation of the physical properties of the water bodies. As an alternative, many efforts to retrieve water levels over continental surfaces have used the empirical Offset Centre of Gravity (OCOG) retracker. Based on the geometrical properties of the waveform, it is a robust and popular method to obtain water level time series from nadir Low Resolution Mode (LRM) altimetry data; however, the OCOG retracker is not particularly tailored to measurements over rivers (it was originally developed for ice surfaces), and the many difficulties of this particular environment often lead to large errors in the processed water level time series. Using real data and simulations, we demonstrate that the OCOG retracker can not only return biased water levels, but also retrieve water levels different from the intended ones. As an alternative, we present the SpecR retracker, which uses a physical model of the specular reflection of the radar signal, frequent on rivers. The retracker not only accurately pinpoints the specular sources in the radar footprint, but also assigns each the probability of being the intended water body. Furthermore, the physical model allows accurate calculation of water levels even in the presence of signal clipping, frequent in the Jason altimeter series.
We compare the performance of this retracker on French rivers and on the Amazon (Brazil) and Mississippi (USA) basins, where good-quality in situ data are available.
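For readers unfamiliar with the retracker under scrutiny, the textbook OCOG statistics can be sketched as below. This is the standard geometrical formulation (gate-indexing conventions vary between processors), not the SpecR method, and the toy waveform is invented.

```python
import numpy as np

def ocog(waveform):
    """Textbook OCOG (Offset Centre Of Gravity) retracker: purely
    geometrical statistics of the squared waveform power."""
    p2 = waveform.astype(float) ** 2
    gates = np.arange(waveform.size)
    cog = np.sum(gates * p2) / np.sum(p2)            # centre of gravity
    amplitude = np.sqrt(np.sum(p2**2) / np.sum(p2))  # OCOG amplitude
    width = np.sum(p2) ** 2 / np.sum(p2**2)          # OCOG width
    leading_edge = cog - width / 2.0                 # nominal retracking gate
    return cog, amplitude, width, leading_edge

# A symmetric toy "specular-like" waveform peaked at gate 10.
wf = np.zeros(21)
wf[8:13] = [0.2, 0.7, 1.0, 0.7, 0.2]
cog, amp, width, le = ocog(wf)
```

Because these quantities depend only on waveform geometry, any secondary bright target in the footprint shifts the centre of gravity, which is one way the biases discussed above can arise.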

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: DAMPS: A Comprehensive Service for Earth Observation Data Archival, Management, and (Re)Processing

Authors: Olivia Lesne, Giuseppe Parrella, Gilbert Barrot, Paolo Boezi, Gaston Briot, Domenico Castrovillari, Gianluca Colamussi, Giovanni Corato, Mai-Eli Johansen, Alessandra Paciucci, Salvatore Papaleo, Frederic Rouffi, Olivier Sardou, Frederic Twahirwa
Affiliations: European Space Agency (esa), ACRI-ST, Starion Group, Serco, adwaïsEO, KSAT
DAMPS (Data Archival, Management & Processing Services) is a service commissioned by the European Space Agency (ESA) in 2021 to offer a coherent, scalable, and secure end-to-end solution for Earth Observation (EO) data archival, retrieval, maintenance, consolidation, formatting and (re)processing. By offering secure archival, advanced processing capabilities and rigorous management, DAMPS ensures the longevity, accessibility, and usability of EO datasets crucial for scientific research and policymaking.

1. Key Features and Objectives
DAMPS, as the ESA Master Archive, to date ensures the preservation of over 11 PB of data from historical and active EO missions across a dual-site infrastructure with data centres in Luxembourg (Betzdorf) and France (Sophia Antipolis), separated by about 700 km. This setup ensures full disaster recovery capabilities and compliance with the highest standards of data security and integrity. The main site in Luxembourg is housed in a TIER-IV certified facility, is ISO 27001-compliant, and leverages green-certified energy solutions. Together, these sites offer scalable storage and processing capabilities, supporting data-intensive applications with a focus on safety and sustainability. The archival system is built to accommodate historical and live ESA missions (including newly launched and future Earth Explorer missions such as EarthCARE, Biomass, and FLEX), ESA campaign data, as well as Third-Party Mission (TPM) data. Its architecture supports not only static datasets but also dynamic reprocessing workflows, making it adaptable to the needs of evolving missions. Key technological enablers, such as Kubernetes and containerization, enhance processing scalability and compatibility.

2. DAMPS Core Services
DAMPS is structured around two main categories of services, Continuous and Project-based, which together cover the full spectrum of EO data management needs.

2.1 Continuous Services
a. Data Ingestion and Long-term Archival. DAMPS implements a robust ingestion process, ensuring that all incoming data are securely archived at both locations (Luxembourg and France). Data integrity is verified using checksum comparisons, and archival status reports are generated periodically. The system supports diverse data formats and mission types, from ESA's historical missions (e.g. ERS, Envisat) to third-party contributions such as Landsat.
b. Rolling and Scratch Archival. Tailored archival strategies support dynamic data needs. Rolling archival ensures that only the latest datasets are preserved for missions requiring frequent reprocessing, while scratch archival offers temporary storage, useful, for example, when data repackaging is needed in the relatively short term.
c. Data Management and Archive Integrity. A central Data Management System (DMS) tracks all archival activities, including metadata management, dataset relationships, and data lifecycle operations. Routine integrity checks are conducted to detect and prevent corruption, ensuring the reliability of archived datasets.
d. EO-SIP Product Validation. The EO-SIP check consists of a complete validation of EO-SIP packages to ensure their compliance with ESA specifications. This includes verifying compliance with conventions, validating metadata against reference schemas, comparing with native product metadata, and ensuring the integrity of package contents and the correctness of the product footprint. An analysis report with detected errors and corrective recommendations is delivered.
e. Data Retrieval and Repatriation. DAMPS facilitates efficient data retrieval via online interfaces and physical media. Requests are processed with meticulous attention to ensure data consistency, and delivery reports are shared with users to confirm successful transfers.
f. Data Information System (DIS). The DIS consolidates metadata across multiple services into a unified repository, offering a comprehensive view of data origin and associated value-added operations (data provenance). This feature enhances user confidence in data reliability and facilitates complex analysis by providing contextual metadata.
g. Monitoring and Website. DAMPS features a user-friendly portal (damps.info) that provides real-time monitoring, archive statistics, and updates on ongoing projects. The portal also provides service transparency by detailing data retrieval processes, completed maintenance activities, and upcoming initiatives.
To date, the DAMPS archive encompasses more than 80 missions (10 of which with automatic data dissemination and near-real-time ingestion) with more than 1600 datasets, resulting in about 11.5 PB and more than 100 million products. The output products of the conversion to EO-SIP format of TPM missions are systematically checked, as are some products of other ESA missions. About 120 data maintenance activities and 50 Data Change Requests have been performed so far.

2.2 Project-Based Services
DAMPS extends its capabilities to support large-scale, tailor-made projects involving data consolidation, conversion, (re)processing, and processor testing. The integration of processing environments within the archival infrastructure ensures data security, operational efficiency, and minimal data transfer overhead.
a. Consolidation/Completeness Check. DAMPS conducts comprehensive analyses to create master datasets, eliminating redundancies and ensuring complete coverage. These projects involve verifying data consistency, naming conventions, and alignment with external repositories.
b. Data Formatting/Packaging. DAMPS performs EO-SIP packaging, converting products from their native format to an ESA-standardized format (ZIP container) according to the specialisation provided by the missions. The EO-SIP converter tool can be either provided as a CFI or developed/updated by DAMPS.
c. Bulk Processing and Reprocessing. This service integrates processors into the DAMPS infrastructure, enabling efficient, large-scale processing of a selected set of data. The generated datasets are verified before the data are released. This approach minimizes data transfer overhead and maximizes security by keeping all activities within the DAMPS environment.
d. Processor Testing and Validation. DAMPS provides a complete environment for testing and validating new processors. This environment is fully isolated to guarantee data security and compatibility with various processing needs.
In this context, several projects have been carried out combining one or more of the offered services. Several reprocessing activities were conducted by DAMPS, including systematic EO-SIP conversion of the output reprocessed products. Among them, DAMPS has reprocessed:
• CryoSat ICE Baseline E data (11 years of data), after preliminary work of data collection, data consolidation/completeness checking, and processor integration/orchestration.
• SNOW-CCI Initiative data, which consisted of processing products from the ERS-1/2 and Envisat ASAR missions covering five regions of interest.
Two other reprocessing projects are ongoing:
• Aeolus FM-A and FM-B Baseline 16 (~17 months of data)
• CryoSat-2 OCEAN Baseline D (14 years of data).
DAMPS also carries out projects focusing only on EO-SIP converter tool development/update and EO-SIP conversion. It has, for instance, developed or updated several EO-SIP converter tools, subsequently ensuring the formatting of the associated product types, such as for the ERS-1/2 and Envisat missions. DAMPS also maintains all the TPM EO-SIP converter tools and is in charge of the conversion of the corresponding TPM products.

3. Innovation and Adaptability
DAMPS leverages cutting-edge technologies to enhance its service offerings. For instance, Kubernetes and Docker enable containerized processing, ensuring compatibility across different hardware and software configurations as well as scalability. The system's flexibility allows for the seamless integration of new processing tools and workflows. DAMPS offers tailored archival strategies, such as scratch and rolling archives, to meet the dynamic needs of data lifecycle management. Additionally, the Data Information System (DIS) consolidates metadata across services, enabling a unified view of data origin and value-added operations. Security and sustainability are at the heart of DAMPS' operations. The use of green-certified data centres underlines ESA's commitment to reducing the environmental impact of its activities. Meanwhile, rigorous security protocols, including redundant point-to-point connections and routine integrity checks, safeguard the integrity and confidentiality of EO data.

4. Impact and Vision
DAMPS showcases ESA's commitment to safeguarding EO data as a cornerstone of its Long-Term Data Preservation programme. By combining rigorous data management practices with innovative and efficient processing capabilities, DAMPS empowers researchers, policymakers, and industry stakeholders to address global challenges through enhanced EO data accessibility and reliability.
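The checksum-based dual-site integrity verification mentioned above can be sketched in a few lines. The file names, in-memory "sites", and function names are illustrative assumptions, not DAMPS internals, where such checks run against tape/disk archives.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Checksum of a product's bytes (SHA-256 chosen for the sketch)."""
    return hashlib.sha256(data).hexdigest()

def verify_copies(primary: dict, secondary: dict) -> list:
    """Compare checksums of each product held at both sites and
    report products whose copies disagree or are missing."""
    problems = []
    for name, blob in primary.items():
        other = secondary.get(name)
        if other is None:
            problems.append((name, "missing at secondary"))
        elif sha256(blob) != sha256(other):
            problems.append((name, "checksum mismatch"))
    return problems

# Toy dual-site archive with one corrupted secondary copy.
site_a = {"ERS2_SAR_0001.E2": b"payload-1", "ENV_ASA_0002.N1": b"payload-2"}
site_b = {"ERS2_SAR_0001.E2": b"payload-1", "ENV_ASA_0002.N1": b"corrupted"}
issues = verify_copies(site_a, site_b)
```

Running such a comparison routinely is what lets an archive detect silent corruption before a dataset is served to users.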

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Ensuring quality of oceanographic data from CryoSat’s 15 years in orbit

Authors: Dr. Chris Banks, Francisco Calafat, Alessandro Di Bella
Affiliations: National Oceanography Centre, University of the Balearic Islands, European Space Agency
CryoSat-2 has been in orbit for over 15 years since ESA launched the mission, with the innovative Synthetic Aperture Interferometric Radar Altimeter (SIRAL) onboard, in April 2010. The primary focus was to determine variations in the thickness of the Earth's continental ice sheets and marine ice cover. However, CryoSat-2's contributions to oceanography have been substantial, and the quality of the long-term time series is considered here. Long-term monitoring of sea level, globally and regionally, has been possible from space since the launch of TOPEX/Poseidon in August 1992; it is of note that CryoSat-2's single-satellite time series accounts for almost half of the total altimetry period. Instruments such as SIRAL can provide more than just information on sea level, as values of ocean wind speed and significant wave height can be obtained from the waveforms. By exceeding its expected lifetime, CryoSat-2 provides the longest data record of sea surface height anomaly (SSHA), significant wave height (SWH) and wind speed (WSP) measurements from a single instrument. In addition, CryoSat-2 pioneered the use of Delay-Doppler (SAR) altimetry, which has since been adopted for subsequent ESA nadir altimeters. A dedicated ocean processing chain was established in 2014 and has been modified and improved since then. The processing chain includes the time-critical near-real-time ocean product (NOP, ~3-hour latency) and intermediate ocean product (IOP, 3-day latency), along with the non-time-critical geophysical ocean product (GOP, ~30-day latency). The most recent change to CryoSat-2 processing was an upgraded operational processor version (Baseline D replacing Baseline C) that started providing the CryoSat Ocean Products (COP) in October 2024. The full GOP time series is now being reprocessed to provide a consistent dataset for the oceanography community.
This presentation reports on work to compare the long-term SIRAL data (Baseline C) with other sources of information, including tide gauges, model output, other satellites, and in situ buoy measurements. Global and regional trends will be presented for SSHA, SWH and WSP. This work is part of a larger ESA-funded project concerned with ensuring the quality of the SIRAL data (CryOcean-QCV), including routine daily and monthly quality reports (available from https://qras.earth.esa.int/?mis=CryoSat&ins=SIRAL). As such, we will also report on the ongoing validation studies of Baseline D. Finally, we present findings of novel local/regional studies on the utility of SIRAL data for the investigation of specific oceanographic features and even the identification of shipping channels.
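The trend estimation underlying such comparisons is, at its simplest, a least-squares linear fit to the time series. The sketch below uses a synthetic monthly SSHA series with an invented trend and seasonal cycle, not CryoSat data.

```python
import numpy as np

# Synthetic 10-year monthly SSHA series: linear trend (3.3 mm/yr,
# an invented value) plus an annual seasonal cycle.
t_years = np.arange(120) / 12.0
ssha_mm = 3.3 * t_years + 5.0 * np.sin(2 * np.pi * t_years)

def linear_trend_mm_per_year(t, y):
    """Least-squares linear trend, as used for global/regional
    sea-level trend estimates from an altimetry time series."""
    slope, _intercept = np.polyfit(t, y, 1)
    return slope

trend = linear_trend_mm_per_year(t_years, ssha_mm)
```

Even with complete seasonal cycles the discrete fit recovers the trend only approximately, which is one reason validation against independent records (tide gauges, buoys) matters.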

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Swell Aware Retracking For Synthetic Aperture Radar Altimetry

Authors: Chris Ray, Alba Granados
Affiliations: isardSAT
There is mounting evidence that the presence of swell on the sea surface alters the sea-state bias of a synthetic aperture radar (SAR) altimeter. This swell dependence will lead to biases in the measured sea surface height if the amount of swell in a year changes. This talk will present the results of an ESA-funded study of a proposed methodology to compensate for the effect of swell on the retracked geophysical parameters. The primary observation is that the variation of the observed significant wave height depends on the amount of swell present. Thus, the variance of the observed significant wave height is an additional measure of the sea surface that can be used to estimate the amount of swell, and consequently allows a swell-dependent sea-state bias correction to be applied.
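The core observable, along-track variance of the retracked SWH as a swell indicator, can be sketched as follows. The synthetic series, block size, and noise levels are invented assumptions; the study's actual estimator and the resulting sea-state bias correction are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def swh_variance(swh, window=20):
    """Variance of SWH in along-track blocks; higher variance is
    taken as an indicator of more swell energy."""
    n = swh.size // window
    blocks = swh[: n * window].reshape(n, window)
    return blocks.var(axis=1)

# Two synthetic along-track SWH series around 2 m: low vs high
# measurement-to-measurement variability (a stand-in for swell).
calm   = 2.0 + 0.05 * rng.standard_normal(400)
swelly = 2.0 + 0.40 * rng.standard_normal(400)
v_calm  = swh_variance(calm).mean()
v_swell = swh_variance(swelly).mean()
```

Both series have the same mean SWH, so the variance carries information the mean does not, which is exactly why it can feed a swell-dependent correction.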

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Wet tropospheric correction for altimetry from ERS-1 to Sentinel-6 using a 1DVAR approach: status quo and outlook

Authors: Frank Fell, Ralf Bennartz, Bruno Picard
Affiliations: Informus Gmbh, Vanderbilt University, Fluctus SAS
The dual-channel (23.8 and 36.5 GHz) Microwave Radiometer (MWR) flown on altimetry missions from ERS-1 up to the Sentinel-3 series of satellites allows accurate determination of total column water vapour (TCWV) in the atmosphere over the ice-free global ocean, and hence of the wet tropospheric correction (WTC) in support of ocean altimetry. This presentation summarises the main results of our recent efforts to apply a 1D variational approach (1D-VAR) to the retrieval of TCWV and WTC from MWR brightness temperature observations, optimised for use in an operational environment. We compare 1D-VAR retrieval results with a number of reference datasets (e.g. the corresponding operational products, or GNSS-based or ECMWF-IFS TCWV) to identify strengths and weaknesses of the retrieval approach. We also look at ways to correct for the impact of land contamination to improve the availability of the WTC in coastal areas. The presented method can be applied to all MWR-like instruments. Reprocessing of all MWR observations from the ERS-1, ERS-2 and Envisat missions is currently ongoing. Combined with the corresponding retrievals from the Sentinel-3 series of satellites, this will ultimately lead to the generation of TCWV and WTC climate data records covering the period from 1991 to the present and beyond. Finally, we introduce ongoing efforts to assess potential retrieval accuracy gains from the incorporation of additional spectral information, such as that provided by the three-channel (18.7, 23.8 and 34.0 GHz) Advanced Microwave Radiometer for Climate (AMR-C), with the ultimate aim of implementing the 1D-VAR retrieval also for the Sentinel-6 series of satellites.
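The structure of a 1D-VAR retrieval can be sketched for the linear case: the analysis minimises a cost function balancing departure from the background against departure from the observations. Every number below (background, covariances, Jacobian, innovations) is invented; an operational scheme uses a radiative transfer model as the observation operator rather than a fixed matrix.

```python
import numpy as np

# Toy linear 1D-VAR: one state variable (TCWV) and two MWR channels.
xb = np.array([20.0])          # background TCWV (kg m^-2), invented
B  = np.array([[25.0]])        # background error covariance, invented
H  = np.array([[1.2], [0.8]])  # linearised observation operator, invented
R  = np.diag([0.5, 0.5])       # observation error covariance, invented
y  = np.array([27.0, 18.5])    # toy observations in the same units as H @ x

def onedvar_linear(xb, B, H, R, y):
    """Analysis minimising J(x) = (x-xb)^T B^-1 (x-xb)
    + (y-Hx)^T R^-1 (y-Hx); closed form for a linear operator."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    return xb + K @ (y - H @ xb)

xa = onedvar_linear(xb, B, H, R, y)
```

With observations warmer than the background predicts, the analysis moves above the background value, weighted by the relative sizes of B and R.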

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: The ESA FDR4ATSR Project

Authors: Fay Emmett, Lisa Haskell, Fabiano Costantini, Philippe
Affiliations: Telespazio UK, ESA-ESRIN
The datasets derived from the European Space Agency (ESA) (Advanced) Along-Track Scanning Radiometer ((A)ATSR: AATSR (Envisat), ATSR-2 (ERS-2), ATSR-1 (ERS-1)) instrument series, spanning just over two decades, represent a valuable resource important to a variety of atmosphere-, cryosphere-, ocean- and land-based applications in the climate domain. The main objective of this project is to develop a new processor, currently under development by Telespazio UK, that will be used in the next reprocessing campaign, the 5th (A)ATSR Reprocessing, to generate Level 1 datasets that are Committee on Earth Observation Satellites (CEOS) Fundamental Data Record (FDR) / Fundamental Climate Data Record (FCDR) (Full and Easy) and Analysis Ready Data (ARD)-compliant. The achievement of the latter, made possible by the extensive work carried out on this instrument series, including work under the ESA IDEAS, IDEAS+, IDEAS-QA4EO and QA4EO-2 services, will be enabled by the successful implementation of the following:
• Improvements to uncertainty estimates: knowledge obtained from the pre- and post-launch calibration will be used to revise and improve all Level 1 uncertainty estimates (VIS, SWIR, TIR), using a Fidelity and Uncertainty in Climate Data Records from Earth Observations (FIDUCEO) approach to understand sources of error and uncertainty together with relevant error correlation aspects (spatial/temporal scales, etc.) for each instrument.
• Improvements to orthorectification and image re-gridding: ortho-rectification, building upon the ortho-geolocation implemented in the previous reprocessing, with image re-gridding using ortho-coordinates and "best" rather than "first" neighbour selection.
• Improvements to surface classification: "ready-to-go" developments from the most recent Sentinel-3 SLSTR reprocessing campaign, e.g. a continuous coastline flag and an inland water flag.
• Improvements to product metadata, e.g. corrections to product metadata issues recorded in relevant quality control documentation.
This presentation will explore the processor, some of the new algorithms implemented (developed by RAL Space and the University of Reading), and the next reprocessing campaign in more detail.
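The "best" rather than "first" neighbour selection mentioned above can be illustrated with a toy example: when several source pixels map to one output grid cell, keep the one closest to the cell centre instead of the first one encountered. The coordinates, values, and function names are invented, not the (A)ATSR processor's implementation.

```python
import numpy as np

# Several source pixels falling into one output cell (data made up).
cell_centre = np.array([0.5, 0.5])
candidates = [
    {"value": 281.0, "xy": np.array([0.90, 0.10])},  # arrives first
    {"value": 283.5, "xy": np.array([0.55, 0.45])},  # closest to centre
    {"value": 280.2, "xy": np.array([0.10, 0.90])},
]

def first_neighbour(cands):
    """'First' selection: whichever source pixel arrived first wins."""
    return cands[0]["value"]

def best_neighbour(cands, centre):
    """'Best' selection: the source pixel nearest the cell centre wins."""
    d = [np.linalg.norm(c["xy"] - centre) for c in cands]
    return cands[int(np.argmin(d))]["value"]

v_first = first_neighbour(candidates)
v_best = best_neighbour(candidates, cell_centre)
```

The two strategies return different values here, which is precisely the regridding artefact the "best" rule removes.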

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Barcelona Expert Center on Remote Sensing lab distributing RS datasets

Authors: Toni López, Marco Talone, Dr. Carolina Gabarró, Aina García-Espriu, Verónica González-Gambau, Cristina González-Haro, Ferran Hernández-Macià, Estrella Olmedo, Marta Umbert, Maria Sánchez, Antonio Turiel
Affiliations: Institute Of Marine Sciences (ICM-CSIC), Barcelona Expert Center (BEC)
The Barcelona Expert Center (BEC) is a research and development remote sensing center established in 2007 to support the European Space Agency's (ESA) SMOS (Soil Moisture and Ocean Salinity) mission. Its primary goal was to enhance the algorithms used to retrieve sea surface salinity from satellite data. Over time, BEC has grown significantly, expanding both its research focus and technical capabilities. Its main roles and areas of expertise are: 1) algorithm development from Level 0 to Level 4, with the aim of improving the accuracy of remote sensing data, particularly for oceanographic variables; 2) data production and distribution: since 2014, BEC has been the only Spanish center responsible for producing and distributing SMOS data; 3) an expanded portfolio including additional variables, other satellite missions, and derived products related to oceanography and hydrology. BEC provides a variety of products derived from satellite observations, particularly from the SMOS mission, including:
- Sea Surface Salinity (SSS): high-spatial-resolution (L3 & L4) salinity data used for oceanographic and climate studies, some specially tailored for regional areas (Arctic and Southern Oceans, Mediterranean Sea, Baltic Sea and Black Sea).
- Soil Moisture Products: essential for hydrology and agriculture applications.
- Sea Ice Concentration and Thickness (future): based on SMOS radiometric data, useful for Arctic and Antarctic studies.
- Colored Detrital Matter: only regionally, for the Baltic Sea (Danube mouth) and the Beaufort Sea (Mackenzie mouth, future).
- Freshwater and Density Fluxes (future): derived from ocean surface salinity measurements.
- Custom Datasets: advanced satellite-derived products, such as Singularity Exponents, which are useful for front identification, eddy tracking, and assessing mesoscale activity.
These products are utilized by scientists worldwide for diverse research purposes, ranging from climate change and water cycle studies to improving weather prediction models. BEC has a strong international user base, with most of its downloads and web traffic originating from countries such as the United States, France, Canada, Germany, and China, while Spain ranks sixth in user activity. The system averages 2,500 unique visitors, 177,000 page views, and 7 GB of data downloaded each month.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Data fusion with heritage satellite, IoT and Copernicus satellite data for estimating health of trees

Authors: Massimiliano Ferrante, Iolanda Maggio, Daniele Iozzino, Sergio Folco, Mirko Albani
Affiliations: ESA, Starion
The scope of this paper is to describe how, by integrating multiple data sources - heritage satellites such as ERS-1 and ERS-2 (European Remote-Sensing Satellite), IoT data from the AQP (Air Quality Platform) and HTP (Health Tree Platform) that measure in situ parameters, and Copernicus satellites such as Sentinel-2, Sentinel-3 and Sentinel-5P - it is possible to obtain more consistent and useful information on the health of trees than that provided by any individual data source. In this activity three main macro working areas can be identified: 1) the electronics and IoT sensors; 2) data science and coding software; 3) implementation of machine learning algorithms. In the first area, the HTP and AQP sensors collect and transmit IoT sensor data to the ESA web server. The second area focuses on how to manage and visualize in situ and satellite data. Finally, in the third area, machine learning algorithms have been implemented at ESA, fusing in situ and satellite data to check trends and relevant parameters of trees. Regarding the first area, at the ESA ESRIN EO Laboratory an Internet of Things (IoT) device called the Health Tree Platform (HTP) has been developed to acquire local information about the health of a tree. This platform has been integrated into the network of a citizen science project initiated by ESA in 2019 at the Living Planet Symposium, named AQP (Air Quality Platform), which measures air quality. The network includes approximately 200 platforms spread across Europe. The AQP is an assembly kit (hardware and software) designed to sense local ambient parameters and send data to an ESA central server. The software that runs on the AQP boards (Raspberry Pi and Arduino) is open source and written in Python and C. The hardware solution is also open, since it allows the addition of new electronic components and sensors to the basic architecture.
By employing only low-cost sensors, the ESA IoT platform is able to measure 16 parameters: temperature, humidity, PM10, PM2.5, CO2, CO, NH3 (ammonia), NO2 (nitrogen dioxide), CH4 (methane), O3 (ozone), CH2O (formaldehyde), luminosity, sound level, and position with GPS (latitude, longitude, altitude above mean sea level). The HTP (Health Tree Platform) is the evolution of the AQP, fully dedicated to trees. It is an assembly kit (hardware and software) capable of sensing tree parameters and sending data to an ESA web server. The first prototype of the HTP was designed and built internally at the ESA ESRIN EO Laboratory. The HTP is composed of low-cost sensors. Once properly attached to a tree, this platform is able to estimate its health by measuring parameters such as: plant oscillation via a three-axis accelerometer and gyroscope (angular velocities gx, gy, gz), wind speed and direction, soil moisture near the tree, local near-infrared (NIR) bands, barometric pressure, temperature and humidity, ambient light, RGB values and sound level. The second area is in charge of downloading, managing and visualizing in situ and satellite data, from both heritage missions and Sentinel satellites. Using the EO Browser (https://apps.sentinel-hub.com/eo-browser/), selecting the Sentinel-2, Sentinel-3 and Sentinel-5P satellites and the relevant parameters, it is possible to calculate the Normalised Difference Vegetation Index (NDVI) for the period 2016-2024 and to assess air quality. Using the ESA Earth Online service (https://earth.esa.int/eogateway/missions/ers/data) and selecting the relevant products of heritage missions such as ERS-1 and ERS-2, it is possible to download data for the areas of interest for the period 1991-2011. Using the ESA Air Quality Map (https://aqp.eo.esa.int/map/), it is possible to visualize and download the AQP and HTP parameters, selecting the relevant IoT device, for the period 2019-2024.
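The NDVI mentioned above is a simple band ratio; a minimal sketch of the computation, assuming near-infrared and red surface reflectances (for Sentinel-2, bands B8 and B4 respectively), is:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index, (NIR - Red) / (NIR + Red).

    `nir` and `red` are reflectances in [0, 1]; for Sentinel-2 these
    would come from bands B8 and B4 at 10 m resolution.
    """
    if nir + red == 0:
        return 0.0  # guard against division by zero over no-data pixels
    return (nir - red) / (nir + red)

# Dense vegetation reflects strongly in NIR, so NDVI is well above zero.
print(ndvi(0.4, 0.1))
```

Values near +1 indicate dense green vegetation, values near 0 bare soil, and negative values typically water.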
Finally, for the third area, machine learning algorithms have been implemented at ESA, fusing in situ and satellite data to examine trends and relevant parameters of trees. Three zones have been identified for estimating tree health: ESA ESRIN in Frascati (Italy), where data are available from the AQP, the HTP and satellites (both heritage and Copernicus); Castel Porziano in Rome (Italy); and the surrounding area of Milan, where the evolution of the NDVI and of air quality can be studied from satellite data (both heritage and current). This study calculates the variation of the NDVI in these areas from 1991 to 2024. In the Castel Porziano area, the reduction of the NDVI in recent years is quite evident, largely due to the presence of cochineal insects. In summary, this activity showed how the combination of heritage satellite data, IoT data and machine learning methods offers significant advantages in processing, analysing, and understanding trends in the health of trees over decades.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Scenes investigation for enhancing cross-calibration performance: GOME and SCIAMACHY case study

Authors: Dr Abdalmenem Owda, Dr Günter Lichtenberg, Dr Melanie Coldewey-Egbers, Dr Sander Slijkhuis
Affiliations: German Aerospace Center
Cross-calibration is one of the calibration methods used to enhance the consistency, accuracy, and comparability of measurements from different instruments, platforms, and missions. Cross-calibration essentially compares the monitored instrument with a reference instrument (Chander et al. 2013). It is performed on a combination of large datasets that provide simultaneous spatial and temporal collocation. However, spatiotemporally coincident measurements are not always available for cross-calibration, which increases the differences between the measurements of the compared sensors and complicates the cross-calibration methods. Vicarious calibration is one of the common techniques used in cross-calibration. Observations are taken from sites where the upwelling radiance field is nearly constant. These sites are called pseudo-invariant calibration sites (PICS) and were proposed by Cosnefroy in 1996. The stable temporal and spatial variability of the measurements acquired from these sites is due to the dry conditions of the sites, the absence of precipitation, the absence of vegetation cover, and the absence of human-induced changes. Consequently, if the measurements are acquired simultaneously, the difference in measurements is directly related to changes in the sensors. Heritage mission data and an increasing number of satellite missions have led to the rapid growth of remote sensing archives. This large volume of data helps to track and monitor parameters and phenomena. The Global Ozone Monitoring Experiment (GOME-1) and the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) were two spectrometers used for trace-gas retrieval. Both sensors overlapped in their mission periods and spectral range, but were not time-synchronized. The difference in their local overpass times is approximately 30 minutes. This time difference is the main challenge for the cross-calibration.
The Fundamental Data Record in the Domain of Satellite Atmospheric Composition (FDR4ATMOS) project has been initiated by the European Space Agency (ESA). It aims to provide a long-term record of selected Earth observation level-1 parameters (radiance, irradiance, and reflectance). SCIAMACHY is used as the reference sensor in the cross-calibration with GOME, as its degradation was monitored extensively in flight and corrected. Transfer functions were generated from three different PICS in the Saharan Desert to harmonize GOME reflectance. These transfer functions describe the relationship between the reflectance of SCIAMACHY data synthesized onto the GOME pixel and the GOME reflectance for the same overlapping area, as a function of wavelength. This abstract belongs to a study that investigates the results of the FDR4ATMOS cross-calibration, with the aim of providing insights into the conditions of the collocations used to generate the transfer functions. It also seeks to establish selection criteria for the cross-calibration of missions that are not time-synchronized. It is expected to cover the following objectives at LPS2025:
1. Check the reliability of the FDR4ATMOS-derived transfer functions for other regions of the globe.
2. Investigate the conditions of the GOME and SCIAMACHY collocations from PICS and non-PICS used to derive the transfer functions. The conditions include uniformity, the viewing geometry of the sensors, and cloud fraction. The non-PICS are other selected regions over land and ocean that are not considered PICS.
3. Set up selection criteria for the collocations used in the cross-calibration.
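A wavelength-dependent transfer function of the kind described above can be sketched as follows: per-wavelength ratios of reference (SCIAMACHY) to monitored (GOME) reflectance over collocated scenes, smoothed with a linear fit in wavelength. The linear fit is an illustrative smoothing choice, not the actual FDR4ATMOS algorithm.

```python
def transfer_function(wavelengths, r_scia, r_gome):
    """Fit ratio(w) = intercept + slope * w by least squares, where the
    ratio is reference/monitored reflectance at each wavelength.

    Inputs are parallel lists of wavelengths and collocated-scene
    reflectances; a real implementation would average many scenes.
    """
    ratios = [s / g for s, g in zip(r_scia, r_gome)]
    n = len(wavelengths)
    mean_w = sum(wavelengths) / n
    mean_r = sum(ratios) / n
    cov = sum((w - mean_w) * (r - mean_r) for w, r in zip(wavelengths, ratios))
    var = sum((w - mean_w) ** 2 for w in wavelengths)
    slope = cov / var
    intercept = mean_r - slope * mean_w
    # The returned callable maps wavelength -> harmonization factor.
    return lambda w: intercept + slope * w
```

Applying the returned factor to GOME reflectance at a given wavelength would harmonize it toward the SCIAMACHY reference.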

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Geometric validation results of the full (A)ATSR mission

Authors: Caroline Cox, Dr Frauke Izdebski, Dr David
Affiliations: RAL Space, STFC
The (Advanced) Along Track Scanning Radiometer (A)ATSR mission started in 1991 with the launch of the European Remote Sensing satellite (ERS-1) carrying the first of a series of three instruments. From that launch until the mission ended in 2012, ATSR-1, ATSR-2 and AATSR made observations of the Earth's top-of-atmosphere radiances. This mission has been followed by the current Copernicus Sentinel-3 Sea and Land Surface Temperature Radiometer (SLSTR). The surface temperatures retrieved from the observations are vital in studying the climate, and it is therefore planned that the (A)ATSR dataset will become a Fundamental Climate Data Record (FCDR). It is therefore important that all aspects of the measurements are validated, including the geolocation performance. The geolocation of (A)ATSR instruments uses the scan mirror's position, the orbit state vector, and a geometric model of the line of sight to compute, for every pixel, the intersection point between the line of sight and the Earth's surface, providing the latitudes and longitudes. As a dual-viewing instrument, forward and nadir images are generated, and it is particularly important to ensure colocation between these views, as well as between spectral bands, as multiple views and bands are used in the retrieval of surface temperatures. In the recent 4th reprocessing of the (A)ATSR mission, evolutions were made to the inter-view colocation accuracy, reducing the along-track shift of one pixel that was present in the 3rd reprocessing, and it is vital now to assess the quality of the full dataset. In addition, the geolocation was performed with reference to a Digital Elevation Model, rather than the geoid. Assessment of the geolocation accuracy of the 4th reprocessing was previously performed over a limited number of sites. In this presentation, we report on the geolocation accuracy over the full mission.
We consider both the inter-band co-registration performance and the inter-view performance of these dual-view instruments. Finally, the absolute geolocation performance is assessed by comparison with the Copernicus Sentinel-2 Global Reference Image over a set of locations spanning different latitudes. This is the first time such a thorough assessment of the geolocation performance of the (A)ATSR mission has been performed.
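Co-registration assessments of this kind typically estimate the offset that maximizes the correlation between a reference image and a test image. A minimal one-dimensional sketch of that idea (the real assessment works on 2-D image chips and is not described in the abstract) is:

```python
def estimate_shift(reference, image, max_shift=3):
    """Estimate an integer pixel shift between two 1-D intensity
    profiles by maximizing their mean cross-product over candidate
    shifts; a toy stand-in for 2-D image matching."""
    best, best_score = 0, float("-inf")
    n = len(reference)
    for s in range(-max_shift, max_shift + 1):
        # Pair reference[i] with image[i + s] where both indices are valid.
        pairs = [(reference[i], image[i + s]) for i in range(n) if 0 <= i + s < n]
        if not pairs:
            continue
        score = sum(a * b for a, b in pairs) / len(pairs)
        if score > best_score:
            best, best_score = s, score
    return best
```

A sub-pixel refinement (e.g. fitting a parabola around the correlation peak) would normally follow; it is omitted here.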

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Algorithm Updates in the (A)ATSR 5th Reprocessing

Authors: Connor McGurk, Caroline Cox, Dave Smith
Affiliations: STFC RAL Space
The (A)ATSR instruments produced a time series of surface temperature measurements stretching from the launch of ERS-1 in 1991 to the end of the ENVISAT mission in 2012. These instruments produced a valuable historic climate record, and were the predecessors to the modern SLSTR instruments currently in operation aboard the Sentinel-3 series of satellites. The Fundamental Data Record for Along Track Scanning Radiometers (FDR4ATSR) project aims to create a full Fundamental Climate Data Record for the 5th reprocessing of (A)ATSR data, in which improved algorithms from SLSTR have been applied to the heritage data. We have made significant modifications to three aspects of the Level 1 processing algorithm:
- Pixel ortho-regridding
- Surface classification
- Product quality flagging
Regridding is the process by which pixels are translated from their positions in the instrument field of view onto an image referenced to coordinates on the Earth's surface. In the 4th reprocessed product, this was performed using positional data referenced to the geodetic surface. A more accurate regridding algorithm takes account of the intersection of the instrument field of view with surface topography to provide the best representation of the satellite's view across mountainous terrain. This will be applied in the 5th reprocessing, and examples of the improved product are shown. In the 4th reprocessing, prior assessments of pixel surface type were made by comparing the position of the centre of each pixel's field of view against a series of reference maps. In reality, the pixel observes a broader area of the surface, whose dimensions can be constrained by pre-launch measurements of the per-pixel instrument field of view and its projection onto the Earth. We applied an improved method, originally developed for SLSTR, that generates a more comprehensive surface classification mask by considering a fuller extent of the pixel field of view.
Again, this will be implemented in the 5th reprocessing, and examples are shown of the improved land, water, and coastline surface mask. The function of product quality flagging is to warn end users of potential issues in applying (A)ATSR products to their applications. A series of checks allows the flag to be set to PASSED or DEGRADED, indicating to users the distinction between good-quality products and those partially or wholly affected by data quality issues. Taken together, these improvements will help to ensure that the (A)ATSR datasets are consistent with the products actively being generated by SLSTR today, facilitating time-series research over three decades of scanning-radiometer measurements.
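The PASSED/DEGRADED flagging logic can be sketched as collapsing a series of named boolean checks into a single product flag. The check names below are illustrative assumptions, not the actual FDR4ATSR check list.

```python
PASSED, DEGRADED = "PASSED", "DEGRADED"

def quality_flag(checks):
    """Collapse named boolean checks into one product-level flag.

    Returns the flag plus the list of failed checks, so end users can
    see why a product was marked DEGRADED.
    """
    failed = [name for name, ok in checks.items() if not ok]
    return (PASSED, []) if not failed else (DEGRADED, failed)
```

A product passes only when every check passes; any single failure degrades the whole product, which matches the conservative intent of warning users about partial data issues.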

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: The Advanced Infra-Red WAter Vapour Estimator-v3 (AIRWAVE) TCWV dataset: history and applications

Authors: PhD Elisa Castelli, Stefano Casadio, Enzo Papandrea, Massimo Valeri, Andre' Achilli, Alessandra Cofano, Dr. Bianca Maria Dinelli, Sabrina Pinori
Affiliations: CNR-ISAC, SERCO Italia, University of Bologna
Atmospheric water vapour affects the atmosphere at both local and global scales, and its total column (integrated from the ground to the top of the atmosphere, known as Total Column Water Vapour, TCWV) can be retrieved from satellite measurements exploiting different spectral regions. The infrared measurements from the Along Track Scanning Radiometer (ATSR) instrument series (ATSR-1, ATSR-2, AATSR) in the 11 and 12 μm channels, measured in clear-sky scenarios over water surfaces, are used by the Advanced Infra-Red WAter Vapour Estimator (AIRWAVE; Casadio et al., 2016, Castelli et al., 2019) algorithm. The algorithm produces TCWV at 1x1 km2 resolution over the time period from 1991 to 2012. A set of data, extracted from the first and second versions of the AIRWAVE datasets and aggregated, has been included in the two versions of the GEWEX Water Vapour Assessment (G-VAP) archive (Schröder et al., 2018, Trent et al., 2023). Recently, in the frame of the Instrument Data Evaluation and Analysis Service (IDEAS+) Quality Assurance for Earth Observation (QA4EO) ESA project, we updated the AIRWAVE code to generate its third version and re-processed the entire ATSR-1, ATSR-2 and AATSR datasets using the 4th L1B re-processing of the ATSR missions and some code updates. The dataset has been validated against other satellite data, such as the Special Sensor Microwave/Imager (SSM/I), and against the Analyzed RadioSounding Archive (ARSA) radiosoundings, showing good performance. The availability of such an extensive dataset, covering more than 20 years at very high spatial resolution, opens the possibility for several applications. Over the years, the AIRWAVE datasets have been used for applications at different spatial and temporal scales. The aggregated dataset at low spatial resolution has been used to investigate the ITCZ latitudinal displacement using Geodesic P-spline smoothing (Castelli et al., 2018) over the whole ATSR mission.
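The physical basis for thermal-infrared TCWV retrieval is that water vapour absorbs more strongly at 12 μm than at 11 μm, so the brightness-temperature difference between the two channels grows with the water column. A deliberately simplified single-view sketch of such a split-window relation is shown below; the linear form and the coefficients are illustrative placeholders, not AIRWAVE's actual dual-view retrieval.

```python
def split_window_tcwv(bt_11, bt_12, a=0.5, b=2.0):
    """Toy split-window estimate: TCWV (kg/m^2) as a linear function of
    the 11 minus 12 micron brightness-temperature difference (K).

    Coefficients `a` and `b` are hypothetical; real retrievals derive
    scene-dependent coefficients from radiative transfer modelling.
    """
    return a + b * (bt_11 - bt_12)
```

A moister atmosphere depresses the 12 μm brightness temperature more than the 11 μm one, so a larger difference maps to a larger column.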
The dataset at high (native) spatial resolution was used for the development of the automatic detection of lee waves in the Mediterranean basin (Papandrea et al., 2019). Here we will illustrate the different AIRWAVE dataset versions, their performance and their potential for long-term studies.
References:
Casadio et al., "Total column water vapour from along track scanning radiometer series using thermal infrared dual view ocean cloud free measurements: The Advanced Infra-Red Water Vapour Estimator (AIRWAVE) algorithm", Remote Sensing of Environment, vol. 172, pp. 1-14, 2016.
Castelli et al., "The Advanced Infra-Red WAter Vapour Estimator (AIRWAVE) version 2: algorithm evolution, dataset description and performance improvements", Atmos. Meas. Tech., 12, 371–388, https://doi.org/10.5194/amt-12-371-2019, 2019.
Schröder et al., "The GEWEX water vapor assessment archive of water vapour products from satellites and reanalyses", Earth Syst. Sci. Data, 10, 1093–1117, https://doi.org/10.5194/essd-10-1093-2018, 2018.
Trent et al., "Evaluation of Total Column Water Vapour Products from Satellite Observations and Reanalyses within the GEWEX Water Vapor Assessment", EGUsphere [preprint], https://doi.org/10.5194/egusphere-2023-2808, 2023.
Castelli et al., "ITCZ trend analysis via Geodesic P-spline smoothing of the AIRWAVE TCWV and cloud frequency datasets", Atmospheric Research, 214, 228–238, 2018.
Papandrea et al., "Lee wave detection over the Mediterranean Sea using the Advanced Infra-Red WAter Vapour Estimator (AIRWAVE) total column water vapour (TCWV) dataset", Atmos. Meas. Tech., 12, 6683–6693, https://doi.org/10.5194/amt-12-6683-2019, 2019.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Theoretically Clear: Maximizing the Utility of the Sentinel-2 Record With AI-powered Cloud and Shadow Detection

Authors: Valerie Pasquarella, William Rucklidge, Estefania Lahera, Andréa Puzzi Nicolau, Chris Herwig, Wanda Czerwinski, Christopher Brown, Emily Schechter, Sean Askay
Affiliations: Google, Spatial Informatics Group, SERVIR-Amazonia
When it comes to optical imagery, utility or “usability” is generally a function of how clear an image is. Ideally, pixels should be free of clouds, shadows, and other atmospheric artifacts that obscure surface conditions. Thus, the utility of optical image archives is largely contingent on the availability of cloud and shadow detection and other quality assessment data that can be used to filter observations for analysis. Historically, cloud detection algorithms have been developed for individual sensors, and while many different approaches have been proposed, only a subset of these have been operationally deployed. Cloud Score+ is a new AI-powered solution for generating per-pixel atmospheric QA scores. Unlike conventional bitmasks, Cloud Score+ provides a set of continuous scores that effectively rank the quality or usability of a given observation. These scores can be thresholded to produce a usability mask or used directly as weights, making them a flexible solution that can be tuned to the needs of specific use cases. Cloud Score+ assessments have been generated for the entire Sentinel-2 L1C archive and are available as an Image Collection in the Google Earth Engine Data Catalog. In this talk, we provide an initial summary of Cloud Score+ use cases and lessons learned from the first year of deployment and highlight the critical role of long time series for the modeling approach. We’ll review partner case studies from some of the cloudiest places in the world and other published use cases, discuss how we used Cloud Score+ to improve the Google basemap, and touch on how we’re currently using Cloud Score+ for other deep learning efforts. We’ll also discuss how we trained the Cloud Score+ model such that it can also be applied to the Landsat record and provide initial evidence of performance on Landsat imagery. 
We’ll conclude with an argument for treating long time-series of Earth observation acquisitions as video analysis problems in order to unlock new potential from Earth observation archives.
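Because Cloud Score+ provides continuous per-pixel scores rather than a bitmask, one natural use (described above) is weighting observations rather than discarding them. A minimal sketch of a score-weighted temporal composite for a single pixel, with illustrative score values, is:

```python
def weighted_composite(observations, scores):
    """Score-weighted mean of one pixel's time series.

    `observations` are e.g. reflectance values over time; `scores` are
    continuous usability scores in [0, 1], with higher meaning clearer.
    Returns None when no observation has a positive score.
    """
    total = sum(scores)
    if total == 0:
        return None  # pixel never usably clear in this period
    return sum(o * s for o, s in zip(observations, scores)) / total
```

Thresholding the same scores (e.g. keeping only observations with score above 0.6) would instead produce a conventional usability mask; the threshold can be tuned per use case.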

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Domain adaptation from current to historical aerial images with constrained generative model for past tree cover semantic segmentation

Authors: Thierry Gaubert, Valentine Bellet, Yousra Hamrouni, Thierry Feuillet, Déborah Birre, David Sheeren
Affiliations: University of Toulouse, INRAE, UMR DYNAFOR, University of Caen, CNRS – UMR 6266 IDEES, Université Sorbonne Paris Nord, Pléiade, UR 7338
Tree cover is subject to change due to anthropogenic activities and natural disturbances. Long-term tree cover mapping is therefore a key step to assess vegetation dynamics and monitor ecosystems and habitats. Currently, tree cover mapping and monitoring for recent decades can easily be performed using remote sensing images from spaceborne sensors or aerial surveys [1]. However, looking into past tree cover is essential to understand the conservation status of forest habitats. To this end, archival aerial images are a key source of information. Access to the large aerial imagery surveys of the 20th century has been facilitated by digitization, dedicated data infrastructures and web services [2]. For example, in France, aerial surveys from the 1950s are now freely available at the national scale through the BD ORTHO® Historique product [3]. Historical aerial images contain less information than those acquired with contemporary means, but enough to perform effective long-term environmental monitoring and change detection. Historical aerial images are indeed characterized by (i) a single panchromatic channel, which prevents any analysis of spectral information, (ii) highly variable radiometry and shadow configurations due to varying acquisition times and dates, and (iii) artefacts, noise or blur due to their conservation state or digitization conditions and parameters [2]. However, archival aerial images usually present a fine spatial resolution, below 1 meter. Considering the aforementioned elements, automatic analysis of historical aerial images remains a challenging task, and likely relies on relevant features linked to geometric shapes and texture rather than radiometry. Recently, automatic methods developed for the semantic segmentation of true color images have been employed successfully on historical aerial images. These methods rely on deep learning architectures with convolution layers, such as the U-Net architecture [4,5,6].
However, the performance of these methods depends mainly on the use of large reference datasets to accurately train the models. Such a dataset, with finely, manually annotated historical aerial images, is lacking for training a tree cover segmentation model at large scale, mainly due to the time required for such a task. Common data augmentation strategies, such as rotations and symmetries, could be employed to alleviate the issue, but they would likely fail to represent the spatial variability of landscapes. To bypass this issue, one solution is to use recent aerial images where the tree cover can be automatically annotated using spectral fingerprints and, potentially, an associated digital canopy height model. Recent aerial images can then be transformed into historical-like images with domain adaptation methods [7]. A basic conversion from RGB to grayscale [8] is a simple solution to the adaptation problem. However, this conversion does not necessarily respect the properties and data distribution of the historical aerial images. More advanced conversion methods based on domain adaptation techniques, such as optimal transport [9] or deep generative models [10,11], can be used to ensure a consistent image-to-image style transfer. Among the few existing works, the cycleGAN architecture for domain adaptation to historical images has recently been implemented with success [12]. Although inspiring, this approach suffers from the fact that the adaptation is not optimized for the classification task. The domain adaptation model and the classification model are learned independently, while previous works have shown that coupling the generation and classification tasks provides more accurate results than decoupled tasks [13].
In the remote sensing community, coupling the two tasks has been proposed by adapting the cycleGAN architecture into a classification-constrained generative model, with the addition of two classification modules and a classification consistency loss to constrain the model [14]. In this presentation, we propose an implementation of a cycleGAN constrained with semantic segmentation modules to finely map past tree cover, including narrow hedges and isolated trees. The proposed architecture encompasses two pairs of adversarial generator and discriminator networks, as in the cycleGAN. We add one pre-trained semantic segmentation module to each adversarial pair as a constraint. One of the generators produces historical-like images from recent images that are optimized for the semantic segmentation task, with accurate style and semantic content. We hypothesize that the historical-like images should not only be visually consistent but also interpretable by a learning model for the final purpose. Our model is trained with the FLAIR dataset [15], encompassing 77,412 manually annotated patches of recent aerial images over France, covering approximately 810 km². For the validation of our model on the BD ORTHO® Historique product, we extracted and manually annotated patches over the whole metropolitan France territory to get a faithful representation of the landscape diversity, the variety of sensors, and the acquisition conditions. Our strategy to analyze the results is twofold. Firstly, we assess the performance of our model by comparison with methods from the literature, including the original cycleGAN architecture, training the semantic segmentation model separately on each distinct historical-like transformation.
Secondly, we also derive baseline scores: (1) an upper baseline, training the semantic segmentation model with the annotated historical images, spatially disjoint from the validation data; (2) a lower baseline, training the semantic segmentation model on the FLAIR dataset converted to grayscale. The scores are computed with the mean intersection over union (mIoU), and our current results provide lower and upper baseline scores of 0.39 and 0.78, respectively. We expect the semantic segmentation-constrained cycleGAN to achieve more accurate results than state-of-the-art methods, since the generation of historical-like images is optimized for the tree cover segmentation task.
REFERENCES
[1] Senf, C., Seidl, R., 2021. Mapping the forest disturbance regimes of Europe. Nat Sustain 4, 63–70.
[2] Giordano, S., Le Bris, A., Mallet, C., 2017. Fully automatic analysis of archival aerial images: current status and challenges, in: 2017 Joint Urban Remote Sensing Event (JURSE), pp. 1–4.
[3] https://geoservices.ign.fr/bdorthohisto
[4] Mboga, N., Grippa, T., Georganos, S., Vanhuysse, S., Smets, B., Dewitte, O., Wolff, E., Lennert, M., 2020. Fully convolutional networks for land cover classification from historical panchromatic aerial photographs. ISPRS Journal of Photogrammetry and Remote Sensing 167, 385–395.
[5] Dias, M., Monteiro, J., Estima, J., Silva, J., Martins, B., 2020. Semantic segmentation and colorization of grayscale aerial imagery with W-Net models. Expert Systems 37, e12622.
[6] Ferreira, V.B., Sheeren, D., Lefèvre, S., Lang, S., 2024. More Labels or Better Labels? A Semantic Segmentation Study Case Using Historical Aerial Images for Tree Delineation, in: IEEE IGARSS 2024, Athens, Greece.
[7] Mboga, N., D'Aronco, S., Grippa, T., Pelletier, C., Georganos, S., Vanhuysse, S., Wolff, E., Smets, B., Dewitte, O., Lennert, M., Wegner, J.D., 2021. Domain Adaptation for Semantic Segmentation of Historical Panchromatic Orthomosaics in Central Africa. ISPRS International Journal of Geo-Information 10, 523.
[8] Wang, Z., Ginzler, C., Eben, B., Rehush, N., Waser, L.T., 2022. Assessing Changes in Mountain Treeline Ecotones over 30 Years Using CNNs and Historical Aerial Images. Remote Sensing 14, 2135.
[9] de Bézenac, E., Ayed, I., Gallinari, P., 2019. Optimal Unsupervised Domain Translation.
[10] Zhu, J.-Y., Park, T., Isola, P., Efros, A.A., 2017. Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks, in: Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232.
[11] Benjdira, B., Bazi, Y., Koubaa, A., Ouni, K., 2019. Unsupervised Domain Adaptation Using Generative Adversarial Networks for Semantic Segmentation of Aerial Images. Remote Sensing 11, 1369.
[12] Van den Broeck, W.A.J., Goedemé, T., Loopmans, M., 2022. Multiclass Land Cover Mapping from Historical Orthophotos Using Domain Adaptation and Spatio-Temporal Transfer Learning. Remote Sensing 14, 5911.
[13] Xu, X., Bao, X., Lu, X., Zhang, R., Chen, X., Lu, G., 2023. An end-to-end deep generative approach with meta-learning optimization for zero-shot object classification. Information Processing & Management 60, 103233.
[14] Voreiter, C., Burnel, J.-C., Lassalle, P., Spigai, M., Hugues, R., Courty, N., 2020. A Cycle GAN Approach for Heterogeneous Domain Adaptation in Land Use Classification, in: IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, pp. 1961–1964.
[15] Garioud, A., Gonthier, N., Landrieu, L., Wit, A.D., Valette, M., Poupée, M., Giordano, S., Wattrelos, B., 2023. FLAIR: a Country-Scale Land Cover Semantic Segmentation Dataset From Multi-Source Optical Imagery, in: Advances in Neural Information Processing Systems (NeurIPS) 2023.
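The mIoU score used for the baselines above can be sketched as follows, operating on flat lists of per-pixel class labels (a real evaluation would work on label rasters and accumulate over many patches):

```python
def mean_iou(y_true, y_pred, classes):
    """Mean intersection-over-union across the given classes.

    For each class, IoU = |prediction AND reference| / |prediction OR
    reference|; classes absent from both maps are skipped.
    """
    ious = []
    for c in classes:
        inter = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        union = sum(1 for t, p in zip(y_true, y_pred) if t == c or p == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

The reported lower and upper baselines (0.39 and 0.78) are values of exactly this kind of per-class-averaged score.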

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Processing of Spot World Heritage and Pleiades World Heritage products with the MAJA software

Authors: Mickael Savinaud, Alexandre Fiche, Axelle Pochet, Julien
Affiliations: Cs Group - France
The SPOT satellite series has been providing images since 1986. To promote these nearly 30 million images, the Spot World Heritage (SWH) program was announced in 2014 by CNES, the French Space Agency. The products are Level 1C: orthorectified reflectances at the top of the atmosphere. The resolution of the images ranges from 20 m to 2.5 m depending on the generation of satellite. To produce Level 2A images, corrected for the effects of the atmosphere, CNES considered using MAJA, its atmospheric correction and cloud screening processor. But MAJA is designed for Sentinel-2, Venµs and Landsat images, and its algorithm takes advantage of the temporal and spectral richness of those sensors. Indeed, MAJA works better with time series of images because it evaluates small temporal variations at pixel level. Also, MAJA uses specific wavelengths to evaluate water vapour and cloud presence. As SPOT images fulfil neither of those two conditions, this complicates the use of MAJA as the processor to generate L2A products. We will present how we adapted MAJA so that it is now able to process SPOT products. MAJA was already able to process single images without benefiting from the richness of a time series, but the lack of a proper spectral band to evaluate water vapour had to be addressed. The selected solution was to use exogenous climate data: the ERA5 dataset from ECMWF. With knowledge of the elevation and the column water vapour from the ERA5 dataset, the water content can be assessed with the proper integration. We took a similar approach to process products from the Pleiades World Heritage (PWH) program announced for 2024, which shares many similarities with the SWH program. Indeed, since 2011 the Pleiades program has produced high-resolution images at 2.6 m, and CNES wants to promote those products the same way it did with SPOT.
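The elevation-aware use of ERA5 column water vapour can be illustrated with a crude sketch: scale the sea-level total column by an exponential vertical profile so that only the vapour above the scene's altitude remains. The exponential form and the 2 km scale height are illustrative assumptions, not MAJA's actual integration.

```python
import math

def water_vapour_above(tcwv_sea_level, elevation_m, scale_height_m=2000.0):
    """Estimate the water-vapour column (e.g. kg/m^2) above a scene at
    a given elevation, assuming an exponential vertical profile.

    `tcwv_sea_level` is the ERA5-style total column referenced to sea
    level; `scale_height_m` is a hypothetical vapour scale height.
    """
    return tcwv_sea_level * math.exp(-elevation_m / scale_height_m)
```

A mountain scene at 2000 m would thus see roughly 1/e of the sea-level column, which is why a correction using elevation matters for atmospheric processing over high terrain.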

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Combining Thirty-year AVHRR NDVI Time-series and Driver Analysis to Identify Forcing Climate Variables on Vegetation Over Europe

Authors: Christina Eisfelder, Soner Üreyen, Dr. Sarah Asam, Andreas Hirner, Philipp Reiners, Juliane Huth, Felix Bachofer, Martin Bachmann, Stefanie Holzwarth
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center (DFD)
Large-scale monitoring of vegetation trends and analysing their climatic drivers is useful to understand possible impacts of climate change on vegetation. To identify climate-relevant vegetation changes, long time periods of at least three decades are required. Such long time series can be derived from the Advanced Very High Resolution Radiometer (AVHRR), which is the only satellite sensor that provides time series at a daily temporal resolution going back to the early 1980s. For monitoring the state of vegetation, the Normalised Difference Vegetation Index (NDVI) is most widely used. A novel multi-decadal homogeneous NDVI time series has been generated from AVHRR sensors at the German Remote Sensing Data Center (DFD) of the German Aerospace Center (DLR) within the TIMELINE (Time-Series Processing of Medium Resolution Earth Observation Data assessing Long-Term Dynamics In our Natural Environment) project. The NDVI time series covers Europe and North Africa and allows monitoring vegetation dynamics at a 1 km spatial resolution. The TIMELINE NDVI product consists of daily, 10-day, and monthly composites. In the presented study, we derived seasonal NDVI trends from the monthly TIMELINE NDVI time series using the Mann-Kendall trend test and the Theil-Sen slope estimator for spring, summer, and autumn for the 30-year period 1989 to 2018. We analysed the trends for areas with stable land cover derived from ESA Climate Change Initiative (CCI) land cover products. The area with stable land cover extends over 86% of the study area. Our analysis distinguishes ten different land cover types within ten separate biogeographical regions (European Environment Agency, EEA). Additionally, we determined the influence of the climate variables temperature, precipitation, solar radiation, and vapor pressure deficit on the NDVI using the Peter and Clark Momentary Conditional Independence (PCMCI+) algorithm.
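The two trend statistics named above are both rank-based and robust to outliers; minimal per-pixel sketches of each (without the significance test on the Mann-Kendall S statistic, which the full method includes) are:

```python
def theil_sen_slope(t, y):
    """Theil-Sen estimator: the median of all pairwise slopes.

    `t` are time steps (assumed strictly increasing, so no zero
    denominators) and `y` the NDVI values at those steps.
    """
    slopes = sorted((y[j] - y[i]) / (t[j] - t[i])
                    for i in range(len(t)) for j in range(i + 1, len(t)))
    m = len(slopes)
    mid = m // 2
    return slopes[mid] if m % 2 else 0.5 * (slopes[mid - 1] + slopes[mid])

def mann_kendall_s(y):
    """Mann-Kendall S statistic: the sum of signs over all ordered
    pairs; positive values indicate an increasing trend."""
    sign = lambda d: (d > 0) - (d < 0)
    return sum(sign(y[j] - y[i]) for i in range(len(y)) for j in range(i + 1, len(y)))
```

In a trend map, the Theil-Sen slope gives the magnitude of NDVI change per time step, while the sign and significance of S determine whether a pixel counts as a significant positive or negative trend.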
The results of our study show positive NDVI trends in all three seasons for large areas over Europe. The NDVI trends are generally stronger in autumn for western and central Europe, and in spring for south-eastern Europe. Larger areas with significant negative NDVI trends can be found in the Mediterranean in summer and in the Steppic region in summer and autumn. In spring, the Pannonian and Black Sea regions show high percentages of area (54% and 71%) with a significant positive trend. The other regions have significant positive trends in 19–32% of their area. In summer, the Mediterranean and Anatolian regions show significant negative NDVI trends for >10% of the area, and the Steppic region for 27% of the area. Most regions show significant positive NDVI trends for about 20–38% of the area in summer. In autumn, NDVI increase is widespread, with seven regions showing significant positive trends in at least 45% of their area. The Steppic region experiences significant negative NDVI trends for 26% of the stable land cover area. The largest percentage of area with significant positive NDVI trends was observed for forest areas, especially in autumn. The largest percentages of area with negative trends are observed for grassland and sparse vegetation in summer, as well as for sparse vegetation and rainfed cropland in autumn. In spring, significant positive causal links between the NDVI and temperature are dominant across Europe for all regions and land cover classes. Vapor pressure deficit influences the NDVI in north-eastern and south-western Europe. In summer, precipitation has a positive influence on the NDVI in parts of the Anatolian and Mediterranean regions, while vapor pressure deficit influences NDVI regionally over Europe. Precipitation and vapor pressure deficit are also the dominant controlling factors on the NDVI in autumn for central-eastern Europe and parts of the Iberian Peninsula.
The results of this study assist in understanding and quantifying ongoing vegetation change in Europe and reveal the influence of climate variables on vegetation activity and trends within different regions and for different land cover types in Europe. The availability of consistent long-term time-series, such as the AVHRR-based TIMELINE NDVI product, is a prerequisite for performing this kind of analysis.
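The per-pixel trend computation described in this abstract (a Mann-Kendall significance test plus a Theil-Sen slope on one seasonal value per year) can be sketched with SciPy. This is an illustrative reimplementation on synthetic data, not the TIMELINE processing code; the Mann-Kendall test is expressed here via Kendall's tau against time.

```python
import numpy as np
from scipy import stats

def ndvi_trend(ndvi, alpha=0.05):
    """Mann-Kendall significance and Theil-Sen slope for one pixel's
    seasonal NDVI series (one value per year)."""
    years = np.arange(len(ndvi))
    # Mann-Kendall test expressed as Kendall's tau of NDVI vs. time
    tau, p_value = stats.kendalltau(years, ndvi)
    # Robust slope estimate (median of pairwise slopes)
    slope, intercept, lo, hi = stats.theilslopes(ndvi, years)
    return slope, p_value, p_value < alpha

# Synthetic 30-year series: NDVI rising ~0.002/yr plus noise
rng = np.random.default_rng(0)
series = 0.5 + 0.002 * np.arange(30) + rng.normal(0, 0.005, 30)
slope, p, significant = ndvi_trend(series)
```

In a full workflow this function would be applied independently to every 1 km pixel of each seasonal composite stack, keeping only pixels where `significant` is true.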

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Cloud-Native Strategies for Legacy EO Data: Processing Challenges and Innovations

#stac #cog #cloud-native

Authors: Stefan Reimond, Senmao Cao, Christoph Reimer, Richard Kidd, Christian Briese, Clément Albinet, Mirko Albani
Affiliations: EODC Earth Observation Data Centre for Water Resources Monitoring GmbH, European Space Agency (ESA)
Preserving and utilizing Earth Observation (EO) data from heritage missions like ERS-1/2 and Envisat is crucial for advancing scientific research. However, integrating these legacy systems into modern cloud environments presents significant challenges. This contribution explores complexities and solutions associated with processing historic satellite data using legacy software in today's cloud-native ecosystems, illustrated by an example of ERS-1/2 and Envisat SAR data over Austria. Modern cloud technologies offer significant advantages for data processing and accessibility. They provide scalable, flexible, and efficient solutions that can handle large volumes of data with ease. Specifically, at EODC, we operate a Kubernetes cluster on top of OpenStack to manage our cloud infrastructure. Apart from providing state-of-the-art services like Dask and Jupyter for contemporary data analysis, this setup also supports the execution of legacy processing workflows. By making use of containerization tools like Docker to encapsulate these older processors, we minimize the risk of incompatibilities, ensuring they are executable and functional in the cloud. To efficiently manage such complex data processing workflows, we use Argo. However, several challenges arise when adapting legacy software to these modern environments. Compatibility issues with outdated libraries often require modifications or workarounds. Developing new software to manage input/output data and configuration files is essential to ensure smooth operation. Additionally, handling broken raw data and missing auxiliary data necessitates robust data management strategies. These challenges demand extensive testing and adjustments to ensure that legacy processors can function efficiently in a scalable cloud environment. 
An example application of this approach is the generation of a comprehensive time series of ERS-1/2 and Envisat (A)SAR data over Austria, demonstrating the practical implementation of these methodologies. This project, conducted in cooperation with ESA, highlights the successful integration of legacy processors into a Kubernetes cluster, utilizing Docker for containerization and Argo for workflow automation. Preliminary results from these processing efforts include various Level-1 and Analysis Ready Data (ARD) datasets, most notably Normalized Radar Backscatter (NRB) products. When applicable, these datasets utilize cloud-native formats like Cloud Optimized GeoTIFFs (COGs) and are accessible through EODC's SpatioTemporal Asset Catalog (STAC) interface. This setup enables on-the-fly analysis of decades-long time series using tools such as Jupyter and Dask, significantly enhancing data discoverability, accessibility, and usability.
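As a sketch of how a STAC-catalogued COG time series like this is typically queried, the snippet below builds a STAC API item-search payload. The collection id and bounding box are illustrative assumptions, not EODC's actual identifiers.

```python
import json

# Hypothetical item-search payload for an ERS/Envisat NRB time series.
# Collection id and bbox are illustrative, not EODC's real values.
search = {
    "collections": ["ers-envisat-sar-nrb"],   # assumed collection id
    "bbox": [9.5, 46.3, 17.2, 49.1],          # rough bounding box of Austria
    "datetime": "1991-07-01T00:00:00Z/2012-04-30T23:59:59Z",
    "limit": 100,
}
body = json.dumps(search)
# POST `body` to <stac-api-root>/search and page through the matching
# items; each item's assets then point at COG files that tools like
# Dask can stream range-by-range without downloading whole scenes.
```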

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: AVHRR Data Recovery Project

Authors: Iolanda Maggio, Mirko Albani, Sergio Folco, Stefan Wunderle, Marin Tudoroiu, Christoph Neuhaus
Affiliations: Starion Italia S.p.A., ESA, University of Bern
Data from the Advanced Very High Resolution Radiometer (AVHRR) onboard many NOAA and, since 2006, EUMETSAT MetOp satellites provide a long time series over the last 45 years with daily availability and 1.1 km spatial resolution (AVHRR 1 km LAC data). This long period fulfils the requirements of the World Meteorological Organization for a statistically sound analysis of climate-change-induced shifts through computing and monitoring a set of Essential Climate Variables (ECV). Besides the central NOAA CLASS and EUMETSAT archives, many local archives of AVHRR 1 km data exist worldwide. In the frame of ESA’s Heritage Space Programme, the data holdings of the University of Bern (Switzerland), the European Space Agency (ESA) and the Dundee Satellite Receiving Station (UK) were recently combined and reprocessed to generate a full and harmonised 40+ year AVHRR LAC data coverage of Europe at Level-1b and Level-1c, which is now freely accessible on-line (e.g. https://tpm-ds.eo.esa.int/socat/AVHRR_L1B_1_1KM). Many of the other existing archives worldwide are unfortunately not or only partly accessible, or not adequately maintained for use by research teams. ESA is now leading a project in the frame of the CEOS Working Group on Information Systems and Services (WGISS - https://ceos.org/ourwork/workinggroups/wgiss/) with the goal of generating an inventory of AVHRR LAC archives worldwide, focussing on those with data acquired before 2006, and of recovering, where possible, the ones not accessible to users, for the benefit of the scientific community and mankind at large.
In more detail, the project objectives are to unfold and make accessible 1 km AVHRR data from regional archives, transcribe unique data from heritage media, identify a common format for AVHRR Level-1b and Level-1c data, pursue (re)processing by AVHRR data owners/holders and data accessibility through CEOS agencies' systems, and facilitate data discovery through CEOS-based entry points (WGISS Connected Data Assets infrastructure, https://ceos.org/ourwork/workinggroups/wgiss/access/connected-data-assets/). The focus of the activities at ESA is now on the recovery of the 1 km data acquired by a set of stations worldwide in 1992-1999 in the frame of a joint activity between ESA and USGS. Recovered data from ESA stations are accessible online at https://tpm-ds.eo.esa.int/socat/NOAA_AVHRR_L1B_LAC_out-of-Europe, whilst data from USGS stations are being reformatted and will be made accessible in 2025 after bulk reprocessing. ESA is moreover repatriating data from different organizations in South America and Africa, which will be processed in a harmonised format and made accessible to users open and free. These data are in some cases still stored on heritage media that have been repatriated to ESA for transcription and recovery. Several hundred optical disks, Exabyte and DLT tapes are currently being transcribed through ad-hoc transcription chains set up at ESA ESRIN in Frascati; transcriptions will continue over the coming months. All extracted data are then transferred to the ESA-funded FDR4AVHRR project for full processing and then disseminated to end users.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Generation of Long Time Data Series for ESA heritage TPM missions

Authors: Mirko Albani, Nora Wiik, Roberto Biasutti, Iolanda Maggio
Affiliations: Starion Italia S.p.A., ESA, Serco
The need for accessing historical Earth Observation (EO) data series strongly increased in the last ten years, particularly for long-term science and environmental monitoring applications. This trend is likely to increase even more in the future, in particular regarding the growing interest in global change monitoring which is driving users to request time-series of data spanning 20 years and more. The need to support the United Nations Framework Convention on Climate Change (UNFCCC) is also further driving demand for historical data series. The contents of EO data archives are continuously growing from a few years to decades and therefore, their value as a scientific time-series is ever-increasing. Hence, there is a strong need to preserve the EO space data without time constraints and to ensure that they are accessible and exploitable by the community. The preservation of EO space data can also be considered as a responsibility of the Space Agencies or data owners as they constitute a humankind asset. The Heritage Space Programme responds to the mandate of preserving, making accessible, and valorising ESA heritage space data and associated information holdings generated by payloads and instruments on-board space platforms from ESA and ESA-managed Third Party Missions with a strong focus on their accessibility and exploitability. Over 40 Earth Observation missions and 80 EO campaigns are initially included in the Heritage Space programme starting from 2023. As part of the programme activities, a particular focus is put on the generation of long time series of coherent data in support of climate change monitoring activities, needs and related applications. This paper describes the activities implemented at ESA for the generation of a long time series of data starting from the ESA Heritage Third Party Missions (TPM) L-band SAR instruments data holdings. 
These activities consisted of the recovery of data from old media, their consolidation, archiving, reprocessing and dissemination to end users. They focussed on the recovery and consolidation of the SEASAT SAR heritage mission dataset available at ESA (SEASAT, launched by NASA on 28 June 1978, was the first space-based civil EO SAR mission) and on its alignment with the JERS-1 L-band SAR and ALOS PALSAR mission data in terms of processing chain and output format. As a result of these activities, the complete SEASAT and JERS-1 SAR datasets have been reprocessed and aligned with the more recent ALOS PALSAR dataset, producing a long time series of coherent data spanning from 1978 to 2011 (with gaps between the three missions' operational lifetimes). These data are now accessible on an open and free basis to all users through ESA.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Atmospheric Correction of the 30+ Year Australian AVHRR Archive Using the RTTOV Radiative Transfer Model

Authors: Andrew Prata, Edward King, Aarond Dino, Matt Paget, Robert Bridgart, Matthew Stenson, David Jupp
Affiliations: CSIRO Environment
First launched on 13 October 1978, the Advanced Very High Resolution Radiometer (AVHRR) series of instruments now represents one of the longest archives of global, daily, multi-spectral satellite observations, comprising measurements from the visible to the thermal infrared. AVHRR data have found use in a wide range of scientific disciplines, from the areas for which they were originally intended, such as meteorology and oceanography, to surprising and now heavily relied-upon applications in environmental science (e.g. vegetation index estimation) and volcanology (e.g. volcanic cloud detection). Historically, AVHRR data could be received in two modes: Direct Broadcast and limited on-board recording. Recent efforts have been made by CSIRO to recover, reprocess and stitch together unique AVHRR High Resolution Picture Transmission (HRPT, 1 km resolution) data received by Australian and Antarctic stations dating back to the mid-1980s. We have now completed the processing of a geolocated and radiometrically calibrated dataset of top-of-atmosphere (TOA) reflectance and brightness temperature for 1992-2023. The TOA satellite measurements provide information on the coupled land/ocean-atmosphere system, and this initial dataset is available via the TERN data system. However, to retrieve accurate quantitative information about the land/ocean surface, both the atmosphere and surface properties must be considered. Here we present first results of a physics-based atmospheric correction algorithm to convert the Australian historical archive of TOA AVHRR HRPT data to bottom-of-atmosphere (BOA) surface reflectance and brightness temperature. To forward model TOA radiances, we use the Radiative Transfer for TOVS (RTTOV) model. We account for the bi-directional reflectance distribution function (BRDF) over land by adopting an atmosphere-BRDF coupling correction.
We use a kernel-driven model approach to BRDF, using RTTOV’s BRDF atlas which is based on the Moderate Resolution Imaging Spectroradiometer BRDF product (MCD43C1). We also use RTTOV’s CAMEL (Combined ASTER and MODIS Emissivity over Land) infrared emissivity atlas to correct the AVHRR TOA thermal infrared brightness temperatures to surface temperature. For meteorological and aerosol profile information we use the ECMWF Reanalysis v5 (ERA5) and Copernicus Atmosphere Monitoring Service (CAMS) aerosol reanalysis datasets, respectively. The outcome of this work will be to provide a new, moderate spatial resolution (1 km2) and high frequency (every day over 30+ years) surface reflectance and temperature product for use in a wide range of scientific applications such as the study of long-term (potentially ~40 years) trends and changes in vegetation and land/ocean surface temperature across Australia.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone Q)

Poster: Long-Term Data Enhancement with the New X-TRACK/L2P Product for Coastal Applications

Authors: Fabien Léger, Florence Birol, Fernando Niño, Mathilde Cancet, Cécile Kocha
Affiliations: CNRS - LEGOS, LEGOS (CNES/CNRS/IRD/UT3), Université de Toulouse, Collecte Localisation Satellites (CLS)
To enhance the completeness and accuracy of sea surface height information obtained from satellite altimetry in coastal ocean regions, the Center of Topography of the Ocean and Hydrosphere (CTOH) has developed a dedicated software package called X-TRACK. The newly reprocessed 1-Hz Level-3 X-TRACK product represents a significant improvement in oceanographic observations, combining the X-TRACK processing system developed by CTOH with the L2P altimetry product from CNES. This dataset includes the most recent parameters and corrections from 16 recently reprocessed altimetry missions (DT24), from 1992 (TP) to 2024 (S6A), optimizing the accuracy of sea surface height information across coastal regions worldwide and resulting in an invaluable resource for coastal sea level analysis. A key feature of this product is the intermission bias adjustment based on Global Mean Sea Level (GMSL), which harmonizes data from the various missions. Additionally, three long-term combined mission series (based on the Jason, ERS and Jason-interleaved orbits) provide a unique dataset to characterize sea level variability, enhancing its relevance for researchers and policymakers in coastal studies and climate change research. By offering a robust and user-friendly product for analysing sea level changes, this dataset significantly enhances our understanding of ocean and coastal dynamics. Additionally, we compute the X-TRACK Tidal Constants, an added-value coastal altimetry product that provides accurate along-track tidal constant estimates at regional scale in all coastal seas worldwide. It effectively complements tide gauge observations for tidal studies, validation of tidal models, and data assimilation into models. Both datasets are freely accessible on the AVISO+ platform.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: C.02.20 - POSTER - Past and Future EO Scientific Mission Concept

ESA's FutureEO programme, and specifically its Earth Explorer component, is a world-class programme of scientific research EO missions. By its very nature it is highly selective: for every selected mission there are many more that are recognised as scientifically valid and technically feasible.
In fact, the combination of scientific topics, Earth observation physics and the current state of technology means that at any given time there is a pool of mission ideas, which may or may not be submitted or selected by ESA depending on various factors. The large but finite extent of this informal pool is evidenced, for instance, by repeated submissions of similar ideas to successive Earth Explorer calls.
This session will present some of the more prominent mission ideas and concepts, either as specific past (or planned future) submissions to Earth Explorer calls, or as general concepts.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: Hydroterra+: a game changer in the monitoring of the water cycle over Mediterranean and Africa areas

Authors: Professor Andrea Monti Guarnieri, Professor Francois Bouttier, Luca Brocca, Professor Stephen Hobbs, Professor Vassiliki Kotroni, Professor Harald Kustmann, Dr. Juha Lemmetyinen, Professor Juan Manuel Lopez Sanchez, Director of the Microwaves and Radar Institute Alberto Moreira, Thomas Nagler, Professor Peter Van Oevelen, Antonio Parodi, Wolfgang Wagner, Professor Tim Wright, Dr. Christian Rossi
Affiliations: Cima Research Foundation, Polytechnic University of Milan, DESR/CNRM -GAME (UMR CNRS & Météo-France), Research Institute for Geo-Hydrological Protection, Cranfield University, National Observatory of Athens, Karlsruhe Institute of Technology, Finnish Metereological Institute, University of Alicante, DLR, ENVEO IT GmbH, University of Leeds, TU Wien, GEWEX, European Space Agency
Hydroterra+ is a candidate mission for the European Space Agency’s 12th Earth Explorer program. This mission presents an impressive array of capabilities that could greatly influence the fields of weather forecasting, hydrology, and the mountain cryosphere. Additionally, it would enhance the observation of ground motion, which is crucial for monitoring landslides and improving our understanding of earthquakes, volcanic activity, and tectonic movements. The societal benefits derived from Hydroterra+ are therefore clear. This initiative is an evolution of the Hydroterra mission, which was proposed for the 10th Earth Explorer (EE10). Hydroterra underwent a review in December 2020, receiving praise for its scientific and technological innovations, although it was not deemed ready for flight selection at that time. Hydroterra+ builds upon the insights gained from Hydroterra while incorporating new scientific expertise. Since 2020, the science team has diligently advanced the mission concept and its scientific rationale. With the EE12 opportunity, the focus is to refine the Hydroterra+ science case and implementation strategy to address previous shortcomings and leverage recent advancements. Hydroterra+ aims to utilize a radar system in geosynchronous orbit to monitor the water cycle. This radar is particularly adept at detecting water, and its orbit, which maintains a constant view of Europe and Africa, allows for multiple observations of the same area each day. This capability will enable, for the first time from space, the observation of processes occurring over timescales of tens of minutes to hours, potentially unlocking significant insights in Earth system science. An important ancillary benefit of this mission is the ability to detect ground motion over these short timescales, facilitating substantial new scientific discoveries.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R)

Poster: Improving flood and landslide prediction, and irrigation estimation with soil moisture obtained from the Hydroterra+ mission

Authors: Luca Brocca, Dr. Paolo Filippucci, Maria Teresa Brunetti, Antonio Parodi, Wolfgang Wagner, Andrea Virgilio Monti-Guarnieri, Cristian Rossi
Affiliations: National Research Council of Italy, CIMA Research Foundation, TU Wien, Politecnico di Milano, ESA ESTEC
Hydroterra+ is an ESA Earth Explorer 12 (EE-12) candidate mission to monitor fast processes (hours to days) of the hydrological cycle, with a particular focus on Europe, the Mediterranean basin and certain parts of Africa. These areas are highly vulnerable to climate change, and Hydroterra+ specifically targets a significant gap in observations of the water cycle: processes that occur on timescales of hours to days at a regional scale. Current and planned low Earth orbit missions do not adequately address this need. Specifically, to improve flood and landslide forecasting systems and to support water management, e.g. irrigation in agriculture, soil moisture products need high spatial resolution and sub-daily temporal resolution. For this reason, one of the main objectives of Hydroterra+ is the estimation of surface soil moisture with a spatial resolution of less than 1 km and a revisit time of 3-6 hours. Only then is it possible to follow the rapid evolution of soil moisture, especially during extreme events that trigger floods and landslides, or during periods of intensive irrigation. Therefore, Hydroterra+ will be a game changer for hydrological applications, providing essential information to decision makers involved in flood and landslide risk mitigation and water resources management in agriculture. In this study, different configurations of the Hydroterra+ surface soil moisture product, i.e. different spatial and temporal resolution, spatial and temporal coverage, and uncertainty, will be used as input to synthetic experiments (observing system simulation experiments) to evaluate the benefit of Hydroterra+ for the selected hydrological applications: flood and landslide prediction, and estimation of irrigation water use. First results will be shown at the conference, clearly highlighting the strong benefit of the high temporal resolution and large spatial coverage of the Hydroterra+ mission.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: A.01.09 - POSTER - Atmospheric 3D Winds for Weather and Climate

The three-dimensional (3D) distribution of horizontal atmospheric wind vectors (referred to as 3D winds) is an integral part of the Earth system. 3D winds interact with clouds, convection, and precipitation. They transport heat, moisture, momentum, aerosols, and trace gases. They also drive ocean circulations and affect sensible and latent heat fluxes between the atmosphere and the underlying land or ocean surfaces. Their accurate observation is critical for a variety of applications, such as weather and climate forecasting, aerosol and pollution transport, disaster weather monitoring, wind energy generation, aircraft flight planning, and wildfire and volcanic plume movement. They are also crucial to reconcile the spaceborne measurements of greenhouse gas concentrations versus survey-based regional reports of emissions.

3D wind measurements remain a challenge. Conventional observations of horizontal winds, such as from rawinsondes, wind profilers, and airborne dropsondes, have sparse coverage. Wind scatterometers provide accurate vector winds, but only near the ocean surface. Feature matching of satellite cloud-top or water vapor fields has been used for decades to retrieve atmospheric motion vectors (AMVs), but this approach is mostly limited to a single and uncertain pressure level at a given time. Satellite wind lidar measurements, as pioneered by ESA's Aeolus mission, are expected to provide more accurate data and capture the line-of-sight wind for clear skies, within cirrus clouds, and above thick clouds, but only along a curtain parallel to the satellite track. For these reasons, 3D winds are one of the seven observables for the NASA Earth System Explorer competition.

In this session, we invite innovative contributions on the 3D winds and related measurements; e.g. vertical velocity of cloud top, vertical motions in clouds (e.g., via ESA EarthCARE), convective vertical mass flux (e.g., via NASA INCUS), and ocean surface winds. These contributions could cover retrieval algorithms, mission architecture, fusion of measurements (e.g., via machine learning), and impacts on science and applications.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: New Data Records of the Ocean Surface Winds: Contributing to the Understanding of the Atmospheric and Oceanic Circulations

Authors: Svetla Hristova-Veleva, Bryan Stiles, Alexander Fore, Alexander Wineteer, Federica Polverari, Mark Bourassa, Ethan Wright, Douglas Vandemark, Larry O'Neill, David Moroni, Shakeel Asharaf
Affiliations: Jet Propulsion Laboratory, California Institute of Technology, Florida State University, University of New Hampshire, Oregon State University
Ocean surface winds (OSW) are one of the key components of the Earth system. Indeed, ocean surface winds and stress are Essential Climate Variables (ECV) identified by the Global Climate Observing System (GCOS) [GCOS-200, 2016; GCOS, 2021]. They represent a unique measurement at the interface of two fluids – the ocean and the atmosphere. As such, they reflect the interactions at this interface and modify the boundary layers in each of them, by modulating the sensible and latent heat fluxes and by modifying the turbulent mixing in both the atmospheric and the oceanic boundary layers. They also affect the atmospheric and ocean dynamics. The OSW are a major driver of the ocean circulation through the surface stress curl. They drive atmospheric convection by providing dynamical forcing through the convergence of the near-surface winds, in addition to providing fuel to weather systems by modulating the sensible and latent heat fluxes. Hence, the OSW leave a large footprint in the 3D structure of the atmospheric circulation. Understanding these interactions is critical for improving ocean modeling and weather forecasting on a variety of spatial and temporal scales - from isolated convective cores, to organized mesoscale systems, to hurricanes, to seasonal and intraseasonal phenomena such as the MJO and El Niño, and the trends and variability in the large-scale atmospheric circulation (e.g. monsoon rains). From a climate standpoint, the OSW capture the changes in the large-scale atmospheric circulation, as depicted by its surface branch.
Here we will present two records of the OSW: i) a recently-developed climate data record of the ocean surface wind vectors titled “Creating an Extended and Consistent ESDR of the Ocean Surface Winds, Stress, and Their Dynamically-Significant Derivatives for the Period 1999-2022”; ii) the outline of a new project, supported by NASA’s Satellite Needs Working Group (SNWG) program, to develop operational 6-hourly maps of the OSW based on observations from a number of scatterometers, radiometers (including polarimetric), SAR instruments and SWOT winds. This will be a 5-year effort to build the system. We will begin this presentation by describing the three main goals of our NASA MEaSUREs-funded project, namely: i) creation of a consistent long-term Earth Science Data Record (ESDR) that includes observations from several different missions while eliminating inconsistencies between them; ii) development of the dynamically-significant derived products - the surface wind stress and the curl and divergence of the surface wind and stress; iii) development of scatterometer-only, user-friendly, uniformly gridded products (Level 3 products, with space/time gaps between observations) to fill an unmet user need and complement existing gap-free L4 products (blending models and observations), which have their own roles. This effort is not the first of its kind, following in the tradition of earlier efforts. Here we provide a new state-of-the-art set of retrieved products based on different retrieval algorithms, thus providing an additional ESDR of climate quality. Only through analyses of a number of different ESDRs can we obtain a better understanding of the uncertainties associated with the retrieval approaches and the creation of climate-quality ESDRs.
Our products also bring some new elements: i) equivalent neutral (EN), surface-relative wind vector retrievals homogenized across the retrievals from the different instruments/missions; ii) estimation of the 10 m true wind vectors; iii) estimation of the wind stress vector; iv) computation of the divergence and vorticity of the wind and the stress; v) uncertainty estimation; vi) collocated data from ERA-5, IMERG and GlobeCurrents to support both the product evaluation and a better understanding of the processes that are not captured by the observations alone. After introducing the motivation and goals, the presentation will provide a short overview of the new elements and a summary of how some of the new products were developed. We will then provide initial evaluation and validation of the products. Next, we will describe the objectives and the current design of the new SNWG project to provide, operationally, gridded data for OSW speed every 6 hours, combining wind retrievals from a variety of different instruments: scatterometers, SAR instruments, altimeters and passive microwave radiometers. Wind direction will be provided when available (scatterometers, polarimetric radiometers). The instruments under consideration for inclusion are: i) scatterometers - ASCAT-B/C, ScatSat2; ii) SARs - Sentinel 1, RadarSat 2, NISAR, ALOS 4, commercial SARs, SWOT; iii) radiometers - COWVR, SMAP, GMI, AMSR2/3, the SSMIs. Among the benefits provided by this new operational product are: i) increased global and temporal coverage; ii) data available from a single source, in a common format, and on a common grid; iii) wind stress and derivatives of the wind and wind stress, which are more directly related to air/sea flux quantities than the winds themselves; iv) flexibility to add new instruments in the future; v) collocated observations from multiple instruments that will enable precise uncertainty estimates and the development of synergistic products.
A special note: the gridded data will be provided at two resolutions - 1/8 and 1/64 deg, with the higher-resolution grid being specific to the SAR observations. Developing a global set of high-resolution SAR observations that are collocated with lower-resolution OSW from scatterometers and radiometers will open the possibility of using machine learning to develop new, higher-resolution wind speed estimates from the scatterometer and radiometer wind observations. Finally, we will illustrate how a long-term data record of the scatterometer-derived ocean surface wind vectors can be used to study the trends and variability in the large-scale circulation, as depicted by its lower branch. Climate models, re-analyses, and observations suggest changes in the characteristics of the Hadley cell, considered to be an atmospheric response to the observed tropical ocean warming trend. The observed and predicted changes suggest opposing trends for the two branches of the Hadley Circulation (HC): i) a poleward expansion of the descending branch; ii) a narrowing of the ascending branch. Both of these trends could result in shifts in precipitation patterns affecting natural ecosystems, agriculture, water resources and disasters. While a number of re-analyses agree in estimating a widening of the HC (poleward expansion of the descending branch), there is significant variability in estimates of the intensity of the HC. The intensity change is quantified as a narrowing of the ascending branch (generally determined as the width of the ascending area depicted by the Inter-Tropical Convergence Zone, ITCZ) and/or an increase in the intensity of the ascent. Here we will illustrate how the long-term record of the OSW can be used to evaluate both the widening of the HC and the intensification of its ascending branch - the changes in the characteristics of the ITCZ in terms of width and intensity of the convergence.
Part of the work was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
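The divergence and vorticity in item iv) can be computed directly from the gridded wind (or stress) components. Below is a minimal illustrative sketch in Python/NumPy, assuming a regular latitude/longitude grid and centred finite differences in spherical coordinates; the abstract does not specify the product's actual algorithm.

```python
import numpy as np

R_EARTH = 6_371_000.0  # mean Earth radius [m]

def div_vort(u, v, lat, lon):
    """Divergence and vorticity of a vector field (u eastward, v northward)
    on a regular lat/lon grid, via centred finite differences in spherical
    coordinates. u, v: 2-D arrays shaped (nlat, nlon); lat, lon: 1-D arrays
    in degrees."""
    lat_r, lon_r = np.deg2rad(lat), np.deg2rad(lon)
    coslat = np.cos(lat_r)[:, None]
    du_dlon = np.gradient(u, lon_r, axis=1)   # du/dlambda
    dv_dlon = np.gradient(v, lon_r, axis=1)   # dv/dlambda
    # div  = [du/dlambda + d(v cos(phi))/dphi] / (R cos(phi))
    # vort = [dv/dlambda - d(u cos(phi))/dphi] / (R cos(phi))
    div = (du_dlon + np.gradient(v * coslat, lat_r, axis=0)) / (R_EARTH * coslat)
    vort = (dv_dlon - np.gradient(u * coslat, lat_r, axis=0)) / (R_EARTH * coslat)
    return div, vort
```

A uniform eastward wind, for instance, has zero divergence and zero relative vorticity at the equator, which provides a quick sanity check of the discretization.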
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Monitoring Atmospheric 3D Winds with the HALO Doppler Wind Lidar at the CARO National Facility in Limassol, Cyprus

Authors: Constantinos Chrysostomou, George Kotsias, Argyro Nisantzi, Patric Seifert, Rodanthi Elisavet Mamouri
Affiliations: ERATOSTHENES Centre of Excellence, Cyprus University of Technology, Faculty of Engineering and Technology, Cyprus, Leibniz Institut für Troposphärenforschung, Leipzig, Germany
The study of atmospheric 3D wind fields is vital both for monitoring atmospheric dynamics and for advancing weather prediction and climate research. Since January 2023, the Cyprus Atmospheric Remote Sensing Observatory (CARO) Ground-Based Station (GBS), located in coastal Limassol, has been hosting the HALO (Snoopy) Doppler wind lidar, providing continuous and detailed vertical and horizontal wind profiles. This ongoing dataset provides a high-resolution view of wind dynamics over the region, contributing to a deeper understanding of atmospheric processes and their variability. The present research utilizes the HALO lidar to extract 3D wind fields and assess their variability across diurnal, seasonal, and synoptic scales. The analysis extends over almost two years (from February 2023 to December 2024) and focuses on monthly and seasonal variations of wind speed and direction. In addition, by integrating advanced remote sensing techniques, including the calculation of vertical velocity variance, the study estimates the Mixing Layer Height (MLH) across the seasons, offering insights into atmospheric dynamics over the region. Furthermore, the ongoing data collection at CARO provides a valuable opportunity for calibration and validation activities supporting satellite-based observations. Consequently, the present study also validates ESA's Aeolus satellite mission Level 2B Rayleigh-Clear and Mie-Cloudy data against the ground-based lidar measurements, aiming to assess their accuracy and applicability in regional wind profiling. Overall, the findings of this work highlight distinct seasonal wind patterns, with notable differences in wind speeds and directions between daytime and nighttime, as well as the influence of regional meteorology, such as local topographical features and the Mediterranean Sea. 
These results contribute to the understanding of wind climatology patterns and intra-annual variability in the Eastern Mediterranean, emphasizing the importance of integrated satellite and ground-based observations for climate studies and weather forecasting. Acknowledgements: The authors acknowledge the 'EXCELSIOR': ERATOSTHENES: EΧcellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project (www.excelsior2020.eu). The 'EXCELSIOR' project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 857510, from the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development, and from the Cyprus University of Technology. This study was supported by the ATARRI Horizon Europe Widespread Twinning Project. ATARRI receives funding from the European Union's Horizon Europe Twinning Call (HORIZON-WIDERA-2023-ACCESS-02) under grant agreement No 101160258.
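The variance-based mixing-layer retrieval mentioned above can be sketched very simply: take the MLH as the lowest range gate where the vertical-velocity variance over an averaging window falls below a fixed threshold. The 0.04 m² s⁻² threshold below is a commonly used illustrative value, not necessarily the CARO processing choice.

```python
import numpy as np

def mixing_layer_height(w, heights, threshold=0.04):
    """Estimate the Mixing Layer Height as the lowest height at which the
    variance of the vertical velocity w drops below `threshold` [m^2 s^-2].
    w: 2-D array (time, height) of vertical velocities within an averaging
    window; heights: 1-D array of range-gate heights [m]."""
    w_var = np.nanvar(w, axis=0)            # variance profile over the window
    below = np.where(w_var < threshold)[0]  # gates with weak turbulence
    return heights[below[0]] if below.size else heights[-1]
```

With synthetic data (strong turbulence below 1 km, near-laminar flow above), the estimator returns the first quiescent gate, as expected.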
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Assimilation of WIVERN Doppler Data in Weather Research and Forecasting (WRF) Model for the Medicane Ianos: A Comparison with Advanced SCATterometer (ASCAT)

Authors: Stefano Federico, Rosa Claudia Torcasio, Claudio Transerici, Mario Montopoli, Alessandro Battaglia
Affiliations: CNR-ISAC, via del Fosso del Cavaliere 100, DIATI - Politecnico di Torino, Corso Duca degli Abruzzi 24
Improving the representation of the initial state of the atmosphere in Numerical Weather Prediction (NWP) models is critical for advancing the quality of weather forecasts, which are vital for our daily life. Wind, cloud and precipitation are driving factors for Earth's water and energy cycles, and they can sometimes represent weather-related threats. Uncertain measurements of these variables present challenges for NWP models, making accurate forecasting particularly difficult. The WIVERN (Wind Velocity Radar Nephoscope) mission (Illingworth et al., 2018), for observing global winds, clouds and precipitation, has the unprecedented opportunity to be the first space-based mission to provide in-cloud winds. It is one of the two candidate missions competing for selection as the Earth Explorer 11 mission within the European Space Agency's FutureEO programme. WIVERN data are anticipated to be significantly beneficial to several sectors: improving our knowledge of weather phenomena, validating climate statistics, and enhancing NWP performance. In this work, we focus on NWP performance after assimilating WIVERN Doppler data, specifically Line-Of-Sight (LoS) winds, for the high-impact case study of Medicane Ianos, which occurred in mid-September 2020 in the central Mediterranean and finally dissipated inland on the west coast of Greece. The experimental results of WIVERN Doppler assimilation are compared with those obtained from a similar experiment assimilating Advanced SCATterometer (ASCAT) radar data. To achieve this, we generate pseudo-observations that are assimilated into the Weather Research and Forecasting (WRF) model. Pseudo-observations are derived by running an ensemble of WRF at 4 km horizontal resolution, using the European Centre for Medium-Range Weather Forecasts – Ensemble Prediction System (ECMWF-EPS) analysis/forecast cycle issued at 12 UTC on 16 September 2020 as initial and boundary conditions. 
The trajectories of the different simulations are compared with the reference trajectory determined by the method of Flaounas et al. (2023), which uses ERA5 reanalysis data. Among the ensemble simulations, we select the best member, i.e., the member in best agreement with the reference trajectory. WIVERN and ASCAT pseudo-observations are then generated at 12 UTC on 17 September from the best member, using the WIVERN simulator (Battaglia et al., 2022) for the former and, for ASCAT, the winds at the first model level interpolated over a passage of the MetOp-C satellite. Pseudo-observations are then assimilated, at 12 UTC on 17 September, into the WRF model using the 3DVar scheme of Federico (2013). At that time, the storm was in its mature phase but still far from landfall, which occurred on the morning of 18 September. To robustly evaluate the impact of data assimilation, pseudo-observations are assimilated in all the ensemble members. The results of the WIVERN and ASCAT data assimilation experiments show that WIVERN data assimilation gives a larger improvement to the Ianos trajectory prediction than ASCAT (more than a 20% error decrease). The synergistic effect of assimilating WIVERN and ASCAT together is also discussed. References: Battaglia, A., et al., 2022, https://doi.org/10.5194/amt-15-3011-2022. Federico, S., 2013, https://doi.org/10.5194/amt-6-3563-2013. Illingworth, A. J., et al., 2018, https://doi.org/10.1175/BAMS-D-16-0047.1. Flaounas, E., et al., 2023, https://doi.org/10.5194/wcd-4-639-2023.
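In its simplest form, the trajectory comparison underlying the reported error decrease reduces to a mean great-circle distance between matched track points. The sketch below is purely illustrative; the study's actual verification metric is not specified in the abstract.

```python
import numpy as np

def track_error_km(track, ref):
    """Mean great-circle (haversine) distance [km] between a simulated
    cyclone track and a reference track, matched point-by-point at common
    times. Each track is an (n, 2) array of (lat, lon) in degrees."""
    lat1, lon1 = np.deg2rad(track).T
    lat2, lon2 = np.deg2rad(ref).T
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return (2 * 6371.0 * np.arcsin(np.sqrt(a))).mean()
```

A track offset by one degree of latitude gives a mean error of about 111 km, the length of a degree of latitude on the sphere.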
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: 3D reconstruction of turbulent wind using lidar measurements.

Authors: Christian Musso, Doctor Amaury Capmas-Pernet, Doctor Tomline Michel, Doctor Matthieu Valla, Doctor Frédéric
Affiliations: Onera
3D wind speed measurement in turbulent wind is particularly interesting for several applications, such as weather and climate forecasting, planning and safety of aircraft during flight, transport of aerosols and pollution, monitoring of weather conditions in case of disasters, wind power generation, forest fires and volcanic plume movement. In particular, it is critical for the current development of low-consumption aircraft with high-aspect-ratio wings to increase their lift. However, such wings are sensitive to loads induced by turbulence. This is why such aircraft require adaptive wings that modify their shape according to the 3D wind turbulence measured ahead of the airplane (so-called feed-forward gust load alleviation). To this end, we have developed a UV molecular wind lidar with a QMZ interferometer (see D. T. Michel et al., session C.01.11) that measures the wind projected on the lidar axes. The lidar is addressed along different axes to obtain the projections of the wind speed along each of them and reconstruct the 3D wind by inverting the matrix of projections. While this method works for a homogeneous wind, in the case of turbulence each projection is measured at a different spatial position, and therefore for a slightly different wind, which leads to large errors in the reconstructed wind field. It is therefore important to develop a specific algorithm to optimize the wind reconstruction in a turbulent wind field. We propose a Bayesian inversion method close to Kriging (Gaussian processes) to reconstruct the 3D wind from lidar measurements. It consists of estimating the wind speed at any point in space by incorporating a priori modeling of turbulence physics. This approach is part of the general trend towards incorporating physics into machine learning methods (physics-informed kernel learning). Taking into account lidar instrumental errors, we show, through realistic simulations, that Von Karman-type turbulent wind is well estimated ahead of the aircraft. 
We are also developing a method for optimizing lidar axis angles. The aim is to find the optimum configuration of these axes in space, so as to minimize wind estimation errors on the aircraft axis. These errors are given by the Bayesian inversion algorithm. On the basis of simulations with turbulence of varying intensity, we show that with 4 lidar axes well positioned in space, these errors are minimal. The proposed wind reconstruction methodology can be used in a variety of contexts where we need to determine the 3D wind from lidar measurements.
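The homogeneous-wind baseline described above (inverting the matrix of projections) can be sketched as an ordinary least-squares problem; this is precisely the step that degrades in turbulence and that the proposed Bayesian/Kriging method, which adds a turbulence-physics prior, is designed to replace.

```python
import numpy as np

def invert_los(directions, v_los):
    """Recover a 3-D wind vector from line-of-sight (LoS) speeds measured
    along several lidar axes, assuming a locally homogeneous wind.
    directions: (n, 3) unit vectors of the lidar axes; v_los: (n,) measured
    projections. With n >= 3 non-coplanar axes the least-squares solution
    inverts the projection matrix; extra axes reduce noise sensitivity."""
    wind, *_ = np.linalg.lstsq(np.asarray(directions, dtype=float),
                               np.asarray(v_los, dtype=float), rcond=None)
    return wind
```

For a truly homogeneous wind this recovers the vector exactly; in turbulence each v_los samples a slightly different wind, which is where the homogeneity assumption breaks down.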
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone A)

Poster: Climatic characteristics of various tracks of tropical cyclones and their impact on rainfall in a temperate coastal region based on 70 years of observation data

Authors: Dr. Yan Ma
Affiliations: Qingdao Meteorological Bureau
This study analyses the climatic characteristics of tropical cyclones affecting Qingdao, a temperate coastal region in China. It considers the frequency, period, and intensity of tropical cyclones and their impact on rainfall in Qingdao. The analysis is based on the Typhoon Yearbook, tropical cyclone data for China during 1949-2020, and surface meteorological observations. The findings indicate that the frequency of tropical cyclones affecting Qingdao between 1949 and 2020 showed an overall decreasing trend, with a notable variation period of 2-4 years. Tropical cyclones that turned after landfall and those that continued northward after landfall together accounted for 47.9% of the total. In July, the majority of tropical cyclones followed a track of continuing northward after landfall, while tracks turning after landfall were predominantly observed in September. The intensity of tropical cyclones showed an overall weakening trend with pronounced inter-decadal fluctuations. However, there was minimal change in intensity when tropical cyclones with a northward trajectory made close landfall near Qingdao. The diverse tracks of tropical cyclones resulted in spatially heterogeneous, significant rainfall in Qingdao. The track of continuing northward after landfall led to the highest daily and event-accumulated rainfall, while tropical cyclones turning nearshore or advancing offshore before landfall produced relatively minimal rainfall.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: C.06.04 - POSTER - New Space Missions Data Quality & Cal/Val

New Space global players are launching more and more Earth-observation satellites, with great potential to complement Copernicus and ESA Programmes for both operational and scientific purposes. Data Quality and Calibration/Validation activities are critical with a view to building up an interoperable system of systems. The goal of this session is to discuss current and future developments in the area of Data Quality and Calibration/Validation of spaceborne Very High-Resolution instruments, including pre-launch/post-launch activities and best practices. Characterization/Traceability, Cal/Val of Constellations, Interoperability, Standards and Best Practices, and Cal/Val Sites are among the key elements of the session, including contributions from various domains such as SAR, Multi-/Hyper-Spectral, Thermal IR, and Atmospheric Composition.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: Achieving CEOS Analysis Ready Data Compliance with the EarthDaily Constellation: Advancements in Calibration, Validation, and Data Quality

Authors: Christopher Rampersad, William Parkinson
Affiliations: Earthdaily Analytics
The EarthDaily Constellation (EDC), scheduled to launch in 2025, will deliver unprecedented Earth observation capabilities, offering daily global coverage of nearly the Earth's entire landmass (over 95%) with scientific-grade quality. Drawing inspiration from other science missions such as Sentinel-2, Landsat-9, and Landsat Next, EDC is equipped with 22 spectral bands ranging from visible and near-infrared (VNIR), short-wave infrared (SWIR), mid-wave infrared (MWIR), to long-wave infrared (LWIR). With a five-meter resolution in VNIR, EDC will enable a deep understanding of our planet's changes on a daily basis. Developed by EarthDaily Analytics (EDA), EDC is designed to produce data products that are compliant with the CEOS Analysis Ready Data (CEOS-ARD) specifications, particularly in the Surface Reflectance and Surface Temperature product families. Achieving CEOS-ARD compliance is a core objective for EDA, and this commitment is reflected in our approach to the design and calibration of the mission. Pre-launch, we are employing tunable lasers to calibrate the responsivity of our detectors similar to other science missions such as Sentinel-5P [1]. This method surpasses traditional broadband light sources in integrating spheres in terms of absolute radiometric accuracy. It ensures our sensors have well-characterized spectral responses down to the nanometer level, aligning with best practices from governmental science missions such as Sentinel-2. By utilizing tunable lasers, we achieve precise calibration critical for accurate surface reflectance measurements. Post-launch, EDA will leverage EarthPipeline, a sophisticated ground segment system that has been in development for over eight years and automatically calibrates and processes raw downlinked data into analysis-ready products. EarthPipeline uses rigorous physical modelling combined with advanced machine learning techniques to generate high-quality data products. 
It includes specialized deep-learning cloud masks and advanced atmospheric correction algorithms that have been rigorously tested and validated. Our cloud masking algorithm incorporates ground truth data from community contributions such as CloudSEN12 [2] and has been benchmarked against ESA's baseline, demonstrating improved accuracy in cloud, shadow, and haze detection. EarthPipeline has been tested with a variety of commercial satellites, is currently operational, and serves as the engine for achieving and validating our data quality. To ensure the quality and accuracy of our data products, EDA has utilized Sentinel-2 open data to validate the performance of critical algorithms. We have conducted inter-comparison studies of EDA’s atmospheric correction algorithms versus ESA’s Sen2Cor for approximately 650 Sentinel-2 scenes at established RadCalNet [3] calibration sites. Our surface reflectance products have shown excellent agreement with Sentinel-2 data, exhibiting approximately a 2% variation. Specifically, EDA’s Level 2A (L2A) products had a 6.5% root mean square error (RMSE) from RadCalNet compared to an 8.6% RMSE for ESA’s L2A products. These validation efforts confirm the technical correctness of our products and their suitability for time series analysis and data interoperability necessary to achieve CEOS-ARD compliance. Additionally, we have implemented haze correction algorithms to maximize the usability of pixels affected by atmospheric conditions. We provide metadata to indicate the corrections applied, enhancing the transparency and usability of our data products. By adhering to CEOS-ARD standards, we aim to support time series analysis and data interoperability across various applications, including environmental monitoring, agriculture, forestry, urban planning, and disaster management. 
In this presentation, we will provide detailed insights into our calibration and validation methodologies, the development of the EarthPipeline processing system, and our strategies for achieving and maintaining CEOS-ARD compliance. We will also discuss the challenges encountered, such as dealing with diverse atmospheric conditions and varying land cover types, and the solutions implemented to overcome them. By sharing our experiences and methodologies, we hope to foster collaboration and contribute to advancing data quality and interoperability in the Earth observation community. Moreover, the EDA’s commitment to CEOS-ARD compliance underscores the importance of partnerships and cooperation between commercial entities and the international Earth observation community. Today's pressing complex challenges require multiple data sources and by aligning with CEOS-ARD standards, we can facilitate data integration and interoperability, enhancing decision-making and analysis for end users. Keywords: EarthDaily Constellation, CEOS-ARD, Calibration, Validation, EarthPipeline, Atmospheric Correction, Cloud Masking, Remote Sensing, Earth Observation, Data Quality, Interoperability [1] Kleipool, Q., Ludewig, A., Babić, L., Bartstra, R., Braak, R., Dierssen, W., Dewitte, P.-J., Kenter, P., Landzaat, R., Leloux, J., Loots, E., Meijering, P., van der Plas, E., Rozemeijer, N., Schepers, D., Schiavini, D., Smeets, J., Vacanti, G., Vonk, F., & Veefkind, P. (2018). Pre-launch calibration results of the TROPOMI payload on-board the Sentinel-5 Precursor satellite. Atmospheric Measurement Techniques, 11(12), 6439–6479. https://doi.org/10.5194/amt-11-6439-2018 [2] https://github.com/cloudsen12/cloudsen12 [3] Doxani, G., Vermote, E. F., Roger, J. C., Skakun, S., Gascon, F., Collison, A., De Keukelaere, L., Desjardins, C., Frantz, D., Hagolle, O., Kim, M., Louis, J., Pacifici, F., Pflug, B., Poilvé, H., Ramon, D., Richter, R., & Yin, F. (2023). 
Atmospheric Correction Inter-comparison eXercise, ACIX-II Land: An assessment of atmospheric correction processors for Landsat 8 and Sentinel-2 over land. Remote Sensing of Environment, 285, 113412. https://doi.org/10.1016/j.rse.2022.113412
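The RMSE figures quoted against RadCalNet correspond, in essence, to a band-wise relative root-mean-square error between retrieved and reference surface reflectance. The sketch below illustrates such a metric; it is not the exact EDA validation protocol, which is not described at this level of detail.

```python
import numpy as np

def relative_rmse(measured, reference):
    """Band-wise relative RMSE (%) of surface reflectance against a
    reference (e.g. in-situ RadCalNet values). Both arrays are shaped
    (n_scenes, n_bands); the result is one percentage per band."""
    rel_err = (measured - reference) / reference
    return 100.0 * np.sqrt(np.mean(rel_err ** 2, axis=0))
```

A product biased uniformly by +5% over all scenes yields a 5% relative RMSE in every band, which makes the metric easy to sanity-check.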
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: Validating and Utilizing the HYPSO hyperspectral constellation

Authors: Aria Alinejad, Joseph Garrett
Affiliations: Norwegian University of Science and Technology (NTNU)
Recently, the Norwegian University of Science and Technology (NTNU) launched HYPSO-2. Together with its predecessor, HYPSO-1, it became part of one of the first hyperspectral imaging constellations. While both satellites in the HYPSO constellation carry a hyperspectral imager tailored to monitoring the ocean (400-800 nm), HYPSO-2 complements it with a higher-resolution red-green-blue camera for image fusion. Since its launch, in-orbit radiometric and spectral calibrations of HYPSO-2 have been performed, as well as tests of the interoperability of the satellites. To fully utilize the capabilities of the HYPSO constellation, the relative radiometric and spectral sensitivities have been compared on the same scene at a selection of calibration sites. The consequences of dissimilarities between the satellites for their derived data products (reflectance and chlorophyll-a concentration) have been estimated. Additionally, some planning and automation strategies for using the two satellites for joint measurement campaigns together with in-situ agents will be discussed. NTNU has already started planning its next hyperspectral satellite mission. Utilizing data from the first two HYPSO satellites revealed particular areas of the mission design that could be improved for future missions. A new hyperspectral camera with commercial off-the-shelf optics has been designed to measure the 600-1100 nm wavelength range in order to better estimate water vapor and aerosol concentration, which have emerged as two major hurdles to developing an effective atmospheric compensation. A better atmospheric compensation will both permit more consistent data products and allow data collected at angles far from nadir to be interpreted more clearly. A second new hyperspectral camera is being developed that will target a spatial resolution of 5-10 m. This spatial resolution will enable clearer observations in narrow parts of fjords and in inland waters. 
The aforementioned cameras are being combined with two others into a payload designed to fit on a 16U cubesat. As with HYPSO-1 and HYPSO-2, the payload is built from commercial off-the-shelf optical components with customized mounting. This will allow for extensive testing of the new payload prior to launch.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: Consistency of hyperspectral time-series datasets, showcased through New Space and Third Party Missions

Authors: Samantha Lavender, Fay Emmett, Lisa Haskell, Kevin Halsall, Leonardo De Laurentiis, Roberto Biasutti
Affiliations: Telespazio UK, ESA/ESRIN
The Earthnet Data Assessment Project (EDAP+) is tasked with assessing New Space missions before ESA considers them as Third Party Missions (TPMs). Within the optical domain, very high-resolution optical missions are considered alongside hyperspectral missions, and increasingly these are constellations rather than single-satellite missions. As a result, the assessments undertaken need to consider quality variations within a constellation, both geographically and temporally. Also, spectral fidelity is a critical component of hyperspectral missions, and the assessment needs to be performed across the spectral range within which the mission operates. Examples of hyperspectral missions assessed under EDAP+ include HyperScout-1 and -2, alongside the ISRO HySIS and Chinese Jilin-1 GP missions. In parallel, work on ESA's TPM dataset from the Compact High Resolution Imaging Spectrometer (CHRIS)/Proba-1 mission through IDEAS-QA4EO/QA4EO-2 is moving towards a reprocessing campaign focused on generating CEOS-compliant Analysis Ready Data (ARD). The Proba-1 mission was launched in 2001 and CHRIS stopped acquiring data in December 2022, having provided up to 62 bands over the 400-1050 nm spectral range with data acquired in five different acquisition modes (Barnsley et al. 2004, https://doi.org/10.1109/TGRS.2004.827260). The CHRIS dataset was processed in real time using an SSTL/Airbus processing chain that has not changed significantly since the early 2000s, when radiometric calibration campaigns were undertaken within a few years of launch. Therefore, a CHRIS reprocessing activity is under development that will include atmospheric correction, cloud detection, and geometric correction. There is also the option to perform noise reduction, including destriping. Calibration analysis results (Lavender et al. 2022, https://doi.org/10.1109/IGARSS46834.2022.9884457) showed a relatively close radiometric alignment with reference data over land, and CHRIS's performance was not significantly worse than that of more modern hyperspectral missions. However, improvements are under consideration, and the calibration of aquatic sites is being reviewed. A further topic that needs significant work for CHRIS is the geometric correction, as the Proba-1 mission had limitations with its pointing accuracy; the overlap of multiple views varies. The work presented will showcase the activities performed within EDAP+ on the listed hyperspectral missions, including how the techniques developed for assessing New Space missions have been adjusted to deal with hyperspectral and constellation datasets. In addition, it will showcase how tools developed for Quality Control can also support reprocessing campaigns by providing insights into the focus areas and support improvements toward CEOS-compliant ARD generation from historical datasets. The presentation will conclude by reviewing lessons learnt from the work undertaken and what is needed going forward.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: CEOS Commercial Mission Radiometric Calibration Monitoring System

Authors: Samuel Hunt, Pieter De Vis, Samantha Malone, Madeline Stedman, Pr Nigel Fox
Affiliations: NPL
The increasing range of applications of Earth Observation data products and the availability of low-cost launches have resulted in a rapid expansion of the commercial EO sector. These commercial EO datasets provide capabilities complementary to those of space agencies, such as increased temporal coverage or higher spatial resolution. Adoption of these data products for many applications requires that they meet an assured level of quality that is fit for the given purpose. To provide this assurance, the community of commercial data vendors and users, through fora such as VH-RODA and JACIE (Joint Agency Commercial Imagery Evaluation), has expressed an interest in the development of an objective, open platform to compare the performance of these missions to agreed references. In response, the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV) opened an action to develop such a system for monitoring the radiometric performance of “new space” missions sensing in the optical domain. In support of this objective, a calibration/validation pipeline has been developed that identifies matchups between target missions and a variety of references (e.g., CEOS RadCalNet) – which together can be combined to form a single “virtual reference”. Observations from these matchups are processed to compensate for differences in spectral and spatial sampling, as well as other observation mismatches, such as in viewing geometry. The overall uncertainty of this processing is then considered to validate whether the mission performance is within the expected bounds. Results are presented in a web dashboard. Presented here is the progress on the development of this system, including a demonstration of its operation. The demonstrator currently considers comparison of the Sentinel-2 and Landsat space agency missions and the Planet commercial mission. 
The system has been designed to enable straightforward extension to include a wider range of target products and references, ultimately through contributions to it as an open-source software project.
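Compensating for differences in spectral sampling between a target mission and a reference is commonly done with a spectral band adjustment factor (SBAF): the ratio of a reference spectrum convolved with each sensor's relative spectral response. The sketch below assumes both responses are resampled to a shared, uniform wavelength grid; the abstract does not describe the pipeline's actual implementation.

```python
import numpy as np

def band_adjustment_factor(ref_spectrum, srf_target, srf_reference):
    """Spectral band adjustment factor (SBAF) between two sensors viewing
    the same target: ratio of the SRF-weighted band averages of a reference
    spectrum. All inputs are 1-D arrays on the same uniform wavelength grid;
    srf_* are the (non-negative) relative spectral responses."""
    def band_avg(srf):
        return np.sum(ref_spectrum * srf) / np.sum(srf)
    return band_avg(srf_target) / band_avg(srf_reference)
```

For a spectrally flat target the SBAF is exactly 1 regardless of the response shapes, which is the standard sanity check for this kind of adjustment.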
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: EDAP+ activities in the Atmospheric Domain. Assessment and Validation of potential ESA TPMs and a potential Cal/Val Park

Authors: Leonardo De Laurentiis, Chloe H. Martella, Sabrina Pinori, Gabriele Mevi, Dr. Clement
Affiliations: ESA
In the frame of the Earthnet Data Assessment Project, we present the activities carried out for the atmospheric domain of potential ESA Third Party Missions. The Earth Observation (EO) and New Space sector has experienced significant growth in recent years; new products are emerging from New Space that could potentially complement the agencies' services, enhancing the capabilities of the EO sector. In the frame of EDAP+, we perform an early assessment of missions' technical documentation and an independent data quality evaluation. The results are summarized in a maturity matrix (MM) generated following atmospheric guidelines. The final scope of EDAP+ activities is to foster cooperation with New Space and to liaise with mission owners to improve their products, in compliance with the Quality Assurance for EO (QA4EO) guidelines. Here, we present the activities carried out during the past year and the outcomes of the assessments. We will also discuss the guidelines for the atmospheric domain and their future development. • SPIRE GNSS-RO vertical profiles of atmospheric parameters (i.e., dry temperature and dry pressure of the atmosphere). The Spire constellation continuously collects data from occulted GNSS signals (GEO orbits), from which vertical profiles of atmospheric variables are derived using the Radio Occultation (RO) sounding technique. • An atmospheric ground-based network that could become a promising Cal/Val Park, designed by Vodafone Business. The network provides real-time and high-resolution data that could be used for long-term risk assessment, climate mitigation measures, and Cal/Val activities of satellite sensors. The “Network as a Sensor” project provides the following atmospheric data: meteorological parameters (i.e., temperature, pressure, relative humidity, wind speed, and direction) from sensors hosted on passive telecoms infrastructure, Vantage Towers. 
Air quality (i.e., carbon dioxide, nitric oxide, nitrogen dioxide, nitrogen oxides, ozone, PM1, PM2.5, PM4, PM10). Precipitation intensity data derived from wireless links of ground-based infrastructure retrieved by Wireless DNA.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: STITCH – A scalable reference image based on Sentinel-2 worldwide tiles

Authors: Dr Alexandre Constantin, Jean-Louis Dalla-Verde, Renaud Binet, Sylvia Sylvander
Affiliations: Cnes
In the framework of the Spot World Heritage (SWH) program, which gathers and makes available all images from Spot 1 to Spot 5, the French space agency (CNES) reprocessed all the data to match better standards. As part of the geometric processing, in order to improve the absolute location of SPOT images, CNES developed a tool named Sentinel-2 Tiled Image patchwork (STITCH), which provides homogeneous reference data based on Sentinel-2 L1C images. Sentinel-2 [4] is a long-term optical satellite mission providing worldwide images of emerged land at a high revisit frequency of five days. To this end, two satellites are currently in orbit: Sentinel-2A, launched in 2015, and Sentinel-2B, launched in 2017. Furthermore, the mission is being renewed with the recent launch of Sentinel-2C and, in the future, the launch of Sentinel-2D, and will be pursued after 2030 with a new generation of Sentinel-2 imagers. In addition, ESA applies an open data access policy, and all the data, from 2015 to date, are available through the Copernicus Data Space Ecosystem (CDSE, [5]). The Sentinel-2 images were initially provided with a geometric model giving a circular error upper bound at 95% of around 12 meters. The data are now refined thanks to the Global Reference Image (GRI, [3]); they are flagged as “refined” in the metadata and improved to a circular error upper bound at 95% of around 8.5 meters [2]. This absolute location accuracy is fully compliant with SPOT refining requirements. STITCH aims to provide Sentinel-2 images without any additional geometric processing, i.e., images whose absolute location accuracy matches the performance reported by the ESA Sentinel-2 Mission Performance Centre. 
STITCH provides a list of candidates (dates for each tile) with respect to the following criteria: (1) the coverage (snow, cloud and orbit), (2) whether the product has been “refined”, (3) the time delta with respect to a reference date, (4) the closeness to the present day and (5) the quality control of the product. The candidates are ranked with a weighted sum, where the weights are provided by the user. For example, the resulting worldwide reference data may be updated depending on the season in order to improve the refining performance. In the case of landscape changes or large time deltas, the geometrical performance might be reduced, and STITCH is able to generate a new product. The worldwide reference dataset produced by STITCH for SWH will remain fixed, whereas for new missions it can be updated. Thus, in the framework of the new TRISHNA mission, an update is currently in progress. A collection of Sentinel-2 homogeneous reference data should be available on GEODES [1], the CNES data provider platform, in a few months. [1] CNES. Earth observation data platform access, in French. https://geodes.cnes.fr. Accessed: 2024-11-22. [2] Mission Performance Cluster Service. Data quality report: Sentinel-2 L1C multispectral imager. Technical Report 101, European Space Agency, July 2024. [3] Cécile Dechoz, Vincent Poulain, Stéphane Massera, Florie Languille, Daniel Greslou, Françoise De Lussy, Angélique Gaudel, Céline L’Helguen, Cécile Picard, and Thierry L. Trémas. Sentinel 2 Global Reference Image. In Image and Signal Processing for Remote Sensing XXI, volume 9643, pages 94–107. SPIE, 2015. [4] Matthias Drusch, Umberto Del Bello, S. Carlier, Olivier Colin, V. Fernandez, Ferran Gascon, Bianca Hoersch, Claudia Isola, P. Laberinti, Philippe Martimort, Aimé Meygret, Francois Spoto, O. Sy, Franco Marchese, and Pier Bargellini. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sensing of Environment, 120:25–36, May 2012. [5] ESA. Copernicus Data Space Ecosystem. https://dataspace.copernicus.eu/explore-data. Accessed: 2024-08-23.
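The weighted-sum ranking described in the abstract can be sketched as follows. This is a minimal illustration only: the criterion names, weights, tiles and scores are hypothetical, not STITCH's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    tile: str
    date: str
    scores: dict  # criterion name -> normalised score in [0, 1]

def rank_candidates(candidates, weights):
    """Rank candidates by a user-weighted sum of per-criterion scores."""
    def total(c):
        return sum(weights.get(k, 0.0) * v for k, v in c.scores.items())
    return sorted(candidates, key=total, reverse=True)

# hypothetical candidates for one tile
cands = [
    Candidate("31TCJ", "2023-07-02", {"coverage": 0.9, "refined": 1.0, "time_delta": 0.4}),
    Candidate("31TCJ", "2023-01-15", {"coverage": 0.6, "refined": 1.0, "time_delta": 0.9}),
]
weights = {"coverage": 0.5, "refined": 0.3, "time_delta": 0.2}  # user-supplied
best = rank_candidates(cands, weights)[0]
```

Because the weights are user inputs, the same candidate pool can yield different reference datasets for different missions or seasons, as the abstract notes.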
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: Satellite Validation and Monitoring of Belgian Water: WATERHYPERNET and AERONET-OC Data From Thornton Bank.

Authors: Francesca Ortenzio, Kevin Ruddick, Dimitry Vanderzande, Dieter Vansteenwegen
Affiliations: RBINS, VLIZ
Satellite optical remote sensing is used to monitor coastal and inland water quality, providing important data by measuring parameters such as chlorophyll-a and suspended particulate matter. The quality of these operational satellite data depends on the quality of the validation, supported by in situ measurements of water-leaving radiance reflectance. The WATERHYPERNET, a new international ground-based network of automated radiometers on zenith- and azimuth-pointing systems, provides in situ hyperspectral validation covering a spectral range of 400-900 nm, sufficient for all visible and near-infrared bands of currently deployed and future satellite platforms, such as Sentinel-3/OLCI, MODIS, EnMAP, PACE and CHIME. The network builds on the experience gained from AERONET-OC, a multispectral network that has existed for more than 20 years and has demonstrated how efficient radiometric validation of satellites can be when achieved by a network of automated radiometers with a common data acquisition protocol and processing. The PANTHYR, a “PAN-and-Tilt HYperspectral Radiometer” system, is part of the WATERHYPERNET network and has been successfully installed in the southern part of the North Sea, in Belgium, on the C-Power windfarm Offshore Transformer Station (OTS). The OTS is situated 30 km off the Belgian coastline on the Thornton sandbank. Measurements are collected every 20 minutes from sunrise until sunset, with the main application of providing data for satellite validation. Nonetheless, the availability of high-frequency, long-term hyperspectral water reflectance measurements can be useful for many other applications related to Belgian water monitoring, such as phytoplankton dynamics, turbidity dynamics and diatom blooms. Belgian waters have been a key site for validation of satellite ocean colour radiometry since the launch of MERIS in 2002.
At first through regular comparison of ship-based measurements with satellite derived marine reflectance and later through the installation of two AERONET-OC CIMEL-SeaPrism instruments, one in turbid nearshore waters (Site - MOW1), one further offshore in clearer waters (C-Power OTS, Thornton Bank). A PANTHYR and a CIMEL-SeaPrism operate on the same platform, allowing for a comparison between the two sensors, multi- and hyperspectral. This work provides a description of the site and presents the initial data obtained from the Panthyr installation on C-Power, alongside a comparison with AERONET-OC data. High-frequency measurements of hyperspectral water-leaving radiance are essential for validating the new generation of satellite missions and gaining a better understanding of the dynamics of the water on this specific site. The comparison with the AERONET will also allow to confirm and assess the validity of this collection of in situ data.
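Comparing a hyperspectral instrument (PANTHYR) with a multispectral one (CIMEL-SeaPrism) typically involves convolving the hyperspectral spectrum with the multispectral band's spectral response function. The sketch below illustrates the principle only; the SRF shape and the flat reflectance spectrum are invented, and the abstract does not specify the matchup protocol actually used.

```python
import numpy as np

def band_average(wavelengths, spectrum, srf_wavelengths, srf):
    """Band-equivalent value of a hyperspectral spectrum for one
    multispectral band, weighted by its spectral response function (SRF).
    Assumes a uniform wavelength grid."""
    w = np.interp(wavelengths, srf_wavelengths, srf, left=0.0, right=0.0)
    return float(np.sum(w * spectrum) / np.sum(w))

wl = np.arange(400.0, 901.0, 1.0)        # hyperspectral grid, 400-900 nm
rho = np.full_like(wl, 0.05)             # spectrally flat water reflectance
srf_wl = np.array([550.0, 560.0, 570.0, 580.0])
srf = np.array([0.0, 1.0, 1.0, 0.0])     # idealised band response
rho_band = band_average(wl, rho, srf_wl, srf)  # 0.05 for a flat spectrum
```

For a flat spectrum the band average must equal the spectrum value regardless of the SRF, which makes a convenient sanity check for the convolution.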
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: A Methodology for Region-specific Cal/Val of Small Thermal Satellites.

Authors: Stylianos Kossieris, Efthymios Papachristos, Dimitrios Bliziotis, Chrysovalantis Tsiakos, Georgios Tsimiklis, Nikolaos Kordalis, Dr. Marios Vlachos, Prof. Ioannis Papoutsis, Dr. Angelos Amditis
Affiliations: Institute of Communication and Computer Systems (ICCS), Hellenic Space Center (HSC), National Technical University of Athens (NTUA)
In this work, a novel methodology for the validation of small satellites is introduced. The methodology will be deployed and validated in the framework of the Greek National Satellite Space Project's Thermal Space Segment, tailored to the specific needs of the Greek territory. It includes the acquisition, delivery and exploitation of thermal images over Greece, followed by the deployment of multiple small satellites. The primary objective of the methodology is to provide a cost-effective validation solution that aligns with the growing demand for small satellite missions. The methodology is based on the Temperature-based (T-based) method, one of the four methods established by the Working Group on Calibration and Validation (WGCV) of the Committee on Earth Observation Satellites (CEOS) and approved by the GCOS. It therefore requires independent ground-based measurements for the validation of space-based observations. By integrating low-cost sensors alongside high-precision instruments, the methodology balances affordability with accuracy. Furthermore, calibration, which is demanding in terms of hardware for high-sensitivity measurements, will be addressed indirectly through the proposed validation procedure. Through an optimal site selection strategy, the methodology enables region-specific Cal/Val to enhance the validity of regional satellite products. While the methodology is inherently generic, allowing for broad applicability across different regions and satellite missions, it is being specifically implemented as a use case for OroraTech’s thermal products as part of the Greek National Satellite Space Project. The optimal site selection for sensor placement is based on the analysis of several parameters, to ensure topographical and emissivity homogeneity and to reduce high-frequency temporal variations in the signal. Thus, more than a decade of time series of different parameters is analyzed over the Greek territory.
The feature list includes surface topography, land and sea surface temperatures and cloud cover, along with other proxies such as vegetation indices, total column water vapor, and wind speed and direction.
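One simple way to screen candidate sites for low high-frequency temporal variation, as described above, is to rank them by the coefficient of variation of a surface temperature time series. This is a sketch with invented numbers; the project's actual multi-parameter, decade-long analysis is far richer.

```python
import numpy as np

def stability_score(series):
    """Coefficient of variation of a temperature time series (in Kelvin);
    lower values indicate a temporally more stable site."""
    s = np.asarray(series, dtype=float)
    return float(np.std(s) / np.mean(s))

# hypothetical LST samples (K) for two candidate sites
sites = {
    "plateau": [295.1, 295.4, 294.9, 295.2],
    "coastal": [290.0, 296.5, 288.2, 299.1],
}
best = min(sites, key=lambda k: stability_score(sites[k]))
```

In practice each criterion (homogeneity, cloud cover, water vapour, wind) would contribute its own score, combined much like any multi-criteria site ranking.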
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: ESA/NASA Quality Assurance Framework for Earth Observation Products

Authors: Samuel Hunt, Leonardo De Laurentiis, Jaime Nickeson, Batuhan Osmanoglu, Dr Paul Green, Dr. Clement Albinet, Philippe Goryl, Frederick Policelli, Dana Ostrenga, Melissa Yang Martin
Affiliations: NPL, ESA, NASA
Across the broad potential user base for Earth Observation (EO) data, ensuring confidence in the quality of the available products is vital. This is particularly important for quantitative applications, especially those related to public policy or commercial/legal risk, e.g., methane leak monitoring. However, as the commercial EO sector rapidly expands, it is an increasing challenge for the user community to discern between the wide variety of product offerings in a reliable manner, especially in terms of product quality. In response to this, ESA and NASA, through their Joint Program Planning Group (JPPG) Subgroup, have developed a common EO product Quality Assurance (QA) Framework to provide comprehensive assessments of product quality. The evaluation is primarily aimed at verifying that the data have achieved their claimed performance levels, and it reviews the extent to which the products have been prepared following community best practice in a manner that is “fit for purpose”. A Cal/Val maturity matrix provides a high-level, colour-coded summary of the quality assessment results for users. The matrix contains a column for each section of analysis (e.g., metrology) and cells for each subsection of analysis (e.g., sensor calibration). Subsection grades are indicated by the colour of the respective grid cell, as defined in the key. Both ESA and NASA have ongoing activities supporting the procurement of commercial EO data that make use of the joint QA Framework, to ensure decisions on data acquisition are made with confidence. On the ESA side, the Earthnet Data Assessment Project (EDAP) performs data assessments on EO missions in the optical, atmospheric and SAR domains. Similarly, the NASA Earth Science Division (ESD) Commercial Smallsat Data Acquisition (CSDA) Program completed a pilot study in 2020 and has since entered a sustained-use phase for some of the commercial datasets.
To date, these activities have released an official joint QA Framework for SAR products, with optical and atmospheric products to follow. In this presentation the joint ESA/NASA QA Framework is described, with some examples of its application to commercial EO products.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: HotSat-1 Post-Launch Radiometric Calibration and Validation of Top-of-Atmosphere Radiances Using VIIRS

Authors: Joshua Chadney, Jade Constantinou, Daniel Evans, Ross Hawton, Jamie McMillan, James O’Connor, Teresa Steinke, Martin Wooster
Affiliations: Satvu
The HotSat instrument comprises a single-band mid-wave infrared (MWIR) imager, the first iteration of which was launched as HotSat-1 in June 2023. The satellite captured 6 months of imagery that we have used to construct and evaluate a radiometric model of the instrument during nighttime operation. To derive the radiometric model, we use data from the satellite’s onboard heated high-emissivity paddles, alongside HotSat-1 imagery of the open ocean. The ocean images are thermally homogeneous scenes for which we know the emissivity and have obtained estimates of the surface temperature from the Operational Sea Surface Temperature and Sea Ice Analysis (OSTIA) system, produced by the UK Met Office. We assess the performance of this radiometric calibration on a dataset of HotSat-1 images captured over land. For this we make use of Level 1B radiance products from VIIRS in the I4, M12, and M13 bands, which overlap significantly with the HotSat waveband. In addition, the Suomi-NPP, NOAA-20, and NOAA-21 satellites, which carry VIIRS instruments, have overpass times similar to HotSat-1’s, so we can assume that the surface emission and the state of the atmosphere have not changed between observations. From this, we formulate radiative transfer equations in a way that aims to minimise the dependency on the unknown surface emissivity over each instrument’s waveband, and we derive an equation describing the relationship between VIIRS and HotSat observations. Lastly, we aggregate the mean response in radiance for each image and report the residual as a mean absolute percentage error.
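The final aggregation step above, comparing per-image mean radiances as a mean absolute percentage error, can be sketched as follows (the radiance values are invented for illustration and do not represent HotSat-1 or VIIRS results):

```python
import numpy as np

def mape(reference, measured):
    """Mean absolute percentage error of measured vs reference radiances."""
    r = np.asarray(reference, dtype=float)
    m = np.asarray(measured, dtype=float)
    return float(np.mean(np.abs((m - r) / r)) * 100.0)

# hypothetical per-image mean radiances: reference sensor vs sensor under test
ref = [1.00, 2.00, 4.00]
meas = [1.05, 1.90, 4.20]
err = mape(ref, meas)  # 5.0 % in this constructed example
```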
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: Radiometric Calibration and Validation of Hydrosat's Dual-Payload Mission for High-Resolution Land Surface Temperature Data

Authors: Tania Kleynhans, Joe McGlinchy, Scott Soenen, Sam Nissim, William Thomas, Yi Zhang, Ian McGreer, Joshua B. Fisher, Mya
Affiliations: Hydrosat
In August 2024, Hydrosat launched its first pathfinder mission, VanZyl-1, on an ESPA-class SmallSat carrying a dual-sensor payload. The primary instrument, a Longwave Infrared Imaging (LIRI) system, offers a ground sampling distance of 70 meters, complemented by a Visible to Near-Infrared Imaging (VIRI) multispectral sensor with a 30-meter ground sampling distance. The LIRI payload features advanced thermal control and calibration subsystems to ensure high-quality thermal data acquisition. The mission’s primary focus is on the radiometric performance assessment of both the LIRI and VIRI sensors. This includes Level-1 radiance (L1) and Level-2 land surface temperature (L2) data products from the LIRI sensor, and Level-1 radiance (L1) and Level-2 surface reflectance (L2) data products from the VIRI sensor. LIRI calibration is achieved via an onboard blackbody source. Due to data rate constraints, the application of a two-point Non-Uniformity Correction (NUC) and time-delay integration (TDI) occurs on orbit, prior to ground-based radiometric calibration. Validation of the LIRI data employs ocean buoy bulk temperatures converted to skin temperature and then, via radiative transfer modelling, to band-effective at-sensor radiance. The land surface temperature product is validated with both ocean buoys and the Surface Radiation Budget (SURFRAD) network’s uniform land calibration sites. The VIRI sensor underwent absolute radiometric calibration pre-launch. On-orbit validation uses cross-calibration over pseudo-invariant calibration sites. VIRI’s spectral band similarity to Sentinel-2 facilitates robust validation methodologies. In addition, data from the Radiometric Calibration Network (RadCalNet) instrumented sites are used for both at-sensor radiometric calibration and surface reflectance product verification. This mission represents a significant step in Hydrosat’s goal of delivering high-resolution, daily land surface temperature (LST) data.
VanZyl-1 paves the way for Hydrosat's planned 16-satellite constellation, enabling global efforts in precision agriculture by providing actionable data to optimize water use and improve food production. This effort aligns with global initiatives to address food security challenges and foster sustainable agricultural practices.
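The two-point Non-Uniformity Correction mentioned in the abstract fits a per-pixel gain and offset from two uniform reference views at known radiances. A minimal sketch, with invented detector counts and radiances (not the VanZyl-1 processing chain):

```python
import numpy as np

def two_point_nuc(cold_frame, hot_frame, L_cold, L_hot):
    """Per-pixel gain/offset from two uniform blackbody reference frames
    at known radiances L_cold and L_hot."""
    gain = (L_hot - L_cold) / (hot_frame - cold_frame)
    offset = L_cold - gain * cold_frame
    return gain, offset

def apply_nuc(frame, gain, offset):
    """Map raw counts to radiance with the fitted per-pixel coefficients."""
    return gain * frame + offset

# hypothetical raw counts from a 2x2 detector viewing the two references
cold = np.array([[100.0, 110.0], [90.0, 105.0]])
hot = np.array([[150.0, 165.0], [135.0, 157.0]])
gain, offset = two_point_nuc(cold, hot, L_cold=1.0, L_hot=2.0)
# by construction, the corrected reference frames come out spatially uniform
```

This illustrates why two uniform references suffice: each pixel's linear response has exactly two unknowns, so the two views determine them completely.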
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: Assessment of third-party SAR missions in the framework of the Earthnet Data Assessment Project (EDAP+)

Authors: Andrea Recchia, Laura Fioretti, Juval Cohen, Jorge Ruiz, Kevin Halsall, Leonardo De Laurentiis, Dr. Clement Albinet
Affiliations: Aresys, FMI, TELESPAZIO UK, ESA
The Earthnet Data Assessment Project (EDAP) is an activity responsible for assessing the quality and suitability of candidate missions being considered for the Earthnet Third Party Missions (TPM). For over 40 years ESA's Earthnet Programme has played a significant role as part of ESA's mandatory activities, providing the framework for integrating non-ESA missions, i.e. Third Party Missions, into the overall ESA Earth Observation (EO) strategy. Complementary to ESA-owned EO missions, the programme allows European users access to a large portfolio of TPM data and is particularly important for promoting the international use of EO data. The key objective of ESA's EDAP is to take full advantage of the increased range of available data from non-ESA operated missions and to perform an early data assessment for various missions that fall into one of the following instrument domains: • Optical missions • SAR missions • Atmospheric missions • AIS & RF missions. The present contribution focuses on the assessment of the data quality of third-party SAR missions. The early data assessment is intended to provide some indication of the potential of each existing mission to remain a TPM. The mission quality assessment is based on specific guidelines and covers the following aspects: • Data Provider Documentation Review: the assessment covers product information, metrology and product generation topics. The goal of this assessment is to evaluate the quality of the documentation provided to users in terms of product formats, generation and calibration, and the availability and accessibility of the SAR products. • Independent validation of the data quality by analyzing ad hoc datasets of the third-party missions over calibration sites (e.g., point target calibration sites or rainforest) in order to verify the overall data quality in terms of Impulse Response Function characteristics, resolution, radiometric calibration, geolocation accuracy and noise level.
The results of the performed validation are documented in dedicated Technical Notes (TNs) published on the EDAP website (https://earth.esa.int/eogateway/activities/edap/sar-missions). The published TNs include the so-called Mission Quality Assessment Matrix, summarizing in compact form the results of the validation activities. The first EDAP project was active between 2018 and 2021 and enabled assessment of the following third-party SAR missions: • SAOCOM • ICEYE (X4-X7) • Capella • PAZ. In July 2022, the continuation of the EDAP project (called EDAP+) started with a foreseen duration of two years (with a further extension under evaluation). In the framework of the new project, the following SAR missions have been assessed or are currently under assessment: • NovaSAR: a small SAR mission from the cooperation of SSTL (Surrey Satellite Technology Ltd.) and Airbus DS UK, funded by the UK Government via the UKSA (UK Space Agency). It is designed for low-cost programmes and optimized for shared launch opportunities. The S-band (3.1-3.3 GHz, wavelength of ~10 cm) antenna is a micro-strip patch phased array of ~3 m x 1 m in size. NovaSAR-1 was launched in 2018 and operates in a sun-synchronous orbit at 580 km. The SAR instrument can be operated in Stripmap and ScanSAR mode with different combinations of coverage and resolution. • ICEYE: an X-band Synthetic Aperture Radar (SAR) satellite constellation which is continuing to grow its capacity in specialized orbital planes designed to provide persistent monitoring capabilities and a high-resolution view of the Earth's surface. The assessment of ICEYE products for EDAP+ includes new satellites recently launched (from X8 to X13) and extends the assessment to ScanSAR and interferometric products. • EOS-04: also known as RISAT-1A, operated by the Indian space agency (ISRO) and launched in 2022. It is intended as a follow-on mission to RISAT-1, launched in 2012, and is expected to have its twin satellite, RISAT-1B, launched by 2024. The EOS-04 satellite operates in the C-band and can acquire in Stripmap, ScanSAR and potentially Spotlight modes, with spatial resolutions ranging from 3 m up to 50 m. Due to its characteristics, and to the ESA Sentinel-1B failure that occurred in 2021, it is evaluated as a potential backup candidate for the Sentinel-1A mission. The contribution, besides a summary of the achievements of the EDAP+ project and of the ongoing activities, will focus on the results of the EOS-04 and ICEYE data assessments and will present an open-source tool for the quality assessment of SAR data. The SAR Calibration Toolbox (SCT) has been developed in the framework of the ESA-funded EDAP+ and SAR Mission Performance Cluster (MPC) projects, with the goal of making it an open-source tool for multi-mission, general-purpose SAR data quality assessment. The tool has been designed to allow easy integration of new missions and new Cal/Val analyses and was released to the public in July 2024. It can be found at the following GitHub page: https://github.com/aresys-srl. The Git repository includes a suite of unit tests covering the different modules and a set of E2E tests. It also includes full online documentation guiding users through the installation and usage of SCT. The SAR mission L1 products currently supported include Sentinel-1, ICEYE, SAOCOM, NovaSAR and EOS-04. New missions will be supported in the future.
The SCT tool implements the following analyses: • Point target analysis: geometric resolution, IRF parameter assessment, absolute radiometric calibration and geolocation accuracy • Distributed target analysis: extraction of radiometric profiles for the assessment of relative calibration over rainforest and of the thermal noise level (NESZ) over low-backscatter areas • Interferometric data analysis: assessment of the interferometric coherence from an input interferometric product or from two co-registered SLC products. The presentation will focus on the general assessment framework and on the results obtained during the ICEYE assessment.
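As an illustration of the first of these analyses, the geometric resolution in a point-target analysis is commonly measured as the -3 dB width of the impulse response function (IRF) main lobe. The sketch below applies this to a synthetic sinc-shaped IRF; it is a simplified illustration of the principle, not SCT's actual implementation.

```python
import numpy as np

def resolution_3db(x, p_db):
    """-3 dB main-lobe width of an IRF cut given its power profile in dB."""
    peak = int(np.argmax(p_db))
    level = p_db[peak] - 3.0
    # scan outward from the peak to the first crossing on each side,
    # then interpolate linearly between the bracketing samples
    i = peak
    while p_db[i] > level:
        i -= 1
    left = x[i] + (level - p_db[i]) * (x[i + 1] - x[i]) / (p_db[i + 1] - p_db[i])
    j = peak
    while p_db[j] > level:
        j += 1
    right = x[j - 1] + (level - p_db[j - 1]) * (x[j] - x[j - 1]) / (p_db[j] - p_db[j - 1])
    return right - left

# synthetic IRF: a sinc with first nulls at +/-1 sample spacing
x = np.linspace(-2.0, 2.0, 4001)
p_db = 20.0 * np.log10(np.abs(np.sinc(x)) + 1e-12)
res = resolution_3db(x, p_db)  # ~0.885 for an unweighted sinc
```

Scanning outward from the peak (rather than interpolating over the whole profile) keeps the measurement inside the main lobe, so sidelobes and nulls cannot corrupt the crossing search.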
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: The SARCalNet database and website

Authors: Muriel Pinheiro, Antonio Valentino, Paolo Castracane
Affiliations: ESA/ESRIN, Starion
SARCalNet is an initiative of the SAR subgroup of the Working Group on Calibration and Validation (WGCV) of the Committee on Earth Observation Satellites (CEOS). Conception of SARCalNet began in 2020, building on the longstanding database of calibration targets contributed to this same group. The idea of SARCalNet is to evolve the CEOS target database concept and provide a curated, standard set of information on existing SAR calibration targets and sites, both natural and artificial. This should facilitate joint calibration and performance comparisons between sensors, as well as post-launch Cal/Val of any SAR sensor operating at typical SAR frequencies and polarizations. SARCalNet also aims to provide calibration datasets that were used either to calibrate or to monitor the calibration and performance of specific sensors, and to compile reference tools and resources useful for SAR Cal/Val. To build up the information available in the portal, the following actions have been or are being carried out by the SARCalNet working group: 1) Definition of the submission templates and submission procedures to ensure standard Cal/Val information is provided for each site. Calibration targets and sites are submitted to SARCalNet by site/target owners via the website, following a dedicated procedure defined by the group. Each submission will be curated by the SARCalNet working group using a procedure currently being consolidated. 2) A document on image analysis is being prepared and shall promote the use of common standards by expert and non-expert users. The document intends to provide a clear description of the recommended methodology(ies) used to measure the relevant quality parameters, including details of all the corrections applied to compensate for known errors.
3) Documents defining the specific requirements for artificial and natural targets, according to their intended calibration use, are being written and will be published on the website. In the context of SARCalNet, specific information is expected for each type of target. For example: a. For natural targets or targets of opportunity, the expected information includes, inter alia: • Detailed and specific locations and their rationale as a calibration target. • Examples of SAR backscatter measured at different frequency bands, polarizations, and incidence angles by different instruments, preferably in the form of time series. • Methods used for calculating the time series, either as documents or as Python tools. b. For artificial passive targets, the expected information includes, inter alia: • Target characterization, e.g., expected RCS with associated accuracy. • Frequency bands for which they are appropriate. • Precise location with associated accuracy, possibly to be updated over time. • Heading, elevation angle, orthogonality, flatness, and the accuracy of these measurements. • Regular updates regarding orientation. • Survey accuracy, date of survey, deployment date, any termination date, or gaps in deployment. • Characterization of the background SAR backscatter over time for the region, ideally as seen by different SAR sensors. • Pictures of the targets and the surrounding landscape, and any relevant manufacturing details. • Availability for realignment by request and a description of the request procedure. • Point of contact of the maintainers. c. For artificial active targets, the expected information includes, inter alia: • Frequency band for which they are designed. • Characteristics of the target, including relevant plots and measurements. • Stability, antenna pattern, gain. • Pictures of the targets and the surrounding landscape, and any relevant manufacturing details. • Availability for deployments by request and a description of the request procedure. • Point of contact of the maintainers.
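The expected metadata for an artificial passive target could be captured in a structured record along these lines. This is a hypothetical sketch only: the field names and values are illustrative and do not reproduce the actual SARCalNet submission template.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PassiveTargetRecord:
    """Illustrative submission record for an artificial passive target,
    mirroring the kinds of fields listed in the abstract."""
    target_id: str
    frequency_bands: list        # e.g. ["C", "X"]
    expected_rcs_dbm2: float     # expected radar cross-section, dBm^2
    rcs_accuracy_db: float
    latitude_deg: float
    longitude_deg: float
    position_accuracy_m: float
    heading_deg: float
    elevation_angle_deg: float
    survey_date: date
    deployment_date: date
    contact: str

# hypothetical corner-reflector entry
cr = PassiveTargetRecord(
    target_id="CR-042", frequency_bands=["C"], expected_rcs_dbm2=35.0,
    rcs_accuracy_db=0.2, latitude_deg=45.0, longitude_deg=7.6,
    position_accuracy_m=0.02, heading_deg=192.5, elevation_angle_deg=12.0,
    survey_date=date(2023, 5, 4), deployment_date=date(2023, 6, 1),
    contact="site-owner@example.org",
)
```

Encoding the template as a typed record is one way a curation pipeline could validate that every required field is present before review.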
Development of the current SARCalNet database and website started in 2022, in an activity funded by the European Space Agency (ESA) and in coordination with the CEOS SARCalNet working group. ESA has a long history of supporting standardized Cal/Val, as demonstrated, e.g., by its strong support and promotion of Fiducial Reference Measurement (FRM) concepts and their utilization in several fields of Earth Observation, and by its funding of past and upcoming dedicated activities aiming to consolidate and harmonize SAR Cal/Val procedures and guidelines. Moreover, ESA is highly committed to the development of calibration networks and actively contributed to the creation and maintenance of RadCalNet. The extension to other fields, including SAR through SARCalNet, is seen as a key element of the ESA strategy for approaching Cal/Val. The SARCalNet website/database is currently open to the public and already contains information on a few calibration sites, which are under review.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: GRASP-AirPhoton Multi-Angle Polarimeters (GRASP-GAPMAP) atmospheric data: EDAP+ (Earthnet Data Assessment Project) evaluation

Authors: Sabrina Pinori, Chloe H Martella, Gabriele Mevi, David Fuertes, Oleg Dubovik, Leonardo De Laurentiis, Dr. Clement
Affiliations: Esa
The Earthnet Data Assessment Project (EDAP+) is an activity responsible for assessing the quality and suitability of candidate missions being considered for the Earthnet Third Party Missions (TPM) and Copernicus Contributing Missions (CCM). For over 40 years ESA's Earthnet Programme has played a significant role as part of ESA's mandatory activities, providing the framework for integrating Third Party Missions (i.e. non-ESA missions) into the overall ESA Earth Observation (EO) strategy. Complementary to ESA-owned EO missions, the programme allows European users access to a large portfolio of TPM data and is particularly important for promoting the international use of EO data. The key objective of ESA's EDAP+ activity is to take full advantage of the increased range of available data from non-ESA operated missions and to perform an early data assessment in various instrument domains: optical, SAR and atmospheric. This contribution focuses on the assessment of the GRASP GAPMAP (GRASP-AirPhoton Multi-Angle Polarimeters) data quality. The mission quality assessment is based on specific guidelines and covers the following aspects: • Data Provider Documentation Review: the goal of this assessment is to evaluate the quality of the documentation provided to users in terms of product formats, generation and calibration, and the availability and accessibility of the products. • Independent validation of the data quality by comparing and analysing ad hoc datasets from the mission against ground-based measurements and other satellite data. In this work we present the results of the EDAP+ scientific assessment of the atmospheric parameters of the first GAPMAP demonstrator (GAPMAP-0). It was launched on board the ADLER-2 satellite in April 2023 and re-entered the Earth's atmosphere in mid-August 2024.
The GAPMAP constellation, currently planned to comprise 10 satellites, will observe each pixel of the Earth at 4 wavelengths, 60 angles and in multiple polarization states, providing high-quality calibrated data that allow for detailed characterization of the microphysical properties of aerosols, cloud water and ice particles in the atmosphere. The instrument provides snapshots ca. 1000 km across, with a pixel resolution of 5 km. The GRASP (Generalized Retrieval of Atmosphere and Surface Properties) algorithm is used to derive the satellite products, which are compared to reference datasets such as AERONET or BAQUNIN. We present the results of the analysis derived in synergy with the satellite data provider, working together to improve the data quality and the accessibility of the data documentation.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: System For The Consolidation of L2 Products of The PRISMA-SG Mission – The COOL Project

Authors: Federico Santini, Massimo Musacchio, Maria Teresa Melis, Stefano Andreucci, Marco Casu, Giovanni Battista De Giudici, Luisa Galasso, Camilla Gentili, Saham Mirzaei, Stefano Naitza, Salvatore Noli, Angelo Palombo, Simone Pascucci, Stefano Pignatti, Alessia Scalabrini, Lorenzo Sedda, Malvina Silvestri
Affiliations: CNR-IMAA - Institute of Methodologies for Environmental Analysis of the National Research Council, INGV - National Institute of Geophysics and Volcanology, UNICA - University of Cagliari
Earth Observation represents an irreplaceable resource that can contribute to the pursuit of multiple strategic, social and economic goals. The COOL project was selected in the framework of the Call for Ideas ‘Scientific Activities to Support the Development of Earth Observation Missions’, promoted by the Italian Space Agency (ASI), and is the result of a collaboration between the Institute of Methodologies for Environmental Analysis of the National Research Council (CNR-IMAA), the National Institute of Geophysics and Volcanology (INGV) and the University of Cagliari (UNICA). The project's main challenge is to develop an advanced, modular and scalable system capable of supporting the consolidation of the scientific readiness level (SRL) of the PRISMA Second Generation (PRISMA-SG) mission and its related L2 products, in view of the specific characteristics of its new hyperspectral sensor and with particular reference to off-nadir acquisition geometries. The outcomes of the COOL project consist of: a) an L2-product processor, based on the most advanced methods of radiative transfer physics in the Earth/atmosphere system and specialised for the PRISMA-SG characteristics, including aspects related to the off-nadir view; b) a top-of-atmosphere (TOA) radiance simulator for different types of soil/raw materials; c) the characterization of two mineralogical/raw-material sites (Sale ‘e Porcus salty pond, Oristano, and Campo Pisano, Carbonia-Iglesias, Italy) as references for simulation and validation activities. Concerning the mineralogical/raw-material sites, the general mineralogical composition of Campo Pisano is known, but dedicated spectral signature surveys and sample collection will be conducted to calibrate and refine the signals. At Sale ‘e Porcus, feasibility studies are being carried out to establish a site, unique and original in the European context, for hyperspectral Cal/Val activities.
Additionally, the Sale ‘e Porcus salty pond site has been chosen as the venue of the International Remote Sensing Summer School, organised by INGV and UNICA with the support of the Italian Society of Remote Sensing (AIT) and ASI, to study its spectral setting in depth. On both sites, simultaneous EnMAP and PRISMA images were acquired for two consecutive years. Regarding the L2 processor, advanced algorithms are implemented to optimise the atmospheric and topographic correction procedures for the generation of reflectance and water vapour maps. The atmospheric correction (AC) process implemented in the processor includes several steps, such as topographic harmonization, adjacency effect correction, and compensation for off-nadir viewing angles. The algorithms developed within the COOL project, including the correction for off-nadir viewing, are regularly integrated into the ImaACor atmospheric correction tool developed by CNR-IMAA and part of the ACIX-III Land Atmospheric Correction Inter-comparison eXercise. A sensitivity study is underway to assess the impact of the off-nadir view on the at-sensor signal and to estimate the residual errors expected after atmospheric correction of the PRISMA-SG data. The Level 2 processor developed as part of the COOL project has been applied to EnMAP and PRISMA images, focusing particularly on scenes acquired in geographically and topographically complex areas. The results demonstrate a significant improvement in data quality, with greater uniformity and spectral consistency even under variable solar illumination conditions. The effectiveness of these corrections has been verified by comparing the obtained reflectances with in situ measurements from RadCalNet sites, such as Gobabeb, Namibia, and La Crau, France. These comparisons showed good agreement (RMS < 5%), confirming the reliability of the processor.
Preliminary results of the sensitivity studies suggest that the impact of off-nadir viewing angles strongly depends on the illumination geometry. For large off-nadir angles, a pronounced dependence on the solar zenith angle (SZA) is observed, whereas for relatively low angles (less than 10°), the impact is generally limited and predominantly affects the initial portion of the spectrum. These effects have been confirmed through the analysis of EnMAP images. In parallel, specific scenarios are being defined to further validate the data production processes for Level 2 using the calibration and validation (CAL/VAL) sites identified within the project. These scenarios represent a crucial step in ensuring the robustness and reproducibility of the implemented procedures.
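As a minimal illustration of the topographic harmonization step mentioned above, the classic cosine correction rescales surface reflectance by the ratio of the solar zenith cosine to the local illumination cosine on a tilted facet. This is a textbook sketch only: the function name and the clipping threshold are hypothetical, and the COOL/ImaACor processor implements far more advanced radiative transfer physics.

```python
import numpy as np

def cosine_topographic_correction(reflectance, sza_deg, slope_deg, aspect_deg, saa_deg):
    """Classic cosine topographic correction: rho_corr = rho * cos(SZA) / cos(i),
    where i is the local solar incidence angle on the tilted surface.
    Illustrative only -- not the COOL/ImaACor algorithm."""
    sza, slope = np.radians(sza_deg), np.radians(slope_deg)
    rel_az = np.radians(aspect_deg - saa_deg)
    # Local illumination: cos(i) = cos(SZA)cos(slope) + sin(SZA)sin(slope)cos(aspect - SAA)
    cos_i = np.cos(sza) * np.cos(slope) + np.sin(sza) * np.sin(slope) * np.cos(rel_az)
    cos_i = np.clip(cos_i, 0.1, 1.0)  # avoid blow-up on shadowed or grazing facets
    return reflectance * np.cos(sza) / cos_i
```

On flat terrain the factor reduces to one, while a sun-facing slope (over-illuminated, hence too bright) is corrected downwards.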
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone R-S)

Poster: PRISCAV: next developments of the Scientific CAL/VAL of PRISMA mission

Authors: Lorenzo Genesio, Dr.ssa Federica Braga, Mariano Bresciani, Dr. Federico Carotenuto, Sergio Cogliati, Simone Colella, Roberto Colombo, Dr.ssa Claudia Giardino, Dr. Beniamino Gioli, Dr. Daniela Meloni, Dr. Monica Pepe, Dr. Simone Pascucci, Dr. Stefano Pignatti, Biagio Di Mauro, Patrizia Sacco, Dr. Giuseppe Satalino, Dr. Jose Luis Pancorbo, Dr. Giandomenico De Luca, Franco Miglietta
Affiliations: CNR-IBE, CNR-ISMAR, CNR-IREA, University of Milan - Bicocca (UNIMIB), Italian Space Agency (ASI), ENEA, CNR-IMAA, CNR-ISP
The PRecursore IperSpettrale della Missione Applicativa (PRISMA) satellite was launched by the Italian Space Agency (ASI) in 2019 and represents the first mission of a new generation of European hyperspectral spaceborne sensors. The key idea behind PRISMA is to demonstrate the possibility of leveraging hyperspectral data for the creation of new environmental products, ranging from studies of surface biology, agriculture and forestry up to raw material exploitation and mining, and for the monitoring of snow and aquatic environments. To this end, the PRISMA satellite is equipped with a hyperspectral imager covering the spectral range 400-2500 nm with 237 spectral bands and a panchromatic camera operating at 400-750 nm. Radiometric accuracy is key to obtaining reliable hyperspectral reflectance and, in this context, ASI is cooperating with CNR in a joint calibration/validation (CAL/VAL) effort (PRISCAV: Scientific CAL/VAL for PRISMA mission) on a series of selected target sites representative of different land cover types. Here we present the results of a comparison made on target sites between ground fiducial reference measurements, airborne measurements and both PRISMA radiance (L1) and reflectance (L2) products generated by ASI operational pipelines. Results demonstrate that the discrepancies in radiance and reflectance are generally lower than 5% across the spectrum. The match-up enabled the assessment of PRISMA performance in different scenarios and the evaluation of the radiometric calibration, the image quality and the atmospheric correction set-up. Based on the results of 5 years of validation campaigns, critical areas were identified, enabling the selection of further CAL/VAL sites to be monitored continuously by tasking background PRISMA acquisitions.
In particular, 4 reference scenarios were selected: i) low-reflective surfaces in the VIS-NIR-SWIR (water with low particulate/chl content), ii) high-reflective surfaces in the NIR and low-reflective in the SWIR (vegetation), iii) high-reflective surfaces in the VIS and mid-reflective in the NIR (snow), iv) mid-reflective surfaces in the VIS-NIR and high-reflective in the SWIR (pseudo-invariant concrete surface).
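The sub-5% discrepancy figure quoted above can be reproduced conceptually as a band-wise relative RMS between a satellite-derived spectrum and a fiducial ground or airborne reference. The helper below is a hypothetical sketch of that summary statistic, not the PRISCAV processing chain:

```python
import numpy as np

def relative_rms_percent(satellite, reference):
    """Band-wise relative discrepancy between a satellite spectrum and a
    fiducial reference spectrum, summarised as an RMS percentage.
    Hypothetical helper for illustration only."""
    satellite, reference = np.asarray(satellite, float), np.asarray(reference, float)
    rel = (satellite - reference) / reference  # per-band relative error
    return 100.0 * np.sqrt(np.mean(rel ** 2))  # RMS in percent
```

A uniform 3% radiometric offset across all bands, for example, yields a relative RMS of 3%.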
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: A.07.01 - POSTER - Soil moisture

Over the past two decades, microwave remote sensing has made tremendous progress in providing robust estimates of surface soil moisture at different scales. From local to landscape scales, several field or aircraft experiments have been organised to improve our understanding of active and passive microwave soil moisture sensing, including the effects of soil roughness, vegetation, spatial heterogeneities, and topography. At continental scales, a series of passive and active microwave space sensors, including SMMR, AMSR and ERS/SCAT, provided information on surface soil moisture. Current investigations in L-band passive microwave with SMOS and SMAP, and in active microwave with the MetOp/ASCAT series and Sentinel-1, enable an accurate quantification of soil moisture at regional and global scales. Building on the legacy of these missions, operational programmes like Copernicus as well as novel developments will further enhance our capabilities to monitor soil moisture and will ensure continuity of multi-scale soil moisture measurements on climate scales.

At the same time, the field of metrology has received increasing attention from the Earth observation community, which has led to a growing awareness of the concept of error traceability and the necessity of well-characterized, so-called “Fiducial Reference Measurements” (FRMs). As a consequence, research has put a new focus on obtaining traceable error budgets for soil moisture products and on improved ground reference data.

We encourage submissions related to soil moisture ground and remote sensing, including:
- Global soil moisture estimation from coarse resolution active and passive sensors.
- High spatial resolution soil moisture estimation based on e.g. Sentinel observations, GNSS reflections, or using novel downscaling methods.
- Field experiments, theoretical advances in microwave modelling, and calibration/validation activities.
- Root zone soil moisture retrieval and soil moisture data assimilation in land surface models, hydrological models and in Numerical Weather Prediction models.
- Evaluation and trend analysis of soil moisture climate data records such as the ESA CCI soil moisture product as well as soil moisture from re-analysis.
- Inter-comparison and inter-validation between land surface models, remote sensing approaches and in-situ validation networks.
- Progress towards the estimation of SI-traceable uncertainty budgets including uncertainty characterization across scales.
- Application of satellite soil moisture products in scientific and operational disciplines.

Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: High Resolution Soil Moisture Estimation via GNSS-R and SAR Data Fusion With an Open Source Algorithm

Authors: Samuel Christelow, Paul Blunt, Stephen Grebby, Stuart Marsh
Affiliations: The University Of Nottingham
Understanding soil moisture at high temporal and spatial resolutions is key to effective natural hazard mitigation, hydrological modelling and agricultural planning. Remote sensing provides a wide-ranging and cost-effective solution to this problem. With accelerating climate change driving higher rates of droughts, wildfires and floods, there is an increasing need to get useful remote sensing data into the hands of those who need it. With many current and future users of remote sensing soil moisture data being governments and NGOs with limited budgets, an open-source approach is sensible and will maximise impact. For this reason, this work uses fully open-source algorithms and data sources. This work details the development of an open-source estimation algorithm and associated data outputs, using machine-learning-driven data fusion of spaceborne GNSS Reflectometry (GNSS-R) and Synthetic Aperture Radar (SAR) data to estimate volumetric surface soil moisture. This product was primarily developed with the intention of being more useful for climate hazard risk assessments, as the usefulness of existing spaceborne GNSS-R soil moisture products for this purpose is limited by their coarse spatial resolution. Despite this, GNSS-R remains extremely promising due to revisit times as low as a few hours on some missions, which is useful for monitoring rapidly changing hazards. SAR data was chosen as the other main input due to its complementary characteristics of very high spatial resolution but comparatively lower temporal resolution. Other inputs to the data fusion model include vegetation parameters and land cover type. Using a newly created grid architecture based on the Equal-Area Scalable Earth grid system, the two data types are used in combination to estimate soil moisture with a high spatial resolution at a given location, despite the dissimilar spatial resolutions of the inputs.
This allows the output estimations to take advantage of the favourable characteristics of both measurement types, as well as slightly different information from different parts of the EM spectrum. Several models were tested, with an Artificial Neural Network model found to provide the best soil moisture estimations when compared to the validation data sets. While current results focus on soil moisture estimation over several test sites, the open-source design of this project aims to encourage further work in deploying this model across the globe. This work uses the Cyclone GNSS (CYGNSS) UCAR/CU Soil Moisture product as the GNSS-R input and Sentinel-1 backscatter data as the SAR input. Validation data comes from the SMAP Enhanced L3 Global Daily Soil Moisture product.
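One practical step implied by the grid architecture described above is aggregating high-resolution SAR pixels into coarser cells so they can be paired with GNSS-R samples at a common resolution. The sketch below uses a simplified equal-angle grid as a stand-in for the EASE-based grid; the function name and cell size are illustrative, not the authors' implementation:

```python
import numpy as np

def bin_to_grid(lon, lat, values, cell_deg=0.09):
    """Average scattered high-resolution observations (e.g. Sentinel-1
    backscatter) onto a regular grid for pairing with coarser GNSS-R samples.
    Simplified equal-angle stand-in for an EASE-style grid."""
    ix = np.floor(np.asarray(lon) / cell_deg).astype(int)
    iy = np.floor(np.asarray(lat) / cell_deg).astype(int)
    cells = {}
    for x, y, v in zip(ix, iy, values):
        cells.setdefault((x, y), []).append(v)
    # cell index -> mean of all observations falling inside that cell
    return {k: float(np.mean(v)) for k, v in cells.items()}
```

After this step, each grid cell carries one averaged SAR feature that can be fed to the fusion model alongside the coincident GNSS-R observation.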
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Surface Soil Moisture Dynamics Across Stable Land Cover Zones In The Kruger National Park Using Multi-Source Remote Sensing and Precipitation Data

Authors: Gerrit Eisele, Anton Potapov, Marco Wolsza, Dr. Jussi Baade, Tercia Strydom, Prof. Dr. Christiane Schmullius
Affiliations: Department for Earth Observation, Friedrich-Schiller-University Jena, Department for Physical Geography, Friedrich-Schiller-University Jena, Scientific Services, South African National Parks
This work is part of an MSc seminar at the Friedrich-Schiller-University Jena in cooperation with South African institutions. Surface soil moisture is a critical component of hydrological and ecological systems. It directly influences aspects such as vegetation health, fire recovery, or water availability. In the face of climate change, the ability to monitor surface soil moisture at high spatial and temporal resolutions is becoming increasingly important. With precipitation patterns shifting and extreme weather events intensifying, understanding surface soil moisture dynamics is ever more important for assessing ecosystem resilience, predicting agricultural productivity, and managing water resources effectively. This study investigates surface soil moisture variability across areas of stable land cover in the Kruger National Park, South Africa, over a six-year period (2018–2023). We used a multi-method approach, integrating radar backscatter from Sentinel-1 Synthetic Aperture Radar (SAR), optical data from Sentinel-2, and precipitation datasets, like Climate Hazards InfraRed Precipitation with Station data (CHIRPS) and the Multi-Source Weighted-Ensemble Precipitation (MSWEP). Terrain attributes such as slope, flow accumulation, and the Topographic Wetness Index (TWI), derived from elevation data, are incorporated to account for their hydrological influences. Soil characteristics from the ISRIC Soil and Terrain (SOTER) database further provide context on water retention variability. Surface soil moisture estimates are derived using a combination of methods, including radar-based retrieval models, the surface moisture index (SurfMI), and a soil line feature space analysis based on Sentinel-2 Red-NIR bands. These are validated against SMAP (Soil Moisture Active Passive) data and in situ measurements. Challenges such as vegetation interference in radar data, the reliability of precipitation datasets, and the interplay of terrain factors are critically examined. 
By comparing and integrating these methods, we aim to identify strengths, limitations, and synergies of different approaches. This study aims to offer insights into optimizing soil moisture monitoring in semi-arid ecosystems, enhancing our understanding of seasonal dynamics, and informing strategies for climate resilience and resource management.
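The Red-NIR soil line analysis mentioned above can be sketched as fitting a bare-soil line in feature space and using each pixel's perpendicular distance from it as a relative wetness indicator (wetting darkens both bands, pulling pixels off the line). This is a conceptual illustration with a hypothetical helper, not the study's actual implementation:

```python
import numpy as np

def soil_line_distance(red, nir):
    """Fit the bare-soil line NIR = a*Red + b by least squares, then return
    each pixel's signed perpendicular distance from that line as a relative
    wetness proxy. Conceptual sketch only."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    a, b = np.polyfit(red, nir, 1)  # least-squares soil line
    # signed perpendicular distance from the line a*x - y + b = 0
    return (a * red - nir + b) / np.hypot(a, 1.0)
```

Pixels lying on the fitted soil line get a distance near zero; negative or positive offsets then rank pixels as relatively wetter or drier than the bare-soil reference.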
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Assessing the Impact of Updated Land Cover and Snow Assimilation on Soil Moisture and Land Surface Temperature: Simulations in Eurasia

Authors: Oscar Rojas, Jean-Christophe Calvet, Bertrand Bonan
Affiliations: CNRM, Université de Toulouse, Météo-France, CNRS
Understanding the role of land cover (LC) changes and data assimilation in improving the accuracy of soil moisture (SM) and land surface temperature (LST) simulations is critical for advancing land surface modeling and reanalysis systems. This study evaluates the integration of updated ESA CCI land cover data into the ISBA land surface model, implemented within the SURFEX (Surface externalisée) land surface modeling platform of Météo-France. Simulations were performed over Eurasia at a spatial resolution of 0.25° from 2010 to 2022, comparing results from a baseline setup using pre-existing LC information with a new setup that incorporates updated CCI-LC data through the ECOCLIMAP-SG tool. ECOCLIMAP-SG enables the ingestion of high-resolution (300m) LC maps, offering a more detailed representation of land surface processes. The evaluation focuses on ISBA open-loop simulations, which are compared against satellite observations from ESA CCI SM and LST products to assess the impact of updated LC data on soil moisture and land surface temperature simulations. These comparisons are used as benchmarks to quantify how changes in land cover representation influence the accuracy of simulated water and energy budgets. By aligning model outputs with satellite observations, the study investigates the potential of updated LC data to address uncertainties in land surface modeling and improve the quality of reanalyses. In addition to open-loop simulations, this study assesses the impact of assimilating Snow Water Equivalent (SWE) data from ESA CCI into the ISBA land surface model using the LDAS-Monde land data assimilation system. LDAS-Monde, integrated within the SURFEX platform, employs a sequential Kalman filter to assimilate satellite-based observations of SWE alongside vegetation-related variables such as Leaf Area Index (LAI). 
The inclusion of SWE assimilation provides a unique opportunity to evaluate its effect on model performance, particularly in regions where snow processes significantly influence soil moisture dynamics and land-atmosphere interactions. The combined evaluation addresses several key questions: How do uncertainties in land cover propagate to soil moisture and land surface temperature simulations? Can Earth observation data improve land surface reanalyses through better representation of land cover and snow processes? What is the impact of SWE assimilation on the accuracy of soil moisture and energy budgets? Results from this study provide valuable insights into the added value of updated LC data and SWE assimilation for improving land surface model simulations and reanalysis products. The findings highlight the importance of integrating high-resolution Earth observation data into land surface models to refine predictions of soil moisture and land surface temperature, thereby enhancing our understanding of water and energy fluxes in response to changing land cover and climate conditions. These advancements are critical for addressing global challenges related to water resources, agriculture, and climate adaptation.
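The sequential Kalman filter used by LDAS-Monde can be illustrated in its simplest scalar form, where a modelled background state (e.g. SWE) is blended with a satellite observation according to their error variances. This is the generic textbook update with an identity observation operator, not the LDAS-Monde code:

```python
def kalman_update(x_b, p_b, y, r):
    """One scalar Kalman analysis step (H = identity, for illustration).
    x_b: background state (e.g. modelled SWE), p_b: its error variance,
    y: observation (e.g. satellite SWE), r: observation error variance."""
    k = p_b / (p_b + r)        # Kalman gain
    x_a = x_b + k * (y - x_b)  # analysis state
    p_a = (1.0 - k) * p_b      # analysis error variance
    return x_a, p_a
```

With equal background and observation variances the analysis lands halfway between model and observation, and its variance is halved, which is the intuition behind the assimilation improving the simulated snow and soil moisture states.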
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Model-based Tensor Decomposition for Soil Moisture Estimation from Polarimetric SAR Time Series

Authors: Nikita Basargin, Alberto Alonso-González, Irena Hajnsek
Affiliations: Microwaves and Radar Institute, German Aerospace Center (DLR), School of Life Sciences, Technical University of Munich (TUM), Munich School for Data Science (MUDS), Signal Theory and Communications Department, Universitat Politécnica de Catalunya (UPC), Institute of Environmental Engineering, ETH Zürich
Soil moisture is an essential climate variable that is important for hydrology, climate modeling, and agriculture. Currently, most operational products provide a resolution on the order of kilometers [1], which is not sufficient for some applications, such as precision agriculture. To close this gap, this work focuses on data obtained with Synthetic Aperture Radar (SAR) sensors. SAR provides high-resolution and weather-independent information about the dielectric properties of the Earth's surface and is therefore an important data source for high-resolution soil moisture products. Future missions like ROSE-L will offer short revisit times combined with a large swath, highlighting the importance of SAR for continuous soil moisture monitoring. In recent years, several approaches to retrieve soil moisture from SAR have been proposed, including short-term backscatter change detection [2], differential interferometry [3], and polarimetry [4]. One of the main challenges is the presence of vegetation or crops covering the ground. While longer wavelengths (e.g., L-band) penetrate deeper and see the ground through the vegetation, the presence of plants cannot be ignored, as they strongly influence the signal even at L-band. In this work, we focus on polarimetric techniques from [4] that allow the separation of the signal into ground and vegetation contributions based on differences in the scattering mechanisms. The data is approximated by a physical model composed of three components representing the ground, the vegetation, and the dihedral scattering (interactions of ground and vegetation). The inversion from a single polarimetric acquisition can be ambiguous since there are more model parameters than polarimetric observables. Existing inversion schemes typically require strong assumptions, use simpler models, or set some parameters to fixed values to resolve the ambiguities.
This limits the validity range of the model, as it is not able to accurately describe the data in some cases. Instead of using a simpler model, we address the inversion ambiguities by expanding the observation space and adding an additional data dimension, for example, a time series of acquisitions. Instead of a single polarimetric matrix, we work with tensors representing stacks of matrices. Joint inversion of several matrices reduces the ambiguities and allows the use of models with more parameters to accurately model the surface and vegetation, extending the validity range of the model. The inversion is implemented in PyTorch and formulated as an optimization problem that is iteratively solved by gradient descent using automatic differentiation. We evaluate the proposed method on high-resolution airborne F-SAR data obtained during experimental campaigns including CROPEX 2014 and HTERRA 2022. The model characterizes the dominant scattering mechanisms and provides soil moisture estimates in more regions compared to a simpler X-Bragg model. The model performance depends on the crop type and the phenological stage, indicating that the model can benefit from additional data dimensions like SAR interferometry to improve the ground and vegetation separation. References: [1] T. Schmidt, M. Schrön, Z. Li, et al., "Comprehensive quality assessment of satellite- and model-based soil moisture products against the COSMOS network in Germany," Remote Sensing of Environment, 2024. [2] A. Balenzano, F. Mattia, G. Satalino, and M. W. J. Davidson, "Dense temporal series of C- and L-band SAR data for soil moisture retrieval over agricultural crops," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2011. [3] F. De Zan, A. Parizzi, P. Prats-Iraola, and P. López-Dekker, "A SAR interferometric model for soil moisture," IEEE Transactions on Geoscience and Remote Sensing, 2013. [4] I. Hajnsek, T. Jagdhuber, H. Schön, and K. P. Papathanassiou, "Potential of estimating soil moisture under vegetation cover by means of PolSAR," IEEE Transactions on Geoscience and Remote Sensing, 2009.
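The gradient-based inversion strategy described above can be illustrated with a toy non-negative decomposition: fixed component signatures (stand-ins for the surface, dihedral and volume terms) are fitted to observed powers by projected gradient descent. All names and values here are illustrative; the authors' implementation applies PyTorch automatic differentiation to full polarimetric tensor stacks:

```python
import numpy as np

def decompose(power_obs, components, steps=2000, lr=0.05):
    """Fit non-negative weights so that a sum of fixed component signatures
    matches observed powers, via projected gradient descent on a
    least-squares cost. Toy analogue of the model-based tensor inversion."""
    power_obs = np.asarray(power_obs, float)
    M = np.asarray(components, float)  # (n_components, n_observables)
    w = np.full(M.shape[0], 0.1)       # initial component weights
    for _ in range(steps):
        resid = w @ M - power_obs          # model minus data
        grad = 2.0 * (M @ resid)           # analytic least-squares gradient
        w = np.maximum(w - lr * grad, 0.0) # gradient step + non-negativity
    return w
```

Adding more observables per unknown (e.g. a time series of matrices instead of a single acquisition) makes the corresponding least-squares system better conditioned, which is the intuition behind the reduced ambiguity claimed in the abstract.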
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Neural Spatiotemporal Interpolation: A Scalable Deep Learning Framework for Filling Gaps in GNSS-R Soil Moisture Data

Authors: Oto Alves, Wolfgang Wagner, Dr Emanuele Santi, Dr Martin Unwin, Gabrielle Marigold
Affiliations: TU Wien, Institute of Applied Physics, National Research Council (IFAC-CNR), Surrey Satellite Technology Ltd. (SSTL)
As key Earth observation satellite missions such as SMAP and SMOS approach the end of their operational lifetimes, global soil moisture (SM) monitoring efforts face a significant reduction in data collection capacity. In this context, Global Navigation Satellite System-Reflectometry (GNSS-R) has emerged as a cost-effective option to complement and enhance the remaining observation tools, thus ensuring the continuity of global-scale SM monitoring. This innovative technology, exemplified by satellite missions like NASA's Cyclone GNSS (CYGNSS) and ESA's upcoming HydroGNSS, captures L-band Global Navigation Satellite System (GNSS) signals that are reflected off the Earth's surface. These signals are sensitive to soil water content, thus enabling the retrieval of SM data. GNSS-R-derived SM data products often contain significant spatial and temporal data gaps, which can limit their utility for applications that require daily or sub-daily sampling rates over a specific location. These gaps can be filled by employing interpolation techniques that estimate missing soil moisture values across space and time. The state-of-the-art interpolation method for GNSS-R data, Previously-Observed Behavior Interpolation (POBI), relies on an ensemble of pixel-specific regression models. While effective, this approach is computationally intensive due to the storage of redundant information across multiple models, especially when using high-resolution geospatial grids. To mitigate this limitation, we propose a novel deep learning approach called Neural Spatiotemporal Interpolation (NSTI). NSTI takes advantage of deep learning's ability to compress domain knowledge into a single, compact model, mitigating the redundancies inherent to POBI's ensemble approach, where each model must independently learn both the overarching SM dynamics and the local phenomena specific to its corresponding pixel.
Furthermore, by incorporating neural network architectures such as convolutional neural networks (CNNs), which are particularly well-suited for handling spatial information, NSTI can efficiently model the complex spatiotemporal dynamics present in soil moisture data products. To evaluate the effectiveness of NSTI in real-world scenarios, we applied both POBI and NSTI to GNSS-R soil moisture data derived from the CYGNSS constellation, framing the interpolation of spatiotemporal data gaps as a regression problem. The regression error of both algorithms was validated against independent reference datasets, enabling a direct comparison of their performance. Our results show that NSTI achieves an interpolation performance comparable to that of POBI while significantly reducing the number of required model parameters and the corresponding storage demands. This positions NSTI as a scalable and resource-efficient alternative for GNSS-R SM data interpolation, addressing computational challenges associated with existing methods.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Generation of a Continuous Bias-free Land Surface Reanalysis Dataset over EURO-CORDEX for 2002 to 2022

Authors: Haojin Zhao, Mikael Kaandorp, Harrie-Jan Hendricks Franssen
Affiliations: Forschungszentrum Jülich
The characterization of soil moisture (SM) and evapotranspiration (ET) at high spatial and temporal resolution and quality is of great importance for water resources assessment, water resources management and atmospheric modelling. Integrating satellite-based observations of SM and in situ ET measurements offers useful information to reduce land surface model parameter uncertainty and leads to a more accurate characterization of the terrestrial states and fluxes of land surface models (LSMs). Here, we apply the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) framework to estimate sensitive vegetation and soil model parameters and states by assimilating the Soil Moisture Active Passive (SMAP) SM product and ET measured at Integrated Carbon Observation System (ICOS) sites into a land surface model (Community Land Model version 5.0). We estimate hard-coded pedotransfer function parameters that link soil texture and soil hydraulic parameters, as well as key plant physiological parameters that govern processes such as photosynthesis, stomatal conductance, and nutrient use. The estimated parameters are used to generate SM and ET at 12 km spatial and daily temporal resolution over EURO-CORDEX for 2002 to 2022. The performance of the SM dataset is assessed against the International Soil Moisture Network (ISMN) database over 192 sites. The Pearson r increases from 0.074 to 0.077, the median bias decreases from 0.105 cm³/cm³ (default parameters) to 0.046 cm³/cm³ (updated parameters) and the RMSE decreases from 0.133 cm³/cm³ to 0.094 cm³/cm³. Notably, this bias reduction demonstrates that the updated parameters can compensate for model errors, such as the overestimation of SM that is frequently reported in the literature. Furthermore, these improvements illustrate that the dataset effectively captures daily, inter-annual, and intra-seasonal patterns.
This dataset offers long-term, daily surface SM and ET data at high spatiotemporal resolution, offering significant benefits for hydrological applications and enhancing insights into long-term trend analysis and land-atmosphere interactions under climate change.
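The three validation scores reported above (Pearson r, median bias, RMSE) follow standard definitions, sketched here for a pair of simulated and observed SM series. This is an illustrative helper, not the study's evaluation code:

```python
import numpy as np

def validation_metrics(sim, obs):
    """Pearson r, median bias and RMSE between simulated and observed
    soil moisture series (units: cm3/cm3). Standard definitions."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    bias = np.median(sim - obs)                # median bias, as in the abstract
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    return r, bias, rmse
```

A uniform wet offset of 0.1 cm³/cm³, for instance, leaves r at 1.0 while bias and RMSE both equal 0.1, showing why all three scores are reported together.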
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Physics-Informed AI for Soil Moisture Retrieval: a Deep Neural Network Approach Enhanced by Radiative Transfer Model

Authors: Lorenzo Giuliano Papale, Xiaodong Huang, Marco Lavalle, Fabio Del Frate, Heresh Fattahi, Xiaolan Xu
Affiliations: Tor Vergata University Of Rome, Jet Propulsion Laboratory, California Institute of Technology
Soil moisture is a crucial parameter in hydrology and agronomy applications [1,2], playing an important role in water resource and irrigation management. Synthetic Aperture Radar (SAR) data have been extensively used in several studies to retrieve soil moisture across various land covers, from bare to vegetated soil, including cropland, grassland, and forest [3-4]. Several scattering models have been developed to retrieve soil moisture beneath vegetation [5-7]. When using first-order radiative transfer (RT), backscattering models require a complete description of the vegetation structure in the scene. Thus, extensively used semi-physical models, such as the water cloud model (WCM), rely on vegetation information (e.g., vegetation water content and biomass) derived from optical vegetation indices such as LAI and NDVI. However, such indices provide only a partial description of vegetation structure, leading to soil moisture retrieval errors that depend on vegetation structure variability across different land covers. In this context, Deep Learning (DL) techniques can be used synergistically with physics-based models to optimize their parameters and compensate for the lack of a vegetation structure description. In this study, we propose a methodology for accurate and physically meaningful soil moisture retrieval starting from UAVSAR acquisitions at HV polarization and multi-source Remote Sensing (RS) data, e.g., vegetation water content, soil texture, and weather information. It is worth mentioning that while the HV polarization is used as input for the AI model, the HH polarization is employed, as described later, for the model optimization task.
Specifically, the DL algorithm, i.e., a feed-forward Residual Deep Network (ResNet), is trained to optimize a first-order radiative transfer model by estimating its parameters, i.e., the four scattering contributions (surface, double, volume, and triple scattering) and the attenuation term, and by approximating the complicated form of the phase matrix. This optimization is performed with the Adam optimizer and the Huber loss cost function to minimize the difference between the measured total backscatter at HH polarization and the estimated one, calculated as the sum of the optimized attenuated scattering components. Then, the estimated optimal model parameters are fed to a Random Forest algorithm designed to retrieve the volumetric soil moisture. To guarantee the physical validity of the estimation, a physical constraint, derived from RT model simulations, is imposed on the ResNet by customizing the loss function. Ultimately, the methodology is validated using the SMAPVEX12 soil moisture measurements and UAVSAR L-band acquisitions with an aggregated resolution of 100 meters. Results show that the estimated scattering components are consistent with the physical simulations, with dominating surface and volume scattering components and rather weak triple scattering. Moreover, the retrieved backscatter coefficient at HH polarization agrees well with the UAVSAR acquisitions (~3 dB). Lastly, the estimated soil moisture shows a very high agreement with the in-situ soil moisture measurements, with a root mean square error (RMSE) of 5.64% and R² of 0.70. Moreover, the performance of the proposed approach was assessed individually for each of the major land cover classes present at the site, namely corn, soybeans, wheat, and grassland. The results show that the approach performed best for wheat, while its performance was least effective for grassland.
In conclusion, the study demonstrates how physics-based models can enhance DL algorithms to achieve accurate and physically meaningful results. The developed soil moisture retrieval scheme is applicable to L-band NISAR data, which will provide the opportunity to test the robustness of the soil moisture retrieval against global environmental variability. References: [1] Schaufler, Kitzler, A. Schindlbacher, U. Skiba, M. A. Sutton, and S. Zechmeister‐Boltenstern. "Greenhouse gas emissions from European soils under different land use: effects of soil moisture and temperature." European Journal of Soil Science 61, no. 5 (2010): 683-696. [2] Sheffield, Justin, Gopi Goteti, Fenghua Wen, and Eric F. Wood. "A simulated soil moisture based drought analysis for the United States." Journal of Geophysical Research: Atmospheres 109, no. D24 (2004). [3] Hosseini, Mehdi, and Heather McNairn. "Using multi-polarization C- and L-band synthetic aperture radar to estimate biomass and soil moisture of wheat fields." International Journal of Applied Earth Observation and Geoinformation 58 (2017): 50-64. [4] Baghdadi, Nicolas N., Mohamad El Hajj, Mehrez Zribi, and Ibrahim Fayad. "Coupling SAR C-band and optical data for soil moisture and leaf area index retrieval over irrigated grasslands." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 9, no. 3 (2015): 1229-1243. [5] Baghdadi, Nicolas, and Mehrez Zribi. "Evaluation of radar backscatter models IEM, OH and Dubois using experimental observations." International Journal of Remote Sensing 27, no. 18 (2006): 3831-3852. [6] Fung, Adrian K., and Kun-Shan Chen. "An update on the IEM surface backscattering model." IEEE Geoscience and Remote Sensing Letters 1, no. 2 (2004): 75-77. [7] Bracaglia, M., P. Ferrazzoli, and L. Guerriero. "A fully polarimetric multiple scattering model for crops." Remote Sensing of Environment 54, no. 3 (1995): 170-179.
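The customized, physics-constrained loss described in the abstract can be illustrated with a minimal sketch: a Huber data term on the HH backscatter residual plus a quadratic penalty that activates whenever a scattering component leaves a physically valid range. All function names, bounds, and numerical values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails (robust to outliers)."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def physics_constrained_loss(components, attenuation, sigma_hh_meas,
                             bounds=(0.0, 1.0), weight=10.0):
    """Loss = Huber(measured HH - sum of attenuated scattering components)
    plus a penalty pushing each component back into a physically valid range
    (the range would come from RT simulations; [0, 1] is a placeholder)."""
    sigma_hh_est = attenuation * np.sum(components)   # linear-power units
    data_term = huber(sigma_hh_meas - sigma_hh_est)
    lo, hi = bounds
    violation = np.maximum(lo - components, 0.0) + np.maximum(components - hi, 0.0)
    return data_term + weight * np.sum(violation**2)

# toy example: surface, double-bounce, volume, triple scattering (linear power)
comps_ok = np.array([0.06, 0.01, 0.04, 0.002])
loss_ok = physics_constrained_loss(comps_ok, attenuation=0.8, sigma_hh_meas=0.09)
comps_bad = np.array([0.06, -0.05, 0.04, 0.002])      # negative = unphysical
loss_bad = physics_constrained_loss(comps_bad, attenuation=0.8, sigma_hh_meas=0.09)
assert loss_bad > loss_ok  # the unphysical component is penalized
```

In a training loop, an optimizer such as Adam would minimize this loss with respect to the network outputs; the penalty term is what keeps the learned scattering components physically meaningful.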
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Field-Scale Soil Moisture Monitoring Using Sentinel-1 SAR: Exploring the Role of Soil Texture in Backscatter Sensitivity and Hydraulic Property Retrieval

Authors: Claire Stanyer, Professor Belen Marti-Cardona, Dr Irene Seco-Rizo
Affiliations: University Of Surrey, Systra SWS
Monitoring Soil Moisture (SM) at the crop field scale is of great interest for agricultural applications. Synthetic Aperture Radar (SAR) systems such as Sentinel-1 (S1) provide sensitivity to surface SM (SSM) at a spatial resolution compatible with crop field monitoring. Many algorithms have been proposed to relate SAR backscattering to SSM, yet most overlook soil texture as a modulating factor. Using extensive data from the agricultural sites of the UK COsmic-ray Soil Moisture Observing System (COSMOS-UK) SM monitoring network, this study investigated the influence of soil texture (closely linked to hydraulic properties) on the relationship between S1 C-band backscattering and SSM. During bare-soil periods, changes in S1 VV backscattering are primarily driven by surface moisture, which varies more rapidly than surface roughness. Field-sectors at each location were delineated based on the COSMOS sensor footprints, and S1 backscatter values, following an orbit correction, were averaged into time series based on these field-sectors. Sentinel-2 (S2) data were used to derive a time series of vegetation cover for each field-sector, allowing the selection of S1 backscatter time-series ranges during periods of low vegetation. The remotely sensed S1 backscatter was compared to the COSMOS Volumetric Water Content (VWC) ground truth. Our results clearly evidenced the semi-empirical first-order relationship between SSM and S1 field-averaged VV backscattering. We found that the gradient of their linear regression is consistent over different time periods at the same location. By plotting the gradients at each site's position on the soil texture triangle (based on UKSO data), we also showed that the gradients are indicative of soil texture, which in turn reflects field hydraulic properties. For instance, in sandy loam soil (Bunny Park), the S1 response showed high sensitivity to volumetric water content (VWC), with a change of 1.69% VWC per dB of S1 response.
This compares with the lower sensitivity of a clayey soil (Loddington), at a change of 4.81% VWC per dB, a factor of 2.8 difference. These findings lay the foundation for the retrieval of field-scale soil hydraulic properties from backscatter temporal patterns, when used together with rainfall data and soil moisture models. By optimising the temporal trends of modelled surface moisture to match the trends observed in S1 VV backscattering during low-vegetation periods, field-specific soil hydraulic conductivity and other infiltration properties can be derived. This would enable more accurate simulation of root-zone SM based on remotely sensed meteorological variables and vegetation dynamics.
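The gradient used here is simply the slope of a linear VWC-vs-backscatter regression. A small sketch, with made-up numbers that mimic the reported sandy/clay contrast (1.69 vs 4.81 %VWC per dB), can illustrate the diagnostic:

```python
import numpy as np

def vwc_sensitivity(sigma0_db, vwc_percent):
    """Slope of the linear VWC-vs-backscatter regression, in % VWC per dB.
    A larger slope (more %VWC needed per dB) means a weaker backscatter
    response to moisture change, as reported for clayey soils."""
    slope, _intercept = np.polyfit(sigma0_db, vwc_percent, 1)
    return slope

# synthetic illustration only; values are not the COSMOS-UK measurements
rng = np.random.default_rng(0)
db = np.linspace(-14.0, -8.0, 20)                      # field-averaged VV, dB
sandy = 10.0 + 1.7 * (db + 14.0) + rng.normal(0.0, 0.3, 20)
clay = 20.0 + 4.8 * (db + 14.0) + rng.normal(0.0, 0.3, 20)
assert vwc_sensitivity(db, clay) > vwc_sensitivity(db, sandy)
```

Plotting each site's fitted slope at its position on the soil texture triangle is then what links the gradient back to texture and hydraulic properties.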
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: First Results From the New Copernicus 1 km Surface Soil Moisture Product

Authors: Bernhard Raml, Roselyne Lacaze, Samuel Massart, Dr. Mariette Vreugdenhil, Wolfgang Wagner
Affiliations: Tu Wien, HYGEOS
Surface soil moisture (SSM) at kilometre-scale resolution is crucial for various applications, including crop yield prediction, drought monitoring, and flood forecasting. While the high-resolution microwave SAR sensors on Sentinel-1 provide the spatial sampling to derive such data, retrievals remain challenging due to the sensitivity of the C-band to vegetation and the prevalence of subsurface scattering anomalies in the active radar signal over arid and semi-arid regions. This update to the Copernicus 1 km SSM product aims to address these challenges through innovative solutions. The updated product includes advancements such as masking of dynamic water bodies and of freezing/thawing soil; in this submission, however, we focus on two key improvements: static filtering of soil moisture-insensitive pixels and a novel vegetation modelling approach using a 0th-order Radiative Transfer model (RT0). We filter soil moisture-insensitive pixels at 20 m sampling to improve the soil moisture signal at 1 km resolution by applying a recently developed sensitivity mask to the backscatter data. To model vegetation optical depth, the RT0 model ingests smoothed 300 m Leaf Area Index (LAI) data provided by the Copernicus service. An incremental analysis was conducted to evaluate the contribution of each improvement to the overall performance of the updated product, using reference soil moisture from modelled and in-situ datasets. Enhancements achieved through improved static filtering are particularly pronounced in arid and semi-arid regions, where subsurface scattering effects previously hindered retrievals. Concretely, we see an average improvement in Pearson correlation of 0.17 over southern Spain with respect to modelled soil moisture from NASA's Global Land Data Assimilation System (GLDAS). Meanwhile, the integration of RT0 for vegetation modelling has reduced seasonal vegetation biases.
This improvement is especially evident in densely vegetated areas such as Austria and Romania, where we see average increases in correlation of 0.19 and 0.24, respectively. This is further substantiated by detailed analyses of in-situ time series data. Despite these advancements, certain challenges persist. For example, double-bounce effects caused by inundated vegetation continue to hamper retrievals in areas like the Po Valley. Additionally, some in-situ stations have shown degraded performance. Further investigation of these stations has led to hypotheses about the underlying causes, including soil roughness changes due to agricultural practices, limitations of LAI as a proxy for vegetation optical depth, and potential issues with in-situ sensor placement. These findings point us to avenues for further improvement. Overall, these first results are encouraging and demonstrate the potential of the new Copernicus 1 km SSM product to meet the demands of high-resolution soil moisture monitoring.
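The role of a 0th-order RT vegetation model of the kind mentioned above can be sketched with a water-cloud-style forward model: the bare-soil backscatter is attenuated by a canopy whose optical depth is proportional to LAI, plus a canopy volume-scattering term. Every functional form and coefficient here is an illustrative placeholder, not the product's calibrated parameterization.

```python
import numpy as np

def rt0_backscatter(sm, lai, theta_deg=37.0, b=0.2, c=-25.0, d=30.0, omega=0.05):
    """Simplified 0th-order RT forward model (dB out):
    - bare-soil term linear in soil moisture `sm` (coefficients c, d assumed),
    - two-way canopy attenuation exp(-2*VOD/cos(theta)) with VOD = b * LAI,
    - a canopy volume-scattering term with single-scattering albedo `omega`."""
    mu = np.cos(np.deg2rad(theta_deg))
    vod = b * lai                                   # vegetation optical depth from LAI
    t2 = np.exp(-2.0 * vod / mu)                    # two-way attenuation
    sigma_soil = 10.0 ** ((c + d * sm) / 10.0)      # bare-soil backscatter, linear
    sigma_veg = 0.75 * omega * mu * (1.0 - t2)      # canopy volume term
    return 10.0 * np.log10(t2 * sigma_soil + sigma_veg)

# denser vegetation damps the backscatter response to a dry-to-wet transition
dry, wet = 0.10, 0.35
diff_sparse = rt0_backscatter(wet, lai=0.5) - rt0_backscatter(dry, lai=0.5)
diff_dense = rt0_backscatter(wet, lai=4.0) - rt0_backscatter(dry, lai=4.0)
assert diff_sparse > diff_dense > 0.0
```

Inverting such a model (rather than ignoring vegetation) is what removes the seasonal vegetation bias from the retrieved soil moisture.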
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: The International Soil Moisture Network (ISMN): providing a permanent service for earth system sciences

Authors: Wolfgang Korres, Tunde Olarinoye, Fay Boehmer, Kasjen Kramer, Stephan Dietrich, Matthias Zink
Affiliations: Federal Institute Of Hydrology, International Center for Water Resources and Global Change (ICWRGC)
Soil moisture is recognized as an Essential Climate Variable (ECV) because it is crucial for assessing water availability for plants and hence food production. Long time series of freely available and interoperable soil moisture data with global coverage enable scientists, practitioners (such as farmers), and decision makers to detect trends, assess the impacts of climate change, and develop adaptation strategies. The collection, harmonization, and archiving of in situ soil moisture data motivated the establishment of the International Soil Moisture Network (ISMN) at the Vienna University of Technology in 2009 as a community effort. Supported by several project funding periods of the European Space Agency (ESA), the ISMN became an essential means for validating and improving global land surface satellite products and climate and hydrological models. In December 2022, the ISMN was transferred to a new hosting facility, the International Centre for Water Resources and Global Change (ICWRGC) and the German Federal Institute of Hydrology (BfG) in Koblenz (Germany). ISMN data have been provided successfully by the new host since then, and will be for decades to come, as the German government has committed to its long-term funding. This presentation showcases the International Soil Moisture Network (ISMN). Beyond offering comprehensive in situ soil moisture data, the ISMN freely disseminates additional environmental variables, including soil temperature, snow depth, snow water equivalent, precipitation, air temperature, surface temperature, and soil water potential, where available from our data providers. With a global reach, the ISMN has already accumulated 3000 stations with observations at various depths, of which about 1000 stations are updated on a daily basis. Ongoing efforts concentrate on expanding the database by incorporating additional stations and networks from institutional or governmental sources.
Substantial resources are directed towards fortifying the operational system and improving usability to better serve our users. Additional efforts are undertaken to include the ISMN in the data-to-value chain by contributing to international initiatives such as WMO, FAO, and GCOS. One example is the contribution to WMO's annual State of Global Water Resources report.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Short-term SAR change detection for soil moisture estimation: A case study over multiple test-sites in Denmark.

Authors: Mr. Miquel Negre Dou, Mr. John Peter Merryman
Affiliations: Technical University Of Denmark
This study investigates several aspects of the application of the short-term change detection (STCD) algorithm proposed by Balenzano et al. [1] and further developed by Mengen et al. [2]. The main assumption behind this approach is that changes in surface roughness and vegetation growth, which contribute to backscatter, occur at longer time scales than soil moisture fluctuations. Therefore, when analyzing the ratios of backscatter coefficients from successive acquisitions, only the information pertaining to changes in surface soil moisture content should remain. However, to derive a soil moisture time series from these data, external information is required to constrain the inversion. This typically requires a coarser soil moisture product, such as GLDAS (Global Land Data Assimilation System) [3], together with a bounded least-squares inversion approach. Soil texture and field capacity estimates are also incorporated into the retrieval process through ancillary data. The impact of key algorithmic parameters was assessed, such as the spatial averaging (multilooking) factors, the temporal window size, and the choice of external datasets used to constrain the inversion. The method's performance was validated against data from the Danish Hydrological Observatory (HOBE) [4] [5] between 2018 and 2020. For several stations, the retrieved soil moisture time series were found to reproduce the highly nonlinear patterns observed in the ground-truth data, apart from some outliers, mainly at stations where the recorded soil moisture consistently surpassed the expected field capacity. Furthermore, the model's ability to resolve soil moisture variation at a sub-field scale was explored. This is particularly relevant for identifying greenhouse gas emission hotspots, which have been linked to poor drainage conditions [6].
To this end, the STCD soil moisture retrievals were tested at the sites of the KortDrænN2O project, which have undergone different drainage treatments at a sub-field scale, in the context of a broader initiative to provide robust figures for nitrous oxide (N2O) emissions from mineral soils across Denmark. References: [1] A. Balenzano, F. Mattia, G. Satalino, et al., “Sentinel-1 soil moisture at 1 km resolution: A validation study,” Remote Sensing of Environment, vol. 263, p. 112554, 2021, issn: 0034-4257. doi: 10.1016/j.rse.2021.112554. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0034425721002741. [2] D. Mengen, T. Jagdhuber, A. Balenzano, et al., “High spatial and temporal soil moisture retrieval in agricultural areas using multi-orbit and vegetation adapted Sentinel-1 SAR time series,” Remote Sensing, vol. 15, no. 9, 2023, issn: 2072-4292. doi: 10.3390/rs15092282. [Online]. Available: https://www.mdpi.com/2072-4292/15/9/2282. [3] H. Beaudoing, M. Rodell, NASA, et al., “GLDAS Noah Land Surface Model L4 3 Hourly 0.25 × 0.25 Degree, Version 2.1,” Goddard Earth Sciences Data and Information Services Center, 2020. [4] S. Bircher, N. Skou, K. H. Jensen, et al., “A soil moisture and temperature network for SMOS validation in western Denmark,” Hydrology and Earth System Sciences, vol. 16, no. 5, pp. 1445–1463, 2012. doi: 10.5194/hess-16-1445-2012. [Online]. Available: https://hess.copernicus.org/articles/16/1445/2012/. [5] K. H. Jensen and J. C. Refsgaard, “HOBE: The Danish Hydrological Observatory,” Vadose Zone Journal, vol. 17, no. 1, p. 180059, 2018. doi: 10.2136/vzj2018.03.0059. eprint: https://acsess.onlinelibrary.wiley.com/doi/pdf/10.2136/vzj2018.03.0059. [Online]. Available: https://acsess.onlinelibrary.wiley.com/doi/abs/10.2136/vzj2018.03.0059. [6] K. J. S. Jensen, S. Hansen, M. E. Styczen, et al., “Yield and development of winter wheat (Triticum aestivum L.) and spring barley (Hordeum vulgare) in field experiments with variable weather and drainage conditions,” European Journal of Agronomy, vol. 122, p. 126075, 2021, issn: 1161-0301. doi: 10.1016/j.eja.2020.126075. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1161030120300824.
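The core STCD idea can be sketched in a deliberately simplified form: consecutive backscatter ratios are assumed to equal soil moisture ratios, the resulting linear system is anchored in scale by a coarse soil moisture product, solved by least squares, and clipped to physical bounds. The real algorithm operates on multi-orbit time series with temporal windows, vegetation adaptation, and a bounded least-squares solver, none of which are reproduced here.

```python
import numpy as np

def stcd_invert(sigma0_lin, sm_coarse_mean, bounds=(0.05, 0.45)):
    """Short-term change detection (illustrative sketch): assume the ratio of
    consecutive backscatter acquisitions (linear units) equals the ratio of
    soil moisture values. Build a linear system from the ratios, fix its scale
    with a coarse soil moisture mean, solve by least squares, clip to bounds."""
    n = len(sigma0_lin)
    r = sigma0_lin[1:] / sigma0_lin[:-1]            # consecutive backscatter ratios
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n - 1):                          # mv[i+1] - r[i]*mv[i] = 0
        A[i, i + 1], A[i, i] = 1.0, -r[i]
    A[-1, :] = 1.0 / n                              # mean(mv) = coarse-product mean
    b[-1] = sm_coarse_mean
    mv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(mv, *bounds)

# synthetic check: a moisture series is recovered from its own backscatter ratios
true_mv = np.array([0.10, 0.25, 0.18, 0.32, 0.15])
sigma0 = 0.08 * true_mv / true_mv.mean()            # sigma0 proportional to mv
est = stcd_invert(sigma0, true_mv.mean())
assert np.allclose(est, true_mv, atol=1e-6)
```

The clipping bounds play the role of the residual-moisture and field-capacity constraints derived from the ancillary soil texture data.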
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Field scale study on the surface soil moisture estimation from SAOCOM data

Authors: Giulia Graldi, Filippo Bocchino, Alireza Hamoudzadeh, Lorenza Ranaldi, Deodato Tapete, Alessandro Ursi, Maria Virelli, Patrizia Sacco, Valeria Belloni, Roberta Ravanelli, Mattia Crespi
Affiliations: Sapienza University of Rome, Geodesy and Geomatics Division, Italian Space Agency, University of Liège, Department of Geography (Faculty of Sciences), Geomatics Unit, Sapienza School for Advanced Studies, Sapienza University of Rome
The current climate crisis we are facing worldwide should push us toward a conservative use of water resources. The agricultural sector alone currently uses 70% of the available freshwater [1], underlining the need for a collective effort to find suitable solutions for saving the resources exploited in agricultural practice. In this context, remote monitoring of soil moisture conditions from Earth Observation data could help save freshwater resources by providing useful information for the optimal management of irrigation. Historically, surface soil moisture has largely been estimated from active and passive microwave remote sensing data, mainly yielding products with resolutions from tens of kilometres down to the sub-kilometric scale. Synthetic Aperture Radar (SAR) remote sensing is particularly suited to high-resolution analysis and, more specifically, to applications at the field scale. However, working at this spatial resolution is extremely challenging, since the hypotheses underlying surface soil moisture estimation, which may hold at wider spatial scales, may not be valid at the field scale. Specifically, among the factors that should be taken into account at the field scale are the seasonal effect of vegetation and the changes in soil roughness conditions due to agricultural practices. Recently, it has been observed in C-band data acquired by different sensors and at different spatial scales [2, 3] that the drying process of the terrain may play a dominant role in the total backscattering coefficient. In the frame of the GRAW project, funded by the Italian Space Agency (ASI), this investigation explores the potential of the L-band data of the SAOCOM mission [4] for the estimation of surface soil moisture at the field scale.
These data have a spatial resolution of around 10 m and are representative of the surface moisture conditions of a deeper soil layer (~20 cm) than the C-band (~5 cm). The case study is situated in an agricultural area in Spain where surface soil moisture measurements from the REMEDHUS network are available. Data on the vegetation type are derived from the EUCROPMAP for the year 2022, and the changes in roughness conditions due to agricultural practices are taken into account with an anomaly detection approach [5]. Acknowledgements: This research is performed in the framework of the GRAW project, funded by the Italian Space Agency (ASI), Agreement n. 2023-1-HB.0, as part of ASI's program “Innovation for Downstream Preparation for Science” (I4DP_SCIENCE). References: [1] Food and Agriculture Organization of the United Nations (FAO) (2017). Water for Sustainable Food and Agriculture - A Report Produced for the G20 Presidency of Germany. http://www.fao.org/3/i7959e/i7959e.pdf [2] Wagner, W. et al. (2022). ‘Widespread occurrence of anomalous C-band backscatter signals in arid environments caused by subsurface scattering’. Remote Sensing of Environment 276, p. 113025. DOI: 10.1016/j.rse.2022.113025. [3] Graldi, G. (2024). ‘SAR for superficial soil moisture retrieval at the field scale over an agricultural area’. PhD thesis. [4] Azcueta, M., Gonzalez, J.P.C., Zajc, T., Ferreyra, J., Thibeault, M. (2021). External calibration results of the SAOCOM-1A commissioning phase. IEEE Trans. Geosci. Remote Sens. 60, 1–8. [5] Zhu, L. et al. (2019). ‘Roughness and vegetation change detection: A pre-processing for soil moisture retrieval from multi-temporal SAR imagery’. Remote Sensing of Environment 225, pp. 93–106. DOI: 10.1016/j.rse.2019.02.027.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: A gap-filled global long-term satellite soil moisture climate data record from ESA CCI SM

Authors: Wolfgang Preimesberger, Pietro Stradiotti, Wouter Dorigo
Affiliations: TU Wien, GeoVille Informationssysteme und Datenverarbeitung GMBH
ESA CCI Soil Moisture is a multi-satellite climate data record consisting of harmonized daily observations, at present from 19 satellites operating in the microwave domain. The wealth of satellite information, particularly over the last decade, facilitates the creation of a daily data record at 0.25° spatial sampling with the highest possible data consistency and coverage. However, data gaps are still found in the record. This is particularly notable in earlier periods, when a limited number of satellites were in operation, but gaps can also arise from various retrieval issues, such as frozen soils, dense vegetation, and radio frequency interference. These data gaps present a challenge for many users, as they can obscure relevant events within a study area or are incompatible with (machine learning) software that often relies on gap-free inputs. Here, we present a new gap-filled version of ESA CCI SM. Our gap-filling framework employs a Discrete Cosine Transform with Penalized Least Squares (DCT-PLS) algorithm, applied within a spatial moving window to reconstruct missing values using temporal and regional neighborhood information. To enhance performance in densely vegetated areas, we incorporate additional soil moisture retrievals from C- and L-band sensors, previously absent from ESA CCI SM. For periods of frozen soil, a linear interpolation approach is used to approximate soil moisture storage under frozen conditions. Uncertainty quantification is achieved by modelling how input observation quality influences gap-filling performance under diverse surface conditions. Validation against artificially introduced gaps in reanalysis data confirms the ability of our method to restore soil moisture in satellite-like gaps. Validation of the gap-filled product against independent in situ reference measurements yields good performance metrics, with a global median correlation of R = 0.72 and ubRMSD = 0.05.
The new ESA CCI SM v09.1 GAPFILLED dataset is publicly available at https://doi.org/10.48436/s5j4q-rpd32 and will be regularly updated as part of the operational ESA CCI SM production. This product is recommended for climate studies requiring reliable, gap-free soil moisture estimates. The development of ESA CCI SM has been supported by ESA’s Climate Change Initiative for Soil Moisture (Contract No. 4000104814/11/I-NB & 4000112226/14/I-NB).
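A minimal 1-D version of the DCT-PLS idea can be sketched as follows: the penalized least-squares problem is solved in the discrete cosine domain, where the penalty becomes a simple low-pass filter, and missing values (zero weight) are iteratively re-estimated from the smooth reconstruction. The operational scheme additionally works spatially within moving windows and models uncertainty, which this sketch omits; the smoothing parameter and iteration count are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_pls_fill(y, s=0.5, n_iter=200):
    """Minimal 1-D DCT-PLS gap filling: penalized least squares solved in the
    discrete cosine domain; gap samples get zero weight and are iteratively
    re-estimated from the smooth reconstruction."""
    y = np.asarray(y, dtype=float)
    n = y.size
    w = np.isfinite(y).astype(float)                 # 0 at gaps, 1 elsewhere
    lam = -2.0 + 2.0 * np.cos(np.arange(n) * np.pi / n)
    gamma = 1.0 / (1.0 + s * lam**2)                 # low-pass filter in DCT space
    idx = np.arange(n)
    z = y.copy()                                     # start from a linear fill
    z[w == 0] = np.interp(idx[w == 0], idx[w == 1], y[w == 1])
    y0 = np.nan_to_num(y)
    for _ in range(n_iter):
        z = idct(gamma * dct(w * (y0 - z) + z, norm="ortho"), norm="ortho")
    return z

# demonstrate on a sine series with a contiguous gap
t = np.linspace(0.0, 4.0 * np.pi, 120)
truth = np.sin(t)
y = truth.copy()
y[30:45] = np.nan
filled = dct_pls_fill(y)
assert np.all(np.isfinite(filled))
assert np.max(np.abs(filled[30:45] - truth[30:45])) < 0.35
```

Because the penalty acts as a frequency-domain filter, the method reconstructs gaps with values consistent with the smooth temporal behaviour of the surrounding observations.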
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Innovative High-Resolution Soil Moisture Retrieval in Catalan Agricultural Lands Using Change Detection and Downscaling Techniques

Authors: Zihao Wang, Qi
Affiliations: Centre Tecnologic de Telecomun
Soil moisture (SM) is a critical variable in hydrological, ecological, and agricultural domains, playing a pivotal role in irrigation management, drought monitoring, crop yield prediction, and precipitation estimation (Wei et al., 2007). It is also linked to natural disasters such as floods and landslides, as well as seasonal dynamics like wildfires and carbon storage (Gu et al., 2019). While direct measurement methods provide highly accurate SM data, they come with high time and labor costs and limited spatial coverage (Arabi et al., 2020). Remote sensing techniques, by contrast, enable large-scale monitoring but face challenges related to coarse spatial resolution, interference from weather and surface conditions, and inversion model accuracy (Cui et al., 2016). Existing satellite SM products, with spatial resolutions of 1 km or coarser, struggle to capture the rapid SM variations crucial for precision agriculture (Abbaszadeh et al., 2018; Das et al., 2019). High-resolution SM data are therefore essential for applications like precision irrigation and farmland classification. To address these challenges, this study proposes an innovative high-resolution surface soil moisture (SSM) retrieval framework that combines a Machine Learning (ML) approach with the Change Detection and Down-Scaling (CDDS) method, aimed at improving precision agriculture and water resource management in the Catalonia region of northeastern Spain (around 41°49′N, 1°28′E). Catalonia's Mediterranean climate, with frequent droughts, makes it an ideal area for studying SM dynamics. The region also provides abundant geoinformation resources, including soil moisture, temperature, precipitation, crop types, and irrigation practices, along with a network of soil measurement stations. The framework integrates Sentinel-1 synthetic aperture radar (SAR) data, Sentinel-2 multispectral instrument (MSI) data, and a background SSM reference.
Sentinel-1 SAR's VV and VH polarization bands capture spatiotemporally continuous surface features, while the Sentinel-2 NDVI represents vegetation coverage. Additionally, a novel input variable, the backscatter difference, introduced through the change detection approach, combines the strengths of microwave and optical remote sensing, effectively mitigating the impacts of vegetation and surface roughness. The ML training dataset, spanning 2017-2021, includes the VV and VH bands, NDVI, backscatter difference, background SSM, and in-situ SSM. The background SSM product used is the Disaggregation based on Physical And Theoretical Scale Change (DISPATCH) SSM, a 1 km daily resolution product covering the entire Catalonia region (Ojha et al., 2021). The in-situ SSM data were obtained from the Institut Cartogràfic i Geològic de Catalunya (ICGC) Soil Measurement Network, which provides continuous soil moisture measurements at depths of 5 cm to 100 cm across 18 sites in Catalonia, with data recorded every 30 minutes. In this study, in-situ SSM data from 2017-2021 were used for training, and data from 2022-2024 were used for validation. By incorporating these variables into a machine learning-based downscaling model, the framework enables the retrieval of high-resolution (30 m), temporally consistent (3-5 day) SSM. For model selection, four machine learning algorithms were evaluated: Support Vector Regression (SVR), Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Long Short-Term Memory (LSTM) networks. Bayesian optimization was employed to fine-tune model parameters, and ground-based measurements were used for validation. Among these, XGBoost outperformed the other models, achieving an R² of 0.933 and an RMSE of 0.023 on the test set. Validation across ground measurement sites revealed an average correlation coefficient (R) of 0.63, with an RMSE of 0.057, mean absolute error (MAE) of 0.046, and bias of 0.024, indicating minimal prediction errors.
Temporal comparisons showed the highest correlations in January (0.81) and June (0.73), with monthly RMSE values ranging from 0.05 to 0.07; the lowest RMSEs of 0.05 were observed in March and September. MAE values were generally between 0.04 and 0.05, reflecting stable prediction performance, while bias ranged from -0.02 to 0.04, with the smallest bias of -0.004 recorded in March. Time-series comparisons over two years demonstrated strong consistency with in-situ SSM data, effectively capturing both seasonal variations and abrupt changes following precipitation and irrigation events. In terms of spatial distribution, the high-resolution SSM products revealed significant spatiotemporal variability across the study area. The southern, drought-prone regions were drier, while the northern mountainous areas remained consistently moist. The retrieved SSM aligned closely with irrigation type maps, distinguishing rainfed from irrigated farmland. The proposed method accurately estimates high-resolution SM at the field scale, demonstrating strong predictive performance, consistency, and applicability over large spatial and temporal ranges, making it valuable for studying drought conditions in Catalonia. Future research should explore the integration of additional data sources and the refinement of temporal interpolation techniques to enhance model reliability and scalability for broader applications. References: 1. Abbaszadeh, P., Hamid, M., & Zhan, X. (2018). Downscaling SMAP radiometer soil moisture over the CONUS using an ensemble learning method. Water Resources Research, 55. 2. Abowarda, A. S., Bai, L., Zhang, C., Long, D., Li, X., Huang, Q., & Sun, Z. (2021). Generating surface soil moisture at 30 m spatial resolution using both data fusion and machine learning toward better water resources management at the field scale. Remote Sensing of Environment, 255, 112301. 3. Arabi, B., Salama, M. S., Pitarch, J., & Verhoef, W. (2020).
Integration of in-situ and multi-sensor satellite observations for long-term water quality monitoring in coastal areas. Remote Sensing of Environment, 239, 111632. 4. Chen, D., Chen, N., Zhang, X., Ma, H., & Chen, Z. (2021). Next-generation soil moisture sensor web: High-density in situ observation over NB-IoT. IEEE Internet of Things Journal, 8(17), 13367–13383. 5. Cui, Y., Long, D., Hong, Y., Zeng, C., Zhou, J., Han, Z., et al. (2016). Validation and reconstruction of FY-3B/MWRI soil moisture using an artificial neural network based on reconstructed MODIS optical products over the Tibetan Plateau. Journal of Hydrology, 543, 242–254. 6. Das, N. N., Entekhabi, D., Dunbar, R. S., Chaubell, M. J., Colliander, A., Yueh, S., et al. (2019). The SMAP and Copernicus Sentinel 1A/B microwave active-passive high resolution surface soil moisture product. Remote Sensing of Environment, 233, 111380. 7. De Lannoy, G. J. M., & Reichle, R. H. (2016). Assimilation of SMOS brightness temperatures or soil moisture retrievals into a land surface model. Hydrology and Earth System Sciences, 20(12), 4895–4911. 8. Dorigo, W., Wagner, W., Albergel, C., Albrecht, F., Balsamo, G., Brocca, L., et al. (2017). ESA CCI Soil Moisture for improved Earth system understanding: State-of-the art and future directions. Remote Sensing of Environment, 203, 185–215. 9. Ford, T., & Quiring, S. (2019). Comparison of contemporary in situ, model, & satellite remote sensing soil moisture with a focus on drought monitoring. Water Resources Research, 55. 10. Gu, X., Zhang, Q., Li, J., Singh, V. P., Liu, J., Sun, P., et al. (2019). Intensification and expansion of soil moisture drying in warm season over Eurasia under global warming. Journal of Geophysical Research: Atmospheres, 124(7), 3765–3782. 11. Nitu Ojha, Olivier Merlin, Christophe Suere, Maria José Escorihuela (2021). 
Extending the Spatio-Temporal Applicability of DISPATCH Soil Moisture Downscaling Algorithm: A Study Case Using SMAP, MODIS and Sentinel-3 Data. Frontiers in Environmental Science, 9, 555216. 12. Senanayake, I.P.; Pathira Arachchilage, K.R.L.; Yeo, I.-Y.; Khaki, M.; Han, S.-C.; Dahlhaus, P.G (2024). Spatial Downscaling of Satellite-Based Soil Moisture Products Using Machine Learning Techniques: A Review. Remote Sens, 16, 2067.
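The backscatter-difference input described in this abstract is a change-detection feature; its exact definition is not given here, so the sketch below uses one common variant purely for illustration: the per-pixel difference between each acquisition and a low temporal quantile acting as a "dry reference", which cancels static contributions such as surface roughness.

```python
import numpy as np

def backscatter_difference(sigma0_db, dry_quantile=0.05):
    """Illustrative change-detection feature: per-pixel difference between
    each acquisition and a 'dry reference' (a low temporal quantile), which
    removes time-invariant contributions such as surface roughness.
    sigma0_db: array of shape (time, rows, cols) in dB."""
    dry_ref = np.quantile(sigma0_db, dry_quantile, axis=0)
    return sigma0_db - dry_ref

# toy stack: a static per-pixel roughness pattern plus a shared moisture signal
rng = np.random.default_rng(42)
roughness = rng.normal(-12.0, 2.0, size=(8, 8))        # differs pixel to pixel
moisture_signal = np.linspace(0.0, 4.0, 10)[:, None, None]
stack = roughness[None, :, :] + moisture_signal        # shape (10, 8, 8)
delta = backscatter_difference(stack)
# the static roughness pattern cancels: spatial spread collapses to ~zero
assert delta[-1].std() < 1e-9
```

A feature of this kind, fed to the ML model alongside VV, VH, and NDVI, is what lets the regression concentrate on the moisture-driven part of the signal.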
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Combined use of ground-based GNSS-R and Sentinel-2 imagery for soil moisture estimation in irrigated grassland

Authors: Dr. Marcel M. El Hajj, Susan Steele-Dunne, Dr. Kasper Johansen, Dr. Samer K. Almashharawi, Dr. Oliver M. Lopez, Dr. Omar A. López Camargo, Dr. Adria Amezaga-Sarries, Dr. Andreu Mas-Viñolas, Prof. Dominique Courault, Prof. Claude Doussan, Prof. Matthew F. McCabe
Affiliations: Hydrology, Agriculture and Land Observation Laboratory, Division of Biological and Environmental Sciences and Engineering, King Abdullah University of Science and Technology, (KAUST), Thuwal, Saudi Arabia, Department of Geoscience and Remote Sensing, Delft University of Technology, 2628 CN Delft, Netherland, Microwave Sensors and Electronics SL, Barcelona, Spain, UMR 1114 EMMAH INRAE, Avignon University, Avignon 84914, France
Soil moisture plays a central role in water cycle dynamics and land-atmosphere interactions, acting across local and regional scales. Over the past decade, global navigation satellite system reflectometry (GNSS-R) has emerged as a “signal of opportunity” technique for continuous and near-real-time soil moisture and vegetation parameter estimation at the sub-plot scale. With ground-based GNSS-R, the estimation of soil moisture is based on the use of the direct signal from GNSS satellites and the signal reflected from the Earth's surface. Most studies have employed an interferometric GNSS-R (GNSS-IR) ground-based receiver with a single antenna to estimate soil moisture and vegetation parameters by analyzing the phase, amplitude, and frequency of the Signal-to-Noise Ratio (SNR) waveform. In 2009, the Interference Pattern Technique (IPT) was introduced as an alternative means to estimate soil moisture from GNSS-R acquisitions. A unique feature of the IPT approach is that it measures the power fluctuations of the interference of the direct and reflected signals as a function of the GNSS satellite elevation angle. This allows the use of information on the satellite elevation angle for soil moisture and vegetation parameter estimation from the interference power (IP) waveform at vertical polarization. To the best of our knowledge, only one study has used the IP oscillation from the IPT to estimate soil moisture and vegetation height, with that investigation focusing on tracking wheat through a growing season (heights between 10 and 60 cm). In that study, soil moisture and wheat height were estimated by determining multiple notch positions (elevation angles of smallest oscillation) in the IP oscillation for satellite elevation values between 5 and 70°.
However, several studies found it challenging to accurately determine notch positions in real GNSS-R acquisitions, especially when the IP oscillation exhibits low-frequency oscillations or maintains a constant amplitude over a wide range of satellite elevation angles. Therefore, a more robust and practical method to estimate soil moisture from IP waveforms over vegetated areas is needed. This study introduces the use of the IP amplitude at vertical polarization (V-pol), readily extracted from the IP oscillations, as an alternative to notch positions for soil moisture estimation beneath vegetation cover. The experiment was conducted in a gravity-irrigated grassland field (approximately 3 hectares) during a period covering the growth phase and pre- and post-harvest. A ground-based GNSS-R IPT instrument, i.e., the Soil Moisture Interference-pattern GNSS Observations at L-band Reflectometer (SMIGOL-R), was positioned at a height of 4.75 m. Additionally, a weather station was installed in the field to gather standard meteorological measurements, along with soil moisture at a depth of 7 cm from two Stevens HydraProbe-II sensors. An empirical model was developed for estimating soil moisture that uses the Normalized Difference Water Index (NDWI) from the Sentinel-2 satellite to account for the impact of vegetation on the IP amplitude. Results indicated that the IP amplitude at V-pol accurately estimates soil moisture (RMSE = 0.04 m³/m³). Moreover, the results show that the vegetation layer mainly attenuates the IP amplitude, with a non-significant scattered contribution to the IP, allowing the empirical model to be simplified by ignoring the scattered contribution of vegetation. The simplified empirical model can be solved numerically to estimate the NDWI if the soil moisture is known. In summary, this study highlights the effectiveness of the ground-based IPT for close-range sensing of soil moisture and biomass proxies, such as NDWI.
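As a rough illustration of the kind of empirical model described above, the sketch below couples a hypothetical linear link between bare-soil IP amplitude and soil moisture with an NDWI-driven attenuation term, and then inverts it for soil moisture. The coefficients `A0`, `A1` and `B` are placeholders for illustration only, not the values calibrated against the grassland site in the study.

```python
import math

# Hypothetical coefficients -- the study calibrates its model against
# in-situ probe data; these numbers are purely illustrative.
A0, A1 = 10.0, 80.0   # linear link: bare-soil IP amplitude vs. soil moisture
B = 2.5               # vegetation attenuation factor applied to NDWI

def ip_amplitude(sm, ndwi):
    """Forward model: IP amplitude at V-pol, attenuated by vegetation (NDWI)."""
    return (A0 + A1 * sm) * math.exp(-B * ndwi)

def soil_moisture(amplitude, ndwi):
    """Invert the forward model for soil moisture given an observed amplitude."""
    return (amplitude * math.exp(B * ndwi) - A0) / A1

# Round trip: forward then inverse recovers the input soil moisture
sm_est = soil_moisture(ip_amplitude(0.25, 0.3), 0.3)
```

As the abstract notes, the same simplified model can also be solved the other way round, estimating NDWI when soil moisture is known.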

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: H SAF ASCAT Disaggregated Surface Soil Moisture at 0.5 km: First Validation Results with ISMN Data Over Europe

Authors: Pavan Muguda Sanjeevamurthy, Sebastian Hahn, Mariette Vreugdenhil, Bernhard Raml, Wolfgang Wagner
Affiliations: Technische Universität Wien
Monitoring soil moisture at high spatiotemporal resolution is vital for understanding several hydrological processes and supporting agricultural applications. However, while most existing remote sensing soil moisture products have a high temporal sampling rate, they are characterized by a coarse spatial resolution (tens of kilometers). Furthermore, their spatial interpolation is not straightforward near coastlines, urban areas and mountainous regions. To address this latter challenge, this study investigates a new Disaggregated ASCAT Surface Soil Moisture product (DIS ASCAT SSM 0.5 km v2 - H28) at 0.5 km sampling, developed under EUMETSAT's Satellite Application Facility on Support to Operational Hydrology and Water Management (H SAF). The DIS ASCAT SSM 0.5 km v2 product enhances spatial sampling by downscaling the H SAF ASCAT SSM 6.25 km product using directional downsampling parameters derived from Sentinel-1 C-SAR backscatter data. The disaggregation method is based on the concept of temporal stability, which assumes that soil moisture patterns across local (0.5 km) and regional (6.25 km) scales remain consistent over time. This temporal stability is also reflected in Sentinel-1 backscatter measurements, as they are sensitive to changes in soil moisture. Hence, the static parameters derived from the backscatter time series are used to resample the 6.25 km SSM to 0.5 km, improving the spatial patterns in complex environments while preserving the temporal dynamics of the ASCAT observations. Furthermore, to effectively represent soil moisture across the soil profile, the DIS ASCAT SSM 0.5 km v2 product is integrated into a two-layer water balance model that applies a recursive formulation of the exponential filter to generate daily soil water index (SWI) values for eight different characteristic time lengths (T-value). 
To validate the DIS ASCAT SSM 0.5 km v2 product, in situ measurements from the International Soil Moisture Network (ISMN) were utilized, focusing on European stations with soil moisture data for the top soil layer (0–10 cm depth) available between 2007 and 2023. The validation procedure followed established protocols, comparing the DIS ASCAT SSM 0.5 km v2 product to the nearest ISMN stations using the Pearson correlation coefficient and root mean square error (RMSE). To assess the ability of DIS ASCAT SSM 0.5 km v2 product to capture short-term dynamics, anomalies were calculated using a 35-day moving window and evaluated using the same metrics. The daily soil water index (SWI) at 0.5 km sampling, generated from the two-layer water balance model, was similarly validated against ISMN data. Since SWI represents soil moisture dynamics across the root zone, validation required selecting ISMN stations with measurements extending beyond the surface layer to represent deeper soil profiles. ISMN stations were grouped by soil layer depth: near-surface (0–10 cm), subsurface (10–30 cm), and deep surface (30–100 cm). SWI values were matched to these depths using a specific set of T-values. The same statistical metrics—Pearson correlation coefficient and RMSE—were used to ensure consistent comparisons across all layers. This stratified approach provided a comprehensive evaluation of the SWI's ability to capture soil moisture dynamics at varying depths. To gain a holistic understanding of the validation results, statistical metrics for the DIS ASCAT SSM 0.5 km v2 product and its SWI derivative were analyzed by grouping ISMN stations according to networks, Köppen–Geiger climate classifications, and land cover types. Our findings indicate strong agreement between the SSM and SWI products with ISMN observations, demonstrating the potential of the new DIS ASCAT SSM 0.5 km v2 product for diverse land cover types and climatic conditions across Europe.
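The recursive formulation of the exponential filter used above to derive SWI from surface soil moisture is well documented in the scatterometer literature; a minimal sketch (variable names are illustrative) is:

```python
import math

def soil_water_index(times, ssm, T):
    """Recursive exponential filter for the soil water index (SWI).

    times : observation times in days
    ssm   : surface soil moisture series (same length as times)
    T     : characteristic time length in days (the product uses eight T-values)
    """
    swi = [ssm[0]]   # initialise with the first SSM observation
    K = 1.0          # gain, updated recursively
    for i in range(1, len(ssm)):
        dt = times[i] - times[i - 1]
        K = K / (K + math.exp(-dt / T))
        swi.append(swi[-1] + K * (ssm[i] - swi[-1]))
    return swi

# A step increase in SSM: the SWI approaches the new level gradually
swi = soil_water_index([0, 1, 2, 3], [0.2, 0.4, 0.4, 0.4], T=1.0)
```

Larger T-values represent deeper, more slowly responding soil layers, which is why validation against ISMN profiles matches each depth group to a specific set of T-values.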

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: On the relationship between C-band InSAR closure phases and temporal changes in soil moisture and vegetation water content

Authors: Nuno Cirne Mira, Joao Catalao, Mr. Giovanni Nico
Affiliations: IDL, Faculdade de Ciencias, Universidade de Lisboa, CINAMIL, Academia Militar, Av. Conde Castro Guimarães, Istituto per le Applicazioni del Calcolo (IAC), Consiglio Nazionale delle Ricerche (CNR), 70126 Bari, Italy.
Recent developments in SAR interferometry have revealed the relationship between phase closure and soil moisture variation. Non-zero phase closures (phase inconsistencies) occur when scatterers with independent phase behavior from different populations interfere with each other. Soil moisture or vegetation water content may be the cause of these phase inconsistencies. Subsequent studies have discovered the presence of a new interferometric signal in multilooked interferograms, characterized as short-lived, decaying with increasing temporal baseline, and distinct from temporal decorrelation. The presence of this short-lived signal introduces a bias in interferograms with a small temporal baseline. It was found that the behavior of the short-lived signal varies significantly across land cover types, and the model proposed by De Zan et al. (2014), based on soil moisture variation, was not sufficient to explain the observed phase inconsistencies. In this study, we aim to clarify the relationship between closure phases, soil moisture, and vegetation water content, while also investigating the impact of phase decorrelation on InSAR estimates of terrain deformation. Current methods for mitigating phase bias in short temporal baseline interferograms typically require the computationally intensive processing of hundreds of interferograms. Here, we propose a practical and straightforward solution to mitigate phase bias by leveraging vegetation water content information derived from multispectral satellite data. The core premise of our experiment is that phase decorrelation is influenced by variations in both soil moisture and vegetation water content, rather than solely by vegetation growth. If this hypothesis holds, the phase bias can be addressed using the temporal variability of the normalized difference water index (NDWI) as a proxy for vegetation water content variability.
Additionally, decorrelation phases could offer valuable insights into changes in vegetation water content. Building on this theoretical framework, we analyse the relationship between phase decorrelation, soil moisture, and land cover types (e.g., agricultural, forest, urban, and bare soil). For this purpose, regression techniques are used, where the interferometric phase observables are expressed as a function of soil moisture and phase decorrelation is expressed as a function of vegetation indices used as proxies for vegetation cover and variability of water content in vegetation. We will show that the magnitude and temporal variability of the decorrelation phase are low or very low over urban and bare soil areas, while they increase for agricultural and forest areas. This means that decorrelation phases are more related to vegetation water content variations than to soil moisture variability. Furthermore, we will show that the temporal variation of vegetation water content is related to the phase bias of the cumulative displacement computed with short-interval interferograms. In other words, small variations in vegetation water content between two SAR image acquisitions result in an increase in the interferometric phase in each interferogram, leading to a cumulative bias in the cumulative displacement. Correcting the cumulative displacements for the contribution due to the temporal variation of vegetation water content reduces the spatial correlation between the cumulated displacements and the agricultural fields, making the cumulative displacement map of the study area, which is known to be stable, closer to the expected map having a zero-mean Gaussian distribution.
Acknowledgment: This work was supported in part by Academia Militar, Portugal, under a PhD grant to Nuno Cirne Mira. J. Catalão was funded by the Portuguese Fundação para a Ciência e a Tecnologia (FCT) I.P./MCTES through national funds (PIDDAC) – UIDB/50019/2020 (https://doi.org/10.54499/UIDB/50019/2020), UIDP/50019/2020 (https://doi.org/10.54499/UIDP/50019/2020) and LA/P/0068/2020 (https://doi.org/10.54499/LA/P/0068/2020). G. Nico was funded by the OT4CLIMA Project, funded by the Italian Ministry of Education, University and Research (D.D. 2261 del 6.9.2018, PON R&I 2014–2020). The Sentinel-1A&B SAR images are available from the Copernicus Open Access Hub at https://scihub.copernicus.eu/.
References:
[1] S. Zwieback, S. Hensley and I. Hajnsek, "Soil moisture estimation using differential radar interferometry: Toward separating soil moisture and displacements", IEEE Trans. Geosci. Remote Sens., vol. 55, no. 9, pp. 5069-5083, Sep. 2017.
[2] F. De Zan and G. Gomba, "Vegetation and soil moisture inversion from SAR closure phases: First experiments and results", Remote Sens. Environ., vol. 217, pp. 562-572, 2018.
[3] N. Mira, J. Catalão, G. Nico, "On the mitigation of phase bias in SAR interferometry applications: A new model based on NDWI", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 3850-3859, 2024.
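The closure phase at the heart of these methods is formed from a triplet of multilooked interferograms. A minimal sketch of the computation for a single multilooked pixel (the numeric values are illustrative, not from the study):

```python
import cmath

def closure_phase(ifg12, ifg23, ifg13):
    """Closure phase of an interferogram triplet: arg(I12 * I23 * conj(I13)).

    For a single deterministic scatterer the phases cancel exactly; non-zero
    closures arise only after multilooking over distributed scatterers, e.g.
    when soil moisture or vegetation water content changes between dates.
    """
    return cmath.phase(ifg12 * ifg23 * ifg13.conjugate())

# A single-scatterer triplet is consistent: ifg13 carries the sum of the phases
a, b = cmath.exp(0.4j), cmath.exp(1.1j)
consistent = closure_phase(a, b, a * b)                      # ~0

# Mixing two scatterers with different amplitudes and phase histories inside
# one multilook cell breaks the consistency, even though each scatterer is
# individually consistent
p12 = cmath.exp(0.4j) + 3 * cmath.exp(0.9j)
p23 = cmath.exp(1.1j) + 3 * cmath.exp(0.2j)
p13 = cmath.exp(1.5j) + 3 * cmath.exp(1.1j)
inconsistent = closure_phase(p12, p23, p13)                  # non-zero
```

This is the sense in which "scatterers with independent phase behavior from different populations" produce phase inconsistencies.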

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Development and Evaluation of a Machine Learning-Based Model Integrating Ground-Based and Sentinel-1 Satellite Soil Moisture Data

Authors: Zalán Tobak, Viktória Blanka, György Sipos, Károly Barta, Brigitta Szabó, Boudewijn van Leeuwen
Affiliations: University of Szeged, Bay Zoltán Nonprofit Ltd. for Applied Research, HUN-REN Centre for Agricultural Research, National Laboratory for Water Science and Water Safety
Hydrological extremes, such as droughts and floods, pose significant challenges for agriculture, water and land management. Water deficits in the soil occur before vegetation starts to react to drought conditions; therefore, by tracking soil moisture, it is feasible to predict the need for supplementary water resources or to implement other strategies to avert negative ecological and economic consequences. It is thus crucial to monitor and assess the development of water shortage, where soil moisture (SM) can serve as an effective indicator. However, the ability to track and predict soil moisture in real time is significantly hindered by the limitations and shortcomings of existing monitoring methods. Existing soil data and models can provide partial assistance in addressing these issues, but due to the dynamic nature of soil moisture variations, some form of proxy information on the spatial and temporal changes between measuring stations is necessary. A promising approach for mapping the dynamically changing soil moisture involves the fusion of various data sources. This research aims to create and test a machine learning model that combines ground-based and Sentinel-1 satellite-derived soil moisture data (S1SSM), along with meteorological and relevant environmental parameters, to estimate spatiotemporal variability in soil moisture with a spatial resolution of 100 m in the southern part of the Great Hungarian Plain, a region increasingly facing extreme hydrological events. Data from both ground observations and satellite-based surface soil moisture (SSM) were utilized in the analysis. Hourly in-situ data were collected from a network of 40 ground stations, on average 15 km apart, at 10 cm depth. The SSM1km product by VITO provided relative soil moisture content in percentage for the top 5 cm layer derived from Sentinel-1 satellite data.
In addition to soil moisture data, the analyses included other environmental data that affect soil moisture, such as meteorological parameters from the ground stations (e.g. precipitation, temperature, evapotranspiration), soil characteristics from the HU-SoilHydroGrids database (e.g. saturated water content, field capacity, wilting point, saturated hydraulic conductivity and van Genuchten parameters), and vegetation information extracted from Sentinel-2 imagery. In this research, a statistical analysis of the relation between ground-based soil moisture, the precipitation data measured by the stations, and satellite-based soil moisture (S1SSM) was performed. The study also determines the importance of the input datasets and parameters of the model, and discusses the feasibility and limitations of using Sentinel-1 data for soil moisture monitoring by data fusion. After accounting for data gaps and errors, the database consists of almost 12,000 records of 12 variables at the 40 monitoring station locations for each date during a three-year interval (2021-2023). Four machine-learning approaches for regression were assessed in the research: Deep Neural Network (DNN), Support Vector Machine Regression (SVR), Multiple Linear Regression (MLR), and eXtreme Gradient Boosting (XGBoost) models were trained and applied to derive soil moisture values. The correlation between the observed SM and S1SSM values ranged from weak to moderate (on average 0.33), while the relationship between the precipitation of the preceding periods and the SM and S1SSM values was strongest (on average 0.37 and 0.52, respectively) for the 7-day period. Among the tested machine-learning methods, XGBoost delivered the best outcomes for estimating surface soil moisture (R2 values were 0.59, 0.72, 0.82 and 0.92 for the MLR, SVR, DNN and XGBoost models, respectively).
The analysis indicated that the XGBoost algorithm, when utilizing both ground-based soil moisture and meteorological station data, spatially continuous soil properties, and vegetation data (e.g., NDVI), shows great potential for monitoring surface soil moisture over larger areas. The predicted and actual soil moisture values had a correlation of 0.92. For most ground-based monitoring stations, the difference between the modelled and observed soil moisture values was less than 3%, while more than 5% difference was noted at just five stations. The difference between the observed S1SSM and the modelled values showed a high degree of consistency, with differences below 10% across 45.5% of the area, while larger differences of over 20% occurred in just 27.1% of the region. Our analysis highlighted the potential application of S1SSM data in agriculture for soil moisture monitoring. The developed soil moisture model and the observed S1SSM values are well-aligned in regions with less diverse land use, predominantly agricultural lands (e.g., arable lands, vineyards, grasslands). However, results indicate that estimating surface soil moisture using Sentinel-1 data fusion may be influenced by land use and/or soil types, although the distinct impact of land use and sandy soils warrants further investigation.
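The correlation figures quoted above are plain Pearson coefficients between station and satellite series. For reference, a self-contained sketch of the metric (the toy data are illustrative, not the study's measurements):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equally long series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: ground-station SM vs. a noisy satellite estimate
ground = [0.21, 0.25, 0.30, 0.28, 0.35]
satellite = [0.19, 0.27, 0.29, 0.31, 0.33]
r = pearson(ground, satellite)
```

The same statistic underlies the reported station-level comparisons of observed SM, S1SSM and the model predictions.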

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Assessment of InSAR-based Soil Moisture Retrieval in the ESA DEMETRAS Project

Authors: Francesco De Zan, Paolo Filippucci, Luca Brocca
Affiliations: delta phi remote sensing GmbH, National Research Council, Research Institute for Geo-Hydrological Protection
High-resolution soil moisture is important to improve the modeling of hydrological and meteorological processes, which are fundamental for the prediction of extreme events such as droughts and floods. SAR interferometry (InSAR) has the potential to contribute to high-resolution soil moisture retrieval from SAR along with the well-established methods based on backscatter change. The key observation that enables the retrieval of soil moisture with InSAR is the closure phase, which makes it possible to model simultaneously the behaviour of different layers in the surface radar return. Considering that this is a novel technology, in the DEMETRAS project (ESA) we have conducted an activity to assess the potential of the method by validating or cross-comparing the products with in-situ measurements or other remote sensing and modeling soil moisture products, such as the ones derived from the SMAP radiometer or the GLEAM model. We have processed Sentinel-1 acquisitions to obtain soil moisture index maps over 8 different sites, mostly in the Mediterranean basin. The sites cover a variety of climate types and interferometric coherence levels, ranging from dry and very coherent arid regions like Libya to sub-tropical and alpine climates in northern Italy, characterized by rather low coherence. Over one site, in Eastern Spain, we have produced 3-year-long time series, from both ascending and descending passes, which amount to about 300 data points in time, for a common area of almost 40,000 square kilometers sampled on a 1-km grid. The correlations with SMAP and GLEAM products are generally high, especially in areas of high InSAR coherence, with correlations of 0.8-0.9. All interferometric measurements are limited by the degradation of interferometric coherence, and our case proved no exception. We have observed coverage reduction when processing long time series, or when the scatterers were affected by vegetation changes, snow and other disturbances.
Other than that, we have not encountered any other obvious limitation yet. During the analysis of the data we have made the following observations which should be further investigated: 1) Different processing choices yield time series with different characters, more or less peaky, possibly representing different scatterers (different depth or different vegetation levels). If this can be proven, it might be exploited to focus the attention onto different scatterers depending on the interest. 2) There is a broad complementarity between the performance of InSAR and SAR backscatter products depending on the land cover. This should be better analysed in view of an optimal fusion of the two. Future ESA missions such as Hydroterra+ and ROSE-L are expected to provide more suitable data for the widespread applicability of InSAR techniques for soil moisture retrieval by having higher coherence compared to Sentinel-1, either because of the short revisit (Hydroterra+) or the longer wavelength (ROSE-L). Analysis of campaign data and L-band data from other sensors (SAOCOM, NISAR) will prove crucial in preparing for those missions.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Potential of Deep Learning-based quality control methods for soil moisture time series in an operational data service

Authors: Wolfgang Korres, Tunde Olarinoye, Fay Boehmer, Kasjen Kramer, Stephan Dietrich, Matthias Zink
Affiliations: Federal Institute of Hydrology, International Center for Water Resources and Global Change (ICWRGC)
Soil moisture and its measurements are important in various fields and applications, from agriculture and hydrology to climate modelling, ecology and ecosystem health management. The monitoring of soil moisture gained widespread recognition in the early 2000s as an integral component of hydrological and meteorological observation systems. This momentum was accelerated with the establishment of several soil moisture monitoring networks. The collection of data from different networks with a diversity of sensors and data formats, as well as the harmonization, quality control and archiving of in situ soil moisture data and the free accessibility of these data for end-users, underline the motivation behind the foundation of the International Soil Moisture Network (ISMN) in 2009. All in situ data sourced from the different providers undergo two quality checks: first, a visual inspection of the data (excluding near real-time data) and second, a rule-based automatic quality control procedure before inclusion in the ISMN database, to ensure high-quality, research-ready soil moisture data for end-users. Thirteen different plausibility checks are applied to every single hourly observation, which is then flagged as dubious if one of these checks fails, and otherwise as “good”. These plausibility checks can be categorized into: i) geophysical range verification, detecting the exceedance of certain thresholds (e.g., soil moisture values below 0 Vol.-%); ii) geophysical consistency methods, taking either ancillary in situ data, if available, or NASA’s GLDAS Noah data into account (an example is the flagging of soil moisture where soil temperature is negative); and iii) spectrum-based approaches, using the first and second derivatives of the entire soil moisture time series to detect dubious soil moisture patterns (i.e., spikes, drops, and plateaus). Publications in recent years point to the great potential of Deep Learning (DL) based methods for identifying anomalies in time series data.
In this study, the potential of Long Short-Term Memory (LSTM) and related models for anomaly detection in soil moisture time series is being investigated. To this end, randomly selected time series from the ISMN are manually (visually) quality flagged (labelled). To support this labelling, we developed guidance on how to visually quality-control in situ soil moisture data. The deep learning models are trained with a combination of artificially inserted and real-world anomalies. Different Deep Learning methods are validated against the manually labelled data and compared to the previously implemented and other external flagging methods. The method will be further developed and evaluated for its use in ISMN operations. The incorporation of additional flagging information, especially when enhanced by Deep Learning methods, is anticipated to lead to better usability of soil moisture data, as well as promoting more robust quality control by the ISMN for its users in the future. This is a progress report of ongoing research.
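For comparison with the rule-based, derivative-style checks mentioned above, a toy version of a spike flag could look like the sketch below. The threshold and data are placeholders, not the ISMN's actual parameters:

```python
def flag_spikes(series, threshold):
    """Rule-of-thumb spike flag: a point is dubious if the series jumps up and
    immediately back down (or vice versa) by more than `threshold`.

    Returns a list of booleans (True = flagged), in the spirit of the
    derivative-based checks; endpoints are never flagged.
    """
    flags = [False] * len(series)
    for i in range(1, len(series) - 1):
        d1 = series[i] - series[i - 1]      # first difference before the point
        d2 = series[i + 1] - series[i]      # first difference after the point
        if abs(d1) > threshold and abs(d2) > threshold and d1 * d2 < 0:
            flags[i] = True
    return flags

# A smooth drying curve with one inserted spike at index 3
sm = [0.30, 0.29, 0.28, 0.45, 0.27, 0.26]
print(flag_spikes(sm, 0.1))  # only index 3 is flagged
```

Learned detectors such as LSTMs aim to generalise beyond such hand-tuned rules, e.g. to drops and plateaus that depend on station-specific behaviour.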

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Inter-comparison and merging of active, passive and model-based soil moisture products for drought monitoring over differently instrumented regions.

Authors: Jaime Gaona Garcia, Luca Brocca, Paolo Filippucci, Usman Liaquat, Guido Fioravanti, Davide Bavera, Prof. Wolfgang Wagner, Wouter Dorigo, Sebastian Hahn, Dr. Mariette Vreugdenhil, Silvia Puca, Nicoletta Roberto
Affiliations: CNR IRPI, Joint Research Centre, ARCADIA SIT, TU Wien, Dipartimento Protezione Civile
The monitoring of hydrological extremes faces challenges due to the complexity of processes, technical obstacles, and data limitations, especially in poorly instrumented areas vulnerable to climate change. In many ungauged basins, modelling and remote sensing of soil moisture (SM) can help identify significant hydrological alterations like droughts and floods in the absence of quality data. Advances in models and remote sensing (RS) techniques can improve hazard monitoring but also face technical limitations in extreme conditions. To evaluate the SM products, we compare (1) active RS data from the Advanced Scatterometer (ASCAT) onboard the series of Metop satellites, (2) passive data from the European Space Agency (ESA) Climate Change Initiative (CCI) and (3) the modelled dataset from the Global Drought Observatory (GDO) using the LISFLOOD model. The analysis covers Europe and Africa from 2007-2022 at a 10-day time step and 5 x 5 km resolution. The European Reanalysis of the Atmosphere version 5 with land surface variables (ERA5-L) of the European Centre for Medium-Range Weather Forecasts is used as the reference dataset, given that it is often adopted as a reference in the absence of good observational data. Validation uses data from the uppermost soil layers from the International Soil Moisture Network (ISMN) across Europe and Africa. Performance is also compared across different environmental conditions, such as rough terrain, deep forests, and monsoon areas with bi-seasonal regimes, using climate and land cover classifications. The results show that the ASCAT SM dataset outperforms the others in consistency and continuity. The ESA CCI passive datasets follow, with regional dominance in arid areas. The model-based product offers unmatched continuity and coverage in dense canopy or rough terrain. The merged products generally match or surpass the individual SM products.
Despite the scarcity of observations from the ISMN, RS, modelling, and their combination provide better SM estimates than the ERA5-L product which is often used as the best reference in the absence of observations. Nonetheless, the scarcity of ISMN data further prompts discussion on the implications of data representativity in validation. Overall, the study highlights the value of combining RS and modelled SM products, considering their technical strengths and limitations, and their performance across varied conditions. The various merging techniques offer a broad range of options, effectively combining the strengths of each product. These insights aim to improve individual products and their combination, enhancing the accuracy, consistency, and continuity of data for operational monitoring of soil moisture and related hydrological hazards.
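As one simple illustration of product merging (not necessarily the scheme used in the study), independent estimates can be combined with weights inversely proportional to their assumed error variances, so that the more reliable product dominates:

```python
def merge_products(estimates, error_variances):
    """Least-squares merge of independent soil moisture estimates.

    Each estimate is weighted by the inverse of its assumed error variance;
    with equal variances this reduces to the plain mean.
    """
    weights = [1.0 / v for v in error_variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Active, passive and model-based estimates with assumed error variances
merged = merge_products([0.20, 0.26, 0.23], [0.0004, 0.0009, 0.0004])
```

In practice the relative weights would vary in space and time, e.g. down-weighting active RS over dense canopy and passive RS over rough terrain, reflecting the complementarities noted above.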

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: High-resolution soil moisture mapping in boreal forests using SMAP data and downscaling techniques

Authors: Emmihenna Jääskeläinen, Professor Miska Luoto, Doctoral Researcher Pauli Putkiranta, Dr. Mika Aurela, Dr. Tarmo Virtanen
Affiliations: Finnish Meteorological Institute, University of Helsinki
In boreal forest ecosystems, soil moisture is a crucial parameter for predicting different phenomena, such as tree growth or forest fire risk, both of which influence the carbon storage capacity of boreal forests. However, current satellite-based soil moisture products for vegetated areas often have high temporal resolution at the expense of spatial resolution. To capture finer spatial variations in soil moisture, higher resolution data are needed. Therefore, we developed a satellite-data-based model to estimate daily soil moisture over boreal forested areas at high resolution (1 km and 250 m), covering the period from May to October each year, which corresponds to the mostly snow-free period in the northern boreal forest. We use the enhanced 9 km resolution soil moisture data from the Soil Moisture Active Passive (SMAP) mission as the basis of the model. The chosen model-building method is a tree-based machine-learning algorithm called gradient boosting. Additional soil properties (such as bulk density and silt content), vegetation properties (normalized difference vegetation index and enhanced vegetation index), reanalysis data (precipitation, solar radiation, and temperature), and in situ soil moisture observations from northern Finland are used to guide the model construction process. The produced data maps have spatial resolutions of 1 km and 250 m and the same temporal resolution as SMAP. The analysis of the developed model shows that the model retains the temporal and large-scale spatial variability of SMAP soil moisture. In addition, when compared to an independent data set (observed in situ soil moisture data from Alaska, USA), the soil moisture values predicted by the developed model have better agreement with in situ values than SMAP soil moisture, as RMSE decreases from 0.097 m³/m³ to 0.065 m³/m³, and correlation increases from 0.30 to 0.52 over forest sites.
This constructed model can therefore be used to predict high-resolution soil moisture over boreal forested areas.

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Improvement of surface soil moisture modeling from Sentinel-1 SAR data using polarimetric decompositions.

Authors: Dariusz Ziółkowski, Professor Katarzyna Dąbrowska-Zielińska, MSc Konrad Wróblewski, Karol Paradowski, PhD Szymon Jakubiak, Magdalena Łągiewska, Maciej Jurzyk
Affiliations: Institute of Geodesy and Cartography
Soil moisture is a very important parameter in agriculture and food security, as well as in hydrological and climate change modelling. Retrieving it from SAR data is challenging due to the complex nature of the interaction between microwaves and the imaged terrain, with soil roughness and vegetation cover also influencing the backscatter coefficient. The aim of the presented research is to show the possibilities of improving the accuracy of soil moisture modeling from Sentinel-1 SAR data using products of polarimetric signal decomposition. Field measurements were carried out in different parts of Poland in five research areas from March to May and from August to October 2024. Three of them were located in central Poland near Warsaw, one in the Wielkopolskie Voivodeship (Polish JECAM site) and one in the Opolskie Voivodeship in southwestern Poland in the Kędzierzyn-Koźle area, where a very large flood occurred in September 2024. The research was carried out on the basis of over 750 measurement points located in 90 bare soil fields using the TRIME probe. All field measurements were carried out at the time of the Sentinel-1 overpass of the given study area. Each measurement was averaged from five independent measurements located within a 10 x 10 m square representing a Sentinel-1 pixel. The research areas were selected to obtain the greatest possible diversity of soil types, characterized by very different water retention capacities. Diverse soil conditions, as well as the specific weather conditions in Poland in 2024, from drought to floods (Opolskie test site), resulted in very large differences in the soil moisture of agricultural areas, from 5 to 60%. The measurements were carried out on soils at very different stages of field work, which resulted in very diverse soil surface roughness, from very smooth up to a dozen centimeters of height difference. Sentinel-1 radar data were processed in ESA SNAP software.
Both GRDH and SLC data were used for processing. GRDH data were processed with a standard workflow including, among other steps, multi-temporal filtering using a Lee Sigma 11x11 filter and radiometric and geometric correction to the backscatter coefficient gamma0. The local incidence angle varied from 28 to 47° due to the location of individual research areas in different parts of the radar scenes and local terrain variability. Independently, SLC images were used to perform polarimetric signal decompositions: dual-pol H-Alpha decomposition and model-based decomposition. The analysis of the backscatter coefficient gamma0 in relation to the soil moisture field measurements showed a clearly linear correlation, however with relatively low accuracy, with R2 equal to 0.43 and 0.38 for VV and VH polarisations respectively. After selecting only the points with relatively smooth soil, the correlations increased to 0.61 and 0.57 respectively. It was also clearly observed that soil surfaces of similar moisture but higher roughness generally have a higher backscatter coefficient than relatively smooth surfaces. A modified water cloud model was used to improve the accuracy of soil moisture determination, but this did not result in a significant improvement, because the various combinations of VH and VV polarizations were not sufficient to explain the differences in backscattering variability associated with increasing soil roughness. In the next step, polarimetric decompositions were used to provide information on the different scattering mechanisms and thus link soil roughness with the radar signal. For modeling purposes, 70% of the points were used to build the model, while 30% were left for validation. In the first step, using field measurements, the linear relationship between soil moisture and the gamma0 backscatter coefficient for smooth soils was determined. 
It was assumed that an increase in soil roughness is always associated with an increase in VH and VV for a given soil moisture. In the next step, the difference between the observed backscatter coefficient and the value modeled for smooth soil at a given soil moisture was calculated for both VH and VV polarisations. These differences were interpreted as an increase in VH and VV due to roughness, and relationships were sought between them and the individual scattering mechanisms from the individual polarimetric channels. These relationships were then used to build a model that takes into account both soil moisture and roughness. Validation on the remaining 30% of field points showed that, using polarimetric information, it is possible to obtain a significant increase in the accuracy of soil moisture modeling from Sentinel-1 radar data relative to information derived from backscattering alone, in particular for fields with very rough soil surfaces. The approach is also valuable for detecting soil moisture changes beyond the directly flooded area and for monitoring how conditions dry out.
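The two-step scheme described in this abstract can be sketched as follows; everything below is synthetic and illustrative (the linear coefficients, the noise level, and the single stand-in "roughness feature" are invented, not the study's measurements or decomposition products):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: soil moisture [%] and gamma0 [dB]
# for smooth-soil reference fields (assumed linear sensitivity).
sm_smooth = rng.uniform(5, 60, 200)
a_true, b_true = 0.25, -22.0
g0_smooth = a_true * sm_smooth + b_true + rng.normal(0, 0.3, 200)

# Step 1: fit the linear smooth-soil model gamma0 = a*SM + b.
A = np.vstack([sm_smooth, np.ones_like(sm_smooth)]).T
(a_fit, b_fit), *_ = np.linalg.lstsq(A, g0_smooth, rcond=None)

# Step 2: for rough fields, the residual between observed gamma0 and the
# smooth-soil prediction is attributed to roughness and regressed against
# a polarimetric-decomposition feature (here a single synthetic one).
sm_rough = rng.uniform(5, 60, 100)
rough_feat = rng.uniform(0, 1, 100)       # e.g. a scattering-mechanism power
g0_rough = a_true * sm_rough + b_true + 3.0 * rough_feat
delta = g0_rough - (a_fit * sm_rough + b_fit)
c_fit = np.polyfit(rough_feat, delta, 1)[0]

print(round(a_fit, 2), round(c_fit, 1))
```

The same residual-based idea extends to several decomposition features per polarisation by replacing the final one-dimensional fit with a multivariate regression.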
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Evaluating Remote Sensing Products for Soil Moisture Retrieval From Satellite Data

Authors: Gala Tomás-Portalés, Lluís Pérez-Planells, Enric Valor, Raquel Niclòs, Jesús Puchades
Affiliations: Thermal Remote Sensing Group, Department of Earth Physics and Thermodynamics, Faculty of Physics, University of Valencia (UV), Institute of Meteorology and Climate Research, Karlsruhe Institute of Technology (KIT)
Soil Moisture (SM) is an Essential Climate Variable (ECV) as designated by the Global Climate Observing System (GCOS) and the Climate Change Initiative (CCI) of the European Space Agency (ESA). It serves as a crucial hydrological parameter that bridges atmospheric and Earth's surface processes. Understanding the status and dynamics of SM is vital for a wide range of meteorological, climatological, and hydrological applications, while also improving insights into the water, energy, and carbon cycles. Furthermore, it plays a pivotal role in forecasting extreme climatic events, underscoring the need for detailed global monitoring at reasonable temporal and spatial resolutions. This study focuses on validating satellite-derived SM products against in situ measurements to evaluate the accuracy and reliability of near-surface SM estimates from various spaceborne sensors. The analysis was conducted over a 7-year period (January 2015–December 2021) in the northeast of Spain and the south of France. The ISMN (International Soil Moisture Network) provided field data from 30 stations distributed across 4 networks (COSMOS, FR-Aqui, IPE, and SMOSMANIA). Four microwave-based missions, both passive and active, were assessed: ASCAT (Advanced Scatterometer), SMOS (Soil Moisture and Ocean Salinity), SMAP (Soil Moisture Active Passive), and CCI. A thorough validation procedure was applied, which primarily involved various statistical metrics, scatter plot analysis, and linear regression of the respective time series. SMAP emerged as the most accurate and consistent sensor, showing an intercept near zero, a slope close to unity, a correlation coefficient of R = 0.72, and a Root Mean Square Error (RMSE) of 0.07 m³/m³. CCI also performed well, whereas ASCAT and SMOS demonstrated higher uncertainties and weaker correlations. 
Depth variations of the in situ measurements were also analyzed using SMAP data, with integrated depths of 0–6 cm and point-specific depths of 5 cm yielding the most reliable results. Despite significant advancements in satellite-based SM monitoring, this study highlights the need for further research to enhance field-level accuracy, emphasizing that current efforts, while promising, still face challenges in achieving complete alignment with ground observations. This study was carried out within the framework of the Tool4Extreme PID2020-118797RBI00 project, funded by MCIN/AEI/10.13039/501100011033, and also the PROMETEO/2021/016 project, funded by Conselleria d’Educació, Universitats i Ocupació de la Generalitat Valenciana.
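As an illustration of the validation metrics reported above (R, RMSE, bias, and the slope and intercept of a linear fit), here is a minimal sketch with synthetic paired series standing in for the satellite and in situ data:

```python
import numpy as np

# Hypothetical paired time series: in situ vs. satellite SM [m3/m3].
# Values are synthetic placeholders, not ISMN or SMAP data.
rng = np.random.default_rng(1)
insitu = rng.uniform(0.05, 0.40, 500)
sat = 0.95 * insitu + 0.01 + rng.normal(0, 0.05, 500)

rmse = np.sqrt(np.mean((sat - insitu) ** 2))   # Root Mean Square Error
bias = np.mean(sat - insitu)                   # mean difference
r = np.corrcoef(insitu, sat)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(insitu, sat, 1)  # linear regression

print(f"R={r:.2f} RMSE={rmse:.3f} bias={bias:.3f} slope={slope:.2f}")
```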
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Spatial and Temporal Analysis of a Novel Data Assimilation Approach for High-Resolution Soil Moisture Estimates Using Sentinel-1 and Sentinel-2 Data

Authors: Thomas Weiß, Dr. Swen Meyer, Prof. Dr. Philip Marzahn
Affiliations: Fraunhofer Institute for Computer Graphics Research, Faculty of Agricultural and Environmental Sciences, University of Rostock
Soil moisture is crucial for various applications in fields such as hydrology and smart farming, particularly when monitored with high spatial and temporal resolution. We introduce a novel data assimilation method that leverages freely available remote sensing data to generate temporal and spatial soil moisture estimates over agricultural fields. This approach utilizes microwave satellite remote sensing data (Sentinel-1) and radiative transfer models to estimate soil moisture. Retrieving accurate soil moisture values becomes particularly challenging during periods of high vegetation cover, necessitating additional information on vegetation status to effectively parameterize the vegetation component of the radiative transfer model. For this purpose, optical remote sensing data (Sentinel-2) is employed, enhancing the robustness of the moisture estimation process under high vegetation cover. Moreover, addressing the equifinality problem inherent in soil moisture estimation from radiative transfer and remote sensing data requires a reliable initial soil moisture state to optimize the data assimilation process. To establish this, we use a medium-resolution (1 km x 1 km) RADOLAN-based soil moisture product as prior information, which is available for our test sites in Germany. The validation of our soil moisture retrieval method utilized extensive in-situ data from a test site in Brandenburg, Germany, with numerous measurements per field taken on various days during the vegetation periods from 2018 to 2023 to assess spatial accuracy. Unfortunately, dense time series data for assessing the temporal evolution was unavailable for the Brandenburg site. Therefore, to further validate the temporal accuracy of our method, we employed permanently installed soil moisture probes in several fields across northern Germany during the vegetation periods of 2023 and 2024. 
This study aims to provide another step towards operationally usable, high spatio-temporal resolution soil moisture estimates for smart farming purposes, potentially enabling better-informed decision-making in related industries. Future work will focus on refining these methodologies and expanding their application to additional agricultural regions to enhance global agricultural water management strategies.
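The role of the prior state in such an assimilation scheme can be illustrated with the simplest possible scalar, Kalman-style update; this is only a conceptual stand-in for the authors' actual method, and all numbers are invented:

```python
# Scalar Bayesian update: combine a prior SM estimate (e.g. from a coarse
# product) with a retrieval-based observation, weighting by uncertainty.
prior, prior_var = 0.25, 0.004   # prior SM [m3/m3] and its error variance
obs, obs_var = 0.32, 0.002       # observation and its error variance

gain = prior_var / (prior_var + obs_var)   # Kalman gain
analysis = prior + gain * (obs - prior)    # updated SM estimate
analysis_var = (1 - gain) * prior_var      # reduced uncertainty

print(round(analysis, 3), round(analysis_var, 5))
```

The analysis always lands between prior and observation, and its variance is smaller than either input, which is why a reliable prior helps constrain the equifinality problem mentioned above.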
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Quality Assurance for Soil Moisture (QA4SM): A Platform for Validating Satellite Soil Moisture Data Against Fiducial Reference Measurements

Authors: Daniel Aberer, Nicolas Bader, Wolfgang Preimesberger, Monika Tercjak, Alexander Boresch, Johanna Lems, Peter Balogh, François Gibon, Dr Arnaud Mialon, Philippe Richaume, Wouter Dorigo, Raffaele Crapolicchio, Alexander Gruber
Affiliations: TU Wien, Department of Geodesy and Geoinformation, Angewandte Wissenschaft Software und Technologie GmbH (AWST), CESBIO, Université de Toulouse, CNES/CNRS/INRAe/IRD/UPS, European Space Agency (ESA) - ESRIN
The Quality Assurance for Soil Moisture (https://qa4sm.eu) service provides a centralised, cloud-based platform for soil moisture data validation. Accessible through a user-friendly graphical web interface, QA4SM caters to both producers and users of satellite soil moisture data. It enables the assessment of quality requirements for satellite products, such as those defined by the Global Climate Observing System (GCOS), and facilitates the validation and comparison of satellite data against in situ reference measurements and land surface model data. QA4SM delivers reproducible validation results based on a consistent methodology and community-agreed best practices. Numerous well-known data products are readily available and periodically updated, including satellite products from SMOS, SMAP, ASCAT, and Sentinel-1 missions. Data from the Copernicus Climate Change Services (C3S), the ESA Climate Change Initiative (CCI), and reanalysis models like NASA’s GLDAS-Noah and ECMWF’s ERA5 and ERA5-Land are also provided. In situ measurements are the preferred reference data source for most validation studies. In QA4SM, users have access to data from the International Soil Moisture Network (ISMN). As part of ESA’s FRM4SM project, a subset of representative measurements has been identified, which we recommend for soil moisture validation at coarse (radiometer) scale. Additionally, the platform supports the upload and validation of custom datasets in various formats, while ensuring that data is securely stored and exclusively accessible to the owner. The platform offers a range of processing tools, including dataset filtering based on flags or versions, spatial and temporal scaling options, selection of spatial and temporal subsets, temporal matching methods, and calculations of various metrics and anomalies for up to six datasets simultaneously. It also supports the publication of results, including the generation of digital object identifiers (DOIs). 
The latest QA4SM release (version 3) introduces enhanced functionality for analyzing soil moisture datasets. This includes the ability to assess intra-annual variations in skill metrics through seasonal and monthly analyses, as well as the computation of stability metrics on an annual scale. The platform now includes automatic updates for SMOS L2, ERA5, ERA5-Land, and C3S soil moisture data. These enhancements expand QA4SM's capabilities, enabling users to investigate temporal dynamics more thoroughly and offering new ways to monitor operational satellite datasets.
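One of the processing options mentioned above, anomaly computation, can be illustrated with a simple moving-average climatology. This is a common choice for satellite SM series, though not necessarily the exact method the platform implements, and the series below is synthetic:

```python
import numpy as np

# Synthetic two-year daily SM series: seasonal cycle plus noise.
rng = np.random.default_rng(4)
t = np.arange(730)
sm = 0.2 + 0.1 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.02, 730)

# Anomaly = observation minus a 35-day centred moving average.
window = 35
kernel = np.ones(window) / window
clim = np.convolve(sm, kernel, mode="same")
anom = (sm - clim)[window:-window]   # trim the convolution edge effects

print(round(float(anom.std()), 3))
```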
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Machine-Learning-Based Observation Operators For Land-Surface Data Assimilation

Authors: Peter Weston, Patricia de Rosnay
Affiliations: ECMWF
Generally, current land-surface assimilation systems at numerical weather prediction (NWP) centres rely on the assimilation of in-situ data and derived products from satellite instruments such as Advanced Scatterometer (ASCAT) soil moisture, Soil Moisture and Ocean Salinity mission (SMOS) soil moisture, and Interactive Multisensor Snow and Ice Mapping System (IMS) snow cover. The aim in the future is to move towards the assimilation of level 1 observations, e.g. ASCAT backscatter or microwave radiances, over all surfaces, including land and snow. Until now, direct exploitation of level 1 observations in land-surface data assimilation has been held back by the lack of accurate observation operators, due to the heterogeneity of land surfaces and representativeness errors. In atmospheric assimilation, satellite radiances are assimilated directly and provide a large positive impact on forecast skill. However, surface-sensitive satellite observations in both the microwave and infrared are often screened out over non-ocean surfaces due to larger uncertainties in the surface contribution to the radiative transfer simulations. This means the full information content of the satellite observations is not being exploited, particularly over land and snow-covered areas. Land, atmospheric and coupled assimilation systems would all benefit from better exploitation of level 1 observations, in particular for improved analyses of variables such as soil moisture. This work, conducted in the context of the CERISE (Copernicus Climate Change Service Evolution) Horizon Europe project, aims to address this by developing machine-learning observation operators to accurately simulate level 1 observations, allowing for their assimilation into a land-atmosphere coupled data assimilation system. 
This can be applied to current low-frequency microwave instruments such as the Advanced Microwave Scanning Radiometer 2 (AMSR2), SMOS and Soil Moisture Active Passive (SMAP), and is also highly relevant for future missions such as the Copernicus Imaging Microwave Radiometer (CIMR). The development of machine-learning observation operators relies on a large training database of collocated data from both an NWP model and the observations of interest. Here, a combination of data from the ERA5 reanalysis and the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS) is used, along with low-frequency passive microwave observations from the AMSR2, SMOS and SMAP instruments. An information content analysis of the most important features in the database will be presented. The machine-learning algorithms and training protocols used will be described. Finally, initial results from data assimilation experiments using the machine-learning-based observation operators in the ECMWF IFS will be presented, and potential benefits for NWP and future generations of global reanalysis systems will be discussed.
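A toy sketch of the training step, substituting a plain least-squares fit for the machine-learning model and an invented linear forward relation for the real radiative transfer (the variables, coefficients, and noise levels are all assumptions for illustration, not ERA5/AMSR2 data):

```python
import numpy as np

# Toy collocation database of model fields and "observed" brightness
# temperature Tb [K]; the forward relation below is invented: wetter
# soil lowers emissivity and hence Tb, vegetation raises it.
rng = np.random.default_rng(2)
n = 1000
sm = rng.uniform(0.05, 0.45, n)    # model soil moisture [m3/m3]
tsk = rng.uniform(260, 310, n)     # model skin temperature [K]
lai = rng.uniform(0.0, 5.0, n)     # leaf area index
tb = 0.9 * tsk - 120.0 * sm + 4.0 * lai + rng.normal(0, 2.0, n)

# Fit the "observation operator" (here linear; in practice a neural
# network or gradient-boosted model would be trained instead).
X = np.column_stack([sm, tsk, lai, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, tb, rcond=None)

tb_hat = X @ coef
rmse = np.sqrt(np.mean((tb_hat - tb) ** 2))
print(f"training RMSE = {rmse:.1f} K")
```

The fitted operator maps model state variables to simulated observations, which is exactly the role it plays inside the assimilation loop; the feature importance analysis mentioned in the abstract corresponds to ranking inputs like these by their contribution.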
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone J-K)

Poster: Assessing the Contribution of Soil Moisture Memory in Drought Predictability Across Europe

Authors: David Civantos, Dr Jesús Peña-Izquierdo, Lluís Palma, Laia Romero
Affiliations: Lobelia Earth, Barcelona Supercomputing Center
Predicting drought events remains a significant challenge, particularly in Europe, where the seasonal predictability of essential atmospheric variables such as anomalies in mean temperature or precipitation is notably weak. Moreover, although the onset and termination of droughts are typically driven by large-scale atmospheric dynamics, they can also be significantly amplified by complex local interactions between land conditions, mostly soil moisture, and the lower atmosphere. Despite these challenges, the large inertia of satellite-observed soil moisture deserves special attention as a source of potential drought predictability signals in specific regions and periods of the year. In this study, we quantify the linear contributions of both SM initial conditions and the actual atmospheric forcing of the corresponding season, represented by the SPEI (Standardized Precipitation Evapotranspiration Index), to the predictability of SM at seasonal time scales across Europe. Monthly analyses are conducted using SM data from GLEAM and ESA's Climate Change Initiative (CCI), ensuring a medium-resolution (25 km) satellite-based assessment. A linear model with these two contributions alone succeeds in explaining most of the observed variability, revealing broad regions and seasons where the role of SM inertia outweighs the contribution of the actual precipitation and evapotranspiration conditions during the target season. In particular, SM inertia plays a more dominant role than atmospheric forcing in driving drought variability in the Iberian Peninsula in summer, in Eastern Europe in fall and winter, and in central Europe in late summer/autumn. This dominance persists for up to several months, underscoring the importance of soil moisture inertia in drought dynamics. 
Moreover, discrepancies (large residuals) between observed conditions and model predictions suggest the influence of nonlinear processes, such as land-atmosphere feedbacks, which merit further investigation. These findings emphasize the critical value of precise soil moisture estimates. Even considering the low atmospheric predictability at the seasonal scale in Europe, distinct windows of predictability emerge. Such insights can be directly exploited to enhance applications in agriculture, as well as in the prevention and adaptation strategies for extreme events intimately linked to droughts like heatwaves and wildfires. The study highlights the potential of satellite-based soil moisture memory as a cornerstone for developing effective drought management and resilience frameworks.
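The linear attribution described above can be sketched with synthetic anomalies; the coefficients are chosen arbitrarily to mimic a memory-dominated region and are not the study's results:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
sm_init = rng.normal(0, 1, n)   # SM anomaly at the start of the season
spei = rng.normal(0, 1, n)      # atmospheric forcing of the target season
# Assumed "memory-dominated" relation: persistence outweighs forcing.
sm_target = 0.7 * sm_init + 0.3 * spei + rng.normal(0, 0.3, n)

# Fit SM(target) ~ a*SM(init) + b*SPEI + c and compare contributions.
X = np.column_stack([sm_init, spei, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, sm_target, rcond=None)
resid = sm_target - X @ coef
r2 = 1 - resid.var() / sm_target.var()          # explained variance
memory_dominant = abs(coef[0]) > abs(coef[1])   # inertia vs. forcing

print(round(r2, 2), bool(memory_dominant))
```

Large residuals of such a fit are precisely where the nonlinear land-atmosphere feedbacks mentioned above would show up.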
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: F.05.03 - POSTER - Tracking conflict impacts: Earth Observation for socio-economic resilience and global policy support

In the context of increasing global conflicts, the ability to monitor and assess their socio-economic and environmental impacts has never been more critical. This session will explore how Earth Observation (EO) technologies are being utilized to provide essential data that supports governments and international institutions in developing policies and strategies for resilience and recovery.
It will examine the practical benefits of using EO data in conflict zones, including damage assessment, aiding economic recovery, restoring the environment, and supporting humanitarian efforts, thereby enabling evidence-based decision-making across sectors such as energy, climate, and sustainability. Participants will discuss how EO-derived information is applied to reduce conflict impacts, assist in post-conflict reconstruction, conduct effective damage assessment, and encourage sustainable development.
By deepening the understanding of the socio-economic benefits of EO, this session will support European and international institutions in their efforts to enhance resilience and promote sustainable development in conflict-affected areas.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Accurate Field Delineation with High-Resolution Earth Observation data and Deep Learning

Authors: Mykola Lavreniuk, Nataliia Kussul, Andrii Shelestov, Bohdan Yailymov, Yevhenii Salii, Volodymyr Kuzin, Zoltan Szantoi
Affiliations: ESA, Space Research Institute NASU and SSAU, University of Maryland, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”
Accurate delineation of agricultural field boundaries is a cornerstone for effective land management, agricultural monitoring, and socio-economic recovery in conflict-affected regions. Earth Observation (EO) technologies offer invaluable support in such contexts by providing detailed spatial data that informs post-conflict recovery strategies and fosters long-term resilience. This work focuses on advancing automatic field boundary detection using cutting-edge deep learning (DL) models and high-resolution EO imagery. The socio-economic benefits of precise field boundary mapping are extensive, especially in regions impacted by conflict. They enable better agricultural planning, enhance food security, and support equitable land ownership practices. Additionally, they provide critical insights for post-conflict economic recovery and sustainable land use. However, current methodologies and datasets face several limitations that restrict their effectiveness in such demanding scenarios. A prominent challenge in this domain is the quality and availability of training datasets. Existing datasets are typically small, error-prone, or limited to medium spatial resolution imagery such as Sentinel-2. To overcome these issues, we integrated and enhanced multiple data sources. We refined and cleaned existing datasets to remove inconsistencies and leveraged available LPIS (Land Parcel Identification System) data where accessible. For regions lacking LPIS data, such as Ukraine, manual labeling was conducted using commercial EO imagery. The developed dataset includes data from diverse sources, such as Sentinel-2, Planet, Pleiades, Maxar, and orthoimagery, with resolutions ranging from 10 meters up to 0.3 meters. This effort produced a high-quality dataset that spans multiple countries and scales, significantly larger and more reliable than those previously available. 
The use of high-resolution commercial EO imagery further enhances the delineation methodology developed in this work. Incorporating data from various satellites and resolutions enables field boundary detection at a level of detail that supports practical applications in diverse geographic and socio-economic contexts. This is particularly important in post-conflict regions, where precise data is critical for rebuilding resilience and promoting sustainable development. Many existing approaches rely on U-Net architectures, which, while widely used, are often suboptimal for the intricate patterns seen in high-resolution EO data in agricultural settings. To address this, our research employs advanced DL models that are more efficient and capable of capturing complex spatial details. These improvements lead to more accurate delineation results, even in challenging environments. By addressing limitations in current methods and datasets, this research advances the state of EO applications for field boundary delineation. The resulting insights contribute directly to socio-economic recovery efforts, offering tools to support land use planning, reduce agricultural vulnerability, and promote equitable development. This work demonstrates how technological innovation in EO can translate into meaningful, real-world benefits, particularly in regions facing the long-term impacts of conflict.
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Fields Ablaze: Remote Sensing Analysis of Burned Agricultural Land Along Ukraine's Frontline

Authors: Leonid Shumilo, Sergii Skakun, Dr. Nataliia Kussul, Prof. Andrii Shelestov, Sofiia Drozd, Oleksandr Parkhomchuk, Dr. Bohdan Yailymov, Dr. Hanna
Affiliations: University Of Maryland
The ongoing Russian aggression in Ukraine represents a significant disruption to global food security, with far-reaching consequences. Ukraine, the largest country in Europe, is a critical agricultural hub, with approximately 70% of its territory dedicated to farming. This vast expanse of fertile land supports food production for an estimated 400 million people globally, as highlighted by the U.S. International Trade Administration [1]. However, the war has severely impacted this vital sector. The conflict, concentrated primarily in the eastern and southern regions of Ukraine, intersects with the steppe agro-ecological zone, an area of intensive agricultural activity [2]. The hostilities have inflicted substantial damage on the local environment, rendering vast tracts of fertile farmland unusable both during the war and potentially in its aftermath. Consequently, evaluating the war’s impact on agricultural land is critical for future recovery and restoration efforts. To address this urgent need, we conducted a detailed analysis of agricultural fields affected by fires within a 90-kilometer buffer zone around the frontline between 2022 and 2024, taking into account annual shifts in the conflict demarcation line. As a baseline for our research, we used neural-network-delineated field boundaries and generated time series from Sentinel-2 satellite imagery. For each field, we calculated key vegetation and burn indices: the Normalized Difference Vegetation Index (NDVI), the Normalized Burn Ratio (NBR), and the Burned Area Index (BAI), summarized through maximum, minimum, average, and standard deviation values. To overcome gaps caused by cloud coverage, we interpolated these time series using a cubic polynomial function. This dataset served as input for a random forest model trained on polygons of active fires identified through the shortwave infrared (SWIR) band of Sentinel-2. 
Our methodology achieved high accuracy, with an F1 score of 0.81, enabling us to generate a comprehensive map of burned agricultural fields over three years of combat. Integrating these data with crop type maps and conducting stratified random sampling of reference data allowed us to quantify agricultural losses, offering valuable insights into the war's impact on food production. This analysis provides a critical foundation for understanding the environmental and economic toll of the war on Ukraine’s agricultural sector. The results can inform strategies for post-war restoration, ensuring that affected lands are rehabilitated to support global food security and the livelihoods of millions dependent on Ukrainian agriculture. References: 1. U.S. International Trade Administration. Ukraine Country Commercial Guide. https://www.trade.gov/ukraine-country-commercial-guide 2. Kussul, Nataliia, et al. "Assessing damage to agricultural fields from military actions in Ukraine: An integrated approach using statistical indicators and machine learning." International Journal of Applied Earth Observation and Geoinformation 125 (2023): 103562. Acknowledgement: This work was supported by the NASA Land-Cover/Land-Use Change (LCLUC) Program [grant number: 80NSSC21K0314] and NASA FINESST [grant number: 8NSSC22K1542] programs.
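The gap-filling and feature-extraction steps described in the abstract might look like the following for a single field; the NDVI curve and the cloud gaps below are synthetic, and the cubic fit stands in for the interpolation the authors describe:

```python
import numpy as np

# Hypothetical cloud-gapped NDVI time series for one field.
doy = np.arange(90, 300, 10, dtype=float)            # acquisition days of year
ndvi = 0.2 + 0.6 * np.exp(-((doy - 190) / 60.0) ** 2)
obs = ndvi.copy()
obs[[3, 4, 10, 15]] = np.nan                         # cloudy acquisitions

# Gap-filling with a cubic polynomial fitted to the clear observations.
valid = ~np.isnan(obs)
coeffs = np.polyfit(doy[valid], obs[valid], 3)
filled = np.where(valid, obs, np.polyval(coeffs, doy))

# Per-field summary features for the random-forest classifier:
features = [filled.max(), filled.min(), filled.mean(), filled.std()]
print([round(float(f), 2) for f in features])
```

The same four statistics would be computed per index (NDVI, NBR, BAI) and stacked into the feature vector for each field.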
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Implementation of Land Parcel Identification System Pilot in Ukraine

Authors: Andrii Shelestov, Guido Lemoine, Natasa Luketic, Bohdan Yailymov, Hanna Yailymova, Liudmyla Kolos, Liudmyla Pidgorodetska, Yevhenii Salii, Volodymyr Kuzin, Sofia Drozd, Artem Shelestov, Olexandr Parkhomchuk, Yelyzaveta Volkova, Klaus Deininger
Affiliations: National Technical University of Ukraine "KPI", Joint Research Center of European Commission, World Bank, Space Research Institute NAS and SSA of Ukraine, Georgia Institute of Technology
The implementation of the Land Parcel Identification System (LPIS) in Ukraine represents a crucial milestone on the country's path to the European Union. This became even more significant following the European Council's December 2023 decision to open accession negotiations with Ukraine. The necessity for such a system stems from Ukraine's extensive agricultural area, which exceeds the combined agricultural areas of France, Germany, and Poland, making the Common Agricultural Policy (CAP), and the LPIS as part of it, an essential component of implementing EU practices in the country. The LPIS pilot project, initiated within the framework of land reform policy cooperation between the EU and Ukraine since 2015, aims to establish a robust digital infrastructure for agricultural land management. This collaboration, implemented through the World Bank Group in partnership with the Ministry of Agrarian Policy and Food of Ukraine, focuses on creating both a digital farm registry and a comprehensive geospatial information infrastructure. The significance of this project has been further amplified following the start of the Russian war of aggression in February 2022, which has led to increased international support for agricultural producers and affected rural communities. The LPIS pilot started in March 2024 as part of the World Bank's "Supporting Transparent Land Governance in Ukraine" project. The implementation is based on fourteen selected zones across different oblasts, each covering 100 km2. The project utilized VHR Maxar ortho-imagery for delineation, with a spatial resolution between 0.3 and 0.5 m, from the early spring periods of 2022 and 2023. The technical framework consists of a comprehensive classification system differentiating between arable land, personal farming, permanent crops, permanent grassland, and agroforestry [1]. This system is supported by advanced attribute tracking, including reference parcel identification, maximum eligible area calculations, and detailed production validation protocols [2, 3]. 
The 2024 in-situ data collection campaign for field validation covered 4,600 km, documenting approximately 7,500 fields across 16 blocks. This extensive ground survey revealed important insights about land use patterns and agricultural practices across Ukraine's diverse regions. Quality assessment through comparison with the State Service of Ukraine for Geodesy, Cartography and Cadastre (StateGeoCadastre) identified several critical areas for attention, including unregistered territories and boundary discrepancies. However, the project achieved high accuracy in crop classification using the authors' method [4-6], with an overall accuracy of 94.5%, particularly for the identification of major crops such as wheat, sunflower, and maize. These results validate the robustness of the technical approach and provide a solid foundation for scaling the system nationally. Challenges emerged in the analysis of personal farming areas, which often feature complex land use patterns and smaller parcel sizes. The project developed specialized approaches for these areas, including enhanced boundary detection algorithms and specific validation protocols. This attention to detail has proven crucial for maintaining accuracy across diverse agricultural contexts, from large industrial fields to small-scale personal farming plots. The technical infrastructure of the LPIS pilot leverages the Copernicus Data Space Ecosystem (CDSE), incorporating several software components. At its core, a PostgreSQL server with the PostGIS extension handles spatial feature storage and enables advanced spatial queries. This database infrastructure manages both intermediate and final LPIS delineated features, along with sample StateGeoCadastre features and Sinergise parcel delineation data for 2023 and 2024. The system uses S3 buckets as cloud storage for managing the VHR Maxar imagery and crop maps, all converted to Cloud Optimized GeoTIFF format for optimal performance. 
Virtual machines facilitate batch processing tasks, particularly crucial for handling the full Sentinel data archive through mounted /eodata paths within the CREODIAS cloud infrastructure, enabling efficient time series extraction and sophisticated analysis tasks. The system can process both Sentinel-1 radar data and Sentinel-2 optical imagery, enabling comprehensive monitoring regardless of weather conditions. This dual-sensor approach has proven particularly valuable for maintaining continuous monitoring capabilities throughout the growing season. For visualization and analysis tasks, JupyterLab with DataCube-like capabilities and specific geospatial modules was used. This software supports high-level functional components based on the rasterio and geopandas packages for advanced geospatial processing. The leafmap module provides interactive map display capabilities, important for visualizing GeoJSON data and integrating PostgreSQL/PostGIS access. The attribute management system incorporates sophisticated data handling capabilities that go beyond basic spatial information. Key features include automatic area calculations that account for terrain variations and projection specificities, temporal tracking for monitoring land use changes and crop rotations, integration with the existing national cadastral system, and quality control mechanisms for data validation. The LPIS pilot project in Ukraine has demonstrated significant achievements and established a basis for Ukraine's EU integration process. In particular, several key conclusions can be drawn. The pilot project has established a robust infrastructure through the CDSE Copernicus ecosystem, proving the viability of implementing LPIS at a national scale. The developed software components show strong compatibility with EU standards and requirements, particularly in terms of data management and agricultural monitoring capabilities. 
The implementation of both Sentinel-1 and Sentinel-2 data processing chains ensures comprehensive monitoring capabilities aligned with EU practices. The comparison with the StateGeoCadastre revealed important gaps, including unregistered territories and boundary discrepancies. These findings provide crucial insights for future system development, extension, and policy adjustments. Acknowledgement This work has been supported by the European Commission through the joint World Bank/EU project ‘Supporting Transparent Land Governance in Ukraine’ [grant numbers ENI/2017/387–093 and ENI/2020/418–654] and expert contracts with the EC Joint Research Centre (JRC) [CT-EX2022D670387-101, CT-EX2022D670387-102]. Bibliography 1. Devos, W. (2005). Parcel Identification System Creation and/or Updating: Parcel Block Interpretation and Numbering. JRC IPSC/G03/P/SKA/ska D(2005)(4560), last revised 12 July 2005. 2. Luketic, N., Devos, W., Milenov, P. (2015). Technical guidance on “Management of different layers in LPIS”. LPIS DS-CDP-2015-10. 3. “LPIS mapping specifications” (Part I). (2024). Technical specification v.1.2. 4. Kussul, N., et al. (2017). Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geoscience and Remote Sensing Letters, 14(5), 778-782. 5. Shelestov, A., et al. (2020). Cloud Approach to Automated Crop Classification Using Sentinel-1 Imagery. IEEE Transactions on Big Data, 6(3), 572-582. 6. Shelestov, A., Yailymov, B., Yailymova, H., Nosok, S., Piven, O. (2023). Cloud-Based Technologies for Data Processing in Ukraine: International Context. In: Ilchenko, M., Uryvsky, L., Globa, L. (eds) Progress in Advanced Information and Communication Technology and Systems. MCiT 2021. Lecture Notes in Networks and Systems, vol 548. Springer, Cham. https://doi.org/10.1007/978-3-031-16368-5_5.
LPS Website link: Implementation of Land Parcel Identification System Pilot in Ukraine

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Analyzing Economic Activity in Ukraine Using Night Lights and Air Quality Indicators: Insights Amid War

Authors: Dr. Andrii Kolotii, Bohdan Yailymov, Oleksandr Parkhomchuk, Prof. Andrii Shelestov
Affiliations: Space Research Institute National Academy of Sciences of Ukraine and State Space Agency of Ukraine, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"
The ongoing war in Ukraine has profoundly impacted the country's economic landscape, disrupting industries, altering urban development, and creating significant environmental and social challenges. Traditional measures of economic activity, such as Gross Regional Product (GRP) statistics, are particularly constrained during conflict, as data collection becomes nearly impossible in regions along active conflict lines. This study leverages satellite-derived data, namely Night Lights and Sentinel-5P air quality indicators, as proxies for economic activity, providing an additional perspective on the spatial and temporal dynamics of Ukraine's economy under wartime conditions. Night Lights, captured by the Visible Infrared Imaging Radiometer Suite (VIIRS), serve as a robust indicator of human and industrial activity, reflecting regional economic intensity. At the city level, additional analysis of Kyiv's urban core through Night Lights data and Copernicus-compliant Urban Atlas [1] mapping offers deeper insight into the redistribution of activity within the capital. This approach reveals fluctuations in activity tied to energy disruptions, population displacement, and shifts in nighttime security protocols. Sentinel-5P data, focused on air quality metrics such as nitrogen dioxide (NO₂) and carbon monoxide (CO), provides additional layers of insight by linking industrial emissions and transportation activity to economic output. Together these datasets enable a multidimensional analysis of economic performance, offering valuable insights into how the conflict has reshaped Ukraine's economy. The analysis highlights a consistent decline in Night Lights intensity across all regions of Ukraine, except the Autonomous Republic of Crimea. Regions closer to active conflict lines show the most significant decreases, reflecting the immediate impact of military actions, industrial destruction, and population displacement.
Conversely, regions farther from the frontlines show less pronounced declines, indicating a relative preservation of economic and industrial activity. On the other hand, Sentinel-5P air quality data reveals an increase in NO₂ concentrations in the western regions of Ukraine when comparing 2022 to 2021. This trend correlates with the influx of evacuees and displaced populations, which increased transportation emissions and localized human activity in these areas. The interplay between economic activity and environmental indicators underscores the complex trade-offs faced by Ukraine. Regions with high pre-war NO₂ concentrations, typically linked to robust industrial activity, now exhibit simultaneous declines in emissions and economic productivity. Meanwhile, regions such as those in western Ukraine, which absorbed large numbers of displaced people, are experiencing smaller decreases in emissions due to heightened human activity, transportation, and local economic adjustments. This dual impact highlights the critical need for sustainable reconstruction strategies that balance economic recovery with environmental health. The findings demonstrate the value of satellite-based data for real-time economic monitoring in conflict zones. Night Lights provide a reliable proxy for economic activity, while Sentinel-5P data captures the environmental dimensions of the war's impact as well as, indirectly, migration processes, since large-scale displacement of populations influences patterns of human activity and economic redistribution. Together, these tools offer a comprehensive framework for understanding regional economic transformations, guiding policy responses, and informing recovery efforts. At the city level, the Kyiv use case demonstrates the usefulness of high-resolution data [2] for analyzing urban resilience. Urban Atlas mapping and Night Lights data help pinpoint areas of economic recovery and stress, providing practical insights for urban planning in the post-war period.
Incorporating additional datasets, such as infrastructure damage assessments and renewable energy potential, can further support efforts to develop sustainable and resilient rebuilding strategies in the future. In conclusion, the integration of Night Lights and Sentinel-5P data, supplemented by city-level analytics using tools like Urban Atlas, provides an innovative, scalable method for analyzing economic activity in conflict settings. For Ukraine, this approach captures both the immediate economic and environmental impacts of war and offers actionable insights for shaping a resilient and sustainable future. Acknowledgments This research was partially supported by the U_CAN (Ukraine towards Carbon Neutrality) project. References 1. Shelestov, A., Yailymova, H., Yailymov, B., Shumilo, L., & Lavreniuk, A. M. (2021, July). Extension of Copernicus Urban Atlas to non-European countries. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 6789-6792). IEEE. 2. Kussul, N., Lavreniuk, M., Skakun, S., & Shelestov, A. (2017). Deep learning classification of land cover and crop types using remote sensing data. IEEE Geoscience and Remote Sensing Letters, 14(5), 778-782.
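The regional decline in Night Lights intensity described above comes down to comparing mean radiance between annual composites over a region of interest. A minimal sketch, with NumPy arrays standing in for VIIRS composites; the 60% dimming factor and the array values are synthetic, not measured results from the study:

```python
import numpy as np

def radiance_change(before, after, mask):
    """Relative change in mean night-light radiance over a region mask."""
    b = before[mask].mean()
    a = after[mask].mean()
    return (a - b) / b

rng = np.random.default_rng(0)
before = rng.uniform(10, 20, (100, 100))   # synthetic pre-war annual composite
after = before * 0.4                        # synthetic 60% dimming everywhere
mask = np.ones((100, 100), dtype=bool)      # region-of-interest mask (whole tile)
print(round(radiance_change(before, after, mask), 2))  # -0.6
```

In an operational setting the mask would come from rasterized oblast or Urban Atlas polygons rather than a full tile.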

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Large-scale automated conflict monitoring in Ukraine with Sentinel-1 SAR coherence data

Authors: Olivier Dietrich, Dr Torben Peters, Prof Jan Dirk Wegner, Prof Konrad Schindler
Affiliations: ETH Zurich, University of Zurich
The Russian invasion of Ukraine in February 2022 started a war that caused immense humanitarian and economic devastation, including thousands of civilian casualties, the displacement of millions, and widespread destruction of infrastructure. The scale and duration of the conflict pose significant challenges to maintaining a comprehensive understanding of the situation on the ground. For humanitarian organizations and NGOs, such assessments are vital to prioritize interventions, allocate resources, or document potential war crimes. Satellite imagery provides a non-invasive, independent, and reliable solution to address this need, enabling the detection of destruction even in inaccessible or high-risk areas. Yet, traditional methods relying on manual analysis of very high-resolution images are labor-intensive, time-consuming, and dependent on expensive, irregularly acquired commercial imagery. Developing scalable screening tools that utilize freely available satellite imagery could transform conflict monitoring, enabling rapid identification of destruction hotspots and better resource prioritization. Several studies have attempted to monitor war-induced destruction in Ukraine, yet only a few have addressed this at a national scale, and over the entire duration of the conflict. In their work, Dietrich et al. [1] developed a method leveraging amplitude Synthetic Aperture Radar (SAR) time series from Sentinel-1 and machine learning to generate country-wide damage heatmaps with a ground sampling distance (GSD) of 10m. Their approach further refines these outputs by integrating existing building footprint data to produce building-level damage assessments across Ukraine. Relying on the cloud capabilities of Google Earth Engine (GEE), their method demonstrates excellent scalability but suffers from a relatively high rate of false positives, which can compromise its reliability in operational contexts. In contrast, Scher et al. 
[2] employed a coherent change detection framework to address the same challenge. Their approach simplifies the processing workflow but demands significant computational effort during data preparation, as it involves generating coherence maps from thousands of Sentinel-1 image pairs. While their damage heatmaps have a coarser GSD of 40m, they achieve an exceptionally low false positive rate (<1%) while keeping the true positive rate at a relatively good level (~60%). In this work, we build on the coherence maps developed by Scher et al. by adopting a learning-based approach similar to that used by Dietrich et al. Our goal is to develop a robust framework that combines classical machine learning or deep learning techniques with a variety of data sources to improve damage detection and monitoring. To achieve this, we plan to implement the following models and methods: (1) Framework Development: Building on the baseline established by Scher et al., we aim to extend the machine-learning environment introduced by Dietrich et al. in GEE. This involves integrating and ablating the coherence maps from Scher et al. with additional ancillary datasets, such as Sentinel-1 amplitude SAR data, Sentinel-2 optical imagery, digital elevation models, land cover information, and potentially weather data, to create a more comprehensive framework. (2) Deep Learning Approach: Building on the framework developed in (1), we plan to enhance the classification problem by integrating deep learning methods, allowing us to capture more contextual information and improve performance. (3) Model Enhancements: We further consider exploring alternative ways to reduce reliance on coherence maps, thereby avoiding the need for recomputation in future workflows. Our aim is to identify the optimal balance between accessibility, scalability, and performance. (4) Dataset Expansion: We will create an expanded dataset of damage labels derived from updated Google Earth imagery basemaps.
This dataset will be used to train and validate the model, ensuring its effectiveness in detecting and monitoring damage. The ultimate objective is to develop a scalable, freely available screening tool capable of monitoring war-induced damage at a national scale. This tool is intended to support humanitarian organizations and policymakers by providing actionable insights for decision-making and resource allocation. References: [1] Olivier Dietrich et al. An Open-Source Tool for Mapping War Destruction at Scale in Ukraine using Sentinel-1 Time Series. 2024. arXiv:2406.02506 [cs.CV]. URL: https://arxiv.org/abs/2406.02506. [2] Corey Scher and Jamon Van Den Hoek. Nationwide mapping of damage to human settlements across Ukraine using Sentinel-1 InSAR coherence change detection. AGU Fall Meeting Abstracts. Ed. by AGU. Vol. 2023. Dec. 2023. GC23A-01. URL: https://agu.confex.com/agu/fm23/meetingapp.cgi/Paper/1412879.
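The coherence-drop signal underlying this kind of detection can be sketched as a simple two-threshold rule: a pixel is flagged when it was coherent between pre-event acquisitions but decorrelated across the event. The thresholds and pixel values below are illustrative placeholders, not values taken from either cited paper:

```python
import numpy as np

def coherence_damage_mask(pre, post, hi=0.6, lo=0.3):
    """Flag pixels that were coherent before the event but decorrelated after.

    pre/post: interferometric coherence rasters in [0, 1].
    hi/lo: illustrative thresholds, not values from the cited work.
    """
    return (pre >= hi) & (post <= lo)

pre = np.array([[0.8, 0.7], [0.2, 0.9]])    # pre-event coherence (toy 2x2 tile)
post = np.array([[0.2, 0.65], [0.1, 0.25]])  # co-event coherence
print(coherence_damage_mask(pre, post))
# [[ True False]
#  [False  True]]
```

A learning-based variant, as proposed here, would replace the fixed thresholds with a classifier fed by coherence plus the ancillary layers listed above.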

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Leveraging the Synergetic Products of Copernicus, Google, and NASA in Synopsizing Tree-Land Loss Dynamics from Space

Authors: Mr Wim Zwijnenburg, Mr Bilnd Hambi
Affiliations: PAX for Peace Organization, University Of Duhok
Wim Zwijnenburg, Humanitarian Disarmament Project Leader, PAX for Peace, Zwijnenburg@paxforpeace.nl; Bilnd Hambi, Forest Information Technology Scientist, University of Duhok, bilnd.hambi@uod.ac. Forests in the Kurdistan Region of Iraq (KRI) have an enduring history, shaped by social, political, and environmental changes. The indigenous Kurds have depended on the region's forest products and services for centuries. Deforestation followed degradation activities such as conversion to other land uses, illegal logging, military operations, fires, and natural disturbances. Millions of mines left behind by wars since the 1980s have also been an obstacle for firefighters and police in the forest areas. Recent concerns over forest degradation raised by authoritative media sources and studies in Kurdistan underline the need for proper documentation and mapping to ground these discussions in fact, particularly using modern technology such as space systems. The Kurdistan region belongs to the Irano-Anatolian biodiversity hotspot, with high biological diversity comprising at least 6,000 plant species, of which about 2,500 are endemic. The region provides immense natural resources of economic importance, such as medicinal, aromatic, and edible plants and crops. However, periodic armed conflicts in the area have been one of the obstacles to conducting sufficient field inventories in this cold spot of knowledge about plant diversity. Oak forests form around 90% of the total forest cover in the Kurdistan region; Quercus aegilops alone accounts for about 70% of them. The oak forests have economic and cultural benefits as well as ecological importance, providing shelter and food for numerous species of mammals and birds living in the region.
This research applied a carefully designed workflow to address tree cover loss and its causes in the AOI (Kurdistan region, Iraq) from 2015 to 2022, with an interval in 2019. A collection of consistent remote-sensing satellite-derived products was utilized in the code editor of the cloud computing platform Google Earth Engine (JavaScript API). The Dynamic World V1 dataset was used for delineating tree cover loss areas in the AOI by adapting a dedicated scripted expression. The tree cover class and overall LULC (Land Use/Land Cover) classification were also derived from this dataset using the best grading predictions for the autumn season in the AOI. Furthermore, the standardized versions of NASA VIIRS and MODIS satellite products were used for the identification of forest areas damaged by fires. The VIIRS instrument data on board the S-NPP satellite were ordered from the archive of the Fire Information for Resource Management System (FIRMS) at the University of Maryland. Class zero (presumed vegetation) of the science-quality VIIRS thermal anomalies was considered in this study. The MODIS instruments on the Terra and Aqua satellites provided a combined monthly burned-area product (MCD64A1 Version 6) on a global 500 m grid. For determining individual and major fire events in the AOI, particularly in the tree cover loss intersection areas, the spatial and temporal patterns of MCD64A1 and the VIIRS products were combined in this research. As part of the Environment and Conflict Alerts from PAX, the fusion of open-access imagery and tools provided by ESA, NASA, and Google has been leveraged for the first time in the AOI to support faster recording and mapping of the interlinkages between ground conditions and tree-land loss dynamics by forest scientists and decision makers. The region's main causes of tree cover loss included wildfires, over-exploitation for timber, conversion to agricultural areas, and the negative consequences of climate change.
Between 2015 and 2019, Erbil experienced the largest tree cover loss relative to the other governorates in the AOI, losing 317 ha (0.384%) of its 82,637 ha. Duhok showed the second largest loss, with Sulaymaniyah and Halabja losing only 29 and 2.35 ha respectively. Halabja experienced the largest percentage of tree cover loss, 0.31% (18.07 ha), from 2019 to 2022, with Erbil having the smallest proportion. The recommendations to mitigate degradation and tree cover loss in the KRI include decreasing the overconsumption of forest products and transitioning to more sustainable practices, awareness campaigns for planting trees to maintain air and soil quality, using agroforestry to support sustainable agricultural strategies, responsibly managing forests to provide certified wood products to the community, reducing meat consumption to decrease demand on land for livestock grazing and feed production, and reinforcing the regional and international forest policy framework and its applications. Consequently, urgent map-based actions are needed from the KRG and Iraqi governments to restore the damage done to the native forests by these disturbances and conflicts.
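The loss proportions quoted above follow from a simple ratio of lost area to baseline area; a quick check against the Erbil figure:

```python
def loss_percent(loss_ha, baseline_ha):
    """Tree cover loss expressed as a percentage of the baseline area."""
    return 100.0 * loss_ha / baseline_ha

# Erbil, 2015-2019: 317 ha lost from an 82,637 ha baseline
print(round(loss_percent(317, 82_637), 3))  # 0.384
```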

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Assessment of the Forest Cover Changes in Ukraine as an Impact of Military Aggression

Authors: Bohdan Yailymov, Hanna Yailymova, Volodymyr Kuzin, Yevhenii Salii, Oleksandr Parkhomchuk, Andrii Shelestov
Affiliations: Space Research Institute NAS of Ukraine and SSA of Ukraine, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"
In the context of the military aggression against Ukraine, the assessment of changes in forest cover, in particular in territories directly affected by hostilities, is of particular relevance. As part of the study, a universal methodology for mapping forests based on satellite data has been developed and successfully applied both to the territory of Ukraine and to the whole of Europe. The methodology is based on the integrated use of remote sensing data and includes machine learning algorithms for classifying forest cover types. Within the framework of the SWIFTT (Satellites for Wilderness Inspection and Forest Threat Tracking) project, forest cover maps of Europe for 2022 [1], [2] and 2024 were created, highlighting three main classes: coniferous, deciduous, and mixed forests. For the territory of Ukraine, an additional retrospective analysis of forest cover changes since 2015 was conducted, which allowed an assessment of anthropogenic changes caused by military actions. The results of the study demonstrate significant changes in forest cover in the eastern part of Ukraine, where active hostilities are taking place. The situation is particularly critical in the Emerald Network area, where large-scale forest losses have been identified [3], as well as in the southwestern part of the Kherson region. Satellite data clearly record the disappearance or decline of valuable natural areas due to shelling, fires, and intensive military use of the territories. The developed methodology allows not only the detection of changes in forest cover but also a quantitative assessment of the damage caused to forest ecosystems. Based on a time series of satellite data, maps of forest cover loss were created with a breakdown by forest type and cause of destruction. Special attention is paid to the assessment of losses of protected areas and natural ecosystems.
The practical significance of the study lies in creating a methodological basis for systematic monitoring of forest conditions and assessment of ecological damage resulting from military operations. The results obtained can be used for planning the restoration of forest ecosystems, assessing the necessary compensations, and developing strategies for biodiversity conservation in the post-war period. Further research will be aimed at improving methods for automatic detection of forest cover changes, developing predictive models for forest ecosystem restoration, and creating recommendations for priority measures for the restoration of damaged areas. Particular attention will be paid to monitoring the Emerald Network and other valuable natural complexes affected by military operations. Acknowledgments The study was supported by the HORIZON Europe project SWIFTT No. 101082732, “Satellites for Wilderness Inspection and Forest Threat Tracking.” References 1. Bohdan Yailymov, Hanna Yailymova, Nataliia Kussul and Andrii Shelestov (2024) Semi-supervised European forest types mapping using high-fidelity satellite data. 4th International Workshop of IT-professionals on Artificial Intelligence (ProfIT AI 2024), 25-27.09.2024. https://ceur-ws.org/Vol-3777/paper1.pdf 2. Kussul, N., Shelestov, A., Yailymov, B., & Yailymova, H. (2023, September). Semi-Supervised Forest Type Mapping in Europe on Satellite Data. In 2023 IEEE 12th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Vol. 1, pp. 454-458. Doi: 10.1109/IDAACS58523.2023.10348948 3. Shumilo, L., Skakun, S., Gore, M. L., Shelestov, A., Kussul, N., Hurtt, G., ... & Yarotskiy, V. (2023). Conservation policies and management in the Ukrainian Emerald Network have maintained reforestation rate despite the war. Communications Earth & Environment, 4(1), 443. https://doi.org/10.1038/s43247-023-01099-4

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Satellite Monitoring and Geoportal Technologies for Supporting the Restoration of Ukraine's War-Damaged Agricultural Lands

Authors: Sofiia Drozd, Dr. Hanna Yailymova, Yevhenii Salii, Volodymyr Kuzin, Nataliia Kussul, Prof. Andrii Shelestov, Dr. Bohdan Yailymov
Affiliations: National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Space Research Institute NAS Ukraine and SSA Ukraine, University of Maryland College Park
The Ukrainian-Russian conflict, ongoing since 2022, has caused significant damage to Ukraine's economy and infrastructure, with global repercussions. A particular concern is the agricultural sector, as the fighting has resulted in the damage or loss of thousands of hectares of fertile land. The reduction in cultivated areas makes it impossible to produce cereals at pre-war levels, threatening global food security and creating the risk of famine in countries dependent on Ukrainian agricultural products. In the context of constant danger, an important task is to support farmers and effectively restore agricultural infrastructure, which requires precise damage monitoring. To determine the location and assess the extent of field damage, traditional field inspections are ineffective due to danger, the vastness of the territories, and a lack of resources. An alternative is the use of satellite data and machine learning, which allows automating the monitoring process, reducing costs, and providing access even to occupied or dangerous zones. Our study proposes a methodology for analyzing damaged agricultural land using free Sentinel-2 satellite data [1]. We apply a three-step methodology, using manually collected training labels for each step. In the first stage, a binary classification of fields into damaged and undamaged is performed. For this, a random forest classifier was trained on the statistical characteristics of the Sentinel-2 spectral bands blue (B2) and green (B3), and the NDVI and GCI vegetation indices. The model analyzes average, minimum, maximum values, and data variance. The second stage focuses on identifying whether there is damage of a specific type on the field. Binary classification is applied separately for each damage type: light craters, dark craters, and burnt areas. Each classifier determines the presence or absence of a particular damage type on the field. 
Statistical parameters of the Sentinel-2 spectral bands and vegetation indices, the date of image acquisition, and land cover classes are used for analysis. Each field can have multiple types of damage simultaneously. The third stage uses an anomaly detection method to determine the location of specific damage zones, such as craters or burnt areas. Threshold values for anomaly detection were set using regression modeling, which uses the same predictors as the previous stages [2, 3]. For fields with light craters, anomalies in the B2 band spectrum and the GCI index are analyzed; for dark craters, anomalies in the B3 band and the GCI(I) (inverted GCI) index are considered; and for burnt areas, the BAIS2 index is used. A damage raster mask is created based on the identified anomalies. According to the model accuracy checks, the F1 score for the binary classification of damaged fields is 0.9. For the classification of damage types, the F1 score was 0.765 for light craters, 0.74 for dark craters, and 0.78 for burnt areas. The accuracy of the regression models for determining the threshold coefficients for anomaly detection varies from R² = 0.72 for the GCI(I) index to R² = 0.96 for GCI, with root mean squared errors (RMSE) of 0.171 and 0.039, respectively. In general, Sentinel-2 with the proposed methodology allows for accurate detection of most craters and pits (except for very small ones [4]), trenches, tracks of military vehicles, and burnt areas. The developed methodology was integrated into the Google Earth Engine (GEE) platform, which enabled the creation of a geoportal called "WarScan" [5]. This tool allows users to access vector boundaries of damaged fields, raster masks of damaged areas, and detailed information about the percentage of damage to each field relative to its total area. The geoportal already contains the results of field analysis in 10 pilot regions of Ukraine that have been affected by the conflict since February 24, 2022.
Data on the platform is regularly updated, ensuring the relevance and accuracy of the information. The proposed methodology and geoportal are valuable tools for the Ukrainian government and international organizations to make strategically sound decisions that will aid in the effective recovery of Ukraine’s agricultural sector and safeguard global food security. This approach can also be adapted to address the consequences of conflicts and natural disasters in other regions, enhancing its value for the international community. Acknowledgments This work is supported by the National Research Foundation in Ukraine within the project "Geospatial monitoring system for the war impact on the agriculture of Ukraine based on satellite data" (grant number 2023.04/0039) and by the project of the Ministry of Education and Science of Ukraine "Information technologies of geospatial analysis of the development of rural areas and communities" (grant number RN/27-2023). References 1. Kussul, N., Drozd, S., Yailymova, H., Shelestov, A., Lemoine, G., & Deininger, K. (2023). Assessing damage to agricultural fields from military actions in Ukraine: An integrated approach using statistical indicators and machine learning. International Journal of Applied Earth Observation and Geoinformation, 125, 103562. https://doi.org/10.1016/j.jag.2023.103562 2. Drozd, S., & Kussul, N. (2024). Optimizing war damage detection in agricultural lands using regression models and satellite imagery. In Proceedings of the 1st International Workshop on Data-Centric Artificial Intelligence (DEARING 2024) at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2024), Vilnius, Lithuania. 3. Parkhomchuk, O., Shelestov, A., & Drozd, S. (2024). Symbiotic artificial intelligence and satellite imagery for rapid assessment of war-induced agricultural damage in Ukraine. 
International Symposium on Applied Geoinformatics 2024 (ISAG2024), Wroclaw, Poland, 9–10 May 2024. 4. Kussul, N., Drozd, S., Skakun, S., Duncan, E., & Becker-Reshef, I. (2023). Fusion of very high and moderate spatial resolution satellite data for detection and mapping of damages in agricultural fields. In 2023 13th International Conference on Dependable Systems, Services and Technologies (DESSERT) (pp. 1-7). IEEE. https://doi.org/10.1109/DESSERT61349.2023.10416533 5. Drozd, S., Kuzin, V., Salii, Y., Shelestov, A., & Kussul, N. (2024). Remote sensing and machine learning for war impact assessment on agriculture in Ukraine. 14th International Conference Dependable Systems, Services and Technologies (DESSERT) 2024, Athens, Greece, 11–13 October.
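The per-field statistics feeding the stage-1 classifier (mean, minimum, maximum, and variance of band and index time series) can be sketched as follows; the NDVI values are synthetic and the function name is illustrative:

```python
import statistics

def field_features(series):
    """Per-field summary statistics of one band/index time series,
    as used as inputs for the stage-1 damaged/undamaged classifier."""
    return {
        "mean": statistics.fmean(series),
        "min": min(series),
        "max": max(series),
        "var": statistics.pvariance(series),  # population variance
    }

# Synthetic NDVI time series for one field over a season
ndvi = [0.25, 0.5, 0.75, 0.5]
print(field_features(ndvi))  # {'mean': 0.5, 'min': 0.25, 'max': 0.75, 'var': 0.03125}
```

In the full pipeline, such feature dictionaries for B2, B3, NDVI, and GCI would be stacked into a feature matrix for a random forest classifier.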

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Monitoring Active Cropland Dynamics Amid Civil War in Eastern DRC

Authors: Josef Wagner, Dr. Inbal Becker-Reshef, Abhishek Kotcharlakota, Manav Gupta, Erik Duncan, Shabarinath Nair
Affiliations: University of Strasbourg | ICube Laboratory, University of Maryland College-Park
In 2022, the long-standing civil conflict between M23 rebels and government forces escalated in the Masisi and Rutshuru regions of the Democratic Republic of Congo. The intensifying violence led to widespread displacement, reports of war crimes, and a surge in military clashes across the two regions. Amidst the turmoil, food production depends largely on smallholder farms, raising the critical question: did the conflict contribute to a reduction in active cropland, exacerbating the risk of food insecurity? This presentation will describe how, in support of the Famine Early Warning Systems Network (FEWS NET), we monitored active cropland dynamics between 2021 and 2023 in the Masisi and Rutshuru regions, DRC. We first mapped yearly active cropland and then leveraged the maps and the Armed Conflict Location and Event Data Project (ACLED) dataset in a stratified random sampling frame to estimate unbiased active cropland areas with respect to conflict intensity. We defined as active cropland all cropland that undergoes a tillage or bare-soil phase. In the absence of data for supervised model training, we defined the pixel-level sequence "high photosynthetic activity – senescence – bare soil – green-up – high photosynthetic activity" as the target signal for placing points in the active-cropland and non-cropland classes. Annotating on Planet 14-day composite time series, we created training sets of around 10k points per season between 2021 and 2023. We then trained separate random forest classifiers and ran inference on Planet data for each year. We used the ACLED dataset to subdivide the Masisi and Rutshuru regions by conflict intensity and used the active cropland maps as stratifiers for sampling in unbiased area estimation. Results showed no large-scale cropland abandonment, in line with reports received by the local FEWS NET and ACLED teams.
According to these reports, farmers continue to cultivate, either under new landlords installed by M23 or by paying tribute directly to the group.
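The classification step described above, training a random forest on pixel-level vegetation-index time series to separate active cropland from other land, can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' actual pipeline: the "active" class carries a green-up/senescence cycle, the other does not.

```python
# Hypothetical sketch of time-series cropland classification with a random
# forest; feature construction and data here are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_steps = 26  # roughly one year of 14-day composites
# Active cropland: a seasonal green-up / senescence cycle plus noise.
active = np.clip(np.sin(np.linspace(0, np.pi, n_steps)) + rng.normal(0, 0.1, (500, n_steps)), 0, 1)
# Non-cropland: a flat vegetation-index trace with no crop cycle.
fallow = np.clip(0.3 + rng.normal(0, 0.1, (500, n_steps)), 0, 1)

X = np.vstack([active, fallow])
y = np.array([1] * 500 + [0] * 500)  # 1 = active cropland, 0 = non-cropland

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```

In practice the annotated points would be split into train/validation sets and a separate classifier trained per season, as the abstract describes.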
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Soil Moisture Changes in Southern Ukraine as a Result of Military Operations

Authors: Bohdan Yailymov, Andrii Shelestov, Hanna Yailymova, Liudmyla Pidgorodetska, Liudmyla Kolos, Oleh Fedorov
Affiliations: Space Research Institute NAS of Ukraine and SSA of Ukraine, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"
The military aggression against Ukraine has led to large-scale environmental consequences, among which the destruction of the Kakhovka reservoir in 2023 occupies a special place [1]. The study focuses on the analysis of changes in soil moisture in the southern part of Ukraine, with special attention to the territories directly affected by the destruction of the dam. The study conducted a comprehensive analysis of available satellite soil moisture assessment products for the territory of Ukraine. A comparative analysis of data from various satellite missions was carried out, including SMAP (Soil Moisture Active Passive), SMOS (Soil Moisture and Ocean Salinity) and Copernicus. A correlation analysis was conducted between different products to determine their consistency and reliability in the conditions of the studied region [2]. Special attention is paid to the area near the Kakhovka reservoir, which covers over 100 territorial communities, most of which are located in the Kherson region. This area is characterized by high vulnerability to changes in the hydrological regime due to the specifics of the soil cover and climatic features of the region. The destruction of the reservoir caused a cascading effect of changes in the hydrological regime of the area, which was reflected in significant fluctuations in soil moisture indicators. The results of the study demonstrate significant changes in the spatial and temporal distribution of soil moisture after the destruction of the reservoir. Areas of critical moisture reduction have been identified, which will require priority measures to adapt agricultural activities and restore the water balance. Satellite data show the formation of new moisture distribution patterns that differ from historical trends. Based on the analysis, maps of soil moisture dynamics will be created, which will reflect both short-term changes after the disaster and long-term trends. 
The practical significance of the study is to create a methodological basis for assessing the ecological consequences of the destruction of water bodies and their impact on the environment. The results obtained can be used to plan measures to restore the water balance of the territory, adapt agricultural activities and assess economic losses. Further research will be aimed at developing models for increasing the spatial resolution of soil moisture in conditions of a disturbed hydrological regime and creating recommendations for water resources management in the post-conflict period. Acknowledgement The authors acknowledge the funding received from National Research Foundation of Ukraine 2023.04/0081 “Remote monitoring information technology for land cover ecological state in the occupied territories”. References 1. Yailymov B., Yailymova H., Kussul N., Shelestov A. Flooded areas monitoring below the Kakhovka Dam based on machine learning and satellite data. IEEE International Geoscience and Remote Sensing Symposium, 7–12 July 2024, Athens, Greece, pp. 1486-1489. DOI: 10.1109/IGARSS53475.2024.10642248 2. Yailymov B., Yailymova H., Pidgorodetska L., Kolos L., Fedorov O., Shelestov A. Analysis of Satellite and Geospatial Soil Moisture Data for the Land Cover Ecological State Monitoring in Ukraine. IEEE DESSERT 2024: Proceedings of the 14th International Conference on Dependable Systems, Services and Technologies. Athens, Greece, October 11–13, 2024. 6 p.
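The product inter-comparison step mentioned in the abstract, checking the consistency of soil-moisture products from different missions, typically reduces to correlating co-located time series. A minimal sketch with synthetic series standing in for two retrievals (real products would first be regridded and matched in time):

```python
# Illustrative correlation analysis between two soil-moisture products.
# The series are synthetic; names like "smap_like" are assumptions.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(365)
truth = 0.25 + 0.1 * np.sin(2 * np.pi * t / 365)   # seasonal moisture cycle (m3/m3)
smap_like = truth + rng.normal(0, 0.02, t.size)    # product A with retrieval noise
smos_like = truth + rng.normal(0, 0.03, t.size)    # product B with retrieval noise

# Pearson correlation quantifies how consistently the two products track
# the same moisture dynamics over the study area.
r = np.corrcoef(smap_like, smos_like)[0, 1]
print(f"Pearson r = {r:.2f}")
```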
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Assessing the Impact of the War on Irrigation and Agricultural Systems in Southern Ukraine Using Satellite Data

Authors: Dr. Nataliia Kussul, Dr. Bohdan Yailymov, Dr. Hanna Yailymova, Prof. Andrii Shelestov, Sheila Baber, Olexandr Parkhomchuk, Sergii Skakun, Inbal Becker-Reshef
Affiliations: University Of Maryland, Space Research Institute NAS Ukraine and SSA Ukraine, Ky, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, Ukraine
The Russian full-scale invasion of Ukraine has profoundly disrupted the country's agricultural sector, particularly in the southern regions where irrigation systems are vital for crop yields. The destruction of critical infrastructure, including the Kakhovka hydroelectric power station, and ongoing hostilities have caused significant agricultural damage, posing severe risks to food security in Ukraine and globally. Satellite monitoring provides an invaluable tool for assessing the impact of war-related damage on irrigation and agriculture and for understanding its broader socio-economic consequences. The purpose of this study is to identify crop changes due to war-related damage, assess the impact of that damage on irrigation systems, and analyze shifts in crop cultivation, including the extent of abandoned lands, using satellite data. This research utilizes high-resolution multispectral imagery from the Sentinel-2 and Landsat-8 satellites, complemented by radar data from Sentinel-1. The datasets support analyses of soil moisture, vegetation indices, and ground temperature [1], which are essential for evaluating active irrigation systems. Crop type classification maps [2] and maps of irrigated fields in the Kherson region, produced within the World Bank and EU project Supporting Transparent Land Governance in Ukraine and the EU-funded project Support to Agriculture and Food Policy Implementation (SAFPI) [3], were employed to determine which crops were irrigated before and during the war. Time series data spanning 2021–2024 enable comparative analysis of pre- and post-war conditions. The study integrates advanced machine learning algorithms for crop classification, incorporating transfer learning methods to address the limited availability of data from temporarily occupied territories. High-resolution satellite data were analyzed to monitor soil moisture levels, vegetation health, and ground temperatures.
Changes in irrigation and crop patterns were assessed by examining shifts in vegetation indices, active irrigation coverage, and cultivated area. Accounting for the biases inherent in Earth Observation-based models for land cover analysis, and for the aforementioned lack of ground truth data in the occupied territories, photo-interpretation of stratified random samples is used for area estimation and accuracy assessment. For this purpose, evapotranspiration data and high temporal cadence vegetation index, temperature, and soil water content products available through the NASA Harvest and Planet Labs Public Private Partnership are utilized. The analysis revealed several critical findings:
1. Reduction in Irrigated Lands: a significant decline in active irrigation systems was observed, especially in the Kherson region, due to the destruction of the Kakhovka hydroelectric power station [3]. Damaged infrastructure, disrupted pumping stations, and restricted access further exacerbated the reduction in irrigated areas.
2. Decrease in Cultivated Lands: satellite data indicate a marked reduction in cultivated land, particularly near frontline and occupied regions.
3. Change in Crop Variety: farmers shifted to technical crops, such as rapeseed, abandoning traditional crops like cereals, sunflowers, soybeans, and vegetables.
4. Abandoned Lands: in active conflict zones, large expanses of uncultivated land have emerged.
5. Crop Pattern Changes in Occupied Territories: using transfer learning, significant changes in crop patterns in temporarily occupied territories were identified, reflecting war-driven disruptions.
The study highlights the profound impact of the war on Ukraine’s agricultural landscape, including significant reductions in irrigation and cultivated lands, shifts in crop varieties, and the emergence of abandoned farmland. These changes have drastically diminished Ukraine's agricultural export potential and heightened risks to regional food security.
The findings underscore the practical value of satellite monitoring and machine learning techniques for understanding and addressing the consequences of conflict. The results can inform the planning of irrigation system restoration, the development of investment strategies for rebuilding agriculture, and the adoption of advanced irrigation technologies during reconstruction. Acknowledgments This research is supported by the National Research Foundation of Ukraine within projects 2023.01/0040 “DT4LC: Developing Scalable Digital Twin Models for Land Cover Change Detection Using Machine Learning” and 2023.04/0039 “Geospatial monitoring system for the war impact on the agriculture of Ukraine based on satellite data”. Sheila Baber is in part supported by the National Science Foundation Graduate Research Fellowship Program (Grant No. DGE 1840340) and the NASA Harvest Consortium (Grant No. 80NSSC23M0032) References 1. Kussul, N., Shelestov, A., Yailymov, B., Yailymova, H., Lavreniuk, M., Shumilo, L., & Bilokonska, Y. (2020). Crop monitoring technology based on time series of satellite imagery. In 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT), pp. 346-350. DOI: 10.1109/DESSERT50317.2020.9125031. 2. Shelestov, A., Lavreniuk, M., Vasiliev, V., Shumilo, L., Kolotii, A., Yailymov, B., ... & Yailymova, H. (2019). Cloud approach to automated crop classification using Sentinel-1 imagery. IEEE Transactions on Big Data, 6(3), pp. 572-582. DOI: 10.1109/TBDATA.2019.2940237. 3. Kussul, N., Shelestov, A., Lavreniuk, M., Kolotii, A., Vasiliev, V., 2019. Transparent Land Governance in Ukraine within World Bank Program. IEEE 2nd Ukraine Conference on Electrical and Computer Engineering (UKRCON), Lviv, Ukraine, 2019, pp. 1077-1080, https://doi.org/10.1109/UKRCON.2019.8879771. 4. Yailymov, B, et al. Flooded Areas’ Monitoring Under the Kakhovka Dam Based on Machine Learning and Satellite Data. 
In: IGARSS 2024-2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024. p. 1486-1489. DOI: 10.1109/IGARSS53475.2024.10642248
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Satellite-Based Assessment of Rural Areas in Ukraine for the Effective Restoration of War-Affected Regions and Strategic Planning

Authors: Sofiia Drozd, Dr. Hanna Yailymova, Bohdan Potuzhnyi, Vlasa Svirsh, Natallia Kussul, Prof. Andrii Shelestov, Bogdan Yailymov
Affiliations: National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Space Research Institute NAS Ukraine and SSA Ukraine, University of Maryland College Park
The prolonged crisis in Ukraine caused by the war has led to the degradation of many sectors, significantly affecting the well-being of the population and the development of both urban and rural areas. The conflict has caused extensive infrastructure damage: numerous facilities, such as post offices, factories, educational institutions, and mobile networks of Ukrainian operators, have ceased functioning in occupied territories and active combat zones. Many institutions and industrial facilities were forced to relocate to safer regions in central and western Ukraine, resulting in a significant imbalance in infrastructure availability across the country. This study focuses on Ukraine's rural areas, assessing their development and residents' quality of life based on satellite data. Particular attention is paid to the accessibility of various infrastructure facilities and mobile internet coverage in rural regions at the micro level. We aim to identify deficits in specific infrastructure objects in rural areas, providing a basis for strategic development planning and efficient investment allocation. To achieve the research objectives, we employ a comprehensive approach combining three independent methods for assessing rural development and comparing their outcomes. The first method is based on a gradational approach that classifies villages into three categories—developed, moderately developed, and depressed [1]. The evaluation relies on a composite score calculated using remote sensing data. We utilize information from OpenStreetMap and other sources about the shortest distances to key infrastructure facilities (hospitals, schools, banks, shops, roads, power lines, etc.), natural ecosystems (water bodies, forests, parks), and occupied territories. The second method uses graph models and cluster analysis, analyzing a similar dataset enriched with mobile internet availability information [2,3]. 
This approach generates maps showing the development level of each village, along with heatmaps and 3D visualizations that enhance the understanding of clustering results and reveal regional disparities in infrastructure access. The third method involves multi-criteria analysis, specifically the analytic hierarchy process and weighted linear combination. The criteria include proximity to socio-cultural facilities (schools, kindergartens, hospitals, churches, libraries), economic infrastructure (shops, banks, grain storage facilities), transport networks (roads, bridges, post offices, hotels), technological infrastructure (power lines, mobile internet), and ecological resources (parks). Additionally, the number of available mobile internet operators in each village is considered. Through expert evaluations, weight coefficients are assigned to each criterion, and a Rural Development Index (RDI) is calculated as a weighted linear combination. Based on the RDI, Ukrainian villages are classified into five categories, ranging from very low to very high development. Analysis revealed that 9.83% of villages are highly developed, and 27.11% are well-developed, significantly outnumbering those with low (11.47%) and very low (1.87%) development levels. Almost half of rural areas (49.72%) exhibit a moderate development level, with an index ranging from 0.75 to 0.85. Despite using different approaches, the results of all three methods were consistent, showing that the most developed rural areas are concentrated in western Ukraine and near Kyiv (e.g., Lviv, Kyiv, Ivano-Frankivsk, Kharkiv, Ternopil, Cherkasy). In contrast, the least developed areas are in the north, east, and south of Ukraine (e.g., Luhansk, Zaporizhzhia, Chernihiv, Kherson, Zhytomyr). The most depressed regions suffer from acute shortages of shops, hospitals, educational institutions, power lines, and mobile internet. 
For each of these regions, specific missing facilities requiring urgent attention were identified. The study results provide a scientifically grounded foundation for making strategic decisions aimed at balancing the development of Ukraine’s rural areas, distributing resources, and targeting investments, thereby laying the groundwork for sustainable national development. Acknowledgements This research was supported by the project of the Ministry of Education and Science of Ukraine "Information technologies of geospatial analysis of the development of rural areas and communities" (grant number RN/27-2023). References 1. Yailymova, H., Yailymov, B., Kussul, N., & Shelestov, A. (2023, October). Geospatial Analysis of Life Quality in Ukrainian Rural Areas. In 2023 13th International Conference on Dependable Systems, Services and Technologies (DESSERT) (pp. 1-5). IEEE. https://doi.org/10.1109/DESSERT61349.2023.10416517 2. Kussul, N., Svirsh, V., & Potuzhnyi, B. (2024). Integrated Geospatial Analysis for Rural Development Metrics. In COLINS (1) (pp. 141-160). https://doi.org/10.31110/colins/2024-1/011 3. Potuzhnyi, B., Svirsh, V., & Kussul, N. (2024). Graph-based modeling of village infrastructure development. International Symposium on Applied Geoinformatics 2024 (ISAG2024), Wroclaw, Poland, 9–10 May 2024, p. 4.
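The Rural Development Index described above is a weighted linear combination of normalised criterion scores, with weights from expert AHP evaluation. A minimal sketch of that computation follows; the criteria, weights, and class thresholds here are invented for illustration, not the study's actual values.

```python
# Illustrative RDI computation: weighted linear combination of criterion
# scores per village, then classification into five development levels.
import numpy as np

criteria = ["socio_cultural", "economic", "transport", "technological", "ecological"]
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])  # assumed AHP weights, sum to 1

# Rows: villages; columns: criterion scores normalised to [0, 1]
# (e.g. inverse distance to the nearest facility of each type).
scores = np.array([
    [0.9, 0.8, 0.7, 0.9, 0.6],
    [0.4, 0.3, 0.5, 0.2, 0.7],
    [0.8, 0.9, 0.8, 0.8, 0.8],
])

rdi = scores @ weights  # one weighted score per village

# Five development classes on illustrative thresholds.
bins = [0.2, 0.4, 0.6, 0.8]
labels = ["very low", "low", "moderate", "high", "very high"]
classes = [labels[np.digitize(v, bins)] for v in rdi]
print(list(zip(np.round(rdi, 3), classes)))
```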
Add to Google Calendar

Friday 27 June 13:00 - 14:30 (X5 – Poster Area – Zone P)

Poster: Leveraging Remote Sensing Data To Improve Humanitarian Response From Conflict-linked Environmental Damages

Authors: Wim Zwijnenburg
Affiliations: Pax, University College London
A wealth of remote sensing data and novel applications have spurred significant progress in understanding how conflicts cause environmental harm. These destructive environmental consequences impact humanitarian response, affect livelihoods and ecosystems, and have significant implications for post-conflict reconstruction and rehabilitation efforts, including building climate resilience. Over the last decade, academics, civil society groups and international organisations have made substantial progress in monitoring and assessing environmental harm through remote sensing and open-source data collection. Novel damage assessments using commercial and open-access optical imagery and open-source data, combined with risk awareness tools for chemical substances, provide insights into acute and long-term risks of civilian exposure to toxics from hazardous facilities and conflict rubble. At the same time, SAR data can be rapidly deployed for change detection, enabling full-scale monitoring of damaged urban areas over large areas. Similarly, identification and mapping of damage to critical infrastructure such as water facilities, or locating large informal solid waste dumps, is key for humanitarian response and the planning of support operations. Land use and land cover change modeling supports analysis of conflict events such as wide-scale burning of forests, destruction of agricultural land or land abandonment, while SAR and optical imagery are being used for oil spill detection both on land and at sea, from attacks on oil tankers and strikes on fossil fuel facilities.
The analysis of Earth Observation (EO) data has allowed researchers across disciplines to develop a more granular understanding of conflict-linked environmental harm, which has spurred both improved response and international norm development around protection of the environment in relation to armed conflicts. With the broader application of machine learning, artificial intelligence and new satellite sensors providing insights into climate change, the EO field offers a plethora of opportunities to accelerate the current trajectory of progress. Rapid data collection, analysis and sharing of results are crucially important for humanitarian responders, who often have competing priorities. There remains a dire need to harness this data more effectively, through improved data sharing, addressing lessons learned, verification and validation of data, and building more effective tools. Questions remain over opportunities to foster collaborations between academics, the humanitarian community and remote sensing institutions. PAX is a partner of NASA Lifelines, which aims to build a strong community of practice by bringing these diverse actors together, while Bellingcat supports Tech Fellows in building tools with remote sensing data that improve rapid Earth observation analysis. This session will have speakers from PAX and University College London, both long-time Bellingcat contributors with long-standing experience on this topic, who have pioneered methodologies combining open-source investigations with Earth observation in the conflict-affected areas they work in, including the development of machine learning tools and remote sensing analysis of conflict-related pollution and damage assessments in Ukraine, Libya, Gaza, Sudan, South Sudan, Syria, Lebanon and Iraq.
The aim of the session is to take stock of current applied research, novel methodologies and sensor data developed over the last decade, discuss lessons learned from the findings and the processes used to derive them, and explore future opportunities with participants to strengthen collaboration and improve the use of EO for humanitarian response and environmental analysis. The presentations are intended to gather critical feedback from participants, explore synergies between the different communities that apply EO, and create thought-provoking discussion. Further reading by the authors and other relevant literature: https://international-review.icrc.org/articles/leveraging-emerging-technologies-to-enable-environmental-monitoring-and-accountability-in-conflict-zones-924 https://academic.oup.com/jpubhealth/article/42/3/e352/5628132 https://oballinger.github.io/ https://www.bellingcat.com/author/wim-zwijnenburg/
Add to Google Calendar

Friday 27 June 13:07 - 13:27 (EO Arena)

Demo: D.04.21 DEMO - Empowering EO Projects with Cloud-Based Working Environments in APEx

#stac #cloud-native

The APEx Project Tools provide EO projects with ready-to-use, cloud-native, configurable working environments. By leveraging a comprehensive suite of pre-configured tools—including a project website, JupyterHub environment, STAC catalogue, visualization tools and more—projects can quickly establish their own collaborative environment without the complexity of managing its infrastructure. By providing robust, scalable, and user-friendly environments, the APEx Project Tools foster greater collaboration and support the accessibility and reuse of project outcomes within the EO community.

This demonstration will showcase how APEx enables seamless access to flexible and scalable working environments that can be tailored to a project’s needs. Participants will be guided through the key project tools and their capabilities, illustrating how they can support activities such as data processing, visualization, and stakeholder engagement. The session will provide insights into the different instantiation options available, from project-specific portals to interactive development environments and geospatial analysis tools. By highlighting the ease of integration between these components, the session will demonstrate how APEx facilitates the rapid deployment of tailored project environments that align with project objectives.

By attending this session, EO project teams will gain a deeper understanding of how APEx streamlines the deployment of cloud-based tools, reducing technical barriers and allowing researchers to focus on scientific innovation. With APEx handling the infrastructure, teams can dedicate more time to developing and sharing impactful EO solutions, ensuring broader adoption and engagement within the community.

Speakers:


  • Bram Janssen - VITO
Add to Google Calendar

Friday 27 June 13:30 - 13:50 (EO Arena)

Demo: D.02.29 DEMO - GeoBreeze: Simple, Fast, and Flexible Evaluation of Remote Sensing Foundation Models

We present a new library to evaluate remote sensing foundation models (RSFMs) in a research context.
Problem statement: RSFMs are on the rise, but their evaluation is difficult. The recent surge in incorporating more sensors and more datasets exacerbates this issue. Currently, many researchers write their own custom evaluation code, which is time-intensive and hinders reproducibility. An efficient evaluation library for common models, tasks, and datasets does not yet exist.
Unique value proposition: We present GeoBreeze, a simple, fast, and flexible evaluation kit for RSFMs. At its core, we propose a novel model wrapper returning multiple blocks from ViT-based architectures, from which the library can perform kNN, linear probing, full fine-tuning, and segmentation evaluation. For efficient linear probing, we integrate the linear probing code from DINOv2, allowing multiple learning rates and feature extraction methods to be trained in parallel.

Core features:
• Simple: Single compact model wrapper (e.g., 73 lines of code for SoftCon) for all tasks
• Fast: Accelerated linear probing (e.g., 78 configurations in parallel with a batch size of 900 on 40 GB of GPU RAM for SoftCon)
• Flexible: 10+ model & 17+ dataset wrappers implemented, integration with TorchGeo datasets

In our session, we present the model wrapper and example scripts. Additionally, we are highly interested in your feedback and suggestions for improvements!
This codebase has been built from gathering years of experience and pain points in evaluating RSFMs. We believe this will be very useful for a simple and more efficient evaluation of RSFMs in a research context.
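The wrapper idea described above, exposing intermediate blocks of a ViT-style encoder so that kNN, linear probing, or segmentation heads can consume features from several depths, can be sketched roughly as below. Class and method names are illustrative assumptions, not the actual GeoBreeze API, and a toy encoder stands in for a real RSFM.

```python
# Hypothetical sketch of a multi-block feature wrapper for linear probing.
import torch
import torch.nn as nn

class ToyViT(nn.Module):
    """Stand-in encoder: a stack of identical blocks over token embeddings."""
    def __init__(self, dim=32, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(depth))

    def forward(self, x):
        for blk in self.blocks:
            x = torch.relu(blk(x))
        return x

class MultiBlockWrapper(nn.Module):
    """Return pooled features from selected blocks, not just the final one."""
    def __init__(self, encoder, block_ids=(1, 3)):
        super().__init__()
        self.encoder, self.block_ids = encoder, set(block_ids)

    def forward(self, x):
        feats = []
        for i, blk in enumerate(self.encoder.blocks):
            x = torch.relu(blk(x))
            if i in self.block_ids:
                feats.append(x.mean(dim=1))  # pool tokens -> one vector per image
        return torch.cat(feats, dim=-1)      # concatenate depths for the probe

encoder = ToyViT()
wrapper = MultiBlockWrapper(encoder)
imgs = torch.randn(8, 16, 32)                # (batch, tokens, embedding dim)
features = wrapper(imgs)                     # (8, 64): two 32-dim depths
probe = nn.Linear(features.shape[-1], 10)    # linear probe over pooled features
logits = probe(features)
print(features.shape, logits.shape)
```

Because every downstream task sees the same concatenated-feature interface, one compact wrapper per model can serve kNN, probing, and fine-tuning alike, which is the design point the abstract makes.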

Speakers:


  • Leonard Waldmann - GMX
Add to Google Calendar

Friday 27 June 13:45 - 14:30 (Frontiers Agora)

Session: F.05.07 Women Trailblazers Round Tables - Session 4 - Frontiers of Technology

“Women trailblazers: the present and the future of ...” is a series of round tables focused on specific disciplines, including Earth Observation remote sensing, science, engineering, policy making and entrepreneurship, that recognizes women’s significant contributions, from senior figures to promising young professionals, whilst promoting female role models in our community.
The session will bring together prominent figures from diverse organisations, academia, industries, and associations to engage in a focused dialogue on collaborative strategies to address climate change and promote sustainable development. The main objective is to inspire and to discuss the current status and future development in Earth observation data and technologies to address climate change and promote sustainable development.

Speakers:


  • Anna Maria Trofaier - Cryosphere Scientist, European Space Agency (ESA)
  • Anna Jungbluth - Research Fellow, European Space Agency (ESA)
  • Rosie Willatt - Lecturer, University College of London
  • Valeria Gracheva - EO Microwave Payload Engineer, European Space Agency (ESA)

Add to Google Calendar

Friday 27 June 13:45 - 14:30 (ESA Agora)

Session: A.05.15 Rock up and pitch: Observations to support the next generation of climate modelling evaluation

The climate modeling and observation communities share a common goal: to understand and predict climate change and variability. The increasing resolution and lengthening record of observational data have enabled benchmarking of multiple models against reference data. The Coupled Model Intercomparison Project (CMIP) allows users to compare simulated results across a diversity of models and evaluate model performance against the observed world.
The Rapid Evaluation Framework (REF) is a new CMIP-led initiative to improve the availability of, and access to, evaluated simulations. A first iteration of this project is already underway to support the upcoming IPCC Seventh Assessment Report. This session will explore how planned activities and innovations in Earth observations can be harnessed to support and unlock the next generation of climate model evaluation and benchmarking.
Join us to discover the critical role of Earth observation datasets in advancing the REF. The session will cover observation requirements and briefings on model evaluation tools, including ESMValTool, and the expansion of Obs4MIPs to integrate with the REF. Attendees are invited to pitch ideas to a panel of Earth observation and modelling experts.

Presentations and speakers:


Overview of the Rapid Evaluation Framework and the role of Earth observations within it


  • Birgit Hassler - German Aerospace Center (DLR) and Model Benchmarking Task Team Co-lead
  • Briony Turner - CMIP International Project Office

Observation dataset requirements for model evaluation


  • Dora Hegedus - Science and Technology Facilities Council

Participating modelling community diagnostic and performance metric packages


  • Birgit Hassler - German Aerospace Center (DLR) and Model Benchmarking Task Team Co-lead

How to get datasets REF ready


  • Dora Hegedus - Science and Technology Facilities Council
Add to Google Calendar

Friday 27 June 13:45 - 14:30 (Nexus Agora)

Session: C.01.22 What Does the Future Look Like? Future EO Architectures for Science Observations

An interactive session on how to plan for future EO system scenarios in the long term. The session will focus on the observation needs of key science questions in 2040 and beyond. A short introduction to an ongoing ESA System of Systems study will outline the objectives, ongoing work and present priority questions to address.

Topics to be covered include a selection from:
• What are the most pressing science questions that EO will need to be tackling in 2040?
• What will the gamechangers be that fundamentally alter the way we do things?
• What is the role for commercial EO missions in providing data and/or products for science needs?
• What would the impact be on science with considerably more or considerably less EO capacity?
• How can the science community better communicate future needs to satellite data providers?

The session will encourage dialogue and interaction to help verify perceptions of current and future science requirements and observation gaps. This, along with perspectives on emerging needs, techniques and capabilities, will help inform candidate scenarios for subsequent analysis. Inputs are especially encouraged from early career researchers and practitioners who will be shaping and delivering the future of EO.
This session offers a valuable engagement opportunity for symposium participants, the study team and other stakeholders across the EO science community to raise awareness of the study and help inform the analysis. The conversation will help to corroborate the assessment of requirements and characterise future scenarios for the selected science questions. The study team includes partners from the previous ESA funded activity “EO Science Strategy Foundation Study (SSFS)” that provided important supporting inputs to the recently released ESA EO Science Strategy.

Speakers:


  • Dr. Jon Styles - Assimila
  • Ian Downey - Assimila
  • Prof. Stephen Briggs - Steeple Consulting
  • Dr. Emily Dowd - University of Leeds
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.14)

Session: F.03.01 Commercial EO missions data for marine and land applications - Part 3

EO data plays a crucial role in marine and land applications, providing vital information for environmental monitoring, biodiversity, disaster management, hydrology, urban planning, agriculture and forestry, oceanography and coastal management, and sea-ice monitoring. By leveraging commercial EO data, these applications can enhance decision-making processes, support sustainable development and help monitor the implementation of EU policies.
This session has been structured into three thematic parts, covering marine, land, and multidomain applications, to better highlight the diverse uses of commercial Earth Observation (EO) data. Each part will feature presentations from data and satellite owners, with a particular focus on commercial data providers, illustrating the complementary nature of commercial EO data with other satellite missions (ESA and non-ESA, including the Sentinel missions). The three parts also aim to exchange experiences on applications powered by commercial EO data, and the presentations will illustrate the commercial data offer, imaging capabilities and use cases.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.14)

Presentation: PAZ satellite, high quality radar imagery at the service of society

Authors: Juan Ignacio Cicuéndez
Affiliations: Hisdesat
The Spanish radar satellite PAZ, operated by the company Hisdesat, will celebrate its seventh anniversary in orbit in February 2025. In these first seven years of operations, PAZ will have acquired more than 175,000 images to cover the needs of governmental and private bodies, both national and international, and is part of the Copernicus Contributing Missions and Third Party Missions (TPM) programmes. PAZ is a strategic instrument thanks to its SAR technology, which guarantees images of the highest quality (up to 25 cm resolution) in the most adverse circumstances, day or night and regardless of weather conditions. The satellite is subject to strict orbital control that keeps it inside a tube with a radius of 250 m. This orbital tube is essential for operational and reliable interferometric applications, i.e. coherent change detection for subtle change control and ground deformation monitoring for infrastructure, mining and oil & gas activities. To date, hundreds of orbital maintenance maneuvers have been carried out to keep the satellite within the tube and to avoid potential collisions. The images captured by the PAZ satellite are crucial for managing risks and emergencies, assessing disasters, monitoring environmental activities, controlling critical infrastructure and surveilling maritime traffic, among many other applications. During 2023, the PAZ satellite was especially active during the fires that hit the island of La Palma, after having already provided a large interferometric dataset for the volcanic eruption in 2021. It also documented the earthquakes that occurred in Turkey and Syria, the crowd gathered in Lisbon for the celebration of World Youth Day, and the breach of the Kakhovka dam in Ukraine. It has also closely followed the movements of iceberg A23a, which has been moving away from Antarctica since late 2023.
In 2024, PAZ again put its full capacity into action: it monitored the DANA floods in Valencia, followed up volcanic activity near the Icelandic town of Grindavík, monitored the water level of several reservoirs in Spain and closely tracked the collapse of the Francis Scott Key Bridge in Baltimore (USA), among other events. The PAZ satellite systematically contributes to the Copernicus Marine, Security and Emergency Services, as well as to scientific projects through the ESA TPM programme. Commercially, major infrastructure operators, mining companies, environmental agencies and others worldwide are benefiting from services derived from PAZ data through the PAZ/TerraSAR-X constellation.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.14)

Presentation: GEISAT Precursor Mission for Methane Detection and Quantification: Advancing Satellite-Based Environmental Monitoring

Authors: Cristina Lavín, Joshua Lizundia-Loiola, Iñigo Irizar, Mikel Gardeazabal, Uxue Untzilla, Iñaki Salaverria, Markel Aramberri
Affiliations: SATLANTIS MICROSATS S.A.
Methane is the second most abundant greenhouse gas after carbon dioxide (CO2), with a global warming potential approximately 28 times greater than that of CO2 over a 100-year period, amplifying its role as a key contributor to climate change. Addressing methane emissions is therefore essential to effectively mitigate global warming and achieve international climate targets. Anthropogenic sources account for about 60% of global methane emissions, with agriculture, waste management, and energy production being major contributors. Among these, the oil and gas industry alone is responsible for nearly 40% of human-induced methane emissions. Methane’s relatively short atmospheric lifetime compared to CO2 offers a unique opportunity to achieve rapid and substantial climate benefits by reducing its emissions. This urgency is reflected in global initiatives like the Global Methane Pledge, which aims to cut methane emissions by 30% by 2030 relative to 2020 levels. The European Union has also adopted recent regulations targeting methane emissions from the energy sector, highlighting the growing focus on this critical issue. To support these goals, advancements in methane detection and quantification technologies have been explored, with satellite-based monitoring emerging as a transformative tool. Satellites provide global coverage and enable continuous monitoring of methane emissions, especially from large-scale sources like oil and gas operations. This capability significantly outperforms ground-based methods in detecting emissions that contribute substantially to the global methane budget. The GEISAT Precursor mission, designed as a foundational step for the planned GEISAT constellation, was developed to meet the critical demand for precise and reliable methane monitoring. Launched in June 2023, the 16U CubeSat represents a groundbreaking achievement in satellite-based environmental monitoring. 
Developed by Satlantis, the satellite is equipped with the advanced iSIM-90 VNIR SWIR camera, a proprietary imaging system designed for environmental applications. The camera operates across VNIR (400–900 nm) and SWIR (700–1700 nm) spectral bands and employs Multispectral Differential Photometry (MDP) to measure methane absorption. By capturing wavelength-specific variations in light attenuation caused by atmospheric methane, the system ensures optimal sensitivity and accuracy. Moreover, the Ultra-High-Resolution (UHR) algorithm further enhances spatial resolution, simultaneously providing imagery at ~2 m resolution for VNIR, ~5 m for SWIR and ~13 m for methane products. GEISAT Precursor delivers two main data products, referred to as UHR and CH4. UHR-Precision products provide TOA radiance (L2A) and surface reflectance (L2B) high-resolution multispectral imagery for generic purposes. L2 CH4 is the product resulting from the MDP, comprising an image of the estimated methane column densities above background concentrations (CH4) and the uncertainties associated with the estimates (CH4U). The GEISAT Precursor mission was selected as an Emerging CCM Category 1 mission for the Atmospheric Composition domain in 2023. The CCM Category 1 contract involves a rigorous calibration and validation (CAL/VAL) campaign supported by ESA and the Atmospheric Mission Performance Cluster (ATM-MPC). These efforts involve periodic assessments of radiometric, spatial, and geometric calibration to ensure the delivery of high-quality data. Additionally, CAL/VAL activities for methane data products include multiple quality assessment and validation processes. Currently, this process involves comparing methane measurements from GEISAT Precursor with those from other instruments, such as Sentinel-2, over well-characterized persistent emitters. 
This cross-validation includes processing both GEISAT Precursor and third-party sensor images using the MDP and assessing their consistency in terms of (1) methane column densities and (2) emission flow rates. Additionally, the CCM Category 1 contract includes various demonstration activities aiming to showcase the value of GEISAT Precursor’s data in specific use cases for Copernicus Services, such as the systematic monitoring of critical oil and gas infrastructure and the use of a Tipping and Cueing approach. In this regard, Sentinel-5P's data (daily revisit, 2600 km swath, 5×7.5 km² resolution) is complemented by GEISAT Precursor’s higher spatial resolution data (~3-day revisit, 8 km swath, 13 m resolution). This Tipping and Cueing approach combines these complementary advantages to pinpoint, in a timely manner, the precise location of point-source methane emissions, which is valuable for the Copernicus Atmosphere Monitoring Service. Looking ahead, the planned GEISAT constellation will significantly enhance methane monitoring capabilities worldwide by 2026. This constellation will include additional satellites with expanded spectral coverage up to 2500 nm, improved temporal resolution, and innovative features such as onboard data pre-processing and polarimetry to reduce noise. These advancements aim to deliver comprehensive and cost-effective methane monitoring, reinforcing global efforts to combat climate change. In summary, the GEISAT Precursor mission showcases the transformative potential of cost-effective, high-performance satellite technology in greenhouse gas monitoring. Its high spatial resolution and simultaneous VNIR and SWIR capabilities underscore its complementarity with existing Copernicus assets, such as Sentinel-5P, positioning it as a crucial step toward substantial methane reductions and advancing effective climate change mitigation.
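The differential-photometry principle behind the CH4 product can be illustrated with a minimal sketch: under a simple Beer–Lambert assumption, the extra attenuation in a CH4-absorbing SWIR band relative to a nearby reference band yields the excess methane column. The function name, the effective absorption coefficient `k_eff` and the single band-ratio formulation are illustrative assumptions, not the mission's actual retrieval.

```python
import numpy as np

def mdp_methane_enhancement(r_ch4, r_ref, k_eff):
    """Illustrative differential-photometry retrieval (NOT the GEISAT algorithm).

    Estimates the excess methane column dX from the radiance ratio of a
    CH4-absorbing SWIR band (r_ch4) to a nearby reference band (r_ref),
    using a Beer-Lambert model:
        r_ch4 / r_ref ~= exp(-k_eff * dX)  =>  dX ~= -ln(r_ch4 / r_ref) / k_eff
    k_eff is a hypothetical effective absorption coefficient; its units
    determine the units of the returned column enhancement.
    """
    ratio = np.clip(np.asarray(r_ch4, float) / np.asarray(r_ref, float),
                    1e-12, None)  # guard against zeros before taking the log
    return -np.log(ratio) / k_eff
```

In such a scheme, a plume would appear as the set of pixels whose returned enhancement exceeds the background noise level.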
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.14)

Presentation: Advancing Atmospheric Research and Methane Mitigation with High-Resolution GHGSat Data

Authors: Carles Debart
Affiliations: GHGSat
Methane, a potent short-lived greenhouse gas, is responsible for approximately 30% of global warming since the pre-industrial era. Understanding and addressing methane emissions has been an important focus of the global research community in recent years. Anthropogenic methane emissions, commonly associated with leaks in the O&G industry and landfills, also arise from a variety of less well-known human-driven activities such as coal mining, agriculture, farming, hydro-power generation and transport. These emissions, typically faint and diffuse, are invisible to global mappers such as Sentinel-5P due to sensor constraints. The GHGSat constellation, whose first demonstration satellite (Claire-D) was launched back in 2016 and which currently operates 12 units in orbit, has triggered new areas of research in the atmospheric monitoring domain by providing data with a resolution, detection threshold, revisit and delivery time that greatly improve on and complement those of public missions, effectively enabling not only scientific excellence but also policy making and even mitigation action at local scale. In particular, GHGSat's greater sensitivity has uncovered, through applied research, emissions in sectors like palm oil production or farming, enhancing global understanding of emission contributions in the agricultural sector, which is predominant in the global south. Additionally, thanks to the resolution, those emissions can not only be understood but also attributed to specific facilities. In terms of revisit frequency (up to daily) and delivery time (down to 6 h from tasking), GHGSat has demonstrated applications not previously thought possible for a commercial smallsat provider in the atmospheric domain, such as facilitating near-real-time decision-making during critical events, or supporting disaster response efforts such as after the devastating earthquake in Turkey in 2023. 
In this presentation, we will present key examples and success stories of all those lesser-known types of methane emissions that have been discovered through research activities with GHGSat data, fuelled by public-private collaboration programmes such as the ESA Third Party Missions programme (ESA TPM), the NASA Commercial Small Satellite Data Acquisition Program (CSDA) and the UK Space Catapult data sharing agreement.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.14)

Presentation: Enhancing Marine, Land and Atmospheric Applications with Planet Labs' Commercial Earth Observation Missions: Pelican and Tanager Constellations and Advanced Data Platforms

Authors: Paula Fernandez Del Valle, Mr Diego Vanelli, Alana Belcon
Affiliations: Planet Labs
Planet Labs has established itself as a leader in the commercial Earth observation sector, with a mission to image the entire Earth's surface every day and make global change visible, accessible, and actionable. The company's fleet of small satellites—including the PlanetScope and SkySat constellations—has revolutionized how we observe our planet by providing high-cadence, high-resolution imagery to a wide range of users engaged in land applications. Building upon this foundation, Planet Labs is advancing its capabilities with the development of two new satellite constellations: Pelican and Tanager. These constellations are designed to deliver very high-resolution and hyperspectral imaging, respectively, marking significant technological advancements in Earth observation and remote sensing for marine, land and atmospheric applications. The Pelican constellation represents the next generation of high-resolution optical satellites. Designed to supersede the current SkySat constellation, Pelican aims to provide higher spatial resolution imagery with improved image quality, higher revisit rates, and increased agility. The satellites in the Pelican constellation are engineered with advanced optics and sensor technology, enabling them to capture finer details of Earth's surface than ever before. This level of detail is crucial for land applications requiring precise spatial information, such as detailed urban mapping, infrastructure monitoring, agriculture management, and environmental assessment. Moreover, the enhanced revisit capabilities of Pelican—with multiple passes over the same location within a single day—allow for near real-time monitoring of dynamic land events, such as natural disasters, deforestation, and changes in land use. In parallel, the Tanager constellation introduces hyperspectral imaging capabilities to Planet Labs' suite of Earth observation services. 
Hyperspectral imaging involves capturing imagery across hundreds of narrow and contiguous spectral bands, extending beyond the visible spectrum into the near-infrared and short-wave infrared regions. This technology provides detailed spectral information for each pixel, enabling the monitoring of greenhouse gas emissions and the identification and characterization of materials based on their spectral signatures. Planet Labs is committed to making its data accessible and useful to a wide range of users, from governments and large enterprises to researchers and non-profit organizations. To this end, Planet offers Analysis-Ready Data Products such as Analysis Ready PlanetScope and Planetary Variables. Analysis Ready PlanetScope data are pre-processed to surface reflectance and corrected for atmospheric effects, making them immediately usable for analysis without the need for additional processing. Planetary Variables are derived data products that provide scientifically validated measurements of key environmental indicators, such as biomass, soil moisture, land surface temperature and vegetation indices. These variables enable users to monitor and model environmental changes over time, supporting land applications like agricultural forecasting, drought assessment, and ecosystem monitoring. To facilitate access to these data products and derive actionable insights, Planet Labs has developed the Planet Insight Platform. This platform provides users with intuitive tools and interfaces to access, visualize, and analyze Planet's vast archive of imagery and data products. By leveraging cloud-based infrastructure and advanced analytics, the Planet Insight Platform allows users to perform complex analyses, integrate data into workflows, and generate reports and visualizations that support decision-making processes in land management, agriculture, forestry, and urban planning. 
This presentation will focus on how Planet Labs' commercial Earth observation missions are advancing EO based applications. We will explore case studies demonstrating the practical benefits of these data products in land monitoring, agriculture optimization, deforestation tracking, and urban development. Additionally, we will discuss how the integration of these advanced datasets with the Planet Insight Platform enhances users' ability to analyze and interpret complex land-related phenomena.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.14)

Presentation: Covert Remote Maritime Observation with Radar in Near-Real Time (CORMORANT): Airbus’ new, fully automated vessel detection and tracking solution

Authors: Dr Pasquale Iervolino, Dr Frazer Christie, Dr Thomas Higginbottom, Dominik Rains, Dr Liam Harris, Mr Thomas Harling, Mr Tim Pattison, Mrs Vanessa Rueda Mata
Affiliations: Airbus Defence and Space, Geospatial Business UK, Airbus Defence and Space, Space Digital
Airbus Geospatial Business UK (GBUK) has significant experience in Earth Observation and satellite image processing in the maritime domain, covering the full Tasking, Collecting, Processing, Exploitation and Dissemination cycle. In particular, GBUK provides near-real-time extended remote monitoring services, systems and solutions to the global maritime market, improving the safety and security of assets. Our combination of imagery and analytics ultimately provides customers with detailed surveillance information and actionable maritime intelligence. Here, we present an overview of our ‘Covert Remote Maritime Observation with Radar in Near-Real Time’ (CORMORANT) service – a fully automated solution capable of producing bespoke, customer-specified Vessel Detection Reports (VDRs) in near real-time. CORMORANT has to date been adopted by several maritime and governmental organisations, including the UK MoD, and produces VDRs through the fusion of Synthetic Aperture Radar (SAR) imaging and Automatic Identification System (AIS) radio messages broadcast by cooperative vessels. At a high level, the CORMORANT service provides two modes of operation: (1) an ‘emergency mode’ for the urgent monitoring of suspicious vessel activity across the globe, and (2) a ‘general mode’ for routine, pre-planned vessel identification in areas of interest chosen by the customer. CORMORANT also enables the classification of ‘dark’ vessels in key areas of interest (that is, the automatic identification and reporting of vessels not transmitting AIS messages), and is optimised to ingest both commercial (TerraSAR-X, TanDEM-X, PAZ, NovaSAR-1, ICEYE and Capella) and open-source (Sentinel-1) SAR data. To provide this maritime service, GBUK has created and currently operates an established workflow capable of delivering VDRs within 1 hour of image acquisition over Europe (achieving a best of 22 minutes). 
This workflow operates 24/7, 365 days per year and can generate reports anywhere on the globe, day or night. The service relies on physical scattering and artificial intelligence (AI) models to produce the VDRs, and is hosted across Airbus’ internal infrastructure and cloud storage components. In 2022, GBUK further demonstrated its ability to produce VDRs within the ESA Data Hub Relay (DHR) software (using Copernicus Sentinel-1 data as input) by implementing the VDR processor as a bespoke plugin. Within the DHR software, a seamless experience is provided to users, allowing them to select a Sentinel-1 input image for a given area of interest and time period within a Copernicus Browser-like GUI, from which VDRs are then automatically computed and provided via a download link. More recently, CORMORANT’s core functionality has been enriched through: 1) enabling the co-production of VDRs based on very high resolution optical imagery from the Airbus constellation (SPOT, Pleiades and Pleiades Neo); 2) the release of a new online portal for easy visualisation and browsing of VDRs and their input EO data; and 3) integrating spaceborne, passive radio frequency (RF) signals (e.g. those emitted by X-band and S-band marine radars) for enhanced vessel identification and monitoring purposes. Overall, the GBUK maritime solution has five main innovative features:
• Completely automated. Output VDRs fusing satellite imagery and AIS data are generated without manual input, constituting an unsupervised service that can operate 24/7/365.
• Superior performance. Reports are provided with a very high detection rate (more than 90%) and low false alarm probabilities (less than 10^-7 on average).
• High flexibility. Additional input data sources can be integrated into our workflow with ease, meaning that VDRs can be generated from a wide variety of satellite image sources in a ‘satellite agnostic’ way.
• Fast delivery time. Reports are provided in near-real time, typically within one hour of satellite acquisition over Europe and its surrounding waters.
• Scalable solution. Our solution is fully customisable to align with customer requirements, including, for example, area of interest size, input sensor type (commercial and/or freely available) and frequency of VDR generation (daily/weekly/monthly).
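The SAR/AIS fusion step underpinning dark-vessel classification can be sketched as a simple spatial correlation: any SAR detection without a nearby AIS report is flagged as 'dark'. This is a toy illustration under stated assumptions (equirectangular distances, no time interpolation of AIS tracks, a hypothetical 500 m tolerance), not the CORMORANT processor.

```python
import math

def classify_dark_vessels(sar_detections, ais_positions, max_dist_m=500.0):
    """Toy SAR/AIS correlation (illustrative, not the operational algorithm).

    sar_detections and ais_positions are lists of (lat, lon) in degrees.
    A detection with at least one AIS report within max_dist_m is labelled
    'correlated'; otherwise it is labelled 'dark'.
    """
    def dist_m(p, q):
        # Equirectangular approximation: adequate for short distances.
        mean_lat = math.radians((p[0] + q[0]) / 2.0)
        dx = math.radians(q[1] - p[1]) * math.cos(mean_lat) * 6371000.0
        dy = math.radians(q[0] - p[0]) * 6371000.0
        return math.hypot(dx, dy)

    results = []
    for det in sar_detections:
        matched = any(dist_m(det, ais) <= max_dist_m for ais in ais_positions)
        results.append((det, "correlated" if matched else "dark"))
    return results
```

An operational system would additionally propagate each AIS track to the SAR acquisition time before matching and weigh detection confidence, but the nearest-report test above captures the core idea.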
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.49/0.50)

Session: E.01.01 EO for Cultural and Natural Heritage: from innovation to user uptake - PART 2

Cultural and Natural Heritage (CNH) have a major significance for local communities as an identity factor and a facilitator of social cohesion. Furthermore, CNH are strategic assets for tourism, and their promotion and valorisation contribute to countries' economic development. Preservation of CNH is therefore essential, also towards strengthened sustainability and resilience to current global challenges, as promoted by the United Nations’ 2030 Agenda for Sustainable Development.
Dedicated sessions held during the last two editions of the Living Planet Symposium helped to unveil the potential and benefits of Earth Observation (EO) for CNH. Satellite data and technologies from Copernicus and Contributing Missions are already playing a key role in enabling novel, transformative and cost-effective solutions to undertake a variety of specific tasks, such as: archaeological prospection, landscape archaeology, multi-temporal monitoring, risk assessment, impact assessment due to anthropogenic activities (e.g. urbanization, illegal excavations), natural hazards (e.g. floods, earthquakes) and climate change.
Public administrations and authorities in charge of CNH management are more aware of these applications, and so more remote sensing and geospatial companies are developing tailored services. At the same time, CNH has become an established application area in Copernicus and its services, e.g. Support to EU External and Security Actions (SESA) Service, Emergency Management Service (EMS) and Climate Change Service (C3S). Furthermore, various initiatives have been launched by ESA (e.g. Downstream Gateway, ARTES IAP) and national space agencies to support industry in developing downstream applications in the CNH sector, and more alliances have been established with national and international bodies such as UNESCO and ICOMOS.
In this context, this session aims to understand how EO scientists, CNH user community, institutions and industry are partnering and cooperating to enable novel applications, improve those that are already being delivered, and facilitate the user uptake to make EO data and technologies from Copernicus, Contributing Missions and commercial missions more deeply embedded into operational workflows for study, monitoring, preservation and promotion of CNH.

The session encourages submissions focusing on:
• Solutions based on the exploitation of satellite data, as well as exploitation of Artificial Intelligence, Machine Learning, thematic platforms, cloud computing resources and infrastructure, collaborative environments;
• Benefits from the use of Copernicus products and services, also in relation to impacts due to climate change and towards future resilience in CNH management;
• Use cases addressing specific user requirements and needs in the field of either CNH discovery, study, monitoring, preservation or cultural/touristic promotion;
• Success stories and best practices of EO integration in operational systems, workflows and processes on CNH;
• Downstream applications, with a focus on multidisciplinary collaboration and partnerships between heritage institutions, academia and commercial providers;
• Initiatives of capacity building towards user uptake by the CNH community and end-users.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.49/0.50)

Presentation: COSMO-SkyMed InSAR data within the “Italian Extraordinary Plan for Monitoring and Conservation of Cultural Heritage” for instability assessment of built cultural heritage: examples on the historical sites Pienza, Orvieto, Civita di Bagnoregio (Italy)

Authors: Silvia Bianchini, Dr. Irene Centauro, Dr Anna Palamidessi, prof. Veronica Tofani, Ing. Daniele Spizzichino, Deodato Tapete, Maria Virelli
Affiliations: UNESCO Chair on Prevention and Sustainable Management of Geo-Hydrological Hazards, University of Florence, Earth Sciences Department, University of Florence, ISPRA - Italian Institute for Environmental Protection and Research, Italian Space Agency (ASI)
Satellite InSAR data, and in particular multi-temporal interferometric techniques such as PSI (Persistent Scatterer Interferometry), are being increasingly exploited for condition assessment of built cultural and natural heritage (CNH). PSI-based approaches are very flexible and suitable for back-monitoring and multi-scale analysis, from entire regions down to individual structures, especially when exploiting high-resolution data from satellite missions, e.g. SAR imagery acquired in X-band by the Italian COSMO-SkyMed constellation (ASI – Italian Space Agency). The PSI data obtained from X-band SAR images enhance the analysis by providing greater detail, since many radar benchmarks appear on rooftops and facades, enabling effective support for mapping and monitoring ground movements that may impact urban built heritage. Nowadays, greater attention is devoted to raising awareness among public authorities, heritage practitioners and stakeholders of operational satellite-based services for estimating the instability of built CNH sites, and to collecting user requirements and feedback on the use of satellite data, with a view to integrating such space-borne data into existing datasets and platforms at local, national and international administrative levels. In Italy, the Extraordinary Plan for Monitoring and Conservation of Cultural Heritage (Piano Straordinario Nazionale di Monitoraggio e Conservazione dei Beni Culturali Immobili) has been managed and financed by the national Ministry of Culture since 2022. It involves many Italian sites of high historical-cultural value, fostering an integrated satellite-terrestrial investigation system that makes it possible to assess hazard factors at different territorial scales and to highlight any critical situation affecting cultural and natural heritage. 
Thus, thanks to the exploitation of satellite InSAR data, the aim of the Extraordinary Plan is the remotely sensed assessment of potential damage and instability of cultural heritage, to be cross-compared with on-site and at-desk geospatial data, for screening the areas and supporting planning and regular management, as well as allowing the timely start of extraordinary conservation interventions, if needed. In this study, we present the procedure and results of the analysis performed within the Extraordinary Plan by the UNESCO Chair on Prevention and Sustainable Management of Geo-Hydrological Hazards of the University of Florence (Italy) on several Italian historical sites, using multi-temporal interferometric data at high spatial and temporal resolution from the COSMO-SkyMed mission provided by the Italian Space Agency. In particular, over Pienza, Orvieto and Civita di Bagnoregio (Italy) we used point-wise PSI targets derived from COSMO-SkyMed SAR images acquired in ascending and descending orbits over the 2015–2023 time span to study the spatial distribution of ground deformation and the temporal patterns of displacement at building scale. PSI data were integrated and cross-compared with different information (i.e. satellite interferometric data archives, cartographic and geological data, patrimonial inventory databases, geomorphological zonation and the location of underground cavities, if present) in order to support the radar interpretation, and were finally validated by on-site surveys performed in 2024. Outputs based on remotely sensed ground movement gradients and in situ observations allowed the identification of attention sites across the whole urban fabric of the selected historical cities, and can be used for prioritizing large-scale inspection efforts, as well as for identifying potential buildings in need of further in-depth investigation. 
Overall, the work framed within the Italian Extraordinary Plan for Monitoring and Conservation of Cultural Heritage is an effective example of an initiative of InSAR usage and capacity building towards a standardized roadmap and user uptake by the CNH community and end-users.
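The use of both ascending and descending PSI data mentioned above is commonly exploited to decompose line-of-sight (LOS) velocities into vertical and east-west components, neglecting north-south motion (to which near-polar SAR orbits are largely insensitive). The sketch below solves the resulting 2×2 linear system; the sign convention and incidence angles are illustrative assumptions, not the study's actual processing chain.

```python
import numpy as np

def decompose_asc_desc(v_asc, v_desc, theta_asc_deg, theta_desc_deg):
    """Illustrative asc/desc LOS decomposition (one common sign convention).

    For a right-looking sensor, with incidence angles t_a (ascending) and
    t_d (descending), a simplified projection neglecting N-S motion is:
        v_asc  = v_up*cos(t_a) - v_east*sin(t_a)
        v_desc = v_up*cos(t_d) + v_east*sin(t_d)
    Solving this 2x2 system recovers (v_up, v_east) from the two LOS
    velocities (same units in and out, e.g. mm/yr).
    """
    t_a, t_d = np.radians([theta_asc_deg, theta_desc_deg])
    A = np.array([[np.cos(t_a), -np.sin(t_a)],
                  [np.cos(t_d),  np.sin(t_d)]])
    v_up, v_east = np.linalg.solve(A, np.array([v_asc, v_desc]))
    return v_up, v_east
```

Applied pixel-wise to co-located ascending and descending PSI targets, such a decomposition helps separate subsidence from horizontal (e.g. slope-driven) movement when interpreting deformation at building scale.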
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.49/0.50)

Presentation: Assessing the Impact of Stone Buildings on Urban Heat Island Formation in Historical Urban Centres Under Current and Future Climate Conditions: introducing a new methodology to enable wider use of thermal EO data for Heritage

Authors: JoAnn CASSAR
Affiliations: University of Malta
Urban Heat Islands (UHIs) pose significant environmental and health challenges in urbanized regions, particularly within historical urban centres with a high concentration of buildings and narrow streets. Our study aims to understand the contribution of the buildings, their layout and distribution, and their traditional roofs to UHI effects in localized urban areas, and to project the future impact of climate change on these buildings, the roof types and street-level temperatures. For the first time in Malta, we are conducting comprehensive monitoring of specific streets, roofs and roofing materials, collecting physical data—including air and surface temperature, humidity, and wind speed—both inside and outside selected buildings to understand the physical behaviour of these structures and their impact on the thermal environment of historical urban centres. The recently launched project UrbanClimate, funded by the XjenzaMalta Space Fund 2023, will include acquiring and processing remotely sensed thermal infrared data from urban historical environments using the high-resolution 30 m TIR from the SDGSAT-1 satellite, unmanned aerial vehicles (UAVs), hand-held devices, and in-situ sensors. The innovation here, differing from the previous EO4HBCS project, which only looked at traditional roof behaviour, is that we will employ generative AI and analysis to build high-resolution thermal mapping from the combination of all our measurements. Geographic Information System (GIS) mapping and model overlays will also be employed for data exploration and analysis using current and projected climate data, facilitating the communication of general spatial thermal patterns and trends under current and future climatic conditions. This new methodology is then expected to open up a whole new area of research on cultural heritage and its responses to current and future climates. 
By integrating thermal data with spatial and climatological analysis, the study also aims to provide critical insights into how local buildings with traditional roofs (and others) contribute to UHI formation both currently and in the future. These findings should underscore the importance of selecting appropriate roofing materials and designs in urban planning to mitigate UHI effects, for both historic and other buildings. Anticipating future climatic scenarios, the research aims to highlight the need for adaptive strategies to reduce thermal heat stress in urban populations. Future work will then focus on expanding this approach to larger urban areas in Malta and elsewhere in the Mediterranean, and on exploring the efficacy of various mitigation techniques to enhance urban resilience against climate change-induced heat stress. (UrbanClimate grant agreement number: SRF-2023-1S1; EO4HBCS: https://www.mdpi.com/2571-9408/4/4/196)
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.49/0.50)

Presentation: Remote Sensing for the Archeological Heritage

Authors: Manuela Ferri, Emanuele Mele, Fabio Malvarosa, Marco Latini, Andrea Schiappelli
Affiliations: E-geos, Archeological Park of Colosseum
The archaeological park monitoring project is being developed by e-GEOS following the award of a tender issued by the Italian Ministry of Culture. Under the coordination of the Colosseum Archaeological Park, e-GEOS is responsible for the two-year monitoring of six archaeological parks on Italian territory:
• Colosseum
• Ancient Ostia
• Ercolano
• Paestum
• Pompei
• Phlegraean Fields
The areas covered by the project will be monitored quarterly over a 24-month period. At present, two monitoring campaigns have already been completed and delivered. The scope of the project is to assess and monitor, for each area of interest, the stability of the ground and changes in land use. Both phenomena have a high impact on the conservation status and preservation of the exposed heritage. The monitoring is thus aimed at supporting and facilitating the park managers in their institutional activities, allowing them to prevent damage to the archaeological heritage. The ground stability monitoring is carried out by applying e-GEOS's IFSAR methodology to COSMO-SkyMed and Sentinel-1 data. The use of two different datasets allows shifted monitoring campaigns over most of the parks, alternating quarterly CSK and S1 analyses, in order to produce an output every 45 days. In the case of the two larger parks, Ancient Ostia and Phlegraean Fields, the analyses are performed simultaneously, but the S1 dataset is used to cover a wider area than that covered by the CSK analysis. The monitoring of changes in land use is carried out quarterly by interpreting VHR optical satellite data, in order to detect changes in buildings, vegetation and soil and to identify potential threats to the safeguarding and integrity of the site's archaeological heritage.

Friday 27 June 14:30 - 16:00 (Room 0.49/0.50)

Presentation: SATCULT - Closing Knowledge Gaps Through Interdisciplinary Collaboration - New Perspectives for Cultural Heritage Managers by Accessing and Analysing EO Data

Authors: Dr. Karin Drda-Kühn
Affiliations: media k GmbH
Earth observation (EO) data, as an early warning system and a tool for cultural heritage-related preservation activities, can make a decisive contribution to cultural heritage (CH) protection. The preservation of cultural heritage urgently needs support to better cope with the effects of climate change, as many sites are already suffering from the consequences of extreme heat, flooding and storms. New possibilities for protecting CH are also opening up in monitoring war damage, calculating protection scenarios, and preventing looting of excavations and vandalism. However, there is still a lack of awareness of these opportunities and, even more importantly, missing knowledge about how to use EO data, because the CH workforce is not yet qualified to record, analyse and take precautionary measures. So far, EO data are used almost exclusively in the archaeological sector, and even there only to a limited extent. With the SATCULT project (“SATCULT – Closing a knowledge gap by vocational training about satellite-based services in cultural heritage preservation”), the conditions under which CH organisations would be able to acquire the necessary knowledge through vocational training are investigated and recorded for the first time. SATCULT is a capacity-building initiative towards the uptake of EO data by the CH community, funded by the European ERASMUS+ scheme. It is an opportunity to make the targeted groups, networks, and initiatives from CH preservation and their training providers aware of the immense potential of EO data analysis, and of translating the data into concrete action in heritage preservation. The SATCULT partnership of media k GmbH (Germany / coordinator), the Institute of Heritage Sciences at the Consiglio Nazionale delle Ricerche (Italy) and the Eratosthenes Centre of Excellence (Cyprus) focuses on practical support measures. Currently, the SATCULT team is preparing a collection of success stories and good practices of EO integration in operational systems, workflows and processes in CH.
The collection will be communicated to around 6000 European CH institutions, 250 geoinformation specialists, and 200 training providers to raise awareness of the potential for cooperation. In the next steps, the training needs of the CH community Europe-wide (i.e. staff in public and private CH institutions) will be recorded, identified and analysed in order to define the learning content and skills required to benefit from EO data. The status and outcome of both steps will be presented at the symposium. Within 18 months, the project is expected to set up a pool of European EO experts who would be willing and able to support CH institutions in respective preventive measures. The overall aim is to facilitate the user uptake of EO data and technologies into operational workflows for heritage institutions and to create sustainable cooperation between EO experts and CH professionals, thereby opening a common space for EO-to-heritage research and exploring business opportunities.

Friday 27 June 14:30 - 16:00 (Room 0.49/0.50)

Presentation: Driving Innovation in Cultural and Natural Heritage Management: Institutional Partnerships and Industry Innovation

Authors: Jolanda Patruno, Maria Michela Corvino, Gordon Campbell, Mario Hernandez
Affiliations: ESA/ESTEC, ESA/ESRIN, EARSeL
In October 2024, international experts convened for a joint workshop organized by the European Space Agency (ESA) in collaboration with the European Association of Remote Sensing Laboratories (EARSeL). Ten years after the last co-organized event, this workshop focused on the application of Earth Observation (EO) technologies to support Cultural and Natural Heritage management and preservation. The discussion revisited progress since the joint ESA/UNESCO Open Initiative (2001), which called upon space agencies, researchers, and the private sector to apply EO technologies to monitor World Heritage sites. Over the past quarter-century, significant advancements have been achieved, with industry playing an increasingly prominent role in enhancing EO applications for heritage and, at the same time, with space agencies providing an increasingly valuable EO data portfolio. Key Workshop Insights: State of EO in Heritage Management. The 2024 ESA-EARSeL workshop identified four thematic areas where EO supports heritage initiatives: Monitoring, Climate and Awareness, Preserving and Preventing, and Emerging Techniques. These categories highlighted how technology and interdisciplinary collaboration are addressing long-standing challenges in heritage management. Participants included a wide array of stakeholders: remote sensing scientists, archaeologists, cultural heritage managers, private sector innovators, and representatives from national and European organizations. Their input underscored the importance of integrating cutting-edge technologies developed by industry into traditional heritage conservation practices. The past few years have seen transformative contributions from industry to Cultural and Natural Heritage management. Some of the most impactful advancements include: 1. Artificial Intelligence and Machine Learning (AI/ML): AI/ML applications have revolutionized EO data analysis.
Companies now offer automated tools that detect changes in heritage sites, identify risks, assess structural integrity of historical buildings and detect illicit trafficking of cultural goods. AI algorithms enable high-speed processing of multispectral and hyperspectral data, providing unprecedented insights into vegetation health, soil conditions, and environmental stress affecting natural heritage landscapes. Despite the great potential of automated site or risk identification, the actors involved still face technical challenges related to spatial resolution, data processing and training, and false-positive analysis, to name a few. A common effort in exploiting this technique is still required. 2. High-Resolution Satellite Imaging: Advancements in high-resolution satellite imaging have revolutionized the ability to monitor and manage cultural and natural heritage sites. High-resolution imagery, now widely available from both public and commercial sources, contributes to detailed analysis and documentation of heritage sites, addressing challenges such as environmental degradation, illegal activities, and urban encroachment. This becomes more interesting and efficient when integrated with UAV systems: this hybrid approach enhances spatial resolution and provides ground-truthing capabilities, particularly useful for archaeological surveys or monitoring hard-to-access locations. To cite a few examples, the Italian Space Agency (ASI) and ESA have been instrumental in advancing high-resolution EO for heritage management. ASI's COSMO-SkyMed constellation offers high-resolution Synthetic Aperture Radar (SAR) data, valuable for assessing structural changes in monuments and landscapes. ESA's missions, including Sentinel-2, provide medium-to-high resolution optical imagery, enabling the monitoring of changes in vegetation, urban sprawl, and natural hazards affecting heritage areas.
Thanks to the partnership with EO commercial high-resolution data providers, professionals can benefit from imagery platforms capable of capturing fine-scale changes, such as the erosion of ancient structures or shifts in land use surrounding protected areas. 3. Climate Modeling for Heritage Risk Assessment: Private sector tools integrating EO data with climate models are helping heritage managers predict risks such as flooding, sea level rise, and extreme weather events. 3D modeling and Digital Twin technologies play a crucial role in this context, enabling long-term planning to safeguard both cultural landmarks and natural ecosystems. By creating accurate digital replicas of historical structures, site managers can analyze wear and tear, simulate restoration techniques, and even create interactive experiences for education and tourism. 4. Internet of Things (IoT) for Monitoring: IoT-enabled sensors, when combined with EO data, are contributing to the effectiveness of cultural and natural landscapes management. These systems facilitate real-time tracking of environmental parameters, including temperature, humidity, pollution levels, and vibrations, providing critical alerts about conditions that might accelerate degradation. For instance, IoT devices installed at heritage sites can detect microclimatic changes, helping site managers mitigate risks such as mold growth, structural stress, or corrosion. When paired with EO data from satellites, these sensors demonstrate their effectiveness through the integration of local, high-frequency data with large-scale observations such as land use changes or urban encroachment. 
Some examples include the heritage site monitoring in Venice (Italy), where IoT sensors track tidal movements and structural shifts in historic buildings, complemented by EO data from the Copernicus Sentinel-1 SAR mission and InSAR processing technique, providing millimeter-scale subsidence monitoring; and the disaster preparedness in Greece, where IoT sensors placed at archaeological sites monitor seismic activity and environmental stress, enhanced by high-resolution commercial EO data, ensuring faster response in emergencies. Future Directions and Collaboration The workshop emphasized the need for continued partnerships between public organizations like ESA, National Space Agencies, EARSeL and private industry. Fostering collaboration between ESA, other space agencies, national institutions, and private companies is vital to overcoming challenges such as data fragmentation and limited interoperability between different systems. A long-term vision for Cultural and Natural Heritage management must include: - Developing open platforms that combine EO data with advanced analytics for accessible use by researchers and heritage managers. - Enhancing interdisciplinary research to connect EO experts with cultural heritage professionals. - Promoting training and capacity-building initiatives to ensure that new technologies are widely understood and utilized. Looking ahead, several areas for exploration and advancement have been identified: • Strengthening the integration of Earth Observation with the Sustainable Development Goals (SDGs) to address global challenges related to heritage, climate, and development. • Enhancing Big Data management capabilities, particularly leveraging Artificial Intelligence (AI) to process and analyze large-scale datasets efficiently. 
• Increasing the availability and utilization of data at different spatial resolutions. For instance, current and upcoming missions will cover a large range of SAR frequencies (X-, C-, L-, S- and P-band: ALOS-4, NISAR, BIOMASS, Rose-L…) which, if integrated, can provide valuable instruments for monitoring (the System of Systems formed by Sentinel-1 FG/NG and Rose-L, flying in the same orbit with the same geometry, will routinely provide operational C- and L-band images with wide swaths of hundreds of km and relatively high resolution, 5 m on the ground). In recent years, moreover, the availability of high spatial resolution imaging spectroscopy data (i.e. ~30 m pixel size) has increased its use in the CH sector thanks to the successful deployment of PRISMA (ASI), DESIS (DLR), HISUI (METI) and EnMAP (DLR), paving the way for the development of future missions such as PRISMA Second Generation (ASI) and CHIME (ESA). Upcoming and current optical and hyperspectral missions, beyond their native scope, are set to enhance monitoring, preservation, and management of heritage sites, as well as to support environmental management efforts for both natural and built heritage. Conclusion: The 2024 ESA-EARSeL workshop demonstrated the immense potential of EO technologies for Cultural and Natural Heritage management, particularly when enhanced by innovations from industry. The event highlighted the importance of collaboration, technological advancements, and a shared vision to preserve heritage for future generations while addressing complex challenges such as climate change, urbanization, and environmental degradation. The workshop also identified areas where further progress is needed. Notably, there is a pressing need for a common language between EO and heritage professionals. EO experts often struggle to understand heritage requirements, while heritage experts find EO terminology and concepts challenging.
To address this, participants, ESA and EARSeL representatives agreed on holding bi-annual awareness workshops designed to bridge this gap, ensuring that the two communities can collaborate more effectively. By building on this foundation, the global community can ensure EO continues to play a pivotal role in safeguarding humanity’s heritage and addressing the interdisciplinary challenges of cultural and natural landscape management.

Friday 27 June 14:30 - 16:00 (Room 0.49/0.50)

Presentation: Monitoring of Cultural Heritage Assets in 3D+ Virtual Space, A Methodological Framework Utilizing “AIDL4CH”

Authors: Tamer Özalp
Affiliations: Researchturk Space Co.
The world is constantly changing and becoming more complex in every aspect. In this complex environment, cultural heritage has become increasingly important. Heritage sites play a crucial role in global economic and cultural activities, and today's world seeks to monitor, assess, and preserve cultural heritage assets as part of sustainable development. These assets are often complex structures at large spatial scales, which makes visualization and analysis challenging. Once a heritage site has been lost, damaged, or destroyed, the restoration process is frequently slow and, in certain instances, unfeasible. The digitization of cultural heritage assets is a crucial tool for conserving, renovating, studying, and promoting European cultural resources. A lack of awareness about the benefits of digitization among culture professionals and consumers may hinder progress, and the digital skills gap remains a major barrier to making these technologies a powerful tool for everyone. Limited visibility and awareness among the general public is a further problem, as mass media tends to focus on issues of broader interest; as a result, the general population has a limited understanding of the value of advanced technology services in everyday life. There is still a lack of research on the methodology and applied strategy of digital technology used for heritage monitoring and protection. A lack of systematic knowledge and sufficient information, including inventory, mapping, and monitoring of trends, changes, and major driving factors, is an additional weakness. Current efforts are often isolated and the methods relatively simple, which hinders a comprehensive understanding and a systematic response to the sustainable protection of cultural heritage. Additionally, the younger generation and students show little interest in careers related to cultural heritage. Due to the nature of heritage information, regularly monitoring and inspecting all cultural heritage assets is a challenging task.
Additionally, manual evaluation is time-consuming and costly due to the substantial labor required. Innovative tools, such as laser scanning and the Internet of Things, are used to inspect CH structures, alongside traditional vision methods. Traditional methods may not fully represent these assets, hindering detailed analysis and decision-making. Novel monitoring and detection systems are required to streamline and simplify the auditing process of CH assets. One of the recent advancements in digital technology is AI-based automation. The approach involves developing AI-powered algorithms within a space-supported monitoring process, together with advanced deep learning models, to detect damage and monitor digital replicas of cultural heritage assets on a 3D/4D virtual representation of the Earth's surface within a 3D+ virtual environment, thereby addressing asset degradation. The method consists of exploratory data acquisition using Unmanned Aerial Vehicles (UAVs), creating a Digital Twin, and developing a virtual-globe-based monitoring portal. This enhances the precision and effectiveness of detecting damage and change, ultimately creating a 3D+ virtual space. Early detection of damage in cultural heritage assets is crucial for preventive conservation. The model enables timely identification of potential threats to cultural heritage assets, allows for proactive intervention and preventive measures, and facilitates the documentation, monitoring, and preservation of cultural heritage.

Friday 27 June 14:30 - 16:00 (Room 1.15/1.16)

Session: A.07.01 Soil moisture - PART 2

Over the past two decades, the technique of microwave remote sensing has made tremendous progress in providing robust estimates of surface soil moisture at different scales. From local to landscape scales, several field and aircraft experiments have been organised to improve our understanding of active and passive microwave soil moisture sensing, including the effects of soil roughness, vegetation, spatial heterogeneities, and topography. At continental scales, a series of passive and active microwave space sensors, including SMMR, AMSR and ERS/SCAT, provided information on surface soil moisture. Current investigations in L-band passive microwave with SMOS and SMAP, and in active microwave with the MetOp/ASCAT series and Sentinel-1, enable an accurate quantification of soil moisture at regional and global scales. Building on the legacy of these missions, operational programmes like Copernicus, as well as novel developments, will further enhance our capabilities to monitor soil moisture and ensure continuity of multi-scale soil moisture measurements on climate time scales.

At the same time, the field of metrology has received more and more attention from the Earth observation community, which has led to a growing awareness of the concept of error traceability and the necessity of well-characterized, so-called “Fiducial Reference Measurements” (FRMs). As a consequence, research has put a new focus on obtaining traceable error budgets for soil moisture products and improved ground reference data.

We encourage submissions related to soil moisture ground and remote sensing, including:
- Global soil moisture estimation from coarse resolution active and passive sensors.
- High spatial resolution soil moisture estimation based on e.g. Sentinel observations, GNSS reflections, or using novel downscaling methods.
- Field experiment, theoretical advances in microwave modelling and calibration/validation activities.
- Root zone soil moisture retrieval and soil moisture data assimilation in land surface models, hydrological models and in Numerical Weather Prediction models.
- Evaluation and trend analysis of soil moisture climate data records such as the ESA CCI soil moisture product as well as soil moisture from re-analysis.
- Inter-comparison and inter-validation between land surface models, remote sensing approaches and in-situ validation networks.
- Progress towards the estimation of SI-traceable uncertainty budgets including uncertainty characterization across scales.
- Application of satellite soil moisture products in scientific and operational disciplines.


Friday 27 June 14:30 - 16:00 (Room 1.15/1.16)

Presentation: Joint Exploitation of Sentinel-1 and ROSE-L SAR Data for Soil Moisture Retrieval

Authors: Giovanni Anconitano, Elena Arabini, Alessandro Patacchini, Nazzareno Pierdicca
Affiliations: Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome
Soil moisture is an Essential Climate Variable (ECV) playing an important role in different science applications involving agricultural practices, such as irrigation management and crop monitoring. Synthetic Aperture Radar (SAR) missions offer the possibility to monitor soil moisture at both global and field scales, thanks to their capability of acquiring data regardless of sunlight conditions, as opposed to optical missions. Nevertheless, most studies have focused on the use of single-frequency datasets (i.e., L-band, C-band, or X-band) [1-3]. Integration of multi-frequency SAR data potentially makes it possible to separate the effects due to the dielectric properties of the soil (i.e., soil moisture variations) from those related to other geophysical parameters (i.e., surface roughness) and vegetation [4]. In addition, multi-configuration platforms could improve the temporal sampling of soil moisture maps, thanks to an enhanced revisit time, as well as the accuracy of the estimates. The use of multi-frequency SAR data for soil moisture retrieval is also motivated by the fact that more and more satellites carrying SAR sensors at different frequencies are becoming available, such as the ESA Copernicus C-band Sentinel-1, the Argentinian L-band SAOCOM, and the forthcoming L-band and S-band NASA-ISRO SAR (NISAR) mission. In particular, the Sentinel-1 C-band system will be complemented by the future Copernicus Radar Observing System for Europe at L-band (ROSE-L) mission, planned to be launched in 2028. This will offer the unprecedented opportunity to create a multi-platform radar observatory acquiring SAR data in a systematic way, relying on the short revisit time offered by the two platforms. ROSE-L is designed to fly along the same orbit as Sentinel-1. In this context, a requirement to cover the same swath at both L-band and C-band has already been specified, to achieve an optimal superimposition in space of the images and foster multifrequency applications.
Nevertheless, for soil moisture applications, it has not yet been assessed whether it is preferable to fly the two satellites very close to each other to improve the accuracy of the product, or alternatively to phase the orbits to achieve a revisit time of three days with the same observation geometry, thus improving the temporal resolution. In addition, unlike the dual-polarization capability of Sentinel-1, ROSE-L is expected to acquire fully polarimetric data. In this study, we investigate the performance offered by a synergistic use of L-band and C-band SAR data to improve soil moisture retrieval over bare and vegetated agricultural areas. The study is carried out in the frame of the “SAR-L: consolidamento della scienza” project, funded by the Italian Space Agency (ASI). The multi-temporal algorithm proposed in [5], which implements a Bayesian statistical criterion that minimizes a cost function with respect to a set of soil parameters, has been extended to the multi-frequency case (i.e., L-band and C-band) and tested on both simulated and real SAR data. Performance is evaluated by comparing different acquisition timings (single, alternated, or coincident) as well as polarization configurations (dual-polarization, quad-polarization). In addition, the results aim at answering which is the best orbital configuration for the “System of Systems” formed by ROSE-L and Sentinel-1. The simulated dataset has been created by generating images over a fictitious agricultural area. L-band and C-band maps of the backscattering coefficient sigma-naught at different polarizations have been simulated using the semi-empirical soil scattering model discussed in [6]. The Water Cloud Model in [7] has been used to add vegetation attenuation and scattering to the images. The performance of the algorithm for bare and vegetated soils has been evaluated.
Regarding bare soils, the performance offered by using both frequencies (coincident configuration) and generating a product every six days is found to be generally better than the case in which a single frequency is considered. However, the alternated configuration (i.e., three-day revisit time) achieves comparable performance when C-band is acquired, keeping the L-band performance when L-band is acquired. This result suggests that there could be a good compromise between temporal resolution and accuracy. Note that the good performance when acquiring C-band is mainly due to the information about soil roughness gathered from the previous L-band acquisitions which, according to the forward model, are more sensitive to the considered range of roughness. This mechanism can only be exploited thanks to the power of the multitemporal approach. Regarding vegetated soils, the inversion of the Water Cloud Model within the retrieval scheme is performed to mitigate the vegetation effects. Results showed that the performance deteriorates significantly when using single C-band data. Since the removal of vegetation effects is a major issue in soil moisture retrieval and subject to errors, the larger penetration of the L-band signal highlights the higher performance of ROSE-L. However, it is worth noting that in the alternated configuration the C-band acquisition outperforms single C-band. Finally, the algorithm was tested using time series of alternated L-band SAOCOM-1A and C-band Sentinel-1A SAR data collected over wheat fields in the Monte Buey site, an agricultural area located in the Córdoba Province, Argentina. The experiment does not allow testing the coincident configuration. As expected, the results confirmed that single L-band acquisitions offer better estimates than single C-band. However, the L-band performance improves when the two frequencies are merged. [1] S.-B. Kim, T.-H.
Liao, Robust retrieval of soil moisture at field scale across wide-ranging SAR incidence angles for soybean, wheat, forage, oat and grass, Remote Sensing of Environment, Volume 266, 2021, https://doi.org/10.1016/j.rse.2021.112712. [2] B. Bauer-Marschallinger et al., "Toward Global Soil Moisture Monitoring With Sentinel-1: Harnessing Assets and Overcoming Obstacles," in IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 1, pp. 520-539, Jan. 2019, doi: 10.1109/TGRS.2018.2858004. [3] M. El Hajj, N. Baghdadi, M. Zribi, G. Belaud, B. Cheviron, D. Courault, F. Charron, Soil moisture retrieval over irrigated grassland using X-band SAR data, Remote Sensing of Environment, Volume 176, 2016, Pages 202-218, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2016.01.027. [4] F.T. Ulaby, D.G. Long, W.J. Blackwell, C. Elachi, A.K. Fung, C. Ruf, K. Sarabandi, H.A. Zebker, and J. Van Zyl, “Microwave radar and radiometric remote sensing,” Ann Arbor: The University of Michigan Press. 2014. [5] Pierdicca N., Pulvirenti L., Bignami C., “Soil moisture estimation over vegetated terrains using multi-temporal remote sensing data”. Remote Sensing of Environment, vol. 114, pp. 440-448, Feb. 2010. [6] Y. Oh, K. Sarabandi and F. T. Ulaby, "Semi-empirical model of the ensemble-averaged differential Mueller matrix for microwave backscattering from bare soil surfaces," IEEE Transactions on Geoscience and Remote Sensing, vol. 40, no. 6, pp. 1348-1355, June 2002, doi: 10.1109/TGRS.2002.800232. [7] Attema, E.P.W. and Ulaby, F.T., 1978. Vegetation modeled as a water cloud. Radio Science Journal, 13, 357–364.
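The Water Cloud Model of Attema and Ulaby [7], used in this abstract to add vegetation attenuation and scattering to the simulated images, has a compact closed form. The sketch below is a minimal illustration of that standard formulation; the function name and the choice of the vegetation descriptor V (e.g. vegetation water content) and empirical parameters A and B are assumptions for illustration, as their calibrated values are crop- and frequency-dependent and not given in the abstract.

```python
import numpy as np

def water_cloud_sigma0(sigma0_soil_db, V, theta_deg, A, B):
    """Water Cloud Model (Attema & Ulaby, 1978): total canopy backscatter.

    sigma0_soil_db : bare-soil backscatter coefficient [dB]
    V              : vegetation descriptor (e.g. vegetation water content)
    theta_deg      : incidence angle [degrees]
    A, B           : empirical vegetation parameters (assumed, crop-dependent)
    Returns total backscatter [dB] = canopy volume term + attenuated soil term.
    """
    theta = np.deg2rad(theta_deg)
    sigma0_soil = 10.0 ** (sigma0_soil_db / 10.0)      # dB -> linear power
    tau2 = np.exp(-2.0 * B * V / np.cos(theta))        # two-way canopy attenuation
    sigma0_veg = A * V * np.cos(theta) * (1.0 - tau2)  # vegetation volume scattering
    total = sigma0_veg + tau2 * sigma0_soil            # add attenuated soil return
    return 10.0 * np.log10(total)
```

For V = 0 the model collapses to the bare-soil backscatter, which is the limiting case the bare-soil experiments in the abstract rely on; inverting the model for the soil term is what the retrieval scheme does to mitigate vegetation effects.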

Friday 27 June 14:30 - 16:00 (Room 1.15/1.16)

Presentation: Incorporating Variable Rooting Depths into the Global ESA CCI Root-Zone Soil Moisture Product

Authors: Johanna Lems, Alexander Gruber, Wolfgang Preimesberger, Wouter Dorigo
Affiliations: Technical University of Vienna
Root-Zone Soil Moisture (RZSM) is critical for understanding hydrological processes, predicting droughts, and improving agricultural yield forecasts. Despite its explicit relevance to zones of active root uptake, most existing RZSM products are estimated for static soil layers between 0 and 1 meters, overlooking the variability in rooting depths across ecosystems. This study addresses this gap by developing a globally dynamic RZSM product that accounts for spatial differences in the effective rooting depths of plants. Effective rooting depth, derived using a carbon cost-benefit model, incorporates both the vertical extent of roots and the spacing between plants, ranging from less than 0.4 m in arid regions to over 1.5 m in tropical areas. The RZSM product is based on the global ESA CCI COMBINED v9.1 soil moisture dataset, covering the period from 1991 to 2023. RZSM is estimated using a one-parameter exponential filter applied to surface soil moisture retrievals from microwave remote sensing observations. The infiltration parameter, acting as a smoothing and delaying factor for moisture infiltration to deeper layers, is calibrated using a spatially representative dataset of in-situ measurements. By modeling each unique effective rooting depth with a distinct infiltration parameter, we created a globally dynamic RZSM product that captures the variability in rooting depths. Validation against ERA5-Land reanalysis soil moisture data showed strong alignment with model estimates at comparable depths. By more accurately representing the water accessible to plants, this novel product offers a better tool for assessing water availability for active root uptake. The development of ESA CCI and C3S SM has been supported by ESA’s Climate Change Initiative for Soil Moisture (Contract No. 4000104814/11/I-NB & 4000112226/14/I-NB) and the Copernicus Climate Change Service implemented by ECMWF through C3S 312a Lot 7 & C3S2 312a Lot 4 Soil Moisture.
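The one-parameter exponential filter mentioned in the abstract is commonly implemented in the recursive form of Wagner et al. (1999) as reformulated by Albergel et al. (2008); the sketch below follows that standard formulation, where the characteristic time length T plays the role of the infiltration parameter that the abstract calibrates per effective rooting depth (the function name is illustrative).

```python
import numpy as np

def exponential_filter(ssm, t, T):
    """Recursive exponential filter: surface soil moisture -> Soil Water Index.

    ssm : surface soil moisture retrievals (1-D array)
    t   : observation times in days (1-D array, strictly increasing)
    T   : characteristic time length [days]; larger T delays and smooths
          infiltration more, representing a deeper effective root zone
    Returns the Soil Water Index (SWI), a proxy for root-zone soil moisture.
    """
    swi = np.empty(len(ssm), dtype=float)
    swi[0] = ssm[0]
    K = 1.0  # recursive gain, initialised to 1
    for n in range(1, len(ssm)):
        K = K / (K + np.exp(-(t[n] - t[n - 1]) / T))   # gain update
        swi[n] = swi[n - 1] + K * (ssm[n] - swi[n - 1])  # exponential smoothing
    return swi
```

Mapping each effective rooting depth to its own calibrated T, as the abstract describes, then amounts to running this filter per grid cell with a spatially varying T.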

Friday 27 June 14:30 - 16:00 (Room 1.15/1.16)

Presentation: High Resolution Soil Hydraulic Properties Estimation From Remotely-Sensed Soil Moisture Time Series.

Authors: Robert Carles-Marqueño, Martí Perpinyà-Vallès, Dr Maria José Escorihuela, Dr Jesús Peña-Izquierdo
Affiliations: Lobelia Earth, isardSAT
Accurate estimation of soil hydraulic properties, particularly Field Capacity (FC) and Wilting Point (WP), collectively known as Water Holding Capacity (WHC), is a cornerstone of effective water resource management in agriculture and environmental systems. Traditionally, these properties are obtained through soil sampling and laboratory analysis or indirectly estimated using Pedo Transfer Functions (PTFs) based on soil composition data. While these methods provide valuable insights, they often assume static soil conditions and fail to fully account for dynamic influences such as precipitation, evapotranspiration, and land management practices. In this study, we propose a novel methodology for the dynamic estimation of FC and WP using long-term soil moisture time series downscaled to 100 m resolution using the DISPATCH methodology. DISPATCH combines SMAP soil moisture data with Landsat-derived Land Surface Temperature (LST) and Normalized Difference Vegetation Index (NDVI) to achieve fine-scale soil moisture estimates. This enables a nuanced characterization of soil hydraulic properties at high spatial resolution while incorporating both spatial and temporal soil moisture variability. Our approach leverages four years of soil moisture observations to estimate WHC metrics for a single year, creating a time series of WHC that evolves realistically while remaining stable over time. This stability reflects the intrinsic resilience of soil hydraulic properties while allowing for meaningful adaptations driven by environmental dynamics. An additional feature of our methodology is its integration of soil texture data, which is used to refine WHC estimates for different soil types adaptively. This contextual refinement enhances both spatial relevance and accuracy. 
To address potential uncertainties in the input data and modeling process, we incorporate an uncertainty layer in our estimations, quantifying confidence levels across different soil moisture conditions and landscapes. We validated our approach against in-situ stations from the International Soil Moisture Network (ISMN), which include measurements of FC and WP. Comparative analysis with SoilHydroGrids was performed to benchmark our results. Our methodology demonstrated a significant reduction in the Root Mean Square Error (RMSE) for Field Capacity, improving from 0.15 m³/m³ to 0.09 m³/m³. Additionally, it slightly outperformed laboratory measurements for Wilting Point. Notably, our approach captures a broader and more realistic range of WHC values, closely matching the variability observed in in-situ data. This study highlights the potential of using remote sensing soil moisture time series, downscaled with the DISPATCH methodology, for dynamic and adaptive estimation of soil hydraulic properties. By integrating temporal soil moisture dynamics, soil texture information, and an uncertainty quantification layer, our approach offers a robust and scalable framework for advancing precision agriculture and sustainable water management. This methodology represents a significant advancement in characterizing soil hydraulic behavior, addressing key challenges posed by changing climates, evolving land use practices, and the need for high-resolution, temporally dynamic insights.
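The abstract does not detail how FC and WP are extracted from the downscaled soil moisture record. Purely as an illustrative assumption (not the authors' method), one common first-order approach derives them from the wet and dry tails of a multi-year soil moisture distribution; the percentile choices below are hypothetical defaults.

```python
import numpy as np

def whc_from_timeseries(sm, fc_pct=95.0, wp_pct=5.0):
    """Illustrative Field Capacity / Wilting Point estimate from a
    multi-year soil moisture time series [m3/m3].

    NOTE: the percentile approach and the default percentiles are
    assumptions for illustration, not the method described in the abstract.
    """
    sm = np.asarray(sm, dtype=float)
    sm = sm[np.isfinite(sm)]           # drop gaps in the record
    fc = np.percentile(sm, fc_pct)     # wet tail: soil drains to ~FC after rain
    wp = np.percentile(sm, wp_pct)     # dry tail: evaporation stalls near WP
    return fc, wp, fc - wp             # Water Holding Capacity = FC - WP
```

Running such an estimator over a rolling multi-year window would yield the kind of slowly evolving, yet stable, WHC time series the abstract describes.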

Friday 27 June 14:30 - 16:00 (Room 1.15/1.16)

Presentation: How comparable are soil moisture retrievals from monostatic and bistatic radar measurements?

Authors: Wolfgang Wagner, Oto
Affiliations: TU Wien
While the retrieval of soil moisture data from monostatic scatterometer and synthetic aperture radar (SAR) measurements has a long tradition, the use of microwave pulses transmitted by Global Navigation Satellite Systems (GNSS) satellites and reflected off the land surface is a relatively recent development. Essentially, GNSS Reflectometry (GNSS-R) is a bistatic radar technique that measures forward scattering at L-band. GNSS-R retrievals exploit the fact that the measured reflections typically increase with soil moisture. A commonly made assumption is that the signals originating near the specular reflection point contain strong coherent scattering components from the soil surface, often through vegetation. Since coherent signals are assumed to stem from a relatively small zone around the reflection point, GNSS-R soil moisture retrievals are often said to have a high spatial resolution, on the order of kilometres. However, measurements from L-band SAR sensors show that backscatter from sparsely vegetated land surfaces can vary significantly, indicating that the assumption that soil surfaces are smooth at L-band is not universally applicable. Additionally, it is well known that mirror-like reflections from smooth inland water surfaces generate very strong forward scattering signals, even from small water bodies. As a result, GNSS-R signals from water bodies are much stronger than those from surrounding land surface areas, while the reverse is true for monostatic radar measurements. Due to these pronounced differences in forward and backward scattering mechanisms, it is not clear how soil moisture retrievals from GNSS-R measurements compare to scatterometer and SAR retrievals. In this contribution, we compare available soil moisture data from the Cyclone Global Navigation Satellite System (CYGNSS) constellation with coarse-resolution ASCAT (~15 km) and high-resolution Sentinel-1 (1 km) soil moisture data.
Additionally, we compare all three datasets with passive soil moisture retrievals (SMAP, SMOS), modelled soil moisture data (ERA5-Land), and high-resolution water body data sets. Our preliminary findings indicate that CYGNSS soil moisture data compare reasonably well with ASCAT soil moisture retrievals, both in terms of the spatial resolution and retrieval accuracy. On the other hand, the spatial resolution of the CYGNSS soil moisture data appears to be much lower compared to that of Sentinel-1. These findings are difficult to reconcile with the view that, over land, GNSS-R measurements are dominated by coherent signal components from a small area around the specular reflection point. Our results show that more research is needed to better understand the different physical mechanisms governing monostatic and bistatic radar measurements from land surfaces.

Friday 27 June 14:30 - 16:00 (Room 1.15/1.16)

Presentation: Soil Moisture Retrieval Using SAOCOM L-Band SAR and Radiative Transfer Algorithm: A Game-Changer for Landslide Monitoring

Authors: Divyeshkumar Rana, Raphael Quast, Wolfgang Wagner, Paolo Mazzanti, Francesca Bozzano
Affiliations: Department of Earth Sciences, Sapienza University of Rome and CERI, Research Centre for Geological Risk, P.le Aldo Moro n.5, 00185; TU Wien, Department of Geodesy and Geoinformation
Landslides represent a critical threat to infrastructure, ecosystems, and human safety, particularly in regions where soil moisture conditions are highly variable. Soil moisture is a key factor influencing landslide susceptibility, as it directly impacts soil stability. Increased soil moisture results in saturation, which raises the weight of the soil, diminishes its cohesion and reduces friction between particles. These effects collectively destabilize the soil, particularly on steep slopes, increasing the likelihood of landslides. Additionally, elevated soil moisture levels amplify pore pressure, causing the soil to behave as though it were less dense, further compromising its structural integrity. Thus, understanding and monitoring soil moisture dynamics are essential for effective landslide hazard assessment and mitigation. This study employs SAOCOM L-Band Synthetic Aperture Radar (SAR) data in combination with a radiative transfer algorithm to retrieve soil moisture in Petacciato, a landslide-prone area in the Molise region of Italy. By integrating radiative transfer modeling with supplementary soil moisture products and in-situ measurements, the study evaluates the effectiveness of SAOCOM L-Band SAR in detecting soil moisture dynamics. The results reveal a strong correlation between SAR-derived soil moisture, reference datasets, and ground-based measurements, underscoring the robustness of this approach in enhancing landslide monitoring capabilities. Furthermore, the research incorporates the Antecedent Precipitation Index (API) as a key indicator of soil moisture. The API, calculated using precipitation data from the days preceding an event, offers valuable insights into retained soil moisture from prior rainfall events. The observed linear correlation between the API and soil moisture dynamics further substantiates its relevance as a predictive tool for assessing landslide susceptibility. 
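The Antecedent Precipitation Index mentioned above is commonly computed as an exponentially decaying sum of daily precipitation; a minimal sketch follows, where the decay factor k = 0.9 is a hypothetical choice, not a value taken from the study:

```python
def antecedent_precipitation_index(precip, k=0.9):
    """Antecedent Precipitation Index (API): a recursive, exponentially
    decaying accumulation of daily precipitation,
        API_t = k * API_{t-1} + P_t.
    `precip` is a sequence of daily totals (mm); `k` is the decay factor."""
    api = 0.0
    series = []
    for p in precip:
        api = k * api + p  # yesterday's moisture memory plus today's rain
        series.append(api)
    return series

# 5 mm of rain followed by four dry days: the API decays by k each day
api = antecedent_precipitation_index([5, 0, 0, 0, 0], k=0.9)
```

The decaying series captures how moisture retained from prior rainfall fades over time, which is what makes the API a useful proxy for soil moisture preceding a landslide-triggering event.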
This study highlights the transformative potential of combining L-Band SAR data with radiative transfer algorithms to advance landslide risk assessment methodologies, with Petacciato serving as a case study. The findings emphasize the reliability of this integrated approach for continuous soil moisture monitoring in landslide-prone regions. Moreover, the use of SAOCOM L-Band SAR data and radiative transfer modeling proves particularly effective in complex and agricultural terrains, providing a significant improvement in landslide monitoring and risk management strategies.
KEYWORDS: Synthetic Aperture Radar (SAR), Microwave, SAOCOM L-Band, Radiative Transfer (RT1), Soil Moisture, Antecedent Precipitation Index (API), Landslide Monitoring, Mass Movement
Reference: Rana D., Quast R., Wagner W., Mazzanti P., and Bozzano F., "Soil moisture retrieval in slow-moving landslide region using SAOCOM L-band: A radiative transfer model approach", under review at IEEE JSTARS.

Friday 27 June 14:30 - 16:00 (Room 1.15/1.16)

Presentation: Retrieving Soil Moisture with GNSS-IR and Machine Learning

Authors: Laura Crocetti, Elissa Dib, Benedikt Soja, Matthias Aichinger-Rosenberger
Affiliations: ETH Zurich
GNSS-Interferometric Reflectometry (GNSS-IR) is an innovative remote sensing technique enabling users to infer information about environmental parameters such as soil moisture, snow depth, or vegetation water content. This method uses reflected signals transmitted by GNSS satellites. By analyzing the Signal-to-Noise Ratio (SNR) observations, which are influenced by interference between direct and reflected signals, GNSS-IR can provide insights about ground surface properties. The interference pattern depends on the satellite's elevation angle, the signal wavelength, and the GNSS antenna's height relative to the reflecting surface. However, vegetation surrounding GNSS sites significantly alters ground reflection properties, introducing systematic errors and biases in derived soil moisture estimates. The classical approach cannot easily account for this, thus requiring the development of new methodologies. This research focuses on soil moisture retrieval with GNSS-IR by enhancing the vegetation modeling framework using machine learning. It is part of the Global Climate Observing System (GCOS) Switzerland project “Machine-learning based Advancement and usability assessment of GNSS Interferometric Reflectometry for Climatological studies in Switzerland” (MAGIC-CH). GNSS SNR data from regional and global networks are explored and serve as features, while satellite-based soil moisture is used as the target. The preliminary framework is based on the XGBoost algorithm and uses GNSS SNR data from different station networks in Switzerland, e.g., the Automated GNSS Network for Switzerland (AGNES) and Nagra's permanent GNSS Network (NaGNet), together with the 1 km surface soil moisture product provided by the Copernicus Global Land Monitoring Service. Additionally, the Normalized Difference Vegetation Index (NDVI) is included as a feature to represent the vegetation.
The performance of the machine learning framework is evaluated by comparing the results with soil moisture derived from existing GNSS-IR methods and in-situ measurements. This work seeks to advance GNSS-IR methodologies for improved environmental monitoring and accurate soil moisture estimation.
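The feature/target setup described above can be sketched as follows. Everything here is a synthetic illustration: the SNR-derived metrics, the NDVI values, and the coefficients are invented, and a plain linear least-squares fit stands in for the XGBoost model named in the abstract:

```python
import numpy as np

# Hypothetical per-observation features: GNSS-IR SNR metrics (e.g. phase
# and amplitude of the interference pattern) plus NDVI as a vegetation
# feature; the satellite soil moisture product serves as the target.
rng = np.random.default_rng(42)
n = 200
snr_phase = rng.normal(0.0, 1.0, n)   # SNR-derived phase metric
snr_amp = rng.normal(0.0, 1.0, n)     # SNR-derived amplitude metric
ndvi = rng.uniform(0.1, 0.9, n)       # vegetation feature
X = np.column_stack([snr_phase, snr_amp, ndvi, np.ones(n)])

# Synthetic soil moisture target generated from known coefficients
true_w = np.array([0.05, 0.02, -0.08, 0.25])
y = X @ true_w + rng.normal(0.0, 0.005, n)

# Linear stand-in for the gradient-boosted model: fit and evaluate
w, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse = np.sqrt(np.mean((X @ w - y) ** 2))
```

In the actual framework a nonlinear learner such as XGBoost would replace the least-squares step, precisely so that vegetation effects that are not linearly separable in the SNR features can still be modeled.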

Friday 27 June 14:30 - 16:00 (Room 1.61/1.62)

Session: F.05.03 Tracking conflict impacts: Earth Observation for socio-economic resilience and global policy support

In the context of increasing global conflicts, the ability to monitor and assess their socio-economic and environmental impacts has never been more critical. This session will explore how Earth Observation (EO) technologies are being utilized to provide essential data that supports governments and international institutions in developing policies and strategies for resilience and recovery.
It will examine the practical benefits of using EO data in conflict zones, including damage assessment, support for economic recovery, environmental restoration, and humanitarian assistance, enabling evidence-based decision-making across sectors such as energy, climate, and sustainability. Participants will discuss how EO-derived information is applied to reduce conflict impacts, assist in post-conflict reconstruction, conduct effective damage assessment, and encourage sustainable development.
By deepening the understanding of the socio-economic benefits of EO, this session will support European and international institutions in their efforts to enhance resilience and promote sustainable development in conflict-affected areas.

Friday 27 June 14:30 - 16:00 (Room 1.61/1.62)

Presentation: Assessing the long-term impacts of bombing during the Vietnam War on land use/land cover changes in Southeast Asia

Authors: Philipp Barthelme, Eoghan Darbyshire, Doug Weir, Dominick Spracklen, Dr Gary Watmough
Affiliations: University of Edinburgh, Conflict and Environment Observatory, University of Leeds
This study combines declassified historical data with Earth Observation data to determine the effects of bombing on post-war land use/land cover (LULC) changes in Vietnam and Laos. The Vietnam War left a deep and lasting legacy across Southeast Asia, where bombings, herbicide spraying, and other military activities disrupted ecosystems and livelihoods. During the conflict, over 7.5 million tons of bombs were dropped, and more than 74 million liters of herbicides were sprayed across Vietnam, Laos, and Cambodia, creating widespread environmental damage and contamination. Decades after the conflict ended, these impacts persist, with about 20% of the land still contaminated by unexploded ordnance (UXO) and the long-term health effects of dioxin exposure from herbicides affecting thousands of people. Despite these ongoing challenges, incomplete and imprecise historical records on the locations of bombing and spraying activities hinder efforts to map the exact locations of remaining UXO and residual dioxin. At the same time, the long-term consequences of this contamination for both the environment and local community livelihoods remain poorly understood. Our previous work has focused on using recently declassified very high-resolution (0.6-1.2 m) KH-9 HEXAGON and high-resolution (1.8-2.7 m) KH-4 CORONA spy satellite imagery, taken during and after the war, for a more precise damage assessment. This included the automatic detection of bomb craters using machine learning methods as well as the manual delineation of herbicide spray lines visible in the imagery. Studying the long-term effects of bombing and herbicide spraying on post-war LULC changes additionally required knowing the LULC before and during the war.
To achieve this, we georeferenced more than 800 historical topographic maps (scale 1:50,000) created by the US Army Map Service between 1965 and 1973 that cover most of Vietnam and Laos and contain detailed information about LULC including locations of villages, roads and footpaths. We manually labeled a random subset of the maps which we used to train machine learning methods to automatically extract the LULC information. We compared different image segmentation methods, and found that the UNet and Segformer models performed the best with macro F1-scores >0.8. We then applied the best model to all available map sheets producing a final LULC product covering most of Vietnam and Laos at a spatial resolution of 10 m. To determine the effect of bombing on post-war LULC changes we aggregated existing data products derived from earth observation data across grid cells of 2 km and 10 km covering Vietnam and Laos. This included global and regional LULC products, nighttime lights as a proxy for economic development and population density for multiple post-war time steps. We then calculated the number and weight of all bombs dropped on each grid cell using declassified US bombing records. To determine the effect of bombing on each of these variables, we fit a spatio-temporal regression model that additionally controls for geographic and location confounders as well as LULC during the war derived from the topographic maps. We found that bombing is positively associated with post-war deforestation in both Vietnam and Laos, in particular with forest plantations in Vietnam and shifting cultivation in Laos. Bombing is negatively associated with post-war changes in rice paddies in Vietnam and in rainfed cropland in Laos, indicating that people were more reluctant to use the land for agriculture due to the potential danger of unexploded ordnance. While bombing is negatively associated with post-war changes in night-time lights in Laos, it is positively associated in Vietnam. 
Moreover, bombing is negatively associated with post-war changes in population density in Laos but not in Vietnam. Our results therefore show a complex relationship between bombing and post-war LULC changes that varies on a local, regional and country scale. Having access to this detailed disaggregated information is crucial to enable evidence-based decision-making for UXO clearance and to support sustainable development in the region. Our findings underscore the significant long-term impacts of conflict on environment and society, and demonstrate the benefits of integrating earth observation data for tracking impacts of past conflicts.
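The gridding step described above (aggregating declassified bombing records onto 2 km cells before fitting the regression) can be sketched with synthetic points; the coordinates, weights, and tile size below are hypothetical, not from the study's data:

```python
import numpy as np

# Synthetic point records standing in for declassified bombing data:
# strike locations (in km on a local grid) and bomb weights (kg).
rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 5000)   # easting of each recorded strike (km)
y = rng.uniform(0, 100, 5000)   # northing (km)
w = rng.uniform(50, 500, 5000)  # bomb weight (kg)

# Aggregate to a 2 km grid over a 100 x 100 km tile: per-cell count
# and per-cell total weight, the two bombing covariates used per cell.
edges = np.arange(0, 102, 2)
counts, _, _ = np.histogram2d(x, y, bins=[edges, edges])
weight, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=w)
```

The resulting per-cell counts and weights would then enter the spatio-temporal regression alongside the LULC, nighttime lights, and population density covariates described in the abstract.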

Friday 27 June 14:30 - 16:00 (Room 1.61/1.62)

Presentation: Millions of Artillery Craters in Agricultural Fields Impact Crop Production in Ukraine Due to the Ongoing War

Authors: Sergii Skakun, Erik Duncan, Inbal Becker-Reshef, Dr. Nataliia Kussul, Leonid Shumilo, Andrii Shelestov, Josef Wagner
Affiliations: University of Maryland, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, University of Strasbourg
The full-scale invasion by Russia of Ukraine in February 2022 has had a profound impact on all spheres of life in Ukraine, including agriculture. Russia's war of aggression resulted in the substantial destruction of agricultural infrastructure, reduction of crop production and available resources, redistribution of crops, and high uncertainties surrounding the future of the agriculture sector in general. The full impact of the war is yet to be fully understood. In the context of land use science, substantial changes occur in the agricultural fields. The war is characterized by the use of heavy artillery, which has resulted in fields being burned, abandoned, and ultimately unplanted and/or unharvested. While there exists some information on agricultural activities in free (unoccupied) territories obtained by the Ukrainian government through limited farmer surveys, no information is currently available for Russian-held territories. Earth observation data from space is the only source of synoptic, regular, and objective information to understand and assess the impact of war on the national scale. In this study, we use very high spatial resolution (VHR) satellite imagery acquired by Maxar’s WorldView (at 0.3-0.5 m) and Planet’s SkySat (at 0.5 m) to identify and map shelled agricultural fields in 2022, and a statistics-based approach to estimate the area of fields with the presence of artillery craters, burnings, and those that were abandoned and not cultivated. Maxar imagery was acquired through the NASA Commercial Smallsat Data Acquisition (CSDA) Program (Neigh et al., 2013), and Planet’s SkySat imagery was acquired through collaboration between the NASA Harvest Program (Becker-Reshef et al., 2022) and Planet Labs. We use a previously developed deep-learning model based on U-Net to identify and map artillery craters (Duncan et al., 2023). The study area covers the frontline and adjacent areas totaling 31,034 km2 (c. 5% of the total Ukraine area).
The majority of the land (c. 66%) is used for agriculture. Sixty percent of the area is used to grow major commodity crops such as wheat, maize, soybean, and sunflower. Approximately 3% is used for smallholder farming (the so-called “family gardens”, Gallego et al., 2014); another 3% has been abandoned since 2014, mainly in Donetsk and Luhansk oblasts (Skakun et al., 2019). The region was divided into blocks of 1x1 km2 and each block was assigned to one of four strata based on the number of craters in the block detected with satellite imagery. The number of strata and thresholds for strata were determined by running the optimization methods to minimize the variance of the number of crater estimates (Cochran, 1977, Eq. 5A.32). Sample allocation in each stratum was done using Neyman’s allocation (Cochran, 1977, Eq. 5.25-5.26). Fifteen blocks were randomly sampled in each stratum (60 sample units in total). Each sampled block was photo-interpreted with the help of VHR imagery to label craters and multi-temporal Sentinel-2 imagery to identify the type of crops (winter, potential summer), and whether the fields in the block were burned, harvested, and planted. We estimate over 3.8 million craters in the study region during 2022. The area of shelled agricultural fields (with the presence of craters) was estimated at >1.2 million hectares (Mha). Our results show that ~0.25 Mha (the majority of which was winter crops, 60%) were burned because of shelling, and almost 1 Mha of fields in the region were abandoned, meaning that winter crops that were planted in 2021 before the full-scale invasion were not harvested, and spring/summer crops were not planted. Information generated within this study can be further used by demining operators for prioritizing and optimizing non-technical surveys, and to assess the impact of the conflict on agricultural production and cropland abandonment in Ukraine. References: Becker-Reshef, I., Bandaru, V., Barker, B., Coutu, S., Deines, J. 
M., Doorn, B., ... & Justice, C. (2022). The NASA Harvest Program on Agriculture and Food Security. In Remote Sensing of Agriculture and Land Cover/Land Use Changes in South and Southeast Asian Countries (pp. 53-80). Cham: Springer International Publishing.
Cochran, W. G. (1977). Sampling techniques. John Wiley & Sons.
Duncan, E. C., Skakun, S., Kariryaa, A., & Prishchepov, A. V. (2023). Detection and mapping of artillery craters with very high spatial resolution satellite imagery and deep learning. Science of Remote Sensing, 7, 100092.
Gallego, F. J., Kussul, N., Skakun, S., Kravchenko, O., Shelestov, A., & Kussul, O. (2014). Efficiency assessment of using satellite data for crop area estimation in Ukraine. International Journal of Applied Earth Observation and Geoinformation, 29, 22-30.
Neigh, C. S., Masek, J. G., & Nickeson, J. E. (2013). High-resolution satellite data open for government research. Eos, Transactions American Geophysical Union, 94(13), 121-123.
Skakun, S., Justice, C. O., Kussul, N., Shelestov, A., & Lavreniuk, M. (2019). Satellite data reveal cropland losses in South-Eastern Ukraine under military conflict. Frontiers in Earth Science, 7, 305.
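The Neyman allocation cited above (Cochran, 1977, Eq. 5.25-5.26) assigns to each stratum a sample size proportional to N_h * S_h, the stratum size times its within-stratum standard deviation. A minimal sketch with hypothetical stratum sizes and standard deviations follows (the study itself ultimately sampled 15 blocks per stratum):

```python
def neyman_allocation(n_total, sizes, stds):
    """Neyman allocation for stratified sampling: the sample size for
    stratum h is n_total * (N_h * S_h) / sum_k (N_k * S_k), where N_h is
    the stratum size and S_h its within-stratum standard deviation."""
    weights = [N * S for N, S in zip(sizes, stds)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical four crater-count strata over 1x1 km2 blocks: many blocks
# with few craters (low variance), few blocks with many craters (high
# variance). Variable strata get proportionally more samples.
alloc = neyman_allocation(60,
                          sizes=[20000, 6000, 3000, 1000],
                          stds=[1.0, 5.0, 12.0, 30.0])
```

Note how the smallest stratum can still receive a large share of the sample when its within-stratum variability is high, which is exactly why stratifying on detected crater counts reduces the variance of the final area estimates.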

Friday 27 June 14:30 - 16:00 (Room 1.61/1.62)

Presentation: Estimating Direct and Indirect Effects of the War through Cropland Abandonment in Ukraine

Authors: Shabarinath Nair, Josef Wagner, Sergii Skakun, Oleksandra Oliinyk, Erik Duncan, Fangjie Li, Manav Gupta, Abhishek Kotcharlakota, Jean Rehbinder, Francoise Nerry, Inbal Becker-Reshef
Affiliations: University of Strasbourg; Department of Geographical Sciences, University of Maryland, MD, USA; NASA Harvest
Continued shelling for two consecutive years left a visible scar of non-cultivated cropland along near-static frontlines in Ukraine. Some of the most productive and fertile soils globally, the black chernozems, are left uncultivated due to unexploded ordnance (UXO), mines, and destroyed machinery. In the absence of agricultural practices such as tillage or herbicide spraying, natural vegetation dynamics take over and fields weed in. Weedy fields present a grassland-like temporal vegetation signature, distinguishable from both winter and summer crops by a longer seasonality. The quantification of cropland abandonment in Ukraine has so far focused on the immediate surroundings of the frontlines. However, even in low conflict intensity areas, crops could be abandoned due to economic or human-capacity reasons. This presentation will describe how, in support of the State Statistics Service of Ukraine, we estimated the extent of abandoned cropland areas in low, middle, and high conflict intensity conditions. To map the abandoned cropland we used an unsupervised clustering-based approach which separated the longer grassland-like temporal signature. Unlike winter or summer crops, these fields do not show a strong harvest signal and senesce rather gradually. We observed that supervised techniques do not work particularly well in this scenario, as the abandoned regions do not exactly fit the definition of fallow fields: significant vegetation grows on these fields, making them difficult to discriminate from actual crops in the absence of extensive labelled data. We generated two maps, one in-season and a second at the end of the season using the complete time series. We use bi-weekly Planet imagery for generating the abandoned-cropland map. We then leveraged the Armed Conflict Location and Event Data Project (ACLED) dataset in a two-stage random stratified sampling frame for estimating unbiased abandoned areas at the Ukraine scale.
Results showed the highest cropland abandonment rates close to the frontlines. Cropland abandonment in low conflict intensity areas was less frequent, but can be related to indirect effects of the war, such as economic restrictions or, more straightforwardly, the direct human toll of the conflict. *This work is co-first authored with Josef Wagner
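The unsupervised separation of grassland-like signatures can be illustrated with a toy example: clustering fields on a single "green season length" feature. The counts, threshold, and the minimal 1-D k-means below are hypothetical stand-ins; the actual work clusters full Planet time series:

```python
import numpy as np

def kmeans_1d(values, iters=20):
    """Minimal 2-cluster k-means on a 1-D feature, standing in for the
    unsupervised clustering used to separate long grassland-like
    seasonality from the shorter crop seasonality."""
    v = np.asarray(values, dtype=float)
    c = np.array([v.min(), v.max()])  # initialize centroids at the extremes
    for _ in range(iters):
        # assign each field to its nearest centroid
        labels = np.abs(v[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = v[labels == k].mean()
    return labels, c

# Hypothetical feature: number of time steps (of 20) with NDVI above a
# green threshold. Crops stay green ~8 steps; weedy abandoned fields ~16.
lengths = [8, 7, 9, 8, 16, 17, 15, 16]
labels, centroids = kmeans_1d(lengths)
```

In this toy setting the first four fields (short season, cropped) and the last four (long season, weedy/abandoned) fall into separate clusters, mirroring the longer-seasonality criterion described in the abstract.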

Friday 27 June 14:30 - 16:00 (Room 1.61/1.62)

Presentation: Revisiting the Impact of the Russia-Ukraine War on Agricultural Production and Policy Implications

Authors: Inbal Becker-Reshef, Dr. Sergii Skakun, Josef Wagner, Shabarinath Nair, Alex Olin, Natacha Kalecinski, Abdul Qadir, Yuval Sadeh, Mehdi Hosseini, Nataliia Kussul, Sheila Baber, Chris Justice, Erik Duncan
Affiliations: University of Maryland, University of Strasbourg, Monash University
The Russia-Ukraine war is reshaping global food systems, with profound implications for food security. In this talk, we present a comprehensive assessment of Russia’s agricultural gains from its occupation of Ukrainian territories since the start of the full-scale invasion. Our satellite-driven analysis reveals that Russia has harvested significant quantities of grains and oilseeds from occupied Ukrainian croplands, boosting its production and exports and further consolidating its standing as the largest wheat exporter, accounting for nearly a quarter of global exports. This concentration of the wheat market under Russia, a country prone to production volatility due to weather variability and with a history of imposing export bans and restrictions, poses critical risks to global food security, particularly for regions heavily reliant on Russian wheat imports. Our findings highlight the urgent need for greater market transparency, robust monitoring of illicit trade, and diversification of import sources to reduce geopolitical vulnerabilities and safeguard against future food crises. Furthermore, we highlight the critical role satellite data can play in providing near real-time tools to monitor agricultural activity in inaccessible regions and inform agricultural policy interventions.

Friday 27 June 14:30 - 16:00 (Room 1.61/1.62)

Presentation: Integrating geospatial and economic tools to monitor spatial disparities in real estate values and urban fabric distortions in post-conflict Damascus

Authors: Mounir Azzam, Jun.-Prof. Dr. Valerie Graw, Jun.-Prof. Dr. Andreas Rienow
Affiliations: Ruhr-Universität Bochum
The application of Earth observation technologies in monitoring urban surface changes, particularly under abnormal conditions such as conflicts and wars, is of paramount importance. Remote sensing generates vast datasets that support decision-making processes during post-conflict recovery phases. However, these datasets must be integrated with ground-based data, especially socio-economic information, to guide investment decisions effectively during reconstruction. Real estate, as a cornerstone of community stability, is particularly susceptible to the impacts of conflict. Factors such as property loss, widespread destruction, weakened governance, and demographic shifts significantly reshape the spatial dynamics of urban property values, exacerbating spatial inequities and deepening socio-economic divides. The Damascus metropolitan area, ravaged by conflict since 2011, exemplifies these challenges, with varying levels of destruction across its urban landscape intensifying disparities in property values and undermining societal cohesion. This study aims to enhance economic tools for monitoring spatial differentiation in real estate values by integrating remote sensing observations to analyze these disparities in the post-war context. A sample of 250 properties, evenly distributed across various districts, was examined to investigate the relationship between property and neighborhood conditions. Using Landsat 8 imagery from the periods of June 1 to July 1 in 2015, 2019, and 2022, the study analyzed urban transformations, comparing imagery from the conflict period (2015–2019) to the post-conflict period (2019–2022). Spatial metrics, including homogeneity and correlation values derived from the Gray-Level Co-occurrence Matrix (GLCM), were employed to represent the spatial characteristics of urban textures. 
Temporal comparisons of these spatial variables were conducted using the Google Earth Engine (GEE) platform, enabling a comprehensive analysis of conflict- and post-conflict-related urban dynamics. The findings reveal notable spatial disparities across the Damascus metropolitan area. During the conflict period, both homogeneity and correlation values were significantly below average, indicating pronounced spatial differentiation among urban pixels. While homogeneity values improved during the post-war period, correlation values demonstrated a growing spatial divide between the city center and its surrounding areas from 2019 to 2022. Within Damascus districts, correlation values improved post-war; however, surrounding areas experienced a substantial decline, with values dropping by 87.2%. This indicates that recovery efforts favored the city center, while Rural Damascus districts exhibited pronounced spatial differentiation. In severely damaged districts, correlation values remained consistently low, reflecting the extensive destruction and the lack of government-led reconstruction efforts, which hindered any significant returns or repairs. In undamaged districts, demographic shifts due to internal displacement caused urban texture distortions, leading to spatial inconsistencies. Moderately damaged districts experienced parallel declines in both correlation and real estate values during the war, with decreases of approximately 84%. Post-war statistics revealed further declines in real estate values, ranging from 52% to 90%, due to resident-driven modifications and partial returns. These findings highlight the complexities of post-war urban recovery, underscoring the need for integrated, data-driven approaches to address spatial inequities and support sustainable reconstruction efforts. 
Integrating economic and geo-spatial monitoring methods provides a robust framework for identifying spatial differentiation in real estate values within the Damascus metropolitan area. This approach yields critical insights into post-war urban transformations affecting property values and residents' livelihoods—factors that require stabilization as swiftly as possible following conflicts. This study underscores that while some disaster-affected governments have implemented innovative real estate policies, these initiatives often lack effective tools to monitor and analyze spatial differentiation in property values. Instead, such policies tend to rely on generalized assessments of negative impacts, which risk exacerbating crises by overlooking the underlying factors driving these disparities. The integrated economic and geo-spatial monitoring approach employed in this study not only offers valuable insights for the case of Damascus but also serves as a potential global model for addressing similar challenges in other conflict-affected regions.
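The GLCM homogeneity and correlation metrics used in this study can be reproduced with a few lines of numpy. This sketch uses a single horizontal offset and a hypothetical 4-level quantization; production work would typically use a library such as scikit-image, and the exact offsets/quantization here are assumptions, not the authors' settings:

```python
import numpy as np

def glcm(img, levels=4):
    """Gray-Level Co-occurrence Matrix for a horizontal (0, 1) offset,
    symmetric and normalized. `img` must be integer-quantized in
    [0, levels)."""
    P = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[i, j] += 1
        P[j, i] += 1  # symmetric counts
    return P / P.sum()

def homogeneity(P):
    """Sum of P / (1 + |i - j|): high when co-occurring levels are similar."""
    i, j = np.indices(P.shape)
    return float(np.sum(P / (1.0 + np.abs(i - j))))

def correlation(P):
    """Normalized covariance of co-occurring gray levels, in [-1, 1]."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    si = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    sj = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    return float(np.sum((i - mu_i) * (j - mu_j) * P) / (si * sj))

# Sanity checks: a uniform texture is maximally homogeneous, while a
# checkerboard (alternating levels) is perfectly anti-correlated.
uniform = np.ones((8, 8), dtype=int)
checker = np.indices((8, 8)).sum(axis=0) % 2
h_uniform = homogeneity(glcm(uniform))
c_checker = correlation(glcm(checker))
```

Low homogeneity and low correlation over a district thus signal heterogeneous, disordered urban texture, which is the interpretation the study applies to damage and distortion in the Damascus imagery.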

Friday 27 June 14:30 - 16:00 (Room 1.61/1.62)

Presentation: Tracking War Impact on Agricultural Land Use for Socio-Economic Resilience in Ukraine

Authors: Dr. Nataliia Kussul, Prof. Andrii Shelestov, Dr. Bohdan Yailymov, Dr. Hanna Yailymova, Guido Lemoine, Klaus Deininger
Affiliations: University of Maryland, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Space Research Institute NAS Ukraine and SSA Ukraine, Joint Research Center of the European Commission, World Bank
The Russian war in Ukraine has caused unprecedented disruption to agricultural land use, resulting in significant socio-economic challenges and global food security risks [1]. This study employs advanced Earth Observation (EO) methodologies, leveraging high-resolution Sentinel-1 and Sentinel-2 satellite data and machine learning, to quantify and analyze the dual impacts of military conflict and climate change on cropland dynamics across Ukraine from 2015 to 2024 [2]. This decade-long analysis provides a comprehensive understanding of how armed conflict has reshaped agricultural landscapes, highlighting key regional and temporal patterns critical for policy-making and resilience-building. Using cloud platforms such as Google Earth Engine (GEE) and CREODIAS, we produced annual land use maps at 10-meter resolution with high classification accuracy (F1-scores exceeding 94% for key crops like wheat, maize, and rapeseed). In-situ data collected annually across Ukraine was essential for model training and validation, enabling the identification of trends in land use change [3]. The results reveal stark contrasts between occupied and unoccupied territories. In conflict-affected regions, cultivated areas declined dramatically, with cereal crops such as wheat and barley experiencing reductions of over 20%. Conversely, central and western regions exhibited adaptive shifts, with increased soybean and rapeseed cultivation responding to changing market demands and logistical challenges. The pre-war period (2015–2021) was characterized by agricultural expansion supported by favorable climatic conditions and economic stability, while the war period (2022–2024) saw widespread reductions in cropland and shifts in crop composition. For instance, maize and sunflower cultivation declined significantly in front-line areas, while rapeseed demonstrated stable growth across most regions, reflecting its resilience to both climatic and economic pressures. 
The abandonment of agricultural land, particularly in southern and eastern Ukraine, underscores the compounded challenges of land degradation, disrupted infrastructure, and reduced access to inputs and markets. This study further integrates EO with socio-economic analyses to evaluate the broader implications of these changes for global food security. Ukraine, a leading global producer of grains and oilseeds, has experienced declines in agricultural output that ripple through international markets, driving price volatility and increasing food insecurity in import-dependent countries. The findings highlight the critical need for targeted recovery strategies, including demining agricultural lands, restoring damaged infrastructure, and implementing adaptive agricultural practices. By employing advanced machine learning techniques on cloud-based platforms, this work demonstrates the potential of EO for real-time monitoring and analysis of land use changes in conflict zones. The results provide a foundation for designing policies that integrate sustainable land management, climate adaptation, and socio-economic resilience. Additionally, the study underscores the importance of aligning Ukraine’s agricultural policies with the European Union’s Common Agricultural Policy (CAP) to modernize its agricultural sector and ensure long-term sustainability. This research not only supports immediate recovery efforts but also contributes to global policy frameworks for addressing the combined challenges of conflict, climate change, and food insecurity. The integration of EO-based methodologies with field data and socio-economic models offers a replicable framework for monitoring and mitigating the impacts of similar crises worldwide, making it an invaluable tool for international development and resilience planning. 
Acknowledgments: This work was supported by the European Commission through the joint World Bank/EU project ‘Supporting Transparent Land Governance in Ukraine’ [grant numbers ENI/2017/387–093 and ENI/2020/418–654], expert contracts with the EC Joint Research Center (JRC) [CT-EX2022D670387-101, CT-EX2022D670387-102], and the National Research Foundation of Ukraine project "Geospatial monitoring system for the war impact on the agriculture of Ukraine based on satellite data" [grant number 2023.04/0039].

References:
1. Deininger, K. W., Ali, D. A., Kussul, N., Lemoine, G., & Shelestov, A. (2024). Micro-Level Impacts of the War on Ukraine’s Agriculture Sector: Distinguishing Local and National Effects over Time (No. 10869). The World Bank.
2. Shelestov, A., Lavreniuk, M., Vasiliev, V., Shumilo, L., Kolotii, A., Yailymov, B., ... & Yailymova, H. (2019). Cloud approach to automated crop classification using Sentinel-1 imagery. IEEE Transactions on Big Data, 6(3), 572-582. DOI: 10.1109/TBDATA.2019.2940237.
3. Waldner, F., Bellemans, N., Hochman, Z., Newby, T., de Abelleyra, D., Verón, S. R., ... & Defourny, P. (2019). Roadside collection of training data for cropland mapping is viable when environmental and management gradients are surveyed. International Journal of Applied Earth Observation and Geoinformation, 80, 82-93.

Friday 27 June 14:30 - 16:00 (Hall K1)

Session: C.02.04 Small Earth Science Missions

This session will address the missions under development as well as proposals for future missions based on small and nano satellites whose objective is to achieve innovative returns in the fields of Earth science. Emphasis will be given to results achieved, and to mission concepts for new and innovative EO techniques as well as to ideas for the exploitation of science products from small Earth science missions under development.

Friday 27 June 14:30 - 16:00 (Hall K1)

Presentation: Science Targets and Planned Data Products of the Upcoming ESA Scout NanoMagSat Mission, a Nanosatellite Constellation to Further Improve Geomagnetic Field and Ionospheric Environment Monitoring and Modeling

Authors: Gauthier Hulot, Pierdavide Coïsson, Jean-Michel Léger, Lasse B.N. Clausen, John L. Jørgensen, Jose van den Ijssel, Louis Chauvet, Thomas Jager, Florian Deconinck, Fabrice Cipriani, Massimiliano Pastena, Jean-Pascal Lejault
Affiliations: Université Paris Cité, Institut De Physique Du Globe De Paris, CNRS, CEA-Leti, Université Grenoble Alpes, MINATEC, University of Oslo, DTU Space, Technical University of Denmark, Delft University of Technology, Open Cosmos Ltd, European Space Agency, ESTEC
Geomagnetic field and ionospheric environment monitoring is presently achieved with great success by the three satellites of the Swarm Earth Explorer ESA constellation launched in November 2013. Maintaining and improving observations beyond the lifetime of Swarm is critical for both science investigations and advanced applications. NanoMagSat aims at fulfilling this goal. This much cheaper mission is currently in Phase B within the context of the ESA Scout program. It will deploy and operate a new Low-Earth orbiting constellation of three identical 16U nanosatellites, using two inclined (~60°) orbits and one polar orbit at an initial altitude of 545 km, to complement and take over the Swarm mission. Deployment is planned to start at the end of 2027, for a minimum of three years of full constellation operation between 2028 and 2031. This constellation is designed to cover all local times (LT) at all latitudes, with special emphasis on latitudes between 60°N and 60°S, where all LT will be visited within about a month, much faster than is currently achieved by the Swarm constellation. Each satellite will carry an advanced Miniaturized Absolute scalar and self-calibrated vector Magnetometer (MAM) with star trackers (STR) collocated on an ultra-stable optical bench at the tip of a deployable boom, a new compact High Frequency Magnetometer (HFM) at mid-boom, a multi-Needle Langmuir Probe (m-NLP) and dual-frequency GNSS receivers (all on the satellite body). This payload suite will acquire high-precision/resolution oriented absolute vector magnetic data at 1 Hz, very low noise scalar and vector magnetic field data at 2 kHz, electron density data at 2 kHz, and electron temperature data at 1 Hz. GNSS receivers will also allow top-side TEC and ionospheric radio-occultation profiles to be recovered. In this presentation, the main science goals of the mission will first be introduced, and the rationale for the choice of the payload and constellation design then explained. 
The various data products currently planned to be produced will also be described. Special emphasis will be put on the innovative aspects of the mission with respect to Swarm and other previous missions. Finally, the benefit of relying on such nanosatellite constellations for maintaining long-term observations of the magnetic field and ionospheric environment, to complement ground-based observations will also be discussed.
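The faster local-time coverage of the ~60° inclined orbits follows from J2 nodal precession. A back-of-the-envelope sketch using the standard J2 precession formula and textbook Earth constants; the altitude and inclination come from the abstract, everything else is illustrative, not mission design data:

```python
import math

# Standard J2 nodal-precession rate for a circular orbit, used only to
# illustrate why a ~60-deg inclined orbit drifts through local times much
# faster than a polar one. Constants are textbook Earth values.
MU = 398600.4418   # Earth GM, km^3/s^2
RE = 6378.137      # Earth equatorial radius, km
J2 = 1.08263e-3    # Earth oblateness coefficient

def local_time_drift_deg_per_day(alt_km, inc_deg):
    a = RE + alt_km
    n = math.sqrt(MU / a ** 3)                                  # mean motion, rad/s
    raan_dot = -1.5 * J2 * n * (RE / a) ** 2 * math.cos(math.radians(inc_deg))
    return math.degrees(raan_dot) * 86400 - 360.0 / 365.2422    # relative to mean Sun

drift = local_time_drift_deg_per_day(545, 60)   # roughly -4.7 deg/day
print(f"{drift:.2f} deg/day -> all local times in ~{180 / abs(drift):.0f} days")
```

With ascending and descending passes 12 hours apart in local time, sweeping 180° of node relative to the Sun covers the full 24 h of LT, consistent with the roughly one-month figure quoted above; a polar orbit (cos i = 0) drifts only at the ~1 deg/day solar rate.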

Friday 27 June 14:30 - 16:00 (Hall K1)

Presentation: SAR Techniques for Enhanced 2D Mapping of Marine Wind and Sea Surface Velocity: Insights from the PLATINO-1 Mission

Authors: Dr. Ing. Pietro Mastro, Ing. Andrea Petrossi, Dr. Ing. Giacomo De Carolis, Dr. Ing. Gianfranco Fornaro, Dr. Ing. Simona Verde, Dr. Ing. Virginia Zamparelli, Prof. Alfredo Renga, Dr. Ing. Giovanni Paolo Blasone, Dr Deodato Tapete, Dr. Simona Zoffoli
Affiliations: Institute for the Electromagnetic Sensing of the Environment (IREA), National Research Council (CNR), Department of Industrial Engineering, University of Naples Federico II, Italian Space Agency (ASI)
This study aims to present preliminary advancements in wind fields and sea surface current velocity estimation exploiting monostatic and bistatic Synthetic Aperture Radar (SAR) configurations. Indeed, a key advantage of using SAR bistatic configurations for marine observations is expected to be their ability to capture additional scattering angles and azimuthal perspectives, thus enhancing sensitivity to the sea surface roughness and wave directionality compared to traditional SAR monostatic systems. We evaluate the capability of the scattering model of [1], based on the second-order small slope approximation (SSA2) and vector radiative transfer theory [2]–[4], combined with the Choppy Wave Model (CWM) for nonlinear wave effects [5], [6], to describe the Doppler and Normalized Radar Cross Section (NRCS) signatures under different wind speeds, wind directions, and sea surface states. In the context of the Italian Space Agency (ASI) project “COMBINO” [7], our research is focused on the bistatic operational mode of the PLATINO-1 (PLT-1) sensor [8] in coordination with a COSMO-SkyMed (CSK) sensor [9]. The bistatic system operates in two phases: in Phase 1, PLT-1 flies in formation with a CSK satellite at an altitude of 619 km; in Phase 2, PLT-1 operates at a lower orbit (410 km) and can work in either a monostatic or bistatic mode. Our analysis focuses on PLT-1 Phase 1 by examining system performance at long along-track baselines (on the order of hundreds of kilometres) combined with a right-looking configuration for the COSMO-SkyMed transmitter. To define the system geometry, the GMAT software [10] was used to perform Keplerian orbital propagation to determine the ground track, position, and velocity of the satellites in the ECEF reference frame. 
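Given satellite ECEF states from such a propagation, the bistatic angle at a ground target reduces to simple vector geometry. A minimal sketch with hypothetical positions, not the actual PLT-1/CSK geometry:

```python
import math

# Bistatic angle at a ground target: the angle between the target->transmitter
# and target->receiver lines of sight. Inputs are ECEF position vectors in km;
# all values below are hypothetical, for illustration only.
def bistatic_angle_deg(tx, rx, target):
    a = [t - g for t, g in zip(tx, target)]   # target -> transmitter
    b = [r - g for r, g in zip(rx, target)]   # target -> receiver
    dot = sum(x * y for x, y in zip(a, b))
    na = math.hypot(*a)
    nb = math.hypot(*b)
    return math.degrees(math.acos(dot / (na * nb)))

# Transmitter overhead, receiver offset ~300 km along-track (illustrative):
print(round(bistatic_angle_deg((7000, 0, 0), (6900, 300, 0), (6378, 0, 0)), 1))
```

In the monostatic limit (transmitter and receiver co-located) the angle collapses to zero.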
The ground track analysis identified that the system passes over the targeted sea surface area, and key geometric parameters [11], including the bistatic and incidence angles of PLT-1, were calculated at these points. We show that bistatic SAR configurations provide more comprehensive insights into Doppler frequency and bandwidth, key factors for accurate retrieval of near-surface ocean wind and wave fields. Moreover, comparison with empirical models, including XMOD2 [12], [13], affirms the reliability of the SSA2-based Fois model in accurately reproducing NRCS values in the monostatic SAR configuration. Finally, through rigorous analysis and comprehensive validation, we demonstrate how the upcoming PLATINO-1 mission can offer advanced capabilities for accurate and reliable sea surface observation. Our work underscores the potential of bistatic SAR systems in enhancing the monitoring of marine surfaces under various operational conditions. The Italian Space Agency has funded this study within the COMBINO research project.

References:
[1] F. Fois, P. Hoogeboom, F. Le Chevalier, and A. Stoffelen, "An analytical model for the description of the full-polarimetric sea surface Doppler signature," JGR Oceans, vol. 120, no. 2, pp. 988-1015, 2015, doi: 10.1002/2014JC010589.
[2] J. R. Apel, "An improved model of the ocean surface wave vector spectrum and its effects on radar backscatter," J. Geophys. Res., vol. 99, no. C8, pp. 16269-16291, 1994, doi: 10.1029/94JC00846.
[3] D. Holliday, G. St-Cyr, and N. E. Woods, "A radar ocean imaging model for small to moderate incidence angles," International Journal of Remote Sensing, vol. 7, no. 12, pp. 1809-1834, 1986, doi: 10.1080/01431168608948971.
[4] R. Romeiser, W. Alpers, and V. Wismann, "An improved composite surface model for the radar backscattering cross section of the ocean surface: 1. Theory of the model and optimization/validation by scatterometer data," J. Geophys. Res., vol. 102, no. C11, pp. 25237-25250, 1997, doi: 10.1029/97JC00190.
[5] T. Elfouhaily, B. Chapron, K. Katsaros, and D. Vandemark, "A unified directional spectrum for long and short wind-driven waves," J. Geophys. Res., vol. 102, no. C7, pp. 15781-15796, 1997, doi: 10.1029/97JC00467.
[6] F. Nouguier, C. Guérin, and B. Chapron, "Choppy wave model for nonlinear gravity waves," J. Geophys. Res., vol. 114, no. C9, 2009, doi: 10.1029/2008JC004984.
[7] https://asi.portaleamministrazionetrasparente.it/archivio28_provvedimenti-amministrativi_-1_216487_726_1.html
[8] M. V. Stanzione, "25th IAA Symposium on Small Satellite Missions (B4), Small Satellite Missions Global Technical Session (9-GTS.5)".
[9] F. Covello et al., "COSMO-SkyMed an existing opportunity for observing the Earth," Journal of Geodynamics, vol. 49, no. 3, pp. 171-180, Apr. 2010, doi: 10.1016/j.jog.2010.01.001.
[10] S. P. Hughes, R. H. Qureshi, S. D. Cooley, and J. J. Parker, "Verification and Validation of the General Mission Analysis Tool (GMAT)," in AIAA/AAS Astrodynamics Specialist Conference, San Diego, CA: American Institute of Aeronautics and Astronautics, Aug. 2014, doi: 10.2514/6.2014-4151.
[11] A. Moccia and A. Renga, "Bistatic Synthetic Aperture Radar," in Distributed Space Missions for Earth System Monitoring, New York: Springer Science, 2013, p. 8, doi: 10.1007/978-1-4614-4541-8_1.
[12] H. Hersbach, A. Stoffelen, and S. De Haan, "An improved C-band scatterometer ocean geophysical model function: CMOD5," J. Geophys. Res., vol. 112, no. C3, 2007, doi: 10.1029/2006JC003743.
[13] B. Zhang et al., "Ocean Vector Winds Retrieval From C-Band Fully Polarimetric SAR Measurements," IEEE Trans. Geosci. Remote Sensing, vol. 50, no. 11, pp. 4252-4261, 2012, doi: 10.1109/TGRS.2012.2194157.

Friday 27 June 14:30 - 16:00 (Hall K1)

Presentation: Results from the NASA TROPICS Smallsat Constellation Microwave Sounding Mission after Two Years of Operation

Authors: William Blackwell
Affiliations: MIT Lincoln Laboratory
New constellations to provide high-resolution atmospheric observations from microwave sounders operating in low-Earth orbit are now coming online and are demonstrating the potential to provide operationally useful data. The first of these missions, the NASA TROPICS (Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats) Earth Venture (EVI-3) mission, was successfully launched into orbit on May 8 and May 25, 2023 (two CubeSats in each of the two launches). TROPICS is now providing nearly all-weather observations of 3-D temperature and humidity, as well as cloud ice and precipitation horizontal structure, at high temporal resolution to conduct high-value science investigations of tropical cyclones. TROPICS is providing rapid-refresh microwave measurements (median refresh rate of better than 60 minutes early in the mission with four functional CubeSats, and now approximately 70-90 minutes with three functional CubeSats) over the tropics that can be used to observe the thermodynamics of the troposphere and precipitation structure for storm systems at the mesoscale and synoptic scale over the entire storm lifecycle. Over 10 billion observations have been collected thus far by the TROPICS mission, including over 1000 high-resolution images of tropical cyclones, revealing detailed structure of the eyewall and surrounding rain bands. The new 205-GHz channel in particular (together with a traditional channel near 92 GHz) is providing new information on the inner storm structure, and, coupled with the relatively frequent revisit and low downlink latency, is already informing tropical cyclone analysis at operational centers. Four 3U CubeSats (5.4 kg each) for the TROPICS constellation mission were launched into two low-Earth orbital planes inclined at approximately 33 degrees with a 550-km altitude. 
Each CubeSat comprises a Blue Canyon Technologies bus and a high-performance radiometer payload to provide temperature profiles using seven channels near the 118.75 GHz oxygen absorption line, water vapor profiles using three channels near the 183 GHz water vapor absorption line, imagery in a single channel near 90 GHz for precipitation measurements (when combined with higher resolution water vapor channels), and a single channel at 205 GHz that is more sensitive to precipitation-sized ice particles. TROPICS spatial resolution and measurement sensitivity is comparable with current state-of-the-art observing platforms. Data from the three currently operational CubeSats are downlinked to the ground via the KSAT-Lite and ATLAS Space Operations ground networks with latencies better than one hour. The separate TROPICS Pathfinder mission, which launched into a sun-synchronous orbit on June 30, 2021 in advance of the TROPICS constellation mission, served as a technology demonstration and risk reduction effort. The TROPICS Pathfinder mission, now concluded, yielded useful data for 30+ months of operation and provided an opportunity to checkout and optimize all mission elements prior to the primary constellation mission. TROPICS temperature vertical profile products yield performance slightly worse than ATMS (1.5 K RMS uncertainty for TROPICS versus 1.3 K RMS uncertainty for ATMS averaged from 0-20 km in 3-km layers), and TROPICS water vapor profile products yield performance slightly better than ATMS (19% RMS uncertainty for TROPICS versus 21% uncertainty for ATMS averaged from 0-10 km in 3-km layers). TROPICS rain rate and tropical cyclone intensity products are also available with performance that is on par with that of ATMS. These TROPICS products are now available with much improved median revisit rates and excellent data latencies, enabling their use in operational tropical cyclone forecasting applications.

Friday 27 June 14:30 - 16:00 (Hall K1)

Presentation: Pioneering Upper Atmosphere Exploration: the CHESS Mission’s Innovative Use of Mass Spectrometry and GNSS

Authors: Rico Fausch, Lina Kuhlmann, Alvaro Martinez, Benedikt Soja, Jean-Paul Kneib, Peter Wurz
Affiliations: University of Bern, Ecole Polytechnique Federale de Lausanne EPFL, ETH Zurich
The Constellation of High-performance Exosphere Science Satellites (CHESS) mission is poised to pioneer in situ analysis of Earth's upper atmosphere using advanced nanosatellite technology. Addressing the longstanding gap in detailed, localized measurements of the exosphere–thermosphere–ionosphere region, CHESS combines innovative instrumentation with a NewSpace approach to overcome the limitations of previous missions. Recent technological advancements have enabled the miniaturization of a high-performance mass spectrometer, CubeSatTOF, which fits within a 1U CubeSat unit. This instrument is capable of measuring atoms, molecules, radicals, and isotopes with exceptional sensitivity, dynamic range, mass range, and mass resolution, even at the hyper-velocities experienced in a sun-synchronous orbit at approximately 500 km altitude. Complementing this instrument is a new generation of dual-frequency Global Navigation Satellite System (GNSS) receivers that provide precise orbit determination and line-of-sight total electron content measurements between the satellite and the GNSS satellites at higher altitudes. By analyzing the evolution of the spacecraft's orbit, atmospheric drag can be calculated, allowing for the derivation of atmospheric density. By combining the GNSS data with the mass spectrometer's lateral resolution (on the order of a few kilometers), both the absolute total number density and the absolute number density of species can be assessed with unprecedented accuracy. A second CHESS satellite is designed to operate in an elliptical orbit between about 400 and 1,000 km. This satellite facilitates the analysis of altitude profiles of the species’ number densities, which can be converted into exospheric temperatures. 
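The drag-to-density step described above rests on the textbook drag equation a_drag = ½ ρ v² (C_d A / m), inverted for ρ. A minimal sketch with placeholder ballistic parameters (assumed values for a small CubeSat, not CHESS design data):

```python
# Textbook drag relation: a_drag = 0.5 * rho * v^2 * (Cd * A / m).
# Inverting it gives atmospheric density from an observed drag deceleration.
# Cd, area and mass are placeholder values, NOT CHESS design data.
def density_from_drag(a_drag, v, cd=2.2, area=0.03, mass=4.0):
    """a_drag [m/s^2], v [m/s], area [m^2], mass [kg] -> rho [kg/m^3]."""
    return 2.0 * mass * a_drag / (cd * area * v ** 2)
```

In practice the along-track deceleration is itself inferred from precise GNSS orbit determination, so the density accuracy hinges on knowing the ballistic coefficient C_d A / m.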
The mission's constellation design allows for simultaneous measurements at multiple points, effectively overcoming the space-time degeneracy that has historically hindered detailed analysis of the drivers and mechanisms influencing the upper atmosphere's dynamic state. In recognition of its innovative approach and significant scientific potential, the CHESS mission team has been selected to participate in ESA's Fly Your Satellite! Design Booster program. This prestigious selection not only provides valuable support in mission development but also confirms a secured launch opportunity for the first CHESS satellite. The mission is scheduled for launch in 2028, marking a significant milestone in small satellite Earth observation missions. The data collected by CHESS will fill a critical gap in our understanding of the exosphere–thermosphere–ionosphere region. By providing updated, localized, and detailed measurements of temperature variations, chemical composition, total number density, and their altitude profiles, the mission will enhance our knowledge of the origin and evolution of Earth's habitable atmosphere. Furthermore, these mass spectrometry data will represent the first significant update to the widely used chemical composition model NRL-MSISE at these altitudes in over 40 years. Moreover, these data will be compared with atmospheric escape data from planets like Venus and Mars. This comparison offers insights into the different evolutionary paths of these rocky, initially comparable planets and contributes to the field of comparative planetology. The CHESS mission exemplifies the innovative use of small satellites in Earth science, aligning with the objectives of developing new and innovative Earth observation techniques and exploiting science products from missions under development. 
By leveraging miniaturized instrumentation and advanced GNSS technology, CHESS is poised to make significant contributions to atmospheric science, space weather research, and our overall understanding of planetary atmospheres.

Friday 27 June 14:30 - 16:00 (Hall K1)

Presentation: Progress With the HydroGNSS Scout GNSS-R Hydrology Sensing Mission

Authors: Martin Unwin, Peter Garner, Nazzareno Pierdicca, Estel Cardellach, Kimmo Rautiainen, Leila Guerreiro, Emmanuele Santi, Giuseppe Foti, Paul Blunt, Jean-Pascal Lejault, Maria Paola Clarizia, Massimilano Pastena
Affiliations: Surrey Satellite Technology Ltd, Sapienza, University of Rome, IEEC/CSIC-ICE, FMI, Tor Vergata, IFAC-CSR, NOC, University of Nottingham, ESA
HydroGNSS is a small satellite mission under the ESA Scout Programme tapping into NewSpace, as part of ESA’s FutureEO Programme. The mission comprises two satellites using an innovative GNSS-Reflectometry instrument to measure hydrological parameters over land related to Essential Climate Variables (ECVs): soil moisture, inundation, freeze/thaw, biomass, ocean wind speed and sea ice extent. GNSS-Reflectometry is a type of bistatic radar utilising abundant GNSS signals as signals of opportunity, empowering small satellites to provide measurement quality associated with larger satellites. HydroGNSS addresses the measurement of hydrologic variables over land, of value for climate modelling, agriculture and weather forecasting, and will complement other missions, including the SMOS and SMAP soil moisture missions and ESA’s Biomass mission for forest biomass sensing. HydroGNSS will provide measurements that expand GNSS reflectometry techniques and lay the foundations for a future extended constellation offering near-daily continuity in high spatial-temporal resolution observations of the Earth’s weather and climate. The HydroGNSS instrument collects reflected GPS measurements processed on-board into Delay Doppler Maps as on previous missions. Furthermore, it exploits Galileo signals (E1b-c), dual-polarisation, complex ‘coherent channel’ (amplitude/phase) and second frequency (L5/E5a) acquisitions. These measurements enable innovative Level 2 products with improved ground resolution, detection of inundated locations, and separation of roughness and vegetation effects from soil moisture, and open up research into new applications of these innovative measurements. The project kicked off in October 2021; both satellites have now been manufactured and are undergoing final integration and testing before launch preparations commence. 
The testing campaign includes assessment under expected environmental conditions and full antenna pattern characterisation on the built-up satellites. The two satellites will be launched together, nominally in October 2025, as a rideshare into a sun-synchronous orbit at 550 km. An End-to-End Simulator, HSAVERS, has been developed by the science team to represent the end-to-end data flow and error modelling, raising the Science Readiness Level to SRL-6 prior to launch. The simulator has also played an important role in testing and exercising the Payload Data Ground Segment prior to the availability of in-orbit data. After launch, the platform and payloads will be commissioned, and preliminary calibration and validation of the products will take place to reach Science Readiness Level 7, followed by release of Level 1 and Level 2 products delivered as a service over the internet. Data will be owned by ESA and made available on a free and open basis for registered users. This talk will present the latest status of the HydroGNSS mission, the testing undertaken and plans for validation once in orbit.

Friday 27 June 14:30 - 16:00 (Hall G1)

Session: D.02.08 Explainable AI for Earth Observation and Earth Science - PART 3

The rapid expansion of AI in Earth system science and Earth observation (EO) is accelerating research and innovation. However, for AI-based solutions to be truly impactful in Earth Action initiatives, they must demonstrate explainability, physics-awareness, and trustworthiness to ensure they are fit for purpose.
This session will explore cutting-edge advancements in explainable AI (XAI) methods across diverse EO data types, including Synthetic Aperture Radar (SAR), optical, and hyperspectral data. Contributions are invited on integrating AI with physical models, interpretable deep learning, uncertainty quantification, causal inference, and other approaches to improve transparency, consistency, and robustness in AI-driven solutions.
We welcome case studies and research addressing a variety of Earth science missions and applications, such as SAR processing, Earth system process understanding, image classification, 3D reconstruction, and climate/environmental monitoring. The session will also cover strategies for tackling data gaps, physical inconsistencies, and ensuring responsible, ethical AI use.
Attendees will gain valuable insights into the latest research on explainable AI for EO, with a focus on enhancing model interpretability and trustworthiness in applications that advance Earth observation and Earth system science, supporting actionable solutions for environmental and climate challenges.

Friday 27 June 14:30 - 16:00 (Hall G1)

Presentation: Towards the Explainability of Monocular Height Estimation from Single Remote Sensing Images

Authors: Sining Chen, Xiao Xiang Zhu
Affiliations: Technical University of Munich, Munich Center for Machine Learning
3D perception in remote sensing is a prominent research topic that has attracted significant attention in recent years, playing a crucial role in enhancing our understanding of the built environment in three dimensions. Among the various approaches, monocular height estimation from single remote sensing optical images stands out as the most efficient and cost-effective method. Despite being an inherently ill-posed problem, deep learning methods leveraging big data have achieved remarkable results. However, the mechanisms through which neural networks infer 3D information from single images remain largely unexplored. Inspired by traditional methods that estimate heights using shadow lengths, this study investigates whether neural networks infer height information from single images by leveraging shadow cues, with the solar elevation angle serving as a control factor. To address this, we first ensure the use of a well-trained network. We adopt the state-of-the-art HTC-DC B5 architecture [1], training it on the Global Building Modeling (GBM) dataset, which contains over 210,000 pairs of PlanetScope satellite images and their corresponding normalized digital surface models (nDSMs) as ground truth height maps. These images and height maps have a resolution of 3 m and span approximately 190 cities worldwide. The trained model demonstrates strong performance and generalizability across diverse urban environments. Using this trained network, we analyze its outputs on images captured under varying solar parameters in four held-out test cities: Canberra (Australia), Cologne (Germany), Guangzhou (China), and Vancouver (Canada). Our analysis reveals a strong correlation between the predicted height values as well as the prediction errors and the solar elevation angle. Specifically, as the solar elevation angle increases, the shadows tend to shorten, leading to more underestimation in the predicted heights. 
To address this bias, we propose a calibration method based on a shadow casting model. The method introduces a scaling factor derived from the solar elevation angle. Experiments demonstrate that the calibration is effective when assuming an imaginary solar elevation angle, where the predicted height values align most closely with the ground truth. This approach significantly reduces prediction errors. The findings are both intuitive and surprising, given that shadows are not explicitly visible in the images. These results have significant implications for explainability in monocular height estimation in remote sensing and highlight the potential to develop methods that mitigate biases introduced by solar-related factors. [1] S. Chen, Y. Shi, Z. Xiong, and X. X. Zhu, “HTC-DC net: Monocular height estimation from single remote sensing images,” IEEE Trans. Geosci. Remote Sens., vol. 61, 2023, Art. no. 10294289.
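The classical shadow relation underlying this analysis is h = L·tan(elev), where L is the shadow length and elev the solar elevation angle. A minimal sketch of that relation (the study's actual solar-elevation scaling factor is not reproduced here):

```python
import math

# Classical shadow-length relation: building height h = L * tan(elev).
# As the solar elevation angle increases, shadows shorten for a given height,
# which is the cue the calibration described above exploits.
def height_from_shadow(shadow_len_m: float, solar_elev_deg: float) -> float:
    return shadow_len_m * math.tan(math.radians(solar_elev_deg))

print(round(height_from_shadow(30.0, 45.0), 6))  # 45-deg sun: height equals shadow length, 30.0
```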

Friday 27 June 14:30 - 16:00 (Hall G1)

Presentation: High-Resolution Air Quality estimation in the Apulia region using AI and Explainability Techniques

Authors: Alessandro Fania, Alfonso Monaco, Dr. Nicolò Taggio, George Benekos, Professor Nicola Amoroso, Loredana Bellantuono, Tommaso Maggipinto, Ester Pantaleo, Roberto Cilli, Sabina Tangaro, Roberto Bellotti
Affiliations: Dipartimento Interateneo di Fisica M. Merlin, Università degli Studi di Bari Aldo Moro, Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, Dipartimento di Farmacia - Scienze del Farmaco, Università degli Studi di Bari Aldo Moro, Dipartimento di Biomedicina Traslazionale e Neuroscienze (DiBraiN), Università degli Studi di Bari Aldo Moro, Dipartimento di Scienze del Suolo, della Pianta e degli Alimenti Università degli Studi di Bari Aldo Moro, Planetek Italia s.r.l., Planetek Hellas E.P.E.
Air pollution is a pressing environmental challenge, with significant implications for human health, ecosystems, and climate. Recent advancements in Earth Observation (EO) and artificial intelligence (AI) provide powerful tools for tackling this issue. Satellite technology is increasingly used for air quality monitoring, offering a global, continuous view of atmospheric pollutant levels, while AI offers opportunities to bridge gaps in data precision and interpretability. However, despite their advantages, satellite measurements lack the precision provided by ground-based monitoring stations. Furthermore, for AI-driven solutions to be impactful in Earth Action initiatives, they must be explainable, physics-aware, and trustworthy. This analysis aims to develop a machine learning model to predict daily surface concentrations of air pollutants (NO2, O3, PM10, and PM2.5) at a high spatial resolution of 300 meters in the Apulia region. To achieve this, we used data from ARPA monitoring stations across the region for the period 2019-2022. As predictors, we included satellite data, meteorological variables, geographical information, land use characteristics, and temporal variables. Using this 300 m grid data, we first established a baseline model with linear regression, followed by an XGBoost model. After training, we applied eXplainable Artificial Intelligence (XAI) techniques to interpret the results. Model performance was validated through repeated cross-validation, yielding an average R-squared of 0.71 across pollutants (0.77 for NO2, 0.78 for O3, 0.67 for PM2.5, and 0.64 for PM10). The XAI analysis confirmed that our model's behavior aligns with existing knowledge, enhancing interpretability and trust in its predictions.
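The workflow described, a linear baseline followed by a boosted model scored with cross-validated R², can be sketched as follows. Synthetic data stands in for the 300 m predictor grid, and scikit-learn's GradientBoostingRegressor stands in for XGBoost; none of this reproduces the study's actual data or tuning:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic nonlinear data standing in for (satellite, meteorological,
# land-use, temporal) predictors and a pollutant target -- NOT the ARPA data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + X[:, 2] * X[:, 3] + 0.1 * rng.normal(size=500)

# Baseline linear regression vs. a boosted-tree model, both scored with
# cross-validated R^2, mirroring the study's evaluation protocol.
r2_base = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
r2_boost = cross_val_score(GradientBoostingRegressor(random_state=0), X, y, cv=5, scoring="r2").mean()
print(f"linear baseline R^2 = {r2_base:.2f}, boosted R^2 = {r2_boost:.2f}")
```

On nonlinear data like this, the boosted model's advantage over the linear baseline is what motivates pairing it with XAI methods to recover interpretability.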

Friday 27 June 14:30 - 16:00 (Hall G1)

Presentation: Meteors: Open Source Package for Explanations of Remotely-Sensed Images

Authors: Hubert Baniecki, Przemyslaw Biecek, Tymoteusz Kwiecinski, Nicolas Longepe, Jakub Nalepa, Lukasz Tulczyjew, Agata M. Wijata, Vladimir Zaigrajew
Affiliations: ESA, University of Warsaw, Warsaw University of Technology, Silesian University of Technology, KP Labs
The rapid advancement of Artificial Intelligence (AI) has revolutionized numerous sectors, with particularly transformative applications in space research and Earth observation. However, the prevalent "black box" nature of AI systems presents significant challenges, especially in critical domains like Earth Observation (EO) and Remote Sensing (RS), where the implications of AI-driven decisions can be far-reaching and consequential. We desperately need new techniques to explain the AI models used in this area, reliable software to disseminate the latest and best algorithms, and processes and good practices on how to implement Explainable Artificial Intelligence (XAI). To address these challenges, we introduce Meteors, an innovative open-source library specifically designed for implementing XAI methods in EO and RS applications. This comprehensive solution integrates cutting-edge XAI techniques optimized for satellite and remote sensing data analysis. The library enables sophisticated interpretation of AI models through multiple analytical approaches, including detailed activation mapping, identification and analysis of both spatial and spectral features, comprehensive feature importance analysis, and intuitive visualization of decision-making processes. The package provides comprehensive analytical capabilities for both traditional machine learning algorithms, such as Random Forest, and sophisticated deep learning architectures, including Convolutional Neural Networks (CNNs) and Vision Transformers. Through practical demonstrations, we showcase how practitioners can identify critical model features and leverage this knowledge to optimize model performance.
Building on research presented at ICLR in "Red Teaming Models for Hyperspectral Image Analysis Using Explainable AI," with its methods incorporated into the package, we will demonstrate its effectiveness on both classic machine learning algorithms and state-of-the-art deep learning architectures through two key applications: detecting vulnerabilities and improving a high-performance regression model for hyperspectral image analysis, and a cloud segmentation task that validates model behavior against expert knowledge. Our solution addresses critical industry needs across various analytical tasks, including regression, classification, and segmentation problems. By providing a robust framework for model interpretation, Meteors enables organizations to develop more reliable and accountable AI systems. This advancement marks a crucial step toward responsible AI implementation in Earth observation and remote sensing, offering practitioners and organizations the tools necessary for developing and deploying more transparent, efficient, and reliable AI models in critical space applications.
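Meteors' own API is not reproduced here; as a generic sketch of the kind of spectral feature-importance analysis the library targets, the following toy permutation importance over hyperspectral bands (synthetic data, a stand-in Random Forest model) shows the principle:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_bands = 400, 20
X = rng.normal(size=(n_pixels, n_bands))
# Synthetic labels driven mostly by band 7, mimicking one diagnostic spectral feature.
y = (X[:, 7] + 0.2 * rng.normal(size=n_pixels) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

def band_importance(model, X, y, band, rng):
    """Accuracy drop when a single spectral band is shuffled across pixels."""
    base = model.score(X, y)
    Xp = X.copy()
    Xp[:, band] = rng.permutation(Xp[:, band])
    return base - model.score(Xp, y)

drops = [band_importance(model, X, y, b, rng) for b in range(n_bands)]
print(int(np.argmax(drops)))  # band 7 should come out on top
```

The same shuffle-and-rescore idea extends to deep models; libraries like Meteors add the RS-specific pieces (spatial vs. spectral attribution, visualization) on top of such primitives.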

Friday 27 June 14:30 - 16:00 (Hall G1)

Presentation: Explainable SAR measurements for Wind Assessment with Artificial Intelligence (ESAWAAI)

Authors: Amine Benchaabane, Romain Husson, Bertrand Chapron, Alexis Mouche, Nicolas Rascle, Frédéric Nouguier, Antoine Grouazel, Mihai Datcu, Andrei Anghel, Catalin Ristea, Charlotte Bay Hasager, Krystallia Dimitriadou
Affiliations: ClS Group, IFREMER, Politehnica Bucharest, CEOSpaceTech, Technical University of Denmark
Surface winds and currents play an important role in atmosphere-ocean dynamics. They are the main circulation forces at the ocean surface and are therefore an important input for numerical ocean forecasting models. Given the ocean's vast extent, observations from buoys and moored ships are sparse and insufficient to meet the needs of ocean surface atmosphere observation or numerical ocean simulation. This is why remote sensing technologies, and space-based sensors in particular, represent complementary means of obtaining global measurements of ocean surface conditions. Today, the estimation of ocean surface winds from space-based sensors, particularly scatterometers, can be considered a mature technology, unlike SAR-based approaches, which still suffer from limitations due to the complexity of the relationship between sea surface parameters and backscatter. The common approach to wind inversion from space observation is based on empirical geophysical model functions (GMFs). These GMFs are generally constructed by matching measured backscatter with surface winds provided by numerical model reanalyses or in-situ measurements. The advantage of using GMFs is efficient computation and limited parameterization (incidence and azimuth angles, wind speed and direction). In the context of wind estimation from space-based sensors, beyond single-pol intensity retrieved from SAR, there are many wind-related observables that can be estimated when considering dual-polarization products, such as cross-polarisation sigma0, polarimetric parameters (e.g. Co-Cross Polarimetric Coherence, or CCPC), geophysical Doppler, the Imaginary part of Cross-Spectra (IMACS), or wind direction approximation in the presence of wind streaks in SAR sigma0 co- and cross-pol textures. Each of these variables has its own limitations for describing, in all observation geometries, all wind conditions likely to be encountered.
Several empirical relations have been proposed in the literature to explain how they vary with the sea surface wind vector and observing geometry. A Bayesian approach combining them all in a cost-minimization problem can be proposed to maximize the number of observables and remove the usual ancillary wind model information required to constrain the wind direction, but none has been tested for retrieving a spatially consistent wind vector using them all, because this requires accurate knowledge of the uncertainties associated with each of them. In addition, these observables can only be estimated accurately at various resolutions, from on the order of 100 m for the sigma0 channel to on the order of 10 km for the CCPC. This raises an additional difficulty in properly comparing these various inputs and solving the minimization problem. A natural way to tackle these problems is therefore to develop a new deep learning model that can efficiently aggregate all these aspects to better describe the ocean surface wind field from SAR observations. Today, the development of Artificial Intelligence (AI) techniques, and especially deep neural networks, for the retrieval of bio- and geo-chemical/physical parameters from SAR measurements has been heterogeneous, whether considering applications on land, over sea ice or over free ocean surfaces. In the framework of the ESAWAAI project, we are working in partnership to develop a new eXplainable Artificial Intelligence (XAI) approach to answer the challenges of a Bayesian GMF for wind field retrieval from space-based sensors. To obtain a higher degree of explainability, we aim at a synergy of two paradigms: algorithmic and scientific explainability. Thus, we design our learning procedures to produce trustworthy models with physical meaning and to guarantee transparency in understanding how the algorithm works from a mathematical point of view.
The new methodology developed will enable us to: 1- understand and quantify these limitations, 2- use the variables in a complementary way, 3- obtain a better estimate of the SAR-derived wind field, with increased accuracy and better uncertainty estimation, and 4- avoid using a priori information provided by a numerical weather prediction (NWP) model, performing a fully independent, SAR-based wind retrieval. Machine learning models also require training data volumes large enough to meet the learning need. Today, there is no AI-ready public dataset to drive the use and engagement of the AI and Earth observation (EO) community for SAR-derived wind applications. The study aims to create an unprecedented dataset that will not only be useful for this project but will also provide the AI community, and more generally the scientific community, with a dataset for which all the required pre-processing, variable selection, co-localization... has already been carried out. This opens the possibility of using and sharing this dataset widely for other projects and challenges. In particular, the legacy data of the ESAWAAI project hosts several SAR features derived from Interferometric Wide (IW) and Extra Wide (EW) swaths in multi-polarization, starting from Single Look Complex (SLC) Level 1 (L1) products, and various NWP model forecasts or reanalyses that have been developed over European regions (CERRA, NORA3, ...), with the aim of providing spatially detailed and extensive archives of wind at the sea surface and at altitude. This dataset builds on the expertise and data developed as part of the SARWAVE project, which provides a detailed description of the sea state as perceived by the SAR and which is used by the AI model to estimate and explain the retrieved wind field. The newly estimated wind fields will be compared and evaluated with respect to in-situ data such as meteorological mast data (e.g., LIDAR) and ocean buoys.
They will also be compared to state-of-the-art wind maps from the DTU processing chain using GFS, ERA5 and NEWA for wind direction input. For the offshore region located at high latitudes in the Norwegian Sea, since no in-situ data are available, the validation will be performed by intercomparison among the SAR-based wind datasets. Such sea surface wind fields are increasingly used today for meteorological or wind resource assessment applications. The ESAWAAI project will provide for these applications a better characterization of the SAR observables' various contributions to the wind field estimation, and of their dependences with respect to the given metocean (meteorological and ocean) situations and observing geometry. Acknowledgement: This project is funded and supported by ESA Φ-lab as part of the Future EO Foresight element and contract TOWARDS EXPLAINABLE AI4EO: APPLICATION TO SAR AND HYPERSPECTRAL IMAGERS (ESA AO/1-1-11524/22/I-DT), focusing on Topic 1: High-dimensional Synthetic Aperture Radar (SAR) data, exploiting phase-related information beyond single-pol intensity analysis.
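The Bayesian cost-minimization idea described above, combining several wind-sensitive observables each weighted by its uncertainty, can be sketched with toy GMFs. The functional forms, values and uncertainties below are illustrative placeholders, not the operational GMFs:

```python
import numpy as np

# Toy geophysical model functions (hypothetical forms, NOT the operational GMFs):
# each maps wind speed u (m/s) and direction phi (rad) to a predicted observable.
def gmf_sigma0(u, phi):
    return 0.05 * u * (1.0 + 0.3 * np.cos(2.0 * phi))

def gmf_doppler(u, phi):
    return 0.4 * u * np.sin(phi)

# Pseudo-observations generated from a "true" wind of 10 m/s at 45 degrees,
# each with its own 1-sigma uncertainty.
obs = {"sigma0": (0.50, 0.05), "doppler": (2.83, 0.30)}
gmfs = {"sigma0": gmf_sigma0, "doppler": gmf_doppler}

# Bayesian-style cost: squared misfits weighted by each observable's uncertainty.
u_grid = np.linspace(1.0, 25.0, 241)
phi_grid = np.radians(np.arange(0.0, 360.0, 1.0))
U, PHI = np.meshgrid(u_grid, phi_grid, indexing="ij")
cost = sum(((gmfs[k](U, PHI) - v) / s) ** 2 for k, (v, s) in obs.items())

i, j = np.unravel_index(np.argmin(cost), cost.shape)
print(f"retrieved wind speed: {u_grid[i]:.1f} m/s")
```

Note that these two toy observables leave an exact 45/135-degree direction ambiguity; resolving such ambiguities without ancillary NWP winds is precisely why more observables, with well-characterized uncertainties, are needed.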

Friday 27 June 14:30 - 16:00 (Hall G1)

Presentation: EDEM: Explaining Deep lEarning Models of Satellite Time Series for the Agritech domain

Authors: Michele Linardi, Claudia Paris
Affiliations: CY Cergy Paris Université, University of Twente
1. Introduction and motivation
Increasingly frequent extreme weather events strongly affect European agriculture, threatening sustainable food production [1]. Farmers need decision-making support to adapt agriculture to a changing climate (e.g., monitoring of field and crop damage). Industry, policymakers, and large aggregators, such as the European Commission's Joint Research Centre, seek tools to assess crop loss at a regional level [2]. To address these needs, a constantly increasing effort has been devoted to satellite data analysis, which allows for continuous monitoring of the Earth’s surface at the global level. In this context, there is a pressing need for the development of innovative approaches that leverage Big Data Analytics and Artificial Intelligence (AI) to collect, analyze, and derive insights from vast Earth Observation (EO) datasets, particularly global satellite data. Among the different EO data, Satellite Image Time Series (SITS) of multispectral (MSI) data and hyperspectral images (HSI) offer invaluable insights into agriculture. The temporal information provided by MSI data is closely linked to the phenological development of crops, capturing growth stages and seasonal patterns [3]. Meanwhile, the detailed spectral information from HSI allows for the differentiation of crop types and the detection of subtle biophysical and biochemical variations [5]. These datasets form the backbone of advanced AI-driven solutions enhancing agricultural resilience and sustainability. Recent advancements in AI for EO have significantly improved the classification of EO data, leveraging techniques inspired by breakthroughs in computer vision. Deep Learning (DL) models have demonstrated exceptional performance when applied to SITS of MSI or HSI data.
However, these models are often criticized as "black boxes" due to their opaque and complex decision-making processes, which hinder their adoption by end-users [4]. Specifically, the geoscience community still lacks explainable AI (xAI) methods tailored specifically to the unique properties of SITS and HSI data. In the framework of the “EDEM: Explaining Deep lEarning Models of Satellite Time Series for the Agritech domain” project — funded by the ANR (The French National Research Agency) and coordinated by Michele Linardi and Claudia Paris — we are interested in developing novel xAI techniques applied to DL models adopted in the analysis of SITS of MSI and HSI. In this context, we aim to validate our contribution by measuring how xAI enhances decision-making processes and optimizes large-scale agricultural monitoring using publicly available benchmark datasets. Specifically, EDEM aims to develop xAI approaches that provide a stronger descriptive approach to DL algorithms, as well as additional information to end-users, hence enhancing data insights. High-level insights regarding model predictions aim to assist analysts in understanding the underlying data generation process in various domains, validating model outcomes, and debugging model internals to improve prediction performance.
2. Methodology
While several xAI approaches have been tested on single high-resolution optical EO data [4], little has been done so far for the analysis of SITS of MSI data [6][7] and for addressing complex HSI data [5]. In contrast, outside of the geoscience domain, several forms of explanations studied for unstructured data (e.g., images) have been recently applied to explain DL models on multivariate time series of data [8].
Building on existing methods, EDEM aims to address three specific challenges: (1) causal feature importance in DL classifiers, (2) explainability for DL architectural tuning across domains applied to SITS of MSI data or HSI data, and (3) SITS concept generation for counterfactual explainability. The first challenge aims to report the features' importance in terms of their causal effects on the outcomes of DL classifiers for agricultural area classification when using SITS of MSI data. While state-of-the-art DL models like Long Short-Term Memory (LSTM) networks and Transformers perform well in crop mapping, their potential for inferring Granger causality between multivariate time series (MTS) dimensions remains unexplored in relevance attribution methods. Leveraging neuron activations, relevance scores can be computed with causal semantics [9]. However, challenges include managing the large investigation space of SITS and exploring alternative conditional dependency tests, such as causal Markov models, to overcome the limitations of pairwise Granger causality. The second problem concerns explaining DL models by analyzing how neural network internals influence classification outcomes, particularly for deployment in regions distinct from training areas [10]. A key challenge is addressing performance drops in crop classifiers when adapting to different temporal and spatial domains without acquiring extensive reference data. Building on prior work on explainability for DL elements (e.g., positional encoding in temporal attention encoders), the goal is to develop xAI solutions that enable holistic interventions on DL architectures [11], improving model generalization and trustworthiness. This approach aims to enhance uncertainty quantification and the robustness of DL models for crop mapping on a large scale when using both SITS of MSI data and HSI data. The third challenge is to develop methods for extracting concepts that drive explainability through counterfactual hypothesis testing.
Unlike natural image datasets, generating meaningful concepts in SITS (MSI or HSI) is complex. The focus is on identifying causal multivariate sub-instances that define classes or outliers, supporting conditional tests in DL models. Examples include evaluating neural network sensitivity to concepts like crop rotation, satellite data acquisition conditions, climatic variability, and environmental factors. These factors can significantly influence model quality and reliability, necessitating robust explanations to enhance user validation and trust.
3. Product generation and conclusion
EDEM aims to propose a novel explainability framework containing methods able to capture causal relationships of model outcomes with important events inferred both from the SITS of MSI and HSI data and from DL models' building blocks. Such contributions have many practical benefits on Deep Domain Adaptation and Generalization, where the DL training process typically faces label scarcity, and the model is required to generalize over multi-source data features that can drastically differ. Domain Adaptation has been widely used in the remote sensing literature to tackle the demanding task of classifying satellite data for large-scale monitoring and mapping. For a specific agriculture task, analysts must understand the heterogeneous characteristics of the geographical study areas in different countries. Moreover, they need to cope with the varying climate evolution across the years and imaging resolutions of the available sensors, which affect the applicability of DL models due to irregular changes in the data distribution. While other data types, such as images, are naturally interpretable signals with high spatial redundancy, SITS of MSI and HSI incorporate a low semantic level even for experts with consolidated domain knowledge.
Hence, they require more complex explanation forms that can match human intuition and interpret the main causes behind the outcome of a DL model applied to EO data. We note that such causes can have a dual nature. First, specific data variables or instances (e.g., parent nodes in a structural causal model, repeated patterns, anomalies) can induce a DL prediction model to assign importance to certain features for obtaining the final decision. Moreover, results can also depend on the models' internal primitives (e.g., convolutional filters, positional encoding, attention heads) that drive the feature learning mechanism and thus become the root causes of the final decisions.
References
[1] K. Hainsch et al., "Energy transition scenarios: What policies, societal attitudes, and technology developments will realize the EU Green Deal?", Energy, 2022.
[2] R. d'Andrimont et al., "From parcel to continental scale – A first European crop type map based on Sentinel-1 and LUCAS Copernicus in-situ observations", Remote Sensing of Environment, 2021.
[3] G. Weikmann, C. Paris, et al., "Multiyear Mapping of Water Demand at Crop Level: An End-to-End Workflow Based on High-Resolution Crop Type Maps and Meteorological Data", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
[4] C. M. Gevaert, "Explainable AI for Earth observation: A review including societal and regulatory perspectives", International Journal of Applied Earth Observation and Geoinformation, 112, 102869.
[5] L. Shuai, Z. Li, Z. Chen, D. Luo, and J. Mu, "A research review on deep learning combined with hyperspectral imaging in multiscale agricultural sensing", Computers and Electronics in Agriculture, 217, 108577, 2024.
[6] I. Obadic et al., "Exploring self-attention for crop-type classification explainability", arXiv preprint arXiv:2210.13167, 2022.
[7] A. Theissler et al., "Explainable AI for Time Series Classification: A Review, Taxonomy and Research Directions", IEEE Access, 2022.
[8] P. Boniol et al., "dCAM: Dimension-wise Class Activation Map for explaining multivariate data series classification", SIGMOD '22.
[9] E. Vareille, A. Abbas, M. Linardi, and V. Christophides, "Evaluating Explanation Methods of Multivariate Time Series Classification through Causal Lenses", IEEE DSAA 2023.
[10] I. Kalita et al., "Deep learning-based cross-sensor domain adaptation under active learning for land cover classification", IEEE Geoscience and Remote Sensing Letters, 2022.
[11] A. Abbas, M. Linardi, E. Vareille, V. Christophides, and C. Paris, "Towards Explainable AI4EO: an Explainable Deep Learning Approach for Crop Type Mapping in Satellite Image Time Series", IEEE IGARSS 2023.
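The causal relevance idea behind challenge (1) rests on Granger-style tests: lagged values of one series should improve prediction of another beyond its own history. A minimal, self-contained one-lag Granger F-test on synthetic data (a generic sketch, not the project's method) looks like:

```python
import numpy as np

def rss(X, t):
    # Residual sum of squares of an ordinary least-squares fit of t on X.
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)
    r = t - X @ beta
    return float(r @ r)

def granger_f_stat(y, x):
    """One-lag Granger test: does x[t-1] help predict y[t] beyond y[t-1]?"""
    t = y[1:]
    ones = np.ones_like(t)
    restricted = np.column_stack([ones, y[:-1]])           # y's own lag only
    unrestricted = np.column_stack([ones, y[:-1], x[:-1]])  # plus lagged x
    rss_r, rss_u = rss(restricted, t), rss(unrestricted, t)
    dof = len(t) - unrestricted.shape[1]
    return (rss_r - rss_u) / (rss_u / dof)  # F(1, dof) statistic

rng = np.random.default_rng(0)
x = rng.normal(size=300)                    # driver series (e.g., one SITS dimension)
y = np.empty(300)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + 0.2 * rng.normal(size=299)  # y is driven by lagged x

print(granger_f_stat(y, x) > granger_f_stat(x, y))  # True: x drives y, not the reverse
```

Pairwise tests like this scale poorly over the large investigation space of SITS, which is exactly why the project explores alternative conditional dependency tests.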

Friday 27 June 14:30 - 16:00 (Room 1.34)

Session: A.02.07 Monitoring grasslands and rangelands from space - PART 3

Grasslands and rangelands account for a large share of globally managed agricultural landscapes and provide essential ecosystem services such as carbon sequestration and water retention. At the same time, they represent a critical source of biomass for animal feed and support livelihoods, for instance, of pastoralists and other livestock producers. Grasslands and rangelands provide habitats for a wide range of floristic and faunistic species and therefore play a crucial role in the context of climate change and biodiversity. A key challenge is to find a balance between maintaining or enhancing ecosystem services and functions – which are influenced by the intensity of their utilization and management – while at the same time meeting the demand of a growing population for meat and dairy products, as well as pressures from other land uses. This brings the topic of grassland/rangeland ecosystems to the agenda of land managers and nature conservationists, and prominently into international policies. Therefore, accurate Earth Observation (EO) based mapping and monitoring approaches are crucial to deepen our understanding and to support evidence-based policy development and evaluation. While remote sensing-based approaches have already proven valuable for mapping and monitoring the intensity of grassland/rangeland use over large areas, several challenges, such as the degradation and recovery of those ecosystems, remain unresolved, and new solutions are needed.

We welcome contributions that either focus on the testing and implementation of novel EO data and methods for the assessment of rangelands and grassland management and condition, or that provide insight into the valorization of established technologies for innovative EO-based services in support of climate action and sustainable management. These could include but are not restricted to:

- multisource imaging / data fusion / time series
- estimation of grassland yields / biomass / quality and other relevant biophysical variables
- degradation and recovery of grasslands and rangelands
- differentiation of pastures / meadows
- rangeland/grassland use intensity, resilience and carrying capacity
- grassland use in the context of biodiversity / climate change
- monitoring and evaluation of agricultural and environmental policies

Friday 27 June 14:30 - 16:00 (Room 1.34)

Presentation: Sustainable livestock intensification in Brazil based on satellite gross primary productivity estimates and derived biomass yields

Authors: Laerte Ferreira, Vinicius Mesquita, Leandro Parente, Mustafa Isik, Tomislav Hengl, Wilton Ladeira, Nathália Teles, Alessandra Bertassoni, Lindsey Sloat
Affiliations: Remote Sensing and GIS Lab at the Federal University Of Goias, OpenGeoHub, Land & Carbon Lab, World Resources Institute
Despite being the world's largest meat exporter, Brazil's livestock farming practices remain predominantly extensive, characterized by low technological input and productivity. This results in large herds spread across vast pasturelands, which occupy approximately 20% of the national territory. Such extensive land use has led to significant ecosystem fragmentation, degradation (affecting at least 36 million hectares of cultivated pastures), and the eventual abandonment of approximately 10 million hectares of previously cleared areas. Efficient use of cattle pastures is crucial for Brazil to achieve its greenhouse gas (GHG) reduction targets while simultaneously addressing habitat loss, food security, trade goals, and climate change mitigation and adaptation. Thus, in this study we utilized the 2022 Landsat-based cultivated pasture area mapped by MapBiomas (~163 million hectares) and the corresponding annual biomass yield, derived at 30m spatial resolution using a Light Use Efficiency (LUE) model. We conservatively assumed a fixed carbon rate of 0.5 gC/m²/day/MJ, representing 37% of dry biomass, to evaluate the potential for sustainable, grass-fed livestock intensification in Brazil. Based on these approaches, assumptions, and parameters, Brazil produced an estimated 3.18 gigatons (Gt) of forage in 2022, averaging 19.5 tons/hectare (with the 1st and 99th percentiles at 5.2 and 37.2 tons/hectare, respectively). This forage production could theoretically increase the livestock stocking rate from ~1.3 to 2.4 animal units (AU)/hectare (1 AU = 450 kg live weight), or alternatively, maintain the current herd size while freeing approximately 75 million hectares of pastureland for other uses, such as soybean expansion. These preliminary estimates require further investigation and comprehensive validation. 
The figures vary significantly across regions, especially regarding forage production and stocking rates, which range from 11.4 to 24.6 tons/hectare (average values) and 0.45 to 1.12 AU/hectare in the Caatinga (semi-arid) and Amazon biomes, respectively. Within the scope of the MapBiomas and Land & Carbon Lab Global Pasture Watch (GPW) initiatives, we aim to conduct a sensitivity analysis over the coming months. This analysis will explore different LUE values and carbon-to-biomass conversion factors, incorporating literature-based insights and in-situ data from both existing inventories and new field campaigns. These efforts will span diverse environmental conditions and prevalent pasture conditions at landscape and farm scales. Our ultimate goal is to operationally deliver reliable biomass estimates using high-resolution satellite data (e.g., Sentinel-2) to support more efficient and sustainable cattle management practices.
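The headline arithmetic above can be checked directly from the reported figures; in the second part, the LUE rate and carbon fraction come from the abstract, while the daily APAR value is a hypothetical placeholder for illustration only.

```python
# Check of the forage-to-yield arithmetic reported above
# (3.18 Gt of forage over ~163 million hectares of cultivated pasture).
forage_t = 3.18e9                      # tons of forage, 2022
area_ha = 163e6                        # hectares of cultivated pasture
yield_t_per_ha = forage_t / area_ha
print(f"mean yield: {yield_t_per_ha:.1f} t/ha")   # ~19.5 t/ha, as stated

# LUE-style daily dry-matter increment for one pixel. The 0.5 gC/m2/day/MJ rate
# and 37% carbon fraction are the abstract's values; the APAR value is hypothetical.
lue = 0.5                              # gC per MJ of absorbed PAR
carbon_fraction = 0.37                 # carbon share of dry biomass
apar = 8.0                             # MJ/m^2/day (illustrative)
dry_matter = lue * apar / carbon_fraction
print(f"dry matter: {dry_matter:.1f} g/m^2/day")
```

The first calculation reproduces the abstract's national average; the second shows how the fixed carbon rate and carbon-to-biomass fraction convert absorbed radiation into dry forage, the quantities the planned sensitivity analysis will vary.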

Friday 27 June 14:30 - 16:00 (Room 1.34)

Presentation: Monitoring grassland and pastures at global scale: A multi-source approach based on data fusion

#stac #cloud-native

Authors: Leandro Leal Parente, Lindsey Sloat, Vinicius Mesquita, Laerte Ferreira, Radost Stanimirova, Tomislav Hengl, Davide Consoli, Nathália Teles, Maria Hunter, Ichsani Wheeler, Carmelo Bonannella, Steffen Fritz, Steffen Ehrmann, Ana Paula Mattos, Bernard Oliveira, Carsten Meyer, Martijn Witjes, Ivelina Georgieva, Mustafa Serkan Isik, Fred Stolle
Affiliations: OpenGeoHub Foundation, World Resources Institute, Remote Sensing and GIS Laboratory (LAPIG/UFG), International Institute for Applied Systems Analysis (IIASA), German Centre for Integrative Biodiversity Research (iDiv)
Covering about 40% of the Earth’s surface, grasslands and pastures are critical for carbon sequestration, food production, biodiversity maintenance, and cultural heritage for people all over the world. Aiming to provide monitoring solutions for these key ecosystems, the Land & Carbon Lab’s Global Pasture Watch (GPW) initiative is developing four globally consistent time-series datasets: (i) 30-m grassland class and extent, (ii) 30-m short vegetation height, (iii) 1-km livestock densities, and (iv) 30-m bi-monthly gross primary productivity. Conceptualized as building blocks, these products were designed and implemented in a flexible way enabling, for example, local calibration based on in-situ reference points or existing area estimates, and fusion with other land cover products. This study presents the first results of integrating GPW products into a harmonized global map that delineates active grazing areas, pastures with different management intensities, and natural graze- and browse-lands. The methodology applies globally and continentally derived thresholds for grassland classes, livestock densities, dominant short vegetation heights and productivity trends to assign the final class in the integrated product. To account for differences in the quality and accuracy of the input datasets, a per-pixel uncertainty layer is provided together with the integrated map, enabling spatial and temporal visualization/analysis of the integration errors. Despite inherent challenges and limitations, the implemented approach is entirely open (based on open source software and open datasets), enabling different user communities to adapt it to regional/local contexts and specific use cases. To further enhance usability and improve accuracy, GPW is actively promoting online tools (Geo-Wiki) for collecting, organizing and incorporating user feedback in future collections of the products, through additional reference data, local knowledge and new machine learning methods.
All input and output data, including reference samples, are publicly available as cloud-native formats in Zenodo, SpatioTemporal Asset Catalog (STAC) systems and Google Earth Engine. The source code is publicly accessible at https://github.com/wri/global-pasture-watch.
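The threshold-based integration described above can be sketched per pixel with NumPy. The class names and threshold values below are illustrative placeholders, not the GPW rules, and the inputs are simulated stand-ins for the four building-block layers:

```python
import numpy as np

# Hypothetical per-pixel inputs standing in for the four GPW building blocks.
rng = np.random.default_rng(1)
n = 6
grass_prob = rng.uniform(0, 1, n)       # grassland class probability
livestock = rng.uniform(0, 30, n)       # livestock density, heads/km^2
veg_height = rng.uniform(0, 3, n)       # short vegetation height, m
gpp_trend = rng.normal(0, 1, n)         # gross primary productivity trend

# Illustrative decision rules (the real thresholds are derived
# globally/continentally and calibrated per region).
CLASSES = np.array(["other", "natural graze/browse-land",
                    "active grazing area", "managed pasture"])
cls = np.zeros(n, dtype=int)
is_grass = grass_prob > 0.5
cls[is_grass] = 1
cls[is_grass & (livestock > 5)] = 2
cls[is_grass & (livestock > 5) & (veg_height < 1.0) & (gpp_trend > 0)] = 3
print(CLASSES[cls])
```

In the actual product, the same per-pixel logic would additionally propagate each input's accuracy into the accompanying uncertainty layer.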

Friday 27 June 14:30 - 16:00 (Room 1.34)

Presentation: A Novel Phenology-Based Approach to Mapping Vegetation Fractional Cover in Semi-Arid Sub-Saharan Rangelands

Authors: Lasse Harkort, Akpona Okujeni, Vistorina Amputu, Jari Mahler, Leon Nill, Dirk Pflugmacher, Patrick Hostert
Affiliations: Geography Department, Humboldt University Of Berlin, Helmholtz Center Potsdam, GFZ German Research Center for Geosciences, Plant Ecology Group, Faculty of Mathematics & Natural Sciences, University of Tübingen, Earth Observation and Climate Processes, Faculty VI Spatial and Environmental Sciences, IRI THESys, Humboldt-Universität zu Berlin
Rangelands provide essential ecosystem services and support livelihoods across Sub-Saharan Africa, yet monitoring their vegetation composition and dynamics remains challenging. This study presents an innovative approach for mapping fractional vegetation cover in these structurally complex landscapes by leveraging phenological information from Sentinel-2 time series. Our method introduces the concept of a phenological feature space (PFS), which extends traditional spectral unmixing approaches by incorporating temporal vegetation dynamics. The PFS utilizes two key phenological metrics that capture fundamental differences between woody and herbaceous vegetation: the dry season integral, which quantifies vegetation greenness during the dry period when most herbaceous plants have senesced, and the rate of average falling, which measures how quickly vegetation loses greenness after peak growth - a characteristic that distinguishes rapidly senescing herbaceous plants from more gradually changing woody vegetation. We implemented the approach along a 600 km precipitation gradient in Namibia, demonstrating its robustness across diverse rangeland conditions. The method combines phenometric-guided endmember selection with neural network regression using synthetic training data, enabling high-resolution (10 m) mapping of vegetation fractions. The neural network architecture was implemented to handle the complex spectral-temporal patterns in our data, processing a full year of observations across all Sentinel-2 bands. Validation using unmanned aerial vehicle (UAV) imagery revealed promising accuracy, with mean absolute errors of 11.87%, 13.57%, and 14.47% for woody, herbaceous, and bare surface cover, respectively.
The resulting fractional cover maps successfully captured environmental gradients and land management patterns, including areas of woody vegetation clearing and varying proportions of woody to herbaceous cover across different management regimes. Our approach addresses several key challenges in rangeland monitoring: the separation of spectrally similar but functionally different vegetation types, the incorporation of seasonal dynamics, and the need for high spatial detail in heterogeneous landscapes. The method's potential to detect gradual changes in vegetation composition makes it particularly valuable for monitoring rangeland degradation and recovery processes and shows promise for scaling up to even larger geographical areas. This study advances the field of Earth observation for rangeland monitoring by providing a reproducible framework for mapping vegetation composition at management-relevant scales. The approach can support evidence-based land management and impact evaluation, particularly in the context of sustainable rangeland management and climate change adaptation strategies. Future applications could include monitoring bush thickening, assessing carrying capacity, and evaluating the effectiveness of restoration measures in semi-arid rangelands.
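The two phenometrics described above can be sketched for a single pixel's greenness time series. This is a minimal illustration only: the function name, the default dry-season window, and the peak-to-trough definition of the falling rate are assumptions, not the study's calibrated definitions.

```python
import numpy as np

def phenological_features(ndvi, doy, dry_start=180, dry_end=300):
    """Illustrative sketch of the two PFS metrics for one pixel.

    ndvi      : greenness values over one seasonal cycle
    doy       : day-of-year of each observation
    dry_start : assumed start of the dry-season window (placeholder)
    dry_end   : assumed end of the dry-season window (placeholder)
    """
    ndvi = np.asarray(ndvi, dtype=float)
    doy = np.asarray(doy, dtype=float)

    # Dry-season integral: greenness accumulated after most herbaceous
    # plants have senesced (trapezoidal integration over the window).
    m = (doy >= dry_start) & (doy <= dry_end)
    v, d = ndvi[m], doy[m]
    dry_integral = float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(d)))

    # Average falling rate: mean greenness loss per day between the
    # seasonal peak and the following minimum; herbaceous cover senesces
    # quickly, woody cover declines more gradually.
    peak = int(np.argmax(ndvi))
    trough = peak + int(np.argmin(ndvi[peak:]))
    days = max(doy[trough] - doy[peak], 1.0)
    falling_rate = float(ndvi[peak] - ndvi[trough]) / days

    return dry_integral, falling_rate
```

A pixel with a high dry-season integral and a low falling rate would sit in the woody corner of such a feature space; the opposite corner would indicate herbaceous cover.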

Friday 27 June 14:30 - 16:00 (Room 1.34)

Presentation: A Case Study on Pastureland Dry Matter Biomass Estimation for Enhanced Livestock Production in Paraguay

Authors: Mr. David de la Fuente, Carlos Domenech, Mr Pablo Valdivia, Mr Marcelo Portaluppi, Mr Jorge Cordone, Ms Liliana Simon, Ms Rocío Sosa, Mr Héctor Gill, Juan Suarez, Mihaela Violeta Gheorghe
Affiliations: GMV, World Bank Group, FECOPROD
This abstract presents a case study developed by GMV within the context of the ESA-funded Global Development Assistance (GDA) Agriculture cluster. The study employs Earth Observation (EO) to enhance livestock production in Paraguay. This case study, which was developed in collaboration with the World Bank, GMV, and the Production Cooperative Federation (FECOPROD), illustrates the potential of EO services for sustainable development. The objective of the case study is to integrate EO data into FECOPROD's digital infrastructure, thereby providing valuable insights for livestock farmers. A significant element of the case study is the estimation of dry matter biomass (DMB), which is a vital parameter for optimising grazing areas, monitoring water availability and preventing overgrazing. The methodology employed utilises the Carnegie-Ames-Stanford Approach (CASA) model to estimate net primary productivity (NPP), a pivotal variable for calculating DMB. The CASA model employs meteorological data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and satellite time series from Sentinel and Landsat. The CASA model focuses on the efficiency of plant photosynthesis, quantified as light use efficiency (LUE). To this end, it incorporates factors such as absorbed photosynthetically active radiation (APAR), temperature, and water availability, which are used to adjust LUE. The conversion of NPP to DMB requires consideration of the carbon allocation ratio and the root-to-shoot ratio. This process enables the prediction of pastureland condition and yield based on the estimated DMB and productivity levels. To operationalise the CASA model, the case study makes use of high-quality climate data from the ERA5-Land dataset, which is a product of ECMWF. This dataset, distinguished by its high spatial and temporal resolution, is particularly well-suited for applications in livestock farming. 
The calculation of NPP is a multi-stage process, comprising the generation of dekadal composites of fraction of APAR (fAPAR) and albedo from the Sentinel-2 time series, the filling of gaps in the time series while ensuring the preservation of observed trends in pasture phenology, the calculation of LUE based on global look-up tables adjusted for Central Chaco pastureland, and the incorporation of environmental stressors from the ERA5-Land dataset. The DMB is estimated by converting the accumulated NPP for each dekadal period, utilising a constant value for the crop-dependent C_ratio derived from global look-up tables adjusted for Central Chaco pastureland. The root-to-shoot ratio is calibrated on FECOPROD's experimental farms using ground-truth dry biomass data, which also allows the calibration to be validated. The case study provides valuable insights that can inform decision-making in livestock farming. FECOPROD can compare the mean daily DMB generated in each dekadal period throughout the year at a spatial resolution of 10 metres. Additionally, it generates products that illustrate plot-level biomass variability, thus enabling the identification of areas exhibiting higher or lower yield levels in comparison to the average pasture yield for a given plot. The case study provides livestock farmers with comprehensive data regarding land conditions, equipping them with the knowledge necessary to make well-informed decisions pertaining to breeding, health management, and the overall functioning of their farms. This data-driven approach contributes to increased efficiency and productivity, which in turn leads to a more resilient and environmentally conscious agricultural sector. Moreover, the systematic generation of these indicators across extensive rangeland areas facilitates the alignment of feed production with seasonal pasture cycles, thereby ensuring a stable and cost-effective feed supply. 
This alignment serves to enhance the efficiency of agricultural production and to minimise the environmental impact of feed production. The case study provides an exemplar of a scalable solution that addresses environmental concerns, promotes resource management, and strengthens the sustainability of the livestock industry in Paraguay and other regions. As the field of Earth observation continues to advance, the possibilities for innovation in livestock farming practices expand, paving the way for a more sustainable and resilient agricultural future.
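The CASA logic described above (NPP as APAR times a stress-adjusted light-use efficiency, then conversion to above-ground dry matter) can be sketched as follows. The constants are generic placeholders for illustration, not the look-up-table or farm-calibrated values used in the study.

```python
def npp_dekad(fapar, par, lue_max, t_stress, w_stress):
    """NPP for one dekad following the CASA approach:
    NPP = APAR x adjusted light-use efficiency, where the temperature
    and water scalars (0..1) down-regulate the maximum LUE."""
    apar = fapar * par                      # absorbed PAR (e.g. MJ/m^2)
    lue = lue_max * t_stress * w_stress     # stress-adjusted LUE (gC/MJ)
    return apar * lue                       # NPP in gC/m^2 per dekad

def aboveground_dmb(npp_c, c_ratio=0.45, root_shoot=1.0):
    """Convert carbon NPP to above-ground dry matter biomass.
    c_ratio (carbon fraction of dry matter) and root_shoot are
    illustrative placeholder values only."""
    dry_matter = npp_c / c_ratio            # total dry matter (g/m^2)
    return dry_matter / (1.0 + root_shoot)  # keep the above-ground share
```

For example, with fAPAR 0.6, PAR 100 MJ/m^2, maximum LUE 2 gC/MJ and stress scalars 0.9 and 0.8, the dekadal NPP is 86.4 gC/m^2, which the placeholder conversion turns into 96 g/m^2 of above-ground dry matter.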

Friday 27 June 14:30 - 16:00 (Room 1.34)

Presentation: Continental-scale rangeland monitoring in Africa using Earth observation

Authors: Robert Buitenwerf, Jonas Ardö, Zhanzhang Cai, Charles Davison, Lars Eklundh, Patrick Griffiths, Donvan Grobler, Michael Munk, Mahendra Pal, Paul Senty
Affiliations: Aarhus University, Lund University, DHI, GeoVille GmbH, ESA - ESRIN
Rangelands, landscapes with (semi-)natural vegetation that is not forest, support the livelihoods of about 1 billion people globally, sustain unique biodiversity and supply globally important ecosystem services including climate change mitigation. However, global or continental-scale EO-based rangeland monitoring has lagged behind forest and crop monitoring, particularly in the Global South. The lack of accurate high-resolution rangeland monitoring prevents the assessment of longer-term global change impacts on vegetation and ecosystem dynamics and hinders rapid management responses to emerging environmental challenges and their socio-economic consequences. The Rangeland Monitoring using Earth Observation project (RAMONA), funded as an ESA Continental Demonstrator, aims to provide high resolution monitoring of key rangeland variables using Sentinel 1-3 data. The goals are to 1) define the extent of African rangelands, 2) classify African rangelands into functional types to facilitate detection of social-ecological change, 3) describe and quantify the vegetation phenology of African rangeland types, and 4) provide temporal estimates of herbaceous biomass, a key variable in rangeland functioning that affects livestock grazing, biodiversity patterns, fire regimes and carbon dynamics and, ultimately, livelihoods. Here we will present an overview of these four core products, including their technical specifications, interrelations and dependencies, and accuracy assessment. All data products were generated at 10 m spatial resolution and derived from gap-filled Sentinel 2 data, where gap filling was achieved using data from Sentinel 3 and Sentinel 1. The data spans an 18-month period, centered on 2022. We will demonstrate some of the most significant continental-scale patterns that these novel data sets reveal and illustrate how they can be used in real-world use cases. 
RAMONA data is freely available via a web portal (app.ramona.earth) and we will demonstrate how RAMONA can be accessed for both viewing and data download. We conclude that the use of Sentinel data allows timely, high-resolution and accurate monitoring of African rangelands. Based on feedback from various African and international stakeholders, there is substantial scope for uptake of the RAMONA variables if ongoing production and regular updates can be guaranteed.

Friday 27 June 14:30 - 16:00 (Room 1.34)

Presentation: Validation and area estimations of the first global 30-meter natural grassland and managed grassland time series (2000–2022)

Authors: Steffen Fritz, Myroslava Lesiv, Dr Leandro Parente, Ms Ivelina Georgieva, Ms Lindsey Sloat, Ms Ana Paula Matos e Silva, Ms Nathalia Monteiro Teles, Mr Vinicius Mesquita, Dr Laerte Guiamares Ferreira, Dr Luis Baumann, Dr Ichsani Wheeler
Affiliations: IIASA, OGH, WRI, LAPIG
Pastures and rangelands are the single most extensive land use on Earth, and understanding their productivity and how they are changing over time is critical to achieving global climate goals. Despite their importance, these areas rarely have dedicated monitoring efforts, and there is a significant lack of accurate, spatially explicit data on their location, area, and condition. To address this gap, a new Land & Carbon Lab research consortium, Global Pasture Watch, has produced recurrent and high-resolution (30-meter) maps of pasture areas and productivity spanning the period 2000–2022. Mapping grasslands globally is particularly challenging, especially when distinguishing between cultivated grassland, natural/semi-natural grassland, and other land cover classes. Consequently, a rigorous accuracy assessment of these maps is essential. This presentation outlines the sampling strategy adopted to validate the grassland map, alongside initial validation results as well as area estimates of natural/semi-natural and cultivated grassland. It begins with basic validation principles and definitions of grassland classes for evaluation. The approach for validating the static map of 2020 is then described, followed by the derivation of a separate sample for assessing changes in grassland over time (covering the periods 2000–2010, 2010–2015, and 2015–2022) and another separate stratum for validating the cultivated grassland class. A key objective was to collect sufficient sample data on the cultivated grassland class to generate globally and continentally robust area estimates of land conversion in the cultivated grassland domain for the 2000–2022 period. This focus is critical for applications such as global and regional land-use modeling, biodiversity studies, carbon cycle analysis, and assessments of human impact and management intensity. 
In particular, such assessments provide insights into the global relevance and magnitude of cultivated grasslands relative to other land conversions, such as those in the cropland and urban domains.
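Sample-based area estimation of the kind described above typically combines per-stratum sample proportions with stratum weights. The sketch below shows the standard stratified estimator with hypothetical strata; it is not the consortium's actual estimator or sample design.

```python
import math

def stratified_area_estimate(strata):
    """Stratified estimator of a class's area and its standard error.

    strata: list of (stratum_area, n_sampled, n_class) tuples, where
    n_class is the number of sample units labelled as the target class
    (e.g. cultivated grassland) by the reference interpretation.
    """
    total_area = sum(a for a, _, _ in strata)
    p_hat, var = 0.0, 0.0
    for area, n, k in strata:
        w = area / total_area           # stratum weight
        p = k / n                       # sample proportion in stratum
        p_hat += w * p                  # weighted class proportion
        var += w ** 2 * p * (1 - p) / (n - 1)  # variance contribution
    return p_hat * total_area, math.sqrt(var) * total_area
```

For two hypothetical strata covering 8,000 and 2,000 km² with 30/200 and 60/100 positive samples, the estimate is 2,400 km² with a standard error of about 225 km²; concentrating samples in a stratum rich in the rare class is what makes such estimates "robust" in the sense used above.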

Friday 27 June 14:30 - 16:00 (Hall K2)

Session: F.04.25 Making your science more actionable for policymakers

Following on from the agora session on "How to make Earth Observation Science more actionable?", these interactive workshops will allow you to dig into the question of how to make the science you are working on more actionable for policymakers and other 'end users' of your research.

Come to the workshop with the challenges of translating your work to the needs of your end users. Through a step-by-step process, we will help you unpick the approaches you are currently using, and give you a process on how to identify potential next steps and different ways forward.

Speakers:


  • Kris De Meyer - UCL

Friday 27 June 14:30 - 16:00 (Hall L1/L2)

Session: C.02.20 Past and Future EO Scientific Mission Concept

ESA's FutureEO and specifically its Earth Explorer component is a world-class programme of scientific research EO missions. By its very nature it is a very selective one, so for every selected mission there are many more that are also recognised as scientifically valid and technically feasible.
In fact, the combination of scientific topics, Earth observation physics and the current state of technology means that at any given time there will always be a pool of mission ideas, which may or may not be submitted or selected by ESA depending on various factors. The large but finite extent of this informal pool is evidenced, for instance, by repeated submissions of similar ideas to successive Earth Explorer calls.
This session will present some of the more prominent mission ideas and concepts, either as specific past (or future planned) submissions to Earth Explorer calls, or as general concepts.

Speakers:


Nitrosat, a Satellite Mission Concept for Mapping Reactive Nitrogen at the Landscape Scale


  • Van Damme, Martin - Université Libre de Bruxelles

4D-Earth, a free and open constellation for monitoring land surfaces and topography at high spatio-temporal resolution


  • Hagolle, Olivier - CNES/CESBIO

The Fine Resolution Explorer for Salinity, Carbon and Hydrology (FRESCH): a 10 km-resolution L-band mission


  • Rodriguez, Nemesio - CNES/CESBIO

EULE – European Urban Light Explorer


  • Storch, Tobias - German Aerospace Centre (DLR)

Friday 27 June 14:30 - 16:00 (Room 0.94/0.95)

Session: F.02.10 Joint use of Sentinel-1 and RCM to support citizen services in Europe and Canada

The Sentinel-1 and RADARSAT Constellation Mission (RCM) are two of the most advanced synthetic aperture radar (SAR) missions dedicated to providing high-resolution, all-weather, day-and-night Earth observation capabilities.
This session will highlight the synergistic strengths of Sentinel-1 and RCM data interoperability and continuity, with leading experts demonstrating how their combined data enables breakthroughs in monitoring critical environmental and socio-economic challenges.

Sentinel-1, a pillar of the European Union’s Copernicus program, delivers global, consistent, and openly accessible data, optimized for applications requiring broad spatial coverage and frequent revisits. Its dual-polarization capabilities make it an invaluable resource for tracking land deformation, agricultural productivity, forestry health, and marine environments. Complementing this, RCM’s innovative three-satellite configuration provides rapid revisit times and enhanced operational flexibility, extending coverage from Canada to the global Arctic and beyond.

Through a series of invited presentations, expert speakers will demonstrate key applications and methodologies that leverage the joint use of Sentinel-1 and RCM, illustrating real-world applications where these combined data streams improve monitoring precision, frequency, and operational readiness. Topics include:
Environmental Monitoring and Natural Resource Management: Case studies in coastal erosion tracking, flood risk assessment, and sustainable forestry, benefiting from dual SAR coverage.
Disaster Response and Resilience: Enhanced techniques for detecting and predicting natural disasters, such as floods, landslides, and ice hazards, especially in high-latitude regions.
Agriculture and Urban Analysis: Advanced applications in crop monitoring, yield forecasting, and urban sprawl analysis, capitalizing on the combined revisit frequencies and polarization capabilities of both missions.
Cross-Mission Data Fusion and Interoperability: Technical insights into data fusion methods, harmonization strategies, and processing innovations that make integrated use of Sentinel-1 and RCM datasets seamless and effective.

This invited session will showcase state-of-the-art case studies and insights, providing a compelling view into how Sentinel-1 and RCM, when used in concert, are advancing Earth observation and supporting sustainable decision-making globally.

Presentations and speakers:


Sentinel-1 / RCM Mission Coordination


  • Stéphane Chalifoux - CSA
  • Nuno Miranda - ESA

Enhanced Baltic Sea Ice Monitoring by combining Sentinel-1 and RCM


  • Juha Karvonen - FMI

Combining Sentinel-1 and Radarsat Constellation for Routine Production of Regional Ice Charts for Arctic Waters


  • Keld Quistgaard - DMI

Joint Use of RADARSAT Constellation Mission (RCM) and Sentinel-1 SAR imagery at Environment and Climate Change Canada


  • Cristina Surdu - ECCC

Using SAR from Sentinel-1 and Radarsat to Monitor a Changing Agricultural Landscape


  • Catherine Champagne - AAFC

Advancing Priorities at Natural Resources Canada Using Sentinel-1 and RCM data


  • Roger De Abreu - NRCAN

Friday 27 June 14:30 - 16:00 (Hall E1)

Session: D.02.02 Advances of Machine Learning (ML) methods for Cryosphere applications

Over the past decade, the AI4EO (Artificial Intelligence for Earth Observation) community has been constantly growing and has established a notable position in the remote sensing domain. This growth is due to clear advantages such as the ability to harvest vast amounts of archived data, faster processing times, and enhanced spatial analysis, among other benefits.
Fully automated and data-driven approaches are essential to harness the enormous amount of available data. Modern techniques, such as Explainable AI and physics-aware machine learning, enable the analysis of relationships on the Earth's surface, interactions between different domains, and physical dependencies through data. Furthermore, with the rise of self-supervised learning methods, the lack of usable annotations or labels becomes less relevant.
Despite these advancements, there remains a significant gap in publications for cryosphere applications, and AI4EO research has yet to realize its full potential in polar sciences and cryosphere applications.
Implementing AI and ML in Earth Observation is not straightforward due to the unique characteristics of remote sensing data compared to other AI applications. Thus, it demands the knowledge of geospatial and data sciences on top of core domain expertise. If we bring these groups together, there is immense potential for innovation and growth in this field, and ongoing research promises to overcome these challenges and unlock new insights in cryosphere science.
The focus of this session is on the adaptation and development of ML methods, including uncertainty quantification, to address all aspects of satellite remote sensing and adjunct methods of the cryosphere (i.e. sea ice, glaciers, ice sheets, snow, ocean-ice-atmosphere interactions, and thermodynamic modeling), highlighting the difficulties, special cases, and challenges we may encounter.


Friday 27 June 14:30 - 16:00 (Hall E1)

Presentation: Deep Learning Approaches for Estimating the Surface Elevation Bias of TanDEM-X Digital Elevation Models over Alpine Regions

Authors: Alexandre Becker Campos, Victor Oliveira Piedade Bustos, Paola Rizzoli, Carolina Gonzalez, Jose-Luis Bueso-Bello, Matthias Braun
Affiliations: German Aerospace Center (DLR), FAU Erlangen-Nurnberg, Aeronautics Institute of Technology (ITA)
The accurate estimation of surface elevation in alpine terrains is essential for understanding glacier dynamics and assessing their impact on climate change. TanDEM-X, an X-band spaceborne synthetic aperture radar (SAR) mission, offers global, high-resolution digital elevation models (DEMs) that are invaluable for these studies; however, the inherent variability in radar wave penetration into snow and ice creates challenges in accurately estimating snow mass balance and depth. Variations in glacier composition, crevasses, and terrain complexity can affect the estimation of the radar mean phase center, which is necessary for the generation of DEMs, leading to penetration bias and an underestimation of the surface topographic height. The amount of penetration depends on several factors, such as the characteristics of the illuminated snow pack, the sensor frequency and the acquisition geometry. In particular, for steep, high-relief alpine regions, layover and shadowing effects are more pronounced, while variations in the interferometric baseline between the two satellites and the incidence angle can greatly influence the measurements. Previous works have tackled the use of both physical and machine learning-based models for the estimation of the surface elevation bias of TanDEM-X data over snow- and ice-covered regions, particularly over ice sheets and stable terrain. Comparatively, fewer works have addressed the penetration-related bias of radar waves over alpine regions and glaciers. Moreover, the use of deep learning techniques for this task is limited by the scarcity of high-quality training data. While the ICESat-2 LiDAR mission can provide sparse ground-truth elevation points using laser altimetry, its limited spatial density over mountainous regions and sensitivity to terrain slope hinder its effectiveness in providing comprehensive training data. 
In this work, we investigate strategies for reducing the amount of required labeled data for estimating the surface elevation bias of TanDEM-X over alpine regions. We propose an end-to-end approach that leverages pre-task learning by pre-training larger deep learning models on similar tasks. This strategy enhances the model performance in surface elevation bias estimation and mitigates the dependency on extensive datasets, typically needed in fully-supervised regression tasks. This is critical for enhancing the accuracy of TanDEM-X DEMs over snow and ice-covered regions and preparing for future bistatic InSAR missions, such as the upcoming ESA Harmony mission, which will require similar strategies to address radar penetration challenges.

Friday 27 June 14:30 - 16:00 (Hall E1)

Presentation: Detecting Icebergs with SAR in Fast Ice Using a YOLO v8 Deep Learning Detection Model

Authors: Dr Johnson Bailey, Dr John
Affiliations: Lancaster University
Shipping in the Arctic is a large commercial operation, and the presence of icebergs poses a hazard to such operations. Of particular interest are icebergs within sea ice, which create a need for automated detection methods. In this work, we utilise a convolutional neural network (CNN) for iceberg detection in fast ice environments. Fast ice is a type of sea ice that forms along coastlines and remains attached to the surrounding land or sea floor. This means that fast ice generally remains in place and is not affected by currents and wind. In Arctic seas, fast ice can extend down to 20 m and has a varied topology depending on the environment. Fast ice can be distinguished from drift ice, since it does not contain large cracks and fractures. We utilise Sentinel-1 SAR data acquired over the Franz Josef Land region for this work. Although icebergs show up clearly in optical data, the dependence on conditions such as cloud cover means that training with optical data alone would lead to a less robust model. As such, SAR data is used alongside a Sentinel-2 optical dataset. The SAR data is split by horizontal (HH) and horizontal-vertical (HV) polarisation, with icebergs being clearer in HV polarisation. We also make use of a land mask from the Polar Geospatial Center, which aided the training process. A detection filter used to help identify icebergs within sea ice was proposed by Marino et al. (2016). This filter is known as the dual intensity polarisation ratio anomaly detector (iDPolRAD) and was successfully used in a previous study by Soldal et al. (2019) to separate, identify and detect icebergs in sea ice environments. For this work, an iDPolRAD filter is applied to the SAR data to produce training images for a YOLO v8 detection model. The YOLO v8 model utilises three loss functions. Firstly, the bounding-box loss, also known as the Complete Intersection over Union (CIoU) loss, measures the error between the predicted and ground-truth geometry of the bounding boxes. 
Secondly, the classification loss (also known as the Varifocal loss) measures the error between the predicted and correct class of the object. Thirdly, the distribution focal loss (DFL) measures the error based on the localisation precision of the bounding boxes. We perform training for 50 epochs using a batch size of 16 and a learning rate of 0.001. For YOLO v8, precision, recall, F1 score and mean average precision (mAP) are used to evaluate detection performance. Precision is the ratio of true positives to all detections (true plus false positives), while recall is the ratio of true positives to all ground-truth objects, including those the model failed to detect (false negatives). The F1 score is the harmonic mean of precision and recall and can be used to determine the optimum confidence threshold for the algorithm. The mAP score summarises the overall accuracy of the model and is found by taking the area under the precision-recall (PR) curve. For model evaluation, we obtained a precision of 0.759, a recall of 0.706, an F1 score of 0.732 and a mAP of 0.789, corresponding to an overall accuracy of roughly 79%. These results are acceptable for operational use. Additionally, we performed a qualitative analysis to visualise the predicted bounding boxes against the ground-truth boxes. Inspecting the predictions shows that the detector performs very well for bright and large icebergs, while icebergs that appear less bright receive lower prediction scores. The main limitations of this work are the lack of an available iceberg training dataset, which was addressed by creating a manual dataset, and the limited coverage, which can be addressed by future SAR missions (ROSE-L and NISAR). It is hoped that our automated detection system can be further improved in the future for potential commercialisation. Marino, A., Dierking, W., & Wesche, C. (2016). 
A depolarization ratio anomaly detector to identify icebergs in sea ice using dual-polarization SAR images. IEEE Transactions on Geoscience and Remote Sensing, 54(9), 5602-5615. Soldal, I. H., Dierking, W., Korosov, A., & Marino, A. (2019). Automatic Detection of Small Icebergs in Fast Ice Using Satellite Wide-Swath SAR Images. Remote Sensing, 11(7), 806.
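The evaluation metrics quoted above follow their standard definitions, which can be sketched directly; note that the harmonic mean of the reported precision (0.759) and recall (0.706) reproduces the reported F1 score of 0.732.

```python
def precision(tp, fp):
    # fraction of detections that are real objects: TP / (TP + FP)
    return tp / (tp + fp)

def recall(tp, fn):
    # fraction of real objects that were detected: TP / (TP + FN)
    return tp / (tp + fn)

def f1_score(p, r):
    # harmonic mean of precision and recall
    return 2 * p * r / (p + r)
```

For example, `round(f1_score(0.759, 0.706), 3)` gives 0.732, consistent with the reported values.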

Friday 27 June 14:30 - 16:00 (Hall E1)

Presentation: Pre-Training a Hybrid Transformer-CNN for Glacier Segmentation

Authors: Nora Gourmelon, Marcel Dreier, Dakota Pyles, Dr. Thorsten Seehaus, Prof. Dr. Matthias Braun, Prof. Dr.-Ing. habil. Andreas Maier, Dr.-Ing. Vincent Christlein
Affiliations: Friedrich-Alexander-Universität Erlangen-Nürnberg
The factors influencing frontal ablation rates in the Arctic remain an active area of research. To better understand these factors, it is essential to monitor changes in glacier calving fronts. Due to the importance of weather-dependent and seasonal variations, Synthetic Aperture Radar (SAR) is the preferred imaging method for such monitoring. In recent years, deep learning models have been developed to automate the extraction of calving front positions (e.g. [1]-[4]); however, their performance on SAR data remains suboptimal. Compared to optical imagery, SAR imagery poses unique challenges due to its fundamentally different technology. A key issue is speckle noise, which reduces image clarity, making it difficult for models to identify detailed features such as glacier calving fronts. In addition, SAR backscatter values reflect physical properties such as surface roughness and moisture content, and translating these values into meaningful real-world features is complex. A further challenge is the lack of generalizability. Models trained on SAR data from one region or sensor struggle to perform well in other regions or with different sensors due to variations in acquisition parameters and environmental conditions. Limited labeled data and high variability in SAR images hinder traditional supervised learning. Foundation models pre-trained on large, diverse datasets could provide robust feature representations that require minimal labeled data for fine-tuning on specific SAR tasks. However, in preliminary experiments, we found that the domain gap is too large for the task of extracting calving fronts from SAR imagery, and foundation models ([5]-[7]) fine-tuned on the "CAlving Fronts and where to Find thEm" (CaFFe) benchmark dataset performed poorly. Therefore, we collected an unlabeled dataset of Sentinel-1 images depicting glaciers in the Arctic. 
Using this dataset, we developed a novel self-supervised pre-training strategy and applied it to pre-train a hybrid convolutional neural network (CNN)-transformer model. Fine-tuning on the CaFFe benchmark demonstrated that the pre-trained model outperforms the non-pre-trained model enabling a more robust segmentation of the calving front. [1] K. Heidler, L. Mou, E. Loebel, M. Scheinert, S. Lefèvre, and X. X. Zhu, “A deep active contour model for delineating glacier calving fronts,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–12, 2023. [2] E. Loebel, M. Scheinert, M. Horwath, K. Heidler, J. Christmann, L. D. Phan, A. Humbert, and X. X. Zhu, “Extracting glacier calving fronts by deep learning: the benefit of multi-spectral, topographic and textural input features,” IEEE Transactions on Geoscience and Remote Sensing, p. 1, 2022. [3] N. Gourmelon, T. Seehaus, M. Braun, A. Maier, and V. Christlein, “Calving fronts and where to find them: a benchmark dataset and methodology for automatic glacier calving front extraction from synthetic aperture radar imagery,” Earth System Science Data, vol. 14, no. 9, pp. 4287–4313, 2022. [4] D. Cheng, W. Hayes, E. Larour, Y. Mohajerani, M. Wood, I. Velicogna, and E. Rignot, “Calving Front Machine (CALFIN):565 glacial termini dataset and automated deep learning Extraction method for Greenland, 1972–2019”, The Cryosphere, vol. 15, pp. 1663–1675, 2021. [5] J. Jakubik, S. Roy, C.E. Phillips, P. Fraccaro, D. Godwin, B. Zadrozny, D. Szwarcman, C. Gomes, G. Nyirjesy, B. Edwards, D. Kimura, N. Simumba, L. Chu, S.K. Mukkavilli, D. Lambhate, K. Das, R. Bangalore, D. Oliveira, M. Muszynski, K. Ankur, M. Ramasubramanian, I. Gurung, S. Khallaghi, H. Li, M. Cecil, M. Ahmadi, F. Kordi, H. Alemohammad, M. Maskey, R. Ganti, K. Weldemariam, and R. Ramachandran, “Foundation Models for Generalist Geospatial Artificial Intelligence”, arXiv:2310.18660v2, 2023. [6] M. Oquab, T. Darcet, T. Moutakanni, H.V. Vo, M. Szafraniec, V. 
Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby, M. Assran, N. Ballas, W. Galuba, R. Howes, P.-Y. Huang, S.-W. Li, I. Misra, M. Rabbat, V. Sharma, G. Synnaeve, H. Xu, H. Jegou, J. Mairal, P. Labatut, A. Joulin, and P. Bojanowski, “DINOv2: Learning Robust Visual Features without Supervision“, Transactions on Machine Learning Research, 2024. [7] Z. Dong, Y. Gu and T. Liu, "Generative ConvNet Foundation Model With Sparse Modeling and Low-Frequency Reconstruction for Remote Sensing Image Interpretation," in IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1-16, 2024.

Friday 27 June 14:30 - 16:00 (Hall E1)

Presentation: AI4Snow: Deep Learning for Improving the Spatiotemporal Resolution of Snow Products in Mountain Regions

Authors: Marlena Reil, Julia Kaltenborn, Celia Baumhoer, Thomas Nagler, Gabriele Schwaizer, Chris Derksen, Tobias Jonas, John Pomeroy, David Rolnick, John Truckenbrodt, Benoit Montpetit, Sebastian Roessler, Christopher Marsh, Zhibang Lv, Andrea Scolamiero, Markus Hetzenecker, Andreas Dietz
Affiliations: Mila Quebec AI Institute, McGill University, German Aerospace Center (DLR), ENVEO Environmental Earth Observation IT GmbH, WSL Institute for Snow and Avalanche Research SLF, ECCC Environment and Climate Change Canada, University of Saskatchewan
Snow is an essential part of the cryosphere, covering large parts of the Earth's land surface in the northern hemisphere during winter. In mountain regions, seasonal snowmelt represents the primary source of freshwater, directly impacting downstream communities and ecosystems. Snow also significantly influences surface albedo, thereby playing an important role in climate science. Accurately monitoring mountain snow is essential for hydrological modeling and forecasting, particularly in light of the variability associated with the impacts of global climate change. In recent years, remote sensing techniques have become the dominant approach to observing snow parameters due to their ability to cover large areas on a regular basis. However, most operational snow products are based on single satellite units, which are limited by low spatial or temporal resolution. In mountainous regions, these limitations are further compounded by cloud cover, forest canopy, and complex topography. The integration of multiple sensors remains a significant challenge, as differences in spatial, spectral, and temporal resolutions must be overcome. We address these difficulties using a multimodal deep learning approach. We introduce AI4Snow, a collaborative project that brings together the machine learning, snow science, remote sensing, and hydrology communities to improve mountain snow products. As part of the project, we modify Presto (Tseng et al., 2023), a transformer-based deep learning model for remote sensing. Presto encodes multi-sensor satellite time series data in conjunction with environmental factors, including temperature, precipitation, and topographic information, to create a combined feature representation. This is then used to generate daily, high-resolution fractional snow cover (FSC) and snow water equivalent (SWE) maps for downstream analyses. We are pre-training the model in a self-supervised manner on globally sampled mountain regions. 
The method will initially be validated in test regions in the Swiss Alps, and will then be transferred to a catchment in the Canadian Rockies. Ultimately, this has the potential to become an operational product for multi-parameter snow retrieval, enabling more accurate snow monitoring in the context of climate change.
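The masked self-supervised objective behind Presto-style pre-training can be illustrated with a toy example. This is a minimal numpy sketch, not the Presto architecture or the AI4Snow code: random timesteps of a multi-sensor time series are hidden, and a placeholder "model" (the channel-wise mean of the visible timesteps) is scored only on the masked positions.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reconstruction_loss(series, mask_ratio=0.3, rng=rng):
    """Toy masked-autoencoding objective: hide random timesteps of a
    multi-sensor time series and score reconstruction only there.
    `series` has shape (timesteps, channels)."""
    t, c = series.shape
    n_mask = max(1, int(mask_ratio * t))
    masked_idx = rng.choice(t, size=n_mask, replace=False)
    corrupted = series.copy()
    corrupted[masked_idx] = 0.0                     # hide observations
    # Placeholder "model": predict each masked timestep as the
    # channel-wise mean of the visible timesteps.
    visible = np.delete(series, masked_idx, axis=0)
    prediction = corrupted.copy()
    prediction[masked_idx] = visible.mean(axis=0)
    # Loss is evaluated on the masked positions only.
    return float(np.mean((prediction[masked_idx] - series[masked_idx]) ** 2))

# e.g. 24 timesteps x 5 "sensor" channels of synthetic data
loss = masked_reconstruction_loss(rng.normal(size=(24, 5)))
```

In an actual transformer such as Presto, the mean predictor would be replaced by an encoder-decoder trained by gradient descent on this masked loss.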
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall E1)

Presentation: Implicit Neural Representation for High Spatiotemporal Resolution of the Petermann Glacier Surface Elevation using CryoSat-2 Data

Authors: Peter Naylor, Dr Andreas Ronne Stokholm, Dr Nikolaos Dionelis, Dr Natalia Havelund Andersen, Dr Sebastian Bjerregaard Simonsen
Affiliations: ESA Φ-lab, DTU Space
The polar regions are warming at nearly four times the global average due to climate change [1]. Warmer temperatures increase ice loss from ice sheets, leading to increased freshwater discharge, contributing to sea level rise and regional changes to ocean salinity that threaten ocean current collapse [2]. The number of humans living below high tide sea level is projected to rise by 73% by the turn of the century [3]. Consequently, estimating the ice loss and the subsequent freshwater discharge is paramount for decision-makers to take the required actions. The ice loss of ice sheets can be measured using satellite altimeters, acquiring the spatial ice sheet surface height at many time instances. The observed elevation change can subsequently be converted into mass change by accounting for bedrock movement and the snow/firn processes [4,5]. A challenge in exploiting the satellite altimeter data is the unstructured character of the data points, caused by observing the elevation at different locations and times. To solve this problem, we propose to treat altimeter data as point clouds in the space-time domain and employ implicit neural representation (INR) to encode the elevation surface field as a continuous function varying in both time and space [6]. The INR method can capture non-linearities and long-term trends compared with traditional interpolation methods, such as trilinear interpolation or kriging [7]. Furthermore, INR can provide a compact encoding of the elevation surface field, allowing for convenient dissemination of the product [8]. We showcase a case study of applying INR to reconstruct the surface elevation of the Petermann glacier, northwest Greenland, from CryoSat-2 radar altimeter elevation measurements.
Many model training experiments were carried out in ablation studies, such as exploring additional loss terms and model architectures (SIREN, RFF, KAN and MFN [9]) and hyperparameters (number and width of layers and loss term weights), to find the best combination. The main challenge is accurately capturing the temporal evolution of the glacier surface. Furthermore, the models are trained with varying quantities of data (5 months, 1 year, 2 years, 12.5 years), exploring how the quantity of data impacts model performance. The model results are examined with Operation IceBridge (OIB) LIDAR and GeoSAR elevation measurements [10]. OIB enables the evaluation of model elevation over a long temporal span and a diverse geographical area, whereas GeoSAR provides high-resolution elevation data on a single day over a small area. The results show that the models achieving the best scores on our metrics used the SIREN INR architecture together with high temporal and spatial loss weights. Furthermore, using data from CryoSat-2 from the entire 12.5-year time period gave rise to the best-performing models. The models perform best in regions with high data point density and worse in areas with low point density, such as at the outer rims of the ice sheet. The case study showcases an exciting avenue for modelling the spatiotemporal evolution of the ice sheet at finer-than-kilometre spatial resolution and daily temporal intervals using INR. We suspect that this method is applicable to many geoscience applications where irregular data sampling in space and time is a challenge. References: [1] Rantanen M, Karpechko AYu, Lipponen A, Nordling K, Hyvärinen O, Ruosteenoja K, et al. The Arctic has warmed nearly four times faster than the globe since 1979. Communications Earth and Environment. 2022 Aug 11;3(1). [2] Ditlevsen P, Ditlevsen S. Warning of a forthcoming collapse of the Atlantic meridional overturning circulation.
Nature Communications. 2023 Jul 25;14(1). [3] Kulp SA, Strauss BH. New elevation data triple estimates of global vulnerability to sea-level rise and coastal flooding. Nature Communications. 2019 Oct 29;10(1). [4] Shepherd A, Gilbert L, Muir AS, Konrad H, McMillan M, Slater T, et al. Trends in Antarctic Ice Sheet Elevation and Mass. Geophysical Research Letters. 2019 Jul 24;46(14):8174–83. [5] Otosaka IN, Shepherd A, Ivins ER, Schlegel NJ, Amory C, van den Broeke MR, et al. Mass balance of the Greenland and Antarctic ice sheets from 1992 to 2020. Earth System Science Data. 2023 Apr 20;15(4):1597–616. [6] Naylor, P., Di Carlo, D., Traviglia, A., Yamada, M. and Fiorucci, M., 2024. Implicit neural representation for change detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 935-945). [7] Raissi, M., Perdikaris, P. and Karniadakis, G.E., 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378, pp.686-707. [8] Strümpler, Y., Postels, J., Yang, R., Gool, L.V. and Tombari, F., 2022, October. Implicit neural representations for image compression. In European Conference on Computer Vision (pp. 74-91). Cham: Springer Nature Switzerland. [9] Liu, Z., Wang, Y., Vaidya, S., Ruehle, F., Halverson, J., Soljačić, M., Hou, T.Y. and Tegmark, M., 2024. Kan: Kolmogorov-arnold networks. arXiv preprint arXiv:2404.19756. [10] Andersen, N.H., Simonsen, S.B., Winstrup, M., Nilsson, J. and Sørensen, L.S., 2021. Regional assessments of surface ice elevations from swath-processed CryoSat-2 SARIn data. Remote Sensing, 13(11), p.2213.
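The core of the INR approach, a network that maps continuous space-time coordinates to elevation, can be sketched as follows. This is an untrained, random-weight SIREN-style toy in numpy; the layer sizes, ω₀ value and weight scaling are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

class TinySiren:
    """Minimal SIREN-style network mapping (x, y, t) -> elevation.
    Untrained, random weights: a sketch of the representation, not the
    fitted Petermann model."""
    def __init__(self, width=32, depth=3, omega0=30.0):
        self.omega0 = omega0
        dims = [3] + [width] * depth + [1]
        self.layers = [
            (rng.uniform(-1, 1, (i, o)) / i, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])
        ]

    def __call__(self, coords):
        h = coords
        for k, (w, b) in enumerate(self.layers):
            z = h @ w + b
            if k == 0:
                h = np.sin(self.omega0 * z)        # frequency-scaled first layer
            elif k < len(self.layers) - 1:
                h = np.sin(z)                      # sine activations
            else:
                h = z                              # linear output: elevation
        return h

siren = TinySiren()
# query the continuous field at arbitrary normalised (x, y, t) points
pts = rng.uniform(-1, 1, (100, 3))
elev = siren(pts)          # shape (100, 1)
```

In practice such a network is fitted by gradient descent so that its output matches the scattered CryoSat-2 (x, y, t, elevation) points, after which it can be queried at any space-time location.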

Friday 27 June 14:30 - 16:00 (Hall E1)

Presentation: Super-resolution of Arctic sea ice thickness using diffusion models and physical modeling

Authors: Julien Brajard, Anton Korosov, Richard Davy, Fabio Mangini, Adrien Perrin, Yiguo Wang
Affiliations: NERSC
This work introduces a novel AI-based diffusion model to emulate high-resolution Arctic sea ice thickness, addressing limitations in current satellite observations and climate simulations with coarse spatial resolution. High-resolution data is essential for understanding small-scale processes like leads and thin ice, influencing heat fluxes and seasonal forecasts. Using the neXtSIM model, synthetic high-resolution fields of sea ice concentration, thickness, and motion were generated and degraded to mimic satellite products. The comparison with satellite products was performed during the 2020/2021 winter period. The observations used for comparison were (1) daily sea-ice concentration analysis from OSI SAF EUMETSAT, (2) sea-ice thickness derived from merging CryoSat-2 and SMOS ice thickness (CS2SMOS), and (3) daily low-resolution sea-ice displacement from OSI SAF EUMETSAT. The comparison was based on visual inspection of the 2D fields and on an analysis of the corresponding power spectra and histograms. Using the training dataset derived from neXtSIM, a diffusion model, a type of generative AI model, was trained to reconstruct high-resolution fields. The results were validated against a linear interpolation used as a baseline on recent years of simulation not used in the training. Validation showed a 21% reduction in root-mean-square error, indicating improved accuracy. The diffusion model also recovered 65% of the spatial variability at scales below 20 km, outperforming the 14% achieved by linear interpolation. This is because the fields generated by the diffusion model are more realistic and sharper than the linear interpolation. Additionally, the model can produce ensembles of high-resolution fields, providing enhanced probabilistic information for forecasting.
Applied to the CS2SMOS product and combined with sea ice concentration observations, the method enabled the derivation of sea ice categories, improving data assimilation and seasonal forecasting capabilities. Its impact was demonstrated using NorCPM, a state-of-the-art seasonal forecasting system. This research is supported by the ESA-funded SuperIce project.
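The "spatial variability recovered at scales below 20 km" comparison can be reproduced in spirit with a Fourier-based metric. This is an illustrative implementation on synthetic fields, assuming a hypothetical 2 km pixel size and with box smoothing standing in for the linear-interpolation baseline; it is not the authors' evaluation code.

```python
import numpy as np

def small_scale_variance_fraction(field, truth, cutoff_km=20.0, pixel_km=2.0):
    """Fraction of the truth's spectral variance at wavelengths shorter
    than `cutoff_km` that a reconstruction reproduces."""
    def band_power(f):
        spec = np.abs(np.fft.fft2(f - f.mean())) ** 2
        kx = np.fft.fftfreq(f.shape[0], d=pixel_km)   # cycles per km
        ky = np.fft.fftfreq(f.shape[1], d=pixel_km)
        k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
        return spec[k > 1.0 / cutoff_km].sum()        # wavelengths < cutoff
    return band_power(field) / band_power(truth)

rng = np.random.default_rng(2)
truth = rng.normal(size=(64, 64))
# crude low-pass stand-in for linear interpolation: 3-point box smoothing
blurred = truth.copy()
for ax in (0, 1):
    blurred = (np.roll(blurred, 1, ax) + blurred + np.roll(blurred, -1, ax)) / 3
frac = small_scale_variance_fraction(blurred, truth)  # < 1: variance lost
```

A well-performing super-resolution model would push this fraction towards 1, while an overly smooth baseline leaves it low.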

Friday 27 June 14:30 - 16:00 (Hall G2)

Session: A.02.01 Land Surface Temperature and Emissivity Data for Research and Applications - PART 3

This session invites presentations on the use of land surface temperature and emissivity data for research and applications. In particular, we welcome studies using high spatial resolution multispectral thermal infrared data such as those from the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) and airborne instruments as well as from the evolving New Space sector. The session will also provide an opportunity to discuss the preparation of upcoming missions such as the ESA LSTM mission, the CNES-ISRO TRISHNA mission and the ASI-NASA SBG mission.

Friday 27 June 14:30 - 16:00 (Hall G2)

Presentation: Potential of Thermal Infrared Observations for Coupled Land-atmosphere Assimilation in Earth System Prediction Systems – DANTEX/LSTM

Authors: Zdenko Heyvaert, Patricia de Rosnay, Angela Benedetti, Stephen English, David Fairbairn, Christoph Herbert, Cristina Lupu, Samuel Quesada Ruiz, Ethel Villeneuve, Pete Weston
Affiliations: European Centre for Medium-Range Weather Forecasts
Consistent exploitation of interface observations across different Earth System components in a coupled data assimilation system is a key strategic development at ECMWF (European Centre for Medium-Range Weather Forecasts). Recent advances in coupled data assimilation methodology are enabling more ambitious use of current and future observations. In collaboration with ESA, the new Data Assimilation and Numerical Testing for Copernicus Expansion Missions (DANTEX) project will develop novel ways to use so-called interface observations from a range of satellite instruments, including the Land Surface Temperature Monitoring (LSTM) mission. For LSTM, pioneering work has started in DANTEX to explore the value of thermal infrared observations for coupled land-atmosphere assimilation in Earth system prediction systems. A particular focus of this is to enable the exploitation of land surface temperature from LSTM-like observations using existing window channel radiance observations from SEVIRI (Spinning Enhanced Visible and Infrared Imager) and FCI (Flexible Combined Imager). The method relies on using an extended control vector in the atmospheric 4D-Var (four-dimensional variational data assimilation) to analyse skin temperature over land surfaces. The extended control vector uses the so-called TOVSCV approach, providing analysed skin temperature in the observation space. Within each 12-hour data assimilation window of the coupled land-atmosphere data assimilation system, the analysed skin temperature from 4D-Var is passed to the land data assimilation system (LDAS) as a pseudo land surface temperature observation. The LDAS simplified Extended Kalman Filter soil analysis is updated to include the new pseudo-observation in the observation vector and the land surface skin temperature in the control vector. This approach allows infrared window channel observations to be exploited consistently to analyse atmospheric and land surface variables. Preliminary results are shown.
Further DANTEX plans towards novel ways to represent LSTM-like radiances, by combining machine learning and physical radiative transfer concepts, are also presented.
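The simplified Extended Kalman Filter step that ingests the pseudo LST observation can be sketched as a standard analysis update. All numbers below, the soil-moisture state, the Jacobian H, and the error covariances, are hypothetical illustrations, not ECMWF's LDAS configuration.

```python
import numpy as np

# Background state: soil moisture in two layers (m3/m3), with its
# background-error covariance B (illustrative values).
x_b = np.array([0.30, 0.25])
B = np.diag([0.02, 0.01]) ** 2

# Linearised observation operator H: assumed sensitivity of skin
# temperature to each soil layer (hypothetical Jacobian, K per m3/m3),
# and pseudo-LST observation-error variance R (K^2).
H = np.array([[-8.0, -2.0]])
R = np.array([[1.0]])

y = np.array([301.5])       # pseudo LST observation from 4D-Var
h_xb = np.array([302.3])    # model-equivalent skin temperature

# Kalman gain and analysis increment
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + (K @ (y - h_xb)).ravel()
```

Here the observed skin temperature is cooler than the model equivalent, so with a negative temperature/moisture sensitivity the analysis increases soil moisture, which is the physically expected correction.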

Friday 27 June 14:30 - 16:00 (Hall G2)

Presentation: Evapotranspiration estimates in Sahel regions using an ensemble contextual model: medium & high spatial resolution data comparison

Authors: Nesrine Farhani, Jordi Etchanchu, Gilles Boulet, Philippe Gamet, Albert Olioso, Alain Dezetter, Ansoumana Bodian, Nanee Chahinian, Chloé Ollivier, Olivier Roupsard, Jérôme Demarty
Affiliations: HydroSciences Montpellier (HSM) Univ Montpellier CNRS IRD 15 Av. Charles Flahault 34093, Centre d'Études Spatiales de la Biosphère (CESBIO) Université de Toulouse CNRS CNES IRD UT3 INRAE, Indo-French Cell for Water Science ICWaR Indian Institute of Science, Unité de Recherche Écologie des Forêts Méditerranéennes (URFM) INRAE 84000, Laboratoire Leidi Dynamique des Territoires et Développement Université Gaston Berger (UGB), Eco Sols Univ Montpellier CIRAD INRAE IRD Montpellier SupAgro, CIRAD UMR Eco Sols BP1386 CP18524, LMI IESOL Centre IRD-ISRA de Bel Air BP1386 CP18524
The Sahel region, recognized as a "hot spot" for climate change, experiences significant water scarcity and high inter-annual variability in water resources. Climate change has heightened evaporative demand, potentially leading to more frequent droughts. Accurately estimating evapotranspiration (ET) in a spatially distributed manner is thus a critical need for this region. Remote sensing (RS) data in the thermal infrared domain, used in energy balance models, are highly valuable for providing spatial ET estimates across different space-time resolutions. For the Sahelian context, a suitable method has been developed based on an ensemble contextual energy balance model that integrates thermal and visible satellite data (EVASPA S-SEBI Sahel method; E3S, Allies et al., 2020, 2022). This approach relies on the thermal contrast (between hot/dry and cold/wet pixels) observed within a single thermal image to yield instantaneous ET estimates. In addition, this method uses a set of alternative methods to identify the hottest (“dry-edge”) and coldest (“wet-edge”) points in the surface temperature/albedo scatterplot of the image, and thereby to estimate the Evaporative Fraction (EF). The method’s accuracy depends on: (1) sufficient heterogeneity between dry and wet pixels within the image and (2) correct identification of the dry and wet boundaries. The aim of this study is to compare the use of thermal information from medium-resolution (1 km) or high-resolution (30 m) data in order to correctly identify the heterogeneity level observed in the image. E3S was therefore applied to MODIS and Landsat datasets acquired during a common period over the mesoscale AMMA-CATCH observatory in order to perform a local-scale evaluation. Our findings indicate that ET simulations derived from Landsat data achieve more accurate ET estimates, with lower uncertainties, compared to MODIS data.
Indeed, the higher spatial resolution of Landsat data improves the assessment of surface heterogeneity and the determination of the edges, which consequently enhances ET estimation accuracy. This study also demonstrates the potential of upcoming thermal Earth observation missions, such as TRISHNA (CNES/ISRO), LSTM (ESA), and SBG (NASA), to which this work contributes.
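The contextual principle, positioning each pixel between a dry edge and a wet edge in surface temperature/albedo space, reduces to a simple ratio. The sketch below shows an S-SEBI-type evaporative fraction with made-up linear edge parameters and pixel values; E3S additionally combines an ensemble of edge-detection methods not reproduced here.

```python
import numpy as np

def evaporative_fraction(ts, albedo, dry_edge, wet_edge):
    """Contextual evaporative fraction (illustrative form, assuming
    linear dry/wet edges). `dry_edge`/`wet_edge` are (intercept, slope)
    pairs giving the edge temperature as a function of albedo."""
    t_dry = dry_edge[0] + dry_edge[1] * albedo    # hottest pixels: ET ~ 0
    t_wet = wet_edge[0] + wet_edge[1] * albedo    # coldest pixels: potential ET
    ef = (t_dry - ts) / (t_dry - t_wet)
    return np.clip(ef, 0.0, 1.0)

# hypothetical edges and surface temperatures (K)
ef = evaporative_fraction(
    ts=np.array([310.0, 300.0, 318.0]),
    albedo=np.array([0.25, 0.25, 0.25]),
    dry_edge=(330.0, -40.0),    # gives 320 K at albedo 0.25
    wet_edge=(300.0, -20.0),    # gives 295 K at albedo 0.25
)
```

A pixel near the dry edge gets EF close to 0 (little evaporation), one near the wet edge gets EF close to 1, which is why edge identification and scene heterogeneity dominate the method's accuracy.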

Friday 27 June 14:30 - 16:00 (Hall G2)

Presentation: Radiative Transfer Modelling of Angular Effects in the Thermal Infrared Domain over Forests

Authors: Jennifer Susan Adams, Alexander Damm, Jasmin Kesselring, Julian Gröbner, Kathrin Naegeli
Affiliations: Remote Sensing Laboratories, Department of Geography, University of Zurich, Eawag, Swiss Federal Institute of Aquatic Science and Technology, PMOD/WRC
Given the increasing frequency of extreme events and stressful conditions due to climate change, monitoring the response and changes of forest ecosystems is of paramount importance. Transpiration, evaporation, and soil water- and heat transfer are particularly affected by environmental change and are driving the thermal regime over forests. Monitoring the land surface temperature (LST) of forests, thus, helps studying forest responses to environmental change and is a key motivation behind upcoming high spatial and temporal resolution satellites such as TRISHNA, SBG and LSTM. The surface brightness temperature observed by satellites is influenced by spectral and directional variations driven by shadowing, distribution of canopy-component temperatures and topography, as well as meteorological conditions (e.g., air temperature, humidity, turbulence, clouds). Such effects can lead to significant biases in retrieved LST, whereby accurate and efficient schemes to understand and account for these effects are urgently needed. Three-dimensional radiative transfer models (RTMs) can aid such interpretation, allowing the investigation of relationships between the angular distribution of thermal infrared (TIR) radiation and canopy-soil-atmosphere properties. Whilst modelling of angular effects of 3D forests in the optical domain has a long history, parameterising 3D forests with thermal information (canopy components temperature, and 3D temperature distribution) is complex and has hampered modelling efforts. This contribution will present the parameterisation and modelling of angular effects over forests. The aims of the study are twofold. Firstly, in order to parameterise a 3D RTM with thermal information, we present a novel thermal instrumentation set up over two forest sites in Switzerland; a mixed broadleaf and coniferous (Laegern) and an evergreen coniferous (Davos) forest. 
Both sites belong to FLUXNET, with Davos additionally an ICOS site, and are equipped with flux measurement towers observing a wide range of environmental and meteorological variables. The data and processing schemes required to retrieve leaf, wood and ground sunlit/shaded component temperatures will be demonstrated. Secondly, angular effects are simulated using the Discrete Anisotropic Radiative Transfer (DART) model. 3D structural models of these two forests have previously been developed based on Terrestrial Laser Scanning (TLS) and have associated optical properties. Thermal parameterisation is based on outcomes from the first part of this study. The TIR angular distribution varies with scale; thus we also perform simulations at the local scale (i.e. as seen by a TIR camera or radiometer) and the satellite scale (e.g. at the TRISHNA, LSTM or SBG pixel level). Outcomes and insights from this study will aid the development of empirical corrections in operational processing schemes to account for angular effects, and the understanding of underlying relationships between an observed TIR signal and canopy and environmental properties.
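The origin of the angular effect, namely that different view angles see different mixes of sunlit and shaded canopy components, can be illustrated with a broadband mixed-pixel calculation. The component fractions, temperatures and emissivities below are invented; a 3D RTM such as DART computes the actual fractions and handles the spectral and atmospheric effects this sketch ignores.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def directional_bt(fractions, temps, emissivities):
    """Equivalent blackbody temperature of a mixed pixel: the TIR
    signal is a view-angle-dependent mix of components (sunlit leaves,
    shaded leaves, sunlit ground, shaded ground)."""
    fractions = np.asarray(fractions, float)
    assert np.isclose(fractions.sum(), 1.0)
    radiance = np.sum(fractions * np.asarray(emissivities)
                      * SIGMA * np.asarray(temps, float) ** 4)
    # invert Stefan-Boltzmann for an equivalent blackbody temperature
    return (radiance / SIGMA) ** 0.25

temps = [302.0, 296.0, 308.0, 300.0]          # K, per component
emis = [0.98, 0.98, 0.96, 0.97]
# nadir sees more hot sunlit ground; oblique sees more cool shaded canopy
t_nadir = directional_bt([0.4, 0.2, 0.3, 0.1], temps, emis)
t_oblique = directional_bt([0.3, 0.45, 0.1, 0.15], temps, emis)
```

Even with identical component temperatures, the change in viewed fractions alone produces a several-tenths-of-a-kelvin to multi-kelvin directional difference, which is the bias the empirical corrections aim to remove.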

Friday 27 June 14:30 - 16:00 (Hall G2)

Presentation: Preparation and algorithm selection for the future high resolution ESA satellite mission of Land Surface Temperature Monitoring

Authors: Agnieszka Soszynska, Michael Perry, Darren Ghent
Affiliations: University of Leicester, National Centre for Earth Observation
The Copernicus/ESA Land Surface Temperature Monitoring (LSTM) mission is planned to launch in 2029. The main purpose of the mission is to provide high resolution and high accuracy land surface temperature (LST) products. Finding the optimal LST retrieval algorithm is a crucial step in preparing for the mission. There are a number of LST retrieval algorithms, which are based on different assumptions, take different inputs, and provide different approaches to dealing with uncertainties. Split-window algorithms run operationally in various products; however, they do not allow emissivity to be retrieved. There are also algorithms that retrieve emissivity as well as temperature, e.g., NASA’s Temperature-Emissivity Separation (TES). TES, however, has been reported to be inaccurate in vegetated areas and especially in humid atmospheres. Therefore, the selection of the optimal algorithm should be based on objective criteria and have a fair common basis for comparison. As a preparation step, the selected LST retrieval algorithms were developed in an agnostic version that allows testing of various sensor parameters and of the sensitivity of the algorithms to perturbations. In order to choose the most accurate LST retrieval algorithm, a round-robin procedure was employed, in which a set of algorithms was compared using a simulated database. The database was designed to mirror the conditions of natural ecosystems and atmospheric profiles, including various land covers (classified as biomes, according to ESA Land Cover cci). A set of 6 algorithms was compared: the University of Leicester algorithm (UoL, operational in Sentinel-3 SLSTR), Generalised Split Window (GSW, operational for MODIS products in LST_cci), Temperature-Emissivity Separation (TES, operational in various NASA products), Optimal Estimation (OE), DirecTES (foreseen as operational for the TRISHNA mission), and Hybrid Optimal Estimation (HybridOE).
Each algorithm was run on the whole database, and the retrieved LST was compared to the true LST for each pixel. The results were used to calculate a set of accuracy metrics: bias, precision, and sensitivity. Additional information, such as mission compliance, complementarity, improvability, and difficulty of implementation were also considered. Algorithms were ranked based on a weighted score from each criterion. From there, two algorithms were selected for further development.
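The final ranking step can be sketched as a weighted score across criteria. The scores and weights below are entirely made up for illustration (the abstract does not publish the actual values), and only three of the six algorithms are shown.

```python
# Hypothetical weights for the criteria named above (sum to 1).
criteria_weights = {"bias": 0.3, "precision": 0.3, "sensitivity": 0.2,
                    "compliance": 0.1, "implementation": 0.1}

# Per-algorithm scores on a common 0-1 scale (higher is better);
# invented numbers, not the study's results.
scores = {
    "GSW":      {"bias": 0.8, "precision": 0.7, "sensitivity": 0.6,
                 "compliance": 0.9, "implementation": 0.9},
    "TES":      {"bias": 0.6, "precision": 0.6, "sensitivity": 0.8,
                 "compliance": 0.8, "implementation": 0.5},
    "HybridOE": {"bias": 0.7, "precision": 0.8, "sensitivity": 0.7,
                 "compliance": 0.8, "implementation": 0.6},
}

def weighted_score(algo):
    """Sum of criterion scores weighted by their importance."""
    return sum(criteria_weights[c] * scores[algo][c] for c in criteria_weights)

ranking = sorted(scores, key=weighted_score, reverse=True)
```

The appeal of this scheme is transparency: changing a weight (e.g. prioritising precision over ease of implementation) immediately and reproducibly changes the ranking.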

Friday 27 June 14:30 - 16:00 (Hall G2)

Presentation: Land Surface Time Series Analysis Using ECOSTRESS satellite data product to Monitor Thermal Activity over Campi Flegrei volcanic area (Naples, Italy)

Authors: Federico Rabuffi, Malvina Silvestri, Maria Fabrizia Buongiorno, Andrew Alamillo, Alessandro Piscini
Affiliations: Jet Propulsion Laboratory, California Institute of Technology, Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Nazionale Terremoti
The Naples volcanic district, made up of the Vesuvius, Campi Flegrei and Ischia volcanoes, is one of the highest volcanic risk areas (hazard, value and vulnerability) in the world. Campi Flegrei, an active volcanic caldera within the Naples district, is characterized by strong bradyseism. Since 2005, the INGV (Istituto Nazionale di Geofisica e Vulcanologia) monitoring system has observed a bradyseismic crisis marked by ground uplift, earthquakes, and anomalies in fumarolic emissions. In 2021, a rapid phase of uplift began, with ground velocities reaching values of centimeters per year, and this phase is still ongoing. Solfatara, an active site within Campi Flegrei, is characterized by high-temperature fumarolic emissions from two main vents: Bocca Grande and Bocca Nuova. This area is the primary source of hydrothermal fluid release through degassing. The high surface temperature values associated with Bocca Grande and Bocca Nuova emissions make the Solfatara a thermal anomaly area clearly identifiable using satellite data products at high spatial resolution, such as Land Surface Temperature (LST) from ECOSTRESS (Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station). Thanks to the high-frequency acquisition of data (both day and night) since 2018, and more than 500 LST products covering the Campi Flegrei area, this study aims to leverage one of the ECOSTRESS data products (specifically ECO2LSTE) to generate a time series of land surface temperatures. The aim of this work is to use high resolution satellite data in the Thermal InfraRed (TIR) wavelength region for environmental applications. A statistical approach based on the analysis of the standard deviation of each image was used to achieve the goal of the work, which consists of monitoring thermal activity and detecting anomalies related to changes in volcanic behavior.
The results show the capability to detect outliers in the LST time series that correspond to changes in volcanic behavior. Furthermore, with this work the authors want to emphasize the important role of spectroscopy data in the TIR wavelength region for environmental applications. Indeed, future space missions in the TIR at high spatial and temporal resolution, such as SBG-TIR (Surface Biology and Geology – Thermal InfraRed), a cooperation between NASA-JPL and ASI, will increase the capability of monitoring such active environments, with the possibility of supporting timely decision-making in response to potential hazards.
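The statistical screening described above can be sketched as a simple n-sigma test on a per-image statistic. This is a generic outlier rule on synthetic numbers, not the authors' processing chain; the injected anomaly value is hypothetical.

```python
import numpy as np

def thermal_outliers(lst_series, n_sigma=3.0):
    """Flag acquisitions whose scene statistic departs from the
    long-term mean by more than n_sigma standard deviations."""
    series = np.asarray(lst_series, float)
    mu, sigma = series.mean(), series.std()
    return np.where(np.abs(series - mu) > n_sigma * sigma)[0]

# per-image standard deviation of LST over the Solfatara area (synthetic)
rng = np.random.default_rng(3)
stds = rng.normal(4.0, 0.3, size=200)
stds[120] = 8.0     # injected anomaly, e.g. intensified fumarolic activity
flagged = thermal_outliers(stds)
```

Flagged acquisitions would then be inspected against ground monitoring data to decide whether they reflect a genuine change in volcanic activity or an artefact (clouds, viewing geometry).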

Friday 27 June 14:30 - 16:00 (Hall G2)

Presentation: Utilizing Thermal Emissivity and SBG-TIR Like Satellite Data for Topsoil Properties Retrieval

Authors: Francesco Rossi, Hatch Brenna, Raffaele Casa, Simon Hook, Luca Marrone, Luca Medici, Saham Mirzaei, Dr. Simone Pascucci, Dr. Stefano Pignatti, Alessia Tricomi, Sara Venafra
Affiliations: Institute Of Methodologies For Environmental Analysis IMAA-CNR, NASA-JPL, Department of Agriculture and Forestry Sciences (DAFNE), University of Tuscia, E-GEOS S.P.A,, Agenzia Spaziale Italiana
The Surface Biology and Geology - Thermal Infrared (SBG-TIR) satellite, a joint project between NASA and the Italian Space Agency (ASI), aims to improve our understanding of Earth's surface properties by providing both multispectral thermal infrared (TIR) and multispectral visible near-infrared (VNIR) data. The launch is planned for 2029, with an expected end of operations in 2033. The high spatial and temporal resolution of SBG-TIR, 60 m and 3 days’ revisit time, respectively, is intended to capture the hydrological, ecological, weather, climate and solid earth dynamic states of the Earth's surface and to quantify uncertainties. Thermal emissivity plays an important role in several applications, including environmental monitoring, agriculture and climate sciences. This study, conducted within the “THERESA – THErmal infRarEd SBG Algorithms” (THERESA) research project supported by ASI, focuses on the capabilities of the SBG-TIR satellite to retrieve soil properties using machine learning (ML) techniques. By ingesting SBG-TIR-like thermal emissivity data into ML models, after applying commonly used spectral pre-treatments, topsoil properties (e.g., texture and Soil Organic Matter - SOM) were retrieved. A total of 59 samples were collected between 2022 and 2023 on the study site of the Jolanda di Savoia farm in Northern Italy. Wet analysis included soil texture, pH, soil organic matter (SOM) and CaCO3 content. Of the total dataset, 27 samples collected in 2023 were sent to JPL to create a full-range IR Soil Spectral Library (SSL), covering wavelengths from 2.5 µm to 15.3 µm. To this aim, a Nicolet FTIR 6000 spectrometer was used. Multiple data sets were created by pairing laboratory soil properties with sensor emissivity, resampling the SSL spectra according to the spectral response functions of the (a) HYTES and (b) SBG-TIR sensors.
In addition, the HYTES survey, conducted on 22/06/2023, was also analysed to consider remote images covering the whole sectors surveyed by the 2022 and 2023 field campaigns. Subsequently, different ML algorithms, such as Gaussian Process Regression (GPR), Cubist, Partial Least Squares Regression (PLSR), and Support Vector Regression (SVR), were trained on these datasets. When using the full Nicolet FTIR data set (i.e., from 2.5µm to 15.3µm), promising results were obtained using the Standard Normal Variate (SNV) pre-treated data set. For CaCO3, an R² value of 0.86 and a Root Mean Square Error (RMSE) of 1.11% were obtained by using the GPR algorithm. In addition, a correlation between SOM and emissivity was found, with an R² of 0.63 and an RMSE of 1.12% using the SVR algorithm. The models performed less well in retrieving the soil sand content; the best results obtained for sand were an R² of 0.65 and an RMSE of 5.06% using the SVR algorithm. For clay and silt soil content estimation, the validation metrics were even lower, with R² values of 0.52 and 0.29 and RMSE values of 9.01% and 9.81%, respectively. By convolving the Nicolet FT-IR data set to the SBG-TIR spectral response function (SRF), the best results were observed for the estimation of CaCO3 soil content, with an R² of 0.88 and an RMSE of 0.96%, using PLSR. The most valuable emissivity absorption feature for detecting CaCO3 in topsoil is the one centred around 11.5 μm, according to the Variable Importance in Projection (VIP) analysis performed on the simulated SBG-TIR dataset. For the SOM retrieval, an R² of 0.61 and an RMSE of 1.16% were obtained using SVR, while for clay an R² of 0.42 and an RMSE of 10.24% were obtained. Unfortunately, no significant retrieval was found for sand and silt. Using the Nicolet SSL resampled to the nominal HYTES spectral range (from 7.46μm to 11.96μm), the best prediction result was confirmed for CaCO3, with an R² of 0.79 and an RMSE of 1.23%.
For SOM and clay, R² values of 0.67 and 0.52 and RMSE values of 1.07% and 9.53% were obtained, respectively. As with the previous datasets, no significant results were found for silt, but for sand, using PLSR and a smoothed signal, an R² of 0.68 and an RMSE of 4.98% were obtained. When the HYTES airborne survey images (with a reduced spectral range from 8.11 μm to 10.98 μm, due to calibration issues affecting only these images) were used, only SOM and clay soil content showed good prediction accuracies. For SOM, the GPR prediction model on the smoothed data set shows an R² of 0.68 and an RMSE of 1.43%, while for clay the SVR algorithm resulted in an R² of 0.55 and an RMSE of 10.08% using a smoothed pre-treated signal. Poor results were instead obtained for sand, most probably due to the mineralogy of the sand component, which is poor in quartz according to the preliminary ongoing XRD analysis. These results demonstrate the potential of the next-generation SBG-TIR in retrieving topsoil properties (e.g., texture, SOM) and mapping mineral abundances in agricultural topsoil, such as carbonates, which are relevant for soil quality assessment. Further studies are planned to understand the potential of the combined use of CHIME (ESA)/SBG (NASA)/PRISMA-2 (ASI) and LSTM (ESA)/TRISHNA (ISRO)/SBG-TIR in the retrieval and monitoring of agricultural topsoil properties.
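Of the spectral pre-treatments mentioned, the Standard Normal Variate transform is simple to state: each spectrum is centred and scaled by its own mean and standard deviation, removing multiplicative scatter effects before regression. The sketch below uses synthetic spectra and does not reproduce the project's actual pipeline, band set or regression models.

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate pre-treatment: normalise each spectrum
    (row) by its own mean and standard deviation."""
    spectra = np.asarray(spectra, float)
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# toy emissivity "spectra": 3 samples x 50 bands, with a per-sample
# multiplicative offset that SNV is designed to remove
rng = np.random.default_rng(4)
base = np.linspace(0.95, 0.99, 50)
spectra = base * rng.uniform(0.9, 1.1, (3, 1)) + rng.normal(0, 1e-3, (3, 50))
treated = snv(spectra)
```

After SNV, every spectrum has zero mean and unit variance, so a regressor such as PLSR or SVR fits spectral shape rather than per-sample intensity offsets.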

Friday 27 June 14:30 - 16:00 (Hall N1/N2)

Session: F.04.22 The fundamental role of National Statistical Offices for climate and environmental policies: how Earth Observation can support the simplification of the transversal reporting

The evolving landscape of European and international climate and environmental policies requires a growing and fundamental role for National Statistical Offices in reporting and monitoring progress towards the implementation of these interconnected frameworks, to achieve local-to-global sustainability goals. The session will feature representatives from national statistical agencies, the European Commission and the EEA, and will explore the integration of Earth Observation (EO) into national statistics processes and reporting, addressing how EO data can enhance the accuracy, timeliness, and spatial granularity of key metrics used, for example, in natural capital assessments, agricultural monitoring, and GHG emission reporting. The aim is to identify how EO can effectively support policy implementation and the simplification of transversal reporting. Panelists will discuss the technical challenges and policy implications of adopting EO-based methodologies, the harmonization of data standards, and the role of international cooperation in capacity building. By sharing best practices and innovations, the discussion aims to demonstrate EO's potential to support global sustainability goals and inform evidence-based policymaking.

Speakers:


  • Márta Nagy-Rothengass - Eurostat
  • Lorenzo De Simone - FAO
  • Mary Smyth - Central Statistics Office (CSO) Ireland
  • Usue Donezar - European Environment Agency (EEA)
  • Nina Hofer - Statistics Austria
  • Emile Boral-Rolland - OECD

Friday 27 June 14:30 - 16:00 (Hall L3)

Session: C.06.02 Advances in Calibration and Product Validation for Optical Sensors - PART 3

Accurate calibration and validation lie at the heart of any optical satellite sensor system and are of paramount importance for delivering the high-quality satellite products used, in an exponentially growing way, for operational environmental monitoring as well as for long-term climate research. Calibration may be achieved by dedicated on-board systems or by independent vicarious calibration targets, and sensor intercalibration and collaboration between satellite operators have become essential to provide such products to the optical satellite user community. In this context, the need for common reference sources and protocols has been recognized, as have mission scenarios such as dedicated tandem campaigns for sensor intercalibration and bias assessment. Moreover, precise calibration is a prerequisite for high-quality geophysical Level-2 parameters retrieved from satellite data, and bringing independent validation of these together with sensor calibration is essential to generate accurate geophysical parameter maps at satellite scale. In the context of the CEOS Working Group on Calibration and Validation (WGCV) Land Product Validation (LPV) efforts, ESA has led and collaborated on many such activities, establishing the concept of ground-based Fiducial Reference Measurements and LPV Supersites as a firm reference for ground-based validation of satellite measurements. The aim is to provide well-calibrated and validated Level-1 and Level-2 products to the satellite data user community at large, including uncertainty budgets traceable to SI and cal/val protocols. This session will provide an overview of the latest state-of-the-art Cal/Val activities for optical sensors, including novel calibration approaches and new ground-based validation networks used for land product validation.

Friday 27 June 14:30 - 16:00 (Hall L3)

Presentation: Validation of Satellite Biophysical Products Using a Wireless FAPAR Network Deployed in a Deciduous Forest

Authors: Harry Morris, Chloe Randall, Rasma Ormane, Morven Sinclair, Mr Matthew Scholes, Niall Origo
Affiliations: National Physical Laboratory
The fraction of absorbed photosynthetically active radiation (fAPAR) is designated an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS) owing to its importance for monitoring the status and dynamics of vegetation. Several operationally produced fAPAR products derived from satellite data allow global monitoring at decametric and hectometric spatial scales. As these products become routinely used across the community, there is a growing need to validate their performance. Wireless networks of quantum PAR sensors, such as the Apogee SQ-110, offer a unique complement to traditional ad-hoc validation campaigns for fAPAR products: they provide a temporally dense dataset and can be deployed at supersites in a range of environments. Additionally, since they measure the physical definition of fAPAR, the sensors can be calibrated to SI standards, allowing Fiducial Reference Measurement (FRM) validation datasets to be generated. In this presentation we describe the calibration and deployment of one such network at a deciduous broadleaf forest in the UK. The network consists of twenty-four sets of understory PAR nodes and two sets of overstory PAR nodes on a flux tower, where a node consists of an upward- and a downward-facing sensor. The network has been deployed within a single Sentinel-3 300 m x 300 m pixel, with six subgroups within the pixel to allow validation of decametric products. The processing chain used to generate an FRM-standard dataset is also discussed, and comparisons with coincident digital hemispherical photography obtained across the growing season are shown. Furthermore, validation of operational fAPAR satellite products at decametric and hectometric scales for the 2023 and 2024 growing seasons is presented. Where possible, conformity testing of the satellite fAPAR products has been performed using the GCOS goal requirements for uncertainty (5% for fAPAR > 0.05, 0.0025 absolute for fAPAR < 0.05, at k = 2). Finally, recommendations for future deployments of wireless fAPAR networks are given, with particular stress on the need for uncertainty characterisation of the individual sensors within a network.
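The conformity test quoted above can be sketched as a simple check against the GCOS goal: 5% relative uncertainty for fAPAR > 0.05 and 0.0025 absolute below that, at coverage factor k = 2. The function names and the quadrature combination with a reference uncertainty are illustrative assumptions, not the NPL processing chain.

```python
def gcos_goal_uncertainty(fapar_ref):
    """GCOS goal uncertainty (k = 2) for a given reference fAPAR value."""
    if fapar_ref > 0.05:
        return 0.05 * fapar_ref   # 5 % relative
    return 0.0025                 # absolute floor for small fAPAR

def conforms(fapar_sat, fapar_ref, u_ref=0.0):
    """True if |satellite - reference| is within the combined k = 2 bound.

    u_ref is an optional k = 2 reference-measurement uncertainty, combined
    in quadrature with the GCOS goal (a common, but here assumed, choice).
    """
    bound = (gcos_goal_uncertainty(fapar_ref) ** 2 + u_ref ** 2) ** 0.5
    return abs(fapar_sat - fapar_ref) <= bound

# Example: a dense mid-season canopy vs. a sparse early-season case
print(conforms(0.82, 0.80))   # |delta| = 0.02 within the 0.04 bound
print(conforms(0.04, 0.03))   # |delta| = 0.01 exceeds the 0.0025 floor
```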

Friday 27 June 14:30 - 16:00 (Hall L3)

Presentation: Current improvements of La Crau instrumented site for calibration and validation of future thermal infrared and hyperspectral missions

Authors: Arthur Dick, Sébastien Marcq, Emilie Delogu, Morgane Chapelier, Aimé Meygret, Simon Hook, Gerardo Rivera, Yevgeny Derimian, Léo Delmas
Affiliations: CNES, NASA JPL, Laboratoire d'Optique Atmosphérique
Since the 1990s, CNES has developed strong expertise in automated instrumented sites for the calibration of multispectral satellites over the 400-2500 nm wavelength range. With the growth of current and future hyperspectral and thermal infrared missions such as EnMAP (DLR), PRISMA (ASI), Trishna (CNES/ISRO), SBG (NASA) and LSTM (ESA), CNES intends to extend its instrumented sites to the thermal infrared and to upgrade its VSWIR instrument to a hyperspectral instrument in order to prepare for the in-orbit CAL/VAL of these coming missions. For this purpose, it was decided to deploy new masts at the La Crau site, which is already a member of RADCALNET, to install and analyse data from several thermal infrared and hyperspectral instruments. Regarding thermal infrared instrumentation, NASA JPL designed an automated, robust broadband thermal radiometer that acquires day and night measurements at four different zenith angles and features an internal active blackbody. This instrument has been deployed by JPL for years on various sites, such as Lake Tahoe, the Salton Sea and Russell Ranch. The principle is to adjust the blackbody temperature until the target and blackbody measurements made by the instrument match; the radiometer therefore provides surface brightness temperature in one spectral band. CIMEL, a French company, markets the CE 312, which acquires measurements in five narrow spectral bands and one broad band. Emissivity and temperature of the surface can then be retrieved simultaneously by applying a temperature and emissivity separation algorithm to the data. CNES worked with CIMEL to make this system robust and autonomous for long-term measurement acquisition. Finally, three KT15 radiometers from Heitronics were deployed, with spectral bands similar to those of the Trishna instrument. These instruments will allow Level-1 calibration and Level-2 product validation.
Concerning hyperspectral CAL/VAL, CNES is currently working with CIMEL to develop a new hyperspectral photometer covering a spectral range from 400 nm to 2500 nm with a spectral resolution from 1 nm to 10 nm. A first mock-up was designed to test the instrumental concept, and two prototypes are currently being constructed to take field environmental constraints into account. One of these prototypes will be deployed at La Crau on a dedicated mast near the RADCALNET station. A test phase will then begin to compare measurements and calibration results between this new instrument and the current multispectral photometer. The main characteristics and measurement protocols, the status of the installations at the La Crau site, and the processing applied to the measurements will be described. The first results will be discussed and analysed with respect to the final in-orbit calibration objective.

Friday 27 June 14:30 - 16:00 (Hall L3)

Presentation: The Fiducial Reference Measurements for Fire (FRM4FIRE) project

Authors: Bernardo Mota, Prof Martin Wooster, Farrer Owsley-Brown, Gareth Roberts, Dirk Schüttemeyer
Affiliations: National Physical Laboratory, King's College London, National Centre for Earth Observation, University of Southampton, European Space Agency
The unique characteristics and ephemeral conditions under which vegetation fires occur on the Earth's surface make validation of Fire Radiative Power (FRP) products a challenge. Furthermore, the lack of community-agreed protocols and limited quality assurance practices undermine the trustworthiness of these measurements. Recent investment in airborne campaigns, together with the Fiducial Reference Measurement (FRM) framework developed by the Cal/Val community, provides the mechanism to generate independent, fully characterised, and traceable measurements that can be used to validate space-based FRP retrievals. The Fiducial Reference Measurements for Fire (FRM4Fire) project is a step in that direction. Its aim is to identify and characterise the main sources of uncertainty associated with satellite FRP retrievals, with retrievals from Sentinel-3 SLSTR being the main focus. A FIDUCEO-style approach to uncertainty characterisation, using the tools and guidelines outlined by the GEO/CEOS Quality Assurance framework for Earth Observation (QA4EO), will be used to identify, assess, and quantify the contribution of various sensor- and environment-related sources of uncertainty, and to develop an end-to-end uncertainty model. Based on this uncertainty characterisation, and using airborne campaign measurements, we will review current validation approaches and define a methodology and measurement protocol better suited to the generation of independent measurements. In collaboration with scientists, agencies, and international bodies, we will further develop a strategy for FRP product validation, explore the potential for international frameworks, work towards defining the criteria for an FRM label, and draft a roadmap for the validation of satellite FRP products using an FRM-compliant source.

Friday 27 June 14:30 - 16:00 (Hall L3)

Presentation: UAS multi-angular dataset collection for surface reflectance evaluation

Authors: Dr Ilaria Petracca, Dr Daniele Latini, Marco Di Giacomo, Dr Fabrizio Niro, Prof Stefania Bonafoni, Fabio Del Frate
Affiliations: GEO-K srl, Tor Vergata University of Rome, SERCO for ESA ESRIN, University of Perugia
Surface reflectance is a fundamental quantity in remote sensing applications, from the retrieval of atmospheric parameters to the study of the land surface [1] and its optical and structural properties, and from albedo estimation to BRDF (Bidirectional Reflectance Distribution Function) correction techniques for satellite sensors. The BRDF expresses the level of anisotropy of a surface in terms of viewing and illumination geometries [2]. It can be used to validate the surface reflectance of satellite-derived products and to normalize satellite sensor observations, acquired at different viewing and illumination geometries, to a standard geometric configuration [3]. Different BRDF models (such as Ross-Li-Maignan [4], Rahman-Pinty-Verstraete [5], and Walthall [6]) can be used to simulate the directional signature of the surface reflectance at any view and illumination geometry, once the model has been derived from multi-angular observations. For this purpose, the collection of accurate reference in-situ BRDF datasets is essential; it can be carried out using various devices, including ground-based instruments (i.e., field goniometers), satellite platforms and, more recently, Unmanned Aerial Systems (UASs). The latter have the advantage of a higher angular sampling capability, which is fundamental for collecting a complete and robust dataset from which BRDF models can be derived, and of higher temporal and spatial resolution compared to satellite platforms. Furthermore, a UAS covers a wider area of the targeted surface at a lower mission cost than a field goniometer, and offers the possibility of repeatable, automatic flights with high flexibility, even in remote regions. Our study focuses on the use of UAS for collecting reference in-situ multi-angular validation datasets for BRDF estimation and modeling. The BRDF modeling is pursued using both standard kernel-driven models, e.g. Ross-Li-Maignan, and a machine learning (ML) technique based on neural networks (NNs), comparing the performance of the two approaches.
The system is tested over various surface types, both natural and anthropic, in clear-sky conditions [7], and we are currently preparing our next mission, over snow cover, for the coming winter season. Our system is composed of a multi-rotor UAS designed to support a payload of up to ~8 kg and capable of flying for up to ~20 min, carrying onboard the MAIA S2 multispectral camera [8], which provides images in the same spectral range as the ESA (European Space Agency) Sentinel-2 (S2) MultiSpectral Instrument (MSI), from visible to near infrared (443-865 nm), thereby supporting calibration and validation activities for the S2 sensor. The flight protocol is automatically programmable and consists of circular paths covered by the UAS over a range of view azimuth angles (VAAs), from 0° to 360° with a step of 30°/45° according to the needs of the mission, and view zenith angles (VZAs), from 0° (nadir) to 60° with a step of 10°. This is made possible by the high flexibility of the system and by a gimbal mounted on the drone, which allows the camera to tilt. The system is also equipped with an incident light sensor (ILS), which measures the downwelling irradiance and improves the radiometric calibration of the data; in addition, on-ground calibration panels can be used in the calibration process. After data collection following the above flight protocol, the raw MAIA images are preprocessed with the provider's software, which performs geometric correction, co-registration and radiometric correction. The resulting multispectral data are then processed with a MATLAB routine that organizes the dataset and creates polar diagrams in which the observed reflectance values are represented for each VZA and VAA covered by the UAS during the flight mission.
The next step consists in using the collected dataset to derive BRDF models through a least-squares fit, finding the optimum coefficients, called BRDF parameters (i.e., fiso, fvol, fgeo), that best fit the measurements. In addition, the multi-angular dataset can be used to create new models; for example, we are working on an NN architecture that takes the viewing and illumination geometries as inputs and gives as output the reflectance value for that specific geometric configuration. First comparisons between Ross-Li-Maignan simulations and NN predictions show that the NN-based model reproduces the observed reference reflectance values better than the standard kernel-driven BRDF model, with an average rRMSE (relative Root Mean Square Error) of 0.059% against an average rRMSE of 0.073% for the Ross-Li-Maignan model.
Bibliography
[1] Kaufman, Y., Tanré, D., Gordon, H., Nakajima, T., Lenoble, J., Frouin, R., Grassl, H., Herman, B., King, M. and Teillet, P. (1997), "Passive remote sensing of tropospheric aerosol and atmospheric correction for the aerosol effect", Journal of Geophysical Research: Atmospheres, vol. 102 (D14), pp. 16815-16830.
[2] Schaepman-Strub, G., Schaepman, M. E., Painter, T. H., Dangel, S. and Martonchik, J. V. (2006), "Reflectance quantities in optical remote sensing—Definitions and case studies", Remote Sensing of Environment, vol. 103 (1), pp. 27-42.
[3] Privette, J. L., Eck, T. F. and Deering, D. W. (1997), "Estimating spectral albedo and nadir reflectance through inversion of simple BRDF models with AVHRR/MODIS-like data", Journal of Geophysical Research: Atmospheres, vol. 102 (D24), pp. 29529-29542.
[4] Maignan, F., Bréon, F.-M. and Lacaze, R. (2004), "Bidirectional reflectance of Earth targets: Evaluation of analytical models using a large set of spaceborne measurements with emphasis on the hot spot", Remote Sensing of Environment, vol. 90 (2), pp. 210-220.
[5] Rahman, H., Pinty, B. and Verstraete, M. M. (1993), "Coupled surface-atmosphere reflectance (CSAR) model: 2. Semiempirical surface model usable with NOAA advanced very high resolution radiometer data", Journal of Geophysical Research: Atmospheres, vol. 98 (D11), pp. 20791-20801.
[6] Walthall, C., Norman, J., Welles, J., Campbell, G. and Blad, B. (1985), "Simple equation to approximate the bidirectional reflectance from vegetative canopies and bare soil surfaces", Applied Optics, vol. 24 (3), pp. 383-387.
[7] Petracca, I., Latini, D., Di Giacomo, M., Niro, F., Bonafoni, S. and Del Frate, F. (2024), "Toward a Standard Approach for UAS-Based Multiangular Dataset Collection for BRDF Analysis", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 19487-19502, doi: 10.1109/JSTARS.2024.3482577.
[8] Nocerino, E., Dubbini, M., Menna, F., Remondino, F., Gattelli, M. and Covi, D. (2017), "Geometric calibration and radiometric correction of the MAIA multispectral camera", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 42, pp. 149-156.
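For a kernel-driven model, the least-squares inversion step described in this abstract reduces to a linear fit of the parameters (fiso, fvol, fgeo). The sketch below uses random stand-in kernel values rather than the actual Ross-Li-Maignan kernels, which depend on the viewing and illumination geometry of each observation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs = 40                               # angular samples from a UAS flight

# Stand-in volumetric and geometric kernel values (not the real kernels)
k_vol = rng.uniform(-0.2, 0.6, n_obs)
k_geo = rng.uniform(-1.5, 0.0, n_obs)

# Synthetic "observed" reflectances from known parameters plus sensor noise
f_true = np.array([0.25, 0.10, 0.05])    # f_iso, f_vol, f_geo
A = np.column_stack([np.ones(n_obs), k_vol, k_geo])
rho = A @ f_true + rng.normal(0.0, 0.002, n_obs)

# Least-squares inversion of rho ~ f_iso + f_vol*K_vol + f_geo*K_geo
f_hat, *_ = np.linalg.lstsq(A, rho, rcond=None)
print("fitted [f_iso, f_vol, f_geo]:", np.round(f_hat, 3))
```

Once fitted, evaluating `A @ f_hat` at any new geometry's kernel values gives the modelled directional reflectance.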

Friday 27 June 14:30 - 16:00 (Hall L3)

Presentation: EUMETSAT Activities Towards the Development of Copernicus Infrastructure for Ocean Colour System Vicarious Calibration

Authors: Ewa Kwiatkowska, Fabienne Jacq, Lieven Bydekerke, Estelle Obligis, Bojan Bojkov, Simonetta Tribuzio, Fernando Perez Lopez, Muriel Aubry
Affiliations: EUMETSAT
EUMETSAT is initiating in 2025 the detailed design (Phase 4) and the development and demonstration (Phase 5) of a field infrastructure for Ocean Colour System Vicarious Calibration (OC-SVC) in the frame of the European Commission Copernicus Programme. The infrastructure consists of in-water buoys providing highly accurate in situ radiometric measurements for system vicarious calibration. SVC is required for all Ocean Colour missions to remove residual calibration biases, because space instrument pre-launch and onboard calibrations alone cannot achieve the water-signal uncertainties needed for aquatic applications. System vicarious calibration for Copernicus missions such as Sentinel-3 OLCI has so far been accomplished using measurements from the MOBY buoys operated by NOAA off Hawaii. The immediate target of the activities is to secure the Copernicus OC-SVC capability for Sentinel-3 OLCI and the Next Generation sensors. The OC-SVC infrastructure can, however, also serve other current and planned Copernicus Sentinel and Expansion missions, such as Sentinel-2 and CHIME, and other European and international missions with Ocean Colour goals or capabilities, like the US PACE OCI and VIIRS. The Copernicus OC-SVC perspective is long term, with 20+ years of operational in-water measurements, aiming to satisfy current and evolving user needs for products and services. The overarching goal is to maintain radiometric measurements with the lowest uncertainties across the complete OC-SVC 'System', with SI traceability of the radiometry. The current activity was preceded by Phase 1, 2 and 3 studies, which specified the scientific, technical, and operational requirements for the Copernicus OC-SVC infrastructure (Phase 1), developed two concurrent preliminary designs with preliminary project planning and costing (Phase 2), and selected two candidate locations for the infrastructure, off El Hierro (Spain) and south of Crete (Greece) (Phase 3).
For these completed Phases, all documentation is available online at https://www.eumetsat.int/OC-SVC. At each Phase, the activities underwent critical review by EC observers, EUMETSAT, ESA, and an international Expert Review Board. Following the outcomes of the Phase 2 studies, the in-water optical system deployed from the buoy platform will be based on the updated MOBY design implemented by NOAA and, in parallel, by NASA as MarONet for deployment off Western Australia. The goal is to achieve the best OC-SVC infrastructure and expertise for the Copernicus Programme, to mitigate development risks, and to standardise with equivalent global assets. The selection of one of the two locations identified in Phase 3, El Hierro or Crete, will be made in the proposal and thoroughly justified. The detailed design and development of the infrastructure is a complex systems engineering project. At a conceptual level, the infrastructure is composed of three components:
  • The Field Segment offshore component, composed of an optical buoy tethered to a moored buoy, which operationally measures the in situ optical radiometry for OC-SVC as well as associated ancillary parameters. The optical system on the optical buoy is the major element in the uncertainty budget propagation and SI traceability of the infrastructure's water-reflectance measurements.
  • The Field Segment onshore component, responsible for operations of the offshore component (ships, divers, workshops, storage, etc.), ancillary measurements from land (an atmospheric/aerosol observatory), and facilities such as the optical calibration laboratory. The calibration laboratory and operational procedures are major elements in securing the uncertainty requirements on the infrastructure's water-reflectance measurements and their operational availability and continuity.
  • The Data component, which acquires and manages all data from the infrastructure, including raw data/telemetry acquisition, programming of the controllers for the optical system, sensors, and instrumentation offshore and onshore, monitoring and commanding of the offshore component, and processing of the data to the fully calibrated and corrected water reflectances required for OC-SVC. The data component products demonstrate that the infrastructure meets the uncertainty and operational availability requirements.
At the implementation level, the three components are tightly coupled and cannot be separated; an integrated system view needs to be applied across all sub-components working together as a whole. A free, full, and open policy will be applied to the Copernicus infrastructure data, software and documentation.

Friday 27 June 14:30 - 16:00 (Hall L3)

Presentation: VICALOPS: ESA's Innovative Service for Satellite Sensor radiometric Calibration

Authors: Béatrice Berthelot, Louis Sainte-Marie, Guillaume Fassot, Tristan Lalanne, Pieter De Vis, Astrid Zimmerman, Marc Bouvet
Affiliations: Magellium, NPL, ESA/ESTEC
VICALOPS is the European Space Agency's (ESA) new service designed to monitor and ensure the radiometric performance of satellite sensors with precision and reliability. By leveraging advanced vicarious calibration methods, VICALOPS exploits the reflectance properties of diverse natural targets, including bright desert regions, snowfields, deep convective clouds (DCC), atmospheric molecules, and sunglint over water. These calibration techniques are integrated to provide a detailed overview of sensor calibration and long-term stability.
Proven calibration techniques: the vicarious calibration methods used by VICALOPS have been rigorously tested and validated using ESA's DIMITRI toolbox. These validations include extensive analysis of satellite data and comprehensive error assessments to guarantee accuracy and confidence in the results.
Automated and efficient processes: VICALOPS automates a range of essential processes, streamlining workflows to ensure timely and consistent calibration monitoring. Key automated functions include extracting data from calibration sites, assessing cloud coverage in satellite products, running calibration algorithms on a weekly basis, and generating detailed, customizable reports. Each synthesis provides an in-depth evaluation of sensor performance, delivering actionable insights through periodic reporting.
Supported sensors: VICALOPS can monitor a wide array of satellite sensors spanning historical, current, and future missions. The service currently supports sensors from their first post-launch acquisition, including MODIS (Terra and Aqua), VIIRS, Landsat 8 and 9, Sentinel-2/MSI (A and B), Sentinel-3/SLSTR (A and B), Sentinel-3/OLCI (A and B), and historical sensors such as MERIS, VEGETATION, ATSR-2/AATSR, and PARASOL.
Expanding capabilities: while VICALOPS already handles an extensive list of supported sensors, it is highly flexible and open to expansion. The service can accommodate new commercial small sensors upon request and under specific agreements, making it a valuable tool for new data providers seeking to validate their sensors' radiometric calibration and to quantify uncertainties through inter-comparison with well-calibrated systems like Sentinel-2.
User-centric features: through a secure user authentication process, VICALOPS enables users to customize their calibration assessments, defining specific periods, calibration sites, and datasets for monitoring and tailoring the service to their requirements.
Future timeline and demonstration: VICALOPS is currently in an advanced development stage. A demonstration phase is planned for early 2025, with the full service expected to launch in spring 2025. During the demonstration phase, users will gain insights into the service's capabilities, and an overview will be presented to showcase its potential.

Friday 27 June 14:30 - 16:00 (Room 1.14)

Session: A.08.06 Ocean Extremes and multiple stressors events

Since the beginning of the industrial era, the oceans have absorbed 90% of the atmosphere's excess heat and 30% of anthropogenic carbon. This buffering capacity has led to global ocean warming, sea level rise, increasing surface acidification, and loss of oxygen from the surface down to 1000 m. One of the most visible effects of the ongoing climate crisis is the increase in the frequency and severity of extreme climate events, e.g. tropical cyclone winds and rainfall, extreme waves, extreme acidification, and marine heatwaves. This stress on the marine environment has a tremendous impact on the many organisms living in the ocean. In particular, the impact of extreme climate events on marine life is expected to be very high, as such events provide little opportunity for organisms to adapt. There is therefore a need to deepen the monitoring, assess the ocean preconditioning, and anticipate the ocean response to extreme events and compound multiple-stressor events. This session welcomes contributions demonstrating the role of remote sensing data, in synergy with other observations (in-situ, airborne, etc.) and models, in better observing, monitoring and predicting extreme and multi-stressor events over the oceans, and in further understanding how they impact the marine physical and biogeochemical environment and marine ecosystems. Studies investigating the occurrence, phasing, interplay and impact of compound events are also welcome. The session is also open to contributions demonstrating how EO-derived products can be used to support management actions aimed at mitigating the impact of extreme and multiple-stressor events on the marine environment.

Friday 27 June 14:30 - 16:00 (Room 1.14)

Presentation: Characterizing the tropical cyclone dynamics from Earth observation data synergy

Authors: Arthur Avenas, Alexis Mouche, Bertrand Chapron, John Knaff, Pierre Tandeo, Paul Platzer
Affiliations: ESA, Ifremer, NOAA/NESDIS RAMMB, IMT Atlantique
Satellites have been used to analyse and monitor Tropical Cyclones (TCs) since the 1960s. Combined measurements from active scatterometers and passive radiometers, probing the ocean surface during TC conditions, are essential. Yet, for various reasons (low/medium spatial resolution, loss of signal sensitivity at high winds, rain contamination), the strong surface wind gradients that can occur at scales of a few kilometres are poorly resolved by these sensors. More recently, retrieval methods to estimate ocean surface wind speeds in intense TCs have been demonstrated for spaceborne Synthetic Aperture Radar (SAR) observations. By combining the C-band co- and cross-polarized channels, SAR estimates of TC ocean surface wind speeds can be obtained at high spatial resolution (~1 km). Such data provide unique information on TC boundary-layer processes, especially within inner-core regions of strong surface wind gradients. Today the spatio-temporal sampling of these sensors is reasonably good, with ESA's Sentinel-1A and -B wide-swath acquisitions complemented by several Canadian SAR missions. Existing theories of the TC steady state and/or dynamical life cycle may thus be re-examined against these unprecedented observations. An extended SAR TC database was used to corroborate that fundamental conservation principles constrain the TC radial wind structure. Two characteristic radii controlling the radial wind decay were exhibited and shown to be critical for the TC life cycle. Based on these radii, analytical solutions for (1) the TC steady-state wind structure and (2) the short-term (~12 h) evolution of the TC wind structure were then derived and demonstrated to be fully consistent with SAR observations. Implications for how SAR observations may complement global and historical TC vitals to refine the detection of TC climate trends are discussed. Furthermore, possible improvements of the theory using other data sources (e.g. geostationary visible/infrared sensors and passive microwave sounders) and future missions (e.g. MetOp Next Generation, CIMR, ROSE-L, S1NG and Harmony) are addressed.
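The abstract's own analytical radial-wind solutions are not reproduced here; as a generic, well-known stand-in, the Holland (1980) parametric profile below illustrates the kind of radial wind structure model that such SAR observations constrain: wind peaks at the radius of maximum wind and decays outward at a rate set by a shape parameter. All parameter values are illustrative.

```python
import math

def holland_wind(r_km, v_max=50.0, r_max_km=30.0, b=1.5):
    """Holland (1980) gradient wind (m/s) at radius r_km from the TC centre.

    v_max: maximum wind speed (m/s); r_max_km: radius of maximum wind (km);
    b: shape parameter controlling the sharpness of the radial decay.
    """
    x = (r_max_km / r_km) ** b
    return v_max * math.sqrt(x * math.exp(1.0 - x))

for r in (10, 30, 100, 300):
    print(f"r = {r:4d} km -> V = {holland_wind(r):5.1f} m/s")
```

By construction the profile returns exactly `v_max` at `r = r_max_km` and decays monotonically outward, which is the basic structure the two characteristic radii in the abstract further constrain.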

Friday 27 June 14:30 - 16:00 (Room 1.14)

Presentation: Forcing, Impacts and Trends of Marine Temperature Extremes: the CAREHeat Project

Authors: Federico Serva, Salvatore Marullo, Angela Landolfi, Emanuele Organelli, Rosalia Santoleri
Affiliations: Consiglio Nazionale Delle Ricerche
Sea surface temperatures extremes, such as marine heatwaves (MHWs) are one the rise due to global warming. Their cold counterpart, marine cold spells (MCSs) also pose a threat to the ecosystem, and are generally decreasing over the last decades. The objectives of the ESA funded project “deteCtion and threAts of maRinE Heat waves – CAREHeat” are to advance our understanding of the MHW phenomenon, providing insights on the associated physical processes, such as forcing and long-term variability, and assessing the biogeochemical responses to temperature extremes at the global scale. In the frame of the CAREHeat project we assess the sensitivity to methodological choices (e.g., thresholds and climatologies) of widely used extreme detection approaches with state-of-the-art satellite data. We also study the roles of climate change and natural variability in modulating the occurrence of marine temperature extremes. Dependence on the sea surface temperature dataset choice is also a matter of concern, as this can further contribute to spread in the detection results, especially in periods when observational coverage was less complete To this aim, we integrate in our analysis data from other observational sources (e.g., in-situ sensors) and numerical models (reanalysis outputs) to better understand underlying processes with a multivariate approach. Case studies for selected marginal seas, such as the Mediterranean Sea, are also discussed, given the increasing potential implications of MHWs for ecosystem functioning and marine-based activities, as it has been demonstrated by looking at the last two decades of the observational record. More details on the outcomes of the project are available from the associated website (https://careheat.org/) and broader context is provided by the ESA Ocean Science Cluster webpage (https://eo4society.esa.int/communities/scientists/esa-ocean-science-cluster/). 
Key results from the project, such as the MHW atlas, are prepared following common standards for public dissemination to all interested researchers and stakeholders, aiming at compatibility with products delivered by other relevant ESA projects.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.14)

Presentation: Ocean Connectivity and Future SST Scenarios Can Predict Tipping Points in Phytoplankton Biodiversity due to Climate Change.

Authors: Bror Jönsson, Shubha Sathyendranath
Affiliations: University Of New Hampshire, Plymouth Marine Laboratory
Climate change is fundamentally altering global ocean living conditions for marine ecosystems, both through long-term trends and through increasingly extreme perturbations of physical conditions. Oceanic phytoplankton communities can mainly respond to these stresses through the evolutionary adaptation of resident taxa and/or the migration of better-acclimatized organisms from other parts of the ocean. Quantifying the relative importance of these two mechanisms is critical for assessing the overall impact of climate change and for better predicting how marine ecosystems will respond. Here we present work that shows how estimates of surface-ocean connectivity can be used to calculate the migration timescales of warm-tolerant planktonic organisms and compare them with predicted rates of change of sea surface temperature. We find that for the majority of the ocean, migration timescales are short enough to compensate for long-term warming, suggesting that the immigration of novel types will be the key mechanism by which marine planktonic communities adapt. The major exception to these findings is regions, mainly in the tropical ocean, that are warmer than surrounding waters and therefore experience local temperature maxima. Here warm-tolerant organisms have to travel long distances over several years to supplant the local population with genetic material that is better adapted to new conditions. The Red Sea and the Persian Gulf (among the warmest Large Marine Ecosystems, LMEs), which currently experience the highest temperatures, act as a critical seed bank of warm-tolerant species. These temperature-defined ecosystem niches will expand greatly in spatial extent by 2100 (increasing from 0.15% to 24% of the global surface ocean). However, these presently warm areas are also at the highest risk of extirpation and will likely rely on evolutionary adaptation to maintain viable populations.
This potentially increasing dependency on regions already experiencing conditions expected under climate change scenarios to act as sources of temperature-adapted organisms suggests a significant risk of a tipping point in marine ecosystems and is of great concern from a management and conservation perspective. It is of critical importance to better understand these ecosystems and assess their resilience, especially in light of the environmental stress that the Red Sea and Persian Gulf experience today.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.14)

Presentation: Marine Heatwaves in Fjords: Combining In Situ and Satellite Data for Better Understanding

Authors: Cécile Pujol, Alexander Barth, Iván Pérez-Santos, Aida Alvera-Azcárate
Affiliations: Geohydrodynamics and Environment Research (GHER), University Of Liège, Centro de investigaciones i~mar, Universidad de Los Lagos, Centro de Investigación Oceanográfica COPAS Sur-Austral, Universidad de Concepción, Centro de Investigación Oceanográfica COPAS COASTAL, Universidad de Concepción, Centro de Investigación en Ecosistemas de la Patagonia (CIEP), Instituto de Fomento Pesquero (IFOP)
Marine heatwaves (MHWs) are phenomena characterized by anomalously warm water events that can persist from a few days to several months. These events are increasingly observed worldwide and are typically studied using long-term satellite records. While MHWs in the open ocean have been well documented, their occurrence and dynamics in coastal environments remain poorly explored. Coastal areas, with their complex features such as the presence of islands, channels, gulfs or even fjords, pose significant challenges for observation using satellite data alone. Although high-resolution sea surface temperature datasets derived from satellites exist, they are often not able to resolve very complex areas such as fjord systems, which feature highly complex bathymetry. In this study, we investigate the development of MHWs in the Chilean Patagonian Inner Sea, a fjord ecosystem, using a combination of in situ and satellite data. Since the 1990s, the Patagonian Fjords have been well sampled, resulting in about 3 million samples across different depths. We used these data to construct a climatology of the sea temperature. To achieve a continuous field with daily temporal resolution and high spatial resolution, we interpolated the in situ data using the DIVAnd algorithm. This resulted in a climatology with a spatial resolution of ~900 m that does resolve the fjords and channels of the study area. Detecting MHWs requires defining a temperature threshold above which conditions are considered “extreme”. Typically, this threshold is set as the 90th percentile of the temperature dataset. However, due to the temporal and spatial sparsity of the in situ dataset in some areas, defining such a threshold was not feasible. To address this problem, we used sea surface temperature data from MODIS-AQUA (4 km resolution) as a baseline to calculate the threshold over the study area, and subsequently interpolated it to match the high resolution of the climatology.
Our findings reveal that MHW intensity is generally higher in enclosed areas such as gulfs and fjords, with an average frequency of 1.5 to 2.5 events per year across most of the study area. An analysis of long-term trends shows that MHW intensity is decreasing over time in the enclosed areas but increasing in the open ocean. On the other hand, MHW frequency is mostly increasing, except in one of the basins, which is characterised by faster ocean dynamics. Over the past two decades, we identified several particularly strong and/or prolonged MHWs. To understand the role played by atmospheric parameters, we used ERA5 reanalysis data to investigate their interactions with MHWs. Our results show that reduced cloud cover (resulting in increased solar radiation), atmospheric heatwaves, and reduced winds are often linked to the development of MHWs. Additionally, we investigated the relationship between MHWs and variations in chlorophyll concentration within this ecosystem, using OLCI datasets at 4 km and 300 m resolution to observe changes in chlorophyll concentration during MHW events. This analysis offers insights into how these extreme thermal events impact primary productivity and ecological dynamics in the region.
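The percentile-threshold detection described above can be sketched in a few lines. This is a minimal illustrative example, not the authors' implementation: the function name, the array layout, and the five-day minimum duration (a common convention in the MHW literature) are assumptions here, and leap years are ignored for simplicity.

```python
import numpy as np

def detect_mhw(sst, clim, pct=90, min_duration=5):
    """Flag marine heatwave events: runs of at least `min_duration`
    consecutive days where SST exceeds the day-of-year percentile
    threshold built from a baseline record.
    sst  : 1-D array of daily SST at one location (365-day years assumed)
    clim : 2-D array (n_years, 365) of baseline daily SST
    Returns a list of (start_day, end_day) index pairs."""
    thresh = np.percentile(clim, pct, axis=0)          # day-of-year threshold
    exceed = sst > thresh[np.arange(sst.size) % 365]   # days above threshold
    events, start = [], None
    for i, hot in enumerate(exceed):
        if hot and start is None:
            start = i                                  # spell begins
        elif not hot and start is not None:
            if i - start >= min_duration:
                events.append((start, i - 1))          # spell long enough
            start = None
    if start is not None and exceed.size - start >= min_duration:
        events.append((start, exceed.size - 1))        # spell runs to the end
    return events
```

In the same spirit as the study, `clim` could come from a coarser satellite baseline while `sst` comes from the high-resolution interpolated field.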
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.14)

Presentation: Capabilities of the SWOT satellite for measuring coastal extreme sea levels

Authors: Diego Vega-Gimenez, Alexandre Antoine Paris, Ananda Pascual, Angel Amores
Affiliations: Imedea (uib-csic)
This study evaluates the capabilities of the Surface Water and Ocean Topography (SWOT) satellite to observe and analyze sea level variations associated with storm surges in coastal regions. Storm surges, caused by a combination of strong winds (wind setup), low atmospheric pressure (inverse barometer effect), and wind wave effects (wave setup), represent one of the most destructive impacts of extreme weather events. These surges lead to rapid and abnormal rises in sea level, often resulting in devastating coastal flooding, as seen in notable cases such as Storm Gloria in the Mediterranean and recent hurricanes like Helene and Milton in Florida. Traditionally, storm surges have been studied using tide gauges (TGs), which provide high-frequency sea level measurements at fixed locations. However, TGs are sparsely distributed, confined to shorelines, and unable to capture the full spatial extent of storm surges across the open ocean or along complex coastlines. The advent of satellite altimetry marked a significant step forward, enabling the detection of the spatial signal of storm surges. Missions such as TOPEX/Poseidon, Jason series, and SARAL/AltiKa have provided critical insights into storm surge dynamics over the past few decades. Despite their contributions, these missions have limitations in spatial resolution and coverage, as they measure sea level only along narrow ground tracks with significant gaps, missing the fine-scale structure and horizontal extent of storm surges. SWOT, launched in December 2022, addresses these limitations through its innovative wide-swath interferometric radar, which produces two-dimensional sea surface height (SSH) maps at an unprecedented resolution of ~2x2 km. This capability is particularly valuable in coastal regions, especially within 20 km of the shoreline, where traditional altimetry has faced challenges due to land interference. 
This work unveils SWOT’s ability to observe the evolving process of storm surges in two dimensions, providing a novel spatial perspective on these extreme events. By combining SWOT data with observations from tide gauges, outputs from the SCHISM hydrodynamic model, and ERA5 reanalysis of atmospheric wind and pressure fields, we provide a comprehensive analysis of storm surge dynamics. Case studies from the Baltic Sea (micro-tidal), the North Sea (macro-tidal), and tropical cyclones illustrate SWOT’s capability to capture the spatial footprint of storm surges across diverse oceanic regimes. This is achieved through both the Calibration/Validation (Cal/Val) and Science phases of the mission, ensuring robust spatial and temporal coverage of extreme sea level events. Comparisons between SWOT-derived sea level anomalies (SLA) and tide gauge records show strong agreement, validating the accuracy of the satellite observations. The results highlight the potential of SWOT data to enhance the performance of coastal hydrodynamic models by providing high-resolution information in regions with sparse observational coverage. Moreover, SWOT’s ability to produce detailed two-dimensional maps of storm surges fills critical observational gaps, complementing traditional altimetry and ground-based measurements. Unlike traditional altimeters, SWOT’s wide-swath snapshots allow for the visualization of the full spatial structure of storm surges, rather than isolated measurements along predefined tracks, thus enabling a more comprehensive understanding of these phenomena. This study underscores the transformative role of SWOT in advancing storm surge monitoring, forecasting, and hazard mitigation. By bridging observational gaps and providing detailed spatial information on extreme sea level events, SWOT contributes significantly to improving coastal resilience and informing strategies for managing the impacts of climate change.
The findings demonstrate that SWOT’s capabilities extend beyond traditional altimetry, offering unprecedented tools for understanding and mitigating the risks associated with storm surges in coastal areas worldwide.
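Of the surge components named in the abstract, the inverse barometer effect has a simple closed form: the static sea level anomaly is Δη = −ΔP/(ρg), roughly +1 cm per 1 hPa of pressure drop. The sketch below is illustrative only (constants and the function name are not from the study):

```python
# Constants for the static inverse-barometer approximation
RHO_SW = 1025.0  # seawater density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def inverse_barometer(p_hpa, p_ref_hpa=1013.25):
    """Sea level anomaly (m) from the inverse barometer effect:
    eta = -(P - P_ref) / (rho * g), i.e. roughly +1 cm per 1 hPa drop."""
    dp_pa = (p_hpa - p_ref_hpa) * 100.0  # convert hPa to Pa
    return -dp_pa / (RHO_SW * G)

# Example: a 960 hPa storm low raises sea level by roughly half a metre
# from the pressure effect alone; wind setup and wave setup come on top.
```

This static term is only one contribution; the full surge observed by SWOT also reflects wind and wave forcing resolved by models such as SCHISM.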
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.14)

Presentation: Waves inside tropical cyclones: new capabilities using high-resolution SAR measurements

Authors: Clément Pouplin, Alexis Mouche, Bertrand Chapron, Jean-François Filipot
Affiliations: France-energies-marines, Ifremer
Inside a tropical cyclone (TC), the local sea-state is usually composed of several wave systems with different properties, originating from different regions in the storm. Strong wind gradients, in both velocity and direction, can generate complex wave fields, also varying in time and space at a very small scale. Representing this complex wave field through a standard spectral wave model requires a highly resolved wind field acting over a fine grid. Launching an ensemble of forecasts may then become computationally expensive. At much lower computational cost, a practical approach is to develop parametric wave models to describe tropical cyclone wave fields (Young (1988), Young & Vinoth (2013), Kudryavtsev et al. (2015)). In this study, a Lagrangian wave model called KYC2021 (Kudryavtsev et al. 2021a) is considered. In such a model, spectral wave properties are propagated as Lagrangian particles in a rotating and translating wind field, providing the evolution of their related energy, peak wavelength and direction. More realistic tropical cyclone wind forcings can now be obtained thanks to enhanced capabilities offered by high-resolution wind field images estimated from the present-day RCM, Sentinel-1 and Radarsat-2 SAR constellations. By running the model on well-covered TC events, modeled directional wave spectra can then be compared with newly available sea state information derived from these SAR measurements. Moreover, wave buoys, the satellite CFOSAT SWIM instrument and altimeters provide spectral or significant wave height estimates. SWOT data also reveal the propagation of very long swells (forerunners) from strong wind events and especially tropical cyclones. The simplified low-cost model can then be robustly assessed using this varied catalogue of data, to identify its limitations and strengths and provide fast estimates of extreme sea states inside TCs.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 0.96/0.97)

Session: D.02.15 The ESA Φsat-2 mission: an AI empowered 6U Cubesat for Earth Observation

As part of an initiative to promote the development and implementation of innovative technologies onboard Earth Observation (EO) missions, the Φsat-2 mission demonstrates AI capabilities for new, valuable EO techniques relevant to EO user communities. Its overall objective is to address innovative mission concepts, fostering novel architectures or sensing methods that meet user-driven science and applications through onboard processing using state-of-the-art AI techniques and AI-accelerator processors. The Φsat-2 mission is based on an innovative nano-satellite platform capable of running AI applications developed by its users. It is a 6U satellite equipped with a high-resolution multispectral instrument that can acquire eight bands (seven plus panchromatic) from Visible to Near Infra-Red (VIS/NIR). Multiple applications run on the NanoSatMO Framework via the CogniSat AI processor for novel data analysis. It also allows custom AI applications to be developed, installed, updated, and operated on the satellite while in orbit, enabling Φsat-2 to adapt to changing needs and maximize its value for scientists, businesses, and governments. As of today, six AI applications will run onboard the satellite, including capabilities to turn images into maps, detect and classify clouds, provide insight into cloud distribution, detect and classify vessels, compress images onboard and reconstruct them on the ground to reduce download time, identify anomalies in marine ecosystems, and detect wildfires. This invited session will provide an overview of the Φsat-2 mission, its current status, its various AI app demonstrations, and further opportunities for the open community to work with the mission. Φsat-2 is a collaborative effort with Open Cosmos as the prime contractor, supported by an industrial consortium including CGI, Simera, Ubotica, CEiiA, GEO-K, and KP-Labs, with additional AI app contributions from IRT Saint Exupery and TAS-I.

This session will feature the following contributions:

Overview of the Φsat-2 mission


  • Florian Deconinck - Open Cosmos

Φsat-2 in action: AI app orchestration, data acquisition, and open access


  • Alessandro Marin - CGI Italy

From pixels to insights: developing and running AI Applications


  • The Best Cloud Detection in the World by Jakub Nalepa (KP-Labs)
  • Generative Adversarial Networks in Orbit results by Alessandro Marin (CGI Italy)
  • Deep Compression Application by Giorgia Guerrisi (Geo-K)
  • Autonomous Vessel Awareness by Andre Dias (CEiiA)
  • Monitoring the Ocean from Orbit: First Onboard Runs of Marine Anomaly Detection for Environmental Protection by Thomas Goudemant (IRT St Exupery)
  • PhiFire AI: on-board wildfire detection by Federica Biancucci / Andrea Tantucci (Thales Alenia Space Italy)

All4One or One4All? Tailoring Onboard AI with NAS and Foundation Models


  • Roberto Del Prete - ESA Φ-lab
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall M1/M2)

Session: C.01.14 Exploring new observations and mission concepts for atmospheric measurements (observations, modelling and theories)

The atmospheric remote sensing community is abuzz with innovative ideas for new mission concepts enabling novel research applications by means of active (radar & lidar) and passive (optical & microwave) measurement concepts. Observations from new airborne and ground-based instruments play a vital role in assessing the feasibility of these ideas for future satellite applications. These new measurements need to go hand in hand with the mathematical understanding of the theoretical frameworks and models. This session invites presentations on:

- Innovative observations of geophysical products

- Modelling efforts and theoretical frameworks to obtain innovative observations

- Feedback and lessons learned from ongoing or planned developments as well as from first ground-based or airborne campaigns

Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall M1/M2)

Presentation: Advancing Methane Monitoring: Development of a High-Resolution Satellite Payload with Spectrometer and Micro-LiDAR Integration

Authors: Daria Stepanova, Errico Armandillo, David Vilaseca, Manuel Quisler, Herbert Nett
Affiliations: Airmo
AIRMO is developing a satellite demonstrator with innovative instruments designed specifically for the accurate monitoring of localized greenhouse gas (GHG) emissions, with a focus on methane, from industrial facilities, urban areas, and natural sources. The payload combines a high-resolution spectrometer and micro-LiDAR systems to enable precise detection and quantification of emissions at the facility scale. With a target accuracy for methane detection of 100 kg/h, the payload achieves a spatial resolution of 50 meters and a revisit period of less than 4 days in target areas. A key differentiator of AIRMO's approach is its ability to detect emissions in challenging environments with high aerosol loads, where traditional monitoring approaches often struggle. This capability ensures accurate and actionable data even in regions with complex atmospheric conditions, expanding the applicability and reliability of emissions monitoring in diverse global settings. The SWIR pushbroom spectrometer, designed in an 8U form factor, utilizes freeform optics and can achieve a 20 km swath from a 500 km orbit. The hyperspectral camera uses a 2D InGaAs array featuring a 15-micron pixel size and 0.4 nm spectral resolution, enabling detailed spectral data acquisition. The micro-LiDAR enhances data quality by providing information on aerosol extinction, backscattering, aerosol layering, and planetary boundary layer height. This addition is projected to improve pixel data usability by up to 50% and to significantly enhance retrieval accuracy, based on results from a preliminary design study. The project, supported by ESA’s InCubed program and currently under development, targets these performance benchmarks as it progresses through critical phases, including airborne campaigns and an In-Orbit Demonstration Mission (IOD).
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall M1/M2)

Presentation: NASA’s Polarized Submillimeter Ice-Cloud Radiometer (PolSIR): Observing the diurnal cycle of ice clouds in the tropics and sub-tropics

Authors: Ralf Bennartz, Dong Wu
Affiliations: Vanderbilt University, NASA GSFC
In May 2023 NASA selected PolSIR as the latest addition to its Earth Venture Instrument class of missions (EVI-6), and in November 2024 PolSIR entered its implementation phase. PolSIR addresses key research priorities related to uncertainties in our current understanding of high clouds and cloud feedbacks, as formulated in NASA’s latest Decadal Survey and in the latest Intergovernmental Panel on Climate Change (IPCC) Assessment. In this context, PolSIR will address the following objectives:

  • Constrain the seasonally influenced diurnal cycle amplitude, form, and timing of the ice water path (IWP) and particle diameter in tropical and sub-tropical ice clouds,
  • Determine the diurnal variability of ice clouds in convective outflow areas and understand its relation to deep convection,
  • Determine the relationship between shortwave and longwave radiative fluxes and the diurnal variability of ice clouds,
  • Enable improvement of climate models by providing novel observations of the diurnal cycle of ice clouds, ultimately leading to improved climate modeling skills and increased fidelity of climate forecasts in support of critical decision-making.

The PolSIR mission consists of two 16U CubeSats, each equipped with a cross-track scanning polarized submillimeter radiometer near 325 and 684 GHz. The two PolSIR satellites fly in separate, 52-degree inclination, non-sun-synchronous orbits to measure the diurnal variation of cloud ice and its microphysical properties on a monthly basis in the tropics and sub-tropics. The mission is PI-led by Vanderbilt University. NASA Goddard provides the project management team that builds the two instruments, while science operations are conducted by the Space Science and Engineering Center at the University of Wisconsin. The two spacecraft will be built by Blue Canyon Technologies. Launch of the two satellites is currently anticipated for late 2026/early 2027.
In this presentation we will give an overview on the science to be addressed by PolSIR and on the current project status.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall M1/M2)

Presentation: Evaluating the potential impact of EPS-Sterna and hyperspectral MW observations for global NWP in an Ensemble of Data Assimilations (EDA)

Authors: Niels Bormann, Stephen English, Katie Lean
Affiliations: ECMWF
In recent years, advancements in satellite technology have opened up two new avenues for the evolution of operational passive microwave sounding from space: 1) the design of compact instruments suitable for small satellites or even cubesats, allowing cost-effective satellite constellations, and 2) the emergence of hyperspectral MW capabilities, expected to provide finer spectral sampling. These have sparked new efforts to evaluate how such new observations are expected to improve global weather forecasts, to guide the evolution of the global observing system. The presentation will provide an overview of studies funded by ESA and EUMETSAT to assess the benefit of two examples of such missions, that is, the EPS-Sterna constellation of microwave sounders on small satellites proposed by EUMETSAT in collaboration with ESA, and a hyperspectral instrument based on the Hyperspectral Microwave Sounder (HyMS) mission developed by Spire. Both missions are evaluated in the ECMWF system using the Ensemble of Data Assimilations (EDA) method, applying Monte Carlo concepts to evaluate the expected reduction in the uncertainty of short-range forecasts. For EPS-Sterna, the assimilation of such data can mostly build on the established experience from heritage instruments. The evaluations show that significant forecast benefit can be expected from adding the proposed 6-satellite constellation on top of an observing system with MW-sounding in the 3-orbit backbone system. For a hyperspectral MW sounder, new concepts need to be developed to be able to assimilate the data efficiently. This requires the development of approaches to deal with the large increase in the number of channels (100s-1000s compared to around 20-30) combined with exploiting the strengths of the MW region for atmospheric sounding. 
While experience with hyperspectral IR instruments gives some basis, the all-sky assimilation approach adopted for MW instruments as well as the specific noise performance of such instruments require careful attention. We will give an overview of the approaches considered and outline recent progress.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall M1/M2)

Presentation: New Global Tropical Measurements of Convective Vertical Mass Flux from NASA’s INCUS Mission

Authors: Derek Posselt, Susan van den Heever, Ziad Haddad, Simone Tanelli, Kristen Rasmussen, Pavlos Kollias, Brenda
Affiliations: Jet Propulsion Laboratory
The overarching goal of the upcoming NASA INvestigation of Convective UpdraftS (INCUS) mission is to enhance our understanding of why, when, and where tropical convective storms form, and why only some of these storms produce extreme weather. Deep convective storms transport air and water between Earth’s surface and the upper troposphere. This vertical transport of air and water, often referred to as convective mass flux (CMF), plays a critical role in Earth’s weather and climate system through its impacts on large-scale atmospheric circulations, upper-tropospheric moistening and high-cloud radiative feedbacks, precipitation rates, and extreme weather. Potential changes to CMF in a changing climate may significantly impact these processes. In spite of the critical role of this vertical transport of water and air, the representation of CMF remains a major source of error in weather and climate models, thereby limiting our ability to accurately predict convective storms and their impacts in current and future climates. The observations obtained from INCUS will enhance our understanding of tropical convective storm processes and provide guidance for representing these processes in weather and climate models. INCUS comprises three SmallSat platforms, each carrying a 7-beam Ka-band (35.75 GHz) scanning radar. The satellite platforms will be 30 and 90 seconds apart, providing three time intervals (30, 90, and 120 seconds), referred to as Δt, over which observations will be made. A time-differenced radar reflectivity profile approach will be used to retrieve CMF. In addition to the Ka-band radars, a single cross-track-scanning passive microwave radiometer, operating at frequencies of 87, 163, 174, 178, and 181 GHz, will be housed on the middle SmallSat. The radiometer will provide observations of high anvil cloud characteristics, as well as placing the radar observations within their storm context.
INCUS is the first systematic investigation of the rapidly evolving CMF within tropical convective storms, and the combination of radars and a radiometer on INCUS will deliver unprecedented three-dimensional views of tropical convective storms. While launch is scheduled for late 2026, extensive science and algorithm development has already occurred, including high-resolution modeling of diverse convective systems and analysis of surface-based rapid-scan radar observations of convection to assess the INCUS observation approach. This presentation will highlight the observational capabilities and scientific approach of the INCUS mission, and will present highlights of early INCUS scientific results.
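The time-differenced idea can be conveyed with a deliberately simplified toy (this is not the INCUS retrieval algorithm, which works on full reflectivity profiles): track how far a reflectivity feature such as the echo top rises between two profiles Δt apart, convert that rise to a vertical velocity, and multiply by an assumed air density to get a mass flux. All names and values below are illustrative.

```python
import numpy as np

def echo_top_height(refl_dbz, heights_m, threshold_dbz=20.0):
    """Highest range gate where reflectivity exceeds a threshold."""
    above = np.nonzero(refl_dbz >= threshold_dbz)[0]
    return heights_m[above[-1]] if above.size else np.nan

def toy_mass_flux(profile_t0, profile_t1, heights_m, dt_s, rho=0.5):
    """Toy convective mass flux (kg m^-2 s^-1): vertical velocity
    from the echo-top rise over dt, times an assumed air density."""
    w = (echo_top_height(profile_t1, heights_m)
         - echo_top_height(profile_t0, heights_m)) / dt_s
    return rho * w

heights = np.arange(0, 15000, 250.0)       # radar range gates, m
p0 = np.where(heights < 8000, 30.0, 0.0)   # echo top near 7.75 km
p1 = np.where(heights < 8250, 30.0, 0.0)   # one gate (250 m) higher 30 s later
# a 250 m echo-top rise in 30 s gives w of roughly 8 m/s
```

The real retrieval differences whole reflectivity profiles rather than a single feature, but the scaling is the same: shorter Δt resolves faster-evolving updrafts.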
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall M1/M2)

Presentation: C3IEL, the Cluster for Cloud evolution ClImatE and Lightning mission to study convective clouds at high spatial and temporal resolutions

Authors: Laurène Gillot, Céline Cornet, Daniel Rosenfeld, Eric Defer, Shmaryahu Aviad, Fabrice Buffe, Cécile Cheymol, Adrien Deschamps, Laetitia Fenouil, Alex Frid, Vadim Holodovsky, Avner Kaidar, Hugo Méric, Guillaume Pénide, Raphaël Peroni, Colin Price, Didier Ricard, Antoine Rimboud, Yoav Schechner, Amaury Truffier, Yoav Yair
Affiliations: Cnes, Laboratoire d’Optique Atmosphérique, Université de Lille/CNRS, Institute of Earth Sciences, The Hebrew University of Jerusalem, Laboratoire d'Aérologie, CNRS, Asher Space Research Institute, Technion, Viterbi Faculty of Electrical Eng., Technion, Department of Geosciences, Tel Aviv University, CNRM, Météo-France-CNRS, Israel Space Agency, ISA, Reichman University, Thalès Services Numériques, Department of Earth and Atmospheric Sciences, Université du Québec à Montréal
Clouds and water vapor are key components of the Earth's climate system, yet uncertainties remain as to their interactions and evolution in the context of climate change. The C3IEL (Cluster for Cloud evolution, ClImatE and Lightning) mission, currently in Phase B (preliminary definition phase), is an innovative effort to provide measurements of clouds and convective systems from space at a decametric scale, in order to better understand the dynamic evolution of clouds by studying the processes that govern the development of convective systems and the links between their environment and their growth velocity. The C3IEL mission intends to enable the improvement of the cloud evolution models used by meteorological centers and thus contribute to enhancing weather forecasts. It will demonstrate the feasibility of new measurement approaches applied to cloud characterization. The mission will consist of two satellites flying together on a sun-synchronous orbit with a local time at descending node of 1:30 p.m. On board the satellites there will be three types of instruments:

- CLOUD, a visible imager with a spatial resolution of 20 m, dedicated to the estimation of cloud velocity by stereoscopic reconstruction (the expected precision on this velocity is a few meters per second, which provides unprecedented information on the dynamics and evolution of clouds);

- WATER VAPOR, imagers in three near-infrared spectral bands, allowing the estimation of water vapor content in the cloud environment at high spatial resolution (around 125 m at nadir);

- the LOIP, Lightning Optical Imagers and Photometers, dedicated to the observation of electrical activity in order to establish links between cloud dynamics and electrification.

The presentation addresses the C3IEL mission.
It covers the mission objectives and presents how this highly innovative mission will make it possible to obtain the 3D envelope of convective clouds and to estimate the vertical velocity of cloud development at a decametric resolution, while observing the water vapor content and the electrical activity of the clouds.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall M1/M2)

Presentation: Out of the Dark: Recent Advances in Understanding Atmospheric Light Fluxes at Night

Authors: Angela Abascal, Hector Linares Arroyo, Martin Aubé, Brian Espey, Tobias Degen, Geza Gyuk, Franz Hölker, Andreas Jechow, Monika Kuffer, Zoltán Kolláth, Christopher Kyba, Jun.-Prof. Dr. Andreas Rienow, Alejandro Sánchez de Miguel, Johannes Schultz, Alexandre Simoneau, Dr. Tobias Storch, Ken Walczak
Affiliations: Gfz Helmholtz Centre For Geosciences, Ruhr-Universität Bochum, Engineer Department, Public University of Navarra, Institut de Ciències del Cosmos (ICC-UB-IEEC), Cégep de Sherbrooke, Département de physique, School of Physics, Trinity College Dublin, University of Dublin, Leibniz Institute of Freshwater Ecology and Inland Fisheries (IGB), Adler Planetarium, Department of Engineering, Brandenburg University of Applied Sciences, Faculty Geo-Information Science and Earth Observation, University of Twente, Department of Physics, Eszterházy Károly Catholic University, Dep. Fisica de la Tierra y Astrofísica, Universidad Complutense de Madrid, Earth Observation Center (EOC), German Aerospace Center (DLR)
When the sun goes down, the flux of atmospheric radiation in the ultraviolet, visible, and near infrared bands drops precipitously… but not to zero. Just as movement on Earth’s surface at night does not stop, but rather continues as uncountable numbers of organisms such as insects traverse it, our atmosphere at night is filled with propagating photons from natural and, increasingly, anthropogenic sources. The radiant flux of light at night is an important geophysical parameter, but our understanding of it remains severely limited. This presentation will introduce the audience to the rapidly expanding field of remote sensing using light from the 300 to 1000 nm range at night. In addition to providing a background as to why nighttime light observations are important, we will highlight the ongoing developments in observation of nighttime light, and provide a brief overview of new and planned missions. In contrast to the daytime atmosphere, which is lit overwhelmingly by a single celestial source (the sun), the nighttime atmosphere is illuminated by distributed sources external to Earth (the moon, stars, and other sources of celestial light), spontaneous emission of radiation from within the atmosphere itself (airglow and aurorae), and from sources on Earth’s surface (bioluminescence, fires, and anthropogenic light). The dominant source varies by position on the Earth and, for natural areas, with the lunar cycle, aerosol content, meteor and solar activity. This makes the nighttime radiation field more difficult to describe than the daytime field, but it also means that a wealth of information is available to those that are able to observe it. The nighttime light field is of particular importance to nocturnal animals, who use the spatial distribution of radiation (and in some cases its polarization pattern) for orientation, navigation, and other visual tasks. These tasks are increasingly disturbed or influenced by the presence of artificial light. 
Similarly, nearly all organisms, from microorganisms to humans, use the cycling of light (diurnal, monthly, or seasonal) as a “Zeitgeber” to set and synchronize their biological clocks. Changes to these cycles caused by artificial light can interfere with timed physiological and behavioral processes, for example by preventing corals from broadcast spawning. A detailed understanding of the light field is therefore key to understanding the impact of artificial light on biodiversity. In addition to being produced, absorbed, and scattered by atmospheric components, nighttime light also impacts the chemical makeup of the atmosphere itself, via photodissociation. Artificial light has been shown to have sufficient flux to affect the concentration of nitrate radicals in the troposphere and, as a consequence, ozone. With a sensitive enough measurement device, properties of the atmosphere could be studied via scattered moonlight in natural areas or, in principle, via patterns of scattered and absorbed artificial light near cities and other bright light sources. This could extend investigation of the atmospheric state globally during periods when sunlight is not available. In a similar way, reflected moonlight could be used to investigate surface reflectance at night, extending measurements into periods of polar night, and increasing the number of annual observations available in regions with persistent cloud cover. The direct emission of light from the atmosphere has already been used to study gravity waves, and improved measurement techniques could both build upon this and provide spectrally resolved observations of the auroral ring. Beyond the natural world, artificial light is arguably the clearest signal of human activity available via remote sensing, and therefore provides a wealth of information about human activities. 
In addition to expanding upon the work being done by observing lights from cities and industry, improved nighttime light sensors could excel at observing the light associated with transportation networks and their associated atmospheric emissions. While the main motivations for nighttime sensors are likely to come from this domain, this talk will only touch on them briefly. It is important to note, however, that artificial light scattered by the atmosphere provides a unique window for remote sensing of human activities. The reason for this is that much of the light emitted by cities is emitted in horizontal directions (e.g. advertising lights), which are particularly difficult to observe directly from space, due to the geometry. Expanding night lights measurements to observe scattered skyglow therefore provides important complementary information to observations of artificial light emission. Nighttime light sensors are currently in their third generation. The first worldwide maps of light emissions were produced by the Defense Meteorological Satellite Program Operational Linescan System. Although the sensor was uncalibrated, hundreds of research publications based on its dataset are still published each year. The second generation was the DMSP successor, the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB), originally onboard Suomi-NPP, and now and in future years on the JPSS. The DNB is radiometrically calibrated, and compared to DMSP had major advances in spatial resolution and sensitivity. However, DNB has a single panchromatic band which has greatly hindered applications based on time-series data. The third generation of observations (e.g. astronaut photographs from the ISS, Jilin-1, SDGSAT-1) has provided even higher spatial resolution and multispectral capabilities (e.g. RGB observations). 
These satellites, however, generally have limited sensitivity, often lack radiometric calibration, and have greatly reduced spatial and temporal coverage compared to DNB observations. An exciting recent addition is TEMPO, which is now providing the first observations of nighttime lights from geostationary orbit. Recently, the EULE mission concept was proposed as Earth Explorer 12. With its extraordinary sensitivity, it may point the way forward for the fourth generation of nighttime lights satellites, which would be the first designed to intentionally observe light scattered by the atmosphere at night. After covering existing and future planned satellite and aerial missions, the talk will turn to the lessons learned with the recent advances. A particular focus will be on the causes of instability in observations. These are exceptionally large for DNB, and currently place a strong limit on our ability to monitor trends in emissions in areas experiencing gradual change. A key activity, which we argue is of particular importance in the coming years, is the expansion of existing light pollution models to provide predictions for radiances observed from orbit, and the development of the first end-to-end simulators for nighttime lights.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall F2)

Session: A.02.04 Advances in Monitoring and Management of Forest Ecosystems - PART 7

About 4 billion hectares, which is 31 percent of the global total land surface, is covered by forests according to the latest FAO Forest Resource Assessment (FRA 2020). Forests are a valuable and irreplaceable ecosystem, playing an important role from an ecological, social, and economic perspective. Monitoring of forests is essential for better understanding, protecting, and sustainably managing this valuable ecosystem.

Information needs vary and include forest area, forest types, forest structure, disturbances, health, and biomass. An unprecedented wealth of observations by optical and microwave sensors, active and passive, from low to high resolution allows new ways to monitor forests. Emphasis will be put on advances in detecting forest disturbances, forest health monitoring, species identification, support for sustainable forest management, estimating forest biomass, and forest carbon accounting. This session will showcase some of the more recent key achievements, including methods/algorithms, science, and applications.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall F2)

Presentation: Global Mangrove Watch (GMW) Radar Alerts for Mangrove Monitoring (RAMM) - a cloud-based deep learning system to detect mangrove loss

#stac

Authors: Benjamin Smith, Dr. Pete Bunting, Dr Victor Tang, Lammert Hilarides, PhD Andy Dean, Dr Frank Martin
Affiliations: Hatfield Consultants, Aberystwyth University, Wetlands International, European Space Agency
Global Mangrove Watch (GMW) publishes annual global data on the extent of mangrove forests and, since 2019, a mangrove deforestation alert system using Copernicus Sentinel-2 imagery (Bunting et al. 2023). The optical image alert system is limited by cloud cover, often delaying the detection of mangrove loss and diminishing its impact for mangrove conservation. Sentinel-1 Synthetic Aperture Radar (SAR) penetrates cloud cover and offers the potential to provide consistent monthly alerts. However, in coastal regions analytical methods must address the complex interaction of the SAR signal with mangrove canopy and water. To meet GMW’s goal of monthly alerts, under ESA OpenEO funding, we prototyped the Radar Alerts for Mangrove Monitoring (RAMM) system as an event-driven, scalable system to process SAR data and to detect and validate alerts. Deployed in the CREODIAS cloud environment on a Kubernetes cluster, all worker processes access Earth Observation (EO) data through the EODATA repository with appropriate S3 credentials. Data discovery and ingestion are handled via SpatioTemporal Asset Catalog (STAC) queries. Deploying on Kubernetes permits the dynamic scaling of worker processes from zero to a set maximum when scheduled or intermittent jobs are submitted; this reduces the compute footprint and the financial and environmental costs of operation. All outputs are stored in Binary Large OBject (BLOB) data storage. RAMM is implemented over the GMW baseline extent (currently 2020) in a two-stage, per-pixel approach designed to maximize efficiency. The first stage is a simple rule-based approach that thresholds the backscatter value and its difference from the previous year’s median backscatter, which reduces the effects of seasonal and tidal variability and radar speckle. The thresholds are designed to minimize false negatives (i.e. 
capture all possible mangrove loss). The second stage is a one-dimensional Convolutional Neural Network (1D-CNN) trained on a dataset of confirmed alerts and false positives (both produced by GMW). The integration of false positives into the training dataset encourages the model to recognise and flag falsely identified alerts provided by the rule-based first stage, as deep learning algorithms are known to extract fine-grained patterns between domain distributions. The RAMM system is triggered by an HTTP event to an Application Programming Interface (API) server, which initializes a job to populate a Message Queue (MQ) with tasks (messages consumed one at a time by worker processes until completion, with message buffering on the worker disabled). Leveraging an MQ enables jobs to be consumed concurrently across a set of workers. If any single task errors or is slow to complete, the queue may be consumed by other available workers. As the first-stage workers consume the MQ populated by the coordination job, resulting messages are pushed to a second MQ, which triggers the second stage to scale up and begin processing concurrently. The first and second stages upload intermediate (for debugging) and final outputs to the dedicated BLOB storage. The first stage takes in a task, retrieves the GMW mangrove extent disseminated by the coordination job, ingests the VH-polarisation SAR data for the monthly assessment, and applies the rule-based approach. Identified signal values are written to a binary mask GeoTIFF and pushed to BLOB storage. The path of this data is then delivered to the second-stage MQ. As the second-stage MQ is populated, worker processes scale up in response to the message events. 
The second stage ingests the first-stage binary mask of identified signals and, for each geolocated pixel, performs a temporal validation with the 1D-CNN using the previous 7 months of sequential acquisitions; given that Sentinel-1 has a revisit cadence of roughly four to five days, this equates to about 40 acquisitions. Upon completion of validation, the second stage determines whether the first-stage alert is a true or false positive. The stage-1 and stage-2 identified locations are encoded into a GeoJSON FeatureCollection and stored in the BLOB storage. Demonstration sites in Guinea-Bissau and North Kalimantan, Indonesia, were selected for prototyping and serve as the benchmark to scale analyses to the global mangrove coverage. An assessment of separability between true- and false-alert signals provided evidence for the deep learning model's potential efficacy. The model was trained on the described dataset for 200 epochs with an initial learning rate of 1e-4, a binary cross-entropy loss, and learning-rate scheduling with the Adam optimizer. The final layer employed a sigmoid activation function to output a probability, which is thresholded to a binary label. The model achieved a test accuracy of 72%. RAMM was implemented on scalable infrastructure with minimal overhead required to keep the basics operating; the deep learning model was trained using only CPUs, and all training leveraged data situated close to the compute so as to minimise network usage, all reducing the environmental impact of operating on cloud systems in datacentres. Due to the dynamic nature of workload handling, RAMM can process the two demonstration sites in 10 minutes or less, with linear growth expected as it is applied to the wider global mangrove coverage. 
RAMM has great potential to contribute to mangrove conservation as a complement to the annual GMW optical mangrove extent data and optical-based alerts system.
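The rule-based first stage described above can be sketched in a few lines. This is a minimal, illustrative version assuming per-pixel VH backscatter in dB and a previous-year median composite; the threshold values and function name are hypothetical, not the GMW-calibrated ones.

```python
import numpy as np

def first_stage_alerts(vh_db, prev_year_median_db, mangrove_mask,
                       abs_thresh_db=-18.0, diff_thresh_db=-3.0):
    """Hypothetical rule-based first stage: flag pixels inside the mangrove
    baseline whose VH backscatter is both low in absolute terms and well
    below the previous year's median (reducing seasonal/tidal variability).
    Thresholds are illustrative only."""
    diff = vh_db - prev_year_median_db
    candidate = (vh_db < abs_thresh_db) & (diff < diff_thresh_db)
    # the resulting binary mask is what would be passed to the 1D-CNN stage
    return candidate & mangrove_mask

# toy 2x3 scene: two pixels drop far below the previous-year median
vh = np.array([[-12.0, -20.0, -13.0],
               [-11.5, -12.5, -25.0]])
median = np.full_like(vh, -12.0)
mask = np.ones_like(vh, dtype=bool)
alerts = first_stage_alerts(vh, median, mask)
```

Biasing both thresholds toward over-detection matches the stated design goal of minimizing false negatives, leaving the 1D-CNN to remove false positives.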
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall F2)

Presentation: Near-Real-Time Tropical Forest Loss Monitoring with Sentinel-1 data: From Threshold-Based to Deep Learning Detection

Authors: Pierre DEFOURNY, Quentin DEFFENSE, Baptiste DELHEZ
Affiliations: UCLouvain - Earth and Life Institute
Tropical forest monitoring and mapping are central to ecosystem preservation in the 21st century. Satellite remote sensing has become an essential tool to assess forest degradation in wide and remote areas at regional scale. Nevertheless, annual assessments based on optical imagery tend to show their limits in the African tropical forests, where the semi-persistent cloud cover prevents the proper detection of short-duration and small-scale events. The 10-year-old Sentinel-1 C-band SAR dataset now provides a proper archive to meet the challenge of continuous forest monitoring, with one acquisition every 12 days at 10-m ground sampling distance in all tropical regions. Current early-warning systems have demonstrated the value of SAR data for detecting forest loss events. Typically, a loss detection is triggered at a given time step when a predetermined threshold, marking the observation as an outlier, is exceeded. These inflexible thresholds are established empirically from trends observed over previous stability periods and forest loss samples, and a detection can require a mid-term period to be confirmed. We introduce a novel detection system using temporally standardized Sentinel-1 C-band SAR data to efficiently monitor and map forest loss in the Congo Basin using machine learning instead of systematic empirical thresholds. The system processes 3 successive acquisitions (covering a period of 24 days) and assesses forest degradation every 12 days in a temporal sliding window. Training and calibration are performed in Dekese, DRC (3°30' S), and the model is applied and validated in Likati, DRC (3°29' N). The distance of 800 km separating the two study areas supports the potential of the model for generalization across the tropical moist forest of the Congo Basin without additional training. Three key steps have been implemented to leverage machine learning for SAR-based tropical forest monitoring and achieve automated, continuous deep learning detection: 1. 
Multitemporal training with the random forest (RF) algorithm: Training samples are created and weighted between classes (stable or loss), and a model is developed using multiple time periods. The trained model is then transferred to a distinct remote site, demonstrating its potential for cross-regional generalization. 2. Neural network implementation with a multi-layer perceptron (MLP): The random forest model is converted into an MLP, with calibration of layers to leverage the advantages of a neural network over traditional machine learning techniques. The performance improvement in detecting forest loss is assessed. 3. Refinement in deep learning through a convolutional neural network (CNN): A U-Net architecture is developed to eliminate the need for spatial filtering and empirical thresholds. The CNN is designed to dynamically adapt to local contexts and pixel-level variability, ensuring more robust and flexible detection of complex patterns. Ultimately, this study supports the operationalization of automated forest monitoring systems in the Congo Basin, enabling early detection of forest loss events within a 24-day sliding temporal window. Further validation across multiple sites is ongoing, but this framework holds promise for scalable, near-real-time forest monitoring in the Congo Basin.
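The temporal standardization and 24-day sliding window that feed the classifiers above can be sketched as follows. This is an illustrative interpretation, not the authors' code: the z-score normalization against a historical stability period and the function names are assumptions.

```python
import numpy as np

def standardize_series(ts, stability_slice):
    """Per-pixel temporal standardization (sketch): z-score each acquisition
    against the mean/std of a historical stability period."""
    ref = ts[stability_slice]
    mu = ref.mean(axis=0)
    sigma = ref.std(axis=0) + 1e-6   # avoid division by zero
    return (ts - mu) / sigma

def sliding_windows(z, width=3):
    """Stack every run of `width` consecutive standardized acquisitions
    (a 24-day window at the 12-day repeat) as feature vectors that a
    classifier such as an RF or MLP could consume."""
    return np.stack([z[i:i + width] for i in range(len(z) - width + 1)], axis=0)

# toy single-pixel series: stable around -10 dB, then a persistent drop
ts = np.array([-10.0, -10.2, -9.8, -10.1, -9.9, -10.0,
               -10.0, -16.0, -16.2, -15.8])
z = standardize_series(ts, slice(0, 6))
windows = sliding_windows(z, width=3)
```

Standardizing first means the classifier sees how anomalous each acquisition is relative to that pixel's own stable history, rather than a fixed global threshold.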
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall F2)

Presentation: A Novel Approach to Reconstruct Sentinel-1 Backscatter Data to Enable Early Detection of Artisanal and Small-Scale Mining (ASM) in Humid Tropical Rainforests.

Authors: Isaac Obour Mensah, Brian Barrett, Dr Conor Cahalane
Affiliations: Department of Geography, Maynooth University, School of Geographical and Earth Sciences, University of Glasgow
Artisanal and small-scale mining (ASM) poses a serious threat to tropical forests, driving widespread deforestation and contributing significantly to water pollution in the Global South. Excessive cloud cover has limited the regular monitoring of ASM activity in the tropics with optical data. Synthetic Aperture Radar (SAR) imagery presents a solution in these areas due to its ability to see through clouds. However, variations in backscatter caused by seasonality and adverse atmospheric conditions limit the early detection of ASM activity using shorter-wavelength SAR, such as Sentinel-1 (S1), especially during the active rainy seasons in the tropics. The atmospheric effects from active rainfall events often cause a darkening effect in multiple images, leading to a local decrease in backscatter. This results in data gaps in the time series, which significantly hinders the timely detection of ASM activities using Sentinel-1 C-band SAR imagery. In this paper, we propose using the Kalman filter (KF) to reconstruct S1 SAR time-series data and reduce SAR outliers in the temporal domain caused by adverse atmospheric conditions during active rainfall seasons, aiming to improve the early detection of ASM activity in humid tropical rainforests. We evaluated the performance of our method against two smoothing algorithms widely used in remote sensing applications, the Savitzky-Golay (SG) filter and locally estimated scatterplot smoothing (LOESS), using the coefficient of determination (R^2) and the mean square error (MSE). The proposed approach was tested at six existing ASM sites in two distinct geographical locations, the Western and Ashanti regions in southern Ghana. KF consistently outperformed both the SG and LOESS algorithms across all mining sites evaluated in this study, with a strong correlation (R^2 > 0.92 in the Ashanti test site and R^2 > 0.90 in the Western test site) between the predicted and raw S1 backscatter. 
Additionally, KF achieved lower errors (MSE) than the other smoothing methods. The order of performance was KF > SG > LOESS across all mining sites in both the Western and Ashanti regions. The findings suggest that KF provides the most robust description of the seasonal dynamics and is well suited to filtering S1 time-series data with prominent outliers caused by atmospheric effects during rainfall periods, offering great potential for early detection of ASM activity using S1 data in humid tropical rainforests.
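The Kalman-filter reconstruction idea can be illustrated with a minimal one-dimensional filter over a backscatter series. This is a sketch with a simple random-walk state model; the noise variances `q` and `r` are illustrative placeholders, not the tuned values from the study.

```python
import numpy as np

def kalman_smooth(y, q=0.01, r=1.0):
    """Minimal 1-D Kalman filter (random-walk state) applied to a
    backscatter time series in dB. A small gain q/r makes the filter
    damp isolated rain-induced dips while tracking slow seasonal change."""
    x, p = float(y[0]), 1.0          # initial state estimate and variance
    out = np.empty(len(y), dtype=float)
    out[0] = x
    for t in range(1, len(y)):
        p = p + q                    # predict: state variance grows by q
        k = p / (p + r)              # Kalman gain
        x = x + k * (y[t] - x)       # update toward the new observation
        p = (1.0 - k) * p
        out[t] = x
    return out

# toy series: stable -12 dB with one rain-induced darkening outlier
y = np.full(20, -12.0)
y[10] = -20.0
smoothed = kalman_smooth(y)
```

With these settings the 8 dB outlier is pulled back to within about 1 dB of the baseline, which is the behaviour the abstract attributes to KF relative to SG and LOESS on outlier-heavy series.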
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall F2)

Presentation: DETER-RT: An improved, highly customizable SAR-based deforestation detection system for the Brazilian Amazon

#pangeo

Authors: Juan Doblas, Mariane Reis, Stéphane Mermoz, Claudio A. Almeida, Thierry Koleck, Cassiano Messias, Luciana Soler, Alexandre Bouvet, Sidnei Sant'Anna
Affiliations: Globeo, INPE, CNES, IRD
Introduction The last decade has seen a rapid development of automated near-real-time deforestation detection (NRT-DD) systems over tropical forests. This progress has been driven by the growing availability of orbital images, particularly those from the Landsat and Copernicus initiatives, including both optical and SAR-based datasets. The need for effective monitoring of tropical forest disturbances has grown as these forests play a crucial role in climate regulation and biodiversity conservation. Most current NRT-DD systems operational over tropical regions rely on a fixed set of parameters, which are either part of a trained machine learning model (such as GLAD-L [1]) or a set of physics-based rules calibrated by expert knowledge applied over processed time-series (e.g., RADD [2] or TropiSCO [3]). These parameters, once determined, apply globally, governing the detection algorithm in a uniform manner. While this approach facilitates ease of implementation and has yielded vast amounts of valuable data on the state and evolution of tropical forests [4], several limitations arise when the products are applied in practice by local agents. Specific challenges include: - Lack of Flexibility: Fixed global parameters cannot adequately account for the diverse forest types and deforestation patterns across different tropical regions. - Customization Limitations: Existing systems do not allow users to adjust algorithm parameters to meet specific needs, such as reducing commission errors. This is the case, for example, for deforestation-deterrence field teams, which need near-absolute certainty before engaging in a field action on a given warning. - Access Issues: Products from most NRT-DD systems are available only as raster images, which can be difficult or impossible to download. This presents a significant challenge for local authorities or communities with limited resources. 
System Overview Here, we introduce DETER-R-TropiSCO (DETER-RT), a new SAR-based NRT-DD system resulting from collaboration between the scientific teams of the National Institute for Space Research in Brazil (INPE) and the French National Center for Space Studies (CNES), facilitated by GlobEO researchers. DETER-RT is a hybrid system that analyses Sentinel-1 data using features from two existing projects (DETER-R [5] and TropiSCO) and is developed using open-source code, leveraging the PANGEO paradigm [6] and CNES HPC computing capabilities. The collaborative nature of this project harnesses CNES’s computational resources alongside INPE’s deep understanding of regional forest dynamics and GlobEO’s operational expertise. DETER-RT takes an innovative approach to deforestation monitoring by allowing users to fine-tune detection algorithms to suit their specific needs. This system also integrates advanced knowledge-based routines absent in previous models, including: - Sensitivity maps: These maps allow users to vary detection parameters across different regions, enhancing spatial adaptability. - Proximity Sensitivity Modulation: The system modulates detection sensitivity based on the distance to previously recorded deforestation, which helps to adjust alerts more precisely. The function modeling this spatial dependence was derived from statistical analysis of the reference deforestation data, following the methodology proposed in [5]. - Morphological Post-Processing: Post-treatment routines help refine the shape and characteristics of detected anomalies to improve data quality. - Vectorized Deforestation Warnings: An extension of the system enables the export of vectorized deforestation warnings, which facilitates integration into geographic information systems (GIS) and practical use by stakeholders. Tuning and Validation The DETER-RT system underwent an extensive calibration and validation process. 
For calibration, reference data from INPE’s PRODES and Mapbiomas Alertas [7] projects were utilized. The initial calibration step involved setting up alpha maps for the different ecoregions within the Amazon biome to account for regional variation. Other key parameters, including those affecting post-processing routines, were also adjusted to align with INPE’s specific requirements. For validation, several automated and expert-guided procedures are being used to assess both omission and commission errors across multiple parameter sets and to evaluate the timeliness of the alert system compared to existing NRT-DD systems. The system has demonstrated strong performance, featuring a relatively low omission rate (~20%) and significantly reduced commission errors, an advance over the current state of the art in the region. At the time of writing, the validation tasks are mostly finished. Operationalization For operational purposes, the DETER-RT system is divided into two sub-systems: - Change Ratio Image Computation: This computationally intensive task runs daily using CNES infrastructure, processing Sentinel-1 data to generate change ratio images. The output is then uploaded to INPE’s network. - Anomaly Extraction and Warning Generation: The second subsystem operates within INPE's network, analyzing the ratio images to extract anomalies. These anomalies are vectorized and issued as deforestation warnings, allowing users to have full control over detection parameters and the ability to adjust them as needed. Currently, the system is operational across the entire Amazon Basin, having processed over 200,000 Sentinel-1 images. Beyond scientific innovation, DETER-RT serves as a showcase of successful international collaboration, demonstrating how coordinated efforts between different research institutions can help address specific, urgent environmental challenges. 
Conclusion DETER-RT stands as a significant advancement in deforestation monitoring by providing a highly adaptable SAR-based detection system capable of accommodating regional specificity. The ability to customize parameters, the use of advanced spatial analysis techniques, and the export of vectorized warnings collectively address many of the practical challenges faced by local communities and environmental authorities. The system's success showcases the power of international partnerships and innovative technology in providing real-world solutions to pressing environmental issues, contributing not only to scientific knowledge but also to more effective governance of natural resources. References [1] M. Hansen et al., "Humid Tropical Forest Disturbance Alerts Using Landsat Data," Environmental Research Letters, 2016. [2] J. Reiche et al., "Forest disturbance alerts for the Congo Basin using Sentinel-1," Environmental Research Letters, 2021. [3] S. Mermoz et al., "Continuous Detection of Forest Loss in Vietnam, Laos, and Cambodia Using Sentinel-1 Data," Remote Sensing, vol. 13, 2021. [4] WRI, Global Forest Review, 2024, update 8. Washington, DC: World Resources Institute. Available online at https://research.wri.org/gfr/global-forest-review. [5] J. Doblas et al., "DETER-R: An Operational Near-Real Time Tropical Forest Disturbance Warning System Based on Sentinel-1 Time Series Analysis," Remote Sensing, vol. 14, 2022. [6] R. Abernathey, et al. (2017): Pangeo NSF Earthcube Proposal. [7] MapBiomas, Alert Project - Validation and Refinement System for Deforestation Alerts with High-Resolution Images, accessed in 2024.
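The proximity sensitivity modulation described above can be sketched as a distance-dependent detection threshold. The exponential-decay form and every constant below are hypothetical illustrations; DETER-RT's actual function is derived statistically from reference deforestation data, per [5].

```python
import numpy as np

def modulated_threshold(dist_km, base_db=-3.0, boost_db=1.0, scale_km=2.0):
    """Illustrative proximity modulation: the change-ratio threshold (dB)
    is relaxed near previously recorded deforestation and tightens back to
    `base_db` as distance grows, via an exponential decay. All constants
    are hypothetical, not DETER-RT's calibrated values."""
    return base_db + boost_db * np.exp(-np.asarray(dist_km) / scale_km)

# near prior deforestation the threshold is easier to exceed than far away
near = modulated_threshold(0.0)    # -2.0 dB: more sensitive
far = modulated_threshold(10.0)    # close to -3.0 dB: stricter
```

A pixel would then be flagged when its change ratio falls below `modulated_threshold(d)` for its distance `d` to the nearest prior deforestation, encoding the empirical observation that new clearing tends to appear next to old clearing.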
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall F2)

Presentation: Evidence for large-scale farmer managed greening in Niger

Authors: Rene Lee, Dr. Martin Brandt, Stefan Oehmcke, Chris Reij, Dr. Florian Reiner, Fernando Maestre, Robert Buitenwerf, Compton Tucker, Prof. Dr. Rasmus Fensholt
Affiliations: University of Copenhagen, Department of Geosciences and Natural Resource Management, University of Rostock, Institute of Visual and Analytic Computing, World Resources Institute, King Abdullah University of Science and Technology (KAUST), Environmental Science and Engineering, Aarhus University, Center for Biodiversity Dynamics in a Changing World (BIOCHANGE), Department of Biology, Earth Science Division, NASA Goddard Space Flight Center
Agriculture in the Sahel is mostly practised by smallholder farmers. However, due to resource overexploitation, drought, and rising aridity, these croplands are highly susceptible to land degradation. Therefore, the resilience and adaptability of smallholder farmers amidst future climate change scenarios are of utmost importance for the health and well-being of local communities. Farmer Managed Natural Regeneration (FMNR), rooted in traditional indigenous techniques, is a farming practice that is currently recognised as the most successful, cost-effective, and easy-to-replicate approach to landscape restoration within African drylands. By deliberately protecting and managing the regrowth of tree and shrub stumps within croplands, farmers and their communities are re-greening their land. Farmers have observed increased crop yields and improved financial conditions, while the new trees contribute to the region's ecosystem services, including increased biodiversity, carbon sequestration, and enhanced water cycling. What began as a small-scale operation in Maradi, an administrative region of southwest Niger, has grown into a fully-fledged FMNR movement and has since spread to the surrounding areas, resulting in the restoration of millions of hectares of degraded land. However, as the region lacks national forest inventories, this regrowth has not been documented outside of limited field and remote sensing surveys. Previously available satellite technology and data analysis methods were insufficient to quantify the changes in trees on farmlands brought about by large-scale FMNR. Here, we use multispectral imagery, including the 15-meter resolution panchromatic band from Landsat-7 onwards, as input for a deep learning framework to map tree cover dynamics in Sahelian farmlands over the past two and a half decades. The developed method can map tree cover at a fine enough scale to observe changes in scattered farmland trees. 
We find that roughly 16% of Sahelian trees are currently growing on croplands and that 73 million km² of Sahelian agricultural land has experienced a significant increase in tree cover. Specifically, we have found a marked increase in tree cover in Maradi and the surrounding regions. Our results highlight the potential of this approach to quantify the impact of FMNR on Sahelian farmlands.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall F2)

Presentation: Enhanced Near Real-Time Forest Loss Monitoring with a Bayesian Change Detection Method and Sentinel-1 SAR Imagery: Application to the Cerrado Biome

Authors: Marta Bottani, Professor Laurent Ferro-Famil, PhD Thuy Le Toan, PhD Juan Doblas, Dr Stephane Mermoz, PhD Thierry Koleck
Affiliations: TéSA, ISAE Supaero, CNES, CESBIO, GlobEO
Forests worldwide have experienced significant transformations due to ongoing deforestation. The United Nations Food and Agriculture Organization (FAO) estimates an annual loss of approximately 10 million hectares. In Brazil, which accounts for about 12% of the world's forests, over 1.8 million hectares were deforested in 2023 alone. Alarmingly, 98% of this deforestation exhibited signs of illegality, as reported by MapBiomas Alerta in its 2023 annual report [1]. Recent trends reveal that the Cerrado, the world’s most biodiverse woodland savanna, has emerged as the biome with the highest increase in deforestation rates. Additionally, there has been a reduction in the maximum size of deforested areas, likely due to illegal activities being conducted more rapidly to evade legal consequences. These alarming patterns highlight the critical need for near real-time forest monitoring to prevent further vegetation loss and facilitate prompt interventions, particularly targeting the monitoring of savanna-like biomes. Traditionally, forest loss monitoring has relied on optical imagery, with GLAD-L [2] developed by the University of Maryland serving as a notable precursor in this field. However, the effectiveness of optical imagery is often hampered by cloud coverage, which limits the availability of usable images and potentially delays responses to illegal activities. In recent years, Synthetic Aperture Radar (SAR)-based systems have emerged, offering the advantage of all-weather operability. Notably, Sentinel-1 data has proven to be a valuable resource due to its extensive coverage, open-access policy, and high revisit frequency of six days over tropical regions, facilitated by two operational satellites. 
Several forest loss monitoring systems, including RADD [3] developed by Wageningen University in the Netherlands, TropiSCO [4] from the Centre National d'Études Spatiales (CNES) and CESBIO in France, and DETER-R [5] from the Instituto Nacional de Pesquisas Espaciais (INPE) in Brazil, are based on Sentinel-1 imagery. However, SAR-based approaches utilizing C-band data face several challenges in the context of forest loss monitoring. These challenges include the low backscatter contrast between forested and deforested areas, which can be further reduced by factors such as variations in soil moisture and the presence of residual biomass on the ground following deforestation activities. Furthermore, accurately detecting small-scale disturbances, recently reported as a growing trend in illegal deforestation practices, remains problematic for existing SAR-based systems. This difficulty arises partly due to the spatial filtering techniques used to mitigate speckle variations, which can degrade the spatial resolution of measurements, leading to omissions and sometimes over-detections in the change monitoring process. Additionally, monitoring forest loss in regions characterized by pronounced seasonality in backscatter signals, such as dry forests and savannas, presents further limitations. As a result, the Cerrado biome in Brazil, known for being an extensive carbon sink, is significantly under-monitored. At present, monitoring efforts for the Cerrado are largely dependent on optical-based methods, with the exception of a single supervised SAR-based approach, the LUCA dataset [6]. This study presents BOCD, an innovative unsupervised methodology for detecting forest loss from SAR data, employing Bayesian inference through a hidden infinite-state Markov chain. This approach conceptualizes forest loss as a change-point detection problem within a Radiometrically Terrain Corrected (RTC) Sentinel-1 single polarization (VH or VV) time series. 
A significant advantage of this method is its ability to maintain the native resolution of Sentinel-1 RTC products, eliminating the need for spatial filtering, which often compromises data quality. Each new observation is integrated into the model, contributing to the probability of deforestation by leveraging prior information alongside a robust statistical data model [7]. The method’s sequential adaptation process is designed to be resilient against variations and trends, enabling effective forest loss monitoring not only in dense forests, as demonstrated in previous studies validating our approach in regions like the Brazilian Amazon [8], but also in areas affected by seasonal fluctuations, thereby addressing limitations associated with traditional methods. Since savanna-like biomes are often characterized by a diverse composition of land cover types—including forest formations, savannas, and grasslands—we aim to enhance detection accuracy in these mixed land cover areas by leveraging the distinct sensitivities of VH and VV channels of Sentinel-1. Specifically, we extend the BOCD methodology to the dual-polarimetric case (Pol-BOCD) by modeling our data with a bivariate distribution and deriving a suitable Bayesian inference model that exploits conjugacy between the data likelihood and the prior. Pol-BOCD enables the use of specific channel characteristics, such as the sensitivity of VH to volume scattering and that of VV to double-bounce effects, thereby improving forest loss monitoring in mixed land cover regions. The BOCD and Pol-BOCD methods are evaluated for true detections, omissions, and false alarms using the MapBiomas Alerta validation dataset [9], including small-scale (<1 hectare) deforestation polygons detected in the Cerrado woodland savanna in 2020. This assessment considers varying the evaluation threshold, defined as the percentage of a detected polygon required to confirm or reject a detection based on user-defined needs. 
Additionally, we compare the performance of our methods with GLAD-L, a fully automated and readily available monitoring system for the Cerrado. No comparison was made with the LUCA dataset, despite its operational coverage of the Cerrado, because its forest loss products are not freely available and its accuracy metrics are reported at the continental level rather than by biome. Our results show significant progress in detecting small-scale disturbances, together with remarkable robustness to false alarms compared to GLAD-L. Single-polarization VH-BOCD is better suited to detecting forest loss than VV-BOCD owing to its higher backscatter contrast between intact vegetation and deforestation and its sensitivity to volume scattering. Nonetheless, VV-BOCD performs better in specific contexts where ground visibility is prominent. Ultimately, Pol-BOCD improves true positive detections over the single-polarization algorithms while maintaining resilience to false alarms, especially at high evaluation thresholds. This indicates that forest loss over the mixed land cover types typical of the Cerrado benefits from a dual-polarimetric approach that exploits the distinct sensitivities of the Sentinel-1 VH and VV channels. In all cases, the BOCD systems attain an F1-score greater than 95%, distinctly surpassing the 75.5% obtained by GLAD-L. In conclusion, our adaptive approaches significantly improve the timely detection of forest loss with low false alarm rates, demonstrating efficacy in monitoring the Cerrado biome, where seasonal variations challenge existing systems and, in most cases, limit their monitoring capabilities. Furthermore, a focused comparison highlights reduced over-detection of small-scale forest loss by the BOCD systems compared to GLAD-L, likely because the avoidance of spatial filtering preserves the native resolution of the Sentinel-1 data.
[1] RAD, "Annual Report on Deforestation in Brazil," MapBiomas Alerta, 2023. [2] M. Hansen et al., "Humid Tropical Forest Disturbance Alerts Using Landsat Data," Environmental Research Letters, 2016. [3] J. Reiche et al., "Forest disturbance alerts for the Congo Basin using Sentinel-1," Environmental Research Letters, 2021. [4] S. Mermoz et al., "Continuous Detection of Forest Loss in Vietnam, Laos, and Cambodia Using Sentinel-1 Data," Remote Sensing, vol. 13, 2021. [5] J. Doblas et al., "DETER-R: An Operational Near-Real Time Tropical Forest Disturbance Warning System Based on Sentinel-1 Time Series Analysis," Remote Sensing, vol. 14, 2022. [6] A. Mulissa et al., "LUCA: A Sentinel-1 SAR-Based Global Forest Land Use Change Alert," Remote Sensing, 2024. [7] R. P. Adams and D. J. MacKay, "Bayesian Online Changepoint Detection," arXiv:0710.3742, 2007. [8] M. Bottani et al., "A Statistical Method for Near Real-Time Deforestation Monitoring Using Time Series of Sentinel-1 Images," Proc. IGARSS 2024, IEEE Int. Geosci. Remote Sens. Symp., 2024. [9] MapBiomas, Alert Project - Validation and Refinement System for Deforestation Alerts with High-Resolution Images, accessed 2024.
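The change-point recursion underlying this approach can be sketched compactly. The following is a minimal illustration of Bayesian online change-point detection in the sense of [7], assuming a univariate Gaussian data model with known noise variance, a conjugate Gaussian prior on the segment mean, and a constant hazard rate; the actual BOCD/Pol-BOCD data models and priors differ:

```python
import numpy as np

def bocd_gaussian(x, hazard=1 / 100, mu0=0.0, var0=4.0, var_x=1.0):
    """Bayesian online change-point detection (Adams & MacKay, 2007) for a
    Gaussian data model with known noise variance var_x and a conjugate
    Gaussian prior N(mu0, var0) on the segment mean.

    Returns the run-length posterior P(r_t | x_1..t) as a (T+1, T+1) array.
    """
    T = len(x)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0                      # before any data, run length is 0
    mu, var = np.array([mu0]), np.array([var0])
    for t, xt in enumerate(x, start=1):
        # Predictive probability of x_t under each current run length
        pred_var = var + var_x
        pred = np.exp(-0.5 * (xt - mu) ** 2 / pred_var) / np.sqrt(2 * np.pi * pred_var)
        # Growth (no change) vs. change-point mass
        R[t, 1:t + 1] = R[t - 1, :t] * pred * (1 - hazard)
        R[t, 0] = (R[t - 1, :t] * pred * hazard).sum()
        R[t] /= R[t].sum()
        # Conjugate update of the per-run-length posterior over the mean
        new_var = 1.0 / (1.0 / var + 1.0 / var_x)
        new_mu = new_var * (mu / var + xt / var_x)
        mu = np.concatenate(([mu0], new_mu))
        var = np.concatenate(([var0], new_var))
    return R
```

On a backscatter-like series whose mean drops after a disturbance, the run-length posterior collapses towards zero shortly after the change; that collapse is the quantity thresholded into a deforestation probability.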
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.85/1.86)

Session: C.01.03 Innovative space technologies enabling growth for new EO science

This peer-reviewed session addresses aspects that reduce the cost of recurring spacecraft developments and support the operation of a growing number of EO satellites. These could include:
- technology aspects that lead to better system performance (e.g. pointing knowledge, higher downlink data rates);
- standardization of spacecraft avionics, with interchangeable and interoperable modules from multiple suppliers, ready to be adopted by multiple integrators;
- new challenges, such as platforms compatible with more demanding Zero Debris policies at the mission's end of life;
- more efficient process aspects, such as the digitalization of requirements and designs with modern approaches like Model-Based Systems Engineering, or the use of advanced COTS components.

All these industrial points are crucial to enabling better EO science: they leave greater margins and resources for higher-performing EO sensors, and they support industrial scalability towards affordable constellations delivering better revisit times.

Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.85/1.86)

Presentation: Efficient On-Board Processing Using a Shared AI Backbone Across Multiple Tasks

Authors: Andreas Luyts, Bart Beusen, Iskander Benhadj, Tanja Van Achteren
Affiliations: Vito nv
Modern Earth observation missions capture ever more detailed information, with increasing spatial and spectral resolution. Consequently, the amount of data gathered is growing very rapidly. The use of smaller platforms like cubesats also limits data downlink capacity, hindering the usability of the missions. Diverse solutions are being developed to tackle this problem. One option is to compress the data on board before downlinking. In the CORSA project [1], an ESA PhiLab EO Science for Society project, Beusen et al. demonstrated an AI method for near-lossless image compression using the concept of vector-quantized auto-encoders [2]. The model used was based on the hierarchical VQVAE-2 [3], with some changes in the model architecture to achieve higher reconstruction accuracies, as described in [1]. Another option is to perform the image processing and information extraction on board, avoiding unnecessarily large data transfers, i.e. "edge computing". Applied to satellite imaging, onboard image processing and data analysis offer a fundamental shift, as the derived information to be downlinked is much more compact. However, solely downlinking extracted information instead of raw data also limits the potential use cases. The success of onboard processing solutions is constrained by computational resources, including a strict power budget shared with imaging and downlink activities. Our work eliminates computational redundancy across three separate on-board tasks: image compression, geometric preprocessing to refine the absolute geolocation, and end applications. We do this by using a single shared AI backbone that provides a set of feature vectors which can be used directly for onboard compression of the raw data, for applying absolute geometric corrections to the image, and finally for performing change detection or other downstream applications on the corrected data. In our study, we use the encoder of the CORSA AI image compression model as the shared backbone.
In the CORSA project, the lossy compression algorithm was tested on 4-channel Sentinel-2 images taken from the BigEarthNet dataset [4]. As part of the MOVIQ project, this compression model was adapted to the domain of hyperspectral data [5] and optimized to run on board [5, 6]. The CORSA model consists of an encoder, a codebook mapper, and a decoder. The encoder is a convolutional neural network that generates feature vectors at multiple levels of the encoder pipeline. These feature vectors are then quantized by mapping them to a learned set of codebook vectors. The number of vectors in the codebook is limited, and each vector has a unique codebook index. The vectors can thus be replaced with their codebook indices, which can be written to file as compact bit-arrays suitable for downlinking. After downlinking, the full feature vectors can be reconstructed from the codebook indices, and the original image can be reconstructed from these vectors using the decoder part of the model. An added advantage of this approach is that downstream tasks, e.g. change detection, semantic segmentation or classification, may be performed directly on the semantically rich feature vectors, without the need to reconstruct the original image and without loss of quality [7]. In this paper, we demonstrate the ability to apply onboard geometric corrections using the feature vectors from the CORSA backbone. For on-board geometric correction, as part of the Level 1 processing chain, using Ground Control Points (GCPs) to enhance the geometric accuracy is a common approach. We will thus leverage the CORSA features, already calculated for compression, to automatically select keypoints and match them to keypoints from the reference image.
The keypoints of the reference image can be dynamically calculated on board from the compressed feature vectors of the reference image (there is no need to store the reference image itself; only its compressed representation needs to be available on board), or they can be pre-calculated and stored onboard as such. Reliable keypoints are extracted automatically by training a lightweight descriptor head on top of the CORSA features, making them more robust to seasonal changes and partial occlusions. This descriptor head is inspired by the XFeat architecture described in [8]. By applying image matching in feature space, our proposed workflow alleviates the need for a separate algorithm per task. Instead, the CORSA backbone creates a set of hierarchical feature maps that can be used directly as input for codebook mapping (compression), for the lightweight descriptor head for automatic keypoint extraction and matching, and for lightweight decoders for end applications such as change detection, object detection and/or image segmentation. References [1] B. Beusen, X. Ivashkovyc, and T. V. Achteren, "Image compression using vector-quantized auto-encoders with semantically meaningful feature extraction," 2022. [2] A. Oord, K. Kavukcuoglu, and O. Vinyals, "Neural discrete representation learning," in Advances in Neural Information Processing Systems (NIPS), Long Beach, CA, Dec. 2017. [3] A. Razavi, A. van den Oord, O. Vinyals, "Generating diverse high-fidelity images with VQ-VAE-2," in Advances in Neural Information Processing Systems, 2019, pp. 14837-14847. [4] G. Sumbul, M. Charfuelan, B. Demir, V. Markl, "BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding," IEEE International Geoscience and Remote Sensing Symposium, pp. 5901-5904, Yokohama, Japan, 2019. [5] B. Beusen, X. Ivashkovyc, A. Luyts, T. V. Achteren, "On-Board Hyperspectral Image Compression using Vector-Quantized Auto Encoders".
IGARSS International Geoscience and Remote Sensing Symposium, 2024. [6] B. Beusen, X. Ivashkovyc, S. Livens, D. Nuyts and T. V. Achteren. “Towards On-Board Image Compression using Vector-Quantized Auto Encoders”. EDHPC 2023. [7] X. Ivashkovych, L. Landuyt, T. Van Achteren. “CORSA Deep Earth Observation Semantic Compression applied to Flood Detection”, OBPDC Conference, 2022. [8] G. Potje, F Cadar., A. Araujo, R. Martins, E.R. Nascimento. “XFeat: Accelerated Features for Lightweight Image Matching”. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 2682-2691.
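The codebook-mapping stage described above can be sketched in a few lines of numpy; the flat (non-hierarchical) codebook and the array shapes are simplifying assumptions for illustration, not the CORSA architecture:

```python
import numpy as np

def quantize(features, codebook):
    """Map each (D,)-feature vector to the index of its nearest codebook
    vector (L2 distance), as in the codebook-mapping stage of a VQ-VAE."""
    # (N, D) features vs (K, D) codebook -> (N, K) squared distances
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)  # (N,) compact integer codes for downlink

def reconstruct(indices, codebook):
    """Recover (lossy) feature vectors from the downlinked indices."""
    return codebook[indices]
```

With, say, 512 codebook entries, each feature vector is downlinked as a 9-bit index instead of D floating-point values, and downstream heads can consume the reconstructed vectors, or the indices directly, without decoding back to an image.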
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.85/1.86)

Presentation: Innovating Earth Observation: The Impact of HAWK Micro-Satellites within the IRIDE Constellation

Authors: Gianmarco Reverberi, Gabriel Gutierrez, Dario Riccobono, Federico Miglioretti, Valerio Di Tana
Affiliations: Argotec
Earth observation technology has progressed from traditional large-scale satellites to advanced constellations offering extensive, frequent coverage. Earlier missions like Landsat and SPOT established the groundwork for monitoring Earth from space. The trend has shifted towards deploying multiple satellites that work in unison, which enhances global coverage and decreases the revisit time. This paper focuses on the development of a micro-satellite-based constellation under the IRIDE Program, a prominent European Earth Observation initiative funded by the European Union – NextGenerationEU program. IRIDE is a comprehensive system that includes a series of low Earth orbit satellite sub-constellations (Upstream Segment), ground operational infrastructure (Downstream Segment), and services for the Italian Public Administration (Service Segment). Thanks to its variety of sensing tools and technologies, the IRIDE constellation will be unique of its kind. It includes microwave imaging technologies (via Synthetic Aperture Radar, SAR) and optical imaging at various spatial resolutions (from high to medium) and in different frequency ranges, including panchromatic, multispectral, hyperspectral, and infrared bands. One of the players in the IRIDE constellation is Argotec's HAWK micro-satellite platform, which promotes ease of manufacturing, leverages the scalability of its avionics, and strengthens the existing industrial base. The project started in 2022 with an agile development approach, aiming to minimize costs and achieve launch readiness within two years. The constellation will consist of 40 satellites in a sun-synchronous orbit at altitudes of 500-600 km, organized into four orbital planes to maximize coverage of the Italian territory and potentially extend globally.
Equipped with a multispectral imaging system, the satellites will capture high-resolution imagery in the visible to near-infrared spectrum, with details as fine as 2.6 meters per pixel. The integration of an advanced on-board processing system and AI algorithms in the standard configuration used for all satellites will provide edge-computing capabilities, including on-the-fly data analysis. This enhances applications like disaster response and environmental monitoring by allowing satellites to manage data relevance efficiently and maximize the utilization of downlink throughput. The paper will go through the design and development of this constellation, including the challenges of setting up a series assembly and test line for satellites, matching the conventional manufacturing of high-quality, space-ready systems with the paradigm of in-series production.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.85/1.86)

Presentation: The Effect of Lift on VLEO Satellites: Can it be utilised and should it be controlled

Authors: Kyle Palmer
Affiliations: ESA
In Low Earth orbit (LEO), atmospheric drag is a significant factor affecting satellite delta-V budgets and orbital perturbations. However, as satellite altitudes decrease into the very low Earth orbit (VLEO) regime, the lift force becomes increasingly important. According to free molecular flow theory, the interaction of atmospheric particles with a satellite's surface generates two primary forces: a drag force parallel to the flow direction and a lift force perpendicular to it. The magnitude of these forces depends on the atmospheric density and the corresponding drag and lift coefficients. At lower altitudes, the increased atmospheric density leads to a significant amplification of the lift force. This phenomenon is further exacerbated by the adoption of long, thin satellite geometries, which are typically designed to minimize drag. Such configurations inadvertently increase the surface area susceptible to lift forces, particularly when the satellite operates at a non-zero angle of attack, possesses complex geometrical features on its velocity-facing side, or lacks fixed attitude control. These factors can result in substantial orbital perturbations that are often overlooked in standard satellite design methodologies. This presentation investigates the impact of lift-induced orbital perturbations in VLEO for various satellite geometries, focusing on the influence of length-to-width/height ratios and other design parameters. It evaluates scenarios where lift forces become critical and examines their dependence on altitude, satellite geometry, and operational mission profiles. It will also suggest if and how lift forces could be utilised for different applications and operations. Furthermore, the study will propose the development of standardised lift coefficient values and design guidelines, analogous to those applied to the drag force in early-phase satellite feasibility studies, such as a standard drag coefficient of 3.
These coefficients could serve as benchmarks for VLEO satellite design, ensuring more robust predictions of orbital behavior and efficient mission planning.
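The scaling argument above follows the standard aerodynamic force relation; the density, areas, and coefficients below are assumed order-of-magnitude values for a slender VLEO spacecraft, chosen for illustration and not taken from the presentation:

```python
def aero_force(rho, v, area, coeff):
    """Aerodynamic force magnitude F = 0.5 * rho * v^2 * A * C, in newtons."""
    return 0.5 * rho * v * v * area * coeff

# Illustrative case: slender satellite in VLEO (all values assumed).
rho = 1e-11        # kg/m^3, order-of-magnitude atmospheric density
v = 7.75e3         # m/s, orbital velocity
drag = aero_force(rho, v, area=0.3, coeff=3.0)   # small frontal area, Cd ~ 3
lift = aero_force(rho, v, area=3.0, coeff=0.1)   # large side area, small Cl
# Even with a small lift coefficient, the large side area of a long, thin
# body makes the lift force a sizeable fraction of the drag force.
```

With these numbers the lift force is roughly a third of the drag force, illustrating why a standardized lift coefficient, analogous to the standard drag coefficient of 3, would matter in early-phase VLEO feasibility studies.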
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.85/1.86)

Presentation: Security Risks for AI Applications in Space

Authors: Evridiki Ntagiou, Piotr Wilczyński, Dawid Płudowski, Agata Kaczmarek, Ramez Shendy, Krzysztof Kotowski, Jakub Nalepa, Paweł Skorupa, Artur Janicki, Przemysław Biecek, Aniello Fiengo
Affiliations: ESA, MI2.AI, Warsaw University of Technology, KPLabs
AI systems, by their very nature, bring a whole new set of challenges in ensuring their security and robustness against random or adversarial perturbations. What are the most important risks, what are they based on, and how can they be mitigated? A number of AI systems are being developed for space applications for Earth Observation and Earth Science. While these systems demonstrate promising performance, they face substantial barriers to operational deployment. A primary concern is the lack of robust security measures, which leads to low trust among users and stakeholders. In our work, we aim to address these challenges by improving the security of AI-driven solutions in the space domain. The first phase of our study involves creating a comprehensive catalogue of security risks associated with different use cases in the space domain. These use cases include, but are not limited to, those identified in the AI for Automation Roadmap created by the European Space Operations Centre (ESOC) in collaboration with industry partners. In the document we provide guidelines on how to mitigate security issues related to AI systems used in this domain. In the security catalogue, we provide the highlights of several critical aspects of AI vulnerabilities, including attack vectors, potential threat agents, feasibility of attacks, impact severity, and examples of common risk categories. This catalogue is mapped onto related documents issued by the National Institute of Standards and Technology (NIST), Snowflake, and the Open Worldwide Application Security Project (OWASP). The second part of our study shifts focus to practical applications, where we demonstrate the proposed framework and mitigation strategies in real-life scenarios from ESOC’s operations. One highlighted scenario examines the risk of indirect prompt injection attacks on large language models (LLMs). This type of security threat involves embedding malicious hidden instructions within resources that an LLM processes. 
In our example, we use a malicious PDF document containing concealed instructions as the attack vector. This scenario illustrates how attackers could exploit AI systems to manipulate outputs or compromise data integrity. By demonstrating this threat, we not only raise awareness but also propose specific mitigation strategies to counteract such vulnerabilities. As a practical extension of our research, we organize a hackathon with a task on distinguishing real space-related text from text generated by LLMs. Afterwards, we provide lessons learned from the event. Higher accuracy in differentiating real from generated content results in increased security of AI-driven solutions. Our work highlights the critical need to address security concerns in AI-driven solutions for space applications. By cataloguing security risks, mapping them to established standards, and demonstrating real-world threats and mitigation strategies, we provide a comprehensive approach to enhancing AI security. These initiatives are essential steps toward overcoming the barriers to deploying AI systems in operational space settings, ensuring they are both effective and secure. As AI continues to play an integral role in advancing Earth Observation and Earth Science, addressing security concerns will remain a cornerstone of fostering stakeholder trust and system reliability.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.85/1.86)

Presentation: Astrobus Neo as a key tool for low-risk development of Earth Observation & Science missions

Authors: Miguel Ángel Palacios Lázaro
Affiliations: Airbus Defence And Space
The Astrobus Neo satellite platform from Airbus has been developed jointly with the European Space Agency in the frame of the Copernicus program, as the basis for the CRISTAL and LSTM missions. Having passed the system CDR in both cases, it can be regarded today as a mature product for Earth observation missions, offering a high degree of technology readiness at both equipment and functional level. This maturity allows future missions to focus on payload technologies and end-to-end mission performance aspects rather than on platform development, reducing risk, enabling payload-first approaches and accelerating time-to-market. The modularity of the platform and the digital toolchain employed enable adaptation to multiple missions while ensuring coherence of system specifications, structured requirements flow-down to subsystems and equipment, and a comprehensive, proven yet configurable verification and validation approach. This modularity also offers industrial benefits, such as double sourcing and flexibility for georeturn, enabling a more efficient and cost-effective production process. Furthermore, the implementation of Astrobus Neo within future programs relies on collaboration with ESA through the common acceptance of a product approach, which involves agreeing on product-line review processes and the contractual documentation applicable to suppliers. In addition, product-line verification and validation processes allow maximum exploitation of heritage, early close-out of system requirements verification, and thorough validation of new features and equipment in a cost-optimised, hybrid flatsat configuration. These ways of working ensure a consistent, high-quality product that meets the needs of the most demanding Earth observation applications, for both operational and scientific (e.g. Earth Explorer) programs.
In conclusion, Astrobus Neo is a recurring platform that offers a high degree of technological maturity, modularity, and industrial benefit, making it an ideal solution for Earth observation missions. The collaboration with ESA further ensures a consistent, high-quality product that meets the needs of the scientific community.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Room 1.85/1.86)

Presentation: A Data Relay Constellation to decrease latencies in Earth Observation

Authors: Quentin Van Overmeere, Hugues Van Peteghem, Pierre Joachim
Affiliations: Aerospacelab
The current paradigm in Earth Observation relies heavily on ground stations for satellite tasking and data downlink. However, the placement of ground stations is generally constrained by geopolitical factors, limiting their global distribution. As a result, even extensive ground station networks allow satellites in Low-Earth Orbit (LEO) to schedule passes for only 10% to 40% of the time. This limitation leads to latencies in data downlinking that can extend to tens of hours, which is suboptimal for time-sensitive applications. With the increasing number of sensors in space and reduced intervals between satellite revisits, there is a pressing need to decrease time-to-tasking and time-to-downlink, especially for critical use cases such as disaster response. In these scenarios, delays of even hours or minutes can result in significant environmental damage or loss of life. Satellite relays in space offer the potential to substantially increase the availability of communication links and reduce both time-to-tasking and time-to-downlink. Despite the proliferation of Earth Observation satellites developed with the "New Space" mindset—which emphasizes rapid development, cost-effectiveness, and increased accessibility—there is a noticeable gap in satellite relay services that cater to this emerging market segment. Existing relay systems often depend on proprietary, bulky, or expensive terminals, making them unsuitable for widespread adoption among "New Space" operators. Aerospacelab, a satellite and platform provider, has recently initiated an ESA-funded project aimed at developing a satellite platform specifically tailored for a Data Relay Constellation serving Earth Observation satellites. In this presentation, we will discuss how we have designed both the constellation and the satellites to achieve cost-effectiveness while enhancing the availability of communication links to nearly 100%. 
We will demonstrate how our architectural approach enables communication latencies well under 10 minutes for the majority of satellites in LEO. This advancement is expected to significantly improve real-time data access, accelerate decision-making processes, and enhance operational efficiency for end-users in critical sectors.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall F1)

Session: A.01.09 Atmospheric 3D Winds for Weather and Climate

The three-dimensional (3D) distribution of horizontal atmospheric wind vectors (referred to as 3D winds) is an integral part of the Earth system. 3D winds interact with clouds, convection, and precipitation. They transport heat, moisture, momentum, aerosols, and trace gases. They also drive ocean circulations and affect sensible and latent heat fluxes between the atmosphere and the underlying land or ocean surfaces. Their accurate observation is critical for a variety of applications, such as weather and climate forecasting, aerosol and pollution transport, disaster weather monitoring, wind energy generation, aircraft flight planning, and wildfire and volcanic plume movement. They are also crucial to reconcile the spaceborne measurements of greenhouse gas concentrations versus survey-based regional reports of emissions.

3D wind measurements remain a challenge. Conventional observations of horizontal winds, such as from rawinsondes, wind profilers, and airborne dropsondes, have sparse coverage. Wind scatterometers provide accurate vector winds, but only near the ocean surface. Feature matching of satellite cloud-top or water-vapor fields has been used for decades to retrieve atmospheric motion vectors (AMVs), but this approach is mostly limited to a single, uncertain pressure level at a given time. Satellite wind lidar measurements, as pioneered by ESA's Aeolus mission, are expected to provide more accurate data and capture the line-of-sight wind for clear skies, within cirrus clouds, and above thick clouds, but only along a curtain parallel to the satellite track. For these reasons, 3D winds are one of the seven observables for the NASA Earth System Explorer competition.

In this session, we invite innovative contributions on 3D winds and related measurements, e.g. vertical velocity of cloud top, vertical motions in clouds (e.g., via ESA EarthCARE), convective vertical mass flux (e.g., via NASA INCUS), and ocean surface winds. These contributions could cover retrieval algorithms, mission architecture, fusion of measurements (e.g., via machine learning), and impacts on science and applications.

Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall F1)

Presentation: An Evaluation of Offshore Wind Information Based on the CMOD5.N Wind Product from Sentinel-1

Authors: Stian Normann Anfinsen, Daniel Johansen Trosten, Dr Heidi Hindberg, Anna Fitch
Affiliations: Norce Norwegian Research Centre
This paper evaluates information derived from Sentinel-1 C-SAR sensor data for the purpose of offshore wind resource assessment. The standard CMOD5.N wind retrieval algorithm is used to produce 10 m wind products (Hersbach, 2008; Mouche & Vincent, 2019). These are extrapolated to different heights in the planetary boundary layer using a logarithmic or power-law wind profile and Monin-Obukhov similarity theory, whose atmospheric-stability parameters are computed with fields from the ERA5 reanalysis (Hersbach et al., 2020) as input. The resulting wind speed products are compared to in-situ lidar data from a prospective wind farm site over a range of heights from 10 to 300 metres above sea level. Corresponding wind speeds interpolated from ERA5 reanalysis data are compared to the same reference data. Validation results show that wind speeds derived from SAR have lower root mean square (RMS) error, mean absolute error (MAE) and bias than wind speeds from the ERA5 reanalysis up to heights of about 250 metres. Above this height, the ERA5 wind speeds are on par with the SAR-derived wind speeds or better. A further analysis shows that the wind speeds derived from SAR data perform similarly under different atmospheric stability regimes: there is no significant variation in error measures or bias with Obukhov length, which we compute for each SAR acquisition time from concurrent ERA5 data. No clear seasonality or dependence on wind speed level is observed in any of the performance measures. The observed accuracy and precision are apt to instill trust in the SAR-based wind speed estimates. However, the required input to financial analyses of wind farm prospects is wind speed histograms, which together with the power curves of selected turbines can be used to predict the output wind power and financial yield.
In this respect, a disadvantage of SAR wind as an information source is the low temporal resolution of the SAR acquisitions and the modest number of time samples that can be harvested for estimating the wind speed distribution. We therefore compute the distance between the empirical distributions obtained with wind speeds from SAR, ERA5, and in-situ data, respectively. We also present new machine learning algorithms that allow us to predict wind speed histograms from ERA5 reanalysis wind and SAR wind. Our method is to establish the optimal transport matrices that define the transformation between: 1) in-situ wind speed data and ERA5 reanalysis wind speed data during the lidar campaign; and 2) ERA5 reanalysis wind speed data and SAR wind speed data over the entire window of Sentinel-1 acquisitions. These transformations capture the biases between the different data sources, as well as deviations due to their different temporal and spatial scales (aggregation levels). As an alternative, we learn the same transformations using a deep multilayer perceptron neural network. We demonstrate that both approaches provide improved wind speed histogram predictions with ERA5 reanalysis and SAR wind speeds as input, and can be used to extend the data harvest beyond the lifetime of the in-situ campaign. Finally, we estimate the reduction in output wind power uncertainty provided by SAR wind data. References: Hersbach, H. (2008). "CMOD5.N: A C-band geophysical model function for equivalent neutral wind," Technical Memorandum no. 554, European Centre for Medium-Range Weather Forecasts, Reading, U.K. Mouche, A. & Vincent, P. (2019). "Sentinel-1 ocean wind fields (OWI) algorithm definition," issue 2.0, ref. CLS-DAR-NT-10-167, Collecte Localisation Satellites (CLS), Ramonville-Saint-Agne, France. Hersbach, H., Bell, B., Berrisford, P., et al. (2020). "The ERA5 global reanalysis," Quarterly Journal of the Royal Meteorological Society, 146(730), 1999-2049.
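The profile-extrapolation step can be sketched for the neutral-stability case; the roughness length and shear exponent below are typical assumed offshore values, and the Monin-Obukhov stability corrections used in the study are omitted for brevity:

```python
import math

def log_law(u_ref, z_ref, z, z0=2e-4):
    """Neutral logarithmic profile: u(z) = u_ref * ln(z/z0) / ln(z_ref/z0).
    z0 ~ 2e-4 m is a typical open-sea roughness length (assumed value)."""
    return u_ref * math.log(z / z0) / math.log(z_ref / z0)

def power_law(u_ref, z_ref, z, alpha=0.11):
    """Power-law profile: u(z) = u_ref * (z/z_ref)**alpha, with alpha ~ 0.11
    a common offshore value under near-neutral conditions (assumed)."""
    return u_ref * (z / z_ref) ** alpha
```

For example, a 10 m wind of 8 m/s extrapolates to roughly 9.7 m/s (log law) or 10.3 m/s (power law) at 100 m, which is the kind of hub-height estimate compared against the lidar profiles.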

Friday 27 June 14:30 - 16:00 (Hall F1)

Presentation: Evaluating SAR wind and stability inversions using scanning wind LiDAR measurements

Authors: William Bruch, Dr Louise Nuijens, Dr Paco Lopez-Dekker, Dr Bertrand Chapron, Dr Alexis Mouche
Affiliations: Laboratoire d’Océanographie Physique et Spatiale (LOPS), Univ. Brest, CNRS, IRD, Ifremer, IUEM, Brest, France, Geoscience and Remote Sensing, Delft University of Technology, Delft, the Netherlands
Much of our current understanding of weather and climate processes results from mesoscale satellite observations. However, modern-day observational and modeling capabilities have shed light on the existence and significance of smaller-scale features to air-sea interaction. At submesoscales, which broadly span 100 m–200 km, air-sea fluxes are especially uncertain, as air-sea equilibrium is rarely achieved, and assumptions key to classical theory may be violated. Today, the study of air-sea processes at submesoscales is a key challenge identified by the community (e.g. Nuijens et al., 2024). High-resolution satellite observations, as collected by the Sentinel-1A C-band Synthetic Aperture Radar (SAR), prove useful for the study of submesoscale processes, and are increasingly employed to infer wind and atmospheric stability in the lower Marine Atmospheric Boundary Layer (e.g. Stopa et al. 2022, O’Driscoll et al., 2023). However, inversion methods rely heavily on classical theory, and their validation is limited by the scarcity of simultaneous field measurements on both sides of the air-sea interface. To help evaluate and maximize the potential of Sentinel-1A SAR inversions for the study of the lower Marine Atmospheric Boundary Layer, we employ unique scanning Doppler wind LiDAR measurements acquired over six weeks during the Summer-2024 BOW-TIE ORCHESTRA research cruise in the Inter-Tropical Convergence Zone, where classical Monin-Obukhov (1954) theory may typically be challenged. The LiDAR measurements allow reconstruction of the average horizontal wind speed profile 1–120 m above the sea surface over a 10 km range, and in a wide range of wind, wave and atmospheric stability conditions. Intercomparison is especially facilitated by the O(100 m) horizontal resolution of both SAR and LiDAR data. The near-surface horizontal wind speed as well as the Richardson stability criterion are computed from the LiDAR measurements, and compared with local satellite inversions.
Analysis of this dataset allows us to identify physical processes behind SAR inversion accuracy, such as atmospheric stability and sea state characteristics. This research offers a novel methodology for the field measurement and remote sensing of air-sea submesoscale processes, and paves the way towards alternative solutions for satellite inversion. References: - Nuijens, L., Wenegrat, J., Lopez Dekker, P., Pasquero, C., O'Neill, L.W., Ardhuin, F., Ayet, A., Bechtold, P., Bruch, W., Laurindo, L.C. and Chen, X., 2024, November. The Air-Sea Interaction (ASI) submesoscale: physics and impact. In Lorentz Workshop. - Stopa, J.E., Wang, C., Vandemark, D., Foster, R., Mouche, A. and Chapron, B., 2022. Automated global classification of surface layer stratification using high‐resolution sea surface roughness measurements by satellite synthetic aperture radar. Geophysical Research Letters, 49(12), e2022GL098686. - O’Driscoll, O., Mouche, A., Chapron, B., Kleinherenbrink, M. and López‐Dekker, P., 2023. Obukhov length estimation from spaceborne radars. Geophysical Research Letters, 50(15), e2023GL104228. - Monin, A.S. and Obukhov, A.M., 1954. Basic laws of turbulent mixing in the surface layer of the atmosphere. Contrib. Geophys. Inst. Acad. Sci. USSR, 24(151), 163-187.
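The Richardson stability criterion mentioned in the abstract can be illustrated with a bulk estimate between two measurement levels. This is a hedged sketch, not the cruise processing: the two-level bulk form and the example values are assumptions for illustration only.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def bulk_richardson(theta_low, theta_high, u_low, u_high, z_low, z_high):
    """Bulk Richardson number between two levels: buoyancy over shear.
    Positive values indicate stable stratification, negative values
    convective conditions. (Assumes a non-zero wind shear du.)"""
    dz = z_high - z_low
    dtheta = theta_high - theta_low        # potential temperature difference, K
    du = u_high - u_low                    # wind speed difference, m/s
    theta_mean = 0.5 * (theta_low + theta_high)
    return (G / theta_mean) * dtheta * dz / du**2

# illustrative stable case: warming with height, moderate shear, 10-120 m
ri = bulk_richardson(300.0, 301.0, 5.0, 8.0, 10.0, 120.0)
```

Flipping the sign of the temperature difference (cooling with height) yields a negative, convectively unstable Richardson number.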

Friday 27 June 14:30 - 16:00 (Hall F1)

Presentation: Vientos—A New Satellite Mission Concept for 3D Wind Measurements by Combining Passive Water Vapor Sounders with Doppler Wind Lidar

Authors: Xubin Zeng, Svetla Hristova-Veleva, Derek Posselt, Sara Tucker, Oliver Reitebuch, Ad Stoffelen, Amir Ouyed, Longtao Wu, Igor Yanovsky
Affiliations: University Of Arizona, NASA/Caltech Jet Propulsion Laboratory, BAE Systems, Inc., DLR Institute of Atmospheric Physics, Royal Netherlands Meteorological Institute (KNMI)
It is challenging to accurately characterize the three-dimensional distribution of horizontal wind vectors (known as 3D winds). Conventional observations of horizontal winds, such as from rawinsondes, wind profilers, and airborne dropsondes, have sparse coverage in both space and time, especially over oceans. An international virtual constellation of wind scatterometers provides accurate vector winds, but only near the ocean surface. Feature matching of satellite cloud-top or water vapor fields has been used for decades to retrieve atmospheric motion vectors, but this approach is mostly limited to a single and uncertain pressure level at a given time. Satellite wind lidar measurements are expected to provide more accurate data and capture the line-of-sight wind for clear skies, within cirrus clouds, and above thick clouds, but only along a curtain parallel to the satellite track. This was realized for the first time by ESA’s Aeolus satellite mission, which measured only a single line-of-sight component of the horizontal wind (close to the east–west direction), yet had great impact, underlining the importance of providing more complete 3D winds. Here we propose Vientos—a new satellite mission concept that combines two or more passive water vapor sounders with a Doppler wind lidar to measure 3D winds (Zeng et al. 2024). The compelling need for 3D wind observations is highlighted by inconsistencies in reanalysis estimates. For instance, when convection occurs with precipitation rate > 0.1 mm/hour, the wind vector differences between reanalyses are greater than 5 m/s for 30%–50% of the time over many regions, such as the eastern Pacific, Indian Ocean, Atlantic, and some mountain areas at different pressure levels.
With the help of advanced optical flow algorithms, our recent study demonstrated the feasibility of retrieving 3D winds from 60°N to 60°S using time-differenced water vapor observations from the hyperspectral Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) aboard two operational polar-orbiting satellites (NOAA-20 and Suomi NPP). These two satellites have overlapping tracks separated by 50 min. Furthermore, our study demonstrated that these winds can be improved through the incorporation of a small number of collocated higher-accuracy measurements via machine learning. The Vientos mission architecture could include two SmallSats with MW and IR sounders in formation with the expected ESA/EUMETSAT Aeolus-2 mission, with the optical flow algorithm used in our prior study for the 3D wind retrieval from the two sounders, the Aeolus-2 wind retrieval from the revised algorithm of Aeolus, and the machine learning algorithm used in our prior studies for the combination of these retrievals to produce the improved 3D winds. The Vientos concept would enable simultaneous measurements of 3D winds, temperature, and humidity, and is expected to have a significant impact on scientific research, weather prediction, and other applications. For example, it can help better understand and predict the dynamic and thermodynamic preconditions for the occurrence of organized convection. Zeng, X., H. Su, S. Hristova-Veleva, D. J. Posselt, R. Atlas, S. T. Brown, R. D. Dixon, E. Fetzer, T. J. Galarneau Jr., M. Hardesty, J. H. Jiang, P. P. Kangaslahti, A. Ouyed, T. S. Pagano, O. Reitebuch, R. Roca, A. Stoffelen, S. Tucker, A. Wilson, L. Wu, and I. Yanovsky, 2024: Vientos—A New Satellite Mission Concept for 3D Wind Measurements by Combining Passive Water Vapor Sounders with Doppler Wind Lidar. Bull. Amer. Meteor. Soc., 105, E357–E369, https://doi.org/10.1175/BAMS-D-22-0283.1
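The idea of retrieving motion from time-differenced water vapor fields can be illustrated with a single global Lucas-Kanade optical-flow step. This toy sketch, far simpler than the advanced optical flow algorithms used in the study, recovers one displacement vector for a synthetic scene; the Gaussian "water vapor" field and the imposed shift are invented for the example.

```python
import numpy as np

def global_flow(frame0, frame1, dt=1.0, dx=1.0):
    """Estimate a single displacement between two scalar fields by
    least-squares on the linearised brightness-constancy equation
    Ix*u + Iy*v + It = 0 (a global Lucas-Kanade step)."""
    Iy, Ix = np.gradient(0.5 * (frame0 + frame1), dx)   # axis 0 is y
    It = (frame1 - frame0) / dt
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v  # motion components in grid units per time unit

# synthetic 'water vapor' blob advected by a known wind of (1.0, 0.5)
y, x = np.mgrid[0:128, 0:128]
f0 = np.exp(-((x - 60.0) ** 2 + (y - 64.0) ** 2) / 200.0)
f1 = np.exp(-((x - 61.0) ** 2 + (y - 64.5) ** 2) / 200.0)
u, v = global_flow(f0, f1)
```

Dividing the recovered pixel displacement by the 50 min overlap interval and multiplying by the grid spacing would convert it to a wind speed; operational retrievals do this locally per window rather than globally.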

Friday 27 June 14:30 - 16:00 (Hall F1)

Presentation: SWEEP: the end-to-end simulator for Earth Explorer 11 candidate mission WIVERN

Authors: Marcel Kleinherenbrink, Robert Kedzieraski, Frederic Tridon, Antonio Parodi, Ali Rizik, Maciej Rostocki, Marcin Krupinski, Marcin Krowiak, Jakub Wiechnik, Joanna Bella-Kalisz, Valere Mazeau, Dr Ishuwa Sikaneta, Jean-Christophe Angevain, Simone Flavio Rafano Carna, Katia Nagamine Urata, Valerie Dutto, Maryam Pourshamsi, Michel Toussaint
Affiliations: ESA, GMV, PoliTo, CIMA
As part of the consolidation phase of ESA Earth Explorer missions, end-to-end (E2E) simulators are developed. The primary objective of the E2E simulators is to demonstrate scientific readiness level (SRL) 5 at the end of phase A. The E2E will support performance and sensitivity analyses at level-1 and level-2 and will act as a tool to help improve and implement retrieval algorithms for geophysical products. Earth Explorer 11 candidate WIVERN flies a conically scanning atmospheric Doppler radar instrument operating at W-band (94 GHz) with a radiometer mode based on noise measurements from the same receiver chain. WIVERN’s main geophysical product, horizontal line-of-sight atmospheric wind, is derived from Doppler velocities computed from polarization diversity pulse pair (PDPP) processing. WIVERN’s measurements also allow retrieval of other atmospheric products (liquid water path, ice water content, snow rate and rain rate) as well as surface properties (sea-ice properties and ocean dynamics). This paper reports on the implementation and application of WIVERN’s E2E simulator, SWEEP. SWEEP mathematically mimics the system for WIVERN’s proposed mission concepts, its data processing and the associated errors. Using synthetic and realistic input scenes from (atmospheric) models, SWEEP will help to verify whether the concepts meet (uncertainty) requirements at different processing levels. Additionally, SWEEP will help to demonstrate the capabilities of WIVERN for a set of reference scenarios, including extreme weather events.
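The Doppler velocity computation at the heart of PDPP processing can be illustrated with the classical pulse-pair estimator: the phase of the lag correlation between two echo series gives the line-of-sight velocity. This is a hedged sketch; the pulse separation, the noise-free single-target scene and the synthetic echoes are invented for illustration, and real PDPP processing additionally uses H/V polarised pulse pairs and must handle cross-polarisation interference.

```python
import numpy as np

C = 3.0e8                       # speed of light, m/s
F_RADAR = 94e9                  # WIVERN W-band frequency, Hz
WAVELENGTH = C / F_RADAR        # ~3.2 mm

def pulse_pair_velocity(s0, s1, tau):
    """Classical pulse-pair estimator: line-of-sight velocity from the
    phase of the lag-tau correlation of two complex echo series."""
    r = np.mean(s1 * np.conj(s0))
    return WAVELENGTH * np.angle(r) / (4.0 * np.pi * tau)

# synthetic echoes from scatterers moving at 12 m/s along the line of sight
tau = 20e-6                     # illustrative pulse separation, s
v_true = 12.0
rng = np.random.default_rng(4)
s0 = np.exp(1j * rng.uniform(0, 2 * np.pi, 256))      # random scatterer phases
s1 = s0 * np.exp(1j * 4 * np.pi * v_true * tau / WAVELENGTH)
v_est = pulse_pair_velocity(s0, s1, tau)
```

The short pair separation tau sets the unambiguous (Nyquist) velocity lambda/(4*tau); shortening tau via polarization diversity is precisely what widens WIVERN's velocity range.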

Friday 27 June 14:30 - 16:00 (Hall F1)

Presentation: Evaluating the impact of WIVERN in-cloud wind profiles on global Numerical Weather Prediction model forecasts using an OSSE framework

Authors: Mary Borderies, Nicolas Sasso, Philippe Chambon, Alessandro Battaglia, Anthony Illingworth, Michael Rennie, Maryam Pourshamsi
Affiliations: CNRM UMR 3589, Météo-France/CNRS, Dipartimento di Ingegneria dell’Ambiente, del Territorio, Politecnico di Torino, Department of Meteorology, University of Reading, ECMWF, ESA-ESTEC
As part of the European Space Agency’s Future EO programme, the WIVERN satellite mission concept (Illingworth et al. 2018) has been selected as one of two candidates for the Earth Explorer 11 mission, moving into Phase A in October 2023. It is now under study, with the decision on which mission will be recommended for implementation expected in July 2025. WIVERN will be the first spaceborne, conically scanning, dual-polarisation Doppler W-band radar to provide Horizontal Line-Of-Sight (HLoS) wind profiles collocated with 94 GHz brightness temperature and reflectivity observations. This unique coincident dataset of the dynamical and mass fields within clouds will be collected at an unprecedented horizontal resolution of 1 km, sampling a large 800 km-wide conical swath. Recent studies have already demonstrated the positive impact brought by the assimilation of WIVERN HLoS wind profiles on global Numerical Weather Prediction (NWP) forecasts using an Ensemble of Data Assimilations method (Sasso et al., in revision). In this study, the impact of the assimilation of WIVERN HLoS wind profiles is further investigated using the complementary Observing System Simulation Experiment (OSSE) approach. Here, we use the OSSE framework previously developed at Météo-France to evaluate the impact of the EPS-Sterna constellation (Rivoire et al. 2024) in the global NWP model of Météo-France, ARPEGE, with its state-of-the-art 4D-Var data assimilation system. In this presentation, we will first present the methodology. Next, the impact of WIVERN HLoS wind observations will be assessed through the computation of different metrics, such as forecast error reduction and an FSOI-like energy norm. The capability of WIVERN to improve rainfall forecasts will also be shown. Additionally, a sensitivity study will be presented to examine how the prescribed observation error affects the impact of WIVERN HLoS wind observations.
Finally, this study will demonstrate WIVERN’s impact on improving the prediction of extreme events, such as tropical cyclones.

Friday 27 June 14:30 - 16:00 (Hall F1)

Presentation: Estimation of vertical wind speed from INCUS radar reflectivity measurements in convective updrafts and of storm-wide vertical air mass flux from INCUS radar and radiometer measurements

Authors: Ziad Haddad, Prof. Sue van den Heever, Dr. Sai Prasanth, Dr. Svetla Hristova-Veleva, Daneisha Blair, Dr. Peter Marinescu, Dr. Jennie Bukowski, Prof. Sean Freeman, Dr. Derek Posselt
Affiliations: Jet Propulsion Laboratory, California Institute of Technology, Colorado State University, Florida State University, University of Alabama
NASA’s INvestigation of Convective UpdraftS (INCUS) mission will place a convoy of three miniaturized Ka-band radars in low-Earth orbit, to measure radar reflectivity profiles in seven overlapping beams across their ~10 km swath, at time separations of 30, 90 and 120 seconds. The measured reflectivities and their change over each time interval will enable the detection of convective updrafts (CUs) and the retrieval of vertical profiles of the vertical wind W within detected CUs, averaged over the respective interval. While the radar swath is quite narrow, the significance of the resulting small sample size (at most a handful of CUs observed over even the largest storm to be overflown) will be greatly enhanced by the presence of a Tempest-like cross-track-scanning mm-wave radiometer gathering data over a very wide swath commensurate with most storms. Even a single radar-derived profile W0 of vertical wind speed will have storm-wide significance, thanks to a new approach which uses the radar-derived Ice Water Path (IWP) coincident with W to estimate, from the storm-wide distribution of IWP derived from the radiometer, the percentile corresponding to W0 in the storm-wide probability distribution of W itself. The mapping from IWP to W is certainly not exact, and in fact it does not carry any spatial information about the geographic distribution of either variable, but it does allow the estimation of the Lebesgue integral of any function of W, in particular the storm-average air mass flux FT through the cloud top or the vertical flux Fh through any specific height level h.
To the extent that current data assimilation systems continue to use today’s control variables, assimilating FT or Fh may not solve all the problematic issues with assimilating cloud observations (the fundamentally non-Gaussian nature of the uncertainty, etc.). However, these are robust diagnostics (certainly compared with point-wise observations) that have never before been available, and which do not involve, or interfere with, local dynamics. The approach is also directly applicable to Doppler radar observations such as those expected from NASA’s future AOS mission. This talk will describe the retrieval algorithm and the extension to estimate the storm-wide mass flux, as well as the prospects of assimilating it into numerical weather prediction systems.
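The percentile-matching idea in the abstract, mapping the radiometer's storm-wide IWP distribution onto the radar's few coincident (IWP, W) pairs under an assumed monotone relation, can be sketched as follows. All numbers here are invented for illustration; the sketch only shows the mechanics of estimating a storm-wide mean of a function of W without any spatial information.

```python
import numpy as np

def stormwide_mean(g, iwp_pairs, w_pairs, iwp_storm):
    """Estimate the storm-wide mean of g(W) from a few coincident radar
    (IWP, W) pairs plus a radiometer-derived storm-wide IWP sample,
    assuming a monotone IWP -> W relation (quantile mapping)."""
    order = np.argsort(iwp_pairs)
    iwp_sorted = np.asarray(iwp_pairs, dtype=float)[order]
    w_sorted = np.asarray(w_pairs, dtype=float)[order]
    # map every storm-wide IWP value to a W value via the monotone relation
    w_storm = np.interp(iwp_storm, iwp_sorted, w_sorted)
    return g(w_storm).mean()

rng = np.random.default_rng(1)
iwp_storm = rng.lognormal(0.0, 0.8, 10000)           # toy storm-wide IWP sample
iwp_pairs = np.quantile(iwp_storm, [0.2, 0.5, 0.8, 0.95])
w_pairs = np.array([0.5, 2.0, 5.0, 9.0])             # toy radar-derived W, m/s
flux_proxy = stormwide_mean(lambda w: w, iwp_pairs, w_pairs, iwp_storm)
```

Replacing the identity function with rho(h) * w would give a flux-like integrand; the estimate inherits the caveat stated in the abstract that the IWP-to-W map is not exact.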

Friday 27 June 14:30 - 16:00 (Hall E2)

Session: B.04.04 Spaceborne data for the analysis of Natural Hazards and AI: new insights from Artificial Intelligence technologies and recent missions - PART 2

Natural hazards can be defined as “all atmospheric, hydrologic, geologic, and wildfire phenomena that, because of their location, severity, and frequency, have the potential to affect humans and their environment”. In 2023, the Emergency Events Database (EM-DAT) recorded a total of 399 disasters related to natural hazards. These events resulted in 86,473 fatalities and affected 93.1 million people. Therefore, analysing natural hazards, particularly through mapping and monitoring, is a key activity for land management and risk reduction policies, as pledged by many national governments and international institutions. Compared to traditional field-based surveys, spaceborne remote sensing technologies have proven to be of utmost importance due to their high spatial and temporal coverage and reduced costs.
The current scenario of large volumes of freely accessible data makes it possible to retrieve relevant information as well as to develop new techniques and methodologies for the investigation, characterization, monitoring, and modeling of hazards. Alongside remote sensing, Artificial Intelligence (AI) and Machine Learning (ML) now play an important role in the analysis of large EO datasets. These new approaches have widely demonstrated their suitability in many scientific fields, being characterized by high accuracy and specific advantages for different applications.
This session is intended to collect recent and promising advances in the use of AI/ML for the processing of (optical, multispectral, hyperspectral, radar and thermal) satellite data and the analysis of geohazards such as landslides, earthquakes, subsidence and volcanic eruptions, and hydrometeorological hazards such as wildfires, tsunamis, floods, storms, avalanches, etc. The outcome of the session will provide an insightful state of the art on current and future perspectives on EO capabilities for studying natural hazards and risk reduction policies.

Friday 27 June 14:30 - 16:00 (Hall E2)

Presentation: Landslide Detection in Nepal's Mount Everest Region: A Bi-Temporal Transformer Approach Using Satellite Imagery

Authors: Anna Seiche, Anna Brand
Affiliations: Remote Sensing Solutions GmbH
Nepal, characterized by its rugged Himalayan terrain, is a country of striking natural beauty but also significant geological fragility. The combination of intense monsoon rainfall, steep slopes, and frequent seismic activity makes it one of the most landslide-prone regions in the world, with an estimated average of 300 landslides occurring annually. These natural hazards not only pose significant risks to local communities but also threaten Nepal's tourism sector, particularly in the Mount Everest region, which attracts thousands of mountaineers every year. Landslides disrupt trekking routes, endanger lives, and impact the economic lifeline of remote communities reliant on tourism. In this remote environment, Earth Observation offers an effective solution for timely landslide mapping. However, there remains a research gap in detecting landslides in high mountain regions with heterogeneous terrain, where spectral signatures often resemble those of bare soil, riverbeds, or glacial formations. To address this challenge, we develop a novel landslide detection framework, designed as part of an ESA initiative to improve safety and foster sustainable tourism in Nepal's remote mountainous regions. The framework leverages Sentinel-2 satellite imagery, combined with the Copernicus DEM and advanced deep learning methodologies. Sentinel-2 provides freely accessible, high-temporal-resolution data enabling near real-time monitoring of dynamic landscapes. Our approach employs a bi-temporal segmentation model to reliably detect landslides by comparing pre- and post-event imagery. This method enhances accuracy by focusing exclusively on land cover changes, minimizing confusion with spectrally similar landscape patterns that can appear identical to landslides in mono-temporal imagery.
In response to the challenges of our application area, we are implementing an innovative transformer-based architecture, specifically designed to capture global context and long-range dependencies across both spatial and temporal dimensions. This is essential for analyzing the complex, heterogeneous landscapes characteristic of mountainous regions. In these environments, extensive changes driven by seasonal variations must be carefully distinguished from the subtle terrain alterations caused by landslides. Conventional CNN-based methods often face challenges in capturing these small differences, as their localized receptive fields may limit their ability to effectively process complex spatial and temporal dependencies. This limitation underscores the importance of evaluating the potential of transformer architectures, which excel at capturing intricate patterns and dependencies across diverse scales, offering a promising solution for addressing the unique demands of our research design. To train the bi-temporal transformer model, we curated a dataset of manually annotated Sentinel-2 scene pairs capturing known landslide events across the Himalayan region. The dataset encompasses a diverse range of landslide characteristics, including variations in size, shape, and environmental context, to ensure generalizability. For accuracy assessment, we use high-resolution SPOT reference imagery (1.5m spatial resolution) to precisely evaluate the model’s performance in detecting landslides in Nepal’s Mount Everest region. By addressing the technical hurdles posed by Nepal's dynamic mountains, we develop an innovative transformer-based solution for reliable hazard monitoring. In the future, the model will be integrated into a web-based geoportal, which delivers near real-time updates on geological hazards, trail conditions, and accommodation safety. 
Our service supports informed decision-making, enhances traveler safety, and promotes sustainable tourism in remote Himalayan regions.

Friday 27 June 14:30 - 16:00 (Hall E2)

Presentation: Confidence-Aware Deep Learning for SAR-Based Flood Mapping via Conformal Risk Control

Authors: Yu Li, Patrick Matgen, Marco Chini
Affiliations: Remote Sensing and Natural Resources Modeling, Luxembourg Institute of Science and Technology (LIST)
Synthetic Aperture Radar (SAR) is widely utilized for flood mapping due to its all-weather, day-and-night imaging capabilities. Meanwhile, deep learning has demonstrated exceptional performance in representation learning and prediction across various domains, including flood mapping. Another advantage of deep learning is its ability to perform rapid inference on new data once the model is trained, making it invaluable for flood rapid response. However, deep neural networks (DNNs) are often overconfident in their predictions and fail to quantify the associated uncertainty. Two types of uncertainty are commonly distinguished in this context: aleatoric and epistemic. Aleatoric uncertainty arises from inherent randomness, noise, or class overlap in the data, whereas epistemic uncertainty stems from the model’s lack of knowledge or limited exposure to certain data distributions. Proper quantification and disentanglement of these uncertainties are critical for improving model performance and supporting stakeholders in actionable decision-making. Bayesian Neural Networks (BNNs), such as Monte Carlo (MC) Dropout and Deep Ensembles, are widely used for uncertainty quantification. These approaches estimate aleatoric uncertainty using entropy and epistemic uncertainty through mutual information, requiring multiple forward passes, which increases computational cost. In addition, BNNs do not provide statistical guarantees. In theory, epistemic uncertainty should be high on out-of-distribution (OOD) data since the model lacks prior knowledge. However, theoretical and empirical work has shown this is not always the case for BNNs. For instance, in the context of SAR-based flood mapping, a model trained specifically on flooded bare soils should exhibit high epistemic uncertainty when presented with flooded urban or vegetated areas, as the backscattering distribution shows a large discrepancy between these two scenarios.
However, our experiments showed that the Bayesian framework fails to capture such epistemic uncertainty caused by model misspecification. To address these challenges, we propose a confidence-aware deep learning framework powered by conformal risk control for uncertainty quantification and statistically guaranteed prediction. The framework comprises two key components: 1) Density-aware neural network for uncertainty quantification. Because OOD data tends to be farther from the training data, data distance can inform epistemic uncertainty. However, data distance is not preserved in the learning process, since this information is not useful for the prediction task. We introduce a bi-Lipschitz constraint, ensuring data distance is reflected in the feature space. Epistemic uncertainty is thus estimated using feature density, while aleatoric uncertainty is quantified through softmax entropy. 2) Prediction under conformal risk control. Using the uncertainty estimates from 1), test data is either abstained from prediction or included in prediction sets with (α, δ) risk control: with probability at least 1 − δ, the prediction risk does not exceed α. The risk function, risk tolerance α, and error level δ are defined in advance by the user. We evaluated the proposed confidence-aware deep learning framework on Sentinel-1 intensity data for flood mapping in complex scenarios, such as urban environments and vegetated areas. Our results demonstrate that the framework effectively abstains from making predictions in flooded built-up areas and vegetated regions, rather than outputting incorrect predictions with high confidence. Additionally, conformal prediction sets significantly improve coverage on ambiguous flood surfaces, such as rough water surfaces caused by winds.
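The calibration step behind conformal risk control can be sketched for a binary flood map: choose the smallest inclusion threshold whose finite-sample-adjusted empirical risk on a held-out calibration set stays below the tolerance alpha. This is a generic sketch of the calibration mechanics, not the authors' framework; the false-negative-rate risk, the toy scores and the grid are assumptions for the example.

```python
import numpy as np

def calibrate_lambda(scores, labels, alpha=0.1, grid=None):
    """Conformal risk control calibration: return the smallest lambda whose
    adjusted empirical false-negative rate stays below alpha. The FNR is
    monotone non-increasing in lambda, as the method requires."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 501)
    n = labels.sum()                      # truly flooded calibration pixels
    for lam in grid:
        predicted_flood = scores >= 1.0 - lam
        fnr = ((labels == 1) & ~predicted_flood).sum() / n
        # finite-sample correction (risk on one new point bounded by B = 1)
        if (n / (n + 1)) * fnr + 1.0 / (n + 1) <= alpha:
            return lam
    return 1.0

rng = np.random.default_rng(2)
labels = (rng.random(5000) < 0.3).astype(int)                   # toy truth
scores = np.clip(labels * 0.7 + rng.random(5000) * 0.5, 0, 1)   # toy scores
lam = calibrate_lambda(scores, labels, alpha=0.1)
flood_set = scores >= 1.0 - lam    # risk-controlled flood prediction set
```

Pixels falling below the calibrated threshold can then be abstained from, rather than assigned a confident but possibly wrong label.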

Friday 27 June 14:30 - 16:00 (Hall E2)

Presentation: Urban Flash Flood Prediction Through High Resolution Deep Learning Approach

Authors: Pauline Delporte, Gwendoline Stéphan, Yasmine Boulfani
Affiliations: CS Group
Climate change is increasing urban flash floods, with extreme weather turning temperate climates into cycles of intense rainfall and droughts. Urbanization worsens the problem by reducing soil absorption and straining drainage systems, concentrating populations and economic activities, and amplifying flood damages. Rapidly growing cities often lack adequate infrastructure, highlighting the need for better planning and more resilient structures. CS GROUP addresses this challenge with a use case in the ExtremeXP EU Horizon-funded project, using artificial intelligence (AI) to improve flash flood simulations and nowcasting in Nîmes, a city prone to severe flash floods. Since 1988, Nîmes has worked to limit the consequences of such events by collecting and providing in-situ data (flood events in 2002, 2005 and 2014). The ExtremeXP project aims to simulate these historical floods to understand the impact of small-scale structures on flooding. However, flash flood phenomena are extremely complex to forecast using physical models, since they rely on many parameters to be measured or estimated: soil topography, surface flows, soil moisture and absorption capacity, cavities, and underground networks. Subsoil fluid dynamics modeling is indeed very complex due to its non-observability and high heterogeneity. Operationally, these parameters need to be monitored by sensors, which are costly, sometimes difficult to install, and must be maintained over historically relevant time spans to be useful. Few countries and organisations can afford such an effort for all areas concerned by flash floods. The rapid evolution of these events (a few hours) requires high computing resources to predict their course during the crisis, and the highly variable hydrological response requires frequent re-calculation to cope with changing situations.
In this domain, AI methods can bring significant improvement through cross-analyses of several data sources, including extreme remote observation data coupled and fused with ground information. In our case, AI models could offer the advantage of drastically reducing the amount of data to be acquired as input, which is a challenge for physical hydrological models. Therefore, flash flood prediction would become much more affordable and could be extended to many new areas, especially where means and data are missing. Our work highlights how AI can help accurately anticipate extreme phenomena by dealing with multi-source data in a shorter time than physical methods. While physical models take several hours to generate flood rasters, sometimes exceeding the duration of the flood event itself, AI inference time is only a few minutes. The developed model generates spatio-temporal flood rasters. One of the main challenges of predicting flash flood events is to consider both temporal and spatial aspects, each of which plays a crucial role in the occurrence of inundations. On one hand, spatial information is handled with convolutional layers. On the other hand, temporal data are handled with attention layers. This mechanism has often been used for language modeling and machine translation; it allows the model to keep past states of a sequence in memory regardless of their distance in the input. Whereas language translation models take a sequence of words, in our use case we consider the meteorological information. Models with attention modules outperform the simplest segmentation model, with a better characterization of the phenomena. Our primary contribution is the development of an accurate and efficient flash flood prediction model tailored for urban areas. Additionally, significant effort was dedicated to data processing, as the model's accuracy is heavily influenced by the quality of its inputs.
Adding several data types may help but would slow down the model, so the right balance between data quantity and inference time must be found. Data pre-processing is a large part of our work to generate a light set of precise data. The main challenges were to geographically relate the input data and to generalize the model. Flood phenomena are not independent from one tile to another; in other words, a flood in one area can have repercussions just a few kilometers away. We explored strategies to enable our model to account for the influence of surrounding tiles. Physical models are currently used to anticipate flash floods, but we expect that in the future deep learning models like ours will speed up inference and therefore flood prevention. In addition, our goal is to ease the use of such tools, so that non-scientific operators can use them for safety operations and regional development at the lowest possible cost.
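The temporal attention mechanism the abstract relies on can be sketched as single-head scaled dot-product self-attention over a sequence of meteorological feature vectors. This is a generic illustration, not the project's model: the sequence length, feature dimension and random projection matrices are invented for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(seq, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a (T, d) time sequence.
    Each output step is a weighted mix of all past and future steps,
    regardless of their distance in the input."""
    Q, K, V = seq @ Wq, seq @ Wk, seq @ Wv
    weights = softmax(Q @ K.T / np.sqrt(Wq.shape[1]), axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(3)
T, d, d_k = 24, 8, 16            # e.g. 24 hourly rainfall feature vectors
seq = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
out, weights = temporal_attention(seq, Wq, Wk, Wv)
```

Each row of `weights` sums to one, so the mechanism is a learned, content-dependent average over the time axis, which is what lets distant rainfall states influence the current flood prediction.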

Friday 27 June 14:30 - 16:00 (Hall E2)

Presentation: Developing Machine Learning tools for the automatic interpretation of InSAR data

Authors: PhD Luke Bateson, Miss Holly Hourston, Dr Itahisa Gonzalez Alvarez, PhD Alessandro Novellino, Dr Ekbal Hussain, Dr Kay Smith, Mrs Claire Fleming
Affiliations: British Geological Survey
Large cities worldwide are sinking; the problem will increase as by 2050 almost 70% of the world’s population is set to live in megalopolises, the majority of which are in low-lying coastal areas. At the same time, sea levels are rising. According to the World Economic Forum, several of the globe’s cities, including New York, Dhaka and London, could be partially or totally submerged by 2050-2100. A growing city adds pressure to its environment, which can lead to subsidence: land less suitable for building upon is developed, and in low-lying coastal regions these areas are often poorly consolidated recent superficial deposits. Loading of such deposits causes consolidation, which adds to subsidence resulting from the increased groundwater abstraction required to support a growing population. To mitigate the effects of subsidence it is imperative to understand the location, magnitude, timing and, crucially, the underlying cause of the subsidence. InSAR offers such ability, and when integrated with in-situ data the cause can be determined. Until now, interpretation has been a time-consuming manual process, meaning results are less timely and cover relatively small areas. With the advent of continental-scale InSAR data, such as the European Ground Motion Service, and automated processing facilities such as the ESA TEP and COMET's LiCSBAS systems, InSAR data is becoming far easier to access. Huge volumes of data are generated; we now need automated methods to extract actionable information on ground motions. To develop such tools, it is necessary to create libraries representative of characteristic motion patterns for common subsidence mechanisms (ground loading, groundwater extraction). It is also necessary to use novel machine learning routines to extract significant spatial and temporal patterns from thousands or millions of InSAR time series and relate these to the derived libraries.
To this end, we have used integrated time series of optical and InSAR data for areas of rapid urban growth to understand subsidence. Combining interpretations with expected patterns of subsidence derived from models of groundwater abstraction and ground loading allows us to separate the two signals and their causal factors. In turn, this enables the generation of the characteristic subsidence time series that we would expect to see as a result of each process. Such characteristic time series form libraries that will be the basis for machine learning to automatically interpret the InSAR data. We also present a fully automated machine learning approach which uses seasonal and trend decomposition to extract trends (such as non-linear and seasonal behaviours) from the InSAR time series. We then integrate the findings with the derived libraries to automatically interpret the InSAR data.
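The seasonal and trend decomposition step described above can be illustrated in plain Python. The sketch below is a classical additive moving-average decomposition (the idea underlying STL-style methods) applied to a synthetic monthly displacement series; it is not the authors' actual pipeline, and the series, 12-month period and function name are assumptions for illustration.

```python
# Classical additive seasonal-trend decomposition of a monthly
# displacement time series (illustrative stand-in for STL).
import math

def decompose(series, period=12):
    """Split a series into trend, seasonal and residual components."""
    n = len(series)
    half = period // 2
    # Trend: centred moving average over one full period
    # (edge points weighted 0.5 because the period is even).
    trend = [None] * n
    for i in range(half, n - half):
        w = series[i - half:i + half + 1]
        trend[i] = (0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1]) / period
    # Seasonal: mean detrended value at each position in the cycle.
    buckets = [[] for _ in range(period)]
    for i in range(n):
        if trend[i] is not None:
            buckets[i % period].append(series[i] - trend[i])
    cycle = [sum(b) / len(b) if b else 0.0 for b in buckets]
    mean_s = sum(cycle) / period
    cycle = [s - mean_s for s in cycle]  # centre the seasonal on zero
    seasonal = [cycle[i % period] for i in range(n)]
    # Residual: whatever the trend and seasonal terms do not explain.
    residual = [series[i] - trend[i] - seasonal[i]
                if trend[i] is not None else None for i in range(n)]
    return trend, seasonal, residual

# Synthetic example: linear subsidence (-2 mm/yr) plus an annual cycle.
ts = [-2.0 * t / 12 + 3.0 * math.sin(2 * math.pi * t / 12) for t in range(48)]
trend, seasonal, residual = decompose(ts)
```

On this synthetic input the recovered trend is the linear subsidence and the seasonal component is the annual cycle, which is exactly the separation of non-linear/seasonal behaviours that the abstract describes extracting from real InSAR time series.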
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall E2)

Presentation: Forging Hephaestus - Towards Foundation Models for Interferometric SAR Data

Authors: Nikolas Papadopoulos, Nikolaos Ioannis Bountos, Andreas Karavias, Professor Ioannis Papoutsis
Affiliations: Orion Lab, National Technical University of Athens & National Observatory of Athens, Department of Informatics and Telematics, Harokopio University of Athens
Synthetic Aperture Radar (SAR) data and its Interferometric SAR (InSAR) products are among the most abundant sources of Earth Observation (EO) data, offering invaluable insights into geophysical processes, geological activity, and the stability of man-made structures. Current state-of-the-art methods for these critical tasks often rely on data pipelines that involve heavy processing, requiring expert intervention and the incorporation of inductive biases. Deep learning methods have shown great promise in related tasks, both in computer vision and remote sensing applications. Their success, however, often depends on the existence of large, curated datasets, which are not commonly found in the InSAR domain, mainly due to the expensive annotation process. The recent release of Hephaestus [1], a manually annotated dataset comprising 19,919 Sentinel-1 interferograms from 44 volcanoes worldwide, along with a large unlabeled corpus of 110,573 InSAR products, addresses this gap for the crucial task of volcanic unrest detection. Understanding the importance of spatiotemporal information in Earth observation tasks, we enhance and reformulate Hephaestus, providing additional information in both the temporal and spatial dimensions. Furthermore, recognizing the potential in both the labeled and unlabeled components of Hephaestus, as well as the significant advancements in foundation models in computer vision and remote sensing [2-4], applicable to a wide range of tasks, we investigate their ability to tackle essential InSAR applications. To accomplish this, we develop the first comprehensive benchmark for InSAR applications, integrating state-of-the-art methodologies from both Convolutional Neural Network and Transformer-based architectures. This benchmark will serve as a challenging testbed for future InSAR foundation models.
Our key contributions are as follows:
a) Enhancement of the Hephaestus dataset, resulting in Hephaestus++: we enhance the Hephaestus data with georeferenced data of high spatial resolution (100x100 m per pixel), in contrast to the low-precision unsigned integer data of the original Hephaestus.
b) Reformulation of volcanic unrest detection as a multi-temporal InSAR task: we redefine the problem of volcanic unrest detection, introducing an additional temporal dimension. This transformation captures temporal coherence, allowing improved detection and characterization of deformation patterns while mitigating noise from atmospheric and orbital errors and enhancing the detection of subtle geophysical signals. We define the time series by grouping interferograms with the same primary acquisition date and different secondary dates, thus creating a set of time series of varying lengths. These time series are then standardized to a selected length, either through random replication or subsampling, which introduces variability to enrich the dataset.
c) Creation of the first rigorous InSAR benchmark: we present the first comprehensive benchmark of state-of-the-art methods on the Hephaestus++ dataset. This benchmark assesses their performance on both single-frame interferograms and time series of varying lengths, creating strong baselines on a crucial early warning application for both classification and semantic segmentation tasks, and providing a reference point for comparison with the developed foundation model.
d) Development and exploration of the potential of foundation models for InSAR time-series analysis: we exploit the large unlabeled component of Hephaestus++ to explore large-scale self-supervised learning approaches for the creation of InSAR foundation models capable of generalizing to critical downstream tasks in the field of interferometry.
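The length-standardization scheme in contribution (b) can be sketched in plain Python: interferograms sharing a primary acquisition date form one variable-length group, which is randomly subsampled down or randomly replicated up to a fixed length. This is an illustrative interpretation of the description above, not the released code; the dictionary fields and function name are assumptions.

```python
import random

def standardize_length(interferograms, target_len, seed=None):
    """Standardize a group of interferograms (all sharing one primary
    acquisition date) to a fixed time-series length, either by random
    subsampling or by random replication, preserving temporal order."""
    rng = random.Random(seed)
    items = sorted(interferograms, key=lambda ifg: ifg["secondary_date"])
    if len(items) >= target_len:
        # Subsample: keep a random subset, kept in temporal order.
        keep = sorted(rng.sample(range(len(items)), target_len))
        return [items[i] for i in keep]
    # Replicate: pad by duplicating randomly chosen members.
    padded = items + [rng.choice(items) for _ in range(target_len - len(items))]
    return sorted(padded, key=lambda ifg: ifg["secondary_date"])

# Toy group of Sentinel-1-style interferograms with a common primary date.
group = [{"primary_date": "2021-01-01", "secondary_date": d}
         for d in ["2021-01-13", "2021-01-25", "2021-02-06"]]
short = standardize_length(group, 2, seed=0)   # subsampled to length 2
long = standardize_length(group, 5, seed=0)    # replicated to length 5
```

Because the subset (or the duplicated members) is drawn at random, repeated draws from the same group yield different fixed-length series, which is the variability the abstract says enriches the dataset.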
In our work, we provide the first InSAR time-series foundation model baseline, creating a complete testbed for future methods. In summary, we address critical challenges in the InSAR domain by addressing the weak points of the original Hephaestus dataset. We further reformulate volcanic unrest detection as a multi-temporal task, and establish a rigorous benchmark for state-of-the-art methods. By leveraging both labeled and unlabeled components of Hephaestus++, we explore the potential of foundation models for InSAR time-series analysis, creating a robust framework for future research and applications.
[1] Bountos, Nikolaos Ioannis, et al. "Hephaestus: A large scale multitask dataset towards InSAR understanding." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[2] Kirillov, Alexander, et al. "Segment Anything." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[3] Bountos, Nikolaos Ioannis, Arthur Ouaknine, and David Rolnick. "FoMo-Bench: a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models." arXiv preprint arXiv:2312.10114 (2023).
[4] Xiong, Zhitong, et al. "Neural plasticity-inspired foundation model for observing the Earth crossing modalities." arXiv e-prints (2024): arXiv-2403.
Add to Google Calendar

Friday 27 June 14:30 - 16:00 (Hall E2)

Presentation: Integrated Study of Seasonal Surface Displacements at the Hatfield Moors Gas Storage site in a Peat Bog Environment

Authors: Gabriele Fibbi, PhD Alessandro Novellino, PhD Luke Bateson, Prof. Riccardo Fanti, PhD Matteo Del Soldato
Affiliations: Earth Sciences Department, University of Firenze, Via La Pira, 4 50121 Firenze - Italy, British Geological Survey, Keyworth, Nottingham NG12 5GG - UK
Underground Storage Systems (USS) are emerging as key applications in the energy transition to address climate change and global efforts to achieve net-zero carbon dioxide (CO₂) emissions, as highlighted by the Intergovernmental Panel on Climate Change (IPCC). These technologies serve two purposes: (i) Underground Gas Storage (UGS), to facilitate the reliable use of natural gas as a transitional energy source, and (ii) Carbon Capture and Storage (CCS), to provide long-term solutions for carbon sequestration. This study explores the application of advanced spaceborne monitoring techniques to assess ground displacements associated with gas storage operations at the Hatfield Moors UGS site in the United Kingdom (UK). Hatfield Moors is a depleted natural gas reservoir converted for UGS in 2000 and is located beneath a peat bog landscape. This site offers an opportunity to understand how such facilities can serve as prototypes for potential CCS applications. The importance of developing methodologies to ensure the environmental integrity of CCS operations is highlighted by the SENSE (Assuring integrity of CO₂ storage sites through ground surface monitoring) project. The converted reservoir has supported a working gas volume of 70 million cubic meters, with operations involving cyclical gas injection and withdrawal to meet seasonal energy demands. These activities induce surface deformation through poroelastic effects, with uplift observed during gas injection and subsidence during withdrawal. Monitoring ground deformation is essential for assessing the mechanical response of the reservoir and the surrounding strata, particularly for sites moving towards potential CCS applications. A well-known approach for measuring millimetre-scale deformations is Interferometric Synthetic Aperture Radar (InSAR), which can be applied to detect surface movements at the Hatfield Moors UGS site.
This technique provides detailed measurements of ground displacement in both vertical and horizontal (east-west) directions, providing insight into the dynamics of the UGS operations. Vertical displacement patterns show localised uplift above injection points, while horizontal displacement extends further from the reservoir. These observations are consistent with theoretical poroelastic models, confirming that pressure changes within the reservoir propagate to the surface. On the other hand, the interpretation of InSAR time series requires careful consideration of other contributing factors. In peatland environments, seasonal variations in water content have a significant influence on ground movement. The Hatfield Moors peatlands are subject to seasonal deflation and swelling, driven by precipitation and changes in groundwater levels. Cross-correlation analyses of InSAR data with UGS operational records and environmental datasets, including rainfall and piezometric measurements, reveal complex interactions between anthropogenic and natural processes. Time-series analysis of InSAR data cross-correlated with UGS load/discharge curves reveals temporal patterns of surface deformation, allowing the relationship between these datasets to be quantified. Two metrics are derived: (i) the maximum normalised correlation value, indicating the degree of alignment between the datasets, and (ii) the optimal lag, identifying temporal shifts. At Hatfield Moors, a lag of 21 months suggests an out-of-phase relationship, challenging the direct attribution of seasonal displacements solely to UGS activities. A strong correlation was instead identified between vertical displacement and seasonal groundwater fluctuations. For example, the extreme subsidence observed in summer 2018 coincides with a drought-induced decrease in peat water content.
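The two metrics above, the maximum normalised correlation and the optimal lag, can be sketched in plain Python. This is an illustrative implementation run on synthetic monthly series (displacement trailing groundwater level by three months), not the study's actual data or code; the function name and series are assumptions.

```python
import math

def norm_xcorr(x, y, max_lag):
    """Return (maximum normalised correlation, optimal lag in samples)
    between two equal-length series. A positive lag means x trails y."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        sa = math.sqrt(sum((ai - ma) ** 2 for ai in a))
        sb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
        return cov / (sa * sb) if sa and sb else 0.0
    best_r, best_lag = -2.0, 0
    for lag in range(-max_lag, max_lag + 1):
        # Shift one series against the other and correlate the overlap.
        if lag >= 0:
            r = corr(x[lag:], y[:len(y) - lag])
        else:
            r = corr(x[:len(x) + lag], y[-lag:])
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag

# Synthetic monthly series: displacement lags groundwater by 3 months.
gw = [math.sin(2 * math.pi * t / 12) for t in range(60)]
disp = [math.sin(2 * math.pi * (t - 3) / 12) for t in range(60)]
r, lag = norm_xcorr(disp, gw, max_lag=6)  # expect lag = 3, r near 1
```

Applied to real records, the same pair of numbers is what distinguishes an in-phase driver (small lag, high correlation) from the out-of-phase relationship reported between displacement and the UGS load/discharge cycle.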
These findings support the hypothesis that seasonal deformation in the peatlands is primarily driven by hydrological changes, with UGS operations contributing only secondarily. Correct interpretation of the sources of deformation, distinguishing deformation caused by gas storage operations from natural peat dynamics, is relevant to ensuring environmental safety and regulatory compliance. For sites transitioning from UGS to CCS in distinctive environmental settings such as Hatfield Moors, this analysis is essential. The methodologies developed in this study provide a robust framework for preliminary site assessments to identify potential environmental impacts prior to CCS implementation. By integrating InSAR data with ancillary datasets and using cross-correlation techniques, the feasibility of monitoring surface deformation with high spatial and temporal precision for CCS activities is also demonstrated. The insights gained at Hatfield Moors contribute to a broader understanding of ground deformation processes, supporting the safe and sustainable development of UGS technologies. The importance of this study also stems from the peatland environment in which the Hatfield Moors UGS activity is located. Peatlands are increasingly recognised as an important natural solution to climate change challenges. As long-term carbon reservoirs, peatlands play a key role in mitigating climate change by sequestering CO₂ from the atmosphere and storing it in waterlogged peat. Conversely, drained peatlands are a significant source of carbon emissions. Rewetting drained peatlands has been shown to reduce greenhouse gas emissions, aligning with international climate change agreements. In this regard, maintaining the ecological integrity of peatlands is essential, as lowering the water table accelerates the oxidation and decomposition of peat, a major cause of subsidence. Once peat is decomposed, it is extremely difficult to restore.
Stable groundwater levels are therefore essential to maintain the water retention capacity and ecosystem services of peatlands. InSAR monitoring is becoming a powerful tool for identifying, characterising (from a displacement point of view) and managing peatlands. Changes related to hydrological variations and peatland health can be detected with InSAR, supporting decisions on more effective conservation approaches and targeted restoration efforts. As a result, InSAR makes a significant contribution to the wider strategy of climate change mitigation and environmental conservation, ensuring that peatlands continue to provide their essential ecosystem services even when affected by human activities. This study highlights the importance of spaceborne data in the management of geohazards associated with UGS and CCS operations. As global energy systems transition towards sustainability, the methods developed make a valuable contribution to reducing risk and protecting the environment when managing underground resources.
Add to Google Calendar